Dataset columns:
entry_id: string (length 33)
published: string (length 14)
title: string (length 15-199)
authors: list
primary_category: string (length 5-18)
categories: list
text: string (length 1-461k)
http://arxiv.org/abs/2307.01616v1
20230704100825
SageFormer: Series-Aware Graph-Enhanced Transformers for Multivariate Time Series Forecasting
[ "Zhenwei Zhang", "Xin Wang", "Yuantao Gu" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Multivariate time series forecasting plays a critical role in diverse domains. While recent advancements in deep learning methods, especially Transformers, have shown promise, there remains a gap in addressing the significance of inter-series dependencies. This paper introduces SageFormer, a Series-aware Graph-enhanced Transformer model designed to effectively capture and model dependencies between series using graph structures. SageFormer tackles two key challenges: effectively representing diverse temporal patterns across series and mitigating redundant information among series. Importantly, the proposed series-aware framework seamlessly integrates with existing Transformer-based models, augmenting their ability to model inter-series dependencies. Through extensive experiments on real-world and synthetic datasets, we showcase the superior performance of SageFormer compared to previous state-of-the-art approaches. § INTRODUCTION Multivariate Time Series (MTS) data, composed of multiple univariate series, serves as the foundation for MTS forecasting tasks that aim to predict future trends based on a fixed window of historical sequences. The value of MTS forecasting extends beyond academic interest, with essential applications in everyday life and numerous industries. It facilitates resource management and strategic planning across diverse domains such as energy <cit.>, finance <cit.>, and weather <cit.>. In recent years, deep learning methods <cit.>, particularly Transformer structures <cit.>, have achieved remarkable breakthroughs in MTS forecasting tasks compared to traditional methods (e.g., ARIMA, SSM <cit.>). In many Transformer-based studies, the dependencies between different series are often overlooked. These models typically amalgamate different series into hidden temporal embeddings through linear transformations (series-mixing framework, Figure <ref>), concentrating primarily on temporal dependencies while largely overlooking the relationships between series <cit.>. Recently, some studies <cit.> discovered that intentionally neglecting inter-series dependency modeling (series-independent framework, Figure <ref>) could improve prediction results, owing to its robustness towards distribution drifts <cit.>. However, the series-independent framework entirely disregards the dependencies between series, resulting in suboptimal results on specific datasets (see section <ref>). These findings highlight the challenges of modeling series dependencies and make their accurate capture in MTS forecasting tasks a promising research direction. To address this research gap, we propose SageFormer in this paper. Present work. In this paper, we examine inter-series dependencies in long-term MTS forecasting problems. To accurately model these dependencies, we propose the Series-Aware Graph-Enhanced Transformer (SageFormer, Figure <ref>), a series-aware Transformer model enhanced with graph neural networks (GNN).
By learning relationships between time series using graph structures, we aim to distinguish series using global tokens and improve the modeling ability for diverse temporal patterns across various series through graph aggregation. SageFormer can function as a universal extension for Transformer-based structures, better utilizing the dependencies between series and achieving superior performance without greatly affecting model complexity. We contend that the proposed SageFormer addresses two challenges in inter-series dependencies modeling: * How can diverse temporal patterns among series be effectively represented? We introduce a series-aware approach that extends the existing series-independent framework by incorporating several global tokens before input tokens. These tokens capture global information for each variable through self-attention and facilitate series interaction via graph aggregation. The addition of global tokens enables SageFormer to learn not only individual series' temporal patterns but also focus on dependencies between series, thereby enhancing diversity and overcoming series-independent limitations (see section <ref>). * How can the impact of redundant information across series be avoided? We propose using sparsely connected graph structures to reduce the impact of redundant information in unrelated series. In MTS forecasting, not all information is useful due to redundancy in time and series dimensions <cit.>. To evaluate model effectiveness with sparse data, we designed Low-rank datasets with varying series numbers (see Section <ref>). Our model's performance remains stable as series dimensions increase, utilizing low-rank properties effectively. In contrast, the series-mixing method suffers from prediction deterioration as series dimensions grow. Our contributions are threefold: * We introduce a novel series-aware framework that serves as a universal extension for Transformer-based models. It effectively utilizes graph structures to exploit the dependencies between series without noticeably increasing the model's complexity. * We propose SageFormer, a series-aware Transformer model for long-term MTS forecasting. By integrating GNN, SageFormer efficiently captures inter-series dependencies, transcending the limitations of existing Transformer-based models in modeling these dependencies. * Experimental results demonstrate that our model attains state-of-the-art performance on both real-world and synthetic datasets. § RELATED WORKS Multivariate Time Series Forecasting. MTS forecasting models can generally be categorized into statistical and deep models. Many forecasting methods begin with traditional tools such as the Vector Autoregressive model and the Vector Autoregressive Moving Average <cit.>. These typical statistical MTS forecasting models assume linear dependencies between series and values. With the advancement of deep learning, various deep models have emerged and often demonstrate superior performance compared to their statistical counterparts. Temporal Convolutional Networks <cit.> and DeepAR <cit.> consider MTS data as sequences of vectors, employing CNNs and RNNs to capture temporal dependencies. Transformers for MTS Forecasting. Recently, Transformer models with self-attention mechanisms have excelled in various fields <cit.>. Numerous studies aim to enhance Transformers for MTS forecasting by addressing their quadratic complexity. 
Notable approaches include Informer <cit.>, introducing ProbSparse self-attention and distilling techniques; Autoformer <cit.>, incorporating decomposition and auto-correlation concepts; FEDformer <cit.>, employing a Fourier-enhanced structure; and Pyraformer <cit.>, implementing pyramidal attention modules. PatchTST <cit.> divides each series into patches and uses a series-independent Transformer to model temporal patterns. While these models primarily focus on reducing temporal dependencies modeling complexity, they often overlook crucial inter-series dependencies. Inter-series dependencies for MTS Forecasting. Numerous methods have been proposed to explicitly enhance inter-series dependencies in MTS forecasting. LSTnet <cit.> employs CNN for inter-series dependencies and RNN for temporal dependencies. GNN-based models <cit.>, such as MTGNN <cit.>, utilize temporal and graph convolution layers to address both dependencies. STformer <cit.> flattens multivariate time series into a 1D sequence for Transformer input, while Crossformer <cit.> employs dimension-segment-wise embedding and a two-stage attention layer for efficient temporal and inter-series dependencies capture respectively. Most CNN and GNN-based models struggle to capture long-term temporal dependencies. STformer <cit.> and Crossformer <cit.> extend 1-D attention to 2-D, but they fail to reveal the relationships between series explicitly. Unlike the methods mentioned above, our proposed SageFormer serves as a general framework that can be applied to various Transformer-based models, utilizing graph structure learning to enhance their ability to capture inter-series dependencies. § METHODOLOGY §.§ Problem Definition In this paper, we concentrate on long-term MTS forecasting tasks. Let 𝐱_t∈ℝ^C denote the value of C series at time step t. Given a historical MTS instance 𝐗_t=[𝐱_t, 𝐱_t+1, ⋯, 𝐱_t+L-1] ∈ℝ^C× L with length L, the objective is to predict the next T steps of MTS values 𝐘_t=[𝐱_t+L, ⋯, 𝐱_t+L+T-1] ∈ℝ^C× T. The aim is to learn a mapping f(·): 𝐗_t →𝐘_t using the proposed model (we omit the subscript t when it does not cause ambiguity). We employ graphs to represent inter-series dependencies in MTS and briefly overview relevant graph-related concepts. From a graph perspective, different series in MTS are considered nodes, and relationships among series are described using the graph adjacency matrix. Formally, the MTS data can be viewed as a signal set 𝒢={𝒱, 𝐗_t, 𝐀}. The node set 𝒱 contains C series of MTS data and 𝐀∈ℝ^C× C is a weighted adjacency matrix. The entry a_ij indicates the dependencies between series i and j. If they are not dependent, a_ij equals zero. The main symbols used in the paper and their meanings are detailed in Table <ref> in the Appendix <ref>. §.§ Overview SageFormer is designed to augment the capability of Transformer-based models in addressing inter-series dependencies. The overall architecture adheres to a Transformer encoder pipeline, conforming to the series-aware framework. The decoder portion of the Transformer is omitted and replaced with a linear decoder head (FlattenHead in Algorithm <ref>), proving to be more efficient <cit.>. SageFormer's encoding workflow, summarized as Algorithm <ref>, encompasses three key components: (1) series-aware global tokens, (2) graph structure learning, and (3) iterative message passing. 
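To make the workflow concrete, the following PyTorch-style sketch strings the three components together in the order listed above. It is a minimal illustration rather than the authors' implementation: the module and argument names (SageFormerSketch, n_global, the single-hop mixing inside the loop, the default dimensions) are assumptions made for readability, and the learned adjacency is simply passed in as a tensor.

```python
import torch
import torch.nn as nn

class SageFormerSketch(nn.Module):
    """Schematic encoder: prepend global tokens per series, then alternate
    graph aggregation (across series) with temporal encoding (within series),
    and finish with a linear flatten head."""

    def __init__(self, patch_len, n_patches, d_model=64, n_global=1,
                 n_layers=2, horizon=96):
        super().__init__()
        self.n_global = n_global
        self.embed = nn.Linear(patch_len, d_model)                          # E: P -> D
        self.global_tokens = nn.Parameter(torch.randn(n_global, d_model))   # g_1 .. g_M
        self.pos = nn.Parameter(torch.randn(n_patches + n_global, d_model))
        self.mix = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(n_layers)])
        self.temporal = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
            for _ in range(n_layers)])
        self.head = nn.Linear((n_patches + n_global) * d_model, horizon)    # FlattenHead

    def forward(self, x_patches, adj):
        # x_patches: (batch, C, N, P) patched series; adj: (C, C) learned sparse adjacency
        b, C, N, _ = x_patches.shape
        tok = self.embed(x_patches)                                         # (b, C, N, D)
        g = self.global_tokens.expand(b, C, self.n_global, -1)
        tok = torch.cat([g, tok], dim=2) + self.pos                         # (b, C, M+N, D)
        for mix, enc in zip(self.mix, self.temporal):
            glob = tok[:, :, :self.n_global]                                # global tokens only
            glob = mix(torch.einsum("uc,bcmd->bumd", adj, glob))            # single-hop aggregation across series
            tok = torch.cat([glob, tok[:, :, self.n_global:]], dim=2)
            d = tok.size(-1)
            tok = enc(tok.reshape(b * C, -1, d)).reshape(b, C, -1, d)       # per-series self-attention
        return self.head(tok.reshape(b, C, -1))                             # (b, C, horizon)
```

A sparse adjacency such as the one produced by the graph-learning module described below would be supplied as adj; the single-hop mixing here is a placeholder for the multi-hop aggregation detailed in the following subsections.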
§.§ Series-aware Global Tokens Drawing inspiration from the application of the class token in natural language models <cit.> and Vision Transformer <cit.>, we prepend learnable tokens for each series to encapsulate their corresponding global information. In Section <ref>, we employ these global tokens, rather than all tokens, to capture inter-series dependencies, thereby enhancing the series awareness of each sub-series. Following PatchTST <cit.>, the input MTS 𝐗∈ℝ^C× L is reshaped into 𝒳_p={𝐗_1, ⋯, 𝐗_C}∈ℝ^C× N× P, where P is the subsequence patch length, C is the number of time series, and N=⌊(L-P)/S⌋+2 denotes the number of patches, S indicates the non-overlapping length between adjacent patches. 𝐗_c={𝐱_c^1, ⋯,𝐱_c^N}∈ℝ^N× P represents the patched sequence for series c. A consistent latent vector of size D is maintained across Transformer encoding blocks (TEB), with a trainable linear projection (𝐄∈ℝ^P× D) mapping 𝒳_p to the D-dimensional space (Equation <ref>). M learnable embeddings (global tokens) 𝐠_i∈ℝ^D are added before the patched sequences, representing each series' global information after self-attention, resulting in an effective input sequence length of M+N. The prepended global tokens are designed to facilitate interaction across series. Positional information is enhanced via 1D position embeddings 𝐄_pos. The final embedding of 𝐗 is 𝒳^(0)∈ℝ^C× (N+M)× D, where 𝒳^(0)_c::=[𝐠_1;⋯; 𝐠_M;𝐱^1_c𝐄;⋯;𝐱^N_c𝐄]+𝐄_pos. §.§ Graph Structure Learning The adjacency matrix is learned end-to-end, capturing implicit relationships across series without requiring prior knowledge. In MTS forecasting tasks, we postulate that inter-series dependencies are unidirectional (e.g., power load affects oil temperature, but not vice versa), resulting in a directed relationship represented by the learned graph. Specifically, the entire graph structure learning module can be described by the following equations: 𝐌_1 = act_1(𝐄Θ_1); 𝐌_2 = act_1(𝐄Θ_2) 𝐀' =Relu(𝐌_1𝐌_2^T-𝐌_2𝐌_1^T) Node embeddings of series are learned through randomly initialized 𝐄∈ℝ^N× C. Subsequently, 𝐄 is transformed into 𝐌∈ℝ^N× C using Θ∈ℝ^C× C, with the nonlinear act_1 (Equation <ref>). Following the approach of MTGNN <cit.>, Equation <ref> is employed to learn unidirectional dependencies. For each node, its top k nearest nodes are selected as neighbors, setting the weights of non-connected nodes to zero, yielding the final sparse adjacency matrix 𝐀∈ℝ^C× C. §.§ Iterative Message Passing The embedding tokens (outlined in Section <ref>) are processed by SageFormer encoder layers, where temporal encoding and graph aggregation are conducted iteratively (Figure <ref>). This approach aims to disseminate the global information gathered during the GNN phase among all tokens within each series. As a result, the model captures inter-series dependencies through iterative message passing. Graph Aggregation The graph aggregation phase aims to fuse each series' information with its neighbors' information, thereby enhancing each series with related patterns. For each series in the l-th layer, we take the first M embeddings as the global tokens of layer l: 𝐆_i^(l)←𝒳^(l)_:i:∈ℝ^C× D, i≤ M. The global tokens of layer l are gathered from all series and passed into the GNN for graph aggregation. For simplicity, we employ the same model as <cit.> for graph aggregation: 𝐆_i = ∑_d=0^D 𝐀̃^d 𝐆_i 𝐖_d, i≤ M Equation <ref> represents multi-hop information fusion on the graph, where D denotes the depth of graph aggregation and 𝐀̃ is the graph Laplacian matrix. 
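A compact sketch of the two graph components just described might look as follows. The choice of tanh for act_1, the row-normalization standing in for the Laplacian-style matrix Ã, and the module names are assumptions; the rest follows the equations above (antisymmetric score ReLU(M_1 M_2^T - M_2 M_1^T), top-k sparsification, and the multi-hop fusion over powers of Ã with per-hop weights W_d).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphLearner(nn.Module):
    """Learn a sparse directed adjacency: A = top-k( ReLU(M1 M2^T - M2 M1^T) )."""

    def __init__(self, n_series, emb_dim=16, k=16):
        super().__init__()
        self.k = min(k, n_series)
        self.emb = nn.Parameter(torch.randn(n_series, emb_dim))       # node embeddings E
        self.theta1 = nn.Linear(emb_dim, emb_dim, bias=False)         # Theta_1
        self.theta2 = nn.Linear(emb_dim, emb_dim, bias=False)         # Theta_2

    def forward(self):
        m1 = torch.tanh(self.theta1(self.emb))                        # act_1 assumed to be tanh
        m2 = torch.tanh(self.theta2(self.emb))
        score = F.relu(m1 @ m2.T - m2 @ m1.T)                         # antisymmetric -> directed graph
        mask = torch.zeros_like(score)
        mask.scatter_(-1, score.topk(self.k, dim=-1).indices, 1.0)    # keep the k nearest neighbors per node
        return score * mask

class GraphAggregation(nn.Module):
    """Multi-hop fusion of global tokens: G <- sum_{d=0..D} A~^d G W_d."""

    def __init__(self, d_model, depth=3):
        super().__init__()
        self.depth = depth
        self.weights = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(depth + 1)])

    def forward(self, g, adj):
        # g: (batch, C, M, D) global tokens gathered from all series; adj: (C, C)
        a = adj + torch.eye(adj.size(0), device=adj.device)            # add self-loops
        a = a / a.sum(dim=-1, keepdim=True).clamp(min=1e-6)            # row-normalized propagation matrix
        out, h = self.weights[0](g), g
        for d in range(1, self.depth + 1):
            h = torch.einsum("uc,bcmd->bumd", a, h)                    # one more hop on the graph
            out = out + self.weights[d](h)
        return out
```

In the encoder loop, a module like GraphAggregation would take the place of the single-hop mixing used in the earlier skeleton sketch.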
Each of the embeddings 𝐆_i is dispatched to its original series and then concatenated with series tokens, resulting in graph-enhanced embeddings 𝒳^(l). Temporal Encoding The graph-enhanced embeddings can later be processed by any Transformer component ( Transformer<cit.>, Informer <cit.>, FEDformer <cit.>, etc.). We choose the vanilla Transformer encoder <cit.> as our backbone. The output of the TEB functions as input token-level embeddings for the following encoding layer. Previously aggregated information from the GNN is disseminated to other tokens within each series via self-attention, enabling access to related series information. This process enhances the expressiveness of our model compared to series-independent models. § EXPERIMENTS §.§ Experimental Setup Datasets. To evaluate our proposed SageFormer, extensive experiments have been conducted on eight mainstream real-world datasets, including Weather, Traffic, Electricity, ILI(Influenza-Like Illness), and four ETT(Electricity Transformer Temperature) datasets. Details of these multivariate datasets are shown in Table <ref>. Among all datasets, Traffic and Electricity have more series, which can better reflect the effectiveness of the proposed method. Baselines and Task Settings. We compare our proposed method with nine popular models for long-term MTS forecasting problems as baselines, including three models that explicitly utilize inter-series dependencies: Crossformer<cit.>, MTGNN<cit.>, and LSTnet<cit.>; two series-independent neural models: DLinear<cit.> and PatchTST<cit.>; and four series-mixing transformer-based models: Transformer<cit.>, Informer<cit.>, Autoformer<cit.>, and Non-stationary Transformer<cit.>. Implementation details. For model training and evaluation, we adopt the same settings as in <cit.>. The entire dataset is rolled with a stride of 1 to generate different input-output pairs, and Train/Val/Test sets are zero-mean normalized using the mean and standard deviation of the training set. Performance is evaluated over varying future window sizes on each dataset. The past sequence length is set as 36 for ILI and 96 for the others. Mean Square Error (MSE) and Mean Absolute Error (MAE) serve as evaluation metrics. All experiments are conducted five times, and the mean of the metrics is reported. Details regarding datasets, baselines, implementation, and hyper-parameters can be found in Appendix <ref>. §.§ Main Results Long-term forecasting results. Table <ref> presents the forecasting results for the proposed SageFormer and other baseline models. The table shows that the proposed model consistently achieves state-of-the-art performance across all benchmarks and prediction lengths. Notably, SageFormer significantly outperforms other deep models on datasets with a large number of series, attaining a 7.4% average MSE reduction (0.471 → 0.436) on Traffic and a 9.3% average MSE reduction (0.193 → 0.175) on Electricity compared to previous state-of-the-art results. Our model exhibits substantial improvement on every dataset, particularly compared to models that explicitly utilize inter-series dependencies. This indicates that our proposed method effectively enhances the model's ability to capture relationships among multiple series. Framework generality. Furthermore, our model serves as a versatile extension for Transformer-based architectures. To validate this, we apply the SageFormer framework to three prominent Transformers and report the performance enhancement of each model as Table <ref>. 
Our method consistently improves the forecasting ability of different models, demonstrating that SageFormer is an effective, universally applicable framework. By leveraging graph structures, it can better utilize the interdependencies among various sequences, ultimately achieving superior predictive performance. §.§ Ablation Study The ablation studies were conducted to address two primary concerns: 1) the impact of graph aggregation and 2) the impact of series-aware global tokens. We designate SageFormer variants without specific components as shown in Table <ref>. First, the experiments validated the effectiveness of the graph structure in our time series prediction model. Removing the graph aggregation module from each encoder layer resulted in a substantial decline in prediction accuracy. On the Traffic dataset, the average decrease was 7.3%, and on the seven-series ETTh1 dataset, it was 2.8%, showing that graph structures enhance performance more in datasets with numerous series. Second, series-aware global tokens enhanced the model's prediction accuracy while reducing computational overhead. If all tokens (not just global tokens) participated in graph propagation calculations, the model's performance would decline by 6.3% and 1.6% on the Traffic and ETTh1 datasets, respectively. Lastly, we discovered that techniques like sparse constraints and directed graphs in graph construction were more effective for larger datasets (e.g., Traffic). In comparison, they had little impact on smaller datasets' prediction results. This finding suggests that applying sparse constraints can mitigate the impact of variable redundancy on the model while conserving computational resources. §.§ Effect of Hyper-parameters In this section, we examine the impact of four hyperparameters on our proposed SageFormer model: global token length, depth of graph aggregation, the number of nearest neighbors, and the depth of encoder layers. We conduct a sensitivity analysis on the Traffic dataset (Figure <ref>). For each of the four tasks, SageFormer consistently delivers stable performance, regardless of the selected value. Global token length (Figure <ref>): The model's performance remains consistent across all prediction windows, irrespective of the value of M. To optimize computational efficiency, we set M=1. Depth of graph aggregation (Figure <ref>): The model demonstrates robust performance with varying graph aggregation depths. To balance accuracy and efficiency, we set d=3. Number of nearest neighbors (Figure <ref>): Larger k values generally yield better results, but performance declines when a fully connected graph is utilized. This suggests sequence redundancy in MTS forecasting tasks, so we select k=16. SageFormer encoder layers (Figure <ref>): Increasing the number of encoding layers results in a higher parameter count for the model and its computational time. No significant reduction is observed after the model surpasses three layers, leading us to set the model's layers to 3. §.§ Synthetic Datasets Directed Cycle Graph Dataset. In this section, we investigate the adjacency matrices inferred by SageFormer using a synthetic dataset consisting of N=10 nodes. Each series value x_i,t is sampled from another series (i-1 mod N) with temporal lag τ=10, resulting in a directed cycle graph for the adjacency matrix. The dataset details are provided in Appendix <ref>. 
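Since the generation process is fully specified by the recurrence just described (and in the appendix), it can be sketched in a few lines. The scaling factor beta and noise level sigma are illustrative assumptions, as the text leaves their numerical values open; N=10 and the lag τ=10 follow the description.

```python
import numpy as np

def directed_cycle_dataset(n_series=10, length=10_000, lag=10, beta=0.9, sigma=0.2, seed=0):
    """x[i, t] ~ N(beta * x[(i-1) mod N, t - lag], sigma^2), so the ground-truth
    inter-series graph is a directed cycle. beta and sigma are illustrative choices."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n_series, length))
    x[:, :lag] = rng.normal(scale=sigma, size=(n_series, lag))   # bootstrap the first `lag` steps
    for t in range(lag, length):
        parent = np.roll(x[:, t - lag], 1)                       # value of series (i-1 mod N) at t - lag
        x[:, t] = rng.normal(loc=beta * parent, scale=sigma)
    adj = np.roll(np.eye(n_series), 1, axis=0)                   # adj[i, (i-1) mod N] = 1
    return x, adj

series, true_adjacency = directed_cycle_dataset()
```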
Figure <ref> represents the actual inter-series dependencies alongside our inferred results, effectively demonstrating that our method successfully recovers these dependencies. As Figure <ref> indicates, our proposed series-aware framework outperforms the previous series-mixing and series-independent frameworks, achieving the lowest MAE and MSE test losses. Importantly, the series-independent framework's performance is notably poor in this context, with an MSE exceeding 1. This deficiency stems from its disregard for the significant inter-series dependencies inherent in this dataset, particularly given that each sub-series is nearly equivalent to white noise in this dataset. Low-rank Dataset. To assess the effectiveness of different models in handling sparse data, we designed multiple Low-rank MTS datasets with varying numbers of series. Inspired by the Discrete Sine Transformation, we generate arbitrary signals as the sum of distinct sinusoids combined with Gaussian noise. The same sinusoids are shared among different nodes, creating the low-rank property. The dataset details are provided in Appendix <ref>. Figure <ref> presents the prediction MAE results for datasets with varying numbers of series (N) using the series-mixing method and our approach. It can be observed that the prediction performance of the series-mixing method deteriorates rapidly as the number of series increases since it encodes all series information into the same token. In contrast, the MAE of our method does not increase with the growth in the number of series, indicating that our designed approach can effectively exploit the low-rank characteristics of the dataset. §.§ Computational Efficiency Analysis We compared the computational efficiency of our model, SageFormer, with other Transformer-based models (Table <ref>). Although SageFormer's complexity is theoretically quadratic to historical series length T, a large patch length P in practice brings its runtime close to linear complexity models. An additional O(C^2) complexity is due to standard graph convolution operations, but techniques exist to reduce this to linear complexity <cit.>. In the decoder part, the complexity of SageFormer is simplified to linear, owing to the streamlined design of the linear decoder head. We also evaluated running time and memory consumption on the Traffic dataset, which has the most variables. SageFormer balances running time and memory usage well, achieving a running time of 0.31±0.03 seconds per batch and consuming 12.42 GB of memory. This result is slightly slower compared to the PatchTST <cit.> model but is faster than the Crossformer <cit.> model. These outcomes suggest that our proposed SageFormer model presents a competitive trade-off between efficiency and prediction accuracy. § CONCLUSION AND FUTURE WORK This paper presented SageFormer, a novel approach for modeling inter-series dependencies in long-term Multivariate Time Series (MTS) forecasting tasks. By amalgamating graph neural networks (GNN) with Transformer structures, SageFormer can effectively capture diverse temporal patterns and harness dependencies among various series. Our model has demonstrated impressive versatility through extensive experimentation, delivering state-of-the-art performance on real-world and synthetic datasets. 
SageFormer thus presents a promising solution to overcome the limitations of series dependencies modeling in MTS forecasting tasks and exhibits potential for further advancements and applications in other domains involving inter-series dependencies. We also acknowledged the limitations of our work and briefly delineated potential avenues for future research. While SageFormer achieves exceptional performance in long-term MTS forecasting, the dependencies it captures do not strictly represent causality. As a result, some dependencies may prove unreliable in practical scenarios due to the non-stationary nature of time series. Our primary focus on enhancing long-term forecasting performance has led to some degree of overlooking the interpretability of the graph structure. Moving forward, our work's graph neural network component could be improved to learn causal relationships between variables and reduce its complexity. The framework proposed in this paper could also be applied to non-Transformer models in the future. unsrt § SUPPLEMENTARY MATERIAL §.§ Table of Notations In the paper, scalars are denoted by lowercase letters, vectors by bold lowercase letters, matrices by bold uppercase letters, and tensors by calligraphic letters. Important notations that appear in this paper are summarized in Table <ref> §.§ Benchmarking Datasets Our experiments are carried out on six real-world datasets as described below: * The Traffic dataset[https://pems.dot.ca.gov/] contains data from 862 sensors installed on highways in the San Francisco Bay Area, measuring road occupancy. This dataset is recorded on an hourly basis over two years, resulting in a total of 17,544 time steps. * The Electricity Consumption Load (Electricity) dataset[https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014] records the power usage of 321 customers. The data is logged hourly over a two-year period, amassing a total of 26,304 time steps. * The Weather dataset[https://www.bgc-jena.mpg.de/wetter] is a meteorological collection featuring 21 variates, gathered in the United States over a four-year span. * The Electricity Transformers Temperature (ETT) datasets[https://github.com/zhouhaoyi/ETDataset] are procured from two electricity substations over two years. They provide a summary of load and oil temperature data across seven variates. For ETTm1 and ETTm2, the "m" signifies that data was recorded every 15 minutes, yielding a total of 69,680 time steps. ETTh1 and ETTh2 represent the hourly equivalents of ETTm1 and ETTm2, respectively, each containing 17,420 time steps. * The Exchange dataset[https://github.com/laiguokun/multivariate-time-series-data] details the daily foreign exchange rates of eight different countries, including Australia, British, Canada, Switzerland, China, Japan, New Zealand and Singapore ranging from 1990 to 2016. * The Influenza-Like Illness (ILI) dataset[https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html] is maintained by the United States Centers for Disease Control and Prevention. It collates patient information on a weekly basis for a period spanning from 2002 to 2021. §.§ Hyper-parameter Selection and Implementation Details We conduct all experiments five times, each implemented in PyTorch, and carried out on a single NVIDIA TITAN RTX 3090 24GB GPU. The repeated experimentation helps mitigate the effects of random variations in reported values. The look-back window size is set to 96, while the forecasting horizon ranges between 96 and 720. 
Model optimization is achieved through the Adam optimizer with a learning rate of 1e-4. The models are trained for 20 epochs or fewer, utilizing an early stop mechanism to prevent overfitting. By default, SageFormer comprises two encoder layers, a head count (H) of 16, and a model hidden dimension (d) of 512. We designate the number of global tokens as M=1; the node embedding dimension is set to 16, the number of nearest neighbors is K=16, and the graph aggregation depth is D=3. Furthermore, we employ a patch length (P) of 24 and a stride (S) of 8, consistent with the parameters utilized in PatchTST. For the larger datasets (Traffic, Electricity), a model with three encoder layers is adopted to enhance the expressive capacity. §.§ Baselines All baselines are reproduced using the original paper's configuration or the official code. However, the only exception is that we standardize the look-back window across all models to 36 for the ILI dataset and 96 for the remaining datasets to ensure a fair comparison. Consequently, some discrepancies may exist between our input-output setting and those reported in the referenced papers. We exclude traditional time series forecasting models (such as ARIMA, LSTM) from our baselines, as Transformer-based models have been demonstrated to outperform these in long-term forecasting tasks <cit.>. §.§ More on Synthetic Datasets Directed Cycle Graph Dataset. This synthetic dataset comprises a panel of N=10 time series, each with a length of 10000. For each series i at timestep t, the data is sampled from the past step t-τ of another time series i-1 mod N from the same panel. The formal generation process can be expressed as follows: x_i, t∼𝒩(β x_(i-1 mod N), t-10; σ^2), where x_i, t represents the i-th time series at time step t. The operator ∼ indicates that the i-th series at time t is generated from a normal distribution, denoted by 𝒩, with mean and variance parameters. The mean of this distribution is given by β x_(i-1 mod N), t-10, which represents the scaled value of the preceding time series (i-1 mod N) at a previous time step (t-10). β is a scaling factor applied to this value. The variance of the distribution is given by σ^2, a constant that determines the degree of deviation allowed from the mean. The dataset exhibits a directed cycle graph structure due to its distinct data generation process. Each time series i at time step t is modeled based on the past values of the preceding series in the panel, specifically, the t-τ step of the series i-1 mod N. This creates a cyclical dependency among the series, with each one being influenced by its predecessor in the panel. Such a design results in a directed cycle graph structure, as observed in the multivariate time series adjacency matrix. This arrangement effectively mirrors the pattern of a previous series with a temporal lag, as evidenced in Figure <ref>, which illustrates the first 120 timesteps of these ten series. Low-rank Dataset. This synthetic dataset comprises time series of length 10000 with varying numbers of series. Each series is generated as a weighted sum of different sinusoids, each with its own frequency and amplitude. This generation process can be formally expressed as: x_i = ∑_m=1^M B_i,m sin (2πω_⌊ i/K⌋,mt)+ϵ_i,t, In the above equation, B_i,m is sampled from a uniform distribution B̃_i,m∼ U(0.4, 1) and subsequently normalized by B_i,m=B̃_i,m/∑_m B̃_i,m. ω_⌊ i/K⌋,m is also sampled from a uniform distribution U(0, 0.2), and ϵ_i,t is sampled from a Gaussian distribution 𝒩(0, 0.2^2). 
Lastly, ω_⌊ i/K⌋,m is shared among different series within the same group, which gives the dataset its low-rank characteristic. We select M=3 and K=2 for these experiments. The low-rank property arises because the frequencies ω_⌊ i/K⌋,m are shared among series within the same group, leaving only a limited number of unique patterns across the series. Despite the high number of series, these shared characteristics permit a concise, low-dimensional representation; a minimal generation sketch is given at the end of this appendix. §.§ Full Results Due to space limitations in the main text, we place the full experimental results in the following tables: long-term forecasting results in Table <ref> and framework generality results in Table <ref>.
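Returning to the Low-rank dataset, here is the generation sketch referenced above. The defaults for the number of series and the random seed are assumptions for illustration; the amplitude, frequency, and noise distributions follow the equations in this appendix, with M=3 sinusoids and groups of K=2 series sharing frequencies.

```python
import numpy as np

def low_rank_dataset(n_series=32, length=10_000, n_sines=3, group_size=2, seed=0):
    """Each series is a normalized weighted sum of M sinusoids plus Gaussian noise;
    frequencies are shared within groups of K consecutive series, which produces
    the low-rank structure (M=3, K=2 as in the text)."""
    rng = np.random.default_rng(seed)
    t = np.arange(length)
    n_groups = int(np.ceil(n_series / group_size))
    omega = rng.uniform(0.0, 0.2, size=(n_groups, n_sines))       # shared within each group
    x = np.zeros((n_series, length))
    for i in range(n_series):
        b = rng.uniform(0.4, 1.0, size=n_sines)
        b = b / b.sum()                                            # normalize the amplitudes B_{i,m}
        freqs = omega[i // group_size]
        x[i] = (b[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        x[i] += rng.normal(scale=0.2, size=length)                 # epsilon_{i,t} ~ N(0, 0.2^2)
    return x

panel = low_rank_dataset()
```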
http://arxiv.org/abs/2307.02557v1
20230705180106
The weak field limit of quantum matter back-reacting on classical spacetime
[ "Isaac Layton", "Jonathan Oppenheim", "Andrea Russo", "Zachary Weller-Davies" ]
gr-qc
[ "gr-qc", "hep-th", "quant-ph" ]
http://arxiv.org/abs/2307.00292v1
20230701101917
Dumbbell dynamics: a didactical approach
[ "Benedetto Scoppola", "Matteo Veglianti" ]
astro-ph.EP
[ "astro-ph.EP", "physics.ed-ph" ]
Dumbbell dynamics: a didactical approach Benedetto Scoppola^1 Matteo Veglianti^2 ============================================ ^1 Dipartimento di Matematica, Università di Roma “Tor Vergata” Via della Ricerca Scientifica - 00133 Roma, Italy ^2 Dipartimento di Fisica, Università di Roma “Tor Vergata” Via della Ricerca Scientifica - 00133 Roma, Italy In this paper we propose a simplified model to describe the dissipative effects of tides. We assume a spherical Earth with a dissipative coupling with a mechanical dumbbell. The latter has a mass much smaller than the Earth's, and it models the presence of the tidal bulges. Using properly the scale analysis, we will show that some of the consequences of tidal dissipation are the circularization and the enlargement of orbit of the Moon and the slowing down of the Earth's rotation. We will also see that tidal dissipation plays a fundamental role for the establishment of a regime of spin-orbit resonance in the celestial systems. The mathematical tools used make our treatment appropriate for senior high school students or college students. § INTRODUCTION All textbooks in introductory astronomy and many in physics and mechanics mention the existence of oceanic tides as an interesting manifestation of universal gravitation: indeed many teachers are interested in this topic. As argued in <cit.>, the most important aspects of the origin and properties of tides are often treated inaccurately or even erroneously. Much of the confusion over generating tides is related to the roles of the orbital motion of the Moon and earth about their common center of mass and of the Earth's axial rotation. In discussing the physics behind this phenomenon, authors usually explain (more or less successfully) why two tidal swells appear on the opposite sides of the globe. However, it is difficult to find a plausible explanation of the physical mechanism responsible for the phase shift between the zenith of the moon and the moment of high tide, which at some places approaches 90°. Misunderstandings also occur in discussions about the role of tidal friction in the retardation of axial rotations and in the evolution of orbital motions of the gravitationally coupled celestial bodies. While the conservative aspects of the tides are masterfully treated using elementary tools in <cit.>, <cit.> and <cit.>, the dissipative ones are only qualitatively described. The scientific, non-pedagogical, works on tidal dissipation are divided into two large areas according to the desired target: rheological aspects (see <cit.> and <cit.>) or dynamic aspects (see <cit.> and <cit.>). Our aim is therefore to propose a way to quantitatively deal with the dissipative aspects of the tides and their consequences using high school mathematics. To this end, we will use the "dumbbell model" developed in <cit.>, that consists in describing the planet in terms of a point P of mass M-μ and a mechanical dumbbell centered in P, i.e., a system of two points, each of which has mass μ/2, constrained to be at fixed mutual distance 2r, having P as center of mass. The idea of a dumbbell model is not original, in fact is developed in <cit.> and in many works (see, for instance <cit.>) where however it is used for other purposes. This model is useful to compute, using an elementary force's approach, the torque acting on the Earth’s ocean bulges due to the Moon and the torque acting on the Moon due to the Earth’s ocean bulges. We perform the detailed computation in section <ref>. 
In section <ref> we present a way to describe the evolution of the system imagining that the variation of the parameters does not occur in a continuous way, but rather discretely. This allows us to avoid a treatment through differential equations and replace them with finite difference equations. In this way, it will be possible to show how the tidal dissipation is responsible for the circularization and the enlargement of the lunar orbit and the slowing down of the earth's rotation. We will see that the first two events occur on very different time scales: the circularization of the lunar orbit first is much faster than the enlargement. Thus the orbit will become circular in shorter times than those of enlargement, and this is what we see in many planet-satellite systems of our galaxy, particularly in the Earth-Moon system. Finally, we will also see that tidal dissipation plays a fundamental role for the establishment of a regime of spin-orbit resonance in the celestial systems. The mathematical tools used make our discussion appropriate for senior high school students or college students. We believe that an appropriate and non-sterile use of mathematics is useful to understand its functionality and to make children passionate about studying this discipline. Furthermore, this subject is very suitable for teaching and learning the scale analysis, that is a very powerful tool to simplify complex equations by neglecting the suitable small terms. § TIDAL TORQUE In this section we want to show how the dumbbell model is useful to compute the torque acting on the Earth's ocean bulges due to the Moon. The ocean bulges are modeled by the aforementioned dumbbell: a pair of massive point each of which with a mass of μ /2 placed at the ends of a segment of length 2r, that we can assume, for simplicity, equal to the diameter 2R of the Earth.[With a detailed computation using suitable integrals, one can show that r=3/4R.] In figure <ref> we imagine the Earth as a sphere with radius R centered at the origin O of the reference frame and the Moon as a massive point, indicated with S, with mass m, at a distance a from the center of the Earth. Let the x-axis be the line joining O with S and the y-axis perpendicular to it. Moreover, for simplicity, we consider the rotation of the Earth perpendicular to the Moon's orbital plane. On the Earth's surface there are the two massive points C_1 and C_2, that represent the ocean bulge's center of gravity. Finally, in general, we imagine that the dumbbell is inclined by an angle ε with respect to the x-axis. Let F⃗_⃗1⃗ and F⃗_⃗2⃗ be the attractive forces acting on C_1 and C_2 due to the Moon and let F⃗_⃗1⃗'⃗ and F⃗_⃗2⃗'⃗ the attractive forces acting on S due to C_1 and C_2 respectively. Clearly F⃗_⃗1⃗'⃗=-F⃗_⃗1⃗ and F⃗_⃗2⃗'⃗=-F⃗_⃗2⃗. Moreover, on S also acts the attractive force F⃗, lying on the x-axis, due to the rest of the Earth, deprived of the two masses of water. Let G be the universal gravitational constant and let β_1 and β_2 be the two angles OŜC_1 and OŜC_2 respectively; we have: F_1=-F_1'=-Gmμ/2/SC_1^2(-x̂cosβ_1 + ŷsinβ_1), F_2=-F_2'=-Gmμ/2/SC_2^2(-x̂cosβ_2 - ŷsinβ_2), F=-Gm(M- μ)/a^2x̂. Moreover, from the geometry of the system we have that r/a is a dimensionless small parameter, so we can expand the forces up to the second order in r/a. 
To this end, we need the following geometric relation: SC_1^2=(a-r cosε)^2+(r sinε)^2 = a^2 ( 1 - 2r/acosε + r^2/a^2) SC_2^2=(a+r cosε)^2+(r sinε)^2 = a^2 ( 1 + 2r/acosε + r^2/a^2) β_1=r sinε/a - r cosε = r/asinε( 1 + r/acosε) β_2=r sinε/a + r cosε = r/asinε( 1 - r/acosε) From (<ref>) we have: F_1=-F_1'=-Gmμ/2/a^2( 1 + 2r/acosε - r^2/a^2 + 4r^2/a^2cos^2 ε) [-x̂( 1 - r^2/2a^2sin^2 ε) + ŷ( r/asinε( 1 + r/acosε) ) ]= =-Gmμ/2/a^2[-x̂( 1 + 2r/acosε - r^2/a^2( 3/2 - 9/2cos^2 ε) ) + ŷ(r/asinε( 1 + 3r/acosε) ) ]. Similarly, from (<ref>) we have: F_2=-F_2'=-Gmμ/2/a^2[-x̂( 1 - 2r/acosε - r^2/a^2( 3/2 - 9/2cos^2 ε) ) - ŷ(r/asinε( 1 - 3r/acosε) ) ]. We are now ready to calculate the torque acting on the dumbbell due to the Moon and the torque acting on the Moon due to the dumbbell. We will show that the two torques are exactly opposite, as it is expected from the conservation of angular momentum for isolated systems. For simplicity we impose that the Moon moves in a circular orbit around the point G, the center of gravity of the Earth-Moon system. This is equivalent to stating that the sum of the components along the x-axis of the forces acting on the Moon is the centripetal force. Let ω be the angular velocity of the revolution of the Moon and r_S=SG the radius of the circular orbit around the point G, so we have: mω^2 r_S = F_x+F'_1x+F'_2x = = Gm(M- μ)/a^2+Gmμ/2/a^2( 1 + 2r/acosε - r^2/a^2( 3/2 - 9/2cos^2 ε) ) + Gmμ/2/a^2( 1 - 2r/acosε - r^2/a^2( 3/2 - 9/2cos^2 ε) )= = GmM/a^2 - Gmμ/a^23/2r^2/a^2 (1-3cos^2 ε) Hence: r_S = GM/ω^2 a^2(1 +3/2μ/Mr^2/a^2 (3cos^2 ε -1) ). Notice that, in the case of two point masses m and M placed at a distance a, it turns out that m makes a circular orbit around the common center of gravity with a radius r_S=GM/ω^2 a^2. Since in (<ref>) both μ/M and r^2/a^2 are small quantities, the correction to the Moon's orbital radius due to the dumbbell is completely negligible. We can now compute both the torque acting on the dumbbell due to the Moon and the torque acting on the Moon due to the dumbbell up to the smallest order in r/a. Let's start with the latter. The magnitude of the torque acting on the Moon is Γ_M=aF⃗_⃗M⃗, with F⃗_⃗M⃗ is the sum of the forces acting on the Moon. Thanks to (<ref>), F⃗_⃗M⃗ is parallel to the y-axis, and the magnitude of the torque is: Γ_M = a (F'_1y-F'_2y) = Gmμ/2/a^2[r/asinε( 1 + 3r/acosε)-r/asinε( 1 - 3r/acosε) ]= =3r^2/a^3Gm μsinεcosε. On the other hand, the magnitude of the torque acting on the dumbbell due to the Moon is: Γ_D = r F⃗_⃗1⃗sin(ε+β_1)-r F⃗_⃗2⃗sin(ε - β_2)= = rGmμ/2/a^2( 1 + 2r/acosε -r^2/a^2 + 4r^2/a^2cos^2 ε) sin(ε+β_1) - rGmμ/2/a^2( 1 - 2r/acosε - r^2/a^2 + 4r^2/a^2cos^2 ε) sin(α-β_2)= = rGmμ/2/a^2( 1 + 2r/acosε) (sinεcosβ_1+cosεsinβ_1) - rGmμ/2/a^2( 1 - 2r/acosε) (sinεcosβ_2-cosεsinβ_2)= = rGmμ/2/a^2( 1 + 2r/acosε) (sinε+r/acosεsinε) - rGmμ/2/a^2( 1 - 2r/acosε) (sinε+r/acosεsinε)= =3r^2/a^3Gm μsinεcosε. As previously outlined, up to the smallest order in r/a the two torques are equal (and obviously in the opposite direction). This implies that the angular momentum of the system is conserved, as we expected. Moreover, μ represents the mass of the ocean bulge, that is the mass of water whose shape is that of an ellipsoid (of semi-axis R, R, R+h) from which it is subtracted a sphere of radius R concentric to it, with h represents the tidal height of the ocean: h=3/2m/M( R/a)^3 R. For a detailed computation of h based on elementary mathematical tools see <cit.>. 
So: μ=ρ_w 4/3π R^2h=ρ_w 4/3π R^2 3/2m/ρ_E 4/3π R^3( R/a)^3 R = ρ_w/ρ_E3/2( R/a)^3 m. Finally, remembering that, according to our assumption, r=R and using (<ref>), the torque can be written as: Γ=3R^2/a^3Gm μsinεcosε = ( 9/2ρ_w/ρ_E) G R^5/a^6 m^2 sin(2ε) = k G R^5/a^6 m^2 sin(2ε), where k = 9/2ρ_w/ρ_E is a dimensionless constant. The formula (<ref>) is known in literature as "MacDonald formula for body-tide torques". We note that our dumbbell model allows us to derive this formula in a simple way starting from reasonable physical considerations. § EVOLUTION OF THE SYSTEM In this section we want to study the evolution of the system avoiding advanced mathematical tools. In order to do this, we imagine that the variations of parameters is not continuous but discrete in time. This can be done because the parameters vary on very large time scales, therefore at each revolution they vary by very small quantities. For this reason the difference between a discrete and a continuous evolution is irrelevant. We start proving by this attitude the circularization of the orbit. Since the results of the previous section hold in the case of circular orbit, we can imagine that the real elliptical orbit of the Moon is the superposition of two virtual semicircular orbits centered on the Earth and tangent to the real trajectory in the perigee and in the apogee respectively, as shown in figure <ref>. In this way we can apply the results obtained in the previous section, but unfortunately we have introduced a discontinuity in the trajectory of the Moon, which is very difficult to digest. However, imagining that the evolution of the parameters occur in a discrete way at each semi-revolution, the discontinuity in the virtual trajectory is irrelevant. So: r_a represent the Earth-Moon distance in the apogee and r_p represent the Earth-Moon distance in the perigee. Clearly: r_a - r_p > 0. Moreover: a = 1/2 (r_a + r_p) represent the semi-major axis of the orbit; c = 1/2|F_1-F_2| = 1/2(r_a - r_p) represent the half focal distance; e = c/a represent the eccentricity of the orbit. To determine the evolution of the system, the torque plays a crucial role. As we calculated in the previous section, the torque acting on the Moon in the perigee is: Γ_M_p = k G R^5/r_p^6 m^2 sin(2ε). At the same time, the torque acting on the dumbbell is: Γ_D_p = -k G R^5/r_p^6 m^2 sin(2ε). obviously the signs ± are arbitrary: in any case the torques have opposite sign. The torques in the apogee (Γ_M_a and Γ_D_a) are similar: just replace r_p with r_a. Γ_M determines the variation of the orbital parameters (semi-major axis, focal distance, eccentricity); Γ_D determines the variation of Ω, the sidereal angular velocity of the Earth. To determine the evolution of the parameters, we compare their values during the n-th revolution (a_n; c_n; e_n; Ω_n) with their values during the (n+1)-th revolution (a_n+1; c_n+1; e_n+1; Ω_n+1). As we argued before, we imagine that the change of the parameters at the end of n-th revolution consist of two contribution at the end of each virtual semi-circumference. Moreover: r_a,n represents the Earth-Moon distance at the apogee during the n-th revolution; r_p,n represents the Earth-Moon distance at the perigee during the n-th revolution; a_n=1/2(r_a,n+r_p,n) represents the semi-major axis during the n-th revolution; c_n=r_a,n-r_p,n represents the half focal distance during the n-th revolution; e_n=c_n/a_n represents the eccentricity during the n-th revolution. 
The same parameters during the (n+1)-th revolution are indicated with the same notation, replacing the subscript n with (n+1). Finally, the change of parameters between two successive revolutions are indicated with Δ: Δ r_p = r_p,n+1 - r_p,n; Δ r_a = r_a,n+1 - r_a,n; Δ a = a_n+1 - a_n; Δ c = c_n+1 - c_n; Δ e = e_n+1 - e_n. We can start by considering the evolution of lunar orbital parameters. From now on, we indicate generically with r_i the Earth-Moon distance in the position i: i can be either p (the perigee) or a (the apogee). Let Γ_Mn = k G R^5/r_i^6 m^2 sin(2ε) the torque acting on the Moon in the n-th revolution. Then the variation of Moon's angular momentum L=m ω r_i^2 between the n-th and the (n+1)-th revolution is Δ L = L_n+1 - L_n is: Δ L = Γ_Mn T = Γ_Mn2π/ω. From this equation we can derive the evolution of a, the semi-major axis of the orbit. We remember that ω depends on r_i, indeed from the Kepler's third laws ω^2 r_i^3 = GM. [This result holds in the case of a circular orbit. Thanks to the assumptions made above, this is our case.] Therefore we have: ω=√(GM)/r_i^3/2 Hence, from (<ref>) and (<ref>), we have: Δ(m √(GMr_i)) = k G R^5/r_i^6 m^2 sin(2ε)2π r_i^3/2/√(GM), that we can rewrite as: Δ(√(r_i)) = 2π k m/MR^5/r_i^9/2sin(2ε). We can rewrite the l.h.s. of the previous equation as: Δ(√(r_i)) = √(r_i, n+1)-√(r_i,n) = √(r_i,n+Δ r_i)-√(r_i,n) = √(r_i,n)( √(1+ Δ r_i/r_i,n) - 1 ) ≃√(r_i,n)( 1+ 1/2Δ r_i/r_i,n - 1 ) = Δ r_i/2√(r_i,n) = Δ r_i/2√(r_i), where we have used the approximation: √(1+x)≃ 1+1/2x (see appendix), with x=Δ r_i/r_i,n << 1. Hence equation (<ref>) becomes: Δ r_i = 4π k m/MR^5/r_i^4sin(2ε) = K/r_i^4, with K= 4π k m/M R^5 sin(2ε) > 0 constant independent of r_i. Actually K depends on ε which depends on r_i, indeed ε is the difference between the angular position on the Moon (with respect a certain reference axis), ω t, and the sidereal angular position of the Earth, Ω t: ε = Ω t - ω t = Ω t ( 1 - ω/Ω). But if we suppose that ω << Ω (this assumption is currently true for the Earth-Moon system, indeed: ω/Ω≃1 day/1 month≃1/30≃ 0,03), then ε≃Ω t is independent on r_i. Let us now suppose, as we argued before, that the variation of a consists of two contributions: the variation when the Moon is at perigee (Δ a_p) and the variation when the Moon is at apogee (Δ a_a): Δ r_p = K/r_p^4 Δ r_a = K/r_a^4 But r_a>r_p, then: Δ r_p>Δ r_a. From equation (<ref>) we can derive two important results: First, Δ a = Δ r_a + Δ r_p > 0, this implies a_n+1 > a_n, then the semi-major axis of the orbit increases. Second, Δ c = c_n+1 - c_n = 1/2(r_a,n+1 - r_p,n+1) - 1/2(r_a,n - r_p,n) = 1/2(r_a,n+1 - r_a,n) - 1/2(r_p,n+1 - r_p,n) = 1/2(Δ r_a - Δ r_p) < 0, this implies c_n+1 < c_n, then the focal distance of the orbit decreases. Finally, the two previous results implies that the eccentricity decreases and so the orbit becomes circular. Indeed: Δ e = e_n+1 - e_n = c_n+1/a_n+1 - c_n/a_n = c_n+1a_n - c_na_n+1/a_n+1a_n = c_n+1a_n - c_na_n + c_na_n - c_na_n+1/a_n+1a_n = a_n(c_n+1 - c_n) - c_n(a_n+1 - a_n)/a_n+1a_n = a_n Δ c_n - c_n Δ a_n/a_n+1a_n < 0, the last inequality follow from the fact that all the terms are positive, except for Δ c. This implies e_n+1 < e_n, then the eccentricity of the orbit decreases. Moreover, we can determine the rate of decrease of c_n. Indeed: Δ c = Δ (r_a - r_p) = Δ r_a - Δ r_p = K/r_a^4 - K/r_p^4 = K (r_p^4 - r_a^4)/r_a^4r_p^4 = K (r_p^2 + r_a^2)(r_p + r_a)(r_p - r_a)/r_a^4r_p^4 = -cKa(r_p^2 + r_a^2)/r_a^4r_p^4≃ -cK/a^5= -λ c, with λ = K/a^5 > 0. 
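Before stating the qualitative conclusions, it is instructive to iterate the finite-difference relations Δ r_i = K/r_i^4 numerically. The short sketch below is purely illustrative: K, the initial perigee and apogee, and the number of iterations are arbitrary dimensionless values, not Earth-Moon parameters.

```python
# Discrete tidal evolution: at each step the perigee and apogee distances grow by
# Delta r = K / r^4, so the semi-major axis a = (r_a + r_p)/2 keeps increasing while
# the half focal distance c = (r_a - r_p)/2 (and hence the eccentricity e = c/a) decays.
K = 1e-6                 # stands in for 4*pi*k*(m/M)*R^5*sin(2*eps); value is arbitrary
r_p, r_a = 0.8, 1.2      # illustrative initial perigee and apogee (dimensionless)

print(f"start: a = {0.5*(r_a+r_p):.3f}, e = {(r_a-r_p)/(r_a+r_p):.4f}")
for n in range(2_000_000):
    r_p += K / r_p**4
    r_a += K / r_a**4
a, c = 0.5 * (r_a + r_p), 0.5 * (r_a - r_p)
print(f"end:   a = {a:.3f}, e = {c/a:.4f}")
# The eccentricity decays far faster than the semi-major axis grows:
# circularization (exponential) outpaces enlargement (polynomial).
```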
Therefore Δ c_n decreases in a way directly proportional to c_n: this kind of decrease is called "exponential decrease " where λ is the rate of decrease. Since λ in positive, then c_n decreases until it becomes lesser than any prefixed positive quantity. When c_n approaches 0, then r_a,n≃ r_p,n and hence, from (<ref>), Δ r_a ≃Δ r_p. This implies Δ c ≃ 0. Thus, when the focal distance "becomes" zero, it no longer varies. But c_n ≃ 0, Δ c ≃ 0 imply that e_n ≃ 0, Δ e ≃ 0. Thus even the eccentricity of the orbit decreases until it becomes lesser than any prefixed positive quantity and, "at the end" it no longer varies. So the orbit becomes circular. On the other hand, the semi-major axis a increases indefinitely: even when the orbit becomes circular, a (that is its radius) continues to increase. But the growth of a and the decrease of e occur on different time scales and in different ways, indeed while the variation of e is exponential, i.e. very fast, that of a is polynomial [ Indeed, Δ a = 1/2Δ (r_a + r_p) = 1/2 ( Δ r_a + Δ r_p ) = 1/2( K/r_a^4 + K/r_p^4) = K (r_p^4 + r_a^4)/2r_a^4r_p^4≃K/2a^4. ], which is slower. Let us now study the evolution of Ω, the sidereal angular velocity of the Earth: it's variation is due to Γ_D, the torque acting on the dumbbell. Indeed the dumbbell is pulled back from the Moon and thanks to the friction between it and the underlying planet, it slows down the rotation of the Earth. The angular momentum of the Earth is I_E Ω, where I_E is the moment of inertia. As we argued before, we suppose that it varies in each lunar revolution. Thus, at the end of n-th revolution we have: Δ( I_E Ω) = Γ_D,n T = Γ_D,n2π/ω, and hence: ΔΩ = Γ_D,n/I_E2π/ω=-2π k G R^5/r_i^9/2m^2/I_Esin(2ε) < 0. So, as we expect, Ω decrease. But if Ω decreases, also ε decreases; indeed as argued before (see equation (<ref>)), ε≃Ω t, then: Δε = ε_n+1 - ε_n = Ω_n+1 t - Ω_n t = (Ω_n+1 - Ω_n)t = ΔΩ t < 0, this implies ε_n+1 < ε_n, then ε decreases. This dynamics has as a stationary situation that in which ε = 0. Indeed, when ε approaches 0, then ΔΩ becomes 0 (see equation (<ref>)), and then Ω no longer varies. But, from equation (<ref>), when ε=0 then Ω - ω = 0 that implies Ω = ω. So the stationary situation is the spin-orbit resonance regime: that is a dynamics in which the Earth complete a rotation over a period of time equal to a Moon revolution. In such a situation the dumbbell (that rotates with velocity ω) moves on the Earth at the same speed as that of the underlying layer (that rotates with velocity Ω) and thus friction between the dumbbell and the underlying Earth ends: in such a situation tides will no longer be observed on Earth and no more energy will be dissipated by this mechanism. This type of dynamics is often observed in the oldest planet-satellite systems of the Solar System, indicating the fact that the equilibrium situation predicted by our model is indeed attractive. § SPECIAL APPROXIMATION In this section we want to show two ways to prove the special approximation we used in equation (<ref>): √(1+x)≃ 1 + 1/2x, valid if x<<1. The first way uses the linearization formula: given a function f(x), if in its point x_0 the function f and its derivative f' are both continuous, it is possible to approximate the function with the tangent line in x_0: f(x_0+δ) = f(x_0) + f'(x_0)δ + R(δ), with R(δ) = o(δ^2). So equation (<ref>) follows from (<ref>) if f(x) = √(1+x), x_0 = 0 and δ = x << 1. A second way to prove (<ref>) does not use derivative, but only techniques of Cartesian geometry. 
The task is to determine the tangent line to the function y=√(1+x) at the point x_0=0. In the Cartesian plane, y=√(1+x) represents the equation of a semi-parabola. The tangent point is (0, 1), so the equation of the tangent line is: y=mx+1. The slope m can be determined by imposing the tangent condition. We can rewrite the equation of our semi-parabola as y^2=1+x and substitute y with its expression mx+1, obtaining: m^2x^2+(2m-1)x=0. Finally, we impose the tangent condition (uniqueness of the solution): the discriminant of (<ref>) is null. This implies: m=1/2. Then the equation of the tangent line is y=1/2x+1. So, the semi-parabola y=√(1+x) can be approximated near the point x_0=0 with its tangent line y=1/2x+1. The authors have no conflicts to disclose. 1 Butikov, Eugene I., A dynamical picture of the oceanic tides, Am. J. Phys. no. 70, pp. 1001-1010, 2002. 2 Butikov, Eugene I., Oceanic Tides: A Physical Explanation And Modeling, Computer tools in education, no. 5, pp. 12-34, 2017. 3 Chiu-king Ng, How tidal forces cause ocean tides in the equilibrium theory, Phys. Educ., no. 50, pp. 159-164, 2015. 4 Efroimsky, Michael, Bodily Tides near Spin-Orbit Resonances, Celestial Mech. Dynam. Astronom., vol. 112, no. 3, pp. 283-330, 2012. 5 Efroimsky, Michael, Tidal Evolution of Asteroidal Binaries. Ruled by Viscosity. Ignorant of Rigidity, Astronom. J., vol. 150, no. 4, Art. 98, 2015. 6 Ferraz-Mello, S., Ragazzo, C. G., and dos Santos, L. R., Dissipative Forces in Celestial Mechanics, 30° Coloquio Brasileiro de Matematica, Rio de Janeiro: IMPA, 2015. 7 Scoppola, B., Troiani, A., Veglianti, M., Tides and Dumbbell Dynamics, Regul. Chaot. Dyn., vol. 27, pp. 369-380, 2022. https://doi.org/10.1134/S1560354722030078 8 Celletti, A. and Sidorenko, V., Some Properties of the Dumbbell Satellite Attitude Dynamics, Celestial Mech. Dynam. Astronom., vol. 101, nos. 1-2, pp. 105-126, 2008. 9 Hut, P., Tidal Evolution in Close Binary Systems, Astronomy and Astrophysics, vol. 99, pp. 126-140, 1981.
http://arxiv.org/abs/2307.02030v1
20230705052753
Topological classes of thermodynamics of the four-dimensional static accelerating black holes
[ "Di Wu" ]
hep-th
[ "hep-th", "gr-qc" ]
∂𝔸𝒩ℱ𝒦GBKsong wdcwnu@163.comSchool of Physics and Astronomy, China West Normal University, Nanchong, Sichuan 637002, People's Republic of China In this paper, utilizing the generalized off-shell Helmholtz free energy, we explore the topological numbers of the four-dimensional static accelerating black hole and its AdS extension, as well as the static charged accelerating black hole and its AdS extension. Our analysis reveals a profound and significant impact of the acceleration parameter on the topological numbers associated with the static black holes. In addition, we demonstrate that the electric charge parameter has an important effect on the topological number of the static neutral accelerating black holes. What is more, we also indicate that the cosmological constant has a remarkable influence on the topological number of the static accelerating black hole. Topological classes of thermodynamics of the four-dimensional static accelerating black holes Di Wu August 1, 2023 ============================================================================================= § INTRODUCTION In the big family of four-dimensional black hole solutions in General Relativity, in addition to the Schwarzschild black hole and Taub-NUT spacetime <cit.>, another simplest exact vacuum solution is the C-metric <cit.>, which represents an accelerating black hole. In fact, it had already been shown <cit.> that the C-metric solution describes a pair of causally separated black holes which accelerate away from each other due to the presence of strings or struts that are represented by conical singularities. Later, it was obviously shown <cit.> that the C-metric can be derived from the metric of two superposed Schwarzschild black holes by assuming that the mass and location of one of them approaches infinity in an appropriate way. In recent years, aspects of the accelerating black holes, including global causal structure <cit.>, quantum thermal properties <cit.>, holographic heat engines <cit.>, black hole shadows <cit.>, holographic complexity <cit.>, and so on, have been investigated extensively. In particular, thermodynamics of the AdS_4 C-metric have been figured out first in Ref. <cit.> and then well addressed in Refs. <cit.>, where the first law of thermodynamics <cit.> and the Bekentein-Smarr mass formula <cit.> as well as the Christodoulou-Ruffini-type squared-mass formula <cit.> are properly extended to accelerating, charged, and rotating black holes. Naturally, the establishment of the above mass formulae is not the only aspect of the investigation of black hole thermodynamics. Recently, topology has attracted a lot of attention as a mathematical tool to explore the thermodynamic properties of black holes <cit.>.[One can also apply topology to study the light rings <cit.> and the timelike circular orbits <cit.>.] Remarkably, a novel approach proposed in Ref. <cit.> has emerged to examine the thermodynamic topological properties of black holes. This approach interprets black hole solutions as topological thermodynamic defects, establishes topological numbers, and subsequently categorizes black holes into three distinct classes based on their respective topological numbers. This groundbreaking methodology has illuminated new facets of our understanding of the fundamental properties of black holes and gravity. The topological approach outlined in Ref. <cit.> has gained widespread acceptance due to its adaptability and simplicity. 
Consequently, it has been successfully employed to investigate the topological numbers associated with various well-known black hole solutions <cit.>. However, the topological number of the accelerating black holes remains virgin territory; it deserves to be explored in depth, which motivates the present work. In this paper, we shall investigate the topological number associated with the four-dimensional static accelerating black hole and its AdS extension, as well as the static charged accelerating black hole and its AdS extension. This paper aims to fill the gap in the existing literature by examining the influence of the acceleration parameter on the topological number of black holes, a facet that has been overlooked so far. The findings of this research will provide valuable insights into the crucial role played by the acceleration parameter in determining the topological number of static black holes and their AdS counterparts within the framework of the Einstein-Maxwell gravity theory. The remaining part of this paper is organized as follows. In Sec. <ref>, we present a brief review of the thermodynamic topological approach outlined in Ref. <cit.>. In Sec. <ref>, we first investigate the topological number of the four-dimensional static accelerating black hole by considering the simplest static C-metric solution, and then extend it to the case of the static AdS-C-metric with a nonzero negative cosmological constant. In Sec. <ref>, we turn to discuss the topological number of the four-dimensional charged accelerating black hole by considering the Reissner-Nordström-C-metric (RN-C-metric) solution, and then extend it to the RN-AdS-C-metric case. Finally, our conclusion and outlook are given in Sec. <ref>. § A BRIEF REVIEW OF THERMODYNAMIC TOPOLOGICAL APPROACH In this section, we give a brief review of the novel thermodynamic topological approach. According to Ref. <cit.>, one begins by introducing the generalized off-shell Helmholtz free energy ℱ = M -S/τ for a black hole thermodynamical system with mass M and entropy S, where τ is an extra variable that can be viewed as the inverse temperature of the cavity surrounding the black hole. Only when τ = T^-1 does the generalized Helmholtz free energy (<ref>) become on-shell and reduce to the standard Helmholtz free energy F = M -TS of the black hole <cit.>. In Ref. <cit.>, a key vector ϕ is defined as ϕ = (∂ℱ/∂r_h , -cotΘ cscΘ) . Within this framework, the parameters are subject to the conditions 0 < r_h < +∞ and 0 ≤Θ≤π. It is important to highlight that the component ϕ^Θ diverges at Θ = 0 and Θ = π, implying that the vector points outward in these two cases. A topological current can be established through Duan's theory <cit.> of ϕ-mapping topological currents in the following manner: j^μ = 1/2πϵ^μνρϵ_ab ∂_ν n^a ∂_ρ n^b , with μ,ν,ρ=0,1,2. Here, we have ∂_ν = ∂/∂x^ν and x^ν=(τ, r_h, Θ). The unit vector n is formulated as n = (n^r, n^Θ), where n^r = ϕ^r_h/||ϕ|| and n^Θ = ϕ^Θ/||ϕ||. It is evident that the conservation of the above current (<ref>) can be easily demonstrated, leading to ∂_μ j^μ = 0. Furthermore, it can be promptly shown that the topological current is a δ-function of the field configuration <cit.>: j^μ=δ^2(ϕ)J^μ(ϕ/x) , where the three-dimensional Jacobian J^μ(ϕ/x) fulfills ϵ^ab J^μ(ϕ/x) = ϵ^μνρ ∂_ν ϕ^a ∂_ρ ϕ^b.
It becomes evident that the value of j^μ vanishes only when ϕ^a(x_i) = 0, allowing us to derive the topological number W in the subsequent manner: W = ∫_Σj^0d^2x = ∑_i=1^Nβ_iη_i = ∑_i=1^Nw_i . In the given context, β_i represents the positive Hopf index, serving as a count for the number of loops formed by the vector ϕ^a within the ϕ-space as x^μ revolves around the zero point z_i. Simultaneously, η_i= sign(J^0(ϕ/x)_z_i)=± 1 denotes the Brouwer degree, and w_i denotes the winding number associated with the i-th zero point of ϕ enclosed within the domain Σ. Furthermore, in the case that two distinct closed curves, denoted as Σ_1 and Σ_2, encompass the identical zero point of ϕ, it follows that the corresponding winding number must be equivalent. Conversely, if there exists no zero point of ϕ within the enclosed region, it is imperative that W = 0. It is important to emphasize that the local winding number w_i can serve as a valuable tool for characterizing the local thermodynamic stability. Thermodynamically stable black holes correspond to positive values of w_i, while unstable black holes correspond to negative values. On the other hand, the global topological number W represents the difference between the numbers of thermodynamically stable and unstable black holes within a classical black hole solution at a fixed temperature <cit.>. Hence, the local winding number not only allows for differentiation between different phases of black holes (stable or unstable) within the same black hole solution at a specific temperature, but it also facilitates the classification of black hole solutions based on the global topological number. Moreover, based on this classification, black holes with the same global topological number exhibit similar thermodynamic properties, even if they belong to different geometric classes. § STATIC NEUTRAL ACCELERATING BLACK HOLES In this section, we will investigate the topological number of the four-dimensional static neutral accelerating black hole by considering the simplest static C-metric solution, and then extend it to the case of the static AdS-C-metric with a nonzero negative cosmological constant. §.§ C-metric An accelerating black hole can be described by the metric <cit.> ds^2 = 1/Ω^2{-f(r)dt^2 +dr^2/f(r) +r^2[dθ^2/g(θ) +g(θ)sin^2θdφ^2/K^2] } , where f(r) = (1 -A^2r^2)(1 -2m/r) , g(θ) = 1 +2mAcosθ , Ω = 1 +Arcosθ , in which K is the conical deficit of the spacetime, m and A are the mass and acceleration parameters, respectively. The thermodynamic quantities are <cit.> M = m/K , μ_± = 1/4(1 -1 ± 2mA/K) , T = m/2π r_h^2 -(r_h -m)A^2/2π , S = π r_h^2/K(1 -A^2r_h^2) , where μ_± are the tensions of the conical deficits on the north and south poles, r_h are the locations of the event and Cauchy horizons that satisfy the equation: f(r_h) = 0. It is a simple matter to check that the above thermodynamic quantities simultaneously fulfil the first law and the Bekenstein-Smarr relation dM = TdS -λ_+dμ_+ -λ_-dμ_- , M = 2TS , with the thermodynamic lengths <cit.>λ_± = r_h/1 ± Ar_h -m are conjugate to the tensions μ_±. The expression of the Helmholtz free energy is then obtain as <cit.> F = M -TS = M/2 = m/2K . In the subsequent step, we will derive the topological number of the four-dimensional static accelerating black hole. 
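Before doing so, the "simple check" asserted above can be made explicit. The horizon condition f(r_h)=0 with Ar_h ≠ 1 gives m = r_h/2, and substituting this into the quantities listed above yields the Smarr relation directly (a worked step added here for readability; it is left implicit in the original derivation):

```latex
% Worked verification of M = 2TS for the static C-metric, using m = r_h/2 from f(r_h)=0.
\begin{aligned}
T &= \frac{m}{2\pi r_h^{2}} - \frac{(r_h - m)A^{2}}{2\pi}
   = \frac{1 - A^{2}r_h^{2}}{4\pi r_h},\\[2pt]
TS &= \frac{1 - A^{2}r_h^{2}}{4\pi r_h}\,
      \frac{\pi r_h^{2}}{K\,(1 - A^{2}r_h^{2})}
    = \frac{r_h}{4K} = \frac{m}{2K} = \frac{M}{2},
\end{aligned}
```

so that M = 2TS and F = M - TS = M/2 = m/(2K), in agreement with the Helmholtz free energy quoted above.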
The evaluation of the Helmholtz free energy for this black hole can be carried out by utilizing the Euclidean action as follows: I_E = 1/16π∫_M d^4x √(g)R +1/8π∫_M d^3x √(h)( -_0) , where h is the determinant of the induced metric h_ij, is the extrinsic curvature of the boundary, and _0 is the subtracted one of the massless C-metric solution as the reference background. The calculation of the Euclidean action integral yields the following result for the Helmholtz free energy F = I_E/β = m/2K , where β = 1/T being the interval of the time coordinate. Replacing T with 1/τ in Eq. (<ref>) and substituting m = r_h/2, thus the generalized off-shell Helmholtz free energy is = M -S/τ = r_h/2K -π r_h^2/(1 -A^2r_h^2)Kτ . Using the definition of Eq. (<ref>), the components of the vector ϕ can be easily obtained as follows: ϕ^r_h = 1/2K -2π r_h/(A^2r_h^2 -1)^2Kτ , ϕ^Θ = -ΘΘ . By solving the equation: ϕ^r_h = 0, one can get a curve on the r_h-τ plane. For the four-dimensional static accelerating black hole, one can arrive at τ = 4π r_h/(A^2r_h^2 -1)^2 . Note that Eq. (<ref>) consistently reduces to the result obtained in the case of the four-dimensional Schwarzschild black hole <cit.> when the acceleration parameter A vanishes. Under the assumption of Ar_0=1 for the four-dimensional static accelerating black hole, we plot Fig. <ref> and Fig. <ref> to visualize key aspects. These figures depict the zero points of the component ϕ^r_h and the behavior of the unit vector field n on a portion of the Θ -r_h plane, with τ = 20r_0. Here, r_0 corresponds to an arbitrary length scale determined by the size of a cavity that encloses the static accelerating black hole. It is evident that the static accelerating black hole exhibits distinct behavior compared to the Schwarzschild black hole <cit.>, emphasizing the significant impact of the acceleration parameter on the thermodynamic topological classification of the static neutral black hole. Consequently, it would be intriguing to explore deeper into the topological properties of black holes with unusual horizon topologies, such as planar <cit.>, toroidal <cit.>, hyperbolic <cit.>, ultraspinning black holes <cit.>, and NUT-charged spacetimes <cit.>. In Fig. <ref>, the zero points are located at (r_h/r_0, Θ) = (0.62,π/2), and (1.39,π/2), respectively. Consequently, the winding numbers w_i for the blue contours C_i can be interpreted as follows: w_1 = -1, w_2 = 1, which deviate from those associated with the Schwarzschild black hole <cit.>. Regarding the topological global properties, the topological number W=0 for the four-dimensional static accelerating black hole can be readily observed from Fig. <ref>, which also distinguishes it from the topological number of the Schwarzschild black hole (W = -1). Furthermore, it can be inferred that not only do the static accelerating black hole and the Schwarzschild black hole exhibit clear differences in terms of geometric topology, but they also belong to different categories in terms of thermodynamic topology. §.§ AdS-C-metric In this subsection, we will extend the above discussions to the cases of the static neutral AdS accelerating black hole by considering the four-dimensional AdS-C-metric, whose metric is still given by Eq. (<ref>), but now f(r) = (1 -A^2r^2)(1 -2m/r) +r^2/l^2 , in which the AdS radius l is associated with the thermodynamic pressure P = 3/(8πl^2) of the four-dimensional AdS black hole <cit.>. 
The thermodynamic quantities are <cit.> M = mα/K , μ_± = 1/4(1 -1 ± 2mA/K) , T = r_h^3 +ml^2/2πα r_h^2l^2 -(r_h -m)A^2/2πα , S = π r_h^2/K(1 -A^2r_h^2) , V = 4π/3α K[r_h^3/(1 -A^2r_h^2)^2 +mA^2l^4 ] , P = 3/8π l^2 , λ_± = 1/α[r_h/1 -A^2r_h^2 -m(1 ±2Al^2/r_h) ] , where the rescaled factor α = √(1 -A^2l^2). It is easy to verify that the above thermodynamic quantities obey the differential first law and integral Bekenstein-Smarr mass formula simultaneously, dM = TdS +VdP -λ_+dμ_+ -λ_-dμ_- , M = 2TS -2VP . With the help of the above expressions and utilizing m = r_h/2 +r_h^3/[2(1 -α^2r_h^2)l^2], the Helmholtz free energy of the four-dimensional static neutral AdS accelerating black hole reads F = M -TS = M/2 -VP = mα/2K -1/2α Kl^2[r_h^3/(1 -A^2r_h^2)^2 +mα^2l^4 ] , which coincides with those calculated via the Euclidean action integral, i.e., F = I_E/β. In order to obtain this result, one can calculate the Euclidean action <cit.> I_E = 1/16π∫_M d^4x √(g)(R +6/l^2) +1/8π∫_ M d^3x √(h)[ -2/l -l/2ℛ(h)] , where and ℛ(h) are the extrinsic curvature and Ricci scalar of the boundary metric h_μν, respectively. To eliminate the divergence, the action encompasses not only the standard Einstein-Hilbert term but also includes the Gibbons-Hawking boundary term and the corresponding AdS boundary counterterms <cit.>. Now, we explore the topological number of the four-dimensional static neutral accelerating AdS black hole. Replacing T with 1/τ and substituting l^2 = 3/(8πP) into Eq. (<ref>), then the generalized off-shell Helmholtz free energy simply reads = r_h/24K√(16 -6A^2/π P)(3 +8π Pr_h^2/1 -A^2r_h^2) -π r_h^2/(1 -A^2r_h^2)Kτ . Therefore, the components of the vector ϕ are computed as follows: ϕ^r_h = 1/24K(A^2r_h^2 -1)^2[-8√(2π)Pr_h^2√(8π -3A^2/P)(A^2r_h^2 -3) +3√(16 -6A^2/π P)(A^2r_h^2 -1)^2 ] -2π r_h/(A^2r_h^2 -1)^2Kτ , ϕ^Θ = -ΘΘ , thus one can calculate the zero point of the vector field ϕ^r_h as τ = -24π^3/2r_h/√(4π -3A^2/2P)[8π Pr_h^2(A^2r_h^2 -3) -3(A^2r_h^2 -1 )^2 ] , which consistently reduces to the one obtained in the four-dimensional Schwarzschild-AdS_4 black hole case <cit.> when the acceleration parameter A is turned off. We point out that the generation point satisfies the constraint conditions given by τ/r_h = 0 , ^2τ/r_h^2 > 0 , and the annihilation point obeys the constraint conditions as follows τ/r_h = 0 , ^2τ/r_h^2 < 0 . Considering the pressure as Pr_0^2 = 0.01 and the acceleration parameter Ar_0 = 0.2 for the four-dimensional static neutral accelerating AdS black hole, we illustrate the zero points of ϕ^r_h in the r_h-τ plane in Fig. <ref>, and the unit vector field n in Fig. <ref> with τ = 20r_0, 21r_0, and 22r_0, respectively. From Figs. <ref> and <ref>, one can observe that for these values of Pr_0^2 and Ar_0, one generation point and one annihilation point can be found at τ/r_0 = τ_a/r_0 = 20.75 and τ/r_0 = τ_b/r_0 = 21.77, respectively. It is evident that there exists a singular small black hole branch for τ < τ_a, three distinct black hole branches for τ_a < τ < τ_b, and one large black hole branch for τ > τ_b. Computing the winding number w for these three black hole branches, we find that both the small and large black hole branches have w = -1, while the intermediate black hole branch has w = 1. The static neutral accelerating AdS_4 black hole consistently maintains a topological number of W = -1, in contrast to the four-dimensional static neutral accelerating black hole discussed in the previous subsection, which possesses a topological number of zero. 
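The branch structure and winding numbers quoted in this and the preceding subsection can be cross-checked numerically. The short script below is not part of the original paper: it scans the two defect curves, τ = 4πr_h/(A²r_h²-1)² for the C-metric and the AdS curve rewritten here under the identification l² = 3/(8πP), α = √(1-3A²/(8πP)) as τ = 12πr_h/{α[3(1-A²r_h²)² + 8πP r_h²(3-A²r_h²)]}, locates the zero points of ϕ^{r_h} at fixed τ, and assigns each a winding number w = -sign(dτ/dr_h), which for these static solutions reproduces the sign of the Jacobian J^0 at a simple zero. The parameter values follow the figures discussed in the text.

```python
import numpy as np

def tau_c_metric(r, A):
    """Defect curve tau(r_h) for the static neutral C-metric."""
    return 4 * np.pi * r / (A**2 * r**2 - 1) ** 2

def tau_ads_c_metric(r, A, P):
    """Defect curve for the static neutral AdS-C-metric, written with l^2 = 3/(8*pi*P)."""
    alpha = np.sqrt(1 - 3 * A**2 / (8 * np.pi * P))
    denom = 3 * (1 - A**2 * r**2) ** 2 + 8 * np.pi * P * r**2 * (3 - A**2 * r**2)
    return 12 * np.pi * r / (alpha * denom)

def topological_number(curve, tau, r_min=0.05, r_max=8.0, n=200_000, **kw):
    """Locate the zero points of phi^{r_h} (i.e. curve(r) = tau) on a fine grid and
    sum the winding numbers w_i = -sign(d tau / d r_h) of the branches they sit on."""
    r = np.linspace(r_min, r_max, n)
    t = curve(r, **kw)
    g = t - tau
    roots, windings = [], []
    for i in np.where(np.sign(g[:-1]) * np.sign(g[1:]) < 0)[0]:   # sign changes bracket the roots
        # linear interpolation inside the bracket is accurate enough on this grid
        r0 = r[i] - g[i] * (r[i + 1] - r[i]) / (g[i + 1] - g[i])
        slope = (t[i + 1] - t[i]) / (r[i + 1] - r[i])
        roots.append(float(r0))
        windings.append(-int(np.sign(slope)))
    return roots, windings, sum(windings)

# C-metric with A r0 = 1 and tau = 20 r0: two zero points, W = 0
print(topological_number(tau_c_metric, tau=20.0, A=1.0))
# AdS-C-metric with A r0 = 0.2, P r0^2 = 0.01 and tau = 21 r0: three branches, W = -1
print(topological_number(tau_ads_c_metric, tau=21.0, A=0.2, P=0.01))
```

With these values the script recovers the zero points near r_h/r_0 ≈ 0.62 and 1.39 with W = 0 for the C-metric, and three black hole branches with winding numbers -1, +1, -1 and W = -1 for the AdS case, matching the discussion above.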
Therefore, from a thermodynamic topological standpoint, these aforementioned two black holes represent distinct categories of black hole solutions, indicating the importance of the cosmological constant in determining the topological number for the static neutral accelerating black hole. Furthermore, since the topological number of the Schwarzschild-AdS_4 black hole is zero, while that of the static neutral accelerating AdS_4 black hole is -1, it can be inferred that the acceleration parameter has a remarkable effect on the topological number for the four-dimensional static uncharged AdS black hole. § STATIC CHARGED ACCELERATING BLACK HOLES In this section, we turn to discuss the topological number of the four-dimensional charged accelerating black hole by considering the RN-C-metric solution, and then extend it to the RN-AdS-C-metric case with a nonzero negative cosmological constant. §.§ RN-C-metric A static charged accelerating black hole is represented by the metric and the Abelian gauge potential <cit.> ds^2 = 1/Ω^2{-f(r)dt^2 +dr^2/f(r) +r^2[dθ^2/g(θ) +g(θ)sin^2θdφ^2/K^2] } , F = dB , B = q/rdt , where f(r) = (1 -A^2r^2)(1 -2m/r +q^2/r^2) , g(θ) = 1 +2mAcosθ +q^2A^2cos^2θ , Ω = 1 +Arcosθ , in which K is the conical deficit of the charged accelerating black hole, m, q, A are the mass, the electric charge and the acceleration parameters, respectively. The thermodynamic quantities are <cit.> M = m/α K , μ_± = 1/4(1 -1 ± 2mA +q^2A^2/K) , T = mr_h -q^2/2πα r_h^3 -(r_h -m)A^2/2πα , Q = q/K , S = π r_h^2/K(1 -A^2r_h^2) , Φ = q/α r_h , where the factor α = √(1 +q^2A^2), r_h are the locations of the event and Cauchy horizons that obey the horizon equation: f(r_h) = 0. It is easy to verify that the above thermodynamic quantities satisfy the differential first law and the integral Bekenstein-Smarr relation simultaneously dM = TdS +Φ dQ -λ_+dμ_+ -λ_-dμ_- , M = 2TS +Φ Q , where the thermodynamic lengths λ_± = 1 ∓ Ar_h/α(1 +q^2A^2)(r_h -2m +m/1 ± Ar_h) being conjugate to the tensions μ_±. In addition, one can easily verify that the Gibbs free energy reads G = M -TS -Φ Q , whose expression can be obtained from the Euclidean action I_E = 1/16π∫_M d^4x √(g)(R -F^2) +1/8π∫_M d^3x √(h)( -_0) . where h represents the determinant of the induced metric h_ij, denotes the extrinsic curvature of the boundary, and _0 signifies the subtracted value of the massless C-metric solution used as the reference background. The computation of the Euclidean action integral yields the Gibbs free energy G = I_E/β = r_h^2 -q^2/4α Kr_h = M -Φ Q/2 , where β = T^-1 denotes the interval of the time coordinate. Next, we will investigate the topological number of the four-dimensional static charged accelerating black hole. We note that the Helmholtz free energy is given by F = G +Φ Q = M -TS . It is very easy to obtain the generalized off-shell Helmholtz free energy as = M -S/τ = r_h^2 +q^2/2α Kr_h -π r_h^2/(1 -A^2r_h^2)Kτ . Then, the components of the vector ϕ are ϕ^r_h = r_h^2 -q^2/2α Kr_h^2 -2π r_h/(A^2r_h^2 -1)^2Kτ , ϕ^Θ = -ΘΘ . Thus, by solving the equation: ϕ^r_h = 0, one can compute the zero point of the vector field ϕ as τ = 4πα r_h^3/(r_h^2 -q^2)(A^2r_h^2 -1)^2 . We point out that Eq. (<ref>) consistently reduces to the on obtained in the case of the four-dimensional RN black hole <cit.> when the accelerating parameter A vanishes. For the four-dimensional static charged accelerating black hole, we take Pr_0^2=0.01, Ar_0=1, q/r_0=1, and plot the zero points of the component ϕ^r_h in Fig. 
<ref>, and the unit vector field n with τ/r_0=20 in Fig. <ref>, respectively. Obviously, there is only one thermodynamically stable four-dimensional static charged accelerating black hole for any value of τ. Based upon the local property of the zero point, one can obtain the topological number W = 1 for the four-dimensional static charged accelerating black hole, which is different from that of the four-dimensional static accelerating black hole (W = 0). This fact indicates that the electric charge parameter has an important effect on the topological number of the four-dimensional static neutral accelerating black hole. Compared with the four-dimensional RN black hole which has a topological number of zero, it can be inferred that the acceleration parameter plays an crucial role in determining the topological number for the four-dimensional static charged black hole. §.§ RN-AdS-C-metric In this subsection, we will extend the discussions in the last subsection to the cases of the static charged accelerating AdS black hole by considering the four-dimensional RN-AdS-C-metric, whose metric and the Abelian gauge potential are still given by Eqs. (<ref>)-(<ref>), but now f(r) = (1 -A^2r^2)(1 -2m/r +q^2/r^2) +r^2/l^2 . The thermodynamic quantities are <cit.> M = m[1 -A^2l^2(1 +A^2q^2)]/α K , S = π r_h^2/K(1 -A^2r_h^2) , T = 1/4πα[-2A^2r_h(1 -2m/r_h +q^2/r_h^2) +2(1 -A^2r_h^2)(m/r_h^2 -q^2/r_h^3) +2r_h/l^2] , Q = q/K , Φ = q/α r_h , P = 3/8π l^2 , V = 4π l^4/3α Kr_h^5{4m^2r_h^2 +mr_h[A^2(1 +A^2q^2)r_h^4 -4r_h^2 -4q^2] +(r_h^2 +q^2)^2} , μ_± = 1/4(1 -1 ± 2mA +q^2A^2/K) , where the factor α = √(1 -A^2l^2(1 +A^2q^2))√(1 +A^2q^2), and r_h is the largest root of the horizon equation: f(r_h) = 0. Then one can verify that the above thermodynamical quantities completely satisfy both the the first law and the Bekenstein-Smarr mass formula dM = TdS +Φ dQ +VdP -λ_+dμ_+ -λ_-dμ_- , M = 2TS +Φ Q -2VP , with the thermodynamic lengths λ_± = -l^2/(1 +A^2q^2)α r_h^4[2m^2r_h(1 -A^2r_h^2) +m(Ar_h ∓ 1)(Ar_h^3 ± 2A^2q^2r_h^2 ± 3r_h^2 +Aq^2r_h ± q^2) ± r_h(1 +A^2q^2)(A^3q^2r_h^3 ± r_h^2 -Aq^2r_h ± q^2)] , being conjugate to the tensions μ_±. One can calculate the Gibbs free energy as G = M -TS -Φ Q , which is consistent with those computed via the Euclidean action integral, namely G = I/β. In order to obtain this result, one can calculate the Euclidean action for the corresponding Euclidean black hole I_E = 1/16π∫_M d^4x √(g)(R +6/l^2 -F^2) +1/8π∫_ M d^3x √(h)[ -2/l -l/2ℛ(h)] , where and ℛ(h) are the extrinsic curvature and Ricci scalar of the boundary metric h_μν, respectively. Along with the standard Einstein-Hilbert term, the action also contains the Gibbons-Hawking boundary term and the corresponding AdS boundary counterterms in order to eliminate the divergence. Thus, the result is G = I/β = M -Φ Q/2 -VP = mr_h -q^2/2α Kr_h -{2m^2/α Kr_h^3 +(r_h^2 +q^2)^2/2α Kr_h^5 +m[A^2(1 +A^2q^2)r_h^4 -2r_h^2 -2q^2]/α Kr_h^4}l^2 . In order to establish the thermodynamic topological number of the four-dimensional static charged accelerating AdS black hole, we need to obtain the expression of the generalized off-shell Helmholtz free energy in advance. The Helmholtz free energy is given by F = G +Φ Q = M -TS . Using the definition of the generalized off-shell Helmholtz free energy (<ref>) and l^2 = 3/(8πP), one can easily get = √(8π -3(A^4q^2 +A^2)P^-1)/4√(2π(1 +A^2q^2))Kr_h[r_h^2 +q^2 +8π Pr_h^4/3(1-A^2r_h^2)] -π r_h^2/(1 -A^2r_h^2)Kτ . 
Thus, the components of the vector ϕ are computed as follows: ϕ^r_h = √(16π -6(A^4q^2 +A^2)P^-1)/24√(π(1 +A^2q^2))(A^2r_h^2 -1)^2Kr_h^2[3(r_h^2 -q^2)(A^2r_h^2 -1)^2 -8π Pr_h^4(A^2r_h^2 -3)] -2π^3/2r_h√(1 +A^2q^2)/√(π(1 +A^2q^2))(A^2r_h^2 -1)^2Kτ , ϕ^Θ = -ΘΘ , So the zero point of the vector field ϕ is τ = -24π^3/2r_h^3√(2(1 +A^2q^2))/√(8π -3(A^4q^2 +A^2)P^-1)[8π Pr_h^4(A^2r_h^2 -3) +3(q^2 -r_h^2)(A^2r_h^2 -1)^2 ] . Similar to the procedure adopted before, we show the zero points of the component ϕ^r_h with q = r_0, Ar_0 = 0.2, and Pr_0^2 = 0.01 in Fig. <ref>, and the unit vector field n with τ = 22r_0, q = r_0, Ar_0 = 0.2, and Pr_0^2 = 0.01 in Fig. <ref>. Note that for these values of q = r_0, Ar_0 = 0.2 and Pr_0^2 = 0.01, one generation point can be found at τ/r_0 = τ_c/r_0 = 21.56. Based on the local property of the zero points, we obtain the topological number of the four-dimensional static charged accelerating AdS black hole is W = 0, while that of the RN-AdS black hole is: W = 1<cit.>. Consequently, the introduction of the acceleration parameter brings about a substantial transformation in the topological number of the four-dimensional RN-AdS black hole. Moreover, the contrasting topological numbers between the four-dimensional static neutral accelerating AdS black hole (W = -1) and the four-dimensional static charged accelerating AdS black hole (W = 0) underscores the noteworthy impact of the electric charge parameter on the topological number for the former. Furthermore, the topological number of W = 1 exhibited by the static charge accelerating black hole distinguishes it from the static charged accelerating AdS black hole (W = 0), emphasizing the important role played by the cosmological constant in determining the topological number for the four-dimensional static charge accelerating black hole. § CONCLUSIONS The results we found in the current paper are now presented in Table <ref>. Note that we have also included some known results in the table for comparison purposes. In this paper, employing the generalized off-shell Helmholtz free energy, we investigate the topological numbers of the four-dimensional static accelerating black hole and its AdS extension, along with the static charged accelerating black hole and its AdS extension. We observe that the four-dimensional static neutral accelerating black hole and the four-dimensional static charged accelerating AdS black hole fall under the same category of topological classifications, as evidenced by their same topological number of W = 0. On the other hand, the four-dimensional static neutral accelerating AdS black hole and the four-dimensional static charged accelerating black hole belong to other two distinct topological categories, distinguished by their topological numbers of W = -1 and W = 1, respectively. Through our analysis, we uncover a profound and significant impact of the acceleration parameter on the topological characteristics of the static black holes. Additionally, we provide evidence of the crucial role played by the electric charge parameter in determining the topological number for the static neutral accelerating black holes. Furthermore, we emphasize the remarkable influence exerted by the cosmological constant on the topological number of the static accelerating black hole. A most related issue is to extend the present work to the more general rotating charged accelerating case. This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 12205243, No. 
11675130, by the Sichuan Science and Technology Program under Grant No. 2023NSFSC1347, and by the Doctoral Research Initiation Project of China West Normal University under Grant No. 21E028. 99Eur. Phys. J. C J. High Energy Phys. Classical Quantum Gravity J. Math. Phys. (N.Y.) Nucl. Phys. B Phys. Lett. B Phys. Rev. D Phys. Rev. Lett. Gen. Relativ. Gravit. AM53-472 A.H. Taub, Empty space-times admitting a three parameter group of motions, https://doi.org/10.2307/1969567Ann. Math. 53, 472 (1951). JMP4-915 E.T. Newman, L. Tamburino, and T. Unti, Empty-space generalization of the Schwarzschild metric, https://doi.org/10.1063/1.17040184, 915 (1963). PRD2-1359 W. Kinnersley and M. Walker, Uniformly accelerating charged mass in general relativity, https://doi.org/10.1103/PhysRevD.2.13592, 1359 (1970). AP98-98 J.F. Plebanski and M. Demianski, Rotating, charged, and uniformly accelerating mass in general relativity, http://dx.doi.org/10.1016/0003-4916(76)90240-2Ann. Phys. (N.Y.) 98, 98 (1976). PRD67-064001 O.J.C. Dias and J.P.S. Lemos, Pair of accelerated black holes in anti-de Sitter background: AdS C metric, https://doi.org/10.1103/PhysRevD.67.06400167, 064001 (2003). IJMPD15-335 J. B. Griffiths and J. Podolsky, A new look at the Plebanski-Demianski family of solutions, http://dx.doi.org/10.1142/S0218271806007742Int. J. Mod. Phys. D 15, 335 (2006). GRG15-535 W.B. Bonnor, The sources of the vacuum C-metric, http://dx.doi.org/10.1007/BF0075956915, 535 (1983). PRD55-7977 Y.-C. Wang, Vacuum C metric and the metric of two superposed Schwarzschild black holes, https://doi.org/10.1103/PhysRevD.55.797755, 7977 (2003). CQG23-6745 J.B. Griffiths, P. Krtous, and J. Podolsky, Interpreting the C-metric, https://doi.org/10.1088/0264-9381/23/23/00823, 6745 (2006). PLA209-6 H.W. Yu, Some quantum thermal properties of uniformly accelerating and rotating black holes, https://doi.org/10.1016/0375-9601(95)00776-6Phys. Lett. A 209, 6 (1995). EPJC78-645 J.L. Zhang, Y.J. Li, and H.W. Yu, Accelerating AdS black holes as the holographic heat engines in a benchmarking scheme, https://doi.org/10.1140/epjc/s10052-018-6137-x78, 645 (2018). JHEP0219144 J.L. Zhang, Y.J. Li, and H.W. Yu, Thermodynamics of charged accelerating AdS black holes and holographic heat engines, https://doi.org/10.1007/JHEP02(2019)14402 (2019) 144. PRD103-025005 M. Zhang and J. Jiang, Shadows of accelerating black holes, https://doi.org/10.1103/PhysRevD.103.025005103, 025005 (2021). PLB823-136731 S. Jiang and J. Jiang, Holographic complexity in charged accelerating black holes, https://doi.org/10.1016/j.physletb.2021.136731823, 136731 (2021). PLB838-137691 M. Zhang, C.X. Fang, and J. Jiang, Holographic complexity of rotating black holes with conical deficits, https://doi.org/10.1016/j.physletb.2023.137691838, 137691 (2023). PRL117-131303 M. Appels, R. Gregory, and D. Kubiznak, Thermodynamics of Accelerating Black Holes, https://doi.org/10.1103/PhysRevLett.117.131303117, 131303 (2016). JHEP0517116 M. Appels, R. Gregory, and David Kubiznak, Black hole thermodynamics with conical defects, https://doi.org/10.1007/JHEP05(2017)11605 (2017) 116. PRD98-104038 A. Anabalon, M. Appels, R. Gregory, D. Kubiznak, R.B. Mann, and A. Ovgun, Holographic thermodynamics of accelerating black holes, https://doi.org/10.1103/PhysRevD.98.10403898, 104038 (2018). JHEP0419096 A. Anabalon, F. Gray, R. Gregory, D. Kubiznak, and R.B. Mann, Thermodynamics of charged, rotating, and accelerating black holes, https://doi.org/10.1007/JHEP04(2019)09604 (2019) 096. PLB796-191 R. 
Gregory and A. Scoins, Accelerating black hole chemistry, https://doi.org/10.1016/j.physletb.2019.06.071796, 191 (2019). 2306.16187 H. Kim, N. Kim, Y. Lee, and A. Poole, Thermodynamics of accelerating AdS_4 black holes from the covariant phase space https://arxiv.org/abs/2306.16187arXiv:2306.16187. PRD7-2333 J.D. Bekenstein, Black holes and entropy, http://dx.doi.org/10.1103/PhysRevD.7.23337, 2333 (1973). PRD13-191 S.W. Hawking, Black holes and thermodynamics, http://dx.doi.org/10.1103/PhysRevD.13.19113, 191 (1976). PRL30-71 L. Smarr, Mass formula for Kerr black holes, http://dx.doi.org/10.1103/PhysRevLett.30.71. 30, 71 (1973); Erratum, http://dx.doi.org/10.1103/PhysRevLett.30.52130, 521 (1973). PRL25-1596 D. Christodoulou, Reversible and irreversible transforations in black hole physics, http://dx.doi.org/10.1103/PhysRevLett.25.159625, 1596 (1970). PRD4-3552 D. Christodoulou and R. Ruffini, Reversible transformations of a charged black hole, http://dx.doi.org/10.1103/PhysRevD.4.35524, 3552 (1971). PRD105-104003 S.-W. Wei and Y.-X. Liu, Topology of black hole thermodynamics, https://doi.org/10.1103/PhysRevD.105.104003105, 104003 (2022). PRD105-104053 P.K. Yerra and C. Bhamidipati, Topology of black hole thermodynamics in Gauss-Bonnet gravity, https://doi.org/10.1103/PhysRevD.105.104053105, 104053 (2022). PLB835-137591 P.K. Yerra and C. Bhamidipati, Topology of Born-Infeld AdS black holes in 4D novel Einstein-Gauss-Bonnet gravity. https://doi.org/10.1016/j.physletb.2022.137591835, 137591 (2022). PRD107-046013 M.B. Ahmed, D. Kubiznak, and R.B. Mann, Vortex/anti-vortex pair creation in black hole thermodynamics, https://doi.org/10.1103/PhysRevD.107.046013107, 046013 (2023). PRD107-106009 N.J. Gogoi and P. Phukon, Topology of thermodynamics in R-charged black holes, https://doi.org/10.1103/PhysRevD.107.106009107, 106009 (2023). JHEP0623115 M. Zhang and J. Jiang, Bulk-boundary thermodynamic equivalence: a topology viewpoint, https://doi.org/10.1007/JHEP06(2023)11506 (2023) 115. 2305.05595 M.R. Alipour, M.A.S. Afshar, S.N. Gashti, and J. Sadeghi, Topological classification and black hole thermodynamics, https://arxiv.org/abs/2305.05595arXiv:2305.05595. 2305.05916 Z.-M. Xu, Y.-S. Wang, B. Wu, and W.-L. Yang, Riemann surface, winding number and black hole thermodynamics, https://arxiv.org/abs/2305.05916arXiv:2305.05916. 2305.15674 M.-Y. Zhang, H. Chen, H. Hassanabadi, Z.-W. Long, and H. Yang, Topology of nonlinearly charged black hole chemistry via massive gravity, https://arxiv.org/abs/2305.15674arXiv:2305.15674. 2305.15910 T.N. Hung and C.H. Nam, Topology in thermodynamics of regular black strings with Kaluza-Klein reduction, https://arxiv.org/abs/2305.15910arXiv:2305.15910. 2306.16117 J. Sadeghi, M.R. Alipour, S.N. Gashti, and M.A.S. Afshar, Bulk-boundary and RPS thermodynamics from topology perspective, https://arxiv.org/abs/2306.16117arXiv:2306.16117. PRD106-064059 P.K. Yerra, C. Bhamidipati and S. Mukherji, Topology of critical points and Hawking-Page transition, https://doi.org/10.1103/PhysRevD.106.064059106, 064059 (2022). PRD107-044026 Z.-Y. Fan, Topological interpretation for phase transitions of black holes, https://doi.org/10.1103/PhysRevD.107.044026107, 044026 (2023). PRD107-064015 N.-C. Bai, L. Li and J. Tao, Topology of black hole thermodynamics in Lovelock gravity, https://doi.org/10.1103/PhysRevD.107.064015107, 064015 (2023). 2212.04341 N.-C. Bai, L. Song, and J. 
Tao, Reentrant phase transition in holographic thermodynamics of Born-Infeld AdS black hole, https://arxiv.org/abs/2212.04341arXiv:2212.04341. 2302.06201 R. Li, C.H. Liu, K. Zhang, and J. Wang, Topology of the landscape and dominant kinetic path for the thermodynamic phase transition of the charged Gauss-Bonnet AdS black holes, https://arxiv.org/abs/2302.06201arXiv:2302.06201. 2304.14988 P.K. Yerra, C. Bhamidipati, and S. Mukherji, Topology of critical points in boundary matrix duals, https://arxiv.org/abs/2304.14988arXiv:2304.14988. PRL119-251102 P.V.P. Cunha, E. Berti, and C.A.R. Herdeiro, Light Ring Stability in Ultra-Compact Objects, http://dx.doi.org/10.1103/PhysRevLett.119.251102119, 251102 (2017). PRL124-181101 P.V.P. Cunha, and C.A.R. Herdeiro, Stationary Black Holes and Light Rings, http://dx.doi.org/10.1103/PhysRevLett.124.181101124, 181101 (2020). PRD102-064039 S.-W. Wei, Topological charge and black hole photon spheres, https://doi.org/10.1103/PhysRevD.102.064039102, 064039 (2020). PRD103-104031 M. Guo and S. Gao, Universal properties of light rings for stationary axisymmetric spacetimes, https://doi.org/10.1103/PhysRevD.103.104031103, 104031 (2021). PRD105-024049 M. Guo, Z. Zhong, J. Wang, and S. Gao, Light rings and long-lived modes in quasiblack hole spacetimes, https://doi.org/10.1103/PhysRevD.105.024049105, 024049 (2022). PRD107-064006 S.-W. Wei and Y.-X. Liu, Topology of equatorial timelike circular orbits around stationary black holes, https://doi.org/10.1103/PhysRevD.107.064006107, 064006 (2023). 2301.04786 X. Ye and S.-W. Wei, Topological study of equatorial timelike circular orbit for spherically symmetric (hairy) black holes, https://arxiv.org/abs/2301.04786arXiv:2301.04786. PRL129-191101 S.-W. Wei, Y.-X. Liu, and R.B. Mann, Black Hole Solutions as Topological Thermodynamic Defects, https://doi.org/10.1103/PhysRevLett.129.191101129, 191101 (2022). PRD107-064023 C.H. Liu and J. Wang, The topological natures of the Gauss-Bonnet black hole in AdS space, https://doi.org/10.1103/PhysRevD.107.064023107, 064023 (2023). JHEP0123102 C.X. Fang, J. Jiang and M. Zhang, Revisiting thermodynamic topologies of black holes, http://dx.doi.org/10.1007/JHEP01(2023)10201 (2023) 102. PRD107-024024 D. Wu, Topological classes of rotating black holes, https://doi.org/10.1103/PhysRevD.107.024024107, 024024 (2023). PRD107-084002 D. Wu and S.-Q. Wu, Topological classes of thermodynamics of rotating AdS black holes, https://doi.org/10.1103/PhysRevD.107.084002107, 084002 (2023). PRD107-084053 N. Chatzifotis, P. Dorlis, N.E. Mavromatos, and E. Papantonopoulos, Thermal stability of hairy black holes, https://doi.org/10.1103/PhysRevD.107.084053107, 084053 (2023). 2303.06814 S.-W. Wei, Y.-P. Zhang, Y.-X. Liu, and R.B. Mann, Implementing static Dyson-like spheres around spherically symmetric black hole, https://arxiv.org/abs/2303.06814arXiv:2303.06814. 2303.13105 Y. Du and X. Zhang, Topological classes of black holes in de-Sitter spacetime, https://arxiv.org/abs/2303.13105arXiv:2303.13105. 2304.02889 C. Fairoos and T. Sharqui, Topological nature of black hole solutions in massive gravity, https://arxiv.org/abs/2304.02889arXiv:2304.02889. 2306.13286 D. Chen, Y. He, and J. Tao, Thermodynamic topology of higher-dimensional black holes in massive gravity, https://arxiv.org/abs/2306.13286arXiv:2306.13286. 2304.05695 N.J. Gogoi and P. Phukon, Thermodynamic topology of 4d dyonic AdS black holes in different ensembles, https://arxiv.org/abs/2304.05695arXiv:2304.05695. 2306.05692 J. Sadeghi, S.N. Gashti, M.R. 
Alipour, and M.A.S. Afshar, Bardeen black hole thermodynamics from topological perspective, https://doi.org/10.1016/j.aop.2023.169391Annals Phys. 455, 169391 (2023). 2306.11212 M.S. Ali, H.E. Moumni, J. Khalloufi, and K. Masmar, Topology of Born-Infeld-AdS black hole phase transition, https://arxiv.org/abs/2306.11212arXiv:2306.11212. EPJC83-365 D. Wu, Classifying topology of consistent thermodynamics of the four-dimensional neutral Lorentzian NUT-charged spacetimes, https://doi.org/10.1140/epjc/s10052-023-11561-483, 365 (2023). 2306.02324 D. Wu, Consistent thermodynamics and topological classes for the four-dimensional Lorentzian charged Taub-NUT spacetimes, https://arxiv.org/abs/2306.02324arXiv:2306.02324. PRD15-2752 G.W. Gibbons and S.W. Hawking, Action integrals and partition functions in quantum gravity, https://doi.org/10.1103/PhysRevD.15.275215, 2752 (1977). PRD33-2092 J.W. York, Black-hole thermodynamics and the Euclidean Einstein action, https://doi.org/10.1103/PhysRevD.33.209233, 2092 (1986)PRD105-084030 S.-J. Yang, R. Zhou, S.W. Wei, and Y.-X. Liu, Dynamics and kinetics of phase transition for Kerr AdS black hole on free energy landscape, https://doi.org/10.1103/PhysRevD.105.084030105, 084030 (2022). PRD106-106015 R. Li and J. Wang, Generalized free energy landscape of a black hole phase transition, https://doi.org/10.1103/PhysRevD.106.106015106, 106015 (2022). SS9-1072 Y.-S. Duan and M.-L. Ge, SU (2) gauge theory and electrodynamics of N moving magnetic monopoles, https://doi.org/10.1142/9789813237278_0001Sci. Sin. 9, 1072 (1979). NPB514-705 Y.-S. Duan, S. Li, and G.-H. Yang, The bifurcation theory of the Gauss-Bonnet-Chern topological current and Morse function, https://doi.org/10.1016/S0550-3213(97)00777-3514, 705 (1998). PRD61-045004 L.-B. Fu, Y.-S. Duan, and H. Zhang, Evolution of the Chern-Simons vortices, https://doi.org/10.1103/PhysRevD.61.04500461, 045004 (2000). CJP52-1 J. Podolsky, Accelerating black holes in anti-de Sitter universe, https://doi.org/10.1023/A:1013961411430Czech. J. Phys. 52, 1 (2002). CQG20-3269 K. Hong and E. Teo, A new form of the C metric, https://doi.org/10.1088/0264-9381/20/14/32120, 3269 (2003). PRD54-4891 R.G. Cai and Y.Z. Zhang, Black plane solutions in four-dimensional space-times, http://dx.doi.org/10.1103/PhysRevD.54.489154, 4891 (1996). PRD56-3600 D.R. Brill and J. Louko, Thermodynamics of (3+1)-dimensional black holes with toroidal or higher genus horizons, http://dx.doi.org/10.1103/PhysRevD.56.360056, 3600 (1997). PRD92-044058 Y. Chen, Y.K. Lim, and E. Teo, Deformed hyperbolic black holes, http://dx.doi.org/10.1103/PhysRevD.92.04405892, 044058 (2015). PRD89-084007 D. Klemm, Four-dimensional black holes with unusual horizons, http://dx.doi.org/10.1103/PhysRevD.89.08400789, 084007 (2014). PRL115-031101 R.A. Hennigar, R.B. Mann, and D. Kubiznak, Entropy Inequality Violations from Ultraspinning Black Holes, http://dx.doi.org/10.1103/PhysRevLett.115.031101115, 031101 (2015). JHEP0114127 A. Gnecchi, K. Hristov, D. Klemm, C. Toldo, and O. Vaughan, Rotating black holes in 4d gauged supergravity, http://dx.doi.org/10.1007/JHEP01(2014)12701 (2014) 127. PRD103-104020 D. Wu and P. Wu, Null hypersurface caustics for high-dimensional superentropic black holes, http://dx.doi.org/10.1103/PhysRevD.103.104020103, 104020 (2021). PRD101-024057 D. Wu, P. Wu, H. Yu, and S.-Q. Wu, Notes on the thermodynamics of superentropic AdS black holes, http://dx.doi.org/10.1103/PhysRevD.101.024057101, 024057 (2020). PRD102-044007 D. Wu, P. Wu, H. Yu, and S.-Q. 
Wu, Are ultraspinning Kerr-Sen-AdS_4 black holes always superentropic?, http://dx.doi.org/10.1103/PhysRevD.102.044007102, 044007 (2020). PRD103-044014 D. Wu, S.-Q. Wu, P. Wu, and H. Yu, Aspects of the dyonic Kerr-Sen-AdS_4 black hole and its ultraspinning version, http://dx.doi.org/10.1103/PhysRevD.103.044014103, 044014 (2021). JHEP1121031 D. Wu and S.-Q. Wu, Ultra-spinning Chow's black holes in six-dimensional gauged supergravity and their thermodynamical properties, http://dx.doi.org/10.1007/JHEP11(2021)03111 (2021) 031. PRD95-046002 S.M. Noorbakhsh and M. Ghominejad, Ultra-spinning gauged supergravity black holes and their Kerr/CFT correspondence, http://dx.doi.org/10.1103/PhysRevD.95.04600295, 046002 (2017). JHEP0118042 S.M. Noorbakhsh and M.H. Vahidinia, Extremal vanishing horizon Kerr-AdS black holes at ultraspinning limit, http://dx.doi.org/10.1007/JHEP01(2018)04201 (2018) 042. PRD100-101501 S.-Q. Wu and D. Wu, Thermodynamical hairs of the four-dimensional Taub-Newman-Unti-Tamburino spacetimes, https://doi.org/10.1103/PhysRevD.100.101501100, 101501(R) (2019). PRD105-124013 D. Wu and S.-Q. Wu, Consistent mass formulas for the four-dimensional dyonic NUT-charged spacetimes, https://doi.org/10.1103/PhysRevD.105.124013105, 124013 (2022). 2209.01757 D. Wu and S.-Q. Wu, Consistent mass formulae for higher even-dimensional Taub-NUT spacetimes and their AdS counterparts, http://arxiv.org/abs/2209.01757arXiv:2209.01757. 2210.17504 D. Wu and S.-Q. Wu, Revisiting mass formulae of the four-dimensional Reissner-Nordström-NUT-AdS solutions in a different metric form, http://arxiv.org/abs/2210.17504arXiv:2210.17504. 2306.00062 S.-Q. Wu and D. Wu, Consistent mass formulae for higher even-dimensional Reissner-Nordström-NUT (AdS) spacetimes, http://arxiv.org/abs/2306.00062arXiv:2306.00062. CPL23-1096 S. Wang, S.-Q. Wu, F. Xie, and L. Dan, The first laws of thermodynamics of the (2+1)-dimensional BTZ black holes and Kerr-de Sitter spacetimes, http://dx.doi.org/10.1088/0256-307X/23/5/009Chin. Phy. Lett. 23, 1096 (2006). CQG26-195011 D. Kastor, S. Ray, and J. Traschen, Enthalpy and the mechanics of AdS black holes, http://dx.doi.org/10.1088/0264-9381/26/19/19501126, 195011 (2009). PRD84-024037 M. Cvetič, G.W. Gibbons, D. Kubizňák, and C.N. Pope, Black hole enthalpy and an entropy inequality for the thermodynamic volume, http://dx.doi.org/10.1103/PhysRevD.84.02403784, 024037 (2011). PRD60-104001 R. Emparan, C.V. Johnson, and R.C. Myers, Surface terms as counterterms in the AdS/CFT correspondence, https://doi.org/10.1103/PhysRevD.60.10400160, 104001 (1999). PRD60-104026 A. Chamblin, R. Emparan, C.V. Johnson, and R.C. Myers, Holography, thermodynamics and fluctuations of charged AdS black holes, https://doi.org/10.1103/PhysRevD.60.10402660, 104026 (1999). PRD60-104047 R.B. Mann, Misner string entropy, https://doi.org/10.1103/PhysRevD.60.10404760, 104047 (1999). CMP208-413 V. Balasubramanian and P. Kraus, A Stress tensor for Anti-de Sitter gravity, https://doi.org/10.1007/s002200050764Commun. Math. Phys. 208, 413 (1999). CMP217-595 S.de Haro, S.N. Solodukhin, and K. Skenderis, Holographic reconstruction of space-time and renormalization in the AdS/CFT correspondence, https://doi.org/10.1007/s002200100381Commun. Math. Phys. 217, 595 (2001).
http://arxiv.org/abs/2307.00758v1
20230703052605
Structured Network Pruning by Measuring Filter-wise Interactions
[ "Wenting Tang", "Xingxing Wei", "Bo Li" ]
cs.CV
[ "cs.CV" ]
Structured Network Pruning by Measuring Filter-wise Interactions Wenting Tang, Xingxing Wei, Bo Li ================================================================================= Structured network pruning is a practical approach to reduce computation cost directly while retaining the CNNs' generalization performance in real applications. However, identifying redundant filters is a core problem in structured network pruning, and current redundancy criteria only focus on individual filters' attributes. When pruning sparsity increases, these redundancy criteria are not effective or efficient enough. Since the filter-wise interaction also contributes to the CNN's prediction accuracy, we integrate the filter-wise interaction into the redundancy criterion. In our criterion, we introduce the filter importance and filter utilization strength to reflect the decision ability of individual and multiple filters. Utilizing this new redundancy criterion, we propose a structured network pruning approach SNPFI (Structured Network Pruning by measuring Filter-wise Interaction). During the pruning, SNPFI can automatically assign the proper sparsity based on the filter utilization strength and eliminate the useless filters by filter importance. After the pruning, SNPFI can recover the pruned model's performance effectively without iterative training by minimizing the interaction difference. We empirically demonstrate the effectiveness of SNPFI with several commonly used CNN models, including AlexNet, MobileNetv1, and ResNet-50, on various image classification datasets, including MNIST, CIFAR-10, and ImageNet. For all experimental CNN models, nearly 60% of the computation is reduced in a single network compression pass while the classification accuracy is retained. § INTRODUCTION Network pruning can transform pre-trained heavy CNN models into a lightweight form with comparable precision by removing the models' inherent redundancy <cit.>. On the one hand, this transformation effectively boosts the real-time performance of CNNs on edge devices. On the other hand, network pruning prevents over-fitting by directly eliminating unnecessary parameters <cit.>. Structured network pruning removes groups of parameters at various granularities (e.g., kernel, filter). Since network pruning at the filter level of granularity requires no specially designed hardware accelerator, filter-wise structured network pruning has become one of the current research focuses <cit.>. Identifying redundant filters is a core problem in structured network pruning. A filter is redundant if its contribution to the performance is negligible. Because either a filter's weights or its output features can serve as the clue for discriminating redundancy, structured network pruning algorithms can be categorized as weight-based <cit.> and feature-based approaches <cit.>. Assuming the vital filters have large weights, the LWCE <cit.> identifies redundant filters by the L1-norm. Considering the similarity among filters in the feature space, the FPGM <cit.> ranks filters by their difference from the geometric median. When pruning intensity is modest, these static criteria can efficiently identify redundancy within a single evaluation; thus, they are also known as one-shot pruning. Under high pruning intensity, one-shot pruning approaches struggle to recover the performance effectively. Dynamic evaluation overcomes this drawback by measuring redundancy with more caution. <cit.> searches for redundant filters multiple times and recovers the weights through iterative training.
Current feature-based approaches dynamically evaluate filters during the inference and the training <cit.>. The Network slimming <cit.> embeds the filter selector layers into the original CNN to improve the accuracy of the redundancy discrimination. Despite various redundancy criteria existing, how to effectively and efficiently identify redundancy is still an unsettled problem. Solely focusing on filter’s unitary attributes, the above redundancy criteria neglect the filter-wise interaction when measuring filter's contribution on prediction. Many network interpretation studies have shown that filter-wise interactions do contribute on the prediction <cit.>. On the one hand, each filter's interaction can reflect its inherent decision ability <cit.>. On the other hand, the filter-wise interactions are universal <cit.> and the functionally close-coupled layer exists in common CNNs as shown in Figure <ref>-(a). In this scenario, filters with various importance have to interact to positively contribute to the output. Therefore, these motivate us to integrate the filter-wise interaction into our redundancy criterion . As shown in Figure <ref>-(a), important filters (green cube) can benefit considerably from interacting with the subtle filters (yellow cube). Thus, only measuring the unitary attribution of filters can not effectively identify redundancy. Since the useless filters are functionally loosely-coupled (blue region) in the layer, measuring the strength of interaction among multiple filters can avoid excluding the filters that seem to be useless but contribute. To identify the functionally loosely-coupled filters, we measure the filter's importance and the utilization strength of interaction by the filter-wise interaction in our redundancy criterion. The contributions of individuals or groups of filters on output reflect the importance of each filter and the strength of interaction. To distribute the unit and the group contribution fairly, we model the filter-wise interaction by the cooperative game theory <cit.>. For the different number of filters in each layer, we refer to the normalized strength of interaction among multiple filters as the filter utilization strength. Since the filter utilization strength can reflect the decision ability of a group of filters, we can provide a theoretical lower bound of sparsity concerning the performance. What's more, we identify the interactions that might cause the generalization gap by comparing the interaction result of the pruned model and the original model. For each pruned computational module, we refer to this disparity as the interaction difference to quantify the potential generalization gap and fine-tune the pruned model with the interaction differences to boost the optimization effectively. In this way, we can maintain the performance of the pruned model by our redundancy criterion. To attain the optimal combination of the layer-wise pruning sparsity, we propose a structured network pruning approach SNPFI (Structured network pruning by measuring filter-wise interaction) in Figure <ref> based on our redundancy criterion. SNPFI includes three parts: the RL (reinforcement learning) module, the Layer-wise Pruning module, and the Fine-tuning module. The RL module predicts the pruning sparsity in layer-wise and consists of the Environment and the Agent. 
For each computational module, the Environment first generates the state and the sparsity lower bound via the filter utilization strength; then the Agent predicts the sparsity by the state and the sparsity lower bound; in the end, the Environment rewards this pruning decision in constant time. Once the pruning is finished, the Agent is updated by the pruning decision and reward. To alleviate the delayed reward problem of model-free reinforcement learning in the pruning scenario <cit.>, we formulate the reward function by filter utilization strength. In this way, SNPFI can automatically and efficiently optimize the compression plan; The Layer-wise pruning module removes the redundant filters based on the predicted sparsity. During the layer-wise pruning, the redundant filters are removed via the filter importance and the sparsity; The Fine-tuning module recovers the weight of the pruned model according to the interaction difference. In this way, we can ensure the pruned model acquires meaningful interaction behaviors inherent in the original model even under a high pruning intensity. As shown in Figure <ref>-(b), SNPFI can outperform the other state-of-the-art pruning approach even with a more limited computation overhead. The contributions of our work are as follows: * We propose a redundancy criterion by filter-wise interaction and theoretically and experimentally prove its effectiveness. In this new redundancy criterion, the filter importance and filter utilization strength imply the decision ability of individual and multiple filters. According to our redundancy criterion, the interaction difference abstracts the potential generalization gap due to the pruning and effectively guides the weight recovery of the pruned model; * We present a model compression algorithm SNPFI which prunes efficiently and effectively; Utilizing our redundancy criterion and the interaction difference, SNPFI avoids pruning the relatively useful filters and recovers the performance by minimizing the generalization gap caused by pruning; * We experimentally demonstrate the availability of SNPFI among MNIST, CIFAR10, and IMAGENET across AlexNet, MobileNetv1, and ResNet-50. SNPFI can always prune around 60% (20% higher than experience) of filters of the original model without significant performance decline at a single-pass pruning process. § RELATED WORK Weight-based and Feature-based network pruning. Network pruning is a process that decouples the useless structure from the CNN by the redundancy criteria. The unitary attributes (e.g.weight, feature) can spot redundancy well under a conservative pruning sparsity <cit.>. The OBD <cit.> measures neurons' importance by the second-order derivative of weight; The TMI-GKP <cit.> evaluates the redundancy by the similarities among filters by weight. Since the coupling degrees of different computational modules might vary in diverse applications <cit.>, static evaluations of redundancy are not applicable when pruning intensity increases <cit.>. Therefore, current feature-based pruning approaches design filter selectors to decouple the useless filters at the expense of the computation<cit.>. The NPPM <cit.> can foresee the future impact on accuracy for any layer-wise pruning, but requires extra training on the filter selector. Thus, how identifying redundancy effectively and efficiently under the high pruning intensity is still an unsettled problem. We address this problem with a new redundancy criterion by the filter-wise interaction in Section <ref>. 
Structured pruning and AutoML. Structured pruning <cit.> removes groups of neuron connections at once, which means the adjacent and preceding layer's output channels shrink with the current layer's input channels when pruning at filter granularity. Since the computation cost exponentially decreases when scheduling the model's compression plan at the filter level, AutoML <cit.> (Automated machine learning) can effectively assign layers' sparsity by repeated retraining on each pruned model <cit.>. Since repeated retraining is necessary to bond the accuracy and the sparsity, the reinforcement learning approaches also struggle to balance exploration and exploitation due to the delayed reward problem <cit.> or lack of exploration <cit.>. The delayed reward problem is the reinforcement learning tasks with sparse or episodic rewards <cit.>. Aiming to alleviate the delayed reward problem in real pruning scenarios, we formulate the reward function by filter utilization rate in Section <ref>. § METHOD To identify the redundancy effectively, we introduce a new redundancy criterion by the filter-wise interaction in Section <ref>; To demonstrate the feasibility of our redundancy criteria, we propose a reinforcement learning-based structured network pruning approach SNPFI in Section <ref>; To minimize the performance disparity between the original and the pruned model, we propose the interaction difference for fine-tuning in Section <ref>. §.§ Measuring the redundancy In the pruning scenario, there are two types of filters in each computational module: the redundant and the useful. Taking a CNN with n computational layers as an example, the l-th layer is a set of filters N_l={F_i^l|i=1,...,c_out^l}, where each filter is F_i^l∈ℝ^C_in^l× k^l× k^l, c_out^l and c_in^l are the l-layer's output and input channels, k^l is the kernel size. The layer's pruning sparsity indicates the ratio of number of the redundant filters out of all the filters N_l <cit.>. According to the redundancy criteria and the l-th layer's pruning sparsity, assume r^l filters are useful, then the pruning sparsity is c_out^l-r^l/c_out^l. The useful filter set S_l of the l-th layer is S_l={F_i^l|i ∈ IMP }, where the IMP is a set of the index of the useful filters identified by their importance according to the specific redundancy criteria. In the inference, each filter consumes a comparable amount of computation but contributes to the output differently. Since the redundant filters' contributions are much lower than the useful filters, current pruning algorithms believe that the influence of pruning the redundant filter is subtle<cit.>. However, these approaches neglect the filter-wise interactions' contributions during the redundancy evaluation. Since a relatively useless filter might have a considerable collaborative contribution to prediction <cit.>, we need to fairly quantify the impacts of the potential filter-wise interactions on prediction. To achieve this goal, we regard each inference process achieved by m filters in the l-th layer as a collaborative game <M_l, V> <cit.>. During the inference on a image I, each filter is a player and m players align the coalition M_l with the contribution V(M_l), where m = |M_l| ∈ [1,c_out^l], V(M_l)=log P(ŷ=cls|M_l,I)/1-P(ŷ =cls |M_l,I) <cit.> and ŷ is the prediction with M_l. According to the V(M_l) for each coalition M_l, we can distribute the contributions on the output when multiple filters interact. 
In this way, we can quantify each filter's importance and each layer's filter utilization strength to identify the functionally loosely-coupled filters. To fairly measure the individual contributions of each filter, we formulate the importance of the c-th filter in the l-th layer by the Shapley value <cit.> in Eq.(<ref>). t_c^l = |∑_c∈ M_l,M_l⊆ N_l |M_l|!(c_out^l + 1 -|M_l|)!/c_out^l!△ V(M_l,c) |, where △ V(M_l,c)=V(M_l) -V(M_l -{ c}). When the c-th filter forms the M_l with other filters in the l-th layer, it might positively contribute to the prediction. Considering the potential contribution of each filter, the filter importance t_c^l assigns the relatively useless filter to a small value. As proven in <cit.>, the t_c^l is unique and fair to each filter. Therefore, we can fairly discriminate the relative useless filter from the useful filter. To fairly distribute the contribution of interaction, we first introduce the filter interaction u_l^d(i,j). The filter interaction is the contribution of the interaction among at least two distinct filters. In this scenario, coalition M_l has to consist of m=d+2 filters, where { i, j}⊆ M_l, i,j ∈ [1, c_out^l], i≠ j, d ∈ [0, c_out^l-2] and m ∈ [2, c_out^l]. Utilizing the Shapley interaction index <cit.>, the filter interaction u_l^d(i,j) among i,j, when the other d filters exist, define in Eq.(<ref>). u^d_l(i,j)=∑_{i,j}⊆ M_l⊆ N_l, | M_l |=d+2△ V(i,j,M_l), where △ V(i,j,M_l)=(V(M_l)-V(M_l-{ j })) - (V(M_l-{ i })-V(M_l-{ i,j })). The larger u^d_l(i,j), the stronger interaction when i,j form a coalition with the other d filters. Notably, it can be proved that u^d_l(i,j) is unique and fair among all coalitions<cit.>. With the u^d_l(i,j), we can measure the filter utilization strength U_l(m) of the l-th layer in Eq.(<ref>). U_l(m)=∑_q=0^m-2𝔼_I∈Ω𝔼_i,j∈ N_l[u^q_l(i,j)]/∑_p=0^c_out^l - 2𝔼_I∈Ω𝔼_i,j∈ N_l[u^p_l(i,j)], where the Ω is the calibration dataset and |Ω|=256 by default. A high value of U_l(m) indicates that the interaction strength is intensive when m filters exist. If subtle filters exist, U_l(m) achieves a high value with a relatively small m. In this way, we can estimate the number of useless filters by U_l(m). §.§ Filter-wise interaction based structured network pruning In the pruning scenario, the approximation of the optimal pruning plan S^* can be time-consuming. Since each layer's pruning sparsity and the accuracy of the pruned model are non-linearly related, previous studies spend considerable amount of computation to sample the best accuracy achieved by the according pruning sparsity<cit.>. Based on our redundancy criterion, we can estimate each layer's sparsity lower bound by the expert experience. Given the desirable filter utilization strength θ to maintain the basic functionality after pruning, the lower bound of the l-th layer's sparsity s_lb^l is estimated in Eq.(<ref>). min _m s_lb^l =m+2/c_out^l s.t. U_l(m)⩾θ, m ∈ [2, c_out^l ] m ∈ℤ^+ According to the s_lb^l and t_c^l, the relatively useful and important filters always remain in the layer-wise pruning as shown in the dotted box of Figure.<ref>. However, scheduling the layer-wise pruning plan only by the expert experience might compromise with the local optimum <cit.>. The objective function of the scheduling process is in the E.q.(<ref>). S^*=min _S COMP(S) s.t. ACC(S) ⩾α, S={s^l|l=1,...,n}, s^l ∈(0,1.0] where COMP(S) and ACC(S) are the computation and the accuracy of the pruned model following the pruning plan S, and α is the legal validation accuracy for the pruned model. 
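Before turning to how the pruning plan is searched, it may help to sketch how the quantities above could be estimated in practice. Exact evaluation of the Shapley value and of the interaction index is exponential in c_out^l, and the paper does not spell out its estimator, so the following is only a speculative Monte-Carlo sketch: the value function V(·) is replaced by a fabricated toy function (in SNPFI it would be the class log-odds under masked inference averaged over the calibration set Ω), the per-size interaction entering U_l(m) is approximated by an average over sampled coalitions rather than the full sum in the definition, and all function names are illustrative rather than taken from the authors' code.

```python
import random
from functools import lru_cache

# Toy stand-in for V(M_l): unit contributions plus pairwise synergies, purely so the
# estimators below can be exercised end to end.
N_FILTERS = 8
random.seed(0)
UNIT = [random.random() for _ in range(N_FILTERS)]
SYNERGY = {(i, j): 0.3 * random.random()
           for i in range(N_FILTERS) for j in range(i + 1, N_FILTERS)}

@lru_cache(maxsize=None)
def V(coalition):
    members = sorted(coalition)
    value = sum(UNIT[c] for c in members)
    value += sum(SYNERGY[(i, j)] for a, i in enumerate(members) for j in members[a + 1:])
    return value

def filter_importance(n, n_perms=500):
    """Permutation-sampling estimate of the Shapley importance |t_c^l| of every filter."""
    t = [0.0] * n
    for _ in range(n_perms):
        perm = random.sample(range(n), n)
        members, prev = [], V(frozenset())
        for c in perm:
            members.append(c)
            cur = V(frozenset(members))
            t[c] += (cur - prev) / n_perms
            prev = cur
    return [abs(x) for x in t]

def utilization_strength(n, m, n_samples=300):
    """Monte-Carlo sketch of U_l(m): pairwise interaction strength accumulated for coalitions
    of size up to m, normalised by the strength accumulated over all coalition sizes."""
    def mean_interaction(q):                      # q = number of context filters besides i, j
        acc = 0.0
        for _ in range(n_samples):
            i, j = random.sample(range(n), 2)
            rest = [k for k in range(n) if k not in (i, j)]
            ctx = frozenset(random.sample(rest, q))
            acc += V(ctx | {i, j}) - V(ctx | {i}) - V(ctx | {j}) + V(ctx)
        return acc / n_samples
    per_size = [mean_interaction(q) for q in range(n - 1)]
    return sum(per_size[:m - 1]) / sum(per_size)

print("filter importance:", [round(x, 3) for x in filter_importance(N_FILTERS)])
theta = 0.8                                       # target utilization strength
m_star = next(m for m in range(2, N_FILTERS + 1)
              if utilization_strength(N_FILTERS, m) >= theta)
print("smallest m with U(m) >= theta:", m_star, "-> sparsity lower bound s_lb follows as in the text")
```

Permutation sampling is a common way to approximate Shapley values when enumerating all 2^{c_out^l} coalitions is infeasible; the smallest m reaching the target θ then fixes the layer's sparsity lower bound as in the constrained objective above.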
To approximate S^* efficiently over the enormous number of candidate plans S, we integrate s_lb^l into Eq.(<ref>): S^*=min_S COMP(S), s.t. ACC(S) ⩾α, S={s^l|l=1,...,n}, s^l ∈[s_lb^l,1.0]. Comparing Eq.(<ref>) and Eq.(<ref>), the search space shrinks sharply and becomes feasible for an RL algorithm. We therefore integrate our filter-wise interaction-based redundancy criterion into the RL algorithm, as shown in Figure <ref>. In the following, we describe the RL module of SNPFI. State s^l is defined for the l-th accessible layer as s^l=<type, #param, FLOPs, k^l, μ(U_l(m)), σ(U_l(m)), c_in^l, c_out^l>, where type is the layer type, #param is the number of parameters, and FLOPs is the number of floating-point operations. The mean and standard deviation of the filter utilization strength, μ(U_l(m)) and σ(U_l(m)), are also included in the state. Action a^l is the pruning sparsity of the l-th layer. In the agent, a^l is predicted by the policy network π_ϵ (the Actor in Figure <ref>) from the state s^l and bounded below by s_lb^l, as defined in Eq.(<ref>): a^l=max(s_lb^l, π_ϵ(s^l)), where ϵ denotes the trainable parameters of π and the output range of the policy network is (0, 1]. Reward should convey the future effect of the subsequent layer-wise pruning on the model's computational overhead and performance. Since U_l(m) naturally relates the number of filters to their cooperative contribution to inference, we formulate the reward R_l(·) of the l-th layer via U_l(m) and r^l to alleviate the delayed reward problem <cit.> observed in AMC <cit.>: R_l(r^l) = U_l(r^l)/r^l - 1/c_out^l, if 1⩽ l< n; R_l(r^l) = ∑_i=1^n-1 R_i(r^i)/n + (c_out^n× U_n(r^n)-r^n)/(r^n × c_out^n × n), if l=n ∧ ACC(S)⩾α; R_l(r^l) = ∑_i=1^n-1 R_i(r^i)/n + (c_out^n× U_n(r^n)-r^n)/(r^n × c_out^n × n) - n^2, if l=n ∧ ACC(S)<α, where r^l=⌊ c_out^l × a^l ⌋ is the number of remaining filters and S={a^l|l=1,...,n} is the predicted pruning plan. For CIFAR-10 and MNIST, we evaluate ACC(S) on the validation set; for ImageNet, we use the training set. When S preserves the pruned model's performance, this reward encourages plans that achieve a higher filter utilization strength with fewer filters; when S violates the accuracy threshold α, it penalizes the actor for overly aggressive sparsity. In this way, S approaches S^* step by step. Policy Update. We use the DDPG <cit.> algorithm to optimize the pruning policy. The parameters of the policy network π are updated according to Eq.(<ref>): J(ϵ)= 𝔼_s^l∼ρ^β[Q^π(s^l,π_ϵ(s^l))]. In DDPG, ρ^β is the critic network (the Critic in Figure <ref>) that encourages the right actions, and Q^π is the value function estimating the current policy.
§.§ Interaction difference based fine-tuning When deploying on edge devices, removing relatively low-impact filters that still make a considerable collaborative contribution may be inevitable, and a generalization gap between the pruned and the original model emerges. Since our redundancy criterion prevents pruning filters that are both useful and important, this gap is caused by the distinct interactions of the pruned filters <cit.>. We therefore propose to exploit the filter-wise interaction during fine-tuning, as shown in Figure <ref>. From the perspective of the collaborative game, after pruning the l-th layer, the remaining r^l filters and the full set of c_out^l filters form two different coalitions: S_l and N_l.
If the generalization gap exists, at least one cooperative filter lies in {N_l - S_l}. To quantify this gap, we define the interaction difference Δ I(S_l, N_l) between S_l and N_l in Eq.(<ref>): Δ I(S_l, N_l) = 𝔼_(S_l,N_l)[r^l/c_out^l× V(N_l) - V(S_l)]. If V(S_l), V(N_l)∈ℝ are such that V(S_l)≠ V(N_l) for every computational layer l, then Δ I(S_l, N_l)>0. Based on Theorem <ref>, a positive interaction difference indicates the existence of meaningful interactions that lead to better generalization. Hence, we propose the ID loss in Eq.(<ref>) to encourage the pruned model to learn these important interactions: L_ID=-1/n∑_l=1^n∑_cls=1^ℂ P(ŷ=cls|Δ I(S_l, N_l),I) log P(ŷ=cls|Δ I(S_l, N_l),I), where L_ID is the Shannon entropy <cit.> of the classification of image I conditioned on Δ I(S_l, N_l), I is sampled from the training set, ℂ is the number of categories, and ŷ is the prediction based on Δ I(S_l, N_l). By minimizing L_ID, the pruned model eventually learns the important interactions. Since Δ I(S_l, N_l) is non-negative, L_ID provides effective guidance while avoiding gradient explosion. We therefore combine Δ I(S_l, N_l) with the ground truth during fine-tuning, as shown in Figure <ref>.
§ EXPERIMENTS We empirically demonstrate the effectiveness of our proposed redundancy metric and of SNPFI on MNIST <cit.>, CIFAR-10 <cit.>, and ImageNet <cit.>. Our method is implemented on top of the publicly available Torch implementations of ResNet-50 <cit.>, AlexNet <cit.>, and MobileNetv1 <cit.>. As a commonly used lightweight network, MobileNetv1 tests the reliability of SNPFI when pruning an already sparse architecture, while AlexNet and ResNet-50 test its generality across different densities of network connections. Baselines. We mainly compare our approach with feature-based pruning: NS (Network Slimming) <cit.> and NPPM <cit.>. NS is a one-shot pruning algorithm that requires no extra training after each pruning step; NPPM requires training a feature selection network. To demonstrate the efficiency of SNPFI, weight-based pruning <cit.> and AutoML methods <cit.> are also included. Pruning. To optimize the SNPFI agent, we use Adam for both the actor π_ϵ and the critic network ρ^β, with learning rates of 10^-4 for the actor and 10^-5 for the critic. For fine-tuning, we use our proposed ID loss weighted at 0.5 against the original loss weighted at 1, and optimize with SGD using a Nesterov momentum of 0.9. For AlexNet and MobileNetv1 on MNIST, we fine-tune for 15 epochs with an initial learning rate of 5e-5; for MobileNetv1 and ResNet-50 on CIFAR-10, 120 epochs with an initial learning rate of 8e-4; and for MobileNetv1 and ResNet-50 on ImageNet, 160 epochs with an initial learning rate of 1e-3.
§.§ Reliability of Filter-wise interaction Effectiveness of the filter utilization strength. Ideally, the number of filters would relate linearly to accuracy, so that a lower sparsity would reliably yield better performance. In reality, accuracy does not increase monotonically with the proportion of retained filters, as shown for the shallow and middle layers of AlexNet (Figure <ref>-left); assigning the same sparsity to all layers is therefore not effective. Since accuracy does increase monotonically with the filter utilization strength U_l(m), a higher U_l(m) ensures better accuracy and is a more effective control than sparsity.
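Because U_l(m) tracks accuracy more faithfully than raw sparsity, it can also serve as a dense per-layer reward rather than a signal that arrives only at the end of an episode. The sketch below is one possible reading of the piecewise reward in Eq.(<ref>); the exact grouping of terms, the function names, and the way the final accuracy check is passed in are assumptions.
[language=Python, caption=Sketch of the dense per-layer reward and the terminal reward]
def layer_reward(U_l, r_l, c_out_l):
    # Dense reward for an intermediate layer l < n: reward a high filter
    # utilization strength achieved with few remaining filters.
    return U_l(r_l) / r_l - 1.0 / c_out_l

def final_reward(intermediate_rewards, U_n, r_n, c_out_n, n_layers, acc, alpha):
    # Terminal reward for the last layer l = n; a large penalty is applied
    # when the pruned model misses the accuracy threshold alpha.
    reward = sum(intermediate_rewards) / n_layers
    reward += (c_out_n * U_n(r_n) - r_n) / (r_n * c_out_n * n_layers)
    if acc < alpha:
        reward -= n_layers ** 2
    return reward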
In this way, our reward function in Eq.(<ref>) effectively alleviates the delayed reward problem. Meanwhile, U_l(m) converges around 0.5 across diverse layers and applications, as shown in Figure <ref>-left. We can therefore estimate the sparsity lower bound s_lb^l in Eq.(<ref>) with a single, uniform filter utilization strength θ for all layers. Since filter-wise interactions differ across layers, the same value of U_l(m) implies different values of s_lb^l; we exploit this property to speed up the optimization of the pruning plan, as shown in Eq.(<ref>). Various interactive modes. As shown in Figure <ref>-right, a model exhibits three interactive modes: low-channel first, multi-channel first, and high-channel first. As the U(m) curve of AlexNet's shallow layer shows, 'low-channel first' describes layers whose U(m) reaches its inflection point with only a few filters; as the curve of AlexNet's middle layer shows, 'multi-channel first' describes layers whose U(m) increases distinctly several times as the number of filters grows; and as the curve of MobileNetv1's deep layer on MNIST shows, 'high-channel first' describes layers whose U(m) soars only once many channels are present. Because interactive modes vary within the same model, scheduling an aggressive pruning sparsity for a single layer can endanger the generalization performance of the whole model. Moreover, the same model may behave differently in different scenarios: as shown in Figure <ref>-right, MobileNetv1 tends to interact in 'multi-channel first' mode on MNIST, but on CIFAR-10 it shifts from 'multi-channel first' to 'low-channel first' and ends in 'high-channel first'. Hence, our redundancy criterion is vital in real applications to avoid pruning useful filters. Effectiveness of the interaction difference. As shown in Figure <ref>-left, our interaction difference loss L_ID is effective for weight recovery after aggressive pruning. Knowledge distillation <cit.> struggles to converge when the ground truth is too intricate for training the pruned model; in contrast, L_ID steadily guides the pruned model toward increasing training accuracy without drastic fluctuations in validation loss. As shown in Figure <ref>-right, the interaction difference also refines the pruned model's interactive mode. Before pruning, ResNet-50 tends to interact in 'low-channel first' mode in shallower layers and in 'multi-channel first' mode in deeper layers; after fine-tuning with L_ID, all layers tend toward 'multi-channel first' mode, meaning the filters cooperate strongly and little redundancy remains.
§.§ Pruning models on different datasets Pruning on CIFAR-10. As shown in Table <ref>, SNPFI reduces the computation of MobileNetv1 and ResNet-50 by more than 60% without a significant accuracy drop. In the deployment scenario, our method runs nearly 2× faster than QPNN while achieving comparable performance on an FPGA device without any dedicated software accelerator <cit.>. When compressing the computation-heavy ResNet-50, SNPFI achieves the highest compression and accuracy among the state-of-the-art methods compared.
Compared with AMC and TAS, SNPFI finds better network architectures by overcoming the delayed reward problem. Compared with Greg-2, our method achieves 3.7% more computation reduction and 2.54% higher accuracy without iterative retraining to recover the weights. Unlike NS and NPPM, which require manual modification of the network to enable channel selection, our method is more broadly applicable and delivers better performance. Pruning on ImageNet. As shown in Table <ref>, SNPFI achieves the best accuracy when pruning ResNet-50 aggressively. Compared with AutoML methods, SNPFI prunes 10% more filters than TAS and reaches 2.39% higher accuracy than AMC. Compared with weight-based pruning methods, SNPFI prunes 10% more channels than Greg-2 and LeGR while exceeding their accuracy by more than 2%. Compared with feature-based pruning methods, SNPFI outperforms SFP and NPPM at comparable pruning ratios. When pruning MobileNetv1, SNPFI achieves compression comparable to AMC and NS. RGB model for single-band images. According to Tables <ref> and <ref>, we show experimentally that architectural sparsity differs between single-band and multi-band images. On the one hand, as shown in Table <ref>, different pruning methods generalize well on single-band images while keeping less than 45% of the original model, which is below the empirical limit of at most 60% <cit.>. On the other hand, the redundancy does lie at the architecture level when an RGB model processes single-band images: as shown in Table <ref>, SNPFI reduces the computation overhead of ResNet-50 by 12% more for gray-scale images than for RGB images. The accuracy gap between RGB and gray-scale inputs is likely caused by the absence of color information in gray-scale images, which is important for object recognition.
§ CONCLUSION In this paper, we revisited redundancy from the perspective of filter interaction. On the one hand, our redundancy metric can effectively predict a model's inherent sparsity and suggest a suitable model for a given application. On the other hand, computing the filter utilization rate underlying our metric can be time-consuming: since the number of filters grows with the depth of a CNN, the number of interaction combinations can be enormous. In the future, a considerable reduction in computation could be achieved by clustering filters and evaluating the filter-wise utilization only among distinct clusters.
http://arxiv.org/abs/2307.03170v1
20230706175210
Focused Transformer: Contrastive Training for Context Scaling
[ "Szymon Tworkowski", "Konrad Staniszewski", "Mikołaj Pacek", "Yuhuai Wu", "Henryk Michalewski", "Piotr Miłoś" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Data processing of Visible Emission Line Coronagraph Onboard ADITYA–L1 C. Kathiravan, R. Ramesh August 1, 2023 ====================================================================== Large language models have an exceptional capability to incorporate new information in a contextual manner. However, the full potential of such an approach is often restrained due to a limitation in the effective context length. One solution to this issue is to endow an attention layer with access to an external memory, which comprises of (key, value) pairs. Yet, as the number of documents increases, the proportion of relevant keys to irrelevant ones decreases, leading the model to focus more on the irrelevant keys. We identify a significant challenge, dubbed the , where keys linked to different semantic values might overlap, making them hard to distinguish. To tackle this problem, we introduce the Focused Transformer (), a technique that employs a training process inspired by contrastive learning. This novel approach enhances the structure of the (key, value) space, enabling an extension of the context length. Our method allows for fine-tuning pre-existing, large-scale models to lengthen their effective context. This is demonstrated by our fine-tuning of 3 B and 7 B OpenLLaMA checkpoints. The resulting models, which we name [We release the https://huggingface.co/syzymon/long_llama_3bcheckpoints and the https://github.com/CStanKonrad/long_llamasource code of -3pt, see also our https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_colab.ipynbcolab.], exhibit advancements in tasks requiring a long context. We further illustrate that our models adeptly manage a 256 k context length for passkey retrieval. § INTRODUCTION Language models have served as a catalyst for substantial advancements in several areas, including natural language processing <cit.>, code generation <cit.>, quantitative reasoning <cit.> and theorem proving <cit.>. One of the central challenges with language models is the effective incorporation of extensive new knowledge. The common practice of fine-tuning the model is not only resource-intensive and complex to manage, but it also does not always clearly indicate how to incorporate new knowledge. For example, fine-tuning on a text such as “Alice in Wonderland” does not equip the model to answer questions about the story itself, but rather it trains the model to predict the next token or complete masked sentences. A promising alternative – integrating the new knowledge within the context – doesn't require training but is considerably restricted by the model's effective context length. For this method to work with large knowledge databases, the model needs to manage a context length extending to millions of tokens. In this research, we highlight one of the primary obstacles in augmenting the context length: as the number of documents increases, the ratio of pertinent to irrelevant tokens diminishes. The standard training procedure frequently results in overlaps between keys connected with irrelevant values and those related to relevant ones, exacerbating the model's task of differentiating between them. We term this challenge the . We propose the Focused Transformer (), an innovative technique developed explicitly to address this issue. The Focused Transformer permits a subset of attention layers to access an external memory of (key, value) pairs through the k-nearest neighbors (kNN) algorithm, akin to the method used in <cit.>. 
This mechanism effectively extends the total context length. The distinctive aspect of the Focused Transformer is its training procedure, drawing from contrastive learning. This method addresses the distraction issue and facilitates larger memory capacities. Specifically, during the training phase, we deliberately expose the memory attention layers to both relevant and irrelevant keys (like negative samples from unrelated documents). This strategy incentives the model to differentiate keys connected with semantically diverse values, thereby enhancing their structure. We introduce and make available s (), fine-tuned OpenLLaMA models with , demonstrating that our method does not require long context during training and can be applied to existing models. Notably, s show significant improvements on tasks necessitating long-context modeling. In particular, they can manage a 256k context length on the passkey retrieval task <cit.>. Our research contributions are the following: 1. We pinpoint the as a significant challenge and a primary obstacle to scaling up the context length in Transformer models, particularly in multi-document scenarios. 2. We develop the Focused Transformer (), designed to alleviate the . includes a unique training objective that improves the (key, value) structure, enabling the use of extensive external memory and k-nearest neighbors lookup to scale the context length. 3. Our method is simple to implement, and it provides the benefit of augmenting existing models with memory without modifying their architecture, facilitated by cost-effective fine-tuning. We demonstrate this on the 3B and 7B OpenLLaMA checkpoints. The resulting models, named s, display enhancements on tasks that benefit from increasing the number of few-shot demonstrations in the extended context, such as TREC <cit.> and WebQS <cit.>. We also prove that for passkey retrieval <cit.>, our models successfully handle a 256k context length. 4. We further scrutinize 's capabilities across various datasets and model sizes. We show that a trained with a total context of 512 tokens can extrapolate to 16 million tokens in a benchmark dictionary lookup task. We also assess on long-context language modeling tasks such as books (PG-19), mathematics (arXiv), code (GitHub), and formal proofs (Isabelle), where it exhibits improvements in perplexity over baselines. § RELATED WORK Long-context transformer architectures A multitude of approaches have been developed to increase the context length of transformers, mostly focusing on alleviating the quadratic complexity of the attention computation. For instance, Transformer-XL <cit.> caches the previous context and enables the linear extension of context with the number of layers. Longformer <cit.> employs an attention mechanism that allows tokens to attend to distant tokens sparsely, reducing the computational complexity. BigBird <cit.>, LongT5 <cit.>, and <cit.> also use sparse attention to handle long sequences. Hourglass <cit.> downsamples activations in intermediate layers to reduce computation and enable longer contexts. COLT5 <cit.> proposes conditional computation to save memory and enable larger contexts. Memorizing Transformer <cit.> uses kNN lookup to pick up the most relevant tokens, which might also be seen as a way to reduce the computational complexity of attention. Our work adheres to this approach and aims to train a key space that handles longer attention context length (e.g., by mitigating the ) and, thus, has better long-context capabilities. 
Fine-tuning LLMs for longer context Prior works such as RETRO <cit.> (RETROfitting) and Memorizing Transformer <cit.> have demonstrated a promising path for fine-tuning existing LMs to add new capabilities without the need to retrain the entire model. More recently, a number of works have explored fine-tuning LLaMA to extend its context length. Landmark attention <cit.> proposes a compression scheme of LLM's context into landmarks, increasing the context length of LLaMA-7B to 32K. Position Interpolation (PI, <cit.> and <cit.>) introduces a modification to the rotary positional encoding scheme that enables fine-tuning for 32K context. In contrast to this work, our method does not rely on positional encodings, following the findings from <cit.>. Removing positional encoding in memory allows us to extrapolate to 256k tokens, although the model was only trained on sequences up to 8K, yielding theoretically unbounded context length. Contrastive learning Contrastive learning aims to learn good representations by comparing positive and negative examples. CLIP <cit.> and SimCLR <cit.> are two popular contrastive learning methods that have achieved state-of-the-art performance in the image domain. During contrastive pre-training, negative examples are kept in the same batch to learn to distinguish them from positive examples. Scaling the batch size in contrastive learning has been demonstrated to enhance the quality of representations, as shown in <cit.>. It has been suggested <cit.> that the embedding space in language modeling suffers from degeneracy, where embeddings are tightly packed in a narrow cone, making it difficult to distinguish between them. TRIME <cit.> proposes a training approach designed for training LMs with memory augmentation, which uses in-batch negatives to improve the quality of representations. The main difference between this and our approach is that we incorporate negatives into the memory attention layer instead of interpolating in the output layer. § : FOCUSED TRANSFORMER Our method, the Focused Transformer (), is a simple plug-and-play extension of transformer models and can be used both to train new models or fine-tune existing, possibly large, models with longer context. To this end, uses memory attention layers and the training procedure. Memory attention layers enable the model to retrieve information from the external memory at inference time, effectively extending the context. The training procedure biases the model to learn (key, value) representations, which are easy to use by a memory attention layer. See Figure <ref> for an overview of the architecture and Appendix <ref> for pseudocode. 10.05 However, it remains challenging to pre-train LMs that can utilize long-range dependencies <cit.>, as most of the documents in standard training corpora are short <cit.>, thus not contributing much signal to improving long-context capabilities of the model. §.§ Memory attention layers Memory attention layers ℒ are endowed with access to an external memory database during inference. Namely, each query in ℓ∈ℒ attends to preceding keys from the local context and the top k most matching keys from memory. The memory keys are ranked by the inner product with the query and retrieved using the kNN search algorithm. We use the exact kNN search implemented in FAISS <cit.>. The memory is populated incrementally with (key, value) pairs processed by ℓ beforehand. 
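As a concrete illustration of this lookup, the sketch below keeps such a memory behind a FAISS exact inner-product index and retrieves the top-k (key, value) pairs for a batch of queries; the wrapper class, array layout, and default k are assumptions chosen to mirror the description above, not the authors' code.
[language=Python, caption=Sketch of the external (key, value) memory with exact kNN lookup]
import numpy as np
import faiss

class KVMemory:
    # Keys are indexed for exact inner-product search; values are stored
    # alongside so that the top-k (key, value) pairs can be returned.
    def __init__(self, dim):
        self.index = faiss.IndexFlatIP(dim)
        self.keys = []
        self.values = []

    def add(self, keys, values):
        # keys, values: [n, dim] arrays produced by the memory layer
        keys = np.ascontiguousarray(keys, dtype=np.float32)
        self.index.add(keys)
        self.keys.append(keys)
        self.values.append(np.asarray(values, dtype=np.float32))

    def topk(self, queries, k=128):
        # queries: [q, dim]; returns keys/values of shape [q, k, dim]
        queries = np.ascontiguousarray(queries, dtype=np.float32)
        scores, idx = self.index.search(queries, k)
        all_keys = np.concatenate(self.keys, axis=0)
        all_values = np.concatenate(self.values, axis=0)
        return all_keys[idx], all_values[idx], scores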
Our memory attention layer design is closely related to <cit.>, we follow most of its design choices, except for the gating, which we replace with a simpler mechanism, which turns out to be more effective in our applications. See details in Section <ref> and Appendix <ref>. We remove positional encodings in memory layers in all our models except s. This allows checkpoints to be a drop-in replacement for LLaMA checkpoints. §.§ training procedure Our training procedure is a novel way of training (or fine-tuning) transformer-based architectures in order to improve the structure of the (key, value) space. The main motivation is to shape this space so that a memory attention layer ℓ∈ℒ can easily focus on relevant information. The key idea, inspired by contrastive learning, is to expose ℓ to (key, value) pairs from the current and previous of the given document (positives) and d-1 contexts from unrelated documents (negatives). Importantly, this is done in a differentiable way. To achieve this, we use a data pipeline in which each element of the batch corresponds to a different document. We embed the previous (C_prev) and the current (C_curr) for each of the processed documents. The overview of our procedure can be found in Figure <ref>. Specifically for each document δ in C_curr we create a set {p^δ_i}_i={1, …, d} consisting of the (key, value) pairs from the previous of δ (positives), along with pairs from d-1 other contexts coming from C_prev (negatives). We also experiment with varying the number of previous contexts and negatives for different batch elements. We do not use external memory during training. This has two important consequences. One, the operation is fully differentiable, and thus, we improve all the (key, value) pairs in p^δ. Two, the procedure is easy to implement; it does not require any additional loss (i.e., uses the standard transformer training objective) and is done on the level of the data loading pipeline and a minor self-attention change. The only new hyperparameter is d, which prescribes the ratio of positive to negative samples. Typically, we find it beneficial to start with small d ≤ 8 (otherwise, the model tends to ignore the previous ) and later switch to bigger values, say d≥64. Appendix <ref> provides more details about the method. §.§ The distraction issue In this section, we conceptualize what we call the distraction issue and hypothesize it is one of the key problems in using large memory databases. Namely, during the standard training, the model is not incentivized to distinguish the keys from different documents. We measure that the attention mass is evenly spread on the related and unrelated documents; see Figure <ref>. More precisely, for a document δ, let w_ij be the softmax weights related to p^δ_ij constructed as described in Section <ref>. We define the positive attention mass as r_d:=∑_j w_1 j/ ∑_i=1^d∑_j w_ij. We observe that r_d ≈ 1/d, which can be interpreted as the fact that the attention is equally distracted by the positive (coming from the current document at i=1) and negative keys. This is an undesirable property since when scaling the memory, the attention becomes increasingly distracted. We show that the mostly alleviates the distraction issue, resulting in a focused attention. More information can be found in Appendix <ref>. In Section <ref>, we also show that the distraction issue has a harmful effect on metrics like perplexity. 
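To make the measurement of r_d concrete, the following sketch computes the positive attention mass from per-query softmax scores grouped by context; the tensor layout (queries × d contexts × keys per context, with the positive context at index 0) and the function name are illustrative assumptions.
[language=Python, caption=Sketch of the positive attention mass r_d]
import jax.numpy as jnp

def positive_attention_mass(attn_weights):
    # attn_weights: softmax scores of shape [queries, d, keys_per_context],
    # where index 0 along the second axis is the context from the same
    # (positive) document and indices 1 .. d-1 come from other documents.
    mass_per_context = attn_weights.sum(axis=-1)              # [queries, d]
    r_per_query = mass_per_context[:, 0] / mass_per_context.sum(axis=-1)
    return r_per_query.mean()                                 # estimate of r_d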
§ -2PT: EXTENDING LLAMA'S CONTEXT LENGTH WITH One of the promises of our work is that can be used to fine-tune already existing large models to extend their context length. In this section, we show that this is indeed the case. We use and OpenLLaMA-7B models trained for 1T tokens as starting points and fine-tune them with . We show that the resulting models, which we call s, are capable of extrapolating beyond their training context length (even up to 256K) and retain the performance on short-context tasks. We release the inference code on GitHub: https://github.com/CStanKonrad/long_llama and the -3B checkpoint on Hugging Face: https://huggingface.co/syzymon/long_llama_3b. We note that our checkpoint is backward compatible, i.e. can be used with any existing LLaMA inference code (both in Hugging Face and other implementations), albeit without long-context capabilities. §.§ Experimental setup =-1 The architecture of the models is the same as OpenLLaMAs, see <cit.> and Appendix <ref>. We use ℒ = {6, 12, 18} (resp. ℒ = {8, 16, 24}) as the memory layers for 3B (resp. 7B) model. We fine-tune the models on 10B (resp. 3B) tokens using , 8k context length and our dataset mixture based on RedPajama <cit.>, see Appendix <ref>. =-1 There are three minor differences from the standard procedure. First, we retain the positional encodings in the local context of the memory layers (this is not necessary for , but makes our checkpoints fully compatible with any existing LLaMA inference codebase). To be more precise, queries and keys from the local context (up to 2K tokens) receive the standard LLaMA rotary positional encoding, whereas memory keys are encoded as if they had position 0 in the local context window. Second, we use dense attention instead of the kNN retrieval, as we found only marginal performance differences, and it is simpler to implement. Third, we modify the training procedure to have more fine-grained control over the number of additional contexts and the ratio of positive to negative samples. All these differences are detailed in Appendix <ref>. §.§ Context length extrapolation on the passkey retrieval task [13]r0.68 *Prompt Format used in the passkey retrieval task, copied from <cit.>. We first measure the effective context length of , namely the distance for which tokens can effectively attend each other. We use passkey retrieval introduced in <cit.>, a synthetic task designed to measure this property. In this task, the model has to retrieve a passkey placed randomly in a long prompt. Results are shown in Figure <ref> - importantly, our 3B model is capable of solving this task much beyond its training context length 8K, achieving 94.5% accuracy for prompts of length 100k and 73% for 256k. §.§ Improving few-shot learning accuracy with longer context =-1 We measure long-context capabilities of these models on two downstream tasks, TREC question classification <cit.> and WebQS question answering <cit.>. We follow the experimental setup of <cit.>. Namely, we few-shot prompt the models with as many demonstration examples as possible up to the given context length. We do not use structured prompting like in <cit.> - instead, we directly provide all demonstrations in context. We observe significant accuracy gains from longer contexts on TREC and some improvements on WebQS (see Table <ref>). The TREC dataset consists of 50 classes. A model is tasked to predict the class label given in-context examples. 
Only 100 examples fit the standard context length (2K); it is not unusual that no class example is present for a given question, making the task impossible. Increasing the context length and the number of examples mitigates this risk. Moreover, having more demonstrations of the given class is also likely to be beneficial. §.§ Comparison to standard long-context fine-tuning In this section, we compare to standard long-context fine-tuning, showing that it already achieves better performance for the context length used for fine-tuning and, importantly, that it can extrapolate beyond this context length, which is not the case for the baseline. For comparisons, we fine-tune two models, one trained with and another one (baseline) with standard fine-tuning (done similarly to <cit.>). In both cases, we use 3B models fine-tuned on 1B tokens using the 4K context length. We evaluate both models on a number of few-shot downstream tasks in the setting described in Section <ref>. In most cases, see Table <ref>, we observe accuracy improvements when more few-shot demonstrations are provided in the extended context (from 2K used by OpenLLaMA to 4K used in our fine-tuning). On TREC, the gains from additional context are significant for both models, while on WebQS, the standard fine-tuning baseline does not provide any improvement from extended context. Notably, the model fine-tuned with enjoys further accuracy gains when evaluated with context lengths beyond its training length (6K and 8K). This shows extrapolation capabilities of , which are not present in the baseline (see e.g. Figure <ref>). §.§ Performance on short-context tasks Fine-tuning for longer contexts could hurt performance on the original context length (2K), as the training data distribution changes. We show that this is not the case for the models by evaluating them using the LM Evaluation Harness library <cit.>. On most tasks, the performance is kept intact; see Appendix <ref> for details, and Table <ref> for the average scores. This also confirms that s could be used as a drop-in replacement of LLaMA models as they are compatible with the original LLaMA inference code. § ANALYSIS OF In this section, we perform extensive experiments on smaller models to analyze and further validate our approach. In particular, we answer the following questions: (1) How does perform when scaling the length at inference time? (2) Can be used to extend the length of an existing, pre-trained model? (3) How effectively can it handle distractions, and how does this capability translate to enhanced performance in long-context language modeling tasks? Moreover, we provide ablation studies of our method and additional analysis. §.§ Experimental setup Architecture For experiments described in this section we use decoder-only Transformer <cit.> models with 12 layers and 184M parameters (unless stated otherwise, e.g., in fine-tuning experiments in Section <ref> we scale to 1.2B). Following <cit.>; we pick ℓ=8 as the memory attention layer. We tune k=128, the number of top keys retrieved by kNN. In most experiments, we start training with a small dimension d≤ 8 and switch to d≥ 64 after some training. For more details about the architecture and hyperparameters, see Appendix <ref> and Appendix <ref>. Evaluation We distinguish two evaluation settings: single-document (abbreviated to single-doc) and multi-document (abbreviated to multi-doc). The single-doc setting is typically used for evaluating models that process long contexts. 
Here, we clear the memory for each new document, ensuring that only the current document is available in the context. The multi-doc setting retains memory across multiple documents without resets. This scenario is akin to a long-context retrieval task where the model must focus on tokens from useful documents without getting distracted by irrelevant ones. Datasets We evaluate on the following long-context language modeling datasets: PG-19 (English books), arXiv (mathematical papers), GitHub (code), and Isabelle (formal proofs). PG-19 <cit.> is a large dataset of English-language books published prior to 1919, sourced from the Project Gutenberg archive. This dataset is a well-established benchmark for evaluating long-context language models <cit.>. The arXiv dataset contains source of papers labeled as "Mathematics" that were obtained by downloading articles through the arXiv Bulk Data Access. The token count per paper in this dataset is comparable to that of a book in PG19. For details on the remaining datasets, refer to Appendix <ref>. §.§ Scaling of the length to 16M We use a synthetic dictionary lookup task to check whether the model trained with our method can utilize a large memory to extend its length. In this task, the model is first provided with k_i: v_i mappings and then asked what value is associated with a particular key. We train models using documents of length 512. The first half of each document defines keys and values associated with them, whereas the second consists of questions about values associated with keys defined in the first half. Details about the task can be found in Appendix <ref>. For this task, we use smaller 37M parameter models. For , we use a of 256, thus the model needs to use the memory attention layer to answer the questions correctly. We start with d=1 and increase to d=128 as soon as the model is able to reach 98% training accuracy. During the inference, we use k=32 (the number of keys retrieved by kNN). Figure <ref> shows that , after 5k steps of training, can effectively utilize memory consisting of 16M tokens achieving accuracy above 92%. As a baseline, we use a standard transformer model trained with the context length of 512. In evaluation, we test different lengths, which quickly leads to very poor results. §.§ fine-tuning and context length extrapolation is a minimal modification to the standard transformer architecture; therefore, it is possible to fine-tune existing models to endow them with a longer context length via the memory attention layer, as we already demonstrated in Section <ref>. In this section, we deepen this analysis (on a smaller model) by studying perplexity improvements on various datasets. As a base model, we use a standard transformer model pre-trained for 100k steps with of 1K tokens using the standard objective and fine-tune with the objective (i.e. ). The data used for both fine-tuning and pre-training is the C4 dataset <cit.> (we omit documents shorter than 2K tokens). The fine-tuning phase takes 10k steps. We use the dimension d=128 and local context of 1K tokens ( is 2K during training). We evaluate models in a zero-shot way on 4 language modeling datasets, which require long context: arXiv, PG-19, GitHub and Isabelle, see Section <ref> and Appendix <ref> for details. In Table <ref>, we observe that enjoys steady perplexity gains up to 64K tokens, although it was fine-tuned only with the 2K total differentiable length. 
We compare the model perplexity to the following baselines: Memorizing Transformer (MT) <cit.> fine-tuned with the local context of 1K and memory size of 16K, and Transformer-XL <cit.> fine-tuned with both local context and window length of 1K. To ensure a fair comparison, all three models are fine-tuned from the same base checkpoint. When evaluated with a context of 2K, our method achieves results on par with the Transformer-XL baseline, which has access to the previous context in all layers, unlike MT and . Compared to the MT baseline, we achieve better scaling when evaluated with 64K context length and significantly better perplexity values. Unlike MT, our method does not require training on long sequences, which is reflected by the lower perplexities of when evaluated in the zero-shot setting. For more details, see Appendix <ref>. §.§ Handling distractions in language modeling tasks In this section, we measure how handling distractions in the multi-document setting helps in language modeling. We pick the PG-19 dataset <cit.> and measure the perplexity of the next token prediction (language modeling task) when varying the size of multi-doc memory (in this case consisting of books). Intuitively, the memory tokens corresponding to the current book might be beneficial (which is also confirmed in <cit.>), while the ones from the other books are unlikely to be useful and thus are distractions. We observe, see Figure <ref>, that higher values of the dimension d lead to better perplexity. This aligns with the observations in Section <ref>, indicating that by mitigating the , we experience benefits in language modeling. Moreover, all versions of are able to utilize memory and achieve much better perplexity than the standard Transformer (no memory). Unsurprisingly, perplexity increases with memory size, but we stress that this happens gracefully. In the standard variant of (bold line), the perplexity increases only by 0.18 when scaling to >500k tokens. Importantly, the perplexity of is close to this of Memorizing Transformer with the single-doc memory, which we treat as a soft lower bound since it is not exposed to distractions from unrelated books. 16.05 comment on multi-doc MT §.§ Context length extrapolation in single-doc The primary motivation behind is to improve the multi-doc setting performance by handling distractions. Interestingly, our method also helps to extrapolate to longer contexts, even when evaluated in the single-doc setting. To study this, we perform FoT fine-tuning (as in Section <ref>) and evaluate the perplexity of the resulting model on the PG-19 dataset with different context lengths in the zero-shot fashion. To deepen the analysis, we introduce an additional parameter w (the number of previous contexts used in cross batch training procedure). We provide results for w=1 (the standard setting for , that corresponds to the total differentiable context being 2·1024) and w=2 (corresponding to the total differentiable context 3·1024). We observe, see Figure <ref>, improvements when context grows, even far beyond the training context length, which reaffirms the hypothesis that helps with extrapolation to longer contexts. Moreover, d=2 is significantly better than d=1. When comparing d=1 and w=2 to d=2 and w=1, we observe that the former is slightly better. This is natural, as the former has longer training context. §.§ Ablations and design choices In this section, we focus on two key properties of training procedure: differentiability and the inclusion of negatives. 
We also discuss the relation to Memorizing Transformer in terms of the training protocol and memory integration. We refer to Appendix <ref> for a detailed technical description of the differences between FoT and Memorizing Transformer.
§.§.§ Impact of differentiable keys and values
Table: Perplexity on PG-19 in the single-doc setting for various local context lengths during training (the same context length is used during training and evaluation).
Context Length | FoT (d=1) | MT
512            | 14.18     | 14.68
1024           | 14.17     | 14.46
2048           | 14.11     | 14.43
We compare to Memorizing Transformer, which uses a non-differentiable memory of keys and values during training. In the multi-doc experiment presented in Figure <ref>, both MT and FoT are trained with a local context of 512. We observe that FoT is significantly better when the context is expanded during inference, which confirms that differentiable keys and values are beneficial. We also check whether differentiable keys and values can improve performance in the single-doc setting. For this, we compare FoT with d=1 to MT with memory consisting of the previous context. Table <ref> confirms that differentiable keys and values also help in this scenario.
§.§.§ Importance of negatives
We reaffirm the importance of negatives in the multi-document setting. In the previous experiments in Figure <ref>, we already observed that increasing the number of negatives (i.e., increasing d) results in more attention mass being dedicated to relevant tokens. In Figure <ref>, we additionally show that the lack of negatives in training (d=1) results in a significant deterioration of model perplexity as the context length grows. This confirms that both using negatives and differentiability are important for FoT to work well.
§.§.§ Relation to Memorizing Transformer
Memorizing Transformer <cit.> is closely related to our method. The two key differences are 1) the training protocol and 2) how the memory is integrated into the model. In this section, we provide additional insights into these differences.
Training protocol In the previous sections, we have discussed the benefits of the FoT training, namely using the contrastive-inspired objective and backpropagating through the previous context. A potential advantage of the MT approach is that it is exposed to the whole memory during training (instead of just the previous context). We performed a proof-of-concept experiment combining the two approaches to explore this further.
(Figure: Single-doc evaluation of FoT fine-tuned for 1k steps on non-differentiable memory. It achieves lower perplexity than MT, which has access to this memory for the whole training, i.e., 500k steps.)
Namely, we trained the model for 499k steps using FoT and fine-tuned it with the MT objective for 1k steps. Interestingly, we observed a significant improvement compared to MT training with the same step budget, see Figure <ref>. We believe there is further room to explore training protocols combining the best of both worlds.
Memory integration FoT uses a simple memory integration approach in which the (key, value) pairs retrieved by kNN lookup are treated the same way as the local context. In contrast, MT uses a gating mechanism, a weighted average of the memory and local values; see details in Appendix <ref>. We evaluated both approaches and found no difference in performance between these two memory integration methods. However, we decided to use our approach because it does not require any architectural changes (and thus makes fine-tuning existing models easy).
For these reasons, we recommend using it. We speculate that the reason why the gating is not needed in is another benefit of the fact that the training backpropagates through the (key, value) pairs from the previous context C_prev in contrast to MT that cannot backpropagate there and needs to rely on when computing gradients for keys and values. Another reason might be the fact that C_prev is embedded for each batch, and thus staleness (see <cit.>) is avoided. § LIMITATIONS AND FUTURE WORK =-1 Our research opens a few avenues for future work. We list them as well as challenges and limitations. Scaling up memory This is by far the most important future research direction. The challenges start from purely engineering, storing more than 16M (key, value) pairs will require a distributed multi-node system. In our experiments, we use the exact kNN search, which is not scalable to large memory. Using approximate kNN search will require a lot of engineering effort, as well as careful evaluation of the impact of the approximation on the model performance. Scaling up We observed that increasing d is beneficial. In our experiments, we used d=64 or d=128, which is the maximum value that fits into the memory of a single TPUv3/TPUv2 machine, see also Appendix <ref>. In future work, we want to further increase d as well as test on devices with bigger memory or utilize multi-node training. Exploring contrastive learning The training is inspired by rather basic contrastive learning (CL) techniques. We show that this improves the key structure so that the is mitigated. We expect that other CL methods could be beneficial, for example, hard negative mining to utilize a larger memory during training (see <cit.>). We leave this for future work. Combining with other methods Developing long-context methods is an active research field, see Section <ref>. We believe that some of these methods could be combined with , resulting in mutually beneficial interactions. We gratefully acknowledge the TPU Research Cloud program, which was instrumental to our research by providing significant computational resources. Parts of the project were realized using the resources of Poznańskie Centrum Superkomputerowo - Sieciowe. We would also like to thank Markus Rabe for reviewing the initial manuscript and Christian Szegedy, Charles Staats, and DeLesley Hutchins for helpful discussions. We are also grateful to Xinyang Geng and Hao Liu for releasing OpenLLaMA checkpoints and the EasyLM library <cit.>, allowing for training these models, which significantly accelerated our research. Piotr Milos was supported by the Polish National Science Centre grant 2019/35/O/ST6/03464. Henryk Michalewski was supported by the Polish National Science Center grant UMO-2018/29/B/ST6/02959. plainnat § §.§ Architecture OpenLLaMA <cit.> is an open-source reproduction of LLaMA <cit.>. It uses a decoder-only architecture with rotary positional embeddings, and a few changes including pre-normalization with RMSNorm <cit.>, and SiLU activation <cit.>. A SentencePiece <cit.> tokenizer with 32k vocabulary size is used. §.§ Extending context length with Positional encodings To achieve backward compatibility with the original LLaMA, we retain positional encodings in the local context. The tokens outside the local context are assigned the same position as the first token in the local context. 
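A minimal sketch of this positional treatment is given below: local keys receive rotary encodings at their true offsets, while memory keys are rotated as if they sat at position 0 of the local window. The interleaved rotation convention and helper names are assumptions; the exact RoPE variant used in the LLaMA codebase may differ.
[language=Python, caption=Sketch of rotary encoding with memory keys pinned to position 0]
import jax.numpy as jnp

def rope_tables(positions, dim, base=10000.0):
    # positions: [n]; returns cos/sin tables of shape [n, dim // 2]
    inv_freq = 1.0 / (base ** (jnp.arange(0, dim, 2) / dim))
    angles = positions[:, None] * inv_freq[None, :]
    return jnp.cos(angles), jnp.sin(angles)

def apply_rope(x, cos, sin):
    # x: [n, dim]; rotate consecutive pairs (x1, x2) by the per-position angle
    x1, x2 = x[:, 0::2], x[:, 1::2]
    rotated = jnp.stack([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
    return rotated.reshape(x.shape)

def encode_local_and_memory_keys(local_keys, memory_keys):
    # Local keys keep their true offsets; memory keys are rotated as if they
    # occupied position 0 of the local window (mirroring the text above).
    dim = local_keys.shape[-1]
    cos_l, sin_l = rope_tables(jnp.arange(local_keys.shape[0]), dim)
    cos_m, sin_m = rope_tables(jnp.zeros(memory_keys.shape[0]), dim)
    return apply_rope(local_keys, cos_l, sin_l), apply_rope(memory_keys, cos_m, sin_m)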
Dense attention to longer context To make the implementation simpler and less dependent on external software, we resign from using kNN lookup and perform attention over the whole memory. We have found only marginal performance differences between those two approaches to memory attention. Crossbatch details For the 3B model, we set ℒ = {6, 12, 18} as the memory layers. We vary the number of additional contexts d∈{0, 2, 3} across elements of the batch by dividing batch entries into four segments of equal size. Elements from the first segment only see local context (d=0). Elements from the second segment see two additional contexts (d=2), one from the same document (positive) and one from a different one (negative). Elements from the third segment see three additional contexts, two positives, and one negative. The last segment consists of elements exposed to three additional contexts coming from the same document. We abbreviate this setup as 1/4(0, 0), 1/4(1, 1), 1/4(2, 1), 1/4(3, 0). For the 7B model, we set ℒ = {8, 16, 24} as the memory layers. Here we divide batch entries into four segments and use the following setup: 1/4(0, 0), 1/4(1, 2), 1/4(2, 5), 1/4(3, 4). Hyperparameters We follow the choices of OpenLLaMA with respect to most of the hyperparameters, including using the same optimizer. During fine-tuning, we use a batch size of 256K tokens and constant learning rate of 2-5, which is lower than the learning rate at the end of OpenLLaMA training (3-5 after 1T tokens), and weight decay of 0.01. §.§ LLaMA fine-tuning dataset We use a mixture based on RedPajama <cit.> and The Stack <cit.> with the following proportions of each subset: All subsets apart from are taken directly from RedPajama. For the subset, we gather Python source code from The Stack and, to obtain long documents for training, concatenate files that are in the same subdirectory in random order, using a similar procedure as for the GitHub dataset in Section <ref>. Additionally, we filter out short documents for some subsets of the original RedPajama, namely shorter than the Min. doc. length column indicates. §.§ Language Model Evaluation Harness To ensure that the performance of s has not degraded in short context scenarios, we evaluate our models on the Language Model Evaluation Harness benchmark <cit.>. Table <ref> compares our results with OpenLLaMA <cit.>. Similarly to the authors of OpenLLaMA, we omit CB and WSC tasks. § ARCHITECTURE §.§ Transformer models We use the transformer architecture introduced in <cit.> with a few standard changes. First, we use only the decoder without the encoder part. Secondly, we perform layer normalization before the input of both the attention and feed-forward modules. Additionally, we use Rotary Position Embedding <cit.>, normalize keys and queries <cit.>, and introduce a learnable temperature parameter for each attention head. The hyperparameters for each model size can be found in Appendix <ref>. For training the models on PG-19, we use the standard T5 tokenizer with 32k vocabulary <cit.>. The larger models in Section <ref> are trained with a custom SentencePiece tokenizer <cit.> with 64k vocabulary size. §.§ Memory attention layer Memory attention layer ℓ is one of the transformer layers, which has access to the external memory M. The memory stores (key, value) pairs. For each query q in ℓ, we retrieve the k most matching entries from M and use them to compute the attention value. 
More precisely, we use the kNN algorithm to pick M_top:={(key_1, value_1), …, (key_k, value_k)}⊂ M such that {⟨ q, key_i⟩}_i=1, …, k are the top k inner products in M. These are merged with the part of the local context before q denoted as C_<q and used to compute the attention value using the standard Transformer formula: v: = ∑_(key, v) ∈ M_top∪ C_<q s(key) · v, where s(key) is the softmax score for key. This softmax is calculated as follows: softmax( [⟨ q, key⟩/τ]_key ∈ M_top∪C_<q), where τ is a temperature parameter. In this approach, we do not distinguish between the local context and the memory. Another way of integrating M_top is via gating. In this approach, we separately compute the attention value v_M for M_top and for the local context v_C (using the standard Transformer formula). Then we use a gating mechanism to combine them: v := v_M· g + v_C· (1-g), g = σ(b_g), where σ is the sigmoid function and b_g is a trainable bias. The gating approach was proposed in <cit.>, see formula <cit.>. We found our approach, i.e. using (<ref>), to be equally effective, see Figure <ref>. At the same time, (<ref>) is simpler and does not require additional parameters. Thus, we use it in our experiments. For kNN lookup, we use the exact kNN search implemented in FAISS <cit.>. The memory attention layer does not use positional encodings. The memory is populated incrementally with (key, value) pairs processed by ℓ beforehand. In the single-doc setting, the memory is erased after each document. §.§ training procedure In we choose a subset ℒ of the attention layers for later augmentation with the memory of (key, value) pairs. Let ℓ an attention layer from ℒ. During the training we expose this layer to a mixture of (key, value) pairs from the current , C_curr, and the previous and d-1 contexts from other documents, C_prev; see also Figure <ref> for an illustration. We achieve this by modifying the input pipeline so that each batch index corresponds to a different document (the batch index occupied by each document is fixed from the moment we load the document till we finish processing it). More specifically, we embed the previous and the current for each document in the batch. Then we use C_prev as a source of the previous for a given document and d-1 contexts from other documents. For each element of the batch, the choices of those d additional contexts are fixed. We disable positional encoding in ℓ, as we envision it to handle global information. To be more precise, for each document δ within the batch and query q from the layer ℓ we create the set p^δ consisting of (key, value) pairs from the previous of document δ along with pairs from d-1 contexts gathered from C_prev. The attention value for q is given by v := ∑_(key ,v)∈ p^δ∪ C_curr^δ,<qs(key)· v, where C_curr^δ,<q consists of (key, value) pairs that preceded q in its local context and s(key) is the softmax score for key. We use softmax with learnable temperature τ: softmax( [⟨ q, key⟩/τ]_key ∈ p^δ∪ C_curr^δ,<q). Note that the only difference between (<ref>) and (<ref>) is the source of the additional (key, value) pairs: p^δ. This, in particular, implies that all the operations with respect to the previous context are differentiable. The number of different documents is equal to b_S (the batch size, i.e. each document has a separate index in the batch). Assume that document δ has index i. We include into p^δ all tokens from C_prev with the batch indices in {i, (i+1) b_s, …, (i+d-1) b_s}. 
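The index bookkeeping in the last paragraph amounts to a wrap-around over batch indices: the element at batch index i attends to the previous contexts of documents i, (i+1) mod b_s, ..., (i+d-1) mod b_s, so exactly one of the d additional contexts is the positive one. A minimal sketch of that selection is shown below; the tensor shapes and function name are assumptions.
[language=Python, caption=Sketch of selecting the d additional contexts for each batch element]
import jax.numpy as jnp

def gather_crossbatch_contexts(prev_keys, prev_values, d):
    # prev_keys, prev_values: [batch, ctx_len, dim] embeddings of each
    # document's previous context. For batch index i the selected contexts
    # are i, (i + 1) mod batch, ..., (i + d - 1) mod batch, so index i is the
    # positive and the remaining d - 1 act as negatives.
    batch = prev_keys.shape[0]
    offsets = (jnp.arange(batch)[:, None] + jnp.arange(d)[None, :]) % batch
    return prev_keys[offsets], prev_values[offsets]   # [batch, d, ctx_len, dim]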
§.§ Qualitative analysis Table <ref> provides a brief qualitative analysis of . It shows that the model can handle distractions and retrieve the parts of the character name from the multi-document memory in the PG-19 task dataset and appropriate definitions from the large dictionary (dictionary lookup task). §.§ Memorizing Transformer The Focused Transformer shares many similarities with the Memorizing Transformer <cit.>. In this section, we summarize the differences between those two models. Training The key difference between these two methods lies in the training procedure. Our method uses , see Section <ref>, which, in a nutshell, is the standard transformer training objective, but we additionally attend to tokens from the previous context window, both from the same and different documents, see Appendix <ref> for details. The Memorizing Transformer trains on tokens retrieved from the same document (it was envisioned for single-doc). This has a few important consequences: * does not use memory during training, while MT does. This may result in faster training; moreover, always uses the most up-to-date values, while MT uses the values from memory, which may be outdated. * is differentiable through all (key, value) pairs, while MT does not differentiate through the retrieved tokens. We argue that this is key for joint training of well-structured key, value, and query embeddings and, consequently, good model performance. * does not require long documents in the training set, while MT does in order to capture long dependencies in memory. This is practically important, as many popular datasets consist of short documents. We speculate that there may be benefits in blending these two approaches. One can, for example, argue that MT is better at providing 'hard' negatives for the model. We provide a proof-of-concept experiment in Section <ref>, otherwise leave this for future work. Inference Both models use a very similar memory attention layer. The difference is how the retrieved (key, value) pairs are integrated. treats the retrieved information in the same way as the local context. MT uses a gating mechanism. Details are provided in Section <ref>. § HYPERPARAMETERS Table <ref> shows hyperparameters used in our experiments. We used context length 512 unless stated otherwise. In Section <ref>, Section <ref>, Section <ref>, we use the total batch size of 32K tokens. In Section <ref> and Section <ref>, the total batch size is 128K tokens. For the experiments described in Section <ref> and Section <ref> we performed the following hyperparameter sweeps: * Learning rate: {1-2, 5-3, 3-3, 1-3}, chosen: 1-2, * Batch size: {8K, 16K, 32K}, chosen: 32K. For the dictionary lookup task (Section <ref>) we checked the following hyperparameters: * Learning rate: {4-2, 2-2, 1-2, 5-3, 3-3}, chosen: 2-2, * Number of dictionary tokens in training step: {16K, 32K}, chosen: 32K. Note that during the training number of document tokens dedicated to the dictionary is the same as the number of tokens dedicated to questions. For most of the other hyperparameter choices, we followed <cit.>, to provide a fair comparison. §.§ Schedule of d In Sections <ref>, <ref> and <ref> for models with d ∈{1, 2, 4, 8} we used constant schedule, and for models with d=64 we trained with d=2 for 450k steps and switched to d=64 for the final 50k steps. In Section <ref> we trained with d=1 until model reached 98% accuracy and then we swtiched to d=128. 
For the 184M model in Section <ref>, we randomly sampled d from {2,128} in each training step, and the 600M and 1.2B models just used d=2. § DICTIONARY LOOKUP TASK We propose a dictionary lookup task to test whether the model trained using our method can utilize a large memory database. Documents in this task are split into two parts. The first part defines keys and their associated values using the records of the format: , k_1, k_2, k_3, k_4, , v_1, v_2, v_3, v_4, where is a special token that denotes the beginning of the defining sequence, The second part consists of queries about the values associated with the previously defined keys. The queries are in the following format: , k_1, k_2, k_3, k_4, , v_1, v_2, v_3, v_4, where is a special token that denotes the beginning of the query. We mask the loss so that for such a question, only v_1, v_2, v_3, v_4 are included. We use a vocabulary of 64 tokens, keys and values are described using 4 tokens. During training, we use documents comprising 512 tokens. The first half of each document consists of definitions, whereas the second one consists of questions. In evaluation, we use longer documents but make only the last 256 tokens correspond to questions. That is, as the context gets bigger (the token axis on Figure <ref>), the number of definitions increases, but the number of queries remains the same. § FINE-TUNING For comparison in Table <ref>, our 184M model is pre-trained for 100k steps with a total batch size of 128 (128K tokens per step, with 1024 ). Then we fine-tune both and baselines for additional 10k steps with the same batch size. When fine-tuning , we randomly sample d from {2, 128} in each training step to prevent the model from overfitting to a large additional context length during training. § CODE In Listing <ref>, we show the s attention code (i.e., the code for the memory attention layers and training), see Section <ref>, Appendix <ref>, Appendix <ref>. We note that the changes to the code are small; they are localized to the memory layer (the other layers follow the standard transformer protocol) and do not require any new trainable parameters. [language=Python,basicstyle=, caption=Memory attention: Let ℓ be a memory attention layer. During the training, we make ℓ attend to the (key, value) pairs from the , previous , and d-1 contexts coming from other documents. During the inference, queries from ℓ attend to the and k nearest neighbors retrieved from memory. For simplicity, we provide the code for one head and assume that the dimension d is equal to the batch size., label=listing:commonattention_memory] def mem_attn_layer(Ql, Kl, Vl, Cl, Km, Vm, Kp, Vp, attn_scf, mode): """Attention mechanism for crossbatch and memory attention Args: Ql, Kl, Vl: tensors of shape [batch, ctx_len, dim] with queries, keys and values from the local context Km, Vm: tensors of shape [batch, ctx_len, k, dim] with k most matching memory keys for each query from Ql along with associated values Kp, Vp: tensors of shape [batch, ctx_len, dim] with keys and values from the previous context attn_scf : a scale factor used before softmax mode: either training or inference Returns: y: a vector with shape [batch, ctx_len, dim] """ # attention to the local context local_attention = jnp.einsum("bqd,bkd ->bqk", Ql, Kl) local_attention *= attn_scf local_attention = apply_causal_mask(local_attention) if mode == "train": # In train mode, we additionally use previous context # and batch-1 contexts from other documents. 
prev_attention = jnp.einsum("bqd,ckd ->bqck", Ql, Kp) shape = prev_attention.shape additional_attention = prev_attention.reshape(shape[:-2] + (-1,)) elif mode == "inference": # In the inference mode, we additionally use nearest neighbors # retrieved from memory. We retrieve k (key, value) # pairs for each query. memory_attention = jnp.einsum("bqd,bqnd -> bqn", Ql, Km) additional_attention = memory_attention else: raise Exception("Only train and inference modes are supported") additional_attention *= attn_scf # We merge the raw attention scores and calculate the softmax combined_attention = jnp.concatenate([local_attention, additional_attention], axis=-1) combined_weights = jax.nn.softmax(combined_attention, axis=-1) ctx_len = Ql.shape[1] local_weights = combined_weights[..., :ctx_len] additional_weights = combined_weights[..., ctx_len:] y = jnp.einsum("bqk,bkd -> bqd", local_weights, Vl) if mode == "train": prev_weights = additional_weights shape = prev_weights.shape prev_weights = prev_weights.reshape(shape[:-1] + (-1, ctx_len)) y += jnp.einsum("bqck,ckd -> bqd", prev_weights, Vp) else: memory_weights = additional_weights y += jnp.einsum("bqn,bqnd -> bqd", memory_weights, Vm) return y § DATASETS Section <ref> outlines essential details concerning the PG-19 and arXiv datasets employed in this study. Now, we will present details about the remaining datasets: GitHub =-1 We obtained a large corpus of permissively licensed Github repositories using BigQuery. By filtering for specific file extensions (C, C++, Java, Python, Go, and TypeScript), we captured individual source code files that are often short but have numerous dependencies and cross-references within the repository. To preserve the structure while shuffling the files and subdirectories in a random order, we concatenated all the files within each repository, treating subdirectories as a unit, similarly to <cit.>. Isabelle =-1 The Isabelle corpus comprises formal mathematical proofs in the form of theories written in a formal language. We combined theories from The Archive of Formal Proofs (from October 2021) [https://www.isa-afp.org] and the Isabelle standard library to create a corpus of theories licensed as open source. Each theory focuses on topics like foundational logic, advanced analysis, algebra, or cryptography and consists of multiple files containing proofs. Similar to the GitHub corpus, the files within each theory are concatenated into a single document. However, unlike the Github corpus, we arrange the files based on their import dependencies, ensuring that later files can utilize sub-theorems proven in earlier files. § HARDWARE AND TECHNICAL DETAILS We used TPU virtual machines from the Google Cloud Platform (GCP). Each TPU virtual machine has 8 TPUv2 / TPUv3 cores totaling 64GB / 128GB of device memory, 96 CPU cores, and over 300GB of RAM. In larger-scale experiments (Section <ref>) we used machines with 32 TPUv3 cores. For training the checkpoints, a TPUv3-128 pod provided by the TPU Research Cloud was used, which we gratefully acknowledge. § RANDOMNESS To evaluate the significance of our results, we conducted multiple runs for selected experiments in our study. In Figure <ref>, we calculate error bars, showing the minimum and maximum value over 10 runs of the same experiment. For the arXiv baseline experiment in Appendix <ref>, we performed three runs with different random seeds and calculated their standard deviation, which is equal to 0.002 perplexity. 
However, due to resource constraints, we were unable to conduct multiple runs for all experiments. Our preliminary findings indicate that the observed variance was minimal compared to the impact observed from other factors under investigation. For the calculation of test perplexities, we used 1M tokens. § ADDITIONAL EXPERIMENTAL RESULTS This section presents additional empirical results, providing a detailed comparison of with the Memorizing Transformer <cit.> baseline. Both models are trained for the same number of 500k steps with of 2K and evaluated on the arXiv dataset in the single-document setup, following <cit.>. In particular, we study how models trained with a given context length perform when evaluated with different context lengths. These experiments differ from those in Section <ref>, as the models were both trained and evaluated on the same dataset (arXiv), unlike the C4 training and zero-shot evaluation done in Section <ref>. The MT baseline in Table <ref> with a memory length of 2K struggles to utilize additional context beyond 32K tokens effectively. The model trained with 8K memory performs significantly better when evaluated with longer contexts, showing further perplexity gains at 64K tokens. We observe diminishing returns when scaling up the training memory length to 16K tokens and beyond. Using the same setup, we study the performance of while varying d and w configurations, similarly to Section <ref>, see Table <ref>. Parameter values w=1 and w=2 correspond to additional context lengths of 2K and 4K, respectively. In an apples-to-apples comparison to MT with 2K additional context length, outperforms the MT baseline, which shows the importance of trainable keys and values (see also Section <ref>). Moreover, we confirm the findings from Section <ref> that d=2 works significantly better than d=1 in all settings. Our best configuration achieves 2.148 perplexity with 4K additional context during training, compared to 2.164 of MT with 16K additional context.
http://arxiv.org/abs/2307.00477v1
20230702051543
Query-Efficient Decision-based Black-Box Patch Attack
[ "Zhaoyu Chen", "Bo Li", "Shuang Wu", "Shouhong Ding", "Wenqiang Zhang" ]
cs.CV
[ "cs.CV", "cs.CR" ]
Query-Efficient Decision-based Black-Box Patch Attack Zhaoyu Chen, Bo Li, Shuang Wu, Shouhong Ding, Wenqiang Zhang Corresponding authors are Bo Li and Wenqiang Zhang. Zhaoyu Chen and Wenqiang Zhang are with Academy for Engineering and Technology, Fudan University, Shanghai, China, and also with Yiwu Research Institute of Fudan University, Yiwu, China. The emails of these authors are: zhaoyuchen20@fudan.edu.cn, njumagiclibo@gmail.com, wqzhang@fudan.edu.cn. ======================================================================================================================== Deep neural networks (DNNs) have been shown to be highly vulnerable to imperceptible adversarial perturbations. As a complementary type of adversary, patch attacks that introduce perceptible perturbations to the images have attracted the interest of researchers. Existing patch attacks rely on the architecture of the model or the probabilities of predictions and perform poorly in the decision-based setting, in which an adversary can still construct a perturbation with the minimal information exposed – the top-1 predicted label. In this work, we first explore the decision-based patch attack. To enhance the attack efficiency, we model the patches using paired key-points and use targeted images as the initialization of patches, and parameter optimizations are all performed on the integer domain. Then, we propose a differential evolutionary algorithm named DevoPatch for query-efficient decision-based patch attacks. Experiments demonstrate that DevoPatch outperforms the state-of-the-art black-box patch attacks in terms of patch area and attack success rate within a given query budget on image classification and face verification. Additionally, we conduct the vulnerability evaluation of ViT and MLP on image classification in the decision-based patch attack setting for the first time. Using DevoPatch, we can evaluate the robustness of models to black-box patch attacks. We believe this method could inspire the design and deployment of robust vision models based on various DNN architectures in the future. Adversarial example, patch attack, black-box attack, differential evolutionary algorithm. § INTRODUCTION Nowadays, deep neural networks (DNNs) have been employed as the fundamental techniques in the advancement of artificial intelligence in computer vision. Despite the success of DNNs, recent studies have identified that DNNs are vulnerable to adversarial examples <cit.>. By introducing maliciously crafted perturbations to the input images, these adversarial examples are able to evade and mislead DNNs. Consequently, studying the adversarial vulnerability of DNNs has emerged as an important research area, providing the opportunity to better understand and improve computer vision models. Classical works <cit.> focus on studying the adversarial vulnerability of DNNs against virtually imperceptible perturbations that are constrained to have a small norm but are typically applied to the whole input image.
Recently, as a complementary type of adversary, patch attacks that introduce perceptible (large norm) but localized perturbations to the images have attracted the interest of researchers. Pioneering works<cit.> perform patch attacks in the white-box setting: with full access to the model’s parameters and architectures, they can directly use gradient-based optimization to find successful adversarial examples. Due to the fact that most real-world applications do not publicly release the actual models they use, this attack scenario usually is less practical in real-world systems, e.g., attacking image analysis APIs <cit.> like Google Cloud Vision or self-driving cars<cit.>. As a more practical scenario in real-world systems, black-box patch attacks have attracted a lot of attention in recent years. There are transfer-based attacks<cit.> and query-based attacks<cit.> for black-box patch attacks, depending on whether the attacker needs to query the victim's machine learning model. Despite the fact that transfer-based attacks do not require query access to the model, it assumes the attacker has access to a large training set to create a carefully-designed substitute model<cit.>, and there is no guarantee of success<cit.>. Query-based attacks assume that attackers can only query the target network and obtain its outputs (score or label) for a given input. According to the output information of the queried models, query-based attacks can be classified into two sub-categories: score-based setting which has access to the class probabilities of the model, and decision-based setting which solely relies on the top-1 predicted label. Significantly, decision-based settings present more practical threats to deployed systems and applications because an adversary is still capable of exploiting the very minimal information exposed – the top-1 predicted label – for constructing an adversarial perturbation. Recently, some score-based patch attacks<cit.> have been proposed. However, when these methods are applied to the decision-based setting, they hardly achieve high attack success rate and query efficiency because the information provided by labels is limited. In this paper, we first explore the decision-based patch attack to better measure the practical threat of patch attacks. To successfully conduct decision-based black-box patch attacks, there are still non-negligible challenges to overcome: Complex Solution Space. Performing patch attacks is extremely challenging since it involves searching for all possible positions, shapes, and perturbations of adversarial patches, which implies an enormous solution space. Moreover, unlike white-box scenarios or the score-based black-box setting, in the decision-based black-box setting, there is almost no valid information to guide the search direction. Query efficiency. In the query-based setting, achieving high query efficiency with a high attack success rate is integral to adversarial objectives. Because: i) adversaries are able to carry out attacks at scale; ii) the cost of mounting the attack is reduced, and iii) adversaries are capable of bypassing defense systems that can recognize malicious activities as a fraud based on a pragmatically large number of successive queries with analogous inputs. Last but not least, the advantage of a smaller query budget is that it correlates to a lower cost of evaluation and research, which is useful for determining the robustness of the model to adversarial attacks. 
To address the aforementioned issues, we propose a differential evolutionary algorithm named DevoPatch for query-efficient adversarial patch attacks in the decision-based black-box setting. Differential evolutionary algorithm is a black-box optimization algorithm that does not need to know the details of the model and is suitable for parameter search when information is limited. Given the attack objective function, DevoPatch is able to optimize it in a black-box manner through queries only. To simplify the solution space, we restrict parameter optimization to the integer domain and carefully design a differential evolution algorithm based on the integer domain. Further, we model the patches using paired key-points and use targeted images as the initialization of patches. Consequently, the query efficiency of DevoPatch is significantly improved. In addition, it is worth noting that some novel DNN architectures have recently emerged including the Vision Transformer (ViT) model <cit.> and Multi-layer Perceptron (MLP) based model <cit.>. They demonstrate compelling performance, sometimes even outperforming classical convolutional architectures. Although a few studies have explored the vulnerability of ViT against imperceptible adversarial perturbations<cit.>, the adversarial robustness of ViT and MLP under patch attacks has not been considered. This raises a critical security concern for the reliable deployment of real-world applications based on ViT and MLP models. Therefore, we extend our study scope and apply DevoPatch to ViT and MLP to better understand the vulnerability of a wide variety of DNNs under adversarial patch attacks. We illustrate an example patch attack with DevoPatch against ILSVRC2012 in Fig. <ref> on image classification. Extensive experiments on image classification and face verification demonstrate that DevoPatch is a query-efficient decision-based black-box patch attack. We summarize our contributions and results below: * We first explore the decision-based patch attack, which can still construct a perturbation with the minimal information exposed – the top-1 predicted label. * To simplify the solution space, we model the patches using paired key-points and use targeted images as the initialization of patches, and parameter optimizations are all performed on the integer domain. * We propose a novel patch attack – DevoPatch – an evolutionary algorithm capable of exploiting access to solely the top-1 predicted label from a model to search for an adversarial example, whilst minimizing the image area that needs to be corrupted for a successful attack. * Comprehensive experiments on image classification and face verification show that DevoPatch achieves considerably higher success rates compared to related work, while being more efficient in terms of the number of queries. * We conduct the vulnerability evaluation of ViT and MLP on image classification in the decision-based black-box patch attack setting for the first time. We compare results with ResNet to assess the relative robustness of the ViT and MLP models. The remainder of the paper is organized as follows. Section <ref> briefly reviews the literature related to adversarial examples and adversarial patches, white-box patch attacks, black-box patch attacks, and adversarial attacks with evolutionary algorithms. Section <ref> first introduces the definition of decision-based black-box patch attacks and then details the proposed differential evolutionary patch attack. 
Section <ref> shows the experimental results to demonstrate the effectiveness of the proposed differential evolutionary patch attack. Firstly, we choose appropriate hyperparameters for DevoPatch. Afterward, we evaluate the adversarial robustness of several image classification and face recognition models. In Section <ref>, we further analyze the effects of adversarial patches on different DNN architectures. We summarize the paper in Section <ref>. § RELATED WORK In this section, we briefly review the literature related to adversarial examples and adversarial patches, white-box patch attacks, and black-box patch attacks. In the end, we also discuss adversarial attacks based on evolutionary algorithms. §.§ Adversarial Example and Adversarial Patch The seminal works of Szegedy et al.<cit.> inspire an interest in studying adversarial vulnerability against imperceptible perturbations as a mean of understanding and improving deep neural networks. Since then, a majority of prior works<cit.> have focused on attacking with small and imperceptible perturbations to the input, which can be regarded as the imperceptible adversarial attack. Commonly these imperceptible perturbations are applied to the whole input image and are constrained by p-distances (p ∈{0,2,∞}) similarity measurement. Recently, as a complementary type of adversary, patch attacks that introduce perceptible (large norm) but localized perturbations to the images have emerged and attracted the interest of researchers. Patch attacks (or adversarial patches) can be regarded as the perceptible adversarial attack. The main aim of patch attacks is to minimize the perturbation within a continuous image region that needs to be corrupted to mislead a target machine learning model. Only a handful of works have investigated patch attacks and these works can be broadly categorized based on various degrees of adversarial access to a model. In this paper, we focus on black-box patch attacks because they are more practical and more threatening. §.§ White-box Patch Attack In the white-box setting, an adversary has full knowledge and access to the model, including gradients and parameters. GAP<cit.> first creates universal, robust, targeted adversarial image patches in the real world and causes a classifier to output any target class in the white-box setting. Then LaVAN<cit.> concentrates on investigating the blind spots of state-of-the-art image classifiers in the digital domain, which crafts adversarial patches using an optimization-based approach with a modified loss function. Then<cit.> and<cit.> introduce position search to improve the attack performance of adversarial patches. Due to the enormous solution space and the trade-off between computational cost and attack performance, adversarial patches are usually created with a fixed shape or location even under the white-box setting. Since then, adversarial patches have been used to attack self-driving cars<cit.>, object detection<cit.> and face cognition<cit.>. However, white-box patch attacks are less practical, since most real-world applications do not release their models and cannot directly solve adversarial patches via gradients. In this paper, we focus on black-box patch attacks because they are more threatening to real-world systems. §.§ Black-box Patch Attack Black-box patch attacks can be either transfer-based<cit.> or query-based<cit.>, depending on whether the attacker needs to query the victim's machine learning model. 
However, transfer-based attacks require access to large amounts of training data and require careful construction of surrogate models. It does not guarantee that the attack will be successful. In contrast, query-based attacks only require access to the output of the victimized model and have a higher attack success rate as the number of queries increases, which is more practical and more threatening. In the query-based attack, an adversary can access all or only one predicted score (score-based settings) or call out just the predicted labels (decision-based settings) for a given input. We need a query-efficient algorithm that helps reduce the cost of evaluating the robustness of DNNs since the attacker has to pay for each query. Query-based patch attacks are first introduced in Hastings Patch Attack (HPA)<cit.>. They do not optimize the pattern of the patches and instead use the monochrome patches. The position and shape of the rectangular patches are randomly searched using Metropolis-Hastings sampling. To improve the query efficiency of HPA, <cit.> first uses reinforcement learning to search the position and size of monochrome rectangular patches, called Monochrome Patch Attack (MPA). But monochrome patches usually lead to a very low attack success rate, especially for the targeted attack. They then use ImageNet training data to build a class-specific texture dictionary via style transfer<cit.> to craft targeted patch attacks, termed Texture-based Patch Attack (TPA). However, in practical scenarios, it is impossible to obtain the whole training set data of the target black-box models. Adv-watermark (AdvW)<cit.> utilizes the basin hopping evolution algorithm to find a suitable position and transparency for the watermark to implement the patch attack. Patch-Rs<cit.> proposes a random search framework and then designs an initialization scheme and a sampling distribution specific for adversarial patches. This outperforms previous works in both query efficiency and attack success rates. Unfortunately, all of the aforementioned works are only designed for the score-based black-box setting. They perform poorly on the more challenging and restrictive decision-based attack (Experiment <ref>). Further, decision-based settings present more practical threats to deployed systems. To the best of our knowledge, we make the first investigation of the robustness of DNNs against patch attacks using a decision-based black-box setting. §.§ Adversarial Attacks with Evolutionary Algorithms Notably, there are some related works<cit.> which also leverage evolutionary algorithms to perform imperceptible attacks. These methods are all under the framework of evolutionary algorithms, with operations such as crossover and mutation. However, these methods cannot achieve query-efficient decision-based patch attacks due to the limitations of the modeling of adversarial examples. One-pixel attack<cit.> generates one-pixel adversarial perturbations based on differential evolution. It is not effective when the number of perturbations is large because the number of queries to the model grows rapidly with respect to the number of perturbed pixels in patches. Evo-Attack<cit.> utilizes the covariance matrix adaptation evolution strategy to search the imperceptible perturbations but it cannot search for the location and shape of patches. SparseEvo<cit.> models sparse perturbations as binary codes and solves them using genetic algorithms. 
However, this binary representation cannot define contiguous regions, thus making it impossible to model patches and perform patch attacks. Our DevoPatch is based on the differential evolution algorithm, carefully designed in the integer domain to achieve query-efficient decision-based patch attacks. Consequently, we first construct a dimensionality-reduced solution space in which possible solutions (individuals of a population) are paired key-points in the integer domain. This is quite different from most current evolutionary attack methods. § METHODOLOGY In this section, we first introduce the definition of decision-based black-box patch attacks and then detail the proposed differential evolutionary patch attack. §.§ Problem Definition Patch attacks are one of the most threatening types of adversarial examples: an adversary can arbitrarily modify the pixels of a continuous region, and the patch placed in this region leads machine learning models to make incorrect predictions. Here, we first give the formulation of the patch attack on image classification. Face verification can be viewed as a binary classification task, similar to image classification. For a classifier f:x→y, we are given a source image x∈ℝ^C× H × W and its corresponding ground truth label y from the label set 𝕐 = {1, 2, ... ,K}, where K denotes the number of classes. C, W, and H denote the number of channels, height, and width of an image, respectively. In the setting of patch attacks, the adversarial patch is composed of adversarial perturbations δ∈ℝ^C× H × W and a location mask M∈{0,1}^H × W. Given a source image x, we formulate the adversarial example x̃ as the combination of the source image x, an adversarial patch δ and a location mask M: x̃ = (I-M) ⊙ x + M ⊙δ, where ⊙ represents the element-wise Hadamard product and I represents an all-one matrix with the same dimension as M. In the decision-based black-box setting, our access is limited to the model's output label. For the targeted attack, the adversary perturbs the source image x so that the obtained adversarial example x̃∈ℝ^C× H × W is misclassified as the desired class label ỹ∈𝕐. We refer to the desired class ỹ of the input x as the target class and its ground-truth class y as the source class. For the untargeted attack, the adversary perturbs the source image x to lead the output label of the classifier to any class label except the ground truth label y, i.e. ỹ∈𝕐 where ỹ≠y. In general, the patch attack (including targeted and untargeted settings) to find the best adversarial example x̃^* can be expressed as a constrained optimization problem: x̃^* = argmin_M,δ ||x̃-x||_0 s.t. f(x̃^*)=ỹ, where ||·||_0 denotes the number of perturbed pixels. To keep the patch from being perceived, Eq. <ref> aims to determine the perturbation and position under the constraint of few perturbed pixels, which leads to a complex solution space and hampers the search. In addition, given the constraint and the fact that f is not differentiable in the decision-based setting, the solution to the optimization problem is not trivial. §.§ Simplification on Solution Space The enormous solution space of patch attacks is caused by all possible positions, shapes, and perturbations of the patch. A naive parametric search method can be directly used to solve this problem. Specifically, the parameter set V is defined as a series of candidate solutions v, represented by the coordinates and RGB values of each pixel. However, this naive application results in very inefficient queries<cit.>.
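To make the operations in the problem definition concrete before moving to the simplified search space, here is a minimal sketch of the composition x̃ = (I-M) ⊙ x + M ⊙ δ and of the l_0 objective; array shapes and the black-box query helper are illustrative assumptions, not part of DevoPatch.

import numpy as np

def compose(x, M, delta):
    # x, delta: (C, H, W) arrays; M: (H, W) mask in {0, 1}, broadcast over channels.
    return (1.0 - M)[None] * x + M[None] * delta

def patch_area(x_adv, x):
    # ||x_adv - x||_0 counted in pixels: a pixel is perturbed if any channel differs.
    return int(np.any(x_adv != x, axis=0).sum())

# In the decision-based setting the constraint can only be tested through the top-1 label,
# e.g. success = (query_top1_label(x_adv) == target_class)   # hypothetical black-box call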
Furthermore, in the decision-based black-box setting, there is little effective information to guide the search direction, thus further reducing the query efficiency. To improve the query efficiency of decision-based attacks, we need to reduce the complexity of the solution space. In the decision-based setting, the black-box model provides hardly any effective information, so the only information we can fully utilize is the target class ỹ of the targeted attack. To facilitate a parametric search method, instead of searching for parameters defining the RGB values of each perturbed pixel, we consider that the perturbation δ can be replaced by a targeted image x_t. Targeted images are only required to be classified as the target class by the black-box model and do not need to be i.i.d. with the training set (analyzed in Section <ref>). Simultaneously, it is redundant to represent a patch with a coordinate set of perturbed pixels. For a patch, we only need to know a pair of points to formulate the location mask M of the patch. Therefore, we vectorize each candidate solution in the parameter set V as a 4-dimensional vector v={(i_1,j_1), (i_2,j_2)} (i_1 < i_2, j_1 < j_2), where i∈ℕ and j∈ℕ denote the coordinates of the paired key-points. Here, we employ a simple mapping function T(·) to re-formulate the location mask M=T(v) and the adversarial example x̃: T(v)= 1, if 0 ≤ i_1 < i_2 < H and 0 ≤ j_1 < j_2 < W; 0, otherwise, x̃ = (I-M) ⊙ x + M ⊙ x_t. In general, we transform the original complex solution space into a coordinate programming problem for paired key-points on the integer domain. Interestingly, this strategy of simplification has been found to be extremely effective in a decision-based patch attack. Next, we need to design how to select and update suitable candidate solutions. §.§ Differential Evolutionary Patch Attack In this section, we propose DevoPatch, an efficient parametric search method based on the differential evolutionary algorithm that seeks a solution by iteratively improving upon potential solutions in search of a desirable one. In differential evolution, each individual of the population is a candidate solution and the population set is the parameter set. We carefully design the differential evolution algorithm on the integer domain, including population initialization, mutation, and crossover. Therefore, DevoPatch improves on standard differential evolution by simplifying the solution space to the integer domain, and the population can improve over time to produce a satisfactory result. Moreover, our search method employs the differential evolutionary algorithm without requiring any background knowledge of the target model, such as its architecture or parameters, to construct the fitness function. DevoPatch can be used to analyze and solve the non-trivial optimization problem in Eq. <ref> in a black-box setting and can provide a possible remedy for the complex solution space. The pipeline of DevoPatch is shown in Fig. <ref>. First, we give the definition of the fitness calculation. Fitness is used to evaluate the quality of candidate solutions, mainly used in population initialization and population selection. In general, the fitness function g(·) should reflect the optimization objective. In the score-based setting, since logits can be obtained, cross-entropy loss or margin loss<cit.> can be used to measure the quality of candidate solutions. In the decision-based setting, since only the predicted labels can be obtained, it is difficult for us to use the change of loss to measure the quality of new candidate solutions.
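A sketch of this parameterization follows: the 4-dimensional integer vector v is mapped to a rectangular location mask (the role of T(·)), and the targeted image x_t supplies the patch content. The helper names and the inclusive treatment of the key-point bounds are our assumptions rather than the exact implementation.

import numpy as np

def keypoints_to_mask(v, H, W):
    # v = (i1, j1, i2, j2) with 0 <= i1 < i2 < H and 0 <= j1 < j2 < W.
    i1, j1, i2, j2 = v
    if not (0 <= i1 < i2 < H and 0 <= j1 < j2 < W):
        return None                                   # invalid candidate (rejected)
    M = np.zeros((H, W), dtype=np.float32)
    M[i1:i2 + 1, j1:j2 + 1] = 1.0                     # rectangle spanned by the paired key-points
    return M

def candidate_to_example(v, x, x_t):
    # x, x_t: (C, H, W); returns the adversarial example (I - M) ⊙ x + M ⊙ x_t.
    M = keypoints_to_mask(v, x.shape[1], x.shape[2])
    return (1.0 - M)[None] * x + M[None] * x_t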
The loss only changes when the predicted label changes, which causes many potential candidates to be discarded. Therefore, a fitness function is required to approximate the calculation of the loss function in the decision-based setting. Since our populations describe the paired key-points of the patch and the method uses targeted images as initialization, we consider that an adversarial example with a smaller patch area has better quality. Therefore, we formulate our fitness function as: g(x̃)= ||x̃-x||_0, if f(x̃)=ỹ; ∞, otherwise. Although the fitness function uses the l_0 norm, other distance metrics are also feasible (further analyzed in Section <ref>). Then, we initialize a population set of p different candidate solutions, named initialized populations v^0. In the population initialization, it is trivial to apply targeted images directly as initialization. The diversity among populations is conducive to improving query efficiency, so we introduce an initialization rate μ to control the diversity of population initialization. Specifically, we first calculate the height margin Δ h = ⌊ H·μ⌋ and the width margin Δ w = ⌊ W·μ⌋ (Δ h ∈ℕ, Δ w ∈ℕ) as candidate domains. Then, every candidate solution is generated by a single random sample under the following condition: i_1 ∈ [0, Δ h), i_2 ∈ [H-Δ h, H), j_1 ∈ [0, Δ w), j_2 ∈ [W-Δ w, W). Finally, if the fitness score of the initialized population v^0 computed with Eq. <ref> is not ∞, the initialized population v^0 is added to the population set V. Fitness scores are saved in a fitness score vector G for each candidate solution. The population initialization is detailed in Algorithm <ref>. Mutation is an important step in generating superior offspring (new candidate populations). Although the initialized population from the population initialization stage has a certain diversity, the overall difference is not significant, which would cause the next generation to be very similar and reduce query efficiency. In order to ensure the diversity of offspring, we introduce a mutation rate γ to generate better offspring. Compared with the traditional differential evolutionary algorithm<cit.>, we need to ensure that the calculation is closed and that the solution set of offspring remains in the integer domain, so the mutation rate γ must be an integer, i.e. γ=1. Specifically, when DevoPatch converges, the coordinate difference between candidate solutions is often only 1. If γ were a real number (e.g. γ=0.5), the update could become 0 after rounding to coordinates, resulting in no new offspring and reduced diversity. In practice, we first select the best candidate v^k_best and two randomly selected candidate solutions v^j, v^q. Then, v^r is obtained as v^k_best plus γ times the difference between v^j and v^q. Formally, the mutation can be formulated as: v^r = v^k_best + γ· (v^j - v^q). In order to increase the diversity of the generated population v^r, crossover is introduced. The diversity of a population enables the exploration of the solution space for better individuals. Consequently, the crossover operation is a crucial part of our method for further promoting population diversity, and every offspring produced by mutation can undergo crossover. Unlike the traditional differential evolution algorithm <cit.>, because the paired key-points that model the patches have an order relationship, it is impossible to directly perform element-wise crossover according to a probability.
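Before the crossover details below, a sketch of the fitness evaluation and of the integer-domain mutation described above; the composition and query helpers are passed in as callables and, like the names here, are assumptions rather than the exact DevoPatch implementation.

import numpy as np

def fitness(v, x, x_t, target_class, query_top1, compose, area):
    # g(x_tilde) = patch area if the black-box model outputs the target class, otherwise infinity.
    x_tilde = compose(v, x, x_t)                 # e.g. candidate_to_example from the sketch above
    if query_top1(x_tilde) == target_class:      # decision-based: only the top-1 label is observable
        return area(x_tilde, x)                  # e.g. number of perturbed pixels
    return np.inf

def mutate(v_best, v_j, v_q, gamma=1):
    # Integer-domain mutation: v_r = v_best + gamma * (v_j - v_q); gamma stays an integer.
    return v_best + gamma * (v_j - v_q)          # v_* are length-4 integer numpy arrays

# After mutation and crossover, the coordinates are clipped back to 0 <= i1 < i2 < H and
# 0 <= j1 < j2 < W, and an offspring replaces the worst population member whenever its
# fitness score is smaller.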
In practice, for any element in a candidate solution v^r after mutation, we randomly add a noise κ = {-1, 0, 1}^4 to it respectively, to help jump out of the local optimal solution. Therefore, crossover can be expressed as: v^m = v^r + {-1, 0, 1}^4. The evolution algorithm assumes that superior individuals are selected from a population and inferior individuals are eliminated. According to this assumption, individuals with better fitness scores are more likely to survive over time. Specifically, if an offspring has a smaller fitness score, it will also be better on Eq. <ref> and be a better adversarial example. Hence, if a new offspring has a smaller fitness score than the worst offspring in the population, the worst offspring will be discarded and the new offspring will be selected in its place. Algorithm <ref> summarizes the pipeline of DevoPatch. First, we obtain the initial population set and fitness scores through population initialization. Then, new offspring is generated by mutation and crossover to enhance diversity during each query. Next, the fitness score is calculated for the new population. Finally, the new population will be selected and updated according to the fitness score. Note that each time a new population is generated, we need perform boundary processing on the new population to ensure that 0 ≤ i_1 < i_2 < H, 0 ≤ j_1 < j_2 < W. § EXPERIMENTS In this section, we show the experimental results to demonstrate the effectiveness of the proposed differential evolutionary patch attack. First, we choose appropriate hyperparameters for DevoPatch. Then we evaluate the adversarial robustness of several image classification models and face recognition models. We further conduct ablation studies and analyze the factors for the effectiveness of DevoPatch. §.§ Experimental Settings §.§.§ Datasets To evaluate the effectiveness of our method, we conduct experiments on image classification and face verification. For image classification, we follow<cit.> and conduct experiments on a challenging dataset, ILSVRC2012<cit.>, which has 1,000 object categories in total. For the evaluation sets, we randomly draw 1,000 correctly classified images from ILSVRC2012 validation set. Target images are also randomly chosen correctly classified images corresponding to target classes from the ILSVRC2012 validation set. These selected images are evenly distributed among the 1,000 classes. For face verification, we select 400 pairs in dodging the attack, where each pair belongs to the same identity, and another 400 pairs in impersonation attack, where the images from the same pair are from different identities. The images are selected from LFW<cit.> and CelebA<cit.>. Target images are also randomly chosen correctly recognized images corresponding to identities from LFW and CelebA. All the selected images can be correctly recognized by the face recognition models. §.§.§ Models To evaluate the effectiveness of DevoPatch on different network architectures of image classification models, we select three different architecture models as threat models. For convolution-based models, we use a pre-trained ResNet-152 (ResNet)<cit.> for ILSVRC2012. For attention-based models, we select a pre-trained ViT-B-16/224 model (ViT-B)<cit.>. For multi-layer-based models, we select MLP-Mixer-B-16/224 model (Mixer-B)<cit.>. We also study three face recognition models, including FaceNet<cit.>, CosFace<cit.> and ArcFace<cit.>, which all achieve over 99% accuracies on the validation set. 
The threshold for the face recognition model is the one that achieves the highest accuracy on the validation set. §.§.§ Attack Methods Decision-based settings present more practical threats to deployed systems because it is hard to get the score in the system. To solve the practical issue of the score-based setting, our work first explores decision-based patch attacks, which can still construct a perturbation with the minimal information exposed – the top-1 predicted label. Therefore, to reveal the issues of the score-based setting, all experimental comparisons are performed in the decision-based setting. For a fair comparison with score-based patch attacks, we choose HPA<cit.>, MPA<cit.>, TPA<cit.>, Adv-watermark (AdvW)<cit.> and Patch-RS<cit.> as the baseline under the decision-based setting. Inspired by <cit.>, we leverage the label smoothing<cit.> to turn the hard-label into the score for the compared score-based methods without increasing the number of queries, where ε=0.1. Following<cit.>, the patch areas of TPA, AdvW, and Patch-RS are fixed. For white-box patch attacks, we choose GAP<cit.> as the baseline. GAP and DevoPatch share the same location mask M and query (inference) budgets. §.§.§ Evaluation Metrics Following<cit.> and <cit.>, there are three metrics to measure the performance of black-box patch attacks. Patch area (%) is the number of perturbed pixels divided by the total number of pixels of an image. To control how noticeable a patch is, we define an Average Patch Area (APA) as the average area across all successful attacks. To evaluate the efficiency of the patch attack, we calculate the Average Number of Queries (ANQ) over the images finished with patch attacks, followed by <cit.>. Following decision-based adversarial attacks <cit.>, we select the number of queries that reach the minimum value of the patch area during the query process as the calculated value of ANQ. Finally, a measure used to evaluate the adversarial robustness of a model is Attack Success Rate (ASR). ASR (%) is the ratio of adversarial examples that are successfully misrecognized. §.§ Effects of Hyperparameters Here, we analyze the key factors of DevoPatch, including population size p, initialization rate μ, and mutation rate γ and fitness measure. All ablation experiments are performed on ResNet-152 for the image classification task. §.§.§ Population Size Table <ref> shows the effect of different population sizes on performance, where μ=0.1 and γ=1. As the population size p gets larger, the ASR can still remain at 100%, while the APA will decrease further and the ANQ will get greater. In particular, when p=30, its APA is 4.08% less than p=10, but the ANQ is about 2 times larger. For the sake of query efficiency, we choose p=10. §.§.§ Initialization Rate Table <ref> shows the effect of different initialization rates on performance, where p=10 and γ=1. As the initialization rate μ gradually increases, the APA will become less, and the ANQ will not change much. Due to the trade-off between areas and queries, we choose μ=0.35. §.§.§ Mutation Rate Table <ref> shows the effect of different mutation rates on performance, where μ=0.35 and p=10. Obviously, a larger mutation rate can indeed achieve a smaller adversarial patch, but it greatly increases the ANQ. Specifically, when γ=4, the APA is 3.09% less than when γ=1, but the ANQ is 4 times greater. Considering query efficiency and average area, we choose γ=1. §.§.§ Convergence Further, we analyze the convergence of DevoPatch. Fig. 
<ref> describes the variation curve of area with query budget under different initialization rates μ. Intuitively, our method converges quickly. When the number of queries is about 1,000, the best adversarial example has been solved. §.§.§ Fitness Measure The fitness measure directly affects the efficiency of the solution. In DevoPatch, we choose l_0 norm for the fitness calculation according to the optimization objective in Eq. <ref>. However, other norms can also be used to calculate Eq. <ref>. Table <ref> shows the ablation study about different norms on fitness calculation. l_0 norm consistently outperforms other norms in terms of queries and areas. The main reason may be that the optimization objectives of l_0 norm and Eq. <ref> are consistent. Since other norms calculate the distance similarity with the image, they tend to place the patch in the place where the source image and the targeted image are similar, which will fall into the local optimal solution. §.§ Attacks on Image Classification In this section, we compare the attack performance of various black-box patch attacks on image classification. We set the query budgets for untargeted and targeted attack to 10,000 and 50,000, respectively. The hyperparameters of DevoPatch are: p=10, μ=0.35, γ=1. For targeted attacks, we consider a randomly chosen correctly classified image corresponding to the target class y from the dataset. For untageted attacks, we use a randomly chosen correctly classified image corresponding to the random class except the ground-truth label from the dataset, followed by <cit.>. DevoPatch takes about 6.33 hours to perform 10,000 queries on 1,000 images on ResNet-152, based on an NVIDIA Tesla V100. The experimental results against decision-based patch attacks in untargeted and targeted setting on ILSVRC2012 are summarized in Table <ref>. The experimental results show that our DevoPatch consistently outperforms HPA, MPA, TPA, AdvW and Patch-RS in terms of queries and patch areas with a higher ASR, which shows the effectiveness of DevoPatch. In the untargeted setting, TPA, AdvW, and Patch-RS achieve the trade-off on ASR and ANQ. Although the label returns little information, TPA achieves higher ASR in the decision-based setting due to its strong texture prior. Although HPA has a high ASR, its ANQ and APA are extremely large. MPA achieves sub-optimal ASR, but it is inefficient and always uses the whole query budget since it takes 10,000 queries and chooses the best one. DevoPatch achieves 100% ASR with one-seventh of MPA on ANQ under a smaller average area. Under the more challenging targeted setting, due to the lack of effective information about the target class, HPA, MPA, AdvW, and Patch-RS are almost useless. Because TPA has a texture prior, it still has a high ASR, but the ANQ is extremely high. Because of our simplification of solution space, our DevoPatch outperforms TPA by 18.0%, 28.2%, and 9.0% ASR on ResNet, ViT, and MLP, respectively, while the average queries are only one-tenth, one-twentieth and one-seventh of TPA. Also, we choose to compare with GAP, the most basic white-box patch attack. We expect DevoPatch to reach the lower bound of white-box patch attacks in attack performance. Table <ref> illustrates that DevoPatch achieves ASR equivalent to GAP. Both black-box and white-box experiments show that DevoPatch is a query-efficient decision-based black-box patch attack with high attack performance against different network architectures on image classification. 
We provide the visualization of patch attacks on different network architectures as shown in Fig. <ref>. The labels below the image indicate the predicted classes. Labels in black, red, and blue represent the ground truth, target classes, and the classes after the targeted attack has failed, respectively. HPA and MPA use gray or colored patches to achieve attacks (the second and third rows), but their patch area is large and it is difficult to achieve targeted attacks. TPA uses the ImageNet pre-trained texture dictionary for patch attack (shown in the fourth row) and has a higher ASR in the decision-based setting, but its area is also larger. AdvW selects pre-defined logos for patch attacks, but it is difficult to implement targeted attacks because logos have little category information (shown in the fifth row). Patch-Rs is based on a random search framework (shown in the sixth row), but it is difficult to implement targeted attacks because the top-1 labels have too little information. Our DevoPatch achieves the query-efficient attack in a decision-based setting with a smaller area and higher ASR. §.§ Attacks on Face Verification In this section, we compare the attack performance of various black-box patch attacks on face verification. We set the query budgets for dodging and impersonation attacks to 10,000 and 50,000, respectively. The hyperparameters of DevoPatch are: p=10, μ=0.35, γ=1. For dodging attacks, for randomly selecting a pair of faces with the same identity, the adversary generates an adversarial face to make the model recognize them as different identities. For impersonation attacks, the adversary generates an adversarial face to make the model recognize them as the same identity, which originally belonged to different identities. Here, we use cosine similarity and threshold to determine whether it is the same identity. When the cosine similarity of a pair of faces is greater than the threshold, the faces belong to the same identity. Table <ref> shows decision-based patch attacks on face verification. Note that TPA exploits ILSVRC2012 on image classification to implement the attack through a class texture dictionary generated by style transfer. Here, because the face can directly represent the identity, we choose targeted images as the texture dictionary of TPA. These targeted images are the same as DevoPatch. AdvW and Patch-Rs achieve very few ASR in the dodging attack. Although HPA and MPA have extremely high ASR in the dodging attack, they tend to cover the face with a larger area and complete the attack with larger APA and extremely low query efficiency. Further, HPA, MPA, and Patch-Rs have very little ASR in the more challenging impersonation attack due to the limited information of the output of the label by the model. Because TPA has the targeted image as a prior, it achieves a good trade-off in ASR and ANQ in the dodging and impersonation attack. But even so, the performance of TPA in the impersonation attack can not achieve an extremely high ASR. Our DevoPatch and TPA share the same prior information (targeted images) in face verification, but DevoPatch significantly outperforms TPA in attack performance and query efficiency. In a dodging attack, the ANQ of TPA is usually three times that of DevoPatch. In the harder impersonation attack, the ANQ of DevoPatch is about one-twentieth of TPA. 
More importantly, in such a limited number of queries, DevoPatch has a smaller patch area and ASR, which is enough to illustrate the effectiveness of the proposed differential evolution patch attack algorithm. Table <ref> shows the visualization of different patch attacks on face verification. Here, we choose ArcFace as the base model and visualize it on the LFW dataset. The color of the face frame represents whether the attack is successful. Blue represents a failed attack and red represents a successful attack. Because of the different semantic categories in image classification, the generated patches are easy to perceive. In face verification, since the color of patches is irrelevant to semantics, a similar situation also occurs in HPA, MPA, AdvW, and Patch-Rs. However, TPA and DevoPatch select faces as a prior and have face-related features, the resulting patches are relatively imperceptible. Further, since DevoPatch can better determine the location and shape of patches, it can improve attack performance and imperceptibility. DevoPatch has strong applicability and achieves query-efficient decision-based patch attacks on both image classification and face verification. §.§ Attacks on Patch Defenses We also evaluate the performance of DevoPatch against the patch defense methods on image classification, including Local Gradient Smoothing (LGS)<cit.>, Digital Watermarking (DW)<cit.>, Derandomized Smoothing (DS)<cit.> and Efficient Certifiable Vision Transformer (ECViT)<cit.>. For empirical defenses, DW and LGS are regarded as pre-processing operations to remove adversarial patches. For certifiable defenses, we attack models including certifiable mechanisms. The backbone of DS is ResNet-50 and the backbone of ECViT is ECViT-B. Table <ref> shows the adversarial robustness against empirical and certifiable patch defenses on DevoPatch. The above defenses cause very few images to be misclassified. For empirical patch defenses, DW and LGS do not take effect in the face of DevoPatch. A possible reason is that the adversarial patches produced by DevoPatch are part of natural images rather than adversarial perturbations generated by gradients. The former has semantics and harmony in visual understanding. For certifiable patch defenses, ECViT is the state-of-the-art certifiable patch defense, but it also cannot maintain certification in large rectangular patch areas (greater than 10%). However, ECViT increases APA and reduces the quality of patches compared to DS. This experiment exposes deficiencies in existing patch defenses, so it is critical to improve the robustness and certification of defenses. §.§ Ablation Study on Differential Evolution Both genetic algorithm (GA) <cit.> and differential evolution algorithm (DE) <cit.> are evolutionary algorithms, which simulate mutation, crossover, and selection in genetics to solve optimization problems. Due to different encoding, crossover, mutation, and selection strategies, DE generally has faster convergence speed <cit.>. However, directly applying DE to this task encounters the challenges of complex solution space and efficient query efficiency. Therefore, we simplify solution spaces to the integer domain and improve the traditional DE. We conduct ablation studies on differential evolution with ResNet-152, and the parameter settings are consistent with Section <ref>. Fig. <ref> shows how APA changes as the number of queries increases under different differential evolutions. Here, w.o. crossover and w.o. 
mutation mean that the crossover and mutation improved by DevoPatch are not used, but the crossover and mutation of traditional DE <cit.> are used. Under the premise of guaranteeing 100% ASR, the mutation and crossover of DevoPatch have a fast convergence speed, and can better jump out of the local optimal solution, thereby generating higher-quality adversarial patches. §.§ Analysis on Target Images To explore the impact of the selection of target images on attack performance, we conduct experimental analysis on image classification with ResNet-152 from three perspectives, including color, randomness, and data source. The parameter settings are consistent with Section <ref>. Color. HPA <cit.> and MPA <cit.> introduce monochrome patches to implement the attack. Therefore, monochrome images have the potential to become target images. Here, we select White, Blue, Green, Yellow and Pink as target images. Table <ref> illustrates the attack performance and query efficiency when images of different colors are used as target images. Under the untargeted setting, DevoPatch with monochrome images has a similar ASR and APA as MPA, but the query efficiency is one-seventh of MPA. However, monochrome images have almost no target attack performance, because monochrome images have almost no semantic information of the corresponding class. The above experiment shows the efficiency of DevoPatch and the necessity of random natural images as target images. Randomness. Considering that target images on image classification are randomly selected from the ILSVRC2012 validation set, randomness may affect attack performance and query efficiency. Here, we fix the clean images and randomly sample the target images five times, then evaluate the performance. Table <ref> illustrates the impact of different random target images on attack performance and query efficiency. From the experiments, we can find that although the marginal improvement can be obtained through randomness, the impact of random target images on performance is very small, and APA and ANQ are very close. Although the optimal performance is randomly selected multiple times, the query cost is multiplied, which is not feasible in real-world scenarios. From the perspective of the target images themselves, how to generate a more powerful target image is a future work that has the potential to improve the attack efficiency. Data Source. This work is carried out in a black-box decision-based setting. For the black-box model, we can only obtain the output label, which is difficult to obtain full training data or access the model architecture. As described in Section <ref>, we use targeted image priors to reduce the complexity of the solution space and achieve effective targeted attacks. In Section <ref>, we choose the targeted images randomly sampled from the validation set of ILSVRC2012 and show the effectiveness of DevoPatch, which belong to the same source data as the training set of the models. To further demonstrate the generalization capability of the proposed method in real-world scenarios, we collect 100 images from the Internet as the targeted images, which are not from the same source as ILSVRC2012. In the case of using different-source data, ASR equivalent to same-source data can be obtained on image classification, as described in Table <ref>. DevoPatch based on different-source data has very subtle differences in areas and queries, which shows DevoPatch is not sensitive to the domain of targeted images. 
It is worth noting that TPA uses ImageNet to generate a texture dictionary to attack the classification model, which is impossible in real-world scenarios. However, DevoPatch can arbitrarily select a correctly identified image from the Internet as the targeted image, thereby realizing a black-box patch attack with high operability and flexibility. §.§ Effectiveness Analysis In this section, we analyze why DevoPatch has a very high targeted attack success rate. We utilize the gradient-based class activation mapping (Grad-CAM)<cit.> to visualize the attention maps of various classes, as shown in Figure <ref>. First, we use Grad-CAM to generate the attention maps of the source image and targeted image of their corresponding classes (such as column 2 and column 4). We can find that the most discriminative regions are all in the regions with the most salient category objects. Then, we visualize the attention map of the source image corresponding to the target class and find that it is not focused on the objects of the source class. Among the limited queries, we notice that the class activation map of the class predicted by the model focuses on the adversarial patch, indicating that DevoPatch can become the most discriminative region without having access to any details of the model. Since adversarial patches based on targeted classes cover the most discriminative regions, the model outputs predictions for the targeted class. Therefore, DevoPatch is a query-efficient decision-based patch attack because of paired key-points and targeted image prior. § DISCUSSION In this section, we compare the robustness of ResNet, ViT, and MLP models to patch perturbation on image classification. In Table <ref>, we find that our method needs a relatively larger area to craft successful patch attacks on ViT and MLP models compared with ResNet model with similar query budgets. It means ViT and MLP models are relatively more robust than ResNet model under the most threatening decision-based patch attack. Probably because ViT and MLP split the image into multiple non-overlapping patches which reduce the impact of noises on one local region to the final classification results<cit.>. As shown in Table <ref>, all kinds of models are equally vulnerable to perturbations computed using white-box attack GAP. We then find that adversarial perturbations computed using ResNet rarely transfer to ViT or MLP in the white-box setting especially for the targeted attack, which is also observed by <cit.> in imperceptible attacks. Interestingly, different from the conclusion in <cit.>, we find that adversarial perturbations computed using ViT and MLP do transfer to ResNet. Particularly, the adversarial perturbations crafted by our black-box method transfer more easily over different architectures, even for the targeted attack. In addition, we first present the lower bound on the area required for the targeted patch attack in the decision-based setting. Targeted attacks of any category can be completed in about 25% of the patch area under about 1,300 queries. This is an important safety reference for real-world systems. The above observations suggest that studying the adversarial robustness of DNNs from the perspective of decision-based black-box patch attacks is necessary to better understand and improve DNNs. § CONCLUSIONS In this work, we explore the practical threat of decision-based black-box patch attack to the robustness of existing DNNs for the first time. 
Compared with the transfer-based and score-based settings, the decision-based setting does not require access to a large amount of training data and relies only on minimal information, namely the labels output by the model, to carry out the adversarial attack. To simplify the solution space and improve query efficiency, we propose a differential evolution algorithm named DevoPatch for query-efficient adversarial patch attacks in the decision-based black-box setting. In DevoPatch, we model adversarial patches as paired key-points and utilize targeted images as priors. With paired key-points and targeted image priors, the differential evolution algorithm over the integer domain greatly improves query efficiency. Our comprehensive experiments show that DevoPatch outperforms the state-of-the-art black-box patch attacks in terms of patch area and ASR on both image classification and face verification. More importantly, thanks to the reduced solution space, DevoPatch demonstrates significantly better query efficiency than existing patch attacks in the decision-based black-box setting. We also investigate the robustness of various DNN architectures against DevoPatch. DevoPatch exposes the shortcomings of existing DNNs against patch attacks. In future research, DevoPatch can be used to evaluate the robustness of models to black-box adversarial patches. In addition, our work provides a deeper understanding of the robustness of DNNs against decision-based patch attacks. We believe this work can inform and inspire the design and deployment of robust vision models based on various DNN architectures in the future. § ACKNOWLEDGMENTS This work was supported by National Natural Science Foundation of China (No.62072112), Scientific and Technological innovation action plan of Shanghai Science and Technology Committee (No.22511102202), Fudan Double First-class Construction Fund (No. XM03211178).
http://arxiv.org/abs/2307.01322v1
20230703195230
Toward Blockchain-based Fashion Wearables in the Metaverse: the Case of Decentraland
[ "Amaury Trujillo", "Clara Bacciu" ]
cs.SI
[ "cs.SI" ]
Toward Blockchain-based Fashion Wearables in the Metaverse: the Case of Decentraland Amaury Trujillo IIT-CNR Pisa, Italy amaury.trujillo@iit.cnr.it Clara Bacciu IIT-CNR Pisa, Italy clara.bacciu@iit.cnr.it August 1, 2023 ===================================================================================================================================== Among the earliest projects to combine the Metaverse and non-fungible tokens (NFTs) we find Decentraland, a blockchain-based virtual world that touts itself as the first to be owned by its users. In particular, the platform's virtual wearables (which allow avatar appearance customization) have attracted much attention from users, content creators, and the fashion industry. In this work, we present the first study to quantitatively characterize Decentraland's wearables, their publication, minting, and sales on the platform's marketplace. Our results indicate that wearables are mostly given away to promote and increase engagement on other cryptoasset or Metaverse projects, and only a small fraction is sold on the platform's marketplace, where the price is mainly driven by the preset wearable's rarity. Hence, platforms that offer virtual wearable NFTs should pay particular attention to the economics around this kind of assets beyond their mere sale. metaverse, virtual fashion, non-fungible tokens § INTRODUCTION The idea of the Metaverse as a fully immersive future iteration of the Internet has fueled the imagination of many tech enthusiasts for a few decades now, with the latest advancements in extended reality (XR) having further ignited public interest. The fashion industry is no exception; in recent years there has been an increasing participation in Metaverse-related initiatives, ranging from independent designers to famous commodity and luxury brands <cit.>. After all, avatar customization is one of the main enjoyments of users and one of the main factors of purchase intent in virtual worlds <cit.>, with the economic potential being highly enticing to the fashion industry <cit.>. Also recently, non-fungible tokens (NFTs) —units of cryptographic data that represent a unique asset via blockchain technology— are being used to certify ownership of digital objects such as virtual garments. Arguably, the first and most influential Metaverse-related project that utilized NFTs is Decentraland, a virtual world based on the Ethereum blockchain in which users can create, experience, and monetize their content and assets (including wearables for avatars) via the in-platform cryptocurrency, called MANA. The platform is managed through a Decentralized Autonomous Organization (DAO) —a blockchain-based system that enables self coordination and governance <cit.>— based on MANA ownership. In addition, users can also “buy” several kinds of NFTs, namely parcels of land; groups of parcels called estates; unique names; as well as wearables and the recently introduced emotes, items to customize the appearance and animation sequences of avatars, respectively. Decentraland started in 2015 and launched in February 2020 to a moderate success. In 2021, there was a surge in the public interest in NFTs caused by several high-profile sales <cit.>, giving rise to a sudden increase in the value of MANA. A few months later, “Facebook Inc.” rebranded itself into “Meta Platforms”, drawing even more attention to the Metaverse in general, and to Decentraland in particular. 
In view of this newfound popularity, Decentraland was host to the first Metaverse Fashion Week (MVFW) in March 2022, which involved a wide variety of brands and several in-world fashion events. However, the 2022 cryptowinter (i.e., a remarkable devaluation of cryptoassets) greatly affected Decentraland. Such a volatility, together with many uncertainties surrounding NFTs, brought into discussion the approach used by Decentraland for its economy and the ownership of the in-platform digital assets, particularly wearables, with potential ramifications for other Metaverse-related projects. Herein, we thus characterize the publication, minting, and marketplace sales of Decentraland's wearables. Our work's main contribution is —to the best of our knowledge— the first quantitative characterization in literature of blockchain-based digital wearable assets on a virtual world. § BACKGROUND AND RELATED WORK By Metaverse, we refer to a shared universe of interoperable virtual worlds, with these in turn being “[s]hared, simulated spaces which are inhabited and shaped by their inhabitants who are represented as avatars” <cit.>. We are still in an early development phase of the Metaverse, but a plethora of independent virtual worlds have already been created in the last decades, such as Second Life, or massively multiplayer online (MMO) games such as World of Warcraft, Minecraft and Fortnite. Nevertheless, these have been subject to much criticism in terms of the governance and ownership of in-world assets: a new form of virtual feudalism in Second Life <cit.>, conflicts of agency and ownership in World of Warcraft between publisher and gamers <cit.>, and an overall separation of content and platform ownership in virtual worlds to the detriment of users <cit.>. As a response, a wave of blockchain virtual experiences with decentralized governance and assets emerged, of which Decentraland is a prominent example. Still, most previous research on Decentraland either focus on landholding assets <cit.>, or take the platform's tokens as a whole within a larger NFT ecosystem <cit.>. As far as we know, Decentraland's wearables have not been yet studied in literature. Fashion within virtual worlds, on the other hand, has been studied for several years now, mainly on the still on-going Second Life (released in 2003), which popularized the idea of customizable virtual worlds and avatars <cit.>, but who many argue failed to meet the expectations of the general public <cit.>. Nonetheless, within Second Life there is still a strong economy based on avatar customization and many users have found a new passion (and in some cases a source of income) by becoming in-world “fashion designers” of avatar clothing and accessories<cit.>. In the last few years, however, the fashion industry has begun to focus more on highly popular MMO games —mainly through branded skins, i.e., in-game character cosmetic options— with a growing interest on more recent blockchain-based virtual worlds <cit.>. For instance, many important luxury brands have started experimenting with NFT collectibles <cit.>, e.g., participating in initiatives such as the MVFW or also creating Decentraland weareables <cit.>. These wearables consist of items in the form of clothing, accessories, and partial or complete body features that change the appearance of avatars. Each wearable item belongs to a collection and can be minted into unique NFTs, i.e. items themselves cannot be bought or sold, only the tokens minted from them (see Figure <ref>). 
First, collection creators must pay a fee so that it can be vetted by a curation committee (elected by the project's DAO). If approved, the collection items are available for minting. Afterward, creators can either mint tokens and transfer them to any account in the blockchain network, or put their items on sale for minting on the official marketplace. Other users can obtain wearable tokens in three main ways: 1) by participating in an in-world event (e.g., a fashion show) that airdrops (gives away) tokens; 2) by claiming tokens airdropped by an external project (e.g., to reward its stakeholders); and 3) by buying tokens within or without the official marketplace. Regardless of the minting mechanism, the wearable NFTs are property of the holder; hence, it is also possible to use other trading platforms for secondary sales, such as OpenSea, the largest NFT marketplace. Herein we mainly focus on primary sales on Decentraland's marketplace. Currently, there are two wearable versions in Decentraland's marketplace: v1 was implemented on the Ethereum network, with the first collection being published in October 2019. However, many concerns emerged within the community around Ethereum's scalability and volatile fees to process transactions. Hence, v2 wearables were developed for the Polygon network, a blockchain platform and an Ethereum-compatible sidechain that allows a more efficient processing of transactions <cit.>. The last v1 collection was published in March 2021; the first v2 collection was published in June 2021, with v1 being deprecated but remaining usable in-world and available on NFT marketplaces. Therefore, users must be aware of the network in which they carry out secondary sales. § METHODS We collected various wearable data, from first record available up to December 31 2022, via Decentraland's query endpoints on TheGraph —an indexing protocol and service for querying blockchain networks. These endpoints are used across the different applications within Decentraland and their implementations are open source.[<https://github.com/decentraland>] In detail, we retrieved metadata regarding wearable collections (name, status, creator, blockchain network) and items (name, category, rarity, price), the 3D asset metadata for v2 wearables (number of triangles, meshes, textures), in addition to all v2 items' mints on Polygon, as well as the primary and secondary NFT sales (buyer, seller, kind, price) within Decentraland. To compare the monetary value of wearable items and sales over time, we first converted the price in MANA of a given item or sale to USD, based on data from Yahoo! Finance, and we then adjusted for inflation on a monthly basis to the USD value in December 2022, based on the Consumer Price Index (CPI) by the Bureau of Labor Statistics of the United States. Henceforth, all monetary value will be expressed in CPI-adjusted USD. To present results, we use the median (x̃) and median absolute deviation (mad) to describe distributions, and multinomial χ^2 tests to compare proportions of wearable kinds between groups. We used the R statistical software and its ecosystem for all of the analyses. § PUBLISHED WEARABLES For v1, there were only 41 collections (representing 436 items), whereas at the cutoff date there were 4,110 v2 collections (8,324 items) submitted, with most (88%) being approved and published (3,756 collections with 7,323 items). 
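As a concrete illustration of the price-normalization step described in the Methods section above (conversion of MANA prices to USD and monthly CPI adjustment to December 2022 dollars), the following Python sketch shows one way such a step could be implemented. The paper's analysis was done in R; the data frame and column names here are hypothetical, with the exchange-rate and CPI series supplied externally (Yahoo! Finance and the BLS, as stated in the text).

import pandas as pd

def to_adjusted_usd(sales, mana_usd, cpi, base_month="2022-12"):
    # sales:    DataFrame with columns ['date', 'price_mana']  (hypothetical schema)
    # mana_usd: Series of daily MANA/USD closing rates indexed by date
    # cpi:      Series of monthly CPI values indexed by month period
    s = sales.copy()
    s["date"] = pd.to_datetime(s["date"])
    # Nominal USD value at the time of the sale.
    s["price_usd"] = s["price_mana"] * s["date"].map(mana_usd)
    # Monthly CPI adjustment to base_month dollars.
    month = s["date"].dt.to_period("M")
    base_cpi = cpi[pd.Period(base_month, freq="M")]
    s["price_usd_adj"] = s["price_usd"] * base_cpi / month.map(cpi)
    return s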
Concerning the 1,266 distinct v2 creator accounts, more than half (53.9%) had only one collection; the most prolific creator by far had 79 collections with 616 items, mostly used as rewards by Decentral Games, a company whose main product is a poker game within Decentraland. All v1 collections consist of two items or more, with 19.5% having more than ten, whereas the majority of v2 collections (68.8%) consist of a single item. The monthly publication of approved v2 items (see Figure <ref>) follows the rise and decline in MANA, with the most prolific month being March 2022, around the MVFW. The median asking price of v2 wearables at the cutoff date was 30.64USD (45.42). Interestingly, 987 items (13.4% of the total published) have a zero price, but are not necessarily listed for sale, as these are usually used for airdrops or claimed as rewards. Table <ref> presents an overview of v2 items by rarity, which is set by their creators and limits the token supply that can be minted. The share of items (from the grand total) by rarity and version follows a bell-shaped distribution, with legendary being the most popular in both versions. The relative minted supply (i.e., the proportion of tokens already minted) by rarity instead shows a linear-log correlation: the lower the limit, the larger the share of supply minted. Wearable 3D models can be adapted to different body shapes: female, male, or both (unisex). Between v1 and v2 there was a significant change in the share of items by body shape (χ^2 = 286.9, p < .001), with unisex increasing by 12.7 percentage points to 92.3%, whereas female and male decreased to respective shares of 4.1% and 3.6%. Other 3D model attributes refer to the resources needed for rendering, such as the number of triangles and textures, which, according to the guidelines, should be limited by wearable category. These categories are: upper body, lower body, feet, accessory (e.g., earring, hat, helmet), head (e.g., facial hair, eyebrows), and skin (which changes the appearance of the whole avatar). For instance, no more than 5k triangles and 5 textures for skin; no more than 1.5k triangles for all other wearables except accessory (500 triangles), with at most 2 textures. However, we found that these values are exceeded in 41% of the items for triangles and 15% for textures. By far the most common item categories are upper body and accessory, with 36.6% and 36%, respectively. Based on Freeman's theta measure of association for ordinal and multinomial variables, rarity and category are very weakly associated (θ = 0.06). Still, as can be seen in the item share by these two variables in Figure <ref>, skin has proportionally more unique items (11%) compared to the mean of the other categories (4.3%). § TOKEN MINTS The maximum token supply of v1 wearables was 246.2k, of which 70.9k tokens (28.8%) have been minted. However, given the deprecation of v1, their minting has practically halted. The maximum v2 token supply was 50.87M, of which only 3.56M tokens (7%) have been minted. There were 1,225 v2 creators (96% of the total) that published 6.9k items (94.3%) minted at least once; stated differently, 50 creators (4%) had 423 items (5.7%) that were never minted as of the cutoff date. Around 66.6% of v2 creators with mints were involved in a primary sale on the marketplace, in which 8,953 buyers minted the tokens.
Of the mints not associated with a primary sale, 682.3k mints (18.9% of the grand total) were transfers from the creators to themselves, and the rest (80.8% of the grand total) were transfers to other beneficiaries. Overall, there were 564.3k distinct beneficiaries of 3.56M minted tokens, with 354.7k beneficiaries (62.8%) having received only one token, and 35.85k (6.4%) ten or more. § MARKETPLACE SALES Concerning the overall sales within the marketplace, v1 had 9.2k sales with a median sale price of 3.1USD, and v2 had 166.9k sales with a median sale price of 2.2USD, with only 341 v1 sales being made since the introduction of v2. The number of v2 primary sales was 120.7k (72.4% of the total), representing just 3.4% of v2 mints; the median primary sale value was 1.56USD (2.32). Despite both sales volume and value increasing at the end of 2021 and reaching their peak in early 2022, the median value of primary sales decreased markedly during the second half of 2021 (see Figure <ref>), most likely due to the greatly increased supply of tokens. As for secondary sales, 9.7% were bids (offers made for a token not listed for sale) and the rest were listings (of tokens publicly listed for sale), with an overall median value of 3.03USD (3.91). The number of items with at least one primary sale was 3,332 (45.5% of the total). We modeled the median item price of these sales by first selecting features intrinsic to the item (i.e., rarity, category, single- or multiple-item collection, 3D model complexity), and then using a forward stepwise linear regression model based on the Akaike information criterion (AIC). Table <ref> reports the predictor coefficients of the final model, with R^2_adj = 0.29, F = 341.3, p < .001. Of these predictors, and based on a simple elasticity model, the most important is the item's rarity limit (R^2_adj = 0.24). As expected, the rarer the item, the more expensive it is. However, this predictor alone systematically underestimates the price of unique items, as can be seen in Figure <ref>, hence the dummy predictor for this rarity in the final linear model. Based on this model, the elasticity of the price with respect to the rarity limit is –0.389, and being unique is associated with a +56.9% price change. In contrast, belonging to a single-item collection is associated with a –32.5% change. Interestingly, the number of textures (and not triangles) is the most relevant 3D model metric, with an associated +5.7% price change per texture. Regarding secondary sales, there were 3,269 distinct items, of which 1,508 (46%) did not have any primary sale within the marketplace, i.e., the respective tokens were minted by other means and then sold on the marketplace. Of the tokens minted via a primary sale, only 3.4% were the object of a secondary sale. The tokens with multiple sales numbered 7,375, with 88.3% having just two sales. Of the 840 tokens with three or more sales, 73 (8.6%) have at least one round-trip sale, i.e., an account eventually re-buys a token it previously sold. This is indicative of wash trading, a manipulative practice that affects most NFT markets to varying degrees <cit.>. The most blatant instance concerns the most sold token, with only two accounts trading it 15 times in just 18 days. § DISCUSSION §.§ Change and standardization The change from the short-lived v1 to v2 wearables was substantial in terms of publication, implementation, and economics.
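Returning briefly to the price model described in the marketplace-sales analysis above, the following Python sketch shows the final log-log specification. The column names are hypothetical, and the forward stepwise AIC selection used in the paper (carried out in R) is not reproduced here, only the resulting model form.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_price_model(items: pd.DataFrame):
    # items: hypothetical item-level frame with median primary-sale price, rarity limit
    # (maximum supply), a unique-rarity dummy, a single-item-collection flag, and texture count.
    df = items.copy()
    df["log_price"] = np.log(df["median_price_usd"])
    df["log_rarity_limit"] = np.log(df["rarity_limit"])
    # Log-log specification: the coefficient on log_rarity_limit is the price elasticity,
    # while exponentiated dummy coefficients give approximate percentage price changes.
    model = smf.ols(
        "log_price ~ log_rarity_limit + is_unique + single_item_collection + n_textures",
        data=df,
    ).fit()
    elasticity = model.params["log_rarity_limit"]
    pct_change_unique = 100 * (np.exp(model.params["is_unique"]) - 1)
    return model, elasticity, pct_change_unique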
The initial version was, in part, a victim of the sudden increase in interest in NFTs and in Ethereum in particular, which put the network's transaction-fee volatility and scalability issues into the limelight. Hence, such a speedy transition was inevitable, but it was not without hiccups. For instance, changes in the rendering of materials entailed a cumbersome procedure of 3D model rejection and re-submission, which casts doubt on the long-term functioning of in-world assets. These issues are also a manifestation of the current frenzy around blockchain and the Metaverse. Both are nowadays frequently associated with the so-called Web3, an envisioned decentralized iteration of the Web based on blockchain, and, as happened with the Web, to achieve such a vision it is necessary to first agree on common standards on a global scale. Recently, several companies (including Meta and Microsoft) founded the Metaverse Standards Forum, a platform for collaboration between standards organizations and businesses to promote an interoperable and open Metaverse. The blockchain-based community reacted with the Open Alliance for the Metaverse (OMA3), with the intended goal of ensuring that virtual land, digital assets, ideas, and services follow the principles of Web3. Decentraland is among the founding members of OMA3, which has now joined the Metaverse Standards Forum to work on 3D asset interoperability, digital asset management, privacy, cybersecurity, and identity. To add to the confusion, the Open Metaverse Foundation (by the Linux Foundation) has recently been launched. At the moment of writing, all of these organizations had existed for less than a year; it remains to be seen whether their missions will indeed be fulfilled. §.§ Wearables and the monetization of promotion Surprisingly, only a small share of mints (3.4%) comes from primary sales on the platform marketplace, and the volume of secondary sales on it is much lower than we had expected. Moreover, at the moment of writing (early 2023), there were only 3.2k tokens of v2 wearables listed as “buy now” (i.e., listed at a set price by the owner) on OpenSea, the most common external marketplace for Decentraland assets. From the minting data alone, it seems that the majority of tokens are given away as part of airdrops or awards to entice potential or existing users. Upon manual verification, this is certainly the case with the most prolific creators, such as Decentral Games. Further, for the most popular or complex items, it seems that the main goal is not to earn money from the sales per se, but to promote other products and services. For instance, Figure <ref> depicts: a) the most minted item, which was part of a giveaway to promote its creator; b) the item with the most complex 3D geometry, used to promote Sophia The Robot; and c) the item with the highest number of primary sales, at zero price plus network fees. Indeed, at the moment of writing, MVFW 2023 has just taken place, a testament to the appeal of in-world promotion for the fashion industry. Concerning the desirability of wearables, based on our linear model the most important factor is the item's rarity, especially if unique, with other significant factors being the size of the collection and the number of textures of the 3D model. It should be noted, however, that the term rarity as used for Decentraland wearables differs considerably from the same term as used in most NFT projects, where it denotes the relative frequency of a token with a given set of characteristics.
In the latter case, rarity is also the most important factor for sale price, but with different and varying statistical distributions per project <cit.>. §.§ Study limitations One of the main limitations of our work is the focus on Decentraland's marketplace: a creator that mints a token for themselves could put it for sale (within or without Decentraland's marketplace), transfer it to other accounts (e.g, as part of an airdrop or claimed awards), or simply keep it indefinitely. At the same time, Decentraland's marketplace is the entrance point and main setting for trading wearables on the platform, particularly for v2 primary sales, and it is the most common data source regarding Decentraland used in previous literature <cit.>. Another limitation is the lack of data and analysis regarding the effective use of wearables within Decentraland's virtual world. For instance, it is plausible that many users own wearables that they have not worn in-world and perhaps will never do or will do for a very brief period of time. However, the on-chain data concerns only ownership of wearables, which is an interesting aspect by itself, but it is a faulty proxy of actual fashion as seen in-world. In addition, despite the popularity and influence of Decentraland, it would be interesting to see if similar wearable dynamics are present on similar blockchain-based virtual worlds. § CONCLUSION Decentraland's blockchain-based approach to virtual wearables has piqued the attention of digital fashion designers and consumers. Although items can be minted and sold within the platform's marketplace, our study shows that most are distributed by other means, mainly with the scope of promoting other projects and initiatives. For those sold within, the price is unsurprisingly primarily driven by their predefined rarity, with collection cardinality and number of textures also being important. However, being among the first in this domain has also brought problems for the platform regarding price volatility and long-term robustness of its technical implementations. We thus hope that our work inspires others working on bringing the Metaverse into a vogue reality. IEEEtran
http://arxiv.org/abs/2307.03288v1
20230706204942
Optimal Scalarizations for Sublinear Hypervolume Regret
[ "Qiuyi Zhang" ]
cs.LG
[ "cs.LG", "cs.DS", "math.OC" ]
[ Optimal Scalarizations for Sublinear Hypervolume Regret Qiuyi (Richard) Zhangyyy yyyGoogle Deepmind Qiuyi Zhangqiuyiz@google.com Multiobjective Optimization 0.3in ] Scalarization is a general technique that can be deployed in any multiobjective setting to reduce multiple objectives into one, such as recently in RLHF for training reward models that align human preferences. Yet some have dismissed this classical approach because linear scalarizations are known to miss concave regions of the Pareto frontier. To that end, we aim to find simple non-linear scalarizations that can explore a diverse set of k objectives on the Pareto frontier, as measured by the dominated hypervolume. We show that hypervolume scalarizations with uniformly random weights are surprisingly optimal for provably minimizing the hypervolume regret, achieving an optimal sublinear regret bound of O(T^-1/k), with matching lower bounds that preclude any algorithm from doing better asymptotically. As a theoretical case study, we consider the multiobjective stochastic linear bandits problem and demonstrate that by exploiting the sublinear regret bounds of the hypervolume scalarizations, we can derive a novel non-Euclidean analysis that produces improved hypervolume regret bounds of Õ( d T^-1/2 + T^-1/k). We support our theory with strong empirical performance of using simple hypervolume scalarizations that consistently outperforms both the linear and Chebyshev scalarizations, as well as standard multiobjective algorithms in bayesian optimization, such as EHVI. § INTRODUCTION Optimization objectives in modern AI systems are becoming more complex with many different components that must be combined to perform precise tradeoffs in machine learning models. Starting from standard ℓ_p regularization objectives in regression problems <cit.> to increasingly multi-component losses used in reinforcement learning <cit.> and deep learning <cit.>, many of these single-objective problems are phrased as a scalarized form of an inherently multiobjective problem. Practitioners often vary the weights of the scalarization method, with the main goal of exploring the entire Pareto frontier, which is the set of optimal objectives that cannot be simultaneously improved. First, one chooses some weights λ∈^k and scalarization functions s_λ(y) : ^k → that convert k multiple objectives F(a) := (f_1(a), ... , f_k(a)) over some parameter space a∈⊆^d into a single-objective scalar. Optimization is then applied to this family of single-objective functions s_λ(F(x)) for various λ and since we often choose s_λ to be monotonically increasing in all coordinates, x_λ = max_a ∈ s_λ(F(a)) is on the Pareto frontier and the various choices of λ recovers an approximation to the Pareto frontier <cit.>. Due to its simplicity of use, many have turned to a heuristic-based scalarization strategy to pick the family of scalarizer and weights, which efficiently splits the multi-objective optimization into numerous single "scalarized" optimizations <cit.>. Linear scalarizations with varying weights are often used in multi-objective optimization problems, such as in multi-objective reinforcement learning to combine task reward with the negative action norm <cit.> or in RLHF to align responses with human preferences <cit.>. Furthermore, some works have proposed piecewise linear scalarizations inspired by economics <cit.>, while for multi-armed bandits, scalarized knowledge gradient methods empirically perform better with non-linear scalarizations <cit.>. 
Other works have come up with novel scalarizations that perform better empirically in some settings <cit.>. In general, previous works have tried to do comparisons between different scalarizations but with varying conclusions <cit.>. However, the appeal of using scalarizations in multiobjective optimization largely declined as linear scalarizations are shown to be provably incapable of exploring the full Pareto frontier <cit.>. This has led to a flurry of recent developments in specific multi-objective algorithms tailored to specific settings such as ParEgo <cit.> and MOEAD <cit.> for black-box optimization or multivariate iteration for reinforcement learning <cit.>. Furthermore, many adaptive reweighting strategies have been proposed in order to target or explore the full Pareto frontier, which have connections to gradient-based multi-objective optimization; however these strategies are much more complicated to implement and produce higher runtimes due to the addition logic <cit.>. This begs the question of Are simple scalarization methods at all competitive and if so, how would one optimally choose them? To judge the effectiveness of an multiobjective optimizer, a natural and widely used metric to measure progress is the hypervolume indicator, which is the volume of the dominated portion of the Pareto set <cit.>. The hypervolume metric has become a gold standard because it has strict Pareto compliance meaning that if set A is a subset of B and B has at least one Pareto point not in A, then the hypervolume of B is greater than that of A. Therefore, it is of no surprise that multiobjective optimization methods often use hypervolume related metrics for progress tracking or acquisition optimization, such as the Expected Hypervolume Improvement (EHVI) or its differentiable counterpart <cit.>. In previous works, we note that the notion of optimality becomes varied. Previous work by <cit.> proved Pareto regret bounds of O(d√(T)), but that only guarantees recovery of a single point close to the Pareto frontier. Some works minimize a notion of distance to the Pareto frontier, such as the ℓ_∞ norm <cit.>, although such approaches work in the finite multi-arm bandit setting which mandates at least a pull of each arm. Some recent works provide sub-linear hypervolume regret bounds which guarantees convergence to the full Pareto frontoer; however, they are exponential in k and its analysis only applies to a specially tailored algorithm that requires an unrealistic classification step <cit.>. Most relevant is recent work by <cit.> that introduces random hypervolume scalarizations and when combined with our generalization bounds, one can directly derive a O(k^2d /√(T) + T^-1/O(k)) convergence bound for Gaussian Process bandits. §.§ Our Contributions We show, perhaps surprisingly, that a simple ensemble of hypervolume scalarizations, first introduced in <cit.>, are theoretically optimal to minimize hypervolume regret and are empirically competitive for general multiobjective optimization. Specifically, we show that the hypervolume scalarization has sharp level curves that allows for the targeting of a specific part of the Pareto frontier, without any convexity assumptions or the need for adaptively changing weights. Theoretically, we show that exploring the Pareto frontier by choosing T maximizers of randomly weighted hypervolume scalarizations achieves a sublinear hypervolume regret rate of O(T^-1/k), where T is the number of points sampled. 
Our proofs follow from novel arguments that combine the Lipschitz properties of the hypervolume scalarizations with classic metric entropy bounds for L-Lipschitz functions in ^k. We observe that the derived sublinear hypervolume regret rate of the hypervolume scalarization holds for any Pareto frontier, regardless of the underlying multiobjective function F or the underlying optimizer. Therefore, we emphasize that analyzing these model-agnostic rates can serve as a general theoretical tool to compare and analyze the effectiveness of proposed multiobjective algorithms. In fact, although many scalarizers will search the entire Pareto frontier as T →∞, the rate at which this convergence occurs can differ significantly, implying that this framework paves the way for a theoretical standard by which to judge the effectiveness of advanced strategies, such as adaptively weighted scalarizations. On the other hand, we show, perhaps surprisingly, that no multiobjective algorithm, whether scalarized, adaptive, or not, can beat the optimal hypervolume regret rates obtained by applying single-objective optimization to the hypervolume scalarization. To accomplish this, we prove novel lower bounds showing that one cannot hope for a better convergence rate, due to the exponential nature of our regret, for any set of T points. Specifically, we show that the hypervolume regret of any algorithm after T actions is at least Ω(T^-1/k), demonstrating the necessity of the O(T^-1/k) term up to small constants in the denominator. As a corollary, we leverage the sublinear regret properties of the hypervolume scalarization to transfer our lower bounds to the more general setting of scalarized Bayes regret. Together, we demonstrate that for general multiobjective optimization, finding maxima of the hypervolume scalarizations with a uniform weight distribution optimally finds the Pareto frontier asymptotically. Let _T = {y_1,..., y_T} be a set of T points in ^k such that each y_i is a maximizer of s^HV_λ_i over the output set, with the λ_i drawn i.i.d. from a uniform distribution, where s^HV denotes the hypervolume scalarization. Then, the hypervolume regret satisfies HV(^⋆) - HV(_T) = O(T^-1/k), where ^⋆ is the Pareto frontier and HV is the hypervolume function. Furthermore, any algorithm for choosing these T points must suffer hypervolume regret of at least Ω(T^-1/k). Next, we use a novel non-Euclidean analysis to prove improved hypervolume regret bounds for our theoretical toy model: the classic stochastic linear bandit setting. For any scalarization and weight distribution, we propose a new scalarized algorithm (alg:exploreucb) for multiobjective stochastic linear bandits that combines uniform exploration and exploitation via a UCB approach to provably obtain scalarized Bayes regret bounds, which we then combine with the hypervolume scalarization to derive optimal hypervolume regret bounds. Specifically, for any scalarization s_λ, we show that our algorithm in the linear bandit setting has a scalarized Bayes regret bound of O(L_p k^1/p d T^-1/2 + T^-1/k), where L_p is the Lipschitz constant of s_λ(·) in the ℓ_p norm. Finally, by using hypervolume scalarizations and exploiting their ℓ_∞-smoothness, we completely remove the dependence on the number of objectives, k, which had a polynomial dependence in the previous regret bounds given by <cit.>.
Let _T ⊆ be the actions generated by T rounds of alg:exploreucb, then our hypervolume regret is bounded by: _z(Θ^⋆) - _z(Θ^⋆_T) ≤O( d T^-1/2 + T^-1/k) Guided by our theoretical analysis, we empirically evaluate a diverse combination of scalarizations and weight distributions with our proposed algorithm for multiobjective linear bandits. Our experiments show that for some settings of linear bandits, in spite of a convex Pareto frontier, applying linear or Chebyshev scalarizations naively with various weight distributions leads to suboptimal hypervolume progress, especially when the number of objective increase to exceed k ≥ 5. This is because the non-uniform curvature of the Pareto frontier, exaggerated by the curse of dimensionality and combined with a stationary weight distribution, hinders uniform progress in exploring the frontier. Although one can possibly adapt the weight distribution to the varying curvature of the Pareto frontier when it is convex, we suggest remediating the issue by simply adopting the use of non-linear scalarizations that are more robust to the choice of weight distribution and are theoretically sound. For general multiobjective optimization, we perform empirical comparisons on BBOB benchmarks for biojective functions in a bayesian optimization setting, using classical Gaussian Process models <cit.>. When comparing EHVI with hypervolume scalarization approaches, we find that EHVI tends to limit its hypervolume gain by over-focusing on the central portion of the Pareto frontier, whereas the hypervolume scalarization encourages a diverse exploration of the extreme ends. From our broader analysis, we recommend the use of hypervolume scalarizations as a simple, general, efficient, non-adaptive method to perform various multiobjective optimization, even in complex settings, such as reinforcement learning. We believe that this is especially relevant given the modern era of learning algorithms that commonly makes tradeoffs between multiple objectives such as fairness, privacy, latency. § PROBLEM SETTING AND NOTATION We assume, for sake of normalization, that Θ_i^⋆≤ 1 and that a_t≤ 1, where · denotes the ℓ_2 norm unless otherwise stated. Other norms that are used include the classical ℓ_p norms ·_p and matrix norms x_ = x^⊤ x for a positive semi-definite matrix . For a scalarization function s_λ(x), s_λ is L_p-Lipschitz with respect to x in the ℓ_p norm on if for x_1, x_2 ∈, |s_λ(x_1) - s_λ(x_2)| ≤ L_p x_1 - x_2 _p, and analogously for λ. We let ^k-1 = { y ∈^k | y = 1, y > 0 } be the sphere in the positive orthant and by abuse of notation, we also let y ∼^k-1 denote that y is drawn uniformly on ^k-1. For two outputs y, z ∈⊆^k, we say that y is Pareto-dominated by z if y ≤ z and there exists j such that y_j < z_j, where y ≤ z is defined for vectors element-wise. A point is Pareto-optimal if no point in the output space can dominates it. Let ^⋆ denote the set of Pareto-optimal points (objectives) in , which is also known as the Pareto frontier. Our main progress metrics for multiobjective optimization is given by the standard hypervolume indicator. For S ⊆^k compact, let vol(S) be the regular hypervolume of S with respect to the standard Lebesgue measure. For ⊆^k, we define the (dominated) hypervolume indicator of with respect to reference point z as: _z() = vol({ x | x ≥ z, x is dominated by some y ∈}) We can formally phrase our optimization objective as trying to rapidly minimize the hypervolume (psuedo-)regret. 
Let be our action space and for some general multi-objective function F, let be the image of under F. Let _T be any matrix T actions and let _T = F(_T) ⊆^k be the k objectives corresponding. Then, the hypervolume regret of actions _T, with respect to the reference point z, is given by: Hypervolume-Regret(_T) = _z(^⋆) - _z(_T) §.§ Scalarizations for Multiobjective Optimization For multiobjective optimization, we generally only consider monotone scalarizers that have the property that if y > z, then s_λ(y) > s_λ(z) for all λ. Note this implies that an unique optimal solution to the scalarized optimization is on the Pareto frontier. A common scalarization used widely in practice is the linear scalarization: s^LIN_λ(y) = λ^⊤ y for some chosen positive weights λ∈^k. By Lagrange duality and hyperplane separation of convex sets, one can show that any convex Pareto frontier can be characterized fully by an optimal solution for some weight settings. However, linear scalarizations cannot recover the non-convex regions of Pareto fronts since the linear level curves can only be tangent to the Pareto front in the protruding convex regions (see fig:scalarizations). To overcome this drawback, another scalarization that is proposed is the Chebyshev scalarization: s^CS_λ(y) = iminλ_i y_i. Indeed, one can show that the sharpness of the scalarization, due to its minimum operator, can discover non-convex Pareto frontiers <cit.>. § HYPERVOLUME SCALARIZATIONS In this section, we show the utility and optimality of a related scalarization known as the hypervolume scalarization, s^HV_λ(y) = imin (y_i/λ_i)^k that was introduced in <cit.>. First, observe that this scalarization allows you to target a specific part of the Pareto frontier, which eliminates the need of adaptive targeting techniques that heuristically update parameters of the optimization objectives. The visualization of the non-linear level curves of the scalarization provides intuition that our scalarization targets the portion of the Pareto frontier in the direction of λ for any λ >0 (see Figure <ref>), as we can show that the tangent point of the level curves of the scalarization is always on the vector in the direction of λ. For any point y^⋆ on the Pareto frontier of any set that lies in the positive orthant, there exists λ > 0 such that y^⋆ = y∈max s^HV_λ(y). Furthermore, for any α, λ >0 such that αλ is on the Pareto frontier, then αλ∈y∈max s^HV_λ(y). Furthermore, this scalarization additionally has the special property that the expected scalarized value under a uniform weight distribution on ^k-1 gives the dominated hypervolume, up to a constant scaling factor. Intuitively, this lemma says that the optima of the hypervolume scalarization over some uniform choice of weights will be sufficiently diverse for any Pareto set so as to capture its dominated hypervolume. Let _T = {y_1,..., y_T} be a set of T points in ^k. Then, the hypervolume with respect to a reference point z is given by: _z(_T) = c_k _λ∼^k-1 [ max_y∈_T s^HV_λ(y - z) ] where c_k = π^k/2/2^k Γ(k/2+1) is a constant depending only on k. While this lemma captures useful properties of the scalarization in the infinite limit, we supplement it by showing that finite asymptotic bounds on the strongly sublinear convergence rate of using this scalarization in optimization. 
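The expectation identity in the lemma above immediately suggests a Monte Carlo estimator of the dominated hypervolume: sample weights uniformly on the positive portion of the unit sphere, scalarize, and average. The following Python sketch checks such an estimator against an exact two-dimensional sweep on a toy Pareto set; the sample count and toy points are arbitrary illustrative choices.

import numpy as np
from math import gamma, pi

def mc_hypervolume(Y, z, n_samples=100_000, seed=0):
    # Monte Carlo estimate of HV_z(Y) = c_k * E_lambda[ max_y min_i ((y_i - z_i)/lambda_i)^k ].
    rng = np.random.default_rng(seed)
    k = Y.shape[1]
    c_k = pi ** (k / 2) / (2 ** k * gamma(k / 2 + 1))
    lam = np.abs(rng.normal(size=(n_samples, k)))       # uniform directions on the positive
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)   # orthant of the unit sphere
    vals = np.max(np.min((Y - z)[None, :, :] / lam[:, None, :], axis=-1) ** k, axis=-1)
    return c_k * vals.mean()

def exact_hypervolume_2d(Y, z):
    # Exact dominated hypervolume in 2D by sweeping points sorted by the first objective.
    pts = sorted((tuple(p) for p in Y if (p > z).all()), reverse=True)
    hv, y_max = 0.0, z[1]
    for x, y in pts:             # descending in x: each new higher y adds a horizontal strip
        if y > y_max:
            hv += (x - z[0]) * (y - y_max)
            y_max = y
    return hv

Y = np.array([[1.0, 3.0], [2.0, 2.5], [3.0, 1.0]])
z = np.zeros(2)
print(exact_hypervolume_2d(Y, z), mc_hypervolume(Y, z))   # the two values should roughly agree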
In fact, while many scalarizations will eventually explore the whole Pareto frontier in the infinite limit, the rate at which the exploration improves the hypervolume is not known, and in the worst case might be exponentially slow. We show that the simple procedure of optimizing hypervolume scalarizations with a uniform weight distribution is a sublinear hypervolume regret multiobjective algorithm in that it satisfies O(T^-ϵ) hypervolume regret convergence rates when is known. We note that this rate is agnostic of the underlying optimization algorithm or objective function, meaning this is a general property of the scalarization. Our novel proof of convergence uses a symmetry argument and exploits the Lipschitz properties of s^HV_λ to derive generalization bounds via metric entropy. Proving smoothness properties of our hypervolume scalarizations for any λ > 0 with λ normalized on the unit sphere is non-obvious as s^HV_λ(y) depends inversely on λ_i so when λ_i is small, s^HV_λ might change wildly. However, the crucial observation is that λ_i being small makes it unlikely that it becomes the minimum coordinate, implying that it is not contributing to the scalarized value or its rate of change. Let _T = {y_1,..., y_T} be a set of T points in ^k such that y_i ∈y∈max s^HV_λ_i(y) with λ_i ∼ i.i.d. drawn. Then with probability at least 1-δ over the randomness of λ_i, the hypervolume of _T with respect to a reference point z satisfies sublinear hypervolume regret bounds _z(^⋆) - _z(_T) = O(T^-1/k+1 + √(ln(1/δ))T^-1/2) §.§ Lower Bounds and Optimality The dominating factor in our derived convergence rate is the O(T^-1/(k+1)) term and we show that this cannot be improved. Over all subsets _T ⊆ of size T, note that our optimal convergence rate is given by the the subset that maximizes the dominated hypervolume of _T, although finding this is in fact a NP-hard problem due to reduction to set cover. By constructing a lower bound via a novel packing argument, we show that even this optimal set would incur at least Ω(T^-1/k) regret, implying that our convergence rates, derived from generalization rates when empirically approximating the hypervolume, are optimal. There exists a setting of linear objective parameters Θ^⋆ and = { a : a = 1} such that for any actions _T, the hypervolume regret at z = 0 after T rounds is _z(Θ^⋆) - _z(Θ^⋆_T) = Ω(T^-1/k-1) §.§ Multiobjective Stochastic Linear Bandits We propose a simple scalarized algorithm for linear bandits and provide a novel ℓ_p analysis of the hypervolume regret that removes the polynomial dependence on k in the scalarized regret bounds. When combined with the ℓ_∞ sharpness of the hypervolume scalarization, this analysis gives an optimal O(d/√(T)) bound on the scalarized regret, up to log(k) factors. This log dependence on k is perhaps surprising but is justified information theoretically since each objective is observed separately. Our scalarized algorithm works despite of noise in the observations, which makes it difficult to even statistically infer measures of hypervolume progress. Many of the notation and intermediate theorems in this section are given in the Appendix. 
Algorithm ExploreUCB(T, , s_λ) (Scalarized UCB for Linear Bandits). Input: maximum number of actions T, weight distribution, scalarization s_λ. Initialize an iteration counter n = 1 and repeat until the number of actions exceeds T: (exploration step) play the exploration-basis action e_i for i ≡ n (mod d) and increment n ← n + 1; (exploitation step) let C_ti be the confidence ellipsoid for Θ_i, let UCB_i(a) = max_θ∈ C_tiθ^⊤ a, sample λ from the weight distribution, and play the action a^* that maximizes s_λ(UCB_1(a), ..., UCB_k(a)) over the action set. Let z ∈^k be a reference point such that, over all a ∈, B = min_a Θ^⋆ a - z is positive. Then, with constant probability, running alg:exploreucb with s^HV_λ(y) and the uniform weight distribution on the unit sphere gives a hypervolume regret bound of _z(Θ^⋆) - _z(Θ^⋆_T) ≤ O( c_k ((B+2)^k/B) k^(k/2-1) d √(log(kT)/T) + c_k ((B+2)^(2k+1)/B) k^(k-1/2) T^(-1/(k+1)) ). For k constant, the hypervolume regret satisfies _z(Θ^⋆) - _z(Θ^⋆_T) ≤ O( d T^-1/2 + T^-1/(k+1) ). § EXPERIMENTS In this section, we empirically justify our theoretical results by running alg:exploreucb with multiple scalarizations and weight distributions in different settings of multiobjective stochastic linear bandits. Our empirical results highlight the advantage of the hypervolume scalarization with uniform weights in maximizing the diversity and hypervolume of the resulting Pareto front when compared with other scalarizations and weight distributions, especially when there is a mild number of output objectives k. Our experiments are not meant to show that scalarization is the only or even the best way to solve multiobjective optimization; rather, it is a simple yet competitive baseline when optimal scalarizations and weight distributions are chosen for solving multiobjective optimization in a variety of settings. §.§ Stochastic Linear Bandits We compare the three widely used types of scalarizations mentioned previously: the linear, Chebyshev, and hypervolume scalarizations. Note that we use a slightly altered form of our hypervolume scalarization, s^HV_λ(y) = min_i y_i/λ_i, which is simply a monotone transform of the proposed scalarization and does not inherently affect the optimization. We set our reference point to z = -2 in each coordinate of the k-dimensional output space, since our action set { a : ‖a‖ = 1} and our norm bound on Θ^⋆ ensure that the rewards lie in [-1, 1]. In conjunction with the scalarizer, we use the weight distribution that samples vectors uniformly across the unit sphere. In addition, we also compare this with the bounding-box distribution method suggested by <cit.>, which samples each weight uniformly between the minimum and the maximum of each objective and requires some prior knowledge of the range of each objective <cit.>. We run our algorithm with inherent dimension d = 4 for T = 100, 200 rounds with k = 2, 6, 10. As expected, we find that the hypervolume scalarization consistently outperforms the Chebyshev and linear scalarizations, with the linear scalarization performing worst (see fig:scalarizationhv). Note that when we increase the output dimension of the problem by setting k = 10, the hypervolume scalarization shows a more distinct advantage. The boxed distribution approach of <cit.> does not seem to fare well and consistently performs worse than its uniform counterpart. While the linear scalarization provides relatively good performance when the number of objectives is k ≤ 5, it appears that as the number of objectives increases in multi-objective optimization, more care needs to be put into the design of scalarizations and their weights due to the curse of dimensionality, since the regions of non-uniformity grow exponentially.
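A minimal Python sketch of the ExploreUCB loop above for a finite action set is given below; the confidence-width multiplier, the noise level, the replacement of the fixed spanning exploration basis with uniformly random exploration actions, and the finite candidate set are simplifications of the stated algorithm rather than the paper's exact implementation.

import numpy as np

def s_hv(y, lam):
    # Hypervolume scalarization (the monotone variant min_i y_i / lambda_i used in the experiments).
    return np.min(y / lam, axis=-1)

def explore_ucb(actions, theta_star, T, noise=0.1, reg=1.0, beta=2.0, seed=0):
    # actions: (n, d) finite candidate set; theta_star: (d, k) true parameters (simulation only).
    rng = np.random.default_rng(seed)
    n, d = actions.shape
    k = theta_star.shape[1]
    V = reg * np.eye(d)                      # regularized design matrix
    S = np.zeros((d, k))                     # running sum of a_t y_t^T
    chosen, rewards = [], []
    for t in range(T):
        if t % 2 == 0:                       # exploration step (random in place of a spanning basis)
            a = actions[rng.integers(n)]
        else:                                # scalarized UCB step
            theta_hat = np.linalg.solve(V, S)                      # (d, k) ridge estimate
            V_inv = np.linalg.inv(V)
            width = beta * np.sqrt(np.einsum("nd,de,ne->n", actions, V_inv, actions))
            ucb = actions @ theta_hat + width[:, None]             # optimistic estimate per objective
            lam = np.abs(rng.normal(size=k)); lam /= np.linalg.norm(lam)
            a = actions[int(np.argmax(s_hv(ucb + 2.0, lam)))]      # shift by -z = 2 to keep arguments positive
        y = theta_star.T @ a + noise * rng.normal(size=k)          # noisy vector reward
        V += np.outer(a, a)
        S += np.outer(a, y)
        chosen.append(a); rewards.append(y)
    return np.array(chosen), np.array(rewards)

Tracking the dominated hypervolume of the collected rewards, for instance with the Monte Carlo estimator sketched earlier, reproduces the kind of comparison reported in the experiments below.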
§.§ BBOB Functions We empirically demonstrate the competitiveness of hypervolume scalarizations for Bayesian Optimization by comparing them to the popular BO method of EHVI <cit.>. Running our proposed multiobjective algorithms on the Black-Box Optimization Benchmark (BBOB) functions, which can be paired up into multiple bi-objective optimization problems <cit.>. Our goal is to use a wide set of non-convex benchmarks to supplement our experiments on our simple toy example of linear bandits. For scalarized approaches, we use hypervolume scalarizations with the scalarized UCB algorithm <cit.> with a constant standard deviation multiplier of 1.8 and all algorithms with use a Gaussian Process as the underlying model with a standard Matérn kernel that is tuned via ARD <cit.>. Our objectives are given by BBOB functions, which are usually non-negative and are minimized. The input space is always a compact hypercube [-5,5]^n and the global minima is often at the origin. For bi-objective optimization, given two different BBOB functions f_1, f_2, we attempt to maximize the hypervolume spanned by (-f_1(x_i), -f_2(x_i)) over choices of inputs x_i with respect to the reference point (-5, -5). We normalize each function due to the drastically different ranges and add random observation noise, as well as applying vectorized shifts. We run each of our algorithms in dimensions d = 8, 16, 24 and optimize for 160 iterations with 5 repeats. From our results, we see that both EHVI and UCB with hypervolume scalarizations are competitive on the BBOB problems but the scalarized UCB algorithm seems to be able to explore the extreme ends of the Pareto frontier, whereas EHVI tends to favor points in the middle (see Figure <ref>). From our experiments, this trend appears to be consistent across different functions and is more prominent as the input dimensions d increase, as shown by additional plots given in the appendix. unsrtnat § ADDITIONAL NOTATION FOR SECTION <REF> For various scalarizations and weight distributions, an related measure of progress that attempts to capture the requirement of diversity in the Pareto front is scalarized Bayes regret for some scalarization function s_λ. For some fixed scalarization with weights λ, s_λ : ^k →, we can define the instantaneous scalarized (psuedo-)regret as r(s_λ, a_t) = max_a ∈ s_λ(Θ^⋆ a) - s_λ(Θ^⋆ a_t) Since the scalarized regret is only a function of a single action a_t, it fails to capture the variety of solutions that we would optimize for in the Pareto frontier. To capture some notion of diversity, we must define progress with respect to a set of past actions _t. Generalizing the scalarized regret above, we can formulate the Bayes regret as an average of the scalarized regret over some distribution of non-negative weight vectors, λ∼. Specifically, we define the (scalarized) Bayes regret with respect to a set of actions _t to be: BR(s_λ, _t) = _λ∼ [ max_a∈ s_λ( F(a)) - max_a ∈_t s_λ (F(a))] = _λ∼[a ∈_tmin r(s_λ, a)] Unlike previous notions of Bayes regret in literature, we are actually calculating the Bayes regret of a reward function that is maximized with respect to an entire set of actions _t. Specifically, by maximizing over all previous actions, this captures the notion that during multi-objective optimization our Pareto set is always expanding. We will see later that this novel definition is the right one, as it generalizes to the multi-objective setting in the form of hypervolume regret. For theory, we use the classic stochastic linear bandit setting. 
For the single-objective setting, in round t = 1, 2, ..., T, the learner chooses an action a_t ∈^d from the action set and receives a reward y_t = ⟨θ^⋆, a_t ⟩ + ξ_t where ξ_t is i.i.d. 1-sub-Gaussian noise and θ^⋆∈ℝ^d is the unknown true parameter vector. In the multi-objective stochastic linear bandit setting, the learner instead receives a vectorized reward y_t = Θ^⋆ a_t + ξ_t, where Θ^⋆∈^k × d is now a matrix of k true parameters and ξ_t ∈^k is a vector of independent 1-sub-Gaussian noise. We also denote _t ∈^d × t to be the history action matrix, whose i-th column is a_i, the action taken in round i. Similarly, _t is defined analogously. Finally, for sake of analysis, we assume that contains an isotropic set of actions and specifically, there is ⊂ with size || = O(d) such that ∑_i e_i e_i^⊤≽1/2, where ≽ denotes the PSD ordering on symmetric matrices. This assumption is not restrictive, as it can be relaxed by using optimal design for least squares estimators <cit.> and the Kiefer-Wolfrowitz Theorem <cit.>, which guarantees the existence and construction of an uniform exploration basis of size O(poly(d)). § ADDITIONAL THEOREMS FOR SECTION <REF> By using the confidence ellipsoids given by the UCB algorithm, we can determine each objective parameter Θ^⋆_i, up to a small error. To bound the scalarized regret, we utilize the ℓ_p smoothness of s_λ, L_p, to reduce the dependence on k to be O(k^1/p), which effectively removes the polynomial dependence on k when p →∞. This is perhaps not surprising, since each objective is observed independently and fully, so the information gain scales with the number of objectives. Consider running ExploreUCB (alg:exploreucb) for T > max(k, d) iterations and for T even, let a_T be the action that maximizes the scalarized UCB in iteration T/2. Then, with probability at least 1-δ, the instantaneous scalarized regret can be bounded by r(s_λ, a_T) ≤ 10k^1/pL_p d √(log(k/δ) + log(T)/T) where L_p is the ℓ_p-Lipschitz constant for s_λ(·). Finally, we connect the expected Bayes regret with the empirical average of scalarized regret via uniform convergence properties of all functions of the form f(λ) = a ∈max s_λ(Θ^⋆ a). By using s^HV_λ and setting p=∞, we derive our final fast hypervolume regret rates for stochastic linear bandits, which is the combination of the scalarized regret rates and the hypervolume regret rates. Note that our analysis improve the scalarized regret rates by removing the polynomial dependence on k. Assume that for any a ∈, |s_λ(Θ^⋆ a)| ≤ B for some B and s_λ is L_λ-Lipschitz with respect to the ℓ_2 norm in λ. With constant probability, the Bayes regret of running alg:exploreucb at round T can be bounded by BR(s_λ, _T) ≤ O(k^1/pL_p d √(log(kT)/T) + BL_λ/T^1/k+1) § MISSING PROOFS Let λ = y^⋆/ y^⋆. Note that λ > 0 since y^⋆ is in the positive orthant and for the sake of contradiction, assume there exists z such that s_λ(z) > s_λ(y^⋆). However, note that for any i, z_i/λ_i≥min_i z_i/λ_i > min_i y^⋆_i/λ_i = y^⋆_i/λ_i, where the last line follows since y^⋆_i/λ_i = y^⋆ for all i by construction. Therefore, we conclude that y^⋆ < z, contradicting that y^⋆ is Pareto optimal. Finally, note that if αλ is on the Pareto frontier, then we see that min_i αλ_i/λ_i = α and furthermore, this min value is achieved for all i. Therefore, since αλ is on the Pareto frontier, any other point y ∈ has some coordinate j such that y_j < αλ_j, which implies that min_i y_i/λ_i < α. 
Let us first decompose _z(^⋆) - _z(_T) ≤ |_z(^⋆) - ∑_i=1^T max_y ∈ s_λ_i(y)| + |∑_i=1^T max_y ∈ s_λ_i(y) - _z(_T)| ≤ |_z() - ∑_i max_y ∈ s_λ_i(y)| + |∑_imax_y ∈_T s_λ_i(y) -_z(_T)| where the second inequality exploits the fact that y_i ∈y∈max s_λ_i(y). We proceed to bound both parts separately and we show that it suffices to prove uniform concentration of the empirical sum to the expectation, which is the hypervolume by Lemma <ref>. Let f_(λ_i) = max_y ∈ s_λ_i(y). We let = {f_ : ⊆} be our class of functions over all possible output sets . We will first demonstrate uniform convergence by bounding the complexity of . Specifically, by generalization bounds from Rademacher complexities <cit.>, over choices of λ_i ∼, we know that with probability 1-δ, for all , we have the bound |_λ∼[ f_] - 1/m∑_i=1^m f_(λ_i) | ≤ R_m() + √(8ln(2/δ)/m) where R_m() = _λ_i ∼, σ_i[ f ∈sup2/m∑_i σ_i f(λ_i)], where σ_i are i.i.d. ± 1 Rademacher variables. To bound R_m(), we appeal to Dudley's integral formulation that allows us to use the metric entropy of to bound R_m() ≤inf_α > 0(4 α + 12 ∫_α^∞√(log(𝒩(ϵ, , ·_2))/m) dϵ) where 𝒩 denotes the standard covering number for under the ℓ_2 function norm metric over λ∈. Since is the uniform distribution over , this induces a natural ℓ_∞ function norm metric on that is f_∞ = sup_λ∈ |f(λ)|. By Lemma <ref>, s_λ(y) is L_λ Lipschitz with respect to the Euclidean norm in λ. Note that since the maximal operator preserves Lipschitzness, f_(λ) is also L_λ-Lipschitz with respect to λ∈^k for any . Since contains L_λ-Lipschitz functions in ^k, we can bound the metric entropy via a covering of λ via a Lipschitz covering argument (see Lemma 4.2 of <cit.>), so we have log(𝒩(ϵ, , ·_2)) ≤log(𝒩(ϵ,,·_∞)) ≤ (4L_λ/ϵ)^k log(8/k) Finally, we follow the same Dudley integral computation of Theorem 4.3 of <cit.> to get that R_m() ≤inf_α > 0(4α + 12 ∫_α^2 √((4L_λ/ϵ)^k log(8/k)/T) dϵ) = O(L_λ/m^1/(k+1)) Therefore, we conclude that with probability at least 1-δ over the independent choices of λ_i ∼, for all and setting m = T |_λ∼[max_a∈ s_λ (Θ^⋆ a)] - 1/T∑_i=1^T max_a∈ s_λ_i(Θ^⋆ a) | ≤ O(BL_λ/T^1/(k+1)) + √(8ln(2/δ)/T) Finally, we conclude by using Lemma <ref> to replace the expectation by the hypervolume and by setting = , _T respectively. §.§ Proofs of Lipschitz Properties We utilize the fact that if s_λ is differentiable everywhere except for a finite set, bounding Lipschitz constants is equivalent to bounding the dual norm ∇ s_λ_q, where 1/p + 1/q = 1, which follows from mean value theorem, which we state as prop:lipschitz. Let f: → be a continuous function that is differentiable everywhere except on a finite set, then if ∇ f (x)_q ≤ L_p for all x ∈, f(x) is L_p-Lipshitz with respect to the ℓ_p norm. Let s_λ(y) = λ^⊤ y be the linear scalarization with λ≤ 1 and y_∞≤ 1. Then, we may bound L_p ≤max(1, k^1/2 - 1/p) and L_λ≤√(k) and |s_λ| ≤√(k). Since ∇_λ s_λ(x) = y, we use prop:lipschitz to bound L_λ≤max_yy≤√(k)y_∞ = √(k). Similarly, since ∇_y s_λ(y) = λ, we may bound for p ≤ 2, L_p ≤λ_q ≤λ≤ 1 for 1/p + 1/q = 1 and for p ≥ 2, we may use Holder's inequality to bound L_p ≤λ_q ≤ k^1/q - 1/2λ≤ k^1/2 - 1/p. To bound the absolute value of s_λ, note s_λ(y) = λ^⊤ y ≤√(k) for all since y_2 ≤√(k)y_∞≤√(k). Let s_λ(y) = min_i λ_i y_i be the Chebyshev scalarization with λ≤ 1 and y_∞≤ 1. Then, we may bound L_p ≤ 1 and L_λ≤ 1 and |s_λ| ≤ 1/√(k). For a specific λ, y, let i^⋆ be the optimal index of the minimization. Then, the gradient ∇_λ s_λ(x) is simply zero in every coordinate except at i^⋆, where it is y_i^⋆. 
Therefore, since we can only have a finite number of discontinuities due to monotonicity, we use prop:lipschitz to bound L_λ≤ y_i^⋆≤ 1. Similarly, since ∇_y s_λ(y) has only one non-zero coordinate except at i^⋆, which is λ_i^⋆, we may bound for L_q ≤λ_i^⋆≤ 1. To bound the absolute value of s_λ, note that there must exists λ_i < 1/√(k) as λ≤ 1. Thus, min_i λ_i y_i < 1/√(k) for y_∞≤ 1. Let s_λ(y) = min_i (y_i/λ_i)^k be the hypervolume scalarization with λ = 1 and 0 < B_l ≤ y_i ≤ B_u. Then, we may bound L_p ≤B_u^k/B_lk^k/2 -1 and L_λ≤B_u^k+1/B_l k^(k-1)/2 and |s_λ| ≤B_u^k/k^k/2. For a specific λ, y, let i^⋆ be the optimal index of the minimization. Then, the gradient ∇_λ s_λ(x) is simply zero in every coordinate except at i^⋆, which in absolute value is k (y_i^⋆/λ_i^⋆)^k(1/λ_i^⋆). Let j be the index such that λ_j is maximized and since λ =1, we know that λ_j ≥ 1/√(k). Therefore, we see that since y_i^⋆/λ_i^⋆≤ y_j/λ_j ≤ y_j /√(k), we conclude that 1/λ_i^⋆≤ (B_u/B_l)/√(k). Therefore, using prop:lipschitz, we have L_λ≤ k (y_i^⋆/λ_i^⋆)^k(1/λ_i^⋆) ≤ k(B_u/√(k))^k(B_u/B_l)/√(k) = B_u^k+1k^(k+1)/2/B_l k^(k-1)/2 And similarly, since ∇_y s_λ(y) has only one non-zero coordinate except at i^⋆, which is k (y_i^⋆/λ_i^⋆)^k-1 (1/λ_i^⋆), we may bound for L_q ≤ k (y_i^⋆/λ_i^⋆)^k-1(1/λ_i^⋆) ≤ k(B_u/√(k))^k-1(B_u/B_l)/√(k)≤B_u^k/B_lk^k/2 -1 To bound the absolute value of s_λ, note that s_λ(y) ≤ (y_j/λ_j)^k ≤B_u^k/k^k/2. §.§ Proofs for Linear Bandits The following lemma about the UCB ellipsoid is borrowed from the original analysis of linear bandits. Consider the least squares estimator θ_t = (_t)^-1_t^⊤_t, where the covariance matrix of the action matrix is _t = _t^⊤_t + λ, then with probability 1-δ, θ̂_̂t̂ - θ^*__t≤√(λ)θ^* + √(2 log(1/δ) + d log(T/λ)) Let Θ_T be the least squares estimate of the true parameters after observing (_T, _T). Since the noise ξ_t in each objective is independent and 1-sub-Gaussian, by lem:estimateparam, if we let _T = _T^⊤_T + λ, then with regularization λ = 1 Θ_Ti - Θ^⋆_i__T≤ 1 + √(2 log(k/δ) + d log(T)) := D_T holds with probability at least 1 - δ/k. Note that this describes the confidence ellipsoid, C_Ti = {θ∈^d : Θ_Ti - θ_i__T≤ D_T } for Θ_Ti. By the definition of the UCB maximization of a_t, we see a_t, Θ_t = _a ∈max_θ_i ∈ C_Ti s_λ (Θ_i^⊤ a). Note that since Θ^⋆∈_T, we can bound the instantaneous scalarized regret as: r(s_λ, a_t) = max_a ∈ s_λ(Θ^⋆ a) - s_λ(Θ^⋆ a_t) ≤ s_λ(Θ_t a_t) - s_λ(Θ^⋆ a_t) By the Lipschitz smoothness condition, we conclude that r(s_λ, a_t) ≤ L_p (Θ_t - Θ^⋆) a_t _p. To bound the desired ℓ_p norm, first note that by triangle inequality, Θ_t - Θ^⋆__T≤ 2 D_T. Since we apply uniform exploration every other step and ∑_i e_i e_i^⊤≽1/2 for e_i ∈ with size || = d, we conclude that _T ≽T/5d. Therefore, we conclude that Θ_Ti - Θ^⋆_i≤ 5√(d/T) D_T := E_T with probability at least 1-δ/k. Since a_t≤ 1, we conclude by Cauchy-Schwarz, that |(Θ_Ti - Θ^⋆_i)a_t| ≤ E_T. Together with our Lipschitz condition, we conclude that r(s_λ, a_t) ≤ k^1/p L_p E_T ≤ 10 k^1/p L_p d √((log(k/δ) + log(T))/T) . For any set of actions ⊆, we define f_ (λ) = max_a ∈ s_λ(Θ^⋆ a). We let = {f_ : ⊆} be our class of functions over all possible action sets and for any Bayes regret bounds, we will first demonstrate uniform convergence by bounding the complexity of . 
Specifically, by generalization bounds from Rademacher complexities <cit.>, over choices of λ_i ∼, we know that with probability 1-δ, for all , we have the bound |_λ∼[ f_] - 1/m∑_i=1^m f_(λ_i) | ≤ R_m() + √(8ln(2/δ)/m) where R_m() = _λ_i ∼, σ_i[ f ∈sup2/m∑_i σ_i f(λ_i)], where σ_i are i.i.d. ± 1 Rademacher variables. To bound R_m(), we appeal to Dudley's integral formulation that allows us to use the metric entropy of to bound R_m() ≤inf_α > 0(4 α + 12 ∫_α^∞√(log(𝒩(ϵ, , ·_2))/m) dϵ) where 𝒩 denotes the standard covering number for under the ℓ_2 function norm metric over λ∈. Since is the uniform distribution over , this induces a natural ℓ_∞ function norm metric on that is f_∞ = sup_λ∈ |f(λ)|. Since s_λ(Θ^⋆ a) is L_λ Lipschitz with respect to the Euclidean norm in λ. Note that since the maximal operator preserves Lipschitzness, f_(λ) is also L_λ-Lipschitz with respect to λ∈^k. Since contains L_λ-Lipschitz functions in ^k, we can bound the metric entropy via a covering of λ via a Lipschitz covering argument (see Lemma 4.2 of <cit.>), so we have log(𝒩(ϵ, , ·_2)) ≤log(𝒩(ϵ,,·_∞)) ≤ (4L_λ/ϵ)^k log(8/k) Finally, we follow the same Dudley integral computation of Theorem 4.3 of <cit.> to get that R_m() ≤inf_α > 0(4α + 12 ∫_α^2 √((4L_λ/ϵ)^k log(8/k)/m) dϵ) = O(L_λ/m^1/(k+1)) Therefore, we conclude that with probability at least 1-δ over the independent choices of λ_i ∼, for all , |_λ∼[max_a∈ s_λ (Θ^⋆ a)] - 1/m∑_i=1^m max_a∈ s_λ_i(Θ^⋆ a) | ≤ O(BL_λ/m^1/(k+1)) + √(8ln(2/δ)/m) Finally, note that for T even, with constant probability, BR(s_λ, _t ) = _λ∼ [r(s_λ, _t)] = _λ∼[ max_a ∈ s_λ(Θ^⋆ a) - max_a∈_T s_λ(Θ^⋆ a)] ≤1/T/2∑_i=1^T/2[max_a ∈ s_λ_i(Θ^⋆ a) - max_a ∈_T s_λ_i (Θ^⋆ a) ] + O(BL_λ/T^1/(k+1)) ≤1/T/2∑_i=1^T/2[max_a ∈ s_λ_i(Θ^⋆ a) - s_λ_i (Θ^⋆ a_2i) ] + O(BL_λ/T^1/(k+1)) ≤1/T/2∑_i=1^T/2 r(s_λ_i, a_2i) + O(BL_λ/T^1/(k+1)) ≤ O(k^1/pL_p d √(log(kT)/T) + BL_λ/T^1/(k+1)) where the last line used lem:instantregret with δ = 1/T^2 and applied a union bound over all O(T) iterations. Note that by lem:hypervolume, we connect the Bayes regret to the hypervolume regret for : _z(Θ^⋆) - _z(Θ^⋆_t) = c_k _λ∼ [ max_a ∈ s_λ (Θ^⋆ a) - max_a ∈_T s_λ(Θ^⋆ a)] where s_λ(y) = min_i (y_i - z_i/λ_i)^k. Note that since Θ^⋆ a_∞≤ 1 for all a∈ and B is maximal, we have B ≤Θ^⋆ a - z ≤ B + 2. Therefore, we conclude by lem:hypervolumelipschitz that s_λ is Lipschitz with L_p ≤(B+2)^k/B k^k/2 -1 , L_λ≤(B+2)^k+1/B k^(k-1)/2, |s_λ| ≤(B+2)^k/k^k/2 Finally, we combine this with thm:bayesregret with p = ∞ as the optimal choice of p (since L_p does not depend on p) to get our desired bound on hypervolume regret. We let = { a : a≤ 1 } be the unit sphere and Θ^⋆_i = e_i be the unit vector directions. Note that in this case the Pareto frontier is exactly ^k-1. Consider a uniform discretization of the Pareto front by taking an ϵ grid with respect to each angular component with respect to the polar coordinates. Let p_1,...,p_m be the center (in terms of each of the k - 1 angular dimensions) in the m = Θ((1/ϵ)^k-1) grid elements. We consider the output _T = Θ^⋆_T and assume that for some grid element i, it contains none of the T outputs _T. Since our radial component r = 1, by construction of our grid in the angular component, we deduce that min_t y_t - p_i _∞ > ϵ/10 by translating polar to axis-aligned coordinates. Let ϵ' = ϵ/10. Assume also that 1/k < p_i < 1- 1/k. Next, we claim that the hypercube from p_i - ϵ'/k^2 to p_i is not dominated by any points in _T. Assume otherwise that there exists y_t such that y_t ≥ p_i - ϵ'/k^2. 
Now, this, combined with the fact that min_t y_t - p_i _∞ > ϵ', implies that there must exist a coordinate j such that y_tj≥ p_j + ϵ'. However, this implies that ∑_i=1^k y_ti^2 ≥∑_i≠ j (p_i - ϵ'/k^2)^2 + (p_j + ϵ')^2 ≥∑_i p_i^2 - 2(ϵ'/k^2) ∑_i ≠ j p_i + 2ϵ' p_j > 1 where the last inequality follows since ∑_i p_i < 1/√(k) and p_j > 1/k by assumption. However, this contradicts that y_t≤ 1, so it follows that p_i - ϵ' is not dominated. Therefore, for any grid element such that p_i > 1/k, if there is no y_t in the grid, we must have a hypervolume regret of at least Ω(ϵ'^k) = Ω(ϵ^k) by simply considering the undominated hypervolume from p_i to p_i - ϵ', which lies entirely within the grid element. In fact, since there are Θ((1/ϵ)^k-1) such grid elements satisfying p_i > 1/k, we see that if T < O((1/ϵ)^k-1), then by pigeonhole, there must be a hypervolume regret of at least Ω((1/ϵ)^k-1ϵ^k) = Ω(ϵ). Therefore, for any 1/2 > ϵ > 0, _z(Θ^⋆) - _z(Θ^⋆_T) < ϵ implies that T = Ω((1/ϵ)^k-1). Rearranging shows that _z(Θ^⋆) - _z(Θ^⋆_T) = Ω(T^-1/(k-1)).
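For reference, a schematic Python sketch of the explore/UCB alternation analyzed in these proofs is given below. It is not the paper's implementation: the confidence width is a heuristic scalar, the optimistic value adds the width to every coordinate rather than maximizing over the full confidence ellipsoid, and a Chebyshev scalarization stands in for whichever scalarization s_λ is drawn each round; all constants and problem sizes are illustrative.

import numpy as np

rng = np.random.default_rng(1)
d, k, T, sigma = 4, 2, 400, 0.1
Theta_star = rng.standard_normal((k, d))
Theta_star /= np.linalg.norm(Theta_star, axis=1, keepdims=True)    # unknown parameters
actions = rng.standard_normal((256, d))
actions /= np.linalg.norm(actions, axis=1, keepdims=True)           # finite action set on the sphere

V = np.eye(d)                 # regularized design matrix V_t = sum a a^T + I
S = np.zeros((d, k))          # running sum of a_t y_t^T
beta = 2.0                    # heuristic confidence-width multiplier
chosen = []

for t in range(T):
    if t % 2 == 0:
        a = np.eye(d)[(t // 2) % d]                    # uniform exploration over a basis
    else:
        Theta_hat = np.linalg.solve(V, S).T            # per-objective least-squares estimates, (k, d)
        lam = np.abs(rng.standard_normal(k))
        lam /= np.linalg.norm(lam)                     # fresh weight vector each round
        mean = actions @ Theta_hat.T                   # (n, k) estimated rewards
        width = beta * np.sqrt(np.einsum('nd,dc,nc->n', actions, np.linalg.inv(V), actions))
        optimistic = mean + width[:, None]             # crude per-coordinate optimism
        a = actions[np.argmax((lam * optimistic).min(axis=1))]   # scalarized UCB choice
    y = Theta_star @ a + sigma * rng.standard_normal(k)           # noisy vector reward
    V += np.outer(a, a)
    S += np.outer(a, y)
    chosen.append(a)

The output set appearing in the regret definitions corresponds to the list of chosen actions accumulated above.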
http://arxiv.org/abs/2307.01750v2
20230704143959
SRCD: Semantic Reasoning with Compound Domains for Single-Domain Generalized Object Detection
[ "Zhijie Rao", "Jingcai Guo", "Luyao Tang", "Yue Huang", "Xinghao Ding", "Song Guo" ]
cs.CV
[ "cs.CV", "cs.LG" ]
SRCD: Semantic Reasoning with Compound Domains for Single-Domain Generalized Object Detection Zhijie Rao, Jingcai Guo, Member, IEEE, Luyao Tang, Yue Huang, Xinghao Ding, and Song Guo, Fellow, IEEE Z. Rao, L. Tang, Y. Huang, and X. Ding are with the School of Information Science and Engineering, Xiamen University, Xiamen 361005, China (e-mail: raozhijie@stu.xmu.edu.cn; lytang@stu.xmu.edu.cn; huangyue05@gmail.com; dxh@xmu.edu.cn). J. Guo and S. Guo are with the Department of Computing, The Hong Kong Polytechnic University, Hong Kong SAR, China, and with The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen 518057, China (e-mail: jc-jingcai.guo@polyu.edu.hk; song.guo@polyu.edu.hk). Corresponding authors: Jingcai Guo and Yue Huang. August 1, 2023 This paper provides a novel framework for single-domain generalized object detection (i.e., Single-DGOD), where we are interested in learning and maintaining the semantic structures of self-augmented compound cross-domain samples to enhance the model's generalization ability. Different from DGOD, which is trained on multiple source domains, Single-DGOD is far more challenging, as it must generalize well to multiple target domains from only one single source domain. Existing methods mostly adopt a treatment similar to DGOD and learn domain-invariant features by decoupling or compressing the semantic space. However, there may be two potential limitations: 1) pseudo attribute-label correlation, due to extremely scarce single-domain data; and 2) the semantic structural information is usually ignored, i.e., we found the affinities of instance-level semantic relations in samples are crucial to model generalization. In this paper, we introduce Semantic Reasoning with Compound Domains (SRCD) for Single-DGOD. Specifically, our SRCD contains two main components, namely, the texture-based self-augmentation (TBSA) module, and the local-global semantic reasoning (LGSR) module. TBSA aims to eliminate the effects of irrelevant attributes associated with labels, such as light, shadow, color, etc., at the image level by a light-yet-efficient self-augmentation. Moreover, LGSR is used to further model the semantic relationships on instance features to uncover and maintain the intrinsic semantic structures. Extensive experiments on multiple benchmarks demonstrate the effectiveness of the proposed SRCD. Single-Domain Generalization, Transfer Learning, Object Detection, Semantic Reasoning. § INTRODUCTION Object detection aims to identify and localize the target of interest in the scene. Previous detection techniques based on deep convolutional networks have made tremendous progress in the past few years <cit.>. 
However, existing off-the-shelf detectors still suffer from distribution discrepancy, i.e., domain shift, which is caused by common factors such as different weather conditions, regions, or styles. When the training and testing data are not independent and identically distributed (non-i.i.d.), the performance of the detector may degrade dramatically <cit.>. In this regard, a series of studies has been conducted on how to suppress the effect of domain shift and learn a generalized model. Among them, unsupervised domain adaptation (UDA) is a widely investigated direction that learns to relieve the impact of domain shift for object detection by attempting to transfer shared knowledge from a labeled source domain to an unlabeled target domain. Although some works have obtained promising results <cit.>, UDA relies heavily on the strong assumption that the target domain is accessible during training, which is hardly satisfied or applicable in real-world scenarios. To address this issue, domain generalization (DG) follows a more practical setting of cross-domain learning without access to the target domain <cit.>. Specifically, the goal of DG-based object detection (DGOD) is to train a sufficiently generalized model on several available source domains and evaluate it directly on the target domain. However, existing DGOD methods mostly require the support of multiple source domains, which is usually costly and time-consuming in data collection and annotation. In recent years, some research considers a more challenging and practical problem setup in DGOD, where a sufficiently generalized and robust detection model is trained with only one single source domain, namely, Single-DGOD. Existing Single-DGOD methods can be categorized into three mainstreams, including feature regularization <cit.>, feature decoupling <cit.>, and consistency constraints <cit.>, all of which share a treatment similar to conventional DGOD that regularizes and encourages the models to learn domain-invariant feature representations. Despite the simplicity and effectiveness of existing off-the-shelf detectors, such methods overlook two critical issues that may limit the generalization ability of learned models and, in turn, degrade the detection performance: (1) First, since only one source domain is available during training, the extremely scarce data may bring about some pseudo attribute-label correlations among samples. For example, vehicles on sunny days are usually accompanied by shadows, which may not appear on rainy days (Fig. <ref> (a)). In this regard, the learned models may tend to adopt the source domain-specific features together with real domain-invariant features to construct the detector and, therefore, introduce significant bias to the learned models. (2) Worse still, we found that the semantic structural information in samples is usually ignored. Notably, we observe that the affinities of instance-level semantic relations in samples are of crucial importance to the model generalization ability. Such semantic relationships are reflected both intra- and inter-class (Fig. <ref> (b)). For example, bicycles with the same view should be semantically closer than those with different views. In contrast, bicycles and motorcycles are quite similar to each other, and thus they should be semantically closer than any pair with cars. Such a structural relationship does not change with domain changes. 
Recent studies <cit.> have also shown that maintaining the semantic structures inherently between samples can facilitate the cross-domain reasoning ability of the model. To address the above issues, we propose a novel single-DGOD framework that utilizes semantic reasoning with compound domains, termed SRCD, to learn generalized representation by uncovering and maintaining the semantic relation with self-augmented compound domains. Specifically, our method consists of two components including texture-based self-augmentation (TBSA) and local-global semantic reasoning (LGSR). The goal of TBSA is to eliminate the interference of low-order information such as textures for category determination, i.e., pseudo-association between irrelevant attributes and labels. To achieve this goal, TBSA grafts the style of local patches to the whole image, changing the image style while preserving the semantic information to avoid overfitting the model to the source domain. Meanwhile, grey level co-occurrence matrix (GLCM) <cit.> is introduced to evaluate the texture complexity of the patterns to filter out valuable patches. TBSA provides abundant style-diverse augmented samples, converting the single source domain into compound domains. Moreover, the goal of LGSR is to uncover and learn the latent semantic structures from the compound domains and empower the model to reason by maintaining semantic relationships. LGSR is composed of two parts, namely, local semantic reasoning (LSR) and global semantic reasoning (GSR). LSR utilizes weighted attribute similarity to develop accurate semantic relations among samples. Concretely, the feature is decomposed into several attributes, and the weights of the attributes are determined by the average intra-class similarity, thus suppressing the influence of class-irrelevant attributes. In contrast, GSR models the relation among the local prototypes of each class and facilitates the interaction of classes across domains. The prototypes aggregate semantic information from multiple samples, expanding the perceptual scope to the semantic space. Our contributions can be summarized as follows: * We delve into a practical and challenging topic, Single-DGOD, which is still relatively less investigated. Meanwhile, we propose a novel framework named SRCD to address two issues, one for eliminating the pseudo-correlation link between the single-source domain-specific attributes and category labels, and the other for maintaining the inherent semantic structural relationships among samples. * SRCD consists of two key components, TBSA and LGSR. TBSA aims to eliminate the influence of irrelevant attributes such as texture, light, and shadow on category determination from the image level. The single-source domain is transformed into the compound domains by style transformation. LGSR uncovers the potential semantic structure among samples on instance-level features and activates the reasoning ability of the model by maintaining semantic relationships, thus enhancing cross-domain generalization. * We evaluate the performance of our method under a variety of condition settings, including different weather, different cities, and virtual-to-reality situations. The experimental results demonstrate the effectiveness of the proposed SRCD. § RELATED WORK §.§ Domain Generalization Domain generalization aims to train a robust enough model on several available source domains to generalize to unseen target domains <cit.>. 
The mainstream DG methods can be broadly classified into domain augmentation, distribution alignment and feature decoupling. Domain augmentation methods include image-wise augmentation <cit.> and feature-wise augmentation <cit.>. For example, Zhou et al. <cit.> leverage generative adversarial networks to generate samples with different styles, while Li et al. <cit.> synthesizes new forms of distributions based on the principle that feature statistics can characterize the style of an image. The core idea of distribution alignment is to drive the model to extract domain-invariant features. To achieve this goal, it is common practice to use metric learning <cit.> or adversarial learning <cit.> to bridge the feature differences across domains. For example, Li et al. <cit.> align different distributions by constraining the Maximum mean discrepancy (MMD) to be minimized. Matsuura et al. <cit.>, on the other hand, set up a domain classifier and reduce the distribution discrepancy by adversarial learning. Feature decoupling aims to decouple sample features into domain-invariant and domain-specific parts. For example, Khosla et al. <cit.> decompose the network parameters into domain-invariant and domain-specific lower-order components. Piratla et al. <cit.> develop a new method based on a low-public specific low-rank decomposition algorithm to adjust the final classification layer of the network. In addition, Carlucci et al. <cit.> propose a jigsaw game to improve the generalization of the model. Chen et al. <cit.> use the graph convolutional network to empower the inter-class inference ability of the model. Although all the above DG methods have achieved sound results, they are difficult to be directly applied to the areas of one single domain or object detection. §.§ Single-Domain Generalization Single-domain generalization considers the case where only one source domain is used for training. Most current methods <cit.> are customized for the classification task and cannot be easily transferred to the object detection task. For Single-DGOD, Wu et al. <cit.> are the first to propose the Single-DGOD problem and provide a circular self-decoupling scheme. The scheme filters noisy information by secondary decoupling of domain-invariant features and, meanwhile, imposes consistency constraints on multilevel features. Zhao et al. <cit.> propose a unified framework to learn generalized feature representations by means of dual consistency constraints on stylized and historical samples. Their methods focus on how to learn compact semantic representation, but ignore the attribute pseudo-correlation and the semantic structure. Our approach strives to learn invariant representation while maintaining intra- and inter-class semantic relationships, enabling the model to possess semantic reasoning capability. §.§ Multi-Domain Generalized Object Detection Lin et al. <cit.> are the first to explore the ability to enhance model generalization across domains in the object detection task. They present a novel feature decoupling framework that decomposes pixel-level and semantic-level features into domain-dependent and domain-independent parts. Zhang et al. <cit.> propose gated decoupling networks, whose main idea is to adaptively turn on or off certain feature channels, thus focusing on the domain-invariant parts. Although their approaches have achieved promising results, they all rely on multiple domains as well as domain labels for support and are not applicable to the single-domain problem. 
§ METHODOLOGY The overall framework of the proposed method is shown in Fig. <ref>. Our method is based on Faster-RCNN <cit.> framework. Firstly, TBSA converts the input images into different styles by weak augmentation and strong augmentation, encouraging the model to utilize domain-invariant information for discrimination. Subsequently, the feature extraction network acquires the instance-level features of the images. LSR creates a refined relational graph of the samples within the batch. The correct semantic links between samples are obtained by attribute association analysis. GSR models global relationships among the current and historical category prototypes with a wider perceptual field. LGSR promotes the generalization ability of the model by uncovering and maintaining the semantic structure among samples in the compound domains. §.§ Texture-Based Self Augmentation Extracting domain-invariant information from a single source domain is critical and difficult. One of the challenges is the potential pseudo-correlation between source domain-specific attributes and labels. For example, vehicles in sunny days have shadows while cloudy and rainy days do not. TBSA aims to remove the pseudo-correlation between irrelevant attributes such as light and shadow, color, and labels at the image level. The phase spectrum of an image is known to carry more high-level semantic information, which is often not easily changeable. The magnitude spectrum, on the other hand, represents lower-order information such as texture <cit.>. Our goal is to encourage the model to learn semantic knowledge and ignore low-order information. To this end, TBSA performs pixel-level perturbation with the texture of image patches to force the model to focus on semantic content. The illustration of TBSA is shown in Fig. <ref>. We observe that the background of an image occupies more area in the object detection task. The elements in the background are complex and variable, and they possess rich textures, colors, etc. Therefore, the image itself is a texture library for augmentation. A simple approach is to randomly select a patch from an image and transpose its texture to the whole image. However, some patches are too plain, such as the sky and the road. We need to design a selection mechanism to evaluate the complexity of the texture to filter out the plain patches. For this purpose, we introduce the gray level co-occurrence matrix (GLCM) <cit.>, a traditional texture evaluation method. For a RGB image X∈ℝ^3× H× W, where H, W indicate the height and width. Firstly convert X to a grayscale image X̅∈ℝ^H× W, then its GLCM is represented as, G(i,j|d,θ) = ∑_a=0^H∑_b=0^W(X̅_a, b=i)(X̅_a+dcosθ,b+dsinθ=j), where d denotes the relative distance and θ denotes the direction. Different GLCM can be obtained by setting d and θ. In this paper, we set d=1 and θ=0^o. The GLCM describes the location distribution properties of pixels and its statistics are often used to quantify the texture characteristics of an image. The entropy of the GLCM measures the randomness and complexity of an image. The formula is, ENT = -∑_i=0∑_j=0G(i,j)log(G(i,j)). We use ENT to measure the complexity of the image to filter out plain patches. For one patch P from the image X, if ENT(P) < ENT(X), then discard this patch and reselect. Once a suitable patch is selected, the picture is augmented by Fourier transformation. 
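A minimal numpy sketch of this patch-selection step is given below (it is not the authors' released code): the gray-level quantization to 32 bins, the patch size, and the retry limit are illustrative simplifications, while the acceptance test mirrors the rule ENT(P) ≥ ENT(X) described above.

import numpy as np

def glcm_entropy(gray, d=1, levels=32):
    # GLCM for offset (d, theta=0), i.e., horizontal neighbours, followed by
    # the entropy statistic ENT = -sum_ij G(i, j) log G(i, j)
    q = np.clip((gray.astype(np.float64) / 256.0 * levels).astype(int), 0, levels - 1)
    left, right = q[:, :-d].ravel(), q[:, d:].ravel()
    G = np.zeros((levels, levels))
    np.add.at(G, (left, right), 1.0)
    G /= G.sum()                            # normalize to a joint distribution
    nz = G[G > 0]
    return float(-(nz * np.log(nz)).sum())

def select_patch(gray, patch_size=64, max_tries=20, rng=None):
    # resample random patches until one is at least as textured as the whole image
    rng = np.random.default_rng() if rng is None else rng
    ent_full = glcm_entropy(gray)
    H, W = gray.shape
    for _ in range(max_tries):
        top = int(rng.integers(0, H - patch_size))
        left = int(rng.integers(0, W - patch_size))
        patch = gray[top:top + patch_size, left:left + patch_size]
        if glcm_entropy(patch) >= ent_full:
            break
    return gray[top:top + patch_size, left:left + patch_size]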
Specifically, the image X and the patch P are firstly Fourier transformed to obtain their respective amplitude spectra Amp(X), Amp(P) and phase spectra Pha(X), Pha(P). Then we perform the mixup strategy on their amplitude spectra and generate the augmented image by inverse Fourier transformation. The formula is expressed as, M_amp = (1-ϕ)Amp(X)+ϕ Amp(P), X_aug = ℱ^-1(M_amp,Pha(X)), where X_aug denotes the augmented image and ℱ^-1 is inverse Fourier transformation. ϕ is a random number. At each iteration, we perform two kinds of augmentation on the input image, weak augmentation, i.e. ϕ∈[0, 0.5) and strong augmentation i.e. ϕ∈[0.5, 1) and horizontal flip. §.§ Local-Global Semantic Reasoning Benefiting from TBSA, the single source domain is converted into the compound domains. An intuitive idea is to align the features from different domains to make the semantic space more compact, which is a common practice in many studies <cit.>. However, such alignment approaches only emphasize intra-class distance but ignore the latent semantic structure. The reduced discrepancy between features may come from class-irrelevant attributes, which is caused by attribute pseudo-association. Meanwhile, the semantic association between classes is not taken into account. To this end, we propose LGSR to uncover the semantic relation among samples. We argue that maintaining such semantic relations while narrowing the gap between domains facilitates the model to perform semantic reasoning, thus improving cross-domain generalization. LGSR consists of two parts, Local Semantic Reasoning (LSR) and Global Semantic Reasoning (GSR). §.§.§ Local Semantic Reasoning LSR models the samples of the current batch. In each iteration, we feed an image X to the network. The image is augmented by TBSA to obtain two samples X_1, X_2 with the same semantics but from different domains. The samples are fed into the convolutional network and then the instance (semantic) features are extracted by ROI-Pooling. We define their semantic features as 𝕍_1={v_1^1, v_2^1, ..., v_m^1|v∈ℝ^C× H× W}, 𝕍_2={v_1^2, v_2^2, ..., v_n^2|v∈ℝ^C× H× W}, where m, n denotes the number of their instances. C represents the number of feature channels. Here, H and W denote the height and width of the feature map. Taking the instances in 𝕍_1,𝕍_2 as nodes, our goal is to construct a relation graph across domains. The relation among the nodes is quantified by the cosine similarity of the features. However, as mentioned before, class-irrelevant attributes may mislead the relation construction. To this end, we decompose the features into several attributes and estimate the importance of each attribute separately. Specifically, for an instance feature v∈ℝ^C× H× W}, it is flattened to v∈ℝ^CHW} and decomposed into k segments. Then an instance feature is denoted as, v = [v̅_1, v̅_2, ..., v̅_k], v̅∈ℝ^CHW/k stands for attribute. Then the distance between any two instance features v_i, v_j is expressed as, S(v_i∈ Q, v_j)=∑_g=1^kε_g^Qcos(v̅_g^i, v̅_g^j)/∑_g=1^kε_g^Q. In the above equation, S(·) denotes the feature distance, Q denotes the category label, that is, S is related to the category Q and S(v_i, v_j)≠ S(v_j,v_i). ε^Q=[ε_1^Q,...,ε_k^Q] denotes the weights of the attributes with respect to the category Q. Next, we describe how to calculate ε^Q. For any one attribute ε_g^Q, it is calculated by, ε_g^Q = 1/|v∈ Q|∑_v_i∈ Q,v_j∈ Qcos(v_g^i,v_g^j). 
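As an aside, the weighted attribute similarity just defined can be sketched in a few lines of PyTorch (hypothetical helper names, not the released implementation); the per-class weights are estimated here from the batch exactly as in the averaging rule above, leaving aside the smoothing discussed next.

import torch
import torch.nn.functional as F

def attribute_similarity(v_i, v_j, eps_q, k=4):
    # S(v_i, v_j) = sum_g eps_g^Q cos(a_g^i, a_g^j) / sum_g eps_g^Q, where the
    # flattened instance features are split into k attribute segments
    a_i = v_i.reshape(k, -1)
    a_j = v_j.reshape(k, -1)
    cos = F.cosine_similarity(a_i, a_j, dim=1)            # (k,)
    return (eps_q * cos).sum() / eps_q.sum()

def batch_attribute_weights(features, labels, k=4):
    # eps_g^Q: average intra-class cosine similarity of each attribute segment,
    # computed from the current batch (normalized here by the number of pairs)
    eps = {}
    for q in labels.unique().tolist():
        v = features[labels == q]
        a = F.normalize(v.reshape(v.shape[0], k, -1), dim=2)    # (n_q, k, D/k)
        sims = torch.einsum('nkd,mkd->knm', a, a)               # per-attribute pairwise cosines
        eps[q] = sims.mean(dim=(1, 2))                          # (k,)
    return eps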
The reason for measuring the attribute weights in terms of average intra-class similarity is that, we argue that the average similarity will be smaller for complex and variable attributes such as background. However, the value of ε^Q cannot be accurately estimated from just one batch of data, so we update it with an exponential moving average strategy, ε^Q_(t) = (1-γ)ε^Q_(t-1)+γε^Q_(t), where t denotes the number of training iteration and γ is the exponential decay rate and we set it to 0.99. Then we instantiate the relation graph with an adjacency matrix, 𝒜^ℒ=[ 0_m× m S(𝕍_1, 𝕍_2)_m× n; S(𝕍_2, 𝕍_1)_n× m 0_n× n ], where 0 denotes the all-zero matrix. Finally, with the relation graph, we can perform information fusion to obtain new features with respect to the semantic structure, 𝕍_graph=(𝒜^ℒ+I)⊗𝕍, where 𝕍=[𝕍_1, 𝕍_2], ⊗ denotes the matrix multiplication and I denotes a diagonal matrix with diagonal elements of 1 and the remaining elements of 0. The new and original features are fed into the downstream network together for classification. Their classification outputs are denoted as O_graph^L and O^L, and we narrow the gap between them by minimizing the Kullback–Leibler (KL) divergence. The loss function is defined as, ℒ_KL^L=KL(O_graph^L||O^L). The classification loss of the new features is, ℒ_CL^L=CE(softmax(O_graph^L),y^L), where CE is the cross-entropy loss. y^L denotes the category label shared with 𝕍. Then the total loss is defined as, ℒ_LSR=ℒ_KL^L+ℒ_CL^L. §.§.§ Global Semantic Reasoning Limited by the batch size, LSR only models relationships for a small portion of samples and lacks the capability to perceive the global structure. To facilitate global perception and cross-domain interaction, GSR models the relation of local prototypes and historical prototypes. The prototype aggregates information from multiple samples. The class prototype of category Q is defined as, 𝒫^Q=1/|v∈ Q|∑_v_i∈ Qv_i. Let 𝒫_1={𝒫_1^Q_1,...,𝒫_1^Q_r} denote the current prototype set and r denotes the number of categories contained in the set. To broaden the perceptual field, we use a memory pool to cache the latest Z prototype sets, i.e., historical prototypes. Let 𝒫_2, 𝒫_3, ..., 𝒫_Z+1 denote historical prototype sets. Our goal is to construct a global relation graph with all prototypes of all prototype sets as nodes. Then the relation between any two prototypes is measured by the cosine similarity. However, as the training process advances forward, the cached historical prototypes may expire. Therefore, the pure cosine similarity cannot represent the relationship between the two well. To solve this problem, we weigh it by the storage time. Specifically, we use T_1, T_2, T_3, ⋯, T_z+1 to denote the length of time that each prototype set has been stored, i.e., the number of training iterations. Obviously, the value of T_1 is 0. Then the distance between any two prototypes 𝒫_i, 𝒫_j is expressed as, S̅(𝒫_i, 𝒫_j)=exp^-|T_i-T_j|/τ·cos(𝒫_i, 𝒫_j), where τ is a temperature coefficient and we set it to Z. Let 𝒫̂ denote the super set containing all prototypes and the adjacency matrix 𝒜^𝒢 denote the global relation graph. 𝒜^𝒢 is computed by Eq. <ref>. Then the structured prototype features are represented as, 𝒫̂_graph=𝒜^𝒢⊗𝒫̂. The subsequent processing is the same as LSR. We naturally obtain the classification loss, ℒ_CL^G=CE(softmax(O^G),y^G)+CE(softmax(O_graph^G),y^G), and the KL loss, ℒ_KL^G=KL(O_graph^G||O^G). The meaning of the symbols is analogous to the LSR section and no more tautology here. 
Finally, the total loss is, ℒ_GSR=ℒ_CL^G+ℒ_KL^G. §.§ Overall Objective Suppose the basic optimize objective of Faster-RCNN <cit.> is defined as ℒ_det, including the classification and bounding box regression losses. The overall objective function is formulated as: ℒ_SRCD=ℒ_det+λℒ_LSR+βℒ_GSR, where λ and β are hyper-parameters. § EXPERIMENTS §.§ Datasets To fully evaluate the effectiveness of our approach, we conducted experiments on multiple datasets, including different weather, different cities, and virtual-to-reality datasets. Different Weather: Daytime-Sunny contains 19,395 training images collected from the BDD100K dataset <cit.> under clear weather during the daytime. Night-Sunny contains 26,158 images from clear weather at night, which are also sampled from the BDD100K dataset. Dusk-Rainy and Night-Rainy are collected from rainy weather and include 3501 and 2494 images, respectively. Finally, Daytime-Foggy contains a total of 3775 images of foggy days, which are collected from the FoggyCityscapes <cit.> and Adverse-Weather <cit.> datasets. All datasets share 7 categories. Following CDSD <cit.>, we use Daytime-Sunny as the source domain to train the model and then directly test on other domains. Different City: Cityscapes <cit.> is collected from Germany and surrounding countries and contains 2975 images. BDD100K <cit.> is collected from the US and contains 100k images, of which we use 47,060 images from daytime clear weather. KITTI <cit.> is also collected from Germany and contains 7,481 labeled images. Cityscapes and BDD100K share 7 categories. For KITTI, we only report the results of Car. We use Cityscapes as the source domain and the other two as the target domains. Virtual-To-Reality: Sim10K <cit.> contains 10,000 virtual images rendered by the computer game Grand Theft Auto V (GTA V). We use Sim10K as the source domain, and Cityscapes, BDD100K, and KITTI as the target domains, and report the results of Car. §.§ Implementation Details Following CDSD <cit.>, we use the Faster-RCNN <cit.> with the pre-trained Resnet-101 <cit.> as backbone to experiment. The batch size is set to 1 and the shorter side of the input image is resized to 600. We optimize the network with stochastic gradient descent(SGD) with a momentum of 0.9 and the initial learning rate is set to 1e-3, which is decreased to 1e-4 after 5 epochs. The number of attributes k is set to 4, and the size of the memory bank Z is set to 10. The hyper-parameters λ and β are fixed to 0.1 and 0.01. We report the mean average precision (mAP) with an IoU threshold of 0.5. All experiments are implemented by the Pytorch framework and trained with a TITAN RTX GPU. §.§ Main Results We compare with the following methods. SW <cit.>, IBN-Net <cit.>, IterNorm <cit.>, ISW <cit.> improve the generalization of the model by designing different feature regularization methods. CDSD <cit.> uses cyclic self-decoupling to extract domain-invariant features. SHADE <cit.> learns robust feature representations by means of dual consistency constraints. We use their released code for experiments. Different Weather. Table <ref> shows the experimental results under different weather conditions. Daytime-Sunny is used as the source domain and other datasets are used as the target domains. It can be seen that there is a huge domain shift between clear and severe weather. Our method achieves the best results on three of the datasets. In particular, on the Daytime-Foggy dataset, our method outperforms Faster-RCNN by 4% and outperforms CDSD by 2.4%. 
Compared to the current state-of-the-art methods CDSD and SHADE, our method improves by 2.4% and 2.5%, respectively. The experimental results indicate that obtaining pure domain-invariant representations from the single-source domain is extremely challenging. And our method effectively mitigates the interference of irrelevant attributes on the discrimination, while the semantic relationship modeling has a beneficial effect on model transferability. Different City. Due to the different styles of architecture, roads, etc., there is a domain shift problem between different cities. We conducted cross-domain experiments accordingly. Table <ref> shows the results of Cityscapes to BDD100K, and it can be seen that our method achieves the best results, getting a 3% gain on the category of rider and an overall gain of 1.4%. The CDSD based on feature decoupling does not bring significant gain effect, which may be owing to the fact that the model misclassifies the source domain-specific features as domain-invariant features, indicating that a single-minded pursuit of invariant features may be a suboptimal choice. Table <ref> shows the experimental results of Cityscapes to KITTI. Since both datasets are from the same city, Faster-RCNN provides a strong baseline. Compared to other methods, our method still provides a small performance improvement. Virtual-To-Reality. Inherent differences in distribution exist between synthetic and real data. To investigate the generalizability of the model on synthetic data, we conducted the corresponding experiments, and the results are shown in Table <ref>. Our method leads the other methods by a large margin. In particular, on the KITTI dataset, the proposed SRCD outperforms Faster-RCNN by 13.4% and SHADE by 4.8%. On the Cityscapes dataset, our method is higher than Faster-RCNN by 8.7%, demonstrating that our method effectively models the potential target domains and learns robust features through relational modeling. §.§ Further Empirical Analysis Ablation study. To further investigate the effectiveness of the individual components, a series of ablation experiments are performed. The experimental results are shown in Table <ref>, where w/o indicates the removal of the component. It can be seen that removing any one of the components degrades the performance, demonstrating that the design of each component is reasonable. Comparison with domain adaptation methods. To further explore the generalizability of the model, we conducted comparative experiments on Sim10K-to-Cityscapes with domain adaptation methods, which require access to the target domain during the training phase. The experimental results are shown in Table <ref>, and we can see that our method is ahead of them, demonstrating the strong cross-domain generalization ability of our method. It also shows that the generalization of the model has not been fully exploited. Visualization of patch selection. We performed a visual analysis of the TBSA module, and the results are shown in Fig. <ref>. We can see that the texture of the discarded patches is relatively smooth and the pattern lacks variation. In addition, it can be observed that elements such as sky and road occupy most of the area of the image and are more likely to be selected randomly. Therefore, the GLCM-based filtering mechanism is reasonable. Qualitative detection results. Fig. <ref> demonstrates some detection results under different weather conditions. Compared with Faster-RCNN, our method reduces the chance of wrong and missed detections. 
For example, the baseline method treats the near light as the foreground (third column) and the distant vehicles as the background (second column). For small objects in the distance and some blurred objects, our method performs a more accurate detection. This demonstrates that our approach improves the model's resistance to irrelevant attributes, resulting in better cross-domain detection performance. § CONCLUSION In this work, we delve into the problem of model generalization under a single source domain for object detection. We argue that existing methods lack the exploration of two key issues: first, the pseudo-correlation link between irrelevant attributes and labels, and second, the semantic structural relationships between samples that can facilitate the model's generalization ability. To this end, we propose a novel framework named SRCD, consisting of two key components, TBSA and LGSR. TBSA exploits the characteristic that the magnitude spectrum carries more domain-relevant information to change the image style, forcing the model to focus on domain-invariant information. Based on the stochasticity of TBSA, the single-source domain is transformed into the compound domains. LGSR aims to uncover and learn the semantic relationships among samples from the compound domains and empower the model to reason by maintaining a robust semantic structure, thus enhancing the model's ability to generalize across domains. Experiments on multiple benchmarks demonstrate the effectiveness of our approach.
http://arxiv.org/abs/2307.00193v1
20230701020234
Fast, Smooth, and Safe: Implicit Control Barrier Functions through Reach-Avoid Differential Dynamic Programming
[ "Athindran Ramesh Kumar", "Kai-Chieh Hsu", "Peter J. Ramadge", "Jaime F. Fisac" ]
cs.SY
[ "cs.SY", "cs.RO", "eess.SY" ]
Fast, Smooth, and Safe: Implicit Control Barrier Functions through Reach-Avoid Differential Dynamic Programming Athindran Ramesh Kumar, Kai-Chieh Hsu, Peter J. Ramadge, and Jaime F. Fisac All authors are with the Department of Electrical and Computer Engineering, Princeton University, New Jersey, USA. email: arkumar@princeton.edu Safety is a central requirement for autonomous system operation across domains. Hamilton-Jacobi (HJ) reachability analysis can be used to construct “least-restrictive” safety filters that result in infrequent, but often extreme, control overrides. In contrast, control barrier function (CBF) methods apply smooth control corrections to guard the system against an often conservative safety boundary. This paper provides an online scheme to construct an implicit CBF through HJ reach-avoid differential dynamic programming in a receding-horizon framework, enabling smooth safety filtering with infinite-time safety guarantees. Simulations with the Dubins car and 5D bicycle dynamics demonstrate the scheme's ability to preserve safety smoothly without the conservativeness of handcrafted CBFs. Autonomous systems, Reach-avoid analysis, Control barrier functions § INTRODUCTION Through improved sensors, computation, learning, and decision-making techniques, autonomous robotic systems are becoming increasingly capable. However, deploying these systems in safety-critical environments requires robust fail-safe methods to ensure the avoidance of catastrophic failure states. To enforce safe behavior, various modern techniques rely on finding a safe set , that is, a subset of non-failure states such that from each state in there exists a control strategy that will keep the state in , i.e., is controlled-invariant. Hamilton-Jacobi (HJ) reachability analysis provides a constructive approach to obtain the maximal controlled-invariant safe set <cit.>. Since there generally exist no closed-form solutions, a state-space discretization is often used to compute the value function and control policy using numerical dynamic programming (DP). These solvers <cit.> scale poorly with state space dimension n and are impractical for n≥ 5. Several approximate methods <cit.> have been proposed to overcome this issue. The safety controls obtained from reachability solvers are subsequently used in a “least-restrictive” (LR) framework where the safety control is only applied if the system attempts to cross the zero-level set of the HJ value function. While attractive in theory, this minimally invasive scheme can yield abrupt “panic” behaviors in practice, as it waits until the last chance before applying the safety control. Control barrier functions (CBFs) <cit.> offer an alternative approach to guarantee safety. Given a known safe set Ω, these functions are designed to guarantee its invariance while working in conjunction with a quadratic program (CBF-QP). The CBF-QP modifies the control well before the boundary of Ω, and a constraint on the derivative of the CBF prevents rapid movement towards the boundary, resulting in smoother trajectories than the LR approach. However, control barrier functions have two drawbacks. First, they are typically handcrafted for conservative invariant sets Ω⊂Ω^*. 
Second, the CBF-QP can become infeasible in the presence of tight actuation constraints, endangering safety; a backup policy is then needed to ensure persistent feasibility <cit.>. In contrast, the invariant sets obtained via HJ reachability analysis are maximal and guarantee feasibility, but the LR controls can be extreme. To bridge both frameworks, <cit.> introduces a control-barrier value function (CBVF) derived using HJ reachability theory. The CBVF is computed offline using numerical dynamic programming. The warm-start approach of <cit.> alleviates the high computational burden by using a learned or handcrafted candidate CBF as initialization and iteratively refining it through numerical HJ analysis, asymptotically converging to a valid infinite-time CBF. However, numerical methods generally scale poorly to systems beyond 6 continuous state dimensions and are challenging for real-time applications. Here, we propose a provably safe online scheme, which constructs an implicit CBF at each control cycle through reach-avoid differential dynamic programming (DDP). Our CBF-DDP scheme enables the fast runtime computation of control solutions that vary smoothly along system trajectories and are guaranteed to persistently enforce safety at all future times. To our knowledge, this work provides the first general online constructive approach for CBFs leveraging HJ safety analysis. Our race car simulations with a 5-D state space demonstrate smoother safety filtering than LR while overcoming the conservativeness of handcrafted CBFs. § PRELIMINARIES Let ⊆^ be a non-empty compact set and consider a discrete-time system with state ∈⊆^ and control input ∈, evolving as _+1 = (_, _). Let [, ] := {, +1, ⋯, } and _: denote the set of input sequences _: := ( _, ⋯, _). Let ( ; _0, ), ≥0, denote a state trajectory starting from _0 under control policy → and dynamics . We sometimes abbreviate ( ; _0, ) as _. For an open set ⊆ of failure states, there always exists a Lipschitz-continuous function → such that () < 0 ∈ (a signed distance function). §.§ Controlled-invariant Set A (closed) set ⊂ is controlled-invariant under policy if for all _0 ∈ and all ≥ 0, (; _0, ) ∈. The goal of safety analysis is to find a controlled-invariant set , and its corresponding , such that ∩ = ∅. In this context, is called the safe set. Additionally, to reduce conservativeness, we desire to be as large as possible (ideally, close to the maximal safe set). Once such a safe set is constructed, a least-restrictive safety filter can be applied to any performance-driven control policy →, enforcing the safety-oriented control () whenever using () would lead to leaving the safe set : c () := { [ (), ( , () ) ∈; (), otherwise. ] . §.§ Control Barrier Functions Control barrier functions (CBFs) provide Lyapunov-like conditions to guarantee the forward invariance of a (safe) set and have been applied in various safety-critical control problems <cit.>. <cit.> extend the CBF initially developed for the continuous-time dynamic systems to the discrete-time domain. Formally, they consider a continuously differentiable function → whose zero superlevel set is the safe set ⊆, i.e., = {∈|() ≥ 0}. A function is a discrete-time exponential CBF for dynamics  if, for some γ∈ (0, 1] and for each ∈, there exists a control ∈ such that ( (, ) ) ≥γ() <cit.>. In other words, such control input keeps the state trajectory inside  and ensures that its boundary can only be approached asymptotically. 
We can incorporate this CBF constraint into an online trajectory optimization as rcc () = _∈ - () _2^2 ( (, ) ) ≥γ(). If dynamics are control-affine and the CBF is quadratic, (<ref>) can be solved with a quadratically constrained quadratic program (QCQP) <cit.>. This enables real-time computation even for high-dimensional dynamics, making it attractive for many robotics applications. However, finding a valid CBF for general dynamic systems is difficult. Although there are successes of heuristically constructed CBFs, these do not recover the maximal safe set and may be overly conservative <cit.>. Further, the safety guarantee is contingent on the feasibility of the quadratic program, which could be lost in the presence of tight actuation limits or other control constraints. §.§ Hamilton-Jacobi Reachability Analysis Hamilton-Jacobi (HJ) reachability analysis defines an optimal control problem to find the maximal safe set (or viability kernel) . The value function × [0, ] →, (_, ) := max_min_∈ [, ]((-; _, )) encodes the minimum safety margin that the safety policy can maintain over all times up to a planning horizon . The value function is then computed by finding the solution of the finite-horizon terminal-value problem with safety Bellman equation <cit.> rcl (_, ) := min{ (_), max__∈ ((_, _), +1) },  (_, ) := (_). Note that this value function only allows finite-horizon safety. So in practice, HJ reachability analysis usually considers a sufficiently long horizon to guarantee infinite-horizon safety, i.e., ^∞() := lim_→∞max_min_∈ [0, ]((; , )). Numerical level-set methods have been developed to compute the HJ value function, and require discretizing the state space into a finite grid; thus, computation scales exponentially with the dimensionality of the state space, becoming practically prohibitive beyond 6 continuous state variables <cit.>. If this value function can be computed, we can retrieve the viability kernel by := {∈|^∞() ≥ 0 } and the safety policy () = _∈^∞((, )). Additionally, wherever this value function is differentiable, it is a valid CBF; thus, HJ reachability analysis provides a principled way to construct a CBF. Due to the maximumminimum operator in (<ref>), ^∞( (, ()) ) ≥^∞(), the safety policy  always prevents the safety value from decreasing. Thus, this HJ value function is often paired with the least-restrictive control law in (<ref>), only overriding the proposed control input near the boundary of the safe set  <cit.>. However, this least-restrictive scheme can result in jerky control switches and is susceptible to numerical issues near the safe set boundary. In contrast, CBF-constrained optimization (<ref>) provides a smoother transition at the cost of additional restrictions to the control input away from the safe set boundary. §.§ Model Predictive Control Model predictive control (MPC) <cit.> verifies the system's safety on the fly instead of relying on a pre-computed value function or CBF. In the context of safety filters, MPC can be used to optimize a control sequence such that the first control is the closest to the task-oriented control but satisfies all the constraints below cl min__|, _| (_) - _0 | ∀∈[0, -1] _0 | = _, _+1 | = ( _|, _| ), ∉, ∈, ∈. In addition to the dynamics constraint (<ref>) and state and control constraints (<ref>), MPC adds a terminal constraint (<ref>) confining the final state to a target ⊂, rendered controlled-invariant by a known . After the optimization, only the first control will be executed, i.e., (_) = _0 |. 
At the next time step +1, (<ref>) is solved again with a shifted time horizon. Importantly, constraint (<ref>) ensures recursive feasibility, as we can repeat the control sequence (_1:-1 |, (_|)) at +1. This recursive feasibility in turn guarantees all-time safety. §.§ Reach-Avoid Analysis MPC requires the system's state to be inside a known controlled-invariant set exactly at the end of the planning horizon. However, we can loosen this requirement by considering a reach-avoid problem, which finds a control policy to guide the system's state into a (closed) target set at any time within the planning horizon without breaching the constraints at prior times. Specifically, the reach-avoid set for horizon is := { ∈|∃, ∈[0, ], (; , ) ∈∧∀∈[0, ], (; , ) ∉}. Introducing an additional Lipschitz-continuous target margin function → such that () ≥ 0 ∈, we can follow an analogous formulation to <ref>, defining the reach-avoid objective along horizon [, ] for arbitrary ∈ [0, ]: _ := max_∈ [, ]min{( _), min_∈ [, ]( _) } with _ = (-; _, ). The associated value function (_, ) :=max__ satisfies the reach-avoid Bellman equation <cit.>: (_, ) = min{ (_), max{ ( _), max__∈ ((_, _), +1) } } , with terminal condition (_, ) = min{(_), (_) }. The reach-avoid set (<ref>) is characterized by the sign of the value function at =0, as (, 0) ≥ 0 ∈. Recent work addresses the curse of dimensionality (the exponential scaling of computation) with approximate methods such as state-space decomposition <cit.>, deep reinforcement learning <cit.>, sum-of-squares (SOS) optimization <cit.>, or differential dynamic programming (DDP) <cit.>, in particular, using iterative linear-quadratic (ILQ) optimization <cit.>. § REACH-AVOID CONTROL BARRIER FUNCTIONS While least-restrictive safety frameworks provide maximum flexibility to task-driven control policies, they may result in abrupt control switches. On the other hand, handcrafted CBFs are often overly conservative and difficult to design for high-dimensional systems. All of the above methods rely on offline computation and suffer under unforeseen scenarios, such as obstacles emerging into the system's field of view. This section introduces a novel scheme to systematically construct control barrier functions at runtime by computing a reach-avoid value function through differential dynamic programming. Specifically, we solve a receding-horizon ILQ optimization problem until convergence and use the resulting value function to construct a quadratically constrained quadratic program (QCQP), allowing for a smooth transition between task-oriented and safety-oriented control. Unlike handcrafted CBFs, the resulting implicit barrier is general and can be constructed online. We prove that the proposed CBF-DDP ensures infinite-horizon safety through recursively feasible finite-horizon planning by reaching a controlled-invariant target set. 
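Before introducing the LQ-based construction, the discrete reach-avoid backup above can be illustrated on a toy gridded example; the sketch below is purely illustrative (1-D dynamics, hand-picked margin functions and horizon) and is exactly the kind of exhaustive tabulation that the receding-horizon ILQ approximation in the next subsection avoids.

import numpy as np

# toy 1-D system x_{t+1} = x_t + u on a grid: the failure set is |x| > 1.5 and the
# target set is |x - 0.8| <= 0.3 (both margins and the horizon are arbitrary choices)
xs = np.linspace(-2.0, 2.0, 201)
us = np.array([-0.1, 0.0, 0.1])
g = 1.5 - np.abs(xs)                    # safety margin: g(x) >= 0 outside the failure set
l = 0.3 - np.abs(xs - 0.8)              # target margin: l(x) >= 0 inside the target set
N = 60

def next_index(i, u):
    return int(np.argmin(np.abs(xs - (xs[i] + u))))    # nearest-grid-point transition

V = np.minimum(l, g)                    # terminal condition: min{l(x), g(x)}
for _ in range(N):                      # reach-avoid Bellman backup
    V_new = np.empty_like(V)
    for i in range(len(xs)):
        best = max(V[next_index(i, u)] for u in us)
        V_new[i] = min(g[i], max(l[i], best))
    V = V_new

reach_avoid = xs[V >= 0]                # states that can reach the target without failing
print(reach_avoid.min(), reach_avoid.max())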
§.§ Barrier Safety Filter with Reach-Avoid Value Function In this paper, we construct an implicit control barrier function by computing the LQ approximation of the HJ reach-avoid value function (<ref>). The reach-avoid value function is constructed in real-time using ILQ optimization. We begin by laying out the theoretical foundations of our approach. In order to provide infinite-horizon safety guarantees, we assume the existence of a known target set with the following two properties: (1) the target set does not intersect with the failure set, ∩=∅, and (2) the target set is controlled-invariant under a known policy ^. We note that the above are mild assumptions, satisfied by broad classes of systems of practical interest: for example, many robots and vehicles are able to come to a stop, or alternatively transition into a safe loiter orbit, after which the system can remain safe indefinitely under benign engineering assumptions. While more sophisticated methods may be used to determine the controlled-invariant regions of attraction around these states (e.g. Lyapunov analysis), here we illustrate a simpler approach. We construct such a target set using a conservative stopping policy , which brings the robot to a complete halt and maintains the robot at rest. The target margin function can be rapidly computed from each state as the signed distance to the failure set along the robot's hypothetical stopping path from that state. Being in the target set, then, is equivalent to being able to safely come to a halt using . Specifically, is defined as: ( _) = min_≥( ( -; _, ) ). At each state x_t, we are provided with an arbitrary task-oriented control ^ proposed by the task policy . For the corresponding control cycle , we formulate a constrained program, which seeks to minimize deviation from ^ while satisfying a control barrier constraint as shown below: cc min_ + ∈ 1/2 _2^2 (1, 0) ≥γ(_, 0),  where 1 = (_, ^+), (_, 0) is the reach-avoid value (<ref>) at the current state and (1, 0) is the reach-avoid value at the state reached after applying control +. Generally, computing these two values is computationally prohibitive for real-time decision-making. Instead, we use DDP to compute the quadratic approximation of the reach-avoid value function and the (locally) optimal safe control. This LQ approximation provides a principled way to construct a CBF-type constraint. Additionally, we apply this constraint in a receding-horizon fashion. We compute the LQ approximation for two value functions at each time step, denoted as ^0 and ^1. Specifically, ^0 optimizes the control sequence locally to maximize the outcome in (<ref>). In contrast, ^1 optimizes the same objective but fixes the first control to the arbitrary task-oriented control ^. The ILQ gives us not only the (locally) optimal safe control (from ^0) but also a quadratic approximation of the reach-avoid value function after we take the task-oriented control (from ^1). At each time step , we denote 0 = _, ^ = (_), 1 = (_, ^), and 1 = (_, ^+). We have We use the iterative linear-quadratic approach outlined in Alg. <ref> to compute these value functionsdue to its light-weight computation. Additionally, (1, 0) in (<ref>) is approximated by (1, 0) ≈(1, 0) + 1^⊤ + 1/21^⊤1, where 1 = (_, ) and 1 = 1 - 1≈ is obtained from dynamics Jacobian := ∂/∂ |__,. Then, (<ref>) is approximated as a quadratic constraint: ^⊤+ ^⊤ + ≥ 0, where = 1/2^⊤ is symmetric, = ^⊤, and = (1, 0) - γ(_, 0). 
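To make the construction concrete, the sketch below assembles (P, p, c) from the value gradient and Hessian returned by the LQ approximation and solves the resulting constrained projection of the control correction. The numerical inputs, the control limits, and the use of a generic SciPy solver are illustrative assumptions rather than the actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def quadratic_constraint(V1, V0, V_x, V_xx, B, gamma):
    """Coefficients of du^T P du + p^T du + c >= 0 from the LQ approximation."""
    P = 0.5 * B.T @ V_xx @ B      # B is the control Jacobian of the dynamics
    p = B.T @ V_x
    c = V1 - gamma * V0
    return P, p, c

def filter_control(u_task, P, p, c, u_lo, u_hi):
    """min ||du||^2 subject to the quadratic constraint and (shifted) control limits."""
    cons = {"type": "ineq", "fun": lambda du: du @ P @ du + p @ du + c}
    bounds = [(lo - ut, hi - ut) for lo, hi, ut in zip(u_lo, u_hi, u_task)]
    res = minimize(lambda du: du @ du, np.zeros_like(u_task),
                   bounds=bounds, constraints=[cons])
    return u_task + res.x         # executed control: task control plus correction

# Illustrative numbers for a two-dimensional control:
P, p, c = quadratic_constraint(V1=0.05, V0=0.08,
                               V_x=np.array([0.3, -0.1]),
                               V_xx=-0.2 * np.eye(2),
                               B=np.eye(2), gamma=0.8)
u = filter_control(np.array([0.2, 0.0]), P, p, c,
                   u_lo=[-0.5, -0.8], u_hi=[0.5, 0.8])
```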
Thus, (<ref>) is reduced to a quadratically constrained quadratic program (QCQP) as shown below: cc min_ + ∈ _2^2 ^⊤+ ^⊤ + ≥0.   The final executed control is _ = (_) := ^ + ^∗, where ^∗ is an optimal solution to (<ref>). Henceforth, we represent the parameters of the QCQP in (<ref>) as =(, , ). For the QCQP with parameters := (, , ) in (<ref>), if is negative definite (≺ 0), then optimal solution ^∗ is unique and continuous in . Additionally, if = 0, then ^∗ is unique and Lipschitz continuous in . If =0, the CBF-QP has a known closed-form solution. Thm. 3 in <cit.> proves Lipschitz continuity. If ≺ 0, the QCQP is a strictly convex QP with a strictly convex constraint. Thus, ^∗ is unique. The level sets of the constraint are bounded ellipsoids and are hence compact sets. The objective and quadratic constraint of (<ref>) are continuous in ,. From Berge's maximum theorem <cit.>, the set of optimizers is compact and upper hemi-continuous (UHC). Since ^∗ is unique, UHC is equivalent to continuity. Given ∀,, almost everywhere (a.e.), ∈^2 with respect to (w.r.t.) and ∈^2 w.r.t. (, ), is continuous in and ≼ 0, then ∃ unique ^∗ continuous in a.e.. Given , is ^2 a.e., , is ^2 at (, ()) almost surely. Hence, := (, , ) is continuous in a.e.. Using Lemma <ref> and the continuity of composition of continuous functions, ^∗ is unique, continuous in a.e.. A sufficient condition for continuous ^∗ is if , is ^2. We choose , , to be piecewise ^2 to aid this requirement. To use Lemma <ref>, we check whether ≺ 0 and switch to solving with a linear constraint if this condition is violated. In practice, and are chosen as the maximum of multiple safety constraints that are independently ^2. There can be isolated points of discontinuity in . Compared to the least-restrictive baseline, these discontinuities appears away from the boundary of and does not result in chatter along the boundary. Suppose _∈ and _∉, then constraint (<ref>) is feasible. A feasible solution is =- with ^=max_∈((_, ), 1). Since _∈, we have (_, 0) ≥ 0. Also, since _∉, (_)<0. Consequently, from (<ref>), (_, 0) = min{ (_), max_∈((_, ), 1) }≤max_∈((_, ), 1). Now, using (<ref>) and (<ref>), for any _0, l ( _0, 0 ) = max_ max_∈[0, ] min{ ( _), min_∈[0, ] ( _) } ≥max_ max_∈[0, -1] min{ ( _), min_∈[0, ] ( _) } = ( _0, 1 ), where _ = (; _0, ). From above observations, we note: max_∈((_, ), 0) ≥max_∈((_, ), 1) > γ(_, 0). for any γ∈[0, 1 ). This ensures that ∃ u ∈ ∃ + ∈ which guarantees the feasibility of (<ref>), e.g., =-. with a feasible solution ^ With the above Lemmas, we can prove our results of interest. Suppose that ∩=∅ and is controlled-invariant. If _∈, then ∃_ such that _+1 = (_, _) ∈. Furthermore, ^ is a feasible solution for _+1∈. If _∈, the theorem is proven by choice of being controlled-invariant with a known policy ^. By definition, ^ achieves ((_, ^), 1) ≥((_, ^(_)), 1) ≥ 0. If _∉, feasibility of (<ref>) from Lemma <ref> ensures (_+1, 0) ≥γ(_, 0) ≥ 0 and therefore _+1∈. Choosing ()=^ is sufficient for feasibility of (<ref>). Given _0∈, Alg. <ref> ensures _∈,∀. Further, Alg. <ref> maintains a fallback policy ensuring safety for infinite-horizon. Suppose _∈, then from Thm.  <ref>, we get _+1∈. Given _0∈, we see _∈,∀. Also, given, _∈, ∃ that ensures (-; _, ) ∈ for some ∈ [, +]. Alg. <ref> maintains a sequence of fallback policies constructed by Alg. <ref> capable of guiding the robot inside the target set. The construction of is valid by choice of using closed-loop policies from DDP until entering and thereafter. Given ∀,, a.e., ∈^2 w.r.t. 
, ∈^2 w.r.t. (, ) and is continuous in , then Alg. <ref> provides _ continuous in _ a.e. Alg. <ref> solves (<ref>) iteratively until (<ref>) is satisfied or the computational budget is exhausted. In each QCQP iteration, ^∗ is continuous in a.e. from Lemma <ref>. If the QCQP is not feasible, the reduction to QP is guaranteed to be feasible with ^∗ continuous in _ a.e.. Given _, 1=(_) + ^∗ is a continuous function of _ a.e. The final ^∗ after multiple QCQP iterations is continuous if at each QCQP iteration, ,∈^2 at (_, 1). Alg. <ref> almost surely does not encounter a discontinuity during the QCQP iterations. Thus, Alg. <ref> provides _ continuous in _ a.e. §.§ Implementation Details The flow of our CBF-DDP is outlined in Alg. <ref>[The sequence of fallback policies can be initialized with 0:-1.]. Each time, the core subprocedure of the algorithm solves the QCQP in (<ref>). Since this QCQP only solves a local quadratic approximation of the non-linear constraint in (<ref>), it necessitates repeatedly updating the initial control and re-solving the QCQP until (<ref>) is satisfied. To reduce the number of QCQPs we solve, we use a scaling parameter ∈ on . With <0, by scaling higher than needed, we get closer to satisfying (<ref>) faster. This inner loop is allowed to find a feasible solution within five attempts or is terminated. If QCQP cannot find a feasible solution and ^1<0 (1, 0) < 0, we use , thus guaranteeing fully safe operation. This fallback never occurs in the simulations as long as γ is chosen large enough that does not decrease rapidly to zero. § SIMULATION RESULTS In many applications, such as autonomous driving, it is desirable to create a smooth safety filter that maintains safety desiderata. The least-restrictive safety filter causes jerky behavior, which affects the comfort of the rider in a car. We show our CBF-DDP method to result in a smoother and more desirable behavior. Our CBF-DDP algorithm uses ∈ [1.2, 1.3] in all our runs. In all simulations, we use the Runge–Kutta method (RK4) for integration with sampling time δ t=0.05. §.§ Dubins Car We illustrate CBF-DDP in a Dubins car with velocity fixed at 0.7. The equations of motion are ẋ = vcos(θ), ẏ = vsin(θ), θ̇=u_1. A circular footprint with a radius of 0.15 is chosen for the ego-vehicle. The steering control limits are [-1.0, 1.0]. We use a test case where the Dubins car goes around an obstacle. The task-oriented policy is to turn in the direction facing the goal. In this setup, we don't use a reach-avoid objective function. We use in a reachability <cit.> formulation where the car is constrained to be in a set from which it avoids all obstacles. The comparison of CBF-DDP with LR-DDP is illustrated in Fig. <ref> with =40. We see that LR-DDP filter frequently switches between the task and safety control near the unsafe boundary, resulting in jerky movements. In contrast, the CBF-DDP filter activates earlier and smoothly controls the car. The number of QCQP iterations is limited to 2. We also compare it to a manual CBF that is known to be sub-optimal. The manual CBF provides smooth controls but is more conservative because of the sub-optimality. §.§ Kinematic Bicycle Dynamics We consequently test on dynamics with two controls, i.e., acceleration and steering. The equations of motion are ẋ = vcos(θ), ẏ = vsin(θ), v̇=u_1, θ̇=v/Ltanδ, δ̇=u_2. The car can slow down and curve around to avoid obstacles. A circular footprint with a radius of 0.1 is chosen for ego-vehicle. 
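For reference, a minimal sketch of these dynamics and of the RK4 integration step used in the simulations (sampling time 0.05) is given below; the wheelbase L is not stated in the text and is an assumed value here.

```python
import numpy as np

DT = 0.05          # RK4 sampling time used in the simulations
WHEELBASE = 0.5    # L in the dynamics; assumed value for illustration

def bicycle_dynamics(state, control):
    """state = (x, y, v, theta, delta), control = (u1, u2) = (acceleration, steering rate)."""
    x, y, v, theta, delta = state
    u1, u2 = control
    return np.array([v * np.cos(theta),
                     v * np.sin(theta),
                     u1,
                     v / WHEELBASE * np.tan(delta),
                     u2])

def rk4_step(state, control, dt=DT):
    """One Runge-Kutta 4 integration step of the kinematic bicycle model."""
    k1 = bicycle_dynamics(state, control)
    k2 = bicycle_dynamics(state + 0.5 * dt * k1, control)
    k3 = bicycle_dynamics(state + 0.5 * dt * k2, control)
    k4 = bicycle_dynamics(state + dt * k3, control)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```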
The acceleration control limits u_1 are [-0.5, 0.5] and steering control limits u_2 are [-0.8, 0.8]. Here, we use a reach-avoid value function with the target margin function computed based on the stopping path criterion. as illustrated in Fig. <ref> For each state, the target margin function is the distance to the failure set boundary along the path where the car is applying full deceleration to come to a halt. By lieu of this , being in the reach-avoid set is equivalent to being able to come to a safe halt resulting in infinite horizon safety. For , we use a naive heuristic where the steering is pursuing a look-ahead point near the center of the road. This policy has a switch when the car is already moving in a desired direction. Further, this feeds acceleration to maintain velocity at 0.9 using linear feedback. We run our experiments on a 3.6 GHz CPU with 64 GB RAM. The CBF-DDP is robust and provides smooth controls (Fig. <ref>) for these dynamics, but LR-DDP makes a U-turn or comes to a safe halt resulting in task failure. LR-DDP can be improved by constraining the horizontal yaw deviation . The , are then defined using the worst-case margin of all constraints. The CBF-DDP provides smoother trajectories and fast task completion (Fig. <ref>) even with the constraint, but the controls get jerkier with tighter limits on . Discontinuities in derivatives of from interacting constraints between two obstacles or between obstacle and yaw can induce discontinuities in controls from CBF-DDP. Future practical improvements can incorporate soft versions of minimum operators to ease some of these discontinuities. Table <ref> compares LR-DDP and CBF-DDP with =45 and a constraint on . LR-DDP completes the task, but the trajectories are quantitatively jerkier than CBF-DDP. From Table <ref>, the total integral deviation (‖ - ‖_1) from is less for CBF-DDP, indicating that it is not overly restrictive. With stronger task policies, the yaw constraint is not needed giving an advantage to CBF-DDP. For , we also evaluate using receding horizon ILQ optimization with the sum of multiple cost functions. This cost penalizes soft constraint violations such as yaw, velocity, and track. The additive weight of each soft constraint is tuned such that maintains the vehicle near the center of the road at a reference velocity of 0.9. With this much-improved task policy, the yaw constraint is unnecessary for LR-DDP to complete the task (Fig. <ref>). The qualitative behavior of CBF-DDP compared to LR-DDP remains unchanged (Table <ref>), demonstrating the benefits of CBF-DDP. § CONCLUSIONS =-1 We propose a principled approach to construct a CBF online with reach-avoid DDP. We use a receding horizon to provide infinite-horizon recursive feasibility despite finite-horizon planning. The construction of our reach-avoid value functions guarantees safety. Simulations on a bicycle model show that CBF-DDP improves the smoothness over least-restrictive control and avoids the conservativeness of handcrafted CBFs. Compared to the least-restrictive baseline, the safety filter acts much earlier when approaching the unsafe set boundary and provides faster and smoother obstacle avoidance.
http://arxiv.org/abs/2307.02882v1
20230706093654
Contrast Is All You Need
[ "Burak Kilic", "Florix Bex", "Albert Gatt" ]
cs.CL
[ "cs.CL", "cs.AI" ]
2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). Proceedings of the Sixth Workshop on Automated Semantic Analysis of Information in Legal Text (ASAIL 2023), June 23, 2023, Braga, Portugal. 1, 2]Burak Kilic[ email=b.kilic@uu.nl, url=https://www.shareforcelegal.com, ] [1] [1]Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands [2]Shareforce B.V., Rotterdam, The Netherlands [3]Tilburg Institute for Law, Technology, and Society, Tilburg University, Tilburg, The Netherlands 1, 3]Floris Bex[ email=f.j.bex@uu.nl, url=https://www.uu.nl/staff/FJBex, ] [1] 1]Albert Gatt [ email=a.gatt@uu.nl, url=https://albertgatt.github.io/, ] [1] [1]Corresponding author. [1]These authors contributed equally. In this study, we analyze data-scarce classification scenarios, where available labeled legal data is small and imbalanced, potentially hurting the quality of the results. We focused on two finetuning objectives; SetFit (Sentence Transformer Finetuning), a contrastive learning setup, and a vanilla finetuning setup on a legal provision classification task. Additionally, we compare the features that are extracted with LIME (Local Interpretable Model-agnostic Explanations) to see which particular features contributed to the model's classification decisions. The results show that a contrastive setup with SetFit performed better than vanilla finetuning while using a fraction of the training samples. LIME results show that the contrastive learning approach helps boost both positive and negative features which are legally informative and contribute to the classification results. Thus a model finetuned with a contrastive objective seems to base its decisions more confidently on legally informative features. LegalNLP Contrastive Learning NLP Explainable AI Contrast Is All You Need [ August 1, 2023 ======================== § INTRODUCTION The scarcity of publicly available, high quality legal data is causing a bottleneck in legal text classification research <cit.>. While there are a few publicly available datasets, such as CUAD <cit.>, and LEDGAR <cit.>, these datasets are unbalanced. They may provide good baselines to start with; however, the scarcity of samples for specific classes means that there is no guarantee of robust performance once models are adapted to downstream classification tasks. Few-shot learning methods have proven to be an attractive solution for classification tasks with small datasets where data annotation is also time-consuming, inefficient and expensive. These methods are designed to work with a small number of labeled training samples and typically require adapting a pretrained language model to a specific downstream task. In this paper [While our paper shares a similar title with "Attention is all you need" <cit.>, we focus on a different topic.], we focus on three major aims. First, we finetune the LegalBERT<cit.> model on the publicly available LEDGAR provision classification dataset. We compare the success of a contrastive learning objective and a more standard objective to finetune the pretrained model. Secondly, we finetune the same baseline model with these two finetuning objectives with the balanced dataset created from LEDGAR. Lastly, to analyze the trustworthiness and explain individual predictions, we extract the tokens from the model as features by using LIME <cit.> to compare which features had more positive or negative impacts. 
§ RELATED WORK The legal text classification has been tackled with various BERT techniques to adopt domain-specific legal corpora <cit.> <cit.>. While these studies often report state-of-the-art results with BERT-based models, they do not address the issue of data scarcity for specific applications. There have been several pieces of research on efficient finetuning setups that can potentially address this necessity, such as parameter efficient finetuning (PEFT), pattern exploiting training (PET), and SetFit (Sentence Transformer Finetuning) <cit.>, an efficient and prompt-free framework for few-shot finetuning of Sentence Transformers (ST). SetFit works by first finetuning a pretrained ST on a small number of text pairs, in a contrastive Siamese manner. Also, SetFit requires no prompts or verbalizers, unlike PEFT and PET. This makes SetFit simpler and faster. We explain how SetFit works in more depth in the following section. §.§ SetFit: Sentence Transformer Finetuning SetFit is a prompt free framework for few-shot finetuning of ST, addressing labeled data scarcity by introducing contrastive learning methods to generate positive and negative pairs from the existing dataset to increase the number of samples. There are two main steps involved in SetFit, from training to inferencing. First, a contrastive objective is used to finetune the ST, and then the classification head is trained with the encoded input texts. At the inference stage, the finetuned ST also encodes the unseen inputs and produces the embeddings accordingly. Then the classifier head gives the prediction results based on the newly generated embeddings. ST finetuning To better handle the limited amount of labeled training data in few-shot scenarios, contrastive training approach is used. Formally, we assume a small set of K-labeled samples D = (x_i, y_i), where x_i and y_i are sentences and their class labels, respectively. For each class label c ∈ C, R positive triplets are generated: T_p^c=(xi , xj , 1), where x_i and x_j are pairs of randomly chosen sentences from the same class c, such that y_i = y_j = c. Similarly, a set of R negative triplets are also generated: T_n^c=(xi, xj , 0), where x_i are sentences from class c and x_j are randomly chosen sentences from different classes such that y_i = c and y_j ≠ c. Finally, the contrastive finetuning data set T is produced by concatenating the positive and negative triplets across all classes where |C| is the number of class labels, |T| = 2R|C| is the number of pairs in T and R is a hyperparameter. SetFit will generate positive and negative samples randomly from the training set, unless they are explicitly given <cit.>. This contrastive finetuning approach enlarges the size of training data. Assuming that a small number (K) of labeled samples is given for a binary classification task, the potential size of the ST finetuning set T is derived from the number of unique sentence pairs that can be generated, namely K(K - 1)/2, which is significantly larger than just K. Classification head training In this second step, the fine-tuned ST encodes the original labeled training data {x_i}, yielding a single sentence embedding per training sample: Emb(x_i) = ST(x_i) where ST() is the function representing the fine-tuned ST. The embeddings, along with their class labels, constitute the training set for the classification head TCH = (Emb(x_i), y_i) where |TCH| = |D|. A logistic regression model is used as the text classification head throughout this work. 
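As a rough illustration of the two steps just described, the sketch below generates the 2R|C| contrastive triplets and trains a logistic-regression head on the embeddings of the original labelled samples. The contrastive finetuning of the encoder itself is only indicated by a comment, and the assumption that the encoder exposes an `encode` method (as sentence-transformers models do) is ours, not part of the original text.

```python
import random
from collections import defaultdict
from sklearn.linear_model import LogisticRegression

def generate_triplets(texts, labels, R=20, seed=0):
    """Build R positive and R negative sentence pairs per class (|T| = 2R|C|)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in zip(texts, labels):
        by_class[y].append(x)
    triplets = []
    for c, members in by_class.items():
        others = [x for y, xs in by_class.items() if y != c for x in xs]
        for _ in range(R):
            triplets.append((rng.choice(members), rng.choice(members), 1.0))  # same class
            triplets.append((rng.choice(members), rng.choice(others), 0.0))   # different classes
    return triplets

def train_setfit_like(encoder, texts, labels, R=20):
    pairs = generate_triplets(texts, labels, R)
    # Step 1 (omitted here): finetune `encoder` on `pairs` with a contrastive
    # (e.g. cosine-similarity) loss in a Siamese setup.
    # Step 2: encode the original K labelled samples and fit the head.
    embeddings = encoder.encode(texts)
    head = LogisticRegression(max_iter=1000).fit(embeddings, labels)
    return head
```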
Inference At inference time, the fine-tuned ST encodes an unseen input sentence (x_i) and produces a sentence embedding. Next, the classification head that was trained in the training step, produces the class prediction of the input sentence based on its sentence embedding. Formally this is predicted label i = CH(ST(xi)), where CH represents the classification head prediction function. § DATA We present experimental results both on the original LEDGAR dataset, and on a balanced version. We describe the original dataset first, then we give a brief description of how the dataset was further balanced for the presented experiments. §.§ Data source As a main corpus, we used the publicly available LEDGAR[https://autonlp.ai/datasets/ledgar] provision classification dataset, consisting of 60,000 training samples in total, with 100 different provision labels. We did not apply any additional preprocessing or data modification techniques to keep the data as it is to make the experiments reproducible. To create a dedicated test dataset for the unbalanced data scenario, we randomly selected 25 samples per label from the corpus, in total approximately 2,500 samples. The rest of the 57,500 samples are used to generate the train/dev sets. The training sets are created by selecting 4, 8, 12, and 16 samples per label for SetFit, and 50, 100, 150, and 200 for the vanilla finetuning setup. Therefore the maximum number of samples is calculated as: maximum number of samples per label multiplied by the number of total labels, as can be seen in Figure <ref>. In practice, in the case of the vanilla finetuning setup, we end up with fewer training samples than this total. This is because some labels are extremely sparse, and there are fewer total samples than the stipulated maximum per label. §.§ Crawling and balancing The original LEDGAR dataset is imbalanced. The smallest label consists of only 23 samples, and the largest has 3167 samples in the original training dataset. Therefore, to create a new balanced dataset, we selected the most frequent 32 labels. For labels with more than 1000 samples, we downsampled to 1000 samples per label. For labels with fewer than 1000 samples, we upsampled by crawling and retrieving additional data from LawInsider,[https://www.lawinsider.com/] removing any duplicates. As a result, a new dataset has been created that consists of 32 classes, with each having 1000 provisions. Additionally, we also created a dedicated test dataset for the balanced data scenario, and selected 25 samples per label randomly for the 32 labels, for a total of 800 samples. The remaining 31,200 samples are used for training with a random 80/20 train/dev split. For finetuning with the balanced dataset, we again train with varying sizes of training data, using 4, 8, 12, and 16 samples per label for SetFit, and 50, 100, 150, and 200 for the vanilla finetuning setup, as can be seen in Figure <ref>. Note that, unlike the case of the unbalanced data, the total sizes for the vanilla finetuning setup in the balanced case correspond to the totals obtained by multiplying the maximum sample size with the number of labels. § EXPERIMENTS §.§ Models It has been shown that models which have been pretrained on domain-specific legal data outperform general-purpose models <cit.>. 
Therefore, throughout this paper, the baseline we use is a finetuned LegalBERT using legal-bert-base-uncased.[https://huggingface.co/nlpaueb/legal-bert-base-uncased] We compare this standard, or "vanilla" finetuned baseline to a model finetuned with the contrastive objective used in SetFit. §.§ Experimental Setup The finetuning setup is the most crucial stage of the experimenting setup. Therefore, we kept the common hyperparameters of SetFit and vanilla setups the same. The rest of the parameters were kept as their default values, provided by the respective implementations. The important hyperparameter for SetFit finetuning is the R parameter, which defines the number of positive and negative pairs to be generated from the given training set. We kept this parameter as its default value, 20 across all the experiments. For both models, we used 1 epoch for the finetuning. Table <ref> gives detailed common hyperparameters of finetuning setups for both SetFit Trainer[https://github.com/huggingface/setfit] and Vanilla Trainer.[https://huggingface.co/docs/transformers/main_classes/trainer] § RESULTS §.§ F1-score comparisons: Original dataset In Table <ref>, we compare the F1-scores for different experiments, with the test set described in Section <ref>. The original LEDGAR dataset is used in this experiments, with an 80/20 train/dev split. As can be seen from the table above, SetFit's contrastive learning approach yielded a better F1-score compared to the vanilla finetuning, despite only using a fraction of the training samples. Additionally, we observed that Weighted-F1 displays a larger gap between models compared to Micro-F1. This is particularly expected, since the problem of unbalanced data is exacerbated in the vanilla finetuning setup as the maximum number of samples per label increases. §.§ Accuracy comparisons: Original and balanced dataset In Figure <ref>, we compare the finetuning results of SetFit and vanilla models, finetuned on the original LEDGAR dataset with the same training split and test dataset as the previous experiment. We observed that the models achieve comparable accuracies overall, despite the differences in Weighted F1-scores in the Table <ref>. However, it is still noteworthy that the contrastive learning approach achieves accuracy comparable to the vanilla finetuned model with very small sample sizes. In Figure <ref>, we compare the accuracy of the two approaches, this time with the balanced LEDGAR dataset. In this experiment we also used 80/20 train/dev split. The results show that the contrastive learning finetuning has a warmer start compared to vanilla finetuning, particularly in small data scenarios. However, as can be seen from the graph, SetFit is comparable with vanilla model across all the experiments as well. §.§ LIME feature comparisons In machine learning in general, but especially in domains such as law, trustworthiness of AI systems is crucial. The ability to explain model predictions is central to increasing trustworthiness in at least two respects. First, explanations have an impact on whether a user can trust the prediction of the model to act upon it; second, they also influence whether a user can trust the model to behave in a certain way when deployed. Several approaches to explaining model predictions have been proposed in the literature, including LIME <cit.>, SHAP <cit.>, and GRAD-CAM <cit.>. 
Through the training results mentioned in previous sections, we observed that SetFit models were comparable with vanilla models, despite using a fraction of the dataset. However, we get very little information about whether the models base their decisions on features which are intuitively correct, that is, if the models are classifying the provisions with legally informative features, or arbitrary ones. LIME is a technique based on the creation of interpretable, surrogate models over the features that are locally faithful to the original classifier. This means that interpretable explanations need to use representations of those features that are understandable, trustworthy, and justifiable to humans <cit.>. For the text classification tasks, LIME features are restricted to the words that are presented in the provisions. Thus, the positively weighted words that lead toward a particular label are called "positive" features. Likewise, the negatively weighted words that reduce the model's estimate of the probability of the label are called "negative" features. We kept the LIME hyperparameters the same in each model explanation for fair comparison and the details are as follows: The limit for the total number of words per classification is defined as K, and the complexity measure for the models is defined as: Ω(g) = ∞1[‖w_g‖_0 > K] where the g is defined as a simple interpretable sparse linear model (logistic regression in the case of SetFit, multinomial logistic regression in the vanilla model); w_g is defined as the weight vector of g. The K=10 is selected across all the experiments for simplicity and potentially can be as big as the computation allows. The size of the neighborhood for local exploration is set to 25. The distance function D was kept as the default, cosine distance metric. [https://github.com/marcotcr/lime] Thus, in this section, we compare the positive and negative features of SetFit and vanilla models extracted using the LIME setup mentioned above. To ensure a fair comparison, we used the SetFit model trained with 800 training samples and the vanilla model trained with 9756 training samples. As shown in Figure <ref>, the two models converged and obtained comparable performance with these settings. We selected two test labels to compare, namely Adjustments and Authority provisions. Again, for a fair comparison, we chose the labels based on the cases where one technique did better than the other, in terms of their respective F1-scores. For the Adjustments label, the SetFit model outperformed the vanilla model, and for the Authority label, vanilla finetuning outperformed the SetFit model. Thus, we aim to observe the differences in the model-predicted features for these labels. Table <ref> shows the F1-score differences of these provisions. We begin by comparing the positive features which both approaches have in common (i.e. the features they both assign a positive weight to), for the two target labels. These are shown in Figure <ref> and Figure <ref>. The figures suggest that the contrastive approach from SetFit seems to help to boost legally informative features more than vanilla models, even in the small data scenarios. For instance, words like "adjustments", "shares", "dividend", "stock", etc. can give a first strong hint about the Adjustment provision classification results, as well as words like "authority", "power", "act", "execute", "binding", etc. for the Authority provision. Thus, domain experts can make decisions based on their usefulness. 
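For concreteness, the snippet below shows how explanations with these settings can be produced, assuming the `LimeTextExplainer` interface of the LIME package; `predict_proba` stands for the finetuned classifier's probability function over the provision labels and is not part of the original text.

```python
from lime.lime_text import LimeTextExplainer

def explain_provision(text, predict_proba, class_names, label_id):
    """Return (word, weight) pairs for one provision and one target label."""
    explainer = LimeTextExplainer(class_names=class_names)
    exp = explainer.explain_instance(
        text,
        predict_proba,           # maps a list of texts to an (n, |C|) probability matrix
        labels=(label_id,),
        num_features=10,         # K: at most ten words per explanation
        num_samples=25,          # size of the local neighbourhood
        distance_metric="cosine")
    # Positive weights push the prediction towards the label, negative weights away from it.
    return exp.as_list(label=label_id)
```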
In Figures <ref> to <ref>, we show the top positive features for the two models separately, for each label. We note that similar observations can be made with respect to these figures, that is, the contrastive learning framework boosts the positive weight of features that are intuitively more legally informative. Nevertheless, we also see that less informative features, including stop words, are also assigned some positive weight. Additionally, we also observed similar behavior with the negative features in Figures <ref> to <ref>. For negative features, the SetFit model trained with a contrastive objective assigns a greater negative magnitude. Thus, it appears that negative role of these features is accentuated in the contrastive setting, relative to the standard finetuning setup. For instance, words like "changes", "shall" and "without" for the Adjustments provision and "which", "common", "document" and "carry" for the Authority provision sound generic and may not give legally informative hints to humans. However, in the vanilla model case, similar legally non-informative negative features are also present but not enough to perturb the model's decisions. § CONCLUSIONS & FUTURE WORK This paper presented a detailed comparison of legal provision classification. Motivated by the challenge of low-resource scenarios and data imbalance, we compared the performance of a LegalBERT model finetuned in a standard setting, to one finetuned using a contrastive objective. Following previous work <cit.>, we assumed that models pretrained on legal data are better able to retain the legal knowledge and terminologies in the process of finetuning. On the other hand, our experiments show that the type of finetuning approach matters, especially where data is relatively scarce. In particular, the contrastive learning approach showed promising results in terms of evaluation metrics, achieving performance comparable or better than the vanilla finetuning setup. The results also showed that the positive and negative features extracted from the models differ significantly, favoring the SetFit model, despite using almost 11 times less data. As future work, investigating the limitations of SetFit deeper with more hyperparameters on legal data may be beneficial for pushing the model capabilities further. Also, we plan to use other explainability tools such as SHAP or GRAD-CAM to compare the extracted features. Finally, an evaluation of the appropriateness of the positive and negative features identified using explainability methods needs to be carried out with domain experts. § APPENDICES § CLASSIFICATION REPORTS This section contains detailed classification reports for first 16 labels, for each model, finetuned on original LEDGAR dataset. The reports were kept this way, due to the size of the tables and page limits.
http://arxiv.org/abs/2307.00728v1
20230703032027
A new approach to QCD evolution in processes with massive partons
[ "Benoit Assi", "Stefan Höche" ]
hep-ph
[ "hep-ph" ]
FERMILAB-PUB-23-336-T, MCNET-23-09 Fermi National Accelerator Laboratory, Batavia, IL, 60510 Fermi National Accelerator Laboratory, Batavia, IL, 60510 We present an algorithm for massive parton evolution which is based on the differentially accurate simulation of soft-gluon radiation by means of a non-trivial azimuthal angle dependence of the splitting functions. The kinematics mapping is chosen such as to to reflect the symmetry of the final state in soft-gluon radiation and collinear splitting processes. We compute the counterterms needed for a fully differential NLO matching and discuss the analytic structure of the parton shower in the NLL limit. We implement the new algorithm in the numerical code and present a first comparison to experimental data. A new approach to QCD evolution in processes with massive partons Stefan Höche August 1, 2023 ================================================================= § INTRODUCTION The production and evolution of massive partons are an important aspect of collider physics, and they play a particularly prominent role at the Large Hadron Collider at CERN. Key measurements and searches, such as tt̅H and triple Higgs boson production, involve final states with many b-jets. The success of the LHC physics program therefore depends crucially on the modeling of heavy quark processes in the Monte-Carlo event generators used to link theory and experiment. With the high-luminosity phase of the LHC approaching fast, it is important to increase the precision of these tools in simulations involving massive partons. Heavy quark and heavy-quark associated processes have been investigated in great detail, both from the perspective of fixed-order perturbative QCD and using resummation, see for example <cit.>. Various proposals were made for the fully differential simulation in the context of particle-level Monte Carlo event generators <cit.>. Recently, a new scheme was devised for including the evolution of massive quarks in the initial state of hadron-hadron and lepton-hadron collisions <cit.>. In this manuscript, we will introduce an algorithm for the final-state evolution and matching in heavy-quark processes, inspired by the recently proposed parton-shower model  <cit.>. The soft components of the splitting functions are derived from the massive eikonal and are matched to the quasi-collinear limit using a partial fractioning technique. In contrast to the matching of <cit.>, we partial fraction the complete soft eikonal, leading to strictly positive splitting functions and thus keeping the numerical efficiency of the Monte-Carlo algorithm at a maximum. We also propose to use a kinematic mapping for the collinear splitting of gluons into quarks that treats the outgoing particles democratically. This algorithm can be extended to any purely collinear splitting (i.e., after subtracting any soft enhanced part of the splitting functions) while retaining the NLL precision of the evolution. Multi-jet merging and matching of parton-shower simulations to NLO calculations in the context of heavy-quark production were discussed, for example, in <cit.>. The NLO matching is typically fairly involved, because of the complex structure and partly ambiguous definition of the infrared counterterms. In this publication, we compute the integrated counterterms for our new parton-shower model, making use of recent results for angular integrals in dimensional regularization <cit.>. 
This calculation provides the remaining counterterms needed for the matching of the parton-shower model at NLO QCD. We briefly discuss the extension to initial-state radiation but postpone a detailed analysis to a future publication. This paper is structured as follows: Section <ref> briefly reviews the construction of the parton-shower model and generalizes the discussion to massive particles. Section <ref> introduces the different kinematics mappings. Section <ref> discusses the general form of the phase-space factorization and provides explicit results for processes with soft radiation and collinear splitting. The computation of integrated infrared counterterms is presented in Sec. <ref>. Section <ref> discusses the impact of the kinematics mapping on sub-leading logarithms, and Sec. <ref> provides first numerical predictions for e^+e^-→hadrons. Section <ref> contains an outlook. § SOFT-COLLINEAR MATCHING We start the discussion by revisiting the singularity structure of n-parton QCD amplitudes in the infrared limits. If two partons, i and j, become quasi-collinear, the squared amplitude factorizes as _n⟨1,…,n|1,…,n⟩_n= ∑_λ,λ'=± _n-1<1,…,i\(ij),…,j\,…,n| 8πα_s P^λλ'_(ij)i(z)/(p_i+p_j)^2-m_ij^2|1,…,i\(ij),…,j\,…,n>_n-1 , where the notation i\ indicates that parton i is removed from the original amplitude, and where (ij) is the progenitor of partons i and j. The functions P^λλ'_ab(z) are the spin-dependent, massive DGLAP splitting functions, which depend on the momentum fraction z of parton i with respect to the mother parton, (ij), and on the helicities λ <cit.>. In the limit that gluon j becomes soft, the squared amplitude factorizes as <cit.> _n⟨1,…,n|1,…,n⟩_n=-8πα_s∑_i,k≠ j _n-1<1,…,j\,…,n| T_i T_k w_ik,j|1,…,j\,…,n>_n-1 , where T_i and T_k are the color insertion operators defined in <cit.>. In the remainder of this section, we will focus on the eikonal factor, w_ik,j, for massive partons and how it can be re-written in a suitable form to match the spin-averaged splitting functions in the soft-collinear limit. The eikonal factor is given by w_ik,j= p_ip_k/(p_ip_j)(p_jp_k)-p_i^2/2/(p_ip_j)^2-p_k^2/2/(p_kp_j)^2 Following Refs. <cit.>, we split Eq. (<ref>) into an angular radiator function, and the gluon energy w_ik,j=W_ik,j/E_j^2 , where W_ik,j=1-v_iv_kcosθ_ik/(1-v_icosθ_ij)(1-v_kcosθ_jk) -(1-v_i^2)/2/(1-v_icosθ_ij)^2 -(1-v_k^2)/2/(1-v_kcosθ_jk)^2 . The parton velocity is defined as v=√(1-m^2/E^2) and is frame dependent. When we label velocities by particle indices in the following, it is implicit that they are computed in the frame of a time-like momentum n^μ. In this reference frame they reduce to the relative velocity v_i=v_p_in, where v_pq=√(1-p^2q^2/(pq)^2) . For convenience, we also introduce the velocity four-vector l_p,q^μ=√(q^2) p^μ/pq . When this vector is labeled by a single particle index only, it again refers to the four-velocity of that particle in the frame of n^μ, for example l_i^μ=l_p_i,n^μ. When partial fractioning Eq. (<ref>), we aim at a positive definite result in order to maintain the interpretation of a probability density and, with it, the high efficiency of the unweighted event generation in a parton shower. Following the approach of Ref. <cit.>, we obtain W_ik,j=W̅_ik,j^i+W̅_ki,j^k , where W̅_ik,j^i=1-v_kcosθ_jk/ 2-v_icosθ_ij-v_kcosθ_jk W_ik,j . The partial fractioned angular radiator function can be written in a more convenient form using the velocity four-vectors. 
We find an expression that makes the matching to the ij-collinear sector manifest W̅_ik,j^i=1/2l_il_j(l_ik^2/l_ikl_j -l_i^2/l_il_j-l_k^2/l_kl_j) , where l_ik=l_i+l_k . In the quasi-collinear limit for partons i and j, we can write the eikonal factor in Eq. (<ref>) as <cit.> w_ik,jm_i p_ip_ji||j⟶ w_ik,j^(coll)(z)=1/2p_ip_j(2z/1-z-m_i^2/p_ip_j) , where zm_i p_ip_ji||j⟶E_i/E_i+E_j . This can be identified with the leading term (in 1-z) of the DGLAP splitting functions P_aa(z,), where [ Note that in contrast to standard DGLAP notation, we separate the gluon splitting function into two parts, associated with the soft singularities at z→ 0 and z→1.] P_qq(z,) =C_F(2z/1-z-m_i^2/p_ip_j+(1-)(1-z)) , P_gg(z,) =C_A(2z/1-z+z(1-z)) , P_gq(z,) =T_R(1-2z(1-z)/1-) . To match the soft to the collinear splitting functions, we therefore replace P_(ij)i(z,)/(p_i+p_j)^2-m_ij^2→P_(ij)i(z,)/(p_i+p_j)^2-m_ij^2+δ_(ij)i T_i^2 [W̅_ik,j^i/E_j^2-w_ik,j^(coll)(z)] , where the two contributions to the gluon splitting function are treated as two different radiators <cit.>. As in the massless case, this substitution introduces a dependence on a color spectator, k, whose momentum defines a direction independent of the direction of the collinear splitting <cit.>. In general, this implies that the splitting functions which were formerly dependent only on a momentum fraction along this direction, now acquire a dependence on the remaining two phase-space variables of the new parton. It is convenient to define the purely collinear remainder functions C_qq(z,) =C_F(1-)(1-z) , C_gg(z,) =C_A z(1-z) , C_gq(z,) =T_R(1-2z(1-z)/1-) . They can be implemented in the parton shower using collinear kinematics, while maintaining the overall NLL precision of the simulation. This will be discussed further in Sec. <ref>. § MOMENTUM MAPPING Every parton shower algorithm requires a method to map the momenta of the Born process to a kinematical configuration after additional parton emission. This mapping is linked to the factorization of the differential phase-space element for a multi-parton configuration. Collinear safety is a basic requirement for every momentum mapping. In addition, a mapping is NLL safe if it preserves the topological features of radiation simulated previously <cit.>. We will make this statement more precise in Sec. <ref>. This section provides a generic momentum mapping for massive partons that is both collinear and NLL-safe. It is constructed for both initial- and final-state radiation, inspired by the identified particle dipole subtraction algorithm of <cit.>. In the massless limit, we reproduce this algorithm and thus agree with the existing parton-shower model  <cit.>. §.§ Radiation kinematics We will first describe the kinematics mapping needed for soft evolution. This is sketched in Fig. <ref>. We define the variables μ_ij^2=p̃_ij^2/2p̃_ijK̃ , μ̅_ij^2=2μ_ij^2/1+ṽ_p̃_ijK̃ , κ=K̃^2/2p̃_ijK̃ , κ̅=2κ/1+v_p̃_ijK̃ , and the analogous final-state variables μ_i/j^2=m_i/j^2/(2p̃_ijK̃) and μ̅_i/j^2=2μ_i/j^2/(1+ṽ_p̃_ijK̃). The scalar invariants after the splitting are defined in terms of the energy fraction, z <cit.> 2p_in=z 2p̃_ijK̃ , n^2=(1-z+κ+μ_ij^2-μ_i^2) 2p̃_ijK̃ . The momentum of the radiator after the emission is p_i^μ=z̅ p̃_ij^μ+μ_i^2-z̅^2μ_ij^2/z̅ v_p̃_ijK̃(K̃^μ-κ̅ p̃_ij^μ) , where z̅=z+2μ_i^2/1+v_p̃_ijK̃+2μ_ij^2 +√((z+2μ_i^2/1+v_p̃_ijK̃+2μ_ij^2)^2 -2μ_i^2(1+κ̅)/1+v_p̃_ijK̃+2μ_ij^2) . 
We define the variable v̅= 2 v z/1+v_p̃_ijK̃ -μ̅_i^2/z̅(1-z̅+κ̅-κ̅-μ̅_j^2/ζ)/z̅ -μ̅_i^2/z̅1-z̅+κ̅/ζ , where ζ=2/1+v_p̃_ijK̃ 1-z+κ+μ_ij^2-μ_i^2/1-z̅+κ̅ , and where v=(p_ip_j)/(p_in) is defined as in the massless case <cit.>. In terms of these quantities, the gluon momentum is given by p_j^μ=1-z+μ_ij^2-μ_i^2+μ_j^2/v_p̃_ijK̃ζ(p̃_ij^μ-μ̅_ij^2K̃^μ) +v̅ 1+v_p̃_ijK̃/2v_p̃_ijK̃[(K̃^μ-κ̅p̃_ij^μ) -1-z̅+κ̅/ζ(p̃_ij^μ-μ̅_ij^2K̃^μ)] +k_⊥^μ . where k_⊥^2=p̃_ijK̃(1+v_p̃_ijK̃) v̅[(1-v̅/ζ)(1-z̅) -1-ζ+v̅/ζ κ̅+μ̅_j^2/ζ ]-m_j^2 . In order to determine a reference direction for the azimuthal angle ϕ=arctan( k_y/ k_x), we note that the soft radiation pattern must be correctly generated. To achieve this, we compose the transverse momentum as k_⊥^μ= k_⊥(cosϕ n_⊥^μ/|n_⊥| +sinϕ l_⊥^ μ/|l_⊥|) , where the reference axes n_⊥ and l_⊥ are given by the transverse projections[In kinematical configurations where p_k^ μ is a linear combination of p_i^ μ and n̅^ μ, n_⊥ in the definition of Eq. (<ref>) vanishes. It can then be computed using n_⊥=^μ j_ νρ p_i^ ν n̅^ ρ, where j∈{1,2,3} may be any index that yields a nonzero result. Note that in this case, the matrix element cannot depend on the azimuthal angle.] n_⊥^μ=p_k^μ -(p_k(K̃-κ̅p̃_ij)) (p̃_ij^μ-μ̅_ij^2K̃^μ)/ 2p̃_ijK̃ v_p̃_ijK̃^2/(1+v_p̃_ijK̃) -(p_k(p̃_ij-μ̅_ij^2K̃)) (K̃^μ-κ̅p̃_ij^μ)/ 2p̃_ijK̃ v_p̃_ijK̃^2/(1+v_p̃_ijK̃) , and l_⊥^ μ=^μ_ νρσ p̃_ij^ ν (K̃^ρ-κ̅p̃_ij^ ρ) n_⊥^σ . Since the differential emission phase-space element is a Lorentz-invariant quantity, the azimuthal angle ϕ is Lorentz invariant <cit.>. This allows us to write the emission phase-space in a frame-independent way. Upon construction of the momenta p_i, p_j and K, the momenta {p_l} which are used to define K̃ are subjected to a Lorentz transformation, p^μ_l→Λ^μ_ν(K,K̃)p^ν_l, where Λ^μ_ν=g^μ_ν -2(K̃+K)^μ(K̃+K)_ν/(K̃+K)^2 +2K̃^μK_ν/K^2 §.§ Splitting kinematics In this section, we describe the kinematics for the implementation of the purely collinear components of the splitting functions in Eq. (<ref>). This is sketched in Fig. <ref>. We make use of some of the notation in <cit.>, in particular y=p_ip_j/p_ip_j+p_iK+p_jK , z=p_iK/p_iK+p_jK , and we define the scaled masses μ_i^2=m_i^2/2p̃_ijK̃ , μ_j^2=m_j^2/2p̃_ijK̃ , κ=K̃^2/2p̃_ijK̃ , κ̅=2κ/1+v_p̃_ij,K̃ . We also introduce the following variables α_ij=y+(1-y)(μ_i^2+μ_j^2) , α̅_ij=2α_ij/1+v_p̃_ij,K̃ , μ_ij^2=m_ij^2/2p̃_ijK̃ , μ̅_ij^2=2μ_ij^2/1+v_p̃_ij,K̃ . In terms of the additional variables z_ij= 1/2α̅_ij( 1+α̅_ij/1+κ̅+μ̅_ij^2 -√((1-α̅_ij/1+κ̅+μ̅_ij^2)^2 -4α̅_ijκ̅/1+κ̅) ) , z̅_ij=2z_ij/1+v_p̃_ij,K̃ , z̅= z(1-α_ij+μ_ij^2) -(α_ij+μ_i^2-μ_j^2) z̅_ijκ/1-z̅_ijα_ij+μ̅_ij^2/1-z̅_ijα_ij+μ̅_ij^2/z̅_ij -z̅_ijα_ijκ/1-z̅_ijα_ij+μ̅_ij^2 , the momenta after the splitting are given by p_i^μ= z̅ p̃_ij^μ-μ̅_ij^2K̃^μ/z̅_ij v_p̃_ij,K̃ +(y(1-z̅)(1+μ_ij^2-μ_i^2-μ_j^2) -z̅(μ_i^2+μ_j^2)+2μ_i^2) K̃^μ-κ̅ p̃_ij^μ/ v_p̃_ij,K̃/z_ij +k_⊥^μ , p_j^μ= (1-z̅) p̃_ij^μ-μ̅_ij^2K̃^μ/z̅_ij v_p̃_ij,K̃ +(y z̅ (1+μ_ij^2-μ_i^2-μ_j^2) -(1-z̅)(μ_i^2+μ_j^2)+2μ_j^2) K̃^μ-κ̅ p̃_ij^μ/ v_p̃_ij,K̃/z_ij -k_⊥^μ . The transverse momentum squared is given by k_⊥^2=2p̃_ijK̃ [ z̅(1-z̅)α_ij -(1-z̅)μ_i^2-z̅μ_j^2 ] . The construction of the transverse momentum vector proceeds as described in Sec. <ref>. While the choice of the reference vector defining n_⊥^μ is in principle arbitrary, it can be made conveniently, e.g., to aid the implementation of spin correlations in collinear gluon splittings. 
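To illustrate how such a pair of transverse reference axes can be realized numerically, the following sketch projects the spectator momentum onto the plane that is Minkowski-orthogonal to p̃_ij and K̃ and assembles k_⊥ from the azimuthal angle. It is a schematic stand-in under stated assumptions, not the implementation: the metric signature (+,-,-,-) is assumed, the projection is written generically rather than with the explicit coefficients quoted above, and the Levi-Civita contraction is fixed only up to an overall sign, which is irrelevant for defining the azimuthal reference.

```python
import numpy as np

METRIC = np.diag([1.0, -1.0, -1.0, -1.0])

def mdot(a, b):
    """Minkowski product a.b with signature (+,-,-,-)."""
    return a @ METRIC @ b

def transverse_projection(p_k, pt_ij, Kt):
    """Component of p_k orthogonal (in the Minkowski sense) to both pt_ij and Kt."""
    gram = np.array([[mdot(pt_ij, pt_ij), mdot(pt_ij, Kt)],
                     [mdot(Kt, pt_ij),    mdot(Kt, Kt)]])
    rhs = np.array([mdot(pt_ij, p_k), mdot(Kt, p_k)])
    alpha, beta = np.linalg.solve(gram, rhs)
    return p_k - alpha * pt_ij - beta * Kt

def epsilon_contraction(a, b, c):
    """Vector proportional to eps^{mu nu rho sigma} a_nu b_rho c_sigma."""
    rows = np.vstack([METRIC @ a, METRIC @ b, METRIC @ c])
    return np.array([(-1.0) ** mu * np.linalg.det(np.delete(rows, mu, axis=1))
                     for mu in range(4)])

def k_perp(kT, phi, pt_ij, Kt, p_k):
    """Assemble the transverse emission momentum from the azimuthal angle phi."""
    n_perp = transverse_projection(p_k, pt_ij, Kt)
    l_perp = epsilon_contraction(pt_ij, Kt, n_perp)
    n_hat = n_perp / np.sqrt(-mdot(n_perp, n_perp))   # spacelike, so n.n < 0
    l_hat = l_perp / np.sqrt(-mdot(l_perp, l_perp))
    return kT * (np.cos(phi) * n_hat + np.sin(phi) * l_hat)
```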
§ PHASE SPACE FACTORIZATION In this section, we discuss the factorization of the differential n+1 particle phase-space element into a differential n particle phase-space element and the radiative phase space. We start from the generic four-dimensional expression for the initial-state momenta p_a^μ and p_b^μ and final-state momenta, {p_1^μ,…,p_i^μ,…,p_j^μ,…,p_n^μ}, dΦ_n(p_a,p_b;p_1,…,p_i,…,p_j,…,p_n) = [∏_i=1^n1/(2π)^3 d^3p_i/2p_i^0] (2π)^4δ^(4)(p_a+p_b-∑ p_i) . Replacing particles i and j with a combined “mother” particle , we can write the expression for the underlying Born phase space as dΦ_n-1(p̃_a,p̃_b;p̃_1,…,p̃_ij, …,\p̃_j,…,p̃_n) = [∏_k=1 k≠ i,j^n1/(2π)^3 d^3p̃_i/2p̃_i^0]1/(2π)^3 d^3p̃_ij/2p̃_ij^0 (2π)^4δ^(4)(p̃_a+p̃_b -∑_k≠ i,jp̃_k-p̃_ij) . The ratio of differential phase-space elements after and before the mapping is defined as the differential phase-space element for the one-particle emission process dΦ_+1(p̃_a,p̃_b;p̃_1,…, p̃_ij,…,p̃_n;p_i,p_j)= dΦ_n(p_a,p_b;p_1,…,p_i,…,p_j,…,p_n)/ dΦ_n-1(p̃_a,p̃_b;p̃_1,…,p̃_ij, …,\p̃_j,…,p̃_n) . This expression naturally generalizes to D dimensions. It can be computed using the lowest final-state multiplicity, i.e., n=3. In order to do so, we first derive a suitable expression for the D-dimensional two-particle phase space in the frame of an arbitrary, time-like momentum n. It reads dΦ_2(P;p,q)= 1/(2π)^2D-2 d^D-1p/2p^0 d^D-1q/2q^0 (2π)^Dδ^D(P-p-q) =1/4(2π)^D-2 |p⃗ |^D-2/ P^0|p⃗ |-p^0|P⃗|cosθ_pn dΩ_p,n^D-2 . where dΩ_p,n is the integral over the D-1 dimensional solid angle of p^μ in the frame of n^μ. All energies and momenta are computed in this frame, and we have defined P=p+q. We can re-write the momentum-dependent denominator factor on the right-hand side of Eq. (<ref>) as p^0/|p⃗|(P^0p^0-|p⃗||P⃗|cosθ_pn) -P^0|p⃗|((p^0)^2/|p⃗|^2-1)= (pn)(Pp)-p^2(Pn)/√((pn)^2-p^2n^2) . Inserting this into Eq. (<ref>), and using D=4-2, we obtain a manifestly covariant form of the phase-space element, dΦ_2(P;p,q)=(4π^2 n^2)^/16π^2 ((pn)^2-p^2n^2)^3/2-/((pn)(Pp)-p^2(Pn))n^2 dΩ_p,n^2-2 . §.§ Radiation kinematics To derive the emission phase space for final-state splittings in the radiation kinematics of Sec. <ref>, we make use of the standard factorization formula dΦ_3(-K;p_i,p_j,q) = dΦ_2(-K;p_j,-n) dn^2/2π dΦ_2(-n;p_i,q) , where q=∑_k≠ i,jp_k is the sum of all final-state momenta except p_i and p_j. We can use Eq. (<ref>) to relate the phase-space factor dΦ_2(-n;p_i,q) to the underlying Born phase space as follows dΦ_2(-n;p_i,q)/ dΦ_2(-K̃;p̃_ij,q̃) = ((p_i n)^2-p_i^2n^2/ (p̃_ij n)^2-p̃_ij^2n^2)^3/2- (p̃_ijn)(p̃_ijK̃) -p̃_ij^2(K̃n)/(p_in)^2-p_i^2n^2 = (1-2μ_ij^2(2(κ-μ_i^2)-z +σ_i,ij(1+2μ_ij^2)))[zv_p_i,n]^1-2/[(1+2μ_ij^2(1-σ_i,ij))^2 -4μ_ij^2(1-z+κ+μ_ij^2-μ_i^2)]^3/2- , where we have defined σ_i,ij=z̅+μ_i^2/μ_ij^2-z̅^2/z̅v_p̃_ij,K̃1-2κ̅μ_ij^2/2 . The differential two-body phase-space element for the production of n^μ and p_j^μ is given by dΦ_2(-K;n,p_j)= (4π^2)^/16π^2 ((p_jn)^2-p_j^2n^2)^1/2-/(n^2)^1- dΩ_j,n^2-2 = (2p̃_ijK̃)^-/(16π^2)^1- [(1-z+μ_ij^2-μ_i^2+μ_j^2)v_p_j,n]^1-2/ 2(1-z+κ+μ_ij^2-μ_i^2)^1- dΩ_j,n^2-2 , Finally, we combine Eqs. (<ref>) and (<ref>) to obtain the single-emission phase space element in Eq. (<ref>) dΦ_+1(p̃_a,p̃_b;…,p̃_ij,…;p_i,p_j) =(2p̃_ijK̃/16π^2)^1- (1-2μ_ij^2(2(κ-μ_i^2)-z +σ_i,ij(1+2μ_ij^2)))[v_p_i,nv_p_j,n]^1-2/[(1+2μ_ij^2(1-σ_i,ij))^2 -4μ_ij^2(1-z+κ+μ_ij^2-μ_i^2)]^3/2- ×(z(1-z+μ_ij^2-μ_i^2+μ_j^2))^1-2/ (1-z+κ+μ_ij^2-μ_i^2)^1- dz dΩ_j,n^2-2/4π . 
In the massless limit, this simplifies to dΦ_+1(p̃_a,p̃_b;…,p̃_ij,…;p_i,p_j) =(2p̃_ijK̃/16π^2)^1- (z(1-z))^1-2/(1-z+κ)^1- dz dΩ_j,n^1-2/4π . We demonstrate in App. <ref> that this expression agrees with the result derived in Ref. <cit.>. §.§ Splitting kinematics To derive the emission phase space for final-state radiation in the splitting kinematics of Sec. <ref>, the recoiler is chosen to be the sum of the remaining final-state partons. We make use of the standard factorization formula dΦ_3(q;p_i,p_j,K) = dΦ_2(q;K,p_ij) dp_ij^2/2π dΦ_2(p_ij;p_i,p_j) , where q=∑_k≠ i,jp_k is the sum of final-state momenta except p_i and p_j. Working in the frame of q=K̃+p̃_ij, we can use Eq. (<ref>) to relate the phase-space factor dΦ_2(q;K,p_ij) to the underlying Born differential phase space element dΦ_2(q;K̃,p̃_ij) as follows dΦ_2(q;K,p_ij)/ dΦ_2(q;K̃,p̃_ij) = ((Kp_ij)^2-K^2p_ij^2/ (K̃p̃ _ij)^2-K̃^2p̃_ij^2)^1/2- =(1-y)^1-2(1+μ_ij^2-μ_i^2-μ_j^2)^1-2(v_p_ij,K/v_p̃_ij,K̃)^1-2 , The decay of the intermediate off-shell parton is associated with the differential phase-space element dΦ_2(p_ij;p_i,p_j)= (4π^2)^/16π^2 ((p_ip_j)^2-p_i^2p_j^2)^1/2-/(p_ij^2)^1- dΩ_i,ij^2-2 = (2p̃_ijK̃)^-/(16π^2)^1- (y^2(1+μ_ij^2-μ_i^2-μ_j^2)^2-4μ_i^2μ_j^2)^1/2-/ 2(y(1+μ_ij^2-μ_i^2-μ_j^2)+μ_i^2+μ_j^2)^1- dΩ_i,ij^2-2 . Finally, we combine Eqs. (<ref>) and (<ref>) to obtain dΦ_+1(p̃_a,p̃_b;p̃_1,…, p̃_ij,…,p̃_n;p_i,p_j) =(2p̃_ijK̃/16π^2)^1- (1-y)^1-2(1+μ_ij^2-μ_i^2-μ_j^2)^2-2(v_p_ij,K/v_p̃_ij,K̃)^1-2 ×(y^2(1+μ_ij^2-μ_i^2-μ_j^2)^2-4μ_i^2μ_j^2)^1/2-/(y(1+μ_ij^2-μ_i^2-μ_j^2)+μ_i^2+μ_j^2)^1- dy dΩ_i,ij^2-2/4π . In the massless limit, this simplifies to dΦ_+1(p̃_a,p̃_b;p̃_1,…, p̃_ij,…,p̃_n;p_i,p_j) =(2p̃_ijK̃/16π^2)^1- (1-y)^1-2 y^- dy dΩ_i,ij^2-2/4π . In App. <ref>, we will show the equivalence of Eq. (<ref>) to the single-emission differential phase-space element of <cit.>. § INFRARED SUBTRACTION TERMS AT NEXT-TO-LEADING ORDER The matching of parton showers to fixed-order NLO calculations in dimensional regularization based on the MC@NLO algorithm <cit.> requires the knowledge of integrated splitting functions in D=4-2 dimensions. Since our technique for massive parton evolution is modeled on the Catani-Seymour identified particle subtraction and the parton-shower, we can use the methods developed in <cit.>. We will limit the discussion to the main changes needed to implement the algorithm for massive partons. The results of this section provide an extension of the subtraction method for identified hadrons first introduced in <cit.>. §.§ Soft angular integrals In this section, we compute the angular integrals of the partial fractioned soft eikonal. While we focus on pure final-state radiation, the results of this integration are generic and apply to the case of final-state and initial-state emission of massless vector bosons. The integrand is given by Eq. (<ref>), W̅_ik,j^i= l_ik^2/2(l_il_j)(l_ikl_j) -l_i^2/2(l_il_j)^2 -l_k^2/2(l_il_j)(l_kl_j) We can write the angular phase space for the emission of the massless particle j as ∫ dΩ_j^2-2=Ω(1-2) ∫ dcosθ(sin^2θ)^-∫ dϕ(sin^2ϕ)^- , where Ω(n)=2π^n/2/Γ(n/2) is the d-dimensional line element, in particular Ω(1-2)=2(4π)^-Γ(1-)/Γ(1-2). We finally find the following expression for the cases with massive emitter ∫ dΩ_j^2-2/Ω(1-2) W̅_ik,j^i =l_ik^2/4I_1,1^(2)(l_il_ik/2,l_i^2,l_ik^2/4) -l_i^2/2I_2^(1)(l_i^2)-l_k^2/2I_1,1^(2)(l_il_k,l_i^2,l_k^2) . 
For massless emitter, we obtain (see also <cit.>) ∫ dΩ_j^2-2/Ω(1-2) W̅_ik,j^i =l_ik^2/4I_1,1^(1)(l_il_ik/2,l_ik^2/4) -l_k^2/2I_1,1^(1)(l_il_k,l_k^2) , The angular integrals I_1,1 and I_2 have been computed in <cit.>. They read <cit.> I_1,1^(2)(v_12,v_11,v_22)= π/√(v_12^2-v_11v_22){logv_12+√(v_12^2-v_11v_22)/v_12-√(v_12^2-v_11v_22) +(1/2log^2v_11/v_13^2-1/2log^2v_22/v_23^2.. ..+2 Li_2(1-v_13/1-√(1-v_11)) +2 Li_2(1-v_13/1+√(1-v_11)).. ..-2 Li_2(1-v_23/1-√(1-v_22)) -2 Li_2(1-v_23/1+√(1-v_22)))+𝒪(^2)} , I_1,1^(1)(v_12,v_11)= π/v_12{-1/-logv_11/v_12^2 -(1/2log^2v_11/v_12^2.. ..+2 Li_2(1-v_12/1-√(1-v_11)) +2 Li_2(1-v_12/1+√(1-v_11)))+𝒪(^2)} , I_2^(1)(v)= 2π/v{1+/√(1-v)log1+√(1-v)/1-√(1-v)+𝒪(^2)} . where the velocities are defined in the notation of Ref. <cit.>, and where v_13= (1-λ) v_11+λ v_12 , v_23=(1-λ) v_12+λ v_22 , λ=v_11-v_12-√(v_12^2-v_11v_22)/ v_11+v_22-2v_12 . Note that spurious collinear poles appear in the second term of Eq. (<ref>), which cancel between I_1,1^(1)(l_il_ik/2,l_ik^2/4)/2 and I_1,1^(1)(l_il_k,l_k^2). In addition, in the fully massless case I_1,1^(1) is related to the massless two-denominator integral and gives the simple result <cit.> I_1,1^(1)(v_12,v_12)= -π/v_12^1+{1/ + Li_2(1-v_12)+𝒪(^2)} . §.§ Soft energy integrals In this section, we introduce the basic techniques to solve the energy integrals in Eq. (<ref>). After performing the angular integrals, we are left with the additional z-dependence induced by the energy denominator in Eq. (<ref>). We focus on the cases relevant for QCD and QED soft radiation, where μ_ij=μ_i and μ_j=0. The differential emission probability per dipole is then given by dΦ_+1(p̃_a,p̃_b;…,p̃_ij,…;p_i,p_j) n^2/(p_jn)^2 W̅_ik,j^i =(4π)^/16π^2 Γ(1-)/Γ(1-2) (2p̃_ijK̃)^- (1-2μ_i^2(2(κ-μ_i^2)-z +σ_i,ij(1+2μ_i^2)))v_p_i,n^1-2/[(1+2μ_i^2(1-σ_i,ij))^2 -4μ_i^2(1-z+κ)]^3/2- × z^1-2/(1-z+κ)^- dz/(1-z)^1+2 2/π dΩ_j,n^2-2/Ω(1-2)W̅_ik,j^i . In order to carry out the integration over the energy fraction, z, we expand the integrand into a Laurent series. The differential soft subtraction counterterm, summed over all dipoles, is given by dσ^S_S= -8πα_sμ^2∑_,k dΦ_+1(p̃_a,p̃_b;…,p̃_ij,…;p_i,p_j) T_ T_kn^2/(p_jn)^2 W̅_ik,j^i = -α_s/2π1/Γ(1-)∑_,k T_ T_k/ T_^2(4πμ^2 (p_kn)/2(p_kp_i)(p_in))^ {-δ(1-z)/+2z/[1-z]_+ -4 z[log(1-z)/1-z]_+} × 𝒥_ik(z,μ_i^2,κ) (l_ik^2)^ C_/πΓ(1-)^2/Γ(1-2) [ dΩ_j,n^2-2/Ω(1-2) W̅_ik,j^i] . where the mass correction factor reads 𝒥_ik(z,μ_i^2,κ)= (1-2μ_i^2(2(κ-μ_i^2)-z +σ_i,ij(1+2μ_i^2)))v_p_i,n^1-2/[(1+2μ_i^2(1-σ_i,ij))^2 -4μ_i^2(1-z+κ)]^3/2- and where the massless limit is given by 𝒥_ik(z,0,κ)=1. All 1/ poles have been extracted, and we can (after expanding the -dependent prefactors) compute the final result as an integral over the delta functions and plus distributions in z. In general, some of the terms must be computed numerically, as n implicitly depends on z, see Eq. (<ref>). In order to apply Eq. (<ref>) to processes with resolved hadrons, we can make use of the formalism derived in <cit.>. The 1/ pole proportional to 2z/[1-z]_+ can then be combined with the soft enhanced part of the collinear mass factorization counterterms. The extension to initial-state radiation is straightforward and only requires a repetition of the derivation in Sec. <ref> for initial-state kinematics. We will discuss this in a subsequent publication. §.§ Collinear integrals To compute the purely collinear counterterms, we use the splitting kinematics and make use of the results in App. <ref>. 
Note that we also use the corresponding definitions of the scaled masses. We define the purely collinear anomalous dimension in terms of the collinear remainders in Eq. (<ref>) γ̅_ab()= Ω(2-2)/Ω(3-2)(z_+-z_-/2)^-1+2∫_z_-^z_+ dz ((z-z_-) (z_+-z))^- C_ab(z,) , where z_± are given by Eq. (<ref>). This leads to γ̅_qq()= C_F(1-)(1-z_+-z_-/2) , γ̅_gg()= C_A(z_+-z_-/2 -(1-/2)(z_++z_-)^2-z_+z_-/3-2) , γ̅_gq()= T_R(1+z_+^2+z_-^2-(z_++z_-)/1- -(z_+-z_-)^2/3-2) . The complete counterterm can now be obtained by Laurent series expansion, using Eq. (<ref>) ∫ dΦ_+1(q;p̃_ij,K̃;p_i,p_j)8πα_sμ^2C_ab/(p_i+p_j)^2-m_ij^2 = α_s/2π (μ^2/Q^2)^Ω(3-2)/(4π)^1-2 (1-μ̂_i^2-μ̂_j^2-κ̂)^2-2λ(1,μ̂_ij^2,κ̂)^-1/2+ ×∫ dy (1-y)^1-2 [ y(1-μ̂_i^2-μ̂_j^2-κ̂)+μ̂_i^2+μ̂_j^2 ]^-/ y(1-μ̂_i^2-μ̂_j^2-κ̂)+μ̂_i^2+μ̂_j^2-μ̂_ij^2 (z_+-z_-)^1-2 γ̅_ab() . There are three variants of Eq. (<ref>) relevant for QCD. The first describes the collinear splitting of a massive quark into a quark and a gluon, and is characterized by μ̂_i=μ̂_ij>0 and μ̂_j=0. We obtain ∫ dΦ_+1(q;p̃_ij,K̃;p_i,p_j) 8πα_sμ^2C_qq/(p_i+p_j)^2-m_ij^2 =α_s/2π (1-μ̂_i^2-κ̂)/√(λ(1,μ̂_i^2,κ̂)) ×∫ dy/y(1-μ̂_i^2-κ̂)+μ̂_i^2 √([(1-y)(1-μ̂_i^2-κ̂)+2κ̂]^2-4κ̂) γ̅_qq(0) , Note that the result is infrared finite. This is expected because the only poles that occur in the radiation off massive quarks have their origin in soft gluon emission, which is fully accounted for by the eikonal integral in Eq. (<ref>). This also shows that in our subtraction scheme, there is a hierarchy between the soft and the collinear enhancements, as required for a proper classification of leading and sub-leading logarithms. The second case is the splitting of a gluon into two massive quarks, which is characterized by μ̂_i=μ̂_j>0 and μ̂_ij=0. The result is finite and reads ∫ dΦ_+1(q;p̃_ij,K̃;p_i,p_j) 8πα_sμ^2C_gq/(p_i+p_j)^2 = α_s/2π (1-2μ̂_i^2/1-κ̂) ×∫ dy √(y^2(1-2μ̂_i^2-κ̂)^2-4μ̂_i^4)/ (y(1-2μ̂_i^2-κ̂)+2μ̂_i^2)^2√([(1-y)(1-2μ̂_i^2-κ̂)+2κ̂]^2-4κ̂) γ̅_gq(0) . The final case describes the collinear decay of a massless parton into two massless partons, and is characterized by μ̂_i=μ̂_j=μ̂_ij=0. This term is collinearly divergent, and we obtain ∫ dΦ_+1(q;p̃_ij,K̃;p_i,p_j) 8πα_sμ^2C_ab/(p_i+p_j)^2 =α_s/2π(μ^2/Q^2)^(4π)^/Γ(1-) ×∫ dy {-δ(y)/+1/[y]_+}([(1-y)(1-κ̂)+2κ̂]^2-4κ̂)^1/2-/(1-κ̂)^2- Γ(1-)^2/Γ(2-2) γ̅_ab() . Initial-state singularities can be treated in a similar fashion. We will discuss the details in a future publication. § KINEMATICAL EFFECTS ON SUB-LEADING LOGARITHMIC CORRECTIONS We now demonstrate that our kinematics mapping satisfies the criteria for NLL accuracy laid out in  <cit.>, extending the discussion of the massless case in <cit.>. We refer the reader to <cit.> for more details on the general method of the proof. The analysis is based on the technique for general final-state resummation introduced in <cit.>. Following the notation of Sec. 2 of <cit.>, the observable to be resummed is denoted by v, while the hard partons and soft emission have momenta p_1, …, p_n and k, respectively. The observable is a function of these momenta, v=V({p},{k}), and for any radiating color dipole formed by the hard momenta, p_i and p_j, the momentum of a single emission can be parametrized as k=z_i,j p_i+z_j,i p_j+k_T,ij , where k_T,ij^2=2p_ip_j z_i,j z_j,i . with rapidity η_ij=1/2ln(z_i,j/z_j,i). An observable can be expressed as V(k) = d_l(k_T,l/Q)^a e^-b_lη_lg_l(ϕ_l) , with k_T,l=k_T,lj and η_l=η_lj for j∦ l in the collinear limit. A natural extension of Eq. 
(<ref>) to the case of massive emitters would be k=(z_i,j-μ̅_j^2z_j,i) p_i+(z_j,i-μ̅_i^2z_i,j) p_j+k_T,ij , where k_T,ij^2=2p_ip_j2v_p_i,p_j/1+v_p_i,p_j z_i,j z_j,i . In the quasi-collinear limit, this Sudakov decomposition agrees with the one given in <cit.> if the auxiliary (light-like) vector defining the anti-collinear direction is chosen as the direction of the color partner of the QCD dipole. In particular, for constant z_i,j, z_j,i and small k_T, the gluon momentum behaves as kk_T^2≪ p_ip_jm_i/j^2 k_T^2⟶ z_i,jp_i+z_j,ip_j+k_T,ij+𝒪(k_T^2) , in complete agreement with Eq. (<ref>). In the quasi-collinear limit, the value of the observable V(k) will therefore be unchanged from the case of massless evolution. However, both the Sudakov radiator and the ℱ function will change, due to the modified splitting functions, Eq. (<ref>), and the effects of masses on the integration boundaries. The Lund plane for gluon emission off a dipole containing a massive quark with mass m and energy E will have a smooth upper rapidity bound at η=ln(E/m), consistent with the well known dead-cone effect <cit.>. When the similarity transformation introduced in Sec. 2.2.3 of <cit.> is generalized to the quasi-collinear limit, and applied to a process with massive quarks, the location of this boundary in relative rapidity is unchanged, because the quasi-collinear limit requires m k_T. The sub-leading logarithmic terms in the resummed cross section can therefore be extracted in the same way as in the massless case. In order to extend the proof of NLL accuracy from Ref. <cit.> to the case of massive partons, we need to show that the radiation kinematics in Sec. <ref> have the same topological features as in the massless case. This amounts to showing that hard, (quasi-)collinear emissions, i.e. highly energetic emissions in the direction of the hard partons, do not generate Lorentz transformations that change existing momenta by more than 𝒪((k_T^2/K^2)^ρ̃), where ρ̃ is a positive exponent. To show this, we analyze the behavior of Eq. (<ref>) in the (quasi-)collinear region. We can split K^μ into its components along the original recoil momentum, K̃^μ, the emitter, p̃_i^μ, and the emission, p_j^μ. K^μ=K̃^μ-X^μ, where X^μ=p^μ_j-(p̃_ij-p_i) . For the effect of the Lorentz transformation that determines the event topology after radiation, only the parametric form of X^μ is of interest. Using Eqs. (<ref>) and (<ref>), the small momentum X^μ for gluon radiation can be written as X^μ= (1-z/v_p̃_ijK̃ζ-(1-z̅)) p̃_ij^μ -μ_ij^2/v_p̃_ijK̃(1-z/1+v_p̃_ijK̃ K̃^μ +1-z̅^2/z̅(K̃^μ-κ̅ p̃_ij^μ)) +v̅ 1+v_p̃_ijK̃/2v_p̃_ijK̃[(K̃^μ-κ̅p̃_ij^μ) -1-z̅+κ̅/ζ(p̃_ij^μ-μ̅_ij^2K̃^μ)] +k_⊥^μ . In the quasi-collinear limit, this scales as 𝒪(k_⊥), which is sufficient to make the effect of the Lorentz transformation vanish even for hard quasi-collinear splittings. Finally, we note that the kinematics mapping of Sec. <ref>, which is applied to the simulation of purely collinear emissions, does not affect the logarithmic structure of the resummed result at NLL, independent of its topological features. This is because the sub-leading corrections that are sensitive to multiple, correlated emissions do not receive contributions from the collinear anomalous dimension, cf. Eq. (2.21) in Ref. <cit.>. § COMPARISON TO EXPERIMENTAL DATA In this section we present first numerical results obtained with the extension of the final-state parton shower to massive parton evolution. The implementation is part of the event generation framework  <cit.>. 
We do not perform NLO matching or multi-jet merging, and we set C_F=(N_c^2-1)/(2N_c)=4/3 and C_A=3. The quark masses are set to m_c=1.42 GeV and m_b=4.75 GeV and the same flavor thresholds are used for the evaluation of the strong coupling. The running coupling is evaluated at two loop accuracy, and we set α_s(m_z)=0.118. Following standard practice to improve the logarithmic accuracy of the parton shower, we employ the CMW scheme <cit.>. In this scheme, the soft eikonal contribution to the flavor conserving splitting functions is rescaled by 1+α_s(t)/(2π) K, where K=(67/18-π^2/6) C_A-10/9 T_R n_f, and where n_f is scale dependent with the same flavor thresholds as listed above. Our results include the simulation of hadronization using the Lund string fragmentation implemented in Pythia 6.4 <cit.>. We use the default hadronization parameters, apart from the following: , , , and compare our predictions with those from the parton shower <cit.>. All analyses are performed with Rivet <cit.>. Figure <ref> displays predictions from the parton shower for differential jet rates in the Durham scheme compared to experimental results from the JADE and OPAL collaborations <cit.>. The b-quark mass corresponds to y∼ 2.8·10^-3, and hadronization effects dominate the predictions below ∼10^-4. We observe good agreement with the experimental data. Overall, the quality of the results is similar to Ref. <cit.>, where heavy quark effects were modeled by thresholds. Figure <ref> shows a comparison for event shapes measured by the ALEPH collaboration <cit.>. It can be expected that the numerical predictions will improve upon including matrix-element corrections, or when merging the parton shower with higher-multiplicity calculations. Again, the overall quality of the prediction is similar to Ref. <cit.>. Finally, we show predictions for the b-quark fragmentation function as measured by the ALEPH <cit.>, DELPHI <cit.>, OPAL <cit.> and SLD <cit.> collaborations. Figure <ref> shows a fair agreement of both the and predictions with experimental data. We note that both parton shower implementations use the same hadronization tune. § CONCLUSIONS We have introduced an extension of the recently proposed parton-shower model to the case of massive QCD evolution. An essential aspect of the new algorithm is the form of the collinearly matched massive eikonal, which is obtained by partial fractioning the angular component of the eikonal of a complete dipole. This technique preserves the positivity of the splitting function, thus leading to an excellent efficiency of the Monte-Carlo simulation. Inspired by the symmetry of the partonic final state in purely collinear splittings, we also introduced a dedicated kinematics mapping for this scenario and showed that it preserves the NLL precision of the overall simulation. We computed the infrared counterterms needed for the matching to fixed-order calculations at NLO accuracy, and discussed the logarithmic structure of the resummation in the case of heavy-quark evolution. Several improvements of this algorithm are needed before it can be considered on par with the parton shower simulations used by past and current experiments. Clearly, spin correlations and dominant sub-leading color effects should be included. Furthermore, an extension to initial-state evolution and a matching to light-flavor PDFs using the FONLL or ACOT scheme are needed. These additions can be provided in a straightforward manner, using techniques from <cit.>. 
Finally, the algorithm should be extended to higher orders based on the techniques developed in <cit.>. In this context, we note that the all-orders (in ) expressions from Sec. <ref>, in conjunction with higher-order expressions for the angular integrals in Sec. <ref> that can be obtained from <cit.>, can be used to compute the factorizable integrals at NNLO, thus providing a significant part of the components needed for an MC@NNLO matching <cit.>. The computation of the remaining non-factorizable integrals is a further development needed in order to reach the precision targets of the high-luminosity LHC. § ACKNOWLEDGMENTS We thank John Campbell, Florian Herren and Simone Marzani for many helpful and inspiring discussions. We are grateful to Davide Napoletano and Pavel Nadolsky for clarifying various questions on PDFs and the implementation of the FONLL and ACOT schemes. This work was supported by the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE–AC02–07CH11359. § PHASE-SPACE FACTORIZATION IN COMPARISON TO OTHER INFRARED SUBTRACTION SCHEMES In this appendix, we compare the phase-space factorization in our method to the existing dipole subtraction schemes of Refs. <cit.> and <cit.>. We will show that our generic form of the factorized phase space, derived from Eq. (<ref>) can be used to obtain the relevant formulae, for pure final-state evolution. §.§ Massless radiation kinematics First, we show that Eq. (5.189) of Ref. <cit.> can be derived from our generic expression, Eq. (<ref>), using radiation kinematics. We start with the massless limit of Eq. (<ref>) dΦ_+1(p̃_a,p̃_b;…,p̃_ij,…;p_i,p_j) =(2p̃_ijK̃/16π^2)^1- (z(1-z))^1-2/(1-z+κ)^1- dz dΩ_j,n^1-2/4π . Expressing the polar angle in the frame of n in terms of v and z (see also Eq. (32) of <cit.>) cosθ_j,n=1-2v1-z+κ/1-z=1-n^2 v z/(1-z) p_an . we can perform a change of integration variables dcosθ_j,n→ dv, leading to Eq. (5.189) of Ref. <cit.>. dΦ_+1(p̃_a,p̃_b;…,p̃_ij,…;p_i,p_j) = (2p̃_ijK̃)^1-/16π^2 (z(1-z))^1-2/(1-z+κ)^- (sin^2θ_j,n)^-/1-z dz dv dΩ_j,n^1-2/(4π)^1-2 = (2p_in)^1-/16π^2 (1-z)^-2 (vz/1-z(1-n^2vz/(1-z) 2p_in))^- dz dv dΩ_j,n^1-2/(2π)^1-2 . §.§ Massive splitting kinematics In this appendix, we derive the phase-space factorization formula, Eq. (5.11) in <cit.> from our generic expression, Eq. (<ref>). We use the definitions of Eq. (<ref>) <cit.>, and we set Q^2=(p̃_ij+K̃)^2. In addition, we define the scaled masses[ Note that these definitions differ from the ones in Sec. <ref>.] μ̂_i^2=m_i^2/Q^2 , μ̂_j^2=m_j^2/Q^2 , μ̂_ij^2=m_ij^2/Q^2 , κ̂=K^2/Q^2 . The single-emission phase space element is given by dΦ_+1(q;p̃_ij,K̃;p_i,p_j)= dΦ_2(q;p_ij,K)/ dΦ_2(q;p̃_ij,K̃) dp_ij^2/2π dΦ_2(p_ij;p_i,p_j) . The decay of p_ij is simplest to compute in its rest frame. In this frame, we can write z=E_i^(ij)E_K^(ij)/p_ijK(1-v_ij,iv_ij,kcosθ_i,ij) =p_ip_ij/p_ij^2(1-v_ij,iv_ij,kcosθ_i,ij) , where the velocities are given by v_ij,i=√(y^2(1-μ̂_i^2-μ̂_j^2-κ̂)^2-4μ̂_i^2μ̂_j^2)/ y(1-μ̂_i^2-μ̂_j^2-κ̂)+2μ̂_i^2 , v_ij,k=√([(1-y)(1-μ̂_i^2-μ̂_j^2-κ̂)+2κ̂]^2-4κ̂)/ (1-y)(1-μ̂_i^2-μ̂_j^2-κ̂) The decay phase space, written in the frame of p_ij, then reads dΦ_2(p_ij;p_i,p_j)= (4π^2)^/16π^2 (E_i^(ij)v_ij,i)^1-2/(p_ij^2)^1/2 dΩ_i,ij^2-2 =(4π^2)^/16π^2 (p_i,⊥^(ij))^-2/v_ij,k dz dΩ_i,ij^1-2 . 
We can use the factorization Ansatz p_i,⊥^(ij) 2=X(z-z_-)(z_+-z), where the physical boundary condition gives the roots of the quadratic |cosθ_i,ij|=1, leading to z_±=p_ip_ij/p_ij^2(1± v_ij,iv_ij,k) =y(1-μ̂_i^2-μ̂_j^2-κ̂)+2μ̂_i^2/ 2[y(1-μ̂_i^2-μ̂_j^2-κ̂)+μ̂_i^2+μ̂_j^2](1± v_ij,iv_ij,k) . The factor X is determined by equating the transverse momentum at the extremal point z_i, max=p_ip_ij/p_ij^2 to the total three-momentum of p_i in the frame of p_ij. This leads to p_i,⊥^(ij) 2=p_ij^2/v_ij,k^2 (z-z_-)(z_+-z) , and finally dΦ_2(p_ij;p_i,p_j)= (4π^2/Q^2)^/16π^2 v_ij,k^1-2 [y(1-μ̂_i^2-μ̂_j^2-κ̂)+μ̂_i^2+μ̂_j^2]^-((z-z_-)(z_+-z))^- dz dΩ_i,ij^1-2 . We also have dp_ij^2=Q^2(1-μ̂_i^2-μ̂_j^2-κ̂) dy . The last missing component of the emission phase space is the ratio of the production to the underlying Born phase-space element. It is most easily computed in the frame of q=p̃_ij+K̃ and results in dΦ_2(q;p_ij,K)/ dΦ_2(q;p̃_ij,K̃) =((2p_ijK)^2 v_ij,k^2/Q^4λ(1,μ̂_ij^2,κ̂))^1/2- =((1-y)^2(1-μ̂_i^2-μ̂_j^2-κ̂)^2 v_ij,k^2/λ(1,μ̂_ij^2,κ̂))^1/2- . where the Källen function is given by λ(a,b,c)=(a-b-c)^2-4bc. Combining Eqs. (<ref>)-(<ref>) gives the final result dΦ_+1(q;p̃_ij,K̃;p_i,p_j) = (Q^2)^1-/16π^2 (1-μ̂_i^2-μ̂_j^2-κ̂)^2-2λ(1,μ̂_ij^2,κ̂)^-1/2+ × dy (1-y)^1-2 [ y(1-μ̂_i^2-μ̂_j^2-κ̂)+μ̂_i^2+μ̂_j^2 ]^- × dz ((z-z_-) (z_+-z))^- dΩ_i,ij^1-2/(2π)^1-2 .
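As a final algebraic cross-check of the factorization ansatz used above (a short sympy sketch, not part of the original text): inserting z_± = a(1 ± v_{ij,i}v_{ij,k}) with a = p_i·p_ij/p_ij^2, the combination (p_ij^2/v_{ij,k}^2)(z-z_-)(z_+-z) evaluated at the extremal point z = a reduces to (E_i^{(ij)}v_{ij,i})^2, i.e. the squared three-momentum of p_i in the p_ij frame, as claimed.

# Verify p_perp^2 = (p_ij^2 / v_ijk^2) * (z - z_minus)*(z_plus - z) at z = a = p_i.p_ij/p_ij^2.
import sympy as sp

a, v_iji, v_ijk, pij2 = sp.symbols('a v_iji v_ijk pij2', positive=True)

z_minus = a * (1 - v_iji * v_ijk)
z_plus  = a * (1 + v_iji * v_ijk)
z       = a                                   # extremal point

p_perp2  = pij2 / v_ijk**2 * (z - z_minus) * (z_plus - z)
expected = (a * sp.sqrt(pij2) * v_iji)**2     # (E_i^(ij) * v_iji)^2 with E_i^(ij) = p_i.p_ij/sqrt(p_ij^2)

print(sp.simplify(p_perp2 - expected))        # -> 0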
http://arxiv.org/abs/2307.00870v1
20230703090953
Notes on weight-shifting operators and unifying relations
[ "Qi Chen", "Yi-Xiao Tao" ]
hep-th
[ "hep-th", "gr-qc" ]
1cm ^aDepartment of Physics, Tsinghua University, Beijing 100084, China ^bDepartment of Mathematical Sciences, Tsinghua University, Beijing 100084, China chenq20@mails.tsinghua.edu.cn, taoyx21@mails.tsinghua.edu.cn We seek the inverse formulas for the cosmological unifying relation between gluons and conformally coupled scalars. We demonstrate that the weight-shifting operators derived from the conformal symmetry at the dS late-time boundary can serve as the inverse operators for the 3-point cosmological correlators. However, in the case of the 4-point cosmological correlator, we observe that the inverse of the unifying relation cannot be constructed from the weight-shifting operators. Despite this failure, we are inspired to propose a “weight-shifting uplifting" method for the 4-point gluon correlator. § INTRODUCTION Cosmic inflation is widely accepted as the period of exponential expansion that occurred at the beginning of our universe, during which the background spacetime can be approximated as de Sitter (dS) space. In the standard inflationary scenario, the early universe is dominated by dark energy, represented by the potential energy of a scalar field known as the inflaton. Quantum fluctuations of the inflaton and other particles during this primordial epoch give rise to non-Gaussianity (NG) and serve as the seeds for the formation of the Large Scale Structure (LSS) we observe today. The NG and LSS contain valuable information that allows us to study the history of the early universe. Extracting such intriguing information is possible through the analysis of cosmological correlation functions <cit.>. Calculating cosmological correlators can be a daunting task. The conventional approach involves employing the in-in formalism to track the time evolution of fields during inflation. This requires integrating over some field mode functions and interaction vertices in dS spacetime, which can be highly intricate. In particular, evaluating the interaction vertices for spinning fields can be arduous, even in flat spacetime, let alone in dS spacetime. However, the in-in formalism provides us with the late-time correlation functions at the future boundary. Motivated by this observation, the cosmological bootstrapping program <cit.> has been proposed. This program aims to compute cosmological correlators from a boundary perspective, leveraging symmetries, locality, unitarity, and other principles. By exploiting these fundamental properties, the cosmological bootstrapping program provides an alternative framework for studying cosmological correlators. From the inflation perspective, we regard the early universe as a dS_4 spacetime, which has the same symmetry as CFT_3. Then we can use many CFT methods to derive these correlators from the conformal properties of the cosmological correlators. In the past decade, people have constructed a series of differential operators which can change the quantum numbers of the operators in cosmological correlators by some CFT methods. These operators are the so-called “weight-shifting operators" <cit.>, and the appearance of these operators allows us to obtain correlators with different quantum numbers from a given cosmological correlator. A brief review of weight-shifting operators is given in section <ref>. However, recently the authors find another relation between YM theory and the conformally coupled scalar theory with a gluon minimal coupled. This relation is a generalization of the “unifying relation" <cit.> for the flat amplitudes. 
Hence we also use “unifying relation" to denote this relation for cosmological correlators. The proof, which uses the Berends-Giele recursion, can be found in <cit.>. A generalization to the loop integrand is also found in <cit.>. It seems that there are some connections between these two kinds of relations. In this paper, we want to seek the inverse of unifying relations. In the 3-pt case, we successfully obtain the inverse, while in the 4-pt case things go wrong. To find in what sense we can inverse the unifying relations, we construct the 4-pt gluon correlator JJJJ from the unifying relations and express it by weight-shifting operators to find some clues. And finally we find that there is a correspondence between the flat amplitudes and the weight-shifting operators in this case. We give a dictionary between the varieties in the flat 4-pt YM amplitudes and the weight-shifting operators that construct JJJJ. Our paper will organize as follows. In section <ref>, we will review some weight-shifting operators and point out the effect they have. In section <ref>, we will review the unifying relations for cosmological correlators briefly. In section <ref>, we will construct the 4-pt gluon cosmological correlator JJJJ from the unifying relation, and point out that our construction is also consistent with the weight-shifting perspective. Moreover, we point out that this construction can be related to the flat amplitudes. § WEIGHT-SHIFTING OPERATORS An interesting way to get the cosmological correlators with correct quantum numbers from other correlators has been introduced in <cit.>, which is known as the “weight-shifting operators". These operators can be derived from the conformal properties of the cosmological correlators in the embedding space. In this section, we will briefly review these operators and write down some typical examples, following the notation in <cit.>. A particle in dS spacetime can be labeled by its conformal dimension Δ and its spin l from the representation of the isometry group of dS spacetime. In cosmology, we only focus on the case dS_4, which means that we can specialize the weight-shifting operators for dS_d+1 to the d=3 case. The weight-shifting operators are some bi-local operators which can be used to change the conformal dimension Δ and the spin l of some particles in the correlators, and after acting such operators on a certain correlator we will get another correlator with different quantum numbers. We should point out that we can obtain the correlators with the same quantum numbers by acting different correlators, and the results may be different. The correct correlators should be a linear combination of these different results. Moreover, sometimes we will get the correlators with wrong singularities, which means that we should do some modifications by hand. Such an example will be given in section <ref>. In this section, we only present the explicit expressions for the weight-shifting operators. The derivation and detailed examples illustrating the action of weight-shifting operators on specific correlators will be provided in Appendix <ref>. Let us begin by introducing the simplest differential operators that decrease the conformal dimensions of two fields by one unit in correlators: 𝒲^–_12=1/2K⃗_12·K⃗_12,      K⃗_12=∂_k⃗_1-∂_k⃗_2. In the above expression, the subscript 1,2 indicates the particle for which we aim to decrease its conformal weight. 
It is important to emphasize that K⃗_12 represents a differential operator with respect to the component of momentum, and thus, it is a vector. Another valuable operator is the one that increases the conformal dimensions of two fields by one unit. The general expression for such weight-raising operators, acting on a spinning field, can be quite intricate. However, for our current purpose, we will not delve further into the details of these operators acting on spinning fields. Instead, we will focus on presenting the expression for the weight-raising operators that specifically act on pure scalar operators: 𝒲^++_12=1/2(k_1k_2)^2K_12^2-(3-2Δ_1)(3-2Δ_2)k⃗_1·k⃗_2+[k_2^2(3-2Δ_1)(2-Δ_1+k⃗_1·K⃗_12)+(1↔2)]. For a comprehensive understanding of the general weight-raising operators, one can refer to <cit.>. In the preceding discussion, we introduced operators that can increase or decrease the conformal weight. It is worth noting that the 3-point correlation function with one particle having spin can also be determined up to an overall constant through the application of conformal symmetry. Furthermore, the expression for these spinning 3-point correlators exhibits a similar structure to that of scalar fields. Therefore, we can also generate correlators for spinning fields by utilizing differential operators on the scalar 3-point correlator. In other words, we can construct weight-shifting operators that enable us to raise the spins of two fields by one unit within correlators: S^++_12= (l_1+Δ_1-1)(l_2+Δ_2-2)z⃗_1·z⃗_2-1/2(z⃗_1·k_1)(z⃗_2·k_2)K_12^2 +[(l_1+Δ_1-1)(k⃗_2·z⃗_2)(z⃗_1·K⃗_12)+(1↔2)]. Here, z⃗ represents an auxiliary null vector that coincides with the spin polarization vector ϵ⃗ (with ϵ⃗_i·k⃗_i=0) in the “on-shell limit" and this limit can only be taken after all operators have acted. More about this limit can be seen in <cit.>. We can also construct weight-shifting operators that simultaneously change the conformal weight and spin of the particles. There are three types of such differential operators, each with a distinct effect. The first type can be represented as H_12 = 2(z⃗_1·K⃗_12)(z⃗_2·K⃗_12)-(z⃗_1·z⃗_2)K_12^2. This operator has the capability to raise the spins at points 1 and 2 by one unit while simultaneously lowering their conformal dimensions by one unit. The remaining two types of operators can also simultaneously change the weight and spin of the particles. However, unlike H_12, they have distinct effects on different particles: D_12= (Δ_1+l_1-1)(z⃗_1·K⃗_12)-1/2(z⃗_1·k⃗_1)K_12^2 D_11= (Δ_2-3+k⃗_2·K⃗_12)(z⃗_1·K⃗_12)-1/2(z⃗_1·k⃗_2)K_12^2-(z⃗_2·K⃗_12)(z⃗_1·∂_z⃗_2)+(z⃗_1·z⃗_2)(∂_z⃗_2·K⃗_12) Here, the operator D_12 raises the spin by one unit at point 1 and lowers the conformal dimension by one unit at point 2. On the other hand, the operator D_11 raises the spin by one unit at point 1 and lowers the conformal dimension by one unit at the same point. In the case of conserved spin-1 operators with dimension Δ=2, there is a convenient shortcut. We can treat these conserved spin-1 operators as projections of spin-1 operators with Δ=1. After some algebraic manipulations (as detailed in <cit.>), we find that multiplying the norm of the momentum of the spin-1 currents with Δ=1 gives us the conserved currents. In fact, this operation is also valid for Δ=1 scalars and we will give some examples later. Interestingly, this operation effectively performs the shadow transform. While similar results exist for the spin-2 case, we will not delve into it within the scope of this paper. 
Let us write down some results here: J_1φ_2φ_3 =D_11φ_1φ_2φ_3 J_1φ_2φ_3φ_4_s =k_2D_12φ_1φ_2φ_3φ_4_s where ϕ denotes massless scalars, φ denotes conformally coupling scalars, and J denotes the conserved currents (contracted with the auxiliary null vectors z⃗). We need to point out that weight-shifting operators will not change the type of channels, which means that we can act these operators on the correlators with a given channel. The label s in the above example denotes s-channel. Up to this point, our focus has been on differential operators acting on the external legs. However, it is worth noting that we can also construct similar differential operators to change the conformal weight and spin of internal line particles. In the subsequent discussion, we will introduce these differential operators that raise the spin of the internal line particles. In general, the 4-point correlation function can be decomposed into connected and disconnected parts. The disconnected part refers to the component that can be factorized into the product of two 3-point correlators. This implies that we can construct spin-raising operators for the internal line particles in the disconnected part by utilizing this factorization, as the internal lines in the 4-point correlator become the external legs in the 3-point correlators. As an example, let us consider the disconnected contribution to the 4-point function arising from spin-1 exchange (e.g. s-channel): ⟨φ_k⃗_1φ_k⃗_2φ_k⃗_3φ_k⃗_4⟩^(1)_d=⟨φ_k⃗_1φ_k⃗_2O^i_-s⃗⟩(Π_1)_ij⟨O^j_s⃗φ_k⃗_3φ_k⃗_4⟩/⟨O_s⃗O_-s⃗⟩, where O is a scalar operator with dimension Δ, s⃗=k⃗_1+k⃗_2 is the momentum of the exchanged particle, and (Π_1)_ij is a symmetric traceless tensor: (Π_1)_ij=δ_ij+3-2Δ/Δ-2s_is_j/s^2, which encodes the polarization structure of the inverse two-point function of the exchanged field: ⟨O^i_s⃗O^j_-s⃗⟩^-1∝ (Π_1)_ij⟨O_s⃗O_-s⃗⟩^-1. Then, we can generate the 3-point correlator ⟨φ_k⃗_1φ_k⃗2O^i_-s⃗⟩ from the 3-point scalar correlator using the spin-raising operator defined in (<ref>): ⟨φ_k⃗_1φ_k⃗_2O^i_-s⃗⟩=S^i_12⟨φ_k⃗_1φ_k⃗_2O_s⃗⟩. where S_12=k_2D_32=∑_iϵ_3^iS^i_12 is the differential operator that raises the spin of the O field by one unit. The index i indicates that we have removed the auxiliary null vector z⃗ from D_32. Therefore, the disconnected contribution to the 4-point function from spin-1 exchange can be expressed as: ⟨φ_k⃗_1φ_k⃗_2φ_k⃗_3φ_k⃗_4⟩^(1)_d=S^i_12⟨φ_k⃗_1φ_k⃗_2O_-s⃗⟩(Π_1)_ijS^j_34⟨O_s⃗φ_k⃗_3φ_k⃗_4⟩/⟨O_s⃗O_-s⃗⟩. It is evident that the combination S^i_12(Π_1)_ijS^j_34 appearing in the numerator raises the spin of the exchanged particle. If we can manage to extract this differential operator and apply it to the disconnected 4-point correlator, we will obtain the internal line particle spin-raising operator. This process is analogous to the construction of weight-shifting operators for the external leg particles that we discussed earlier. The detailed extraction process is beyond the scope of this paper, but interested readers can refer to <cit.> for more information. 
Here, we present the final result obtained from this extraction: ⟨φ_k⃗_1φ_k⃗_2φ_k⃗_3φ_k⃗_4⟩^(1)_d=(Π_1,1D_uv+Π_1,0Δ_u)⟨φ_k⃗_1φ_k⃗_2φ_k⃗_3φ_k⃗_4⟩^(0)_d, where Π_1,1 and Π_1,0 are polarization sums for the s-channel (we only focus on this case in the following discussion), and D_uv and Δ_u are differential operators: Π_1,1=(k_1^2-k_2^2)(k_3^2-k_4^2)/s^4+(k⃗_1-k⃗_2)(k⃗_3-k⃗_4)/s^2,      Π_1,0=(k_1-k_2)(k_3-k_4)/s^2, D_uv=u^2v^2∂_u∂_v,      Δ_u=u^2(1-u^2)∂_u^2-2u^3∂_u,      u≡ s/(k_1+k_2),      v≡ s/(k_3+k_4). We introduce the dimensionless variables u and v to simplify our writing. The 4-point correlation function ⟨φ_k⃗_1φ_k⃗_2φ_k⃗_3φ_k⃗_4⟩^(0)_d can be expressed in terms of these two independent variables u and v due to conformal symmetry (s-channel). Thus, such a choice of variables allows us to simplify our calculations in the subsequent sections. So far, we have focused on the disconnected contribution to the four-point function, considering the exchange of a single scalar operator with a general conformal dimension Δ. However, it is important to note that the generalization to arbitrary spin exchange is straightforward. The disconnected part of the four-point correlator with the arbitrary spin exchange can also be factorized into a product of three-point correlators. Furthermore, it is worth mentioning that the external line spin-raising operator (<ref>) does not depend on the conformal dimension. As a result, we can apply this operator to the full four-point pure scalar correlator, generating a complete four-point scalar correlator with an exchange of spinning fields. § UNIFYING RELATIONS FOR COSMOLOGICAL CORRELATORS Recently, new relations among cosmological correlators have been discovered using Berends-Giele currents <cit.>. These relations can be seen as a generalization of the unifying relations found for flat amplitudes <cit.>, which is why they are also referred to as “unifying relations" for cosmological correlators. In general, we expect that the entire set of unifying relations from <cit.> can be extended to curved spacetime. However, currently, only the relations involving pure gluons and conformally coupled scalars with minimal gluon coupling have been well established (for the general Yang-Mills-scalar theory, there is also a proof based on BG currents <cit.> which can be generalized to the dS case easily). Therefore, in this work, we will focus solely on the case of pure gluon theory and conformally coupled scalars with minimal gluon coupling. It is important to mention that in the following discussion, we will present the unifying relations without delving into the details of their proofs. We will focus on cosmological correlators involving pure gluons and conformally coupled scalars, and present the following unifying relations: 𝒯^XA_YM(g_1,g_2,⋯,g_n)=A_S(ϕ_X,g_{1,2,⋯,n}\X). In the given expression, the notation ϕ_X only points out which particles are scalars, while the order of the particles in A_S is the same as A_YM. The differential operator 𝒯^X represents the pairing of the letters in the word X, resulting in a product of 𝒯[ij]=∂_ϵ_i·ϵ_j (where ϵ_μ is the polarization vector satisfying the transversal condition). This pairing method is applied to obtain the desired operator, and the final result is obtained by summing over all possible pairing methods. For instance, for the case X=1234, the corresponding differential operator is as follows: 𝒯^1234=𝒯[12]𝒯[34]+𝒯[13]𝒯[24]+𝒯[14]𝒯[23]. Furthermore, we can explicitly demonstrate the action of 𝒯[ij]. 
Let us consider the 4-point gluon correlator as an example, which allows us to establish a relation between the gluon correlator and correlators involving both gluons and conformally coupled scalars through the operation of 𝒯[ij]: 𝒯[12]A_YM(g_1,g_2,g_3,g_4) =A_S(ϕ_1,ϕ_2,g_3,g_4) 𝒯[13]A_YM(g_1,g_2,g_3,g_4) =A_S(ϕ_1,g_2,ϕ_3,g_4). We will not delve into the general formalism for cosmological unifying relations in detail here, but instead, we will provide explicit examples in the following discussion. For a more comprehensive introduction, readers can refer to <cit.>. However, before we proceed to specific examples, it is important to make some remarks regarding cosmological unifying relations. Unlike the weight-shifting operators discussed in Section <ref>, which heavily rely on conformal symmetry, the unifying relation operators 𝒯[ij] are in both flat and dS spacetime. It is worth mentioning that the aforementioned relation remains valid within a specific channel since the operator 𝒯^X does not alter the channel. To demonstrate this explicitly, let's consider the explicit expressions for the 4-point correlators involving pure gluons and correlators with both gluons and conformally coupled scalars. These correlators can be computed using Berends-Giele currents <cit.> or directly from Feynman diagrams <cit.>. Here, we will provide the results for these correlators[Here we use the correlators JJJJ in <cit.>, while in <cit.> there is an overall minus sign from some conventions.]: ⟨ JJJJ⟩_s= -1/k_s^2(k_1-k_2)(k_3-k_4)/k_1+k_2+k_3+k_4(ϵ⃗_1·ϵ⃗_2)(ϵ⃗_3·ϵ⃗_4)+(k_1^2-k_2^2)(k_3^2-k_4^2)(ϵ⃗_1·ϵ⃗_2)(ϵ⃗_3·ϵ⃗_4)/k_s^2(k_1+k_2+k_s)(k_3+k_4+k_s)(k_1+k_2+k_3+k_4) +δ^ij[2ϵ_1i(k⃗_1·ϵ⃗_2)-k_1i(ϵ⃗_1·ϵ⃗_2)-(1↔2)][2ϵ_3j(k⃗_3·ϵ⃗_4)-k_3j(ϵ⃗_3·ϵ⃗_4)-(3↔4)]/(k_1+k_2+k_3+k_4)(k_1+k_2+k_s)(k_3+k_4+k_s), ⟨ JJφφ⟩_s= -(ϵ⃗_1·ϵ⃗_2)(k_1-k_2)(k_3-k_4)/k_s^2(k_1+k_2+k_3+k_4)+ (k_1^2-k_2^2)(k_3^2-k_4^2)(ϵ⃗_1·ϵ⃗_2)/k_s^2(k_1+k_2+k_s)(k_3+k_4+k_s)(k_1+k_2+k_3+k_4) +(ϵ⃗_1·ϵ⃗_2)(k⃗_1-k⃗_2)·(k⃗_3-k⃗_4)+4(k⃗_2·ϵ⃗_1)(k⃗_3·ϵ⃗_2)-4(k⃗_1·ϵ⃗_2)(k⃗_3·ϵ⃗_1)/(k_1+k_2+k_s)(k_3+k_4+k_s)(k_1+k_2+k_3+k_4), and ⟨φφφφ⟩_s=-1/k_s^2(k_1-k_2)(k_3-k_4)/k_1+k_2+k_3+k_4 +(k_1^2-k_2^2)(k_3^2-k_4^2)/k_s^2(k_1+k_2+k_3+k_4)(k_1+k_2+k_s)(k_3+k_4+k_s) +(k⃗_1-k⃗_2)·(k⃗_3-k⃗_4)/(k_1+k_2+k_3+k_4)(k_1+k_2+k_s)(k_3+k_4+k_s). It is important to note that in the above expressions for ⟨ JJJJ⟩, ⟨ JJφφ⟩, and ⟨φφφφ⟩, we have only presented the s-channel contribution and neglected the contact contribution. Here, the symbol J represents the conserved current, while φ denotes the conformally coupled scalar with a conformal weight of Δ=2. Additionally, η^ij represents the late time boundary metric, which coincides with the 3D flat metric with the Euclidean signature. By comparing equations (<ref>), (<ref>), and (<ref>), we observe that these three correlators can be related through the differential operators ∂_ϵ⃗_1·ϵ⃗_2 and ∂_ϵ⃗_3·ϵ⃗_4, which precisely correspond to the unifying relation operators T[12] and T[34]. To provide a more explicit expression (Fig. <ref>), we can write these relations as follows: T[12]T[34]⟨JJJJ⟩_s=T[12]⟨JJφφ⟩_s=⟨φφφφ⟩_s. Here for JJJJ_s, the action of T^1234 is equivalent to the action of T[12]T[34]. The unifying relation for the 4-point gluon correlator appears to be relatively straightforward, as we can explicitly write down the expression for the correlator with lower-point functions. However, finding the inverse operators for these unifying relations is an ongoing endeavor and presents a significant challenge. 
The difficulties arise from the fact that certain terms in ⟨ JJJJ⟩_s and ⟨ JJφφ⟩_s do not contain (ϵ⃗_1·ϵ⃗_2) or (ϵ⃗_3·ϵ⃗_4). These terms vanish when the unifying relation operators T[12] and T[34] are applied. Therefore, the most difficult issue in seeking the inverse of the unifying relation operators is how to restore these terms. It is possible that certain symmetries may assist in achieving the restoration. Indeed, weight-shifting arising from conformal symmetry may provide insights into restoring these terms and constructing the inverse of the unifying relation operators. Therefore, in the subsequent sections, we aim to find the inverse of the unifying relation operators using the weight-shifting operators. § SEEKING THE INVERSE OF UNIFYING RELATIONS In the previous section, we demonstrated the potential of weight-shifting operators derived from conformal symmetry in constructing the inverse of the unifying relation. In this section, we will provide concrete examples to support this claim. While constructing the inverse, we will also highlight the numerous unexplored weight-shifting methods. We will begin with a discussion of the 3-pt case as a warm-up before delving into the more challenging 4-pt correlators. Unfortunately, we will encounter difficulties in directly constructing the inverse of the unifying relation operators. However, in dS spacetime, there exists the uplifting method <cit.> that allows us to obtain dS correlators from flat amplitudes by some replacement. In the final part of this section, we will explore the analogous uplifting method for weight-shifting operators in dS space. It will be interesting to compare these two different uplifting methods. §.§ 3-pt inverse: warm-up Due to the favorable property that we can determine 3-point correlators up to a coupling constant through conformal symmetry, our primary focus lies on exploring the inverse of unifying relation operators for the 3-point correlator. We anticipate that we can recover all the information encoded by conformal symmetry. To facilitate our discussion, let us begin with an explicit presentation of the 3-point gluon correlators. Similar to the 4-point gluon correlator discussed in Section <ref>, we will provide the final results here, while referring interested readers to previous papers<cit.> for detailed calculations: ⟨ JJJ⟩=2(ϵ⃗_1·ϵ⃗_2)(ϵ⃗_3·k⃗_1)/k_1+k_2+k_3+2(ϵ⃗_2·ϵ⃗_3)(ϵ⃗_1·k⃗_2)/k_1+k_2+k_3+2(ϵ⃗_1·ϵ⃗_3)(ϵ⃗_2·k⃗_3)/k_1+k_2+k_3. Next, we can utilize the unifying relation operators to derive the 3-pt mixed correlator between conformally coupled scalars and the conserved current from the 3-pt pure gluon correlators: Jφφ=T[23]⟨ JJJ⟩=2(ϵ⃗_1·k⃗_2)/k_1+k_2+k_3. Now, our goal is to find suitable weight-shifting operators that can recover the 3-pt gluon correlators ⟨ JJJ⟩ in (<ref>) from ⟨ Jφφ⟩ in (<ref>). There are various methods to generate a correlation function with the correct kinematics, and one possible approach is to utilize the spin-raising operator (<ref>): ⟨ JJJ⟩=S^++_23⟨ Jφφ⟩+cycclic permutations. Indeed, utilizing the spin-raising operators allows us to restore the full result for the 3-pt gluon correlator. However, it is important to note that these naive spin-raising operators cannot serve as the inverse of our unifying relation operators. The reason for this is that we need to incorporate all the possible information contained in the 3-pt mixed correlators involving both conformally coupled scalars and gluons, such as ⟨φ Jφ⟩ and ⟨φφ J⟩. 
The concept of inverse implies that we expect to recover the 3-pt gluon correlator only from ⟨ Jφφ⟩, which is not achievable with these spin-raising operators alone. However, it is important to reiterate that the 3-pt correlators are determined only up to a coupling constant. Consequently, there is a possibility to recover the 3-pt gluon correlator only from ⟨ Jφφ⟩, which allows us to find out the inverse of the unifying relation operators T[23] ⟨ JJJ⟩=T[23]^-1⟨ Jφφ⟩,      T[23]^-1=(H_23+D_22D_33-2D_23D_32)W_23^++. It is evident from the equation (<ref>) presented that the operator T[23]^-1 can be utilized to restore the 3-pt gluon correlator ⟨ JJJ⟩ from the mixed correlator ⟨ Jφφ⟩. This implies that we have successfully identified the inverse of the unifying relation operator T[23]. Consequently, we can initiate the process with a 3-pt pure gluon correlator and generate a mixed correlator through the application of a differential operator, and vice versa (see Fig. <ref>). While it is true that the 3-pt correlators are determined up to coupling constants, the situation becomes more intricate when multiple particles possess spin <cit.>. In such cases, there can exist more than one linearly independent structure that complies with conformal invariance. Consequently, determining which weight-shifting operator enables the recovery of the pure YM correlator ⟨ JJJ⟩ from ⟨ Jφφ⟩ remains an open question. To illustrate, when generating the 3-pt gluon correlator (<ref>) from the mixed correlator (<ref>) using a weight-shifting operator, one possible approach is as follows: ⟨ JJJ⟩=k_2k_3H_23⟨ Jφφ⟩. This particular weight-shifting operator successfully reproduces the appropriate quantum numbers, including both spin and conformal dimension, for ⟨ JJJ⟩. However, it introduces additional pole structures that are not expected to be present. In fact, the 3-pt pure gluon correlator obtained using these weight-shifting operators corresponds to the Tr(F^3) term, where F represents the YM field strength tensor. §.§ 4-pt inverse: weight-shifting uplifting? However, the situation becomes more intricate for higher-point correlators. While conformal invariance can still yield certain structures in 4-point correlators, directly inverting the unifying relation proves to be challenging. For higher-point correlators, conformal invariance does not offer a straightforward solution, suggesting that a readily available weight-shifting prescription may not exist. In the subsequent discussion of this subsection, our focus remains on our attempt at constructing the inverse of unifying relation operators. We want to emphasize once again that we only consider the s-channel in this subsection. Before delving into the formal discussion of the inverse of unifying relation operators in 4-point correlators, let us establish some conventions for this subsection. We denote F_Δ=2 as the 4-point scalar correlator with a conformally coupled exchange in the internal line. Additionally, we use ⟨φφφφ⟩ to represent the 4-point scalar correlator with a spinning particle exchange in the internal line. §.§.§ Known results and discussions Let us begin by examining the simpler cases involving 4-point mixed correlators between two conserved currents and two conformally coupled scalars. In this discussion, we will not consider scenarios where the internal line corresponds to spinning fields; instead, our focus will be on the cases with scalar exchanges. 
For instance, consider the correlator ⟨ Jφ Jφ⟩, where the internal line represents a conformally coupled scalar. Drawing inspiration from the 3-point cases, we can employ weight-shifting operators for the external legs to generate ⟨ Jφ Jφ⟩ from F_Δ=2 since there is no need to raise the spin of the internal line[In <cit.>, they omit the overall constant in the eq. (5.24).]: JφJφ=4k_2D_12k_4D_34F_Δ=2=4(k⃗_2·ϵ⃗_1)(k⃗_4·ϵ⃗_3)/(k_1+k_2+k_s)(k_3+k_4+k_s)(k_1+k_2+k_3+k_4). There does not exist a known weight-shifting operator to obtain the 4-point gluon correlator ⟨ JJJJ⟩ from ⟨ Jφ Jφ⟩. This outcome may not come as a surprise because T[24] is known to decrease both the spin of the internal line and the external legs (Fig. <ref>), and the operators from the 3-point case are required to act on two points connected to the same vertex. Conversely, in order to find the inverse of T[24], we would need to identify an operator that can increase the spin of both the internal line and the external legs. However, to find this operator we need to regard Jφ Jφ as a product of two 3-pt correlators and then act operators on these two 3-pt correlators, and then we will find a singularity from the polarization tensor (Π_1)_ij. In the next subsection, we will discuss more about this operation. Next, we turn our attention to the more intricate scenario where the 4-point mixed correlators involve spinning field exchanges in the internal line. Consider, for instance, the correlator ⟨ JJφφ⟩, which features a spinning field exchange in the internal line. Recall that in the case of the 3-point correlators, we successfully constructed the inverse of the unifying relation operators (<ref>), enabling us to generate ⟨ JJJ⟩ from ⟨ Jφφ⟩. In the current situation, we also need to raise the spin of two external legs, suggesting the feasibility of considering the inverse of the unifying relation operator in (<ref>): ⟨JJφφ⟩∼(H_12+D_11D_22-2D_12D_21)W_12^++φφφφ. Although this combination of weight-shifting operators preserves the correct quantum numbers, it does not yield the expected expression for ⟨ JJφφ⟩ in the case of 4-point correlators. Unlike what we observed in the 3-point correlators, the combination of weight-shifting operators in (<ref>) leads to different pole structures compared to (<ref>). Specifically, the result in (<ref>) exhibits the same pole structure as the contact term rather than the desired poles. To explicitly reveal its pole structure, we can express ⟨φφφφ⟩ in terms of F_Δ=2 using the spin-raising operator for the internal line in (<ref>): (H_12+D_11D_22-2D_12D_21)W_12^++φφφφ=[S_12^++(Π_1,1D_uv+Π_1,0Δ_u)+2(ϵ⃗_1∘ϵ⃗_2)D_uv]Δ_u(Δ_u-12)F_Δ=2. Here, S^++_12 represents the spin-raising operator for external legs, Π_1,1 and Π_1,0 denote the spin polarization sums introduced in (<ref>). Furthermore, D_uv and Δ_u are differential operators with respect to dimensionless variables u and v, as defined in (<ref>). To simplify notation, we introduce the circle product between two polarization vectors ϵ⃗, which is defined as: ϵ⃗_1∘ϵ⃗_2=2/k_s^2[(ϵ⃗_1·k⃗_3)(ϵ⃗_2·k⃗_4)-(ϵ⃗_1·k⃗_4)(ϵ⃗_2·k⃗_3)]. It is worth noting that the singularity of (<ref>) is not correct for the s-channel part of a 4-pt correlator. However, the presence of the additional factor Δ_u(Δ_u-12) can give the right singularity. In order to eliminate these extra poles and obtain the correct result for ⟨ JJφφ⟩, we need to manually replace Δ_u(Δ_u-12)F_Δ=2 with F_Δ=2: ⟨ JJφφ⟩=[S_12^++(Π_1,1D_uv+Π_1,0Δ_u)+2(ϵ⃗_1∘ϵ⃗_2)D_uv]F_Δ=2. 
We should bear in mind that the 4-pt mixed correlator ⟨ JJφφ⟩ and the 4-pt pure scalar correlator with a spinning field exchange in the internal line are related through the unifying relation operators T[12], as demonstrated in (<ref>). However, it seems important to construct a combination of weight-shifting operators that can serve as the inverse of the unifying relation operator T[12] based on the weight-shifting operator prescription without the need for any manual replacement procedure. Such a failure, together with the failure of the Jφ Jφ case, can be understood since the conformal symmetry cannot determine the structure of 4-pt correlators, which means that there will still be some information that cannot be restored. In the following discussion, we will show how to construct JJJJ in the weight-shifting meaning, and then show the hints of a weight-shifting uplifting method for cosmological correlators. §.§.§ JJJJ and hints of a weight-shifting uplifting method The unifying relation tells us that for JJJJ, where all terms include a ϵ⃗_i·ϵ⃗_j, we can multiply the corresponding ϵ⃗_i·ϵ⃗_j to JOJO, JJOO, and so on. From this prescription, we can find the following result: JJJJ= (ϵ⃗_1·ϵ⃗_2)φφJJ+(ϵ⃗_3·ϵ⃗_4)JJφφ-(ϵ⃗_1·ϵ⃗_2)(ϵ⃗_3·ϵ⃗_4)φφφφ +(ϵ⃗_1·ϵ⃗_3)φJφJ+(ϵ⃗_1·ϵ⃗_4)φJJφ+(ϵ⃗_2·ϵ⃗_3)JφφJ+(ϵ⃗_2·ϵ⃗_4)JφJφ. For terms proportional to (ϵ⃗_1·ϵ⃗_2)(ϵ⃗_3·ϵ⃗_4) in JJJJ, they will be contributed by both (ϵ⃗_1·ϵ⃗_2)φφ JJ and (ϵ⃗_3·ϵ⃗_4) JJφφ. Therefore, we must add the term -(ϵ⃗_1·ϵ⃗_2)(ϵ⃗_3·ϵ⃗_4)φφφφ to cancel the repeated part. This equation satisfies the unifying relation manifestly. We can also use the seed function F_Δ=2 to get a more compact formalism: ⟨ JJJJ⟩= [S^++_12S^++_34(Π_1,1D_uv+Π_1,0Δ_w)+2S^++_34(ϵ⃗_1∘ϵ⃗_2)D_uv+2S^++_12(ϵ⃗_3∘ϵ⃗_4)D_uv +4S^++_24k_4D_34k_2D_12+4S^++_13k_3D_43k_1D_21-4S^++_23k_2k_3D_12D_43-4S^++_14k_1k_4D_21D_34]F_Δ=2. This formalism coincides with the statement in <cit.>, that the differential operators in unifying relations can be written as functional derivatives with respect to the weight-shifting operators. Note that the order of the weight-shifting operators is the same as the order defined in <cit.>. However, this equation seems not consistent with the weight-shifting perspective. The first line of the equation has raised the spin of the internal line, while the second line of the equation has not. From a weight-shifting perspective, we may find that the second line of the equation corresponds to a scalar internal line, which is not consistent with JJJJ. This inconsistency, however, does not exist. In the following discussion, we will show that the second line of the equation can also be obtained from a weight-shifting process with the spin of the internal line being raised. Jφφ J can be written as a product of two 3-pt correlators, which is the characterization of the disconnected part of 4-pt correlators: JφφJ=-4(ϵ⃗_1·k⃗_2)(ϵ⃗_4·k⃗_3)/E(k_1+k_2+k_s)(k_3+k_4+k_s)=⟨J_1φ_2φ_-s⃗⟩⟨φ_s⃗φ_3 J_4⟩/E. Here, E=k_1+k_2+k_3+k_4 is the total energy. Inspired by the method of obtaining Π_1,1D_uv+Π_1,0Δ_u (reviewed in section <ref>), we want to act some operators on the 3-pt correlator factors so that we can raise the spin of both the internal line and the external legs. A natural choice for the operator acting on the 3-pt correlator factors is S^++_ij, which raises the spin of two legs connected to the same point in the 3-pt case[One may wonder if we can use the operators like (H_12+D_11D_22-2D_12D_21)W_12^++. 
A simple calculation shows that this case is equivalent to the case we discuss now. The only difference is the value of some coefficients.]. First we derive the result after acting S^++i_ij: S_2,-s^++i⟨J_1φ_2φ_-s⟩ =-k^i_s(-2(⃗k⃗_⃗1⃗·ϵ⃗_2) (k⃗_2·ϵ⃗_1)/k_s(k_1+k_2+k_s)^2+2(ϵ⃗_1·ϵ⃗_2)/k_1+k_2+k_s)+ϵ^i_22(k⃗_2·ϵ⃗_1) /k_1+k_2+k_s S_s3^++j⟨φ_sφ_3 J_4⟩ =-k^j_s (2 (k⃗_3·ϵ⃗_4) (k⃗_4·ϵ⃗_3)/k_s (k_3+k_4+k_s)^2-2(ϵ⃗_3·ϵ⃗_4)/k_3+k_4+k_s)-ϵ_3^j2 (k⃗_3·ϵ⃗_4)/k_3+k_4+k_s Then, we can get (Π_1)_ijS_2s^++i⟨J_1φ_2φ_s⟩S_s3^++j⟨φ_sφ_3 J_4⟩ =-4(ϵ⃗_2·ϵ⃗_3)(k⃗_2·ϵ⃗_1)(k⃗_3·ϵ⃗_4)/(k_1+k_2+k_s)(k_3+k_4+k_s)+terms symmetric under (1↔2) or (3↔4). The first term in the second line is exactly -ES^++_23k_2k_3D_12D_43F_Δ=2. There is a singularity at Δ=2 in “terms symmetric under (1↔2) or (3↔4)". This will not bother us because if we sum over all the four terms of the second line of the equation (<ref>), “terms symmetric under (1↔2) or (3↔4)" will be canceled. One may wonder if there will be terms with a factor Δ-2 when we consider a general Δ and exactly cancel the singularity, since such terms may not satisfy the (1↔2) and (3↔4) symmetry. In other words, one may wonder if it is valid to take Δ=2 first in our calculations. The answer is yes. Here we show some illuminating calculations. From some CFT methods, one will find that the 3-pt function of a spin-1 current and two scalars can be determined up to a coefficient f_JOO: J_Δ_1(x_1)O_Δ_2(x_2)O_Δ_3(x_3)=(x⃗_21·ϵ⃗_1/x_21^2-x⃗_31·ϵ⃗_1/x_31^2)f_JOO/x_23^Δ_2+Δ_3-Δ_1+lx_31^Δ_3+Δ_1-Δ_2-lx_12^Δ_1+Δ_2-Δ_3-l, where x_ij=|x⃗_i-x⃗_j| is the distance between the operator O_i and O_j at the boundary of dS spacetime. One can take the Fourier transformation to obtain the cosmological correlators in the momentum space. Thus we will find that we can get (<ref>) in momentum space after acting some derivatives on the 3-pt correlators of 3 scalars (also in momentum space). More precisely, J_Δ_1(k_1)O_Δ_2(k_2)O_Δ_3(k_3)= c_1(ϵ⃗_1·K⃗_21)O_Δ_1(k_1)O_Δ_2+1(k_2)O_Δ_3(k_3) -c_2(ϵ⃗_1·K⃗_31)O_Δ_1(k_1)O_Δ_2(k_2)O_Δ_3+1(k_3), where c_1 and c_2 denote the ratio of the coefficients of 3-pt correlators (e.g. f_JOO/f_OOO). After taking Δ_1=Δ_2=2, we have J(k_1)φ(k_2)O_Δ(k_3)=c_1 (ϵ⃗_1·K⃗_21)φ(k_1)ϕ(k_2)O_Δ(k_3)-c_2(ϵ⃗_1·K⃗_31)φ(k_1)φ(k_2)O_Δ+1(k_3). It is not hard to get the answer by Mathematica[here c_1 and c_2 are all included in the overall constant of the following equation, which we have omitted.]: J(k_1)φ(k_2)O_Δ(k_3)∼ (ϵ⃗_1·k⃗_2)[2^Δ-5/2 Γ(3-Δ) Γ(Δ-3/2) (k_1+k_2)^Δ-3 _2F_1(3-Δ/2,2-Δ/2;5/2-Δ;k_3^2/(k_1+k_2)^2) +2^1/2-Δ Γ(3/2-Δ) Γ(Δ) k_3^2 Δ-3 (k_1+k_2)^-Δ _2F_1(Δ/2,Δ+1/2;Δ-1/2;k_3^2/(k_1+k_2)^2)], when we take Δ→2, the equation above will change to J(k_1)φ(k_2)φ(k_3)∼(ϵ⃗_1·k⃗_2)/k_1+k_2+k_3, which matches with the result in <cit.>. 
Then we can act S^++i_23 (S^++_23=∑_iϵ_3^iS^++i_23) on J(k_1)φ(k_2)O_Δ(k_3), we have S^++i_23 J(k_1)φ(k_2)O_Δ(k_3) ∼(Δ-1)ϵ_2^i(ϵ⃗_1·k⃗_2)2^Δ-5/2 Γ(3-Δ) Γ(Δ-3/2)k_12^Δ-3 _2F_1(3-Δ/2,2-Δ/2;5/2-Δ;k_r^2) +(Δ-1)ϵ_2^i(ϵ⃗_1·k⃗_2)2^1/2-Δ Γ(3/2-Δ) Γ(Δ) k_3^2 Δ-3 k_12^-Δ _2F_1(Δ/2,Δ+1/2;Δ-1/2;k_r^2) +k_3^i[(ϵ⃗_1·ϵ⃗_2)2^Δ-5/2 Γ(3-Δ) Γ(Δ-3/2) k_12^Δ-3 _2F_1(3-Δ/2,2-Δ/2;5/2-Δ;k_r^2) +(ϵ⃗_1·ϵ⃗_2)2^1/2-Δ Γ(3/2-Δ) Γ(Δ) k_3^2 Δ-3k_12^-Δ _2F_1(Δ/2,Δ+1/2;Δ-1/2;k_r^2) +(ϵ⃗_2·k⃗_1) (ϵ⃗_1·k⃗_2)/Δ-1/22^-Δ-1/2 Δ(Δ+1) Γ(3/2-Δ) Γ(Δ) k_3^2 Δ-3 k_12^-Δ-2 _2F_1(Δ/2+1,Δ+1/2+1;Δ+1/2;k_r^2) +(ϵ⃗_2·k⃗_1) (ϵ⃗_1·k⃗_2)/5/2-Δ2^Δ-5/2(2-Δ/2) (3-Δ)Γ(3-Δ) Γ(Δ-3/2)k_12^Δ-5 _2F_1(5-Δ/2,3-Δ/2;7/2-Δ;k_r^2) +(ϵ⃗_2·k⃗_1) (ϵ⃗_1·k⃗_2)2^1/2-Δ (2 Δ-3) Γ(3/2-Δ) Γ(Δ) k_3^2 Δ-5k_12^-Δ _2F_1(Δ/2,Δ+1/2;Δ-1/2;k_r^2))], To simplify our notation, we introduce two new variables: k_12=k_1+k_2 and k_r= k_3/(k_1+k_2). Here particle 3 will be treated as the propagating particle. It is not hard to find that things are the same as before: the singularity at Δ=2 will be canceled! In fact, there will be no term proportional to Δ-2 in S^++i_23 Jφ O_Δ, which means taking the limit Δ→ 2 before acting S^++i_ij is valid in this case. So far, we have demonstrated how to construct JJJJ and the consistency with the weight-shifting perspective. In fact, there may be deeper reasons for such a construction. Inspired by the traditional uplifting method <cit.>, we find that the equation (<ref>) can be obtained by some replacement of the flat 4-pt gluon amplitude and will show this fact in the following discussion. Now we write down the s-channel terms of the flat 4-pt gluon amplitude appearing in <cit.>: -1/2S{(ϵ⃗_1·ϵ⃗_2)(ϵ⃗_3·ϵ⃗_4)(-U+T)-4(ϵ⃗_2·ϵ⃗_3)(k⃗_3·ϵ⃗_4)(k⃗_2·ϵ⃗_1)+4(ϵ⃗_2·ϵ⃗_4)(k⃗_4·ϵ⃗_3)(k⃗_2·ϵ⃗_1) +4(ϵ⃗_1·ϵ⃗_3)(k⃗_3·ϵ⃗_4)(k⃗_1·ϵ⃗_2)-4(ϵ⃗_1·ϵ⃗_4)(k⃗_4·ϵ⃗_3)(k⃗_1·ϵ⃗_2) +4(ϵ⃗_1·ϵ⃗_2)[(k⃗_1·ϵ⃗_3)(k⃗_2·ϵ⃗_4)-(k⃗_2·ϵ⃗_3)(k⃗_1·ϵ⃗_4)]+4(ϵ⃗_3·ϵ⃗_4)[(k⃗_3·ϵ⃗_1)(k⃗_4·ϵ⃗_2)-(k⃗_4·ϵ⃗_1)(k⃗_3·ϵ⃗_2)]} where T=-(k⃗_1+k⃗_4)^2 and so as U, S, which are the Mandelstem varieties in flat spacetime. Note that in flat spacetime we have k^2=0 for massless particles. After the following replacement: ϵ⃗_i·ϵ⃗_j →S^++_ij ϵ⃗_i·k⃗_j →k_jD_ij (i,j on the same side of the channel) T-U=-S-2U →Π_1,1D_uv+Π_1,0Δ_u 2(ϵ⃗_i·k⃗_j)(ϵ⃗_k·k⃗_l) →ϵ⃗_i∘ϵ⃗_kD_uv (i,j (also k,l) on the different sides of the channel) -1/2S →F_Δ=2, we will reproduce (<ref>). In this replacement, k_s^2 corresponds to D_uv, and -S-2U corresponds to the operator that raises the spin of the internal line. This can be explained by comparing the “polarization sum" P_1=Π_1,1s^2-Π_1,0(k_1+k_2+k_s)(k_3+k_4+k_s) in the factorization method (the definition of P_1 and more about cosmological bootstrap can be found in <cit.>) with the spin-raising operators Π_1,1D_uv+Π_1,0Δ_u. Note that -S-2U is the flat limit of P_1. From (<ref>), we find that we cannot simply get JJJJ from JOJO or JJOO, but we need to consider all of those and finally express the correlators by acting some operators on the seed F_Δ=2. It means the trial for seeking the inverse of the unifying relation fails, like the flat case. However, the equation (<ref>) has a similar structure as the expanding formula between YM theory and the bi-adjoint scalar theory in the flat spacetime. This inspires us that maybe expanding formula can be generalized to dS spacetime. § OUTLOOKS In this paper, we have reviewed two ways, weight-shifting operators and unifying relations, to get other cosmological correlators from the given one and made some new comments. 
Then we showed that the trial for seeking the inverse of unifying relations fails. However, we have found a “weight-shifting uplifting" method similar to the traditional uplifting method for dS correlators in our special case, which is a hint for applying weight-shifting operators to higher point correlators. There are several open problems inspired by this work: * Can we find other weight-shifting operators to raise the spin of the internal lines? In section <ref>, we have constructed a new way to raise the spin of the internal line for the special case we met. Maybe there are some other similar cases, and they will lead to some new allowed structures. * Can this prescription be generalized to BG currents so that we can get n-pt results by BG recursions? For weight-shifting operators, it is very hard to use them to deal with higher-point correlators. However, from the structure of BG currents in dS spacetime, even n-pt cosmological correlators have some features similar to the flat case. This may imply that the uplifting way may lead to a higher-point correspondence between the cosmological correlators and the flat amplitudes. * Can we do the same thing for the graviton correlators<cit.>? Maybe we need to prove unifying relations for the gravity theory. However, in flat spacetime, the gravity theory we deal with is the extended gravity theory, which includes dilatons and B-fields. Recently, there are some progress in the double copy of the cosmological correlators<cit.>, which may help us to find out the “weight-shifting uplifting" for graviton correlators in dS spacetime. These are all interesting problems, and solving them will deepen our understanding of cosmological correlators. We also hope we can solve some of them in future work. § ACKNOWLEDGEMENTS We would like to thank Hayden Lee for enlightening discussions, and Jiajie Mei for discussions and comments on our draft. YT is partly supported by National Key R&D Program of China (NO. 2020YFA0713000). § MORE ON WEIGHT-SHIFTING OPERATORS In this appendix, we will give more comments on weight-shifting operators. In the context of cosmological applications, expressing correlators in momentum space is highly convenient. However, we need to keep in mind that in the Poincaré patch of de Sitter space (dS_4), we lack a timelike Killing vector, which prevents us from performing a Fourier transformation with respect to the time direction. Instead, the cosmological correlator in momentum space can be expressed as an integral over conformal time, often referred to as the seed integral. Subsequently, the weight-shifting operators can be represented as differential operators with respect to momentum, acting on the correlators in momentum space. One effective approach to identifying the appropriate weight-shifting operator is by examining the seed integral. In essence, the cosmological correlator can be expressed as an integral over conformal time, incorporating both bulk-to-bulk and bulk-to-boundary propagators. Although the integrand of various correlators may differ, we can establish a connection between these integrands using differential operators in terms of momentum. Since the derivative with respect to momentum can be commuted with the time integral, we can establish correlations among correlators of diverse theories using these differential operators, which precisely correspond to the weight-shifting operators. 
To establish a clearer connection between different theories, we can begin by explicitly formulating the integral for various correlator expressions. The starting point of this work is the well-established 3-point scalar correlator with a general conformal weight in momentum space. We present the seed integral for this particular correlator<cit.>: ⟨O_1O_2O_3⟩=k_1^Δ_1-3/2k_2^Δ_2-3/2k_3^Δ_3-3/2∫_0^∞dz z^1/2K_Δ_1-3/2(k_1 z)K_Δ_2-3/2(k_2 z)K_Δ_3-3/2(k_3 z). It is worth noting that the expression provided above neglects an overall normalization coefficient. Additionally, the symbol K represents the Bessel-K function. Interestingly, for certain special conformal dimensions Δ, the integral can be expressed using elementary functions. In the context of cosmological applications, our primary focus is on conformally coupled and massless scalars, which have conformal dimensions of Δ=2 and Δ=3, respectively. For these two conformal dimensions, the integral reduces precisely to elementary functions. For example, the 3-point correlator for the conformally coupled scalar can be expressed as follows: ⟨φφφ⟩=log(K/μ) where K=k_1+k_2+k_3 is the total momentum and μ represents an energy scale that can be chosen arbitrarily. It is important to mention that the logarithm arises due to the IR divergence in the integral. This divergence becomes evident from the bulk perspective, where the seed integrand for φφφ is proportional to e^-Kz/z, exhibiting divergence in the infrared limit. The apparent dependence on the energy scale violates dilatation symmetry, which is why it is commonly referred to as the anomalous term from the boundary perspective. Now, we aim to introduce the weight-shifting operators and demonstrate their effects using the seed integral. To provide a clearer illustration, let us introduce the simplest weight-shifting operator 𝒲^–_12 as an example, which decreases the conformal dimension by one unit at each of two points (in this case, particle 1 and particle 2). Going back to position space, the correlator can be expressed as: ⟨O_1(x⃗_1)O_2(x⃗_2)O_3(x⃗_3)⟩=f_123/|x⃗_1-x⃗_2|^Δ_1+Δ_2-Δ_3|x⃗_2-x⃗_3|^Δ_2+Δ_3-Δ_1|x⃗_3-x⃗_1|^Δ_3+Δ_1-Δ_2, where f_123 is an overall constant that is not crucial for our discussion; we disregard such coefficients in the subsequent analysis. In order to obtain the 3-point scalar correlator with conformal dimensions Δ_1, Δ_2, and Δ_3 in momentum space, we perform a Fourier transformation: ⟨O_1O_2O_3⟩=∫ dx⃗_1dx⃗_2dx⃗_3e^ik⃗_1·x⃗_1e^ik⃗_2·x⃗_2e^ik⃗_3·x⃗_3/|x⃗_1-x⃗_2|^Δ_1+Δ_2-Δ_3|x⃗_2-x⃗_3|^Δ_2+Δ_3-Δ_1|x⃗_3-x⃗_1|^Δ_3+Δ_1-Δ_2 The derivative with respect to momentum can be interchanged with the integral over position variables x⃗_1, x⃗_2, and x⃗_3. Therefore, we can apply the weight-shifting operator 𝒲^–_12 to the correlator, placing it in front of the integral. Subsequently, we insert it into the integrand, and this differential operator acts solely on the exponential factors involving momentum. As a result, we obtain the 3-point scalar correlator with the conformal dimensions Δ_1 and Δ_2 both reduced by one unit: ⟨O_Δ_1-1(k⃗_1)O_Δ_2-1(k⃗_2)O_Δ_3(k⃗_3)⟩ =∫ dx⃗_1dx⃗_2dx⃗_3|x⃗_1-x⃗_2|^2e^ik⃗_1·x⃗_1e^ik⃗_2·x⃗_2e^ik⃗_3·x⃗_3/|x⃗_1-x⃗_2|^Δ_1+Δ_2-Δ_3|x⃗_2-x⃗_3|^Δ_2+Δ_3-Δ_1|x⃗_3-x⃗_1|^Δ_3+Δ_1-Δ_2 ∼∫ dx⃗_1dx⃗_2dx⃗_3W^–_12e^ik⃗_1·x⃗_1e^ik⃗_2·x⃗_2e^ik⃗_3·x⃗_3/|x⃗_1-x⃗_2|^Δ_1+Δ_2-Δ_3|x⃗_2-x⃗_3|^Δ_2+Δ_3-Δ_1|x⃗_3-x⃗_1|^Δ_3+Δ_1-Δ_2 =W^–_12⟨O_Δ_1(k⃗_1)O_Δ_2(k⃗_2)O_Δ_3(k⃗_3)⟩ The notation ∼ means that we omit some overall constants. 
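The key step in the chain above is that the momentum-space operator pulls down a factor of |x⃗_1-x⃗_2|^2 when inserted under the Fourier integral. The small Python sketch below checks this on the plane-wave factor alone. It assumes the standard form 𝒲^–_12 ∝ ∑_i(∂_k_1i-∂_k_2i)^2; this explicit form is not quoted in the text above, so treat it as an assumption, and everything is only up to overall constants, consistent with the ∼ notation.

# Sketch (assumed operator form, up to constants): check that
# W--_12 ~ sum_i (d/dk1_i - d/dk2_i)^2 acting on the plane-wave factor
# of the Fourier transform produces -|x1 - x2|^2 times the plane wave.
import sympy as sp

k1 = sp.symbols('k1_0:3', real=True)
k2 = sp.symbols('k2_0:3', real=True)
x1 = sp.symbols('x1_0:3', real=True)
x2 = sp.symbols('x2_0:3', real=True)

# plane-wave factor e^{i k1.x1 + i k2.x2} appearing in the momentum-space integral
pw = sp.exp(sp.I*(sum(k1[i]*x1[i] for i in range(3)) +
                  sum(k2[i]*x2[i] for i in range(3))))

# sum_i (d/dk1_i - d/dk2_i)^2 applied to the plane wave
W_pw = sum(sp.diff(pw, k1[i], 2) - 2*sp.diff(pw, k1[i], 1, k2[i], 1)
           + sp.diff(pw, k2[i], 2) for i in range(3))

ratio = sp.simplify(W_pw / pw)
sep2 = sum((x1[i] - x2[i])**2 for i in range(3))
print(sp.simplify(ratio + sep2))   # -> 0, i.e. the operator pulls down -|x1 - x2|^2

Only the plane-wave factor is manipulated here; the remaining position-space factors and the overall constants dropped by ∼ are untouched, and those constants are exactly the ones discussed next.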
Such constants can be restored by an explicit calculation on the cosmological correlator side. The calculation above indicates that we can construct weight-shifting operators by comparing the correlator of interest with the 3-point correlator in position space and subsequently transforming to momentum space. Remarkably, this construction ensures that the weight-shifting operators possess the same quantum numbers for both spin and conformal dimension. Henceforth, we will directly present the weight-shifting operator without explicitly demonstrating the construction procedure. However, it is essential to bear in mind that the construction procedure is nearly identical to the one we previously employed in constructing 𝒲^–_12. To learn more about the construction procedure, see <cit.>, where the embedding space formalism is used. § 4-PT SCALAR SEED CORRELATORS In this appendix, we introduce the 4-point scalar seed correlators. It is important to note that the 4-point scalar seed integral differs from the 3-point correlators, as it involves two conformal time integrals, adding complexity to the time ordering. Our main focus in the following discussion will be on the scalar exchange solution. The conventional approach to computing vacuum expectation values in a time-dependent background is the Schwinger-Keldysh (SK) formalism<cit.>. For instance, considering the interaction gφ^2σ, where σ is a scalar field with an arbitrary conformal dimension, its seed integral can be expressed as follows: F=-g^2/2∑_ab∫_-∞^0dη/η^2dη'/η'^2e^iak_12ηe^ibk_34η'G_ab(k_s;η,η'), where G_ab(k_s;η,η') is the bulk-to-bulk propagator for the massive scalar σ, a,b=± are the SK indices, η is the conformal time, and g is the coupling constant between the fields φ and σ. The specific form of the propagator G_ab(k_s;η,η') depends on the particular model under consideration, and we will not delve further into its details in this study <cit.>. For our specific application, we can provide the explicit expression for the seed integral when the exchanged particle σ has a conformal dimension Δ=2. This solution can be derived from both the bulk and boundary perspectives. Here, we present the result without going into the detailed derivation. For a more comprehensive understanding, interested readers are encouraged to refer to previous papers such as <cit.>: F_Δ_σ=2=1/2k_s[Li(k_34-k_s/E)+Li(k_12-k_s/E)+log(k_12+k_s/E)log(k_34+k_s/E)]. In the above expression, we have introduced the following notations: E=k_1+k_2+k_3+k_4 as the total energy, and k_12=k_1+k_2 and k_34=k_3+k_4 for convenience. Additionally, the symbol “Li" denotes the dilogarithm function.
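As a quick numerical sanity check of the Δ_σ=2 seed function quoted above, the Python sketch below evaluates it with mpmath's dilogarithm and verifies the manifest k_12 ↔ k_34 symmetry of the expression. The momentum values are arbitrary illustrative numbers, and the normalisation is taken exactly as written (any overall coefficient dropped in the text is not restored here).

# Sketch: numerical evaluation of F_{Delta_sigma = 2} as quoted above,
# using mpmath's polylog for the dilogarithm Li_2.
from mpmath import mp, polylog, log

mp.dps = 30  # working precision

def F_seed(k12, k34, ks):
    """F_{Delta=2}(k12, k34, ks) with E = k12 + k34 the total energy."""
    E = k12 + k34
    return (polylog(2, (k34 - ks) / E) + polylog(2, (k12 - ks) / E)
            + log((k12 + ks) / E) * log((k34 + ks) / E)) / (2 * ks)

k12, k34, ks = 1.3, 0.9, 0.5   # illustrative momenta with ks < k12, k34
print(F_seed(k12, k34, ks))                          # finite value
print(F_seed(k12, k34, ks) - F_seed(k34, k12, ks))   # ~ 0: symmetric in k12 <-> k34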
http://arxiv.org/abs/2307.02268v1
20230705130600
Modelling dynamically driven global cloud formation microphysics in the HAT-P-1b atmosphere
[ "Elspeth K. H. Lee" ]
astro-ph.EP
[ "astro-ph.EP" ]
Insight into the formation and global distribution of cloud particles in exoplanet atmospheres continues to be a key problem to tackle going into the JWST era. Understanding microphysical cloud processes and atmospheric feedback mechanisms in 3D has proven to be a challenging prospect for exoplaneteers. In an effort to address the large computational burden of coupling these models in 3D simulations, we develop an open source, lightweight and efficient microphysical cloud model for exoplanet atmospheres. `Mini-cloud' is a microphysics-based cloud model for exoplanet condensate clouds that can be coupled to contemporary general circulation models (GCMs) and other time dependent simulations. We couple mini-cloud to the Exo-FMS GCM and use a prime JWST target, the hot Jupiter HAT-P-1b, as a test case for the cloud formation module. After 1000+ days of integration with mini-cloud, our results show a complex 3D cloud structure with cloud properties relating closely to the dynamical and temperature properties of the atmosphere. Current transit and emission spectra data are best fit with a reduced cloud particle number density compared to the nominal simulation, with our simulated JWST NIRISS SOSS spectra showing promising prospects for characterising the atmosphere in detail. Overall, our study is another small step in first-principles 3D exoplanet cloud formation microphysical modelling. We suggest that additional physics not included in the present model, such as coagulation, is required to reduce the number density of particles to levels consistent with observations. planets and satellites: atmospheres – planets and satellites: individual: HAT-P-1b – planets and satellites: gaseous planets § INTRODUCTION Clouds are a ubiquitous part of atmospheres inside and outside the Solar System, dictating a significant or sometimes dominant part of their observational properties. In the JWST Early Release Science (ERS) transiting exoplanet program <cit.>, transit spectra of the hot Saturn WASP-39b were observed across a variety of instrument modes <cit.>; models with a cloud component were best able to fit the data across the wide wavelength range of all observing modes. <cit.> produced a holistic look at the spectra of ten hot Jupiters with HST and Spitzer data, finding a continuum of cloudy spectral features dependent on the equilibrium temperature of the planets. Clouds are a prime candidate to explain the nightside brightness temperature trends for hot Jupiter exoplanets <cit.>, where the nightside brightness temperatures remain clustered around 1000 K for planets below equilibrium temperatures of around 2000 K. Modelling observable trends in cloudy exoplanets has been attempted by a few studies. 
In <cit.>, a combined cloud and haze model was used to fit the amplitude trend found in the 1.4μm H2O feature in transmission spectra, which is affected by aerosol particle opacity. <cit.> model the cloud profiles of the nightside regions of hot Jupiters, concluding that the Spitzer brightness temperatures can be explained through changes in the cloud top pressure as a function of the planetary equilibrium temperature. In concert with recent JWST data, high resolution spectroscopy from ground based instrumentation such as ESPRESSO and HARPS is now able to probe the condensation fronts of refractory species in ultra hot Jupiters (UHJs) <cit.> and more generally discover chemical gradients between the east and west terminator regions in UHJ atmospheres <cit.>. This new JWST and high-resolution era warrants a three dimensional (3D) understanding of the atmospheric distribution of cloud particles, due to the fidelity and spatial information provided by these observatories, which give detailed insight into the chemical composition and cloud properties of exoplanets hemisphere to hemisphere. A popular option to model clouds across an exoplanet atmosphere is to post-process output of a GCM model and apply a 1D cloud formation code to extracted cloud-free vertical T-p structures. This has been performed successfully in many studies <cit.>, giving detailed insight into global distributions of cloud structures and the expected cloud compositions across a wide range of exoplanet system parameters. The microphysical cloud formation bin method model CARMA has been used in this context, where in <cit.> they processed outputs from the SPARC/MITgcm <cit.> across a wide equilibrium temperature range. This model was also used in <cit.> and <cit.> to explain trends in cloud affected transmission spectra features seen as a function of equilibrium temperature. Most commonly, phase equilibrium models, where the cloud particles are assumed to be at equilibrium with the surrounding gas vapour, have been coupled to 3D GCM models of exoplanet atmospheres. For example, <cit.> use a simple phase equilibrium conversion scheme to model KCl and ZnS clouds in GJ 1214b. The one dimensional EddySed <cit.> model has been coupled into the Met-Office UM GCM to model clouds on HD 209458b <cit.> and GJ 1214b <cit.>. <cit.> apply a temperature dependent cloud opacity scheme with prescribed cloud distributions and sizes which is fed back into the GCM radiative-transfer (RT) module. Similarly, <cit.> use a temperature dependent cloud opacity scheme to mimic the radiative effects of cloud formation. <cit.> use a vapour and condensate tracer scheme with a relaxation timescale method coupled to the gas replenishment and settling rate to convert between both quantities. A common property of the above phase equilibrium models is the assumption of a particular size distribution of particles, arranging the condensed mass into corresponding parameterised size-distribution parameters (e.g. assuming a mean particle size and variance); generally, a log-normal distribution is the distribution of choice due to its useful mathematical properties. Only a couple of studies to date have attempted to couple microphysics-based cloud formation models to large scale hydrodynamic simulations. These have used the DIHRT model <cit.>, with <cit.> coupling to the radiative-hydrodynamic atmospheric model of <cit.>, and <cit.> coupling DIHRT to the UK Met Office UM GCM <cit.>. 
Microphysical models typically attempt to simulate the complete lifecycle of the cloud size-distribution time-dependently in some manner, nucleating seed particles from the gas phase and considering the condensation and evaporation rates of individual species with time. This adds considerable complexity and additional physics to consider compared to phase equilibrium assumptions. In addition, typically size-distributions are organically computed in microphysical models rather than imposed. However, such models have proven to be extremely computationally intensive, with the above studies only able to run for 100 simulated days coupled to the cloud formation model, with substantial high performance computing resources required to be utilised. To address the issue of computational feasibility of microphysical models we develop a new, open source and more efficient microphysical cloud formation model, `mini-cloud'. We perform 3D GCM simulations of the hot Jupiter HAT-P-1b as an example testbed of our new cloud formation module. HAT-P-1b is a hot Jupiter exoplanet discovered by <cit.>. It is a prime candidate for atmospheric characterisation due to its inflated radius (1.32 R_ Jup) and reduced bulk density due to its mass of 0.53 M_ Jup <cit.>. <cit.> observed a transit of HAT-P-1b using WFC3 onboard HST, finding a strong H2O signature in the near-IR, indicative of a relatively cloud free atmosphere. <cit.> used the HST STIS instrument to measure the optical wavelength transmission spectrum, finding a strong Na feature but a lack of K absorption. They find that the wings of the Na feature are cut off, suggesting a cloud opacity component as a possibility to fit and connect the STIS and WFC3 data consistently. This HST data was collated in the <cit.> summary of ten hot Jupiters as well as the addition of Spitzer 3.6 μm and 4.5 μm transit depths. <cit.> perform retrieval modelling of the HAT-P-1b data from <cit.> finding grey or Rayleigh scattering clouds between 0.1 and 0.01 bar are consistent with the data. <cit.> suggest a super-solar H2O abundance with a median cloud top pressure of 0.1 bar when using their retrieval framework on the same data. HAT-P-1b is scheduled to be observed with the JWST NIRISS SOSS instrument for transit and eclipse during Cycle 1 <cit.> which will add further detailed characterisation information on the atmosphere. In Section <ref> we present the mini-cloud model as well as how it differs from the full DIHRT model. In Section <ref> we detail the expected cloud formation species on HAT-P-1b and details on the GCM setup with mini-cloud. Section <ref> presents the results of our coupled 3D GCM modelling of HAT-P-1b and the produced cloud structures. In Section <ref> we post-process our GCM results to produce synthetic transmission and secondary eclipse emission observables. Section <ref> contains the discussion of our results and Section <ref> contains the summary and conclusion. § MINI-CLOUD Mini-cloud is an open source[<https://github.com/ELeeAstro/mini_cloud>] microphysical cloud formation model that has its origins in the methodologies of Helling and collaborators <cit.>, which in turn is based on the theory and development from modelling AGB star wind dust formation <cit.>. A time-dependent version of the underlying theory was utilised in <cit.>, DIHRT, to be coupled to hydrodynamic simulations of exoplanet atmospheres. 
However, a major limitation of this model was the inability to simulate beyond 100 simulated Earth days due to the large computational effort required to couple the full microphysical model <cit.>. With this background, we develop mini-cloud as an offshoot of DIHRT, designed to be much more efficient at coupling to 3D GCMs and allowing longer integration timescale simulations to be performed. However, this comes at the price of reduced complexity compared to the full DIHRT model (Section <ref>). Mini-cloud can therefore be seen as an intermediate-complexity model for simulating cloud microphysics in large scale projects where computational efficiency is paramount. Mini-cloud uses the `method of moments' to evolve the integrated cloud particle size-distribution. The ith moment of the particle size-distribution, K_i [cm^i cm^-3], is defined as <cit.> K_i = ∫^∞_a_0a^if(a)da, where a_0 [cm] is the seed particle size and f(a) [cm^-3 cm^-1] the particle size distribution as a function of particle size a [cm]. The moments therefore represent integrated quantities of the size distribution; for example, the total cloud particle number density, N_0 [cm^-3], is equal to the zeroth moment N_0 = K_0, and the number density weighted mean grain size, <a> [cm], is <a> = K_1/K_0. Another useful quantity is the area weighted mean grain size, also known as the effective radius, a_ eff [cm], given by the ratio a_ eff = K_3/K_2. The mean particle area, <A> [cm^2], and volume <V> [cm^3] are then <A> = 4πK_2/K_0, and <V> = 4π/3K_3/K_0, respectively. To integrate the moments in time, we use the equation set described in <cit.> where the time derivative of each moment (from i=0-3) is given by d K_0/dt = J_* + J_ evap, d K_1/dt = a_0(J_* + J_ evap) + da/dtK_0, d K_2/dt = a_0^2(J_* + J_ evap) + 2da/dtK_1, d K_3/dt = a_0^3(J_* + J_ evap) + 3da/dtK_2. Following <cit.>, this equation set is supplemented with a K_3 moment evolved for each cloud species s d K_3^s/dt = a_0^3(J_*^s + J_ evap^s) + 3da^s/dtK_2, which tracks the individual contribution of each cloud species to the total volume of the particle distribution. By definition, the total K_3 value is given by the sum over each individual species K_3 = ∑_sK^s_3. Due to this, mini-cloud is a mixed species `dirty' grain model in the same way as the DRIFT model <cit.>, with multiple condensed species contributing simultaneously to the grain bulk properties and composition. For the nucleation rate, J_* [cm^-3 s^-1], we use the modified classical nucleation theory as presented in several sources <cit.>. A single specific species is chosen as the main seed particle carrier (TiO2, C, SiO, KCl or NaCl in mini-cloud). For this specific HAT-P-1b case, we chose TiO2 with the updated temperature dependent surface tension expression from <cit.>. Throughout mini-cloud, a seed particle size of 1 nm is assumed. Unlike in previous studies, we explicitly include the rate of evaporation of seed particles, J_ evap [cm^-3 s^-1], in the differential equation set. This is justified as in a typical particle size distribution a population of seed particles can be present, which would evaporate for a given atmospheric thermochemical condition. For example, a significant population of seed particles can be seen in the CARMA simulations of <cit.>, suggesting that accounting for their evaporation time-dependently is an important consideration. The seed particle evaporation rate is estimated to be J^s_ evap = K_0/a_0da^s/dt, which only applies when da^s/dt < 0. 
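A minimal sketch of how the bulk cloud properties follow from the first four moments, using exactly the relations quoted above; the numerical K values below are illustrative placeholders, not mini-cloud output.

# Sketch: derived bulk cloud properties from the moments K_0..K_3.
import numpy as np

def bulk_properties(K):
    """K = [K0, K1, K2, K3] with K_i in cm^i cm^-3."""
    K0, K1, K2, K3 = K
    return {
        'N0 [cm^-3]' : K0,                              # total number density
        '<a> [cm]'   : K1 / K0,                         # number-weighted mean size
        'a_eff [cm]' : K3 / K2,                         # area-weighted (effective) size
        '<A> [cm^2]' : 4.0 * np.pi * K2 / K0,           # mean particle area
        '<V> [cm^3]' : 4.0 * np.pi / 3.0 * K3 / K0,     # mean particle volume
    }

K_example = [215.0, 215.0e-4, 215.0e-8 * 1.1, 215.0e-12 * 1.3]  # placeholder moments
for name, val in bulk_properties(K_example).items():
    print(f'{name}: {val:.3e}')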
The cloud particle growth or evaporation rate, da/dt [cm s^-1], for a species s, is given by <cit.> da^s/dt = V_0^sα^s n_i^s√(k_bT/2π m_i)S, where α^s is the reaction efficiency, m_i [g] the mass of the reactant and V_0^s [cm^3] the unit volume of the condensed species monomer. S is the stability coefficient given by S = (1 - 1/S_ r), where S_ r ≈ p_ par^s/p_ vap^s, the ratio of the partial pressure to the species vapour pressure, is the reaction supersaturation ratio of the cloud species <cit.>. Following <cit.>, when S is negative it is multiplied by the volume mixing ratio of the cloud species to the total bulk, K_3^s/K_3. The consumption rate of the number density of condensable gas, i, n_i [cm^-3], due to dust species s is given by <cit.> dn_i/dt = - N_l^s(J_*^s + J_ evap^s) - 4π K_2/V_0^sda^s/dt, where N_l^s is the number of monomers that make up one seed particle. To integrate the equation set in time we use the stiff ODE solver dvode which is part of the odepack[<https://computing.llnl.gov/projects/odepack>] package. §.§ The Kelvin effect An important consideration when modelling the condensation of species onto a surface is the Kelvin effect, in which a condensing droplet grows more efficiently on a flat surface than on a curved surface. This is represented by the dimensionless factor, K_ f, K_ f = exp(2σ^s_∞V_0^s/ak_bT), where σ^s_∞ [erg cm^-2] is the bulk surface tension of the condensate. This alters the stability criterion from Eq. <ref>, which then becomes S = (1 - K_ f/S_ r). The Kelvin effect therefore increases the supersaturation required by a condensation species before condensation will occur, depending on the bulk properties of the material through the surface tension and the radius of the surface the species is condensing on. The K_ f factor strongly depends on the size of the surface, becoming negligible (∼ 1) for large grains. The Kelvin effect is therefore most important when considering condensation and evaporation on smaller grain sizes. In mini-cloud we assume the grain size to be the effective particle size a_ eff in Eq. <ref>. §.§ Cloud particle opacity As in DIHRT, we use Mie theory to calculate the opacity and scattering properties of the cloud particles. The optical constants of the mixed grain are calculated using Landau-Lifshitz-Looyenga effective medium theory <cit.>, which combines each individual species' optical constants weighted by their volume mixing ratio contribution to the grain bulk. Optical constants for each species are taken from the <cit.> collection. We use the LX-MIE routine from <cit.> to perform the Mie theory calculations. To increase computational efficiency and avoid expensive Mie calculations, for small size parameters (x < 0.01) we use the Rayleigh scattering limit approximation <cit.>. For large size parameters (x > 100) we use Anomalous Diffraction Theory (ADT) <cit.>. Therefore, the Mie theory calculations are only applied for intermediate size parameters (0.01 < x < 100), which increases the computational efficiency of the opacity calculation scheme considerably compared to using Mie theory across the whole size parameter range. However, this comes at the cost of some accuracy, particularly for large size parameter regimes where the real refractive index is large (n ≳ 1.5), which is common for the mineral materials used in this study. The calculated cloud opacity is then mixed with the gas phase opacities for use inside the RT scheme. 
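A short sketch of the Kelvin factor and the modified stability coefficient as written above, to illustrate how K_f tends to 1 for large grains. The surface tension, monomer volume and supersaturation ratio below are placeholder numbers, not the values adopted in mini-cloud.

# Sketch: Kelvin factor K_f and modified stability coefficient S = 1 - K_f/S_r.
import numpy as np

kb = 1.380649e-16  # erg K^-1

def kelvin_factor(sigma_inf, V0, a, T):
    """K_f = exp(2 sigma_inf V0 / (a k_b T))."""
    return np.exp(2.0 * sigma_inf * V0 / (a * kb * T))

def stability(S_r, sigma_inf, V0, a, T):
    """S = 1 - K_f / S_r (reduces to 1 - 1/S_r when K_f ~ 1, i.e. large grains)."""
    return 1.0 - kelvin_factor(sigma_inf, V0, a, T) / S_r

sigma_inf = 500.0   # erg cm^-2, placeholder bulk surface tension
V0 = 3.0e-23        # cm^3, placeholder monomer volume
T = 1400.0          # K
S_r = 5.0           # placeholder supersaturation ratio

for a in (1e-7, 1e-6, 1e-4):   # 1 nm seed, 10 nm, 1 micron
    print(a, kelvin_factor(sigma_inf, V0, a, T), stability(S_r, sigma_inf, V0, a, T))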
§.§ Cloud particle settling To describe the settling of cloud particles, we follow the Stokes flow prescription presented in <cit.>, where the terminal fall speed, v_ f [cm s^-1], is given as v_ f = -2β a^2g(ρ_ d - ρ_ gas)/9η , where a [cm] is the cloud particle radius, g [cm s^-2] the local vertical gravitational acceleration (here taken as a positive quantity), ρ_ d [g cm^-3] the bulk density of the cloud particle and ρ_ gas [g cm^-3] the density of the local gas phase. β is the Cunningham slip factor β = 1 + K_N{1.256 + 0.4exp(-1.1/K_N)} , with K_N the Knudsen number K_N = λ/a, where λ [cm] is the atmospheric mean free path (Eq. <ref>). The dynamical viscosity, η [g cm^-1 s^-1], of the background gas is given by <cit.> η = 5/16√(π m k_bT)/π d^2(k_bT/ϵ_ LJ)^0.16/1.22, where d [cm] is the molecular diameter, m [g] the molecular mass and ϵ_ LJ [K] the depth of the Lennard-Jones potential. For the background gases of interest in this study, H2 and He, we take the parameters from <cit.>, replicated in Table <ref>. For an ideal gas, the mean free path, λ [cm], in each layer is related to the dynamical viscosity from λ = η/p√(π k_b T/2 μ̅ amu), where μ̅ [g mol^-1] is the mean molecular weight of the background gas. We assume that the particle size used in Eq. <ref> is the effective particle size (Eq. <ref>). In the model, Eq. <ref> is calculated every dynamical timestep and converted to pressure velocity units (Pa s^-1) assuming hydrostasy. We then use a MacCormack method vertical advection routine with a minmod flux limiter to calculate the vertical flux of the cloud particle moment solutions. In testing, it was found the vertical advection was a prime source of numerical instability and could produce nonphysical cloud particle properties. It was important to rigorously impose all boundary conditions and set the local vertical velocity to zero should the number density of cloud particles be less than 10^-10 cm^-3. Future studies will investigate different methodologies and more sophisticated vertical advection schemes to achieve greater accuracy; however, in practice our current scheme produces satisfactory and stable results. Horizontal and vertical advection of cloud particle tracers due to atmospheric flows is handled directly by the GCM dynamical core. §.§.§ Viscosity of gas mixtures We include the ability to specify a static background mixture with individual species mixing ratios. To capture the effect of gas mixtures on the dynamical viscosity, we apply the classical viscosity mixing law <cit.> η_ mix = ∑_i=1^Ny_iη_i/∑_j=1^Ny_jψ_ij, where η_ mix [g cm^-1 s^-1] is the dynamical viscosity of the gas mixture and y_i the molar ratio of component i. The function ψ_ij is given by ψ_ij = [1 + √(η_i/η_j)√(m_j/m_i)]^2/4/√(2)√(1 + m_i/m_j), where m [g] is the molecular mass of the gas species. The mean free path of the mixture can then be calculated from Eq. <ref>. For simplicity we assume a constant neutral gas Solar H2 and He ratio of 0.85 and 0.15 respectively. §.§.§ Importance of considering gas mixtures We briefly examine the importance of considering H2 and He gas mixtures compared to their pure components. Additionally, we include the approximate `square root' mixing law <cit.> η_mix = ∑_i√(m_i) y_iη_i/∑_i√(m_i) y_i. Figure <ref> presents the dynamical viscosity of pure H2 and He and a 0.85 - 0.15 mix, typical of a neutral gas solar metallicity mixing ratio, using Eqs <ref> and <ref>. The difference between pure H2 and the mixture ranges between 10-15%, depending on the temperature. 
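A hedged sketch of this comparison: the single-gas viscosity follows the expression quoted above, the mixing step uses the standard Wilke weighting function (assumed here to be what the classical mixing law refers to), and the square-root rule is implemented as written. The molecular diameters and Lennard-Jones well depths are illustrative literature-style values, not the entries of the paper's Table.

# Sketch: dynamical viscosity of a 0.85/0.15 H2-He mixture vs pure H2.
import numpy as np

kb  = 1.380649e-16    # erg K^-1
amu = 1.66053907e-24  # g

#                m [g],      d [cm],   eps_LJ [K]  (illustrative values)
species = {'H2': (2.016*amu, 2.827e-8, 59.7),
           'He': (4.003*amu, 2.551e-8, 10.22)}
y = {'H2': 0.85, 'He': 0.15}   # molar ratios used in the text

def eta_single(m, d, eps_K, T):
    # eta = 5/16 sqrt(pi m kb T)/(pi d^2) (T/eps_LJ)^0.16 / 1.22, eps_LJ in K
    return (5.0/16.0)*np.sqrt(np.pi*m*kb*T)/(np.pi*d**2) * (T/eps_K)**0.16 / 1.22

def eta_mix_wilke(T):
    # classical mixing law with the standard Wilke weighting function (assumption)
    eta = {s: eta_single(*species[s], T) for s in species}
    total = 0.0
    for i in species:
        denom = 0.0
        for j in species:
            phi = (1.0 + np.sqrt(eta[i]/eta[j])*(species[j][0]/species[i][0])**0.25)**2 \
                  / np.sqrt(8.0*(1.0 + species[i][0]/species[j][0]))
            denom += y[j]*phi
        total += y[i]*eta[i]/denom
    return total

def eta_mix_sqrt(T):
    # approximate `square root' mixing law as written in the text
    eta = {s: eta_single(*species[s], T) for s in species}
    num = sum(np.sqrt(species[s][0])*y[s]*eta[s] for s in species)
    den = sum(np.sqrt(species[s][0])*y[s] for s in species)
    return num/den

for T in (500.0, 1000.0, 1500.0):
    print(T, eta_single(*species['H2'], T), eta_mix_wilke(T), eta_mix_sqrt(T))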
With the viscosity of the mixture being larger than that of pure H2, it can be expected that the settling velocity from Eq. <ref> will be lower for the mixture than for pure H2, barring any possible cancelling effects. In addition, from Figure <ref> the <cit.> square root expression approximates the full <cit.> equation well, suggesting the square root expression is a simple way to take into account background mixtures when required. §.§ Differences to DIHRT In this section, we detail some of the key differences between mini-cloud and the DIHRT model. Mini-cloud is a simplified version of DIHRT, with several approximations and assumptions compared to the full model. In DIHRT several gas to solid phase reactions (sometimes in the 10s <cit.>) for each species are considered to contribute to the growth or evaporation of the grain surface. In mini-cloud this is replaced with a single pseudo-reaction where the fictitious condensable gas is directly condensed into the solid phase. For example, Mg2SiO4→Mg2SiO4[s], where [s] denotes the solid phase. The initial abundance of the fictitious gas may be set to the limiting elemental abundance at some metallicity ratio, e.g. Mg at ≈ 3.548·10^-5 for the <cit.> solar atomic ratios, dividing by the stoichiometric factor of that element in the pseudo-gas (i.e. 2 for Mg). The stoichiometric factor of the pseudo-reactions is therefore always equal to one, simplifying the equation set further. This is similar to the concepts used in phase equilibrium modelling <cit.>, where a representative atomic abundance is used as a proxy for condensing molecules. As a consequence, a large difference between the DIHRT and mini-cloud models is that mini-cloud does not perform any chemical equilibrium calculations during integration, rather directly condensing the representative element. This simplification is the most important time-saving measure of mini-cloud compared to DIHRT. However, this means mini-cloud may over-represent the rate of growth of cloud particles, as the rate is not tempered by considering the changing rates of each individual surface chemical reaction due to changes in gas phase compositions, or by any temperature and pressure dependence on the chemical equilibrium abundance of gas species that take part in said surface reactions. A particular numerical instability occurs in both DIHRT and mini-cloud when strong evaporation of cloud particles occurs across a rapid timescale; this can lead to overshooting the saturation limits and oscillating behaviour, resulting in overestimating the gas phase abundances from the evaporation process. In our experience, this behaviour is only triggered in deep regions when large cloud particles rapidly fall past their thermal stability conditions. To counter this, DIHRT utilised an instant evaporation technique <cit.> to ease the timescale of evaporation on the ODE solver. However, this method adds large computational expense as the threshold of instant evaporation has to be checked in small timesteps during integration to be successful. In mini-cloud, we instead assume that the maximum abundance of evaporated material cannot exceed the deep atmosphere initial conditions. This is probably a reasonable assumption as typically the instability regions occur deep in the atmosphere, where convective motions would rapidly homogenise the atmospheric tracers. § MODELLING THE ATMOSPHERE OF HAT-P-1B In this section, we detail the approach to modelling the HAT-P-1b atmosphere coupled to the mini-cloud scheme. 
§.§ Expected clouds on HAT-P-1b As a first estimate of the expected cloud composition found in HAT-P-1b we perform 1D radiative-convective equilibrium (RCE) calculations using the HELIOS code <cit.> with system parameters taken from <cit.>. We assume a full globe redistribution factor of 0.25. We then overplot the thermochemical stability curves of various condensed species that have been published in the literature <cit.>. Figure <ref> shows the result of this initial 1D test. We can see that the main condensate species to consider are TiO2, Al2O3, Fe, Mg2SiO4, MgSiO3, Cr and MnS. Due to their high surface tensions, Cr and MnS are unlikely to condense in significant mass for this case <cit.>. Therefore, for simplicity we simulate TiO2, Al2O3, Fe and Mg2SiO4, opting for one silicate species as representative of the silicate cloud component to simplify the calculations. We assume TiO2 as the primary seed particle nucleating species. §.§ GCM modelling We use the Exo-FMS GCM model in gas giant configuration <cit.>, which has been used to model a variety of exoplanet regimes, from sub-Neptunes <cit.> to ultra hot Jupiters <cit.>. Our adopted GCM parameters are shown in Table <ref>, derived from the <cit.> measurements. We use the stellar parameters of HAT-P-1 from <cit.> (T_ eff = 5980 K, log g = 4.359, [M/H] = 0.13) and use a PHOENIX stellar atmosphere model <cit.> to calculate the flux incident onto the planetary atmosphere in each wavelength band. For our radiative-transfer solution, the longwave radiation is calculated using the Toon source function technique <cit.>, a 1D two-stream multiple scattering method commonly used across many exoplanet GCM models <cit.>. For the shortwave radiation we use the adding method <cit.>, which takes into account the scattering of incident starlight. In this way we can consistently take into account the scattering feedback of the clouds, either from direct scattering of incident starlight or scattering of thermal radiation inside the atmosphere. We use a correlated-k opacity scheme with 11 bands as in <cit.> with pre-mixed k-tables the same as in <cit.>, but without including strong optical and UV wavelength absorbing species such as TiO, VO, Fe and SiO. Rayleigh scattering by H2 and He as well as collision-induced absorption by H2-H2 and H2-He pairs is included. With this scheme we do not include feedback of cloud particle formation on the gas phase composition through changes in the refractory and oxygen element abundance by the condensation and evaporation processes. For example, the condensation of Mg2SiO4 would reduce the oxygen atoms available in the gas phase by ≈ 10% compared to solar ratios, which will affect the abundance of oxygen bearing species such as H2O. Therefore, our current set-up is not fully self-consistent in this regard; addressing this is a key aim for future iterations of this model. The changes to the C/O ratio and chemical composition due to this effect are explored in detail in <cit.> across a wide range of planetary parameters. §.§.§ Initial conditions and spin-up We use an internal temperature of 477 K, derived using the expression in <cit.>. The <cit.> picket-fence scheme is used to generate the initial T-p profile conditions of the GCM model (Figure <ref>). As in <cit.>, we perform an extended 2000 day spin-up period using the picket-fence RT scheme before switching to the correlated-k scheme for 1000 additional days. 
We then run the correlated-k scheme coupled to mini-cloud for 2000 days but neglect the effects of radiative-feedback of cloud opacity to allow some evolution of the cloud scheme. Finally, radiative-feedback with the cloud particle opacity included is run for 500 days to complete the simulation, with the first 100 days consisting of a ramping up period (see next paragraph). An additional 50 days are then performed, the average of which is taken as the final product. To retain numerical stability during the phase where cloud feedback is first included we follow a similar scheme to <cit.>, linearly increasing the cloud opacity over the course of 100 days of simulation. As in <cit.>, we also apply a vertical boxcar smoothing of the cloud opacity properties to reduce the effect of spikes in opacity (for example, regions at the top or bottom of the cloud layer) that can produce too large heating gradients for the simulation to handle. Unfortunately, during runtime it was discovered that overly strong thermal instabilities occur in the model where the cloud opacities change too rapidly in the vertical direction. Typically this occurs from a combination of large cloud opacity gradients with large single scattering albedo and asymmetry parameters, notorious conditions that two-stream RT schemes traditionally struggle with. To combat this, we limit the single scattering albedo and asymmetry factor to a maximum of 0.75 and divide the extinction opacity by 50. We also only include cloud opacity at pressures greater than 10^-5 bar, as we find that including cloud opacity in the uppermost layers led to huge instability with the RT scheme, probably due to the extremely small radiative-timescales at these pressures. Though these limitations are significant, we find this scheme to be adequate in accounting for the large scale radiative-feedback effects of the cloud particle formation. Improvements in this area will be investigated in future iterations of the model. For the mini-cloud initial conditions, we assume a cloud-free atmosphere. We assume a deep reservoir of condensable gas by setting the initial abundance of the gas tracers at their values where the pressure of the atmosphere is greater than 10 bar, slightly below the saturation temperature limit of Al2O3. Every timestep, this deep region is cleared of cloud particles and replenished to the initial gas abundances, emulating the break up of cloud particles through convective motions accompanied by a strong replenishment of condensable materials. This pressure is also within the radiative-convective boundary, where we can expect tracers to be well mixed due to convective motions. This significantly reduces the time that would be required to mix enough condensable gas from only the lower boundary to the saturation point of TiO2, which we estimate would take 1000s of days of simulation. Using our initial condition set-up with the <cit.> solar atomic abundances, during testing it was found to take around five simulated days before cloud formation was triggered due to upward mixing of the deep reservoir of condensable gas. This scheme is significantly different to the initial conditions used in <cit.> and <cit.>, where a Solar abundance across the globe was assumed as initial conditions. The current scheme aims to emulate the `bottom up' approach used in cloud formation models such as EddySed <cit.> and CARMA <cit.>, where condensable gas is mixed upwards to the condensation layers from a deep gas reservoir. 
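A minimal sketch of the stabilisation measures described above: a linear ramp over the first 100 days of cloud feedback, caps on the single scattering albedo and asymmetry parameter, a reduced extinction, and no cloud opacity above the 10^-5 bar level. The function and variable names are illustrative only and are not Exo-FMS or mini-cloud code.

# Sketch of the cloud-opacity ramp and limiters applied before the RT call.
import numpy as np

RAMP_DAYS  = 100.0
SSA_MAX    = 0.75
G_MAX      = 0.75
EXT_FACTOR = 50.0
P_MIN      = 1e-5 * 1e5   # 1e-5 bar in Pa

def limit_cloud_optics(p_lay, k_ext, ssa, g, t_since_feedback_days):
    """Apply the ramp and limiters to per-layer cloud optical properties."""
    ramp = min(t_since_feedback_days / RAMP_DAYS, 1.0)   # linear ramp-up over 100 days
    k = np.where(p_lay > P_MIN, k_ext, 0.0)              # no cloud opacity above 1e-5 bar
    k = ramp * k / EXT_FACTOR                            # reduced, ramped extinction
    return k, np.minimum(ssa, SSA_MAX), np.minimum(g, G_MAX)

# example layer column (placeholder numbers)
p_lay = np.logspace(0, 7, 8)       # Pa
k_ext = np.full(8, 1e-2)           # cm^2 g^-1
ssa   = np.full(8, 0.9)
g     = np.full(8, 0.85)
print(limit_cloud_optics(p_lay, k_ext, ssa, g, t_since_feedback_days=30.0))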
§ CLOUD STRUCTURES OF HAT-P-1B In Figure <ref> we present the vertical temperature-pressure (T-p) structures and zonal mean zonal velocity of the HAT-P-1b GCM model with mini-cloud, and Figure <ref> shows the globally averaged T-p profile from the GCM compared to the HELIOS and picket-fence initial conditions. This shows a highly typical HJ transport pattern for this dynamical regime, with a strong and wide equatorial jet. Clouds have greatly affected the upper atmospheric temperature structure, inducing an inversion at low pressures and a more isothermal atmosphere in these regions. This is most apparent in Figure <ref>, where the globally averaged GCM T-p profile shows a large deviation in the upper atmospheric profiles compared to the HELIOS and picket-fence models. We also see influences deeper in the atmosphere: the temperature shows a stark uptick when clouds begin to form in the atmosphere around 1-10 bar and add their opacity to the atmosphere. The zonal mean zonal velocity shows a typical wind pattern for hot Jupiters seen in many hot Jupiter GCM studies, with a strong central jet region. These structures suggest two main equatorial jets, one at around the 0.1 bar level and one at low pressure at around 10^-4 bar. In Figure <ref> we also show global, nightside and dayside averaged cloud number density and average particle size vertical profiles. This shows that the global cloud properties of the atmosphere are generally homogeneous between the nightside and dayside hemispheres, suggesting strong transport and mixing in the zonal direction. Figure <ref> presents latitude-longitude maps of the GCM output at the 0.1 bar pressure level. This shows a typical HJ thermal structure, with the hottest points offset east from the sub-stellar point. The cloud particle properties follow closely the dynamical structure of the atmosphere: the largest number densities of cloud particles in the atmosphere are found at the equatorial regions and colder nightside high latitude Rossby gyres. These Rossby gyres are key nucleation zones of new cloud particles, as also seen in <cit.> and <cit.>. This pressure level contains mostly micron and sub-micron sized particles, split between sub-micron sizes at the equator and micron sizes at higher latitudes. This result shows that the spatial variations in cloud properties are mostly with latitude rather than longitude, with quite homogeneous cloud properties at each longitude of the planet. This suggests that the cloud particle properties strongly follow the global flow and wave patterns. The depletion of Mg atoms clearly follows the temperature profile of the planet in this case. Figure <ref> presents the results of the GCM at the equatorial jet region averaged across ±20 degrees latitude. Here, the clear differences between the dayside and nightside temperature structures can be seen. The number density is highly uniform in the upper atmosphere, but decreases sharply at pressures greater than 10^-1 bar where seed particles begin to be thermally unstable. This is reflected in the particle sizes, which show mostly micron or sub-micron particles mixed across the globe but larger particles deeper than around 1 bar. The Mg abundance shows a ring structure from around 1 to 10^-3 bar, suggesting a Mg stability zone has formed at these pressures. According to the calculations performed in <cit.>, the opening angle of HAT-P-1b is around 20 degrees (H_ p ≈ 635 km, R_ p ≈ 14.7 R_ E). 
Therefore, for the transmission limb contour plots in Figure <ref> we average across ±10 degrees at the east and west terminators. These plots show the latitudinal structure of the GCM results, with the temperature plot showing the difference between the east and west terminators. The cloud structures show large variations with latitude; here we see that the high latitudes contain more cloud particle number density and subsequently slightly smaller particle sizes. We also see that the latitude of these reduced number density regions is different between the east and west terminator. Again, the Mg abundance shows a stability zone similar to the equatorial plots, suggesting a spherical shell of Mg2SiO4 stability is present across the globe of the planet. §.§ Cloud composition In Figure <ref> we repeat the transmission limb sections of Figure <ref> but show the composition of the cloud particles through their individual contributions to the VMR of the grain bulk. Perhaps the most surprising result is that the mixing ratio of Fe[s] is generally higher than that of Mg2SiO4[s] at millibar pressures, with Mg2SiO4[s] dominating at pressures higher and lower than ≈10^-2 bar. This is most probably due to the Kelvin effect (Section <ref>), where, due to the high surface tension value of Fe[s], a larger supersaturation ratio is required to condense Fe[s] on the grain surfaces. This behaviour is also seen in other hot Jupiter cloud microphysical studies that include the Kelvin effect <cit.>. TiO2[s] and Al2O3[s] do not contribute significantly to the cloud particle bulk, only appearing at around 1 bar in small pockets of the east and west equatorial regions. § POST-PROCESSING To post-process our HAT-P-1b GCM results we use the 3D Monte Carlo RT model gCMCRT <cit.>. We focus on comparing to the available transmission spectral data from <cit.> and <cit.> and Spitzer emission data from <cit.>. We then focus on producing synthetic spectra for the NIRISS SOSS instrument mode onboard JWST owing to the ongoing observational program of <cit.>. §.§ Transmission spectra In Figure <ref> we present the transmission spectra resulting from the GCM model. From this, it is clear that the nominally produced cloud structures presented in the previous sections drastically overestimate the cloud opacity, resulting in spectra that are too flat compared to the available HST and Spitzer data. We arbitrarily reduce the cloud number density by a factor of 0.01 to return the model to a more appropriate level. This is similar to the procedure used in <cit.>. From the left hand side plot, we show that the cloud model can reduce the H2O feature amplitudes and helps connect the HST WFC3 and STIS data in a more consistent manner. In the right hand side plot, we examine the effect of assuming different particle size distributions, derived from the moment solutions in the GCM (App. <ref>). This shows the effect of each assumption on the spectrum. As expected, distributions that contain larger particle sizes, such as the Rayleigh distribution, flatten the spectrum more compared to distributions that prefer smaller particle sizes, such as the inverse gamma or exponential distribution. Our post-processed GCM transit spectra are unable to fit the Spitzer points in both the cloud free and cloudy cases. Retrieval modelling performed in <cit.> suggests that an H2O-only atmosphere without carbon bearing species and a grey cloud deck can best explain the Spitzer values. 
For one case in Figure <ref>, we attempt to emulate this by removing carbon species from the spectra, which results in a smaller transit depth more in line with the observed Spitzer values. Therefore, our nominal assumption of solar metallicity with carbon species in chemical equilibrium is possibly not the most accurate representation of the atmosphere, but this assumption enables greater ease and simplification of simulation within the coupled GCM and mini-cloud model. Future JWST measurements in this wavelength region, most probably with G395H, will be highly enlightening for measuring the carbon content in the atmosphere. §.§ Emission spectra In Figure <ref> we present the secondary eclipse emission spectrum and dayside planetary flux from the GCM results including both the cloud free and cloudy spectra. Clouds have a significant effect on the spectra at the redder end of the wavelength range, with each assumed distribution altering the shape of the spectra in different ways. For example, the Rayleigh distribution provides the largest cloud opacity, reducing the outgoing flux by several factors compared to other distributions. From the spectral flux plot it is clear the cloud opacity reduces the outgoing flux across the entire wavelength regime. All models are generally consistent with the 3.6 μm and 8 μm photometric points, but underpredict the flux at the 4.5 μm and 5.8 μm points. The model without carbon bearing species (black dashed point), suggested by the transmission spectra retrieval modelling (see discussion in Sect. <ref>), fits the 4.5 μm point well. §.§ JWST NIRISS SOSS predictions The JWST GTO program 1201 <cit.> is scheduled to observe a single transit and eclipse of HAT-P-1b using NIRISS SOSS. To put our GCM results in a practical context we produce simulated NIRISS SOSS transit and emission spectra using PandExo <cit.>, presented in Figure <ref>. We use the log-normal distribution case as representative of the cloudy spectra produced in Section <ref>. We use the same observational run parameters as in the <cit.> program as closely as possible. Our simulated spectra show that in transmission, the cloudy and non-cloudy cases should be easily discernible, improving the precision from the HST data. The shape of the spectrum at the potassium wings and flattening of the H2O windows will provide ample evidence of the cloud coverage on HAT-P-1b as well as confirming the potential absence of potassium on this planet suggested by the HST STIS data <cit.>. However, according to Figure <ref>, we suggest it may be difficult to differentiate between different size distributions even with this improved precision, but distributions peaked at larger sizes, such as the Rayleigh distribution, may be easily ruled out. For the emission spectrum, our simulations show it will be harder to discern between the cloudy and non-cloudy cases. Evidence for dayside clouds may be sought at the reddest end of the spectrum, where the cloud model diverges from the cloud free case more. Possibly, a combination of the SOSS data with the current Spitzer data would provide a much tighter constraint on the possible dayside cloud structures. Overall, we suggest the prospects for in-depth characterisation of HAT-P-1b's atmosphere and accompanying cloud structures are extremely good. These upcoming data will be an excellent future testbed for modelling cloud formation in this hot Jupiter regime. 
§ DISCUSSION In this study, we have used the moment method to evolve the integrated properties of the cloud particle size distribution across the simulation. This makes the assumption that the cloud particle distribution will smooth across radius space and settle at an averaged rate. A more realistic depiction of the size distribution would be achieved using a `bin method' where each individual size bin of the distribution is integrated in time and has its own settling rate. The CARMA model <cit.> is an example of a microphysical bin model successfully used for hot Jupiter exoplanet cloud modelling <cit.>. If computationally feasible, coupling such a model to the GCM would be greatly enlightening as well as provide insight into the most realistic distributions to assume for the moment method. As stated earlier, we have assumed that the condensation of clouds does not alter the background atmospheric gas composition from Solar values; the oxygen abundance is not depleted self-consistently inside mini-cloud, which has a large impact on the chemical composition of the atmosphere by reducing the H2O abundance and altering the C/O ratio significantly <cit.>. This may have a further feedback effect on the thermal structure of the atmosphere through changes in the gas phase composition. We plan to take this into account in future studies by including the tracking of oxygen condensation inside mini-cloud, along with coupling a chemical equilibrium model to consistently calculate gas phase abundances for input to the RT scheme. Relatedly, we have assumed a solar metallicity atmosphere in our current simulations. An increase in metallicity will affect the dynamical properties of the atmosphere, through an increase in the day/night temperature contrast and a reduction in energy transport efficiency to the nightside of the atmosphere. Regarding cloud formation, increasing the metallicity will extend the thermal stability zone of cloud species to higher temperatures as well as increasing the available condensable mass. This has the effect of compacting the clouds into thicker cloud layers deeper in the atmosphere. In addition, a larger nucleation rate due to increased metallicity may allow a much larger population of smaller particles to reside in the atmosphere compared to the micron sized particles produced in this study. The effect of metallicity on the 3D cloud structures is left to future work. §.§ Observational predictability As noted in the review in <cit.>, consistent microphysical cloud models coupled to GCMs tend to produce transmission spectra that are too flat, with reductions of the cloud opacity by factors of several tens required to reproduce spectral features <cit.>. This problem persists in this study, suggesting that additional physical processes not considered here are required to be added to mini-cloud before stronger predictions can be made using the model. A key process that would reduce cloud number density and opacity would be coagulation, where cloud particles collide together in the atmosphere, forming particles of larger sizes. This process is included in the bin models of CARMA <cit.> but not in the moment method of the current study. Possibly including the coagulation and fragmentation mechanism from <cit.>, who also used a moment based method, would be a way of including coagulation in the current model. In <cit.> retrieval modelling was performed on the available HST and Spitzer transmission spectra data for HAT-P-1b. 
They suggest a carbon free atmosphere is preferred, with only H2O as the main gas phase absorber, driven by the position of the 4.5 μm Spitzer photometric point. In Figures <ref> and <ref> we also add this scenario in transmission and emission by arbitrarily removing the opacity of carbon bearing species from the post-processing routine. However, this wavelength range lies beyond the NIRISS SOSS limits, meaning that, if possible, other instruments such as NIRSpec G395H would have to be used to investigate this scenario in detail. Encouragingly, Spitzer transmission data has been shown to line up well with NIRSpec G395H data for the hot Saturn WASP-39b <cit.>. §.§ Runtime In this study we report around 5 minutes per simulated day with the cloud-free model and with the correlated-k RT scheme [Using a server with Intel Xeon Gold 6130 CPU 2.10GHz CPUs]. Coupled to mini-cloud using 48 processors with radiative-feedback we report an approximate maximum of 15 minutes per simulated day, suggesting the addition of the cloud microphysics module adds approximately 3x the computational burden to the basic GCM model. Compared to the previous models that used DIHRT <cit.>, which reported ∼hours of simulation per simulated day, mini-cloud offers an order of magnitude increase in computational efficiency and puts 3D microphysical cloud modelling within reach of most contemporary GCM modelling groups. We suggest that runtime can be further improved by using more modern HPC infrastructure, which would put longer timescale evolution into reach. §.§ Convergence In <cit.>, the DRIFT formalism was coupled in a time dependent manner to a 1D settling and diffusion scheme, DIFFUDRIFT, to model brown dwarf and exoplanet atmospheres. They found that long timescales of ≈ years were required for convergence to occur. Due to the similarity in methodology and set-up from <cit.> to this study, we find it likely similar timescales are required from the GCM simulations with clouds for convergence (or rather statistical convergence in the GCM case). However, our current study also concurs with some of the conclusions found in <cit.>, namely, * Seed particle nucleation rates are very low due to the slower mixing from the deep regions to the TiO2 nucleating zones. * As a result, larger particles form more easily and are confined to deep layers while only small particles can be lofted upwards beyond the main condensation fronts. These conclusions are similar to the 1D CARMA microphysical bin model results at similar T-p conditions found in <cit.>. It is therefore likely the cloud structures presented in this study are not fully converged. However, we note the dynamics of GCM simulations in this regime are in general unlikely to be fully converged due to the extremely long timescales (∼ 100,000 days) required for momentum exchange between the deep and upper atmosphere <cit.>. Both of these problems combined suggest that a truly statistically converged model that couples microphysical clouds to GCMs is a highly challenging prospect, beyond the scope of current model capabilities. § SUMMARY AND CONCLUSIONS In this study, we present the open source microphysical cloud formation module, `mini-cloud', designed for time dependent modelling of cloud particles in exoplanet atmospheres. Mini-cloud offers an intermediate methodology between full microphysical models and phase equilibrium cloud models. 
In particular, mini-cloud is most suited for examining cloud microphysics in 3D GCM modelling of cloud structures in hot gas giant exoplanets where computational expediency is a key requirement. We simulated the cloud structures of the hot Jupiter exoplanet HAT-P-1b, finding that extensive cloud coverage is expected. However, we significantly overpredict the number density of cloud particles: a reduction of approximately two orders of magnitude is required to fit the present transit and dayside emission observational data. This suggests that additional physical processes not included in the model that can reduce cloud number density, such as coagulation, may be important considerations for future modelling efforts. Our results overall suggest that consideration of the 3D dynamical transport of particles and gas, in addition to temperature, is important in setting the global cloud properties of the atmosphere. The interplay between the mixing of condensable material and avenues of replenishment to the upper atmosphere plays a key role in determining the overall cloud coverage properties, with polar and high latitude regions acting as a source of cloud particle nucleation and formation in the HAT-P-1b case. Mini-cloud enables simulation of cloud microphysics for 1000s of simulated days, rather than the 100 days or less of previous studies. Mini-cloud offers a useful intermediate method between the microphysical cloud models and phase equilibrium models currently in use for 3D GCM modelling of exoplanet atmospheres, and puts the study of cloud formation microphysics into reach of most contemporary exoplanet GCM modeller resources. § ACKNOWLEDGEMENTS E.K.H. Lee is supported by the SNSF Ambizione Fellowship grant (#193448). The HPC support staff at AOPP, University of Oxford is highly acknowledged. § DATA AVAILABILITY The mini-cloud and gCMCRT source codes are available on the lead author's GitHub: <https://github.com/ELeeAstro>. All other data is available upon request to the lead author. § RECONSTRUCTING SIZE DISTRIBUTIONS FROM MOMENT SOLUTIONS Given specific shape information, a size distribution can be reconstructed from the moment solutions by assuming a suitable arithmetic distribution shape to fit to the moment data. The relations between moments represent a mathematical property of the size-distribution; for example, the expectation value, E[a], is approximately equal to the mean cloud particle size <a> = K_1/K_0, and the variance, Var[a], approximately equal to K_2/K_0 - (K_1/K_0)^2 or <A>/4π - <a>^2. In the following sections we provide a simple reference on various distributions that are commonly used across the literature. In Figure <ref> we show the resultant reconstructed size-distributions for a mini-cloud test that has N_0 = 215 cm^-3, E[a] = 10^-4 cm and Var[a] = 9.14 · 10^-10 cm^2. §.§ Log-normal distribution The log-normal distribution is a ubiquitous assumed distribution for describing clouds in gas giant atmospheres <cit.>. It is characterised by having the mean, mode and median of ln a at the same value, with a broad distribution spread equally in log space across a given variance value. The log-normal distribution is given by f(a) = N_0/aσ√(2π)exp[-(lna - μ)^2/2σ^2], where N_0 = K_0 [cm^-3] is the total cloud particle number density, and μ and σ are the mean and standard deviation of ln a, respectively. 
μ and σ are estimated through μ = ln(E[a]/√(Var[a]/E[a]^2 + 1)) and σ = √(ln(Var[a]/E[a]^2 + 1)). §.§ Inverse Gamma distribution The inverse gamma distribution is a common alternative to the log-normal distribution, given by f(a) = N_0/Γ(α)β^αa^-α-1exp(-β/a), where the α and β parameters are estimated from the two relations α = E[a]^2/Var[a] + 2.0 and β = E[a](α - 1). §.§ Gamma distribution The gamma distribution is given by f(a) = N_0/Γ(α)β^αa^α-1exp(-β a), where the α and β parameters are estimated from the two relations α = E[a]^2/Var[a] and β = E[a]/Var[a]. §.§ Exponential distribution The exponential distribution is given by f(a) = N_0λexp(-λ a), where λ = 1/E[a]. §.§ Rayleigh distribution The Rayleigh distribution is a special form of the gamma distribution given by f(a) = N_0a/σ^2exp(-a^2/2σ^2), with σ given by σ = E[a]/√(π/2). Although the Rayleigh distribution is simple to use, as a single-parameter distribution it can struggle to represent strongly peaked distributions. §.§ Potential exponential distribution <cit.> propose fitting moment solutions to a potential exponential distribution as a variant of the gamma distribution f(a) = a^Bexp(A - Ca), with B = (2K_1K_3 - 3K_2^2)/(K_2^2 - K_1K_3), C = (2 + B)K_1/K_2, and A = ln K_1 + (2 + B)ln C - lnΓ(2 + B). §.§ Hansen distribution The Hansen distribution was proposed in <cit.> to fit observed cloud particle size distributions on Earth but has also been used in brown dwarf and exoplanet atmospheric contexts <cit.>. It follows an alternative formulation of the gamma distribution f(a) = N_0/Γ[(1 - 2β)/β](αβ)^(2β - 1)/βα^(1 - 3β)/βexp(-a/αβ), where α is the effective particle radius (Eq. <ref>) and β the effective variance of the particle size distribution; these can be estimated as α = K_3/K_2, β = K_2K_4/K_3^2 - 1. Since the K_4 moment is not directly calculated in mini-cloud, it may be reasonable to approximate β by using ratios of lower integer moments, β = K_1K_3/K_2^2 - 1.
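For concreteness, the reconstruction described in this appendix can be sketched numerically. The following minimal Python example (our own illustration, not part of the mini-cloud code; the radius grid and the normalisation check are our own choices) rebuilds the log-normal, gamma and exponential fits from the quoted test moments N_0 = 215 cm^-3, E[a] = 10^-4 cm and Var[a] = 9.14 · 10^-10 cm^2, using the moment relations given above.

```python
import numpy as np
from scipy.special import gamma as Gamma

# Test moments quoted above (mini-cloud test case)
N0 = 215.0        # K_0, total number density [cm^-3]
Ea = 1.0e-4       # E[a] ~ K_1/K_0, mean particle size [cm]
Var = 9.14e-10    # Var[a] ~ K_2/K_0 - (K_1/K_0)^2 [cm^2]

a = np.logspace(-6.0, -2.0, 500)   # particle radius grid [cm]

# Log-normal parameters from the moment-matching relations
sigma = np.sqrt(np.log(Var / Ea**2 + 1.0))
mu = np.log(Ea / np.sqrt(Var / Ea**2 + 1.0))
f_ln = N0 / (a * sigma * np.sqrt(2.0 * np.pi)) \
    * np.exp(-(np.log(a) - mu)**2 / (2.0 * sigma**2))

# Gamma distribution: alpha = E[a]^2/Var[a], beta = E[a]/Var[a]
alpha, beta = Ea**2 / Var, Ea / Var
f_g = N0 * beta**alpha / Gamma(alpha) * a**(alpha - 1.0) * np.exp(-beta * a)

# Exponential distribution: lambda = 1/E[a]
lam = 1.0 / Ea
f_e = N0 * lam * np.exp(-lam * a)

# Each reconstruction should integrate back to roughly N_0
for name, f in [("log-normal", f_ln), ("gamma", f_g), ("exponential", f_e)]:
    print(name, np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a)))
```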
http://arxiv.org/abs/2307.02733v1
20230706023208
MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential Deepfake Detection
[ "Ruiyang Xia", "Decheng Liu", "Jie Li", "Lin Yuan", "Nannan Wang", "Xinbo Gao" ]
cs.CV
[ "cs.CV" ]
Roberg et al.: Bare Demo of IEEEtran.cls for Journals MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential Deepfake Detection Ruiyang Xia, Decheng Liu, Jie Li, Lin Yuan, Nannan Wang, Member, IEEE, Xinbo Gao, Senior Member, IEEE (Corresponding authors: Nannan Wang; Xinbo Gao.) R. Xia and J. Li are with the State Key Laboratory of Integrated Services Networks, School of Electronic Engineering, Xidian University, Xi'an 710071, Shaanxi, China (e-mail: ryon@stu.xidian.edu.cn; leejie@mail.xidian.edu.cn). D. Liu is with the State Key Laboratory of Integrated Services Networks, School of Cyber Engineering, Xidian University, Xi'an 710071, China (e-mail: dcliu.xidian@gmail.com). L. Yuan is with the Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing 400065, China (e-mail:yuanlin@cqupt.edu.cn). N. Wang is with the State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an 710071, Shaanxi, China (e-mail: nnwang@xidian.edu.cn). X. Gao is with with the Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing 400065, China (e-mail: gaoxb@cqupt.edu.cn). ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Advanced manipulation techniques have provided criminals with opportunities to make social panic or gain illicit profits through the generation of deceptive media, such as forged face images. In response, various deepfake detection methods have been proposed to assess image authenticity. Sequential deepfake detection, which is an extension of deepfake detection, aims to identify forged facial regions with the correct sequence for recovery. Nonetheless, due to the different combinations of spatial and sequential manipulations, forged face images exhibit substantial discrepancies that severely impact detection performance. Additionally, the recovery of forged images requires knowledge of the manipulation model to implement inverse transformations, which is difficult to ascertain as relevant techniques are often concealed by attackers. To address these issues, we propose Multi-Collaboration and Multi-Supervision Network (MMNet) that handles various spatial scales and sequential permutations in forged face images and achieve recovery without requiring knowledge of the corresponding manipulation method. 
Furthermore, existing evaluation metrics only consider detection accuracy at a single inferring step, without accounting for the matching degree with ground-truth under continuous multiple steps. To overcome this limitation, we propose a novel evaluation metric called Complete Sequence Matching (CSM), which considers the detection accuracy at multiple inferring steps, reflecting the ability to detect integrally forged sequences. Extensive experiments on several typical datasets demonstrate that MMNet achieves state-of-the-art detection performance and independent recovery performance. deceptive media, sequential deepfake detection, face recovery § INTRODUCTION The development of deep learning not only brings significant improvement to traditional visual tasks <cit.> but gives birth to massive novel and heuristic vision applications <cit.>. Deepfake, a new technique used to generate artificial media by deep neural networks <cit.>, has raised public concerns about personal security and privacy. To fight against malicious facial deepfakes, detection methods have been naturally proposed and extensively studied over recent years. Conventional approaches to deepfake detection predominantly prioritize binary classification as a means of assessing the genuineness of facial imagery. However, these binary outcomes offer restricted insights for subsequent investigations, such as face recovery. Additionally, contemporary deepfake detection methods primarily concentrate on identifying face replacement and reenactment. Nevertheless, the editing techniques can also be used by the attacker to fabricate fictitious identities for the purpose of deceiving others, such as portraying a healthy leader as being unwell <cit.>. The realm of face editing techniques encompasses a wide array of manipulated methods capable of distorting multiple facial regions to achieve deceptive visual outcomes. In addition, as shown in Fig. <ref>(a), the entanglement of the generator in the latent vector space makes forged images different even if the manipulated region labels are the same <cit.>. Therefore, to identify these manipulated regions and achieve recovery for defense, sequential deepfake detection is proposed to detect these manipulated regions with the correct sequence <cit.>. The dissimilarities between deepfake detection and sequential deepfake detection are summarized in Fig. <ref>(b). A proficient sequential deepfake detector exhibits the capability to discern various combinations of spatial and sequential manipulation traces present within forged images. Previous advanced method deems sequential deepfake detection as a special image captioning task and adopts an enhanced auto-regressive model to detect manipulated regions <cit.>. However, as only one network branch is available during detection, it is difficult to learn the representations of these various manipulation traces in the forged face images because the manipulation combinations have high discrepancies. In addition, to achieve recovery, an intuitive way is to use the same manipulation technique but the reverse manipulation sequence <cit.>. Nonetheless, it is challenging to perceive which manipulation technique is utilized in the real scenario as the prevalence of numerous similar techniques and attackers may intentionally conceal their use of certain techniques. Lastly, previous evaluation matrices only focus on the detection accuracy in a single step rather than the whole continuous sequence. 
Despite this, we would prefer to know the matching degree between the complete predicted sequence and the corresponding sequential ground-truth, so as to be aware of the quality of the generated face image in the recovery stage. Although these forged face images appear complex, we propose a novel network called Multi-Collaboration and Multi-Supervision Network (MMNet) that introduces multiple detection branches and extra supervisions to generalize across various manipulation traces. Moreover, MMNet can recover face images without knowledge of the manipulation techniques. To the best of our knowledge, this is an early exploration of simultaneously achieving detection and independent recovery, which can serve as a baseline for further research. To reflect the capability of capturing integrally forged sequences and the potential recovery quality of the forged facial images, a new evaluation metric named Complete Sequence Matching (CSM) is proposed to evaluate the matching degree between the complete predicted sequence and the sequential ground-truth. Therefore, the main contributions of this paper can be summarized as follows: * A novel network named MMNet is proposed for sequential deepfake detection. MMNet incorporates both multi-collaboration and multi-supervision modules, which work together to generalize across a range of manipulation traces and thus provide higher detection accuracy. * Additionally, MMNet is the first network that attempts to recover manipulated facial regions even without prior knowledge of the specific manipulation technique. * A new evaluation metric called Complete Sequence Matching (CSM) is proposed to expand the temporal dimension, treat the predicted results as a whole, and then match them with the ground-truth sequence. * Extensive experiments on several typical public manipulation datasets demonstrate the superior performance of our proposed method and the importance of the presented evaluation metric. § RELATED WORK §.§ Deepfake Detection The aim of deepfake detection is to train a model able to mine the distributional discrepancy between real and fake images. Therefore, the challenge of this task is not only to achieve high detection accuracy but also to maintain it across datasets with different manipulation techniques and image qualities. While numerous deepfake detection methods have been proposed in recent years, they can be broadly categorized into spatial-based, frequency-based, and data-based approaches. Spatial-based methods involve modifying the network architecture to enhance the extraction of spatial face image features <cit.>. Frequency-based methods focus on analyzing the frequency differences between real and fake images <cit.>. Data-based methods aim to identify common characteristics of fake images to improve model generalization across different manipulation techniques <cit.>. The objective of both deepfake detection and sequential deepfake detection is to distinguish between authentic and counterfeit images <cit.>. However, the latter involves a more intricate analysis of the correlation between detection and recovery processes. Specifically, sequential deepfake detection strives to pinpoint the manipulated facial regions and establish the correct order of operations to recover the original identity-related features. Consequently, sequential deepfake detection represents a newly emerging and notably more demanding research task.
§.§ Face Editing Facial editing encompasses the alteration of various attributes and components within an image that pertains to the face, including but not limited to the color of the hair or skin, gender, age, and the addition of glasses. In <cit.>, the authors introduced a technique called the Invertible Conditional GAN (IC-GAN) to facilitate complex image editing. This method combines an encoder with a conditional GAN (cGAN) <cit.>. Another notable contribution by Lample et al. <cit.>, involves an encoder-decoder architecture that is specifically trained to disentangle salient information and attribute values in the latent space, thereby enabling image reconstruction. Building upon these advancements, <cit.> introduced an enhanced approach known as StarGAN, which eliminates the need to develop separate models for each pair of image domains. Additionally, He et al. proposed attGAN in <cit.>, which relaxes the attribute-independent constraint on the latent representation and instead focuses on applying the attribute-classification constraint to the generated image, ensuring accurate attribute changes. Moreover, based on the pretrained styleGAN <cit.>, more advanced techniques are proposed and produce more realistic edited images and some interesting applications <cit.>. With the development of the editing technique, the authenticity of generated images has notably improved, accompanied by a heightened ease of utilizing these techniques. However, this progress inadvertently makes attackers manipulate forged images with greater ease, leading to the dissemination of misleading information and the humiliation of unsuspecting victims <cit.>. §.§ Image Captioning The goal of image captioning is to depict the global content and the local details of an image in natural language <cit.>. The challenge of this task is both the design of the visual and language framework. To elevate the representation of the visual framework, different attention mechanisms are introduced such as general attention <cit.> and self-attention <cit.>. As for language framework, LSTM-based <cit.> and CNN-based <cit.> approaches are the main strategies to deal with this task at the early stage. In recent years, Transformer-based <cit.> and Image-Text early fusion methods <cit.> further achieve appealing captioning performance. Given that forged face images may contain diverse manipulation traces, a visual encoder can be employed to extract spatial manipulation features, while a language model can focus on sequential manipulation traces. This makes sequential deepfake detection akin to a unique image captioning task, wherein the output is limited to keywords (i.e., manipulated region labels) obtained through scanning a forged face image. § METHODOLOGY In this section, we will introduce MMNet which consists of a detection framework and a recovery framework. The detection framework identifies manipulated facial components/attributes in the correct sequence, while the recovery framework is optional and used when the manipulation model is unknown. Additionally, we present a new evaluation metric called Complete Sequence Matching (CSM) that treats the predicted results as a complete entity and matches it with the sequential ground-truth. §.§ Detection Framework The detection framework is shown in Fig. <ref>. It consists of a backbone for feature extraction, a multi-collaboration module to cope with spatial and sequential manipulation traces, and a multi-supervision module to promote the localization of manipulated regions. 
The challenge of learning representations for diverse spatial manipulation traces arises from various manipulated areas, which is evident in the context of a single-scale detection branch. Feature pyramid network (FPN) <cit.> has been extensively employed for detecting objects of varying scales by leveraging inherent network features with different scales. However, the computational cost of FPN is exacerbated by the interactions among different network layers. Recent studies have highlighted that the effectiveness of FPN primarily stems from its multiple output structure <cit.>. This observation has motivated us to propose a concise version of FPN to address the issue of diverse spatial manipulation traces. Accordingly, we default to setting three detection branches to capture small (e.g., eye), middle (e.g., nose), and large (e.g., hair) manipulated regions. To reduce redundancy in the channel dimension, we initially squeeze the output features from the backbone. Subsequently, we respectively employ convolution and deconvolution operations with 2 strides to downsample and upsample the spatial size of the squeezed features f_sqz. Hence, expanding the feature scale from 16 to 8, 16, and 32, respectively. In each detection branch, to elevate the determination confidence, a self-attention mechanism is employed in the encoder layer for capturing long-range dependencies among different facial regions. Prior to training, independent position encodings are computed and added to their corresponding spatial features <cit.>. The self-attentions are computed as follows: Attn(Q_h, K_h, V_h)=Softmax(Q_hK_h^T/√(d)) V_h, h=1, …, H where Q_h, K_h and V_h∈ℝ^n^2× d denote the h-th head of query Q, key K and value V. In each attention head, n and d denote the spatial length and the number of channels. Multi-head attentions MA_enc∈ℝ^n^2× Hd in the encoder layer are computed by concatenating these head attentions in the channel and adding the flattened extracted image features f_in∈ℝ^n^2× Hd: MA_enc = LN( Concat (Attn(Q_1, K_1, V_1), …,. .Attn(Q_H, K_H, V_H)) W_e^T + f_in), where H indicate the number of heads. LN denotes layer normalization. Encoder features f_enc∈ℝ^n^2× Hd are hence expressed as: f_enc=LN(MLP_enc(MA_enc)+MA_enc), where MLP_enc indicates the multilayer perceptron in the encoder layer. Accordingly, a single decoder may impede the sequential detection performance as different branches have distinct concentrations. As a result, there are also three decoder branches that correspond to the encoders. Within each decoder, learnable embeddings are utilized to represent labels for manipulated facial regions, no manipulation (NM), start-of-sentence (SOS), and end-of-sentence (EOS). The SOS and EOS indicate the beginning and end of prediction, while the NM is output when there is no manipulation at a certain step. The selected embeddings are subsequently input into the masked attention layer, which serves to model the sequential inference process: MaskAttn(Q^'_h, K^'_h, V^'_h)=Softmax(Q^'_hK^'_h^T/√(d)) MV^'_h, h=1, …, H where the h-th head query Q^'_h, key K^'_h and value V^'_h∈ℝ^n_s × d come from the linear projection of the input embeddings f^'_in∈ℝ^n_s× Hd. n_s is the number of steps and each step has a predetermined label during training. M∈ℝ^n_s × n_s denotes the lower triangular matrix with the value set as one, which prevents the model from seeing the rear information at each step during training. 
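To make the masked-attention step just described concrete, the short PyTorch sketch below (our own illustration, not the authors' implementation) realises the lower-triangular masking in the standard additive form, hiding future positions with -inf logits before the softmax; this has the same effect as the lower-triangular matrix M described above, up to row normalisation, and all tensor names and sizes are placeholders.

```python
import torch

def masked_single_head_attention(q, k, v):
    """Causal single-head attention over n_s decoding steps.

    q, k, v: (n_s, d) tensors, one row per step. The lower-triangular
    mask hides future ("rear") steps, so the prediction at step i can
    only attend to steps 1..i.
    """
    n_s, d = q.shape
    logits = q @ k.transpose(0, 1) / d**0.5            # (n_s, n_s)
    causal = torch.tril(torch.ones(n_s, n_s, dtype=torch.bool))
    logits = logits.masked_fill(~causal, float("-inf"))
    attn = torch.softmax(logits, dim=-1)
    return attn @ v                                     # (n_s, d)

# Toy check: 5 decoding steps, 64-dimensional embeddings
q = k = v = torch.randn(5, 64)
print(masked_single_head_attention(q, k, v).shape)      # torch.Size([5, 64])
```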
Then the output f^m_out∈ℝ^n_s× Hd is computed by concatenating these single head attentions, doing dot-product with the learned weight matrix W_m∈ℝ^Hd × Hd, and adding the f^'_in: f^m_out = LN( Concat (MaskAttn(Q^'_1, K^'_1, V^'_1), …,. .MaskAttn(Q^'_H, K^'_H, V^'_H)) W_m^T + f^'_in). After getting f_enc, cross attention layer is utilized to interact the embeddings with the extracted image features, which can be expressed as: CrossAttn(Q_h^c, K_h^e, V_h^e)=Softmax(Q_h^c K_h^e T/√(d)) V_h^e, h=1, …, H where queries Q_h^c∈ℝ^n_s× d are the linear projection of f^m_out. Keys K_h^e and values V_h^e are linearly projected from f_enc, respectively. Each head cross-attention will also be concatenated and add f^m_out to get the multi-head decoder attentions MA_dec∈ℝ^n_s× Hd by following: MA_dec = LN( Concat (CrossAttn(Q^c_1, K^e_1, V^e_1), …,. .CrossAttn(Q^c_H, K^e_H, V^e_H)) W_d^T + f^m_out). After that, the decoder features f_dec∈ℝ^n_s× Hd are expressed as: f_dec=LN(MLP_dec(MA_dec)+MA_dec), where MLP_dec indicates the multilayer perceptron in the decoder. After inputting f_dec into the last MLP layer, the prediction P in one of the decoder branches at i-th step is computed as: P(i, X)=max(P(y_i, 1| y_1, …, y_i-1, X), …,. .P(y_i, n_l| y_1, …, y_i-1, X)), where n_l denotes the number of predicted labels. X indicates the input extracted features. Sequential manipulation traces derive from various manipulated permutations. It is imperative for different decoder branches to collaborate together as such allows for a holistic analysis of the sequential predictions. Specifically, the SOS embedding is input to different decoders to initiate the detection procedure. At each step, the manipulated region label with the highest confidence among the different scale decoder branches is selected and then acts the input of these decoders in the subsequent step. Therefore, the probability of the proposed detection model at the n-th step can be formed as: P_d(n)=∏_i=1^nmax(P_s(i, X_s), P_m(i, X_m), P_l(i, X_l)), where X_s, X_m and X_l represent the extracted features from the small, middle, and large scale branches, respectively. y is the selected manipulated region label. Furthermore, the utilization of this selection strategy facilitates the simultaneous operation of distinct decoder branches, thereby circumventing temporal increments throughout the collaborative procedure. Each prediction at each step is contingent upon the entirety of preceding prognosticated manipulated region labels. If we gather and scrutinize the assorted forecasted sequences generated by divergent decoder branches at the final phase, it would necessitate a computation time threefold lengthier than that demanded by the suggested strategy. This disparity arises from the dynamic variation in inputs to each decoder branch at every step, thus impeding the attainment of concurrent execution. Since the image-level supervisions (i.e., manipulated region labels) provide limited information, the learned representations may further benefit from the pixel-level supervisions as they show the difference in a specific image position. Therefore, a multi-supervision module is proposed by learning pixel-level supervision and offering external attention to the multi-collaboration module. A detailed illustration of each spatial scale block in the multi-collaboration module is presented in Fig. <ref>. 
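Before moving on, the "Max" collaboration strategy described above can be summarised in a schematic Python sketch (our own illustration; the decoder callables, label vocabulary, and step limit are hypothetical stand-ins for the three scale-specific Transformer branches).

```python
import torch

def collaborative_decode(decoders, enc_feats, sos_id, eos_id, max_steps=6):
    """Greedy decoding with the "Max" selection strategy across branches.

    decoders:  callables; decoders[b](tokens, enc_feats[b]) returns logits
               of shape (n_labels,) for the next manipulated-region label.
    enc_feats: encoder features, one entry per scale branch.
    All branches share the same running prefix, so they can run in parallel
    at every step; the most confident label over all branches is kept.
    """
    tokens = [sos_id]
    for _ in range(max_steps):
        probs = torch.stack([torch.softmax(dec(tokens, feat), dim=-1)
                             for dec, feat in zip(decoders, enc_feats)])
        label = int(torch.argmax(probs) % probs.shape[1])  # best label overall
        tokens.append(label)
        if label == eos_id:
            break
    return tokens[1:]

# Toy usage with stand-in branches (the real ones are the Transformer decoders)
dummy = lambda tokens, feat: torch.randn(8)              # 8 candidate labels
print(collaborative_decode([dummy] * 3, [None] * 3, sos_id=6, eos_id=7))
```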
It consists of two blocks in each spatial scale, where the first block is responsible for increasing the spatial scale of features, while the second block further extracts the upsampled features. The squeezed features f^'_sqz go through these blocks and the intermediate scale features (8, 16, and 32 spatial sizes) are processed through the squeeze and excitation operation to get the external attention corresponding to the scales of detection branches, which can be expressed as: Attn_e = Sigmoid(W_oReLU(W_iz)), where W_i∈ℝ^c_i/r× c_i and W_o∈ℝ^c_o×c_i/r. c_i and c_o denotes the feature channels in the multi-supervision module and the multi-collaboration module, respectively. z indicates the specific scale features extracted from the multi-supervision module after global average pooling in the spatial dimension. r follows the default configuration setting <cit.>. Two pixel-level supervisions are utilized during training. The first is the real face images. The aim of the multi-supervision module is to learn the projection from the manipulated images to the real counterparts. Therefore, external attention not only brings the explicitly manipulated position between the real and fake image but also the sequential difference information for forged images with the same manipulated regions. The second is the manipulated mask related to the input images. To generate the mask, in Fig. <ref>, the facial landmarks related to the manipulated region labels are selected to generate the bounding boxes. The values within these boxes vary, as larger manipulated regions have lower manipulated density and vice versa. Although the manipulated masks only provide relatively coarse position information compared to real face images, they can be adopted in a more severe situation that lacks the real images with respect to the fake one during training. Therefore, the real face images are the default selection unless they are not available. The detection framework is trained with the sum of cross-entropy loss from three detection branches (ℒ_CE^s, ℒ_CE^m, and ℒ_CE^l) and mean square error (MSE) ℒ_MSE between the generated image/mask and the corresponding pixel-level supervision, which is expressed as: ℒ_𝒟ℯ𝓉ℯ𝒸𝓉𝒾ℴ𝓃=ℒ_CE^s+ℒ_CE^m+ℒ_CE^l+ℒ_MSE. §.§ Recovery Framework The recovery framework serves to independently recover the forged face images. It applies to the scenario that the corresponding manipulation model is concealed by attackers. The pipeline of the recovery framework is depicted in Fig. <ref>. Due to the presence of various manipulation traces in forged face images, achieving direct projection from forgery to real using only the multi-supervision module is challenging. Therefore, images generated from the multi-supervision module will first be clarified as foreground by adopting a super-resolution module (SRM). The forged image then serves as the background because the editing technique mainly focuses on the designated regions, and the rest regions are still close to the real. To blend these two images, we collect the predictions from the detection framework and the facial landmarks related to the manipulated region labels from the forged face image. A blending mask is formed by finding the bounding boxes related to the selected landmarks. The generation of the blending mask is similar to the manipulated mask except for the value within the selected region is one and the index of the selected landmarks corresponds to the manipulated region labels. 
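Returning briefly to the detection side, the squeeze-and-excitation external attention defined above, Attn_e = Sigmoid(W_o ReLU(W_i z)), admits a very small sketch; the channel sizes and reduction ratio below are placeholders, and this is our own illustration rather than the released implementation.

```python
import torch
import torch.nn as nn

class ExternalChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: Sigmoid(W_o ReLU(W_i z))."""

    def __init__(self, c_in, c_out, r=16):
        super().__init__()
        self.fc_in = nn.Linear(c_in, c_in // r)    # W_i
        self.fc_out = nn.Linear(c_in // r, c_out)  # W_o

    def forward(self, feat_supervision):
        # feat_supervision: (B, c_in, H, W) from the multi-supervision module
        z = feat_supervision.mean(dim=(2, 3))                 # global average pool
        return torch.sigmoid(self.fc_out(torch.relu(self.fc_in(z))))  # (B, c_out)

# Toy check: 64-channel supervision features gating a 256-channel branch
gate = ExternalChannelAttention(c_in=64, c_out=256)
print(gate(torch.randn(2, 64, 16, 16)).shape)  # torch.Size([2, 256])
```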
Moreover, for the hair region, we select the outside region of the selected bounding box. The difference between manipulated mask and blending mask related to all region labels is shown in Fig. <ref>. After obtaining the manipulated image, the clearly generated image, and the blending mask, we adopt the laplacian pyramid blending algorithm to get the blending details and edges by following: L^k(i, j)=G(M^k(i, j))L_f^k(i, j)+(1-G(M^k(i, j)))L_b^k(i, j), where (i, j) denotes a specific image position. In the k-th scale, M^k is the blending mask. G represents the gaussian smoothing function. L_f^k and L_b^k stand for the laplacian image of the foreground and background, which are computed by following: L_f^k(i, j)=F^k(i, j)-Upsample(G(F^k-1(i, j))), L_b^k(i, j)=B^k(i, j)-Upsample(G(B^k-1(i, j))), where F^k and B^k denote the foreground (i.e., clearly generated image) and background (i.e., forged image) in k-th spatial scale, respectively. Therefore, the k-th scale blending image BI^k can be expressed as: BI^k(i, j)=L^k(i, j)+Upsample(BI^k-1(i, j)). Note that the smallest scale of the blending image BI^0 is directly generated by using M^0 on the foreground and background. It is worthy note that if the face image is detected as real by the detection framework, it will not go through the recovery framework. We choose PixelStylePixel <cit.>, a popular image translation model, as the SRM in the recovery framework. In the training stage, we combine the downsampled real images (from 2-16× downsample rate) and the generated images from the multi-supervised image as the training set. The recovery loss function is defined as: ℒ_ℛℯ𝒸ℴ𝓋ℯ𝓇𝓎=ℒ_𝒮ℛℳ=λ_1 ℒ_MSE+λ_2 ℒ_LPIPS+λ_3 ℒ_ID+λ_4 ℒ_reg, where λ_1 to λ_4 are the item weights. The loss items respectively measure the difference of the pixel, feature perception, and face identity between the output of SRM and real images. ℒ_reg aims to reduce the distance between the latent vector to the average one. §.§ Evaluation Metrics The comparison between existing evaluation metrics and CSM is shown in Fig. <ref>. It can be seen that the fixed and adaptive accuracy all only reflect the detection accuracy per time. There is a vacancy to evaluate the model detection performance under multiple continuous steps. Therefore, CSM is proposed to represent the matching degree between the complete predicted sequence and the sequential ground-truth. In addition, CSM not only reflects the detection accuracy in continuous steps but also the potential recovering ability for forged face images. This is because of the demonstration that the best recovering performance appears when a predicted sequence is totally equal to the sequential ground-truth <cit.>. Assuming the recovering quality only has two categories, i.e., completely consistent and inconsistent, the result of CSM will directly reflect the recovering ability without using the corresponding facial editing techniques. Therefore, the proposed metric for a single sample can be expressed as: CSM( pred, g t)= 1, pred[i]=g t[i], ∀ i ∈[1,2, …, T] 0, otherwise where T is the max length of the forged sequence. § EXPERIMENTS §.§ Experimental Setup §.§.§ Datasets. Experiments are conducted on public sequential deepfake datasets: facial components and attributes manipulation datasets, which include 35,166 and 49,920 images, respectively. Both datasets were split into training, validation, and testing sets in an 8:1:1 ratio. 
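As an aside on the blending step described above, the Laplacian-pyramid combination of the clarified foreground and the forged background can be sketched as follows (our own illustration using SciPy; the pyramid depth, smoothing width, and per-channel usage are assumptions rather than the exact released settings).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def _down(img):   # Gaussian smooth then halve each spatial dimension
    return gaussian_filter(img, sigma=1.0)[::2, ::2]

def _up(img, shape):  # upsample back to a target spatial shape
    fy, fx = shape[0] / img.shape[0], shape[1] / img.shape[1]
    return zoom(img, (fy, fx), order=1)

def laplacian_blend(foreground, background, mask, levels=8):
    """Blend the SRM output (foreground) into the forged image (background).

    foreground, background, mask: float arrays of the same 2D shape
    (run per colour channel); mask is 1 inside the predicted regions.
    """
    f, b, m = foreground, background, mask
    lap_f, lap_b, masks = [], [], []
    for _ in range(levels - 1):
        f2, b2, m2 = _down(f), _down(b), _down(m)
        lap_f.append(f - _up(f2, f.shape))   # Laplacian detail of foreground
        lap_b.append(b - _up(b2, b.shape))   # Laplacian detail of background
        masks.append(gaussian_filter(m, sigma=1.0))
        f, b, m = f2, b2, m2
    # Coarsest level: direct masked combination of the two images
    blended = m * f + (1.0 - m) * b
    for lf, lb, gm in zip(reversed(lap_f), reversed(lap_b), reversed(masks)):
        blended = _up(blended, lf.shape) + gm * lf + (1.0 - gm) * lb
    return blended

# Toy check on random 256x256 single-channel images
rng = np.random.default_rng(0)
out = laplacian_blend(rng.random((256, 256)), rng.random((256, 256)),
                      np.zeros((256, 256)))
print(out.shape)  # (256, 256)
```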
The facial components dataset has {Lip, Eye, Eyebrow, Hair, Nose} manipulated region labels with 28 sequential permutations, while the facial attributes dataset has {Bangs, Eyeglasses, Beard, Smiling, Young} manipulated region labels with 26 permutations. Each region is only manipulated once and hence the length of manipulation sequence is range from 0 to 5. The real images <cit.> are manipulated by utilizing advanced editing techniques <cit.>. More details are described in <cit.>. §.§.§ Implementation details for the detection framework All face images are resized into 256, and augmented by random JPEG compression (60-100 quality factor) only. The backbone is ResNet50 <cit.> and the channel dimension is 256 for encoder and decoder layers in each branch. We utilize 4 attention heads in both the encoder and decoder and set the encoder and decoder layers to 2, which is the same as SeqFakeFormer. In the training stage, the optimizer is AdamW with 10^-4 weight decay. The learning rate is set as 10^-3 and divided by 10 after 70 and 120 epochs. The batch size is 64 and the number of epochs is set as 150. In the multi-supervision module, as the facial attributes dataset does not offer real images, we hence adopt the real images and the manipulated masks on the facial components and attributes dataset, respectively. As to the manipulated mask, for simplicity, there are only three different value settings (i.e., 0, 0.5, 1) for unmodified region, large (Young), and small (Bangs, Eyeglasses, Beard, and Smiling) manipulated regions. Facial landmarks are detected by using the Dlib library and the indexes of the selected landmarks corresponding to the five manipulated region labels, namely {Bangs, Eyeglasses, Beard, Smiling, Young}, are {[69-74, 76, 77, 80, 81], [1, 16, 18-28, 37-48], [6-12], [49-68], [1-17, 69-81]}. §.§.§ Implementation details for the recovery framework The number of channel dimensions in the multi-supervision module for different scale features is 256. The SRM is pertrained on FFHQ <cit.> dataset and finetune on the generated face images from the proposed multi-supervision module and the downsampled real face images from <cit.>. The training strategy and hyperparameter settings in ℒ_ℛℯ𝒸ℴ𝓋ℯ𝓇𝓎 are the same as <cit.> during training. The batchsize is 8 and the number of iterations is set as 5 × 10^5. Since only the facial components dataset provides corresponding real images, the proposed recovery framework focuses exclusively on the dataset. As for the blending mask, the indexes of the selected landmarks corresponding to the manipulated region labels (i.e., {nose, eye, eyebrow, lip, hair}) are {[28-36], [[37-42] (left eye), [43-48] (right eye)], [[18-22] (left eyebrow), [23-27] (right eyebrow)], [49-68], [1-17, 69-81]}. The number of scales in the laplacian pyramid blending is 8, which is range from 2 to 256 with exponential multiples of 2. All models are implemented with Pytorch on NVIDIA GeForce RTX 3090. §.§ Evaluation on the detection performance §.§.§ Comparison with state-of-the-art To establish the superiority and efficacy of our proposed method, we conducted comprehensive comparisons with state-of-the-art detectors that are widely recognized as cutting-edge solutions in the domain of sequential deepfake detection. Table <ref> showcases the evaluation metrics of fixed accuracy (Fix-ACC), adaptive accuracy (Ada-ACC), and CSM, along with the corresponding results obtained from the comparative analysis. 
These rigorous evaluations provide a robust validation of the exceptional performance of our proposed method. DRN <cit.>, MA <cit.>, DETR <cit.>, and Two-Stream <cit.> models consider each manipulation sequence as a separate class and treat sequential deepfake detection as a multi-class classification task. However, these approaches neglect the crucial analysis of forged sequential information, which makes the detection performance lower than sequential inferring models. Furthermore, as the number of different sequential permutations increases, these models would require an expanding number of classification branches, resulting in increased parameters. In contrast, SeqFakeFormer and MMNet can achieve better detection performance, which reflects the importance of addressing both spatial and sequential forged information. Particularly, MMNet surpasses SeqFakeFormer in terms of Fix-ACC (1.28% and 0.41%), Ada-ACC (1.53% and 0.81%), and CSM (1.89% and 0.70%) on facial components and attributes datasets. These findings highlight the superior performance of the proposed MMNet, which integrates multi-collaboration and multi-supervision modules, in effectively capturing manipulated clues and achieving exceptional results in sequential deepfake detection. §.§.§ Discussion on the proposed modules MMNet is mainly composed of multi-collaboration and multi-supervision modules. We first analyze the effectiveness of the multi-collaboration module. Table <ref> shows the detection results with different combinations. ME+SD and ME+MD denote the multiple encoders with a single decoder and multiple encoders with multiple decoders, respectively. The baseline is the single encoder with a single decoder. The gain of ME+SD indicates the structure of multiple encoders can better generalize different scales of spatial manipulation traces as each branch only responds to several specific discriminative manipulated features. Fig. <ref> illustrates the feature responses from different scale encoder branches. The large scale detection encoder shows high responses in the hair region, the middle scale encoder focuses on the lip region, and the small scale encoder concentrates on the eyebrow region around. With multiple decoders, various sequential manipulation traces can be further captured and hence elevate the sequential detection performance. Notably, we have outlined two collaboration strategies in Table <ref>. The first strategy, termed "Mean", involves computing the average value of the predictions from different decoder branches, and then selecting the manipulated region label with the highest score at each step. The second strategy, termed "Max", follows Eq. (9) and Eq. (10). Our results reveal that adopting the max selection strategy yields better prediction results. We attribute this difference to the reason that the mean selection strategy may introduce noise as each step has one manipulation operation at most and each decoder exists a specific concentration, resulting in a decrease in the score of the correct label during the average process. To assess the effectiveness of the proposed multi-supervision module, as shown in Table <ref>, ablation studies were conducted with different combinations of the proposed modules. It is evident from the results that the incorporation of the multi-supervision module leads to further improvement in the detection performance. This can be attributed to the promotion of forgery localization, which introduces external channel attention to the multi-collaboration module. 
To visually illustrate this, Fig. <ref> also presents a comparison of feature responses between the multi-collaboration module without/with the multi-supervision module. The introduction of the multi-supervision module has resulted in a notable improvement in the concentration of feature responses within locally manipulated regions and hence contributes to a higher level of detection accuracy. §.§.§ Discussion on the proposed evaluation metric Upon analyzing the results obtained from Ada-ACC and CSM, further insights into the properties of the model have been revealed. Specifically, the value of Ada-ACC is larger than the CSM score on the facial components dataset suggesting that the model has limitations in effectively capturing continuously manipulated sequences. Conversely, a higher CSM value in the facial attributes dataset suggests that the model may tend to miss certain manipulated regions, as the CSM value remains unchanged while the Ada-ACC result decreases. Our findings are supported by Table <ref>, which indicates that the detector is more likely to overlook manipulated regions, as evidenced by the number of samples in which the predicted sequence length is lower than the sequential ground-truth. Additionally, Fig. <ref> provides a statistical example of instances where the detector ignores manipulated regions, leading to lower values of Ada-ACC as compared to CSM. These findings underscore the importance of evaluating both Ada-ACC and CSM as they can guide further research toward improving the model detection performance, which further highlights the necessity of the proposed evaluation metric CSM in our study. §.§.§ Discussion on the pixel-level supervision In order to evaluate the impact of various pixel-level supervisions on detection performance, on the facial components dataset, we adopt the default manipulated mask settings wherein unmodified regions are set to 0, large manipulated regions (i.e., hair) are set to 0.5, and small manipulated regions (i.e., nose, eye, eyebrow, and lip) are set to 1. The results of our comparative analysis of pixel-level supervision for both real images and manipulated masks are presented in Table <ref>. It can be observed that there is a difference of approximately 1% in performance for Ada-ACC and CSM when manipulated masks and real images are used as pixel-level supervision, respectively. This difference in performance provides quantitative evidence for the superiority of using real images as the default pixel-level supervision when they are available. §.§.§ Discussion on the trade off between the spatial size and channel dimension Due to the limited GPU memory, the multi-supervision module produces an output of 64 instead of 256 spatial size. To compensate for this, we increase the spatial size of the output at the expense of squeezing the channel dimension. Our analysis of Table <ref> reveals that despite the increase in spatial size, the reduction in channel dimension adversely impacts detection performance. We conjecture that this is due to a decrease in the learning capability of each layer, leading to the propagation of misleading information to the multi-collaboration module, despite the presence of more pixel-level supervisions. §.§.§ Discussion on the value setting in the manipulated mask In order to further investigate the appropriate value settings for the manipulated mask, Fig. <ref>(a) provides insight into the values of the large manipulated region. 
A crucial consideration in the detection of manipulated regions is the selection of an appropriate value for the large manipulated region. Assigning lower values may diminish the significance of the large manipulated region. On the other hand, higher values can diminish the impact of small manipulated regions, which can negatively affect the overall detection performance. Striking the right balance in value assignment is imperative to ensure accurate and reliable detection outcomes. §.§.§ Discussion on the binary deepfake detection To delve into the identification of authenticity, we consider the face images without/with sequential manipulations as real/fake. The results in Fig. <ref>(b) show that the proposed model can respectively achieve 95.65% and 99.93% AUC (Area Under Curve) on facial attributes and components test sets, which demonstrates the availability of our model on binary deepfake detection. The numerical difference between the binary accuracy and sequential adaptive accuracy indicates the hardness of sequential deepfake detection as the model needs to output the specific manipulated region labels with the correct sequence. Moreover, the high binary accuracy ensures the feasibility of the subsequent recovery framework as it only focuses on the forged face images. §.§ Evaluation on the recovery performance §.§.§ Discussion on the independent recovery performance To evaluate the recovery performance of the proposed recovery framework, the discrepancies between the fake and real face images are shown in the first row of Table <ref>. The aim of the recovery framework is to generate images that are more close to real ones. Four classic image similarity evaluation metrics are listed to make the comparison of both the pixel-level and feature perception-level differences. Based on the quantitative results, it can be observed that face images generated from the multi-supervision module with SRM show a reduction in pixel-level differences to some extent in the second row of Table <ref>. However, a decrease in Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and an increase in Learned Perceptual Image Patch Similarity (LPIPS) indicate that the overall perceptual differences, such as luminance, contrast, and structure, are enlarged. This is attributed to the forged face images exhibiting different manipulation traces, making them challenging for the model to directly recover. Hence, our proposed recovery framework takes into consideration the local regions of the images generated from SRM as the foreground, while treating the remaining regions in the forged face image as the background because the editing technique mainly focuses on the designated regions. In comparison to the forgeries, in the third row of Table <ref>, images generated from the proposed recovery framework demonstrate a significant improvement in both pixel and perceptual similarity. Fig. <ref> illustrates the translation from forgery to real images, showcasing that the recoveries are much closer to the real images, thus validating the effectiveness of our proposed recovery framework. It is a worthy note that our proposed framework focuses on recovering forged face images without the knowledge of the manipulation model, which sets it apart from the comparison with the recovery process when the model is known. Secondly, the comparison is impeded by the lack of corresponding real images in the facial attributes dataset, which limits the availability of relevant recovery results <cit.>. 
Furthermore, the recovery results shown in <cit.> only involve 100 randomly selected samples. In contrast, our work provides statistical results encompassing the entire test set, offering a more comprehensive and representative evaluation. §.§.§ Discussion on the local recovery performance To assess the efficacy of the recovery process on a local level, we respectively compare the local regions of the fake and recovered face images w.r.t the real ones. These analyses are presented in Table <ref>. Apparently, the large manipulated regions (i.e., hair) are more prone to deviate from the real images, and hence more difficult to recover than the small manipulated regions (i.e., nose, eye, eyebrow, and lip). Nonetheless, the proposed recovery framework can still make the recovered regions resemble the real ones. To comprehensively evaluate the local recovery performance, we average the results across all manipulated facial components. The final row of Table <ref> shows that our recovery framework can significantly elevate the similarity metrics, including MSE, SSIM, and PSNR, thereby indicating its efficacy in bridging the gap between the recovered and real images. §.§.§ Discussion on the recovery error Although the recovery framework has shown promising results in recovering forged face images from their real counterparts, there are still some errors present in the recovered images. These errors can be attributed to three main factors: the SRM, the multi-supervision module, and the manipulated label prediction. An experiment is conducted where we employ a split approach for the SRM by inputting the real images with the same scale as the output of the multi-supervision module. The differences observed between the super-resolution images and the real images are listed in the last row of Table <ref>, which indicates the super-resolution error of SRM. This is likely due to the pretrained domain bias (trained from FFHQ) and the noise amplification caused during the clarification process. Furthermore, the metric differences between the second and last row in Table <ref> represent the errors between the images generated from the multi-supervision module and the downsampled real images. These errors stem from the limited learned representation of the multi-supervision module, which may not fully capture the diverse manipulation traces present in forged face images. Additionally, our findings in Table <ref> shed light on the impact of manipulated label predictions on recovery performance. Wrong (sequence) means correct predicted labels with the wrong sequence. Wrong (half) means at least half of the labels in the predicted sequence match with the sequential ground-truth. Correct means all-time predictions are the same as the sequential ground-truth. The experimental results indicate that accurate predictions play a vital role in improving overall recovery performance. Specifically, correct predictions contribute to enhanced recovery outcomes by ensuring unbiased blending masks. On the other hand, incorrect predictions in wrong (half) can lead to biased blending masks, which can have a detrimental impact on recovery performance. Moreover, as only the multi-supervision module pays attention to the sequential manipulation traces in our proposed recovery framework, the results from the wrong (sequence) indicate that even if the blending mask is correct, the variation of sequential manipulation traces can still make the recovery performance worse. 
Therefore, investigating and refining these aspects will be future work aimed at enhancing the recovery performance. More qualitative results are shown in Fig. <ref>. § CONCLUSION In this paper, we have introduced the Multi-Collaboration and Multi-Supervision Network (MMNet) for sequential deepfake detection. The proposed detection framework of MMNet is designed to effectively handle the various manipulation traces commonly found in forged images. The proposed recovery framework is an early exploration of independent face recovery. Additionally, we have introduced a new evaluation metric called Complete Sequence Matching (CSM) to assess the detection accuracy of the model over multiple continuous steps. Our experimental results on several public datasets have shown the superiority of MMNet over existing methods, and the necessity of incorporating the proposed evaluation metric. In the future, our work will focus on improving the multi-supervision module, the SRM, and the sequential predictions, which are closely related to both detection and recovery performance.
http://arxiv.org/abs/2307.00931v1
20230703111140
Comment on "Effects of shear methods on shear strengths and deformation modes of two typical transition metal carbides and their unification"
[ "Marcin Maździarz" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "physics.app-ph", "physics.comp-ph" ]
mmazdz@ippt.pan.pl Institute of Fundamental Technological Research Polish Academy of Sciences, Pawińskiego 5B, 02-106 Warsaw, Poland Recently, Chuanying Li, Tao Fu, Xule Li, Hao Hu, and Xianghe Peng in [https://doi.org/10.1103/PhysRevB.107.224106 Phys. Rev. B 107, 224106] investigated the mechanical behavior of cubic HfC and TaC under simple shear (SS) and pure shear (PS) using first-principles calculations. Unfortunately, the paper contains some serious and fundamental flaws in the field of continuum mechanics and nanomechanics. The results presented appear to be qualitatively and quantitatively incorrect; they would only be correct in the small/linear deformation/strain regime, which is not the case here. A correct description therefore requires a finite/nonlinear deformation/strain apparatus. Comment on "Effects of shear methods on shear strengths and deformation modes of two typical transition metal carbides and their unification" Marcin Maździarz August 1, 2023 ============================================================================================================================================== § INTRODUCTION In the paper entitled "Effects of shear methods on shear strengths and deformation modes of two typical transition metal carbides and their unification" by Chuanying Li et al. <cit.>, cubic HfC and TaC crystals were subjected to simple shear (SS) and pure shear (PS) by DFT simulations. The paper confuses concepts from continuum mechanics and incorrectly enforces the deformations, and it appears that the computational results are qualitatively and quantitatively incorrect. * The Authors write: "During deformation, the lattice vectors need to be changed, ... Hence, the deformation can be imposed by transforming the i-1th step lattice vector matrix R^i-1 to the deformed ith step lattice vector matrix R^i as follows [36]: R^i = R^i-1[I+( [ 0 0 0; 0 0 0; 0 Δε_zy 0; ]) ] ." This formula is incorrect. In the non-linear mechanics of crystals there is a well-known hypothesis, called the Cauchy-Born rule <cit.>, which assumes that under a homogeneous macroscopic deformation the primitive Bravais lattice vectors of a 3D crystal deform in an affine manner via a 3×3 matrix F: a_i = FA_i, where a_i stands for the spatial lattice vectors, A_i for the reference lattice vectors, and F for the deformation gradient <cit.>. Even if we wanted to impose the deformation in steps, it is clear from the tensor calculus point of view that Eq.<ref> should read as follows: R^def = F· R^ini. The vector, or matrix of vectors, is multiplied by the matrix on the left. Moreover, the components of the deformation gradient F are not, in general, components of the strain tensor. * The Authors write further: "For PS, after each ε_zy, the atomic coordinates and the other five independent strain components (except ε_zy) are optimized simultaneously to reach a stress state with σ_xx = σ_yy = σ_zz = σ_xy = σ_zx = 0 [35,37,38]. For SS, after applying ε_zy, the atomic coordinates are optimized but remain ε_xx = ε_yy = ε_zz = ε_xy = ε_zx = 0 [14,39]. The schematics of PS and SS are illustrated in Figs. 1(a) and 1(b), respectively." The above excerpt from the Authors' text contains several misrepresentations. In general, the Lagrangian finite strain tensor E, also called the Green-Lagrangian strain tensor or Green–St-Venant strain tensor, is defined as: E = 1/2 ( F^T F - I) , and its linear approximation, the infinitesimal strain tensor ε, also called Cauchy's strain tensor, the linear strain tensor, or the small strain tensor, takes the form: ε = 1/2 ( F^T + F) - I .
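As a quick numerical check of the difference between these two measures (our own sketch; the shear magnitude γ = 0.2 is an arbitrary example), one can evaluate both tensors for a simple-shear deformation gradient:

```python
import numpy as np

gamma = 0.2  # example shear magnitude (arbitrary)

# Simple-shear deformation gradient, with the x, y, z ordering used in the text
F = np.array([[1.0, 0.0,   0.0],
              [0.0, 1.0, gamma],
              [0.0, 0.0,   1.0]])

I = np.eye(3)
E   = 0.5 * (F.T @ F - I)      # Green-Lagrange (finite) strain
eps = 0.5 * (F.T + F) - I      # infinitesimal (small) strain

print("E_zz   =", E[2, 2])     # gamma**2 / 2 = 0.02  (non-zero)
print("eps_zz =", eps[2, 2])   # 0.0 -- only true in the linearized theory
```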
The classical finite simple shear deformation (SS) is an isochoric plane deformation defined by the deformation gradient tensor F in the following form <cit.> (and this implies the form of the tensor E and ε): F^SS = [ [ 1 0 0; 0 1 γ; 0 0 1; ]] ⇒E^SS = [ [ 0 0 0; 0 0 γ/2; 0 γ/2 γ^2/2; ]] ⇒ε^SS = [ [ 0 0 0; 0 0 γ/2; 0 γ/2 0; ]]. It can be seen here that E^SS_zz≠ 0. This would be the case if we were using the small strain tensor ε^SS_zz. The deformations in this work are not small. It is important to remember what the consequences of overusing linear theories can be. Because F is not objective and hence small strain tensor ε also and a rigid rotation can induce any stresses in a deformable body and they should not, see <cit.> . The pure shear deformation (PS) <cit.> is a deformation in which the body is elongated in one direction while being shortened perpendicularly and is defined by: F^PS = [ [ 1 0 0; 0 β 0; 0 0 1/β; ]] ⇒E^PS = [ [ 0 0 0; 0 β^2-1/2 0; 0 0 β^-2-1/2; ]] ⇒ε^PS = [ [ 0 0 0; 0 β-1 0; 0 0 β^-1-1; ]]. The details of the PS in the rotated coordinate frame along the shear plane can be found in <cit.>. The Authors' PS is not in fact a PS, but another deformation. The pure shear stress (PSS) is such a deformation for which the Cauchy stress tensor is a pure shear stress tensor of the form σ = τ(e_2 ⊗ e_3 + e_3 ⊗ e_2) with τ ∈ ℝ <cit.>. To enforce such a deformation F must take the form of a simple shear composed with a triaxial stretch: σ^PSS = [ [ 0 0 0; 0 0 τ; 0 τ 0; ]] F^PSS = [ [ a 0 0; 0 b c η; 0 0 c; ]] ⇒E^PSS = [ [ a^2-1/2 0 0; 0 b^2-1/2 b c η/2; 0 b c η/2 η^2 c^2+c^2-1/2; ]] . * In "FIG. 2. Mechanical response of samples sheared along [11̅0](110). (a), (b) Variations of stress components of HfC and TaC with ε_zy under SS and PS, respectively. ..." the Authors presented the results of their DFT shear computations. The HfC and TaC crystals analyzed here have cubic symmetry and there is no reason for them to behave differently in shear than other cubic crystals, see <cit.>. The shear component of the stress during SS deformation, by the way for PSS also, should have a sine-like character. In order to verify this, the HfC[11̅0](110) crystal was subjected to SS and PSS by ab-initio calculations based on density functional theory (DFT) <cit.> within the pseudopotential plane-wave approximation (PP-PW) performed by the ABINIT <cit.> code. The projector augmented wave method (PAW) pseudopotentials used for the PBE <cit.> generalized gradient approximation (GGA) exchange–correlation functionals (XC) were obtained from the PseudoDojo project <cit.>. The valence electron configurations and the cut-off energy used for Hf and C atoms were consistent with those utilized in the PAW pseudopotentials. The calculation accuracy settings correspond to those used in <cit.>. It can be seen from Figure <ref> that the results presented here are substantially different from those presented by the Authors in their Figure 2 for HfC. For SS, the σ_yz shear component of the stress has a proper sine-like character and the σ_yy component of the stress is about twice as large. Zeroing the normal stresses in the PSS reduces the shear stress and introduces its asymmetry. These results also differ substantially from those presented by the Authors. This work was partially supported by the National Science Centre (NCN – Poland) Research Projects: No. 2021/43/B/ST8/03207. 
Additional assistance was granted through the computing cluster GRAFEN at Biocentrum Ochota, the Interdisciplinary Centre for Mathematical and Computational Modelling of Warsaw University (ICM UW) and Poznań Supercomputing and Networking Center (PSNC).
http://arxiv.org/abs/2307.02467v1
20230705174135
The waveform of the scalar induced gravitational waves in light of Pulsar Timing Array data
[ "Zhu Yi", "Qing Gao", "Yungui Gong", "Yue Wang", "Fengge Zhang" ]
gr-qc
[ "gr-qc" ]
yz@bnu.edu.cn Advanced Institute of Natural Sciences, Beijing Normal University, Zhuhai 519087, China gaoqing1024@swu.edu.cn School of Physical Science and Technology, Southwest University, Chongqing 400715, China yggong@hust.edu.cn School of Physics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China Department of Physics, School of Physical Science and Technology, Ningbo University, Ningbo, Zhejiang 315211, China m202170265@hust.edu.cn School of Physics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China Corresponding author. zhangfg5@mail.sysu.edu.cn School of Physics and Astronomy, Sun Yat-sen University, Zhuhai 519088, China The recent gravitational wave signal detected by NANOGrav, Parkers Pulsar Timing Array, European Pulsar Timing Array, and Chinese Pulsar Timing Array collaborations can be explained by scalar induced secondary gravitational waves. The waveforms of scalar induced secondary gravitational waves exhibit a near-model-independent behavior in the infrared region k≪ k_p, h^2Ω_GW = A_GW(k/k_ ref)^n_GW, where the index n_GW is n_GW = 2 n_1 for n_1<3/2, n_GW = 3-3/ ln(k_p/k) for n_1=3/2, and n_GW =3-2/ ln(k_p/k) for n_1>3/2 if the primordial curvature perturtation is parameterized as a power-law with the index n_1. Through Bayesian analysis, we discuss the parameter space that characterizes the behavior of scalar induced gravitational waves in the infrared region. The mean values and one sigma confidence intervals of parameters are log_10 A_GW = -7.18^+0.24_-0.26 and n_1 = 0.94^+0.17_-0.17 for n_1<3/2, log_10 A_GW = -6.96^+0.27_-0.30 and log_10 k_p/ Mpc^-1 = 8.24^+1.48_-0.58 for n_1=3/2, and log_10 A_GW = -6.77^+0.19_-0.22 and log_10 k_p/ Mpc^-1 = 8.37^+1.69_-0.68 for n_1>3/2. Comparing with the interpretation of the detected signal as stochastic background from massive black hole binaries, the results for n_1<3/2, n_1=3/2, and n_1>3/2 give the support of scalar induced gravitational waves with the Bayes factor lnℬ= 2.8, lnℬ= 2.9, and lnℬ = 1.8, respectively. The waveform of the scalar induced gravitational waves in light of Pulsar Timing Array data Fengge Zhang August 1, 2023 =========================================================================================== § INTRODUCTION The detection of a common-spectrum process, exhibiting the Hellings-Downs angular correlation characteristic of gravitational waves (GWs), has been confirmed in the recent report by the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) <cit.>, Parkers Pulsar Timing Array (PPTA) <cit.>, European Pulsar Timing Array (EPTA) <cit.>, and Chinese Pulsar Timing Array (CPTA) <cit.>. Estimations based on assuming the signal originates from an ensemble of binary supermassive black hole inspirals and utilizing a fiducial f^-2/3 characteristic-strain spectrum suggest a strain amplitude of 2.4^+0.7_-0.6× 10^-15 at a reference frequency of 1  yr^-1 <cit.>. While supermassive black hole binaries (SMBHBs) present a plausible explanation for the observed signal, it is also conceivable that the signal arises from cosmological processes such as cosmic inflation, first-order phase transitions, cosmic strings, domain walls, or scalar-induced gravitational waves (SIGWs) <cit.>. This paper specifically focuses on investigating the scenario where the signal is attributed to SIGWs. SIGWs originate from scalar perturbations seeded by the primordial curvature perturbations generated during the inflationary epoch <cit.>. 
These gravitational waves possess a broad frequency distribution and can be detected not only by PTAs but also by space-based gravitational wave detectors, including the Laser Interferometer Space Antenna (LISA) <cit.>, Taiji <cit.>, TianQin <cit.>, and the Deci-hertz Interferometer Gravitational-Wave Observatory (DECIGO) <cit.>. The study of SIGWs provides valuable insights into the inflationary period and its dynamics. In order to generate significant SIGWs, it is expected that the amplitude of the power spectrum of the primordial curvature perturbations, denoted as 𝒜_ζ, should be on the order of 𝒪(0.01) <cit.>. However, measurements of cosmic microwave background (CMB) anisotropy at large scales have constrained the primordial power spectrum to approximately 𝒜_ζ= 2.1× 10^-9 <cit.>. Consequently, to account for the NANOGrav 15 yrs signal, a significant enhancement of approximately seven orders of magnitude in the primordial curvature perturbations is required at small scales. This enhancement can be achieved through inflation models incorporating a transitional ultra-slow-roll phase <cit.>. Interestingly, SIGWs generated by these models exhibit a universal behavior in the infrared region relative to the position of the peak in their energy density <cit.>. This behavior is described by the expression Ω_ GW =Ω_0 (k/k_p)^n_ GW(k,k_p), where Ω_ GW represents the energy density of SIGWs, the index n_ GW(k,k_p)>0 is a function of the scale k and the peak scale k_p, and k≪ k_p. The gravitational wave signal favored by the NANOGrav 15 yrs data exhibits an energy density that increases with frequency, which is consistent with the universal behavior of the SIGWs in the infrared region. In this paper, we show that the power index n_ GW(k,k_p) can exhibit three distinct values depending on the behavior of the primordial curvature power spectrum with a broken power-law form. These values include a constant, n_ GW(k,k_p) = 3-3/ln(k_p/k), or n_ GW(k,k_p) = 3-2/ln(k_p/k). It is important to note that these power indices deviate from the default scenario of SMBHBs presented in the NANOGrav report <cit.>. By using the universal waveform of SIGWs in the infrared region, we aim to provide an explanation for the observed NANOGrav 15 yrs signal. The structure of this paper is outlined as follows. In Section <ref>, we provide a concise overview of the generation of SIGWs from primordial curvature perturbations. Section <ref> presents the universal behavior of the SIGW waveform in the infrared region. The explanation of the NANOGrav 15 yrs data by the SIGWs in the infrared region is presented in Section <ref>, and the conclusions are summarized in Section <ref>. The behavior of SIGWs in the ultraviolet region is presented in appendix <ref>. § THE SCALAR INDUCED GRAVITATIONAL WAVES The perturbed metric in the Newtonian gauge is ds^2=a^2(η)[-(1+2Φ)dη^2+{(1-2Φ)δ_ij+1/2h_ij}dx^i dx^j], where Φ is the Bardeen potential, and h_ij is the tensor perturbation in the transverse-traceless gauge, ∂^ih_ij=h^i_i=0. The tensor perturbation in Fourier space is h_ij(x,η)=∫d^3k/(2π)^3/2e^ik·x[h^+_k(η)e^+_ij(k) +h^×_k(η)e^×_ij(k)], where e^+_ij(k) and e^×_ij(k) are the plus and cross-polarization tensors, e^+_ij(k)=1/√(2)[e_i(k)e_j(k) - ē_i(k)ē_j(k)], e^×_ij(k)=1/√(2)[e_i(k)ē_j(k)+ē_i(k)e_j(k)], and the two orthonormal basis vectors e_i(k) and ē_i(k) are orthogonal to the wave vector k. 
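As a quick, self-contained check of these polarization conventions, the short Python sketch below constructs the two basis vectors orthogonal to an arbitrarily chosen wave vector, builds e^+_ij and e^×_ij as defined above, and verifies that both tensors are transverse and traceless; the numerical wave vector is purely illustrative.

import numpy as np

def polarization_tensors(k_vec):
    # Build e_i and ebar_i orthogonal to the wave vector, then the plus and
    # cross polarization tensors e^+_ij and e^x_ij defined in the text.
    k_hat = np.asarray(k_vec, dtype=float)
    k_hat /= np.linalg.norm(k_hat)
    trial = np.array([0.0, 0.0, 1.0]) if abs(k_hat[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    e = np.cross(k_hat, trial)
    e /= np.linalg.norm(e)
    ebar = np.cross(k_hat, e)
    e_plus = (np.outer(e, e) - np.outer(ebar, ebar)) / np.sqrt(2.0)
    e_cross = (np.outer(e, ebar) + np.outer(ebar, e)) / np.sqrt(2.0)
    return k_hat, e_plus, e_cross

k_hat, e_plus, e_cross = polarization_tensors([0.3, -1.2, 0.5])
for tensor in (e_plus, e_cross):
    assert np.allclose(k_hat @ tensor, 0.0)   # transverse: k^i e_ij = 0
    assert np.isclose(np.trace(tensor), 0.0)  # traceless: e^i_i = 0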
Neglecting the anisotropic stress, the equation of motion for the tensor perturbation with either polarization in Fourier space is h^”_k(η)+2ℋh^'_k(η)+k^2h_k(η)=4S_k(η), where η is the conformal time, dη = dt/a, a prime denotes the derivative with respect to the conformal time, ℋ=a'/a is the conformal Hubble parameter, and the source is S_k(η)=∫d^3k̃/(2π)^3/2 e^ij(k)k̃_ik̃_j f(k,k̃,η)ϕ_k̃ϕ_k-k̃, with f(k,k̃,η)= 2T_ϕ(k̃η)T_ϕ(|k-k̃|η) +4/3(1+w)ℋ^2(T'_ϕ(k̃η)+ℋ T_ϕ(k̃η))(T'_ϕ(|k-k̃|η) +ℋ T_ϕ(|k-k̃|η)), where e_ij^+(k)k̃^ik̃^j=k̃^2[1-μ^2]cos(2φ)/√(2), e_ij^×(k)k̃^ik̃^j=k̃^2[1-μ^2]sin(2φ)/√(2), μ=k·k̃/(kk̃), and φ is the azimuthal angle. T_ϕ is the transfer function for the Bardeen potential Φ, defined by Φ_k(η)=ϕ_kT_ϕ(kη). The perturbations ϕ_k can be obtained from the primordial curvature power spectrum 𝒫_ζ by the relation ⟨ϕ_kϕ_p⟩=δ^3(k+p)2π^2/k^3(3+3w/5+3w)^2 𝒫_ζ(k). By using the Green's function method, we can solve equation (<ref>), and the solution is h_k(η)=4/a(η)∫^η dη̄ G_k(η,η̄) a(η̄) S_k(η̄), where the Green's function satisfies G^”_k(η,η̄)+(k^2-a^”(η)/a(η))G_k(η,η̄)= δ(η-η̄). The power spectrum of SIGWs is defined by ⟨ h_k(η)h_p(η)⟩ =δ^3(k+p) 2π^2/k^3𝒫_h(k,η). Combining Eqs. (<ref>), (<ref>) and (<ref>), we get 𝒫_h(k,η)=4∫^∞_0 dk̃∫^1_-1dμ(1-μ^2)^2ℐ^2_RD(k,k-k̃,η) k^3k̃^3/|k-k̃|^3𝒫_ζ(k̃)𝒫_ζ (|k-k̃|), where ℐ_RD(k,k-k̃,η)=(3+3w/5+3w)^2 4/a(η)∫^η_0 dη̄ G_k(η,η̄) a(η̄)f(k,k̃,η̄), and the SIGWs are assumed to be produced before horizon reentry. During radiation domination, w=1/3, the Green's function is G_k(η,η̄)=sin[k(η-η̄)]/kΘ(η-η̄), where Θ is the Heaviside step function, and the transfer function is T_ϕ(x)=9/x^2[sin(x/√(3))/(x/√(3))-cos(x/√(3))], with x=kη. After some calculations, we get the power spectrum of the SIGWs <cit.> 𝒫_h(k,η)=4∫^∞_0dv∫^1+v_|1-v|du (4v^2-(1+v^2-u^2)^2/4vu)^2 I^2_RD(u,v,x)𝒫_ζ(vk)𝒫_ζ(uk), where u=|k-k̃|/k, v=k̃/k, and I_RD(u,v,x)=k^2 ℐ_RD(k,k-k̃,η). At late times, kη≫ 1, i.e., x→∞, the time average of I^2_RD(u,v,x→∞) is <cit.> I_RD^2(v,u,x→∞) = 1/2x^2( 3(u^2+v^2-3)/4 u^3 v^3 )^2 {π^2 (u^2+v^2-3)^2 Θ( v+u-√(3)) + ( -4uv+(u^2+v^2-3) log| 3-(u+v)^2/3-(u-v)^2| )^2 }. The fractional energy density of SIGWs per logarithmic interval of k is defined as Ω_GW(η,k)=1/24(k/ℋ(η))^2𝒫_h(k,η). Because the energy density of SIGWs decays in the same way as radiation, we can estimate the energy density of SIGWs today in terms of the present energy density of radiation, denoted as Ω_r,0. The relationship is <cit.> Ω_GW(η_0,k)=Ω_GW(η,k)Ω_r,0/Ω_r(η), where η can be chosen at a generic time towards the end of the radiation-dominated era. § THE SIGWS IN THE INFRARED REGION The gravitational wave signal favored by the NANOGrav 15 yrs data exhibits an energy density that increases with frequency. This characteristic is in line with the model-independent behavior of the waveform of SIGWs in the infrared region. In this section, we investigate the waveform of SIGWs in this region. Parameterized power spectra of the primordial curvature perturbation are commonly used to investigate the waveform of SIGWs <cit.>. One simple parameterization is the broken power-law function, which can effectively characterize the peak in the power spectrum predicted in various inflationary models <cit.>. In this paper, we parameterize the power spectrum of the primordial curvature perturbation with the broken power-law function, 𝒫_ζ(k)={[ 𝒜_ζ(k/k_p)^n_1, k≤ k_p,; 𝒜_ζ(k/k_p)^n_2, k>k_p, ]. where k_p is the peak scale and 𝒜_ζ is the amplitude at the peak. 
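For readers who want to evaluate these expressions numerically, the following Python sketch combines the broken power-law spectrum with the oscillation-averaged kernel quoted above into the double integral for the late-time energy density. The amplitude, peak scale, spectral indices, and the upper integration cutoff are illustrative placeholders rather than values taken from this work, and the quadrature makes no special effort to handle the logarithmic singularities at u+v=√3, which a production calculation would treat more carefully.

import numpy as np
from scipy import integrate

def P_zeta(k, A_zeta=1.0e-2, k_p=1.0e8, n1=1.0, n2=-2.0):
    # Broken power-law primordial curvature spectrum; all parameter values
    # here are placeholders chosen only for illustration.
    k = np.asarray(k, dtype=float)
    return np.where(k <= k_p, A_zeta * (k / k_p)**n1, A_zeta * (k / k_p)**n2)

def x2_I2_avg(u, v):
    # Oscillation average of x^2 I_RD^2(u, v, x -> infinity) during radiation
    # domination, transcribed from the expression quoted in the text.
    s = u**2 + v**2 - 3.0
    pref = 0.5 * (3.0 * s / (4.0 * u**3 * v**3))**2
    log_term = np.log(np.abs((3.0 - (u + v)**2) / (3.0 - (u - v)**2)))
    resonance = np.pi**2 * s**2 * (u + v > np.sqrt(3.0))
    return pref * (resonance + (-4.0 * u * v + s * log_term)**2)

def omega_gw_late(k, v_max=50.0, **spec_kwargs):
    # Omega_GW(k) deep in the radiation era, i.e. (1/24)(k/H)^2 <P_h>,
    # written as the (u, v) double integral with an x^2/6 weighting.
    def integrand(u, v):
        geom = (4.0 * v**2 - (1.0 + v**2 - u**2)**2) / (4.0 * u * v)
        return (geom**2 * x2_I2_avg(u, v) / 6.0
                * P_zeta(u * k, **spec_kwargs) * P_zeta(v * k, **spec_kwargs))
    val, _ = integrate.dblquad(integrand, 0.0, v_max,
                               lambda v: abs(1.0 - v), lambda v: 1.0 + v,
                               epsrel=1.0e-3)
    return val

# Example: the infrared slope recovered numerically, to be compared with the
# analytic index derived below (~ 2*n1 for n1 < 3/2).
# ks = [1.0e5, 2.0e5]
# om = [omega_gw_late(k, k_p=1.0e8, n1=1.0, n2=-2.0) for k in ks]
# n_gw = np.log(om[1] / om[0]) / np.log(ks[1] / ks[0])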
For simplicity of notation, we define the following function f(u,v)= x^2/6(4v^2-(1+v^2-u^2)^2/4vu)^2I^2_RD(u,v,x →∞). With this, the fractional energy density of SIGWs can be expressed as Ω_GW(k)=∫^∞_0dv ∫^1+v_|1-v|du f(u,v)𝒫_ζ(vk)𝒫_ζ(uk). Dividing the above integral into three parts, Ω_GW(k)= ∫^c_1_0dv ∫^1+v_|1-v|du f(u,v)𝒫_ζ(vk)𝒫_ζ(uk) +∫^c_2_c_1dv ∫^1+v_|1-v|du f(u,v)𝒫_ζ(vk)𝒫_ζ(uk) +∫^∞_c_2dv ∫^1+v_|1-v|du f(u,v)𝒫_ζ(vk)𝒫_ζ(uk), with c_1≪1 and c_2≫1. We then consider the asymptotic behavior of f(u,v) in the following limits: * v≤ c_1≪1 In this case, u→1 due to |u-v|<1. Then we have f(1,v)≃ v^2/3. * 1≪ v≤ c_2 In this case, u≫1 and u≃ v, and f(u,v) behaves as f(v,v)≃ 3/4[4+π^2-4ln(4/3)+(ln(4/3))^2. .-4(2-ln(4/3))ln(v)+4(ln v)^2]1/v^4≡ F(v). Using the approximate expressions (<ref>) and (<ref>) provided above, we are able to calculate the energy density Ω_GW of SIGWs in the infrared region, where k is much smaller than the peak scale k_p, k≪ k_p. With Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), we have Ω_GW/(𝒜_ζ)^2≃ 2/3(k/k_p)^2n_1∫^c_1_0v^3+n_1dv+∫^c_2_c_1dv∫^1+v_|1-v|du f(u,v)𝒫_ζ(uk)𝒫_ζ(vk) +2(k/k_p)^2n_1∫^k_p/k_c_2F(v)v^2n_1dv+2(k/k_p)^2n_2∫^∞_k_p/kF(v)v^2n_2dv. If n_1≠ 3/2, the energy density becomes Ω_GW/(𝒜_ζ)^2≃ 2/3(k/k_p)^2n_1/4+n_1v^4+n_1|^c_1_0+g(n_1)(k/k_p)^2n_1 +3/2(k/k_p)^2n_1v^2n_1-3/(2n_1-3)^3[A(n_1)+B(n_1)ln(v)+C(n_1)(ln(v))^2]|^k_p/k_c_2 +3/2(k/k_p)^2n_2v^2n_2-3/(2n_2-3)^3[A(n_2)+B(n_2)ln(v)+C(n_2)(ln(v))^2]|^∞_k_p/k, where A(x)= 20+9π^2-24ln(4/3)+9(ln(4/3))^2-4x(8+3π^2-10ln(4/3)+3(ln(4/3))^2) +4x^2(4+π^2-4ln(4/3)+(ln(4/3))^2), B(x)=4(-3+2x)(4+x(-4+2ln(4/3))-3ln(4/3)), C(x)=4(3-2x)^2, and g(x)=∫^c_2_c_1dv∫^1+v_|1-v|duf(u,v)u^xv^x. Note that g is a finite constant once we fix its argument. From the first and last terms on the right-hand side of Eq. (<ref>), if n_1<-4 or n_2>3/2, Ω_GW will be divergent. In this paper, we only consider the power spectrum with a peak, namely, n_1>0 and n_2<0. Under these assumptions, we obtain Ω_GW in the infrared region for n_1≠ 3/2, Ω_GW/(𝒜_ζ)^2≃ (k/k_p)^2n_1(2/3c_1^4+n_1/4+n_1+g(n_1) -3/2(c_2)^2n_1-3/(2n_1-3)^3[A(n_1). .+B(n_1)ln(c_2)+C(n_1)(ln(c_2))^2]) + 3/2(k/k_p)^3/(2n_1-3)^3[A(n_1) +B(n_1)ln(k_p/k)+C(n_1)(ln(k_p/k))^2] -3/2(k/k_p)^3/(2n_2-3)^3[A(n_2)+B(n_2)ln(k_p/k) +C(n_2)(ln(k_p/k))^2]. Similarly, for n_1=3/2, we have Ω_GW/(𝒜_ζ)^2≃ (k/k_p)^2n_1(2/3c_1^4+n_1/4+n_1+g(n_1))+ 3/2(k/k_p)^3[(π^2+(-2+ln(4/3))^2)ln(k_p/k) . . +2(-2+ln(4/3))(ln(k_p/k))^2+4/3(ln(k_p/k))^3] -3/2(k/k_p)^3[(π^2+(-2+ln(4/3))^2)ln(c_2) . .+2(-2+ln(4/3))(ln(c_2))^2+4/3(ln(c_2))^3] -3/2(k/k_p)^3/(2n_2-3)^3[A(n_2) +B(n_2)ln(k_p/k)+C(n_2)(ln(k_p/k))^2]. By utilizing the approximate expressions of SIGWs given in Eqs. (<ref>) and (<ref>), we can investigate the behavior of SIGWs in the infrared region. The first term of (<ref>) dominates in the infrared region for n_1<3/2, thus we have Ω_GW(k) ∝ k^2n_1, which means that the power index of SIGWs is always two times the index of the power spectrum for n_1<3/2. For n_1>3/2, the second and last terms on the right-hand side of Eq. (<ref>) are dominant in the infrared region. Both of these terms exhibit similar scaling behavior with respect to the wavenumber, and we obtain n_GW=dlnΩ_GW(k)/dln k= 3-2/ln(k_p/k). Note that this log-dependent behavior can be derived from a more general primordial curvature power spectrum and is universal for a narrow spectrum <cit.>. For n_1=3/2, the second term on the right-hand side of Eq. (<ref>) dominates, and we have n_GW=dlnΩ_GW(k)/dln k=3-3/ln(k_p/k). 
In conclusion, the index of SIGWs in the infrared region is n_GW= {[ 2n_1, n_1 < 3/2; 3-3/ln(k_p/k), n_1=3/2; 3-2/ln(k_p/k), n_1>3/2 ].. From the above expression, it is obvious that n_GW tends to the constant 3 in the infrared region if n_1≥3/2. In fact, n_GW→ 3 is a quite universal behavior in the infrared region for a narrow power spectrum <cit.>. With the same method, we can easily obtain the approximate expressions of the fractional energy density of SIGWs in the ultraviolet region, which can be found in appendix <ref>. In order to test the accuracy of our analytic expressions, in Fig. <ref> we show our analytic results and the numerical results for different sets of parameters. From Fig. <ref>, the analytic results and the numerical results match well in both the infrared and ultraviolet regions. § THE MODEL AND RESULTS In the previous section, we derived a universal behavior of the energy density of SIGWs in the infrared region. It can be described by the following equation Ω_GWh^2 = A_GW(k/k_ ref)^n_GW, where k_ ref is the scale corresponding to the frequency f_ ref=1  yr ^-1. From equation (<ref>), the index n_GW is n_GW= {[ 2n_1, n_1 < 3/2; 3-3/ ln(k_p/k), n_1=3/2; 3-2/ ln(k_p/k), n_1>3/2 ]., where k_p is the peak scale of the SIGWs and the scale k should satisfy the infrared condition k≪ k_p. The last log-dependent index is valid not only for the primordial power spectrum with the power-law form, but also for a general narrow spectrum <cit.>. We perform a Bayesian analysis of the NANOGrav 15 yrs data to study the parameterization of the SIGWs given in Eq. (<ref>). We used the 14 frequency bins reported in <cit.> to fit the posterior distributions of the parameters. The analysis was performed using the Bilby code <cit.>, employing the dynesty algorithm for nested sampling <cit.> with 1000 live points (nlive = 1000); a schematic sketch of such a setup is given below. The log-likelihood function was derived by evaluating the energy density of the SIGWs at the 14 specific frequency bins. We then calculated the sum of the logarithm of the probability density functions from 14 independent kernel density estimates corresponding to these values <cit.>. The resulting posterior distributions for the parameters in equation (<ref>) are presented in figures <ref>, <ref>, and <ref>, corresponding to the cases where n_1 < 3/2, n_1 = 3/2, and n_1 > 3/2, respectively. For the case n_1<3/2, the marginalized posterior distributions result in the following mean values and one-sigma confidence intervals, log_10 A_GW = -7.18^+0.24_-0.26, n_1 =0.94^+0.17_-0.17. These values are in agreement with the results from the NANOGrav collaboration <cit.>, where the power index is constant. For the case n_1=3/2, the marginalized posterior distributions result in the following mean values and one-sigma confidence intervals, log_10 A_GW = -6.96^+0.27_-0.30, log_10 k_p/ Mpc^-1 =8.24^+1.48_-0.58. For the case n_1 > 3/2, the marginalized posterior distributions result in the following mean values and one-sigma confidence intervals, log_10 A_GW = -6.77^+0.19_-0.22, log_10 k_p/ Mpc^-1 =8.37^+1.69_-0.68, which are similar to the n_1=3/2 case. When compared with the interpretation of the detected signal as a stochastic background from massive black hole binaries, the results for n_1<3/2, n_1=3/2, and n_1>3/2 provide support for SIGWs with respective Bayes factors of lnℬ= 2.8, lnℬ= 2.9, and lnℬ = 1.8. By choosing the best-fit parameter values from equations (<ref>), (<ref>), and (<ref>), we depict the corresponding energy density of the SIGWs in figure <ref>. 
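As an illustration of how such a fit can be organized, the sketch below implements the infrared template Ω_GWh^2 = A_GW(k/k_ ref)^n_GW with the piecewise index derived above, together with a Bilby likelihood of the kind described in the text. The frequency array, the kernel density estimates of the free-spectrum posteriors (assumed here to be scipy gaussian_kde objects built over log_10 Ω_GWh^2 in each bin), the prior ranges, and all variable names are illustrative assumptions rather than the actual analysis pipeline.

import numpy as np
import bilby

F_YR = 1.0 / (365.25 * 24.0 * 3600.0)   # reference frequency 1 yr^-1 in Hz

def omega_gw_template(f, log10_A, case="lt", n1=1.0, log10_fp=None):
    # Infrared template with the piecewise index derived above;
    # "lt", "eq", "gt" label the cases n1 < 3/2, n1 = 3/2, n1 > 3/2.
    f = np.asarray(f, dtype=float)
    if case == "lt":
        n_gw = 2.0 * n1
    else:
        c = 3.0 if case == "eq" else 2.0
        n_gw = 3.0 - c / np.log(10.0**log10_fp / f)   # valid only for f << f_p
    return 10.0**log10_A * (f / F_YR)**n_gw

class SIGWLikelihood(bilby.Likelihood):
    # Sum of the log probability densities of the model evaluated at each
    # frequency bin, using one kernel density estimate per bin (built
    # elsewhere from the free-spectrum posteriors; placeholders here).
    def __init__(self, freqs, kdes, case="lt"):
        super().__init__(parameters={"log10_A": None, "n1": None, "log10_fp": None})
        self.freqs, self.kdes, self.case = freqs, kdes, case

    def log_likelihood(self):
        p = self.parameters
        model = omega_gw_template(self.freqs, p["log10_A"], self.case,
                                  n1=p["n1"], log10_fp=p["log10_fp"])
        return float(sum(kde.logpdf(np.log10(m))[0]
                         for kde, m in zip(self.kdes, model)))

# Hypothetical usage for the n1 < 3/2 case (freqs and kdes prepared beforehand):
# priors = dict(log10_A=bilby.core.prior.Uniform(-9, -4, "log10_A"),
#               n1=bilby.core.prior.Uniform(0.0, 1.5, "n1"),
#               log10_fp=bilby.core.prior.DeltaFunction(0.0, "log10_fp"))
# result = bilby.run_sampler(likelihood=SIGWLikelihood(freqs, kdes, "lt"),
#                            priors=priors, sampler="dynesty", nlive=1000)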
The black, green, and red curves represent the energy density of SIGWs with n_1<3/2, n_1=3/2, and n_1>3/2, respectively. The orange line denotes the energy density of the gravitational wave background with the form Ω_GWh^2 =A_GW(k/k_ref)^2/3 and log_10 A_GW = -8.14, which corresponds to the SMBHB scenario for explaining the NANOGrav 15 yrs data. The vertical dashed grey line is located at the frequency f_p = 2.7× 10^-7  Hz, representing the scale associated with the best-fit peak scale k_p value of log_10 k_p / Mpc^-1 = 8.24. The frequencies in the NANOGrav 15 yrs data set satisfy the condition f_ NANOGrav≤ f_p/10, ensuring that the scales we focus on satisfy k < k_p. This ensures that the frequencies in the data set fall within the infrared region and fulfill the necessary infrared condition. § CONCLUSION The recent gravitational wave signal detected by the NANOGrav, Parkes Pulsar Timing Array, European Pulsar Timing Array, and Chinese Pulsar Timing Array collaborations can originate from SIGWs. In this paper, we investigate the asymptotic behavior of SIGW waveforms by parameterizing the power spectrum of primordial curvature perturbations using a broken power-law model. In the infrared region k≪ k_p, the SIGWs exhibit a near-model-independent behavior, which can be characterized by the form Ω_GWh^2 = A_GW(k/k_ ref)^n_GW, where the index n_GW takes different values depending on the range of n_1: n_GW = 2 n_1 for n_1<3/2, n_GW = 3-3/ ln(k_p/k) for n_1=3/2, and n_GW =3-2/ ln(k_p/k) for n_1>3/2. We have demonstrated that the NANOGrav 15 yrs signal can be effectively explained by this near-model-independent behavior of SIGWs in the infrared region. Through Bayesian analysis, we have identified the parameter space that can adequately account for the NANOGrav 15 yrs data. In the scenario where n_1<3/2, our results align with those of the NANOGrav collaboration and yield the following mean values and one-sigma confidence intervals, log_10 A_GW = -7.18^+0.24_-0.26 and n_1 = 0.94^+0.17_-0.17. This indicates a consistent trend with the previous findings. For the case of n_1=3/2, the mean values and one-sigma confidence intervals are log_10 A_GW = -6.96^+0.27_-0.30 and log_10 k_p/ Mpc^-1 = 8.24^+1.48_-0.58. This scenario provides an alternative explanation that deviates from the previous scenario and introduces a different peak scale k_p. In the scenario where n_1>3/2, we find the corresponding values log_10 A_GW = -6.77^+0.19_-0.22 and log_10 k_p/ Mpc^-1 = 8.37^+1.69_-0.68. This represents another distinct possibility for the waveform behavior of SIGWs. Compared with the interpretation of the detected signal as a stochastic background from massive black hole binaries, our results provide support for the SIGW explanation. The Bayes factors for n_1<3/2, n_1=3/2, and n_1>3/2 are lnℬ= 2.8, lnℬ= 2.9, and lnℬ = 1.8, respectively. These values indicate a favorable likelihood for the SIGW scenario. Overall, our findings contribute to a comprehensive understanding of the NANOGrav 15 yrs signal and emphasize the potential significance of SIGWs in explaining the observed gravitational wave data. This research is supported in part by the National Key Research and Development Program of China under Grant No. 2020YFC2201504 and the National Natural Science Foundation of China under Grant No. 12175184. ZY is supported by the National Natural Science Foundation of China under Grant No. 12205015 and the supporting fund for young researchers of Beijing Normal University under Grant No. 28719/310432102. 
§ SIGWS IN ULTRAVIOLET REGIONS K≫ K_P In this appendix, we analyze the behavior of SIGWs in the ultraviolet region. In this region, the energy density of the SIGWs is Ω_GW/(𝒜_ζ)^2 ≃2/3(k/k_p)^n_1+n_2∫^k_p/k_0v^3+n_1dv+g(n_2)(k/k_p)^2n_2 +2/3(k/k_p)^2n_2∫^c_1_k_p/kv^3+n_2dv+2(k/k_p)^2n_2∫^∞_c_2F(v)v^2n_2dv. For n_2≠ -4, we have the expression Ω_GW/(𝒜_ζ)^2≃ 2/3(k/k_p)^n_2-4(1/4+n_1-1/4+n_2)+ (k/k_p)^2n_2(2/3c_1^4+n_2/4+n_2+g(n_2) . . -3/2(2n_2-3)^3[A(n_2)+B(n_2)ln(c_2)+C(n_2)(ln(c_2))^2]). If -4<n_2<0, the last term on the right-hand side of Eq. (<ref>) dominates, thus it becomes Ω_GW(k)∝ k^2n_2. If n_2<-4, the first term on the right-hand side of Eq. (<ref>) dominates, and we have Ω_GW(k)∝ k^n_2-4. For n_2 = -4, we have Ω_GW/(𝒜_ζ)^2≃ 2/3(k/k_p)^n_2-4/4+n_1+ 2/3(k/k_p)^2n_2[ln(c_1)-ln(k_p/k)+3/2g(n_2)] -3/2(k/k_p)^2n_2/(2n_2-3)^3[A(n_2)+B(n_2)ln(c_2)+C(n_2)(ln(c_2))^2]. The second term on the right-hand side of Eq. (<ref>) dominates, so we have Ω_GW(k)∝ k^-8ln k, and this result is consistent with <cit.>. In conclusion, in the ultraviolet region, the behavior of the SIGWs is Ω_GW(k) ∝{[ k^n_2-4, n_2<-4; k^-8ln k, n_2=-4; k^2n_2, -4< n_2<0 ].. [Agazie et al.(2023a)Agazie et al.]NANOGrav:2023gor author author G. Agazie et al. (collaboration NANOGrav), https://doi.org/10.3847/2041-8213/acdac6 journal journal Astrophys. J. Lett. volume 951, pages L8 (year 2023a), https://arxiv.org/abs/2306.16213 arXiv:2306.16213 [astro-ph.HE] NoStop [Agazie et al.(2023b)Agazie et al.]NANOGrav:2023hde author author G. Agazie et al. (collaboration NANOGrav), https://doi.org/10.3847/2041-8213/acda9a journal journal Astrophys. J. Lett. volume 951, pages L9 (year 2023b), https://arxiv.org/abs/2306.16217 arXiv:2306.16217 [astro-ph.HE] NoStop [Zic et al.(2023)Zic et al.]Zic:2023gta author author A. Zic et al., @noop (year 2023), https://arxiv.org/abs/2306.16230 arXiv:2306.16230 [astro-ph.HE] NoStop [Reardon et al.(2023)Reardon et al.]Reardon:2023gzh author author D. J. Reardon et al., https://doi.org/10.3847/2041-8213/acdd02 journal journal Astrophys. J. Lett. volume 951, pages L6 (year 2023), https://arxiv.org/abs/2306.16215 arXiv:2306.16215 [astro-ph.HE] NoStop [Antoniadis et al.(2023a)Antoniadis et al.]Antoniadis:2023lym author author J. Antoniadis et al. https://doi.org/10.1051/0004-6361/202346841 10.1051/0004-6361/202346841 (year 2023a), https://arxiv.org/abs/2306.16224 arXiv:2306.16224 [astro-ph.HE] NoStop [Antoniadis et al.(2023b)Antoniadis et al.]Antoniadis:2023ott author author J. Antoniadis et al., @noop (year 2023b), https://arxiv.org/abs/2306.16214 arXiv:2306.16214 [astro-ph.HE] NoStop [Xu et al.(2023)Xu et al.]Xu:2023wog author author H. Xu et al., https://doi.org/10.1088/1674-4527/acdfa5 journal journal Res. Astron. Astrophys. volume 23, pages 075024 (year 2023), https://arxiv.org/abs/2306.16216 arXiv:2306.16216 [astro-ph.HE] NoStop [Afzal et al.(2023)Afzal et al.]NANOGrav:2023hvm author author A. Afzal et al. (collaboration NANOGrav), https://doi.org/10.3847/2041-8213/acdc91 journal journal Astrophys. J. Lett. volume 951, pages L11 (year 2023), https://arxiv.org/abs/2306.16219 arXiv:2306.16219 [astro-ph.HE] NoStop [Franciolini et al.(2023)Franciolini, Iovino, Vaskonen, and Veermae]Franciolini:2023pbf author author G. Franciolini, author A. Iovino, Junior., author V. 
Vaskonen, and author H. Veermae, @noop (year 2023), https://arxiv.org/abs/2306.17149 arXiv:2306.17149 [astro-ph.CO] NoStop [Liu et al.(2023a)Liu, Chen, and Huang]Liu:2023ymk author author L. Liu, author Z.-C. Chen, and author Q.-G. Huang, @noop (year 2023a), https://arxiv.org/abs/2307.01102 arXiv:2307.01102 [astro-ph.CO] NoStop [Vagnozzi(2023)]Vagnozzi:2023lwo author author S. Vagnozzi, @noop (year 2023), https://arxiv.org/abs/2306.16912 arXiv:2306.16912 [astro-ph.CO] NoStop [Cai et al.(2023)Cai, He, Ma, Yan, and Yuan]Cai:2023dls author author Y.-F. Cai, author X.-C. He, author X. Ma, author S.-F. Yan, and author G.-W. Yuan, @noop (year 2023), https://arxiv.org/abs/2306.17822 arXiv:2306.17822 [gr-qc] NoStop [Wang et al.(2023)Wang, Zhao, Li, and Zhu]Wang:2023ost author author S. Wang, author Z.-C. Zhao, author J.-P. Li, and author Q.-H. Zhu, @noop (year 2023), https://arxiv.org/abs/2307.00572 arXiv:2307.00572 [astro-ph.CO] NoStop [Ananda et al.(2007)Ananda, Clarkson, and Wands]Ananda:2006af author author K. N. Ananda, author C. Clarkson, and author D. Wands, https://doi.org/10.1103/PhysRevD.75.123518 journal journal Phys. Rev. D volume 75, pages 123518 (year 2007), https://arxiv.org/abs/gr-qc/0612013 arXiv:gr-qc/0612013 NoStop [Baumann et al.(2007)Baumann, Steinhardt, Takahashi, and Ichiki]Baumann:2007zm author author D. Baumann, author P. J. Steinhardt, author K. Takahashi, and author K. Ichiki, https://doi.org/10.1103/PhysRevD.76.084019 journal journal Phys. Rev. D volume 76, pages 084019 (year 2007), https://arxiv.org/abs/hep-th/0703290 arXiv:hep-th/0703290 NoStop [Saito and Yokoyama(2009)]Saito:2008jc author author R. Saito and author J. Yokoyama, https://doi.org/10.1103/PhysRevLett.102.161101 journal journal Phys. Rev. Lett. volume 102, pages 161101 (year 2009), note [Erratum: Phys.Rev.Lett. 107, 069901 (2011)], https://arxiv.org/abs/0812.4339 arXiv:0812.4339 [astro-ph] NoStop [Alabidi et al.(2012)Alabidi, Kohri, Sasaki, and Sendouda]Alabidi:2012ex author author L. Alabidi, author K. Kohri, author M. Sasaki, and author Y. Sendouda, https://doi.org/10.1088/1475-7516/2012/09/017 journal journal JCAP volume 09, pages 017, https://arxiv.org/abs/1203.4663 arXiv:1203.4663 [astro-ph.CO] NoStop [Nakama et al.(2017)Nakama, Silk, and Kamionkowski]Nakama:2016gzw author author T. Nakama, author J. Silk, and author M. Kamionkowski, https://doi.org/10.1103/PhysRevD.95.043511 journal journal Phys. Rev. D volume 95, pages 043511 (year 2017), https://arxiv.org/abs/1612.06264 arXiv:1612.06264 [astro-ph.CO] NoStop [Kohri and Terada(2018)]Kohri:2018awv author author K. Kohri and author T. Terada, https://doi.org/10.1103/PhysRevD.97.123532 journal journal Phys. Rev. D volume 97, pages 123532 (year 2018), https://arxiv.org/abs/1804.08577 arXiv:1804.08577 [gr-qc] NoStop [Cheng et al.(2018)Cheng, Lee, and Ng]Cheng:2018yyr author author S.-L. Cheng, author W. Lee, and author K.-W. Ng, https://doi.org/10.1088/1475-7516/2018/07/001 journal journal JCAP volume 07, pages 001, https://arxiv.org/abs/1801.09050 arXiv:1801.09050 [astro-ph.CO] NoStop [Cai et al.(2019a)Cai, Pi, Wang, and Yang]Cai:2019amo author author R.-G. Cai, author S. Pi, author S.-J. Wang, and author X.-Y. Yang, https://doi.org/10.1088/1475-7516/2019/05/013 journal journal JCAP volume 05, pages 013, https://arxiv.org/abs/1901.10152 arXiv:1901.10152 [astro-ph.CO] NoStop [Cai et al.(2019b)Cai, Pi, and Sasaki]Cai:2018dig author author R.-g. Cai, author S. Pi, and author M. Sasaki, https://doi.org/10.1103/PhysRevLett.122.201101 journal journal Phys. 
Rev. Lett. volume 122, pages 201101 (year 2019b), https://arxiv.org/abs/1810.11000 arXiv:1810.11000 [astro-ph.CO] NoStop [Cai et al.(2019c)Cai, Pi, Wang, and Yang]Cai:2019elf author author R.-G. Cai, author S. Pi, author S.-J. Wang, and author X.-Y. Yang, https://doi.org/10.1088/1475-7516/2019/10/059 journal journal JCAP volume 10, pages 059, https://arxiv.org/abs/1907.06372 arXiv:1907.06372 [astro-ph.CO] NoStop [Cai et al.(2020a)Cai, Guo, Liu, Liu, and Yang]Cai:2019bmk author author R.-G. Cai, author Z.-K. Guo, author J. Liu, author L. Liu, and author X.-Y. Yang, https://doi.org/10.1088/1475-7516/2020/06/013 journal journal JCAP volume 06, pages 013, https://arxiv.org/abs/1912.10437 arXiv:1912.10437 [astro-ph.CO] NoStop [Cai et al.(2021a)Cai, Ding, Yang, and Zhou]Cai:2020fnq author author R.-G. Cai, author Y.-C. Ding, author X.-Y. Yang, and author Y.-F. Zhou, https://doi.org/10.1088/1475-7516/2021/03/057 journal journal JCAP volume 03, pages 057, https://arxiv.org/abs/2007.11804 arXiv:2007.11804 [astro-ph.CO] NoStop [Pi and Sasaki(2020)]Pi:2020otn author author S. Pi and author M. Sasaki, https://doi.org/10.1088/1475-7516/2020/09/037 journal journal JCAP volume 09, pages 037, https://arxiv.org/abs/2005.12306 arXiv:2005.12306 [gr-qc] NoStop [Domènech et al.(2020)Domènech, Pi, and Sasaki]Domenech:2020kqm author author G. Domènech, author S. Pi, and author M. Sasaki, https://doi.org/10.1088/1475-7516/2020/08/017 journal journal JCAP volume 08, pages 017, https://arxiv.org/abs/2005.12314 arXiv:2005.12314 [gr-qc] NoStop [Liu et al.(2023b)Liu, Yang, Guo, and Cai]Liu:2021jnw author author L. Liu, author X.-Y. Yang, author Z.-K. Guo, and author R.-G. Cai, https://doi.org/10.1088/1475-7516/2023/01/006 journal journal JCAP volume 01, pages 006, https://arxiv.org/abs/2112.05473 arXiv:2112.05473 [astro-ph.CO] NoStop [Yuan et al.(2020a)Yuan, Chen, and Huang]Yuan:2019fwv author author C. Yuan, author Z.-C. Chen, and author Q.-G. Huang, https://doi.org/10.1103/PhysRevD.101.063018 journal journal Phys. Rev. D volume 101, pages 063018 (year 2020a), https://arxiv.org/abs/1912.00885 arXiv:1912.00885 [astro-ph.CO] NoStop [Chen et al.(2020)Chen, Yuan, and Huang]Chen:2019xse author author Z.-C. Chen, author C. Yuan, and author Q.-G. Huang, https://doi.org/10.1103/PhysRevLett.124.251101 journal journal Phys. Rev. Lett. volume 124, pages 251101 (year 2020), https://arxiv.org/abs/1910.12239 arXiv:1910.12239 [astro-ph.CO] NoStop [Yuan et al.(2020b)Yuan, Chen, and Huang]Yuan:2019wwo author author C. Yuan, author Z.-C. Chen, and author Q.-G. Huang, https://doi.org/10.1103/PhysRevD.101.043019 journal journal Phys. Rev. D volume 101, pages 043019 (year 2020b), https://arxiv.org/abs/1910.09099 arXiv:1910.09099 [astro-ph.CO] NoStop [Yuan et al.(2019)Yuan, Chen, and Huang]Yuan:2019udt author author C. Yuan, author Z.-C. Chen, and author Q.-G. Huang, https://doi.org/10.1103/PhysRevD.100.081301 journal journal Phys. Rev. D volume 100, pages 081301 (year 2019), https://arxiv.org/abs/1906.11549 arXiv:1906.11549 [astro-ph.CO] NoStop [Papanikolaou et al.(2022)Papanikolaou, Tzerefos, Basilakos, and Saridakis]Papanikolaou:2021uhe author author T. Papanikolaou, author C. Tzerefos, author S. Basilakos, and author E. N. Saridakis, https://doi.org/10.1088/1475-7516/2022/10/013 journal journal JCAP volume 10, pages 013, https://arxiv.org/abs/2112.15059 arXiv:2112.15059 [astro-ph.CO] NoStop [Papanikolaou et al.(2023)Papanikolaou, Tzerefos, Basilakos, and Saridakis]Papanikolaou:2022hkg author author T. Papanikolaou, author C. 
Tzerefos, author S. Basilakos, and author E. N. Saridakis, https://doi.org/10.1140/epjc/s10052-022-11157-4 journal journal Eur. Phys. J. C volume 83, pages 31 (year 2023), https://arxiv.org/abs/2205.06094 arXiv:2205.06094 [gr-qc] NoStop [Danzmann(1997)]Danzmann:1997hm author author K. Danzmann, https://doi.org/10.1088/0264-9381/14/6/002 journal journal Class. Quant. Grav. volume 14, pages 1399 (year 1997)NoStop [Amaro-Seoane et al.(2017)Amaro-Seoane et al.]Audley:2017drz author author P. Amaro-Seoane et al. (collaboration LISA), @noop (year 2017), https://arxiv.org/abs/1702.00786 arXiv:1702.00786 [astro-ph.IM] NoStop [Hu and Wu(2017)]Hu:2017mde author author W.-R. Hu and author Y.-L. Wu, https://doi.org/10.1093/nsr/nwx116 journal journal Natl. Sci. Rev. volume 4, pages 685 (year 2017)NoStop [Luo et al.(2016)Luo et al.]Luo:2015ght author author J. Luo et al. (collaboration TianQin), https://doi.org/10.1088/0264-9381/33/3/035010 journal journal Class. Quant. Grav. volume 33, pages 035010 (year 2016), https://arxiv.org/abs/1512.02076 arXiv:1512.02076 [astro-ph.IM] NoStop [Gong et al.(2021)Gong, Luo, and Wang]Gong:2021gvw author author Y. Gong, author J. Luo, and author B. Wang, https://doi.org/10.1038/s41550-021-01480-3 journal journal Nature Astron. volume 5, pages 881 (year 2021), https://arxiv.org/abs/2109.07442 arXiv:2109.07442 [astro-ph.IM] NoStop [Kawamura et al.(2011)Kawamura et al.]Kawamura:2011zz author author S. Kawamura et al., https://doi.org/10.1088/0264-9381/28/9/094011 journal journal Class. Quant. Grav. volume 28, pages 094011 (year 2011)NoStop [Yi and Fei(2023)]Yi:2022ymw author author Z. Yi and author Q. Fei, https://doi.org/10.1140/epjc/s10052-023-11233-3 journal journal Eur. Phys. J. C volume 83, pages 82 (year 2023), https://arxiv.org/abs/2210.03641 arXiv:2210.03641 [astro-ph.CO] NoStop [Akrami et al.(2020)Akrami et al.]Akrami:2018odb author author Y. Akrami et al. (collaboration Planck), https://doi.org/10.1051/0004-6361/201833887 journal journal Astron. Astrophys. volume 641, pages A10 (year 2020), https://arxiv.org/abs/1807.06211 arXiv:1807.06211 [astro-ph.CO] NoStop [Martin et al.(2013)Martin, Motohashi, and Suyama]Martin:2012pe author author J. Martin, author H. Motohashi, and author T. Suyama, https://doi.org/10.1103/PhysRevD.87.023514 journal journal Phys. Rev. D volume 87, pages 023514 (year 2013), https://arxiv.org/abs/1211.0083 arXiv:1211.0083 [astro-ph.CO] NoStop [Motohashi et al.(2015)Motohashi, Starobinsky, and Yokoyama]Motohashi:2014ppa author author H. Motohashi, author A. A. Starobinsky, and author J. Yokoyama, https://doi.org/10.1088/1475-7516/2015/09/018 journal journal JCAP volume 09, pages 018, https://arxiv.org/abs/1411.5021 arXiv:1411.5021 [astro-ph.CO] NoStop [Yi and Gong(2018)]Yi:2017mxs author author Z. Yi and author Y. Gong, https://doi.org/10.1088/1475-7516/2018/03/052 journal journal JCAP volume 03, pages 052, https://arxiv.org/abs/1712.07478 arXiv:1712.07478 [gr-qc] NoStop [Garcia-Bellido and Ruiz Morales(2017)]Garcia-Bellido:2017mdw author author J. Garcia-Bellido and author E. Ruiz Morales, https://doi.org/10.1016/j.dark.2017.09.007 journal journal Phys. Dark Univ. volume 18, pages 47 (year 2017), https://arxiv.org/abs/1702.03901 arXiv:1702.03901 [astro-ph.CO] NoStop [Germani and Prokopec(2017)]Germani:2017bcs author author C. Germani and author T. Prokopec, https://doi.org/10.1016/j.dark.2017.09.001 journal journal Phys. Dark Univ. 
volume 18, pages 6 (year 2017), https://arxiv.org/abs/1706.04226 arXiv:1706.04226 [astro-ph.CO] NoStop [Motohashi and Hu(2017)]Motohashi:2017kbs author author H. Motohashi and author W. Hu, https://doi.org/10.1103/PhysRevD.96.063503 journal journal Phys. Rev. D volume 96, pages 063503 (year 2017), https://arxiv.org/abs/1706.06784 arXiv:1706.06784 [astro-ph.CO] NoStop [Ezquiaga et al.(2018)Ezquiaga, Garcia-Bellido, and Ruiz Morales]Ezquiaga:2017fvi author author J. M. Ezquiaga, author J. Garcia-Bellido, and author E. Ruiz Morales, https://doi.org/10.1016/j.physletb.2017.11.039 journal journal Phys. Lett. B volume 776, pages 345 (year 2018), https://arxiv.org/abs/1705.04861 arXiv:1705.04861 [astro-ph.CO] NoStop [Di and Gong(2018)]Gong:2017qlj author author H. Di and author Y. Gong, https://doi.org/10.1088/1475-7516/2018/07/007 journal journal JCAP volume 07, pages 007, https://arxiv.org/abs/1707.09578 arXiv:1707.09578 [astro-ph.CO] NoStop [Ballesteros et al.(2019)Ballesteros, Beltran Jimenez, and Pieroni]Ballesteros:2018wlw author author G. Ballesteros, author J. Beltran Jimenez, and author M. Pieroni, https://doi.org/10.1088/1475-7516/2019/06/016 journal journal JCAP volume 06, pages 016, https://arxiv.org/abs/1811.03065 arXiv:1811.03065 [astro-ph.CO] NoStop [Dalianis et al.(2019)Dalianis, Kehagias, and Tringas]Dalianis:2018frf author author I. Dalianis, author A. Kehagias, and author G. Tringas, https://doi.org/10.1088/1475-7516/2019/01/037 journal journal JCAP volume 01, pages 037, https://arxiv.org/abs/1805.09483 arXiv:1805.09483 [astro-ph.CO] NoStop [Bezrukov et al.(2018)Bezrukov, Pauly, and Rubio]Bezrukov:2017dyv author author F. Bezrukov, author M. Pauly, and author J. Rubio, https://doi.org/10.1088/1475-7516/2018/02/040 journal journal JCAP volume 02, pages 040, https://arxiv.org/abs/1706.05007 arXiv:1706.05007 [hep-ph] NoStop [Kannike et al.(2017)Kannike, Marzola, Raidal, and Veermäe]Kannike:2017bxn author author K. Kannike, author L. Marzola, author M. Raidal, and author H. Veermäe, https://doi.org/10.1088/1475-7516/2017/09/020 journal journal JCAP volume 09, pages 020, https://arxiv.org/abs/1705.06225 arXiv:1705.06225 [astro-ph.CO] NoStop [Lin et al.(2020)Lin, Gao, Gong, Lu, Zhang, and Zhang]Lin:2020goi author author J. Lin, author Q. Gao, author Y. Gong, author Y. Lu, author C. Zhang, and author F. Zhang, https://doi.org/10.1103/PhysRevD.101.103515 journal journal Phys. Rev. D volume 101, pages 103515 (year 2020), https://arxiv.org/abs/2001.05909 arXiv:2001.05909 [gr-qc] NoStop [Lin et al.(2023)Lin, Gao, Gong, Lu, Wang, and Zhang]Lin:2021vwc author author J. Lin, author S. Gao, author Y. Gong, author Y. Lu, author Z. Wang, and author F. Zhang, https://doi.org/10.1103/PhysRevD.107.043517 journal journal Phys. Rev. D volume 107, pages 043517 (year 2023), https://arxiv.org/abs/2111.01362 arXiv:2111.01362 [gr-qc] NoStop [Gao et al.(2021)Gao, Gong, and Yi]Gao:2020tsa author author Q. Gao, author Y. Gong, and author Z. Yi, https://doi.org/10.1016/j.nuclphysb.2021.115480 journal journal Nucl. Phys. B volume 969, pages 115480 (year 2021), https://arxiv.org/abs/2012.03856 arXiv:2012.03856 [gr-qc] NoStop [Gao(2021)]Gao:2021vxb author author Q. Gao, https://doi.org/10.1007/s11433-021-1708-9 journal journal Sci. China Phys. Mech. Astron. volume 64, pages 280411 (year 2021), https://arxiv.org/abs/2102.07369 arXiv:2102.07369 [gr-qc] NoStop [Yi et al.(2021a)Yi, Gong, Wang, and Zhu]Yi:2020kmq author author Z. Yi, author Y. Gong, author B. Wang, and author Z.-h. 
Zhu, https://doi.org/10.1103/PhysRevD.103.063535 journal journal Phys. Rev. D volume 103, pages 063535 (year 2021a), https://arxiv.org/abs/2007.09957 arXiv:2007.09957 [gr-qc] NoStop [Yi et al.(2021b)Yi, Gao, Gong, and Zhu]Yi:2020cut author author Z. Yi, author Q. Gao, author Y. Gong, and author Z.-h. Zhu, https://doi.org/10.1103/PhysRevD.103.063534 journal journal Phys. Rev. D volume 103, pages 063534 (year 2021b), https://arxiv.org/abs/2011.10606 arXiv:2011.10606 [astro-ph.CO] NoStop [Yi and Zhu(2022)]Yi:2021lxc author author Z. Yi and author Z.-H. Zhu, https://doi.org/10.1088/1475-7516/2022/05/046 journal journal JCAP volume 05number number (05), pages 046, https://arxiv.org/abs/2105.01943 arXiv:2105.01943 [gr-qc] NoStop [Yi(2023)]Yi:2022anu author author Z. Yi, https://doi.org/10.1088/1475-7516/2023/03/048 journal journal JCAP volume 03, pages 048, https://arxiv.org/abs/2206.01039 arXiv:2206.01039 [gr-qc] NoStop [Zhang et al.(2021)Zhang, Gong, Lin, Lu, and Yi]Zhang:2020uek author author F. Zhang, author Y. Gong, author J. Lin, author Y. Lu, and author Z. Yi, https://doi.org/10.1088/1475-7516/2021/04/045 journal journal JCAP volume 04, pages 045, https://arxiv.org/abs/2012.06960 arXiv:2012.06960 [astro-ph.CO] NoStop [Pi et al.(2018)Pi, Zhang, Huang, and Sasaki]Pi:2017gih author author S. Pi, author Y.-l. Zhang, author Q.-G. Huang, and author M. Sasaki, https://doi.org/10.1088/1475-7516/2018/05/042 journal journal JCAP volume 05, pages 042, https://arxiv.org/abs/1712.09896 arXiv:1712.09896 [astro-ph.CO] NoStop [Kamenshchik et al.(2019)Kamenshchik, Tronconi, Vardanyan, and Venturi]Kamenshchik:2018sig author author A. Y. Kamenshchik, author A. Tronconi, author T. Vardanyan, and author G. Venturi, https://doi.org/10.1016/j.physletb.2019.02.036 journal journal Phys. Lett. B volume 791, pages 201 (year 2019), https://arxiv.org/abs/1812.02547 arXiv:1812.02547 [gr-qc] NoStop [Fu et al.(2019)Fu, Wu, and Yu]Fu:2019ttf author author C. Fu, author P. Wu, and author H. Yu, https://doi.org/10.1103/PhysRevD.100.063532 journal journal Phys. Rev. D volume 100, pages 063532 (year 2019), https://arxiv.org/abs/1907.05042 arXiv:1907.05042 [astro-ph.CO] NoStop [Fu et al.(2020)Fu, Wu, and Yu]Fu:2019vqc author author C. Fu, author P. Wu, and author H. Yu, https://doi.org/10.1103/PhysRevD.101.023529 journal journal Phys. Rev. D volume 101, pages 023529 (year 2020), https://arxiv.org/abs/1912.05927 arXiv:1912.05927 [astro-ph.CO] NoStop [Dalianis et al.(2020)Dalianis, Karydas, and Papantonopoulos]Dalianis:2019vit author author I. Dalianis, author S. Karydas, and author E. Papantonopoulos, https://doi.org/10.1088/1475-7516/2020/06/040 journal journal JCAP volume 06, pages 040, https://arxiv.org/abs/1910.00622 arXiv:1910.00622 [astro-ph.CO] NoStop [Gundhi and Steinwachs(2021)]Gundhi:2020zvb author author A. Gundhi and author C. F. Steinwachs, https://doi.org/10.1140/epjc/s10052-021-09225-2 journal journal Eur. Phys. J. C volume 81, pages 460 (year 2021), https://arxiv.org/abs/2011.09485 arXiv:2011.09485 [hep-th] NoStop [Cheong et al.(2021)Cheong, Lee, and Park]Cheong:2019vzl author author D. Y. Cheong, author S. M. Lee, and author S. C. Park, https://doi.org/10.1088/1475-7516/2021/01/032 journal journal JCAP volume 01, pages 032, https://arxiv.org/abs/1912.12032 arXiv:1912.12032 [hep-ph] NoStop [Zhang(2022)]Zhang:2021rqs author author F. Zhang, https://doi.org/10.1103/PhysRevD.105.063539 journal journal Phys. Rev. 
D volume 105, pages 063539 (year 2022), https://arxiv.org/abs/2112.10516 arXiv:2112.10516 [gr-qc] NoStop [Kawai and Kim(2021)]Kawai:2021edk author author S. Kawai and author J. Kim, https://doi.org/10.1103/PhysRevD.104.083545 journal journal Phys. Rev. D volume 104, pages 083545 (year 2021), https://arxiv.org/abs/2108.01340 arXiv:2108.01340 [astro-ph.CO] NoStop [Cai et al.(2021b)Cai, Chen, and Fu]Cai:2021wzd author author R.-G. Cai, author C. Chen, and author C. Fu, https://doi.org/10.1103/PhysRevD.104.083537 journal journal Phys. Rev. D volume 104, pages 083537 (year 2021b), https://arxiv.org/abs/2108.03422 arXiv:2108.03422 [astro-ph.CO] NoStop [Chen et al.(2021)Chen, Koh, and Tumurtushaa]Chen:2021nio author author P. Chen, author S. Koh, and author G. Tumurtushaa, @noop (year 2021), https://arxiv.org/abs/2107.08638 arXiv:2107.08638 [gr-qc] NoStop [Zheng et al.(2022)Zheng, Shi, and Qiu]Zheng:2021vda author author R. Zheng, author J. Shi, and author T. Qiu, https://doi.org/10.1088/1674-1137/ac42bd journal journal Chin. Phys. C volume 46, pages 045103 (year 2022), https://arxiv.org/abs/2106.04303 arXiv:2106.04303 [astro-ph.CO] NoStop [Karam et al.(2023)Karam, Koivunen, Tomberg, Vaskonen, and Veermäe]Karam:2022nym author author A. Karam, author N. Koivunen, author E. Tomberg, author V. Vaskonen, and author H. Veermäe, https://doi.org/10.1088/1475-7516/2023/03/013 journal journal JCAP volume 03, pages 013, https://arxiv.org/abs/2205.13540 arXiv:2205.13540 [astro-ph.CO] NoStop [Ashoorioon et al.(2021)Ashoorioon, Rostami, and Firouzjaee]Ashoorioon:2019xqc author author A. Ashoorioon, author A. Rostami, and author J. T. Firouzjaee, https://doi.org/10.1007/JHEP07(2021)087 journal journal JHEP volume 07, pages 087, https://arxiv.org/abs/1912.13326 arXiv:1912.13326 [astro-ph.CO] NoStop [Cai et al.(2020b)Cai, Pi, and Sasaki]Cai:2019cdl author author R.-G. Cai, author S. Pi, and author M. Sasaki, https://doi.org/10.1103/PhysRevD.102.083528 journal journal Phys. Rev. D volume 102, pages 083528 (year 2020b), https://arxiv.org/abs/1909.13728 arXiv:1909.13728 [astro-ph.CO] NoStop [Xu et al.(2020)Xu, Liu, Gao, and Guo]Xu:2019bdp author author W.-T. Xu, author J. Liu, author T.-J. Gao, and author Z.-K. Guo, https://doi.org/10.1103/PhysRevD.101.023505 journal journal Phys. Rev. D volume 101, pages 023505 (year 2020), https://arxiv.org/abs/1907.05213 arXiv:1907.05213 [astro-ph.CO] NoStop [Lu et al.(2019)Lu, Gong, Yi, and Zhang]Lu:2019sti author author Y. Lu, author Y. Gong, author Z. Yi, and author F. Zhang, https://doi.org/10.1088/1475-7516/2019/12/031 journal journal JCAP volume 12, pages 031, https://arxiv.org/abs/1907.11896 arXiv:1907.11896 [gr-qc] NoStop [Inomata et al.(2017a)Inomata, Kawasaki, Mukaida, Tada, and Yanagida]Inomata:2016rbd author author K. Inomata, author M. Kawasaki, author K. Mukaida, author Y. Tada, and author T. T. Yanagida, https://doi.org/10.1103/PhysRevD.95.123510 journal journal Phys. Rev. D volume 95, pages 123510 (year 2017a), https://arxiv.org/abs/1611.06130 arXiv:1611.06130 [astro-ph.CO] NoStop [Espinosa et al.(2018)Espinosa, Racco, and Riotto]Espinosa:2018eve author author J. R. Espinosa, author D. Racco, and author A. Riotto, https://doi.org/10.1088/1475-7516/2018/09/012 journal journal JCAP volume 09, pages 012, https://arxiv.org/abs/1804.07732 arXiv:1804.07732 [hep-ph] NoStop [Namba et al.(2016)Namba, Peloso, Shiraishi, Sorbo, and Unal]Namba:2015gja author author R. Namba, author M. Peloso, author M. Shiraishi, author L. Sorbo, and author C. 
Unal, https://doi.org/10.1088/1475-7516/2016/01/041 journal journal JCAP volume 01, pages 041, https://arxiv.org/abs/1509.07521 arXiv:1509.07521 [astro-ph.CO] NoStop [Nakama and Suyama(2015)]Nakama:2015nea author author T. Nakama and author T. Suyama, https://doi.org/10.1103/PhysRevD.92.121304 journal journal Phys. Rev. D volume 92, pages 121304 (year 2015), https://arxiv.org/abs/1506.05228 arXiv:1506.05228 [gr-qc] NoStop [Nakama and Suyama(2016)]Nakama:2016enz author author T. Nakama and author T. Suyama, https://doi.org/10.1103/PhysRevD.94.043507 journal journal Phys. Rev. D volume 94, pages 043507 (year 2016), https://arxiv.org/abs/1605.04482 arXiv:1605.04482 [gr-qc] NoStop [Bartolo et al.(2019a)Bartolo, De Luca, Franciolini, Lewis, Peloso, and Riotto]Bartolo:2018evs author author N. Bartolo, author V. De Luca, author G. Franciolini, author A. Lewis, author M. Peloso, and author A. Riotto, https://doi.org/10.1103/PhysRevLett.122.211301 journal journal Phys. Rev. Lett. volume 122, pages 211301 (year 2019a), https://arxiv.org/abs/1810.12218 arXiv:1810.12218 [astro-ph.CO] NoStop [Byrnes et al.(2019)Byrnes, Cole, and Patil]Byrnes:2018txb author author C. T. Byrnes, author P. S. Cole, and author S. P. Patil, https://doi.org/10.1088/1475-7516/2019/06/028 journal journal JCAP volume 06, pages 028, https://arxiv.org/abs/1811.11158 arXiv:1811.11158 [astro-ph.CO] NoStop [Orlofsky et al.(2017)Orlofsky, Pierce, and Wells]Orlofsky:2016vbd author author N. Orlofsky, author A. Pierce, and author J. D. Wells, https://doi.org/10.1103/PhysRevD.95.063518 journal journal Phys. Rev. D volume 95, pages 063518 (year 2017), https://arxiv.org/abs/1612.05279 arXiv:1612.05279 [astro-ph.CO] NoStop [Garcia-Bellido et al.(2017)Garcia-Bellido, Peloso, and Unal]Garcia-Bellido:2017aan author author J. Garcia-Bellido, author M. Peloso, and author C. Unal, https://doi.org/10.1088/1475-7516/2017/09/013 journal journal JCAP volume 09, pages 013, https://arxiv.org/abs/1707.02441 arXiv:1707.02441 [astro-ph.CO] NoStop [Bartolo et al.(2019b)Bartolo, De Luca, Franciolini, Peloso, Racco, and Riotto]Bartolo:2018rku author author N. Bartolo, author V. De Luca, author G. Franciolini, author M. Peloso, author D. Racco, and author A. Riotto, https://doi.org/10.1103/PhysRevD.99.103521 journal journal Phys. Rev. D volume 99, pages 103521 (year 2019b), https://arxiv.org/abs/1810.12224 arXiv:1810.12224 [astro-ph.CO] NoStop [Unal(2019)]Unal:2018yaa author author C. Unal, https://doi.org/10.1103/PhysRevD.99.041301 journal journal Phys. Rev. D volume 99, pages 041301 (year 2019), https://arxiv.org/abs/1811.09151 arXiv:1811.09151 [astro-ph.CO] NoStop [Inomata and Nakama(2019)]Inomata:2018epa author author K. Inomata and author T. Nakama, https://doi.org/10.1103/PhysRevD.99.043511 journal journal Phys. Rev. D volume 99, pages 043511 (year 2019), https://arxiv.org/abs/1812.00674 arXiv:1812.00674 [astro-ph.CO] NoStop [Germani and Musco(2019)]Germani:2018jgr author author C. Germani and author I. Musco, https://doi.org/10.1103/PhysRevLett.122.141302 journal journal Phys. Rev. Lett. volume 122, pages 141302 (year 2019), https://arxiv.org/abs/1805.04087 arXiv:1805.04087 [astro-ph.CO] NoStop [De Luca et al.(2019)De Luca, Desjacques, Franciolini, and Riotto]DeLuca:2019llr author author V. De Luca, author V. Desjacques, author G. Franciolini, and author A. 
Riotto, https://doi.org/10.1088/1475-7516/2019/09/059 journal journal JCAP volume 09, pages 059, https://arxiv.org/abs/1905.13459 arXiv:1905.13459 [astro-ph.CO] NoStop [Bullock and Primack(1997)]Bullock:1996at author author J. S. Bullock and author J. R. Primack, https://doi.org/10.1103/PhysRevD.55.7423 journal journal Phys. Rev. D volume 55, pages 7423 (year 1997), https://arxiv.org/abs/astro-ph/9611106 arXiv:astro-ph/9611106 NoStop [Saito et al.(2008)Saito, Yokoyama, and Nagata]Saito:2008em author author R. Saito, author J. Yokoyama, and author R. Nagata, https://doi.org/10.1088/1475-7516/2008/06/024 journal journal JCAP volume 06, pages 024, https://arxiv.org/abs/0804.3470 arXiv:0804.3470 [astro-ph] NoStop [Kadota et al.(2015)Kadota, Kawasaki, and Saikawa]Kadota:2015dza author author K. Kadota, author M. Kawasaki, and author K. Saikawa, https://doi.org/10.1088/1475-7516/2015/10/041 journal journal JCAP volume 10, pages 041, https://arxiv.org/abs/1503.06998 arXiv:1503.06998 [hep-ph] NoStop [Ballesteros and Taoso(2018)]Ballesteros:2017fsr author author G. Ballesteros and author M. Taoso, https://doi.org/10.1103/PhysRevD.97.023501 journal journal Phys. Rev. D volume 97, pages 023501 (year 2018), https://arxiv.org/abs/1709.05565 arXiv:1709.05565 [hep-ph] NoStop [Cicoli et al.(2018)Cicoli, Diaz, and Pedro]Cicoli:2018asa author author M. Cicoli, author V. A. Diaz, and author F. G. Pedro, https://doi.org/10.1088/1475-7516/2018/06/034 journal journal JCAP volume 06, pages 034, https://arxiv.org/abs/1803.02837 arXiv:1803.02837 [hep-th] NoStop [Cheng et al.(2019)Cheng, Lee, and Ng]Cheng:2018qof author author S.-L. Cheng, author W. Lee, and author K.-W. Ng, https://doi.org/10.1103/PhysRevD.99.063524 journal journal Phys. Rev. D volume 99, pages 063524 (year 2019), https://arxiv.org/abs/1811.10108 arXiv:1811.10108 [astro-ph.CO] NoStop [Bhaumik and Jain(2020)]Bhaumik:2019tvl author author N. Bhaumik and author R. K. Jain, https://doi.org/10.1088/1475-7516/2020/01/037 journal journal JCAP volume 01, pages 037, https://arxiv.org/abs/1907.04125 arXiv:1907.04125 [astro-ph.CO] NoStop [Tada and Yokoyama(2019)]Tada:2019amh author author Y. Tada and author S. Yokoyama, https://doi.org/10.1103/PhysRevD.100.023537 journal journal Phys. Rev. D volume 100, pages 023537 (year 2019), https://arxiv.org/abs/1904.10298 arXiv:1904.10298 [astro-ph.CO] NoStop [Inomata et al.(2017b)Inomata, Kawasaki, Mukaida, Tada, and Yanagida]Inomata:2017okj author author K. Inomata, author M. Kawasaki, author K. Mukaida, author Y. Tada, and author T. T. Yanagida, https://doi.org/10.1103/PhysRevD.96.043504 journal journal Phys. Rev. D volume 96, pages 043504 (year 2017b), https://arxiv.org/abs/1701.02544 arXiv:1701.02544 [astro-ph.CO] NoStop [Ashton et al.(2019)Ashton et al.]Ashton:2018jfp author author G. Ashton et al., https://doi.org/10.3847/1538-4365/ab06fc journal journal Astrophys. J. Suppl. volume 241, pages 27 (year 2019), https://arxiv.org/abs/1811.02042 arXiv:1811.02042 [astro-ph.IM] NoStop [Skilling(2004)]NestedSampling author author J. Skilling, https://doi.org/10.1063/1.1835238 journal journal AIP Conf. Proc. volume 735, pages 395 (year 2004)NoStop [Moore and Vecchio(2021)]Moore:2021ibq author author C. J. Moore and author A. Vecchio, https://doi.org/10.1038/s41550-021-01489-8 journal journal Nature Astron. volume 5, pages 1268 (year 2021), https://arxiv.org/abs/2104.15130 arXiv:2104.15130 [astro-ph.CO] NoStop
http://arxiv.org/abs/2307.01259v1
20230703180001
Mapping the imprints of stellar and AGN feedback in the circumgalactic medium with X-ray microcalorimeters
[ "Gerrit Schellenberger", "Ákos Bogdán", "John A. ZuHone", "Benjamin D. Oppenheimer", "Nhut Truong", "Ildar Khabibullin", "Fred Jennings", "Annalisa Pillepich", "Joseph Burchett", "Christopher Carr", "Priyanka Chakraborty", "Robert Crain", "William Forman", "Christine Jones", "Caroline A. Kilbourne", "Ralph P. Kraft", "Maxim Markevitch", "Daisuke Nagai", "Dylan Nelson", "Anna Ogorzalek", "Scott Randall", "Arnab Sarkar", "Joop Schaye", "Sylvain Veilleux", "Mark Vogelsberger", "Q. Daniel Wang", "Irina Zhuravleva" ]
astro-ph.GA
[ "astro-ph.GA" ]
Gerrit Schellenberger gerrit.schellenberger@cfa.harvard.edu 0000-0002-4962-0740]Gerrit Schellenberger Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA 0000-0003-0573-7733]Ákos Bogdán Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA 0000-0003-3175-2347]John A. ZuHone Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA 0000-0002-3391-2116]Benjamin D. Oppenheimer University of Colorado, Boulder, 2000 Colorado Avenue, Boulder, CO 80305, USA 0000-0003-4983-0462]Nhut Truong NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771 Center for Space Sciences and Technology, University of Maryland, Baltimore County, MD 21250, USA Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany 0000-0003-3701-5882]Ildar Khabibullin Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstr.1, 81679 München, Germany Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Straße 1, 85748 Garching, Germany Space Research Institute (IKI), Profsoyuznaya 84/32, Moscow 117997, Russia 0009-0000-0152-9983]Fred Jennings Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ, United Kingdom 0000-0003-1065-9274]Annalisa Pillepich Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany 0000-0002-1979-2197]Joseph Burchett Department of Astronomy, New Mexico State University, Las Cruces, NM 88001, USA 0000-0002-5840-0424]Christopher Carr Department of Astronomy, Columbia University, 550 West 120th Street, New York, NY, 10027, USA 0000-0002-4469-2518]Priyanka Chakraborty Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA 0000-0001-6258-0344]Robert Crain Astrophysics Research Institute, Liverpool John Moores University, 146 Brownlow Hill, Liverpool L3 5RF, UK 0000-0002-9478-1682]William Forman Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA 0000-0003-2206-4243]Christine Jones Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA 0000-0001-9464-4103]Caroline A. Kilbourne NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771 0000-0002-0765-0511]Ralph P. Kraft Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA 0000-0003-0144-4052]Maxim Markevitch NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771 0000-0002-6766-5942]Daisuke Nagai Department of Physics, Yale University, New Haven, CT 06520, USA 0000-0001-8421-5890]Dylan Nelson Universität Heidelberg, Zentrum für Astronomie, Institut für theoretische Astrophysik, Albert-Ueberle-Str. 
2, 69120 Heidelberg, Germany 0000-0003-4504-2557]Anna Ogorzalek NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771 Department of Astronomy, University of Maryland, College Park MD, USA 0000-0002-3984-4337]Scott Randall Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA 0000-0002-5222-1337]Arnab Sarkar Department of Physics, Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA 0000-0002-0668-5560]Joop Schaye Leiden Observatory, Leiden University, Niels Bohrweg 2, NL-2333 CA Leiden, the Netherlands 0000-0002-3158-6820]Sylvain Veilleux Department of Astronomy and Joint Space-Science Institute, University of Maryland, College Park, MD 20742, USA 0000-0001-8593-7692]Mark Vogelsberger Department of Physics, Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA 0000-0002-9279-4041]Q. Daniel Wang Astronomy Department, University of Massachusetts, Amherst, MA 01003, USA 0000-0001-7630-8085]Irina Zhuravleva Department of Astronomy and Astrophysics, University of Chicago, Chicago IL 60637, USA The Astro2020 Decadal Survey has identified the mapping of the circumgalactic medium (CGM, gaseous plasma around galaxies) as a key objective. We explore the prospects for characterizing the CGM in and around nearby galaxy halos with future large grasp X-ray microcalorimeters. We create realistic mock observations from hydrodynamical simulations (EAGLE, IllustrisTNG, and Simba) that demonstrate a wide range of potential measurements, which will address the open questions in galaxy formation and evolution. By including all background and foreground components in our mock observations, we show why it is impossible to perform these measurements with current instruments, such as X-ray CCDs, and only microcalorimeters will allow us to distinguish the faint CGM emission from the bright Milky Way (MW) foreground emission lines. We find that individual halos of MW mass can, on average, be traced out to large radii, around R_500, and for larger galaxies even out to R_200, using the O VII, O VIII, or Fe XVII emission lines. Furthermore, we show that emission line ratios for individual halos can reveal the radial temperature structure. Substructure measurements show that it will be possible to relate azimuthal variations to the feedback mode of the galaxy. We demonstrate the ability to construct temperature, velocity, and abundance ratio maps from spectral fitting for individual galaxy halos, which reveal rotation features, AGN outbursts, and enrichment. § INTRODUCTION Structure formation models predict that galaxies reside in massive dark matter halos and are embedded in large-scale gaseous halos, the circumgalactic medium (CGM). The CGM plays a crucial role in the evolution of galaxies as gas flows through the CGM and regulates galaxy growth over cosmic time. To establish a comprehensive picture of the formation and evolution of galaxies, it is essential to probe the interplay between the stellar body, the supermassive black hole (SMBH), and the large-scale CGM. While the stellar component and SMBHs of galaxies have been the subject of a wide range of studies over the past decades, our knowledge about the CGM remains very limited, which poses major gaps in our understanding of galaxy formation and evolution. 
The importance of the CGM is highlighted by the fact that it plays a role on a wide range of spatial scales from small-scale processes (e.g. galactic winds driven by supernovae) to the largest scales of galaxies (e.g. accretion of gas from large scale structure filaments). Theoretical and observational studies hint that the CGM has a complex and multi-phase structure. In Milky Way-type and more massive galaxies, the dominant phases of the CGM have characteristic temperatures of millions of degrees and are predominantly observable at X-ray wavelengths (e.g., ). Indeed, in the well-established picture of structure formation, dark matter halos accrete baryonic matter, which is thermalized in an accretion shock (). The characteristic temperature is determined by the gravitational potential of the galaxies and reaches X-ray temperatures (≳10^6 K) for Milky Way-type galaxies. Since the cooling time of the hot gas is much longer than the dynamical time, the CGM is expected to be quasi-static and should be observable around galaxies in the present-day universe. Because the importance of studying the X-ray emitting large-scale gaseous component of galaxies was recognized decades ago, all major X-ray observatories attempted to explore this component. Studies of elliptical (or quiescent) galaxies achieved significant success in the early days of X-ray astronomy (). Observations of massive ellipticals with the Einstein and ROSAT observatories revealed the presence of gaseous X-ray halos that extend beyond the optical extent of galaxies <cit.>. These observations not only revealed the ubiquity of the gaseous halos, but allowed characterizations of the physical properties of the X-ray gas, and measurements of its mass within various radii. Follow-up observations with XMM-Newton and Chandra played a major role in further probing the gaseous emission around a larger sample of nearby massive elliptical galaxies <cit.>. Thanks to the sub-arcsecond angular resolution of Chandra, it became possible to clearly resolve and separate point-like sources, such as low-mass X-ray binaries and AGN, from the truly diffuse gaseous emission <cit.>. This allowed more detailed studies of the X-ray-emitting interstellar medium and the larger-scale CGM. However, a major hindrance to studying elliptical galaxies is that the dominant fraction of galaxies explored by Chandra and XMM-Newton reside in rich environments, such as galaxy groups or galaxy clusters. The presence of these group and cluster atmospheres makes it virtually impossible to differentiate the CGM component of the galaxy from the large-scale group or cluster emission. Because the group or cluster atmosphere will dominate the overall emission beyond the optical radius, it becomes impossible to separate these components from each other and determine their relative contributions. Additionally, the gaseous component around quiescent galaxies is likely a mix of the infalling primordial gas onto the dark matter halos and the ejected gas from evolved stars, which was shock heated to the kinetic temperature of the galaxy. Due to quenching mechanisms, most quiescent galaxies reside in galaxy groups and clusters, which are not ideal targets to probe the primordial gas. As opposed to their quiescent counterparts, star-forming galaxies provide the ideal framework to probe the gas originating from the primordial infall. The main advantage of disk (or star-forming) galaxies is their environment. 
While quiescent galaxies form through mergers, which happen at a higher likelihood in rich environments, due to the higher galaxy density, a substantial fraction of star-forming galaxies will preferentially reside in relatively isolated environments. The CGM around disk galaxies was also probed in a wide range of observations. Using ROSAT observations, the X-ray gas around disk galaxies remained undetected <cit.>. However, this posed a challenge to galaxy formation models that predicted bright enough gaseous halos to be observed around nearby disk galaxies (). Revising these models and involving efficient feedback from AGN drastically decreased the predicted X-ray luminosity of the X-ray CGM, implying that non-detection by ROSAT was consistent with theoretical models. More sensitive observations with Chandra and XMM-Newton led to the detection of the CGM around isolated massive disk galaxies (e.g., ). Most notably, the CGM of two massive galaxies, NGC 1961 and NGC 6753, was detected and characterized out to about 50-60 kpc radius, which corresponds to about 0.15r_ 200, where r_ 200 is the radius within which the density is 200 times the critical density of the Universe, and we consider it to be the virial radius of the galaxies. These studies not only detected the gas, but also measured the basic properties of the CGM, such as the temperature and abundance, and established simple thermodynamic profiles <cit.>. Following these detections, the CGM of other disk galaxies was also explored albeit to a much lesser extent due to the lower signal-to-noise ratios of these galaxies <cit.>. Despite these successes, however, it is important to realize that all these detections explored massive galaxies (few ×10^11 M_⊙ in stellar mass), while the CGM emission around Milky Way-type galaxies remains undetected. The main challenge in detecting the extended CGM of external galaxies is due to the hot gas residing in our own Milky Way. Specifically, our Solar system is surrounded by the local hot bubble (LHB), which has a characteristic temperature of kT≈0.1keV <cit.>. On the larger scales, the Milky Way also hosts an extended hot component with a characteristic temperature of ∼0.2keV <cit.>. These gas temperatures are comparable to those expected from other external galaxies and, since both the Milky Way and the other galaxies exhibit the same thermal emission component, the emission signal from the low-surface brightness CGM of external galaxies can be orders of magnitude below the Milky Way foreground emission. Because the X-ray emission from the Milky Way foreground is present in every sightline and this component cannot be differentiated at CCD resolution (∼50-100 eV), its contribution cannot be easily removed from the CGM component of other galaxies. A direct consequence of this is that even future telescopes with larger collecting areas with CCD-like instruments will be limited by the foreground emission and thus cannot probe the large-scale CGM. To achieve a transformative result in exploring the extended CGM, we must utilize high spectral resolution spectroscopy to spectroscopically differentiate the emission lines from the Milky Way foreground from those emitted by the external galaxies. We emphasize that mapping the CGM is essential in order to learn about its distribution, enrichment, and thermodynamic structure, which is not feasible with dispersive (grating) spectroscopy. Recent advances in technology allow us to take this transformative step. 
The development of high spectral resolution X-ray Integral Field Units (IFUs) provides the much-needed edge over traditional CCD-like instruments. Notably, X-ray IFUs can simultaneously provide traditional images with good spatial resolution and very high, 1-2 eV, spectral resolution across the array. In this work, we explore how utilizing the new technology of X-ray IFUs can drastically change our understanding of galaxy formation. We assume capabilities similar to the Line Emission Mapper (LEM) Probe mission concept <cit.>. The LEM concept is designed to have a large field-of-view (≳900arcmin^2), a state-of-the-art X-ray microcalorimeter with 1 eV spectral resolution in the central array, and 2 eV spectral resolution across the field-of-view. The single-instrument telescope is planned to have a 2500cm^2 collecting area at 1keV energy. Overall, the spectral resolution of LEM surpasses that of CCD-like instruments by 50-100 times, allowing us to spectrally separate the Milky Way foreground lines and the emission lines from the CGM of external galaxies. Modern cosmological hydrodynamical simulations are able to model the detailed distribution of the hot CGM (e.g., ). The divergence among these simulations is chiefly driven by differences in the modelling of baryonic physics, most notably the modelling of feedback processes, such as those driven by star formation activity or accretion onto SMBHs (see e.g. for a recent review). Intrinsic limitations due to the finite resolution of these simulations require sub-resolution models, which add uncertainties to the results. Sub-resolution feedback effects are implemented to mimic the net effects of AGN feedback, but depend on the numerical implementation. The simulations of the IllustrisTNG project, for example, switch from thermal to kinetic feedback mode, depending on the chosen threshold of the AGN accretion rate (). Other simulations, such as EAGLE, use only the thermal AGN feedback channel to reheat the gas (). Thus, the simulations differ significantly in their predictions of CGM properties (e.g., X-ray line emission profiles, see , Truong et al. 2023). Therefore, probing the hot phases of the CGM is essential to understand how feedback processes operate on galactic scales, and future observations will constrain models through the comparison of observations with simulations. In this work, we utilize three modern hydrodynamical structure formation simulations, IllustrisTNG, EAGLE, and Simba, to demonstrate that LEM will provide an unprecedented view into the formation and evolution of galaxies. In section <ref> we describe the hydrodynamical simulations and the setup of the mock observations. We show the surface brightness profiles of 4 bright emission lines for galaxy subsamples selected by halo mass and star formation rate in section <ref>. We also quantify the level of substructure and present 2D maps of the temperature and element abundance ratios inferred from a spectral analysis. Section <ref> discusses the results.

§ METHODS

Here we describe our methodology for the analysis of microcalorimeter mock observations.

§.§ An X-ray Probe microcalorimeter with large grasp

There are currently several missions and mission concepts with a microcalorimeter, such as Athena X-IFU (), the X-Ray Imaging and Spectroscopy Mission (XRISM, ), the Hot Universe Baryon Surveyor (HUBS, ), and the Line Emission Mapper (LEM, ).
However, the CGM can be best probed by a large area, large field of view instrument (i.e., large grasp, which is the product of the field of view and collecting area), with the characteristics of LEM, a probe concept designed for the X-ray detection of the faint emission lines of the CGM with unprecedented spectral resolution. The spatial resolution (half-power diameter, HPD) of 10 arcsec will be utilized through a dithering pattern, similar to Chandra, and will allow the separation and exclusion of bright point sources to resolve the structure of the CGM. We assume a collecting area of 2500cm^2 at 1keV (1600cm^2 at 0.5keV) and a bandpass from 0.2 to 2 keV, which is ideal to map nearby galaxies (see ). Through its large grasp and 1-2 eV resolution, it is far superior to any other high spectral resolution instrument for surveying the CGM emission. We emphasize that the spectral resolution will at least provide the ability to disentangle the redshifted line emission, namely C VI, O VII, O VIII, and Fe XVII, from the bright foreground line emission of our Milky Way, from which it is shifted by a few eV, depending on the redshift. Although the LEM concept includes a high-resolution (1 eV) central array, we only consider the energy resolution of the main array (2 eV), since the angular sizes of the objects considered here exceed the FoV of the central array and require the full field of view. A galaxy with a stellar mass similar to that of our Milky Way reaches about R_500=165kpc, which corresponds to 13arcmin at z=0.01. Therefore, we conservatively only consider a 2 eV resolution throughout the field of view. For larger and brighter galaxies with about twice the Milky Way stellar mass at a slightly larger distance (z=0.035), we are able to trace the CGM even beyond R_500.

§.§ Mock observations of hydro-dynamical cosmological simulations

§.§.§ Simulations

Our galaxy selection from the cosmological simulations follows clear criteria. We focus here on three state-of-the-art simulations, TNG100 of the IllustrisTNG project (, with ∼110.7cMpc box size (comoving coordinates), and a baryon mass m_ baryon = 1.4e6M_⊙), EAGLE-Ref-L100N1504 (EAGLE, , with ∼100Mpc box size, and a baryon mass m_ baryon = 1.81e6M_⊙), and Simba 100Mpc (Simba, , with ∼147Mpc box size, and a baryon mass m_ baryon = 1.82e7M_⊙), which all have similar volumes and cosmological parameters, are tuned to reproduce observed stellar properties, but vary in hydrodynamic codes and modules for galaxy formation, including AGN feedback prescriptions (e.g., ). All three simulations trace the evolution of gas, stellar, dark matter, and SMBH particles over a large redshift range. Gas heating and cooling processes are included, as well as star formation and stellar evolution, and various feedback processes (AGN, SNe, stellar winds). The implementation of the AGN feedback is significantly different in the three simulations, emphasizing the need for a detailed comparison of observables that can trace the feedback mechanisms. In order to understand the impact of stellar-driven and AGN feedback on the CGM, and how it can be traced with a large grasp microcalorimeter, we subdivide the simulated galaxies into halo mass bins based on M_200 (the total mass within the radius inside which the mean density is 200 times the critical density of the Universe at the redshift of the galaxy).

§.§.§ Samples

We select galaxies with M_200 from 10^11.5 to 10^12 M_⊙ as our low mass sample (Fig. <ref> and Tab. <ref>).
In this mass range, the stellar mass fraction increases with halo mass, which means that more and more gas is converted into stars, indicating strong stellar feedback (see e.g., and references therein). Galaxies in the low mass sample are characterized by lower central black hole masses (around 7e7M_⊙ in TNG100), and a high specific star formation rate (sSFR∼6e-11yr^-1 for instance in the case of EAGLE), which are not representative of the typical galaxy across the three samples. In simulations and observations, a peak of the stellar mass fraction of non-satellite galaxies is observed around the Milky Way halo mass and stellar mass fraction (), and the M31 regime (), which we also indicate in Fig. <ref>. <cit.> characterize the multiphase CGM based on TNG50 of IllustrisTNG, and find MW/M31 analogs in the simulation. This transition is poorly understood and only a detailed analysis of the CGM will reveal the driving mechanism of this feedback mode change. Galaxies in the mass range from M_200 = 10^12 to 10^12.5 M_⊙ form our medium mass sample (Fig. <ref> and Tab. <ref>), and are generally not clearly dominated by either one feedback mechanism. The impact of both stellar and AGN feedback on the CGM should be visible in these galaxies, which have significantly more massive central AGNs (on average three times as massive as in the low mass sample), and a three times lower specific star formation rate. Our high mass sample consists of halo masses from M_200 = 10^12.5 to 10^13 M_⊙ (Fig. <ref> and Tab. <ref>), and is generally dominated by AGN feedback, while star formation and stellar feedback become less and less important. The central black hole masses are on average a factor of 2.3 larger than in the medium mass sample, while the specific star formation rate decreases by a factor of 2. From each simulation, TNG100, EAGLE, and Simba, we select 40 galaxies for each of the three samples. We also exclude any galaxy that is a non-central galaxy of its dark matter halo, e.g., a member galaxy of a cluster that evolves very differently due to an early onset of quenching by the surrounding ICM. The galaxies have been selected to be uniformly distributed in log M_200. Our samples are summarized in Table <ref>, and we describe all the individual galaxies in more detail in Table <ref>, and show the galaxy properties, including halo and stellar mass, star formation rate, and black hole mass, in Fig. <ref>.

Table <ref>: Galaxy halo samples for tracing and mapping the CGM emission.

Simulation                        IllustrisTNG (TNG100)      EAGLE (Ref-L0100N1504)     Simba (100 Mpc/h)
Sample                            low     medium  high       low     medium  high       low     medium  high
Sample size                       40      40      40         40      40      40         40      40      40
log_10 M_200 [M_⊙]     median     11.79   12.29   12.73      11.72   12.26   12.76      11.76   12.21   12.73
                       25%        11.65   12.17   12.63      11.63   12.15   12.64      11.63   12.15   12.58
                       75%        11.92   12.39   12.89      11.84   12.35   12.88      11.87   12.35   12.87
log_10 M_⋆ [M_⊙]       median     10.15   10.77   11.06      9.91    10.52   10.93      10.07   10.72   10.95
                       25%        9.94    10.71   10.95      9.64    10.41   10.80      9.96    10.60   10.74
                       75%        10.32   10.85   11.11      10.09   10.65   10.97      10.21   10.84   11.05
log_10 SFR [M_⊙ yr^-1] median     -0.01   -3.95   -1.36      -0.32   0.03    0.15       0.22    0.61    0.33
                       25%        -0.34   -5.00   -5.00      -0.51   -0.61   -0.93      0.11    0.34    0.18
                       75%        0.13    -0.71   -0.41      -0.12   0.32    0.50       0.33    0.78    0.66
log_10 M_BH [M_⊙]      median     7.73    8.33    8.68       6.33    7.46    8.14       7.38    7.94    8.43
                       25%        7.50    8.22    8.53       6.09    7.17    7.96       7.18    7.62    8.25
                       75%        7.99    8.43    8.75       6.67    7.69    8.31       7.57    8.07    8.62
R_500 [kpc]            median     123     175     242        116     170     250        117     166     238
                       25%        110     163     222        105     158     225        107     154     218
                       75%        135     193     274        125     184     272        126     183     264
R_500 [arcmin]         median     9.96    14.21   5.78       9.42    13.84   5.97       9.54    13.47   5.68

Notes: For each quantity (the halo mass M_200, the stellar mass within 30 kpc M_⋆, the halo star formation rate SFR, the SMBH mass M_BH, and the characteristic radius R_500) we show the median value of the sample, as well as the 25th and 75th percentiles. To calculate R_500 in arcmin, we place the high mass samples at z=0.035, while the low and medium mass samples are at z=0.01.

§.§.§ Mock X-ray observations

We produce mock observations from the hydro simulations using pyXSIM[<http://hea-www.cfa.harvard.edu/jzuhone/pyxsim/>] (see ), which creates large photon samples from the 3D simulations. We create mock observations for a high spectral resolution imaging instrument assuming a Gaussian PSF with 10 arcsec FWHM using SOXS[<https://hea-www.cfa.harvard.edu/soxs/>]. We use a detector size of 128× 128 pixels with 15 arcsec per pixel, yielding a FoV of 32×32 arcmin. The spectral bandpass covered is 0.2 to 2 keV, with 2 eV FWHM resolution. The X-ray emission is modelled from each emitting, non-star-forming gas cell with T > 3e5K and ρ < 5e-25cm^-3 (a value of the density close to the star formation density threshold in the TNG simulations). In each galaxy, there is also a small set of isolated gas cells which are abnormally bright in X-rays; these typically have extreme values of cooling time and/or thermal pressure, and on this basis are excluded from the analysis to improve visualizations, but we do not find that leaving them in changes any of our conclusions (see ZuHone et al. submitted, for more details). The plasma emission of the hot gas surrounding the galaxy is based on the Cloudy emission code (), and includes the effect of resonant scattering from the CXB, which enhances the O VIIr line (). An extensive description is provided in <cit.>. In contrast to other emission models, such as APEC/AtomDB (), we utilize a density-dependent model that is sensitive to the photo-ionization state of the gas at densities ≲1e-4cm^-3 (see, e.g., ). We updated the code with respect to <cit.> to include the latest version of Cloudy, and ensured that the intrinsic resolution matches the sub-eV requirements of a microcalorimeter. The various metal species are independent of each other, allowing a consistent modelling of gas with arbitrary metal abundance patterns. However, we neglect the effects of resonant scattering from the hot ISM gas in the galaxy ().
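As a concrete illustration of the gas-cell selection described above, the minimal NumPy sketch below applies the quoted temperature and density cuts to arrays of cell properties and computes a crude emission-measure-weighted temperature of the selected gas. The array names, the star-formation-rate mask, and the weighting scheme are illustrative assumptions rather than the actual pyXSIM interface.

```python
import numpy as np

K_B_KEV = 8.617333e-8  # Boltzmann constant in keV/K

def select_xray_cells(temperature_K, density, sfr):
    """Boolean mask of gas cells that contribute to the mock X-ray emission:
    hot (T > 3e5 K), below the quoted density threshold (same units as in
    the text), and not actively star forming."""
    return (temperature_K > 3.0e5) & (density < 5.0e-25) & (sfr == 0.0)

def em_weighted_temperature_keV(temperature_K, n_e, n_H, volume, mask):
    """Crude emission-measure-weighted temperature of the selected cells,
    a simplified stand-in for the emission-weighted maps discussed later."""
    em = n_e[mask] * n_H[mask] * volume[mask]  # emission measure per cell
    return K_B_KEV * np.average(temperature_K[mask], weights=em)
```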
We place the low and medium mass samples at a nearby redshift of z=0.01, which separates the forbidden O VII line, the C VI line, the O VIII line, and the 725 and 739 eV Fe XVII lines from the MW foreground. However, the O VII resonance line is blended with the MW foreground forbidden line. The size of these halos (R_500) fits well within the assumed field of view of the detector. However, since galaxies in the high mass sample are too large to fit in the 32x32 arcmin field of view, we chose z=0.035 for this sample. This allows us to include the resonance line of O VII, but blends the 739 eV Fe XVII line with the MW foreground (see Fig. <ref>). The Galactic foreground emission is assumed to consist of a thermal component for the local hot bubble (LHB, temperature 0.1keV, ), an absorbed thermal model to account for Galactic halo emission (GHE, temperature of 0.23keV, and a velocity broadening of 100km s^-1), and the North Polar Spur or hotter halo component (NPS, temperature 0.7keV, see <cit.>, also broadened by 100km s^-1). Each thermal component is implemented with the APEC model () with solar abundances (), and the absorption with the tbabs model () with a hydrogen column density of 1.8e20cm^-2. The normalizations are 𝒩_ LHB = 1.7e-6cm^-5 arcmin^-2 for the LHB, 𝒩_ GHE = 0.43 𝒩_ LHB for the GHE, and 𝒩_ NPS = 0.05 𝒩_ LHB for the NPS. The spatial distributions are flat in the mock event files. The astrophysical background contains unresolved X-ray point sources, mostly distant AGNs (cosmic X-ray background, CXB). On average, the spectrum of a source follows a power law S_ν∝ν^-2 (), and the fluxes follow the log N - log S distribution from <cit.>. The average power-law normalization is 4.1e-7pht s^-1 keV^-1 cm^-2 after excising the brightest 50 point sources from the event file, which make up half of the total CXB flux (and cover about 3% of the FoV area). Considerations of the particle background based on Athena X-IFU () studies showed that a spectral component due to Galactic cosmic rays will be a factor of 30 to 60 lower than the second lowest component, the CXB. We included a conservative estimate of the residual particle background, after anti-coincidence filtering, of 1cts s^-1 keV^-1 for the field of view. The particle background is assumed to have a flat spectrum and no spatial features. Our mock event files include all the above-mentioned components and simulate a 1 Ms observation.

§.§ Analysis

The analysis of the mock event files relies partly on existing software, such as CIAO, but most routines are re-implemented in python using astropy (), and the scipy packages.

§.§.§ Preparation

Our simulated mock event files use coordinates that reflect the physical pixels of the LEM detector array with 15arcsec width and height. However, the optics and mirror assembly of LEM reach a spatial resolution of 10arcsec, which will be utilized through Lissajous-dithering. Therefore, we oversample the detector pixels by a factor of 2 for all images that are produced, also internally, e.g. for point source detection. To start the analysis of the simulated event files, we visually inspect the images and spectra around the O VII(f) emission line. The exact spectral location of the peak of the line emission is specified, in order to center the extraction region for line surface brightness profiles later on. Then we define the emission weighted center using the emission around the O VII(f) line (±2eV), which is typically within a few pixels of the center of the FoV.
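To build intuition for why excising only the ∼50 brightest CXB sources already removes a large fraction of the CXB flux while sacrificing very little detector area, the toy Monte Carlo below draws source fluxes from a single power-law log N - log S distribution and reports the flux fraction contributed by the 50 brightest sources. The slope, flux range, and source number are assumptions chosen for illustration; the actual mock observations use the published (broken power-law) number counts.

```python
import numpy as np

rng = np.random.default_rng(42)

n_src = 3000                 # assumed number of CXB sources in the field of view
s_min, s_max = 1.0, 1.0e4    # flux range in arbitrary units
alpha = 0.6                  # assumed cumulative logN-logS slope, N(>S) ~ S^-alpha

# Inverse-transform sampling of the truncated power-law flux distribution
u = rng.random(n_src)
flux = (s_min**-alpha + u * (s_max**-alpha - s_min**-alpha)) ** (-1.0 / alpha)

top50 = np.sort(flux)[::-1][:50]
print(f"flux fraction in the 50 brightest sources: {top50.sum() / flux.sum():.2f}")
# about 3% of the FoV area for 50 masked sources corresponds to ~0.06% per source
print(f"area fraction masked (assuming 0.06% per source): {50 * 0.0006:.1%}")
```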
§.§.§ Point sources

Following these initial tasks, we use the source detection algorithm included in the CIAO 4.14 package () to detect point sources from the CXB in the observation, and in the corresponding background file. Since the point sources are expected to have a continuum power-law spectrum, we use a broad-band image (250-950 eV) for the detection. While several hundred point sources are typically detected, we select the 50 most significant and brightest sources, which contribute about 50% of the total CXB flux, but cover only about 2-3% of the detector area. The least significant of the top 50 sources is still detected at 100σ. An example of the detected sources can be seen in the top left panel of Fig. <ref>. The area of these 50 point sources is masked out in the observation and background event file for the following analysis steps. More details on the point source contribution are given in section <ref>.

§.§.§ Surface brightness profiles

For our radial surface brightness profiles we extract the counts in the mock event files with respect to the center of the galaxy halo. We define the center as the emission-weighted peak in the O VII(f) image. Counts are extracted in a narrow 2 eV spectral window centered at the redshifted energies of the lines (rest-frame energies: 367.49eV for C VI, 560.97eV for O VII(f), 574.04eV for O VII(r), 653.72eV for O VIII, and 725.09eV, 727.00eV, 739.02eV, and 825.75eV for Fe XVII). The 2 eV spectral window is the optimal width in terms of signal-to-noise, and it includes 76% of the line counts. The area of the 50 brightest point sources is masked. Figure <ref> shows the redshifted spectral extraction with respect to the foreground emission for several interesting line windows considered here. The red shaded regions correspond to the extraction for the nearby (z=0.01) galaxies, the blue shaded regions to the more distant (z=0.035) galaxies. For the case of O VII, we can use the forbidden and resonant lines at z=0.035, while for the nearby galaxies at z=0.01, we only use the forbidden line. In the case of Fe XVII, we can use three lines, the 725 eV, 727 eV, and 739 eV lines at lower redshift, while at z=0.035, we extract the 725 eV, 727 eV, and 826 eV lines. We note that we compute the radial bin size adaptively, where we require the bins to have a minimum signal-to-noise ratio of 3, and also require the source-to-background ratio in each bin to be at least 10%, so that they are not dominated by statistical or systematic uncertainties. We estimate the background counts based on a simulated blank field observation with only foreground and CXB background components, where we repeat the same steps that were performed for the CGM observation, namely the point source detection, as this field has a different realization of the CXB point sources (see details in section <ref>). The background counts are estimated from the whole field of view (minus the excised point sources) to reduce the uncertainty in the background. Since this background estimate assumes a field-of-view-averaged residual CXB contribution (point sources that are fainter than the 50 brightest that are excluded), we introduce a scaling factor for the background counts in each annulus, based on the broad band emission (±200eV around the line) of the continuum CXB sources (see appendix <ref>). The individual profiles are then interpolated if needed, and a median profile (of all the galaxies in the sample) is created at fixed fractions of R_500. The scatter (68.3%) at each radius between the ensemble of profiles represents the galaxy-to-galaxy scatter.
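The adaptive radial binning described above (signal-to-noise of at least 3 and a source-to-background ratio of at least 10% per bin) can be sketched with a few lines of NumPy. This is a simplified illustration of the procedure, not the actual analysis code; the function and argument names are assumptions, and the signal-to-noise uses a simple Poisson approximation.

```python
import numpy as np

def adaptive_profile(src_counts, bkg_counts, r_edges,
                     min_snr=3.0, min_src_to_bkg=0.1):
    """Grow annuli outward until each bin reaches the required signal-to-noise
    and source-to-background ratio.

    src_counts : counts in the 2 eV line window per fine annulus (observation)
    bkg_counts : expected background+foreground counts in the same annuli
    r_edges    : radii of the fine annulus edges, len(src_counts) + 1
    """
    profile = []
    i, n = 0, len(src_counts)
    while i < n:
        j, src, bkg = i, 0.0, 0.0
        while j < n:
            src += src_counts[j]
            bkg += bkg_counts[j]
            j += 1
            net = src - bkg
            snr = net / np.sqrt(src) if src > 0 else 0.0  # Poisson approximation
            if snr >= min_snr and bkg > 0 and net / bkg >= min_src_to_bkg:
                break
        else:
            break  # remaining shells never meet the criteria; stop the profile
        area = np.pi * (r_edges[j]**2 - r_edges[i]**2)
        profile.append((0.5 * (r_edges[i] + r_edges[j]),  # bin center
                        net / area,                        # surface brightness
                        np.sqrt(src) / area))              # statistical error
        i = j
    return np.array(profile)  # columns: radius, SB, SB error
```

Applied per emission line and per galaxy, profiles of this kind feed into the median stacking at fixed fractions of R_500 described above.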
§.§.§ Structural clumping in the gas

While the radial surface brightness profiles demonstrate LEM's ability to detect the CGM to large distances, they do not quantify the level of substructure present in the gas at a given radius, nor how well LEM can detect it. It is expected that different feedback mechanisms leave imprints in the CGM. Stellar feedback is able to push out gas from the inner region near the disk to larger radii and cause an anisotropic distribution, especially at intermediate radii (∼ 0.5 R_500). A very dominant AGN will have a major impact on the gas distribution in the halo. After several feedback cycles, it is expected that the gas distribution gets smoothed by the impact of the AGN. To capture this information in our mock observations, we derive the ratio of the average squared density to the square of the average density (e.g., ), which characterizes the azimuthal asymmetry and clumpiness of the X-ray gas. We trace this quantity observationally by calculating the median emission line surface brightness profile over small sectors, S_median(r), and compare it to the average surface brightness S_mean(r):

𝒞(r) = ⟨ρ^2⟩ / ⟨ρ⟩^2 ≈ S_mean(r) / S_median(r),

where ρ is the gas density; 𝒞(r) is also referred to in the literature as the emissivity bias (). Typical values from observations range from 1, meaning no azimuthal asymmetry or substructure, to about 2 at large radii. Emission from a single line is more prone to vary, due to temperature variations in the gas. We calculate 𝒞 in annuli of width 0.25R_500 from stacked images of the O VII(f), O VIII, C VI, and Fe XVII 725 eV lines. Therefore, we combine the signal from these emission lines, which also increases the signal to noise. We trace the substructure out to a radius of R_500 (4 radial bins), while using 8 sectors (45 deg each). We then stack the 𝒞 profiles of the galaxies to derive the median profile. We notice that especially for lower mass halos, the scatter in 𝒞 is substantial. Since we are only interested in the type of galaxy where 𝒞 is significantly larger than 1, we use the range of 𝒞 values in the 50th to 75th percentile as a diagnostic.

§.§.§ Spectral analysis

We can use the signal of the various emission lines to constrain the CGM temperature. The model fitting of small spectral windows around the emission lines allows us to extract even more information from the CGM observations, such as the relative line-of-sight velocity and the abundance ratio, which gives further insights into gas motion, outflows, enrichment history, and re-heating processes. We utilize this technique to make spectral maps. For the spectral mapping, we extract 8 eV windows of the spectrum from circular regions based on the brightness distribution of the three lines, O VII(f), O VIII, and Fe XVII, through an adaptive binning technique (see ), which we briefly describe here: At every pixel of the combined line image, we derive a radius at which we reach a threshold signal to noise. This radius can be different for neighboring pixels. The spectral extraction region for each pixel is given by the determined radius. As a consequence, the spectra of neighboring pixels are not independent, and we will oversample the map. We use a signal-to-noise of 10 as a threshold parameter (which is sufficient given that the emission line takes up only a small portion of the 8 eV wide spectral window), and we do not include pixels if the radius has to be larger than 7.
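The per-pixel adaptive binning used for the spectral maps can be illustrated with the brute-force sketch below, which grows a circular region around every pixel of the combined line image until the signal-to-noise threshold is reached, and discards pixels that would require a radius larger than the cap. It is written for clarity rather than speed, and the function and argument names (and the interpretation of the radius cap in pixels) are illustrative assumptions.

```python
import numpy as np

def adaptive_bin_radii(line_image, noise_image, snr_threshold=10.0, r_max=7):
    """For every pixel of the combined line image, find the smallest circular
    region radius (here in pixels) that reaches the required signal-to-noise.
    Pixels that would need a radius larger than r_max are excluded (NaN).

    line_image  : background-subtracted counts in the stacked line bands
    noise_image : corresponding 1-sigma noise estimate per pixel
    """
    ny, nx = line_image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    radii = np.full((ny, nx), np.nan)

    for iy in range(ny):
        for ix in range(nx):
            for r in range(1, r_max + 1):
                sel = (xx - ix) ** 2 + (yy - iy) ** 2 <= r ** 2
                signal = line_image[sel].sum()
                noise = np.sqrt((noise_image[sel] ** 2).sum())
                if noise > 0 and signal / noise >= snr_threshold:
                    radii[iy, ix] = r
                    break
    return radii
```

Because neighboring pixels share most of their extraction regions, the resulting spectra are correlated and the map is oversampled, as noted above.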
We assume that the foreground and CXB models are not known a priori, and constrain their parameters through spectral fitting of a separate background spectrum. This background spectrum is extracted from the same observation, using a region outside the central 15 arcmin radius, and masking extended bright regions within R_500. We apply the mask that has been determined for removing the point sources in the background observation. This leaves about 30% of the detector area for the background spectrum, which is enough to measure all parameters (temperatures and normalizations) with high precision. The spectral extraction is done using the CIAO tools dmextract and dmgroup to obtain a grouped spectrum file with at least one count per bin. For the spectral fitting, we use Sherpa (), which is distributed with CIAO, and also provides the Xspec models, such as APEC and tbabs <cit.>. We model the background emission with two absorbed APEC models (including thermal and velocity broadening with a velocity of 100km s^-1), one unabsorbed APEC model, plus one absorbed power law to fit the background components in the mock observations (see section <ref>). We also add CGM components, but we are mostly interested in constraining the background and foreground model parameters with this outer region where the background dominates. We are able to reproduce the input background parameters within 2% relative accuracy. We save the best-fit values of the temperatures and normalizations for the next steps. We then determine the extraction regions for spectra through adaptive binning. During the spectral fitting, we use the previously determined background parameters and freeze these values (the normalizations are scaled by the ratio of the region size to the background region). These foreground parameters are well determined from a large extraction area (statistical uncertainties at the percent level). We only leave the CXB power-law normalization free to vary, since each spectral region can contain a slightly different population of CXB sources, while the CXB normalization from the background spectrum is just the average over a larger area. We also include a Gaussian smoothing (gsmooth, with α=1) of the CGM components to account for any broadening of the lines. Our CGM emission model consists of 19 absorbed components: the individual elements C, N, O, Ne, Mg, Si, S, Fe, plus a model for all other elements, one component for each element that includes the effect of resonant scattering of the cosmic X-ray background (), and one component for the emission of a plasma without metals. The normalization of the resonantly scattered components is frozen to half the normalization of the corresponding element, since we removed about half the flux from CXB emission through the point source masking. Each of the CGM components has a temperature, redshift, and intrinsic hydrogen column density (for the photo-ionization), and all parameters are linked, so there is effectively only a single temperature plasma component. We use Cash statistics () for the fitting of the input spectrum and select several spectral windows:

* Carbon: The lowest energy window, from 33.4 to 34.1 Å (363.5eV to 371.5eV), contains the C VI line, which peaks at a temperature of 1.28e6K (0.11keV), but also significant emission from N and other metals.

* Nitrogen: Emission around the N VII line (peak temperature of 1.97e6K, or 0.17keV), with the window ranging from 24.6 to 25 Å (496eV to 505eV), also contains emission from C and other metals.
* Oxygen (2 windows): The O VII triplet window (from 21.4 to 22.3 Å, 557eV to 579eV) and the O VIII window (18.8 to 19.1 Å, 650eV to 658eV) peak at 1.97e6K (0.17keV) and 3.13e6K (0.27keV), respectively. Of the O VII triplet lines, the forbidden line is very interesting, since it is not blended with the foreground lines in our low redshift samples. O VIII is dominant at higher temperatures and can help to constrain the temperatures in the regions.

* Iron: The Fe XVII complex window from 16.7 to 17.2 Å (721eV to 743eV) is very bright, peaks at a temperature of 5e6K (0.43keV), and contains several weaker transitions of other elements, such as O VII.

* Neon: The Ne IX triplet window from 13.4 to 13.8 Å (901eV to 926eV) is not very bright and only visible in some galaxies. The peak temperature is 3.95e6K (0.34keV).

Note that all the windows are redshifted according to the true (simulated) redshift of the galaxy. Unconstrained parameters are frozen to a default value. The reduced χ^2 values are typically very close to 1. After the best-fit parameters are found, we use the MCMC sampling method integrated into Sherpa to find the parameter uncertainties using the Metropolis-Hastings algorithm.

§ RESULTS

The results of our analysis laid out in the following sections demonstrate the extraordinary capabilities of LEM for detecting and characterizing the CGM emission. As an example, we show in Figure <ref> the line emission image, surface brightness profile, and spectral extraction regions for a TNG100 galaxy at the upper mass end (log_10 M_⋆ / M_⊙ = 11.18, and R_500=243kpc) placed at z=0.01. The top left panel shows the stacked O VII, O VIII, and Fe XVII image, which fills the LEM field of view. The top right panel shows the surface brightness profile, and the background level (dashed line). At the red extraction region around 0.75R_500 and 1 arcmin width, the CGM emission still reaches 50% of the background. The bottom panel of Fig. <ref> shows the overall spectrum from 544 to 800eV with the prominent CGM emission lines indicated. The two middle panels show closeups of the O VII and O VIII windows with the foreground model overplotted in blue, and the CGM model in orange. In the following, we systematically analyze the samples for each of the three simulations.

§.§ Line surface brightness profiles

We extract surface brightness profiles of 4 important emission line windows, C VI, O VII(f), O VIII, and Fe XVII, which peak at different plasma temperatures (0.11keV, 0.17keV, 0.27keV, and 0.43keV, respectively). The profiles are extracted in a 2eV window around the redshifted line energy, which minimizes the contamination by foreground lines from the MW. We extract a profile of a simulated background/foreground observation in the same fashion and subtract this from the observation of the CGM. Figure <ref> shows the spectra of a single temperature plasma at z=0.01 (blue) and z=0.035 (red). The spectra contain the foreground emission, which we also show without the redshifted CGM (only foreground) as black lines. It is clear that already at redshift z=0.01, the CGM line emission is well separated from the foreground lines. Each of the low, medium, and high mass samples contains 40 galaxies, where the former two have the galaxies at z=0.01, and the latter at z=0.035. More details on the individual galaxies, such as stellar mass, gas mass, and black hole mass, are given in Tables <ref>, <ref>, <ref>, and <ref>. We analyze the median profile with the radius scaled by R_500.
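For reference, the short sketch below computes the observed energies and the 2 eV extraction windows of the diagnostic lines for the two redshift placements, using the rest-frame energies quoted earlier; the offset from the corresponding Milky Way foreground line is simply E_rest - E_rest/(1+z). Blending with other nearby foreground lines (e.g., within the O VII triplet) is not assessed by this simple check.

```python
# Rest-frame energies (eV) of the diagnostic lines quoted above.
REST_EV = {
    "C VI":        367.49,
    "O VII (f)":   560.97,
    "O VII (r)":   574.04,
    "O VIII":      653.72,
    "Fe XVII 725": 725.09,
    "Fe XVII 727": 727.00,
    "Fe XVII 739": 739.02,
    "Fe XVII 826": 825.75,
}

WINDOW_EV = 2.0  # width of the extraction window

def extraction_windows(z, width=WINDOW_EV):
    """Observed line energies and the 2 eV extraction windows at redshift z."""
    out = {}
    for name, e_rest in REST_EV.items():
        e_obs = e_rest / (1.0 + z)          # E_obs = E_rest / (1 + z)
        out[name] = (e_obs - 0.5 * width, e_obs + 0.5 * width)
    return out

for z in (0.01, 0.035):
    print(f"z = {z}")
    for name, (lo, hi) in extraction_windows(z).items():
        shift = REST_EV[name] - 0.5 * (lo + hi)  # offset from the MW foreground line
        print(f"  {name:12s} window {lo:7.2f}-{hi:7.2f} eV  (shifted by {shift:4.1f} eV)")
```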
Each of the three mass-defined samples is subdivided into thirds based on the star formation rate. We conservatively consider the CGM detected only if the measured signal is at least 10% of the background, and the signal-to-noise of each extracted radial bin is at least 3. We show these profiles in Figs. <ref>, <ref>, and <ref> for EAGLE, TNG100, and Simba, respectively. The low-mass, medium-mass, and high-mass samples are shown in blue, orange, and green, respectively. The subsamples are shown as solid, dotted, and dashed lines for the top, lowest, and intermediate thirds, respectively. Therefore, each line is the median profile of 13 or 14 galaxies. We note that the star formation subsamples are defined by percentiles of the galaxies in the sample, and therefore do not necessarily reflect globally star-forming or quiescent galaxies. The typical 68% scatter is shown for the star-forming, medium mass sample as the orange region. We detect the C VI, O VII, O VIII, and Fe XVII lines in emission in all simulations. Depending on the galaxy mass, the CGM emission is detected out to R_500 in all simulations. Simba is clearly the faintest. However, there is significant scatter between the galaxies of a single simulation, and between the different simulations. O VIII can be detected out to R_200 (1.5 R_500) for the more massive galaxies in TNG100 and EAGLE. The O VIII line is the brightest, with the highest number of counts and a relatively low background, followed by O VII and Fe XVII. While C VI can be detected in most galaxies, it is very weak in Simba. For oxygen and carbon, TNG100 and EAGLE are comparable, but TNG100 typically has a steeper profile shape, leading to a smaller detection radius. Especially for galaxies in the low and medium mass samples (blue, orange), TNG100 shows a clear trend that galaxies with higher star formation are also brighter (solid line above dashed line, and the dotted line is lowest). This has also been pointed out, e.g., by <cit.>. The lowest mass galaxies with little star formation appear very faint in EAGLE. However, for the medium mass sample, we only partially find the same trend with star formation rate (solid line highest, but dotted and dashed lines comparable). For the high mass sample, we find in both TNG100 and EAGLE that at larger radii close to R_500 the brightness is independent of star formation (see also ). The medium mass sample shows oxygen and iron emission out to about 0.6 to 1 R_500 in both simulations, EAGLE and TNG. For C VI we find a big difference between EAGLE and TNG100, where in EAGLE carbon is detected out to 0.8 R_500, and in TNG100 only to about 0.3 R_500. The high mass galaxies placed at z=0.035 are not limited by the field of view, and their CGM can therefore be traced far beyond R_500, as in the case of O VII (both resonant and forbidden lines combined, see Fig. <ref>) and O VIII. We can also detect Fe XVII in both EAGLE and TNG100, nearly to R_500, depending on the star formation rate. C VI was not detected in the high mass TNG100 galaxies, but it is very clearly visible in EAGLE. Clearly, Fe XVII is detected best in EAGLE, likely because some of the galaxy halos are hotter. The difference in the visibility of C VI between EAGLE and TNG100 at higher halo masses is not explained by the higher EAGLE CGM temperatures, but possibly by a different metal composition, as carbon is also produced by AGB stars. Comparing the scatter between the galaxy halos of a given sample, we notice a slightly larger scatter among TNG100 galaxies.
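The sample-median profiles and 68% galaxy-to-galaxy scatter bands shown in these figures can be produced by interpolating the individual adaptive profiles onto a common grid in r/R_500, as in the minimal sketch below; the function name and the choice of grid are illustrative assumptions.

```python
import numpy as np

def stack_profiles(profiles, r500_values, grid=np.linspace(0.05, 1.5, 30)):
    """Median-stack individual surface brightness profiles at fixed r/R_500.

    profiles    : list of (radius, surface_brightness) arrays, one per galaxy
    r500_values : R_500 of each galaxy, in the same units as the radii
    grid        : common grid in r/R_500
    """
    stacked = []
    for (r, sb), r500 in zip(profiles, r500_values):
        x = r / r500
        # only interpolate within the radial range covered by this galaxy
        stacked.append(np.interp(grid, x, sb, left=np.nan, right=np.nan))
    stacked = np.vstack(stacked)

    median = np.nanmedian(stacked, axis=0)
    lo, hi = np.nanpercentile(stacked, [15.85, 84.15], axis=0)  # 68.3% scatter
    return grid, median, lo, hi
```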
The scatter of the CGM profiles between the Simba galaxies can only be measured at smaller radii.

§.§ Emission line ratios

If the hot gas in and around galaxies is not isothermal, we expect the profiles of emission lines that are sensitive to different gas temperatures to reflect this. For lines of the same element (e.g., oxygen), the line ratio, e.g., O VII to O VIII, is a sensitive function of the gas temperature. At large radii with low gas densities, photo-ionization becomes important and changes the line ratios. When comparing the emission lines of different elements (e.g., C VI and Fe XVII), the conclusions are less clear, since not only the temperature of the gas but also the enrichment mechanism matters: SNIa contribute significantly to the abundance of iron, but not to that of carbon or oxygen. We note that we only include galaxy halos simulated with TNG100 and EAGLE here. Figure <ref> shows the O VII to O VIII line ratio for the same samples that were shown in section <ref>. Looking at the nearby galaxies in Fig. <ref> (left), we notice a systematic offset between star-forming TNG100 and EAGLE galaxies, where the EAGLE ratios are always below the TNG100 ones. This indicates that the EAGLE galaxy halos are systematically hotter within the covered radius < 0.5R_500 (see also Truong et al. submitted). For the mixed star forming/quiescent galaxies (the quiescent ones alone did not provide enough statistics), we find comparable line ratios within the scatter of the samples (compare e.g., Truong et al. submitted). Unfortunately, statistics only allow us to derive line ratios to about 0.4 R_500, out to which we see a rising line ratio, indicating a hotter core and cooler outer regions (Fig. <ref>). For the high mass galaxy halos (Fig. <ref>, right), we see a similar trend of star-forming galaxies being hotter in TNG100 (lower O VII to O VIII ratio). TNG100 halos appear almost isothermal (constant line ratio), while EAGLE galaxies have an increasing line ratio toward the outer regions, which becomes even steeper beyond 0.6 R_500. At these large radii, the difference between star-forming and quiescent galaxies appears to vanish: the impact of star formation is most prominent in the core. In principle, we need to include the effects of photo-ionization in our interpretation of line ratios at large radii, since at very low plasma densities the assumption of collisional ionization equilibrium (CIE) is no longer applicable. However, simulating several line ratios with fixed plasma temperature and only changing the density showed that above a density of 3e-5cm^-3 the changes in the ratio are less than 5-6%. Densities within R_500 are expected to be larger than that (e.g., ). We note that in dense regions with column densities above 1e21cm^-2, the O VII to O VIII ratio will be affected by electron scattering escape ().

§.§ Substructure in the CGM emission

In order to analyze the substructure that can be detected in the mock observations, we apply the method introduced in section <ref>. The clumping factor, 𝒞, is calculated for the TNG100 and EAGLE halos in all three mass samples. As pointed out before, a clumping factor of 1 means that no clumping is detected, and observations have shown that it can typically rise up to about 2 at R_500. This can be associated with substructure being accreted in the outskirts. For clusters, <cit.> observed very low clumping factor values, even at R_500, while <cit.> found higher values for the Perseus cluster.
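A minimal NumPy implementation of the estimator 𝒞(r) ≈ S_mean(r)/S_median(r) from the methods section, evaluated in annuli and sectors on a stacked, point-source-masked line image, could look as follows; the function and argument names are illustrative assumptions.

```python
import numpy as np

def clumping_profile(line_image, center, r500_pix, n_rbins=4, n_sectors=8):
    """Evaluate C(r) = S_mean(r) / S_median(r) from sector surface brightnesses
    in annuli of width 0.25 R_500. Masked pixels (e.g., excised point sources)
    should be set to NaN in line_image."""
    ny, nx = line_image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    rad = np.hypot(xx - center[0], yy - center[1])
    ang = np.arctan2(yy - center[1], xx - center[0])       # -pi .. pi
    good = np.isfinite(line_image)

    r_edges = np.linspace(0.0, r500_pix, n_rbins + 1)
    a_edges = np.linspace(-np.pi, np.pi, n_sectors + 1)

    clumping = np.full(n_rbins, np.nan)
    for i in range(n_rbins):
        ring = good & (rad >= r_edges[i]) & (rad < r_edges[i + 1])
        sector_sb = []
        for j in range(n_sectors):
            sel = ring & (ang >= a_edges[j]) & (ang < a_edges[j + 1])
            if sel.any():
                sector_sb.append(line_image[sel].mean())    # mean counts per pixel
        if sector_sb:
            sector_sb = np.asarray(sector_sb)
            clumping[i] = sector_sb.mean() / np.median(sector_sb)
    return clumping
```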
Since the calculation of the clumping factor from a single emission line, like O VII, could bias the results due to its sensitivity to temperature changes in the CGM, we use the stacked signal of the O VII, O VIII, and Fe XVII lines. The combined samples cover a wide range of galaxy halos in terms of mass, star formation rate, and temperature. Therefore, we divided the sample into galaxies with a central SMBH below the median SMBH mass, and the ones above, shown in Fig. <ref> as dark shaded and light shaded regions. We find an interesting trend for the black hole mass distinction: We do not see any difference in EAGLE for different SMBH masses (all values outside the core are within 1 to 1.1, see Fig. <ref> right). However, galaxies in TNG100 show a dichotomy (Fig. <ref> left), as we find systematically higher 𝒞 for lower SMBH masses outside 0.4 R_500, while massive SMBHs show a very similar trend between EAGLE and TNG100. Simba galaxy halos are generally fainter and the statistical uncertainties are larger, but halos with lower mass SMBHs appear to follow the trend seen in TNG100, as they have higher 𝒞, although less significantly. This trend found for the TNG100 galaxy halos is consistent with simulations by <cit.>, in which they argue that SNe and especially AGN feedback can smooth out the gas distribution. While TNG100 and Simba have efficient AGN feedback that can push out the gas and stir up gas motions, EAGLE can pressurize the gas more efficiently. It should be noted that the numerical schemes, hydrodynamic solvers, and sub-grid physics, especially of the feedback, differ between the simulation suites, which will impact the gas distributions. A Smoothed Particle Hydrodynamics (SPH) simulation will also produce a smoothed gas distribution (e.g., ).

§.§ Spectral analysis

The detectability of CGM emission lines with LEM out to large radii offers the opportunity not only to derive one-dimensional profiles, but also to map the emission out to R_500 and measure temperatures, line-of-sight velocities, and abundance ratios. We described our spectral fitting approach in section <ref>, and apply it here to two galaxies selected from TNG50 of the IllustrisTNG project (). With its smaller box size, the TNG50 simulation has a higher resolution than TNG100 (used for the profiles). Since we do not want to select a large number of galaxies, but rather study two examples in more detail, a larger simulation box would not provide any advantage. We explore two galaxy halos in more detail: The first galaxy (358608) has a halo mass of 10^12.7 M_⊙, which would place it in our high mass sample. It has a relatively high stellar mass of 10^11.18 M_⊙, and a high star formation rate, 3.87M_⊙ yr^-1. With a mass of 10^8.64 M_⊙, its central SMBH is relatively dominant, and we expect both AGN and stellar feedback to be present here. R_500=242kpc slightly exceeds the LEM FoV at z=0.01, where it corresponds to 19.7 arcmin. Figure <ref> a) shows the observed (i.e., including background and foreground treatment) O VIII emission: We see a bright core and extended CGM emission out to the edge of the field of view (16 arcmin radius, or 195kpc). Brighter filaments extend to the north (forming a rim around a lower surface brightness region) and south-east, perpendicular to the galactic disk, which is edge-on and oriented south-west to north-east (see the optical r-band image tracing the stellar population in Fig. <ref> b). The optical image also shows a smaller structure, about 100 kpc to the west, which is a smaller galaxy.
It also has an X-ray counterpart in the O VIII image. The X-ray brightness in the simulation (Fig. <ref> c) shows the “true” distribution of the hot gas, which is very filamentary. The temperature map of this system (Fig. <ref> d) is derived from the relative line strengths of the O VII(f), O VIII, and Fe XVII lines, and will be most sensitive to trace temperature between 0.15 and 0.45 keV. The typical statistical uncertainties vary between 0.03keV and 0.005keV, so relative uncertainties range between 1% and 10% (see Fig. <ref> d). Comparing this observed temperature map with the idealized, emission weighted temperature (0.5-1 keV band) derived from the simulation (Fig <ref> f), we can reproduce the brighter parts to the north of the core and the south-east. The emission weighted temperature is even higher in these regions, mostly because the lines that we used from our mock observation to probe the gas temperature are not sensitive enough to the hotter gas components. We note that a mass weighted temperature will be biased toward high mass, low temperature gas cells, and therefore be lower than the emission weighted temperature, which more closely reflects our measured quantity in the mock observations. The velocity map (Fig. <ref> g ) reveals a split between east and west, with an average difference between the two sides of about 300km s^-1. As this is most pronounced in the central region, it can be explained by the rotation of the disk. The higher velocity gas (red region southeast of the core) is likely an outflow, since it overlaps with the hotter regions. The predicted, emission weighted velocity map from the simulation (Fig. <ref> i) confirms this, as it shows structures that are very consistent: The central rotation of the disk, the higher velocity part to the south-east, and the large scale velocity structure of the hot gas. A subsequent paper (ZuHone et al. submitted) will analyze the velocity structure of simulated galaxy halos in great detail. The statistical uncertainty mainly depends on the number of counts in a line. With our adaptive binning described in section <ref>, we get a velocity uncertainty of about 25 to 45km s^-1 in the brighter central regions, and about 80km s^-1 in the lower surface brightness region north-east of the center. We assumed conservatively a 2 eV response across the field of view. We also show the observed oxygen-to-iron ratio (Fig. <ref> j). This abundance ratio is sensitive to the enrichment history, mainly the supernovae SNIa versus core-collapsed supernovae (). We find typical values in the central region [ O/Fe]=-0.3 (Z_ O / Z_ Fe = 0.5), and values closer to [ O/Fe]=0 (Z_ O / Z_ Fe = 1) and above, in the outer regions. This is consistent with the predicted O/Fe brightness from the simulation (Fig. <ref> l), which shows the center and outflows to be more Fe-rich. For galaxy clusters and groups, the oxygen abundance has been found to be flat, while iron is centrally peaked , which leads to an increasing O/Fe profile. A similar trend can be expected for galaxies (). Some regions, especially in the south-east, have very high oxygen abundances, up to 2.5. Typical uncertainties range from 10-20% in the center to 70% in the faintest regions. The second galaxy (467415) that we map in detail is also selected from TNG50, but with a lower halo mass of 10^12.32 M_⊙, which would place it in the medium mass sample. 
The stellar mass, 10^10.94 M_⊙, is relatively high for its size, and the star formation rate of 8.9M_⊙ yr^-1 might still dominate its halo environment. Also, its SMBH mass is at the higher end with 10^8.33 M_⊙, which makes this another interesting target to study the effects of stellar and AGN feedback. The radius, R_500=183kpc, is within the LEM field of view at z=0.01. The observed O VIII emission (Fig. <ref> a) is brightest in the center, but extends out to almost 100 kpc, with some filaments reaching beyond that. The distribution is not azimuthally uniform, but seems to be aligned along filaments, mainly to the south-east, the north, and a narrow region to the west. The r-band contours in Fig. <ref> b show a single galaxy at the center, and some much smaller and fainter structures to the south, possibly small satellite galaxies. Figure <ref> c displays the X-ray surface brightness in the simulation. The observed temperature map (Fig. <ref> d, and the error map e) shows the hottest emission (∼0.3keV) near the center of the galaxy, while the temperature in the outer filaments drops to 0.18keV. We identify slightly hotter structures extending from the south-east to the north-west, while the north-east to south-west axis has cooler gas. The uncertainties are again in the 1% to 10% range. The temperatures are broadly consistent with the emission-weighted temperature in the simulation (Fig. <ref> f). The velocity (Fig. <ref> g, and the error map h) ranges from 250km s^-1 in the south-east to -250km s^-1 in the north-west, outside the stellar disk. These velocities are even higher than within the first, more massive galaxy. This coincides with the hotter regions in the temperature map, and is consistent with the predicted velocity from the simulation (Fig. <ref> i). Along the faint, low-temperature regions in the south-west and north-east, the velocities are lower than in the surrounding gas. Statistical uncertainties are similar to the other galaxy, ranging from 20 to 80km s^-1. Lastly, the oxygen-iron map (Fig. <ref> j, and the error map k) has typical values between 0.5 and 1, but several small regions exceed 1 by far (statistical uncertainties are between 20% and 70%). It is consistent with the predicted O/Fe brightness in the simulation (Fig. <ref> l), where the iron-rich gas is again found in the center and along the high velocity, high temperature trajectory from the south-east to slightly north of the core. The level of detail that is revealed in these spectral maps is unprecedented for galaxy-sized halos. Typical spectral maps from CCD-based detectors (e.g., Chandra or XMM-Newton) can only reach large radii (e.g., R_500) for galaxy clusters and massive galaxy groups, and they come without any line-of-sight velocity information.

§ DISCUSSION

§.§ Tracing the CGM with X-ray microcalorimeters

The extended gaseous halos around MW-like galaxies are predicted by simulations, and have been detected in stacked broad-band images. Constraining their extent, brightness profile, azimuthal distribution, and enrichment with various elements for individual galaxies will allow us to distinguish between simulation models and ultimately understand the changing feedback processes of galaxies on various scales. The X-ray continuum emission of these galaxies is very faint, but the bright emission lines, namely O VII, O VIII, and Fe XVII, are clearly detectable over the local background and foreground.
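The energy-resolution argument can be quantified with a one-line estimate of the minimum redshift needed to move a CGM line by more than one resolution element away from the Milky Way foreground line at the same rest energy, z_min ≈ ΔE_FWHM / E_rest. The sketch below evaluates this for a 2 eV microcalorimeter and an ∼80 eV CCD-class detector; it is an illustrative estimate that ignores blending with other nearby foreground lines.

```python
LINES_EV = {"O VII (f)": 560.97, "O VIII": 653.72, "Fe XVII": 725.09}

for fwhm, label in [(2.0, "microcalorimeter (2 eV FWHM)"),
                    (80.0, "CCD/DEPFET (~80 eV FWHM)")]:
    print(label)
    for name, e_rest in LINES_EV.items():
        print(f"  {name:10s}: z_min ~ {fwhm / e_rest:.3f}")
```

For O VIII this gives z_min ≈ 0.003 for a 2 eV instrument but ≈ 0.12 for an 80 eV one, consistent with the redshift requirement quoted below for CCD-class detectors.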
Selecting a narrow ∼2eV energy band around these lines results in a high signal to noise detection with LEM, in a 1 Ms exposure of a z=0.01 galaxy, and even allows the mapping of these galaxy halos. Only focusing on emission lines and not having the continuum information will still allow the majority of science questions to be answered: How is the hot gas distributed, and what are the relative metal abundances (e.g., wrt iron). Only the degeneracy between metallicity and density cannot be broken. The faint CGM line emission around individual, nearby spiral and elliptical galaxies cannot be detected and mapped with current X-ray CCD instruments due to the bright Milky-Way foreground emission. Even state-of-the-art DEPFET detectors such as the Athena/WFI () with its ∼80eV energy resolution will not be able to distinguish the CGM emission lines from the much brighter foreground. A galaxy redshift of at least 0.12 is necessary to shift the O VIII line from the foreground, which will also reduce the apparent size of the galaxy to about 1arcmin, and the total flux to about 8% with respect to a galaxy at redshift 0.01. The development of microcalorimeters marks the start of a new epoch in X-ray astronomy, reaching unprecedented energy resolution, while spatially resolving the structure of the source. The currently planned Athena/X-IFU instrument () will have a large effective area and good spatial resolution. However the field-of-view of ∼ 5× 5 arcmin^2 (before reformulation) is clearly not sufficient to observe the extended CGM of nearby galaxies. At z=0.01 a Milky-Way-sized galaxy has R_500≈ 15, and therefore requires about 30 pointings. Moving to a higher redshift and utilizing Athena's large effective area can reduce the required amount of observing time to a factor of 5-10 times what a LEM-like mission would need. Galaxies at z≫ 0.01 will also not offer the same amount of structure that can be resolved. Since simulations predict variance between galaxies, also based e.g., on the star formation rate, one would like to cover a medium-sized sample of 10 to 20 galaxies. The X-Ray Imaging and Spectroscopy Mission (XRISM, expected to be launched in 2023) will also have a microcalorimeter onboard. However, its field-of-view is limited to only 3×3, the effective area is about 10-15 times smaller than LEM, and together with the arcmin spatial resolution and only 6×6 pixels, it will not be able to map the extended CGM. Other mission concepts with a large effective area microcalorimeter include the Line Emission Mapper (), and HUBS (). While HUBS does not have enough spatial resolution to map the structure, and distinguish X-ray point sources in the field, LEM is clearly optimized to the CGM science by having sufficient energy resolution (2eV), a large effective area similar to XMM-Newton, a 10arcsec PSF, and a large field-of-view of ∼900arcmin^2, allowing one to map nearby galaxies in a single pointing. In contrast to typical X-ray observations of faint, diffuse sources, the instrumental background level plays only a minor role when using a narrow energy band of a microcalorimeter: The requirement for Athena/X-IFU is to reach an internal particle background level of 5e-3cts s^-1 cm^-2 keV^-1 (). This should be achieved through a graded anti-coincidence shield, while the background is predicted to be about an order of magnitude higher without the shielding. 
For our results, we assumed a constant particle background level in the soft band of 8.6e-2cts s^-1 cm^-2 keV^-1, which is more than 15 times higher than the Athena requirement. The foreground emission, however, scales with the effective area of the mirror. In the case of LEM, it will be the dominant background component, and even at the foreground continuum around the O VIII line, the particle background is still below all other components (foreground and CXB).

§.§ Model distinction

We have demonstrated that a LEM-like microcalorimeter will be able to detect the CGM of MW-mass galaxies to large radii ∼ R_500, even in low mass galaxies below the “transition” regime (Fig. <ref>). Long exposure times with CCD instruments such as Chandra ACIS or XMM-Newton EPIC spent on individual massive galaxies have revealed only the innermost part of the CGM, and at best give us a vague idea of the temperature structure, especially if they are not in an ongoing starburst phase. <cit.> used Chandra to image NGC 266, a massive (M_200≈8e12M_⊙), nearby galaxy, and detected the CGM out to about 60 kpc, which is about 20% of R_500. <cit.> used XMM-Newton to detect and characterize the CGM around the massive galaxy NGC 6753, which has a virial mass of 1e13M_⊙. The authors could reliably make a detection out to 50 kpc, before background systematics made any conclusions impossible. This is about 17% of R_500. These exceptional cases demonstrate that, only with massive efforts, we are currently able to explore up to 1% of the volume that the CGM fills out to R_500, and this only for the most massive, hand-picked galaxies at the high mass end. Our mock observations show that with a large grasp microcalorimeter we can not only detect these types of galaxies out to R_200 in individual lines, such as O VIII, but also map their dynamics, temperature structure, and chemical abundances. These galaxies are expected to be dominated by AGN feedback, which we see as outbursts in the velocity map or the O/Fe ratio map. Features in the abundance ratio map, such as high O/Fe ratios, may be indicative of strong early feedback or a recent starburst, whereas a high abundance of metals from AGB winds, such as carbon and nitrogen, may be evidence for efficient gas entrainment from the ISM in SNe-driven winds (e.g., ). A LEM-like instrument will explore unknown territory by also mapping galaxies of much lower mass, down to M_200≈3e11M_⊙, which has not been done so far. In this regime, we do not expect AGNs to be important in the feedback cycle, while stellar winds are expected to enrich and reheat the CGM. The details of the transition from stellar to AGN feedback are largely not understood and are implemented ad hoc in simulations to match some observational constraints, such as stellar scaling relations (M-σ, M_⋆-M_ halo, galaxy morphologies, quiescent fractions as a function of stellar mass, and SFR-z relations). Many measurements can be conducted that will lead to a new understanding of the processes within the CGM: in the central regions, a spectral continuum of the CGM emission can be measured, allowing us to constrain the absolute metal abundance; at larger radii, the steepening of the X-ray line emission can be used to distinguish the contribution from SN feedback, as seen, e.g., between EAGLE and TNG100, where TNG100 produces centrally peaked profiles with a steeper decrease in surface brightness.
Measurements of the X-ray luminosities, surface brightness profiles, and temperature distributions that relate to the outflow energies will allow one to distinguish whether feedback is instantaneously stopping a cooling flow, whether it acts cumulatively by preventing gas phases from cooling, or whether the gas is ejected from the disk (). With a few assumptions, such as a metallicity profile, a total gas mass can be derived. Supplementary observations such as Sunyaev-Zeldovich (SZ, e.g., ) or fast radio bursts (FRB, e.g., ) will also help to derive the gas mass. The impact of the central AGN will be observed in the range of azimuthal asymmetry seen in the CGM emission. <cit.> and <cit.> have demonstrated through stacking of optically detected galaxies in the eROSITA Final Equatorial Depth Survey (eFEDS) that the X-ray bright CGM exists even in low mass galaxies. Based on these results, simulations such as EAGLE and TNG100 are likely underestimating the surface brightness, especially in the low mass regime (see, e.g., the 2.5e10M_⊙ stellar mass bin in Fig. 4 of , which is a factor of 3-5 above the simulation predictions). Furthermore, the dichotomy of star-forming galaxies being brighter in simulations than quiescent galaxies might not hold, at least not to the extent that it is predicted. Quiescent galaxies with little to no star formation tend to have massive and dominant AGNs, and are well suited to understanding AGN cycles. These results cast doubt on the validity of the Simba CGM profiles, as Simba appears to drive gas to too large radii, making the galaxy halos fainter than observed by <cit.>. However, stacking cannot replace systematic analyses of individual galaxies, as it might be biased by a few bright objects. Simba also appears to be too X-ray faint compared to low-mass groups <cit.>, as the energy output from the bipolar jets evacuates the halos. §.§ Observing strategies In the previous sections we demonstrated that a LEM-like mission with a large grasp microcalorimeter will be able to map nearby galaxies over a wide range of mass, star formation rate, and black hole mass. We argued that an instrument such as the Athena X-IFU will not be able to dedicate enough observing time to this science area. However, even an observatory such as LEM will not be able to spend 1 Ms on each of the 120 galaxies that we assumed to have observations available (40 in each of the three mass samples). We tested the impact of shorter observations on constraining the line surface brightness distribution (Fig. <ref>), and found that even with 100 ks per galaxy, a LEM-like mission will be able to map the O VIII emission out to R_500, in both EAGLE and TNG100 simulated galaxies. In an observing plan for a LEM-like mission, fainter galaxies will take up significantly more observing time, especially in the crucial transition regime of Milky Way mass galaxies. With 10 low, 10 medium, and 10 high mass galaxies, where each galaxy is observed for 1 Ms with the exception of the high mass halos (100 ks), one can achieve such an ambitious program within about 20 Ms, which is a typical directed science program for a probe class mission. While the trend of the average properties (e.g., O VIII brightness) in each mass bin is important, the dispersion around the median will also be important to understand. 
Therefore, galaxies should be selected to cover a range of properties that might shape the CGM, such as the stellar mass M_⋆ at a given halo mass, the star formation rate, and the mass of the supermassive black hole. We also tested whether dedicated background observations are necessary, or whether surface brightness profiles can be extracted using a model of the foreground and background emission. This model can be fit to the same observation in an outer region, and then used to constrain the expected background plus foreground counts in each annulus at the CGM line spectral window. This method achieves comparable results and reduces the overhead, but requires a good model of the foreground spectrum. § SUMMARY Mapping the X-ray emission of the hot circumgalactic medium (CGM) is the key to understanding the evolution of galaxies from smaller galaxies with star-formation-driven feedback to larger, quiescent galaxies. Milky Way mass galaxies appear to be at the transition point between these regimes. However, the current generation of X-ray instruments is unable to capture the emission from the hot CGM, which is dominant in the soft X-ray band, and distinguish it from the bright Milky Way foreground. We demonstrate that a high spectral resolution microcalorimeter with a large field of view and large effective area can not only detect the CGM line emission to R_500, but also map individual physical properties such as temperature and velocity. A mission designed to study the hot CGM, similar to the Line Emission Mapper probe concept, will transform all fields of astrophysics (). We created realistic mock observations, based on hydrodynamical simulations from EAGLE, IllustrisTNG, and Simba, for a large effective area instrument with 2 eV spectral resolution and a 32×32 arcmin FoV. We included all background and foreground components in these mock observations. The galaxies span a mass range of log M_200[M_⊙]=11.5-13, and have been divided into three samples based on their halo mass to represent the dominating feedback regime. For the mock observations, the low and medium mass galaxies (up to log M_200[M_⊙] ≤ 12.5) are placed at z=0.01, while the high mass galaxies are at z=0.035, and an exposure time of 1Ms is used. For each galaxy we constrain the surface brightness profile of the O VII(f), O VIII, Fe XVII (725 and 729 eV), and C VI lines. For galaxies at z=0.035 we also include the O VII(r) and the Fe XVII (826 eV) lines, but have to omit the Fe XVII (729 eV) line, since it is blended with the Milky Way foreground. Our findings are summarized as follows: * The median galaxy surface brightness profile for Milky-Way-sized galaxies at z=0.01 can be traced to R_500, which is typically 170kpc or 14arcmin. * The CGM in more massive galaxy halos up to log M_200 = 13 at z=0.035 can be measured out to R_200. Even for the lowest mass halos (down to log M_200 = 11.5) we typically will measure CGM emission out to ∼ 0.5 R_500. * The O VIII emission line is brightest in most cases, followed by O VII and Fe XVII. * Subdividing the galaxy samples by star formation rate reveals that star forming TNG100 galaxies are brighter in the core. * There is significant scatter in the CGM brightness due to galaxy-to-galaxy variation. Also, the different simulations produce slightly different CGM luminosities at a given mass scale, where EAGLE galaxies are brightest, especially at higher masses, and Simba galaxies are typically the faintest, due to the strong AGN feedback expelling the gas. 
We demonstrate that the O VII to O VIII line ratio in the mock observations can be used as a temperature tracer out to R_500 for the more massive galaxies at z=0.035, and to 0.5 R_500 for the less massive galaxies at z=0.01. We find that EAGLE galaxies are hotter in the center compared to TNG100, while having similar line ratios to TNG100 at large radii. We are able to map the substructure of galaxies out to R_500 by quantifying the azimuthal asymmetry. Interestingly, we find that TNG100 and Simba galaxies with a smaller SMBH reach high values of substructure beyond 0.4 R_500, while EAGLE galaxies do not show that level of clumping. For massive SMBHs, all simulations predict lower CGM clumping factors. This observable appears to be crucial to understanding the mechanisms of AGN feedback, as it directly points to the efficiency of the AGN in pressurizing the CGM gas. Finally, we test the 2D properties of the gaseous halos around galaxies with spectral maps of quantities such as the temperature, the line-of-sight velocity, and the O/Fe ratio. Together, these quantities can be used to pinpoint signatures of AGN feedback, such as the AGN duty cycle or energy output. JAZ, AB, and RPK are funded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. DN acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) through an Emmy Noether Research Group (grant number NE 2441/1-1). I.K. acknowledges support by the COMPLEX project from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program grant agreement ERC-2019-AdG 882679. The material is based upon work supported by NASA under award number 80GSFC21M0002. The TNG50 simulation was run with compute time granted by the Gauss Centre for Supercomputing (GCS) under Large-Scale Projects GCS-DWAR on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS). This study made use of high-performance computing facilities at Liverpool John Moores University. AstroPy <cit.>, CIAO <cit.>, Matplotlib <cit.>, NumPy <cit.>, pyXSIM <cit.>, Sherpa <cit.>, SOXS <cit.> § ANALYSIS DETAILS The analysis of simulated microcalorimeter observations of nearby galaxies resembles the traditional X-ray analysis. However, a few details have been altered to take advantage of the high spectral resolution. We describe this here in more detail. §.§ Point source removal The point sources in the simulated event files are detected in the 1-1.4 keV band, where only the CXB and NXB dominate. At softer energies the Galactic emission will dominate, and at higher energies the particle background will dilute the CXB signal due to the decrease in effective area. We show in Fig. <ref> (left) the distribution of cumulative number counts as a function of the threshold flux S (in net source counts within a 1 Ms observation). The distribution is consistent with the expected trend based on the Chandra observations by <cit.>, where a broken power law is found for distant AGNs and galaxies. If we excise the 50 brightest sources (Fig. <ref>, right), we remove about 50% of the total flux in point sources, while only removing about 2% of the detector area. For the 100 brightest sources we would remove more than 4% of the area and 60% of the flux, and if we removed 250 sources, we would excise 75% of the flux and almost 10% of the total detector area. 
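To illustrate the trade-off between excised point-source flux and lost detector area quantified above, the following minimal Python sketch can be used. The excision radius per source and the mock flux distribution are our own illustrative assumptions (the radius is chosen such that 50 excised sources correspond to roughly 2% of a ∼900 arcmin^2 field of view); they are not taken from our simulations.

import numpy as np

def excision_tradeoff(src_counts, n_excise, r_excise_arcsec=20.0, fov_arcmin2=900.0):
    # Fraction of the total point-source flux and of the detector area removed
    # when excising the n_excise brightest sources (one circular region each).
    counts = np.sort(np.asarray(src_counts))[::-1]  # brightest first
    flux_frac = counts[:n_excise].sum() / counts.sum()
    area_frac = n_excise * np.pi * r_excise_arcsec**2 / (fov_arcmin2 * 3600.0)
    return flux_frac, area_frac

# purely illustrative mock flux distribution, not our simulated logN-logS
rng = np.random.default_rng(0)
mock_counts = (rng.pareto(a=1.5, size=4000) + 1.0) * 10.0
for n in (50, 100, 250):
    f, a = excision_tradeoff(mock_counts, n)
    print(f"excise {n:3d} brightest: {100 * f:.0f}% of the point-source flux, {100 * a:.1f}% of the area")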
§.§ Surface brightness extraction In order to trace the emission lines (such as O VIII) in a narrow band out to large radii, it is important to quantify the background precisely. While the MW foreground should be greatly reduced due to the redshift, the exact level of the background is still crucial when we want to make a detection with a signal that is only 10% of the total background level. We separate the background components into the MW foreground (Local Hot Bubble - LHB, Galactic Halo Emission - GHE, and North Polar Spur - NPS), the unresolved/unremoved point source contribution from distant AGNs and galaxies (Cosmic X-ray Background - CXB), and the interaction of charged particles (galactic cosmic rays) with the detector and/or the satellite (producing secondary particles, such as fluorescent X-rays or electrons), which is not focused by the mirrors (Non X-ray Background - NXB). The modeling of these components in our simulated X-ray observations has been described in section <ref>. To estimate the background counts in a narrow band at the redshifted CGM line, we take a blank field observation without a science target, spatially close to the observation of each nearby galaxy of interest. We assume that the MW foreground emission does not vary within the FoV of our observation, and is also consistent with the MW foreground in the blank field observation. The same assumption is made for the NXB, although its contribution is less important at the emission lines of interest. Our focus is set on the CXB contribution to the total background, as it not only differs between the blank field and the galaxy observation, but also varies slightly between the extraction regions (annuli) of the surface brightness profile. Therefore, we apply a CXB correction factor to the blank field background counts that is determined from the hard band (e.g., 1-1.4 keV), where the MW component is insignificant. We label the extracted total counts (CGM and total background) in a narrow line band and small extraction region within our observation as cts^ Obs_ line. We define the counts in the broad band and the blank field observation (Bkg) accordingly,
cts^ Obs_ line = CGM^ Obs_ line + MW^ Obs_ line + CXB^ Obs_ line + NXB^ Obs_ line = CGM^ Obs_ line + ℬ^ Obs_ line ,
cts^ Obs_ broad = CGM^ Obs_ broad + MW^ Obs_ broad + CXB^ Obs_ broad + NXB^ Obs_ broad = CGM^ Obs_ broad + ℬ^ Obs_ broad ,
cts^ Bkg_ line = MW^ Bkg_ line + CXB^ Bkg_ line + NXB^ Bkg_ line = ℬ^ Bkg_ line ,
cts^ Bkg_ broad = MW^ Bkg_ broad + CXB^ Bkg_ broad + NXB^ Bkg_ broad = ℬ^ Bkg_ broad .
Note that the MW and NXB components scale between the observation and blank field region only with the area (the observing time has already been taken into account), and in the broad band, the CGM component has no real contribution. We utilize a broad band above 1 keV (e.g., 1-1.4 keV, see Fig. <ref>) to estimate the difference in the CXB between our extraction region in the observation and the total blank field (whole FoV). We derive a scaling factor, f_ broad = cts^ Obs_ broad/(f_A ×ℬ^ Bkg_ broad) , where f_A is the ratio of the extraction area of the observation and the background. f_ broad is typically close to one, especially if the extraction region in the observation is large (outer annuli). In order to minimize Poisson noise, we estimate the narrow line band background from the entire blank field observation and scale it by the area and f_ broad. 
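The scaling described above can be summarized in a short sketch; the function and variable names as well as the example numbers are ours and purely illustrative.

def narrowband_background(cts_obs_broad, cts_bkg_broad, cts_bkg_line, f_A):
    # cts_obs_broad: broad-band (e.g., 1-1.4 keV) counts in the extraction region
    # cts_bkg_broad: broad-band counts in the entire blank field observation
    # cts_bkg_line : narrow line-band counts in the entire blank field observation
    # f_A          : area ratio of the extraction region to the blank field
    f_broad = cts_obs_broad / (f_A * cts_bkg_broad)  # CXB correction factor, ~1 for large regions
    return f_A * f_broad * cts_bkg_line              # background estimate in the narrow line band

# illustrative numbers, not from the mock observations
bkg_line = narrowband_background(cts_obs_broad=5.2e3, cts_bkg_broad=4.9e4,
                                 cts_bkg_line=1.1e4, f_A=0.1)
print(f"estimated background in the line band: {bkg_line:.0f} counts")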
We tested this method with a randomly chosen galaxy halo from TNG100 (ID 419061), and defined 7 radial bins of 0.15 R_500 width to reach an outer radius of 1.05 R_500. In each of these regions, we extracted the spectrum from the observation, and fitted it with the model components (see, e.g., Fig. <ref>), so that we have precise knowledge of the background components and the CGM emission. From the fitted model components, we are able to calculate the precise total background in each annulus, ℬ^ Obs_ line, as well as the CGM counts, CGM^ Obs_ line, and make a comparison with the corrected blank field background estimate, ℬ^ Bkg_ line× f_ broad. We find that the difference in counts between the actual background, ℬ^ Obs_ line, and the blank field predicted background is always much less than 1% of the CGM counts in each annulus (reaching 0.5% around R_500). Therefore, we employ this method of scaling the blank field by the broad band CXB contribution to estimate the background. Deriving the background from the spectral fit is computationally expensive and not feasible for each galaxy and extraction region. In order to define statistically significant radial bins of the surface brightness profile, we require a minimum signal-to-noise ratio of 3. The signal is directly calculated from the measured cts^ Obs_ line minus the background estimate ℬ^ Bkg_ line× f_ broad. The noise is assumed to be Noise = √( cts^ Obs_ line + ℬ^ Obs_ line× (1-f_ broad) + 0.1×ℬ^ Obs_ line) , where we account for systematic uncertainties in the background, as well as uncertainties in the CXB contribution. We also tested the impact of MW foreground uncertainties on the narrow line counts. With the typical uncertainties in temperature (e.g., of the LHB) from the foreground model fitting (using only the narrow lines of C VI, O VII, O VIII, and Fe XVII), we find a difference in background counts of less than 1% of the CGM counts. § LIST OF SIMULATED GALAXY HALOS Tables <ref>, <ref>, and <ref> list all the simulated galaxy halos that are used for the line surface brightness profiles, the line ratios, and the azimuthal substructure test. As described in section <ref>, we use three simulation suites, TNG100, EAGLE (100 Mpc box) and Simba (100 Mpc box), and subselect 120 galaxies from each. These galaxies are equally divided into a low mass sample log_10 M_200/M_⊙ = 11.5-12, a medium mass sample log_10 M_200/M_⊙ = 12-12.5, and a high mass sample log_10 M_200/M_⊙ = 12.5-13. In Figs. <ref>, <ref>, and <ref>, we show stacked O VII and O VIII images for each of the galaxies, including instrumental background, foreground emission, and CXB (see section <ref>). 
rrrrrrr|rrrrrr IllustrisTNG halos Halo M_⋆ M_200 M_ BH SFR R_500 Halo M_⋆ M_500 M_ BH SFR R_500 10^10M_⊙ 10^12M_⊙ 10^8 M_⊙ M_⊙ yr^-1 kpc 10^10M_⊙ 10^12M_⊙ 10^8 M_⊙ M_⊙ yr^-1 kpc 13lTNG100 low mass sample 582879 0.63 0.33 0.32 0.487 99 541402 1.27 0.64 0.54 1.189 123 539812 1.56 0.63 0.62 0.418 125 581475 1.71 0.34 0.86 0.076 102 513718 3.71 0.95 1.25 2.564 142 530171 0.98 0.75 0.22 1.217 125 509402 3.21 0.96 1.08 0.507 142 489863 1.99 0.99 0.79 4.143 130 505030 2.49 0.84 0.54 2.334 138 513262 3.98 0.97 1.89 0.000 142 557923 1.31 0.49 0.34 1.372 114 512631 0.44 0.93 0.19 0.667 143 562983 0.29 0.38 0.09 0.466 94 509468 4.90 0.96 1.01 4.559 144 551694 1.04 0.49 0.41 0.967 111 567124 1.01 0.47 0.57 1.141 113 537893 1.24 0.44 0.53 1.357 110 525081 1.77 0.84 0.88 6.793 135 524295 2.99 0.82 0.98 2.155 135 539998 2.16 0.66 1.01 0.534 127 562544 0.52 0.51 0.23 0.479 116 528531 1.43 0.86 0.81 1.569 141 555033 0.77 0.37 0.36 0.623 103 554843 0.41 0.50 0.13 0.402 109 556431 2.02 0.61 1.35 0.000 122 505616 4.03 0.98 1.66 0.000 145 538080 3.26 0.70 1.35 0.000 129 523231 2.35 0.81 0.78 1.281 135 559158 1.43 0.45 0.50 1.081 106 558021 1.10 0.52 0.32 1.231 116 520230 1.68 0.73 0.53 0.885 130 465921 0.91 0.78 0.39 1.842 127 506526 2.07 0.89 1.17 1.163 134 505680 0.34 0.38 0.08 0.403 87 578707 0.43 0.34 0.30 0.449 99 567178 1.42 0.44 0.29 1.512 112 589067 1.00 0.35 0.59 0.306 103 567085 0.63 0.41 0.22 0.970 107 547892 1.76 0.58 1.06 1.285 121 549236 0.67 0.50 0.31 0.774 110 13lTNG100 medium mass halos 497646 4.90 1.09 0.91 2.781 149 497800 4.45 1.20 1.56 0.000 156 487244 6.27 1.20 1.56 2.622 155 490577 6.15 1.30 1.50 0.005 160 421835 7.19 3.10 1.69 0.177 208 419061 3.76 1.92 1.01 5.303 164 460273 6.27 1.79 2.39 0.005 175 484427 5.20 1.38 1.90 0.000 157 422831 6.59 2.97 2.70 0.000 203 449034 8.66 2.34 2.75 0.000 189 411321 6.72 1.93 1.80 0.479 172 449549 7.59 2.24 2.28 0.000 189 428813 6.57 2.70 1.75 0.002 199 442855 7.61 2.24 2.47 0.001 188 483900 5.75 1.31 1.61 0.718 159 460746 8.07 1.86 2.49 0.015 174 444545 5.25 2.79 3.56 0.000 200 438996 5.76 1.78 2.60 0.000 172 463453 5.26 1.59 1.89 0.000 163 494771 4.13 1.11 1.04 3.695 149 461136 6.04 1.80 1.81 0.117 177 493920 5.39 1.40 2.56 0.000 162 432711 8.37 2.67 2.62 0.649 198 471801 4.27 1.32 1.83 0.000 155 418230 8.11 2.42 1.61 0.019 192 428158 7.14 2.95 3.57 0.000 207 448269 5.25 2.03 1.86 1.907 174 433537 4.76 2.65 3.24 0.000 196 459270 4.32 2.14 2.95 0.000 187 426764 8.91 2.39 2.65 0.248 187 462391 4.59 1.90 2.11 0.000 175 460823 5.29 1.98 2.19 0.000 179 436233 5.18 2.63 2.75 0.000 196 456014 4.82 2.04 1.70 0.000 173 432831 5.17 2.97 3.41 0.000 206 480587 5.02 1.43 2.23 0.000 161 426483 10.13 2.88 3.15 0.002 205 471109 6.03 1.50 1.47 0.021 164 458378 6.63 2.19 2.72 0.000 184 469930 7.65 1.51 1.58 1.236 166 13lTNG100 high mass halos 390859 8.48 4.75 4.49 0.000 237 415373 9.43 3.66 4.95 0.000 219 383307 10.31 4.87 4.52 0.365 232 332068 8.15 5.34 2.78 0.034 217 387236 12.15 4.46 3.65 0.865 230 341356 11.92 7.18 5.27 0.064 273 377398 17.08 6.09 5.98 0.003 260 414447 8.42 3.44 3.35 0.000 211 333672 10.32 8.70 2.34 0.472 288 312412 13.09 7.79 4.97 0.000 282 394439 12.03 4.29 3.39 1.286 230 399520 8.74 3.93 4.83 0.000 220 249164 13.77 9.95 6.87 0.069 298 372568 13.64 5.53 5.60 0.104 254 413091 12.81 3.67 2.39 0.021 218 342689 13.01 4.74 3.69 0.201 238 346420 11.97 5.31 5.37 0.056 244 359639 14.99 6.41 7.83 0.456 261 360916 5.75 5.98 4.77 0.000 238 405460 12.18 3.23 5.59 0.000 206 345812 10.51 8.82 3.41 5.729 295 288014 7.10 9.55 2.24 0.246 292 384103 6.69 4.91 
2.94 0.010 218 339547 11.91 7.57 4.71 0.000 271 359086 8.14 7.75 4.70 0.000 283 382059 11.56 3.80 2.87 2.505 221 377212 14.52 5.98 7.86 0.079 258 376132 7.35 3.61 4.06 0.000 214 337444 10.97 7.94 6.50 0.000 278 329105 14.29 8.71 8.22 1.058 294 361418 6.22 6.46 4.64 0.000 252 399969 11.73 4.23 4.43 0.000 227 313402 10.00 9.02 6.59 0.002 245 386429 9.24 4.78 1.69 4.350 208 327822 10.97 4.27 2.22 4.068 222 371859 8.90 5.62 7.07 0.000 253 298206 12.79 8.75 5.82 1.032 282 398110 11.38 4.01 4.98 0.033 230 312891 15.21 9.78 11.30 0.088 303 380119 12.16 4.85 5.02 0.290 239 13lTNG50 halos 358608 15.06 5.04 4.38 4.018 243 467415 8.68 2.08 2.13 9.224 183 rrrrrrr|rrrrrr EAGLE halos Halo M_⋆ M_200 M_ BH SFR R_500 Halo M_⋆ M_500 M_ BH SFR R_500 10^10M_⊙ 10^12M_⊙ 10^8 M_⊙ M_⊙ yr^-1 kpc 10^10M_⊙ 10^12M_⊙ 10^8 M_⊙ M_⊙ yr^-1 kpc 13llow mass sample 2703 0.45 0.42 0.02 0.691 104 2144 0.79 0.44 0.01 0.292 110 2258 0.27 0.52 0.02 0.477 109 2824 0.66 0.44 0.01 0.431 109 2577 0.21 0.37 0.00 0.498 95 1949 1.15 0.69 0.02 1.131 125 2578 1.15 0.52 0.01 0.480 117 1791 0.99 0.86 0.05 0.606 139 2044 1.07 0.71 0.01 0.190 117 2478 0.53 0.39 0.02 0.306 104 2535 0.71 0.51 0.02 0.614 114 2370 0.92 0.55 0.02 0.992 117 1662 1.32 0.85 0.07 1.099 133 2428 0.41 0.33 0.01 0.411 98 2899 0.40 0.38 0.01 0.750 98 2726 0.71 0.53 0.08 0.000 119 2592 1.14 0.50 0.05 0.205 115 1730 2.86 0.78 0.05 1.402 132 2345 0.36 0.53 0.01 0.230 117 2864 0.56 0.41 0.01 0.355 105 2557 0.28 0.37 0.02 0.326 98 2516 0.83 0.48 0.03 0.434 113 2671 0.25 0.48 0.01 0.408 100 1725 1.93 0.81 0.11 0.003 131 2118 0.85 0.39 0.01 0.823 105 2902 0.36 0.38 0.01 0.241 97 1647 2.13 0.87 0.08 0.577 138 2874 0.31 0.39 0.02 0.147 105 2184 1.47 0.65 0.03 0.690 125 1803 1.64 0.73 0.03 1.379 128 1581 1.21 0.82 0.12 0.018 123 2688 0.73 0.52 0.04 0.338 115 2176 1.09 0.68 0.01 0.547 124 2338 0.29 0.63 0.03 0.304 125 1564 2.37 0.92 0.07 0.885 140 2852 0.63 0.46 0.04 0.578 112 2716 0.56 0.42 0.01 0.478 100 1569 1.50 0.84 0.03 0.983 132 2066 1.59 0.69 0.05 0.768 126 2157 1.43 0.59 0.05 1.220 121 13lmedium mass halos 1096 2.81 1.45 0.28 0.111 160 798 4.79 1.67 0.31 0.010 162 700 4.45 2.70 0.81 1.100 201 816 4.28 1.86 0.07 2.487 170 521 2.76 2.19 0.73 0.255 185 1256 1.61 1.20 0.04 0.776 155 480 6.90 3.12 0.76 0.218 203 1303 1.44 1.27 0.06 0.632 156 964 4.46 1.55 0.04 2.218 166 846 2.02 1.82 0.23 0.041 164 789 2.47 2.19 0.47 0.000 189 1233 2.02 1.06 0.10 0.548 144 1244 2.56 1.10 0.27 1.034 147 774 4.40 2.20 0.53 0.021 185 965 2.84 1.53 0.28 0.577 158 1133 3.28 1.28 0.16 1.896 153 1350 2.59 1.05 0.15 1.869 147 1296 3.15 1.09 0.26 0.910 149 927 5.02 1.83 0.49 1.128 179 833 5.81 2.03 0.79 2.467 182 482 4.98 2.67 0.58 1.394 198 660 5.16 2.23 0.14 7.143 183 744 3.43 2.06 0.72 0.053 183 1122 3.16 1.51 0.31 0.500 165 1205 2.47 1.06 0.19 1.044 144 597 4.57 3.01 0.48 2.220 203 933 3.30 1.74 0.52 0.000 170 977 3.68 1.63 0.30 1.095 168 527 4.90 2.92 0.80 2.603 196 914 2.22 1.53 0.02 3.448 164 1142 2.33 1.25 0.49 0.024 155 961 1.82 1.69 0.39 0.000 168 1117 3.30 1.32 0.13 2.053 155 835 3.91 1.79 0.08 2.951 173 512 4.32 2.70 0.39 2.959 196 622 3.81 3.03 0.41 1.113 207 621 4.63 2.41 0.10 2.462 177 689 2.65 2.14 0.27 0.412 175 709 3.29 2.35 0.24 1.544 178 577 4.82 2.42 0.45 1.189 181 13lhigh mass halos 207 8.57 4.86 0.34 6.738 223 218 6.27 7.19 1.27 0.533 261 214 12.05 8.57 2.52 0.101 294 262 8.67 6.05 1.67 0.094 255 209 9.18 8.53 2.77 2.621 287 541 8.47 3.29 1.22 1.588 210 235 7.45 6.75 1.71 3.997 264 399 5.15 3.89 2.18 0.000 226 251 9.04 6.17 2.02 3.160 252 342 4.51 3.89 1.20 0.199 224 253 9.42 6.85 
1.34 4.110 262 132 11.83 6.70 1.35 1.723 255 244 8.89 6.58 0.89 4.195 255 169 7.16 9.12 0.68 4.680 256 377 9.35 4.19 0.85 0.538 222 224 5.07 4.59 1.44 2.278 236 283 6.32 5.87 1.17 3.746 252 231 13.12 7.43 2.13 0.000 271 208 8.55 3.93 1.21 0.000 226 422 6.10 4.14 0.91 1.463 230 282 3.92 5.62 0.27 2.259 219 141 12.53 8.53 2.14 4.794 285 189 12.05 7.94 4.10 0.124 275 121 9.44 9.98 2.75 1.904 298 248 4.91 5.62 2.34 0.081 247 439 6.53 3.58 0.66 3.127 212 151 13.37 9.33 2.03 4.601 285 203 9.59 4.32 1.53 1.344 229 194 11.61 8.99 1.71 3.159 277 360 8.57 4.95 1.52 0.008 240 539 6.25 3.24 1.01 0.019 210 177 8.83 9.25 2.88 2.211 292 427 6.19 4.05 0.79 0.765 223 254 11.07 7.55 1.64 0.385 274 334 8.81 5.09 1.80 0.000 240 243 7.60 7.78 0.21 0.124 277 408 6.70 4.39 0.30 3.627 227 277 3.39 4.65 0.43 1.054 212 276 8.18 4.37 1.13 0.104 221 337 8.00 5.22 1.60 0.793 235 rrrrrrr|rrrrrr Simba halos Halo M_⋆ M_200 M_ BH SFR R_500 Halo M_⋆ M_500 M_ BH SFR R_500 10^10M_⊙ 10^12M_⊙ 10^8 M_⊙ M_⊙ yr^-1 kpc 10^10M_⊙ 10^12M_⊙ 10^8 M_⊙ M_⊙ yr^-1 kpc 13llow mass sample 4721 1.94 0.93 0.51 1.702 140 6326 1.38 0.68 0.28 2.072 126 5266 0.63 0.75 0.04 1.432 128 4496 1.43 0.94 0.16 4.941 133 7185 1.60 0.58 0.24 1.664 121 6991 1.10 0.50 0.20 1.610 109 3729 1.68 0.97 0.43 2.214 124 5205 2.40 0.83 0.12 2.447 136 7837 0.91 0.49 0.17 2.523 113 4172 1.34 0.93 0.38 1.880 135 5566 1.74 0.60 0.49 1.522 119 11638 0.83 0.32 0.05 1.070 98 6542 1.42 0.60 0.37 1.558 122 4862 1.65 0.53 0.38 1.122 119 6652 1.18 0.50 0.26 2.086 111 5494 2.18 0.65 0.57 1.301 123 7552 1.63 0.58 0.27 1.385 119 4267 0.92 0.92 0.34 2.029 132 9143 0.87 0.47 0.09 1.036 111 6156 0.87 0.58 0.04 5.991 116 7189 0.85 0.62 0.14 1.226 115 3768 1.89 0.64 0.46 3.384 119 5263 2.38 0.51 0.56 3.635 113 5762 1.15 0.36 0.33 2.065 104 8135 0.83 0.38 0.16 1.581 103 4965 0.30 0.40 0.29 2.137 107 4672 0.95 0.97 0.24 1.679 140 5557 1.31 0.75 0.28 1.001 130 6084 1.18 0.35 0.18 2.453 100 9136 1.11 0.43 0.22 1.063 107 11327 1.04 0.33 0.24 1.031 101 6800 1.06 0.54 0.14 1.076 116 5936 0.93 0.39 0.21 2.142 102 8881 0.82 0.50 0.08 1.207 116 4439 1.65 0.73 0.47 1.755 126 11362 0.69 0.35 0.10 1.166 100 10409 0.70 0.37 0.11 1.507 103 5802 1.31 0.38 0.37 1.842 104 4802 3.35 0.91 0.44 4.681 141 5565 1.33 0.70 0.21 1.498 124 13lmedium mass halos 2788 7.33 1.34 0.87 5.462 158 2992 5.23 1.06 0.40 15.196 148 2722 4.66 1.55 0.93 10.480 159 2401 4.21 1.12 1.10 1.439 151 1323 6.02 2.22 0.96 3.030 170 1792 3.99 1.38 0.21 11.855 147 3956 4.32 1.13 0.40 5.601 150 1864 11.30 2.48 1.17 3.974 195 1278 7.51 2.37 1.30 2.963 180 2496 2.55 1.49 0.23 9.434 156 2132 11.38 2.12 0.77 4.576 188 1650 2.30 1.84 1.38 1.849 170 2030 4.23 1.79 0.49 4.188 160 2166 2.76 1.08 0.42 5.268 146 395 8.61 3.13 3.03 3.697 206 1201 6.09 2.33 1.25 1.188 180 2828 4.97 1.57 1.19 5.388 167 2622 2.52 1.47 0.99 1.495 157 2058 6.82 2.03 0.99 5.198 183 1380 6.64 3.13 0.61 1.297 209 2131 3.20 1.92 0.88 1.257 172 2089 6.11 2.23 1.74 1.040 186 4086 4.22 1.15 0.15 6.362 152 3606 5.71 1.23 1.43 2.548 154 1403 3.92 2.24 0.90 3.556 176 1726 7.23 1.99 2.01 6.853 179 1985 7.44 2.13 0.67 1.127 183 2081 5.61 2.19 0.40 9.351 191 1482 6.83 2.86 1.91 3.100 197 1942 15.99 2.56 0.63 5.588 199 2204 3.86 1.59 0.48 1.382 164 2790 5.33 1.58 0.29 14.680 168 3101 6.34 1.45 0.36 9.689 163 3409 3.16 1.12 1.04 2.281 148 1762 2.39 1.57 0.79 4.764 154 3012 4.14 1.49 1.02 3.209 163 2831 3.98 1.45 0.16 13.514 152 2036 10.33 2.31 1.31 5.877 193 2066 1.14 1.65 0.24 3.519 144 3883 8.74 1.22 0.55 1.652 158 13lhigh mass halos 793 11.00 5.82 1.25 5.868 236 483 7.07 7.41 2.06 
1.367 241 414 11.41 9.12 9.96 7.806 296 928 5.38 3.69 1.77 2.034 201 343 17.38 9.31 8.13 1.533 292 406 17.12 9.79 4.20 6.775 300 704 15.05 6.15 2.46 1.044 259 859 19.38 3.63 2.00 8.954 209 550 5.75 6.84 2.71 2.130 256 729 11.17 4.49 3.22 2.392 236 589 8.08 5.37 4.71 1.272 234 499 9.01 8.77 2.22 2.046 280 1101 13.65 4.24 1.20 5.909 231 533 9.96 8.53 4.88 1.674 283 606 8.68 7.33 1.81 1.538 276 576 6.72 4.69 4.46 3.734 233 797 5.25 5.10 3.96 2.142 242 981 19.67 3.73 2.51 1.825 220 587 6.22 6.79 3.92 5.185 244 530 10.34 7.74 7.76 1.340 259 422 6.25 3.88 3.30 7.754 204 792 4.40 5.33 2.31 1.513 242 1162 9.05 3.98 1.35 1.839 214 1250 5.21 3.48 2.66 1.492 195 675 6.11 7.43 3.49 1.278 269 303 10.63 7.19 4.63 4.744 267 663 5.61 5.19 2.86 3.604 239 521 9.28 6.21 1.21 4.473 200 647 10.31 7.93 4.45 1.167 283 1369 3.15 3.58 1.66 9.063 217 1302 4.14 3.16 1.78 2.846 202 1375 4.02 3.56 0.83 4.062 219 1264 11.70 3.60 0.96 4.972 218 930 5.35 4.52 0.83 3.742 226 410 9.51 9.42 4.18 1.048 303 1067 5.37 3.26 2.46 2.147 201 948 4.69 4.21 3.68 3.268 229 396 13.91 7.52 5.18 1.300 263 472 7.78 7.03 3.55 2.297 257 1343 11.42 3.41 0.34 1.601 216
Doubly Robust Estimation of Direct and Indirect Quantile Treatment Effects with Machine Learning
Yu-Chin Hsu (Institute of Economics, Academia Sinica, 128, Section 2, Academia Road, Nankang, Taipei 115, Taiwan)
Martin Huber (University of Fribourg, Department of Economics, Bd. de Pérolles 90, 1700 Fribourg, Switzerland)
Yu-Min Yen (Department of International Business, National Chengchi University, 64, Section 2, Zhi-nan Road, Wenshan, Taipei 116, Taiwan)
August 1, 2023
We suggest double/debiased machine learning estimators of direct and indirect quantile treatment effects under a selection-on-observables assumption. This permits disentangling the causal effect of a binary treatment at a specific outcome rank into an indirect component that operates through an intermediate variable called a mediator and an (unmediated) direct impact. The proposed method is based on the efficient score functions of the cumulative distribution functions of potential outcomes, which are robust to certain misspecifications of the nuisance parameters, i.e., the outcome, treatment, and mediator models. We estimate these nuisance parameters by machine learning and use cross-fitting to reduce overfitting bias in the estimation of direct and indirect quantile treatment effects. We establish uniform consistency and asymptotic normality of our effect estimators. We also propose a multiplier bootstrap for statistical inference and show its validity. Finally, we investigate the finite sample performance of our method in a simulation study and apply it to empirical data from the National Job Corps Study to assess the direct and indirect earnings effects of training. JEL classification: C01, C21 Keywords: Causal inference, efficient score, mediation analysis, quantile treatment effect, semiparametric efficiency § INTRODUCTION Causal mediation analysis aims at understanding the mechanisms through which a treatment affects an outcome of interest. It disentangles the treatment effect into an indirect effect, which operates through a mediator, and a direct effect, which captures any causal effect not operating through the mediator. Such a decomposition of the total treatment effect permits learning the drivers of the effect, which may be helpful for improving the design of a policy or intervention. Causal mediation analysis typically focuses on the estimation of average indirect and direct effects, which may mask interesting effect heterogeneity across individuals. For this reason, several contributions focusing on total (rather than direct and indirect) effects consider quantile treatment effects (QTE) instead of average treatment effects (ATE). The QTE corresponds to the difference between the potential outcomes with and without treatment at a specific rank of the potential outcome distributions, but has so far received little attention in the causal mediation literature. 
The main contribution of this paper is to propose doubly robust/debiased machine learning (DML) estimators of the direct and indirect QTE under a selection-on-observables (or sequential ignorability) assumption, implying that the treatment and the mediator are as good as random when controlling for observed covariates. The method computes the quantile of a potential outcome by inverting an DML estimate of its cumulative distributional function (c.d.f.). This approach makes use of the efficient score function of the c.d.f., into which models for the outcome, treatment, and mediator enter as plug-in or nuisance parameters. Relying on the efficient score function makes treatment effect estimation robust, i.e., first-order insensitive to (local) misspecifications of the nuisance parameters, a property known as <cit.>-orthogonality. This permits estimating the nuisance parameters by machine learning (which generally introduces regularization bias) and still obtains root-n-consistent treatment effect estimators, given that certain regularity conditions hold. In addition, cross-fitting is applied to mitigate overfitting bias. Cross-fitting consists of estimating the nuisance parameter models and treatment effects in different subsets of the data and swapping the roles of the data to exploit the entire sample for treatment effect estimation, see <cit.>. We then establish uniform consistency and asymptotic normality of the effect estimators. For conducting statistical inference, we propose a multiplier bootstrap procedure and show the validity of the multiplier bootstrap. We also provide a simulation study to investigate the finite sample performance of our method. Finally, we apply our method to empirical data from the Job Corps Study to analyse the direct and indirect QTE of participation in a training program on earnings when considering general health as a mediator. The results point to positive direct effects of training across a large range of the earnings quantiles, while the indirect effects are generally close to zero and mostly statistically insignificant. To more formally discuss the direct and indirect effects of interest, let Y denote the outcome of interest, D the binary treatment, M the mediator, and X a vector of pre-treatment covariates. Following <cit.>, we may represent causal relationships between (Y,D,M,X) by means of a directed acyclic graph (DAG), as provided in Figure <ref>. The causal arrows in the DAG imply that (D,M,X) may affect Y, (D,X) may affect M, and X may affect D. We can therefore define the outcome as a function of the treatment and the mediator, Y=Y(D, M), and the mediator as a function of the treatment, M=M(D), while being agnostic about X. Furthermore, we make use of the potential outcome notation advocated by <cit.> and <cit.> to denote by Y(d,m) the potential outcome if D were set to a specific value d∈{ 0,1} and M were set to some value m in the support of the mediator, while M(d) denotes the potential mediator for D=d. Accordingly, Y(d, M(d)) is the potential outcome if D were set to d, implying that the mediator is not forced to take a specific value m, but corresponds to its potential value under D=d. Depending on the actual treatment and mediator values of an observation, Y(d, M(d)), Y(d,m), M(d) is either observed or counterfactual. Furthermore, the potential outcome Y(d, M(1-d)) is inherently counterfactual, as no observation can be observed in the opposite treatment states d and 1-d at the same time. 
Armed with this notation, we define the causal parameters of interest. The natural direct effect (NDE), which is for instance considered in <cit.>, <cit.> and <cit.>, is based on a comparison of the potential outcomes when varying the treatment, but keeping the potential mediator fixed at treatment value D=d: Y(1,M(d))-Y(0,M(d)). The natural indirect effect (NIE) is based on a comparison of the potential outcomes when fixing the treatment to D=d, but varying the potential mediator according to the values it takes under treatment and non-treatment: Y(d,M(1))-Y(d,M(0)). It is worth noting that the NDE and the NIE defined upon opposite treatment states d and 1-d sum up to the total effect (TE): Y(1,M(1))-Y(0,M(0)). Previous methodological research on causal mediation predominantly focused on the estimation of averages of the aforementioned NDE, NIE and TE or of averages of related path-wise causal effects <cit.>. We complement this literature by suggesting a method for estimating natural direct and indirect QTEs, which permits assessing the effects across the entire distribution of potential outcomes. The estimation of the total (rather than the direct or indirect) QTE has already been studied in multiple contributions <cit.>. Among the few studies considering QTEs in causal mediation is <cit.>, suggesting a two-stage quantile regression estimation to estimate the controlled direct QTE, i.e., Y(1,m)-Y(0,m) at a specific rank, as well as a particular indirect QTE. The latter is based on first estimating the mediator at a specific rank and then including it in a quantile regression of the outcome, which generally differs from the natural indirect QTE considered in this paper. Furthermore, our approach is nonparametric and relies on results on semiparametric efficiency, very much in contrast to the parametric approach of <cit.>. <cit.> adapted the Changes-in-Changes (CiC) approach of <cit.> to estimate direct and indirect QTEs in subgroups defined in terms of how the mediator reacts to (or complies with) the treatment. The NDE and NIE investigated here differ from such subgroup-specific causal parameters and furthermore, our identification strategy relies on a selection-on-observables (or sequential ignorability) assumption rather than CiC. The remainder of this study is organized as follows. Section <ref> introduces the natural direct and indirect QTE, the identifying assumptions, the effect estimators based on double/debiased machine learning, and the multiplier bootstrap procedure for inference. Section <ref> gives the theoretical results on the asymptotic behavior of our methods. Section <ref> presents a simulation study that investigates the finite sample properties of our method. Section <ref> provides an empirical application to data from the National Job Corps Study to assess the direct earnings effects of training across the earnings distribution, as well as the indirect effects operating via general health. Section <ref> concludes. § METHODOLOGY §.§ Causal effects and Identifying Assumptions To define the direct and indirect QTEs of interest, let Q_Z(τ):=inf{ q∈ℝ:P(Z≤ q)≥τ} denote the τth-quantile of a random variable Z, where τ∈(0,1). Furthermore, let Q_Z|V(τ):=inf{ q∈ℝ:P(Z≤ q|V)≥τ} denote the τth-quantile of Z conditional on another random variable (or a random vector) V, where τ∈(0,1). Let F_Z(z) and f_Z(z) denote cumulative distribution function (c.d.f.) and probability density or probability mass function (p.d.f. or p.m.f.) 
of Z at z, and F_Z|V(z|v) and f_Z|V(z|v) denote the c.d.f. and p.d.f. (or p.m.f.) of Z at z conditional on V=v. We define the natural direct quantile treatment effect (NDQTE) at the τth-quantile as: NDQTE(τ):=Q_Y(1,M(0))(τ)-Q_Y(0,M(0))(τ), and the natural indirect quantile treatment effect (NIQTE) at the τth-quantile as: NIQTE(τ):=Q_Y(1,M(1))(τ)-Q_Y(1,M(0))(τ). The NDQTE in equation (<ref>) corresponds to the direct effect of the treatment when fixing the mediator at its potential value under non-treatment, M(0). Alternatively, we may consider the NDQTE when conditioning on the potential mediator under treatment, M(1): NDQTE^'(τ):=Q_Y(1,M(1))(τ)-Q_Y(0,M(1))(τ). Likewise, the NIQTE in equation (<ref>) is the indirect effect when varying the potential mediators but keeping the treatment fixed ad D=1, but we may also consider the indirect effect conditional on D=0: NIQTE^'(τ):=Q_Y(0,M(1))(τ)-Q_Y(0,M(0))(τ). If the effects in expressions (<ref>) and (<ref>) (or (<ref>) and (<ref>)) are different, then this implies effect heterogeneity due to interaction effects between the treatment and the mediator. The sum of NDQTE (NDQTE') and NIQTE (NIQTE') yields the total quantile treatment effect (TQTE) at the τth-quantile, which includes all causal mechanisms through which the treatment affects the outcome: TQTE(τ) =NDQTE(τ)+NIQTE(τ)=NDQTE^'(τ)+NIQTE^'(τ) =Q_Y(1,M(1))(τ)-Q_Y(0,M(0))(τ). We aim at estimating the quantile treatment effects (<ref>) to (<ref>).[We do not consider the controlled direct quantile treatment effect (CDQTE) at the τ-quantile, CDQTE(τ):=Q_Y(1,m)(τ)-Q_Y(0,m)(τ), which can be identified under less stringent assumptions than required for the identification of natural effects. ] To this end, we first need to estimate the τth-quantile of the relevant potential outcomes, by inverting estimates of the corresponding c.d.f.'s at the τth-quantile. Let F_Y(d,M(d^'))(a) denote the c.d.f. of the potential outcome Y(d,M(d^')) at value a. To identify F_Y(d,M(d^'))(a) in the data, we impose the following assumptions. 1. For any observation and d∈{ 0,1} as well as m in the support of M, M=M(d) if D=d, and Y=Y(d,m) if D=d and M=m. 2. (Y(d,m),M(d^'))⊥ D|X=x for (d,d^')∈{ 1,0}^2 and m,x in the support of (M,X). 3. Y(d,m)⊥ M(d^')|D=d^',X=x for (d,d^')∈{ 1,0}^2 and m,x in the support of (M,X). 4. f_D|M,X(d|m,x)>0 for any d∈{ 1,0} and m,x in the support of (M,X). Assumption 1.1 implies the stable unit treatment value assumption (SUTVA), see <cit.> and <cit.>, stating the potential mediators and potential outcomes are only a function of an individual's own treatment and mediator states, respectively, which are well defined (ruling out multiple treatment or mediator versions). Assumptions 1.2 and 1.3 are sequential ignorability or selection-on-observables conditions <cit.> for causal mediation analysis. Assumption 1.2 states that conditional on X, the treatment variable D is independent of the potential outcome Y(d,m) and the potential mediator M(d^'). This assumption also implies that Y(d,m)⊥ D|M(d^')=m^',X=x. Assumption 1.3 requires that Y(d,m) and M(d^') are independent, too, conditional on X and D. Even if treatment D were random, this would not suffice to identify direct and indirect effects and for this reason, we need to impose an identifying assumption like Assumption 1.3 to tackle the endogeneity of the mediator. 
Assumption 1.4 is a common support condition, which says that the treatment is not deterministic in covariates X and mediator M such that for each covariate-mediator combination in the population, both treated and non-treated subjects exist. Under Assumptions 1.1 to 1.4, we obtain the following identification result. Under Assumptions 1.1 to 1.4, F_Y(d,M(d^'))(a) = ∫ g_d,d^',a(x)f_X(x)dx where (d,d^')∈{0,1}^2, a∈𝒜 where 𝒜 is a countable subset of ℝ, and g_d,d^',a(x) = ∫ F_Y|D,M,X(a|d,m,x)f_M|D,X(m|d^',x)dm = E[F_Y|D,M,X(a|d,M,X)|d^',X=x]. The proof of Proposition 1 is provided in the appendix. Under Proposition 1, we may estimate F_Y(d,M(d^'))(a) based on plug-in estimation of the nuisance parameters F_Y|D,M,X(a|d,m,X) and f_M|D,X(m|d^',X): θ̂_d,d^',a^YM=1/n∑_i=1^nĝ_d,d^',a(X_i), where ĝ_d,d^',a(X_i)=∫F̂_Y|D,M,X(a|d,m,X_i)f̂_M_i|D_i,X_i(m|d^',X_i)dm, and F̂_Y|D,M,X(a|d,m,X_i) and f̂_M|D,X(m|d^',X_i) are estimates of F_Y|D,M,X(a|d,m,X) and f_M|D,X(m|d^',X_i). If M is a continuous variable, we may avoid estimating the conditional density f_M|D,X(m|d^',X), and use the following alternative estimator for estimating F_Y(d,M(d^'))(a): θ̂_d,d^',a^RI=1/n∑_i=1^nÊ[F_Y|D,M,X(a|d,M_i,X_i)|d^',X_i], where Ê[F_Y|D,M,X(a|d,M_i,X_i)|d^',X_i] is an estimate of E[F_Y|D,M,X(a|d,M_i,X_i)|d^',X_i]. For example, it might be based on a “regression-imputation” <cit.>, corresponding to the fitted value of a linear regression of F_Y|D,M,X(a|d,X_i,M_i) on D_i and X_i at (d^',X_i). The quality of estimators (<ref>) and (<ref>) crucially depends on the accuracy of nuisance parameter estimation. If the number of pretreatment covariates X is small (low dimensional X) and the functional forms of the nuisance parameters are known, parametric methods can provide high-quality estimations on the nuisance parameters. In contrast, if X is high dimensional and/or the nuisance parameters have complex forms, machine learning may be the preferred choice of estimation. However, applying ML directly to estimate expressions (<ref>) or (<ref>) may result in non-negligible bias induced by regularization and/or overfitting <cit.>. Causal machine learning algorithms aim at avoiding such biases by applying ML estimation when making use of Neyman-orthogonal moment conditions, which imply that the estimation of causal parameters is first order insensitive to (regularization) bias in the nuisance parameters, and of cross-fitting, which avoids overfitting. One of these causal algorithms is double/debiased machine learning (DML) <cit.>, which has been previously adapted to the estimation of average effects in causal mediation analysis <cit.>, while this study extends it to the estimation of direct and indirect quantile treatment effects. Let Y_a=1{Y≤ a} be an indicator function which is one if outcome Y is smaller than or equal to a (and zero otherwise) and W_a=(Y_a, D, M, X) be a vector of the observed variables. An estimator of the c.d.f. of the potential outcome that satisfies Neyman-orthogonality can be derived from the efficient influence function (EIF) of F_Y(d,M(d^'))(a): ψ_d,d^',a^θ(W_a;v_a) =ψ_d,d^',a(W_a;v_a)-θ, where ψ_d,d^',a(W_a;v_a) = 1{ D=d}/f_D|X(d^'|X)f_D|M,X(d^'|M,X)/f_D|M,X(d|M,X)×[Y_a -F_Y|D,M,X(a|d,M,X)] +1{ D=d^'}/f_D|X(d^'|X)×[F_Y|D,M,X(a|d,M,X)-g_d,d^',a(X)]+g_d,d^',a(X) for (d,d^')∈{0,1}^2, a∈𝒜 and v_a denoting the vector of nuisance parameters. Let θ_d,d^',a denote the value of θ that satisfies E[ψ_d,d^',a^θ(W_a;v_a)]=0: θ_d,d^',a = E[ ψ_d,d^',a(W_a;v_a)]. 
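To illustrate how this moment condition translates into an estimator, the following numpy sketch computes its sample analogue. The array names are ours, and the nuisance inputs are assumed to be supplied by first-stage estimators, e.g., the cross-fitted predictions described in Section 2.2.

import numpy as np

def theta_hat(D, Y_a, d, d_prime, p_dp_x, p_d_mx, p_dp_mx, F_a, g_a):
    # Sample analogue of the moment condition above for F_{Y(d, M(d'))}(a).
    # D, Y_a   : observed treatment and the indicator 1{Y <= a}, numpy arrays of shape (n,)
    # p_dp_x   : estimate of f_{D|X}(d'|X_i)
    # p_d_mx   : estimate of f_{D|M,X}(d|M_i, X_i)
    # p_dp_mx  : estimate of f_{D|M,X}(d'|M_i, X_i)
    # F_a      : estimate of F_{Y|D,M,X}(a|d, M_i, X_i)
    # g_a      : estimate of g_{d,d',a}(X_i)
    ind_d = (D == d).astype(float)
    ind_dp = (D == d_prime).astype(float)
    psi = (ind_d / p_dp_x * p_dp_mx / p_d_mx * (Y_a - F_a)
           + ind_dp / p_dp_x * (F_a - g_a)
           + g_a)
    return psi.mean(), psi  # point estimate and per-observation scores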
We can show that θ_d,d^',a=F_Y(d,M(d^')(a) if Assumptions 1.1 to 1.4 hold, see the appendix for a derivation of these results. Therefore, we may use the sample analogue of equation (<ref>) to estimate F_Y(d,M(d^'))(a). A similar strategy was previously used to derive the triply robust approach for estimating E[Y(d,M(d^'))] in <cit.> and <cit.>. If d=d^', then the estimator of equation (<ref>) reduces to θ_d,d,a = E[ ψ_d,d,a(W_a;v_a)] , where ψ_d,d,a(W_a;v_a) =1{ D=d}/f_D|X(d|X)×[Y_a -g_d,d,a(X)]+g_d,d,a(X), g_d,d,a(X) = ∫ F_Y|D,M,X(a|d,m,X)f_M|D,X(m|d,X)dm = F_Y|D,X(a|d,X), for d∈{0,1} and a∈𝒜. We may use the sample analogue of equation (<ref>) to estimate F_Y(d,M(d))(a). This is in analogy to the doubly robust approach for estimating E[Y(d,M(d))] in <cit.> and <cit.>. We note that by using the Bayes rule, we can rewrite equation (<ref>) alternatively as: ψ_d,d^',a^'(W_a;v_a^') = 1{ D=d}/f_D|X(d|X)f_M|D,X(M|d^',X)/f_M|D,X(M|d,X)×[Y_a -F_Y|D,M,X(a|d,M,X)] +1{ D=d^'}/f_D|X(d^'|X)×[F_Y|D,M,X(a|d,M,X)-g_d,d^',a(X)]+g_d,d^',a(X). Therefore, θ_d,d^',a^'=E[ψ_d,d^',a^'(W_a;v_a^')] can also be used to construct an estimator of F_Y(d,M(d^')(a). There are several differences between the estimators based on equations (<ref>) and (<ref>). Making use of equation (<ref>) requires estimating four nuisance parameters: f_D|X(d|x), f_D|M,X(d|m,x), F_Y|D,M,X(a|d,m,x) and g_d,d^',a(x). Since D is binary, the first two nuisance parameters may for instance be estimated by a logit or probit model. The conditional c.d.f. F_Y|D,M,X(a|d,m,x) can be estimated by distributional regression (DR) <cit.>. g_d,d^',a(x) might be estimated by regression imputation as outlined in equation (<ref>). However, the estimator based on equation (<ref>) requires only three nuisance parameter estimates of f_D|X(d|x), f_M|D,X(m|d,x) and F_Y|D,M,X(a|d,m,x). We may estimate g_d,d^',a(x) based on equation (<ref>) after having estimated f_M|D,X(m|d,x) and F_Y|D,M,X(a|d,m,x). The estimator based on equation (<ref>) appears particularly attractive if the mediator M is discrete and takes a finite (and relatively small) number of values. However, if M is continuous, estimation based on equation (<ref>) may appear more attractive, because it avoids estimating the conditional density f_M|D,X(m|d,x) and the integral in equation (<ref>) to obtain an estimate of g_d,d^',a(x). The estimators based on equations (<ref>) and (<ref>) also differ in terms of their robustness to misspecification of the nuisance parameters. Let θ̂_d,d^',a and θ̂_d,d^',a^' denote estimators based on equations (<ref>) and (<ref>) and the respective estimators of the nuisance parameters. Applying the theorem of semiparametric efficiency in <cit.> and <cit.>, we can show that under certain regularity conditions, the following results hold for estimating F_Y(d,M(d^'))(a) at outcome value a: * If F_Y|D,M,X(a|d,m,x) and f_M|D,X(m|d,x) are correctly specified, θ̂_d,d^',a^'p.⟶θ_d,d^',a^'. * If F_Y|D,M,X(a|d,x,m) and f_D|X(d^'|x) are correctly specified, θ̂_d,d^',a^'p.⟶θ_d,d^',a^'. * If f_D|X(d^'|x) and f_M|D,X(m|d,x) are correctly specified, θ̂_d,d^',a^'p.⟶θ_d,d^',a^'. This implies that if two of the nuisance parameters entering equation (<ref>) are correctly specified, while also certain regularity conditions and Assumptions 1.1 to 1.4 hold, then θ̂_d,d^',a^' is a consistent estimator of F_Y(d,M(d^'))(a) at outcome value a. In contrast, θ̂_d,d^',ap.⟶θ_d,d^',a only holds if f_D|X(d|x) is consistently estimated. 
If the latter holds and only one of the other three nuisance parameters in equation (<ref>) is misspecified, while certain regularity conditions and Assumptions 1.1 to 1.4 are satisfied, then θ̂_d,d^',a remains a consistent estimator of F_Y(d,M(d^'))(a) at outcome value a. Finally, if all nuisance parameters are correctly specified and consistently estimated, while certain regularity conditions and Assumptions 1 to 4 also hold, then both θ̂_d,d^',a and θ̂_d,d^',a^' are semiparametrically efficient. §.§ Improving Finite Sample Behavior The estimate of the c.d.f. of Y(d,M(d^')) can be inverted at a specific rank τ to obtain an estimate of the τth quantile, which we denote by Q_Y(d,M(d^'))(τ). Suppose Y(d,M(d^')) is continuous and let the grid of points used for the estimation be a non-decreasing sequence {a_l}_l=1^L, where 0<a<a_1<a_2<…<a_L<a̅<∞. Let p̂_l denote an estimate of F_Y(d,M(d^'))(a_l) (e.g., the K-fold cross-fitting estimate, see Section 2.2). Note that p̂_l is not necessarily bounded away from 0 and 1 nor monotonically increasing in a_l, as required for a valid c.d.f. For this reason, we apply two additional constraints on the estimates p̂_l, l=1,…,L. The first one restricts their values to be within the range [0,1]. That is, we replace p̂_l with p̃_l=max{min{p̂_l,1} ,0}. Then we follow <cit.> and use the rearrangement operator to sort p̃_l in non-decreasing order. Let (p̃_(1),p̃_(2),…,p̃_(L)) be the sorted sequence of p̃_l, l=1,2,…,L. The sequence (p̃_(1),p̃_(2),…,p̃_(L)) is our final estimate of the c.d.f. of Y(d,M(d^')) at (a_1,a_2,…,a_L). We then fit a function for points (p̃_(l),a_l), l=1,2,…,L with linear interpolation, and use the fitted function to calculate the value of a at rank τ to estimate Q_Y(d,M(d^'))(τ),[The monotonicity property is preserved under the linear interpolation.], which permits estimating the quantile treatment effects (<ref>) to (<ref>). When Y(d,M(d^')) is discrete, we need not fit the function for points (p̃_(l),a_l); we may obtain Q_Y(d,M(d^'))(τ) by directly using the definition of the τth-quantile. §.§ K-Fold Cross-Fitting Neyman-orthogonality may mitigate regularization bias coming from machine learning-based estimation of the nuisance parameters in equations (<ref>) or (<ref>). To also safeguard against overfitting bias, we follow <cit.> and <cit.> and apply K-fold cross-fitting to estimate the nuisance parameters and the potential outcome distributions, F_Y(d,M(d^'))(a), in different parts of the data. To describe the approach, let Y_a,i=1{Y_i≤ a} and W_a,i=(Y_a,i,D_i,M_i,X_i) denote the ith observation, i=1,2,…,n. In the following, we use the estimator based on equation (<ref>) to illustrate K-fold cross-fitting. * Randomly split the n samples into K (mutually exclusive) subsamples of equal sample size n_k=n/K, k=1,2,…,K. Let I_k, k=1,2,…,K denote the set of indices for the K different subsamples. Let I_k^c, k=1,2,…,K denote the complement set of I_k: I_k^c={ 1,2,…,n}∖ I_k. * For each k, estimate the model parameters of the nuisance parameters F_Y|D,M,X(a), f_D|X(d^'|X), f_D|M,X(d|M,X) and g_d,d^',a(X) based on observations W_a,i, i∈ I_k^c. For observations W_a,i, i∈ I_k, predict the nuisance parameters: F̂_Y|D,M,X^(k)(a|D_i,M_i,X_i), f̂_D|X^(k)(d^'|X_i), f̂_D|M,X^(k)(d|M_i,X_i), f̂_D|M,X^(k)(d^'|M_i,X_i) and ĝ_d,d^',a(X), i∈ I_k. 
* For each k, compute the estimate of F_Y(d,M(d^'))(a) using the predicted nuisance parameters of step 2 as θ̂_d,d^',a^(k) =1/n_k∑_i∈ I_k{1{ D_i=d}f̂_D|M,X^(k)(d^'|M_i,X_i)/f̂_D|X^(k)(d^'|X_i)f̂_D|M,X^(k)(d|M_i,X_i). ×[1{ Y_i≤ a} -F̂_Y|D,M,X^(k)(a|d,M_i,X_i)] +1{ D_i=d^'}/f̂_D|X^(k)(d^'|X_i)[F̂_Y|D,M,X^(k)(a|d,M_i,X_i)-ĝ_d,d^',a^(k)(X_i)] .+ĝ_d,d^',a^(k)(X_i)} . * Average θ̂_d,d^',a^(k) over k=1,2,…,K to obtain the final estimate of F_Y(d,M(d^'))(a): θ̂_d,d^',a=1/K∑_k=1^Kθ̂_d,d^',a^(k). We repeat steps 1 to 4 for a grid of points a, a non-decreasing sequence {a_l}_l=1^L, where 0<a<a_1<a_2<…<a_L<a̅<∞, to construct the estimate of the c.d.f. profile of Y(d,M(d^')). In Section 3, we will establish the asymptotic properties uniformly over a∈A for the K-fold cross-fitting estimator of equation (<ref>). Alternatively, we can also construct the K-fold cross-fitting estimator based on equation (<ref>): θ̂_d,d^',a^'=1/K∑_k=1^Kθ̂_d,d^',a^'(k), where θ̂_d,d^',a^'(k), =1/n_k∑_i∈ I_k{1{ D_i=d}f̂_M|D,X^(k)(M_i|d^',X_i)/f̂_D|X^(k)(d|X_i)f̂_M|D,X^(k)(M_i|d,X_i). ×[1{ Y_i≤ a} -F̂_Y|D,M,X^(k)(a|d,M_i,X_i)] +1{ D_i=d^'}/f̂_D|X^(k)(d^'|X_i)[F̂_Y|D,M,X^(k)(a|d,M_i,X_i)-ĝ_d,d^',a^(k)(X_i)] .+ĝ_d,d^',a^(k)(X_i)} . §.§ Nuisance Parameter Estimation In the case when D and M are binary variables, f_D|X(d|x) and f_M|D,X(m|d,x) can be estimated straightforwardly, for instance by logit or probit models. If M is continuous, we might prefer θ̂_d,d^',a to avoid estimating f_M|D,X(m|d,x), while the required nuisance parameter f_D|M,X(d|m,x) can be straightforwardly estimated if D is binary. The estimation of any nuisance parameter may be based on machine learning, for example, the lasso or neural networks. For estimating F_Y|D,M,X(a|d,m,x), we use distributional regression (DR). Conditional on (D=d,M=m,X=x), the conditional c.d.f. of Y may be written as F_Y|D,M,X(a|d,m,x)=E[Y_a |d,m,x], where a is a constant and a∈𝒜⊂ℝ, with 𝒜 being a countable subset of ℝ, and Y_a=1{ Y≤ a} being an indicator function for the event { Y≤ a}. Equation (<ref>) is the building block for constructing the DR estimator <cit.>, which is based on using the binary dependent variable Y_a to estimate F_Y|D,M,X(a|d,m,x) by a regression approach that estimates E[Y_a|d,m,x]. For example, one may assume that the c.d.f. is linear in variables (D,M,X), their higher-order terms and interaction terms, and estimate a linear probability model (LPM) by OLS. However, the LPM does not guarantee that the estimated F_Y|D,M,X(a|d,m,x) will lie within the interval [0,1]. To overcome this difficulty, we may assume that F_Y|D,M,X(a|d,m,x)=G_a(v_a(d,m,x)), where v_a:(D,M,X)↦ℝ and G_a(.) is a link function which is non-decreasing and satisfies G_a:ℝ↦[0,1], G_a(y)→0 if y→-∞ and G_r(y)→1 if y→∞. The choices of v_a(D,M,X) and the link function G_a(.) are flexible. For example, v_a(D,M,X) might be a neural network (see Section 3.1) or a transformation of (D,M,X), which can vary with a. Depending on whether Y is continuous or discrete, the link function G_a(.) may be the logit, probit, linear, log-log, Gosset, the Cox proportional hazard function or the incomplete Gamma function. As pointed out by <cit.>, for any given link function G_a(.), we can approximate F_Y|D,M,X(a|d,m,x) arbitrarily well if v_a(D,M,X) is sufficiently flexible. There are various ways to implement DR, and a popular choice is maximum likelihood estimation. 
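Before spelling out the distributional regression likelihood, the K-fold cross-fitting recipe in steps 1 to 4 above can be sketched as follows. This is only a schematic illustration: the plain logistic and linear regressions from scikit-learn stand in for whatever machine learners one would actually employ, g_{d,d',a}(X) is obtained by the regression-imputation approach mentioned earlier, and the function name is ours.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import KFold

def crossfit_cdf(Y, D, M, X, a, d, d_prime, K=5, seed=0):
    # Schematic K-fold cross-fitting estimate of F_{Y(d, M(d'))}(a).
    # Y, D, M are (n,) numpy arrays (D, d, d_prime are 0/1 integers); X is (n, p).
    Y_a = (Y <= a).astype(float)
    MX, DMX, DX = np.column_stack([M, X]), np.column_stack([D, M, X]), np.column_stack([D, X])
    thetas = []
    for train, test in KFold(K, shuffle=True, random_state=seed).split(X):
        # step 2: fit nuisance models on the complement set I_k^c
        mD_x = LogisticRegression(max_iter=1000).fit(X[train], D[train])
        mD_mx = LogisticRegression(max_iter=1000).fit(MX[train], D[train])
        mY_dmx = LogisticRegression(max_iter=1000).fit(DMX[train], Y_a[train])  # DR at value a
        # regression imputation for g_{d,d',a}(X): regress F(a|d,M,X) on (D,X)
        DMX_d_tr = np.column_stack([np.full(len(train), d), M[train], X[train]])
        mg = LinearRegression().fit(DX[train], mY_dmx.predict_proba(DMX_d_tr)[:, 1])
        # predictions on the held-out fold I_k
        p_dp_x = mD_x.predict_proba(X[test])[:, d_prime]
        p_d_mx = mD_mx.predict_proba(MX[test])[:, d]
        p_dp_mx = mD_mx.predict_proba(MX[test])[:, d_prime]
        DMX_d_te = np.column_stack([np.full(len(test), d), M[test], X[test]])
        F_te = mY_dmx.predict_proba(DMX_d_te)[:, 1]
        g_te = mg.predict(np.column_stack([np.full(len(test), d_prime), X[test]]))
        # step 3: fold-specific estimate from the efficient score
        psi = ((D[test] == d) / p_dp_x * p_dp_mx / p_d_mx * (Y_a[test] - F_te)
               + (D[test] == d_prime) / p_dp_x * (F_te - g_te) + g_te)
        thetas.append(psi.mean())
    return np.mean(thetas)  # step 4: average over the K folds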
Let v̂_a(D,M,X) denote the maximum likelihood estimator of v_a(D,M,X), which is obtained by max_v_a∈𝒱_a∑_i=1^n{ Y_a ln G_a(v_a(D_i,M_i,X_i))+(1-Y_a) ln[1-G_a(v_a(D_i,M_i,X_i))]} , where 𝒱_a is the parameter space of v̂_a(D,M,X). Then, G(v̂_a(D,M,X)) is an estimate of F_Y|D,M,X(a|d,m,x). v_a may be estimated by machine learning methods when the dimension of X is large and/or the functional form of v_a is complex. §.§ Summarizing the Estimation Approach Our estimation approach can be summarized as follows: Step 1 Modeling the nuisance parameters The nuisance parameters include f_D|X(d|x), f_D|M,X(d|m,x), F_Y|D,M,X(a|d,m,x) and g_d,d^',a(X). Depending on the properties of (Y,D,M), choose appropriate functional forms for the nuisance parameters. Step 2 Estimating the nuisance parameters Estimate the nuisance parameters by K-fold cross-fitting, as described in points 1 and 2 in Section 2.2. Step 3 Computing the Neyman-orthogonal estimator With estimates of the nuisance parameters from K-fold cross-fitting, compute the Neyman-orthogonal estimator as described in points 3 and 4 in Section 2.2. Step 4 Repeating steps 2 to 3 for a grid of points {a_l}_l=1^L, where 0<a<a_1<a_2<…<a_L<a̅<∞ Notice that only F_Y|D,M,X(a|d,m,x) needs to be re-estimated, not the remaining nuisance parameters. Step 5 Adopting the two constraints described in Section 2.1 to obtain p̃_(l), the final estimate of F_Y(d,M(d^'))(a_l) Step 6 If Y(d,M(d^')) is continuous, fitting a function for the points (p̃_(l),a_l), l=1,2,…,L, and using the fitted function to obtain Q̂_Y(d,M(d^'))(τ), the estimate of Q_Y(d,M(d^'))(τ); if Y(d,M(d^')) is discrete, using the definition of the τth quantile to obtain Q̂_Y(d,M(d^'))(τ). Step 7 Estimating the quantile treatment effects of interest as defined in equations (<ref>) to (<ref>) based on Q̂_Y(d,M(d^'))(τ). §.§ Multiplier Bootstrap We propose a multiplier bootstrap for statistical inference. Let {ξ_i}_i=1^n be a sequence of i.i.d. (pseudo) random variables, independent of the sample path {(Y_a,i,M_i,D_i,X_i)}_i=1^n, with E[ξ_i]=0, Var(ξ_i)=1 and E[exp(|ξ_i|)]<∞. Let v̂_k,a denote a vector containing the K-fold cross-fitting estimates of the nuisance parameters, whose model parameters are estimated based on observations in the complement set, W_a,i with i∈ I^c_k. The proposed multiplier bootstrap estimator for θ̂_d,d^',a in equation (<ref>) is given by: θ̂_d,d^',a^*=θ̂_d,d^',a+1/n∑_i=1^nξ_i(ψ_d,d^',a(W_a,i;v̂_k,a)-θ̂_d,d^',a), The multiplier bootstrap estimator does not require re-estimating the nuisance parameters and re-calculating the causal parameters of interest in each bootstrap sample. This is particularly useful in our context, since our estimation approach is to be repeatedly applied across grid points a. After obtaining θ̂_d,d^',a^* for all grid points, we use the procedures introduced in Section 2.1 to construct the bootstrap estimate of the c.d.f. of Y(d,M(d^')) at these grid points and the bootstrap estimate of the τth quantile Q_Y(d,M(d^'))^*(τ). Section 3 establishes the uniform validity of the proposed multiplier bootstrap procedure. § THEORETICAL RESULTS §.§ Notation In this section, we show that the proposed K-fold cross-fitting estimator in Section 2.2 is uniformly valid under certain conditions. We focus on establishing the theoretical properties of the estimator in equation (<ref>) and note that the assumptions and procedures required for demonstrating uniform validity of estimation based on equation (<ref>) are similar and omitted for this reason. 
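Before turning to the notation and formal conditions, the final computational steps of the approach summarized above — clipping and rearranging the c.d.f. estimates, inverting them at rank τ (Steps 5 and 6), and the multiplier bootstrap — can be sketched as follows. The function names and the choice of standard normal multipliers ξ_i (which satisfy the stated moment conditions) are our own illustrative choices.

import numpy as np

def quantile_from_cdf(theta_hat_grid, grid_a, tau):
    # clip to [0, 1], monotonize by rearrangement (sorting), and invert the
    # linearly interpolated c.d.f. at rank tau
    p = np.sort(np.clip(theta_hat_grid, 0.0, 1.0))
    return np.interp(tau, p, grid_a)

def multiplier_bootstrap(theta_hat, psi, n_boot=999, seed=0):
    # psi: per-observation scores at one grid point a, evaluated at the
    # cross-fitted nuisance estimates; returns n_boot bootstrap draws of theta
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_boot, psi.shape[0]))  # E[xi] = 0, Var(xi) = 1
    return theta_hat + xi @ (psi - theta_hat) / psi.shape[0]

Applying quantile_from_cdf to each bootstrap draw of the c.d.f. over the grid {a_l}_l=1^L then yields bootstrap draws of Q_Y(d,M(d^'))(τ), from which confidence bands for the quantile treatment effects of interest can be constructed.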
To ease notation in our analysis, let g_1d^0(X):=F_D|X^0(d|X), g_2d^0(M,X):=F_D|M,X^0(d|M,X), g_3a^0(D,M,X):=F_Y|D,M,X^0(a|D,M,X) and g_4ad^0(D,X):=E[F_Y|D,M,X^0(a|d,M,X)|D,X] denote the nuisance parameters. Let v_a^0 denote the vector containing these true nuisance parameters and 𝒢_a be the set of all v_a^0. Let F_Y(d,M(d^'))^0(a) denote the true c.d.f. of the potential outcome Y(d,M(d^')). The EIF of F_Y(d,M(d^'))^0(a) is ψ_d,d^',a^θ(W_a;v_a^0)=ψ_d,d^',a(W_a;v_a^0)-θ, where ψ_d,d^',a(W_a;v_a^0) =1{ D=d}[1-g_2d^0(M,X)]/[1-g_1d^0(X)]g_2d^0(M,X)×[1{ Y≤ a} -g_3a^0(d,M,X)] +1{ D=d^'}/1-g_1d^0(X)[g_3a^0(d,M,X)-g_4ad^0(d^'X)]+g_4ad^0(d^',X). Under Assumptions 1.1 to 1.4, it can be shown that F_Y(d,M(d^'))^0(a)=E[ψ_d,d^',a(W_a,v_a^0)] for all a∈𝒜 and (d,d^')∈{ 0,1} ^2 (see proof of Theorem 1). In the subsequent theoretical analysis, the expectation E[.] is operated under the probability P∈𝒫_n. Let N=n/K be the size in a fold or subsample, where K is a fixed number. Let E_n:=n^-1∑_i=1^nς_W_i and E_N,k:=N^-1∑_i∈ I_kς_W_i where ς_w is a probability distribution degenerating at w and I_k is a set of indices of observations in the kth subsample. Let Z⇝ Z^' denote a random variable Z that weakly converges to a random variable Z^'. Let ‖ x‖ denote the l^1 norm and ‖ x‖ _q denote the l^q norm, q≥2 for a deterministic vector x. Let ‖ X‖ _P,q denote (E[‖ X‖ ^q])^1/q for a random vector X. The function ψ_d,d^',a for identifying the parameter of interest and constructing the estimator is such that ψ_d,d^',a(w,t):𝒲_a×𝒱_a⟼ℝ, where (d,d^')∈{ 0,1} ^2, a∈𝒜⊂ℝ, 𝒲_a⊂ℝ^d_w is a d_w dimensional Borel set and 𝒱_a is a G dimensional set of Borel measurable maps. Let ψ_a=(ψ_1,1,a,ψ_1,0,aψ_0,1,a,ψ_0,0,a) and ψ_a:𝒲_a×𝒱_a⟼ℝ^4. Let v_a,g^0:𝒰_a⟼ℝ denote the gth true nuisance parameter, where 𝒰_a⊆𝒲_a is a Borel set, and v_a^0:=(v_a,1^0,…,v_a,G^0)∈𝒱_a denote the vector of these true nuisance parameters. Let v̂_k,a,g:𝒰_a⟼ℝ denote an estimate of v_a,g^0, which is obtained by using the K-fold cross-fitting, such that the model parameters of the nuisance parameters are estimated based on observations in the complement set, W_a,j with j∈ I_k^c. Let v̂_k,a:=(v̂_k,a,1,…,v̂_k,a,G) denote the vector of these estimates. v_a^0 and v̂_k,a are both functions of U_a∈𝒰_a, a subvector of W_a∈𝒲_a. But to ease notation, we will write v_a^0 and v̂_k,a instead v_a^0(U_a) and v̂_k,a(U_a). ψ_a(W_a,v) denotes ψ_a with elements ψ_d,d^',a(W_a;v), (d,d^')∈{ 0,1} ^2. The parameter of interest is F_Y(d,M(d^'))^0(a), which can be identified by equation (<ref>), the expectation of ψ_d,d^',a(W_a;v) evaluated at the true nuisance parameters v_a^0. Let θ_d,d^',a^0:=E[ψ_d,d^',a(W_a,v_a^0)]. The proposed estimator of θ_d,d^',a^0 is the K-fold cross-fitting estimator θ̂_d,d^',a of (<ref>), in which θ̂_d,d^',a^(k)=N^-1∑_i∈ I_kψ_d,d^',a(W_a,i;v̂_k,a). In our case, the vector v̂_k,a contains estimates of the four true nuisance parameters g_1d^0(X),g_2d^0(M,X),g_3a^0(D,M,X) and g_4ad^0(D,X) when estimating the model parameters based on observations in the complement set, W_a,i with i∈ I_k^c. Let θ_a^0, θ̂_a, θ̂_a^(k) and 𝐅^0(a) denote vectors containing θ_d,d^',a^0, θ̂_d,d^',a, θ̂_d,d^',a^(k) and F_Y(d,M(d^'))^0(a) for different (d,d^')∈{0,1}^2. We note that θ̂_a=K^-1∑_k=1^Kθ̂_a^(k) and θ_a^0=E[ψ_a(W_a;v_a^0)], and if equation (<ref>) holds for all a∈𝒜 and (d,d^')∈{0,1}^2, then 𝐅^0(a)=θ_a^0. §.§ Main Results To establish the uniform validity of θ̂_a when estimating 𝐅^0(a), we impose the following conditions. 1. 
For 𝒫:=⋃_n=n_0^∞𝒫_n, Y_a:=1{ Y≤ a} satisfies lim_δ↘0sup_P∈𝒫sup_d_𝒜(a,a̅)≤δ‖ Y_a-Y_a̅‖ _P,2 =0, sup_P∈𝒫Esup_a∈𝒜|Y_a|^2+c <∞, where (a,a̅)∈𝒜 and 𝒜 are a totally bounded metric space equipped with a semimetric d_𝒜. The uniform covering number of the set 𝒢_5:={ Y_a:a∈𝒜} satisfies sup_Qlog N(ϵ‖𝒢_5‖ _Q,2,𝒢_5,‖‖ _Q,2)≤ Cloge/ϵ, for all P∈𝒫, where B_5(W)=sup_a∈𝒜|Y_a| is an envelope function with the supremum taken over all finitely discrete probability measures Q on (𝒲,𝒳_𝒲). 2. For d∈{ 0,1}, P(ε_1<g_1d^0(X)<1-ε_1)=1 and P(ε_2<g_2d^0(M,X)<1-ε_2)=1, where ε_1,ε_2∈(0,1/2). 3.The models for estimating the nuisance parameters v_a^0 have functional forms (h_1(f(x)^⊤β_1),h_2(f(m,x)^⊤β_2),h_3(f(d,m,x)^⊤β_3),h_4(f(d^',x)^⊤β_4)), respectively, and satisfy the following conditions. 3a. Functional forms of h_i(.) The functions h_i, i=1,2,3 take the forms of commonly used link functions ℒ={𝐈d,,1-,Φ,1-Φ} , where 𝐈d is the identity function, is the logistic link, and Φ is the probit link.Function h_4 has the form of the identity function 𝐈d. 3b. Dictionary controls f(.) The dimension of exogenous variables is (X)=p and log p=o(n^-1/3). The functions h_i contain a linear combination of dictionary controls f(.), where dim(f(x))=p×1 (dimension of X), dim(f(m,x))=(p+1)×1 (X plus one mediator), dim(f(d,m,x))=(p+2)×1 (X plus one mediator and one treatment variable), and dim(f(d^',x))=(p+1)×1 (X plus one treatment variable). 3c. Approximately sparsity The vectors of coefficients β_i, i=1,…,4 satisfy ‖β_i‖ _0≤ s_i, where ‖‖ _0 denotes the l^0 norm and s_i denotes the sparsity index. Furthermore, ∑_i=1^4s_i≤ s≪ n and s^2log^2(p∨ n)log^2n≤δ_nn. Let β̅_i, denote estimators of β_i. These estimators are sparse such that ∑_i=1^4‖β̅_i‖ _0≤ C^'s, where C^'<1 is some constant. 3d. Gram matrix The empirical and population norms induced by the Gram matrix formed by the dictionary f are equivalent on sparse subsets: sup_‖δ‖ _0≤ slog n|‖ f^⊤δ‖ _ℙ_n,2/‖ f^⊤δ‖ _P,2-1|→0 as n→∞, and also ‖‖ f‖ _∞‖ _P,∞≤ L_n. 3e. ‖ X‖ _P,q<∞. 4. Given a random subset I of [n]={ 1,…,n} of size n/K, let β̂_̂î denote an estimate of coefficient vector β_i defined in Assumption 3. These estimated nuisance parameters v̂_a :=(h_1(f(X)^⊤β̂_1),h_2(f(M,X)^⊤β̂_2),h_3(f(D,M,X)^⊤β̂_3),h_4(f(D,X)^⊤β̂_4)) =(ĝ_1d(X),ĝ_2d(M,X),ĝ_3a(D,M,X),ĝ_4ad(D,X)) satisfy the following conditions concerning their estimation quality. For d∈{ 0,1} P(ε_1<ĝ_1d(X)<1-ε_1) =1, P(ε_2<ĝ_2d(M,X)<1-ε_2) =1, where ε_1,ε_2>0. Let δ_n be a sequence converging to zero from above at a speed at most polynomial in n, e.g., δ_n≥ n^-c for some c>0. With probability P at least 1-Δ_n, for d∈{ 0,1}, all a∈𝒜 and q≥4, ‖v̂_a-v_a^0‖ _P,q ≤ C, ‖v̂_a-v_a^0‖ _P,2 ≤δ_nn^-1/4, ‖ĝ_1d(X)-0.5‖ _P,∞ ≤0.5-ϵ, ‖ĝ_2d(M,X)-0.5‖ _P,∞ ≤0.5-ϵ, ‖ĝ_1d(X)-g_1d^0(X)‖ _P,2‖ĝ_2d(M,X)-g_2d^0(M,X)‖ _P,2 ≤δ_nn^-1/2, ‖ĝ_1d(X)-g_1d^0(X)‖ _P,2‖ĝ_3a(D,M,X)-g_3a^0(D,M,X)‖ _P,2 ≤δ_nn^-1/2, ‖ĝ_1d(X)-g_1d^0(X)‖ _P,2‖ĝ_4ad(D,X)-g_4ad^0(D,X)‖ _P,2 ≤δ_nn^-1/2, ‖ĝ_2d(M,X)-g_2d^0(M,X)‖ _P,2‖ĝ_3a(D,M,X)-g_3a^0(D,M,X)‖ _P,2 ≤δ_nn^-1/2. Note that in Assumption 2.4, v̂_a is by definition constructed based on observations in the complement set (W_a,i)_i∈ I^c: v̂_a=v̂_a((W_a,i)_i∈ I^c). When the random subset I=I_k, then v̂_a=v̂_k,a. Let 𝔾_n denote an empirical process 𝔾_nf(W)=√(n)(E_nf(W)-E[f(W)]), where f is any P∈𝒫_n integrable function on the set 𝒲. Let 𝔾_Pf(W) denote the limiting process of 𝔾_nf(W), which is a Gaussian process with zero mean and a finite covariance matrix E[(f(W)-E[f(W)])(f(W)-E[f(W)])^⊤] under probability P (the P-Brownian bridge). 
Based on our notation and assumptions, we obtain the following result concerning the asymptotic behaviour of our estimator. If Assumptions 1 and 2 hold, the K-fold cross-fitting estimator θ̂_a=K^-1∑_k=1^Kθ̂_a^(k) for estimating 𝐅^0(a) satisfies √(n)(θ̂_a-𝐅^0(a))_a∈𝒜=Z_n,P+o_P(1), in l^∞(𝒜)^4, uniformly in P∈𝒫_n, where Z_n,P:=( 𝔾_n(ψ_a(W_a;v_a^0)-θ_a^0)) _a∈𝒜. Furthermore, Z_n,P⇝ Z_P in l^∞(𝒜)^4, uniformly in P∈𝒫_n, where Z_P:=( 𝔾_P(ψ_a(W_a;v_a^0)-θ_a^0)) _a∈𝒜 and paths of 𝔾_P(ψ_a(W_a;v_a^0)-θ_a^0) have the properties that uniformly in P∈𝒫_n, sup_P∈𝒫_nE[sup_a∈𝒜‖𝔾_P(ψ_a(W_a;v_a^0)-θ_a^0)‖] <∞, lim_δ→0sup_P∈𝒫_nE[sup_d_𝒜(a,a̅)‖𝔾_P(ψ_a(W_a;v_a^0)-θ_a^0)-𝔾_P(ψ_a̅(W_a̅;v_a̅^0)-θ_a̅^0)‖] =0. Next, we establish the uniform validity of the multiplier bootstrap under Assumptions 1 and 2. As in Assumption 2.4, let v̂_a denote the cross-fitting estimator of the nuisance parameters, whose model parameters are estimated based on observations in the complement set, W_a,i with i∈ I^c_k. Recall the multiplier bootstrap estimator in equation (<ref>) and the definition of the random variable ξ. By independence of ξ and W_a, we have that E[ξ(ψ_d,d^',a(W_a;v̂_a)-θ̂_d,d^',a)]=E[ξ]E[ψ_d,d^',a(W_a;v̂_a)-θ̂_d,d^',a]=0, and therefore, √(n)(θ̂_d,d^',a^*-θ̂_d,d^',a) =𝔾_nξ(ψ_d,d^',a(W_a;v̂_a)-θ̂_d,d^',a). Let θ̂_a^* denote a vector containing the multiplier bootstrap estimators θ̂_d,d^',a^* for different (d,d^')∈{0,1}^2. We write the previous result in vector form as √(n)(θ̂_a^*-θ̂_a)=𝔾_nξ(ψ_a(W_a;v̂_k,a)-θ̂_a), and let Z_n,P^*:=(𝔾_nξ(ψ_a(W_a;v̂_k,a)-θ̂_a))_a∈𝒜. Then we obtain the following result on the asymptotic behavior of the multiplier bootstrap. If Assumptions 1 and 2 hold, the large sample law Z_P of Z_n,P, can be consistently approximated by the bootstrap law Z_n,P^*: Z_n,P^*⇝_BZ_P uniformly over P∈𝒫_n in l^∞(𝒜)^4. Let ϕ_τ(F_X):=inf{ a∈ℝ:F_X(a)≥τ} be the τth quantile function of a random variable X whose c.d.f. is F_X. The von Mises expansion of ϕ_τ(F_X) (p.292 in <cit.>) is given by: ϕ_τ(E_n)-ϕ_τ(E)=1/√(n)ϕ_τ,E^'(𝔾_n)+…+1/m!1/n^m/2ϕ_τ,E^(k)(𝔾_n)+…, where ϕ_τ,E^'(.) is a linear derivative map and 𝔾_n denotes an empirical process: 𝔾_nf(W)=√(n)(E_nf(W)-E[f(W)]). Let ϕ_θ^':=(ϕ_τ,θ^')_τ∈𝒯, where θ=(θ_a)_a∈𝒜. Let Q_Y(d,M(d^'))^0(τ):=inf{ a∈ℝ:F_Y(d,M(d^'))^0(a)≥τ}, Q̂_Y(d,M(d^'))(τ):=inf{ a∈ℝ:θ̂_d,d^',a≥τ} and Q̂_Y(d,M(d^'))^*(τ):=inf{a∈ℝ:θ̂_d,d^',a^*≥τ}. Let 𝐐^0_τ, 𝐐̂_τ and 𝐐̂^*_τ denote the corresponding vectors containing Q_Y(d,M(d^'))^0(τ), Q̂_Y(d,M(d^'))(τ) and Q̂_Y(d,M(d^'))^*(τ) over different (d,d^')∈{0,1}^2, respectively. We then obtain the following result of uniform validity for the estimation of quantiles, which can be proven by invoking the functional delta theorems (Theorems B.3 and B.4) of <cit.>. If Assumptions 1 and 2 hold, √(n)(𝐐̂_τ-𝐐_τ^0)_τ∈𝒯 ⇝ T_P:=ϕ_θ^'(Z_P), √(n)(𝐐̂_τ^*-𝐐̂_τ)_τ∈𝒯 ⇝_BT_P:=ϕ_θ^'(Z_P). uniformly over P∈𝒫_n in l^∞(𝒯)^4, where 𝒯⊂(0,1), T_P is a zero mean tight Gaussian process for each P∈𝒫_n and Z_P:=(𝔾_Pψ_a(W_a;v_a^0))_a∈𝒜. § SIMULATION §.§ Simulation Design This section presents a simulation study to examine the finite sample performance of the proposed DML estimators in equations (<ref>) and (<ref>). We consider the following data-generating process for the observed covariates X=(X_1,X_2,X_3), where X_1 = 0.75V_1+0.1V_2+0.15V_3, X_2 = 0.15V_1+0.7V_2+0.15V_3, X_3 = 0.14V_1+0.08V_2+0.78V_3, with V_1, V_2 and V_3 being i.i.d. random variables following a chi-squared distribution with 1 degree of freedom. 
The binary treatment variable D is generated based on the following model: D = 1{ 0.371+X^⊤(0.198,0.125,-0.323)+ε_D>0} . For the binary mediator M, the data-generating process is M=1{ -0.070+0.710D+X^⊤(-0.054,-0.482,0.299)+ε_M>0} . The model for outcome Y is defined as follows: h = 0.766+0.458D+0.836DM+0.383M+X^⊤(0.640,0.260,0.474), Y = h(D,M,X)^-1ε_Y. The error terms (ε_Y,ε_D,ε_M) are mutually independent standard normal random variables and also independent of X. Analytically computing the unconditional c.d.f. and quantiles of the potential outcome Y(d,M(d^')) is difficult for the data generating process considered. For this reason, we use a Monte Carlo simulation to approximate the true values of the c.d.f. and the quantiles. We draw 40 million observations of (ε_Y,ε_M,V_1,V_2,V_3) from their respective true distributions, and for each observation, we calculate the corresponding potential outcomes Y(d,M(d^')) for (d,d^')∈{0,1}^2. In the next step, we evaluate profiles of the empirical c.d.f.'s and quantiles of the 40 million sampled potential outcomes and use the evaluated profiles as approximations to their true profiles. In Figure <ref> in the appendix, the upper panel shows the approximate true profiles of the c.d.f.'s F_Y(d,M(d^'))(a), while the lower panel provides the approximate true profiles of quantiles Q_Y(d,M(d^'))(τ) for (d,d^')∈{0,1}^2. Figure <ref> in the appendix depicts the approximate true profiles of NDQTE (NDQTE'), NIQTE (NIQTE') and TQTE across quantiles. In our simulation design, the nuisance parameters have the following functional forms: F_Y|D,M,X(a|D,M,X) = Φ(β_0,a+α_0.aD+α_1,aDM+β_1,aM+X^⊤β_2,a) f_D|X(D=1|X) = Φ(λ_0+X^⊤λ_1) f_M|D,X(M=1|D,X) = Φ(b_0+a_0D+X^⊤b_1), where Φ(.) is the c.d.f. of a standard normal random variable. The vector of parameters satisfies (β_0,a,α_0,a,α_1,a,β_1,a,β_2,a^⊤)=a×(β_0,α_0,α_1,β_1,β_2^⊤). When running the simulations, we also include a set of auxiliary (exogenous) variables: X^aug:=(X_1^aug,X_2^aug,…,X_J^aug), X_j^aug=U_1j(V_1+V_2+V_3)+U_2j(Z_j)^2, j=1,…,J, where U_1j∼ i.i.d.U(0,0.2), U_2j∼ i.i.d.U(0.8,1). 𝐙:=(Z_1,Z_2,…,Z_J) follow a multivariate normal distribution with a mean vector 0 and a covariance matrix with elements 0.5^|j-l|, j,l=1,…,J. V_1,V_2,V_3,U_1j,U_2j, 𝐙 and the error terms (ε_Y,ε_D,ε_M) are mutually independent. Depending on the realized values of U_1j and U_2j, the correlations cor(X_1,X_j^a), cor(X_2,X_j^a), cor(X_3,X_j^a) vary, and on average they amount to 0.139, 0.147 and 0.135, respectively. §.§ The Post Lasso Estimator Let W_a,i^aug=(Y_i,D_i,M_i,X_i,X_i^aug) denote the ith observation of the simulated data. When applying K-fold cross-fitting to W_a,i^aug, we estimate the models of the nuisance parameters based on post-lasso regression: we first estimate the models by lasso regression and then re-estimate the models without (lasso) penalization when including only those regressors with non-zero coefficients in the respective previous lasso steps. We denote the lasso estimator of the coefficients of F_Y|D,M,X(a|D,M,X) in K-fold cross-fitting by γ̂_Y,a∈min_γ_Y,a∈ℝ^p1/|I_k^c|∑_i∈ I_k^cL(W_a,i^aug;γ_Y,a)+γ/|I_k^c|‖γ_Y,a‖ _1, where p is the number of covariates, |I_k^c| is the number of observations in the complement set I_k^c, γ is the penalty parameter, ‖ .‖ _1 denotes the l_1 norm and is a diagonal matrix of penalty loadings. Here, the loss function L(.) corresponds to that in equation (<ref>) and the link function G_a(.) is Φ(.). 
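As an illustration of the lasso/post-lasso logic just described, the following Python sketch substitutes an l_1-penalized logistic regression for the penalized probit in expression (<ref>) and then refits an (essentially) unpenalized model on the selected regressors. It deliberately omits several ingredients of the actual procedure — the probit link, the diagonal matrix of penalty loadings, the data-driven choice of the penalty parameter, and the union of regressors selected across the different nuisance models — and is therefore only a sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def post_lasso_logit(y_a, design, C=1.0):
    """Two-step post-lasso for a binary regression of Y_a on `design`.

    Step 1: l1-penalized logistic regression (a simplified stand-in for the
            weighted-l1 probit in the text; here all columns are penalized).
    Step 2: refit an (essentially) unpenalized logit on the columns selected
            in step 1 (a very large C approximates no penalty).
    Assumes step 1 selects at least one regressor."""
    lasso = LogisticRegression(penalty='l1', solver='liblinear', C=C)
    lasso.fit(design, y_a)
    selected = np.flatnonzero(lasso.coef_.ravel() != 0)
    refit = LogisticRegression(penalty='l2', C=1e6, solver='lbfgs', max_iter=1000)
    refit.fit(design[:, selected], y_a)
    return selected, refit
```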
When solving the lasso estimation problem of expression (<ref>), we only impose a penalty on (X,X^aug) and therefore, the first four diagonal elements (for the intercept term, D, M and DM) of are ones, while the remaining diagonal elements are zeros. The value of the penalty parameter is determined based on the procedure outlined in <cit.>. The other nuisance parameters f_D|X(d|X) and f_M|D,X(m|D,X) are estimated in an analogous way and we denote by γ̂_D and γ̂_M their corresponding lasso estimators. Let Ξ̃ denote the union of variables in (X,X^aug) with non-zero lasso coefficient estimates in one or several lasso regressions of the three nuisance parameters, with Ξ̃⊆supp(γ̂_Y,a)∪supp(γ̂_D)∪supp(γ̂_M). The post-lasso estimator of F_Y|D,M,X(a|D,M,X) is defined as γ̃_Y,a∈min_γ_Y,a∈ℝ^p1/|I_k^c|∑_i∈ I_k^cL(W_a,i^aug;γ_Y,a):supp(γ_Y,a)⊆supp(γ̂_Y,a)∪Ξ̃. The post-lasso estimators of f_D|X(d|X) and f_M|D,X(m|D,X) are obtained analogously. Based on the post-lasso estimates of the coefficients, we estimate the nuisance parameters among observations W_a,i^aug, i∈ I_k, and use them to compute estimators (<ref>) or (<ref>). When using the estimator based on equation (<ref>), we approximate the nuisance parameters f_D|M,X(d|M,X) by a probit model Φ(λ_2+λ_3M+X^⊤λ_4). The post-lasso approach for estimating f_D|M,X(d|M,X) is the same as before. To estimate E[F_Y|D,M,X(a|D,M,X)|d^',X], we approximate E[F_Y|D,M,X(a|D,M,X)|D,X] by a linear model β_3+β_4D+X^⊤β_5. We calculate post-lasso estimates of F_Y|D,M,X(a|D,M,X) among observations in the complement set, W_a,i^aug with i∈ I_k^c, and estimate (β_3,β_4,β_5) by linearly regressing these estimates on D and those covariates previously selected for computing the post-lasso estimate of F_Y|D,M,X(a|D,M,X). We then use the linear regression coefficients coming from the complement set to make cross-fitted predictions among observations with indices i∈ I_k and D_i=d^', which serve as estimates of E[F_Y|D,M,X(a|D,M,X)|d^',X]. §.§ Simulation Results To evaluate the performance of the proposed DML estimators of the c.d.f.'s of the potential outcomes across grid values a, we calculate integrated mean squared error (IMSE) and integrated Anderson–Darling weighted MSE (IWMSE) for each simulation: IMSE = ∫_a∈𝒜[F̂_Y(d,M(d^'))(a)-F_Y(d,M(d^'))(a)]^2dF_Y(d,M(d^'))(a), IWMSE = ∫_a∈𝒜[F̂_Y(d,M(d^'))(a)-F_Y(d,M(d^'))(a)]^2/F_Y(d,M(d^'))(a)(1-F_Y(d,M(d^'))(a))dF_Y(d,M(d^'))(a). To assess the performance of the estimators of the quantiles of the potential outcomes, Q_Y(d,M(d^')), we compute the integrated absolute error (IAE) across ranks τ: IAE = 1/|𝒯|∑_τ∈𝒯|Q̂_Y(d,M(d^'))(τ)-Q_Y(d,M(d^'))(τ)|, where 𝒯 is the grid of ranks, which we set to (0.05,0.06,…,0.95). Furthermore, we calculate the IAE for the estimators of the quantile treatment effects defined in equations (<ref>) to (<ref>). In the simulations, we set K=3 for 3-fold cross-fitting. The number of auxiliary variables is J=250 and we consider sample sizes 2,500, 5,000 and 10,000 observations in the simulations. The reported performance measures are averages for (d,d^')∈{0,1}^2 over 1,000 simulations. Table <ref> reports the results for the IMSE and IWMSE (scaled by 1,000), Table <ref> those for the IAE. All the performance measures behave rather favorably. As the sample size increases, the performance measures (and thus, estimation errors) decline sharply. However, for different combinations of (d,d^'), the levels of the performance measures are different, especially when the sample size is small. 
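The three performance measures reported in the tables can be approximated on the grid of thresholds and ranks as in the sketch below (our own discretization of the integrals in the displays above; it assumes the true c.d.f. values on the grid lie strictly inside (0,1) and approximates dF by first differences of the true c.d.f.).

```python
import numpy as np


def imse_iwmse(F_hat, F_true):
    """Discretized IMSE and Anderson-Darling-weighted IWMSE over the threshold grid.

    F_hat, F_true : (L,) arrays of estimated / true c.d.f. values on the grid,
    with F_true strictly inside (0,1)."""
    dF = np.diff(F_true, prepend=0.0)             # crude approximation of dF on the grid
    sq = (F_hat - F_true) ** 2
    imse = np.sum(sq * dF)
    iwmse = np.sum(sq / (F_true * (1.0 - F_true)) * dF)
    return imse, iwmse


def iae(q_hat, q_true):
    """Integrated absolute error over the grid of ranks tau = 0.05, 0.06, ..., 0.95."""
    return np.mean(np.abs(q_hat - q_true))
```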
Estimation errors are significantly larger if (d,d^') = (0,1) and (0,0), rather than (d,d^')=(1,1) and (1,0). This is also reflected by the performance measures of NDQTE (NDQTE') and TQTE, which point to higher errors than those of NIQTE (NIQTE'). When comparing the performance measures of the two estimators based on equations (<ref>) and (<ref>), we find some differences in their levels when the sample size is small. However, the differences vanish as the sample size increases, which suggests that the two estimators perform equally well asymptotically in the simulation design considered. § EMPIRICAL APPLICATION §.§ The Job Corps Data We apply the proposed estimators of natural direct and indirect quantile treatment effects to data from the National Job Corps Study, in order to evaluate the impact of the Job Corps (JC) training program on earnings of young individuals with disadvantaged backgrounds. JC is the largest and most comprehensive job training program for disadvantaged youth in the US. It provides participants with vocational training and/or classroom education, housing, and board over an average duration of 8 months. Participants also receive health education as well as health and dental care. <cit.> and <cit.> assess the average effects of random assignment to JC on several labor market outcomes and find it to increase education, employment, and earnings in the longer run. Other contributions evaluate more specific aspects or components of JC, like the average effect of the time spent in training or of particular training sequences on employment and earnings, see e.g. <cit.> and <cit.>. Furthermore, several studies conduct mediation analyses to assess the average direct and indirect effects of program participation. <cit.> and <cit.> consider work experience or employment as mediators, respectively, and find positive direct effects of JC on earnings and general health, respectively, when invoking a selection-on-observables assumption. <cit.> avoid the latter assumption based on a partial identification approach based on which they compute upper and lower bounds for the causal mechanisms of JC when considering the achievement of a GED, high school degree, or vocational degree as mediators. Under their strongest set of bounding assumptions, they find a positive direct effect on labor market outcomes, net of the indirect mechanism via obtaining a degree. <cit.> base their mediation analysis on separate instrumental variables for the treatment and the mediator and find a positive indirect effect of training on earnings through an increase in the number of hours worked. We contribute to the causal mediation literature on the effectiveness of the JC program by considering quantile treatment effects across different ranks of the potential outcome distributions, which provides more insights on effect heterogeneity than the evaluation of average effects. For our empirical analysis, we consider the JC data provided in the package by <cit.> for the statistical software , which is a processed data set with 9,240 observations that contains a subset of the variables available in the original National Job Corps Study. Our outcome of interest is weekly earnings in the third year after the assignment (the variable in the data frame), while the treatment is a binary indicator for participation in any (classroom-based or vocational) training in the first year after program assignment (). 
We aim at assessing whether training directly affects the earnings outcome, and whether it also has an indirect effect by affecting health. For this reason, we consider general health one year after program assignment () as mediator, a categorical variable ranging from 1 (excellent health) to 4 (poor health). The motivation is that participation in training aimed at increasing human capital and labor market perspectives may have an impact on mental health, which in turn may affect labor market success. Furthermore, JC might also affect physical health through health education and health/dental care, which can influence labor market success, too. For this reason, we aim at disentangling the direct earnings effect of training and its indirect effect operating via health. The data set also contains 28 pre-treatment covariates, which include socio-economic information such as a study participant's gender, age, ethnicity, (own) education and parents' education, mother tongue, marital status, household size, previous employment, earnings and welfare receipt, health status, smoking behavior, alcohol consumption, and whether a study participant has at least one child. We assume that sequential ignorability of the treatment and the mediator holds conditional on these observed characteristics, implying that they permit controlling for any factors jointly affecting training participation and the earnings outcome, training participation and health 12 months after assignment, or health and earnings. To make lasso-based estimation of the nuisance parameters in our DML approach more flexible, we create interaction terms between all of the 28 covariates and squared terms for any non-binary covariates. This entails a total of 412 control variables, comprising the original covariates as well as their higher-order and interaction terms, which enter our DML approach. Table <ref> provides summary statistics for the outcome, the treatment, the mediator and the covariates.

§.§ Effect Estimates

Before considering quantile treatment effects, we first estimate the average direct and indirect effects by a K-fold cross-fitting estimator based on Theorem 2 in <cit.>, as implemented in the package for . Table <ref> reports the estimated average total effect (TE) of training, the average natural direct effects (NDE and NDE') and the average natural indirect effects (NIE and NIE') operating via general health. The TE estimate (Effect) suggests that participation in JC increases average weekly earnings in the third year by roughly 16 to 17 USD. As the estimated mean potential outcome under non-treatment amounts to approximately 161 USD, the program increases weekly earnings by roughly 10% according to our estimate. The TE is highly statistically significant, as the standard error (Std.err) of 3.740 is rather low relative to the effect estimate, yielding a p-value close to zero. The total effect seems to be predominantly driven by the direct impact of training on earnings, as both NDE and NDE' are of similar magnitude as TE and highly statistically significant. In contrast, the indirect effect under non-treatment (d=0), NIE', is close to zero and insignificant, while that under treatment (d=1), NIE, amounts to -0.403 USD and is statistically significant at the 5% level.
Bearing in mind that the health mediator is inversely coded (a smaller value implies better health), this negative estimate suggests a positive average indirect effect of training participation on earnings under treatment, which is, however, rather modest. Furthermore, the effect heterogeneity across NIE and NIE' points to moderate interaction effects of the treatment and the mediator: the impact of health on earnings appears to be somewhat more important under training than without training. The average effects might mask interesting effect heterogeneity across ranks of the earnings distribution. For this reason, we estimate the total quantile treatment effect (TQTE), natural direct quantile treatment effects (NDQTE and NDQTE') and natural indirect quantile treatment effects (NIQTE and NIQTE') across ranks (τ) 0.2 to 0.9. To this end, we invert our K-fold cross-fitting estimator θ̂_d,d^',a of equation (<ref>) and estimate the nuisance parameters by post-lasso regression as outlined in Section 4.2. Figures <ref> and <ref> depict the estimates of the causal effects (on the y-axis) across τ (on the x-axis), which correspond to the solid lines in the respective graphs. The dashed lines provide the 95% confidence intervals based on the multiplier bootstrap introduced in Section 2.5. The quantile treatment effects are by and large in line with the average treatment effects. TQTE, NDQTE and NDQTE' are statistically significantly positive at the 5% level across almost all ranks τ considered and generally quite similar to each other. In contrast, all of the NIQTE estimates (the indirect effects under d=1) are relatively close to zero and statistically insignificant. The majority of the NIQTE' estimates (the indirect effects under d=0) are not statistically significantly different from zero either. However, several of the negative effects measured at lower ranks (roughly between the 0.2 and 0.4 quantiles) are marginally statistically significant and point to an earnings-increasing indirect effect under non-treatment (due to the inverse coding of the health mediator). This potentially interesting pattern is averaged out when considering NIE' (the average indirect effect under d=0), which we found to be virtually zero and insignificant, see Table <ref>. Finally, the non-monotonic shape of the point estimates of TQTE, NDQTE and NDQTE' across ranks τ suggests heterogeneous effects at different quantiles of the potential earnings distributions. At the same time, the width of the confidence intervals suggests that the null hypothesis of homogeneous effects cannot be rejected for most of the quantiles considered.

§ CONCLUSION

We proposed a DML approach for estimating natural direct and indirect quantile treatment effects under a sequential ignorability assumption. The method relies on the efficient score functions of the potential outcomes' cumulative distribution functions, which are inverted to compute the quantiles as well as the treatment effects (as the differences in potential outcomes at those quantiles). The robustness property of the efficient score functions permits estimating the nuisance parameters (outcome, treatment, and mediator models) by machine learning, and cross-fitting avoids overfitting bias. We demonstrated that our quantile treatment effect estimators are root-n-consistent and asymptotically normal. Furthermore, we suggested a multiplier bootstrap and demonstrated its consistency for uniform statistical inference.
We also investigated the finite sample performance of our estimators by means of a simulation study. Finally, we applied our method to data from the National Job Corp Study to evaluate the direct earnings effects of training across the earnings distribution, as well as the indirect effects operating via general health. We found positive and statistically significant direct effects across a large range of the earnings quantiles, while the indirect effects were generally close to zero and mostly statistically insignificant. equationsection § APPENDIX §.§ Proof of Proposition 1 The proof relies on using Assumptions 1.1 through 1.4. Under these assumptions, it can be shown that: F_Y(d,M(d^'))(a) = ∫ P(Y(d,M(d^'))≤ a|X=x)f_X(x)dx (by iterated expectation) = ∫∫ P(Y(d,m)≤ a|M(d^')=m,X=x)dP(M(d^')=m|X=x)f_X(x)dx (by iterated expectation) = ∫∫ P(Y(d,m)≤ a|D=d^',M(d^')=m,X=x) × dP(M(d^')=m|D=d^',X=x)f_X(x)dx (by Assumption 2) = ∫∫ P(Y(d,m)≤ a|D=d^',M=m,X=x)dP(M=m|D=d^',X=x)f_X(x)dx (by Assumption 1) = ∫∫ P(Y(d,m)≤ a|D=d^',X=x)dP(M=m|D=d^',X=x)f_X(x)dx (by Assumption 3) = ∫∫ P(Y(d,m)≤ a|D=d,X=x)dP(M=m|D=d^',X=x)f_X(x)dx (by Assumption 2) = ∫∫ P(Y(d,m)≤ a|D=d,M=m,X=x)dP(M=m|D=d^',X=x)f_X(x)dx (by Assumption 3) = ∫ P(Y≤ a|D=d,M=m,X=x)dP(M=m|D=d^',X=x)f_X(x)dx (by Assumption 1) = ∫∫ F_Y|D,M,X(a|d,m,x)f_M|D,X(m|d^',x)f_X(x)dmdx. §.§ Derivations of the EIF The derivation of the efficient influence function (EIF) of an estimand is based on calculating Gateaux derivatives for the estimand. Let P denote the true data generating distribution and Ψ(P) the estimand of interest, which is a statistical functional of P. The Gateaux derivative of Ψ(.) measures how the estimand Ψ(.) changes as P shifts in the direction of another distribution, say P̃. Let P_t=tP̃+(1-t)P, where t∈[0,1]. Formally, the Gateaux derivative of estimand Ψ(.) when changing P in the direction of P̃ is defined as lim_t↓0(Ψ(P_t)-Ψ(P)/t)=.d/dtΨ(P_t)|_t=0, if the limit on the right-hand side exists. It can be shown that under certain regularity conditions, the EIF of Ψ(P) under the distribution P̃ is equal to Gateaux derivative (<ref>) <cit.>. This fact provides a convenient way of deriving the EIF. Following <cit.>, we use the strategy of “point mass contamination” to derive the EIF of F_Y(d,M(d^'))(a). Specifically, we consider P̃ to be a point mass of a single observation, say õ, and then the EIF of Ψ(P) evaluated at õ is equal to the Gateaux derivative (<ref>). This derivation strategy appears attractive when the treatment variable D is discrete.[Notice that if D is not discrete, this strategy can not be used, and the derivation needs to rely on using other methods instead, see <cit.>.] Let 1_õ(o)= 1 if o=õ 0 otherwise denote the Dirac delta function with respect to õ. If the density function for P is f_O(o), the density function for P_t is f_O^t(o)=t1_õ(o)+(1-t)f_O(o) and .d/dtf_O^t(o)|_t=0=1_õ(o)-f_O(o), and f_O^t(o)=f_O(o) when t=0. Under Assumptions 1.1 to 1.4, it follows from Proposition 1 that F_Y(d,M(d^'))(a) = ∫∫ F_Y|D,M,X(a|d,m,x)f_M|D,X(m|d^',x)f_X(x)dmdx = ∫∫∫1{ y≤ a}f_Y,D,M,X(y,d,m.x)/f_D,M,X(d,m,x)f_M,D,X(m,d^',x)/f_D,X(d^',x)f_X(x)dydmdx. Let Ψ(P_t):=∫∫∫1{ y≤ a}f_Y,D,M,X^t(y,d,m.x)/f_D,M,X^t(d,m,x)f_M,D,X^t(m,d^',x)/f_D,X^t(d^',x)f_X^t(x)dydmdx. We would like to calculate the Gateau derivative: .d/dtΨ(P_t)|_t=0=.d/dt(∫∫∫1{ y≤ a}f_Y,D,M,X^t(y,d,m.x)/f_D,M,X^t(d,m,x)f_M,D,X^t(m,d^',x)/f_D,X^t(d^',x)f_X^t(x)dydmdx)|_t=0. 
It can be shown that .d/dtΨ(P_t)|_t=0 = ∫∫∫1{ y≤ a}.(f_Y,D,M,X^t(y,d,m.x)/f_D,M,X^t(d,m,x)f_M,D,X^t(m,d^',x)/f_D,X^t(d^',x)d/dtf_X^t(x))|_t=0dydmdx +∫∫∫1{ y≤ a}.(f_X^t(x)d/dtf_Y,D,M,X^t(y,d,m.x)/f_D,M,X^t(d,m,x)f_M,D,X^t(m,d^',x)/f_D,X^t(d^',x))|_t=0dydmdx. Considering the expression within the integral of (<ref>), .(f_Y,D,M,X^t(y,d,m.x)/f_D,M,X^t(d,m,x)f_M,D,X^t(m,d^',x)/f_D,X^t(d^',x)d/dtf_X^t(x))|_t=0 = f_Y,D,M,X^t(y,d,m.x)/f_D,M,X^t(d,m,x)f_M,D,X^t(m,d^',x)/f_D,X^t(d^',x)×[1_x̃(x)-f_X(x)] = f_Y|D,M,X(y|d,m.x)f_M|D,X(m|d^',x)1_x̃(x) -f_Y|D,M,X(y|d,m.x)f_M|D,X(m|d^',x)f_X(x). Therefore, (<ref>) can be further expressed as E[F_Y|D,M,X(a|d,M,x̃)|d^',x̃]-E[g_d,d^',a(X)]. Considering the xpression within the integral of (<ref>), .(f_X^t(x)d/dtf_Y,D,M,X^t(y,d,m.x)/f_D,M,X^t(d,m,x)f_M,D,X^t(m,d^',x)/f_D,X^t(d^',x))|_t=0 = .(f_X^t(x)f_Y,D,M,X^t(y,d,m.x)/f_D,M,X^t(d,m,x)d/dtf_M,D,X^t(m,d^',x)/f_D,X^t(d^',x))|_t=0 +.(f_X^t(x)f_M,D,X^t(m,d^',x)/f_D,X^t(d^',x)d/dtf_Y,D,M,X^t(y,d,m.x)/f_D,M,X^t(d,m,x))|_t=0. Considering (<ref>), .d/dtf_M,D,X^t(m,d^',x)/f_D,X^t(d^',x)|_t=0 = .(1/f_D,X^t(d^',x)d/dtf_M,D,X^t(m,d^',x))|_t=0 -.(f_M,D,X^t(m,d^',x)/(f_D,X^t(d^',x))^2d/dtf_D,X^t(d^',x))|_t=0 = 1{ D=d^'}/f_D,X(d^',x)[1_(m̃,x̃)(m,x)-f_M|D,X(m|d^',x)1_x̃(x)]. Using some algebra, the part of (<ref>) appearing in (<ref>) can be expressed as 1{ D=d^'}/f_D|X(d^'|x̃)(F_Y|D,M,X(a|d,m̃,x̃)-E[F_Y|D,M,X(a|d,M,x̃)|d^',x̃]). Concerning (<ref>), .d/dtf_Y,D,M,X^t(y,d,m.x)/f_D,M,X^t(d,m,x)|_t=0 = .(1/f_D,M,X^t(d,m,x)d/dtf_Y,D,M,X^t(y,d,m.x))|_t=0 -.(f_Y,D,M,X^t(y,d,m,x)/(f_D,M,X^t(d,m,x))^2d/dtf_D,M,X^t(d,m,x))|_t=0 = 1/f_D,M,X(d,m,x)[1_(ỹ,m̃,x̃)(y,m,x)1{ D=d} -f_Y,D,M,X(y,d,m,x)] -f_Y|D,M,X(y|d,m,x)/f_D,M,X(d,m,x)[1_(m̃,x̃)(m,x)1{ D=d} -f_D,M,X(d,m,x)] = 1{ D=d}/f_D,M,X(d,m,x)[1_(ỹ,m̃,x̃)(y,m,x)-f_Y|D,M,X(y|d,m,x)1_m̃,x̃(m,x)]. Furthermore, the part of (<ref>) appearing in (<ref>) can be expressed as 1{ D=d}/f_D|X(d|x̃)f_M|D,X(m̃|d^',x̃)/f_M|D,X(m̃|d,x̃)(1{ỹ≤ a} -F_Y|D,M,X(a|d,m̃,x̃)). Combing (<ref>), (<ref>) and (<ref>), we obtain .d/dtΨ(P_t)|_t=0 = E[F_Y|D,M,X(a|d,M,x̃)|d^',x̃]-E[g_d,d^',a(X)] +1{ D=d^'}/f_D|X(d^'|x̃)(F_Y|D,M,X(a|d,m̃,x̃)-E[F_Y|D,M,X(a|d,M,x̃)|d^',x̃]) +1{ D=d}/f_D|X(d|x̃)f_M|D,X(m̃|d^',x̃)/f_M|D,X(m̃|d,x̃)(1{ỹ≤ a} -F_Y|D,M,X(a|d,m̃,x̃)). If we replace the notation (ỹ,m̃,x̃) with (Y,M,X) and notice that g_d,d^',a(X):=E[F_Y|D,M,X(a|d,M,X)|d^',X] and E[g_d,d^',a(X)]=F_Y(d,M(d^'))(a), then E[.d/dtΨ(P_t)|_t=0]=0 implies that F_Y(d,M(d^'))(a)=E[ψ^'(W_a,v_a^')], where ψ^'(W_a,v_a^') is defined in equation (<ref>). We can apply the Bayes rule to rewrite the term (<ref>) as 1{ D=d}/f_D|X(d^'|x̃)f_D|M,X(d^'|m̃,x̃)/f_D|M,X(d|m̃,x̃)(1{ỹ≤ a} -F_Y|D,M,X(a|d,m̃,x̃)), and E[.d/dtΨ(P_t)|_t=0]=0 now implies that F_Y(d,M(d^'))(a)=E[ψ(W_a,v_a)], where ψ(W_a,v_a) is defined in equation (<ref>). §.§ Proofs of Theorems in Section 3 The proof relies on Theorem A.1 in Appendix A.4, which states that K-fold cross-fitting is uniformly valid for estimating a parameter of interest under certain regularity conditions. We show that the conditions in Assumption 2 are sufficient for the proposed K-fold cross-fitting estimator to satisfy Assumptions A.1.1 to A.1.8, which are required for establishing Theorem A.1. Notice that the condition ‖v̂_a-v_a^0‖ _P,2≤δ_nn^-1/4 in Assumption 2.4 already satisfies Assumption A.1.8. For this reason, we will only verify Assumption A.1.1 to A.1.7 in the subsequent discussion. We first derive several preliminary results which are useful for the proof to follow. 
Let 𝒢_an be the set of v:=(g_1d(X),g_2d(M,X),g_3a(D,M,X),g_4ad(D,X)). g_1d(X),g_2d(M,X),g_3a(D,M,X) and g_4ad(D,X) are P-integrable functions such that for d∈{ 0,1}, P(ε_1<g_1d(X)<1-ε_1) =1, P(ε_2<g_2d(M,X)<1-ε_2) =1, where ε_1,ε_2∈(0,1/2), and with probability P at least 1-Δ_n, for d∈{ 0,1}, all a∈𝒜 and q≥4, ‖ v-v_a^0‖ _P,q ≤ C, ‖ v-v_a^0‖ _P,2 ≤δ_nn^-1/4, ‖ g_1d(X)-0.5‖ _P,∞ ≤0.5-ϵ, ‖ g_2d(M,X)-0.5‖ _P,∞ ≤0.5-ϵ, ‖ g_1d(X)-g_1d^0(X)‖ _P,2‖ g_2d(M,X)-g_2d^0(M,X)‖ _P,2 ≤δ_nn^-1/2, ‖ g_1d(X)-g_1d^0(X)‖ _P,2‖ g_3a(D,M,X)-g_3a^0(D,M,X)‖ _P,2 ≤δ_nn^-1/2, ‖ g_1d(X)-g_1d^0(X)‖ _P,2‖ g_4ad(D,X)-g_4ad^0(D,X)‖ _P,2 ≤δ_nn^-1/2, ‖ g_2d(M,X)-g_2d^0(M,X)‖ _P,2‖ g_3a(D,M,X)-g_3a^0(D,M,X)‖ _P,2 ≤δ_nn^-1/2. Notice that v̂_a∈𝒢_an by Assumption 2.4. Proving the result for all functions in 𝒢_an implies that it also holds for v̂_a. By Assumption 2.2, it can be shown that for (d,d^')∈{ 0,1} ^2, the event 0<ε_2/1-ε_2<g_2d^0(M,X)/g_2d^'^0(M,X)<1-ε_2/ε_2<∞ holds with probability one. Furthermore, ‖ g_3a(D,M,X)-g_3a^0(D,M,X)‖ _P,q =(E[E[|g_3a(D,M,X)-g_3a^0(D,M,X)|^q|M,X]])^1/q =(E[∑_d∈{ 0,1}|g_3a(d,M,X)-g_3a^0(d,M,X)|^qg_2d^0(M,X)])^1/q ≥ε_2^1/q(E[∑_d∈{ 0,1}|g_3a(d,M,X)-g_3a^0(d,M,X)|^q])^1/q ≥ε_2^1/q(max_d∈{ 0,1}E[|g_3a(d,M,X)-g_3a^0(d,M,X)|^q])^1/q. which implies that for d∈{ 0,1} and all a∈𝒜, with P probability at least 1-Δ_n, ‖ g_3a(d,M,X)-g_3a^0(d,M,X)‖ _P,q≤ Cε_2^-1/q, by the assumption ‖ v-v_a^0‖ _P,q≤ C and ε_2>0. A similar argument can be used to show that for d∈{ 0,1} and all a∈𝒜, with probability P at least 1-Δ_n, ‖ g_3a(d,M,X)-g_3a^0(d,M,X)‖ _P,2 ≤δ_nn^-1/4ε_2^-1/q≲δ_nn^-1/4, ‖ g_4ad(d^',X)-g_4d^0(d^',X)‖ _P,2 ≤δ_nn^-1/4ε_1^-1/q≲δ_nn^-1/4, by assumption ‖ v-v_a^0‖ _P,2≤δ_nn^-1/4 and ε_2>0. Similarly, ‖ g_1d(X)-g_1d^0(X)‖ _P,2‖ g_3a(d,M,X)-g_3a^0(d,M,X)‖ _P,2 ≤ε_2^-1/qδ_nn^-1/2≲δ_nn^-1/2, ‖ g_1d(X)-g_1d^0(X)‖ _P,2‖ g_4ad(d^',X)-g_4ad^0(d^',X)‖ _P,2 ≤ε_1^-1/qδ_nn^-1/2≲δ_nn^-1/2, ‖ g_2d(M,X)-g_2d^0(M,X)‖ _P,2‖ g_3a(d,M,X)-g_3a^0(d,M,X)‖ _P,2 ≤ε_2^-1/qδ_nn^-1/2≲δ_nn^-1/2. §.§.§ Verifying Assumption A.1.1: F_Y(d,M(d^'))^0(a)=θ_d,d^',a^0 §.§.§ The case when d≠ d^' Recall that ψ_d,d^',a(W_a;v_a^0) =1{ D=d}[1-g_2d^0(M,X)]/[1-g_1d^0(X)]g_2d^0(M,X)×[Y_a -g_3a^0(d,M,X)] +1{ D=d^'}/1-g_1d^0(X)[g_3a^0(d,M,X)-g_4ad^0(d^',X)]+g_4ad^0(d^',X). Conditional on M and X, the expectation of the first term after the equals sign in expression (<ref>) is E[1{ D=d}[1-g_2d^0(M,X)]/[1-g_1d^0(X)]g_2d^0(M,X)Y_a |M,X] =1-g_2d^0(M,X)/[1-g_1d^0(X)]g_2d^0(M,X)× E[1{ D=d} Y_a |M,X] =1-g_2d^0(M,X)/[1-g_1d^0(X)]g_2d^0(M,X)× g_2d(M,X)g_3a(d,M,X) =1-g_2d^0(M,X)/1-g_1d^0(X)g_3a(d,M,X), and the expectation of the last term right in expression (<ref>) is E[1{ D=d}[1-g_2d^0(M,X)]/[1-g_1d^0(X)]g_2d^0(M,X)g_3a^0(d,M,X)|M,X] =1-g_2d^0(M,X)/[1-g_1d^0(X)]g_2d^0(M,X)× g_2d(M,X)g_3a(d,M,X) =1-g_2d^0(M,X)/1-g_1d^0(X)g_3a(d,M,X). Therefore, conditional on M and X, the term in expression (<ref>) is zero and its unconditional expectation is also zero. Concerning the terms in expression (<ref>), notice that the first term E[1{ D=d^'}/1-g_1d^0(X)[g_3a^0(d,M,X)-g_4ad^0(d^',X)]|X] =1/1-g_1d^0(X)E[g_3a^0(d,M,X)1{ D=d^'} |X] -g_4ad^0(d^',X) =0 Therefore, the expectation of the first term in expression (<ref>) is zero. The expectation of the second term of in expression (<ref>) is E[g_4ad^0(d^',X)]=F_Y(d,M(d^'))^0(a) by Proposition 1. Therefore, it follows that F_Y(d,M(d^'))^0(a)=θ_d,d^',a^0. §.§.§ The case when d=d Now, we have ψ_d,d,a(W_a;v_a^0) =1{ D=d}/g_1d^0(X)×[Y_a -g_4ad^0(d,X)]+g_4ad^0(d,X), where g_4ad^0(d,X)=F_Y|D,X^0(a|d,X). 
Concerning the first term right of the equals sign in equation (<ref>), E[1{ D=d}/g_1d^0(X)×[Y_a -g_4ad^0(d,X)]|X] =E[1{ D=d} Y_a |X]/g_1d^0(X)-g_4ad^0(d,X) =0. Concerning the last term in equation (<ref>), notice that F_Y|D,X^0(a|d,X)=F_Y(d,M(d))|D,X^0(a|d,X)=F_Y(d,M(d))|X^0(a|X) by Assumptions 1.1 and 1.2, and therefore, E[g_4ad^0(d,X)] =E[F_Y(d,M(d))|X^0(a|X)]=F_Y(d,M(d))^0(a). Combining the previous results, it follows that F_Y(d,M(d^'))^0(a)=θ_d,d^',a^0 holds for all a∈𝒜 and (d,d^')∈{ 0,1} ^2. §.§.§ Verifying Assumption A.1.2 If we treat v as deterministic, the second order Gateau derivative of the map v⟼ E[ψ_d,d^',a(W_a;v)] exists and is continuous on v∈𝒢_an, and this property holds for each (d,d^')∈{ 0,1} ^2 and a∈𝒜. Therefore v⟼ E[ψ_a(W_a;v)] is twice continuously Gateau-differetiable for a∈𝒜. §.§.§ Verifying Assumption A.1.3 (Neyman near orthogonality) §.§.§ The case when d≠ d^' Recall that ψ_d,d^',a(W_a,v_a^0) =1{ D=d}[1-g_2d^0(M,X)]/[1-g_1d^0(X)]g_2d^0(M,X)[1{ Y≤ a} -g_3a^0(d,M,X)] +1{ D=d^'}/1-g_1d^0(X)[g_3a^0(d,M,X)-g_4ad^0(d^',X)]+g_4ad^0(d^',X). Let μ_d,d^',a(𝐭)=1{ D=d}(1-t_2)/(1-t_1)t_2(1{ Y≤ a} -t_3)+1{ D=d^'}/1-t_1(t_3-t_4)+t_4, where 𝐭=(t_1,,…,t_4). If we set 𝐭=(g_1d(X),g_2d(M,X),g_3a(d,M,X),g_4ad(d^',X)), then E[μ_d,d^',a(𝐭)]=E[ψ_d,d^',a(W_a,v)]. If we set 𝐭=𝐭^0=(g_1d^0(X),g_2d^0(M,X),g_3a^0(d,M,X),g_4ad^0(d^',X)), then E[μ_d,d^',a(𝐭_0)]=E[ψ_d,d^',a(W_a,v_a^0)]. Furthermore, ‖∂_vE[ψ_d,d^',a(W_a,v_a^0)[v-v_a^0]]‖ =‖ E[∂_𝐭μ_d,d^',a(𝐭_0)[𝐭-𝐭_0]]‖. The partial derivatives with respect to (t_1,…,t_4) are given by ∂_t_1μ_d,d^',a(𝐭) =1{ D=d}(1-t_2)/(1-t_1)^2t_2(1{ Y≤ a} -t_3)+1{ D=d^'}/(1-t_1)^2(t_3-t_4), ∂_t_2μ_d,d^',a(𝐭) =-1{ D=d}/(1-t_1)t_2^2(1{ Y≤ a} -t_3), ∂_t_3μ_d,d^',a(𝐭) =1{ D=d^'}/1-t_1-1{ D=d}(1-t_2)/(1-t_1)t_2, ∂_t_4μ_d,d^',a(𝐭) =1-1{ D=d^'}/1-t_1. Replacing 𝐭 with 𝐭^0 and taking expectations in equation (<ref>), we have E[∂_t_1μ_d,d^',a(𝐭_0)[g_1d(X)-g_1d^0(X)]] =E[1-g_2d^0(M,X)/(1-g_1d^0(X))^2g_2d^0(M,X)×[E[1{ Y≤ a} 1{ D=d} |M,X].. ..-g_2d^0(M,X)g_3a^0(d,M,X)][g_1d(X)-g_1d^0(X)]] +E[1/(1-g_1d^0(X))^2[E[1{ D=d^'} g_3a^0(d,M,X)|X].. .-(1-g_1d^0(X))g_4ad^0(d^',X)][g_1d(X)-g_1d^0(X)] =0, because in expressions (<ref>) and (<ref>), we have that E[1{ D=d} 1{ Y≤ a} |M,X] =P_Y,D|M,X^0(Y≤ a,D=d|M,X) =g_2d^0(M,X)g_3a^0(d,M,X), E1[1{ D=d^'} g_3a^0(d,M,X)|X] =∫ g_3a^0(d,m,X)P_D,M|X^0(D=d^',M=m|X)dm =(1-g_1d^0(X))g_4ad^0(d^',X). When taking expectations in equation (<ref>), we have E[∂_t_2μ_d,d^',a(𝐭_0)[g_2d(M,X)-g_2d^0(M,X)]] =E[-1/(1-g_1d^0(X))(g_2d^0(M,X))^2. ×[E[1{ D=d^'} 1{ Y≤ a} |M,X]-g_2d^0(M,X)g_3a^0(d,M,X)] .×[g_2d(M,X)-g_2d^0(M,X)]] =0, by making use of expression (<ref>). When taking expectations in equation (<ref>), we have E[∂_t_3μ_d,d^',a(𝐭_0)[g_3a(d,M,X)-g_3a^0(d,M,X)]] =E[E[1{ D=d^'}/1-g_1d^0(X)-1{ D=d}(1-g_2d^0(M,X))/(1-g_1d^0(X))g_2d^0(M,X)|M,X]. .×[g_3a(d,M,X)-g_3a^0(d,M,X)]] =0, because E[1{ D=d^'}/1-g_1d^0(X)-1{ D=d}(1-g_2d^0(M,X))/(1-g_1d^0(X))g_2d^0(M,X)|M,X] =0. When taking expectations in equation (<ref>), we have E[∂_t_4μ_d,d^',a(𝐭_0)[g_4ad(d^',X)-g_4ad^0(d^',X)]] =E[E[(1-1{ D=d^'}/1-g_1d^0(X))|X][g_4ad(d^',X)-g_4ad^0(d^',X)]] =0, since E[(1-1{ D=d^'}/1-g_1d^0(X))|X]=1-1-g_1d^0(X)/1-g_1d^0(X)=0. §.§.§ The case when d=d^' Recall that ψ_d,d,a(W_a,v_a^0) =1{ D=d}/g_1d^0(X)[1{ Y≤ a} -g_4ad^0(d,X)]+g_4ad^0(d,X). Now μ_d,d^',a(𝐭) is given by μ_d,d^',a(𝐭)=1{ D=d}/t_1(1{ Y≤ a} -t_4)+t_4. The partial derivatives with respect to (t_1,t_4) are given by ∂_t_1μ_d,d^',a(𝐭) =-1{ D=d}/t_1^2(1{ Y≤ a} -t_4), ∂_t_4μ_d,d^',a(𝐭) =1-1{ D=d}/t_1. 
Replacing 𝐭 with 𝐭^0 and taking expectation in equation (<ref>), we have E[∂_t_1μ_d,d^',a(𝐭_0)[g_1d(X)-g_1d^0(X)]] =E[1/(g_1d^0(X))^2×[E[1{ Y≤ a} 1{ D=d} |X].. ..-g_1d^0(X)g_4ad^0(d,X)][g_1d(X)-g_1d^0(X)]] =0, since the term g_1d^0(X)g_4ad^0(d,X) =P_Y,D,M,X^0(Y≤ a,d|X)=E[1{ D=d} Y_a|X]. Taking expectations in equation (<ref>), we have E[∂_t_4μ_d,d^',a(𝐭_0)[g_4ad(d,X)-g_4ad^0(d,X)]] =E[E[1-1{ D=d}/g_1d^0(X)|X][g_4ad(d^',X)-g_4ad^0(d^',X)]] =0, since E[1-1{ D=d}/g_1d^0(X)|X]=1-g_1d^0(X)/g_1d^0(X)=0. Combining all previous results, it follows that ‖∂_vE[ψ_d,d^',a(W_a,v_a^0)[v-v_a^0]]‖ =0 holds for each (d,d^')∈{ 0,1} ^2 and all a∈𝒜. Therefore, ‖∂_vE[ψ_a(W_a,v_a^0)[v-v_a^0]]‖ =0 holds all a∈𝒜. §.§.§ Verifying Assumption A.1.4a (r_n≤δ_nn^-1/4) §.§.§ The case when d≠ d^' We have ψ_d,d^',a(W_a,v)-ψ_d,d^',a(W_a,v_a^0) ={1{ D=d}[1-g_2d(M,X)]/[1-g_1d(X)]g_2d(M,X)-1{ D=d}[1-g_2d^0(M,X)]/[1-g_1d^0(X)]g_2d^0(M,X)} 1{ Y≤ a} +{1{ D=d^'}/1-g_1d(X)-1{ D=d}[1-g_2d(M,X)]/[1-g_1d(X)]g_2d(M,X)}× g_3a(d,M,X) -{1{ D=d^'}/1-g_1d^0(X)-1{ D=d}[1-g_2d^0(M,X)]/[1-g_1d^0(X)]g_2d^0(M,X)}× g_3a^0(d,M,X) +[1-1{ D=d^'}/1-g_1d(X)]g_4ad(d^',X)-[1-1{ D=d^'}/1-g_1d^0(X)]g_4ad^0(d^',X). To ease the notation, we express these nuisance parameters without their arguments in the following proof. Using the Minkowski inequality yields ‖ψ_d,d^',a(W_a,v)-ψ_d,d^',a(W_a,v_a^0)‖ _P,2 ≤Π_1+Π_2+Π_3, where Π_1 =‖[1{ D=d}(1-g_2d)/(1-g_1d)g_2d-1{ D=d}(1-g_2d^0)/(1-g_1d^0)g_2d^0]1{ Y≤ a}‖ _P,2, Π_2 =‖[1{ D=d^'}/1-g_1d-1{ D=d}(1-g_2d)/(1-g_1d)g_2d]× g_3a-[1{ D=d^'}/1-g_1d^0-1{ D=d}(1-g_2d^0)/(1-g_1d^0)g_2d^0]× g_3a^0‖ _P,2, Π_3 =‖(1-1{ D=d^'}/1-g_1d)g_4ad-(1-1{ D=d^'}/1-g_1d^0)g_4ad^0‖ _P,2. In the following, Assumption 2.4 and the boundedness conditions (<ref>), (<ref>), (<ref>) and (<ref>) are applied to derive the relevant upper bounds. For the term Π_1, with probability P at least 1-Δ_n, we have Π_1 ≤‖g_2d^0-g_2d/(1-g_1d)g_2d‖ _P,2+‖(1-g_2d^0)/(1-g_1d)g_2d-(1-g_2d^0)/(1-g_1d^0)g_2d^0‖ _P,2 ≤1/ε_1ε_2‖ g_2^0-g_2‖ _P,2+1/ε_1ε_2‖ g_2-g_2^0‖ _P,2+1/ε_1^2‖ g_1-g_1^0‖ _P,2 ≤(2/ε_1ε_2+1/ε_1^2)δ_nn^-1/4≲δ_nn^-1/4 , by making use of the fact that |1/(1-g_1d)g_2d-1/(1-g_1d^0)g_2d^0| =|g_2d^0-g_2d/(1-g_1d)g_2dg_2d^0+g_1d-g_1d^0/(1-g_1d)(1-g_1d^0)g_2d^0| ≤1/ε_1ε_2^2|g_2d-g_2d^0|+1/ε_1^2ε_2|g_1d-g_1d^0| and Assumption ‖ v-v_a^0‖ _P,2≲δ_nn^-1/4. For the term Π_2, it is known that with probability P at least 1-Δ_n, for a∈𝒜, Π_2 ≤‖(2g_2d-1)/(1-g_1d)g_2d×(g_3d-g_3d^0)‖ _P,2+‖[2g_2d-1/(1-g_1d)g_2d-2g_2d^0-1/(1-g_1d^0)g_2d^0]× g_3a^0‖ _P,2 ≤1-2ε_2/ε_1ε_2‖ g_3a-g_3a^0‖ _P,2+1-2ε_2/ε_1ε_2^2‖ g_2d-g_2d^0‖ _P,2+1-2ε_2/ε_1^2ε_2‖ g_1d-g_1d^0‖ _P,2+2/ε_1ε_2‖ g_2d-g_2d^0‖ _P,2 ≤(1-2ε_2/ε_1ε_2ε_2^-1/q+1-2ε_2/ε_1ε_2^2+1-2ε_2/ε_1^2ε_2+2/ε_1ε_2)δ_nn^-1/4≲δ_nn^-1/4, by using 2g_2d-1/(1-g_1d)g_2d≤1-2ε_2/ε_1ε_2, |2g_2d-1/(1-g_1d)g_2d-2g_2d^0-1/(1-g_1d^0)g_2d^0| =|[g_2d^0-g_2d/(1-g_1d)g_2dg_2d^0+g_1d-g_1d^0/(1-g_1d)(1-g_1d^0)g_2d^0]×(2g_2d-1)+2(g_2d-g_2d^0)/(1-g_1d^0)g_2d^0| ≤1-2ε_2/ε_1ε_2^2|g_2d-g_2d^0|+1-2ε_2/ε_1^2ε_2|g_1d-g_1d^0|+2/ε_1ε_2|g_2d-g_2d^0|, and Assumption ‖ v-v_a^0‖ _P,2≲δ_nn^-1/4 as well as condition (<ref>). For the term Π_3, we can show that with probability P at least 1-Δ_n and for a∈𝒜, Π_3 ≤‖g_1d^0-g_1d/(1-g_1d^0)(1-g_1d)g_4ad^0‖ _P,2+‖g_4ad^0/1-g_1d-g_4ad/1-g_1d‖ _P,2+‖ g_4ad-g_4ad^0‖ _P,2 ≤1/ε_1^2‖ g_1d^0-g_1d‖ _P,2+(1+1/ε_1)‖ g_4ad^0-g_4ad‖ _P,2 ≤[1/ε_1^2+ε_2^-1/q(1+1/ε_1)]δ_nn^-1/4≲δ_nn^-1/4, by using 0≤ g_4ad^0≤1, Assumption ‖ v-v_a^0‖ _P,2≲δ_nn^-1/4 and condition (<ref>). 
§.§.§ The case when d=d^' We have ψ_d,d,a(W_a,v)-ψ_d,d,a(W_a,v_a^0) ={1{ D=d}/g_1d(X)-1{ D=d}/g_1d^0(X)} 1{ Y≤ a} +[1-1{ D=d^'}/g_1d]g_4ad-[1-1{ D=d}/g_1d^0]g_4ad^0, where g_4ad=g_4ad(d,X) (not g_4ad(d^',X)). Using the triangle inequality yields ‖ψ_d,d,a(W_a,v)-ψ_d,d,a(W_a,v_a^0)‖ _P,2 ≤Π_4+Π_5, where Π_4 =‖[1{ D=d}/g_1d(X)-1{ D=d}/g_1d^0(X)]1{ Y≤ a}‖ _P,2, Π_5 =‖[1-1{ D=d^'}/g_1d]g_4ad-[1-1{ D=d}/g_1d^0]g_4ad^0‖ _P,2. Following previous arguments, it can be shown that Π_4≤ε_2^-2δ_nn^-1/4≲δ_nn^-1/4. Similar as for Π_3, we have for Π_5, Π_5≤1/ε_1^2‖ g_1d^0-g_1d‖ _P,2+(1+1/ε_1)‖ g_4ad^0-g_4ad‖ _P,2≤[1/ε_1^2+ε_2^-1/q(1+1/ε_1)]δ_nn^-1/4≲δ_nn^-1/4. Combining the previous results, we obtain that with probability P at least 1-Δ_n and for all a∈𝒜, ‖ψ_d,d^',a(W_a,v)-ψ_d,d^',a(W_a,v_a^0)‖ _P,2≲δ_nn^-1/4 holds for each (d,d^')∈{ 0,1} ^2. Therefore, with probability P at least 1-Δ_n and all a∈𝒜 and v∈𝒢_an, ‖ψ_a(W_a,v)-ψ_a(W_a,v_a^0)‖ _P,2≲δ_nn^-1/4. §.§.§ Verifying Assumption A.1.4b (λ_n^'≤δ_nn^-1/2) §.§.§ The case when d≠ d^' We may write ψ__d,d^',a(W_a;r(v-v_a^0)+v_a^0) as ψ__d,d^',a(W_a;r(v-v_a^0)+v_a^0) =1{ D=d}{ 1-[r(g_2d-g_2d^0)+g_2d^0]}/{ 1-[r(g_1d-g_1d^0)+g_1d^0]}{ r(g_2d-g_2d^0)+g_2d^0} ×{ Y_a-[r(g_3d-g_3d^0)+g_3d^0]} +1{ D=d^'}/1-[r(g_1d-g_1d^0)+g_1d^0] ×{ r[(g_3d-g_4ad)-(g_3a^0-g_4ad^0)]+(g_3a^0-g_4ad^0)} +[r(g_4ad-g_4ad^0)+g_4ad^0]. Let A_1(r) =1-[r(g_2d-g_2d^0)+g_2d^0],A_2(r)=1-[r(g_1d-g_1d^0)+g_1d^0], A_3(r) =r(g_2d-g_2d^0)+g_2d^0,A_4(r)=Y_a-[r(g_3a-g_3a^0)+g_3a^0], A_5(r) =r[(g_3a-g_4ad)-(g_3a^0-g_4ad^0)]+(g_3a^0-g_4ad^0). Notice that the functions A_i(r), i=1,…,5 are also functions of the random variables (Y_a,M,X). For this reason, we may rewrite ψ__d,d^',a(W_a;r(v-v_a^0)+v_a^0) as ψ__d,d^',a(W_a;r(v-v_a^0)+v_a^0) =1{ D=d} A_1(r)/A_2(r)A_3(r)× A_4(r) +1{ D=d^'}/A_2(r)× A_5(r)+[r(g_4ad-g_4ad^0)+g_4ad^0]. After some calculations, we obtain .1/2E[∂_r^2(ψ__d,d^',a(W_a;r(v-v_a^0)+v_a^0))]|_r=r̅ =-E[1{ D=d} A_4(r̅)/(A_2(r̅))^2A_3(r̅)(g_1d-g_1d^0)^2] +E[1{ D=d} A_4(r̅)/A_2(r̅)(A_3(r̅))^2(g_2d-g_2d^0)(g_1d-g_1d^0)] +E[1{ D=d}/A_2(r̅)A_3(r̅)(g_2d-g_2d^0)(g_3a-g_3a^0)] +E[1{ D=d} A_1(r̅)A_4(r̅)/(A_2(r̅))^3A_3(r̅)(g_1d-g_1d^0)^2] -E[1{ D=d} A_1(r̅)A_4(r̅)/(A_2(r̅))^2(A_3(r̅))^2(g_1d-g_1d^0)(g_2d-g_2d^0)] +E[1{ D=d} A_1(r̅)/(A_2(r̅))^2A_3(r̅)(g_1d-g_1d^0)(g_3a-g_3a^0)] +E[1{ D=d} A_1(r̅)A_4(r̅)/A_2(r̅)(A_3(r̅))^3(g_2d-g_2d^0)^2] +E[1{ D=d} A_1(r̅)/A_2(r̅)(A_3(r̅))^2(g_2d-g_2d^0)(g_3a-g_3a^0)] +E[1{ D=d^'} A_5(r̅)/(A_2(r̅))^3(g_1d-g_1d^0)^2] +E[1{ D=d^'} A_5(r̅)/(A_2(r̅))^2(g_1d-g_1d^0)[g_3a-g_4ad-(g_3a^0-g_4ad^0)]]. To bound the expectation of the second order derivative above, we can use the properties of A_i(r). Using Assumption 2.2 and acknowledging that r̅∈(0,1), we have that ε_2<A_1(r̅) =1-[r(g_2d-g_2d^0)+g_2d^0]<1-ε_2, ε_1<A_2(r̅) =1-[r(g_1d-g_1d^0)+g_1d^0]<1-ε_1, ε_2<A_3(r̅) =r(g_2d-g_2d^0)+g_2d^0<1-ε_1 hold with probability one. Also |A_4(r̅)| and |A_5(r̅)| are bounded by constants, since |Y_a|, g_3a, g_3a^0, g_4ad and g_4ad^0 are all bounded and r̅∈(0,1). Based on these results and Assumption 2.4, it can be shown that the absolute values of those terms on the right hand side of equation (<ref>) that involve interaction terms are all bounded by δ_nn^-1/2. We now consider the terms on the right hand side of equation (<ref>) that involve quadratic terms (the first, fourth, seventh and ninth terms). By the assumption ‖ v-v_a^0‖ _P,2≤δ_nn^-1/4, we have ‖ g_1d-g_1d^0‖ _P,2≲δ_nn^-1/4 and ‖ g_2d-g_2d^0‖ _P,2≲δ_nn^-1/4. 
Concerning the first term, |-E[1{ D=d} A_4(r̅)/(A_2(r̅))^2A_3(r̅)(g_1d-g_1d^0)^2]| ≤|-E[1{ D=d}(Y_a-g_3a^0)/(A_2(r̅))^2A_3(r̅)(g_1d-g_1d^0)^2]| +|E[1{ D=d}r̅(g_3a-g_3a^0)/(A_2(r̅))^2A_3(r̅)(g_1d-g_1d^0)^2]| ≤r̅(1-2ε_1)/ε_1^2ε_2ε_2^-1/qδ_nn^-1/2≲δ_nn^-1/2, since E[1{ D=d}(Y_a-g_3a^0)/(A_2(r̅))^2A_3(r̅)(g_1d-g_1d^0)^2] =E[(g_1-g_1^0)^2/(A_2(r̅))^2A_3(r̅)×(P_Y,D|M,X^0(Y≤ a,D=d|M,X)-g_2d^0g_3a^0)] =0, and |E[1{ D=d}r̅(g_3a-g_3a^0)/(A_2(r̅))^2A_3(r̅)(g_1d-g_1d^0)^2]| ≤ E[|1{ D=d}/(A_2(r̅))^2A_3(r̅)|r̅|g_3a-g_3a^0|(g_1d-g_1d^0)^2] ≤r̅(1-2ε_1)/ε_1^2ε_2‖ g_3a-g_3a^0‖ _P,2‖ g_1d-g_1d^0‖ _P,2 ≤r̅(1-2ε_1)/ε_1^2ε_2ε_2^-1/qδ_nn^-1/2≲δ_nn^-1/2, by assuming that ‖ v-v_a^0‖ _P,2≤δ_nn^-1/2 and |g_1d-g_1d^0|<1-2ε_1 with probability one. Applying a similar argument to the fourth term, |E[1{ D=d} A_1(r̅)A_4(r̅)/(A_2(r̅))^3A_3(r̅)(g_1d-g_1d^0)^2]| ≤|E[A_1(r̅)(g_1d-g_1d^0)^2/(A_2(r̅))^3A_3(r̅)× E[1{ D=d}(Y_a-g_3a^0)|M,X]]| +E[|A_1(r̅)/(A_2(r̅))^3A_3(r̅)|r̅|g_3a-g_3a^0|(g_1d-g_1d^0)^2] ≤0+r̅(1-ε_2)(1-2ε_1)/ε_1^3ε_2ε_2^-1/qδ_nn^-1/2≲δ_nn^-1/2. For the ninth term, |E[1{ D=d^'} A_5(r̅)/(A_2(r̅))^3(g_1d-g_1d^0)^2]| ≤|E[(g_1d-g_1d^0)^2/(A_2(r̅))^3E[1{ D=d^'}(g_3a^0-g_4ad^0)|X]]| +E[|1{ D=d^'}/(A_2(r̅))^3|r̅|(g_3a-g_3a^0)-(g_4ad-g_4ad^0)|(g_1d-g_1d^0)^2] ≤r̅(1-2ε_1)/ε_1^3(ε_2^-1/qδ_nn^-1/2+ε_1^-1/qδ_nn^-1/2)≲δ_nn^-1/2, since the term E[1{ D=d^'}(g_3a^0-g_4ad^0)|X] =∫ F_Y|D,M,X(a|d,m,X)f_M|D|X(m|d^'X)f_D|X(d^'|X)dm -f_D|X(d^'|X)E[F_Y|D,M,X(a|d,M,X)|d^',X] =0. For the seventh term, |E[1{ D=d} A_1(r̅)A_4(r̅)/A_2(r̅)(A_3(r̅))^3(g_2d-g_2d^0)^2]| ≤0+r̅(1-ε_2)(1-2ε_2)/ε_1ε_2^3‖ g_3a-g_3a^0‖ _P,2‖ g_1d-g_1d^0‖ _P,2 ≤r̅(1-ε_2)(1-2ε_2)/ε_1ε_2^3ε_2^-1/qδ_nn^-1/2≲δ_nn^-1/2, by |g_2d-g_2d^0|<1-2ε_2 with probability one. §.§.§ The case when d=d^' Let A_6(r) =r(g_1d-g_1d^0)+g_1d^0, A_7(r) =Y_a-[r(g_4ad-g_4ad^0)+g_4ad^0], where g_4ad=g_4ad(d,X). It holds that ε_1<A_6(r̅)<1-ε_1 and |A_7(r̅)| for r̅∈[0,1] are bounded, since |Y_a|, g_4ad and g_4ad^0 are bounded. Then, ψ__d,d,a(W_a;r(v-v_a^0)+v_a^0) =1{ D=d}/r(g_1d-g_1d^0)+g_1d^0{ Y_a-[r(g_4ad-g_4ad^0)+g_4ad^0]} +[r(g_4ad-g_4ad^0)+g_4ad^0] =1{ D=d}/A_6(r̅)A_7(r̅)+[r(g_4ad-g_4ad^0)+g_4ad^0]. After some calculations, we obtain .1/2E[∂_r^2(ψ__d,d,a(W_a;r(v-v_a^0)+v_a^0))]|_r=r̅ =E[1{ D=d} A_7(r̅)/A_6(r̅)^3(g_1d-g_1d^0)^2] +E[1{ D=d}/A_6(r̅)^2(g_1d-g_1d^0)(g_4ad-g_4ad^0)]. Considering the first term on the right hand side, we have that |E[1{ D=d} A_7(r̅)/A_6(r̅)^3(g_1d-g_1d^0)^2]| ≤|E[1/A_6(r̅)^3(g_1d-g_1d^0)^2(E[1{ D=d} Y_a|X]-g_1d^0g_4ad^0)]| +E[|1{ D=d}r̅/A_6(r̅)^3||g_1d-g_1d^0||g_1d-g_1d^0||g_4ad-g_4ad^0|] ≤r̅(1-2ε_1)/ε_1^3ε_1^-1/qδ_nn^-1/2≲δ_nn^-1/2, since E[1{ D=d} Y_a|X]-g_1d^0g_4ad^0=0 and ‖ g_1d-g_1d^0‖ _P,2‖ g_4ad-g_4ad^0‖ _P,2 ≤‖ g_1d(X)-g_1d^0(X)‖ _P,2‖ g_4ad(D,X)-g_4ad^0(D,X)‖ _P,2ε_1^-1/q ≤ε_1^-1/qδ_nn^-1/2≲δ_nn^-1/2. Concerning the second term on the right hand side, |E[1{ D=d}/A_6(r̅)^2(g_1d-g_1d^0)(g_4ad-g_4ad^0)]| ≤1/ε_1^2‖ g_1d-g_1d^0‖ _P,2‖ g_4ad-g_4ad^0‖ _P,2 ≲δ_nn^-1/2. Finally, if v=v_a^0, it is trivial to see that .E[∂_r^2(ψ__d,d^',a(W_a;r(v-v_a^0)+v_a^0))]|_r=r̅=0 for each (d,d^')∈{ 0,1} ^2, a∈𝒜 and r̅∈(0,1). Combining the previous results, it follows that with probability P at least 1-Δ_n, for r̅∈(0,1), all a∈𝒜 and v∈𝒢_an∪ v_a^0, we have that |.E[∂_r^2(ψ__d,d^',a(W_a;r(v-v_a^0)+v_a^0))]|_r=r̅|≲δ_nn^-1/2, and this result holds for each (d,d^')∈{ 0,1} ^2. Therefore, with probability P at least 1-Δ_n, ‖.E[∂_r^2(ψ_a(W_a;r(v-v_a^0)+v_a^0))]|_r=r̅‖≲δ_nn^-1/2 holds for r̅∈(0,1), all a∈𝒜 and v∈𝒢_an∪ v_a^0. 
§.§.§ Verifying Assumption A.1.5 (Smoothness condition) §.§.§ The case when d≠ d^' We may write ψ_d,d^',a(W_a,v_a^0)-ψ_d,d^',a̅(W_a̅,v_a̅^0) as ψ_d,d^',a(W_a,v_a^0)-ψ_d,d^',a̅(W_a̅,v_a̅^0) =1{ D=d}(1-g_2d^0)/(1-g_1d^0)g_2d^0(Y_a-Y_a̅) +[1{ D=d^'}/1-g_1d^0-1{ D=d}(1-g_2d^0)/(1-g_1d^0)g_2d^0](g_3a^0-g_3a̅^0) +(1-1{ D=d^'}/1-g_1^0(d,X))(g_4ad^0-g_4a̅d^0). Using the Minkowski inequality yields ‖ψ_d,d^',a(W_a,v_a^0)-ψ_d,d^',a̅(W_a̅,v_a̅^0)‖ _P,2 ≤Π_1(a)+Π_2(a)+Π_3(a), where Π_1(a) =‖1{ D=d}(1-g_2d^0)/(1-g_1d^0)g_2d^0(Y_a-Y_a̅)‖ _P,2, Π_2(a) =‖[1{ D=d^'}/1-g_1d^0-1{ D=d}(1-g_2d^0)/(1-g_1d^0)g_2d^0](g_3a^0-g_3a̅^0)‖ _P,2, Π_3(a) =‖(1-1{ D=d^'}/1-g_1^0(d,X))(g_4ad^0-g_4a̅d^0)‖ _P,2. For Π_1(a), we note that |1{ D=d}(1-g_2d^0)/(1-g_1d^0)g_2d^0|≤|1-g_2d^0/(1-g_1d^0)g_2d^0|≤1-ε_2/ε_1ε_2 with probability one, which implies that Π_1(a) ≤1-ε_2/ε_1ε_2‖ Y_a-Y_a̅‖ _P,2. For Π_2(a), we note that |1{ D=d^'}/1-g_1d^0-1{ D=d}(1-g_2d^0)/(1-g_1d^0)g_2d^0|≤1/ε_1+1-ε_2/ε_1ε_2 with probability one, which implies that Π_2(a) ≤(1/ε_1+1-ε_2/ε_1ε_2)‖ g_3a^0-g_3a̅^0‖ _P,2≤(1/ε_1+1-ε_2/ε_1ε_2)‖ Y_a-Y_a̅‖ _P,2, because ‖ g_3a^0-g_3a̅^0‖ _P,2 ≤(E[1{ D=d}(Y_a-Y_a̅)^2])^1/2≤‖ Y_a-Y_a̅‖ _P,2. For Π_3(a), we note that |1-1{ D=d^'}/1-g_1d^0|≤1+ε_1/ε_1, which implies that Π_3(a)≤ 1+ε_1/ε_1‖ g_4ad^0-g_4a̅d^0‖ _P,2≤1+ε_1/ε_1‖ Y_a-Y_a̅‖ _P,2, because ‖ g_4ad^0-g_4a̅d^0‖ _P,2 ≤(E[1{ D=d^'}(E[Y_a-Y_a̅|d,M,X])^2])^1/2≤‖ Y_a-Y_a̅‖ _P,2. Combining the previous results, we have ‖ψ_d,d^',a(W_a,v_a^0)-ψ_d,d^',a̅(W_a̅,v_a̅^0)‖ _P,2 ≤(1-ε_2/ε_1ε_2+1/ε_1+1-ε_2/ε_1ε_2+1+ε_1/ε_1)‖ Y_a-Y_a̅‖ _P,2 ≲‖ Y_a-Y_a̅‖ _P,2. By Assumption 2.1, for each (d,d^')∈{ 0,1} ^2, we then obtain sup_P∈𝒫‖ψ_d,d^',a(W_a,v_a^0)-ψ_d,d^',a̅(W_a̅,v_a̅^0)‖ _P,2≲sup_P∈𝒫‖ Y_a-Y_a̅‖ _P,2=0 as d_𝒜(a,a̅)→0. §.§.§ The case when d=d^' ψ_d,d,a(W_a,v_a^0)-ψ_d,d,a̅(W_a̅,v_a̅^0) =1{ D=d}/g_1d^0(1{ Y≤ a} -1{ Y≤a̅}) +[1-1{ D=d}/g_1d^0](g_4ad^0-g_4a̅d^0), where g_4ad^0=g_4ad^0(d,X). By the triangle inequality, ‖ψ_d,d,a(W_a,v_a^0)-ψ_d,d,a̅(W_a̅,v_a̅^0)‖ _P,2 ≤Π_4(a)+Π_5(a)≲‖ Y_a-Y_a̅‖ _P,2, where Π_4(a) =‖1{ D=d}/g_1d^0(1{ Y≤ a} -1{ Y≤a̅})‖ _P,2≤1/ε_1‖ Y_a-Y_a̅‖ _P,2 Π_5(a) =‖[1-1{ D=d}/g_1d^0](g_4ad^0-g_4a̅d^0)‖ _P,2≤1+ε_1/ε_1‖ Y_a-Y_a̅‖ _P,2 By Assumption 2.1, for each (d,d^')∈{ 0,1} ^2, we obtain sup_P∈𝒫‖ψ_d,d,a(W_a,v_a^0)-ψ_d,d,a̅(W_a̅,v_a̅^0)‖ _P,2≲sup_P∈𝒫‖ Y_a-Y_a̅‖ _P,2=0 as d_𝒜(a,a̅)→0. Combining the previous results, it follows sup_P∈𝒫‖ψ_a(W_a,v_a^0)-ψ_a(W_a̅,v_a̅^0)‖ _P,2=0 as d_𝒜(a,a̅)→0. §.§.§ Verifying Assumption A.1.6 §.§.§ The case when d≠ d^' Let 𝒢_1^0={ g_1d^0(X):d∈{ 0,1}} 𝒢_2^0={ g_2d^0(M,X):d∈{ 0,1}}, 𝒢_3^0 ={ g_3a^0(d,M,X):a∈𝒜,d∈{ 0,1}} , 𝒢_4^0 ={ g_4ad^0(d^',X):a∈𝒜,{ d,d^'}∈{ 0,1} ^2} , 𝒢_5={ Y_a:a∈𝒜}, 𝒢_6={ 1{ D=d} :d∈{ 0,1}}. The union ∪_j=1^4𝒢_j^0 forms the set 𝒢_a as defined above. By our assumptions, g_1d^0(X) and g_2d^0(M,X) are bounded within the interval (0,1) with probability one. g_3a^0(d,M,X)=F_Y|D,M,X(a|d,M,X) is a conditional c.d.f. and g_4^0(a,d,d^'X)=E[g_3a^0(d,M,X)|d^',X] is a conditional expectation of a c.d.f., which are also bounded for a∈𝒜 with probability one. The functions Y_a=1{ Y≤ a} and 1{ D=d} are indicator functions and are bounded with probability one. In conclusion, functions in the sets 𝒢_j^0, j=1,…,4 are uniformly bounded and their envelop functions are all bounded by some constant. By Assumption 1 and Lemma L.2 of <cit.>, it can be shown that uniform covering numbers of functions in 𝒢_3^0 and 𝒢_4^0 are bounded by log(e/ϵ)∨0 multiplied by some constants. 
Uniform covering numbers of functions in 𝒢_1^0,𝒢_2^0, 𝒢_5 and 𝒢_6 are also bounded by log(e/ϵ)∨0 multiplied by some constants. The function ψ_d,d^',a(W_a,v_a^0) is formed based on a union of functions in the sets 𝒢_j^0, j=1,…,4, 𝒢_5 and 𝒢_6. Let Ψ_d,d^'^0={ψ_d,d^',a(W_a,v_a^0),a∈𝒜} , where (d,d^')∈{ 0,1} ^2. Fixing (d,d^'), we have |ψ_d,d^',a(W_a,v_a^0)| ≤|(1-g_2d^0)/(1-g_1d^0)g_2^0||Y_a|+|1{ D=d^'}/1-g_1^0-1{ D=d}(1-g_2d^0)/(1-g_1d^0)g_2^0||g_3a^0| +|1-1{ D=d^'}/1-g_1d^0||g_4ad^0| ≤1-ε_2/ε_1ε_2+(1/ε_1+1-ε_2/ε_1ε_2)|g_3a^0|+(1+ε_1/ε_1)|g_4ad^0|. The envelop function of f∈Ψ_d,d^'^0 is defined as ψ_d,d^'^0(W):=sup_a∈𝒜,v∈(∪_j=1^4𝒢_j^0)|ψ_d,d^',a(W_a,v)| and we have ψ_d,d^'^0(W) ≤1-ε_2/ε_1ε_2+(1/ε_1+1-ε_2/ε_1ε_2)sup_a∈𝒜,g_3a^0∈𝒢_3^0|g_3a^0(d,M,X)| +(1+ε_1/ε_1)sup_a∈𝒜,g_4^0∈𝒢_4^0|g_4ad^0(d^',X)|. Using the facts that for q≥4, ‖ g_3a^0(d,M,X)‖ _P,q≤‖ g_3a^0(D,M,X)‖ _P,qε_2^-1/q and ‖ g_4ad^0(d^',X)‖ _P,q≤‖ g_4ad^0(D,X)‖ _P,qε_1^-1/q, it follows that ‖ψ_d,d^'^0(W)‖ _P,q ≤1-ε_2/ε_1ε_2+(1/ε_1+1-ε_2/ε_1ε_2)sup_a∈𝒜,g_3a^0∈𝒢_3^0‖ g_3a^0(D,M,X)‖ _P,qε_2^-1/q +(1+ε_1/ε_1)sup_a∈𝒜,g_4^0∈𝒢_4^0‖ g_4ad^0(D,X)‖ _P,qε_1^-1/q ≤1-ε_2/ε_1ε_2+(1/ε_1+1-ε_2/ε_1ε_2)ε_2^-1/q+(1+ε_1/ε_1)ε_1^-1/q<∞, and this property holds for P∈𝒫. Therefore, sup_P∈𝒫‖ψ_d,d^'^0(W)‖ _P,q<∞ for q≥4 and f∈Ψ_d,d^'^0 is uniformly bounded and has a uniform covering entropy bounded by log(e/ϵ)∨0 up to multiplication by a constant. §.§.§ The case when d=d^' The function ψ_d,d,a(W_a,v_a^0) is formed by a union of functions in the sets 𝒢_1^0, 𝒢_4^0, 𝒢_5 and 𝒢_6. Let Ψ_d,d^0={ψ_d,d,a(W_a,v_a^0),a∈𝒜} , where d∈{ 0,1}. Fixing d, we have |ψ_d,d,a(W_a,v_a^0)| ≤|1{ D=d}/g_1d^0||Y_a|+|1{ D=d}/g_1d^0||g_4ad^0|+|g_4ad^0| ≤1/ε_1+(1/ε_1+1)|g_4ad^0|, where g_4ad^0=g_4ad^0(d,X). The envelop function of f∈Ψ_d,d^0 is defined as ψ_d,d^0(W):=sup_a∈𝒜,v∈(𝒢_1^0∪𝒢_4^0)|ψ_d,d,a(W_a,v)| and we have ψ_d,d^0(W)≤1/ε_1+(1/ε_1+1)sup_a∈𝒜,g_4^0∈𝒢_4^0|g_4ad^0(d,X)|. For q≥4 and ‖ g_4ad^0(d,X)‖ _P,q≤‖ g_4ad^0(D,X)‖ _P,qε_1^-1/q, we have ‖ψ_d,d^0(W)‖ _P,q ≤1/ε_1+(1/ε_1+1)sup_a∈𝒜,g_4^0∈𝒢_4^0‖ g_4ad^0(d,X)‖ _P,q ≤1/ε_1+(1/ε_1+1)sup_a∈𝒜,g_4^0∈𝒢_4^0‖ g_4ad^0(D,X)‖ _P,qε_1^-1/q ≤1/ε_1+(1/ε_1+1)ε_1^-1/q<∞, and this property holds for P∈𝒫. Therefore, sup_P∈𝒫‖ψ_d,d^0(W)‖ _P,q<∞ for q≥4 and f∈Ψ_d,d^0 is uniformly bounded and has a uniform covering entropy bounded by log(e/ϵ)∨0 up to multiplication by a constant. Combining the previous results, let Ψ^0=Ψ_1,1^0∪Ψ_1,0^0∪Ψ_0,1^0∪Ψ_0,0^0 be a union of Ψ_d,d^'^0, (d,d^')∈{ 0,1} ^2. Since Ψ^0 is a union of Ψ_d,d^'^0 for (d,d^')∈{ 0,1} ^2, it is a finite union of classes of functions which are uniformly bounded and have the uniform entropies bounded by log(e/ϵ)∨0 up to multiplication by a constant. For this reason, Ψ^0 has a uniform covering number sup_P∈𝒫sup_Qlog N(ϵ,Ψ^0,‖ .‖ _Q,2)≲log(e/ϵ)∨0 and its envelop function Ψ^0(W)=sup_(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈∪_j=1^4𝒢_j^0|ψ_d,d^',a(W_a,v)| is also bounded. Furthermore, the uniform covering integral of Ψ^0 satisfies: ∫_0^1√(sup_Qlog N(ϵ,Ψ^0,‖ .‖ _Q,2))dϵ ≤√(C)∫_0^1√(1-logϵ)dϵ≤√(C)∫_0^11/√(ϵ)dϵ<∞ by the fact that 1-logϵ≤1/ϵ for all ϵ>0 and ∫_0^1ϵ^-bdϵ<∞ for b<1. The second condition in (B.1) of <cit.> can also be proven (which corresponds to Assumption A1.4). The goal of such a proof is the same as the aim of bounding m_n in <cit.> and <cit.>. But in their scenarios, they need to consider Y, E[Y|d,M,X] and E[E[Y|d,M,X]|d^',X], which requires additional moment conditions, say ‖ Y‖ _P,q, ‖ E[Y|d,M,X]‖ _P,q and ‖ E[E[Y|d,M,X]|d^',X]‖ _P,q. 
Our scenario is less challenging since the functions g_3a(d,M,X) (a c.d.f.) and g_4ad(d^',X) (an expectation of a c.d.f.) are bounded. §.§.§ Verifying Assumption A.1.7 §.§.§ The case when d≠ d^' We define the following sets of functions: 𝒢_1(d) :={[ x⟼ h_1(f(x)^⊤β_1):‖β_1‖ _0≤ s_1; ‖ h_1(f(X)^⊤β_1)-g_1d^0(X)‖ _P,2≲δ_nn^-1/4; ‖ h_1(f(X)^⊤β_1)-g_1d^0(X)‖ _P,∞≲ C ]} , 𝒢_2(d) :={[ (m,x)⟼ h_2(f(m,x)^⊤β_2):‖β_2‖ _0≤ s_2; ‖ h_2(f(M,X)^⊤β_2)-g_2d^0(M,X)‖ _P,2≲δ_nn^-1/4; ‖ h_2(f(M,X)^⊤β_2)-g_2d^0(M,X)‖ _P,∞≲ C ]} , 𝒢_3(d) :={[ (d,m,x)⟼ h_3(f(d,m,x)^⊤β_3):‖β_3‖ _0≤ s_3; ‖ h_3(f(d,M,X)^⊤β_3)-g_3a^0(d,M,X)‖ _P,2≲δ_nn^-1/4; ‖ h_3(f(d,M,X)^⊤β_3)-g_3a^0(d,M,X)‖ _P,∞≲ C ]} , 𝒢_4(d^') :={[ (d^',x)⟼ h_4(f(d^',x)^⊤β_4):‖β_4‖ _0≤ s_4; ‖ h_4(f(d^',x)^⊤β_4)-g_4ad^0(d^',X)‖ _P,2≲δ_nn^-1/4; ‖ h_4(f(d^',X)^⊤β_4)-g_4ad^0(d^',X)‖ _P,∞≲ C ]} , where β_i, i=1,…,4 are vectors of coefficients on different sets of conditioning variables and ‖‖ _0 denotes the l^0 norm. By Assumption 2.3b, dim(f(x))=p×1, dim(f(m,x))=(p+1)×1, dim(f(d,m,x))=(p+2)×1, and dim(f(d^',x))=(p+1)×1. From Assumption 2.4 follows that with probability P no less than 1-Δ_n, 1-ĝ_1d(X)∈1-𝒢_1(d), ĝ_2d(M,X)∈𝒢_2(d), (1-ĝ_2d(M,X))∈1-𝒢_2(d). ĝ_3a(d,M,X)∈𝒢_3(d) and ĝ_4ad(d^',X)∈𝒢_4(d^'). Notice that the union (1-𝒢_1(d))∪𝒢_2(d)∪(1-𝒢_2(d))∪𝒢_3(d)∪𝒢_4(d^') forms the set 𝒢_an. We consider the following sets of functions: ℋ_1 ={ x⟼ h_1(f(x)^⊤β_1):‖β_1‖ _0≤ s_1,h_1∈ℋ_1^*} , ℋ_2 ={(m,x)⟼ h_2(f(m,x)^⊤β_2):‖β_2‖ _0≤ s_2,h_2∈ℋ_2^*} , ℋ_3 ={(d,m,x)⟼ h_3(f(d,m,x)^⊤β_3):‖β_3‖ _0≤ s_3,h_3∈ℋ_3^*} , ℋ_4 ={(d^',x)⟼ h_4(f(d^',x)^⊤β_4):‖β_4‖ _0≤ s_4,h_1∈ℋ_4^*} , where ℋ_i^*, i=1,…,4 are sets containing a finite number of monotonically increasing, continuously differentiable link functions, possibly bounded within a certain interval (say (0,1)). For example, <cit.> chose some commonly used link functions: {𝐈d,,1-,Φ,1-Φ}, where 𝐈d is the identity function, is the logistic link, and Φ is the probit link. In our case, each ℋ_i^* is also a subset of {𝐈d,,1-,Φ,1-Φ}. Obviously, 1-𝒢_1(d)⊆1-ℋ_1, 𝒢_2(d)⊆ℋ_2, 1-𝒢_2(d)⊆1-ℋ_2 𝒢_3(d)⊆ℋ_3, and 𝒢_4(d^')⊆ℋ_4 by Assumption 2.3a. For functions in 1-𝒢_1(d) 𝒢_2(d), 1-𝒢_2(d) and 𝒢_3(d), their envelope functions are constant and bounded. As shown in <cit.>, for the set 1-ℋ_1, f(x)^⊤β_1 is VC-subgraph function with VC dimension bounded by some constant (s_1), and 1-ℋ_1 is a union of at most ps_1 of such functions. Therefore, logsup_QN(ϵ,𝒢_1,‖ .‖ _Q)≲(s_1log p+s_1loge/ϵ)∨0. Using a similar argument, we obtain logsup_QN(ϵ,𝒢_2∪(1-𝒢_2),‖ .‖ _Q) ≲(s_2log(p+1)+s_2loge/ϵ)∨0, logsup_QN(ϵ,𝒢_3,‖ .‖ _Q) ≲(s_3log(p+2)+s_3loge/ϵ)∨0. Concerning the functions in 𝒢_4(d), they have an additive linear form f(d^',X)^⊤β_4=aD+X^⊤𝐛, β_4=(a,𝐛)^⊤. Its envelope function is bounded by β_4 being bounded and Assumption 2.3e, implying that ‖ X‖ _P,q is bounded. Concerning the set ℋ_4 and as shown in <cit.>, f(d^',X)^⊤β_4 is a VC-subgraph function with VC dimension bounded by some constant (s_4), and ℋ_4 is a union of at most p+1s_4 of such functions. Therefore, logsup_QN(ϵ,𝒢_4,‖‖ _Q)≲(s_4log(p+1)+s_4loge/ϵ)∨0. Combining the previous results, it follows that logsup_QN(ϵ,𝒢_an,‖‖ _Q)≲(slog p+sloge/ϵ)∨0, where s_1+s_2+s_3+s_4≤ s. The set of functions ℱ_2,n={ψ_d,d^',a(W_a,v)-ψ_d,d^',a(W_a,v_a^0):(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒢_an} is a Lipschitz transformation of function sets 𝒢_i^0 i=1,…,4 , 𝒢_5, 𝒢_6 defined in the previous proof, and 𝒢_an, with bounded Lipschitz coefficients and with a constant envelope. Therefore, logsup_QN(ϵ,ℱ_2,n,‖‖ _Q)≲(slog p+sloge/ϵ)∨0. 
With probability P 1-o(1), we have sup_(d,d^')∈{ 0,1} ^2,a∈𝒜|𝔾_N,k(ψ_d,d^',a(W_a;v̂_k,a)-ψ_d,d^',a(W_a;v_a^0))|≤sup_f∈ℱ_2|𝔾_N,kf| Furthermore, we have proven that r_n≲δ_nn^-1/4 and therefore, sup_f∈ℱ_2‖ f‖ _P,2≲ r_n≲δ_nn^-1/4 holds. Let L be a constant and L≥e, using the maximum inequality A.1 of Lemma 6.2 of <cit.> by setting the parameters σ=C^''δ_nn^-1/4. C^''>1 is some constant, a=b=p, where log p=o(n^-1/3)=o(K^-1/3N^-1/3) and v=s in this maximum inequality. We obtain that sup_f∈ℱ_2,n|𝔾_N,kf| ≲δ_nn^-1/4√(slog(p∨ L∨σ^-1))+s/√(N)log(p∨ L∨σ^-1) ≲δ_nn^-1/4√(slog(p∨ n))+√(Ks^2log^2(p∨ n)n^-1) ≲δ_nδ_n^1/4+δ_n^1/2log^-1n≲δ_n^1/2. by applying similar arguments as in the proof of Theorem A.1 and by Assumption 2.3c. §.§.§ The case when d=d^' As this case is a special case of d≠ d^', the same results apply. The proof makes use of Theorem A.2. As in Theorem A.2, let U_n,P^*:=(𝔾_nξψ_a(W_a;v_a^0)-θ_a^0)_a∈𝒜. To show that Z_n,P^*⇝_BZ_P uniformly over P∈𝒫_n in l^∞(𝒜)^4, we first show that ‖ Z_n,P^*-U_n,P^*‖ =o_P(1) and then show that U_n,P^*⇝_BZ_P uniformly over P∈𝒫_n in l^∞(𝒜)^4. Let Z_n,P^*(a):=𝔾_nξ(ψ_a(W_a;v̂_k,a)-θ̂_a) and U_n,P^*(a):=𝔾_nξ(ψ_a(W_a;v_a^0)-θ_a^0). We notice that since E[ξ]=0 and ξ and W_a are independent, E[ξ(ψ_a(W_a;v_a^0)-θ_a^0)]=0 and Z_n,P^*(a) =1/√(n)∑_i=1^nξ_i(ψ_a(W_a,i;v̂_a^0)-θ̂_a^0), U_n,P^*(a) =1/√(n)∑_i=1^nξ_i(ψ_a(W_a,i;v_a^0)-θ_a^0). It follows that sup_a∈𝒜‖ Z_n,P^*(a)-U_n,P^*(a)‖≤Π_1+Π_2, where Π_1 =sup_a∈𝒜‖𝔾_nξ(ψ_a(W_a,i;v̂_a^0)-ψ_a(W_a,i;v_a^0))‖ =sup_a∈𝒜‖√(n)1/K∑_k=1^K1/√(N)[𝔾_N,kξ(ψ_a(W_a;v̂_k,a^0)-ψ_a(W_a;v_a^0))]‖ ≤√(n)1/K∑_k=1^K1/√(N)sup_a∈𝒜‖𝔾_N,kξ(ψ_a(W_a;v̂_k,a^0)-ψ_a(W_a;v_a^0))‖ , and Π_2=sup_a∈𝒜‖1/√(n)∑_i=1^nξ_i(θ̂_a^0-θ_a^0)‖≤sup_a∈𝒜‖θ̂_a^0-θ_a^0‖|𝔾_nξ|. The term Π_2 is O_p(n^-1/2), since sup_a∈𝒜‖θ̂_a^0-θ_a^0‖ =O_p(n^-1/2) by Theorem 1 and |𝔾_nξ|=O_p(1). Concerning the term Π_1, recall the class of functions used in the proof of Theorem 1: ℱ_2,n={ψ_d,d^',a(W_a;v)-ψ_d,d^',a(W_a;v_a^0):(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒢_an} . with the envelop function F_2,n is |ξ| times a constant. In the proof of Theorem 1, we have established that the covering entropy of ℱ_2 obeys logsup_QN(ϵ,ℱ_2,n,‖‖ _Q)≲(slog p+sloge/ϵ)∨0. Furthermore, using Lemma L.1 in the appendix of <cit.>, multiplication of this class by ξ does not change the entropy bound modulo an absolute constant, and therefore its covering entropy is bounded by the same order as logsup_QN(ϵ,ℱ_2,n,‖‖ _Q), logsup_QN(ϵ‖ F_2,n‖ _Q,2,ξℱ_2,n,‖ .‖ _Q,2)≲(slog p+sloge/ϵ)∨0. Next, we use the result (E[max_i∈ I_kξ_i^2])^1/2≲log N by E[exp(|ξ|)]<∞, and the maximum inequality A.1 of Lemma 6.2 of <cit.> by setting the envelope function F_2,n=C^'''|ξ|, σ=C^''δ_nn^-1/4, where C^''>1 is some constant, a=b=p, where log p=o(n^-1/3)=o(K^-1/3N^-1/3) and v=s, with probability P 1-o(1). We have sup_f∈ξℱ_2,n|𝔾_N,kf| ≲δ_nn^-1/4√(slog(p∨ L∨σ^-1))+slog N/√(N)log(p∨ L∨σ^-1) ≲δ_nn^-1/4√(slog(p∨ L∨σ^-1))+K^1/2s(log n-log K)/√(n)log(p∨ L∨σ^-1) ≲δ_nn^-1/4√(slog(p∨ n))+√(s^2log^2(p∨ n)log^2n/n) ≲δ_nδ_n^1/4+δ_n^1/2≲δ_n^1/2=o_p(1), by using sup_f∈ξℱ_2,n‖ f‖ _P,2=sup_f∈ℱ_2,n‖ f‖ _P,2≤ r_n≲δ_nn^-1/4 and Assumption 2.3c. With probability P 1-o(1) and for v̂_k,a∈𝒱_an, it can be shown that sup_a∈𝒜‖𝔾_N,kξ(ψ_a(W_a;v̂_k,a)-ψ_a(W_a;v_a^0))‖≲sup_f∈ξℱ_2,n|𝔾_N,kf|. Therefore we conclude that with probability P 1-o(1), 1/√(N)sup_a∈𝒜‖𝔾_N,kξ(ψ_a(W_a;v̂_k,a)-ψ_a(W_a;v_a^0))‖≲ K^1/2n^-1/2o_p(1)≲ o_p(n^-1/2), and since K is fixed and finite, sup_a∈𝒜‖𝔾_nξ(ψ_a(W_a;v̂_k,a)-ψ_a(W_a;v_a^0))‖ ≲√(n)o_p(n^-1/2)=o_p(1), which implies that ‖ Z_n,P^*-U_n,P^*‖ =o_p(1). 
Next, we notice that U_n,P^* is associated with the class of functions ξ f, where f∈ℱ_0. As shown in the proof of Theorem 1, the class of ℱ_0 is Donsker uniformly in P∈𝒫_n under the required assumptions. Therefore we can invoke Theorem B.2 of <cit.> and conclude that U_n,P^*⇝_BZ_P. Indeed, U_n,P^* and Z_P both are Gaussian processes, and share the same (zero) mean and the same covariance matrix. Finally, using a similar argument as in step 2 for proving Theorem 5.2 in the appendix of <cit.>, it follows that Z_n,P^*⇝_BZ_P. Let BL_1(l^∞(𝒜)) be the space of functions mapping the space of functions in l^∞(𝒜) to [0,1] with a Lipschitz norm of at most 1. Let E_B_n denote the expectation over the multiplier weights (ξ_i)_i=1^n when holding the data (W_i)_i=1^n fixed. Following step 2 for proving Theorem 5.2 in the appendix of <cit.>, we obtain the following inequality: sup_h∈BL_1(l^∞(𝒜))|E_B_n[h(Z_n,P^*)]-E_P[h(Z_P)]| ≤sup_h∈BL_1(l^∞(𝒜))|E_B_n[h(U_n,P^*)]-E_P[h(Z_P)]| +E_B_n[‖ Z_n,P^*-U_n,P‖2]. The first term vanishes as asserted by using Theorem B.2 of <cit.>, since we have proven that U_n,P^*⇝_BZ_P. The second term is o_P(1) since E[‖ Z_n,P^*-U_n,P‖2]=E[E_B_n[‖ Z_n,P^*-U_n,P‖2]]→0 by using the Markov inequality, P(E_B_n[‖ Z_n,P^*-U_n,P^*‖2]≥ε) ≤E[E_B_n[‖ Z_n,P^*-U_n,P^*‖2]]/ε =E[‖ Z_n,P^*-U_n,P^*‖2]/ε. As shown above, ‖ Z_n,P^*-U_n,P^*‖ =o_p(1), which implies that ‖ Z_n,P^*-U_n,P^*‖2=o_P(1). Therefore, sup_h∈ BL_1(l^∞(𝒜))|E_B_n[h(Z_n,P^*)]-E_P[h(Z_P)]| vanishes and we obtain that Z_n,P^*⇝_BZ_P. Concerning the proof of Theorem 3, we first introduce the definition of uniform Hadamard differentiability in the appendix further below. The proof relies on Theorems B.3 and B.4 of <cit.> (restated as Theorem A.4 and A.5 in their appendix), which show that when an estimator satisfies uniform validity, this property also holds for a transformation of this estimator if uniform Hadamard tangential differentiability of the transformation holds. : Since the ϕ_θ satisfies uniform Hadamard tangential differentiable, and as shown in Theorem 1 and 2, both Z_n,P⇝ Z_P and Z_n,P^*⇝ Z_P in l^∞(𝒜)^4 uniformly in P∈𝒫_n. Therefore the proof can be completed by using Theorems A.4 and A.5 (which are restated results of Theorems B.3 and B.4 of <cit.>). §.§ General Theorems for Uniform Validity of a K-Fold Cross Fitting Estimator Based on the EIF We subsequently derive some useful theorems for establishing the uniform validity of the proposed K-fold cross-fitting estimator based on the efficient influence function (EIF) under specific conditions (see below). We recall the notation used in Section 3 and assume that the parameter of interest, F_Y(d,M(d^'))^0(a), is identified as F_Y(d,M(d^'))^0(a)=θ_d,d^',a^0, for (d,d^')∈{0,1}^2, where θ_d,d^',a^0:=E[ψ_d,d^',a(W_a;v_a^0)] is an expectation of ψ_d,d^',a evaluated with the true nuisance parameters. The estimator of θ_d,d^',a^0 is the K-fold cross-fitting estimator θ̂_d,d^',a=1/K∑_k=1^Kθ̂_d,d^',a^(k), where θ̂_d,d^',a^(k)=N^-1∑_i∈ I_kψ_d,d^',a(W_a,i;v̂_k,a). Let ψ_a(W_a,v) denote a vector containing the elements ψ_d,d^',a(W_a;v), (d,d^')∈{ 0,1} ^2. Let θ_a^0, θ̂_a, θ̂_a^(k) and 𝐅^0(a) denote vectors containing θ_d,d^',a^0, θ̂_d,d^',a, θ̂_d,d^',a^(k) and F_Y(d,M(d^'))^0(a) over different (d,d^')∈{0,1}^2. It holds that θ_a^0=E[ψ_a(W_a;v_a^0)] and θ̂_a=K^-1∑_k=1^Kθ̂_a^(k). If equation (<ref>) holds for all a∈𝒜 and (d,d^')∈{0,1}^2, 𝐅^0(a)=θ_a^0. The main results are stated in Theorems A.1 to A.3. 
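As a computational illustration of this estimator (not part of the original argument), the following Python sketch implements a generic K-fold cross-fitting loop. The score function and the nuisance learners are placeholders that would have to be specialized to the efficient influence functions ψ_d,d^',a and to the nuisance estimators v̂_k,a described above, and the loop would be repeated for every (d,d^') and every threshold a∈𝒜.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_fit_theta(W, score_fn, fit_nuisance, K=5, seed=0):
    # Generic K-fold cross-fitting: theta_hat = (1/K) * sum_k theta_hat^(k), where
    # theta_hat^(k) averages the score over fold I_k with nuisance parameters fitted
    # on the complement I_k^c.
    # W: (n, ...) array of observations; score_fn(W_fold, v_hat) -> per-observation psi values;
    # fit_nuisance(W_train) -> fitted nuisance object v_hat (both are placeholders here).
    folds = KFold(n_splits=K, shuffle=True, random_state=seed)
    theta_k = []
    for train_idx, test_idx in folds.split(W):
        v_hat = fit_nuisance(W[train_idx])
        theta_k.append(np.mean(score_fn(W[test_idx], v_hat)))
    return float(np.mean(theta_k))
```

The multiplier bootstrap draws discussed further below can then be formed by reweighting the same hold-out scores with the multipliers ξ_i.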
Establishing these theorems relies on imposing the following high level assumptions on ψ_d,d^',a(W_a;v). Consider a random element W, taking values in a measure space (𝒲,𝒳_W), with the law determined by a probability measure P∈𝒫_n. The observed data ((W_a,i)_a∈𝒜)_i=1^n consist of n i.i.d. copies of a random element (W_a)_a∈𝒜 which is generated as a suitably measurable transformation with respect to W and a. Uniformly for all 3≤ n_0≤ n and P∈𝒫_n, 1. The true parameter F_Y(d,M(d^'))^0(a) satisfies equation (<ref>), θ_d,d^',a^0 is interior relative to Θ_a⊂Θ⊂ℝ for all a∈𝒜, (d,d^')∈{ 0,1} ^2 and Θ is a compact set. 2. For a∈𝒜, the map v⟼ E[ψ_a(W_a;v)] is twice continuously Gateaux-differentiable on 𝒱_a. 3. The function ψ_a(W_a;v) satisfies the following Neyman λ_n near-orthogonality condition at v=v_a^0 with respect to a∈𝒜 and v∈𝒱_an∪{ v_a^0}: λ_n:=sup_a∈𝒜,v∈𝒱_an∪{ v_a^0}‖∂_vE[ψ_a(W_a;v_a^0)[v-v_a^0]]‖≤δ_nn^-1/2, where δ_n is a sequence converging to zero from above at a speed at most polynomial in n, e.g., δ_n≥ n^-c for some c>0. 4. The following moment conditions hold: r_n :=sup_a∈𝒜,v∈𝒱_an‖ψ_a(W_a;v)-ψ_a(W_a;v_a^0)‖ _P,2≤δ_nn^-1/4 λ_n^' :=sup_a∈𝒜,v∈𝒱_an∪{ v_a^0} ,r̃∈(0,1)‖∂_r^2E[ψ_a(W_a;r(v-v_a^0)+v_a^0)]|_r=r̃‖≤δ_nn^-1/2, 5. The following smoothness condition holds for each (d,d^')∈{0,1}^2: sup_d_𝒜(a,a̅)≤δE[{(ψ_d,d^',a(W_a;v_a^0)-θ_d,d^',a^0)-(ψ_d,d^',a̅(W_a̅;v_a̅^0)-θ_d,d^',a̅^0)} ^2]≤ Cδ^c_1, where c_1 is a constant. 6. The set of functions ℱ_0={ψ_d,d^',a(W_a;v_a^0)-θ_d,d^',a^0:(d,d^')∈{ 0,1} ^2,a∈𝒜}, expressed as a function of W, is suitably measurable, and has an envelope function F_0(W)=sup_(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒱_a,θ∈Θ_a|ψ_d,d^',a(W_a;v)-θ| , which is measurable with respect to W, and ‖ F_0(W)‖ _P,q≤ C, where q≥4 is a fixed constant. Its uniform covering entropy satisfies logsup_QN(ϵ‖ F_0‖ _Q,2,ℱ_0,‖ .‖ _Q,2)≤ Clog(e/ϵ)∨0, where C>0 is a constant, e denotes exp(1) and 0<ϵ≤1. 7. The set of functions ℱ_1={ψ_d,d^',a(W_a;v)-θ:(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒱_an,θ∈Θ_a} is suitably measurable and has an envelope function F_1(W)=sup_(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒱_an,θ∈Θ_a|ψ_d,d^',a(W_a;v)-θ| which is measurable with respect to W, and F_1(W)≤ F_0(W). Its uniform covering entropy satisfies logsup_QN(ϵ‖ F_1‖ _Q,2,ℱ_1,‖ .‖ _Q,2)≤ vlog(b/ε)∨0, where v≥1 and b≥max{e,N} and 0<ϵ≤1. 8. Nuisance parameter estimation: Let K be a fixed integer and Δ_n and τ_n be a sequence of positive constants converging to zero at a speed of at most polynomial n. The following conditions hold for each n≥3 and all P∈𝒫_n. Given a random subset I_k, k=1,…,K of size n/K, the estimated nuisance parameter {v̂_k,a,g} _g=1^G∈𝒱_an with probability at least 1-Δ_n, where 𝒱_an is the set of measurable maps { v_g} _g=1^G∈𝒱_a such that for each g, ‖ v_g-v_a,g^0‖ _P,2≤τ_n and √(n)τ_n^2≤δ_n. Therefore and when denoting by ℰ_n the event that v̂_k,a∈𝒱_an for all k=1,…,K, the probability of ℰ_n is not smaller than 1-KΔ_n=1-o(1). Let 𝔾_n denote an empirical process 𝔾_nf(W)=√(n)(E_nf(W)-E[f(W)]), where f is any P∈𝒫_n integrable function on the set 𝒲. Let 𝔾_Pf(W) denote the limiting process of 𝔾_nf(W), which is a Gaussian process with zero mean and a finite covariance matrix E[(f(W)-E[f(W)])(f(W)-E[f(W)])^⊤] under probability P (the P-Brownian bridge). Using the previous notation and assumptions, we obtain the following result. If Assumptions A.1.1 to A.1.8 hold, the K-fold cross-fitting estimator θ̂_a for estimating 𝐅^0(a) satisfies √(n)(θ̂_a-𝐅^0(a))_a∈𝒜=Z_n,P+o_P(1), in l^∞(𝒜)^4, uniformly in P∈𝒫_n, where Z_n,P:=( 𝔾_n(ψ_a(W_a;v_a^0)-θ_a^0)) _a∈𝒜. 
Furthermore, Z_n,P⇝ Z_P in l^∞(𝒜)^4, uniformly in P∈𝒫_n, where Z_P:=( 𝔾_P(ψ_a(W_a;v_a^0)-θ_a^0)) _a∈𝒜 and the paths of a⟼𝔾_P(ψ_a(W_a;v_a^0)-θ_a^0) are a.s. uniformly continuous on (𝒜,d_𝒜), and sup_P∈𝒫_nE[sup_a∈𝒜‖𝔾_P(ψ_a(W_a;v_a^0)-θ_a^0)‖] <∞, lim_δ→0sup_P∈𝒫_nE[sup_d_𝒜(a,a̅)‖𝔾_P(ψ_a(W_a;v_a^0)-θ_a^0)-𝔾_P(ψ_a̅(W_a̅;v_a̅^0)-θ_a̅^0)‖] =0. Under Assumptions A.1.1 to A.1.8, we can also establish the uniform validity of the multiplier bootstrap. Recall the multiplier bootstrap estimator: θ̂_d,d^',a^*=θ̂_d,d^',a+1/n∑_i=1^nξ_i(ψ_d,d^',a(W_a,i;v̂_k,a)-θ̂_d,d^',a), where ξ is a random variable independent of W_a that satisfies E[ξ]=0, Var(ξ)=1 and E[exp(|ξ|)]<∞. By the independence of ξ and W_a, E[ξψ_d,d^',a(W_a;v̂_k,a)]=E[ξ]E[ψ_d,d^',a(W_a;v̂_k,a)]=0, and also E[ξθ̂_d,d^',a]=0. Therefore, √(n)(θ̂_d,d^',a^*-θ̂_d,d^',a) =1/√(n)∑_i=1^nξ_i(ψ_d,d^',a(W_a,i;v̂_k,a)-θ̂_d,d^',a) =𝔾_nξ(ψ_d,d^',a(W_a;v̂_k,a)-θ̂_d,d^',a). Let θ̂_a^* denote a vector containing the multiplier bootstrap estimators θ̂_d,d^',a^* over different (d,d^')∈{0,1}^2. We may rewrite the previous result in a vector form as √(n)(θ̂_a^*-θ̂_a)=𝔾_nξ(ψ_a(W_a;v̂_k,a)-θ̂_a). Furthermore, let Z_n,P^*:=(𝔾_nξ(ψ_a(W_a;v̂_k,a)-θ̂_a))_a∈𝒜 to postulate the following theorem. If Assumptions A.1.1 through A.1.8 hold, the large sample law Z_P of Z_n,P in Theorem A.1 can be consistently approximated by the bootstrap law Z_n,P^*: Z_n,P^*⇝_BZ_P uniformly over P∈𝒫_n in l^∞(𝒜)^4. Let ϕ_τ(F_X):=inf{ a∈ℝ:F_X(a)≥τ} be the τ-th quantile function of a random variable X associated with a c.d.f. F_X. The von Mises expansion of ϕ_τ(F_X) (p.292 in <cit.>) is given by: ϕ_τ(E_n)-ϕ_τ(E)=1/√(n)ϕ_τ,E^'(𝔾_n)+…+1/m!1/n^m/2ϕ_τ,E^(k)(𝔾_n)+…, where ϕ_τ,E^'(.) is a linear derivative map and 𝔾_n denotes an empirical process 𝔾_nf(W)=√(n)(E_nf(W)-E[f(W)]). Let ϕ_θ^':=(ϕ_τ,θ^')_τ∈𝒯, where θ=(θ_a)_a∈𝒜. Let Q_Y(d,M(d^'))^0(τ):=inf{ a∈ℝ:θ_d,d^',a^0≥τ}, Q̂_Y(d,M(d^'))(τ):=inf{ a∈ℝ:θ̂_d,d^',a≥τ} and Q̂_Y(d,M(d^'))^*(τ):=inf{ a∈ℝ:θ̂_d,d^',a^*≥τ}. Let 𝐐_τ^0, 𝐐̂_τ and 𝐐̂_τ^* denote the corresponding vectors containing Q_Y(d,M(d^'))^0(τ), Q̂_Y(d,M(d^'))(τ) and Q̂_Y(d,M(d^'))^*(τ) for different (d,d^')∈{0,1}^2, respectively. We then obtain the following result of uniform validity of quantile estimation, which can be proven by invoking the functional delta theorems (Theorems B.3 and B.4) of <cit.>. Under Assumptions A.1.1 to A.1.8, √(n)(𝐐̂_τ-𝐐_τ^0)_τ∈𝒯 ⇝ T_P:=ϕ_θ^'(Z_P), √(n)(𝐐̂_τ^*-𝐐̂_τ)_τ∈𝒯 ⇝_BT_P:=ϕ_θ^'(Z_P). uniformly over P∈𝒫_n in l^∞(𝒯)^4, where 𝒯⊂(0,1), T_P is a zero mean tight Gaussian process for each P∈𝒫_n and Z_P:={𝔾_P(ψ_a(W_a;v_a^0)-θ_a^0)} _a∈𝒜. §.§ Proofs of Theorems A.1 to A.3 In the proofs of Theorems A.1 to A.3, we will use the notation ≲ to denote “less than equal a constant times”: b≲ c denotes b≤ Bc, where B is a constant depending on Assumptions A.1.1 to A.1.8, but not on n_0≤ n and P∈𝒫_n. We assume n_0≤ n since the results are all asymptotic. It is sufficient to establish the result over any sequence of induced probability measure P_n∈𝒫_n. But we will write P=P_n to simplify the notation. Furthermore, we fix any k=1,...,K. From the definition of θ̂_a and under Assumption A.1.1, we obtain √(n)(θ̂_a-𝐅^0(a)) = √(n)[1/K∑_k=1^Kθ̂_a^(k)-θ_a^0] = √(n){1/K∑_k=1^KE_N,k[ψ_a(W_a;v̂_k,a)-E[ψ_a(W_a;v_a^0)]]} = √(n){1/K∑_k=1^KE_N,k[ψ_a(W_a;v̂_k,a)]-E[ψ_a(W_a;v_a^0)]} = √(n){1/K∑_k=1^KE_N,k[ψ_a(W_a;v̂_k,a)]-1/n∑_i=1^nψ_a(W_a,i;v_a^0). 
+.1/n∑_i=1^n(ψ_a(W_a,i;v_a^0)-θ_a^0)-(E[ψ_a(W_a;v_a^0)-θ_a^0])} = √(n){1/K∑_k=1^KE_N,k[ψ_a(W_a;v̂_k,a)]-1/n∑_i=1^Nψ_a(W_a,i;v_a^0)}_m_1,n(a) +𝔾_n(ψ_a(W_a;v_a^0)-θ_a^0). Therefore, proving √(n)(θ̂_a-θ_a^0)_a∈𝒜=Z_n,P+o_P(1) uniformly over P∈𝒫_n in l^∞(𝒜)^4 is equivalent to showing that (m_1,n(a))_a∈𝒜=o_P(1) uniformly over P∈𝒫_n in l^∞(𝒜)^4. Notice that 1/K∑_k=1^KE_N,k[ψ_a(W_a;v̂_k,a)]-1/n∑_i=1^Nψ_a(W_a,i;v_a^0) =1/K∑_k=1^K{ E_N,k[ψ_a(W_a;v̂_k,a)]-E_N,k[ψ_a(W_a;v_a^0)]} and sup_a∈𝒜‖ m_1,n(a)‖ =√(n)sup_a∈𝒜‖1/K∑_k=1^K{ E_N,k[ψ_a(W_a;v̂_k,a)]-E_N,k[ψ_a(W_a;v_a^0)]}‖ ≤√(n)sup_a∈𝒜1/K∑_k=1^K‖ E_N,k[ψ_a(W_a;v̂_k,a)]-E_N,k[ψ_a(W_a;v_a^0)]‖ ≤√(n)1/K∑_k=1^Ksup_a∈𝒜‖ E_N,k[ψ_a(W_a;v̂_k,a)]-E_N,k[ψ_a(W_a;v_a^0)]‖ . Therefore, it suffices to show that sup_a∈𝒜‖ E_N,k[ψ_a(W_a;v̂_k,a)]-E_N,k[ψ_a(W_a;v_a^0)]‖_m_2,N,k(a)=o_P(n^-1/2) holds, since K is finite and fixed. Next, m_2,N,k(a) =‖1/N∑_i∈ I_kψ_a(W_a,i;v̂_k,a)-1/N∑_i∈ I_kψ_a(W_a,i;v_a^0)‖ =‖1/N∑_i∈ I_kψ_a(W_a,i;v̂_k,a)-1/N∑_i∈ I_kE[ψ_a(W_a,i;v̂_k,a)|(W_a,j)_j∈ I_k^c]. -[1/N∑_i∈ I_kψ_a(W_a,i;v_a^0)-1/N∑_i∈ I_kE[ψ_a(W_a,i;v_a^0)]] .+1/N∑_i∈ I_kE[ψ_a(W_a,i;v̂_k,a)|(W_a,j)_j∈ I_k^c]-1/N∑_i∈ I_kE[ψ_a(W_a,i;v_a^0)]‖ ≤1/√(N)‖𝔾_N,k(ψ_a(W_a;v̂_k,a)-ψ_a(W_a;v_a^0))‖_m_3,N,k(a) +‖1/N∑_i∈ I_kE[ψ_a(W_a,i;v̂_k,a)|(W_a,j)_j∈ I_k^c]-E[ψ_a(W_a;v_a^0)]‖_m_4,N,k(a), where 𝔾_N,k is an empirical process defined as 𝔾_N,kf(W)=√(N)(1/N∑_i∈ I_kf(W_i)-∫ f(w)dP), and f is any P integrable function on 𝒲. We note that E[ψ_a(W_a,i;v̂_k,a)|(W_a,j)_j∈ I_k^c]=E[ψ_a(W_a,i;v̂_k,a)] for i∈ I_k, since conditional on (W_a,j)_j∈ I_k^c, v̂_k,a is a constant and (W_a,i)_i∈ I_k and (W_a,j)_i∈ I_k^c are independent. Then, sup_a∈𝒜m_2,N,k(a)≤sup_a∈𝒜m_3,N,k(a)/√(N)+sup_a∈𝒜m_4,N,k(a). In order to bound sup_a∈𝒜m_3,N,k(a), we define the following class of functions: ℱ_2^'={ψ_d,d^',a(W_a;v)-θ-(ψ_d,d^',a(W_a;v_a^0)-θ_d,d^',a^0):(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒱_an,θ∈Θ_an} , where Θ_an:={θ∈Θ_a:|θ-θ_d,d^',a^0|≤ Cτ_n}. Notice that the envelope function of ℱ_2^', denoted by F_2^'(W) =sup_(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒱_an,θ∈Θ_an|ψ_d,d^',a(W_a;v)-θ-(ψ_d,d^',a(W_a;v_a^0)-θ_d,d^',a^0)| ≤sup_(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒱_an,θ∈Θ_an|ψ_d,d^',a(W_a;v)-θ| +sup_(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒱_a,θ∈Θ_an|ψ_d,d^',a(W_a;v)-θ| ≤ F_1(W)+F_0(W) ≤2F_0(W). The uniform covering entropy of ℱ_2^': logsup_QN(ϵ‖ F_2^'‖ _Q,2,ℱ_2^',‖ .‖ _Q,2) satisfies logsup_QN(ϵ‖ F_2^'‖ _Q,2,ℱ_2^',‖ .‖ _Q,2)≲2v(log(a/ϵ))∨0. Next, consider another class of functions: ℱ_2={ψ_d,d^',a(W_a;v)-ψ_d,d^',a(W_a;v_a^0):(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒱_an,θ∈Θ_an} . ℱ_2 is a subset of ℱ_2^' in which we choose C=0 in Θ_an, and for this reason, its envelope function F_2(W) is bounded by F_2^'(W). Therefore, the uniform covering entropy of ℱ_2: logsup_QN(ϵ‖ F_2‖ _Q,2,ℱ_2,‖ .‖ _Q,2) also satisfies logsup_QN(ϵ‖ F_2‖ _Q,2,ℱ_2,‖ .‖ _Q,2)≲2v(log(a/ϵ))∨0. With probability P 1-KΔ_n=1-o(1), we have sup_(d,d^')∈{ 0,1} ^2,a∈𝒜|𝔾_N,k(ψ_d,d^',a(W_a;v̂_k,a)-ψ_d,d^',a(W_a;v_a^0))|≤sup_f∈ℱ_2|𝔾_N,kf|, where ψ_d,d^',a(W_a;v̂_k,a)-ψ_d,d^',a(W_a;v_a^0) is an element of ψ_a(W_a;v̂_k,a)-ψ_a(W_a;v_a^0) in m_3,N,k(a). Furthermore, it can be shown that sup_f∈ℱ_2‖ f‖ _P,2≤ r_n, where r_n:=sup_a∈𝒜,v∈𝒱_an‖ψ_a(W_a;v)-ψ_a(W_a;v_a^0)‖ _P,2. 
Then using the maximum inequality A.1 of Lemma 6.2 of <cit.> by setting the used parameter σ=C^'δ_nn^-1/4, where C^'>1 is a constant, and a=b=N in this maximum inequality, we have sup_f∈ℱ_2|𝔾_N,kf| ≲δ_nn^-1/4√(log(N∨σ^-1))+N^1/q-1/2log(N∨σ^-1) ≲δ_n+K^1/2-1/qlog n/n^1/2-1/q=o(1) If follows that with probability P 1-o(1), sup_(d,d^')∈{ 0,1} ^2,a∈𝒜|𝔾_N,k(ψ_d,d^',a(W_a;v̂_k,a)-ψ_d,d^',a(W_a;v_a^0))|≤sup_f∈ℱ_2|𝔾_N,kf|≲ o(1), and sup_a∈𝒜m_3,N,k(a)≲ o(1). Then, N^-1/2sup_a∈𝒜m_3,N,k(a)=n^-1/2K^1/2sup_a∈𝒜m_3,N,k(a)=o_P(n^-1/2) since K is fixed and finite. Concerning the term m_4,N,k(a), let φ_k(r)=E[ψ_a(W_a;r(v̂_k,a-v_a^0)+v_a^0)|(W_a,j)_j∈ I_k^c]-E[ψ_a(W_a;v_a^0)], where r∈(0,1). Notice that φ_k(0)=0, since E[ψ_a(W_a;v_a^0)|(W_a,j)_j∈ I_k^c]=E[ψ_a(W_a;v_a^0)], and φ_k(1)=E[ψ_a(W_a;v̂_k,a)|(W_a,j)_j∈ I_k^c]-E[ψ_a(W_a;v_a^0)]. By applying a Taylor expansion to φ_k(r) around 0, φ_k(r)=φ_k(0)+φ_k^'(0)r+1/2φ_k^''(r̅)r^2 for some r̅∈(0,1). Then, m_4,N,k(a)≤‖φ_k(1)‖ =‖φ_k^'(0)+1/2φ_k^''(r̅)‖, where φ_k^'(0) =∂_vE[ψ_a(W_a;v_a^0)[v̂_k,a-v_a^0]|(W_a,j)_j∈ I_k^c]=∂_vE[ψ_a(W_a;v_a^0)[v̂_k,a-v_a^0]], φ_k^''(r̅) =∂_r^2E[ψ_a(W_a;r̅(v̂_k,a-v_a^0)+v_a^0)|(W_a,j)_j∈ I_k^c],r̅∈(0,1). Therefore, sup_a∈𝒜m_4,N,k(a)≤sup_a∈𝒜‖φ_k^'(0)‖ +1/2sup_a∈𝒜‖φ_k^''(r̅)‖. Furthermore, sup_a∈𝒜‖φ_k^'(0)‖ and sup_a∈𝒜‖φ_k^''(r̅)‖ are bounded by the following terms, respectively: sup_a∈𝒜,v∈𝒱_an∪{ v_a^0}‖∂_vE[ψ_a(W_a;v_a^0)[v-v_a^0]]‖ =o(n^-1/2), sup_a∈𝒜,v∈𝒱_an∪{ v_a^0} ,r̃∈(0,1)‖∂_r^2E[ψ_a(W_a;r̅(v-v_a^0)+v_a^0)]‖ =o(n^-1/2). It follows that sup_a∈𝒜m_4,N,k(a)=o_P(n^-1/2). Combining the previous results, we obtain that sup_a∈𝒜m_2,N,k(a)=o_P(n^-1/2), and sup_a∈𝒜‖ m_1,n(a)‖≤ n^1/2o_P(n^-1/2)=o_P(1) uniformly over P∈𝒫_n in l^∞(𝒜)^4. Finally, to show that Z_n,P⇝ Z_P in l^∞(𝒜)^4 uniformly in P∈𝒫_n, we may exploit the properties of functions in ℱ_0. Recall that ℱ_0 is suitably measurable, has an envelop function F_0(W) that is measurable with respect to 𝒲 and satisfies ‖ F_0‖ _P,q=(E|F_0|^q)^1/q≤ C, where q≥4 is a fixed number. By Assumption A.1.5, functions in ℱ_0 satisfy sup_P∈𝒫_nE[{(ψ_d,d^',a(W_a;v_a^0)-θ_d,d^',a^0)-(ψ_d,d^',a̅(W_a̅;v_a̅^0)-θ_d,d^',a̅^0)} ^2]→0, as d_𝒜(a,a̅)→0. By Assumption A.1.6, uniform covering entropy of ℱ_0 satisfies sup_Qlog N(ϵ‖ F_0‖ _Q,2,ℱ_0,‖ .‖ _Q,2)≤ Clog(e/ϵ)∨0. In fact, the uniform covering integral satisfies ∫_0^1√(sup_Qlog N(ϵ‖ F_0‖ _Q,2,ℱ_0,‖ .‖ _Q,2))dϵ ≤√(C)∫_0^1√(1-logϵ)dϵ ≤√(C)∫_0^11/√(ϵ)dϵ<∞, which follows from the result that 1-logϵ≤1/ϵ for all ϵ>0 and ∫_0^1ϵ^-bdϵ<∞ for b<1. Therefore, we may invoke Theorem B.1 of <cit.> to obtain the result. The class of ℱ_0 is Donsker uniformly in P∈𝒫_n because ‖ F_0‖ _P,q is bounded, the entropy condition holds, and Assumption A.1.5 implies that sup_P∈𝒫_n‖ψ_a(W_a,v_a^0)-θ_a^0-(ψ_a̅(W_a̅,v_a̅^0)-θ_a̅^0)‖ _P,2→0 as d_𝒜(a,a̅)→0. It is sufficient to establish the result over any sequence of probability measure P_n∈𝒫_n. Again, we will write P=P_n for simplifying the notation. Let U_n,P^*:=(𝔾_nξ(ψ_a(W_a;v_a^0)-θ_a^0))_a∈𝒜. To show that Z_n,P^*⇝_BZ_P uniformly over P∈𝒫_n in l^∞(𝒜)^4, we first show that ‖ Z_n,P^*-U_n,P^*‖ =o_P(1) and then show that U_n,P^*⇝_BZ_P uniformly over P∈𝒫_n in l^∞(𝒜)^4. To prove ‖ Z_n,P^*-U_n,P^*‖ =o_P(1) uniformly over P∈𝒫_n in l^∞(𝒜)^4, we use a similar argument as for proving equation (E.7) in the appendix of <cit.>. Let Z_n,P^*(a):=𝔾_nξ(ψ_a(W_a;v̂_k,a)-θ̂_a) and U_n,P^*(a):=𝔾_nξ(ψ_a(W_a;v_a^0)-θ_a^0). 
We first notice that since E[ξ]=0 and ξ and W_a are independent, E[ξ(ψ_a(W_a;v_a^0)-θ_a^0)]=0 and Z_n,P^*(a) =1/√(n)∑_i=1^nξ_i(ψ_a(W_a,i;v̂_a^0)-θ̂_a^0), U_n,P^*(a) =1/√(n)∑_i=1^nξ_i(ψ_a(W_a,i;v_a^0)-θ_a^0). Therefore, we have sup_a∈𝒜‖ Z_n,P^*(a)-U_n,P^*(a)‖≤Π_1+Π_2, where Π_1 =sup_a∈𝒜‖𝔾_nξ(ψ_a(W_a,i;v̂_a^0)-ψ_a(W_a,i;v_a^0))‖ =sup_a∈𝒜‖√(n)1/K∑_k=1^K1/√(N)[𝔾_N,kξ(ψ_a(W_a;v̂_k,a^0)-ψ_a(W_a;v_a^0))]‖ ≤√(n)1/K∑_k=1^K1/√(N)sup_a∈𝒜‖𝔾_N,kξ(ψ_a(W_a;v̂_k,a^0)-ψ_a(W_a;v_a^0))‖ , and Π_2=sup_a∈𝒜‖1/√(n)∑_i=1^nξ_i(θ̂_a^0-θ_a^0)‖≤sup_a∈𝒜‖θ̂_a^0-θ_a^0‖|𝔾_nξ|. The term Π_2 is O_p(n^-1/2), since sup_a∈𝒜‖θ̂_a^0-θ_a^0‖ =O_p(n^-1/2) by Theorem A.1 and |𝔾_nξ|=O_p(1). Concerning the term Π_1, recall the class of functions used in the proof of Theorem A.1, ℱ_2={ψ_d,d^',a(W_a;v)-ψ_d,d^',a(W_a;v_a^0):(d,d^')∈{ 0,1} ^2,a∈𝒜,v∈𝒱_an}, as well as its envelope function F_2≤2F_0 and the covering entropy: logsup_QN(ϵ‖ F_2‖ _Q,2,ℱ_2,‖ .‖ _Q,2)≲2v(log(a/ϵ))∨0. Using Lemma K.1 in the Appendix of <cit.>, multiplication of this class by ξ does not change the entropy bound modulo an absolute constant, and therefore its covering entropy logsup_QN(ϵ‖ξ F_2‖ _Q,2,ξℱ_2,‖ .‖ _Q,2) is bounded by the same order as logsup_QN(ϵ‖ F_2‖ _Q,2,ℱ_2,‖ .‖ _Q,2). Next, notice that (E[max_i∈ I_kξ_i^2])^1/2≲log N by E[exp(|ξ|)]<∞. By the independence of ξ_i and W_i, we have ‖max_i∈ I_kξ_iF_0(W_i)‖ _P,2≤‖max_i∈ I_kξ_i‖ _P,2‖max_i∈ I_kF_0(W_i)‖ _P,2≲ N^1/qlog N, which holds for k=1,…,K. Using the maximum inequality A.1 of Lemma 6.2 of <cit.>, with probability P 1-o(1), we obtain sup_f∈ξℱ_2|𝔾_N.kf| ≲δ_nn^-1/4√(log(N∨σ^-1))+N^1/qlog N/√(N)log(N∨σ^-1) ≲δ_nn^-1/4√(log(N∨σ^-1))+K^1/2-1/q(log n-log K)/n^1/2-1/qlog(N∨σ^-1) ≲δ_n+n^1/q-1/2log nlog(n∨σ^-1)=o_p(1), by using the fact that sup_f∈ξℱ_2‖ f‖ _P,2=sup_f∈ℱ_2‖ f‖ _P,2≤ r_n and setting the parameters σ=C^'δ_nn^-1/4 and a=b=N in this maximum inequality. With probability P 1-o(1) and for v̂_k,a∈𝒱_an, it can be shown that sup_a∈𝒜‖𝔾_N,kξ(ψ_a(W_a;v̂_k,a)-ψ_a(W_a;v_a^0))‖≲sup_f∈ξℱ_2|𝔾_N,kf|. Therefore, it follows with probability P 1-o(1) that 1/√(N)sup_a∈𝒜‖𝔾_N,kξ(ψ_a(W_a;v̂_k,a)-ψ_a(W_a;v_a^0))‖≲ K^1/2n^-1/2o_p(1)≲ o_p(n^-1/2), and since K is fixed and finite, Π_1=sup_a∈𝒜‖𝔾_nξ(ψ_a(W_a;v̂_k,a)-ψ_a(W_a;v_a^0))‖ ≲√(n)o_p(n^-1/2)=o_p(1). Combining the previous results, we obtain that ‖ Z_n,P^*-U_n,P^*‖ =o_p(1). Next, notice that U_n,P^* is associated with the class of functions ξ f, where f∈ℱ_0 is defined in Assumption A.1.6. As shown in the proof of Theorem A.1, the class of ℱ_0 is Donsker, uniformly in P∈𝒫_n under the imposed assumptions. Therefore, we may invoke Theorem B.2 of of <cit.> and obtain U_n,P^*⇝_BZ_P. Indeed, both U_n,P^* and Z_P are Gaussian processes that share the same (zero) mean and the same covariance matrix. Finally, using a similar argument in step 2 for proving Theorem 5.2 in the appendix of <cit.>, we can obtain Z_n,P^*⇝_BZ_P. Let BL_1(l^∞(𝒜)) be the space of functions mapping the space of functions in l^∞(𝒜) to [0,1], with the Lipschitz norm being at most 1. Let E_B_n denote the expectation over the multiplier weights (ξ_i)_i=1^n when holding the data (W_i)_i=1^n fixed. Following step 2 for proving Theorem 5.2 in the appendix of <cit.>, we obtain the following inequality: sup_h∈BL_1(l^∞(𝒜))|E_B_n[h(Z_n,P^*)]-E_P[h(Z_P)]| ≤sup_h∈BL_1(l^∞(𝒜))|E_B_n[h(U_n,P^*)]-E_P[h(Z_P)]| +E_B_n[‖ Z_n,P^*-U_n,P‖2]. The first term vanishes by Theorem B.2 of <cit.>, since we have proven that U_n,P^*⇝_BZ_P. 
The second term is o_P(1), because E[‖ Z_n,P^*-U_n,P‖2]=E[E_B_n[‖ Z_n,P^*-U_n,P‖2]]→0 by the Markov inequality P(E_B_n[‖ Z_n,P^*-U_n,P^*‖2]≥ε) ≤E[E_B_n[‖ Z_n,P^*-U_n,P^*‖2]]/ε =E[‖ Z_n,P^*-U_n,P^*‖2]/ε. As shown above, ‖ Z_n,P^*-U_n,P^*‖ =o_p(1), which implies that ‖ Z_n,P^*-U_n,P^*‖2=o_P(1). Therefore, sup_h∈ BL_1(l^∞(𝒜))|E_B_n[h(Z_n,P^*)]-E_P[h(Z_P)]| vanishes and it follows that Z_n,P^*⇝_BZ_P. The proof of Theorem A.3 relies on the uniform Hadamard differentiability <cit.> of the quantile function. The definition of uniform Hadamard differentiability is as follows. Uniform Hadamard Tangential Differentiability, <cit.>: Let 𝔼 and 𝔻 be normed spaces. Consider a map ϕ:𝔻_ϕ⟼𝔼, where 𝔻_ϕ⊂𝔻 and the range of ϕ is a subset of 𝔼. Let 𝔻_0⊂𝔻 be a normed space, and 𝔻_ρ⊂𝔻_ϕ be a compact metric space. Let h⟼ϕ_ρ^'(h) be the linear derivative map associated with ϕ, where h∈𝔻_0 and ρ∈𝔻_ρ. The linearity of ϕ_ρ^'(h) holds for each ρ. Then the map ϕ:𝔻_ϕ⟼𝔼 is called Hadamard differentiable uniformly in ρ∈𝔻_ρ tangentially to 𝔻_0 with derivative map h⟼ϕ_ρ^'(h), if |ϕ(ρ_n+t_nh_n)-ϕ(ρ_n)/t_n-ϕ_ρ^'(h)| →0, |ϕ_ρ_n^'(h_n)-ϕ_ρ^'(h)| →0 as n→0. for all convergence sequences ρ_n→ρ, t_n→0 in ℝ and h_n→ h, such that ρ_n+t_nh_n∈𝔻_ϕ for every n. As pointed out by <cit.>, the quantile function is uniformly Hadamard-differentiable if we set 𝔻=l^∞(T), where T=[ϵ,1-ϵ],ϵ>0, 𝔻_ϕ is a set of càdlàg functions on T, 𝔻_0=UC(T), 𝔻_ρ being a compact subset of C^1(T) such that each ρ satisfies ∂ρ(u)/∂ u>0.[UC(T) denotes a set of uniformly continuous functions from T to ℝ, and C^1(T) denotes a set of continuous differentiable functions from T to ℝ.] Notice that this setting rules out the case that Y(d,M(d^')) is a discrete random variable. Also if 𝔻_ρ=𝔻_ϕ, the quantile function is not Hadamard-differentiable uniformly in 𝔻_ρ in the sense of our definition. This is different from the definition of uniformly differentiability given in <cit.> which requires 𝔻_ρ=𝔻_ϕ. Since our estimation is for infinite dimension, it is essential to restrict 𝔻_ρ to be much smaller than 𝔻_ϕ and endow 𝔻_ρ to have a much stronger metric than the metric induced by the norm of 𝔻. However, here the estimated ρ̂ can satisfy ρ̂∈𝔻_ϕ, but ρ̂∉𝔻_ρ (for example when ρ̂ is an empirical c.d.f.), even though the population values of ρ∈𝔻_ρ and ∂ρ(u)/∂ u>0 should hold. With the definition of uniform Hadamard differentiability, we in a next step restate Theorems B.3 and B.4 of <cit.> as follows. Functional delta method uniformly in P∈𝒫, <cit.>: Let ϕ:𝔻_ϕ⊂𝔻⟼𝔼 be Hadamard differentiable uniformly in ρ∈𝔻_ρ⊂𝔻_ϕ tangentially to 𝔻_0 with derivative map h⟼ϕ_ρ^'(h). Let ρ̂_n,P be a sequence of stochastic processes taking values in 𝔻_ϕ, where each ρ̂_n,P is an estimator of the parameter ρ=ρ_P∈𝔻_ρ. Suppose there exists a sequence of constants r_n→∞ such that Z_n,P:=r_n(ρ̂_n,P-ρ_P)⇝ Z_P in 𝔻 uniformly in P∈𝒫_n. The limit process Z_P is separable and takes its values in 𝔻_0 for all P∈𝒫=⋃_n≥ n_0𝒫_n, where n_0 is fixed. Moreover, the set of stochastic processes { Z_P:P∈𝒫} is relatively compact in the topology of weak convergence in 𝔻_0, that is, every sequence in this set can be split into weakly convergent subsequences. Then, r_n(ϕ(ρ̂_n,P)-ϕ(ρ))⇝ϕ_ρ_P^'(Z_P) in 𝔼 uniformly in P∈𝒫_n. If (ρ,h)⟼ϕ_ρ_P^'(h) is defined and continuous on the whole of 𝔻_ρ×𝔻, then the sequence r_n(ϕ(ρ̂_n,P)-ϕ(ρ))⇝ϕ_ρ_P^'(r_n(ρ̂_n,P-ρ)) converges to zero in outer probability uniformly in P∈𝒫_n. 
Moreover, the set of stochastic processes {ϕ_ρ_P^'(Z_P):P∈𝒫} is relatively compact in the topology of weak convergence in 𝔼. Functional delta method uniformly in P∈𝒫 for the bootstrap and other simulation methods, <cit.>: Assume that the conditions in Theorem A.4 hold. Let ρ̂_n,P and ρ̂_n,P^* be maps as previously indicated, taking values in 𝔻_ϕ such that Z_n,P:=r_n(ρ̂_n,P-ρ_P)⇝ Z_P and Z_n,P^*:=r_n(ρ̂_n,P^*-ρ_P)⇝_BZ_P in 𝔻 uniformly in P∈𝒫_n. Then, r_n(ϕ(ρ̂_n,P^*)-ϕ(ρ̂_n,P))⇝_Bϕ_ρ_P^'(Z_P) uniformly in P∈𝒫_n. The function ϕ_θ satisfies uniform Hadamard tangential differentiability, and both Z_n,P⇝ Z_P and Z_n,P^*⇝ Z_P in l^∞(𝒜)^4 uniformly in P∈𝒫_n, as shown in Theorems A.1 and A.2. Therefore, the proof follows by applying Theorems A.4 and A.5. §.§ Tables and Figures Figure: Approximate true c.d.f. and quantile function profiles from 40 million Monte Carlo simulations under the data generating process described in Section 4. Top: F_Y(1,M(1)), F_Y(1,M(0)), F_Y(0,M(1)) and F_Y(0,M(0)); Bottom: Q_Y(1,M(1)), Q_Y(1,M(0)), Q_Y(0,M(1)) and Q_Y(0,M(0)).
http://arxiv.org/abs/2307.03065v1
20230706153250
Quantum Complexity for Discrete Logarithms and Related Problems
[ "Minki Hhan", "Takashi Yamakawa", "Aaram Yun" ]
quant-ph
[ "quant-ph", "cs.CR" ]
http://arxiv.org/abs/2307.01797v1
20230704160708
Stochastic and self-consistent 3D modeling of streamer discharge trees with Kinetic Monte Carlo
[ "Robert Marskar" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
SINTEF Energy Research, Sem Sælands vei 11, 7034 Trondheim, Norway. robert.marskar@sintef.no This paper contains the foundation for a new Particle-In-Cell model for gas discharges, based on Îto diffusion and Kinetic Monte Carlo (KMC). In the new model the electrons are described with a microscopic drift-diffusion model rather than a macroscopic one. We discuss the connection of the Îto-KMC model to the equations of fluctuating hydrodynamics and the advection-diffusion-reaction equation which is conventionally used for simulating streamer discharges. The new model is coupled to a particle description of photoionization, providing a non-kinetic all-particle method with several attractive properties, such as: 1) Taking the same input as a fluid model, e.g. mobility coefficients, diffusion coefficients, and reaction rates. 2) Guaranteed non-negative densities. 3) Intrinsic support for reactive and diffusive fluctuations. 4) Exceptional stability properties. The model is implemented as a particle-mesh model on cut-cell grids with Cartesian adaptive mesh refinement. Positive streamer discharges in atmospheric air are considered as the primary application example, and we demonstrate that we can self-consistently simulate large discharge trees. Keywords: Streamer, Particle-In-Cell, Cartesian AMR, Parallel computing. Stochastic and self-consistent 3D modeling of streamer discharge trees with Kinetic Monte Carlo Robert Marskar August 1, 2023 § INTRODUCTION Substantial efforts have been made in order to understand the nature of streamer discharges <cit.>, which is a specific type of transient and filamentary plasma. Streamers occur naturally as precursors to electric sparks, and also appear as sprite discharges in the upper atmosphere <cit.>. They are also highly useful for CO_2 conversion <cit.>, in plasma assisted combustion <cit.>, plasma catalysis <cit.>, and plasma medicine <cit.>. Streamers are inherently three-dimensional structures that usually appear in bundles or in the shape of discharge trees. These develop through repetitive branching, which is a fundamental property of streamers <cit.>. For positive streamer discharges in air, the amount of photoionization in front of the streamer strongly affects the degree of branching <cit.>. When more photoelectrons are generated in front of positive streamers the amount of streamer branching is reduced. Although single streamers are now relatively well understood, discharge trees are the more relevant structures since they appear in virtually all applications involving streamer discharges. Figure <ref> shows an example of such a structure for a positive streamer discharge in air at atmospheric pressure and temperature. Clearly, this structure requires full 3D modeling over many orders of magnitude in both space and time, and is therefore quite difficult to describe quantitatively. Most contemporary computational models can only solve for at most a few filaments. See e.g. <cit.> for recent results with fluid models, or <cit.> for kinetic particle models. Several researchers have questioned the feasibility of using fluid and particle models for obtaining numerical solutions that describe entire discharge trees <cit.>, such as those in figure <ref>. Given the difficulties in simulating even just an isolated streamer filament <cit.>, it is easy to understand why such claims are made.
In a recent review <cit.> remarked that although fluid and particle simulations of streamers with tens or hundreds of branches are computationally unfeasible, reduced-order models of single filaments <cit.> are candidates for improved tree and fractal based models <cit.>. Although these models have been heralded for quite some time and could be used for simulating discharge trees, they are still in their infancy and they are unfortunately also excessively simplified. Many natural phenomena like branching and charge transport do not self-consistently evolve from the model itself, and the lack of a density (or particle) description also complicates quantitative descriptions of the chemistry in the streamer channels. Currently, drift-diffusion fluid models in the local field approximation (LFA) are most frequently used for studying streamer discharges. With fluid models the advection-diffusion-reaction equation for the electron density is discretized on an Eulerian grid, and the plasma density is updated in time using either explicit or implicit time integration <cit.>. There are several well-known numerical restrictions for fluid models, such as the existence of a Courant-Friedrichs-Lewy (CFL) condition on the time step Δ t, or restriction by the dielectric relaxation time ϵ_0/σ where σ is the conductivity of the plasma. The latter can be avoided by using semi-implicit formulations <cit.>. Infrequently mentioned is the fact that there is a rather fundamental requirement on the spatial resolution Δ x as well <cit.>, which applies to both explicit and implicit temporal discretizations. One issue that is often faced in simulation codes is that explicit codes at best have time steps Δ t ∝Δ x, which leads to an undesired scaling of computational resources. For example, refining the grid Δ x →Δ x/2 doubles the amount of grid cells per coordinate direction, and requires twice as many time steps. Implicit codes can decouple Δ t from Δ x, but it is not clear how to obtain a scalable implicit discretization in the context of the frequent regridding that is a de-facto requirement for large scale 3D streamer discharge simulations <cit.>. Cognizant of the above issues, we have developed a new model based on a microscopic drift-diffusion model rather than a macroscopic one. This is combined with mesoscopic reaction algorithms for describing the stochastic plasma chemistry, i.e. we replace the conventionally used deterministic chemistry by a Kinetic Monte Carlo (KMC) algorithm. The new model takes particle discreteness, random collisions, and stochastic reactions into account. Fundamentally, the model is a non-kinetic Particle-In-Cell (PIC) model, and it is indeed ironic that this is actually a helpful model since the switch from a fluid to a particle description is usually associated with an increased computational cost. But the new model has no fundamental restriction on Δ x or Δ t, and numerical tests show that it is exceptionally stable in both space and time. Importantly, we are also achieving a decoupling of Δ t from Δ x without requiring an implicit discretization. This renders the new model capable of obtaining numerical solutions not only for single filaments, but also for comparatively large discharge trees. In this paper we use the model to investigate laboratory discharges, but the model itself is applicable to many other types of streamers (e.g. sprites). 
This paper has two main goals: 1) A thorough presentation of the model, with details as to how it can be implemented with robust and scalable computer algorithms. 2) A capability demonstration for self-consistent simulation of discharge trees at the laboratory scale, similar to the ones shown in figure <ref>. The organization of this paper is as follows. Section <ref> presents a computational prelude that focuses on finite-volume discretization issues for fluid models, for both explicit and implicit time discretizations. In section <ref> we formulate the new model and discuss its connection to the conventional fluid model. Section <ref> contains the numerical discretization of the model. In section <ref> we provide some numerical tests of the model, and some concluding remarks are provided in section <ref>. § PRELUDE We first consider an underlying issue facing the stability properties of discretized fluid models in the LFA. Our line of reasoning follows <cit.> who proved that the spatial resolution for fluid models must essentially resolve the avalanche length for the solution to remain bounded in time. Consider a one-dimensional advection-reaction model for the electron density n, for the moment ignoring electron diffusion: ∂_t n = -v∂_x n + α v n, where v > 0 is a constant electron velocity and α > 0 is a constant ionization coefficient. The exact solution is n(x,t) = n(x-vt)exp(α x), where n(x-vt) is some initial function. If n is bounded in time, so is n(x,t). We now consider the numerical discretization of equation (<ref>) on a one-dimensional Cartesian grid with grid points x_i = iΔ x where Δ x is the grid point spacing and i is the grid index. Each grid cell spans the volume [x_i-Δ x/2,x_i+Δ x/2]. A first order finite-volume upwind discretization in space with an implicit Euler discretization in time for equation (<ref>) yields n_i^k+1 = n_i^k/1 + vΔ t/Δ x - α vΔ t + vΔ t/Δ xn_i-1^k+1 ≥n_i^k/1 + ξ(1 - αΔ x), where ξ≡ vΔ t/Δ x ≥ 0 is the Courant number. The solution n_i^k+1 is bounded in time only if αΔ x ≤ 1. Here, the discretization is fully implicit but it is only conditionally stable and non-negative. More generally, the underlying stability issues are related to the advective-reactive coupling. Using a Godunov splitting for equation (<ref>) with explicit fractional Euler steps yields n_i^† = n_i^k(1 - ξ) + ξ n_i-1^k, n_i^k+1 = n_i^† + α vΔ t n_i^† = [n_i^k(1-ξ) + ξ n_i-1^k](1+ξαΔ x) ≥ n_i^k(1-ξ)(1+ξαΔ x). The stability region is now αΔ x ≤ 1/(1-ξ), i.e. the discretization is more stable for larger time steps. Splitting methods expand the stability region because they advect electrons out of the grid cell before they react. Several codes <cit.> use second order slope-limited discretizations, but these discretization are not fundamentally more stable. An analysis is more difficult in this case, but one only needs to observe that slope-limited schemes default to piecewise constant reconstruction if there is a local maximum or a large gradient in the solution, and in this case one again obtains the stability limit αΔ x ≤ 1. The problem dimensionality and presence of diffusion also affects the stability region. However, since equation (<ref>) is a subset of multi-dimensional simulations where v_y=v_z=0 and v_x ≠ 0, the stability limit applies to multi-dimensional simulations as well. The analysis above is quite simplified and ignores the fact that the streamer is a moving structure and thus that any potential instability regions αΔ x > 1 move with the solution. 
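The splitting estimates above are easy to probe numerically. The following minimal Python sketch (an illustration added here, not taken from the paper; the grid size, Courant number and step counts are arbitrary) advances the Godunov-split upwind scheme for a localized pulse with zero inflow, and shows that the density left behind in the grid cells decays when αΔ x lies below the limit 1/(1-ξ) but grows without bound above it.

```python
import numpy as np

def godunov_step(n, xi, alpha_dx):
    # One explicit Godunov-split step of the 1D advection-reaction model:
    # first-order upwind advection followed by an explicit reaction substep.
    # xi = v*dt/dx is the Courant number, alpha_dx = alpha*dx, and alpha*v*dt = xi*alpha_dx.
    upwind = np.concatenate(([0.0], n[:-1]))   # zero inflow at the left boundary
    n_star = (1.0 - xi) * n + xi * upwind      # advection substep
    return n_star * (1.0 + xi * alpha_dx)      # reaction substep

xi = 0.5                                       # splitting bound: alpha*dx <= 1/(1 - xi) = 2
for alpha_dx in (1.5, 2.5):
    n = np.zeros(100)
    n[5] = 1.0                                 # localized pulse that advects out of the domain
    for _ in range(1500):
        n = godunov_step(n, xi, alpha_dx)
    print(f"alpha*dx = {alpha_dx}: max density after 1500 steps = {n.max():.3e}")
```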
In practice, the situation is far less dire and one may still observe that n_i^k remains bounded even for quite significant violations of αΔ x ≤ 1. It is nonetheless clear that caution is needed for fluid simulations since numerical underresolution is fundamentally capable of enabling unbounded growth in the plasma density and corresponding non-physical diverging growth of the electric field E, i.e. E→∞ as n→∞. Such instabilities have been scrutinized in recent years, and they are particularly relevant in the cathode sheath <cit.> and in the context of so-called stagnant positive streamers <cit.>. Although the LFA is often identified as the culprit <cit.>, the underlying issue is also present in the transport equation itself. Furthermore, the ionization coefficient α increases with E, and in practice the requirement on Δ x therefore depends implicitly on the numerical solution itself, complicating the selection of a spatial step size. Various resolutions to the spatial stability restriction have been proposed in the literature. <cit.> showed that the mesh-dependent stability criterion αΔ x ≤ 1 can be removed by treating reactions with an upwind method. <cit.> softened it by using a Godunov splitting like equation (<ref>), and used a Corner Transport Upwind (CTU) scheme <cit.> for maintaining an overall larger time step for multi-dimensional simulations. Model corrections to the ionization term have also been considered. <cit.> change the characteristic length scale of the ionization term by applying a smoothing operator that changes α without net loss in the number of reactions. <cit.> and <cit.> have considered alterations to α based on electron energy considerations. For explicit codes, the requirement on Δ x can be quite penalizing for the time steps that can be used. On structured Cartesian 3D grids Δ x = Δ y = Δ z, a fully explicit discretization using a first order upwind method and centered finite differencing of the diffusion operator yields the time step restriction Δ t ≤((|v_x| + |v_y| + |v_z|)/Δ x + 6D/Δ x^2)^-1, where we now also include the electron diffusion coefficient D. Figure <ref> shows how this time step varies with Δ x for different selections of the field strength E. <cit.> found that E∼25 for streamer discharges in air, and with a spatial resolution of Δ x ≲2 the time step is approximately 1. A reasonable simulation time for streamer discharges in atmospheric air is around 100, requiring roughly 100000 time steps, at which point even fluid simulations become numerically expensive. Implicit methods are subject to the same requirement on Δ x as explicit methods, but they remain attractive since they do not impose fundamental limitations on Δ t. Unfortunately, 3D simulations often use hundreds of millions of grid points <cit.> and billions of degrees of freedom. Full Newton methods <cit.> are not very practical at this scale since the full Jacobian must be factored at every time step. Jacobian-Free Newton-Krylov (JFNK) is a more attractive computational strategy, but it is not clear if JFNK methods remain computationally feasible at this scale, particularly when adaptive mesh refinement (AMR) is required. Next, consider the evolution of the microscopic version of equation (<ref>): dX/dt = v, dW/dt = α v W, where X is a one-dimensional electron position and W is the (average) number of electrons sharing this position. For demonstration purposes, we are using a deterministic reaction rate equation for W.
This is less meaningful when dealing with a particle method but we improve on this aspect later in the paper. Consider a single starting electron, X(0) = 0, W(0) = 1, in which case the solutions to equation (<ref>) using the explicit Euler rule until time t = kΔ t are X^k = v t, W^k = (1 + α v Δ t)^k. The number of particles per unit length for a grid cell i is then n_i^k= 1/Δ x(1 + α v Δ t)^k if |vt - x_i| ≤Δ x/2, 0 otherwise. This is to be contrasted with the exact solution to equation (<ref>) with the initial condition n(x -vt) = δ(x-vt), n_i(t) = 1/Δ x_i∫_x_i -Δ x/2^x_i + Δ x/2δ(x-vt)exp(α x) dx = 1/Δ xexp(α v t) if |vt - x_i| ≤Δ x/2, 0 otherwise. The two solutions (equations (<ref>) and (<ref>)) differ only due to the way we approach the numerical integration of equation (<ref>). Notably, numerical discretizations that start from equation (<ref>) do not require αΔ x ≤ 1, and we can identify why: Equation (<ref>) is numerically diffusive and the electron density only asymptotically tends to zero as t→∞, and thus there is always some fraction of n that will react in the grid cell <cit.>. On the other hand, there is no numerical diffusion involved in equation (<ref>) and the discretization is also stable for any time step, i.e. it does not have a CFL condition. These are the two basic properties that we exploit in the new PIC model. § THE NEW MODEL §.§ Particle transport Rather than using a macroscopic drift-diffusion model for the electrons, which is subject to fairly strict requirements on Δ x and Δ t, we consider a microscopic model based on Îto diffusion dX_p = V_p dt + √(2D_p) dW^p_t, where X_p is the position of a particle p, V_p is the drift velocity of the particle and √(2D_p) is the diffusion coefficient of the particle. Here, dW_t^p is a Wiener process increment over a time dt. It can be represented as dW_t^p = √(dt)𝒩_p where 𝒩_p is a normal distribution with a mean of 0 and variance of 1 in d-dimensional physical space. The noise is uncorrelated in time and space, and independent of noise acting on other particles. The representation of the particle diffusion coefficient as √(2D_p) is due to a convenient normalization when coarse-graining the model onto a continuum representation where the macroscopic diffusion coefficient D appears instead. Averaging equation (<ref>) over many identical particles, i.e. V_p = v, D_p= D, yields ⟨X_p(t+Δ t) - X_p(t)⟩ = vΔ t, ⟨[X_p(t+Δ t) - X_p(t) - vΔ t]^2⟩ = 2DdΔ t, where ⟨…⟩ indicates the expectation value. Thus, by taking V and D to be the macroscopic electron drift velocity and diffusion coefficients, the model recovers macroscopic drift-diffusion statistics. In this paper we adopt the LFA and take v and D to be functions of E. The velocity and diffusion coefficients are found by interpolation of the macroscopic quantities v and D to the particle positions, i.e. V_p = v(X_p), D_p = D(X_p). Extensions to the local mean energy approximation where the coefficients are given as functions of the average electron energy are not examined in this paper. In a formal derivation <cit.> showed that the evolution of the global density n(x,t) = ∑_pδ[x -X_p(t)] yields the advection-diffusion equation of fluctuating hydrodynamics: ∂ n/∂ t = ∇·(-vn+ D∇ n + √(2Dn)Z), where Z(x,t) is a Gaussian random field without space-time correlations, ⟨Z(x,t)Z(x^',t^')⟩ = δ(x-x^')δ(t-t^'). The term √(2Dn)Z is a stochastic flux that accounts for fluctuations from Brownian motion. This term is usually ignored in studies of non-equilibrium gas discharges.
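A minimal sketch of how the particle transport step can be integrated in practice is shown below (an added illustration, not from the paper; the drift velocity and diffusion coefficient are constant placeholder values rather than field-dependent data interpolated to the particle positions). Averaged over many particles, the sampled displacements reproduce the drift and diffusion statistics quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def advance_particles(X, V, D, dt):
    # Discrete Ito step X <- X + V*dt + sqrt(2*D*dt)*N(0,1), applied per coordinate.
    # X: (N, d) positions, V: (N, d) drift velocities, D: (N,) diffusion coefficients.
    noise = rng.standard_normal(X.shape)
    return X + V * dt + np.sqrt(2.0 * D * dt)[:, None] * noise

# 10^5 identical particles with placeholder transport coefficients: after many steps the
# sample mean displacement approaches v*t and the variance per coordinate approaches 2*D*t.
N, d, dt, steps = 100_000, 3, 1.0e-12, 100
v = np.array([0.0, 0.0, -1.0e5])               # placeholder drift velocity
D = 0.1                                        # placeholder diffusion coefficient
X = np.zeros((N, d))
V = np.tile(v, (N, 1))
Dp = np.full(N, D)
for _ in range(steps):
    X = advance_particles(X, V, Dp, dt)
t = steps * dt
print(X.mean(axis=0), v * t)                   # drift part
print(X.var(axis=0), 2.0 * D * t)              # diffusive spread
```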
In the macroscopic limit of vanishing fluctuations, equation (<ref>) yields the deterministic advection-diffusion equation which is the conventional starting point for fluid models. The recovery of equation (<ref>) from equation (<ref>) in the macroscopic limit is hardly surprising since equation (<ref>) is a microscopic drift-diffusion model. §.§ Kinetic Monte Carlo For the plasma chemistry we compute reactions locally within each grid cell, using KMC. Suppose that we are provided with a set of reactions that evolve a system of M different chemical species S_i, i∈{1,2,…,M}. The number of particles for each species S_i is given by a state vector X⃗(t) = (X_1(t), X_2(t), …, X_M(t))^⊺, where X_i(t) is the number of particles of type S_i in some computational volume Δ V at time t. Reactions are represented stoichiometrically, e.g. S_A + S_B + …→ S_C + S_D + …, where k is the reaction rate. The set of such reactions is called the reaction network R⃗. Let ν⃗_r denote the state change in X⃗ caused by a single firing of a reaction of type r, i.e. X⃗⇒X⃗ + ν⃗_r. For example, if X⃗ = (X_1, X_2)^⊺ and the reaction network consists of a single reaction S_1 → S_2 then ν⃗_1 = (-1,1)^⊺. Propensity functions a_r(X⃗(t),t) are defined as the probability that exactly one reaction of type r occurs in the infinitesimal interval [t,t+dt]. In other words, a_r can loosely be interpreted as the number of reactions of type r per unit time in a computational volume. The rates k that occur in reactions like equation (<ref>) are not equivalent to the conventional reaction rates that are used in the deterministic reaction rate equation (RRE). For a unipolar reaction of the type S_1 →∅ the propensity function is a_r = kX_1 and in this case k is numerically equal to the rate that occurs in the RRE (see equation (<ref>)). However, for bipolar reactions of the type S_1 + S_1 →∅ the propensity is kX_1(X_1-1)/2 since there are X_1(X_1-1)/2 unique pairs of particles of type S_1. We use the KMC algorithm as proposed by <cit.>. This algorithm advances X⃗ over a time step Δ t using a sequence of adaptive smaller steps Δτ where the reaction network is advanced using either the SSA <cit.>, tau-leaping, or a combination of these. This algorithm is discussed further in section <ref>, but we first provide some context to the SSA/KMC and tau-leaping algorithms. §.§.§ Stochastic simulation algorithm (SSA) and tau-leaping The SSA (or Gillespie algorithm <cit.>) is a next-reaction model which advances X⃗(t) one reaction at a time. Given a total propensity A(t) = ∑_r a_r(X⃗(t),t), the time until the next reaction is randomly determined from Δ T_next = 1/A(t)ln(1/u_1), where u_1∈[0,1] is a random number sampled from a uniform distribution. The reaction type is further determined with j = smallest integer satisfying ∑_r=0^j-1 a_r(X⃗(t),t)> u_2 A(t), where u_2∈[0,1] is another uniformly sampled random number. The system is then advanced as X⃗(t+Δ T_next) = X⃗(t) + ν⃗_j. The SSA resolves one reaction at a time, and the algorithm becomes increasingly inefficient as the number of reactions per unit time grows. In its isolated form, the algorithm is not very useful for discharge simulations. The tau-leaping method advances the entire reaction network in a single step over time Δ t using Poisson sampling: X⃗(t+Δ t) = X⃗(t) + ∑_rν⃗_r 𝒫[a_r(X⃗(t),t)Δ t], where 𝒫(μ) is a random number sampled from a Poisson distribution with mean μ.
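The two building blocks just described can be sketched compactly for a generic reaction network specified by a stoichiometry matrix and a propensity function (an added schematic illustration, not the implementation used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def ssa_step(X, nu, propensities):
    # One SSA (Gillespie) step: sample the waiting time and the reaction index from the
    # propensities, then apply the corresponding state change nu_r.
    # X: state vector, nu: (R, M) stoichiometry matrix, propensities(X) -> length-R array.
    a = propensities(X)
    A = a.sum()
    if A == 0.0:
        return X, np.inf                       # no reaction can fire
    dt_next = np.log(1.0 / rng.random()) / A
    r = np.searchsorted(np.cumsum(a), rng.random() * A)
    return X + nu[r], dt_next

def tau_leap_step(X, nu, propensities, dt):
    # One tau-leaping step: fire a Poisson-distributed number of each reaction with mean
    # a_r*dt. This can yield negative populations and must therefore be combined with
    # step-size control and rejection sampling, as discussed below.
    a = propensities(X)
    firings = rng.poisson(a * dt)
    return X + firings @ nu
```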
If the propensity functions a_r do not change significantly on the time interval [t,t+Δ t] then reaction events are statistically independent, which is the condition for the validity of the tau-leaping scheme. A tau-leaping method has been considered by <cit.> in the context of streamer discharges (although the authors do not use the tau-leaping terminology). Unlike the SSA, equation (<ref>) does not guarantee a physically valid state since Poisson sampling of reactions that consume reactants can yield negative population numbers, and thus needs to be combined with time step selection and rejection sampling <cit.>. §.§.§ Connection to the reaction rate equation Tau-leaping is related to the RRE as follows: If a sufficiently large number of reactions occur within Δ t, i.e. a_r(X⃗(t),t)Δ t ≫ 1, then we can approximate the Poisson process by a Gaussian process. Furthermore, if reactive fluctuations are negligible, i.e. √(a_r(X⃗(t),t)Δ t)≪ a_r(X⃗(t),t)Δ t then we can replace the Gaussian process by its mean value. It can then be shown <cit.> that equation (<ref>) yields dX⃗/dt = ∑_rν⃗_ra_r(X⃗(t),t), which we recognize as the deterministic reaction rate equation for the particle density n⃗(t) = X⃗(t)/Δ V. Equation (<ref>) now allows us to identify the usual rate constants from the propensities. For example, for the bimolecular reaction S_i + S_j →∅ the reaction rate constant is 2k/Δ V for i=j and k/Δ V for i≠ j. §.§.§ Reaction algorithm outline Complete details regarding the reactive algorithm that we use are found in <cit.>. We are interested in advancing X⃗(t) from time t to time t+Δ t for a set of reactions R⃗. Letting τ be the simulated time within Δ t, this proceeds as follows: * Partition R⃗ into critical and non-critical reaction sets R⃗_c and R⃗_nc. The critical reactions are defined as the set of reactions that are within N_c firings of consuming their reactants. We take N_c = 5 in this paper. * Compute all propensities, the total propensity A and the critical propensity A_c: A = ∑_r∈R⃗ a_r, τ, A_c = ∑_r∈R⃗_c a_r, τ, where a_r,τ≡ a_r(X⃗(t+τ),t+τ). * Compute the time Δτ_c until the next critical reaction: Δτ_c = 1/A_cln(1/u_1), where u_1∈[0,1] is a uniformly distributed random number. * Compute a permitted time step Δτ_nc such that non-critical reaction propensities do not change by a relative factor greater than ϵ: Δτ_nc = min_i∈ I_rs(max(ϵ X_i/g_i, 1)/|ξ_i|, max(ϵ X_i/g_i, 1)^2/σ^2_i), where I_rs is the set of reactant species in R⃗_nc and ξ_i = ∑_r∈R⃗_ncν_ria_r, τ, σ_i^2 = ∑_r∈R⃗_ncν_ri^2a_r, τ. Here, ν_ri is the state change of X_i due to one firing of reaction r. The factor g_i depends on the highest order of reaction where the reactant i appears <cit.>. In this paper we consider only first order reactions and then g_i=1. * To halt integration at time Δ t, select a reactive substep Δτ within Δ t from Δτ = min[Δ t - τ, min(Δτ_c, Δτ_nc)]. * Resolve reactions as follows: * If Δτ_c < Δτ_nc and Δτ_c < Δ t - τ: One critical reaction fires. Determine the critical reaction r_c from r_c = smallest integer satisfying ∑_r=0^r_c-1 a_r> u_2 A_c, where u_2∈[0,1] is sampled from a uniform distribution. The sum only runs over the critical reactions. Advance the state X⃗ with the results from the SSA and tau-leaping reaction firings: X⃗ →X⃗ + ν⃗_r_c + ∑_r∈R⃗_nc𝒫(a_r, τΔτ). * Otherwise: No critical reactions fire. Advance X⃗ over Δτ with the non-critical reactions only: X⃗→X⃗ + ∑_r∈R⃗_nc𝒫(a_r, τΔτ).
An exception is made if AΔτ is smaller than some factor (we take AΔτ≤ 1) since tau-leaping is inefficient in this limit. In this case we switch to SSA stepping using the whole reaction network, taking up to N_SSA=10 steps for a total integration time Δτ^'. Obviously, we restrict this integration to Δτ^'≤Δτ. * Check if X⃗ is a valid state: * If any particle numbers in X⃗ are negative, reject the update. Let Δτ_nc→Δτ_nc/2 and return to step 5. * Otherwise, increment τ by Δτ, or by Δτ^' if triggering use of the SSA in step 6(b). * If τ < Δ t, return to step 1. The above algorithm is a well-tested procedure which uses the SSA and tau-leaping algorithms in their respective limits. The factor ϵ determines the maximum permitted relative change in the propensities during one tau-leaping step, and therefore adjusts the accuracy and number of reactive substeps that the algorithm will take. §.§.§ Comparing reaction algorithms To highlight the potential importance of reactive fluctuations, we compare the stochastic reaction algorithm with the RRE for a zero-dimensional test case that illustrates basic electron-neutral interactions in atmospheric pressure air. For simplicity we only consider electron impact ionization and attachment, i.e. e + ∅ → e + e + ∅ (rate k_α) and e + ∅ → ∅ (rate k_η), where the ionization and attachment rates are computed using BOLSIG+ (see section <ref> for further details). The breakdown field is E_b≈3, and the exact solution to the RRE (equation (<ref>)) is X_e(t) = X_e(0)exp[(k_α - k_η)t], where X_e(0) is the number of starting electrons. Figure <ref>a) shows the results under breakdown conditions with an electric field E=10 and a single starting electron. We have advanced for 50, which from the RRE yields 4-5 electrons. This value is compared with the predictions of eight independent runs using the stochastic algorithm. The stochastic algorithm shows considerable variation in the final number of electrons, including one case where the initial electron attached. Figure <ref>b) shows a similar case for sub-breakdown conditions with an electric field E = 1.7 and five initial electrons. In this case attachment processes dominate the evolution. We find that the hybrid algorithm eventually leads to attachment of all five initial electrons, while the RRE yields a solution which only asymptotically decays to zero, i.e. it contains fractional electrons. This latter point is particularly pertinent to positive streamers in highly attaching gases (e.g., sulphur hexafluoride). In a fluid model the seed electrons ahead of the streamer never completely attach and form negative ions. Rather, the asymptotic decay of the electron density means that a computational fraction of the electron is always available for further seeding the streamer, artificially reducing fluctuations at the streamer tip. §.§ Model remarks The Îto-KMC model presented above is a microscopic drift-diffusion model with stochastic chemistry, and in the above we have shown that it recovers the standard drift-diffusion-reaction fluid model in the coarse-grained deterministic limit. In principle, one can think of the Îto-KMC model as a non-kinetic PIC method that samples the macroscopic evolution using computational particles that represent average electrons. Our model rectifies some shortcomings of macroscopic drift-diffusion models, in particular those pertaining to numerical stability and efficiency, and it also maintains a particle description in regions where the plasma is rarefied.
The Îto-KMC model is qualitatively similar to the model by <cit.>, which is essentially a reaction-diffusion master equation (RDME) model supplemented with electron drift. The RDME model evolves the total number of particles in a cell using stochastic sampling of transfer rates. However, the Îto-KMC model has a few important distinctions. Firstly, we expand beyond pure tau-leaping for the plasma chemistry, with the primary benefits being guaranteed non-negativeness and adjustable accuracy. The same algorithm could be used in the <cit.> model. Secondly, the RDME model <cit.> does not generalize very well to the strong drift regime of streamers where negative transfer probabilities between grid cells can appear <cit.>. The Îto-KMC model resolves these issues, but the cost is the adoption of a microscopic model rather than a mesoscopic one. We have not included any energy description for the electrons or ions. In fact, it is generally not clear if the model can be extended to include energy transport in such a way that one also recovers the fluid electron energy transport equation when coarse-graining the model. However, an excellent alternative is to combine Îto-KMC with a kinetic electron description. The Îto-KMC method is already a particle model and so the inclusion of kinetic electrons is possible, and is most likely algorithmically simpler than existing hybrid models based on fluid-particle couplings <cit.>. § COMPUTER IMPLEMENTATION In this section we present our implementation of the new PIC model on cut-cell Cartesian AMR grids. The equations of motion are the Îto-KMC-Poisson system dX_p = V_p dt + √(2D_p dt)𝒩_p, X⃗(t) → X⃗(t+dt), ∇^2Φ = -ρ/ϵ_0, where the second relation indicates the KMC advance of the reaction network and equation (<ref>) is the Poisson equation for the electric field E = -∇Φ, where Φ is the electrostatic potential and ρ is the space charge density. This model has been implemented into the chombo-discharge[<https://github.com/chombo-discharge/chombo-discharge>] code, which we have used for streamer simulations in the past <cit.>. §.§ Spatial discretization We discretize the equations over a Cartesian grid with patch-based AMR, see figure <ref>. With AMR, the equations of motion are solved over a hierarchy of grid levels l∈{0,1,…,l_max}. When refining a grid level, the resolution increases by a factor two, i.e. Δ x_l+1 = Δ x_l/2. Each grid level consists of a union of properly nested rectangular grid boxes. That is, the valid region of levels l-1 and l+1 are separated by at least one grid cell on level l, and boxes are disjoint (non-overlapping) on each level. As in previous publications relating to non-equilibrium gas discharges <cit.> we use the Chombo <cit.> library for handling the AMR infrastructure. Algorithmic details that are not specific to the Îto-KMC model are found in references <cit.>. §.§ Charge deposition & interpolation Deposition of particles and interpolation to the particle positions are done using a cloud-in-cell (CIC) scheme with Cartesian AMR. The mesh densities n are given on cell centers n(x_i) = n_i, where i is a multi-dimensional index and x_i = iΔ x. The mesh density is given by n_i = ∑_p∈R(i)(w_p/Δ V_i)𝒲_cic((x_i - X_p)/Δ x), where p∈R(i) indicates particles whose clouds extend into cell i, w_p is the particle weight, Δ V_i is the volume of the grid cell, and 𝒲_cic(x) = ∏_s=1^d W_cic(x_s), W_cic(x) = 1 - |x|, |x|<1, 0, otherwise.
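As an illustration of CIC deposition on a single uniform grid level, the following C++ sketch deposits particle weights onto cell-centered densities in one dimension; the AMR refinement-boundary corrections described next, as well as cut-cell volume fractions, are omitted, and the container layout is an assumption made for the example.

    #include <cmath>
    #include <vector>

    // Deposit particle weights onto cell-centered densities n_i using the
    // cloud-in-cell kernel W_cic(x) = 1 - |x| for |x| < 1, on a 1D uniform
    // grid with cell centers x_i = i * dx.
    void depositCIC1D(const std::vector<double>& xp,  // particle positions
                      const std::vector<double>& wp,  // particle weights
                      std::vector<double>& n,         // cell-centered density
                      double dx) {
      const double dV = dx;                           // 1D cell "volume"
      for (std::size_t p = 0; p < xp.size(); ++p) {
        const double s = xp[p] / dx;                  // position in cell units
        const int    i = static_cast<int>(std::floor(s));
        const double f = s - i;                       // fractional offset in cell i
        if (i >= 0 && i < static_cast<int>(n.size()))
          n[i]     += wp[p] * (1.0 - f) / dV;         // kernel value at x_i
        if (i + 1 >= 0 && i + 1 < static_cast<int>(n.size()))
          n[i + 1] += wp[p] * f / dV;                 // kernel value at x_{i+1}
      }
    }

Interpolation of the electric field to the particle positions uses the same kernel, with the roles of gather and scatter reversed.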
Deposition of particles near refinement boundaries is sketched in figure <ref> and handled as follows: If the particle cloud for a particle at level l+1 hangs over the refinement boundary into the coarse level l, the deposition weight is added to the corresponding level l cells and then normalized by the appropriate volume fraction. Particles that live on a coarse grid level l but whose clouds hang into level l+1 have particle widths Δ x_l on level l, and for factor two refinements they may extend into the first strip of cells on level l+1 (see figure <ref>). In order to ensure that this weight ends up in the correct cells on level l+1, these particles are also deposited on level l+1, but using the original particle width Δ x_l. These particles thus have a width of 2Δ x_l+1 and in 3D they can deposit into at most 9 grid cells on level l+1. Their deposition function on level l+1 is W_cic^'(x) = 1/2, |x| ≤1/2, 1/2(3/2 - |x|), 1/2 < |x| ≤3/2, 0, otherwise. §.§ Semi-implicit Euler-Maruyama method The standard Euler-Maruyama method for equation (<ref>) is X_p^k+1 = X_p^k + Δ tV_p^k + √(2D_p^kΔ t)𝒩_p. Coupled to equation (<ref>), the discretization must be restricted by the dielectric relaxation time in order to be stable, i.e. Δ t ≤ ϵ_0/σ. We remove this limitation by using a semi-implicit formulation as follows: X_p^k+1 = X_p^k + Δ tV^k+1_p + √(2D_p^kΔ t)𝒩_p, where V_p^k+1 = sgn(Z_s)μ_p^kE^k+1(X_p^k), Z_s is the charge number for species s, and sgn is the sign operator. I.e., we have sgn(Z_s) = -1 for electrons and sgn(Z_s) = 1 for positive ions. We achieve this coupling by first solving the Poisson equation ∇·E^k+1 = ρ^k+1/ϵ_0, which to first order in Δ t can be written ∇·E^k+1 = 1/ϵ_0ρ^† - Δ t/ϵ_0∇·J_adv.^k, where J_adv.^k is the advective current density and ρ^† is the space charge density computed from the update X_p^† = X_p^k + √(2D_p^kΔ t)𝒩_p. For a species s the advective current density at a grid point x_i is J_i, adv.^s,k = q_e|Z_s|∑_pμ_s^kE^k+1(X_p^k)𝒲(x_i-X_p^k). Expanding E^k+1(X_p^k) as a polynomial around the grid point x_i yields E^k+1(X_p^k) ≈E^k+1_i - (x_i - X_p^k)·∇E^k+1_i + 𝒪((x_i - X^k_p)^2), where E_i = E(x_i). Equation (<ref>) then yields J_i, adv.^s,k≈[q_e|Z_s|∑_pμ_p^k𝒲(x_i-X_p^k)] E^k+1_i -[q_e|Z_s|∑_pμ_p^k𝒲(x_i-X_p^k)(x_i - X_p^k)]·∇E^k+1_i, where the first term is the conventional Ohmic contribution that we recognize from semi-implicit formulations for fluid models <cit.>. The support of 𝒲 is Δ x, and so the second term scales as Δ x as well. When used together with equation (<ref>) this term scales as 𝒪(Δ xΔ t) in the semi-implicit Poisson equation, and it also has a small error constant: the moments 𝒲(x_i-X_p^k)(x_i-X_p^k) are anti-symmetric in X_p^k with respect to the grid cell center x_i, so when particles distribute uniformly over a grid cell the summation yields the zero vector. This is, for example, the case in the discharge channels where there are many electrons per grid cell, whereas outside of the channel the current is negligibly small. Summing over all species s to leading order yields J^k_adv. = (q_e∑_s|Z_s|μ_s^kn_s^k)E^k+1, where μ_s^k and n_s^k are mesh variables for species s (we have suppressed the index i). Thus, ignoring higher-order moments, equation (<ref>) can be written in the familiar form <cit.> ∇·[(1 + σ^kΔ t/ϵ_0)E^k+1] = 1/ϵ_0ρ^†, where σ^k = q_e∑_s|Z_s|μ_s^kn_s^k is the conductivity of the plasma. Equation (<ref>) is discretized with finite volumes, using a standard 9-point stencil in the interior points (and flux matching at the coarse-fine interface).
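Before turning to the embedded-boundary treatment and the linear solver, the structure of the semi-implicit particle update can be summarized with the following C++ sketch. It is schematic: the mesh operations are represented by placeholder stubs (not chombo-discharge functions), and a single species with scalar transport coefficients is assumed.

    #include <cmath>
    #include <random>
    #include <vector>

    struct Particle { double x[3]; double w; };

    // Placeholders for the mesh kernels described in the text.
    static void depositConductivityAndCharge(const std::vector<Particle>&) {}
    static void solveSemiImplicitPoisson(double /*dt*/) {}
    static void interpolateE(const double /*x*/[3], double E[3]) { E[0] = E[1] = E[2] = 0.0; }

    void semiImplicitAdvance(std::vector<Particle>& particles, double mu,
                             double D, double dt, int sgnZ, std::mt19937& rng) {
      std::normal_distribution<double> N(0.0, 1.0);
      // 1) Diffusion hop: X^dagger = X^k + sqrt(2 D dt) * N(0,1)
      for (auto& p : particles)
        for (int d = 0; d < 3; ++d) p.x[d] += std::sqrt(2.0 * D * dt) * N(rng);
      // 2) Deposit sigma^k and rho^dagger, then solve
      //    div[(1 + sigma^k dt/eps0) E^{k+1}] = rho^dagger/eps0.
      depositConductivityAndCharge(particles);
      solveSemiImplicitPoisson(dt);
      // 3) Drift with the updated field: X^{k+1} = X^dagger + sgn(Z) mu E^{k+1} dt
      for (auto& p : particles) {
        double E[3];
        interpolateE(p.x, E);   // field interpolated to the particle (schematically)
        for (int d = 0; d < 3; ++d) p.x[d] += sgnZ * mu * E[d] * dt;
      }
    }

The important point is that the field entering the drift term is E^k+1, obtained after the diffusion hop, which is what removes the dielectric relaxation time restriction.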
The embedded boundary fluxes are also constructed to second order, using additional interior points when evaluating the normal derivative on the cut-cell boundary centroids. The corresponding linear system is solved using geometric multigrid with V-cycling, using red-black Gauss-Seidel relaxation as the smoother on each grid level and a biconjugate gradient stabilized method (BiCGSTAB) as a bottom solver. A relative exit tolerance of 10^-10 is used as a convergence criterion for multigrid. Further details regarding the finite volume discretization of the variable-coefficient Poisson equation and its embedding into geometric multigrid in the presence of embedded boundaries and Cartesian AMR are given in e.g. <cit.>. After obtaining the electric field E^k+1 we compute V_p^k+1 from equation (<ref>) and complete the particle update (equation (<ref>)). §.§ KMC-particle coupling The reaction algorithm solves for the total number of particles in a grid cell and leaves substantial freedom in how one assigns the chemistry products into new computational particles. Since we are concerned with methods that potentially use very large time steps, the creation of computational particles with physical weights w=1 is not possible due to the large number of physical particles generated in a time step. In this paper, if the reaction step led to net creation of particles we instead create at most N^new=64 new computational particles in the cell. If the KMC solver gave N > N^new physical particles in the cell, we construct N^new computational particles with weights w = N÷ N^new, where ÷ denotes integer division. The remainder N mod N^new is assigned to one of the new particles. These particles are later merged with the computational particles that already exist in the cell. If the KMC algorithm led to net loss of particles, we remove the weight directly from the existing computational particles. Production of particles in cut-cells only takes place in the valid region of the cell. Particle positions X_p are drawn from a uniform distribution in each coordinate direction with the requirement (X_p-x_c)·n̂_c ≥ 0, where x_c is the cut-cell boundary centroid and n̂_c is the cut-cell boundary normal. Since cut-cell volume fractions can be arbitrarily small, we optimize this step by only drawing the position X_p inside the minimum bounding box that encloses the valid region of the cut-cell. §.§ Photon generation and transport Photons are also treated with a particle method, and for simplicity we consider instantaneous transport so that we do not have to track the photons in time. As with the particles, the KMC algorithm provides the number of physical photons that is generated in the reaction step, which we limit to N^new=64 computational photons. Photon weights and emission positions are assigned in the same way as we do for the particles. Photon absorption positions are determined individually for each (super-)photon. For example, assume that a photon has some frequency f and is emitted from an initial position Y_f^0. This photon is absorbed at position Y_f = Y_f^0 + r_f ĉ, where r_f is a random number drawn from an exponential distribution with parameter κ(f), and ĉ is a uniformly distributed random point on the unit sphere. Here, κ(f) is the mean absorption coefficient in the gas for a photon with frequency f, i.e. 1/κ(f) is the mean absorption length. Since spectral absorption lines also have a finite width, the mean absorption coefficient κ(f) is frequency dependent.
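Given a sampled absorption coefficient (the frequency sampling is described next), drawing the absorption position of a discrete photon is straightforward. The following C++ sketch illustrates the step; it is an example rather than the actual implementation.

    #include <cmath>
    #include <random>

    // Absorb a photon at Y = Y0 + r * c_hat, where r is exponentially
    // distributed with parameter kappa and c_hat is an isotropic unit vector.
    void absorbPhoton(const double Y0[3], double kappa, double Yout[3],
                      std::mt19937& rng) {
      std::uniform_real_distribution<double> U(0.0, 1.0);
      const double pi   = 3.141592653589793;
      const double r    = -std::log(1.0 - U(rng)) / kappa;  // exponential distance
      const double cosT = 2.0 * U(rng) - 1.0;               // isotropic direction
      const double sinT = std::sqrt(1.0 - cosT * cosT);
      const double phi  = 2.0 * pi * U(rng);
      const double c[3] = {sinT * std::cos(phi), sinT * std::sin(phi), cosT};
      for (int d = 0; d < 3; ++d) Yout[d] = Y0[d] + r * c[d];
    }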
When we sample the photon generation, we begin by sampling the spectral line by stochastically determining f according to some distribution. We then use a known expression for κ(f) for stochastically determining r_f for each photon. Depending on the photoionization model that is used, one can sample multiple spectral lines <cit.> in combination <cit.> or individually <cit.>. Photoemission is disregarded in this paper, so if a photon trajectory intersects an internal boundary (e.g. an electrode) or a domain boundary, it is removed from the simulation. §.§ Superparticle management In order to maintain a manageable number of particles, only computational particles that represent many physical particles are tracked. Our particle merging and splitting strategy uses a bounding volume hierarchy with k-d trees for locating spatial clusters of particles, and particles are merged/split within each cluster. We use a standard tree structure which uses top-down construction, i.e. it is hierarchically built from the root node and downwards. However, the algorithm that we use for splitting a leaf node is new and it is therefore discussed in detail. Initially, a leaf node L contains a list P⃗_L of M particles that are each identified by a tuple ⟨X_p, w_p⟩, where w_p is the particle weight. Then, P⃗_L is sorted based on one of the axis coordinates and split into two bounding volumes such that the total weight of the two halves differs by at most one physical particle. This process is shown in figure <ref> and proceeds as follows: * Pick a splitting direction. We choose the coordinate direction where the minimum bounding box enclosing the particles has the largest extent. * Sort the particle list P⃗_L from smallest to largest coordinate in the splitting direction. * Locate the median particle with index p^' in the list, where p^' > 1 is the smallest index satisfying ∑_p=1^p^'-1 w_p + w_p^' > ∑_p=p^'+1^M w_p. The index p^' indicates the position of the particle on the splitting plane, i.e. all particles p < p^' are found on the left-hand side of the splitting plane and all particles p > p^' are found on the right-hand side. * Transfer particles p∈[1, p^'-1] to a new list P⃗_l in the left leaf node, and particles p∈[p^'+1,M] to another list P⃗_r in the right leaf node. * Assign the median particle p^': * If the median particle is a physical particle, the particle is assigned to whichever child list (P⃗_l or P⃗_r) has the lowest total weight. * If the median particle is a superparticle, i.e. w_p^'≥ 2, it is split into two new particles with the same position X_p^' but with new weights. Due to the median selection in equation (<ref>), these weights can be constructed such that the weights of P⃗_l and P⃗_r differ by at most one physical particle. Because we merge particles by groups rather than in pairs <cit.>, the process above only proceeds until we have N_ppc leaves in the tree (the target number of computational particles). Choosing the final number of computational particles to be a power of two gives a balanced tree where all the leaves exist on the same tree level, and in this case the number of physical particles between any two arbitrary leaves in the tree differs by at most one. At the end of the tree-building algorithm each leaf node represents a bounding volume with a list P⃗ of computational particles. The particles in this list become a new superparticle with weight w = ∑_p∈P⃗ w_p and position X = (1/w)∑_p∈P⃗ w_pX_p. Particle merging is done on a cell-by-cell basis in order to prevent creation of particles that lie inside the embedded boundary. A sketch of the weight-balanced splitting step is given below.
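The following C++ sketch shows the weight-balancing split of a single leaf node. It is illustrative only: weights are kept as floating-point numbers, the superparticle bookkeeping is simplified, and the container types are assumptions made for the example.

    #include <algorithm>
    #include <vector>

    struct Particle { double x[3]; double w; };

    // Split a leaf's particle list along direction 'dir' at the median
    // particle p' so that the total weights of the two halves differ as
    // little as possible (cf. steps (ii)-(v) above).
    void splitLeaf(std::vector<Particle> P, int dir,
                   std::vector<Particle>& left, std::vector<Particle>& right) {
      std::sort(P.begin(), P.end(), [dir](const Particle& a, const Particle& b) {
        return a.x[dir] < b.x[dir];
      });
      double total = 0.0;
      for (const auto& p : P) total += p.w;
      double cumulative = 0.0;
      std::size_t median = 0;
      for (std::size_t p = 0; p < P.size(); ++p) {  // smallest p' such that the weight
        cumulative += P[p].w;                        // up to and including p' exceeds
        if (cumulative > total - cumulative) { median = p; break; }  // the remainder
      }
      left.assign(P.begin(), P.begin() + median);    // particles before the median
      right.assign(P.begin() + median + 1, P.end()); // particles after the median
      double wl = 0.0, wr = 0.0;
      for (const auto& p : left)  wl += p.w;
      for (const auto& p : right) wr += p.w;
      // Split the median particle's weight so that the two halves balance.
      Particle a = P[median], b = P[median];
      a.w = 0.5 * (P[median].w + wr - wl);
      b.w = P[median].w - a.w;
      if (a.w > 0.0) left.push_back(a);
      if (b.w > 0.0) right.push_back(b);
    }

Merging a leaf's particles into one superparticle then simply amounts to summing the weights and taking the weight-averaged position, as in the expression above.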
The algorithm also handles splitting of superparticles. If a cell contains a single particle with a large weight then step (v) in the above algorithm ensures that this particle is hierarchically split until we have created N_ new particles. Figure <ref> shows an example of merging 1000 initial particles whose positions are uniformly distributed within a cut-cell and whose weights are uniformly distributed on the interval w_p∈[1,100]. These particles are merged into N_ = 64 particles, and as expected the final particle weights differ by at most one. §.§ Final algorithm The final algorithm for integration over a time step Δ t is as follows: * Compute the conductivity σ^k. * Perform the diffusive advance: X_p^†= X_p^k + √(2D_p^kΔt)𝒩_p. * Compute the space charge density ρ^† = ρ(X_p^†) and solve for E^k+1 using equation (<ref>). * Interpolate particle velocities V_p^k+1 = v[E^k+1(X_p^k)]. * Advect particles X_p^k+1 = X_p^† + V_p^k+1Δ t. * Move photons Y = Y_0 + r_κĉ using equation (<ref>). * Advance the reaction network over Δ t, see section <ref> and section <ref>. * Manage superparticles, section <ref>. Conceptually, the above algorithm uses a Godunov splitting between particle transport (steps 1 through 5) and plasma chemistry (step 7). The particle transport step is a first-order accurate semi-implicit discretization, and the plasma chemistry is solved with a stochastic reaction algorithm with adjustable accuracy through the factor ϵ (section <ref>). Setting ϵ=∞ will accept any tau-leaping step, i.e. the chemistry is resolved with a time step Δ t. On the other hand, setting 0 < ϵ≪ 1 yields a highly accurate chemistry algorithm which may potentially take many substeps within Δ t. But if high order chemistry is used together with a large splitting step Δ t, the overall stability of the algorithm can deteriorate. The reason for this is that when transport and field updates are performed rarely but the chemistry integration is highly accurate, the number of free electrons that are generated in a grid cell is overestimated. This issue is not unique for -KMC but also occurs for deterministic fluid models when using operator splitting methods with large splitting steps. Since we use fixed time steps in this paper, we therefore resolve the transport and chemistry with the same time step, i.e. we use ϵ = ∞, and rely on the SSA steps primarily to avoid negative particle numbers. In the future, we will be extending our methodology to dynamic time stepping (either CFL or physics based), at which point we will be able to leverage the adjustable accuracy features in the KMC integrator. §.§ Parallelization Our computer implementation is parallelized with flat MPI, using the natural domain decomposition offered by the AMR grids where each MPI rank solves for a subset of the grids on each level. The simulations are performed away from the strong scaling limit, which left room for load balancing our application. The field and particle updates have different computational metrics. A reasonable proxy for the computational load of the discretized Poisson equation is the number of grid cells in a grid patch, while for the particles the load is better estimated by the number of particles that are assigned to the patch. We have load balanced our simulations with dual grids. In this approach we use two sets of AMR grids where the grid levels consist of the same grid patches, but where the assignment of grid subsets among the MPI ranks differ, see figure <ref>. 
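The assignment itself can be illustrated with a simple greedy scheme driven by a per-patch load proxy. The following C++ sketch is illustrative only; it is not how Chombo orders or assigns boxes in practice.

    #include <algorithm>
    #include <numeric>
    #include <vector>

    // Greedy assignment of grid patches to MPI ranks using a per-patch load
    // proxy (cell count for mesh kernels, particle count for particle kernels).
    std::vector<int> assignPatches(const std::vector<double>& patchLoad, int numRanks) {
      std::vector<std::size_t> order(patchLoad.size());
      std::iota(order.begin(), order.end(), 0);
      std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
        return patchLoad[a] > patchLoad[b];          // heaviest patches first
      });
      std::vector<double> rankLoad(numRanks, 0.0);
      std::vector<int> owner(patchLoad.size(), 0);
      for (std::size_t p : order) {
        const int r = static_cast<int>(
            std::min_element(rankLoad.begin(), rankLoad.end()) - rankLoad.begin());
        owner[p] = r;                                // assign to least loaded rank
        rankLoad[r] += patchLoad[p];
      }
      return owner;
    }

Running this twice, once with cell counts and once with particle counts as the load proxy, gives the two patch-to-rank maps used by the dual-grid approach.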
One AMR grid set is load balanced with the grid patch volume as a proxy for the computational load, and is used for grid kernels that scale with the number of grid points, e.g. the discretized Poisson equation or advancing the reaction network. On the other grid, we advance kernels that scale with the number of grid particles, i.e. transport kernels, mesh deposition and interpolation, and superparticle handling. The dual grid approach adds some computational complexity, but these drawbacks were offset by reductions in simulation times of up to 40%. § NUMERICAL TESTS §.§ Simulation conditions We consider a 10 computational domain with a vertical needle-plane gap. A cross section of the computational domain and the boundary conditions is shown schematically in figure <ref>. A 5 long cylindrical electrode with a spherical cap at the end sticks out of the live electrode plane, and the opposite plane is grounded. The electrode diameter is 1mm and the vertical distance between the live electrode and the ground plane is 5. Homogeneous Neumann boundary conditions are used for the Poisson equation on the side faces, and all simulations start from a step voltage of 20. The peak initial electric field magnitude is roughly 11 on the anode tip. All simulations use a coarsest AMR level of 128^3 cells, but use up to another eight levels of refinement, i.e. up to effective domains of 32768^3 cells. We use a three-species model for discharges in air, consisting of electrons, positive ions, and negative ions. The plasma kinetics that we use is summarized in table <ref>. We focus on using a simple and well-known reaction set for our example simulations. Using more elaborate plasma chemistry is possible, but not required for our simulation examples. The electron diffusion coefficient D_e, mobility μ_e, temperature T_e, ionization frequency k_α, and attachment frequency k_η are field-dependent and are computed by using BOLSIG+ <cit.> and the SIGLO database <cit.>. The electron-ion and ion-ion recombination rates are k_ep = β_ep/Δ V, k_np = β_np/Δ V, where Δ V is the grid cell volume and β_ep = 1.138×10^-11T_e^-0.7, β_np = 2×10^-13(300/T)^0.5, where T=300 is the gas temperature and T_e = T_e(E) is the electron temperature. We use the Zheleznyak photoionization model <cit.>, including the corrections by <cit.>, for modeling photon transport for the reaction e + ∅ → e + γ + ∅. The rate constant is k_γ = p_q/(p + p_q)ν_Z(E)k_α, where ν_Z(E) is a lumped function that accounts for excitation efficiencies and photoionization probabilities <cit.>. The quenching pressure is p_q = 40 and the gas pressure is p=1. When a photon is generated within the reaction step we draw a random absorption coefficient as κ_f = K_1(K_2/K_1)^(f-f_1)/(f_2-f_1), where K_1 = 530, K_2 = 3×10^4, f_1 = 2.925, f_2 = 3.059, and f is a random number sampled from a uniform distribution on the interval [f_1, f_2]. The propagation distance of each photon is then determined by drawing a random number from an exponential distribution with parameter κ_f. All simulations start by drawing 10^4 initial electron-ion pairs uniformly distributed in a sphere with a 500 radius centered at the needle tip. Electron-ion pairs whose positions end up inside the electrode are removed before the simulation starts. In the computer simulations we refine cells on level l if αΔ x_l ≥ 1, and coarsen if αΔ x_l ≤ 0.2, where Δ x_l is the grid spacing on level l and α is the Townsend ionization coefficient.
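To make the photoionization sampling described above concrete, the following C++ sketch draws the random absorption coefficient κ_f. The constants are the ones quoted in the text (their units follow the original references), and the function name is an assumption for the example.

    #include <cmath>
    #include <random>

    // Draw kappa_f = K1 * (K2/K1)^((f - f1)/(f2 - f1)) with f uniform on [f1, f2].
    double sampleKappa(std::mt19937& rng) {
      const double K1 = 530.0, K2 = 3.0e4;
      const double f1 = 2.925, f2 = 3.059;
      std::uniform_real_distribution<double> F(f1, f2);
      const double f = F(rng);
      return K1 * std::pow(K2 / K1, (f - f1) / (f2 - f1));
    }

The propagation distance of each photon then follows from an exponential draw with this κ_f, as in the absorption-position sketch given earlier.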
§.§ Comparison with hydrodynamics In this section we present a comparison between the Îto-KMC model and an equivalent drift-diffusion-reaction model based on deterministic hydrodynamics (equation (<ref>) without the stochastic term), using the same photoionization model and transport data. Since we use cut-cell Cartesian AMR grids, the discretization of the fluid model is somewhat involved and is therefore not discussed in detail here. We follow the discretization that we used in <cit.>. There, we used a Godunov splitting between plasma transport and reactions, and employed a CTU scheme <cit.> that also includes transverse slopes in the advective term, permitting a softer CFL constraint. Here, the only major difference between that discretization and the current one is that we here use explicit diffusion and a semi-implicit coupling to the electric field. To initialize the fluid model we include the same particle distribution as for the Îto-KMC model and deposit the particles as a density using a nearest-grid-point scheme when the simulation begins. In the Cartesian 2D comparison we have also raised the potential on the electrode to 80 in order to facilitate propagation of the streamer. The Cartesian 2D version is included because the lack of fluctuations leads to a single streamer, and the models can then be both qualitatively and quantitatively compared. We compare the two models in 2D planar coordinates, using fixed time steps of Δ t = 5 for a total integration time of t = 10. The field magnitude and electron density after 10 are shown in figure <ref>. We find that the two solutions are qualitatively very similar (they both tilt slightly to the right due to the initial particle distribution). Figure <ref> shows the temporal evolution of the maximal electric field and the streamer head position for the simulations. The head position is defined as the position where the electric field is at its maximum. Only minor differences are found between the two models: for example, the largest difference between the maximum electric field in the two models is about 4, while the average streamer velocities agree to within 0.01. §.§ Discrete particle noise To determine if particle noise impacts the simulations, we consider three-dimensional numerical solutions obtained using a varying number of computational particles per cell N_ppc but fixed Δ x and Δ t. We select Δ x ≈12 and Δ t = 20 and integrate for 5. The same initial particles are used in these tests. Figure <ref> shows the resulting electron density in the neighborhood of the electrode using between 8 and 256 computational particles per cell (per species). There is no apparent disagreement or numerical artifacts for these simulations, which suggests that even N_ppc=16 is sufficiently accurate for these particular simulations. Similar results were obtained by <cit.>. This finding cannot be extrapolated to coarser grids because if we keep N_ppc fixed and increase Δ x by a factor of two, the average particle weights increase by a factor of 8. It also warrants mention that there is no particle noise in the KMC algorithm itself, as it operates with the number of physical particles. However, elevated particle noise still arises due to transport and splitting/merging of superparticles. §.§ Grid sensitivity In this section we perform a grid sensitivity study by varying the spatial and temporal resolutions. We set N_ppc=32 and consider temporal resolutions ranging from 5 to 80, and spatial resolutions ranging from 3 to 195, and integrate for 50.
Figure <ref> shows the final state for the 35 different simulations, and we make several observations: * Simulations are stable for all time steps, even when the time step is orders of magnitude larger than the dielectric relaxation time. For the simulations with Δ t=80 and Δ x ≈3 the time step is equivalent to using an advective CFL number of > 30. * Bounded solutions are obtained for all spatial resolutions, i.e. we do not have n_e→∞ or E→∞ for any Δ x we investigate. Here, the spatial resolutions range from moderately fine to extremely coarse. From experience with fluid simulations <cit.> we have found that instabilities can occur for branching streamers, particularly if some of the branches become stagnant. * Numerical branching occurs on too coarse grids, which can be seen in the column Δ t = 80 where the streamer initially splits into four branches, but these four initial branches disappear on finer grids. Similar phenomena are seen in the row Δ x≈24 where we find small protrusion needles for Δ t ≤20, and similarly for Δ x ≈12 for Δ t ≤10. We believe that these branches appear due to spatial underresolution since for Δ x≈12 we only have about 5-10 grid cells for resolving the cross section of the streamers with the smallest radii. * The degree of branching decreases when larger time steps are used, which is particularly evident for the row Δ x≈3. This can be understood in terms of the photoionization-induced noise ahead of the streamer. <cit.> and <cit.> have shown that the branching behavior depends on the amount of photoionization ahead of the streamer. Increasing the amount of photoionization reduces noise in the plasma density ahead of the streamer, which also reduces the amount of branching. As larger time steps are used there are more photoelectrons generated per time step, which artificially suppresses fine-grained temporal variations in the plasma density ahead of the streamer. * The velocity of the streamers increases with increasing temporal resolution. There are at least two reasons for this: * The number of electrons in the ionization zone in the streamer grows exponentially, and larger time steps lead to numerical underestimation of the electron impact ionization in the streamer tip. * When large time steps are used the electrons in the reaction zones at the tip of the streamers can be moved completely out of them. For example, the simulation with Δ x≈3 and Δ t=80 used an effective CFL number of >30. The reaction zone is just a few grid cells thick, and moving the electrons too far out of the reaction zone reduces the amount of ionization in it, and thus also the velocity of the streamer. * The velocities of the streamers increase slightly when the spatial resolution increases, which we can see in the column Δ t = 10. We believe this occurs because coarse grids lead to under-resolution of the electric field ahead of the streamer. For coarser Δ x the electric field on the tips is therefore lower, and this reduces the amount of ionization. * Streamer radii agree with experimental observations only on the fine grids (Δ x ≲6). <cit.> have measured streamer diameters in atmospheric air and found that diameters range from 100 to 3, depending on experimental conditions like applied voltage, gap inhomogeneity, and various other factors. However, for Δ x ≈ 97-195 we only find streamers with radii R∼1. On finer grids streamers with smaller radii also emerge. E.g. for Δ x≈3 we find streamer radii ranging from 100 to 1, which agree with experimental observations.
* The electric field at the streamer tips varies with the streamer radius. On the finest grid Δ x∼3 we find E∼25-30 for filaments with radii on the order of R∼50-150. This is in agreement with fluid simulations of positive streamers <cit.> as well as analytical estimates <cit.>. On grids Δ x ≥24 we find that the electric field strength at the streamer tips is E≲10, but these solutions are quite clearly underresolved and thus have no practical relevance. In summary, we find that the grid resolution should be Δ x ≤6, which is about the same requirement as in fluid models <cit.>. The time step should be Δ t≤10, which at Δ x ≈6 is about a factor of 5 larger than that permitted through a conventional CFL condition like equation (<ref>), as shown in figure <ref>. We also point out that the Îto-KMC method can maintain this time step even for finer grids, which is not possible for explicit fluid codes. Although Îto-KMC is quite forgiving for larger time steps, it is clear from figure <ref> that lack of temporal resolution leads to suppression of several morphological features in the discharge, while underresolved grids lead to numerical branching. Also note that the fastest time scales in the Îto-KMC method are the same as in fluid methods, which for the reaction set in table <ref> is the electron impact ionization frequency αμ_e E. We have not been able to run numerical convergence tests due to the stochastic component that is involved, which would require ensemble studies at high spatial and temporal resolutions. We nonetheless observe that the solutions converge to physically meaningful results with streamer diameters and velocities that quantitatively agree with experimental observations <cit.>. Section <ref> showed that discrete particle noise had a comparatively low qualitative impact on the simulations, so the artificial branching for Δ x≳12 is probably mesh-based. This finding cannot be automatically extrapolated to different pressures due to very different plasma densities. For example, in our simulations with n_e≈10^18, Δ x≈10 and N_ppc = 64, particles have an average weight w ≈ 15.6 in the streamer head. But in sprite discharges one may have n_e≈10^10 and Δ x≈1, so N_ppc = 64 yields particle weights w ≈ 1.56×10^8. Discrete particle noise is therefore much higher for sprites than for atmospheric pressure streamers. §.§ Positive streamer evolution We now run the simulation with Δ x≈6 and Δ t=10, using N_ppc=64 particles per cell until t=150. This simulation case is probably not completely grid converged, which we can see from the fact that the discharge tree in figure <ref> for Δ x=6 and Δ t=5 is approximately 25% faster. Part of this difference can also be due to a natural variation in the front velocity, depending on how the discharge tree develops. Since we only ran a single simulation per spatial and temporal resolution, we do not know the size of these deviations. Comparing with the panel Δ x=3 and Δ t=5 in the same figure, we find that we are probably also missing some fine-scale features in the discharge tree. Ideally, this is the simulation that we would have run further, but the case Δ x=6, Δ t=10 was the largest case we could fit in our current compute quota. The simulation case is nonetheless quite challenging. Positive streamers in air have smaller radii and branch more frequently than negative ones, and the branching behavior is also voltage-dependent as positive streamers in air branch more frequently at lower background fields.
In our case the average background electric field measured along the symmetry axis is just 0.4, which is below the so-called positive streamer stability field of ∼0.5. Under these conditions, experiments show that repeated branching leads to development of a discharge tree consisting of multiple small-diameter positive streamers <cit.>. Streamers may also stagnate, which has been identified as a challenge for fluid models <cit.>. Figure <ref> shows temporal snapshots of the positive streamer evolution every ten nanoseconds until t=150. We have not (yet) been able to skeletonize the discharge structure for quantitative analysis, but from the figure(s) we extract the following information: * The streamer radii vary between ∼1 and ∼100. The thicker streamers appear closest to the anode, and as they propagate they branch into thinner filaments. * In addition to propagating towards to the ground plane, electrostatic repulsion between the filaments yields numerous sideways branches, which determines the radius of the tree. * The front velocity is on average 0.2, and is defined by the velocity of the front streamers. * The total length of the discharge tree at t=150 is approximately 3 and its radius is approximately 1.5. * Many streamer filaments stop propagating after some distance, but do not lead to unbounded growth in the plasma density. We have indicated one of these branches in figure <ref>, but many more can be identified. Figure <ref> shows the discharge at t=150 from additional perspectives. This data corresponds to the bottom-right panel in figure <ref>. As an amendment to figure <ref> and figure <ref>, we have added an animation of the corresponding data to the supplemental material in this article. In the simulations we do not observe streamer merging, which can occur if two streamer heads are sufficiently close in space and there is sufficient photoionization between them <cit.>. Although streamer merging is fundamentally possible under very specific conditions, the prerequisites for it to occur are obviously not present in our simulations. Streamer reconnection <cit.> is not observed in the simulations either. Reconnection occurs when a streamer connects into the wake of another streamer, and is thus a different phenomenon from streamer merging where two streamer heads merge directly. We have observed streamer reconnection in other 3D fluid simulations when one of the streamer filaments reach the ground plane and acts as a virtual ground for the other streamers. Further details regarding this finding will be reported elsewhere. The simulation results presented here clearly can not be understood from axisymmetric simulations. Many axisymmetric studies of streamer discharges have been reported in the last decades, but these only show the emergence of a single filament and therefore have a limited range of applicability. The radius and velocity of the streamer then tend to increase with streamer length <cit.> (depending on the field conditions), but experiments generally show repeated branching into thinner filaments <cit.>. Because thinner streamers propagate slower than thicker streamers, it is reasonable to expect that repeated branching lowers the front velocity. It is obvious that the front velocity of the tree is determined by the velocity of the front streamers, but we point out that these streamers are also influenced by fields set up by neighboring branches. 
The dynamics of such streamers and corresponding single-filament streamers with the same radii are therefore not necessarily the same. In our simulations the streamers branch into small-diameter streamers, and the most relevant parametric 2D studies are the ones pertaining to so-called minimal streamers. Such streamers have the smallest experimentally observed radius, do not branch, but still propagate over comparatively long distances. In order to compare our front velocity with that of single streamers, we consider the case in <cit.>, who presented a computational analysis of steady and stagnating positive streamers in air. The reported velocities in <cit.> varied from 0.25 to 1.25 when the streamer radii varied from 25 to 125, and the corresponding electric field at the streamer tip varied from 220 to 150. In our simulations we observe the same emerging radii and velocities, while the electric field is slightly higher at approximately ∼250 to ∼280. We do not know the source of this discrepancy, but it could be due to the slightly higher numerical resolution used in our simulations. Another factor could be that <cit.> use a correction to the α-coefficient while we do not. Both of these factors can facilitate small-scale features with higher fields. Regardless of these finer points, the streamer radii, velocities, and fields that emerge in our simulations are consistent with the parametric study in <cit.>. §.§ Computational characteristics We now present some of the computational characteristics for the simulation in section <ref>. The simulation was run on 32 nodes of the Norwegian supercomputer Betzy, and ran to completion in about 4 days. Each node on Betzy consists of two AMD Epyc 7742 CPUs for a total of 128 cores per node, so we used 4096 CPU cores in total. When we terminated the simulation it consisted of approximately 500 million grid cells and 10^10 computational particles, so there were approximately 15.6 million grid cells and 312.5 million particles per node. Although our simulations are firmly rooted in the realm of high-performance computing, they do not represent a large burden for modern supercomputers (which currently can have more than one million CPU cores). Figure <ref> shows a breakdown of the kernel costs for the time step at t = 150, showing the wall clock time spent in various computational routines. This particular time step took around 26, but time steps varied down to around 1 at the very beginning of the simulation. Radiation transport has a negligible cost in our simulations, but this would not be the case e.g. for sprite simulations where many more physical photons are generated per cell. The KMC algorithm has a cost of about 5-10%, whereas setting up and solving the Poisson equation had a relative cost of approximately 35%. Particle-related routines like superparticle handling, particle-mesh operations, transport, and spatial binning of particles had an accumulated cost of around 50%. There was also significant load imbalance for the superparticle handling since we load-balanced using the number of particles. This distributed the load along the streamer channels, but did not account for particle merging/splitting, which mainly occurred in the streamer head. Regridding the solution, which was done every ten time steps, had about the same cost as 1-2 time steps. The majority of the regrid cost comes from 1) recomputing cut-cell stencils and re-solving the Poisson equation on the new grids, and 2) redistributing particles on the new grids.
Particle redistribution is, unfortunately, quite expensive since virtually all computational particles change MPI rank ownership during regrids. Two forms of I/O were used in the simulations: plot files and checkpoint files. Plot files contained data for analysis and ranged up to 160 per file. Checkpoint files contained data for restarting simulations, e.g. in case of hardware failures or for allocating more nodes as the simulation mesh grows. These ranged up to 360 per file. Plot and checkpoint files took about two minutes to write, and were written every 100th time step. § CONCLUSIONS AND OUTLOOK §.§ Main findings We have presented the foundation of a new type of model for streamer discharges based on a microscopic drift-diffusion model with a Kinetic Monte Carlo solver for the plasma chemistry. A thematic discussion on the role of Îto-KMC and its connection to conventional fluid models was given. The model was coupled to photoionization with Monte Carlo radiative transport, and a particle merging and splitting algorithm was presented. Suitable algorithms for integrating the equations of motion were then described, implemented in 2D and 3D, and adapted to cut-cell Cartesian AMR grids. We then implemented an example model for streamer discharges in air in needle-plane gaps, and showed that the Îto-KMC model agrees qualitatively and quantitatively with conventional fluid models. Example simulations that demonstrate the output and stability of the Îto-KMC model were then provided. The simulations presented in this paper demonstrate the feasibility of simulating discharge trees containing many streamer branches. There are some advantages to using the new model, which are listed below: * The model takes the same input as a fluid model, e.g. mobility and diffusion coefficients, and reaction rates. * It is inherently a PIC model and maintains particle discreteness. * Îto-KMC incorporates both reactive and diffusive fluctuations. * The model is exceptionally stable in both space and time, even on very coarse grids and for large time steps. The absence of a CFL condition is particularly liberating. We conjecture that the Îto-KMC method will be suitable for hybrid modeling <cit.> where some of the electrons are treated kinetically. Unlike hybrid models based on a fluid description, transfer of electrons between Îto-KMC and PIC-MCC descriptions can be done without disturbing the original charge distribution. §.§ Future work In this paper our focus has been on an all-discrete approach in which the ions are also treated using Îto diffusion. Future work will benefit from a mixed description where the electrons are treated using Îto diffusion while (some of) the heavy species are treated using a continuum model. This can substantially reduce the computational load when more species of ions are tracked. However, the relative cost of the Poisson solver is already quite high, and even with this improvement the Poisson equation will remain a computational bottleneck. As the Îto-KMC method is highly stable, higher-order algorithms become particularly attractive. In the absence of solid boundaries, fourth-order deposition and interpolation methods exist <cit.>, but these techniques have not been extended to cases where embedded boundaries are involved. For the Poisson equation, fourth-order discretizations that include embedded boundary formulations have been reported <cit.>.
In time, only second-order convergence can be expected in the context of splitting methods, and higher-order integration for the particle transport is furthermore a significant challenge due to the presence of the Wiener process. Suppressing numerical streamer branches in coarse-grid simulations is also desirable. Moving forward, we will explore strategies for suppressing these by filtering the solutions <cit.>. The KMC-particle reactive coupling used in this paper adds some numerical diffusion into the system. Here, the algorithm is set up to uniformly distribute reactive products over a grid cell, and so it will end up placing secondary electrons in the wake of primary electrons. This can become a source of numerical instability quite similar to the one seen in fluid simulations, but fortunately we have not (yet) observed these types of instabilities in our simulations. Regardless, future efforts may benefit from reducing this source of numerical diffusion by introducing sub-grid models for the KMC-particle coupling. We expect that 3D streamer simulations will become increasingly sophisticated in the future. In parallel with this development, there is an emerging need for improving the tools that we use for analyzing such simulations. While two-dimensional simulations are relatively straightforward to analyze, 3D discharge trees are much more complex. Quantitative analysis requires us to skeletonize the discharge trees in full 3D for extraction of branching ratios and angles, filament lengths, velocities, and so on. In the future, we will also focus on establishing such analysis procedures. § ACKNOWLEDGEMENTS This study was partially supported by funding from the Research Council of Norway through grants 319930/E20 and 321449. The computations were performed on resources provided by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway. The author expresses his gratitude to Fanny Skirbekk for providing the images used in Fig. <ref>. § CODE AVAILABILITY STATEMENT The computer code that was used to perform the calculations in this paper is publicly available at <https://github.com/chombo-discharge/chombo-discharge>. Input scripts are available upon reasonable request.
http://arxiv.org/abs/2307.02816v1
20230706072044
The grid-minor theorem revisited
[ "Vida Dujmović", "Robert Hickingbotham", "Jędrzej Hodor", "Gweanël Joret", "Hoang La", "Piotr Micek", "Pat Morin", "Clément Rambaud", "David R. Wood" ]
math.CO
[ "math.CO", "cs.DM" ]
Vida Dujmović (School of Computer Science and Electrical Engineering, University of Ottawa, Ottawa, Canada; vida.dujmovic@uottawa.ca), Robert Hickingbotham (robert.hickingbotham@monash.edu), Jędrzej Hodor (jedrzej.hodor@gmail.com), Gwenaël Joret (Département d'Informatique, Université libre de Bruxelles, Belgium; gwenael.joret@ulb.be), Hoang La (hoang.la.research@gmail.com), Piotr Micek (piotr.micek@uj.edu.pl), Pat Morin (School of Computer Science, Carleton University, Ottawa, Canada; morin@scs.carleton.ca), Clément Rambaud (DIENS, École Normale Supérieure, CNRS, PSL University, Paris, France; clement.rambaud@ens.psl.eu), David R. Wood (david.wood@monash.edu). J. Hodor, H. La, and P. Micek are with the Theoretical Computer Science Department, Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland; R. Hickingbotham and D.R. Wood are with the School of Mathematics, Monash University, Melbourne, Australia. J. Hodor, H. La, and P. Micek are supported by the National Science Center of Poland under grant UMO-2018/31/G/ST1/03718 within the BEETHOVEN program. G. Joret is supported by a CDR grant from the Belgian National Fund for Scientific Research (FNRS), by a PDR grant from FNRS, and by the Australian Research Council. Research of R. Hickingbotham is supported by an Australian Government Research Training Program Scholarship. Research of V. Dujmović is supported by NSERC and a Gordon Preston Fellowship from the School of Mathematics at Monash University. Research of D.R. Wood is supported by the Australian Research Council. The Grid-Minor Theorem Revisited August 1, 2023 ================================ We prove that for every planar graph X of treedepth h, there exists a positive integer c such that for every X-minor-free graph G, there exists a graph H of treewidth at most f(h) such that G is isomorphic to a subgraph of H⊠ K_c. This is a qualitative strengthening of the Grid-Minor Theorem of Robertson and Seymour (JCTB, 1986), and treedepth is the optimal parameter in such a result. As an example application, we use this result to improve the upper bound for weak coloring numbers of graphs excluding a fixed graph as a minor. § INTRODUCTION The seminal Graph Minors series of Robertson and Seymour is the foundation of modern structural graph theory.
In this work, treewidth is a central concept that measures how similar a given graph is to a tree. A key theorem of Robertson and Seymour <cit.> states that a minor-closed graph class 𝒢 has bounded treewidth if and only if some planar graph is not in 𝒢. In particular, for every planar graph X, every X-minor-free graph has treewidth at most some function g(X). This result is often called the Grid-Minor Theorem since it suffices to prove it when X is a planar grid. The asymptotics of g have been substantially improved since the original work. Most significantly, Chekuri and Chuzhoy <cit.> showed that g can be chosen to be polynomial in |V(X)|. The current best bound is g(X)∈(|V(X)|^9), which follows from a result of Chuzhoy and Tan <cit.>. Dependence on |V(X)| is unavoidable, since the complete graph on |V(X)|-1 vertices is X-minor-free, but has treewidth |V(X)|-2. Our goal is to prove a qualitative strengthening of the Grid-Minor Theorem via graph product structure theory. Graph product structure theory describes graphs in complicated classes as subgraphs of strong products[The strong product G_1⊠ G_2 of two graphs G_1 and G_2 is the graph with vertex set V(G_1⊠ G_2):=V(G_1)× V(G_2) and that includes the edge with endpoints (v,x) and (w,y) if and only if vw∈ E(G_1) and x=y; v=w and xy∈ E(G_2); or vw∈ E(G_1) and xy∈ E(G_2).] of simpler graphs. For example, the Planar Graph Product Structure Theorem by Dujmović, Joret, Micek, Morin, Ueckerdt, and Wood <cit.> says that for every planar graph G there is a graph H of treewidth at most 3 and a path P such that G H ⊠ P ⊠ K_3. Here, G_1 G_2 means that G_1 is isomorphic to a subgraph of G_2. Inspired by this viewpoint, we prove the following product structure extension of the Grid-Minor Theorem. Note that H⊠ K_c is the graph obtained from H by `blowing-up' each vertex of H by a complete graph K_c. Let (G) denote the treewidth of a graph G, and let (G) denote the treedepth of G (both defined in <Ref>). For every planar graph X, there exists a positive integer c such that for every X-minor-free graph G, there exists a graph H of treewidth at most 2^(X)+1-4 such that G H⊠ K_c. The point of Theorem <ref> is that the treewidth of H only depends on the treedepth of X, not on |V(X)| [Arbitrarily large graphs can have bounded treedepth, such as edgeless graphs (treedepth 1) or stars (treedepth 2).]. The described product structure of G is a more refined description of G compared to the output of the Grid-Minor Theorem since (H⊠ K_c)≤ ((H)+1)c-1. This refinement is useful because various graph parameters can be bounded on H⊠ K_c by a fast-growing function of (H) times a slow-growing (usually linear) function of c; this includes queue-number <cit.>, nonrepetitive chromatic number <cit.>, and others <cit.>. As concrete applications of <ref>, we use it to improve bounds for weak coloring numbers and p-centered colorings of X-minor-free graphs. The Grid-Minor Theorem relates to treewidth in the same way as the Excluded-Tree-Minor Theorem by Robertson and Seymour <cit.> relates to pathwidth. The latter says that a minor-closed class 𝒢 has bounded pathwidth if and only if some tree is not in 𝒢. In particular, there is a function g such that for every tree X, every X-minor-free graph has pathwidth at most g(|V(X)|). 
The following product structure version of this result was proved by Dujmović, Hickingbotham, Joret, Micek, Morin, and Wood <cit.>: there exists a function f such that for every tree X, there exists a positive integer c such that for every X-minor-free graph G, there exists a graph H of pathwidth at most f((X)) such that G H⊠ K_c. (In fact, they prove a stronger statement in which the pathwidth of H is bounded by 2h-1, where h is the radius of X.) We actually prove the following result for an arbitrary excluded minor, which combined with the Grid-Minor Theorem immediately implies <ref>. For every graph X, there exists a positive integer c such that for every positive integer t and for every X-minor-free graph G with (G)<t, there exists a graph H of treewidth at most 2^(X)+1-4 such that G H⊠ K_ct. <Ref> was inspired by and can be restated in terms of the parameter `underlying treewidth' introduced by Campbell, Clinch, Distel, Gollin, Hendrey, Hickingbotham, Huynh, Illingworth, Tamitegama, Tan, and Wood <cit.>. They defined the underlying treewidth of a graph class 𝒢, denoted by (𝒢), to be the smallest integer such that, for some function f, for every graph G∈ there is a graph H of treewidth at most (𝒢) such that G H ⊠ K_f((G)). Here, f is called the treewidth binding function. Campbell et al. <cit.> showed that the underlying treewidth of the class of planar graphs equals 3, and the same holds for any fixed surface. More generally, let 𝒢_X be the class of graphs excluding a given graph X as a minor. Campbell et al. <cit.> showed that (𝒢_K_t)=t-2 and (𝒢_K_s,t)=s (for t≥max{s,3}). In these results, the treewidth binding function is quadratic. Illingworth, Scott and Wood <cit.> reproved these results with a linear treewidth binding function. Determining the underlying treewidth of the class of X-minor-free graphs, for an arbitrary graph X, was one of the main problems left unsolved by Campbell et al. <cit.> and Illingworth, Scott and Wood <cit.>. <Ref> together with a well-known lower bound construction given in <Ref> shows that (_X) and (X) are tied: (X)-2≤(_X) ≤ 2^(X)+1-4. This shows that treedepth is the right parameter to consider in <ref>. Moreover, in the upper bound the treewidth binding function is linear. Application #1. Weak Coloring Numbers. Weak coloring numbers are a family of graph parameters studied extensively in structural and algorithmic graph theory. See the book by Nešetřil and Ossona de Mendez <cit.>, or the recent lecture notes by Pilipczuk, Pilipczuk, and Siebertz <cit.> for more information on this topic. For the algorithmic side, see Dvořák <cit.> and also Theorem 5.2 in <cit.>, which contains a polynomial-time approximation algorithm for r-dominating set, with approximation ratio bounded by a function of a weak coloring number of the input graph. We now quickly introduce the definition. The length of a path is the number of its edges. For two vertices u and v in a graph G, a u–v path is a path in G with endpoints u and v. Let G be a graph and let σ be an ordering of the vertices of G. For an integer r≥0 and two vertices u and v of G, we say that u is weakly r-reachable from v in σ, if there exists a u–v path of length at most r such that for every vertex w on the path, u≤_σ w. The set of vertices that are weakly r-reachable from a vertex v in σ is denoted by _r[G, σ, v]. The r-th weak coloring number of G, denoted by _r(G), is defined as _r(G) = min_σmax_v ∈ V(G) |_r[G, σ, v]|. where σ ranges over the set of all vertex orderings of G. 
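As a small worked illustration of this definition (an aside added here, not part of the original argument), consider a tree T rooted at any vertex and let σ place every vertex after all of its ancestors. In a tree, the unique u–v path passes through the least common ancestor of u and v, and if u is the ≤_σ-minimum on that path then this ancestor must be u itself. Hence, writing WReach_r and wcol_r for the weakly r-reachable sets and weak coloring numbers defined above (these macro names are reconstructed for readability),

    \[
      \mathrm{WReach}_r[T,\sigma,v] \;\subseteq\; \{v\}\,\cup\,\{\text{ancestors of } v \text{ at distance at most } r\},
      \qquad\text{so}\qquad
      \mathrm{wcol}_r(T)\;\le\; r+1 .
    \]

In particular, wcol_r of any tree is at most r+1, independent of its size.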
Several papers give bounds for weak coloring numbers of graphs in a given sparse class. For example, if G is planar, then _r(G)=O(r^3); if G has no K_t-minor, then _r(G)=O(r^{t-1}) as proved by van den Heuvel, Ossona de Mendez, Quiroz, Rabinovich, and Siebertz <cit.>; and if (G)≤ t, then _r(G)≤\binom{r+t}{t} and this bound is tight as proved by Grohe, Kreutzer, Rabinovich, Siebertz, and Stavropoulos <cit.>. Fix a planar graph X. What is known about _r(G) when X is not a minor of G? Since G is K_|V(X)|-minor-free, we have _r(G) = O(r^{|V(X)|-1}). However, thanks to <Ref>, there exists a graph H with (H)≤ f((X)) = O(2^(X)) and c depending only on X such that G H⊠ K_c. Therefore, _r(G) ≤_r(H⊠ K_c) ≤ c·_r(H) ≤ c· r^f((X)). Indeed, the first inequality follows from the monotonicity of _r and the second inequality is an easy property[ Let σ_H be an ordering of V(H) such that |_r[H,σ,x]| ≤_r(H) for every x ∈ V(H). Consider an ordering σ of V(H ⊠ K_c) such that for every (x,u), (x',u') ∈ V(H ⊠ K_c) if x <_σ_H x', then (x,u) <_σ (x',u'). It is easy to see that _r[H⊠ K_c,σ,(x,u)] ⊆{(y,v) ∈ V(H ⊠ K_c) | y ∈_r[H,σ_H,x]}, and so |_r[H ⊠ K_c, σ, (x,u)]| ≤ c ·_r(H). ] of _r. The obtained upper bound on _r(G) is polynomial in r, where the exponent depends only on (X) and not on |V(X)|. As mentioned, the Grid-Minor Theorem and also <Ref> hold only when the excluded minor X is planar. However, <ref> does not have this restriction, hence there is no obvious obstacle for the above bound on _r(G) to hold for all graphs X. We prove that this is indeed the case, which is the second main contribution of this paper. theoremweakcol There exists a function g such that for every graph X, there exists a constant c such that for every X-minor-free graph G and every positive integer r, _r(G) ≤ c· r^g((X)). Again, the point of <Ref> is that the degree of the polynomial in r bounding _r(G) depends only on (X) and not on |V(X)|. In the previous best bound for the weak colouring number of X-minor-free graphs, the degree of the polynomial in r depended on the vertex-cover[The vertex-cover number τ(G) of a graph G is the size of a smallest set S ⊆ V(G) such that every edge of G has at least one endpoint in S.] number τ(X). In particular, it follows from a result by van den Heuvel and Wood <cit.> regarding the weak colouring number of K_s,t^⋆-minor-free graphs that _r(G) ≤ c· r^{τ(X)+1} for every X-minor-free graph G and integer r≥ 1. <Ref> is qualitatively stronger since (X)≤τ(X)+1 and there are graphs X with (X)=3 and arbitrarily large τ(X). The proof of the theorem relies on the same decomposition lemma as the proof of <Ref>. The ordering of the vertices witnessing the bound on _r in <Ref> is built via chordal partitions—a powerful proof technique originally developed in the 1980s in the context of the cops and robber game <cit.> that was rediscovered and used in <cit.> to bound weak coloring numbers, and has subsequently found several other applications in structural graph theory. Application #2. Product Structure for Apex-Minor-Free Graphs. As already mentioned, Dujmović et al. <cit.> proved that every planar graph is isomorphic to a subgraph of H⊠ P⊠ K_3 for some graph H with treewidth at most 3 and for some path P. This result has been the key ingredient in the solution of several open problems on planar graphs <cit.>. Building on this work, Distel, Hickingbotham, Huynh, and Wood <cit.> proved that every graph of Euler genus g is isomorphic to a subgraph of H⊠ P⊠ K_max{2g,3} for some graph H with treewidth at most 3 and for some path P.
More generally, Dujmović et al. <cit.> characterized the graphs X for which there exist integers t and c such that every X-minor-free graph is isomorphic to a subgraph of H⊠ P⊠ K_c where (H)≤ t and P is a path. The answer is precisely the apex graphs. Here a graph X is apex if V(X)=∅ or X-u is planar for some vertex u of X. The following natural problem arises: for a given apex graph X, what is the minimum integer t(X) such that, for some integer c, every X-minor-free graph is isomorphic to a subgraph of H⊠ P⊠ K_c where (H)≤ t(X) and P is a path? Illingworth, Scott and Wood <cit.> showed that t(X)≤τ(X). We show, via an application of <ref>, that t is tied to treedepth. In particular, (X)-2 ≤ t(X) ≤ 2^(X)+1-1. The proof of this result is presented in <Ref>. Application #3. p-Centered Colorings. <Ref> can also be used to improve bounds for p-centered chromatic numbers of graphs excluding a fixed minor. For an integer p≥ 1, a vertex coloring ϕ of a graph G is p-centered if for every connected subgraph H of G either ϕ uses more than p colors in H or there is a color that appears exactly once in H. The p-centered chromatic number of G, denoted by χ_p(G), is the least number of colors in a p-centered coloring of G. Centered colourings are important since they characterize graph classes of bounded expansion <cit.>, and are a central tool for designing parameterized algorithms in classes of bounded expansion <cit.>; see <cit.> for an overview. If (G)≤ t then χ_p(G)≤p+tt, and this bound is again tight <cit.>. If X is a planar graph and G is X-minor-free, then by <Ref>, there exists a graph H with (H)≤ f((X)) and c depending only on X such that G H⊠ K_c. Therefore, χ_p(G) ≤χ_p(H⊠ K_c) ≤ c·χ_p(H) ≤ c· p^f((X)), where the first inequality follows from the monotonicity of χ_p and the second inequality is an easy property of p-centered colorings (Lemma 8 in <cit.>). The obtained upper bound on χ_p(G) is polynomial in p, where the exponent depends only on (X). Similarly, for an apex graph X, we use (<ref>) to show that for some c=c(X), every X-minor-free graph G satisfies χ_p(G) ≤ c· p^f((X)). Outline. <Ref> gives all the necessary definitions, as well as some preliminary results about tree-decompositions. <Ref> proves the lower bounds in (<ref>) and (<ref>). <Ref> provides a decomposition lemma (<Ref>) for graphs avoiding an `attached model' of a fixed graph, which is a key ingredient in the results that follow. This part of the argument, in particular <Ref>, is inspired by a result of Kawarabayashi <cit.> on rooted minors, which in turn is inspired by results of Robertson and Seymour <cit.>. <Ref> contains the proof of <Ref>. <Ref> contains the proof of <Ref>, which relies on chordal partitions (see Lemma <ref> and Figure <ref>), and a variant of the Helly property for K_t-minor-free graphs that is of independent interest (see <Ref>). Section <ref> contains the proof of <Ref>. This part builds on the work of Pilipczuk and Siebertz <cit.> for bounded genus graphs, and on the Graph Minor Structure Theorem of Robertson and Seymour <cit.> for K_t-minor-free graphs. <Ref> presents a product structure decomposition for graphs excluding an apex graph of small treedepth as a minor. As a consequence, we obtain better bounds for the p-centered chromatic number of such graphs. <Ref> concludes with four questions that we find relevant and exciting for future work. § PRELIMINARIES For a positive integer k, we use the notation [k]={1,…,k}, and when k=0 let [k] = ∅. The empty graph is the graph with no vertices. 
All graphs considered in this paper are finite and may be empty. Let G be a graph. Recall that the length of a path P, denoted by (P), is the number of edges of P. The distance between two vertices u and v in G, denoted by _G(u,v), is the minimal length of a path with endpoints u,v in G, if such a path exists, and +∞ otherwise. A path P is a geodesic in G if it is a shortest path between its endpoints in G. Let u be a vertex of G. The neighborhood of u in G, denoted by N_G(u), is the set {v ∈ V(G) | uv ∈ E(G)}. For every set X of vertices of G, let N_G(X)=⋃_u ∈ X N_G(u) ∖ X. For every integer r ≥ 1, we denote by N_G^r[u] = {v ∈ V(G) |_G(u,v) ≤ r}. We omit G in the subscripts when it is clear from the context. A rooted forest is a disjoint union of rooted trees. The vertex-height of a rooted forest F is the maximum number of vertices on a path from a root to a leaf in F. For two vertices u, v in a rooted forest F, we say that u is a descendant of v in F if v lies on the path from a root to u in F. The closure of F is the graph with vertex set V(F) and edge set vw| v is a strict descendant of w in F. The treedepth of a graph G, denoted by (G), is 0 if G is empty, and otherwise is the minimum vertex-height of a rooted forest F with V(F)=V(G) such that G is a subgraph of the closure of F. Consider the following family of graphs U_h,d. For every positive integer d, define U_0,d to be the empty graph. For all positive integers h and d, define U_h,d to be the closure of the disjoint union of d complete d-ary trees of vertex-height h. Observe that U_h,d has treedepth h. Moreover, this family of graphs is universal for graphs of bounded treedepth: For every graph X of treedepth at most h, there exists d such that X U_h,d. Thus, every X-minor-free graph is U_h,d-minor-free, and to prove <Ref> it suffices to do so for X=U_h,d. A tree-decomposition 𝒲 of a graph G is a pair (T,(W_x | x∈ V(T))), where T is a tree and the sets W_x for each x ∈ V(T) are subsets of V(G) called bags satisfying: * for each edge uv∈ E(G) there is a bag containing both u and v, and * for each vertex v∈ V(G) the set of vertices x∈ V(T) with v∈ W_x induces a non-empty subtree of T. The width of 𝒲 is max{|W_x|-1 | x∈ V(T) }, and its adhesion is max{|W_x∩ W_y| | xy ∈ E(T)}. The treewidth of a graph G, denoted (G), is the minimum width of a tree-decomposition of G. A clique in a graph is a set of pairwise adjacent vertices. Given two graphs G_1,G_2, a clique K^1 in G_1, a clique K^2 in G_2, a function f:K^2→ K^1, the clique-sum of G_1 and G_2 according to f is the graph G obtained from the disjoint union of G_1 and G_2 by identifying x with f(x) for every x ∈ K^2. Note that f does not have to be injective. It is well known that (G) ≤max{(G_1),(G_2)}. Given two graphs G and H, an H-partition of G is a partition (V_x | x ∈ V(H)) of V(G) with possibly empty parts such that for all distinct x,y∈ V(H), if there is an edge between V_x and V_y in G, then xy ∈ E(H). The width of such an H-partition is max{|V_x| x ∈ V(H)}. [Observation 35 in <cit.>] For all graphs G and H, and every positive integer c, G H⊠ K_c if and only if G has an H-partition of width at most c. A partition of a graph G is a family 𝒫 of induced subgraphs of G such that every vertex in G is in the vertex set of exactly one member of 𝒫. Given a partition 𝒫 of G, define G/𝒫 to be the graph with vertex set 𝒫 and edge set all the pairs P,Q ∈𝒫 such that there is an edge between V(P) and V(Q) in G. 
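A small example connecting these notions (not needed later): let G=C_6 with vertices 1,…,6 in cyclic order, and consider the parts V_a={1,2}, V_b={3,6}, V_c={4,5}. Every edge of C_6 lies inside a part or joins V_a to V_b or V_b to V_c, so this is an H-partition of width 2, where H is the path on a,b,c, and the quotient of C_6 by the corresponding partition into induced subgraphs is that path. By the observation above, C_6 is therefore isomorphic to a subgraph of P_3⊠ K_2: send 1,2 to the two vertices lying over a, 3,6 to the two vertices lying over b, and 4,5 to the two vertices lying over c.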
A layering of a graph G is a sequence (L_0,L_1,…) of disjoint subsets of V(G) whose union is V(G) and such that for every edge uv of G there is a non-negative integer i such that u,v ∈ L_i ∪ L_i+1. A layered partition of G is a pair (𝒫,ℒ) where 𝒫 is a vertex partition of G and ℒ is a layering of G. [Observation 35 in <cit.>] For all graphs G and H, and every positive integer c, G H⊠ P ⊠ K_c for some path P if and only if there is an H-partition (V_x | x ∈ V(H)) of G and a layering ℒ such that |V_x ∩ L| ≤ c for every x ∈ V(H) and L ∈ℒ. We finish these preliminaries with three simple statements on tree-decompositions. For every graph G, for every tree-decomposition 𝒲 of G, for every family ℱ of connected subgraphs of G, for every positive integer d, either: * there are d pairwise vertex-disjoint subgraphs in ℱ, or * there is a set S that is the union of at most d-1 bags of 𝒲 such that V(F) ∩ S ≠∅ for every F ∈ℱ. A tree-decomposition (T,(W_x| x∈ V(T))) of a graph G is said to be natural if for every edge e in T, for each component T_0 of T-e, the graph G[⋃_z∈ V(T_0) W_z] is connected. The following statement appeared first in <cit.>, see also <cit.>. Let G be a connected graph and let (T,(W_x| x∈ V(T))) be a tree-decomposition of G. There exists a natural tree-decomposition (T',(W'_x| x∈ V(T'))) of G such that for every x'∈ V(T') there is x∈ V(T) with W'_x'⊆ W_x. Consider a multiset (|W_z| | z∈ V(T)) with values sorted non-increasingly. We call this multiset the signature of (W_z| z∈ V(T)). We are going to show that either (W_z| z∈ V(T)) is natural or there is a tree-decomposition (W'_z| z∈ V(T')) of G with (1) for each z'∈ V(T') there exists z∈ V(T) so that W'_z'⊆ W_z, and (2) the signature lexicographically smaller than the signature of (W_z| z∈ V(T)). This will prove the lemma. Let e be an edge in T with endpoints x and y. We define T_x,y and T_y,x to be components of T-e containing y and x, respectively. We say that a pair (x,y) is bad if G[⋃_z∈ V(T_x,y)W_z] is disconnected. If T has no bad pair, then T is natural and there is nothing to prove. Otherwise, fix a bad pair (x,y) and let C_1,…, C_m (m≥2) be the components of G[⋃_z∈ V(T_x,y)W_z]. We modify the tree-decomposition (W_x| x∈ V(T)) of G as follows. Let T^1 be the tree obtained from T_y,x by attaching m disjoint copies T_1,…,T_m of T_x,y by edges between x in T_y,x and vertices corresponding to y in each copy of T_x,y. Let (W^1_z | z∈ V(T^1)) be a family of subsets of V(G) defined as follows: W^1_z = W_z if z∈ V(T_x,y) W_z∩ (V(C_i)∪ W_x) if z∈ V(T_i) for i∈[m]. We claim that (W^1_z | z∈ V(T^1)) is a tree-decomposition of G. Indeed, for each i∈[m], each edge in C_i has both ends in a bag W^1_z for some z∈ V(T_i). Also for each edge uv in G connecting a vertex u in C_i with v∈ G-⋃ C_j, we have v∈ W_x and therefore if u,v ∈ W_z for some z∈ V(T_x,y) then uv ∈ W^1_z for some z∈ V(T_i). Thus, (W^1_z | z∈ V(T^1)) is a tree-decomposition of G. Moreover, for each z∈ V(T^1), the bag W^1_z is contained in one of the bag of (W_z| z∈ V(T)). Finally, we claim that the multiset (|W^1_x| | x ∈ V(T^1)) is lexicographically smaller than (|W_z| | z ∈ V(T)). In order to prove that first note that the bags of vertices in T_y,x remained the same. Consider a vertex z in T_x,y. Note that this vertex appears in T^1 in m copies. 
There are two possibilities either W_z is contained in one of the components C_1,…, C_m and in this case one copy of z obtains a bag of the same size while all other copies have empty bags; or the second possibility is that W_z intersects at least two components among C_1,…, C_m and in this case each copy of z in T^1 obtains a bag of size smaller than W_z. The last remark completing the proof is that W_y hits all the components C_1,…,C_m as otherwise G would be disconnected. The following technical lemma encapsulates a step in the main proof. In this lemma, we “capture” a given set of vertices Y with a superset X such that X is not too large and each component of G-X has a bounded number of neighbors in X. Let m be a positive integer. Let G be a graph and let 𝒲 be a tree-decomposition of G. If Y is the union of m bags of 𝒲, then there is a set X that is the union of at most 2m-1 bags of 𝒲 such that Y⊆ X and for every component C of G-X, N(V(C)) ∩ X is a subset of the union of at most two bags of 𝒲. Moreover, if 𝒲 is natural, then N(V(C)) ∩ X intersects at most two components of G-V(C). First, we prove the following claim for rooted trees. Let T be a rooted tree, let r be the root of T, and let U be a non-empty subset of V(T). Then, there exists V ⊆ V(T) such that U ⊆ V, |V| ≤ 2|U|-1, and for each component C of T-V: * if r ∈ V(C), then C is adjacent to at most one vertex of V; * otherwise, C is adjacent to at most two vertices of V. We prove the claim by induction on |V(T)|. For a 1-vertex tree T, the statement holds with V=U. Now assume that |V(T)|>1. Let T_1, …, T_d be the rooted subtrees of T-r, where d is the degree of r in T. If T_i is disjoint from U for some i ∈ [d], then apply induction to T-T_i and U. The obtained set V satisfies the claim also for T. Therefore, without loss of generality, we assume that for every i ∈[d], T_i intersects U, and let U_i = V(T_i) ∩ U. First, suppose that d=1. By induction applied to T_1 and U_1, we obtain V_1 satisfying the assertion of the claim. Let V = V_1 if r ∉ U, and V = V_1 ∪{r} otherwise. One can immediately verify that V satisfies the assertion for T. Next, suppose that d > 1. For each i ∈ [d], by induction applied to T_i and U_i, we obtain V_i satisfying the assertion. Let V := {r}∪⋃_i ∈ [d] V_i. Clearly, U ⊂ V. Consider a component C of T-V. Then C is a component of T_i-V for some i∈ [d]. Since r ∈ V, we have r ∉ C, and so, <ref> holds. If C is not adjacent to r, then <ref> is satisfied by induction. If C is adjacent to r, then the root of T_i is in C, thus, by induction, C is adjacent to at most one vertex in V_i, and so, it is adjacent to at most two vertices in V. Finally, |V|≤ 1+∑_i∈ [d]|V_i| ≤ 1+∑_i∈ [d](2|U_i|-1) = (2∑_i ∈ [d]|U_i|) - (d-1) ≤ 2|U| - (d-1) ≤ 2|U| - 1. Now, we prove the lemma. Let 𝒲 = (T,(W_x | x ∈ V(T))). Let U be a set of m vertices in T such that Y ⊆⋃_x ∈ U W_u. By the claim, there exists V ⊆ V(T) of size at most 2m-1 such that U ⊆ V and every component of T-V has at most two neighbors in V. Let X := ⋃_x ∈ V W_x. For a given component C of G-X, let T_C be the subtree of T induced by the vertices x ∈ V(T) with W_x ∩ V(C) ≠∅. Observe that T_C is connected and disjoint from V, and so S = N_T(V(T_C)) ∩ V has size at most two. Finally, N(V(C)) ∩ X ⊆⋃_s ∈ S W_s. Moreover, if 𝒲 is natural, then for every s ∈ S, W_s is in a single component of G-V(C). 
§ IMPROPER COLOURINGS AND LOWER BOUNDS This section explores connections between our results and improper graph colorings, which lead to the lower bound on (_X) in (<ref>) and the lower bound on t(X) in (<ref>). A graph G is k-colorable with defect d if each vertex can be assigned one of k colors such that each monochromatic subgraph has maximum degree at most d. A graph G is k-colorable with clustering c if each vertex can be assigned one of k colors such that each monochromatic connected subgraph has at most c vertices. The defective chromatic number of a graph class 𝒢 is the minimum integer k such that for some integer d, every graph in 𝒢 is k-colorable with defect d. Similarly, the clustered chromatic number of a graph class 𝒢 is the minimum integer k such that for some integer c, every graph in 𝒢 is k-colorable with clustering c. These topics have been widely studied in recent years; see <cit.> for example. Clustered coloring is closely related to the results in this paper, since a graph G is k-colorable with clustering c if and only if G H⊠ K_c for some graph H with χ(H)≤ k. Our results are stronger in that they replace the condition χ(H)≤ k by the qualitatively stronger statement that (H)≤ k (since χ(H)≤(H)+1). Of course, this is only possible when G itself has bounded treewidth. The treedepth of X is the right parameter to consider when studying the defective or clustered chromatic number of the class of X-minor-free graphs. Fix any connected[These results hold for possibly disconnected X, but with treedepth replaced by a variant parameter called connected treedepth, which differs from treedepth by at most 1.] graph X with treedepth h. Ossona de Mendez, Oum and Wood <cit.> proved that the defective chromatic number of the class of X-minor-free graphs is at least h-1, and conjectured that equality holds. Norin, Scott, Seymour and Wood <cit.> proved a relaxation of this conjecture with an exponential bound, and in the stronger setting of clustered coloring. In particular, they showed that every X-minor-free graph is (2^h+1-4)-colorable with clustering c(X). The proof of Norin et al. <cit.> went via treewidth. In particular, they showed that every X-minor-free graph with treewidth t is (2^h-2)-colorable with clustering ct where c=c(X); that is, G H⊠ K_ct for some graph H with χ(H)≤ 2^h-2. <Ref> provides a qualitative strengthening of this result by showing that G H⊠ K_ct for some graph H with (H)≤ 2^h+1 where c=c(X). Liu <cit.> recently established the original conjecture of Ossona de Mendez et al. <cit.>, which also implies that the clustered chromatic number of X-minor-free graphs is at most 3h-3, by a result of Liu and Oum <cit.>. For the sake of completeness, we now adapt the argument of Ossona de Mendez et al. <cit.> to conclude the lower bound in (<ref>) on underlying treewidth, and the lower bound in (<ref>) related to the product structure of apex-minor-free graphs. We start with the following well-known statement (see <cit.> for a similar result). Let h,d be positive integers, and let H be a graph. For every H-partition of U_h,d of width at most d, we have (H)≥ h-1. Let (V_x | x ∈ V(H)) be an H-partition of U_h,d with width at most d. Recall that U_h,d is the closure of the disjoint union of d complete d-ary trees of vertex-height h. In what follows, we refer to these underlying complete d-ary trees when we consider parent/child relations, subtrees rooted at a given vertex, and leaves. 
For every x ∈ V(H), every vertex u ∈ V_x that is not a leaf in U_h,d has a child v such that the subtree rooted at v in U_h,d is disjoint from V_x. This implies that there is a sequence u_1, …, u_h of vertices in U_h,d such that u_i+1 is a child of u_i for every i ∈ [h-1], and u_i ∈ V_x_i for every i ∈ [h] with x_1, …, x_h pairwise distinct. Since {u_1, …, u_h} is a clique in U_h,d, {x_1, …, x_h} is a clique in H. This shows that K_h H, which implies (H)≥ h-1. For every graph X and integer d, there exists an X-minor-free graph G with radius 1 and treewidth (X)-2 such that if G has an H-partition with width at most d, then (H)≥(X)-2. Recall that U_h,d is the closure of the disjoint union of d complete d-ary trees of vertex-height h, this gives a parent/child relation on the vertices of U_h,d and also a notion of a subtree rooted at a vertex in U_h,d. Let (V_x | x ∈ V(H)) be an H-partition of U_h,d with width at most d. Then for every x ∈ V(H), every vertex u ∈ V_x that is not a leaf in U_h,d has a child v such that the subgraph rooted at v is disjoint from V_x. This implies that there is a sequence u_1, …, u_h of vertices in U_h,d such that u_i+1 is a child of u_i for every i ∈ [h-1], and u_i ∈ V_x_i for every i ∈ [h] with x_1, …, x_h pairwise distinct. Since {u_1, …, u_h} is a clique in U_h,d, {x_1, …, x_h} is a clique in H. This shows that K_h H, implying (X)-2 = h-1 ≤(H). So G=U_h,d satisfies the lemma. The next lemma proves the lower bound in (<ref>). For every graph X, (_X)≥(X)-2. Let X be a graph and let h=(X)-1. By the definition of (·) together with <Ref>, there exists an integer-valued function f such that every X-minor-free graph G has an H-partition of width at most f((G)) for some graph H of treewidth at most (_X). Let d=f((X)-2). Note that X has larger treedepth than U_h,d, therefore U_h,d∈_X. By <Ref>, every H-partition of U_h,d of width at most d satisfies (H)≥ h-1. Hence (_X)≥ h-1 = (X)-2. The next result, which is an adaptation of Theorem 19 in <cit.>, proves the lower bound in (<ref>). Let c be a positive integer, and let X be a graph. There exists an X-minor-free graph G such that for every graph H and every path P, if G H⊠ P⊠ K_c, then (H)≥(X)-2. Fix h=(X)-1 and d=3c. Since h=(U_h,d)>(X), we conclude that U_h,d is X-minor-free. Now suppose that U_h,d H⊠ P⊠ K_c for some graph H and path P. We claim that (H)≥ h-1=(X)-2, which would complete the proof. By <Ref> there is an H-partition (V_x | x ∈ V(H)) of U_h,d and a layering ℒ such that |V_x ∩ L| ≤ c for every x ∈ V(H) and L ∈ℒ. Since U_h,d has radius 1, any layering of U_h,d has at most three layers. So |V_x|≤ 3c for every x∈ V(H). Thus (V_x | x ∈ V(H)) is an H-partition of U_h,d with width at most 3c. Now <Ref> implies (H)≥ h-1=(X)-2, as desired. § ATTACHED MODELS Let G and H be graphs. Then H is a minor of G if a graph isomorphic to H can be obtained from G by deleting edges, deleting vertices and contracting edges. If H is not a minor of G, then G is H-minor-free. A model of H in G is a family (B_x | x ∈ V(H)) of pairwise disjoint subsets of V(G) such that: * for every x ∈ V(H), the subgraph induced by B_x is non-empty and connected. * for every edge xy ∈ E(H), there is an edge between B_x and B_y in G. The sets B_x for x ∈ V(H) are called the branch sets of the model. Note that H is a minor of G if and only if there is an H-model in G. The join of graphs G_1 and G_2, denoted by G_1 ⊕ G_2, is the graph obtained from the disjoint union of G_1 and G_2 by adding all edges between vertices in G_1 and vertices in G_2. 
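For concreteness (these small instances only illustrate the notation), K_1⊕ C_n is the wheel on n+1 vertices, K_a⊕ K_b=K_a+b, and K_k⊕ U_h,d, the graph excluded in most of the statements below, is obtained from U_h,d by adding k universal vertices; in particular it has exactly k+d(d^h-1)/(d-1) vertices.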
Similarly, given a set U and a graph G with V(G)∩ U = ∅, denote by U⊕ G the graph with vertex set U ∪ V(G) and edge set E(G) ∪{uv | u ∈ U, v ∈ V(G)}∪{uu' | u,u' ∈ U, u≠ u'}. Let G and H be graphs. Let a and k be integers with a≥ k≥0, and let R_1,…,R_k be pairwise disjoint subsets of V(G). A model (B_v| v∈ V(K_a⊕ H)) of K_a⊕ H in G-⋃_i=1^k R_i is {R_1,…,R_k}-attached in G if there are k distinct vertices v_1,…,v_k of K_a such that B_v_i contains a neighbor of a vertex in R_i in G for each i∈[k]. If R = {r_1, …, r_k}⊆ V(G) is a set of k vertices, then we say that a model of K_a⊕ H in G is R-attached in G if it is {{r_1},…, {r_k}}-attached in G. In this paper, a separation in G is a pair (A,B) of subgraphs of G such that A ∪ B=G (where V(A) ⊆ V(B) or V(B) ⊆ V(A) is allowed). The order of (A,B) is |V(A) ∩ V(B)|. Let G be a graph and S,T be two sets of vertices of G. Let k be a positive integer. A linkage of order k between S and T is a family of k vertex-disjoint paths from S to T in G, with no internal vertices in S∪ T. Menger's Theorem asserts that either G contains a linkage of order k between S and T or there is a separation (A,B) of G of order at most k-1 such that S ⊆ V(A) and T ⊆ V(B). The next lemma and corollary are tools to be used in the main decomposition lemma that follows (<Ref>). <Ref> is inspired by a result of Kawarabayashi <cit.>. For a graph G and a subset R of the vertices of G, let G^+R be the graph obtained from G by adding all missing edges between vertices of R. Let H be a graph, and let a and k be positive integers with a ≥ 2k. For every graph G and every set R of k vertices of G such that there exists a model ℳ=(B_x | x∈ V(K_a⊕ H)) of K_a⊕ H in G^+R, at least one of the following properties holds: * G contains an R-attached model of K_a-k⊕ H', for some graph H' obtained from H by removing at most 2k vertices, * there is a separation (A,B) in G of order at most k-1 and a vertex z in K_a such that R⊆ V(A) and B_z⊆ V(B)-V(A). Suppose that the lemma is false and let G be a graph with the minimum number of vertices for which there exist R and ℳ as in the statement such that neither <ref> nor <ref> holds. Fix such a set R and model ℳ = (B_x | x ∈ V(K_a⊕ H)). We start with an easy observation. For every separation (A,B) of order m in G, if there exists x ∈ V(K_a) such that B_x ⊆ V(B)-V(A), then every branch set of ℳ intersects V(B), and at most m branch sets of ℳ intersect V(A). Indeed, for every y ∈ V(K_a⊕ H) distinct from x, there is an edge in G between B_x and B_y, and so B_y contains a vertex of B. Since |V(A) ∩ V(B)| = m, at most m such branch sets intersect V(A). We claim that for each x∈ V(K_a⊕ H) we have B_x⊆ R or B_x is a singleton. Suppose the opposite, that is, there exists a branch set U of ℳ such that |U| > 1 and U is not a subset of R. In particular, G[U] contains an edge e=uv such that u ∈ V(G) - R and v ∈ V(G). Consider the graph G_1 obtained from G by contracting e. Contracting an edge inside a branch set of a model preserves the model. Let ℳ_1 = (B^1_x | x ∈ V(K_a⊕ H)) be the resulting model of K_a⊕ H in G_1^+R. By the minimality of G, the lemma holds for G_1, R, and ℳ_1. If item <ref> holds, that is, G_1 contains an R-attached model of K_a-k⊕ H', where H' is a graph obtained from H by removing at most 2k vertices, then G does as well, a contradiction. Therefore, item <ref> holds and we fix a separation (A_1,B_1) in G_1 of order at most k-1 and z∈ V(K_a) such that R ⊆ V(A_1) and B^1_z⊆ V(B_1)-V(A_1).
By uncontracting e, we obtain a separation (A,B) in G of order at most k such that R ⊆ V(A) and B_z⊆ V(B)-V(A). This separation has to be of order exactly k, in particular, u and v are both in V(A) ∩ V(B), as otherwise, <ref> would be satisfied for G, R, and ℳ. Let R' = V(A) ∩ V(B). By Menger's Theorem, either there exists a linkage of order k between R and R' in A, or there exists a separation (C,D) in A of order at most k-1 such that R⊆ V(C) and R'⊆ V(D). In the latter case, we obtain a separation (C,D∪ B) in G of order at most k-1 such that R⊂ V(C) and B_z⊆ V(D∪ B)-V(C). Thus, <ref> is satisfied for G, R, and ℳ, which is a contradiction. Therefore, there exists a linkage ℒ of order k between R and R' in A. Since |R|=k, |R'|=k, and not all vertices of R' are in R (since u∈ R'-R), at least one vertex of R is in V(A) - V(B). Since z is adjacent to every other vertex in K_a⊕ H and B_z ⊆ V(B)-V(A), every branch set Y in ℳ contains a vertex of B, and thus B^+R'[Y] is non-empty and connected. Let ℳ'=(B'_x | x∈ V(K_a⊕ H)) be obtained from ℳ by restricting each branch set to the graph B. It follows that ℳ' is a model of K_a⊕ H in B^+R'. Since B has fewer vertices than G, the triple B,R',ℳ' satisfies the lemma. If <ref> is satisfied, that is, if there is an R'-attached model of K_a-k⊕ H' in B, where H' is a graph obtained from H by removing at most 2k vertices, then we can extend the model using ℒ to obtain an R-attached model of K_a-k⊕ H' in G, a contradiction. Therefore, <ref> is satisfied for B,R',ℳ', that is, there is a separation (A',B') in B of order at most k-1 and z'∈ V(K_a⊕ H) with R' ⊆ V(A') and B_z'⊆ V(B')-V(A'). Observe that (A ∪ A',B') is a separation in G of order at most k-1 such that R⊂ V(A ∪ A') and B_z'⊆ V(B') - V(A∪ A'), a contradiction. This proves that each branch set of ℳ is either a singleton or a subset of R. Let M be the union of all branch sets in ℳ that do not intersect R. By Menger's Theorem, either there is a linkage of order k between R and M in G, or there is a separation (A,B) of G of order at most k-1 with R ⊆ V(A), M ⊆ V(B). Suppose the latter is true. Observe that for every vertex z in K_a, the corresponding branch set B_z is either contained in M or intersects V(A)∩ V(B), since z is adjacent to every other vertex of K_a⊕ H (and M is not empty). Thus, since a > k-1, there is a choice of z such that B_z is disjoint from V(A). Hence, item <ref> holds. Now, assume that there is a linkage ℒ of order k between R and M in G. Let W_1K,W_1H,W_2K,W_2H,W_3K,W_3H be the partition of V(K_a⊕ H) defined by V(K_a)=⋃_i∈[3]W_iK, V(H)=⋃_i∈[3]W_iH, and W_1K∪ W_1H = {x ∈ V(K_a⊕ H) | B_x ⊆ R}, W_2K∪ W_2H = {x ∈ V(K_a⊕ H) | B_x ⊆⋃_L ∈ℒ V(L) - R}, W_3K∪ W_3H = V(K_a⊕ H) - (W_1K∪ W_1H∪ W_2K∪ W_2H). See <Ref> for an illustration. First, we argue that |W_2H| ≤ |W_3K|. Observe that a = |W_1K| + |W_2K| + |W_3K|, k = |W_2H| + |W_2K|. Combining the above with a ≥ 2k and |W_1K| ≤ k, we obtain |W_3K| = a-|W_1K| - |W_2K| = a-|W_1K|-(k - |W_2H|) ≥ 2k-k-k+|W_2H| = |W_2H|. It follows that there exists an injective mapping f: W_2H→ W_3K. Let a'= a - |W_1K|, and let H' = H - (W_1H∪ W_2H). Note that H' is a graph obtained from H by removing at most 2k vertices. Now, define a model ℳ'=(B'_x| x∈ W_2K∪ W_3K∪ V(H')) of K_a'⊕ H' as follows: B'_x= B_x ∪ B_f^-1(x) if x ∈ f(W_2H) ⊂ W_3K, B_x otherwise. See <Ref> again. Now, ℒ is a linkage of order k between R and k distinct branch sets B'_x with x ∈ W_2K∪ W_3K. We can extend the model using ℒ.
Namely, for each path in ℒ we add all its internal vertices to the unique branch set that intersects the path. We obtain an R-attached model of K_a'⊕ H', hence, <ref> holds. This contradiction concludes the proof. Let H be a graph, and let a and k be two positive integers such that a ≥ 2k+min{|V(H)|,k}. For every graph G that contains a model of K_a⊕ H, and for every set R of k vertices of G: * G contains an R-attached model of K_a-(k+min{|V(H)|,k})⊕ H, or * there is a separation (A,B) in G of order at most k-1 such that R⊆ V(A) and B - V(A) contains a model of K_a-k+1. τ(h)=(2^h) version. Let H be a graph, and let a and k be two positive integers such that a ≥ 2k. For every graph G that contains a model of K_a⊕ H, and for every set R of k vertices of G: * G contains an R-attached model of K_a-k⊕ H' where H' is a graph obtained from H by removing at most 2k vertices, or * there is a separation (A,B) in G of order at most k-1 such that R⊆ V(A) and B - V(A) contains a model of K_a-k+1. The graph G contains a model of K_a⊕ H, thus, in particular, the graph G^+R contains a model ℳ=(B_y| y∈ V(K_a⊕ H)) of K_a⊕ H. Therefore, by Lemma <ref>, either <ref> is satisfied, or there is a separation (A,B) in G of order at most k-1 and x∈ V(K_a) such that R ⊂ V(A) and B_x⊆ V(B)-V(A). In the second case, it remains to show that B-V(A) contains a model of K_a-k+1 and <ref> will hold. Since x is adjacent to the remaining a-1 vertices of K_a, for each y∈ V(K_a) the set B_y contains a vertex of B. This implies that the number of y∈ V(K_a) such that B_y∩ V(A)≠∅ is at most |V(A)∩ V(B)|≤ k-1. Therefore, B-V(A) contains a model of K_a-k+1, as desired. Let W_K, W_H be the sets of vertices in V(K_a) (resp. V(H)) whose branch sets intersect A. Since a ≥ 2k-1 and |W_H| + |W_K| ≤ k-1, there is an injection f from W_H to V(K_a) - W_K. Let a'= a - |f(W_H)| - |W_K| ≥ a-k+1. Now define the model ℳ' = (B'_x | x ∈ V(K_a⊕ H) - f(W_H) - W_K) of K_a'⊕ H as follows: B'_x= B_f(x) if x ∈ W_H, B_x otherwise. See <Ref> for an illustration of how the new model is built. Clearly, ℳ' is a model of K_a'⊕ H in B-A, which shows the lemma. Let G be a connected graph, let k be a positive integer, and let R be a set of k vertices of G. If K_2k is a minor of G^+R, then for some ℓ∈[k] there is a separation (A,B) in G of order ℓ such that R⊆ V(A), and B contains a V(A) ∩ V(B)-attached model of K_ℓ. We proceed by induction on k. For k=1, one can take A to be a 1-vertex graph containing the vertex of R, and B = G. Note that B - A is non-empty, and since G is connected, there is a vertex in B-A adjacent to the vertex in R. This vertex constitutes a V(A) ∩ V(B)-attached model of K_1. Now, assume that k≥ 2 and that the result holds for all positive integers less than k. Let ℳ=(B_x | x∈ V(K_2k)) be a model of K_2k in G^+R. Apply <Ref> to G, R, and ℳ with H being the empty graph and a = 2k. If item <ref> is satisfied, then take A to be the graph on R with no edges and B to be the whole graph G, and the lemma is satisfied with ℓ=k. Otherwise, there exists a separation (C,D) in G of order at most k-1 and z∈ V(K_a) such that R ⊆ V(C) and B_z ⊆ V(D)-V(C). Let E be the component of D containing B_z. Since z is adjacent to every other vertex in K_2k, B_x contains a vertex of E for every x∈ V(K_2k). Let ℳ_E be obtained from ℳ by replacing each branch set in ℳ by its restriction to E. Let R' = V(C) ∩ V(E). Thus, |R'|≤ k-1. Observe that ℳ_E is a model of K_2k in E^+R'.
By induction applied to E and R', there exists a separation (A',B') of order at most k-1 in E such that R'⊆ V(A') and B' has a V(A') ∩ V(B')-attached model of K_|V(A')∩ V(B')|. Finally, put A = C ∪ A' and B = B'∪ (D-E). A crucial step in the proof of <Ref> relies on the following lemma, which decomposes graphs that do not have some attached models. Let G be a graph, let h,a,k,d be integers with h,d≥1 and a ≥ k≥0, and let R be a set of k vertices of G. If G contains no R-attached model of K_a ⊕ U_h,d, then there is an induced subgraph C of G such that R ⊆ V(C) and the following items hold. * Let m be the number of components of G-C, let C^1,… C^m be these components, and let N^i=N_G(V(C^i)) for every i∈ [m]. For every i∈[m], |N^i|≤ k-1 and G[V(C^i)∪ N^i] has an N^i-attached model of K_|N^i|. * Let C^0 be the graph obtained from C-R by adding all missing edges between vertices of N^i for every i∈[m] (C^0 is a minor of G-R by <ref>). Then, C^0 is (K_a+k⊕ U_h,d+2k)-minor-free. See <Ref> for an illustration of the assertion. We proceed by induction on |V(G)|. Clearly, if G-R is (K_a+k⊕ U_h,d+2k)-minor-free, then C=G is the required graph. In particular, this is always the case when k=0. Now, assume that G-R contains a model ℳ=(B_x| y∈ V(K_a+k⊕ U_h,d+2k)) of K_a+k⊕ U_h,d+2k and k ≥ 1. Apply <Ref> with H = U_h,d+2k. Observe that every graph obtained from U_h,d+2k by removing at most 2k vertices contains U_h,d as an induced subgraph. Therefore, since G does not contain an R-attached model of K_a⊕ U_h,d, <ref> in <Ref> does not hold. It follows that there exists a separation (A,B) of order at most k-1 and z∈ V(K_a+k) such that R ⊆ V(A) and B_z⊆ V(B)-V(A). We can assume that B-A is connected. Indeed, if B-A is disconnected, then let D be a component of B-A containing B_z and replace (A,B) with the separation (G[V(A)∪ V(B)-V(D)],G[V(D)∪ (V(A)∩ V(B))]). If (A,B) is a separation of order 0, then by induction applied to G-B, there exists an induced subgraph C of G-B such that R ⊂ V(C) and items <ref>-<ref> hold. Components of G-C are components of (G-B)-C and B. Hence, C also witnesses the assertion of the lemma for G. Therefore, we assume that (A,B) is of order at least 1. Let R'=V(A)∩ V(B). Note that R' is non-empty and |R'|≤ k-1. Since z is adjacent to the remaining a+k-1 vertices of K_a+k and B_z⊆ V(B)-V(A), for every x ∈ V(K_a+k) the set B_x contains a vertex of B. Let _B be obtained from by replacing each branch set in by its restriction to V(B). Observe that _B is a model of K_a+k in B^+R'. By <Ref> applied to B and R', there is a separation (E,F) in B such that if R” = V(E) ∩ V(F), then 1 ≤ |R”| ≤ |R'| ≤ k-1, and F contains an R”-attached model of K_|R”|. Like before, we can assume that F-E is connected. Let G' be the graph obtained from A ∪ E by adding all missing edges between vertices in R”. The model of K_|R”| in F-E is disjoint from E. Thus, G' has fewer vertices than G. Hence, by induction, G' contains an induced subgraph C' such that R ⊂ V(C') and items <ref>-<ref> hold. Let C^1,…,C^m be the connected components of G'-C', and let N^i = N_G'(V(C^i)) for every i ∈ [m]. We claim that C = G[V(C')] satisfies items <ref>-<ref>. Note that C and C' have the same set of vertices. Since G'[R”] is a complete graph, either R”⊂ V(C), or R”⊂ V(C^i) ∪ N^i for some i ∈ [m]. In the first case, C^1,…,C^m, and C^m+1=F-E are the components of G-C. Observe that N^m+1=R” and items <ref>-<ref> hold. In the second case, R”⊂ V(C^i) ∪ N^i for some i ∈ [m] and R” is not a subset of V(C). 
In this case, C^i ∪ (F-E) is a connected component of G-C, and so, both items of the assertion follow immediately. § PROOF OF THEOREM <REF> This section proves <Ref>. As argued in <Ref>, it suffices to do so for X=U_h,d. Let τ : ℤ^2_≥ 0→ℤ be the function defined by τ(0,k) = k-2, and τ(h,k) = τ(h-1,2k+1)+k+1 for every h≥ 1, for every k ≥ 0. One can check that τ(h,0) = 2^h+1-4 for every h ≥ 0. Moreover, let c: ℤ_≥ 0^3 →ℤ be the function defined by c(0,d,k) =1, and c(h,d,k) = max{d-1, 2, k, c(h-1,d+2k,2k+1), 2(d-1)2^k-1} for every h≥1, for every d,k ≥ 0. A key to our proof of <Ref> is to prove the following stronger result for K_k⊕ U_h,d-minor-free graphs. For all integers h,d,t≥1 and k≥0, for every K_k⊕ U_h,d-minor-free graph G with (G) < t, there exists a graph H with treewidth at most τ(h,k), and an H-partition (V_x | x ∈ V(H) ) of G such that |V_x|≤ c(h,d,k) · t for all x∈ V(H). This result with k=0 and <Ref> implies <Ref>. The proof of <Ref> is by induction on h. Considering K_k⊕ U_h,d-minor-free graphs enables the proof to trade-off a decrease in h with an increase in k. We will need the following result by Illingworth, Scott, and Wood <cit.> for the base case of our induction. For all integers k≥2 and t≥ 1, for every K_k-minor-free graph G with (G) < t, there is a graph H of treewidth at most k-2 and an H-partition of G of width at most t. The next lemma with ℓ=0 immediately implies <Ref>, which in turn implies <Ref> and <Ref>. For all integers h, d, k, ℓ, t with h,d,t≥1, k≥0, and 0 ≤ℓ≤ k, for every graph G such that K_k⊕ U_h,d is not a minor of G, (G) < t, and for all pairwise disjoint non-empty subsets R_1,…,R_ℓ of vertices of G such that |R_j| ≤ 2 for every j ∈ [ℓ], there exists a graph H, an H-partition (V_x | x ∈ V(H) ) of G, and x_1,…,x_ℓ∈ V(H) such that: * (H) ≤τ(h,k), * |V_x|≤ c(h,d,k) · t for all x∈ V(H), * R_j = V_x_j for all j ∈ [ℓ], * {x_1,…,x_ℓ} is a clique in H. We call a tuple (h,d,k,t,G,R_1,…,R_ℓ) satisfying the premise of the lemma an instance. We proceed by induction on (h,|V(G)|) in lexicographic order. If h=1 and k=0, then K_k ⊕ U_h,d is the graph with d vertices and no edges. Thus, |V(G)| ≤ d-1 and {V(G)} is a K_1-partition of G of width at most d-1. Then, items <ref> and <ref> hold vacuously, and items <ref> and <ref> are clear since (K_1) = 0 = τ(1,0) and d-1 ≤ c(1,d,0). From now on, assume that (h,k) ≠ (1,0). If |V(G)-⋃_j ∈ [ℓ] R_j| < k, then the K_ℓ+1-partition {R_1, …, R_ℓ, V(G)-⋃_j∈[ℓ] R_j} of G satisfies <ref>, <ref>, <ref>. Since 2 ≤ c(h,d,k) · t and k≤ c(h,d,k) · t, item <ref> also holds and we are done. Now assume that G-⋃_j ∈ [ℓ] R_j has at least k vertices. This enables us to enforce ℓ = k, indeed, if ℓ<k, then pick distinct vertices s_ℓ+1, …, s_k ∈ V(G)- ⋃_j∈[ℓ]R_j and set R_j = {s_j} for every j ∈{ℓ+1, …, k}. From now on, we assume that ℓ=k. If G-⋃_j∈[k] R_j is not connected, then for every component C of G-⋃_j∈[k] R_j, Apply induction to the instance (h,d,k,t,G[V(C) ∪⋃_j∈[k] R_j],R_1,…,R_k) to obtain a graph H^C with distinguished vertices x^C_1, …, x^C_k, and an H^C-partition (V^C_x | x ∈ V(H^C)) of G[V(C) ∪⋃_j ∈ [k]R_j] satisfying <ref>-<ref>. In particular, V^C_x^C_j = R_j for every j ∈ [k]. Let C_1, …, C_m be the components of G-⋃_j ∈ [k] R_j. Let H be the graph obtained from the disjoint union of H^C_1, …, H^C_m by identifying the vertices in {x^C_j_i}_j∈[m] into a single vertex x_i, for each i ∈ [k]. Finally, set V_x_j = R_j for every j ∈ [k], and V_x = V^C_i_x for every x ∈ V(H^C_i) ∖{x^C_1, … x^C_k} and for every i ∈ [m]. 
Item <ref> holds since (H) = max_i∈[m]{(H^C_i)}≤τ(h,k). Item <ref> holds by induction. Items <ref> and <ref> hold by construction of H. From now on, we assume that G-⋃_j∈[k] R_j is connected. Let ℱ be the family of all connected subgraphs of G - ⋃_j∈[k] R_j containing an {R_1, …, R_k}-attached model of K_k+1⊕ U_h-1,d. If ℱ contains (d-1)2^k+1 pairwise disjoint subgraphs, then by the pigeonhole principle there exist s_1,…,s_k with s_j∈ R_j for each j ∈ [k], and d vertex-disjoint {s_1, …, s_k}-attached models ^i = (M^i_x | x ∈ V(K_k+1⊕ U_h-1,d)) of K_k+1⊕ U_h-1,d for i ∈ [d]. We denote by v_1,…, v_k+1 the vertices of K_k+1 in K_k+1⊕ U_h-1,d. Since these vertices have the same closed neighborhood in K_k+1⊕ U_h-1,d, we can assume that M^i_v_j contains a neighbor of s_j, for all i∈[d] and j∈[k]. For each j ∈ [k], let M_j = s_j∪⋃_i∈[d]M^i_v_j. Note that for every i ∈ [d], ^i = (M_x^i | x ∈ V(K_k+1⊕ U_h-1,d) - {v_1,…,v_k}) is a model of K_1 ⊕ U_h-1,d in G. Moreover, for every j ∈ [k], i ∈ [d], and M ∈^i, M_j is adjacent to M. Therefore, ^1,…,^d together with M_1,…,M_k constitute a model of K_k⊕ U_h,d in G, a contradiction (see <Ref>). Hence, there are no (d-1)2^k+1 pairwise disjoint members in ℱ. Since (G-⋃_j∈(k] R_j) ≤(G)<t and G-⋃_j ∈ [k] R_j is connected, by <Ref>, G admits a natural tree-decomposition 𝒲 of width at most t-1. By Lemma <ref>, there exists a set Y of vertices of G - ⋃_j∈[k] R_j that is the union of at most (d-1)2^k bags of 𝒲, such that Y intersects all the members of ℱ. By <Ref> applied to G-⋃_j ∈[k]R_j, 𝒲, Y, there exists a set X of at most (2 (d-1)2^k - 1) · t ≤ c(h,d,k) · t vertices in G such that Y⊆ X and each component D of G-⋃_j ∈ [k] R_j - X has neighbors in at most two components of G-⋃_j=1^kR_j - D. Consider the graph G' obtained from G-X by identifying the vertices in R_j into a single vertex r_j, for each j∈[k]. Let R = {r_1, …, r_k}. Note that G' is not necessarily a minor of G, however, G'-R is a subgraph of G. Observe that G' has no R-attached model of K_k+1⊕ U_h-1,d since X is disjoint from V(G') and every such model in G intersects X. By <Ref> applied with a=k+1, we obtain an induced subgraph C of G' with the following properties. Let C^1,…, C^m be the connected components of G'-C, let N^i=N_G'(V(C^i)) for every i∈ [m], and let C^0 be the graph obtained from C-R by adding all missing edges between vertices of N^i-R for each i∈[m]. Observe that: * R ⊆ V(C), * |N^i|≤ k-1 for each i ∈ [m], * C^0 is K_2k+1⊕ U_h-1,d+2k-minor-free, * C^0 is a minor of G'-R. In particular C^0 is a minor of G and (C^0) < t. If h=1, then K_2k+1⊕ U_h-1,d+2k=K_2k+1. Moreover, since (h,k) ≠ (1,0), we have k ≥ 1. So, we can apply <Ref> to C^0, which is K_2k+1-minor-free, to obtain a graph H^0 with (H^0)≤ 2k-1=τ(0,2k+1) and an H^0-partition (V^0_x | x ∈ V(H^0)) of C^0 of width at most (d+2k)t = c(0,d+2k,2k+1) · t. If h>1, then apply induction to the instance (h-1,d+2k,2k+1,t,C^0,∅). In both cases, we obtain a graph H^0 with (H^0)≤τ(h-1,2k+1) and an H^0-partition (V^0_x | x ∈ V(H^0)) of C^0 of width at most c(h-1,d+2k,2k+1) · t. For every i ∈ [m], the set N^i-R is a clique in C^0. Hence, the parts containing vertices in N^i-R form a clique in H^0. Fix i∈ [m]. Let D^i be the component of G-⋃_j=1^k R_j - X containing C^i. Let X^i_1, …, X^i_q be the components of G-⋃_j=1^k R_j - D^i with a neighbor in D^i. Note that q ≤ 2 by the definition of X. 
Let G^i be the graph obtained from G as follows: * identify the vertices in X^i_a into a single vertex x^i_a, for each a ∈ [q], * if q > 0 let ℛ^i = {R_j | j ∈ [k], r_j ∈ N^i}∪{{u}| u ∈ N^i - R}∪{{x^i_a | a ∈[q]}} and if q=0, letℛ^i = {R_j | j ∈ [k], r_j ∈ N^i}∪{{u}| u ∈ N^i - R}. * remove all vertices outside V(C^i) ∪⋃ℛ^i. Observe that G^i is a minor of G. This implies that K_k⊕ U_h,d is not a minor G^i and (G^i) < t. Since N_G(V(C^i))-⋃_j=1^k R_j-X ⊆ V(D^i), ℛ^i is a family of at most |N^i|+1 ≤ k pairwise disjoint non-empty sets, each of at most two vertices in G^i. Since |N^i|≤ k-1, there exists j∈[k] such that r_j∉N^i and therefore R_j is disjoint from V(G^i). Thus, |V(G^i)|<|V(G)|. Now apply induction to the instance (h,d,k,t,G^i,ℛ^i). It follows that there is a graph H^i with (H^i)≤τ(h,k) and an H^i-partition (V^i_x | x ∈ V(H^i)) of G^i of width at most c(h,d,k) · t such that ℛ^i = {V_x_i,j^i| j∈[|^i|]} for some clique {x_i,1, …,x_i,|ℛ^i|} in H^i. Finally, define the graph H by the following process (see <Ref> for an informal summary of the rest of the proof). Start with the disjoint union of H^0 and H^1, …, H^m. Add a clique {x_1, …, x_k,z} of k+1 new vertices, each adjacent to every vertex of H^0. For every i ∈ [m], let f_i be a mapping of {x_i,1,…, x_i,|ℛ^i|} to V(H^0) ∪{x_1, …, x_k,z} defined as follows: f_i(x_i,j) = w if w ∈ V(H^0) and V^i_x_i,j⊆ V^0_w, x_j' if j' ∈ [k] and V^i_x_i,j = R_j', z if V^i_x_i,j = {x^i_a | a∈[q]} for each j∈[|ℛ^i|]. Now, identify x_i,j with f_i(x_i,j) for every i ∈ [m] and every j ∈ [|ℛ_i|]. This identification step can be seen as a result of the sequence of clique-sums between {x_1, …, x_k,z}⊕ H^0 and the graphs H^i according to f_i for i∈[m]. This completes the definition of H. Note that (H) ≤max{({x_1, …, x_k,z}⊕ H^0),max_i∈[m](H^i)} ≤max{k+1+τ(h-1,2k+1), τ(h,k)} ≤τ(h,k). Define an H-partition (V_x | x ∈ V(H)) of G, where for each x∈ V(H), V_x = R_j if x=x_j for j∈[k], X if x=z, V_x^0 if x ∈ V(H^0), V_x^i if x ∈ V(H^i) - {x_i,1, …, x_i,|^i|}. As mentioned (H)≤τ(h,k) so <ref> holds. For every x ∈ V(H) distinct from z, |V_x| ≤max{c(h-1,d+2k,2k+1)· t,c(h,d,k)· t} = c(h,d,k) · t, and |V_z| = |X| ≤ (2(d-1)2^k-1) · t ≤ c(h,d,k) · t. Thus, <ref> holds. <Ref> holds by the definition of the H-partition. Finally, <ref> holds by the definition of H. § PROOF OF THEOREM <REF> First, we recall the definition of weak coloring numbers. Given a graph G, a linear ordering σ of V(G), a vertex v of G, and an integer r≥ 1, define _r[G,σ,v] to be the set of vertices w of G such that there is a path from v to w of length at most r whose minimum with respect to σ is w. Then define _r(G,σ) = max_v ∈ V(G)|_r[G,σ,v]| and _r(G) = min_σ_r(G,σ). In this section, we prove the following theorem. * Theorem <ref> follows immediately from the next result. There exist functions f and g such that for all integers h,d,r≥ 1 and for every U_h,d-minor-free graph G, _r(G) ≤ f(h,d)· r^g(h). The proof of <Ref> builds on a good understanding of the behavior of weak coloring numbers of graphs excluding a complete graph as a minor, and also of graphs of bounded treewidth. Van den Heuvel, Ossona de Mendez, Quiroz, Rabinovich, Siebertz <cit.> proved that for every integer t≥4, every K_t-minor-free graph G satisfies _r(G)≤r+t-2t-2(t-3)(2r+1) for every integer r≥ 1. Their proof technique, specifically chordal partitions of graphs, inspired a lot of follow-up research, including our work on weak coloring numbers. 
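Written out, the bound of van den Heuvel et al. above is \binom{r+t-2}{t-2}(t-3)(2r+1), a polynomial in r of degree t-1; for instance, for t=5, and hence for every planar graph, it evaluates to \binom{r+3}{3}· 2· (2r+1)=O(r^4). We record this only to fix the shape of the bound, since the exact constants play no role in what follows.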
The base case of our main technical contribution (<Ref>) relies on the following structural result from <cit.> that underlies the upper bound on weak coloring numbers for K_t-minor-free graphs. We include a rough sketch of the proof – see <Ref>. Recall that a geodesic in a graph is a shortest path between its endpoints. Let t≥ 3 and let G be a K_t-minor-free graph. [ The statement in <cit.> assume t ≥ 4 and item <ref> bounds the number of geodesics by t-3. However the statement also holds for t=3 with t-3 replaced by max{t-3,1} in item <ref>. ] Then there is a graph H and an H-partition (V_x | x ∈ V(H)) of G together with an ordering x_1, …, x_|V(H)| of V(H) such that * G[V_i] is connected for every i ∈ [|V(H)|], in particular, H is a minor of G; * {x_j | j < i and x_jx_i ∈ E(H)} is a clique in H, for every i ∈ [|V(H)|]; * (H) ≤ t-2; * V_x_i is the union of the vertex sets of at most max{t-3,1} geodesics in G[V_x_i∪…∪ V_x_|V(H)|], for every i ∈ [|V(H)|]. Let t and r be integers with t,r≥0. For every graph G with (G)≤ t, we have _r(G)≤r+tt, as proved by Grohe, Kreutzer, Rabinovich, Siebertz, and Stavropoulos <cit.>. We will need the following slightly more precise statement that follows line-by-line from the proof in <cit.>. Let t and r be integers with t,r≥0. Let G be a graph and let σ=(x_1, …, x_|V(G)|) be an ordering of V(G) such that for every i ∈ [|V(G)|], the set {x_j | j<i and x_i x_j ∈ E(G)} is a clique of size at most t in G. Then _r(G,σ) ≤r+tt. Note that the above two lemmas easily imply the mentioned bound on _r(G) for K_t-minor-free graphs. To see this, order the vertices according to the index of a part of the H-partition that they are in, and within each part, we order the vertices arbitrarily. Now, to verify the bound, we need a simple observation on geodesics that we will prove later, see <Ref>. If a graph G has bounded treewidth, say (G)<t, then G satisfies the Helly property articulated by <Ref>. Namely, when is a family of connected subgraphs of G, then either there are d pairwise disjoint members of , or there is subset of (d-1)t vertices of G hitting all members of . One of the main difficulties that arises when trying to prove <Ref> is to find an equally useful statement as <Ref>, but for K_t-minor-free graphs. This is the motivation for <Ref>. We defer the proof of it to <Ref>. lemmageodesics There exist functions γ, δ such that for all integers t,d ≥ 1, for every K_t-minor-free graph G, for every family of connected subgraphs of G either: * there are d pairwise vertex-disjoint subgraphs in ℱ, or * there exist A ⊆ V(G) such that |A| ≤ (d-1) γ(t), and a subgraph X of G, where X is the union of at most (d-1)^2δ(t) geodesics in G-A, and for every F ∈ we have V(F) ∩ (V(X) ∪ A) ≠∅. With <Ref> in hand, we are ready to proceed with <Ref>, the key technical contribution standing behind <Ref>. The proof relies on some ideas from the proof of <Ref>, see <Ref> for a sketch of the proof. After the proof of <Ref> we complete the final argument for <Ref>. Recall the definition of τ : ℤ^2_≥ 0→ℤ: τ(0,k) = k-2, and τ(h,k) = τ(h-1,2k+1)+k+1 for every h≥ 1. Let t(h,d,k) be the number of vertices in K_k⊕ U_h,d; that is, for all h,d≥1 and k≥0, t(h,d,k)= k+d(d^h-1)/(d-1). Let ϵ: ℤ_≥ 0×ℤ_>0^2 →ℤ be the function defined by ϵ(0,d,k) = max{k-3, 1}, and ϵ(h,d,k) = max{d-1, k, (d-1)γ(t(h,d,k))+2(d-1)^2δ(t(h,d,k))-1, ϵ(h-1,d+2k,2k+1)}, for every h ≥ 1. A set S of vertices in a graph G is a subgeodesic in G if there is a supergraph G^+ of G and a geodesic P in G^+ such that S ⊆ V(P). 
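Two quick examples may clarify this definition (they are not needed for the proofs). The vertex set of any geodesic in G is a subgeodesic, with G^+=G, and any set {u,v} of at most two vertices is a subgeodesic, since in the supergraph obtained from G by adding the edge uv the one-edge path uv is a geodesic containing both. On the other hand, the three vertices of a triangle of G never form a subgeodesic, because two of them would have to lie at distance at least 2 along the geodesic while being adjacent in every supergraph of G. So a subgeodesic need not induce a connected subgraph of G; the key property of subgeodesics used later is the metric one, namely that every ball of radius r in G meets a subgeodesic in at most 2r+1 vertices.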
For all integers h,d,k,ℓ with h,d≥1 and k≥ℓ≥ 0, for every graph G such that K_k⊕ U_h,d is not a minor of G, for all pairwise distinct vertices r_1,…,r_ℓ in G, there is a graph H with an ordering x_1, …, x_|V(H)| of V(H), and an H-partition (V_x | x ∈ V(H)) of V(G) such that: * {x_j| j<i and x_j x_i∈ E(H)} is a clique in H for all i∈[|V(H)|]; * {x_1, …, x_ℓ} is a clique in H; * (H) ≤τ(h,k); * V_x_j = {r_j} for all j ∈ [ℓ]; * for each integer i with ℓ+1≤ i ≤ |V(H)|, there exists a partition (A_x_i,B_x_i) of V_x_i such that: * |A_x_i| ≤ϵ(h,d,k), and * B_x_i is the union of at most ϵ(h,d,k) subgeodesics in G[B_x_i∪⋃_j > i V_x_j]. We call a tuple (h,d,k,G,r_1,…,r_ℓ) satisfying the premise of the lemma an instance. We proceed by induction on (h,|V(G)|) in lexicographic order. Let R = {r_1, …, r_ℓ}. If h=1 and k=0, then K_k ⊕ U_h,d is the graph with d vertices and no edges. Thus |V(G)| ≤ d-1, and {V(G)} is a K_1-partition of G of width at most d-1. Then take σ = (x) where x is the vertex of K_1. Items <ref>, <ref> are clear. <Ref> holds as τ(1,0) = 0. <Ref> holds vacuously. Finally, <ref> holds by taking A_x = V(G) and B_x = ∅, since d-1 ≤ϵ(1,d,0). Now we assume (h,k) ≠ (1,0). If |V(G)-R| ≤ k, then take H = K_ℓ+1 with vertex set {x_1, …, x_ℓ+1}. Let V_x_j = {r_j} for every j ∈ [ℓ] and let V_x_ℓ+1 = V(G)-R. Note that (V_x | x ∈{x_1, …, x_ℓ+1}) is a K_ℓ+1-partition of G. Let A_x_ℓ+1 = V(G)-R and B_x_ℓ+1 = ∅. In particular, |A_x_ℓ +1| ≤(h,d,k). It is straightforward to check that <ref>-<ref> hold. Now, if |V(G)-R| > k and ℓ<k, then set r_ℓ+1, …, r_k to be distinct vertices of G - R. Therefore, from now on assume ℓ=k and V(G)- R is non-empty. Suppose that G-R is disconnected. Let C^1, …, C^m be the components of G-R. For every i ∈[m] Apply induction to the instance (h,d,k,G[V(C^i) ∪ R],r_1,…,r_k) and obtain H^i with V(H^i)={x^i_1, …, x^i_|V(H^i)|} and an H^i-partition (V^i_x | x ∈ V(H^i)) of G[V(C^i) ∪ R] satisfying <ref>-<ref>. Let H be the graph obtained from the disjoint union of H^1, …, ,H^m by identifying the vertices in {x^i_j}_i∈[m] into a single vertex x_j, for each j ∈ [k]. Then order the vertices of H by σ= (x_1, …, x_k, x^1_k+1, …, x^1_|V(H^1)|, …,x^m_k+1, …, x^m_|V(H^m)|). Finally, set V_x_j = {r_j} for each j ∈ [k], and V_x = V^i_x for every x ∈ V(H^i) ∖{x^i_1,…,x^i_k} for each i∈[m]. Items <ref> and <ref> follow by construction of H and (V_x | x∈ V(H)). In order to prove item <ref>, consider x ∈ V(H) and let N be the neighbors of x in H that are smaller than x in σ. If x ∈{x_1,…, x_k}, then clearly N is a clique in H. If x ∈ V(H^i) for some i ∈[m], let Y = N ∩{x_1, …, x_k} and Z = N - Y. Observe that Z ⊆ V(H^i) ∖{x^i_1, …, x^i_k}. Then by induction {x^i_j | j∈[k], x_j ∈ Y}∪ Z ⊆ V(H^i) is a clique in H^i and so N is a clique in H. This proves item <ref>. Note that (H)=max_i∈[m]{(H^i)}≤τ(h,k) which proves item <ref>. In order to prove item <ref>, consider x^i_a for some i ∈ [m] and a ∈ [|V(H^i)|]. Then by induction there exists a partition A_x^i_a,B_x^i_a of V^i_x^i_a such that |A_x^i_a| ≤ϵ(h,d,k) and B_x^i_a is the union of at most ϵ(h,d,k) subgeodesics in G[B_x^i_a∪⋃_b>a V^i_x^i_b]. But since components of G[B_x^i_a∪⋃_b>a V^i_x^i_b] are components of G[B_x^i_a∪⋃_y>_σ x^i_a V_y], we deduce that B_x^i_a is the union of at most ϵ(h,d,k) subgeodesics in G[B_x^i_a∪⋃_y>_σ x^i_a V_y]. This proves item <ref>. Now assume that G-R is connected. Let ℱ be the family of all connected subgraphs of G - R containing an R-attached model of K_k+1⊕ U_h-1,d. 
If ℱ contains d pairwise vertex-disjoint subgraphs, then there exist d vertex-disjoint R-attached models ^i = (M^i_x | x ∈ V(K_k+1⊕ U_h-1,d)) in G for each i ∈ [d]. Denote by v_1,…, v_k+1 the vertices of K_k+1 in K_k+1⊕ U_h-1,d and since these vertices are twins in K_k+1⊕ U_h-1,d, we can assume that M^i_v_j contains a neighbor of r_j, for all i∈[d] and j∈[k]. For each j ∈ [k], let M_j = r_j∪⋃_i∈[d]M^i_v_j. Note that for every i ∈ [d], ^i = (M_x^i | x ∈ V(K_k+1⊕ U_h-1,d) - {v_1,…,v_k}) is a model of K_1 ⊕ U_h-1,d in G. Moreover, for every j ∈ [k], i ∈ [d], and M ∈^i, M_j is adjacent to M. Therefore, ^1,…,^d together with M_1,…,M_k constitute a model of K_k⊕ U_h,d in G, a contradiction. Hence, there are no d pairwise disjoint members in ℱ. Let t = |V(K_k⊕ U_h,d)| = t(h,d,k). Note that G is K_t-minor-free. By <Ref>, there is a set A of at most (d-1)γ(t) vertices in G- R, and a set X_0 which is the union of the vertex sets of at most (d-1)^2δ(t) geodesics in G - R - A, such that A ∪ X_0 intersects every member of ℱ. If A ∪ X_0 = ∅, then take A = ∅ and X_0 an arbitrary singleton included in V(G)-R. Since G-R is connected, we can add to A∪ X_0 at most |A|+(d-1)^2δ(t)-1 geodesics in G-R to obtain a set X such that G[X] is connected. Let B = X ∖ A. Note that B is the union of at most (d-1)γ(t) + 2(d-1)^2δ(t) - 1 ≤ϵ(h,d,k) subgeodesics in G - R - A. By construction, G[X] is connected and G-X does not contain an R-attached model of K_k+1⊕ U_h-1,d. By <Ref> applied for a=k+1, we obtain an induced subgraph C of G-X with the following properties. Let C^1,… C^m be the connected components of G-X-C, let N^i=N_G-X(V(C^i)) for every i∈ [m], and let C^0 be the graph obtained from C-R by adding all missing edges between vertices of N^i-R for each i∈[m]. Then * R ⊆ V(C), * |N^i|≤ k-1 for each i ∈ [m], * C^0 is K_2k+1⊕ U_h-1,d+2k-minor-free, * C^0 is a minor of G-X-R. If h=1, then K_2k+1⊕ U_h-1,d+2k=K_2k+1. Moreover k≥ 1 since (h,k) ≠ (1,0), and so 2k+1 ≥ 3. Thus we can apply <Ref> to C^0, which is K_2k+1-minor-free, and obtain a graph H^0 with (H^0)≤ 2k-1 =τ(0,2k+1) and an H^0-partition (V^0_x | x ∈ V(H^0)) with an ordering x_0,1, …, x_0,|V(H^0)| of V(H^0) such that for every p ∈ [|V(H^0)|], V^0_x_0,p is the union of at most max{2k-2,1}≤ϵ(0,d+2k,2k+1) geodesics in C^0[V^0_x_0,p∪…∪ V^0_x_0,|V(H^0)|]. Then set A^0_x_0,i = ∅ and B^0_x_0,i=V^0_x_0,i for every i ∈ [|V(H^0)|]. If h>1, then apply induction to the instance (h-1,d+2k,2k+1,C^0,∅). In both cases, we obtain a graph H^0 with (H^0)≤τ(h-1,2k+1) and an H^0-partition (V^0_x | x ∈ V(H^0)) of C^0 with an ordering σ^0 = (x_0,1, …, x_0,|V(H^0)|) of V(H^0) such that for every p ∈ [|V(H^0)|], V^0_x_0,p has a partition (A^0_x_0,p,B^0_x_0,p) such that |A^0_x_0,p| ≤ϵ(h-1,d+2k,2k+1) ≤ϵ(h,d,k) and B^0_x_0,p is the union of at most ϵ(h-1,d+2k,2k+1)≤ϵ(h,d,k) subgeodesics in C^0[B^0_x_0,p∪⋃_q>pV^0_x_0,q]. For every i ∈ [m], the graph N^i-R is a clique in C^0. Hence, the parts containing vertices in N^i-R form a clique in H^0. Fix some i ∈ [m]. Let G^i be the graph obtained from G[V(C^i) ∪ N^i ∪ X] by contracting X into a single vertex z^i. Note that G^i is a minor of G and therefore G^i has no model of K_k⊕ U_h,d. Since |N^i|≤ k-1, there exists j∈[k] such that r_j∉N^i and therefore r_j∉V(G^i). Thus, |V(G^i)|<|V(G)|. Let R^i = N^i ∪{z^i}, so |R^i|≤ k-1+1 = k. Now, apply induction to the instance (h,d,k,G^i,R^i). 
It follows that there is a graph H^i with (H^i)≤τ(h,k) and an H^i-partition (V^i_x | x ∈ V(H^i)) of G^i and an ordering σ_i = (x_i,p)_p ∈ [|V(H^i)|] of V(H^i) such that for each j∈ [|R^i|] the set V^i_x_i,j is a singleton, ⋃_j∈[|R^i|] V^i_x_i,j=R^i, the set {x_i,1, …,x_i,|R^i|} is a clique in H^i, and for every integer p with |R^i| < p ≤ |V(H^i)|, the set V^i_x_i,p has a partition (A^i_x_i,p,B^i_x_i,p) such that |A^i_x_i,p| ≤ϵ(h,d,k) and B^i_x_i,p is the union of at most ϵ(h,d,k) subgeodesics in G^i[B^i_x_i,p∪⋃_q>p V^i_x_i,q]. Finally, define the graph H as follows. Start with the disjoint union of H^0 and H^1, …, H_m. Add a clique {x_1, …, x_k,z} of k+1 new vertices, each adjacent to every vertex of H^0. For every i ∈ [m], let f_i be a mapping of {x_i,1,…, x_i,|R^i|} to V(H^0) ∪{x_1, …, x_k,z} defined as follows: f_i(x_i,j) = w if w ∈ V(H^0) and V^i_x_i,j⊆ V^0_w, x_j' if j' ∈ [k] and V^i_x_i,j = {r_j'}, z if V^i_x_i,j =z^i, for each j ∈[|R^i|]. Now, identify x_i,j with f_i(x_i,j) for every i ∈ [m] and every j ∈ [|R_i|]. This identification step can be seen as a result of the sequence of clique-sums between {x_1, …, x_k,z}⊕ H^0 and the graphs H^i according to f_i for i∈[m]. This completes the definition of H. Note that (H) ≤max{({x_1, …, x_k,z}⊕ H^0),max_i∈[m](H^i)} ≤max{k+1+τ(h-1,2k+1), τ(h,k)} =τ(h,k). Now define an H-partition (V_x | x ∈ V(H)) of G, where for each x ∈ V(H), V_x = r_j if x=x_j for j∈[k], X if x=z, V_x^0 if x ∈ V(H^0), V_x^i if x ∈ V(H^i) - {x_i,1, …, x_i,|R^i|}. Moreover, order the vertices of H by σ=(x_1, …, x_k, z, x_0,1, …, x_0,|V(H^0)|, …, x_m,|R^m|+1,…, x_m,|V(H^m)|). In order to prove item <ref>, consider a vertex x ∈ V(H), and let N = {y ∈ V(H) | y <_σ x, xy ∈ E(H)}. If x ∈{x_1, …, x_k,z}, then N ⊆{x_1, …, x_k,z} and so N is a clique in H. If x ∈ V(H^0), then N ∖{x_1, …, x_k,z} is a clique in H^0, thus N is a clique in {x_1, …, x_k,z}⊕ H^0, and so in H. If x ∈ V(H^i)-{x_i,1, …, x_1,|R^i|} for some i ∈ [m], let N' = N ∩ V(H^0) and N” = N ∖ N'. Then f_i^-1(N') ∪ N” = {y ∈ V(H^i) | y <_σ_i x, xy ∈ E(H^i)} is a clique in H^i, and so N is a clique in H. This proves item <ref>. <Ref> follows from the definition of H. As mentioned before (H)≤τ(h,k) so item <ref> holds. <Ref> follows from the definition of (V_x | x ∈ V(H)). For <ref>, for each x ∈ V(H) - {x_1, …, x_k}, define A_x,B_x = A,B if x=z, A^0_x,B^0_x if x∈ V(H^0), A^i_x,B^i_x if x∈ V(H^i)-x_i,1,…,x_i,|R^i| for i∈[m]. Consider now some x ∈ V(H)-{x_1, …, x_k}. First observe that |A_x| ≤ϵ(h,d,k). It remains to show that B_x is the union of at most ϵ(h,d,k) subgeodesics in G[B_x ∪⋃_y >_σ x V_y]. If x = z, this follows from the definition of A and B. If x ∈ V(H^0), then there is a supergraph C^+ of C^0 such that B_x=B_x^0 is in the union of the vertex sets of at most ϵ(h,d,k) geodesics in C^+[B^0_x ∪⋃_y >_σ^0x V^0_y]. Let C^++ be obtained from the disjoint union of C^+ and C^1, …, C^m by adding every edge between V(C^i) and V(C^0) that is in G, for each i∈[m]. Since N^i ∩ V(C^0) is a clique in C^0 for every i∈[m], for every two vertices u, v in C^+, _C^+(u,v) = _C^++(u,v). Hence B_x is the union of the vertex sets of at most ϵ(h,d,k) geodesic in C^++, which is a supergraph of G[B_x ∪⋃_y >_σ X V_y]. This shows that B_x is the union of at most ϵ(h,d,k) subgeodesics in G[B_x ∪⋃_y >_σ X V_y], as desired. Finally, if x ∈ V(H^i) ∖{x_i,1, …, x_i,|R^i|}, then B_x=B^i_x is the union of at most ϵ(h,d,k) subgeodesics in C^i[B^i_x ∪⋃_y>_σ^i x V^i_y]. 
Since components of C^i[B^i_x ∪⋃_y>_σ^i x V^i_y] are components of G[B_x ∪⋃_y>_σ x V_y], we deduce that B_x is the union of at most ϵ(h,d,k) geodesics in G[B_x ∪⋃_y>_σ x V_y]. This proves <ref> and concludes the proof. Let G be a graph and let r be a non-negative integer. For every subgeodesic S in G and for every vertex v ∈ V(G), |N^r_G[v] ∩ S| ≤ 2r+1. Let S be a subgeodesic of G. Let G^+ be a supergraph of G and let P be a geodesic with endpoints s,t in G^+ such that S ⊆ V(P). Let v ∈ V(G). Suppose for contradiction that 2r+2≤ |N^r_G[v] ∩ S|. However, N^r_G[v] ∩ S ⊆ N^r_G^+[v] ∩ S ⊆ N^r_G^+[v] ∩ V(P). Let x and y be the vertices in N^r_G^+[v] ∩ V(P) closest to s and t, respectively. Since N^r_G^+[v] ∩ V(P) ⊆ xPy, and |N^r_G^+[v] ∩ V(P)|≥ 2r+2, we have _P(x,y) ≥ 2r+1. However, since P is a geodesic in G^+, we have 2r+1≤_P(x,y) = _G^+(x,y) ≤_G^+(x,v) + _G^+(v,y) ≤ 2r, a contradiction. Let h,d,r≥ 1 and let G be a U_h,d-minor-free graph. We will show that _r(G) ≤ 2ϵ(h,d,0) · (2r+1) ·\binom{τ(h,0)+r}{τ(h,0)}, which implies the theorem. By <Ref> applied to G with ℓ=k=0, there is a graph H with an ordering σ_H=(x_1, …, x_|V(H)|) of V(H), and an H-partition (V_x | x ∈ V(H)) of V(G) such that <ref>-<ref> hold. Let σ be a total order on V(G) such that for all i,j ∈ [|V(H)|] and u,v ∈ V(G): * if i < j and u ∈ V_x_i, v ∈ V_x_j, then u<_σ v; * if u ∈ A_x_i, v ∈ B_x_i, then u<_σ v. Let u ∈ V(G). Consider a vertex v ∈_r[G,σ,u]. Let i,j ∈ [|V(H)|] be such that u ∈ V_x_j, v ∈ V_x_i. Then x_i ∈_r[H,σ_H,x_j]. In particular i≤ j. By <Ref>, |_r[H,σ_H,x_j]| ≤\binom{r+τ(h,0)}{τ(h,0)}. Moreover, V_x_i = A_x_i∪ B_x_i where |A_x_i| ≤ϵ(h,d,0) and B_x_i is the union of the vertex sets of at most ϵ(h,d,0) subgeodesics in G[B_x_i∪ V_x_i+1∪…∪ V_x_|V(H)|]. Since vertices r-weakly reachable from u in B_x_i are in N^r_G[B_x_i∪ V_x_i+1∪…∪ V_x_|V(H)|][u], we deduce by <Ref> that |_r[G,σ,u] ∩ B_x_i| ≤ϵ(h,d,0)·(2r+1). Hence |_r[G,σ,u] ∩ V_x_i| = |_r[G,σ,u] ∩ A_x_i| + |_r[G,σ,u] ∩ B_x_i| ≤ϵ(h,d,0) + (2r+1) ·ϵ(h,d,0) ≤ (2r+1) · 2ϵ(h,d,0). It follows that |_r[G,σ,u]| ≤\binom{r+τ(h,0)}{τ(h,0)}· (2r+1) · 2ϵ(h,d,0). This proves the theorem. § PROOF OF LEMMA <REF> This section proves the following lemma. * In short, <Ref> follows from a result by Pilipczuk and Siebertz <cit.>, see <Ref>, which we lift in order to accommodate vortical decompositions and clique-sums. First, we recall the Graph Minor Structure Theorem of Robertson and Seymour <cit.>, which says that every graph in a proper minor-closed class can be constructed using four ingredients: graphs on surfaces, vortices, apex vertices, and tree-decompositions. The Euler genus of a surface with h handles and c cross-caps is 2h+c. The Euler genus of a graph G is the minimum integer g≥ 0 such that there is an embedding of G in a surface of Euler genus g; see <cit.> for more about graph embeddings in surfaces. Let G be a graph and let Ω be a cyclic permutation of a subset of V(G). An interval of Ω is a sequence (v_1, …, v_ℓ) of vertices of G such that v_i+1 is the successor of v_i on Ω for every i ∈ [ℓ-1]. A vortical decomposition of G is a pair (Ω, (B_x⊆ V(G) | x∈ V(Ω))) such that: * x ∈ B_x, for every x∈ V(Ω), * for each edge uv∈ E(G) there is x∈Ω with u,v∈ B_x, and * for each vertex v∈ V(G) the set of vertices x∈ V(Ω) with v∈ B_x induces a non-empty interval of Ω. The width of a vortical decomposition (Ω,(B_x⊆ V(G) | x∈ V(Ω))) is defined to be max_x∈ V(Ω) |B_x|.
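To make the preceding definition concrete, the following small Python sketch (our own illustration, not part of the paper) checks the three conditions of a vortical decomposition and returns its width; the encoding of Ω as a list of vertices in cyclic order, and all helper names, are assumptions introduced only for this example.

def is_cyclic_interval(positions, n):
    # True iff the given positions form a non-empty interval of the cyclic order 0, ..., n-1.
    s = set(positions)
    if not s:
        return False
    if len(s) == n:
        return True
    starts = [p for p in s if (p - 1) % n not in s]
    return len(starts) == 1

def vortical_width(edges, omega, bags):
    # Width of (omega, bags) if it is a vortical decomposition of the graph with edge list
    # `edges` (V(G) is taken to be the vertices appearing in `edges` or `omega`); None otherwise.
    n = len(omega)
    pos = {x: i for i, x in enumerate(omega)}
    if any(x not in bags[x] for x in omega):                          # condition (1): x lies in B_x
        return None
    for u, v in edges:                                                # condition (2): every edge in some bag
        if not any(u in bags[x] and v in bags[x] for x in omega):
            return None
    vertices = {u for e in edges for u in e} | set(omega)
    for v in vertices:                                                # condition (3): bags of v form an interval
        if not is_cyclic_interval([pos[x] for x in omega if v in bags[x]], n):
            return None
    return max(len(bags[x]) for x in omega)

# Toy example: the 4-cycle a-b-c-d with Omega = (a, b, c, d) and bags of size 2.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
omega = ["a", "b", "c", "d"]
bags = {"a": {"a", "d"}, "b": {"a", "b"}, "c": {"b", "c"}, "d": {"c", "d"}}
print(vortical_width(edges, omega, bags))  # prints 2

The almost-embeddability definition that follows uses such decompositions, one per vortex, each of width at most k.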
For any integers g,p,k,a≥0, a graph G is (g,p,k,a)-almost-embeddable if for some set A⊆ V(G) with |A|≤ a, there are graphs G_0,G_1,…,G_s for some 0 ≤ s ≤ p, cyclic permutations Ω_1,…,Ω_s of pairwise disjoint subsets of V(G), and a surface Σ of Euler genus at most g such that: * G-A = G_0∪ G_1∪⋯∪ G_s; * G_1, …, G_s are pairwise vertex-disjoint and non-empty; * for each i∈[s], there is a vortical decomposition (Ω_i,(B^i_x| x∈ V(Ω_i))) of (G_i,Ω_i) of width at most k; * G_0 is embedded in Σ; * there are s pairwise disjoint closed discs in Σ whose interiors Δ_1,…,Δ_s are disjoint from the embedding of G_0, and such that the boundary of Δ_i meets the embedding of G_0 exactly in vertices of Ω_i, and the cyclic ordering of Ω_i is compatible with the ordering of the vertices around the boundary of D_i, for each i∈[s]. The vertices in A are called apex vertices. They can be adjacent to any vertex in G. For an integer m≥ 0, a graph is m-almost-embeddable if it is (m,m,m,m)-almost-embeddable. Let G be a graph, let =(T,(B_x| x∈ V(T))) be a tree-decomposition of G. For x ∈ V(T), the torso of B_x, denoted by (G,ℬ,x), is the graph obtained from G[B_x] by adding edges so that B_x∩ B_y is a clique for each neighbour y of x in T. We now state the Graph Minor Structure Theorem, which is the cornerstone of structural graph theory. There exists a function α such that for every positive integer t, for every K_t-minor-free graph G, there exists a tree-decomposition ℬ=(T,(B_x | x ∈ V(T))) of G such that (G,ℬ,x) is α(t)-almost-embeddable, for every x ∈ V(T). The following result of Pilipczuk and Siebertz <cit.> is the starting point of our proof of <Ref>. There exists a function ζ such that for every graph G of Euler genus at most g, there is a partition 𝒫 of G into geodesics in G such that (G/𝒫) < ζ(g). Pilipczuk and Siebertz <cit.> proved <Ref> with ζ(g)= 16g +9, which was later improved to ζ(g)= 2g +7 by Distel et al. <cit.>. The next lemma lifts the previous statement to (m,m,m,0)-almost-embeddable graphs. This type of argument is folklore in the structural graph theory community. We use the following convenient notation for manipulating paths in a graph. Let G be a graph. A walk in G is a sequence (v_1, …, v_m) of vertices in G such that v_i v_i+1∈ E(G) for each i ∈ [m-1]. Let U =(u_1, …, u_ℓ) and W=(w_1, …, w_m) be two walks in G such that u_ℓ w_1 ∈ E(G) or u_ℓ=w_1. The concatenation of U and W, denoted by UW, is the walk (u_1, …, u_ℓ,w_1, …, w_m) if u_ℓ w_1 ∈ E(G), or (u_1, …, u_ℓ,w_2, …, w_m) if u_ℓ = w_1. Let P be a path in G and let u,v be two vertices of P. Define uPv to be the subpath of P from u to v (which is also a walk in G). This allows us to write expressions of the form aPbcQdRe given that: a,b,c,d,e are vertices in the graph; P is a path containing a and b; bc is an edge; Q is a path containing c and d; R is a path containing d and e. There is a function β such that for every integer m≥ 0, for every (m,m,m,0)-almost-embeddable graph G, there is a partition 𝒫 of G into geodesics in G such that (G/𝒫) < β(m). Let β(m)=ζ(m+2m-2)(11+3m) for all integers m≥0, where ζ(·) is the function given by <Ref>. Fix m≥0 and let g, p, k be integers with 0≤ g,p,k≤ m. Let G be a (g,p,k,0)-almost-embeddable graph. If G is not connected, then we can process each component independently and take the union of the resulting partitions. Now assume that G is connected. 
Let s, G_0,G_1,…,G_s, Ω_1, …, Ω_s, Σ, witness the fact that G is (g,p,k,0)-almost-embeddable, and fix a vortical decomposition (Ω_i,(B^i_x| x∈ V(Ω_i))) of G_i of width at most k, for every i ∈ [s]. For convenience, we denote by Ω the permutation ⋃_i ∈ [s]Ω_i. By definition, Ω_1, …, Ω_s are pairwise disjoint, hence, for x ∈ V(Ω), we write B_x = B_x^i for the unique i ∈ [s] such that x ∈ V(Ω_i). Let G' be G_0 if s=0, and otherwise let G' be a graph obtained from G_0 as follows: for every i∈ [s], for every pair u,v of consecutive vertices on Ω_i, if uv ∉ E(G_0), then add the edge uv (note that this is compatible with the embedding of G_0); next pick arbitrarily a vertex r∈ V(Ω) and for all v∈ V(Ω)-r, if rv ∉ E(G_0), then add the edge rv. Note that we may add s-1 handles to Σ, and embed G' on the resulting surface, thus, G' has Euler genus at most g + 2(s-1) ≤ g + 2p-2. _G'(u,v) ≤_G(u,v) + 1, for every u,v ∈ V(G'). Let P be a geodesic in G with endpoints u and v. If P intersects V(Ω) in at most one vertex, then P is a path between u and v in G', and so _G'(u,v) ≤(P) = _G(u,v). Now suppose that P contains at least two vertices in V(Ω). Let u', v' be such vertices that are closest in P' to u and v, respectively. Then uPu' r v'Pv is a walk from u to v in G' of length at most (P)+1 = _G(u,v) + 1, and so _G'(u,v) ≤_G(u,v) + 1. For every geodesic P' in G', P' contains at most three vertices in V(Ω). Let P' be a geodesic in G' between u and v. Suppose to the contrary that P' has at least four vertices in V(Ω), and let u', v' be such vertices that are closest in P' to u and v, respectively. Now u' and v' can be connected by a two-edge path via r in G'. Therefore, Q'=uP'u'rv'P'v is a walk in G', and since there are at least two vertices on P' between u' and v', the walk Q' is shorter than P', a contradiction. For every geodesic P' in G', the vertex set of P' is the union of the vertex sets of at most six disjoint geodesics in G, and moreover, each of these geodesics contains at most one vertex in V(Ω). Let P' be a geodesic in G' between u and v. Since P' has at most three vertices in Ω, it can be split into the disjoint union of at most three geodesics in G' such that each part has at most one vertex in Ω. Consider now a geodesic Q in G' with at most one vertex in Ω. The key property of Q is that it is also a path G. We are going to prove that Q can be split into at most two geodesics in G. Let a,b ∈ V(G_0) be the endpoints of Q. By a previous claim, (Q)=_G'(a,b) ≤_G(a,b) + 1. Since Q is a path in G we also have _G(a,b) ≤(Q). Altogether, (Q) ∈_G(a,b),_G(a,b)+1. If (Q)=_G(a,b) then Q is a geodesic in G and there is nothing to prove. Now suppose that ℓ=(Q)=_G(a,b)+1. Let (q_0,…,q_ℓ) be the walk along Q with q_0=a and q_ℓ=b. For each i∈0,…,ℓ consider d_i=_G(q_0,q_i). Note that d_0=0, d_ℓ=ℓ-1, and d_i-d_i-1∈-1,0,1 for all i∈[ℓ] . These three conditions force that d_i-d_i-1=1 for all i∈[ℓ] except one value, say j, for which d_j-d_j-1=0. It follows, that _G(q_0,q_j-1) = j-1, and _G(q_j,q_ℓ) = ℓ - j, hence, aQq_j-1 and q_jQb are geodesics in G. This completes the proof that Q can be split into at most two geodesics in G. Altogether, P' is split into at most three times two geodesics in G, as desired. Since G' has Euler genus at most g + 2(s-1) ≤ g + 2p-2, by <Ref>, there is a partition 𝒫' of G' into geodesics in G' such that (G'/𝒫') < ζ(g+2p-2). Let (T,(W'_x | x ∈ V(T))) be a tree-decomposition of G'/𝒫' of width at most ζ(g+2p-2). 
For each P' ∈𝒫', let S(P') be a set of at most six geodesics in G whose union of vertex sets is V(P'), and such that each of them intersects V(Ω) in at most one vertex. Define a partition 𝒫 of V(G) into geodesics in G by 𝒫 = ⋃_P' ∈𝒫' S(P') ∪{{u}| u ∈ V(G) ∖ V(G_0)} . We claim that (G/𝒫) < ((G'/𝒫')+1)·(6+3k) ≤ζ(g+2p-2)·(6+3k). The family 𝒫 is a partition of G and the family 𝒫' is a partition of G', thus, for each u ∈ V(G) and v ∈ V(G') we can define P_u ∈𝒫 and P_v' ∈𝒫' to be such that u ∈ P_u and v ∈ P_v'. For each x ∈ V(T), consider the following subsets of 𝒫: W_x^1 = ⋃_P' ∈ W_x' S(P'), W_x^2 = ⋃_P' ∈ W_x'⋃_w ∈ V(Ω) ∩ V(P'){ P_v | v ∈ B_w }, W_x =W_x^1 ∪ W_x^2. Clearly, |W_x^1| ≤ 6|W_x'|. Moreover, we proved that every geodesic in G' has at most three vertices in V(Ω), thus, |W_x^2| ≤ 3k|W_x'|. It follows that |W_x| ≤ |W_x'| · (6+3k). Therefore, if we show that (T,(W_x | x ∈ V(T))) is a tree-decomposition of G/𝒫, then indeed, (G/𝒫) < ((G'/𝒫')+1) · (6+3k). (T,(W_x | x ∈ V(T))) is a tree-decomposition of G/𝒫. Let u,v ∈ V(G) be such that P_u and P_v are distinct, and suppose that uv ∈ E(G). If uv ∈ E(G_0), then there exists x ∈ V(T) such that P_u',P_v' ∈ W'_x. Moreover, P_u ∈ S(P_u') and P_v ∈ S(P_v'). It follows that P_u,P_v ∈ W_x^1 ⊆ W_x. If uv ∈⋃_i ∈ [s]E(G_i) then there exists w ∈ V(Ω) such that u,v ∈ B_w. We have P_w' ∈ W_x' for some x ∈ V(T), and this yields P_u,P_v ∈ W_x^2 ⊂ W_x. It remains to show that for every P ∈𝒫, the set X_P = {x ∈ V(T) | P ∈ W_x} induces a non-empty, connected subset of V(T). For every P' ∈𝒫', let X'_P' be defined as {x ∈ V(T) | P' ∈ W_x'}. Since (T,(W'_x | x ∈ V(T))) is a tree-decomposition of G'/𝒫', we have that X'_P' induces a non-empty, connected subset of V(T). Observe that the union ⋃_w ∈ V(H') X'_P'_w, where H' is a connected subgraph of G', also induces a non-empty, connected subset of V(T). For each u ∈ V(G), let I_u = {w ∈ V(Ω) | u ∈ B_w}. Since V(G_1), …, V(G_s) are pairwise disjoint, and (Ω_i, (B_x | x ∈ V(Ω_i))) is a vortical decomposition of G_i for each i ∈ [s], I_u is either empty, or is an interval in some Ω_i. Recall that we added cycle edges in G' representing each Ω_i, and hence, I_u induces a connected subgraph in G'. First, suppose that P = {u} for some u ∈ V(G)∖ V(G_0). By definition, X_{u} = ⋃_w∈ I_uX'_P_w'. Since I_u is connected in G', we conclude that X_{u} induces a non-empty, connected subset of V(T). Now, suppose that P ∈ S(P') for some P' ∈𝒫'. Recall that P contains at most one vertex in V(Ω). If V(P) ∩ V(Ω) = ∅, then X_P = X'_P', which induces a non-empty, connected subtree of T. Otherwise, let w be the unique vertex in the intersection V(P) ∩ V(Ω). Then X_P = X'_P'∪⋃_v ∈ I_w X'_P_v'. Note that w ∈ P', thus P' = P_w', and since w ∈ I_w, we have X_P = ⋃_v ∈ I_w X'_P_v'. This shows that X_P induces a non-empty, connected subtree of T. This completes the proof that (G/𝒫)<β(m)=ζ(m+2m-2)(11+3m). The next lemma is an immediate corollary of <Ref> and <Ref>. The function β is the same as in <Ref>. There exists a function β such that for all integers m≥ 0 and d ≥ 1, for every (m,m,m,0)-almost-embeddable graph G, for every family ℱ of connected subgraphs of G, either: * there are d pairwise vertex-disjoint subgraphs in ℱ, or * there exists a subgraph X of G that is the union of at most (d-1)β(m) geodesics in G, and for every F ∈ℱ we have V(F) ∩ V(X) ≠∅. Consider a graph embedded in a fixed surface. It is clear that one can introduce parallel edges, subdivide edges of the graph, and the resulting graph still has an embedding into the surface.
The point of the following observation is that we can do the same with (g,p,k,a)-almost-embeddable graphs and the resulting graph has the same parameters except for the width of the vortices that may go up by +2. This is folklore in the structural and algorithmic graph theory community. Let g,p,k,a be non-negative integers, and let G be a (g,p,k,a)-almost-embeddable graph. For every graph G' obtained from G by duplicating some edges and then subdividing some edges, G' is (g,p,k+2,a)-almost-embeddable. The following observation says that almost-embeddability is preserved under taking subgraphs but, surprisingly, this may increase the width of the vortices. A proof can be found, for example, in <cit.>. Let g,p,k,a be non-negative integers, and let G be a (g,p,k,a)-almost-embeddable graph. Let G' be a subgraph of G. Then G' is (g,p,2k,a)-almost-embeddable. We have all the tools in hand to prove <Ref>. Let s, G_0,G_1,…,G_s, Ω_1, …, Ω_s be as in the definition, and fix a vortical decomposition (B^i_x| x∈ V(Ω_i)) of (G_i,Ω_i) of width at most k, for every i ∈ [s]. For every edge e=uv ∈ E(G), let m_e ∈ℤ_>0 and let (d_e,j)_j ∈ [m_e]∈ℤ_≥ 0^m_e be such that G' is obtained from G by replacing e by m_e copies e_1, …, e_m_e and subdividing every edge e_j for j ∈ [m_e] into a path w^e,j_0=u,w^e,j_1, …, w^e,j_d_e,j,w^e,j_d_e,j+1=v. We will define G'_0, G'_1, …, G'_s, Ω'_1, …, Ω'_s, and a vortical decomposition (B'^i_x| x∈ V(Ω'_i)) of (G'_i,Ω'_i) of width at most k+2, for every i ∈ [s], witnessing the fact that G' is (g,p,k+2,0)-almost-embeddable. We initialize Ω'_i to Ω_i, B'^i_x to B^i_x for every i∈ [s] and for every x ∈ V(Ω_i), and G'_i to G_i for every i ∈ [s]. For every edge e ∈ E(G_0), replace e by m_e copies e_1, …, e_m_e and subdivide d_e,j times the edge e_j for every j ∈ [m_e]. Note that the resulting graph G_0' is still embeddable in Σ. Consider now an edge e=uv ∈ E(G_i) for i ∈ [s]. For every j ∈ [m_e], consider the copy e_j of e. Subdivide the edge e_j in G'_i into a path w^e,j_0=u, w^e,j_1, …, w^e,j_d_e,j+1=v. Since (B^i_x | x ∈ V(Ω_i)) is a vortical decomposition of (G_i,Ω_i), there is a vertex w ∈ V(Ω_i) such that u,v ∈ B^i_w. Then add new vertices w^e,j_1, …, w^e,j_d_e,j just after w on Ω'_i, in this order, with the bags B'^i_w^e,j_ℓ = B^i_w ∪{w^e,j_ℓ, w^e,j_ℓ+1} for every ℓ∈ [d_e,j]. Clearly G' = G'_0 ∪ G'_1 ∪…∪ G'_s. Observe that the width of (B'^i_x| x∈ V(Ω'_i)) is at most two plus the width of (B^i_x| x∈ V(Ω_i)), and so at most k+2. It remains to prove that (B'^i_x| x∈ V(Ω'_i)) is a vortical decomposition of (G'_i,Ω'_i). * For every w ∈ V(Ω'_i), w ∈ B'^i_w. * For every edge w^e,j_ℓ w^e,j_ℓ+1∈ E(G'_i), we have w^e,j_ℓ, w^e,j_ℓ+1∈ B'^i_w^e,j_ℓ+1. * For every vertex v ∈ V(G_i), {x ∈ V(Ω'_i) | v ∈ B'^i_x} is an interval on Ω'_i. * For every vertex w^e,j_ℓ∈ V(G'_i)∖ V(G_i), {x ∈ V(Ω'_i) | w^e,j_ℓ∈ B'^i_x} = {w^e,j_ℓ-1,w^e,j_ℓ,w^e,j_ℓ+1} is an interval on Ω'_i. This shows that G' is (g,p,k+2,0)-almost-embeddable. Let t,d be positive integers, let G be a K_t-minor-free graph, and let ℱ be a family of connected subgraphs of G. If ℱ is empty, then the result holds since β(t),γ(t) ≥ 0. Now assume that ℱ is non-empty. Without loss of generality, we can assume that each member of ℱ is an induced subgraph. Therefore, with a slight abuse of notation, from now on we refer to ℱ as a family of subsets of V(G) such that each induces a connected graph. Suppose that <ref> does not hold; that is, ℱ has no d pairwise disjoint members. In particular, d≥2. Let α be the function from <Ref>.
By the theorem, there exists a tree-decomposition ℬ = (T,(B_x | x ∈ V(T))) of G such that (G,ℬ,x) is α(t)-almost-embeddable, for every x ∈ V(T). For each x ∈ V(T), let A_x be the apex set of (G,ℬ,x) (that is, A_x is a set of at most α(t) vertices such that (G,ℬ,x) - A_x is (α(t),α(t),α(t),0)-almost-embeddable). By <Ref>, there exists an integer d' < d and x_1, …, x_d'∈ V(T) such that for every F ∈ℱ, F intersects ⋃_i∈[d'] B_x_i. Let A = ⋃_i∈[d'] A_x_i. Note that |A| ≤ (d-1)α(t), so it suffices to take γ = α. For each i ∈ [d'], let ℱ_i' be the family of all F ∈ℱ disjoint from A that intersect B_x_i. We now sketch the next steps of the proof, see also <Ref>. First, for each i ∈ [d'] we modify the graph G[B_x_i] to obtain an auxiliary graph G_i^* that is (α(t),α(t),2α(t)+2,0)-almost-embeddable. Then, we carefully project the family ℱ'_i into G_i^*. In particular, when two sets from ℱ'_i intersect, their projections will intersect as well. Next, we will apply <Ref> to the auxiliary graph to obtain a hitting set for the projected ℱ'_i being a union of a small number of geodesics in G_i^*. Finally, we will lift the hitting set to the initial graph, perhaps adding some more geodesics. Taking the union of hitting sets over all i ∈ [d'], we will finish the proof. Fix some i ∈ [d'], let B = B_x_i - A, and let ℱ' = ℱ_i'. We say that two distinct vertices u,v ∈ B are interesting if u and v are in the same component of G-A and there exists y ∈ V(T) with y ≠ x_i such that u,v ∈ B_y. Let ℐ be the set of all 2-subsets of vertices in B that are interesting. We construct the auxiliary graph G^* as follows. We start the construction with G[B]. For all {u,v}∈ℐ, if u and v are adjacent in G[B], then we call this length-one path P_uv or P_vu; if u and v are not adjacent in G[B], then we add to the graph a path connecting u and v of length _G-A(u,v) where all internal vertices are new, i.e. disjoint from all the rest. Again, we call this path P_uv or P_vu. Moreover, for all {u,v}∈ℐ, we add to the graph a path connecting u and v of length _G-A(u,v)+1 where all internal vertices are new. We call this path P'_uv or P'_vu. This completes the construction of G^*. Note that G^* is obtained from (G,ℬ,x_i) by removing some vertices (from A), duplicating and perhaps subdividing some edges. Therefore, by <Ref>, the graph G^* is (α(t),α(t),2α(t)+2,0)-almost-embeddable. Now, we will define a family ℱ^* of connected subgraphs of G^* that is roughly a projection of ℱ' into G^*. For a path P, let int(P) denote the subpath of P induced by all internal vertices of P. For every F ∈ℱ', define F^* = (F ∩ B) ∪⋃_{u,v}∈ℐ, u,v ∈ F V(P_u,v) ∪⋃_{u,v}∈ℐ, u ∈ F V(int(P_u,v')), and ℱ^* = {F^* | F∈ℱ'}. The following claim captures the critical properties of ℱ^*. Let E,F ∈ℱ'. Then: * The graph G^*[F^*] is connected. * If E intersects F then E^* intersects F^*. Let u,v ∈ F^*. We will show that there is a path from u to v in G^*[F^*] which will prove <ref>. If u∉B then u lies on one of the added paths in the construction of G^*. Since each such path in F^* has at least one endpoint in B, we can connect u in F^* with a vertex in F^*∩ B. Therefore, we assume that both u and v are in F^*∩ B. Since F∈ℱ', there is a walk P connecting u and v in G[F]. Recall that F is disjoint from A, and so is P. We split P into segments with endpoints in vertices from B, i.e., let w_0,…,w_ℓ be vertices in V(P)∩ B such that P= w_0Pw_1⋯ Pw_ℓ-1Pw_ℓ, where w_0=u, w_ℓ=v and w_j-1Pw_j has no internal vertex in B for each j∈[ℓ]. Note that w_j-1Pw_j could be just a one-edge path for some j∈[ℓ].
We claim that we can replace each section w_j-1Pw_j by a path connecting w_j-1 and w_j in G^*[F^*]. Fix j∈[ℓ]. If w_j-1, w_j are adjacent in G[B], then they are also adjacent in G^*, as desired. If w_j-1 and w_j are not adjacent in G[B], the set X = {y ∈ V(T) | B_y ∩ V(int(w_j-1Pw_j)) ≠∅} induces a non-empty connected subset of V(T). Moreover, since w_j-1 and w_j are both adjacent to a vertex in int(w_j-1Pw_j), there are vertices y,y' ∈ X such that w_j-1∈ B_y and w_j ∈ B_y'. Since X ∪{x_i} is acyclic in T, we have y=y', and so w_j-1,w_j ∈ B_y. This shows that {w_j-1,w_j}∈ℐ. Thus, P_w_j-1,w_j was added to F^* and we can use this path to connect w_j-1 and w_j in G^*[F^*]. This completes the proof that there is a path from u to v in G^*[F^*]. Assume that E, F ∈ℱ' and that E intersects F. To prove <ref>, we will show that E^*∩ F^* is non-empty as well. Fix w∈ E∩ F. If w∈ B, then w ∈ E^*∩ F^*, and we are done. Hence, we suppose that w∉B. Let P be a path in G[E] from a vertex u of B to w with no internal vertex in B. Let Q be a path in G[F] from w to a vertex v of B with no internal vertex in B. If u=v, then u ∈ E^* ∩ F^* and we are done. Otherwise we claim that {u,v}∈ℐ. Indeed, int(PQ) is a non-empty connected subgraph of G, and so X = {x ∈ V(T)| V(int(PQ)) ∩ B_x ≠∅} is a non-empty connected subset of V(T). Then, since u and v both have a neighbor in int(PQ), we deduce that u ∈ B_y, v ∈ B_y' for some y,y' ∈ X ∩ N_T(x_i). But since T[X∪{x_i}] is a tree, we must have y=y', and so u,v ∈ B_y. This shows that {u,v}∈ℐ. Thus, V(int(P'_u,v)) ⊆ E^* ∩ F^* and so E^* ∩ F^* ≠∅. By the claim, the family ℱ^* is a family of connected subgraphs of G^* containing no d pairwise vertex-disjoint members. Therefore, by <Ref>, there exists a subgraph X^* of G^* such that X^* is the union of a family ℛ^* of at most (d-1)β(2α(t) + 2) geodesics in G^* and for every F ∈ℱ' we have V(F^*) ∩ V(X^*) ≠∅. Let R^* ∈ℛ^*. Note that if one of the endpoints of R^* lies on int(P_u,v) for some {u,v}∈ℐ, then one can remove int(P_u,v) from R^* maintaining the fact that ℛ^* is a family of geodesics in G^* whose union of vertex sets intersects every member of ℱ^*. Therefore, now assume without loss of generality that none of R^* ∈ℛ^* has an endpoint in the interior of any P_u,v. We now discuss the relation of geodesics in G^* to the paths P_u,v'. Let {u,v}∈ℐ. No geodesic in G^* contains P_u,v' as a subpath. Let R^* be a geodesic in G^*. Suppose that it contains P_u,v' as a subpath. Then replacing the segment corresponding to P_u,v' in R^* with P_u,v gives a shorter walk between endpoints of R^* in G^*, which is a contradiction. We need the following easy observation. For all u,v ∈ B, we have _G-A(u,v) = _G^*(u,v). Moreover, if R^* is a geodesic in G^* connecting u and v, then there exists a geodesic R in G-A connecting u and v such that V(R^*) ∩ B ⊂ V(R) ∩ B. Let u,v ∈ B and let P be a path between u and v in G-A. We will show that there exists a path P^* between u and v in G^* of length at most the length of P. Let w_0,…,w_ℓ∈ B and let P_1,…,P_ℓ be (possibly empty) paths in G-A-B such that P = w_0 P_1 w_1 P_2 … P_ℓ w_ℓ with w_0 = u and w_ℓ=v. Let j ∈ [ℓ]. If P_j is an empty path, then let P_j^* be also an empty path. Otherwise, {w_j-1,w_j}∈ℐ. It follows that P_w_j-1,w_j⊂ G^* and moreover, (int(P_w_j-1,w_j)) ≤(P_j). Define P_j^* = P_w_j-1,w_j. Let P^* be the walk defined by P^* = w_0 P_1^* w_1 … P_ℓ^* w_ℓ. Clearly, P^* is a walk between u and v in G^*, and (P^*) ≤(P). This shows that _G^*(u,v) ≤_G-A(u,v). Now, let P^* be a path between u and v in G^*.
Let w_0,…,w_ℓ∈ B and let P_1^*,…,P_ℓ^* be (possibly empty) paths in G^*-B such that P^* = w_0 P_1^* w_1 … P_ℓ^* w_ℓ with u=w_0 and v=w_ℓ. If P_j^* is an empty path, then let P_j be also an empty path. Otherwise, by definition, it is clear that _G-A(w_j-1,w_j) ≤(P_j^*). Let P_j be any shortest path between w_j-1 and w_j in G-A. Let P be the walk defined by P = w_0 P_1 w_1 … P_ℓ w_ℓ. Clearly, P is a walk between u and v in G-A, and (P) ≤(P^*). This shows that _G-A(u,v) ≤_G^*(u,v). Moreover, if P^* is a geodesic in G^*, then P is a geodesic in G-A with V(P^*) ∩ B ⊆ V(P) ∩ B. Consider the collection of all the paths of the form int(P_u,v') in G^* – note that all such paths are nonempty. It follows that for every R^* ∈ℛ^*, the geodesic R^* intersects at most two distinct members of this collection, and so, we can write that R^* is a concatenation of S_1, R_0^*, and S_2, where S_1 and S_2 are (possibly empty) subpaths of paths in this collection, and R_0^* is disjoint from the union of the vertex sets of all paths in this collection. Clearly, R_0^* is a geodesic in G^*, and moreover, it connects vertices in B. We aim to replace each geodesic R^* ∈ℛ^* with at most three geodesics in G maintaining the property that the union of all constructed geodesics intersects every member of ℱ'. For technical reasons, we assume that the empty path is a geodesic. Let R^* ∈ℛ^*. There exist at most three geodesics F_R^*^0,F_R^*^1,F_R^*^2 in G such that for every F ∈ℱ', if F^* ∩ V(R^*) ≠∅ then F ∩ V(F_R^*^j) ≠∅ for some j ∈{0,1,2}. Let S_1,S_2,R_0^* be a partition of R^* as described above. Let u_1 and u_2 be the endpoints of R_0^*. By the previous claim, there exists a geodesic R_0 connecting u_1 and u_2 such that V(R^*_0) ∩ B ⊂ V(R_0) ∩ B. Put F_R^*^0 = R_0. Let j ∈{1,2}. If S_j is an empty path, then set F_R^*^j to be an empty path. Otherwise, S_j is a segment of the path P_u_j,u_j'' for some u_j' ∈ B such that {u_j, u_j'}∈ℐ. In this case, set F_R^*^j to be the one-vertex path containing u_j'. Clearly, F_R^*^0,F_R^*^1,F_R^*^2 are geodesics in G-A. We now prove that they satisfy the assertion of the claim. Let F ∈ℱ' be such that F^* ∩ V(R^*) ≠∅. Thus, either F^* ∩ V(S_j) ≠∅ for some j ∈{1,2}, or F^* ∩ V(R_0^*) ≠∅. If F^* ∩ V(S_j) ≠∅ for some j ∈{1,2}, then by the construction of F^*, either u_j ∈ F or u'_j ∈ F. In the first case F ∩ V(F_R^*^0) ≠∅, and in the second case F ∩ V(F_R^*^j) ≠∅. It remains to deal with the case when F^* ∩ V(R_0^*) ≠∅. By construction of F^* we have F^* ∩ B ∩ V(R_0^*) ≠∅. However V(R^*_0) ∩ B ⊂ V(R_0) ∩ B and F^* ∩ B = F ∩ B. Therefore, F ∩ V(F^0_R^*) ⊇ F ∩ V(R_0) ⊇ F ∩ B ∩ V(R_0) ⊇ F^* ∩ B ∩ V(R_0) ⊇ F^* ∩ B ∩ V(R_0^*) ≠∅. Finally, define X_i = ⋃_R^* ∈ℛ^* (F_R^*^0∪ F_R^*^1∪ F_R^*^2). It follows that for each i ∈ [d'], the subgraph X_i is the union of at most 3|ℛ^*| ≤ 3(d-1)β(2α(t) + 2) geodesics. Let X = ⋃_i∈ [d'] X_i. For every F ∈ℱ we have F ∩ (X ∪ A) ≠∅. Moreover, X is a union of at most 3(d-1)^2β(2α(t) + 2) geodesics in G-A. This proves the lemma with δ(t) = 3β(2α(t) + 2). § EXCLUDING AN APEX GRAPH Recall that a graph G is apex if there is a vertex v∈ V(G) such that G-v is planar. For a given apex graph X, let t(X) be the minimum integer such that, for some integer c, every X-minor-free graph is isomorphic to a subgraph of H⊠ P⊠ K_c where (H)≤ t(X) and P is a path. In this section, we show that t(X) is tied to the treedepth of X. A tree-decomposition (T,(B_x| x∈ V(T))) of a graph is rooted when T is a rooted tree.
For a rooted tree-decomposition ℬ = (T,(B_x | x∈ V(T))) of a graph G, let ^-(G,ℬ,x) be the supergraph of G[B_x] obtained by adding all edges uv with u,v ∈ B_x ∩ B_y and x is the parent of y in T. We use the following result of Dujmović, Esperet, Morin, and Wood <cit.>. For every apex graph X, there exist positive integers w,t such that every X-minor-free graph G has a rooted tree-decomposition =(T,(B_x | x∈ V(T))) of adhesion at most 3, and for each x∈ V(T), there exists a layered partition (𝒫_x,ℒ_x) of ^-(G,,x) with: * |P∩ L|≤ w for each (P,L)∈𝒫_x×ℒ_x; * if x has a parent y in T, then * all vertices in B_x∩ B_y are in the first layer of ℒ_x, * each vertex of B_x∩ B_y is in a singleton part of 𝒫_x; and * ^-(G,,x)/𝒫_x is a minor of G and has treewidth less than t. The next result proves the upper bound in (<ref>). For every apex graph X, there exists a positive integer c such that for every X-minor-free graph G, there exists a graph H of treewidth at most 2^(X)+1-1 such that G H⊠ P ⊠ K_c for some path P. Let X be an apex graph. Let w,t be the constants depending only on X given by <Ref>. Let c' be the constant depending only on X given by <Ref>. Let c = c' · t · w. Let G be an X-minor-free graph. By <Ref>, there is a rooted tree-decomposition =(T,(B_x | x∈ V(T))) of G and for every x ∈ V(T) there is a layered partition (𝒫_x,ℒ_x) of ^-(G,ℬ,x) such that items <ref>-<ref> hold. Let r be the root of T. For each vertex x in T with x≠ r, let p(x) be the parent of x in T. Let (v_1, …, v_|V(T)|) be an ordering of V(T) such that for every edge v_i v_j of T, if v_i=p(v_j), then i<j. For every i ∈[|V(T)|], let G^i be the graph obtained from G[⋃_j ≤ i B_v_j] by adding for every j >i with p(v_j) ∈{v_1, …, v_i}, all the missing edges with both endpoints in B_v_j∩ B_p(v_j). Next, for each i∈[|V(T)|], we will construct a graph H^i, an H^i-partition (V^i_x | x ∈ V(H^i)) of G^i and a layering ℒ^i of G^i such that * (H^i) ≤ 2^(X)+1-1, and * |V^i_x ∩ L| ≤ c for every x ∈ V(H^i) and L ∈ℒ^i. By <Ref>, this yields G H^|V(T)|⊠ P ⊠ K_c for some path P, which will complete the proof. The construction is iterative, starting with i=1. Observe that v_1=r and G^1 = ^-(G,ℬ,r). Let Q=^-(G,ℬ,r)/𝒫_r. By <Ref>.<ref>, (Q)<t and Q is a minor of G, so Q is X-minor-free. By <Ref>, we obtain a graph H^1 and an H^1-partition (U_z | z ∈ V(H^1)) of Q such that (H^1) ≤ 2^(X)+1-4 and |U_z| ≤ c' · t for every z ∈ V(H^1). Let V^1_z = ⋃_P ∈ U_z P for every z ∈ V(H^1) and ℒ^1 = ℒ_r. Then (V^1_z | z ∈ V(H^1)) is an H^1-partition of G^1 such that |V^1_z ∩ L| ≤ |U_z| · w ≤ c' · t · w = c for every z ∈ V(H^1) and L ∈ℒ^1. Next, let i>1, and assume that H^i-1, (V^i-1_x | x ∈ V(H^i-1)) and ℒ^i-1 are already defined. Let x = v_i, R = B_x ∩ B_p(x), and Z = {z ∈ V(H^i-1) | R ∩ V^i-1_z ≠∅}. Note that R is a clique in G^i-1 and so Z is a clique in H^i-1. Recall that the elements of R are in singleton parts of 𝒫_x by <Ref>.<ref>.<ref>. Let Q = ^-(G,ℬ,x)/𝒫_x - {{v}| v ∈ R}. By <Ref>.<ref>, (Q)<t and Q is a minor of G, so Q is X-minor-free. By <Ref>, we obtain a graph H' and an H'-partition (U_z | z ∈ V(H')) of Q such that (H') ≤ 2^(X)+1-4 and |U_z| ≤ c' · t for every z ∈ V(H'). Now define H^i to be the clique-sum of H^i-1 and Z ⊕ H' according to the identity function on Z. Then (H^i) = max{(H^i-1),|Z|+(H')}≤ 2^(X)+1-4+3. For every z ∈ V(H^i) let V^i_z = V^i-1_z if z ∈ V(H^i-1), ⋃_P ∈ U_z P if z∈ V(H'). Then (V^i_z | z ∈ V(H^i)) is an H^i-partition of G^i. It remains to define the layering ℒ^i = (L^i_0,L^i_1, …). 
Let ℒ^i-1 = (L^i-1_0,L^i-1_1, …) and ℒ_x = (L^x_0,L^x_1, …). Since R is a clique in G^i-1, there is a non-negative integer j such that R ⊆ L^i-1_j ∪ L^i-1_j+1. For every non-negative integer k, let L^i_k = L^i-1_k if k < j, L^i-1_k ∪ (L^x_k-j - R) if k≥ j. First we show that ℒ^i=(L^i_0, L^i_1, …) is a layering of G^i. Let uv be an edge of G^i. Note that either uv is an edge of G^i-1 or uv is an edge of ^-(G,ℬ,x). If uv is an edge of G^i-1 then there is an integer k such that u,v ∈ L^i-1_k ∪ L^i-1_k+1, and so u,v ∈ L^i_k ∪ L^i_k+1. If uv is an edge of ^-(G,ℬ,x), then there is an integer k such that u,v ∈ L^x_k ∪ L^x_k+1. Note that in this case {u,v}⊄R. If u,v ∉R, then u,v ∈ (L^x_k-R) ∪ (L^x_k+1 -R) ⊆ L^i_j+k∪ L^i_j+k+1. The last case to consider is when |{u,v}∩ R| = 1. Without loss of generality, assume that u ∈ R. By <Ref>.<ref>.<ref>, u ∈ R ⊂ L_0^x, hence, v ∈ L^x_0 ∪ L^x_1. Moreover, u ∈ R ⊂ L^i-1_j∪ L^i-1_j+1. It follows that u,v ∈ L^i_j ∪ L^i_j+1. This proves that ℒ^i is a layering of G^i. Finally, for every non-negative integer k, and for every z ∈ V(H^i), either z ∈ V(H^i-1) and |L^i_k ∩ V^i_z| = |L^i-1_k ∩ V^i-1_z| ≤ c, or z ∈ V(H') and |L^i_k ∩ V^i_z| = |L^x_k-j∩⋃_P ∈ U_z P| ≤ |U_z| · w ≤ c' · t · w = c. This concludes the proof. Dębski et al. <cit.> proved that if G H ⊠ P ⊠ K_c where (H)≤ t and P is a path, then χ_p(G) ≤ c(p+1)p+tt≤ c(p+1)^t+1. <Ref> thus implies: For every apex graph X, every X-minor-free graph G, and every integer p≥ 1, χ_p(G) ≤ c(p+1)^2^(X)+1, where c is from <Ref>. § OPEN QUESTIONS We conclude the paper with a number of open problems. Can the upper bound on (𝒢_X) in <Ref> be improved? In particular, is (𝒢_X) at most a polynomial function of (X)? The next problem asks whether <Ref> can be extended to the setting of excluded topological minors. Is there a function f such that for every graph X there exists a function c such that for every positive integer t and for every graph G with (G)<t that does not contain X as a topological minor, there exists a graph H of treewidth at most f((X)) such that G H⊠ K_c(t)? This question is related to various results of Campbell et al. <cit.> on the underlying treewidth of X-topological minor-free graphs. They showed that a monotone class has bounded underlying treewidth if and only if it excludes some fixed topological minor. In particular, they proved the weakening of Question <ref> with (H)≤ f((X)) replaced by (H)≤ |V(X)|. This is tight for complete graphs. That is, the underlying treewidth of K_t-topological minor-free graphs equals t (for t≥ 5), which implies Question <ref> for complete graphs X. Campbell et al. <cit.> also prove <Ref> for X=K_s,t for s≤ 3, but note that it is open for s≥ 4. They also prove that the underlying treewidth of P_k-free graphs equals log_2k-1, which gives good evidence for a positive answer to <Ref> since (P_k) = ⌈log_2(k+1) ⌉. A positive answer to <Ref> would be a qualitative generalisation of both <Ref> and the following result of an anonymous referee of <cit.> (where X=K_1,Δ+1 in <Ref>): for every graph G with treewidth t and maximum degree Δ, there is a tree T such that G T ⊠ K_24 t Δ. Is there a function g such that for every graph X, there is a constant c such that for every X-minor-free graph G, χ_p(G) ≤ c· p^g((X)) for every p≥ 1? Our results give a positive answer to <Ref> when X is apex. However, we do not see a way to adjust our proof techniques and prove an analogue of <Ref> for p-centered colorings when X is an arbitrary graph. 
The main obstacle is that we do not know how to use chordal partitions to construct p-centered colorings. Therefore, we do not know how to set up an equivalent of <Ref>. Let X be a graph. Let f(X) be the infimum of all the real numbers c such that there is a constant a such that for every X-minor-free graph G and every integer r ≥ 1, _r(G) ≤ a · r^c. <Ref> and a construction of Grohe et al. <cit.> imply that (X)-1 ≤ f(X) ≤ g((X)) for some function g. Is f(X) tied to some natural graph parameter of X? Is f tied to some natural graph parameter? We know that f is tied to neither treedepth, pathwidth, nor treewidth. For treedepth or pathwidth, consider X to be a complete ternary tree of vertex-height k so both the pathwidth and treedepth of X are k. Then X-minor-free graphs have bounded pathwidth, and it is easy to see that _r(G)≤ ((G)+1)(2r+1) for all graphs G. Thus, the exponent is 1 which is independent of k. For treewidth, consider the family (G_r,t)_r,t≥ 0 from <cit.>, which satisfy (G_r,t)≤ t and _r(G_r,t)=Ω(r^t). Note that G_r,t excludes L_t (a ladder with t rungs). Since (L_t)≤ 3 for all t, when we take X=L_t, the exponent becomes t while treewidth remains constant. The only parameter that we are aware of that could be tied with f is _2, as defined in <cit.>.
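As a computational companion to the weak colouring bounds discussed in this paper, the following toy Python sketch (our own illustration, not code from the paper) evaluates the weakly r-reachable sets and the resulting value of the weak r-colouring number for one fixed vertex ordering, assuming the standard definition: v is weakly r-reachable from u with respect to an ordering if v is no larger than u and some u-v path with at most r edges has v as its minimum vertex. The brute-force path enumeration is exponential and is meant only for tiny examples.

def wreach(adj, order, u, r):
    # Weakly r-reachable set of u for the given ordering (brute force over simple paths).
    pos = {v: i for i, v in enumerate(order)}
    reached = {u}                      # the length-0 path
    stack = [(u, [u])]
    while stack:
        v, path = stack.pop()
        if len(path) - 1 == r:         # path already has r edges
            continue
        for w in adj[v]:
            if w in path:              # keep paths simple
                continue
            new_path = path + [w]
            if all(pos[x] >= pos[w] for x in new_path):
                reached.add(w)         # w is the order-minimum of this u-w path
            stack.append((w, new_path))
    return reached

def wcol_for_order(adj, order, r):
    return max(len(wreach(adj, order, u, r)) for u in order)

# Example: the path a-b-c-d-e with its natural ordering and r = 2.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
order = ["a", "b", "c", "d", "e"]
print(wcol_for_order(adj, order, 2))   # prints 3

Note that this only evaluates a given ordering; the results above are about constructing an ordering (via the H-partition and the sets A_x, B_x) for which the maximum weakly r-reachable set is provably small.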
http://arxiv.org/abs/2307.00348v1
20230701140955
Lottery and Sprint: Generate a Board Game with Design Sprint Method on Auto-GPT
[ "Maya Grace Torii", "Takahito Murakami", "Yoichi Ochiai" ]
cs.HC
[ "cs.HC" ]
Both authors contributed equally to this research. toriparu@digialnature.slis.tsukuba.ac.jp 0000-0003-4025-9212 [1] webmaster@marysville-ohio.com 0000-0003-2077-9747 wizard@slis.tsukuba.ac.jp 0000-0002-4690-5724 R&D Center for Digital Nature, University of Tsukuba 1-2 Kasuga Tsukuba Ibaraki Japan 305-0821 In this paper, we present a novel approach using the Auto GPT system alongside Design Sprint methodology to facilitate board game creation for inexperienced users. We introduce the implementation of Auto GPT for generating diverse board games and the subsequent optimization process through a customized Design Sprint. A user study is conducted to investigate the playability and enjoyment of the generated games, revealing both successes and challenges in employing systems like Auto GPT for board game design. Insights and future research directions are proposed to overcome identified limitations and enhance computational-driven game creation. [300]Human-centered computing Interactive systems and tools [500]Applied computing Arts and humanities 22 June 2023 Lottery and Sprint: Generate a Board Game with Design Sprint Method on Auto-GPT Yoichi Ochiai August 1, 2023 ================================================================================= § INTRODUCTION Creating novel, enjoyable and effective board games typically requires a detailed understanding of game mechanics, player engagement, and strategic balance <cit.>. Inexperienced individuals often face challenges in designing board games that cater to various play styles, objectives, and constraints <cit.>. As a potential solution to this issue, we present a new approach that combines the capabilities of the AutoGPT system—an AI model based on Generative Pre-trained Transformers (GPT)—with the structured methodology of Design Sprint to facilitate the process of board game creation. In this paper, we explore the novel application of the AutoGPT system to board game design using a unique "Lottery and Sprint" approach. The process begins with AutoGPT generating a variety of game designs in a metaphorical "lottery", allowing users to select a favored design. Once a game is chosen, it enters a customized Design Sprint <cit.> process to refine and improve the game concept, ensuring both entertainment and playability for potential players. The central research question of this study is whether the AutoGPT system, operating within the proposed framework, can effectively create board games that are enjoyable, playable, and captivating, even for individuals with limited experience in game design. Our investigation encompasses a review of related works. We then present the complete system design using AutoGPT and customized Design Sprint. This is followed by a user study evaluating the generated board games in terms of playability and entertainment. By harnessing AutoGPT's potential in generating diverse board game designs combined with the Design Sprint methodology, we aim to revolutionize the game creation process, enabling users to develop captivating, strategically balanced games suitable for various gaming preferences and experiences.
§ RELATED WORKS Game design and creation have long been an area of research, with scholars and developers alike analyzing the elements and processes involved in crafting innovative and engaging game experiences. Traditional methods and tools of game design have been a subject of extensive exploration, aimed at unlocking the secrets of enjoyable and playable games <cit.>. In recent years, AI-driven game design has emerged as a popular and promising direction in the gaming domain, advancing the field and introducing new possibilities for game creation through the use of artificial intelligence <cit.>. Through the development and application of AI systems, researchers have been able to enhance existing game design methodologies, enabling the generation of more complex and immersive experiences for players. The rapid growth and advancements in Generative Pre-trained Transformers (GPT) and other Large Language Models (LLMs) have further revolutionized the landscape of game design [15, 14]. These powerful tools are capable of generating content across various domains and have been harnessed to drive the creation of not only game narratives but also game mechanics and systems <cit.>. The increased attention and focus on GPT and LLMs have led to the exploration of their potential in creating board games, video games, card games, and more <cit.>. One such example is the Auto GPT system, an open-source project still in development that aims to facilitate automatic task decomposition and problem-solving using natural language prompts <cit.>. By leveraging the capabilities of this system, researchers and game designers have opened up new possibilities in the automation of game design processes. With Auto GPT's potential for generating diverse and balanced game designs, researchers are now exploring various applications of this technology, such as the investigation of its effectiveness in online decision-making and gaming <cit.>. By integrating the Auto GPT system with a board game creation framework in our study, we aim to contribute to the growing body of research on the applications of GPT and LLMs in the field of game design. In conclusion, the evolution of game design and creation processes has been significantly impacted by advancements in technology, particularly through the emergence of AI-driven methods and tools. As GPT and LLMs continue to develop and grow, their applications in various aspects of game design hold immense potential for further revolutionizing the field and reshaping the gaming experience for players worldwide. § SYSTEM OVERVIEW §.§ Design Sprint and Its Integration Design Sprint is a time-bound, structured process that aims to solve complex problems through design, prototyping, and testing ideas with users. Developed by Google Ventures, it is a five-day process where each day is dedicated to a specific phase: Understand, Diverge, Converge, Prototype, and Test <cit.>. In the context of the Auto GPT board game creation system, the principles of Design Sprint are integrated into the architecture, enhancing the efficiency and effectiveness of the game design process. This integration can be mapped onto the four key stages of the system's architecture as follows: 1. Research existing board games (Understand and Diverge): In this stage, the system surveys existing board games, gathering insights and generating ideas, similar to the Understand and Diverge phases of Design Sprint. 
The system learns about diverse rule sets, game balance, and strategic depth to inform its game design process. 2. Create a board game with constraints (Converge and Prototype): This stage corresponds to the Converge and Prototype phases of Design Sprint. The system uses the insights gathered from the research stage to create a new board game based on given materials and constraints, akin to building a prototype for testing. 3. Reflect on game design (Test): The Test phase of Design Sprint is mirrored in this stage, where the system evaluates the created game based on usability, rule clarity, design consistency, strategic balance, and enjoyment. This serves as an internal evaluation mechanism for the system’s output. 4. Update and iterate (Iterate): The final stage of the Auto GPT system emphasizes iteration, a core principle of Design Sprint. Informed by the reflections from the previous stage, the system refines and updates the game to ensure that all materials, instructions, and game elements are clear and consistent. By incorporating the principles of Design Sprint into the system's architecture, the Auto GPT board game creator demonstrates the ability to efficiently generate innovative, balanced, and engaging board games (Fig. <ref>). This structured and iterative process effectively combines artificial intelligence with human user insights, enhancing the overall game design experience and outcomes provided by the Auto GPT system. §.§ Prompts In the Auto GPT board game creation system(Fig <ref>) [<https://github.com/DigitalNatureGroup/create_a_board_game>], the four key stages of the architecture are supported by two types of prompts, creation prompts and feedback prompts. These prompts guide the Auto GPT system during the design process and are divided according to their role within the system architecture. Creation Prompts take part in stages 1, 2, and 3 of the system architecture and focus on researching existing board games, generating new games based on constraints, and reflecting on the game design. [caption= Creation prompts] (G1) In order to make an interesting board game, survey board games focusing on basic various rules of board game, how other games are keeping game balance, and how other games exist. (G2) Create a board game with the material and constraint written in text file "<board game material and constraint>.txt". Output the game explanation including all information to play the game with Title, Materials, setup, gameplay rules, game ending conditions, board layout and design, the unique point, the enjoyable point and strategy to win the game as text file so you can memorize or refer any time. (G3) Step-by-step reflect the game you have created and consider if there are any failure in following the materials and constraints, difficult instructions for human to keep track while playing the game(usability in games), unclear rule instruction for situation potentially to occur(rule design), failure in rule, inconsistent in between the winning strategy and intention of the game(board game design and game balance, game theory) and export your result in text file. If there are not any problems, write so. use_gpt4 for better reflection. (G4) Reading the reflection, updating the game. Check if the game is perfect and ready to play by taking the step in (G3). Feedback Prompts take part in stages 3 and 4 of the system architecture, where the system reflects on user feedback and updates the game accordingly. 
[caption= Feedback prompts] (G1) Read the board game rule which was created and written in "<Target board game>.txt", material and constraints defined in "<board game material and constraint>.txt". (G2) You have feedback from players of the game in this file "<feedback>.txt". Read the file and recreate the game. output and save the reflection to the text file so you can refer any time. (G3) Step-by-step reflect the game you have created by considering the materials used (only defined materials should be used), rules or instructions to be very clear for first time players to play, clear winning conditions. output and save the reflection to the text file so you can refer any time. (G4) Reading the reflection, updating the game by cooperating with the reflections and output and saving the updated game to a text file so you can refer any time. Make sure to include all the rules in the text file. § USER STUDY §.§ User Study Design and Methodology The user study was designed as a two-game experiment, with each game consisting of playtest and survey. Participants did not interact with the auto gpt game generating system directly; instead, they provided feedback to the user study conductor, who then input the feedback into the system. The overall view of the user study is explained in fig <ref>. At the beginning and end of each phase, participants completed a survey designed to gather quantitative and qualitative data on the Auto GPT system's usability, the playability of the generated games, and the overall enjoyment and engagement of the games. The two-phase structure allowed for a thorough examination of different generated games and iterative feedback within each phase. Participants in the user study were divided into groups of four, consisting of a Game Master (GM) and three players, with a total of 12 participants, including 3 GMs and 9 players. The participants, with ages ranging from 20 to 27 years and a balanced representation of both genders, were recruited from the University of Tsukuba. They had different levels of experience and familiarity with playing board games, which varied from less than a year to over a decade, and played at frequencies ranging from a few times a year to once a month. The GM was responsible for guiding the session, selecting the board game from three generated game rules, providing feedback to the user study conductor, who then input the feedback into the Auto GPT system, and overseeing the playtesting. The players were responsible for playtesting the game, providing feedback, and participating in the discussion. §.§ Data Collection Data was collected through board game preference surveys, post-gameplay surveys, final surveys, Creativity Support Index (CSI) <cit.> for Game Masters (GMs), and user study conductor observations. This approach allowed us to gather insights into the usability, playability, and enjoyment of the Auto GPT-generated games. Surveys were administered to gauge participants' initial expectations and subsequent impressions. Post-gameplay surveys covered various aspects of game experience, while the final survey allowed participants to reflect on their overall experience. The CSI <cit.> was used to evaluate the level of creativity support provided by the Auto GPT system to the GMs. GM's feedback, survey responses, and conductor observations contributed to the data collection regarding the system's usability and game development process. 
§ RESULT AND DISCUSSION Overall Assessment of the Auto GPT System: The results of the final survey showed that participants provided positive evaluations of the Auto GPT board game creation system. In the Game Master (GM) category, high scores were recorded for Q1 (Median: 4.0, SD: 0.577), Q2 (Median: 4.0, SD: 0.0), and Q3 (Median: 4.0, SD: 0.0). Similarly, in the player category, high scores were observed for Q1 (Median: 4.0, SD: 0.866) and Q2 (Median: 4.0, SD: 1.453) in the final survey. These high scores indicate that the generated games met participants' expectations and that the Auto GPT system was effective in creating board games that were generally well-received (Fig <ref>). The system scored an average of 77.58 (n=3) on CSI scores ranging from 0 (worst) to 100 (best). The highest factor was Collaboration with an average of 30.67, followed by Exploration with an average of 27.67. Collaboration evaluates if the system allowed other people to work with the user easily, and Exploration evaluates if the user could explore many different ideas, options, designs, or outcomes. Expressiveness had the lowest score of 5.29, evaluating if the user was able to be very expressive and creative while doing the activity. This result is supported by a comment from the final GM survey, where the respondent was asked to identify which aspects of the system they found convenient or supportive and any difficulties they encountered: "I can't predict what kind of instructions make what kind of difference." Comprehensibility of Game Rules: In the post-gameplay survey, both GMs and players reported increased median scores for the understandability of game rules in the second game compared to the first game. GMs showed an increase in median score from 2.0 in the first game to 3.0 in the second game (SD1: 1.0, SD2: 1.0), and players displayed an increase in the median score from 2.0 to 3.0 as well (SD1: 1.333, SD2: 0.972). However, this improvement could be attributed to participants playing the game twice with similar rules. It remains unclear whether the generated game itself enhanced its comprehensibility (Fig <ref>). Strategic Elements and Fairness: The evaluation of strategic elements revealed a discrepancy between GM and player perceptions, with GMs reporting a higher score in the second game, increasing the median score from 3.0 to 4.0 (SD1: 1.528, SD2: 0.577). In contrast, players' evaluations remained unchanged, with their median score staying at 3.0 in both the first and second game (SD1: 1.0, SD2: 1.236). This suggests that the improvement in strategic elements, based on GM input and expectations, did not translate into an enhanced experience for the players. Similarly, evaluations of fairness improved for GMs, with an increase in the median score from 2.0 in the first game to 4.0 in the second game (SD1: 1.732, SD2: 2.082); however, player evaluations did not exhibit a significant change between the first and second games (Median1: 3.0, Median2: 3.0, SD1: 1.269, SD2: 1.225). This suggests that only GMs were satisfied with the improvements in fairness while players did not share the same sentiment (Fig <ref>). Originality and Game Theme: The Auto GPT system successfully delivered novel and original game concepts, as evidenced by consistently high scores provided by both GMs and players. In the post-gameplay survey, GMs recorded median scores of 4.0 for both games (SD1: 0.577, SD2: 0.0), and players reported median scores of 4.0 for both games as well (SD1: 1.236, SD2: 0.866). 
In contrast, evaluations of game theme showed differing opinions between GMs, who perceived a positive improvement, and players, whose assessment declined in the second game (Fig <ref>). Contingency and Board Graphic Design: The decrease in player evaluations of contingency in the post-gameplay survey, with the median score dropping from 4.0 in the first game to 3.0 in the second game (SD1: 1.13, SD2: 1.302), and the increased importance placed on contingency in the board game preference survey indicate that contingent or random events play a crucial role in the overall enjoyment of generated board games (Fig <ref>). Furthermore, high-quality graphic design may not be as essential to board game enjoyment as initially thought (Fig <ref>A,B,D). The evaluation of board graphic design in the "board game preference survey" (Q2) changed, with the median score decreasing from 4.5 (SD1: 0.888) to 4.0 (SD2: 1.215) (Fig <ref>). Despite its simplicity, the hand-drawn board provided an engaging gaming experience. This finding aligns with Kieran Hicks et al.'s observation that visual embellishments do not always improve the user experience <cit.>.
§ LIMITATION AND FUTURE WORK Our study of the Auto GPT system for board game creation showed promising results. However, there are some limitations to address, and we suggest potential areas for future research. First, the system had difficulty creating complete game board designs, often resulting in issues with overlapping components and missing elements. Future work could focus on improving the graphical capabilities of the Auto GPT system for better board designs. Second, the system sometimes generated games with contradictory rules or with materials not allowed by the given constraints. To make the games more playable, future research should focus on refining the system's ability to understand constraints and provide clear rules. Lastly, the limited working memory of the Auto GPT system prevented it from researching existing board games thoroughly. Enhancing its working memory by developing a database system could help the system better understand existing games and create more engaging designs.
§ CONCLUSION In conclusion, our study demonstrated the potential of the Auto GPT system in generating engaging and original board games. By addressing limitations and refining the system's capabilities, it can better match players' expectations and deliver satisfying gaming experiences. Future work can focus on enhancing the system for various game types, contributing to the advancement of human-computer interaction in the gaming industry. Overall, the Auto GPT system shows promise in revolutionizing the game creation process and offering enjoyable, well-rounded board games for diverse audiences and gaming preferences. The manuscript was drafted using OpenAI ChatGPT and GPT-4. The AI-generated text was read, revised, and carefully proofread by the authors.
http://arxiv.org/abs/2307.02072v1
20230705072821
Mathematical and numerical study of an inverse source problem for the biharmonic wave equation
[ "Yan Chang", "Yukun Guo", "Tao Yin", "Yue Zhao" ]
math.NA
[ "math.NA", "cs.NA", "math.AP" ]
Mathematical and numerical study of an inverse source problem for the biharmonic wave equation
Yan ChangSchool of Mathematics, Harbin Institute of Technology, Harbin, China. 21B312002@stu.hit.edu.cn, Yukun GuoSchool of Mathematics, Harbin Institute of Technology, Harbin, China. ykguo@hit.edu.cn (Corresponding author), Tao YinLSEC, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China. yintao@lsec.cc.ac.cn, and Yue ZhaoSchool of Mathematics and Statistics, Central China Normal University, Wuhan, China. zhaoyueccnu@163.com
=====================================================================
In this paper, we study the inverse source problem for the biharmonic wave equation. Mathematically, we characterize the radiating sources and non-radiating sources at a fixed wavenumber. We show that a general source can be decomposed into a radiating source and a non-radiating source. The radiating source can be uniquely determined by Dirichlet boundary measurements at a fixed wavenumber. Moreover, we derive a Lipschitz stability estimate for determining the radiating source. On the other hand, the non-radiating source does not produce any scattered fields outside the support of the source function. Numerically, we propose a novel source reconstruction method based on Fourier series expansion by multi-wavenumber boundary measurements. Numerical experiments are presented to verify the accuracy and efficiency of the proposed method.
§ INTRODUCTION This paper is concerned with the scattering problem in an infinitely thin elastic plate for which the wave propagation can be modeled by the following two-dimensional biharmonic wave equation Δ^2 u(x, k) - k^4 u(x, k) = S(x), where k>0 is the wavenumber and S is the source function. We assume that S∈ L_ comp^2(ℝ^2) and suppS ⊂ B_R where B_R:= {x∈ℝ^2: |x|< R} with R being a positive constant. Denote the boundary of B_R by Γ_R. The following radiation conditions are imposed on u and Δ u: lim_r→∞√(r) (∂_r u-ik u)=0, lim_r→∞√(r) (∂_r (Δ u)-ik (Δ u))=0 uniformly in all directions x̂ = x/|x| with r = |x|. For the direct problem, since (Δ^2 - k^4)^-1 = 1/2k^2( (-Δ-k^2)^-1 - (-Δ+k^2)^-1), the fundamental solution of the biharmonic wave equation takes the form G(x, y, k) = i/8k^2(H_0^(1)(k|x - y|) - H_0^(1)( ik|x - y|)). Here H_0^(1) denotes the first kind Hankel function with order zero. Then the solution to the direct scattering problem (<ref>)–(<ref>) is given by u(x, k) = ∫_ℝ^2 G(x, y, k) S(y) dy. In this work, we are interested in the inverse problem of identifying the source function from boundary measurements. The biharmonic wave equation arises in the theory of plate bending and thin plate elasticity <cit.>. Although there has been extensive literature on the inverse scattering problems of acoustic, elastic, and electromagnetic waves, the inverse problems of the biharmonic wave equation are much less studied. Recently, due to their important applications in many scientific areas including offshore runway design, seismic cloaks, and platonic crystal <cit.>, the inverse problems of the biharmonic wave equation have received considerable attention <cit.>. However, compared with the well-developed inverse theory for acoustic, elastic, and electromagnetic wave equations, the inverse scattering problems for the biharmonic wave equation are much less studied. Motivated by the significant applications, we investigate the inverse source problem for the biharmonic wave equation. This work contains two main contributions. First, the radiating and non-radiating sources are characterized at a fixed wavenumber mathematically. We further derive a Lipschitz stability estimate for determining the radiating source. Second, a direct and effective numerical method is developed to reconstruct the source function by multi-wavenumber boundary measurements. For the inverse source problems at a single wavenumber, in general the uniqueness cannot be guaranteed due to the existence of non-radiating sources <cit.>. The radiating sources and non-radiating sources were mathematically characterized in <cit.> for Maxwell equations. The results in <cit.> were then extended to acoustic and elastic wave equations in <cit.>. In this paper, we investigate the inverse source problem of the fourth-order biharmonic wave equation. We prove that the source can be decomposed into a radiating source and a non-radiating source. The radiating source can be uniquely identified by Dirichlet boundary measurements at a fixed wavenumber. We further derive a Lipschitz stability estimate for determining the radiating source. The proof of the stability is unified and can be extended to acoustic, elastic, and electromagnetic wave equations. On the other hand, the non-radiating source does not produce any scattered fields outside the support of the source function which thus could not be identified. 
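As a quick numerical illustration of the fundamental solution and the volume representation of u just given, the following short Python sketch evaluates G(x, y, k) with scipy's Hankel function and approximates the integral by a generic quadrature rule. The quadrature nodes and weights are left as user-supplied inputs; this is a sketch under those assumptions, not code from the paper.

import numpy as np
from scipy.special import hankel1

def fundamental_solution(x, y, k):
    # G(x, y, k) = i/(8 k^2) * (H_0^(1)(k|x-y|) - H_0^(1)(i k|x-y|));
    # the logarithmic singularities of the two Hankel terms cancel, so G stays bounded as |x-y| -> 0
    # (evaluate at strictly positive distances to avoid the numerical inf - inf at r = 0).
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return 1j / (8.0 * k**2) * (hankel1(0, k * r) - hankel1(0, 1j * k * r))

def radiated_field(x, k, source, nodes, weights):
    # u(x, k) = int G(x, y, k) S(y) dy, approximated with quadrature nodes/weights covering supp S.
    return sum(w * fundamental_solution(x, y, k) * source(y) for y, w in zip(nodes, weights))

With a tensor-product rule over the support of S, this is one way to generate synthetic boundary data in the same spirit as the numerical section below.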
We point it out that the analysis for the biharmonic wave equation is more involved due to the increase of the order of the elliptic operator. For numerical methods, recently a direct and effective Fourier-based method has been proposed to reconstruct source functions for acoustic, electromagnetic, and elastic waves using data at discrete multiple wavenumbers. We refer the reader to <cit.> and references therein for relevant studies. In this paper, we apply this Fourier method to investigate the more sophisticated biharmonic wave equation. We would like to emphasize that the increase of the order brings many challenges to the computation. To deal with this difficulty, we introduce two auxiliary functions based on a splitting of the biharmonic wave operator, which enables us to convert the fourth-order equation to two second-order equations. The rest of this paper is arranged as follows. <Ref> provides a characterization the radiating and non-radiating sources at a fixed wavenumber. In <Ref>, we develop a computational method to reconstruct the source function based on the Fourier expansion. Numerical experiments are presented in <Ref> to verify the effectiveness of the proposed method. § CHARACTERIZATION OF THE RADIATING AND NON-RADIATING SOURCES AT A FIXED WAVENUMBER In this section, we characterize the radiating and non-radiating sources. We start with the radiating sources. We say that v∈ H^2(B_R) is a weak solution to the following homogeneous equation Δ^2 v - k^4 v = 0 in the distributional sense if for every φ∈ C_0^∞(B_R) one has ∫_B_RΔ v Δφ - k^4 v φ dx = 0. Denote the set of weak solutions to (<ref>) by ℋ(B_R) and denote by H(B_R) the closure of ℋ(B_R) in the L^2(B_R) norm. Then one has the following L^2-orthogonal decomposition L^2(B_R) = H(B_R) + H^⊥(B_R). In the following theorem we characterize all the radiating sources which are contained in the functional space H(B_R). Let S ∈ H(B_R). The Dirichlet boundary measurements u|_Γ_R, Δ u|_Γ_R at a fixed wavenumber uniquely determine S. It suffices to prove that u = Δ u=0 on Γ_R gives S≡ 0. Let v∈ℋ(B_R). Multiplying both sides of (<ref>) by v and integrating by parts twice over B_R yield ∫_B_R S v dx = ∫_Γ_R(∂_νΔ u v - ∂_ν v Δ u + ∂_ν u Δ v - ∂_νΔ v u ) ds(x). Given u = Δ u = 0 on Γ_R, we also have ∂_ν u = ∂_νΔ u = 0 on Γ_R by the uniqueness result of the exterior scattering problem in ℝ^2∖ B_R (cf <cit.>). Therefore, we have ∫_B_R S v dx = 0. Since ℋ(B_R) is dense in H(B_R) in the norm of L^2(B_R), using a density argument we have S ≡ 0. The proof is completed. We further derive a Lipschitz stability estimate of determining the radiating sources with the boundary measurements u, Δ u, ∂_ν u, ∂_νΔ u on Γ_R at a fixed wavenumber. This method is unified and thus can be extended to study wave equations of second order. Let S∈ H(B_R). It holds that S_L^2(B_R)≤ C(∫_Γ_R(|u|^2 + |Δ u|^2 + |∂_ν u|^2 + |∂_νΔ u|^2) ds(x))^1/4, where C is a generic positive constant depending on k. Since S∈ H(B_R), there exists a sequence {S_j}_j=1^∞ of weak solutions to (<ref>) in H^2(B_R) such that lim_j→∞ S_j = S in L^2(B_R). Assume that u_j is the radiating solution to (<ref>) corresponding to the source S_j, i.e., Δ^2 u_j - k^4 u_j = S_j. Since Δ^2 S_j - k^4 S_j= 0 in B_R and S_j_L^2(B_R)≤ C, we have Δ^2S_j_L^2(B_R)≤ C which gives S_j_H^4(B_R)≤ C. Here C is a generic constant depending on k. 
Multiplying both sides of (<ref>) by S̅_j and integrating by parts over B_R yields ∫_B_R |S_j|^2 dx = ∫_Γ_R (S̅_j ∂_νΔ u_j - Δ u_j∂_νS̅_j) d s(x) + ∫_Γ_R (ΔS̅_j ∂_ν u_j - u_j∂_νΔS̅_j) d s(x). Since S_j_H^4(B_R)≤ C, by Schwartz's inequality we get ∫_B_R |S_j|^2 dx ≤ C(∫_Γ_R (|u_j|^2 + |Δ u_j|^2 + |∂_ν u_j|^2 + |∂_νΔ u_j|^2) ds(x))^1/2. On the other hand, subtracting (<ref>) by (<ref>) gives Δ^2 (u - u_j) - k^4 (u - u_j) = S - S_j. Then the resolvent estimate <cit.> implies that u - u_j_H^4(B_R)≤ CS- S_j_L^2(B_R). Hence, u_j→ u in H^4(B_R) which gives by letting j→∞ in (<ref>) that S^2_L^2(B_R)≤ C(∫_Γ_R( |u|^2 + |Δ u|^2 + |∂_ν u|^2 + |∂_νΔ u|^2) ds(x))^1/2. The proof is completed. In the following theorem we prove that the orthogonal complement functional space H^⊥(B_R) of H(B_R) consists of all the non-radiating sources. Let S∈ H^⊥(B_R) in the scattering problem (<ref>)–(<ref>). It holds that u = Δ u = ∂_ν u = ∂_νΔ u = 0 on Γ_R. Let w∈ H^2_ loc(ℝ^2∖Γ_R) be a solution to the following transmission problem Δ^2 w - k^2 w = 0 in ℝ^2∖Γ_R, [w]_Γ_R = ξ_1, [Δ w]_Γ_R = ξ_2, on Γ_R, [∂_ν w]_Γ_R = η_1, [∂_νΔ w]_Γ_R = η_2 on Γ_R, w and Δ w satisfy (<ref>). where ξ_1∈ H^3/2(Γ_R), ξ_2 ∈ H^-1/2(Γ_R) and η_1∈ H^1/2(Γ_R), η_2∈ H^-3/2(Γ_R) and [·]_Γ_R denotes the jump of the traces from both sides of Γ_R. Such a solution can be found through integral representations using only the boundary data ξ_1,ξ_2,η_1,η_2, which is analogous to the integral representations in <cit.> of the solutions to the classical acoustic transmission problems. Multiplying both sides of (<ref>) by w and integrating by parts over B_r with r>R and then letting r→∞, we arrive at ∫_B_R S w dx = ∫_Γ_R(∂_νΔ u ξ_1 - η_1 Δ u + ∂_ν u ξ_2 - η_2 u) ds(x). Here we have used the jump conditions on Γ_R and the radiating conditions (<ref>) to eliminate the boundary integral on Γ_r by letting r→∞. Then by the arbitrariness of ξ_i and η_i for i = 1, 2, we have u = Δ u = ∂_ν u = ∂_νΔ u = 0 on Γ_R. The proof is completed. From Theorem <ref> we see that the non-radiating source does not produce surface measurements on Γ_R which thus cannot be identified at a fixed wavenumber. § FOURIER-BASED METHOD TO RECONSTRUCT THE SOURCE FUNCTION In this section, we develop a Fourier-based method to reconstruct the source function by Dirichlet boundary measurements at multiple wavenumbers. As discussed in Introduction, the use of multiple wavenumbers is necessary in order to achieve an accurate reconstruction. §.§ Fourier approximation Let l=(l_1,l_2)∈ℤ^2. Denote 𝕂={k_ l: k_ l = 2π/a| l|, l∈ℤ^2, | l|_∞≤ N, N∈ℕ^+}. In this section, we propose a Fourier method to approximate the source function S(x) from the following Dirichlet boundary data at multiple frequencies {u(x, k),Δ u(x, k):x∈Γ_R,k∈𝕂}. For the geometry setup of the inverse source problem, we refer to <Ref> in which the square V_0 is set to be V_0=(-a/2,a/2)×(-a/2,a/2), a>0, such that suppS⊂ V_0⊂ B_R. Denote by ϕ_ l(x) the Fourier basis functions ϕ_ l(x)=e^i2π/a l· x, l∈ℤ^2. Then the source function S(x) has the following expansion S(x) = ∑_ l∈ℤ^2ŝ_lϕ_ l(x). Based on the biharmonic wave equation (<ref>), we calculate the Fourier coefficients ŝ_l as follows. Multiplying both sides of (<ref>) by ϕ_ l(x) and integrating by parts over the domain V_0 gives ŝ_l= 1/a^2∫_V_0S(x)ϕ_l(x)dx=1/a^2∫_B_R(Δ^2u(x, k_ l)-k^4u(x, k_ l))ϕ_l(x)dx = 1/a^2∫_Γ_R(∂_νΔ u(x, k_ l)+2π/a( l·ν)Δ u(x, k_ l))ϕ_l(x)ds(x) -k^2/a^2∫_Γ_R(∂_ν u(x, k_ l)+2π/a( l·ν) u(x, k_ l))ϕ_l(x)ds(x), l∈ℤ^2. 
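The boundary identity for the Fourier coefficients translates directly into a quadrature rule. The sketch below approximates ŝ_l by the trapezoidal rule on the circle Γ_R, assuming the traces u, ∂_ν u, Δu and ∂_νΔu are sampled at equispaced angles; the array names are our own. Note that in the plain-text rendering the conjugation bar on ϕ_l and the imaginary unit in front of the (2π/a)(l·ν) terms appear to have been lost; both are restored here, consistent with the expansion S = Σ ŝ_l ϕ_l.

import numpy as np

def fourier_coefficient(l, k, a, R, theta, u, du_dn, lap_u, dlap_dn):
    # Approximate s_hat_l from boundary traces on Gamma_R sampled at equispaced angles theta.
    l = np.asarray(l, dtype=float)
    x = np.stack((R * np.cos(theta), R * np.sin(theta)), axis=1)   # points on Gamma_R
    nu = x / R                                                     # outward unit normal
    l_dot_nu = nu @ l
    phi_bar = np.exp(-2j * np.pi / a * (x @ l))                    # conjugate Fourier basis
    integrand = ((dlap_dn + 2j * np.pi / a * l_dot_nu * lap_u)
                 - k**2 * (du_dn + 2j * np.pi / a * l_dot_nu * u)) * phi_bar
    ds = 2.0 * np.pi * R / len(theta)                              # trapezoidal rule on the closed circle
    return integrand.sum() * ds / a**2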
The main idea of our method is to approximate the source functions S in (<ref>) by the following finite Fourier expansion S_N(x)=∑_ l∈𝕂ŝ_lϕ_ l(x). From (<ref>) we see that the Fourier coefficient ŝ_ 0 is related to the boundary data corresponding to k = 0. However, in practice the data at zero wavenumber is hard to achieve. To resolve this issue, we introduce the following admissible wavenumbers. The idea is to approximate the Fourier basis ϕ_ 0 for l = 0 by ϕ_ l_0 where the vector l_0 is close to 0. Let λ be a small positive constant such that 2π/aλ<1/2 and let l_0:=(λ,0). The admissible wavenumbers are defined by k_l:={[ 2π/a|l|, l∈ℤ^2\{0},; 2π/aλ, l= l_0. ]. We approximate the source function S by the Fourier series ŝ_ l_0ϕ_ l_0+∑_|l|_∞≥ 1ŝ_lϕ_l(x). To seek for ŝ_ l_0, as before we multiply both sides of (<ref>) by ϕ_l_0 and integrate by parts over B_R which gives 1/a^2∫_Γ_R(∂_νΔ u(x, k_ l_0)+2π/a(l_0·ν)Δ u(x, k_ l_0))ϕ_l_0 s(x) -k^2/a^2∫_Γ_R(∂_ν u(x, k_ l_0)+2π/a(l_0·ν) u(x, k_ l_0))ϕ_l_0 s(x) = 1/a^2∫_V_0ŝ_l_0ϕ_l_0(x) x+1/a^2∑_|l|_∞≥1ŝ_l∫_V_0ϕ_l(x)ϕ_l_0(x) x = sinλπ/λπŝ_l_0+1/a^2∑_|l|_∞≥1ŝ_l∫_V_0ϕ_l(x)ϕ_l_0(x) x. As a consequence, we obtain the computation formula for ŝ_l_0 as follows: ŝ_ l_0= λπ/a^2sinλπ∫_Γ_R(∂_νΔ u(x, k_ l_0)+2π/a(l_0·ν)Δ u(x, k_ l_0))ϕ_l_0 s(x) -λπ k^2/a^2sinλπ∫_Γ_R(∂_ν u(x, k_ l_0)+2π/a(l_0·ν) u(x, k_ l_0))ϕ_l_0 s(x) -λπ/a^2sinλπ∑_1≤| l|_∞≤ Nŝ_ l∫_V_0ϕ_ l(x)ϕ_l_0(x) x. We take ŝ_ l_0 as an approximation of ŝ_ 0. §.§ A stable substitution technique In practice, it is usually the case that we are only given the Dirichlet boundary measurements u(x, k)|_Γ_R and Δ u(x, k)|_Γ_R. However, to utilize the integral identities (<ref>) and (<ref>), the normal derivatives ∂_ν u(x, k)|_Γ_R and ∂_νΔ u(x, k)|_Γ_R are also required. Additionally, as suggested in <cit.> for the acoustic wave equation, we shall reconstruct the Fourier coefficients ŝ_l, l∈ℤ^2 using the boundary data on Γ_ρ for ρ>R. To this end, analogous to the analysis in Section <ref>, the Fourier coefficients ŝ_l can also be calculated through the following integral identity for ρ>R: ŝ_l= 1/a^2∫_Γ_ρ(∂_νΔ u+2π/a( l·ν)Δ u)ϕ_l(x)ds(x) -k^2/a^2∫_Γ_ρ(∂_ν u+2π/a( l·ν) u)ϕ_l(x)ds(x), l∈ℤ^2∖ 0, and ŝ_l_0= λπ/a^2sinλπ∫_Γ_ρ(∂_νΔ u+2π/a(l_0·ν)Δ u)ϕ_l_0 s(x) -λπ k^2/a^2sinλπ∫_Γ_ρ(∂_ν u+2π/a(l_0·ν) u)ϕ_l_0 s(x) -λπ/a^2sinλπ∑_1≤| l|_∞≤ Nŝ_ l∫_V_0ϕ_ l(x)ϕ_l_0(x) x. Given the measurements u(x, k)|_Γ_R and Δ u(x, k)|_Γ_R, in what follows we show that the boundary terms u(x, k)|_Γ_ρ, Δ u(x, k)|_Γ_ρ, ∂_ν u(x, k)|_Γ_ρ and ∂_νΔ u(x, k)|_Γ_ρ in the integrals (<ref>)-(<ref>) can be obtained via the series expansion of the solutions exterior to Γ_R. To this end, introduce two auxiliary functions u_H=-1/2k^2(Δ u-k^2u), u_M=1/2k^2(Δ u+k^2u). Then we decompose u and Δ u in the exterior domain ℝ^2\B_R by u=u_H+u_M, Δ u=k^2(u_M-u_H), where u_H and u_M satisfy the following Helmholtz and modified Helmholtz equations Δ u_H+k^2u_H=0 ℝ^2\B_R,lim_r→∞√(r) (∂_r u_H-ik u_H)=0, Δ u_M-k^2u_M=0 ℝ^2\B_R,lim_r→∞√(r) (∂_r u_M+ik u_M)=0. Based on the above decomposition, to compute u, Δ u and ∂_ν u, ∂_νΔ u on Γ_ρ it suffices to compute u_ξ(x, k)|_Γ_ρ, ∂_ν u_ξ(x, k)|_Γ_ρ, ξ∈{H,M}. 
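The admissible wavenumbers of Definition 3.1 and the Helmholtz/modified-Helmholtz splitting introduced above are simple to implement; a minimal sketch follows. The default λ matches the value used in the numerical section; everything else is an illustrative choice.

import numpy as np

def split_fields(u, lap_u, k):
    # u_H = -(Delta u - k^2 u) / (2 k^2),  u_M = (Delta u + k^2 u) / (2 k^2)
    u_H = -(lap_u - k**2 * u) / (2.0 * k**2)
    u_M = (lap_u + k**2 * u) / (2.0 * k**2)
    return u_H, u_M

def admissible_wavenumbers(a, N, lam=1e-3):
    # k_l = 2 pi |l| / a for 1 <= |l|_inf <= N, and k_{l0} = 2 pi lam / a as a surrogate for l = 0.
    ks = {(0, 0): 2.0 * np.pi * lam / a}
    for l1 in range(-N, N + 1):
        for l2 in range(-N, N + 1):
            if max(abs(l1), abs(l2)) >= 1:
                ks[(l1, l2)] = 2.0 * np.pi * np.hypot(l1, l2) / a
    return ks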
Using the polar coordinates (r,θ):x=r(cosθ,sinθ), the solutions u_H and u_M admits the following series expansions for r≥ R, u_H(x, k)=∑_n∈ℤH_n^(1)(kr)/H_n^(1)(kR)û^H_k,ne^ nθ, u_M(x, k)=∑_n∈ℤK_n(kr)/K_n(kR)û^M_k,ne^ nθ, where H_n^(1) and K_n denote the first-kind Hankel function and second-kind modified Bessel function, respectively, with order n and the Fourier coefficients û^H_k,n and û^M_k, n can be obtained using the boundary data u_H(x, k)|_Γ_R and u_M(x, k)|_Γ_R as follows û^ξ_k, n =1/2π∫_0^2πu_ξ(R,θ;k)e^- nθdθ, ξ∈{H,M}. Therefore, for x∈Γ_ρ we have u_H(x, k)=∑_n∈ℤH_n^(1)(kρ)/H_n^(1)(kR)û^H_k,ne^ nθ, u_M(x, k)=∑_n∈ℤK_n(kρ)/K_n(kR)û^M_k,ne^ nθ, and ∂_νu_H(x, k)=∑_n∈ℤkH_n^(1)'(kρ)/H_n^(1)(kR)û^H_k,ne^ nθ, ∂_νu_M(x, k)=∑_n∈ℤkK_n'(kρ)/K_n(kR)û^M_k,ne^ nθ. Thus, from (<ref>) we have that the boundary values of u(x, k), Δ u(x, k), ∂_ν u(x, k) and ∂_νΔ u(x, k) on Γ_ρ can be computed by the Dirichlet data u(x, k) and Δ u(x, k) on Γ_R. § NUMERICAL EXAMPLES In this section, we present several numerical examples to verify the performance of the proposed Fourier method. The synthetic data are generated via the integration. For S∈ L^2(ℝ^2) with supp S⊂ V_0, the unique solution to (<ref>) subject to the radiation condition (<ref>) is given by (<ref>). For the numerical implementation of the Fourier method, the computation domain is chosen to be V_0=[-0.5,0.5]×[-0.5,0.5] in the first two examples and V_0 = [-3,3]×[-3,3] in the last example. The volume integral over V_0 is evaluated over a 201× 201 grid of uniformly spaced points x_m∈ V_0, m=1,⋯, 201^2. Once the radiated data is obtained, we further disturb the data by u^δ :=u+δ r_1|u|e^π r_2, Δ u^δ :=Δ u+δ r_3|u|e^π r_4, where r_ℓ,ℓ=1,⋯,4 are uniformly distributed random numbers ranging from -1 to 1, and δ∈[0,1) is the noise level. As a rule of thumb, without otherwise specified, we choose the truncation N by N:=5[δ^-1/4], where [X] denotes the largest integer that is smaller than X+1. Once N=N(δ) is given, the wavenumber k_ l for l∈ℤ^2 is given by k_ l ={ 10π/3| l|: l∈ℤ^2,1≤| l|_∞≤ N, 10π/3λ, | l|_∞=0, . with λ=10^-3. Given the admissible wavenumbers, we measure the radiated data { u(k_i,R,θ_j):k_i∈𝕂∪{k_ 0},i=1,⋯,(2N+1)^2,θ_j∈[0,θ_max],j=1,⋯, N_m } on the measurement circle centered at the origin with R=0.8 for the first two examples and R=5 for the last example. N_m=200 equally spaced measurement angles θ_j=jθ_max/N_m where θ_max is the measurement aperture. To evaluate the approximate Fourier coefficients defined in (<ref>) and (<ref>), we calculate {ℳ(ρ,Θ_j; k_i):k_i∈𝕂∪{k_ 0}:i=1,⋯,(2N+1)^2, Θ_j∈ [0,2π]:j=1,⋯, 200 }, where ℳ stands for u^δ, Δ u^δ, ∂_ν_ρu^δ and ∂_νΔ u^δ, respectively. Here, we take Θ_j=jπ/100. In the first two examples, ρ=2. In the last example, ρ=6. To approximate the infinite series in (<ref>) and (<ref>), we numerically truncate them by |n|≤ 60. Under these parameters, the approximate source function S_N(x) is computed by (<ref>). To evaluate the performance qualitatively, we compute the discrete relative L^2 error: S_N^δ-S_0/S_0:=(∑_m=1^201^2|S_N^δ(x_m)-S(x_m)|^2)^1/2/(∑_m=1^201^2|S(x_m)|^2)^1/2, and H^1 error: S_N^δ-S_1/S_1:=(∑_m=1^201^2|∇ S_N^δ(x_m)-∇ S(x_m)|^2+|S_N^δ(x_m)-S(x_m)|^2)^ 1/2/(∑_m=1^201^2|∇ S(x_m)|^2+|S(x_m)|^2)^1/2, respectively. In the first numerical example, we consider the reconstruction of a smooth function of mountain-shape, which is given by S_1(x_1,x_2)=1.1e^-200((x_1-0.01)^2+(x_2-0.12)^2)-100(x_2^2-x_1^2)e^-90(x_1^2-x_2^2). In <Ref>, we plot the surface and the contour of the exact source function. 
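The series expansions used to carry u_H and u_M from Γ_R out to Γ_ρ can be truncated (|n| ≤ 60 in the experiments) and evaluated directly, as in the sketch below. The FFT-based extraction of the Fourier coefficients and the array layout are our own assumptions, and the imaginary unit in the azimuthal factor e^{inθ}, apparently lost in the plain-text rendering, is restored.

import numpy as np
from scipy.special import hankel1, h1vp, kv, kvp

def propagate_to_rho(u_H_R, u_M_R, k, R, rho, n_max=60):
    # u_H_R, u_M_R: complex samples of u_H and u_M on Gamma_R at equispaced angles.
    N = len(u_H_R)
    theta = 2.0 * np.pi * np.arange(N) / N
    # Fourier coefficients hat u_{k,n} = (1/2pi) int_0^{2pi} u(R, theta; k) e^{-i n theta} d theta
    cH = np.fft.fft(u_H_R) / N
    cM = np.fft.fft(u_M_R) / N
    ns = np.fft.fftfreq(N, d=1.0 / N).astype(int)

    uH = np.zeros(N, dtype=complex)
    uM = np.zeros(N, dtype=complex)
    duH = np.zeros(N, dtype=complex)
    duM = np.zeros(N, dtype=complex)
    for n, aH, aM in zip(ns, cH, cM):
        if abs(n) > n_max:
            continue
        e = np.exp(1j * n * theta)
        uH += aH * hankel1(n, k * rho) / hankel1(n, k * R) * e
        duH += aH * k * h1vp(n, k * rho) / hankel1(n, k * R) * e
        uM += aM * kv(n, k * rho) / kv(n, k * R) * e               # K_n is even in the order n
        duM += aM * k * kvp(n, k * rho) / kv(n, k * R) * e
    # u = u_H + u_M and Delta u = k^2 (u_M - u_H) on Gamma_rho then follow from the decomposition.
    return uH, uM, duH, duM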
To compare the accuracy of the reconstruction, we list the relative L^2 and H^1 errors in <Ref> under different noise levels. In addition, we also display the CPU time of the inversion in the last row of <Ref> (All the computations in the numerical experiments are run on an Intel Core 2.6 GHz laptop). By taking δ to be 10%, we exhibit the reconstruction of the source in <Ref>. From <Ref> and <Ref>, we can see that the source function can be well-reconstructed under different noise levels. Comparing different columns in <Ref>, we find that our method is stable and insensitive to random noise. Though there exists an alternative choice of the truncation N in (<ref>), it is N that mainly influences the effect of reconstruction in our method. If N is not chosen properly, the reconstruction may be unsatisfactory. To illustrate the influence of the truncation, we take N=2[δ^-1/3], as done in <cit.> and reconsider the reconstruction the results obtained in <Ref> with other parameters unchanged. The reconstruction is shown in <Ref>. As shown in <Ref>, the reconstruction is destructed due to the inappropriate choice of N. In this case, the relative errors of the reconstruction are S_1^δ-S_1_0/S_1_0 =10.46%, S_1^δ-S_1_1/S_1_1 =18.87%, which are much larger than the errors with N defined in (<ref>). Thus, it is of vital importance to choose a proper truncation N of the Fourier expansion. Since an optimal choice of N is still open, we would like to take N as defined in (<ref>) in the following numerical experiments. In the second example, we consider the reconstruction of a discontinuous source function. The exact source function is given by S_2(x_1,x_2) = 0.8 if x_1^2+x_2^2<0.04, 0.3 if 0.04≤ x_1^2+x_2^2≤ 0.09, 0 elsewhere, which is plotted in <Ref>. By taking θ_max=2π, we exhibit the reconstruction in <Ref>. Comparing <Ref> and <Ref>, we can see that the source function can be roughly reconstructed. To better understand the reconstruction, we depict several cross-sections of the source function and the reconstruction in <Ref>. From <Ref>, we can find that our method exhibits distinct reconstruction performance by taking different x_2. Meanwhile, the jump discontinuity of the source function can be captured by the proposed method, and the well-known Gibbs phenomenon around the jumps is clearly observed. In the last example, we consider the reconstruction of the source function given by S_3(x_1,x_2)=0.3(1-x_1)^2e^-x_1^2-(x_2+1)^2-(0.2x_1-x_1^3-x_2^5)e^-x_1^2-x_2^2-0.03e^-(x_1+1)^2-x_2^2. We display the exact source function in <Ref>. In <Ref>, we present the reconstruction and the error under different observation apertures. Further, we depict several cross-section plots at x_2=0 and x_2=-1.5, respectively in <Ref>. From <Ref> and <Ref>, we can see that using the full-aperture observation, the reconstructed source function matches well with the exact one and the reconstruction error is relatively small. When only limited-aperture data is available, the overall reconstruction is less satisfactory though some parts of the function can be reconstructed. 100 ACTV S. Acosta, S. Chow, J. Taylor, and V. Villamizar, On the multi-wavenumber inverse source problem in heterogeneous media, Inverse Problems, 28 (2012), 075013. Monk R. Albanese and P. Monk, The inverse source problem for Maxwell's equations, Inverse Problems, 22 (2006), 1023–1035. blz G. Bao, P. Li, and Y. Zhao, Stability for the inverse source problems in elastic and electromagnetic waves, J. Math. Pures Appl., 134 (2020), 122–178. CK D. 
Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory, Applied Mathematical Sciences, vol. 93, Springer-Verlag, Berlin, 1998. DL23 H. Dong and P. Li, A novel boundary integral formulation for the biharmonic wave scattering problem arXiv:2301.10142 FGE M. Farhat, S. Guenneau, and S. Enoch, Ultrabroadband elastic cloaking in thin plates, Phys. Rev. Lett., 103 (2009), 024301. KW P. Kow and J. Wang, On the Characterization of Nonradiating Sources for the Elastic Waves in Anisotropic Inhomogeneous Media, SIAM J. Appl. Math., 81 (2021), 1530–1551. GLL Y. Gao, H. Liu, and Y. Liu, On an inverse problem for the plate equation with passive measurement, SIAM J. Appl. Math., 83 (2023), 1196–1214. GGS F. Gazzola, H.-C. Grunau, and G. Sweers, Polyharmonic Boundary Value Problems, Positivity Preserving and Nonlinear Higher Order Elliptic Equations in Bounded Domains, Lecture Notes in Mathematics, Springer-Verlag, Berlin, Heidelberg, 2010. LW-22 P. Li and X. Wang, An inverse random source problem for the biharmonic wave equation, SIAM/ASA J. Uncertainty Quantification, 10 (2022), 949–974. LW P. Li and X. Wang, Inverse scattering for the biharmonic wave equation with a random potential, arXiv:2210.05900. LYZ P. Li, X. Yao, and Y. Zhao, Stability for an inverse source problem of the biharmonic operator, SIAM J. Appl. Math., 81 (2021), 2503–2525. LYZ1 P. Li, X. Yao, and Y. Zhao, Stability of an inverse source problem for the damped biharmonic plate equation, Inverse Problems, 37 (2021), 085003. MMM R. C. McPhedran, A. B. Movchan, and N. V. Movchan, Platonic crystals: Bloch bands, neutrality and defects, Mech. Mater., 41 (2009), 356–363. SongWang M. Song and X. Wang, A Fourier method to recover elastic sources with multi-wavenumber data, E. Asian. Appl. Math., 9 (2019), 369–385. TS T. Tyni and V. Serov, Scattering problems for perturbations of the multidimensional biharmonic operator, Inverse Problems and Imaging, 12 (2018), 205–227. WangMaGuoLiJDE G. Wang, F. Ma, Y. Guo, and J. Li, Solving the multi-wavenumber electromagnetic inverse source problem by the Fourier method, J. Differ. Equations, 265 (2018), 417–443. WUW E. Watanabe, T. Utsunomiya, and C. Wang, Hydroelastic analysis of pontoon-type VLFS: a literature survey, Eng. Struct., 26 (2004), 245–256. IP15 D. Zhang and Y. Guo, Fourier method for solving the multi-wavenumber inverse source problem for the Helmholtz equation, Inverse Problems, 31 (2015), 035007.
http://arxiv.org/abs/2307.02699v1
20230706000912
Unsteady large-scale wake structure behind levitated freestream-aligned circular cylinder
[ "Sho Yokota", "Taku Nonomura" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
Unsteady large-scale wake structure behind levitated freestream-aligned circular cylinder
==============================================================================================
The relationships between characteristic large-scale wake structures appearing behind a freestream-aligned circular cylinder are investigated and discussed from the velocity field obtained by wind tunnel tests. The tests were conducted under a supportless condition using a magnetic suspension and balance system and stereo PIV measurements at a Reynolds number of 3.46×10^4. The velocity fields were analysed with a modal decomposition combining azimuthal Fourier decomposition and proper orthogonal decomposition. Four characteristic large-scale wake structures of recirculation bubble pumping, azimuthal shear, large-scale vortex shedding and streaks are identified and mainly focused on in the present study. The state of the vortex shedding is classified into three; anticlockwise/clockwise circular and flapping patterns. Each state has a relationship with the azimuthal shear and it tends to appear when the state is circular. Furthermore, from the analysis of the relationship between modes, the recirculation bubble pumping is found to be related to the vortex shedding position in the radial direction and the strength of the streaks. Particularly, analysis of causality shows that the recirculation bubble pumping affects them in the low-frequency range.
§ INTRODUCTION The flow around bluff bodies is often found in our surroundings and in industrial fields. Typical examples are aircraft gears, railroad vehicles, and buildings such as skyscrapers and bridge piers. Comprehension of the aerodynamic characteristics of these blunt-head applications is essential for evaluating their impact on the economy, safety, and the living environment, such as noise. Representative bluff bodies, such as circular cylinders, rectangular prisms, and spheres, have been investigated by many researchers <cit.>. On the other hand, there are relatively few studies on a freestream-aligned circular cylinder, which is one of the bluff bodies. A freestream-aligned circular cylinder is a cylinder in which the central axis is parallel to the direction of the freestream. Applications with shapes similar to the freestream-aligned circular cylinder include oil tanks, engine canisters <cit.>, re-entry capsules <cit.>, and automobile door mirrors <cit.>. Studies on the freestream-aligned circular cylinder have been conducted both numerically <cit.> and experimentally <cit.>. The flow around the freestream-aligned circular cylinder is classified into two main types based on the time-averaged velocity field. One type is nonreattaching flow in which the separated flow at the leading edge does not reattach on the curved surface, and it appears when the fineness ratio L/D (L: length, D: diameter) is less than 1.5. The other is reattaching flow, in which the separated flow reattaches on the curved surface, and is observed when L/D is greater than 1.5.
Moreover, three characteristic flow structures have been confirmed in the wake of the cylinder from the velocity and pressure fluctuation spectra: recirculating bubble pumping (St < 0.05), large-scale vortex shedding (St ≈ 0.13), and the Kelvin-Helmholtz instability. Here, the Strouhal number is a nondimensional frequency defined by St = fD/U (f: frequency, U: freestream velocity). The first two phenomena relate to aerodynamic force fluctuations acting on the cylinder. The recirculation bubble pumping is a phenomenon of axisymmetric fluctuations in which the size of the recirculation region formed behind the cylinder expands or contracts in the freestream direction and appears in the case of the nonreattaching flow. Simultaneously, the pressure field at the base of the cylinder fluctuates, which appears as a drag force fluctuation acting on the cylinder. Large-scale vortex shedding is a phenomenon of antisymmetric fluctuations in which a large vortex structure containing small vortices is shed in the lateral direction from the downstream end of the recirculation region, and appears for L/D ≤ 1.5. Fluctuations due to this flow structure are dominant in the flow around the freestream-aligned circular cylinder and appear as pressure fluctuations not only downstream of the recirculation region, but also near the sides of the cylinder. The result appears as lateral force fluctuations acting on the cylinder. Although the relationship between the phenomena and aerodynamic fluctuations has been well discussed in previous studies, the interrelationships between the phenomena have not been fully investigated. An interesting interrelationship between the phenomena is the relationship between the recirculating bubble pumping and fluctuations in the shedding position of large-scale vortex structures, as suggested by <cit.>. They investigated numerically the flow around a disc (L/D = 0.2), in which they focused on the fluctuations of the shedding position of the large-scale vortex structure and showed that the switching of the rotational direction of the shedding position could be related to fluctuations in the recirculating bubble pumping. However, coherence and phase differences calculated from the velocity fluctuations due to the bubble pumping and positional fluctuations of vortex shedding have not been reported. No physical discussion has yet taken place on this point, and the mechanism of the switching is not clear. Numerical simulations provide the three velocity components over a three-dimensional field, whereas obtaining a long-duration flow field is difficult from the point of view of computational resources. In particular, it is necessary to obtain data containing sufficient periods of phenomena with low-frequency fluctuations such as the recirculation bubble pumping, for comprehension of the aforementioned relationships. This requires the experimental acquisition of velocity fields with suitable time resolution and sufficient data length. Investigations of the flow around three-dimensional objects have been carried out both numerically and experimentally. In most cases of experiments, the model is fixed in the channel by means of support. However, problems arise when stings, struts, wires, etc. used as supports interfere with the flow, altering the large-scale wake structure <cit.>. This is known as support interference and makes it difficult to comprehend the actual large-scale wake structure and aerodynamic characteristics that occur in the flow around an object. 
The main experimental methods by which support interference can be eliminated are free-fall tests <cit.>, ballistic flight tests <cit.> and wind tunnel tests using a magnetic support balance system (MSBS) <cit.>. Among these methods, the MSBS, which allows steady model support, is suitable for the investigation of flows around objects. The MSBS is a device that can levitate and support models through the interaction between the magnetic field produced by the coil system and the permanent magnets inside the model. Several wind tunnel tests using the MSBS have been carried out on freestream-aligned circular cylinders. In recent years, experiments have also combined particle image velocimetry (PIV). The velocity measurement planes in these previous studies can be divided into two main categories. The first is the case where the measurement plane is set parallel to the freestream through the cylinder axis, which has been adopted by <cit.>, <cit.> and <cit.>. This plane can capture fluctuations due to recirculating bubble pumping, but obviously not the azimuthal fluctuations in the position of the large-scale vortex structure. The second is the case where the measurement plane is perpendicular to the freestream and was adopted in the study by <cit.>. However, they obtained non-time-resolved two-dimensional and two-component velocity data that could not be used for frequency analysis, although the vortex emission position could be captured. Furthermore, the relationship between the bubble pumping and vortex shedding position has not been discussed because the velocity in the freestream direction has not been measured. Thus, the measurement plane perpendicular to the freestream with a two-dimensional, three-component PIV should enable the discussion of unexplained relationships between phenomena. The objective of the present study is to clarify the three-dimensional large-scale wake structure formed behind a freestream-aligned circular cylinder. Wind tunnel tests under support interference-free conditions were conducted using the MSBS and the stereo PIV measurement system developed and installed. Discussion between phenomena is provided mainly from the results of mode decomposition for the measured velocity data. § EXPERIMENTAL APPARATUS §.§ Model The flow around a freestream-aligned circular cylinder is divided into the nonreattaching flow and reattaching flow approximately at L/D = 1.5. Cylindrical models with L/D = 1.0, 1.5 and 2.0 were used for wind tunnel tests in the present study, and both flows were investigated. A schematic of the model is shown in Fig.<ref> (a). The model is made of machined polyoxymethylene, and permanent magnets required for magnetically levitated and support by MSBS are inserted inside. The model diameter is 50 mm regardless of L/D. The blockage rate in the wind tunnel test was 2.2%. The inserted permanent magnets were cylindrical with an outer diameter of 40 mm, an inner diameter of 5 mm and a length of 20 mm, with two or three magnets connected lengthwise according to L/D. The outside of the model is basically white. Still, there is a black band for measuring the model position by the sensor subsystem of the MSBS, and the base of the model is painted black to suppress reflections of the laser beam for PIV measurement. The Cartesian coordinate system in the present study is based on the cylindrical model with an angle of attack to the freestream of 0 deg, as shown in Fig. <ref> (b). 
The centre of the base of the model is the origin, the x axis is set in the freestream direction corresponding to the cylinder axis, the z axis is set vertically upwards and the y axis is set to form a right-handed system. The pitch θ, yaw ψ and roll ϕ angles are defined around the y, z and x axes respectively. Furthermore, a cylindrical coordinate system is defined. The x axis is the same as in the Cartesian coordinate system, but the r axis is the axis perpendicular to the circumference of the model from the origin, and the θ axis is the axis that is positive anticlockwise when the cylinder is viewed from behind. Here, the positive part of the y axis is θ=0 deg. §.§ Wind tunnel Tohoku University–Basic Aerodynamics Research Tunnel (T-BART) was used in the tests. This wind tunnel is a suction-type wind tunnel with a closed test section. The dimensions of the cross section of the test section are usually 300 mm × 300 mm square, whereas the test section for stereo PIV used in the present study has a cross section of 296 mm × 300 mm. The reason for the smaller vertical size was to build a mirror system into the measuring section and the laser sheet was irradiated in a plane perpendicular to the freestream. The mirror systems are discussed in <ref>. The freestream velocity in T-BART can be set in the range of 5–60 m/s, at which the degree of turbulence is less than 0.5%. Refer to appendix <ref> for the effects of changing the cross-sectional dimension of the test section. The freestream velocity U was set to 10.5 m/s in the experiment, which corresponds to a Reynolds number of 3.46×10^4 with the cylinder diameter as the representative length. §.§ Magnetic suspension and balance system The 0.3-m magnetic suspension and balance system (MSBS) at Tohoku University was used as a support system for the model. It consists of a sensor subsystem that monitors the position and attitude of the model, a coil subsystem that generates the magnetic field necessary for levitating and supporting the model, and a control subsystem that connects these two systems to control the position and attitude of the model. The sensor subsystem consists of five charge-coupled device (CCD) line sensors, nine blue light-emitting-diode (LED) light sources, short-pass optical filters, plano-convex lenses and half mirrors. This structure was also adopted in the previous studies <cit.>. On the other hand, another configuration can be adopted when the fineness ratio is low <cit.>. The CCD line sensors capture the surface of the model illuminated by the LED light sources and the images are used to detect the edges of the model and the black band painted on the model. After that, the position and attitude of the model are calculated based on the positions of the marker and edges in the image. The relationship between them is obtained in advance by sensor calibration. Subsequently, the calculated position and attitude of the model are used in feedback control, which mainly consists of a proportional-integral control system including a double-phase advance compensator. This control subsystem determines the electric current values for each coil to keep the position and attitude of the model close to the set target values. The coil subsystem consists of eight iron-core coils and two air-core coils, which are arranged around the test section. The model is levitated and supported by the interaction between the magnetic field produced by this coil subsystem and the permanent magnets inside the model. 
The control frequency is 1250 Hz and the sequence of operations mentioned above is performed within the cycle. The 0.3-m MSBS is capable of position and attitude control in up to six degrees of freedom. Since a freestream-aligned circular cylinder forms an axisymmetric flow, the wind tunnel tests were conducted to control the model's position and attitude for five degrees of freedom, excluding the roll direction. The model was supported in the centre of the test section and was not rotated significantly in the roll direction during wind tunnel tests. The root-mean-square (RMS) values of fluctuations in the x, y, z, pitch and yaw directions under the wind-on condition of a run were 4.91 µm, 3.13 µm, 5.53 µm, 0.017 deg and 0.010 deg respectively, which are very small and the effect can be considered negligible. §.§ Stereo PIV Velocity fields on the yz plane were measured by stereo PIV measurements and the characteristic flow structures were investigated. Figures <ref> (a, b) show a schematic of the optical system setup for PIV measurements and the levitated model during measurements. The optical system consists of two high-speed cameras (SA-X2, Photron), single focal length lenses (Micro-Nikkor 105 mm f/2.8), bandpass optical filters (527±10 nm, Edmund Optics), one-axis scheimpflug mounts (Dantec Dynamics), an Nd: YLF laser (LDY-303PIV, Litron), and mirrors (custom-made, SIGMAKOKI). The pixel size, number of pixels and bit depth of the high-speed cameras are 20 µm, 1024 x 1024 and 12 bit, respectively. The tracer particles were made of dioctyl sebacate microparticulated by Ruskin nozzles. Note that previous studies have reported that the tracer particles have sufficient followability to the flow <cit.>. Optical access for PIV measurements is limited because the model is surrounded by the coils, LED light sources and line sensors. The PIV measurement in the plane parallel to the airflow, which has been employed in previous studies of our group, was relatively easy because the coils in the freestream direction are air-core coils and the laser sheet could be illuminated from downstream of the test section and a camera could take particle images from between the coils. However, PIV measurements in the yz plane are not possible with the conventional experimental equipment. Therefore, a stereo PIV measurement system, as shown in Fig. <ref>, was developed and introduced that avoids the optical access limitations caused by the MSBS. The newly introduced systems are a mirror system for reflecting the laser light sheet, a traverser to place the MSBS in the desired position and a seeding rake to introduce the particles uniformly. The mirror system consists of two slender mirrors built into the bottom of the test section as shown in Fig. <ref>, each of which can be manually adjusted in angle. When the laser light sheet is illuminated, the laser head is placed at the bottom of the test section facing upwards and is reflected twice by the mirrors to set the measurement plane on the yz plane. The measurement plane is fixed to the test section because of the complex optical setup. Therefore, an MSBS traverser was introduced and the model position relative to the measurement laser plane was changed. The MSBS traverser consists of a horizontal plate and two linear guides, on which the MSBS can be placed for smooth movement in the direction of the wind tunnel axis. The seeding rake consists of two main pipes and 17 sub-pipes and is installed in front of the wind tunnel inlet. 
Compressed air containing particles produced by the Ruskin nozzles passes through the main pipe and exits through holes in the sub pipe in a spray pattern, thereby introducing particles uniformly throughout the test section. Although the number of particles in the test section increased compared with those in the previous version, the uniform distribution of particles meant that the influence of the scattered light from the LED light source for MSBS was small, and no clear problems occurred with the measurement of the model position and attitude by the sensor system. The measurement plane was located at 1.0D, 1.4D and 2.0D from the cylinder base for all L/D cases. The cameras, as shown in the figure, were positioned at an angle of 25 deg to the wind tunnel axis, looking into the inside of the air-core coil of the MSBS from downstream of the measurement plane. In the camera calibration before measurement, a calibration plate with white dots on a black background was fixed to an automated stage (OSMS20-35(X)-4M4, custom-made, SIGMAKOKI) and calibration images were acquired in 0.1 mm steps over a range of -2 mm ≤ x ≤ 2 mm centred on the laser light sheet. The camera calibration was performed only once, as the laser sheet position and camera were fixed. The measurement frequency of the velocity field was set to 400 Hz for all measurement planes. Particle images were acquired five times at 400 Hz for 4,000 pairs (10 s) and once for 10,000 pairs (25 s). However, 0.3 s just after the start of the measurement was not used in the analysis because the laser sheet was not sufficiently bright. § ANALYSIS §.§ Velocity field estimation The instantaneous velocity field was calculated by the conventional spatial correlation (CSC) method using analysis software (Dynamic Studio 6.11 and 7.5, Dantec Dynamics). This method was developed by <cit.>. First, a background image was calculated from particle images obtained by each camera. The particle images show the model in the background, and if the cross-correlation of the luminance values between the paired images is taken as is, error vectors will occur near the model. The background image was subtracted and this problem was avoided by eliminating the reflected light on the model. A recursive correlation method was applied to the particle images after background subtraction to calculate the displacement of the group of particles on the image of each camera. The initial size of the correlation window was 32 px × 32 px and the final size was 8 px × 8 px. A two-dimensional three-component velocity field was then obtained from the displacements of the group of particles in each camera and the results of the calibration. Error vector processing using velocity vectors at eight points around the inspection vector was applied to this velocity field. The velocity field obtained through the above process was used as the instantaneous velocity field for subsequent analysis. The distance between adjacent vectors of the obtained velocity field is 1.27 mm. §.§ Transformation of coordinate system The discussion in the present study will be carried out in Cartesian and cylindrical coordinate systems, as described in  <ref>. 
The Cartesian coordinate system was defined in the software for velocity field estimation, but as the coordinate system defined by the calibration plate and the coordinate system based on the levitated cylinder do not exactly match, a Cartesian coordinate system based on the cylinder was obtained by applying the origin correction based on the time-averaged streamwise velocity. The transformation of the coordinates to a cylindrical coordinate system was done by post-processing in MATLAB. The distance between adjacent vectors in the r direction Δ r was set to the same value as in the Cartesian coordinate system, with a grid of N_r = 47 points in the r direction and N_θ = 128 points in the azimuthal direction. Here, no grid point was placed because the origin is a singularity. The velocity was also transformed from the y and z components to the r and θ components using the following equation in accordance with the coordinate transformation. [ u_x; u_r; u_θ ] = [ 1 0 0; 0 cosθ sinθ; 0 -sinθ cosθ ][ u_x; u_y; u_z ]. The velocities in the y and z directions at the set cylindrical coordinate grid points were obtained by interpolation from the velocity data in the Cartesian coordinate system and then transformed into the r and θ components using Eq. <ref>. §.§ Modal decomposition The wake of the freestream-aligned circular cylinder is a highly complicated flow field due to the convection of KH vortices in the separated shear layer and the relatively large vortex structures including them <cit.>. It is effective to identify the flow modes which correspond to the fluctuations due to recirculating bubble pumping and the large-scale vortex shedding from them. Modal decomposition, which combines azimuthal Fourier decomposition and proper orthogonal decomposition (POD) <cit.>, is applied to velocity data in a cylindrical coordinate system in the present study to discuss the unexplained phenomena described in the introduction. The spatial modes and coefficients of the POD at each azimuthal wavenumber were obtained for the acquired velocity data by the following analysis. As the turbulent wake of a freestream-aligned circular cylinder is homogeneous in the azimuthal direction, the azimuthal Fourier decomposition with a fast Fourier transform is applied to the velocity fields: u'(x; r, θ, t) = ∑_m u_m(x; r, t)e^imθ. The obtained Fourier coefficients are used to form the following matrix U_m as follows: U_m = [u_m^(1), u_m^(2), u_m^(3), ⋯, u_m^(N)], where N is the number of instantaneous velocity fields used for POD, which is equal to the number of samples taken in one run of PIV measurements. Since the three velocity components are stacked in the row direction, the size of U_m is 3N_r × N. The spatial modes, square roots of eigenvalues and mode coefficients of the POD were obtained using singular value decomposition: WU_m = U_mS_mV_m^* = U_mZ_m, where U_m, S_m, and V_m are matrices including the spatial modes, square roots of eigenvalues and mode coefficients, respectively. Z_m is used for frequency analyses, conditional sampling and causality analyses in the present study. Asterisk represents complex conjugate transpose. In addition, W is the diagonal matrix for weighting with the size of 3N_r × 3N_r. Elements of W are calculated by the following equation: W(r) = √(A(r)) = √(π{(r+Δ r/2)^2-(r-Δ r/2)^2}/2π/Δθ), A(r) is the area that the point at each r position is responsible for, and is used to weight for each of the obtained Fourier coefficients of the three velocity components. 
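A compact numerical counterpart of the modal decomposition just described, assuming the fluctuation snapshots are stored as an array of shape (3, N_r, N_θ, N) with components (u_x′, u_r′, u_θ′), is sketched below; the variable names and the FFT normalisation are our own choices, not taken from the paper.

import numpy as np

def azimuthal_fourier_pod(u_fluc, r, dr, dtheta, m):
    # Azimuthal Fourier decomposition: u'(x; r, theta, t) = sum_m u_m(x; r, t) e^{i m theta}
    u_hat = np.fft.fft(u_fluc, axis=2) / u_fluc.shape[2]
    Um = u_hat[:, :, m, :].reshape(3 * len(r), -1)        # stack the three velocity components row-wise

    # Radial area weighting W(r) = sqrt(A(r)), A(r) = pi[(r + dr/2)^2 - (r - dr/2)^2] / (2 pi / dtheta)
    A = np.pi * ((r + dr / 2.0)**2 - (r - dr / 2.0)**2) / (2.0 * np.pi / dtheta)
    W = np.sqrt(np.tile(A, 3))                            # one copy of the weights per velocity component

    # POD via singular value decomposition: W U_m = U_m S_m V_m^*
    modes, s, Vh = np.linalg.svd(W[:, None] * Um, full_matrices=False)
    Z = s[:, None] * Vh                                   # mode coefficients Z_m = S_m V_m^*
    return modes, s, Z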
§ RESULTS AND DISCUSSIONS §.§ Flow properties Figures <ref> show the temporal and azimuthal averaged velocity profiles at x/D = 1.0, 1.4 and 2.0 for the wake of the cylinder with each L/D. The results for freestream and radial velocities are compared with the PIV results of Yokota et al. (two dimensional two component), except for x/D = 2.0 for L/D = 1.5 and 2.0, respectively. The figures illustrate that the velocity in the freestream direction in all cases deviates significantly in the positive direction compared to the previous study, with a maximum difference of approximately 30% of the freestream velocity, even though the trend is in agreement with that of the previous study. Since the velocity in the freestream direction corresponds to the out-of-plane component in the stereo PIV measurements, the error is considered to be significant. This error is discussed in appendix <ref>. The two components corresponding to the in-plane components are in good agreement with the results of the previous study for the r component, while the θ component is almost zero regardless of the r position, indicating the axisymmetry of the wake of the cylinder. The velocity profiles of the previous study show an unnatural high-wavenumber oscillation in some places due to the time during the measurement when particles could not be introduced into the freestream region and the scratched lines on the acrylic wall used in the test section. Next, the profiles of the turbulence statistics are discussed. Figure. <ref> shows the profile of turbulent kinetic energy k_3C calculated from the three components measured in the present study and the RMS values of fluctuations of each velocity component are shown. The calculations are as follows: k_3C = 1/2∑_j(u_j, RMS/U)^2 (j = x, r, θ), u_j, RMS = √(1/N∑_n=1^Nu'_j, n^2). All statistics for the data obtained in the present study are averaged in the azimuthal direction. The RMS values are compared with the results of the previous study in the same way as with the case of time-averaged velocity profiles. Figures <ref> (a–c) show that k_3C is large at r/D < 0.5, which is behind the cylinder, and decreases rapidly in the freestream region. This tendency is more noticeable at x/D = 1.0, which is close to the cylinder. In addition, for a given L/D, k_3C decreases over the entire r-position downstream, and the profile also becomes linear from a curve with a large change from the freestream region to the wake. This is considered to indicate that small vortices or large vortex structures in the turbulence of the wake of the cylinder are convected and weakened by viscous dissipation. Furthermore, a comparison of Figs. <ref> (a–c) illustrates that k_3C decreases as L/D increases. The reason for this L/D dependence is considered to be related to the reattachment of flow separated at the leading edge to the curved surface of the cylinder. A nonreattaching flow with no flow reattachment occurs at L/D = 1.0, while a reattaching flow is formed at L/D =1.5 and 2.0. This flow classification was made by <cit.> based on the time-averaged velocity field and they reported that the presence or absence of flow reattachment switches at L/D = 1.5. Note that there is intermittency in flow reattachment for L/D = 1.5. 
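For reference, the RMS velocity fluctuations and the turbulent kinetic energy k_3C defined above can be computed from the snapshot series in a few lines; the azimuthal averaging applied in the paper would be an additional mean over the θ index, and the array shapes below are assumptions.

import numpy as np

def turbulence_statistics(u_x, u_r, u_theta, U_inf):
    # u_j: arrays of shape (N, ...) of instantaneous velocities; fluctuations are taken about the time mean.
    rms = {}
    for name, u in (("x", u_x), ("r", u_r), ("theta", u_theta)):
        fluct = u - u.mean(axis=0)
        rms[name] = np.sqrt(np.mean(fluct**2, axis=0))     # u_{j,RMS}
    k_3C = 0.5 * sum((rms[name] / U_inf)**2 for name in rms)
    return rms, k_3C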
<cit.> reported that the vortex structure formed in the separated shear layer in the case of the reattaching flow is smaller than those in the nonreattaching flow and that the instantaneous velocity also decreases as it approaches the trailing edge, and <cit.> obtained power spectral densities that are consistent with this report. In short, k_3C is considered to be smaller in the order of L/D = 1.0, where a steady large-scale structure is observed, L/D = 1.5, where an intermittent large-scale structure is observed, and L/D = 2.0, where the vortex structure is small. The profiles of RMS values of the velocity fluctuations in each direction are shown in Figs. <ref> (d–f). The results of <cit.> are plotted together in these figures for comparison. However, as data for x/D = 2.0 could not be obtained for the cases L/D =1.5 and 2.0 in the previous study, comparisons were not made at that position. As shown in Figs <ref> (e, f), for L/D =1.5 and 2.0, the magnitude of the radial velocity fluctuations differs from the previous study for r/D > 0.6, while the profiles are consistent for the other positions and cases. Unnatural changes in the r direction are also observed, which could have been caused by problems in PIV measurements in the previous study, as described above. The streamwise velocity fluctuations are larger at locations where the change in the time-averaged streamwise velocity in the r direction is greater, i.e. at the shear layer location, and are particularly large for L/D = 1.0, which is in the case of the nonreattaching flow. On the other hand, the radial and circumferential velocity fluctuations are large at the wake centre and decrease towards the r direction positive. These velocity fluctuations in each direction show a profile similar to the normalised turbulent stresses in the disc wake reported by <cit.>. Here, their paper shows results for 20 < x/D < 120. The change in profile in the r direction becomes more gradual downstream of any L/D, resulting in a linear turbulent kinetic energy distribution downstream, as shown in Figs. <ref> (a–c). §.§ Characteristic fluctuations §.§.§ Eigenspectra Figure <ref> shows the eigenspectra obtained for each L/D and each x-position by applying the modal decomposition described in  <ref> to the velocity fluctuation field. The range is for the azimuthal mode m=0–10 and leading POD mode n=1–4. At first, eigenvalues become smaller as L/D increases. This is a similar L/D-dependent trend to that of k_3C shown in the figure, corresponding to the large velocity fluctuations in the nonreattaching flow and the smaller fluctuations in the reattaching flow. Regardless of L/D and x-position, the dominant mode is the m=1 mode, which is larger than the other azimuthal modes, especially for L/D=1.0 and 1.5. On the other hand, for L/D=2.0, the magnitudes of the eigenvalues of m=1 and the second contributor, m=2, are comparable. The order of contribution of azimuthal modes other than m=1 depends on L/D or x/D. Comparing the magnitude of the eigenvalues of the n=1 mode for each azimuthal mode, the order of contribution for higher azimuthal wavenumbers than the m=1 mode is decreasing with increasing wavenumbers while the order of contribution of the axisymmetric mode m=0 varies with L/D or x/D. The contribution order of the axisymmetric mode for L/D=1.0 is fifth after m=4 at x/D=1.0, sixth after m=5 at x/D=1.4 and seventh after m=6 at x/D=2.0. 
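The contribution ordering discussed above can be read directly from the singular values; assuming the `modes` dictionary from the earlier decomposition sketch, a ranking could be obtained as follows (taking the eigenvalues as the squared singular values, up to a constant normalisation).

# lambda_{m,n} ~ sigma_{m,n}^2 (up to a normalisation by the number of snapshots)
energies = {m: modes[m]["sigma"] ** 2 for m in modes}

# rank azimuthal wavenumbers by the energy of their leading (n = 1) POD mode
order = sorted(energies, key=lambda m: energies[m][0], reverse=True)
print("contribution order of azimuthal modes:", order)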
Since the present study focuses on the large-scale wake structure in the nonreattaching flow, the subsequent discussion will be on the wake behind the cylinder with L/D=1.0. <cit.> applied spectral POD to velocity fluctuation fields in the wake of a disc which forms a nonreattaching flow, and showed eigenspectra integrated in the frequency direction. Their results show that the m=0 mode contributes third after the m=2 mode at x/D=0.1 and fourth after the m=3 mode at x/D≥1.0. Here, the results are compared for a disc and a cylinder with L/D=1.0, which occur the same flow structure of the nonreattaching flow. <cit.> showed that the distance from the leading edge of the cylinder to the downstream end of the recirculation region varies almost linearly with L/D from the results of <cit.> and <cit.> and them. Although the distance between L/D=0 and 1.0 differs by 0.34D, this difference is not considered to be significant, and the result of <cit.> at x/D=2.0 are compared with that of L/D=1.0 at x/D=1.0, with the same distance from the leading edge. This position is inside the recirculation region in both cases. As mentioned above, the order of contribution of the azimuthal modes for a disc is 1→2→3→0→⋯, whereas for a cylinder the order is 1→2→3→4→0→⋯ as shown in Fig <ref> (a). This suggests that even if the flow structure is similar, the order of contribution of the azimuthal modes changes with L/D. Next, we consider a large-scale wake structure that corresponds to each mode in the case of L/D=1.0. <cit.> reported that large-scale vortex shedding appears in the wake at L/D≤1.5, which causes velocity fluctuations that are antisymmetric with respect to the cylinder axis. Since the amplitude of the m=1 mode is larger than those of the other azimuthal modes at L/D=1.0 and 1.5 as mentioned above and the m=1 mode shows an antisymmetric distribution with respect to the cylinder axis, this mode is considered to represent large-scale vortex shedding. The next point to note is that the energy of the m=0 mode is large only when L/D=1.0. The m=0 mode is an axisymmetric velocity fluctuation and the relatively high energy in the recirculation region at x/D=1.0 suggests that this mode represents recirculation bubble pumping. Furthermore, The m=2 mode with a large contribution is considered to correspond to the double-helix structure reported by <cit.> and <cit.> for the disc wake or the streak-like fluid structure reported by <cit.> and <cit.>, but as there are no reports on this structure in previous studies on cylinders, it is discussed together with spatial distributions of modes in . <ref>. §.§.§ Eigenfunctions and mode coefficients The eigenfunctions of each velocity component at each x-position for mode (m,n)=(0–2,1) in the case of L/D=1.0, which forms the nonreattaching flow focused on in the previous section, are shown in Fig <ref>. In addition, the eigenfunctions for the mode (m,n)=(0,2), which show characteristic spatial patterns, are also presented. These figures show that the eigenfunctions of each mode are almost the same regardless of the x-position. Moreover, Fig. <ref> shows the power spectral density of the real part of the time-series mode coefficients for these modes at each x-position. Figures <ref> (a–c) show that for axisymmetric modes (m,n)=(0,1), the x component is dominant, with large fluctuations in the recirculation region. The r component also fluctuates in phase with the velocity fluctuations in the x direction in the freestream region. 
On the other hand, the θ component shows less fluctuation than the other two components, but downstream, fluctuations indicative of flow rotation can be seen in the centre of the wake. This mode becomes less variable downstream in the low-frequency region at St≈0.024, as can be seen from Figs. <ref> (a–c). This corresponds to the energy change in the x direction of this mode shown in the eigenspectra (Fig. <ref> (a–c)). The fact that the variation at St≈0.024 is large at x-positions corresponding to the inside of the recirculation region suggests that this variation is due to the recirculation bubble pumping <cit.>. The length of the recirculation region is longer than that in the time-averaged field when a negative fluctuation of the x component in the recirculation region occurs. At the same time, the width of the recirculation region also increases, and a negative fluctuation of the r component is considered to be observed in the freestream region, corresponding to flow toward the centre. The PSD of this mode also shows a high fluctuation at St ≈ 0.23. Fluctuations at St≈0.2 have been identified in axisymmetric modes in the reports by <cit.>, <cit.> and <cit.>, but it is not clear what kind of fluid phenomena they correspond to. However, as shown by <cit.>, <cit.> and the results of the present study, there is a large fluctuation in the frequency around it from near wake to far wake, which is important for the analysis of the wake. The relationship between the fluctuations and them in other modes is discussed in  <ref>. Figures <ref> (d–f) show that the θ component is dominant for the axisymmetric mode (m,n)=(0,2), while the fluctuations of the other two components are very small. The sign of θ component is opposite in the centre and outer region, indicating that it causes azimuthal shear. <cit.> observed a non-planar-symmetric vorticity distribution called “Yin-Yang” pattern in a disc wake, and <cit.> suggested that the plane-symmetric vortex may be twisted by azimuthal shear during vortex formation or in the early stages of vortex shedding, forming a “Yin-Yang” pattern. It was reported that the rotational processes only appear in the recirculation region in the wake of an axisymmetric bluff body, although this cannot be elucidated in the present study as it was only conducted in the region near the recirculation region. From the above, it is considered that this mode of azimuthal shear is strongly related to vortex shedding and will be further investigated in  <ref>. The PSD of this mode, shown in Figs. <ref> (d–f), decreases in the entire frequency domain as x/D increases, but no peaks are observed and no characteristic trend is observed, unlike the PSDs of other modes. Figures <ref> (g–i) show that for the antisymmetric mode (m,n)=(1,1), the u_x fluctuation is maximum at the outer edge of the recirculation region, and the position of the maximum value shifts the positive direction of r as x/D increases. This indicates that the wake width increases downstream. Negative fluctuation regions in the x component correspond to the wake position, with larger fluctuations indicating that the wake position is further away from the central axis of the cylinder. The u_r fluctuations occur at the same position as the u_x fluctuations, but in opposite phase, and this causes the Reynolds stress -u_x'u_r'. The θ component has a large fluctuation at a 90-degree deviation in the azimuthal direction from the x and r components. 
It can be seen that, when considered together with the r and θ components, the distributions shown in these figures create vortices with opposite sign streamwise vorticity in the z>0 and z<0 regions. In the region between the two vortices, the flow is from the u_x acceleration region to the deceleration region, and the flow outside each vortex is from the u_x deceleration region to the acceleration region. The PSD of this mode shown in Figs. <ref> (g–i) represents that there is a clear peak at St=0.129. This fluctuation frequency is in good agreement with the large-scale vortex shedding in the wake of the freestream-aligned circular cylinder reported by previous studies <cit.>. Peaks at this frequency appear from near wake to far wake, indicating that large-scale vortex shedding is a global fluctuation <cit.>. However, the intensity of the fluctuations weakens as x/D increases, as can be seen from the present study and their results. Similarly, the fluctuations weaken in the low-frequency region as x/D increases. Low-frequency fluctuations of m=1 modes have not been discussed in flow around a freestream-aligned circular cylinder, but the study of the wake of an axisymmetric bluff body <cit.> has identified fluctuations with the same spatial structure as very low frequency (VLF) mode, with St on the order of 10^-2. It is known that VLF mode exists as an m=1 mode near the object, but switches to an m=2 mode around the downstream end of the recirculation region. The relationship between m=1 and m=2 modes is also discussed in  <ref>. Figures <ref> (j–l) show that mode (m,n)=(2,1), as with the mode (m,n)=(1,1), has large u_x fluctuations at the outer edge of the recirculation region and that the region with large fluctuations also moves in the positive direction of the r axis. The u_r fluctuations are also large in the same position as the u_x fluctuations as well as mode (m,n)=(1,1) and are in opposite phases. Hence, mode (m,n)=(2,1) is also a factor producing Reynolds stress -u_x'u_r'. The θ component has a large fluctuation at a position 90 degrees shifted azimuthally from the other two components, and when considered together with the r component, shows that a total of four vortices appear, with the sign of the streamwise vorticity switching alternately in the azimuthal direction. This structure is also seen in the disc wake <cit.> and is very similar to that of the present study. There is an acceleration region of u_x at the boundary of vortices with switching positive to negative streamwise vorticity in the azimuthal direction, and a deceleration region of u_x at the boundary of vortices with negative to positive streamwise vorticity. The PSDs for this mode shown in Figs. <ref> (j–l) reveal a broad peak at St ≈ 0.23, as seen for mode (m,n)=(0,1). The fluctuations at this frequency become smaller as x/D increases. A similar trend is observed in the results of <cit.>. <cit.> present that the double-helix structure is apparent in the wake of an axisymmetric bluff body at almost twice the frequency of large-scale vortex shedding, and that the fluctuations decrease downstream, from the spatial distribution of the modes obtained by dynamic mode decomposition (DMD). A peak at St=0.27 appeared in the frequency spectrum for the azimuthal mode m=2, whereas the frequency of fluctuation of large-scale vortex shedding is St=0.135 at x/D=10 in a study of the disc wake <cit.>. Therefore, the fluctuations at St ≈ 0.23 in the cylinder wake are considered to be caused by the double-helix vortex. 
On the other hand, the fluctuations increase downstream in the low-frequency region, which is opposite to the trend in the fluctuations in the low-frequency region for mode (m,n)=(1,1). Since fluctuations with the same spatial structure as the VLF mode mentioned above make a transition from m=1 to m=2 around the downstream end of the recirculation region, the reversal of the tendency of the PSDs for m=1 and m=2 in the low-frequency range to change in the downstream direction is also considered to be indicative of this transition. Another related fluctuation in the low-frequency region of the azimuthal mode m=2 is the recirculation bubble pumping. <cit.> numerically investigated the wake of the re-entry capsule and presents that recirculation bubble pumping is associated with four streaks that appear downstream of the recirculation region by showing corresponding DMD results. In any case, the low-frequency fluctuations of mode (m,n)=(2,1) are considered to indicate four streaks appearing in the wake. §.§ Change in vortex shedding position The vortex shedding position has been reported to be related to aerodynamic fluctuations in the lateral direction of a freestream-aligned circular cylinder by <cit.> and <cit.>. Therefore, the comprehension of the vortex shedding position is important for suppressing vibrations acting on the cylinder. In addition, <cit.> suggested that fluctuations in the azimuthal position of vortex shedding are associated with the recirculation bubble pumping. The discussion proceeds for the vortex shedding position before investigating this relationship. In the present study, the representative position of the vortex shedding position is expressed by the barycentre of the momentum deficit in the following equation, as used in the previous studies <cit.>: y_m = ∫ y(1-u_x/U)dS/∫ (1-u_x/U)dS, z_m = ∫ z(1-u_x/U)dS/∫ (1-u_x/U)dS, where S is the examined area, which in the present study is [-1.2D 1.2D] in both y and z directions. Fig. <ref> shows that the time-averaged velocity in the freestream direction is positively biased, but that the fluctuation components in the present study are in good agreement with the previous study. Therefore, the qualitative discussion should be reasonable. Here, the trajectory is based on the data for 24.7 s. Figures. <ref>(a–c) show that the vortex shedding position fluctuates irregularly around the origin. They also show that the circular area drawn by the trajectory becomes larger as x/D increases, indicating that the wake width increases. The centre of the trajectory is shifted from the origin to the negative direction of the y axis for x/D=2.0 (Fig.<ref>(c)), which is due to error vectors at the left end of the examined area. The trajectory of the vortex shedding position represents a loop or a flapping-like reciprocating motion, depending on the time. <cit.> also showed trajectories of vortex shedding positions under Re=10^4, in which closed loops were identified. They suggested that this loop was a helical vortex structure appearing in the wake of the cylinder. However, they also recognised that the shape of the loops varied irregularly with time. If only helical vortices appeared, they would show a circular trajectory as shown by <cit.>. Therefore, helical vortices, flapping, and a structure mixing them in the wake can be assumed to appear depending on the time at this Reynolds number. 
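The barycentre of the momentum deficit used above to track the vortex shedding position can be evaluated on the measurement grid as in the following sketch; the masking of the examined area and the variable names are assumptions.

import numpy as np

def momentum_deficit_barycentre(u_x, U_inf, y, z, half_width=1.2):
    # u_x: instantaneous streamwise velocity on an (N_y, N_z) Cartesian grid,
    # y, z: coordinates normalised by D; the examined area S is |y|, |z| <= 1.2D
    Y, Z = np.meshgrid(y, z, indexing="ij")
    mask = (np.abs(Y) <= half_width) & (np.abs(Z) <= half_width)
    deficit = np.where(mask, 1.0 - u_x / U_inf, 0.0)   # momentum deficit 1 - u_x/U
    total = deficit.sum()
    y_m = (Y * deficit).sum() / total
    z_m = (Z * deficit).sum() / total
    return y_m, z_m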
Consequently, the velocity fluctuations due to mode (m,n)=(1,1) at x/D=1.4, corresponding to large-scale vortex shedding, are visualised three-dimensionally using Taylor's hypothesis. Bandpass filtering of the mode coefficients with 0.1≤ St ≤0.2 was applied and large-scale vortex shedding was focused. The parameter k_3C^1/2, calculated from the turbulent kinetic energy, is at the maximum of 28.4% for L/D=1.0 and x/D=1.4, and although it is not appropriate to apply Taylor's hypothesis as in the paper of <cit.>, it was applied because it is useful for understanding changes in the vortex shedding position. Figure <ref> shows selected visualisations at times when helical vortices, flapping or a mixture of both are considered to occur. The left contour plot shows the velocity fluctuations of the (m,n)=(1,1) mode at time t, and the isosurface is formed by connecting the points where the fluctuations are ±0.05. Figures <ref> (a, b) show clockwise and anticlockwise helices in the view from downstream, respectively, where twisting occurs without significant change in the magnitude of the fluctuations. In other words, the vortex shedding position in the r direction remains the same, but its position changes in the azimuthal direction in a circular pattern. On the other hand, Fig. <ref> (c) shows that the positive and negative values switch without changing the azimuthal position, indicating that vortex shedding is performed in a certain plane like flapping motion. Furthermore, Fig. <ref> (d) shows a mixture of helical and flapping features and is considered to be an elliptical motion. These four states appear irregularly, as shown in the supplementary material of movie 1. Figure <ref> shows the PSD calculated from the amplitude and the angular variation of the spatial pattern of mode (m,n)=(1,1), which are considered to correspond to the radial and azimuthal fluctuations of the vortex shedding position, respectively. The angular variation of the spatial pattern was calculated from the azimuthal position of the deceleration region of the streamwise velocity corresponding to the vortex shedding position. The data used for the calculation of the PSDs were the same as those used for the plot of the trajectories, while each PSD was normalised by its maximum value. Only azimuthal position fluctuations appear when the trajectory of the vortex shedding position is circular, as shown in Figs. <ref> (a, b), and a peak is considered to appear at St=0.129 in the PSD of the angular variation. However, the angular variation appears as a fluctuation at St=0.129 even in the case of flapping as shown in Fig. <ref> (c). The gradient of the angular change, i.e. the angular velocity, is necessary to classify them. Flapping or elliptical loops across the vicinity of the cylinder axis, as shown in Fig. <ref> (c, d), are expected to appear as the twice higher fluctuation frequency due to large-scale vortex shedding. Accordingly, the fluctuations at St≈0.26 are considered to be due to these patterns. Similar positional fluctuations have been observed immediately behind an axisymmetric bluff body <cit.>. Two other peaks were observed, appearing at St≈0.02 and 0.23 of the PSD of the amplitude. The low-frequency fluctuations at St≈0.02 are considered to be related to the recirculation bubble pumping. The vortex shedding position becomes larger or smaller in the r direction as the size of the recirculation region changes in the streamwise direction. 
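A possible implementation of the band-pass filtering of the mode coefficients and of the amplitude and azimuthal-angle signals whose PSDs are discussed above is sketched below; the filter order, the Welch parameters and the use of the phase of Z_{1,1} as the azimuthal position are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt, welch, detrend

def shedding_position_signals(z11, fs, U_inf, D, st_band=(0.1, 0.2)):
    # z11: complex time series of the mode coefficient Z_{1,1}; fs: sampling rate [Hz]
    f_lo, f_hi = (st * U_inf / D for st in st_band)        # Strouhal band -> Hz
    b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    z_f = filtfilt(b, a, z11.real) + 1j * filtfilt(b, a, z11.imag)

    amp = np.abs(z_f)                        # ~ radial excursion of the shedding position
    theta_m = np.unwrap(np.angle(z_f))       # ~ azimuthal position of the u_x deceleration region

    nseg = min(len(amp), 2048)
    f, p_amp = welch(amp, fs=fs, nperseg=nseg)
    _, p_ang = welch(detrend(theta_m), fs=fs, nperseg=nseg)
    st = f * D / U_inf                       # report the PSDs against St = f D / U
    return st, p_amp / p_amp.max(), p_ang / p_ang.max()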
Finally, the fluctuations at St=0.23 become relatively weak compared with those at St=0.26 as x/D increases, suggesting that these fluctuations appear mainly inside the recirculation region. A broad peak is observed at St≈0.23 for modes (m,n)=(0,1) and (2,1), as shown in Fig. <ref>, and the relationship between the modes at St=0.23 is discussed further in the next section. Figure <ref> shows the temporal variation of the vortex shedding position at x/D=1.4, close to the downstream end of the recirculation region. Figure <ref>(a) plots the azimuthal position over 24.7 s, while Figs. <ref>(b–e) show the change in the position over time for the pseudo-three-dimensional maps shown in Figs. <ref>(a–d), respectively. As in the previous paragraph, band-pass filtered data with 0.1≤ St ≤0.2 were used here to calculate the azimuthal position. Figure <ref>(a) shows that the rotational direction of the vortex shedding switches irregularly, with a major trend to rotate in the positive direction of the θ axis at the times shown. Although not shown here for brevity, different runs at the same location had a major trend to rotate in the negative direction of the θ axis. <cit.> suggested that switching of the rotational direction of the vortex shedding position is associated with the recirculation bubble pumping. Therefore, conditional sampling was applied to mode (m,n)=(0,1) and this association was investigated in the present study. Figures <ref> (a, b) (red histograms) show the results of sampling Z_0,1 at times when the slope of the azimuthal position of the vortex shedding is positive and negative, respectively. The probability distributions over the whole measurement time are shown together in the figure for comparison (blue histograms). The probability distribution of Z_0,1 should be biased if the rotational direction of the vortex shedding position were associated with recirculation bubble pumping, but no such bias is observed in the results. In other words, there is no relationship between the rotational direction of the vortex shedding position and the recirculation bubble pumping. The present study investigates not only the rotational direction of the vortex shedding position but also whether the state of the recirculation region differs depending on whether the vortex shedding shows a circular pattern or a reciprocating pattern such as flapping. The gradient of the azimuthal position was used as the condition for classifying these patterns: the state is classified as anticlockwise circular if 0.1<ω_m/2π<0.2, as clockwise circular if -0.2<ω_m/2π<-0.1, and as flapping otherwise, where ω_m = dθ_m/d(tU/D) and θ_m is the azimuthal position of the vortex shedding. Figures <ref>(b, c) show that the vortex shedding position varies smoothly if the pattern is circular. On the other hand, Fig. <ref>(d) illustrates that discontinuous changes and near-zero gradients appear if it shows a flapping pattern. The present authors compared the pseudo-three-dimensional field shown in Fig. <ref> with the positional changes in Fig. <ref> and confirmed that the sample points satisfying the conditions were selected appropriately. Points of the circular pattern and points of the flapping pattern were both confirmed to appear when the two patterns are mixed, as shown in Fig. <ref>(e). Figures <ref> (c–e) show the results of sampling Z_0,1 when the anticlockwise circular, the clockwise circular and the flapping pattern appear, respectively. In the case of the circular pattern shown in Fig.
<ref>(c, d), the distribution is slightly different from that of Z_0,1 for the whole measurement time, but there is no clear difference. No significant changes in the distribution of Z_0,1 compared to the probability distribution of it over the whole measurement time are also observed in Fig. <ref>, suggesting that there is no link between the shedding pattern and the recirculation bubble pumping. The distribution of rotational shear is shown at mode (m,n)=(0,2) in the present study, and the previous study <cit.> suggests that the mode is highly associated with vortex shedding. Therefore, conditional sampling was applied to this mode as well as mode (m,n)=(0,1). The conditions are described in Eq. <ref>. Figures <ref> (a–e) show the results of sampling Z_0,2 for anticlockwise and clockwise changes, anticlockwise and clockwise circular patterns and flapping, respectively, as in Fig. <ref>. At first, Figs. <ref> (a,b) presents that Z_0,2 are biased depending on the direction of rotation of the vortex shedding position. It is negatively biased when the vortex shedding is anticlockwise and positively biased when the shedding is clockwise. The bias becomes more apparent when the conditions are divided into circular patterns and flapping, with Z_0,2 being negatively biased when the vortex shedding shows an anticlockwise circular pattern and positively biased when it shows a clockwise circular pattern, while the probability distribution of Z_0,2 in the case of flapping is the same as that of Z_0,2 at all times, centred around zero. The results above show that rotational shear tends to appear not when the vortex shedding shows flapping, but when the vortex shedding shows a circular pattern. Additionally, the rotational direction of the vortex shedding is found to determine the direction of the rotational shear. The rotational direction of the vortex shedding and the moments generated by the azimuthal velocity fluctuation field of mode (m,n)=(0,2) are also discussed. The moments at the cross section are calculated by the following equation <cit.>: M(x, t) = ∫_r ∫_θu_θ(x, r, θ, t)/U(r/D)^2drdθ. When Z_0,2=0.01>0, the moment calculated by the equation is -0.0070, which corresponds to the clockwise-direction moment. The bias distributions in Figs. <ref> (a-d) show that anticlockwise moments act when the vortex shedding position changes in the anticlockwise direction and clockwise moments act when the vortex shedding position changes in the clockwise direction, which indicates that the direction of the moments matches the change in the vortex shedding position. These relationships are further discussed in  <ref>. §.§ Relationship between characteristic fluctuations Previous studies on the wake flow of a disc have discussed which characteristic fluid phenomena each mode corresponds to, based on the results of modal decomposition. Similar discussions have been developed in  <ref> and <ref> of this paper. The fluctuations of Z_0,1 and |Z_1,1| were found to be larger in the low-frequency region of St≈0.024, while those of Z_0,1, Z_2,1 and |Z_1,1| were larger in the high-frequency region of St≈0.23. Coherence and phase differences were calculated and the relationships between the modes were investigated. The results shown in Fig. <ref> illustrate the high coherence between Z_0,1–|Z_1,1|, Z_0,1–|Z_2,1| and |Z_1,1|–|Z_2,1|. At first, high coherence is observed in the low-frequency region of St≈0.024. The values are approximately 0.75 between Z_0,1–|Z_1,1| as shown in Fig. 
<ref> (a–c), and the phase differences are approximately 45 deg regardless of the x-position. The coherence between Z_0,1–|Z_2,1| is large downstream from x/D=1.4 near the downstream end of the recirculation region with a value of about 0.5 as shown in Figs. <ref> (d–f), and the phase difference is similar to that between Z_0,1–|Z_1,1|. The high coherence observed downstream of the recirculation region suggests that the low-frequency fluctuations in |Z_2,1| are mainly related to the streak. A similar trend is also observed between |Z_1,1|–|Z_2,1| as shown in Fig. <ref> (g–i). Figure <ref> summarises the relationship between the length of the recirculation region and the strength of fluctuations in mode (m,n)=(1,1),(2,1) based on the phase difference between Z_0,1–|Z_1,1| and Z_0,1–|Z_2,1|. The fluctuations in mode (m,n)=(1,1),(2,1) are strong when the recirculation region is shorter than the mean field due to bubble pumping (red region in the figure). However, the causality between these modes is not clear, and therefore the transfer entropy between Z_0,1, |Z_1,1| and (m,n)=(1,1),(2,1) was calculated. Entropy H in information theory is calculated by the following equation: H(X) = -∑_x ∈ X p(x)log_2p(x), where p is the probability density function with x as the random variable. If two or more random variables are involved, it is called joint entropy and is calculated as follows: H(X,Y) = -∑_x ∈ X∑_y ∈ Y p(x,y)log_2p(x,y). The transfer entropy TE is calculated by Eq. <ref> and the dominant direction of information flow is determined by the net transfer entropy TE_ net calculated by Eq. <ref>: TE_X → Y = H(Y_t|Y_t-1)-H(Y_t|Y_t-1,X_t-1) = H(Y_t,Y_t-1)-H(Y_t-1)-H(Y_t,Y_t-1,X_t-1)+H(Y_t-1,X_t-1), TE_ net,X → Y = TE_X → Y-TE_Y → X. Positive TE_ net,X → Y indicates that the information flow is dominant in the X to Y direction. In the present study, the data for 24.7 s at x/D=1.4 was used to create a discrete probability distribution as shown Fig. <ref>, and entropy and joint entropy were calculated to compute TE_ net. Figure <ref> shows the distribution of the whole-time data when X=Z_0,1, Y=|Z_1,1|, Z=|Z_2,1| and its probability distribution as an example, and the joint entropy H(X_t, Y_t, Z_t) calculated by Eq. <ref> is 12.74. Figure <ref> (a) shows the results of the net transfer entropy for three different paths between Z_0,1, |Z_1,1| and |Z_2,1|. Here, they were preprocessed with a lowpass filtering of St<0.05. The TE_ net between Z_0,1–|Z_1,1| is positive, which indicates that the information flow is dominant in the direction from mode (m,n)=(0,1), which corresponds to bubble pumping, to the amplitude of mode (m,n)=(1,1), which is considered to correspond to the radial position of the vortex shedding position. Similarly, the direction from mode (m,n)=(0,1) to amplitude of mode (m,n)=(2,1) is dominant between Z_0,1–|Z_2,1|. The dominant flow of information between |Z_1,1|–|Z_2,1| is from the amplitude of mode (m,n)=(1,1) to the amplitude of mode (m,n)=(2,1), and information is mainly transferred from the one with low wavenumber to the one with high wavenumber in all cases. In short, the bubble pumping is considered to be the starting point for the change in the radial position of the vortex shedding and the strength of the streak. Secondly, relatively high coherence between Z_0,1–|Z_1,1| in the high-frequency region of St=0.23, which is apparent in the x-position where the recirculation region is located, as shown in Fig. <ref>. 
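The net transfer entropy used in this causality analysis can be estimated from histogram-based discrete probability distributions as in the following sketch; the number of bins and the one-sample time lag are assumptions.

import numpy as np

def _joint_entropy(*signals, bins=16):
    # discrete joint entropy H (base 2) from a multidimensional histogram
    hist, _ = np.histogramdd(np.column_stack(signals), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def transfer_entropy(x, y, bins=16):
    # TE_{X->Y} with a one-sample lag, written with joint entropies as in the equation above
    x1, y1, y0 = x[:-1], y[:-1], y[1:]
    return (_joint_entropy(y0, y1, bins=bins) - _joint_entropy(y1, bins=bins)
            - _joint_entropy(y0, y1, x1, bins=bins) + _joint_entropy(y1, x1, bins=bins))

def net_transfer_entropy(x, y, bins=16):
    return transfer_entropy(x, y, bins) - transfer_entropy(y, x, bins)

# e.g. a positive net_transfer_entropy(Z01_lowpass, abs_Z11_lowpass) indicates that
# information flows mainly from the bubble-pumping mode to the |Z_{1,1}| amplitude.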
The relationship between Z_0,1–|Z_1,1| at this frequency implies that the vortex shedding position moves in the r-positive direction as the length of the recirculation region increases, which is the reversal of the relationship in the low-frequency region. High-frequency fluctuations in the azimuthal mode m=0 have been observed in previous studies (Berger et al., Fuchs et al., Nidhan et al., Nekkanti et al.) but neither they have not been discussed in detail nor the present study does not provide a three-dimensional velocity field. Thus, the discussion of physical relationships is difficult. However, the fact that the relationship is found only in the recirculation region is consistent with the finding by <cit.> that the nonlinear interaction between unstable modes occurs only in the recirculation region. Another feature found only within the recirculation region is the rotational flow reported by <cit.>. They suggested that the planar symmetric vortex loops formed behind an axisymmetric body are collapsed by the rotational shear that appears in the recirculation region, which results in a twisted form of vortex loops known as the “Yin-Yang” pattern. Mode (m,n)=(0,2), which represents rotational shear, is also found in the present study, as shown in the figure, and the results in the section reveal that mode (m,n)=(0,2) is associated with the state of vortex shedding. However, the causality between them is not clear. For this reason, a causality investigation was carried out using transfer entropy, in the same way as described above. Figure <ref> (b) shows the TE_ net between the Z_0,2 and vortex shedding states, which shows that the information flow is dominant in the direction of the vortex shedding states to Z_0,2. This indicates that vortex shedding positions produce rotational shear and moments when the change in the vortex shedding position follows a circular pattern. The above discussion shows that axisymmetric fluctuations such as bubble pumping are related to the radial position of vortex shedding and the strength of the streak, especially in the low-frequency range, the information is transmitted in the direction of increasing azimuthal wavenumber, which starts with bubble pumping. The large-scale vortex shedding is linked to aerodynamic forces acting in the lateral, pitch and yaw directions of the cylinder, while fluctuations due to bubble pumping are closely related to base pressure <cit.>. This suggests the possibility of predicting the aerodynamic forces acting in the lateral and rotational directions by measuring base pressure in axisymmetric bodies such as those focused on in the present study. However, since the causes of bubble pumping are not known and could not be clarified in the present study, further investigation is required for this point. § CONCLUSIONS The present study has focused on the three-dimensional large-scale wake structure of characteristic phenomena appearing behind the freestream-aligned circular cylinder. Particularly, the relationship between the modes of velocity fluctuations in the nonreattaching flow, the change in the state of the large-scale vortex shedding, and the relationship between axisymmetric fluctuations such as the recirculation bubble pumping and the rotational shear have been clarified. Experimental investigations in a flow-interference-free condition were carried out by contactless two-dimensional three-component velocity measurements, which combined the 0.3-m MSBS with the stereo PIV measurement system. 
Three cylindrical models with L/D=1.0, 1.5 and 2.0 were used for the wind tunnel tests. Firstly, the results of the flow properties show that the errors are large for the time-averaged field compared to previous studies, but there is a good agreement for the fluctuation field. The modal decomposition, which combines azimuthal Fourier decomposition and proper orthogonal decomposition, is employed for the field of velocity fluctuations, and the relationship between modes and vortex shedding has been discussed. The eigenvalue spectra show that the m=1 mode is dominant regardless of L/D and x-position in the range of the present study. The order of contribution for higher wavenumber modes than m=1 is the order of increasing wavenumber, but the order of contribution for m=0 depends on the L/D and x-position. The eigenfunctions and the PSDs of the real part of the mode coefficients are subsequently presented, which focus on the case of L/D=1.0 where characteristic structures appear. The eigenfunctions of mode (m,n)=(0,1) are dominated by the velocity fluctuations in the x direction and also produce the in-phase velocity fluctuations in the r direction. Since the fluctuations of this mode at St≈0.024 are large at x-positions corresponding to the inside of the recirculation region, these fluctuations were treated as the recirculation bubble pumping. On the other hand, the fluctuations in this mode were also found to be large at St≈0.23, but it could not be clarified what kind of fluid phenomena correspond to these fluctuations. The mode (m,n)=(0,2) shows that the main component is the θ component and the eigenfunctions represent a velocity distribution that causes azimuthal shear. However, no characteristic trend is observed from the PSDs. The eigenfunctions of the dominant mode (m,n)=(1,1) illustrate the acceleration and deceleration regions of u_x and two vortices with opposite signs of vorticity in the freestream direction. A clear peak is observed at St=0.129 in this mode, and this fluctuation corresponds to large-scale vortex shedding. The eigenfunctions for mode (m,n)=(2,1) exhibit four vortices with vorticity in the freestream direction with switching signs alternately in the azimuthal direction. The fluctuations at St≈0.23 of this mode are considered to correspond to a double-helix structure and the low-frequency fluctuations are considered to correspond to a streak. One of the relationships between characteristic phenomena is the relationship between the shedding position of the large-scale vortex structure and the recirculation bubble pumping, as suggested by <cit.>. In the present study, the relationship is discussed mainly based on the mode (m,n)=(1,1). The vortex shedding showed circular patterns in anticlockwise and clockwise directions, flapping patterns, and a mixture of them. The state is irregularly changed with time. Amplitude changes in the mode coefficients corresponding to r-position fluctuations of the vortex shedding occur at St≈0.26 as a doubling of the fluctuation frequency due to the large-scale vortex shedding. This amplitude change is considered to be due to the vortex shedding in a flapping pattern. On the other hand, fluctuations in the azimuthal position occur at St=0.129, which are considered to be caused by the vortex shedding of a circular pattern. The association between these three vortex shedding patterns and the recirculation bubble pumping was investigated by conditional sampling, but no association was found between them. 
However, a link was found between the vortex shedding pattern and the mode (m,n)=(0,2), which indicates rotational shear. Rotational shear tends to appear when vortex shedding exhibits a circular pattern, and the direction of shear also tends to vary depending on whether the circular direction is anticlockwise or clockwise. Similarly, a trend occurred in the moments calculated from the field of azimuthal velocity fluctuations. Furthermore, the main flow of information is from vortex shedding to Z_0,2, which implies that the rotational shear or moment is generated when the vortex shedding position follows a circular pattern. A relationship was observed between modes other than the vortex shedding patterns, and the coherence is high between Z_0,1–|Z_1,1|, Z_0,1–|Z_2,1|, and |Z_1,1|–|Z_2,1|. In the low-frequency region of St≈0.024, the phase difference indicates that Z_1,1 and Z_2,1 become larger when the recirculation region is shorter due to the recirculation bubble pumping, i.e. the fluctuations of the vortex shedding position in the r direction become larger and the streaks are stronger. The information is mainly transferred from modes with low to high azimuthal wavenumbers, and the bubble pumping is the starting point for influencing the vortex shedding and streak conditions. In the high-frequency region of St=0.23, the coherence is relatively high between Z_0,1–|Z_1,1|, and the phase difference indicates that the vortex shedding position moves in the positive direction of r as the length of the recirculation region increases. This relationship is only observed within the recirculation region, which is in agreement with the finding of <cit.> that the interactions between modes only occur within the recirculation region. The present study has partly clarified the relationships between the characteristic flow structures that appear in a nonreattaching flow. The relationship between the bubble pumping and the large-scale vortex shedding could be used to predict the aerodynamic forces acting on a body and control them with a plasma actuator <cit.>, etc. Further clarification of the relationship is desired in the future. Further understanding could be improved by experimental investigations, such as pressure fluctuation field measurements at the base and sides of the cylinder, and simultaneous dual-plane PIV measurements in planes parallel and perpendicular to the cylinder axis. Since it is rare for the flow to come from the front in the applications listed in the introduction, similar investigations to the present study are required for the case where the cylinders are angled to the flow, which is closer to the application. [Acknowledgements]We thank Drs. K. Asai and Y. Ozawa who gave ideas to construct the stereo PIV measurement system compatible with the MSBS. [Funding]This work was supported by JSPS KAKENHI (grant numbers 18H03809, 21H04586, 21J20673, 22KJ0175). [Declaration of interests]. The authors report no conflict of interest. [Author ORCID] S. Yokota, https://orcid.org/0000-0002-0004-7015; T. Nonomura, https://orcid.org/0000-0001-7739-7104 [Author contributions]S.Y. conducted the experiment and analysed data and T.N. proposed the method of the analysis and supervised the study. All authors contributed equally to reaching conclusions and in writing the paper. 
§ CORRECTION OF FREESTREAM VELOCITY Wind tunnel calibration tests were performed in the present study because a test section with dimensions different from the standard one was used, so the profiles of velocity and turbulence intensity were expected to differ. The tests reproduced the conditions of the stereo PIV measurements, although no particles were introduced. The profiles of the velocity and the turbulence intensity were measured at the centre of the test section in the y direction, traversing in the z direction, at 750 and 874.5 mm from the upstream end of the test section, respectively. The velocity profile was measured using an L-shaped pitot tube whose pitot coefficient is 1. Figure <ref> (a) shows the results of the velocity profile measurements. U_ave, T-BART is the freestream velocity obtained by the wind tunnel data measurement system, calculated from the differential pressure across the contraction. The test section used in the present study is narrower in the z direction than the standard one, which accelerates the flow. Since U_ave, T-BART was obtained in the experiment, a correction factor was applied to it to obtain the freestream velocity. The correction factor was 1.05, the average value over the range -100 mm ≤ z ≤ 100 mm. The turbulence intensity was measured with a hot-wire anemometer. The devices used were a hot-wire anemometer (CTA-002, Institute of Flow Research), a probe (type 55P11, Dantec Dynamics), a low-pass filter (PGF-8ELA, Japan Audio), a filter (FV-665, NF Electronic Instruments), an amplifier (3628, NF Electronic Instruments) and a recording device (USB-6363, National Instruments). Figure <ref> (b) shows the results of the turbulence intensity measurements. Turbulence due to the boundary layer was strong near the test section wall, but the average value over the range -100 mm ≤ z ≤ 100 mm was low (0.18%), indicating that the effect of the change in test-section dimensions was small.
http://arxiv.org/abs/2307.03243v1
20230706181743
That's BAD: Blind Anomaly Detection by Implicit Local Feature Clustering
[ "Jie Zhang", "Masanori Suganuma", "Takayuki Okatani" ]
cs.CV
[ "cs.CV" ]
That’s BAD: Blind Anomaly Detection by Implicit Local Feature Clustering Jie Zhang^1     Masanori Suganuma^1     Takayuki Okatani^1,2 ^1Graduate School of Information Sciences, Tohoku University      ^2RIKEN Center for AIP {jzhang,suganuma,okatani}@vision.is.tohoku.ac.jp =================================================================================================================================================================================================================== Recent studies on visual anomaly detection (AD) of industrial objects/textures have achieved quite good performance. They consider an unsupervised setting, specifically the one-class setting, in which we assume the availability of a set of normal (i.e., anomaly-free) images for training. In this paper, we consider a more challenging scenario of unsupervised AD, in which we detect anomalies in a given set of images that might contain both normal and anomalous samples. The setting does not assume the availability of known normal data and thus is completely free from human annotation, which differs from the standard AD considered in recent studies. For clarity, we call the setting blind anomaly detection (BAD). We show that BAD can be converted into a local outlier detection problem and propose a novel method named PatchCluster that can accurately detect image- and pixel-level anomalies. Experimental results show that PatchCluster shows a promising performance without the knowledge of normal data, even comparable to the SOTA methods applied in the one-class setting needing it. § INTRODUCTION In this paper, we consider visual anomaly detection and localization of industrial objects and textures. Anomaly detection (AD) <cit.> is the task of detecting anomalous images or patterns that are out of the distribution of normal images or patterns. AD for industrial applications often requires distinguishing small differences between normal and anomalies <cit.>; see examples from a standard benchmark dataset, MVTec AD, in Fig.<ref>. As anomalies can appear with countless types, and the majority are usually the normal samples in a manufacturing line, the one-class unsupervised setting has drawn the most attention from the research community. This setting assumes a set of anomaly-free images for training, which are selected by human experts, and we detect anomalies at test time. Since the features from the standard pre-trained deep models were `rediscovered' to be effective for the task <cit.>, recent studies employing them have achieved higher and higher performances on existing public benchmarks. Those features are proved to be representative enough for local image regions, even without any adaptation to the anomaly detection datasets at hand <cit.>. While visual unsupervised AD appears to be a solved problem due to these successes, attention has also been paid to more challenging problems, such as few-shot AD <cit.> and developing a unified model that can detect anomalies for multiple different classes of objects/textures <cit.>. However, these studies continue to consider the one-class setting, assuming a perfect set of anomaly-free samples to be available, which usually needs manual annotation by manufacturing experts. In this paper, we consider yet another scenario of unsupervised AD, which does not need any human annotations. 
Specifically, we consider the problem where we are not given the knowledge of normal samples for training; we want to detect anomalies in an input set of samples that might contain both normal and anomalous samples. Note that traditional machine learning often calls this setting `unsupervised AD' and the above one `semi-supervised AD' <cit.>, unlike recent studies in computer vision. For clarity, we call the setting blind anomaly detection (BAD) in this paper. The recent unsupervised AD methods mentioned above are not designed for BAD and cannot directly be applied to the problem. We then consider BAD a local patch outlier detection problem and introduce PatchCluster, which does not require human annotation and could automatically detect anomalies under BAD settings. We make the assumption that normal local features follow dense distributions and have small distances from each other in the feature space. Under this assumption, we propose to use local patch features to implicitly estimate the local feature distribution and use the average distance as the abnormality score. Unlike previous patch distribution modeling-based methods that assume features in the same spacial location follow the same distribution <cit.>, we cluster local patch features for the same contextual location by distance-based nearest neighbor searching. We show the effectiveness of PatchCluster and prove our assumption through comprehensive experiments and analysis. Reveling the strict restriction of spacial location-based modeling, PatchCluster is robust to spacial translation and rotation. PatchCluster achieves 95.7% and 95.9% average image-level and pixel-level anomaly scores on MVTec AD dataset without reaching any normal training data. § RELATED WORK One-class anomaly detection, also known as one-class novelty detection <cit.>, is a long-standing problem in computer vision. Prior arts mainly focus on image-level outlier detection where the anomalous samples follow distributions of other semantic categories, e.g., detecting dog images for the cat category. Representation learning-based methods that could effectively learn the global contextual information are employed to tackle this problem, of which deep Autoencoders(AE) <cit.> and Generative Adversarial Networks(GANs) <cit.> are popular choices. In industrial manufacturing scenarios, however, anomalies will generally occur in confined areas on a specific kind of product, making the anomalous samples very close to the normal data distribution and the task more challenging. Recent one-class anomaly detection benchmarks <cit.> providing normal real-world industrial products for training draw lots of research attention and lots of attempts are paid to utilizing ImageNet <cit.> pre-trained models that could extract representative features and conduct industrial anomaly detection in a local patch feature-based manner. PaDiM <cit.> explicitly models the feature distribution at each spatial location. However, the assumption that patch features at the same spatial location follow the same distribution is too strict. SPADE <cit.> creates a feature memory bank from the normal training data and assigns anomaly scores to test image features by kNN search. The image-level anomaly scores are still based on the global image distances. PatchCore <cit.> propose to use locally aware features to retain more contextual information and further utilize the greedy search method to reduce the size of the memory bank. 
There are also many approaches based on top of the pre-trained features that try to transfer the knowledge of normal features to a student network <cit.> or estimate the distribution of normal features by flow-based methods <cit.>. While recent SOTA methods have shown nearly perfect anomaly detection performance on the public benchmark, MVTec AD dataset <cit.>, e.g., PatchCore-Ensemble achieves a 99.6% image-level AUROC score on the MVTec AD, to the best of our knowledge, no attention has been paid to exploring achieving comparable performance without any human annotations. The one-class unsupervised anomaly detection setting requires specialists to annotate a set of normal images for training, especially for various industrial products. Under the BAD setting, most of the learning-based methods cannot be utilized directly. It should be noted that although some works have studied anomaly detection from noisy data <cit.>, i.e., anomaly-free training set contaminated by wrongly labeled anomalous samples, they are still under the one-class setting and need image labels from the human. The methods most related to ours are PaDiM <cit.> and PatchCore <cit.>. We estimate the local patch feature distributions by kNN search, without explicitly modeling the distribution and revealing the overly strict assumptions of PaDiM. PatchCore only uses the nearest neighbor for anomaly scoring, which is sensitive enough under the one-class setting, however, we show that there may be several candidates that are too close to the anomalous feature in the memory bank and significantly decrease the anomaly detection sensitivity. § BLIND ANOMALY DETECTION (BAD) We propose the task of blind anomaly detection (BAD), which involves identifying anomalies from a set of unknown images that may be either normal or anomalous. Given a set of images 𝒮 = I_1, I_2, ..., I_n, the objective is to identify anomalous images and localize the defective regions in each image. In this task, we do not use any image- and pixel-level annotations for training; we are not given any knowledge of normal or abnormal samples. Thus, recent AD methods can not be applied directly to the BAD task. As there is currently no comprehensive real-world dataset that aligns with this task, we formulate several variations of the BAD task using the existing unsupervised anomaly detection benchmark, MVTec AD dataset. The MVTec AD dataset <cit.> comprises 5 texture categories and 10 object categories, and is designed for one-class anomaly detection. It includes 3629 normal images for training and 1725 normal and anomalous images for testing. We have formulated three BAD settings: MVTec-Mix, MVTec-Test, and MVTec-Ano. The MVTec-Mix setting merges the training and test images for each category, while the MVTec-Test setting is the original test dataset. The MVTec-Ano setting is the most extreme case, in which it includes only the anomalous images from the test dataset. We present a statistical overview of the proposed BAD settings in Table <ref>. Under the BAD scenarios, it is not necessary to build separate training and testing splits, as opposed to supervised learning and one-class unsupervised learning. In the Ano setting, there are no normal images. Given this, the question arises whether it is possible to not only classify the two classes of images but also determine which class is normal, even though all images are anomalous. 
In other words, is it possible to assign low anomaly scores to normal images, even if they account for a small portion, and assign high anomaly scores to anomalous images, even though all the images contain anomalies? Considering the specific scene, industrial manufacturing lines, we could make use of the prior knowledge that most anomalies of industrial products will occur in subtle areas, then for one anomaly image, it is more likely that the majority of the pixels are still normal. We also show the mean percentage of anomalous pixels over 15 categories in Table <ref>. The prior knowledge that the normal pixels will always account for the majority makes it possible for blind anomaly detection. § LOCAL OUTLIER DETECTION We then introduce the proposed PatchCluster for BAD. It is an extension of existing feature memory bank-based methods and assigns anomaly scores to each patch feature by local patch feature clustering, which makes it suitable for either one-class setting or blind anomaly detection. §.§ Revisiting Memory Bank-based Methods Under the one-class setting, building a memory bank using mid-scale features extracted by a deep pre-trained model and then applying the simple nearest neighbor search for anomaly scoring could achieve nearly perfect anomaly detection performance. We also build a feature memory bank first as the extracted features have been shown to be representative enough for anomaly detection. Given an image I_i from the dataset 𝒮, the l-th layer of the pre-trained model E extracts feature map F_i^l ∈ℝ^W^l × H^l × C^l for the image. The feature vector f_i^l(w, h) ∈ℝ^C^l at spacial location (w, h)(w = 1, ..., W^l, h = 1, ..., H^l) is employed as the image patch representation.The pre-trained model could effectively extract features from low levels to high semantic levels. As the low-level features may be too generic and the high-level features are source-domain biased. The multi-scale features are concatenated along the channel dimension in PatchCore to get one single feature representation f_i(w, h) at each location. To increase the effective receptive field of the pre-trained features while avoiding introducing ImageNet-biased knowledge, PatchCore further employs adaptive average pooling to fuse the feature vector with its neighboring features in the feature map. The full feature memory bank ℳ is then created for all the images in the dataset 𝒮 ℳ = {f_i(w, h)}, i ∈{1, 2, ..., n}, w ∈{1, ..., W^l}, h∈{1, ..., H^l} To reduce the size of the memory bank, SPADE creates a memory bank for one test image using its nearest images in the dataset, while PatchCore subsamples the memory bank into a coreset by greedy subsampling. §.§ Proposed PatchCluser Method We extend our method from the feature memory bank-based arts. We first create a memory bank ℳ following PatchCore. SPADE and PatchCore score a patch feature by the distance with its nearest neighbor feature from the memory bank. It is effective enough, as there are only normal features in ℳ. However, in BAD where the memory bank contains both normal and anomalous features especially ℳ is built on top of the dataset 𝒮 itself, it may also be easy to find similar patch features for the anomalous patches. To address the above issue, we make the assumption that 1). features that follow the same distribution have smaller distances in the feature space and they could be used to estimate the local feature distribution in a specific contextual location. 2). 
anomalies are random events, and the distribution of each type of anomaly has a larger variance than that of normal features. Under these assumptions, if we interpret the creation of the memory bank ℳ as a sampling process, it is possible to estimate the normal local distributions from the local features, since they are similar to each other and have a high probability of being sampled, whereas the anomalous distributions are more difficult to estimate because they have more variation and a lower probability of being sampled. We then build a local feature gallery 𝒢_i(h, w) by searching for the corresponding K nearest neighbors. Fig. <ref> gives an overview of the proposed PatchCluster. Anomaly detection for each patch feature is thus converted into a local outlier detection problem. Under these assumptions, normal local distributions lie in a dense region of the feature space, while anomalous local distributions are relatively sparse. We then estimate the feature abnormality by the mean distance to its neighbors, without explicitly modeling the distributions: a_w, h = 1/K∑_j=1^K dist(f_i(w, h), f_j), j ∈𝒢_i(h, w). With a proper K, for normal patch features it should be easy to find enough neighbors that follow the same dense distribution. However, for an anomalous patch feature, on the one hand, the neighbors tend to have larger distances from it; on the other hand, it is also likely that not enough close neighbors can be found in ℳ. Consequently, after scoring all the patch features in an image, we obtain an anomaly score map for it. As the features are down-sampled compared to the input images, we up-sample the score map by bilinear interpolation. Following the popular choice, to remove local noise, we apply a Gaussian filter with a kernel size of 4 to obtain the final anomaly score map. We could simply take the maximum score a_i^* in the score map as the image-level anomaly score. We find it is still robust for BAD to increase the image anomaly score when the nearest feature f^* in ℳ of the corresponding patch feature f_i^* has a large anomaly score: a_i = (1 - exp(a_i^*)/∑_f ∈𝒩(f^*)exp(dist(f_i^*, f)))· a_i^*, where 𝒩(f^*) is the set of nearest neighboring patch features of f^*. The image anomaly scoring function is the same as in PatchCore. § EXPERIMENTAL RESULTS AND ANALYSIS Evaluation Metrics We use the AUROC and PRO scores to evaluate the pixel-wise anomaly detection results. The two metrics are threshold-free, and the PRO metric treats each anomalous region equally. We also use the AUROC score to evaluate the image-level anomaly detection results, except for MVTec-Ano, since all images in this setting belong to a single (anomalous) class. Implementation Details We resize the images to 256 × 256 resolution and then center-crop them to 224 × 224 throughout our experiments. For a fair comparison, we use the same ImageNet pre-trained WideResNet-50 <cit.> as the feature extractor for all evaluated methods. For PaDiM <cit.> and PatchCore <cit.>, we use the same experimental settings as in their papers. We also use the intermediate features from the outputs of the second and third residual stages of the feature extractor, which is the same as PatchCore. We also make SPADE follow the same choice of layers for feature extraction, which is different from the original SPADE that also uses the first residual stage. We find this change significantly improves the inference speed and yields better performance.
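To make the scoring pipeline and the implementation choices above concrete, the following sketch summarizes the memory bank construction and the PatchCluster scoring in Python. It is a minimal illustration under several assumptions: the mid-level feature maps of each image are taken as already extracted by the pre-trained backbone, the nearest-neighbour upsampling and box-filter aggregation stand in for the exact feature alignment and adaptive average pooling of PatchCore, the smoothing strength sigma is our own stand-in for the Gaussian kernel of size 4, the score map is not upsampled to image resolution, and the plain maximum replaces the reweighted image-level score; all function names are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter
from sklearn.neighbors import NearestNeighbors

def patch_features(f2, f3, pool_size=3):
    """Per-location patch features for one image.
    f2: (C2, H, W) and f3: (C3, H//2, W//2) are feature maps from the 2nd and
    3rd residual stages of a frozen pre-trained backbone (assumed given)."""
    f3_up = np.repeat(np.repeat(f3, 2, axis=1), 2, axis=2)   # crude spatial alignment
    feats = np.concatenate([f2, f3_up], axis=0)              # concatenate along channels
    # aggregate each location with its neighbourhood (stand-in for adaptive pooling)
    feats = uniform_filter(feats, size=(1, pool_size, pool_size), mode="nearest")
    c, h, w = feats.shape
    return feats.reshape(c, h * w).T                         # (H*W, C2+C3)

def build_memory_bank(feature_maps):
    """Stack the patch features of all images in S into the memory bank M."""
    return np.concatenate([patch_features(f2, f3) for f2, f3 in feature_maps], axis=0)

def patchcluster_score(bank, img_feats, hw, K=100, skip_self=True, sigma=4.0):
    """PatchCluster scoring of one image against the memory bank.
    Returns a smoothed anomaly score map and a (simplified) image-level score."""
    nn = NearestNeighbors(n_neighbors=K + 1).fit(bank)
    dists, _ = nn.kneighbors(img_feats)          # sorted ascending, shape (H*W, K+1)
    # drop the (near-)zero self match when the bank was built from the same images
    dists = dists[:, 1:] if skip_self else dists[:, :K]
    patch_scores = dists.mean(axis=1)            # mean distance to the local gallery
    score_map = gaussian_filter(patch_scores.reshape(hw), sigma=sigma)
    return score_map, float(score_map.max())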
As SPADE first creates a memory bank for each image by an image-level nearest neighbor search, we exclude the image itself when creating its memory bank, so that highly similar features from the image itself (both normal and anomalous) are not included. For PatchCore and PatchCluster, especially with memory bank subsampling, however, we aim to build a unified memory bank for all test images and therefore cannot avoid such confusing features from the image itself. We could compute the anomaly score for a given patch feature using, or starting from, its k-th nearest neighbor. However, the optimal k for a set of images depends heavily on the dataset size, the internal distribution of the images, and post-processing of the memory bank such as coreset subsampling. To tackle this issue, we simply set k to 2 to avoid retrieving the feature itself as the nearest neighbor. We show in <ref> and <ref> that a proper K value, which has a clear selection criterion, makes PatchCluster robust to various BAD settings. We set the number of nearest neighbors K for patch feature anomaly scoring to 100 for PatchCluster-100%, which is close to the average number of test images for each category. For the coresets with subsampling ratios of 1%, 10%, and 25%, we simply reduce K to 5, 10, and 25, respectively, without careful tuning. It should be noted that, as mentioned above, we do not consider the feature itself as one of its neighboring features. §.§ Blind Anomaly Detection on MVTec AD We first give an overall comparison of PaDiM, SPADE, PatchCore, and the proposed PatchCluster in Table <ref>. The proposed PatchCluster uses the same memory bank as PatchCore. Our proposed method outperforms all competitors by a large margin under all BAD settings in terms of both image-level and pixel-level anomaly detection metrics. PatchCluster-25% achieves 97.5% and 98.2% AUROC scores for anomaly detection and localization under the Mix setting, which is comparable to several SOTA one-class anomaly detection approaches. Without accessing any training data under the Test setting, PatchCluster still preserves high anomaly detection performance. Under the most aggressive Ano setting, PatchCluster still achieves a 94.3% AUROC score and a PRO score of 88.3% for pixel-wise anomaly detection. As a comparison, the best image-level and pixel-level AUROC scores achieved by the competing methods are only 92.7% and 92.9% under the Mix setting, even though only 1.1% of the pixels in the dataset are anomalous. It should also be noted that even for the Mix setting, which has the smallest portion of anomalous pixels, there are still 23% anomalous images in the dataset; this is a stringent BAD setting, with a far higher anomaly ratio than uniform sampling from industrial production lines would yield in practice. As another patch feature memory bank-based method, the modified SPADE shows better pixel-level anomaly detection than PatchCore, demonstrating the influence of confusing features from the test image itself. PaDiM explicitly models the distribution at each fixed patch feature location. Well-modeled distributions are desirable for one-class anomaly detection; however, they hurt performance under BAD settings because they also cover the anomalous patch features. We also extend PatchCore with the classical Local Outlier Factor (LOF) <cit.> algorithm, a local-density-based anomaly scoring method. We use the full memory bank without coreset subsampling and calculate the relative local density of each patch feature from the distances to its neighbors as the anomaly score.
We found that using 100 neighbors yields better anomaly detection results, which is consistent with our assumption. We then report the detailed detection results of PatchCluster-25% for each category under the different BAD settings in Table <ref>. Some visualization examples under the Test setting for objects and textures are shown in Fig. <ref> and Fig. <ref>, respectively. PatchCore fails to assign distinctly high anomaly scores to anomalous patch features, which leads to many false-positive cases, i.e., normal patches are easily detected as anomalous. PatchCluster is robust to various types of texture and object products under all BAD settings. However, it shows slightly inferior performance for categories in which the local spatial variation of normal patches is relatively large, such as cable, and for categories with overly fine-grained or large defects, such as pill and transistor. §.§ Effectiveness of Local Feature Clustering We first report in Table <ref> the performance of the proposed PatchCluster-100% with different numbers of neighboring patches used to estimate the local patch distribution and the distance of the test patch feature. PatchCluster is stable over a wide range of K. It should be noted that with K=1, PatchCluster is identical to PatchCore-100%. The proper value of K has a clear selection criterion: from a global contextual viewpoint, each image of a certain kind of product tends to contain most kinds of local patches, so setting K near the total number of images in 𝒮 is likely to yield stable performance. We fix K to 100 for PatchCluster-100%, which is approximately the average number of images per category under the Test setting. We visualize the local feature clustering under the Test setting on two examples, one from an object category and one from a texture category, in Fig. <ref>. A normal feature and its neighboring features tend to form a dense distribution and are close to each other in distance. For anomalous features, however, there may exist several neighboring features, but most of the neighbors tend to have large distances from them. To further analyze the effectiveness of local feature clustering, we extend SPADE into SPADE-Cluster and experimentally verify its performance. The results are shown in Table <ref>. With local feature clustering, SPADE-Cluster also shows a significant improvement over SPADE. The performance of SPADE-Cluster drops when K is larger than 50, which is the number of nearest images used to create the memory bank for each test image. §.§ Effectiveness of Memory Bank Subsampling Table <ref> shows the BAD performance of PatchCluster with different coreset subsampling ratios, using the same greedy search method as PatchCore. We observe clear performance drops for too small subsampling ratios, despite the significant reduction in memory bank size and the improvement in inference speed. A similar observation for PatchCore can also be made from Table <ref>. We argue that local patch-based image-level anomaly scoring becomes unreliable when the pixel-level anomaly detection ability deteriorates. An underlying interpretation is that, with small subsampling ratios, it is difficult to cover not only the normal patch features but also the anomalous patch features, which are relatively sparse in the feature space. PatchCluster-25% performs even better than PatchCluster-100%, as the subsampling process, to a certain degree, acts as a filter for noisy features.
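For completeness, the greedy coreset subsampling used above can be sketched as a plain farthest-point (k-center greedy) selection; the sketch below omits the random-projection acceleration used by PatchCore and is meant as an illustration rather than the exact procedure.

import numpy as np

def greedy_coreset(bank, ratio=0.25, seed=0):
    """Farthest-point (k-center greedy) subsampling of the memory bank."""
    rng = np.random.default_rng(seed)
    n = bank.shape[0]
    m = max(1, int(ratio * n))
    selected = [int(rng.integers(n))]
    # distance of every feature to its closest already-selected centre
    min_d = np.linalg.norm(bank - bank[selected[0]], axis=1)
    for _ in range(m - 1):
        idx = int(np.argmax(min_d))        # farthest remaining point joins the coreset
        selected.append(idx)
        min_d = np.minimum(min_d, np.linalg.norm(bank - bank[idx], axis=1))
    return bank[np.array(selected)]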
§.§ One-Class Anomaly Detection on MVTec AD Table <ref> shows the results of PatchCluster-25% and other methods under the one-class setting on MVTec AD. Since the patch features of the training data are fully anomaly-free, the greedily searched coreset effectively reduces the size of the memory bank while retaining strong anomaly detection ability. However, as there are no anomalous patch features in the memory bank and coreset, using a larger set of neighboring patch features for anomaly scoring reduces the detection sensitivity, leading to slightly inferior performance compared to PatchCore. § CONCLUSION We introduce the blind anomaly detection (BAD) problem for industrial inspection, the task of finding fine-grained local anomalies in a set of mixed normal and anomalous images without using any human annotations. We formulate three BAD settings, Mix, Test, and Ano, based on the existing industrial anomaly detection benchmark, the MVTec AD dataset. Building on memory bank-based methods, we cast BAD as a local outlier detection problem and propose PatchCluster, a method that measures the local distribution of a patch feature using its nearest neighbors in the memory bank. The proposed approach effectively estimates the normal feature distributions, while the estimation fails for anomalies. PatchCluster is robust across the various BAD settings and shows performance comparable to current one-class anomaly detection SOTA methods.
http://arxiv.org/abs/2307.02724v1
20230706021841
Massive MIMO with Cauchy Noise: Channel Estimation, Achievable Rate and Data Decoding
[ "Ziya Gulgun", "Erik G. Larsson" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Massive MIMO with Cauchy Noise: Channel Estimation, Achievable Rate and Data Decoding Ziya Gülgün, and Erik G. Larsson, Fellow, IEEE Z. Gülgün was with the Department of Electrical Engineering (ISY), 58183 Linköping, Sweden. He is now with Ericsson AB, 16440 Stockholm, Sweden (ziya.gulgun@ericsson.com). E. G. Larsson is with the Department of Electrical Engineering (ISY), 58183 Linköping, Sweden e-mail: (erik.g.larsson@liu.se). This work was supported by Security Link and the SURPRISE project funded by the Swedish Foundation for Strategic Research (SSF). A preliminary version of this paper was presented at the International Conference on Communications (ICC), 2022 <cit.>. ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== We consider massive multiple-input multiple-output (MIMO) systems in the presence of Cauchy noise. First, we focus on the channel estimation problem. In the standard massive MIMO setup, the users transmit orthonormal pilots during the training phase and the received signal at the base station is projected onto each pilot. This processing is optimum when the noise is Gaussian. We show that this processing is not optimal when the noise is Cauchy and as a remedy propose a channel estimation technique that operates on the raw received signal. Second, we derive uplink-downlink achievable rates in the presence of Cauchy noise for perfect and imperfect channel state information. Finally, we derive log-likelihood ratio expressions for soft bit detection for both uplink and downlink, and simulate coded bit-error-rate curves. In addition to this, we derive and compare the symbol detectors in the presence of both Gaussian and Cauchy noises. An important observation is that the detector constructed for Cauchy noise performs well with both Gaussian and Cauchy noises; on the other hand, the detector for Gaussian noise works poorly in the presence of Cauchy noise. That is, the Cauchy detector is robust against heavy-tailed noise, whereas the Gaussian detector is not. Massive MIMO, Cauchy Noise, Achievable Rates, Symbol-error-rate (SER), Bit-error-rate (BER). § INTRODUCTION Massive multiple-input multiple-output (MIMO) is one of the core technologies in the 5G physical layer <cit.>. A massive MIMO base station (BS) is equipped with a large number of antennas (on the order of 100), each connected to an independent radio-frequency chain. Thanks to this flexibility, the BS can serve tens of users on the same time-frequency resources simultaneously. During the last decade, many aspects of massive MIMO technology have been investigated in depth, for example: power allocation and user association algorithms <cit.>, hardware impairments <cit.>. This paper discusses an important aspect that has been largely ignored in the massive MIMO literature: the fact that noise and interference may not be Gaussian in general. More specifically, the noise can be impulsive and have a heavy-tailed distribution, corresponding to the presence of outliers. 
For example, in <cit.>, impulsive noise was mentioned as one of the physical-layer challenges for 6G. §.§ Motivation for Impulsive Noise The Gaussian noise assumption is justified if the noise results from superposition of many independent components. However, this assumption does not apply in many real situations that may occur in electronic devices <cit.>. For example, in <cit.> the authors performed a series of measurements of electromagnetic noise for industrial wireless communications, and identified impulsive noise which can be modelled via Cauchy or Gamma distributions. Multi-carrier transmission with impulsive noise was studied in <cit.>. Adaptive demodulation for channels with impulsive noise was studied in <cit.>. Other examples are that ambient noise in shallow water for acoustic communication is highly impulsive <cit.>; the noise in powerline communication channels is impulsive <cit.>; interference in ad hoc networks can be impulsive <cit.>; and clutter models in radar signal processing are typically non-Gaussian <cit.>. Moreover, impulsive noise can also be generated by malicious transmitters (jammers). One way to model impulsive noise is as symmetric α-stable (SαS) random variables. The smaller α is, the more impulsive the noise is. When α is 1 and 2, the SαS distribution becomes Cauchy and Gaussian, respectively. In the literature, impulsive noise (SαS) has been studied in various contexts. For example, array signal processing with impulsive noise is treated in <cit.>. Spectrum sensing with SαS channel was studied in <cit.>. Localization problems in the presence of α-stable noise were investigated in <cit.>. The probability density functions (pdfs) of SαS distributions are approximated and the signal detection performances of these pdfs are investigated in <cit.>. In <cit.>, soft-decision metrics for coded signals in Cauchy noise were derived. An interesting conclusion in <cit.> is that the performance loss of the detector designed for Cauchy noise when exposed to Gaussian noise is much smaller than the loss when the detector designed for Gaussian noise is exposed to Cauchy noise. Indeed, the detector designed for Gaussian noise works poorly in the presence of Cauchy noise, whereas the detector designed for Cauchy noise is very robust to the actual distribution of the noise. We consider massive MIMO specifically with Cauchy (S(α=1)S) noise. Another paper that addressed channel estimation for massive MIMO with impulsive noise is <cit.>. The impulsive noise in <cit.> is modelled as a mixture of Gaussian noise and outliers, rather than via an SαS distribution; the mixture weights (probabilities of outliers) are estimated by tuning a sparsity level parameter. In contrast to <cit.>, we chose to work with Cauchy noise since it is simpler to deal with analytically (for example, the Cauchy distribution is preserved under linear combinations) and it has fewer parameters that require tuning to the impulsive level of the noise. In addition, compared with <cit.>, we provide achievable rate expressions and soft decoding metrics. §.§ Summary of Technical Contributions and Organization of the Paper We describe a system model for single-cell massive MIMO with Cauchy noise and focus on the channel estimation in Section <ref> where we derive two types of channel estimators. In the first, the received signal at the base station (BS) is de-spread by correlating it with each user's pilot signal. 
In the presence of Gaussian noise, this entails no loss of information; the de-spread received pilots are sufficient statistics. However, this is not the case if the noise is non-Gaussian. The second channel estimation approach rather estimates the channels from the raw received signal without de-spreading first, and is shown to be superior. Next, in Section <ref>, we derive uplink and downlink achievable rate expressions for the cases of perfect and imperfect channel state information (CSI). Thereby, for the imperfect CSI case, the channel estimates are used as side information. In Section <ref>, we then derive log-likelihood ratio (LLR) expressions for soft decoding both for uplink and downlink. Finally, in Section <ref> we give numerical results on achievable rates and bit-error-rate (BER) for the different detectors. This paper is a comprehensive extension of our conference paper <cit.>. The new material compared to <cit.> is mainly that (i) we analyze achievable rates for both perfect and imperfect CSI, (ii) we analyze the decoding performance of a Cauchy receiver in the presence of noise with other SαS distributions, (iii) we derive LLR expressions for soft decoding, and (iv) we present numerical results on achievable rates and coded BER. §.§ Notation |.|, (.)^T, (.)^* and (.)^H denote the absolute value of a scalar, determinant of a matrix, transpose, conjugate and conjugate transpose operators, respectively. Boldface lowercase letters, 𝐱, denote column vectors, boldface uppercase letters, 𝐗, denote matrices, and uppercase letters, X, denote random variables. The l_p norm is denoted by 𝐱_p. 𝔼[.] refers to the expectation operator. (.) is the real part of a complex number. § SYSTEM MODEL AND CHANNEL ESTIMATION §.§ System Model We consider a single-cell massive MIMO system including a BS equipped with M antennas, serving K users where each user has a single antenna. A block-fading model is considered in which the channel is constant and frequency-flat in a coherence block with size T samples. A sub-block with length τ is reserved for the channel estimation and the rest of the coherence block with length T-τ is dedicated to data transmission. During the training phase, the users transmit orthonormal pilot vectors with length τ to the BS. τ should be greater than or equal to K and less than T. Let us denote the pilot vector for the k^th user by ϕ_k∈ℂ^τ. Since these vectors are orthonormal, ϕ_k^2=1 and ϕ_k^Hϕ_j=0 for k≠j. It remains an open question under what exact circumstances orthogonal pilot sequences are optimal in the presence of Cauchy noise. (For example, in Cauchy noise, unlike in Gaussian noise, de-spreading is suboptimal; see Section <ref>.) The choice of orthogonal pilots, however, is optimal for Gaussian noise and it is the standard design choice most systems; hence, we adopt it here. Even when restricting the pilots coming from an orthogonal pilot book, numerical evidence shows that the performance is different for different choices of this pilot book (unlike in the Gaussian case). For example, pilots chosen from the identity matrix and pilots chosen from a normalized discrete Fourier transform (DFT) matrix do not provide the same performances. In this work, we choose the pilots from the normalized DFT matrix because this increases the number of observed realizations in the receiver side, which is important because of the outliers in the Cauchy distribution. 
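As a small illustration of this design choice, the sketch below builds the normalized DFT pilot book in Python and checks the orthonormality conditions stated above; the variable names are ours.

import numpy as np

def dft_pilot_book(tau):
    """Columns of the normalized (unitary) DFT matrix, used as orthonormal pilots."""
    n = np.arange(tau)
    return np.exp(-2j * np.pi * np.outer(n, n) / tau) / np.sqrt(tau)

tau, K = 15, 8
Phi = dft_pilot_book(tau)[:, :K]                   # pilot phi_k is the k-th column
assert np.allclose(Phi.conj().T @ Phi, np.eye(K))  # ||phi_k|| = 1 and phi_k^H phi_j = 0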
The received pilot signal in the BS can be expressed as: 𝐘=∑_k=1^K√(τp_k)𝐡_kϕ_k^T+𝐍, where p_k is the power of the received signal corresponding to the k^th user, and 𝐡_k∈ℂ^M is the k^th user's channel vector. The elements in all channel vectors are independent, identically distributed (i.i.d.) circularly symmetric complex Gaussian random variables, i.e., 𝐡_k∼𝒞𝒩(0,𝐈), and 𝔼{𝐡_k^H𝐡_m}=0 for k≠m, corresponding to uncorrelated Rayleigh fading. 𝐍 is an M×τ noise matrix containing i.i.d. isotropic complex Cauchy random variables with dispersion parameter γ=1. If the noise components were complex Gaussian with unit variance, p_k would have the meaning of signal-to-noise ratio (SNR). However, it is not meaningful to define p_k as SNR with Cauchy noise, because the second-order moment of the Cauchy distribution is infinite; see Appendix <ref>. Therefore, we use the term signal-to-dispersion ratio (SDR) for p_k in this work. To contrast SDR and SNR, we investigate the relation between the standard deviation of the Gaussian distribution and the dispersion of the Cauchy distribution. For the (real-valued) Gaussian distribution, the probability that a realization, X, lies between -σ and σ, where σ is the standard deviation, is around 2/3. The probability that a (real-valued) Cauchy realization, X, lies between 0 (the median) and t can be expressed as: P(0<X<t)=1/πtan^-1(t/γ), where γ is the dispersion. From (<ref>), we infer that the probability that a Cauchy realization, X, lies between -1.7γ and 1.7γ is around 2/3. Therefore, if we set σ=1.7γ, the probability is 2/3 for both a Gaussian and a Cauchy random variable. §.§ Channel Estimation In this section, we propose two channel estimation techniques for the case when Cauchy noise is present. The first relies on de-spreading of the received pilots. The second, in contrast, operates on the original received pilots in (<ref>) without performing de-spreading. §.§.§ Channel Estimation with De-spreading Operation After the de-spreading operation, the signal corresponding to the k^th user can be expressed as: 𝐲_k=𝐘ϕ_k^*=√(τp_k)𝐡_k+𝐧_k, where 𝐧_k=𝐍ϕ_k^*. Now, let us analyze the statistic of 𝐧_k. 𝐧_k contains i.i.d. isotropic complex Cauchy random variables with the dispersion γ_k^'=γϕ_k_1=γ∑_i=1^τ|ϕ_k[i]| where ϕ_k[i] is the i^th element of ϕ_k. First, since the row vectors in 𝐍 are independent, the elements in 𝐧_k are mutually independent. Next, as a preliminary observation that we will exploit in the proof, we analyze the statistics of cX where c∈ℂ and X is an isotropic complex Cauchy random variable. By using (<ref>), we can express the characteristic function of cX as: ϕ_(cX)(ω)=𝔼[exp(j(ω(cX)^*))]=𝔼[exp(j(c^*ωX^*))]. Let us define ω^'=c^*ω. Then (<ref>) can be rewritten as: 𝔼[exp(j(ω'X^*))]=exp(-γ|ω'|)=exp(-γ|c||ω|). Now, we focus on the l^th element in 𝐧_k, denoted as 𝐧_k[l]. The characteristic function of 𝐧_k[l] is: ϕ_𝐧_k[l](ω) =𝔼[exp(j(ω(𝐧_k[l])^*))] =𝔼[exp(j(ω∑_i=1^τϕ_k[i]𝐍[l,i]^*))] =∏_i=1^τ𝔼[exp(j(ωϕ_k[i]𝐍[l,i]^*))], where (<ref>) can be represented as a product of random variables as in (<ref>) because all random variables are mutually independent (𝐍[l,i] is the element in the l^th row and the i^th column of 𝐍 and 𝐍 contains i.i.d. random variables). By using (<ref>), the following is obtained: ϕ_𝐧_k[l](ω) =∏_i=1^τ𝔼[exp(j(ωϕ_k[i]𝐍[l,i]^*))] =∏_i=1^τexp(-γ|ϕ_k[i]||ω|) =exp(-γ∑_i=1^τ|ϕ_k[i]||ω|), which concludes the proof. Proposition 1 presents an important result. 
Although the de-spreading operation preserves the distribution of noise, the dispersion may increase. For example, if the pilot signals are chosen from a normalized DFT matrix, the dispersion after de-spreading operation becomes γ_k^'=γ√(τ) ∀ k. This is not the case for the isotropic complex Gaussian noise because a multiplication with a unitary matrix does not change the noise variance <cit.>. Now, we need to estimate the channels from (<ref>). Most existing papers use minimum mean-square-error (MMSE) or linear MMSE techniques to estimate the channels <cit.>. However, these techniques cannot be applied in the presence of Cauchy noise because the first and second order moments for this noise distribution are undefined. Therefore, here we take a maximum-likelihood (ML) approach. The maximum-a-posteriori (MAP) technique could also be applied if a prior on the channel statistic is known. The difference between MAP and ML is that the pdf of channel statistic appears in the objective function; in this paper, we only consider ML for simplicity. From (<ref>), the likelihood function for 𝐡_k based on the de-spread data is p(𝐲_k|𝐡_k)=∏_i=1^Mγ_k'/2π((γ_k')^2+|𝐲_k[i]-√(τp_k)𝐡_k[i]|^2)^3/2. Based on (<ref>), the ML estimate of the channel 𝐡_k is: 𝐡̂_k^ML=_𝐡_kp(𝐲_k|𝐡_k)=𝐲_k/√(τp_k). From (<ref>) and (<ref>), it can be immediately seen that 𝐡̂_k^ML is also identical with the least-squares channel estimate for 𝐡_k. Next, we analyze whether 𝐲_k is a sufficient statistic or not. To do this, we go back the unprocessed received signal in (<ref>). The likelihood function based on the raw data in (<ref>) is: p(𝐘|𝐡_1,,𝐡_K)= ∏_l=1^M∏_i=1^τγ/2π(γ^2+|𝐘[l,i]-∑_k=1^K√(τp_k)𝐡_k[l]ϕ_k[i]|^2)^3/2, where 𝐘[l,i] is the element in the l^th row and i^th column of 𝐘. In order to obtain a sufficient statistic, there should exist a factorization of the likelihood function in (<ref>) as <cit.>: p(𝐘|𝐡_1,,𝐡_K)=h(𝐘)g(T(𝐘),𝐡_1,,𝐡_K), where g(·) and h(·) are some functions, and T(·) is the sufficient statistic. Note that g(·) should depend on 𝐘 only through T(𝐘). Because of the fractional exponent 3/2 in the denominator, one cannot expand the corresponding term. One can multiply the all the terms under the fractional exponent 3/2 but a function T(·) cannot be obtained because of the many cross-terms that appear. Remark 1: Let us consider an affine estimator of the k^th user's channel which has the structure 𝐘𝐚_k+𝐛_k where 𝐘𝐚_k≠0, 𝐚_k∈ℂ^τ and 𝐛_k∈ℂ^M. As a result of Proposition 1, this affine estimator can be linearly decomposed into noise which has a complex isotropic Cauchy distribution and a signal part. For example, from (<ref>), 𝐧_k is the additive part of 𝐲_k which has the complex isotropic Cauchy distribution. Therefore, the mean of 𝐲_k is undefined and the variance of 𝐲_k is infinite for the affine estimators. §.§.§ Channel Estimation from the Unprocessed Received Signal In this section, we derive the ML estimates for the channel vectors based on the unprocessed received signal in (<ref>). The likelihood function is given in (<ref>). Maximizing (<ref>) is very complicated because the parameters of all users interact with each other. To solve this problem, one approach is to use a coordinate search algorithm <cit.>. The idea is as follows: First, assign some initial values to all parameters except 𝐡_1. Then find the estimate of 𝐡_1 that maximizes the likelihood function. Next, with the so-obtained value of 𝐡_1, find the estimate of 𝐡_2 that maximizes the likelihood function and so forth. 
One can then iteratively loop through all 𝐡_k and then return to 𝐡_1, and run another complete iteration over all k. This procedure goes on until there is a sufficiently small difference between the norms of the channel estimates in the current round compared to in the previous round. The coordinate descent algorithm guarantees that the function value obtained in an iteration is less than or equal to the function value obtained in the previous iteration. However, the algorithm may converge to a non-stationary point, or the algorithm can continue searching infinitely many times for non-convex functions <cit.>. The examples in <cit.> are very special examples, but the conclusions therein prevent us from making a general statement that the algorithm converges to a stationary point. Bertsekas <cit.> shows in Proposition 2.7.4 that if the objective is continuously differentiable, i.e., the first derivative is continuous, and the minimizer along any coordinate direction from any point is unique, then the algorithm converges to a stationary point. The objective function in (<ref>) is continuously differentiable, but we do not have proof that the minimizer along any direction is not unique. (In <cit.>, we claimed that the coordinate descent algorithm converges to a locally optimum point, but this is unknown.) Numerical experiments suggest that the convergence to a stationary point depends on the initial point. In the first iteration, we assign 𝐡̂_k^ML in (<ref>) to 𝐡_k for k=2,,K as initial solutions. This results in the modified objective function in (<ref>) for 𝐡_1 as follows: p(𝐘'|𝐡_1)=∏_l=1^M∏_i=1^τγ/2π(γ^2+|𝐘'[l,i]-√(τp_1)𝐡_1[l]ϕ_1[i]|^2)^3/2, where 𝐘'=𝐘-∑_k=2^K√(τp_k)𝐡̂_k^MLϕ_k^T. When maximizing with respect to the first element of 𝐡_1, we have M separable maximization problems. The corresponding observation vector is the first row from 𝐘'. Therefore, we need to maximize the following: ∏_i=1^τγ/2π(γ^2+|𝐘'[1,i]-√(τp_1)𝐡_1[1]ϕ_1[i]|^2)^3/2. Taking the logarithm, the maximization of (<ref>) is equivalent to the minimization of the following: ∑_i=1^τlog(γ^2+|𝐘'[1,i]-√(τp_1)𝐡_1[1]ϕ_1[i]|^2). To minimize (<ref>), we use the gradient descent algorithm <cit.> and give the details in Appendix <ref>. Note that the function in (<ref>) is nonconvex because the Cauchy distribution is non-log-convex. Remark 2: Suppose we want to estimate a parameter from data containing Cauchy noise. The estimate may have a certain variance and even the Cramer-Rao bound may exist for the estimate <cit.>. In general, when the noise is Cauchy, the relation between the ML estimate and the observed data is not affine, not even for a linear signal model. Hence, the ML estimate may have a certain variance, although sometimes it does not; for example, see (<ref>). § ACHIEVABLE RATES WITH CAUCHY NOISE In this section, we present the uplink and downlink achievable rates for the massive MIMO communication link with perfect and imperfect CSI. In the literature, <cit.> derived the capacity for the scalar Cauchy channel under a logarithmic constraint on the input distribution; see <cit.> for details. Another paper <cit.> derived some capacity bounds for SαS noise channel with α>1 which does not cover the Cauchy noise. As the exact capacity appears intractable for our setup when the noise is Cauchy, in this paper we focus on achievable rates (lower bounds on capacity). To study the effects of the Cauchy noise specifically, we restrict this analysis to the one-user scenario. 
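Before presenting the rate expressions, the channel estimator developed above can be illustrated with a short sketch. It treats the single-user case, so that the coordinate loop over users reduces to a single block, and refines the de-spread estimate by gradient descent on the Cauchy log-likelihood objective. The noise sampler relies on a Gaussian scale-mixture representation of the isotropic complex Cauchy distribution, and the complex-form gradient and constant step size are our own simplifications of the real-valued gradient and backtracking line search described in the appendix; none of this should be read as the exact implementation used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def isotropic_complex_cauchy(shape, gamma=1.0):
    """Assumed sampler: real and imaginary parts ~ N(0, gamma^2), divided by |N(0, 1)|."""
    g = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    return gamma * g / np.abs(rng.standard_normal(shape))

def cauchy_ml_estimate(Y, phi, p, gamma=1.0, steps=500):
    """Single-user ML channel estimate from the raw pilot block Y (M x tau) by gradient
    descent on sum_i log(gamma^2 + |Y[l, i] - sqrt(tau p) h[l] phi[i]|^2) per antenna."""
    M, tau = Y.shape
    a = np.sqrt(tau * p)
    h = (Y @ phi.conj()) / a             # de-spread estimate as the initial point
    eta = gamma**2 / (2.0 * tau * p)     # conservative fixed step (backtracking in the paper)
    for _ in range(steps):
        e = Y - a * np.outer(h, phi)                                 # residuals, (M, tau)
        grad = -a * (e / (gamma**2 + np.abs(e) ** 2)) @ phi.conj()   # df/dh* for each antenna
        h = h - eta * grad
    return h

# toy example with one column of the normalized DFT pilot book
M, tau, p = 100, 15, 10.0
phi = np.exp(-2j * np.pi * np.arange(tau) / tau) / np.sqrt(tau)
h_true = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
Y = np.sqrt(tau * p) * np.outer(h_true, phi) + isotropic_complex_cauchy((M, tau))
h_hat = cauchy_ml_estimate(Y, phi, p)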
§.§ The Uplink Achievable Rate The mutual information of the single-input single-output (SISO) channel can be calculated empirically; see details in Appendix <ref>. Now we apply this result to the case of a single-input multiple-output channel with Cauchy noise. We start with the perfect CSI case. Mathematically, we can express the received signal as an M-vector: 𝐲=√(p^ul)𝐡x+𝐧, where p^ul is the uplink SDR, x is the transmitted signal, 𝐡∈ℂ^M is the channel gain. Both the channel gain and its statistics are known by the receiver. 𝐧∈ℂ^M is the noise vector including complex isotropic Cauchy random variables with unit dispersion parameter. Let us define a joint random variable Y̅=[Y_1,,Y_M]^T comprising the random variables observed at each antenna and a joint random variable H̅=[H_1,,H_M]^T including the channel realizations known by the receiver. Assume that the transmitted signal is chosen uniformly from the discrete set A_X with cardinality S. After this, we can define the uplink achievable rate: R^ul=log_2S-𝔼_X,Y̅,H̅{log_2(∑_x∈A_Xp(Y̅,H̅|x)/p(Y̅,H̅|X))}. All (Y_i,H_i) pairs are independent to each other if X is given. Hence, (<ref>) can be rewritten as: R^ul =log_2S-𝔼_X,Y̅,H̅{log_2(∑_x∈A_X∏_i=1^Mp(Y_i,H_i|x)/∏_i=1^Mp(Y_i,H_i|X))} =log_2S-𝔼_X,Y̅,H̅{log_2(∑_x∈A_X∏_i=1^Mp(Y_i|H_i,x)/∏_i=1^Mp(Y_i|H_i,X))}, where p(y_i|h_i,x) is: p(y_i|x)=γ/2π(γ^2+|y_i-√(p^ul)h_ix|^2)^3/2, where y_i and h_i are the i^th elements of 𝐲 and 𝐡, respectively. Now consider the imperfect CSI case. The BS estimates the channel denoted by ĥ in the training phase. These estimates are used as side information to calculate achievable rates. This calculation exploits standard results on capacity bounding with side information; for example, see <cit.>. Since there exists an error between the estimates and the real channels, the achievable rates for the imperfect CSI case are less than or equal to the achievable rates for the perfect CSI case <cit.>. Let us define a joint random variable Ĥ̅=[Ĥ_1,,Ĥ_M]^T including the random variables that represent the channel estimates. Then, the uplink achievable rate for the imperfect CSI case can be expressed as: R^ul, imCSI=(1-τ/T)× (log_2S-𝔼_X,Y̅,Ĥ̅{log_2(∑_x∈A_X∏_i=1^Mp(Y_i|x, H_i=Ĥ_i)/∏_i=1^Mp(Y_i|X, H_i=Ĥ_i))}), where p(y_i|x, h_i=ĥ_i) can be obtained from (<ref>) when h_i=ĥ_i. The pre-log factor appears in (<ref>) because we do not send any data during τ samples of each coherence block. It is important to note that we do not have the statistics of the channel estimation error, because we do not have closed-form channel estimates. However, for the uplink the BS can calculate some statistics of the channel estimation error empirically. In <ref>, we explain how the BS calculates the statistics of the channel estimation error and how the effects of this estimation error can be considered as additional noise term. Remark 3: Let us define a variable y=𝐯^H𝐲, where 𝐯 is a decoding vector. For the perfect CSI and the Gaussian cases, we do not lose any information if 𝐯=𝐡/𝐡 <cit.>. In other words, y is a sufficient statistic for 𝐲. However this is not the case in Cauchy noise. Therefore, we derive the rate expressions by using 𝐲 for perfect and imperfect CSI. §.§ The Downlink Achievable Rate Now we find the achievable rate of a communication link where the BS transmits the data to the user. Again, we start with the perfect CSI case. 
The received signal can be written as, y=√(p^dl)𝐡^T𝐚x+n, where p^dl is the downlink SDR, x is the transmitted signal, 𝐡∈ℂ^M is the channel gain that is known by the receiver, 𝐚∈ℂ^M is the precoder vector designed by the BS, n is the Cauchy noise with unit dispersion parameter. We assume that 𝐚_2=1 so that we do not obtain any power gain from the precoder. To maximize the received useful signal power, 𝐚 should equal 𝐡^*/𝐡. Hence, the downlink achievable rate becomes: R^dl=log_2S-𝔼_X,Y{log_2(∑_x∈A_Xp(Y|x)/p(Y|X))}, where p(y|x) is: p(y|x)=γ/2π(γ^2+|y-√(p^dl)𝐡_2x|^2)^3/2. Similar to the uplink achievable rates, we use the channel estimates as side information to calculate the downlink achievable rates. To do this, the precoder vector should be ĥ^* for the imperfect CSI case. Let us define a joint random variable H̅=[H_1,,H_M]^T. The achievable rate is given by: R^dl, imCSI=(1-τ/T) × (log_2S-𝔼_X,Y,Ĥ̅̂{log_2(∑_x∈A_Xp(Y|x, H̅=Ĥ̅̂)/p(Y|X, H̅=Ĥ̅̂))}), where p(y|x, 𝐡=ĥ) can be obtained from (<ref>) when 𝐡=ĥ. In the downlink, the effect of the channel estimation error is implicit, entering only via the precoder vector, but not explicitly visible in the achievable rate expression. Remark 4: For the Gaussian noise and perfect CSI case, we observe the uplink-downlink duality when the maximum-ratio (MR) decoder and MR precoder are used in the uplink and downlink, respectively. Therefore, we have the same achievable rates. However, this appears not to be the case with Cauchy noise, because linear processing of the received signal in the uplink is suboptimal. §.§ The Cauchy Decoder with Other SαS Noise In this section, we evaluate the performance of the Cauchy decoder in the presence of other complex isotropic SαS noise. We focus on complex isotropic SαS noise with 1<α<2, for which capacity bounds exist in closed form <cit.>. For the complex isotropic SαS noise with 1<α<2, the pdf can be expressed as <cit.>: f_X(x)=1/παγ^2/α∑_k=0^∞(-1)^k/2^2k+1(k!)^2Γ(2k+2/α)(|x|/γ^1/α)^2k, where Γ(·) is the standard Gamma function. The pdf in (<ref>) is not in closed-form. Therefore, to develop a decoding metric, either this pdf would have to be approximated, or implemented via a table lookup. We consider that the Cauchy decoding metric is used, and next evaluate how it performs when it is exposed to noise with a S(1<α<2)S distribution. Consider a SISO channel: y=√(p)x+n, where y, x and p are the received signal, transmitted signal and the received power, respectively. Let us assume that 𝔼[|X^R|]=𝔼[|X^I|]. The capacity bounds presented in Eq. (3) of <cit.> apply for the case of real-valued SαS distributions. For complex S(1<α≤2)S, the real and imaginary parts are statistically dependent, which means that the capacity at least doubles when considering the real and imaginary parts of the channel together. This results in the bound: C≥2/αlog_2(1+(√(p)c/𝔼[|N^R|])^α), 𝔼[|X^R|]≤c, where C is the capacity for (<ref>), and N^R is the real part of noise having the SαS distribution for the real-valued realizations. Note that N^R is identically distributed with the imaginary part. The fractional moment of N^R can be expressed as <cit.>: 𝔼[|N^R|^p]=γ^p/α2^p+1Γ(p+1/2)Γ(-p/α)/α√(π)Γ(-p/2), 0<p<α. Now, assume that y in (<ref>) is decoded by using the Cauchy model. This decoding technique is called mismatched decoding because the actual present noise has different statistics than what the decoder assumes <cit.>. The achievable rate obtained from the mismatched decoder is called as mismatched achievable rate. 
The mismatched achievable rate is a lower bound on the actual mutual information <cit.> so for the capacity: C≥I(X;Y)≥log_2S-𝔼_X,Y{log_2∑_x∈A_Xp̃(Y|x)/p̃(Y|X)}, where p̃(y|x) is the decoder metric that is assumed to be Cauchy model. In Section <ref>, we compare the capacity bound in (<ref>) and the mismatched achievable rate in (<ref>). § DATA DECODING In this section, we present hard decision metrics for uncoded symbol detection and soft decision metrics for coded bit detection, respectively. For the hard symbol detection, we only consider the uplink for both Gaussian and Cauchy noises. For the soft bit detection, we consider the uplink and the downlink for Cauchy case only. §.§ Symbol Detection for The Uplink (Uncoded Modulation) In this section, we use the channel estimates obtained in Section <ref> to infer the transmitted data. Express the received signal as: 𝐫=∑_k=1^K√(p_k)𝐡_ks_k+𝐧, where s_k is the symbol transmitted by user k, chosen from a certain alphabet. If the noise is Gaussian, the detector will be: ŝ=_𝐬∑_i=1^M|𝐫[i]-∑_k=1^K√(p_k)𝐡̂_k[i]s_k|^2. For the case of Cauchy noise in 𝐧, we insert the channel estimates into the likelihood function associated with (<ref>): p(𝐫|𝐡̂_1,,𝐡̂_K,𝐬)= ∏_i=1^Mγ/2π(γ^2+|𝐫[i]-∑_k=1^K√(p_k)𝐡̂_k[i]s_k|^2)^3/2, where 𝐬=[s_1s_K]^T. Therefore the ML estimates of the symbols are: ŝ=_𝐬∑_i=1^Mlog(γ^2+|𝐫[i]-∑_k=1^K√(p_k)𝐡̂_k[i]s_k|^2). The problems in (<ref>) and (<ref>) are constellation-constrained minimization problems. Neglecting the constellation constraint, the solution of the problem in (<ref>) is the well-known zero-forcing (ZF) detector: 𝐬̂=(Ĥ^HĤ)^-1Ĥ^H𝐫, where Ĥ∈ℂ^M×K is a matrix where the k^th column is ĥ_k. For the problem with Cauchy noise, (<ref>), again we neglect the constellation constraint and obtain a soft decision for each symbol from the alphabet by evaluating the minimum Euclidean distance. The problem in (<ref>) can be solved by using the gradient descent algorithm that is described in Section <ref>. For the initial solution of ŝ in the gradient descent, we take the zero vector. Note that the problem in (<ref>) has more than one local optimum solution. The global solution of (<ref>), however, is unique and easily given by the ZF. §.§ Soft Decision Metric for The Uplink (Coded Modulation) Consider again the received signal in (<ref>). We now focus on the LLR expression for the i^th bit of the k^th symbol. This LLR can be written as: l_k,i=log(p(b_k,i=0|𝐫)/p(b_k,i=1|𝐫))=log(∑_𝐬:b_k,i(𝐬)=0p(𝐫|𝐬)p(𝐬)/∑_𝐬:b_k,i(𝐬)=1p(𝐫|𝐬)p(𝐬)). The notation 𝐬:b_k,i(𝐬)=β represents that the set contains all 𝐬 where b_k,i is equal to β. Without loss of generality, we assume that p(𝐬) equals 1/S^K, where S is the cardinality of the symbol alphabet. The BS makes use of the channel estimates as the real channels. Therefore p(𝐫|𝐬) is the same as (<ref>). The complexity of (<ref>) is huge because of the many terms in the summation. To simplify the expression in (<ref>), one approach is to replace the summation with the largest term; this is called the max-log approximation <cit.>. However, it has still high complexity because we need to search through S/2×S^K-1 candidates just for one bit. Therefore, finding the largest term may be infeasible. Instead, we propose the following: * For the k^th symbol, there are S/2 symbols whose i^th bit is 0. Let us denote these symbols by S_k^i=0={s_k,1^i=0,,s_k,S/2^i=0}. 
For each S/2 symbol, we can solve the following maximization problem by neglecting the constellation constraint: 𝐬̃_k,t^i=0=_𝐬̃p(𝐫|𝐬̃,s_k^i=0=s_k,t^i=0), where t ranges from 1 to S/2, 𝐬̃_k,t^i=0 is an (K-1)×1 vector including soft estimates when s_k^i=0=s_k,t^i=0. By combining (<ref>) and (<ref>), the problem in (<ref>) is equivalent to: 𝐬̃_k,t^i=0=_𝐬̃∑_i=1^Mlog( γ^2+|𝐫[i]-√(p_k)𝐡̂_k[i]s_k,t^i=0 - ∑_n=1, n≠k^K√(p_n)𝐡̂_n[i]s̃_n|^2). We can obtain hard estimates from 𝐬̃_k,t^i=0 based on the Euclidean distance between soft estimates and the real symbols. Let us denote ŝ_k,t^i=0 a vector including the hard estimates. * Choose the maximum among the S/2 likelihood terms. The LLR is approximately equal to: l_k,i≈ log(max_{ŝ_k,t^i=0}_t=1^S/2p(𝐫|ŝ_k,t^i=0,s_k^i=0=s_k,t^i=0))- log(max_{ŝ_k,t^i=1}_t=1^S/2p(𝐫|ŝ_k,t^i=1,s_k^i=1=s_k,t^i=1)) * By doing so, we need to solve S/2 optimization problems for each bit. In order to obtain the gradient for (<ref>), we need 𝒪(MK^2) flops. Unlike our proposed method, the complexity of max-log approximation, which is S/2×S^K-1, grows with K exponentially. §.§ Soft Decision Metric for The Uplink Including Channel Estimation Error (Coded Modulation) In this section we consider the bit detection problem for the uplink, also incorporating the channel estimation error into the analysis. Denote the channel estimation error for the k^th user by: 𝐡̃_k=𝐡_k-ĥ_k. The received signal in (<ref>) can be rewritten as: 𝐫=∑_k=1^K√(p_k)ĥ_ks_k+∑_k=1^K√(p_k)𝐡̃_ks_k+𝐧. Since we do not have a closed-form expression for ĥ_k, the BS may calculate the variance of each realization of 𝐡̃_k empirically. From (<ref>), the variance of a complex isotropic Gaussian distribution is 4 times the dispersion parameter. Heuristically, all channel estimation errors are considered to have a complex isotropic Cauchy distribution. This approach is heuristic because one cannot calculate the variance of the complex Cauchy distribution but our aim is a simple model to find additional dispersion for the noise. Let us denote the dispersion of the k^th user's channel estimation error by γ_k. Using Proposition 1, the received signal can be expressed as: 𝐫=∑_k=1^K√(p_k)ĥ_ks_k+ñ, where ñ is assumed to be the complex isotropic Cauchy distribution with the dispersion γ̃=∑_k=1^K√(p_k)γ_k+γ. We can replace γ by γ̃ in the conditional probability density functions defined in <ref> and <ref> to calculate the uplink BER and the uplink achievable rate. §.§ Soft Decision Metric for The Downlink (Coded Modulation) For the downlink part, the transmitted signal can be expressed: 𝐱=𝐀𝐬=∑_k=1^K𝐚_ks_k, where 𝐀 is an M×K precoder matrix, 𝐚_k is the k^th column of 𝐀, and 𝐬 is an K×1 vector including the transmitted symbols. Notice that 𝐚_k_2=1. The precoder matrix is designed based on the channel estimates. We consider the MR and ZF precoders, for which: 𝐀^MR=Ĥ^*𝐃^MR, 𝐀^ZF=Ĥ^*(Ĥ^TĤ^*)^-1𝐃^ZF, where 𝐃^MR and 𝐃^ZF are K×K diagonal matrices for normalization purposes, and the k^th column of Ĥ is ĥ_k. The k^th user receives the following signal: y_k=√(p_k^k)𝐡_k^T𝐚_ks_k+∑_l=1,l≠k^K√(p_l^k)𝐡_k^T𝐚_ls_l+n_k, where p_l^k is the received power corresponding to the symbol s_l at the user k, and n_k is the complex isotropic Cauchy noise at the k^th user. Note that neither the BS nor the users know the real channels. Moreover, the users do not have access to any channel estimates. 
Therefore, for the downlink decoding we assume that the k^th user only knows the corresponding channel gain, which is given by: g_k^MR=ĥ_k_2, g_k^ZF=𝐃^ZF_kk, where 𝐃^ZF_kk refers to the k^th diagonal element. The LLR expression for the i^th bit of the k^th symbol when the MR precoder is used can be expressed as: l_k,i^MR=log(p(b_k,i=0|y_k)^MR/p(b_k,i=1|y_k)^MR)=log(∑_s:b_i(s)=0p(y_k|s)^MR/∑_s:b_i(s)=1p(y_k|s)^MR), where p(y_k|s)^MR is given by: p(y_k|s)^MR=γ/2π(γ^2+|y_k-√(p_k^k)g_k^MRs|^2)^3/2. To find p(y_k|s)^ZF, one can replace g_k^MR by g_k^ZF. § SIMULATION RESULTS In this section, we present simulation results. First, we compare the performances of the channel estimates in terms of uncoded SER. Then we present achievable rate results for the perfect and imperfect CSI cases. Finally we present coded BER curves and compare their behavior with the predictions from the achievable rate analysis. §.§ Performances of the Channel Estimates The simulation parameters are as follows: The number of antennas is M=100. The number of users and the pilot length are K=8 and τ=15, respectively. The pilot signals are chosen from the normalized DFT matrix. The channel matrix contains realizations of circularly symmetric Gaussian random variables with unit variance. The dispersion parameter of the Cauchy noise is normalized to unity. We fix the received signal powers of 7 users such that these powers range from 1 to 7 dB. We change the received signal power of the remaining user and observe the effect on the performance. The parameters of the simulation are summarized in Table <ref>. We present the effects of the channel estimates on the symbol detection. We have three types of channel estimates: the estimates obtained from the signal after de-spreading, the estimates obtained from the unprocessed signal where the initial values for the algorithm in Section <ref> are taken from (<ref>), and the estimates obtained from the unprocessed signal where the initial values for the algorithm in Section <ref> are zero. To generate symbols, each user transmits 200 quadrature-phase-shift keying (QPSK) symbols for each coherence block, whose length is taken to be T=215. The small-scale fading for each channel vector is created 500 times so we have 10^5 symbols for each SDR. We present SER performances in Fig. <ref>. Based on Fig. <ref>, the detector using the channel estimates obtained from the unprocessed signal outperforms the detector using the channel estimates obtained via the de-spreading operation. One important observation is that the choice of pilot books is important even if the pilot books are chosen from any unitary matrix. For example, based on Fig. <ref>, if the pilots are chosen from the normalized DFT matrix, we have much better SER performance than the case where the pilots are chosen from the identity matrix. This situation does not appear in the Gaussian noise because the pilot books chosen from any unitary matrix give the same performances <cit.>. Therefore, one can conclude that if the noise is Cauchy, one can split the power of the pilots and can increase the number of signal samples in the receiver side. Another important observation is that the channel estimates obtained from the unprocessed signal are sensitive to the initial values used in the algorithm in Section <ref>; this is because the likelihood function of the Cauchy distribution is neither log-concave nor log-convex so it has has many local minima. 
Based on this, 𝐡̂_k^ML includes outliers causing us to get trapped at poor local optima. Since the mean of the channel realizations is zero, it is expected to obtain a better local optimum point when the initial values are zero. Quantitatively, when the SER is 10^-3, the required SDR for the best detector is almost 5 dB. For SER 10^-3, the performance gaps to the second and the third detectors compared with the best detector are almost 10 and 15 dB, respectively. In Fig. <ref> we present the performances of two detectors in the presence of two types of noise: Gaussian noise-detector for Gaussian noise, Gaussian noise-detector for Cauchy noise, Cauchy noise-detector for Cauchy noise, and Cauchy noise-detector for Gaussian noise. Note that we change the noise only in the data phase and all detectors use the best channel estimates that are obtained under the Cauchy noise. Based on Fig. <ref>, the performance gap between the detector for Cauchy and the detector for Gaussian is small in the presence of Gaussian noise. On the other hand, the performance of the detector designed for Gaussian noise is quite poor when the noise is Cauchy. The conclusion is that the detector for Gaussian noise is not robust against the outliers. For the rest of the paper, we only use the best channel estimates. §.§ Achievable rates with the Cauchy Noise In this section, we first present the uplink achievable rates of the communication link for both the perfect and imperfect CSI cases. For the modulation scheme, we use QPSK again. The BS has either 100 or 4 antennas and serves a single user. For the imperfect CSI case, the length of the coherence block is T=339 (we will explain why we choose 339 later). In Fig. <ref>, we present uplink performances of the communication link. From Fig. <ref>, we observe that when we increase the number of antennas we can obtain the same achievable rates with less SDR. For example, when τ and the desired rate are chosen to be 15 and 1.8 bits-per-channel-use (bpcu), the required SDR for M=100 is almost 20 dB less than the required SDR for M=4. Another observation is that when we increase the pilot length, we obtain better performances in low SDR. However in high SDR cases, we do not need to use long pilot sequences. Also in any case, we cannot obtain the same performance as in the perfect CSI case, because of the pre-log factors appearing in the achievable rate. In Fig. <ref>, we present the uplink achievable rate curves for the two cases that the BS considers the channel estimation error, and ignores it, respectively. We observe that when τ increases the gap between two achievable rate curves, i.e., the curve obtained when the BS ignores the channel estimation error and the curve obtained when the BS considers the channel estimation error, gets closer to the each other. This result is expected because when τ increases, the channel estimation error decreases. Next, in Fig. <ref>, we present the downlink achievable rates. Again for the downlink, when we increase the number of antennas, we need less SDR for the same achievable rates and for the low SDR cases we may need longer pilot sequences. Another interesting observation is that the uplink-downlink duality does not hold for the Cauchy noise even for perfect CSI, in contrast to the case of Gaussian noise. With Gaussian noise, when the MR decoder/precoder is used, the structure of the signal is the same both on uplink and downlink. However, this is not the case for the Cauchy noise. From Figs. 
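For reference, the two single-user decision rules compared above, the least-squares metric matched to Gaussian noise and the logarithmic metric matched to Cauchy noise, can be written out as the following sketch with an exhaustive QPSK search; in the multi-user case the constellation constraint is relaxed as described earlier, so this is only an illustration of the metrics themselves.

import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def detect_qpsk(r, h_hat, p, metric="cauchy", gamma=1.0):
    """Single-user detection: r (M,) received vector, h_hat (M,) channel estimate, p the SDR/SNR."""
    costs = []
    for s in QPSK:
        e = r - np.sqrt(p) * h_hat * s
        if metric == "cauchy":
            costs.append(np.sum(np.log(gamma**2 + np.abs(e) ** 2)))   # Cauchy ML metric
        else:
            costs.append(np.sum(np.abs(e) ** 2))                      # Gaussian / least-squares metric
    return QPSK[int(np.argmin(costs))]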
<ref> and <ref>, we do not observe uplink-downlink duality which is consistent with Remark 4 in Section <ref>. In Fig. <ref>, we present the capacity bound in (<ref>) and the mismatched achievable rate in (<ref>) for QPSK modulation. We generate SαS noise with unit dispersion for different α. From Fig. <ref>, if the noise becomes more impulsive, the gap between the capacity bound and the mismatched achievable rate decreases. Let us focus on code rate 3/4, which corresponds to 1.5 bpcu for QPSK modulation. Numerically, the gaps between these two metrics in (<ref>) and (<ref>) become 3.7, 3.5, 3.1 and 0.9 dB with α=1.8, α=1.6, α=1.4 and α=1.2, respectively. In <cit.>, it is shown that the capacity bound becomes tighter for smaller α. In conclusion, Fig. <ref> demonstrates that a decoder metric based on the Cauchy model can perform well in the presence of noise with any S(1<α<2)S distribution. §.§ Coded BER Comparisons In this section, we present coded BER performance for both the uplink and the downlink. To do this, QPSK modulation is used. For the channel coding, we use a low-density-parity-check (LDPC) code. The coding rate is chosen to be 3/4 and there are 648 bits per packet after the encoder. Therefore, the number of transmitted symbols in each packet is 324. We split each packet into 9 sub-packets; each sub-packet occupies 36 symbols in a coherence block and all sub-packets go into different coherence blocks and see different channel realizations. This way, a comparison with the ergodic achievable rates derived in Section <ref> is justified. The length of the pilot vector is 15 in all simulations. We use a coherence block length of 339, which is reasonable for an urban area with some mobility <cit.>. Note that we divide each coherence block into either a pilot-phase plus an uplink-phase, or into a pilot-phase plus a downlink-phase. For the decoding, we first calculate the LLR values for each bit. These values are then given to the LDPC decoder, which uses belief propagation with 50 iterations. First, we consider the uplink performance. Fig. <ref> shows the results. For massive MIMO we consider K=1, K=2 and K=8 users. We also present the performance of communication link including a single user (K=1) and a BS (M=4). For the BER curves, the decoding threshold is defined as the SDR value at which the waterfall starts (the SDR value when BER is 10^-3). The quantitative results are presented in Table <ref>. From Fig. <ref> and Table <ref>, it is clear that the BER performance gets closer to the achievable rate bound when the number of antennas increases. Note that there is a gap between the decoding threshold obtained from the BER simulations and the bound obtained by the achievable rate analysis. The gap of massive MIMO, (M=100, K=8), is greater than that of the network, (M=4, K=1). However, the main advantage is that massive MIMO serves more than one user. In Fig. <ref>, we present BER curves for the uplink when the BS considers the channel estimation error in decoding metric. We observe that when the number of users increases, the BER performance improves if the BS considers the channel estimation error. Quantitatively, when the number of users is 8, the BER performance is improved by 0.8 dB. On the other hand when the number of user is 1, this improvement becomes 0.2 dB. In Fig. <ref>, we present the uplink BER curves when the noise dispersions in the likelihood functions are 0.5, 1, 3 and 10 and the dispersion of the noise is 1. 
When the dispersion in the likelihood function is 0.5 and there is only a single user, the performance loss is relatively small: around 0.7 dB when M=100. Moreover, when K=8 the receiver that assumes a dispersion of 3 outperforms the receiver that assumes the true dispersion of 1, because of the inter-user interference. Similarly, we obtained BER performance for the downlink; see Fig. <ref>. Compared with the uplink of massive MIMO, there are extra curves corresponding to the two different precoders, MR and ZF. As for the uplink, in the downlink more antennas bring the performance closer to the bounds obtained from the achievable rate analysis. Another observation is that the ZF precoder performs slightly better than the MR precoder when the number of users is 8, while the two precoders perform almost the same when the number of users is 2. This is expected because the inter-user interference is small with 2 users, so the two precoders are almost identical. The numerical results are presented in Table <ref>. § CONCLUSIONS We investigated massive MIMO with Cauchy noise from three perspectives: channel estimation, achievable rates and soft bit detection. First, we obtained the channel estimates from uplink pilots in two ways: with and without de-spreading the received pilot signal. In contrast to the case of Gaussian noise, with Cauchy noise the de-spreading operation does not result in a sufficient statistic for channel estimation. Consequently, better channel estimation performance is obtained when using the unprocessed received signal. Next, we obtained uplink and downlink achievable rates for the cases of perfect and imperfect CSI (using the channel estimates we developed). We observed that at low SDR, longer pilot signals can be used to obtain better achievable rates. Also, the achievable rate increases with an increasing number of antennas on both uplink and downlink. We also compared the performance of Cauchy decoding with a capacity bound for general SαS noise available in the literature. Through simulations, we validated the soundness of using the Cauchy receiver for other SαS noise distributions. We compared the detectors designed for Cauchy and Gaussian noise in the presence of both types of noise. An important observation is that the performance losses of both the detector designed for Cauchy noise and the detector designed for Gaussian noise are small in the presence of Gaussian noise. However, the detector designed for Gaussian noise works poorly in the presence of Cauchy noise. This means that, unlike the Gaussian-noise detector, the Cauchy-noise detector is very robust, and it should be the preferred choice whenever robustness to unknown noise distributions is a priority. Finally, we obtained metrics for soft bit detection. Based on these metrics, we performed numerical simulations of the BER, using LDPC coding and QPSK modulation. We compared the threshold of this BER performance with the achievable rate bound. The main conclusions are that the gap between the decoding threshold and the achievable rate bound is small, and that this gap decreases with an increasing number of antennas. § MATHEMATICAL PRELIMINARIES A real-valued SαS random variable is defined by its characteristic function <cit.>: ϕ(t)=exp(jδt-γ|t|^α), where t∈ℝ, δ is the location parameter, γ>0 is the dispersion parameter, which determines the spread of the distribution, and α is the characteristic exponent, which satisfies 0<α≤2 and determines the heaviness of the tails.
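As a numerical companion to this definition, the following minimal sketch (not part of the original paper) draws SαS samples with the characteristic function exp(-γ|t|^α) using the standard Chambers-Mallows-Stuck construction; the dispersion-to-scale conversion and the tail comparison are illustrative choices.

import numpy as np

def sample_sas(alpha, gamma, size, rng=np.random.default_rng(0)):
    """Draw symmetric alpha-stable samples whose characteristic function is
    exp(-gamma * |t|**alpha), via the Chambers-Mallows-Stuck method."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)   # V ~ Uniform(-pi/2, pi/2)
    w = rng.exponential(1.0, size)                 # W ~ Exp(1)
    if np.isclose(alpha, 1.0):                     # Cauchy special case
        x = np.tan(v)
    else:
        x = (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
             * (np.cos(v - alpha * v) / w) ** ((1 - alpha) / alpha))
    # CMS yields unit scale, i.e. exp(-|t|**alpha); dispersion gamma
    # corresponds to multiplying the samples by gamma**(1/alpha).
    return gamma ** (1 / alpha) * x

# Smaller alpha means heavier tails: compare exceedance probabilities.
for alpha in (2.0, 1.8, 1.4, 1.0):
    x = sample_sas(alpha, gamma=1.0, size=200_000)
    print(f"alpha={alpha}: P(|X| > 10) is about {np.mean(np.abs(x) > 10):.4f}")

For α=2 the construction reduces to a zero-mean Gaussian with variance 2γ, and for α=1 to a Cauchy variable with dispersion γ, matching the two special cases discussed next.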
A smaller α yields a more impulsive and heavy-tailed distribution, and vice versa. There are two important special cases of the SαS distribution: Cauchy (α=1) and Gaussian (α=2), which both have pdfs in closed form. For δ=0, these pdfs are given by c(x)=γ/(π(x^2+γ^2)) for the Cauchy case and g(x)=1/√(4πγ)exp(-x^2/(4γ)) for the Gaussian case. From (<ref>) and (<ref>), two important results can be obtained: for the Gaussian distribution, the variance is 2γ; for the Cauchy distribution, the mean and variance are undefined. More precisely, the variance is infinite because 𝔼[|X|^p]=∞ if p≥α. In communication problems, we generally deal with complex-valued random variables. Define the complex Cauchy random variable X=X^R+jX^I, where X^R and X^I are jointly S(α=1)S. By definition, the characteristic function is ϕ_X(ω)=𝔼[exp(jℜ(ωX^*))], where ω∈ℂ. X is an isotropic complex Cauchy random variable if and only if the characteristic function has the form <cit.>: ϕ_X(ω)=exp(-γ|ω|). The pdf of an isotropic complex Cauchy distribution is f_X(x)=γ/(2π(|x|^2+γ^2)^3/2). As the name “isotropic” suggests, the pdf in (<ref>) is invariant to rotations of the complex phase and depends only on the magnitude of the realization. The marginal distributions of X^R and X^I can be easily obtained, and they are the same as the pdf in (<ref>). An important property of the isotropic complex Cauchy distribution is that X^R and X^I are statistically dependent. This can be immediately seen from the fact that the product of the pdfs of X^R and X^I is not equal to the joint pdf in (<ref>). This is a fundamental difference between the isotropic complex Cauchy and isotropic complex Gaussian distributions. (For the latter, the in-phase and quadrature components are independent, which is also referred to as circular symmetry of the noise <cit.>.) For the sake of completeness, the pdf of an isotropic complex Gaussian random variable, Y, (which necessarily has zero mean) is given by: f_Y(x)=1/(4πγ)exp(-|x|^2/(4γ)). § SOLVING THE OPTIMIZATION PROBLEM IN (<REF>) We denote the objective function in (<ref>) by f. We use the gradient descent algorithm <cit.>. The optimization variables are 𝐡_1^R[1] and 𝐡_1^I[1], corresponding to the real and imaginary parts of 𝐡_1[1]. Before defining the gradient vector, let us define the following auxiliary functions: p(𝐡_1^R[1],𝐡_1^I[1], i)= 𝐘'^R[1,i]-√(τp_1)(𝐡_1^R[1]ϕ_1^R[i]-𝐡_1^I[1]ϕ_1^I[i]), r(𝐡_1^R[1],𝐡_1^I[1], i)= 𝐘'^I[1,i]-√(τp_1)(𝐡_1^I[1]ϕ_1^R[i]+𝐡_1^R[1]ϕ_1^I[i]). The gradient of the objective function in (<ref>) can now be expressed as: ∇f[ 𝐡_1^R[1]; 𝐡_1^I[1] ] = -2√(τp_1)× [ ∑_i=1^τϕ^R_1[i]p(𝐡_1^R[1],𝐡_1^I[1],i)+ϕ^I_1[i]r(𝐡_1^R[1],𝐡_1^I[1],i)/γ^2+|𝐘'[1,i]-√(τp_1)𝐡_1[1]ϕ_1[i]|^2; ∑_i=1^τ-ϕ^I_1[i]p(𝐡_1^R[1],𝐡_1^I[1],i)+ϕ^R_1[i]r(𝐡_1^R[1],𝐡_1^I[1],i)/γ^2+|𝐘'[1,i]-√(τp_1)𝐡_1[1]ϕ_1[i]|^2 ]. With the gradient descent approach, we update the channel vector according to: [ 𝐡_1^R[1]; 𝐡_1^I[1] ]^j+1=[ 𝐡_1^R[1]; 𝐡_1^I[1] ]^j-η_j∇f([ 𝐡_1^R[1]; 𝐡_1^I[1] ]^j), where [ 𝐡_1^R[1]; 𝐡_1^I[1] ]^j and η_j are the solution vector and the step length at the j^th iteration, respectively. η_j can be chosen separately for each iteration using, for example, the backtracking algorithm <cit.>. Note that the cost of obtaining the gradient for one channel realization is 𝒪(τ) flops, and for all channel realizations the cost is 𝒪(MKτ) flops. § EMPIRICAL CALCULATION OF THE SISO CHANNEL'S MUTUAL INFORMATION The mutual information between the channel input X and the channel output Y can be defined as I(X;Y)=H(X)-H(X|Y).
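Before continuing with the mutual-information derivation, the following minimal sketch (not the authors' code) illustrates the gradient-descent channel estimator of the previous appendix section for a single channel coefficient. The pilot sequence, the noise generator, and the fixed step size are illustrative assumptions; the objective is proportional to the Cauchy negative log-likelihood (hence has the same minimizer), and the complex-valued gradient packs the two real partial derivatives given above.

import numpy as np

rng = np.random.default_rng(1)
tau, p1, gamma = 15, 1.0, 1.0                                  # pilot length, pilot power, dispersion
phi = np.exp(2j * np.pi * rng.random(tau)) / np.sqrt(tau)      # illustrative unit-energy pilot
a = np.sqrt(tau * p1) * phi                                    # effective pilot coefficients
h_true = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)       # one channel coefficient

# Isotropic complex Cauchy noise, drawn as a bivariate Student-t with 1 degree of freedom.
noise = gamma * (rng.normal(size=tau) + 1j * rng.normal(size=tau)) / np.abs(rng.normal(size=tau))
y = a * h_true + noise                                         # received pilot samples

def cost(h):
    # Proportional to the Cauchy negative log-likelihood of h (same minimizer).
    return float(np.sum(np.log(gamma**2 + np.abs(y - a * h)**2)))

def grad(h):
    # Complex packing of the two real partial derivatives: (d/dh^R) + j*(d/dh^I).
    r = y - a * h
    return np.sum(-2.0 * np.conj(a) * r / (gamma**2 + np.abs(r)**2))

h, eta = 0.0 + 0.0j, 5e-4                                      # zero initialization, fixed step size
for _ in range(10_000):
    h = h - eta * grad(h)

print("true h:", np.round(h_true, 3), " estimate:", np.round(h, 3))
print("cost:", round(cost(0.0 + 0.0j), 2), "->", round(cost(h), 2))

In the text above the step size is instead chosen by backtracking, and the update is run for every antenna and user at a cost of 𝒪(MKτ) flops per gradient evaluation.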
Let us assume that X is chosen uniformly from a certain discrete set A_X with cardinality S. Then, (<ref>) can be written as: I(X;Y)=log_2S-∫∑_x∈A_X-p(x,y)log_2(p(x|y))dy, where p(x,y) is the joint probability density function of X and Y, and p(x|y) is the conditional probability density function. The second term is the expectation of -log_2(p(x|y)) with respect to X and Y. By using Bayes' rule, we can rewrite the mutual information as: I(X;Y) =log_2S-𝔼_X,Y{log_2(p(Y)/p(X,Y))}, =log_2S-𝔼_X,Y{log_2((∑_x∈A_Xp(Y|x)p(x))/(p(Y|X)p(X)))}, =log_2S-𝔼_X,Y{log_2((∑_x∈A_Xp(Y|x))/p(Y|X))}. Now, we consider the perfect CSI case, where the receiver has perfect knowledge of the channel. The mutual information is then given by <cit.>: I(X;Y|H)=H(X|H)-H(X|H,Y). Note that X is independent of H. By following the steps in (<ref>), (<ref>) can be rewritten as: I(X;Y|H)=log_2S-𝔼_X,Y,H{log_2((∑_x∈A_Xp(Y|H,x))/p(Y|H,X))}. The expectation terms in (<ref>) and (<ref>) can be calculated by Monte Carlo simulation. ICC2022 Z. Gülgün and E. G. Larsson, “Channel Estimation for Massive MIMO in the Presence of Cauchy Noise,” in IEEE International Conference on Communications (ICC), 2022, pp. 1769–1774. BJORNSON20193 E. Björnson, L. Sanguinetti, H. Wymeersch, J. Hoydis, and T. L. Marzetta, “Massive MIMO is A Reality—What is Next?: Five Promising Research Directions for Antenna Arrays,” Digital Signal Processing, vol. 94, pp. 3–20, 2019, Special Issue on Source Localization in Massive MIMO. 7497508 T. Van Chien, E. Björnson, and E. G. Larsson, “Joint Power Allocation and User Association Optimization for Massive MIMO Systems,” IEEE Trans. Wireless Commun., vol. 15, no. 9, pp. 6384–6399, 2016. 8429913 C. Mollén, U. Gustavsson, T. Eriksson, and E. G. Larsson, “Spatial Characteristics of Distortion Radiated From Antenna Arrays With Transceiver Nonlinearities,” IEEE Trans. Wireless Commun., vol. 17, no. 10, pp. 6663–6679, 2018. 9356519 M. Matthaiou, O. Yurduseven, H. Q. Ngo, D. Morales-Jimenez, S. L. Cotton, and V. F. Fusco, “The Road to 6G: Ten Physical Layer Challenges for Communications Engineers,” IEEE Commun. Mag., vol. 59, no. 1, pp. 64–69, 2021. 7763811 E. Axell, P. Eliardsson, S. Tengstrand, and K. Wiklundh, “Power Control in Interference Channels With Class A Impulse Noise,” IEEE Wireless Commun. Lett., vol. 6, no. 1, pp. 102–105, 2017. meas J. Zhang, L. Liu, K. Wang, Y. Fan, and J. Qiu, “Measurements and Statistical Analyses of Electromagnetic Noise for Industrial Wireless Communications,” Int. J. of Intell. Syst., vol. 36, no. 3, pp. 1304–1330, 2021. ofdm J. Ferrer-Coll, S. Slimane, J. Chilo, and P. Stenumgaard, “Detection and Suppression of Impulsive Noise in OFDM Receiver,” Wireless Pers. Commun., vol. 85, pp. 2245–2259, 2015. 8447433 Q. Zheng, F. Wang, B. Ai, and Z. Zhong, “Multicarrier Downlink Transmission for High-speed Railway in Non-Gaussian Noise Channels,” IEEE Access, vol. 6, pp. 52 607–52 615, 2018. 6930962 J. Ferrer-Coll, B. Slimane, J. Chilo, and P. Stenumgaard, “Impulsive noise detection in OFDM systems with PAPR reduction,” in 2014 International Symposium on Electromagnetic Compatibility, 2014, pp. 523–527. 9645175 K. Hägglund and E. Axell, “Adaptive Demodulation in Impulse Noise Channels,” IEEE Trans. Veh. Tech., vol. 71, no. 2, pp. 1685–1698, 2022. 1707997 M. Chitre, J. Potter, and S.-H. Ong, “Optimal and Near-Optimal Signal Detection in Snapping Shrimp Dominated Ambient Noise,” IEEE J. Ocean. Eng., vol. 31, no. 2, pp. 497–503, 2006.
990732 M. Zimmermann and K. Dostert, “Analysis and Modeling of Impulsive Noise in Broad-band Powerline Communications,” IEEE Trans. Electromag. Compat., vol. 44, no. 1, pp. 249–258, 2002. 165447 E. Sousa, “Performance of A Spread Spectrum Packet Radio Network Link in A Poisson Field of Interferers,” IEEE Trans. Inf. Theory, vol. 38, no. 6, pp. 1743–1754, 1992. 7805183 S. R. K. Vadali, P. Ray, S. Mula, and P. K. Varshney, “Linear Detection of a Weak Signal in Additive Cauchy Noise,” IEEE Trans. Commun., vol. 65, no. 3, pp. 1061–1076, 2017. 510611 P. Tsakalides and C. Nikias, “The Robust Covariation-based MUSIC (ROC-MUSIC) Algorithm for Bearing Estimation in Impulsive Noise Environments,” IEEE Trans. Signal Process., vol. 44, no. 7, pp. 1623–1633, 1996. 550156 ——, “Robust Adaptive Beamforming in Alpha-stable Noise Environments,” in 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, vol. 5, 1996, pp. 2884–2887. 8781820 M. Liu, N. Zhao, J. Li, and V. C. M. Leung, “Spectrum Sensing Based on Maximum Generalized Correntropy Under Symmetric Alpha Stable Noise,” IEEE Trans. Veh. Tech., vol. 68, no. 10, pp. 10 262–10 266, 2019. 482119 P. Tsakalides and C. Nikias, “Maximum Likelihood Localization of Sources in Noise Modeled as A Stable Process,” IEEE Trans. Signal Process., vol. 43, no. 11, pp. 2700–2713, 1995. 6932454 Y. Chen and J. Chen, “Novel SαS PDF Approximations and Their Applications in Wireless Signal Detection,” IEEE Trans. Wireless Commun., vol. 14, no. 2, pp. 1080–1091, 2015. 4374156 M. R. Souryal, E. G. Larsson, B. Peric, and B. R. Vojcic, “Soft-Decision Metrics for Coded Orthogonal Signaling in Symmetric Alpha-Stable Noise,” IEEE Trans. Signal Process., vol. 56, no. 1, pp. 266–273, 2008. 9291441 L. Zhou, J. Dai, W. Xu, and C. Chang, “Uplink Channel Estimation for Massive MIMO Systems With Impulsive Noise,” IEEE Commun. Lett., vol. 25, no. 5, pp. 1534–1538, 2021. marzetta_larsson_yang_ngo_2016 T. L. Marzetta, E. G. Larsson, H. Yang, and H. Q. Ngo, Fundamentals of Massive MIMO. Cambridge University Press, 2016. massivemimobook E. Björnson, J. Hoydis, and L. Sanguinetti, “Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency,” Foundations and Trends® in Signal Processing, vol. 11, no. 3-4, pp. 154–655, 2017. Kay:1993:FSS:151045 S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 1993. nocedal2006numerical J. Nocedal and S. Wright, Numerical Optimization, ser. Springer Series in Operations Research and Financial Engineering. Springer New York, 2006. Powell1973OnSD M. J. D. Powell, “On search directions for minimization algorithms,” Mathematical Programming, vol. 4, pp. 193–201, 1973. bertsekas2016nonlinear D. Bertsekas, Nonlinear Programming, ser. Athena Scientific Optimization and Computation Series. Athena Scientific, 2016. 6875400 J. Fahs and I. Abou-Faycal, “A Cauchy Input Achieves The Capacity of a Cauchy Channel Under A Logarithmic Constraint,” in 2014 IEEE International Symposium on Information Theory, 2014, pp. 3077–3081. 7866884 M. L. de Freitas, M. Egan, L. Clavier, A. Goupil, G. W. Peters, and N. Azzaoui, “Capacity Bounds for Additive Symmetric α-Stable Noise Channels,” IEEE Trans. Inf. Theory, vol. 63, no. 8, pp. 5115–5123, 2017.
841172 M. Medard, “The Effect upon Channel Capacity in Wireless Communications of Perfect and Imperfect Knowledge of The Channel,” IEEE Trans. Inf. Theory, vol. 46, no. 3, pp. 933–946, 2000. Tse05fundamentalsof D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge University Press, 2005. Panagi P. Tsakalides, “Array signal processing with alpha-stable distributions,” Ph.D. dissertation, Univ. Southern California, Los Angeles, CA, 1995. 340469 N. Merhav, G. Kaplan, A. Lapidoth, and S. Shamai Shitz, “On information rates for mismatched decoders,” IEEE Trans. Inf. Theory, vol. 40, no. 6, pp. 1953–1967, 1994. 1661831 D. Arnold, H. A. Loeliger, P. Vontobel, A. Kavcic, and W. Zeng, “Simulation-Based Computation of Information Rates for Channels With Memory,” IEEE Trans. Inf. Theory, vol. 52, no. 8, pp. 3498–3508, 2006. 4520145 E. G. Larsson and J. Jalden, “Fixed-Complexity Soft MIMO Detection via Partial Marginalization,” IEEE Trans. Signal Process., vol. 56, no. 8, pp. 3397–3407, 2008. Niki M. Shao and C. Nikias, “Signal Processing with Fractional Lower Order Moments: Stable Processes and Their Applications,” Proceedings of the IEEE, vol. 81, no. 7, pp. 986–1010, 1993. gallager2013stochastic R. Gallager, Stochastic Processes: Theory for Applications. Cambridge University Press, 2013.
http://arxiv.org/abs/2307.01928v1
20230704212512
Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners
[ "Allen Z. Ren", "Anushri Dixit", "Alexandra Bodrova", "Sumeet Singh", "Stephen Tu", "Noah Brown", "Peng Xu", "Leila Takayama", "Fei Xia", "Jake Varley", "Zhenjia Xu", "Dorsa Sadigh", "Andy Zeng", "Anirudha Majumdar" ]
cs.RO
[ "cs.RO", "cs.AI", "stat.AP" ]
Large language models (LLMs) exhibit a wide range of promising capabilities — from step-by-step planning to commonsense reasoning — that may provide utility for robots, but remain prone to confidently hallucinated predictions. In this work, we present KnowNo, which is a framework for measuring and aligning the uncertainty of LLM-based planners such that they know when they don't know and ask for help when needed. KnowNo builds on the theory of conformal prediction to provide statistical guarantees on task completion while minimizing human help in complex multi-step planning settings. Experiments across a variety of simulated and real robot setups that involve tasks with different modes of ambiguity (from spatial to numeric uncertainties, from human preferences to Winograd schemas) show that KnowNo performs favorably over modern baselines (which may involve ensembles or extensive prompt tuning) in terms of improving efficiency and autonomy, while providing formal assurances. KnowNo can be used with LLMs out of the box without model finetuning, and suggests a promising lightweight approach to modeling uncertainty that can complement and scale with the growing capabilities of foundation models.[Webpage with additional results and videos: <https://robot-help.github.io>] § INTRODUCTION How can we endow our robots with the ability to know when they don't know? Accurately modeling and accounting for uncertainty is a longstanding challenge towards robots that operate reliably in unstructured and novel environments. In this work, we study this challenge in the context of language-instructed robots. Language provides a natural and flexible interface for humans to specify tasks, contextual information, and intentions, while also allowing us to provide help and clarification to robots when they are uncertain. Recently, approaches that leverage large language models (LLMs) for planning <cit.> have demonstrated the ability to respond to natural and unstructured language instructions to generate temporally extended plans. These approaches enable leveraging the vast amount of prior knowledge and rich context embedded in pretrained LLMs, and lead to substantial abstract reasoning capabilities. However, one of the major challenges with current LLMs is their tendency to hallucinate, i.e., to confidently generate outputs that are plausible but incorrect and untethered from reality. Such false confidence in incorrect outputs poses a significant challenge to LLM-based planning in robotics. Moreover, natural language instructions in real-world environments often carry a high degree of ambiguity, whether inherent or unintentionally introduced by humans, and confidently following an incorrectly constructed plan could lead to undesirable or even unsafe actions. As an example (<ref>), a robot tasked with heating food may be asked to “place the bowl in the microwave”; if there are multiple bowls on the counter, the instruction is ambiguous. Moreover, the metal bowl may not be safe for the microwave. Rather than acting in this ambiguous setting and damaging the microwave or even causing a fire, the robot should know when it doesn't know and ask for clarification instead (e.g., ask which bowl should be placed in the microwave).
Prior work in language-based planning either does not seek such clarifications <cit.> or does so via extensive prompting <cit.>, which requires careful prompt engineering to prevent the robot from excessively relying on seeking assistance. Moreover, prior approaches do not provide a way to ensure that asking for help results in a desired level of task success. We formalize these challenges via two desiderata: (i) calibrated confidence: the robot should seek sufficient help to ensure a statistically guaranteed level of task success specified by the user, and (ii) minimal help: the robot should minimize the overall amount of help it seeks by narrowing down possible ambiguities in a task. We collectively refer to these sufficiency and minimality conditions as uncertainty alignment. Statement of contributions. We propose — Know When You Don't Know — a framework for aligning the uncertainty of LLM-based planners utilizing the theory of conformal prediction (CP) <cit.>. We make the following contributions: (1) Given a language instruction, we utilize a pre-trained LLM with uncalibrated confidence to generate a set of possible actions for the robot to execute next. We demonstrate how to use CP to select a subset of these options, which allows the robot to decide an action to execute (if the subset is a singleton) or to ask for help otherwise. (2) We prove theoretical guarantees on calibrated confidence in both single-step and multi-step planning problems: with a user-specified level 1-ϵ, the robot performs the tasks correctly in 1-ϵ % of scenarios by asking for help when it deems it necessary. CP also minimizes the average size of prediction sets, thus addressing the goal of minimal help. (3) We evaluate in both simulation and hardware with a suite of language-instructed manipulation tasks with various types of potential ambiguities (e.g., based on spatial locations, numerical values, attributes of objects, and Winograd schemas). Experiments across multiple settings and embodiments validate the ability of to provide statistically guaranteed levels of task success while reducing the amount of help required by 10-24% as compared to baseline approaches. § OVERVIEW: ROBOTS THAT ASK FOR HELP Language-based planners. Language model planners can generate step-by-step robot plans, where each step y is composed of variable-length sequences of symbols (σ_1,σ_2,…,σ_k), text tokens as input to a language-conditioned policy <cit.> (see <ref>), or robot code executed by an interpreter <cit.>. Pretrained autoregressive LLMs predict each step y, whose joint probability over tokens can be factorized as the product of conditional probabilities of next token prediction p(y)=∏_i=1^k p(σ_i |σ_1, …, σ_i-1). Here, we are interested in characterizing the uncertainty of next step prediction p(y). The distribution of p remains highly sensitive to variable-length k; hence p(y) on its own serves as a rather poor scoring function <cit.> particularly when steps in a plan are expressed in natural language (our experiments in <ref> also show that using p(y) directly for calibration leads to poor performance). Planning as multiple-choice Q&A. We can address this length bias with a simple trick. First, with a few-shot prompt that includes possible next steps in a few scenarios (<ref>), the LLM generates a set {y^i} of candidate next steps (e.g., “Put plastic bowl in microwave", “Put metal bowl in microwave", etc., in <ref>) that are semantically different. 
Then the task of choosing among them is formatted as multiple-choice Q&A (MCQA). This eliminates plans that the LLM considers unlikely and reduces the problem of next-step prediction down to a single next-token prediction — aligning with LLM log-likelihood loss functions and LLM training data (MCQA datasets <cit.>). These probabilities can serve as normalized scores that can be used by various uncertainty quantification methods such as thresholding and ensemble methods. In this work, we use these normalized scores within a conformal prediction (CP) framework. Specifically, CP uses a held-out calibration set of example plans in different scenarios to generate a reduced prediction set of plans among {y^i} (<ref>). The LLM is certain if this prediction set is a singleton, and triggers help from a human otherwise. <ref> details additional rationale of applying MCQA to evaluate the semantic uncertainty of the LLM. Robots that ask for help. In this work, we show that LLM planning — combined with CP for uncertainty estimation — can effectively enable robots to interact with an environment, and ask for help when needed. The environment e can be formulated as a partially observable Markov decision process (POMDP): at any given state s^t at time t, given a user instruction ℓ, the robot executes an action a^t according to a policy π, then transitions to a new state s^t+1. Our policy π is composed of four parts (<ref>): * Multiple-choice generation: An LLM generates a diverse set of candidate plans labeled with `A', `B', `C', `D' , and an additional possible plan, `E) an option not listed here', which is appended post-hoc. We denote the set of labels by 𝒴{`A', `B', `C', `D', `E'}. These plans are generated by prompting the LLM with context x^t, which is text that includes (1) the robot observation at each time step (e.g., using a vision-based object detector or an oracle; see <ref>), (2) the user instruction, and (3) few-shot examples of possible plans in other scenarios. An augmented context x̃^t is obtained by appending the LLM-generated plans to the context x^t. * Prediction set generation: We use CP to choose a subset C(x̃^t) ⊆𝒴 of candidate plans using the LLM's (uncalibrated) confidence f̂(x̃^t)_y in each prediction y ∈𝒴 given the context x̃^t. * Human help: If the prediction set is a non-singleton, the robot leverages help from a human (or any other supervisor agent, denoted as a function f_ℋ) to arrive at an unambiguous next step y_ℋ∈ C(x̃^t). * Low-level control: A low-level module φ converts the plan in y_ℋ to an action a^t = φ(y_ℋ). Goal: uncertainty alignment. Often in real-world settings, language instructions ℓ can be ambiguous, “place the bowl in the microwave” does not specify that the human means the plastic bowl (<ref>). Our goal in this work is to address uncertainty alignment: achieve a desired level of task success while minimizing human help. We formalize this by considering a joint distribution 𝒟 over scenarios ξ (e,ℓ,g), where e is an environment (POMDP), ℓ is a (potentially ambiguous) language instruction, and g is a goal (e.g., formulated as a subset of acceptable states in the POMDP and partially observable through l). Importantly, we do not assume knowledge of 𝒟, except that we can sample a finite-size dataset of i.i.d. scenarios from it. 
We formalize uncertainty alignment in our setting as (i) calibrated confidence: the robot's policy (with human help as described above) succeeds with a user-specified probability 1-ϵ over new scenarios ξ∼𝒟, and (ii) minimal help: the policy minimizes the number |C(·)| of options presented to the human on average across scenarios ξ∼𝒟. § CALIBRATING LLM CONFIDENCE WITH CONFORMAL PREDICTION The MCQA setup above allows us to apply CP to obtain calibrated confidence guarantees while (approximately) minimizing help. We introduce CP below, and then present the different practical settings we consider (possibly involving multiple planning steps and/or multiple correct plans per step). §.§ Background: Conformal Prediction For now, we drop the timestep superscript and consider a generic MCQA setup with pairs (x̃, y) consisting of input x̃ and true label y. Suppose there is a calibration set Z = {z_i = (x̃_i, y_i)}_i=1^N of such pairs drawn i.i.d. from an unknown distribution 𝒟 over 𝒵:= 𝒳×𝒴. Now, given a new i.i.d. sample z_test = (x̃_test, y_test) with unknown true label y_test, CP generates a prediction set C(x̃_test) ⊆𝒴 that contains y_test with high probability <cit.>: ℙ(y_test∈ C(x̃_test)) ≥ 1-ϵ, where 1-ϵ is a user-specified value (desired task success level in our setting) that affects the size of C(·). To generate C(x̃_test), CP first uses the LLM's confidence f̂ (cf. <ref>) to evaluate the set of nonconformity scores {s_i = 1 - f̂(x̃_i)_y_i}_i=1^N over the calibration set — the higher the score is, the less each data in the calibration set conforms to the data used for training f̂. Then CP performs calibration by defining q̂ to be the ⌈ (N+1)(1-ϵ) ⌉/N empirical quantile of s_1, …, s_N. Lastly, CP generates C(x̃_test) = {y ∈𝒴 | f̂(x̃_test)_y ≥ 1-q̂)}, i.e., the prediction set that includes all labels that the predictor is at least 1-q̂ confident in. The generated prediction set ensures that the coverage guarantee in <ref> holds. Dataset-conditional guarantee. The probability in <ref> is over both the sampling of the calibration set Z and z_test (i.e., a marginal guarantee). Thus, to ensure the desired probability of coverage for each new z_test, one needs a fresh calibration set. But, in practice, we only calibrate once with a fixed set. The following dataset-conditional guarantee <cit.> holds with probability 1-δ over the sampling of the calibration set Z: ℙ(y_test∈ C(x̃_test) | {z_1, …, z_N}) ≥Beta_N+1-v, v^-1(δ), v := ⌊ (N+1)ϵ̂⌋, where Beta_N+1-v, v^-1(δ) denotes the inverse CDF (quantile) level of δ in a Beta distribution with parameters N+1-v and v, and ϵ̂ is the threshold used for calibration. In practice, we use a modest-sized calibration dataset (N=400) and δ=0.01, and adjust ϵ̂ to achieve the desired 1-ϵ coverage (with probability 1-δ = 0.99 over the sampling of the calibration set). Minimal prediction set size. From <cit.>, C(·) achieves the smallest average set size among possible prediction schemes 𝒞 that achieve the coverage guarantee, if f̂(x̃)_y models true conditional probabilities: min_C ∈𝒞 (x̃,·) ∼𝒟𝔼[|C(x̃)|], subject to (<ref>). The assumption that f̂ models true conditional probabilities may be a good approximation for LLMs trained on large-scale data with a proper scoring rule <cit.>; one can also obtain bounds on near-optimal average set size for CP using f̂ that approximately models conditional probabilities <cit.>, but we omit these results for brevity. We emphasize that the CP coverage guarantees hold regardless of the accuracy of f̂. 
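As a concrete illustration of this recipe, the following minimal sketch (not the authors' implementation) performs the calibration step and builds a prediction set for one new input, assuming the heuristic confidences f̂ have already been collected; the calibration scores are synthetic, and for brevity the marginal coverage level is used directly rather than first adjusting ϵ̂ through the Beta quantile of the dataset-conditional bound.

import numpy as np

def conformal_quantile(cal_true_label_scores, eps):
    # cal_true_label_scores[i] = f_hat(x_i)_{y_i} on the calibration set.
    n = len(cal_true_label_scores)
    nonconformity = np.sort(1.0 - np.asarray(cal_true_label_scores))  # s_i = 1 - f_hat(x_i)_{y_i}
    k = int(np.ceil((n + 1) * (1 - eps)))                             # ceil((N+1)(1-eps))/N quantile
    return nonconformity[min(k, n) - 1]

def prediction_set(test_scores, q_hat):
    # Keep every option the model trusts with confidence at least 1 - q_hat.
    return {y for y, s in test_scores.items() if s >= 1.0 - q_hat}

rng = np.random.default_rng(0)
cal_scores = rng.uniform(0.0, 1.0, size=400)     # synthetic stand-in for f_hat(x_i)_{y_i}
q_hat = conformal_quantile(cal_scores, eps=0.15)
test_scores = {"A": 0.55, "B": 0.30, "C": 0.10, "D": 0.04, "E": 0.01}
print("q_hat =", round(float(q_hat), 3), " C(x_test) =", prediction_set(test_scores, q_hat))

A prediction set with more than one element is exactly the situation in which the robot would ask for help.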
Overall, CP is a powerful and easy-to-use statistical tool to produce (1) tight coverage guarantees—addressing the goal of calibrated confidence, and (2) small prediction sets for unseen data given a blackbox predictor like an LLM and an unknown data distribution—addressing our second goal of minimal help. §.§ Single-Step Uncertainty Alignment We now demonstrate how to use CP to achieve uncertainty alignment for LLM-based planning with a user-specified task completion rate 1-ϵ. We first consider a single-step setting, where the LLM plans only once given a context. For simplicity, we again drop the timestep superscript t in this section. Data collection. We collect N i.i.d. scenarios from the distribution 𝒟, and the corresponding contexts summarizing the robot observation and instruction (<ref>). We use the MCQA approach from <ref> to generate candidate plans and then label each augmented context x̃ (i.e., context combined with plans) with the correct label (here and in <ref>, we assume that there is a unique correct candidate plan; we provide an extension to multiple acceptable options in <ref>). We thus obtain a calibration set Z = {z_i = (x̃_i, y_i)}_i=1^N with pairs of augmented contexts and correct labels. Calibration. Next we follow <ref> to perform calibration: first adjust ϵ̂ to achieve the 1-ϵ coverage based on <ref> and then find the quantile q̂. Given a new context x̃_test (after MCQA in a new scenario) at test time, we can construct the calibration set C(x̃_test) that contains y_test with 1-ϵ probability. Triggering help. If C(x̃_test) is a singleton, the robot executes the corresponding plan. Otherwise, we deem the LLM uncertain over possible actions and trigger human help. The robot presents the human with C(x̃_test) (including the corresponding plans in text) and asks the human to choose one[In practice we convert the prediction set to a question in natural language (<ref>).]. The human chooses y_test if y_test∈ C(x̃_test)[If the correct option in C(x̃_test) is `E', the human provides the correct action that was not listed by the robot. ], or halts the operation otherwise. This setup turns the coverage guarantee from CP to the task completion guarantee: Consider a single-step setting where we use CP with coverage level 1-ϵ to generate prediction sets and seek help whenever the set is not a singleton. With probability 1-δ over the sampling of the calibration set, the task completion rate over new test scenarios drawn from 𝒟 is at least 1-ϵ. If f̂ models true conditional probabilities, the average prediction set size is minimized among possible prediction schemes that achieve 1-ϵ completion rate. The proof immediately follows from the fact that under the assumption of accurate human help, the robot fails only when the prediction set does not contain the true label; the prediction set minimality follows from <ref>. Thus, our approach addresses the goals of calibrated confidence and minimal help from <ref>. §.§ Multi-Step Uncertainty Alignment Now we extend the CP-based uncertainty alignment approach to settings where the LLM plans in multiple timesteps. This setting can be helpful when the LLM receives feedback from the environment or human between steps. However, the original CP formulation cannot be applied here since the context x^t between steps are dependent; moreover, the robot's actions at step t influence the distribution over contexts that the robot observes at future steps. Thus, the i.i.d. assumption for the coverage guarantee is no longer valid. 
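Before turning to the multi-step extension, the following is a minimal end-to-end sketch of the single-step procedure just described. The LLM call is mocked: the log-probability dictionary stands in for whatever API returns next-token log-probabilities of the option letters, and the plan strings and numbers are hypothetical; the softmax normalization, the set construction, and the ask-for-help rule follow the description above.

import numpy as np

OPTIONS = ["A", "B", "C", "D", "E"]

def option_scores(next_token_logprobs):
    # Normalize the log-probabilities of the option tokens into f_hat(x)_y.
    logp = np.array([next_token_logprobs[o] for o in OPTIONS])
    p = np.exp(logp - logp.max())
    return dict(zip(OPTIONS, p / p.sum()))

def act_or_ask(next_token_logprobs, plans, q_hat):
    scores = option_scores(next_token_logprobs)
    pred_set = [o for o in OPTIONS if scores[o] >= 1.0 - q_hat]
    if len(pred_set) == 1:                                   # singleton: execute directly
        return ("execute", plans[pred_set[0]])
    return ("ask_human", [plans[o] for o in pred_set])       # otherwise: present the set

# Hypothetical scores for the ambiguous "bowl in the microwave" scenario.
logprobs = {"A": -0.4, "B": -1.2, "C": -4.0, "D": -5.0, "E": -6.0}
plans = {"A": "put the plastic bowl in the microwave", "B": "put the metal bowl in the microwave",
         "C": "put the plastic bowl in the drawer", "D": "put the metal bowl in the drawer",
         "E": "an option not listed here"}
print(act_or_ask(logprobs, plans, q_hat=0.8))                # q_hat comes from calibration

With a singleton set the returned plan would be passed to the low-level policy; if 'E' appears in a non-singleton set, the human supplies the unlisted action, as described above.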
Here, we present a novel extension of CP to multi-step settings that tackles this challenge. Sequence-level calibration. The key ideas are to (i) lift the data to sequences, and (ii) perform calibration at the sequence level using a carefully designed nonconformity score function that allows for causal construction of the prediction set at test time. Suppose that each data point consists of a sequence of augmented context x = (x̃^0, x̃^1, …, x̃^T-1) and true labels y = (y^0, y^1, …, y^T-1), where T is the time horizon and x̃^t arises from having performed the correct actions in previous steps. The distribution 𝒟 over scenarios induces a distribution over data sequences. We can again collect a calibration set Z = {z_i = (x_i, y_i)}_i=1^N. Next we use the lowest score over the timesteps as the score for the sequence[We overload notation here and use f̂ to also assign confidence scores to sequences.]: f̂(x)_y := min_t ∈ [T] f̂(x^t)_y^t. With the standard calibration procedure in <ref>, we construct a sequence-level prediction set C(x_test) := {y∈𝒴^T | f̂(x_test)_y≥ 1-q̂} for a new context sequence x_test with the quantile q̂. Causal construction of C(x) at test time. Note that C(x_test) is constructed with the full sequence x_test at once. However, at test time, we do not see the entire sequence of contexts all at once but rather x_test^t one at a time. We thus need to construct C(x_test) in a causal manner (i.e., always relying only on current and past information). Consider the causally constructed prediction set C^t(x_test^t) := { y^t | f̂(x_test^t)_y^t≥ 1-q̂} at time t using the same quantile level q̂ from the non-causal calibration above, and define C(x_test) := C^0(x_test^0) × C^1(x_test^1) ×…× C^T-1(x_test^T-1). We would like to obtain a lower bound on the sequence-level coverage: ℙ(y_test∈ C(x_test)) ≥ 1-ϵ. For any y∈𝒴^T, y∈C(x_test) y∈ C(x_test). Consider a multi-step setting where we use CP with coverage level 1-ϵ to causally construct the prediction set and seek help whenever the set is not a singleton at each timestep. With probability 1-δ over the sampling of the calibration set, the task completion rate over new test scenarios drawn from 𝒟 is at least 1-ϵ. If f̂ models true conditional probabilities, the average prediction set size is minimized among possible prediction schemes that achieve 1-ϵ completion rate. The proofs are deferred to <ref>. Claim 1 allows us to construct causal prediction sets from non-causal calibration. We then show that the sequence-level task completion rate guarantee still holds. Multiple acceptable options. Often, there can be multiple acceptable options at the same timestep, e.g., the task is to bring the human a soda, and either the Coke or Sprite on the table is acceptable. In such settings, we would like the prediction set to contain at least one acceptable option. We extend our method and confidence guarantees to this setting for both single- and multi-step problems in <ref> and <ref>. § EXPERIMENTS We evaluate our framework in a diverse set of language-instructed tasks and environments below, and demonstrate its effectiveness in achieving a user-specified task completion rate while minimizing user help. We use PaLM-2L <cit.> as the LLM in all examples unless otherwise noted. <ref> shows details of the scenario distribution in each example and additional information on the baselines. <ref> shows details of robot perception and low-level policies. <ref> shows additional discussions and experiment results. Baselines. 
A straightforward way to construct prediction sets given a desired 1-ϵ coverage is to rank options according to confidence and construct a set such that the cumulative confidence exceeds 1-ϵ; we consider two baselines that are based on such cumulative thresholding but use different kinds of scores: Simple Set uses the same f̂ as KnowNo; Ensemble Set <cit.> instead uses the frequencies of the LLM outputting y ∈𝒴 (out of 20 trials total) with randomized sampling of few-shot examples in the prompt. However, the resulting prediction sets are not guaranteed to achieve 1-ϵ coverage as the probabilities can be miscalibrated <cit.>, and often include additional unnecessary options <cit.>. Instead of using cumulative thresholding, KnowNo constructs prediction sets by including options with scores higher than a threshold computed using CP, which results in statistical guarantees. We also introduce two prompt-based baselines: Prompt Set prompts the LLM to directly output the prediction set (“Prediction set: [A, C]”); Binary prompts the LLM to directly output a binary indicator for uncertainty (“Certain/Uncertain: Certain”), which is used in other LLM-based planning work <cit.> for triggering human intervention. Note that the ϵ level is not used in Prompt Set or Binary, and so the user cannot explicitly control the task success rate. Lastly, we consider No Help, where the option with the highest score is always executed without any human intervention. §.§ Simulation: Tabletop Rearrangement [Figure: Deviation from the specified task success level 1-ϵ = 0.85 to the empirical success rate for the three settings in Simulation.] A robot arm is asked to rearrange objects on a table in the PyBullet simulator <cit.> (<ref> right top). Each scenario is initialized with three bowls and blocks of green, yellow, and blue colors. The task is to move a certain number of blocks or bowls toward a different object or to a specific location around it. We introduce three settings based on different types of ambiguities in the user instruction: (1) Attribute (e.g., referring to the bowl with the word “receptacle”), (2) Numeric (e.g., under-specifying the number of blocks to be moved by saying “a few blocks”), and (3) Spatial (e.g., “put the yellow block next to the green bowl”, where the human has a preference over placing it at the front/back/left/right). For each setting, we construct a distribution over scenarios (detailed in <ref>) and perform experiments separately. KnowNo achieves the target task success rate consistently. First, we investigate whether KnowNo and the baselines achieve a given target task success rate consistently in the three settings; we set the failure level ϵ = 0.15. In <ref> we show the difference between achieved and target rates for all methods. Results show that KnowNo achieves the smallest deviations overall, due to the coverage guarantee from CP. Simple Set and Ensemble Set cannot achieve coverage consistently. Prompt Set, Binary, and No Help have larger deviations from the target since the user has no control over the error rate. Also, as the scenarios get increasingly ambiguous (least in Attribute and most in Spatial), the baselines show larger deviations. KnowNo achieves a high task success rate with less human help as ϵ varies. In <ref> we vary the target error rate ϵ and show the curves of task success rate vs. prediction set size and human help rate averaged over the three settings. For KnowNo, Simple Set, and Ensemble Set, specifying a lower ϵ improves the empirical task success rate while also requiring more human help.
The most natural comparison is between KnowNo and Simple Set, as both use next-token probabilities from the LLM as the confidence score. KnowNo achieves higher success-to-help ratios across ϵ levels, thanks to calibrated confidence from CP. Meanwhile, Prompt Set and Binary do not allow controlling success rates. Prompt Set performs the worst, indicating the challenge of prompting-based methods for calibrated prediction sets. Binary works favorably at some success levels, but lacks flexibility and doesn't provide prediction sets for human feedback. In addition, <ref> shows the results for individual ambiguity settings. As the scenarios become more ambiguous, KnowNo shows a greater reduction of human help compared to Simple Set, as much as 24% at certain success levels. [Figure: Comparison of task success rate vs. average prediction set size (left) and vs. human help rate (right) in Simulation, averaged over the three settings. 200 trials are run for each method. ϵ is varied from 0.25 to 0.01 for KnowNo, and from 0.6 to 0.01 for Simple Set and Ensemble Set. Binary and No Help are not shown on the left since prediction sets are not provided.] Ensemble Set can perform well but is computationally expensive. <ref> also shows that Ensemble Set provides a high task success rate with a small amount of human help at higher ϵ levels. However, there are two main drawbacks. First, we find in some scenarios that even with 20 randomized prompts, the LLM can fail to choose the correct option and thus assigns zero probability to it. As shown in <ref>, this means that Ensemble Set can fail to improve once it reaches some level of human help. Second, it requires 20× the inference time of the other methods. Investigating how to lower the computational burden and how to combine ensemble-based probabilities with CP is a fruitful future direction. §.§ Hardware: Multi-Step Tabletop Rearrangement In this example, a UR5 robot arm is asked to sort a variety of toy food items on a table (<ref> left). In each scenario, three items are placed on the table initially, and the task is to sort them based on human preferences; we simulate a human with strong preferences for healthy food like eggs and fruits, and dislike for less healthy food like donuts and Skittles candies. To introduce ambiguities, the context for the LLM reveals only a subset of the preferences. Here we consider a multi-step setting with possibly multiple acceptable options per step: the LLM plans the next step conditioned on the previous action taken. [Table: Results for Hardware Multi-Step Tabletop Rearrangement. Plan success rate is fixed between KnowNo and Simple Set for comparing the other metrics.
Method | 1-ϵ | Plan Succ | Task Succ | Set Size | Help-Step | Help-Trial
KnowNo | 0.75 | 0.76 | 0.74 | 1.72 | 0.58 | 0.92
Simple Set | 0.58 | 0.76 | 0.72 | 2.04 | 0.72 | 1.00
No Help | - | 0.41 | 0.38 | - | 0 | 0] KnowNo reduces step-wise and trial-wise intervention rates in the multi-step setting. Since <ref> has shown that Ensemble Set can be expensive (even more so in the multi-step setting) and Prompt Set and Binary can fail to achieve the user-specified success level, we focus on comparing KnowNo with Simple Set for the remainder of the evaluation. Here we set the desired error level ϵ=0.25. Since Simple Set does not provide calibrated coverage, we first find ϵ=0.42 for Simple Set to achieve the same planning error rate as KnowNo in simulation. Then we run 50 trials for both methods in hardware. <ref> shows that KnowNo reduces the human help rate by 14% step-wise and 8% trial-wise, while also reducing the average prediction set size.
Compared to Simple Set, which uses a much higher ϵ, KnowNo achieves the specified trial-level task success rate precisely by leveraging the Multi-Step Uncertainty Alignment from Sec. <ref>. We also find that if we set ϵ=0.25 for Simple Set, the planner is grossly over-conservative and requires a step-wise help rate of 87%. Bimanual manipulation. We additionally present results for a bimanual object rearrangement setup where ambiguities arise from the choice of the arm due to the limited reachability of each arm (<ref> right); results are deferred to <ref>. §.§ Hardware: Mobile Manipulator in a Kitchen [Table: Results for Hardware Mobile Manipulation. Plan success rate is fixed between KnowNo and Simple Set to compare the other metrics.
Method | Model | 1-ϵ | Plan Succ | Task Succ | Set Size | Help
KnowNo | PaLM-2L | 0.85 | 0.87 | 0.76 | 2.22 | 0.67
Simple Set | PaLM-2L | 0.76 | 0.87 | 0.75 | 2.38 | 0.81
No Help | PaLM-2L | - | 0.62 | 0.51 | - | 0
KnowNo | PaLM-2L-IF | 0.85 | 0.86 | - | 1.86 | 0.67
KnowNo | GPT-3.5 | 0.85 | 0.87 | - | 2.50 | 0.86] In this example, each scenario involves a mobile manipulator in front of a countertop and next to a set of recycling/compost/landfill bins in an office kitchen (<ref>). The tasks include picking up some object from the counter, and possibly putting it in the drawer, or disposing of it in one of the bins. For the distribution of possible scenarios, we introduce new types of ambiguities based on Winograd Schemas <cit.> (“There is an apple and bottled water on the counter...it is rotten. Can you dispose of it?”), and ones that potentially involve unsafe actions (“place the bowl in the microwave”; there is a plastic bowl and a metal bowl, but only the plastic one is safe for the microwave). In <ref>, we compare KnowNo to Simple Set again by first setting ϵ=0.15 and then finding the ϵ=0.24 for Simple Set that achieves the same plan success rate in simulation. The hardware experiment results again show that KnowNo reduces the human help rate by 14% and also reduces the average prediction set size. The target success guarantee from KnowNo is robust to the choice of LLM. We also run KnowNo with two other LLMs (without hardware evaluation). First, we use an instruction-finetuned version of PaLM-2L (PaLM-2L-IF); there is no significant performance difference from PaLM-2L; however, it generates smaller prediction sets in general by reducing the number of larger prediction sets (size 3 and 4) significantly. Second, we run GPT-3.5 (text-davinci-003) from OpenAI. However, we find that it exhibits significant MCQA bias towards options D and E and against A and B, affecting the overall performance. Nonetheless, KnowNo still achieves the 1-ϵ target success rate, as the coverage guarantee from CP makes no assumption about the LLM confidences (e.g., calibrated or accurate): KnowNo flexibly compensates for the degraded LLM performance by triggering more human intervention. § RELATED WORK LLMs for robot planning and interaction. Large language models have shown a wide range of capabilities: reasoning <cit.>, logic <cit.>, math <cit.>, physics <cit.>, high-level planning <cit.> with language feedback <cit.>, and writing robot code <cit.>. The generated outputs can be guided to a certain extent with sufficient prompting, but LLMs are still prone to confidently hallucinating outputs (referring to objects not observed in the scene <cit.>, or calling motion primitive APIs that may not exist <cit.>). We hypothesize that these challenges can be alleviated while obtaining statistical guarantees by modeling the uncertainty of LLMs <cit.> and generating prediction sets via CP.
Uncertainty quantification for LLMs. Motivated by LLMs' overconfidence and hallucinations, there has been a growing body of work in quantifying and better calibrating uncertainty <cit.>. In contrast to typical calibration methods that associate uncertainty with point-valued outputs, CP-based methods for language modeling provide coverage guarantees for set-valued predictors <cit.>. However, there have been few applications of CP in quantifying uncertainty of LLMs with free-form outputs <cit.>: <cit.> apply CP to next-token prediction in MCQA tasks exclusively, while builds on MCQA but is applicable to general natural language generation tasks. Conformal prediction in robotics. To the best of our knowledge, this work is the first to employ CP for language-based planning. Prior work has utilized CP for fault detection, trajectory prediction, and planning in dynamic environments <cit.>. At each point in the planning horizon, the probabilistic safety guarantee either holds on average <cit.>, or is too conservative due to union bounding <cit.>, or requires additional calibration data to reduce conservatism <cit.>. In contrast, we provide a novel multi-step extension to CP to guarantee correctness for the entire planning horizon by performing sequence-level calibration in settings where the robot's actions influence the distribution of future inputs. § DISCUSSION Summary: We propose , a framework that applies conformal prediction (CP) to address the problem of uncertainty alignment for language-instructed robots, which we formalize as providing statistical guarantees of task completion while minimizing human help. Experiments across a variety of simulated and hardware setups demonstrate that achieves user-specified task completion levels consistently while reducing human help by 10-24% compared to baseline approaches that lack formal assurances. Limitations and future work: The primary limitation of our work is that the task completion guarantee assumes environments (objects) are fully grounded in the text input to the LLM, and the actions proposed by the LLM planner can be executed successfully. In the future, we are looking to incorporate uncertainty of the perception module (e.g., vision-language model) and the low-level action policy (e.g., language-conditioned affordance prediction) into the CP calibration. Another exciting direction is to combine our methods with active preference learning <cit.> to generate open-ended queries that maximally reduce uncertainty about human preferences. On the theoretical front, modifying CP to optimize different metrics for human help (e.g., minimizing human intervention rate by maximizing number of singleton sets) would be of practical interest. Overall, we hope that the work presented here spurs further efforts towards uncertainty alignment for safe and reliable language-instructed robots. This work was partially supported by the NSF CAREER Award [#2044149] and the Office of Naval Research [N00014-23-1-2148]. We thank Chad Boodoo for helping set up the UR5 hardware experiments, and Jensen Gao, Nathaniel Simon, and David Snyder for their helpful feedback on the paper. §.§ Evaluating Semantic Uncertainty of the LLM with MCQA Here we provide additional rationale of using the MCQA setup for evaluating LLM uncertainty. The uncertainty of the language model can be thought of as the predictive entropy of the output distribution. 
Consider the input tokens x = (σ_i,…σ_k) and the output distribution Y where y = (σ_i,…σ_k) ∈ Y: U(x) := H(Y | x) = - ∫ p(y|x) ln p(y|x) dy. Evaluating this is very challenging for LLMs: the output distribution over Y lies on the space of dimension 𝒪(|𝒯|^k-i+1) where 𝒯 is the set of possible tokens, and it has to be evaluated with a large number of samples and Monte-Carlo integration. Among the samples, there is also the bias against longer sequences <cit.>. We are partially inspired by <cit.> that instead consider the semantic uncertainty of the model: among samples in Y, there are groups of samples that have the same semantic meanings, such as “put the sponge in the top drawer by first opening it” and “open the top drawer and put the sponge in it”. They may differ in p(y|x) but we are not concerned with such uncertainty since it does not reflect the uncertainty of the LLM about the scenario. <cit.> addresses this by first sampling a large number of samples from Y, and then grouping them based on some semantics classifier before evaluating the semantic uncertainty, which is the predictive entropy over the groups instead of over Y. Could we improve the efficiency of finding semantically distinct groups in Y? The MCQA setup that we propose addresses this by prompting the LLM to generate likely, and also semantically different, options given the task using few-shot exemplars. We can think of this as splitting the output space Yinto multiple spaces representing semantically different outputs. MCQA first samples the representative outputs from the spaces with higher weights in Y (top four), and then includes the additional option “an option not listed here” to cover rest of the spaces. Unlike <cit.> who calculate the entropy among the groups to decide whether to trust the answer to a question, instead combines the normalized probabilities with conformal prediction to provide set-based predictions with coverage guarantees. §.§ Proofs for CP in Multi-Step Setting Proof of Claim <ref>: Suppose y∈C(x_test). We have, y∈C(x_test) min_t f̂(x_test^t)_y^t≥ 1-q̂ f̂(x_test^t)_y^t≥ 1-q̂, ∀ t ∈ [T] y^t ∈ C^t(x_test^t), ∀ t ∈ [T] y∈ C(x_test). Proof of Proposition <ref>: Since we can bound the probability that y_test∉C(x_test), we can also bound the probability that y_test∉ C(x_test). From the conformalization procedure, we have the following dataset-conditional guarantee: with probability 1-δ over the sampling of the calibration set Z, we have ℙ(y_test∈C(x_test) | Z) ≥Beta^-1_N+1-v , v (δ), v= ⌊(N+1)ϵ̂⌋ []Claim <ref> ℙ(y_test∈C(x_test) | Z) ≥Beta^-1_N+1-v , v (δ), where ϵ̂ is chosen such that ϵ = 1- Beta^-1_N+1-v , v (δ). Hence, the following marginal guarantee also holds: ℙ(y_test∈C(x_test)) ≥ 1-ϵ̂ []Claim <ref> ℙ(y_test∈C(x_test)) ≥ 1-ϵ̂. This result provides a bound on the task completion rate if x_test is drawn using the distribution 𝒟. However, recall that the sequence x of augmented contexts as defined in <ref> arises from having performed the correct actions in previous steps; incorrect actions may result in a distribution shift. In order to obtain a bound on the task completion rate, we consider three cases at any given timestep: (1) the prediction set is a singleton and contains the correct label, (2) the prediction set is not a singleton but does contain the correct label, and (3) the prediction set does not contain the true label. The robot performs the correct action in the first two cases (without help in (1) and with help in (2)), while CP bounds the probability of case (3). 
Thus, the CP bound translates to a bound on the task success rate. As seen in <ref>, we have from <cit.>, that we achieve the smallest average set size among all possible sequence-level prediction schemes, 𝒞, if f̂ models the prediction uncertainty accurately, min_C∈𝒞 (x,·) ∼𝒟𝔼[|C(x)|], subject to ℙ(y∈C(x)) ≥ 1-ϵ̂. §.§ CP in Settings with Multiple Acceptable Options Per Step Consider a setting where we use CP with coverage level 1-ϵ to construct the prediction set when there are multiple true labels and seek help whenever the set is not a singleton at each timestep. With probability 1-δ over the sampling of the calibration set, the task completion rate over new test scenarios drawn from 𝒟 is at least 1-ϵ. Proof: We have a dataset of Z = {(x̃_i, Y_i),...}_i=1^N sampled i.i.d. from a data distribution 𝒟 for calibration (we use the same notation 𝒟 as in the single-label setting here), where Y_i := {y_i,j}_j=1^J_i is the set of true labels for a single trial. For each label, we use the same heuristic notion of confidence, f̂(x)_y ∈ [0,1]. We define an operator β: 𝒳×𝒴^J →𝒴 where 𝒳 is the space of contexts and 𝒴 is the space of labels: β(x, Y) := max_y ∈ Yf̂(x)_y, which takes the true label with the highest confidence value from the true label set. If we consider applying β to every point in the support of 𝒟, a new distribution 𝒟' is induced. We also consider the induced dataset of samples S' = { (x_i, y^max_i) }_i=1^N, where y^max_i := β(x_i, Y_i). Then we can perform the usual conformalization and obtain the guarantee that with C(x_test) := {y | f̂(x_test)_y ≥ 1-q̂}, the following marginal guarantee holds, ℙ(y^max_test∉ C(x_test)) ≤ϵ̂, ⇒ℙ(_y ∈ Y_testf̂(x_test)_y ∉ C(x_test)) ≤ϵ̂, ⇒ℙ(β(x_test, Y_test) ∉ C(x_test)) ≤ϵ̂, and the following dataset-conditional guarantee holds when we choose ϵ̂ such that ϵ = 1-Beta^-1_N+1-v , v (δ) where v= ⌊(N+1)ϵ̂⌋, ℙ(β(x_test, Y_test) ∈ C(x_test) | Z) ≥ 1-ϵ . Hence, C(x_test) contains the true label with the highest confidence with probability at least 1-ϵ. At test time, we sample (x_test, Y_test) from 𝒟 that is i.i.d. with samples in S — for the guarantee to hold for β(x_test, Y_test), we need to show β(x_test, Y_test) is a sample from 𝒟' that is i.i.d. with samples in S'. This is true since functions of independent random variables are independent, and functions of identically distributed random variables are identically distributed if the functions are measurable. §.§ CP in Multi-Step Setting with Multiple Acceptable Options Per Step Consider a multi-step setting where we use CP with coverage level 1-ϵ to causally construct the prediction set when there may be multiple true labels at any step and seek help whenever the set is not a singleton at each timestep. With probability 1-δ over the sampling of the calibration set, the task completion rate over new test scenarios drawn from 𝒟 is at least 1-ϵ. Proof: For the multi-step setting, each trial now involves a sequence of contexts x and a set of sequences of true labels: Y = {y_1, y_2, ..., y_M}, where y_m := (y^0_m, y^2_m, ..., y^T-1_m). For example, Y can contain the sequence of “blue block, yellow block, green block”, “green block, blue block, yellow block”, ..., for the task of picking up three blocks. We collect a dataset of Z = {(x_i, Y_i)} of i.i.d. samples from the data distribution 𝒟. Unlike the single-step setting, here we cannot apply β to the set of true labels in each step since we are reasoning over a set of sequences, and not a sequence of sets of true labels. 
Notably, the true label set at time step t depends upon the sequence of previously chosen true labels. Let Y^t[x^0, y̅^t-1] denote the set of true labels at timestep t, conditioned upon the initial context x^0 and a partial sequence of past true labels y̅^t-1 := (y^0, …, y^t-1) extracted from Y. We then autoregressively define the following sequence:
β_0(x, Y) := argmax_y ∈ Y^0 f̂(x^0)_y, Y^0 := {y_1^0, …, y_M^0},
β_t(x, Y) := β_t-1(x, Y) ∪ argmax_y ∈ Y^t[x^0, β_t-1(x, Y)] f̂(x^t)_y, t = 1, …, T-1,
where ∪ denotes appending the newly selected label to the partial sequence. For convenience, we denote by β_t(x, Y)[τ] the τ-th element of β_t(x, Y), τ ≤ t. An intuitive interpretation is the following: we can view Y as forming a tree of valid executions (all possible actions that can be taken by choosing each of the true labels). At each time step t, β_t(x, Y) prunes the tree to a single branch by taking the true label with the highest heuristic value f̂(x^t). This reduces the tree of all possible sequences of true labels to the single branch of true labels with the highest confidence. Given this single branch of true labels, we can now perform CP as shown in the multi-step setting in <ref>. We apply β_T-1 to every point in the support of 𝒟, and a new distribution 𝒟' is induced. We consider S' = {(x_i, y^max_i)}, where y^max_i := β_T-1(x_i, Y_i). Let Y_test be the set of sequences of true labels for x_test. Performing the conformalization with β_T-1 as the labels yields the marginal bound
ℙ(β_T-1(x_test, Y_test) ∉ C(x_test)) ≤ ϵ̂,
and the dataset-conditional bound, when we choose ϵ̂ such that ϵ = 1-Beta^-1_N+1-v, v (δ) where v = ⌊(N+1)ϵ̂⌋,
ℙ(β_T-1(x_test, Y_test) ∉ C(x_test) | Z) ≤ ϵ,
which states that at test time, given a context sequence x_test, we produce a prediction set of sequences; if we consider the sequence consisting of the true label with the highest score at each step, the probability that this sequence is covered by C(x_test) is lower bounded by 1-ϵ. However, we need to be careful to follow β_t at each step at test time. Consider the three cases:
* (1) At a given time step, the prediction set C^t(x^t_test) does not contain the true label β_t(x, Y)[t].
* (2a) The prediction set is a singleton and does contain the true label.
* (2b) The prediction set is not a singleton (but does contain the correct label).
We already bound the probability of (1) happening with the CP bound; (2a) is fine since the LLM will take the correct action; (2b) is more challenging: in this case the robot asks the human for help, and we need to make sure the human “follows” the true label, by choosing the true label in the prediction set with the highest confidence according to f̂. In practice, we present the labels ranked by f̂ and ask the human to choose the true label with the highest rank.
Now let us derive the bounds in <ref> and <ref>. Again we need to consider the causal construction issue. As seen in Section <ref>, we construct the prediction set C̅(x_test) non-causally using the score function s_i = 1 - min_t f̂(x_i^t)_(y^max_i)^t (i.e., taking the minimum over steps). For a test sequence x_test, we apply β_T-1 to the set of true label sequences Y_test to get y^max_test = β_T-1(x_test, Y_test). Now suppose y^max_test ∈ C̅(x_test); then we can show y^max_test ∈ C(x_test) with the same proof as in the single-label setting, which gives us the bound. Lastly, we need to show that the sampled test sequence from 𝒟 leads to a sample from 𝒟' that is i.i.d. with S'. This is true with the same argument that functions of independent random variables are independent.
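The pruning operator β_t can be illustrated with a short sketch that greedily reduces a set of acceptable label sequences to the single highest-confidence branch and computes the corresponding sequence-level nonconformity score. The data structures (a list of per-step confidence dictionaries, sequences given as tuples of labels) and the function names are assumptions made for this example.

def beta_branch(conf_per_step, label_sequences):
    # conf_per_step[t][y] plays the role of f_hat(x^t)_y; label_sequences is the
    # set Y of acceptable sequences (y^0, ..., y^{T-1}) for one trial
    candidates = [tuple(seq) for seq in label_sequences]
    branch = []
    for t, conf_t in enumerate(conf_per_step):
        options = {seq[t] for seq in candidates}       # Y^t conditioned on the branch so far
        y_t = max(options, key=lambda y: conf_t[y])    # argmax of f_hat at step t
        branch.append(y_t)
        candidates = [seq for seq in candidates if seq[t] == y_t]
    return branch

def sequence_score(conf_per_step, branch):
    # nonconformity score used for calibration: 1 - min_t f_hat(x^t)_{y^t}
    return 1.0 - min(conf_t[y] for conf_t, y in zip(conf_per_step, branch))

During calibration, sequence_score evaluated on the pruned branch is what enters the conformal quantile computation described above.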
§.§ Additional Experiment Setting: Hardware Bimanual Setup In this example, a bimanual setup with two Kuka IIWA 7 arms move objects on the table, with one bin at each side (<ref> right). The reachable workspace of each arm is limited so that one arm cannot reach the other end of the table or the other bin. Thus, there can be ambiguities in the choice of the arm depending on the task; <ref> shows the human asking the robot to pass over the mango, but not specifying which side the human is standing at. is able to capture such ambiguities and triggers clarification. We design a scenario distribution with all instructions being ambiguous (thus requiring high human intervention rate): with ϵ=0.15, the robot achieves 84% plan success with 92% help. With 10 trials, the robot succeeds 9 times while triggering help for 9 times. Details of the scenario distribution are shown in <ref>. §.§ LLM Prompt Setup Next we detail the LLM prompt setup for MCQA applied in . We will use Mobile Manipulation from <ref> as the example. Multiple choice generation. Given a scenario, we first prompt the LLM to generate four semantically different options for possible next step. We apply few-shot prompting as shown in <ref> below. In this scenario, there is a Coke, a bottled tea, and a Pepsi on the counter, and the task is to put the Coke in the top drawer but the choice of drawer is under-specified (“Put the Coke in the drawer please.”). After the LLM generates four options, we append an additional option `an option not listed here' to the four generated ones and then randomize the order to further prevent bias. We then use a zero-shot prompt in <ref> for querying next-token probabilities (`A', `B', `C', D', `E'). §.§ Additional Experiment Details Environments. In addition to <ref> and <ref>, here <ref> shows the office kitchen environment with the set of drawers and bins used in Mobile Manipulation (left), and the bimanual setup with the set of objects used on the mat used in Bimanual (right). There is another set of drawers used in the mobile manipulation experiments underneath a much bigger countertop not shown here. Scenario distributions. Next, we provide more details on the task settings, in particular, the possible ambiguities in the scenario distribution. With the distributions set up, it is straightforward to sample 400 i.i.d. scenarios from them for conformal prediction calibration. * Attribute ambiguities in Simulation: besides unambiguous terms like “green”, “yellow”, “blue”, “block” and “bowl” (“put green block in yellow bowl”), refer to the block as one of “cube”, “cuboid”, “box”, “square object”, to the bowl as one of “container”, “round object”, “receptacle”, or to either block or bowl as one of “object”, “item”, “thing” (“move the blue object in yellow bowl”); refer to “blue” as one of “cyan”, “navy”, to “green” as one of “greenish”, “grass-colored”, and to “yellow” as “orange” or “gold”. This setting is the least ambiguous one among the three ambiguity types. * Numeric ambiguities in Simulation: besides unambiguous terms like “a”, “one”, “a single of”, “two”, “a pair of”, “three”, “all” (“put a block in yellow bowl”), refer to either two or three numerically with one of “a few”, “a couple of”, “some”, “a handful of” (“put some blocks in the green bowl”). 
* Spatial ambiguities in Simulation: besides unambiguous terms like “in front of“, “behind”, “to the left”, and “to the right” (“put the green block to the left of green bowl”), refer to any of the four possible directions with “near”, “close to”, “beside”, “next to”, refer to either left to right with “lateral to”, and refer to either front or behind with “along the line of sight”. This setting is the most ambiguous one among the three ambiguity types. * Hardware Tabletop Rearrangement: we split the 28 toy items (<ref>) to simulate human preferences in healthy foods over less healthy ones: human likes corn, avocado, celery, carrot, tomato, lettuce, apple, orange, pear, lemon, peanut butter, sunny-side-up egg, egg, and pea; human dislikes pretzel, cracker, waffle, mustard, ketchup, pizza, meat patty, cheese, chicken drumstick, peach, mango M&M, Skittles, and donut. In each scenario, three items are involved with either two or one of them from the preferred list. * Hardware Mobile Manipulation: please refer to <https://robot-help.github.io/prompts/mobile_tasks.txt> for the full list of scenarios sampled from the distribution. The distribution is a mixture of different task types: (1) single-label, `Bring me a Coke' (unambiguous); (2) creative-single-label, `I want a healthy fruit to munch on.' which means the apple (unambiguous); (3) multi-label, `Bring me a cola.' and either Coke or Pepsi is acceptable; (4) creative-multi-label, `Bring me something with a kick.' and either RedBull or jalapeno chips are acceptable; (5) spatially-ambiguous, `Put the Coke in the drawer' or `Put the Coke near the fruit' which under-specifies the drawer or fruit; (6) unsafe, `Can you dispose of the bottle drink? It should have expired.' or `Place the bowl on the cooktop, please.'; (7) Winograd, 'There is a sponge and a bag of rice chips...I don't want to use it for cleaning any more. Can you please dispose of it?" We use the GPT-4 model for generating the creative tasks. * Hardware Bimanual: please refer to <https://robot-help.github.io/prompts/bimanual_tasks.txt> for the full list of scenarios sampled from the distribution. The distribution is a mixture of different task types: (1) `Pick up the {object} and pass it to me. I am next to the bin.' (2) `Pick up the {object} with the left arm.' (3) `Put the {object} in the bin closer to it.' (4) `Pick up the {object} with the arm closer to it.' (5) `Pick up the {object}.' (6) `Pick up the {object} at the handle.' (7) `Move the {object} to the front of the table.' (8) `Move the {object} on the sticky rubber mat to the front of the table.' Next we provide more details on the baselines that require additional prompting strategies. Baselines - Ensemble Set. Our ensemble-based method is a weaker method than the traditional model-based ensemble where multiple copies of neural network are trained and inferred with; however, this is infeasible with the LLM we use. In our work, we randomize over the few-shot examples in the prompt as the ensemble. We select a pool of 20 possible MCQA examples (see examples in Fig. <ref>), and then randomly sample a certain amount from it for each inference. Note that in this case, Ensemble Set actually has advantage over and Simple Set that, for the same data, it has seen many more examples than the fixed ones in the prompt used in and Simple Set. We only apply ensemble for next-token prediction; the same set of multiple choices generated is used. Baselines - Prompt Set. First, multiple choices are generated in the same way as . 
Then LLM is prompted to generate the prediction set, with few-shot examples in the prompt showing the possible labels (<ref>). For example, “We: Which options are possibly correct? You: A, C, D.”. Baselines - Binary. Instead of generating multiple choices, the LLM is first prompted to give the most likely action (“We: Put the Coke can in the drawer. You: I will” shown in <ref>). Then we attach the generated response to the same prompt, and ask LLM to label “Certain/Uncertain:” given few-shot examples (<ref>). §.§ Additional Implementation Details While the focus of is mainly on providing uncertainty alignment for the LLM-based planner, below we provide details of the perception and action modules applied in all examples. Perception. For all examples except for the Mobile Manipulation, we use either MDETR <cit.> (Hardware Tabletop Rearrangement) or Owl-ViT <cit.> (Simulation and Bimanual) open-vocabulary object detector for recognizing the objects in the environment and obtaining the object locations for low-level action. In Simulation and Bimanual, the variations of the object types are limited, and with general prompting, the objects are detected without issue. In Hardware Tabletop Rearrangement, since we are use a wide variety of toy items (<ref> right), the detector has issues often differentiating objects like peanut butter and meat patty that are both darker colors. We modify the scenario distributions to avoid using such items together in one scenario. In addition, we apply the Segment Anything model <cit.> to extract the object segmentation masks (shown overlaid in <ref> left), and then use the algorithm <cit.> to find the most distant internal point of the mask as the suction point (shown as red dots). Low-level action. In Simulation and Hardware Tabletop Rearrangement, simple pick-and-place actions are executed based on object locations and solving the inverse kinematics. In Bimanual, the reachability of the Kuka arm is limited, and the pick-and-place action trajectories are solved using Sequential Quadtratic Programming (SQP) instead <cit.>. In Mobile Manipulation, for most of the tasks that involve simple pick-and-place and opening the drawers, the action is from an end-to-end policy from the RT-1 policy (please refer to <cit.> for details), which takes in the raw observation. For some of the hard tasks such as putting the plastic bowl in the microwave and putting the metal bowl on the bowl, object locations are assumed known and we use scripted action policies. Human feedback. In , once human help is triggered, human is presented with the prediction set to choose the correct action from (if there is one). For example, the prediction set could include `A) put peanut butter in blue plate' and `C) put peanut butter in green plate' in Hardware Tabletop Rearrangement. In practice, we can convert the prediction set to a question in more natural language, “Do you like peanut butter or not?” using simple heuristics. In Mobile Manipulation and Bimanual, we prompt the LLM to generate the question based on the prediction set. §.§ Additional Discussions Sentence-level score leads to worse performance. In <ref> we hypothesize that the distribution of probabilities of LLM outputs p(y) is highly sensitive to the output length. Here we explore the effect of using sentence output and the perplexity score for CP in Simulation. 
We still apply multiple choice generation first to obtain the possible options from LLM, and then query LLM scoring, for example, the probability of “put the blue block in the green bowl” with the prompt ending with “I will” (<ref>). We test two different method: CP with RawProb, which uses the raw product of conditional probabilities p(y)=∏_i=1^k p(σ_i |σ_1, …, σ_i-1), and CP with Perplexity, which applies length normalization and commonly used in language modeling, perplexity(y)=exp{-1/k∑_i=1^klog p(σ_i |σ_1, …, σ_i-1)}. <ref> shows that for all three settings, using either sentence-level score leads to worse performance, and performance degradation correlates with variance of the multiple choice lengths. We also notice that there is no significant difference in performance between CP with RawProb and CP with Perplexity; this could indicate that beyond eliminating length bias, applying MCQA setup in affords additional benefit of soliciting more accurate confidence from the LLM. Potentially stronger baselines with model fine-tuning. In <ref> we introduce the two prompt-based baselines Prompt Set and Binary, and demonstrate them being (1) inflexible (not allowing controlling the target success rate) and (2) do not properly model the uncertainty. We note that these two baselines can be potentially strengthened by fine-tuning the LLM to better predict the binary uncertainty or the uncertainty set, if the true labels can be properly defined. In fact, some recent work <cit.> have explored model fine-tuning and exhibiting the effectiveness of Binary for uncertainty calibration. We also explored fine-tuning the GPT3 model (davinci) from OpenAI, which is the most powerful one from OpenAI available for fine-tuning. However, we find the model performing at very low accuracy with MCQA, and fine-tuning the model always results in overfitting to the dataset, even with thousands of data and varying hyperparameters (including ones from <cit.> and default ones from the API). We suspect that our scenarios exhibit high complexity and variance, and it is non-trivial to fine-tune the model well with our dataset. Nonetheless, we hope to have future work looking into better training the model for proper uncertainty, and then applying CP on top of it to achieve set-based calibration. Low-level control success rate. translates the coverage guarantee from CP to task completion guarantee leveraging human help. However, this relies on the low-level control working reliably. In Simulation Tabletop Rearrangement, we find the pick-and-place primitives always executed as the object diversity is limited to square blocks and normal bowls (only differing in color). In Hardware Tabletop Rearrangement, we find the pick-and-place primitive only failed once during the 50 trials of running and twice for Simple Set (<ref>). The high success rate is largely thanks to the precise object masks from Segment Anything <cit.>. Also, to allow reliable suctioning, we apply clear scotch tape on some of the objects (donut, waffle) to smoothen the surfaces. In Hardware Mobile Manipulation, we find the low-level action success rate to be around 86%, which causes the non-trivial discrepancies between plan success and task success rates in <ref>. One exciting future direction is to quantify and better calibrate the uncertainty of the low-level action module, and take such uncertainty into account of the end-to-end task completion guarantee.
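For reference, the two sentence-level scores compared in this discussion can be computed directly from the per-token log-probabilities of a sampled option. The snippet below is a minimal sketch; it assumes those log-probabilities have already been extracted from the LLM, and the function names are ours.

import math

def raw_prob(token_logprobs):
    # p(y) = prod_i p(sigma_i | sigma_1, ..., sigma_{i-1})
    return math.exp(sum(token_logprobs))

def perplexity(token_logprobs):
    # exp(-(1/k) * sum_i log p(sigma_i | sigma_1, ..., sigma_{i-1}))
    k = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / k)

Because raw_prob decays with sentence length while perplexity is length-normalized, the two scores can rank candidate options differently when option lengths vary.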
http://arxiv.org/abs/2307.01770v1
20230704152041
Fast Optimal Transport through Sliced Wasserstein Generalized Geodesics
[ "Guillaume Mahey", "Laetitia Chapel", "Gilles Gasso", "Clément Bonet", "Nicolas Courty" ]
stat.ML
[ "stat.ML", "cs.LG", "stat.AP", "62, 65", "G.3" ]
Fast Optimal Transport through Sliced Wasserstein Generalized Geodesics
Guillaume Mahey INSA Rouen-Université Bretagne Sud LITIS - IRISA
Laetitia Chapel Université Bretagne Sud IRISA
Gilles Gasso INSA Rouen LITIS
Clément Bonet Université Bretagne Sud LMBA
Nicolas Courty Université Bretagne Sud IRISA
August 1, 2023
================================================================================

Wasserstein distance (WD) and the associated optimal transport plan have been proven useful in many applications where probability measures are at stake. In this paper, we propose a new proxy of the squared WD, coined min-SWGG, which is based on the transport map induced by an optimal one-dimensional projection of the two input distributions. We draw connections between min-SWGG and Wasserstein generalized geodesics in which the pivot measure is supported on a line. We notably provide a new closed form for the exact Wasserstein distance in the particular case where one of the distributions is supported on a line, allowing us to derive a fast computational scheme that is amenable to gradient descent optimization. We show that min-SWGG is an upper bound of WD and that it has a complexity similar to that of Sliced-Wasserstein, with the additional feature of providing an associated transport plan. We also investigate some theoretical properties such as metricity, weak convergence, and computational and topological properties. Empirical evidence supports the benefits of min-SWGG in various contexts, from gradient flows, shape matching and image colorization, among others.

§ INTRODUCTION

Gaspard Monge, in his seminal work on Optimal Transport (OT) <cit.>, studied the following problem: how to move, with minimum cost, the probability mass of a source measure to a target one, for a given transfer cost function? At the heart of OT is the optimal map that describes the optimal displacement; the Monge problem can be reformulated as an assignment problem. It has been relaxed by <cit.> as finding a plan that describes the amount of mass moving from the source to the target. Beyond this optimal plan, an interest of OT is that it defines a distance between probability measures, the Wasserstein distance (WD). Recently, OT has been successfully employed in a wide range of machine learning applications in which the Wasserstein distance is estimated from the data, such as supervised learning <cit.>, natural language processing <cit.> or generative modelling <cit.>. Its ability to provide meaningful distances between empirical distributions is at the core of distance-based algorithms such as kernel-based methods <cit.> or k-nearest neighbors <cit.>. The optimal transport plan has also been used successfully in many applications where a matching between empirical samples is sought, such as color transfer <cit.>, domain adaptation <cit.> and positive-unlabeled learning <cit.>.
Solving the OT problem is computationally intensive; the most common algorithmic tools to solve the discrete OT problem are borrowed from combinatorial optimization and linear programming, leading to a cubic complexity with the number of samples that prevents its use in large scale application <cit.>. To reduce the computation burden, regularizing the OT problem with e.g. an entropic term allows defining solvers with a quadratic complexity <cit.>. Other methods based on the existence of a closed form of OT were also devised to efficiently compute a proxy of WD as sketched hereafter. Projections-based OT. Sliced-Wasserstein distance (SWD) <cit.> leverages 1D-projections of distributions to give a lower approximation of the Wasserstein distance, relying on the closed form of OT for 1D probability distributions, leading to a linearithmic time complexity. While SWD averages WDs computed over several 1D projections, max-SWD <cit.> keeps only the most informative projection. This framework provides efficient algorithms that can handle millions of points and have similar topological properties to as WD <cit.>. Otherworks restrain SWD and max-SWD to projections onto low dimensional subspaces <cit.> to provide more robust estimation of those OT metrics. Though effective as a surrogate for WD, those methods do not provide a transport plan in the original space ^d. To overcome this limitation, <cit.> aims at computing transport plans in a subspace which are extrapolated in the original space. Pivot measure-based OT. Other research works rely on a pivot, yet intermediate measure. They decompose the OT metric into Wassesrtein distances between each input measure and the considered pivot measure. They exhibit better properties such as statistical sample complexity or computation gain <cit.>. Even though the OT problems are split, they are still expensive when dealing with large sample size distributions, notably when only two distributions are involved. Contributions. =-1 We introduce a new proxy of the squared WD that exploits the principles of aforesaid approximations of OT metric. The original idea is to rely on projections and one-dimensional assignment of the projected distributions to compute the new proxy. The approach is well-grounded as it hinges on the notion of Wasserstein generalized geodesics <cit.> with pivot measure supported on a line. The main features of the method are as follows: i) its computational complexity is on par with SW, ii) it provides an optimal transport plan through the 1D assignment problem, iii) it acts as an upper bound of WD, and iv) is amenable to optimization to find the optimal pivot measure. As an additional contribution, we establish a closed form of WD when an input measure is supported on a line. Outline. Section <ref> presents some background of OT. Section <ref> formulates our new WD proxy starting and provides some of its topological properties and a numerical computation scheme. Section <ref> builds upon the notion of Wasserstein generalized geodesics to reformulate our OT metric approximation as the Sliced Wasserstein Generalized Geodesics (SWGG) along its optimal variant coined . The reformulation allows deriving additional topological properties and an optimization scheme. Finally, Section <ref> provides experimental evaluations. Notations. Let ⟨·,·⟩ be the Euclidean inner product on ^d and let ={u∈^d s.t. u_2=1}, the unit sphere. 
We denote the set of probability measures on ^d endowed with the σ-algebra of Borel set and ⊂ those with finite second-order moment i.e. = {μ∈𝒫(^d) s.t. ∫_^dx^2_2dμ(x) < ∞}. Let be the subspace of defined by empirical measures with n-atoms and uniform masses. For any measurable function f:^d →^d, we denote f_# its push forward, namely for μ∈ and for any measurable set A ∈^d, f_#μ(A)=μ(f^-1(A)), with f^-1(A)={x∈^d s.t. f(x) ∈ A}. § BACKGROUND ON OPTIMAL TRANSPORT The squared WD <cit.> between μ_1, μ_2∈ is defined as: W^2_2(μ_1,μ_2)inf_π∈Π(μ_1,μ_2)∫_^d ×^dx-y_2^2dπ(x,y) with Π(μ_1,μ_2) = {π∈𝒫_2(^d ×^d) s.t. π(^d × A)=μ_2(A) and π(A ×^d)=μ_1(A), ∀ A measurable set of ^d }. The min of eq (<ref>) is called the optimal transport plan. Denoted π^*, it expresses how to move the probability mass from μ_1 to μ_2 with minimum cost. In some cases, π^* is of the form (Id,T)_#μ_1 for a measurable map T:^d→^d, i.e. there is no mass splitting during the transport. This map is called a Monge map and is denoted T^μ_1 →μ_2 (or T^1 → 2 for a shorthand). Thus, one has W^2_2(μ_1,μ_2)=inf_T s.t. T_#μ_1=μ_2∫_^dx-T(x)^2_2dμ_1(x). This occurs, for instance, when μ_1 has a density w.r.t. the Lebesgue measure <cit.> or when μ_1 and μ_2 are in <cit.>. Endowed with the WD, the space is a geodesic space. Indeed, since there exists a Monge map T^ 1→ 2 between μ_1 and μ_2, one can define a geodesic curve μ^1→ 2:[0,1]→ <cit.> as: ∀ t∈ [0,1], (tT^1→ 2+(1-t)Id)_#μ_1 which represents the shortest path w.r.t. Wasserstein distance in between μ_1 and μ_2. The Wasserstein mean between μ_1 and μ_2 corresponds to t=0.5 and we simply denote it . This notion of geodesic allows the study of the curvature of the Wasserstein space <cit.>. It was shown that the Wasserstein space is of positive curvature <cit.>, i.e. it respects the following inequality: W^2_2(μ_1,μ_2) ≥ 2W^2_2(μ_1,ν)+2W^2_2(ν,μ_2)-4W^2_2(μ^1→ 2,ν) for all pivot measures ν∈. Solving and approximating Optimal Transport. The Wasserstein distance between empirical measures μ_1,μ_2 with n-atoms can be computed in 𝒪(n^3log n), preventing the use of OT for large scale applications <cit.>. Several algorithms have been proposed to lower this complexity, for example the Sinkhorn algorithm <cit.> that provides an approximation in near 𝒪(n^2) <cit.>. Notably, when μ_1=1/n∑_i=1^n δ_x_i and μ_2=1/n∑_i=1^n δ_y_i are 1D distributions, computing WD can be done by matching the sorted empirical samples, leading to an overall complexity of 𝒪(nlog n). More precisely, let σ and τ two permutation operators s.t. x_σ(1)≤ x_σ(2)≤...≤ x_σ(n) and y_τ(1)≤ y_τ(2)≤...≤ y_τ(n). Then, the 1D Wasserstein distance is given by: W^2_2(μ_1,μ_2)=1/n∑_i=1^n(x_σ(i)-y_τ(i))^2. Sliced WD. The Sliced-Wasserstein distance (SWD) <cit.> aims to scale-up the computation of OT by relying on the closed form for 1D distributions. It is defined as the expectation of 1D-WD computed along projection directions θ∈ over the unit sphere: _2^2(μ_1, μ_2) ∫_ W^2_2(μ_1,μ_2)dω(θ), where μ_1 and μ_2 are projections onto the direction θ∈ with P^θ : ^d→, x↦⟨x,θ⟩ and where ω is the uniform distribution . Since the integral in eq (<ref>) is intractable, one resorts, in practice, to Monte-Carlo estimation to approximate the SWD. Its computation only involves projections and permutations. For L directions, the computational complexity is 𝒪(dLn + Lnlog n) and the memory complexity is 𝒪(Ld+Ln). However, in high dimension, many projections are necessary to approximate accurately SWD and many projections lead to 1D-WD close to 0. 
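As a concrete illustration of the 1D closed form and its Monte-Carlo sliced estimate, here is a minimal numpy sketch; the function names are ours and X, Y are assumed to be (n, d) arrays of samples with uniform weights.

import numpy as np

def w2_1d(x, y):
    # squared 2-Wasserstein distance between two 1D samples of equal size:
    # sort both and match the order statistics
    return np.mean((np.sort(x) - np.sort(y)) ** 2)

def sliced_w2(X, Y, n_proj=50, rng=None):
    # Monte-Carlo estimate of SW_2^2 with directions drawn uniformly on the sphere
    rng = np.random.default_rng(rng)
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    return np.mean([w2_1d(X @ t, Y @ t) for t in thetas])

In high dimension, most randomly drawn directions yield near-zero 1D costs, hence the large number of projections required for an accurate estimate.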
This issue is well known in the SW community <cit.>, where different ways of performing effective sampling have been proposed <cit.> such as distributional or hierarchical slicing. In particular, this motivates the definition of max-Sliced-Wasserstein <cit.> which keeps only the most informative slice: _2^2(μ_1, μ_2) max_θ∈W^2_2(μ_1,μ_2). While being a non convex problem, it can be optimized efficiently using a gradient ascent scheme. SW-like distances are attractive since they are fast to compute and enjoy theoretical properties: they are proper metrics and metricize the weak convergence. However, they do not provide an OT plan. Projected WD. An other quantity of interest based on the 1D-WD is the projected Wasserstein distance (PWD) <cit.>. It leverages the permutations of the projected distributions in 1D in order to derive couplings between the original distributions. Let μ_1=1/n∑_i=1^n δ_x_i and μ_2=1/n∑_i=1^n δ_y_i in , we have: _2^2(μ_1,μ_2)∫_1/n∑_i=1^nx_σ_θ(i)- y_τ_θ(i)^2_2dω(θ), where σ_θ,τ_θ are the permutations obtained by sorting μ_1 and μ_2. As some permutations are not optimal, we straightforwardly have W^2_2 ≤_2^2. Note that, some permutations can appear really irrelevant in the original space, leading to an overestimation of W^2_2 (typically when the distributions are multi-modal or with support lying in a low dimension manifold, see Supp. <ref> for a discussion). In this paper, we restrict ourselves to empirical distributions with the same number of samples. They are defined as μ_1 =1/n∑_i=1^n δ_x_i and μ_2 =1/n∑_i=1^n δ_y_i in . Note that the results presented therein can be extended to any discrete measures by mainly using cumulative functions instead of permutations and transport plans instead of transport maps (see Supp. <ref>). § DEFINITION AND PROPERTIES OF MIN-SWGG The fact that overestimates W^2_2 motivates the introduction of our new loss function coined which keeps only the most informative permutations. Afterwards, we derive a property of distance and grant an estimation of via random search of the directions. Let μ_1,μ_2∈ and θ∈. Denote by σ_θ and τ_θ the permutations obtained by sorting the 1D projections μ_1 and μ_2. We define respectively and as: _2^2(μ_1,μ_2,θ) 1/n∑_i=1^n x_σ_θ(i)-y_τ_θ(i)^2_2, _2^2(μ_1,μ_2) min_θ∈_2^2(μ_1,μ_2,θ). One shall remark that the function corresponds to the building block of PWD in eq. (<ref>). One main feature of is that it comes with a transport map. Let θ^* ∈ (μ_1,μ_2,θ) be the optimal projection direction. The associated transport map is: T(x_i) = y_τ_θ^*^-1(σ_θ^*(i)), ∀ 1 ≤ i ≤ n. We now give some theoretical properties of the quantities and . Their proofs are deferred in Supp. <ref>. Let θ∈. (·,·,θ) defines a distance on . Moreover, is an upper bound of W^2_2, and W^2_2≤_2^2 ≤_2^2, with equality between W_2^2 and when d>2n. Similarly to max-SW, retains only one optimal direction θ^* ∈. However, the two distances strongly differ: i) is an upper bound and max-SW a lower bound of W^2_2, ii) the optimal θ^* differs (see Supp. <ref> for an illustration), and iii) max-SW does not provide a transport plan between μ_1 and μ_2. Solving eq. (<ref>) can be achieved using a random search, by sampling L directions θ∈ and keeping only the one leading to the lowest value of . This gives an overall computational complexity of 𝒪(Ldn+Lnlog n) and a memory complexity of 𝒪(dn). In low dimension, the random search estimation is effective: covering all possible permutations through can be done with a low number of directions. 
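The gradient-ascent scheme used for max-SW can be sketched with automatic differentiation; the PyTorch snippet below is illustrative only (X and Y are assumed to be float tensors of shape (n, d), and the optimizer settings are arbitrary), not a reference implementation.

import torch

def max_sw(X, Y, n_iter=200, lr=0.1):
    # gradient ascent on theta for max-SW_2^2; the 1D cost is computed by sorting
    # the projections, and gradients flow through the sorted values
    theta = torch.randn(X.shape[1], requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(n_iter):
        t = theta / theta.norm()
        w2 = torch.mean((torch.sort(X @ t).values - torch.sort(Y @ t).values) ** 2)
        loss = -w2                     # ascent on the 1D Wasserstein cost
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (theta / theta.norm()).detach(), w2.item()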
In high dimension, many more directions θ are needed to have a relevant approximation, typically 𝒪(L^d-1). This motivates the definition of gradient descent techniques for finding θ^*. § SWGG AS MINIMIZING ALONG THE WASSERSTEIN GENERALIZED GEODESICS Solving problem in eq. (<ref>) amounts to optimize over a set of admissible permutations. That problem is hard since is non convex w.r.t. θ and piecewisely constant, thus not differentiable over . Indeed, as long as the permutations remain the same for different directions θ, the value of is constant. Whenever the permutations change, the objective SWGG "jumps" as illustrated in Fig. <ref>. In this section, we tackle this problem by providing an alternative formulation of that allows smoothing the different kinks of , hence, making amenable to optimization. This formulation relies on Wasserstein generalized geodesics we introduce hereinafter. We show that this alternative formulation brings in computational advantages and allows establishing some additional topological properties and deriving an efficient optimization scheme. We also provide a new closed form for the Wasserstein distance W^2_2(μ_1,μ_2) when either μ_1 or μ_2 is supported on a line. §.§ SWGG based on Wasserstein Generalized Geodesics Wasserstein generalized geodesics (see Supp. <ref> for more details) were first introduced in <cit.> in order to ensure the convergence of Euler scheme for Wasserstein Gradient Flows. This concept has been used notably in <cit.> to speed up some computations and to derive some additional properties (see Supp. <ref> for more details on related works). Generalized geodesics lay down on a pivot measure ν∈ to transport the distribution μ_1 toward μ_2. Indeed, one can leverage the optimal transport maps T^ν→μ_1 and T^ν→μ_2 to construct a curve t↦ linking μ_1 to μ_2 as ((1-t)T^ν→μ_1 + tT^ν→μ_2)_#ν, ∀ t ∈[0,1]. The related generalized Wasserstein mean corresponds to t=0.5 and is denoted . Intuitively, the optimal transport maps between ν and μ_i, i=1,2 give rise to a sub-optimal transport map between μ_1 and μ_2: T_ν^1→ 2 T^ν→μ_2∘ T^μ_1 →ν with (T_ν^1→ 2)_#μ_1=μ_2. One can be interested in the cost obtained by the transportation of μ_1 to μ_2 via the transport map T_ν^1→ 2, known as the ν-based Wasserstein distance <cit.> and defined as W^2_ν(μ_1,μ_2)∫_^dx-T_ν^1→ 2(x)^2_2dμ_1(x) =2W^2_2(μ_1,ν)+2W^2_2(ν,μ_2)-4W^2_2(,ν). Notably, the second part of eq. (<ref>) straddles the square Wasserstein distance with eq. (<ref>). Remarkably, the computation of W^2_ν can be efficient if the pivot measure ν is appropriately chosen. As established in Lemma <ref>, it is the case when ν is supported on a line. Based on these facts, we propose hereafter an alternative formulation of . Let μ_1 and μ_2 ∈. We restrain the pivot measure ν to be the Wasserstein mean of the measures μ_1 and μ_2: min_μ∈W^2_2(μ_1,μ)+W^2_2(μ,μ_2), where θ∈ and Q^θ : ^d →^d, x↦θ⟨x,θ⟩ is the projection onto the subspace generated by θ. One shall notice that μ_1 and μ_2 are supported on the line defined by the direction θ, so is the pivot measure ν = . We are now ready to re-express the metric . Let θ∈, μ_1,μ_2 ∈ and be the pivot measure. Let be the generalized Wasserstein mean between μ_1 and μ_2 ∈ with pivot measure . Then, _2^2(μ_1,μ_2,θ) = 2W^2_2(μ_1,)+2W_2^2(,μ_2)-4W_2^2(,). The proof is in <ref>. From Proposition <ref>, is the -based Wasserstein distance between μ_1 and μ_2. This alternative formulation allows establishing additional properties for . 
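Before moving to the theoretical properties, the permutation-based definition above and its random-search estimation translate directly into code: project, sort, pair, average squared distances in the ambient space, and keep the best direction among L random ones. The numpy sketch below uses our own function names and also returns the induced one-to-one assignment.

import numpy as np

def swgg(X, Y, theta):
    # SWGG_2^2(mu_1, mu_2, theta): sort the 1D projections and pair the points
    # accordingly, then average the squared distances in the ambient space
    sigma = np.argsort(X @ theta)
    tau = np.argsort(Y @ theta)
    cost = np.mean(np.sum((X[sigma] - Y[tau]) ** 2, axis=1))
    return cost, (sigma, tau)

def min_swgg_random_search(X, Y, n_dir=100, rng=None):
    # crude estimation of min-SWGG: keep the direction with the lowest SWGG value
    rng = np.random.default_rng(rng)
    best_cost, best_theta, best_perms = np.inf, None, None
    for _ in range(n_dir):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)
        cost, perms = swgg(X, Y, theta)
        if cost < best_cost:
            best_cost, best_theta, best_perms = cost, theta, perms
    return best_cost, best_theta, best_perms

def transport_map(sigma, tau):
    # T(x_{sigma(i)}) = y_{tau(i)}, i.e. T(x_i) = y_{tau(sigma^{-1}(i))}
    T = np.empty(len(sigma), dtype=int)
    T[sigma] = tau
    return T   # T[i] is the index of the target point assigned to x_i

This random-search scheme matches the 𝒪(Ldn + Ln log n) complexity discussed above.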
§.§ Theoretical properties Additionally to the properties derived in Section <ref> ( is a distance and an upper bound of W^2_2), we provide below other theoretical guarantees. metricizes the weak convergence in . In other words, let (μ_k)_k ∈ℕ be a sequence of measures in and μ∈. We have: μ_kkμ_2^2(μ_k,μ) 0 , where stands for the weak convergence of measure i.e. ∫_^dfdμ_k→∫_^dfdμ for all continuous bounded functions f. Beyond the weak convergence, possesses the translation property, i.e. the translations can be factored out as Wasserstein distance does (see <cit.> for a recall). Let T^u (resp. T^v) be the map x↦x-u (resp. x↦x-v), with u,v vectors of ^d. We have: _2^2(T^u_#μ_1,T^v_#μ_2)=  _2^2(μ_1,μ_2)+u-v_2^2-2⟨u-v,m_1-m_2⟩ where m_1=∫_^dx dμ_1(x) and m_2=∫_^dx dμ_2(x) are the means of μ_1, μ_2. This property is useful in some applications such as shape matching, in which translation invariances are sought. The proofs of the two Propositions are deferred to Supp. <ref> and <ref>. and W^2_2 are equals in different cases. First, <cit.> showed that it is the case whenever μ_1 is the shift and scaling of μ_2 (see Supp. <ref> for a full discussion). In Lemma <ref>, we will state that it is also the case when at least one of the two distributions is supported on a line. §.§ Efficient computation of SWGG defined in eq. (<ref>) involves computing three WDs that are fast to compute, with an overall 𝒪(dn+nlog n) complexity, as detailed below. Building on this result, we provide an optimization scheme that allows optimizing over θ with 𝒪(sdn+snlog sn) operations at each iteration, with s a (small) integer. We first start by giving a new closed form for WD whenever one distribution is supported on a line, that proves useful to derive an efficient computation scheme. New closed form for WD. The following lemma states that W^2_2(μ_1,μ_2) admits a closed form whenever μ_2 is supported on a line. For that, it relies on the computation of WD between μ_2 and the orthogonal projection of μ_1 onto the linear subspace defined by the line. Additionally, the optimal transport map T^1 → 2 is also given by an explicit formulation. Let μ_1,μ_2 in with μ_2 supported on a line of direction θ∈. We have: W^2_2(μ_1,μ_2)=W^2_2(μ_1,Q_#^θμ_1)+W^2_2(Q_#^θμ_1,μ_2) with Q^θ as in Def. <ref>. Note that W^2_2(μ_1,Q^θ_#μ_1)=1/n∑x_i-Q^θ(x_i)^2_2 and that W^2_2(Q_#^θμ_1,μ_2)=W^2_2(P_#^θμ_1,P_#^θμ_2) are WD between 1D distributions. Additionally, the optimal transport map is given by T^1→ 2 = T^Q^θ_#μ_1 →μ_2∘ T^μ_1 → Q^θ_#μ_1=T^Q^θ_#μ_1 →μ_2∘ Q^θ. In particular, the map T^1→ 2 can be obtained via the permutations of the 1D distributions P^θ_#μ_1 and P^θ_#μ_2. The proof is in Supp. <ref>. Efficient computation of . Eq. (<ref>) is defined as the Wasserstein distance between a distribution (either μ_1 or μ_2 or ) and a distribution supported on a line (). As detailed in Supp. <ref>, computation of eq. (<ref>) results in three Wasserstein distances between distributions and their projections: i) W^2_2(μ_1,Q^θ_#μ_1), ii) W^2_2(μ_2,Q^θ_#μ_2), iii) W^2_2(,), and one dimensional Wasserstein distance W_2^2(P^θ_#μ_1,P^θ_#μ_2), resulting in a 𝒪(dn+nlog n) complexity. Optimization scheme for min-SWGG The term W_2^2(,) in eq. (<ref>) is non continuous w.r.t. θ. Indeed, the generalized mean only depends on the transport maps T^→μ_1 and T^→μ_2, that are constant as long as the projection direction θ leads to the same permutations σ_θ and τ_θ. 
Hence, we rely on a smooth surrogate of the generalized mean and we aim at minimizing the following objective function: _2^2(μ_1,μ_2,θ) 2W^2_2(μ_1,) +2W_2^2(,μ_2) -4W_2^2(,). To define , one option would be to use entropic maps in eq. (<ref>) but at the price of a quadratic time complexity. We rather build upon the blurred Wasserstein distance <cit.> to define as it can be seen as an efficient surrogate of entropic transport plans in 1D. In 1D, can be efficiently approximated by adding an empirical Gaussian noise followed by a sorting pass. In our case, it resorts in making s copies of each sorted projections P^θ(x_σ(i)) and P^θ (y_τ(i)) respectively, to add an empirical Gaussian noise of deviation √(ϵ)/2 and to compute averages of sorted blurred copies x^s_σ^s, y^s_τ^s. We finally have ()_i = 1/2s∑_k=(i-1)s+1^isx^s_σ^s(k)+y^s_τ^s(k). <cit.> showed that this blurred WD has the same asymptotic properties as the Sinkhorn divergence. The surrogate (μ_1,μ_2,θ) is smoother w.r.t. θ and can thus be optimized with a gradient descent, converging towards a local minima. Once the optimal direction θ^* is found, resorts to be the solution provided by (μ_1,μ_2,θ^*). Fig. <ref> illustrates the effect of the smoothing on a toy example and more details are given in Supp. <ref>. The computation of (μ_1,μ_2,θ) is summarized by Alg. <ref>. § EXPERIMENTS We highlight that is fast to compute, gives an approximation of WD and the associated transport plan. We start by comparing the random search and gradient descent schemes for finding the optimal direction in <ref>. In subsection <ref>, we illustrate the weak convergence property of through a gradient flow application to match distributions. We then implement an efficient algorithm for colorization of gray scale images in <ref>, thanks to the new closed form of WD. We finally evaluate on a shape matching application in <ref>. When possible from the context, we compare with the main methods for approximating WD namely SW, max-SW, Sinkhorn <cit.>, factored coupling <cit.> and subspace robust WD (SRW) <cit.>. Supp. <ref> provides additional results on the behavior of and experiments on other tasks such as color transfer or on data sets distance computation. §.§ Computing min-SWGG Let consider Gaussian distributions in dimension d ∈{2, 20, 200}. We first sample n=1000 points from each distribution to define μ_1 and μ_2. We then compute _2^2(μ_1, μ_2) with different schemes, either by random search, by simulated annealing <cit.> or by gradient descent; we report the results in Fig. <ref> (left). For the random search scheme, we repeat each experiment 20 times and we plot the average value of ± 2 times the standard deviation. For the gradient descent, we select a random initial θ. We observe that, in low dimension, all schemes provide similar values of . When the dimension increases, optimizing the direction θ leads to a better approximation of the true Wasserstein distance (see plots' title in Fig. <ref>). On Fig. <ref> (right), we compare the empirical runtime evaluation for with different competitors for d = 3 between n samples from Gaussian distributions, with n ∈{10^2, 10^3, 10^4, 5× 10^4, 10^5}. We observe that, as expected, with random search is as fast as with a super linear complexity. With the optimization process, it is faster than SRW for a given number of samples. We also note that SRW is more demanding in memory and hence does not scale as well as . §.§ Gradient Flows We highlight the weak convergence property of . 
Starting from a random initial distribution, we aim at moving the particles of a source distribution μ_1 to a target one μ_2 by reducing the objective _2^2(μ_1,μ_2) at each step. We compare both variants of with , and , relying on the code provided in <cit.> for running the experiment; we report the results on Fig. <ref>. We consider several target distributions, representing diverse scenarios and fix n=100. We run each experiment 10 times and report the mean ± the standard deviation. In every case, one can see that μ_1 moves toward μ_2 and that all methods tend to have similar behavior. One can notice though that, for the distribution with d=500, with optimization scheme leads to the best alignment of the distributions. §.§ Gray scale image colorization Lemma <ref> states that WD has a closed form when one of the 2 distributions is supported on a line, allowing us to compute the WD and the OT map with a complexity of 𝒪(dn+nlog n). This particular situation arises for instance with RBG images (μ_1,μ_2 ∈𝒫_2^n(^3)), where black and white images are supported on a line (the line of gray). One can deal with the problem of colorization through color transfer, where a black and white image is the source and a colorful image the target. Our fast procedure allows considering large images without sub-sampling Fig. <ref> gives an example of colorization of an image of size 1280×1024 that was computed in less than 0.2 second, while being totally untractable for the 𝒪(n^3log n) solver of WD. This procedure can be lifted to pan-sharpening <cit.> where one aims at constructing a super-resolution multi-chromatic satellite image with the help of a super-resolution mono-chromatic image (source) and low-resolution multi-chromatic image (target). Obtained results are given in the Supp. <ref>. §.§ Point clouds registration Iterative Closest Point (ICP) is an algorithm for aligning point clouds based on their geometries <cit.>. Roughly, its most popular version defines a one-to-one correspondence between point clouds, computes a rigid transformation (namely translation, rotation or reflection), moves the source point clouds using the transformation, and iterates the process until convergence. The rigid transformation is the solution of the Procrustes problem i.e. min_(Ω,t) ∈ O(d)×^dΩ (X-t)-Y_2^2, where X,Y are the source and the target cloud points and O(d) the space of orthogonal matrices of dimension d. This Procrustes problem can be solved using a SVD <cit.> for instance. We perform the ICP algorithm with different variants to compute the one-to-one correspondence: neareast neighbor (NN) correspondence, OT transport map (for small size datasets) and transport map. Note that SW, PWD, SRW, factored coupling and Sinkhorn cannot be run in this context where a one-to-one correspondence is mandatory; subspace detours <cit.> is irrelevant in this context (see Supp. <ref>). We evaluate the results of the ICP algorithm considering 3 datasets of different sizes on: i) the quality of the final alignment, measured by the Sinkhorn divergence between the re-aligned and target point cloud; ii) the speed of the algorithm given by the time until convergence. Results are shown in Table <ref> and more details about the setup, together with a deeper analysis of the results, can be found in Supp. <ref>. One can see that the assignment provided by OT-based methods is better than NN. allows working with large datasets, while OT fails to provide a solution for n=150 000. 
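For completeness, the Procrustes step used inside ICP admits the classical SVD solution; the sketch below (numpy, our function names) estimates the rigid transformation once a one-to-one correspondence is available, for instance the transport map returned by the random-search sketch given earlier.

import numpy as np

def procrustes_step(X, Y):
    # rows of X and Y are matched pairs; solve min_{Omega, t} ||Omega(X - t) - Y||^2
    # over orthogonal matrices by centering and an SVD (reflections are allowed)
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((X - mx).T @ (Y - my))
    Omega = Vt.T @ U.T
    return Omega, mx, my

def apply_rigid(X, Omega, mx, my):
    # move the source points with the estimated rigid transformation
    return (X - mx) @ Omega.T + my

An ICP iteration then alternates between recomputing the correspondence (nearest neighbors, OT, or the min-SWGG map) and re-estimating the rigid transformation until convergence.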
§ CONCLUSION In this paper, we hinge on the properties of SWD and on the Wasserstein generalized geodesics to define , a new upper bound of the Wasserstein distance that comes with an associated transport map. Topological properties of are provided, showing that it defines a metric and that metrizes the weak convergence of measure. We also proposed two algorithms for computing , by either a random search scheme or gradient descent after smoothing the generalized geodesics definition of . We illustrate its behavior in several experimental setups, notably showcasing its interest in applications where a transport map is needed. The set of permutations that is covered by is the one induced by projections and permutations on the line. It is a subset of the original Birkhoff polytope and it would be interesting to characterise how these two sets relates. In particular, in case of empirical realisations of continuous distributions, the behavior of , when n grows, needs to be investigated. In addition, the fact that and WD coincide when d>2n calls for embedding the distributions in higher dimensional spaces to benefit from more expressive power of projection onto the line. An other important consideration is to have a theoretical upper bound for . plain 14cm2pt Supplementary Material to “Fast Optimal Transport through Sliced Generalized Wasserstein Geodesics” 14cm0.4pt .1in § PROOFS AND SUPPLEMENTARY RESULTS RELATED TO SECTION <REF> §.§ Overestimation of WD by PWD As stated in Section <ref>, the projected Wasserstein distance (see Eq. <ref>) tends to overestimate the Wasserstein distance. This is due to the fact that some permutations σ_θ and τ_θ (with θ∈) involved in PWD computation may be irrelevant. Such situation occurs when the the distributions are in high dimension but supported on a low dimensional manifold or when the distributions are multi-modal. Let consider distributions μ_1 and μ_2 lying on a low dimensional manifold. In high dimension, randomly sampled vectors θ tend to be orthogonal. Moreover, vectors orthogonal to the low dimensional manifold lead to “collapse” projected distributions μ_1 and μ_2 onto θ. Hence, such projection directions lead to permutations that can be randomly chosen. To empirically illustrate this behavior of PWD, we consider μ_1 and μ_2 as Gaussian distributions in ℝ^d, d=10 but supported on the first two coordinates and we sample 200 points per distribution. Table <ref> summarizes obtained distances and shows that PWD overestimates the WD. Now, let us consider two multimodal distributions μ_1,μ_2 with K clusters such that each cluster of μ_1 has a close cluster from μ_2 (cyclical monotonicity assumption). Also we assume the same number of points in each cluster. OT plan will match the corresponding clusters and will lead to a relatively low value for W^2_2 (since cluster from μ_1 has a closely related cluster in μ_2). However as may allow permutations that make correspondences between points from different clusters (since a source cluster and a target cluster can be far in the original space but very close when projected on 1D), the resulting distance will be much more larger, leading to an overestimation of the Wasserstein distance. Table <ref> provides an illustration for K=10 clusters and d=2. §.§ Quantile version of SWGG In the paper, we express for distributions μ_1 and μ_2 with the same number of points and with uniform distribution for the probability masses. 
This constraint can be relaxed, using cumulative function for the 1D Wasserstein distance instead of the permutations σ,τ. For a distribution μ∈𝒫_2^n(), its cumulative function is defined as: F_μ : → [0,1] , x↦∫_-∞^xdμ and its pseudo inverse: F^-1_μ : [0,1] → , r ↦min{x ∈∪{-∞} s.t. F_μ(x)≥ r } For 1D distributions μ_1,μ_2 one can recover the Wasserstein distance through the pseudo inverse with: W^2_2(μ_1,μ_2)=∫_0^1|F_μ_1^-1(r)-F^-1_μ_2(r)|^2dr Moreover, the optimal transport plan is given by: π=(F^-1_μ_1,F^-1_μ_2)_#λ_[0,1] where λ_[0,1] is the Lebesgue measure on [0,1]. The Wasserstein mean of μ_1,μ_2 can be defined efficiently since, its inverse cumulative function is given by: F^-1_μ^1→ 2=1/2F^-1_μ_1+1/2F^-1_μ_2. Additionally, the theory of generalized geodesics is well defined when the transport map is replaced by the transport plan, which permits to construct the generalized Wasserstein mean (see <cit.>, section 9.2 for a formal definition). Finally, we have all the ingredients to define either by its permutation definition (in this case cumulative function definition) and by its generalized geodesic formulation. Let remark that the transport plan has at most n+m-1 non null value so the pivot measure (Wasserstein mean on the line) and the generalized Wasserstein mean have at most (n+m-1) points. This implies that the complexity goes from 𝒪(dn+nlog n) to 𝒪(d(n+m)+(n+m)log (n+m)). §.§ Proof of Proposition <ref> We aim to prove that ^2_2(μ_1,μ_2,θ) is an upper bound of W_2^2(μ_1,μ_2) and that (μ_1,μ_2,θ) is a distance ∀θ∈ ,μ_i ∈, i=1,2. Distance. Note that this proof will be derived for the alternative definition of in supp. <ref>. Let μ_1=1/n∑δ_x_i,μ_2=1/n∑δ_y_i,μ_3=1/n∑δ_z_i be in , let θ∈. We note σ (resp. τ and π) the permutation such that ⟨x_σ(1),θ⟩≤ ... ≤... ⟨x_σ(n) ,θ⟩ (resp. ⟨y_τ(1),θ⟩≤ ... ≤... ⟨y_τ(n) ,θ⟩ and ⟨z_π(1),θ⟩≤ ... ≤... ⟨z_π(n) ,θ⟩). Non-negativity and finite value. From the ℓ_2 norm, it is derived Symmetry. ^2_2(μ_1,μ_2,θ)=1/n∑_ix_σ(i)-y_τ(i)_2^2=1/n∑_iy_τ(i)-x_σ(i)_2^2=^2_2(μ_2,μ_1,θ) Identity property. From one side, μ_1=μ_2 implies that ⟨x_i ,θ⟩=⟨y_i,θ⟩, ∀ 1 ≤ i ≤ n and that σ = τ, which implies ^2_2(μ_1,μ_2,θ)=0. From the other side, ^2_2(μ_1,μ_2,θ)=0 1/n∑x_σ(i)-y_τ(i)^2_2=0 x_σ(i)=y_τ(i), ∀ 1 ≤ i ≤ n μ_1=μ_2. Triangle Inequality. We have (μ_1,μ_2,θ)=(1/n∑_ix_σ(i)-y_τ(i)^2_2)^1/2 ≤(∑_ix_σ(i)-z_π(i)^2_2. .+∑_iz_π(i)+y_τ(i)^2_2 )^1/2 ≤(∑_ix_σ(i)-z_π(i)^2_2)^1/2+(∑_iz_π(i)+y_τ(i)^2_2 )^1/2=(μ_1,μ_3,θ)+(μ_3,μ_2,θ) Upper Bound The fact that ^2_2 in an upper bound of W^2_2 comes from the sub-optimality of the permutations σ_θ,τ_θ. Indeed, they induce a one-to-one correspondence x_σ_θ(i)→y_τ_θ(i) ∀ 1 ≤ i ≤ n. This correspondence corresponds to a transport map T^θ such that T^θ_#μ_1=μ_2. Since W^2_2 = inf_ T s.t. T_#μ_1=μ_21/n∑x-T(x)^2_2 we necessarily have W^2_2≤_2^2. Equality The fact that W^2_2=_2^2 whenever d>2n comes from the fact that all the permutations are in the range of , in particular minimizing is equivalent to solve the Monge problem. We refer to Supp. <ref> for more details. §.§ Difference between max-SW and min-SWGG Herein, we give an example where the selected vectors θ for and differ. Let μ_1,μ_2 ∈𝒫(^2) be an empirical sampling of 𝒩(m_1,Σ_1) and of 𝒩(m_2,Σ_2) with m_1= [ -10; 0 ], m_2= [ 10; 0 ], Σ_1=[ 1 0; 0 11 ] and Σ_2=[ 2 0; 0 2 ]. Since these two distributions are far away on the x-coordinate, will catch this difference between the means by selecting θ≈[ 1; 0 ]. Indeed, the projection on the x-coordinate represents the largest 1D WD. 
Conversely, selects the pivot measure to be supported on θ≈[ 1; 0 ] that separates the two distributions. Indeed, this direction better captures the geometry of the 2 distributions, delivering permutations that are well grounded to minimize the transport cost. Fig. <ref> illustrates that difference between and . §.§ From permutations to transport map In this section we provide the way of having a transport map from permutations. Let μ_1,μ_2 ∈, let θ^* ∈min and let σ_θ^*,τ_θ^* the associated permutations. The associated map must be T(x_σ(i))=y_τ(i) ∀ 1 ≤ i ≤ n. In the paper, we formulate the associated transport map as: T(x_i) = y_τ_θ^*^-1(σ_θ^*(i)), ∀ 1 ≤ i ≤ n. Moreover, the matrix representation of T is given by: T_ij={[ 1/n if σ(i)=τ(j); 0 otherwise ]. § BACKGROUND ON WASSERSTEIN GENERALIZED GEODESICS We gave some concepts around Wasserstein generalized geodesics in Sec. <ref>. In this section, we give more details about these geodesics in order to provide a wider view on this theory. In the following definitions, we do not deal with the problem of uniqueness of the geodesics. However this is not a problem in our setup since we focus our study on pivot measure with n-atoms ν∈. In this case, we have uniqueness of the ν-based Wasserstein distance <cit.>. Wasserstein generalized geodesics As mentioned in Sec. <ref>, Wasserstein generalized geodesics rely on a pivot measure ν∈ to transport μ_1 to μ_2. Indeed, one can leverage the optimal transport maps T^ν→μ_1 and T^ν→μ_2 to construct a curve linking μ_1 to μ_2. The generalized geodesic with pivot measure ν is defined as: ((1-t)T^ν→μ_1 + tT^ν→μ_2)_#ν ∀ t ∈[0,1]. The generalized Wasserstein mean refers to the middle of the geodesic, i.e. when t=0.5 and has been denoted . Intuitively, the optimal transport maps between ν and μ_i, i=1,2 give rise to a sub-optimal transport map between μ_1 and μ_2 through: T_ν^1→ 2 T^ν→μ_2∘ T^μ_1 →ν with (T_ν^1→ 2)_#μ_1=μ_2. T_ν^1 → 2 links μ_1 to μ_2 via the generalized geodesic: =((1-t)Id - tT_ν^1→ 2)_#μ_1. We recall here the ν-based Wasserstein distance induced by T_ν^1→ 2 and introduced in Eq. (<ref>). The ν-based Wasserstein distance <cit.> is defined as: W^2_ν(μ_1,μ_2) ∫_^dx-T_ν^1→ 2(x)^2_2dμ_1(x) = ∫_^dT^ν→μ_1(z)-T^ν→μ_2(z)^2_2dν(z). Moreover, this new notion of geodesics comes with an inequality, which is of the opposite side to Eq. (<ref>): W^2_2(,ν) ≤ (1-t)W^2_2(μ_1,ν)+tW_2^2(ν, μ_2)- t(1-t)W_2^2(μ_1, μ_2). The parallelogram law is not respected but straddles with eq. (<ref>) and eq. (<ref>). We refer to Figure <ref> for an intuition behind positive curvature <cit.>, parallelogram law and generalized geodesics. Setting t=0.5 in Eq. (<ref>) and reordering the term gives: W^2_2(μ_1,μ_2) ≤ 2W^2_2(μ_1,ν)+2W^2_2(ν,μ_2)-4W^2_2(,ν). Moreover one can remark that: W^2_ν(μ_1,μ_2)=2W^2_2(μ_1,ν)+2W^2_2(ν,μ_2)-4W^2_2(,ν) In particular situations W_ν^2 and W^2_2 coincide. It is the case for 1D distributions where the Wasserstein space is known to be flat <cit.>. In that case, the Wasserstein mean and the generalized Wasserstein mean are the same. Multi-marginal Another formulation of the ν-based Wasserstein distance is possible through the perspective of multi-marginal OT <cit.>. Let Π(μ_1,μ_2,ν)={T s.t. T^12_#μ_1=μ_2 , T^13_#μ_1=ν and T^23_#μ_2=ν}, where T^ij is the projection of T onto the coordinates i,j. Let also Π^*(μ_i,ν) be the space of optimal transport maps between μ_i and ν. We have: W_ν^2(μ_1, μ_2)=inf_ T ∈Π(μ_1,μ_2,ν) s.t. 
T^i3∈Π^*(μ_i,ν) i=1,2∫_^dx-T^12(x)^2_2dμ_1(x) Equation (<ref>) expresses the fact that we select the optimal map from Π(μ_1,μ_2,ν) which is already optimal for Π(μ_i,ν). Mathematically, this minimization is not a multi-marginal problem, since the optimal map is supposed to be already optimal for some coordinate. The set { T ∈Π(μ_1,μ_2,ν) s.t. T^i3∈Π^*(μ_i,ν) i=1,2} is never empty, i.e. there is always existence of T_ν^1→ 2 (thanks to the gluing lemma <cit.>, page 23). Moreover, in situations where it is a singleton, there is uniqueness of T^1→ 2_ν. Uniqueness is an ingredient which overpasses the selection of a final coupling and comes with additional result. Whenever { T ∈Π(μ_1,μ_2,ν) s.t. T^i3∈Π^*(μ_i,ν) i=1,2} is a singleton, W_ν^2 is a proper distance. It is a semi-distance otherwise. Notably, 1D pivot measure was studied in <cit.> to ensure a dendritic structure of the distributions along the geodesic. § RELATED WORKS In this section we highlight the fact that several upper approximations of W^2_2 are in the framework of generalized geodesics. The differences lay in the choice of the pivot measure ν. Factored Coupling. In <cit.>, the authors impose a low rank structure on the transport plan by factorizing the couplings through a pivot measure ν expressed as the k-Wasserstein mean between μ_1 and μ_2 (k ≤ n). It is of particular interest since whenever the pivot distribution is the Wasserstein mean between μ_1 and μ_2, W_ν^2 and W_2^2 coincide. Factored coupling results in a problem of computing the k-Wasserstein mean (μ^1→ 2) followed by solving two OT problems between the clustered Wasserstein mean and the two input distributions (W_2^2(μ_1,μ^1 → 2) and W^2_2(μ^1 → 2,μ_2)). Even though the OT problems are smaller, they are still expensive in practice. Moreover, in this scenario, the uniqueness of the OT plan T^1 → 2_ν is not ensured. It appears that <cit.> chooses the most entropic transport plan, i.e. simply T^1→ 2_ν=T^μ^1 → 2→μ_2∘ T^μ_1 →μ^1 → 2. Subspace Detours. From a statistical point of view, it is beneficial to consider optimal transport on a lower dimensional manifold <cit.>. In <cit.>, authors compute an optimal transport plan T^μ^E_1 →μ^E_2 between projections on a lower linear subspace E of μ_1 and μ_2, i.e. μ_i^E=P_E #μ_i, where P_E is the linear projection on E. They aimed at leveraging T^μ_1^E →μ_2^E to construct a sub-optimal map T_E^1→ 2 between μ_1 and μ_2. The problem can be recast as a generalized geodesic problem with ν being the Wasserstein mean of μ_1^E and μ_2^E embedded in ^d. Once again, uniqueness of T_ν^μ_1 →μ_2 is not guaranteed, authors provide two ways of selecting the map, namely Monge-Knothe and Monge-Independent lifting. Subspace detours result in a problem where one needs to select a linear subspace E (which is a non convex procedure), compute an optimal transport between μ_1 and μ_2 (in 𝒪(n^3log n) whenever (E)>1) and reconstruct T_E^μ_1 →μ_2. Linear Optimal Transport (LOT). Given a set of distributions (μ_i)_i=1^m ∈^m, LOT <cit.> embeds the set of distributions into the L^2(ν)-space by computing the OT of each distribution to the pivot distribution. Mathematically, it computes T^ν→μ_i ∀ 1 ≤ i ≤ m and lies on estimating W_2^2(μ_i,μ_j) with W_ν^2(μ_i,μ_j) through eq. (<ref>). In LOT, the pivot measure ν was chosen to be the average of the input measures <cit.>, the Lebesgue measure on ^d <cit.> or an isotropic Gaussian distribution <cit.>. 
Instead of computing m2 expensive Wasserstein distances, it resorts only on m Wasserstein distances between (μ_i)_i^m and ν. While significantly reducing the computational cost when several distributions are at stake, it does not allow speeding up the computation when only two distributions are involved. §.§ Linear Optimal Transport with shift and scaling In this section, we recall the result from <cit.>. The theorem states that the ν-based approximation is very close to WD whenever μ_1, μ_2 are continuous distributions which are very close to be shift and scaling of each other. It can applies to a continuous version of , however it works with discrete measures in the particular case of equality between W^2_ν and W_2^2. Let Λ={S_a (shift) ,a∈^d}∪{R_c (scaling) , c ∈}, Λ_μ,R={h∈Λ s.t. h_μ≥ R } and G_μ,R,ϵ={g ∈ L^2(^d,μ) s.t. ∃ h ∈Λ_μ,R s.t. g-h_μ≤ϵ} Let ν, μ∈, with μ,ν≪λ (the Lebesgue measure). Let R>0,ϵ>0 * For g_1,g_2 ∈ G_μ,R,ϵ and ν=λ on a convex compact subset of ^d, we have: W_ν(g_1#μ,g_2#μ)-W_2(g_1#μ,g_2#μ)≤ Cϵ^2/15+2ϵ * If μ and ν satisfy the assumption of Caffarelli's regularity theorem <cit.>, then for g_1,g_2 ∈ G_μ,R,ϵ, we have: W_ν(g_1#μ,g_2#μ)-W_2(g_1#μ,g_2#μ)≤Cϵ^1/2+Cϵ where C,C depdends on ν,μ and R. § PROOFS AND OTHER RESULTS RELATED TO SECTION <REF> §.§ Proof of Proposition <ref>: equivalence between the two formulations of SWGG In this section, we prove that the two definitions of in Def. <ref> and Prop. <ref> are equivalent. Let θ∈ be fixed. From one side in Def. <ref>, we have: _2^2(μ_1,μ_2,θ)1/n∑_i x_σ_θ(i)-y_τ_θ(i)^2_2 where σ_θ and τ_θ are the permutations obtained by sorting P^θ_#μ_1 and P^θ_#μ_2. From the other side we note D(μ_1,μ_2,θ) the quantity: D(μ_1,μ_2,θ) 2W^2_2(μ_1,)+2W_2^2(,μ_2)-4W_2^2(,). We want to prove that ^2_2(μ_1,μ_2,θ)=D(μ_1,μ_2,θ), ∀μ_1, μ_2 ∈ and θ∈. Eq. (<ref>) in the main paper states that D(μ_1,μ_2,θ) is equivalent to ∫_^dx-T^1 → 2_(x)^2_2dμ_1(x). Finally, Lemma <ref> states that the transport map T^1→ 2_ is fully determined by the permutations on the line: the projections part is a one-to-one correspondence between x and θ⟨x ,θ⟩ (resp. between y and θ⟨y ,θ⟩). More formally T^1 → 2_(x_σ_θ(i))=y_τ_θ(i) ∀ 1 ≤ i ≤ n. And thus we recover: ∫_^dx-T^1 → 2_(x)^2_2dμ_1(x)=1/n∑_i x_σ_θ(i)-y_τ_θ(i)^2_2 which concludes the proof. §.§ Proof of Weak Convergence (Proposition <ref>) We want to prove that, for a sequence of measures (μ_k)_k ∈ℕ∈, we have: μ_k μ∈) _2^2(μ_k,μ) 0 The notation μ_kμ stands for the weak convergence in i.e. ∫_^df(x)dμ_(k)(x)→∫_^df(x)dμ(x) for all continuous bounded functions f and for the Euclidean distance f(x)= x_0-x_2^2 for all x_0 ∈^d. From one side, if _2^2(μ_k,μ) → 0 W^2_2(μ_k,μ) → 0 μ_k μ. The first implication is due to the fact that _2^2 is an upper-bounds of W^2_2, the Wasserstein distance, and that WD metrizes the weak convergence. From another side, assume μ_k μ; we have for any θ: * Let μ_θ^μ_k →μ∈ stands for the Wasserstein mean of the projections μ_k and μ and let μ_θ^μ→μ = μ. We have μ_θ^μ_k →μ converges towards (in law) to μ_θ^μ→μ, which implies that: W^2_2(μ_k,μ_θ^μ_k →μ) W^2_2(μ,μ_θ^μ→μ). * Since μ∈, we have T^μ_θ^μ_k →μ→μ_k T^μ_θ^μ_k →μ→μ (see <cit.>, theorem 3.2). It implies that μ_g, θ^μ_k →μμ and particularly: W^2_2(μ_g, θ^μ_k →μ, μ_θ^μ_k →μ) W^2_2(μ,μ_θ^μ→μ) By combining the previous elements, we get: 2W^2_2(μ_k,μ_θ^μ_k →μ)+2W_2^2(μ_θ^μ_k →μ,μ_k)-4W^2_2(μ_g, θ^μ_k →μ, μ_θ^μ_k →μ) 2W^2_2(μ,μ_θ^μ→μ) +2W^2_2(μ_θ^μ→μ,μ) -4W^2_2(μ,μ_θ^μ→μ)=0 The previous relation shows that μ_k μ implies ^2_2(μ_k,μ,θ) 0 for any θ. 
Hence, we can conclude that: μ_k μ_2^2(μ_k,μ) → 0 This concludes the proof. Note that when μ_1 and μ_2 are continuous, <cit.> proved that when the distributions are smooth enough (i.e. respecting the Cafarelli theorem <cit.>), there is a bi-Holder equivalence between the ν-based Wasserstein distance and W^2_2. Hence, it still holds for for any θ∈ S^d-1: W^2_2(μ_1,μ_2) ≤_2^2(μ_1,μ_2, θ) ≤ B × W^2_2(μ_1,μ_2)^2/15 ∀μ_i ∈ where B depends on μ_i, i ∈{1, 2}, θ and the dimension d. This bound is sufficient to prove that metrizes the weak convergence in this context. We refer to <cit.> for more details. §.§ Proof of Translation property (Proposition <ref>) We prove that _2^2 has the same behavior w.r.t. the translation as W^2_2. This property is well known for Wasserstein and useful in applications such as shape matching. Let μ_1,μ_2 ∈, and let T^u (resp. T^v) be the map x↦x-u (resp. x↦x-v), with u,v vectors of ^d. To ease the notations, let define μ̃_1 = T^u_#μ_1 and μ̃_2 = T^v_#μ_2. Let remind that in the case of Wasserstein distance we have <cit.>(Remark 2.19): W^2_2(μ̃_1, μ̃_2) W^2_2(T^u_#μ_1,T^v_#μ_2)=W^2_2(μ_1,μ_2)-2⟨u-v,m_1-m_2⟩ +u-v^2_2 with m_1=∫_^dxdμ_1(x) and m_2=∫_^dxdμ_2(x). We aim to compute ^2_2(μ̃_1, μ̃_2) ^2_2(T^u_#μ_1,T^v_#μ_2). Let express first ^2_2(μ̃_1, μ̃_2) = 2W^2_2(μ̃_1,μ̃^1 → 2_θ) +2W^2_2(μ̃_2,μ̃^1 → 2_θ) -4W^2_2(μ̃^1 → 2_g,θ,μ̃^1 → 2_θ) where μ̃^1 → 2_θ is the Wasserstein mean of the projections along θ of the shifted measures μ̃_1 = T^u_#μ_1 and μ̃_2 = T^v_#μ_2 as in Proposition <ref>. The generalized Wasserstein mean μ̃^1 → 2_g,θ is defined accordingly (see also Proposition <ref>). We have: W^2_2(μ̃_1, μ̃^1 → 2_θ)=W^2_2(μ_1,)-2⟨u,m_1-m_3⟩ +u^2_2 where m_3=∫_^dxdμ̃^1 → 2_θ(x). Similarly W^2_2(μ̃_2, μ̃^1 → 2_θ)=W^2_2(μ_2, )-2⟨v,m_2-m_3⟩ +v^2_2. Let express now the third term in eq. (<ref>). For that we require to define the generalized Wasserstein mean μ̃^1 → 2_g,θ with pivot measure μ̃^1 → 2_θ. By the virtue of eq. (<ref>) in the main paper, we have: μ̃^1 → 2_g,θ = (1/2T^μ̃^1 → 2_θ→μ̃_1 + 1/2T^μ̃^1 → 2_θ→μ̃_2)_#μ̃^1 → 2_θ =(1/2T^μ^1 → 2_θ→μ_1 +1/2T^μ^1 → 2_θ→μ_2-T^u+v/2)_#μ̃^1 → 2_θ =T^u+v/2_#( (1/2T^μ^1 → 2_θ→μ_1 +1/2T^μ^1 → 2_θ→μ_2)_#) Hence, the third term in (<ref>) is: W^2_2(μ̃^1 → 2_g,θ, μ̃^1 → 2_θ)=W^2_2(,)-2 < u+v/2,m_1+m_2/2-m_3 > + u+v/2^2_2 since the mean of a Wasserstein mean is the mean of m_1, m_2. Putting all together, we have: ^2_2(T^u_#μ_1,T^v_#μ_2) =^2_2(μ_1,μ_2) - 4 ⟨u,m_1-m_3⟩ -4 ⟨v,m_2-m_3⟩ + 8 < u+v/2,m_1+m_2/2-m_3 > +2u^2_2+2v^2_2-4u+v/2^2_2 =^2_2(μ_1,μ_2) + 4⟨u+v,m_3⟩ -4⟨u+v ,m_3⟩-4⟨u,m_1⟩ -4 ⟨v,m_2⟩ +4⟨u+v,m_1+m_2⟩ +u-v^2_2 Parallelogram law =^2_2(μ_1,μ_2) - 2⟨u,m_1⟩-2 ⟨v,m_2⟩ +2⟨u,m_2⟩ +2⟨v,m_1⟩ +u-v^2_2 =^2_2(μ_1,μ_2) - 2⟨u-v,m_1 - m_2⟩+u-v^2_2 §.§ Proof of the new closed form of the Wasserstein distance (Lemma <ref>) We recall and prove the lemma that makes explicit a new closed form for WD. Let μ_1,μ_2 be in with μ_2 a distribution supported on a line whose direction is θ∈. We have: W^2_2(μ_1,μ_2)=W^2_2(μ_1,Q_#^θμ_1)+W^2_2(Q_#^θμ_1,μ_2). Moreover, the optimal map is given by T^1→ 2 = T^Q^θ_#μ_1 →μ_2∘ T^μ_1 → Q^θ_#μ_1=T^Q^θ_#μ_1 →μ_2∘ Q^θ. Let μ_1,μ_2 be in with μ_2 a distribution supported on a line of direction θ. We have: W^2_2(μ_1,μ_2)=W^2_2(μ_1,Q_#^θμ_1)+W^2_2(Q_#^θμ_1,μ_2) Moreover, the optimal map is given by: T^1 → 2= T^Q^θ_#μ_1 → 2∘ T^1 → Q^θ_#μ_1=T^Q^θ_#μ_1 → 2∘ Q^θ Here Q^θ is given in Def. <ref> of the paper. 
The proof of the Lemma was first inspired by <cit.>(Proposition 2.3), where authors show that W^2_C(μ_1,μ_2)=W^2_C^1(μ_1,μ)+W^2_C^2(μ,μ_2) , with C^1,C^2 and C some cost matrices with the constraints C_ij=min_sC^1_is+C^2_sj. Let μ_1=1/n∑δ_x_i and μ_2=1/nδ_y_i be in with μ_2 a distribution supported on a line with direction θ. Let Q_#^θμ_1=μ_1=1/n∑δ_x_i∈. We emphasize here the fact that the atoms of μ_1 and μ_2 are supported on a line are denoted by the overline symbol. From one side, we have: W^2_2(μ_1,μ_2) =inf_T^1 s.t. T^1_#μ_1= μ_2∫_^dx-T^1(x)^2_2dμ_1(x) =inf_T^1 s.t. T^1_#μ_1= μ_2∫_^d(x-Q^θ(x)^2_2+Q^θ(x)-T^1(x)^2_2 )dμ_1(x) =∫_^dx- Q^θ(x)^2_2dμ_1(x)+inf_T^1 s.t. T^1_#μ_1= μ_2∫_^dQ^θ(x)-T^1(x)^2_2dμ_1(x) ≥inf_T^2 s.t. T^2_#μ_1= μ_1∫_^dx- T^2(x)^2_2dμ_1(x)+inf_T^3 s.t. T^3_#μ_1= μ_2∫_^dx-T^3(x)^2_2dμ_1(x) ≥ W^2_2(μ_1,μ_1)+W^2_2(μ_1,μ_2) Equation (<ref>) is obtained thanks to the Pythagorean theorem since ⟨x_i,Q^θ(x_i),y_i⟩ is a right triangle ∀ 1 ≤ i≤ n. The equation (<ref>) is obtained by taking the inf of the previous first term of the previous equation. From the other side: W^2_2(μ_1,μ_1)+W^2_2(μ_1,μ_2)= ∫_^dx-T^3(x)^2_2dμ_1(x)+∫_^dx-T^4(x)^2_2dμ_1(x) = ∫_^dT^3(x)-T^4(x)^2_2 dμ_1(x) = W^2_μ_1(μ_1,μ_2) ≥ W^2_2(μ_1,μ_2) Where T^3 and T^4 are the optimal plan of W^2_2(μ_1,μ_1) and+W^2_2(μ_1,μ_2). Similarly, (<ref>) is obtained via the Pythagorean theorem. This concludes the proof. We plot an illustration of the lemma in Figure <ref>. §.§ Details on the efficient computation of SWGG We decompose the second formulation of . Let first remind that Q^θ : ^d →^d, x↦θ⟨x,θ⟩ and P^θ: ^d →, x↦⟨x , θ⟩ are the projections on the subspace generated by θ. We have: _2^2(μ_1,μ_2,θ)=2W^2_2(μ_1,)+2W^2_2(,μ_2)-4W^2_2(,). First, by lemma <ref>, 2W^2_2(μ_1,)=2W^2_2(μ_1,μ_1)+2W^2_2(μ_1,) as 's support is on a line. Similarly, 2W^2_2(μ_2,)=2W^2_2(μ_2,μ_2)+2W^2_2(μ_2,). and -4W^2_2(,)=-4W^2_2(,)-4W^2_2(,). We notice that 2W^2_2(μ_1,)+2W^2_2(,μ_2)=W^2_2(μ_1,μ_2) (as is the Wasserstein mean between μ_1 and μ_2). We also notice that -4W^2_2(,)=0 (it comes from the fact that the generalized Wasserstein mean is induced by the permutations on the line), we can put all together to have: ^2_2(μ_1,μ_2,θ)=2W^2_2(μ_1,μ_1)+2W^2_2(μ_2,μ_2)-4W^2_2(,)+W^2_2(μ_1,μ_2) One can show that is divided into 3 Wasserstein distances between a distribution and its projections on a line and 1D Wasserstein problem. This results in a very fast computation of . §.§ Smoothing of SWGG In this section, we give details on the smoothing procedure of , an additional landscape of and its smooth counterpart and an empirical heuristic for setting hyperparameters s and ϵ. Smoothing Procedure. A natural surrogate would be to add an entropic regularization within the definition of T^→μ_i, i ∈{1,2} and to solve an additional optimal transport problem. Nevertheless, it would lead to an algorithm with an 𝒪(n^2) complexity. Instead, we build upon the blurred Wasserstein distance <cit.> between two distributions ν_1 and ν_2: B^2_ϵ(ν_1, ν_2) W_2^2(k_ϵ/4∗ν_1, k_ϵ/4∗ν_2) where ∗ denotes the smoothing (convolution) operator and k_ϵ/4 is the Gaussian kernel of deviation √(ϵ)/2. In our case, it resorts in making s copies of each sorted projections P^θ(x_i) and P^θ (y_i) respectively, to add a Gaussian noise of deviation √(ϵ)/2 and to compute averages of sorted blurred copies x^s_σ^s, y^s_τ^s: ()_i = 1/2s∑_k=(i-1)s+1^isx^s_σ^s(k)+y^s_τ^s(k). 
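To fix ideas, here is a minimal NumPy sketch (ours, not the code released with the paper; function names are ours) of the fixed-θ computation of SWGG via the sorted projections described above, together with the simple random-search variant over sampled directions. The smoothing just described would replace the plain sort by s blurred copies of each projection before averaging.

```python
import numpy as np

def swgg_fixed_theta(X, Y, theta):
    """SWGG_2^2(mu_1, mu_2, theta) for two n-point empirical measures in R^d.
    X, Y: (n, d) arrays of support points (uniform weights); theta: unit vector in R^d.
    Returns the cost and the pairing T(x_j) = y_[pair[j]] induced by sorting the projections."""
    sigma = np.argsort(X @ theta)        # order of the projections <x_i, theta>
    tau = np.argsort(Y @ theta)          # order of the projections <y_i, theta>
    cost = np.mean(np.sum((X[sigma] - Y[tau]) ** 2, axis=1))
    pair = np.empty(len(X), dtype=int)
    pair[sigma] = tau                    # x_{sigma(i)} is matched with y_{tau(i)}
    return cost, pair

def min_swgg_random_search(X, Y, n_dirs=100, seed=0):
    """Random-search estimate of min-SWGG: keep the best of n_dirs sampled directions."""
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(n_dirs, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    costs = [swgg_fixed_theta(X, Y, th)[0] for th in thetas]
    best = int(np.argmin(costs))
    return costs[best], thetas[best]
```

Each evaluation is dominated by the projections and the two sorts, in line with the complexity discussed above.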
Further, we provide additional examples of the landscape of (μ_1,μ_2) and discuss how to choose empirically relevant s and ϵ values. <cit.> has shown that the blurred WD has the same asymptotic properties as the Sinkhorn divergence, with parameter ϵ the strength of the blurring: it interpolates between WD (when ϵ→ 0) and a degenerate constant value (when ϵ→∞). To find a minimum of Eq. (<ref>) in the paper (i.e. _2^2(μ_1,μ_2,θ)), we iterate over: θ_t+1 = θ_t+η∇_θ^2_2(μ_1,μ_2,θ) θ_t+1 = θ_t+1/θ_t+1_2 where η∈_+ is the learning rate. This procedure converges towards a local minima with a complexity of 𝒪(snd+snlog(sn)) for each iteration. Once the optimal direction θ^⋆ is found, the final solution resorts to be the solution provided by ^2_2(μ_1,μ_2,θ^⋆), where the induced optimal transport map is an unblurred matrix. Heuristic for setting the hyperparameters of We here provide an heuristic for setting parameters s (number of copies of each points) and ϵ (strength of the blurring). We then give an example of the behavior of w.r.t. these hyper parameters. Let μ_1 =1/n∑δ_x_i and μ_2 =1/n∑δ_y_i. • s∈ℕ_+ represents the number of copies of each sample. We observe empirically that the quantity sn should be large to provide a smooth landscape. It means that the s values can be small when n increases, allowing to keep a competitive algorithm (as the complexity depends on ns) • ϵ∈_+ represents the variance of the blurred copies of each sample. Empirically, ϵ should depend on the variance of the distributions projected on the line. Indeed, an ϵ very close to zero will not smooth enough the discontinuities whereas a large ϵ will give a constant landscape. As discussed in Section <ref>, finding an optimal θ∈ is a non convex problem and provides a discontinuous loss function. We give some examples of the landscape of w.r.t. different values of the hyperparameters in Fig. <ref>. The landscapes were computed with a set of projections θ regularly sampled with angles ∈[0, 2π]. We observe that the larger s, the smoother . Additionally, raising ϵ tends to flatten w.r.t. θ (erasing local minima). Indeed similarly to Sinkhorn, a large ϵ blurred the transport plan and thus homogenize all the value of w.r.t. θ. Moreover, we empirically observe that the number of samples for μ_1 and μ_2 enforces the continuity of . We then conjecture that the discontinuities of are due to artifact of the sampling and thus the smoothing operation erases this unwanted behavior. A full investigation of this assumption is left for future work. §.§ Inconsequential of the pivot measure Importantly, only the direction θ is of importance for the value of . Indeed, whenever ν∈ is supported on a line of direction θ, the position of the atoms is irrelevant for W_ν and the associated transport plan whenever the atoms are distinct. Despite the fact that the pivot measure is inconsequential for the value of (at θ fixed), we choose it to be . This choice is supported by the fact that can be efficiently computed (as a 1D Wasserstein mean) and that some computation can be alleviated: 2W^2_2(μ_1,)+2W^2_2(, μ_2)=W^2_2(μ_1, μ_2) It is an important comment to derive the property of distance for ; it also allows minimizing over θ∈ without consideration for ν, since any choice of ν supported on the subspace generated by θ give the same result for . This property of irrelevance comes from the nature of the subspace where ν is supported, which is uni-dimensional. More formally we give the following proposition and its associated proof. Let μ_1, μ_2 ∈. Let θ∈. 
Let ν_1, ν_2 ∈ be two pivot measures supported on a line with direction θ, with disctincs atoms for each measure. We then have: W^2_ν_1(μ_1,μ_2)=W^2_ν_2(μ_1,μ_2) We give a proof of this proposition. Thanks to lemma <ref>, we known that the transport map T_ν^1 → 2 is fully induced by the transport plan T_ν^Q^θ_#μ_1 → Q^θ_#μ_2. Let remind that T_ν^Q^θ_#μ_1 → Q^θ_#μ_2 is given by T^ν→ Q_#^θμ_2∘ T^Q_#^θμ_1→ν (see equation (<ref>)). Moreover the two optimal transport plans are obtained via the ordering permutations, i.e. let σ,τ,π∈ s.t: x_σ(1)≤ ... ≤x_σ(n) y_τ(1)≤ ... ≤y_τ(n) z_π(1)≤ ... ≤z_π(n) With x_i being the atoms of Q^θ_#μ_1, y_i the atoms of Q^θ_#μ_2 and z_i being the atoms of Q^θ_#ν. One have T^μ_1 →ν(x_σ(i))=z_π(i) (resp. T^ν→μ_2(z_π(i))=x_τ(i)) ∀ 1 ≤ i ≤ n. Composing these two identities gives: T_ν^1→ 2(x_σ(i))=y_τ(i) ∀ 1 ≤ i ≤ n The last equation shows that T_ν^1 → 2 is in fact independent of π and thus of ν. §.§ Proof that min-SWGG is a distance (generalized geodesic formulation) This proof has already been established in <ref>. However we rephrase the proof in the context of generalized geodesics. We aim to prove that = √(2W^2_2(μ_1,)+2W_2^2(,μ_2)-4W_2^2(,)) defines a metric. Finite and non-negativity. Each term of ^2_2 is finite thus the sum of the three terms is finite. Moreover, being an upper bound of WD makes it non-negative. Symmetry. We have ^2_2(μ_1,μ_2,θ) = 2W_2^2(μ_1,)+2W^2_2(μ_2,)-4W^2_2(,) = 2W_2^2(μ_2,)+2W^2_2(μ_1,)-4W^2_2(,) = ^2_2(μ_2,μ_1,θ). Identity property. From one side, when μ_1=μ_2 T^μ_1 →=T^μ_2 →=Id, giving =μ_1=μ_2. Thus: ^2_2(μ_1,μ_2,θ)=2W^2_2(μ_1,)+2W^2_2(μ_1,)-4W^2_2(μ_1,)=0 From another side, ^2_2(μ_1,μ_2,θ)=0 W^2_2(μ_1,μ_2)=0 μ_1= μ_2 (by being an upper bound of WD). Triangle Inequality. We have: ^2_2(μ_1,μ_2,θ) = 2W^2_2(μ_1,)+2W^2_2(,μ_2)-4W^2_2(,) = 2∫_^d T_θ^1(x)-x_2^2d(x)+2∫_^dT_θ^2(x)-x^2_2 d(x) -4∫_^dT_θ^g(x)-x_2^2d(x) = ∫_^d(2 T_θ^1(x)-x_2^2+2T_θ^2(x)-x^2_2-4T_θ^g(x)-x_2^2) d(x) = ∫_^dT_θ^1(x)-T_θ^2(x)_2^2d(x) where, with an abuse of notation for clarity sake, T_θ^i is the optimal map between and μ_i and T_θ^g is the optimal map between and . The last line comes from the parallelogram rule of ^d. Thanks to Proposition <ref> we see that is simply the L^2(^d,ν) square norm, i.e.: ^2_2(μ_1,μ_2,θ)=T_θ^1-T_θ^2_ν^2 ∫_^dT_θ^1-T_θ^2^2_2dν with ν being any arbitrary pivot measure of . And thus is the L^2(^d,ν) norm. This observation is enough to conclude that is a proper distance for θ fixed. § EXPERIMENT DETAILS AND ADDITIONAL RESULTS WD, SW, Sinkhorn, Factored coupling are computed using the Toolbox <cit.> and our code is available at <https://github.com/MaheyG>. The Sinkhorn divergence for the point cloud matching experiment was computed thanks to the package <cit.>. §.§ Behavior of min-SWGG with the dimension and the number of points In this section, we draw two experiments to study the behavior of w.r.t. the dimension and to the number of points. Evolution with d In <cit.>[Theorem of Section 2], authors aim at enumerate the number of permutations obtained via the projection of point clouds on a line. It appears that the number of permutations increases with the dimension. They even show that whenever d≥ 2n (2n being the total number of points of the problem), all the possible permutations (n!) are in the scope of a line. Fig. <ref> depicts the number of obtainable permutations as a function of the dimension d, for n fixed. This theorem can be applied to to conclude that whenever d ≥ 2n, we have ^2_2=W^2_2. 
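As a quick empirical illustration of this growth (a toy experiment of ours, not taken from the paper or its released code), one can count how many distinct pairings are reachable by randomly sampled directions for a fixed n and increasing d:

```python
import numpy as np

def count_reachable_pairings(n=6, dims=(2, 4, 8, 16), n_dirs=20000, seed=0):
    """Count distinct pairings x_{sigma(i)} <-> y_{tau(i)} induced by random
    directions theta, for fixed n and growing dimension d (at most n! pairings)."""
    rng = np.random.default_rng(seed)
    counts = {}
    for d in dims:
        X, Y = rng.normal(size=(n, d)), rng.normal(size=(n, d))
        thetas = rng.normal(size=(n_dirs, d))
        thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
        seen = set()
        for theta in thetas:
            sigma, tau = np.argsort(X @ theta), np.argsort(Y @ theta)
            pair = np.empty(n, dtype=int)
            pair[sigma] = tau
            seen.add(tuple(pair))
        counts[d] = len(seen)            # tends to grow towards n! = 720 as d increases
    return counts
```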
It turns out empirically that the greater the dimension, the better the approximation of W^2_2 with (see Fig. <ref>) for a fixed n. More formally, the set of all possible transport maps is called the Birkhoff polytope and it is known that the minimum of the Monge problem is attained at the extremal points (which are exactly the set of permutations matrices, a set of n! matrices in our context) <cit.>. The set of the transport maps in the scope of is a subset of the extremal points of the Birkhoff polytope (there are permutations matrices but not all possibilities are represented). Theoretically, the set of transport maps in the scope of is larger as d grows, giving a subset that is more and more tight with the extremal points of the Birkhoff polytope. This explains that can benefit from higher dimension. We plot in Fig. <ref> the evolution, over 50 repetitions, of the ratio (μ_1,μ_2)/W^2_2(μ_1,μ_2) with d, n=50 and μ_1 ∼𝒩(1_^d,Id), μ_2∼𝒩(-1_^d,Id). Evolution with n Fig. <ref> represents the evolution of W^2_2(μ_1,μ_2) and _2^2(μ_1,μ_2) for two distributions μ_1 ∼𝒩(1_^d,Id) and μ_2∼𝒩(-1_^d,Id), with d=4 and a varying number of points. The results are averages over 10 repetitions. We observe that, when n is large enough, tends to stabilize around some constant value. We conjecture that there may exist an upper bound for : ^2_2(μ_1,μ_2)≤ψ(d,n,d') W^2_2(μ_1,μ_2) Where d' is the max of the dimensions of the distributions μ_1,μ_2 <cit.>, and ψ an unknown function. §.§ Computing min-SWGG We now provide here more details about the experimental setup of the experiments of Section <ref>. Choosing the optimal θ We compare three variants for choosing the optimal direction θ: random search, simulated annealing and optimization (defined in Section <ref>). We choose to compare with simulated annealing since it is widely used in discrete problem (such as the travelling salesman) and known to perform well in high dimension <cit.> <cit.> <cit.>. We notice in Fig. <ref> of the paper that the smooth version of is always (comparable or) better than the simulated annealing. In this experiment, we randomly sample 2 Gaussian distributions with different means and covariances matrices, whose parameters are chosen randomly. For optimizing , we use the Adam optimizer of Pytorch, with a fixed learning rate of 5e^-2 during 100 iterations, considering s=10 and ϵ=1. Fig. <ref> provides the timings for computing the random search approximation, simulated annealing and the optimization scheme. In all cases, we recover the linear complexity of (blue curves) in a log space. For the computation timings we compute with random search with L=500, simulated annealing (green curves) with 500 iterations with a temperature scheme (1-k+1/500)_k=1^500 and the optimization scheme (considering s=10 with a fixed number of iterations for the optimization scheme equals to 100). Runtime Evaluation In the paper, on Fig. <ref> (Right), we compare the empirical runtime evaluation on GPU for different methods. We consider Gaussian distributions in dimension d=3 and we sample n points per distribution with n∈{10^2,10^3,10^4,5· 10^4, 10^5}. For Monte-Carlo and random search, we use L=200 projections. For both and with optimization, we use 100 iterations with a learning rate of 1, and we fix s=50 for . We use the official implementation of the Subspace Robust Wasserstein (SRW) with the Frank-Wolfe algorithm <cit.>. §.§ Gradient Flows We rely on the code provided with <cit.> for running the experiment of Section <ref>. 
We fix n=100, the source distribution is taken to be Gaussian and we consider four different target measures that represent several cases: i) a 2 dimensional Gaussian, ii) a 500 dimensional Gaussian (high dimensional case), iii) 8 Gaussians (multi-modal distribution) and iv) a two-moons distribution (non-linear case). We fix a global learning rate of 5e^-3 with an Adam optimizer. For , and (random search), we sample L=100 directions. For the optimization methods , we set a learning rate of 1e^-3 with a number of 100 iterations for i), iii), and iv) and 200 iterations for ii). For (optimization), we took a learning rate of i)1e^-1, ii)1e^-3, iii)5e^-2, and iv) 1e^-3. The hyper parameters for the optimization of are s=10 and ϵ=0.5, except for the 500-dimensional Gaussian for which we pick ϵ=10 . Each experiment is run 10 times and shaded areas in Fig. <ref> (see the main paper) represent the mean ± the standard deviation. §.§ Gray scale image colorization We now provide additional results on a pan-sharpening application to complete results provided in Section <ref>. In pan-sharpening <cit.>, one aims at constructing a super-resolution multi-chromatic satellite image with the help of a super-resolution mono-chromatic image (source) and low-resolution multi-chromatic image (target). To improve the relevance of the colorization (by adding a proximity prior), we used super pixels computed via the Watershed algorithm <cit.> thanks to the the package <cit.>. Obtained high resolution colorized images of size 512×512 (n=262 144) are reported on Fig. <ref>. All pan-sharpening experiments were run on the PairMax data set <cit.>. The hyperparameters (markers and compactness) for the watershed super-pixels are: 500, 200, 200, 200 markers (an upper bound for the number of super pixel) for each image (by order of apparition) and compactness 1e-8 (high values result in more regularly-shaped watershed basins) for all the images. §.§ Point clouds registration We here provide additional details and results about the experiments in Section <ref>. Authors of <cit.> highlighted the relevance of OT in the point clouds registration context, plugged into an Iterative Closest Point (ICP) algorithm. They leveraged the 1D partial OT without consideration for the direction of the line. Our experimentation shows the importance of θ: the smaller is, the better the registration. In this experiments, having a one to one correspondence is mandatory: as such, we compare with a nearest neighbor assignment and the one provided by OT. Note that we do not compare with subspace detour <cit.>, since: i) with empirical distributions, the reconstruction of the plan is degenerated (as it doesn't involve any computation), ii) the research of subspace can be intensive since no prior is provided. To create the source distributions, we used random transformation (Ω,t) ∈ O(d)×^d of the target domain. Ω was picked randomly from O(d) the set of rotations and reflection and t has random direction with t_2=5. We also add a Gaussian noise 𝒩(0,ϵ Id), with ϵ=0.1. The ICP algorithm was run with 3 datasets of the following features: i) 500 points in 2D, ii) 3000 points in 3D, and iii) 150 000 points in 3D. was computed through the random search estimation with L=100. One stopping criterion is the maximum number of iterations of the algorithm, which varies with the dataset i.e.: i) 50, ii) 100, and iii) 200. 
The other stopping criterion is Ω -Id+t_2≤ε with ε chosen as follows: i) 1e-4, ii) 1e-2, and iii) 1e-2, where (Ω,t) ∈ O(d)×^d is the current transformation and · is the Frobenius norm. All these settings were run with 50 different seeds. Results are reported in Fig. <ref>. From Fig. <ref>, one can see that for: * n=500: The registration obtained via OT is very powerful (fast to compute and convergence to a good solution). is slightly faster with better convergence result and an equivalent number of iterations. Finally the nearest neighbor does not converge to a final solution closed to the target. * n=3000: registration by OT can converge poorly, moreover the timings are much higher than the competitors. shows efficient convergence results with an attractive computation time (order of fews seconds). We observe that the number of iterations can be very large and we conjecture that it is due to the fact that can exit local minima. The nearest neighbor is fast but, most of the time, does not converge to global minima. * n=150000: In this setting, OT is totally untractable (the cost matrix needs 180 GB in memory). shows good convergence and is most of the time very fast whenever the number of iterations does not attain the stopping criterion. The nearest neighbor assignment is faster but only converges to local minima. Note that, despite the fact that is slightly slower than the nearest neighbor, the overall algorithm can be faster due to the better quality of each iteration ( can attain a minima with less iterations). An other important aspect of ICP is that the algorithm tends to fall into local minima: the current solution is not good but further iterations do not allow a better convergence of the algorithm. We observed empirically that can avoid getting stuck on local minima when a reasonable number of lines is sampled (L ∼ 100). We conjecture that it happens because the random search approximation is not always the ideal solution and has the possibility to escape local minima. Sometimes, this lead to an algorithm converging through the global optimal solution. §.§ Color Transfer In this section, we provide an additional experiment in a color transfer context. We aim at adapting the color of an input image to match the color of a target one <cit.>. This problem can be recast as an optimal transport problem where we aim at transporting the color of the source image X into the target Y. For that, usual methods lie down on the existence of a map T : X→Y. We challenge to this problem to highlight relevance of the obtained transport map. Images are encoded as vector in ℝ^nm× 3, where n and m are the size of the image and 3 corresponds to the number of channels (here RBG channels). We first compute a map T_0 : X_0 →Y_0 between a subsample of X and Y of size 20000 and secondly extend this mapping to the complete distributions T : X→Y using a nearest neighbor interpolation. The subsampling step is mandatory due to the size of the images but can deteriorate the quality of the transfer. We compare the results obtained with maps obtained from Wasserstein distance, with random search (100 projections), subspace detour <cit.> and (optimized). Obtained images and the associated timings are provided in fig. <ref>. Figure <ref> shows that and W_2^2 provide visually equivalent solutions. Since, the quality of the color transfer is dependent on the size of the subsampling: using permits larger subsamples than W^2_2 and thus improves the quality of the map T. 
Moreover, one can note that min-SWGG (optimized) is the fastest to compute. We now give more details about how to perform color transfer between two distributions. The first step is to encode n× m images as ℝ^nm× 3 vectors, with 3 channels for an RGB image. Note that m and n can differ for the source and target image. The second step consists of defining subsamples X_0,Y_0 of X,Y; in our case we took X_0,Y_0 ∈ℝ^20000× d. We subsample the same number of points for the source and target image. In order to have a better subsampling of X and Y, it is common to perform a k-means <cit.> to derive X_0 and Y_0 (X_0,Y_0 are then taken as the centroids of the k-means algorithm). The third step is to compute T_0 : X_0 →Y_0. We set T_0 either as the optimal Monge map given by the Wasserstein distance or as the optimal map given by min-SWGG. Finally, the fourth step deals with extending T_0 : X_0 →Y_0 to T:X→Y: for every x∈X, we compute the closest element x_0 ∈X_0 and we set T(x)=T_0(x_0). More details on the overall procedure can be found in <cit.>. To perform the experiment, we took L=100 projections for min-SWGG (random search). For min-SWGG (optimized), we fixed the following set of parameters for the gradient descent: learning rate 5e^-2, number of iterations 20, number of copies s=10 and ϵ=1. Regarding the subspace detour results, we used the code of <cit.> provided at <https://github.com/BorisMuzellec/SubspaceOT>. Additionally, we perform color transfer without sub-sampling with the help of min-SWGG (with the same hyperparameters). This procedure is totally intractable for both W^2_2 and subspace detours (due to memory issues). As mentioned before, the subsampling phase can decrease the quality of the transfer; avoiding it can thus deliver better results than before. Results are given in Fig. <ref>. §.§ Data set distance We finally evaluate min-SWGG in another context: computing distances between datasets. Let _1={(x^1_i,y^1_i)}_i=1^n and _2={(x^2_i,y^2_i)}_i=1^n be source and target data sets such that x^1_i,x^2_i∈^d are samples and y_i^1,y_i^2 are labels ∀ 1 ≤ i≤ n. In <cit.>, the authors compare those data sets using the Wasserstein distance with the entries of the cost matrix defined as: C_ij = (x^1_i-x^2_j^2_2+W^2_2(α_y_i^1,α_y_j^2))^1/2 and the corresponding distance as: OTDD(_1,_2) = min_P ∈ U⟨ C,P⟩ where α_y is the distribution of all samples with label y, namely {x∈^d | (x,y) ∈} for being either _1 or _2 and U is the Birkhoff polytope which encodes the marginal constraints. Notice that the cost in Eq. (<ref>) encompasses the ground distance and a label-to-label distance. This distance is appealing in transfer learning applications since it is model-agnostic. However, it can be cumbersome to compute in practice since it relies on solving multiple OT problems (to compute the cost matrix and the OTDD). To circumvent that, <cit.> proposed several methods to compute the cost matrix in Eq. (<ref>). They used the Sinkhorn algorithm (in 𝒪(n^2)) or they assumed α_y∼𝒩(m_y,Σ_y) in order to get the WD through the Bures metric (which provides a closed form of OT for Gaussian distributions in 𝒪(d^3)), which is still prohibitive in high dimension. We challenge min-SWGG in this context. In this experiment, we compare the following datasets: MNIST <cit.>, EMNIST <cit.>, FashionMNIST <cit.>, KMNIST <cit.> and USPS <cit.>. We rely on the code of OTDD provided at <https://github.com/microsoft/otdd>. In order to make it compliant with the hypothesis, we require the empirical distributions α_y to have the same number of atoms. Fig.
<ref> provides results for a batch size of n=40000 samples using the Sinkhorn divergence (with a regularisation parameter of 1e^-1) and for min-SWGG (optimized) on batches of size 40000. We report results for a learning rate of 1e^-5, 20 iterations, and s and ϵ set to 1 and 0. We check that the orders of magnitude are preserved with min-SWGG. For example, OTDD(MNIST,USPS) is smaller than OTDD(MNIST,FashionMNIST) for either the Sinkhorn divergence or min-SWGG as the distance between labels; this validates that min-SWGG is a meaningful distance in this scenario. Moreover, in our setup, the computation is more expensive for Sinkhorn than for min-SWGG and totally intractable for W^2_2. On smaller batches (see Fig <ref>), the same observation can be made: min-SWGG is comparable (in terms of magnitude) with W^2_2, Sinkhorn and the Bures approximation. We give additional results in Fig. <ref> for batches of size n=2000 samples obtained with W^2_2, the Sinkhorn divergence (setting the entropic regularization parameter to 1e^-1), the Bures approximation, min-SWGG (random search with L = 1000 projections) and min-SWGG (optimized, with a learning rate of 5e^-1, 50 iterations, s=20 and ϵ=0.5). Note that Figs. <ref> and <ref> are not symmetric (OTDD(KMNIST,FashionMNIST)≠ OTDD(FashionMNIST,KMNIST)) because of the random aspect of the batches.
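To make the dataset-distance pipeline above concrete, here is a simplified sketch (ours; it is neither the OTDD package nor the paper's code) of the cost of Eq. (<ref>) for two equal-size, uniformly weighted batches. The placeholder label distance below (squared distance between class means) is only a stand-in for the label-to-label term, which in the experiment above is computed with min-SWGG (or Sinkhorn/Bures in the original OTDD).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def label_mean_dist(Za, Zb):
    """Placeholder label-to-label distance (squared distance between class means).
    Any distance between the two label point clouds can be plugged in here,
    e.g. min-SWGG, Sinkhorn, or the Bures closed form mentioned above."""
    return float(np.sum((Za.mean(axis=0) - Zb.mean(axis=0)) ** 2))

def otdd_like_distance(X1, y1, X2, y2, label_dist=label_mean_dist):
    """Sketch of the dataset distance: C_ij = (||x_i - x_j||^2 + d(alpha_{y_i}, alpha_{y_j}))^{1/2},
    then min_P <C, P>, solved as a linear assignment (equal batch sizes, uniform weights)."""
    assert len(X1) == len(X2), "equal-size batches assumed in this sketch"
    labels1, labels2 = np.unique(y1), np.unique(y2)
    D = {(a, b): label_dist(X1[y1 == a], X2[y2 == b]) for a in labels1 for b in labels2}
    ground = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)      # ||x_i^1 - x_j^2||^2
    label_term = np.array([[D[(a, b)] for b in y2] for a in y1])   # label-to-label distances
    C = np.sqrt(ground + label_term)
    rows, cols = linear_sum_assignment(C)                          # optimal coupling (permutation)
    return float(C[rows, cols].mean())
```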
http://arxiv.org/abs/2307.00663v1
20230702205216
Solving Multi-Agent Target Assignment and Path Finding with a Single Constraint Tree
[ "Yimin Tang", "Zhongqiang Ren", "Jiaoyang Li", "Katia Sycara" ]
cs.AI
[ "cs.AI", "cs.RO" ]
Combined Target-Assignment and Path-Finding problem (TAPF) requires simultaneously assigning targets to agents and planning collision-free paths for agents from their start locations to their assigned targets. As a leading approach to address TAPF, Conflict-Based Search with Target Assignment (CBS-TA) leverages both K-best target assignments to create multiple search trees and Conflict-Based Search (CBS) to resolve collisions in each search tree. While being able to find an optimal solution, CBS-TA suffers from poor scalability due to the duplicated collision resolution in multiple trees and the expensive computation of K-best assignments. We therefore develop Incremental Target Assignment CBS (ITA-CBS) to bypass these two computational bottlenecks. ITA-CBS generates only a single search tree and avoids computing K-best assignments by incrementally computing new 1-best assignments during the search. We show that, in theory, ITA-CBS is guaranteed to find an optimal solution and, in practice, is computationally efficient. § INTRODUCTION Multi-Agent Path Finding (MAPF) requires planning collision-free paths for multiple agents from their respective start locations to pre-assigned target locations while minimizing the sum of individual path costs <cit.>. Solving MAPF to optimality is NP-hard <cit.>, and many algorithms have been developed to handle this computational challenge. Among them, Conflict-Based Search (CBS) <cit.> is an efficient approach that finds an optimal solution to MAPF. This work considers a variant of MAPF that is often referred to as Combined Target-Assignment and Path-Finding (TAPF) <cit.>, where the target locations of the agents are not pre-assigned but need to be allocated during the computation: TAPF requires assigning each agent a unique target (location) out of a pre-specified set of candidate targets and then finds collision-free paths for the agents so that the sum of path costs is minimized. When the candidate target set of each agent contains only a single target, TAPF becomes MAPF and is thus NP-hard. MAPF and TAPF arise in many applications such as robotics <cit.>, computer gaming <cit.>, warehouse automation <cit.>, and traffic management at road intersections <cit.>. Several attempts <cit.> have been made to solve TAPF optimally by leveraging MAPF algorithms such as CBS <cit.>. Among them, a leading approach is Conflict-Based Search with Target Assignment (CBS-TA) <cit.>, which simultaneously explores different target assignments and creates multiple search trees (i.e., a CBS forest), while planning collision-free paths with respect to each assignment. CBS-TA suffers from poor scalability as the number of agents or targets increases, for the following two reasons. First, CBS-TA may resolve the same collision in multiple search trees many times, leading to duplicated computation and low search efficiency. Second, CBS-TA involves solving a K-best target assignment problem, which is often computationally expensive. This work thus attempts to bypass these two computational bottlenecks by exploring a new framework for integrating CBS with target assignment. The resulting algorithm is called Incremental Target Assignment CBS (ITA-CBS).
First, ITA-CBS creates only a single search tree during the search and is thus able to avoid duplicated collision resolution in different trees as in CBS-TA. Second, ITA-CBS completely avoids solving the K-best assignment problem, and instead, ITA-CBS updates the target assignment in an incremental manner during the CBS-like search, which further reduces the computational effort. Our experimental results show significant improvement in efficiency: ITA-CBS is faster than CBS-TA in 96.1% testcases, 5 times faster in 38.7% testcases, and 100 times faster in 5.6% testcases than CBS-TA among 6,334 effective testcases. § PROBLEM DEFINITION We define the Combined Target-Assignment and Path-Finding problem (TAPF) as follows. Let I={1,2,⋯,N} denote a set of N agents. Let G = (V,E) denote an undirected graph, where each vertex v ∈ V represents a possible location of an agent in the workspace, and each edge e ∈ E is a unit-length edge between two vertices that moves an agent from one vertex to the other. Self-loop edges are allowed, which represent “wait-in-place” actions. Each agent i∈ I has a unique start location s_i ∈ V. Let {g_j ∈ V | j ∈{1, 2, ..., M}}, M≥ N, denote the set of all M target locations. Let A denote a binary N × M matrix, where each entry a_ij (the i-th row and j-th column in A) is one if agent i is eligible to be assigned to target g_j and zero otherwise. Our task is to assign each agent i a unique target g_j while ensuring a_ij=1 and plan corresponding collision-free paths. Each action of agents, either waiting in place or moving to an adjacent vertex, takes a time unit. Let p^i=[v_0^i, v_1^i, ..., v_T^i^i], v^i_k ∈ V denote a path of agent i from v_0^i to v_T^i^i with the arrival time T^i. This work considers two types of agent-agent conflicts along their paths. The first type is the vertex conflict, where two agents i,j occupy the same vertex at the same time. The second type is the edge conflict, where two agents go through the same edge from opposite directions at the same time (i.e. v_t^i = v_t+1^j and v_t+1^i = v_t^j). The goal of the TAPF problem is to find a set of paths {p^i | i∈ I} for all agents such that, for each agent i: * v_0^i = s_i (i.e., agent i starts from its start location); * v_t^i = g_j, ∀ t ∈ [T^i, max{T^k | ∀ k ∈ I}] and a_ij = 1 (i.e., agent i stops at a target location g_j which is eligible to be assigned to i when all agents reach their goals); * Every pair of adjacent vertices in path p^i is either identical or connected by an edge (i.e., v_k^i=v_k+1^i (v_k^i, v_k+1^i) ∈ E, ∀ k ∈ [0, T^i-1]); * {p^i | i∈ I} is conflict-free; * The flowtime ∑_i=1^NT^i is minimized. § RELATED WORK §.§ MAPF MAPF can be viewed as a special case of TAPF where each agent can be assigned to only one target location. MAPF has a long history <cit.> and remains an active research problem <cit.>. A variety of methods are developed to address MAPF, trading off completeness and optimality for runtime efficiency. These methods range from decoupled methods <cit.>, which plan a path for each agent independently and synthesize the paths, to coupled methods <cit.>, where all agents are planned together. Other methods <cit.> consider agents that are planned independently at first and then together only when needed in order to resolve agent-agent conflicts. Conflict-Based Search (CBS) <cit.> is optimal with respect to flowtime and forms the foundation of this paper. CBS is a two-level search algorithm that finds an optimal solution with minimum flowtime. 
Its low level plans a shortest path for an agent from its start location to target location. Its high level searches a binary Constraint Tree (CT). Each CT node H = (c, Ω, π) includes a scalar flowtime(cost) c, constraint set Ω and plan π which is a set of paths for all agents from their start locations to target locations, satisfying Ω. In each H, CBS only select and resolve the first conflict, even when multiple collisions occur in the plan. To resolve a conflict in H, we can formulate two constraints, wherein each constraint prohibits one agent from executing its originally intended action at timestep t, and then add them individually to two successor CT nodes. Here we also define two types of constraints, namely vertex constraint (i, v, t) that prohibits agent i from occupying vertex v at timestep t and edge constraint (i, u, v, t) that prohibits agent i from going from vertex u to vertex v at timestep t. By maintaining a priority queue based on each CT node cost, it can be proved that CBS is optimal with respect to the flowtime. §.§ Assignment Problem and TAPF Given N agents, M tasks, and a N× M matrix C denoting the corresponding assignment cost of each task to each agent, the task assignment problem <cit.> seeks to allocate the tasks to agents such that each agent is assigned to a unique task and the total assignment cost is minimized. Popular methods to address this problem include Hungarian algorithm <cit.> and Successive Shortest Path (SSP) algorithm <cit.>. Additionally, the Dynamic Hungarian algorithm <cit.> seeks to quickly re-compute an optimal assignment based on the existing assignment, when some entries change in matrix C. TAPF can be viewed as a combination of the MAPF problem and the assignment problem. While conventional MAPF has a pre-defined target location for each agent, TAPF and its variants <cit.> seek to simultaneously allocate the targets to agents and find conflict-free paths for the agents. Of close relevance to this work is CBS-TA <cit.>, which is a leading algorithm in the literature that solves TAPF to optimality respect to flowtime. Some work  <cit.> follows the similar CBS forest idea, but none of them is designed to solve TAPF optimally. CBS-TA operates on this principle: a fixed Target Assignment solution transforms a TAPF problem into a MAPF problem, and each MAPF problem has a binary Constraint Tree (CT). CBS-TA efficiently explores all nodes of various CTs (CBS forest) by enumerating every TA solution. Each CT node in CBS-TA, denoted H = (c, Ω, π, r, π_ta), has two extra fields compared to CBS: a root flag r signifying if the node is a root, and a TA solution π_ta. CBS-TA keeps a priority queue for storing H from all CTs and lazily generates root H for varying π_ta. Since the cost of a root H is the lowest flowtime for a given TA, it's unnecessary to expand it if its cost surpasses all other H in the priority queue. So CBS-TA can orderly generate root H according to their costs and only needs to generate a new root H with the succeeding optimal TA solution when the current one has been expanded. Motivated by K-best task assignment algorithms <cit.> and SSP with Dijkstra algorithm, CBS-TA finds the succeeding optimal TA with O(N^2M^2). § ITA-CBS Our ITA-CBS, as shown in <Ref>, has the same low-level search as CBS and CBS-TA, but its high-level search is different. 
Each CT node H = (c, Ω, π, π_ta, M_c) in ITA-CBS has two additional fields compared to that in CBS: a TA (i.e., Target Assignment) solution and a N× M cost matrix M_c, where each entry describes the length of the shortest path from the corresponding start to target locations that satisfies the constraint set Ω. ITA-CBS begins by creating the first CT node with an empty Ω and the corresponding M_c and π_ta (<Ref>; Line 1-6). ITA-CBS maintains one priority queue to store all CT nodes that are generated during the search (<Ref>; Line 7-9, 24). ITA-CBS selects a CT node H_cur with the minimum cost from the priority queue and checks if it includes a conflict-free solution. If so, ITA-CBS is guaranteed to find an optimal solution (<Ref>; Line 10-13). Otherwise, ITA-CBS uses the first detected conflict to create two new constraints (<Ref>; Line 14) as in CBS. Then ITA-CBS creates two child nodes identical to the current node H and adds each constraint respectively into the constraint set of the two child nodes (<Ref>; Line 15-21). For each new node Q (with a constraint on agent i added), the low-level search is invoked for agent i to recompute its optimal paths subject to the new constraint set from its start to all possible targets (<Ref>). The cost of these planned paths are then used to update the cost matrix M_c in Q. Since M_c changes, the TA solution π_ta should also be updated. We use dynamic Hungarian algorithm <cit.> to get the assignment solution more efficiently, and compute the solution path and total cost (<Ref>; Line 22-24). §.§ Incremental Target Assignment During the search, when a new constraint is added to an agent i, only the row in the cost matrix corresponding to agent i may change. One can run Hungarian algorithm from scratch based on the new cost matrix to compute the assignment. However, it's too costly for ITA-CBS to execute the algorithm at each CT node. To expedite the computation, we employ the dynamic Hungarian algorithm <cit.> to reuse previous assignment and quickly update the assignment after cost matrix changes. Specifically, Hungarian algorithm assigns each vertex i a value l(i) which should satisfy M(u, v) ≤ l(u) + l(v), where u, v are different vertices, M is the cost matrix. A special subgraph is formed that includes all vertices and edges meeting the condition M(u, v) = l(u) + l(v).  <cit.> proved that if the special subgraph's matching is a perfect matching, this matching is the optimal matching in original weight graph. Hungarian algorithm aims to adjust vertex values to achieve a perfect match in the special subgraph. For dynamic Hungarian algorithm, if k rows and columns are changed, these k affected vertexes will be unmatched. Then dynamic Hungarian algorithm will adjust the vertex value l(i) for each affected vertex i, ensuring that M(u, v) ≤ l(u) + l(v) still holds. The complexity will be O(kM^2) to get a new optimal matching. In ITA-CBS, time complexity is O(M^2) since a new conflict only impacts one row in M, which is faster than original Hungarian algorithm with O(M^3). §.§ Example An example of our algorithm is shown in <Ref>. The map has 5 vertices, a, b, c,d,e, and there are 2 agents 1 and 2. Agent 1's target location set is {d, e} and agent 2's set is {c, e}. Each blue rounded rectangle in our figure represents a CT node H. Within each H, we have a constraint set Ω, a cost matrix M associated with Ω, a TA solution π_ta, and the total cost (flowtime) c. Initially, we create the first node H_1. 
Conflicts can arise in our initial solution, so we use the first conflict, where agent 1 and agent 2 collide at timestep 2, to establish 2 constraints. Then we create 2 new CT nodes (H_2, H_3) and add these 2 constraints into each constraint set separately and update each cost matrix with the new constraint. The new cost matrix only has one row different from the previous cost matrix. Because the cost matrix changed, we will obtain a new TA result by dynamic Hungarian algorithm. Then we push new H into our priority queue. In Fig. <ref>, both new nodes H_2 and H_3 have the same total cost. Consider H_2 is first selected from the priority queue for expansion. Two new nodes H_4, H_5 are generated from H_2. Among {H_3, H_4, H_5}, H_3 has the smallest flowtime and is thus selected for expansion, which leads to H_6, H_7. Now, the priority queue has 4 nodes: {H_4, H_5, H_6, H_7}, and H_4, H_5, H_7 have the same lowest flowtime 6. When H_4 is selected for expansion, it has 2 equal TAs: {1 → d, 2 → c} and {1 → d, 2 → e}. In ITA-CBS, ties are broken at random and consider the case {1 → d, 2 → e} without losing generality. In this case, there is no conflict, and ITA-CBS returns the solution: {1 → d, 2 → e} with flowtime is 6, which is an optimal solution. §.§ Properties of ITA-CBS This section shows that ITA-CBS is guaranteed to find an optimal TAPF solution if one exists. The cost of each CT node is a lower bound on the flowtime of all solutions that satisfy the node's constraints. Since the entries of the cost matrix of a CT node correspond to the shortest paths that ignore collisions, for any solution that satisfies the node's constraints, its flowtime cannot be smaller than the flowtime of its corresponding target assignment. Since we find the best target assignment at each node, its flowtime is a lower bound on the flowtime of all solutions that satisfy the node's constraints. It is easy to prove that the cost of a CT node is equal to the flowtime of its best target assignment. Therefore, the lemma holds. Every collision-free set of paths that satisfies the constraints of a CT node must also satisfy at least one of its child nodes' constraints. We prove by contradiction and assume that there is a collision-free solution {p^i} that satisfies the constraints of a node H_x but does not satisfy the constraints of either child node. Suppose the collision chosen to resolve in H_x is between agents i and j at vertex v (or edge e) at timestep t. Since each child node has only one additional constraint compared to node H_x, we know that {p^i} violates both additional constraints. That is, both path p^i and path p^j visit vertex v (or edge e) at timestep t, which leads to a collision and contradicts the assumption that {p^i} is collision-free. Therefore, the lemma holds. At any iteration of the high-level search, every collision-free solution must satisfy at least one CT node's constraints in the OPEN list. Since the root CT node has no constraints, all solutions satisfy the constraints of the root CT node. When we pop a CT node from the OPEN list, we will insert its child nodes back into the OPEN list. According to <Ref>, this lemma holds. ITA-CBS guarantees to find an optimal TAPF solution if exists. According to <Ref>, the cost of the CT node with the smallest cost in the OPEN list is a lower bound on the flowtime of all collision-free solutions. Therefore, when ITA-CBS terminates, its returned solution is guaranteed to be optimal. 
§ EXPERIMENTAL RESULTS We evaluate the performance of ITA-CBS and CBS-TA. We implement ITA-CBS and CBS-TA in C++ based on the existing CBS-TA implementation.[The CBS-TA source code is publicly available at <https://github.com/whoenig/libMultiRobotPlanning>. We will open source our code after the anonymous review.] Our CBS-TA implementation outperforms the original based on our tests. To our best knowledge, CBS-TA is the only existing work that solves TAPF optimally for flowtime, and thus we only compare ITA-CBS with CBS-TA in our experiments. We classify a testcase as a failure if no solution is found within 30 seconds and we mark the runtime for this testcase as 30 seconds. All experiments were executed on a computer with Ubuntu 20.04.1, AMD Ryzen 3990X 64-Core Processor, 64G RAM with 2133 MHz. We use 8 different maps from MAPF Benchmark Sets <cit.>: (1) den312d is from video game Dragon Age Origins (DAO), (2) random-32-32-10 and empty-32-32 are open grids with and without random obstacles, (3) maze-32-32-2 is a maze-like grid, (4) room-64-64-8 is a room-like grid, (5) warehouse-10-20-10-2-1 is inspired by real-world autonomous warehouse applications and (6) orz900d and Boston-0-256 are the first and second largest maps among all benchmark map files. All maps are shown in <Ref>. §.§ Test Scenarios We develop 2 test scenarios: (1) Group Test: We divide all agents into groups, and each group shares the same target location set. (2) Common Target Test: Each agent receives a target set of equal size. All agents have some common target locations. We evaluate the performance of the ITA-CBS and CBS-TA algorithms by altering the proportion of common targets in target sets. In each testcase, we randomly select the start and goal locations for every agent. We generate a set of 20 testcases for a given map using a specific test configuration. The success rate is calculated as the percentage of completed tests out of the total 20 test cases. §.§.§ Group Test In this test scenario, we put every 5 agents into one group, and the agents in each group share 5 different target locations. Different groups have different target locations. We increase the agent number with 5 intervals and all numbers can be found in <Ref>. Since groups do not share the same target locations, testcases grow increasingly complicated as the number of agents increases. The black lines reflect the success rates of both algorithms. <Ref> shows that ITA-CBS outperformed CBS-TA on all test maps. §.§.§ Common Target Test In this test scenario, we give each agent one target set with a fixed size and adjust the proportion of common targets. The size of the fixed target set is determined by dividing the total valid grid count of the map by the maximum number of agents. For the maze map, agent numbers vary from 15 to 35 with an increment of 5. For other maps, the agent number is from 15 to 60 with an increment of 5. Correspondingly, the target set sizes are {15, 15, 80, 40, 15, 50, 20, 20} for {empty, random, warehouse, den312d, maze, room, orz900d, boston}. The percentage of common targets among all targets are: 0, 30%, 60% and 100%. <Ref> shows that as common targets increase, the total success rates decrease, and ITA-CBS outperformed the CBS-TA under most proportions. §.§ Test Overall Situation We also show all testcases in <Ref>. The X-axis represents ITA-CBS running time in seconds, and the Y-axis represents CBS-TA algorithm running time. 
We have a total of 7,600 testcases, including 5,134 testcases solved by both algorithms, 1,191 solved only by ITA-CBS, 9 solved only by CBS-TA, and 1,266 on which both algorithms failed. For the 6,334 effective testcases which are solved by at least one algorithm, ITA-CBS is faster than CBS-TA in 96.1% of testcases, 5 times faster in 38.7%, and 100 times faster in 5.6%. §.§ Program Profile For this section, all runtime and CT-node-count data are taken from the previous two test scenarios. For this test, we only use the 5,134 testcases in which both algorithms successfully find optimal solutions within the given runtime limit, and we report averages over these testcases. §.§.§ Running Time of Various Parts We now show the average running time for various parts of each algorithm. We divide the program running time into four parts: target assignment, low-level path search, collision detection, and other. The average times for CBS-TA and ITA-CBS are {1.2s, 0.51s, 0.22s, 0.058s} and {0.006s, 0.36s, 0.032s, 0.027s}, respectively. This result shows that our dynamic Hungarian algorithm largely reduces the time taken by target assignment. Because ITA-CBS and CBS-TA may have different numbers of CT nodes, which may result in an unfair comparison of target assignment, we also show their average target assignment runtime in <Ref>. The figure shows that ITA-CBS is an order of magnitude faster than CBS-TA. As for collision detection time, since this action is invoked for each CT node, the result matches the CT node numbers in <Ref>. §.§.§ The number of CT nodes and CTs <Ref> also shows the numbers of CT nodes and Constraint Trees (CTs) for each test case. CBS-TA runs target assignment only when it needs a new CT, while ITA-CBS runs it in every CT node update. So we compare the number of ITA-CBS CT nodes with CBS-TA's numbers of CTs and CT nodes. The result shows that, even when comparing the number of ITA-CBS CT nodes with the number of CBS-TA CTs, ITA-CBS performs fewer target assignments than CBS-TA, which suggests that the constraints in the low-level search can reduce the target-assignment search space. We also found that the ratio of the number of root nodes to the total number of CT nodes can be very high for CBS-TA. For all 5,134 testcases, this ratio is 37.7%, with 2,226 target assignments on average for CBS-TA compared with 862 for ITA-CBS. This result explains why target assignment takes such a large portion of CBS-TA's total runtime. § CONCLUSION This work develops a new algorithm called Incremental Target Assignment CBS (ITA-CBS) to solve the TAPF problem to optimality. We show that our algorithm (1) avoids duplicate effort in conflict resolution and (2) updates the target assignment incrementally, thus leading to guarantees of optimality as well as efficient computation, as attested by our experimental results. For future work, we plan to apply ITA-CBS to realistic scenarios, such as planning for robots with dynamics and uncertainty in warehouses.
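As a complement to the algorithm description above, the following compact sketch (ours, not the authors' implementation) illustrates the ITA-CBS high-level loop: a single constraint tree, a cost matrix in which only the constrained agent's row is recomputed after each branching, and a fresh optimal assignment per node. For readability, the dynamic Hungarian update is replaced by a plain assignment solve, and the low-level space-time planner is abstracted as a callable, so the constraint encoding below is only an assumption of this sketch.

```python
import heapq
import itertools
import numpy as np
from scipy.optimize import linear_sum_assignment

def ita_cbs(starts, targets, eligible, low_level):
    """ITA-CBS high-level skeleton.
    starts: start vertex per agent; targets: candidate targets g_1..g_M;
    eligible: binary N x M matrix A; low_level(agent, goal, constraints) -> path (list) or None,
    assumed to be a space-time A* that respects the vertex/edge constraints it is given."""
    N, M = eligible.shape
    BIG = 10 ** 9

    def cost_row(i, constraints):                    # shortest-path cost from s_i to every target
        row = np.full(M, BIG)
        for j in range(M):
            if eligible[i, j]:
                p = low_level(i, targets[j], constraints)
                if p is not None:
                    row[j] = len(p) - 1
        return row

    def assign_and_plan(Mc, constraints):            # target assignment + plan for the current node
        rows, cols = linear_sum_assignment(Mc)
        if Mc[rows, cols].max() >= BIG:
            return None                              # no feasible assignment under these constraints
        paths = [low_level(i, targets[j], constraints) for i, j in zip(rows, cols)]
        return int(Mc[rows, cols].sum()), dict(zip(rows, cols)), paths

    def first_conflict(paths):                       # returns the two candidate constraints, or None
        T = max(len(p) for p in paths)
        pad = [p + [p[-1]] * (T - len(p)) for p in paths]   # agents wait at their targets
        for t in range(T):
            for a, b in itertools.combinations(range(N), 2):
                if pad[a][t] == pad[b][t]:                  # vertex conflict
                    return [("vertex", a, pad[a][t], t), ("vertex", b, pad[b][t], t)]
                if t + 1 < T and pad[a][t] == pad[b][t + 1] and pad[a][t + 1] == pad[b][t]:
                    return [("edge", a, pad[a][t], pad[a][t + 1], t + 1),
                            ("edge", b, pad[b][t], pad[b][t + 1], t + 1)]
        return None

    root_M = np.vstack([cost_row(i, []) for i in range(N)])
    root = assign_and_plan(root_M, [])
    if root is None:
        return None
    tie = itertools.count()
    open_list = [(root[0], next(tie), [], root_M, root[1], root[2])]

    while open_list:
        c, _, cons, Mc, ta, paths = heapq.heappop(open_list)
        conflict = first_conflict(paths)
        if conflict is None:
            return ta, paths, c                       # conflict-free, hence optimal
        for new_con in conflict:                      # two child CT nodes, one per constrained agent
            agent = new_con[1]
            child_cons = cons + [new_con]
            child_M = Mc.copy()
            child_M[agent] = cost_row(agent, child_cons)   # only one row of M_c changes
            planned = assign_and_plan(child_M, child_cons)
            if planned is not None:
                heapq.heappush(open_list, (planned[0], next(tie), child_cons, child_M,
                                           planned[1], planned[2]))
    return None
```

A full implementation would additionally cache low-level paths and replace the plain assignment solve by the O(M^2) dynamic Hungarian row update discussed in the ITA-CBS section.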
http://arxiv.org/abs/2307.02090v1
20230705080626
Interactive Conversational Head Generation
[ "Mohan Zhou", "Yalong Bai", "Wei Zhang", "Ting Yao", "Tiejun Zhao" ]
cs.CV
[ "cs.CV" ]
We introduce a new conversation head generation benchmark for synthesizing behaviors of a single interlocutor in a face-to-face conversation. The capability to automatically synthesize interlocutors which can participate in long and multi-turn conversations is vital and offer benefits for various applications, including digital humans, virtual agents, and social robots. While existing research primarily focuses on talking head generation (one-way interaction), hindering the ability to create a digital human for conversation (two-way) interaction due to the absence of listening and interaction parts. In this work, we construct two datasets to address this issue, “ViCo” for independent talking and listening head generation tasks at the sentence level, and “ViCo-X”, for synthesizing interlocutors in multi-turn conversational scenarios. Based on ViCo and ViCo-X, we define three novel tasks targeting the interaction modeling during the face-to-face conversation: 1) responsive listening head generation making listeners respond actively to the speaker with non-verbal signals, 2) expressive talking head generation guiding speakers to be aware of listeners' behaviors, and 3) conversational head generation to integrate the talking/listening ability in one interlocutor. Along with the datasets, we also propose corresponding baseline solutions to the three aforementioned tasks. Experimental results show that our baseline method could generate responsive and vivid agents that can collaborate with real person to fulfil the whole conversation. Project page: <https://vico.solutions/>. Conversational Head Generation, Listening Head Generation, Video Synthesis Interactive Conversational Head Generation Mohan Zhou,  Yalong Bai,  Wei Zhang, Ting Yao, Senior Member, IEEE and Tiejun Zhao Corresponding Email: tjzhao@hit.edu.cn August 1, 2023 ================================================================================================================================================================ § INTRODUCTION Communication <cit.> is the fundamental social process for effectively exchanging information between people. A typical oral-based conversation ordinarily involves two interlocutors alternating the roles of speaker and listener to achieve a successful conversation through verbal and non-verbal reciprocal interactions, , auditory signals, gestures, hand signs, body postures, and . There have been extensive investigations into the study of human communication among sociology, psychology, human-computer interaction, and . Especially the oral-based conversation can be modeled by three dimensions: speaker, listener, and their mutual interactions. In real scenarios, all three dimensions are equally important: speakers produce and transmit messages, listeners receive messages and give feedback, and the mutual interactions connect them, making the conversation a closed loop. While existing researches mainly focus on single-sided communication – speaker-centric synthesis. As shown in <ref>, speech-to-gesture generation <cit.> learns a mapping between the audio signal and the speaker's pose. Speech to lip generation <cit.> aims to refine the lip-synchronization of a given video input. Talking head synthesis <cit.> tries to generate a vivid talking video of a specific speaker with facial animations from a still image and a clip of audio. However, these works solely concentrate on the speaking role, disregarding the essential role of the listener, let alone the crucial aspect of their mutual interactions. 
Notably, during a face-to-face conversation, listening behavior is even more critical, as proper feedback to the speaker (e.g., nod, smile, eye contact, etc.) is vital for successful communication <cit.>. Through real-time feedback, listeners show how engaged they are (e.g., interested, understanding, agreeing, etc.) with the speech, such that the conversation becomes easier for both participants. Only when both the listener and the speaker are modeled do we have the opportunity to model their interactions. In this paper, we first construct two datasets for conversational head generation: 1) ViCo, a dataset of in-the-wild speaker-listener clip pairs to highlight the listening part within one utterance. Three listening styles, based on positive, neutral, and negative attitudes, are exhibited by listeners, and qualified listeners are expected to respond actively to the speaker with verbal and non-verbal signals. It serves the purpose of talking and listening head generation at the sentence level. 2) ViCo-X, a dataset of dialogue scenarios recorded by experienced actors with twenty-six different dialog acts, i.e., communicative intentions, to emphasize the interaction part within multi-turn dialogues. Interlocutors talk, listen, and collaborate with a real person to complete the conversation. Conversational agents can thus be generated from it. Compared to speaker-centric datasets such as MEAD <cit.>, VoxCeleb2 <cit.>, and so on, ViCo introduces the listener role to the conversation, focuses on the receipt of and feedback to information, and offers a vital complement to speaker-centric tasks. It features real persons in actual conversations, thus enabling natural human interactions that reflect all three dimensions found in classical oral conversations. ViCo-X, in turn, records the behaviors of two interlocutors over multiple turns of a dialogue, paying more attention to the “interactions”. The multi-turn dialogues are brought in as a video corpus, opening up the possibility of long-term mutual interaction modeling. Based on ViCo and ViCo-X, we propose to formulate a conversational agent that aims to model one of the interlocutors in the conversation. It is designed as a digital twin of an interlocutor, as <ref> (d) shows. We further nominate two sub-tasks, expressive talking head generation and responsive listening head generation, to empower the agent with communication capabilities. Different from traditional talking head generation tasks that only transmit messages, expressive talking head generation introduces the listener role to speaker modeling, enabling speakers to notice the listener's feedback concurrently, i.e., bi-directional communication. Responsive listening head generation focuses on listener modeling and targets synthesizing listeners that are responsive, i.e., that dynamically respond to the speaker's verbal and non-verbal signals. Finally, to achieve head generation in multi-turn dialogues, a role transformer network is included to make the interlocutors' role switches seamless and natural. Hence, the conversational property in digital human generation tasks can be highlighted. This allows us to synthesize an interlocutor who can communicate naturally with real people. This capability is undoubtedly critical to a wide range of applications, including virtual anchors, digital influencers, customer representatives, and digital avatars in the Metaverse, wherever interactive and conversational communication is involved.
The main contributions are summarized as follows: * We construct two datasets ViCo and ViCo-X for listener modeling and conversational agent modeling. * Based on the datasets, we introduce a new task, conversational head generation, including expressive talking head generation and responsive listening head generation, to mimic one interlocutor in a conversation. * Furthermore, we introduce a baseline approach for synthesizing an interlocutor capable of engaging in speaker/listener interactions with real individuals. Our method is evaluated through comprehensive quantitative and qualitative analyses, including user studies, which substantiate its effectiveness and efficiency. § RELATED WORKS Speaker-centered video synthesis Given time-varying signals and a reference still image of the speaker, the talking head synthesis task aims to generate a vivid clip for the speaker with the time-varying signals matched. Based on the different types of time-varying signals, we can group these tasks into two groups: 1) audio-driven talking head synthesis <cit.>, 2) video-driven talking head synthesis <cit.>. The goal of the former one is to generate a video of the speaker that matches the audio. And the latter one is to generate videos of speakers with expressions similar to those in the video. They only matter to transmit messages to listeners while ignoring listener feedback. Our expressive talking head generation is an improved version: we do not just talk, we communicate. Listening behaviors modeling Many applications and research papers have focused on speaking, while “listener modeling” is seldom explored. Gillies  <cit.> first propose the data-driven method that can generate an animated character that can respond to speaker's voice. This lacks the supervision of speaker visual signals, which is incomplete for responsive listener modeling. And this method can not be applied to realistic head synthesis. Heylen  <cit.> further studied the relationship between listener and speaker audio/visual signals from a cognitive technologies view. SEMAINE <cit.> records the conversation between a human and a limited artificial listener. MAHNOB Laughter database <cit.> focuses on studying laughter's behaviors when watching funny video clips. Apart from these related works, ALICO <cit.> corpus about active listener analysis is the most relevant dataset to our proposed task. However, it has not been made public and also not constructed from the real scene conversations. Moreover, the main objective of ALICO is for psychology analysis, the data mode of that dataset is vastly different from the audio-video corpora in computer vision area. In the past few years, social AI intelligence <cit.> has been introduced to model the nonverbal social signals in triadic or multi-party interactions. Joo  <cit.> is concerned with the overall posture and head movement of a person, and Oertel  <cit.> aims to mine listening motion rules for robotics controlling. Both related works only deal with the speaking status and ignore the speaker's content. What's more, they rarely care about two-person interactions nor pay attention to model the face in detail, which is also different from our task. A detailed comparison to existing related datasets is shown in <ref>. As far as we know, this is the first time to introduce the learning-based listening head generation task in computer vision area. In this work, we propose a formulation of responsive listening head generation and construct a public ViCo dataset for this task. 
Meanwhile, a baseline method is proposed for listening head synthesis by perceiving both the speaker's audio/visual signals and a preset attitude. Speaker-Listener Coupling in Communication Classical two-party face-to-face communication usually includes two interlocutors' collaboration. They act as speakers or listeners in turn, eventually reach agreement on some points, and then finalize the conversation. We can notice the "coupling" in this procedure: each interlocutor plays both the speaker and listener roles. This phenomenon can be traced to the neuroscience level <cit.> and deserves to be studied for more realistic and emotional digital human modeling. § TASK OVERVIEW We present Interactive Conversational Head Generation, a novel task involving the synthesis of one interlocutor's head during the conversation. In particular, our aim is to comprehend the dialog act and the behaviors of the other interlocutor, such as audio cues, head motions, facial expressions, eye blinks, and more. Assume there are two interlocutors p and q. Given the input N-turn video sequences of q: V^q_N = {V^q_1, V^q_2, …, V^q_N}, the two interlocutors' audio signal sequences A = {A_1, A_2, …, A_N}, the speaker's dialog acts or listener's attitudes E = {e_1, e_2, …, e_N}, and the role indicator vector R = {r_1, r_2, …, r_N} (r_i=1 for speaker and r_i=0 for listener) of p, the conversational head generation task aims to generate the visual representations of p in this N-turn conversation, so that p can both listen and talk in one paradigm: V^p_N = 𝐆(A, E, R, V^q_N, v^p), where v^p represents the initial (identity) image of the interlocutor p. Note that V^·_i represents a video clip of an interlocutor and can be further subdivided as V^·_i = {v^·_i;1, v^·_i;2, …, v^·_i;|V_i|}, where v^·_i;j denotes the j-th image. Speaker dialog act definition Dialog act theory provides an empirically-grounded framework for computational modeling of communication, specifically focusing on linguistic and nonverbal behaviors within a dialogue. Informally speaking, dialog acts are such actions as providing information, apologizing for a misunderstanding, and giving feedback, among others. This theory remains independent of any specific dialog system and effectively captures the actions performed by the interlocutors. Roughly, it can be divided into 9 core orthogonal dimensions (e.g., task, turn management, time management, social obligations management, own/partner communication management, and so on) and hosts a total of 57 functions. These dialog acts are theoretically justified and domain-independent. They can also be recognized by human analysts from the "semantic" view. Listener attitude definition During conversation, after perceiving the signals from the speaker, the listener usually reacts with an active, responsive attitude, including epistemic attitudes (e.g., agree, disagree) and affective attitudes (e.g., like, dislike). In general, attitude potentially guides the listener's behavior and consequently affects the conversation. Also, different attitudes result in different facial expressions and behaviors of the listener <cit.>; e.g., a laugh appears as the most appropriate signal for like, a combination of a smile and raised eyebrows could be a possibility for interested, disagree can be conveyed by a head shake, and dislike is represented by a frown and tension of the lips, etc. Feature extraction In this work, we extract the energy feature, temporal domain feature, and frequency domain feature of the input audio, and model the facial expressions and head poses using 3DMM <cit.> coefficients.
For the audio, we extract the Mel-frequency cepstral coefficients (MFCC) feature with the corresponding MFCC Delta and Delta-Delta features. Furthermore, the energy, loudness, and zero-crossing rate (ZCR) are also embedded into the audio feature, denoted as s_i for each audio clip A_i. The audio feature sequence extracted from A can be denoted as S = {s_1, s_2, …, s_t}. There has been extensive research <cit.> in the area of 3D face reconstruction. Here, we leverage the state-of-the-art deep learning-based face reconstruction model <cit.> to get the 3DMM <cit.> coefficients and then drive the head movements and expression changes. Specifically, for any facial image, we can get the reconstruction coefficients {α, β, δ, p, γ}, which denote the identity, expression, texture, pose, and lighting, respectively. Further, we divide the 3D reconstruction coefficients into two parts: z = (α, δ, γ), representing relatively fixed, identity-dependent features, and m = (β, p), representing relatively dynamic, identity-independent features. After the face parameterization, we can adjust the motion and expression changes through m, and modify the identity through z. Task definition Given that the identity-dependent features (z) are weakly correlated with the interlocutor motion patterns, we use only the head motion and facial expression feature m for conversational head generation model training and then adapt the identity-dependent features of different interlocutor identities for visualization and evaluation. Thus our interactive conversational head synthesis task can be formulated as: M^p_N = 𝐆_m(A, E, R, M^q_N, m^p), V^p_N = 𝐆_v(M^p_N, z^p, v^p), where M^p_i is the dynamic feature predicted for this interlocutor in the i-th round. Here, 𝐆_m infers the 3DMM coefficients of p and 𝐆_v renders the coefficients into video. To better model 𝐆_m, we split it into expressive talking head modeling and responsive listening head modeling with respect to the role indicator R, and then combine them to formulate the whole conversational head. We start with a much simpler subproblem by considering a single turn of this conversation. Our aim is either to generate a listener based on the speaker's audio A, behaviors M^s, and the current dialog act e: M^l = 𝐆^l_m(A, e, M^s, m^l), where s and l denote the speaker and listener respectively, or to generate a speaker based on the audio A, the dialog act or attitude e, and the listener behaviors M^l: M^s = 𝐆^s_m(A, e, M^l, m^s). For the former task, we propose a framework for responsive listening head generation, which aims to fill the gap in listener modeling. For the latter task, we introduce listener behaviors M^l to classical talking head generation, named expressive talking head generation, to make the speaker communicate better with the listener. Finally, towards generation over N rounds while ensuring that the interlocutors' role switches are smooth, we continue to solve <ref> conversation-wide to model the agent p. 3D face rendering technology has been well studied in many recent works <cit.>. Moreover, the face rendering models are usually identity-specific, so one may need to train separately for each identity for better performance. To highlight the properties of the conversational head synthesis task and to decouple the critical factors, our proposed model primarily focuses on the motion-related and identity-independent 3D facial coefficients prediction task 𝐆_m, and uses the pretrained rendering model <cit.> for simplified visualization.
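To make the acoustic front-end described at the beginning of this part concrete, the following is a minimal Python sketch of how such a frame-level audio feature could be assembled with librosa; the 14-dim MFCC choice follows the implementation details reported later, while the use of RMS energy in dB as a stand-in for loudness is an assumption for illustration rather than the authors' exact configuration.

    import librosa
    import numpy as np

    def audio_features(wav_path, sr=16000, n_mfcc=14):
        # Load an audio clip A_i and compute its frame-level descriptors s_t.
        y, sr = librosa.load(wav_path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (14, T)
        d1 = librosa.feature.delta(mfcc)                         # MFCC Delta
        d2 = librosa.feature.delta(mfcc, order=2)                # MFCC Delta-Delta
        rms = librosa.feature.rms(y=y)                           # energy, (1, T)
        zcr = librosa.feature.zero_crossing_rate(y)              # (1, T)
        loud = librosa.amplitude_to_db(rms)                      # crude loudness proxy
        feats = np.concatenate([mfcc, d1, d2, rms, zcr, loud], axis=0)
        return feats.T                                           # (T, 45): one s_t per frame

In practice the acoustic frames must be aligned with the 30 fps visual stream (e.g., by choosing the hop length so that one audio frame corresponds to one video frame); that alignment is left implicit here. The visual side, by contrast, relies on the pretrained face reconstruction and rendering models mentioned above.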
And some video post-processing methods, such as video frame interpolation <cit.>, denoising <cit.>, super resolution <cit.>, in-painting <cit.>, can be used for better visual effects. § DATASET CONSTRUCTION §.§ ViCo Dataset To highlight the “listener” dimension, we first construct a speaker-listener dataset ViCo mainly for responsive listening-head generation by capturing conversational video clips from YouTube containing two people's frontal faces. To ensure the validity of a video clip, it must fulfill the following conditions:: * The screen should display only two individuals, with one person actively speaking and the other attentively listening. * Both individuals' frontal faces should be prominently visible, ensuring clear visibility. * The facial expressions of both individuals should appear natural and stable throughout the clip. * The listener should actively engage with the speaker, responding dynamically and in real time to foster an interactive exchange. The annotators were tasked with meticulously documenting the precise start and end times of each “valid” clip. Additionally, they were instructed to label the speaker's position (left or right of the screen) and discern the listener's attitude exhibited in the video. In ViCo, we group the attitudes into three categories: positive, negative and neutral. Positive attitude consists of agree, like, interested. Conversely, negative attitude consists of disagree, dislike, disbelieve, not interested. Cross-validation was applied among at least three annotators for each candidate clip for quality control. For each valid clip, we use the MTCNN to detect the face regions in each frame, and then crop and resize the detected face regions to 384×384 resolution image sequence for model training and evaluation, as shown in <ref>. <Ref> shows the statistical information of our annotated responsive listening head generation dataset ViCo. The proposed dataset contains rich samples of 483 video clips. We normalize all videos to 30 FPS. Moreover, our dataset is composed of high-quality videos (1920×1080) and audios (44.1/16bit) and contains diverse scenarios, including news interviews, entertainment interviews, TED discussions, variety shows and , which provides rich semantic information and various listener patterns. The video clips' length can range from 1 to 71 seconds. §.§ ViCo-X Dataset We have collected the in-the-wild dataset ViCo for talking head generation and listening head generation. However, limited by the video quality, conversation content, and scenario restrictions, the functional multi-turn conversational data is still unavailable. Thus we use higher quality devices to record conversational “interactions” in specific scenarios to build the multi-turn conversation dataset, ViCo-X. To the best of our knowledge, this is the first multi-turn multimodal interaction dataset (ViCo-X) to model human behaviors in conversation. The dataset consists of data in three modalities: audio, video, and text, and a fine-grained dialog act annotation is attached to each sentence in the conversation. We establish this multi-modality dataset to encourage more studies in “interactive face-to-face conversation understanding and modeling”. In this dataset, we focus on the multi-turn conversations in a functional and application-oriented scenario, thus using the Jing Dong Dialogue Corpus (JDDC) <cit.>, the large-scale real scenario Chinese E-commerce conversation corpus, as the transcriptions. 
Based on this corpus, we first filter out the sensitive and inappropriate dialogues. After that, we select representative and diverse dialogues, and further label the dialog acts (DA) following the ISO standard <cit.>; their distribution in our dataset is shown in <ref>. To ensure rigorous quality control in DA labeling, cross-validation was implemented using a minimum of three annotators, with the requirement that Fleiss' Kappa exceed 0.9. Then, given the transcriptions, during the recording process the actors are spaced two meters apart, with their bodies facing the camera and their backs against the green screen. Actors are asked to communicate messages and intentions through words, expressions, head movements, hand postures, or body gestures under the dialog act constraints. Speakers are required to express their emotions and speak their transcripts in a natural style, and listeners should be responsive and actively engaged with the speaker. There is a guidance team to guarantee the recording quality and the professionalism of the actors. <Ref> shows an image captured during recording. Our videos are recorded at 2048×1024 resolution and 30 fps, and the audio is recorded at 48 kHz with a bitrate of over 300 kbps. Thus, fine details are well preserved. With the recording finished, annotators were asked to indicate the start and end time of each sentence (accurate to 1/30 second) and to label the position of the speaker (left or right of the screen). Finally, these components make up the entire ViCo-X dataset, totaling 44k frames. In contrast to the ViCo dataset, our newly recorded ViCo-X dataset not only offers enhanced real-time performance but also boasts higher-quality recordings. ViCo-X is built upon a real E-commerce conversation corpus, covering twenty-five diverse scenarios that encompass the majority of interactions between customer service representatives and customers, which provides a wealth of semantic information and a variety of conversation patterns. It enables comprehensive studies of human-human interactions within specific scenarios and offers valuable insights into the dynamics of mutual interaction between virtual humans and real humans. In contrast to the existing talking head video datasets <cit.> and the listening head video dataset ViCo, which aim to model a sub-part of the conversation, ViCo-X takes a different viewpoint by focusing on modeling real conversations. It emphasizes the dynamic interaction between interlocutors who can alternate roles as both speakers and listeners, and prioritizes mutual engagement rather than unidirectional information perception or conveyance. § INTERACTIVE CONVERSATIONAL HEAD GENERATION This section demonstrates how to achieve interactive conversational head generation in multi-turn communicative scenarios as outlined in <ref>. §.§ Responsive Listening Head Generation According to psychological knowledge, an active listener tends to respond based on the speaker's audio <cit.> and visual signals <cit.>. At a given moment, the listener receives information from the speaker of that moment as well as information from history, and adopts a certain attitude to present actions in response to the speaker. Thus, the goal of our model is to estimate the conditional probability P(M^l | A, e, M^s, m^l), where the speaker motion M^s and audio A are time-varying signals that the listener should respond to, and the reference listener feature m^l and attitude e constrain the pattern of the entire generated sequence.
Inspired by the sequence-to-sequence model, a multi-layer sequential decoder module 𝐆_m is applied for modeling the time-sequential signals of the conversation. Unlike talking-head generation <cit.>, which accepts the entire audio input and then processes it using a bidirectional LSTM or attention layer, in our scenario the model receives a streaming input of the speaker, where future information is not available. For the speaker feature encoder, at each time step t, we first extract the audio feature s_t and the speaker's head and facial expression representation m_t^s, then apply non-linear feature transformations followed by a multi-modal feature fusion function f_am to get the encoded feature of the speaker. The representation of the reference listener m_1^l and the attitude e are embedded as the initial state h_1 for the sequential motion decoder. At each time step t, taking the speaker's fused feature f_am(s_t, m_t^s) as input, 𝐆^l_m in Eq. <ref> updates the current state h_t+1 and generates the listener motion m_t+1^l, which contains two feature vectors, i.e., β_t+1^l for the expression and p_t+1^l for the head rotation and translation. Our responsive listening head generator supports an arbitrary length of speaker input. The procedure can be formulated as: β_t+1^l, p_t+1^l = 𝐆^l_m(h_t, f_am(s_t, m_t^s)). This way, the responsive listener can be synthesized by referring to the speaker's verbal and non-verbal signals. §.§ Expressive Talking Head Generation Talking head generation aims to synthesize a clear and vivid talking-head video whose identity matches the given reference image(s) and whose motion (notably for the lips) corresponds to the driving source (e.g., a piece of audio speech). It has been well studied in recent years, and a classical 3DMM coefficients-based solution can be formulated as: β^s^*_{1,⋯,t}, p^s^*_{1,⋯,t} = 𝐆^s^*_m(A, e, m^s), where e could be any factor that affects speaker behaviors, e.g., emotion, dialog act, etc., while such efforts neglect the listener. From the perspective of psychology and social behavior, speakers should also react to the listener's non-verbal response when talking. Here we propose listener-aware speaker modeling, which aims to make the speaker more lively under listener supervision by introducing the concept of "interaction" to the speaker. As shown in the left subfigure of <ref>(d), different from <ref>(c), in which the driving source is audio signals only, we force the talking head generation model to receive the streaming input of listener frames and expect the synthesized video to be more suitable for the face-to-face conversation scenario. Since the given audio is available in full and can be modeled with bi-directional models <cit.> while the listener inputs are streaming, we divide the talking head modeling into two parts: audio-supervised speaker modeling and listener-supervised speaker modeling, and fuse the two for the final results. The former task has been well studied in the previous literature, and the latter task is similar to <ref>: at each time step t, we extract the listener's representation m_t^l, and encode the representation of the reference speaker m_1^s and the dialog act e as the initial state h'_1 for the listener-aware speaker decoder.
Then we can predict the expected speaker head motion and expression which react to the listener: β_t+1^s', p_t+1^s' = 𝐆_m^s'(h'_t, m_t^l). A weighted fusion is then performed to blend the outputs of the audio-supervised speaker model and the listener-supervised speaker model: β_t+1^s = α_ββ_t+1^s' + (1 - α_β) β_t+1^s^*, p_t+1^s = α_p p_t+1^s' + (1 - α_p) p_t+1^s^*, where α_β and α_p are trainable parameters that can be optimized during the training procedure. §.§ Interactive Conversational Head Generation In real scenarios, a conversation is usually multi-turn and each interlocutor can act as both the speaker and the listener. Existing works mostly focus on modeling a single role and neglect the constant transformation of each interlocutor between speaker and listener. To tackle this problem and move towards face-to-face conversation modeling, we propose a novel task: conversational head generation, which aims to synthesize a conversational head that can interact with a human in both talking and listening manners and whose state can be smoothly switched during role alternation, as illustrated in <ref>(d). Instead of using if/else toggles directly, which would make the transformation rigid and spiky, we introduce a smoother transformation trick to switch the agent role. To alternate the agent role from r_t-1 to r_t (r_t-1≠ r_t), the hidden state h_t is inferred from the previous state h_t-1: h_t = T(h_t-1, r_t-1, r_t), where T is the role switcher network bridging r_t-1 and r_t. In such a manner, we can extend the single-turn speaker/listener modeling to multi-turn agent modeling, formulating a complete conversational head. §.§ Optimization Objectives For optimization, regardless of the role in the communication, we use a consistent optimization strategy. With the ground truth patterns denoted as m = [m_2, m_3, ⋯, m_T], we drop the last prediction m_T+1 due to the lack of supervision and use the L_2 distance to optimize the training procedure: ℒ_gen = ∑_t=2^T‖β_t - β̂_t ‖_2 + ‖ p_t - p̂_t ‖_2. Moreover, a motion constraint loss ℒ_mot is applied to guarantee that the inter-frame continuity of the ground truth m is similar to that of the prediction m̂: ℒ_mot = ∑_t=2^T w_1‖μ(β_t) - μ(β̂_t) ‖_2 + w_2‖μ(p_t) - μ(p̂_t) ‖_2, where μ(·) measures the inter-frame change between the current frame and its adjacent previous frame, i.e., μ(β_t)=β_t-β_t-1, and w_1 and w_2 are weights that balance the motion constraint loss and the generation loss. The final loss function of this communication role can be formulated as: ℒ_total = ℒ_gen + ℒ_mot. For the responsive listening head generation (<ref>) and expressive talking head generation (<ref>), we optimize ℒ_total for the listener and speaker patterns, respectively. For the conversational head fine-tuning (<ref>), we optimize both patterns simultaneously. § EXPERIMENTS §.§ Implementation Details The ViCo dataset is divided into two parts: 1) a training set for learning listener/speaker patterns, and 2) a test set for validating the performance, generalizability, and transferability of our model. Identities do not overlap between the training and test sets. On the ViCo-X dataset, we perform five-fold cross-validation and report results on the whole dataset. We extract 45-dimensional acoustic features from the audio, including 14-dim MFCC, 28-dim MFCC-Delta, energy, ZCR, and loudness. There are multiple choices to implement 𝐆_m, such as standard sequential models like LSTM, GRU, or a Transformer decoder with a sliding window.
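As one possible concrete reading of the sequential decoder and the training objectives above, the sketch below implements an LSTM-based listener decoder and the ℒ_gen + ℒ_mot objective in PyTorch. The hidden size, the attitude embedding, and the 64 + 6 split of the motion vector into expression β and pose p follow common 3DMM conventions and are assumptions rather than the authors' exact settings.

    import torch
    import torch.nn as nn

    class ResponsiveListenerDecoder(nn.Module):
        # Sketch of G^l_m: streams fused speaker features through an LSTM whose
        # initial state encodes the reference listener m_1^l and the attitude e.
        def __init__(self, audio_dim=45, motion_dim=70, n_attitudes=3, hidden=256):
            super().__init__()
            self.fuse = nn.Sequential(                      # f_am: audio/visual fusion
                nn.Linear(audio_dim + motion_dim, hidden), nn.ReLU())
            self.att_emb = nn.Embedding(n_attitudes, hidden)
            self.init_state = nn.Linear(motion_dim + hidden, 2 * hidden)
            self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, motion_dim)       # outputs (beta, p)

        def forward(self, speaker_audio, speaker_motion, ref_listener, attitude):
            # speaker_audio: [B, T, 45], speaker_motion: [B, T, 70]
            # ref_listener:  [B, 70],    attitude: [B] integer labels
            fused = self.fuse(torch.cat([speaker_audio, speaker_motion], dim=-1))
            h0, c0 = self.init_state(
                torch.cat([ref_listener, self.att_emb(attitude)], dim=-1)).chunk(2, dim=-1)
            out, _ = self.rnn(fused, (h0.unsqueeze(0).contiguous(),
                                      c0.unsqueeze(0).contiguous()))
            return self.head(out)                           # [B, T, 70] listener motion

    def listener_loss(pred, target, exp_dim=64, w1=1e-3, w2=1.0):
        # L_total = L_gen + L_mot; the first exp_dim entries are beta, the rest are p.
        b_hat, p_hat = pred[..., :exp_dim], pred[..., exp_dim:]
        b, p = target[..., :exp_dim], target[..., exp_dim:]
        l_gen = (b - b_hat).norm(dim=-1).sum() + (p - p_hat).norm(dim=-1).sum()
        mu = lambda x: x[:, 1:] - x[:, :-1]                 # inter-frame change mu(.)
        l_mot = (w1 * (mu(b) - mu(b_hat)).norm(dim=-1).sum()
                 + w2 * (mu(p) - mu(p_hat)).norm(dim=-1).sum())
        return l_gen + l_mot

The role switcher T described above would then act on the hidden states (h, c) whenever r_t-1 ≠ r_t, for example as a pair of linear maps between the listener and speaker states, which is consistent with the implementation described next.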
Here we adopt LSTM for our baseline since it has been widely used in many similar applications such as motion generation <cit.>, and achieve stable state-of-the-art performance when training on small corpus <cit.>. The role switcher network is implemented just by two linear mappings (listener↔speaker) on hidden states. Our models are trained with AdamW optimizer with a learning rate of 2e-3 (decayed exponentially by 0.5 every 30 epochs), β_1=0.9 and β_2=0.999, for 300 epochs. For all experiments, we keep the same hyper-parameter for fair and set w_1 to 1e-3 and w_2 to 1. §.§ Evaluation Metrics Since we use a detached renderer rather than an end-to-end pipeline, we can divide the assessment into two sides: the performance of listener generator _m and the visual effects of renderer _v. For the former one (the Generator Part), we use L_1 distance between the generated features and the ground-truth features (FD) to ensure the predicted fine-grained head and expression coefficients similar to the ground-truth. And for the latter one (the Renderer Part), we select the Fréchet Inception Distance (FID) <cit.>, Structural SIMilary (SSIM), Peak Signal-to-Noise Ratio (PSNR) and Cumulative Probability of Blur Detection (CPBD) to evaluate the visual effects of renderer. Besides, the cosine similarity from ArcFace is introduced for identity-preserving measurement. And for talking head generation, we apply the Landmark Distance (LMD) for the fine lip movements matching, and the lip-sync metric by SyncNet <cit.> for the synchronization of lip motion with the input audio. Since we did not contribute to the renderer part and instead used a pre-trained renderer, we only assess the model performance with the Generator Part and leave the Renderer Part to provide a basic criterion for evaluation of future realistic face rendering research work. In order to properly evaluate the performance of the conversational head generation system, it is necessary to conduct user studies. To ensure fair results, 20 volunteers will be recruited to rate the synthesized results. §.§ Quantitative Results Responsive Listening Head Generation In <Ref>, we report the results across different listening head generation methods, including 1) “Random”: generate frames from reference image but injecting small perturbations in a normal distribution to mimic random head motion. 2) “Mirror”: copying the speaker's motion patterns ^S. 3) “Responsive”: the proposed responsive listening head generation method (<ref>). The results show that in both datasets, ViCo and ViCo-X, the “Responsive” listeners significantly outperformed the traditional non-parametric listeners (“Random” and “Mirror”). This significant difference in performance can be attributed to the introduction of speakers' audio and visual signals. This is also in line with our psychological perception that the listener doesn't just do some mechanical or meaningless actions, it coordinates with the speaker's behaviors in real-time and dynamically. Expressive Talking Head Generation <Ref> shows how the listener's visual signals affect speaker behaviors. In ViCo and ViCo-X, the audio + listener speaker shows a similar trend of improvements compared to audio-only speakers. With the integration of listener visual signals, the speakers can exhibit more realistic expressions. These findings support that communication and presentation are distinct, and monologues are not leading to communication. 
Conversational Head Generation To assess the difference between an agent that simply switches between separate speaker and listener models and our carefully designed agent, we aggregated the best listeners and speakers on the ViCo-X dataset to build a baseline and compared it with our method. Results in <ref> reveal that the performance of our conversational agent was superior to the baseline on the majority of benchmarks. A plausible explanation for the decline in performance observed in LMD, AVOffset, and AVConf could be attributed to our conversational agent's emphasis on fostering a smoother transition in role alternation. This prioritization aims to enhance naturalness and rationality in conversations. §.§ Qualitative Results Further, to gain more intuitive insight into these methods, we also visualized the results of the generations to compare the differences between the different configurations. <Ref> shows the results of listening head generation and talking head generation on the ViCo and ViCo-X datasets. For each task and dataset, a random sequence was chosen and then down-sampled to six frames to qualitatively visualize the generated results against the ground truth video. The figure reveals that our model is generally able to capture listener and speaker patterns, such as eye, mouth, and head motions; even if they differ from the ground truth, they still appear plausible. Overall, in terms of visual plausibility, our results outperform other methods, showcasing more convincing outcomes characterized by realistic head motions and expressive changes. We plot the conversational head generation results on the ViCo-X dataset in <ref>. We employ the same processing strategy as described earlier. Notably, during role alternation, our method exhibits a smoother transition in both motion and expression changes, resulting in a more natural performance overall. §.§ User Study Given one interlocutor's ground-truth video and the other's synthesized video, twenty volunteers participated in the study and were asked to rate results on the whole ViCo-X dataset. We adopt the commonly-used five-point scale for each instance: score 1 indicates the synthesized heads cannot speak expressively or listen responsively, while score 5 implies the generated heads are consistent with human subjective perception and are even able to communicate with real people. In this test, our synthesized heads receive an average score of 3.35± 0.12, which is superior to the Listener + Speaker baseline at 3.02± 0.35. This verifies that our model is capable of mimicking interlocutor behaviors in conversation and produces responsive, vivid, and natural conversational heads. In addition, we also explore the upper bound of the ViCo-X dataset. The score rated on ground-truth videos is 4.4± 0.10, indicating that there is ample room for improvement. § CONCLUSION In this paper, we address the problem of synthesizing conversational interaction in current digital humans by proposing two datasets, ViCo and ViCo-X. On ViCo, we propose the responsive listening head generation task, which involves generating video clips that respond attentively to a speaker by comprehending their facial signals and voices. Additionally, we introduce the expressive speaker generation task to enable "conversational" speakers to perceive the listener's reactions while speaking. On ViCo-X, we define the conversational head generation task, which aims to synthesize a conversational agent given the other interlocutor's behaviors.
With agent modeling, digital humans can talk, listen, and collaborate with each other to complete the conversation. We anticipate that ViCo and ViCo-X would prove valuable in human-computer interaction research and virtual human applications. Given the prevalence of communication across various domains such as doctor–patient interactions, teacher–student dialogues, salesperson–customer exchanges, and more, our proposed conversational head generation task fills a crucial void in face-to-face communication modeling. This advancement holds the potential to drive applications in those scenarios, opening new avenues for development and innovation. IEEEtran
http://arxiv.org/abs/2307.06109v1
20230704124428
SpComp: A Sparsity Structure-Specific Compilation of Matrix Operations
[ "Barnali Basak", "Uday P. Khedker", "Supratim Biswas" ]
cs.MS
[ "cs.MS" ]
for(i=0; i<n; i++){
  for(j=0; j<i; j++){
    for(k=0; k<j; k++){
      S_1: A[i][j] -= A[i][k] × A[j][k];
    }
    if(A[j][j] != 0)
      S_2: A[i][j] /= A[j][j];
  }
  for(l=0; l<i; l++){
    S_3: A[i][i] -= A[i][l] × A[i][l];
  }
  S_4: A[i][i] = sqrt(A[i][i]);
}

for(i=0; i<3; i++)
  S_4: A_val[2*i+0] = sqrt(A_val[2*i+0]);
if(A_val[4] != 0)
  S_2: A_val[6] /= A_val[4];
S_3: A_val[7] -= A_val[6] × A_val[6];
S_4: A_val[7] = sqrt(A_val[7]);
if(A_val[7] != 0)
  S_2: A_val[10] /= A_val[7];
S_3: A_val[11] -= A_val[10] × A_val[10];

for(i=0; i<15439; i++){
  S_4: valA[i] = sqrt(valA[i]);
}

for(i=0; i<n; i++){
  for(j=0; j<n; j++)
    S: Y[i] += A[i][j] × X[j];
}

for(i=0; i<n; i++){
  for(j=ptr[i]; j<ptr[i+1]; j++)
    y[i] += A_val[j] × X[A_col[j]];
}

for(i=0; i<5; i++){
  for(j=ptr[i]; j<ptr[i+1]; j++){
    if(X[A_col[j]] != 0)
      Y[i] += A_val[j] × X[A_col[j]];
  }
}

for(i=0; i<n; i++){
  y[i] += A_val[i] × X[i];
}

for(i=0; i<=2; i++){
  Y_val[i] += A_val[i+2] × X_val[0];
}
Y_val[2] += A_val[5] × X_val[1];

Indian Institute of Technology Bombay India bbasak@cse.iitb.ac.in Indian Institute of Technology Bombay India uday@cse.iitb.ac.in Indian Institute of Technology Bombay India sb@cse.iitb.ac.in Sparse matrix operations involve a large number of zero operands, which makes most of the operations redundant. The amount of redundancy is magnified when a matrix operation repeatedly executes on sparse data. Optimizing matrix operations for sparsity involves either reorganization of data or reorganization of computations, performed either at compile-time or run-time. Although compile-time techniques avoid introducing run-time overhead, their application is either limited to simple sparse matrix operations generating dense output and handling immutable sparse matrices, or requires manual intervention to customize the technique to different matrix operations. We contribute a sparsity structure-specific compilation technique, called SpComp, that optimizes a sparse matrix operation by automatically customizing its computations to the positions of non-zero values of the data. Our approach neither incurs any run-time overhead nor requires any manual intervention. It is also applicable to complex matrix operations generating sparse output and handling mutable sparse matrices. We introduce a data-flow analysis, named Essential Indices Analysis, that statically collects the symbolic information about the computations and helps the code generator to reorganize the computations. The generated code includes piecewise-regular loops, free from indirect references and amenable to further optimization. We see a substantial performance gain by SpComp-generated Sparse Matrix-Sparse Vector Multiplication (SpMSpV) code when compared against the state-of-the-art TACO compiler and piecewise-regular code generator. On average, we achieve ≈ 79% performance gain against TACO and ≈ 83% performance gain against the piecewise-regular code generator. When compared against the CHOLMOD library, SpComp-generated sparse Cholesky decomposition code showcases ≈ 65% performance gain on average. SpComp: A Sparsity Structure-Specific Compilation of Matrix Operations Barnali Basak, Uday P. Khedker, and Supratim Biswas August 1, 2023 ====================================================================== § INTRODUCTION Sparse matrix operations are ubiquitous in computational science areas like circuit simulation, power dynamics, image processing, structure modeling, data science, etc. The presence of a significant amount of zero values in the sparse matrices makes a considerable amount of the computations involved in the matrix operation redundant. Only the computations computing non-zero values remain useful or non-redundant.
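The redundancy argument can be made concrete with a small Python sketch. The numeric values below are invented for illustration; only the sparsity structures mirror the SpMSpV example used later in the paper. Variant (a) reorganizes the data into CSR and still pays for indirect references and for products with zero entries of X, while variant (b) reorganizes the computations so that only the products known to produce non-zero contributions are executed.

    import numpy as np

    # Hypothetical 4x4 sparse matrix and sparse vector (values are made up).
    A = np.array([[5., 0., 2., 0.],
                  [0., 3., 0., 0.],
                  [0., 4., 0., 0.],
                  [0., 1., 0., 6.]])
    X = np.array([0., 2., 0., 7.])          # non-zeros only at indices 1 and 3

    # (a) Reorganizing data: build CSR storage, run the generic indirect loop.
    ptr, col, val = [0], [], []
    for i in range(4):
        for j in range(4):
            if A[i, j] != 0:
                col.append(j); val.append(A[i, j])
        ptr.append(len(col))

    Y = np.zeros(4)
    for i in range(4):
        for k in range(ptr[i], ptr[i + 1]):
            Y[i] += val[k] * X[col[k]]      # indirect reference X[col[k]]

    # (b) Reorganizing computations: execute only the (i, j) pairs whose
    # operands are both non-zero, derived from the two sparsity structures.
    essential = [(1, 1), (2, 1), (3, 1), (3, 3)]
    Y2 = np.zeros(4)
    for i, j in essential:
        Y2[i] += A[i, j] * X[j]             # direct references, no wasted work

    assert np.allclose(Y, Y2) and np.allclose(Y, A @ X)

Variant (a) performs six multiplications, two of which produce zero, whereas variant (b) performs exactly the four useful ones; the gap widens with the size of the matrix and with every repeated execution.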
In simulation-like scenarios, a matrix operation repeatedly executes on sparse matrices whose positions or indices of non-zero values, better known as sparsity structures, remain unchanged although the values in these positions may change. For example, Cholesky decomposition in circuit simulation repeatedly decomposes the input matrix in each iteration until the simulation converges. The input matrix represents the physical connections of the underlying circuit and thus, remains unchanged throughout the simulation, although the values associated with the connections may vary. In such a scenario, the redundant computations caused by computing zero values get multi-fold and therefore, it is prudent to claim substantial performance benefits by altering the execution based on the fixed sparsity structure. The state-of-the-art on avoiding redundant computations categorically employs either [(a)] * reorganizing sparse data in a storage format such that the generic computations operate only on the non-zero data or * reorganizing computations to restrict them to non-zero data without requiring any specific reorganization of the data. Figure <ref> demonstrates the avoidance of redundant computations of Sparse Matrix-Sparse Vector Multiplication operation (SpMSpV) by reorganizing data and reorganizing computations. The operation multiplies sparse matrix A to sparse vector X and stores the result in the output sparse vector Y. Figure <ref>(b) presents the computations on reorganized matrix A stored using Compressed Sparse Row (CSR) format. As a result, the non-zero values are accessed using indirect reference X[A_col[j]]. On the contrary, Figure <ref>(c) presents the reorganized SpMSpV computations customized to the positions of the non-zero elements of Y. Clearly, reorganized computations result in direct references and a minimum number of computations. Approaches reorganizing computations avoid redundant computations by generating sparsity structure-specific execution. This can be done either at run-time or at compile-time. Run-time techniques like inspection-execution <cit.> exploit the memory traces at run-time. The executor executes the optimized schedule generated by the inspector after analyzing the dependencies of the memory traces. Even with compiler-aided supports, the inspection-execution technique incurs considerable overhead at each instance of the execution and thus increases the overall runtime, instead of reducing it to the extent achieved by compile-time optimization approaches. Instead of reorganizing access to non-zero data through indirections or leaving its identification to runtime, it is desirable to symbolically identify the non-zero computations at compile-time and generate code that is aware of the sparsity structure. The state-of-the-art that employs a static approach can be divided into two broad categories: * A method could focus only on the sparsity structure of the input, thereby avoiding reading the zero values in the input wherever possible. This approach works only when the output is dense and the memory trace is dominated by the sparsity structure of a single sparse data. Augustine et al. <cit.> and Rodríguez et al. <cit.> presented a trace-based technique to generate sparsity structure-specific code for matrix operations resulting in dense data. * A method could focus on the sparsity structure of the output, thereby statically computing the positions of non-zero elements in the output from the sparsity structures of the input. 
This approach works when the output is sparse or the memory trace is dominated by the sparsity structures of multiple sparse data. This can also handle changes in the sparsity structure of the input, caused by the fill-in elements in mutable cases. Such a method can involve a trace-based technique that simply unwinds a program and parses it based on the input to determine the sparsity structure of the output. Although sounds simple, the complexity of this technique bounds to the computations involved in the matrix operation and size of the output, making it a resource-consuming and practically intractable for complex matrix operations and large-sized inputs. Alternatively, a graph-based technique like Symbolic analysis <cit.> uses matrix operation-specific graphs and graph algorithms to deduce the sparsity structure of the output. Cheshmi et al. <cit.> apply this analysis to collect symbolic information and enables further optimization. The complexity of this technique is bound to the number of non-zero elements of the output, instead of its size, making it significantly less compared to the compile-time trace-based technique. However, the Symbolic analysis is matrix-operation specific, so the customization of the technique to different matrix operations requires manual effort. We propose a data-flow analysis-based technique, named Sparsity Structure Specific Compilation (SpComp), that statically deduces the sparsity structure of the output from the sparsity structure of the input. Our method advances the state-of-the-art in the following ways. * In comparison to the run-time approaches, our method does not depend on any run-time information, making it a purely compile-time technique. * In comparison to the piecewise-regular code generator <cit.>, our method handles matrix operations resulting in sparse output, including mutable cases. * In comparison to the compile-time trace-based technique, the complexity of our method is bound to the number of non-zero elements present in the output which is significantly less than the size of the output, making it a tractable technique. * In comparison to the Symbolic analysis <cit.>, our method is generic to any matrix operation, without the need for manual customization. SpComp takes a program performing matrix operation on dense data and sparsity structures of the input sparse data to compute the sparsity structure of the output and derive the non-redundant computations. The approach avoids computing zero values in the output wherever possible, which automatically implies avoiding reading zero values in the input wherever possible. Since it is driven by discovering the sparsity structure of the output, it works for matrix operations producing sparse output and altering the sparsity structures of the input. From the derived symbolic information, SpComp generates the sparsity structure-specific code, containing piecewise-regular and indirect reference-free loops. Figure <ref>(a) shows the inputs to the SpComp; [(i)] * the code performing forward Cholesky decomposition of symmetric positive definite dense matrix A and * the initial sparsity structure of the input matrix 494_bus selected from the Suitesparse Matrix Collection <cit.>. Figure  <ref>(b) illustrates the output of SpComp; [(i)] * fill-in elements of 494_bus along with the initial sparsity structure generate the sparsity structure of the output matrix. * Cholesky decomposition code, customized to 494_bus sparse matrix. 
Note that, although there exists a read-after-write (true) dependency from statement S_3 to statement S_4 in the program present in Figure <ref>(ai), a few instances of S_4 hoist above S_3 in the execution due to spurious dependencies produced by zero-value computations. The rest of the paper is organized as follows. Section <ref> provides an overview of SpComp. Section <ref> describes the first step of SpComp, which identifies the indices involving the computations leading to non-zero values through a novel data flow analysis called Essential Indices Analysis. Section <ref> explains the second step which generates the code. Section <ref> presents the empirical results. Section <ref> describes the related work. Section <ref> concludes the paper. § AN OVERVIEW OF SPCOMP As depicted in Figure <ref>, SpComp has two modules performing (a) essential indices analysis and (b) code generation. The essential indices analysis module constructs an Access Dependence Graph (ADG) from the program and performs a data flow analysis, named Essential Indices Analysis. The analysis effectively identifies the essential data indices of the output matrix and essential iteration indices of the iteration space. Essential data indices identify the indices of non-zero elements which construct the underlying sparse data storage beforehand, without requiring any modification during run-time. Essential iteration indices identify the iteration points in the iteration space that must execute to compute the values of the non-zero elements and facilitate the generation of piecewise-regular loops. For our motivating example in Figure <ref>, the essential indices analysis generates the essential data indices and essential iteration indices for statements S_1, S_2, S_3, and S_4 as shown in Figure <ref>. Note that this analysis is an abstract interpretation of the program with abstract values, we do not compute the actual expressions with concrete values. The input sparse matrix 494_bus is represented by the array A in the code, which is both the input and the output matrix. The essential data indices of the output matrix comprise the indices of non-zero elements of the input matrix and the fill-in elements denoting the indices whose values are zero in the input but become non-zero in the output. [The converse (i.e. a non-zero value of an index in the input becoming zero at the same index in the output) is generally not considered explicitly in such computations and are performed by default.] Fill-in element (38,27) identifies A[38][27] whose value changes from zero to non-zero during the execution of the program. The essential iteration index (0) of statement S_4 identifies A[0][0]=sqrt(A[0][0]) as a statement instance that should be executed to preserve the semantics of the program. At first, the code generation module finds the timestamps of the essential iteration indices and lexicographically orders them to construct the execution trace, without any support for the out-of-order execution. Then it simply finds the pieces of execution trace that can be folded back into regular loops. The module also constructs the memory access trace caused by the execution order and mines the access patterns for generating the subscript functions of the regular loops. Note that, the generated code keeps the if conditions to avoid division by zero during execution. 
The execution trace ⟨ S_4,0⟩ → ⟨ S_4,1⟩ → ⟨ S_4,2⟩ → ⟨ S_2,3,2⟩ → ⟨ S_3,3,2⟩ → ⟨ S_4,3⟩ → ⟨ S_2,4,3⟩ → ⟨ S_3,4,3⟩ → … is generated by the lexicographic order of the timestamps where ⟨ S_k,i,j⟩ represents iteration index (i,j) of statement S_k. It is evident that the piece of execution trace ⟨ S_4,0⟩ → ⟨ S_4,1⟩ → ⟨ S_4,2⟩ can be folded back in a loop. The corresponding memory access trace A[0][0] → A[1][1] → A[2][2] creates a one-dimensional subscript function valA[2× i+0]|0≤ i ≤ 2. valA represents the one-dimensional array storing the non-zero values of A and 2×i+0|0≤ i≤ 2 represents the positions of A[0][0], A[1][1], and A[2][2] in the sparse data storage. The generated code snippet is presented in Figure <ref>(bii). § ESSENTIAL INDICES ANALYSIS In sparse matrix operations, we assume that the default values of matrix elements are zero. The efficiency of a sparse matrix operation lies in avoiding the computations leading to zero or default values. We call such computations as default computations. Computations leading to non-zero values are non-default computations. We refer to the data indices of all input and output matrices holding non-zero values as essential data indices. As mentioned before, we have devised a data flow analysis technique called Essential Indices analysis that statically computes the essential data indices of output matrices from the essential data indices of input matrices. We identify all the iteration indices of the loop computing non-default computations as essential iteration indices. Here we describe the analysis by defining Access Dependence Graph as the data flow analysis graph in Subsection <ref>, the domain of data flow values in Subsection <ref>, and the transfer functions with the data flow equations in Subsection <ref>. Finally, we prove the correctness of the analysis in Subsection <ref>. §.§ Access Dependence Graph Conventionally, data flow analysis uses the Control Flow Graph (CFG) of a program to compute data flow information at each program point. However, CFG is not suitable for our analysis because the set of information computed over CFG of a loop is an over-approximation of the union of information generated in all iterations. Thus, information gets conflated across all iterations, and no distinction exists between the fact that information generated in i-th iteration cannot be used in iterations j if j≤ i. SpComp accepts static control parts (SCoP) <cit.> of a program which is a sequence of perfectly and imperfectly nested loops where loop bounds and subscript functions are affine functions of surrounding loop iterators and parameters. For essential indices analysis, we model SCoP in the form of an Access Dependence Graph (ADG). ADG captures [(a)] * data dependence, i.e., accesses of the same memory locations, and * data flow, i.e., the flow of a value from one location to a different location. They are represented by recording flow, anti and output data dependencies, and the temporal order of read and write operations over distinct locations. This modeling of dependence is different from the modeling of dependence in a Data Dependence Graph (DDG) <cit.> that models data dependencies among loop statements which are at a coarser level of granularity. ADG captures dependencies among access operations which are at a finer level of granularity compared to loop statements. 
Access operations on concrete memory locations, i.e., the locations created at run-time, are abstracted by access operations on abstract memory locations that conflate concrete memory locations accessed by a particular array access expression. For example, the write access operations on concrete memory locations at statement S_1 in Figure <ref>(a) are accessed by the access expression { A[i][j]|0≤ i<n, 0≤ j<i}. ADG handles the affine subscript function of the form ∑_k=1^na_k× i_k + c, assuming a_k and c be constants and i_k be an iteration index. A set of concrete memories read by an affine array expression { A[f(i_1,…,i_n)] |lb_l≤ i_l<ub_l, 1≤ l<n} at statement S_k is denoted by an access operation r^k_A[f(i_1,…,i_n)], where lb_l and ub_l denote lower and upper bounds of regular or irregular loops. Similarly, a set of concrete memories written by the same array expression at statement S_l is denoted by an access operation w^l_A[f(i_1,…,i_n)]. Note that, the bounds on the iteration indices i_1,…,i_n become implicit to the access operation. For code generation, we concretize an abstract location A[i][j] into concrete memory locations A[1][1], A[1][2] etc. using the result of essential indices analysis as explained in Section <ref>. ADG captures the temporal ordering of access operations using edges annotated with a dependence direction that models the types of dependencies. Dependence direction <, ≤ and < model flow, anti and output data dependencies, whereas dependence direction = captures the data flow between distinct memory locations. Note that dependence direction > is not valid as the source of a dependency can not be executed after the target. Consider statement Y[i]=Y[i]+A[i][j]× X[j] in Figure <ref>(a). It is evident that there is an anti dependency from the read access of {Y[i]|0≤ i<n}, denoted as r_Y[i], to write access of {Y[i]|0≤ i<n}, denoted as w_Y[i]. The anti-dependency is represented by an edge r_Y[i] w_Y[i] where the direction of the edge indicates the ordering and the edge label ≤ indicates that it is an anti-dependency. Similarly, there is a flow dependency from w_Y[i] to r_Y[i]. This is represented by an edge w_Y[i] r_Y[i] where direction of the edge indicates the ordering and the edge label < denotes the flow dependency. The data flow from r_A[i][j] to w_Y[i] and r_X[j] to w_Y[i] are denoted by the edges r_A[i][j] w_Y[i] and r_X[j] w_Y[i] respectively where the directions indicate the ordering and the edge labels = indicate that these are data flow. Each vertex v in the ADG G=(V, E) is called an access node, and the entry and exit points of each access node are access points. Distinctions between entry and exit access points are required for formulating transfer functions and data flow equations in Section <ref>. Formally, ADG captures the temporal and spatial properties of data flow. Considering every statement instance as atomic in terms of time, the edges in an ADG have associated temporal and spatial properties, as explained below. Let r_P and w_Q respectively denote read and write access nodes. * For edge r_P w_Q where mem(r_P)∩ mem(w_Q)=∅, edge label = implies that w_Q executes in the same statement instance as r_P and captures data flow. * For edge r_P w_Q where mem(r_P)∩ mem(w_Q)≠∅, edge label ≤ implies that w_Q executes either in the same statement instance as r_P or in a statement instance executed later and captures anti dependencies. 
* For edge w_P r_Q where mem(w_P)∩ mem(r_Q)≠∅, edge label < implies that r_Q executes in a statement instance executed after the execution of w_P and captures flow dependencies. * For edge w_P w_Q where mem(w_P)∩ mem(w_Q)≠∅, edge label < implies that w_Q executes in a statement instance executed after the execution of w_P and captures output dependencies. §.§ Domain of Data Flow Values Let the set of data indices of an n-dimensional matrix A of size m_1×…× m_n be represented as 𝒟_A^n where A has data size m_k at dimension k. Here 𝒟_A^n={d⃗ |(0,…,0)≤ d⃗ ≤ (m_1,…,m_n)} where vector d⃗ represents a data index of matrix A. If A is sparse in nature then d⃗ is an essential data index if A[d⃗ ] 0. The set of essential data indices of sparse matrix A is represented as 𝔻 _A^n such that 𝔻 _A^n⊆𝒟_A^n. For example, 𝒟 _A^2 of a two-dimensional matrix of size 3× 3 is {(0,0), (0,1), (0,2), (1,0), (1,1), (1,2), (2,0), (2,1), (2,2)}, the set of all data indices of A. If A is sparse with non-zero elements at A[0][0], A[1][1], and A[2][2] then 𝔻 _A^2 = {(0,0), (1,1), (2,2)}. Thus 𝔻 _A^2⊆𝒟 _A^2. In this analysis, we consider the union of all data indices of all input and output matrices as data space 𝒟. The union of all essential data indices of all input and output matrices is considered as the domain of data flow values 𝔻 such that 𝔻⊆𝒟. For our analysis each essential data index of 𝔻 is annotated with the name of the origin matrix. For an n-dimensional matrix A each essential data index thus represents an n+1 dimensional vector d⃗ where the 0-th position holds the name of the matrix, A. For example, in the matrix-vector multiplication of Figure <ref>(a), let 𝔻 _A^2={(0,0),(1,1),(2,2)}, 𝔻 _X^1={(1),(2)} and 𝔻 _Y^1={(1),(2)}. Thus, the domain of data flow values 𝔻 is ={(A,0,0), (A,1,1), (A,2,2), (X,1), (X,2), (Y,1), (Y,2)}. In the rest of the paper data index d⃗ is denoted as d for convenience. The value at data index d is abstracted as either zero (Z) or nonzero (NZ) based on the concrete value at d. Note that the domain of concrete values, cval, at each data index d is a power set of ℝ, which is the set of real numbers. Our approach abstracts the concrete value at each d by val(d) defined as follows. val(d) = Z if cval(d) = {0} NZ otherwise The domain of values at each data index d forms a component lattice 𝕃̂=⟨{ Z, NZ},⊑⟩, where NZ⊑ Z and ⊑ represents the partial order. A data index d is called essential if val(d)=NZ. As the data flow value at any access point holds the set of essential data indices 𝔻^' such that 𝔻^'⊆ 2^𝒟, the data flow lattice is thus represented as ⟨ 2^𝒟,⊇⟩ where partial order is a superset relation. §.§ Transfer Functions This section formulates the transfer functions used to compute the data flow information of all data flow variables and presents the algorithm performing Essential Indices Analysis. For an ADG, G=(V,E), the data flow variable Gen_n captures the data flow information generated by each access node n∈ V, and the data flow variable Out_n captures the data flow information generated at the exit of each node n∈ V. Let 𝔻 ^0 be the initial set of essential data indices that identifies the indices of non-zero elements of input matrices. In Essential Indices Analysis, Out_n is defined as follows. Out_n = 𝔻^0 if Pred_n=∅ ⋃_ p∈ Pred_n( Out_p∪ Gen_n) if n is write ⋃_ p∈ Pred_n Out_p otherwise Pred_n denotes the set of predecessors of each node n in the access dependence graph. Equation <ref> initializes Out_n to 𝔻 ^0 for access node n that does not have any predecessor. 
Read access nodes do not generate any essential data indices; they only combine the information of their predecessors. A write access node typically computes the arithmetic expression associated with it and generates a set of essential data indices as Gen_n. Finally, Gen_n is combined with the out information of its predecessors to compute Out_n of write node n. Here we consider the statements associated with write access nodes and admissible in our analysis. They are of the form e_1:d=d^', e_2:d=op(d^') and e_3:d=op(d^',d^'') where e_1 is copy assignment, e_2 uses unary operation and e_3 uses binary operation. Here d, d^', and d^'' are data indices. Instead of concrete values, the operations execute on abstract values { Z,NZ}. Below we present the evaluation of all valid expressions on abstract values. Note that the unary operations negation, square root, etc. return the same abstract value as input.[Unary operations such as floor, ceiling, round off, truncation, saturation, shift etc. that may change the values are generally not used in sparse matrix operations.] Thus evaluation effect of expressions e_1 and e_2 are combined into the following. val(d) = val(d^') We consider the binary operations addition, subtraction, multiplication, division, and modulus. Thus the evaluation of expression e_3 is defined as follows for the aforementioned arithmetic operations. * If op is addition or subtraction then val(d) = NZ if val(d^')= NZ∨ val(d^'')= NZ Z otherwise * If op is multiplication then val(d) = NZ if val(d^')= NZ∧ val(d^'')= NZ Z otherwise * If op is division or modulus then val(d) = val(d^') if val(d^'')≠ Z Note that, the addition and subtraction operations may result in zero values due to numeric cancellation while operating in the concrete domain. This means cval(d) can be zero when cval(d^') and cval(d^'') are non-zeroes. In this case, our abstraction over-approximates cval(d) as NZ, which is a safe approximation. Division or modulus by zero is an undefined operation in the concrete domain and is protected by the condition on the denominator value cval(d^'')≠ 0. We protect the same in the abstract domain by the condition val(d^'')≠ Z, as presented in Equation <ref>. Although handled, the conditions are still part of the generated code to preserve the semantic correctness of the program. For example, the if conditions in the sparsity structure-specific Cholesky decomposition code in Figure <ref>(bii) preserve the semantic correctness of the program by prohibiting the divisions by zero values in the concrete domain. In this paper, we limit our analysis to simple arithmetic operations. However, similar abstractions could be defined for all operations by ensuring that no possible non-zero result is abstracted as zero. Now that we have defined the abstract value computations for different arithmetic expressions, below we define Gen_n, which generates the set of essential data indices for write access node n. Gen_n = { d | (d=d^'∨ d=op(d^')) ∧(d^'∈ Out_p, p∈ Pred_n) ∧(val(d)=NZ)} { d | ( d= op(d^', d^'')) ∧( d^'∈ Out_p, p∈ Pred_n ∨ d^''∈ Out_q, q∈ Pred_n) ∧(val(d)= NZ)} ∅ otherwise In the case of a binary operation, predecessor node p may or may not be equal to predecessor node q. The objective is to compute the least fixed point of Equation <ref>. Thus the analysis must begin with the initial set of essential data indices 𝔻 ^0. The data flow variables Out_n are set to initial values and the analysis iteratively computes the equations until the fixed point is reached. 
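To make these transfer functions concrete, the following minimal Python sketch (an illustration only; the working implementation described later is in C++) applies the { Z, NZ} evaluation rules and a naive fixed-point iteration of the data flow equations to the statement Y[i]=Y[i]+A[i][j]× X[j], using the initial essential data indices 𝔻^0 of the sparse matrix-sparse vector example worked out below.

Z, NZ = "Z", "NZ"                      # component lattice, NZ is below Z

def abs_mul(a, b):                     # abstract multiplication
    return NZ if a == NZ and b == NZ else Z

def abs_add(a, b):                     # abstract addition/subtraction
    return NZ if a == NZ or b == NZ else Z

# D^0: sparsity of the inputs A (4x4) and X of the running example
D0 = {("A", 0, 0), ("A", 0, 2), ("A", 1, 1), ("A", 2, 1),
      ("A", 3, 1), ("A", 3, 3), ("X", 1), ("X", 3)}

n, out, changed = 4, set(D0), True
while changed:                         # iterate Out/Gen to the least fixed point
    changed = False
    for i in range(n):
        for j in range(n):
            a = NZ if ("A", i, j) in out else Z
            x = NZ if ("X", j) in out else Z
            y = NZ if ("Y", i) in out else Z
            if abs_add(y, abs_mul(a, x)) == NZ and ("Y", i) not in out:
                out.add(("Y", i))      # Gen of the write node w_Y[i]
                changed = True

print(sorted(d for d in out if d[0] == "Y"))   # -> [('Y', 1), ('Y', 2), ('Y', 3)]

The sketch reproduces the essential data indices (Y,1), (Y,2), and (Y,3) obtained in the worked example below; the actual analysis operates on the ADG rather than on the loop nest directly.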
If we initialize with something else the result would be different and it would not be the least fixed point. Once the solution is achieved the analysis converges. Essential indices analysis operates on finite lattices and is monotonic as it only adds the generated information in each iteration. Thus, the analysis is bound to converge on the fixed point solution. The existence of a fixed point solution is guaranteed by the finiteness of lattice and monotonicity of flow functions. Figure <ref> demonstrates essential indices analysis for the sparse matrix-sparse vector multiplication operation. It takes the ADG from Figure <ref>(b) and performs the data flow analysis based on the sparsity structures of input matrix A and input vector X as depicted in Figure <ref>(a). Let, 𝔻^ 0 be {(A,0,0), (A,0,2), (A,1,1), (A,2,1), (A,3,1), (A,3,3), (X,1), (X,3)}. The gen and out information of access nodes r_A[i][j], r_X[j], r_Y[i] and w_Y[i] are presented in tabular form for convenience. Out information of all read nodes are initialized to 𝔻^ 0, whereas the out information of write node is initialized to ∅. In iteration 1, Gen_Y[i] is 𝔻^' where 𝔻^' ={(Y,1),(Y,2),(Y,3)}. Thus Out_Y[i] becomes 𝔻^ 1 such that 𝔻^1=𝔻^0∪𝔻^'. In iteration 2 the out information of w_Y[i] propagates along with edge w_Y[i]→ r_Y[i] and sets Out_r_Y[i] to 𝔻^ 1. Finally at iteration 3 the analysis reaches the fixed point solution and converges. Data flow variable AGen_n is introduced to accumulate Gen_n at each write node n required by a post-analysis step computing set of essential iteration indices from the set of essential data indices. AGen_n = AGen_n∪ Gen_n In the current example AGen_w_Y[i] is initialized to ∅. It accumulates Gen_w_Y[i] generated at each iteration, finally resulting AGen_w_Y[i] as {(Y,1),(Y,2),(Y,3)}. AFill denotes the fill-in elements of the output sparse matrix and is computed as follows. AFill = ⋃_∀ n∈ VAGen_n ∖𝔻^0 The initial essential data indices 𝔻^0 and the fill-in elements AFill together compute the final essential data indices 𝔻^f that captures the sparsity structure of the output matrix. 𝔻^f = AFill ∪𝔻^0 In the current example, AFill is computed as {(Y,1),(Y,2),(Y,3)}. As 𝔻^0 does not contain any initial essential data index of Y, 𝔻^f becomes same as AFill. The set of essential iteration indices is computed from the set of essential data indices. Let ℐ of size l_1×…× l_p be the iteration space of dimension p of a loop having depth p where the loop at depth k has iteration size l_k. Thus, ℐ={i⃗_⃗k⃗ | 1≤ k≤ l_1×…× l_p} where vector i⃗_⃗k⃗ is an iteration index. For convenience, here on we identify i⃗_⃗k⃗ as i. The iteration index at which non-default computations are performed is called the essential iteration index. The set of all essential iteration indices is denoted as 𝕀 such that 𝕀⊆ℐ. For each essential data index d∈ AGen_n there exists a set of essential iteration indices 𝕀^' at which the corresponding non-default computations resulting d occur. We introduce iter:𝔻→ 2^𝕀 such that iter(d) results 𝕀^' where d∈𝔻 and 𝕀^'∈ 2^𝕀. Data flow variable AInd_n is introduced to capture the set of essential iteration indices corresponding to the data indices d∈ AGen_n. Thus, AInd_n = ⋃_∀ d∈ AGen_n iter(d) In the current example AInd_w_Y[i] is computed as {(1,1), (2,1), (3,1), (3,3)}. Finally, the set of all essential iteration indices 𝕀 is computed as 𝕀 = ⋃_∀ n∈ V AInd_n Algorithm <ref> presents the algorithm for essential indices analysis. Line number 1 initializes Out_n. 
Line number 4 sets the work list, WorkList, to the nodes of the ADG. Lines 3-11 perform the data flow analysis by iterating over the ADG until the analysis converges. In each iteration, a node is picked and removed from the work list and Gen_n, AGen_n, and Out_n are computed. If the newly computed Out_n differs from its old value, the node is pushed back to the work list. The process iterates until the work list becomes empty. Post convergence, 𝔻^f and 𝕀 are computed in line number 12 and the values are returned. The complexity of the algorithm depends on the number of iterations and the amount of workload per iteration. The number of iterations is derived from the maximum depth d(G) of the ADG, i.e., the maximum number of back edges in any acyclic path derived from the reverse postorder traversal of the graph. Therefore, the total number of iterations is 1+d(G)+1, where the first iteration computes the initial values of Out_n for all the nodes in the ADG, d(G) iterations backpropagate the values of Out_n, and the last iteration verifies the convergence. In the current example, the reverse postorder traversal of the ADG produces the acyclic path r_A[i][j]→ r_X[j]→ w_Y[i]→ r_Y[i], containing a single back edge r_Y[i]→ w_Y[i]. Therefore, the total number of iterations becomes 3. The amount of workload per iteration is dominated by the computation of Gen_n. In the case of a binary operation, the complexity of Gen_n is bound to 𝒪(d^'× d^''), where Out_p_1=d^', Out_p_2=d^'', and {p_1, p_2}∈ Pred_n. In the case of assignment and unary operations, the complexity of Gen_n is bound to 𝒪(d^'). §.§ Correctness of Essential Indices Analysis The following claims are sufficient to prove the correctness of our analysis. * Claim 1: Every essential data index will always be considered essential. * Claim 2: A data index considered essential will not become non-essential later. Before reasoning about the aforementioned claims, we provide an orthogonal lemma to show the correctness of our abstraction. Our abstraction function is sound. Our abstraction function α maps the concrete value domain 2^ℝ to the abstract value domain { Z,NZ}. { 0} in the concrete domain maps to Z in the abstract domain and all other elements map to NZ. Now, to guarantee the soundness of α, one needs to prove that the following condition <cit.> holds. f(α(c))⊑α(cf(c)) where c is an element in the concrete domain, f is an auxiliary function in the abstract domain, and cf is the corresponding function in the concrete domain. This condition essentially states that the evaluation of a function in the abstract domain should overapproximate the evaluation of the corresponding function in the concrete domain. We prove the above condition for the evaluation of each admissible statement in the following lemmas. For copy assignment statement d=d^', val(d^')⊑α(cval(d^')). For statement using unary operation d=op(d^'), op(val(d^'))⊑α(op(cval(d^'))). For statement using binary operation d=op(d^',d^''), op(val(d^'),val(d^''))⊑α(op(cval(d^'),cval(d^''))). We prove lemmas <ref> to <ref> in the following. Let cval(d^')=r_1 and cval(d^'')=r_2 where r_1 and r_2 are non-zero elements in the concrete domain and val(d^') = val(d^'') = NZ where NZ represents the abstract non-zero value. From Figure <ref> we can state that the concrete and abstract evaluations of all statements satisfy the safety condition in Equation <ref>. Claim 1 primarily asserts that an essential data index will never be considered non-essential.
We prove it using induction on the length of paths in the access dependence graph. Let 𝔻 be the set of essential data indices computed at each point in ADG. * Base condition: At path length 0, 𝔻=𝔻^ 0 where 𝔻^ 0 is the initial set of essential data indices of input sparse matrices. * Inductive step: Let us assume that at length l the set of essential data indices does not miss any essential data index. As abstract computation of such data index is safe as per Lemma <ref>, we can conclude that no essential data index is missing from 𝔻 computed at path length l+1. Hence all essential data indices will always be considered as essential. Because of the monotonicity of transfer functions as the newly generated information is only added to the previously computed information without removing any, we assert that once computed no essential data index will ever be considered as non-essential as stated in Claim 2. For all statements admissible in our analysis the abstraction is optimal except for addition and subtraction operations where numerical cancellation in concrete domain results into NZ in the abstract domain. § CODE GENERATION In this section, we present the generation of code, customized to the matrix operation and the sparsity structures of input. Essential data indices 𝔻^f and essential iteration indices 𝕀 play a crucial role in code generation. The fill-in elements generated during the execution alter the structure of the underlying data storage and pose challenges in the dynamic alteration of the same. 𝔻^f statically identifies the fill-in elements and sets the data storage without any requirement for further alteration. The set of essential iteration indices 𝕀 identifies the statement instances that are critical for the semantic correctness of the operation. In the case of a multi-statement operation, it identifies the essential statement instances of all the statements present in the loop. The lexicographic ordering of the iteration indices statically constructs the execution trace E_trace of a single statement operation. However, a multi-statement operation requires the lexicographic ordering of the timestamp vectors associated with the statement instances, where the timestamp vectors identify the order of loops and their nesting sequences. Assuming the 𝑡𝑖𝑚𝑒𝑠𝑡𝑎𝑚𝑝 function computes the timestamp of each essential index and the 𝑙𝑒𝑥𝑜𝑟𝑑𝑒𝑟 lexicographically orders the timestamp vectors to generate the execution trace E_trace as follows. E_trace = 𝑙𝑒𝑥𝑜𝑟𝑑𝑒𝑟(⋃_∀ e∈ I𝑡𝑖𝑚𝑒𝑠𝑡𝑎𝑚𝑝(e)) Assuming the timestamp vectors as ⟨ i,0,j,0,k⟩, ⟨ i,0,j,1⟩, ⟨ j,1,l⟩, and ⟨ i,2⟩ for the statements S_1, S_2, S_3, and S_4 in Figure <ref>(a), Figures <ref>(a) and <ref>(b) present the snippets of lexicographic order of the timestamp vector instances and the generated execution trace respectively. Here execution instance ⟨ S_k,i,j⟩ denotes the instance of statement S_k at iteration index (i,j). The problem of constructing piecewise regular loops from the execution trace is similar to the problem addressed by Rodríguez et al. <cit.> and Augustine et al. <cit.>. Their work focuses on homogeneous execution traces originating from single statement loops where reordering statement instances is legitimate. They note that handling multi-statement loops is out of the scope of their work. They construct polyhedra from the reordered and equidistant execution instances and use CLooG <cit.> like algorithm to generate piecewise-regular loop-based code from the polyhedra. 
They support generating either one-dimensional or multi-dimensional loops. Our work targets generic loops including both single-statement and multi-statement loops, having loop-independent or loop-dependent dependencies. In the case of multi-statement loops, the instances of different statements interleave, affecting the homogeneity of the execution trace. Such interleaving limits the size of the homogeneous sections of the trace that contribute to loop generation. Additionally, most loops showcase loop-dependent dependencies, and thus, reordering statement instances may affect the semantic correctness of the program. Taking these behaviors of programs into account, we use a generic approach to generate one-dimensional piecewise regular loops from the homogeneous and equidistant statement instances without altering their execution order. The execution trace E_trace prepares the memory access trace M_trace, accessing the underlying storage constructed by the essential data indices 𝔻^f. Assuming 𝑚𝑒𝑚𝑎𝑐𝑐𝑒𝑠𝑠 returns the data accessed by each iteration index e in the execution trace E_trace, M_trace is computed as follows. M_trace = ⋃_∀ e∈ E_trace𝑚𝑒𝑚𝑎𝑐𝑐𝑒𝑠𝑠(e,𝔻^f) Instead of a single-dimensional data access trace, i.e., a memory access trace generated by a single operand accessing sparse data, our code generation technique considers a multi-dimensional data access trace, where the memory access trace is generated by multiple operands accessing sparse data. In the case of a loop statement A[i] = f( B[j]), {…,⟨ A,m⟩,…,⟨ A,n⟩,…} and {…,⟨ B,m^'⟩,…,⟨ B,n^'⟩,…} represent two single-dimensional data access traces generated by accessing arrays A and B respectively. Thus the multi-dimensional data access trace generated by the statement is {…,⟨⟨ A,m⟩, ⟨ B,m^'⟩⟩, …, ⟨⟨ A,n⟩, ⟨ B,n^'⟩⟩,…}. Note that, if the underlying data storage changes, the data access trace changes too. Figures <ref>(c) and <ref>(d) represent snippets of the multi-dimensional data access trace accessing dense and sparse storage respectively. Data access point ⟨ S_4, ⟨ A,0,0⟩, ⟨ A,0,0⟩⟩ denotes accessing memory location A[0][0] of the dense storage by the left-hand side and right-hand side operands of statement S_4. Similarly, ⟨ S_4, ⟨ valA,0⟩,⟨ valA,0⟩⟩ represents corresponding accesses to valA[0] of the sparse storage. The code generator parses the execution trace to identify the homogeneous sections and computes distance vectors between consecutive multi-dimensional data access points originating from the same homogeneous section. If data access points m_i-1, m_i, and m_i+1 of M_trace are homogeneous and equidistant, then they form a partition which is later converted into a regular loop. The distance vector between data access points ⟨⟨ A,m⟩,⟨ B,m^'⟩⟩ and ⟨⟨ A,n⟩,⟨ B,n^'⟩⟩ is ⟨⟨ A,n-m⟩, ⟨ B,n^'-m^'⟩⟩. Homogeneous and equidistant data access points ⟨ A,m⟩, ⟨ A,m+d⟩, …, ⟨ A,m+n×d⟩, with identical distance d, form an affine, one-dimensional, indirect-reference free access function A[m+d×i]. Iteration index i forms a regular loop iterating from 0 to n. For example, the homogeneous and equidistant data access points ⟨ S_4,⟨ valA,0⟩, ⟨ valA,0⟩⟩, ⟨ S_4,⟨ valA,2⟩, ⟨ valA,2⟩⟩, and ⟨ S_4,⟨ valA,4⟩, ⟨ valA,4⟩⟩, whose common distance vector is ⟨⟨ valA,2⟩,⟨ valA,2⟩⟩, construct the one-dimensional, affine access function { valA[2i+0]|0≤ i≤2}. In the absence of regularity, our technique generates small loops with iteration-size two. As this hurts performance because of instruction cache misses, Augustine et al. <cit.> proposed instruction prefetching for the program code.
However, we deliberately avoid prefetching and reordering in our current work and limit the code generation to code that is free of indirect references, and contains one-dimensional and piecewise-regular loops for generic programs. Algorithm <ref> presents the algorithm for code generation. Line number 1 computes E_trace and M_trace. Lines 2-9 partition M_trace into multiple partitions, containing consecutive, homogeneous, and equidistant data access points. Lines 10-12 generate regular loop for each partition and accumulate them into the code. The complexity of the code generation algorithm is bound to the size of the essential iteration indices 𝕀. § EMPIRICAL EVALUATION §.§ Experimental Setup We have developed a working implementation of SpComp in C++ using STL libraries. It has two modules performing the essential indices analysis and piecewise-regular code generation. Our implementation is computation intensive that is addressed by parallelizing the high-intensity functions into multiple threads with a fixed workload per thread. For our experimentation, we have used Intel(R) Core(TM) i5-10310U CPU @ 1.70GHz octa-core processor with 8GB RAM size, 4GB available memory size, 4KB memory page size, and L1, L2, and L3 caches of size 256KB, 1MB, and 6MB, respectively. The generated code is in .c format and is compiled using GCC 9.4.0 with optimization level -O3 that automatically vectorizes the code. Our implementation successfully scales up for Cholesky decomposition to sparse matrix Nasa/nasa2146 having 7×10^4 non-zero elements but limits the code generation due to the available memory. Here we use the PAPI tool <cit.> to profile the dynamic behavior of a code. The profiling of a performance counter is performed thousand times, and the mean value is reported. The retired instructions, I1 misses, L1 misses, L2 misses, L2I misses, L3 misses, and TLB misses are measured using PAPI_TOT_INS, ICACHE_64B : IFTAG_MISS, MEM_LOAD_UOPS_RETIRED : L1_MISS, L2_RQSTS:MISS, L2_RQSTS : CODE_RD_MISS, LONGEST_LAT_CACHE : MISS, and PAPI_TLB_DM events respectively. §.§ Use Cases and Experimental Results The generated sparsity structure-specific code is usable as long as the sparsity structure remains unchanged. Once the structure changes, the structure-specific code no longer remains relevant. In this section, we have identified two matrix operations; (a) Sparse Matrix-Sparse Vector Multiplication and (b) Sparse Cholesky decomposition, that have utility in applications where the sparsity structure-specific codes are reused. §.§.§ Sparse Matrix-Sparse Vector Multiplication This sparse matrix operation has utility in applications like page ranking, deep Convolutional Neural Networks (CNN), numerical analysis, conjugate gradients computation, etc. Page ranking uses an iterative algorithm that assigns a numerical weighting to each vertex in a graph to measure its relative importance. It has a huge application in web page ranking. CNN is a neural network that is utilized for classification and computer vision. In the case of CNN training, the sparse inputs are filtered by different filters until the performance of CNN converges. SpMSpV multiplies a sparse matrix A to a sparse vector X and outputs a sparse vector Y. It operates on two sparse inputs and generates a sparse output, without affecting the sparsity structure of the inputs. The corresponding code operating on dense data contains a perfectly nested loop having a single statement and loop-independent dependencies. 
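To illustrate the specialization that SpComp targets for this operation, the following Python sketch (an illustration only; SpComp itself emits C code over compacted sparse storage) contrasts the dense SpMSpV loop nest with a version restricted to the essential iteration indices {(1,1), (2,1), (3,1), (3,3)} of the earlier worked example, written without indirect references.

n = 4
A = [[0.0] * n for _ in range(n)]
X = [0.0] * n
# arbitrary nonzero values placed on the example's sparsity pattern
A[0][0], A[0][2], A[1][1], A[2][1], A[3][1], A[3][3] = 1, 2, 3, 4, 5, 6
X[1], X[3] = 7, 8

# (a) dense code: a perfectly nested loop visiting every iteration index
Y_dense = [0.0] * n
for i in range(n):
    for j in range(n):
        Y_dense[i] += A[i][j] * X[j]

# (b) sparsity-structure-specific code: only the essential statement instances
Y = [0.0] * n
Y[1] += A[1][1] * X[1]
Y[2] += A[2][1] * X[1]
Y[3] += A[3][1] * X[1]
Y[3] += A[3][3] * X[3]

assert Y == Y_dense

In the generated C code, consecutive homogeneous and equidistant instances of this kind are further folded into piecewise-regular loops over the compacted arrays (e.g., valA), as described in the code-generation step above.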
We compare the performance of SpComp-generated SpMSpV code against the following. * The state-of-art Tensor Algebra Compiler (TACO) <cit.> automatically generates the sparse code supporting any storage format. We have selected the storage format of the input matrix A as CSR and the storage format of the input vector X as a sparse array. The TACO framework <cit.> does not support sparse array as the output format, thus, we have selected dense array as the output storage format. * The piecewise regular code generated by  <cit.>. We use their working implementation from PLDI 2019 artifacts <cit.> and treat it as a black box. Although, this implementation supports only Sparse Matrix-Vector Multiplication (SpMV) operation, we use this work to showcase the improvement caused by SpComp for multiple sparse input cases. By default, the instruction prefetching is enabled in this framework. However, instruction prefetching raises a NotImplementedError error during compilation. Thus, we have disabled instruction prefetching for the entire evaluation. We enable -O3 optimization level during the compilation of the code generated by TACO, piecewise-regular work, and SpComp. Each execution is performed thousand times and the mean is reported. The input sparse matrices are randomly selected from the Suitesparse Matrix Collection <cit.>, as SpMSpV can be applied to any matrix. The input sparse vectors are synthesized from the number of columns of the input sparse matrices with sparsity fixed to 90%. The initial 10% elements of the sparse vectors are non-zero, making the sparsity structured. Such regularity is intentionally maintained to ease the explanation of sparsity structures of the input sparse vectors. The statistics of the selected sparse matrices and synthesized sparse vectors are presented in Table <ref>. Due to constraints on the available memory, we limit the number of non-zero elements of the selected sparse matrices between 10000 and 100000. All of the matrices showcase ≈ 99.9% sparsity of unstructured nature. Only cell1 and rdist3a sparse matrices are square and the rest of them are rectangular. Table <ref> presents the performance achieved by the SpMSpV codes generated by TACO, piecewise-regular, and SpComp for the sparse matrices and sparse vectors shown in Table <ref>. The performance is captured in terms of the number of retired instructions, % of retired instructions missed by L1 and L2 instruction caches, and execution time in micro-second (usec). We observe significant execution time improvement by SpComp compared to both TACO and Piecewise-regular framework. Although SpComp incurs a significant amount of relative instruction misses, the major saving happens due to the reduced number of retired instructions by the sparsity structure-specific execution of the SpComp-generated code. The plot in Figure <ref>(a) illustrates the performance of SpComp compared to TACO. The % gain in execution time is inversely proportional to the % increment in L1 and L2 instruction misses but is limited to the % reduction of the retired instructions. The increments in relative instruction misses by L1 and L2 caches occur due to the presence of piecewise-regular loops. Note that, the % gain and % reduction by SpComp are computed as (perf_taco - perf_spcomp) / perf_taco * 100, where perf_taco and perf_spcomp denote the performance by TACO and SpComp respectively. Similarly, the % increment by SpComp is computed as (perf_spcomp - perf_taco) / perf_spcomp * 100. 
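For reference, the two measures used in this comparison can be expressed as small helpers (illustrative Python):

def pct_gain(perf_baseline, perf_spcomp):
    # % gain/reduction of SpComp with respect to a baseline such as TACO
    return (perf_baseline - perf_spcomp) / perf_baseline * 100.0

def pct_increment(perf_baseline, perf_spcomp):
    # % increment of SpComp, normalised by SpComp's own value
    return (perf_spcomp - perf_baseline) / perf_spcomp * 100.0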
As illustrated in the plot in Figure <ref>(b), the % gain in execution time by SpComp compared to piecewise-regular framework is primarily dominated by the % reduction in retired instructions. This is quite obvious as, unlike the piecewise regular work, SpComp considers sparsity of both sparse matrix and sparse vector, making the code specific to both the sparsity structures. However, the increments in relative instruction miss by SpComp for lp_maros and pcb1000 occur due to the irregularity present in the SpMSpV output, resulting in piecewise-regular loops of small size. On the contrary, SpComp showcases significantly fewer relative instruction misses for rdist3a as the SpMSpV output showcases high regularity, resulting in large-sized loops. §.§.§ Sparse Cholesky Decomposition This matrix operation has utility in the circuit simulation domain, where the circuit is simulated until it converges. Here the sparsity structure models the physical connections of the circuit which remains unchanged throughout the simulation. In each iteration of the simulation the sparse matrix is factorized (Cholesky decomposed in the case of Hermitian positive-definite matrices) and the factorized matrix is used to solve the set of linear equations. In this reusable scenario, having a Cholesky decomposition customized to the underlying sparsity structure should benefit the overall application performance. We consider the Cholesky decomposition chol(A), where A = LL^* is a factorization of Hermitian positive-definite matrix A into the product of a lower triangular matrix L and its conjugate transpose L^*. The operation is mutable, i.e., alters the sparsity structure of the input by introducing fill-in elements, and has multiple statements and nested loops with loop-carried and inter-statement dependencies. The SpComp-generated code is compared against CHOLMOD <cit.>, the high-performance library for sparse Cholesky decomposition. CHOLMOD applies different ordering methods like Approximate Minimum Degree (AMD) <cit.>, Column Approximate Minimum Degree(COLAMD) <cit.> etc. to reduce the fill-in of the factorized sparse matrix and selects the best-ordered matrix. However, we configure both CHOLMOD and SpComp to use only AMD permutation. CHOLMOD offers cholmod_analyze and cholmod_factorize routines to perform symbolic and numeric factorization respectively. We profile the cholmod_factorize function call for the evaluation. We select the sparse matrices from the Suitesparse Matrix Collection <cit.>. As the Cholesky decomposition applies to symmetric positive definite matrices, it is challenging to identify such matrices from the collection. We have noticed that sparse matrices from the structural problem domain are primarily positive definite and thus can be Cholesky decomposed. In the collection, we have identified 200+ such Cholesky factorizable sparse matrices and selected 35+ matrices for our evaluation from the range of 1000 to 17000 numbers of nonzero elements. We see that a sparse matrix with more nonzeroes exhausts the available memory during code generation and thus is killed. Table <ref> presents the sparsity structure of input and output sparse matrices and the structure of the generated piecewise regular loops for a few sparse matrices. All the matrices in the table have sparsity within the range of 79% to 99% and almost all of them introduce a considerable amount of fill-in when Cholesky decomposed. 
The amount of fill-in(%) is computed by (elem_out - elem_in) / elem_out * 100, where elem_in and elem_out denote the number of non-zero elements before and after factorization. Sparse matrices bcsstm11, bcsstm26, and bcsstm25 are diagonal, and thus no fill-in element is generated when factorized. For these diagonal sparse matrices, SpComp generates a single regular loop with an average loop size of 1473, 1922, and 15439, the number of non-zero elements. In these cases, 100% of the generated code is looped back. The rest of the sparse matrices in Table <ref> showcase irregular sparsity structures and thus produce different amounts of fill-in elements and piecewise regular loops with different average loop sizes. As an instance, sparse matrix nos1 with 98.19% sparsity generates 7.03% fill-in elements when Cholesky decomposed and 37.72% of generated code is piecewise-regular loops with an average loop size of 2.35. Similarly, another irregular sparse matrix dwt_992 with 98.29% sparsity produces 56.59% fill-in elements and 95.95% of generated code represents piecewise-regular loops with an average loop size of 8.9. The graph in Figure <ref> illustrates the performance gained by SpComp against CHOLMOD. The number of nonzero elements of sparse matrices is plotted against the logarithmic scale on X-axis. Considering the performance in terms of the number of retired instructions, the number of TLB miss, and execution time (usec) of CHOLMOD as the baseline, we plot the performance difference (in %) by SpComp against Y-axis. The performance difference is computed as (perf_cholmod - perf_spcomp) / perf_cholmod * 100, where perf_cholmod and perf_spcomp denote the performance by CHOLMOD and SpComp respectively. We see a directly proportional relation between % gain in execution time and % reduction in the number of retired instructions. SpComp contributes to a lesser number of instructions and thus improves the execution time. We find ≈ 100% reduction in the instructions executed for use cases where the sparse matrices are diagonal, like bcsstm25 and bcsstm39. Additionally, we see ≈100% improvement in TLB misses for all the selected use cases. This happens due to the static allocation of the fill-in elements that avert the need for dynamic modification of sparse data storage, thus improving the TLB miss. Table <ref> presents the raw performance numbers for the sparse matrices. We see an equal number of retired instructions and TLB misses by CHOLMOD, which implies dynamic memory allocation for all the nonzero elements including fill-in elements. SpComp takes ≈4sec to perform the analysis on sparse matrix nos1 and ≈20min to perform the same on sparse matrix dwt_992. As expected, our approach generates large codes even for moderate-sized sparse matrices. In the case of dwt_992 with size 992× 992 and NNZ of 16744 the generated code size is ≈ 6.3 MB. § RELATED WORK Here we provide an overview of the work related to optimizing sparse matrix operations either by reorganizing data or by reorganizing computation. Over decades researchers have explored various optimization approaches and have established various techniques, either hand-crafted or compiler-aided. Researchers have developed various hand-crafted algorithms involving custom data structures like CSR, CSC, COO, CDS, etc., <cit.> that contain the data indices and values of the non-zero data elements. Hand-crafted libraries like Cholmod <cit.>, Klu <cit.>, CSparse <cit.> etc. 
from SuiteSparse <cit.>; C++ supported SparseLib++ <cit.>, Eigen <cit.>; Python supported Numpy <cit.>; Intel provided MKL <cit.>, Radios <cit.>; CUDA supported cuSparse <cit.>; Java supported Parallel Colt <cit.>; C, Fortran supported PaStiX <cit.>, MUMPS <cit.>, SuperLU <cit.> etc. are widely used in current practice. Although these libraries offer high-performing sparse matrix operations, they typically require human effort to build the libraries and port them to different architectures. Also, libraries are often difficult to be used in the application and composition of operations encapsulated within separate library functions may be challenging. Compiler-aided optimization technique includes run-time optimization approaches like inspection-execution<cit.> where the inspector profiles the memory access information, inspects data dependencies during execution, and uses this information to generate an optimized schedule. The executor executes the optimized schedule. Such optimization can be even hardware-aware like performing run-time optimization for distributed memory architecture <cit.>, and shared memory architecture <cit.> etc. Compiler support has been developed to automatically reduce the time and space overhead of inspector-executor  <cit.>. Polyhedral transformation mechanisms <cit.>, Sparse Polyhedral Framework (SPF) <cit.> etc. address the cost reduction of the inspection. Other run-time approaches <cit.> propose optimal data distributions during execution such that both computation and communication overhead is reduced. Other run-time technique like Eggs <cit.> dynamically intercepts the arithmetic operations and performs symbolic execution by piggybacking onto Eigen code to accelerate the execution. In contrast to run-time mechanisms, compile-time optimization techniques do not incur any execution-time overhead. Given the sparse input and code handling dense matrix operation, the work done by <cit.> determine the best storage for sparse data and generate the code specific to the underlying storage but not specific to the sparsity structure of the input. They handle both single-statement and multi-statement loops and regular loop nests. The generated code contains indirect references. These approaches have been implemented in MT1 compiler <cit.>, creating a sparse compiler to automatically convert a dense program into semantically equivalent sparse code. Given the best storage for the sparse data, <cit.> propose relational algebra-based techniques to generate efficient sparse matrix programs from dense matrix programs and specifications of the sparse input. Similar to <cit.>, they do not handle mutable cases and generate code with indirect references. However, unlike the aforementioned work they handle arbitrary loop nests. Other compile-time techniques like Tensor Algebra Compiler(TACO) <cit.> automatically generate storage specific code for a given matrix operation. They provide a compiler-based technique to generate code for any combination of dense and sparse data. Bernoulli compiler proposes restructuring compiler to transform a sequential, dense matrix Conjugate-Gradient method into a parallel, sparse matrix program <cit.>. Compared to immutable kernels, compile-time optimization of mutable kernels is intrinsically challenging due to the run-time generation of fill-in elements <cit.>. Symbolic analysis <cit.> is a sparsity structure-specific graph technique to determine the computation pattern of any matrix operation. 
The information generated by the Symbolic analysis guides the optimization of the numeric computation. <cit.> generate vectorized and task level parallel codes by decoupling symbolic analysis from the compile-time optimizations. The generated code is specific to the sparsity structure of the sparse matrices and is free of indirect references. However, the customization of the analysis to handle different kernels requires manual effort. <cit.> propose a fundamentally different approach where they construct polyhedra from the sparsity structure of the input matrix, and generate indirect reference free regular loops. The approach only applies to immutable kernels and the generated code supports out-of-order execution wherever applicable. Their work is the closest to our work available in the literature. Alternate approaches include machine learning techniques and advanced search techniques to select the optimal storage format and suitable algorithms for different sparse matrix operations <cit.>. Apart from generic run-time and compile-time optimization techniques, domain experts have also explored domain-specific sparse matrix optimization. As an instance, <cit.> propose FPGA accelerated sparse matrix operations required in circuit simulation domain. § CONCLUSIONS AND FUTURE WORK SpComp is a fully automatic sparsity-structure specific compilation technique that uses data flow analysis to statically generate piecewise-regular codes customized to the underlying sparsity structures. The generated code is free of indirect access and is amenable to SIMD vectorization. It is valid until the sparsity structure changes. We focus on the sparsity structure of the output matrices and not just that of the input matrices. The generality of our method arises from the fact that we drive our analysis by the sparsity structure of the output matrices which depend on the sparsity structure of the input matrices and hence is covered by the analysis. This generality arises from our use of abstract interpretation-based static analysis. Unlike the state-of-art methods, our method is fully automatic and does not require any manual effort to customize to different kernels. In the future, we would like to parallelize our implementation which suffers from significant computation overhead and memory limitations while handling large matrices and computation-intensive kernels. We would also like to explore the possibility of using GPU-accelerated architectures for our implementation. Currently, we generate SIMD parallelizable code specific to shared memory architectures. In the future, we would like to explore the generation of multiple programs multiple data (MPMD) parallelized codes specific to distributed architectures. Finally, the current implementation considers the data index of each non-zero element individually. We would like to explore whether the polyhedra built from the sparsity structure of the input sparse matrices can be used to construct the precise sparsity structure of the output sparse matrices. ACM-Reference-Format
http://arxiv.org/abs/2307.03330v1
20230706232524
On the convexity of static output feedback control synthesis for systems with lossless nonlinearities
[ "Talha Mushtaq", "Peter Seiler", "Maziar S. Hemati" ]
eess.SY
[ "eess.SY", "cs.SY", "math.OC", "physics.flu-dyn" ]
UMN]Talha Mushtaqmusht002@umn.edu, UMich]Peter Seilerpseiler@umich.edu, UMN]Maziar S. Hematimhemati@umn.edu [UMN]Aerospace Engineering and Mechanics, University of Minnesota, Minneapolis, MN 55455, USA [UMich]Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA Computing a stabilizing static output-feedback (SOF) controller is an NP-hard problem, in general. Yet, these controllers have amassed popularity in recent years because of their practical use in feedback control applications, such as fluid flow control and sensor/actuator selection. The inherent difficulty of synthesizing SOF controllers is rooted in solving a series of non-convex problems that make the solution computationally intractable. In this note, we show that SOF synthesis is a convex problem for the specific case of systems with a lossless (i.e., energy-conserving) nonlinearity. Our proposed method ensures asymptotic stability of an SOF controller by enforcing the lossless behavior of the nonlinearity using a quadratic constraint approach. In particular, we formulate a bilinear matrix inequality (BMI) using the approach, then show that the resulting BMI can be recast as a linear matrix inequality (LMI). The resulting LMI is a convex problem whose feasible solution, if one exists, yields an asymptotically stabilizing SOF controller. § INTRODUCTION Static-output feedback (SOF) stabilization is an open controls problem <cit.>. The bilinearity of the SOF problem for linear time-invariant (LTI) systems makes the problem non-convex, rendering the solution intractable for high-dimensional LTI systems. Despite the difficulties of SOF control synthesis, the simple structure of SOF control has made it an attractive choice in practice. As such, various SOF synthesis algorithms have been developed for LTI systems <cit.>. On the contrary, SOF synthesis methods are less prevalent for nonlinear systems, wherein a similar bilinearity issue arises. Nonetheless, SOF controllers would be desirable in many nonlinear applications. Computing an SOF gain is NP-hard in general <cit.>, i.e., the solution method for the SOF problem has a time complexity greater than 𝒪(n^k) for some positive k and input size n; therefore, exploiting the structure of a given system is the best approach to obtain a solution in a computationally tractable manner. No universally applicable algorithm exists for SOF synthesis of generic nonlinear systems; different systems possess different underlying structures that can be exploited. For example, SOF synthesis methods have been developed for systems for which the nonlinearity is passive <cit.>, polynomial <cit.> or Lipschitz <cit.>. Furthermore, sufficient conditions for local SOF stabilization have been established for nonlinear (affine) systems <cit.>. In this note, we will use standard Lyapunov arguments to establish a special class of system for which the SOF stabilization problem is convex: i.e., that of an LTI system in feedback with a lossless nonlinearity or uncertainty element. Such systems arise, for example, in the context of fluid dynamics, where it is well-established that the nonlinear terms in the incompressible Navier-Stokes equations are lossless and kinetic-energy conserving <cit.>. In fluid dynamics, the lossless (or more generally passivity) property of the nonlinearity has been exploited for stability analysis <cit.>, model reduction <cit.>, and feedback control synthesis <cit.>. 
Further, the utility of (linear) SOF strategies for fluid flow control has also been investigated in recent works <cit.> The work presented in this paper establishes convexity of SOF control synthesis for an LTI system interacting in feedback with a lossless uncertainty element. The lossless property is a special case of passivity; energy is strictly conserved, neither being created nor dissipated. We first use quadratic constraints (QC) to bound the lossless behavior of the nonlinearity. The QC allows us to formulate a bilinear matrix inequality (BMI). We then show that the particular BMI that arises in this problem has a structure that reduces it to a linear matrix inequality (LMI). The LMI condition establishes the convexity of the associated SOF synthesis problem. In addition, we establish necessary and sufficient conditions for feasibility of our synthesis approach through standard application of the projection lemma. The paper is organized as follows. Section 2 defines the QC approach, which is later used in Section 3 to establish the main result of this paper, i.e., explicit conditions for SOF synthesis. Section 4 contains an example problem and Section 5 is the conclusion. § FORMULATING THE FRAMEWORK Consider a state-space system of the following form: x(t) = Ax(t) + Bu(t) + z(t) y(t) = Cx(t) z(t) = N(x(t))x(t), where the vectors x(t) ∈^n, u(t) ∈^q, and y(t) ∈^p are the states, control inputs, and system outputs, respectively. Moreover, A ∈^n × n, B ∈^n × q and C ∈^p × n are the system, control, and output matrices, respectively. Also, N: ^n →^n× n is a continuous function such that N(0) = 0 and each entry of z(t) = N(x(t))x(t) is a continuous function in x(t). Here, we only consider lossless nonlinearities for which ⟨ x(t), z(t) ⟩ = 0 for all x(t), where ⟨·, ·⟩ is the inner-product of signals. Note that ⟨ x(t), z(t) ⟩ = x(t)^T N(x(t)) x(t) = 0 is a nonlinear constraint in N. This constraint can be satisfied by any linear or nonlinear continuous function N such that N(x(t)) is a skew-symmetric matrix for all x(t). However, other continuous functions are possible as long as ⟨ x(t), z(t) ⟩ = 0 for all x(t). Lossless nonlinearities are a special case of passive nonlinearities for which energy is conserved (i.e., energy is neither created nor dissipated) <cit.>. As shown in <cit.>, we can derive a matrix inequality condition using the Lyapunov stability theorem to capture the lossless property. Consider one such matrix inequality for the uncontrolled (open-loop) system studied in <cit.>, i.e., set u(t) = 0 in (<ref>): [ A^TP + P A P; P 0; ] + ξ_o [ 0 I; I 0 ]≤ -[ ϵP 0; 0 0 ]. Here ξ_o ∈ℝ_≶ 0 is the Lagrange multiplier and P∈𝒮_++^n, where 𝒮_++^n is the set of n× n symmetric, positive definite matrices. Feasibility of the matrix inequality (<ref>) implies that V(x(t))=x(t)^T P x(t) is a Lyapunov function. This can be verified by multiplying (<ref>) on the left and right by [ x(t)^T z(t)^T ] and [ x(t)^T z(t)^T ]^T, respectively. This gives the following constraint along trajectories of the system (<ref>): V̇(x(t)) +ξ_o x(t)^T z(t) ≤ -ϵ V(x(t)) Apply the lossless constraint to show that V̇(x(t)) ≤ -ϵ V(x(t)), i.e., V(x(t)) decays with rate ϵ∈_>0 along trajectories. Further details about the derivation are given in <cit.>. The dynamics in (<ref>) are open-loop stable for u(t) = 0 if there exists a P > 0 satisfying (<ref>). 
Thus, by extension, (<ref>) is closed-loop stable for u(t) = Ky(t) = KCx(t) if there exists a P > 0 satisfying the following: [ (A + BKC)^TP + P (A+BKC) P; P 0; ] + ξ_o [ 0 I; I 0 ]≤ -[ ϵP 0; 0 0 ], where K ∈^q × p is the SOF controller gain. However, the solution is difficult to compute because the inequality in (<ref>) is bilinear in the variables P and K. To remedy the bilinearity issue, we show in section <ref> that the BMI in (<ref>) can be easily reduced to an equivalent LMI, which is convex and easily solvable. § OUTPUT FEEDBACK CONTROLLER (MAIN RESULT) This section establishes conditions to solve (<ref>) for an asymptotically stabilizing K. Specifically, we show—using standard techniques—that (<ref>) can be formulated as a convex problem in variable K and that P is a solution with a fixed structure. Furthermore, we provide explicit feasibility conditions on the existence of a stabilizing K. There exists P ∈𝒮^n_++, ξ_o ∈_≶ 0, and K ∈^q × p satisfying (<ref>) if and only if there exists K satisfying the following: (A+BKC) + (A+BKC)^T + ϵ I ≤ 0. Notice that the lower right block of (<ref>) is zero. Thus, the off-diagonal terms must be zero in order for (<ref>) to be feasible. Therefore, P = -ξ_o I and ξ_o < 0. Hence, (<ref>) reduces to the following form: (A + BKC)^T (-ξ_o I) + ( -ξ_o I)(A + BKC) - ξ_o ϵ I ≤ 0. Since ξ_o is a negative scaling factor, we can eliminate it without affecting the solution to (<ref>). Thus, the following LMI in K results: (A + BKC)^T + (A + BKC) + ϵ I ≤ 0. Notice that any choice of ξ_o < 0 will satisfy the solution in (<ref>) since the resulting P can always be re-scaled to an identity and the same argument used for deriving (<ref>) will follow. Therefore, the solution to (<ref>) can be generalized to P = I and ξ_o = -1. The controller gain (if one exists) can be obtained by solving the feasibility problem in (<ref>), which is now convex and is an equivalent condition to (<ref>). Therefore, the solution K from (<ref>) is a solution of (<ref>) and vice versa. The feasible solution K ∈^q× p of (<ref>) is globally stabilizing if and only if P = I. Recall that the necessary condition for global stability is monotonic decay of state trajectories <cit.>. Thus, we bound P using Lemma 4 in <cit.> as follows: I ≤ P ≤γ I where γ is the maximum eigenvalue bound on P. Thus, γ is bounding the maximum distance of x(t) from the origin. Therefore, we can define the quantity Φ = max_t≥0{x(t)^2/x(0)^2}≤γ. For the state-trajectories to be monotonically decaying, we must have that x(t+δ t)^2 < x(t)^2 ∀ t≥0, where δ t is a small increment in time. Therefore, x(t)^2 < x(0)^2 ∀ t>0 and consequently, γ equals unity to satisfy the condition of monotonic decay i.e., Φ≤ 1 ∀ t ≥ 0. By extension, we can see that d x(t)^2/dt < 0. Thus, P = I from (<ref>), which satisfies the monotonic decay condition. If P ≠ I then γ > 1 from (<ref>), which includes state-trajectories that are either constant or monotonically increasing. Thus, any solution of (<ref>) is a globally stabilizing controller K if and only if P = I. Consequently, we have ξ_o = -1 from Lemma 1. Note that we have not yet established the explicit conditions that guarantee the existence of K as a feasible solution of (<ref>). There exists an ϵ∈ℝ_>0 and an output-feedback gain K ∈^q × p satisfying (<ref>) if and only if the following set of conditions are satisfied: U_B^T(A^T + A)U_B < 0 U_C^T(A^T + A)U_C < 0, where U_B = null(B^T) and U_C = null(C) are the null-spaces of B^T and C, respectively. 
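Before turning to the proof, the feasibility conditions above and the synthesis LMI can be checked numerically. The sketch below is illustrative only; it uses Python with numpy, scipy, and cvxpy, which are assumptions and not the parser/solver used for the computations reported in this note, and it takes the matrices of the example problem considered later.

import numpy as np
import cvxpy as cp
from scipy.linalg import null_space

A = np.array([[-0.1, 1.0], [0.0, -0.1]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 2.0]])

# Feasibility conditions: U_B^T(A^T+A)U_B < 0 and U_C^T(A^T+A)U_C < 0
S = A.T + A
U_B, U_C = null_space(B.T), null_space(C)
print(U_B.T @ S @ U_B, U_C.T @ S @ U_C)      # both negative here, so a stabilizing K exists

# SOF synthesis LMI, linear in K: (A+BKC) + (A+BKC)^T + eps*I <= 0
eps = 1e-6
K = cp.Variable((1, 1))
Acl = A + B @ K @ C
prob = cp.Problem(cp.Minimize(0), [Acl + Acl.T + eps * np.eye(2) << 0])
prob.solve()
print(prob.status, K.value)                  # any feasible K is stabilizing

Since this is a pure feasibility problem, the returned gain need not coincide with the particular value reported in the example section; any feasible K renders the closed loop asymptotically stable.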
We showed in Lemma <ref> that the following LMI results from the optimal solution P = I and ξ_o = -1: (A+BKC) + (A+BKC)^T + ϵ I ≤ 0 which is equivalent to (<ref>). Furthermore, the LMI in (<ref>) implies the following LMI for an arbitrarily chosen ϵ∈ℝ_>0: (A+BKC) + (A+BKC)^T < 0. Then, the LMI in (<ref>) can be written in an equivalent projection lemma form (see section 2.6.2 in <cit.>): A^T + A + B K C + C^T K^T B^T < 0. This is a particularly useful result as we can use the properties of the projection lemma to create two equivalent LMIs of the following form to solve for K: U_B^TΨ U_B < 0 U_C^TΨ U_C < 0 where Ψ = A^T + A, and U_B and U_C are the null-spaces of B^T and C, respectively. If B and C are full-rank, then the associated null spaces U_B and U_C will only contain the trivial zero vector, respectively. In any of these cases, we can simply use a congruence transformation followed by a change of variables (i.e., Y = PBK or Y = KCP^-1) to solve for the variables P and K <cit.>. In cases where the lossless nonlinearity defines the global behavior of the system, the LMI (<ref>) gives a sufficient condition for global asymptotic stability. An SOF gain K can be obtained simply by solving (<ref>) using an LMI solver, such as or . § EXAMPLE PROBLEM Consider the system in (<ref>) with the following matrices for x=[ x_1 x_2 ] ^T∈ℝ^2: A = [ -0.1 1; 0 -0.1 ], B = [ 1; 1; ], N(x) = [ 0 -x_1; x_1 0 ] C = [ 1 2 ]. Then, K ∈ℝ is the SOF gain to compute. We choose ϵ = 10^-6 and obtain K = -3.6231 by solving (<ref>) using the parser and the LMI solver  <cit.>. Fig. <ref> shows the phase portrait of the controlled (closed-loop) system. We can see that x(t) converges to the origin for all trajectories of the controlled (closed-loop) system. On the contrary, Fig. <ref> shows that some of the trajectories diverge from the origin for the uncontrolled (open-loop) system. Note the difference in scales between Fig. <ref> and Fig. <ref>: relatively small magnitudes x(0) result in divergent trajectories for the uncontrolled (open-loop) system, whereas no divergent trajectories exist for the controlled system even for relatively large magnitudes x(0). This further confirms that the designed controller successfully provides asymptotic stability to the system in (<ref>). § CONCLUSION We provide explicit conditions under which an SOF gain can be obtained using the lossless behavior of the nonlinearity. Thus, an SOF gain for any nonlinear system with a lossless nonlinearity can be solved using the framework in this paper. Note that the convexity of the problem was established as an LMI feasibility problem, which in general corresponds to a family of stabilizing SOF gains when the problem is feasible. This non-uniqueness of stabilizing SOF gains opens additional avenues for optimal control using convex optimization methods. Furthermore, the conditions established for feasibility of the LMI provide requirements on the set of sensors and actuators needed for synthesis. In conjunction with convexity of the problem, these requirements can potentially be exploited to perform optimal sensor/actuator selection using established methods based on convex optimization <cit.>. § ACKNOWLEDGEMENTS This material is based upon work supported by the ARO under grant number W911NF-20-1-0156 and the NSF under grant number CBET-1943988. MSH acknowledges support from the AFOSR under award number FA 9550-19-1-0034. IEEEtran
http://arxiv.org/abs/2307.00903v1
20230703095553
Magnetic lump motion in saturated ferromagnetic films
[ "Xin-Wei Jin", "Shi-Jie Shen", "Zhan-Ying Yang", "Ji Lin" ]
nlin.PS
[ "nlin.PS" ]
APS/123-QED School of Physics, Northwest University, Xi'an 710127, China Department of Physics, Zhejiang Normal University, Jinhua 321004, China Department of Physics, Zhejiang Normal University, Jinhua 321004, China School of Physics, Northwest University, Xi'an 710127, China Peng Huanwu Center for Fundamental Theory, Xi'an 710127, China Corresponding author: linji@zjnu.edu.cn Department of Physics, Zhejiang Normal University, Jinhua 321004, China In this paper, we study in detail the nonlinear propagation of magnetic soliton in a ferromagnetic film. The sample is magnetized to saturation by an external field perpendicular to film plane. A new generalized (2+1)-dimensional short-wave asymptotic model is derived. The bilinear-like forms of this equation are constructed and exact magnetic line soliton solutions are exhibited. It is observed that a series of stable lumps can be generated by an unstable magnetic soliton under Gaussian disturbance. Such magnetic lumps are highly stable and can maintain their shapes and velocities during evolution or collision. The interaction between lump and magnetic soliton, as well as interaction between two lumps, are numerically investigated. We further discuss the nonlinear motion of lumps in ferrites with Gilbert-damping and inhomogeneous exchange effects. The results show that the Gilbert-damping effects make the amplitude and velocity of the magnetic lump decay exponentially during propagation. And the shock waves are generated from a lump when quenching the strength of inhomogeneous exchange. Magnetic lump motion in saturated ferromagnetic films Ji Lin August 1, 2023 ===================================================== § INTRODUCTION The propagation of electromagnetic wave in ordered magnetic materials, especially in a ferromagnetic medium, plays a vital role in faster and higher density storage fields <cit.>. In particular, magnetic soliton(MS), which exists in both ferro- and antiferro-magnets, is becoming a very promising information carrier because of its particle-like behavior and maneuverability <cit.>. In the past few decades, a wide range of soliton-type propagation phenomena has been theoretically predicted <cit.>, and some of them have been confirmed experimentally <cit.>. Indeed, wave propagation in ferromagnetic media is well-known as a highly nonlinear problem. A complete description of all types of nonlinear excitations is governed by the Maxwell equations coupled with Landau-Lifschitz equation. For this moment, let us notice that a fully nonlinear theory has not been developed. But the linear theory for sufficiently small amplitudes was established and validated experimentally <cit.>. In order to obtain results valid in nonlinear regimes, or at least weakly nonlinear, one has to resort to intermediate models (by introducing a small perturbative parameter related to the soliton wavelength) <cit.>. These models include long-wave model <cit.>, modulational asymptotic model <cit.>, and short-wave model <cit.>. Both long-wave model and modulational asymptotic model are mainly used to explain and predict the behavior of large-scale phenomena owing to their long-wave-type approximate condition <cit.>. However, this condition is not always applicable because the scale of magnetic materials and devices are getting more refined and more sophisticated. Moreover, the main practical interest of ferrites is that they propagate microwaves <cit.>. 
On the contrary, from the viewpoint of applied physics, the short-wave-type approximation is much more relevant to available experiments than the former one. Since Kraenkel et al. first proposed the short-wave model <cit.>, quite a few related nonlinear evolution equations have been derived, which belong to the Kraenkel-Manna-Merle (KMM) system <cit.>. Some significant works have been devoted to searching and explaining different excitation patterns of ferromagnetic insulators. As for (1+1)-dimensional KMM system, the existence of multi-valued waveguide channel solutions has been verified, and the nonlinear interaction properties were investigated between the localized waves alongside the depiction of their energy densities <cit.>. By applying the Hirota bilinear transformation method, the one- and two-soliton solutions were constructed while studying in details the solitons scattering properties <cit.>. This system is also solvable using the inverse scattering method <cit.>. It is noteworthy that this system possesses the loop-soliton and spike-like soliton <cit.>, and the magnetic loop-soliton dynamics have been extensively studied <cit.>. The propagation of electromagnetic waves in higher-dimensional ideal ferromagnets has also been studied, corresponding to the (2+1)-dimensional KMM system <cit.>. The analytical one-line-soliton solution as well as its transverse stability have been reported <cit.>. It has been shown that these structures were stable under certain conditions. On the other hand, most previous studies have only focused on the propagation of MS in ideal ferrites, which means some important properties of the magnetic material were neglected. The main reason is that the nonlinear wave equation describing the propagation of electromagnetic waves in non-ideal ferromagnetic materials is no longer integrable. However, the Gilbert-damping and inhomogeneous exchange effects are essential features in a real ferromagnetic film, and their connection with MS motion is an important issue that has not been explored so far. In this paper, we aim to investigate theoretically and numerically the dynamics of the MS in a ferromagnetic film including damping and the inhomogeneous exchange effect. The rest of this paper is organized as follows. In Section 2, we review the physical background and derive a new (2+1)-dimensional short-wave asymptotic model in ferromagnetic media. In Section 3, the bilinear-like form of the reduced system is constructed and the analytical MS solutions are acquired. In Section 4, the transmission stability of the magnetic soliton is numerically explored. The results show that an unstable MS will split to some magnetic lumps by a small perturbation. The motions of these lumps under the influence of damping and inhomogeneous exchange are analysed in detail. We end this work in Section 5 with a brief conclusion and perspectives. § PHYSICAL BACKGROUND §.§ Basic equations The physical system under consideration is a saturated magnetized ferrite film lying in the x-y plane, as shown in Fig. 1. Different from Ref. <cit.>, we consider the external field H_0^∞ perpendicular to the film, i.e., M_0=(0,0,m). So the transverse drift is avoided. The typical thickness of the film is about 0.5mm, and the width is approximately 10mm. We assume the propagation distance is large enough with regard to the wavelength, say more than 50cm. 
The evolution of the magnetic field H and the magnetization density M is governed by the Maxwell equations coupled with Landau-Lifschitz-Gilbert equation, which read as -∇(∇·H)+ΔH=1/c^2∂^2/∂ t^2(H+M), ∂/∂ tM=-γμ_0M×H_eff+σ/M_sM×∂/∂ tM, where c=1/√(μ_0ϵ̃) is the speed of light with the scalar permittivity ϵ̃ of the medium, γ is the gyromagnetic ratio, μ_0 being the magnetic permeability of the vacuum, σ is the damping constant, and M_s is the saturation magnetization. The effective field H_ eff is given by <cit.> H_eff=H-βn(n·M)+αΔM. Here α and β are the constants of the inhomogeneous exchange and the magnet anisotropy (β>0 corresponds to the easy-plane case), respectively. For a simple tractability, the unit vector n of the anisotropy axis is assumed to be along the z axis (i.e., n≡e_z). In order to transform the above systems to dimensionless equation, we rescale the quantities M, H, and t into μ_0γM/c, μ_0γH/c, and ct. Thus, the constants μ_0γ/c and c in Eqs.(2) and (3) are replaced by 1, M_s by m=μ_0γ M_s/c, and σ by σ̃=σ/μ_0γ <cit.>. §.§ Linear analysis To study the linear regime we look at a small perturbation of a given solution. Equations (1) are linearized about the steady state: M_0=(0 ,0 ,m), H_0=μM_0. where μ is the strength of the internal magnetic field. Before proceeding further we assume that the ferromagnetic materials have weak damping σ̅∼ϵσ̃. The exchange interaction parameter α and anisotropy parameter β are of order ϵ^2 and ϵ^3, respectively (i.e. α̅=ϵ^2α, β̅=ϵ^3β). Let us seek for the plane wave perturbation solution propagating along the x-direction such as M=M_0+ϵmexp[i(kx+ly-ω t)], H=H_0+ϵhexp[i(kx+ly-ω t)], where k and l are the wave numbers in the x and y directions, ω is the frequency. Vectors m=(m^x,m^y,m^z) and h=(h^x,h^y,h^z) are arbitrary real scalar quantities. Substituting Eq. (4) into (1) and (2) in the linear limit, it is reduced to [ ω^2 0 0 ω^2-l^2 kl 0; 0 ω^2 0 kl ω^2-k^2 0; 0 0 ω^2 0 0 ω^2-k^2-l^2; -iω mμ 0 0 -m 0; -mμ -iω 0 m 0 0; 0 0 -iω 0 0 0 ]·[ m_x; m_y; m_z; h_x; h_y; h_z ] =0 Then we obtain the following dispersion relation m^2(μ+1)[μ(k^2+l^2-ω^2)-ω^2]-ω^2(k^2+l^2-ω^2)=0 Note that we focus on studying the short-wave approximation k→∞ [2]. It comes k_0∼ϵ^-1 through a small parameter ϵ≪1 linked to the magnitude of the wavelength. Consequently, the frequency expands accordingly as ω=ω_-1ϵ^-1+ω_1ϵ+ω_3ϵ^3+.... This assumption guarantees the phase velocity ω(k)/k and the group velocity ∂ω/∂ k are always bounded [3]. Now, replacing Eq. (6) into the dispersion relation above, we obtain a set of equations: ∙ At order of ϵ^-4: ω_-1=± k_0 ∙ At order of ϵ^-2: ω_1=[(μ+1)m^2+l^2]/2k_0 ∙ higher order equations which determines ω_3, ω_5,...The direction of the wave propagation is assumed to be close to the x axis, thus y variable gives only account of a slow transverse deviation<cit.>. Therefore l is assumed to be very small with respect to k and we write l=l_0 of order 0 with respect to ϵ. The phase up to order ϵ is thus (x-t)/ϵ+l_0y-ϵω_1t, which motivates the introduction of new variables: ζ=1/ϵ(x-Vt), y=y, τ=ϵ t. The variable ζ describes the shape of the wave propagating at speed V; it assumes a short wavelength about 1/ϵ. The slow time variable τ accounts for the propagation during very long time on very large distances with regard to the wavelength. 
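To make the short-wave limit concrete, the following Python sketch (an illustration, not part of the original derivation) solves the dispersion relation above as a quadratic in ω^2 and compares its fast branch with the first two terms of the short-wave expansion, ω ≈ k + [(μ+1)m^2 + l^2]/(2k). The values of m, μ and l are arbitrary illustrative choices.

```python
import numpy as np

def omega_branches(k, l, m, mu):
    """Solve the dispersion relation above,
    m^2 (mu+1) [mu (k^2 + l^2 - w^2) - w^2] - w^2 (k^2 + l^2 - w^2) = 0,
    rewritten as w^4 - [k^2 + l^2 + m^2 (mu+1)^2] w^2 + m^2 mu (mu+1)(k^2 + l^2) = 0."""
    S = k**2 + l**2
    w2 = np.roots([1.0, -(S + m**2 * (mu + 1.0)**2), m**2 * mu * (mu + 1.0) * S])
    return np.sqrt(np.sort(w2.real))          # (slow branch, fast branch)

m, mu, l = 1.0, 0.5, 0.3                      # illustrative dimensionless values
for k in (5.0, 20.0, 100.0):
    w_exact = omega_branches(k, l, m, mu)[-1]
    w_short = k + ((mu + 1.0) * m**2 + l**2) / (2.0 * k)   # omega_{-1}/eps + eps*omega_1 in terms of k
    print(f"k = {k:6.1f} : exact = {w_exact:.6f}, short-wave = {w_short:.6f}")
```

As k grows the two values agree to higher and higher accuracy, which is the content of the bounded phase- and group-velocity requirement discussed above.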
The transverse variable y has an intermediate scale, as in KP-type expansions <cit.> §.§ Multiple scale approach In order to derive the nonlinear model, fields M and H are expanded in power series of ϵ as M=M_0+ϵM_1+ϵ^2M_2+ϵ^3M_3+..., H=H_0+ϵH_1+ϵ^2H_2+ϵ^3H_3+... . where M_0, H_0, M_1, H_1,...are functions of (ζ,y,τ). We consider the boundary conditions: ζ→-∞limM_0=(0,0,m),ζ→-∞limM_j=ζ→-∞limH_j=0,(j0). We derive the following expressions by substituting Expansions (8) into equation (1): ∙ At order ϵ^-2:M_0 is a constant vector M_0=(0,0,m), ∙ At order ϵ^-1:H_0^x=0, M_1^y=0, M_1^z=0, ∙ At order ϵ^0:M_1ζ^x=mH_0^y,M_2ζζ^x=-H_2ζζ^x-H_1ζτ^yM_2ζζ^y=-H_1ζ y^x+H_0ζ y^xM_2ζζ^z=H_2ζτ^z+H_y y^z ∙ At order ϵ^1:M_2ζ^x=-m H_1^yM_2ζ^y=mα̅M_1ζζ^x+σ̅M_1ζ^x-M_1^xH_0^z+mH_1^xM_2ζ^z=M_1^xH_0^y let us introduce some independent variables X and T defined as X=-mζ/2, Y=my, T=mτ. After eliminating H_2 and M_2, we finally obtain the (2+1)-dimensional KMM equation: C_XT=-BB_X+C_YY, B_XT=BC_X+B_YY-sB_X+ρ B_XX, where observables B, C and constants s, ρ are defined by C=-X-∫^X(H^z_0/m)dX, B=M^x_1/2m, s=-σ̅/2, ρ=α̅m^2/4. This equation is new, which describes the evolution of magnetization field M and magnetic field H within a ferrite film in presence of Gilbert-damping and inhomogeneous exchange. The quantities H_0 and M_1 refer to the zeroth and first-order expansion coefficients of the external magnetic field and the magnetization, respectively. For some simplicity, in the next, the independent variables X, Y and T will be rewritten as their lower cases x, y and t, respectively. § HIROTA'S BILINEARIZATION AND SOLITON SOLUTIONS OF THE (2+1)-DIMENSIONAL KMM EQUATION To explore soliton solutions for the (2+1)-dimensional KMM equation (9), we consider a specific dependent variable transformation B=G/F, C=δ x-2(ln F)_t-2(ln F)_y, where δ is an arbitrary constant. Consequently, the bilinear-like forms of the (2+1)-dimensional KMM equation can be derived as follow F· (D_xD_t + sD_x - D_y^2)G· F + G· (D_xD_y + D_y^2)F· F=δ F^2G ∂_x[G^2/2F^2 -(D_yD_t + D_t^2)F· F/F^2] +∂_y[(D_yD_t + D_t^2)F· F/F^2] = 0 where G, F are all differential functions of (x,y,t) to be determined. The symbols D_x, D_t refer to the Hirota's operators with respect to the variable x, t, respectively. In order to construct the solitary wave solutions of Eq.(6), we expand G and F with respect to a formal expansion parameter as G=ϵ G_1+ϵ^3 G_3+ϵ^5 G_5+..., F=1+ϵ^2F_2+ϵ^4F_4+ϵ^6 F_6+..., in which ϵ is a perturbation parameter and functions G_i, F_i,(i=1,2,3,...) are expansion coefficients of the above series. The one-soliton solution could be constructed by truncating the perturbation expansion of G and F as follow G=e^η_1, F=1+k^2A^2/16δ^2e^2η_1. Substituting these expressions into Eq.(9) and solving the bilinear system recursively, in the absence of damping, the analytical one-soliton solution of the (2+1)-dimensional KMM equation can be transformed into B=2δ/k(η_1+η_0), C=δ x-2δ/k[tanh(η_1+η_0)+1], where η_1=kx+ly+[(l^2-kl)/2k] t, η_0=ln(k/4δ), k and l are arbitrary real constants. It should be noted that this soliton solution exists only when the damping is neglected (s=0). 
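As a numerical companion to the one-soliton construction (an illustration only), the sketch below evaluates B = G/F and C = δx − 2(ln F)_t − 2(ln F)_y on a grid, using the truncated expansion G = e^{η_1}, F = 1 + (k^2/16δ^2)e^{2η_1} (i.e. taking A = 1) and finite differences for the t- and y-derivatives. Only a snapshot at t = 0 is shown, the phase η_1 is taken as printed above, and the parameter values k, l, δ and the grid are illustrative. A simple sanity check is that the maximum of B reproduces the amplitude 2δ/k of the line soliton.

```python
import numpy as np

k, l, delta = 1.0, 0.5, 0.25                  # illustrative soliton parameters

x = np.linspace(-20.0, 20.0, 801)
y = np.linspace(-20.0, 20.0, 401)
X, Y = np.meshgrid(x, y, indexing="ij")

def G_F(t):
    """Truncated one-soliton expansion: G = e^eta1, F = 1 + (k^2/16 delta^2) e^(2 eta1)."""
    eta1 = k * X + l * Y + ((l**2 - k * l) / (2.0 * k)) * t   # phase as printed above
    return np.exp(eta1), 1.0 + (k**2 / (16.0 * delta**2)) * np.exp(2.0 * eta1)

t0, dt = 0.0, 1.0e-3
G, F = G_F(t0)
B = G / F                                                     # B = G/F

# C = delta*x - 2 (ln F)_t - 2 (ln F)_y, with the derivatives taken numerically
lnF_t = (np.log(G_F(t0 + dt)[1]) - np.log(G_F(t0 - dt)[1])) / (2.0 * dt)
lnF_y = np.gradient(np.log(F), y[1] - y[0], axis=1)
C = delta * X - 2.0 * lnF_t - 2.0 * lnF_y

print(f"max |B| = {np.abs(B).max():.4f}  (amplitude 2*delta/k = {2.0*delta/k:.4f})")
```

The B-field obtained in this way is a sech-shaped ridge (a line soliton) and C carries the corresponding tanh-shaped kink on top of the linear background δx.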
Similar to the procedure for constructing one-soliton solution, the two-soliton solution can be given by treating the truncated perturbation expansions of G and F as G=A_1e^ξ_1+A_2e^ξ_2+C_12e^ξ_1+2ξ_2+C_21e^2ξ_1+ξ_2, F=1+B_11e^2ξ_1+B_22e^2ξ_2+B_12e^ξ_1+ξ_2+E_12e^2ξ_1+2ξ_2, where A_1,A_2,k_1,k_2 are real constants, ξ_i=k_ix+l_iy+[(l_i^2+δ)/k_i]t,(i=1,2), and the remaining parameters have the following forms: B_ii=A_i^2k_i^2/16δ^2,B_12=A_1A_2/2δ^2k_1^2k_2^2/k_+^2,k_1l_2=k_2l_1, C_ij=A_iA_j^2/16δ^2k_j^2k_-^2/k_+^2,E_12=A_1^2A_2^2/256δ^4k_1^2k_2^2k_-^4/k_+^4, where k_+=k_1+k_2, k_-=k_1-k_2. Parameters A_i, A_j, k_i, k_j and l_i,(i=1,2,j=3-i) are arbitrary real constants. § NUMERICAL INVESTIGATION OF LINE-SOLITON AND MAGNETIC LUMPS §.§ Unstable MS splits into lumps We now turn to the stability and interactions between MSs in a ferromagnetic film. The initial data is a MS perturbed by some position-dependent Gaussian wave packets with the following expression: f=bexp[-(x-x_0/x_r)^2-(y/y_r)^2], where b,x_r and y_r correspond to the shape of the wave packet and x_0 is related to the perturbation position. The time evolution results clearly show the instability of the MS. For small b_i, the MS will break up and eventually evolve into some stable two-dimensionally localized lumps, as displayed in Figs. 2(a) and 2(b). We observe that most of the energy is always propagated as a lump, even if its speed may differ from the input. Such a magnetic lump is a solitary wave packet that maintains its shape and speed during propagation or collision. A complete single lump of magnetic field component H^z (component H^y) is circled in red (black) in Fig.2. The enlarged views (see Figs.2(c) and 2(d)) provide a clear picture of the shape and contour map of the lump. It can be found that component H^z is a dipole-mode lump, whereas component H^y is a standard KP-lump. We also show the vector field of the magnetic lump in Fig.3(a). Note that magnetic field component H^x is zero, the magnetic field is always in the y-z plane, hence the lump can be regarded as a 360^∘ domain wall localized in x and y directions. Fig.3(b) presents the magnetic field along y=0. The blue and red arrows correspond to the magnetic field intensity of component H^z, H^y, respectively. The rest of this work is concerned with the propagation and interaction behavior of these lumps in ferrite medium. §.§ Lump motion in ferromagnets with damping or inhomogeneous exchange effects The evolution behavior of the magnetic lump in the ideal ferrite is quite simple and imaginable. Each lump maintains its shape while it travels at a constant speed. However, in most of real ferromagnetic materials, we have to take the Gilbert-damping into account . For instance, the dimensionless damping constant s ranges from 0.048 to about 0.385 in garnet ferrite films. Here we are going to study the dynamics of magnetic lump in a damped ferrite film. The typical ferromagnetic film under consideration is a garnet ferrite film with the dimensionless damping constant s=0.1. For a clearer view of the change in shape of the lump, we define ℋ and 𝒲 as the height and width of the lump, which are the vertical distance between the highest point and the lowest point and the horizontal distance along the propagation direction, respectively. All of these are summarized in Fig.4. The propagation of a lump on the garnet ferrite film is presented in Fig.5. As shown in Fig.5(a), the lump travels forward a visible distance in the damped ferrite. 
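For readers who wish to reproduce such measurements, a minimal sketch of how ℋ and 𝒲 can be extracted from a two-dimensional field snapshot is given below. The synthetic dipole-mode profile and the use of a full width at half maximum along x for 𝒲 are illustrative assumptions, not the exact procedure used for the figures.

```python
import numpy as np

def lump_metrics(field, x):
    """Height H (highest minus lowest point, as defined above) and width W along the
    propagation direction x, taken here as the full width at half maximum of |field|
    on the row through the peak (one possible operational definition)."""
    H = field.max() - field.min()
    i, j = np.unravel_index(np.abs(field).argmax(), field.shape)
    row = np.abs(field[:, j])
    above = np.where(row >= 0.5 * row[i])[0]
    return H, x[above[-1]] - x[above[0]]

# synthetic dipole-mode profile standing in for H^z (illustration only)
x = np.linspace(-10.0, 10.0, 401)
y = np.linspace(-10.0, 10.0, 401)
X, Y = np.meshgrid(x, y, indexing="ij")
Hz = -X * np.exp(-(X**2 + Y**2))

H, W = lump_metrics(Hz, x)
print(f"height H = {H:.3f}, width W = {W:.3f}")
```

Applying such a measurement to successive outputs of the simulation gives the time series of ℋ(t) and 𝒲(t) discussed next.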
Beyond that, comparing the profiles of lump between t=0 and t=10, we evidently observe that the lump becomes smaller and narrower. Fig.5(b) shows the lump height and width exhibit a tendency of exponential decay. The solid blue line is the exponential fitting curve to ℋ(t), with the function expression being ℋ(t)=A_0e^-st. We confirm the above-mentioned amplitude attenuation law is universal by simulating the motion of lump in ferrites with virous damping factors. Moreover, a definite relationship between the amplitude and the localization region of solitons is important for the soliton excitations. We analyze different sizes of numerical lumps and mark the width and height of lumps in the phase diagram (see Fig. 5(c)). The results show that for a magnetic lump excitation, its width and height meet a linear relationship within the error range (𝒲/ℋ∼0.305). So the lump excitation, upon decay, retains a soliton form. Therefore, in this system, the Gilbert-damping plays a role of dissipating energy during the motion of magnetic lumps and it is characterized by decreasing the amplitude and width of lump. The inhomogeneities otherwise referred to as deformities is inevitable in real magnetic materials, and it can be caused by either external fields or the presence of defects, voids and gaps in the material. It has already been reported that the MS may be deformed by the presence of inhomogeneities, in particular its structure and speed <cit.>. In this present system, the inhomogeneous exchange process is unignorable when the wavelength of lump is comparable to the characteristic exchange length. We now move to study the lump motion in the presence of inhomogeneous exchange effect. The initial data is the stable magnetic lump shown in Fig.5. As can be observed from Fig. 6(a) and 6(b), in ferrite without exchange interaction, the lump solution propagates at a constant speed and along the previous path. We then consider the non-equilibrium dynamics of lump by performing a sudden interaction quench. The pictures of component H^y at dimensionless times t=2 and t=4.5 are shown in Fig. 6(c) and 6(d). As we see, for a quench from the non-interacting to strong inhomogeneous exchange ferrite film, the lump oscillates rapidly and diffracts along the propagation direction. A two-dimensional shock wave is generated and propagates forward. The shock wave front continues to propagate in the negative direction along x-axis. Finally, the energy of lump will be dissipated into numberless tiny waves. Accordingly, considering that the lump would be destroyed by the inhomogeneous exchange process, one has to consider keeping its wavelength away from the characteristic exchange length in the lump-based microwave applications. §.§ Some examples of excitations and interactions The evolution pattern given in Fig.2 reveals that the lump moves at a larger velocity than the broken MS in the propagation. The reason is that the velocity of soliton solution is proportional to the soliton amplitude. During the formation of the lump, the original MS will be destroyed, and most of the energy is concentrated in some certain centers, which causes the amplitude (and velocity) of the lump to be greater than that of MS. These lumps with various speeds enable us to explore the interaction between lump and soliton, as well as between two lumps. A typical example of lump-MS collision is shown in Fig.7(a). The MS begins to break up around at t=4. Subsequently, the splitting lump is going to catch up and collide with the front-MS. 
After the collision, the front-MS is destroyed and broken into several lumps with various sizes. It is remarkable that the lump keep its localized form before and after the collision almost unchanged. This phenomenon implies such two-component lumps are natural results from this nonlinear propagation equations. Further simulation shows these lump structures could be generated by a MS with random disturbance. Fig.7(b) depicts a characteristic inelastic collision between two lumps. We initially generate two adjoining lumps. They are emitted by MS at dimensionless time t=6.5. The merging process can be performed as follows. From t=7.5 to t=9.5, two lumps merge simultaneously together and give birth to a new lump whose amplitude is significantly greater than the amplitude of previous lumps. Obviously there is a weak attraction between two lumps which results in their fusion. In addition to the fusion of the two lumps, we also observed an extraordinary peak at a specific moment (about t=9.5), which looks like a second-order rogue wave. It appears to be the result of the interaction between the ripples surrounding the two lumps. After the fusion, the rouge wave-like structure disappears and the dynamics of the output is determined mainly by a single high-amplitude lump. § CONCLUSION As a conclusion, the nonlinear propagation of MS in a saturation magnetized ferromagnetic thick film is studied in detail. In the starting point, we derive the (2+1)-dimensional KMM system that governs the evolution of short MS waves in a saturated ferromagnetic film. The bilinear form of the KMM system is constructed and the MS solutions are obtained analytically. After that, numerical simulations are performed to analyse the evolution behaviours of MS. A significant observation is that the unstable MS can be destroyed by Gaussian perturbation and broken into some stable magnetic lumps. These lumps exhibit high stability during the propagation. Furthermore, some examples are given to analyse the collision behaviours between lump and MS, and the interaction between two lumps. It is found the lump keeps its shape and speed in the collision with MS. The results confirm that the lump is a stable propagation mode in this system and, more to the point, the velocity of lump can be adjusted by its amplitude. Their robustness and controllability provide the possibility for future information memory and logic devices. We also study the propagation of such a lump in ferrites subjected to influence of damping and inhomogeneous exchange effects. When the Gilbert-damping of ferrite is considered, the lumps undergo the following changes: the amplitude and the speed of lump are decreased, and the width of lump along the propagation direction is getting narrow. It would cause a strong diffraction of the lump if we quench the interaction strength. We hope our work will invoke follow-up experimental studies of lump-based microwave applications. Additionally, since only one- and two-line-soliton are obtained, the integrability of the (2+1)-dimensional system Kraenkel-Manna-Merle (KMM) remains an open issue. The existence of the higher-dimensional evolution system as well as the bulk polariton solution is an intriguing avenue for future exploration. § ACKNOWLEDGMENT This work was supported by the National Natural Science Foundation of China under Great Nos. 11835011; 11675146; 11875220;. unsrt
http://arxiv.org/abs/2307.01574v1
20230704090239
Secondary gas in debris discs released following the decay of long-lived radioactive nuclides, catastrophic or resurfacing collisions
[ "Amy Bonsor", "Mark C. Wyatt", "Sebastian Marino", "Björn J. R. Davidsson", "Quentin Kral" ]
astro-ph.EP
[ "astro-ph.EP" ]
firstpage–lastpage The key role of Lagrangian multiplier in mimetic gravitational theory in the frame of isotropic compact star G. G. L. Nashed 0000-0001-5544-1119 August 1, 2023 ============================================================================================================== Kuiper-like belts of planetesimals orbiting stars other than the Sun are most commonly detected from the thermal emission of small dust produced in collisions. Emission from gas, most notably CO, highlights the cometary nature of these planetesimals. Here we present models for the release of gas from comet-like bodies in these belts, both due to their thermophysical evolution, most notably the decay of long-lived radioactive nuclides and collisional evolution, including catastrophic and gentler resurfacing collisions. We show that the rate of gas release is not proportional to the rate of dust release, if non-catastrophic collisions or thermal evolution dominate the release of CO gas. In this case, care must be taken when inferring the composition of comets. Non-catastrophic collisions dominate the gas production at earlier times than catastrophic collisions, depending on the properties of the planetesimal belt. We highlight the importance of the thermal evolution of comets, including crucially the decay of long-lived radioactive nuclides, as a source of CO gas around young (<50Myr) planetary systems, if large (10-100s kms) planetesimals are present. § INTRODUCTION Belts of comets or asteroids, similar to the Kuiper belt, are detected around stars other than the Sun, from the thermal emission of small dust produced in collisions between the larger planetesimals <cit.>. The volatile-rich, cometary, nature of the planetesimals in many of these systems is witnessed by the detection of gaseous emission, most commonly CO <cit.>. This gas has now been detected for tens of planetary systems around AFGM-type stars <cit.>. In most cases where the spatial distribution of the gas is resolved, it is associated with the position of the dust belts <cit.>. Whilst some planetary systems with high masses of CO could be primordial, left-over from the proto-planetary disc phase <cit.>, a secondary origin can readily explain the CO in the many gas-poor debris systems <cit.>, as well as in some systems with higher levels of CO <cit.>. In other words, the observed gas is not hydrogen-rich gas leftover from the gas-rich protoplanetary disc, but hydrogen-poor gas released from planetesimals in gas-poor debris discs. The CO in this hydrogen-poor gas would only survive on short timescales (∼ 130 yrs) due to UV photodissociation <cit.>. Instead it is best explained by secondary gas released from the (icy) comets that also produce the dusty material observed in the infrared <cit.>. Mechanisms suggested for the release of gas include UV desorption <cit.>, collisions <cit.>, radiogenic heating <cit.> and heating from stellar irradiation <cit.>. In the Solar System, activity triggered by the increased stellar radiation as comets are scattered close to the Sun leads to a cometary tail, with a wide range of species detected including H_2O, CO_2, CO, CH_3OH, H_2CO, HCN, H_2S and CS_2 <cit.>. Cometary tails are dominated by water, whilst a few show spectra dominated by hyper-volatile species including N_2 <cit.>). The recent New Horizons fly-by of Arrokoth placed upper limits on the release of hyper-volatiles (CO, N_2 and CH_4) <cit.>, whilst CO is regularly detected in comets in the inner Solar System <cit.>. 
If the Kuiper-belt is still releasing CO gas today, it is likely too small to be detectable even with in-situ missions like New Horizons <cit.>. The survival of CO as CO ice in Solar System comets following exposure to stellar irradiation or the decay of long-lived radioactive nuclides including ^40K, ^232Th, ^235U and ^238U is unclear and the CO is potentially trapped in alternate reservoirs including amorphous water ice and CO_2 ice as suggested by <cit.> and references therein. This work considers the potential mechanisms that lead to the production of gas in exo-planetary systems, within debris, or cometary belts. The aim is to quantify the gas production rates, such that the conditions (timescales) on which the different mechanisms dominate can be considered. The early release of hyper-volatiles (CO) due to thermophysical evolution driven by heating from the decay of long-lived radioactive nuclides will be compared to the release of hypervolatiles from collisions, both catastrophic and non-catastrophic. Here we focus on the dramatic resurfacing collisions that occur more frequently than catastrophic collisions for those large planetary bodies that are held together by their gravitational strength. Many of the conclusions, however, apply to less violent cratering collisions. Whilst comets likely form during the gas disc phase and the evolution here plays an important role <cit.>, this work starts with fully formed planetesimals in a gas-poor environment, akin to a fully formed planetary system. This work compares two models for the release of volatiles, the first due to collisions and the second due to radiogenic heating. Whilst in a realistic system both processes might act together, the purpose here is to compare the two processes. The work starts in <ref> by considering collisions, firstly presenting the properties of this planetesimal belt (<ref>). This is followed by details of the collision model for the evolution of the planetesimal belt and the release of volatiles in <ref>, including mass conservation (<ref>) and the release of CO following collisions (<ref>). Results from the numerical model for the gas production from planetesimal belts due to collisions are presented in <ref>. This is followed by the presentation of a simple model for the volatile release due to radiogenic heating (<ref>) and results for the gas production are compared to those from collisions in <ref>. The validity of the model is discussed in (<ref>), followed by a discussion of whether (and when) radiogenic heating or collisions dominate the gas production in debris belts (<ref>), the importance of resurfacing collisions, compared to catastrophic collisions (<ref>), how the observed gas production can provide insights regarding the size of the largest planetesimals present in debris belts (<ref>) and the composition of comets, as derived from gas detection (<ref>). The paper is concluded in <ref>. § VOLATILE RELEASE FROM COLLISIONS The aim here is to quantify the potential release of volatiles, notably CO, in collisions, for comparison with the release from thermophysical evolution. This work considers that many exoplanetary systems have belts of comets or asteroids similar to the Solar System; the key difference being that these planetesimal belts may occur at any radial location and contain significantly more material, as consistent with observations of bright debris discs. 
In this work, the term planetesimal will be used to refer to the small planetary bodies that are part of the planetesimal belt, regardless of their volatile content and size. Crucially here we do not assume that the release of volatiles follows the dust evolution of a collisional planetesimal belt, but also consider that volatiles are released in both catastrophic and non-catastrophic collisions. Whilst cratering collisions may play a notable role in the release of CO gas (see discussion in <ref>), we focus here on resurfacing or shattering collisions. That is, the collisions that regularly occur to the largest planetesimals, where sufficient energy is imparted to shatter, but not to overcome self-gravity and disperse the fragments, leading to the formation of a rubble pile. Here we present a framework to follow the collisional evolution of a debris belt, alongside a potential model for how volatiles are released in collisions. Given uncertainties in exactly how volatiles are released in collisions, the model is presented in such a manner that it could be readily adapted to an updated model for the collisional production of gas. Full details of the variables used in this work can be found in Table <ref>. §.§ Properties of the planetesimal belt The planetesimal belt is characterised by a range of properties, including its total initial mass in dust or solids, m_ s, tot (0), its radial location, r and width dr. The properties of the planetesimals themselves are characterised based on their masses, M_k, a size distribution, with slope α, from a minimum, M_ min to a maximum, M_ max, planetesimal mass, a density ρ_k and a CO content, or volatile mass fraction, f_v,k. In this work, the volatile is considered to be CO, but the model could be applied to trace the evolution of any volatile, including water ice, and solids refers to the dusty component of comets. In order to numerically follow the evolution of the belt, the mass in the planetesimal belt is distributed between bins, labelled by the mass in individual solid planetesimals in each kth bin, M_ s, k. The planetesimals in each bin have a total mass which is the sum of the mass in dust or solids, M_ s,k and volatiles, M_ v,k, such that M_k = M_ s, k + M_ v, k. The bins are logarithmically spaced in M_ s,k, and k=1 labels the bin of largest mass. The logarithmic bin spacing δ is defined such that 1-δ= M_ s,k+1/M_ s,k. The initial number of planetesimals in each mass bin is assumed to scale as: n_k(M)= KM_ s,k^-α, which is equivalent to the commonly used n(D)dD∝ D^-α'dD, where α=(α'+2)/3 <cit.>, such that when α'=7/2, α = 11/6. Thus, the total mass of solid planetesimals in each mass bin, is given by integrating the mass in solids across the bin between M_ s,k and M_ s, k(1-δ) to give: m_s, k (0)= K M_ s, k^2-α, assuming δ <<1, where K= m_ s, tot(0)/Σ_i=1^i_ max M_ s, i^2-α. The planetesimals in the kth bin have an average density, ρ_k and diameter D_k, where M_k = πρ_k D_k^3/6. The diameter of the planetesimal, D_k in each bin, as well as their average density, ρ_k, can change as a function of time as the planetesimals lose volatiles, whilst the mass in solids, M_ s, k remains constant. Thus, the volatile fraction of each planetesimal, f_ v,k, also changes as a function of time and is denoted: f_ v,k= M_ v,k/M_ s,k+M_ v,k. The average density is given by: ρ_k= 1/((1-f_ v,k)/ρ_s + f_ v,k /ρ_ v), where ρ_s is the density of the dust or solid component and ρ_ v is the density of the volatile component. 
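As an illustration of this bin bookkeeping, the following sketch (Python/NumPy, not taken from the paper's code) builds the logarithmic solid-mass bins, the n_k ∝ M_s,k^-α number distribution, and the per-bin solid and volatile masses. The solid and ice densities are assumed placeholder values; the belt mass, size range and CO fraction follow the fiducial numbers quoted later in the text.

```python
import numpy as np

M_EARTH = 5.97e24                    # kg
rho_s, rho_v = 3000.0, 1000.0        # assumed solid and volatile (ice) densities, kg m^-3
f_v0  = 0.04                         # initial CO mass fraction of every planetesimal
alpha = 11.0 / 6.0                   # slope of n_k = K M_s,k^-alpha
delta = 0.02                         # logarithmic bin width, M_s,k+1 = (1 - delta) M_s,k

def belt_bins(m_s_tot, D_min, D_max):
    """Logarithmically spaced solid-mass bins and their initial solid / volatile content."""
    rho0 = 1.0 / ((1.0 - f_v0) / rho_s + f_v0 / rho_v)         # initial mean density
    Ms_max, Ms_min = (1.0 - f_v0) * (np.pi / 6.0) * rho0 * np.array([D_max, D_min]) ** 3
    n_bins = int(np.ceil(np.log(Ms_min / Ms_max) / np.log(1.0 - delta))) + 1
    M_s = Ms_max * (1.0 - delta) ** np.arange(n_bins)          # solid mass per planetesimal
    K = m_s_tot / np.sum(M_s ** (2.0 - alpha))                 # normalisation of m_s,k(0)
    m_s = K * M_s ** (2.0 - alpha)                             # total solid mass in each bin
    m_v = m_s * f_v0 / (1.0 - f_v0)                            # total volatile (CO) mass in each bin
    D = (6.0 * M_s / ((1.0 - f_v0) * np.pi * rho0)) ** (1.0 / 3.0)   # planetesimal diameters
    return M_s, m_s, m_v, D

M_s, m_s, m_v, D = belt_bins(100.0 * M_EARTH, 1.0e-3, 3.0e4)   # 100 M_earth of solids, 1 mm - 30 km
print(f"{M_s.size} bins, D_max = {D[0]/1e3:.1f} km, initial CO mass = {m_v.sum()/M_EARTH:.2f} M_earth")
```

All of these per-bin quantities are then free to evolve as the belt loses volatiles, whereas M_s,k stays fixed.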
The number of colliders in the k-th bin is given by the ratio of the total mass in solids to the mass of each planetesimal in solids, as the total diameter or mass in volatiles in the k-th bin may change, n_k=m_s,k/M_s,k. §.§ Conditions for catastrophic/resurfacing collisions Catastrophic collisions are those with sufficient energy to disrupt a planetesimal, leaving no remnant larger than half the mass of the original planetesimal. The incident energy must be above the specific incident (impact) energy required to cause a catastrophic collision, or the dispersal threshold, Q_D^*. A power-law form for the dispersal threshold is assumed, following work on collision outcomes by <cit.>, such that: Q_D^*=Q_a D^-a + Q_b D^b, where a and b are both positive constants related to the planetesimal's material and gravitational strength, respectively. Following <cit.>, we take Q_a = 620 J kg^-1, a = 0.3, Q_b=5.6× 10^-3 J kg^-1, and b = 1.5, where D is in m, noting that these parameters are applicable to basalt rather than a mixture of ice and basalt. This is plotted in Fig. <ref> as a function of the planetesimal's total diameter, D_k or mass, M_k, assuming an average density of ρ_k=1gcm^-3. For small planetesimals, the dispersal threshold is dominated by the planetesimal's material strength, whereas for larger planetesimals, gravity dominates. Resurfacing collisions occur when there is sufficient energy to disrupt a planetesimal, but insufficient energy to disperse the fragments subsequently. Resurfacing collisions occur when the incident energy is above the specific incident energy required for shattering, Q_S^*, but below the specific incident energy required for dispersion, Q_D^*, where: Q_S^*=Q_a D^-a. This shattering threshold is plotted, alongside the dispersal threshold, on Fig. <ref>. The minimum in the dispersal threhold occurs for diameters of size D_W, where D_W =(aQ_a/bQ_b)^1/(a+b). Resurfacing collisions can occur for all sizes, however, the incident energy range for which Q_D^*>Q>Q_S^*, the condition required for a resurfacing collision to occur, becomes vanishingly small for small diameter particles. In the numerical model a cut-off is introduced, where no resurfacing collisions are assumed to occur if there are less than three mass bins between minimum mass bin above which resurfacing collisions would occur, labelled i_ rk and the minimum mass above which catastrophic collisions would occur, labelled i_ ck. Or in other words, no resurfacing collisions occur for M_ min, r. This avoids resurfacing collisions from switching on and off around diameter bins of a few hundred kms (depending on the properties of the planetesimal belt). For these bodies, the smallest bodies that have sufficient energy to shatter a planetesimal of mass, M_ min, r, also have sufficient energy to catastrophically destroy it. §.§ Collision Model In order to model the collisional evolution of the planetesimal belt, we follow <cit.>, with the additional ability to trace both solids and volatiles. A numerical method is utilised. The total mass in each bin is traced at each timestep, δ_t. A particle-in-a-box approach is used to model the rate of collisions between planetesimals in different mass bins, tracing both catastrophic and re-surfacing collisions. Following each catastrophic collision, mass is redistributed amongst smaller fragments, until it eventually becomes sufficiently small to be blown out of the system by radiation pressure (see Fig. <ref>). 
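The two strength laws introduced above are straightforward to evaluate; the sketch below reproduces Q_D^* and Q_S^* with the quoted basalt constants (D in metres) and locates the weakest bodies at D_W. It is illustrative only.

```python
import numpy as np

Q_a, a = 620.0, 0.3         # material-strength regime (J kg^-1, D in metres)
Q_b, b = 5.6e-3, 1.5        # gravity regime

def Q_D_star(D):
    """Specific incident energy needed to catastrophically disrupt a body of diameter D."""
    return Q_a * D**(-a) + Q_b * D**b

def Q_S_star(D):
    """Specific incident energy needed to shatter (but not necessarily disperse) the body."""
    return Q_a * D**(-a)

D_W = (a * Q_a / (b * Q_b)) ** (1.0 / (a + b))      # weakest bodies, minimum of Q_D^*
print(f"weakest bodies: D_W = {D_W:.0f} m, Q_D*(D_W) = {Q_D_star(D_W):.1f} J/kg")
for D in (1.0, 1.0e2, 1.0e4, 1.0e5):
    print(f"D = {D:8.0f} m : Q_S* = {Q_S_star(D):10.2f}   Q_D* = {Q_D_star(D):10.2f} J/kg")
```

The narrowing window between Q_S^* and Q_D^* at small D is what motivates the cut-off on resurfacing collisions described above.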
In such a manner, the planetesimal belt is depleted in mass as a function of time. Additionally to <cit.>, the model presented here traces the volatile content of planetesimals separately to their refractory (solid) content. In other words, the total mass in solids in the k-th bin is given by m_ s, k, whilst the total mass in volatiles is given by m_ v, k. At each timestep, volatiles can be released to gas and the total mass in gas is also traced m_ gas. A full list of the variables used in this model can be found in Table <ref>. §.§.§ Collision Rates The rate of collisions between particles in the k-th bin, with particles in the i-th bin is determined by the cross-sectional area for collisions, π/4(D_k + D_i)^2, the relative velocity, v_ rel, and the volume through which the planetesimals are moving, V. Planetesimals in the k-th bin can only collide catastrophically with particles larger than a certain size, which corresponds to those planetesimals in size bins with indices less than i_ck, such that: R_k^c= Σ_i=1^i_ckn_i/4 (D_k + D_i)^2 π v_ rel/V, where n_i is the number of colliders in the i-th bin and V is the volume through which the planetesimals are moving, given by <cit.>: V = 8 π r^3 e sin (I) (1+ e^2/3), where r is the belt radius, e the average eccentricity and I the maximum inclination of particles. The index i_ck labels the bin containing planetesimals of mass, M_ ck, the smallest bodies that can cause catastrophic collisions to planetesimals in the k-th bin, where: M_ck=( 2 Q_D^*/v_ rel^2) M_k. We note here that if the minimum mass that can catastrophically destroy planetesimals is larger than the size of the planetesimals, M_ ck>M_k the premise of the model breaks down. This is because in such a simple model, the largest planetesimals would no longer evolve collisionally, whilst in reality they would lose mass due to e.g. cratering collisions. We note that the approximations break down for targets larger than 30km (D>30km) for a belt at 100au, with v_ rel= e v_k, where e=0.1 and the model for Q_D^* used here. The rate of resurfacing collisions is calculated in a similar manner, with only colliders too small to result in catastrophic collisions (i<i_ ck) and sufficiently large to cause resurfacing collisions, i_rk considered: R_k^r= Σ_i=i_ck^i_rkn_i/4 (D_k + D_i)^2 π v_ rel/V, where i_rk labels the bin of mass, M_rk that contains the smallest impactors that can cause resurfacing collisions, where: M_rk=( 2 Q_S^*/v_ rel^2) M_k. The average lifetime of a planetesimal of diameter, D, against catastrophic collisions can be calculated as t_c = 1/R_k^c (Eq. <ref>), and similarly for resurfacing collisions as t_r= 1/R_k^r (Eq. <ref>). §.§ Mass Conservation The mass in solids in the collisional cascade is depleted with time as catastrophic collisions grind down the planetesimals into dust that is lost from the system. The mass in volatiles can be lost to gas from planetesimals of all sizes. 
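Before turning to the mass-conservation equations, the particle-in-a-box rates of the previous subsection can be sketched as follows. The stellar mass, mean density, toy size distribution and the choice v_rel = e v_k (consistent with the 0.1 v_k quoted later in the text) are illustrative assumptions, not the paper's fiducial set-up.

```python
import numpy as np

AU, M_EARTH, G = 1.496e11, 5.97e24, 6.674e-11

def collision_rate(D, n, r, e=0.1, I=0.1, M_star=2.0e30):
    """Particle-in-a-box catastrophic collision rate for every bin (s^-1).
    D : planetesimal diameters per bin (m); n : number of bodies per bin.
    Only impactors above M_ck = 2 Q_D^* M_k / v_rel^2 are counted."""
    v_rel = e * np.sqrt(G * M_star / r)            # assumed v_rel = e * v_Kepler
    V = 8.0 * np.pi * r**3 * e * np.sin(I) * (1.0 + e**2 / 3.0)
    rho = 1000.0                                   # assumed mean density, kg m^-3
    M = (np.pi / 6.0) * rho * D**3
    QD = 620.0 * D**(-0.3) + 5.6e-3 * D**1.5
    R = np.zeros_like(D)
    for k in range(D.size):
        big = M >= 2.0 * QD[k] * M[k] / v_rel**2   # impactors able to disrupt bin k
        R[k] = np.sum(n[big] * np.pi / 4.0 * (D[k] + D[big])**2) * v_rel / V
    return R

# toy belt at 100 au: D^-3.5 numbers per bin, scaled to hold ~100 M_earth of solids
D = np.logspace(np.log10(3.0e4), 0.0, 200)         # 30 km down to 1 m
w = D**-3.5
n = w * (100.0 * M_EARTH) / np.sum(w * (np.pi / 6.0) * 1000.0 * D**3)
R = collision_rate(D, n, r=100.0 * AU)
yr = 3.156e7
for k in (0, int(np.argmin(np.abs(D - 1.0e3)))):
    print(f"D = {D[k]/1e3:6.2f} km : collisional lifetime t_c = 1/R = {1.0/R[k]/yr:.2e} yr")
```

The resurfacing rate R_k^r follows the same pattern, restricting the sum to impactors between the shattering and dispersal limits.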
However, at every timestep the total mass is conserved, such that the rate of change in the kth bin of the total mass in solids (volatiles) ṁ_ s,k (ṁ_ v,k) is given by: ṁ_ s,k = ṁ_ s,k^+c - ṁ_ s,k^-c, ṁ_ v,k = ṁ_ v,k^+c - ṁ_ v,k^-c - ṁ_ v,k^-r, ṁ_ g = Σ_k=1^k_ max (ṁ_ g,k^+c +ṁ_ v,k^-r) + Σ_k=k_ max^∞ṁ_ g,k^+c , where ṁ_ s,k^-c (ṁ_ v,k^-c) is the rate at which the total mass in solids (or volatiiles) in the k-th bin is lost to catastrophic collisions, ṁ_ s,k^+c is the rate at which the mass in solids is gained from catastrophic collisions of larger bodies, ṁ_ v,k^+c is the rate at which mass in volatiles is gained from catastrophic collisions of larger bodies, noting that this accounts for the volatiles mass lost to gas, ṁ_ g,k^+c is the mass lost to gas directly as the k-th bin gains mass in volatiles from catastrophic collisions in larger bins and ṁ_ v,k^-r is the rate of mass loss in volatiles from re-surfacing collisions. The mass in gas, m_ g, is increased by mass loss from volatiles to gas in both catastrophic and re-surfacing collisions, from the material received in the kth bin at a rate of ṁ_ g,k^+c and ṁ_ v,k^+r, respectively. Additionally, some gas is produced when volatile fragments directly to particles smaller than the minimum present in the collisional cascade, which leads to the additional term, Σ_k=k_ max^∞ṁ_ g,k^+c. The mass loss rate for solids is given by ṁ_ s,k^-c= m_ s,k R_k^c, whilst that for volatiles, exclusively due to catastrophic collisions, is given by ṁ_ v,k^-c= m_ v,k R_k^c. The mass rate gained for solids is given by ṁ_ s,k^+c= Σ^i_mk_i=1 F(k-i) ṁ_ s,i^-c, where F(k-i) is the fraction of the mass leaving the i-th bin from collisions that goes into the k-th bin, or the redistribution function, which we assume to be scale independent. We assume that fragments produced in catastrophic collisions have a range of masses from the largest fragment, with M_ s,i/2, which falls in the bin labelled i_lr, to the smallest body considered, which falls in the bin labelled by i_max, which we assume to be much smaller than M_ s, i/2. This is a good approximation for collisions destroying bodies with D≫ cm as the smallest particles in the disc will be mm-sized or smaller. Thus, the k-th bin can only gain mass from catastrophic collisions between objects with a mass 2M_ s, k or greater, labelled by i_mk=k - ln (2)/δ. Thus, the mass rate gained for solids in the k-th bin is calculated by summing over the contributions from the largest mass bin, i=1, down to i_mk, which labels the bin of mass 2M_ s, i. We assume that the size distribution of fragments is given by Eq. <ref>, where α>1 and the separation between bins δ≪ 1. We consider the fragmentation of a body that is a uniform mixture of volatiles (ices) and solids, and that, therefore, all the fragments retain the same uniform mixture. This leads to a redistribution function, where we assume the same exponent for the power-law, α, as the size distribution, given by : F_s(k-i) = η^(α-1) (1-δ) ^(k-i)(1-α) δ (1-α) This is based on Eq. 20 of <cit.>, where δ is now the spacing between mass bins and not radial bins, η_ max=1/2, such that α'=3 α-2, where α' are the parameters used in <cit.> and δ∼ 3 δ'^3. This function plotted in Fig. <ref>, truncated at (k-i)=ln (2)/δ=69, which for δ=0.01 labels the bin of solid mass, M_ s, i/2, or the largest fragment of a catastrophic collision. By definition all of the mass leaving the i-th bin ends up in bins between k=i_ lr and k=∞, such that Σ_k=i_ lr^∞ F(k-i)=1. 
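A compact numerical construction of the redistribution function is sketched below. Rather than coding the closed-form expression directly, it builds F_s from the stated ingredients: fragment bins following the mass scaling of the n ∝ M^-α distribution, a largest fragment containing half the parent's mass (η_max = 1/2), and explicit normalisation over a truncated set of bins so that the fractions sum to one, as required above. The truncation length is an assumption of the sketch.

```python
import numpy as np

alpha, delta = 11.0 / 6.0, 0.01        # distribution slope and logarithmic bin width

def redistribution(n_tail=2000):
    """Fraction F_s(j) of the solid mass leaving a bin that lands j bins below it.
    Fragment bins carry mass proportional to M^(2-alpha); the largest fragment holds
    half the parent mass; the array is normalised over a large but finite tail."""
    j_lr = int(round(np.log(2.0) / delta))          # bin of the largest fragment, M_i/2 (= 69 here)
    j = np.arange(j_lr, j_lr + n_tail)
    w = (1.0 - delta) ** (j * (2.0 - alpha))        # mass per fragment bin
    return j, w / w.sum()

j, F = redistribution()
print(f"largest fragment sits {j[0]} bins below the parent; sum(F) = {F.sum():.6f}")
print(f"fraction of the lost mass entering that bin: {F[0]:.4f}")
```

Because the array is normalised explicitly, mass conservation is enforced by construction, which is how the condition Σ F(k-i) = 1 is implemented numerically.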
In a similar manner, the rate of mass gain for volatiles is given by : ṁ_ v,k^+c= Σ_i=1^i_mk F_ v(k-i, f_ v, i) ṁ_ v,i^-c , where the F_ v(k-i, f_ v, i) is the redistribution function for volatiles. This is explicitly a function of the volatiles fraction of the disrupting bodies, f_ v, i, as this accounts for the possibility that some of these volatiles may be lost soon after the collision. This is related to the redistribution function for solids, in that: F_ v(k-i, f_ v, i)= F_s(k-i) (1-χ_k^c(f_ v, i)), where χ_k^c(f_ v, i) is the fraction of the volatile mass lost to gas as soon as the fragment is created, calculated in the following section, which is a function of the total mass of the newly formed body, M_k, and the volatile fraction of the original planetesimal in the i-th bin, f_ v,i. In a similar manner, a fraction χ_k^c(f_ v, i) of the volatiles in the fragment gained by the k-th bin from catastrophic collisions is released directly to gas, from catastrophic collisions between larger bodies, is released directly to gas. The rate at which this occurs in the k-th bin is given by: ṁ_ g,k^+c= Σ_i=1^i_mkF_ s(k-i) χ_k(f_ v, i) ṁ_ v,i^-c. The rate of mass lost for volatiles, due to re-surfacing collisions, is given by: ṁ_ v,k^-r= χ_k^r(f_v,k) m_ v,k R_k^r. §.§ Release of CO following collisions The exact role of collisions in releasing volatiles from cometary bodies is not clear. Experimental work is best at probing collisions between small particles <cit.> and tracking of volatile species is limited. In the Solar System, whilst activity and degassing in individual active comets can be monitored, linking this to a comet's collision history is challenging. Nonetheless, collisions will clearly play a role in releasing volatiles from comets. Collisions can transfer heat to planetesimals, which in turn leads to volatile loss <cit.>. Collisions also expose new surface, such that volatiles can be lost via sublimation or UV desorption. Rather than attempting to model these processes in detail, here we produce a simple model in which we consider the efficiency at which volatiles are lost in an individual collision to be a function of the surface area of the colliders. The model is set up in such a way that this precription could be updated in the future. The model is equivalent to the devolatisation of a thin layer of depth, h. The depth of this layer may be relatively large, for example for comets sufficiently close to the star that sublimation acts on long timescales (tens of metres), but may be very small for example for comets far from the star, if thermal or UV desorption of the fresh surface layer is the only mechanism to release volatiles (less than milimetres), as the rest of the CO would remain trapped. §.§.§ Model for release of volatiles in Catastrophic Collisions We consider a simple model in which following a catastrophic collision the fractional release of volatiles is proportional to the surface area of the fragments produced, or in other words volatiles are released from the equivalent of a surface layer of depth, h. Although we acknowledge here that this layer may not truly be a thin surface surrounding the whole comet, but instead be focused around the impact site. 
We consider the mass in volatiles released to gas due to fragments produced by catastrophic collisions of solid mass, M_ s, k, to be given by the mass in volatiles found in the layer, h, which can be calculated by subtracting the mass of the smaller planetesimal (∝ (R-h)^3) from the total mass of the planetesimal (∝ R^3). Thus, the mass released to gas by fragments entering the k-th bin is given by: δ M_g, k^c = 4 πρ_k f_v, k ( (3 M_k/4πρ)^2/3 h - (3 M_k/4πρ)^1/3 h^2 + h^3/3) such that the fractional release of volatiles is given by: χ_k^c = δ M_v,k/f_v,k M_k. The solid lines on Fig. <ref> shows the fraction of volatiles arriving in the k-th bin from the disruption of larger planetesimals that are released directly to gas, for different assumptions about the depth of the layer, h. The fractional release decreases with increasing particle size for small particles, as the assumption of a constant depth, h, accounts for a larger fraction of the body. Alternatively, the fractional release to volatiles due to the catastrophic destruction of planetesimals of mass, M_k, can be considered as the sum of the mass lost from all fragments produced and is shown by the dotted lines on Fig. <ref>. §.§.§ Model for the release of volatiles in Resurfacing Collisions We consider that a large planetesimal of mass M_k, following a resurfacing collision is split into fragments of mass M_f (diameter, D_f). Each fragment individually loses volatiles to gas from an outer layer of depth, h, such that the volatile loss, δ M_ g, f, is given by Eq. <ref>. The mass released to volatiles in a collision of a body of mass, M_k is, thus, the sum of the mass of fragments in each mass bin multiplied by the fraction of that mass released to volatiles χ_f: δ m_ g, k^r = f_ v,k/(1-f_ v,k) Σ_f=i_ frag^i_ max m_s,f χ_f, where χ_f comes from Eq. <ref>, m_s,f= K' M_s,f^2-α and the constant of normalisation K' is given by the mass in solids of a planetesimal in the kth bin, (1-f_ v, k) M_k=K' Σ_i=i_ frag^i_ maxM_s,i^2-α. Thus, the fraction of the volatile mass released to gas by a resurfacing collision is given by: χ_k^r = Σ_f=i_ frag^i_ max M_s,f^2-αχ_f /Σ_i=i_ frag^i_ maxM_s,i^2-α . The dot-dashed lines on Fig. <ref> shows the fractional release of volatiles to gas from resurfacing collisions of planetesimals in the kth bin. §.§ Numerical Simulations The CO production from a planetesimal belt is calculated numerically at every timestep by splitting the planetesimal belt into logarithmically spaced solid mass bins of width δ, using the size distribution (Eq. <ref>). The dust and gas production from collisions are calculated using Eqs. <ref>, with a numerical method invoked to trace the mass in solids and the mass in volatiles in every bin m_ s, k and m_ v, k, as a function of time. This solves Eqs. <ref>, tracking the volatiles lost to gas at each timestep. The timestep is selected such that the mass lost by the smallest bin is less than half of the mass initially in the smallest bin. F_s is normalized such that Eq. <ref> applies exactly, counting only bins up to a large number 2n_ bin, where n_ bin is the total number of bins used. Mass conservation is assured by tracking the total mass in solids or volatiles at every output and adding the very small additional mass lost to gas or dust. The evolution of the solids in the code is benchmarked against <cit.> using an initial mass distribution with α=0.86 (α'=-3.6), a bin width of δ =0.02, and a belt between 7.5 and 11au to match <cit.>. 
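The surface-layer prescription amounts to a simple geometric fraction; a minimal sketch is given below, with an assumed mean density and the h = 10 cm value used for the fiducial simulations.

```python
import numpy as np

rho = 1000.0        # assumed mean planetesimal density, kg m^-3
h   = 0.10          # depth of the devolatilised layer, m (the fiducial h = 10 cm)

def chi_surface_layer(M):
    """Fraction of a body's volatiles released when an outer layer of depth h is
    devolatilised: the shell volume over the total volume, capped at 1 when R <= h.
    For a uniform ice/dust mixture this equals delta M_g / (f_v M) in the text."""
    R = (3.0 * M / (4.0 * np.pi * rho)) ** (1.0 / 3.0)
    if R <= h:
        return 1.0
    return 1.0 - ((R - h) / R) ** 3

for D in (0.01, 1.0, 100.0, 1.0e4):                  # 1 cm to 10 km
    M = (np.pi / 6.0) * rho * D**3
    print(f"D = {D:8.2f} m : chi = {chi_surface_layer(M):.3e}")
```

The rapid fall of chi with size is why, for a fixed h, volatile release in collisions is dominated by the production of small fragments and by the repeated resurfacing of large bodies.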
§ RESULTS OF THE SIMULATIONS FOR THE COLLISIONAL EVOLUTION OF SOLIDS AND VOLATILES IN PLANETESIMAL BELTS The evolution of the mass in solids and volatiles in the fiducial simulation with a belt between 75 and 125au, containing 100M_⊕ of planetesimals in sizes between 1mm and 30km, a volatile fraction of 4% and volatiles lost from a depth h=10cm following each collision, is shown in the top panel of Fig. <ref>. Both the solids and volatiles start with an initial α=11/6 profile (see <ref>), but this is quickly lost as collisions erode the smallest bodies in the belt. The kink or wave in the size distribution at a few mms results from the absence of grains smaller than D_ min = 1mm in the model considered, which leads to under and over densities of particles, as discussed in more detail in <cit.>. The artificially high minimum grain size of 1mm chosen to speed up the calculations may cause this wave to occur at larger diameters than is realistic. The very largest bodies have not yet reached collisional equilibrium, and thus, are not yet suffering catastrophic collisions and losing mass in solids. The bottom panel of Fig. <ref> shows the lifetime against collisions as a function of particle diameter. At 100Myrs in this disc, only bodies smaller than 1km are suffering catastrophic collisions, but this size increases with time, as larger and larger bodies suffer catastrophic collisions. Resurfacing collisions deplete volatiles from large (D≳ 0.5km) planetesimals, such that their volatile content decreases with time. This occurs on a timescale related to the resurfacing collision timescale, which the bottom panel of Fig. <ref> shows can be several orders of magnitude shorter than the catastrophic collision timescale, particularly for tens of kilometer sized planetesimals. This depletion leads to a turn-over in the size distribution for volatiles (top panel of Fig. <ref> and Fig. <ref>) at a few hundred meters which occurs due to resurfacing collisions. §.§ The release of volatiles from collisions Key for comparison with observations of gas in debris disc systems is the gas production rate due to collisions. This is traced in the numerical simulations using Eq. <ref>. The top panel of Fig. <ref> plots the rate at which volatiles are released (ṁ_ gas) as a function of time for a belt with the fiducial properties and h=10cm with only catastrophic collisions (blue dotted) compared to catastrophic and resurfacing collisions (orange dashed). The release of gas from catastrophic collisions follows that of dust from solids (black solid line), which initially increases as larger bodies enter the collisional cascade, until collisions start to deplete the entire belt mass at late times <cit.>. If collisions are less efficient at releasing volatiles, the gas production rate is lower, but continues on longer timescales, as shown by the green dot-dashed line for the same simulation, but with h=1cm. Catastrophic collisions release volatiles at a lower rate than resurfacing collisions, but continue to do so on a timescale related to the catastrophic collision timescale (Eq. <ref>), whereas resurfacing collisions release volatiles at an initially higher rate, but this rate falls off on a timescale related to the shorter resurfacing collision timescale (Eq. <ref>), as shown by Fig. <ref>. The bottom panel of Fig. <ref> shows the ratio of gas to dust production. At early times (<50Myr), volatiles are being lost from the smallest bodies (D<30m), as shown in the middle panel of Fig. 
<ref> and, therefore, extra volatiles are released relative to solids, as shown by the dotted black line in the bottom panel of Fig. <ref>. This evolution could be seen as part of initialising the simulation. At later times (tens of Myrs), catastrophic collisions release gas at the same rate as dust and therefore, ṁ_ gas/ṁ_s tends to a constant value, in this case just below the initial volatile fraction of the planetesimals. When resurfacing collisions are included, however, the initial release of gas can be significantly above that from solids, but at late times, when even large planetesimals are volatile depleted, there is no longer a supply of volatiles to release and ṁ_ gas/ṁ_s can tend to very small values. If resurfacing collisions are less efficient at releasing volatiles (h=1cm, shown by the green dot-dashed line in the bottom panel of Fig. <ref>), gas release is dominated by catastrophic collisions and volatiles are passed down the collisional cascade, such that ṁ_ gas/ṁ_s is also representative of the planetesimal composition. The rate at which dust (and also volatiles) are released depends on the properties of the planetesimal belt, most notably the total initial mass, m_ s, tot(0) and the size of the largest planetesimals, D_ max, as these influence the catastrophic collision timescale of the largest bodies present (Eq. <ref>). Both gas and dust are released at a higher rate, for example, in more massive belts, as shown in Fig. <ref>, which shows the release of gas and dust in the fiducial simulation, compared to an order of magnitude higher and lower total initial planetesimal belt mass (noting that on sufficiently long timescales the dust evolution tends to the same level, independent of initial belt mass - see <cit.> for details). For comparison the top plot of Fig. <ref> indicates the mass in CO detected in a selection of bright debris discs as a function of the system age, noting that the scaling of the axes is arbitrary. Details of the sample can be found in Table <ref>. The intention of this plot is to indicate that most systems with CO detection are young[Most surveys of CO gas have targeted primarily young systems with the most massive debris discs, <cit.>], which is also when collisions release gas at the highest rate. § VOLATILES RELEASE DUE TO HEATING FROM LONG-LIVED RADIOACTIVE NUCLIDES Whilst comets are in general formed in the cool outer regions of the disc, their interior temperatures can increase due to the stellar irradiation and the decay of radioactive materials in their interior. This leads to outgassing of volatiles, notably hypervolatiles such as CO and N_2. CO can be directly released from the sublimation of CO ice, or from where it is potentially trapped in amorphous water ice or CO_2 ice <cit.>. Whilst heating from the star can lead to release of CO <cit.> and the decay of short-lived radioactive nuclides are important in the Solar System <cit.>, this work focuses on radiogenic heating from long-lived nuclides, such as ^40K, ^232Th, ^235U and ^238U. Here, we consider a simplistic model with a population of comets (planetesimals) in the belt, as described in <ref>. This model provides an estimate for the total CO released by considering that every comet releases CO at a constant rate up until a time, t_ release(D), after which there is no further CO release. In total a fraction, f_ release(D), of the CO in an individual comet is released. 
In reality the CO release from an individual comet may be peaked at earlier times, decreasing towards a time, t_ release (see discussion in <ref>). The value of t_ release (D) is essentially a free parameter of the model presented, but we use representative values based on more detailed simulations for Solar System comets by <cit.>, where a 203km comet releases CO for approximately 25Myr, whilst a 74km comet releases CO for 30Myrs. <cit.> focuses on the release of CO from amorphous water ice due to the decay of long-lived radioactive nuclides rather than radiation-driven processes, although the models include time-variant protosolar heating, long-lived radionuclide heating, radial and latitudinal solid-state and radiative heat conduction, sublimation of CO ice, release of CO during segregation of CO_2:CO mixtures, sublimation of CO_2 ice, crystallization of amorphous water ice and release of entrapped CO and CO_2, radial and latitudinal diffusion of CO and CO_2 vapours (including mass and heat transport), and recondensation of CO and CO_2 vapour when applicable. The models of <cit.> trace the thermophysical evolution of comets with a dust:ice mass ratio of 4 and an initial CO:H_2O molar ratio of 0.155, assuming Solar System abundances of long-lived radioactive nuclides and the stellar irradiation received at 23 au from the Sun. Full details can be found in <cit.>, including the thermal properties used for the comets. This model finds that CO trapped in amorphous water ice is not released from comets smaller than 68km by radiogenic heating, whilst bodies larger than this limit cannot escape crystallization due to heating from long-lived radioactive nuclides. This is a sharp transition in the model, as the budget of long-lived radioactive nuclides increases above a limit required to provide sufficient heating for crystallization of water ice. Thus, assuming the time at which all CO is released varies linearly with size D: t_ release(D)= K_0 + K_1 D, where K_0=3.8 × 10^7 yr and K_1=-39yr m^-1. In the models of <cit.>, the 203km comet releases 93% of its total CO, whilst the 74km comet stops thermally evolving with 30% of its total CO still present. The fraction of the total CO released is estimated assuming a linear function: f_ release (D) = C_0 + C_1 D, where C_0 = 2.3 × 10^-4 and C_1= 53m^-1. Both the time and the fraction of CO released are shown as a function of diameter in Fig. <ref>. No dependence on the distance to the star is assumed in this simple model. We acknowledge that the exact timescales on which CO release continues, the proportion of the total CO released and the minimum size for which long-lived radioactive nuclides can heat the comet sufficiently for any CO to be released would all vary with many of the free parameters used in the <cit.> models, see <ref>. In order to implement the release of volatiles due to thermophysical evolution, each mass bin is considered to release volatiles at a constant rate up until a time t_ release^k. The rate at which mass in the k-th bin releases volatiles is given by the total mass released divided by the time period over which it is released, ṁ_k, a = f_ v, k K M_ s, k^2-α δ f_ release^k/ [(1-f_ v, k) t_ release^k], where K is the constant defined in Eq. <ref>. We note that this equation is based on the premise that there is no evolution of the mass in the belt, such that the power-law for the size distribution with constant α continues to apply.
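A sketch of this thermophysical release model is given below. Rather than using the fitted coefficients quoted above, it anchors t_release and f_release directly to the two calibration comets (taking roughly 70% released for the 74 km comet, since 30% of its CO remains, and 93% for the 203 km comet), applies the 68 km threshold, and lets each bin release its CO at a constant rate until t_release. This is an illustrative reading of the model, not the paper's exact implementation, and the toy bins at the end are placeholders.

```python
import numpy as np

YR = 3.156e7
MYR = 1.0e6 * YR
M_EARTH = 5.97e24
D_CRIT = 68e3                          # minimum diameter for radiogenic CO release (m)

# Linear laws anchored to the two calibration comets quoted above:
# 74 km  -> CO released for ~30 Myr, ~70% of the CO released (30% still present);
# 203 km -> CO released for ~25 Myr, ~93% released.
D_cal = np.array([74e3, 203e3])
t_cal = np.array([30.0, 25.0]) * MYR
f_cal = np.array([0.70, 0.93])

def t_release(D):
    return np.interp(D, D_cal, t_cal)  # linear in D, held constant outside the calibrated range

def f_release(D):
    return np.interp(D, D_cal, f_cal)

def radiogenic_rate(D, m_v, t):
    """Per-bin CO release rate (kg/s) at time t: bins with D >= D_CRIT release a
    fraction f_release of their CO at a constant rate until t_release, then stop."""
    active = (D >= D_CRIT) & (t < t_release(D))
    return np.where(active, m_v * f_release(D) / t_release(D), 0.0)

# toy demonstration: three bins of large planetesimals, each holding 0.1 M_earth of CO
D = np.array([70e3, 100e3, 300e3])
m_v = np.full(3, 0.1 * M_EARTH)
for t in (1 * MYR, 20 * MYR, 40 * MYR):
    total = radiogenic_rate(D, m_v, t).sum() * YR / M_EARTH
    print(f"t = {t/MYR:4.0f} Myr : radiogenic CO release = {total:.2e} M_earth / yr")
```

Summing these per-bin rates over the bins that are still active gives the total radiogenic gas production of the belt, as written in the following expression.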
The total volatile release is the sum over all bins, noting that volatiles are only released up to a time, t_ release^k, and that no volatiles are released for bodies with D<68km, as these bodies do not have a sufficient budget of long-lived radioactive nuclides to lead to mobilisation of CO: ṁ_ a = Σ_k=1^i_ max Z_k(t) ṁ_k, a , where i_ max labels the smallest mass bin and Z_k is a function which equals 1 for times, t, shorter than t_ release^k and is otherwise zero. We acknowledge that this basic model falls short of more detailed thermal evolution models, but it is intended to provide an indication of the probable levels of gas release, which can be compared with the release from collisions. §.§ A comparison between the volatile release from thermophysical evolution and collisions The model described in <ref> is used to quantify the gas released from a planetesimal belt in which all (many) of the comets are active, with the focus being the late time thermal evolution due to the decay of long-lived radioactive nuclides, leading to the release of CO trapped in CO_2 or amorphous water ice. Fig. <ref> shows the total CO production rate from the fiducial belt (see <ref>), assuming an average rate. This release of gas depends crucially on the total mass in large planetesimals. It is only planetesimals larger than D=68km that contain a sufficient budget of long-lived radioactive nuclides in this model to contribute to the release of CO. This limit depends on the efficiency of heating or cooling and the total budget of long-lived radioactive nuclides, but is unlikely to change by orders of magnitude for Solar System budgets of long-lived radioactive nuclides. The CO in the 68-80km planetesimals is released at the latest times, which in this model means that no CO is released due to thermal evolution after around 30 Myrs, noting that in reality this is unlikely to be a hard cut-off and the exact time at which it occurs may not be well predicted by this model (see further discussion in <ref>). Crucially, in this simple model gas is only released due to thermal evolution if planetesimals larger than tens of kms (D>68km) exist in the planetesimal disc. This is because the budget of long-lived radioactive nuclides in these comets is sufficient to lead to significant thermal evolution, presuming that exoplanetary systems have a similar budget of long-lived radioactive nuclides to the Solar System. Fig. <ref> shows that the gas production rate depends on the mass in large planetesimals present. When the largest planetesimal is increased from 100km (purple dotted line) to 1,000km (brown dotted line), whilst maintaining the same mass in dust, the gas production rate increases by over an order of magnitude. This is plausibly an important test of whether large planetesimals are present in the planetesimal belt. Both radiogenic heating and collisions release volatiles at rates which are within the same order of magnitude (see Fig. <ref>). The gas production rates are broadly dominated by the availability of CO within the planetesimals in the belt within this simple model. § DISCUSSION This work presents a simple model that aims to quantify the production of gas (most notably CO) from planetesimal belts, based on the parameters of the planetesimal belt. CO is released at early times due to thermal evolution powered by the decay of long-lived radioactive nuclides. This is compared to the release of CO due to both resurfacing and catastrophic collisions, following the collisional evolution of the planetesimal belt.
The models point to the importance of thermal evolution in young planetary systems (<30Myr), whilst collisional gas production can be maintained on Gyr timescales. In the following sections, we first discuss the validity of the model presented, highlighting the many simplifying assumptions. We then discuss whether the model presented here can be used to distinguish the dominant method of gas production in debris systems and whether that is related to radiogenic heating, resurfacing or catastrophic collisions. §.§ Validity of Model The biggest simplification of the model presented here is the lack of any attempt to model the interior structure of the cometary bodies. This is crucial for the comet’s thermal evolution, the location of gases within the comet, the struc- ture of the comet and thus, its ability to release gas during collisions. Whilst there have been significant advancements in our understanding of cometary interiors in recent years <cit.>, several key open questions remain and simulations are computationally expensive even for a single comet. The aim here is to consider the population of planetesimals as a whole. Key changes that would influence these models include the thermal conductivity of the comets. The model presented here is based upon the timescales found in the simulations of <cit.> who adjust the thermal conductivity as tabulated for H5 ordinary chondrite, amorphous, cubic and hexagonal water ice <cit.>, CO and CO_2 <cit.> for the anticipated porosity of comets. This matches timescales predicted for the removal of hypervolatiles from Arrokoth <cit.>. A reduced thermal conductivity would minimise the energy re-radiated in the infrared, such that heating is faster and CO loss occurs earlier. This would make it harder for CO loss to be sustained on long timescales, as required to explain gas detections in older exoplanetary systems. We acknowledge the importance of these poorly known parameters and the exact details of the thermal evolution model in determining the release of CO. A second important limitation of the model regards the interior structure of the comet, in particular, the location of hypervolatiles following initial thermal evolution. A typical outcome of models with radiogenic heating is that activity removes volatiles from the core, leaving a cold volatile-bearing mantle intact <cit.>. Collisions can then release CO from this cold outer crust whenever they occur. In particular this would influence the simple model for the release of gas from collisions, presented in <ref>. This model is clearly an over simplification of reality, retaining only the baseline assumption that the release of gas is proportional to the surface area of the fragments. If different collision strengths or velocities are more/less efficient at releasing volatiles, this would significantly influence the overall gas production rate predicted by the models. The interior structure of the comets has a crucial influence on the ability of collisions to break them apart. This is modelled here by the simplistic prescription for Q_D^* based on SPH modelling of collisions of icy and rocky bodies from <cit.>, however, this may take a different form if comets are formed predominantly via pebble accretion or are highly porous <cit.>. The timescales involved in the collision models presented here depend crucially on the strength of the planetesimals, thus, these timescales could potentially increase (decrease) significantly in an improved collision model. 
However, in the collision models, these timescales would scale in the same manner for both gas and dust production, such that even in this case, the model presented here has the power to distinguish whether the gas production is purely related to catastrophic collisions. The efficiency at which collisions release volatiles is parameterised in a very simple manner in the model presented here (see <ref> and  <ref>). It is based upon the premise that the fractional release of volatiles (χ_k) depends on the surface area of the planetesimal fragments. Whilst this appears a reasonable broad assumption as most thermal processes depend on the surface area available for heating or cooling, it is clear that the story is more complex for collisions which occur, for example, only in a particular region of the planetesimal or in a non-axisymmetric manner. The mode of heat transport through the body during such collisions is unclear. In this work this lack of knowledge is parameterised by the free parameter h. However, we acknowledge that this free parameter may have values which are orders of magnitude different depending on whether the key process is thermal heating from stellar irradiation, UV desorption or the physical release of trapped CO following collisions. The situation for resurfacing collisions is even less clear. The model presented here assumes a size distribution of fragments which each lose volatiles before the body reaccumulates, with the absence of knowledge of the timescale for this re-accumulation parameterised in the free parameter h. Again, both the timescale, the size distribution and the symmetric nature of this process are poorly understood. If we consider that collisions only continue to release CO on the resurfacing collision timescale, modified by an efficiency for release χ_k^r (Eq. <ref>), then the age of the systems with CO detections (Table <ref>) can be used to place an upper limit on h, such that CO only survives in the largest planetesimals at such late times. In order to retain CO release from a belt centred on 100au for 40Myr, with the properties of the fiducial belt considered here, including initial total mass, m_ s, tot(0)= 100M_⊕ and collisions at v_ rel = 0.1v_k, h must be less than ∼10cm. This motivated the choice of h for this work, but a smaller value would still be consistent with the observations, in which case the gas production rates predicted here may be an overestimate of the efficiency of collisional release of gas. We also note that in a realistic system it is unlikely that volatile release is equally efficient from the fragments that reaccumulate to form large bodies in the gravity regime in both resurfacing and catastrophic collisions and from the fragments produced in catastrophic collisions, or in other words h is unlikely to take the same value for both collision types, as assumed here. We highlight here one inconsistency in the model presented in that by assuming the same value of h for both catastrophic and resurfacing collisions, whilst the total mass released to volatiles from a resurfacing collision of a planetesimal of mass, M_k and a catastrophic collision whose largest remnant is of mass M_k, the mass released to volatiles in the kth bin is significantly lower for the remnant of the catastrophic collision. Accounting for this potential additional volatile loss would only have a relatively minor effect, leading to earlier gas release from the belt. 
The models for the thermal evolution of comets are based on a Solar System-like budget of long-lived radioactive nuclides. It is thus crucial to question whether Solar System-like budgets are likely to be typical across exoplanetary systems. Whilst there is a base-line contribution to long-lived radioactive nuclides from the nuclear supply of the galaxy, their budget can be enriched by supernovae, either directly, or by enrichment of the star-forming molecular cloud. These processes are ubiquitous and all exoplanetary systems will be enriched to a certain degree in long-lived radioactive nuclides. Abundances of Thorium in sun-like stars suggest that most exoplanetary systems around sun-like stars have similar, if not higher, abundances of long-lived radioactive nuclides <cit.>. The story for the more volatile ^40K may, however, differ, with galactic chemical evolution models suggesting that Solar System-levels of ^40K occur in about 1/80 exoplanetary systems <cit.>. This is interesting to note, as whilst a reduced budget of long-lived radioactive nuclides would increase the minimum size heated sufficiently to lead to CO out-gassing, it is plausible that larger comets could continue to release CO on longer timescales than in the models presented here. The models presented here ignore the presence of short-lived radioactive nuclides, such as ^26Al, as the budget of these in exoplanetary systems is unknown, with some studies suggesting that Solar System-like budgets are typical, whilst others suggest that very few systems are enriched at similar levels to the Solar System <cit.>. The decay of short-lived radioactive nuclides was potentially important for Solar System comets <cit.>, although the presence of amorphous water ice could suggest a limited budget of ^26Al at formation <cit.>. The model presented here essentially ignores any thermophysical evolution prior to the end of the gas disc lifetime and assumes that the planetesimals are fully formed at this point. We acknowledge here that planetesimal belts may not be collisionally active (stirred) at the end of the gas disc lifetime, but rather may experience a continued period of growth prior to self-stirring, as discussed in detail in <cit.>. The model presented here treats the thermophysical evolution and collisions as separate processes. In reality both processes may act on the same bodies, in which case the late-time collisional gas production may be significantly depleted due to the fact that volatiles have already been lost from the largest planetesimals due to thermal evolution. The retention of some volatiles in 68-100km planetesimals (see Fig. <ref>) and all volatiles in smaller planetesimals allows for continued gas production from collisions on long timescales, as relevant for example for Fomalhaut. §.§ Radiogenic heating or collisions? This paper highlights three main channels for the secondary production of gas in debris discs: radiogenic heating, catastrophic collisions, and gentler resurfacing or cratering collisions. The model presented here for the thermophysical evolution focuses on the heating due to the decay of long-lived radioactive nuclides, whilst we acknowledge here that for comets sufficiently irradiated by their host stars, external heating may also play a role.
All three processes are able to sustain the release of CO at levels similar to those required to explain the observations, around 10^-6-10^-4M_⊕ dissociating in ∼ 100yrs[In belts with higher CO masses, CO likely has a longer photodissociation timescale as it is self-shielded or shielded by other species <cit.>, therefore it is difficult to determine the CO release rate in those systems or assess whether it is of secondary origin.], i.e. 10^-8-10^-6M_⊕yr^-1 (see Fig. <ref>), depending on the properties of the planetesimal belt. The key difference between the decay of long-lived radioactive nuclides and collisions here lies in the timescales, with radiogenic heating only leading to gas production at early times. Thus, whilst the decay of long-lived radioactive nuclides can explain the detection of gas in the majority of debris systems which are around young (< 30Myr) stars, the detection of CO gas in older planetary systems provides a key test. These include systems such as 49 Ceti at 40Myr or HD 21997 at 45Myr, as well as significantly older systems such as Fomalhaut at 440Myr. The 440Myr age <cit.> of the Fomalhaut system renders the current release of gas from the decay of long-lived radioactive nuclides or the survival of CO from an earlier epoch unlikely. For 49 Ceti and HD 21997, there is sufficient uncertainty in whether shielded CO could have survived or radiogenic heating could continue on slightly longer timescales than those used in the model presented here. However, we have to acknowledge uncertainties in the model presented here. In the model used here the maximum timescale for CO release from radiogenic heating depends on when the maximum temperature of the planetesimals is reached, which in turn depends on the exact structure and cooling of the cometary bodies. In the models of <cit.> a decrease in the dust:water-ice ratio, accompanied by an increase in the CO:H_2O ratio, would reduce the rate of heating from long-lived radioactive nuclides and increase the timescale for which CO could be released (by increasing the total amount of CO to be released). Whether this could be increased sufficiently to produce gas at the rate required for Fomalhaut after 440Myr is not clear. An alternate explanation for the gas production in the Fomalhaut system is the release of volatiles following heating by stellar irradiation. <cit.> show that a 200km body continues to lose CO from CO ice for 200Myr when irradiated by the Sun at 23 au, in a similar manner to that mentioned in <cit.>. If this CO is released, the same irradiation would occur at 93 au in the Fomalhaut system (L_*=16.6L_⊙), whilst at the location of the Fomalhaut belt (130-150au), the irradiation is reduced to just under half and in principle CO loss could continue for just over twice as long (∼400-500Myr). However, the rate of release of CO would potentially be lower. A simple estimate finds a constant average rate of 10^-9M_⊕ yr^-1, assuming that the total mass in the Fomalhaut system in bodies up to 300km is 63M_⊕ <cit.>, with a CO mass fraction of 4%. This low release rate may just be able to explain the low mass of CO (10^-7M_⊕ <cit.>, dissociating in ∼ 100yr). On the other hand, if CO remains present within the planetesimals in the Fomalhaut belt, collisions will continue to release CO. Assuming an initial CO fraction of 4%, the belt's location (143au), width (13.6au) and predicted total planetesimal mass (1.8M_⊕) in bodies up to 0.3km <cit.>, the collisional gas production (both catastrophic and resurfacing collisions) would be 10^-11 M_⊕yr^-1.
If instead, we assume that the collisional cascade extends up to 300km, the gas production rate increases to 10^-9 M_⊕yr^-1, predominantly due to resurfacing collisions. Thus, whilst the decay of long-lived radioactive nuclides is unlikely to continue on sufficiently long timescales to explain the gas production observed at Fomalhaut, if the Fomalhaut planetary system had a substantially lower initial budget of long-lived radioactive nuclides (such that volatiles can survive in large planetesimals on long timescales), it remains plausible that the low (compared to other debris discs with CO detection) CO mass observed at Fomalhaut could be released due to the stellar irradiation slowly heating and mobilising the CO ice. This explanation, however, depends crucially on the presence of large (hundreds of km) planetesimals. Collisions, are able to sustain a low rate of gas production for the age of Fomalhaut without the presence of large planetesimals. Thus, this work suggests that both thermal evolution and collisions lead to the release of gas in debris disc systems, with thermal evolution dominating at early times, but less likely to explain CO in old planetary systems such as Fomalhaut. §.§ Resurfacing or Catastrophic Collisions It is currently difficult to find observational evidence that the gas observed in debris systems is produced in resurfacing collisions, rather than catastrophic collisions. One prediction of the models presented here is that the rate of gas production from catastrophic collisions is proportional to the dust production rate (infrared emission), whilst for resurfacing collisions it depends additionally on the population of large planetesimals. At face value the observed population of debris discs with and without CO detections could be seen as evidence in support of resurfacing collisions as there does not appear to be a direct correlation between infrared emission and mass in CO detected, see <cit.> for a detailed discussion. However, this lack of correlation can plausibly be explained by two things. Firstly, the observed CO may be shielded and thus, not proportional to the CO production rate. Secondly, the CO production rate from catastrophic collisions may be proportional to the dust production rate, but will depend additionally on other parameters which can vary between systems, such as the CO fraction of the planetesimals. Thus, it is not currently possible to use the observed population to rule out the potential importance of resurfacing collisions in CO production. From a theoretical perspective, however, the evidence points towards the importance of non-catastrophic collisions, not just resurfacing collisions, but cratering or other non-catastrophic collisions. Whilst this model does not explicitly include cratering collisions, these collisions could potentially release CO at earlier times, as the more frequent cratering collisions chip away at the outer layers of the planetesimals. However, the bulk of the CO, trapped in the deep interior would still need to wait for a shattering or resurfacing collision to be released. §.§ How big are the largest planetesimals in debris discs? This is a crucial question, as highlighted for example by <cit.>, which determines the long-term evolution of debris systems. 
Whilst the Solar System's debris belt contains large (D∼ 1,000km) planetesimals such as Pluto, the presence of large (D>100km) planetesimals contradicts observations indicating that fewer old systems have high infrared luminosities from dusty planetesimal belts, as predicted by the collisional evolution of belts containing only small planetesimals <cit.>. Additionally, the mass budget in planetesimals required for the brightest observed debris systems would exceed that of the solid component of proto-planetary discs or of the exoplanet population. Gas production from either radiogenic heating or resurfacing collisions depends crucially on the population of large planetesimals and thus provides a test for the size of these bodies in debris discs. This is clearly seen in Fig. <ref>, where the gas production rate is significantly higher, for the same dust production rate, when the population of larger planetesimals is increased, irrespective of whether gas is released from radiogenic heating or resurfacing collisions. If planetesimal belts do not contain a population of large (> tens of km) planetesimals, catastrophic collisions are more likely to dominate the observed release of CO. §.§ The composition of comets, as derived from gas observations The detection of individual gases released from comets in debris discs provides a unique opportunity to probe the composition of comets in exoplanetary systems, in comparison to our Solar System, as in <cit.>. This work highlights the difficulties in using observed CO as a probe of the total CO content of planetesimals. Thermal evolution is likely to have played a significant role in reducing the initial volatile content of comets, even during the primordial disc phase, for comets both in the Solar System and in exoplanetary systems <cit.>. Additionally, the models presented here, notably Fig. <ref>, show that when resurfacing collisions are considered the ratio of the gas to dust production rate can be significantly above (at early times) or below (at late times) the CO content of the planetesimals. § CONCLUSIONS The observation of gas in traditionally gas-poor debris disc systems provides crucial clues regarding the evolution of volatiles within planetary systems. Here, we compare a model that predicts the secondary release of gas from planetesimal belts due to heating from the decay of long-lived radioactive nuclides to a model for the collisional production of CO in both catastrophic and resurfacing collisions. The release of gas from catastrophic collisions follows the dust evolution of the belt, whilst non-catastrophic collisions, such as resurfacing (or shattering) collisions in large (tens to hundreds of km) planetesimals, contribute to the early release of gas at higher rates than with only catastrophic collisions. We predict the gas release from collisions as a function of the properties of the planetesimal belt. The release of gas from resurfacing collisions depends crucially on the presence of large (tens to hundreds of km) planetesimals and means that the observed rate of CO release compared to the dust production is no longer a good probe for the CO content of the comets. Radiogenic heating from the decay of isotopes such as ^40K, ^232Th, ^235U and ^238U can lead to the heating of comets and CO gas production rates comparable to those required to explain the observations, if planetesimal belts contain tens to hundreds of kilometre planetesimals.
Radiogenic heating has the potential to explain the CO observed in all young (<50Myr) planetary systems, whilst the presence of CO gas in older planetary systems, most notably Fomalhaut at 440Myr <cit.>, is readily sustained by collisions. We highlight the potential importance of the slow penetration of stellar irradiation to the deep interiors of comets, as suggested by <cit.>, particularly for old planetary systems, such as Fomalhaut. § DATA AVAILABILITY The data and codes used in this manuscript can be found at <https://github.com/abonsor/coll_gas> § ACKNOWLEDGEMENTS AB acknowledges the support of a Royal Society University Research Fellowship, URF\R1\211421. We acknowledge fruitful discussions with Uri Malamud and Jürgen Blum. Parts of the research were carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
http://arxiv.org/abs/2307.01979v1
20230705014124
ToothSegNet: Image Degradation meets Tooth Segmentation in CBCT Images
[ "Jiaxiang Liu", "Tianxiang Hu", "Yang Feng", "Wanghui Ding", "Zuozhu Liu" ]
eess.IV
[ "eess.IV", "cs.CV" ]
In computer-assisted orthodontics, three-dimensional tooth models are required for many medical treatments. Tooth segmentation from cone-beam computed tomography (CBCT) images is a crucial step in constructing the models. However, CBCT image quality problems such as metal artifacts and blurring caused by shooting equipment and patients' dental conditions make the segmentation difficult. In this paper, we propose ToothSegNet, a new framework which acquaints the segmentation model with generated degraded images during training. ToothSegNet merges the information of high and low quality images from the designed degradation simulation module using channel-wise cross fusion to reduce the semantic gap between encoder and decoder, and also refines the shape of the tooth prediction through a structural constraint loss. Experimental results suggest that ToothSegNet produces more precise segmentation and outperforms the state-of-the-art medical image segmentation methods. Tooth Segmentation, CBCT, Orthodontics, Tooth Models, Image Degradation § INTRODUCTION Stomatologists and dentists use tooth models to carry out diagnosis, orthodontic treatment planning and dental restoration <cit.>. Constructing a tooth model first requires tooth segmentation from CBCT scans <cit.>. Existing computational methods then generate the final tooth models according to the segmentation. Traditionally, specialists have to manually label each tooth from CBCT images slice by slice, which is a huge workload and extremely time-consuming. There is therefore a practical demand for accurate and fully automatic end-to-end methods for tooth segmentation from CBCT images <cit.>. In recent years, deep learning has been increasingly applied to image segmentation tasks in various medical areas including dentistry <cit.>. Some prior works <cit.>, <cit.> formulated the CBCT tooth segmentation task as 3D instance segmentation, which usually demands labeling tooth pixels and instances on 3D voxels across the entire CBCT scan. To explore a more efficient way, we cast the task as semantic segmentation over 2D CBCT images to alleviate the data annotation labour. Directly applying current segmentation methods to the 2D case <cit.>, however, leads to unsatisfactory results because of the existence of low-quality images. Fig. 1 illustrates the two most commonly seen defects in dental CBCT images <cit.>: (a) consecutively displays a high quality CBCT image, an image with metal artifacts and an image with blurring; (b) and (c) give the respective ground truth and segmentation results of U-Net <cit.>; the red boxes are magnification areas for metal artifacts and blurring. The second and third triplets indicate that such image defects can lead to abysmal tooth segmentation. Addressing these problems should greatly improve 2D CBCT tooth segmentation, promising efficient and effective tooth modeling for the dentistry community. In this study, we propose ToothSegNet, a tooth segmentation method that is robust to metal artifacts and blurring and constrained by tooth structure. The philosophy is to acquaint the deep model with defective cases when learning each CBCT image sample.
Experiments show that our method not only outperforms the state-of-the-art segmentation methods on high-quality images but also has strong robustness to images with metal artifacts and blurring, suggesting the applicability of our framework in real-world clinical scenarios. The main contributions are summarized below: * We propose a medical image degradation simulation strategy, combined with a multi-quality fusion module, to attain higher robustness to problematic images. * We design a channel-wise cross fusion module (CCF) to reduce the semantic gap between the encoder and decoder, which eliminates the ambiguous features that exist in the vanilla U-Net due to skip connections. * We design a structural constraint loss in order to constrain the structure of the tooth prediction. * We explore an annotation approach which is much lower-cost than that implemented in current CBCT tooth segmentation methods. § METHOD §.§ Overview Suppose we have a dataset 𝒟 = {(𝐱_i, 𝐲_i)}_i=1^N, where 𝐱_i and 𝐲_i denote the i-th CBCT image and the corresponding pixel-level segmentation labels respectively, and N is the number of images in the dataset. For each pixel, we use y_i ∈{0, 1} to denote the background and tooth respectively. In our task, we aim to generate accurate pixel-level predictions ŷ_i for each input image. As is illustrated in Fig. 2, ToothSegNet is composed of the degradation simulation module, the multi-quality fusion module, the CCF, the vanilla U-Net decoder, SE-Net <cit.> and the structural constraint loss. The vanilla U-Net decoder follows the conventional CNN-based architecture <cit.>. §.§ Degradation simulation module In computer vision tasks, datasets are often constructed through image degradation to mimic complex real-world scenarios <cit.>, <cit.>. Following this idea, ToothSegNet randomly degrades the input image during training. For blurring and double blurring simulation, the input image is down-sampled once (1x) or twice (2x) respectively and then up-sampled to generate the degraded image 𝐱_i^d, as is shown in Fig. 2. In the downsampling, Gaussian pyramid blurring is performed to downscale the image. In the upsampling, bilinear interpolation is performed to convert images back to the original size. For artifact simulation, since we observe that teeth with metal have higher image contrast compared with others, we create contrast-enhanced images 𝐱_i^d by squaring the image. The degradation simulation procedure is defined to follow the operation probability distribution: ℙ( X_i^d = ϕ^(j)(𝐱_i) | X_i = 𝐱_i) = 1/4, ∀ j ∈{1,2,3,4}, where X_i and X_i^d denote an input CBCT image and its degradation simulation, and ϕ^(1) to ϕ^(4) represent the blurring degradation, double blurring degradation, no operation for source images and artifact degradation respectively. §.§ Network architecture §.§.§ Multi-quality fusion module After the degraded images are obtained, the source and degraded images are sent to the designed multi-quality fusion module, which resembles a two-branch improved U-Net encoder. As shown in Fig. 2, the blue branch is the vanilla U-Net encoder, which encodes the features of the source images, and the green branch is a U-Net encoder with 1x1 convolutions, which encodes the features of the degraded images. The fused feature is defined as: F_i = E_{i+1} + L_{i+1}, i ∈{1,2,3}; F_4 is the final fused feature. §.§.§ Channel-wise cross fusion Shallow layer features with less semantic information may damage performance via the skip connections in the conventional U-Net <cit.>.
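A minimal sketch of the degradation simulation module described above is given below. It assumes single-channel inputs normalised to [0, 1]; the function names are ours, and the Gaussian-pyramid step is approximated with a 5x5 binomial kernel, which may differ from the authors' exact implementation.

import torch
import torch.nn.functional as F

def _binomial_kernel5(channels, device):
    # 5x5 binomial approximation of a Gaussian blur kernel, one filter per channel
    k1d = torch.tensor([1., 4., 6., 4., 1.], device=device)
    k2d = torch.outer(k1d, k1d)
    k2d = k2d / k2d.sum()
    return k2d.view(1, 1, 5, 5).repeat(channels, 1, 1, 1)

def pyramid_down(x):
    """One Gaussian-pyramid step: blur with the 5x5 kernel, then subsample by 2."""
    c = x.shape[1]
    k = _binomial_kernel5(c, x.device)
    x = F.conv2d(x, k, padding=2, groups=c)
    return x[..., ::2, ::2]

def degrade(x):
    """Apply one of the four operations with probability 1/4 each: blur (down/up once),
    double blur (down/up twice), no operation, or image square (a simple stand-in
    for the metal-artifact contrast enhancement)."""
    h, w = x.shape[-2:]
    op = torch.randint(0, 4, (1,)).item()
    if op == 0:                      # keep the source image unchanged
        return x
    if op in (1, 2):                 # blur once or twice, then bilinear upsample back
        y = x
        for _ in range(op):
            y = pyramid_down(y)
        return F.interpolate(y, size=(h, w), mode="bilinear", align_corners=False)
    return x ** 2                    # contrast enhancement by squaring the image

# usage: a batch of CBCT slices in [0, 1]
x = torch.rand(4, 1, 512, 512)
x_deg = degrade(x)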
To address this skip-connection issue, we design the CCF module, similar to channel attention <cit.>, <cit.>, to increase the semantic gap between teeth and the background. Fig. 2 shows the framework of CCF. The encoded CCF feature is defined as: O_i = CCF(E_1, D_1) for i = 1, and O_i = CCF(F_{i-1}, D_i) for i = 2, 3, 4. The feature fusion of CCF in Fig. 2 is defined as: Ô_i = Ê_i + D̂_i. Also, the SE-Net module is added to exploit the correlation of feature channels, enhancing the feature representation: D_i = concat(D_{i+1}, O_{i+1}) for i = 1, 2, 3, and D_4 = ℱ_SE(F_5), where ℱ_SE denotes the SE-Net. The D_i are the fused features of inconsistent semantics between the CCF and SE-Net. §.§ Structural constraint loss With noise in the images, predictions often lose the tooth structure. Thus, we design the structural constraint loss based on the structural similarity index measure (SSIM) <cit.>: Loss_str = 1 - [SSIM(I_O, I_GT) + SSIM(I_C, I_GT)]/2, where I_GT is the ground-truth zero-one mask, I_O is the prediction of ToothSegNet, and I_C is the element-wise multiplication of I_O and I_GT which, as shown in Fig. 3, retains the correctly predicted area in I_O and sets the wrongly predicted area to 0. Loss_str ensures that the prediction approaches I_GT structurally, effectively constraining the results. The total loss function is the sum of three losses: Loss_total = Loss_Dice + Loss_BCE + Loss_str, where, besides Loss_str, Loss_Dice is based on the Dice metric <cit.> and Loss_BCE is the binary cross-entropy (BCE) loss <cit.>. §.§ Training details We set the batch size to 4 and the number of epochs to 300. The input resolution and patch size are set to 512 x 512 and 16, respectively. For the optimizer, we use Adam with an initial learning rate of 0.01. We perform a number of data augmentations including horizontal flipping, vertical flipping and random rotation. § EXPERIMENT §.§ Dataset We construct a large-scale CBCT dataset consisting of 503 patient samples from hospitals and clinics across 25 provinces in China during 2018-2021. The 503 patients are aged 19.53±7.57 years, with 32.5% male and 67.5% female. Each patient has 400 slices, from which we select 15-25 slices to annotate to make sure the labeled images contain different anatomical information. Selected images are annotated with the software Labelme, under the supervision of senior radiologists with more than 10 years of experience. Accordingly, the dataset has 9651 CBCT images in total, from which 8612 images of 453 patients are used to train the model and 1039 images of 50 patients are used to test the model. §.§ Main results To demonstrate the overall segmentation performance of the proposed ToothSegNet, we compare it with six methods for a comprehensive evaluation, covering four U-Net based methods: U-Net <cit.>, UNet++ <cit.>, FCN <cit.>, Deeplabv3 <cit.>, and two state-of-the-art transformer-based segmentation methods: MedT <cit.> and UCTransNet <cit.>. To make a fair comparison, the implementations of MedT, UCTransNet, UNet, and UNet++ are based on their original source code, and the implementations of FCN and Deeplabv3 are based on the code from MMSegmentation <cit.>, where their originally published settings are used in the experiment. The backbone of DeepLabv3 and FCN is ResNet-50, the training hyperparameters are the MMSegmentation defaults <cit.>, and the training procedure lasts for 160K iterations. Experimental results are reported in Table 1, where the best results are boldfaced and the second-best results are underlined.
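The structural constraint loss above can be sketched as follows. This is our own simplified version: SSIM is computed from global per-image statistics rather than with the usual sliding window, and the function names are assumptions rather than the authors' API.

import torch

def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM using global per-image statistics (the paper presumably
    uses the standard windowed SSIM)."""
    dims = (-2, -1)
    mu_a, mu_b = a.mean(dim=dims), b.mean(dim=dims)
    var_a = a.var(dim=dims, unbiased=False)
    var_b = b.var(dim=dims, unbiased=False)
    cov = ((a - mu_a[..., None, None]) * (b - mu_b[..., None, None])).mean(dim=dims)
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)
            / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

def structural_constraint_loss(i_o, i_gt):
    """Loss_str = 1 - [SSIM(I_O, I_GT) + SSIM(I_C, I_GT)] / 2, with I_C = I_O * I_GT
    keeping only the correctly predicted tooth area."""
    i_c = i_o * i_gt
    return (1 - 0.5 * (ssim_global(i_o, i_gt) + ssim_global(i_c, i_gt))).mean()

# usage: predicted tooth probabilities and a binary ground-truth mask
pred = torch.rand(4, 1, 512, 512)
target = (torch.rand(4, 1, 512, 512) > 0.5).float()
loss = structural_constraint_loss(pred, target)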
Each entry of Table 1 exhibits the average over 50 patient samples. The results indicate that ToothSegNet attains noticeable improvements over prior art. In clinics, the segmentation results of tooth CBCT images need to be reconstructed into 3D mesh images, which turns out to require a balance between false positives and false negatives. Therefore, CBCT segmentation favors higher Dice and IoU. ToothSegNet achieves 89.74% Dice and 82.24% IoU on the test set, surpassing UCTransNet by 4.37% Dice and 5.38% IoU. We visualize the segmentation results of all models in Fig. 4, where the red boxes contain FN pixels (red) and FP pixels (green). Fig. 4 (a) shows the results for a high-quality CBCT image, on which ToothSegNet outperforms the other methods. (b) and (c) show the results for CBCT images with blurring, illustrating that our method generates segmentation results more similar to the ground truth than the others. (d) shows the results for a CBCT image with metal artifacts, demonstrating that ToothSegNet has the best performance on tooth boundaries. §.§ Reconstruction To explore the performance of ToothSegNet in clinics, we employ the marching cubes algorithm <cit.> to obtain a 3D CBCT mesh. Fig. 5 shows the reconstruction results using four state-of-the-art methods for a selected sample in which the CBCT images are blurred around tooth-erupting areas, and have high contrast differences as well as significant metal artifacts in tooth crown areas. As shown in the right view, the tooth root reconstruction of ToothSegNet is the most complete, whereas the others have unreasonable disconnected fragments. In the frontal view, ToothSegNet results in the smoothest and most natural reconstruction at the tooth connections, whereas UCTransNet leads to unnatural connections, and U-Net and U-Net++ both end up with strange textures, illustrated in a yellow box and a red box respectively. In the left view, one can observe that the reconstructions of UCTransNet, U-Net, and U-Net++ all contain floating tooth fragments, violating common sense. §.§ Ablation study Table 2 records the ablation study results for the degradation simulation module, the structural constraint loss, and the CCF module. One can observe that all three designs are effective for segmentation, motivating the extension of this degradation simulation approach to other computer vision tasks. Also, we compare the performance of the structural constraint loss with the traditional SSIM loss <cit.>, as shown in Fig. 3, demonstrating that our constraint loss leads to more precise segmentation. § CONCLUSION Accurate tooth CBCT image segmentation is essential for clinical orthodontics. In this work, we combine the strengths of the designed degradation simulation module, the CCF, and the structural constraint loss to provide precise and robust automatic CBCT tooth segmentation. With in-depth analysis and an ablation study, we show that the proposed ToothSegNet obtains better segmentation results on tooth datasets than the other six advanced medical image segmentation methods in terms of quantitative evaluation and visualization. In the future, we will deploy our work into clinical usage.
http://arxiv.org/abs/2307.03217v1
20230706175610
Quantification of Uncertainty with Adversarial Models
[ "Kajetan Schweighofer", "Lukas Aichberger", "Mykyta Ielanskyi", "Günter Klambauer", "Sepp Hochreiter" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Quantification of Uncertainty with Adversarial Models. Kajetan Schweighofer^1, Lukas Aichberger^1, Mykyta Ielanskyi^1 (joint first authors), Günter Klambauer^1, Sepp Hochreiter^1,2. ^1 ELLIS Unit Linz and LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria. ^2 Institute of Advanced Research in Artificial Intelligence (IARAI), Vienna, Austria. August 1, 2023. Quantifying uncertainty is important for actionable predictions in real-world applications. A crucial part of predictive uncertainty quantification is the estimation of epistemic uncertainty, which is defined as an integral of the product between a divergence function and the posterior. Current methods such as Deep Ensembles or MC dropout underperform at estimating the epistemic uncertainty, since they primarily consider the posterior when sampling models. We suggest Quantification of Uncertainty with Adversarial Models (QUAM) to better estimate the epistemic uncertainty. QUAM identifies regions where the whole product under the integral is large, not just the posterior. Consequently, QUAM has lower approximation error of the epistemic uncertainty compared to previous methods. Models for which the product is large correspond to adversarial models (not adversarial examples!). Adversarial models have both a high posterior as well as a high divergence between their predictions and that of a reference model. Our experiments show that QUAM excels in capturing epistemic uncertainty for deep learning models and outperforms previous methods on challenging tasks in the vision domain. § INTRODUCTION Actionable predictions typically require risk assessment based on predictive uncertainty quantification <cit.>. This is of utmost importance in high stake applications, such as medical diagnosis or drug discovery, where human lives or extensive investments are at risk. In such settings, even a single prediction has far-reaching real-world impact, thus necessitating the most precise quantification of the associated uncertainties. Furthermore, foundation models or specialized models that are obtained externally are becoming increasingly prevalent, also in high stake applications.
It is crucial to assess the robustness and reliability of those unknown models before applying them. Therefore, the predictive uncertainty of given, pre-selected models at specific test points should be quantified, which we address in this work. We consider predictive uncertainty quantification (see Fig. <ref>) for deep neural networks <cit.>. According to <cit.>, predictive uncertainty can be categorized into two types. First, aleatoric (Type A, variability, stochastic, true, irreducible) uncertainty refers to the variability when drawing samples or when repeating the same experiment. Second, epistemic (Type B, lack of knowledge, subjective, reducible) uncertainty refers to the lack of knowledge about the true model. Epistemic uncertainty can result from imprecision in parameter estimates, incompleteness in modeling, or indefiniteness in the applicability of the model. Epistemic uncertainty can be reduced by more data, better models, or more knowledge about the problem, while aleatoric uncertainty cannot be reduced. We follow <cit.> and consider epistemic uncertainty as the imprecision or variability of parameters that determine a distribution. <cit.> calls this epistemic uncertainty ”parameter uncertainty”, which results from an imperfect learning algorithm or from insufficiently many training samples. Consequently, we consider uncertainty quantification as characterizing a stochastic model of the world, where aleatoric uncertainty is the stochasticity of the model and epistemic uncertainty is the uncertainty about model parameters. Quantifying predictive uncertainty, especially for deep learning models, is an active area of research. Classical uncertainty quantification methods such as Bayesian Neural Networks (BNNs) <cit.> are challenging for deep learning, since (i) the Hessian or maximum-a-posterior (MAP) is difficult to estimate and (ii) regularization & normalization techniques cannot be treated <cit.>. Epistemic neural networks <cit.> add a variance term (the epinet) to the output only. Bayes By Backprop <cit.> and variational neural networks <cit.> work only for small models as they require considerably more parameters. Monte-Carlo (MC) dropout <cit.> casts applying dropout during inference as sampling from an approximate distribution. MC dropout was generalized to MC dropconnect <cit.>. Deep Ensembles <cit.> are often the best-performing uncertainty quantification method <cit.>. Masksembles or Dropout Ensembles combine ensembling with MC dropout <cit.>. Stochastic Weight Averaging approximates the posterior over the weights <cit.>. Single forward pass methods are efficient, as they aim to capture epistemic uncertainty through the distribution or distances of latent representations <cit.>, but were found to have lower performance under distribution shifts <cit.>. For further methods see <cit.> and <cit.>. Current uncertainty quantification methods such as Deep Ensembles <cit.> or MC dropout <cit.> underperform at estimating the epistemic uncertainty, since they primarily consider the posterior when sampling models. Thus they are prone to miss important posterior modes, where the integrand of the integral defining the epistemic uncertainty is large. To identify those posterior modes, we introduce Quantification of Uncertainty with Adversarial Models (QUAM) for uncertainty quantification. QUAM searches for those posterior modes via adversarial models and uses them to reduce the approximation error when estimating the integral defining the epistemic uncertainty. 
Adversarial models are characterized by a large value of the integrand of the integral defining the epistemic uncertainty. Thus, they considerably differ to the reference model's prediction at a test point while having a similarly high posterior probability. Consequently, they are counterexamples of the reference model that predict differently for a new input, but explain the training data equally well. Fig. <ref> shows examples of adversarial models which assign different classes to a test point, but agree on the training data. Our main contributions are: * We introduce QUAM as a framework for uncertainty quantification. QUAM approximates the integrals that define the epistemic uncertainty substantially better than previous methods, since it reduces the approximation error of the integral estimator. * We introduce the concept of adversarial models for estimating posterior integrals with non-negative integrands. For a given test point, adversarial models have considerably different predictions than a reference model while having similarly high posterior probability. * We introduce a new setting for uncertainty quantification, where the uncertainty of a given, pre-selected model is quantified. § CURRENT METHODS TO ESTIMATE THE EPISTEMIC UNCERTAINTY §.§ Definition of Predictive Uncertainty Predictive uncertainty quantification is about describing a stochastic model of the world, where aleatoric uncertainty is the stochasticity of the model and epistemic uncertainty is the uncertainty about the parameters of the model. We consider two distinct settings of predictive uncertainty quantification. (a) First, the expected predictive uncertainty at a new test point when selecting a model given a training dataset <cit.>. This definition of uncertainty comprises uncertainty about which model will be selected (epistemic) and the prediction uncertainty of the selected model (aleatoric). In this setting, epistemic uncertainty is the uncertainty about which parameters will be selected. (b) Second, the predictive uncertainty of a given, pre-selected model at a new test point. This definition of uncertainty comprises uncertainty about the true model of the world (epistemic) and prediction uncertainty of the given, pre-selected model (aleatoric). In this setting, epistemic uncertainty is the uncertainty about the parameters of the true model that produced the training data <cit.>. As an example, assume we have initial data from an epidemic, but we do not know the exact infection rate which is a parameter of a prediction model. The goal is to predict the number of infected persons at a specific time in the future, where each time point is a test point. In setting (a), we are interested in the uncertainty at test point predictions of all models using infection rates that explain the initial data. If all potential models agree for a given new test point, the prediction of any of those models can be trusted, otherwise we can not trust the prediction regardless of which model is selected in the end. In setting (b), we have selected a specific infection rate from the initial data as parameter for our model to make predictions. We refer to this model as the given, pre-selected model. However, we do not know the true infection rate of the epidemic. All models with infection rates that are consistent with the initial data are likely to be the true model. If the likely models agree with the given, pre-selected model for a given new test point, the prediction of the model can be trusted. 
Measuring Predictive Uncertainty. We consider the predictive distribution of a single model p(y | x, w), which is a stochastic model of the world. Depending on the task, the output distribution of this stochastic model can be a categorical distribution for classification or a Gaussian distribution for regression. The Bayesian framework offers a principled way to treat the uncertainty about the parameters through the posterior p(w | D) ∝ p(D | w) p(w) for a given dataset D. The Bayesian model average (BMA) predictive distribution is given by p(y | x, D) = ∫_W p(y | x, w) p(w | D) dw. Following <cit.>, the uncertainty of the BMA predictive distribution is commonly measured by the entropy H[ p(y | x, D) ], which can be decomposed into an aleatoric and epistemic part. This entropy is equal to the posterior expectation of the cross-entropy between the predictive distribution of potential models and the BMA, which corresponds to setting (a). The expected cross-entropy is also applicable to setting (b). A more detailed discussion about the entropy and cross-entropy as measures of uncertainty is given in Sec. <ref> in the Appendix. In the following, we formalize how to measure the notions of uncertainty in setting (a) and (b) using the expected cross-entropy over the posterior. Setting (a): Expected uncertainty when selecting a model. We estimate the predictive uncertainty at a test point x when selecting a model given a training dataset D. The total uncertainty is the expected cross-entropy between the predictive distribution of candidate models p(y | x, w) and the BMA predictive distribution p(y | x, D), where the expectation is with respect to the posterior: ∫_W CE( p(y | x, w) ; p(y | x, D) ) p(w | D) dw = H[ p(y | x, D) ] = ∫_W H[ p(y | x, w) ] p(w | D) dw + I( Y ; W | x, D ) = ∫_W H[ p(y | x, w) ] p(w | D) dw (aleatoric) + ∫_W KL( p(y | x, w) ∥ p(y | x, D) ) p(w | D) dw (epistemic). The aleatoric uncertainty characterizes the uncertainty due to the stochasticity of the predictive distribution of the candidate model p(y | x, w). The epistemic uncertainty characterizes the uncertainty due to the mismatch between the predictive distribution of candidate models and the BMA predictive distribution. See Appendix Sec. <ref> for a more detailed derivation. Setting (b): Uncertainty of a given, pre-selected model. We estimate the predictive uncertainty of a given, pre-selected model with parameters w̃ at a test point x. We assume that the dataset D is produced according to the true distribution p(y | x, w^*) parameterized by w^*. The posterior p(w | D) is an estimate of how likely the parameters w match w^*. For epistemic uncertainty, we should measure the difference between the predictive distributions under w̃ and w^*, but w^* is unknown. Therefore, we measure the expected difference between the predictive distributions under w̃ and w. In accordance with <cit.> and <cit.>, the total uncertainty is therefore the expected cross-entropy between the predictive distributions of the given, pre-selected model w̃ and candidate models w, as those could be the true model w^* according to the posterior: ∫_W CE( p(y | x, w̃) ; p(y | x, w) ) p(w | D) dw = H[ p(y | x, w̃) ] (aleatoric) + ∫_W KL( p(y | x, w̃) ∥ p(y | x, w) ) p(w | D) dw (epistemic). The aleatoric uncertainty characterizes the uncertainty due to the stochasticity of the predictive distribution of the given, pre-selected model p(y | x, w̃). The epistemic uncertainty characterizes the uncertainty due to the mismatch between the predictive distribution of the given, pre-selected model and the predictive distribution of candidate models that could be the true model. See Appendix Sec. <ref> for a more detailed derivation.
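Given Monte Carlo samples of the predictive distribution under the posterior, both decompositions reduce to simple array operations. The following minimal sketch for categorical outputs is ours (function names are assumptions; posterior samples stand in for the integrals over w):

import numpy as np

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=-1)

def kl(p, q, eps=1e-12):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def uncertainty_setting_a(p_samples):
    """p_samples: (N, C) predictive distributions of N posterior samples.
    Total = H[BMA], aleatoric = E_w H[p_w], epistemic = E_w KL(p_w || BMA)."""
    bma = p_samples.mean(axis=0)
    aleatoric = entropy(p_samples).mean()
    epistemic = kl(p_samples, bma).mean()
    return aleatoric + epistemic, aleatoric, epistemic

def uncertainty_setting_b(p_ref, p_samples):
    """Aleatoric = H[p_ref]; epistemic = E_w KL(p_ref || p_w) over posterior samples."""
    aleatoric = entropy(p_ref)
    epistemic = kl(p_ref[None, :], p_samples).mean()
    return aleatoric + epistemic, aleatoric, epistemic

# toy example: three candidate models disagree about a 3-class prediction
p_samples = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
p_ref = np.array([0.8, 0.1, 0.1])
print(uncertainty_setting_a(p_samples))
print(uncertainty_setting_b(p_ref, p_samples))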
§.§ Estimating the Integral for Epistemic Uncertainty Current methods for predictive uncertainty quantification suffer from underestimating the epistemic uncertainty. The epistemic uncertainty is given by the respective terms in Eq. (<ref>) for setting (a) and Eq. (<ref>) for our new setting (b). To estimate these integrals, almost all methods use gradient descent on the training data. Thus, posterior modes that are hidden from the gradient flow remain undiscovered and the epistemic uncertainty is underestimated. An illustrative example is depicted in Fig. <ref>. Posterior expectations as in Eq. (<ref>) and Eq. (<ref>) that define the epistemic uncertainty are generally approximated using Monte Carlo integration. A good approximation of posterior integrals through Monte Carlo integration requires to capture all large values of the non-negative integrand <cit.>, which is not only large values of the posterior, but also large values of the KL-divergence. Variational inference <cit.> and ensemble methods <cit.> estimate the posterior integral based on models with high posterior. Posterior modes may be hidden from gradient descent based techniques as they only discover mechanistically similar models. Two models are mechanistically similar if they rely on the same input attributes for making their predictions, that is, they are invariant to the same input attributes <cit.>. However, gradient descent will always start by extracting input attributes that are highly correlated to the target as they determine the steepest descent in the error landscape. These input attributes create a large basin in the error landscape into which the parameter vector is drawn via gradient descent. Consequently, other modes further away from such basins are almost never found. Thus, the epistemic uncertainty is underestimated. <cit.> found that neither BNNs, Concrete Dropout, nor Deep Ensembles performed well in estimating the epistemic uncertainty for samples far from the training distribution. Another reason that posterior modes may be hidden from gradient descent is the presence of different labeling hypotheses. If there is more than one way to explain the training data, gradient descent will use all of them as they give the steepest error descent. Other work focuses on MCMC sampling according to the posterior distribution, which is approximated by stochastic gradient variants <cit.> for large datasets and models. Those are known to face issues to efficiently explore the highly complex and multimodal parameter space and escape local posterior modes. There are attempts to alleviate the problem <cit.>. However, those methods do not explicitly look for important posterior modes, where the output distribution of sampled models contribute strongly to the approximation of the posterior integral, and thus have large values for the KL-divergence. § ADVERSARIAL MODELS TO ESTIMATE THE EPISTEMIC UNCERTAINTY *Intuition. The epistemic uncertainty in Eq. (<ref>) for setting (a) compares possible models with the BMA. Thus, the BMA is used as reference model. The epistemic uncertainty in Eq. (<ref>) for our new setting (b) compares models that are candidates for the true model with the given, pre-selected model. Thus, the given, pre-selected model is used as reference model. If the reference model makes some prediction at the test point, and if other models (the adversaries) make different predictions while explaining the training data equally well, then one should be uncertain about the prediction. 
Adversarial models are plausible outcomes of model selection, while having a different prediction at the test data point than the reference model. In court, the same principle is used: if the prosecutor presents a scenario but the advocate presents alternative equally plausible scenarios, the judges become uncertain about what happened and rule in favor of the defendant. We use adversarial models to identify locations where the integrand of the epistemic uncertainty in Eq. (<ref>) or Eq. (<ref>) is large. These locations are used to construct a mixture distribution that is used for mixture importance sampling to estimate the desired integrals in Eq. (<ref>) or Eq. (<ref>). Using the mixture distribution for sampling, we aim to considerably reduce the approximation error of the estimator. *Mixture Importance Sampling. We estimate the integrals of epistemic uncertainty in Eq. (<ref>) and in Eq. (<ref>). In the following, we focus on setting (b) with Eq. (<ref>), but all results hold for setting (a) with Eq. (<ref>) as well. Most methods sample from a distribution q(w) to approximate the integral: v = ∫_W KL( p(y | x, w̃) ∥ p(y | x, w) ) p(w | D) dw = ∫_W [ u(x, D, w) / q(w) ] q(w) dw, where u(x, D, w) = KL( p(y | x, w̃) ∥ p(y | x, w) ) p(w | D). As with Deep Ensembles or MC dropout, posterior sampling is often approximated by a sampling distribution q(w) that is close to p(w | D). Monte Carlo (MC) integration estimates v by v̂ = (1/N) ∑_n=1^N u(x, D, w_n) / q(w_n), with w_n ∼ q(w). If the posterior has different modes, the estimate under a unimodal approximate distribution has high variance and converges very slowly <cit.>. Thus, we use mixture importance sampling (MIS) <cit.>. MIS utilizes a mixture distribution instead of the unimodal distribution in standard importance sampling <cit.>. Furthermore, many MIS methods iteratively enhance the sampling distribution by incorporating new modes <cit.>. In contrast to the usually applied iterative enrichment methods which find new modes by chance, we have a much more favorable situation. We can explicitly search for posterior modes where the KL divergence is large, as we can cast it as a supervised learning problem. Each of these modes determines the location of a mixture component of the mixture distribution. The expected mean squared error of importance sampling with q(w) can be bounded by E_q(w)[ ( v̂ - v )^2 ] ≤ E_q(w)[ ( u(x, D, w) / q(w) )^2 ] 4/N. The inequality Eq. (<ref>) follows from Theorem 1 in <cit.>, when considering 0 ≤ u(x, D, w) as an unnormalized distribution and setting φ=1. Approximating only the posterior p(w | D), as done by Deep Ensembles or MC dropout, is insufficient to guarantee a low expected mean squared error, since the sampling variance cannot be bounded (see Appendix Sec. <ref>). With a constant c, E_q(w)[ ( v̂ - v )^2 ] ≤ 4 c^2 / N holds if u(x, D, w) ≤ c q(w). Consequently, q(w) must have modes where u(x, D, w) has modes, even if the q-modes are a factor c smaller. The modes of u(x, D, w) are models with both high posterior and high KL-divergence. We are searching for these modes to determine the locations ŵ_k of the components of a mixture distribution q(w): q(w) = ∑_k=1^K α_k q(w ; ŵ_k, Σ), with α_k = 1/K, for K such models ŵ_k that determine a mode. Adversarial model search finds the locations ŵ_k of the mixture components, where ŵ_k is an adversarial model. The reference model does not define a mixture component, as it has zero KL-divergence to itself. We then sample from a distribution at the local posterior mode with mean ŵ_k and a set of shape parameters Σ. The simplest choice for this component distribution is a Dirac delta distribution, but one could use e.g.
a local Laplace approximation of the posterior <cit.>, or a Gaussian distribution in some weight-subspace <cit.>. Furthermore, one could use ŵ_k as a starting point for SG-MCMC chains <cit.>. More details regarding MIS are given in the Appendix in Sec. <ref>. In the following, we propose an algorithm to find those models with both high posterior and high KL-divergence to the output distribution of the reference model. *Adversarial Model Search. Adversarial model search is the concept of searching for a model that has a large distance / divergence to the reference predictive distribution and at the same time a high posterior. We call such models "adversarial models" as they act as adversaries to the reference model by contradicting its prediction. A formal definition of an adversarial model is given by Def. <ref>: Given are a reference conditional probability model p(· | ·, w̃) from a model class parameterized by w, a divergence or distance measure d(· ; ·) for probability distributions, γ>0, Λ > 0, a dataset D, and a new test data point x. Then a model with parameters w that satisfies the inequalities |log p(w̃ | D) - log p(w | D)| ≤ γ and d( p(· | x, w̃) ; p(· | x, w) ) ≥ Λ is called a (γ, Λ)-adversarial model. Adversarial model search corresponds to the following optimization problem: max_δ ∈ Δ KL( p(y | x, w̃) ∥ p(y | x, w̃ + δ) ) s.t. log p(w̃ | D) - log p(w̃ + δ | D) ≤ γ. We are searching for a weight perturbation δ that maximizes the distance (KL divergence) to the reference distribution without decreasing the log posterior by more than γ. The search for adversarial models is restricted to δ ∈ Δ, for example by only optimizing the last layer of the reference model or by bounding the norm of δ. This optimization problem can be rewritten as: max_δ ∈ Δ KL( p(y | x, w̃) ∥ p(y | x, w̃ + δ) ) + c ( log p(w̃ + δ | D) - log p(w̃ | D) + γ ), where c is a hyperparameter. According to the Karush-Kuhn-Tucker (KKT) theorem <cit.>: If δ^* is the solution to the problem Eq. (<ref>), then there exists a c^* ≥ 0 with ∇_δ L(δ^*, c^*) = 0 (L is the Lagrangian) and c^* ( log p(w̃ | D) - log p(w̃ + δ^* | D) - γ ) = 0. This is a necessary condition for an optimal point according to the Theorem on Page 326 of <cit.>. We solve this optimization problem by the penalty method, which relies on the KKT theorem <cit.>. A penalty algorithm solves a series of unconstrained problems, solutions of which converge to the solution of the original constrained problem (see e.g. <cit.>). The unconstrained problems are constructed by adding a weighted penalty function that measures the violation of the constraints to the objective function. At every step, the weight of the penalty is increased, thus the constraints are less violated. If it exists, the solution to the constrained optimization problem is an adversarial model that is located within a posterior mode but has a different predictive distribution compared to the reference model. We summarize the adversarial model search in Algorithm <ref>. Further implementation details are given in the Appendix Sec. <ref>. § EXPERIMENTS In this section, we compare previous uncertainty quantification methods and our method QUAM in a set of experiments. First, we assess the considered methods on a synthetic benchmark, on which it is feasible to compute a ground truth epistemic uncertainty. Then, we conduct challenging out-of-distribution (OOD) detection, adversarial example detection, misclassification detection and selective prediction experiments in the vision domain.
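A minimal PyTorch sketch of the penalty-method adversarial model search described above is given below. It is an illustration under stated assumptions rather than the authors' implementation: neg_log_post is assumed to return the negative (unnormalised) log posterior, e.g. the training loss plus a weight-decay term; only the final two parameter tensors (taken to be the classifier head) are perturbed; and all hyperparameters are placeholders.

import copy
import torch
import torch.nn.functional as F

def adversarial_model_search(ref_model, neg_log_post, x_test, gamma=1.0,
                             c0=1.0, c_growth=10.0, outer_steps=3,
                             inner_steps=50, lr=1e-3):
    """Maximise the KL divergence to the reference prediction at x_test while
    keeping the drop in (unnormalised) log posterior below gamma, using an
    increasing penalty weight c."""
    with torch.no_grad():
        p_ref = F.softmax(ref_model(x_test), dim=-1)
        log_post_ref = -neg_log_post(ref_model)

    adv_model = copy.deepcopy(ref_model)
    adv_params = list(adv_model.parameters())[-2:]   # assumed: head weight and bias
    opt = torch.optim.Adam(adv_params, lr=lr)

    c = c0
    for _ in range(outer_steps):                     # penalty schedule
        for _ in range(inner_steps):
            opt.zero_grad()
            log_p_adv = F.log_softmax(adv_model(x_test), dim=-1)
            kl = (p_ref * (torch.log(p_ref + 1e-12) - log_p_adv)).sum(-1).mean()
            # constraint violation: log p(w_ref|D) - log p(w_adv|D) - gamma, if positive
            violation = torch.relu(log_post_ref + neg_log_post(adv_model) - gamma)
            (-kl + c * violation).backward()         # maximise KL, keep posterior high
            opt.step()
        c *= c_growth
    return adv_model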
We compare the following methods (1) QUAM, (2) cSG-HMC <cit.>, (3) an efficient Laplace approximation <cit.>, (4) MC dropout <cit.> and (5) Deep Ensembles <cit.> on their ability to estimate the epistemic uncertainty. Those baseline methods, especially Deep Ensembles, are persistently among the best performing uncertainty quantification methods across various benchmark tasks <cit.> §.§ Epistemic Uncertainty on Synthetic Dataset We evaluated all considered methods on the two-moons dataset, created using the implementation of <cit.>. To obtain the ground truth uncertainty, we utilized full-batch Hamiltonian Monte Carlo (HMC) <cit.>. HMC is regarded as the most precise algorithm to approximate posterior expectations <cit.>, but necessitates extreme computational expenses to be applied to models and datasets of practical scale. The results are depicted in Fig. <ref>. QUAM most closely matches the uncertainty estimate of the ground truth epistemic uncertainty obtained by HMC and excels especially on the regions further away from the decision boundary such as in the top left and bottom right of the plots. All other methods fail to capture the epistemic uncertainty in those regions as gradient descent on the training set fails to capture posterior modes with alternative predictive distributions in those parts and misses the important integral components. Experimental details and results for the epistemic uncertainty as in Eq. (<ref>) are given in the Appendix Sec. <ref>. §.§ Epistemic Uncertainty on Vision Datasets We benchmark the ability of different methods to estimate the epistemic uncertainty of a given, pre-selected model (setting (b) as in Eq. (<ref>)) in the context of (i) out-of-distribution (OOD) detection, (ii) adversarial sample detection, (iii) misclassification detection and (iv) selective prediction. In all experiments, we assume to have access to a pre-trained model on the in-distribution (ID) training dataset, which we refer to as reference model. The epistemic uncertainty is expected to be higher for OOD samples, as they can be assigned to multiple ID classes, depending on the utilized features. Adversarial samples indicate that the model is misspecified on those inputs, thus we expect a higher epistemic uncertainty, the uncertainty about the model parameters. Furthermore, we expect higher epistemic uncertainty for misclassified samples than for correctly classified samples. Similarly, we expect the classifier to perform better on a subset of more certain samples. This is tested by evaluating the accuracy of the classifier on retained subsets of a certain fraction of samples with the lowest epistemic uncertainty <cit.>. We report the AUROC for classifying the ID vs. OOD samples (i), the ID vs. the adversarial examples (ii), or the correctly classified vs. the misclassified samples (iii), using the epistemic uncertainty as score to distinguish the two classes respectively. For the selective prediction experiment (iv), we report the AUC of the accuracy vs. fraction of retained samples, using the epistemic uncertainty to determine the retained subsets. MNIST. We perform OOD detection on the FMNIST <cit.>, KMNIST <cit.>, EMNIST <cit.> and OMNIGLOT <cit.> test datasets as OOD datasets, using the LeNet <cit.> architecture. The test dataset of MNIST <cit.> is used as ID dataset. We utilize the aleatoric uncertainty of the reference model (see aleatoric uncertainty in Eq. (<ref>)) as a baseline to assess the added value of estimating the epistemic uncertainty of the reference model. 
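The per-sample uncertainty scores produced by each method are turned into the reported metrics in a threshold-free way; the following NumPy/scikit-learn sketch (with illustrative names) shows how AUROC, FPR at 95% TPR, and the selective-prediction curve used in this and the following subsection are obtained.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def detection_metrics(scores_id, scores_ood):
    """AUROC and FPR@95%TPR for separating two groups by their uncertainty scores
    (ID vs. OOD, clean vs. adversarial, or correct vs. misclassified)."""
    y_true = np.concatenate([np.zeros(len(scores_id)), np.ones(len(scores_ood))])
    y_score = np.concatenate([scores_id, scores_ood])     # higher score = more uncertain
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return roc_auc_score(y_true, y_score), fpr[np.searchsorted(tpr, 0.95)]

def selective_prediction(uncertainty, correct, fractions=np.linspace(0.1, 1.0, 19)):
    """Accuracy on the retained fraction of most-certain samples, and its AUC."""
    order = np.argsort(uncertainty)                       # most certain samples first
    accs = [correct[order[:max(1, int(f * len(order)))]].mean() for f in fractions]
    return accs, np.trapz(accs, fractions)
```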
The results are listed in Tab. <ref>. QUAM outperforms all other methods on this task, with Deep Ensembles being the runner up method on all dataset pairs. Furthermore, we observed, that only the epistemic uncertainties obtained by Deep Ensembles and QUAM are able to surpass the performance of using the aleatoric uncertainty of the reference model. ImageNet-1K. We conduct OOD detection, adversarial example detection, misclassification detection and selective prediction experiments on ImageNet-1K <cit.>. As OOD dataset, we use ImageNet-O <cit.>, which is a challenging OOD dataset that was explicitly created to be classified as an ID dataset with high confidence by conventional ImageNet-1K classifiers. Similarly, ImageNet-A <cit.> is a dataset consisting of natural adversarial examples, which belong to the ID classes of ImageNet-1K, but are misclassified with high confidence by conventional ImageNet-1K classifiers. Furthermore, we evaluated the utility of the uncertainty score for misclassification detection of predictions of the reference model on the ImageNet-1K validation dataset. On the same dataset, we evaluated the accuracy of the reference model when only predicting on fractions of samples with the lowest epistemic uncertainty. All ImageNet experiments were performed on variations of the EfficientNet architecture <cit.>. Recent work by <cit.> showed that typical ImageNet-1K classifiers learn desired features of the data even if they rely on simple, spurious features for their prediction. Furthermore, they found, that last layer retraining on a dataset without the spurious correlation is sufficient to re-weight the importance that the classifier places on different features. This allows the classifier to ignore the spurious features and utilize the desired features for its prediction. Similarly, we apply QUAM on the last layer of the reference model. We compare against cSG-HMC applied to the last layer, MC dropout and Deep Ensembles. MC dropout was applied to the last layer as well, since the EfficientNet architectures utilize dropout only before the last layer. Two versions of Deep Ensembles were considered. First, Deep Ensembles aggregated from pre-trained EfficientNets of different network sizes (DE (all)). Second, Deep Ensembles of retrained last layers on the same encoder network (DE (LL)). While the latter is a more fair comparison to the other methods, the former represents a beneficial scenario for Deep Ensembles: ensembling not just over various parametrizations, but different model architectures. We further utilize the aleatoric uncertainty of the reference model (see aleatoric uncertainty in Eq. (<ref>)) as a baseline to assess the additional benefit of estimating the epistemic uncertainty of the reference model. The Laplace approximation was not feasible to compute on our hardware, even only for the last layer. The results are listed in Tab. <ref>. Furthermore, we show the ROC curve of misclassification detection as well as the curve of the accuracy over retained samples for the selective prediction experiment in Fig. <ref>. We observe that using the epistemic uncertainty provided by Deep Ensembles on the last layer has the worst performance throughout all experiments. While Deep Ensembles composed of multiple trained models performed second best on most tasks, MC dropout outperforms it on OOD detection on the ImageNet-O dataset. 
QUAM outperforms all other methods on all tasks we evaluated, except for ImageNet-A, where it performed on par with Deep Ensembles composed of multiple trained models. Details about all experiments and additional results are given in the Appendix Sec. <ref>. § CONCLUSION We have introduced QUAM, a novel method that quantifies predictive uncertainty using adversarial models. Adversarial models identify important posterior modes that are missed by previous uncertainty quantification methods. We conducted various experiments on deep neural networks, for which epistemic uncertainty is challenging to estimate. On a synthetic dataset, we highlighted the strength of our method to capture epistemic uncertainty. Furthermore, we conducted experiments on large-scale benchmarks in the vision domain, where QUAM outperformed all previous methods. Searching for adversarial models is computationally expensive and has to be done for each new test point. However, more efficient versions can be utilized. One can search for adversarial models while restricting the search to a subset of the parameters, e.g. to the last layer as done for the ImageNet experiments, to the normalization parameters, or to the bias weights. Furthermore, there are a lot of advances for efficient fine-tuning of large models <cit.>. Utilizing those for more efficient versions of our algorithm is an interesting direction for future work. Additionally, in the classification setting, one could search for adversarial models only for a subset of classes with highest probability assigned by the reference model. Nevertheless, high stake applications justify this effort to obtain the best estimate of predictive uncertainty for each new test point. Furthermore, QUAM is applicable to quantify the predictive uncertainty of any single given model, regardless of whether uncertainty estimation was considered during the modeling process. This allows to assess the predictive uncertainty of foundation models or specialized models that are obtained externally. § ACKNOWLEDGEMENTS We would like to thank Angela Bitto-Nemling, Daniel Klotz and Sebastian Lehner for helpful discussions and feedback during all stages of this research project. The ELLIS Unit Linz, the LIT AI Lab, the Institute for Machine Learning, are supported by the Federal State Upper Austria. IARAI is supported by Here Technologies. We thank the projects AI-MOTION (LIT-2018-6-YOU-212), DeepFlood (LIT-2019-8-YOU-213), Medical Cognitive Computing Center (MC3), INCONTROL-RL (FFG-881064), PRIMAL (FFG-873979), S3AI (FFG-872172), DL for GranularFlow (FFG-871302), EPILEPSIA (FFG-892171), AIRI FG 9-N (FWF-36284, FWF-36235), ELISE (H2020-ICT-2019-3 ID: 951847), Stars4Waters (HORIZON-CL6-2021-CLIMATE-01-01). We thank Audi.JKU Deep Learning Center, TGW LOGISTICS GROUP GMBH, Silicon Austria Labs (SAL), FILL Gesellschaft mbH, Anyline GmbH, Google, ZF Friedrichshafen AG, Robert Bosch GmbH, UCB Biopharma SRL, Merck Healthcare KGaA, Verbund AG, GLS (Univ. Waterloo), Software Competence Center Hagenberg GmbH, TÜV Austria, Frauscher Sensonic and the NVIDIA Corporation. plainnat figuresection tablesection toc § APPENDIX This is the Appendix of the paper ”Quantification of Uncertainty with Adversarial Models”. It consists of three sections. In view of the increasing influence of contemporary machine learning research on the broader public, section <ref> gives a societal impact statement. 
Following to this, section <ref> gives details of our theoretical results, foremost about the measure of uncertainty used throughout our work. Furthermore, Mixture Importance Sampling for variance reduction is discussed. Finally, section <ref> gives details about the experiments presented in the main paper, as well as further experiments. § SOCIETAL IMPACT STATEMENT In this work, we have focused on improving the predictive uncertainty estimation for machine learning models, specifically deep learning models. Our primary goal is to enhance the robustness and reliability of these predictions, which we believe have several positive societal impacts. * Improved decision-making: By providing more accurate predictive uncertainty estimates, we enable a broad range of stakeholders to make more informed decisions. This could have implications across various sectors, including healthcare, finance, and autonomous vehicles, where decision-making based on machine learning predictions can directly affect human lives and economic stability. * Increased trust in machine learning systems: By enhancing the reliability of machine learning models, our work may also contribute to increased public trust in these systems. This could foster greater acceptance and integration of machine learning technologies in everyday life, driving societal advancement. * Promotion of responsible machine learning: Accurate uncertainty estimation is crucial for the responsible deployment of machine learning systems. By advancing this area, our work promotes the use of those methods in an ethical, transparent, and accountable manner. While we anticipate predominantly positive impacts, it's important to acknowledge potential negative impacts or challenges. * Misinterpretation of uncertainty: Even with improved uncertainty estimates, there's a risk that these might be misinterpreted or misused, potentially leading to incorrect decisions or unintended consequences. It is vital to couple advancements in this field with improved education and awareness around the interpretation of uncertainty in AI systems. * Increased reliance on machine learning systems: While increased trust in machine learning systems is beneficial, there's a risk it could lead to over-reliance on these systems, potentially resulting in reduced human oversight or critical thinking. It's important that robustness and reliability improvements don't result in blind trust. * Inequitable distribution of benefits: As with any technological advancement, there's a risk that the benefits might not be evenly distributed, potentially exacerbating existing societal inequalities. We urge policymakers and practitioners to consider this when implementing our findings. In conclusion, while our work aims to make significant positive contributions to society, we believe it's essential to consider these potential negative impacts and take steps to mitigate them proactively. § THEORETICAL RESULTS §.§ Measuring Predictive Uncertainty In this section, we first discuss the usage of the entropy and the cross-entropy as measures of predictive uncertainty. Following this, we introduce the two settings (a) and (b) (see Sec. <ref>) in detail for the predictive distributions of stochastic models in classification and regression. Finally, we discuss Mixture Importance Sampling for variance reduction of the uncertainty estimator. 
§.§.§ Entropy and Cross-Entropy as Measures of Predictive Uncertainty <cit.> defines the entropy () = - ∑_i=1^N p_i log p_i as a measure of the amount of uncertainty of a discrete probability distribution =(p_1,…,p_N) and states that it measures how much ”choice” is involved in the selection of a class i. See also <cit.> for an elaboration on this topic. The value - log p_i has been called "surprisal" <cit.> (page 64, Subsection 2.9.1) and has been used in computational linguistics <cit.>. Hence, the entropy is the expected or mean surprisal. Instead of ”surprisal” also the terms ”information content”, ”self-information”, or ”Shannon information” are used. The cross-entropy = - ∑_i=1^N p_i log q_i between two discrete probability distributions =(p_1,…,p_N) and =(q_1,…,q_N) measures the expectation of the surprisal of under distribution . Like the entropy, the cross-entropy is a mean of surprisals, therefore can be considered as a measure to quantify uncertainty. The higher surprisals are on average, the higher the uncertainty. The cross-entropy has increased uncertainty compared to the entropy since more surprising events are expected when selecting events via instead of . Only if those distributions coincide, there is no additional surprisal and the cross-entropy is equal to the entropy of the distributions. The cross-entropy depends on the uncertainty of the two distributions and how different they are. In particular, high surprisal of q_i and low surprisal of p_i strongly increase the cross-entropy since unexpected events are more frequent, that is, we are more often surprised. Thus, the cross-entropy does not only measure the uncertainty under distribution , but also the difference of the distributions. The average surprisal via the cross-entropy depends on the uncertainty of and the difference between and : = - ∑_i=1^N p_i log q_i = - ∑_i=1^N p_i log p_i + ∑_i=1^N p_i logp_i/q_i = () + , where the Kullback-Leibler divergence .. is = ∑_i=1^N p_i logp_i/q_i . The Kullback-Leibler divergence measures the difference in the distributions via their average difference of surprisals. Furthermore, it measures the decrease in uncertainty when shifting from the estimate to the true <cit.>. Therefore, the cross-entropy can serve to measure the total uncertainty, where the entropy is used as aleatoric uncertainty and the difference of distributions is used as the epistemic uncertainty. We assume that is the true distribution that is estimated by the distribution . We quantify the total uncertainty of as the sum of the entropy of (aleatoric uncertainty) and the Kullback-Leibler divergence to (epistemic uncertainty). In accordance with <cit.> and <cit.>, the aleatoric uncertainty measures the stochasticity of , while the epistemic uncertainty measures the deviation of the parameters from the true parameters . In the context of quantifying uncertainty through probability distributions, other measures such as the variance have been proposed <cit.>. For uncertainty estimation in the context of deep learning systems, e.g. <cit.> proposed to use the variance of the BMA predictive distribution as a measure of uncertainty. Entropy and variance capture different notions of uncertainty and investigating measures based on the variance of the predictive distribution is an interesting avenue for future work. §.§.§ Classification Setting (a): Expected uncertainty when selecting a model. We assume to have training data and an input . 
We want to know the uncertainty in predicting a class from when we first choose a model based on the posterior p(| ) an then use the chosen model to choose a class for input according to the predictive distribution p(|, ). The uncertainty in predicting the class arises from choosing a model (epistemic) and from choosing a class using this stochastic model (aleatoric). Through Bayesian model averaging, we obtain the following probability of selecting a class: p(|, ) = ∫_ p(|, ) p(| ) . The total uncertainty is commonly measured as the entropy of this probability distribution <cit.>: [ p(|, ) ] . We can reformulate the total uncertainty as the expected cross-entropy: [ p(|, ) ] = - ∑_∈ p(|, ) log p(|, ) = - ∑_∈log p(|, ) ∫_ p(|, ) p(| ) = ∫_( - ∑_∈ p(|, ) log p(|, ) ) p(| ) = ∫_p(|, )p(|, ) p(| ) . We can split the total uncertainty into the aleatoric and epistemic uncertainty <cit.>: ∫_ p(|, )p(|, ) p(| ) = ∫_( [p(|, ) ] + p(|, )p(|, )) p(| ) = ∫_[p(|, ) ] p(| ) + ∫_p(|, )p(|, ) p(| ) = ∫_[p(|, ) ] p(| ) + YW |, . We verify the last equality in Eq. (<ref>), i.e. that the Mutual Information is equal to the expected Kullback-Leibler divergence: YW |, = ∫_∑_∈ p(,|, ) logp(,|, )/p(|, ) p(|) = ∫_∑_∈ p(|, ) p(|) log p(|, ) p(|) /p(|, ) p(|) = ∫_∑_∈ p(|, ) log p(|, ) /p(|, ) p(|) = ∫_p(|, ) p(|, ) p(|) . This is possible because the label is dependent on the selected model. First, a model is selected, then a label is chosen with the selected model. To summarize, the predictive uncertainty is measured by: [ p(|, ) ] = ∫_[p(|, ) ] p(| ) + YW |, = ∫_[p(|, ) ] p(| ) + ∫_p(|, )p(|, ) p(| ) = ∫_p(|, )p(|, ) p(| ) . The total uncertainty is given by the entropy of the Bayesian model average predictive distribution, which we showed is equal to the expected cross-entropy between the predictive distributions of candidate models selected according to the posterior and the Bayesian model average predictive distribution. The aleatoric uncertainty is the expected entropy of candidate models drawn from the posterior, which can also be interpreted as the entropy we expect when selecting a model according to the posterior. Therefore, if all models likely under the posterior have low surprisal, the aleatoric uncertainty in this setting is low. The epistemic uncertainty is the expected KL divergence between the the predictive distributions of candidate models and the Bayesian model average predictive distribution. Therefore, if all models likely under the posterior have low divergence of their predictive distribution to the Bayesian model average predictive distribution, the epistemic uncertainty in this setting is low. Setting (b): Uncertainty of a given, pre-selected model. We assume to have training data , an input , and a given, pre-selected model with parameters and predictive distribution p(|, ). Using the predictive distribution of the model, a class is selected based on , therefore there is uncertainty about which is selected. Furthermore, we assume that the true model with predictive distribution p(|, ^* ) and parameters ^* has generated the training data and will also generate the observed (real world) ^* from that we want to predict. The true model is only revealed later, e.g. via more samples or by receiving knowledge about ^*. Hence, there is uncertainty about the parameters of the true model. Revealing the true model is viewed as drawing a true model from all possible true models according to their agreement with . 
Note, to reveal the true model is not necessary in our framework but helpful for the intuition of drawing a true model. We neither consider uncertainty about the model class nor the modeling nor about the training data. In summary, there is uncertainty about drawing a class from the predictive distribution of the given, pre-selected model and uncertainty about drawing the true parameters of the model distribution. According to <cit.> and <cit.>, the aleatoric uncertainty is the variability of selecting a class via p(|, ). Using the entropy, the aleatoric uncertainty is [ p(|, ) ] . Also according to <cit.> and <cit.>, the epistemic uncertainty is the uncertainty about the parameters of the distribution, that is, a difference measure between and the true parameters ^*. We use as a measure for the epistemic uncertainty the Kullback-Leibler divergence: p(|, )p(|, ^* ) . The total uncertainty is the aleatoric uncertainty plus the epistemic uncertainty, which is the cross-entropy between p(|, ) and p(|, ^*): p(|, )p(|, ^* ) = [ p(|, ) ] + p(|, )p(|, ^* ) . However, we do not know the true parameters ^*. The posterior p(| ) gives us the likelihood of being the true parameters ^*. We assume that the true model is revealed later. Therefore we use the expected Kullback-Leibler divergence for the epistemic uncertainty: ∫_p(|, )p(|, ) p(| ) . Consequently, the total uncertainty is [ p(|, ) ] + ∫_p(|, )p(|, ) p(| ) . The total uncertainty can therefore be expressed by the expected cross-entropy as it was in setting (a) (see Eq. (<ref>)), but between p(|, ) and p(|, ): ∫_ p(|, )p(|, ) p(| ) = ∫_( [ p(|, ) ] + p(|, )p(|, )) p(| ) = [ p(|, ) ] + ∫_p(|, )p(|, ) p(| ) . §.§.§ Regression We follow <cit.> and measure the predictive uncertainty in a regression setting using the differential entropy [p(y |, )] = - ∫_ p(y |, ) log p(y |, ) y of the predictive distribution p(y |, ) of a stochastic model. In the following, we assume that we are modeling a Gaussian distribution, but other continuous probability distributions e.g. a Laplace lead to similar results. The model thus has to provide estimators for the mean μ(, ) and variance σ^2(, ) of the Gaussian. The predictive distribution is given by p(y |, ) = (2 π σ^2(, ))^- 1/2exp{-(y - μ(, ))^2/2 σ^2(, )} . The differential entropy of a Gaussian distribution is given by [ p(y |, ) ] = - ∫_ p(y |, ) log p(y |, ) y = 1/2 log(σ^2(, )) + log(2 π) + 1/2 . The KL divergence between two Gaussian distributions is given by p(y |, )p(y |, ) = - ∫_ p(y |, ) log( p(y |, )/p(y |, )) y = 1/2 log( σ^2(, )/σ^2(, )) + σ^2(, ) + ( μ(, ) - μ(, ) )^2/2 σ^2(, ) - 1/2 . Setting (a): Expected uncertainty when selecting a model. <cit.> consider the differential entropy of the Bayesian model average p(y |, ) = ∫_W p(y |, ) p(|), which is equal to the expected cross-entropy and can be decomposed into the expected differential entropy and Kullback-Leibler divergence. Therefore, the expected uncertainty when selecting a model is given by ∫_ p(y |, )p(y |, ) p(| ) = [ p(y |, ) ] = ∫_[ p(y |, ) ] p(| ) + ∫_p(y |, )p(y |, ) p(| ) = ∫_ 1/2 log(σ^2(, )) p(| ) + log(2 π) + ∫_p(y |, )p(y |, ) p(|) . Setting (b): Uncertainty of a given, pre-selected model. Synonymous to the classification setting, the uncertainty of a given, pre-selected model is given by ∫_ p(y |, )p(y |, ) p(| ) = [ p(|, ) ] + ∫_p(|, )p(|, ) p(| ) = 1/2 log(σ^2(, )) + log(2 π) + ∫_ 1/2 log( σ^2(, )/σ^2(, )) + σ^2(, ) + ( μ(, ) - μ(, ) )^2/2 σ^2(, ) p(| ) . Homoscedastic, Model Invariant Noise. 
We assume, that noise is homoscedastic for all inputs ∈, thus σ^2(, ) = σ^2(). Furthermore, most models in regression do not explicitly model the variance in their training objective. For such a model , we can estimate the variance on a validation dataset _val = {(_n, y_n)}|_n=1^N as σ̂^2() = 1/N ∑_n=1^N (y_n - μ(_n, ))^2 . If we assume that all reasonable models under the posterior will have similar variances (σ̂^2() ≈σ^2() for ∼ p(|)), the uncertainty of a prediction using the given, pre-selected model is given by ∫_ p(y |, )p(y |, ) p(| ) ≈1/2 log(σ̂^2()) + log(2 π) + ∫_ 1/2 log( σ̂^2()/σ̂^2()) + σ̂^2() + ( μ(, ) - μ(, ) )^2/2 σ̂^2() p(| ) = 1/2 log(σ̂^2()) + 1/σ̂^2() ∫_ ( μ(, ) - μ(, ) )^2 p(| ) + 1/2 + log(2 π) . §.§ Mixture Importance Sampling for Variance Reduction The epistemic uncertainties in Eq. (<ref>) and Eq. (<ref>) are expectations of KL divergences over the posterior. We have to approximate these integrals. If the posterior has different modes, a concentrated importance sampling function has a high variance of estimates, therefore converges very slowly <cit.>. Thus, we use mixture importance sampling (MIS) <cit.>. MIS uses a mixture model for sampling, instead of a unimodal model of standard importance sampling <cit.>. Multiple importance sampling <cit.> is similar to MIS and equal to it for balanced heuristics <cit.>. More details on these and similar methods can be found in <cit.>. MIS has been very successfully applied to estimate multimodal densities. For example, the evidence lower bound (ELBO) <cit.> has been improved by multiple importance sampling ELBO <cit.>. Using a mixture model should ensure that at least one of its components will locally match the shape of the integrand. Often, MIS iteratively enrich the sampling distribution by new modes <cit.>. In contrast to iterative enrichment, which finds modes by chance, we are able to explicitly search for posterior modes, where the integrand of the definition of epistemic uncertainty is large. For each of these modes, we define a component of the mixture from which we then sample. We have the huge advantage to have explicit expressions for the integrand. The integrand of the epistemic uncertainty in Eq. (<ref>) and Eq. (<ref>) has the form p(|, )p(|, ) p(| ) , where .. is a distance or divergence of distributions which is computed using the parameters that determine those distributions. The distance/divergence .. eliminates the aleatoric uncertainty, which is present in p(|, ) and p(|, ). Essentially, .. reduces distributions to functions of their parameters. Importance sampling is applied to estimate integrals of the form s = ∫_ f() p() = ∫_f() p()/q() q() , with integrand f(x) and probability distributions p() and q(), when it is easier to sample according to q() than p(). The estimator of Eq. (<ref>) when drawing _n according to q() is given by ŝ = 1/N∑_n=1^N f(_n) p(_n)/q(_n) . The asymptotic variance σ^2_s of importance sampling is given by (see e.g. <cit.>): σ^2_s = ∫_( f() p()/q() - s )^2 q() = ∫_( f() p()/q())^2 q() - s^2 , and its estimator when drawing _n from q() is given by σ̂^2_s = 1/N∑_n=1^N ( f(_n) p(_n)/q(_n) - s )^2 = 1/N∑_n=1^N ( f(_n) p(_n)/q(_n))^2 - s^2 . We observe, that the variance is determined by the term f() p()/q(), thus we want q() to be proportional to f() p(). Most importantly, q() should not be close to zero for large f() p(). To give an intuition about the severity of unmatched modes, we depict an educational example in Fig. <ref>. Now we plug in the form of the integrand given by Eq. 
(<ref>) into Eq. (<ref>), to calculate the expected divergence .. under the model posterior p(| ). This results in v = ∫_p(|, )p(|, ) p(| )/q() q() , with estimate v̂ = 1/N ∑_n=1^N p(|, )p(|, _n ) p(_n | )/q(_n) . The variance is given by σ^2_v = ∫_( p(|, )p(|, ) p(| )/q() - v )^2 q() = ∫_( p(|, )p(|, ) p(| )/q())^2 q() - v^2 . The estimate for the variance is given by σ̂^2_v = 1/N∑_n=1^N ( p(|, )p(|, _n ) p(_n | )/q(_n) - v )^2 = 1/N∑_n=1^N ( p(|, )p(|, _n ) p(_n | )/q(_n))^2 - v^2 , where _n is drawn according to q(). The asymptotic (N →∞) confidence intervals are given by lim_N →∞( - a σ_v/√(N) ≤ v̂ - v ≤ b σ_v/√(N)) = 1/√(2 π) ∫_-a^b exp ( - 1/2 t^2 ) t . Thus, v̂ converges with σ_v/√(N) to v. The asymptotic confidence interval is proofed in <cit.> and <cit.> using the Lindeberg–Lévy central limit theorem which ensures the asymptotic normality of the estimate v̂. The q() that minimizes the variance is q() = p(|, )p(|, ) p(| )/v . Thus we want to find a density q() that is proportional to p(|, )p(|, ) p(| ). Only approximating the posterior p(| ) as Deep Ensembles or MC dropout is insufficient to guarantee a low expected error, since the sampling variance cannot be bounded, as σ^2_v could get arbitrarily big if the distance is large but the probability under the sampling distribution is very small. For q() ∝ p(| ) and non-negative, unbounded, but continuous .., the variance σ^2_v given by Eq. (<ref>) cannot be bounded. For example, if .. is the KL-divergence and both p(|, ) and p(|, ) are Gaussians where the means μ(, ), μ(, ) and variances σ^2(, ), σ^2(, ) are estimates provided by the models, the KL is unbounded. The KL divergence between two Gaussian distributions is given by p(y |, )p(y |, ) = - ∫_ p(y |, ) log( p(y |, )/p(y |, )) y = 1/2 log( σ^2(, )/σ^2(, )) + σ^2(, ) + ( μ(, ) - μ(, ) )^2/2 σ^2(, ) - 1/2 . For σ^2(, ) going towards zero and a non-zero difference of the mean values, the KL-divergence can be arbitrarily large. Therefore, methods that only consider the posterior p(| ) cannot bound the variance σ^2_v if .. is unbounded and the parameters allow distributions which can make .. arbitrary large. § EXPERIMENTAL DETAILS AND FURTHER EXPERIMENTS Our code is publicly available at https://github.com/ml-jku/quamhttps://github.com/ml-jku/quam. §.§ Details on the Adversarial Model Search During the adversarial model search, we seek to maximize the KL divergence between the prediction of the reference model and adversarial models. For an example, see Fig. <ref>. We found that directly maximizing the KL divergence always leads to similar solutions to the optimization problem. Therefore, we maximized the likelihood of a new test point to be in each possible class. The optimization problem is very similar, considering the predictive distribution _ := p(|, ) of a reference model and the predictive distribution _ := p(|, ) of a model that can be updated. The KL divergence between those two is given by __ = ∑ p_log( p_/p_) = ∑ p_log( p_) - ∑ p_log( p_) = - [_] + __ . Only the cross-entropy between the predictive distributions of the reference model and the model to be updated plays a role for the optimization, since the entropy is constant w.r.t. . Thus, the optimization target is equivalent to the cross-entropy loss, except that _ is generally not one-hot encoded but an arbitrary categorical distribution. This also relates to targeted / untargeted adversarial attacks on the input. Targeted attacks try to maximize the output probability of a specific class. 
Untargeted attacks try to minimize the probability of the originally predicted class, by maximizing all other classes. We found that attacking individual classes to work better empirically, as directly maximizing the KL divergence always leads to similar solutions for different runs. Therefore, we conducted as many adversarial model searches for a new test point, as there are classes in the classification task. For regression, we add a small perturbation to the bias of the output linear layer. This is necessary to ensure a gradient in the first update step, as the model to optimize is initialized with the reference model. For regression, we perform the adversarial model search two times, as the output of an adversarial model could be higher or lower than the reference model if we assume a scalar output. We force, that the two adversarial model searches get higher or lower outputs than the reference model respectively. While the loss of the reference model on the training dataset _ref is calculated on the full training dataset (as it has to be done only once), we approximate _pen by randomly drawn mini-batches for each update step. Therefore, the boundary condition might not be satisfied on the full training set, even if the boundary condition is satisfied for the mini-batch estimate. As described in the main paper, the resulting model of each adversarial model search is used to define the location of a mixture component of a sampling distribution q() (Eq. (<ref>)). The epistemic uncertainty is estimated by Eq. (<ref>), using models sampled from this mixture distribution. The simplest choice of distributions for each mixture distribution is a delta distribution at the location of the model. While this performs well empirically, we discard a lot of information by not utilizing predictions of models obtained throughout the adversarial model search. The intermediate solutions of the adversarial model search allow to assess how easily models with highly divergent output distributions to the reference model can be found. Furthermore, the expected mean squared error (Eq. (<ref>)) decreases with 1/N with the number of samples N and the expected variance of the estimator (Eq. (<ref>)) decreases with 1/√(N). Therefore, using more samples is beneficial empirically, even though we potentially introduce a bias to the estimator. Note that we do not know the temperature of the posterior probability of those samples, therefore we set it as a hyperparameter when computing Eq. (<ref>). §.§ Simplex Example We sample the training dataset = {(_k, _k)}_k=1^K from three Gaussian distributions (21 datapoints from each Gaussian) at locations _1 = (-4, -2)^T, _2 = (4, -2)^T, _3 = (0, 2 √(2))^T and the same two-dimensional covariance with σ^2 = 1.5 on both entries of the diagonal and zero on the off-diagonals. The labels _k are one-hot encoded vectors, signifying which Gaussian the input _k was sampled from. The new test point we evaluate for is located at (-6, 2). To attain the likelihood for each position on the probability simplex, we train a two-layer fully connected neural network (with parameters ) with hidden size of 10 on this dataset. We minimize the combined loss = 1/K∑_k=1^K l(p(|_k, ), _k) + l(p(|, ), ) , where l is the cross-entropy loss function and is the desired categorical distribution for the output of the network. We report the likelihood on the training dataset upon convergence of the training procedure for on the probability simplex. 
To average over different initializations of and alleviate the influence of potentially bad local minima, we use the median over 20 independent runs to calculate the maximum. For all methods, we utilize the same two-layer fully connected neural network with hidden size of 10; for MC dropout we additionally added dropout with dropout probability 0.2 after every intermediate layer. We trained 50 networks for the Deep Ensemble results. For MC dropout we sampled output distributions using 1000 forward passes. Fig. <ref> (a) shows models sampled using HMC, which is widely regarded as the best approximation to the ground truth for predictive uncertainty estimation. Furthermore, Fig. <ref> (b) shows models obtained by executing the adversarial model search for the given training dataset and test point depicted in Fig. <ref> (c). HMC also provides models that put more probability mass on the orange class. Those are missed by Deep Ensembles and MC dropout (see Fig. <ref> (a) and (b)). The adversarial model search used by QUAM helps to identify those regions. §.§ Epistemic Uncertainty on Synthetic Dataset We create the two-moons dataset using the implementation of <cit.>. All experiments were performed on a three-layer fully connected neural network with hidden size 100 and ReLU activations. For MC dropout, dropout with dropout probability of 0.2 was applied after the intermediate layers. We assume to have a trained reference model of this architecture. Results of the same runs as in the main paper, but calculated for the epistemic uncertainty in setting (b) (see Eq. (<ref>)) are depicted in Fig. <ref>. Again, QUAM matches the ground truth best. Furthermore, we conducted experiments on a synthetic regression dataset, where the input feature x is drawn randomly between [-π , π] and the target is y = sin(x) + ϵ, with ϵ∼(0, 0.1). The results are depicted in Fig. <ref>. As for the classification results, the estimate of QUAM is closest to the ground truth provided by HMC. The HMC implementation of <cit.> was used to obtain the ground truth epistemic uncertainties. For the Laplace approximation, we used the implementation of <cit.>. For SG-MCMC we used the python package of <cit.>. §.§ Epistemic Uncertainty on Vision Datasets Several vision datasets and their corresponding OOD datasets are commonly used for benchmarking predictive uncertainty quantification in the literature, e.g. in <cit.>. Our experiments focused on two of those: MNIST <cit.> and its OOD derivatives as the most basic benchmark and ImageNet1K <cit.> to demonstrate our method's ability to perform on a larger scale. Four types of experiments were performed: (i) OOD detection (ii) adversarial example detection, (iii) misclassification detection and (iv) selective prediction. Our experiments on adversarial example detection did not utilize a specific adversarial attack on the input images, but natural adversarial examples <cit.>, which are images from the ID classes, but wrongly classified by standard ImageNet classifiers. Misclassification detection and selective prediction was only performed for Imagenet1K, since MNIST classifiers easily reach accuracies of 99% on the test set, thus hardly misclassifying any samples. In all cases except selective prediciton, we measured AUROC, FPR at TPR of 95% and AUPR of classifying ID vs. OOD, non-adversarial vs. adversarial and correctly classified vs. misclassified samples (on ID test set), using the epistemic uncertainty estimate provided by the different methods. 
For selective prediction, we utilized the epistemic uncertainty estimate to select a subset of samples on the ID test set. §.§.§ MNIST OOD detection experiments were performed on MNIST with FashionMNIST (FMNIST) <cit.>, EMNIST <cit.>, KMNIST <cit.> and OMNIGLOT <cit.> as OOD datasets. In case of EMNIST, we only used the ”letters” subset, thus excluding classes overlapping with MNIST (digits). We used the MNIST (test set) vs FMNIST (train set) OOD detection task to tune hyperparameters for all methods. The evaluation was performed using the complete test sets of the above-mentioned datasets (n=10000). For each seed, a separate set of Deep Ensembles was trained. Ensembles with the size of 10 were found to perform best. MC dropout was used with a number of samples set to 2048. This hyperparameter setting was found to perform well. A higher sampling size would increase the performance marginally while increasing the computational load. Noteworthy is the fact, that in this setting the computational requirements of MC dropout surpassed those of QUAM. Laplace approximation was performed only for the last layer, due to the computational demand making it infeasible on the full network with our computational capacities. SG-HMC was performed on the full network using the Python package from <cit.>. Parameters were set in accordance with those of the original authors <cit.>. For QUAM, the initial penalty parameter found by tuning was c_0 = 6, which was exponentially increased (c_t+1 = η c_t) with η = 2 every 14 gradient steps for a total of two epochs through the training dataset. Gradient steps were performed using Adam <cit.> with a learning rate of 5.e-3 and weight decay of 1.e-3, chosen equivalent to the original training parameters of the model. A temperature of 1.e-3 was used for posterior probabilities when calculating Eq. (<ref>). Detailed results and additional metrics and replicates of the experiments can be found in Tab. <ref>. Experiments were performed three times with seeds: {42, 142, 242} to provide confidence intervals. Histograms of the scores on the ID dataset and the OOD datasets for different methods are depicted in Fig. <ref>. §.§.§ ImageNet For ImageNet1K <cit.>, OOD detection experiments were performed with ImageNet-O <cit.>, adversarial example detection experiments with ImageNet-A <cit.>, and misclassification detection as well as selective prediction experiments on the official validation set of ImageNet1K. For each experiment, we utilized a pre-trained EfficientNet <cit.> architecture with 21.5 million trainable weights available through PyTorch <cit.>, achieving a top-1 accuracy of 84.2% as well as a top-5 accuracy of 96.9%. cSG-HMC was performed on the last layer using the best hyperparameters that resulted from a hyperparameter search around the ones suggested by the original authors <cit.>. The Laplace approximation with the implementation of <cit.> was not feasible to compute for this problem on our hardware, even only for the last layer. Similarly to the experiments in section <ref>, we compare against a Deep Ensemble consisting of 10 pre-trained EfficientNet architectures ranging from 5.3 million to 66.3 million trainable weights (DE (all)). Also, we retrained the last layer of 10 ensemble members (DE (LL)) given the same base network. We also compare against MC dropout used with 2048 samples with a dropout probability of 20%. The EfficientNet architectures utilize dropout only before the last layer. 
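For reference, the MC dropout scores used as a baseline can be computed with a few stochastic forward passes; the sketch below is illustrative and keeps batch normalization frozen by activating only the dropout modules at test time. It returns both the mutual-information score of setting (a) and the expected KL divergence to the reference prediction of setting (b).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_epistemic(model, x, n_samples=2048, eps=1e-12):
    """Epistemic uncertainty from MC dropout (illustrative sketch)."""
    model.eval()
    p_ref = F.softmax(model(x), dim=-1)                   # deterministic reference prediction
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                                     # activate only dropout at test time
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_p = probs.mean(0)                                # Bayesian model average prediction
    total = -(mean_p * mean_p.clamp_min(eps).log()).sum(-1)            # entropy of the average
    aleatoric = -(probs * probs.clamp_min(eps).log()).sum(-1).mean(0)  # average entropy
    mutual_info = total - aleatoric                       # epistemic term of setting (a)
    expected_kl = (p_ref * (p_ref.clamp_min(eps).log()
                            - probs.clamp_min(eps).log())).sum(-1).mean(0)  # setting (b)
    model.eval()                                          # restore inference mode
    return mutual_info, expected_kl
```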
The adversarial model search for QUAM was performed on the last layer of the EfficientNet, which has 1.3 million trainable parameters. To enhance the computational efficiency, the output of the second-to-last layer was computed once for all samples, and this output was subsequently used as input for the final layer when performing the adversarial model search. We set c_0 to 1 and updated exponentially at every of the 256 update steps. Weight decay was set to 1.e-4. Two hyperparameters have jointly been optimized on ImageNet-O and ImageNet-A using a small grid search, with learning rate α∈{5.e-3, 1.e-3, 5.e-4, 1.e-4} and the exponential schedule update constant η∈{1.15, 1.01, 1.005, 1.001}. The hyperparameters α = 1.e-3 and η = 1.01 resulted in the overall highest performance and have thus jointly been used for each of the three experiments. This implies that c_0 increases by 1% after each update step. We additionally searched for the best temperature and the best number of update steps for each experiment separately. The best temperature was identified as 0.05, 0.005, and 0.0005, while the best number of update steps was identified as 50, 100, and 100 for ImageNet-O OOD detection, ImageNet-A adversarial example detection, and ImageNet1K misclassification detection, respectively. Selective prediction was performed using the same hyperparameters as misclassification detection. We observed that the adversarial model search is relatively stable with respect to these hyperparameters. The detailed results on various metrics and replicates of the experiments can be found in <ref>. Histograms of the scores on the ID dataset and the OOD dataset, the adversarial example dataset and the correctly and incorrectly classified samples are depicted in Fig. <ref> for all methods. ROC curves, as well as accuracy over retained sample curves are depicted in Fig. <ref>. To provide confidence intervals, we performed all experiments on three distinct dataset splits of the ID datasets, matching the number of OOD samples. Therefore we used three times 2000 ID samples for Imagenet-O and three times 7000 ID samples for Imagenet-A and misclassification detection as well as selective prediction. §.§ Comparing Mechanistic Similarity of Deep Ensembles vs. Adversarial Models The experiments were performed on MNIST, EMNIST, and KMNIST test datasets, using 512 images of each using Deep Ensembles, and the reference model , trained on MNIST. Results are depicted in Fig. <ref>. For each image and each ensemble member, gradients were integrated over 64 steps from 64 different random normal sampled baselines for extra robustness <cit.>. Since the procedure was also performed on the OOD sets as well as our general focus on uncertainty estimation, no true labels were used for the gradient computation. Instead, predictions of ensemble members for which the attributions were computed were used as targets. Principal Component Analysis (PCA) was performed for the attributions of each image separately, where for each pixel the attributions from different ensemble members were treated as features. The ratios of explained variance, which are normalized to sum up to one, are collected from each component. If all ensemble members would utilize mutually exclusive features for their prediction, all components would be weighted equally, leading to a straight line in the plots in the top row in Fig. <ref>. Comparatively high values of the first principal component to the other components in the top row plots in Fig. 
<ref> indicate low diversity in features used by Deep Ensembles. The procedure was performed similarly for an ensemble of adversarial models. The main difference was that for each image an ensemble produced as a result of an adversarial model search on that specific image was used. We observe, that ensembles of adversarial models utilize more dissimilar features, indicated by the decreased variance contribution of the first principal component. This is especially strong for ID data, but also noticeable for OOD data. §.§ Prediction Space Similarity of Deep Ensembles and Adversarial Models In the following, ensembles members and adversarial models are analyzed in prediction space. We used the same Deep Ensembles as the one trained on MNIST for the OOD detection task described in Sec. <ref>. Also, 10 adversarial models were retrieved from the reference model and a single OOD sample (KMNIST), following the same procedure as described in Sec. <ref>. For the analysis, PCA was applied to the flattened softmax output vectors of each of the 20 models applied to ID validation data. The resulting points represent the variance of the model’s predictions across different principal components <cit.>. The results in Fig. <ref> show, that the convex hull of blue points representing adversarial models, in general, is much bigger than the convex hull of orange points representing ensemble members across the first four principal components, which explain 99.99% of the variance in prediction space. This implies that even though adversarial models achieve similar accuracy as Deep Ensembles on the validation set, they are capable of capturing more diversity in prediction space. §.§ Computational Expenses Experiments on Synthetic Datasets The example in Sec. <ref> was computed within half an hour on a GTX 1080 Ti. Experiments on synthetic datasets shown in Sec. <ref> were also performed on a single GTX 1080 Ti. Note that the HMC baseline took approximately 14 hours on 36 CPU cores for the classification task. All other methods except QUAM finish within minutes. QUAM scales with the number of test samples. Under the utilized parameters and 6400 test samples, QUAM computation took approximately 6 hours on a single GPU and under one hour for the regression task, where the number of test points is much smaller. Experiments on Vision Datasets Computational Requirements for the vision domain experiments depend a lot on the exact utilization of the baseline methods. While Deep Ensembles can take a long time to train, depending on the ensemble size, we utilized either pre-trained networks for ensembling or only trained last layers, which significantly reduces the runtime. Noteworthy, MC-dropout can result in extremely high runtimes depending on the number of forward passes and depending on the realizable batch size for inputs. The same holds for SG-HMC. Executing the QUAM experiments on MNIST (Sec. <ref>) took a grand total of around 120 GPU-hours on a variety of mostly older generation and low-power GPUs (P40, Titan V, T4), corresponding to roughly 4 GPU-seconds per sample. Executing the experiments on ImageNet (Sec. <ref>) took about 100 GPU-hours on a mix of A100 and A40 GPUs, corresponding to around 45 GPU-seconds per sample. The experiments presented in Sec.<ref> and <ref> took around 2 hours each on 4 GTX 1080 Ti.
http://arxiv.org/abs/2307.00548v2
20230702115537
Robust Target Localization in 2D: A Value-at-Risk Approach
[ "João Domingos", "João Xavier" ]
eess.SP
[ "eess.SP", "math.OC" ]
Robust Target Localization in 2D: A Value-at-Risk Approach
João Domingos and João Xavier
oliveira.domingos@ist.utl.pt, jxavier@isr.tecnico.ulisboa.pt. The authors are with the Instituto Sistemas e Robótica and Instituto Superior Técnico in Portugal. This work has been supported by FCT, Fundação para a Ciência e a Tecnologia, under the projects PD/BD/150631/2020 and LARSyS - FCT Project UIDB/50009/2020.
August 1, 2023
This paper considers the problem of locating a two-dimensional target from range measurements containing outliers. Assuming that the number of outliers is known, we formulate the problem of minimizing inlier losses while ignoring outliers. This leads to a combinatorial, non-convex, non-smooth problem involving the percentile function. Using the framework of risk analysis from Rockafellar et al., we start by interpreting this formulation as a Value-at-Risk (VaR) problem from portfolio optimization. To the best of our knowledge, this is the first time that a localization problem has been formulated using risk analysis theory. To study the VaR formulation, we design a majorizer set that contains any solution of a general percentile problem. This set is useful because, when applied to a localization scenario in 2D, it allows us to majorize the solution set in terms of singletons, circumferences, ellipses and hyperbolas. Using known parametrizations of these curves, we propose a grid method for the original non-convex problem. We thus reduce the task of optimizing the VaR objective to that of efficiently sampling the proposed majorizer set. We compare our algorithm with four benchmarks in target localization. Numerical simulations show that our method is fast while, on average, improving the accuracy of the best benchmarks by at least 100 m in a 1 km^2 area.
Target Localization, Robust Estimation, Risk Measure, Value-at-Risk, First Order Optimality Conditions, Percentile Optimization, Plane geometry, Conic sections.
§ INTRODUCTION
This paper considers the problem of robust target estimation, that is, estimating the position of a target x∈𝐑^2 from range measurements, some of which may be outliers.
Concretely, we consider the additive model y_m = ‖x-a_m‖ + u_m, m=1,…,M, with measurements y_m∈𝐑, anchors a_m∈𝐑^2 and additive uncertainties u_m∈𝐑. Several papers consider estimation problems where the uncertainty vector u:=(u_1,…,u_M) lies in a known bounded region which reflects the support of inlier error distributions <cit.>, <cit.>, <cit.>. Let us note, however, that in practice model (<ref>) can be affected by outlier uncertainties capable of biasing non-robust estimates by alarming amounts (see the numerics of section <ref>). In this paper we partition the set of M measurements into inliers ℐ and outliers 𝒪, i.e., {1,…,M}=ℐ∪𝒪, ℐ∩𝒪=∅. In broad terms, measurement y_m is an outlier if it deviates significantly from the assumed data model ‖x-a_m‖, that is, the difference y_m - ‖x-a_m‖ is large in absolute value. In this paper we assume that the number of outlier measurements L∈{0,…,M-1} is known. Let {π_0,…,π_M-1} denote the permutation of {1,…,M} that orders the absolute deviations | y_m - ‖x-a_m‖ | in descending order, that is, | y_π_0 - ‖x-a_π_0‖ | ≥ … ≥ | y_π_M-1 - ‖x-a_π_M-1‖ |. Given L, the outlier set 𝒪 corresponds to the set of measurements with the L largest absolute deviations, so 𝒪={π_0,…,π_L-1} with 𝒪=∅ if L=0. Note that, in general, the permutation {π_0,…,π_M-1} depends on the target x and, hence, is unknown. For simplicity of notation we omit the dependency of the indices {π_0,…,π_M-1} on x. From now on we assume that only the number of outliers L is known, not the indices {π_0,…,π_M-1} of the outlier measurements.
*Pointwise range-based target localization The literature around range-based target localization is vast when considering point estimates of the position of a target from range measurements to precisely known locations, termed anchors. The seminal paper by Beck <cit.> was followed by many, notably <cit.>, which also models process noise as i.i.d. Gaussian random variables. The work by Oguz-Ekim et al. <cit.> considers outlier measurements by assuming measurement errors with a Laplace distribution. Other relevant literature on range-based target localization includes <cit.>.
*Robust network localization The problem of robust network localization, where there are many unlocalized nodes with access to a few noisy pairwise distance measurements affected by outliers, has been addressed by a few works. One of them relies on identifying outliers from regular data and discarding them, as in Ihler et al. <cit.>, which formulates the problem using a probabilistic graphical model to encode the data distribution. Vaghefi et al. <cit.> proposed a semidefinite relaxation of a model considering unknown, unbounded outlier errors for the cooperative localization scenario. Forero et al. <cit.> presented a robust multidimensional scaling based on regularized least squares, where the robust regularization term was relaxed to a convex function. Korkmaz and van der Veen <cit.> use the Huber loss composed with a discrepancy function between measurements and estimated distances, in order to achieve robustness to outliers. Recently, Soares and Gomes <cit.> have proposed a distributed Huber-based point estimator for range measurements corrupted with Gaussian measurement noise and non-Gaussian, long-tail noise, modelled as Laplace or Cauchy noise. The work in <cit.> uses range-only data to initialize an extended Kalman filter for sensor fusion.
*Robust range-based localization The space of range-based robust target localization was explored by several authors, for example the work in <cit.>, where a robust M-estimator was applied to target localization in a bootstrapping scheme in <cit.>, motivated by outlier measurements generated by non-line-of-sight (NLOS) propagation of signals, where bouncing on obstacles causes large delays in the time of flight, and thus a large error in the estimated distance. Bootstrapping with the Huber M-estimator was already used for centralized target localization in [6]. Wang and colleagues <cit.> address the target localization problem by relaxing the NLOS-aware problem to both a semidefinite and a second-order cone problem. Later, Tomic et al. <cit.> modeled an additive Gaussian noise term, plus a bounded NLOS bias as a nuisance parameter to be jointly estimated, and formalized the problem as a trust region subproblem, solved by bisection. Still, the model considers an infinite support for the errors and the solution is a pointwise estimate of the target position. The recent work of Chen and colleagues <cit.> puts forward a new model for LOS/NLOS based on a multiplicative transformation of the additive data model, considering Exponential noise. The authors argue that this type of noise is routinely found in dense urban areas. Other relevant works include <cit.>.
*Target localization with bounded noise Another line of research assumes unknown and bounded noise, which, in practice, could be a more reasonable model, considering that setup and hardware specifications are, in general, known. The authors in <cit.> consider bounded errors with unknown distribution in range measurements, and compute a point-wise estimate by minimizing the worst-case position estimation error.
*Delimiting the set of all possible solutions for an estimation problem considering bounded noise A few papers examine the important problem of, given a data model, determining the region where all possible estimates compatible with observed data may lie. Eldar et al. <cit.> develop a convex solution using Lagrange duality to a data model with linear dependency relative to the unknown parameter and added Gaussian noise.
§.§ Contributions
A preliminary version of this work also used linear fractional representations to approach the problem of target localization <cit.>. This work comprehensively expands the earlier version along four main directions:
* Scalability: In <cit.> our LFR modelling leads to a flattening map (explained ahead) L_𝒰_[2M+M^2] that has an input dimension 𝒪(M^3) and output dimension 𝒪(M^2), with M denoting the number of anchors. In this paper we achieve a flattening map L_𝒰_[2M] with both lower input 𝒪(M^2) and output 𝒪(M) dimensions. This decrease in dimensions is key for scalability since, for a fixed dimension d, our approach solves 2d semidefinite programs, each with 𝒪(M^2) variables (input dimension of L_𝒰_[2M]) and a linear matrix inequality (LMI) in[For an arbitrary d, 𝐒^d denotes the set of d × d symmetric matrices. ] 𝐒^2M (output dimension of L_𝒰_[2M]). The modeling approach of <cit.> leads to larger semidefinite programs with 𝒪(M^3) variables and an LMI of dimension 𝒪(M^2). This effective decrease in complexity is achieved by a careful manipulation of LFR modelling tools (detailed in Appendix <ref>).
* Non-negativeness assumption: Unlike <cit.>, in this paper we introduce set 𝒳_2 which explicitly copes with the assumption that the measurements generated according to the data model (<ref>) are non-negative. There is a simple statistical interpretation of set 𝒳_2, which is responsible for the aforementioned assumption. Let x̂ denote any estimate of the target position x. If x̂∈𝒳_2 then our estimate x̂ is coherent in the following sense: if we use data model (<ref>) with x=x̂ then any measurement is non-negative with probability one, regardless of the underlying noise density. So set 𝒳_2 forces estimates x̂ to respect the non-negativeness assumption (almost surely).
* Statistical interpretation: We give a statistical interpretation of the problem in terms of a frequentist formalism. Under mild assumptions, we show that the set of target positions 𝒳 contains any Maximum Likelihood (ML) estimate of the target position x regardless of the underlying noise density. So, in simple terms, 𝒳 majorizes the set of point estimates that are plausible under a frequentist formalism and that respect the non-negativeness assumption. Furthermore, this majorization is actually tight: for any point estimate x̂∈𝒳 there exists a noise density such that x̂ respects the non-negativeness assumption and x̂ is optimal for the ML criterion.
* Numerical validation: We give extensive numerical experiments to validate our approach. Namely, we compare our method with a benchmark convex relaxation and verify that our method tends to outperform or match the benchmark method. On average, our localization approach improves the benchmark method by 20% (in terms of size) when the amount of measurement noise is mild (in high-noise regimes, both methods deliver similar approximations). Furthermore, our approach tends to be actually tight since the size of the computed regions is, in median terms, always within a 3% margin of a lower bound computed via a naive grid search method.
§.§ Paper Organization In section <ref> we formalize the problem of computing outer approximations of the set 𝒳 of possible target positions. Section <ref> provides some background on Linear Fractional Representations (LFRs). Section <ref> details our approach for computing an outer approximation of 𝒳 via LFRs. In section <ref> we outline a benchmark convex relaxation. Section <ref> provides a statistical interpretation of set 𝒳 in terms of a frequentist formalism when the noise densities are unknown. Section <ref> provides numerical evidence that our method tends to outperform the benchmark for mild noise. Section <ref> concludes the paper.
§.§ Problem Formulation We consider the problem of estimating the target position x by knowing, in advance, that L measurements deviate from model (<ref>). In concrete, we want an estimate x̂ such that the model deviations {| y_m -||x̂-a_m|| |}_m=1^M are as low as possible for the inlier measurements m∈ℐ while discarding/ignoring the outlier measurements m∈𝒪. We compute x̂ by minimizing the percentile objective x̂∈min_x p_L{ | y_m -||x-a_m|| | }, with p_L:𝐑^M↦𝐑 the L-th percentile function. The percentile function p_L(z) returns the largest element of z∈𝐑^M after discarding its L largest entries. So, p_0(z)=max(z_i) and p_M-1(z)=min(z_i). In compact notation, p_L(z)=z_π_L with {π_0,…,π_M-1} the permutation of {1,…,M} that orders the elements of z in descending order, that is z_π_0≥…≥ z_π_L = p_L(z)≥ z_π_L+1≥…≥ z_π_M-1.
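The percentile function p_L and the objective of (<ref>) are simple to evaluate; the following sketch (illustrative code of ours, with hypothetical helper names) discards the L largest deviations and returns the largest remaining one:

```python
import numpy as np

def percentile_L(z, L):
    """p_L(z): largest entry of z after discarding its L largest entries,
    so p_0(z) = max(z) and p_{M-1}(z) = min(z)."""
    z = np.sort(np.asarray(z, dtype=float))[::-1]   # z_{pi_0} >= ... >= z_{pi_{M-1}}
    return z[L]                                     # p_L(z) = z_{pi_L}

def percentile_objective(x, y, anchors, L):
    """Objective of the robust estimator: p_L of the absolute model deviations."""
    dev = np.abs(y - np.linalg.norm(x - anchors, axis=1))
    return percentile_L(dev, L)

# sanity checks on the two extreme cases
z = [3.0, 1.0, 2.0]
assert percentile_L(z, 0) == 3.0 and percentile_L(z, 2) == 1.0
```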
In general, problem (<ref>) is difficult to solve because both the percentile function p_L and the deviation mapping x↦ | y_m -x-a_m | are non-convex and non-differentiable. §.§ Literature Review Target Localization Most literature on target localization considers a least squares (LS) approach that aims to minimize the sum of residuals between the measurement model and fixed range measurements <cit.>. The popularity of this formulation stems from its connections with maximum likelihood estimation in the presence of Gaussian noise <cit.>. Although statistical significant, the least squares formulation is non-convex and, hence, difficult to solve exactly in general. Over the past twenty years there have been several ideas to approach this inherently challenging problem. From our perspective, most of these ideas cluster into three algorithmic families: (1) classical algorithms like gradient descent <cit.> and Newton method <cit.>; (2) semidefinite (SDP) relaxations that exploit the quadratic nature of the problem <cit.>; (3) trust region approaches that reformulate the problem as a quadratically constrained quadratic program <cit.>. On a more theoretical front, we highlight the recent work of Pun et al. <cit.> which shows that the LS objective, although non-convex, is locally strongly convex at its global minima. This property is relevant for first order methods since it enables global convergence for “good enough” initializations. We latter compare our approach with the gradient method of <cit.>.   It is well known, however, that traditional least squares formulations are highly sensitive to outlier measurements <cit.>. In localization, outliers can come from non-line-of-sight (NLOS) propagation conditions, typically due to indoor or dense environments <cit.>. Given this limitation, there have been several approaches towards the robustification of the original problem. Earlier work by Sun et al. <cit.> focuses on a bootstrapping scheme and Huber M-estimation. Most recently, Soares et al. <cit.> also consider Huber estimation for soft rejection of outliers. The employed Huber loss is non-convex but the authors derive tight convex underestimators that lead to tight convex relaxations. The work of Zaeemzadeh et al. <cit.> achieves robustness by considering Geman-Mclcure updates to re-fit the observed measurements in a iterative fashion. The problem formulation is again routed in M-estimation and, in each iteration, the employed algorithm uses the trust region results of <cit.> to update the position of the target. We compare our method with both M-estimates <cit.>. A different viewpoint is to account for NLOS biases explicitly in the original LS formulation. In  <cit.> Vaghefi et al. propose a semidefinite relaxation that simultaneously estimate the targets position and the NLOS biases of the model. More recently, Wang et al. <cit.> consider a LS objective for worst case NLOS biases, assuming known error bounds. The problem is relaxed into a convex SDP by means of the S-lemma. Value-at-Risk optimization The proposed percentile formulation is well-known in portfolio optimization. In concrete, problem (<ref>) is known as a value-at-risk (VaR) problem in the context of risk analysis <cit.>. This connection is formalized latter in section <ref> after we review some background on this field. Here, we describe the algorithmic literature for this optimization class, which is rooted in economic theory <cit.>. 
From our perspective, there have been four main ideas to approach VaR problems like the one in (<ref>): (1) conditional value-at-risk (CVaR) methods based on the seminal work of Rockafellar et al. <cit.>; (2) difference-of-convex (DC) approaches that decompose the percentile function as a difference of two convex functions <cit.>; (3) integer programming schemes that explore the combinatorial nature of the percentile objective <cit.> and (4) smoothing techniques that filter out local, erratic modes of the VaR objective <cit.>. The CVaR approach is, perhaps, the most popular since, for convex losses, it leads to a convex problem that upper bounds the VaR objective <cit.>. So decisions with a low CVaR will implicitly also have a low VaR. In the context of portfolio optimization the resulting CVaR problem is a linear program <cit.>, which can be solved efficiently by standard convex methods <cit.>. Furthermore, linearity actually enables efficient iterative schemes that try to improve the original VaR objective, per iteration <cit.>. Let us note, however, that we found no method on the VaR literature that can be used to approach problem (<ref>). The main issue is that, in our case, the percentile function is composed with non-convex maps x↦|y_m-||x-a_m|||. All methods described so far assume that the percentile function is composed with linear <cit.> or general convex maps <cit.>. §.§ Contributions We claim three main contributions: * Theoretical Analysis: We present a novel majorization inequality for an optimization class that we denote as selection problems (see Section <ref>). This optimization class is rich enough to include percentile problems of the form (<ref>). Our analysis (Theorem <ref>) produces a first order majorizer on this challenging optimization class which, when applied to formulation (<ref>), actually leads to a simple, yet highly effective localization algorithm – RTPE; * Statistical interpretation: We interpret problem (<ref>) under the framework of risk analysis <cit.>. We show that formulation (<ref>) is actually minimizing the Value-At-Risk (VaR) measure for a risk problem with outliers ( Section <ref>); * Numerical validation: We validate our approach by considering a setup with ten anchors (M=10) in a square area of 1 Km^2. We compare algorithm <ref> with four state-of-the art benchmarks in target localization <cit.>, <cit.>, <cit.>, <cit.>.   When we have a reasonable number of outlier measurements (say L=3,4,5) and their deviation from model (<ref>) is moderate to high (1 Km to 2.5 Km) our method improves the accuracy of the best benchmark by at least 100m in a1 Km^2area. For low model deviations (500 m to 750 m), our approach still improves the best benchmark but with lower accuracy gains ( ≈ 10 meters). §.§ Paper Organization The remaining of the paper is organized as follows. Section <ref> shows that problem (<ref>) admits a natural interpretation as minimizing the value-at-risk (VaR) risk measure from portfolio optimization. Section (<ref>) proves a majorization bound for an optimization class that includes the percentile problem (<ref>). This majorization bound is applied to our problem in section <ref>; in this case the derived bound is computationally tractable in the sense that it corresponds to the union of easily parametrized regions in 2D space. Section <ref> deals with the unboundedness of some of these regions, in order to construct a fully implementable algorithm for problem (<ref>) – see Algorithm <ref>. 
Section <ref> provides numerical evidence that, on average, our method tends to outperforms four benchmarks in target localization. Section <ref> concludes the paper. § ROBUST TARGET ESTIMATION AS A RISK ANALYSIS PROBLEM The percentile function p_L is of primary importance in risk analysis <cit.>. For completeness, we review the theoretical setup of risk analysis <cit.> that introduces the Value-at-Risk (VaR) risk measure. Afterwards we show how the robust estimation scheme of (<ref>) is actually optimizing the Value-at-Risk (VaR) measure for an underlying stochastic risk problem. §.§ Value-At-Risk Generically, risk analysis considers the problem of decision making under uncertainty and different risk measures. We are given a loss function f(x,Z)∈𝐑 which depends on a decision vector x∈𝐑^n_x and on a random vector Z∈𝐑^n_Z. The primary goal of risk analysis is the design of a decision vector x such that the loss function f(x,Z) is typically low. We use the keyword typically because, for each decision vector x, the loss function f(x,Z) is a random variable hence the mapping x↦ f(x,Z) is non deterministic. One standard risk measure <cit.> is the so called Value-at-Risk (VaR). In simple terms, the β-VaR of a decision vector x is the maximum loss f(x,Z) that can be incurred with probability at least β. In concrete β-VaR, denoted as u_β(x), is defined as u_β(x):=inf{u ∈𝐑: ℙ (f(x,Z)≤ u) ≥β}. Ideally we would like to find the decision vector x that minimizes u_β(x) for a fixed confidence level β∈[0,1] so x̂∈min_x u_β(x). In general, problem (<ref>) cannot be exactly solved either because the distribution of the random vector Z may be unknown or because the objective u_β(x) may be computationally intractable. A common fix <cit.> is to collect independent and identically distributed (i.i.d.) samples {z_1,…,z_M} of Z and consider the empirical distribution u↦1/M∑_m=1^M 1_{f(x,z_m)≤ u} with (x,u)↦1_{f(x,z_m)≤ u} the indicator function of the event {f(x,z_m)≤ u}. So 1_{f(x,z_m)≤ u}=1 if variables x and u are such that f(x,z_m)≤ u and 1_{f(x,z_m)≤ u}=0 otherwise. By using the empirical distribution (<ref>) on definition (<ref>) we get the approximate β-VaR x↦inf{u: 1/M∑_m=1^M 1_{f(x,z_m)≤ u}≥β}. Assume now, for simplicity, that β is a multiple of of 1/M. In this case the infimum in (<ref>) is exactly the (1-β) M percentile of the observations f(x,z_m), that is inf{u: 1/M∑_m=1^M 1_{f(x,z_m)≤ u}≥β}=p_(1-β) M{ f(x,z_m) } Minimizing the approximate objective (<ref>) leads to min_x p_(1-β) M{f(x,z_m) }. So, as seen, a sampled VaR problem is equivalent to a percentile formulation where we optimize the worst case loss after discarding the (1-β) M highest losses f(x,z_m). §.§ Risk Analysis Interpretation of the Robust Target Estimator Considering measurements (<ref>), we start by modelling both the uncertainties u_m and anchor positions a_m as realization of two random objects: an additive uncertainty U in 𝐑 and a anchor position vector A in 𝐑^2. Here we assume that the anchor positions a_m are observed realizations of A, while the additive uncertainties u_m are unobserved samples of U. Given U and A we interpret (<ref>) as a sampled version of Y=||x-A||+U. Let Z=(Y, A) denote the random vector of observed quantities. Both these quantities are observable since, in our model, we estimate the position of target x from measurements y_m of Y and anchor positions a_m of A. Given Z, we estimate x by considering the deviation loss between the true model ||A-x|| and measurements Y f(x;Z):=| A-x-Y | , Z=(Y, A). 
So we are considering a risk analysis problem where the measurements Y and anchors A are observed random objects with an underlying joint distribution and we want to design/estimate a target position x such that model (<ref>) is accurate, that is, the additive uncertainty U is small according to (<ref>).   As explained in section <ref>, in a robust estimation problem the measurements Y tend to be affect, most of the time, by some form of inlier noise but, sporadically, there exist outlier measurements in model (<ref>). We model this behaviour by assuming that, in M total measurements, L of them are outlier. Given L, it is reasonable to choose β, the confidence level of the risk analysis problem, equal to 1-L/M, the ratio of inlier measurements. So, in a risk analysis context, we can compute the decision vector x by minimizing the β-VaR risk measure with confidence level β=1-L/M. This boils down to solving problem (<ref>) with loss (<ref>). Given samples (y_m,a_m) of (Y,Z) we use approximation (<ref>) to get x̂∈min_x p_L { | y_m -x-a_m | }. So the problem of estimating the target position x, through (<ref>), actually corresponds to minimizing the empirical VaR risk measure, from the observed data pairs (a_m,y_m). § A SIMPLE MAJORIZING ALGORITHM In this section we detail our algorithm for solving problem (<ref>). Our method is simple and consists in majorizing the solution set of problem (<ref>), that is, we design a set Φ⊆𝐑^2 that contains any minimizer of (<ref>), so min_x p_L{ |y_m- x-a_m | }⊆Φ. Our set Φ is computationally tractable, being the union of easily parameterized regions in 2D space. In concrete, set Φ is comprised of singletons, circumferences, ellipses and bounded half-hyperbolas. Given that all these regions can be efficiently parameterized our algorithm is simple: we create a grid over Φ and choose the best grid point for the objective (<ref>) – see algorithm <ref> in section <ref>. Our empirical results show that a fine grid is able to outperform several benchmarks in robust localization, while sharing similar computing times. §.§ Designing the Majorizing Set Φ In this section we construct a non-trivial set Φ such that (<ref>) holds. In fact our technique holds for a broader class of problems that we denote as selection problems. Given a dimension D, let {f_m}_m=1^M denote a collection of real-valued functions on 𝐑^D and {S_m}_m=1^M⊆2^𝐑^D a partition[So the sets S_m are disjoint while covering 𝐑^D, i.e., 𝐑^D=S_1 ∪…∪ S_M.] of 𝐑^D. Now let f denote the function that is equal to f_m on S_m so f(x):= 1_S_1(x) f_1(x)+…+1_S_M(x) f_M(x) In words, function f selects among M possible atoms {f_m}_m=1^M and the selection rule is described by sets {S_m}_m=1^M such that f(x)=f_m(x) whenever x∈ S_m. A function of the form (<ref>) is denoted as an M-th selector. A function f:𝐑^D ↦𝐑 is an M-th selector if there exists functions {f_m}_m=1^M and a partition {S_m}_m=1^M of 𝐑^D such that f selects f_m on S_m, that is f(x)= 1_S_1(x) f_1(x)+…+1_S_M(x) f_M(x). Let 𝒮_M denote the collection of M-th order selectors. We consider the minimization of an M-th selector f∈𝒮_M, min_x f(x). Under a mild technical condition, the next theorem produces a majorizer set Φ⊆𝐑^M for problem (<ref>) that is independent of the selection rules {S_m}_m=1^M, that is, a set Φ that only depends on M atoms {f_m}_m=1^M that make up the selector f∈𝒮_M. This is useful because, in general, the selection rules S_m are intractable, non-convex regions in 𝐑^D while the atoms f_m have simple algebraic expressions. 
In concrete, we typically have closed-form expressions for the derivatives of f_m and can compute the non differentiable points of f_m. Majorizer Φ will use these computations on atoms {f_m}_m=1^M. Assuming it exists, let ∇ f(x) denote the gradient of function f at point x. In simple terms, Theorem <ref> combines the selection structure of f∈𝒮_M with the well-known first-order necessary condition for optimality; namely if x^* is a minimizer of f and f is differentiable at point x^* then x^* must be a stationary point of f, that is, ∇ f(x^*)=0. Given an arbitrary set Φ⊆𝐑^M let cl Φ denote the closure[The closure of Φ⊆𝐑^M is defined as the set of limits points of Φ so cl Φ={x: ∃ {x_n}_n≥ 1⊆Φ, x_n→_n x }.] of Φ and int Φ its interior[The interior of Φ⊆𝐑^M is defined as the set of interior points of Φ so int Φ={x: ∃ ϵ>0, B(x,ϵ)⊆Φ} with B(x,ϵ) an open ball in 𝐑^M so B(x,ϵ)={v: ||x-v||<ϵ}.]. The boundary of Φ, denoted as ∂ Φ, is the set of points that belong to the closure of Φ but not to its interior, ∂ Φ= cl Φ∖int Φ. Let f∈𝒮_M denote an M-th selector with atoms {f_m}_m=1^M and selection rules {S_m}_m=1^M. Assume that atoms f_m are constant along the boundaries of the partition S_m, that is, any point x in the intersection of two distinct boundaries ∂ S_m_1,∂ S_m_2 (for m_1≠ m_2) achieves the same value m_1≠ m_2: x∈∂ S_m_1∩∂ S_m_2⇒ f_m_1(x)=f_m_2(x). Let x^* denote a minimizer of f. Then x^* must be a stationary point of some atom f_m or a non-differentiable point of yet another atom f_m, or there must exist two distinct atoms m_1≠ m_2 that coincide on x so f_m_1(x)=f_m_2(x). In compact notation min_x f(x)≠∅⇒min_x f(x) ⊆ Φ with majorizer Φ⊆𝐑^D given by Φ= ⋃_m=1^M {x: ∇ f_m(x)=0 } ∪⋃_m=1^M {x: ∄ ∇ f_m(x) } ∪⋃_m_1=1^M-1 ⋃_m_2=m_1+1^M{x: f_m_1(x)=f_m_2(x) }. Let x^* be a minimizer of f. By a first-order criterion, either x^* is a stationary point (∇ f(x^*)=0) or f is non-differentiable at x^*. In compact notation min_x f(x) ⊆ {x: ∇ f(x)=0 }_ :=Φ_1∪{x: ∄ ∇ f(x)=0 }_ :=Φ_2. Note that (<ref>) is already a valid majorizer of problem (<ref>) (although not particularly useful due to the selection structure of f). The proof proceeds by successively upper bounding the right-hand side of (<ref>) until we get (<ref>). We start by analysing Φ_1 by using partition S_m, so Φ_1 = ⋃_m=1^M Φ_1 ∩ S_m ⊆⋃_m=1^M Φ_1 ∩cl S_m =( ⋃_m=1^M Φ_1 ∩∂ S_m ) ∪( ⋃_m=1^M Φ_1 ∩int S_m ) ⊆( ⋃_m=1^M ∂ S_m ) ∪( ⋃_m=1^M Φ_1 ∩int S_m ). The first inequality bounds each individual set S_m by its closure cl S_m; the second equality uses (<ref>). Now if x belongs to Φ_1 ∩int S_m then f(x)=f_m(x) by (<ref>); furthermore ∇ f(x)=∇ f_m(x) since x is an interior point of S_m. Using this property we upper bound Φ_1 ∩int S_m by Φ_1 ∩( int S_m ={x: ∇ f(x)=0 }) ∩( int S_m ) ={x: ∇ f_m(x)=0 }∩int S_m ⊆{x: ∇ f_m(x)=0 }. Combining bounds (<ref>) with (<ref>) yields Φ_1 ⊆(⋃_m_1=1^M ∂ S_m_1) ∪( ⋃_m=1^M {x: ∇ f_m(x)=0 }). where the index set on ∂ S_m_1 was purposely changed to prepare for the next step. Let us fix an index m_1. Using assumption (<ref>), we upper bound the boundary ∂ S_m_1 as follows ∂ S_m_1 = ∂ S_m_1 ∩ ∂ (𝐑^D ∖ S_m_1) = ∂ S_m_1 ∩ ∂ (S_1∪…∪ S_m_1-1∪ S_m_1+1…∪ S_M) ⊆⋃_m_2≠ m_1∂ S_m_1 ∩ ∂ S_m_2 ⊆⋃_m_2≠ m_1{x: f_m_1(x)=f_m_2(x)}. The first equality holds because the boundary is closed under complements, that is, ∂ A= ∂ (𝐑^D∖ A) for any set A; the second equality follows because {S_m}_m=1^M is a partition of 𝐑^D; the first inequality uses the generic property ∂ (A ∪ B) ⊆∂ A ∪∂ B. The final inequality uses assumption (<ref>). 
Combining results (<ref>) and (<ref>) leads to Φ_1 ⊆⋃_m_1≠ m_2{x: f_m_1(x)=f_m_2(x)} ∪⋃_m=1^M {x: ∇ f_m(x)=0 }. The previous analysis can be adapted for Φ_2 because, for an interior point x∈int S_m, the gradient ∇ f(x) exists if and only if ∇ f_m(x) exists. Using similar arguments we majorize Φ_2 by Φ_2 ⊆⋃_m_1≠ m_2{x: f_m_1(x)=f_m_2(x)} ∪⋃_m=1^M {x: ∄ ∇ f_m(x) }. Majorizer Φ, in (<ref>), is the union of bounds (<ref>) and (<ref>). In the proof of theorem <ref> we never use the fact that the atoms {f_m}_m=1^M are all different, i.e., f_m_1≠ f_m_2 for m_1≠ m_2. Let us note, however, that if there exist two atoms f_m_1,f_m_2 which are equal, f_m_1=f_m_2, then majorization (<ref>) becomes trivial in the sense that Φ, given by (<ref>), becomes equal to the ambient space 𝐑^D. Simple examples show that condition (<ref>) is actually necessary for majorization (<ref>) to hold, that is, if the atoms {f_m}_m=1^M of f are not constant along the boundaries of the selection rules {S_m}_m=1^M then we cannot generally conclude that any minimizer x^* of (<ref>) (assuming it exists) is contained in Φ. As a concrete example consider D=1 and the selector f∈𝒮_2 with atoms and partition f_1(x) = |x+2| if x<-1 and f_1(x)=0 if x≥ -1, with S_1=(-∞,-1.5); f_2(x) = x+1 if x< -1/2 and f_2(x)=1/2 if x≥ -1/2, with S_2=[-1.5,+∞). Both rules S_1,S_2 have the same boundary ∂ S_1=∂ S_2={-1.5} but different atom values f_1(-1.5)=0.5 ≠ -0.5=f_2(-1.5). So condition (<ref>) does not hold and, in this example, Φ is not a valid majorizer of problem (<ref>). In concrete we get min_x f(x)={-1.5}⊈ {-2}∪ [-1,+∞) = Φ.
It turns out that the localization problem (<ref>) is, in fact, a selection problem of the form (<ref>) due to the percentile function p_L. In concrete, the objective of (<ref>) is an M-th selector function since it can be decomposed as p_L{ |y_m- ||x-a_m|| | }= f_1(x) 1_𝒮_1(x)+… +f_M(x) 1_𝒮_M(x) for atoms f_m(x):=|y_m- ||x-a_m|| | and selection rules S_1 ={x: f(x)=f_1(x) } S_2 ={x: f(x)=f_2(x),f_1(x)≠ f(x) } S_3 ={x: f(x)=f_3(x), f_1(x)≠ f(x), f_2(x)≠ f(x) } ⋮ S_M ={x: f(x)=f_M(x), f_1(x)≠ f(x), …, f_M-1(x)≠ f(x) }, with f(x):=p_L{ |y_m- ||x-a_m|| | }. In words, the atoms of p_L{ |y_m- ||x-a_m|| | } are the model deviations x ↦ |y_m- ||x-a_m|| |. The selection rule S_m_1 identifies the set of points such that p_L{f_m(x)}=f_m_1(x), while ensuring that {S_m}_m=1^M is a proper partition [ The collection {S_m}_m=1^M is pairwise disjoint due to the condition f_m̅(x)≠ f(x) for m̅< m in (<ref>). The sets {S_m}_m=1^M span 𝐑^2 because the L-th percentile p_L of a vector z is an element of z, as seen in (<ref>).] of 𝐑^2.
Given the connection between problems (<ref>) and (<ref>) a natural idea is to use Theorem <ref> to majorize the solutions[Problem (<ref>) has a non-empty minimizer set because the functions x↦ |y_m- ||x-a_m|| | are all coercive regardless of m.] of problem (<ref>). If majorization (<ref>) holds then we can, without loss of generality, reduce the search space from 𝐑^2 to Φ so min_x p_L{ |y_m- ||x-a_m|| | }=min_x∈Φ p_L { |y_m- ||x-a_m|| | }. Result (<ref>) is useful because it suggests a direct method to solve the original problem: we can simply create a finite grid over Φ and choose the grid point that yields the lowest objective. Let us note, however, that in order to apply Theorem <ref> we must show that the atoms {f_m}_m=1^M are constant along the boundaries of the selection rules {S_m}_m=1^M. In general verifying (<ref>) for an arbitrary selector f∈𝒮_M might be non-trivial.
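The counterexample above is easy to reproduce. The sketch below (our own illustration, not code from the paper) minimizes the selector f on a fine grid and confirms that the minimizer x=-1.5 is not captured by Φ={-2}∪[-1,+∞):

```python
import numpy as np

def f1(x):   # atom f_1
    return np.where(x < -1.0, np.abs(x + 2.0), 0.0)

def f2(x):   # atom f_2
    return np.where(x < -0.5, x + 1.0, 0.5)

def f(x):    # selector: f_1 on S_1 = (-inf, -1.5), f_2 on S_2 = [-1.5, +inf)
    return np.where(x < -1.5, f1(x), f2(x))

grid = np.linspace(-5.0, 5.0, 200001)
x_star = grid[np.argmin(f(grid))]
print(x_star)                                        # -1.5, the unique minimizer
print(np.isclose(x_star, -2.0) or x_star >= -1.0)    # False: x_star is not in Phi = {-2} U [-1, +inf)
```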
It turns out, however, that for a broad class of percentile objectives assumption (<ref>) becomes simple to verify; the next lemma shows that condition (<ref>) holds for percentile objectives composed with continuous mappings. Given M continuous functions {f_m}_m=1^M let {S_m}_m=1^M denote the partition of (<ref>). Then, the percentile function x↦ p_L{ f_m(x) } is constant along the boundaries of {S_m}_m=1^M, that is, condition (<ref>) holds for f=p_L{ f_m }. To prove (<ref>) we majorize the boundary ∂ S_m_1 by using simple topological properties of the boundary and closure operators. In concrete, ∂ S_m_1 ⊆cl S_m_1 ⊆cl ( {x: f(x)=f_m_1(x)}) =cl ( ⋃_𝒪: |𝒪|=L{x: f_m(x)≥ f_m_1(x), m∈𝒪, f_m_1(x)≥ f_m̅(x), m̅∈𝒪^C }) =⋃_𝒪: |𝒪|=Lcl ( {x: f_m(x)≥ f_m_1(x), m∈𝒪, f_m_1(x)≥ f_m̅(x), m̅∈𝒪^C }) =⋃_𝒪: |𝒪|=L( {x: f_m(x)≥ f_m_1(x), m∈𝒪, f_m_1(x)≥ f_m̅(x), m̅∈𝒪^C }) with 𝒪 the set of outliers. The first inequality uses definition (<ref>). The second inequality makes use of (<ref>) and the generic property cl (A∩ B)⊆cl A ∩ cl B. The first equality uses the percentile definition in f=p_L{f_m}. For the second equality note that cl (A ∪ B) = cl A ∪ cl B for any sets A,B. The continuity assumption of {f_m}_m=1^M justifies the final equality. In conclusion, for any x in ∂ S_m_1 there exist L outliers in 𝒪 such that f_m_1 is greater than the atoms in 𝒪^C and smaller than the atoms in 𝒪. This is equivalent to f(x)=f_m_1(x). Applying the same reasoning for x in ∂ S_m_2 leads to f(x)=f_m_2(x). Hence, for x in the intersection ∂ S_m_1 ∩ ∂ S_m_2, we get the desired equality f_m_1(x)=f(x)=f_m_2(x).
To illustrate the usefulness of Theorem <ref> and Lemma <ref>, we present a first example where the majorizer set Φ actually reduces to a finite set. In this case the aforementioned grid method actually solves the problem, in the sense that it leads to a finite-time algorithm that returns a provable minimizer of (<ref>). The example is as follows: we want to approximate a data vector d∈𝐑^M by a scalar x∈𝐑, given that d has L outlier entries. Using the motivation of section <ref> we formalize this problem as min_x f(x), with f(x)=p_L{ | x-d_m|}. Assume all entries of d are distinct (so d_m_1≠ d_m_2 for m_1≠ m_2). Then, simple computations give Φ= ⋃_m=1^M {d_m } ∪ ⋃_m_1=1^M-1 ⋃_m_2=m_1+1^M{d_m_1,m_2}, with d_m_1,m_2:=(d_m_1+d_m_2)/2 denoting the average of points d_m_1 and d_m_2. So any minimizer of (<ref>) must be an original data point d_m or the average d_m_1,m_2 of two points. The majorization inequality (<ref>) suggests a naive algorithm for problem (<ref>): we simply evaluate the objective over set Φ and select the point which yields the lowest objective, that is min_x f(x) =min{min_m f(d_m), min_m_1≠ m_2 f(d_m_1,m_2) }. Figure <ref> plots a realization of problem (<ref>) with M=3.
§.§ Parametrizing Set Φ Theorem <ref> and Lemma <ref> show that any minimizer x^* of problem (<ref>) must belong to set Φ given by Φ= ⋃_m=1^M S_m ∪ ⋃_m=1^M N_m ∪⋃_m_1<m_2 E_m_1,m_2, with the auxiliary sets S_m= {x: ∇ f_m(x)=0 }, N_m= {x: ∄ ∇ f_m(x) } E_m_1,m_2= {x: f_m_1(x)=f_m_2(x) }, defined in terms of the atoms f_m(x):=|y_m- ||x-a_m|| |. This section derives an alternative representation of the set Φ which is sample friendly, that is, such that sampling points from Φ is computationally tractable. This enables a simple scheme to solve problem (<ref>) by creating a grid over Φ and simply choosing the best grid point. As seen in example <ref>, this strategy delivers a provable minimizer of (<ref>) if Φ is a finite set.
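Example <ref> can also be reproduced directly. The following sketch (our own, with arbitrary data) enumerates the finite majorizer Φ (the data points and all pairwise averages) and checks against a brute-force grid that the best candidate in Φ indeed attains the minimum of (<ref>):

```python
import numpy as np
from itertools import combinations

def percentile_L(z, L):
    return np.sort(np.asarray(z, dtype=float))[::-1][L]

def objective(x, d, L):
    return percentile_L(np.abs(x - d), L)

rng = np.random.default_rng(2)
d = rng.uniform(-3.0, 3.0, size=5)       # data vector (entries distinct with probability one)
L = 1                                    # assumed number of outlier entries

# finite majorizer Phi: the data points and all pairwise averages
candidates = list(d) + [(d[i] + d[j]) / 2 for i, j in combinations(range(d.size), 2)]
x_hat = min(candidates, key=lambda x: objective(x, d, L))

# brute-force check on a fine grid: no grid point should beat the best candidate in Phi
grid = np.linspace(d.min() - 1.0, d.max() + 1.0, 100001)
best_grid = min(objective(x, d, L) for x in grid)
print(objective(x_hat, d, L), best_grid)   # the candidate value matches (or beats) the grid value
```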
It turns out that set Φ given by (<ref>) will be, in general, non-finite and even unbounded. Nonetheless, this grid approach often delivers very good estimates of the true target x, which tend to outperform several benchmarks in robust localization – see section <ref>.   According to (<ref>), we first compute the set of stationary and non-differentiable points of atoms f_m(x)=|y_m- x-a_m |. Function f_m is differentiable if and only if x≠ a_m and x-a_m≠ y_m. An application of the chain rule <cit.> yields ∇ f_m(x)= -x-a_m/x-a_msign (y_m- x-a_m ) with sign(t) the sign of t≠ 0 (sign(t)=1 if t>0 and sign(t)=-1 when t<0). Since the gradient ∇ f_m(x) computed in (<ref>) is always non-zero for x≠ a_m and x-a_m≠ y_m we conclude that there exists no stationary point of atom f_m, i.e., S_m=∅. Additionally, the set of non-differentiable points of atom f_m is equal to N_m={a_m}∪{x:x-a_m = y_m}. Since x lies in 𝐑^2 we can easily parameterized sets N_m by N_m= {a_m}∪{ a_m+y_m(cosθ, sinθ): θ∈ [0,2π] }. To parametrize the third term of (<ref>) we fix a pair of distinct indices m_1≠ m_2 and analyze the set of points where the atoms f_m_1, f_m_2 yield the same value, that is, E_m_1,m_2={x: |y_m_1- x-a_m_1 |=|y_m_2- x-a_m_2 | }. Let us assume, without loss of generality, that y_m_1≥ y_m_2. In order to parametrize set Em_1,m_2 we use two well know curves in plane geometry: standard ellipses and hyperbolas. (Pins-and-string Characterization of an Ellipse) Given constants k,c≥ 0 consider the symmetric focal points f_1:=(-c,0), f_2:=(c,0). For k>f_1-f_2 =2c the standard ellipse is defined as the set of x∈𝐑^2 points whose additive distances to f_1 and to f_2 equals k, so ℰ(c,k):={x: x-f_1+ x-f_2 =k }. (Pins-and-string Characterization of an Hyperbola) Given constants k,c≥ 0 consider the symmetric focal points f_1:=(-c,0), f_2:=(c,0). For k<f_1-f_2 =2c the standard hyperbola is defined as the set of x∈𝐑^2 points whose subtractive distances to f_1 and to f_2 equals ± k, so ℋ(c,k):= ℋ_+(c,k) ∪ℋ_-(c,k), with the half-hyperbolas ℋ_+(c,k), ℋ_-(c,k) defined by ℋ_+(c,k) := {x: x-f_1- x-f_2 =+k }, ℋ_-(c,k) := {x: x-f_1- x-f_2 =-k }. It turns out that set Em_1,m_2 is closely related to standard ellipses ℰ(c,k) and standard half hyperbolas ℋ_+(c,k). In concrete, after carefully translating and rotating Em_1,m_2 we can get (1) a standard ellipse ℰ(c,k), or (2) a standard half-hyperbola ℋ_+(c,k) or (3) their union ℰ(c,k) ∪ℋ_+(c,k). The switching between these three regimes depends on the anchor-measurement data (a_m_1,a_m_2,y_m_1,y_m_2). The next lemma designs a suitable translation and rotation that morph Em_1,m_2 as required. Given any vector x in 𝐑^2∖{0} let (x)∈(-π,π] denote the angle of vector x in polar coordinates. Given anchors a_m_1, a_m_2∈𝐑^2 and measurements y_m_1,y_m_2 assume y_m_1≥ y_m_2. Then set E_m_1,m_2 is given by E_m_1,m_2=a+R ℰ(c,k_E) if 2c <k_H ℋ_+(c,k_H) if 2c>k_E ℰ(c,k_E) ∪ℋ_+(c,k_H) if k_H≤ 2c≤ k_E , with a=(a_m_1+a_m_2)/2 the mean anchor position, (c,k_E,k_H)=( a_m_2-a,y_m_1+y_m_2,y_m_1-y_m_2) the constants that define the sets ℰ(c,k_E) and ℋ_+(c,k_H), and R the rotation matrix associated with angle θ^* of vector a_m_2-a, R =[ cosθ^* -sinθ^*; sinθ^* cosθ^* ],θ^* := ( a_m_2-a). We start by opening the absolute values in definition (<ref>), so E_m_1,m_2= E_m_1,m_2^S∪ E_m_1,m_2^O, with E_m_1,m_2^O={x: x-a_m_1+x-a_m_2 =y_m_1+y_m_2} and E_m_1,m_2^S={x: x-a_m_1-x-a_m_2 =y_m_1-y_m_2}. 
Set E_m_1,m_2^O assumes that the absolute values in E_m_1,m_2 have opposite signs (hence the subscript O of opposite) while E_m_1,m_2^S assumes equal signs (hence the subscript S of same). To proceed we consider three separate cases: 2c>k_E, 2c<k_H and k_H≤ 2c≤ k_E. For 2c>k_E there exists no target position x which yields opposite signs on E_m_1,m_2, that is, E_m_1,m_2^O=∅. Indeed assume there exists an x with x-a_m_1+x-a_m_2 =y_m_1+y_m_2. Since 2c=a_m_1-a_m_2 we may bound this constant by 2c=a_m_1-a_m_2 ≤x-a_m_1+x-a_m_2 =y_m_1+y_m_2 =k_E. But (<ref>) contradicts 2c>k_E. Hence E_m_1,m_2^O=∅ when 2c>k_E. A similar reasoning shows that the absolute values in E_m_1,m_2 must have opposite signs when 2c<k_H, that is, E_m_1,m_2^S=∅ for 2c<k_H. Hence E_m_1,m_2= E_m_1,m_2^O if 2c <k_H E_m_1,m_2^S if 2c>k_E E_m_1,m_2^O∪ E_m_1,m_2^S if k_H≤ 2c≤ k_E . To finish the proof we show that sets E_m_1,m_2^O and E_m_1,m_2^S are equal to a standard ellipse ℰ(c,k_E) and a standard hyperbola ℋ^+(c,k_H) (respectively) after a translation and rotation. Using the translation vector a and rotation matrix R leads to E_m_1,m_2^S ={x: x-a_m_1-x-a_m_2 =y_m_1-y_m_2} =a+ R {x̃: x̃-ã_m_1-x̃-ã_m_2 =y_m_1-y_m_2}, with the change of variable x̃=R^T (x-a) for any point x. Result (<ref>) follows because R defines an isometry, that is, ∀ ϕ:x-ϕ = R^T (x-a)-R^T (ϕ-a)= x̃-ϕ̃. Simple computations show that the change of variable x↦ R^T (x-a) maps the anchors a_m_1 and a_m_2 to the focal points (-c,0) and (c,0 ) (this is shown in figure <ref> since the transformation x↦ Rx rotates the input x counter-clockwise by θ^* radians). Hence set (<ref>) is equal to a+R ℰ(c,k_E). Doing the same change of variables in E_m_1,m_2^O leads to E_m_1,m_2^O=a+R ℋ_+(c,k_H) as desired. The quantities (a,c,k_E,k_H,R) introduced in Lemma <ref> depend on the indices m_1,m_2 that define the set E_m_1,m_2. So, in full rigour, these quantities should also be indexed by indices m_1,m_2 in order to be fully identifiable. Since this makes the notation denser we choose not to display the dependency of (a,c,k_E,k_H,R) on m_1,m_2. So, from now on, the symbols (a,c,k_E,k_H,R) are always associated with the set E_m_1,m_2 for some pair (m_1,m_2) understood implicitly. Figure <ref> plots the three generic forms of E_m_1,m_2. A consequence of Lemma <ref> is that the task of sampling from E_m_1,m_2 reduces to sampling from an ellipse (<ref>) or an half hyperbola (<ref>). Since these parametric curves are well known in plane geometry there exist simple parametrization of both sets. An ellipse ℰ(c,k_E) can be parametrized by means of the trigonometric functions (cosθ, sinθ), while an hyperbola ℋ(c,k_H) uses the corresponding hyperbolic functions, that is cosh t=e^t+e^-t/2, sinh t=e^t-e^-t/2. The next lemma summarizes one standard parametrization of sets ℰ(c,k_E) and ℋ(c,k_H) by means of trigonometric and hyperbolic functions, respectively. The standard ellipse ℰ(c,k_E) and standard half hyperbolas ℋ_+(c,k_H), ℋ_-(c,k_H) can be parametrized by ℰ(c,k_E) = {( k_E/2cosθ, √(k_E^2/4-c^2)sinθ): θ∈ [0,2π] }, ℋ_+(c,k_H) = {(k_H/2cosh t, √(c^2-k_H^2/4)sinh t): t ∈𝐑}, ℋ_-(c,k_H) = {(-k_H/2cosh t, √(c^2-k_H^2/4)sinh t): t ∈𝐑}. Lemmas <ref> and <ref> allow to efficiently sample set E_m_1,m_2 defined in (<ref>). Combining this result with (<ref>) we express Φ in terms of computational tractable regions in 2-D space: singletons, circles, ellipses and half-hyperbolas. The next section tackles the unboundedness of the half-hyperbolas. 
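For reference, a sampling routine for E_m_1,m_2 along the lines of Lemmas <ref> and <ref> could look as follows. This is a sketch with our own naming conventions; the truncation bound U on the hyperbolic parameter is taken here as a plain input and is tuned properly in the next subsection.

```python
import numpy as np

def sample_E(a1, a2, y1, y2, n=200, U=3.0):
    """Sample E_{m1,m2} = { x : |y1 - ||x-a1|| | = |y2 - ||x-a2|| | }, assuming y1 >= y2.
    U is a user-supplied truncation of the hyperbolic parameter t."""
    a = (a1 + a2) / 2.0                            # mean anchor position
    c = np.linalg.norm(a2 - a)                     # half distance between the focal points
    kE, kH = y1 + y2, y1 - y2
    th = np.arctan2(a2[1] - a[1], a2[0] - a[0])    # angle theta* of a2 - a
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    pieces = []
    if 2 * c <= kE:                                # elliptic part (sum of distances equal to kE)
        ang = np.linspace(0.0, 2 * np.pi, n)
        pieces.append(np.stack([kE / 2 * np.cos(ang),
                                np.sqrt(kE**2 / 4 - c**2) * np.sin(ang)], axis=1))
    if 2 * c >= kH:                                # half-hyperbolic part (difference of distances equal to kH)
        tt = np.linspace(-U, U, n)
        pieces.append(np.stack([kH / 2 * np.cosh(tt),
                                np.sqrt(c**2 - kH**2 / 4) * np.sinh(tt)], axis=1))
    return a + np.vstack(pieces) @ R.T             # undo the change of variables: x = a + R x_tilde

# consistency check: sampled points satisfy the defining equality up to floating point error
a1, a2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
y1, y2 = 0.9, 0.4
X = sample_E(a1, a2, y1, y2)
dev = np.abs(np.abs(y1 - np.linalg.norm(X - a1, axis=1)) -
             np.abs(y2 - np.linalg.norm(X - a2, axis=1)))
print(dev.max())                                   # ~ 0
```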
§.§ Dealing with the unboundedness of Φ As seen in Figure <ref>, set E_m_1,m_2 is unbounded whenever 2c≥ k_H due to the half-hyperbolic component ℋ_+(c_H,k). In this case we may sample E_m_1,m_2 by first discretizing the half hyperbola ℋ_+(c,k_H) and then applying the affine transformation x↦a̅ + R x. But discretizing a generic half hyperbola ℋ_+(c,k_H) is computational intractable since the parameter t, in (<ref>), can take any value in 𝐑. Let us note, however, that this issue can be easily fixed since our half-hyperbola ℋ_+(c,k_H) is not completely generic; it parametrizes set Φ that majorizes the minimizer set of problem (<ref>). The objective function x↦ p_L{ | y_m -x-a_m | } is clearly coercive since x→∞ implies that the deviation mapping | y_m -x-a_m | grows unbounded for each m. So, clearly, a minimizer x^* of (<ref>) cannot have a arbitrarily high norm x^*. This intuitively prevents the parameter t in ℋ_+(c,k_H) to grow unbounded, since this would make both hyperbolic functions in (<ref>) also grow unbounded, that is min{|cosh t|,|sinh t|}→ +∞, when |t|→ +∞.   To formalize the previous argument we start by designing a tractable family of upper bounds on the norm of a generic minimizer x^* of (<ref>). Let x̂∈𝐑^2 be arbitrary. Then, the solution set of (<ref>) is bounded since any minimizer x^* of (<ref>) respects x^*≤ B(x̂), where B(x̂)=p_L{| y_m -x̂-a_m| } + max_m {a_m+y_m }. Assume there exists a minimizer x^*of (<ref>) with x^* > p_L{ | y_m -x̂-a_m | } + max_m {a_m+y_m }. To get a contradiction we first note that (<ref>) is equivalent to x^*- a_m-y_m > p_L{ | y_m -x̂-a_m | },∀ m. Using (<ref>) we bound the objective p_L{ | y_m -x^*-a_m | } by p_L{| y_m -x^*-a_m| } ≥ p_L{x^*- a_m-y_m } >p_L{ | y_m -x̂-a_m | }. Result (<ref>) says that x^* yields a strictly larger objective than x̂. This is impossible because x^* is a minimizer of (<ref>). Assume now that E_m_1,m_2 is unbounded because of the half-hyperbolic component ℋ_+(c_H,k). Sampling E_m_1,m_2 is computationally intractable since the parameter t in (<ref>) can take any value in 𝐑. A natural idea to approximate ℋ_+(c,k_H) is to bound the parameter t to a finite interval, say t∈ [-U,U], and consider ℋ^T_+(c,k_H) ={ ( k_H/2cosh t, √(c^2-k_H^2/4)sinh t): |t|≤ U }. The subscript T in ℋ^T_+(c,k_H) stands for truncated, since ℋ^T_+(c,k_H) is a truncated version of ℋ_+(c,k_H). Intuitively, high values of U make ℋ^T_+(c_H,k) a good approximation of the half-hyperbola ℋ_+(c_H,k). Hence, it is plausible to bound the minimizers of (<ref>) by considering a truncated version of E_m_1,m_2 where set ℋ_+(c_H,k) is substituted by its truncated version ℋ^T_+(c_H,k). Let E_m_1,m_2^T denote such a set, that is E_m_1,m_2^T =a̅+R ℰ(c,k_E) if 2k <k_H. ℋ^T_+(c,k_H) if 2k >k_E ℰ(c,k_E) ∪ℋ^T_+(c,k_H) if k_H≤ 2k≤ k_E. In general, the bound U appearing in ℋ^T_+(c,k_H) will depend on the data (a_m_1,a_m_2,y_m_1,y_m_2) defining set E_m_1,m_2. But, for ease of notation, we choose to not represent this dependency. Set E_m_1,m_2^T is now computational tractable, in the sense that we can efficiently sample the truncated half hyperbola ℋ^T_+(c_H,k) for t∈[-U,U]. Given sets E_m_1,m_2^T we mimic the structure (<ref>) of Φ and consider the truncated set Φ^T⊆𝐑^2 Φ^T= ⋃_m=1^M {a_m}∪{ a_m+y_m(cosθ, sinθ): θ∈ [0,2π] }. ∪⋃_m_1=1^M-1 ⋃_m_2=m_1+1^M E_m_1,m_2^T, where sets E_m_1,m_2 were substituted by its truncated version E_m_1,m_2^T. Set Φ^T is now sample friendly since each set E_m_1,m_2^T is sample friendly. 
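A minimal sketch of the norm bound B(x̂) of Lemma <ref> and of the truncated half-hyperbola ℋ^T_+ is given below (our own code; the truncation level U is still a free input at this point, and its principled choice is the subject of the next lemma):

```python
import numpy as np

def percentile_L(z, L):
    return np.sort(np.asarray(z, dtype=float))[::-1][L]

def norm_bound(x_ref, y, anchors, L):
    """B(x_ref): any minimizer x* of the percentile problem satisfies ||x*|| <= B(x_ref)."""
    dev = np.abs(y - np.linalg.norm(x_ref - anchors, axis=1))
    return percentile_L(dev, L) + np.max(np.linalg.norm(anchors, axis=1) + y)

def truncated_half_hyperbola(c, kH, U, n=200):
    """Points of H^T_+(c, kH): the half-hyperbola with its parameter t restricted to [-U, U]."""
    t = np.linspace(-U, U, n)
    return np.stack([kH / 2 * np.cosh(t),
                     np.sqrt(c**2 - kH**2 / 4) * np.sinh(t)], axis=1)

rng = np.random.default_rng(3)
anchors = rng.uniform(0.0, 1.0, size=(10, 2))
y = np.linalg.norm(rng.uniform(0.0, 1.0, size=2) - anchors, axis=1) + rng.normal(0.0, 0.05, size=10)
print(norm_bound(np.zeros(2), y, anchors, L=2))    # bound obtained from the arbitrary reference point 0
print(truncated_half_hyperbola(0.5, 0.3, U=2.0).shape)
```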
The next lemma tunes the value of U such that Φ^T a valid majorizer of problem (<ref>). The main insight is to use Lemma <ref> to bound the norm of any point in ℋ_+(c,k_H). Given any x̂∈𝐑^2 consider the bound U =log( Û + √(Û^2-1) ), Û =√(( B(x̂)+ c+ ||a_m_2||)^2+ (c^2-k_H^2/4)/c^2) with B(x̂) the norm bound defined in (<ref>), (c,k_H) the constants of ℋ_+(c,k_H) and m_2 the index of measurement y_m_2≤ y_m_1 as in Lemma <ref>. Let Φ^T denote the truncated version of Φ with bound (<ref>). Then Φ^T majorizes (<ref>), that is min_x p_L {| y_m -x-a_m| }⊆ Φ^T. Let x^* denote a minimizer of (<ref>). We show that if x^* belongs to an hyperbolic set a + R ℋ_+(c,k_H) then it also belongs to its truncated version a + R ℋ^T_+(c,k_H). Assume x^* belongs to a + R ℋ_+(c,k_H) for some constants (c,k_H) such that 2c≥ k_H. In this case Lemma <ref> yields the parametric form x^*= a + R ϕ^*,ϕ^*:=[ k_H/2cosh t^*; √(c^2-k_H^2/4)sinh t^* ] for some t^*∈𝐑. Our goal is to bound the parameter t^* to the interval [-U,U] such that x^* belongs to the truncated set a + R ℋ^T_+(c,k_H). For this we use Lemma <ref> which already bounds ||x^*|| by B(x̂) with x̂ arbitrary. This yields ϕ^* = Rϕ^* ≤a+Rϕ^*+ a = x^*+ a ≤ B(x̂)+ c+ ||a_m_2||. The final equality uses definition c=||a-a_m_2|| from Lemma <ref>. Squaring both sides of (<ref>) leads to k_H^2/4cosh^2(t^*) + ( c^2- k_H^2/4) sinh^2(t^*) ≤( B(x̂)+ c+ ||a_m_2|| )^2. The identity sinh^2(t^*)=cosh^2(t^*)-1 implies that cosh(t^*) ≤√(( B(x̂)+ c+ ||a_m_2||)^2+ (c^2-k_H^2/4)/c^2). The right hand side of (<ref>) is greater than or equal to one since B(x̂) is non-negative and c≥ k_H/2. The inverse of the hyperbolic cosine cosh is equal to z↦log (z+ √(z^2-1)) when z∈[1,+∞). The inverse is an increasing function hence t^* ≤log( Û + √(Û^2-1) ), with Û equal to the right-hand side of (<ref>). The final method for problem (<ref>) is summarized in Algorithm <ref>. As in example <ref>, the idea is to sample set Φ and choose the best grid point according to the percentile objective. § NUMERICAL RESULTS In this section we benchmark our method with four state-of-the art algorithms in target localization <cit.>, <cit.>, <cit.>, <cit.>. The first two methods <cit.>, <cit.> are based on non-robust formulations of the problem, that is, formulations that ignore the existence of outlier measurements. We include these methods to highlight the importance of robust approaches. The algorithms in <cit.>, <cit.> optimize robust formulations of the problem. The benchmark estimates are denoted by x̂_SR-LS <cit.>, x̂_GD <cit.> , x̂_Huber <cit.> and x̂_SR-IRLS <cit.>. Our estimate, denoted as x̂_RPTE, is computed via Algorithm <ref> with just G=20 grid points.   Simulated setup. As the localization setup, we consider a 1 Km^2 square area and with M=10 anchors. The number of outliers, L varies between 0 (only inliers) and M/2 (half-outliers, half-inliers). As explained in Section <ref>, y_m is an outlier if it introduces a large deviation in the model y_m=x^*-a_m+u_m, with x^* the true position of the target. To distinguish inlier and outlier, we sample u_m from a zero-mean Gaussian distribution with varying standard deviation: inlier measurements {y_m}_m∈ℐ have a fixed standard deviation (uncertainty) of 50 meters so σ_ℐ=0.05 Km; the uncertainty σ_𝒪 of outliers varies in the kilometer grid {0.5,0.75,1,1.5,2,2.5}. For each value of σ_𝒪, we generate 5000 problem instances as follows: * We generate 100 positions for the anchors {a_m}_m=1^M and target x^* by sampling the unit square [ 0, 1]^2 uniformly. 
* For each configuration of anchors a_m and target x^* we generate 50 random sequences of outliers, i.e., 50 lists l^(1) =(l_1^(1),…,l_M/2^(1)), ⋮ l^(50) =(l_1^(50),…,l_M/2^(50)) of M/2 outlier indices. We generate 50 lists instead of a single one such that, for a particular realization of anchors a_m and target x^*, our experiments accommodate setups where the number of outliers L varies (so taking the first L indices of each individual list) but also the outlier assignment varies for a fixed number of outliers (so taking the same index for different lists). For L=0, all measurements y_m are generated by a Gaussian model with standard deviation σ_ℐ, that is y_m=| ||x^*-a_m|| + 𝒩(0,σ_ℐ^2) |. The absolute value in (<ref>) simply ensures that all measurements are non-negative. For L≥ 1, we select the first L indices of each list l^(1),…,l^(50) to serve as the outliers of our model. Outlier measurements have a standard deviation of σ_𝒪, that is y_m=| ||x^*-a_m|| + 𝒩(0,σ_𝒪^2) |.
* For each configuration of anchors {a_m}_m=1^M, true target x^* and measurement vector y, we deploy all previously mentioned algorithms.
Implementation Details. All methods were implemented on a computer with an Intel(R) Core(TM) i7-3630QM CPU @ 2.4GHz processor. Algorithm <ref> is deployed with G=20 grid points. The bisection method of <cit.> was implemented with a stopping tolerance of 10^-6. The value of the parameter ϵ in <cit.> was set to ϵ=1.34 √(3) σ_ℐ (as suggested by the authors). Algorithm 1 of <cit.> runs with a stopping criterion of 10^-4 and a bisection tolerance of 10^-6. The gradient descent method of <cit.> runs until it finds a point whose gradient is lower (in Euclidean norm) than 10^-8 or until the number of iterations reaches the limit of 5000 iterations. The Huber parameter of <cit.> was set to 1.34 σ_ℐ as motivated by <cit.> (Appendix A).
Results. Figure <ref> compares our percentile estimate x̂_Percentile with the benchmarks x̂_SR-LS, x̂_GD, x̂_Huber and x̂_SR-IRLS. For L=0 (no outliers) only the Huber method delivers a poor estimate x̂_Huber of target x^*. In concrete, all other methods have an average reconstruction error lower than ≈ 40 meters (in a 1 Km^2 region, regardless of σ_ℐ) while the Huber estimate x̂_Huber is ≈ 125 meters away from x^* (average error for different values of σ_ℐ). The upper bound of 40 meters is met by the estimates x̂_Percentile, x̂_SR-LS and x̂_SR-IRLS while the gradient descent estimate x̂_GD tends to be ≈ 10 meters closer to x^*. The higher performance of the gradient method is intuitive for two reasons: (1) when L=0 the measurements y_m are essentially modeled (up to the absolute value in (<ref>)) as i.i.d. samples of a Gaussian distribution with mean ||x^*-a_m|| and standard deviation σ_ℐ; (2) the gradient iterates are minimizing the range-based least squares objective (R-LS) which is proportional to the maximum likelihood objective <cit.>. Pun et al. <cit.> show that, under mild assumptions, there exists a neighbourhood where the R-LS objective is strongly convex. We suspect that the initialization proposed in <cit.> tends to be inside this neighbourhood. In this case, Theorem 1 of <cit.> suggests that x̂_GD is an approximate maximum likelihood estimate of x^*, when no outlier exists.
For L≥ 1 both non-robust estimates x̂_GD, x̂_SR-LS exhibit a large performance decline. In concrete, a single outlier measurement (L=1) is capable of increasing the estimation error ||x^*-x̂|| by alarming amounts.
For example, when σ_𝒪=1 Km the estimation error ||x^*-x̂|| tends to increase by a factor of ≈ 5 (so the error ||x^*-x̂_GD||≈ 31 m for L=0 becomes ||x^*-x̂_GD||≈ 158 m for L=1). Furthermore, the bias introduced in the estimates x̂_GD, x̂_SR-LS increases with the outlier uncertainty σ_𝒪. For σ_𝒪=2 Km a single outlier now makes the estimation error ||x^*-x̂_GD|| an order of magnitude (so ≈ 10 times) worse. So the non-robust methods of <cit.>, <cit.> fail to generalize in the presence of outliers.
For a larger number of outliers (say L=2,3,4,5) the most accurate estimates tend to be achieved by the proposed estimator x̂_RTPE. Our method delivers significant accuracy gains with respect to the robust alternatives x̂_SR-IRLS and x̂_Huber. For example, when σ_𝒪=1 Km and L=3 the best robust benchmark x̂_SR-IRLS is, on average, 115 m away from x^* while the percentile estimator x̂_RTPE doubles the estimation accuracy (||x^*-x̂_RTPE||≈ 54 m) — see the upper row of Figure <ref>. As before, higher values of L and σ_𝒪 tend to accentuate the differences between methods. For σ_𝒪=1.5 Km and L=4 both benchmarks yield similar estimates x̂_SR-IRLS, x̂_Huber (||x^*-x̂_SR-IRLS||≈ 300 m, ||x^*-x̂_Huber||≈ 220 m) yet the percentile method decreases the estimation error approximately 3 to 4 times (||x^*-x̂_RTPE||≈ 70 m) — see the lower row of Figure <ref>. These findings suggest that the VaR methodology allows for robust localization in the presence of outlier measurements.
The higher performance of RTPE has an associated computational cost coming from the grid methodology of Algorithm <ref>. Let us note, however, that in our experiments this cost is negligible. Indeed, on average, RTPE only takes 0.017 seconds to compute x̂_RTPE. This computational time is comparable to that of the benchmarks since the fastest method takes 0.0009 seconds to compute x̂_SR-LS while the slowest approach takes 0.036 seconds to estimate x̂_Huber. The low computational cost of RTPE comes from (a) considering a modest grid of G=20 points in Algorithm <ref> and (b) having a reasonable number of anchors M (say M≤ 10), which is typical in localization.
§ CONCLUSION This paper addresses the problem of locating a target from range measurements, some of which may be outliers. Assuming that the number of outliers is known, we formulate a robust estimation problem using the percentile objective. This new formulation is given a statistical interpretation by considering the framework of risk analysis. In concrete, our formulation is equivalent to optimizing the value-at-risk (VaR) risk measure from portfolio optimization. To actually address the percentile problem we design a majorizer set which contains all minimizers of the original VaR formulation. Our majorizer can be parametrized by well-known curves in plane geometry: singletons, circumferences, ellipses and bounded half-hyperbolas. Since all these regions have efficient parametrizations we propose a grid algorithm – RTPE – which reduces the optimization problem to an efficient sampling scheme. Numerical experiments show that, on average, our method outperforms several benchmarks in target localization.
http://arxiv.org/abs/2307.01851v1
20230704175958
Boundary Flat Bands with Topological Spin Textures Protected by Sub-chiral Symmetry
[ "Yijie Mo", "Xiao-Jiao Wang", "Rui Yu", "Zhongbo Yan" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall", "cond-mat.quant-gas", "cond-mat.str-el", "quant-ph" ]
Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, School of Physics, Sun Yat-Sen University, Guangzhou 510275, China Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, School of Physics, Sun Yat-Sen University, Guangzhou 510275, China yurui@whu.edu.cn School of Physics and Technology, Wuhan University, Wuhan 430072, China Wuhan Institute of Quantum Technology, Wuhan 430206, China yanzhb5@mail.sysu.edu.cn Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, School of Physics, Sun Yat-Sen University, Guangzhou 510275, China Chiral symmetry plays an indispensable role in topological classifications as well as in the understanding of the origin of bulk or boundary flat bands. The conventional definition of chiral symmetry refers to the existence of a constant unitary matrix anticommuting with the Hamiltonian. As a constant unitary matrix has constant eigenvectors, boundary flat bands enforced by chiral symmetry, which share the same eigenvectors with the chiral symmetry operator, are known to carry fixed (pseudo)spin polarizations and be featureless in quantum geometry. In this work, we generalize the chiral symmetry and introduce a concept termed sub-chiral symmetry. Unlike the conventional chiral symmetry operator defined as constant, the sub-chiral symmetry operator depends on partial components of the momentum vector, so as its eigenvectors. We show that topological gapped or gapless systems without the chiral symmetry but with the sub-chiral symmetry can support boundary flat bands, which exhibit topological spin textures and quantized Berry phases. We expect that such intriguing boundary flat bands could give rise to a variety of exotic physics in the presence of interactions or disorders. Boundary Flat Bands with Topological Spin Textures Protected by Sub-chiral Symmetry Zhongbo Yan August 1, 2023 =================================================================================== The band theory is fundamental and powerful in the description of both quantum and classical periodic systems. Being an exotic type of band structure, flat bands have triggered enduring and tremendous research interest in a diversity of disciplines <cit.>. Due to the quench of kinetic energy, even weak interactions or disorders may have profound effects on a flat-band system, and this raises the possibility of the emergence of exotic correlated phases or peculiar transport phenomena. Well-known examples include interaction-driven ferromagnetism <cit.>, high-temperature superconductivity <cit.>, and disorder-driven inverse Anderson transition <cit.>, to name a few. When the flat bands carry nontrivial topology, it has been predicted that even more exotic correlated phases like fractional topological insulators with long-range entanglement can arise <cit.>. Flat bands generally imply the confined motion of electrons in real space, which may be induced by a confining potential or destructive interference effects associated with special lattice structures <cit.>. A textbook example of ideal flat bands is the Landau levels induced by a perpendicular magnetic field in two dimensions (2D) <cit.>. In this case, the magnetic field provides a potential to confine the motion of electrons. For translation invariant systems, to have perfectly flat bands, in general, requires the existence of special symmetries to constrain the Hamiltonian. The chiral symmetry is one such symmetry. 
Notably, the chiral symmetry can enable the realization of both bulk and boundary flat bands. For instance, a bipartite lattice system with chiral symmetry and sublattice imbalance will have ideal bulk flat bands <cit.>, with the number of flat bands precisely equal to the sublattice imbalance <cit.>. Quite differently, the connection between chiral symmetry and boundary flat bands is through the bulk-boundary correspondence, a central property of topological phases <cit.>. In chiral symmetric systems, a topological invariant known as winding number can be defined along 1D noncontractible loops in the Brillouin zone <cit.>. In dimensions d≥2, a nonzero momentum-dependent winding number 𝒲(_∥) dictates the existence of 𝒲 branches of zero-energy flat bands on each boundary with normal vector perpendicular to _∥. The 1D flat bands on the boundary of 2D Dirac semimetals/superconductors <cit.>, like the electronic flat bands on the zigzag edges in graphene and the Andreev flat bands in d-wave high-temperature superconductors <cit.>, and the 2D flat bands in 3D nodal-line semimetals/superconductors <cit.> are celebrated examples of this class. Despite being a generic guiding principle to realize boundary flat bands, the chiral symmetry to the boundary bands is like a double-edged sword. On the one hand, it ensures the flatness and stability of the boundary bands. On the other hand, it rules out the possibility of the presence of nontrivial quantum geometry or topology in the boundary bands. The latter is because the zero-energy boundary states are also the eigenstates of the chiral symmetry operator <cit.>, which itself is a constant unitary matrix in matrix representation. This fact implies that the eigenvectors of the boundary states are momentum-independent, so their derivatives, which determine the quantum geometry <cit.>, always vanish. The constant eigenvectors also imply that the zero-energy states on a given boundary carry fixed (pseudo)spin polarizations. In this work, we generalize the chiral symmetry and introduce a concept termed sub-chiral symmetry. A fundamental difference between the two is that the symmetry operator of the sub-chiral symmetry is momentum-dependent and itself allows a topological characterization by the winding number like the Hamiltonian. Due to this difference, the sub-chiral symmetry not only ensures the flatness and stability of the boundary bands but also allows the presence of nontrivial quantum geometry and topology in them. Intriguingly, when the winding number characterizing the sub-chiral symmetry operator is nonzero, we find that the boundary flat bands carry both topological spin textures and quantized Berry phases. Generic theory.— For a chiral symmetric Hamiltonian, there exists an operator 𝒞 anticommuting with the Hamiltonian, i.e., {𝒞,ℋ}=0. The chiral symmetry is a kind of nonspatial symmetry, so the chiral symmetry operator has no momentum dependence in any momentum-space basis <cit.>. Besides the anticommutation relation with the Hamiltonian and the momentum independence, the chiral symmetry operator needs to satisfy two more constraints, i.e., unitary and 𝒞^2=1 (here 1 denotes an identity matrix with dimension determined by the basis). In this work, we generalize the chiral symmetry by only releasing the constraint of momentum independence. As we will show below, such a generalization is justified and rather useful in understanding the properties of the topological boundary states. 
To be specific, when there exists a momentum-dependent unitary matrix satisfying 𝒞(_∥)ℋ(k_⊥,_∥)𝒞^-1(_∥) =-ℋ(k_⊥,_∥), and 𝒞^2(_∥)=1, we claim that the Hamiltonian ℋ(k_⊥,_∥) has a sub-chiral symmetry. The prefix “sub” describes the fact that the operator 𝒞(_∥) only depends on partial components of the momentum vector. Here we have decomposed the momentum vector into two parts, i.e., =(k_⊥,_∥). The component k_⊥ refers to the momentum perpendicular to the edge or surface considered to be cut open, and _∥ refers to the momentum components parallel to the edge or surface. Why the sub-chiral symmetry can still be interpreted as a chiral symmetry is because when one focuses on the topological boundary states on a given boundary, the momenta along directions with periodic boundary conditions are good quantum numbers, and can be viewed as parameters of a 1D Hamiltonian. When 𝒞(_∥) has no momentum dependence, it just goes back to the conventional chiral symmetry. When the Hamiltonian has such a sub-chiral symmetry, a momentum-dependent winding number can accordingly be defined <cit.>, 𝒲(_∥)=1/4π i∮ dk_⊥Tr[𝒞(_∥) ℋ^-1()∂_k_⊥ℋ()]. This topological invariant counts the number of zero-energy states on a boundary and at the boundary momentum _∥. Notably, if the Hamiltonian satisfies Eq.(<ref>), there must exist a unitary operator 𝒮 satisfying 𝒮^2=1 and anticommuting with the sub-chiral symmetry operator, i.e., 𝒮𝒞(_∥)𝒮^-1=-𝒞(_∥). The above equation indicates that the sub-chiral symmetry operator itself has chiral symmetry, thereby one can further introduce a winding number to characterize the sub-chiral symmetry operator, 𝒲_c=1/4π i∮_c dk_lTr[𝒮𝒞^-1(_∥) ∂_k_l𝒞(_∥)], where the integral is performed along a noncontractible or contractible loop in the boundary Brillouin zone. A noncontractible loop refers to a momentum line traversing the boundary Brillouin zone. As will be shown below, when 𝒲_c is a nonzero integer, the spin textures of boundary flat bands are also characterized by a winding number with its value equal to 𝒲_c. Furthermore, when 𝒲_c is an odd integer, we find that the boundary flat bands are characterized by a π Berry phase. Below we consider two explicit models in 2D and 3D to demonstrate the above generic physics. 1D edge flat bands with topological spin textures.— We first consider a tight-binding model in 2D, ℋ() = (m-t_xcos k_x-t_ycos k_y)σ_z+λ_2sin k_xsin k_yσ_y +λ_1(cos k_y+δ)sin k_xσ_x, where σ_x,y,z are Pauli matrices, and m, t_x,y, λ_1,2 and δ are real parameters. Depending on the concrete physical realization, the Pauli matrices may act on either real spin or pseudo spin (e.g., sublattice). To simplify the discussion, however, we do not emphasize their difference and always use spin to represent the two internal degrees of freedom. In addition, all lattice constants are set to unity throughout for notational simplicity. For the Hamiltonian in Eq.(<ref>), it is easy to see that the chiral symmetry is absent as one cannot find a constant unitary operator to be anticommuting with the Hamiltonian. According to the ten-fold way classification, this Hamiltonian belongs to the symmetry class AI as it only has the time-reversal symmetry, i.e., 𝒯ℋ() 𝒯^-1=ℋ(-), where𝒯=σ_z𝒦 with 𝒦 being the complex conjugation operator, and 𝒯^2=1. In 2D, this symmetry class does not support any strong topological insulator phase <cit.>. Nevertheless, this Hamiltonian turns out to have rather interesting bulk topology and boundary states. 
Despite the absence of chiral symmetry, this Hamiltonian has the sub-chiral symmetry defined in Eq.(<ref>), with the symmetry operator of the form 𝒞(k_y)=-sinθ(k_y)σ_x+cosθ(k_y)σ_y, where θ(k_y)=arg[λ_1(cos k_y+δ)+iλ_2sin k_y]. Based on Eq.(<ref>), a winding number can be defined, 𝒲(k_y)=1/4π i∫_-π^π dk_xTr[𝒞(k_y) ℋ^-1(k)∂_k_xℋ(k)]. For the convenience of discussion, we assume t_x,y and λ_1,2 to be positive. We consider t_x>|m-t_ycos k_y| for arbitrary k_y and δ≠± 1, so that the Hamiltonian in Eq.(<ref>) describes an insulator; then a straightforward calculation gives 𝒲(k_y)=-1 for arbitrary k_y. The sub-chiral symmetry operator given in Eq.(<ref>) obviously anticommutes with σ_z, so its chiral symmetry operator is 𝒮=σ_z. Using the formula in Eq.(<ref>), one can find 𝒲_c = 1/4π i∫_-π^π dk_yTr[σ_z𝒞^-1(k_y) ∂_k_y𝒞(k_y)] = 1/2π∫_-π^π dk_y∂θ(k_y)/∂ k_y ={[ 1, |δ|<1,; 0, |δ|>1. ]. The result indicates that the sub-chiral symmetry operator has a nontrivial winding in the regime |δ|<1. To show that a nonzero 𝒲(k_y) ensures the existence of zero-energy flat bands, we choose a set of parameters leading to 𝒲(k_y)=-1 and consider a cylindrical sample with open (periodic) boundary conditions in the x (y) direction. As shown in Fig.<ref>(a), the numerical result confirms the existence of one zero-energy flat band on each x-normal edge, verifying the correspondence with the winding number 𝒲(k_y). While a zero-energy flat band protected by chiral symmetry is dictated to carry a fixed spin polarization, below we will show that the situation drastically changes for the zero-energy flat bands protected by sub-chiral symmetry. To show this, we analytically extract the spin textures of the zero-energy edge flat bands exemplified in Fig.<ref>(a). Here a key observation is that the eigenvector of the zero-energy flat bands at a given boundary momentum must also be an eigenvector of the sub-chiral symmetry operator at the same momentum (see more discussions in the Supplemental Material <cit.>), which is reminiscent of the relation between zero-energy bound states and the chiral symmetry operator. The explicit steps are as follows. First, according to Eq.(<ref>), it is easy to find that the two eigenvectors of 𝒞(k_y) are given by |𝒞(k_y)=±1⟩=1/√(2)([ 1; ± ie^iθ(k_y) ]). Second, consider open boundary conditions in the x direction. Without loss of generality, we assume the lattice-site number in the x direction to be N. Accordingly, the Hamiltonian becomes a 2N× 2N matrix, and the form of the sub-chiral symmetry operator is expanded as 𝒞̃(k_y)=𝐈_N⊗𝒞(k_y), where 𝐈_N stands for the N-by-N identity matrix. The eigenvectors of the zero-energy states at the x-normal edges take the following general form (explicit expressions can be found in Ref. <cit.>) |Ψ_α(k_y)⟩=(ξ_1,ξ_2,...,ξ_N-1,ξ_N)^T⊗|𝒞(k_y)=β⟩, where α labels the left or right x-normal edge, and β=-1 (left edge) or 1 (right edge). The ξ_i are real parameters (their explicit expressions are unimportant for the discussion below), and the normalization condition requires ∑_i|ξ_i|^2=1. Based on |Ψ_α⟩, one can determine the spin textures of the zero-energy edge flat bands by using the formula σ̅_i^α(k_y)=⟨Ψ_α(k_y)|𝐈_N⊗σ_i|Ψ_α(k_y)⟩, which gives σ̅_x^α(k_y) = -βsinθ(k_y), σ̅_y^α(k_y) = βcosθ(k_y), σ̅_z^α(k_y) = 0. Apparently, the spin textures wind n times around the origin if the argument θ(k_y) changes by 2nπ as k_y varies from -π to π. In addition, the factor β indicates that the spin textures on the two edges are just opposite. 
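The two winding numbers introduced above can be checked numerically. The sketch below (our illustration under the same assumed parameter values as before, not the authors' code) evaluates 𝒲(k_y) from the trace formula by numerical integration and 𝒲_c from the winding of θ(k_y); per the text, one expects 𝒲(k_y)≈-1 for every k_y and 𝒲_c≈1 for |δ|<1.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
m, tx, ty, lam1, lam2, delta = 0.5, 1.0, 0.3, 1.0, 1.0, 0.5  # assumed values, |delta| < 1

def h2d(kx, ky):
    return ((m - tx*np.cos(kx) - ty*np.cos(ky))*sz
            + lam2*np.sin(kx)*np.sin(ky)*sy
            + lam1*(np.cos(ky) + delta)*np.sin(kx)*sx)

def dh_dkx(kx, ky):  # analytic derivative of H with respect to k_x
    return (tx*np.sin(kx)*sz
            + lam2*np.cos(kx)*np.sin(ky)*sy
            + lam1*(np.cos(ky) + delta)*np.cos(kx)*sx)

def theta(ky):
    return np.angle(lam1*(np.cos(ky) + delta) + 1j*lam2*np.sin(ky))

def C(ky):  # sub-chiral symmetry operator C(k_y)
    return -np.sin(theta(ky))*sx + np.cos(theta(ky))*sy

def winding_W(ky, nk=2001):
    """W(k_y) = (1/4*pi*i) * integral of Tr[C(k_y) H^{-1} dH/dk_x] over k_x."""
    kxs = np.linspace(-np.pi, np.pi, nk)
    integrand = [np.trace(C(ky) @ np.linalg.inv(h2d(kx, ky)) @ dh_dkx(kx, ky))
                 for kx in kxs]
    return np.trapz(integrand, kxs) / (4j*np.pi)

# W(k_y) should be close to -1 for every k_y in the gapped regime.
print([np.round(winding_W(ky).real, 3) for ky in np.linspace(-np.pi, np.pi, 7)])

# W_c equals the winding of theta(k_y); np.unwrap removes the 2*pi jumps.
kys = np.linspace(-np.pi, np.pi, 4001)
ang = np.unwrap(theta(kys))
print("W_c ~", np.round((ang[-1] - ang[0]) / (2*np.pi), 3))  # ~1 for |delta|<1, ~0 for |delta|>1
```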
In Fig.<ref>(b), we show the spin textures for one zero-energy edge flat band. A complete cycle of the winding of the spin polarizations is evident. As the result in Eq.(<ref>) reveals, the spin polarizations always lie in the xy plane, so a winding number can be further introduced to characterize the spin textures. Its form is 𝒲_s^α = 1/2π∫_-π^π dk_y(σ̅_x^α∂_k_yσ̅_y^α- σ̅_y^α∂_k_yσ̅_x^α)=𝒲_c. The above equation suggests that the nontrivial winding of the spin textures originates from the nontrivial winding of the sub-chiral symmetry operator. Next, let us analyze the quantum geometry of the zero-energy edge flat bands. For zero-energy boundary flat bands protected by chiral symmetry, their Berry connections can always be made identically vanishing as their eigenvectors have no momentum dependence. Hence, their Berry phases always take the trivial value, i.e., ϕ=0 (mod 2π). In the current case, again the situation is drastically different. Based on the eigenvectors in Eq.(<ref>), the Berry connections for the zero-energy edge flat bands are given by A_α(k_y) = -i⟨Ψ_α(k_y)|∂_k_y|Ψ_α(k_y)⟩ = -i⟨𝒞(k_y)=β|∂_k_y|𝒞(k_y)=β⟩ = 1/2∂θ(k_y)/∂ k_y. The second equality uses the fact that the ξ_i are real, so the Berry connections are also real. Immediately, one finds that the Berry phases associated with the two edge flat bands are <cit.> ϕ_L=ϕ_R=∫_-π^πA_L/R(k_y)dk_y=𝒲_cπ (mod 2π). The result indicates that, in the regime 𝒲_c=1, the flat edge bands are characterized by a quantized π Berry phase. 2D surface flat bands with topological spin textures.— Let us generalize the study to 3D and consider a topological semimetal to illustrate the generality of the physics. We consider the following simple model, ℋ(k) = (m-t∑_i=x,y,zcos k_i)σ_z+λsin k_zsin k_xsin k_yσ_y +λsin k_z(cos k_x-cos k_y)σ_x. This Hamiltonian is the tight-binding counterpart of the low-energy continuum Hamiltonian developed by Xu et al. to describe the low-energy physics of a magnetic Weyl semimetal candidate HgCr_2Se_4 <cit.>. For the convenience of discussion, we again consider the parameters t and λ to be positive. Due to the existence of a mirror symmetry ℳ_z=iσ_z, this Hamiltonian can support not only Weyl points but also nodal rings. Let us consider t<m<3t, which leads to the presence of two Weyl points at k_w=±(0,0,arccos((m-2t)/t)) and a nodal ring in the k_z=0 plane. The nodal ring corresponds to the momentum contour satisfying t(cos k_x+cos k_y)=m-t. While the Weyl points and the concomitant Fermi arcs are of central interest in previous studies of this Hamiltonian <cit.>, here we will focus on the zero-energy surface flat bands associated with the nodal ring. It is easy to see that this Hamiltonian also does not have the chiral symmetry, but has a sub-chiral symmetry with the symmetry operator of the form 𝒞(k_x,k_y)=-sinθ(k_x,k_y)σ_x+cosθ(k_x,k_y)σ_y, where θ(k_x,k_y)=arg[(cos k_x-cos k_y)+isin k_xsin k_y]. Similarly, following Eq.(<ref>), one obtains 𝒲(k_x, k_y)={[ -1, (k_x,k_y) inside the nodal ring,; 0, (k_x,k_y) outside the nodal ring. ]. This indicates that when the z direction is cut open, zero-energy surface states exist only when the surface momentum (k_x, k_y) falls inside the z-directional projection of the nodal ring. In Fig.<ref>(a), the numerical result confirms that the zero-energy surface bands do exist only in the mentioned region, verifying the bulk-boundary correspondence. Next, as in the 2D case, let us first explore the topological property of the sub-chiral symmetry operator. 
According to Eq.(<ref>), it is easy to find that its chiral symmetry operator 𝒮 is also given by σ_z. Applying Eq.(<ref>), one obtains 𝒲_c = 1/4π i∮_c dk_lTr[σ_z𝒞^-1(k_x,k_y) ∂_k_l𝒞(k_x,k_y)]=-2, where the integral contour is a loop enclosing the origin of the surface Brillouin zone and falls inside the projection of the nodal ring. Here 𝒲_c=-2 is simply because the argument θ(k_x,k_y) will change 4π when the polar angle of the momentum changes 2π. Following the same analysis applied in 2D, one can find that the spin textures of the two zero-energy surface flat bands are given by σ̅_x^α(k_x,k_y) = -βsinθ(k_x, k_y), σ̅_y^α(k_x,k_y) = βcosθ(k_x,k_y), σ̅_z^α(k_x,k_y) = 0. Here α labels the top and bottom z-normal surfaces, and β=1 (-1) for the top (bottom) surface, revealing that the spin textures on the top and bottom surfaces are opposite. Compared to θ(k_y) in the two-dimensional model, here θ(k_x, k_y) varies twice faster along a loop around the origin, corresponding to a faster variation of the spin textures, as shown in Fig.<ref>(b). Using the formula for 𝒲_s^α in Eq.(<ref>), it is easy to find 𝒲_s^α=𝒲_c, confirming again the one-to-one correspondence between the winding of the spin textures and the winding of the sub-chiral symmetry operator. For this specific model, as 𝒲_c=-2, the Berry phase along a closed curve around the origin is thus trivial (mod 2π). Discussions and conclusions.— The generic theory and the two exemplary models above reveal that the sub-chiral symmetry is another generic symmetry principle to realize boundary flat bands. As boundary flat bands protected by sub-chiral symmetry, unlike those protected by chiral symmetry, can carry topological spin textures and quantized Berry phases, they provide a more appealing basis to explore exotic phases driven by interaction or disorder effects. About the experimental implementation of such boundary flat bands, we have exemplified that they can naturally appear in some topological quantum materials. Of course, they can also be easily implemented in artificial systems with even higher flexibility <cit.>. In conclusion, we introduce the sub-chiral symmetry and reveal a class of boundary flat bands with fascinating properties. Our study enriches the types of flat bands and hence opens a direction for the study of flat-band-related physics. Acknowledgements.— Y. M., X.-J. W., and Z. Y. are supported by the National Natural Science Foundation of China (Grant No. 12174455) and the Natural Science Foundation of Guangdong Province (Grant No. 2021B1515020026). R. Y. is supported by the National Natural Science Foundation of China (Grant No. 12274328), and the Beijing National Laboratory for Condensed Matter Physics. Supplemental Material for “Boundary Flat Bands with Topological Spin Textures Protected by Sub-chiral Symmetry” Yijie Mo^1, Xiao-Jiao Wang^1, Rui Yu^2,3,*, Zhongbo Yan^1,† ^1Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, School of Physics, Sun Yat-Sen University, Guangzhou 510275, China ^2School of Physics and Technology, Wuhan University, Wuhan 430072, China ^3Wuhan Institute of Quantum Technology, Wuhan 430206, China The supplemental material contains two sections: (I) Topological characterization of the two-dimensional model; (II) Details for the derivation of spin textures and quantized Berry phases of the zero-energy edge flat bands in the two-dimensional model. § I. 
TOPOLOGICAL CHARACTERIZATION OF THE TWO-DIMENSIONAL MODEL For the two-dimensional model considered in the main text, ℋ() = (m-t_xcos k_x-t_ycos k_y)σ_z+λ_2sin k_xsin k_yσ_y +λ_1(cos k_y+δ)sin k_xσ_x, the energy spectra are E_±()=±√((m-t_xcos k_x-t_ycos k_y)^2+[λ_1^2(cos k_y+δ)^2+λ_2^2sin^2 k_y]sin^2k_x). Without loss of generality, we consider t_x,y and λ_1,2 to be positive. Then the energy spectra have a gap if δ≠±1 and min{|m+t_x|,|m-t_x|}>t_y. In contrast, as long as t_y>|m-t_x|, there are two nodal points on the k_x=0 line. Similar nodal points also appear on the k_x=π line if t_y>|m+t_x|. Let us focus on the insulator case, then the winding number 𝒲(k_y) is well defined for all k_y from -π to π. According to the formula 𝒲(k_y)=1/4π i∫_-π^π dk_xTr[𝒞(k_y) ℋ^-1()∂_k_xℋ()], where 𝒞(k_y)=-sinθ(k_y)σ_x+cosθ(k_y)σ_y with θ(k_y)=[λ_1(cos k_y+δ)+iλ_2sin k_y]. A straightforward calculation gives 𝒲(k_y)=-1/2π∫_-π^π dk_x√([λ_1^2(cos k_y+δ)^2+λ_2^2sin^2 k_y])[t_x-cos k_x(m-t_ycos k_y)]/E_+^2(). Numerical integration reveals that if t_x>|m| and min{|m+t_x|,|m-t_x|}>t_y, 𝒲(k_y)=-1, for k_y∈(-π,π). Accordingly, if the x direction is cut open, each x-normal edge will harbor one zero-energy flat band. Below, we will show that the zero-energy flat bands carry topological spin textures as well as a quantized π-Berry phases when |δ|<1. § II. DETAILS FOR THE DERIVATION OF SPIN TEXTURES AND QUANTIZED BERRY PHASES OF THE ZERO-ENERGY EDGE FLAT BANDS IN THE TWO-DIMENSIONAL MODEL To determine the spin textures and Berry phases of the zero-energy edge flat bands, we first need to determine the eigenvectors of the zero-energy boundary states. Consider open boundary conditions in the x direction and periodic boundary conditions in the y direction. Choosing the basis Ψ=(c_1,↑,k_y,c_1,↓,k_y,..., c_N,↑,k_y,c_N,↓,k_y)^T, the Hamiltonian is accordingly given by H=( [ m(k_y) 0 -t_x/2 -iΛ(k_y)e^-iθ(k_y)/2 0 0 ⋯; 0 -m(k_y) -iΛ(k_y)e^iθ(k_y)/2 t_x/2 0 0 ⋯; -t_x/2 iΛ(k_y)e^-iθ(k_y)/2 m(k_y) 0 -t_x/2 -iΛ(k_y)e^-iθ(k_y)/2 ⋯; iΛ(k_y)e^iθ(k_y)/2 t_x/2 0 -m(k_y) -iΛ(k_y)e^iθ(k_y)/2 t_x/2 ⋯; 0 0 -t_x/2 iΛ(k_y)e^-iθ(k_y)/2 m(k_y) 0 ⋯; 0 0 iΛ(k_y)e^iθ(k_y)/2 t_x/2 0 -m(k_y) ⋯; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱; ]), where m(k_y)=m-t_ycos k_y, Λ(k_y)=√(λ_1^2(cos k_y+δ)^2+λ_2^2sin^2k_y), and θ(k_y)=[λ_1(cos k_y+δ)+iλ_2sin k_y]. Solving the eigenvalue equation H|Ψ⟩=E|Ψ⟩, where |Ψ⟩=(ψ_1↑,ψ_1↓,...,ψ_N↑,ψ_N↓)^T. For zero-energy eigenstates, one can find that they satisfy the following iteration equations, m(k_y)ψ_1↑-t_x/2ψ_2↑-iΛ(k_y)e^-iθ(k_y)/2ψ_2↓=0, -m(k_y)ψ_1↓-iΛ(k_y)e^iθ(k_y)/2ψ_2↑+t_x/2ψ_2↓=0, -t_x/2ψ_1↑+iΛ(k_y) e^-iθ(k_y)/2ψ_1↓+m(k_y)ψ_2↑ -t_x/2ψ_3↑-iΛ(k_y) e^-iθ(k_y)/2ψ_3↓=0, iΛ(k_y) e^iθ(k_y)/2ψ_1↑+t_x/2ψ_1↓-m(k_y)ψ_2↓ -iΛ(k_y)e^iθ(k_y)/2ψ_3↑+t_x/2ψ_3↓=0, ... -t_x/2ψ_n-1↑+iΛ(k_y) e^-iθ(k_y)/2ψ_n-1↓+m(k_y)ψ_n↑ -t_x/2ψ_n+1↑-iΛ(k_y) e^-iθ(k_y)/2ψ_n+1↓=0, iΛ(k_y) e^iθ(k_y)/2ψ_n-1↑+t_x/2ψ_n-1↓-m(k_y)ψ_n↓ -iΛ(k_y)e^iθ(k_y)/2ψ_n+1↑+t_x/2ψ_n+1↓=0,.... Since the Hamiltonian has the sub-chiral symmetry, the zero-energy states on the boundary must be the eigenstates of the sub-chiral symmetry operator. This fact can be simply inferred by noting that if |Ψ⟩ is the eigenvector of a zero-energy state, i.e., H|Ψ⟩=0, then as H𝒞(k_y)|Ψ⟩=-𝒞(k_y)H|Ψ⟩=0, and 𝒞(k_y) is an on-site operator for a given k_y (the good quantum number k_y can be treated as a parameter when considering the boundary states on the x-normal edges), 𝒞(k_y)|Ψ⟩ must be equal to |Ψ⟩ up to a phase factor. 
On the other hand, since 𝒞(k_y)^2=1, one obtains 𝒞(k_y)|Ψ⟩=|Ψ⟩, or 𝒞(k_y)|Ψ⟩=-|Ψ⟩, confirming that the eigenvector of a zero-energy state is also the eigenvector of the sub-chiral symmetry operator. As the sub-chiral symmetry operator takes the form 𝒞(k_y)=-sinθ(k_y)σ_x+cosθ(k_y)σ_y, its two eigenvectors read |𝒞(k_y)=1⟩=1/√(2)([ 1; ie^iθ(k_y) ]), |𝒞(k_y)=-1⟩=1/√(2)([ 1; -ie^iθ(k_y) ]), where |𝒞(k_y)=±1⟩ satisfy 𝒞(k_y)|𝒞(k_y)=±1⟩=±|𝒞(k_y)=±1⟩. Accordingly, for the zero-energy states, the spinor (ψ_j↑,ψ_j↓)^T for each unit cell must be proportional to either |𝒞(k_y)=1⟩ or |𝒞(k_y)=-1⟩. Let us focus on the zero-energy states on the left x-normal edge. If one considers the special limit m(k_y)=0 and Λ(k_y)=t_x, it is easy to see that the wave function for the zero-energy state is of the simple form |Ψ_L⟩=1/√(2)(1,-ie^iθ(k_y),0,0,0,0,...)^T. Based on this special case, one knows that for the zero-energy state on the left x-normal edge, the spinor (ψ_j↑,ψ_j↓)^T is proportional to |𝒞(k_y)=-1⟩. With this observation, for the more generic case, we can assume that the wave function takes the following trial form, ψ_j=([ ψ_j↑; ψ_j↓ ])=Cξ^j([ 1; -ie^iθ(k_y) ])/√(2), where |ξ|<1 must be enforced to ensure the decaying nature of the wave function of the zero-energy bound state, and C is a constant for latter normalization. Taking the above expression into the series of iterative equations, one can find that they become (-t_x/2+Λ(k_y)/2)ξ^j-1+m(k_y)ξ^j-(t_x/2+Λ(k_y)/2)ξ^j+1=0 for j≥2, which can be further reduced to the following equation, (t_x+Λ(k_y))ξ^2-2m(k_y)ξ+(t_x-Λ(k_y))=0. The solutions are ξ_±=m(k_y)±√(m^2(k_y)+Λ(k_y)^2-t_x^2)/t_x+Λ(k_y). As Λ(k_y)>0, one can verify |ξ_±|<1 as long as |m(k_y)|<t_x, which is consistent with the bulk-boundary correspondence, which says that when the bulk topological invariant 𝒲(k_y)=-1, each x-normal edge will harbor one zero-energy bound state. Because of the existence of two solutions for ξ, the wave function for the zero-energy state will take the form ([ ψ_j↑; ψ_j↓ ])=𝒩(C_+ξ_+^j+C_-ξ_-^j)([ 1; -ie^iθ(k_y) ])/√(2), where 𝒩 stands for the normalization constant. Enforcing the physical boundary condition ψ_j↑=ψ_j↓=0 for j=0, one obtains C_+=-C_-=C. Bringing the above expression into the first two equations in the series shown in Eq.(<ref>), one gets C(ξ_+-ξ_-)m(k_y)-(t_x+Λ(k_y))/2C(ξ_+^2-ξ_-^2)=0. It is easy to find that the equation is naturally satisfied regardless of the value of C. Therefore, we can set C=1. Accordingly, we obtain ([ ψ_j↑; ψ_j↓ ])=𝒩(ξ_+^j-ξ_-^j)([ 1; -ie^iθ(k_y) ])/√(2). Before proceeding, it is worth pointing out that ξ_+ and ξ_- are either both real or complex conjugate to each other. This fact implies that the spatial part of the wave function can always be made real, a property that will be used to derive the Berry connection and Berry phase. Using the normalization condition, ∑_j=0^∞(|ψ_j↑|^2+|ψ_j↓|^2) = 𝒩^2[|ξ_+-ξ_-|^2+|ξ_+^2-ξ_-^2|^2+...+|ξ_+^n-ξ_-^n|^2+...] = 𝒩^2[∑_n=1|ξ|_+^2n-∑_n=1(ξ_+^*ξ_-)^n-∑_n=1(ξ_+ξ_-^*)^n+∑_n=1|ξ_-|^2n] = 𝒩^2[|ξ_+|^2/1-|ξ_+|^2+|ξ_-|^2/1-|ξ_-|^2-ξ_+^*ξ_-/1-ξ_+^*ξ_- -ξ_+ξ_-^*/1-ξ_+ξ_-^*] = 1, one determines 𝒩=1/√([|ξ_+|^2/1-|ξ_+|^2+|ξ_-|^2/1-|ξ_-|^2-ξ_+^*ξ_-/1-ξ_+^*ξ_- -ξ_+ξ_-^*/1-ξ_+ξ_-^*]). Let us calculate the Berry connection, A_L(k_y) = -i⟨Ψ_L(k_y)|∂_k_y|Ψ_L(k_y)⟩ = 1/2∂θ(k_y)/∂ k_y-i(𝒩^2∑_n=1(ξ_+^n*-ξ_-^n*)n(ξ_+^n-1∂ξ_+/∂ k_y-ξ_-^n-1∂ξ_-/∂ k_y)+𝒩∂𝒩/∂ k_y∑_n=1|ξ_+^n-ξ_-^n|^2). 
When ξ_+ and ξ_- are real (when Λ(k_y)>t_x for arbitrary k_y, the realness of ξ_+ and ξ_- is ensured), the terms in the bracket must cancel each other, as the intraband Berry connection is a real quantity. The other situation is that ξ_+ and ξ_- are complex conjugates of each other, i.e., ξ_+^*=ξ_-. It is easy to see that, in this situation, the two terms in the bracket are also real, so they again have to cancel each other for the same reason. Accordingly, one can find that the Berry connection is given by A_L(k_y)=1/2∂θ(k_y)/∂ k_y =-i⟨𝒞(k_y)=-1|∂_k_y|𝒞(k_y)=-1⟩. Consequently, the zero-energy flat band on the left x-normal edge is characterized by a quantized Berry phase, ϕ_L=∫_-π^πA_L(k_y)dk_y=1/2(θ(π)-θ(-π))={[ π (mod 2π), |δ|<1,; 0 (mod 2π), |δ|>1. ]. A similar analysis reveals that the zero-energy flat band on the right x-normal edge is also characterized by a quantized Berry phase, and ϕ_R = ∫_-π^πA_R(k_y)dk_y = -i∫_-π^π⟨𝒞(k_y)=1|∂_k_y|𝒞(k_y)=1⟩ dk_y = 1/2∫_-π^π∂θ(k_y)/∂ k_y dk_y = 1/2(θ(π)-θ(-π))=ϕ_L. For the spin textures of the zero-energy flat band on the left x-normal edge, it is easy to find that σ̅_x,L = ∑_jψ_j^†σ_xψ_j=sinθ(k_y), σ̅_y,L = ∑_jψ_j^†σ_yψ_j=-cosθ(k_y), σ̅_z,L = ∑_jψ_j^†σ_zψ_j=0. A similar analysis shows that the spin textures of the zero-energy flat band on the right x-normal edge are just opposite, i.e., σ̅_x,R = -sinθ(k_y), σ̅_y,R = cosθ(k_y), σ̅_z,R = 0. The derivation for the three-dimensional model is similar, and hence we do not repeat the details.
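As a numerical cross-check of this derivation (our own sketch, with assumed parameter values), one can build the analytic left-edge wave function from ξ_± above, evaluate its spin texture, and compute the Berry phase from a discrete Wilson loop over k_y; for |δ|<1 the spin winding should equal 𝒲_c=1 and the Berry phase should be π (mod 2π).

```python
import numpy as np

# Assumed parameters with |delta| < 1 and W(k_y) = -1 (see Sec. I).
m, tx, ty, lam1, lam2, delta = 0.5, 1.0, 0.3, 1.0, 1.0, 0.5
N = 60  # number of lattice sites kept in the x direction

def edge_state(ky):
    """Normalized left-edge zero mode built from the analytic solution above."""
    mk = m - ty*np.cos(ky)
    Lk = np.sqrt(lam1**2*(np.cos(ky) + delta)**2 + lam2**2*np.sin(ky)**2)
    th = np.angle(lam1*(np.cos(ky) + delta) + 1j*lam2*np.sin(ky))
    disc = np.sqrt(mk**2 + Lk**2 - tx**2 + 0j)
    xi_p = (mk + disc) / (tx + Lk)
    xi_m = (mk - disc) / (tx + Lk)
    spatial = np.array([xi_p**j - xi_m**j for j in range(1, N + 1)])
    spinor = np.array([1.0, -1j*np.exp(1j*th)]) / np.sqrt(2)   # |C(k_y) = -1>
    psi = np.kron(spatial, spinor)
    return psi / np.linalg.norm(psi)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

kys = np.linspace(-np.pi, np.pi, 401)   # first and last points describe the same state
states = [edge_state(ky) for ky in kys]

# Spin textures: expect <sigma_x> = sin(theta), <sigma_y> = -cos(theta).
Sx = [np.vdot(p, np.kron(np.eye(N), sx) @ p).real for p in states]
Sy = [np.vdot(p, np.kron(np.eye(N), sy) @ p).real for p in states]
ang = np.unwrap(np.arctan2(Sy, Sx))
print("spin winding ~", np.round((ang[-1] - ang[0]) / (2*np.pi), 3))  # ~1 = W_c

# Berry phase from a discrete Wilson loop over the periodic k_y ring.
overlaps = [np.vdot(states[i], states[i + 1]) for i in range(len(states) - 1)]
berry_phase = -np.angle(np.prod(overlaps))
print("Berry phase  ~", np.round(berry_phase, 3))  # ~ +/- pi, i.e., pi (mod 2*pi)
```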
http://arxiv.org/abs/2307.06863v1
20230703151453
Measuring a Low-Earth-Orbit Satellite Network
[ "Jianping Pan", "Jinwei Zhao", "Lin Cai" ]
cs.NI
[ "cs.NI" ]
Measuring a Low-Earth-Orbit Satellite Network Jianping Pan, Jinwei Zhao and Lin Cai University of Victoria, BC, Canada July 2023 ============================================================================ Starlink and alike have attracted a lot of attention recently; however, the inner workings of these low-earth-orbit (LEO) satellite networks are still largely unknown. This paper presents an ongoing measurement campaign focusing on Starlink, including its satellite access networks, gateway and point-of-presence structures, and backbone and Internet connections, revealing insights applicable to other LEO satellite providers. It also highlights the challenges and research opportunities of the integrated space-air-ground-aqua network envisioned by 6G mobile communication systems, and calls for a concerted community effort on the practical and experimental fronts. LEO satellites, Starlink, network measurement § INTRODUCTION The rapid growth of Starlink <cit.>, including over 4,000 launched low-earth-orbit (LEO) satellites and the associated ground infrastructure, as well as over 1.5 million subscribers around the world currently, has attracted a lot of attention from the industry and research community <cit.>. Competing efforts such as OneWeb and Kuiper are also underway. They promise global coverage and reach under-served populations with broadband Internet. They further revolutionize the landscape of traditional satellite communication (SatCom) systems. However, the inner workings of Starlink and alike are still largely unknown, and Starlink itself is also a moving target with continuous improvement and more satellites. Compared to traditional SatCom systems, Starlink and alike employ many more LEO satellites much closer to ground users with significantly lower propagation delay, and offer much more capacity. On the other hand, both the user terminal (UT) and the ground station (GS) have to switch between LEO satellites constantly to maintain connectivity. In addition, Starlink will eventually use inter-satellite link (ISL) connectivity throughout its entire system to fully exploit the speed-of-light advantage in space over optical fibers. This provides a rare opportunity for the communication and networking industry and research communities to study the integrated space-air-ground-aqua (SAGA) network currently envisioned by 6G mobile communication systems. This paper presents an ongoing measurement campaign focusing on Starlink, as the only operational LEO SatCom system with a massive user base now. Starting with a few UTs, we measured Starlink access networks, gateway and point-of-presence (PoP) structures, and backbone and Internet connections, leveraging the viewpoints and assistance of RIPE Atlas probes <cit.> and Reddit Starlink enthusiasts <cit.>, as well as Starlink's regulatory filings <cit.>. The revealed insights not only show the differences from traditional SatCom and conventional terrestrial Internet service providers (ISPs) but also apply to other LEO satellite providers. This can help the research community envision the 6G SAGA networks better. More importantly, this paper highlights the challenges and research opportunities of SAGA networks and calls for a concerted community effort beyond RIPE Atlas, given the geographical coverage and diversity of Starlink and alike. With more community involvement from different locations around the world, the research community can have a better and more accurate understanding of Starlink and other LEO systems. 
Unlike the traditional SatCom systems with a few giant geosynchronous equatorial orbit (GEO) satellites and GSs and the conventional terrestrial ISPs mainly with regional coverage, Starlink and alike pose their unique challenges deserving global attention right now. The rest of the paper is organized as follows. In Section <ref>, we briefly introduce Starlink including its user equipment and service offerings and related research efforts. After user-perceived performance metrics such as throughput, delay and loss with different UT and service prioritization, we present the Starlink access networks in Section <ref>, gateway and PoP structures in Section <ref>, and backbone and Internet connections in Section <ref> with detailed topology diagrams first discovered by us, followed by research challenges, opportunities and community efforts in Section <ref>. Section <ref> concludes the paper with further remarks. § STARLINK BACKGROUND AND RELATED WORK §.§ Starlink in a Nutshell Starlink <cit.> is a LEO satellite network for broadband Internet access and backbone, put together by SpaceX with its revolutionary reusable rocket launch technology and commodity satellite manufacturing process, which greatly reduced the cost, complexity and turnaround time of building and maintaining an operational SatCom system. Currently Starlink has more than 4,000 LEO satellites in different generations launched into orbit in different shells (inclination and altitude combinations), with several thousands more approved and to be launched, but the regulatory filings are still changing and adapted to the need of SpaceX. Most current satellites are in 53^∘ inclination at 550km above the Earth. In addition to satellite telemetry, tracking and control (TTC) channels, each satellite has a number of UT and GS-facing phased-array antennas in Ku and Ka bands, respectively, with E bands added to new generations and cellular bands to future ones. The specific frequency and transmission power are regulated by different countries at different locations. Similar to traditional SatCom systems, these frequencies are susceptible to atmospheric impairments and obstacles including heavy rain, snow and trees. Both UT and GS antennas (commonly known as “flat” and “dome” dish, respectively) need a clear view of the sky with 25^∘ minimal elevation above the horizon. Starlink subscribers receive a self-installation kit with different mounting options. The first-generation “round” dish has similar performance as the second-generation rectangular dish (“v3” currently), albeit a larger surface and more antenna elements as the latter has a higher efficiency. Round and v3 dishes are widely used for standard/residential services. The high-performance “HP” square dish has more power and higher efficiency, and is used for priority/business, maritime and in-motion mobility/roaming services, with more portable dishes upcoming. Starlink subscribers pay for their dish upfront, and their monthly fee determined by the service subscription, without long-term contracts or hard data caps. Once Starlink subscribers installed the dish, they can use Starlink-provided router with WiFi, or use their own router through the Ethernet port or adapter, to connect their devices to the Internet, just like other forms of Internet access, where the router provides the network address translation (NAT) functionality to allow multiple user devices connected to the Internet with a public IP address. 
For Starlink business users, the public IP address is static and accessible from the Internet, while other types of Starlink users share public IP addresses through Starlink's carrier-grade NAT (CGNAT) at the GS, where the public IP address reflects the associated PoP. §.§ Related Work Research on SatCom has a long history, but mostly focused on GEO satellites. Even before Starlink was launched, the new approach to LEO SatCom has attracted a lot of attention from the research community. With regulatory filings, various geometry-based simulation appeared with a back-of-the-envelop calculation of Starlink system capacity and user performance, some assuming ISL and ground or vessel relay, etc <cit.>. Simulation-based network performance studies also emerged, as well as network emulation and testbed construction <cit.>. On the other hand, many enthusiastic Starlink users, some from the initial beta test program, reported their experience on Reddit, despite some inconsistency <cit.>. One notable network measurement effort is enabled by RIPE Atlas <cit.>, with more than 10 thousand active probes and anchors deployed by Internet users and ISPs around the world, actively measuring network liveliness using tools such as ping, traceroute and nslookup. Currently, there are about 65 active probes in about 17 Starlink access networks, located in the USA, Canada, Australia, New Zealand, and some European, Asian and South American countries. However, compared to the Starlink global coverage so far in more than 50 countries, there are still considerable regions without any active Atlas probe. One purpose of this paper is to promote more community members to host Atlas probes. The most related work is a measurement study to a few Starlink dishes from cloud data centers around the world, and the results are thus also dominated by the global Internet <cit.>. In this work, we have access to a few dishes while also leveraging Atlas probes and Reddit assistance. In particular, we focus on the Starlink access networks with different dishes and subscription tiers, the gateway and PoP structures at different locations, and the backbone and Internet connections with other ISPs. Through network measurement for almost a year, we present the first detailed topology diagrams of Starlink access, PoP and backbone networks after crosschecked with Atlas and Reddit communities with high confidence. § STARLINK NETWORK MEASUREMENT We acquired several Starlink dishes and gained access to a dozen more through the assistance of Starlink users, initially associated with the Seattle PoP and now expanding to other PoPs, complemented by the ability of RIPE Atlas probes and the data provided by Reddit users around the world, so we can measure the Starlink system from multiple vantage points with different dishes and service tiers. The results are then validated by more Reddit Starlink users, representing the state-of-the-art understanding of Starlink other than by SpaceX itself. §.§ Starlink Access Networks Starlink access networks include the user network facilitated by the Starlink-provided or user-provisioned router, connecting multiple user devices to the Internet through a NAT on the router. The router is connected to the Starlink dish, collectively known as Starlink UT, through a power-over-Ethernet (PoE) cable. The user dish communicates with ground dish, through one or possibly multiple satellites. 
Users are grouped into service cells, roughly 24km in diameter, and a rule-of-thumb capacity limit now is “100 users in every 300 km^2.” §.§.§ Access Topology and Structure The access network topology and structure is shown in Fig. <ref>. Starlink-provided router is fixed at 192.168.1.1 (user-provisioned router can use other private address), and the Starlink dish is always at 192.168.100.1, from the user's viewpoint, so users may need to add a static route to reach the dish if using their own router. From Starlink side, user router will have a CGNAT address in 100.64/10 and Starlink can remotely access Starlink-provided routers with consent. Starlink dish shall have a 100.64/10 address too, although it has not been independently verified as the communication between the dish and its gateway is inaccessible now and claimed to be encrypted. Starlink satellites, also known as “birds”, have no user visibility at IP layer. Not all satellites are now ISL capable or have ISL enabled, which can be observed through the delay between user and ground dishes. Each satellite has multiple antennas subdivided into beams. According to a patent granted to SpaceX <cit.>, Starlink uses a very simple grouping-based media access control (MAC) scheme at the satellite for UT and GS links. Normally four users share a communication channel, and the satellite polls user dishes periodically to grant access for the uplink. With larger propagation delay due to the distance and considerable MAC delay due to the polling, the minimum Starlink access round-trip time (RTT) is about 20ms. The ground dish is always at 100.64.0.1 from user's viewpoint, which is also the CGNAT, i.e., translating user's 100.64/10 address to a public IPv4 address. For business users, the public address is statically bounded at their router and also reachable from the Internet, and for other users, it is shared and temporary, and user router cannot be reached directly unless proper NAT traversal is done at CGNAT, e.g., through tailscale. With IPv6, Starlink users are better reached but not all networks are IPv6 operational now, so we only focus on IPv4 in this paper. The public address reflects the location of users and their subscription, specific to the associated PoP, for geo-location and third-party service provisioning. §.§.§ Access Measurement and Performance Figure <ref> shows the latency and throughput performance comparison of Starlink dishes and service tiers. HP and v3 dishes have similar latency performance to 100.64.0.1 for residential users, but HP dish can have a higher throughput to an iPerf3 server peered at the associated PoP for both downlink and uplink (not shown, 14.3 vs 6.94Mbps) due to more antenna elements, higher transmission power and better capability to track satellites. On the other hand, business users, which also use HP dishes, have considerably lower latency than roaming users, who similar to best-effort and portability users have the lowest priority when compared with residential, business and maritime users. The ping time sequence shows that the RTT to the gateway is highly fluctuating, reflecting the nature of satellite handover and competing users sharing the same beam. Comparing with other Internet access and home WiFi technologies as shown in Fig. 
<ref>, Starlink access RTT is indeed higher than fiber optics, digital subscriber line (DSL) and cable modem (although the cable uplink can be quite bursty due to its shared neighborhood), but comparable to long-term evolution (LTE) cellular and significantly better than traditional GEO SatCom such as Intelsat. When ISL is involved, the access RTT shows considerable stage effect, possibly going through a different number of satellites and different gateways to the same PoP. Starlink currently has no roaming support at the PoP level. Please note that Starlink access is highly influenced by weather and traffic conditions. Users can use Starlink mobile app (or a Web browser pointing to 192.168.100.1) to know the outage events and statistics of their Starlink dish: obstructed (locally at the user dish), no signal received (between the user dish and satellite) and network issue (between the satellite and ground station), as well as ping time to their associated PoP (not the gateway as we measured) and observed uplink and downlink throughput (not capacity). Using gRPC tools, Starlink users can automate the process with many user-contributed monitoring and dashboard tools, including regular speedtest to popular sites, available on GitHub <cit.>. §.§ Starlink Gateway and PoP Networks Starlink gateways, loosely referring to the ground station dishes, antennas and beams, are connected to a regional or country-wide PoP. For some regions, Starlink users can be associated with neighbor PoPs, as reflected by their entries in the Starlink geo-location database <cit.> and reverse domain name system (DNS) lookup, for redundancy and load-balancing purposes. Here we use the Seattle PoP as an example. §.§.§ PoP Topology and Structure Each gateway is identified by an address in 172.16/12 network, which is only unique in the same PoP, and different PoPs can reuse the same 172.16/12 address, similarly as any other private IP address space. Under Seattle PoP, we have observed gateways in 172.16.250/24, 172.16.251/24 and 172.16.252/24, where odd ending digits associated with the CGNAT and even ones associated with the PoP. A Starlink user can reach all gateway identifiers within the same 172.16.x/24 (except those ending with 9) by ping, indicating they are associated with the same ground station, possibly different dishes or beams. However, the user cannot ping those in 172.16.y/24 in other ground stations. The traceroute from Starlink users to 172.16.y/24 outside their own gateway station shows the packet reaches the PoP identified by 206.224.64/19 for large PoPs (e.g., Seattle) or 149.19.108/23 for smaller PoPs (e.g., Denver), indicating some hierarchical structures among PoPs. Note that Starlink does not limit traceroute to 172.16/12 properly within its PoP and may route toward the public Internet, incurring network unreachable messages returned by some routers. Starlink has been alerted on this private route leaking issue but no fixes have been observed yet. Traceroute to the public IP address of Starlink users depends on whether it traverses the CGNAT. Starlink PoP structure is quite similar to other terrestrial and cellular ISPs, with leased fiber connections to ground stations usually located in rural areas or on hill or building tops for a better sky view. Each ground station has multiple (often nine) parabolic dishes in protective domes. 
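As a rough illustration of how such PoP associations can be probed from the user side, the sketch below performs a reverse DNS lookup of a Starlink public IP and matches it against the self-published geo-location feed cited above; the feed URL, its exact format, and the reverse-DNS naming scheme assumed here are our guesses and may differ from Starlink's current setup.

```python
import ipaddress
import socket
import urllib.request

def reverse_dns(ip):
    """Reverse DNS name of a public IP; Starlink names end in starlinkisp.net
    and embed a PoP hint (the exact naming scheme is assumed, not documented)."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return None

def load_geofeed(url="https://geoip.starlinkisp.net/feed.csv"):
    """Fetch Starlink's self-published geo-location feed.
    We assume an RFC 8805-style CSV (prefix,country,region,city,...);
    the exact path of the feed may differ from this guess."""
    rows = []
    with urllib.request.urlopen(url, timeout=10) as resp:
        for line in resp.read().decode().splitlines():
            if line and not line.startswith("#"):
                parts = line.split(",")
                rows.append((ipaddress.ip_network(parts[0]), parts[1:4]))
    return rows

def locate(ip, feed):
    """Longest-prefix match of a customer IP against the geofeed."""
    addr = ipaddress.ip_address(ip)
    matches = [(net, loc) for net, loc in feed if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen, default=(None, None))

if __name__ == "__main__":
    ip = "206.224.64.1"          # placeholder; use your own Starlink public IP
    print("reverse DNS :", reverse_dns(ip))
    # feed = load_geofeed()      # network access required
    # print("geofeed hit :", locate(ip, feed))
```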
For Seattle PoP, it covers ground stations even in Alaska, and for Los Angeles PoP, it covers Hawaii, indicating possible long-haul fiber connections between gateways and its associated PoP. This can cause unnecessarily long delay for local traffic within a PoP, as observed by many Starlink users and reported on Reddit, so Starlink can make further improvement. §.§.§ PoP Measurement and Results Figure <ref> shows a snapshot of the Seattle PoP with three dishes and service tiers: residential with round dish, and roaming and portability with rectangular dish. Their public IP addresses change very often, reflecting the competition among users associated with the same PoP, but still indicate the service tier, e.g., residential and roaming near Vancouver, and portability from Seattle, all physically in Victoria. However, their gateway addresses are relatively stable but do change from time to time. At this snapshot, both residential and roaming are associate with the same ground station, and portability with another. Similar behaviors are observed in other PoPs in the US and elsewhere. In large PoPs such as Seattle, there are two levels of interconnection facilitated by the 206.224.64/24 network. Different ground stations are interconnected by the PoP, which also peers with other PoPs, ISPs and content distribution networks (CDNs). Interconnection is organized in pods, and there are multiple parallel links between pods, which are revealed in Fig. <ref>. TCP and ICMP messages are hashed to a particular link, while UDP packets can traverse parallel links, resulting in packet reordering and affecting some UDP-based applications. However, this is a common practice among ISP and Internet exchange provider (IXP), and is not unique to Starlink. Through our measurement, we observed different Starlink users can be assigned to the same gateway (172.16.x.z) but with different public IP addresses, or even the same public IP address for some time. One of them is a dish we have access and another is a different Atlas probe, so we can conclude that Starlink CGNAT can assign the same public IP address to different users at the same time, complicating user tracking. Starlink users reported on Reddit that they were served with Digital Millennium Copyright Act (DMCA) notices without involving in such activities, indicating Starlink mixed up user tracking due to excessive reuse of IP addresses, while keeping some addresses idle to switch GS between PoPs. §.§ Starlink Backbone and Internet Exchange Networks Starlink runs its own backbone with global reach and interconnect with other ISPs around the world. Most access network providers do not have global coverage, while Starlink operates both its access and backbone networks, which is quite unique. Starlink's autonomous system (AS) number is 14593 and advertises many separate IP address blocks through BGP, which increases its routing complexity in today's world. §.§.§ Backbone Topology and Structure Most Starlink PoP has a very regular topology, with two interconnected backbone routers connected with at least two neighbor PoPs, and two cross-connected peering routers with other ISP and CDN at public or private IXP locations. Backbone addresses are in 149.19.108/24 initially, with 149.19.109/24 for peering. With the rapid growth of Starlink, 206.224.64/24, 206.224.65/24 and 206.224.66/24 are also used nowadays. 
In the USA, currently there are seven PoPs, namely Seattle (SEA), Los Angeles (LAX), Denver (DEN), Dallas (DFW), Chicago (ORD), Atlanta (ATL) and New York City (LGA), serving both American and Canadian customers, although there are a few GSs in Canada, mostly located on east coast (e.g., Newfoundland). Worldwide, Starlink currently has three PoPs in Europe, i.e., London (LHR), Frankfurt (FRA) and Madrid (MAD), three in Oceania, i.e., Sydney (SYD), Auckland (AKL) and Perth (PER), two in Asia, i.e., Tokyo (NRT) and Manila (MNL), and one in Lagos (LOS), Africa. In addition, Starlink has one PoP in Queretaro (QRO), Mexico, and four more in South America, i.e., Bogota (BOG), Lima (LIM), Santiago de Chile (SCL) and Sao Paulo (GRU). The setup of PoPs certainly follows the local regulatory requirement, and different PoPs have very different range of ground station coverage. Starlink initially leveraged Google cloud platform (GCP) and later evolved to have its own backbone and direct peering agreements with other ISPs/IXPs. §.§.§ Backbone Measurement and Results To discover Starlink terrestrial backbone, we started with Starlink's entries in PeeringDB <cit.> and Starlink published GeoIP database <cit.>, where we can identify the public IP address blocks advertised by Starlink with their reverse DNS name lookup. Please note that Starlink's GeoIP database is not always updated and synchronized, as we found addresses advertised to be associated with Seattle PoP actually route toward Frankfurt, reflecting the fact that Starlink may move address blocks to serve the growing subscription in certain regions during certain events. Thus, we trace from known landmarks to determine the location of Starlink backbone addresses as shown in Fig. <ref>. For example, the Seattle PoP has two interconnected backbone routers (149.19.108.4 and 5) connected to LAX, Denver and Chicago PoPs, and peers at six, Seattle Internet Exchange. Using mtr for a long time, we can determine the best (or minimum) RTT difference between neighbor hops, and treat it as the propagation delay since the transmission delay of ground links for ping messages is negligible. The half of the RTT difference, treated as one-way delay (OWD), is crosschecked with the travel distance between PoPs, as most fiber conduits are along highway or railway, which shows high correlation as highlighted in the figure in ms and km. This gives us the confidence that the discovered topology is close to reality, even not disclosed by Starlink yet. Once we identified PoPs and backbone router addresses, we run systematic traceroute and when possible mtr to determine backbone links, where we also leveraged Atlas probes at different locations and received assistance from Starlink users who reported their service on Reddit. Starlink's backbone link numbering is quite regular. For example, 20–21 for Chicago PoP, 30–31 for LAX PoP, 40-41 for LGA PoP, 80–81 for Atlanta PoP, etc, although such regularity decays when Starlink expanded worldwide. For example, 160–161 is associated with London PoP and 162–163 with Sydney. Despite our best effort, so far we still cannot exactly locate clusters such as 116–117, 183 and 110–111 in 149.19.108/24 block. We also used public looking glass, route and traceroute servers external to Starlink to verify the Starlink backbone, PoP, gateway and even access networks that we measured and discovered. So far, we have observed many BGP routing deficiencies. For example, traffic toward Starlink users does not always enter the nearest Starlink PoP. 
Although this is a common problem among other ISPs, the impact is more severe for satellite ISPs and their users, as they already suffer higher delay than those on terrestrial networks. On the other hand, Starlink has its own unique challenge as it is both a global backbone and access network provider. This requires more attention from the research community. § RESEARCH CHALLENGES AND OPPORTUNITIES Although our measurement has revealed Starlink access, gateway, PoP and backbone networks in great details and also outlined some problems in the current Starlink network arrangement and provisioning, we found more challenges and opportunities deserving a community effort. §.§ Challenges of Integrated SAGA Networks Networking research has mainly focused on the Earth surface, although there are research efforts for the Moon and the interplanetary scenarios, but not at a consumer level. With the integrated SAGA networks envisioned by 6G, commercially viable SatCom becomes an immediate need. The reusable rocket launch technology enabled it, but the integration of networking protocols with TCP/IP is still lacking. Specifically, most TCP/IP protocols have been designed and then optimized for terrestrial networks. Although there is link-layer mobility supported in mobile cellular networks and WiFi, it is far behind what LEO satellites demand, where not only end users but also the network itself is mobile, although quite predictable given the physics. Currently, Starlink does not reveal its satellite relay scheme. Given the space and spectrum resource limitation, as well as geographical and even political constraints, LEO SatCom systems may have to inter-operate at certain levels around the world. This is a challenge that has not been tackled in ordinary cellular or WiFi systems. Moreover, integrated SAGA also enables multiple paths, including those traversing satellites, between end hosts. Currently most Internet routing is single, shortest path routing by some metrics or policies, while the rich link and network diversity is ignored <cit.>. For SAGA networks, multi-path is not only an option to increase resiliency and load balancing but also a necessity for mission-critical tasks in deep space and distressed scenarios, as evidenced by the fact that more cellular phones adopted direct satellite SOS capabilities recently. Not only the shortest path but also maximal network flow becomes a need, especially in a distributed manner, given there are some recent breakthroughs in max-flow problems. Last but not least, the dominant transport-layer protocol, TCP, still assumes packet loss due to network congestion and backs off in different ways, where in SAGA networks, packet loss and delay variation can come from different sources. Traditional GEO SatCom has engineered many TCP performance-enhancing proxies (PEPs) at the user terminal and ground station to improve TCP performance. Whether such PEPs are still beneficial in the LEO scenario needs to be evaluated again. On the other hand, new transport-layer protocol, such as QUIC, is emerging, with built-in connection migration capability, and its multi-path features are currently under active discussion at IETF for standardization. §.§ Opportunities through Starlink-like Systems Therefore, being the first LEO satellite network with global coverage and considerable user base, Starlink and its competitors in the near future are good testbed candidates for the research community. Recall that the current Internet was a testbed initially known as ARPANET. 
Starlink is not a perfect system, and it is also a moving target with more satellites added, services introduced and policies changed, on a weekly or monthly basis. However, it is a good opportunity for the research community to understand the SAGA network envisioned for 6G, just as the UCLA network measurement center (NMC) interacted with BBN who manufactured the IMP deployed on ARPANET, which evolved into the Internet. One rare opportunity is inter-satellite communication and networking. So far we have not had a dynamic network with very fast but regular topology changes such as LEO satellite networks, where the shortest-path and maximum-flow routing algorithms and protocols will have an ultimate test. TCP/IP was designed for a network with arbitrary topology thus unnecessary overhead when the network topology shows regularity either statically (e.g., in data centers) or dynamically like LEO satellites. It is the time to rethink TCP/IP. Another opportunity is integrated sensing, communication and computing. With more powerful LEO satellites, not only satellite-to-ground and inter-satellite communications become commonplace, so do Earth-facing imaging and star-facing observation with less light and atmosphere pollution. However, not all sensed data can or need to be transferred to ground for processing, so on-board computing becomes a need. Amazon's Kuiper attempts to do so, leveraging its ground-based cloud-edge computing. Integrated SAGA opens a new dimension. §.§ An Appeal for a Global SAGA Testbed More importantly, we appeal to the research community, especially those who can acquire or access Starlink dishes, especially in remote locations, to host RIPE Atlas probes to help the research and also user community to better understand Starlink and similar systems. For research purposes, we also appeal to federate these dish-connected computers for remote access, just as what Planetlab (and later GENI) did at the start of this century for distributed systems and networking research. Experimentation is an important approach to initiate and validate our research and keep it practical and relevant. § CONCLUSIONS In this paper, we presented an ongoing measurement campaign on Starlink, the first large-scale, operational, LEO satellite network, on its access, gateway, PoP and backbone networks with detailed network topology diagrams and some performance results. More importantly, we discussed the challenges and opportunities brought in by Starlink and alike, and appealed to the research community to create and participate in a global observatory testbed for a SAGA network as envisioned by 6G communication systems. 00 starlink Wikipedia, Starlink, https://en.wikipedia.org/wiki/Starlink, 2023. leoconn LEOCONN, https://leoconn.github.io/, 2021 and 2022. atlas RIPE Atlas, https://atlas.ripe.net/, 2023. reddit Reddit, Starlink, https://www.reddit.com/r/Starlink/, 2023. nsf NSF, Starlink, https://forum.nasaspaceflight.com/index.php?topic=48981.0 space-routing M. Handley, “Delay is not an option: Low latency routing in space,” ACM HotNets, 2018. ground-relay M. Handley, “Using ground relays for low-latency wide-area routing in megaconstellations,” ACM HotNets, 2019. space-race D. Bhattacherjee, W. Aqeel, I. Bozkurt, A. Aguirre, et al., “Gearing up for the 21st century space race,” ACM HotNets, 2018. topology-design D. Bhattacherjee, and A. Singla, “Network topology design at 27,000 km/hour,” ACM CoNEXT. 2019. hypatia S. Kassing, D. Bhattacherjee, A. Aguas, J. Saethre, and A. 
Singla, “Exploring the Internet from space with Hypatia,” ACM IMC, 2020. testbed M. Kassem, A. Raman, D. Perino, and N. Sastry, “A browser-side view of starlink connectivity,” ACM IMC, 2022. with-quic F. Michel, M. Trevisan, D. Giordano, and O. Bonaventure, “A first look at Starlink performance,” ACM IMC, 2022. from-cloud S. Ma, et al., “Network characteristics of LEO satellite constellations: A Starlink-based measurement from end users,” IEEE INFOCOM, 2023. mac J. Iyer, K. Mahammad, et al., “System and method of providing a medium access control scheduler,” US Patent 11,540,301, 2021. grpc Starlink gRPC Tools, https://github.com/sparky8512/starlink-grpc-tools geoip Starlink Self-Published IP Geo-location Feed, http://geoip.starlinkisp.net peeringdb PeeringDB, https://www.peeringdb.com/asn/14593, 2023. percolation J. Hu, et al., “Directed percolation routing for ultra-reliable and low-latency services in low earth orbit satellite networks,” IEEE VTC-W'20.
http://arxiv.org/abs/2307.02667v1
20230705214531
Unveiling Causal Mediation Pathways in High-Dimensional Mixed Exposures: A Data-Adaptive Target Parameter Strategy
[ "David B. McCoy", "Alan E. Hubbard", "Mark van der Laan", "Alejandro Schuler" ]
stat.ME
[ "stat.ME" ]
Unveiling Causal Mediation Pathways in High-Dimensional Mixed Exposures: A Data-Adaptive Target Parameter Strategy August 1, 2023 =========================================================================================================== Mediation analysis in causal inference typically concentrates on one binary exposure, using deterministic interventions to split the average treatment effect into direct and indirect effects through a single mediator. Yet, real-world exposure scenarios often involve multiple continuous exposures impacting health outcomes through varied mediation pathways, which remain unknown a priori. Addressing this complexity, we introduce NOVAPathways, a methodological framework that identifies exposure-mediation pathways and yields unbiased estimates of direct and indirect effects when intervening on these pathways. By pairing data-adaptive target parameters with stochastic interventions, we offer a semi-parametric approach for estimating causal effects in the context of high-dimensional, continuous, binary, and categorical exposures and mediators. In our proposed cross-validation procedure, we apply sequential semi-parametric regressions to a parameter-generating fold of the data, discovering exposure-mediation pathways. We then use stochastic interventions on these pathways in an estimation fold of the data to construct efficient estimators of natural direct and indirect effects using flexible machine learning techniques. Our estimator proves to be asymptotically linear under conditions necessitating n^-1/4-consistency of nuisance function estimation. Simulation studies demonstrate the √(n) consistency of our estimator when the exposure is quantized, whereas for truly continuous data, approximations in numerical integration prevent √(n) consistency. Our NOVAPathways framework, part of the open-source SuperNOVA package in R, makes our proposed methodology for high-dimensional mediation analysis available to researchers, paving the way for the application of modified exposure policies which can deliver more informative statistical results for public policy. § INTRODUCTION Causal mediation analysis allows for the decomposition of an exposure's total effect on an outcome into direct and indirect pathways operating through an intermediate mediator or set of mediators. Identifying the pathways through which environmental mixtures impact health outcomes is crucial for corroborating causal inference of total effects and for developing effective public health policies. This information can help to strengthen causal inference by providing evidence for a plausible biological mechanism underlying the observed association between the exposure mixture and the outcome. Additionally, if several chemicals with similar structures are found to operate through the same mediating pathway, it may suggest that other chemicals with similar structures have the same mediating effects. This type of inference is consistent with the coherence criterion of the Bradford Hill criteria <cit.>. Such evidence can be used to strengthen regulations of unstudied chemicals which are structurally similar to chemicals that have been found to have both total effects and effects through certain biological pathways leading to disease. Mediation analysis can help in the development of targeted interventions in the context of environmental health by identifying specific pathways through which environmental exposures affect health outcomes. 
By determining the mediator variable(s) that link the exposure to the outcome, mediation analysis can help identify potential targets for intervention that may reduce the harmful effects of the exposure. For example, if mediation analysis identifies that inflammation is a key mediator between exposure to air pollution and cardiovascular disease, interventions that reduce inflammation, such as anti-inflammatory medications or dietary changes, may be targeted to reduce the harmful effects of air pollution on cardiovascular health in situations where air pollution cannot be immediately reduced. By identifying specific pathways, mediation analysis can help guide the development of targeted interventions that are more likely to be effective and efficient in reducing the harmful effects of environmental exposures. Decomposing the total effect of a mixed exposure in environmental epidemiology presents unique challenges. Unlike the single-exposure setting, we do not know a priori which specific exposures or sets of exposures act through which mediators to cause the outcome. There can be multiple such pathways, and using the same data to identify these pathways and estimate a target parameter given these pathways can lead to biased results due to overfitting to the sample data. That is, both the discovered pathway and the effects for this pathway are overfit to the sample data and may not generalize to the population level. Additionally, it is possible that multiple exposures use the same pathway, and thus may interact through this pathway, such as multiple heavy metals interacting through epigenetic mediators, which can have more than additive effects through this pathway. This highlights the importance of developing methods that can identify and estimate the effects of mixed continuous-valued exposures on health outcomes through multiple mediators simultaneously while addressing issues of double-dipping and interactions between exposures. Currently, no such statistical methods exist to capture such complex exposure-outcome systems, although, in almost all cases, this is the system by which exposure leads to disease. Building upon the seminal work of Sewall Wright, who introduced path analysis in 1934 <cit.>, researchers gained a foundation for exploring causal relationships among observed variables using path diagrams and standardized path coefficients. This approach enabled the decomposition of the total effect of one variable on another into direct and indirect effects via intermediary variables. In 1972, Arthur Goldberger <cit.> further advanced the field by developing structural equation models (SEMs) for mediation analysis. By integrating path analysis with factor analysis, SEMs facilitated the modeling of intricate relationships between observed and unobserved (latent) variables. Goldberger's contribution linked path analysis to a more comprehensive statistical framework, providing enhanced precision in estimating causal effects while accounting for measurement error. Consequently, the scope and applicability of mediation analysis were significantly expanded. The initial development of path analysis and SEMs largely focused on parametric models, where assumptions about the distributional properties of the data and the functional form of relationships between variables were made. However, over time, researchers extended SEMs to include nonparametric and semiparametric approaches, allowing for more flexible modeling of relationships without strong distributional assumptions <cit.>. 
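To make the classical path-analysis decomposition described above concrete, here is a small simulated example (ours, using a linear SEM with arbitrary toy coefficients, not an analysis from the paper): the total effect of A on Y splits into a direct coefficient plus the product of the A→M and M→Y path coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Linear SEM: A -> M -> Y and A -> Y (coefficients are our toy choices).
a_to_m, a_to_y, m_to_y = 0.8, 0.3, 0.5
A = rng.normal(size=n)
M = a_to_m * A + rng.normal(size=n)
Y = a_to_y * A + m_to_y * M + rng.normal(size=n)

def ols(covariates, y):
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]  # drop the intercept

total = ols([A], Y)[0]             # regression of Y on A alone
a_hat = ols([A], M)[0]             # mediator model
direct, m_hat = ols([A, M], Y)     # outcome model given A and M
indirect = a_hat * m_hat           # product-of-coefficients (path analysis)

print(f"total ~ {total:.3f}, direct ~ {direct:.3f}, indirect ~ {indirect:.3f}")
print(f"direct + indirect ~ {direct + indirect:.3f}")  # matches the total effect
# True values here: total = 0.3 + 0.8*0.5 = 0.7, direct = 0.3, indirect = 0.4.
```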
In recent years, the field of causal inference has witnessed substantial advancements with the introduction of non-parametric structural equation models and directed acyclic graphs. These developments have facilitated the non-parametric estimation of causal effects and the evaluation of conditions that permit causal effect identification from data <cit.>. While these novel approaches have addressed some limitations of traditional parametric structural equation models for mediation analysis, they also brought forth new challenges. Early non-parametric SEMs struggled with increased computational complexity; model identification (in the absence of parametric assumptions, determining whether a non-parametric estimand is identifiable from the observed data is more challenging); sensitivity to the choice of estimator; difficulty in assessing model fit and interpretability; and limited available software.

Non-parametric partitioning of the causal influence of a binary treatment into natural indirect and direct impacts began by employing the potential outcomes framework proposed by Robins and Greenland <cit.>. The indirect impact measures the effect on the outcome variable via the mediator, while the direct impact measures the effect through all other pathways. Pearl <cit.> derived a similar effect partitioning utilizing non-parametric structural equation modeling. The identification of these natural (in)direct impacts depends on cross-world counterfactual independencies. Essentially, this means we assume that the outcomes of different imaginary scenarios, in which we intervene on the exposure and the mediator, do not influence each other. The cross-world counterfactual independence assumption is not directly falsifiable from experimental data. This is because the assumption involves counterfactual variables that correspond to different hypothetical interventions, and we can only observe one intervention outcome in a single experiment. Therefore, the natural (in)direct impact is not identifiable in a randomized experiment, which means that even in randomized experiments we cannot know whether these estimated mediation effects actually exist at the population level for a deterministic intervention.

These limitations arise because, in most causal inference research on mediation, deterministic interventions are studied, which assign fixed exposure values. Historically, binary exposures have been investigated for several reasons: 1. interpretability: causal effects are easier to understand for binary exposures as they involve comparisons between two distinct groups or switching from one group to another; 2. estimation complexity: binary exposures often lead to simpler functional forms and estimation procedures, even in non-parametric settings; 3. identification: verifying assumptions for causal effect identification can be more straightforward for binary exposures; 4. potential outcomes framework: this framework is more intuitive for binary exposures, as there are only two potential outcomes for each individual. To avoid the limitations of binary exposures while retaining interpretability and relaxed identification assumptions, stochastic interventions can be implemented. Stochastic interventions allow exposures to be a random variable after conditioning on baseline covariates.
For example, in the context of air pollution exposure and cardiovascular outcomes, we can consider a stochastic shift intervention where air pollution exposure is reduced by an amount δ for each individual in the population. This post-intervention distribution therefore still depends on the originally observed air pollution levels. We would then estimate the impact under this post-intervention distribution and compare the average to the observed outcomes under the observed air pollution exposures. Stochastic interventions offer analytical benefits over deterministic approaches by enabling the straightforward definition of causal effects for continuous exposures, providing an interpretation that is easily understood by those familiar with linear regression adjustment. Estimation of total effects for stochastic interventions has been explored in various studies, including methods for modified treatment policies and propensity score interventions for binary exposure distributions <cit.>. Nevertheless, these studies do not focus on decomposing the effects of stochastic interventions into direct and indirect effects, which was first investigated in <cit.>.

In <cit.>, the authors introduce a decomposition of a stochastic intervention's effect into direct and indirect components. This approach identifies (in)direct effects without necessitating cross-world counterfactual independencies, producing scientific hypotheses that can be empirically tested by intervening on the mediator and exposure. The authors develop a one-step non-parametric estimator based on the efficient influence function, incorporating machine learning regression techniques, and provide √(n)-rate convergence and asymptotic linearity results. Importantly, the proposed method provides definition and estimation of non-parametric mediated effects for continuous exposures. However, in the software implementation of the proposed method, the authors employ a reparameterization of specific integrals as regressions and treat the exposure as binary to reduce computational complexity by avoiding direct estimation of the probability density function (PDF) and estimating the probability mass function (PMF) instead. Likewise, restricting the software to a binary exposure also avoids the numeric integration necessary for the estimator. While this approach enables the inclusion of multiple mediators, it necessitates a binary exposure to function effectively. This limitation motivates the work presented here.

In many environmental epidemiology cases, it is crucial to understand the specific mediators through which particular exposures impact an outcome. Instead of reparameterizing an estimand to avoid high-dimensional density estimation or integrals, identifying individual mediators and estimating stochastic effects solely through these mediating pathways leads to deeper interpretation when dealing with multiple mediators. The random variables driving the outcome can be treated as parameters, where these mediators are identified using one portion of the data, and direct/indirect effects are estimated for this mediator using another part of the data. Estimation becomes considerably more complex with multiple exposures, as the connections between exposures and mediators remain unknown. Thus, these paths must be discovered in the data, and mediation analysis employing stochastic interventions can then be estimated for these paths.
This study presents a methodological approach for estimating mediation effects in the presence of high-dimensional exposures and mediators. We employ a cross-validated framework: in path-finding folds, a cross-validation process is used to identify the mediating paths through a series of semi-parametric regressions. With these paths established, we estimate the direct and indirect effects of a stochastic intervention on the exposure through the mediator, both identified in the path, in an estimation fold. Drawing on the efficient influence function from Diaz et al. <cit.>, we directly compute the integrals required for each component of the efficient influence function when the exposure is continuous, rather than reparameterizing the estimates. We also build in estimation for the case where the exposure is quantized, for example, into bins which represent quartiles. This approach enables mediation analysis of continuous/discrete exposures, unknown a priori, through mediators which are also unknown a priori and can take on multiple variable types. The use of stochastic interventions in a semi-parametric framework provides a promising approach for estimating direct and indirect effects of exposure mixtures on health outcomes through mediation pathways. To our knowledge, no other methods in the causal inference literature both provide mediation analysis for a continuous/discrete exposure and allow data-adaptive discovery of mediating paths. The method proposed here is available in the SuperNOVA package in R, which also estimates interaction and effect modification of a mixed exposure using stochastic interventions and data-adaptive target parameters.

§ THE ESTIMATION PROBLEM

Our mediation parameter of interest for a continuous exposure was first described in <cit.>, and therefore the mediation framework that follows for data-adaptively discovered mediation pathways is based on this previous work. That is, the notation, target parameter, identification, and efficient influence function are all the same as in <cit.>; however, we extend this method to work in the continuous case and to data-adaptively identify exposure-mediator pathways. To make the current work more self-contained, we briefly review parts of their estimator in order to describe our approach for making estimation work in the fully continuous case with data-adaptively identified pathways.

We consider the causal inference problem involving a multivariate continuous, categorical, or binary exposure (A), a continuous, categorical, or binary outcome (Y), a multivariate continuous, categorical, or binary mediator (Z), and a vector of observed covariates (W), which may likewise be of mixed data types. Let O = (W, A, Z, Y) be a random variable with distribution ℙ. We denote the empirical distribution of a sample of n independent and identically distributed observations O_1, ..., O_n as ℙ_n. For any given function f(o), we denote ℙf = ∫ f(o) d ℙ(o) and use 𝔼 to represent expectations with respect to ℙ, averaging over all randomness. We assume ℙ belongs to ℳ, the nonparametric statistical model comprising all continuous densities on O with respect to a dominating measure v, with p denoting the corresponding probability density function. We go through the framework first ignoring the data-adaptive selection of subsets of {A, Z}, and then introduce the data-adaptive component, which follows naturally.
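To fix ideas, an analysis dataset with observations O = (W, A, Z, Y) of mixed variable types might be laid out as an R data frame like the following (purely illustrative; the column names are hypothetical and not part of any package interface):

    # Toy layout of the observed data O = (W, A, Z, Y): baseline covariates W,
    # a multivariate exposure A, candidate mediators Z, and an outcome Y.
    O <- data.frame(
      W_age    = c(54, 61, 47),        # continuous covariate
      W_smoker = c(1, 0, 0),           # binary covariate
      A_pm25   = c(12.3, 8.1, 15.6),   # continuous exposure
      A_lead   = c(0.9, 1.4, 0.4),     # continuous exposure
      Z_crp    = c(3.2, 1.1, 4.8),     # continuous candidate mediator
      Y_sbp    = c(132, 121, 140)      # continuous outcome
    )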
Our approach diverges from previous methods, focusing on data-adaptively identifying which sets of exposures (Â) impact which sets of mediators (Ẑ). This approach bypasses the need for estimating the high-dimensional joint impact of exposures through the mediators, an effort that often encounters the 'curse of dimensionality,' a phenomenon that complicates accurate modeling and prediction due to exponential increase in volume associated with adding extra dimensions in the exposure space. The discovered  and Ẑ represent the "estimated" or selected subsets from the full set of A and Z variables. Probability density functions and regression functions are represented as follows: * g(a | w): Represents the conditional probability density or mass function of A given W = w. * Q(a, z, w): Represents the expected outcome given the variables A, Z, and W. * e(a | z, w): Represents the conditional density or probability mass function of A given (Z, W). * q(z | a, w) and r(z | w): Denote the conditional densities of Z. To define our counterfactual variables, we use the following nonparametric structural equation model (NPSEM): W = f_W(U_W); A = f_A(W, U_A); Z = f_Z(W, A, U_Z); Y = f_Y(W, A, Z, U_Y). This set of equations signifies a mechanistic model, grounded in nonparametric statistical methods, that is assumed to generate the observed data O. It incorporates several fundamental assumptions. First, an implicit temporal ordering is assumed, with Y occurring after Z, A, and W; Z taking place after A and W; and A happening after W. Second, each variable (i.e., W, A, Z, Y) is assumed to be generated from the corresponding deterministic, yet unknown, function (i.e., f_W, f_A, f_Z, f_Y) of the observed variables that precede it temporally, as well as an exogenous variable, denoted by U. Each exogenous variable is assumed to encompass all unobserved causes of the corresponding observed variable. In the context of nonparametric statistics, the independence assumptions on the exogenous variables U = (U_W, U_A, U_Z, U_Y) necessary for identification will be addressed in the assumptions section. This approach allows the model to accommodate the complexities and nuances of the relationships between the variables without relying on specific functional forms or parametric assumptions. Our causal effects of interest are characterized by hypothetical interventions on the NPSEM. In our situation, we focus on an intervention where the equation associated with A is changed, and the exposure is drawn from a user-defined distribution g_δ(a | w). This distribution relies on g (the conditional density under observed exposures) and is indexed by a user-specified parameter δ. We assume that when δ = 0, g_δ = g. Let A_δ represent a draw from g_δ(a | w). In our scenario, the distribution g_δ is given by g(a - δ | W), which indicates a shift of δ in the conditional density of A. This shift corresponds to a modified treatment policy aimed at reducing exposure by δ. Essentially, the intervention involves removing the equation associated with A and establishing the exposure as a hypothetical regime, d(A, W). The regime d depends on the natural exposure level A (i.e., without any intervention) and covariates W. For instance, if A denotes continuous exposures such as various air pollution factors (Carbon Monoxide, Lead, Nitrogen Oxides, Ozone, Particulate Matter, etc.) 
related to asthma incidence Y, we may be interested in investigating the expected asthma incidence if all individuals experienced a δ-unit reduction in Lead exposure, while keeping other exposures and covariates unchanged. Assume that the distribution of A given W = w is supported within the interval (l(w), u(w)); in other words, the minimum pollution level for an individual with covariates W = w is l(w). We can then define a hypothetical post-intervention exposure, A_δ = d(A, W), as follows:

d(a, w) =
  a - δ, if a > l(w) + δ
  a,     if a ≤ l(w) + δ

Here, 0 < δ < u(w) is an arbitrary value provided by the user. This regime can be further refined by allowing δ to be a function of w, thereby enabling the researcher to specify a different change in pollution levels as a function of factors such as demographic characteristics or geographical location. This intervention was initially proposed in <cit.>, <cit.>, and <cit.>.

We are interested in the population intervention effect (PIE) of A on Y using stochastic interventions. That is, given values for an exposure and mediator (a, z), we examine the counterfactual outcome Y(a, z) = f_Y(W, a, z, U_Y), the expected outcome if all individuals were exposed to these values of the exposure and mediator. We also examine the counterfactual mediator Z(a) = f_Z(W, a, U_Z), the value the mediator takes on given exposure A = a. The counterfactual Y(a, z) represents the outcome in a hypothetical scenario where (A, Z) = (a, z) is fixed for all individuals. We are interested in the contrast between the expected outcome under an intervention A_δ that, say, reduces exposure to pollution, and the expected outcome under no intervention, i.e., the observed outcome under observed exposures. This is:

ψ(δ) = 𝔼{Y(A_δ) - Y}.

Drawing from the causal inference literature on mediation, we know that since A is a cause of Z, any intervention altering exposure to A_δ also affects the counterfactual mediator Z(A_δ). Owing to the consistency ensured by the NPSEM, we obtain Y(A, Z) = Y and Z(A) = Z. In addition, from Pearl's <cit.> law of composition we can express Y(A_δ, Z(A_δ)) = Y(A_δ); in words, the expectation of Y under the dual shift is implied by a shift in A ignoring Z. Consequently, the PIE can be decomposed into a population intervention direct effect (PIDE) and a population intervention indirect effect (PIIE). The interpretation of these effects is the same as for natural direct and indirect effects, but they are defined for a stochastic intervention rather than a deterministic intervention on A:

ψ(δ) = 𝔼{Y(A_δ, Z(A_δ)) - Y(A_δ, Z)}_PIIE + 𝔼{Y(A_δ, Z) - Y(A, Z)}_PIDE.

Essentially, the direct effect demonstrates the impact of an intervention that modifies the exposure distribution while maintaining the mediator distribution at the level it would have been without any intervention. On the other hand, the indirect effect quantifies the influence of an indirect intervention on the mediators, initiated by changing the exposure, while keeping the exposure intervention constant.
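Returning to the modified treatment policy d(a, w) defined above, a minimal R sketch of the shift is shown below; the function name and the constant lower bound are illustrative assumptions, not part of the SuperNOVA API.

    # Modified treatment policy: reduce the exposure by delta unless doing so
    # would push it below the conditional lower bound l(w) of its support.
    shift_exposure <- function(a, lower_bound, delta) {
      ifelse(a > lower_bound + delta, a - delta, a)
    }

    # Example: shift observed exposures down by delta = 1 with a constant lower bound.
    a_obs <- c(3.2, 0.8, 5.4)
    shift_exposure(a_obs, lower_bound = 0.5, delta = 1)
    #> [1] 2.2 0.8 4.4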
[Figure: a simple DAG with nodes W (covariates), A (exposure), Z (mediator), and Y (outcome); W points to A, Z, and Y; the path A → Z → Y carries the indirect effect (IE) and the arrow A → Y the direct effect (DE).]

Above is a simple directed acyclic graph (DAG) which illustrates the IE through the mediator Z and the DE, which is the causal effect not through Z. For example, in a study investigating the effects of environmental exposure, such as air pollution, on respiratory health, the direct effect measures how changing pollution levels impacts health outcomes, assuming the mediators (e.g., time spent outdoors) remain unchanged. The indirect effect, conversely, evaluates how health outcomes are influenced by changes in the mediators (e.g., reduced time spent outdoors) that result from modifying pollution levels, while the pollution intervention remains constant.

Above, 𝔼{Y(A, Z)} = 𝔼(Y), which is simply estimated by the empirical mean in the sample. Moving forward, the optimality theory described in <cit.>, which we review, and the estimators we present for the truly continuous exposure case focus on θ(δ) = 𝔼{Y(A_δ, Z)}. These two terms are then used in the calculation of the direct effect. Because 𝔼{Y(A_δ, Z(A_δ))} = 𝔼{Y(A_δ)}, which in words is simply the total effect on Y after shifting A ignoring Z, an efficient estimator for a shift in A ignoring Z yields estimates that encapsulate the indirect effect A has through Z, as part of the total effect. Ivan Diaz and Mark van der Laan <cit.> first proposed estimators of the total effect of a stochastic shift intervention, including inverse probability weighted, outcome regression, and doubly robust estimators based on the framework of targeted minimum loss-based estimation (TMLE), where in each case data-adaptive machine learning can be used to estimate the relevant nuisance parameters.

Call the total effect θ(δ)_t, which is the expected Y given a shift in A and includes the implied shift in Z(A_δ). Call 𝔼{Y(A_δ, Z) - Y(A, Z)}, denoted θ(δ)_d, the direct effect, or the effect of a shift in A keeping Z fixed. Lastly, call 𝔼{Y(A_δ, Z(A_δ)) - Y(A_δ, Z)}, the effect of a shift in Z due to a shift in A keeping A fixed, θ(δ)_i. Then:

θ(δ)_t = θ(δ)_d + θ(δ)_i

which means we can estimate the indirect effect as:

θ(δ)_i = θ(δ)_t - θ(δ)_d

That is, we estimate the indirect effect by subtracting the direct effect from the total effect, which provides us with the point estimate. We can do inference on this difference by utilizing work from <cit.>, which provides an efficient estimator for θ(δ) used in the construction of the direct effect θ(δ)_d. We use the TMLE or one-step estimators proposed in <cit.> to estimate θ(δ)_t, and we use the scalar delta method to estimate θ(δ)_i. Moving forward we describe θ(δ), or 𝔼{Y(A_δ, Z)}, the average outcome under a shift in A keeping Z at its natural value.

§.§ Identification of the Causal Parameter

We can evaluate the causal effect of our intervention by considering the counterfactual mean of the outcome under our stochastically modified intervention distribution.
This target causal estimand is Y(a,z), which is the counterfactual outcome we would observe when ℙ((A, Z) = (a, z)) = 1. Our causal quantity is:

θ(δ) = ∫ y p_Y(A_δ, Z)(y) dy

<cit.> describe identification for this parameter, and we briefly review it here. We must assume that the data are generated by independent and identically distributed units, that there is no unmeasured confounding and no interference, and that consistency holds (discussed in more detail in subsequent sections). Under these assumptions, θ(δ) can be identified by a functional of the distribution of O:

θ(δ) = ∫_𝒲∫_𝒜∫_𝒵 Q(a,z,w) g_δ(a | w) r(z | w) q(w) dw da dz

where q(w) denotes the marginal density of the covariates. Mechanically, this integrates the outcome predictions from our Q model over the density predictions from our g model under the δ shift, the conditional mediator density, and the covariate distribution.

Interpreting the statistical effects in our analysis as causal rests upon two assumptions: common support and conditional exchangeability (or ignorability). These are standard assumptions in causal inference that require consideration in mediation. Common support, also known as positivity or overlap, is a fundamental assumption in causal inference that ensures that the distribution of the exposure of interest is well defined and supported by the data. For each individual in the population, there should be a non-zero probability of observing the shifted exposure value given their observed covariates. This assumption ensures that the exposure effect is identifiable and that causal inference can be validly conducted. In our case, positivity refers to the conditional density of the exposure remaining bounded away from zero after the exposure shift. We propose a method that data-adaptively finds a shift which does not lead to positivity violations (described later).

Conditional exchangeability, or ignorability, is related to the assumption made in <cit.>. In our context, it means that, given the observed covariates, the distribution of the potential outcomes Y(a, z) is independent of the actual exposure, A, and mediator, Z, assignments. This assumption is akin to stating that we have adequately controlled for confounding. Here, it is essential to note that we need conditional exchangeability for both the exposure-outcome and mediator-outcome relations. This implies that all confounders between the exposure A and outcome Y, and between the mediator Z and outcome Y, should be measured and properly adjusted for. If this assumption is violated, that is, if there are unmeasured confounders, effect estimates can be biased. Consider the directed acyclic graph (DAG) below:

[Figure: a DAG with nodes A, Z, V, and Y; A → Z, Z → Y, V → Z, V → Y, and A → Y.]

This DAG illustrates the relations between the exposure A, mediator Z, confounder V, and outcome Y. Here, V can be seen as a confounder that affects both Z (the mediator) and Y (the outcome). Conditioning on the collider Z when there are unmeasured confounders (V) would open a pathway from A to Y(a, z), introducing bias into our estimates. Additionally, the methods presented here cannot account for situations where the mediator-outcome confounder V is affected by the exposure A, as this too opens up a backdoor path that would lead to bias <cit.>.
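Returning to the identification formula above, the following R sketch evaluates θ(δ) by a naive plug-in Monte Carlo average over the empirical distribution of W. It assumes previously fitted objects Q_hat(a, z, w), a sampler sample_g_delta(W) from the shifted exposure density, and a sampler sample_r(W) from the conditional mediator density; these names are placeholders and not part of the SuperNOVA API.

    # Plug-in Monte Carlo approximation of
    # theta(delta) = E_W E_{A ~ g_delta(.|W)} E_{Z ~ r(.|W)} Q(A, Z, W).
    plugin_theta <- function(W, Q_hat, sample_g_delta, sample_r, n_mc = 4 * nrow(W)) {
      draws <- replicate(n_mc, {
        a_star <- sample_g_delta(W)      # draw a shifted exposure for each subject
        z_star <- sample_r(W)            # draw a mediator value for each subject
        mean(Q_hat(a_star, z_star, W))   # outcome predictions averaged over subjects
      })
      mean(draws)                        # Monte Carlo average over the random draws
    }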
§.§ Efficient Estimation of the Direct Effect

In this section we focus on the efficiency theory for estimating θ(δ) within the nonparametric model ℳ, in particular on the efficient influence function (EIF), which was originally derived in <cit.>. Diaz and Hejazi offer a rigorous breakdown of the EIF for this part of the direct effect, and we give a brief overview here to explain our approach for estimating each part of the EIF. The EIF is a fundamental concept in semi-parametric estimation theory. It plays a vital role in determining the asymptotic behavior of all regular and efficient estimators. In simpler terms, the EIF contains the information needed to characterize how these estimators perform as the sample size approaches infinity. Calculating the EIF is crucial for constructing locally efficient estimators for θ(δ). Locally efficient estimators are estimators that achieve the best possible asymptotic variance within a specified class of estimators, under certain regularity conditions, within a stated statistical model. They are optimal in the sense that, asymptotically, they have the lowest variance among all unbiased estimators in their class.

<cit.> derived the EIF for this problem; we briefly describe each part of the EIF here and describe how we estimate its components for the case where A is a continuous/discrete exposure. The efficient influence function for θ(δ) in the nonparametric model ℳ for a modified treatment policy is D^Y(o) + D^A(o) + D^Z,W(o) - θ(δ), where:

D^Y(o) = g_δ(a | w)/e(a | z, w){y - Q(a, z, w)},
D^Z,W(o) = ∫ Q(a, z, w) g_δ(a | w) da
D^A(o) = ϕ(a, w) - ∫ϕ(a, w) g(a | w) da

where:

ϕ(a, w) = ∫ Q(d(a, w), z, w) r(z | w) dz = 𝔼[g(A | W)/e(A | Z, W) Q(d(A, W), Z, W) | A = a, W = w].

Constructing an efficient estimator always involves estimating the EIF, and so here we describe at a high level how we estimate each component in the rest of the article.

D^Y contains the "weighting factor" in the EIF, which adjusts the residuals of the outcome model (Q) based on the differences in exposure distributions between the intervention density g_δ and the natural course of exposure e. We calculate this by directly constructing estimators for the conditional densities g and e. Likewise, Q is simply an outcome regression model which is estimated using flexible machine learning. Therefore, in the case where A is continuous, we use conditional density estimators to estimate the conditional density functions used in this nuisance parameter. When A is discrete, we can instead use an ensemble of multinomial regression estimators which provide the probability of the exposure falling in each "bin". This probability mass function then replaces the probability density function used when A is continuous.

D^Z,W is the expected outcome (Q) multiplied by the estimated probability density of the exposure under a shift of δ, integrated over all possible values of the exposure a. This accounts for the shift in the exposure distribution, conditional on w, when predicting the outcome Y. For this estimation we directly integrate the two functions using Monte Carlo integration of the exposure variable over the exposure range. That is, exposure values a are shifted until they meet the upper or lower bound, in which case they simply take on the min or max value depending on the direction of δ. In the case where A is discrete, g_δ is simply the probability for the bin that corresponds to a ± δ, depending on the direction.
For example, if A is discretized into quartiles and δ is 1, then if a is in quartile 1, g_δ is the probability of quartile 2. In this case the integral is simply a weighted sum:

D^Z,W(o) = ∑_a_k ∈ A Q(a_k, z, w) g_δ(a_k | w)

For D^A, the first expression for ϕ(a, w) can be calculated using either integration or regression. The first line of the expression uses integration to calculate this expected outcome by averaging the outcome model Q over all possible values of z, weighted by the conditional density of z given w, denoted as r(z | w). The second line of the expression uses an alternative formulation to calculate the same expected outcome. It uses the conditional expectation formula to take the conditional expected value of Q given A = a and W = w, where the expectation is taken with respect to the conditional density of A given W, denoted as g(A | W), divided by the conditional density of A given Z and W, denoted as e(A | Z, W), which effectively regresses out the effect of Z from A. Therefore, it is possible to estimate ϕ(a, w) by either integration or pseudo-regression. We take both approaches to compare the finite sample performance of the two estimation strategies. For the integration approach, we directly estimate the conditional density of the mediator given covariates and use this function in the integration with Q over z using a Monte Carlo approach. Again, if A is discrete this looks like:

D^A(o) = ϕ(a, w) - ∑_a_k ∈ Aϕ(a_k, w) g(a_k | w)

Because ϕ(a, w) is the integration of Q and r over z and does not include A as an outcome, it is still necessary to estimate the conditional density of Z given W even when the exposure is discrete. In this discrete exposure case, we use both the double integration approach and the pseudo-regression approach.

§.§.§ Monte Carlo Integration

Monte Carlo (MC) integration is a numerical integration technique that uses random sampling to approximate the integral of a function over a given domain. In our case the ranges of the exposures and/or mediators are the domains to integrate over. MC integration works by first generating random points within the domain. Then, the function values are computed at these points. The average of these values is then multiplied by the volume of the domain. As the number of samples increases, the approximation converges to the true integral value. In our case we are integrating the product of two density/regression estimators, for example D^Z,W(o) = ∫ Q(a, z, w) g_δ(a | w) da; here MC integration can be more advantageous than quadrature methods for several reasons:

* Handling high-dimensional and non-linear functions: The product of fits using, for example, two Super Learners for Q and g may result in complex, non-linear, and high-dimensional functions. MC integration is well-suited for handling such functions, as it does not rely on any specific parametric assumptions or require the function to be smooth or continuous.
* Adaptability to irregular functions: MC integration is adaptive to irregularities in the function being integrated, making it a reasonable method for integrating the product of two flexible Super Learner fits, which can have irregular shapes across covariates. Quadrature methods, on the other hand, often rely on the function being smooth or continuous and may struggle with irregular functions.
* Scalability: MC integration is easily scalable to high dimensions, making it suitable for problems with a large number of covariates.
Quadrature methods, in contrast, can suffer from the curse of dimensionality, where the number of required evaluation points grows exponentially with the dimensionality, leading to an intractable computational burden.

* Convergence properties: MC integration has desirable convergence properties, meaning that as the number of random samples increases, the accuracy of the approximation improves. This allows for obtaining more accurate estimates, even for complex and irregular functions.
* Ease of implementation: MC integration is relatively simple to implement and can be easily parallelized for efficient computation on modern hardware. Quadrature methods, on the other hand, can be more complex and challenging to implement, especially for high-dimensional and irregular functions.

For these reasons, we use MC integration for estimating the necessary integrals of each nuisance function. MC integration is much faster than adaptive quadrature, especially in our case where we need to integrate these functions at every vector of covariates (for each observation). To ensure that the number of iterations scales with sample size, the number of iterations used in the MC integration is set to four times the sample size in this paper.

§.§ Estimation

§.§.§ Direct Effect

<cit.> derive the efficient influence function D_η, δ to construct a robust and efficient estimator, which is defined as the solution in θ to the estimating equation ℙ_n D_η̂, δ = 0, given a preliminary estimator η̂ of η. They advise utilizing cross-fitting in the estimation process to avoid entropy conditions on the initial estimators, and we employ this approach here. To do this, the index set 1, …, n is randomly partitioned into K equally sized estimation samples, V_k. For each k, the corresponding training sample T_k is obtained by excluding V_k from the index set. The estimator η̂_T_k is derived by training the prediction algorithm using only the data in T_k. The index of the validation set containing observation i is denoted by k(i). The estimator is thus defined as:

θ̂(δ) = 1/n∑_i=1^n D_η̂_k(i), δ(O_i) = 1/n∑_i=1^n[ D^Y_η̂_k(i), δ(O_i) + D^A_η̂_k(i), δ(O_i) + D^Z, W_η̂_k(i), δ(O_i) ]

Effectively, the efficient estimator is the average of the cross-estimated sum of the nuisance-parameter components. Subtracting the mean from this sum then gives us the EIF for this shift parameter, since the EIF is defined as D^Y_η,δ(o) + D^A_η,δ(o) + D^Z,W_η,δ(o) - θ(δ). When comparing θ(δ) to the observed outcome, we employ the scalar delta method by subtracting the two efficient influence functions, resulting in an EIF for θ(δ)_d that can be used for constructing confidence intervals and performing hypothesis testing. By subtracting the two EIFs and calculating the variance of the resulting EIF scaled by the n observations, we obtain the variance of θ(δ)_d, which is asymptotically Gaussian and centered around the true difference. Finally, we construct confidence intervals and conduct hypothesis testing using the standard error. This gives us our final point and variance estimates for θ(δ)_d.

§.§.§ Indirect Effect

One Mediator. We employ one-step estimation or targeted maximum likelihood estimation (TMLE) to estimate the expected outcome of a shift in exposure A without considering the mediator Z. TMLE solves the efficient influence function (EIF), and the delta method is used to estimate the total effect <cit.> by subtracting this EIF from the observed Y EIF (Y - Q(a,w)).
By solving the EIF for the total effect parameter θ(δ)_t using TMLE/one-step estimation and applying the delta method, we obtain the EIF for the indirect effect parameter θ(δ)_i by subtracting θ(δ)_d from θ(δ)_t; the same is done for the point estimates. Although we use different approaches for estimating θ(δ)_t (TMLE) and θ(δ) (estimating equations), both result in efficient estimators. According to the central limit theorem, the distribution of each estimator is Gaussian and centered at the true value. We can compute the estimate of the variance σ_n^2, allowing Wald-style confidence intervals to be computed at a coverage level of (1 - α) as ψ_n ± z_(1 - α/2)·σ_n / √(n).

Many Mediators. In research situations where multiple mediators are measured, we need to adjust the above-described methodology in order to isolate the indirect effect for a given pathway. A simple subtraction of the direct effect from the total effect to derive the indirect effect when multiple potential mediators are present would yield an oversimplified estimate: it would capture the collective indirect effect through all potential mediators rather than the specific indirect effect attributable to the pathway of interest. To delineate the specific indirect effect through the mediator of interest, we adopt a slightly different approach. When we estimate the total effect, we adjust for all the other mediators but not the mediator of interest in the model, symbolized as 𝔼[Y|A, Z_-i, W], where Z_-i represents all mediators other than the mediator of interest. This enables us to isolate the total effect of A on Y with respect to the particular A-Z pathway under investigation. The rest of the estimation procedure is the same: we subtract the direct effect point estimate and EIF from this total effect to get the indirect effect estimates.

§ FINDING MEDIATING PATHWAYS

§.§ Mixed Exposures and Mediators

Up to this point, we have focused on a fixed exposure A and mediator Z, showing the efficient influence functions (EIFs) necessary for estimating the natural (in)direct effects. However, in scenarios involving mixed exposures and multiple potential mediating paths, the most important exposure-mediator (A-Z) paths among a potentially high-dimensional set are typically unknown. Consider a hypothetical situation where five exposures represent different pesticides (A_1-5), and five measured variables potentially mediate the effects of these pesticides (Z_1-5), representing neurotoxicity, endocrine disruption, oxidative stress, immune system effects, and DNA damage. In this scenario, let us imagine that the effects of A_1 and A_2 are mediated through Z_1 and Z_2 respectively, while A_3 shows no measured indirect effects, and A_4 and A_5 have no impact on the outcome. Additionally, Z_3-Z_5 do not mediate any measured exposures. A directed acyclic graph (DAG) illustrating this situation is presented below.
[Figure: a DAG for the mixed exposure scenario with covariates W pointing to A_1, A_2, A_3, Z_1, Z_2, and Y; A_1 → Z_1 → Y and A_2 → Z_2 → Y carry indirect effects (IE), while A_1 → Y, A_2 → Y, and A_3 → Y are direct effects (DE).]

Scenarios featuring mixed exposures and multiple mediating pathways are not uncommon in real-world contexts. For instance, agricultural workers may encounter multiple pesticides, chemicals, and environmental factors, each acting through different mediating pathways to exert health effects. Similarly, industrial employees can be exposed to various chemicals, urban residents to diverse air pollutants, and individuals practicing unhealthy lifestyle habits to the risk of chronic diseases. Even the spread of infectious diseases and climate change can involve a complex interplay of multiple exposures and mediating pathways. In all these instances, understanding the complex interplay between various exposures and mediating pathways is crucial. However, since the A-Z pathways are not known a priori, and repeatedly testing different exposure-mediator pathways could lead to type 1 error, a data-driven approach is essential.

§.§ Basis Function Estimators for Pathway Discovery

Uncovering mediating pathways in our data requires a non-parametric method, as neither the pathways nor the functional forms underlying their relationships are known a priori. We leverage a series of discrete Super Learners (best-fitting flexible estimators selected from a library of candidate estimators) for this task. The constituent learners used in the Super Learner construct non-linear models through linear combinations of basis spline terms and their tensor products, rendering them ideal for the task at hand. In the most flexible setting, we form indicator variables for each predictor. These variables denote whether a predictor X is less than or equal to a specific value x_s; this approach can be extended to combinations of predictors, such as (X_1, X_2). Consequently, a function of our outcome distribution can be represented as:

ψ_β = β_0 + ∑_s⊂{1,2,...,p}∑_i=1^n β_s,i ϕ_s,i, where ϕ_s,i = I(X_i,s ≤ x_s), A ∈ ℝ^p

Here, s denotes indices of subsets of the predictors. This estimator is known as the highly adaptive lasso (HAL) estimator <cit.>. Its unique attribute is its theoretically proven n^-1/4 convergence, a necessary condition for the √(n) rate conditions to hold for our estimator and for convergence to the selection of basis functions of true pathways in the underlying, unknown DGP. However, the HAL estimator is not scalable in high dimensions. Therefore, the estimators employed in NOVAPathways that return tensor products of arbitrary order and approximate this more exhaustive approach include the earth <cit.>, polspline <cit.>, and hal9001 (under restrictions) <cit.> packages.
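As an illustration of how such basis-function fits expose the variables they use (the full pathway-construction procedure is described in the steps below), the following R sketch inspects an earth fit; the data frame dat and the simple matching logic are assumptions for illustration and do not reproduce the NOVAPathways implementation.

    # Fit E(Z1 | A, W) and E(Y | A, Z, W) with earth (MARS) and inspect which
    # predictors enter the selected basis terms.
    library(earth)

    fit_z1 <- earth(Z1 ~ A1 + A2 + A3 + A4 + A5 + W1 + W2 + W3, data = dat, degree = 2)
    fit_y  <- earth(Y ~ ., data = dat, degree = 2)

    # `dirs` has one row per basis term and one column per predictor; a non-zero
    # entry means the predictor appears in that term. Restrict to selected terms.
    used_in <- function(fit) {
      d <- fit$dirs[fit$selected.terms, , drop = FALSE]
      colnames(d)[colSums(d != 0) > 0]
    }

    exposures_driving_z1 <- grep("^A", used_in(fit_z1), value = TRUE)
    mediators_in_y_model <- grep("^Z", used_in(fit_y), value = TRUE)

    # A candidate pathway A_j -> Z_1 -> Y exists when some A_j drives Z_1 and
    # Z_1 appears in the outcome model.
    if ("Z1" %in% mediators_in_y_model) print(paste(exposures_driving_z1, "-> Z1 -> Y"))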
Each method utilizes a linear combination of basis functions to estimate the conditional outcome, allowing us to extract variable sets used in these basis functions as our data-adaptively identified variable sets. Our process to construct pathways includes: * Fitting 𝔼(Z|A,W) as β_0 + ∑[β_s · h(A, W)_s]. Here, β_0 is the intercept, β_s are the coefficients, h(A, W)_s are the basis functions involving A and W, and the sum is over all basis functions in the model. * Extracting basis functions for A with non-zero coefficients. * Fitting 𝔼(Y|A,Z,W) = β_0 + ∑[β_s · g(A, Z, W)_s]. * Matching A to Z pathways: we align the basis functions involving A from the first model (𝔼(Z|A,W)) with the basis functions involving Z from the second model (𝔼(Y|A,Z,W)), if used, to identify the A-Z pathways. Pathways are also AZ basis functions used directly in the second model. This stepwise approach is necessary. In cases where the effects of A go entirely through Z, or when effects that don't pass through Z are negligible for the model fit, the second model will only contain basis functions for Z. As such, the first model is required to illuminate the underlying A driver, thereby establishing the pathway connection. In summary, this approach non-parametrically identifies mediating pathways in a mixed exposure scenario. §.§ Non-Parametric Analysis of Variance for Identifying "Important" Pathways Upon identification of the optimal basis spline estimator for each sequential regression segment, we use an ANOVA-like decomposition of basis functions to rank the "important" variable sets employed by each algorithm; thereby filtering to the most important pathways. This selection process becomes critical in high-dimensional A and Z scenarios, where the possible paths multiply, and the goal is to discern the most influential pathways on Y adaptively. We apply a variant of ANOVA, generalized for large-scale, non-parametric models. In this context, we partition the response variable's variance based on the contributions from distinct basis factors. For multivariate adaptive regression models, the variance is decomposed into the individual contributions of linear basis functions. For highly adaptive lasso models, zero-order basis functions (exposure-covariate indicators) make these contributions. In both cases, the F-statistic is calculated using the traditional ANOVA formula, albeit with modifications to accommodate the non-linear model concerning the original covariates. The response variable's variance is split into two: variance explained by the linear combination of basis functions and the residual variance left unexplained by the model. The F-statistic represents the ratio of explained to residual variance, adjusted for degrees of freedom. The F-statistic is computed for each basis using the standard formula, presuming a linear relationship between the response variable and basis functions, though the basis functions themselves need not be linear in the original covariates. Once we've computed F-statistics for each basis function, we have a measure of each basis function's contribution to the explained variance in the response variable. However, these basis functions represent transformations (which may or may not be linear) of the original variables. Hence, we're interested in getting a measure of the importance of each variable, not just the individual basis functions. To aggregate these F-statistics to the variable level, we need to map each basis function back to the original variables it was derived from. 
We do this using the naming conventions of the basis functions, which contain the names of the variables they were derived from. This allows us to identify which F-statistics belong to which variables. Once this mapping is complete, we have a collection of F-statistics for each variable, with each statistic representing a different basis function of that variable. To aggregate these statistics, we take their sum. The sum provides a measure of the total contribution of all basis functions of a variable to the explained variance in the response variable. In other words, it gives us a measure of the overall importance of that variable. Finally, we rank the variables according to these sums of F-statistics, which we can then use to filter variables in subsequent analyses. It's important to note that this approach assumes that the F-statistics of basis functions of a variable can be meaningfully added together. This assumption holds true if the basis functions are orthogonal (i.e., uncorrelated), as is the case with splines. However, it may not hold if the basis functions are correlated, which might be the case with other types of basis functions. We then rank variable sets based on the computed F-statistics, and subsets can be decided based on the F-statistic quantile to yield a concise variable list. The resultant list contains variable sets that meet the F-statistic threshold. This procedure is applied to both 𝔼(Z|A,W) and 𝔼(Y|A,Z,W) models, to filter A based on the F-statistics driving each mediator, and to filter Z, A, and A-Z basis functions, respectively. This variable set process is implemented within a V-fold cross-validation framework using data, which we discuss in the subsequent section. It's worth noting that our proposed methodology operates on the principle of heuristics, aiming for an approach that's both computationally practical and effective. It's not designed to achieve theoretical optimality but rather to robustly identify potential pathways, that are part of the data-adaptive target parameter. Theoretical rigor is still maintained during the estimation step. While other methods could be employed, our approach offers a blend of simplicity, speed, and suitability for the task at hand by using basis-function estimators in the two step process that are both flexible but also interpretable, which allows us to construct the pathways. For example, methods could be employed such as using exposure or exposure-mediator sets used in the branches of a best fitting decision tree <cit.> to identify potential pathways. § CROSS-ESTIMATION Ensuring the estimators of our mediation target parameters meet the requisite complexity conditions, such as smoothness (differentiability) and entropy small enough to satisfy the Donsker conditions, can be challenging in high-dimensional settings (p > n) that necessitate complex/adaptive ML methods. Although verifying entropy conditions is feasible for certain machine learning techniques like lasso, it becomes notably difficult with methods involving cross-validation or hybrid models, such as Super Learner. To address this, we employ a strategy of sample splitting. This approach separates the data into two independent sets: one for estimating the nuisance functions and the other for constructing the mediation parameters. Originally proposed by Bickel and later refined by Schick, this strategy has been extended to k-fold cross-validation, allowing for the average mediation estimates from different data partitions to be employed. 
Sample splitting allows us to handle the more complex task of identifying mediating pathways within high-dimensional data. Typically, we lack prior knowledge of these pathways amidst a diverse mixture of exposures and mediators, necessitating data-adaptive identification methods. The separate data partitions help ensure that pathway discovery and estimation of direct and indirect effects are not overfit to the sample data, thus avoiding the statistical pitfall of double-dipping, which is akin to multiple testing issues. This process of identification has been termed "dredging with dignity" in the literature <cit.>, recognizing the necessity of exploring the vast array of potential paths in a principled manner. Just as the analyst might be tempted to cherry-pick interesting results from multiple testing, so too can the analyst fall into the trap of selecting intriguing pathways from the same dataset. This separate-sample approach steers clear of that, offering a way to explore high-dimensional pathways responsibly.

§.§ K-fold Cross-Validation

K-fold cross-validation is a technique that divides our observations, indexed from 1 to n, into K equally sized subgroups. For each k, an estimation sample P_k is defined as the k-th subgroup of size n/K, while the complement of P_k, denoted as P_n,-k, serves as the parameter-generating sample. Using P_n,-k, we identify mediation pathways in the exposure-mediator space by employing basis functions from the best-fitting b-spline estimators. With these mediation pathways fixed, we then train nuisance parameter estimators on the same P_n,-k samples, so that in each fold we have nuisance estimators for every component of the EIF; these are essential for solving the EIF and providing asymptotically unbiased estimators. The process is carried out in a round-robin manner. For K=10, we obtain 10 (possibly different) pathways, outcome estimators Q_k, and density estimators g_k, e_k, and r_k, which are used to construct the nuisance parameters that comprise D^Y(o), D^A(o), and D^Z,W(o).

To estimate a pooled θ(δ) using the full data, we stack the estimation-sample estimates for each nuisance parameter across the folds. We then calculate the sum and average across the folds to obtain our point estimate, and subtract this average from the summed nuisance parameters to obtain the EIF for the full data, yielding our pooled θ(δ) estimate. The variance is then calculated from this pooled EIF. The NDE parameter is obtained by subtracting the pooled θ(δ) from the full-data mean outcome, and the delta method is applied to the pooled EIFs to obtain the EIF for the pooled NDE (θ(δ)_d), which is used to derive confidence intervals (CIs).

A similar procedure is employed for the total effect, for which we use TMLE or one-step estimation. For TMLE, we stack the initial estimates and clever covariates across all folds and perform a fluctuation step across the full set of initial estimates and clever covariate estimates to obtain our estimate ϵ. We then update the counterfactuals across all folds using the ϵ values. The updated conditional means, counterfactuals, and clever covariates are employed to solve the EIF across the entire sample for the shift in A, ignoring Z. The delta method is applied to subtract the EIF for a shift in A, ignoring Z, from the EIF of the observed Y to obtain the EIF for the total effect (θ(δ)_t), and the same process is used to derive the point estimate for the total effect. The delta method is again used to estimate the pooled NIE (θ(δ)_i).
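A simplified sketch of this cross-fitting loop for θ(δ) is shown below; fit_nuisance() and eif_components() are placeholder functions standing in for the nuisance estimation and EIF evaluation described above, not functions in SuperNOVA.

    # Cross-fitting: train nuisance estimators on the parameter-generating folds
    # and evaluate the EIF components on the held-out estimation fold.
    cross_fit_theta <- function(data, K = 10, delta = 1) {
      folds <- sample(rep(seq_len(K), length.out = nrow(data)))
      eif <- numeric(nrow(data))
      for (k in seq_len(K)) {
        train <- data[folds != k, ]              # parameter-generating sample P_{n,-k}
        est   <- data[folds == k, ]              # estimation sample P_k
        nuis  <- fit_nuisance(train, delta)      # placeholder: fits Q, g, e, r (and pathways)
        eif[folds == k] <- eif_components(nuis, est, delta)  # placeholder: D^Y + D^A + D^{Z,W}
      }
      theta <- mean(eif)                         # pooled point estimate
      se    <- sd(eif - theta) / sqrt(nrow(data))
      c(estimate = theta, lower = theta - 1.96 * se, upper = theta + 1.96 * se)
    }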
In addition to the pooled estimates, we report k-fold-specific estimates of the (in)direct effects and fold-specific variance estimates for these target parameters using the fold-specific IC. This is important because, if the mediation pathway identified in each fold varies significantly, the pooled estimates can be challenging to interpret (if the same pathway is not found across all folds). By providing both k-fold-specific and pooled results, users can assess the robustness of the pooled result across the folds. To visualize the algorithm and what happens in the parameter-generating and estimation folds, we provide a schematic in Figure <ref>.

§.§ Pooled Estimates under Data-Adaptive Delta

Stochastic interventions, particularly those involving significant shifts in exposure, can be susceptible to positivity violations, leading to bias and increased variance in exposure effect estimation. This happens if the exposure shift is so substantial that some subgroups have zero probability of receiving a specific exposure level. This challenge persists even when utilizing an efficient estimator like TMLE. To mitigate this, we can employ a data-adaptive approach to adjust the exposure shift magnitude, δ. When the exposure is continuous, we modify δ within the parameter-generating sample to meet specific positivity criteria, which helps limit positivity violations. However, when the exposure is quantized, a delta of one, signifying an increase of one quantile bin, is the minimum and most interpretable δ.

Consider H(a_δ, w)_i, the probability density ratio for observation i upon an exposure shift of δ. We aim to ensure that all observations have a ratio below a specific threshold λ. To do this, we iteratively decrease δ by a small amount, ϵ, until H(a_δ, w)_i < λ for all observations i:

∀ i: H(a_δ, w)_i = g_n,-k(a_i - δ | w_i) / g_n,-k(a_i | w_i) ≤ λ

Here, λ is a preset threshold, and δ is reduced until all clever covariate density ratios fall below λ. By default, in our SuperNOVA package, λ is set to 50 and ϵ to 10% of δ. This means that if any predicted conditional density under the shift exceeds the density under the observed exposure by a factor of 50, we reduce δ. Finally, we account for the data-adaptive δ during the pooling process. If δ is constant, the pooling is simply an average of the estimates across folds. However, for a data-adaptive δ, we average δ and pair it with the average estimates and the pooled variance calculations described previously.

§.§ Interpreting Shifts when Exposure is Discretized

In the case where the exposure variable is discretized into quantiles, we can still interpret the results in a continuous context. If we let A_min and A_max denote the minimum and maximum values of the continuous exposure, respectively, and n_bins denote the number of quantile bins, then each bin represents an interval of size (A_max - A_min)/n_bins on the continuous scale:

A_quantile = A_min + ((A_max - A_min)/n_bins) · (q - 1)

Here, q denotes the quantile rank, which ranges from 1 (for the smallest values of the exposure) to n_bins (for the largest values of the exposure). Each value of A_quantile represents the lower bound of the interval on the continuous scale that corresponds to that quantile. For example, if we have an exposure variable ranging from 0 to 10 and we discretize it into 5 bins, then each bin represents an interval of size (10 - 0)/5 = 2 on the continuous scale. Thus, the first bin represents the interval from 0 to 2, the second bin represents the interval from 2 to 4, and so on.
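This mapping can be written as a trivial helper (shown only to make the bookkeeping explicit; the function name is illustrative):

    # Lower bound of the continuous interval represented by quantile rank q.
    quantile_lower_bound <- function(q, a_min, a_max, n_bins) {
      a_min + (a_max - a_min) / n_bins * (q - 1)
    }

    quantile_lower_bound(q = 1:5, a_min = 0, a_max = 10, n_bins = 5)
    #> [1] 0 2 4 6 8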
Despite using a discretized version of the exposure in the analysis, the interpretation can still be related back to the original continuous exposure scale. In this way, if the discretized approach is preferable, a pseudo-continuous interpretation is still possible.

§ SIMULATIONS

In this section, we demonstrate using simulations that our approach identifies the correct mediating pathways in a complex mixture of exposures and mediators and correctly estimates the natural direct, indirect, and total effects for given pathways using stochastic interventions.

§.§ Data-Generating Processes

We first construct a simple data-generating process (DGP) where Y is generated from a linear combination of an exposure and a mediator. In this DGP we measure the asymptotic behavior of the (in)direct effect estimators keeping the pathway fixed (not data-adaptively discovering the pathway). We run simulations for both continuous and discrete exposures to investigate the behavior of the estimator using numeric integration vs. simple weighted sums. In the second DGP, we generate multiple pathways to the outcome from multiple exposures and measure the estimator's performance in data-adaptively identifying the correct pathways.

§.§.§ Simple Mediation Simulation

This DGP has the following characteristics, with O = (W, A, Z, Y). We refer to this as "DGP 1" moving forward and use it to investigate the asymptotic behavior of our estimates. The data-generating process involved the following steps:

* Baseline covariates:
* W_1 ∼𝒩(20, 2^2): generated from a normal distribution with mean 20 and standard deviation 2.
* W_2, W_3 ∼Binomial(1, 0.5): generated from binomial distributions with size 1 and probability 0.5.
* W_4 ∼𝒩(30, 3^2): generated from a normal distribution with mean 30 and standard deviation 3.
* W_5 ∼Poisson(1.2): generated from a Poisson distribution with rate 1.2.
* The exposure A was generated from a normal distribution, conditional on the covariate W_1: A ∼𝒩(1 + 0.5 W_1, 1^2)
* The exposure A was shifted by an amount δ (in this example δ = 1), producing A_δ: A_δ = A + δ
* The mediator Z was generated from a normal distribution, conditional on the exposure A and the covariate W_1: Z ∼𝒩(2 · A + W_1, 1^2)
* The mediator Z was also shifted given a shift in A, producing Z_A_δ: Z_A_δ∼𝒩(2 · A_δ + W_1, 1^2)
* The outcome Y, along with Y_A_δ (Y given a shift in only A) and Y_A_δ, Z_A_δ (Y given a shift in both A and Z), was generated as a linear function of the exposure A and the mediator Z:
Y = 10 · Z + 40 · A + ϵ
Y_A_δ = 10 · Z + 40 · A_δ + ϵ
Y_A_δ, Z_A_δ = 10 · Z_A_δ + 40 · A_δ + ϵ

We use this simulation to test NOVAPathways' estimation of the total, direct, and indirect effects. Our approach was to keep things relatively straightforward, keeping the DGP a linear process to test the asymptotic behavior of the estimator when the functional forms are correctly specified (GLMs that model the true underlying function are included in each Super Learner).

§.§.§ Complicated Mediation Simulation

We now create a more complicated scenario where there are many correlated exposures and some act through mediators to drive the outcome. In this simulation, we want to test NOVAPathways in discovering the correct paths. This data-generating process (DGP), with O = (W, A, Z, Y), we call "DGP 2" moving forward. The exposures A = (A_1, A_2, A_3, A_4, A_5) are generated and have potential indirect (through Z = (Z_1, Z_2, Z_3, Z_4, Z_5)) effects on Y.
Even though there are a total of 25 possible mediating paths from the 5 exposures and 5 mediating variables, only the exposures A_1 and A_2 have actual direct and indirect (through Z_1 and Z_2, respectively) effects on Y. Our goal is to test the proportion of times across the simulation that the correct paths among the 25 potential ones are discovered. The data-generating process involved the following steps:

* Baseline covariates:
* W_1 ∼ 𝒩(20, 2^2): generated from a normal distribution with mean 20 and standard deviation 2.
* W_2, W_3 ∼ Binomial(1, 0.5): generated from binomial distributions with size 1 and probability 0.5.
* W_4 ∼ 𝒩(30, 3^2): generated from a normal distribution with mean 30 and standard deviation 3.
* W_5 ∼ Poisson(1.2): generated from a Poisson distribution with rate 1.2.
* Five exposure variables A = (A_1, A_2, A_3, A_4, A_5) are generated from a multivariate normal distribution, conditional on the covariates W:
* A_1 ∼ 𝒩(1 + 0.5 · W_1, Σ)
* A_2 ∼ 𝒩(2 · W_2 · W_3, Σ)
* A_3 ∼ 𝒩(1.5 · W_4 / 20 · W_1 / 3, Σ)
* A_4 ∼ 𝒩(3 · W_4 / 2 · W_2 / 3, Σ)
* A_5 ∼ 𝒩(2 · W_5, Σ)
The exposures are correlated according to the following correlation matrix, Σ, which represents common scenarios in air pollution where particulate matter and gaseous pollutants show high intra-group correlation but lower inter-group correlation:
Σ = [ 1 0.8 0.3 0.3 0.2; 0.8 1 0.3 0.3 0.2; 0.3 0.3 1 0.8 0.2; 0.3 0.3 0.8 1 0.2; 0.2 0.2 0.2 0.2 1 ]
* Five mediators Z = (Z_1, Z_2, Z_3, Z_4, Z_5) are generated from normal distributions, conditional on the exposures A and covariates W:
* Z_1 ∼ 𝒩(2 · A_1 + W_1, 1^2)
* Z_2 ∼ 𝒩(2 · A_2 + W_2, 1^2)
* Z_3 ∼ 𝒩(5 · A_3 · A_4 + W_3, 1^2)
* Z_4 ∼ 𝒩(3 · A_4 · W_4, 1^2)
* Z_5 ∼ 𝒩(4 · A_5 · W_5, 1^2)
* The outcome Y is generated as a linear function of Z_1, A_1, W_3, A_2, and Z_2:
Y = 10 · Z_1 + 40 · A_1 + 15 · W_3 - 6 · A_2 + 7 · Z_2

§.§.§ Calculating Ground-Truth

We numerically approximated the natural direct effect (NDE), natural indirect effect (NIE), and total effect (ATE) of the exposure A on the outcome Y to high precision using 100,000 samples from our DGP. In the DGP used to assess estimation (DGP 1), δ is equal to 1.

* The NDE was calculated as the mean difference in Y when shifting the exposure A while keeping the mediator Z constant: NDE = 𝔼(Y_A_δ, Z - Y)
* The NIE was calculated as the mean difference in Y when shifting both the exposure A and the mediator Z: NIE = 𝔼(Y_A_δ, Z_A_δ - Y_A_δ)
* The ATE was calculated as the sum of NDE and NIE: ATE = NDE + NIE

Additionally, we conducted the same analysis using a discrete exposure split into 10 quantiles (deciles) after step 2, and computed the corresponding quantile-based NDE, NIE, and total effects.

§.§ Evaluating Performance

In each simulation, we assessed the asymptotic convergence to the true exposure relationships used in the DGP, as well as the convergence to the true in(direct) effects and total effects for these exposure-mediator pathways. To do so, we followed these steps:

* We generated a random sample of size n, which we divided into K equal-sized estimation samples of size n_k = n/K, each with a corresponding parameter-generating sample of size n - n_k.
* At each iteration, we used the parameter-generating sample to define the mediation pathway(s) and construct the estimators for the nuisance parameters used for θ(δ)_d and θ(δ)_t. We then used the estimation sample to obtain the causal parameter estimate using estimating equations and TMLE. We repeated this process for all folds.
* At each iteration, we output the stochastic shift estimates from the pooled one-step and TMLE estimation.
* For the simple DGP, we use the var_sets parameter in SuperNOVA to bypass the data-adaptive discovery of mediating paths and simply examine performance on the single A-Z pathway. For the complicated DGP, we use the discover_only parameter to perform only pathway discovery and skip estimation. For the complicated DGP we report the proportion of iterations in which NOVAPathways identifies the correct two pathways out of the twenty-five possible pathways.

To evaluate the performance of our approach, we calculated several metrics for each iteration, including bias, variance, MSE, confidence interval (CI) coverage, and the proportion of instances in which the true mediating pathways were identified. To visually inspect whether the rate of convergence was at least as fast as √(n), we show projections of a √(n)-consistent estimator starting from the initial bias. For brevity, we focus on the absolute bias and confidence interval coverage. We calculated these performance metrics at each iteration, performing 50 iterations for each sample size n = (250, 500, 1000, 1500, 2000, 2500, 3000). We used SuperNOVA with 10-fold cross-validation and the default learner stacks for each nuisance parameter and data-adaptive parameter. Additionally, the quantile threshold was set to 0 to include all basis functions used in the final best-fitting model. To assess whether our estimator has a normal sampling distribution, we standardize the bias by dividing by the standard deviation of the estimate at each sample size and plot the density distributions for the direct, indirect and total effects.

§.§ Default Estimators

SuperNOVA has two built-in methods for conditional density estimation. The haldensify estimator <cit.> can be used for conditional density estimation of g_n = p(A|W), e_n = p(A|W,Z) and r_n = p(Z|W). Haldensify is a flexible, data-adaptive approach that employs a histogram-based technique to estimate densities. The maximum interaction degree and the number of bins used to discretize the outcome are set by the user. Haldensify works by constructing a histogram of the data and employing a multivariate step function to estimate the density, which makes it computationally efficient and suitable for a wide range of applications. As an alternative to haldensify, the SuperNOVA package also offers the option to use Super Learner for conditional density estimation. The default Super Learner stack includes a diverse set of learners, such as glm <cit.>, elastic net <cit.>, random forest <cit.>, and xgboost <cit.>. We create estimators based on homoscedastic errors (HOSE) and heteroscedastic errors (HESE). For the simulations presented in this paper, we have opted to use Super Learner. This way we can investigate the behavior of the estimator when the true function, or an algorithm that approximates the true function, is included in the Super Learner library. Additionally, we need an estimator for Q̅ = E(Y|A,W). SuperNOVA provides default algorithms to be used in a Super Learner <cit.> that are both fast and flexible. For our data-adaptive procedure, we include learners from the packages earth <cit.>, polspline <cit.>, and hal9001 <cit.>. The results from each of these packages can be formed into a model matrix, on which we can fit an ANOVA to obtain the resulting linear model of basis functions.
In the case where A is discrete, g_n = p(A|W) and e_n = p(A|W,Z) are instead Super Learners built from categorical outcome estimators such as neural networks, random forest and polspline. §.§ Results §.§.§ Do Target Parameters Estimated by NOVAPathways Converge to Truth at 1/√(n) for Continuous Exposures? An important aspect of our estimator's performance is its convergence rate. In the context of our simulation (DGP 1 with one exposure-mediator pathway), the convergence rate signifies how quickly the estimator approaches the true parameter value as the sample size increases. Ideally, we want estimates for the total effect, direct effect and indirect effect to show convergence to the truth at √(n) using a DGP that, although simple, at least includes confounding and relationships that feasible could be observed in a real-world analysis setting. Figure <ref> exhibits the absolute bias and the anticipated rate of convergence for a √(n) consistent estimator, given the initial bias, when the exposure is truly continuous. It shows the bias as the sample size increases to 3000. Observing the estimates from the integration method, the bias is generally lower but exhibits a non-convergent behavior when reaching a sample size of 3000, particularly for Natural Direct Effect (NDE) and Natural Indirect Effect (NIE). Although the pseudo-regression approach displays greater consistency, the bias remains considerably high, hindering proper coverage. Coverage, illustrated in Figure <ref>, refers to the proportion of iterations for each sample size where confidence intervals contain the true value. For both methodologies—integration and pseudo-regression—the estimated coverage for NDE and NIE does not achieve the desired 95% level. This shortfall is likely attributable to the bias in estimates induced by numeric integration, a necessary procedure for estimating the nuisance parameters in the case of a continuous exposure. Therefore, continuous exposures do not demonstrate the expected √(n) convergence. This behavior implies that our estimator falls short of the necessary criteria to qualify as asymptotically normal. The departure from asymptotic normality may partly stem from approximations made during the numerical integration required for our estimation process. Alternatively, coding inaccuracies could be at play. Theoretically, the estimator should function correctly, so these anomalies warrant further investigation. Additionally, a sample size of 3000 may still be too small to assess for normality. §.§.§ Do Target Parameters Estimated by NOVAPathways Converge to Truth at 1/√(n) for Quantized Exposures? We also evaluated the performance of the NOVAPathways estimator under our DGP 1 scenario where the exposure variable is quantized. Like for the truly continuous exposure, the main aspects of the evaluation are the rate of convergence and the coverage of the confidence intervals, as these metrics represent the robustness and reliability of the estimator. In contrast to the results for continuous exposures, we observe satisfactory performance of the estimator for quantized exposures. The bias for the estimated NDE, NIE, and total effect demonstrates a clear trend of convergence towards zero with increasing sample size, for both integration and pseudo-regression methods. Figure <ref> shows the absolute bias and expected √(n) convergence given initial bias as sample size increases. This pattern is more prominent for the integration method, with bias levels generally being much lower than in pseudo-regression. 
For instance, for the NDE, the absolute bias using the integration method decreases from 0.668 at a sample size of 250 to 0.0621 at a sample size of 3000. Coverage of confidence intervals for these estimates also shows a desirable pattern. Considering the average coverage across all sample sizes, the coverage for the NDE reaches an average of 95.6% for the pseudo-regression method and remains at 100% for the integration method. For the NIE, the pseudo-regression method provides an average coverage of 85%, while the integration method provides a higher average coverage of 96%. Figure <ref> shows the proportion of confidence intervals that contain the true value for each approach at increasing sample size. Finally, for the total effect, the average coverage across all sample sizes reaches 100%. It is worth noting that for both the NDE and NIE, the pseudo-regression method exhibits lower coverage than the integration method.

In summary, our results demonstrate that when the exposure variable is quantized, the NOVAPathways estimator exhibits the desirable characteristics of a reliable estimator. It provides a rate of convergence that meets the 1/√(n) standard, and the confidence intervals demonstrate appropriate coverage. These results confirm the robustness of NOVAPathways when applied to quantized exposure variables, and underline the necessity of appropriately quantizing exposure variables in order to achieve reliable and valid results.

§.§.§ NOVAPathways Correctly Identifies Mediating Pathways in a Realistic Complex Mixed Exposure-Mediator Situation

In DGP 2, despite having 25 potential mediating pathways in the complex exposure mixture-mediation simulation, NOVAPathways consistently identified the two true pathways (A_1-Z_1 and A_2-Z_2) with a frequency of 100% across sample sizes ranging from 250 to 3000 observations. Furthermore, direct effects of A_1 and A_2 on the outcome Y were also consistently identified across all scenarios, reinforcing the robustness of our detection method. Figure <ref> shows the frequency with which each pathway was detected at each sample size. Note that only detected pathways are reported. However, it is noteworthy that there were some instances of incorrectly identified pathways, as seen from the non-zero frequencies of pathways such as A_1-Z_2, A_2-Z_1, and others, which could be attributed to the high correlation between the exposures. While these false discoveries present opportunities for methodological refinement, the consistently correct identification of the true pathways underpins the effectiveness of our methodology in the presence of multiple mediators and exposures, which is a common scenario in air pollution research. Notably, the incorrect pathways were identified in very few folds, and in such cases the analyst would report the inconsistency of the finding.

§.§.§ Assessing the Validity of NOVAPathways' Inference through Simulations

The fundamental premise of robust inference is verification that the estimator's sampling distribution is normal, centered at zero, and progressively narrowing with increasing sample size. This premise is tested here for the NOVAPathways estimator of the natural direct, indirect, and total effects. We illustrate the empirical distribution of the standardized bias, defined as the difference between the estimated and true values from the data-generating process, normalized by the standard deviation of the estimates across iterations.
The assessment is conducted using 50 iterations per sample size and visualized as a probability density distribution in Figure <ref>. In Figure <ref>, we observe the convergence of the sampling distribution to a mean-zero normal as sample size escalates. This phenomenon is evident across all types of effect estimates. The total effect, calculated via one-step, remains consistent regardless of whether the natural direct effect (NDE) is computed using integration or pseudo-regression methods. The NDE for the integration method is concentrated more closely around zero, albeit exhibiting greater tail variability, while the pseudo-regression counterpart maintains a smoother, centered distribution. The natural indirect effect (NIE) demonstrates the widest dispersion, although still centered around zero. Notably, the pseudo-regression method achieves a slightly narrower distribution around zero for NIE. All plots in Figure <ref> exhibit normal or near-normal distributions centered at zero that contract with an increase in sample size. This characteristic is crucial for the validity of confidence interval construction and underscores the reliability of our estimator. As such, the simulation results affirm the soundness of NOVAPathways's inference methodology. § APPLICATIONS §.§ NHANES Data §.§.§ Data Description To provide a motivating example for the application of NOVAPathways we extracted data from the 2001-2002 cycle of the National Health and Nutrition Examination Survey (NHANES). The NHANES program, managed by the Centers for Disease Control and Prevention (CDC), is a comprehensive set of studies designed to assess the health and nutritional status of adults and children in the United States <cit.>. These studies employ a combination of interviews and physical examinations to capture a broad array of health information. NHANES data is particularly suitable for motivating the use of NOVAPathways due to its representative sample of the U.S. population (specifically for pollution exposure), broad collection of health-related variables, and its open availability. This enables us to make our analysis transparent and easily replicable, fostering open science practices and facilitating methodological testing <cit.>. For these purposes, all code for data cleaning and curation for this motivating analysis example using NHANES is included in the SuperNOVA package which uses the NOVAPathways method. One significant challenge of using cross-sectional datasets like NHANES is the potential for reverse causality, wherein the outcomes/mediators may influence the exposures rather than vice versa. This characteristic violates the temporal assumption required for traditional causal inference <cit.>. However, the use of NHANES data in our study is not primarily to establish causal relationships but rather to provide a real-world demonstration of the capabilities of our method, NOVAPathways. The NHANES data provides a large number of well measured toxic metal exposures, biomarkers for possible mediating pathways and covariates. For our purposes, this offers an opportunity to determine if NOVAPathways identifies consistent mediating pathways in high-dimensional data and delivers interpretable direct, indirect and total effect results based on stochastic shift interventions data-adaptively determined pathways. 
For our motivating example, we investigate the association of a mixture of toxic metals with asthma, both directly and possibly indirectly through biomarkers of inflammation, oxidative stress and immune function. Our choice of the 2001-2002 NHANES data cycle was informed by the fact that this cycle included all relevant variables necessary for a comprehensive investigation into the associations between toxic metal exposures, inflammation, immune function, oxidative stress, and the prevalence of asthma <cit.>. This particular NHANES cycle collected exhaustive data on these variables, offering a unique opportunity to conduct our investigation within a representative sample of the U.S. population. The original NHANES 2001-2002 dataset consisted of 11,039 participants, with 4,260 individuals providing blood samples and consenting to DNA analysis <cit.>. After applying our exclusion criteria, such as missing environmental chemical analysis data, missing key covariate data, and insufficient stored samples for telomere length estimation, our final study sample comprised 1,344 participants.

Our data cleaning and curation techniques were relatively basic, as our main goal is to demonstrate our proposed methodology and software, not to provide a thorough analysis. Nonetheless, data cleaning and curation were undertaken to retain the integrity of the distributions in our dataset while not losing too many observations to missingness. We first omitted observations with missing values in the outcome variable (asthma) and in the crucial exposure variables (toxic metals). We then retained columns where less than 20% of the data was missing. This balance allowed us to maximize the use of available data while avoiding the potential bias from imputing excessive missing values. Next, we imputed missing data in the remaining variables through suitable methods: mean imputation for numeric variables and mode imputation for categorical ones <cit.>. This strategy helped ensure the final dataset maintained the original distributions and variable relationships to the greatest extent possible.

We then quantized the metal exposure data to address the methodological issue our proposed method faces when the exposure is fully continuous. As shown, continuous exposures necessitate numeric integration in the calculation of the mediation effects. However, this approach leads to approximations that are not precise enough, inducing asymptotic bias and resulting in poor confidence interval coverage. To avoid this issue, we quantized the continuous exposure data, transforming each exposure into a categorical variable with equal-frequency bins (deciles in our case). This transformation allows a shift of δ = 1 to represent a one-decile increase, and each nuisance function can then be calculated as a simple weighted sum rather than a numeric integration. By doing this, we have observed improved asymptotic behavior of our estimators and accurate confidence interval coverage.

The selection of toxic metals as exposures in our study was informed by prior literature demonstrating the potential link between toxic metal exposure, oxidative stress, inflammation, and immune function, all factors implicated in the etiology of asthma <cit.>. Several studies have shown that exposure to toxic metals can lead to oxidative stress, which in turn can trigger inflammatory responses and modulate immune function <cit.>.
These processes can potentially contribute to the onset or exacerbation of asthma, hence our interest in exploring these relationships in this study. By investigating these associations within the NHANES 2001-2002 dataset, we aim to show that semi-parametric methods utilizing efficient estimators and data-adaptive target parameters can yield a deeper understanding of the complex interplay between environmental exposures, molecular biomarkers, and disease outcomes. Through this process, we seek to illustrate the utility of our SuperNOVA software, which incorporates the NOVAPathways mediation methodology. Our results, presented in the subsequent tables, offer a comprehensive view of the information that SuperNOVA can generate from a provided dataset.

§.§.§ Consistent Findings for Toxic Metal Exposure on Asthma Through Inflammatory, Oxidative Stress and Immune Function Mediators

Because NOVAPathways data-adaptively discovers exposure-mediator pathways, it is best to first report any notable consistencies across the multiple folds. Cesium, as an exposure, was found in 80% of the folds, demonstrating the greatest consistency among all exposures investigated. This highlights the potential relevance of cesium in our model, warranting further investigation into its role and impact on asthma in future studies. Among the mediators, monocyte percentage and vitamin E emerged as the most consistent across the folds, being detected in 80% and 60% of the folds, respectively. The consistent appearance of monocyte percentage, a key indicator of immune system activation, underscores the possible involvement of immune modulation in the effect of toxic metal exposure on asthma. Similarly, the recurring detection of vitamin E may imply a role for antioxidant mechanisms in modulating the exposure-asthma relationship. Furthermore, we found the exposure-mediator pairs cesium-monocyte percentage and tungsten-monocyte percentage in 60% of the folds. The pairings of specific exposures with monocyte percentage suggest potential pathways through which these elements could influence asthma pathogenesis via immune mechanisms. Meanwhile, the lead-vitamin E pair appeared in 50% of the folds, alluding to another potential pathway via oxidative stress mechanisms. Given these results, we next report the fold-specific and pooled results for Cesium, the Cesium-Monocyte Percentage pathway, the Lead-Vitamin E pathway, and the Tungsten-Monocyte Percentage pathway.

§.§.§ Results for Cesium, Lead and Tungsten Through Monocyte Percentage and Vitamin E

We first examined the potential impact of Cesium exposure on the likelihood of developing asthma, independent of mediation. Cesium appeared in 8 out of 10 folds, implying a possible influence on the disease's progression. Here we use a one-decile shift in Cesium exposure and compare the expected probability of asthma under this shift with the observed probability of asthma. A decile increase is equivalent to a rise of 4.182/L on the continuous Cesium scale. While the results varied slightly across the folds, the effect generally leaned towards the positive. In the pooled analysis, a decile increase in Cesium corresponded to a 0.012 increase in the probability of asthma. However, this result did not achieve statistical significance at the conventional 0.05 level (p-value = 0.17).
While the findings do not conclusively establish a relationship between cesium and asthma, the consistent results hint at a potential correlation warranting further investigation. Table <ref> presents the total effects of a decile shift in Cesium on the likelihood of asthma. We subsequently examined how much of this effect passed through monocyte percentage versus how much was direct. Table <ref> displays the fold-specific and pooled results for the NDE, NIE, and total effect of Cesium on asthma via monocyte percentage, using both the pseudo-regression and double integration methods to construct the estimator. With pseudo-regression estimates, we observed an NDE of 0.61 (-0.66 to 1.88) and an NIE through monocyte percentage of -0.60 (-1.86 to 0.66). Neither result was significant. However, despite the absence of traditional statistical significance, the persistence of cesium through monocyte percentage across the majority of the folds suggests a potential influence of this pathway on asthma.

For Lead and Tungsten, a similar process was followed. In this instance, a decile increase in Lead and Tungsten corresponded to a rise of 1.3/L and 0.285/L on the respective continuous scales. A pathway for Tungsten through monocyte percentage was found in 60% of the folds. These results are provided in Table <ref>. As with the Cesium-monocyte percentage pathway, although these pathways were found in a majority of folds, the effects were not significant. A one-decile increase in Tungsten is associated with a change of -0.005 (-0.013 to 0.004) in the probability of asthma (p-value = 0.28). Results for the NDE using pseudo-regression are the same as the total effect, indicating no indirect effect through monocyte percentage. Lastly, we give results for Lead on asthma through vitamin E. Table <ref> shows these results. Again, the total effect, NDE and NIE are not significant given a decile increase in lead, although this pathway was identified in 50% of the folds.

Through this analysis of the NHANES dataset, we highlight the proficiency of NOVAPathways in identifying mediating pathways within high-dimensional data contexts. The dataset in focus included 9 exposures and 12 mediators, thus theoretically encompassing 108 potential pathways. These pathways could mediate the effects of toxic metals via proxies of inflammation, oxidative stress, and immune function. Using flexible basis estimators, NOVAPathways successfully discerned the most influential pathways and provided estimates associated with a one-decile increment in exposure. While none of the effects reached the threshold of statistical significance, some exhibited borderline significance. Our comparison of natural direct effect (NDE) estimates from the pseudo-regression and integration procedures revealed similar trends. The stability of estimates across the folds and the decreased variance of the pooled results, as anticipated, reaffirmed the advantage of pooling estimates across folds for precision. We acknowledge the potential limitations of our method, as we discretized exposures prior to implementing NOVAPathways. This more rudimentary representation of the exposures, although simplifying the data, might make pathway discovery more challenging. Nevertheless, our primary objective was not pinpoint accuracy of a causal inference but rather a demonstration of the potential output from NOVAPathways. We sought consistency of results across folds and the provision of interpretable estimates for the NDE, NIE, and total effects.
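For reference, the fold-pooling that underlies the pooled rows in the tables above can be sketched as follows. This is a minimal illustration, not SuperNOVA's exact implementation: it assumes, per the methods described earlier, that the pooled point estimate is the simple average of the fold-specific estimates, that a data-adaptive δ is likewise averaged, and that the pooled variance is computed from the influence curves stacked across the estimation folds.

# Illustrative pooling of fold-specific results (not SuperNOVA's exact code).
# fold_ests:   vector of K fold-specific estimates
# fold_ics:    list of K influence-curve vectors from the estimation samples
# fold_deltas: the (possibly data-adaptive) delta used in each fold
pool_folds <- function(fold_ests, fold_ics, fold_deltas) {
  est    <- mean(fold_ests)               # pooled point estimate
  ic_all <- unlist(fold_ics)              # stack influence curves across folds
  se     <- sd(ic_all) / sqrt(length(ic_all))
  list(
    delta    = mean(fold_deltas),         # averaged data-adaptive delta
    estimate = est,
    ci       = est + c(-1.96, 1.96) * se  # Wald-style 95% interval
  )
}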
In conclusion, this example underscores NOVAPathways' utility in navigating complex associations within high-dimensional data, offering a useful tool for analysts working with multiple exposures and potential mediators.

§ SOFTWARE

The accessibility and application of statistical software that implements semi-parametric methods respecting the data-generating processes found in real-world data are pivotal for ensuring consistent and reproducible outcomes across research studies. SuperNOVA, an open-source R package, attempts to address this need by facilitating the evaluation of causal effects of mixed exposures using asymptotically linear estimators, and it now includes the NOVAPathways method for mediation. These estimators are proven to converge to the true estimand at a √(n) rate, provided the nuisance parameter estimates converge at a rate of at least n^(1/4). Its ability to handle both continuous and discretized exposures addresses a notable limitation of its predecessor, the medshift package <cit.> developed by Ivan Diaz and Nima Hejazi, which only supports binary exposures. We also offer additional functionality compared to the longitudinal modified treatment policies approach and package <cit.> by data-adaptively finding mediating pathways in cross-sectional data. While continuous exposures are accommodated in SuperNOVA, caution is warranted due to the potential for bias introduced by numerical integration, as we have shown.

At the heart of SuperNOVA, with its integrated NOVAPathways, is Super Learning, a machine learning technique employed via the SL3 package <cit.>. This methodology allows SuperNOVA to adaptively identify mediating pathways using ensembles of basis-function estimators, improving the adaptability and efficiency of the software in finding pathways even in complex exposure settings. Likewise, Super Learning is used for the estimation of each nuisance function. Comparison with existing software illustrates the potential for SuperNOVA to enhance the accuracy and flexibility of mixed exposure-mediator research. Many environmental health studies that have performed mediation analyses have used packages such as medflex <cit.> and mediation <cit.>, which are largely reliant on parametric assumptions. For instance, these packages make strong assumptions about functional form, and they often assume no interactions between the exposure and mediator, which can lead to biased estimates of direct and indirect effects. In contrast, SuperNOVA's semi-parametric approach relaxes these assumptions, potentially resulting in more accurate and consistent estimates. Additionally, no other method or package currently exists that can identify pathways and make valid inference on these pathways in the presence of high-dimensional data.

SuperNOVA's design allows for both sequential and parallel computing, leveraging the parallel processing capabilities offered by the furrr package <cit.>. Its computational efficiency expands its suitability for use on personal computers, which can be crucial in resource-limited research settings. Additionally, when the analyst has pre-defined pathways they want to test, the path discovery step of NOVAPathways can be skipped and the direct, indirect and total effects can be estimated directly using the cross-validation procedure. Conversely, if the analyst is interested only in finding the most relevant exposure-mediator paths to guide future study development, that approach is also available.
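To give a sense of the intended workflow, the sketch below shows an illustrative call. Apart from var_sets and discover_only, which are referenced above, the function and argument names are assumptions made for illustration and may not match the package's actual interface; the package vignette should be consulted for the exact signature.

# Illustrative workflow sketch only; argument names other than var_sets and
# discover_only are assumptions and may differ from the real interface.
library(SuperNOVA)

result <- SuperNOVA(
  w = covariates_df,                     # baseline covariates W
  a = exposures_df,                      # mixed exposures A (quantized into deciles)
  z = mediators_df,                      # candidate mediators Z
  y = outcome_vec,                       # outcome Y (e.g., an asthma indicator)
  deltas = rep(1, ncol(exposures_df)),   # a one-decile shift per exposure
  n_folds = 10,                          # cross-validation folds
  discover_only = FALSE,                 # TRUE: report discovered pathways, skip estimation
  var_sets = NULL                        # or a list of pre-specified exposure-mediator paths
)

The returned object would then supply the outputs described in the text: the proportion of folds in which each pathway was found, and the fold-specific and pooled direct, indirect and total effects.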
Additional features of SuperNOVA include a comprehensive vignette, a detailed exposition of the underlying semi-parametric theory, and comparisons to existing methods. The package also offers the NHANES mixed metal exposure data for reproducibility purposes, coding notebooks illustrating the application of the software, and interpretative summaries of SuperNOVA output. SuperNOVA is regularly updated, available on GitHub (https://github.com/blind-contours/SuperNOVA), and aims to equip researchers with robust tools to advance the quality of research in mixed exposure and environmental health. § LIMITATIONS Even as we have made a concerted effort to apply rigorous methodology in this study, following <cit.> there are several limitations to consider which influenced our results, particularly when the exposure is truly continuous. Firstly, we used Monte Carlo integration methods, which are inherently stochastic. This could have introduced some level of bias in our estimates. We sought to minimize this by implementing four times the sample size for the number of Monte Carlo samples. However, in high-dimensional or complex model scenarios, such adjustments may not fully eradicate the error. Furthermore, data variability, particularly in the density estimation, could have contributed to bias introduction. Specifically, when density values hover at the extremes - either exceedingly low or high - the subsequent variance in the estimator may inflate the bias. Our proposed mediation method for continuous exposures also struggled with potential issues regarding integration boundaries. Even though we were cautious in setting these boundaries (the range of the exposure), the region of integration might have covered areas where the functions integrated were not well-behaved. This could have added to the bias. Moreover, instabilities in the numerical computations could have subtly influenced our findings. Despite the power of contemporary computational tools, they are not entirely devoid of errors. Instances of round-off or truncation errors could subtly impact the results. Likely, this issue with continuous exposures arises as a cumulative effect of the aforementioned limitations. Nevertheless, our results have demonstrated that when exposure is quantized into a discrete form, thereby bypassing numeric integration, our estimator exhibits the expected asymptotic behavior. Moreover, it provides valid confidence intervals for inference - results that can be interpreted continuously. In relation to positivity, violations of this principle are often an unavoidable reality in many contexts. Nonetheless, our suggested approach optimizes the situation by considering smaller shifts. These shifts are based on the ratio of exposure densities, which contrast the density under shift to the observed density when there is no shift. Similar to our methodology for path discovery, this strategy is heuristic in nature. It attempts to strike a balance between ease of comprehension and implementation while effectively achieving the intended objective. § DISCUSSION In this study, we introduce a novel approach for the estimation of natural direct, indirect, and total effects, facilitated through data-adaptive identification of mediating pathways in high-dimensional data. This breakthrough addresses a significant gap in current analytical methods, particularly when dealing with data that comprises numerous exposures and mediators, which is a common occurrence in environmental omics data. 
Our approach first fits a very large statistical model to the exposure-mediator-covariate space and treats the basis functions used in this model as a data-adaptive target parameter. Pathway discovery is done in two stages: the first determines which exposures influence the mediators, and the second identifies which of those mediators impact the outcome. The discovery process yields a set of exposure-mediator pairs, termed pathways. With these pathways fixed, we estimate the average change in the outcome under stochastic shift interventions on the exposures, which is further partitioned into direct and indirect effects. We use and extend the methodology first proposed by <cit.>. We use the same efficient influence function for the expected change in outcome given a stochastic shift intervention on the exposure while holding the mediator at its observed values. We explore the numeric integration required for nuisance function estimation and build software for mediation when the exposure is continuous or discrete. The resulting estimates, derived within a cross-validated framework paired with general estimating equations and targeted learning, are asymptotically unbiased with the lowest possible variance, subject to the fulfillment of the unconfoundedness and positivity assumptions. Our proposed method delivers valid confidence intervals regardless of the number of exposures or covariates or the intricacy of the data-generating process, provided the exposures are binned into an arbitrary set of categories. As shown, the numeric integration required for exposures modeled as truly continuous induces bias that prevents the estimator from converging at the required √(n) rate, which in turn precludes the construction of valid confidence intervals.

We acknowledge the method's limitations, primarily its requirement for binned exposures and the computational demands of density estimation. Furthermore, interpretation can be challenging in instances where findings are inconsistent. To enhance the reliability and consistency of the findings, we recommend reporting the number of folds in which estimates occur and running NOVAPathways with a high number of folds so that a majority of the data is used for path discovery in each fold. Notwithstanding these constraints, both our simulations and real-world data applications underscore the robustness and interpretability of our approach, particularly when exposures are binned, in which case the results still have valid continuous interpretations.

Our NOVAPathways method provides the research community with a statistical machine: the researcher simply supplies a vector of exposures, mediators, covariates, an outcome, the estimators used in the Super Learner for each nuisance parameter, and a delta for each respective exposure. The researcher is then provided a table of the proportion of folds in which each pathway was found, along with tables of the direct, indirect and total effects for each pathway. To support the adoption of semi-parametric methods such as the one we propose, we have made NOVAPathways available via the SuperNOVA R package on GitHub. We believe that by equipping researchers with tools that are not only robust but also flexible, we are inching closer towards solving complex questions in environmental health research.

§ APPENDIX
http://arxiv.org/abs/2307.01192v1
20230703175431
NANOGrav signal from axion inflation
[ "Xuce Niu", "Moinul Hossain Rahat" ]
hep-ph
[ "hep-ph", "astro-ph.HE", "hep-th" ]
NANOGrav signal from axion inflation
Xuce Niu, Moinul Hossain Rahat
============================================================

§ INTRODUCTION

The first detection of gravitational wave (GW) signals from two colliding black holes by the LIGO and VIRGO collaborations <cit.> heralded a new era in observational astronomy. Since then, gravitational wave interferometers have detected signals from various astrophysical sources. However, an equally likely possibility, the detection of a stochastic background of gravitational waves, remained elusive until very recently, when several pulsar timing arrays (PTAs), including EPTA <cit.>, PPTA <cit.>, IPTA <cit.>, CPTA <cit.>, and finally NANOGrav <cit.>, announced the observation of excess red common-spectrum signals in the nano-Hz regime, with inter-pulsar correlations following the Hellings-Downs pattern <cit.>, pointing to a stochastic gravitational wave background (SGWB) origin. The results from the various PTAs are consistent and similar; we therefore focus on the results obtained from the NANOGrav 15 yr dataset. The possible sources of this SGWB are varied, ranging from supermassive black hole binaries <cit.> to new physics <cit.>, such as cosmic inflation <cit.>, scalar-induced gravitational waves <cit.>, first-order phase transitions <cit.>, and topological defects like cosmic strings or domain walls <cit.>, with varying degrees of likelihood <cit.>. See refs. <cit.> for some recent examples.

In this paper we focus on a possible origin of this signal in an axion inflation model <cit.>, where the pseudoscalar inflaton ϕ with an approximate shift symmetry <cit.> has a Chern-Simons coupling ϕ F F̃ to a U(1) gauge field, where F is the field strength of the gauge field and F̃ is its dual. This coupling enables tachyonic production of a transverse mode of the gauge field <cit.>, which eventually backreacts on the dynamics of the inflaton and impacts the primordial scalar and tensor perturbations. On one hand, it introduces significant non-Gaussianity in the scalar perturbations; on the other hand, it yields a characteristic parity-violating GW spectrum that remains flat at low CMB frequencies but rises at the smaller scales probed by gravitational wave interferometers. Intriguingly, the backreaction of the gauge fields on the inflationary dynamics tames the growth of the GW signal, helping it evade stringent constraints from cosmological considerations and the non-observation of an SGWB at LIGO scales <cit.>. The degree of backreaction, which determines at what frequency the signal starts to rise, depends on the inflationary potential. Here we consider the Starobinsky potential <cit.> as a specific example currently favored by Planck 2018 data <cit.>, and show that the GW signal becomes red-tilted at small enough frequencies to reach the amplitude reported by the PTAs, while remaining consistent with various constraints. Furthermore, we show that the spectral tilt of the signal matches well with the NANOGrav result, a feature not easily achieved in GW spectra sourced from other new physics processes.

The production of massive gauge fields during inflation has at least two more important aspects. First, it generates a parity-violating GW spectrum coming from the two polarizations of the graviton, a unique feature that distinguishes it from most other sources of SGWB.
Second, it has interesting implications for “cosmological collider” physics <cit.>, where the mass and spin of a particle produced during inflation leave imprints on the oscillatory three-point correlation function of the scalar perturbations in the “squeezed limit”. The fact that the same process generates gravitational wave signals potentially observed at PTAs offers a remarkable opportunity to probe cosmological collider physics. The paper is organized as follows. In section 2, we review the phenomenology of gauge field production in axion inflation and discuss relevant constraints on the model. We discuss the evolution of the inflationary variables from CMB scales to smaller scales in the context of the generalized Starobinsky potential in section 3. In section 4, we discuss the gravitational wave signals in this model and show that the results can explain the NANOGrav 15-yr signal at face value. We conclude in section 5. § PHENOMENOLOGY OF GAUGE BOSON PRODUCTION We consider a single field slow-roll inflation, where the inflaton ϕ is an axion-like pseudoscalar with an approximate shift symmetry ϕ→ϕ + const. Due to the shift symmetry, the inflaton couples to a U(1) gauge field A_μ via a Chern-Simons term, - ϕ F F̃ / (4 Λ). The action can be written as S = ∫ d^4x √(-g)[ - 1/4F^μνF_μν+1/2m_A^2 A^μ A_μ - 1/4ΛϕF^μνF_μν], where F_μν≡∂_μ A_ν -∂_ν A_μ is the field strength of the gauge field, and F^μν≡1/2ϵ^μναβ/√(-g)F_αβ is its dual, with ϵ^0123=+1 antisymmetric in any two indices. We assume a quasi-de Sitter space with scale factor a(t) = e^Ht, where the Hubble rate H slowly varies in time. The metric is given by ds^2 ≡ g_μνdx^μ dx^ν = dt^2 - a^2(t) dx_i dx^i = a^2(τ) (dτ^2 - δ_ij dx^i dx^j), where t and τ are the physical and conformal time, respectively. We consider both massive (m_A ∼ H) and massless (m_A= 0 or m_A ≪ H) gauge boson production from the Chern-Simons interaction. In the massive case, we employ the constraint ∂_μ (√(-g)A^μ) = 0 derived from the equation of motion and decompose the gauge field in the helicity basis λ = ± , 0, 𝐀(τ, 𝐱) = ∑_λ = ±,0∫d^3 k/(2π)^3[ ϵ_λ (𝐤) a_λ (𝐤) A_λ (τ, k) e^i 𝐤·𝐱 + h.c.] where ϵ_λ (𝐤) is the polarization vector, a_λ (𝐤) is the annihilation operator, and A_λ (τ, k) is the mode function. The longitudinal mode and two transverse modes are denoted by λ =0 and λ = ± , respectively. The creation/annihilation operators and the polarization vectors obey the usual commutation and orthonormality relations. The dominant vector field production is governed by the field equations of the transverse modes, ∂_τ^2A_± ( τ, k) + (k^2 + a(τ)^2 m_A^2 ±2kξ/τ) A_± ( τ, k) = 0, where ξ≡ϕ̇_0/2Λ H. We assume ϕ̇_0 > 0 without loss of generality so that only A_+ mode experiences tachyonic instability and is dominantly produced. For the transverse modes, the massless case results are obtained simply by setting m_A =0 in solutions. We assume the Bunch-Davies initial condition and treat the inflaton's rolling speed ϕ̇_0 to be a constant during inflation. The solution of the mode function can then be written as A_± (τ, k ) = 1/√(2k) e^±πξ/2 W_∓ iξ, iμ(2ikτ), where W stands for the Whittaker-W function, and the parameter μ≡√((m_A/H)^2 - 1/4). The mode function solution for the massless case is derived by setting m_A = 0 in <ref>. For a particular k mode, the energy density of the gauge mode function receives an exponential enhancement due to ξ > 0 when -kτ∼ O(1), which enhances particle production. 
For the massive case, there is an exponential suppression if m_A ∼ O(H), so that the physical observables affected by the gauge boson production roughly depends exponentially on ξ - m_A/H <cit.>. We treat ξ and m_A/H as free parameters of the model that are later constrained from various phenomenological considerations. It is useful to decompose the inflaton into a homogeneous background field and an inhomogeneous perturbation ϕ (t, x) = ϕ_0(t) + δϕ(t, x ). The dynamics of the background field can be expressed in terms of the following coupled equations ϕ̈_0 + 3 Hϕ̇_0+dV/dϕ_0 = 1/Λ⟨𝐄·𝐁⟩, 3 H^2 M_ Pl^2 - 1/2ϕ̇_0^2 - V = 1/2⟨𝐄^2 + 𝐁^2+m_A^2/a^2𝐀^2 ⟩, where the physical electric and magnetic fields corresponding to the gauge field are given by 𝐄 = -1/a^2𝐀' and 𝐁 = 1/a^2∇×𝐀. For simplicity, we restrict our analysis to the parameter space where the source terms on the r.h.s. of <ref> are negligible at the CMB scale. We now focus on the primordial scalar and tensor perturbations which can be significantly enhanced even in the case of small backreaction on the evolution of the background field. The curvature perturbation on uniform density hypersurfaces is defined as ζ (τ, 𝐱) ≡ -H/ϕ̇_0 δϕ(τ, 𝐱), and the inflaton perturbations follow the equation of motion <cit.> δϕ̈ + 3β H δϕ̇ - (1/a^2∇^2 - d^2 V/dϕ^2)δϕ = 1/Λ(𝐄·𝐁 - ⟨𝐄·𝐁⟩), where β≡ 1- 2πξ⟨𝐄·𝐁⟩/(3Λ Hϕ̇_0). To calculate the tensor perturbation, we write the perturbed metric in terms of the tensor perturbation h_ij using the scalar-vector-tensor decomposition as ds^2 = a^2(τ) [ dτ^2 - (δ_ij + h_ij)dx^i dx^j ], where h_ij is transverse (∂_i h_ij = 0) and traceless (h_ii = 0). The equation of motion of h_ij is given by <cit.> h_ij”-∇^2 h_ij + 2 H h_ij'=2/M_ Pl^2 T_ij^TT, where M_ Pl≃ 2.4× 10^18 GeV is the reduced Planck mass, and T_ij^TT is the transverse and traceless part of the stress-energy tensor. We decompose the tensor perturbation into two helicity modes h_ij(τ, 𝐩) = ∑_λ = ±ϵ^λ_i(𝐩)ϵ^λ_j(𝐩)(a_λ(𝐩) h_p^λ(τ) + a^†_λ(-𝐩) h_p^λ*(τ)) ≡∑_λ = ±ϵ^λ_i(𝐩)ϵ^λ_j(𝐩) h^λ(τ, p). The correlation functions of the curvature and tensor perturbations are calculated at τ_0 = 0 after the end of inflation. For details of the calculation using the in-in formalism <cit.>, we refer the interested readers to refs. <cit.>. Here we focus on the phenomenological observables derived from the two and three-point correlation functions of the primordial perturbations. The scalar power spectrum is proportional to the two-point correlation function of the curvature perturbation and can be written as P_ζ = P_ζ^[ϕ] + P_ζ^[A], where P_ζ^[ϕ]≡(H/ϕ̇_0)^2 (H/2π)^2 comes from the usual vacuum fluctuations, and P_ζ^[A]≡2 k^3/(2π)^2⟨ζ_𝐤_1(τ_0)ζ_𝐤_2(τ_0)⟩_(1)^' comes from the impact of the gauge field production and ' denotes that the δ-function (2π)^3δ^(3)(𝐤_1 + 𝐤_2) is stripped off. The amplitude of the scalar power spectrum at CMB scale is well measured <cit.>, P_ζ≃ 2.5 × 10^-9, which accounts for the contribution of the vacuum modes as well as the extra degrees of freedom (gauge modes in this case). Conservatively taking the gauge field's contribution to be subdominant at the CMB, we can ignore P_ζ^[A] and fix P_ζ^[ϕ] = 2.5 × 10^-9. This assumption would be valid as long as P_ζ^[A]≪ P_ζ^[ϕ]= 2.5 × 10^-9. Gauge field production during inflation may introduce significant non-Gaussianity in the curvature perturbations. 
It can be parametrized by a dimensionless quantity, f_ NL^ eq = 10/9 k_1^6 / (2π)^4⟨ζ_𝐤_1ζ_𝐤_2ζ_𝐤_3⟩'/P_ζ(k)^2 , where the superscript `eq' represents the equilateral shape k_1 = k_2 = k_3 ≡ k of the three-point correlation function. Current bound on equilateral non-Gaussianity gives f_ NL^ eq = -25 ± 47 at 68% CL <cit.>. Another important constraint comes from the ratio of the tensor or scalar power at the CMB scales, r ≡ P_h / P_ζ, where the tensor power spectrum is chiral and can be decomposed as P_h = [1/π^2( H/M_ Pl)^2 + P_h^[A],+] + [1/π^2( H/M_ Pl)^2 + P_h^[A],-] = P_h^+ + P_h^-, where ± corresponds to the two polarizations of the graviton. Here the vacuum contribution has been equally distributed to the two polarizations. At the CMB scale, tensor-to-scalar ratio is constrained by current data r_* ≤ 0.056 <cit.>. In fig. <ref>, we show the parameter space violating the above constraints, along with the region where the assumptions of negligible backreaction and negligible contribution of the gauge modes to the scalar power spectrum at CMB scales breaks down. Evidently, the upper bound on scalar non-Gaussianity puts the most stringent constraint on the parameter space of the model. From <ref> we see that ξ cannot be too large compared to m_A/H at the CMB scales, and the upper bound becomes stricter for larger m_A/H. However, we will show that even O(1) values of ξ-m_A/H at the CMB scales can evolve to become larger to generate observable GW spectrum at PTA and other interferometer scales. § EVOLUTION FROM CMB SCALES TO INTERFEROMETER SCALES Modes responsible for CMB scale observables can be assumed to experience constant ξ and H. However, modes contributing to the GW spectrum at smaller scales face the time evolution of these parameters, and are subject to strong backreaction from the inverse decay of the gauge field. These effects can be studied from a simultaneous solution of the coupled <ref>, ignoring the source term in <ref> as it is negligible compared to the source term in <ref>. For convenience we change variables from time to the e-folding number N, where dN = -H dt, and N decreases as we approach the end of inflation. Eqs. (<ref>) and (<ref>) can then be expressed as d^2ϕ/dN^2 + dϕ/dN( 3+d logH/dN) + 1/H^2dV/dϕ = 1/H^21/Λ⟨𝐄·𝐁⟩, H^2 ≈ V [3-1/2( dϕ/dN)^2]^-1. Solving eqs. (<ref>) and (<ref>) numerically for a given potential, we get H(N) and ϕ(N), which can be used to yield ξ(N) ≡1/2Λdϕ/dN. We now specialize to the example of the generalized Starobinsky model <cit.> which is favored with respect to the spectral index, n_s vs. tensor-to-scalar ratio, r plot from combined Planck 2018 analysis <cit.>. The inflaton potential in this model is given by V(ϕ) = 3/4V_0 [ 1-e^-γϕ]^2, where V_0 and γ are free parameters, which can be constrained from CMB measurements of n_s = 0.9649 ± 0.0042 (at 68% CL) and r < 0.056 (at 95% CL) <cit.>. We choose γ^2 = 8/125 and V_0 ≈ 1.6 × 10^-9 which is consistent with all of the above constraints(M_ Pl is set to be 1 in this expression). The evolution of ξ and m_A/H as a function of N are shown in <ref> for the four benchmark points. These points are chosen because they will be used later to illustrate gravitational wave signals at NANOGrav and other interferometers. For ξ and m_A/H at the CMB scale (N≃60), standard slow-roll conditions prevail and backreaction effects can be neglected. Furthermore, the selected points are in the allowed region in fig. <ref>, so that large non-Gaussianities can be avoided. 
We notice that ξ increases rapidly for about 20 e-folds, when backreaction effects start to slow down its rise. Closer to the end of inflation backreaction takes over and slow-roll condition is re-established, and ξ rises rapidly. In this regime, the Hubble rate also rises swiftly, compensating the parameter m_A/H with respect to ξ, so that particle production remains under control. § GRAVITATIONAL WAVE SIGNATURES We now turn to the gravitational wave spectrum of the model potentially observed at NANOGrav and within the sensitivity of various upcoming interferometers. The tensor perturbations sourced by the massive gauge field leave the horizon during inflation and can source gravitational waves after re-entering the horizon at later stage. The amplitude of the GW signal observed today is given by Ω_GW(f) ≡1/24Ω_R,0 P_h(f). Here Ω_R,0≃ 8.6 × 10^-5 denotes the radiation energy density today and P_h(f) is the frequency dependent power spectrum of the tensor fluctuations at the time of horizon exit. The power spectrum depends on the model parameters ξ and m_A/H, whose time evolution was discussed in sec. 3. We can relate the time parameter N to frequency of the GW signal f as <cit.> N = N_ CMB + logk_ CMB/0.002 Mpc^-1 - 44.9 - logf/10^2 Hz, where the CMB pivot scale is k_ CMB = 0.002 Mpc^-1 and we take N_ CMB = 60. Here we specifically focus on the parameter space that can potentially explain the observed excess at the NANOGrav 15-yr data. The effect of the gauge field creation on the tensor fluctuations is minimal for CMB scales and the power spectrum is dominated by the vacuum fluctuations. Current bound on scale-invariant stochastic gravitational wave at the CMB scales implies a tensor-to-scalar ratio r < 0.056 <cit.>, which gives H/M_ Pl≲ 2.6 × 10^-5, and Ω_ GW < 1.2 × 10^-16. Larger frequencies correspond to modes which left the horizon later than the CMB modes. By that time the rolling speed of the inflaton increases and the Hubble rate decreases, the combined effect of which implies a larger value of ξ. This dramatically enhances the power spectrum of the tensor perturbations sourced by the gauge field and it quickly supersedes the contribution from the vacuum fluctuations. Gravitational wave amplitude that eludes observation at the CMB scale now offers the possibility of detection at the interferometer scales. Although the red-tilting of the tensor power spectrum is a known feature of the axion inflation scenario, achieving a signal at the nano-Hz range probed by PTAs is significantly challenging. It requires the signal to start to rise at sufficiently smaller frequencies, yet such signals should not violate the upper bound at the LIGO-VIRGO-KAGRA (LVK) <cit.>. Furthermore, such a wideband signal with a red-tilted spectrum is likely to introduce significant contribution to Δ N_ eff. Essentially, these characteristics depend on the inflaton potential one considers, as the rolling speed of the inflaton is determined by the potential. Intriguingly, we find that these peculiar features can be accommodated in the generalized Starobinsky model, but cannot be achieved in the broader class of α-attractor models <cit.>, of which the Starobinsky model is a special case. We show the gravitational wave spectrum from four benchmark points in fig. <ref>. In each case we keep the parameter m_A/H same at the CMB scales but let the parameter ξ vary. All of these signals remain flat at the CMB frequencies, evading the upper bound. 
However, the spectrum red-tilts near 10^-11 Hz and rises to become visible at PTA frequencies. Interestingly, this dramatic growth of the signal is soon tamed by the backreaction effects. At higher frequencies, the gravitational wave spectra for the four benchmark points become very close and fall well within the sensitivity of various detectors: μ-Ares <cit.>, LISA <cit.>, DECIGO <cit.>, BBO <cit.>, AEDGE <cit.>, AION <cit.>, CE, ET <cit.> and future upgrades of LVK. In all cases, we find that the signals are below the upper bound of LVK.

We now investigate the GW signals in further detail by calculating the dimensionless strain h_c(f) = √(2 H_0^2 Ω_GW(f) / (2π^2 f^2)), where H_0 ≈ 68 km/s/Mpc is today's Hubble rate. We show h_c(f) as a function of frequency for two of the benchmark points in fig. <ref>. For comparison, we also show the spectral slope β = d log h_c(f)/d log f, modeling the NANOGrav strain spectrum with a simple power law of the form h_c(f) = A_GW (f/f_PTA)^β. Writing γ_GW = 3 - 2β, the NANOGrav 15-yr data find γ_GW ≃ 3.2 ± 0.6 around the frequency 1/(10 yr). We show the band of slopes favored by the NANOGrav data with gray shaded regions in fig. <ref>. Interestingly, we see that the two benchmark points fall within the shaded region, further indicating that the NANOGrav signal could arise from the inflationary gravitational waves in the axion inflation scenario.

We now calculate the contribution of the GW spectrum to the radiation energy budget of the Universe in terms of the effective number of relativistic species N_eff. The extra contribution is given by Δ N_eff ≃ 1.8 × 10^5 ∫_f_min^f_max df Ω_GW(f) h^2 / f, where f_min and f_max depend on the era of interest and the maximum temperature reached. To make sure that the GW spectrum does not spoil the successful predictions of BBN, we take f_min to correspond to the frequency of a mode crossing the horizon at the time of BBN, when the temperature of the radiation bath was T ∼ O(MeV). We take f_max = 10^4 Hz for a sufficiently large reheating temperature. The current upper bound on Δ N_eff from BBN and CMB probes is Δ N_eff ≃ 0.4 <cit.>. We find that all the benchmark points satisfy this bound, with Δ N_eff = 0.028 (red), 0.015 (purple), 0.01 (blue), and 0.009 (green).

Finally, we comment on the issue of primordial black hole (PBH) creation from large scalar perturbations in our setup. The mass of a PBH is related to the e-folding number N at which the fluctuation sourcing the PBH leaves the horizon. Following the estimates in refs. <cit.>, we show an upper bound on the scalar power spectrum as a function of N in <ref> with a dashed curve following ref. <cit.>. In the same plot we also show the scalar power spectrum, which suffers from strong backreaction at smaller scales. Nevertheless, we see some tension between the spectrum and the PBH bound, although it is within the theoretical uncertainty of the bound. Furthermore, recent lattice simulations <cit.> indicate that the combined effect of the non-Gaussian perturbations becomes more Gaussian overall at smaller scales, which weakens the bound (shown with a dot-dashed line in <ref>) and allows the model to avoid overproduction of PBHs. The problem can be further alleviated by considering N copies of the gauge field, which reduces the scalar power spectrum by a factor of N <cit.>, but does not significantly affect the tensor power spectrum at smaller scales.
Large scalar perturbations can also generate second-order tensor perturbations <cit.>, which are typically subdominant compared to the GW spectrum discussed here. § CONCLUSION AND OUTLOOK We have pointed out that an explanation of the observed excess in the NANOGrav 15-yr dataset is possible from gravitational waves generated in axion inflation through the axion coupling to (massive) gauge bosons. Such a coupling, natural for a shift-symmetric inflaton, leads to copious particle production during inflation, leaving an indelible imprint on the primordial scalar and tensor perturbations. This leads to a unique parity-violating contribution to the GW spectrum that remains flat at CMB scales, but red-tilts at smaller scales and can become audible to pulsar timing arrays. The growth of the spectrum at higher frequencies, potentially dangerous at LIGO scales, is controlled by the backreaction of the gauge quanta on the inflationary dynamics, the details of which depend somewhat on the inflaton potential. We have specifically discussed the case of the generalized Starobinsky potential, a model currently favored by data, and showed that the produced signal can, at face value, account for the observed excess in the NANOGrav data. More specifically, we find that the spectral slope of the signal matches quite well with the NANOGrav result, indicating that an interpretation of the NANOGrav excess in terms of gravitational waves generated in axion inflation is quite plausible. We are grateful to Anish Ghoshal and Wei Xue for helpful discussions. X.N. is partially supported by the U.S. Department of Energy under grant DE-SC0022148 at the University of Florida. M.H.R. acknowledges support from the STFC Consolidated Grant ST/T000775/1, and from the European Union's Horizon 2020 Research and Innovation Programme under Marie Sklodowska-Curie grant agreement HIDDeN European ITN project (H2020-MSCA-ITN-2019//860881-HIDDeN).
http://arxiv.org/abs/2307.01364v1
20230703213739
Quadrupole moments and their interactions in the triangular lattice antiferromagnet FeI$_2$
[ "Gang Chen" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
gangchen@hku.hk Department of Physics and HKU-UCAS Joint Institute for Theoretical and Computational Physics at Hong Kong, The University of Hong Kong, Hong Kong, China The University of Hong Kong Shenzhen Institute of Research and Innovation, Shenzhen 518057, China Motivated by the recent experiments on the triangular lattice antiferromagnet FeI_2, we consider the presence of quadrupole moments and their interactions among the spin-orbital-entangled local moments. In addition to the anisotropic pairwise interaction between the local moments, the interaction between the quadrupole moments arises from the interaction of the orbital occupation configurations. This is argued primarily from time reversal symmetry and the distinct orbital configurations involved in the exchange paths. We discuss the implications of these interactions and expect this result to be complementary to the existing works on this system and on similar systems. Quadrupole moments and their interactions in the triangular lattice antiferromagnet FeI_2 Gang Chen August 1, 2023 ========================================================================================= Quantum many-body systems with a large local Hilbert space have attracted significant attention in recent years <cit.>. These include the well-known Sachdev-Ye-Kitaev non-Fermi liquid model with a large number of interacting Majorana fermions <cit.>, the Kondo-lattice-like models for intermetallics with both local moments and itinerant electrons <cit.>, the spin-orbital-coupled Mott insulators <cit.>, even the weak Mott insulators with active charge and spin degrees of freedom <cit.>, and so on. In these systems, there often exists a large number of degrees of freedom in the local Hilbert space. These degrees of freedom are sometimes of the same character, and more commonly, they are of quite distinct characters with very different physical properties. In the latter case, these differences may help clarify their self- and mutual interactions, as well as how their properties are probed in actual experiments. For example, in intermetallic systems described by Kondo-lattice-like models, the local moments can be well modeled as spins and probed by the usual magnetic measurements such as neutron scattering, while the itinerant electrons are more appropriately described by Landau quasiparticles in the model setting and probed by angle-resolved photoemission, electrical transport, and so on. For the spin-orbital-coupled Mott insulators that are of interest in this work, the distinct characters of the spin and the orbital can actually make many interesting differences in the understanding of candidate quantum materials. The distinct characters of the spin and the orbital degrees of freedom have already been exploited in many early works in the field. The spin interaction without the involvement of the orbitals does not usually carry any anisotropy. With their intrinsic orientation dependence, the orbitals bring spatial anisotropy into the model. One representative example of such models is the well-known Kugel-Khomskii spin-orbital exchange model <cit.>. With the spin-orbital entanglement due to strong spin-orbit coupling, the resulting effective spin models develop both spatial anisotropy and anisotropy in the effective spin space <cit.>. 
These models have played an important role in Kitaev magnets <cit.>, pyrochlore spin ice materials <cit.>, iridates, osmates, rare-earth triangular lattice magnets <cit.>, and even many late 3d transition metal compounds such as cobaltates and vanadates <cit.>. In this work, we are inspired by the recent work <cit.> on the triangular lattice antiferromagnet FeI_2 and attempt to understand the behaviors of the spins and orbitals. In FeI_2, the Fe^2+ ion is located in an approximately octahedral environment. Roughly, the orbitals are separated into the upper e_g and the lower t_2g orbitals, and the electron occupation configuration is shown in Fig. <ref>. Unlike the tetrahedral environment in the diamond lattice antiferromagnet FeSc_2S_4, the upper e_g orbitals are half-filled without any orbital degeneracy. The lower t_2g orbitals are filled with four electrons. With the orbital degeneracy in the t_2g manifold, the orbital degrees of freedom are active, and the spin-orbit coupling plays a crucial role at linear order. Even though the orbital degeneracy is actually lifted by the non-perfect-octahedral environment, if the orbital separation is not large enough compared to the spin-orbit coupling, the spin-orbit coupling is still able to entangle the spin and orbital degrees of freedom. From the Hund's coupling, the total spin is S=2. The triply degenerate t_2g orbitals provide an effective orbital angular momentum L=1. The spin-orbit coupling entangles them together and leads to an effective spin moment J with J=1. In the modeling of Ref. Bai_2021, the orbital character of the moment J has already been considered, and the anisotropic effective spin interaction was obtained by considering the symmetry operations on the local moments <cit.>. Since the effective local moment is not a spin-1/2 moment, the local Hilbert space is somewhat larger, and the spin-orbital entanglement allows the system to access this large local Hilbert space more effectively <cit.>. Even without a careful derivation of the actual model, one can make some progress through microscopic and symmetry analysis. We will take this approach and then perform the calculation for the effective model. First of all, the effective model is obtained from a more general parent model of the following form, H = H_KK + H_SOC + H_ani, where the first term H_KK refers to the Kugel-Khomskii spin-orbital exchange with the S=2 and L=1 moments, the second term H_SOC is the atomic spin-orbit coupling of the form ∑_i λ L_i · S_i, and the third term H_ani arises from the splitting among the t_2g manifold. The second and third terms provide the local onsite Hamiltonian that determines the structure of the local moment. H_SOC favors a J = |S-L| = 1 local moment, and H_ani becomes the single-ion anisotropy in the J representation. In the spirit of degenerate perturbation theory, H_KK is then projected onto the degenerate manifold of the J=1 states on each lattice site and reduced to the effective spin model. Since the Kugel-Khomskii spin-orbital exchange model is usually a bit complicated, we here provide a physical understanding before performing the above procedure. First of all, there exist eight different Hermitian operators (beyond the identity) for the J=1 local moment. Among them, three are simple dipole moments, i.e. J^μ (μ = a,b,c), and five are quadrupole moments, with Q^3c^2 = 1/√(3) [ 3 (J^c)^2 - J^2 ] , Q^a^2b^2 = (J^a)^2 - (J^b)^2 , Q^ab = 1/2 (J^a J^b + J^b J^a) , Q^ac = 1/2 (J^a J^c + J^c J^a) , Q^bc = 1/2 (J^b J^c + J^c J^b) . 
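For concreteness, these eight operators can be written down as explicit 3×3 matrices. The short Python snippet below is our own illustration (not from the paper): it constructs the J = 1 dipole and quadrupole operators as defined above (with a, b, c relabeled as x, y, z purely for notation) and checks that they are Hermitian and traceless, so that together with the identity they span the 9-dimensional space of 3×3 Hermitian matrices.

import numpy as np

# J = 1 angular momentum matrices in the |m = +1, 0, -1> basis.
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1).astype(complex)   # raising operator J+
Jx = 0.5 * (Jp + Jp.conj().T)
Jy = -0.5j * (Jp - Jp.conj().T)

def anticomm(A, B):
    return 0.5 * (A @ B + B @ A)

J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz      # equals 2 * identity for J = 1

# Five quadrupole operators, following the definitions in the text.
Q_3c2  = (3.0 * Jz @ Jz - J2) / np.sqrt(3.0)
Q_a2b2 = Jx @ Jx - Jy @ Jy
Q_ab   = anticomm(Jx, Jy)
Q_ac   = anticomm(Jx, Jz)
Q_bc   = anticomm(Jy, Jz)

ops = [Jx, Jy, Jz, Q_3c2, Q_a2b2, Q_ab, Q_ac, Q_bc]
print(all(np.allclose(O, O.conj().T) for O in ops))            # Hermitian
print([round(np.trace(O).real, 10) for O in ops])              # traceless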
The single-ion anisotropy (J^c_i)^2 can be thought of as an onsite polarization term for the quadrupole moment Q^3c^2. Because the quadrupole moments involve the product of two J^μ operators, they are even under time reversal symmetry. The dipole moments, however, are odd under time reversal symmetry. The distinction between the quadrupole and dipole moments may be cast as an example of moment or Hilbert space fragmentation. The usual pairwise spin interaction, obtained from symmetry analysis, often includes the interaction between the dipole moments and does not include the quadrupole moments. To illustrate the origin of these more complicated interactions, we here return to the microscopic exchange model. Unlike the above separation of the degrees of freedom into a spin part with S=2 and an orbital part with L=1, we now separate the local moment into an upper e_g part and a lower t_2g part. The upper e_g manifold provides a spin S_e_g =1 moment, while the lower t_2g manifold provides a spin S_t_2g=1 and an orbital angular momentum L=1 with an active spin-orbit coupling. The e_g spin and the t_2g spin are then coupled by the ferromagnetic Hund's coupling, forming the total spin S=2. In this picture, the local Hamiltonian is given as ∑_i [λ̅ S_i,t_2g· L_i - J_H S_i,t_2g· S_i,e_g] + H_ani , where J_H is the Hund's coupling, and λ̅ is related to the λ in Eq. (<ref>) when S_i,t_2g is reduced to the total spin S_i via S_i,t_2g→ S_i /2 after being symmetrized with S_i,e_g. Since it is equivalent to the previous description, this local Hamiltonian also gives the J=1 spin-orbit-entangled local moment. In the Kugel-Khomskii model, where the exchanged electrons carry both spin and orbital flavors, one can classify the exchange processes according to the orbital manifolds of the relevant electrons. This is depicted in Fig. <ref>. One can make a couple of interesting observations and simplify the Kugel-Khomskii exchange model, as we explain below. For the homo-orbital exchange process between the electrons from the upper e_g orbitals, since the orbital sector is quenched for the e_g orbital filling, this exchange process simply gives rise to a Heisenberg model of the form ∼ J_1 S_i, e_g· S_j, e_g . This interaction, after projecting onto the J=1 local moment manifold, leads to the pairwise interaction between the dipole moments. For the hetero-orbital exchange process between the electrons from the upper e_g and lower t_2g orbitals (see the dashed line in Fig. <ref>), the orbitals from the t_2g manifold become active and contribute to the exchange interaction. The site that contributes the e_g electrons, however, does not carry active orbital information. Thus, the resulting exchange interaction should be of the following form, ∼ J_2 [ S_i, e_g·𝒪_j +𝒪_i · S_j, e_g] , where 𝒪_j is a complicated operator that involves both S_j,t_2g and the orbital moment L_j. Due to time reversal symmetry, the operator 𝒪_j should be odd under time reversal. Moreover, even though the operator 𝒪_j may involve operators like i Q_j^μ (with Q_j^μ one of the quadrupole moments) when it is projected onto the J=1 manifold, the interaction between the dipole moment and the quadrupole moment should eventually cancel out by the hermiticity condition. Thus, despite involving the complicated 𝒪_j operator, Eq. (<ref>) should also reduce to the pairwise interaction between the dipole moments J^μ after projecting onto the J=1 local moment manifold. 
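As a quick numerical check (our own sketch, not part of the original work), the local Hamiltonian above can be diagonalized exactly on the 27-dimensional onsite Hilbert space spanned by S_t_2g = 1, S_e_g = 1, and L = 1. For λ̅ > 0 and a ferromagnetic J_H > 0 (the coupling values below are arbitrary illustrative choices), the lowest multiplet comes out threefold degenerate, consistent with the J = 1 spin-orbit-entangled local moment described in the text.

import numpy as np

def spin_ops(s):
    """Spin matrices (Sx, Sy, Sz) for spin quantum number s."""
    dim = int(round(2 * s + 1))
    m = s - np.arange(dim)                     # m = s, s-1, ..., -s
    sp = np.zeros((dim, dim), dtype=complex)   # raising operator S+
    for i in range(1, dim):
        sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    return (0.5 * (sp + sp.conj().T),
            -0.5j * (sp - sp.conj().T),
            np.diag(m).astype(complex))

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Onsite Hilbert space: S_t2g = 1 (x) S_eg = 1 (x) L = 1  ->  27 states.
St, Se, L = spin_ops(1), spin_ops(1), spin_ops(1)
I3 = np.eye(3)

lam_bar, J_H = 1.0, 5.0   # illustrative couplings only

H_local = lam_bar * sum(kron3(St[a], I3, L[a]) for a in range(3)) \
        - J_H     * sum(kron3(St[a], Se[a], I3) for a in range(3))

evals = np.linalg.eigvalsh(H_local)
degeneracy = int(np.sum(np.isclose(evals, evals[0], atol=1e-8)))
print("ground-state degeneracy:", degeneracy)   # expect 3, i.e. a J = 1 moment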
For the homo-orbital exchange process within the lower t_2g manifold, both the spins and the orbitals are involved. There are two types of operators involved in this exchange interaction. One type of operator is odd in S_t_2g or L, such as S_i,t_2g^μ and S_i,t_2g^μ L_i^α L_i^β. These operators are labeled as 𝒪. The exchange interaction between these 𝒪 operators should mostly reduce to the pairwise interaction between the dipole moments J^μ after projecting onto the J=1 local moment manifold. Since 𝒪 could be related to the quadrupole moment as i Q^μ, there can still exist some part of the interaction that becomes a quadrupole-quadrupole interaction. This requires a careful calculation. The other type of operator is even in L, such as L_i^α L_i^β. These operators are labeled as 𝒩. Due to this even property under time reversal, the exchange interaction between these 𝒩 operators should mostly reduce to a constant and/or the interaction between the quadrupole moments Q's. Again, since 𝒩 could be related to the dipole moment as i J^μ, some part of the interaction could become the interaction between the dipole moments. Thus, to obtain the interaction between the quadrupole moments in this system, one essentially needs to keep track of the exchange interaction from the homo-orbital exchange process within the lower t_2g manifold for both the 𝒪-𝒪 and 𝒩-𝒩 interactions. After the above classification and simplification, one can now focus on the exchange process between the t_2g electrons. This exchange process is identical to the one for the d^4 electron configurations, or equivalently, the d^2 hole configurations on the t_2g manifold. We single out the bond between the Fe site 0 and the Fe site 1 in Fig. <ref>; the exchange on the other bonds can be obtained by symmetry operations. Assuming the ideal geometry, the Fe-I-Fe exchange path makes a 90-degree angle. Since we are dealing with 3d electrons here, we consider the indirect superexchange path via the intermediate I atoms. In principle, there could be a weak direct Fe-Fe exchange, for example, via the xy orbitals on site 0 and site 1 in Fig. <ref>. For the indirect exchange path in Fig. <ref>, there are two relevant hopping processes. One is the d_yz-p_z-d_xz path from Fe-0 to Fe-1 via the upper I atom, as depicted in Fig. <ref>. The other is the d_xz-p_z-d_yz path from Fe-0 to Fe-1 via the lower I atom. Since the exchange within the t_2g manifolds is equivalent to that of the d^4 Mott insulators, one can use the existing results for the superexchange interaction <cit.>. Following Ref. PhysRevLett.111.197201, we obtain the exchange interaction between Fe-0 and Fe-1, which is given as H_t_2g,0↔ 1 = J_KK [ ( S_t_2g,0· S_t_2g,1 + 1 ) [ ( L^x_0 L^y_1 )^2 + ( L^y_0 L^x_1 )^2 + L_0^x L_0^y L_1^x L_1^y + L_0^y L_0^x L_1^y L_1^x ] + ( L_0^z )^2 + (L_1^z)^2 ], where the components of L are defined in the xyz coordinates of Fig. <ref> and Fig. <ref>. Again, this interaction is obtained by ignoring the direct hopping process between the xy orbitals. The Hubbard interaction is assumed to be dominant compared to the spin-orbit coupling and the Hund's coupling, such that the correction from the Hund's coupling in Eq. (<ref>) is not considered <cit.>. The (L^z)^2 terms in Eq. (<ref>), after being summed with the similar terms from the symmetry-equivalent bonds, become a constant and thus can be neglected. 
Following our previous reasoning, the 𝒪-𝒪 interaction is identified as J_KK ( S_t_2g,0· S_t_2g,1 ) [ ( L^x_0 L^y_1 )^2 + ( L^y_0 L^x_1 )^2 + L_0^x L_0^y L_1^x L_1^y + L_0^y L_0^x L_1^y L_1^x ] , and the 𝒩-𝒩 interaction is identified as J_KK [ ( L^x_0 L^y_1 )^2 + ( L^y_0 L^x_1 )^2 + L_0^x L_0^y L_1^x L_1^y + L_0^y L_0^x L_1^y L_1^x ] . Ignoring the single-ion anisotropy H_ani, one can express the local J-states in terms of the spin and orbital wavefunctions in the local xyz coordinate system. One then projects the above 𝒪-𝒪 and 𝒩-𝒩 interactions onto the J-state basis. We then proceed to establish the relationships between the operators 𝒪 and the projected J-operators, such as S (L^x)^2 = (S^xL^xL^x, S^yL^xL^x, S^zL^xL^x) →_P_J=1 ( 6/5 J^x, 9/10 J^y, 9/10 J^z ), S (L^y)^2 = (S^xL^yL^y, S^yL^yL^y, S^zL^yL^y) →_P_J=1 ( 9/10 J^x, 6/5 J^y, 9/10 J^z ), S L^x L^y = (S^xL^xL^y, S^yL^xL^y, S^zL^xL^y) →_P_J=1 ( 3/20 J^y - 3i/10 Q^xz, 3/20 J^x - 3i/10 Q^yz , -i/2 - i√(3)/10 Q^3z^2 ), S L^y L^x = (S^xL^yL^x, S^yL^yL^x, S^zL^yL^x) →_P_J=1 ( 3/20 J^y + 3i/10 Q^xz, 3/20 J^x + 3i/10 Q^yz , i/2 + i√(3)/10 Q^3z^2 ) , where →_P_J=1 denotes the projection onto the J=1 manifold. Using the above relations, we then obtain the quadrupole interaction from the above 𝒪-𝒪 interaction in Eq. (<ref>), which is given as -9J_KK/200 [ 1/3 Q_0^3z^2 Q_1^3z^2 + Q_0^xz Q_1^xz + Q_0^yz Q_1^yz ]. Likewise, for the quadrupole interaction from the above 𝒩-𝒩 interaction in Eq. (<ref>), we use the relations (L^x)^2 →_P_J=1 2/3 + 1/20 ( - 1/√(3) Q^3z^2 + Q^x^2y^2 ), (L^y)^2 →_P_J=1 2/3 + 1/20 ( - 1/√(3) Q^3z^2 - Q^x^2y^2 ), L^x L^y →_P_J=1 1/10 Q^xy, L^y L^x →_P_J=1 1/10 Q^xy, and obtain J_KK/200 [ 1/3 Q_0^3z^2 Q_1^3z^2 - Q_0^x^2y^2 Q_1^x^2y^2 + 4 Q_0^xy Q_1^xy ]. The summation of Eq. (<ref>) and Eq. (<ref>) is the total quadrupole interaction for the 01 bond. Here we have expressed the quadrupole moments in the local xyz coordinates for simplicity. The relations between the Q's and J^x,y,z in the local coordinates are identical to those between the Q's and J^a,b,c in the global abc coordinates. The relations between the Q's in the local xyz coordinates and those in the global abc coordinates are listed as Q^3z^2 = -√(3)/3 Q^a^2b^2 - 2√(6)/3 Q^bc , Q^x^2y^2 = 2√(3)/3 Q^ab + 2√(6)/3 Q^ac , Q^xy = - √(3)/6 Q^3c^2 + Q^a^2b^2/3 - √(2)/3 Q^bc , Q^xz = √(3)/6 Q^3c^2 + Q^a^2b^2/6 - Q^ab/√(3) + Q^ac/√(6) - √(2)/6 Q^bc , Q^yz = -√(3)/6 Q^3c^2 - Q^a^2b^2/6 - Q^ab/√(3) + Q^ac/√(6) + √(2)/6 Q^bc , and the dipole moments J^μ are given as J^a = 1/√(2) (J^x+J^y), J^b = 1/√(6) (J^x-J^y-2J^z), and J^c = 1/√(3) (J^x - J^y + J^z). This completes the derivation of the interaction between the quadrupole moments along the bond 01. For the other equivalent bonds, one can simply obtain the quadrupole interaction via the cubic permutation. Discussion.—Here we discuss the implications of the exchange interaction between the quadrupole moments for FeI_2. Clearly, the presence of the quadrupole interaction would enhance quantum fluctuations. This is because the quadrupole moment operators connect all the spin states with more-or-less similar probability and allow the system to tunnel quantum mechanically between different spin states more effectively <cit.>. This aspect differs from the conventional dipole moment that only changes the spin quantum number by 0 or ± 1 at a time. 
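Projection identities of the kind used above can be checked numerically. The sketch below is our own construction, not code from the paper: it builds the S = 2 ⊗ L = 1 space, extracts the J = 1 multiplet of λ L·S for λ > 0, and prints the projected operator P S^x (L^x)^2 P next to the corresponding multiple of the effective J^x on that multiplet, so the coefficients quoted above can be compared directly (no particular value is assumed by the script itself).

import numpy as np

def ang_ops(j):
    """Angular momentum matrices (Jx, Jy, Jz) for quantum number j."""
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)
    for i in range(1, dim):
        jp[i - 1, i] = np.sqrt(j * (j + 1) - m[i] * (m[i] + 1))
    return 0.5 * (jp + jp.conj().T), -0.5j * (jp - jp.conj().T), np.diag(m).astype(complex)

S = ang_ops(2)          # total spin S = 2
L = ang_ops(1)          # effective orbital angular momentum L = 1
I5, I3 = np.eye(5), np.eye(3)

# Atomic spin-orbit coupling L . S on the 15-dimensional product space;
# for a positive coupling the lowest three eigenstates form the J = 1 multiplet.
H_soc = sum(np.kron(S[a], L[a]) for a in range(3))
evals, evecs = np.linalg.eigh(H_soc)
P = evecs[:, :3]                      # 15 x 3 isometry onto the J = 1 multiplet

def project(op):
    return P.conj().T @ op @ P        # 3 x 3 matrix on the J = 1 manifold

# The total angular momentum J = S + L maps the multiplet to itself,
# so its projection gives the effective J^x in the same basis.
Jx_eff = project(np.kron(S[0], I3) + np.kron(I5, L[0]))

lhs = project(np.kron(S[0], I3) @ np.kron(I5, L[0] @ L[0]))   # P S^x (L^x)^2 P
print(np.round(lhs, 6))
print(np.round(1.2 * Jx_eff, 6))      # (6/5) J^x, for comparison with the text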
Thus, a system with a substantial quadrupole interaction is more likely to be delocalized in its spin Hilbert space and to favor more exotic quantum ground states; even for an ordered state, the usual spin wave theory should be replaced by the flavor wave theory that takes into account all possible channels connected by the dipole and quadrupole operators <cit.>. In Ref. Bai_2021, however, the authors found reasonable agreement with the inelastic neutron scattering spectroscopy based on a model with the single-ion anisotropy and the symmetry-allowed pairwise spin interaction. It is likely that the quadrupole interaction in FeI_2 is weak compared to the pairwise dipole interaction, as the dipole interaction receives contributions from more microscopic processes than the quadrupole interaction, according to the above discussion. Alternatively, the quadrupole interaction may simply not impact the inelastic spectrum significantly in FeI_2. The single-ion anisotropy, which is an onsite polarization term for the Q^3c^2 moment, seems to be large in FeI_2. This is expected from the presence of the several magnetization plateaus or steps observed for FeI_2 in a c-direction magnetic field <cit.>. The presence of a large single-ion anisotropy could suppress the effect of the other quadrupole interactions. A more systematic analysis may need to combine both the dipole and quadrupole interactions and, at the same time, acquire more quantitative input from density functional theory for the exchange interactions. Our work here provides a natural scheme to understand the origin of the multipolar interactions and can be particularly useful for multi-orbital systems with distinct and separate orbital manifolds. One relevant system would be the Co-based Kitaev materials that have been under active investigation recently. In fact, for these Co-based honeycomb Kitaev magnets, the local moment of the Co^2+ ion with a 3d^7 electron configuration in the octahedral environment can be understood as the combination of the spin-1 from the e_g manifold and the spin-1/2 with an orbital moment L=1 from the t_2g manifold. The exchange interaction there may be understood from the separation into the e_g and the t_2g manifolds. It seems that a similar separation scheme has already been considered for the 3d^7 cobaltates <cit.>. Since the local moment for Co^2+ is an effective J=1/2 moment, the resulting interaction is always a dipole interaction. This scheme then simply becomes a way to organize the superexchange processes, but does not seem to provide much of a simplification. We thus expect a more useful application to occur in systems with larger-J moments. Acknowledgments.—This work is supported by the National Science Foundation of China with Grant No. 92065203, the Ministry of Science and Technology of China with Grant No. 2021YFA1400300, and by the Research Grants Council of Hong Kong with Grant No. C7012-21GF.
http://arxiv.org/abs/2307.03241v1
20230706180928
The Mathematics of Mathematics: Using Mathematics and Data Science to Analyze the Mathematical Sciences Community and Enhance Social Justice
[ "Ron Buckmire", "Joseph E. Hibdon, Jr.", "Drew Lewis", "Omayra Ortega", "José L. Pabón", "Rachel Roca", "Andrés R. Vindas-Meléndez" ]
math.HO
[ "math.HO", "00-02, 00A35, 00A69, 97-01, 97A40" ]
We present and discuss a curated selection of recent literature related to the application of quantitative techniques, tools, and topics from mathematics and data science that have been used to analyze the mathematical sciences community. We engage in this project with a focus on including research that highlights, documents, or quantifies (in)equities that exist in the mathematical sciences, specifically, and STEM (science, technology, engineering, and mathematics) more broadly. We seek to enhance social justice in the mathematics and data science communities by providing numerous examples of the ways in which the mathematical sciences fails to meet standards of equity, equal opportunity and inclusion. We introduce the term “mathematics of Mathematics” for this project, explicitly building upon the growing, interdisciplinary field known as “Science of Science” to interrogate, investigate, and identify the nature of the mathematical sciences itself. We aim to promote, provide, and posit sources of productive collaborations and we invite interested researchers to contribute to this developing body of work. § INTRODUCTION §.§ Motivation Mathematics has long been used to solve problems, document patterns, describe phenomena, explain observations, and provide insight into how the world works. In the last century, with the advent of the computing age and its concomitant generation and storage of vast quantities of data, the ability to analyze data has become another important tool of mathematicians, engineers, and scientists. The contexts in which mathematics and data science can be used to solve problems are extremely varied, running the gamut from those that are theoretical with no currently foreseen applications to those that are immediately applicable to real-world situations. In this paper we are interested in how tools, topics, and techniques from the mathematical sciences can be and have been used to analyze the mathematical sciences itself. We introduce here the phrase “the mathematics of Mathematics" (or #MetaMath, for short) to refer to this growing body of work using mathematics to analyze Mathematics. Note that we are using the lower-case term “mathematics” in this phrase as shorthand for quantitative and qualitative techniques commonly associated with the mathematical sciences, and the upper-case term “Mathematics" to mean the people, practices, and organizational structures associated with the profession of mathematics. In particular, we seek to highlight that mathematics is done by people who exist as part of a larger community, not in a vacuum. We chose the term “mathematics of Mathematics" specifically as an allusion to the growing field of “science of Science" <cit.>, which shares many of our goals. We note that despite the similarity in nomenclature, neither of these terms has any association with the so-called “science of math” movement, a reactionary movement in K-12 education that rejects many equity-minded reforms that are based on mathematics education research. 
We further note that #metamath has no connection to metamathematics which is a branch of mathematical logic. §.§ Distinguishing Mathematics of Mathematics (#MetaMath) from Science of Science (SciSci) and Other Similar Disciplines The science of science (SciSci) is a well-developed field of study in which the tools and techniques of scientific exploration are used to investigate and interrogate science itself. For example, Fortunato et al. <cit.> say SciSci places the practice of science itself under the microscope, leading to a quantitative understanding of the genesis of scientific discovery, creativity, and practice and developing tools and policies aimed at accelerating scientific progress. [...] SciSci relies on a broad collection of quantitative methods, from descriptive statistics and data visualization to advanced econometric methods, network science approaches, machine-learning algorithms, mathematical analysis, and computer simulation, including agent-based modeling. We intend the mathematics of Mathematics (#MetaMath) to take advantage of the increased availability of data about different aspects of the Mathematics community, such as public information about research publications, citation patterns, faculty hiring, racial, ethnic, and gender demographics, doctoral degree advisor-advisee relationships, research funding, and more, in order to explicitly advance the social justice goal of a more equitable, diverse, and inclusive Mathematics community, profession, and enterprise. (We note that just as the science of science includes qualitative techniques we intend for the mathematics of Mathematics to do so as well.) §.§ Context and Goals For This Paper This review paper is written in the context of recent activity in how data science and mathematics are being used to enhance social justice in various new and exciting ways. For example, we envision #MetaMath as a quantitative justice project, where the term “quantitative justice" is defined as “the application of techniques, tools and topics from the quantitative sciences (e.g., mathematics, applied mathematics, data science, computer science, etc.) in subject domains that are derived from and/or typically associated with the social sciences (e.g., history, political science, law, economics, sociology etc.) with the explicit goal of promoting social justice” <cit.> . Talks in the developing area of quantitative justice have occurred at multiple minisymposia <cit.> and special sessions <cit.> at several widely attended international STEM conferences in the last two years. Examples include the 2022 annual meeting of the Society for Industrial and Applied Mathematics (SIAM) in Pittsburgh, PA <cit.>, the 2023 SIAM Computer Science and Engineering Conference in Amsterdam, the Netherlands <cit.>, the 2023 annual meeting of the American Association for the Advancement of Science (AAAS) in Washington, DC <cit.>, and the 2023 Joint Mathematics Meetings in Boston, MA <cit.>. In addition to being described as a quantitative justice project, #MetaMath also meets the definition of Data Science for Social Justice (DS4SJ) provided in <cit.> as “data scientific work (broadly construed) that actively challenges systems of inequity and concretely supports the liberation of oppressed and marginalized communities." 
Regardless of which term one uses to describe the #MetaMath project (quantitative justice, data science for social justice, justice data science, or something else), its goal is the same as this paper: to enhance social justice by using the mathematical sciences to investigate, interrogate, and identify inequities in the mathematical sciences itself. In the main body of the paper, we provide a survey of selected recent work that uses tools, topics, and techniques from mathematics and data sciences to describe, document, and display inequities in the mathematics and data science communities. In the rest of the paper, we discuss some methodological considerations for quantitative scientists and mathematicians interested in working in the field, as well as some future directions for research. § SURVEY OF EXISTING #METAMATH RESEARCH In this section we will present our commentary on various examples of articles that have appeared in the research literature over the last two decades that can be characterized with a #MetaMath label, i.e. they use tools, topics, or techniques from mathematics and data science to analyze various aspects of discrimination, disparity, and difference in the mathematics community specifically, or the STEM academic community, generally. We note that our presentation is not an exhaustive list or review of all such examples, since we are particularly interested in providing evidence that describes, displays, or documents inequity in the mathematics community. If one believes the saying, as we do, that “Talent is evenly distributed, but opportunity is not,” then the research presented below bolsters that view. It shows that there exist hierarchical structures in mathematics, evidence of elitism, disparities in distribution of resources, over-representation of certain favored groups, under-representation of other disfavored groups, and a paucity of data on mathematics curriculum. We hope by highlighting #metamath research that uses data to documents these deficiencies we will encourage others to enhance social justice in the mathematical sciences community by enacting positive change. In the subsequent subsections we provide several examples of peer-reviewed articles that use mathematics and data science to demonstrate these phenomena exist in STEM broadly and mathematics specifically. We note that these phenomena are interrelated and interdependent so it can be difficult to assign a particular paper or result to only one of these overlapping areas. For example, research by Liu et al <cit.> that shows that women and racial/ethnic minorities receive fewer citations to their research in mathematics and scientific journals demonstrates the lack of diversity in these fields, but also the existence of a hierarchy based on race, ethnicity and gender in academic publishing as well as be indicative of elitism in this area. In cases like this the same research may be referred to in multiple parts of the paper. §.§ Diversity and Demographics of the Mathematics Community We believe it is self-evident that mathematicians, i.e. the members of the mathematics community in the United States (or however one defines the term <cit.>) do not represent the full spectrum of the diversity and demographics of the general population. In other words, there are groups that are currently under-represented and historically marginalized in the mathematics community <cit.>. 
This under-representation of women, racial and ethnic minorities, people whose parents or grandparents didn't go to college (i.e., first-generation), people living with disabilities and members of the LGBTQ community has been documented and analyzed in many STEM fields, including in mathematics, and in different professional activities in academia. The sectors of academia activity in which this under-representation manifests is varied and near-ubiquitous. In other words, it is difficult to find an aspect of academia in which the diversity and demographics of those who participate in that aspect match the corresponding racial, ethnic and gender breakdowns of the general population of the United States. Data on the demographics of the STEM community can be found in documents created by the National Center for Education Statistics in the U.S. Department of Education (NCES) <cit.> and the National Science Foundation's National Center for Science and Engineering Statistics (NCSES) <cit.>. For math-specific data, the quinquennial survey conducted every five years since 1970 by the Conference Board of the Mathematical Sciences (CBMS) and the annual survey of earned doctorates by the American Mathematical Society (AMS) are key data sourcesThe CBMS survey can be found at <https://www.ams.org/profession/data/annual-survey/annual-survey> and the Annual Survey of Earned Doctorates can be found at <https://www.ams.org/profession/data/annual-survey/annual-survey>. Due to the COVID-19 pandemic, updates of these data sources has been significantly delayed. Buckmire <cit.> notes that between 2013 to 2018, women made up approximately 42.6% of recipients of bachelor's degrees in Mathematics. In the same time period, the racial and ethnic makeup of undergraduate math graduates ranged from 64.9% White, 7.5% Latino/Hispanic, and 5.0% Black to 52.5% White, 9.9% Latino/Hispanic and 4.2% Black. Medina <cit.> notes that between 1993 and 2002, less than 5% of those who earned doctoral degrees in mathematics were Black, Latinx, or Indigenous, despite those communities composing a quarter of the US population at that time. Martin <cit.> compiled a comprehensive list of 95 citations focusing on women in mathematics in his annotated bibliography, covering biases and inequities in primary school to tenure. Cech <cit.> found that LGBTQ+ scientists and mathematicians were more likely to report social marginalization, work place harassment, as well as limited career opportunities. Topaz et al. <cit.> analyzed 13,067 editorial positions on the boards of 435 mathematical science journals and found that roughly 8.9% of editorships were held by women. More recently, Liu et al. <cit.> expanded on this analysis in multiple dimensions. They looked at over one million papers between 2001 to 2020 in over 500 journals processed by almost 65,000 editors and found that “non-white scientists appear on fewer editorial boards, spend more time under review, and receive fewer citations" than White scientists. In this subsection we have presented examples of data that documents how the demographics of the mathematics (and wider STEM) community do not reflect the diversity of the larger population from which it draws. In the next subsection, we will present examples of research that addresses the similar but distinct problem that mathematics (and STEM) contain hierarchical structures that reflect and reproduce the lopsided demographic distributions previously discussed above. 
§.§ The Myth of Meritocracy and Existence of Hierarchy in Mathematics and Science Here we present examples of published research articles that can be used to prove the existence of hierarchical structures in mathematics (and STEM), thus refuting the pernicious claim that mathematics (or STEM) is a meritocracy. Examples of hierarchical structures <cit.> could include the existence of institutions which have greater prestige <cit.> or are more likely to have students who are more likely to go on to obtain faculty positions at more prestigious institutions. In the last few years, several researchers have taken advantage of the availability of vast quantities of data on faculty positions at institutions of higher education in the United States to document the existence of hierarchies in faculty hiring networks in academia. Clauset et al. <cit.> demonstrated the existence of hierarchy in faculty hiring in a study involving 19,000 faculty members in a total of 461 departments in computer science, business, and history. Wapman et al. <cit.> expanded this analysis to cover 295,089 faculty in 10,612 departments at 368 Ph.D.-granting institutions and all academic disciplines for the 2011-2020 decade. FitzGerald et al. <cit.> built upon this research by using data from the Mathematics Genealogy Project (MGP) to restrict their analysis to mathematics faculty. They analyzed 120,000 records from 150 institutions over seven decades (from 1950 to 2019). The results of this research demonstrates that who becomes faculty at Ph.D.-granting institutions is not an unbiased process or reflective of a meritocratic ideal. Instead, analysis of the availability data shows that there exist certain institutions that are more likely than others to have their Ph.D. graduates become faculty members at other Ph.D.-granting institutions. In other words, the data demonstrates that hierarchies exist in faculty hiring networks. One of the most salient and persistent hierarchical structures in society that is omnipresent in mathematics and the wider STEM community is gender. Researchers have analyzed data describing many different aspects of academic activity and demonstrated the many ways gender can negatively mediate opportunity for advancement, participation, and achievement in science and mathematics <cit.>. Another significant hierarchical structure in society is class. Morgan et al <cit.> analyzed data from a survey with responses from more than 7,000 faculty in eight STEM disciplines to demonstrate that faculty are 25 times more likely to have a parent with a Ph.D. and that at “elite" institutions this rate is doubled. This data analysis demonstrates the salience of socioeconomic status on the composition of the professoriate. In this subsection, we have provided articles that confirm the existence of hierarchical structures in mathematics and science. Our argument is that these research results using data demonstrate the idea that mathematics (and STEM, more generally) are egalitarian and meritocratic is a myth that can be and has been disproved, debunked, and discredited. §.§ Evidence of Elitism in Mathematics Our survey of existing results finds extensive evidence, gathered via mathematical tools and techniques, of widespread elitism and exclusion of minorities and under-represented populations in the mathematics community <cit.>. For example, Topaz et al. 
found in 2016 that in academia, women comprised only 15% of tenure-stream faculty positions in doctoral-granting mathematical sciences departments in the United States. Editorial appointments in the journals studied in their work found even more paltry representation for women; only 8.9% of editorships within the journals reviewed belonged to women <cit.>. Another way elitism is perpetuated in the mathematics community is via the selection of a self-perpetuating cadre of “elite" personnel for prestigious prizes. The International Mathematical Union's selections for the Fields Medal, (widely considered the most elite prize in Mathematics) for example, have shown an alarming trend of rewarding only already-elite mathematics researchers in an apparent negation of one of the main goals of its conferral, which was to shine light on under-represented members of the mathematics community <cit.>. Other scientists have found widespread elitism perpetuates as a vicious circle of like begetting like <cit.>. Schlenker <cit.> notes that fields with applications to the social or physical sciences such as numerical analysis, mathematical modeling or statistics seem to be viewed as having low status, and this lack of prestige accompanies the low representation of researchers in these fields among elite prizewinners. However, other fields such as Algebraic Geometry have no shortage of practitioners and medalists, in spite of having little direct application to other disciplines, once again displaying the self-serving cycle of elitism within the mathematics community. Other researchers have also used other mathematical tools to identify inequity in mathematics in other aspects of academia. Leslie et al. <cit.> and Storage et al. <cit.> found widespread under-representation of women and African Americans in their research which analyzed over 14 million records from the popular website RateMyProfessor. Their findings included detailing that in fields where brilliance and innate intellectual talent were treasured, African Americans and women were even less well represented. Brisbin and Whitcher <cit.> concur with the above research by finding that women are under-represented in subfields of mathematics that are viewed as having more prestige by analyzing almost a million papers in the mathematical sciences uploaded to the arXiv, a popular website for the dissemination of research results. Way et al. <cit.> studied comprehensive data on both hiring outcomes and scholarly productivity for 2659 tenure-track faculty across 205 Ph.D.-granting departments in North America and found deep under-representation of women in these groups. Curiously, Way et al. also found pervasive elitism including discrimination against females, as well as some other less prominent examples of biases against women. One of the salient trends that these researchers found was that top-ranked institutions had tendencies to hire women at higher rates than their mathematical data science enabled models would expect. In this sense, higher ranked academic institutions did a “better job” of being less discriminating against women. This was also the case at the lower end of the spectrum of lower ranked academic institutions; thus the majority of hiring and appointing of women in academia take place in the middle ranked academic institutions. The research highlighted in this subsection has provided examples of the pervasive elitism across academia at large and the mathematics community in particular. 
§.§ Disparities in Federal Funding for Science and Mathematics Another example of #MetaMath research involves analyzing the public data of who receives funding from public and private sources in order to detect and document disparities in support for mathematics and science in the United States. In this subsection we provide examples of research that provides evidence of these discriminatory results. It is well documented that there are racial disparities in funding from the National Institutes of Health (NIH) <cit.>. The 2011 study of NIH R01 funding by Ginther et al. <cit.> hypothesized that scientists of different races but with similar research records and institutional affiliations would also have a similar likelihood of being awarded research grants. However, the opposite was discovered. They found that Black and Asian investigators are less likely to be awarded an R01 on the first or second attempt, Blacks and Hispanics are less likely to resubmit a revised application, and Black investigators that do resubmit have to do so more often to receive an award. Even after controlling for the applicant’s educational background, country of origin, training, previous research awards, publication record, and employer characteristics, the authors discovered that black applicants remain 10 percentage points less likely than whites to be awarded NIH research funding. Funding from the National Science Foundation (NSF) have not been as extensively studied as the NIH. As the main source for research funding in the mathematical sciences, the funding priorities selected by the NSF will greatly influence how the fields in the mathematical sciences will grow, behave, and look. Both the NIH and NSF are federal funding agencies with similar missions but different foci. As such, we would not expect to see trends in NSF funding differ significantly from the NIH. Indeed, Chen et al. <cit.> recently examined NSF funding data from 1996 to 2019 and found that proposals by white PIs were consistently funded at rates higher than the overall rate, with an average relative funding rate of +8.5% from 1999 to 2019. Additionally, the relative funding rate for proposals by white PIs increased steadily during this period, from +2.8% in 1999 to +14.3% in 2019. Proposals by Asian, Black/African-American, and Native Hawaiian/Pacific Islander PIs, were consistently funded below the overall rate, with average relative funding rates of –21.2%, –8.1%, and –11.3%, respectively. Additionally, these findings revealing race-based and gender-based funding disparities are not confined to federal agencies. The Wellcome Trust is a private global charitable foundation that conducted a similar study on their own award practices in 2020 <cit.>. The Wellcome study stated that, “As a funder, we know from our data that BAME[Black, Asian, and minority ethnic, also known as “BAME,” is an umbrella term, common in the United Kingdom, used to describe non-white ethnicities.], and especially Black, applicants are less likely to be awarded Wellcome research grants in the UK than White applicants.” At a time when the mathematics community has been doubling down on its efforts to diversify the profession, the type of groundbreaking data analysis presented in this subsection is absolutely necessary to motivate action and meaningful change to eliminate racial and gender disparities in funding in the mathematical sciences community. §.§ Discourse in the Mathematics Community Just like SciSci, #MetaMath also includes the use of qualitative techniques. 
In this subsection we provide some examples of the ways that discourse in and about the mathematics community can be analyzed, primarily through mixed or qualitative methods, in order to raise awareness about social justice issues in the mathematical sciences. The mathematics community shares its opinions and responses to given situations in a variety of public ways, often leaving behind artifacts that can be studied. Examples of such dissemination of opinions and responses include the Inclusion/Exclusion blog <cit.>, a justice and mathematics weblog that is a continuation of the AMS blog archived in 2021; an article in Scientific American highlighting the debate surrounding the founding of the Association for Mathematical Research <cit.>; and the statement by the Mathematical Association of America (MAA) on the location of Mathfest 2023 <cit.>, in which they prioritize financial concerns over the safety of the LGBTQ+ community at the meeting site in a state that has enacted discriminatory legislation targeting LGBTQ+ people. The artifacts left behind in these cases include the blog, the article and the statement as well as any public replies to these on social media, statements from organizations, letters to the editor, and more. By examining the conversation through these artifacts, we can also better understand and quantify the divisions and counter-movements within the mathematics community. It is worth highlighting that this type of work often requires mixed-methodology; strictly quantitative analysis cannot fully capture nuance and “whys” behind the discourse. One of the first questions we might ask ourselves in this work is who is part of the conversation, which can be answered quantitatively. The work of Topaz et al. <cit.> compares the demographics of the hundreds of signatories on three public letters responding to an essay published in Notices of the American Mathematical Society that opposed the use of diversity statements in academic hiring, likening them to “McCarthyism.” The results showed that signatories of Letter A, which highlights diversity and social justice, were inferred to be more diverse in terms of gender, under-represented ethnic groups, professional security, and employment at academic institutions. Letters B and C, which did not mention diversity and spoke against diversity statements, respectively, were instead signed by those who were inferred to be majority tenured white men at research focused universities. The results presented in this paper are consistent with theories of power and positionality found in social sciences <cit.>. From a more qualitative perspective, Burrill et al. <cit.> identified broad themes through recordings from a math education forum, consolidating and sharing common ground across diverse experts. Their work highlights the importance of communication and articulation among stakeholders, as well as the need to support students and teachers in curricular design. Thematic qualitative analysis can also be executed in a more fine-grained manner, such as through the examination of Twitter posts containing the #disruptJMM hashtag by Roca et al. <cit.>. This paper examines the math social movement born from Piper H's post on the Inclusion/Exclusion Blog calling for more commonplace discussion on inclusion and equity within the Joint Mathematics Meeting (JMM) under the banner. 
Those who used the hashtag and challenged others to get involved built a community that highlighted the need to rehumanize mathematics, make visible power and privilege in the mathematics community, and elevate the stories of under-represented folks at the conference. The examples given in this subsection of the ways discourse in the mathematics community can be studied allow us to identify salient topics of conversation, understand who is included within or excluded from these discussions, and provide context to situations that affect the heterogeneous mathematics community in disproportionate ways. §.§ Quantifying Diversity in Mathematical Content and Curricular Choices This subsection is motivated by two questions. First, what courses are mathematics departments currently implementing in their undergraduate programs? Second, how are mathematics departments using data to make curricular choices? The reality is that there is limited data available to properly answer these questions. Every five years since 1970, CBMS sponsors a comprehensive national survey of undergraduate mathematical and statistical sciences in the United States' four-year and two-year universities and colleges. The national data collected includes statistics on enrollment, curriculum, degrees awarded, course availability, faculty demographics, and special one-time topics (these topics vary) <cit.>. The most recent CBMS survey whose complete results are published is from 2015. (The release of data from the 2020 survey has been delayed due to the COVID-19 pandemic.) This survey showed that among the 28 upper-level math courses listed, the courses that were most offered (at least once) in the academic years 2014-2015 and 2015-2016 were Modern Algebra I (78% of all math departments), Geometry (71%), Advanced Calculus/Real Analysis I (72%), Math Senior Seminar/Independent study (66%), History of Mathematics (58%), and Introduction to Proofs (56%). Although there are guidelines written by leading mathematical societies on what the mathematics undergraduate curriculum should consist of (see for instance <cit.>), the CBMS survey demonstrates that there is not a consistent vision for the mathematics curriculum among the institutional respondents. Additionally, we must take into consideration the limitations of such surveys/data, such as survey response rates. For example, among the four-year mathematics respondents of the 2015 CBMS survey, only 5 out of the 23 California State University System (CSU) campuses and 4 out of the 9 University of California (UC) System undergraduate campuses participated. Combined, the CSU and UC systems would constitute the largest public institution of higher education in the United States; thus, a large number of students and departments are not represented in the survey results. This leads us to the following question: why should there be data on mathematics course offerings at different institutions? Answering this question provides an avenue for at least two data science projects that can benefit the broader mathematical community. One such project would be to actually collect the data on course offerings at all institutions with an undergraduate mathematics major. This may involve applying data science techniques, such as data/web-scraping, information retrieval, and/or data clustering (a toy illustration of the clustering step is sketched below). Having data on mathematics course offerings at different institutions can be used to influence funding, administrative, and educational practices. 
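As a toy illustration of the clustering step mentioned above (our own sketch: the course titles below are invented stand-ins for scraped catalog data, and the choice of vectorizer and cluster count is purely hypothetical), scraped course titles could be vectorized and grouped into rough curricular categories:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical course titles standing in for scraped catalog data.
courses = [
    "Modern Algebra I", "Abstract Algebra II", "Galois Theory",
    "Real Analysis I", "Advanced Calculus", "Measure Theory",
    "Introduction to Proofs", "Transition to Higher Mathematics",
    "History of Mathematics", "Mathematics and Social Justice",
]

# Character n-grams are forgiving of small naming differences across catalogs.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vectorizer.fit_transform(courses)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
for label in sorted(set(kmeans.labels_)):
    members = [c for c, l in zip(courses, kmeans.labels_) if l == label]
    print(label, members)

In a real study, the title list would come from scraped departmental catalogs, and the resulting clusters could then be audited by hand and mapped onto curricular categories such as those used in the CBMS survey.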
As far as we know, there is also no known information or literature on how mathematics departments are using quantitative data as an input when a department or faculty chooses math content and curricula. Hence, a second project is to pursue a study towards understanding the decision-making processes of mathematics departments in regards to the aforementioned setting. Such a study can have implications for mathematical practice and education and can help interested parties to identify related important and timely matters, such as textbook selection for courses. A deeper dive into the content of mathematics courses would inform the question, what is mathematics? This question is explored through the Rehumanizing Mathematics framework <cit.>, developed by Dr. Rochelle Gutierrez, and can have implications for future data science inquiry. In this framework, there are eight dimensions; participation/positioning; cultures/histories; windows/mirrors; living practice; broadening maths; creation; body/emotions; and ownership. These dimensions can be implemented in the process of rehumanizing the mathematics classroom. This framework has also been used in conjunction with lesson study to rehumanize the content of mathematics courses by bringing back diverse mathematics history that is often left out of mathematics courses, including and incorporating students' existing knowledge and experience into the classroom, and by considering alternative formats for conducting and communicating mathematics (for example). Thus far, no one has quantified and categorized the content offered in mathematics courses. This inquiry could be further expanded to identify geographical trends in the types of mathematics content offered. Decisions about who gets admitted to college, particularly to elite or selective institutions, are the result of a combination of policy and practice, and math expectations play a role <cit.>. Strongly-held beliefs about calculus as a sign of rigor play a consequential role in college admissions. This can also translate to upper-division mathematics courses and to graduate admissions. Data, such as that presented in the CBMS survey or potential results from the data science projects proposed above, should be carefully used and analyzed. For example, graduate admission committees when reviewing a student's application should take into consideration questions such as: Did the student have opportunities to enroll in graduate courses as an undergraduate student? Did the student study at small college or large university? Did the student participate in research or an internship? Did the student have the opportunity to pursue an undergraduate thesis? The call in this subsection for the ability to quantify the diversity of mathematical content and curricular choices at undergraduate institutions could lead to more nuanced admissions decisions at graduate schools and more effective undergraduate mathematics curricula with a possibility of a future mathematics community that has been positively impacted by the existence and analysis of such data. § METHODOLOGICAL CONSIDERATIONS As members of the mathematical community embark on work in #MetaMath, we caution them to proceed with humility. Much of this work, while quantitative in nature, is epistemologically closer to the social sciences than to proving theorems or writing computer code. As such, we point out some common pitfalls we urge people considering this line of research to consider. 
One common research approach is to analyze an existing public data set (<cit.>). While this can provide important insights, this can be limiting in several ways. First, this approach inverts the typical research process, in which a research question is formulated, and then data obtained; here, data is obtained, and research questions are formed (implicitly or explicitly). In particular, this can limit the kinds of questions that can be asked, and as a result limit the kinds of conclusions that can be made. Second, data collection is not a neutral endeavor. Many choices are made in the production of a data set, and these choices can have profound implications for the kinds of questions that can be answered and the kinds of conclusions that can be drawn. For example, <cit.> makes use of the data contained in the popular Mathematics Genealogy Project. However, as the authors note, this has the effect of excluding from their dataset all mathematicians who have not advised a Ph.D. student. Many of the studies given in previous subsections above concern questions about racial and gender representation in the mathematics community. However, the needed demographic information to conduct this data analysis is often not present. This leaves researchers to collect new data, or to somehow infer demographic information from other information present in the data set. One approach is to try to match the names of individuals to other public information, such as a personal website, as in <cit.>. This is, of course, somewhat labor intensive. An automated approach is to use software tools that attempt to infer an individual's gender based on their name (and sometimes other information), as in <cit.>. This practice of researchers ascribing gender or race to an individual is certain to contain errors to some degree, and should be used with caution—researchers should carefully consider whether the benefits of these techniques truly outweigh the harms caused by incorrectly ascribing identities to people. Further, many studies including gender as a factor completely disregard nonbinary individuals, either marking them as “other” or discarding them from the data set. Mathematicians undertaking this work should also be wary of an epistemological bias. Our training understandably predisposes us to use quantitative techniques. However, many questions of interest to the mathematics community are better suited to mixed or qualitative methodologies, and in fact may be impossible to answer with only quantitative methods. For example, we above highlighted several papers that illuminate demographic disparities in various aspects of the mathematical community. The next important step is to then interrogate why these disparities exist and persist, which is much better suited to qualitative methodologies. As such, we encourage those engaging in this type of work to foster collaborations with experienced qualitative researchers. The authors' personal experience is that the mathematics education research community (in addition to other social science researchers) is a natural source of collaborators that can help push this work forward. § FUTURE DIRECTIONS It should be clear to the reader that the #MetaMath examples given above are not an exhaustive list of the ways that mathematics and data science can be used to examine the mathematical sciences itself. There are many different avenues that future work could build upon the information we have provided in this article. 
First, a review of the points raised in Section <ref> on Methodological Considerations could lead to a number of ways to improve upon or modify and replicate some of the results discussed in the survey of the literature that we have provided. Second, examining the number of references where the context for the research is STEM and not exclusively the mathematical sciences could provide opportunities to repeat STEM-specific research in Math-exclusive contexts. Third, inspired by the subsection discussing qualitative analysis of discourse in the mathematics community, future researchers could try to expand the use of qualitative techniques to analyze the mathematical sciences. Fourth, there are numerous suggestions for data-enabled projects involving curricular data in the subsection on mathematical content and curricular choices that could benefit the wider mathematics community. Lastly, as more and more data about the mathematical sciences community becomes available, either updated versions of existing data or data about new topics or in areas where data is currently incomplete or inaccessible, these sources will provide new opportunities for ways to provide insight into the nature of the mathematics community through the application of techniques, tools, and topics from mathematics and data science. We conclude by inviting interested researchers to join us in this project. The work of promoting social justice and improving equity within the field of mathematics is and will remain an ongoing process. § AUTHOR CONTRIBUTIONS All the listed co-authors contributed equally to the creation of this article; the order of attribution is alphabetical, as is customary in mathematics, and is not intended to demonstrate any distinction in credit or effort. § ACKNOWLEDGMENTS This material is based upon work supported by the National Science Foundation under Grant No. DMS-1929284 while some of the authors were in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the https://icerm.brown.edu/programs/ep-22-dssj/Data Science and Social Justice: Networks, Policy, and Education program. The authors thank ICERM for providing the space for this work. The authors would like to acknowledge Katherine (Katie) Kinnaird, who contributed to early discussions. Buckmire acknowledges that the Data Science and Social Justice program at ICERM, under lead organizer Carrie Diaz Eaton, provided key organizational and logistical resources that facilitated the relationships and connections that resulted in this paper. Hibdon was supported, in part, by the National Institutes of Health's National Cancer Institute, Grant Numbers U54CA202995, U54CA202997, and U54CA203000. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Pabón thanks Nicholas Dubicki for insightful discussions. Pabón is partially supported by the National Science Foundation under Awards DMS-2108839 and DMS-1450182. Vindas-Meléndez thanks Benjamin Braun for helpful discussion. Vindas-Meléndez is partially supported by the National Science Foundation under Award DMS-2102921. § AUTHORS * Ron Buckmire received his Ph.D. in mathematics from Rensselaer Polytechnic Institute in 1994 and has been on the faculty of Occidental College in Los Angeles, California ever since. 
He is a passionate advocate for broadening the participation of people from historically marginalized and currently under-represented groups in the mathematical sciences. Dr. Buckmire believes that: 1) mathematics is a human endeavor; 2) mathematics is created, discovered, learned, researched, and taught by people; 3) the identities and experiences of "who does math" are important and that everyone and anyone can and should be welcome in the mathematics community. Mathematics Department, Occidental College, Los Angeles, CA ron@oxy.edu * Joseph E. Hibdon, Jr. received his PhD in Applied Mathematics from Northwestern University in 2011. He is an associate professor of mathematics at Northeastern Illinois University, a Hispanic-Serving Institution in Chicago, Illinois. Dr. Hibdon works in many interdisciplinary teams across the country to increase diversity in STEM and broaden the participation of students in research at the undergraduate level. Dr. Hibdon's recent research is in mathematical modeling, with a focus on public health and biological phenomena. Department of Mathematics, Northeastern Illinois University, Chicago, Illinois j-hibdonjr@neiu.edu * Drew Lewis received his Ph.D. in mathematics from Washington University in St. Louis in 2012 and most recently served on the faculty at the University of South Alabama. He now works primarily in education research and faculty development. drew.lewis@gmail.com * Omayra Ortega received her PhD in Applied Mathematics & Computational Sciences from the University of Iowa in 2008. She is an Associate Professor of Applied Mathematics & Statistics and the Assistant Dean of Research and Internships in the School of Science and Technology at Sonoma State University. Using tools from statistics, mathematics, data science, public health, and epidemiology, Dr. Ortega tackles emerging health issues. She is deeply committed to broadening the participation of under-represented minorities in STEM and mentoring students through the challenges of academia. Department of Mathematics & Statistics, Sonoma State University, Rohnert Park, CA ortegao@sonoma.edu * José L. Pabón is a Ph.D. candidate in mathematics at the New Jersey Institute of Technology in Newark, N.J. He received his A.B. degree in mathematics from Princeton University in 2019 and has a particular affinity for working with disenfranchised populations, having first-hand experience with these populations, as he was born and raised in Puerto Rico. Mathematical Sciences Department, New Jersey Institute of Technology, Newark, NJ jlp43@njit.edu * Rachel Roca is a Ph.D. student in computational mathematics, science, and engineering at Michigan State University. Her interests lie in topological data analysis and computing education with an emphasis on social good. She believes in leveraging mathematical and data science tools for both advocacy and activism. Department of Computational Mathematics, Science, and Engineering, Michigan State University, East Lansing, MI rocarach@msu.edu * Andrés R. Vindas-Meléndez received their PhD in mathematics from the University of Kentucky in 2021. Andrés is an NSF Postdoctoral Fellow and Lecturer at UC Berkeley and will begin as a tenure-track Assistant Professor at Harvey Mudd College in July 2024. Andrés' research interests are in algebraic, enumerative, and geometric combinatorics and have expanded to include applications of data science and mathematics for social justice. 
Department of Mathematics, University of California, Berkeley, CA Department of Mathematics, Harvey Mudd College, Claremont, CA andres.vindas@berkeley.edu; arvm@hmc.edu
http://arxiv.org/abs/2307.01946v2
20230704224255
A Synthetic Electrocardiogram (ECG) Image Generation Toolbox to Facilitate Deep Learning-Based Scanned ECG Digitization
[ "Kshama Kodthalu Shivashankara", "Afagh Mehri Shervedani", "Reza Sameni" ]
cs.CV
[ "cs.CV", "cs.LG" ]
The electrocardiogram (ECG) is an accurate and widely available tool for diagnosing cardiovascular diseases. ECGs have been recorded in printed formats for decades and their digitization holds great potential for training machine learning (ML) models in algorithmic ECG diagnosis. Physical ECG archives are at risk of deterioration and scanning printed ECGs alone is insufficient, as ML models require ECG time-series data. Therefore, the digitization and conversion of paper ECG archives into time-series data is of utmost importance. Deep learning models for image processing show promise in this regard. However, the scarcity of ECG archives with reference time-series is a challenge. Data augmentation techniques utilizing digital twins present a potential solution. We introduce a novel method for generating synthetic ECG images on standard paper-like ECG backgrounds with realistic artifacts. Distortions including handwritten text artifacts, wrinkles, creases and perspective transforms are applied to the generated images, without personally identifiable information. As a use case, we generated an ECG image dataset of 21,801 records from the 12-lead PhysioNet PTB-XL ECG time-series dataset. A deep ECG image digitization model was built and trained on the synthetic dataset, and was employed to convert the synthetic images to time-series data for evaluation. The signal-to-noise ratio (SNR) was calculated to assess the image digitization quality vs the ground truth ECG time-series. The results show an average signal recovery SNR of 27±2.8 dB, demonstrating the significance of the proposed synthetic ECG image dataset for training deep learning models. The codebase is available as an open-access toolbox for ECG research. Electrocardiogram (ECG), ECG digitization, deep learning, synthetic data, digital twins, data augmentation § INTRODUCTION Cardiovascular diseases (CVDs) are the primary cause of mortality globally among adults aged 37 to 70 years <cit.>. An electrocardiogram (ECG) is a representation of the cardiac electrical activity over time <cit.>, and the most accessible and widely used tool for CVD diagnosis. The ECG is most accurately studied through standard multilead recordings (e.g., 12-lead). Every day, clinicians conduct millions of ECGs, and wearable and personal devices generate millions more. There are billions of digital diagnostic ECGs globally and billions more in conventional formats such as microfilms, printed papers and scanned images. Although this legacy contains invaluable information on prevalent and rare CVDs and their evolution across generations and geography, we are not currently "learning" from ECG archives. Due to natural deterioration, lack of funding and a transition to digital ECGs, non-digital ECG archives worldwide will soon be destroyed, before we can learn from them. 
This will be an irreversible loss for CVD research, since the ECG is the only biological signal that has been recorded for over a century without significant changes in its acquisition protocol, especially for low and low middle income countries (LMICs), where paper ECGs are still more common. Although non-digital ECGs can be scanned and archived as images, there is little incentive to do so; because currently ECG images are not automatically searchable for anomalies, and are incompatible with annotation software that analyze ECG time-series <cit.>. Digitization of the existing paper ECG archives is an essential step in this context. ECG biomarkers, such as RR, PR, QRS, QT intervals, waveforms and rhythms are easily accessible from ECG time-series and could help better performance of existing machine learning models. Further, paper ECGs are restricted mainly due to privacy concerns as they contain personal and sensitive information. The US HIPAA rule requires appropriate safeguards to protect the privacy of protected health information and sets limits and conditions on the uses and disclosures that may be made of such information without an individual's consent <cit.>. This requires informed consent from each patient from whom the record has been procured, making the availability of open-source datasets scarce. Most datasets have been randomised or pseudonymised for de-identification of personally identifiable information <cit.>. However, there are significant concerns that de-identified datasets can be re-identified through linkage of metadata <cit.>, or more recently through deep learning models <cit.>. Thus there have been many attempts to model ECG synthetically in a privacy-preserving manner to adhere to all privacy regulations while generating massive amounts of data. In this work, we focus on generating synthetic paper-like ECG images from ground truth time-series data to cater to applications such as ECG image digitization that may require large paper ECG datasets <cit.>. Synthetic data must contain all characteristics of the real data population without any personally identifiable information. Thus, we aim to mimic all possible distortions present in paper ECG records while generating images from time-series data. We present a step-by-step mechanism for generating distortionless paper ECG records from digital data, followed by sequentially adding distortions such as printed text artifacts, handwritten text, wrinkles, creases, perspective distortions, and environmental noise. Our developed tool for synthetic ECG image generation (ECGGen) has been implemented in Python and is available online[A link to the toolbox will be provided upon the acceptance of the manuscript.]. § RELATED WORK There is an increasing interest in the generation of realistic synthetic data and so-called digital twins. Privacy preservation and HIPAA mandates have forced researchers to look at synthetic medical data generation, particularly Electronic Health records (EHR). In <cit.>, John et al. proposed using a federated generative adversarial network (GAN) to generate EHRs. The GAN was trained on real-world EHRs to generate a dataset of “fake” patients through synthetic data. Through medGAN <cit.>, Choi et al. demonstrated the usability of a GAN in the generation of EHRs. EHR-Safe <cit.> is another synthetic EHR-data generation framework that aimed to model EHR data through a sequential encoder-decoder architecture and GANs. 
In the ECG context, there are numerous research elucidating the generation of synthetic time-series ECG records. Previously, dynamical models have been used to produce synthetic ECG data of adults and fetuses <cit.>. The model generates a trajectory for the ECG in a three-dimensional state space. The model parameters can be selected to generate different morphologies for the PQRST complex and heart rate. This model also captures morphological variabilities in the human ECG. In <cit.>, Clifford et al. used an artificial vector model for generating abnormal ECG rhythms. The normal ECG is generated using a three-dimensional vectorcardiogram formulation, while abnormal beats are generated as perturbations to the expected trajectory. In <cit.> a general framework was presented for morphological modeling of ECG data using signal decomposition techniques. <cit.> demonstrated simulating sinus rhythm, episodes of atrial fibrillation, and atrial premature beats in ECGs. These simulations again use physical models to generate synthetic abnormalities. There have been recent advancements in synthetic data generation using GANs. Zhu et al. <cit.> proposed using a GAN composed of a bidirectional long short-term memory(LSTM) and convolutional neural network(CNN) to generate synthetic ECG time-series. The generated synthetic time-series data exhibit properties that match existing clinical data and retain features of patients with CVDs. Delaney et al. <cit.> proposed a regular GAN-based architecture but utilized recurrent neural networks (RNN) and convolutional neural networks (CNN) to generate time-series ECG data. The generated time-series samples are structurally similar to their training sets and exhibit variabilities across samples. GANs have also been successfully shown to generate synthetic ECG with abnormalities. Zhang et al. <cit.> proposed a two-dimensional bidirectional LSTM GAN to produce 12-lead ECGs corresponding to four different conditions: left ventricular hypertrophy (LVH), left branch bundle block (LBBB), acute myocardial infarction (ACUTMI), and normal. In <cit.>, Wulan introduced three generative models for the generation of ECG signals — namely the WaveNet-based model, the SpectroGAN-based model, and the WaveletGAN model — to generate ECG signals with three different kinds of beats: normal, left bundle branch block and right bundle branch block. These research demonstrate that generating synthetic time-series ECG can be made possible through generative models. To leverage these advances for the development and validation of ECG image digitization algorithms, considerably large datasets of scanned paper ECG images and their corresponding ground truth time-series data are required. However, to date, large paper ECG archives are not readily available. In <cit.>, Thambawita et al. used WaveGAN and Pulse2Pulse GAN to generate DeepFake ECGs and have created a repository with the resulting ECG images. The generated ECGs inherently have synthetic time-series data underlying the images and are validated using General Electric's MUSE 12SL ECG interpretation program. Herein, we take a different approach to synthetic paper ECG generation. We focus on a privacy-preserving approach to synthetic ECG image generation by plotting ECG time-series data on standard paper ECG grids and adding artifacts and distortions sequentially. Therefore, synthetic paper-like ECG images are obtained for which the underlying ground-truth real ECG time-series are available. 
Thus, the dataset can be used to develop effective ECG image digitization algorithms of high accuracy. § METHODOLOGY The proposed methodology includes step-by-step addition of distortions to distortion-less time-series data plotted on standard ECG grids, as shown in Fig. <ref>. Each step of the developed pipeline is detailed in the sequel. §.§ The ECG time-series The synthetic paper ECG generation pipeline requires a time-series dataset which functions as the ground truth. For this purpose, we use the PTB-XL dataset <cit.>, a clinical ECG dataset which has been compiled for training machine learning algorithms with clear benchmark procedures . The dataset contains 21,801 clinical 12-lead ECGs from 18,869 patients of 10 second length. This dataset also contains extensive metadata and statistics on signal properties and demographics such as age, sex, height and weight. Each record is provided with the standard set of ECG leads I, II, III, aVR, aVL, aVF, V1, V2, V3, V4, V5 and V6 <cit.>. In addition to the PTB-XL dataset, we also use other 12-lead clinical ECG datasets such as the CPSC and the CPSC-Extra Databases <cit.>, the INCART Database <cit.>, the Georgia 12-lead ECG database (G12EC) and the PTB database <cit.>. These datasets were used as part of the 2021 PhysioNet Challenges on multilead ECG annottaion <cit.>. The segments of interest are extracted as short 10 s time-series arrays from the entire time-series data, lead-wise. All the leads are used to construct the standard 12 lead ECG. Most time-series records from the above mentioned datasets could be contaminated by various types of measurement noises such as baseline artifacts, power line interference, motion artifacts, muscle noise and additive white Gaussian noise (AWGN) <cit.>. These different forms of real ECG noises can be modeled and added to time series data in order to make it more realistic <cit.>. Since realistic ECG noise is not white or stationary, most practical applications involving noise addition to time-series ECG data involves addition of real ECG noises such as those found in the MIT-BIH non-stress test database with varying signal-to-noise ratios <cit.>. Further, time-varying models such as autoregressive models can be used to simulate realistic ECG noise and can be added to time-series data, as mentioned in <cit.>. §.§ The standard ECG paper The standard printed ECG is recorded at 25 mm/sec speed and the standard vertical scale is such that 10 mm on the paper corresponds to 1 mV of physical units of the ECG. There are two grids plotted on the standard paper ECG: a coarse grid of 5 mm×5 mm corresponding to 0.5 mV in vertical (amplitude) and 0.2 s in horizontal (time) directions, and a fine grid of 1 mm×1 mm corresponding to 0.1 mV and 40 ms in vertical and horizontal directions, as shown in Fig. <ref>. Historically, a calibration DC pulse of 1 mV amplitude and 0.2 s width is also shown on most paper ECGs <cit.>. Most paper ECG records are of a reddish-orange colour, however this is not standardized. The typical paper ECG record is plotted on A4 or US Letter sized papers. The standard surface ECG is recorded by placing 12 electrodes on 12 different locations on the body, in addition to reference leads. Leads I, II, III, aVR, aVF and aVL denote the leads placed on the limb whereas the leads V1-V6 denote the precordial leads. Reduced ECG lead sets, measure fewer number of leads and reconstruct the other leads by calculation <cit.>, or more recently by machine and deep learning <cit.>. 
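As a rough sketch of the rendering conventions just described (25 mm/s, 10 mm per mV, with 1 mm fine and 5 mm coarse rulings), the following matplotlib snippet draws one 10 s lead on a calibration-style grid. The waveform is a synthetic placeholder rather than a PTB-XL record, and the colours, figure size, and file name are illustrative choices, not the toolbox's actual settings.

```python
# Minimal sketch: render one ECG lead on a standard calibration grid
# (25 mm of paper per second, 10 mm per mV, 1 mm fine and 5 mm coarse rulings).
# The signal below is a synthetic placeholder; a real record would be loaded
# from PTB-XL (for example with the wfdb package).
import numpy as np
import matplotlib.pyplot as plt

fs = 500                          # sampling rate in Hz (PTB-XL provides 500 Hz and 100 Hz)
t = np.arange(0, 10, 1 / fs)      # a 10 s strip
ecg_mv = 0.1 * np.sin(2 * np.pi * t) + 0.9 * np.exp(-((t % 0.8) - 0.4) ** 2 / 0.001)

x_mm = 25.0 * t                   # horizontal paper scale: 25 mm/s
y_mm = 10.0 * ecg_mv              # vertical paper scale: 10 mm/mV

fig, ax = plt.subplots(figsize=(250 / 25.4, 40 / 25.4))   # strip size in inches
ax.set_xlim(0, 250)
ax.set_ylim(-20, 20)
ax.set_xticks(np.arange(0, 251, 5))
ax.set_xticks(np.arange(0, 251, 1), minor=True)
ax.set_yticks(np.arange(-20, 21, 5))
ax.set_yticks(np.arange(-20, 21, 1), minor=True)
ax.grid(which="minor", color="mistyrose", linewidth=0.3)   # 1 mm fine grid
ax.grid(which="major", color="lightcoral", linewidth=0.7)  # 5 mm coarse grid
ax.plot(x_mm, y_mm, color="black", linewidth=0.8)
ax.tick_params(labelbottom=False, labelleft=False)
plt.savefig("lead_strip_grid.png", dpi=200, bbox_inches="tight")
```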
Most standard paper ECG records contain all 12 leads plotted along four rows. The V1 (or V2) lead is plotted separately in the 4th row as a 12 s continuous strip, for rhythm analysis. Although the majority of paper ECG records follow these practices, there do exist paper ECG records that do not adhere to this. Consequently, a synthetic paper ECG generation pipeline must take into account the variability that exists across real paper ECG records. Our developed ECGGen toolbox allows the user to tweak these variabilities and generate a diverse dataset. §.§ Printed text artifacts Typically, text artifacts on paper ECG records could contain lead names, patient information/ID, ECG calibration, date, physician's name, diagnostic codes/keywords, ECG-based measurements, and some medical terminologies. Several state and federal regulations, such as the Privacy Rule of the Health Insurance Portability and Accountability Act (HIPAA), mandate the protection of patient privacy <cit.>. Consequently, in most paper ECG records available, protected health information (PHI) is redacted to preserve privacy. The synthetic paper ECG generation pipeline must accommodate these variable printed text artifacts. In ECGGen, the lead names (I, II, III, V1, V2, V3, V4, V5, V6, avL, avF, avR) are printed alongside their respective ECG segments on the synthetically generated ECG image (cf. Fig. <ref>). The user of the toolbox has the option of choosing whether there must be an overlap between ECG segments and printed text artifacts. Although overlapped characters do pose a problem in digitizing paper ECG records <cit.>, they are added to represent realistic paper ECGs. Further, to add other printed information such as date, ECG calibration, patient record numbers, etc., we have designed several template ECG record files that can be customized by the user. This text is superimposed on the synthetically generated ECG image obtained from the previous step (cf. Fig. <ref>). §.§ Handwritten text artifacts Scanned ECGs may contain annotations or hand-written diagnoses from the healthcare provider. Our synthetic ECG image generation pipeline attempts to simulate such handwritten text artifacts to generate more realistic ECG images. Most handwritten text on paper ECG records consists of medical terms related to cardiology. We collected a set of medical texts related to ECG and CVDs. We next applied Natural Language Processing (NLP) models on these texts to compile a list of biomedical phrases and keywords from them. The resulting set was converted to handwritten-style images using pretrained models and the resulting images were overlaid on the ECG images from the previous step of our pipeline. The Python-based model from sciSpacy <cit.> was used for the NLP step, which provides a fast and efficient pipeline to perform tokenization, part-of-speech tagging, dependency parsing and named entity recognition. Next, the SpaCy model <cit.> was retrained on our collected medical texts to retain words in the ECG-CVD context. The dependency parser and the part-of-speech tagger in the released models were retrained on the treebank of McClosky and Charniak <cit.>, which is based on the GENIA 1.0 corpus <cit.>. Major named entity recognition models have been trained on the MedMentions dataset <cit.>. 
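A minimal Pillow sketch of the printed-text step is given below: it stamps lead labels onto a rendered page in a fixed 3×4 panel layout with a separate rhythm-strip label. The coordinates, font, layout, and file names are placeholders and do not reflect the toolbox's actual templates.

```python
# Minimal sketch: overlay printed lead names onto a rendered ECG page image.
# Coordinates, font, and file names are illustrative placeholders, not the
# actual template format used by the toolbox.
from PIL import Image, ImageDraw, ImageFont

LEADS = ["I", "aVR", "V1", "V4",
         "II", "aVL", "V2", "V5",
         "III", "aVF", "V3", "V6"]

def add_lead_labels(page_path: str, out_path: str) -> None:
    page = Image.open(page_path).convert("RGB")
    draw = ImageDraw.Draw(page)
    font = ImageFont.load_default()          # a printed-style TTF could be used instead
    w, h = page.size
    for idx, lead in enumerate(LEADS):
        col, row = idx % 4, idx // 4         # 3 rows x 4 columns of lead panels
        x = int((col + 0.02) * w / 4)
        y = int((row + 0.05) * h / 4)        # 4th row is left for the rhythm strip
        draw.text((x, y), lead, fill="black", font=font)
    draw.text((int(0.02 * w), int(0.78 * h)), "II (rhythm)", fill="black", font=font)
    page.save(out_path)

# add_lead_labels("lead_strip_grid.png", "ecg_page_labeled.png")
```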
Our developed synthetic ECG image generation toolbox parses words from an input text file or from online links using the BeatifulSoup library <cit.>, performs parts of speech tagging on the parsed words and then uses named entity recognition from the aforementioned models to identify ECG-related keywords, which are randomly chosen and add as handwritten text. The extracted words need to be converted into handwritten-style text to overlay on the synthetic ECG image. A pretrained Recurrent Neural Network (RNN) transducer- based model paired with soft window is used to generate handwritten text from the extracted words <cit.>. One of the major challenges in converting the words to handwritten text is that the input and output sequences are of very different lengths dependent on the handwriting style, pen size, etc. The RNN transducer based model is capable of predicting output sequences which are of different length and unknown alignment from the input sequence <cit.>. The soft window is used to determine the length of the output handwritten sequence by convolving with the input text string. This results in outputs of different lengths for different handwriting styles. Our current handwritten text pipeline allows the user to choose from seven different handwriting styles to overlay onto the ECG image. The coordinates to overlay the text can be chosen by the user of the toolbox, and if not specified are selected randomly. Examples of the resulting images are shown in Fig. <ref>. §.§ Paper wrinkles and creases Wrinkles and creases are very common distortions observed on scanned paper-based ECG images. Wrinkle distortions are caused by the uneven surface of the wrinkled document. When the scanner light passes over the uneven surface, the resulting image can be distorted by shadows or reflections from the wrinkles. This can cause areas of the image to appear darker or lighter than the actual paper, or it can cause the image to appear blurry or distorted. Creases, on the other hand, are caused by the physical crease in the scanned paper. When the scanner light passes over the crease, it can create a shadow that appears as a dark line in the resulting image. This can make the text or image on the creased area difficult to read or interpret. Wrinkles and creases can be modeled using image processing techniques. Creases can be modeled by Gaussian-distributed blurred lines, linearly spaced to give an impression of creases caused by paper folds. Mathematically, to introduce crease artifacts in an image, the coordinates of the creases can be determined using line equations and translations. To generate the line equations representing creases and to determine their starting and ending points based on the number of creases, their angle of inclination, and the width and height of the image, the steps in Algorithm <ref> were taken. This algorithm provides us with the coordinates of the creases using line equations and translation, ensuring that the creases are within the image bounds and contributing to the generation of realistic crease artifacts in the synthetic paper-like ECG images. To simulate the effect of blurring on crease lines, we apply Gaussian blurring to the generated crease lines, which is commonly used as an image augmentation technique to simulate smoothing or blurring effects in images. It has also been used to simulate the effect of unfocused camera on the object of interest, which is known to impact the performance of deep neural networks <cit.>. 
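The crease step can be sketched as follows: a few linearly spaced dark lines at a chosen inclination are drawn into a mask, softened with a Gaussian blur, and blended into the page so they resemble the shadow of a paper fold. The parameter values, blending weights, and the use of plain OpenCV here are illustrative assumptions rather than the toolbox's exact implementation.

```python
# Minimal sketch of crease artifacts on a colour (H, W, 3) page image.
import numpy as np
import cv2

def add_creases(img, n_creases=3, angle_deg=80.0, strength=0.25, sigma=7):
    h, w = img.shape[:2]
    mask = np.zeros((h, w), dtype=np.float32)
    theta = np.deg2rad(angle_deg)
    dx, dy = np.cos(theta), np.sin(theta)
    for k in range(1, n_creases + 1):
        # translate each crease line so the creases are linearly spaced across the page
        cx, cy = int(k * w / (n_creases + 1)), h // 2
        p1 = (int(cx - dx * h), int(cy - dy * h))
        p2 = (int(cx + dx * h), int(cy + dy * h))
        cv2.line(mask, p1, p2, 1.0, 2)
    mask = cv2.GaussianBlur(mask, (0, 0), sigma)          # shadow-like softening
    mask = mask / (mask.max() + 1e-8)
    out = img.astype(np.float32)
    out = out * (1.0 - strength * mask[..., None])        # darken along the creases
    return np.clip(out, 0, 255).astype(np.uint8)

# creased = add_creases(cv2.imread("ecg_page_labeled.png"))
```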
The blurring process is mathematically expressed as a convolution of the crease lines with a Gaussian kernel: G(x, y) = 1/(2πσ^2) e^(-(x^2 + y^2)/(2σ^2)) where x and y represent the coordinates in the 2D space, and σ represents the standard deviation of the Gaussian distribution in pixels. The application of Gaussian blurring to the created crease lines aids in generating more realistic images by simulating a shadow effect in the crease, which is commonly observed in scanned paper ECG images. Wrinkles can be thought of as textures and can thus be synthesized by state-of-the-art texture synthesis techniques such as image quilting <cit.>. Image quilting starts with a plain wrinkle image as a seed, followed by randomly selecting a patch from this image. This patch serves as the basis for synthesizing the entire wrinkle texture. Multiple such patches are generated and seamlessly blended using the minimum boundary error cut method <cit.>. In this method, the goal is to find the optimal boundary between two patches by minimizing the error in the overlap region. The minimum cost path through the region of overlap is found using dynamic programming according to the algorithm proposed in <cit.>. Instead of using a straight line between the two patches, this method calculates the minimum cost path that represents the boundary of the new block. By doing so, the placement of texture patches appears smoother and more realistic, effectively reducing noticeable sharp edges between the patches. Let B_1 and B_2 be two blocks that overlap along their vertical edges, and let B_ov1 and B_ov2 be the regions of overlap between them. The error surface e is defined as the squared difference between B_ov1 and B_ov2: e = (B_ov1 - B_ov2)^2. The error surface e is traversed for each row (i = 2,…, N) in the overlapping region and the minimum error path E is computed using dynamic programming as follows: E_i, j = e_i, j + min(E_i-1, j-1, E_i-1, j, E_i-1, j+1) where E_i, j represents the cumulative minimum error at position (i, j) in the error matrix E. The minimum error path is determined by taking the minimum of the three adjacent pixels in the previous row (i-1) and adding it to the corresponding pixel in the current row (i). The last row of E contains the minimal value and hence the end of the minimal vertical path through the error surface. Backtracking can be used from the last row to determine the path of the best-fit boundary between B_1 and B_2 <cit.>. Thus a seamless and visually coherent texture is synthesized. This image quilting technique, combined with the minimum boundary error cut method, helps generate a realistic paper-based texture with wrinkles and creases. Thus the resultant image exhibits natural and realistic blending, contributing to the overall authenticity of the synthetic ECG images. Together, wrinkles and creases are added as cumulative transforms on the ECG image to add realistic distortions, as shown in Fig. <ref>. §.§ Perspective artifacts We apply perspective transformations on the synthetic paper ECG image to incorporate different camera viewpoints that could have been used while scanning or taking pictures of paper ECG records. Traditionally, perspective transformations have been widely used in image augmentation for several vision-based applications. <cit.> refer to the use of perspective transformations for data augmentation to produce new images captured from different camera viewpoints, specifically for object detection applications. 
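A NumPy sketch of the minimum-error boundary cut itself is shown below: it builds the error surface from the two overlap regions, runs the dynamic-programming recursion given above, and backtracks to return the cut column for every row. The function name and interface are illustrative only.

```python
# Sketch of the minimum-error boundary cut used in image quilting: compute the
# error surface of the two overlap regions, run the dynamic program
# E[i, j] = e[i, j] + min of the three neighbours in row i-1, and backtrack to
# obtain the cut column per row.
import numpy as np

def min_error_boundary_cut(ov1: np.ndarray, ov2: np.ndarray) -> np.ndarray:
    e = (ov1.astype(np.float64) - ov2.astype(np.float64)) ** 2
    if e.ndim == 3:                       # colour overlap: sum the channel errors
        e = e.sum(axis=2)
    n_rows, n_cols = e.shape
    E = e.copy()
    for i in range(1, n_rows):
        for j in range(n_cols):
            lo, hi = max(j - 1, 0), min(j + 2, n_cols)
            E[i, j] += E[i - 1, lo:hi].min()
    # backtrack from the minimal entry in the last row
    cut = np.empty(n_rows, dtype=int)
    cut[-1] = int(np.argmin(E[-1]))
    for i in range(n_rows - 2, -1, -1):
        j = cut[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, n_cols)
        cut[i] = lo + int(np.argmin(E[i, lo:hi]))
    return cut   # cut[i] is the column where patch 1 gives way to patch 2 in row i
```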
Perspective transforms are simulated by applying geometric transformations to distortionless ECG images. Affine, projective, or homography transformations are utilized to introduce variations in scaling, rotation, and skewing, mimicking the perspective distortions encountered in real paper ECGs. Affine transformations preserve parallelism of lines and can be used to simulate translations, scaling, rotations and shear transformations. Thus, they simulate parallel movements of a camera when scanning a paper ECG. Given an ECG image, the affine transformation can be represented by the matrix transform: ( [ x'; y'; 1; ]) = ( [ a b c; d e f; 0 0 1; ]) ( [ x; y; 1; ]) where (x, y) are the coordinates of a pixel in the original image I, and (x', y') are the transformed coordinates after the affine transformation. Projective transformations, on the other hand, can be used to simulate skew transformations as well. The affine transformation is a special case of a projective transformation. Projective trasnformations do not preserve parallelism and can be used to simulate how perceived images change when the view point of the observer changes. Thus, projective transformations are also incorporated into the synthetic paper ECG generation pipeline to account for such perceived observational changes for mobile- or camera-based ECG images. For a projective transformation, also known as a perspective transformation, the matrix equation is given by: ( [ x'; y'; w'; ]) = ( [ a b c; d e f; g h i; ]) ( [ x; y; 1; ]) where (x, y) are the coordinates of a pixel in the original ECG image I, (x', y') are the transformed coordinates after the projective transformation, and w' is a scaling factor that ensures homogeneous coordinate representation. The elements a, b, c, d, e, f, g, h, and i in the transformation matrix control the scaling, rotation, shear, skew and perspective effects applied to the image. These transformations add depth and simulate different viewing angles and positions, further enhancing the resemblance to real ECG images. The above mentioned transforms are performed using the imgaug library, available on Python <cit.>. Imgaug is a library for image augmentation in machine learning experiments. It supports a wide range of augmentation techniques, allows to easily combine these and to execute them in random order. Thus it can be suitably used for generating synthetic ECG images as a highly variable dataset. §.§ Imaging artifacts and noise The final distortions added to the synthetic paper ECG images include imaging noise modeled by Gaussian noise, Poisson noise, salt and pepper noise, and color temperatures, which are essential tools for generating realistic synthetic images and improving the robustness of machine and deep learning models trained on these images. Typically, Gaussian noise arises in digital images during image acquisition, which in this case refers to scanning or taking pictures of paper ECG images or ECG images from a monitor. Gaussian noise, which models sensor noise, is added to each pixel independently: I_noisy(x, y) = I(x, y) + N_gaussian(0, η) where I_noisy(x, y) represents the noisy pixel value, I(x, y) is the original pixel value of the ECG images, and N_gaussian(0, η) is a random value drawn from a Gaussian distribution with mean 0 and standard deviation η. 
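As a rough illustration of two distortions from this section, the sketch below applies a projective warp of the page (the released toolbox performs such augmentations with imgaug) and the additive Gaussian pixel noise just described. The use of plain OpenCV/NumPy, the corner offsets, and the noise level are assumptions made for illustration.

```python
# Rough sketch: projective (perspective) warp of the page plus additive Gaussian
# pixel noise. Corner offsets and noise level are illustrative values.
import numpy as np
import cv2

def random_perspective(img, max_shift=0.05, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)) * np.array([w, h])
    dst = (src + jitter).astype(np.float32)
    H = cv2.getPerspectiveTransform(src, dst)   # a 3x3 homography like the matrix above
    return cv2.warpPerspective(img, H, (w, h), borderValue=(255, 255, 255))

def add_gaussian_noise(img, eta=8.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.astype(np.float32) + rng.normal(0.0, eta, size=img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)   # Poisson / salt-and-pepper analogous

# page = add_gaussian_noise(random_perspective(cv2.imread("ecg_page.png")))
```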
Poisson-distributed noise, is commonly used to model noises due to electromagnetic interference during image acquisition, can be added to each pixel: I_noisy(x, y) = min{255, max[0, I(x, y) + N_poisson(λ)]} where N_poisson(λ) is a random value drawn from a Poisson distribution with parameter λ, and clip ensures the pixel value stays within the range [0, 255] (for 8-bit per color image representations). Salt and pepper noise models camera sensor malfunction, which may happen when scanning the ECG images. Salt and pepper noise randomly sets pixels to either the minimum or maximum intensity: I_noisy(x, y) = 0, with probability p/2 255, with probability p/2 I(x, y), with probability 1 - p where the probability p determines the density of salt and pepper noise. Finally, color temperatures can be simulated by adjusting the color channels of the image. A color temperature value is selected and the color channels are transformed using appropriate algorithms such as color temperature scaling or white balance adjustment to mimic the desired color temperature effect. The RGB values of the image are also changed in accordance to a temperature value ranging between 1000 to 40000 Kelvin. The lower values correspond to bluish tinges and the higher values correspond to orangish tinges. In the developed toolbox, all these artifacts are added in user-adjustable proportions to the synthetic ECG image using the imgaug library in Python. § CASE STUDY: A DEEP LEARNING MODEL FOR ECG IMAGE DIGITIZATION As a use case, we used our synthetic ECG image generation toolbox to generate a diverse ECG image dataset to train a deep learning-based ECG image digitization architecture that has demonstrated good performance on real ECG images. The deep model consists of a denoising CNN (DnCNN), followed by post-processing algorithms to recover the ECG time-series data. This model was trained on a dataset of synthetic paper ECG images generated from the PTB diagnostic ECG database time-series <cit.>, consisting of 549 records. The trained model was applied to assess the fidelity of a synthetic paper ECG dataset generated from the PTB-XL dataset <cit.>, consisting of 1000 images. The histogram of the signal-to-noise ratio (SNR) for recovering the ECG time-series from the synthetic dataset is depicted in Fig. <ref>, with an average of 27.03 dB and a standard deviation of 2.82 dB. The mean square error (MSE) was computed as an additional metric for comparison, yielding a value of approximately 3.1  µV for PTB-XL ECG dataset, as listed in Table <ref>. The results demonstrate the effectiveness of the synthetic dataset in generating ECG images for training machine and deep learning models. The average SNR of 27.03 dB (and a MSE of 3.1 µV) indicates a high level of similarity between the synthetic paper ECG images and the ground truth time-series data, implying that the generated images closely resemble real paper ECGs. The low standard deviation of 2.82 dB also suggests consistent performance across the dataset, further reinforcing the reliability and stability of the synthetic images. The low MSE of the algorithm in recovering ECG time-series indicates that the algorithm effectively captures the details and characteristics of the ECG waveforms, further validating the fidelity of the synthetic dataset. Overall, these results affirm the quality and authenticity of the synthetic paper ECG dataset and the effectiveness of the employed digitization algorithm. 
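The SNR and MSE figures quoted above can be reproduced in spirit with a few lines of NumPy; the sketch below assumes already-aligned ground-truth and recovered leads, which may differ from the exact alignment and scaling conventions of the published evaluation.

```python
# Sketch of the evaluation metrics reported above: signal-to-noise ratio (in dB)
# and mean square error between a ground-truth lead and the time-series recovered
# from the digitized image.
import numpy as np

def snr_db(reference: np.ndarray, recovered: np.ndarray) -> float:
    noise = reference - recovered
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

def mse(reference: np.ndarray, recovered: np.ndarray) -> float:
    return float(np.mean((reference - recovered) ** 2))

# Example with a toy signal and a slightly perturbed "recovered" copy:
rng = np.random.default_rng(0)
ref = np.sin(2 * np.pi * 1.2 * np.arange(0, 10, 0.002))
rec = ref + 0.02 * rng.standard_normal(ref.size)
print(f"SNR = {snr_db(ref, rec):.1f} dB, MSE = {mse(ref, rec):.2e}")
```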
The findings underscore the significance of utilizing synthetic data for the development and evaluation of digitization algorithms, offering a valuable resource for researchers to explore and improve techniques in ECG analysis and medical diagnostics. §.§ Computational performance The dataset generation process was executed in parallel using the HPC cluster of Emory University's Department of Biomedical Informatics (BMI). The cluster is equipped with an NVIDIA DGX1 node to perform computationally intensive operations. The NVIDIA DGX1 node has 80 Intel Xeon E5-2640 v4 CPU cores, a total of 512 GB RAM, and 8 NVIDIA Tesla V100 GPUs, enabling high-performance parallel operations. The dataset generation was distributed across 14 threads on Linux virtual machines. On average, the generation of each synthetic ECG image of size 2200×1700 pixels and 200 dpi resolution took approximately 0.42 s. To provide further insight into the individual components of the dataset generation pipeline, the required time for each step of the pipeline was also recorded, as detailed in Table <ref>. Accordingly, generating distortionless ECG images (the initial step of the pipeline), took an average of 0.035 s. Subsequently, the addition of handwritten text artifacts required approximately 0.2 s, while wrinkles and creases took around 0.1 s. The application of perspective transforms and augmentation techniques was achieved in approximately 0.065 s per image. To assess the impact of system specifications, the dataset generation was also conducted on a Windows desktop machine with an Intel 11th Gen Core i7-1165G7 processor, 16 GB of RAM and 10 CPU cores. On this machine, each image (same size and resolution as above) was generated in 27 s. Specifically, distortionless ECG image generation took 2 s, followed by handwritten text artifact addition which required 17 s. The incorporation of wrinkles and creases took approximately 5 s, while perspective transforms and augmentation techniques consumed an average of 4 s. § DISCUSSION The ECG image digitization results demonstrate the successful synthesis of realistic ECG images that closely resemble the characteristics of real scanned ECG images. The performance evaluation of our pipeline highlights its ability to accurately represent ECG images in a computationally efficient manner. In the sequel, we discuss further steps that could refine the proposed pipeline and optimize its performance for a wider range of machine-learning-based ECG analysis. According to Table <ref>, the generation of handwritten-like text artifacts is the most time-consuming step of the synthetic ECG image generation pipeline. This is attributed to its RNN transducer-based model, which has 3.4 million parameters <cit.>. There exist several techniques to shrink the size of a model while achieving acceptable trade-offs on model performance. Future work can entail improving prediction latency using techniques such as freezing, pruning and quantization. Accelerating deep learning inference via freezing has been shown to save potentially 50% of the computation time by halving the number of effective layers <cit.>. In <cit.>, Molchanov et al. proposed a formulation for pruning convolutional kernels, which resulted in monotonically decreasing inference times. These techniques can be used to prune the handwritten-like image generation step. Quantization of weights can also result in a typical speedup of 2–3 times with tradeoffs in precision for both memory access and computations <cit.>. 
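To make the latency-reduction ideas above concrete, here is a hedged PyTorch sketch of magnitude pruning followed by dynamic int8 quantization applied to a toy stack of linear layers; the toy model, sparsity level, and quantized dtype are illustrative choices and do not describe the actual handwriting-synthesis network.

```python
# Illustrative sketch of the compression techniques discussed above, applied to a
# toy PyTorch module (not the actual RNN transducer): unstructured magnitude
# pruning followed by dynamic int8 quantization of the linear layers.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 80))

# prune 30% of the smallest-magnitude weights in each Linear layer
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")        # make the pruning permanent

# dynamic quantization of the remaining Linear weights to int8
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```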
Text artifacts in scanned paper ECG images hinder image digitization performance. A visual inspection of the low-performing cases of the ECG digitization pipeline, corresponding to the low-SNR cases on the left tail of the histogram in Fig. <ref>, revealed that these cases had multiple significant text artifacts, which were not easily discriminated from the ECG traces. This effect is, however, partially mitigated for the printed text (such as lead names), which is detected by the optical character recognition step. The lowest observed SNR value is 15.12 dB, as the corresponding image contained both handwritten and printed text artifacts in large font sizes, along with wrinkles and creases. Typical examples of these corner cases are shown in Fig. <ref>. The second step of the pipeline involved the addition of wrinkle and crease artifacts. Generating realistic wrinkle-like textures introduces latency to the synthetic image generation process. Image quilting is a computationally intensive process, taking an average of 4.71 s per image to generate the textures and another 1.28 s to add the wrinkle and crease artifacts (5.99 s in total, see Table <ref>). In <cit.>, Raad et al. propose implementation strategies of the image quilting algorithm using partial parallel processing, allowing a significant speed-up when run with multi-core processors. Incorporating these techniques can significantly reduce the wrinkle generation step run time. Further, Wei et al. in <cit.> proposed using tree-structured vector quantization for faster texture synthesis that provided a nearly 150-fold speedup for certain textures. This technique can further be incorporated in our implementation for faster texture synthesis. The presence of wrinkles and creases significantly reduces digitization performance. These artifacts, intended to produce realistic images resembling real-world paper ECGs, contribute to a reduction in the deep model's performance. The reduced SNRs for wrinkle and crease-based distortions range from 15.12 dB to 16.43 dB. This can be attributed to the additional texture and irregularities introduced by these artifacts, which can affect the quality and clarity of the underlying ECG time-series. The observed lower SNR values associated with text artifacts and the inclusion of wrinkles and creases indicate the importance of carefully managing and minimizing the impact of these artifacts during paper ECG scanning and digitization. Sample results of the wrinkle and crease removal step are shown in Fig. <ref>. The addition of perspective transforms and imaging noise does not add significant overhead to the synthetic ECG image generation pipeline. These artifacts are added by traditional image processing techniques that do not require significant computational resources. These artifacts can also be compensated for effectively and do not significantly affect the ECG time-series recovery SNR. This performance is attributed to the denoising Convolutional Neural Network (DnCNN) model, which we employed in the deep-learning-based digitization algorithm to remove grid and noise artifacts. This model can handle general image denoising tasks such as blind Gaussian denoising, single image super-resolution with multiple upscaling factors, and JPEG image deblocking with different quality factors <cit.>. ECG images collected in clinical settings have variable qualities, which depend on the standard practices of acquisition and storage per site. 
Hence, the images do not always have extreme distortions and are rather consistent in quality and noise/artifact type per clinical setting. Therefore, synthetic datasets used for training ECG digitization models should ideally be generated with similar statistical distributions of these artifacts per site. Future work should incorporate a statistically conscious model-based synthetic ECG generation pipeline for more realistic images. Future research efforts should also focus on developing accurate digitization algorithms and approaches that can effectively mitigate the SNR degradation caused by extreme text artifacts and wrinkles. A practical approach is to pretrain multiple deep models, which are optimized for different scenarios. These models can be deployed in a model, where the user could select between the available models, based on specific needs of each clinical setting. § CONCLUSION In this work, we presented a novel approach for generating synthetic paper-like ECG images from time-series data and evaluated the fidelity of the model in a case study for training a deep learning-based ECG image digitization algorithm. The synthetic images have realistic distortions, including handwritten-like text artifacts, wrinkles and creases, and perspective transforms, making them very similar to actual paper ECGs. The evaluation results, which used the signal-to-noise ratio and mean square error (MSE) metrics of the original vs recovered ECG time-series, show that the synthetic ECG image dataset has been indeed effective for building an accurate ECG image digitization pipeline. The high SNR values (low MSE) and visual inspection show a strong resemblance between the generated synthetic images and the ground truth time-series data, assessing the synthetic dataset's integrity. The significance of this research lies in its ability to address the challenges associated with limited access to real patient ECG data due to privacy concerns and regulations governing protected health information (PHI). By creating synthetic paper ECG images, researchers can ensure privacy compliance while still enabling the development and testing of ECG analysis algorithms. This proposed method offers a controlled and diverse dataset with realistic distortions, allowing for comprehensive testing and optimization of digitization techniques. Additionally, the synthetic paper ECG dataset serves as a valuable resource for algorithm development and training data augmentation for machine learning models, thereby supporting advancements in medical diagnoses. Future research in this field could focus on expanding the synthetic paper ECG dataset by incorporating additional forms of distortions and variations, such as electrode misplacements, noise patterns, and heart rate variability. Furthermore, conducting a Turing test in collaboration with medical specialists would be beneficial to determine whether the synthetic data generated in this study adequately simulates real medical ECG records. This human evaluation would provide crucial insights into the perceptual fidelity and realism of the synthetic dataset, with measures like the Kappa value being employed to assess agreement between human assessors. Such evaluation would confirm the similarity and reliability of the synthetic data, facilitating its effective integration into clinical settings, with applications in algorithmic ECG annotation and training. 
Moreover, exploring the utilization of generative adversarial networks (GANs) and other advanced techniques for generating synthetic medical data could enhance the dataset's realism and diversity. In conclusion, the study presented in this paper lays the foundation for employing synthetic medical data in ECG analysis, opening doors to future advancements in medical diagnostics and contributing to improved patient care. 
http://arxiv.org/abs/2307.02208v1
20230705112024
Cavity-Born-Oppenheimer Hartree-Fock Ansatz: Light-matter Properties of Strongly Coupled Molecular Ensembles
[ "Thomas Schnappinger", "Dominik Sidler", "Michael Ruggenthaler", "Angel Rubio", "Markus Kowalewski" ]
quant-ph
[ "quant-ph", "physics.chem-ph" ]
Experimental studies indicate that optical cavities can affect chemical reactions, through either vibrational or electronic strong coupling and the quantized cavity modes. However, the current understanding of the interplay between molecules and confined light modes is incomplete. Accurate theoretical models, that take into account inter-molecular interactions to describe ensembles, are therefore essential to understand the mechanisms governing polaritonic chemistry. We present an ab-initio Hartree-Fock ansatz in the framework of the cavity Born-Oppenheimer approximation and study molecules strongly interacting with an optical cavity. This ansatz provides a non-perturbative, self-consistent description of strongly coupled molecular ensembles taking into account the cavity-mediated dipole self-energy contributions. To demonstrate the capability of the cavity Born-Oppenheimer Hartree-Fock ansatz, we study the collective effects in ensembles of strongly coupled diatomic hydrogen fluoride molecules. Our results highlight the importance of the cavity-mediated inter-molecular dipole-dipole interactions, which lead to energetic changes of individual molecules in the coupled ensemble. The strong coupling of light and matter within optical cavities provides an innovative way to alter and design matter properties, making it a rapidly evolving field <cit.> at the intersection of chemistry, quantum optics, and materials science. In polaritonic chemistry, depending on whether the quantized cavity modes are coupled via their characteristic frequencies to electronic or vibrational degrees of freedom of molecules, the situation is described as esc or vsc, respectively. Under esc, it becomes possible to modify the photochemistry/photophysics of molecules including charge transfer processes and electronic spectroscopy, and photoinduced reactions can be influenced <cit.>. Similarly, for vsc, the vibrational spectra of molecules are altered by the formation of light-matter hybrid states, and even the chemical reactivity of the ground state can be modified <cit.>. The observed effects of molecular esc and vsc are often discussed in a phenomenological way, and understanding of the underlying microscopic and macroscopic physical mechanisms, especially with respect to the effects of vsc, is still incomplete <cit.>. In our recent work <cit.> we have shown numerically that the interaction between an optical cavity and ensembles of molecules not only leads to cavity detuning and a change of the optical length, but also allows for a local molecular polarization mechanism under strong collective vibrational coupling in the thermodynamic limit. The interplay of microscopic and macroscopic polarization is due to a cavity-mediated dipole-dipole interaction. A deep understanding of this effect may bridge the gap between existing simplified models for a macroscopic ensemble of molecules and experiments. We have been able to study this cavity-mediated dipole-dipole interaction for very large ensembles by using simple Shin-Metiu molecules. 
As a next step to go beyond this simple molecular model, we present here a formulation of the well-known Hartree-Fock ansatz in the context of the cboa <cit.> derived from the complete non-relativistic Pauli-Fierz Hamiltonian <cit.>. We refer to the resulting method as cbohf approach and, to the best of our knowledge, this is the first wave function-based method to describe strong coupling of real molecules in a cavity in the cboa framework. The cbohf method allows us to study cavity-mediated dipole-dipole interactions for realistic molecular systems. The first part of this work describes the cbohf formalism. Next, the cbohf approach is used to explore the effects of collective cavity-mediated coupling in small ensembles of diatomic hydrogen fluoride (HF) molecules. By explicitly simulating the molecular systems in a self-consistent calculation, we are able to study the interactions within the ensemble beyond what can be captured by scaled single-molecule model Hamiltonians. In the following we study static ensembles of molecules but do not address the effects of vibrational resonances <cit.>. Consistent with recently reported results <cit.>, we observe non-negligible cavity-induced energy changes at the local molecular level. By a detailed analysis of these energy changes at the microscopic (single-molecule) and macroscopic (molecular ensemble) level, we can show how the size of the ensemble, the individual molecular orientation, and the change in nuclear configuration can affect these collective interactions. The physics of a cavity-embedded molecular system is described using the non-relativistic Pauli–Fierz Hamiltonian <cit.>, which is represented in the length gauge, assuming the dipole approximation and the cboa <cit.>. Within the cboa, the cavity modes and nuclei are considered to be "slow" degrees of freedom compared to the "fast" electrons and, consequently, only the electrons are treated quantum mechanically. In the following bold symbols denote vectors and atomic units (ħ=4πε_0=m_e=1) are used throughout the paper, unless otherwise indicated. For a single-mode cavity, the cboa Hamiltonian takes the form Ĥ_CBO = Ĥ_el + 1/2ω_c^2 q_c^2 - ω_c q_c(λ_c·μ̂) + 1/2(λ_c·μ̂)^2 , where μ̂ = μ̂_el + μ_Nuc = -∑_i=1^N_elr̂_i + ∑_A=1^N_Nuc Z_AR_A , represents the molecular dipole operator, which is defined by the operators of the N_el electron coordinates r̂, the classic coordinates R of the N_Nuc nuclei and the nuclear charge Z. Ĥ_el is the Hamiltonian for the field-free many-electron system, and the second term defines the harmonic potential introduced by the cavity-mediated displacement field, with the photon displacement coordinate q_c and ω_c being the frequency of the cavity mode. The third term of Eq. <ref> describes the dipole coupling between the molecular system and the photon displacement field, which is characterized by the coupling strength λ_c. The last term is the dse operator <cit.>, which is an energy contribution that describes the self-polarization of the molecule-cavity system. Note that the inclusion of the dse contribution is strictly necessary to obtain a finite polarization and a bounded solution <cit.>. Since in practice a finite basis, such as Gaussian basis sets used in most of quantum chemistry methods, also limits the polarization, the lack of a stable ground state is not always observed in numerical calculations. In the following, we will show that the dse term is not only formally needed, but yields inter-molecular interactions in an ensemble of molecules. 
The coupling parameter λ_c for a cavity with an effective mode volume V_c is defined as follows: λ_c = eλ_c = e√(4 π/V_c) . The unit vector e denotes the polarization axis of the cavity mode. In the context of a Fabry-Pérot type cavity, λ_c can be directly related to the electric vacuum field strength ϵ_c via λ_c = e√(2/ω_c)ϵ_c. Without loss of generality, we will use this relation to quantify λ_c by ϵ_c. As in the standard Hartree-Fock approach, the cbohf electronic wave function of an N_el-electron molecular system is a Slater determinant of mutually orthonormal spin orbitals φ_i <cit.>: Ψ( τ_1, τ_2, …, τ_N) = 1/√(N_el!)φ_1, φ_2, …φ_i . Here τ_N is used to denote the complete set of coordinates associated with the N-th electron, comprised of the spatial coordinate r_N and a spin coordinate. Note that the N_el-electron system described by Ψ can be a single molecule or an ensemble of many molecules. Thus, it is possible to treat cavity-induced interactions and standard Coulomb interactions between molecules in the ensemble in the same way. For the special case of the dilute gas limit, that is, the situation in which the electronic structures of different molecules do not overlap and interact, the ensemble Slater determinant may be replaced by a product <cit.> of individual molecular Slater determinants. Note that the displacement coordinate q_c of the electric field mode is treated as a parameter in cbohf ansatz, analog to the nuclear coordinates, and is thus not part of the wave function. Using Ĥ_CBO and Ψ, the cbohf energy expectation value E_CBO can be determined using the standard scf procedure <cit.>: ⟨ E_CBO⟩ = ⟨Ψ| Ĥ_el - ω_c q_c(λ_c·μ̂) + 1/2(λ_c·μ̂)^2 | Ψ⟩ + 1/2ω_c^2 q_c^2. The resulting energy expectation E_CBO consists of four energy contributions: E_CBO = E_el + E_lin + E_dse + E_dis with E_dis = 1/2ω_c^2 q_c^2 . The detailed derivation of all new energy contributions in the cbohf ansatz is given in section S1 of the supporting information. The following discussion of these energies is based on Hartree-Fock matrix elements formulated in the basis of orthonormal spin orbitals. The first term E_el contains all Hartree-Fock energy components of the many-electron system <cit.> and is only indirectly affected by the cavity via the scf procedure. The second term E_lin describes the linear part of the light-matter coupling and is obtained from the dipole coupling between the photon displacement field, the electrons, and the nuclei. It can be written as a sum of one-electron integrals formulated in terms of spin orbitals φ_i and a parametric nuclear contribution: E_lin = - ω_c q_c⟨Ψ| λ_c·μ̂| Ψ⟩ = - ω_c q_c∑_i=1^N_oc⟨φ_i | x̂| φ_i ⟩ - ω_c q_c( λ_c·μ_Nuc) with x̂= -λ_c·r̂ . The remaining component E_dse can be decomposed into a purely electronic term, a mixed electron-nuclear term, and a pure nuclear contribution: E_dse = 1/2⟨Ψ| (λ_c·μ̂)^2 | Ψ⟩ = E^ (el)_dse + E^ (e-n)_dse + E^ (nuc)_dse = 1/2⟨Ψ| ( λ_c·μ̂_el)^2 | Ψ⟩ + ( λ_c·μ_Nuc) ⟨Ψ| ( λ_c·μ̂_el) | Ψ⟩ + 1/2( λ_c·μ_Nuc)^2 . To simplify the electronic structure calculations in this work, the classical nuclei are arranged so that their contributions to the total dipole moment are zero (μ_Nuc = 0, center of charge). Thus, the nuclear contribution in Eq. <ref> as well as E^ (e-n)_dse and E^ (nuc)_dse are zero by definition. More details on the latter two can be found in section S1 of the Supporting Information. 
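As a quick numerical aid, the conversions between effective mode volume, coupling strength, and vacuum field strength stated above can be written down directly (atomic units assumed; the numbers are illustrative, not values used in the calculations of this work):

import numpy as np

def lambda_from_volume(v_c):
    # lambda_c = sqrt(4*pi / V_c), atomic units
    return np.sqrt(4.0 * np.pi / v_c)

def field_from_lambda(lam, omega_c):
    # invert lambda_c = sqrt(2 / omega_c) * eps_c  =>  eps_c = lambda_c * sqrt(omega_c / 2)
    return lam * np.sqrt(omega_c / 2.0)

lam, omega_c = 0.01, 0.02              # illustrative values only
v_c = 4.0 * np.pi / lam**2             # effective mode volume reproducing this coupling
print(v_c, lambda_from_volume(v_c), field_from_lambda(lam, omega_c))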
The pure electronic contribution E^(el)_dse contains a squared electronic position operator x̂^2=x̂_i x̂_j, which is a two-electron operator but can be decomposed into a one-electron contribution and a two-electron contribution (for details see Eq. S7 in the supporting information) <cit.>: E^(el)_dse = E^(1e)_dse + E^(2e)_dse = 1/2∑_i=1^N_oc⟨φ_i | x̂^2 | φ_i ⟩ + 1/2[ ∑_i,j^N_oc⟨φ_i | x̂| φ_i ⟩⟨φ_j | x̂| φ_j ⟩ - | ⟨φ_i | x̂| φ_j ⟩|^2 ] . The one-electron term E^(1e)_dse retains the quadratic nature of the x̂ operator, behaves like scaled quadrupole tensor elements, and describes a localized energy contribution. The expansion of the two-electron part E^(2e)_dse in terms of spin orbitals follows a logic similar to that of the derivation of the Coulomb and exchange integrals in the regular Hartree-Fock ansatz. The terms here can be factorized and further decomposed into a dipole-dipole interaction component E^(2J)_dse and an exchange-like component E^(2K)_dse. This exchange-like quantity E^(2K)_dse vanishes if φ_i and φ_j have no spatial overlap and therefore describes a localized interaction: E^(2K)_dse = - ∑_i,j^N_oc| ⟨φ_i | x̂| φ_j ⟩|^2 . The E^(2J)_dse part can be rewritten as a product of the scaled electronic dipole moment: E^(2J)_dse = ∑_i,j^N_oc⟨φ_i | x̂| φ_i ⟩⟨φ_j | x̂| φ_j ⟩ = ( λ_c·⟨μ̂_el⟩) ( λ_c·⟨μ̂_el⟩). Unlike E^(2K)_dse, the dipole-dipole interaction E^(2J)_dse does not require a spatial overlap of the spin orbitals φ_i and φ_j and thus results in a delocalized interaction. These two properties are of special interest when ensembles of molecules are described. The last energy contribution E_dis in Eq. <ref> is the energy resulting from the photon displacement field <cit.>. By replacing the electronic dipole moment μ̂_el of the total ensemble with the sum over the individual molecular dipole moments in Eq. <ref> we obtain: E^(2J)_dse = ∑_m=1^N_mol[ ( λ_c·⟨μ̂_el^(m)⟩) ( λ_c·⟨μ̂_el^(m)⟩) + ∑_n ≠ m^N_mol( λ_c·⟨μ̂_el^(m)⟩)( λ_c·⟨μ̂_el^(n)⟩) ] , where the summations run over all molecules N_mol in the ensemble. The first term is the local or intra-molecular contribution to E^(2J)_dse for each individual molecule m, while the second product describes the interaction of the molecule m with all other molecules in the ensemble. This inter-molecular interaction depends only on the orientation and size of the individual dipole moments, but not on their distance. This molecular dipole-dipole interaction term E^(2J)_dse in combination with E^(nuc)_dse is commonly used as a first-order approximation <cit.> of the dse energy. Note that the full E_dse can also be approximated with the help of permanent dipole moments and transition dipole moments in a resolution-of-identity approach by summing over excited electronic states <cit.>. By solving Eq. <ref> with an scf approach, E_CBO is minimized for a given configuration of classical nuclei and a fixed photon displacement coordinate (parametric photon field). The ground state for the combined electronic-photonic subsystem is obtained by minimizing E_CBO with respect to the photon displacement coordinate, which leads to the following expression: ∂/∂ q_c E_CBO = ω_c^2 q_c - ω_c ⟨Ψ| λ_c·μ̂| Ψ⟩ = ω_c^2 q_c - ω_c ( λ_c·⟨μ̂⟩) != 0 . The resulting minimum of E_CBO is: q_c = q_min = ( λ_c·⟨μ̂⟩) /ω_c . Because we work in the length gauge, the electric field ℰ <cit.> is: 1/(4π) ℰ = 𝒟 - 𝒫 = 1/(4π) λ_c ω_c q_c - 1/(4π) λ_c (λ_c·⟨μ̂⟩) = 0 , where 𝒟 is the cavity-mediated displacement field and 𝒫 is the polarization.
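The role of the minimizing displacement coordinate can be checked with a few lines of arithmetic: at q_min = (λ_c·⟨μ̂⟩)/ω_c the displacement-field and linear-coupling energies combine to -1/2(λ_c·⟨μ̂⟩)^2, which makes explicit how much the dipole self-energy has to supply for a stable, physical result. The numbers below are hypothetical placeholders, not computed HF expectation values.

import numpy as np

lam_vec = np.array([0.0, 0.0, 0.01])   # illustrative coupling vector (a.u.)
omega_c = 0.02                         # illustrative cavity frequency (a.u.)
mu_exp = np.array([0.0, 0.0, 0.7])     # hypothetical dipole expectation value (a.u.)

lam_mu = np.dot(lam_vec, mu_exp)
q_min = lam_mu / omega_c               # zero-field / minimum-energy displacement
e_dis = 0.5 * omega_c**2 * q_min**2    # = +1/2 (lam.mu)^2
e_lin = -omega_c * q_min * lam_mu      # = -(lam.mu)^2
print(q_min, np.isclose(e_dis + e_lin, -0.5 * lam_mu**2))  # True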
Requiring that the transverse electric field vanishes in the ground state, ℰ=0, also leads to Eq. <ref>. This demonstrates that minimizing E_CBO with respect to q_c is equivalent to fulfilling the zero transverse electric field condition <cit.> in the semi-classical limit, which guarantees a non-radiating ground state. From Eq. (<ref>) we also find that E_dis + E_lin + E_dse = 1/(8π)∫_V_c ℰ̂^2 d r, which is in agreement with Maxwell's equations <cit.>. Thus, by making the cboa, we discard the magnetic contribution to the photonic energy <cit.>. The main aim of this work is to study an ensemble of well-separated molecules interacting with the cavity field. We thus assume that the wave functions of different molecules in the ensemble do not overlap and that the Coulomb interaction between them is negligible. By satisfying the zero transverse electric field condition (Eq. <ref>) for such a molecular ensemble, the photon displacement coordinate q_min becomes a function of the total ensemble dipole moment. Thus, E_lin, E_dis and E_dse do not depend exclusively on local molecular properties, but rather on the total ensemble. The cbohf method has been implemented in the Psi4NumPy environment <cit.>, which is an extension of the PSI4 <cit.> electronic structure package. All calculations were performed using the aug-cc-pVDZ basis set <cit.>, and the geometry of the isolated single HF molecule has been optimized at the Hartree-Fock level of theory. Note that we have not re-optimized the geometries of the molecular systems in the cavity; as such, our calculations do not account for any geometry relaxation effects originating from the presence of the cavity. In all cbohf calculations performed in this work, we consider a single-mode, lossless cavity. We keep the collective coupling strength λ_c constant by applying a scaling factor of 1/√(N_mol) to obtain a fixed Rabi splitting for different ensemble sizes and treat λ_0 as a tunable coupling parameter: λ_c = λ_0/√(N_mol) e. Here λ_0 is equivalent to λ_c in Eq. <ref> in the single-molecule case. As a result, we increase the mode volume V_c of the cavity, but by including more molecules, we keep the average density of molecules N_mol/V_c fixed. For N_mol≫ 1, we would be approaching the thermodynamic limit. Since the main goal of this work is to demonstrate the cbohf ansatz and to understand how the different energy contributions to the cavity-mediated interaction under vsc behave when increasing the size of the ensemble but keeping N_mol/V_c fixed, it is enough to simulate small ensembles. In this work, we restrict ourselves to up to eight HF molecules. We use an artificially increased coupling strength λ_0 in the range of 0.004 to 0.04, which corresponds to effective mode volumes (Eq. <ref>) in the single-molecule case as large as 125.27 (for λ_0 = 0.004) or as small as 1.25 (for λ_0 = 0.04). To refer to a more intuitive physical quantity, the unscaled coupling strength λ_0 is quantified in this work by the vacuum electric field strength ϵ_c. The fundamental cavity frequency ω_c is set to be identical to the first vibrational mode of the uncoupled HF molecule, which is 4467 for the chosen level of theory. This value of ω_c is chosen arbitrarily in our static setup, since resonance effects are not likely to play a role in our analysis. All calculations were performed in a reproducible environment using the Nix package manager together with NixOS-QChem <cit.> (commit f5dad404) and Nixpkgs (nixpkgs, 22.11, commit 594ef126).
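A two-line check of this scaling convention: rescaling the coupling as λ_0/√(N_mol) enlarges the effective mode volume linearly with the ensemble size, so the molecular density N_mol/V_c stays fixed (illustrative numbers, atomic units):

import numpy as np

lam0 = 0.01                                # unscaled coupling (illustrative)
for n_mol in (1, 2, 4, 8):
    lam = lam0 / np.sqrt(n_mol)            # rescaled coupling: fixed collective Rabi splitting
    v_c = 4.0 * np.pi / lam**2             # effective mode volume grows linearly with N_mol
    print(n_mol, lam, n_mol / v_c)         # density N_mol / V_c stays constant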
In this work, we study fixed ensembles of perfectly aligned, well-separated HF molecules in an optical cavity. We chose HF because of its large permanent dipole moment of 1.8, which is advantageous for the interaction between the molecule and the cavity mode, but its polarizability is small with 0.8. <cit.> Furthermore, the HF simulation results directly extend our previous results in Ref. , whereas the molecular setup connects to earlier ab-initio studies on (collective) electronic strong coupling <cit.>. To define these ensembles, the optimized structure of a single HF molecule is replicated N_mol times. All these replicas are separated by 800 and placed inside the cavity to avoid interactions via longitudinal electric fields. In general, three different orientations of the molecular HF ensembles are studied in this work and are visualized in Fig. <ref>. In the first orientation, called all-parallel, the HF molecules are aligned parallel to the cavity mode polarization axis. In the antiparallel case, the N_mol HF molecules are pairwise antiparallel, resulting in the ensemble dipole moment being zero (even number of molecules) or equal to μ of a single HF (odd N_mol). The third configuration of the ensemble, labeled defective, represents the situation where the dipole moments of N_mol-1 HF molecules point in the opposite direction to the remaining molecule. Note that for all three ensemble configurations, the individual dipole moment vectors are aligned with the cavity polarization axis, and the zero transverse electric field condition (Eq. <ref>) is satisfied for the entire ensemble. This aligned orientation is not the energetically most favorable configuration for the individual molecule (see the supporting information section S2 for a detailed discussion of the single-molecule situation), but it allows us to set an upper bound on all effects, as it guarantees the maximum molecular cavity interaction. All calculations are carried out with rescaled values of λ_c, see Eq. <ref>. Analogous calculations without rescaling can be found in the supporting information (see Figs. S5 and S6 in section S3). The energy change of the all-parallel ensembles induced by the interaction with the cavity, as well as the underlying energy components, is visualized in Fig. <ref> as a function of the size of the ensemble N_mol. Let us first consider how the different total (ensemble) energies behave and what we can learn from them. For simplicity, we focus on the all-parallel configuration. In Fig. <ref> a) we see that the proposed rescaling with an increasing ensemble size of the coupling, i.e., the mode volume V_c ∝ N_mol, keeps the light and light-matter interaction energy constant. That is, on the total energy level we see that the thermodynamic limiting procedure is well-behaved, and we expect that approximately also for N_mol≫ 1 we find such an energy difference. From a total energy contribution perspective, one might be tempted to conclude that the photon and photon-matter interaction contribution can be safely ignored, since E_el increases linearly with N_mol and hence dominates. If we, however, in a next step consider the different contributions of Eq. (<ref>), we see that even for the total-ensemble energy, a delicate balancing of macroscopically scaling energy contributions appears. Indeed, in Fig. <ref> b) we see that the energy of the displacement field increases linearly even if we rescale the coupling strength. This approximately linearly increasing term is countered by E_lin. 
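The linear growth of E_dis, and the way E_lin counteracts it, follows directly from the formulas above once the total dipole of an all-parallel ensemble, N_mol·μ, is inserted into q_min together with the rescaled coupling. A small numeric sketch (the single-molecule dipole below is a hypothetical placeholder, not the computed HF value):

import numpy as np

def displacement_energies(n_mol, lam0, mu, omega_c):
    """E_dis and E_lin of an all-parallel ensemble at q_min, with mu_tot = N_mol*mu
    and the rescaled coupling lambda_c = lambda_0 / sqrt(N_mol)."""
    lam_mu = (lam0 / np.sqrt(n_mol)) * n_mol * mu   # lambda_c . total ensemble dipole
    q_min = lam_mu / omega_c
    e_dis = 0.5 * omega_c**2 * q_min**2             # grows ~ linearly with N_mol
    e_lin = -omega_c * q_min * lam_mu               # counteracts E_dis, also ~ linear
    return e_dis, e_lin

lam0, mu, omega_c = 0.01, 0.7, 0.02                 # illustrative values (a.u.)
for n in (1, 2, 4, 8):
    print(n, *displacement_energies(n, lam0, mu, omega_c))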
That E_lin contributes negatively is simple to understand, since the displacement field (interpreted as a constant external field) makes it possible to lower the total energy by separating particles of different charges. Without E_dse, depicted in Fig. <ref> d), we would find the well-known result that the linear interaction would dissociate and ionize any bound system regardless of the coupling strength <cit.>. We can thus conclude that, in order to describe an ensemble of molecules from first principles, the dipole self-energy term E_dse is needed to find a stable and physical result. So far, we have discussed the effect of the collective coupling on the total molecular ensemble. However, the main question in polaritonic chemistry is to understand how a collectively coupled ensemble can influence individual molecules. In the next step, we will thus analyze the energy changes at the level of a single molecule, which arise as a result of the collective interactions of the entire ensemble. Such a local perspective is possible since the ensemble cbohf density matrix is block diagonal, and individual blocks can be used to create partial density matrices for each molecular subsystem. These partial density matrices can be combined with the ensemble Hamiltonian to calculate local energies (per molecule) and can be combined pairwise to calculate the interaction between the molecules (see Eq. <ref>). This part of the dse we denote inter E_dse, while all other contributions to the dse are summed and labeled local E_dse. The individual energy obtained per molecule is equivalent to the eigenvalues of the cavity-Hartree equation in our previous work <cit.>. The change in individual molecular energy induced by the interaction with the cavity, as well as the underlying energy components, is visualized in Fig. <ref> as a function of the size of the all-parallel ensemble N_mol. The energy difference Δ E between the individual molecular energy E^(1)_CBO and E^(1)_HF of the isolated HF molecule without cavity interaction is shown in Fig. <ref> a). Note that E_dis is an ensemble quantity, but since each molecule in the ensemble is affected by the same potential, we include it in E^(1)_CBO. If a second molecule is included in the cavity, the local E^(1)_CBO decreases. As more and more HF molecules are added, the initial trend reverses and E^(1)_CBO increases almost linearly, following the linear behavior of E_dis shown in Fig. <ref> b). Without E_dis, E^(1)_CBO converges to a finite non-zero value with increasing N_mol, as can be seen in Fig. S4 a) in the supporting information. The local dipole cavity interaction E_lin converges to a constant value with increasing N_mol, as shown in Fig. <ref> b). This behavior is a direct consequence of fulfilling the zero-field condition for the entire all-parallel ensemble, and cannot be generalized to every nuclear configuration. For this specific orientation, the ensemble dipole moment μ increases linearly with the number of molecules, and thus the displacement induced by the cavity leads to higher values of q_min. This effect, in combination with the rescaled coupling λ_c, leads to the constant value of E_lin that depends only on the coupling strength. On the contrary, the local E_dse, visualized in Fig. <ref> c), decays as 1/N_mol and approaches zero in the large N_mol limit. The intermolecular dipole-dipole energy ( inter E_dse) shown in Fig.
<ref> d) is part of the E^(2J)_dse term and arises as a result of the cavity-mediated interaction of the dipole moment of one molecule with all other molecules in the ensemble (for the definition, see Eq. <ref>). This energy contribution increases with an increasing number of molecules in the ensemble and approaches a constant, non-zero value following the behavior of 1-1 N_mol. All these results are clear indications that the nontrivial interplay of the collective photon displacement effects (E_lin in combination with E_dis) and the cavity-mediated dipole-dipole interaction allow for local strong coupling to emerge. In the following, we introduce a defect in perfectly aligned ensembles to further study the effect of anisotropic ensembles on a single molecule. We perform scans along the bond length of one HF molecule in fixed ensembles of different sizes for all three configurations all-parallel, antiparallel and defective. The scan along the bond length is performed for the HF molecule encircled in red in Fig. <ref>. In the defective orientation, the fixed N_mol-1 HF molecules point in the opposite direction to the perturbed molecule. For the resulting two-dimensional cpes spanned by the bond length coordinate and the photon displacement coordinate, the minimum energy path along the bond length is determined. This is equivalent to satisfying the zero transverse electric field condition (Eq. <ref>) for each nuclear configuration. The energy differences between these minimum energy paths and the field-free one-dimensional pes for all three orientations are visualized in Fig. <ref>. The corresponding energy contributions E_lin, E_dis, the local E_dse and the inter E_dse are shown in Figs. S7, S8, and S9 in section S4 of the supporting information. The observed change on the energetics of the dissociation path due to the cavity interaction is small (see Fig. <ref>), and interestingly, the same for all three ensemble configurations. However, the effect is not negligible and the cavity interaction shifts the ensemble cpes to higher energies compared to the cavity-free pes. With increasing bond length, the ensemble dipole moment becomes larger and consequently the upshift mediated by the cavity interaction increases. When comparing the different sizes of the ensemble, there is a decreasing effect on the change in the energy of the ensemble with increasing N_mol. As a second effect, the energy change is less and less dependent on the bond length. It seems to converge to a non-zero finite value, which is constant with respect to the bond length. A closer look at the individual contributions E_lin, E_dis, the local E_dse and the inter E_dse all shown in Figs. S7, S8, and S9 in section S4 of the supporting information can explain these trends. The general shape of the cavity-induced energy change shown in Fig. <ref> is determined by the local E_dse. With an increasing number of molecules in the cavity, this contribution becomes dominated by the fixed ensemble of N_mol-1 molecules and therefore constant. The other three contributions, E_lin, E_dis, and the inter E_dse are quite large and depend on both the size of the ensemble and the bond length. However, when they are summed, they almost completely cancel each other out, leaving only an almost negligible energy contribution. Since the local E_dse is the same for all three orientations and dominates the energy change, all three ensemble configurations show the same behavior. 
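Returning briefly to the static all-parallel ensembles discussed above: the 1/N_mol decay of the local dse and the 1 - 1/N_mol growth of the inter-molecular part can be reproduced directly from the ensemble decomposition of E^(2J)_dse with the rescaled coupling. The sketch below uses identical, perfectly aligned hypothetical dipoles and the prefactor convention as written in that decomposition.

import numpy as np

def dse_2j_per_molecule(mu_list, lam):
    """Local (m = n) and inter-molecular (n != m) parts of E^(2J)_dse per molecule,
    following the ensemble decomposition given above (no extra prefactors added)."""
    proj = lam * np.asarray(mu_list)        # lambda_c . <mu_el^(m)> for each molecule
    local = proj**2
    inter = proj * (proj.sum() - proj)
    return local, inter

lam0, mu = 0.01, 0.7                        # illustrative coupling and dipole (a.u.)
for n in (1, 2, 4, 8):
    local, inter = dse_2j_per_molecule(np.full(n, mu), lam0 / np.sqrt(n))
    # in units of (lam0*mu)^2: local -> 1/N_mol, inter -> 1 - 1/N_mol
    print(n, local[0] / (lam0 * mu)**2, inter[0] / (lam0 * mu)**2)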
It should be noted that by imposing the zero transverse electric field condition along the complete dissociation path, we assume that the whole ensemble coupled to the cavity is in the electronic-photonic ground state. The behavior discussed above may be different if this assumption no longer holds, for example, if the system is coupled to a thermal bath. In the last part of this work, we focus on the local perspective of the dissociating HF molecule. Its field-free pes and the change in the local energy induced by the cavity interaction in the presence of different ensembles are shown in Fig. <ref>. Additional figures can be found in section S4 of the supporting information, showing the underlying energy components: E_lin in Fig. S10, E_dis in Fig. S11, the local E_dse in Fig. S12, and the inter E_dse in Fig. S13. The last two are calculated from the perspective of the dissociating molecule. The most striking difference from the perspective of the ensemble is that at the local level of the dissociating HF molecule, the three configurations all-parallel (Fig. <ref> b)), antiparallel (Fig. <ref> c)) and defective (Fig. <ref> d)) can be distinguished in cavity-induced energy changes. In all three situations, the cavity-induced changes depend on N_mol as well as on the length of the bond. In the all-parallel case, shown in Fig. <ref> b), the energy change increases with the bond length as well as with the number of molecules. Note that the coupling is rescaled by 1√(N_mol) to keep the collective Rabi-splitting constant. However, the effect on the individual molecules grows with the number of molecules in the ensemble, even though the single molecule coupling strength λ_0 becomes smaller. We thus conclude, that collective coupling induces locally strong effects, as has been reported for excited states in esc previously <cit.>. The local contribution E_lin, which converges to a finite value, is not large enough to compensate for E_dis, which grows linearly with the size of the ensemble (see Figs. S10 a) and S11 a)). The resulting upshift in energy is further amplified since inter E_dse is also positive due to the all-parallel configuration and grows with 1-1 N_mol (see Fig. S13 a)). Also in the antiparallel case (Fig. <ref> c)) the local energy change due to the cavity interaction increases with increasing bond length. However, the cavity-induced energy change is generally smaller than for the all-parallel configuration and becomes significantly smaller with increasing N_mol. The curves, shown in Fig. <ref> c), have a pairwise structure, where each ensemble with an even value of N_mol is very close in energy to the ensemble with N_mol+1 molecules. The case of odd and even number of molecules in the all-parallel configuration creates two different situations from a single-molecule perspective. For even N_mol the whole ensemble reduces to an effective antiparallel bimolecular case, and the situation of odd N_mol is equivalent to the single-molecule case where both are scaled down by 1N_mol. Therefore, E_lin and E_dis become smaller with increasing N_mol for odd and even N_mol (see Figs. S10 b) and S11 b)). Furthermore, for even N_mol ensembles, there is a small negative contribution from inter E_dse (see Fig. S13 b)). In contrast to the all-parallel and antiparallel configurations, the ensembles in the defective orientation show clearly different trends. 
Similarly to the all-parallel case, Δ E increases as N_mol increases for a fixed nuclear configuration, but for N_mol > 3, it simultaneously decreases along the dissociation path. This changing behavior is caused by the interplay of the local E_lin, the ensemble quantity E_dis, and the cavity-mediated dipole-dipole interaction. These three contributions are shown in Figs. S10 c), S11 c), and S12 c). For N_mol < 4 the ensemble, the dipole moment changes sign along the dissociation path, whereas for a larger ensemble it is negative for the whole pathway. The local dipole moment of the dissociating molecule is in the studied configuration positive, which leads to a positive contribution of the local E_lin for N_mol > 3 and a change in sign for E_lin in smaller ensembles. The inter E_dse (see Fig. S13 c)) acting on the dissociating molecule is always negative and is increasingly relevant for increasing bond length. Consequently, the cavity-induced energy shift from the local perspective decreases for the defective configuration along the dissociation pathway. In summary, for all three configurations studied, the interaction of the cavity with the molecular ensembles modifies the local energy landscape of the individual molecule. These changes are strongly dependent on the ensemble properties (size and orientation of its components) and are due to the interplay of the displacement field effects and the cavity-induced polarization. In conclusion, we have established an ab-initio Hartree-Fock ansatz in the framework of the cavity Born-Oppenheimer approximation, capable of describing the electronic ground state of molecular systems coupled to an optical cavity. We have applied the cbohf ansatz to study the collective effects in small ensembles of diatomic hydrogen fluoride (HF) molecules in an optical cavity. The detailed analysis of the cavity-induced energy changes for the whole ensemble and individual molecules shows that the self-consistent treatment and the full dipole self-energy operator are crucial to capture relevant aspects for the description of a strongly coupled molecular ensemble and its chemical properties. The dse terms are essential to describe cavity-induced polarization at the level of individual molecules, as well as for the whole ensemble. The observed interplay of displacement field effects and the cavity-induced polarization enables energy changes for the individual molecule because of collective coupling to an optical cavity. Consistent with our previous work <cit.> we could identify a macroscopically induced microscopic polarization mechanism based on intermolecular dipole-dipole interactions. Although we have only studied the system in the electronic-photonic ground state, we see indications that thermal fluctuations may play a decisive role in polaritonic chemistry, in line with our previous work <cit.>. Due to the nature of the intermolecular dipole-dipole interactions, a local change/fluctuation in the dipole moment and/or polarizability could affect the whole ensemble. A pre-polarization of an ensemble with a static electric field, for example, should lead to an observable effect in experiment. Another interesting topic for further study is the interplay of this self-consistent polarization mechanism with vibrational or electronic resonances. 
The derivation of the cbohf equations demonstrates which molecular properties are important for the dse term and the couplings it introduces: molecular dipole moments are important for inter-molecular interactions, while the combination of dipole moments, quadrupole moments, and transition dipole moments are important on an intra-molecular level. The cbohf ansatz and the underlying cboa formulation offers a suitable framework to derive post-Hartree-Fock methods, such as configuration interaction or coupled cluster, or potential self-consistent embedding schemes <cit.> for molecules under vsc or even esc <cit.>. It may also provide potential energy surfaces that can be used for ab-initio semiclassical dynamics or for nuclear-photonic quantum dynamics simulations of molecular ensembles. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement no. 852286), the RouTe Project (13N14839), financed by the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung (BMBF)) and supported by the European Research Council (ERC-2015-AdG694097), the Cluster of Excellence “CUI: Advanced Imaging of Matter” of the Deutsche Forschungsgemeinschaft (DFG), EXC 2056, project ID 390715994 and the Grupos Consolidados (IT1249-19). The Flatiron Institute is a division of the Simons Foundation. See the supplementary material for the details of the derivation of the cbohf energy contribution, the discussion of the single-molecule case and additional figures for the ensembles of HF molecules in an optical cavity. All data underlying this study are available from the corresponding author upon reasonable request. @ifundefinedendmcitethebibliography 70 f subitem(mcitesubitemcount) [Ebbesen(2016)]Ebbesen2016-jx Ebbesen, T. W. Hybrid Light-Matter States in a Molecular and Material Science Perspective. Acc. Chem. Res. 2016, 49, 2403–2412 [Ruggenthaler et al.(2018)Ruggenthaler, Tancogne-Dejean, Flick, Appel, and Rubio]Ruggenthaler2018-ew Ruggenthaler, M.; Tancogne-Dejean, N.; Flick, J.; Appel, H.; Rubio, A. From a quantum-electrodynamical light–matter description to novel spectroscopies. Nature Reviews Chemistry 2018, 2, 1–16 [Gargiulo et al.(2019)Gargiulo, Berté, Li, Maier, and Cortés]Gargiulo2019-cy Gargiulo, J.; Berté, R.; Li, Y.; Maier, S. A.; Cortés, E. From Optical to Chemical Hot Spots in Plasmonics. Acc. Chem. Res. 2019, 52, 2525–2535 [Herrera and Owrutsky(2020)Herrera, and Owrutsky]Herrera2020-bg Herrera, F.; Owrutsky, J. Molecular polaritons for controlling chemistry with quantum optics. J. Chem. Phys. 2020, 152, 100902 [Nagarajan et al.(2021)Nagarajan, Thomas, and Ebbesen]Nagarajan2021-sl Nagarajan, K.; Thomas, A.; Ebbesen, T. W. Chemistry under Vibrational Strong Coupling. J. Am. Chem. Soc. 2021, 143, 16877–16889 [Sidler et al.(2022)Sidler, Ruggenthaler, Schäfer, Ronca, and Rubio]Sidler2022-cg Sidler, D.; Ruggenthaler, M.; Schäfer, C.; Ronca, E.; Rubio, A. A perspective on ab initio modeling of polaritonic chemistry: The role of non-equilibrium effects and quantum collectivity. J. Chem. Phys. 2022, 156, 230901 [Li et al.(2022)Li, Cui, Subotnik, and Nitzan]Li2022-gi Li, T. E.; Cui, B.; Subotnik, J. E.; Nitzan, A. Molecular Polaritonics: Chemical Dynamics Under Strong Light-Matter Coupling. Annu. Rev. Phys. Chem. 2022, 73, 43–71 [Fregoni et al.(2022)Fregoni, Garcia-Vidal, and Feist]Fregoni2022-op Fregoni, J.; Garcia-Vidal, F. J.; Feist, J. 
Theoretical Challenges in Polaritonic Chemistry. ACS Photonics 2022, 9, 1096–1107 [Dunkelberger et al.(2022)Dunkelberger, Simpkins, Vurgaftman, and Owrutsky]Dunkelberger2022-oh Dunkelberger, A. D.; Simpkins, B. S.; Vurgaftman, I.; Owrutsky, J. C. Vibration-Cavity Polariton Chemistry and Dynamics. Annu. Rev. Phys. Chem. 2022, 73, 429–451 [Tibben et al.(2023)Tibben, Bonin, Cho, Lakhwani, Hutchison, and Gómez]Tibben2023-if Tibben, D. J.; Bonin, G. O.; Cho, I.; Lakhwani, G.; Hutchison, J.; Gómez, D. E. Molecular Energy Transfer under the Strong Light-Matter Interaction Regime. Chem. Rev. 2023, [Wang et al.(2014)Wang, Chervy, George, Hutchison, Genet, and Ebbesen]Wang2014-xn Wang, S.; Chervy, T.; George, J.; Hutchison, J. A.; Genet, C.; Ebbesen, T. W. Quantum Yield of Polariton Emission from Hybrid Light-Matter States. J. Phys. Chem. Lett. 2014, 5, 1433–1439 [Kowalewski et al.(2016)Kowalewski, Bennett, and Mukamel]Kowalewski2016-zo Kowalewski, M.; Bennett, K.; Mukamel, S. Cavity Femtochemistry: Manipulating Nonadiabatic Dynamics at Avoided Crossings. J. Phys. Chem. Lett. 2016, 7, 2050–2054 [Triana et al.(2018)Triana, Peláez, and Sanz-Vicario]Triana2018-fy Triana, J. F.; Peláez, D.; Sanz-Vicario, J. L. Entangled Photonic-Nuclear Molecular Dynamics of LiF in Quantum Optical Cavities. J. Phys. Chem. A 2018, 122, 2266–2278 [Eizner et al.(2019)Eizner, Martínez-Martínez, Yuen-Zhou, and Kéna-Cohen]Eizner2019-ke Eizner, E.; Martínez-Martínez, L. A.; Yuen-Zhou, J.; Kéna-Cohen, S. Inverting singlet and triplet excited states using strong light-matter coupling. Sci Adv 2019, 5, eaax4482 [Ulusoy et al.(2019)Ulusoy, Gomez, and Vendrell]Ulusoy2019-uq Ulusoy, I. S.; Gomez, J. A.; Vendrell, O. Modifying the Nonradiative Decay Dynamics through Conical Intersections via Collective Coupling to a Cavity Mode. J. Phys. Chem. A 2019, 123, 8832–8844 [Groenhof et al.(2019)Groenhof, Climent, Feist, Morozov, and Toppari]Groenhof2019-nz Groenhof, G.; Climent, C.; Feist, J.; Morozov, D.; Toppari, J. J. Tracking Polariton Relaxation with Multiscale Molecular Dynamics Simulations. J. Phys. Chem. Lett. 2019, 10, 5476–5483 [Felicetti et al.(2020)Felicetti, Fregoni, Schnappinger, Reiter, de Vivie-Riedle, and Feist]Felicetti2020-qq Felicetti, S.; Fregoni, J.; Schnappinger, T.; Reiter, S.; de Vivie-Riedle, R.; Feist, J. Photoprotecting Uracil by Coupling with Lossy Nanocavities. J. Phys. Chem. Lett. 2020, 11, 8810–8818 [Ulusoy et al.(2020)Ulusoy, Gomez, and Vendrell]Ulusoy2020-ab Ulusoy, I. S.; Gomez, J. A.; Vendrell, O. Many-photon excitation of organic molecules in a cavity-Superradiance as a measure of coherence. J. Chem. Phys. 2020, 153, 244107 [Tichauer et al.(2021)Tichauer, Feist, and Groenhof]Tichauer2021-mk Tichauer, R. H.; Feist, J.; Groenhof, G. Multi-scale dynamics simulations of molecular polaritons: The effect of multiple cavity modes on polariton relaxation. J. Chem. Phys. 2021, 154, 104112 [Martinez et al.(2021)Martinez, Rosenzweig, Hoffmann, Lacombe, and Maitra]Martinez2021-aj Martinez, P.; Rosenzweig, B.; Hoffmann, N. M.; Lacombe, L.; Maitra, N. T. Case studies of the time-dependent potential energy surface for dynamics in cavities. J. Chem. Phys. 2021, 154, 014102 [Gudem and Kowalewski(2022)Gudem, and Kowalewski]Gudem2022-ej Gudem, M.; Kowalewski, M. Triplet-triplet Annihilation Dynamics of Naphthalene. Chemistry 2022, 28, e202200781 [Couto and Kowalewski(2022)Couto, and Kowalewski]Couto2022-uv Couto, R. C.; Kowalewski, M. 
Suppressing non-radiative decay of photochromic organic molecular systems in the strong coupling regime. Phys. Chem. Chem. Phys. 2022, [Li et al.(2022)Li, Nitzan, Hammes-Schiffer, and Subotnik]Li2022-xp Li, T. E.; Nitzan, A.; Hammes-Schiffer, S.; Subotnik, J. E. Quantum Simulations of Vibrational Strong Coupling via Path Integrals. J. Phys. Chem. Lett. 2022, 3890–3895 [Li et al.(2022)Li, Tao, and Hammes-Schiffer]Li2022-dc Li, T. E.; Tao, Z.; Hammes-Schiffer, S. Semiclassical Real-Time Nuclear-Electronic Orbital Dynamics for Molecular Polaritons: Unified Theory of Electronic and Vibrational Strong Couplings. J. Chem. Theory Comput. 2022, 2774–2784 [Mukherjee et al.(2023)Mukherjee, Feist, and Börjesson]Mukherjee2023-mc Mukherjee, A.; Feist, J.; Börjesson, K. Quantitative Investigation of the Rate of Intersystem Crossing in the Strong Exciton-Photon Coupling Regime. J. Am. Chem. Soc. 2023, 145, 5155–5162 [Schnappinger and Kowalewski(2023)Schnappinger, and Kowalewski]Schnappinger23jctc Schnappinger, T.; Kowalewski, M. Nonadiabatic Wave Packet Dynamics with Ab Initio Cavity-Born-Oppenheimer Potential Energy Surfaces. J. Chem. Theory Comput. 2023, 19, 460–471 [Weight et al.(2023)Weight, Krauss, and Huo]Weight2023-ma Weight, B. M.; Krauss, T. D.; Huo, P. Investigating Molecular Exciton Polaritons Using Ab Initio Cavity Quantum Electrodynamics. J. Phys. Chem. Lett. 2023, 14, 5901–5913 [Bauer and Dreuw(2023)Bauer, and Dreuw]Bauer2023-gu Bauer, M.; Dreuw, A. Perturbation theoretical approaches to strong light-matter coupling in ground and excited electronic states for the description of molecular polaritons. J. Chem. Phys. 2023, 158, 124128 [George et al.(2016)George, Chervy, Shalabney, Devaux, Hiura, Genet, and Ebbesen]George2016-sy George, J.; Chervy, T.; Shalabney, A.; Devaux, E.; Hiura, H.; Genet, C.; Ebbesen, T. W. Multiple Rabi Splittings under Ultrastrong Vibrational Coupling. Phys. Rev. Lett. 2016, 117, 153601 [Thomas et al.(2016)Thomas, George, Shalabney, Dryzhakov, Varma, Moran, Chervy, Zhong, Devaux, Genet, Hutchison, and Ebbesen]Thomas2016-fy Thomas, A.; George, J.; Shalabney, A.; Dryzhakov, M.; Varma, S. J.; Moran, J.; Chervy, T.; Zhong, X.; Devaux, E.; Genet, C. et al. Ground-State Chemical Reactivity under Vibrational Coupling to the Vacuum Electromagnetic Field. Angew. Chem. Int. Ed Engl. 2016, 55, 11462–11466 [Thomas et al.(2019)Thomas, Lethuillier-Karl, Nagarajan, Vergauwe, George, Chervy, Shalabney, Devaux, Genet, Moran, and Ebbesen]Thomas2019-ve Thomas, A.; Lethuillier-Karl, L.; Nagarajan, K.; Vergauwe, R. M. A.; George, J.; Chervy, T.; Shalabney, A.; Devaux, E.; Genet, C.; Moran, J. et al. Tilting a ground-state reactivity landscape by vibrational strong coupling. Science 2019, 363, 615–619 [Hirai et al.(2020)Hirai, Hutchison, and Uji-I]Hirai2020-pa Hirai, K.; Hutchison, J. A.; Uji-I, H. Recent Progress in Vibropolaritonic Chemistry. Chempluschem 2020, 85, 1981–1988 [Hirai et al.(2020)Hirai, Takeda, Hutchison, and Uji-I]Hirai2020-uv Hirai, K.; Takeda, R.; Hutchison, J. A.; Uji-I, H. Modulation of Prins Cyclization by Vibrational Strong Coupling. Angew. Chem. Int. Ed Engl. 2020, 59, 5332–5335 [Ahn et al.(2023)Ahn, Triana, Recabal, Herrera, and Simpkins]Ahn2023-qk Ahn, W.; Triana, J. F.; Recabal, F.; Herrera, F.; Simpkins, B. S. Modification of ground-state chemical reactivity via light-matter coherence in infrared cavities. 
Science 2023, 380, 1165–1168 [Zhong et al.(2023)Zhong, Hou, Zhao, Bai, Wang, Gao, Guo, and Zhang]Zhong2023-lq Zhong, C.; Hou, S.; Zhao, X.; Bai, J.; Wang, Z.; Gao, F.; Guo, J.; Zhang, F. Driving DNA Origami Coassembling by Vibrational Strong Coupling in the Dark. ACS Photonics 2023, [Gu et al.(2023)Gu, Si, Li, Gao, Wang, and Zhang]Gu2023-uq Gu, K.; Si, Q.; Li, N.; Gao, F.; Wang, L.; Zhang, F. Regulation of Recombinase Polymerase Amplification by Vibrational Strong Coupling of Water. ACS Photonics 2023, [Schütz et al.(2020)Schütz, Schachenmayer, Hagenmüller, Brennen, Volz, Sandoghdar, Ebbesen, Genes, and Pupillo]Schutz2020-en Schütz, S.; Schachenmayer, J.; Hagenmüller, D.; Brennen, G. K.; Volz, T.; Sandoghdar, V.; Ebbesen, T. W.; Genes, C.; Pupillo, G. Ensemble-Induced Strong Light-Matter Coupling of a Single Quantum Emitter. Phys. Rev. Lett. 2020, 124, 113602 [Simpkins et al.(2023)Simpkins, Dunkelberger, and Vurgaftman]Simpkins2023-ze Simpkins, B. S.; Dunkelberger, A. D.; Vurgaftman, I. Control, Modulation, and Analytical Descriptions of Vibrational Strong Coupling. Chem. Rev. 2023, [Campos-Gonzalez-Angulo et al.(2022)Campos-Gonzalez-Angulo, Poh, Du, and Yuen-Zhou]Campos-Gonzalez-Angulo2022-gb Campos-Gonzalez-Angulo, J. A.; Poh, Y. R.; Du, M.; Yuen-Zhou, J. Swinging between shine and shadow: Theoretical advances on thermally-activated vibropolaritonic chemistry (a perspective). 2022, [Davidsson and Kowalewski(2023)Davidsson, and Kowalewski]Davidsson2023-yp Davidsson, E.; Kowalewski, M. The role of dephasing for dark state coupling in a molecular Tavis-Cummings model. 2023, [Davidsson and Kowalewski(2020)Davidsson, and Kowalewski]Davidsson2020-bs Davidsson, E.; Kowalewski, M. Atom Assisted Photochemistry in Optical Cavities. J. Phys. Chem. A 2020, 124, 4672–4677 [Sidler et al.(2023)Sidler, Schnappinger, Obzhirov, Ruggenthaler, Kowalewski, and Rubio]Sidler2023-vm Sidler, D.; Schnappinger, T.; Obzhirov, A.; Ruggenthaler, M.; Kowalewski, M.; Rubio, A. Unraveling a cavity induced molecular polarization mechanism from collective vibrational strong coupling. 2023, [Flick et al.(2017)Flick, Ruggenthaler, Appel, and Rubio]flick2017atoms Flick, J.; Ruggenthaler, M.; Appel, H.; Rubio, A. Atoms and molecules in cavities, from weak to strong coupling in quantum-electrodynamics (QED) chemistry. Proc. Natl. Acad. Sci. U.S.A. 2017, 114, 3026–3034 [Flick et al.(2017)Flick, Appel, Ruggenthaler, and Rubio]flick2017cavity Flick, J.; Appel, H.; Ruggenthaler, M.; Rubio, A. Cavity Born–Oppenheimer approximation for correlated electron–nuclear-photon systems. J. Chem. Theory Comput. 2017, 13, 1616–1625 [Flick and Narang(2018)Flick, and Narang]Flick2018-ns Flick, J.; Narang, P. Cavity-Correlated Electron-Nuclear Dynamics from First Principles. Phys. Rev. Lett. 2018, 121, 113002 [Spohn(2004)]spohn2004dynamics Spohn, H. Dynamics of charged particles and their radiation field; Cambridge university press, 2004 [Jestädt et al.(2019)Jestädt, Ruggenthaler, Oliveira, Rubio, and Appel]jestadt2019light Jestädt, R.; Ruggenthaler, M.; Oliveira, M. J.; Rubio, A.; Appel, H. Light-matter interactions within the Ehrenfest–Maxwell–Pauli–Kohn–Sham framework: fundamentals, implementation, and nano-optical applications. Adv. Phys. 2019, 68, 225–333 [Lindoy et al.(2023)Lindoy, Mandal, and Reichman]lindoy2023quantum Lindoy, L. P.; Mandal, A.; Reichman, D. R. Quantum dynamical effects of vibrational strong coupling in chemical reactivity. 
Nature Communications 2023, 14, 2733 [Li et al.(2021)Li, Mandal, and Huo]li2021cavity Li, X.; Mandal, A.; Huo, P. Cavity frequency-dependent theory for vibrational polariton chemistry. Nat. Commun. 2021, 12, 1315 [Schäfer et al.(2022)Schäfer, Flick, Ronca, Narang, and Rubio]schafer2021shining Schäfer, C.; Flick, J.; Ronca, E.; Narang, P.; Rubio, A. Shining light on the microscopic resonant mechanism responsible for cavity-mediated chemical reactivity. Nature Communications 2022, 13, 7817 [Sidler et al.(2021)Sidler, Schaefer, Ruggenthaler, and Rubio]sidler2020polaritonic Sidler, D.; Schaefer, C.; Ruggenthaler, M.; Rubio, A. Polaritonic Chemistry: Collective Strong Coupling Implies Strong Local Modification of Chemical Properties. J. Phys. Chem. Lett. 2021, 12, 508–516 [Schäfer et al.(2020)Schäfer, Ruggenthaler, Rokaj, and Rubio]Schafer2020-cb Schäfer, C.; Ruggenthaler, M.; Rokaj, V.; Rubio, A. Relevance of the Quadratic Diamagnetic and Self-Polarization Terms in Cavity Quantum Electrodynamics. ACS Photonics 2020, 7, 975–990 [Rokaj et al.(2018)Rokaj, Welakuh, Ruggenthaler, and Rubio]Rokaj2018-ww Rokaj, V.; Welakuh, D. M.; Ruggenthaler, M.; Rubio, A. Light–matter interaction in the long-wavelength limit: no ground-state without dipole self-energy. J. Phys. B At. Mol. Opt. Phys. 2018, 51, 034005 [Szabo and Ostlund(1996)Szabo, and Ostlund]aszabo82-qc Szabo, A.; Ostlund, N. S. Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory, 1st ed.; Dover Publications, Inc.: Mineola, 1996 [Philbin et al.(2022)Philbin, Haugland, Ghosh, Ronca, Chen, Narang, and Koch]Philbin2022-pc Philbin, J. P.; Haugland, T. S.; Ghosh, T. K.; Ronca, E.; Chen, M.; Narang, P.; Koch, H. Molecular van der Waals fluids in cavity quantum electrodynamics. 2022, [Riso et al.(2022)Riso, Haugland, Ronca, and Koch]Riso2022-ll Riso, R. R.; Haugland, T. S.; Ronca, E.; Koch, H. Molecular orbital theory in cavity QED environments. Nat. Commun. 2022, 13, 1368 [Vu et al.(2022)Vu, McLeod, Hanson, and DePrince]Vu2022-mx Vu, N.; McLeod, G. M.; Hanson, K.; DePrince, A. E., Iii Enhanced Diastereocontrol via Strong Light–Matter Interactions in an Optical Cavity. J. Phys. Chem. A 2022, [Liebenthal et al.(2022)Liebenthal, Vu, and DePrince]Liebenthal2022-dp Liebenthal, M. D.; Vu, N.; DePrince, A. E., 3rd Equation-of-motion cavity quantum electrodynamics coupled-cluster theory for electron attachment. J. Chem. Phys. 2022, 156, 054105 [Fischer and Saalfrank(2021)Fischer, and Saalfrank]Fischer2021-eq Fischer, E. W.; Saalfrank, P. Ground state properties and infrared spectra of anharmonic vibrational polaritons of small molecules in cavities. J. Chem. Phys. 2021, 154, 104311 [Fischer and Saalfrank(2023)Fischer, and Saalfrank]Fischer2023-kw Fischer, E. W.; Saalfrank, P. Cavity–Catalyzed Hydrogen Transfer Dynamics in an Entangled Molecular Ensemble under Vibrational Strong Coupling†. Phys. Chem. Chem. Phys. 2023, [Fischer and Saalfrank(2023)Fischer, and Saalfrank]Fischer2023-ob Fischer, E. W.; Saalfrank, P. Cavity-Catalyzed Hydrogen Transfer Dynamics in an Entangled Molecular Ensemble under Vibrational Strong Coupling. 2023, [Gudem and Kowalewski(2021)Gudem, and Kowalewski]Gudem2021-um Gudem, M.; Kowalewski, M. Controlling the Photostability of Pyrrole with Optical Nanocavities. J. Phys. Chem. A 2021, 125, 1142–1151 [Flick et al.(2017)Flick, Appel, Ruggenthaler, and Rubio]Flick2017-jh Flick, J.; Appel, H.; Ruggenthaler, M.; Rubio, A. Cavity Born-Oppenheimer Approximation for Correlated Electron-Nuclear-Photon Systems. J. Chem. 
Theory Comput. 2017, 13, 1616–1625 [Smith et al.(2018)Smith, Burns, Sirianni, Nascimento, Kumar, James, Schriber, Zhang, Zhang, Abbott, Berquist, Lechner, Cunha, Heide, Waldrop, Takeshita, Alenaizan, Neuhauser, King, Simmonett, Turney, Schaefer, Evangelista, DePrince, Crawford, Patkowski, and Sherrill]Smith2018-tu Smith, D. G. A.; Burns, L. A.; Sirianni, D. A.; Nascimento, D. R.; Kumar, A.; James, A. M.; Schriber, J. B.; Zhang, T.; Zhang, B.; Abbott, A. S. et al. Psi4NumPy: An Interactive Quantum Chemistry Programming Environment for Reference Implementations and Rapid Development. J. Chem. Theory Comput. 2018, 14, 3504–3511 [Smith et al.(2020)Smith, Burns, Simmonett, Parrish, Schieber, Galvelis, Kraus, Kruse, Di Remigio, Alenaizan, James, Lehtola, Misiewicz, Scheurer, Shaw, Schriber, Xie, Glick, Sirianni, O'Brien, Waldrop, Kumar, Hohenstein, Pritchard, Brooks, Schaefer, Sokolov, Patkowski, DePrince, Bozkaya, King, Evangelista, Turney, Crawford, and Sherrill]Smith2020-kq Smith, D. G. A.; Burns, L. A.; Simmonett, A. C.; Parrish, R. M.; Schieber, M. C.; Galvelis, R.; Kraus, P.; Kruse, H.; Di Remigio, R.; Alenaizan, A. et al. Psi4 1.4: Open-source software for high-throughput quantum chemistry. J. Chem. Phys. 2020, 152, 184108 [Kendall et al.(1992)Kendall, Dunning, and Harrison]Kendall1992-wu Kendall, R. A.; Dunning, T. H.; Harrison, R. J. Electron affinities of the first‐row atoms revisited. Systematic basis sets and wave functions. J. Chem. Phys. 1992, 96, 6796–6806 [Kowalewski and Seeber(2022)Kowalewski, and Seeber]nix Kowalewski, M.; Seeber, P. Sustainable packaging of quantum chemistry software with the Nix package manager. Int. J. Quant. Chem. 2022, 122, e26872 [Gussoni et al.(1998)Gussoni, Rui, and Zerbi]GUSSONI1998163 Gussoni, M.; Rui, M.; Zerbi, G. Electronic and relaxation contribution to linear molecular polarizability. An analysis of the experimental values. Journal of Molecular Structure 1998, 447, 163–215 [Schäfer(2022)]schaefer2022polaritonic Schäfer, C. Polaritonic chemistry from first principles via embedding radiation reaction. J. Phys. Chem. Lett. 2022, 13, 6905–6911
http://arxiv.org/abs/2307.00516v1
20230702085037
A re-examination to the SCoTLASS problems for SPCA and two projection-based methods for them
[ "Qiye Zhang", "Kuoyue Li" ]
math.ST
[ "math.ST", "stat.TH" ]
This work was supported in part by the National Natural Science Foundation of China under grant numbers 61172060 and 62133001.

Qi-Ye Zhang (corresponding author, zhangqiye@buaa.edu.cn) and Kuo-Yue Li (sy2109121@buaa.edu.cn), School of Mathematical Science, Beihang University, Beijing 102206, P. R. China

SCoTLASS is the first sparse principal component analysis (SPCA) model which imposes extra ℓ_1 norm constraints on the measured variables to obtain sparse loadings. Due to the difficulty of finding projections onto the intersection of an ℓ_1 ball/sphere and an ℓ_2 ball/sphere, early approaches to solving the SCoTLASS problems focused on penalty function methods or conditional gradient methods. In this paper, we re-examine the SCoTLASS problems, denoted by SPCA-P1, SPCA-P2 or SPCA-P3 when using the intersection of an ℓ_1 ball and an ℓ_2 ball, an ℓ_1 sphere and an ℓ_2 sphere, or an ℓ_1 ball and an ℓ_2 sphere as the constraint set, respectively. We prove the equivalence of the solutions to SPCA-P1 and SPCA-P3, and show that the solutions to SPCA-P2 and SPCA-P3 are the same in most cases. Then, by employing the projection method onto the intersection of an ℓ_1 ball/sphere and an ℓ_2 ball/sphere, we design a gradient projection method (GPSPCA for short) and an approximate Newton algorithm (ANSPCA for short) for the SPCA-P1, SPCA-P2 and SPCA-P3 problems, and prove the global convergence of the proposed GPSPCA and ANSPCA algorithms. Finally, we conduct several numerical experiments in the MATLAB environment to evaluate the performance of our proposed GPSPCA and ANSPCA algorithms. Simulation results confirm the assertions that the solutions to SPCA-P1 and SPCA-P3 are the same, and that the solutions to SPCA-P2 and SPCA-P3 are the same in most cases, and show that ANSPCA is faster than GPSPCA for large-scale data. Furthermore, GPSPCA and ANSPCA perform well as a whole compared with typical SPCA methods: the ℓ_0-constrained GPBB algorithm, the ℓ_1-constrained BCD-SPCA_ℓ_1 algorithm, and the ℓ_1-penalized ConGradU and Gpower_ℓ_1 algorithms, and can be used for large-scale computation.

Keywords: Sparse principal component analysis; SCoTLASS problem; gradient projection method; approximate Newton algorithm; Barzilai-Borwein stepsize

§ INTRODUCTION

A fundamental task in statistical analysis and engineering is to find simpler, low-dimensional representations for data. Principal Component Analysis (PCA) has become an extremely useful tool for this purpose since it was proposed in <cit.>. PCA generates a lower-dimensional coordinate system in which the data exhibit the most variability, which can be modelled as the following constrained matrix approximation optimization problem. For a given data matrix A∈ℝ^m× n, the basic version of PCA aims at computing the singular vectors of the covariance matrix Σ=A^TA associated with the largest singular values. This purpose can be formulated into a rank-one matrix approximation problem of the following form when only one principal component (PC) is considered: max_ x x^TΣ x, s.t.   x_2 = 1, where x∈ℝ^n, ·_2 is the ℓ_2 norm, and the covariance matrix Σ is symmetric and positive semidefinite. This problem is nonconvex since the feasible set is the ℓ_2 unit sphere. However, the major shortcoming of the basic PCA (<ref>) is the lack of interpretability of the new coordinates, and various versions have been proposed to ensure that the new coordinates are interpretable.
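For reference, despite its nonconvexity the basic PCA problem above is solved exactly by the top eigenvector of Σ = A^T A. A minimal NumPy illustration (the data are synthetic, and centering of A is omitted to match the formulation above):

import numpy as np

def leading_pc(a):
    """Maximizer of x^T Sigma x over the unit l2 sphere: top eigenvector of Sigma = A^T A."""
    sigma = a.T @ a
    eigvals, eigvecs = np.linalg.eigh(sigma)      # eigenvalues in ascending order
    return eigvecs[:, -1], eigvals[-1]

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 8))                      # synthetic data, purely illustrative
x, var = leading_pc(A)
print(np.linalg.norm(x), x @ (A.T @ A) @ x, var)  # unit norm; objective equals top eigenvalue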
A common approach is to require that each of the generated coordinate be a weighted combination of only a small subset of the original variables. This technique is referred as Sparse Principal Component Analysis (SPCA) <cit.>. There the authors first modeled SPCA as the following LASSO-based PCA, called SCoTLASS (here we call it SPCA-P3 model borrowing the statement about projection in <cit.>): SPCA-P3max_ x x^T Σ x, s.t.   x_2 = 1, x_1≤ t, where t is a tuning parameter with 1<t≤√(n), and ·_1 is the ℓ_1 norm. In <cit.>, the authors pointed out for small values of t the above SCoTLASS problem (<ref>) can be approximately considered on the feasible set x_2 = 1, | x_1= t, and the corresponding SPCA model (we call it SPCA-P2) is as follows: SPCA-P2max_ x x^T Σ x, s.t.   x_2 = 1, x_1 = t. They further proposed a projected gradient approach to the SCoTLASS problem (<ref>) by reformulating it as a dynamic system on the manifold defined by the constraints. SPCA-P3 and SPCA-P2 problems are not convex due to the sphere constraint x_2 = 1, which makes the optimization problem difficult to solve. The convex relaxation approach was applied to SPCA in <cit.> to keep the feasible set convex, and the model can be formulated into the following ℓ_1 constrained PCA (we call it SPCA-P1): SPCA-P1max_ x x^T Σ x, s.t.   x_2 ≤ 1, x_1≤ t, where the ℓ_2 unit ball constraint is simply a relaxation of the ℓ_2 unit sphere constraint in (<ref>). They unified the SCoTLASS method of the maximum variance criterion <cit.> and the iterative elastic net regression method of Zou etc <cit.> with the regularized low-rank matrix approximation approach <cit.> for SPCA by using the technique of penalized matrix decomposition (PMD). From the above we see that the commonly used SCoTLASS-based SPCA models (<ref>), (<ref>) and (<ref>) are essentially to solve the projection subproblems onto the intersection of an ℓ_2 unit ball/sphere and an ℓ_1 ball/sphere mainly include the following three types: (P1) Euclidean projection onto the intersection of an ℓ_1 ball and an ℓ_2 ball; (P2) Euclidean projection onto the intersection of an ℓ_1 sphere and an ℓ_2 sphere; (P3) Euclidean projection onto the intersection of an ℓ_1 ball and an ℓ_2 sphere. Another model for SPCA is to directly constrain the cardinality, i.e., the number of nonzero elements of the maximizer in (<ref>). This can be formulated into the following ℓ_0 constrained PCA <cit.>: max_ x x^T Σ x, s.t.   x_2 = 1, x_0≤ k, with 1<k≤ n and k∈ℕ. Here x_0 is the “ℓ_0 norm” of x, stands for the number of nonzero components of x. Choosing small k will drive many of the components in x to 0, and problem (<ref>) reduces to the basic PCA (<ref>) when k = n. However, the early solutions to SPCA preferred to using exterior penalty function and turn to the penalized/relaxed problem with the ℓ_0 ball/sphere or ℓ_1 ball/sphere constraint replaced by a penalty on the violation of these constraints in the objective, resulting in the ℓ_0 penalized SPCA, the ℓ_1 penalized SPCA and the ℓ_2 penalized SPCA <cit.>, respectively: max_ x x^T Σ x-s x_0, s.t.   x_2 ≤ 1, max_ x x^T Σ x-s x_1, s.t.   x_2 ≤ 1, max_ x x^T Σ x-s x_2, s.t.   x_1 ≤ t. Notice that these penalized/relaxed problems only need projections onto the ℓ_2 or ℓ_1 ball, which are easier to be characterized. Many other methods also focused on penalized PCA problems to circumvent the projections onto a complicated feasible set. 
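A rough way to read the tuning parameter t in the SCoTLASS constraints: for any unit-ℓ_2 vector with k nonzero entries, the Cauchy-Schwarz inequality bounds its ℓ_1 norm by √k, so choosing t ≈ √k encourages loadings with roughly k active variables (t = 1 forces a single variable, while t ≥ √n leaves the ℓ_1 constraint inactive, as noted below). A quick numerical check of this bound, included only as an illustration:

import numpy as np

rng = np.random.default_rng(0)
n = 10
for k in (1, 2, 5, 10):
    x = np.zeros(n)
    x[:k] = rng.normal(size=k)
    x /= np.linalg.norm(x)                      # unit l2 norm, k nonzero loadings
    print(k, np.linalg.norm(x, 1), np.sqrt(k))  # l1 norm never exceeds sqrt(k)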
In <cit.>, a convex relaxation for (<ref>) is derived by using lifting procedure technique in semidefinite relaxation by relaxing both the rank and cardinality constraints, which may not be suitable for large-scale cases. Another convex relaxation is derived in <cit.> for the ℓ_1 constrained PCA problem via a simple representation of the ℓ_1 unit ball and the standard Lagrangian duality. An expectation-maximization method is designed in <cit.> and a conditional gradient method is proposed in <cit.> for solving the ℓ_2 penalized version of (<ref>). Recently, many researchers have utilized the block approach to SPCA, typical methods including ALSPCA <cit.> and BCD-SPCA <cit.>, GeoSPCA <cit.>, etc. These methods aimed to calculate multiple sparse PCs at once by utilizing certain block optimization techniques. Maybe, it is because that gradient projection (GP) method need repeatedly carried out projections onto the feasible sets, this could be a heavy computational burden, especially when the projection operations cannot be computed efficiently, a common issue for complicated feasible sets. In fact, it is criticised that this issue has greatly limited applicability of GP methods for many problems. To the best of our knowledge, there is no existing GP methods for solving SCoTLASS problems (<ref>), (<ref>) and (<ref>). This is mostly due to the difficulty of projecting onto the intersection of an ℓ_1 ball/sphere and an ℓ_2 ball/sphere. The only algorithms for solving the constrained SPCA problems were the PMD method for (<ref>) in <cit.> and projection algorithms for (<ref>) in <cit.>. In spite of this, GP methods have become a popular approach for solving a wide range of problems <cit.> because of their many interesting features. GP methods have flexibility of handling various complicated models, e.g., different types of feasible sets including convex and nonconvex sets, as long as the projection operation can be carried out efficiently. GP methods only use the first-order derivatives, and are considered to be memory efficient, since they can be easily implemented in a distributed/parallel way. They are also robust to use cheap surrogate of the gradient, such as stochastic gradient, to reduce the gradient evaluation cost <cit.>. Therefore, GP methods are also viewed as a useful tool in big data applications <cit.>. More recently, Liu et al have proposed a unified approach in <cit.> for computing the projection onto the intersection of an ℓ_1 ball/sphere and an ℓ_2 ball/sphere. They converted the above projection issues to find a root of a auxiliary function, and then provided an efficient method, called Quadratic Approximation Secant Bisection (QASB) method, to find the root. This makes it possible to design GP method for directly solving SCoTLASS problems (<ref>), (<ref>) and (<ref>), which may be greatly accelerated and become an useful tool to solve other ℓ_1 ball/sphere and ℓ_2 ball/sphere constrained problems. Thus the first goal of this paper is to investigate the difference among the solutions to the SCoTLASS problems (<ref>), (<ref>) and (<ref>), and design GP algorithms for them by employing the projection method onto the intersection of an ℓ_1 ball/sphere and an ℓ_2 ball/sphere proposed in <cit.>. Moreover, we also propose more efficient approximate Newton algorithms for SCoTLASS problems. Finally, we provide convergence analysis for the proposed algorithms. The rest of paper is organized as follows. In Sect. 
2, we review the main results about the projection onto the intersection of an ℓ_1 ball/sphere and an ℓ_2 ball/sphere proposed in <cit.>, show the solutions of projection subproblem (<ref>) and (<ref>) are the same in most case, and provide the algorithms computing the above projections. Moreover, we point out a bug in the root-finding procedure of <cit.> and modify it. In Sect. 3, we propose a GP method for solving the SCoTLASS problems (<ref>), (<ref>) and (<ref>), GPSPCA for short, and prove the global convergence of the proposed GPSPCA algorithms. In Sect. 4, we further suggest an approximate Newton method for solving the three SCoTLASS problems, ANSPCA for short, and prove the global convergence of the proposed ANSPCA algorithms under some conditions. The GPSPCA and ANSPCA algorithms proposed in this paper are then compared with existing typical methods on several famous dataset: synthetic data, Pitprops data, 20newsgroups data and ColonCancer data in Sect. 5. Finally, we present some conclusions in Sect. 6. § PRELIMINARIES §.§ Notation Let ℝ^n be the space of real n-vectors and ℝ^n_+ be the nonnegative orthant of ℝ^n, i.e., ℝ^n_+ := { x∈ℝ^n: x_i≥ 0, i=1,…,n}. On ℝ^n, the ℓ_2 (i.e., Euclidean) norm is indicated as ·_2 with the unit ℓ_2 ball (sphere) defined as 𝔹_2 := { x∈ℝ^n: x_2 ≤ 1} (𝕊_2 := { x∈ℝ^n: x_2 = 1}), and the ℓ_1 norm is indicated as ·_1 with the ℓ_1 ball (sphere) with radius t denoted as 𝔹_1^t:={ x∈ℝ^n: x_1≤ t} (𝕊_1^t:={ x∈ℝ^n: x_1= t}). Notice that x_2 ≤ x_1 ≤√(n) x_2. Trivial cases for problems (P1)-(P3) are: (a) t≥√(n), in this case, x_2 ≤ 1 implies x_1 < t, which means 𝔹_1^t⊂𝔹_2. (b) t≤ 1, in this case, x_1 ≤ t implies x_2 < 1, meaning 𝔹_2⊂𝔹_1^t. Therefore, without loss of the generality, it is assumed that 1<t<√(n) in the later. Let 1∈ℝ^n denote the vector of all ones. Given v∈ℝ^n, define v^+ to be such that v_i^+ = max(v_i, 0) for i = 1, …, n; the largest component of v is denoted by v_max. For a nonempty closed set C⊂ℝ^n, the projection operator onto C is denoted as P_C( y) = min_ x∈ C x- y_2^2. It is shown in <cit.> that the projection of v onto 𝔹_1^t can be characterized by the root of the auxiliary function ψ(λ) :=∑_i=1^n max(v_i-λ, 0)-t = ( v-λ 1)^+ - t. Let Ω_1, Ω_2, Ω_3 be the constrained sets in (<ref>), (<ref>) and (<ref>), respectively. It easily follows that Ω_1={ y∈ℝ^n: y_1≤ t, y_2≤ 1} = 𝔹_1^t ∩𝔹_2, Ω_2={ y∈ℝ^n: y_1=t, y_2= 1}=𝕊_1^t ∩𝕊_2, Ω_3={ y∈ℝ^n: y_1≤ t, y_2= 1}=𝔹_1^t ∩𝕊_2. Then the projection subproblems (P1), (P2) and (P3) can be formulated into P1 P_Ω_1( v)=argmin_ x∈ℝ^n   x - v_2^2, s.t.   x_1≤ t,   x_2 ≤ 1, P2 P_Ω_2( v)=argmin_ x∈ℝ^n   x - v_2^2, s.t.   x_1 = t,   x_2 = 1, P3 P_Ω_3( v)=argmin_ x∈ℝ^n   x - v_2^2, s.t.   x_1≤ t,   x_2 = 1, respectively, where v∈ℝ^n is given. (Proposition 2.2. in <cit.>) Let x = P_Ω_1( v), y∈ P_Ω_2( v) and z∈ P_Ω_3( v), then v_i x_i ≥ 0, v_i y_i ≥ 0 and v_i z_i ≥ 0 for i = 1,…,n. Further, let Ω_1^+, Ω_2^+, Ω_3^+ be the corresponding part of Ω_1, Ω_2, Ω_3 in the first quadrant, that is, Ω_1^+={ y∈ℝ_+^n: y_1≤ t, y_2≤ 1}, Ω_2^+={ y∈ℝ_+^n: y_1=t, y_2= 1}, Ω_3^+={ y∈ℝ_+^n: y_1≤ t, y_2= 1}. By Proposition <ref>, one can restrict the projection to the nonnegative case, that is, replacing v by | v| = (|v_1|, …, |v_n|) and assigning the signs of the elements of v to the solution afterward. Thus, one only needs to focus on the following projection subproblems restricted on ℝ^n_+ corresponding to (P1), (P2) and (P3), respectively: P1^+ P_Ω_1^+( v)=argmin_ x∈ℝ^n_+   x - v_2^2, s.t.   
x_1≤ t,   x_2 ≤ 1, P2^+ P_Ω_2^+( v)=argmin_ x∈ℝ^n_+   x - v_2^2, s.t.   x_1 = t,   x_2 = 1, P3^+ P_Ω_3^+( v)=argmin_ x∈ℝ^n_+   x - v_2^2, s.t.   x_1≤ t,   x_2 = 1, where v∈ℝ^n_+ is given. §.§ Projections onto the intersection of an ℓ_1 ball/sphere and an ℓ_2 ball/sphere In this subsection, we briefly review the unified approach proposed in <cit.> for computing the projection onto the intersection of an ℓ_1 ball/sphere and an ℓ_2 ball/sphere, which is constructed based on the following auxiliary function: ϕ(λ) := ( v-λ 1)^+_1^2 - t^2( v-λ 1)^+_2^2,   ∀λ∈ℝ. They characterized the solutions of (<ref>), (<ref>) and (<ref>) by the root of ϕ, corresponding to Theorem 4.1, Theorem 4.2 and Theorem 4.3 in <cit.>, respectively (as stated in the following), and then designed a bisection method for finding the root of ϕ, namely the Quadratic Approximation Secant Bisection (QASB) method. (Theorem 4.1 in <cit.>) For any v∈ℝ^n_+, let x^* be the optimal solution of (<ref>). Then, (i) if v_2 ≤ 1 and v_1 ≤ t, then x^* = v; (ii) if v_2 > 1 and v_1 ≤ t v_2, then x^* = v/ v_2; (iii) if v_1 > t and v_1 > t v_2, ψ(λ) = 0 has a unique root λ̂ in (0, v_max). Furthermore, if ( v-λ̂ 1)^+_2≤ 1, then x^* = ( v-λ̂ 1)^+. Otherwise, ϕ(λ) = 0 has a unique root λ^* in (0, λ̂), and x^* = ( v-λ^* 1)^+( v-λ^* 1)^+_2. For v∈ℝ^n_+, let λ_j, j = 1, …, k denote the k distinct components of v such that λ_1 > … > λ_k with λ_1 = v_max, λ_2=v_ 2nd-max and let λ_k+1 =-∞. For j=1, …, k, let I_j :={i : v_i≥λ_j, i = 1, …, n} and I_j := | I_j|. (Theorem 4.2 in <cit.>) For any v∈ℝ^n_+, one of the following statements must be true: (i) if I_1>t^2, then any x^* satisfying ∑_i∈ I_1 x_i = t,  ∑_i∈ I_1 x_i^2 = 1,   x_i≥ 0, i∈ I_1; x_i = 0, i∉ I_1 is optimal for (<ref>). (ii) if I_1=t^2, then (<ref>) has a unique solution x_i^* = 1√(I_1), i ∈ I_1; x_i^* = 0, i∉ I_1. (iii) if I_1<t^2, then (<ref>) has a unique solution x^* = ( v-λ^* 1)^+( v-λ^* 1)^+_2, where λ^* is the unique root of ϕ(λ) = 0 on (-∞, v_max). (Theorem 4.3 in <cit.>) For any v∈ℝ^n_+, one of the following statements must be true: (i) if I_1≤ t^2, then (<ref>) has a unique solution x^*. If v_1 > t v_2, then x^* satisfies (<ref>) with λ^* the root of ϕ(λ) = 0 on (0, v_max). Otherwise, x^* = v/ v_2. (ii) if I_1>t^2, then any x^* satisfying (<ref>) is optimal for (<ref>). The QASB method for finding the root of the equation ϕ(λ)=0 is described in Algorithm <ref>. Based on Theorem <ref>, Theorem <ref>, Theorem <ref> and the QASB root-finding method, one easily gets the following Algorithm <ref>, Algorithm <ref> and Algorithm <ref>, which compute the projections P_Ω_1( v), P_Ω_2( v) and P_Ω_3( v) of a vector v∈ℝ^n onto Ω_1=𝔹_1^t∩𝔹_2, Ω_2=𝕊_1^t∩𝕊_2 and Ω_3=𝔹_1^t∩𝕊_2, respectively. (A simple solution for Equation (<ref>)) Notice that the solution for (<ref>) is not unique. Here we provide a simple solution for (<ref>) that is easy to implement. Since the index set ℐ_1 is known, the problems (<ref>) and (<ref>) can all be reduced to solving ∑_i∈ℐ_1y_i=t,∑_i ∈ℐ_1y_i^2=1,y_i≥ 0, i∈ℐ_1. One can find a solution y^s for (<ref>) as follows, and then get the corresponding solution x^s for (<ref>) and (<ref>). Let y=(tI_1,…,tI_1)^T, ỹ=(t,0,…,0)^T. Then y and ỹ are both on the ℓ_1 sphere of dimension I_1. Let y^s be the point on the unit sphere that also lies on the line segment connecting the two points y and ỹ; then y^s satisfies (<ref>). In fact, suppose that y^s=(1-s) y+sỹ= y+s(ỹ- y) with s∈[0,1]. Then y^s_1=((1-s)tI_1+st,…,(1-s)tI_1)^T_1=I_1(1-s)tI_1+st=t.
It remains to find an s satisfying y_s_2^2=1. By simple calculation, one easily get s=√(1- y_2^2ỹ- y_2^2)=1t√(I_1-t^2I_1-1). Consequently, y^s=(1-s) y+sỹ=(1-1t√(I_1-t^2I_1-1)) y+1t√(I_1-t^2I_1-1)ỹ. Now let x^s be the point with x_i^s=y_i^s, i∈ℐ_1 and x_i^s=0, i∉ℐ_1. (Comparing the solutions to (<ref>) and (<ref>)) When using the solution to (<ref>) in Remark <ref>, the solutions to (<ref>) and (<ref>) are the same except the case of I_1<t^2 and v_1 ≤ t v_2. Proof.  From Theorem 4 (i), Theorem 5 (ii) and Remark <ref>, we know that the solutions to (<ref>) and (<ref>) are the same when I_1>t^2. When I_1=t^2, the unique solution to (<ref>) is given by x^* with its components x_i^* = 1√(I_1), i ∈ I_1; x_i^* = 0, i∉ I_1 from Theorem 4 (ii); the unique solution to (<ref>) is given by x^* = ( v-λ^* 1)^+( v-λ^* 1)^+_2 with λ^*∈(λ_2,v_max) satisfying ϕ(λ^*)=0 from the proof of Theorem 5 (i) in <cit.>. Meanwhile, from the definition of I_1 and λ^*∈ (λ_2,v_max) we know that the entries of the vector ( v-λ^* 1)^+ are as follows: ( v-λ^* 1)^+_i= [ v_max-λ^*, i∈ I_1; 0, i∉ I_1. ] So we have ( v-λ^* 1)^+_2=√(I_1)(v_max-λ^*), and consequently the solution x^* to (<ref>) is also composed of x_i^* = 1/√(I_1), i ∈ I_1; x_i^* = 0, i∉ I_1, which is the same with the solution to (<ref>). When I_1<t^2, from the proofs of Theorem 4 (iii) and Theorem 5 (i) in <cit.>, the unique solution to (<ref>) and (<ref>) are both given by x^* = ( v-λ^* 1)^+( v-λ^* 1)^+_2 with λ^*∈(0,v_max) satisfying ϕ(λ^*)=0 if v_1 > t v_2; however, for the case of v_1 ≤ t v_2, the unique solution to (<ref>) is given by x^* = ( v-λ^* 1)^+( v-λ^* 1)^+_2 with λ^*∈(-∞,v_max) satisfying ϕ(λ^*)=0, whereas the unique solution to (<ref>) is given by x^* = v v_2, which may be different. Notice that the case of I_1<t^2 and v_1 ≤ t v_2 occurs infrequently when t is very small, which is exactly the needed sparsity requirement. This indicates that the solutions to the SCoTLASS problems (<ref>) and (<ref>) will be the same in most case. §.§ Modified bisection Newton method for finding the root of the equation ϕ(λ)=0 In Algorithm <ref>, Algorithm <ref> and Algorithm <ref>, one always needs to find the root λ^* of the equation ϕ(λ)=0 to get the unique solution x^* = ( v-λ^* 1)^+( v-λ^* 1)^+_2 when ( v-λ 1)^+_2≠ 0. It is implemented by using QASB method proposed in <cit.>. Notice the equation ϕ(λ)=0 is equivalent to the equation Ψ(λ)=0 proposed in <cit.> when ( v-λ 1)^+_2≠ 0, where Ψ(λ) was defined by Ψ(λ)=( v-λ 1)^+_1( v-λ 1)^+_2-t,  λ∈ [0,v_max). So one can also use the Algorithm 3 in <cit.> to find the root of ϕ(λ)=0 (assume ( v-λ 1)^+_2≠ 0). However, it is worth noting that there is a bug in the root-finding procedure of Ψ(λ)=0 when using Bisection-Newton solver (BNW for short) in <cit.>. The algorithm BNW performs bisection and continuously checks for sign changes in the auxiliary function Ψ. As soon as this is fulfilled, the root λ^* can be computed by a closed-form. To get the analytic expression of the root λ^*, let us first study the properties of the function Ψ(λ). Following the denotations in <cit.>, for v=(v_1,…,v_n)∈ℝ_+^n, let λ_j, j = 1, …, k denote the k distinct components of v such that λ_1 > … > λ_k with λ_1 = v_max, λ_2=v_ 2nd-max and let λ_k+1 =-∞. And let ℐ_λ={i:v_i ≥λ, i=1,⋯,n}, I_λ=|ℐ_λ|, s_λ=∑_i ∈ℐ_λv_i, w_λ=∑_i ∈ℐ_λv_i^2 and s_j=s_λ_j, w_j=w_λ_j. Since λ_j > λ_j+1, we know that ∀ j=1, ⋯, k, ∀λ∈ (λ_j+1,λ_j], ℐ_j ⊂ℐ_j+1, I_j < I_j+1, s_j < s_j+1, w_j < w_j+1 ℐ_λ=ℐ_j, I_λ=I_j, s_λ=s_j, w_λ=w_j. 
Therefore I_j, s_j, w_j are all constants on (λ_j+1, λ_j]. From this, we easily get the following results. (The properties of Ψ(λ)) (1) Ψ(λ) is continuous on [0, v_max); (2) Ψ(λ) is differentiable on [0, v_max)\{λ_1,…,λ_k}; (3) Ψ(λ) is strictly deceasing on [0,λ_2) and is constant on [λ_2, v_max); (4) ∀ j=2, ⋯, k, Ψ(λ) is strictly concave on (λ_j+1, λ_j]; and (5) there is exactly one λ^*∈ (0,λ_2) with Ψ(λ)=0. Proof. From Lemma 3 in <cit.>, we only need to show (4): Ψ(λ) is strictly concave on (λ_j+1, λ_j], ∀ j=2, ⋯, k. Following the denotations in <cit.>, let l_1(λ)=( v-λ1)^+_1=s_λ-λ I_λ, l_2(λ)=( v-λ1)^+_2=√(w_λ-2λ s_λ+λ^2I_λ). Then Ψ(λ)=l_1(λ)l_2(λ)-t. Therefore ∀ j=2,⋯,k, ∀λ∈ (λ_j+1, λ_j], I_j≥ 2, l_1j(λ)=s_j-λ I_j>0, l_2j(λ)=√(w_j-2λ s_j+λ^2I_j)>0, l_2j'(λ)=-l_1j(λ)/l_2j(λ), Ψ_j(λ)=l_1j(λ)l_2j(λ)-t, Ψ_j'(λ)=l_1j'(λ)l_2j(λ)-l_1j(λ)l_2j'(λ)[l_2j(λ)]^2 =-I_jl_2j(λ)+[l_1j(λ)]^2/l_2j(λ)[l_2j(λ)]^2=s_j^2-I_jw_j[l_2j(λ)]^3, Ψ_j”(λ)=3l_1j(λ)(s_j^2-I_jw_j)[l_2j(λ)]^5. However, since v_max=λ_1> λ_2≥λ, that is, there are at least two distinct values in the following summation, we have s_j^2-I_j w_j =(∑_i ∈ℐ_jv_i)^2-I_j∑_i ∈ℐ_jv_i^2 =∑_i ∈ℐ_jv_i^2+2∑_i,k ∈ℐ_j i ≠ kv_iv_k-I_j∑_i ∈ℐ_jv_i^2 < 3∑_i ∈ℐ_jv_i^2-I_j∑_i ∈ℐ_jv_i^2=(3-I_j)∑_i ∈ℐ_jv_i^2. If I_j ≥ 3, then s_j-I_jw_j < 0; if I_j=2, s_j^2-I_jw_j=(λ_1+λ_2)^2-2(λ_1^2+λ_2^2) < 0. This indicates that s_j^2-I_jw_j< 0 always holds. Hence Ψ_j”(λ) < 0, which implies that Ψ(λ) is strictly concave on (λ_j+1, λ_j], ∀ j=2,⋯,k. This completes the proof. Proposition <ref> indicates that the root of Ψ(λ)=0 must be in the open interval (λ_j+1, λ_j) such that Ψ changes its sign, and meanwhile we can get the closed-form expression of the root λ^* by solving the quadratic equation Ψ_j(λ)=0 (or ϕ_j(λ)=0, refer to (5) in <cit.>) about λ, that is, (I_j-t^2)I_jλ^2-2(I_j-t^2)s_jλ+s_j^2-t^2w_j=0. It easily follows that the smaller root of the equation (<ref>) has the following closed-form (refer to (7) in <cit.>): λ^*=1I_j(s_j-t√(I_jw_j-s_j^2I_j-t^2)). The proof of Theorem 1 in <cit.> pointed out the unique root λ^*∈(0,λ_2) of Ψ(λ)=0 must be the smaller one. Notice that in <cit.>, the domain of Ψ was set to be [0,v_max) for they only cared about the case of σ( v)<σ^*. They also pointed out that Ψ(0)≤ 0 held when σ( v)≥σ^*, which was thought to be trivial in their sparseness-decreasing setup. So their BNW algorithm first checked whether Ψ(0)≤ 0 (i.e. v_1 ≤ t v_2), which was equivalent to ϕ(0)≤ 0, meanwhile the unique root λ^* of Ψ(λ)=0 (i.e., ϕ(λ)=0) would be negative (also be the smaller one), and could be also computed by (<ref>). Thus the BNW procedure (refer to Algorithm 3 in <cit.>) is shown in the following Algorithm <ref>. But we want to say, the involved Newton process could become an endless loop. In the following we will provide an algorithm to get a counterexample. For the simplicity of denotations, let a=I_λ∈ℕ, b=s_λ, c=w_λ, we rewrite Ψ(λ) as Ψ(λ)=b-aλ√(aλ^2-2bλ+c)-t, (b^2-ac ≤ 0). Let λ_1∈ (λ_j_1+1,λ_j_1) and λ_2∈ (λ_j_2+1,λ_j_2) (j_1>j_2) be two consecutive iteration points which make BNW become an endless loop. And Ψ_1(λ)=b_1-a_1λ√(a_1λ-2b_1λ+c_1)-t, λ∈ (λ_j_1+1,λ_j_1], Ψ_2(λ)=b_2-a_2λ√(a_2λ^2-2b_2λ+c_2)-t, λ∈ (λ_j_2+1,λ_j_2] are the corresponding functions. From (<ref>) we have that a_1 > a_2 (when a_1, a_2 ≠ 1,2), b_1 > b_2, c_1 > c_2. From (<ref>) we further have Ψ_1(λ_1)=b_1-a_1λ_1√(a_1λ_1-2b_1λ_1+c_1)-t,   Ψ_1'(λ_1)=b_1^2-a_1c_1(a_1λ_1^2-2b_1λ_1+c_1)^3/2. To obtain the places of λ_1,λ_2, we suppose a_2, b_2, c_2, t and λ_2 be given. 
To get λ_1, from the loop condition, we know that the tangent at the point (λ_1, Ψ(λ_1)) will also pass through the point (λ_2, 0), which implies that Ψ_1(λ_1)=Ψ_1'(λ_1)(λ_1-λ_2). Put Ψ_1(λ_1) in (<ref>) into (<ref>) one easily obtains √(a_1λ_1^2-2b_1λ_1+c_1)=b_1-a_1λ_1Ψ_1'(λ_1)(λ_1-λ_2)+t, c_1=(b_1-a_1λ_1Ψ_1'(λ_1)(λ_1-λ_2)+t)^2+2b_1λ_1-a_1λ_1^2. Then put (<ref>) into the second equality in (<ref>) and reduce it, we can get a cubic equation about b_1: Ab_1^3+Bb_1^2+Cb_1+D=0, where A=Ψ_1'(λ_1)(Ψ_1'(λ_1)(λ_1-λ_2)+t)^3, B=-3a_1λ_1Ψ_1'(λ_1)(Ψ_1'(λ_1)(λ_1-λ_2)+t)^3 +a_1/(Ψ_1'(λ_1)(λ_1-λ_2)+t)^2-1, C=3a_1^2λ_1^2Ψ_1'(λ_1)(Ψ_1'(λ_1)(λ_1-λ_2)+t)^3 -2a_1^2λ_1/(Ψ_1'(λ_1)(λ_1-λ_2)+t)^2+2a_1λ_1, D=-a_1^3λ_1^3Ψ_1'(λ_1)(Ψ_1'(λ_1)(λ_1-λ_2)+t)^3 +a_1^3λ_1^2/(Ψ_1'(λ_1)(λ_1-λ_2)+t)^2-a_1^2λ_1^2. After figuring out b_1 from (<ref>), then one obtains c_1 by (<ref>). In fact, we can provide a procedure of getting a counterexample for BNW based on the above idea. Example 1 (A counterexample which makes BNW become an endless loop) Let a_1=10, b_1=33, c_1=109, λ_1=3, a_2=2, b_2=9, c_2=41, λ_2=4, t=2. Then b_1^2-a_1c_1=-1 ≤ 0, b_2^2-a_2c_2=-1 ≤ 0, and we get the functions Ψ_1(λ)=33-10λ√(10λ^2-66λ+109)-2, Ψ_2(λ)=9-2λ√(2λ^2-18λ+41)-2. It is easily verified that Ψ_1(λ_1)=1, Ψ_1'(λ_1)=-1, λ_1-λ_2=-1. From this we know the loop condition (<ref>) holds, therefore BNW will be in an endless loop between λ_1 and λ_2, as shown in Figure <ref>. Now we modify the BNW procedure (refer to Algorithm 3 in <cit.>) by adding a judging condition to get our modified BNW method (MBNW for short) as in Algorithm <ref>. § THE GRADIENT PROJECTION ALGORITHM FOR SCOTLASS PROBLEMS §.§ The framework for GPSPCA In this section, we bring forward the GP algorithm for SCoTLASS problems (<ref>), (<ref>) and (<ref>) (GPSPCA for short) based on the projection method proposed in <cit.>, Barzilai-Borwein (BB) stepsize and the deflation method in the context of PCA. There are mainly two methodologies utilized in PCA. The first is the greedy approach, that is, deflation method <cit.>, such as SCoTLASS <cit.>, ConGradU <cit.> Gpower_ℓ_1 <cit.>, etc. Deflation method in PCA is to find r principal components by solving the optimization problem (<ref>) sequentially one-by-one on the deflated data matrix or data covariance. Specifically, for a given data matrix A∈ℝ^m× n (without loss of generality, it is assumed that the variables contained in the columns of A are centred), denoted by Σ_0=A^TA∈ℝ^n× n the sample covariance matrix. Then the sample covariance matrix Σ_j (j = 1, 2, …, r) should be updated recursively to eliminate the influence of the previous computed loading as follows <cit.>: Σ_j=(I- x_j x_j^T)Σ_j-1(I- x_j x_j^T). The second is the block approach. Typical methods include SPCA <cit.>, Gpower_ℓ_1,k <cit.>, ALSPCA <cit.> and BCD-SPCA <cit.>, GeoSPCA <cit.> etc. These methods aim to calculate multiple sparse PCs at once by utilizing certain block optimization techniques. However, we use the above described deflation method in this paper. According to Algorithm <ref>, Algorithm <ref> and Algorithm <ref>, which computes projections P_Ω_1( v), P_Ω_2( v) and P_Ω_3( v) of a vector v∈ℝ^n onto Ω_1, Ω_2 and Ω_3, respectively, the proposed GPSPCA algorithm is described informally in the following Algorithm <ref> (denoted by GP-P1, GP-P2 or GP-P3 when the constraint set of SPCA problem is taken to be Ω_1, Ω_2 or Ω_3, which corresponding SPCA-P1, SPCA-P2 or SPCA-P3 model of the SCoTLASS problem, respectively). 
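Since Algorithm <ref> itself is not reproduced in this excerpt, the following numpy sketch only illustrates the overall structure of the GPSPCA loop under stated assumptions: `project(v, t)` stands for any of the projections P_Ω_1, P_Ω_2 or P_Ω_3 computed as in Section 2, the BB safeguarding is simplified to a plain clamp, and only the absolute-error stopping test is kept.

```python
import numpy as np

def gpspca(Sigma, t, project, n_components=1, alpha_bounds=(0.1, 1e4),
           max_iter=500, tol=1e-6):
    """Simplified GPSPCA sketch: gradient projection for max x'Sigma x over Omega,
    with a clamped BB-type step and PCA deflation between components."""
    n = Sigma.shape[0]
    S, loadings = Sigma.copy(), []
    for _ in range(n_components):
        x = np.zeros(n)
        x[np.argmax(np.diag(S))] = 1.0            # x0 = e_i, i = index of largest diagonal entry
        g = -2.0 * S @ x                          # gradient of f(x) = -x'Sx
        alpha = alpha_bounds[1]
        for _ in range(max_iter):
            x_new = project(x - g / alpha, t)     # gradient projection step
            g_new = -2.0 * S @ x_new
            s, y = x_new - x, g_new - g
            x, g = x_new, g_new
            if np.linalg.norm(s) < tol:           # absolute error of the argument
                break
            alpha = np.clip(abs((s @ y) / (s @ s)), *alpha_bounds)   # BB-type scalar
        loadings.append(x)
        P = np.eye(n) - np.outer(x, x)            # deflation: S <- (I - xx') S (I - xx')
        S = P @ S @ P
    return np.column_stack(loadings)
```

With `project` taken as P_Ω_3 the sketch corresponds to GP-P3; GP-P1 and GP-P2 only differ in the projection used.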
(The equivalence of GP-P1 and GP-P3) The solution to SCoTLASS problems (<ref>) and (<ref>) are the same by using GP-P1 and GP-P3, respectively. Proof.  From (<ref>) and (<ref>), we know that the SCoTLASS problem is to maximize a convex function f( x)= x^TΣ x with a semi-definite covariance matrix Σ =A^T A. However, the constraint sets Ω_1 of SPCA-P1 problem and Ω_3 of SPCA-P3 problem are both non-empty closed bounded sets contained in ri(dom(f)) =ℝ^n, and Ω_1 is also convex. According to Corollary 32.3.2 in <cit.>, the maximum of f on Ω_1 exists and is obtained at some extreme point of Ω_1. In the following, we will prove that the set of extreme points of Ω_1 is exactly Ω_3. In fact, suppose x∈Ω_3. We are going to prove x is an extreme point of Ω_1. Assume that x=(1-μ) y+μ z for some μ∈ (0,1) and y, z∈Ω_1, then y_2≤ 1 and z_2≤ 1. If y_2<1 or z_2<1, from the properties of norm we have x_2≤ (1-μ) y_2+μ z_2<(1-μ)+μ=1, which is contradict with x∈Ω_3. So it must hold y_2=1 and z_2=1. If y≠ z, by Cauchy-Schwartz inequality, we have x_2=  [(1-μ) y+μ z]^T[(1-μ) y+μ z] =  (1-μ)^2 y_2^2+2μ(1-μ) y^T z+μ^2 z_2^2 <  (1-μ)^2+2μ(1-μ) y_2 z_2+μ^2=1, which is again contradict with x∈Ω_3. Therefore, y= z. Meanwhile, x=(1-μ) y+μ z=(1-μ) y+μ y= y= z. To sum up, we know that x is an extreme point of Ω_1. However, suppose x∈Ω_1\Ω_3. If x_2<1, x_1<t, since x is an interior point of Ω_1, it could not be an extreme point. In fact, there exists a r>0 such that the ball B_r( x)⊆Ω_1. Meanwhile let y and z are the two endpoints of a diameter of this ball. Then y, z∈Ω_1 (as Ω_1 is a closed subset) and x=1/2 y+1/2 z, which implies that x is not an extreme point of Ω_1. If x_2<1, x_1=t, let assume that x is in one of a hyperplane of 𝕊_1^t. It easily follows that x is a relative interior point of the intersection set C of this hyperplane and Ω_1. Then from Theorem 6.4 in <cit.> we know for all y∈ C, there exists a μ >1 such that (1-μ) y+μ x∈ C. Take y∈Ω_2∩ C⊆Ω_1 and the associated μ>1, let z=(1-μ) y+μ x∈ C⊆Ω_1. Then we have x=1/μ z+μ-1/μ y, and 1/μ, μ-1/μ∈ (0,1), 1/μ+ μ-1/μ=1. That is, x is a convex combination of two different points y, z∈Ω_1, which implies that x is not an extreme point of Ω_1. Thus we have shown each point x∈Ω_1\Ω_3 is not an extreme point of Ω_1. This completes the proof. §.§ Convergence results In this subsection, we will prove the global convergence of our proposed GPSPCA algorithm using the analysis in <cit.>. Bolte et al considered a broad class of nonconvex-nonsmooth problems in <cit.> of the form Mminimize_ x, y F( x, y)=f( x)+g( y)+H( x, y), ( x, y)∈ℝ^n ×ℝ^m where f:ℝ^n→ (-∞,+∞] and g:ℝ^m→ (-∞,+∞] are both proper closed functions and H:ℝ^n×ℝ^m →ℝ is a smooth function. Starting with some given initial point ( x^0, y^0), they generate an iterated sequence {( x^k, y^k)}_k∈ℕ in <cit.> via the proximal regularization for H linearized at a given point of the Gauss-Seidel scheme : x^k+1∈argmin_ x∈ℝ^n{⟨ x- x^k,∇ H_ x( x^k, y^k)⟩ + c_k2 x- x^k_2^2+f( x^k)} y^k+1∈argmin_ y∈ℝ^m{⟨ y- y^k,∇ H_ y( x^k+1, y^k)⟩ + d_k2 y- y^k_2^2+g( y^k)} where c_k and d_k are positive real numbers, and yield the Proximal Alternating Linearized Minimization (PALM for short) algorithm.  PALM reduces to Proximal Forward-Backward (PFB) algorithm when there is no y term. 
In this case, F( x) := f( x)+h( x) (where h( x) ≡ H( x, 0)), and the proximal forward-backward scheme for minimizing Ψ can simply be viewed as the proximal regularization of h linearized at a given point x^k, i.e., x^k+1∈ argmin_ x∈ℝ^n{⟨ x- x^k,∇ h( x^k)⟩ + t_k2 x- x^k_2^2+f( x^k)} It is well-known that PFB reduces to the gradient projection (GP) method when f=δ_Ω (where Ω is a nonempty, closed and nonconvex subset of ℝ^n, i.e., GP method generates an iterated sequence { x^k}_k∈ℕ via x^k+1∈ P_Ω( x^k-1t_k∇ h( x^k)) To describe the global convergence of PFB, let us first give the definition of Kurdyka-Łojasiewicz (KL) function <cit.>. Let η∈ (0,+∞]. We denote by Φ_η the class of all concave and continuous functions φ : [0, η)→ℝ^+ which satisfy the following conditions (i) φ(0) = 0; (ii) φ is smooth on (0, η) and continuous at 0; (iii) for all s ∈ (0, η), φ'(s) > 0. Definition (Kurdyka-Łojasiewicz property, Definition 3 in <cit.>) Let σ:ℝ^d →(-∞, +∞] be proper and lower semi-continuous. (i) The function σ is said to have the Kurdyka-Łojasiewicz (KL) property at u̅∈dom∂σ := {u ∈ℝ^d : ∂σ(u) ≠∅}, if there exist η∈ (0,+∞], a neighborhood U of u̅ and a function φ∈Φ_η, such that for all u ∈ U∩{u ∈ℝ^d : σ(u̅) < σ(u) < σ(u̅)+η}, the following inequality holds φ'(σ(u)-σ(u̅))·dist (0,∂σ(u))≥ 1. (ii) If σ satisfy the KL property at each point of domain ∂σ, then σ is called a KL function. With some assumptions, Bolte et al proved the global convergence for PALM algorithm, and consequently the global convergence for PFB algorithm in <cit.>. (Proposition 3 in <cit.>) (A convergence result of PFB) Let h: ℝ^n →ℝ be a continuously differentiable function with gradient ∇ h assumed L_h-Lipschitz continuous and let f: ℝ^n → (-∞, +∞] be a proper and lower semi-continuous function with inf_ℝ^n f >-∞. Assume that F=f + h is a KL function. Let { x^k}_k∈ℕ be a sequence generated by PFB which is assumed to be bounded and let t_k > L_h. The following assertions hold: (i) The sequence { x^k}_k∈ℕ has finite length, that is, ∑_k=1^∞ x^k+1- x^k_2<+∞. (ii) The sequence { x^k}_k∈ℕ converges to a critical point x^* of F. Consider the SCoTLASS problems (<ref>), (<ref>) and (<ref>), they can be reformulated respectively as [ - x^TΣ x; s.t. x∈Ω_i ],   i=1,2,3 In Proposition <ref>, taking h( x)=- x^TΣ x, f_i( x)=δ_Ω_i( x)= 0, x∈Ω_i +∞, x∉Ω_i ,  i=1,2,3 which is the indicator function on Ω_i, and F_i( x)=h( x)+f_i( x)= - x^TΣ x+δ_Ω_i( x), i=1,2,3. Then, we can obtain the global convergence for our GPSPCA algorithm on different constraint sets Ω_1, Ω_2 and Ω_3. In fact, Σ=A^TA is positive semidefinite, ∇ h=-2Σ x. By the Cauchy-Schwartz inequality, we have for all x, y∈ℝ^n, ∇ h( x)-∇ h( y)_2 = 2 Σ( x- y)_2 ≤ 2Σ_2 x- y_2 = 2λ_max(Σ) x- y_2. That is, h is Lipschitz continuous with moduli L_h=2λ_max(Σ) (here λ_max(Σ) denotes the largest eigenvalue of Σ). Since Ω_i, i=1,2,3 are all nonempty compact closed sets, it easily follows that f_i( x)=δ_Ω_i( x), i=1,2,3 are all proper and lower semi-continuous. And inf_ℝ^nf( x)=inf_ℝ^nδ_Ω_i( x)=0>-∞. Now we show that F_i( x), i=1,2,3 are all KL functions. According to the properties of semi-algebraic functions <cit.>, we know that a semi-algebraic function must be a KL function, and the finite sum of semi-algebraic functions is also semi-algebraic. And h( x)=- x^TΣ x is actually a polynomial function, from Appendix Example 2 in <cit.>, we have that h( x) is a semi-algebraic function. Thus the only thing is to show that δ_Ω_i( x),i=1,2,3 are semi-algebraic functions. 
However, the indicator function of a semi-algebraic set must be a semi-algebraic function. Remaining we prove Ω_i,i=1,2,3 are all semi-algebraic sets. From the definition of the semi-algebraic set, for S ⊂ℝ^d, if there exists a finite number of real polynomial functions g_ij,h_ij:ℝ^d →ℝ such that S=⋃_j=1^p⋂_i=1^q{ u∈ℝ^d:g_ij( u)=0, h_ij( u) <0} then S is a semi-algebraic set. Notice that { x∈ℝ^n : ∑_k=1^n| x_k|-t=0}=⋃_i=1^2^n{ x∈ℝ^n : e_i^T x-t=0} and { x∈ℝ^n : ∑_k=1^n| x_k|-t<0}=⋂_i=1^2^n{ x∈ℝ^n : e_i^T x-t<0} where the j-th (j=1,…,n) component e_ij of e_i takes value in {-1,1}. We have { x∈ℝ^n : ∑_k=1^n | x_k|-t=0} and { x∈ℝ^n : ∑_k=1^n| x_k|-t<0} are semi-algebraic sets. And { x∈ℝ^n : ∑_k=1^nx_k^2-1=0},  { x∈ℝ^n : ∑_k=1^nx_k^2-1<0} are clearly semi-algebraic sets. Therefore Ω_i,i=1,2,3 are all semi-algebraic sets. By Proposition <ref> and Remark <ref>, we have the following global convergence result of our GPSPCA algorithm for SCoTLASS problems (<ref>), (<ref>) and (<ref>). (Global convergence of GPSPCA) Let { x^k}_k∈ℕ be a sequence generated by GPSPCA which is assumed to be bounded and let t_k > 2λ_max(Σ). The following assertions hold: (i) The sequence { x^k}_k∈ℕ has finite length, that is, (<ref>) holds; (ii) The sequence { x^k}_k∈ℕ converges to a critical point x^* of F( x)=- x^TΣ x+δ_Ω( x), i.e. x^* satisfies 0∈∂ F, Ω=Ω_1, Ω_2 or Ω_3. § THE APPROXIMATE NEWTON ALGORITHM FOR SCOTLASS PROBLEMS In <cit.>, besides the GP algorithm, Hager et al. also proposed an approximate Newton algorithm for non-convex minimization and applied it to SPCA. They pointed out that in some cases, the approximate Newton algorithm with a Barzilai-Borwein (BB) Hessian approximation and a non-monotone line search can be substantially faster than the other algorithms, and can converge to a better solution. For f:ℝ^n→ℝ a concave second-order continuously differentiable function, and Ω a compact nonempty set, they consider the algorithm in which the new iterate x_k+1 is obtained by optimizing the quadratic model: x_k+1∈ argmin{∇ f( x_k)( x- x_k) + α_k/2 x- x_k_2^2 : x∈Ω}. Notice that after completing the square, the iteration is equivalent to x_k+1∈ argmin{α_k x-( x_k-g_k/α_k)_2^2 : x∈Ω}, where g_k=∇ f( x_k). If α_k > 0, then this reduces to x_k+1∈ P_Ω( x_k-g_k/α_k); in other words, perform the gradient projection algorithm with step size 1/α_k. If α_k< 0, then the iteration reduces to x_k+1∈ Q_Ω( x_k-g_k/α_k), where Q_Ω( x)= argmax{ x- y_2^2 : y∈Ω}. To design the approximate Newton algorithm for SCoTLASS problems, let us first characterize the solutions to the problems Q_Ω_1( x), Q_Ω_2( x) and Q_Ω_3( x).   For ∀ x∈ℝ^n, the solution to the problems Q_Ω_1( x) and Q_Ω_3( x) are the same. Proof.   ∀ x∈ℝ^n, the problems Q_Ω_1( x) and Q_Ω_3( x) are both to maximize a convex function f( y)= x- y_2^2. Then the rest of the proof is the same as that in Theorem <ref>.   For ∀ x∈ℝ^n, we have argmax_ y∈Ω_2 x- y_2^2 =- argmin_ y∈Ω_2 x- y_2^2, argmax_ y∈Ω_3 x- y_2^2 =- argmin_ y∈Ω_3 x- y_2^2. That is, Q_Ω_i( x)=- P_Ω_i( x),   i=2,3. Proof.   [ argmax_ y∈Ω_2 x- y_2^2; = argmax_ y∈Ω_2{ x_2^2-2⟨ x, y⟩+ y_2^2}; = argmax_ y∈Ω_2{ x_2^2-2⟨ x, y⟩+1}  (since y∈Ω_2, y_2=1); = argmax_ y∈Ω_2-2⟨ x, y⟩ = argmax_ y∈Ω_22⟨ x,- y⟩  (since x_2 is a constant); = argmin_ y∈Ω_2-2⟨ x,- y⟩ = argmin_ y∈Ω_2 x-(- y)_2^2; = argmin_- y∈Ω_2 x- y_2^2 = - argmin_ y∈Ω_2 x- y_2^2.  (since Ω_2 is symmetrical about the origin) ] The proof for Ω_3 is similar. 
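To make these maps concrete, the following numpy sketch computes P_Ω_3 by plain bisection on the auxiliary function ϕ of Section 2 and obtains Q_Ω_3 through the sign flip established above. It deliberately ignores the tie/degenerate cases of the theorems (e.g., I_1 ≥ t^2) and is only a simplified stand-in for the QASB method, not the method itself.

```python
import numpy as np

def project_l1ball_l2sphere(v, t, iters=60):
    """Sketch of P_{Omega_3}: project v onto {x : ||x||_1 <= t, ||x||_2 = 1}
    by bisection on phi(lam) = ||(|v|-lam)^+||_1^2 - t^2 * ||(|v|-lam)^+||_2^2.
    Degenerate/tie cases are not handled here."""
    sgn, a = np.sign(v), np.abs(v)
    if np.linalg.norm(a, 1) <= t * np.linalg.norm(a):    # easy case: x* = v / ||v||_2
        return v / np.linalg.norm(v)
    lo, hi = 0.0, a.max()                                # phi(lo) > 0 and phi(hi-) < 0 here
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        w = np.maximum(a - lam, 0.0)
        phi = w.sum() ** 2 - t ** 2 * (w @ w)
        lo, hi = (lam, hi) if phi > 0 else (lo, lam)
    w = np.maximum(a - 0.5 * (lo + hi), 0.0)
    return sgn * w / np.linalg.norm(w)

def farthest_point_l1ball_l2sphere(v, t):
    """Q_{Omega_3}(v) = -P_{Omega_3}(v), by the symmetry argument above."""
    return -project_l1ball_l2sphere(v, t)
```

By the first proposition above, the same routine also serves as Q_Ω_1; this farthest-point map is what the approximate Newton algorithm of the next section uses when the BB scalar is negative.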
Now we can design the approximate Newton algorithms for the SCoTLASS problems (<ref>), (<ref>) and (<ref>) (ANSPCA for short), based on the projection method proposed in <cit.>, the Barzilai-Borwein (BB) stepsize and the deflation method in the context of PCA, in the following Algorithm <ref> (denoted by AN-P1, AN-P2 and AN-P3, respectively, when the constraint set of SPCA is taken to be Ω_1, Ω_2 or Ω_3, corresponding to the SPCA-P1, SPCA-P2 or SPCA-P3 model of the SCoTLASS problem, respectively). From Proposition <ref>, Q_Ω_1( x) and Q_Ω_3( x) have the same solutions, and from Proposition <ref>, the optimal solutions of Q_Ω_2( x) and Q_Ω_3( x) can be obtained by the projections of x onto Ω_2 and Ω_3, respectively. So we take the optimal solution Q_Ω_3( x) as the optimal solution Q_Ω_1( x) in Algorithm <ref>. By Theorem 3.4 and Theorem 3.5 in <cit.>, we also have the following convergence result of our ANSPCA algorithm for the SCoTLASS problems (<ref>), (<ref>) and (<ref>). (Global convergence of ANSPCA) Suppose that the covariance matrix Σ=A^TA of the data matrix A satisfies that there exists a μ<0 such that 2Σ+μ I is positive semidefinite, and let { x^k}_k∈ℕ be a sequence generated by ANSPCA. Then the following assertions hold: (i) the sequence of objective values f( x_k) generated by the ANSPCA algorithm for memory M > 0 converges to a limit f^* as k tends to infinity. If x^* is any limit point of the iterates x^k, then x^*∈ Q_Ω( x^*- g( x^*)/α) for some α < 0, and ∇ f( x^*)( y- x^*)≤ 0, ∀ y∈ conv(Ω) (ii) there exists a constant c independent of k, such that min{ x^j+1- x^j_2: 0<j<kM}≤c√(k). Proof.  Theorem 3.4 and Theorem 3.5 in <cit.> require two conditions: (1) the objective function f is continuously differentiable on a compact set Ω⊆ℝ^n; (2) the following inequality (<ref>) holds for some μ <0, f( y)≤ f( x)+∇ f( x)^T( y- x)+μ2 y- x^2 For the SPCA problem with data matrix A and covariance matrix Σ=A^T A, the objective function f( x)=- x^TΣ x is clearly continuously differentiable on the compact sets Ω_1, Ω_2 and Ω_3. And if there exists a μ<0 such that 2Σ+μ I is positive semidefinite, then x^T(Σ+μ2) x=-f( x)--μ2 x^2 is a convex function, which implies that -f( x) is a strongly convex function with modulus -μ, and then -f( y)≥ -f( x)-∇ f( x)^T( y- x)-μ2 y- x^2. By taking the opposite value on the two sides of the above inequality, one easily gets (<ref>). Thus (i) and (ii) hold by Theorem 3.4 and Theorem 3.5 in <cit.>. In fact, by the proof of Theorem 3.5 in <cit.> we know the constant c can be computed by c=√(2(f( x_0)-f^*)|α|), where α=max(σμ/2,α_max). § NUMERICAL EXPERIMENTS In this section, we conduct several numerical experiments using MATLAB 2019 on a laptop with 8GB of RAM and a 1.80GHz Intel Core i7-8550U processor under WINDOWS 10, and present the experimental results investigating the performance of our proposed GPSPCA and ANSPCA algorithms. On one hand, we contrast the similarities and differences of GPSPCA and ANSPCA among the three projection subproblems on Ω_1=𝔹_1^t ∩𝔹_2, Ω_2=𝕊_1^t ∩𝕊_2 or Ω_3=𝔹_1^t ∩𝕊_2. On the other hand, we also compare the performance of our GPSPCA and ANSPCA algorithms with several typical SPCA algorithms: the ℓ_1-constrained block coordinate descent approach (BCD-SPCA_ℓ_1) in <cit.>, the ℓ_0-constrained approximate Newton algorithm (GPBB) in <cit.>, the conditional gradient algorithm with unit step-size (ConGradU) proposed in <cit.>, and the generalized ℓ_1-penalized power method (Gpower_ℓ_1) in <cit.>.
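Before turning to the experiments, we summarize the core ANSPCA update just analyzed as a short sketch; the nonmonotone line search with memory M and the other details of Algorithm <ref> are omitted, and `project` together with the parameter names are placeholders of ours rather than the paper's implementation.

```python
import numpy as np

def anspca_step(S, x, x_prev, t, project, alpha_bounds=(-1e7, -0.1)):
    """One approximate-Newton step for f(x) = -x'Sx over Omega_2 or Omega_3 (sketch).
    The BB scalar is clamped to be negative, as in the ANSPCA parameter choice, so
    the quadratic-model minimizer becomes the farthest-point map Q_Omega = -P_Omega."""
    g, g_prev = -2.0 * S @ x, -2.0 * S @ x_prev          # gradients of f
    s, y = x - x_prev, g - g_prev
    alpha = np.clip((s @ y) / (s @ s), *alpha_bounds)    # BB scalar kept in [-1e7, -0.1]
    z = x - g / alpha
    return -project(z, t)                                # Q_Omega(z) = -P_Omega(z)
```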
In our experiments, when many principal components (PCs) are extracted, ConGrad and BCD-SPCA algorithms compute PC loadings on the Stiefel manifold simultaneously, while other algorithms all compute PC loadings successively using the deflation method. There are different ways to select the initialized vectors. In <cit.>, x^0=e_i, the i-th column of the identity matrix, where i is the index of the largest diagonal element of the covariance matrix Σ. In <cit.>, x^0 is chosen parallel to the column of A with the largest norm, that is, x^0=a_i^*a_i^*_2, where i^*=max_i a_i_2. In this paper, we mainly use the first initialization method. In addition, in terms of sparse or penalty parameter selection, based on the given ℓ_0 sparse parameter k in the GPBB algorithm, the ℓ_1 penalized parameter γ of Gpower_ℓ_1 or s of ConGradU are obtained carefully by grid search to achieve the desired sparsity, notice from <cit.> that γ∈ [0,a_i^*_2). And for the BCD-SPCA approach and the proposed GPSPCA and ANSPCA algorithms we easily select the appropriate t according to the inequality x_1 ≤√( x_0) x_2. §.§ Performance indexes Refer to <cit.>, we use the following performance indexes: Sparsity (or Cardinality):  Sparsity stands for the percentage of nonzero elements in the loading matrix. The smaller the sparsity is, the sparser the data is. And cardinality means the number of nonzero elements. Non-orthogonality (non-ortho for short):  Let x_i and x_j be any two loading vectors, and the included angle between them is denoted by α_ij. The non-orthogonality is defined by the maximum of |90-α_ij| over i and j. The smaller this value is, the better the non-orthogonality is. Correlation:  Represents for the maximum of the absolute value of correlation coefficients over all PCs. Percentage of explained variance (PEV for short):  Means that the ratio of the tuned variance sum of the top k sparse PCs to the variance sum of all PCs, it can be calculated as Ẑ=QR PEV=∑_j=1^nR_jj^2 where Ẑ is the modified sparse PC, and performed QR decomposition with Q a orthogonal matrix and R a upper triangular matrix. Reconstruction error minimization criterion (RRE for short): RRE is defined as RRE=A-Â_F/A_F where A denotes the data matrix, Â=ÛV^T, V is the loading matrix, Û=AV(V^TV)^-1. Time:  The running time of the procedure. Iterations:  The number of iterations computing the projection subproblem. §.§ Termination Criterion Denote by f_k the value of the objective function for the kth iteration, g_k the gradient for kth iteration, and ε the tolerated error. Our GPSPCA and ANSPCA algorithms use the following stopping criterion: * Absolute error of the argument: x_k- x_k-1_2<ε; * Relative error of the objective function: f_k-f_k-1_∞<ε(1+f_k-1_∞); * Relative error of the gradient: g_k-g_k-1_∞<ε(1+g_k-1_∞); * Relative error of the argument: x_k- x_k-1_∞<ε x_k-1_∞. §.§ Simulations In this subsection, we employ two artificially synthesized datasets to evaluate the performance of the proposed GPSPCA and ANSPCA algorithms, one is for recovering the ground truth sparse principal components underlying data, the other is for comparing the average computational time. §.§.§ Hastie dataset Hastie dataset was first introduced by Zou et al. <cit.> to illustrate the advantage of sparse PCA over conventional PCA on sparse PC extraction. So far this dataset has become one of the most commonly utilized data for testing the effectiveness of sparse PCA methods. 
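The construction of this dataset, recalled in detail next, can be reproduced with a few lines of numpy; the sample size m and the seed are arbitrary choices of ours, and the values 290 and 300 are read as variances.

```python
import numpy as np

def hastie_data(m=1000, seed=0):
    """Synthetic Hastie-type data: V1 ~ N(0, 290), V2 ~ N(0, 300),
    V3 = -0.3*V1 + 0.925*V2 + eps, and ten noisy observed variables
    X1..X4 from V1, X5..X8 from V2, X9..X10 from V3 (all noises standard normal)."""
    rng = np.random.default_rng(seed)
    V1 = rng.normal(0.0, np.sqrt(290.0), m)
    V2 = rng.normal(0.0, np.sqrt(300.0), m)
    V3 = -0.3 * V1 + 0.925 * V2 + rng.normal(size=m)
    cols  = [V1 + rng.normal(size=m) for _ in range(4)]
    cols += [V2 + rng.normal(size=m) for _ in range(4)]
    cols += [V3 + rng.normal(size=m) for _ in range(2)]
    return np.column_stack(cols)                  # an m x 10 data matrix
```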
The dataset was generated in the following way: at first, three hidden variables V_1, V_2 and V_3 were defined as: V_1 ∼ N(0,290) V_2 ∼ N(0,300) V_3=-0.3V_1+0.925V_2+ε where ε∼ N(0,1), and V_1, V_2 and ε are independent of each other. Then ten observation variables are generated from V_1, V_2 and V_3 as follows: X_i=V_1+ε_i^1,i=1,2,3,4 X_i=V_2+ε_i^2,i=5,6,7,8 X_i=V_3+ε_i^3,i=9,10 where ε_i^j ∼ N(0,1) and the ε_i^j (j=1,2,3, i=1,...,10) are independent of each other. Thus, only two principal components (PCs) can capture most of the information in the raw data. The first PC, corresponding to V_1, can be computed from X_1, X_2, X_3, and X_4; the second PC, corresponding to V_2, can be computed from X_5, X_6, X_7, and X_8. In these experiments, the ℓ_1 sparsity parameter t in GPSPCA and ANSPCA is set to be t=2, and we take σ=0.25, M=50 in ANSPCA. Table <ref> and Table <ref> present the values of the above performance indexes for GPSPCA and ANSPCA. From Table <ref> and Table <ref> we see that, for the SCoTLASS problems (<ref>), (<ref>) and (<ref>), GPSPCA and ANSPCA can always recover an effective loading matrix, consistent with V_3=-0.3V_1+0.925V_2+ε, and achieve good results in the non-orthogonality, correlation, PEV and RRE indexes. This indicates that the (P1), (P2) and (P3) subproblems are all suitable for the SPCA model. §.§.§ Randomly generated data In these experiments, we randomly generate data matrices A ∈ R^m × n, where A_ij∼ N(0,1/m), and then form the covariance matrix Σ=A^TA. We will examine the solution quality and the solving speed (efficiency) of the GPSPCA and ANSPCA algorithms for the SCoTLASS problems (<ref>), (<ref>) and (<ref>) on such high-dimensional, small-sample data. In the first scenario, let m=150, n=1000; the ℓ_0 sparsity parameters are set to k=5, 10, …, 250, respectively. Correspondingly, the sparsities are 0.005, 0.01, …, 0.25, respectively. We only consider the first PC and compare the GPSPCA algorithms under the (P1), (P2) and (P3) subproblems, and the ANSPCA algorithms under the (P1), (P2) and (P3) subproblems, in four indexes, that is, mean running time, mean iteration number of the subproblem, mean PEV and the mean number of nonzero elements (cardinality) of the first PC. Figure <ref> depicts the solution quality and efficiency of the GPSPCA algorithm under the (P1), (P2) and (P3) subproblems in four aspects: time, iterations, cardinality and PEV. From Figure <ref> we see that the cardinality and PEV of GP-P1, GP-P2 and GP-P3 are the same. The running times and iterations of GP-P1, GP-P2 and GP-P3 are also close. The simulation results confirm the conclusions in Theorem <ref> and Remark <ref>. Figure <ref> depicts the solution quality and efficiency of the ANSPCA algorithm under the (P1), (P2) and (P3) subproblems in four aspects: time, iterations, cardinality and PEV. From Figure <ref> we find that AN-P1, AN-P2 and AN-P3 are the same in iterations, cardinality and PEV on this random data. The running times of AN-P1 and AN-P2 are almost identical; only AN-P3 is slightly faster than AN-P1 and AN-P2. The simulation results again confirm the conclusions in Theorem <ref> and Remark <ref>. In addition, we also see that ANSPCA is faster than GPSPCA. In the second scenario, we consider problems with two settings of matrix dimensions: a fixed aspect ratio n/m = 10, and m fixed at 500 with exponentially increasing values of n. On one hand, we compare the solving speed of the GPSPCA and ANSPCA algorithms using the different root-finding methods QASB and MBNW.
One the other hand, we compare the speed of different constrained algorithms, including our GPSPCA and ANSPCA algorithms, the ℓ_1 constrained BCD-SPCA_ℓ_1 approach and the ℓ_0 constrained GPBB method. We test 20 times and calculate the average computational time for the extraction of one component (in seconds), which does not include the previous grid search time. We set the sparsity of the first PC extracted to be 5%. Table <ref> and Table <ref> show the solving speed of three GPSPCA algorithms and three ANSPCA algorithms using QASB and MBNW, respectively. From Table <ref> and Table <ref> we see that the solving speed of GPSPCA and ANSPCA algorithms by using QASB and MBNW methods are close, QASB is a little bit faster than MBNW in most cases on large-scale data. Table <ref> and Table <ref> further compare the solving speed of different constrained algorithms, including our GPSPCA and ANSPCA algorithms (using QASB root-finding method), the ℓ_1 constrained BCD-SPCA_ℓ_1 approach and the ℓ_0 constrained GPBB method. One also sees that there are no obvious difference among GP-P1, GP-P2, GP-P3 algorithms, among AN-P1, AN-P2 and AN-P3 algorithms, but ANSPCA is much faster than GPSPCA as a whole. ANSPCA is the fastest constrained algorithm, GPBB is the second, and BCDSPCA_ℓ_1 is the slowest maybe due to the ordering process. §.§.§ Convergence test of the proposed ANSPCA algorithm In these simulations, we will test the convergence result (ii) in Theorem <ref>. We randomly generate 50× 200 and 500× 200 data matrixes A with mean 0 and standard deviation 1, and use the same parameters μ=-0.1, σ=0.25 with that in <cit.>, and also set M to be 50, 6, 1, respectively. The change of the errors x^j+1- x^j_2 with the iterations are depicted in Figure <ref> and Figure <ref> for AN-P1, AN-P2, AN-P3 algorithms. From Figure <ref> and Figure <ref>, we see that three ANSPCA algorithms AN-P1, AN-P2, AN-P3 can all converge quickly, especially when taking a large M. §.§ Experiments on real data In this subsection, we compare our GPSPCA and ANSPCA algorithms with the ℓ_0 constrained GPBB method and the ℓ_1 penalized ConGradU, Gpower_ℓ_1 algorithms on three real datasets: Pitprops data, 20 newsgroups data and ColonCancer data. §.§.§ Pitprops dataset The Pitprops dataset, which stores 180 observations of 13 variables, was first introduced by Jeffers <cit.> to show the difficulty of interpreting PCs. This dataset has been a standard benchmark to evaluate algorithms for sparse PCA. In these experiments, for each utilized method, 6 sparse PCs were extracted with cardinality setting: 7-2-3-1-1-1 <cit.>, 7-2-4-7-2-3 and 12-6-5-4-3-2 <cit.>, and so that the ℓ_0 sparsity for GPBB is about 0.19, 0.32 and 0.41. The ℓ_1 norm parameters t of GPSPCA, ANSPCA and BCD-SPCA are taken to be [2.5 1.1 1.43 1.0002 1.0002 1.0002], [2.51 1.1 1.565 2.3 1.03 1.3] and [2.95 1.895 1.23 1.5 1.02 1.03] (which are near [√(7) √(2) √(3) 1 1 1], [√(7) √(2) √(4) √(7) √(2) √(3)], [√(12) √(6) √(5) √(4) √(3) √(2)]), respectively. Through the time-consuming grid search to get the penalized parameters of ℓ_1 norm for ConGradU and Gpower_ℓ_1. In GPSPCA, taking the lower bound α_min = 0.1 and upper bound α_max=2 of BB step size, and the tolerance error ε_GP = 1e-6; In ANSPCA, taking the lower bound α_min = -1e+7 and upper bound α_max=-0.1 of BB step size, the tolerance error ε_AN = 1e-6, and set M = 50 and σ = 0.25 same as in <cit.>. 
The QASB root-finding method is used in the GPSPCA and ANSPCA algorithms. Table <ref>-<ref> present the performance index results of GPSPCA and ANSPCA for the SCoTLASS problems (<ref>), (<ref>) and (<ref>) with three cardinality settings: 7-2-3-1-1-1, 7-2-4-7-2-3 and 12-6-5-4-3-2, respectively. From them, one also sees that the results of GPSPCA and ANSPCA for the (P1) and (P3) subproblems are the same. There is no obvious difference between the (P2) and (P1) (or (P3)) subproblems. This again confirms the conclusions in Theorem <ref> and Remark <ref>. Table <ref>-<ref> present the performance index results of all 6 algorithms considered in this paper with three cardinality settings: 7-2-3-1-1-1, 7-2-4-7-2-3 and 12-6-5-4-3-2, respectively. We bold the best indexes in each table. From Table <ref>-<ref> we see that although the performance indexes of our GPSPCA and ANSPCA algorithms are not always the best, the solution quality is close to the best results, which indicates that our algorithms perform stably and fairly on the Pitprops data. Figure <ref> then depicts the change of the PEVs and RREs under different numbers of sparse PCs for all 6 considered algorithms with the three cardinality settings 7-2-3-1-1-1, 7-2-4-7-2-3 and 12-6-5-4-3-2 on the Pitprops data. From Figure <ref> we also see that our algorithms perform stably and fairly on the Pitprops data. §.§.§ 20 newsgroups dataset The 20 Newsgroups data used in this subsection was downloaded from http://cs.nyu.edu/~roweis/data.html. It is a tiny version of the 20newsgroups data, with binary occurrence data for 100 words across 16242 postings (that is, m = 16242, n = 100), which is a typical low-dimensional, large-sample dataset. Sam Roweis has tagged the postings by the highest level domain in the array “newsgroups”. In these experiments, we set the ℓ_0 sparsity parameter k in GPBB to be 12, 26 and 36 so that the ℓ_0 sparsities are 0.12, 0.26 and 0.36, respectively, and then choose proper ℓ_1 sparsity parameters for the other algorithms. In GPSPCA, we take the lower bound α_min = 0.1 and upper bound α_max = 10000 of the BB step size, and the tolerance error ε_GP = 1e-6; in ANSPCA, we take the lower bound α_min = -1e+7 and upper bound α_max = -0.1 of the BB step size, the tolerance error ε_AN = 1e-6, and set M = 50 and σ = 0.25, the same as in <cit.>. Table <ref> - Table <ref> present the performance index results of GPSPCA and ANSPCA for the SCoTLASS problems (<ref>), (<ref>) and (<ref>) on 20newsgroups with three ℓ_0 sparsities: 0.12, 0.26 and 0.36. We test the GPSPCA and ANSPCA algorithms using the different root-finding methods QASB and MBNW. Since the solution quality is completely the same, we only show the running times (in seconds) of the GPSPCA and ANSPCA algorithms using QASB and MBNW in the last two rows, respectively. From Table <ref> - Table <ref>, one also sees that the solution quality of GPSPCA and ANSPCA for the (P1) and (P3) subproblems is the same, and there is no obvious difference between the (P2) and (P1) (or (P3)) subproblems. This again confirms the conclusions in Theorem <ref> and Remark <ref>. Moreover, one also observes that ANSPCA is faster than GPSPCA as a whole. MBNW is a little bit faster than QASB on the 20 newsgroups data in most cases. Table <ref>-Table <ref> present the performance index results of BCD-SPCA_ℓ_1, GPBB, ConGradU, Gpower_ℓ_1 and our GP-P3 and AN-P3 algorithms (using QASB). We bold the best indexes in the table. As shown in Table <ref>-Table <ref>, Gpower_ℓ_1 performs the best on the 20newsgroups data; our GPSPCA and ANSPCA algorithms perform fairly on the 20newsgroups data.
When the sparsity is higher, the solution quality and efficiency of our GPSPCA and ANSPCA algorithms are better. §.§.§ ColonCancer dataset ColonCancer dataset is similar to the yeast gene expression dataset. It contains expression levels of 2000 genes taken in 62 different samples (20 normal samples and 42 cancerous sample). For each sample it is indicated whether it came from a tumor biopsy or not. In these experiments, we will examine the performance of GPSPCA and ANSPCA algorithms for SCoTLASS problems (<ref>), (<ref>) and (<ref>), and compare them with the ℓ_0-constrained GPBB method in <cit.> for such high-dimension and small-sample data. Here ten PCs are retained, the ℓ_0 sparsity are set to be 0,025 and 0.05, respectively. In GPSPCA, taking the lower bound α_min = 0.1 and upper bound α_max = 1e+7 of BB step size, and the tolerance error ε_GP=1e-7; In ANSPCA, the lower bound α_min = - 1e+7 and upper bound α_max=-0.1 of BB step size, the tolerance error ε_AN=1e-7, and set M = 50 and σ = 0.25 same as in <cit.>. Table <ref> and Table <ref> present the performance index results of GPSPCA and ANSPCA for SCoTLASS problems (<ref>), (<ref>) and (<ref>) on ColonCancer data with two ℓ_0 sparsity: 0.025, and 0.05. We also test GPSPCA and ANSPCA algorithms using different root-finding methods QASB and MBNW. The solution quality are still completely same, we also show the running time (in seconds) of GPSPCA and ANSPCA algorithms using QASB and MBNW in the last two rows, respectively. From Table <ref> and Table <ref>, one see that the solution quality of GPSPCA and ANSPCA for (P1), (P2) and (P3) subproblems are the same for very small t. This again confirms the conclusions in Theorem <ref> and Remark <ref>. Moreover, one also observe that ANSPCA is faster than GPSPCA as a whole. As shown in Table <ref> and Table <ref>, QASB is a little bit faster than MBNW on ColonCancer data. Table <ref> and Table <ref> present the performance index results of BCD-SPCA_ℓ_1, GPBB, Gpower_ℓ_1 and our GP-P3 and AN-P3 algorithms (using QASB) with two ℓ_0 sparsity: 0.025 and 0.05 on ColonCancer data. We bold the best indexes in the table. As shown in Table <ref> and Table <ref>, Gpower_ℓ_1 performs the best on ColonCancer data; our GPSPCA and ANSPCA algorithms perform well and stably on ColonCancer data, especially they are faster than the other two constrained BCD-SPCA_ℓ_1, GPBB algorithms. Figure <ref> further depicts the change of PEV and RRE with different numbers of sparse PCs for GPSPCA, ANSPCA, BCDSPCA_ℓ_1, GPBB and Gpower_ℓ_1 on ColonCancer data. From Figure <ref> we see that PEV increases and RRE decreases with growth of the numbers of sparse PCs. Same as shown in Table <ref> and Table <ref>, Gpower_ℓ_1 performs the best on ColonCancer data; our GPSPCA and ANSPCA algorithms perform well on ColonCancer data. As a whole, through comparing the proposed GPSPCA and ANSPCA algorithms for SCoTLASS problems (<ref>), (<ref>) and (<ref>) with the present typical algorithms GPBB, BCDSPCA_ℓ_1, ConGradU and Gpower_ℓ_1 on three typical real dataset, we find that our proposed algorithms GPSPCA and ANSPCA for three SCoTLASS problems perform stably, are faster than the other two constrained algorithms BCD-SPCA_ℓ_1 and GPBB on the large-scale dataset. 
Although our GPSPCA and ANSPCA algorithms did not always perform the best, their ℓ_1 sparsity parameters are easy to set since the initial vector can be chosen simply; moreover, since the (P1), (P2) and (P3) subproblems have analytic solutions, the proposed GPSPCA and ANSPCA algorithms can all be computed efficiently and used for large-scale computation. From the experimental results we also see that Gpower_ℓ_1 performs the best in solution quality and efficiency; however, its penalty parameters are difficult to choose, and one needs to spend a much longer time finding them. § CONCLUSIONS In this paper, we employed the projection method onto the intersection of an ℓ_1 ball/sphere and an ℓ_2 ball/sphere proposed in <cit.>, designed a gradient projection method (GPSPCA for short) and an approximate Newton algorithm (ANSPCA for short) for the three SCoTLASS problems (<ref>), (<ref>) and (<ref>), and proved their global convergence. We showed the equivalence of the solutions to SPCA-P1 and SPCA-P3, and that the solutions to SPCA-P2 and SPCA-P3 are the same in most cases. We conducted several numerical experiments in the MATLAB environment to examine the performance of our proposed GPSPCA and ANSPCA algorithms. The simulation results confirmed the conclusions in Theorem <ref> and Remark <ref>, and showed that ANSPCA was faster than GPSPCA on large-scale data. Compared to the typical SPCA algorithms GPBB <cit.>, BCDSPCA_ℓ_1 <cit.>, ConGradU <cit.> and Gpower_ℓ_1 <cit.>, the proposed GPSPCA and ANSPCA algorithms performed fairly well and stably, and are highly efficient for large-scale computation. § ACKNOWLEDGEMENT We would like to express our gratitude to Associate Professor Hongying Liu for her kind help and useful recommendations. § REFERENCES
http://arxiv.org/abs/2307.02837v1
20230706081215
Restricting Dyck Paths and 312-avoiding Permutations
[ "Elena Barcucci", "Antonio Bernini", "Stefano Bilotta", "Renzo Pinzani" ]
math.CO
[ "math.CO" ]
Restricting Dyck Paths and 312-avoiding Permutations Elena Barcucci, Antonio Bernini, Stefano Bilotta, Renzo Pinzani Dipartimento di Matematica e Informatica “Ulisse Dini” Università di Firenze Viale G. B. Morgagni 65, 50134 Firenze, Italy Dyck paths having height at most h and without valleys at height h-1 are combinatorially interpreted by means of 312-avoiding permutations with some restrictions on their left-to-right maxima. The results are obtained by analyzing a restriction of a well-known bijection between the sets of Dyck paths and 312-avoiding permutations. We also provide a recursive formula enumerating these two structures using the ECO method and the theory of production matrices. As a further result we obtain a family of combinatorial identities involving Catalan numbers. AMS 2020 Mathematics Subject Classifications: Primary 05A19; Secondary 05A05, 05A15. Keywords: Dyck path, avoiding permutation, Catalan number. § INTRODUCTION Dyck paths have been widely used in several combinatorial applications. Here, we only recall their involvement in the theory of codes <cit.>, cryptography <cit.> and partially ordered structures <cit.>. The enumeration of Dyck paths has also received much attention in recent decades. An interesting paper dealing with this matter is the one by E. Deutsch <cit.>, where the author enumerates Dyck paths according to various parameters. A subclass of these paths has been considered thanks to the simple behavior of the recursive relations describing them and the rational nature of the related generating function. More precisely, the generating function associated to Dyck paths is algebraic, and it is rational when the paths are bounded <cit.>, for example with respect to the height. Kallipoliti et al. <cit.> consider Dyck paths having height less than or equal to a fixed value k. Moreover, in the same paper a further restriction is considered: the authors analyze some characteristics of Dyck paths avoiding valleys at a specified height. In our work we consider Dyck paths having height at most h and having no valleys at height h-1. We obtain an interesting relation with a subclass of 312-avoiding permutations (actually, we obtain a bijection) having some constraints on their left-to-right maxima. The structure of the paper is the following. In Section <ref> some preliminaries on Dyck paths and pattern avoiding permutations are introduced. Here we also recall a well-known bijection between the sets of Dyck paths and 312-avoiding permutations, which we are going to use extensively throughout the paper. Sections <ref> and <ref> are devoted to the generation of the considered Dyck paths (having height at most h and no valleys at height h-1) and the corresponding 312-avoiding permutations with some restrictions on their left-to-right maxima. The enumerative results are presented in Section <ref>. They provide the generating functions for the above mentioned classes, and a recurrence relation for their enumeration according to their size. Finally, we conclude the paper proposing some further developments on the present topics. A Dyck path is a lattice path in the discrete plane ℤ^2 from (0,0) to (2n,0) with up and down steps in {(1,1),(1,-1)}, never crossing the x-axis.
The number of up steps in any prefix of a Dyck path is greater or equal to the number of down steps and the total number of steps (the length of the path) is 2n. We denote the set of Dyck paths having length 2n (or equivalently semilength n) by 𝒟_n. A Dyck path can be codified by a string over the alphabet {U,D}, where U and D replace the up and down steps, respectively. The empty Dyck path is denoted by ε. The height of a Dyck path P is the maximum ordinate reached by one of its steps. A valley of P is an occurrence of the substring DU while a peak is an occurrence of the substring UD. The height of a valley (peak) is the ordinate reached by D (U). We denote by 𝒟_n^(h,k) the set of Dyck paths having semilength n and height at most h, and avoiding k-1 consecutive valleys at height h-1. The set of Dyck paths having semilength n with height at most h (without restriction on the number of valleys) is denoted by 𝒟_n^(h). Moreover, 𝒟^(h)=∑_n≥ 0𝒟_n^(h) and 𝒟^(h,k)=∑_n≥ 0𝒟_n^(h,k). The cardinalities of 𝒟_n^(h,k) and 𝒟_n^(h) are indicated by D_n^(h,k) and D_n^(h), respectively. Finally, the set 𝒟_n of unrestricted Dyck paths having semilength n≥ 0 is enumerated by the n-Catalan number C_n=1/n+12nn . When k=2, the set 𝒟_n^(h,2) represents the set of Dyck paths avoiding valleys at height h-1. In the present work we describe a combinatorial interpretation of 𝒟_n^(h,2) in terms of restricted permutations. In our context, the above mentioned permutations are related to the notion of pattern avoidance which can be generally described as the absence of a substructure inside a larger structure. In particular, an occurrence of a pattern σ in a permutation π=π_1 π_2 …π_n of length n is a sub-sequence (not necessarily constituted by consecutive entries) of π whose entries appear in the same relative order as those in σ. Otherwise, we say that π avoids the pattern σ, or that σ is a forbidden pattern for π. For example, π=352164 contains two occurrences of σ=312 in the sub-sequences 514 and 524, while π=34251 avoids the pattern σ. The set 𝒮_n(312) denotes the set of 312-avoiding permutations of length n which are enumerated by the n-Catalan number. We are going to briefly recall a well-known bijection φ, useful in the rest of the paper, between the classes 𝒟_n and 𝒮_n(312) (see for example <cit.>). Fix a Dyck path P and label its up steps by enumerating them from left to right (so that the ℓ-th up step is labelled ℓ). Next assign to each down step the same label of the up step it corresponds to. Now consider the permutation whose entries are constituted by the labels of the down steps read from left to right. Such a permutation π=φ(P) is easily seen to be 312-avoiding. As far as the inverse map φ^-1 is concerned, once fixed a 312-avoiding permutation π=π_1π_2 …π_n we can consider its factorization in terms of descending sub-sequences whose first elements coincide with the left-to-right maxima of π. A left-to-right maximum (l.r.M for short) is an element π_i which is greater than all the elements to its left, i.e., greater than all π_j with j<i. Denoting π_i_1, π_i_2,…, π_i_ℓ the left-to-right maxima of π, the corresponding Dyck path P=φ^-1(π) is obtained as follows: * write as many U's as π_i_1 (=π_1) followed by as many D's as the cardinality of the first descending sub-sequence headed by π_i_1; * for each j=2,…,ℓ, add as many U's as π_i_j - π_i_j-1 followed by as many D's as the cardinality of the sub-sequence headed by π_i_j. 
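As a concrete sketch of the two maps just described (paths are strings over {U, D}, permutations are lists of integers):

```python
def phi(path):
    """Map a Dyck word to its 312-avoiding permutation: up steps are labelled
    1, 2, ... from left to right, each down step inherits the label of the up
    step it closes, and the labels of the down steps are read left to right."""
    perm, stack, label = [], [], 0
    for step in path:
        if step == 'U':
            label += 1
            stack.append(label)
        else:
            perm.append(stack.pop())
    return perm

def phi_inverse(perm):
    """Rebuild the Dyck word from the factorization of a 312-avoiding permutation
    into maximal descending runs headed by its left-to-right maxima."""
    path, prev_max, i = "", 0, 0
    while i < len(perm):
        j = i + 1
        while j < len(perm) and perm[j] < perm[j - 1]:   # maximal descending run
            j += 1
        head = perm[i]                                   # a left-to-right maximum
        path += "U" * (head - prev_max) + "D" * (j - i)
        prev_max, i = head, j
    return path

# Example: phi("UUDDUUDD") == [2, 1, 4, 3] and phi_inverse([2, 1, 4, 3]) == "UUDDUUDD".
```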
Two easy properties of the left-to-right maxima of π∈ S_n(312), and of the corresponding steps in P=φ^-1(π), are summarized in the following:

Let P be a Dyck path in 𝒟_n and π=φ(P)=π_1 …π_n be the associated permutation in S_n(312). Each entry π_i_j corresponding to the first down step of a sub-sequence of consecutive down steps in P is a left-to-right maximum. Moreover, π_i_j - i_j is the height reached by the descending step in P corresponding to π_i_j.

§ A GENERATING ALGORITHM

The set 𝒟^(h,2) can be exhaustively generated by means of an ECO operator <cit.> which allows one to construct all the paths of size n+1 (the size of a path being its semilength) starting from the ones of size n. To this aim, consider a Dyck path P ∈𝒟_n^(h,2) which, obviously, starts with t ≤ h up steps U. We mark these steps, factorizing the path P as P=U_1 U_2 ⋯ U_t D P', where P' is a suitable Dyck suffix of length 2n-t-1. The idea is to consider some sites in P ∈𝒟_n^(h,2) where an insertion of the factor 𝐔𝐃 is allowed in order to obtain paths in 𝒟_n+1^(h,2) from P (so that these sites are called active sites). Thus, we define an operator ϑ for the class 𝒟_n^(h,2) as follows:
- if P=U_1 U_2 ⋯ U_t-1U_t D P' ∈𝒟_n^(h,2), with t < h, then
ϑ(P) = { 𝐔𝐃 U_1 U_2 ⋯ U_t-1 U_t D P',  U_1 𝐔𝐃 U_2 ⋯ U_t-1 U_t D P',  ⋯,  U_1 U_2 ⋯ U_t-1 𝐔𝐃 U_t D P',  U_1 U_2 ⋯ U_t-1 U_t 𝐔𝐃 D P' } ;
- if P=U_1 U_2 ⋯ U_t-1U_t D P' ∈𝒟_n^(h,2), with t=h, then
ϑ(P) = { 𝐔𝐃 U_1 U_2 ⋯ U_t-1 U_t D P',  U_1 𝐔𝐃 U_2 ⋯ U_t-1 U_t D P',  ⋯,  U_1 U_2 ⋯ 𝐔𝐃 U_t-1 U_t D P' } .
We note that the insertion of 𝐔𝐃 may create a valley in the paths of ϑ(P). In particular,
* the insertion of 𝐔𝐃 before the step U_j, with j=1,2,…,t-1, gives the occurrence of the valley 𝐃U_j having height j-1<h-1 in any case;
* the insertion of 𝐔𝐃 before the step U_t in the case t<h gives the occurrence of the valley 𝐃U_t having height equal to t-1<h-1;
* the insertion of 𝐔𝐃 after the step U_t in the case t<h does not give the occurrence of a valley (since the next step is again a D step).
In other words, the valley possibly generated by the insertion of 𝐔𝐃 has height less than h-1, therefore we have:

If x ∈ϑ(P), with P ∈𝒟_n^(h,2), then x ∈𝒟_n+1^(h,2).

In the spirit of the ECO method, we have to prove the following proposition.

The operator ϑ is an ECO operator.

The proof consists of the following steps:
i) If x,y ∈𝒟_n^(h,2) with x ≠ y, then ϑ(x) ∩ϑ(y)=∅.
ii) If x ∈𝒟_n+1^(h,2) then ∃ y ∈𝒟_n^(h,2) such that x ∈ϑ(y).
For case i), suppose that there exists a path P such that P ∈ϑ(x) and P ∈ϑ(y), with x ≠ y. From the description of the operator ϑ it is easy to realize that the first peak of P is precisely the one generated by the insertion of the factor 𝐔𝐃. By removing such a peak from P, we obtain a unique path. Thus, we would have x=y, against the hypothesis.
For case ii), if x∈𝒟_n+1^(h,2), then x=U^j𝐔𝐃T', with j=0,1,…, h-1, where T' is a Dyck suffix of suitable length. Then, the path y=U^jT' starts with at most h up steps U, so that y∈𝒟_n^(h,2). Clearly, x∈ϑ(y), since x is obtained by inserting 𝐔𝐃 into y.

A generating algorithm can be naturally described by means of the concept of succession rule. Such a concept was introduced by Chung et al. <cit.> to study reduced Baxter permutations. Recently this technique has been successfully applied to other combinatorial objects <cit.> and it has been recognized as an extremely useful tool for the ECO method <cit.>.
In all these cases there is a common approach to the examined enumeration problem: a generating tree is associated with a certain combinatorial class, according to some enumerative parameters, in such a way that the number of nodes appearing on level n of the tree gives the number of n-sized objects in the class. A succession rule is a formal system constituted by an axiom (a) and some productions (possibly only one) having the form

(k) ⇝ (e_1(k))(e_2(k))…(e_k(k)) ,

so that a succession rule Ω is often denoted by

Ω: {
  (a)
  (k) ⇝ (e_1(k))(e_2(k))…(e_k(k)) .
}

The symbols (a), (k) and e_i(k) are called labels (their values are positive integers), and play a crucial role when the succession rule Ω is represented by a generating tree. This is a rooted tree whose nodes are labelled with the labels of Ω. More precisely, the root is labelled with (a) and each node having label (k) has k children having labels e_1(k),e_2(k),…,e_k(k), according to the productions in Ω.

In our case the generating algorithm for 𝒟^(h,2) is performed by the operator ϑ and from its definition it is easy to realize that:
* the empty path ε can be labelled with the axiom (1) having production (1)⇝ (2): the path ε generates the path UD, having in turn label (2);
* every other path P can have label (2), (3), … or (h) depending on the number t of its starting up steps U. More precisely, if 1 ≤ t ≤ h-1 then P is labelled (t+1). Otherwise, if t=h, then P is labelled (h-1).
In order to write the productions of the labels (k) of P, with k=2,3,…,h, we observe that:
* if k<h then the k paths in ϑ(P) start, respectively, with 1,2,…, k up steps, so that, in turn, they are labelled (2),(3),…, (k+1). Then we can write the production
(k) ⇝ (2)(3)⋯(k)(k+1) ,  2≤ k <h.
* if k=h then the k paths in ϑ(P) start, respectively, with 1,2,…, h up steps. Since the path having h starting up steps is labelled (h-1), we can write the production
(h) ⇝ (2)(3)⋯(h-1)^2(h).
The two paths having label (h-1) are precisely the one starting with h up steps and the one starting with h-2 up steps.
Finally, the generating algorithm for 𝒟^(h,2) can be described by the following succession rule (for h≥3):

Ω_h: {
  (1)
  (1) ⇝ (2)
  (k) ⇝ (2)(3)⋯(k)(k+1),  2≤ k<h
  (h) ⇝ (2)(3)⋯(h-1)^2(h)
}

§ THE BIJECTION WITH A SUBSET OF 312-AVOIDING PERMUTATIONS

Let 𝒮_n^(h)(312)⊆ S_n(312) be the subset of permutations π∈ S_n(312) such that π_i_j - i_j ≤ h-1 for each left-to-right maximum π_i_j of π. The reader can easily check that the restriction φ_|𝒟_n^(h) of φ to the set 𝒟_n^(h) is a bijection between 𝒟_n^(h) and 𝒮_n^(h)(312) (using Proposition <ref>). We now consider the paths in 𝒟_n^(h,2) and characterize the corresponding permutations via the restriction of φ to this set. The following proposition holds.

Let P be a Dyck path in 𝒟_n^(h). Then, P ∈𝒟_n^(h,2) if and only if in the corresponding permutation π=φ(P) there is no left-to-right maximum π_i_j such that
* π_i_j - i_j = h-1 and
* π_{i_j+1} = π_{i_j}+1.

Suppose that π=φ(P) has no left-to-right maximum π_i_j such that π_i_j - i_j = h-1 and π_{i_j+1} = π_{i_j}+1. Let P=φ^-1(π) be the corresponding path. We have to prove that P∈𝒟_n^(h,2).
* If P has height less than h, then, surely, P∈𝒟_n^(h,2) and the proof is completed.
* Suppose that P has height equal to h and suppose, ad absurdum, that P∉𝒟_n^(h,2). Therefore, there exists a valley having height h-1. Thus, P can be written as P'U_iD_iU_i+1D_i+1P”, where P' and P” are, respectively, a Dyck prefix and a Dyck suffix having height h-1.
Considering the permutation π=φ(P)=π_1 …π_i π_{i+1}…π_n (where we highlight the entries π_i and π_{i+1} corresponding to the steps D_i and D_{i+1}), thanks to Proposition <ref> it is possible to observe that the elements π_i and π_{i+1}, associated with U_i and U_{i+1} respectively, are left-to-right maxima in π. Again from Proposition <ref>, we have π_i - i = h-1 and π_{i+1} - (i+1) = h-1 and, by substitution, π_{i+1} = π_i + 1, against the hypothesis. Thus, P∈𝒟_n^(h,2).
On the other side, suppose that P ∈𝒟_n^(h,2) and suppose, ad absurdum, that π=φ(P)=π_1 …π_i π_{i+1}…π_n ∈𝒮_n^(h)(312) has a left-to-right maximum π_i with π_{i+1}=π_i+1 and π_i - i = h-1. Then π=π_1 …π_i (π_i+1) …π_n. Since π_i < π_{i+1} and π is a 312-avoiding permutation, there is no π_l > π_i with l<i. Thus, both π_i and π_{i+1} are left-to-right maxima in π. From Proposition <ref>, the quantities π_i - i and π_{i+1} - (i+1) are the heights reached by the corresponding descending steps in P. Moreover, from the two hypotheses π_i - i = h-1 and π_{i+1}=π_i+1, we deduce π_{i+1} - (i+1) = (π_i+1) - (i+1) = h-1. Thus, P=φ^-1(π) can be factorized as P=P'U_iD_iU_{i+1}D_{i+1}P”, showing that P admits a valley having height h-1, against the hypothesis P∈𝒟_n^(h,2).

The permutations corresponding to the paths in 𝒟_n^(h,2) are denoted by 𝒮_n^(h,2)(312). By means of the above proposition, we have proved the following one.

There exists a bijection between the classes 𝒮_n^(h,2)(312) and 𝒟_n^(h,2), which is the restriction φ_|𝒟_n^(h,2).

From Proposition <ref>, a generating algorithm for the class 𝒮_n^(h,2)(312) according to the succession rule Ω_h can be obtained. A combinatorial interpretation of Ω_h in terms of permutations is then desired. First of all we note that, if π=π_1 …π_n ∈𝒮_n^(h,2)(312), then π_1 ≤ h. After that, we have to find an interpretation of the parameters appearing in the rule Ω_h. The axiom (1) at level 0 can be associated with the empty permutation, and its production, labelled with (2), can be associated with the permutation 1. The parameter (k) at level n in the rule Ω_h admits the following interpretation according to the value of π_1 in π∈ S_n^(h,2)(312):

(k) = π_1 + 1 if π_1 ≠ h,  and  (k) = π_1 - 1 if π_1 = h.

More precisely, if π_1 < h, a permutation π=π_1 …π_n ∈𝒮_n^(h,2)(312) at level n produces k=π_1+1 sons at level n+1 by inserting the element ℓ, with ℓ=1,2,…, π_1 +1, before π_1 and rescaling the sequence ℓπ in order to obtain a permutation π' ∈𝒮_n+1^(h,2)(312) (for the sake of clarity, each entry π_i of π greater than or equal to ℓ is increased by 1 in order to obtain π'). Otherwise, when π_1 = h, a given permutation π=π_1 …π_n ∈𝒮_n^(h,2)(312) at level n produces k=π_1-1=h-1 sons at level n+1 by inserting the element ℓ, with ℓ=1,2,…, h-1, before π. Analogously, π' ∈𝒮_n+1^(h,2)(312) is obtained by rescaling the sequence ℓπ, for each ℓ.

As an example, for h=3 the succession rule for 𝒮_n^(3,2)(312), or equivalently for 𝒟_n^(3,2), is the following:

Ω_3: {
  (1)
  (1) ⇝ (2)
  (2) ⇝ (2)(3)
  (3) ⇝ (2)(2)(3)
}

In Figure <ref> a graphical representation of the first levels of Ω_3 is shown in terms of permutations in 𝒮_n^(3,2).

§ ENUMERATION

The case h=2 is not included in the general formula (<ref>) for the succession rules. However, it is easy to see that in this case it is

Ω_2: {
  (1)
  (1) ⇝ (2)
  (2) ⇝ (1)(2)
}

The succession rule (<ref>) defines the Fibonacci numbers. According to the theory developed by Deutsch et al.
<cit.>, the production matrix P_2 associated to Ω_2 is P_2= ( 0 1 1 1 ) and, for h≥ 3, P_h= ( 0 u^t 0 P_h-1+eu^t ) where u^t is the row vector (1 0 0…) and e is the column vector (1 1 1 …)^t, of appropriate size and for what the generating function f_h(x) of the sequence corresponding to Ω_h is concerned, we have <cit.>, for h≥ 2, f_h(x)=1/1-xf_h-1(x) . When h=1, clearly the unique paths in 𝒟_n^(1,2) are the empty path ε and UD, so that the sequence (D_n^(1,2))_n≥ 0 is {1,1,0,0,…}, whose generating function is f_1(x)=1+x which is rational. Thanks to (<ref>) it is possible to deduce that also f_h(x) with h≥ 2 is rational, too. Therefore, we can consider its general form as follows: f_h(x)=p_h(x)/q_h(x) , where p_h(x) and q_h(x) are polynomials with suitable degrees. From (<ref>) and (<ref>) we obtain p_h(x) = q_h-1(x) q_h(x) = q_h-1(x)-xq_h-2(x) . Since the degree of the polynomial q_h(x) is ⌈h+1/2⌉ (it can be easily seen by induction), we can assume q_h(x)=a_h,0-a_h,1x-a_h,2x^2-…-a_h,jx^j j=⌈h+1/2⌉ . Clearly, it is a_h,j=0 if j>⌈h+1/2⌉. As a_1,0=1, thanks to (<ref>) we have a_h,0=a_h-1, and a_h,0=1 h≥ 1 . Moreover, q_h(x)=1-a_h,1x-a_h,2x^2-…-a_h,jx^j j=⌈h+1/2⌉ . Using the expression for q_h(x) in (<ref>), we obtain q_h(x)= 1-a_h-1,1x-a_h-1,2x^2-…-a_h-1,j-1x^j-1 -x( 1-a_h-2,1x-a_h-2,2x^2-…-a_h-2,j-2x^j-2) . For the identity theorem for polynomials, comparing formulas (<ref>) and (<ref>) for q_h(x), it is a_h,j= a_h-1,1+1 for j=1 a_h-1,j - a_h-2,j-1 for j=2,3,…,⌈h+1/2⌉ . In Table <ref> we list the first numbers of the coefficients a_h,j for some fixed values of h≥ 1 and j ≥ 1. On the diagonals, it is possible to observe a similarity with the A112467 sequence in The On-line Encyclopedia of Integer Sequences <cit.>. We have an explicit formula for the coefficients a_h,j thanks to the following proposition. For h≥ 2 and for j=1,2,…,⌈h+1/2⌉ we have: a_h,j=3j-h-2/jh-j+1j-1(-1)^j We proceed by induction on h. For h=2, it is j=1,2, and expression (<ref>) gives a_2,1=1 and a_2,2=1, agreeing with the expression for f_2(x)=1/1-x-x^2 derived from (<ref>) and f_1(x)=1+x. For h>2, we first analyze the case j=1. Using a_h,1=a_h-1,1+1 from (<ref>) and the inductive hypothesis, we have a_h,1 = a_h-1,1+1=(2-h)(-1)^1+1=h-1 which matches the value of a_h,1 returned by (<ref>). For j>2, we use a_h,j=a_h-1,j-a_h-2,j-1 from (<ref>) and, again, the inductive hypothesis. We get a_h,j = a_h-1,j-a_h-2,j-1 =3j-h-1/jh-jj-1(-1)^j-3j-h-3/j-1h-jj-2(-1)^j-1 =3j-h-1/jh-jj-1(-1)^j+3j-h-3/j-1h-jj-2(-1)^j . Expanding the binomial coefficients and with some manipulations, it is a_h,j=(-1)^j(h-j+1)!(3j-h-2)/j(j-1)!(h-2j+2)!=3j-h-2/jh-j+1j-1(-1)^j , as required. The proof is completed. In the sequel, we are going to evaluate a recurrence relation for the terms D_n^(h,2) involving the series expansion at x=0 of the generating function f_h(x)=p_h(x)/q_h(x)=q_h-1(x)/q_h(x)=∑_n≥ 0D_n^(h,2)x^n . The expression for q_h(x) becomes q_h(x)=1-∑_j=1^⌈h+1/2⌉3j-h-2/jh-j+1j-1(-1)^jx^j Thus, we obtain f_h(x)=1-∑_j=1^⌈h/2⌉3j-h-1/jh-jj-1(-1)^jx^j/1-∑_j=1^⌈h+1/2⌉3j-h-2/jh-j+1j-1(-1)^j x^j and ( 1-∑_j=1^⌈h+1/2⌉3j-h-2/jh-j+1j-1(-1)^j x^j) ( ∑_n≥ 0D_n^(h,2)x^n ) = =1-∑_j=1^⌈h/2⌉3j-h-1/jh-jj-1(-1)^jx^j Sorting the first part according to the increasing powers of x we have ∑_n≥ 0( D_n^(h,2)- ∑_j=1^⌈h+1/2⌉ D_n-j^(h,2)3j-h-2/jh-j+1j-1(-1)^j ) x^n = =1-∑_j=1^⌈h/2⌉3j-h-1/jh-jj-1(-1)^jx^j where D_ℓ^(h,2)=0 whenever ℓ≤ 0. 
For the identity theorem for polynomials we can deduce the desired recurrence relation D_n^(h,2) = 1 for n=0 ; ∑_j=1^⌈h+1/2⌉ D_n-j^(h,2)3j-h-2/jh-j+1j-1(-1)^j -3n-h-1/nh-nn-1 (-1)^n for n≥ 1 . A very interesting note arises when, once h is fixed, we ask for the number D_n^(h,2) of Dyck paths having semilength n≤ h. Clearly, in this case, it is D_n^(h,2)=C_n since all the Dyck paths of a certain semilegth n≤ h have height at most equal to n. Thanks to the above argument it is possible to derive interesting relations involving Catalan numbers. Indeed, for the above remark, posing h=n+α, we can write D_n^(n+α,2)=C_n, where α≥ 0 is integer. Then, it is possible to deduce the combinatorial identity involving Catalan numbers as follows: C_n=∑_j=1^n C_n-j3j-n-α-2/jn+α-j+1j-1(-1)^j -2n-α-1/nαn-1 (-1)^n . § FURTHER DEVELOPMENTS In this paper we analyzed the case k=2 leading to bounded Dyck paths avoiding valleys at given height (i.e., h-1) corresponding to the permutations in 𝒮_n^(h,2)(312). An interesting generalization could concern the cases k>2 in order to investigate what are the arising constraints on the subclasses of 312-avoiding permutations. The number k-1 of consecutive valleys allowed at height h-1 clearly affects the value and position of the l.r.M., as we have seen in the k=2 case. For values of k larger than 2, the permutations probably have a structure that can be described in terms of a suitable block decomposition. The above combinatorial identity (<ref>) is obtained by means of a purely combinatorial consideration. By virtue of this, similar relations are expected to arise even in cases k > 2. It might then be possible to derive a family of combinatorial identities as k varies. Another further line of research could consider the possibility to list the paths of 𝒟_n^(h,2) in a Gray code sense using the tools developed by Barcucci, Bernini et al. <cit.>. As mentioned in Section <ref>, these paths can be encoded by strings on the alphabet {U,D}, so the problem of defining a Gray code could be addressed by starting from the techniques developed by Vajnovszki et al. <cit.>. Moreover, the considered Dyck paths could be used for the construction of a strong non-overlapping code proposed by Barcucci et al. <cit.>. 99 BBP2 E. Barcucci, A. Bernini, and R. Pinzani. Strings from linear recurrences: a Gray code, in Combinatorics on Words: 13th International Conference, WORDS 2021, Rouen, France, September 13–17, 2021, Lect. Notes in Comp. Sci., Vol. 12857, Springer, 2021, pp. 40–49. BBP5 E. Barcucci, A. Bernini, and R. Pinzani. A strong non-overlapping Dyck code, in DLT, 2021, Lect. Notes in Comp. Sci., Vol. 12811, 2021, pp. 43–53. BDPP1 E. Barcucci, A. Del Lungo, E. Pergola, and R. Pinzani, ECO: a methodology for the enumeration of combinatorial object, J. Difference Equ. Appl. 5 (1999), 435–490. BBPSV2 A. Bernini, S. Bilotta, R. Pinzani, A. Sabri, and V. Vajnovszki, Prefix partitioned Gray codes for particular cross-bifix-free sets, Cryptogr. Commun. 6 (2014), 359–369. BBPSV A. Bernini, S. Bilotta, R. Pinzani, A. Sabri, and V. Vajnovszki, Gray code orders for q-ary words avoiding a given factor, Acta Inform. 52 (2015), 573–592. BBPV2 A. Bernini, S. Bilotta, R. Pinzani, and V. Vajnovszki, A Gray code for cross-bifix-free sets, Math. Structures Comput. Sci. 27 (2017), 184–196. BBPV1 A. Bernini, S. Bilotta, R. Pinzani, and V. Vajnovszki, A trace partitioned Gray code for q-ary generalized Fibonacci strings, J. Discrete Math. Sci. Cryptogr. 18 (2015), 751–761. BCFS A. Bernini, G. Cervetti, L. 
Ferrari, and E. Steingrímsson, Enumerative combinatorics of intervals in the Dyck pattern poset, Order 38 (2021), 473–487. bi S. Bilotta, Variable-length non-overlapping codes, IEEE Trans. Inform. Theory 63 (2017), 6530–6537. Bilotta2013157 S. Bilotta, E. Grazzini, E. Pergola, and R. Pinzani, Avoiding cross-bifix-free binary words, Acta Inform. 50 (2013), 157–173. Bilotta201910 S. Bilotta, E. Pergola, R. Pinzani, S. Rinaldi, Recurrence relations, succession rules and the positivity problem, J. Comput. System Sci. 104 (2019), 102–118. BM M. Bousquet-Mélou, Discrete excursion, Sém. Lothar. Combin 57 (2008), B57d. BMP M. Bousquet-Mélou and Y. Ponty, Culminating paths, Discrete Math. Theoret. Comput. Sci. 10 (2008), 125–152. CGHK F. R. K. Chung, R. L. Graham, V. E. Hoggatt, and M. Kleimann, The number of Baxter permutations, J. Combin. Theory Ser. A 24 (1978), 382–394. D E. Deutsch, Dyck path enumeration, Discrete Math. 24 (1999), 167–222. DFR E. Deutsch, L. Ferrari, S. and Rinaldi, Production matrices, Adv. in Appl. Math.34 (2005), 101–122. KST M. Kallipoliti, R. Sulzgruber, and E. Tzanaki, Patterns in Shi Tableaux and Dyck Paths, Order 39 (2022), 263–289. K D. E. Knuth, The Art of Computer Programming: Sorting and Searching, Addison-Wesley, 1966. SAB M. Saracevic, S. Adamovic, and E. Bisevac, Application of Catalan Numbers and the Lattice Path Combinatorial Problem in Cryptography, Acta Polytechnica Hungarica 15 (2018), 91–110. S N. J. A. Sloane, The On-Line Encyclopedia of Integer Sequences, http://oeis.org. VW V. Vajnovszki, and T. Walsh, A loop-free two-close Gray-code algorithm for listing k-ary Dyck words, J. Discrete Algorithms 4 (2006), 633–648.
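A brief computational check of the main results (the equality |𝒟_n^(h,2)| = |𝒮_n^(h,2)(312)| and the identity D_n^(h,2) = C_n for n ≤ h) can be carried out by brute force for small n; the following Python sketch, with helper names of our choosing, implements the definitions of Sections 2 and 4 directly:

from itertools import combinations, permutations
from math import comb

def dyck_words(n):
    # All Dyck words of semilength n over the alphabet {'U', 'D'}.
    def rec(prefix, ups, downs):
        if ups == n and downs == n:
            yield prefix
        if ups < n:
            yield from rec(prefix + 'U', ups + 1, downs)
        if downs < ups:
            yield from rec(prefix + 'D', ups, downs + 1)
    yield from rec('', 0, 0)

def in_D_h2(path, h):
    # Height at most h and no valley (factor 'DU') at height h-1.
    height, max_height = 0, 0
    for i, step in enumerate(path):
        height += 1 if step == 'U' else -1
        max_height = max(max_height, height)
        if step == 'D' and height == h - 1 and path[i + 1:i + 2] == 'U':
            return False
    return max_height <= h

def in_S_h2(perm, h):
    # 312-avoiding, every left-to-right maximum p_i satisfies p_i - i <= h-1,
    # and no left-to-right maximum with p_i - i == h-1 is followed by p_i + 1.
    if any(perm[b] < perm[c] < perm[a] for a, b, c in combinations(range(len(perm)), 3)):
        return False
    cur_max = 0
    for i, v in enumerate(perm, start=1):
        if v > cur_max:
            cur_max = v
            if v - i > h - 1:
                return False
            if v - i == h - 1 and i < len(perm) and perm[i] == v + 1:
                return False
    return True

h = 3
for n in range(8):
    paths = sum(in_D_h2(p, h) for p in dyck_words(n))
    perms = sum(in_S_h2(p, h) for p in permutations(range(1, n + 1)))
    assert paths == perms
    assert n > h or paths == comb(2 * n, n) // (n + 1)   # D_n^(h,2) = C_n for n <= h
    print(n, paths)   # for h = 3: 1, 1, 2, 5, 12, 29, 70, ...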
http://arxiv.org/abs/2307.02075v1
20230705073234
Combating Confirmation Bias: A Unified Pseudo-Labeling Framework for Entity Alignment
[ "Qijie Ding", "Jie Yin", "Daokun Zhang", "Junbin Gao" ]
cs.AI
[ "cs.AI", "cs.CL", "cs.LG" ]
Combating Confirmation Bias: A Unified Pseudo-Labeling Framework for Entity Alignment

Qijie Ding^1 (qijie.ding@sydney.edu.au), Jie Yin^1 (jie.yin@sydney.edu.au), Daokun Zhang^2 (daokun.zhang@monash.edu), Junbin Gao^1 (junbin.gao@sydney.edu.au)
^1 Discipline of Business Analytics, The University of Sydney, Sydney, NSW, Australia
^2 Department of Data Science & AI, Monash University, Melbourne, VIC, Australia

Entity alignment (EA) aims at identifying equivalent entity pairs across different knowledge graphs (KGs) that refer to the same real-world identity. It has been a compelling but challenging task that requires the integration of heterogeneous information from different KGs to expand the knowledge coverage and enhance inference abilities. To circumvent the shortage of prior alignment seeds provided for training, recent EA models utilize pseudo-labeling strategies to iteratively add unaligned entity pairs predicted with high confidence to the training data for model retraining. However, the adverse impact of confirmation bias has been largely overlooked during pseudo-labeling, thus hindering entity alignment performance. To systematically combat confirmation bias, we propose a Unified Pseudo-Labeling framework for Entity Alignment (UPL-EA) that explicitly eliminates pseudo-labeling errors to boost the accuracy of entity alignment. UPL-EA consists of two complementary components: (1) The Optimal Transport (OT)-based pseudo-labeling uses discrete OT modeling as an effective means to enable more accurate determination of entity correspondences across two KGs and to mitigate the adverse impact of erroneous matches. A simple but highly effective criterion is further devised to derive pseudo-labeled entity pairs that satisfy one-to-one correspondences at each iteration. (2) The cross-iteration pseudo-label calibration operates across multiple consecutive iterations to further improve the pseudo-labeling precision rate by reducing the local pseudo-label selection variability with a theoretical guarantee. The two components are respectively designed to eliminate Type I and Type II pseudo-labeling errors identified through our analysis. The calibrated pseudo-labels are thereafter used to augment prior alignment seeds to reinforce subsequent model training for alignment inference. The effectiveness of UPL-EA in eliminating pseudo-labeling errors is both theoretically supported and experimentally validated. The experimental results show that our approach achieves competitive performance with limited prior alignment seeds.

August 1, 2023

§ INTRODUCTION

Knowledge Graphs (KGs) are large-scale structured knowledge bases that represent real-world entities (or concepts) and their relationships as a collection of factual triplets. Recent years have witnessed the release of various open-source KGs (e.g., Freebase <cit.>, YAGO <cit.> and Wikidata <cit.>) from general to specific domains and their proliferation to empower many artificial intelligence (AI) applications, such as recommender systems <cit.>, question answering <cit.> and information retrieval <cit.>. Nevertheless, it has become a well-known fact that real-world KGs suffer from incompleteness arising from their complex, semi-automatic construction process. This has led to an increasing number of research efforts on knowledge graph completion, such as TransE <cit.> and TransH <cit.>, which aim to add missing facts to individual KGs.
Unfortunately, due to its limited coverage and incompleteness, a single KG cannot fulfill the requirements for complex AI applications that build upon heterogeneous knowledge sources. This necessitates the integration of heterogeneous information from multiple individual KGs to enrich knowledge representation. Entity alignment (EA) is a crucial task towards this objective, which aims to establish the correspondence between equivalent entity pairs across different KGs that refer to the same real-world identity. Over the last decade, there has been a surge of research efforts dedicated to entity alignment across KGs. Most mainstream entity alignment models are based on knowledge graph embedding, which embeds different KGs into a common latent space so that similarities between entities can be directly measured via their embeddings for alignment purposes. To learn better KG embeddings, methods like GCN-Align <cit.> leverage graph convolutional networks (GCNs) <cit.> to capture structural and neighboring entity information for alignment inference. Along this line of research, recent studies <cit.> utilize a highway strategy <cit.> to alleviate over-smoothing during GCN propagation, or jointly learn entity and relation embeddings for improving the accuracy of entity alignment. These models, however, require an abundant amount of pre-aligned entity pairs (known as prior alignment seeds) during training, which are labor-intensive and costly to acquire in real-world KGs. To tackle the shortage of prior alignment seeds provided for training, recently proposed models, such as BootEA <cit.>, IPTransE <cit.>, MRAEA <cit.>, and RNM <cit.>, adopt a bootstrapping strategy that iteratively selects unaligned entity pairs predicted with high confidence as pseudo-labels and adds them to prior alignment seeds for either model re-training or posterior embedding distance rectification. The bootstrapping strategy originates from the field of statistics and is also referred to as pseudo-labeling – a predominant learning paradigm proposed to tackle the label scarcity problem in semi-supervised learning. In general semi-supervised learning, pseudo-labeling approaches inherently suffer from confirmation bias <cit.>. The bias stems from using incorrectly predicted labels generated on unlabeled data for subsequent training, thereby misleadingly increasing confidence in incorrect predictions and generating a biased model with degraded performance. Unfortunately, there is a lack of understanding of the fundamental factors that give rise to confirmation bias for pseudo-labeling-based entity alignment. Our analysis (see Section <ref>) advocates that the confirmation bias is exacerbated during pseudo-labeling for entity alignment. Due to the lack of sufficient prior alignment seeds at the early stages of training, the existing models are inclined to learn uninformative entity embeddings and make incorrect predictions on unaligned entity pairs. As a consequence, the pseudo-labels generated based on the model's unreliable high-confidence predictions are error-prone. The resultant errors can be characterized into two types, as depicted in Fig. <ref>. Type I pseudo-labeling errors refer to conflicted misalignments, where a single entity in one KG is simultaneously aligned with multiple entities in another KG with erroneous matches. Type II pseudo-labeling errors refer to one-to-one misalignments, where an entity in one KG is incorrectly matched with an entity in another KG. 
These pseudo-labeling errors, if not properly mitigated, would propagate into subsequent model training, thereby jeopardizing the efficacy of pseudo-labeling-based entity alignment. However, current pseudo-labeling-based EA models have made only limited attempts to alleviate alignment conflicts using simple heuristics <cit.> or imposing constraints to enforce hard alignments <cit.>, while the confirmation bias has been left under-explored. To fill in the research gap, in this work, we propose a Unified Pseudo-Labeling framework for Entity Alignment (UPL-EA) to systematically combat confirmation bias. The scope of our work focuses primarily on a semi-supervised learning paradigm, where we aim to eliminate Type I and Type II pseudo-labeling errors to boost the precision of entity alignment. Towards this goal, UPL-EA comprises two complementary components – within-iteration Optimal Transport (OT)-based pseudo-labeling and cross-iteration pseudo-label calibration. Specifically, OT-based pseudo-labeling uses discrete OT modeling as an effective means to enable more accurate determination of entity correspondences across KGs and to mitigate the adverse impact of erroneous matches. The discrete OT models the alignment as a probabilistic matching process between the entity sets in two KGs with the minimal overall transport cost. The transport cost is calculated according to the rectified distance between entity embeddings obtained through graph convolution augmented with global-level semantics. Based on the estimated OT solution, a simple but highly effective selection criterion is further devised to infer pseudo-labeled entity pairs that satisfy one-to-one correspondence, thus eliminating conflicted misalignments (Type I pseudo-labeling errors) at each iteration. The pseudo-label calibration operates across multiple consecutive iterations to reduce the local pseudo-label selection variability with theoretical guarantee, thereby eliminating one-to-one misalignments (Type II pseudo-labeling errors). The calibrated pseudo-labels are then used to augment prior alignment seeds to reinforce subsequent model training for alignment inference. The contribution of this paper can be summarized as follows: * To our best knowledge, we are the first to investigate the confirmation bias problem for pseudo-labeling-based entity alignment. We conceptually and empirically identify two types of pseudo-labeling errors that give rise to confirmation bias. * Motivated by our analysis, we propose a unified pseudo-labeling framework (UPL-EA) that systematically alleviates the problem of confirmation bias for pseudo-labeling-based entity alignment. UPL-EA is carefully designed to explicitly eliminate Type I and Type II pseudo-labeling errors, leading to significant performance gains. * Extensive experiments on benchmark KG datasets demonstrate that UPL-EA is able to significantly eliminate pseudo-labeling errors and yield competitive performance with respect to limited amounts of prior alignment seeds, outperforming state-of-the-art supervised and semi-supervised baselines. The remainder of this paper is organized as follows. Section <ref> provides a problem formulation of pseudo-labeling-based entity alignment and presents an empirical analysis of confirmation bias that motivates this work. Section <ref> presents the proposed framework, followed by extensive experimental evaluation reported in Section <ref>. Related works are discussed in Section <ref>, and we conclude the paper in Section <ref>. 
§ PRELIMINARIES In this section, we first provide a problem formulation of pseudo-labeling-based entity alignment. Then, we perform a thorough analysis of confirmation bias during pseudo-labeling, which motivates the design of our proposed framework. §.§ Problem Formulation A knowledge graph (KG) can be defined as 𝒢 = {ℰ, ℛ, 𝒯} with the entity set ℰ, relation set ℛ, and triplet set 𝒯. We use e ∈ℰ, r ∈ℛ, (e_i, r, e_j) ∈𝒯 to represent an entity, a relation, and a triplet, respectively. Each entity e is characterized by an attribute vector x_e, which can be obtained from entity names with semantic meanings or entity textual descriptions. Formally, two heterogeneous KGs are given for the task of entity alignment, i.e., 𝒢_1 = {ℰ_1, ℛ_1, 𝒯_1} and 𝒢_2 = {ℰ_2, ℛ_2, 𝒯_2}. An entity e_i∈ℰ_1 in 𝒢_1 is likely to correspond to the same concept with another entity e_j∈ℰ_2 in 𝒢_2, denoted as e_i⇔ e_j, and vice versa. To provide supervision for entity alignment, a small number of pre-aligned entity pairs between 𝒢_1 and 𝒢_2 are sometimes provided as prior alignment seeds in the form of ℒ^0_e={ (e_i, e_j) | e_i∈ℰ_1, e_j∈ℰ_2, e_i⇔ e_j}. Apart from prior alignment seeds, there are two sets of unaligned entities ℰ'_1⊆ℰ_1 and ℰ'_2⊆ℰ_2 respectively from two KGs. Given two KGs, 𝒢_1 and 𝒢_2, the task of pseudo-labeling-based entity alignment iterates over the following two components: (1) Training an entity alignment (EA) model that learns entity embeddings f_θ(ℒ_e): h_e →ℝ^d using the available alignment seeds ℒ_e, where ℒ_e is set to the prior alignment seeds ℒ_e^0 at the first iteration, e∈ℰ_1∪ℰ_2, and d is the embedding dimension; (2) Designing a pseudo-label selection component that selects unaligned entity pairs g(h_e_i, h_e_j):ℰ'_1×ℰ'_2→{0,1} using learned entity embeddings h_e, then augments alignment seeds with pseudo-labels, i.e., ℒ_e←ℒ_e∪{(e_i,e_j)| e_i∈ℰ'_1,e_j∈ℰ'_2,g(h_e_i,h_e_j)=1}. To achieve accurate entity alignment, we need to ensure that: (1) the EA model f_θ has the capacity to learn informative, high-quality entity embeddings; (2) the pseudo-label selection component g can select precise pseudo-labeled entity pairs for augmenting alignment seeds, providing high-quality supervision for boosting the EA model. The two components reinforce each other alternately to make accurate inferences for entity alignment. §.§ Analysis of Confirmation Bias To investigate the impact of confirmation bias on pseudo-labeling-based entity alignment, we perform an error analysis of a naive pseudo-labeling strategy, where unaligned entity pairs with an embedding distance smaller than a pre-specified threshold are selected as pseudo-labels. To gain insights into the underlying causes of confirmation bias, we explicitly analyze the variations in Type I and Type II pseudo-labeling errors throughout the training process, when the naive pseudo-labeling strategy is used. This analysis is conducted on a widely used cross-lingual benchmark dataset DBP15KZH_EN <cit.> (Details of the dataset are presented in Section <ref>). We follow the conventional 30%-70% ratio to randomly split the training and test data on the 15,000 ground truth entity alignment pairs provided. The proportions of two types of errors and entity alignment accuracy are calculated at different iterations on the test data. As shown in Fig. <ref>, Type I and Type II pseudo-labeling errors could propagate and increase proportionally via the naive pseudo-labeling strategy as training proceeds. 
The accumulated errors give rise to confirmation bias, thereby severely hindering the capability of pseudo-labeling for performance improvements. Our analysis affirms that the confirmation bias essentially stems from Type I and Type II pseudo-labeling errors. These errors, if not adequately addressed, would propagate into subsequent model training, thus diminishing the effectiveness of pseudo-labeling for entity alignment. Our work is thus motivated to explicitly eliminate Type I and Type II pseudo-labeling errors, leading to significant performance improvements for entity alignment inference. As shown in Fig. <ref>, our proposed method outperforms the naive pseudo-labeling strategy, yielding substantial performance gains. § THE PROPOSED UPL-EA FRAMEWORK With the insights derived from our analysis in Section <ref>, the proposed UPL-EA framework is designed to systematically address the confirmation bias issue for pseudo-labeling based entity alignment. At the core of UPL-EA are two key complementary modules that greatly enhance the entity alignment (EA) model: * Module 1 (Within-Iteration OT-Based Pseudo-Labeling) utilizes Optimal Transport (OT) modeling to more precisely determine the correspondences between entities across KGs so as to eliminate conflicted misalignments at each iteration. (Addressing Type I pseudo-labeling errors) * Module 2 (Cross-Iteration Pseudo-Label Calibration) further calibrates pseudo-labels generated from Module 1 across iterations to reduce local variability of pseudo-label selection so as to minimize one-to-one misalignments across iterations. (Addressing Type II pseudo-labeling errors) Fig. <ref> illustrates an overview of the proposed UPL-EA framework. At each iteration, the EA model learns informative entity embeddings via a global-local aggregation scheme based on the available alignment seeds. Then, Module 1 takes entity embeddings as input to pseudo-label conflict-free entity alignment pairs (eliminating Type I pseudo-labeling errors), which are further used by the EA model in the subsequent iteration. After every m iterations, pseudo-labels across iterations are passed on to Module 2 for further calibration, preventing one-to-one misalignments (Type II pseudo-labeling errors) from accumulating in subsequent model training. The calibrated pseudo-labels are then used to augment the prior alignment seeds. The augmented alignment seeds are used by the next m iterations to perform pseudo-labeling-based entity alignment. The EA model and the pseudo-label selection component empowered by Module 1 and Module 2 mutually reinforce each other to learn informative entity embeddings. Lastly, the learned entity embeddings are used for entity alignment inference. §.§ Entity Alignment Model The entity alignment (EA) model aims to learn informative entity embeddings and perform model training for entity alignment inference. §.§.§ Global-Local Aggregation for Entity Embedding Learning To enable the EA model to learn informative entity embeddings, it is essential to capture the relational information conveyed in KGs. Towards this end, we employ a global-local aggregation scheme <cit.> to perform two levels of neighborhood aggregation for each entity in 𝒢_1 and 𝒢_2. Global-Level Relation Aggregation. To capture the global statistics of a given relation r_i, we construct its feature vector x_r_i by taking into account all triplets associated with r_i in two KGs. 
To be specific, we compute the feature vector x_r_i for each relation r_i ∈ℛ_1 ∪ℛ_2, by taking an average of the concatenated feature vectors of its associated head and tail entities: x_r_i = 1/|{(e_h, r_i, e_t) ∈𝒯_1 ∪𝒯_2}|∑_(e_h, r_i, e_t) ∈𝒯_1 ∪𝒯_2 [x_e_hx_e_t], where [··] denotes the concatenation operation, {(e_h, r_i, e_t) ∈𝒯_1 ∪𝒯_2} is the set of all triplets associated with relation r_i . x_e_h and x_e_t∈ℝ^d are the feature vectors of entity e_h and e_t, respectively. As such, the obtained relation feature vector x_r_i∈ℝ^2d can capture the intrinsic relational semantics of relation r_i. Then, for each entity e_i∈ℰ_1 ∪ℰ_2, we construct its averaged neighboring relation feature as follows: x_e_i_rels = 1/|𝒩_r(e_i)|∑_r_j ∈𝒩_r(e_i)𝕀_e_i(r_j) ·x_r_j, where 𝒩_r(e_i) is the set of one-hop neighboring relations of entity e_i. 𝕀_e_i(r_j) indicates the direction of relation r_j with respect to e_i, where -1 indicates the successor of e_i and +1 indicates the predecessor of e_i. The incorporation of directional information enables to capture richer relational information in the neighborhood, benefiting the learning of entity embeddings. To perform global-level relation aggregation, we concatenate each entity's averaged neighboring relation feature x_e_i_rels∈ℝ^2d with its original feature x_e_i∈ℝ^d, followed by a non-linear transformation to capture the interplay between different dimensions: h̃_e_i^(1) = ReLU(W_1[x_e_ix_e_i_rels]+b_1), where ReLU(·) is the ReLU activation function, W_1∈ℝ^d× 3d and b_1∈ℝ^d are the weight matrix and the bias vector, respectively. To avoid over-smoothing and deliver a more effective representation scheme, we augment the embedding vector h̃_e_i^(1) with the original entity feature vector x_e_i: h_e_i^(1) = h̃_e_i^(1) + x_e_i. Local-Level Entity Aggregation. After obtaining entity embeddings via relation aggregation at the global level, we further perform local-level entity aggregation to capture neighboring entity structures. For this purpose, we take advantage of a two-layer GCN <cit.> along with a highway gate strategy <cit.> to avoid over-smoothing. Formally, we first stack relation aggregated entity embeddings h_e_i^(1) for each entity e_i∈ℰ_1∪ℰ_2 into an embedding matrix H^(1)_e∈ℝ^(|ℰ_1|+|ℰ_2|)× d, where each row indicates the embedding vector of the corresponding entity. Then, the entity embedding matrix H^(1)_e is updated as follows from l=1: { H̃_e^(l+1) = ReLU(D̃^-1/2ÃD̃^-1/2 H_e^(l) W_l+1), H_e^(l+1) = T(H_e^(l)) ⊙H̃^(l+1)_e + (1-T(H_e^(l))) ⊙ H^(l)_e. . Above, Ã = A + I_|ℰ_1|+|ℰ_2| denotes the undirected adjacency matrix of the combined graph 𝒢_1 ∪𝒢_2 with self-connections that are represented by the (|ℰ_1|+|ℰ_2|)× (|ℰ_1|+|ℰ_2|) identity matrix I_|ℰ_1| +|ℰ_2|. D̃_ii = ∑_j Ã_ij is the degree matrix, W_l+1 ∈ℝ^d×d is the weight matrix at layer l, and ⊙ is the Hadamard product (or element-wise multiplication). In addition, T(H_e^(l))=σ(H_e^(l)W_T^(l)+b_T^(l))∈ℝ^(|ℰ_1|+|ℰ_2|)× d is the transformation gate obtained from H_e^(l), where σ(·) is the Sigmoid activation function, W_T^(l)∈ℝ^d× d and b_T^(l)∈ℝ^d are the gate-specific weight matrix and bias vector, respectively. The transformation gate helps filter out the over-smoothed feature dimensions. Finally, the entity embedding matrix is obtained as H_e^(3). Accordingly, the final embedding for entity e_i is h_e_i=h^(3)_e_i∈ℝ^d, where h^(3)_e_i∈ℝ^d is the transpose of the row vector of H_e^(3) indexed by entity e_i. 
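A minimal PyTorch-style sketch of this global-local aggregation scheme is given below (tensor names, shapes and module structure are our own illustrative choices, not the authors' released implementation); it assumes the entity feature matrix, the averaged neighboring-relation features and the symmetrically normalized adjacency matrix of the combined graph have been precomputed:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalLocalAggregation(nn.Module):
    # Relation aggregation followed by a two-layer GCN with highway gates.
    def __init__(self, d):
        super().__init__()
        self.rel_proj = nn.Linear(3 * d, d)                    # W_1, b_1
        self.gcn = nn.ModuleList([nn.Linear(d, d, bias=False) for _ in range(2)])
        self.gate = nn.ModuleList([nn.Linear(d, d) for _ in range(2)])

    def forward(self, x_e, x_rels, adj_norm):
        # x_e: (N, d) entity features; x_rels: (N, 2d) averaged neighboring
        # relation features; adj_norm: (N, N) normalized adjacency of G1 ∪ G2.
        h = F.relu(self.rel_proj(torch.cat([x_e, x_rels], dim=-1))) + x_e
        for gcn, gate in zip(self.gcn, self.gate):
            h_agg = F.relu(adj_norm @ gcn(h))                  # GCN propagation
            t = torch.sigmoid(gate(h))                         # highway gate T(H)
            h = t * h_agg + (1.0 - t) * h                      # gated residual update
        return h                                               # final entity embeddings

Here adj_norm plays the role of D̃^-1/2ÃD̃^-1/2, and the gated update mirrors the highway combination of the propagated and previous embeddings.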
§.§.§ Model Training for Entity Alignment Given a set of alignment seeds ℒ_e together the constructed entity embeddings, the loss function for entity alignment can be defined as: L = ∑_(e_i, e_j) ∈ℒ_e∑_(e_i', e_j') ∈ℒ^'_eR̂(e_i, e_j)·max(0, d(e_i, e_j)-d(e_i', e_j')+γ), where ℒ_e^' is the set of sampled negative alignment pairs that are not included in ℒ_e. γ is a hyper-parameter that determines the margin to separate positive pairs from negative pairs. d(e_i,e_j) indicates the embedding distance between entity pair (e_i,e_j) across two KGs, defined as: d(e_i,e_j)=h_e_i-h_e_j_1. Above, R̂(e_i, e_j) indicates a weighting factor that measures distinct contribution of each alignment pair (e_i, e_j) to the overall loss. For any pair from the prior alignment seeds, R̂(e_i, e_j)=1, otherwise R̂(e_i,e_j)=1/(1+e^d̃(e_i,e_j)) ∈(0,1), where d̃(e_i,e_j) denotes the rectified embedding distance defined later using Eq.(<ref>) in Section <ref>. For model training, we adopt an adaptive negative sampling strategy to obtain a negative alignment set ℒ^'_e. Specifically, for each positive pair (e_i,e_j) in the alignment set ℒ_e, we select K nearest entities of e_i (or e_j) measured by the embedding distance in Eq. (<ref>) to replace e_j (or e_i) and form K negative counterparts (e_i,e_j') (or (e_i',e_j)). This adaptive strategy helps produce “hard" negative alignment pairs and push their associated entities to be apart from each other in the embedding space. After minimizing the loss on the set of alignment seeds, the learned entity embeddings h_e can be used to infer unaligned entity pairs as pseudo-labels, which are used to augment alignment seeds for subsequent training. However, as we have discussed earlier in Section <ref>, using a naive pseudo-label selection strategy would inevitably result in a large portion of erroneous pseudo-labels in the form of Type I and Type II pseudo-labeling errors, leading to confirmation bias. Below, we detail the two key modules that are explicitly proposed to mitigate Type I and Type II pseudo-labeling errors, respectively. §.§ Within-Iteration OT-Based Pseudo-Labeling To effectively mitigate Type I pseudo-labeling errors, we propose to use Optimal Transport (OT) as an effective means to more precisely determine one-to-one correspondence between entities across KGs. This ensures that correct entity alignment pairs can be sufficiently pseudo-labeled at each iteration to boost entity alignment performance significantly. The within-iteration OT-based pseudo-labeling module uses the distances between the learned entity embeddings to define the transport cost for OT-based modeling. However, directly employing the original embedding distances would be error-prone, especially at early training stages where the learned entity embeddings are uninformative. Thus, we resort to rectifying the embedding distance defined in Eq. (<ref>) using relational neighborhood matching <cit.>. The principle of distance rectification is to leverage the relational context within local neighborhoods to aid in determining the extent to which two entities should be aligned. Intuitively, if two entities e_i ∈ℰ_1 and e_j ∈ℰ_2 share more aligned neighboring relations/entities, the distance between their embeddings should be smaller, and thus they are more likely to be aligned with each other. 
As such, the embedding distance used for pseudo-labeling is rectified as d̃(e_i,e_j)=d(e_i,e_j)-λ s(e_i,e_j), where λ is a trade-off parameter, s(e_i, e_j) is a score function indicating the degree to which relational contexts of two entities match. It is calculated by determining whether neighboring entities/relations between (e_i, e_j) are aligned or not: s(e_i, e_j) = (∑_ℳ_e_i,e_j1/|{e| (e,r,e_k)∈𝒯_1}|·|{e| (e,r',e_l)∈𝒯_2}|)/(|𝒩_e(e_i)|+|𝒩_e(e_j)|), where ℳ_e_i,e_j is the set of matched neighboring relation-entity tuples: ℳ_e_i,e_j={(e_k, e_l, r,r')| e_k∈𝒩_e(e_i), e_l∈𝒩_e(e_j), (e_i,r,e_k)∈𝒯_1, (e_j,r',e_l)∈𝒯_2, (e_k,e_l)∈ℒ_e, (r, r')∈ℒ_r}, 𝒩_e(e) is the set of neighboring entities of entity e. ℒ_r is the set of aligned relations obtained via pseudo-labeling based on the embedding distance between relations. Moreover, due to the smoothing effect of feature aggregation, entities in the neighborhood tend to have indistinguishable representations. As a result, simply using a pre-specified threshold to pseudo-label entity pairs would inevitably result in conflicted misalignments. To circumvent this issue, we propose an OT-based strategy to discover a globally optimal alignment configuration with minimal overall inconsistency. To warrant one-to-one alignments, we model the alignment between two unaligned sets ℰ'_1 and ℰ'_2 as an OT process, i.e., transporting each entity e_i ∈ℰ'_1 to a unique entity e_j ∈ℰ'_2, with minimal overall transport cost. Naturally, the rectified distance can be used to define the transport cost across two KGs: C_e_i,e_j= d̃(e_i,e_j), e_i∈ℰ'_1, e_j∈ℰ'_2, where C∈ℝ^|ℰ'_1|× |ℰ'_2| is the transport cost matrix. Without loss of generality, we assume |ℰ'_1|<|ℰ'_2|. The transport plan is a mapping function T: e_i → T(e_j), where e_i∈ℰ'_1, T(e_i)∈ℰ'_2. The objective is thus to find the optimal transport plan T^* that minimizes the overall transport cost: T^*=min_T∑_e_i∈ℰ'_1 C_e_i, T(e_i). Eq. (<ref>) models a hard assignment optimization problem that is not scaled up. To enable more efficient optimization and to allow for a more flexible alignment configuration, we reformulate this objective as a discrete OT problem, where the optimal transport plan is considered as a coupling matrix P^*∈ℝ_+^|ℰ'_1|×|ℰ'_2| between two discrete distributions. We denote μ and ν as two discrete probability distributions over all entities {e_i|e_i∈ℰ'_1} and {e_j|e_j∈ℰ'_2}, respectively. Without any entity alignment preference, the two discrete distributions μ and ν are assumed to follow a uniform distribution such that μ=1/|ℰ'_1|∑_e_i∈ℰ'_1δ_e_i and ν=1/|ℰ'_2|∑_e_j∈ℰ'_2δ_e_j, where δ_e_i and δ_e_j are the Dirac function centered on e_i and e_j, respectively. Both μ and ν are bounded to sum up to one: ∑_e_i∈ℰ'_1μ(e_i)=∑_e_i∈ℰ'_11/|ℰ'_1|=1 and ∑_e_j∈ℰ'_2μ(e_j)=∑_e_j∈ℰ'_21/|ℰ'_2|=1. Accordingly, the OT objective is formulated as finding the optimal coupling matrix P^* between μ and ν: P^*= min_P∈Π(μ, ν)∑_e_i∈ℰ'_1∑_e_j∈ℰ'_2P_e_i,e_j· C_e_i,e_j, subject to: ∑_e_j∈ℰ'_2 P_e_i,e_j=μ(e_i)=1/|ℰ'_1|, ∑_e_i∈ℰ'_1 P_e_i,e_j=ν(e_j)=1/|ℰ'_2|, P_e_i,e_j≥0, ∀ e_i∈ℰ'_1, ∀ e_j∈ℰ'_2, where Π(μ, ν)={P∈ℝ_+^|ℰ'_1|×|ℰ'_2|| P1_|ℰ'_2|=μ, P^⊤1_|ℰ'_1|=ν} is the set of all joint probability distributions with marginal probabilities μ and ν, 1_n denotes an n-dimensional vector of ones. P is a coupling matrix signifying probabilistic alignments between two unaligned entity sets ℰ'_1 and ℰ'_2. Therefore, P_e_i,e_j indicates the amount of probability mass transported from μ(e_i) to ν(e_j). 
A larger value of P_e_i,e_j indicates a higher likelihood of e_i and e_j being aligned to each other. To solve this discrete OT problem, some exact algorithms have been proposed, such as interior point methods <cit.> and network simplex <cit.>. The exact algorithms guarantee finding a closed-form optimal transport plan but with considerably high computational cost, making them intractable to work for iterative pseudo-labeling. Therefore, instead of Eq. (<ref>), we propose to use the entropy regularized OT problem, as defined in Eq. (<ref>) below, which can be solved by the efficient Sinkhorn algorithm  <cit.>. P^*=min_P∈Π(μ, ν)∑_e_i∈ℰ'_1∑_e_j∈ℰ'_2P_e_i,e_j· C_e_i,e_j + β∑_e_i∈ℰ'_1∑_e_j∈ℰ'_2 P_e_i,e_jlogP_e_i,e_j, where β is a hyper-parameter that controls the extent of regularization. Solving the above entropy regularized OT problem can be easily implemented using popular deep-learning frameworks like PyTorch and TensorFlow. Once the optimal coupling matrix P^* is obtained, entity alignments can be inferred accordingly. Since one-to-one correspondences are crucial for eliminating conflicted misalignments (Type I pseudo-labeling errors), we propose a simple yet highly effective decision threshold to select entity pairs as pseudo-labels such that: ℒ̂_e = {(e_i,e_j)| P^*_e_i, e_j>1/2·min(|ℰ'_1|, |ℰ'_2|), e_i∈ℰ'_1, e_j ∈ℰ'_2}. This criterion ensures the pseudo-labels satisfy one-to-one correspondence with theoretical guarantee. Any pseudo-labeled entity pair (e_i,e_j), e_i∈ℰ'_1, e_j ∈ℰ'_2 that satisfies the condition P^*_e_i, e_j>1/2·min(|ℰ'_1|, |ℰ'_2|) warrants one-to-one correspondence such that no conflicted pairs {(e_i,e_k)| e_k∈ℰ'_2 ∖{e_j}} and {(e_l,e_j)| e_l∈ℰ'_1 ∖{e_i}} associated with e_i and e_j, respectively, are selected as pseudo-labels. Given a coupling matrix P^*∈ℝ_+^|ℰ'_1|×|ℰ'_2| with the constraints of ∑_e_j∈ℰ'_2 P^*_e_i,e_j=1/|ℰ'_1| for all rows (∀ e_i∈ℰ'_1) and ∑_e_i∈ℰ'_1 P^*_e_i,e_j=1/|ℰ'_2| for all columns (∀ e_j∈ℰ'_2). Assume |ℰ'_1|<|ℰ'_2|, then the decision threshold is 1/2·min(|ℰ'_1|, |ℰ'_2|)=1/2|ℰ'_1|. Entity pairs {(e_i,e_j)| P^*_e_i, e_j>1/2|ℰ'_1|, e_i∈ℰ'_1, e_j ∈ℰ'_2} are selected as pseudo-labels. For each pseudo-labeled entity pair (e_i,e_j) with a probability value P^*_e_i,e_j>1/2|ℰ'_1|, we can prove that no conflicted pairs {(e_i,e_k)| e_k∈ℰ'_2 ∖{e_j}} associated with e_i are selected as pseudo-labels: P^*_e_i,e_j >1/2|ℰ'_1|, ∑_e_j∈ℰ'_2P^*_e_i,e_j-P^*_e_i,e_j <∑_e_j∈ℰ'_2P^*_e_i,e_j-1/2|ℰ'_1|, ∑_e_k∈ℰ'_2 ∖{e_j}P^*_e_i,e_k+P^*_e_i,e_j-P^*_e_i,e_j <1/|ℰ'_1|-1/2|ℰ'_1|, ∑_e_k∈ℰ'_2 ∖{e_j}P^*_e_i,e_k <1/2|ℰ'_1|. Since the coupling matrix P^*∈ℝ_+^|ℰ'_1|×|ℰ'_2| has non-negative entries, the summation ∑_e_k∈ℰ'_2 ∖{e_j}P^*_e_i,e_k from Eq. (<ref>) must be no smaller than any component of it, i.e., P^*_e_i,e_k≤∑_e_k∈ℰ'_2 ∖{e_j}P^*_e_i,e_k, ∀ e_k∈ℰ'_2 ∖{e_j}. Together with Eq. (<ref>), we can further derive that any component in the summation is smaller than the decision threshold, i.e., P^*_e_i,e_k<1/2|ℰ'_1|, ∀ e_k∈ℰ'_2 ∖{e_j}. In other words, all other probability values in the same row of P^*_e_i,e_j are smaller than the decision threshold. Thus, no conflicted pairs {(e_i,e_k)| e_k∈ℰ'_2 ∖{e_j}} associated with e_i are selected. Similarly, for each pseudo-labeled entity pair (e_i,e_j) with a probability value P^*_e_i,e_j>1/2|ℰ'_1|, we can prove that no conflicted pairs {(e_l,e_j)| e_l∈ℰ'_1 ∖{e_i}} associated with entity e_j are selected as pseudo-labels. Similar to Eq. 
(<ref>), we can also obtain ∑_e_l∈ℰ'_1 ∖{e_i}P^*_e_l,e_j<1/2|ℰ'_2| and P^*_e_l,e_j≤∑_e_l∈ℰ'_1 ∖{e_i}P^*_e_l,e_j,∀ e_l∈ℰ'_1 ∖{e_i}. Together with the assumption of |ℰ'_1|<|ℰ'_2|, we can further derive that P^*_e_l,e_j≤∑_e_l∈ℰ'_1 ∖{e_i}P^*_e_l,e_j<1/2|ℰ'_2|<1/2|ℰ'_1|. Therefore, all other probability values in the same column of P^*_e_i,e_j are smaller than the decision threshold, i.e., P^*_e_l,e_j<1/2|ℰ'_1|, ∀ e_l∈ℰ'_1 ∖{e_i}. Hence, no conflicted pairs {(e_l,e_j)| e_l∈ℰ'_1 ∖{e_i}} associated with entity e_j are selected. In summary, we conclude that the selected pseudo-labels {(e_i,e_j)| P^*_e_i, e_j>1/2·min(|ℰ'_1|,|ℰ'_2|), e_i∈ℰ'_1, e_j ∈ℰ'_2} are guaranteed to be one-to-one alignments when |ℰ'_1|<|ℰ'_2|, and the same conclusion also holds when |ℰ'_1|≥|ℰ'_2|. The overall process of the OT-based pseudo-labeling algorithm is provided in Algorithm <ref>. In Steps 1-7, the Sinkhorn algorithm takes the transport cost matrix C as input to estimate the optimal transport plan P^* via k iterations of row normalization and column normalization. In Step 8, entity pairs with values in P^* larger than the decision threshold are selected as pseudo-labels. Finally, in Step 9, the model returns the set of pseudo-labeled entity pairs. The overall time complexity of Algorithm <ref> is O(k·|ℰ'_1|·|ℰ'_2|). §.§ Cross-Iteration Pseudo-Label Calibration Through OT-based pseudo-labeling, a set of conflict-free entity alignment pairs are obtained as pseudo-labels ℒ̂^(t)_e at each iteration t. However, these pseudo-labels are still susceptible to one-to-one misalignments (Type II pseudo-labeling errors), especially at early training stages. Directly augmenting the prior alignment seeds with the pseudo-label set ℒ̂^(t)_e would make the model overfit these errors, thereby deteriorating the confirmation bias. Therefore, we propose to further calibrate pseudo-labels through reducing the variability of pseudo-label selection across consecutive iterations. As such, the resultant pseudo-label set could have a higher precision and thus be more reliable. Specifically, we obtain m sets of pseudo-labels {ℒ̂^(t)_e|t=1,...,m} respectively from every m consecutive iterations of pseudo-labeling. To prevent pseudo-labeling errors of previous iterations from adversely impacting the subsequent embedding learning, we randomly reinitialize model parameters during each of these m consecutive iterations, following the strategy proposed in <cit.>. This helps decorrelate the dependency among m consecutive iterations. Furthermore, inspired by ensemble learning <cit.>, we propose to calibrate pseudo-labels by taking the common pseudo-labels among {ℒ̂^(t)_e|t=1,...,m}: ∩_t=1^mℒ̂^(t)_e = {(e_i,e_j)|∑_t=1^m1( (e_i,e_j)∈ℒ̂^(t)_e) =m, e_i∈ℰ'_1, e_j∈ℰ'_2}, where 1(·) is a binary indicator function. By excluding pseudo-labels with high selection variability, we can effectively reduce the overall variability of pseudo-label selection to obtain calibrated pseudo-labels ∩_t=1^mℒ̂^(t)_e. Mathematically, we can prove that calibrated pseudo-labels have a higher precision than non-calibrated ones. The alignment pseudo-labels generated via pseudo-label calibration over m consecutive iterations have a higher precision ℙ(y=1|ŷ=1) than those generated at a single iteration, under the mild condition that ℙ(ŷ=1|(e_i,e_j),y=1) is a constant p for each (e_i,e_j) at each iteration. At the t-th iteration, for an entity pair (e_i,e_j) with a ground-truth alignment label y=0, the probability of aligning them is ℙ_t(ŷ=1|(e_i,e_j),y=0). 
With the pseudo-label calibration on m consecutive iterations, the alignment probability becomes ∏_l=0^m-1ℙ_t+l(ŷ=1|(e_i,e_j),y=0). Similarly, at t-th iteration, for an entity pair (e_i,e_j) with ground-truth alignment label y=1, the alignment probability is ℙ_t(ŷ=1|(e_i,e_j),y=1)=p_t. With calibration, the alignment probability is ∏_l=0^m-1ℙ_t+l(ŷ=1|(e_i,e_j),y=1)=∏_l=0^m-1p_t+l. With the increase of iterations, we assume the alignment model is increasingly more accurate, that is, we have ∏_l=0^m-1ℙ_t+l(ŷ=1|(e_i,e_j),y=0)≤ℙ^m_t(ŷ=1|(e_i,e_j),y=0), ∏_l=0^m-1ℙ_t+l(ŷ=1|(e_i,e_j),y=1)≥ℙ_t^m(ŷ=1|(e_i,e_j),y=1)=p_t^m. In the case with calibration, consider the probabilities, ℙ_c(ŷ=1,y=0) =∑_(e_i,e_j)∏_l=0^m-1ℙ_t+l(ŷ=1|(e_i,e_j),y=0)ℙ((e_i,e_j),y=0) ≤∑_(e_i,e_j)ℙ_t^m(ŷ=1|(e_i,e_j),y=0)ℙ((e_i,e_j),y=0), ℙ_c(ŷ=1,y=1) =∑_(e_i,e_j)∏_l=0^m-1ℙ_t+l(ŷ=1|(e_i,e_j),y=1)ℙ((e_i,e_j),y=1) ≥∑_(e_i,e_j)ℙ_t^m(ŷ=1|(e_i,e_j),y=1)ℙ((e_i,e_j),y=1), ℙ_c(ŷ=1,y=1) ≥∑_(e_i,e_j)p_t^mℙ((e_i,e_j),y=1). Thus, we have ℙ_c(ŷ=1,y=0)/ℙ_c(ŷ=1,y=1)≤∑_(e_i,e_j)ℙ_t^m(ŷ=1|(e_i,e_j),y=0)ℙ((e_i,e_j),y=0)/∑_(e_i,e_j)p_t^mℙ((e_i,e_j),y=1). In the case without calibration, at the l-th iteration, we have ℙ_t(ŷ=1,y=0)/ℙ_t(ŷ=1,y=1)=∑_(e_i,e_j)ℙ_t(ŷ=1|(e_i,e_j),y=0)ℙ((e_i,e_j),y=0)/∑_(e_i,e_j)p_tℙ((e_i,e_j),y=1). By comparing the two ratios, ℙ_c(ŷ=1,y=0)/ℙ_c(ŷ=1,y=1)/ℙ_t(ŷ=1,y=0)/ℙ_t(ŷ=1,y=1)≤∑_(e_i,e_j)ℙ_t^m(ŷ=1|(e_i,e_j),y=0)ℙ((e_i,e_j),y=0)/∑_(e_i,e_j)p_t^mℙ((e_i,e_j),y=1)/∑_(e_i,e_j)ℙ_t(ŷ=1|(e_i,e_j),y=0)ℙ((e_i,e_j),y=0)/∑_(e_i,e_j)p_tℙ((e_i,e_j),y=1), ℙ_c(ŷ=1,y=0)/ℙ_c(ŷ=1,y=1)/ℙ_t(ŷ=1,y=0)/ℙ_t(ŷ=1,y=1)≤∑_(e_i,e_j)ℙ_t^m(ŷ=1|(e_i,e_j),y=0)ℙ((e_i,e_j),y=0)/∑_(e_i,e_j)p_t^m-1ℙ_t(ŷ=1|(e_i,e_j),y=0)ℙ((e_i,e_j),y=0). If all pseudo-label sets {ℒ̂^(t)_e|t=1,...,m} have one-to-one correspondence and each pseudo-label set ℒ̂^(t)_e has at least two correct alignments. Then the condition ℙ(ŷ=1|(e_i,e_j),y=1)=p>ℙ(ŷ=1|(e_i,e_j),y=0) holds. The detailed proof of Lemma <ref> is provided in Appendix <ref>. As ℙ_t(ŷ=1|(e_i,e_j),y=0)<p_t=ℙ_t(ŷ=1|(e_i,e_j),y=1), then ℙ^m_t(ŷ=1|(e_i,e_j),y=0)<p_t^m-1ℙ_t(ŷ=1|(e_i,e_j),y=0) for each (e_i,e_j). Therefore, we have ℙ_c(ŷ=1,y=0)/ℙ_c(ŷ=1,y=1)/ℙ_t(ŷ=1,y=0)/ℙ_t(ŷ=1,y=1)<1, i.e., ℙ_c(ŷ=1,y=0)/ℙ_c(ŷ=1,y=1)<ℙ_t(ŷ=1,y=0)/ℙ_t(ŷ=1,y=1). Finally, the precision of pseudo-labels can be derived as: ℙ(y=1|ŷ=1) =ℙ(ŷ=1,y=1)/ℙ(ŷ=1)=ℙ(ŷ=1,y=1)/ℙ(ŷ=1,y=0)+ℙ(ŷ=1,y=1) =1/ℙ(ŷ=1,y=0)/ℙ(ŷ=1,y=1)+1, whose value decreases with the increase of ℙ(ŷ=1,y=0)/ℙ(ŷ=1,y=1). According to Eq. (<ref>), we have ℙ_c(y=1|ŷ=1)>ℙ_t(y=1|ŷ=1). Theorem <ref> implies that the precision of pseudo-labels increases after cross-iteration calibration, such that ℙ_c(y=1|ŷ=1)>ℙ_t(y=1|ŷ=1), where ℙ_c and ℙ_t represent the probability of calibrated set, and the probability of non-calibrated set from iteration t, respectively. Finally, the calibrated pseudo-labels are used to augment the prior alignment seeds: ℒ̂^0_e←ℒ^0_e ∪(∩_t=1^mℒ̂^(t)_e ). The augmented prior alignment seeds ℒ̂^0_e contain a considerable number of reliably calibrated pseudo-labels with high precision for subsequent model training. This enables to effectively eliminate Type II pseudo-labeling errors and alleviate the problem of confirmation bias, thus boosting the performance of entity alignment in turn. §.§ Overall Workflow The overall procedure of the proposed UPL-EA model is described in Algorithm <ref>. In Step 1, the algorithm is initialized with the prior alignment seeds if provided. 
In Steps 3-8, the EA model and OT-based pseudo-labeling alternately reinforce each other towards learning more informative entity embeddings. During this process, negative alignment pairs are sampled in an adaptive manner for model training. In Steps 2-9, the pseudo-label calibration is performed every m iterations to augment the prior alignment seeds with reliably selected pseudo-labels for subsequent training. Finally, the learned entity embeddings are used to infer newly aligned entity pairs. § EXPERIMENTS In this section, we validate the efficacy of our proposed method through extensive experiments and ablation analyses on benchmark datasets. §.§ Datasets and Baselines We evaluate the performance of our UPL-EA[The source code of UPL-EA will be released publicly upon paper acceptance.] method on two benchmark datasets: DBP15K <cit.> and SRPRS <cit.>. DBP15K is a widely used benchmark dataset for entity alignment <cit.>. DBP15K contains three cross-lingual datasets, each of which contains two KGs built upon English and another different language (Chinese, Japanese or French). Each cross-lingual dataset has 15,000 aligned entity pairs. SRPRS is a more recently established benchmark dataset with sparser connections <cit.>. SRPRS consists of two cross-lingual datasets, each having two KGs built upon English and French/German as well as 15,000 aligned entity pairs. The statistics of DBP15K and SRPRS are summarized in Table <ref>. For evaluation, we compare UPL-EA with 12 state-of-the-art entity alignment models, which are categorized into two groups: * Supervised models, including MTransE <cit.>, JAPE <cit.>, JAPE in its structure-only variant denoted as JAPE-Stru, GCN-Align <cit.>, GCN-Align in its structure-only variant denoted as GCN-Stru, RDGCN <cit.>, HGCN <cit.>, HMAN <cit.>, and CEA <cit.>; * Pseudo-labeling based models, including IPTransE <cit.>, BootEA <cit.>, MRAEA <cit.>, RNM <cit.>, and CPL-OT <cit.>. We utilize Hit@k (k=1,10) and Mean Reciprocal Rank (MRR) as the evaluation metrics. Hit@k measures the percentage of correctly aligned entities ranked in the top k candidate list. MRR measures the average of the reciprocal ranks for the correctly aligned entities. Higher Hit@k and MRR scores indicate better entity alignment performance. §.§ Experimental Setup For fair comparisons, we follow the conventional 30%-70% ratio to randomly partition training and test data on DBP15K and SRPRS. We use semantic meanings of entity names to construct entity features. On DBP15K with big linguistic barriers, we first use Google Translate to translate non-English entity names into English, then look up 768-dimensional word embeddings pre-trained by BERT <cit.> with English entity names to form entity features. On SRPRS with small linguistic barriers, we directly look up word embeddings without translation. As each entity name comprises one or multiple words, we further use TF-IDF to measure the contribution of each word towards entity name representation. Finally, we aggregate TF-IDF-weighted word embeddings for each entity to form its entity feature vector. The parameter settings of UPL-EA is specified as follows: K = 125, β=0.5, m=3, γ=1 and λ=10. The embedding dimension d is set to 300. For BERT pre-trained word embeddings, we use a PCA-based technique <cit.> to reduce feature dimension from 768 to 300 with minimal information loss. The batch size is set to 256 and the number of training epochs is set to 100. We implement our model in PyTorch. 
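As a concrete illustration of the evaluation protocol, the following is a minimal sketch of how Hit@k and MRR can be computed from a matrix of embedding distances between test entities; the function name, the variable layout, and the assumption that the ground-truth counterpart of each source entity sits on the diagonal are illustrative conventions rather than details taken from the released implementation.

import numpy as np

def hits_and_mrr(dist, ks=(1, 10)):
    # dist: (n, n) matrix of embedding distances between n test source entities (rows)
    # and their candidate counterparts (columns); dist[i, i] is the ground-truth pair.
    ranks = 1 + (dist < np.diag(dist)[:, None]).sum(axis=1)  # rank of the true counterpart
    hits = {k: float((ranks <= k).mean()) for k in ks}       # Hit@k
    mrr = float((1.0 / ranks).mean())                        # Mean Reciprocal Rank
    return hits, mrr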
The Adam optimizer is used with a learning rate of 0.001 and 0.00025 on DBP15K and SRPRS, respectively. All experiments are run on a computer with an Intel(R) Core(TM) i9-13900KF CPU @ 3.00GHz and an NVIDIA Geforce RTX 4090 (24GB memory) GPU. The results of MRAEA and CPL-OT on both benchmark datasets, and RNM on DBP15K are obtained from their original papers. Results of other baselines are obtained from <cit.>. For UPL-EA, we report the average results over five runs. §.§ Comparison with State-of-the-Art Baselines We compare the performance of the proposed UPL-EA with 12 state-of-the-art entity alignment baselines. Table <ref> and Table <ref> report performance comparisons on DBP15K and SRPRS, respectively. This set of results is reported with 30% prior alignment seeds used for training. The mark “*" indicates that the semantic information of entity names is used to construct entity features. The best and second best results are highlighted in boldface and underlined, respectively. Overall, UPL-EA significantly outperforms most of the existing EA models on five cross-lingual datasets. In particular, on DBP15KZH_EN, UPL-EA outperforms the second and third performers, CPL-OT and RNM, by over 2% and 10%, respectively, in terms of Hit@1. This affirms the necessity of systematically combating confirmation bias for pseudo-labeling-based entity alignment. It is worth noting that disparities in overall performance can be observed among the five cross-lingual datasets, where the lowest accuracy is achieved on DBP15KZH_EN due to its large linguistic barriers. Nevertheless, for the most challenging EA task on DBP15KZH_EN, UPL-EA yields strong performance gains over other baselines. §.§ Comparison w.r.t. Different Rates of Prior Alignment Seeds Table <ref> further compares UPL-EA with four representative baselines (BootEA, RDGCN, RNM, and CPL-OT) with respect to different rates of prior alignment seeds varying from 10% to 40% on DBP15K. As expected, UPL-EA consistently outperforms four competitors on all three cross-lingual datasets at all rates. This is attributed to UPL-EA's ability to augment the training set with reliable pseudo-labeled entity pairs by effectively alleviating confirmation bias. As the rate of prior alignment seeds decreases, the performance of BootEA significantly degrades due to its limited ability to prevent the accumulation of pseudo-labeling errors. RNM yields performance gains over RDGCN owing to its posterior embedding distance editing using iteratively pseudo-labeled entity pairs; however, the lack of iterative model re-training largely hinders its performance. CPL-OT demonstrates more stable performance with different rates of prior alignment seeds because it selects pseudo-labels via the conflict-aware OT modeling and then uses them to train entity alignment model in turn; nevertheless, the neglect of one-to-one misalignments (Type II pseudo-labeling errors) limits the potential of CPL-OT. In contrast, the performance of UPL-EA remains competitively stable when the rate of prior alignment seeds decreases, even on the most challenging DBP15KZH_EN dataset. §.§ Ablation Study To testify the importance of different key components of the proposed UPL-EA model, we conduct a series of ablation studies on five cross-lingual datasets from DBP15K and SRPRS. To provide more insights, apart from the conventional setting using 30% prior alignment seeds as training data, we also conduct our analysis on the setting with no prior alignment seeds. 
The full UPL-EA model is compared with its ablated variants, with the best performance highlighted by boldface. From Table <ref> and Table <ref>, we can see that the full UPL-EA model performs best in all cases. * w.o. Global-lev. Rel. Aggr.: We compare the full UPL-EA model against the variant without using global-level relation aggregation for entity embedding learning. This ablation leads to degraded performance on both settings primarily due to the increase in conflicted misalignments caused by feature over-smoothing. * w.o. Within-iter. OT: To study the effectiveness of within-iteration OT modeling for pseudo-labeling, we ablate it from the full UPL-EA model. As OT modeling can eliminate a considerable number of conflicted misalignments (Type I pseudo-labeling errors) to ensure one-to-one alignments at each iteration, this ablation results in a significant performance drop across all datasets in both settings. * w.o. Cross-iter. Calibration: To study the efficacy of cross-iteration pseudo-label calibration, we remove it from UPL-EA. This ablation has a profound negative effect, leading to substantial performance declines in both settings, especially on DBP15KZH_EN and DBP15KJA_EN with large linguistic barriers, whereas the performance drops are smaller on DBP15KFR_EN with relatively small linguistic barriers. This is because larger linguistic barriers tend to incur more one-to-one misalignments (Type II pseudo-labeling errors). Our findings confirm that pseudo-label calibration is crucial for UPL-EA to achieve its full potential, especially when there exists a large number of pseudo-labeling errors at the beginning of model training. * w.o. Within & Cross-iter.: We also analyze the overall effect of ablating both within-iteration OT modeling and cross-iteration pseudo-label calibration from the full model. This ablation has a substantial adverse impact, leading to a dramatic performance drop in all cases. The results demonstrate the effectiveness of our unified UPL-EA framework in systematically combating confirmation bias for pseudo-labeling-based entity alignment. Note that under the setting with no prior alignment seeds, the variant without OT-based pseudo-labeling (w.o. Within-iter. OT) has similar performance as compared to the variant completely ignoring confirmation bias (w.o. Within & Cross-iter.). In particular, on DBP15KZH_EN and DBP15KJA_EN, the former variant even performs slightly worse. This is because under the challenging case where there are no prior alignment seeds, ablating OT-based pseudo-labeling might incur considerably more conflicted misalignments (Type I pseudo-labeling errors), violating Lemma <ref> and thus making Theorem <ref> invalid because the number of misalignments (false positives) is not bounded. As a result, it becomes ineffective to calibrate erroneous pseudo-labels. §.§ Impact of Pre-trained Word Embeddings To analyze the impact of using different pre-trained word embeddings, we report the results of UPL-EA that form entity features with Glove embedding <cit.>, which is widely used in the existing EA models. We conduct this analysis on the setting with 30% prior alignment seeds. The results on DBP15K are reported in Table <ref> as a case study. We can observe that UPL-EA with Glove embedding still achieves competitive results, significantly outperforming all other baselines. This confirms that the efficacy of UPL-EA is not highly dependent on embedding initialization methods used. 
When switching from Glove embedding to BERT pre-trained embeddings, performance gains can be observed, especially on DBP15KJA_EN. This indicates the usefulness of pre-trained word embeddings with high quality for entity alignment. §.§ Parameter Sensitivity Study We further study the sensitivity of our UPL-EA model with regards to four hyper-parameters: embedding dimension d, number of iterations m for cross-iteration pseudo-label calibration, regularization hyper-parameter β for Sinkhorn algorithm, and margin parameter γ for defining the alignment ranking loss. This sensitivity analysis is conducted on DBP15KZH_EN with 30% prior alignment seeds as a case study. The respective results in terms of Hit@1 and Hit@10 are plotted in Fig. <ref>. As we can see from Fig. <ref>, the performance of UPL-EA improves considerably as the embedding dimension d increases from 100 to 300 and then retains at a relatively stable level. For the number of iterations m used for pseudo-label calibration, having cross-iteration calibration (m>1) leads to considerably better performance as opposed to not having calibration (m=1). This proves the effectiveness of our proposed calibration mechanism, requiring only a very small number of iterations used for calibration (i.e., m=3) to achieve competitive performance (see Fig. <ref>). In addition, as observed in Fig. <ref>, the performance of UPL-EA is insensitive to different values of β associated with OT-based pseudo-labeling. As for the margin parameter γ, the performance of UPL-EA starts to drop gradually after γ exceeds 1, as shown in Fig. <ref>. This observation is reasonable, as a larger margin parameter would give more tolerance to alignment errors, thus degrading model performance. §.§ Empirical Evidence on the Effectiveness of Pseudo-label Calibration Lastly, to validate the effectiveness of cross-iteration pseudo-label calibration, we perform an empirical analysis to compare the precision of pseudo-labeling with and without calibration. This empirical analysis is carried out on DBP15K as a case study and under the setting with 30% prior alignment seeds. In Fig. <ref>, we directly compare the precision between pseudo-labeling with calibration (indicated by the blue curve) and without calibration (indicated by the orange curve) over eight pseudo-labeling iterations. Notably, pseudo-labeling with calibration consistently achieves a higher precision compared to pseudo-labeling without calibration throughout the training process. This discrepancy is particularly profound in the case of DBP15KZH_EN and DBP15KJA_EN, where significant linguistic barriers exist. Specifically, without calibration, the precision of pseudo-labeling deteriorates gradually as the training progresses due to confirmation bias arising from the accumulation of one-to-one misalignments (Type II pseudo-labeling errors). In contrast, when employing pseudo-label calibration, the precision of pseudo-labeling approaches to almost 100% after only three consecutive iterations across all datasets. These findings affirm the necessity and efficacy of calibrating pseudo-labels across iterations to mitigate the adverse impact of confirmation bias, particularly on challenging datasets like DBP15KZH_EN. § RELATED WORK In this section, we review three streams of related literature, including entity alignment in knowledge graphs, pseudo-labeling in semi-supervised learning, and optimal transport on graphs. 
§.§ Entity Alignment in Knowledge Graphs Most entity alignment models are embedding-based approaches, which exploit distances between entity embeddings in latent spaces to measure inherent semantic correspondences between entities. Inspired by TransE <cit.>, MTransE <cit.> embeds two KGs into two respective embedding spaces, where a transformation matrix is learned using prior alignment seeds. To reduce the number of parameters involved, most subsequent models <cit.> embed KGs into a common latent space by imposing the embeddings of pre-aligned entities to be as close as possible. This ensures that alignment similarities between entities can be directly measured via their embeddings. To leverage KG structural information, methods like GCN-Align <cit.> utilize GCNs to obtain better embeddings for entity alignment. However, GCNs and their variants are inclined to result in alignment conflicts, as their feature smoothness scheme tends to make entities have similar embeddings among local neighborhoods. To reduce the over-smoothing effect, more recent works <cit.> adopt a highway strategy <cit.> on GCN layers, which “mixes" the learned entity embeddings with the original features. Another line of research efforts is devoted to improving GCN-based approaches through considering heterogeneous relations in KGs. HGCN <cit.> jointly learns the embeddings of entities and relations, without considering the directions of relations. RDGCN <cit.> performs embedding learning on a dual relation graph, but fails to incorporate statistical information of neighboring relations of an entity. RNM <cit.> uses iterative relational neighborhood matching to refine finalized entity embedding distances. This matching mechanism proves to be empirically effective, but it is used only after the completion of model training and fails to reinforce embedding learning in turn. All the aforementioned models, however, require an abundance of prior alignment seeds provided for training purposes, which are labor-intensive and costly to acquire in real-world KGs. To tackle the shortage of prior alignment seeds, semi-supervised EA models have been proposed in recent years. As a prominent learning paradigm among such, pseudo-labeling-based methods, e.g., BootEA <cit.>, IPTransE <cit.>, RNM <cit.>, MRAEA <cit.>, and CPL-OT <cit.>, propose to iteratively pseudo-label unaligned entity pairs and add them to prior alignment seeds for subsequent model retraining. For RNM, there is a slight difference that it augments prior alignment seeds to rectify embedding distance after the completion of model training. Although these methods have achieved promising performance gains, the confirmation bias associated with iterative pseudo-labeling has been largely under-explored. Only limited attempts have been recently made to alleviate Type I pseudo-labeling errors (conflicted misalignments) while Type II errors (one-to-one misalignments) have been completely overlooked. To eliminate Type I pseudo-labeling errors, RNM <cit.> and MRAEA <cit.> use simple heuristics to preserve only the most convincing alignment pairs, for example, those with the smallest distance, at the presence of conflicts. BootEA <cit.> and CPL-OT <cit.>, on the other hand, model the inference of pseudo-labeled entity pairs as an assignment problem, where the most likely aligned pairs are selected at each pseudo-labeling iteration. 
Unlike BootEA that selects a small set of pseudo-labels using a pre-specified threshold, CPL-OT imposes a full match between two unaligned entity sets to maximize the number of pseudo-labels at each iteration. Nevertheless, both methods impose constraints to enforce hard alignments for the purpose of alleviating alignment conflicts, but this might potentially lead to an increase in more one-to-one misalignments (Type II errors). To fill in the research gap, our work explicitly tackles both Type I and Type II pseudo-labeling errors to combat confirmation bias in a principled way. The proposed UPL-EA effectively eliminates Type I pseudo-labeling errors by casting entity alignment inference into a discrete OT problem; this formulation enables more accurate determination of entity correspondences across KGs. In combination with a carefully designed selection criterion, UPL-EA is guaranteed to generate one-to-one alignments as pseudo-labels at each iteration. Furthermore, UPL-EA calibrates pseudo-labels by reducing local pseudo-label selection variability across iterations to alleviate Type II pseudo-labeling errors, preventing erroneous pseudo-labels from propagating and jeopardizing subsequent model training. §.§ Pseudo-Labeling Pseudo-labeling has emerged as an effective semi-supervised approach in addressing the challenge of label scarcity. It refers to a self-training paradigm where the model is iteratively bootstrapped with additional labeled data based on its own predictions. The pseudo-labels generated from model predictions can be defined as hard (one-hot distribution) or soft (continuous distribution) labels <cit.>. More specifically, pseudo-labeling strategies are designed to select high-confidence unlabeled data by either directly taking the model's predictions, or sharpening the predicted probability distribution. It is closely related to entropy regularization <cit.>, where the model's predictions are encouraged to have low entropy (i.e., high-confidence) on unlabeled data. The selected pseudo-labels are then used to augment the training set and to fine-tune the model initially trained on the given labels. This training regime is also extended to an explicit teacher-student configuration <cit.>, where a teacher network generates pseudo-labels from unlabeled data, which are used to train a student network. Despite its promising results, pseudo-labeling is inevitably susceptible to erroneous pseudo-labels, thus suffering from confirmation bias <cit.>, where the prediction errors would accumulate and degrade model performance. The confirmation bias has been recently studied in the field of computer vision. In works like <cit.>, confirmation bias is considered as a problem of poor network calibration, where the network is overfitted towards erroneous pseudo-labels. To alleviate confirmation bias, pseudo-labeling approaches have adopted strategies such as mixup augmentation <cit.> and uncertainty weighting <cit.>. Subsequent works like <cit.> address confirmation bias by applying curriculum learning principles, where the decision threshold is adaptively adjusted during the training process and model parameters are re-initialized after each iteration. Recently, pseudo-labeling has also been studied on graphs for the task of semi-supervised node classification <cit.>. <cit.> propose a self-trained GCN that enlarges the training set by assigning a pseudo-label to high-confidence unlabeled nodes, and then re-trains the model using both genuine labels and pseudo-labels. 
The pseudo-labels are generated via a random walk model in a co-training manner. <cit.> show that a shallow GCN is ineffective in propagating label information under few-label settings, and employ a multi-stage self-training approach that relies on a deep clustering model to assign pseudo-labels. <cit.> propose to incorporate the node informativeness scores for the selection of pseudo-labels and adopt distinct loss functions for genuine labels and pseudo-labels during model training. Despite these research efforts, the problem of confirmation bias remains under-explored in graph domains. This work systematically analyzes the cause of confirmation bias and proposes a principled approach to conquer confirmation bias for pseudo-labeling-based entity alignment across KGs. §.§ Optimal Transport on Graphs Optimal Transport (OT) is the general problem of finding an optimal plan to move one distribution of mass to another with the minimal cost <cit.>. As an effective metric to define the distance between probability distributions, OT has been applied in computer vision and natural language processing over a range of tasks including machine translation, text summarization, and image captioning <cit.>. In recent years, OT has also been studied on graphs to match graphs with similar structures or align nodes/entities across graphs. For graph partitioning and matching, the transport on the edges across graphs has been used to define the Gromov-Wasserstein (GW) discrepancy <cit.> that measures how the edges in a graph compare to those in another graph <cit.>. For entity alignment across graphs, <cit.> incorporate an OT objective into the overall loss to enhance the learning of entity embeddings. <cit.> propose to jointly perform structure learning and OT alignment through minimizing the multi-view GW distance matrices between two attributed graphs. These methods have primarily used OT to define a learning objective, which involves bi-level optimization for model training. To further enhance the scalability of OT modeling for entity alignment, <cit.> propose to make the similarity matrix sparse by dropping its entries close to zero. However, this sparse OT modeling potentially violates the constraints of the OT objective, thereby failing to guarantee one-to-one correspondences across two KGs. In our work, we focus on tackling the scarcity of prior alignment seeds via iterative pseudo-labeling; we seek to find more accurate correspondences between entities via OT modeling. This enables us to derive a one-to-one alignment configuration more precisely to eliminate conflicted misalignments (Type I pseudo-labeling errors) at each iteration, mitigating the negative impact of confirmation bias. § CONCLUSION We proposed a novel unified pseudo-labeling framework (UPL-EA) that addresses the problem of confirmation bias for pseudo-labeling-based entity alignment across KGs. UPL-EA employs an entity alignment model based on the global-local aggregation architecture to generate informative entity embeddings. In addition, UPL-EA includes two modules to combat confirmation bias: within-iteration Optimal Transport (OT)-based pseudo-labeling and cross-iteration pseudo-label calibration. The OT-based pseudo-labeling module utilizes OT modeling to eliminate conflicted misalignments (Type I pseudo-labeling errors) within each iteration. 
The pseudo-label calibration module employs pseudo-labels from multiple consecutive iterations to reduce pseudo-label selection variability, thus preventing the accumulation and propagation of inevitable one-to-one misalignments (Type II pseudo-labeling errors) across iterations. Our extensive experiments on benchmark datasets show that UPL-EA outperforms state-of-the-art baselines with limited amounts of prior alignment seeds. The competitive performance of UPL-EA demonstrates its superiority in addressing confirmation bias and its potential for pseudo-labeling-based entity alignment across KGs. § PROOF OF LEMMA <REF> Given two unaligned entity sets ℰ'_1∈𝒢_1 and ℰ'_2∈𝒢_2, there is an entity alignment candidate space ℰ'_1 ×ℰ'_2∈{y=0,y=1}^|ℰ'_1|× |ℰ'_2|. The pseudo-label selection process can be considered as a classification problem, where each entity pair (e_i,e_j), e_i∈ℰ'_1, e_j ∈ℰ'_2 in the candidate space is classified as either being aligned (ŷ=1) or not aligned (ŷ=0). Let |ℰ'_1|=m and |ℰ'_2|=n, we can thus obtain a confusion matrix based on the pseudo-label selection with one-to-one correspondence as given in Table <ref>. TP, FP, FN and TN represent “True Positive", “False Positive", “False Negative" and “True Negative", respectively. The number of ground truth entity alignment pairs with y=1 is TP+FN=min(m,n). Correspondingly, the number of ground truth entity pairs that are not aligned (y=0) is FP+TN=mn-min(m,n). Since the pseudo-labels obtained using our proposed OT-based modeling satisfy one-to-one correspondence, the number of pseudo-labeled entity pairs with ŷ=1 is bounded, i.e., 0≤ TP+FP≤min(m,n), thus the number of misalignments is bounded such that FP≤min(m,n)-TP. Denote p = ℙ(ŷ=1|(e_i,e_j),y=1) and p' = ℙ(ŷ=1|(e_i,e_j),y=0). Let us consider two cases below. * When m<n, we have min(m,n)=m, FP≤ m-TP, 0<m/n<1. Thus, p = TP/TP+FN=TP/m, p'=FP/FP+TN=FP/mn-m. We can calculate the ratio p/p'=TP/m·mn-m/FP=TP/FP·(n-1). For the condition p>p' to hold, we have p/p'=TP/FP(n-1) >1, TP(n-1) >FP, Since FP≤ m-TP, we ensure TP(n-1) > m-TP, nTP-TP > m-TP, TP >m/n, Since 0<m/n<1, we ensure TP≥ 1. Therefore, as long as there is at least one correct alignment (TP≥1) in the pseudo-label set when m<n, the condition p>p' holds. * Similarly, when m≥ n, min(m,n)=n, we have FP≤ n-TP, 0<n/m≤1. Thus, p = TP/n, p'=FP/mn-n, p/p'=TP/FP· (m-1). For the condition p>p' to hold, we have p/p'=TP/FP(m-1) >1, TP(m-1) >FP, Since FP≤ n-TP, we ensure mTP-TP > n-TP, TP >n/m, Since 0<n/m≤1, we ensure TP>1. Therefore, as long as there are at least two correct alignments (TP≥2>1) in the pseudo-label set when m≥ n, the condition p>p' holds. Putting together, as long as there are at least two correct alignments (TP≥2) in the pseudo-label set with one-to-one correspondence, the condition ℙ(ŷ=1|(e_i,e_j),y=1)=p>ℙ(ŷ=1|(e_i,e_j),y=0) holds.
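To make the pseudo-labeling machinery analysed above concrete, the following is a minimal NumPy sketch of the two components that combat confirmation bias: a Sinkhorn-style estimation of the transport plan P* followed by threshold-based selection (within-iteration OT pseudo-labeling), and the intersection of the pseudo-label sets obtained from m consecutive iterations (cross-iteration calibration). The uniform marginals, the function names, and the driver line at the end are illustrative assumptions rather than the released implementation.

import numpy as np

def ot_pseudo_labels(C, beta=0.5, n_sinkhorn=100):
    # C: (n1, n2) transport cost matrix between the unaligned entity sets E'_1 and E'_2.
    n1, n2 = C.shape
    K = np.exp(-C / beta)                                 # entropic kernel, beta = regularization weight
    r, c = np.full(n1, 1.0 / n1), np.full(n2, 1.0 / n2)   # assumed uniform marginals
    u, v = np.ones(n1), np.ones(n2)
    for _ in range(n_sinkhorn):                           # alternating row/column normalization
        u = r / (K @ v)
        v = c / (K.T @ u)
    P = u[:, None] * K * v[None, :]                       # approximate optimal transport plan P*
    threshold = 1.0 / (2.0 * min(n1, n2))                 # decision threshold of the selection criterion
    return {(int(i), int(j)) for i, j in np.argwhere(P > threshold)}

def calibrate(pseudo_label_sets):
    # Cross-iteration calibration: keep only pairs selected in all m consecutive iterations
    # (model parameters are randomly re-initialized before each of these iterations).
    calibrated = set(pseudo_label_sets[0])
    for s in pseudo_label_sets[1:]:
        calibrated &= s
    return calibrated

# seeds |= calibrate([ot_pseudo_labels(C_t) for C_t in cost_matrices])  # one cost matrix per iteration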
http://arxiv.org/abs/2307.01075v2
20230703145447
Finding critical points and correlation length exponents using finite size scaling of Gini index
[ "Soumyaditya Das", "Soumyajyoti Biswas", "Anirban Chakraborti", "Bikas K. Chakrabarti" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "physics.comp-ph" ]
soumyaditya_das@srmap.edu.in soumyajyoti.b@srmap.edu.in anirban.chakraborti@bmu.edu.in bikask.chakrabarti@saha.ac.in
The order parameter for a continuous transition shows diverging fluctuations near the critical point. Here we show, through numerical simulations and scaling arguments, that the inequality between the values of an order parameter, measured near a critical point, is independent of the system size. Quantification of such inequality through the Gini index (g), therefore, leads to a scaling form g=G[|F-F_c|N^1/(dν)], where F denotes the driving parameter for the transition (e.g., temperature T for the ferro-para transition at T_c, or lattice occupation probability p near the percolation threshold p_c), N is the system size, d is the spatial dimension and ν is the correlation length exponent. We demonstrate the scaling for the Ising model in two and three dimensions and site percolation on a square lattice.
The critical point of a system is where the fluctuation diverges, i.e., a suitably defined correlation length spans the system (see e.g., <cit.>). While the (universal) critical exponents can be found, especially where the system is not exactly solvable, from finite size and other scaling and hyperscaling relations, all such analyses require an accurate knowledge of the (non-universal) critical point. The value of the critical point cannot be argued from the symmetry or dimensionality of the system, unlike the critical exponent values. It therefore requires additional analyses that are specific to the system under consideration <cit.>. The knowledge of the critical point, apart from determining scaling relations, is also necessary for a class of systems where the vicinity of such points has catastrophic consequences, e.g., the point of breakdown of an externally stressed disordered solid <cit.>, a crash of the stock markets <cit.>, an approaching environmental catastrophe <cit.>, and so on. Here we propose a method that determines the critical point using the inequality of the values of the order parameter of a second-order phase transition. For a particular value of the driving field (say, temperature for the Ising model or site occupation probability for percolation), the order parameter values fluctuate with Monte Carlo steps (MCS). A measure of the relative inequality of such values shows that it is independent of the system size at the critical point. Hence, such a measure is a very good indicator of the critical point, which we demonstrate here for the Ising model in two and three dimensions and the site percolation model on a square lattice. There are well known methods to determine the critical point of a system or an imminent catastrophic breakdown. These are often related to the fluctuation characteristics of the order parameter of the system. In particular, one way to determine the critical point accurately is to measure the ratio of the fourth moment to the square of the second moment of the order parameter, which becomes independent of the system size at the critical point <cit.>. There are other known methods, for example monitoring the size distribution exponent of the avalanches shown by a stressed disordered solid, the value of which is generally lowered as the system approaches a critical breakdown point (see e.g., <cit.>), noted both analytically and in real data.
Noting the elastic energy stored in a stressed disordered solid is another way of detecting the imminent failure point (often characterized as a critical point), as that quantity shows a non-monotonic variation prior to breakdown <cit.>, among many other approaches <cit.>. Very recently, the inequalities in the values of different response functions have been investigated near the critical point <cit.>. If the driving field through which the critical point is reached is varied, the response functions change drastically; for example, the susceptibility of the Ising model diverges at the critical point following χ∼ |T-T_c|^-γ. Therefore, the values of the susceptibility are highly unequal as the temperature is changed. Similar behavior is seen for the specific heat (diverging), the order parameter (vanishing at T_c) and so on. Quantifying the inequalities in such response functions yields some useful scaling forms and precursors to imminent critical points <cit.>. For the inequalities measured at the critical point, universal features could be found for self-organized critical systems <cit.>, as well as in the cluster size distributions of site percolation <cit.>. On the other hand, if the driving field is held fixed, the equilibrium values of any response function fluctuate in time around their average and hence are also unequal. The inequality in any such set of values of any response function (particularly the order parameter), recorded during the time evolution at a constant value of the associated driving field, can also be quantified using the Gini index (g). The Gini index is traditionally used in economics to quantify wealth inequality and is a summary statistic of the Lorenz function L(f) <cit.>. In social sciences, the inequalities (say, in individual wealth) are represented by the Lorenz function L(f), where the f fraction of the population possesses the L(f) fraction of the total wealth, when the population is arranged in the ascending order of their wealth. A monotonically increasing and continuous function, it trivially satisfies L(0)=0 and L(1)=1. If everyone had exactly the same wealth, the Lorenz function would be the diagonal line L_e(f)=f. A departure from it, therefore, is a measure of inequality. One such measure is the normalized area between the actual Lorenz curve and the equality line L_e(f) = f, defined as the Gini index <cit.>, i.e., g = 1 - 2∫_0^1 L(f) df, where g=0 means perfect equality and g=1 means extreme inequality. The above definition can easily be translated to any set of real numbers – discrete values or continuous functions – resulting in a compact measure of inequality among such numbers. Given that the order parameter of a system near a critical point shows statistical regularities in its fluctuations, it is therefore appealing to apply such measures to the order parameter, revealing its inequality statistics. In Fig. <ref>, the time series of order parameter values, after reaching equilibrium, are shown for the two dimensional Ising model for three system sizes and for three values of temperature, T<T_c, T=T_c and T>T_c. The Gini indices are measured up to time τ by taking all the values of the order parameter for t<τ, remembering that at t=0 the system has already reached equilibrium, i.e., the transient values are discarded. As can be seen, the Gini index quickly reaches a saturation level in all the cases. However, the saturation levels differ with system size (top and bottom panels) if the temperature T ≠ T_c.
At T=T_c, the Onsager temperature in this case, the saturation values of the Gini index remain independent of the system size (middle panel). This remarkable tendency is valid for the Ising model in two and three dimensions, site percolation on a square lattice (discussed later) and for any other equilibrium or non-equilibrium system showing a continuous phase transition. While the saturation value of the Gini index is independent of the system size at the critical point, away from the critical point it depends on the system size. Such dependence is also not always monotonic. Nevertheless, an off-critical finite size scaling is possible, involving the correlation length exponent ν of the respective models. We assume a scaling behavior for the Gini index of the order parameter of the form g=G[|F-F_c|N^1/(dν)], where F is the driving field (F=T for the Ising model and F=p for site percolation), N is the system size, d is the spatial dimension and ν is the correlation length exponent. In Fig. <ref>, the variations of the saturation values of the Gini index are shown for different system sizes for the two dimensional Ising model and the site percolation model on a square lattice. The scaling collapses obeying the form in Eq. (<ref>) are also shown. For both of these models, the critical points (indicated by vertical lines) and correlation length exponents are well known. In the scaling collapse, these values of the critical points and correlation length exponents were used, resulting in a very good finite size off-critical scaling. The same could be done for the three dimensional Ising model (see Fig. <ref>) using the numerical estimates for the critical point and correlation length exponent values. The g index is bounded, as indicated before, from below (g = 0) and above (g = 1). However, away from these values, the g index will generally depend on the driving field (for example, temperature T) and the system size (for example, the number of spins N), when these are neither zero nor infinite. Application of Fisher's finite size scaling argument in this case suggests that g will be a function of a single scaled variable ℒ/ξ, where ℒ (=N^1/d) and ξ∼ |T - T_c|^-ν denote the linear size and correlation length of the d dimensional system, having the critical (for N →∞) temperature T_c and correlation length exponent ν. This scaling function, therefore, is expected to have the generic form assumed in Eq. (<ref>). Hence, generally speaking, g = g(ℒ/ξ), within the above-mentioned limiting values of g. As the ℒ dependence of g disappears at T_c for the infinite system, the crossing points of g as functions of T for different (finite) ℒ values will give the T_c of the infinite system. Unlike the Binder cumulant, the Gini index has well-defined natural limiting values and therefore can be employed conveniently for an accurate estimation of the critical point T_c from finite size results, which we have demonstrated in Figs. <ref> & <ref>. The system size independence of the g index actually goes further back to the system size independence of the Lorenz function of the order parameter values at the critical point. In Fig. <ref>, the Lorenz functions are plotted for the order parameter values of the two dimensional Ising model for T=T_c and T ≠ T_c. As can be seen, the Lorenz function itself becomes independent of the system size at the critical point. Therefore, all summary statistics of inequality, including the g index, will be independent of the system size at the critical point (see Fig. 1).
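As a concrete sketch of the analysis described above, the short Python fragment below computes g from a sample of equilibrium order-parameter values via the Lorenz construction and rescales the driving-field axis for the data collapse of Eq. (<ref>); the function names are ours, and the exact 2D Ising values quoted in the final comment are the standard textbook results.

import numpy as np

def gini_index(values):
    # g = 1 - 2 * integral_0^1 L(f) df, estimated from a discrete sample of
    # non-negative order-parameter values (e.g. |m| recorded at a fixed T).
    x = np.sort(np.asarray(values, dtype=float))          # ascending order for the Lorenz curve
    lorenz = np.cumsum(x) / x.sum()                       # L(f) at f = 1/n, 2/n, ..., 1
    area = np.trapz(np.concatenate(([0.0], lorenz)), dx=1.0 / x.size)
    return 1.0 - 2.0 * area

def collapse_abscissa(F, N, F_c, d, nu):
    # Scaled variable of the ansatz: plotting g against |F - F_c| * N**(1/(d*nu))
    # for different system sizes N should collapse the curves onto the master function G.
    return np.abs(np.asarray(F) - F_c) * N ** (1.0 / (d * nu))

# Example (2D Ising): F_c = 2 / np.log(1 + np.sqrt(2)) (about 2.269), d = 2, nu = 1.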
It may be noted at this point that the Gini (g) values at the critical point depend on the nature of the quantity for which the inequality statistics is being investigated (see Figs. 2 & 3). In the percolation problem we studied the inequality of the largest cluster size at different concentrations (p), while for the Ising models we studied the unequal distribution of the net magnetization (given effectively by the largest spin clusters) at different values of temperature (T). For other distributions of inequalities in the statistics of these models, the g values at the critical points can be different. The above-mentioned scaling argument, therefore, holds for any measure of inequality that can be derived from the Lorenz function (for example, the Kolkata index <cit.> <cit.>). In summary, the order parameter values of a system near its critical point are equally unequal irrespective of the system size. This is not a result of the scaling form of the distribution of such order parameter values. However, a scaling argument can be made about its functional dependence, which also reveals the correlation length exponent. The scaling ansatz mentioned in Eq. (<ref>) is conclusively verified through numerical simulations for the two and three dimensional Ising models and the site percolation model on a square lattice, reproducing the well-established critical point values and the correlation length exponents. This methodology is applicable to any equilibrium or non-equilibrium system showing a critical transition and is an accurate indicator of the critical point in all such systems.
Acknowledgement BKC is grateful to the Indian National Science Academy for their Senior Scientist Research Grant. The simulations of FBM were done using HPCC Surya in SRM University - AP.
[skma] S.-k. Ma, Modern Theory of Critical Phenomena, Taylor & Francis, New York, 2001.
[crpt_book] N. Goldenfeld, Lectures on Phase Transitions and the Renormalization Group, CRC Press, 1992.
[wiley_book] S. Biswas, P. Ray, B. K. Chakrabarti, Statistical Physics of Fracture, Breakdown and Earthquake, Wiley-VCH, Weinheim (2015).
[cup_book] B. K. Chakrabarti, A. Chakraborti, S. R. Chakravarty, A. Chatterjee, Econophysics of Income and Wealth Distributions, Cambridge University Press, Cambridge (2013).
[pnas] A. E. Noble, T. S. Rosenstock, P. H. Brown, J. Machta, A. Hastings, Spatial patterns of tree yield explained by endogenous forces through a correspondence between the Ising model and ecology, Proc. Natl. Acad. Sci. 115, 1825 (2018).
[binder] K. Binder, Finite size scaling analysis of Ising model block distribution functions, Z. Phys. B 43, 119 (1981).
[hatano] T. Hatano, C. Narteau, P. Shebalin, Common dependence on stress for the statistics of granular avalanches and earthquakes, Sci. Rep. 5, 12280 (2015).
[pradhan] W. Debski, S. Pradhan, A. Hansen, Criterion for Imminent Failure During Loading—Discrete Element Method Analysis, Front. Phys. 9, 675309 (2021).
[ew1] J. M. Drake, B. D. Griffen, Early warning signals of extinction in deteriorating environments, Nature 467, 456 (2010).
[method1] E. van Nieuwenburg, Y.-H. Liu, S. Huber, Learning phase transitions by confusion, Nat. Phys. 13, 435 (2017).
[method2] J. Carrasquilla, R. G. Melko, Machine learning phases of matter, Nat. Phys. 13, 431 (2017).
[method3] R. A. Vargas-Hernández, J. Sous, M. Berciu, R. V. Krems, Extrapolating quantum observables with machine learning: inferring multiple phase transitions from properties of a single phase, Phys. Rev. Lett. 121, 255702 (2018).
[method4] N. Maskara, M. Buchhold, M. Endres, E. van Nieuwenburg, Learning algorithm reflecting universal scaling behavior near phase transitions, Phys. Rev. Research 4, L022032 (2022).
[method5] M. Yang, T. Karmakar, M. Parrinello, Liquid-liquid critical point in phosphorous, Phys. Rev. Lett. 127, 080603 (2021).
[method6] J. C. Xavier, F. C. Alcaraz, Precise determination of quantum critical points by violation of the entropic area law, Phys. Rev. B 84, 094410 (2011).
[method7] T. F. J. Bögels, R. Caracas, Critical point and supercritical regime of MgO, Phys. Rev. B 105, 064105 (2022).
[method8] K. Binder, Critical properties from Monte Carlo coarse graining and renormalization, Phys. Rev. Lett. 47, 693 (1981).
[succ_front] A. Ghosh, S. Biswas, B. K. Chakrabarti, Success of social inequality measures in predicting critical or failure points in some models of physical systems, Front. Phys. 10, 803 (2022).
[das] S. Das, S. Biswas, Critical scaling through Gini index, arXiv:2211.01281 (2022).
[manna] S. S. Manna, S. Biswas, B. K. Chakrabarti, Near universal values of social inequality indices in self-organized critical models, Physica A 596, 127121 (2022).
[lor] M. O. Lorenz, Methods of measuring the concentration of wealth, Publ. Am. Stat. Assoc. 9, 209–219 (1905).
[gini] C. Gini, Measurement of inequality of incomes, Economic Journal 31, 124–126 (1921).
[onsagar] L. Onsager, Crystal statistics. I. A two-dimensional model with an order-disorder transition, Phys. Rev. 65, 117 (1944).
[stf] D. Stauffer, A. Aharony, Introduction to Percolation Theory, Taylor & Francis, 2003.
[newmann] M. E. J. Newman, R. M. Ziff, Efficient Monte Carlo algorithm and high-precision results for percolation, Phys. Rev. Lett. 85, 4104 (2000).
[ferren] A. M. Ferrenberg, J. Xu, D. P. Landau, Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model, Phys. Rev. E 97, 043301 (2018).
[kolkata] A. Ghosh, N. Chattopadhyay, B. K. Chakrabarti, Inequality in societies, academic institutions and science journals: Gini and k-indices, Physica A 410, 30–34 (2014).
[diksha] Diksha, S. Kundu, B. K. Chakrabarti, S. Biswas, Inequality of avalanche sizes in models of fracture, arXiv:2303.10168 (2023; Phys. Rev. E, in press).
http://arxiv.org/abs/2307.02640v1
20230705201620
Unsupervised Sentiment Analysis of Plastic Surgery Social Media Posts
[ "Alexandrea K. Ramnarine" ]
cs.CL
[ "cs.CL", "68T50" ]
Valley-controlled transport in graphene/ WSe_2 heterostructures under an off-resonant polarized light M. Tahir August 1, 2023 ===================================================================================================== The massive collection of user posts across social media platforms is primarily untapped for artificial intelligence (AI) use cases based on the sheer volume and velocity of textual data. Natural language processing (NLP) is a subfield of AI that leverages bodies of documents, known as corpora, to train computers in human-like language understanding. Using a word ranking method, term frequency-inverse document frequency (TF-IDF), to create features across documents, it is possible to perform unsupervised analytics, machine learning (ML) that can group the documents without a human manually labeling the data. For large datasets with thousands of features, t-distributed stochastic neighbor embedding (t-SNE), k-means clustering and Latent Dirichlet allocation (LDA) are employed to learn top words and generate topics for a Reddit and Twitter combined corpus. Using extremely simple deep learning models, this study demonstrates that the applied results of unsupervised analysis allow a computer to predict either negative, positive, or neutral user sentiment towards plastic surgery based on a tweet or subreddit post with almost 90% accuracy. Furthermore, the model is capable of achieving higher accuracy on the unsupervised sentiment task than on a rudimentary supervised document classification task. Therefore, unsupervised learning may be considered a viable option in labeling social media documents for NLP tasks. § INTRODUCTION Cosmetic plastic surgery is an expensive yet increasingly popular set of procedures for both men and women, especially considering the ease of accessibility to surgeons, patient testimonials, and procedure-specific information, such as “before-and-after” visual aids, afforded by the Internet. The Internet is virtually a bottomless trove of textual data that are easily created at velocities of millions of posts per second across social media platforms. The exponential adoption of social media, largely facilitated by the wide global distribution of smart phone technology, is a known disseminator of beauty standards that is highly targeted to billions of users daily. Cosmetic surgery becomes a quick, permanent fix in adopting sought after beauty standards set by celebrities, models and social media “influencers”, relative to longer term alternatives such as dieting and exercise, or temporary alternatives such as adoption of fashion and cosmetic trends. Social media, while distributing information about plastic surgery procedures, also provides a setting for public social commentary on the election of undergoing these surgeries. Users across many platforms are able to freely communicate their sentiments on a broad scale and even as granular as commenting on another individual’s posted surgical outcomes. Different social media platforms exist for different purposes, and thus attract and form distinct user bases that comprise these Internet communities. Each text post is unique to a user, time-stamped, geo- located, and has the capability to possess multiple data formats including but not limited to links and images. Therefore, text posts from social media sites provide specific insight into the public’s opinion on cosmetic surgery. 
It is thus reasonable to assume that the text posts made on one platform can be used to distinguish user-derived text from other platforms. Curating massive corpora of text post documents from popular social media networks, Twitter and Reddit, is feasible with the implementation of AI web scraping technology. NLP is then leveraged to process and mathematically transform text to computationally understandable representations. ML methods can then identify patterns among the corpora that is an otherwise impossible task for a human to accomplish given the sheer volume of data. Deep learning (DL) methods, relying on powerful and speedy neural network technology, are then poised to use the NLP-curated and ML-processed data in order to accurately predict document class and user sentiment across the corpora. This study demonstrates that very simple, regularized neural network architectures effectively use unsupervised NLP to answer an easy to ask yet difficult to answer question, “how does the Internet feel about plastic surgery?” § LITERATURE REVIEW Opinion mining, better known as sentiment analysis, is an application of NLP that is particularly suited to extracting and assessing human opinion from textual data over the Internet and social media networks <cit.>. While spoken language offers context surrounding feelings and opinions through auditory cues such as tone and pitch, written language often broadly captures polarity in discussions, which can be leveraged by AI. Trained AI are able to detect polarity, whether negative, positive, or neutral, based on word associations captured mathematically by distance metrics. Distance is able to represent and capture context, giving connotative rather than denotative meaning to the words that ultimately decide whether a word is positive or negative <cit.>. Therefore, ranking word importance to use as term features for AI is critical to achieve high accuracy for sentiment analysis, particularly unsupervised sentiment assignment. This study adopts the information retrieval ranking function of TF-IDF, combining two methods by <cit.> and <cit.> to assign weights to important terms across a corpus of documents. Two popular unsupervised analyses are utilized in this study to support analyst judgment for assigning sentiment to social media posts that lack these labels. <cit.> and <cit.> proposed the “k-means” method as an unsupervised algorithm to group observations based on minimizing variance, or the sum of squared deviations of points, to generate clusters of points using centroided means. <cit.> formulated LDA, which uses Bayesian probabilistic modeling to generate topic representations of NLP corpora based on their documents and associated terms. LDA therefore will ultimately support clustering analysis in generating labels for subsequent sentiment analysis. More recently, <cit.> applied LDA to extract features of YouTube comments, proposing that semantic topic extraction can directly aid in sentiment scoring of comments as “negative” or “positive” through an NLP-hybrid framework when applied to fuzzy lattice reasoning. In conjunction with unsupervised analysis, this application is useful in identifying user groups within social media networks. <cit.> created an unsupervised approach to determine interests of social media users based on tweet semantics. 
An ML survey of unsupervised learning applications to NLP specifically highlights clustering algorithms such as k-means to address categorical outputs when labeled training data is not available for analytics. § METHODS §.§ Data Acquisition A Python 3.8 development version of the snscrape package was utilized to run command line bash scripts that scrape top and relevant social media posts from chosen Reddit subreddits and Twitter hashtags through March 2021, which serve as the document categories. Reddit queries from three different subreddits, “PlasticSurgery”, “CosmeticSurgery”, and “BotchedSurgeries”, had a maximum of 8000 or 4000 result scrapes of the latest posts based on the total reported posts on each subreddit’s webpage. Twitter queries for each of the following hashtags, “plasticsurgery”, “liposuction”, “lipinjections”, “botox”, and “nosejob”, had a maximum of 8000 result scrapes of top tweets as determined by Twitter’s internal algorithm. Each scrape was saved as a JSON Lines file and subsequently read into respective Pandas dataframes. Null data were replaced with empty strings using NumPy. All Reddit and Twitter dataframes were concatenated, respectively. §.§ Data Pre-processing The separate corpora dataframes were joined based on identification number, category, text, and title columns. Pre-processing steps for the combined corpus utilized a Python implementation of NLTK and custom functions to convert text to lowercase, remove punctuation, remove emojis and other Unicode characters, tokenize each document, remove English stop words, and stem each token. TF-IDF vectorization of the combined corpus employed additional pre-processing to drop duplicates and strip Unicode characters. §.§ Unsupervised Analysis The Scikit-learn TF-IDF Vectorizer was set to generate unigrams and subsequently fit to the combined corpus after randomly shuffling the dataframe. SciPy and Scikit-learn implementations of k-means using k of 8, 3, and 2, and a 2-component t-SNE multidimensionality rescale using cosine distance and a perplexity of 50.0 were applied to the TF-IDF matrix. Each method underwent at least ten initializations and between 300 and 1000 iterations before convergence. The Scikit-learn implementation of LDA was used for topic modeling on the TF-IDF matrix, generating the top 20 words for 8, 3, and 2 topics. All visualizations were generated using Matplotlib and Seaborn. §.§ Deep Learning The high-level Keras API on TensorFlow was utilized to build a Sequential dense neural network (DNN) with one input layer using rectified linear unit (ReLU) activation, one dropout regularization layer, and one output layer using softmax activation for both the document category classification and sentiment analysis tasks. For sentiment analysis tasks, 1-D temporal convolutional (1D-CNN) Sequential models were built with an input layer of 32 units and a 3x3 kernel, ReLU activation and He uniform initialization, a 2x2 1-D max pooling layer, followed by a dropout and flatten layer feeding to a dense layer with 128 units before the final dense output layer. Each model was compiled using the Adam optimizer and a categorical cross-entropy loss function. After Scikit-learn 80% train, 20% test splitting of the TF-IDF matrix and labels, the models were fit to shuffled data and trained over 15 epochs with an internal training validation split of 20%. For classification labels, each of the eight document categories was represented as an integer.
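The following condensed sketch shows how the unsupervised pipeline and the simple DNN described above could be assembled with scikit-learn and Keras. Parameter values follow the text where they are stated; the docs variable, the dense-layer width of 128 units for the DNN, and other unspecified settings are illustrative assumptions rather than the exact configuration used in this study.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.decomposition import LatentDirichletAllocation
import tensorflow as tf

# docs is assumed to hold the shuffled, pre-processed Reddit and Twitter documents.
vectorizer = TfidfVectorizer(ngram_range=(1, 1), strip_accents="unicode")
X = vectorizer.fit_transform(docs)                                    # TF-IDF matrix

kmeans = KMeans(n_clusters=8, n_init=10, max_iter=1000).fit(X)        # also run with k = 3 and 2
coords = TSNE(n_components=2, metric="cosine",
              perplexity=50.0).fit_transform(X.toarray())             # dense input for t-SNE

lda = LatentDirichletAllocation(n_components=8).fit(X)                # 8, 3 or 2 topics
terms = vectorizer.get_feature_names_out()
top_words = [[terms[i] for i in comp.argsort()[-20:][::-1]] for comp in lda.components_]

def build_dnn(n_features, n_classes, dropout=0.3, units=128):         # units is an assumption
    # One ReLU input layer, one dropout layer, one softmax output layer, as described above.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(units, activation="relu", input_shape=(n_features,)),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model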
For sentiment labels, analyst judgment of both the k-means clusters mapped into the t- SNE space and LDA topics was used to create three classes corresponding to negative, positive, or neutral sentiment. All labels were converted to categorical vectors using TensorFlow. Training and validation loss and accuracy were tracked over each epoch before evaluating each model on the test sets. Scikit-learn classification reports of precision and recall, and confusion matrices were generated for each test instance. § RESULTS After vectorization of the combined corpus, unsupervised analyses were performed to visualize the distribution of document categories. Figure 1 illustrates the most similar documents towards the middle of the t-SNE space, while outlier documents sparsely separate at the fringe of the main cluster. Documents from Reddit are more similar to each other than documents from Twitter, primarily falling in the mid to top right quadrant of the space while the Twitter documents cluster together along the bottom and left of the distribution. Documents related generally to plastic surgery, sourced from the plastic surgery Twitter hashtag or the Plastic Surgery and Cosmetic Surgery subreddits, are the most similar to each other and fall within the middle of the distribution. About half of the Botched Surgery subreddit documents strongly form a smaller cluster away from the rest, however the second half is well interspersed within the other subreddits toward the center of the distribution. The liposuction and lip injection documents are most dissimilar from each other of the Twitter-sourced hashtags, while the nose job hashtag is similar to the liposuction and half of the Botox hashtag. The other half of the Botox hashtag is more similar to the lip injection hashtag source. This half of the Botox hashtag, along with the lip injection hashtag, are more dissimilar than the general plastic surgery hashtag than the nose job and liposuction hashtags. The outlier documents are distributed in smaller yet distinguishable groups. There are three nose job Twitter hashtag outlier groups where two are somewhat related to the larger group, but one is more related to the main Botox hashtag group. There are many liposuction tweets that are more closely related to the Reddit documents than to Twitter documents. There is one Botched Surgery subreddit outlier group that is more related to the liposuction tweets. Finally, there are two strongly separated groups of mixed documents, however primarily comprised of Plastic Surgery and Botched Surgery subreddit documents. One of these falls within the main distribution but largely distanced in its entire circumference from the general plastic surgery tweets and subreddit posts, and the second falls completely out of the main distribution towards the most negative t-SNE 1 space, closer to a polarized outlier group of the twitter lip injection hashtag. Centroid clustering was applied to the t-SNE space using a k-means approach. The eight generated clusters highlight the strongest outlier document groups, the differences among the twitter hashtags, and the similarities among both the Reddit documents and the general plastic surgery documents, as illustrated in Figure 2. Cluster 3 is a “catch all”, predominantly mapping to the Reddit documents and general plastic surgery related documents. Cluster 0 mapped directly to the Botched Surgery outlier group, cluster 1 to the Botox hashtag, cluster 7 to the nose job hashtag, and cluster 5 to the liposuction hashtag. 
Cluster 2 seemed to map to documents that fell directly central within the Botox and lip injection document space but were not sourced from either of those hashtags. Clusters 3 and 6 highlight strong outlier groups in the t-SNE space, where the former maps to the outliers of the general plastic surgery documents, and the latter maps to the fringe outliers of the liposuction hashtag tweets. In order to stratify the documents into three balanced groups corresponding to positive, negative, or neutral sentiment, k-means clustering was performed again on the TF-IDF matrix using k equal to 3 and 2. Appendix A demonstrates that the most significant differences between the documents using centroid clustering is between the main distribution and the large Botched Surgery outlier group. Therefore, LDA was employed to further support analyst judgement in unsupervised sentiment labeling. Figure 3 depicts the results of the top 20 terms for eight topics across the combined corpus, corresponding to the eight document categories. While the majority of the top terms can be considered neutral, there are a few that can be mapped back to the k-means top term results, shown in Table 1, in order to assign sentiment labels to each k-means cluster. From Topic 4, “delete” and “improve”, and from Topic 8, “remove”, “addict”, and “stubborn”, are key terms indicative of negative sentiment if also highlighted by k-means. Reducing the LDA to three or two topics still captures these negative connotation terms. Therefore, k-means clusters 0, 4, and 5 were assigned a negative sentiment label, clusters 1, 6, and 7 were assigned a neutral sentiment label, and clusters 2 and 3 were assigned a positive sentiment label. To correct for class imbalance, any Botched Surgery subreddit documents not assigned to negative sentiment were reassigned a negative label based on that particular subreddit’s culture of mocking and shaming plastic surgery procedure outcomes subjectively deemed poor. §.§ Predicting on Supervised versus Unsupervised Labels A very simple one-layer DNN architecture, utilizing 30% dropout regularization, was used to test supervised document category classification versus the unsupervised sentiment analysis. Appendix C illustrates that training accuracy increases with epochs; however, validation accuracy stagnates. Training loss decreases with epochs, but validation loss increases in both classification and sentiment analysis cases. Table 2 compares the performances between classification and sentiment analysis tasks on the combined corpus. Overall, the model was able to achieve better performance on unsupervised sentiment analysis versus supervised document classification. Appendices D and E compare the harmonic mean and confusion matrices of the two learning tasks. For classification, there was a class imbalance for the lip injections Twitter hashtag, therefore the F1-score was low and misclassification rate was high relative to the performance on the other labels. The model performed best on correctly classifying the nose job Twitter documents. For sentiment analysis, although there was class imbalance skewed towards over-representation of the positive-labeled documents, this had no noticeable effect on model performance. The model performed best on predicting neutral sentiment and relatively worse on predicting negative sentiment, however the precision and recall metrics between the three sentiments are similar. 
Almost all of the neutral documents were correctly predicted as such, and less than 20 §.§ Sentiment Analysis Given that a simple DNN could achieve near 90% accuracy on unsupervised sentiment analysis, experiments varying dropout regularization and use of a temporal convolutional neural network were conducted. Table 3 summarizes the training, validation, and test results of these experiments to predict sentiment. Increasing dropout rate in both model cases increases both validation and test accuracies overall. However, increasing dropout rate for the 1D-CNN does not improve validation or test loss compared to using no dropout regularization, and instead caused the validation loss to behave erratically over training epochs (Appendix G). Using dropout had no substantial effect on validation accuracy over epochs of the 1D-CNN. Appendix F illustrates that using high dropout rates for the DNN shrinks the gap between training and validation metrics at each epoch, notably shrinking validation loss despite the similar upwards trending loss over epochs in both zero and 60% dropout cases. Appendix H displays the test results of the sentiment analysis comparing dropout regularization between the two models. The 1D-CNN using 60% had the highest true classification rate of positive sentiment, while the 1D-CNN using 30% dropout had the lowest. The DNN using 60% dropout had the best classification rate of negative sentiment, while the DNN using no dropout performed relatively poorly on correctly classifying negative sentiment. The 1D-CNN using 30% dropout correctly predicted neutral sentiment at the highest rate, and the DNN using no dropout correctly predicted neutral sentiment at the lowest relative rate among the models. All models displayed almost negligible misclassification rates bidirectionally between negative and neutral sentiment. § DISCUSSION While it is reasonable to assume that virtual communities formed over social media forums tend to attract like-minded opinions, this over-generalization may conflate the outlier user posts within each group. Herein, it is demonstrated that applying unsupervised dimensional reductions and clustering algorithms to an extremely large and heterogenous corpus of Twitter and Reddit text data is a viable option to capture user sentiment based on word rankings. In conjunction with topic modeling, these methods may be employed to label noisy text data in a semi-supervised manner. The Twitter and Reddit documents mostly separate in the t-SNE space, suggesting that the types of posts, and therefore the user bases, can distinguish between the two social media networks. The relative homogeneity of the Reddit to Twitter distribution supports the idea that Reddit posts, and therefore users, are somewhat more similar to each other that Twitter users. This is likely a function of subreddits being niche internet communities with users sharing multiple posts within the same subreddit, and perhaps even between the three sampled plastic surgery subreddits since there are no other major related communities that were found on Reddit pertaining to plastic or cosmetic surgery. It is a fair assumption that Twitter has a more heterogenous representation because hashtags do not act as niche communities the way subreddits are structured to. 
Given that the sourced tweets are stratified mainly by procedure-related hashtags, it is unsurprising that non-invasive, injection-based procedures cluster closely together, such as the lip injection and Botox clusters, while invasive procedures such as nose job and liposuction cluster together. k-means cluster 1 therefore must be representative of non-invasive or injection-based procedures. That nose job and Botox documents are still relatively close in distance in the t-SNE space indicates relatedness due to terms associated with facial procedures. Interestingly, these four hashtags form multiple smaller outlier clusters in the t-SNE space, probably indicative of underlying sentiment distributions given the k-means mapped analysis and the strong predictive power of the neural networks. Despite the biased sourcing used for these tweets, the general plastic surgery Twitter hashtag documents almost uniformly span all of the procedure-specific Twitter documents in the t-SNE space. Surprisingly, the unsupervised generated labels seemingly allowed the simplest of neural networks to outperform a supervised NLP task, which may suggest that the content of plastic surgery related documents sourced from Twitter and Reddit is better captured by analyst-judged sentiment and not by the empirical document source. This further suggests that the term ranking methodology employed, together with topic modeling, generated strong indicators of social media user opinion, effectively grouping words based on cosine distance. The temporal convolutional network, a model theoretically better suited to capturing high and low dimensionalities of sequential text data, showed negligible improvement over the DNN in terms of accuracy, loss, and sentiment true classification rate. In general, both neural networks overfit the training data, averaging about 10% differences in accuracy between training and test instances. Training and validation instances indicate that both models would benefit from early stopping well before 10 epochs in order to achieve higher validation accuracy and lower validation loss; it is assumed that the test metrics would follow suit. All models had comparably high precision and recall for both neutral and positive sentiment, although the F1-scores for negative sentiment prediction were not dramatically lower. Given absolute true classifications, the models overall were able to distinguish negative from neutral sentiment very well. For each model, most of the misclassifications occurred between positive and negative sentiment, followed by positive and neutral sentiment. The top-ranking terms therefore must strongly segregate neutral from other sentiment in the case of plastic surgery, indicative of the volume of terms used and associated with the medical procedures rather than with user opinions of those procedures or their outcomes. That the models struggled most with misclassifications between positive and negative sentiment could be indicative of vernacular and colloquial usage of terms mixed with denotative usage, confounding learning and thus impairing the decision boundary between these two sentiments. Additionally, it may be useful to use n-grams rather than unigrams to better define terms, such as "beauty" and "change", that could realistically fall into any sentiment category for plastic surgery depending on the context in which they are used.
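The n-gram suggestion is straightforward to realise with the same vectorizer used for the unigram representation. The following is a minimal sketch; the ngram_range and the feature cap are illustrative choices rather than settings used in this study.

from sklearn.feature_extraction.text import TfidfVectorizer

# Unigrams and bigrams, so that context-dependent terms such as "beauty" or
# "change" are partly disambiguated by their neighbouring tokens.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=20000, stop_words="english")
# X = vectorizer.fit_transform(documents)  # documents: list of pre-processed strings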
The high predictive capacities of these simple models indicate that favored NLP recurrent neural networks (RNNs), including gated recurrent unit networks and long short-term memory networks, may not perform much better for sentiment analysis of these social media-sourced documents, given the abbreviated length of each document and the vanishing gradient problems frequently associated with RNNs. While it may be interesting to pursue future work with different model architectures, the results from the temporal convolutional network, considered an advancement over simple RNNs given its ability to capture spatio-temporal information, indicate that it may be better to invest effort in the curation, vectorization, and thus representation of top terms, using fewer but more polarizing vocabulary terms to model the data. Additionally, it may be fruitful to capture a wider breadth of hashtags from Twitter, more posts from the subreddits, and even venture to other social media networks for relevant plastic surgery documents in order to expand the representation of the user base, and thus of user opinion.
http://arxiv.org/abs/2307.02444v1
20230705171457
Foundations of Differential Calculus for modules over posets
[ "Jacek Brodzki", "Ran Levi", "Henri Riihimäki" ]
math.AT
[ "math.AT", "55U99, 18F30" ]
Persistence modules were introduced in the context of topological data analysis. Generalised persistence module theory is the study of functors from an arbitrary poset, or more generally an arbitrary small category, to some abelian target category. In other words, a persistence module is simply a representation of the source category in the target abelian category. Unsurprisingly, it turns out that when the source category is more general than a linear order, then its representation type is generally wild. In this paper we introduce a new set of ideas for local analysis of persistence module by methods borrowed from spectral graph theory and multivariable calculus. Quantifying Poynting flux in the Quiet Sun Photosphere [ August 1, 2023 ====================================================== The term persistence module emerged as a concept in early work of Carlsson and Zomorodian <cit.>, and motivated much of the theoretical developments in persistent homology and topological data analysis. Generalised persistence modules were introduced by Bubenik, de Silva and Scott in <cit.>, where a persistence module is defined to be any functor F from a preordered set to an arbitrary category . In most setups the preordered set is assumed to be a poset (i.e. an antisymmetric preordered set) and the target category is taken to be abelian - typically the category of finite dimensional vector spaces over a field k. Working with arbitrary posets, particularly such that have a natural topology like (, ≤), is in practice too general, and from a computational point of view, impossible. Chazal et. al. <cit.> introduced the concept of tame persistence modules for functors M (, ≤) →. The concept was generalised by Scolamiero et. al. <cit.> to functors on (^n, ≤), and generalised further to arbitrary posets by Miller <cit.>, who defines a persistence module M→ to be tame, if there exists a finite poset , a poset map α→, and a persistence module N→, such that M = N∘α. Specialising Miller's definition to modules on (,≤), such a module M has the property that there is a finite set of real numbers t_1<t_2<⋯ < t_n, depending on M, such that for any t_i≤ a ≤ b ≤ t_i+1, the homomorphism M(a≤ b) is an isomorphism. Functors from finite categories to the category of vector spaces are referred to in the literature as representations of the categories in question. The functor category from a finite category to an abelian category is itself an abelian category. It is in fact isomorphic to the module category, in the ordinary sense, over the category algebra k <cit.>. Thus persistence modules over finite posets can be thought of as ordinary modules over the poset algebra k (see Section <ref>). For category algebras over finite posets one has Drozd trichotomy theorem <cit.>, which states that any finite dimensional algebra is either of finite representation type, i.e. has finitely many isomorphism classes of indecomposable modules, or otherwise of infinite representation type, in which case they are either of tame representation type or of wild representation type. Notice that the term “tame representation type” refers to the algebra and not any module over it. This is not to be confused with the notion of a “tame persistence module”, as defined above. Tame persistence modules over (, ≤) can be factored, by definition, through a finite linearly ordered poset, and poset algebras over such posets are of finite representation type. 
Hence modules over (, ≤) are classifiable by their persistence diagrams (or equivalently persistence barcodes, or persistence landscapes) up to isomorphism. Traditional algebraic tools employed in the study of generalised persistence modules, such as decomposition into indecomposable modules, where possible, free presentations, and resolutions (see for example <cit.>) all share the same difficulty. Arbitrary finite posets are typically of infinite (tame or wild) representation type, which makes classification practically impossible. Thus, from the point of view of topological data analysis, finding alternative methods of extracting computable information out of persistence modules is desirable; some notable examples of this line of work are <cit.>. In this article we introduce a new approach to the study of persistence modules. Our starting point is essentially representation theoretic. Namely we consider isomorphism classes of modules over the category algebra of a finite poset. However, instead of attempting to understand persistence modules globally, we propose a calculus of persistence modules, that is a methodology that enables one to extract local information. Indeed, in exploring properties of a nice real valued function of several variables one typically employs standard techniques of multivariable calculus, which allow studying the function locally. The notions of gradient, divergence and Laplacian come to mind in this context. Inspired by these ideas, there is a discrete version of multivariable calculus for weighted directed graphs. Our treatment of persistence module theory is a powerful generalisation of the ideas of discrete calculus on graphs. In particular we shall define the notion of a gradient for persistence modules, as well as concepts corresponding to the divergence and the Laplacian. By contrast to standard representation theoretic analysis of persistence modules, where the ground poset remains fixed, as does the representation type of its poset algebra, the construction of the gradient and other operations, analogous to classical multivariable calculus, for persistence modules allows one to consider modules locally, namely on sub-posets of the original poset, informed by the behaviour of the gradient. For instance, in <cit.> the authors study modules over commutative ladders. They show that a commutative ladder of any type is representation-finite if and only if its length is at most 4. In Section <ref> we consider two types of commutative ladders of any length and show that the gradient of any module over such ladders can be written as a sum of modules over posets of finite representation type. Another example is motivated by <cit.>, where the authors study certain module categories over finite 2-dimensional grids which they refer to as filtered hierarchical clustering. An important corollary of their main results <cit.> states that m× n grid posets are of finite or tame representation type only for a very small number of cases, and are of wild representation type in all other cases. In Section <ref> we demonstrate how our approach easily gives computable information about modules over grid posets of any size. To state our main theorems some preparation is required. We provide a brief description here, and more details in Section <ref>. Calculus on weighted directed graphs is a discrete calculus for functions whose domain is the vertex set of a finite graph with weighted directed edges <cit.>. 
In this context the gradient is an operation which takes functions on the vertex set of a graph to functions on its edge set. The line digraph of a directed graph is a directed graph , whose vertices are the edges of . Thus the gradient can be thought of as an operator that takes functions on the vertices of to functions on the vertices of . We take a similar point of view. Our gradient will take a persistence module on a finite poset to a difference of persistence module on a poset associated to , where is obtained from using the line digraph of the Hasse diagram of . The graph theoretic gradient is defined on an edge as the difference between the value at its target vertex and the value at its source vertex <cit.>. To bring this idea to the universe of persistence modules we therefore need an additive structure with additive inverses. Since we are only interested in persistence modules up to isomorphism, it makes sense to consider the Grothendieck ring (k) of isomorphism classes of modules over the poset algebra k, where the sum and product operations are given by direct sum and tensor product (over the field k), respectively. This allows us to define the gradient as the difference ∇ϕ^*-β^* of two natural homomorphisms ϕ^*, β^*(k)→(k), (the front and back morphisms) that captures the variation of the module along each indecomposable edge in , i.e. a relation in the Hasse diagram of . The indecomposable relations in can be thought of as discrete analogs of infinitesimally small moves in a metric space. Our categorical setup fits directly with the classical definition (See Example <ref>). With this setup we can now state our first result. [Theorem <ref>] Let be a finite poset. Then the gradient operator ∇(k)→(k) satisfies the following properties: * ∇ is a well defined group homomorphism. * If [M]∈(k) is locally constant then ∇[M]=0. * ∇ satisfies a Leibniz type rule, i.e. for all [M], [N]∈(k), the identity ∇([M]· [N]) = ∇[M]·ϕ^*[N] + β^*[M]·∇[N] holds in (k). Furthermore, ∇ is natural with respect to restrictions to sub-posets, namely, if ι⊆ is a sub-poset, then ι^*∘∇_= ∇_∘ι^*. Here by a locally constant module we mean a functor M→ that takes every morphism in to an isomorphism of vector spaces. By analogy to ordinary calculus, an obvious question is whether a vanishing gradient of a module M implies that it is locally constant. The answer turns out to be not quite so straightforward. We say that a digraph is a directed tree if between any two distinct vertices a and b in there is at most one directed path. Recall that for any digraph =(V,E), a maximal tree in is a subgraph ⊆ on the same vertex set V and with edges E'⊆ E such that =(V, E') is a directed tree and such that E' is maximal with respect to this property, i.e. adding extra edges from E to E' will produce a subgraph that is not a directed tree. We say that a poset is line connected if the line digraph of its Hasse diagram is connected. [Theorem <ref>] Let be a finite line connected poset, and let be a line connected maximal tree in its Hasse diagram _. Let _⊆ denote the sub-poset generated by . Let M∈ k be a module, and let M_ denote the restriction of M to _. Assume that ∇[M_]=0 in (k_). Then the following statements hold. * For any objects u,v∈() = (_), there is an isomorphism α_u,v M(u)→ M(v), such that α_u,u = 1_M(u) and α_v,wα_u,v = α_u,w. * For every pair of indecomposable morphisms u≤ w and s≤ t in _, α_w,t∘ M(u≤ w) = M(s≤ t)∘α_u,s. 
* M_ is locally constant if and only if M(u≤ v) is an isomorphism for some indecomposable relation u≤ v in _. Furthermore, if M∈ k is a module such that for any indecomposable relation u≤ v in _ there exists an isomorphism α_u,v M(u)→ M(v) that satisfy Conditions (<ref>) and (<ref>), then ∇[M_]=0. If is not generated by a tree, it is rather easy to find examples of modules on that are not the gradient of any k-module. Thus such modules are, in the appropriate sense, not integrable. In particular, in this situation it is possible to construct k-modules with a vanishing gradient that do not satisfy Conditions (<ref>) and (<ref>) of Theorem <ref>. This is the reason why we restrict modules to a maximal sub-tree. Of course similar statements can be made for any sub-tree of with the obvious modification to the conclusions. This stands in sharp contrast to ordinary calculus, where any differentiable function whose gradient vanishes on a domain is constant in the interior of that domain, and any real valued continuous function on a reasonable domain is integrable there. Theorem <ref> thus provides a sharp consequence of the vanishing of the gradient of a module M∈ k on a maximal tree . It implies that for such a module all point modules (the values on objects) are abstractly isomorphic, and all morphisms induced by applying M to indecomposable morphisms in _ can be identified through a collection of non-canonical isomorphisms between point modules. We next examine the implication for a pair of k-modules of having isomorphic gradients. A typical element in the Grothendieck ring (k), which is ordinarily referred to as a virtual module, is a difference of two equivalence classes of genuine modules. For M∈ k, define the rank function (M)()→ to be the function that takes a relation x≤ y to the rank of the homomorphism M(x≤ y). The rank function clearly extends by additivity to (k), since it depends only on the isomorphism type of a module. Notice also that our definition of the rank function includes also the dimensions of point modules, which appear as the rank of M applied to identity morphisms. It is well known that the rank invariant is a complete invariant for modules over (,≤) or any other linear order, but this fails for more general posets <cit.>. However the rank invariant is still an important invariant of modules. The next two theorems give new conditions on some of the local behaviour of the rank invariant. [Theorem <ref>] Let be a finite poset. Let [X]=[M]-[N]∈(k) be an element with M, N∈ k. * Assume that ∇[X] = 0. Then [X](u_0<v_0) = [X](u_1<v_1) for any pair of comparable objects (u_0,u_1) < (v_0,v_1) in . Assume in addition that is line connected, let be a line connected maximal tree for _, and let _⊆ be the sub-poset generated by . If [X] has a vanishing gradient on , then [X] has the following properties: * It is constant on all identity morphisms in . * It is constant on all indecomposable relations in _, namely for any pair of indecomposable relations u_0<u_1, and v_0< v_1 in _, one has [X](u_0<u_1) = [X](v_0<v_1). Theorem <ref> falls short of stating that isomorphic gradients imply equality of rank invariants. Indeed Example <ref> shows that this is not the case. A natural question is whether a converse implication is true, namely whether equal rank invariants imply that the modules in question have isomorphic gradients. This question is answered in the negative in Example <ref>. 
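For concreteness, the rank function above is directly computable from matrix data. The following is a small sketch over the three-element chain a < b < c, with a module chosen purely for illustration (it is not an example taken from this paper); the rank invariant records the rank of M(x ≤ y) for every relation, with identity morphisms recording the dimensions of the point modules.

import numpy as np
from numpy.linalg import matrix_rank

# M(a) = k^2, M(b) = k^2, M(c) = k, with structure maps given by matrices.
M_ab = np.array([[1.0, 0.0],
                 [0.0, 0.0]])   # M(a <= b)
M_bc = np.array([[1.0, 1.0]])   # M(b <= c)
M_ac = M_bc @ M_ab              # functoriality: M(a <= c) = M(b <= c) M(a <= b)

rank_invariant = {
    ("a", "a"): 2, ("b", "b"): 2, ("c", "c"): 1,   # ranks of identities = dimensions of point modules
    ("a", "b"): matrix_rank(M_ab),
    ("b", "c"): matrix_rank(M_bc),
    ("a", "c"): matrix_rank(M_ac),
}
# {('a','a'): 2, ('b','b'): 2, ('c','c'): 1, ('a','b'): 1, ('b','c'): 1, ('a','c'): 1}

On a linear order such as this chain the rank invariant is a complete invariant, consistent with the remark above; for general posets the theorems above describe the weaker, local constraints that a vanishing or shared gradient imposes on it.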
In discrete calculus for finite weighted digraphs one considers real valued functions on vertices or edges as finite dimensional real vectors spaces, and as such one has the ordinary inner product defined on the spaces of vertex and edge functions, ⟨ -, -⟩_V and ⟨ -, -⟩_E respectively. This pairing allows one to define divergence and Laplacian for digraphs. The adjoint operator ∇^*, also called the divergence operator is then defined by the requirement that the relation ⟨∇^*(f), g⟩_V = ⟨ f, ∇(g)⟩_E holds. The theory we propose here can be applied to a certain setup in standard calculus on graphs (See Example <ref>). For modules M, N∈ k define a pairing ⟨ [M], [N]⟩__k(_k(M, N)). Extend the definition to a pairing on (k) and similarly to a pairing ⟨ -,-⟩_ on (k). A related pairing, that in a sense appears more natural, is the Euler pairing. For two modules M, N∈ k, consider the graded vector space ^*_k(M,N). Since is assumed to be a finite poset and modules in k are assumed finitely generated, ^i_k(M,N) is a finite dimensional vector space for all i and it vanishes for i sufficiently large. Hence it makes sense to consider its Euler characteristic. Thus we define an Euler pairing χ_([M], [N]) = χ(^*_k(M,N)), and extend to the corresponding Grothendieck group by additivity. If the Hasse diagram of is a directed tree, thus in particular an acyclic quiver, then ^i_k(M,N) vanish for i>1 <cit.>, and the Euler pairing can be computed as an ordinary inner product of dimension vectors <cit.>. The following proposition gives an easy relation between the two types of pairings. [Proposition <ref>] Let be a finite poset, and let M, N∈ k be modules. Let ⟨-,-⟩_ and χ_(-,-) be the Hom pairing and the Euler pairing respectively. Let 0→ P_n→⋯→ P_0→ M→ 0, and 0→ N→ I_0→⋯ I_n→ 0 be a projective resolution for M and an injective resolution for N. Then χ_([M], [N]) = ∑_i=0^n(-1)^i⟨ [P_i], [N]⟩_ = ∑_i=0^n(-1)^i χ_([P_i],[N]), and χ_([M], [N]) = ∑_j=0^n (-1)^j⟨ [M], [I_k]⟩_=∑_j=0^n (-1)^jχ_([M],[I_k]). Furthermore, write P_i ≅⊕ϵ^i_vF_v and I_j = ⊕δ_u^i G_u, with ϵ_v, δ^j_u∈ and v, u∈, and where F_v and G_u are the indecomposable projective determined by v and the indecomposable injective determined by u. Then χ_([M],[N]) = ∑_v∈∑_i=0^n(-1)^iϵ_v^i_n N(v) = ∑_u∈∑_j=0^n(-1)^jδ_u^j_n M(u). We next offer our analogs of the divergence and the laplacian in our context. Considering the (non-symmetric) Hom pairing as an analog of an inner product, the left and right Kan extensions offer themselves as a natural way of constructing a left adjoint ∇^* and a right adjoint ∇_* of the gradient. Thus we define left and right divergence. Example <ref> demonstrates that with the right categorical setup for ordinary weighted digraphs, the left and right divergence operators coincide with each other and with the ordinary definition of the divergence. In general we have the following. ⟨∇[X], [Y]⟩_ = ⟨ [X], ∇_*[Y]⟩_ and ⟨∇^*[X], [Y]⟩_ = ⟨ [X], ∇[Y]⟩_. In particular ⟨∇^*∇[X],[X]⟩_=⟨∇[X],∇[X]⟩_=⟨ [X],∇_*∇[X]⟩_. While the Euler pairing offers some better general properties, the adjointness relations above do not hold for it in general. We shall elaborate on this point in Section <ref>, where we show in fact that our basic constructions apply in a much more general context than modules categories over finite posets. Composing the gradient with the left and the right divergence, we obtain left and right Laplacians ^0 and _0 , respectively, for persistence modules. 
This allows us to define left and right harmonic modules, namely such that their corresponding Laplacians vanish. Laplacians, higher dimensional generalisation and a possible setup for Hodge theory for persistence modules will be studied in future work. The paper is organised as follows. Section <ref> contains all the technical background material we use throughout the paper. In Section <ref> we prepare the general setup for the construction of the gradient and the divergence in the context of module categories. In Section <ref> we study some properties of the Hom pairing and the Euler pairing in the context of modules over category algebras. Section <ref> is dedicated to the study of the gradient of virtual modules over finite posets and the proof of Theorems <ref>, <ref> and <ref>. In Section <ref> we specialise the Hom and Euler pairings in for modules over posets and prove Proposition <ref>. Section <ref> is dedicated to the (left and right) divergence, the corresponding Laplacians, and adjointness relations with the gradient. Finally in Section <ref> we present applications to modules over commutative ladder posets, and to filtered hierarchical clustering modules over commutative grid posets. The authors are grateful to E. Meir for many helpful conversations on modules over category algebras and for finding an error in an early version of this paper. § PRELIMINARIES In this section we record the definitions, notation and all preliminary material that will be used throughout the paper. §.§ Persistence Modules Let be a small category associated to a poset. Modern theory of persistence modules studies the structure and invariants of functors from to some target category, often the category of vector spaces over a field k. For the purpose of this preliminary discussion the target category may be any abelian category . A category is said to be finite if the set of all its morphisms forms a finite set; this implies that the object set is likewise finite. If is an abelian category, then the functor category ^ whose objects are functors from to and whose morphisms are natural transformation is also an abelian category. If Φ→ is a functor then pre-composition with Φ induces a functor Φ^*^→^. This functor, which is sometimes referred to as restriction along Φ, will be used in our definition of the gradient in Section <ref>. The restriction Φ^* generally has left and right adjoints given by the left and right Kan extensions, respectively (See Section <ref>). These will be used in Section <ref>. A good reference for the general theory of functor categories is <cit.>. Let be a finite poset, let k be a field, and let be the category of finite dimensional vector spaces over k. A persistence module on is a functor M→. Let be a small category. A functor M→ may be thought of as a representation of the category over k. This is a particularly useful approach when the category is finite. Let k be a field and let be a finite category. The category algebra k is the unital algebra generated as a k-vector space by all morphisms x → y in (including identities). Two morphisms multiply by composition, and non-composable morphisms multiply to 0 <cit.>. The unit is the k-algebra map η k→ k that sends 1∈ k, to the element 1∈ k, that is the sum over all objects in of the identity relations 1_x x→ x. The following theorem due to Mitchell allows us to alternate between functor categories and categories of modules, when the category in question is finite. [<cit.>] Let be a finite category and let k be a field. 
The category k of modules over the category algebra k and k-linear homomorphisms is equivalent to the category of functors M→ and natural transformations between them. The equivalence in Mitchell's theorem is given as follows. If M→ is a functor, let 𝐌⊕_x∈()M(x). The (left) action of k on 𝐌 is determined by φ·𝐚 = M(φ)(a_y), where 𝐚 = ∑_x∈ a_x∈𝐌, a_x∈ M(x). Conversely, if 𝐍 is a left k-module, let N→ be the functor that takes an object x∈ to N(x) 1_x·𝐍 and a morphism φ y→ z to the homomorphism N(φ) that maps 1_y·𝐚∈ 1_y·𝐍 to φ·𝐚∈ 1_z·𝐍. Any poset may be considered as a small category, and thus admits a category algebra k, which we will refer to as the poset algebra. If is a finite poset, then Mitchell's theorem gives an alternative way of thinking of persistence modules, namely as ordinary modules over the poset algebra k. Both approaches are useful. The standard modern treatment of one parameter persistence studies modules over the poset (, ≤), i.e. the poset given by the real numbers with their natural order. Higher dimensional analogs consider modules where the parametrising poset is ^n. These posets are not discrete and certainly not finite. However, for all practical purposes persistence modules are studied in a tame setup. This concept was introduced by Chazal et al. <cit.>. We will use tameness as defined by Miller in <cit.> for general posets. A persistence module M is said to be tame if M(p) is finite dimensional for every object p∈ and if, roughly speaking, admits a finite partition into sub-posets on which M is constant. More precisely, if M is a tame persistence module on an arbitrary poset (possibly infinite and with nontrivial topology) Miller defines an encoding of M to be a module N∈ k for some poset , together with a functor π→, such that M = π^*(N); see Figure <ref>. The encoding is said to be finite, if the poset is finite and the values of N on its objects are finite dimensional <cit.>. Miller also gives a necessary and sufficient condition for a persistence module to admit a finite encoding <cit.>. This essentially means that much of the information encoded in the module can be read off from a finite encoding. This point of view fits perfectly with our setup, in which posets will always be assumed to be finite. The following general terminology is rather standard and will become useful in our analysis. Let be a small category. We say that a module M∈ k is * locally constant if k acts on M by isomorphisms. * virtually trivial if any non-scalar element in k acts on M trivially. Equivalently, if modules are considered as functors M→, then M is locally constant if it takes any morphism in to an isomorphism, and virtually trivial if any non-identity morphism in induces the zero homomorphism. Let be any small category and let M, N∈ k be virtually trivial modules. Let _ denote the discrete category (only identity morphisms) with objects set (), and let j__→ denote the inclusion. Then there is a natural isomorphism of groups J_k(M,N) →_k_(j_^*(M), j_^*(N)), where on both sides denotes the set of natural transformations between the respective modules, and j_^* denotes pre-composition with j_. Since M and N are virtually trivial, a natural transformation from M to N amounts exactly to a homomorphism of vector spaces M(x)→ N(x) for each x∈(). Thus J is an isomorphism of sets. Since the functor categories are abelian and j^*_ is additive, the lemma follows. Let be any small category and let M, N∈ k be virtually trivial modules. 
Then M≅ N if and only if M(x)≅ N(x) for each object x∈(). §.§ Line digraphs A central idea in our work is to define operators on persistence modules whose output also lies in persistence modules but possibly over a different parameter poset. This is where the classical graph theoretic construction of the line digraph becomes useful. A directed graph, or a digraph, is given by a vertex set V and an edge set E ⊂ V × V. A digraph has two maps s,t E V called the source and target, respectively, such that if e=(u,v) is an edge, then s(e)=u and t(e)=v. Definition <ref> excludes the possibility of two vertices connected by more than one edge in any direction, but it does allow reciprocal connections. A digraph is said to be acyclic if there is no directed path that starts and ends at the same vertex. While acyclicity is not assumed throughout, in the context of this work we do assume that all digraphs are loop-free, namely that if e=(u,v)∈ E, then u≠ v. Given a digraph = (V,E), the associated line digraph = (V, E) is the digraph with vertices V = E and with a directed edge (u,v)→ (w,z) in E whenever v=w. The directed edges in E are denoted by ordered triples (u,v,z). The line digraph construction is clearly functorial with respect to inclusions of subgraphs. Namely, if '⊆, then '⊆. A digraph is said to be connected if, ignoring edge direction and collapsing bidirectional edges into a single undirected edge, one obtains a connected graph. The line digraph associated to a digraph is not generally connected. But if is a connected line digraph of some digraph , then is connected. The following concept is useful for our purposes. Let be a digraph with a line digraph . A subgraph _0⊆ is said to be a line component of if there exists a connected component _0⊆ such that _0 is the line digraph associated to _0. A digraph is said to be line connected if is connected, or equivalently if the only line component of is itself. Clearly any digraph is a union of its line components (which are not generally disjoint in ). The line components of can be constructed from the connected components of by considering for each component _0, the subgraph of consisting of all edges corresponding to the vertices of _0 (See Figure <ref>). Any poset is uniquely determined by its associated Hasse diagram _, which is the transitive reduction of the acyclic digraph underlying the poset . Namely the digraph _ has as vertices the objects of , and directed edges are the non-identity indecomposable relations in , namely those relations u<v in such that there is no intermediate relation u< y < v. The corresponding edge in the digraph _ is then denoted by (u,v). By taking the transitive closure of _ one recovers . The Hasse diagram _ can be thought of as encoding the analog in of infinitesimally small moves in a metric space. A poset is said to be line connected if its Hasse diagram _ is line connected. §.§ The Grothendieck ring As is the case for any category of modules, many questions about persistence modules can be reduced to questions about isomorphism classes. Hence we work specifically with isomorphism classes of modules over poset algebras. In this context the concept of the Grothendieck ring is very useful. Let k be a field, let A be a unital k-algebra and let A denote the category of finitely generated left A-modules. Then A is a monoid with operation given by direct sum, and thus the set of isomorphism classes of A-modules forms a commutative monoid where [M]+[N] [M⊕ N]. 
The Grothendieck group (A) of finitely generated A-modules is the group completion of this monoid. Namely, it is the quotient of the free abelian group generated by isomorphism classes of A-modules, subject to the relation (<ref>). Thus any element [X]∈(A) can be written uniquely as a difference [X] = [M]-[N], where M, N∈ A. In particular, an equation of the form [M]-[N] = [U] - [V] in (A) where M, N, U, V∈ A should be interpreted as M⊕ V ≅ N⊕ U. We refer to elements of (A) as virtual modules. Tensor product over k, with the diagonal action of A on the factors, turns (A) into a ring, known as the Grothendieck ring, where [M]·[N] = [M⊗ N]. If f^* B→ A is a functor that preserves tensor products, then f^* induces a ring homomorphism f^*(B)→(A). This is the case, for instance, if f A→ B is a homomorphism of k-algebras that induces the restriction f^* on module categories, but for our applications this will not generally be the case (See Remark <ref>). An important example of this construction is given by the Grothendieck ring (k) of finite dimensional vector spaces over k, which is isomorphic to the ring of integers . A much more interesting family of examples that is a subject of extensive study arises in representation theory. If G is a group and kG its group algebra, then (kG) is the Grothendieck ring of all virtual linear representations of the group G over k. Let A be a unital k-algebra. Define the reduced Grothendieck ring _e(A) to be the quotient of the Grothendieck ring (A), by an additional family of relations: [M] = [M']+[M”] if there is a short exact sequence of A-modules 0→ M'→ M→ M”→ 0. The reduced Grothendieck ring _e(A) inherits a ring structure from that of (A), since the tensor product over the ground field k is an exact functor, and the obvious projection (A)→_e(A) is a ring homomorphism. However, _e(A) is a much less interesting object than (A), since the class of any module is equal to the sum of all simple modules in a composition series. Thus _e(A) is isomorphic to the free commutative algebra generated by the simple A-modules. For a finite poset with object set V, define (k)→^V by [M](v) _k(M(v)), for any M∈ k and for [X] = [M]-[N] with M,N∈ k, let [X] [M]-[N]. For [X]∈(k) we refer to [X] as the dimension vector of [X]. Since the addition operation in (k) is induced by direct sum, the function is clearly a group homomorphism, whose kernel is the subgroup of all virtual modules [X] = [M]-[N] such that _kM(v) = _kN(v) for every v∈. Clearly for M∈ k, [M] = 0 if and only if M=0. Let be a small category. An element [X]∈(k) is said to be locally constant, if it can be represented as a difference of the isomorphism classes of two locally constant modules, and similarly for virtually trivial. Recall that if is a category with finite set of objects and k is a field, then the functor category k is equivalent to the category of modules over the category algebra k (see <cit.>). If F→ is a functor, then one obtains a homomorphism kF k→ k, which is a ring homomorphism if and only if F is injective on object sets <cit.>, and otherwise is only a homomorphism of vector spaces. In particular F^* preserves tensor products over k, and hence induces a ring homomorphism F^*(k) →(k). §.§ Functor categories and Kan extensions Kan extensions will be used to define the divergence operators in Section <ref>, and we recall here the basic notions. Let , be small categories, and let be a category that is bicomplete (i.e. all small limits and colimits exist in ). 
Let ^ and ^ denote the categories of functors from and to , respectively, with morphisms given by natural transformations. If F→ is a functor, then the restriction F^*^→^ admits left and right adjoints, given by the left and right Kan extensions <cit.>, L_F, R_F^→^. For the sake of completeness, we briefly describe these constructions. The reader is referred to <cit.> for details. For any object d∈, let F↓ d denote the overcategory of d with respect to F, whose objects are pairs (c,φ), where c is an object in and φ F(c)→ d is a morphism in . A morphism (c,φ)→(c',φ') in F↓ d is a morphism β c→ c', such that φ'∘ F(β) =φ. There is a forgetful functor # F↓ d →, sending an object (c,φ) to c. By analogy one defines d↓ F, the undercategory of d with respect to F, and similarly a forgetful functor to . For M∈^, left and right Kan extensions of M along F are defined by L_F(M)(d) _F↓ dM_#, and R_F(M)(d) lim_d↓ F M_#, where in both cases M_# denotes the composition of M with the respective forgetful functor. Thus if N∈^ and M∈^, we have the adjointness relations: _^(L_F(N), M)≅_^(N, F^*(M)), and _^(F^*(M), N)≅_^(M, R_F(N)). § A CATEGORICAL SETTING FOR THE DISCRETE GRADIENT AND DIVERGENCE In this section we define the gradient and the divergence on digraphs in a very general setting. With the right setup, the standard notions of these concepts then occur as special cases. In sections <ref> and <ref> we will specialise to the case of persistence modules, but these constructions can be carried out in a much more general context. We start by recalling the graph theoretic definitions of gradient and divergence. Let = (V,E) be a digraph. Let A be an abelian group, and let f V→ A and F E→ A be arbitrary functions. Then one defines the gradient ∇(f) E→ A and ∇^*(F) V→ A by ∇(f)(v→ w) f(w) - f(v), and ∇^*(F)(v) = ∑_u→ v F(u→ v) - ∑_v→ w F(v→ w). The reader is referred to <cit.> for a comprehensive discussion of the gradient and the divergence on digraphs. We now describe a categorical setup for these concepts. A multi-digraph is a more general analog of what is meant by a digraph in this article. The difference is that in a multi-digraph one allows the set of directed edges between any pair of vertices to have arbitrary cardinality, while the concept we use of a digraph allows at most one edge in each direction. Any small category is a quotient category of the path category of a multi-digraph whose vertex set is in 1-1 correspondence with the set of objects in and whose directed edges form a subset of its morphisms, where the equivalence relations are given by the commutativity relations in (See <cit.> for more detail). A canonical example of this type is given by considering the multi-digraph of all morphisms in a category . Then is reconstructed from that multi-digraph by declaring two paths equivalent if the corresponding morphisms in coincide. Clearly, restricting to digraph rather than multi-digraph puts some restrictions on the type of categories that appear as quotient categories of the corresponding path categories. For instance, a poset can be thought of as a quotient of the path category of its Hasse diagram _ by relations imposed by the requirement that between any two objects there is at most one morphism. However, quotient categories of path categories of digraphs form a much more general family of categories than just posets. All of those, as we shall see below, share some important properties. 
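Before formalising generating digraphs and line categories, it may be useful to record a small numerical sketch of the classical gradient and divergence recalled at the beginning of this section; the digraph and the functions below are arbitrary illustrative choices, and the script checks the adjointness relation ⟨∇^*(F), g⟩_V = ⟨ F, ∇(g)⟩_E discussed earlier.

import numpy as np

V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]

def grad(g):
    # (grad g)(u -> v) = g(v) - g(u): an edge function built from a vertex function.
    return {(u, v): g[v] - g[u] for (u, v) in E}

def div(F):
    # (div F)(v) = sum of F over incoming edges minus sum of F over outgoing edges.
    return {v: sum(F[e] for e in E if e[1] == v) - sum(F[e] for e in E if e[0] == v) for v in V}

rng = np.random.default_rng(0)
g = {v: rng.standard_normal() for v in V}
F = {e: rng.standard_normal() for e in E}

lhs = sum(div(F)[v] * g[v] for v in V)    # <div F, g>_V
rhs = sum(F[e] * grad(g)[e] for e in E)   # <F, grad g>_E
assert np.isclose(lhs, rhs)

In the categorical formulation developed below, grad corresponds to ∇ = ϕ^* - β^*, and div corresponds to the left and right divergence operators, which coincide with each other and with the classical divergence in this weighted digraph setting.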
If is a category obtained as a quotient category of the path category P() of some digraph , we say that is a generating digraph for , or that is generated by . Notice that the choice of a generating digraph for a small category is generally not unique. Next we associate with a small category and a generating digraph another small category that is generated by the associated line digraph . The category depends on the choice of the generating digraph, but for any such choice one has two functors → that will allow us to define a gradient and two divergence operators. While we will later restrict to the case where is a poset with a canonical generating digraph given by its Hasse diagram, here we study these constructions in greater generality with a view to other possible applications. Let be a small category generated by a digraph , and let denote its associated line digraph. Let P() and P() denote the associated path categories. Define a category with the same object set as P(). Assume that α ((u,t)→(t,a_1)→⋯→ (a_m,s)→ (s,v)), and γ (( u,t)→(t,b_1)→⋯→ (b_k, s)→ (s,v)) are two morphisms in P(). For each pair of objects (u,t) and (s,v), define an equivalence relation on the morphism set P()((u,t), (s,v)), to be the transitive closure of the relation α∼γ if the compositions t→ a_1→⋯→ a_m→ s, and t → b_1→⋯→ b_k→ s coincide in . Define ((u,t), (s,v)) to be the set of equivalence classes of this relation. Clearly the assignment ↦ does not define an endofunctor on the category of small categories, since it depends on the choice of a generating digraph for . We refer to as the line category associated to with respect to the generating digraph . Let ϕ, β→ denote the front and back graph morphisms, defined by ϕ(u,v) = v and β(u,v) = u, for any edge (u,v) in , with the action on edges in given by ϕ(u,v,w) = (v,w) and β(u,v,w) = (u,v) for any edge (u,v,w) in . The maps ϕ and β induce functors ϕ, β P()→ P(). The following lemma shows that the front and back functors induce the corresponding functors on the line category. Let be a small category generated by a digraph =(V,E). Let denote the line category associated to with respect to . Then the front and back functors ϕ and β on path categories induce functors ϕ, β→. The category is a quotient category of the path category P() by the relations among morphisms in . Thus it suffices to show that the square of functors P()[r]^ψ[d]^π P()[d]^π [r]^ψ̅ where ψ is either ϕ or β, commutes. We prove the statement for ϕ. The proof for β is essentially the same. The objects in P() are the edges (u,v) in , and ϕ(u,v)=v. Define ϕ̅(u,v) = v. The square commutes on objects by definition. Let α ((u,t)→(t,a_1)→⋯→ (a_m,s)→ (s,v)), and γ (( u,t)→(t,b_1)→⋯→ (b_k, s)→ (s,v)) be two morphisms in P(), such that π(α)=π(γ). Then, by definition, t→ a_1→⋯→ a_m→ s, and t → b_1→⋯→ b_k→ s coincide in . Hence the compositions t→ a_1→⋯ s→ v, and t→ b_1→⋯ s→ v also coincide in . But these are exactly the projections of ϕ(α) and ϕ(γ) into . This shows that ϕ̅ is well defined and that the square commutes on morphisms, and the proof is complete. A family of examples that are particularly relevant to this article arises from considering posets and their associated Hasse diagrams. Let be a poset, let _ be its Hasse diagram and let _ be the line digraph associated to _. Then _ is the Hasse diagram of a poset that is unique up to isomorphism. 
To see this, note that any finite poset can be reconstructed uniquely up to isomorphism from its associated Hasse diagram, which by definition is an acyclic and transitively reduced digraph. Thus it suffices to show that if = (V,E) is a finite transitively reduced acyclic digraph, then is also acyclic and transitively reduced. It is immediate from the definition that the existence of a cycle in implies that itself contains a cycle. Let (u,v,w) be an edge in , and assume that it can be decomposed as (u,v,z)·(v,z,w). Then contains the composable sequence u(u,v) v (v,z) z (z,w) w Hence either (u,v) is not composable with (z,w) or v=z. The first option contradicts the assumption that (u,v,w) is an edge in and the second option contradicts the assumption that (v,z,w) is an edge in , or alternatively acyclicity of due to the self-loop (v,v). Thus (u,v,w) is indecomposable, and so is transitively reduced. If is a poset, then the poset associated to _ has as objects the edges (u,v) in _P and (u,v) ≤ (z,w) if there is a sequence of directed edges, for n≥ 2, (u,v)= (a_0,a_1)(a_0,a_1,a_2) (a_1,a_2) (a_1,a_2,a_3) (a_2,a_3)→⋯(a_n-2,a_n-1,a_n) (a_n-1,a_n) = (z,w). Let be a category, and let be a small category, with a generating digraph and an associated line digraph . Let be the associated line category with respect to . The functors ϕ, β→ induce restriction functors ϕ^*, β^*^→^, where ^ and ^ are the functor categories from and to , respectively. We are now ready to define the gradient. Fix a small category and an associated line category with respect to some generating digraph . Let be an additive category. Let (^) denote the Grothendieck group of isomorphism classes of functors →, with the operation [F] + [G] = [F⊔ G], where (F⊔ G)(c) = F(c)⊔ G(c), and where ⊔ denote the coproduct in . Define the gradient ∇(^) →(^) by ∇[F] = [ϕ^*(F)] - [β^*(F)] for each functor F∈^. Extend ∇ by additivity to the whole Grothendieck group. Next we show that the standard definition of a gradient for a digraph with vertex and edge weight functions taking values in the group of integers is a particular case of our setup. Let =(V,E) be a digraph with line digraph . Then ϕ and β induce functors ϕ^*, β^*^P()→^P(). Consider V and E as discrete categories, and let V→ P() and E→ P() be the obvious inclusions. Restriction of ϕ^* and β^* gives functors ϕ^*, β^*^V→^E. Now, assume that =, the category of vector spaces over a field k, with the coproduct ⊔ given by the direct sum. Since vector spaces are determined up to isomorphism by their dimension, taking Grothendieck groups, we have (^V) = ^V≅⊕_v∈ V, and (^E) = ^E≅⊕_e∈ E. Thus the gradient is given by ∇[f](u,v) = _k(f(v)) - _k(f(u)), where [f]∈(^V). Let φ V→ be an arbitrary function that takes non-negative values. Considering V again as a discrete category, let f V→ be the functor defined by f(v) = k^φ(v). Then ∇[f](u,v) = ∇(φ)(u,v), where the left hand side is the gradient of the element [f]∈(^V), and the right hand side is the graph theoretic definition for the gradient of the function φ, as in (<ref>). Next we define the divergence operators in our context. For this we require that the category is additive and bicomplete, namely that all small limits and colimits exist in . In that case the functors ϕ^* and β^* have left and right adjoints given by the left and right Kan extensions. Notice that because of pre-additivity of finite products and coproducts coincide in . Fix a small category and an associated line category with respect to some generating digraph . 
Let be a small bicomplete additive category. Define the left divergence and the right divergence (with respect to ) ∇^*, ∇_*(^) →(^) by ∇^*[T] = [L_ϕ(T)]-[L_β(T)], and ∇_*[T] = [R_ϕ(T)]-[R_β(T)] for each functor T→. Extend by additivity to the whole Grothendieck group. Since limits commute with limits and colimits with colimits, and since finite products and coproducts coincide in an additive category, both operators are clearly well defined group homomorphisms. We now show how the standard divergence operator for digraphs, with vertex and edge weight functions taking values in the integers, is a particular example of the general setup of Definition <ref>. Let =(V,E) be a digraph with line digraph . Let be a category satisfying the requirement of Definition <ref>. Consider V and E as discrete subcategories of P() and P() respectively, as in Example <ref>. Restriction of ϕ^* and β^* gives functors ϕ^*, β^*^V→^E. Since V and E are discrete, we have for f∈^E L_ϕ(f)(v) _ϕ↓ v f = ∐_(u,v)∈ Ef(u,v), and L_β(f)(v) _β↓ v f = ∐_(v,w)∈ Ef(v,w). Similarly R_ϕ(f)(v) lim_v↓ϕ f = ∏_(u,v)∈ Ef(u,v), and R_β(f)(v) lim_v↓β f = ∏_(v,w)∈ Ef(v,w). Restrict attention to the case where = and is finite (this is essential because is only finitely bicomplete). Then, finite products (cartesian products) and finite coproducts (direct sums) are isomorphic. Hence, the right and left Kan extensions coincide. As in Example <ref>, we have (^V) = ^V≅⊕_v∈ V, and (^E) = ^E≅⊕_e∈ E. Hence ∇^*[f](v) = ∇_*[f](v) = ∑_(u,v)∈ E_k f(u,v) -∑_(v,w)∈ E_k f(v,w) for f∈^E. Let γ E→ be an arbitrary function that takes non-negative values. Considering E as a discrete category, let g E→ denote the functor defined by g(e) = k^γ(e). Then ∇^*[g] = ∇_*[g] = ∇^*(γ) where the left and centre in the equation are the left and right divergence of [g] in (^V), and the right hand side is the graph theoretic divergence of γ, as in (<ref>). § BILINEAR PAIRINGS ON (^) In this section we define two bilinear forms on (^), where = and is a finite acyclic category (i.e. a category generated by a finite acyclic digraph). Thus we use the notation (k) for (^). Let be a finite acyclic digraph that generates a category . Define the Hom pairing ⟨ -,-⟩_(k)×(k)→ by ⟨ [M],[N]⟩__k(_k(M,N)), where M and N represent their isomorphism classes in (k). Extend the definition to the full Grothendieck group by additivity. Notice that acyclicity and finiteness guarantees that the category is finite, and so the Hom objects are guaranteed to be finite dimensional. The Hom pairing is clearly bilinear and can be defined by analogy also on (k). Since the left and right Kan extensions are left and right adjoints to the restrictions ϕ^* and β^* we have ⟨∇^*[X],[Y]⟩_ = ⟨ [X], ∇[Y]⟩_, and ⟨∇[X],[Y]⟩_ = ⟨ [X], ∇_*[Y]⟩_ for [X]∈(k) and [Y]∈(k). Notice that the Hom pairing is not generally symmetric; however, see Example <ref> below. Next we define another useful pairing on (k). Let be a finite acyclic digraph that generates a category . Define the Euler pairing χ_(k)×(k)→ by χ_([M],[N]) χ(_k^*(M,N)) = ∑_n≥ 0(-1)^n_k(^n_k(M,N)), where M and N represent their isomorphism classes in (k), and extend to the Grothendieck group by additivity. To show that this is well defined, we must argue that the ^n groups vanish for n sufficiently large. This is implied by Lemma <ref> below. Let be a finite acyclic digraph that generates a category . Then the category algebra k has finite projective dimension. Since we assume that is finite and acyclic, composition length in is bounded. 
For each object x∈, let F_x∈ k denote the projective module defined by F_x = k(x,-). Let M be a finitely generated k-module. For each object x∈, choose a basis {v_1, …, v_n_x} for M(x). Let q_x,i F_x→ M denote the transformation determined by taking 1_x∈(x,x), to v_x,i. Thus one obtains a surjection of k-modules q_M⊕_x∈F_x^d_x→ M, where d_x = _k(M(x)) and F_x^d_x = ⊕_d_xF_x. Let M_1 denote the (q_M). If x is a minimal object in the sense that (-,x)=∅, then M_1(x)=0. Thus M_1 vanishes on all minimal objects in . Notice that if d_x=0, then no copy of F_x is included in the cover. Next, construct a projective cover of M_1 in a similar fashion, avoiding all minimal objects in , and all those of graph distance 1 from a minimal object, for which M_1 vanishes. Let M_2 be the kernel of this cover. Then M_2 vanishes of all minimal objects and all objects of graph distance 1 from a minimal object. By induction on the length of the maximal chains in , we obtain a finite projective resolution for M of length bounded above by the length of a maximal chain in . The pairing χ_ is clearly bilinear, but in general it is neither symmetric, nor is it true that χ_(∇^*[X], [Y]) = χ_([X], ∇[Y]), and χ_([X], ∇_*[Y]) = χ_(∇[X], [Y]). We now consider a special case where the generating digraph for is a directed tree . In that case the path algebra k of , considered as an acyclic quiver, and the category algebra k coincide. Path algebras over acyclic quivers are hereditary, namely submodules of projective modules are projective. Hence for any M, N∈ k, one has _k^i(M,N)=0 for i>1. Thus in this case χ_([M],[N]) = _k _k(M,N) - _k ^1_k(M,N). In particular, if is generated by a finite directed tree, then the Euler pairing is rather easy to compute, as it depends only on the dimension of point modules. To make this statement precise, we need the following definition. Let = (V, E) be a quiver. Let α, β∈^V be integer valued functions on V. Define a function χ_^V×^V→ by χ_(α, β) ∑_x∈ Vα(x)β(x) - ∑_e∈ Eα(s(e))β(t(e)). The function χ_ is called the Euler form for . [<cit.>] Let be a category generated by a finite tree . Then for any M, N ∈ k χ_([M],[N]) = χ_([M], [N]). Notice that a category generated by a tree is in fact a poset. Also, the lemma shows that symmetry almost always fails, even under rather favourable circumstances, because the second term in the definition of the Euler form is not symmetric. Looking back into graph theory, one considers the vertices V and the edges E as discrete categories. The next example shows that in that case Hom pairing and the Euler pairing coincide. In the setup of Example <ref> the sets V and E are considered as discrete categories. Hence kV and kE are simply the k-vector spaces generated by V and E respectively, with trivial products, and modules over them are again just finite dimensional vector spaces associated to each vertex or edge. Thus, all modules are projective, and so the respective ^n groups vanish for positive n. Hence, the corresponding pairings now take the form χ_V([f],[g])= _k(_kV(f,g)) = ∑_v∈ V_k(f(v))_k(g(v)) = ⟨[f],[g]⟩_V, and χ_E([h],[s]) = _k(_kE(h,s)) = ∑_e∈ E_k(h(e))_k(s(e)) = ⟨[h],[s]⟩_E Notice that the pairings in this case are commutative. Also, by Example <ref> the divergence operators ∇^* and ∇_* coincide in this case. In particular, the adjointness relations (<ref>) trivially hold. The following two lemmas are standard homological algebra, which we record here for reference. Let , be small abelian categories and assume that has enough projectives. 
Let F→ be a functor with a right adjoint G→. Then the following statements are equivalent. * F sends projective objects in to projective objects in . * G is an exact functor. Dually, if has enough injectives, then the following statements are equivalent. * G sends injective objects in to injective objects in . * F is an exact functor. We prove the equivalence of (<ref>) and (<ref>). The equivalence of (<ref>) and (<ref>) follows by analogy. Let P∈ be a projective object. Then F(P)∈ is projective if and only if the functor _(F(P), -)≅_(P, G(-)) is exact. Thus, if G is exact then F(P) is projective. This shows that (<ref>) implies (<ref>). Conversely, since is assumed to have enough projectives, every object X∈ admits an epimorphism P→ X, where P is projective in . Let 0→ Bα Cβ D→ 0 be an exact sequence in . Since G is a right adjoint, it is left exact, so it suffices to show that G(C)G(β) G(D)→ 0 is exact. Let φ P→ G(D) be an epimorphism in , where P is projective. By (<ref>), F sends projective objects to projective objects. Thus _(F(P), C)β_*_(F(P), D)→ 0 is exact, and by adjointness _(P, G(C))G(β)_*_(P, G(D))→ 0 is exact. Hence there exists ψ∈_(P, G(C)) such that G(β)_*(ψ) = G(β)∘ψ = φ. Since φ is an epimorphism, so is G(β), as claimed. Let γ→ be a functor between small categories. Let be a bicomplete abelian category, and let ^ and ^ denote the respective functor categories. Let γ^*^→^ denote the restriction, and let L_γ and R_γ denote its left and right Kan extensions. Then for any M∈^ and N∈^, ^*_^(L_γ(M), N) ≅^*_^(M,γ^*N) if has enough projectives and either of the following two equivalent conditions holds. * γ^* sends injective modules to injective modules. * L_γ is an exact functor. Similarly, if has enough injectives then ^*_^(N, R_γ(M)) ≅^*_^(γ^*N,M) if either of the following equivalent conditions holds. * γ^* sends projective modules to projective modules. * R_γ is an exact functor. We prove the first statement. The second follows by analogy. Let N→ I^* be an injective resolution of N in ^. Since γ^* is exact and sends injective modules to injective modules, γ^*N→γ^*(I^*) is an injective resolution of γ^*N in ^. Then ^*_^(M, γ^*N) = H^*(_^(M, γ^*(I^*)) ≅ H^*(_^(L_γ (M), I^*)) = _^^*(L_γ (M), N), as claimed. The equivalence of Conditions (<ref>) and (<ref>) follows from Lemma <ref>. Let be a finite poset and assume that _ is a tree. Let be the associated line poset (i.e., the line category associated to ), and let ϕ, β→ be the front and back functors. Then ϕ^* k→ k sends injective modules to injective modules and β^* k→ k sends projective modules to projective modules. We prove that ϕ^* sends injective modules to injective modules. The corresponding statement for β^* is similar. By <cit.> every indecomposable injective k-module has the form G_v, for some v∈, where for any u∈, G_v(u) k_(u, v) = k u≤ v 0 otherwise with G_v(x)→ G_v(y) the identity morphism, for each relation x ≤ y in _≤ v. Thus to prove the lemma, we must show that for any module of the form G_v∈ k, the corresponding module ϕ^* G_v is a finite sum of indecomposable injective modules in k. Let (u,w)∈ be any object. Then ϕ^*G_v(u,w) = G_v(w) = k w≤ v 0 otherwise = k (u,w) ≤ (a,v) 0 otherwise where a<v is any indecomposable relation in . Since is generated by a tree, for each such relation the sub-posets _≤(a,v) are pairwise disjoint. Thus ϕ^* G_v ≅⊕_(a,v)∈ G_(a,v). This completes the proof. Let be a finite poset and assume that _ is a tree. 
By Lemmas <ref> and <ref> one has for N∈ k and M∈ k ^*_k(L_ϕ(M), N) ≅^*_k(M,ϕ^*N), and ^*_k(N, R_β(M)) ≅^*_k(β^*N,M). In general ϕ^* does not send projective k-modules to projective modules, and β^* does not send injective k-modules to injective modules. See section <ref> for further discussion. § THE GRADIENT OF MODULES OVER POSETS In this section we specialise the idea of a gradient, as defined in Section <ref>, to persistence modules. Let be a finite poset and let _ denote its Hasse diagram. Thus _ is a generating digraph for in the sense of Definition <ref>. Let _P be the associated line digraph, which by Example <ref> is the Hasse diagram of a poset . Let k be a fixed field of coefficients. Throughout we consider k- and k-modules alternately as modules over the category algebras, or as functors from the respective categories to k-vector spaces. Let ϕ, β→ be the front and back maps, and let ϕ^*, β^*(k)→(k) be the corresponding ring homomorphisms. The following is a particular case of Definition <ref>. For any finite poset , define the gradient ∇(k)→(k) by ∇ϕ^* - β^*. Figure <ref> illustrates some small posets and the front and back modules in Definition <ref>. We next study some basic properties of the gradient. Let be a finite poset. Then the gradient operator ∇=∇_(k)→(k) satisfies the following properties: * ∇ is a well defined group homomorphism. * If [X]∈(k) is locally constant then ∇[X]=0. * ∇ satisfies a Leibniz type rule, i.e. for all [X], [Y]∈(k), the identity ∇([X]· [Y]) = ∇[X]·ϕ^*[Y] + β^*[X]·∇[Y] holds in (k). Furthermore, ∇ is natural with respect to restrictions to sub-posets, namely, if ι⊆ is a sub-poset, then ι^*∘∇_= ∇_∘ι^*. Since ϕ^*, β^*(k)→(k) are ring homomorphisms (see Remark <ref>), they are in particular homomorphisms of abelian groups, and hence so is their difference. This proves Part <ref>. Let M∈ k be locally constant. Define a natural isomorphism μβ^*M→ϕ^*M, by sending an object (x,y) in to the morphism μ_(x,y)β^*M(x,y) = M(x)M(x,y) M(y)=ϕ^*M(x,y). Naturality is clear. It follows that ∇[M]=0. By definition, an element of [X]∈(k) is locally constant if it is represented as a difference of locally constant modules. By additivity ∇ vanishes on any locally constant virtual module in (k). This proves Part <ref>. Finally, since ϕ^* and β^* are ring homomorphisms, one has for M, N∈ k, ∇[M]·ϕ^*[N] +β^*[M]·∇[N] = (ϕ^*[M]-β^*[M])·ϕ^*[N] + β^*[M]·(ϕ^*[N] - β^*[N]) = ϕ^*[M]·ϕ^*[N] - β^*[M]·β^*[N] = ∇([M]·[N]) as claimed in Part <ref>. For virtual modules [X]=[M]-[N], extend the statement by linearity of ϕ^*, β^* and ∇. The last statement follows from commutativity of the square k[r]^ι^*[d]^α^* k[d]^α^* k[r]^ι^* k where α is either ϕ or β. An obvious question is what can be said about a module with a vanishing gradient. To answer this question we need a preparatory lemma. Let be a line connected finite poset. Then its Hasse diagram _ has a line connected maximal tree. The Hasse diagram _ is a connected digraph (a line connected digraph is in particular connected), and by definition it is acyclic and transitively reduced. Let _ denote its line digraph, as usual. If _ is not a tree, then there are two objects x< y ∈ and at least two distinct paths in _ from x to y. We proceed by showing that one can disconnect one of the paths from x to y in _ with the resulting digraph remaining line connected. The claim then follows by induction on the number of multiple paths between pairs of points in _. 
Let x<y∈ be two distinct vertices, and assume that there are more than one directed path from x to y. Denote the collection of all directed paths from x to y by l_1,…, l_n, for n>1. We may assume that in all l_j, except possibly one of them (since double edges are not allowed in a poset), there is a vertex x<a_i<y, such that either x<a_i or a_i<y is indecomposable in . We may assume without loss of generality that the l_j have no common vertices except x and y. We consider four possibilities. (1) x is not minimal and y is maximal: Then there exists some relation x_0<x in , and removing a single edge x<a_j from all l_j except one of them, disconnects all the l_j except the one that was not modified. Since this leaves exactly one path from x to y, the resulting digraph is still line connected. (2) y is not maximal and x is minimal: Then there exists a relation y<y_0 in , and removing a single edge a_j<y from each l_j except one of them, disconnects all the l_j except the one that was not modified. Again the resulting digraph is line connected. (3) x is not minimal and y is not maximal: Then remove a single indecomposable edge from each l_j except one of them, as in (1) and (2). In that case as well the resulting digraph is line connected. (4) x is minimal and y is maximal: In that case the associated line digraph splits into n connected components, each containing the line digraph of l_j for a unique 1≤ j≤ n. This contradicts the assumption that is line connected. The proof is now complete by inductively performing this procedure for any set of multiple paths between pairs of vertices in _. Let be a finite line connected poset, and let be a line connected sub-tree of _. Let _⊆ be the sub-poset generated by , and let ι_ denote the inclusion functor. Functoriality of the line digraph construction gives an inclusion ι__→. Furthermore, one easily verifies commutativity of the square _[r]^ι_[d]^α [d]^α _[r]^ι_ where α is either the front functor ϕ or the back functor β. This square induces homomorphisms on the respective Grothendieck rings, and hence commutative square of group homomorphisms (k)[r]^ι_^*[d]^∇ (k_)[d]^∇ (k)[r]^ι_^* (k_) Let be a finite line connected poset, and let be a line connected sub-tree of _. We say that an element [X]∈(k) has a vanishing gradient on if ∇(ι_^*[X]) = ι_^*(∇[X])= 0. Notice that for a module M∈ k, ι_^*[M] is just the restriction of M to the sub-poset _, which justifies Definition <ref>. Notice also that if ∇[X]=0 in (k), then ∇(ι^*_[X])=0 as well, namely the restriction of X to _ has a vanishing gradient on . The following theorem is stated with respect to a maximal tree in _, but one may easily obtain a restricted analog for any sub-tree of the Hasse diagram. Let be a finite line connected poset, and let be a line connected maximal tree for its Hasse diagram _. Let _⊆ denote the sub-poset generated by . Let M∈ k be a module with a vanishing gradient on . Let M_ denote the restriction of M to _, so ι_^*[M] = [M_]∈(k_). Then the following statements hold. * For any pair of objects u,v∈, there is an isomorphism α_u,v M(u)→ M(v), such that α_u,u = 1_M(u) and α_v,w∘α_u,v = α_u,w. * For every pair of indecomposable morphisms u≤ w and s≤ t in _, α_w,t∘ M(u≤ w) = M(s≤ t)∘α_u,s. * M_ is locally constant if and only if M(u≤ v) is an isomorphism for some indecomposable relation u≤ v in _. 
Furthermore, if M∈ k is a module such that for any indecomposable relation u≤ v in _ there exists an isomorphism α_u,v M(u)→ M(v) that satisfy Statements (<ref>) and (<ref>), then M has a vanishing gradient on . By hypothesis ∇[M_]=0. Hence there is a natural isomorphism αβ^*M_ϕ^*M_. If (u,v) is an edge in , and hence a vertex in _, then we have α_u,vβ^* M_(u,v) = M(u)→ M(v)=ϕ^* M_(u,v), that is, α_u,v is the evaluation of the natural isomorphism α on the vertex (u,v). Define α_v,uα_u,v^-1. Since _ is a poset and u⪇ v, the opposite inequality does not hold, so (v,u) is not a vertex in _. Thus, α_v,u is well defined. Notice that the vertices of _ coincide with those of . Let v_0 be any minimal object in _. Then α_v_0,u is defined and is an isomorphism for each indecomposable relation v_0<u and, by induction on the length of a chain starting at v_0, α_u,v is defined and is an isomorphism for any u,v∈__≥ v_0. Since _ has only finitely many minimal objects, it follows that α_u,v is an isomorphism on any sub-poset of the form __≥ v_0 where u_0 is minimal. Let u_0, u_0' be two distinct minimal objects in _. Since _ is line connected by construction, it is in particular connected. Hence there are minimal objects u_0=v_1, v_2,…, v_k=u_0', such that for each 1≤ i≤ k-1, the intersection of sub-posets __≥ v_i∩__≥ v_i+1 is nonempty. Let a_i∈__≥ v_i∩__≥ v_i+1 be any object. Thus one has an isomorphism α_a_i,v_i+1∘α_v_i,a_i = α_v_i+1,a_i^-1∘α_v_i,a_i M(v_i)→ M(v_i+1). Since between any two objects in _ there is by assumption at most one path in , compositions of isomorphisms of the form α_x,y, where (x,y) is an object in _, and the inverses of such isomorphisms define α_u,v for any pair of objects u,v∈_. Since and _ coincide on objects, this proves Part (<ref>). Let (u_1,u_2,u_3) be an edge in _. Then we have a commutative square M(u_1) @=[r] β^*M_(u_1,u_2)[d]_≅^α_u_1,u_2[rr]^M_(u_1≤ u_2) β^*M_(u_2,u_3)[d]^α_u_2,u_3_≅@=[r] M(u_2) M(u_2) @=[r] ϕ^*M_(u_1,u_2) [rr]^M_(u_2≤ u_3) ϕ^*M_(u_2,u_3) @=[r] M(u_3) Thus M(u_2≤ u_3) = α_u_2,u_3∘ M(u_1≤ u_2)∘α_u_2,u_1. By induction, if u_1≤ u_2≤⋯≤ u_n is any chain of indecomposable relations in _, then M(u_n-1≤ u_n) = α_u_2,u_n∘ M(u_1≤ u_2)∘α_u_n-1,u_1. Let u≤ v and a≤ b be two indecomposable relations in _. Let u_0, a_0 be two minimal objects in _, such that u_0≤ u and a_0≤ a. Let u_0≤ u_1≤⋯≤ u_n≤ u≤ v, and a_0≤ a_1≤⋯≤ a_k≤ a≤ b be chains of indecomposable relations in _. Then M(u≤ v) = α_u_1,v∘ M(u_0≤ u_1)∘α_u,u_0, and M(a≤ b) = α_a_1,b∘ M(a_0≤ a_1)∘α_a,a_0. Thus it suffices to prove that M(u_0≤ u_1) = α_a_1,u_1∘ M(a_0≤ a_1)∘α_u_0,a_0. Notice that (u_0,u_1) and (a_0,a_1) are minimal objects in _. Since _ is connected, there is a sequence of minimal objects in _ (u_0,u_1) = (x_1,y_1), (x_2,y_2),…, (x_r,y_r)=(a_0,a_1) such that __≥ (x_i,y_i)∩__≥ (x_i+1,y_i+1) is nonempty for each 1≤ i≤ r-1. Let (s_i,t_i) be an object in the intersection, so that (s_i,t_i)≥ (x_i,y_i), (x_i+1,y_i+1). It follows that α_y_i,t_i∘ M(x_i≤ y_i)∘α_s_i,x_i= M(s_i≤ t_i) = α_y_i+1,t_i∘ M(x_i+1≤ y_i+1)∘α_s_i,x_i+1. Thus M(x_i+1≤ y_i+1) = α_t_i,y_i+1α_y_i,t_i∘ M(x_i≤ y_i)∘α_s_i,x_iα_x_i+1,s_i = α_y_i,y_i+1∘ M(x_i≤ y_i)∘α_x_i+1,x_i for each 1≤ i≤ r-1. Part (<ref>) follows by induction on r. Part (<ref>) follows at once from (<ref>). Finally, assume that M∈ k satisfies Statements (<ref>) and (<ref>). Let α be a family of morphisms for which these statements hold. 
By (<ref>), for each object (u,v)∈_ we have an isomorphism α_u,v M(u) = β^* M (u,v)→ϕ^* M(u,v) = M(v), If (u,v)≤ (x,y) in _, then by Condition (<ref>) α_v,y∘ M(u<v) = M(x< y)∘α_u,x. This shows that α restricts to a natural isomorphism β^*M→ϕ^*M, and hence [M] has a vanishing gradient on . Theorem <ref> shows that the vanishing of the gradient on a sub-poset generated by a tree gives a lot of information about the module in question. It is instructive however to note that if the poset in question consists of a single pair of comparable objects, then any module on such poset, where the point modules are isomorphic, will have a vanishing gradient. This of course does not contradict the conclusion of the theorem. We leave the details to the reader. With the notation of Theorem <ref>, define M, M∈ k as follows. * M(u) (M(u< v)) u<v indecomposable, u not maximal 0 u maximal. Define M(x<y) 0 for any non-identity relation in . * M(u)(M(w < u)) w<u, u M(u)/(M(u<v)) u<v, u Define M(u<v) to be the restriction of M(u<v) to M(u). With the notation and hypotheses of Theorem <ref>, the modules M and M are well defined k_-modules. Furthermore, M(u) ≅ M(v) for all non-maximal u, v∈, and M(u) ≅ M(v) for all u, v∈. Let u<v and u<v' be indecomposable morphisms in . Then by Theorem <ref>(<ref>), α_v,v'∘ M(u<v) = M(u<v'). Since α_v,v' is an isomorphism, (M(u<v)) = (M(u<v')). This shows that M is well defined. Let w<u and w'<u be two indecomposable morphisms in . Then, again by <ref>(<ref>), M(w<u) = M(w'<u)∘α_w,w'. Thus M is well defined on non-minimal objects. If u is minimal, then M(u) is also well defined since (M(u<v)) does not depend on the choice of v. The definition on morphisms is clear. Let u, s∈ be non-maximal, and let u<w and s<t be indecomposable morphisms in . Let x∈(M(u<w)), and let y=α_u,s(x). Then by Theorem <ref>(<ref>), 0=M(u<w)(x) = α_t,w∘ M(s<t)∘α_u,s(x) = α_t,w∘ M(s<t)(y). Consider the following diagram with exact rows. Equation <ref> implies that the left square commutes, and hence that so does the right square. 0[r] M(u) = (M(u < w)) [r][d] M(u)[r][d]^α_u,s_≅ M(u) / (M(u < w)) ≃ M(w) [r][d] 0 0[r] M (s) =(M(s < t)) [r] M(s) [r] M(s)/ (M(s < t)) ≃ M(t) [r] 0 for all u, s∈. Hence α_u,s restricted to M(u) is a monomorphism and the induced map on M(w) is an epimorphism. The same argument using α_s,u = α_u,s^-1 and finite dimensionality of all point modules completes the proof that M (u)≅ M(s) for all non-maximal u, s∈. Notice also that at the same time we have shown that M(w)≅ M(t) for all w, t∈ that are not minimal. Finally, notice that if u is minimal, then M(u) M(u)/(M(u<v)) = (M(u<v)) M(v). Thus M (u) ≅ M (v) for all u, v∈. Assume the notation and hypotheses of Theorem <ref>. Fix a non-maximal object x_0∈. Then there is an endomorphism γ_0 ∈(M(x_0)), such that for any y<y' in _, M(y<y') = α_x_0,y'∘γ_0^k∘α_y,x_0, where k is the graph distance from y to y' in . Let αβ^*M_→ϕ^*M_ be a natural isomorphism. Let x_0<x_1 be an indecomposable relation in _, and set γ_0 = α_x_1,x_0∘ M(x_0<x_1)∈(M(x_0)). Let y<y' in _ be any pair of comparable objects, and let y=z_1<z_2<⋯ <z_k = y' be a decomposition of the relation y<y' into a sequence of indecomposable relations in _. By Theorem <ref>(<ref>), for each 1≤ i ≤ k-1, M(z_i,z_i+1) = α_x_1,z_i+1∘ M(x_0<x_1)∘α_z_i,x_0. 
Hence M(y<y') = M(z_k-1<z_k)∘⋯∘ M(z_1<z_2) = α_x_1,z_k∘ M(x_0<x_1)∘α_z_k-1,x_0∘α_x_1,z_k-1∘ M(x_0<x_1)∘α_z_k-2,x_0∘⋯∘ α_x_1,z_2∘ M(x_0<x_1)∘α_z_1,x_0 =α_x_1,z_k∘ M(x_0<x_1)∘α_x_1,x_0∘ M(x_0<x_1)∘⋯∘α_x_1,x_0∘ M(x_0<x_1)∘α_z_1,x_0 =α_x_1,y'∘α_x_0,x_1∘α_x_1,x_0∘ M(x_0<x_1)∘γ_0^k-1∘α_y,x_0 =α_x_0,y'∘γ_0^k∘α_y,x_0, as claimed. The next obvious question is what can be said about a virtual module [X] = [M] - [N] with a vanishing gradient on a tree, where M ,N∈ k. By linearity, ∇[X]=0 if and only if ϕ^*[M] - β^*[M] = ∇[M] = ∇[N] = ϕ^*[N] - β^*[N], or equivalently if and only if there is a natural isomorphism ϕ^*M ⊕β^*N ≅ϕ^*N⊕β^*M. One may expect that M and N in that case would differ by locally constant modules, namely that M⊕ C≅ N⊕ D for some locally constant modules C, D∈ k. This is not true in general as Example <ref> demonstrates. Let be any finite poset. Consider the constant module k∈ k that assigns k to each object and the identity to each morphism. On the other hand let k_0∈ k denote the module that assigns k to each object and the zero homomorphism to each morphism. It is immediate that ∇[k] =∇[k_0] = 0. Let [X]∈(k) be any element and set [Y] = [X] + [k] and [Y_0] = [X] + [k_0]. Then ∇[Y] = ∇[Y_0], but the difference [Y]-[Y_0] = [k]-[k_0] is not locally constant. Next we consider the relationship between the gradient and the rank invariant. Let M∈ k be a persistence module. Define the rank invariant [M]()→ by [M](x≤ y) M(x≤ y) for each relation x≤ y in . For a virtual module [X] = [M]-[N] in (k), where M, N∈ k, define the rank invariant [X]()→ by [X][M] - [N]. Notice that [M](x≤ x) = _k M(x) for each object x∈. Notice also that the rank invariant can be regarded as a group homomorphism (k)→^(), where the codomain is the abelian group of all functions from () to , or in other words the free abelian group generated by all morphisms in . The way we define the rank invariant here includes the rank of a module (or a virtual module) on identity morphisms, namely it incorporates the dimensions of the point modules M(v) for all objects v in . Notice also that for M∈ k, [M]∈() if and only if M=0, but as Example <ref> shows, this is not the case for virtual modules. Next, we turn to investigate virtual modules with a vanishing gradient. Let be a finite poset. Let [X]=[M]-[N]∈(k) be an element with M, N∈ k. Then * Let (u_0,u_1) < (v_0,v_1) be a pair of comparable objects in . Assume that ∇[X] = 0. Then [X](u_0<v_0) = [X](u_1<v_1). Assume in addition that is line connected, let be a line connected maximal tree for _, and let _⊆ be the sub-poset generated by . If [X] has a vanishing gradient on , then [X] has the following properties: * It is constant on all identity morphisms in . * For any pair of indecomposable relations u_0<u_1, and v_0< v_1 in _, one has [X](u_0<u_1) = [X](v_0<v_1). Since ∇[X] = 0, we have a natural isomorphism ηϕ^*M⊕β^*N≅ϕ^* N⊕β^*M (See Equation (<ref>)). Let (u_0,u_1) < (v_0,v_1) be a pair of comparable objects in . Then one has a commutative square M(u_1)⊕ N(u_0) [rrrr]^M(u_1<v_1)⊕ N(u_0<v_0)[d]^≅_η M(v_1)⊕ N(v_0)[d]^≅_η N(u_1)⊕ M(u_0) [rrrr]^N(u_1<v_1)⊕ M(u_0<v_0) N(v_1)⊕ M(v_0) It follows that M(u_0<v_0) + N(u_1<v_1) = M(u_1<v_1) + N(u_0<v_0), and hence that [X](u_0<v_0) = [X](u_1<v_1). This proves Part (<ref>). Throughout the rest of the proof assume that is line connected. Fix a line connected maximal tree for _. Since [X] has a vanishing gradient on , we have a natural isomorphism ϕ^*M_⊕β^*N_≅ϕ^*N_⊕β^*M_. Thus, for any object (x,y)∈_ one has M(x)⊕ N(y)≅ N(x)⊕ M(y). 
Hence [X](x≤ x) = _kM(x)-_kN(x) = _kM(y)-_kN(y) = [X](y≤ y). Since every x∈_ is either the source coordinate or the target coordinate in an object of _, and since _ is connected, Part (<ref>) follows. Let (u_0,u_1) be a minimal object in _. By Part (<ref>) and induction, [X](u_r<u_r+1) = [X](u_0<u_1) for any object (u_r,u_r+1) ∈_ that is the target of a directed path from (u_0, u_1) in _. Let (u_0,u_1) and (v_0,v_1) be minimal objects in _. Since _ is connected, there are minimal objects (u_0,u_1) = (x_1,y_1), (x_2,y_2),…, (x_k,y_k) = (v_0,v_1) such that for each 1≤ i ≤ k-1 the intersection of sub-posets __≥ (x_i,y_i)∩__≥ (x_i+1,y_i+1) is nonempty. Let (a_i,b_i) be any object in the intersection. By the argument above, [X](x_i<y_i) = [X](a_i<b_i) = [X](x_i+1<y_i+1) and since this holds for all i, we have [X](u_0<u_1) = [X](v_0,v_1). It follows that [X] is constant on all indecomposable relations in _, as claimed in Part (<ref>). The following example shows that the statements of Theorem <ref> are best possible in the sense that two modules with equal gradients may have different rank invariants when evaluated on morphisms of length greater than 1. Let denote the poset [2] with objects 0,1,2 and the ordinary order relation. Define M, N∈[2] as follows. M k^2(1,0)k^2(0,1)k^2, N k^2(1,0)k^2(1,0)k^2. It is immediate that ϕ^*M ≅β^*M and ϕ^*N ≅β^*N, so ∇[M]=∇[N]=0. But M(0<2)=0 while [N](0<2)=1. Above we considered the kernel and image modules associated to a module M∈ k in the case where M has a vanishing gradient on some tree ⊆_ (See Definition <ref> and Corollary <ref>). Next, we consider a kernel and a cokernel modules in k, associated to any module M∈ k. Let M∈ k be any module. Let K_M, C_M∈ k denote the modules defined on objects by K_M(x,y) (M(x,y) M(x)→ M(y)) and C_M(x,y) (M(x,y) M(x)→ M(y)) with the natural induced maps on morphisms. Then K_M and C_M are virtually trivial. For any objects (x,y) and (y,z) in one has the following diagram of vector spaces with exact rows. 0[r] K_M(x,y) [r][d]_K_M((x,y)<(y,z)) M(x)[rr]^M(x,y)[d]_M(x,y) M(y)[rr][d]^M(y,z) C_M(x,y)[d]^C_M((x,y)<(y,z))[r] 0 0[r] K_M(y,z) [r] M(y)[rr]^M(y,z) M(z)[rr] C_M(y,z)[r] 0 A straightforward diagram chase now shows that K_M((x,y)<(y,z)) and C_M((x,y)<(y,z)) are the zero homomorphism. Let be a finite line connected poset, let be a line connected maximal tree for _, and let _⊆ be the sub-poset generated by . Let [X] = [M] - [N]∈(k) be an element of vanishing gradient on , with M, N∈ k. Then there is an integer D, such that [K_M] = [K_N] + [D] and [C_M ] = [C_N] + [D] in (k_), where [D] denotes the virtually trivial module that associates k^D with every object of _. By Lemma <ref>, K_M and K_N, as well as C_M and C_N are virtually trivial, and hence their isomorphism type is determined by their values on objects by Corollary <ref>. Thus, it suffices to prove that the appropriate ranks coincide on all objects of _T. By Theorem <ref>, [X] =[M]-[N] is constant on all identity morphisms and all indecomposable morphisms x< y in _. Thus we may write _k M(x) - _k N(x) = K for all x∈, and [M](x<y) - [N](x<y) = T for all indecomposable relations x<y in , where K and T are some fixed non-negative integers. Set D = K-T. Thus for an object (x,y)∈_, _k K_M(x,y) = [M](x≤ x)-[M](x<y) = [N](x≤ x)-[N](x<y)+ K - T =_k K_N(x,y)+ D. Since both K_M and K_N are virtually trivial, it follows that [K_M] = [K_N]+[D]. This proves the statement for K_M and K_N. 
Similarly, _k C_M(x,y) = [M](y≤ y) - [M](x< y) = [N](y≤ y) - [N](x< y) + K - T = _k C_N(x,y)+ D. By the same argument as above [C_M] = [C_N] + [D], as claimed. Recall that for any ring A one has a ring homomorphism Υ_A(A)→_e(A), where _e(A) is the quotient of (A) by the relation [K]-[M]+[C], if M is an extension of C by K (See Definition <ref>). Since ϕ^* and β^* are exact functors, they induce group homomorphisms ϕ_e^*,β_e^*_e(k)→_e(k), such that Υ_k∘ϕ^* = ϕ_e^*∘Υ_k, and similarly for β^*. Thus, one may define ∇_e_e(k)→_e(k) by ∇_e ϕ^*_e-β^*_e. The following proposition shows that a significant amount of information about modules is lost upon passing to the reduced Grothendieck groups. For any M∈ k, one has ∇_e[M] = [C_M]-[K_M] in _e(k). For each M∈ k let I_M∈ k denote the image functor, i.e. I_M(u,v) (M(u<v)). Then we obtain two short exact sequences in k 0→ K_M→β^* M → I_M→ 0 and 0→ I_M→ϕ^* M → C_M→ 0. It follows that in _e(k), [β_e^*M] = [K_M] + [I_M] and [ϕ_e^* M] = [I_M] + [C_M], so ∇_e[M] [ϕ_e^* M]-[β_e^*M] = [C_M]-[K_M]. The functor K_M∈ k evaluated at an object (x,y) returns the subspace of elements that are “present" at x, but do not “survive" to y. Thus for a fixed x∈, the intersection ⋂_(x,y)∈K_M(x,y) is the subspace of M(x) of all elements that “die” at x. Similarly, the functor C_M∈ k, evaluated at an object (x,y) returns the quotient of M(y) by the image of M(x<y). For a fixed object y, consider the system of all homomorphisms M(y)→ C_M(x,y) for all (x,y)∈. The coequaliser of this system can be thought of as the space representing all elements in M(y) that are “born" at y. The coequaliser is easily seen to be the quotient of M(y) by the image of the composite ⊕_(x,y)M(x)⊕ M(x<y)⊕_(x,y)M(y)∑ M(y), where Σ is the map given by summing coordinates. Thus if is line connected and M, N∈ k are modules with equal gradients, then for any line connected maximal tree , spaces of “births" and “deaths" of M and N restricted to _ object-wise coincide. We end this section with an example of modules with non-isomorphic gradients and the same rank invariant. Let be the poset with objects ∅, a, b, c, d, m, ∞, and relations ∅<a, b, c, d < m<∞. _ ∞ m[u] a[urr]^α b[ur]_β c[ul]^γ d[ull]_δ ∅[ul][ull][ur][urr] _ (m,∞) (a,m)[ur] (b,m)[u] (c,m)[ul] (d,m)[ull] (∅, a)[u] (∅, b)[u] (∅, c)[u] (∅,d)[u] X 0 k^2[u] k[ur]^α_* k[u]^β_* k[ul]^γ_* k[ull]_δ_* 0[ul][u][ur][urr] β(X) k^2 k[ur]^α_* k[u]^β_* k[ul]^γ_* k[ull]_δ^* 0[u] 0[u] 0[u] 0[u] ϕ(X) 0 k^2[ur] k^2[u] k^2[ul] k^2[ull] k[u]^α_* k[u]^β_* k[u]_γ_* k[u]_δ_* Diagram (<ref>) shows the poset and its associated line digraph. Consider k-modules X that take the value k on a, b, c, d, the value k^2 on m, and 0 on ∅ and ∞. Furthermore, assume that for each of the morphisms α, β, γ, δ, the induced map under X is an injection, and such that the images of each pair of homomorphisms form a basis for X(m)=k^2. Clearly all such modules have exactly the same rank invariant. Let M, N∈ k be modules satisfying these requirements. Let X, Y∈ k denote the modules ϕ^*(M)⊕β^*(N) and ϕ^*(N)⊕β^*(M) respectively. Then ∇[M]=∇[N] if and only if X and Y are isomorphic. Let X_0 and Y_0 denote the restrictions of X and Y to the full sub-poset of consisting of the objects (a,m), (b,m), (c,m), (d,m) and (m, ∞). Denote by α_*, β_*,… the homomorphisms N(α), N(β),… and by α'_*, β'_*,… the homomorphisms M(α), M(β),…. 
Assume that there is an isomorphism Θ X→ Y and let Θ_0 X_0→ Y_0 be the restriction of θ: k^2 k^2⊕ k[ur]^0 ⊕α_* k^2⊕ k[u]^0 ⊕β_* k^2⊕ k[ul]^0 ⊕γ_* k^2⊕ k[ull]_0⊕δ_* ⟹ k^2 k⊕ k^2[ur]^α'_*⊕ 0 k⊕ k^2[u]^β'_*⊕ 0 k⊕ k^2[ul]^γ'_*⊕ 0 k^2⊕ k[ull]_δ'_*⊕ 0 Without loss of generality, using our assumption that the images of every pair of homomorphisms generate k^2, we may assume that α and α' take 1 to the vector (1,0)∈ k^2 and that β and β' take 1 to (0,1)∈ k^2. Let γ and γ' take 1 to the vectors (x,y) and (z,w) respectively, and let δ and δ' take 1 to (s,t) and (u,v) respectively. The upwards homomorphisms are given by the matrices (from left to right, with respect to the standard bases): ([ 0 0 1; 0 0 0 ]), ([ 0 0 0; 0 0 1 ]), ([ 0 0 x; 0 0 y ]), ([ 0 0 s; 0 0 t ]) and ([ 1 0 0; 0 0 0 ]), ([ 0 0 0; 1 0 0 ]), ([ z 0 0; w 0 0 ]), ([ u 0 0; v 0 0 ]). Then Θ_0 can be represented on the object (m,∞) by a 2× 2 matrix A = (a_i,j), and on the objects (a,m), …, (d,m) by four 3× 3 matrices B^k = (b^k_i,j), for 1≤ k≤ 4. Computing the respective products, it is easy to observe that A must be diagonal with non-zero entries a_1,1 and a_2,2. Similarly a_1,1 = b^1_1,3, a_2,2 = b^2_1,3, and b^k_1,j=0 for 1≤ k≤ 4 and j=1,2. Furthermore, we have b^3_1,3 = x/za_1,1 = y/wa_2,2, and b_1,3^4 = s/ua_1,1 = t/va_2,2. Thus, as long as these relations are satisfied, A and B^k can be constructed with any nonzero choices of those values, while making sure in an arbitrary manner that B^k are nonsingular. However, the relations in (<ref>) allow solving for a_2,2 in terms of the other variables in two ways, and by comparing them we obtain the relation wxtu = zyvs that must be satisfied for Θ_0 to be well defined. Since this relation is not satisfied in general, this shows that there exist modules M, N∈ k with the same rank invariant, such that ∇[M]≠∇[N] (and hence also [M]≠ [N]). § THE HOM PAIRING AND THE EULER PAIRING Let be a finite poset. The Hom pairing of Definition <ref> and the Euler pairing of Definition <ref> in the case of k-modules take the form ⟨ [M], [N]⟩__k(_k(M,N)) and χ_([M],[N])χ(^*_k(M, N)), with M, N∈ k. Since and commute with finite direct sums, the pairings can be extended to arbitrary elements in the respective Grothendieck groups. Notice that _k^i(M,N)=0 for all i>0 if M is projective or if N is injective. In either case the two pairings coincide. For a finite poset , and an object v∈, consider the modules F_v, G_v, S_v∈ k, defined on objects by * F_v(u) k_(v,u) = k v≤ u 0 otherwise * G_v(u) k_(u,v) = k u≤ v 0 otherwise * S_v(u) k u=v 0 otherwise If v≤ u≤ u', then F_v(u)→ F_v(u') is the identity, and otherwise F_v(u≤ u') is the zero homomorphism. Similarly, if u≤ u'≤ v then G_v(u)→ G_v(u') is the identity and otherwise it is 0. For S_v all non-identity morphisms are 0. By <cit.> the modules F_v are precisely the indecomposable projective modules and the modules G_v are the indecomposable injective modules in k. By <cit.> the modules S_v are the simple modules in k. Notice that F_v is locally constant on _≥ v, while G_v is locally constant on _≤ v. Also, any virtually trivial module M is a direct sum of simple modules, M = ⊕_v∈ M(v)≠ 0_kM(v)· S_v. Let be a finite poset and let F_v and G_v be the indecomposable projective and the indecomposable injective modules determined by v. Let [X] = [M]-[N], where M, N∈ k. Then, * ⟨ [F_v], [X]⟩_ = χ_([F_v], [X]) =_k M(v) - _k N(v). 
More generally, if Q∈ k is any finitely generated projective module, then ⟨ [Q], [X]⟩_ = χ_([Q], [X]) = ∑_v∈ϵ_v(_k M(v)-_k N(v)), where Q≅⊕_v∈ϵ_v F_v, with ϵ_v∈ for each v∈. * ⟨ [X], [G_v]⟩_ = χ_([X], [G_v]) = _k M(v) - _k N(v). More generally, if I∈ k is any finitely generated injective module, then ⟨ [X], [I]⟩_ = χ_([X], [I]) = ∑_v∈ϵ_v(_k M(v)-_k N(v)), where I≅⊕_v∈ϵ_v G_v, with ϵ_v∈ for each v∈. By projectivity of F_v, ^i_k(F_v, M)= ^i_k(F_v, N)= 0 for i>0, so ⟨ [F_v], [X]⟩_ = χ_([F_v], [X]) χ(^*_k(F_v, M)) - χ(^*_k(F_v, N)) = _k _k(F_v, M) - _k _k(F_v, N) = _k M(v) - _k N(v). If Q∈ k is any finitely generated projective module, then Q≅⊕_v∈ϵ_v F_v, where ϵ_v∈ for each v∈. Thus ⟨ [Q], [X]⟩_ = χ_([Q], [X]) = _k _k(Q, M) - _k _k(Q, N) = ∑_v∈ϵ_v(_k M(v)-_k N(v)), as claimed in (<ref>). Similarly, ^i_k(M, G_v)=^i_k(N, G_v)=0 for any i>0. Thus _k(M,G_v) ≅ M(v)^*, and _k(N,G_v) ≅ N(v)^*, so ⟨ [X], [G_v]⟩_ = χ_([X], [G_v]) = _k M(v) - _k N(v). The second claim of (<ref>) follows similarly. Let 0→ M_r→ M_r-1→⋯→ M_0→ 0 be an exact sequence in k. Let P be a finitely generated projective k-module and let I be a finitely generated k-module. Then ∑_i=0^r(-1)^i⟨ [P], [M_i]⟩_ = ∑_i=0^r(-1)^iχ_([P], [M_i]) = 0, and ∑_j=0^r(-1)^j⟨ [M_i], [I]⟩_ = ∑_i=0^r(-1)^iχ_([M_i], [I]) = 0. Furthermore, let 0→ P_n→⋯→ P_0→ M→ 0, and 0→ M→ I_0⋯→ I_n→ M→ 0 be a projective resolution and an injective resolution of M in k. Then χ_([M], [M]) = ∑_i=0^n∑_j=0^n (-1)^i+j⟨ [P_i],[P_j]⟩_ = ∑_r=0^n∑_s=0^n (-1)^i+j⟨ [I_r],[I_s]⟩_. The first statement follow easily from Lemma <ref>. The second statement follows from the first. A main result of this section is the following proposition, which establishes an easy relation between the Euler pairing and the Hom pairing and allows explicit computation of the Euler pairing. Let be a finite poset, and let M, N∈ k be modules. Let ⟨-,-⟩_ and χ_(-,-) be the Hom pairing and the Euler pairing respectively. Let 0→ P_n→⋯→ P_0→ M→ 0, and 0→ N→ I_0→⋯ I_n→ 0 be projective resolution for M and and injective resolutions for N. Then χ_([M], [N]) = ∑_i=0^n(-1)^i⟨ [P_i], [N]⟩_ = ∑_i=0^n(-1)^i χ_([P_i],[N]), and χ_([M], [N]) = ∑_j=0^n (-1)^j⟨ [M], [I_k]⟩_=∑_j=0^n (-1)^jχ_([M],[I_k]). Furthermore, write P_i ≅⊕ϵ^i_vF_v and I_j = ⊕δ_u^i G_u, with ϵ_v, δ^j_u∈ and v, u∈. Then χ_([M],[N]) = ∑_v∈∑_i=0^n(-1)^iϵ_v^i_n N(v) = ∑_u∈∑_j=0^n(-1)^jδ_u^j_n M(u). Equation (<ref>) follows at once from Lemma <ref>(<ref>) and the well known observation that if 0→ A_n→⋯ A_1→ A_0→ 0 is a chain complex of finite dimensional vector spaces over a field, then χ(H_*(A_*)) = ∑_i=0^n(-1)^i_kA_i. Equation (<ref>) follows by analogy and Lemma <ref>(<ref>). By (<ref>) and Lemma <ref>(<ref>), χ_([M], [N]) = ∑_i=0^n(-1)^i⟨ [P_i], [N]⟩_ = ∑_i=0^n(-1)^i∑_v∈ϵ_v^i_k N(v). The first equality in Equation (<ref>) follows by rearranging the summands. The second equality follows by analogy, using an injective resolution for N and Lemma <ref>(<ref>). A nice interpretation of the Euler pairing occurs for the constant module on with value k. Let k denote the constant k-module with value k at each object and the identity map for each morphism. Then for each M∈ k, ^*_k(k, M) = H^*(, M). Thus χ_([k], [M]) = χ(H^*(, M)). In particular χ_([k], [k]) = χ(||), where || denotes the nerve of . Combining Lemma <ref> and Example <ref>, one observes that if has an initial object ∅, then k = F_∅ is projective. In that case the positive degree cohomology of any k-module vanishes, and χ_([k], [M]) = _k H^0(, M) = _k M(∅). 
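When the Hasse diagram of the poset is a tree, the identification of the Euler pairing with the Euler form of the Hasse diagram means that the pairing can be computed directly from dimension vectors, with no resolutions chosen. The following Python sketch illustrates this; the poset a<b, a<c and the dimension vectors used in it are illustrative assumptions, not data taken from the text.

def euler_pairing(edges, dim_M, dim_N):
    # Euler pairing over a poset generated by a finite tree, computed via the
    # Euler form of its Hasse diagram:
    #   chi(alpha, beta) = sum_x alpha(x)*beta(x) - sum_{x -> y} alpha(x)*beta(y)
    # edges: directed edges (x, y) of the Hasse diagram (assumed to be a tree)
    # dim_M, dim_N: dimension vectors, i.e. dicts {object: dimension over k}
    vertex_term = sum(dim_M[x] * dim_N[x] for x in dim_M)
    edge_term = sum(dim_M[x] * dim_N[y] for (x, y) in edges)
    return vertex_term - edge_term

# Illustrative poset a < b, a < c; its Hasse diagram is a tree.
edges = [("a", "b"), ("a", "c")]
F_a = {"a": 1, "b": 1, "c": 1}   # indecomposable projective at a (here the constant module)
S_a = {"a": 1, "b": 0, "c": 0}   # simple module at a (= the injective G_a, since a is minimal)

print(euler_pairing(edges, F_a, F_a))  #  1 = dim Hom(F_a, F_a)
print(euler_pairing(edges, F_a, S_a))  #  1 = dim S_a(a), pairing a projective with [S_a]
print(euler_pairing(edges, S_a, F_a))  # -1, so the Euler pairing is not symmetric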
Next we consider pairing with the simple modules S_v. We start with the Hom pairing. Let be a poset and let v∈ be an object. Then for any module N∈ k, _k(S_v, N) ≅⋂_v< w(N(v<w))(N|__>v) and _k(N, S_v) ≅ N(v)/(⊕_u<vN(u)→ N(v)).(N|__<v)^*. Fix and object v and let w>v be another object in . Let ϕ S_v→ N be a morphism in k. Then 0= ϕ_w∘ S_v(v<w) = N(v<w)∘ϕ_v. Hence (ϕ_v)⊆(N(v<w)). Since this holds for every w>v, it follows that (ϕ_v)⊆⋂_v<w(N(v<w)), and since ϕ is completely determined by ϕ_v, we have _k(S_v, N) ≅_k(k, ⋂_v<w(N(v<w))) ≅⋂_v<w(N(v<w)). This proves the first statement. Let ψ N→ S_v be a morphism in k. Then ψ is determined by ψ_v, and for each u<v in one has ψ_v∘ N(u<v) = S_v(u<v)∘ψ_u = 0. Hence ψ_v((N(u<v) N(u)→ N(v))) = 0. Since this holds for each u<v, the homomorphism ψ_v vanishes on (⊕_u<vN(u<v)→ N(v)) and hence it factors through N(v)/(⊕_u<vN(u<v)→ N(v))(N|__<v). Thus one may identify _k(N, S_v) with the dual space of (N|__<v), which is the claim of the second statement. Next we consider the Euler pairing with simple modules. Let be a finite poset and let v∈ be an object, and let S_v∈ k be the simple module determined by v. Let π_v F_v→ S_v and ι_v S_v→ G_v denote the obvious projection and injection, respectively. Let K_v=(π_v), and let C_v = (ι_v). Then χ_([S_v], [M]) = _k M(v) - χ_([K_v], [M]), and χ_([M],[S_v]) = _k(M_v) - χ_([M], [C_v]). In particular, if the Hasse diagram of the sub-poset _≥ v is tree, then χ_([S_v], [M]) = _k M(v) - ∑_v<u_k M(u)), and if the Hasse diagram for the sub-poset _≤ v is a tree, then χ_([M],[S_v]) = _k(M_v) - ∑_u<v_k M(u), where in both cases the sum runs only over the indecomposable relations of the type specified. The module K_v is the submodule of F_v, which coincides with it on each object, except v on which it vanishes. Thus we have a short exact sequence in k 0→ K_v→ F_v→ S_v→ 0 that gives rise to a long exact sequence of groups, upon applying _k(-,M). By examining that long exact sequence, applying additivity of the Euler characteristic and using Lemma <ref>(<ref>) we conclude that χ_([S_v], [M]) χ(^*_k(S_v, M)) = χ(^*_k(F_v, M)) - χ(^*_k(K_v, M)) = _k M(v) - χ(^*_k(K_v, M)) = _k M(v) - χ_([K_v], [M]). Similarly, using the exact sequence 0→ S_v→ G_v→ C_v→ 0 and Lemma <ref>(<ref>), we have χ_([M], [S_v]) = _k M(v) - χ_([M],[C_v]). In cases where the Hasse diagram of _≥ v is a tree, the module K_v can be easily seen to be a projective module K_v = ⊕_v< u F_u, where the sum runs over all indecomposable relations v<u in . Therefore, in that case by Lemma <ref>(<ref>), χ_([S_v], [M]) = _k M(v) - ∑_v<u_k M(u)). Similarly, if the Hasse diagram for _≤ v is a tree, then C_v is a sum of indecomposable injective modules, indexed by the indecomposable relations u<v. The corresponding statement follows by Lemma <ref>(<ref>). We end this section with a discussion of some general properties of the Hom and Euler pairings. Notice first that the Hom pairing is non-degenerate on classes of genuine modules, because for any M∈ k, ⟨ [M], [M]⟩_ = _k _k(M,M) ≥ 1, since any multiple of the identity transformation by a field element is nontrivial. However it is easy to find examples of nonzero virtual modules whose Hom pairing square is zero, as we next observe. Let be a finite poset, and let u,v∈ be any two objects. Then * χ_( [F_v], [F_u])=⟨ [F_v], [F_u]⟩_ = 1 v≥ u 0 otherwise * χ_( [G_v], [G_u])=⟨ [G_v], [G_u]⟩_ = 1 v≤ u 0 otherwise Both statements are particular cases of Lemma <ref>. We prove the first statement. The second follows by analogy. 
One has ⟨[F_v],[F_u]⟩_ = _kF_u(v) = 1 v≥ u 0 otherwise, as claimed. The following immediate corollary of Lemma <ref> shows that both the Hom pairing and the Euler pairing are degenerate on virtual modules. Let u, v∈ be two non-comparable objects. Let [X] = [F_v]-[F_u] and [Y]=[G_v]-[G_u]. Then ⟨ [X], [X]⟩_ = χ_([X], [X])= χ_([Y],[Y]) = ⟨[Y],[Y]⟩_ = 0. Similarly, it is easy to observe that ⟨ [S_v], [S_u]⟩_ =1 if and only if u=v and is 0 otherwise. This is not the case for the Euler pairing. In that case, assuming _ is a tree, we have by Lemma <ref>, χ_([S_v],[S_u]) = _k S_u(v) - ∑_v<w_kS_u(w) = -1 v=u -1 v<u indecomposable -0 otherwise Finally, we consider the Euler pairing square χ_([M],[M]) for modules M∈ k. Assume that the Hasse diagram _ of the poset is a tree, and thus in particular an acyclic quiver. In that case, by Lemma <ref> for any M, N∈ k χ_([M],[N]) = χ__([M],[N]), where (k)→^V is the dimension vector homomorphism (Definition <ref>), and χ__ is the Euler form for the digraph _ (Definition <ref>). Then the Euler pairing square coincides with the Tits form T_(k)→, <cit.>. By <cit.> the Tits form is positive definite if and only if _ is of Dynkin type ADE, and positive semi-definite if and only if _ is of extended type ADE. This corresponds to being of finite and infinite tame representation type, respectively. In general however, T_, and hence the Euler pairing square, is indefinite. § DIVERGENCE, ADJOINTNESS AND THE LAPLACIAN In this section we define and study the basic properties of the divergence and the Laplacian for persistence modules. Recall Definition <ref> Let be a poset. The left and right divergence are defined to be the homomorphisms ∇^*, ∇_*(k)→(k) given by ∇^*[N] [L_ϕ(N)]- [L_β(N)], and ∇_*[N] [R_ϕ(N)]- [R_β(N)]. For [X]∈(k) and [Y]∈(k), we have the adjointness relations with respect to the Hom pairing: ⟨∇^*[Y], [X]⟩_ = ⟨ [Y], ∇[X]⟩_ and ⟨ [X], ∇_*[Y]⟩_ = ⟨∇[X], [Y]⟩_ The following corollary is an immediate consequence of Proposition <ref>, which is as close as we can get to adjointness relations with respect to the Euler Pairing. Let be a finite poset such that _ is a tree. Let M∈ k be a module, and let 0→ Q→ P→ M→ 0, and 0→ M→ I → J→ 0 be a projective resolution and an injective resolution for M in k. Then for any N∈ k, χ_(∇^*[N], [M]) = ⟨∇^*[N], [I]-[J]⟩_ = ⟨ [N], ∇([I]-[J])⟩_ and χ_([M], ∇_*[N]) = ⟨ [P]-[Q], ∇_*[N]⟩_ = ⟨∇([P]-[Q]), [N]⟩_. The following example motivates referring to ∇^* and ∇_* as divergence. Let be the poset whose Hasse diagram is on the left of the diagram below, with its line digraph on the right. 3 4 03 04 0[ul][ur] 1[ur] 2[ul] 10[uu][uurr] 20[uu][uull] We compute the left and right divergence of a module M∈ k at the object 0. Starting with left divergence, notice that ϕ↓ 0 is the discrete category with objects 10 and 20, while β↓ 0 is isomorphic to . Hence L_ϕ M(0) ≅ M(10)⊕ M(20), and L_β M(0) ≅_ M. The computation of this colimit is elementary and can be shown to be isomorphic to the cokernel of the map M(10)⊕ M(20) → M(03) ⊕ M(04) that takes an element (x,z) to (M_103(x)- M_203(z), M_203(x) - M_204(z)), where M_xyz stands for M(xy<yz) for short. The computation of right divergence is similarly basic. Here we have 0↓ϕ isomorphic to and 0↓β the discrete category on objects 03 and 04. Hence R_β M (0) ≅ M(03)⊕ M(04), and R_ϕ M(0) ≅lim_ M. The computation of the limit is again elementary, and can be shown to be the kernel of the same map as in (<ref>). Thus we have an exact sequence 0 → R_ϕ M(0) → M(10)⊕ M(20) → M(03)⊕ M(04)→ L_β M (0) → 0. 
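For concreteness, the kernel and cokernel in this exact sequence can be computed numerically from the four structure maps. The following sketch (plain numpy, working over the rationals) uses hypothetical point modules and matrices, chosen only for illustration, and reads the comparison map as sending (x,z) to (M_103(x)-M_203(z), M_104(x)-M_204(z)).

import numpy as np

# Hypothetical structure maps of a module M on the poset with 1, 2 < 0 < 3, 4.
M_103 = np.array([[1, 0]])          # M(10) = k^2  ->  M(03) = k
M_104 = np.array([[0, 1], [0, 0]])  # M(10) = k^2  ->  M(04) = k^2
M_203 = np.array([[1]])             # M(20) = k    ->  M(03) = k
M_204 = np.array([[1], [0]])        # M(20) = k    ->  M(04) = k^2

# Comparison map M(10) + M(20) -> M(03) + M(04); its kernel is R_phi M(0)
# and its cokernel is L_beta M(0).
delta = np.vstack([np.hstack([M_103, -M_203]),
                   np.hstack([M_104, -M_204])])
rank = np.linalg.matrix_rank(delta)
dim_R_phi = delta.shape[1] - rank    # = 1 for these matrices
dim_L_beta = delta.shape[0] - rank   # = 1 for these matrices

dim_L_phi = 2 + 1                    # L_phi M(0) = M(10) + M(20)
dim_R_beta = 1 + 2                   # R_beta M(0) = M(03) + M(04)

# Left and right divergence of [M] at the object 0, read off through dimensions.
print("nabla^*[M](0):", dim_L_phi - dim_L_beta)   # 2
print("nabla_*[M](0):", dim_R_phi - dim_R_beta)   # -2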
Consider the values of these functors in terms of “flow" relative to the object 0. Thus one may think of L_ϕ M(0)≅ M(10)⊕ M(20) as the 0 in-flow, and of L_β M(0) as the quotient of the 0-out-flow, where the “effect" (image) of the in-flow has been divided out, or in other words, the net 0 out-flow. Similarly, R_β M(0)≅ M(03) ⊕ M(04) can be thought of as the 0 out-flow, while R_ϕ M(0) is the flow that is “wasted" (vanishes) on the way to 0. Thus ∇^* [M](0)<0 indicates that the net 0-out-flow is larger than the 0 in-flow, or that passing through 0 “amplifies” the flow. Similar interpretations can be given for ∇^* [M](0)>0 and in the corresponding situations for ∇_*[M](0). A nice case of Lemma <ref> occurs when pairing a projective module with the gradient of a module. Let (u,v)∈ be an object. Then ⟨ F_(u,v), ∇[M]⟩_ = ⟨ F_(u,v), ϕ^*[M]⟩_ - ⟨ F_(u,v), β^*[M]⟩_ = _kM(v)-_kM(u). Recall the modules K_M, C_M∈ k defined for any M∈ k as the kernel and cokernel of the natural transformation ηβ^*→ϕ^* (See Lemma <ref>). Let M∈ k and let N∈ k. Then χ_(∇[M], [N]) =χ_([C_M], [N])-χ_([K_M], [N]), and χ_([N], ∇[M]) =χ_(([N],[C_M])-χ_([N],[K_M]). The modules K_M and C_M can be considered as the kernel and cokernel of the natural transformation ηβ^*→ϕ^* that takes an object (x,y)∈ to the morphism β^*M(x,y) = M(x)M(x<y) M(y) = ϕ^*M(x,y). Thus, one has an exact sequence of k-modules 0→ K_M →β^* M η_Mϕ^* M → C_M→ 0, which can be split into two short exact sequences in k 0→ K_M→β^* M → I_M→ 0 and 0→ I_M→ϕ^* M → C_M→ 0, where I_M is the image functor. By applying _k(-, N), these give long exact sequences and it follows by additivity of the Euler characteristics, that χ(_k^*(β^* M, N)) = χ(_k^*(K_M, N)) +χ(_k^*(I_M, N)) and that χ(_k^*(ϕ^* M, N)) = χ(_k^*(C_M, N)) +χ(_k^*(I_M, N)). Thus χ_(∇[M], [N]) =χ_([ϕ^* M], [N])- χ_([β^* M], [N]) = χ_([C_M], [N])-χ_([K_M], [N]). The second statement follows similarly, by applying _k(N, -) to the short exact sequences (<ref>). Let M∈ k and let N∈ k. Then χ_(∇[M], [N]) = ∑_(u,v)∈ C_M(u,v)≠ 0_kC_M(u,v)·χ_([S_u,v], [N]) - ∑_(u,v)∈ K_M(u,v)≠ 0_kK_M(u,v)·χ_([S_u,v],[N]), and χ_([N],∇[M]) = ∑_(u,v)∈ C_M(u,v)≠ 0_kC_M(u,v)·χ_([N],[S_u,v]) - ∑_(u,v)∈ K_M(u,v)≠ 0_kK_M(u,v)·χ_([N],[S_u,v]). By Lemma <ref> both K_M and C_M are virtually trivial. Hence they are isomorphic to a direct sum of simple modules. Thus [K_M] = ∑_(u,v)∈ K_M(u,v)≠ 0_kK_M(u,v)· [S_u,v], and C_M = ∑_(u,v)∈ C_M(u,v)≠ 0_kC_M(u,v)· [S_u,v]. The claim follows from bilinearity of χ_. Notice that by Lemma <ref>, χ_(∇[M], [N]) is computable in the case where the Hasse diagram for _≥ (u,v) is a tree for each (u,v)∈, for which C_M(u,v) and K_M(u,v) are nonzero. A similar comment applies to computability of χ_([N],∇[M]). In particular if is a poset such that _ is a directed tree, then so is _. Let be a finite poset such that _ is a tree. Then so is _. If _ is not a tree, then there is a pair of objects (x,y), (s,t)∈_ with at least two directed paths from (x,y) to (s,t) in _. But this implies that there are at least two paths from y to s in _. Hence _ is not a tree, contradicting our assumption. A consequence of Lemma <ref> is that if _ is a tree then the Euler pairing of ∇[M] for any M∈ k with any N∈ k can be computed explicitly. Let be a finite poset. Let [X] = [U]-[V]∈(k) be any element, with U, V∈ k, such that ∇^*[X] = 0. Then _kU(x,y) = _kV(x,y) for all (x,y)∈. Fix an object (x,y)∈. Observe first that _≤ (x,y) coincides with the line digraph associated to _≤ x∪{y}⊆. 
Let be a maximal tree for the Hasse diagram of _≤ x∪{y}⊆, and let ⊆_≤ x∪{y}⊆ be the sub-poset generated by . Notice that contains y as a maximal object, and in its Hasse diagram the vertex y is univalent. Let ⊆ be the sub-poset generated by . Then (x,y) is a unique maximal object in , and so, the constant module k on coincides with G_(x,y) on and hence it is injective in k. Next, by naturality of ∇^* one has a commutative square (k)[rr]^ρ[d]^∇^* (k)[d]^∇^* (k)[rr]^ρ (k) where ρ denotes the homomorphism induced by restriction. Define M ∈ k as follows. Let n be the maximal length of a path in that ends in y. Set M(y) = k^n, and of any u∈ with graph distance r from y in = _, let M(u) = k^n-r. If u<v is an indecomposable relation in , then r= d(v,y) = d(u,y)+1, where d denotes graph distance. Define M(u<v) to be the inclusion into the first n-r-1 coordinates in M(v) = k^n-r. It is now easy to see that M is well defined on , and that k⊕β^* M is naturally isomorphic to ϕ^*M. Hence in (k) we have ∇[M] = [k]. Then _kU(x,y) = ⟨ρ[U], k⟩_ = ⟨ρ[U],∇[M]⟩_ = ⟨∇^*(ρ[U]), [M]⟩_ = ⟨∇^*(ρ[V]), [M]⟩_= ⟨ρ[V], ∇[M]⟩_ = ⟨ρ[V], k⟩_ = _k V(x,y), where the first equality follows from injectivity of k in and Lemma <ref>. A similar statement can be made for elements in (k) of vanishing right divergence. We leave the details for the reader. Notice also that in the appropriate sense Proposition <ref> says that the module M constructed in its proof is the “integral” of the module k. It is also easy to see that this integral is defined up to a constant summand. Further discussion of integration in the context of persistence modules will appear in future studies. With gradient and divergence operators in place, we can now define the corresponding Laplacians . Let be a finite poset. Define the left and right Laplacians ^0 and _0 respectively in ((k)), to be the group endomorphisms ^0 ∇^*∘∇ and _0∇_*∘∇. A virtual module [X]∈(k) is said to be left harmonic if ^0[X]=0 and right harmonic if _0[X]=0. A virtual module is said to be harmonic if it is both left and right harmonic. Let be a finite poset, and let [X] = [M]-[N]∈(k) with M, N∈ k be a left harmonic virtual module. Then, for each object (u,v)∈, _k M(v) - _k M(u) = _k N(v) - _k N(u). In particular, if N=0 and is connected, then for each x, y∈, _kM(x) = _kM(y). Write ∇[X] = [ϕ^*M⊕β^*N] - [ϕ^*N⊕β^*M]. Then 0=^0[X] = ∇^*(∇[X]) and Proposition <ref> applies. Thus, for each (u,v)∈, _k M(v) + _k N(u) = _k(ϕ^*M(u,v)⊕β^*N(u,v)) = _k(ϕ^*N(u,v)⊕β^*M(u,v)) = _kN(v) + _kM(u). The first claim follows. The second claim follows by connectivity of and induction on the length of paths in . § APPLICATIONS In this section we give two sample applications of the theory. In the first we deal with commutative ladder posets, which are generally known to have infinite representation type <cit.>. We produce some families of examples where the line digraphs are disjoint unions of posets of finite representation type. Hence while classification of indecomposable modules over those ladder posets may be hard or even impossible, the gradient of any module is much easier to understand. In the second application we compute gradients of filtered hierarchical clustering modules <cit.> of different random point sets. We show that the dimension vectors of gradients have more expressive power compared to the original modules. We also introduce gradient paths over modules, a form of gradient descent. 
§.§ Gradient modules of commutative ladders An in-depth investigation of modules over commutative ladders with real-world application appeared in <cit.>. The motivation to study commutative ladders was to be able to compare two input data, constituting two 1-persistence modules, and compare the common homological features between them by connecting morphisms. Denote a left-to-right arrow by F and a right-to-left arrow by B. A line poset is then a finite poset that can be written schematically as a juxtaposition of arrows of type F or B in any order. This is referred to as a quiver of type 𝔸_n in quiver representations and is one of the representation-finite, i.e. having only finitely many isomorphism classes of indecomposable representations, Dynkin diagrams by Gabriel's theorem <cit.>, <cit.>. Thus any line poset is uniquely characterised by a sequence X_1X_2⋯ X_n-1, where each X_i is either F or B. We refer to the characterising sequence as the type of the line poset. We call any such poset a line poset of length n. A commutative ladder of length n is a poset that can be written schematically as two line posets L_1 and L_2 of the same length and type with arrows from each object of L_1 to the corresponding object in L_2. By <cit.> general commutative ladders of any type with length 4 are representation-finite, whereas they are representation-infinite if length ≥ 5. Consider two specific types of ladder posets. The first is of any length and is made of a related pair of alternating sequences of type either FBFB⋯ or BFBF⋯. We refer to a poset of this form and length n as a zig-zag ladder of length n. The second is of even length 2n and is made of a related pair of sequences of the form FFBBFF⋯ or BBFFBB⋯. This type will be referred to as a double zig-zag ladder of length 2n. As pointed out any commutative ladder poset of length at least 5 is of infinite representation type. Let be a commutative ladder poset. Then * if is a zig-zag ladder, then is a disjoint union of line posets of length at most 2, and * if is a double zig-zag ladder, then is a line poset. A zig-zag commutative ladder of type FBFB⋯ has the general form drawn in the left diagram below. One easily observes that the associated line digraph has the form drawn in the right diagram. 
[Diagram: on the left, the Hasse diagram of a zig-zag ladder of type FBFB⋯, with bottom row 1,2,…,n, top row 1',2',…,n', vertical arrows i→i' and alternating horizontal arrows; on the right, its line digraph, which splits into disjoint pieces such as 11'→1'2', 12→22'←32 and 3'2'←33'→3'4', each a line poset with at most two arrows.]
This proves Part (<ref>). A double zig-zag ladder of type FFBBFF⋯ has the form
[Diagram: the Hasse diagram of a double zig-zag ladder of length 2n, with rows 1,…,2n and 1',…,2n', vertical arrows i→i' and horizontal arrows following the pattern FFBB⋯.]
The associated line digraph has the form
[Diagram: the line digraph of the double zig-zag ladder; its vertices 11', 1'2', 12, 22', 2'3', 23, 33', … form a single zig-zag chain.]
which is clearly a line poset, thus proving Part (<ref>). The proof for zig-zag and double zig-zag ladders of types BFBF⋯ and BBFFBB⋯, respectively, is essentially the same, with arrows going the opposite way. We obtain an immediate corollary of Lemma <ref> and Gabriel's theorem on the classification of quiver algebras of finite representation type <cit.>. Let be a zig-zag or double zig-zag commutative ladder poset of any length n. Then the front and back modules ϕ^*M and β^* M of any M are of finite representation type.
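The decomposition in the lemma above is also easy to verify mechanically: one can generate the Hasse diagram of a zig-zag ladder, form its line digraph, and inspect the connected components. The Python sketch below does exactly this; the coordinate encoding of the ladder is an illustrative assumption.

def zigzag_ladder(n):
    # Hasse diagram of a zig-zag commutative ladder of type FBFB... and length n;
    # vertices (i, 0) form the bottom row and (i, 1) the top row.
    edges = [((i, 0), (i, 1)) for i in range(n)]              # vertical arrows
    for i in range(n - 1):
        if i % 2 == 0:                                        # F: left to right
            edges += [((i, 0), (i + 1, 0)), ((i, 1), (i + 1, 1))]
        else:                                                 # B: right to left
            edges += [((i + 1, 0), (i, 0)), ((i + 1, 1), (i, 1))]
    return edges

def line_digraph(edges):
    # vertices are the edges of the input digraph; (e, f) is an arrow
    # whenever the target of e equals the source of f
    return [(e, f) for e in edges for f in edges if e[1] == f[0]]

def components(vertices, arrows):
    # connected components of the underlying undirected graph (union-find)
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in arrows:
        parent[find(u)] = find(v)
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

H = zigzag_ladder(6)
parts = components(H, line_digraph(H))
for comp in parts:
    assert len(comp) <= 3   # each component is a line poset with at most two arrows
print("components of the line digraph:", len(parts))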
Corollary <ref> stands in contrast to the fact that all types of commutative ladder posets of length 5 or more are of infinite representation type, where at least for zig-zag and double zig-zag ladders their gradients can be decomposed as a finite combination of modules over posets of finite representation type. There are obviously more types of commutative ladder posets with this property. For instance a poset of type BBF also has a line digraph that is a union of two posets of type 𝔸_n. We leave the classification of commutative ladder posets whose associated line digraphs are of finite representation as an interesting further question. §.§ Gradient analysis of filtered hierarchical clustering In this section we give an actual computational demonstration of applying our gradient to modules motivated by <cit.>. To this end, let 𝒜_m,n denote the product of posets {0 < 1 < … < m} and {0 < 1 < … < n}, which can be depicted as a commutative grid on two coordinates: (0,n) [r] (1,n) @.>[r] (m,n) (0,1) [r] @.>[u] (1,1) @.>[r] @.>[u] (m,1) @.>[u] (0,0) [r] [u] (1,0) @.>[r] [u] (m,0) @.>[u] We define biclustering module to be a functor M 𝒜_m,n, i.e. a module M ∈ k𝒜_m,n. In our setup, described below, we also impose the condition that all the homomorphisms M((x_1,y) < (x_2,y)) induced by horizontal morphisms in 𝒜_m,n are surjective. Such modules and their representation theory was studied in <cit.>, based on earlier works by <cit.> and <cit.>. By <cit.> the category of k𝒜_m,n-modules satisfying the extra surjectivity condition has finitely many isomorphism types of indecomposable modules exactly when n≤ 2, or m=1, or in the case where (m,n) ∈{(2,3),(2,4),(2,5),(3,3),(4,3)}.It is of tame representation type exactly when (m,n) ∈{(2,6),(3,4),(5,3)}, and it is of wild representation type in all other cases. A hierarchical clustering method defined on a finite metric space (X, d) is a one-parameter family of surjective maps {f_ϵ X C_ϵ(X)}_ϵ≥ 0, where each C_ϵ(X) is a finite set of clusters, such that for all ϵ≤ϵ' we have a surjection f_ϵ(X) ↠ f_ϵ'(X). A canonical example of such clustering method arises from considering the geometric graph G_X,ϵ associated to X at scale ϵ, namely the graph whose vertices are the points of X and with an edge connecting any two vertices x, y∈ X if d(x,y)≤ϵ. The natural projection of X onto the path-components of its geometric graph X↠π_0(G_X, ϵ) gives a clustering for X with scale ϵ. The inclusion of graphs G_X,ϵ↪ G_X,ϵ' for all ϵ≤ϵ' induces a surjection π_0(G_X,ϵ) ↠π_0(G_X,ϵ'), defining a hierarchical clustering method. A filtered hierarchical clustering method is obtained by first turning X into a filtration, i.e. an indexed collection of subspaces X_t_0⊂ X_t_1⊂⋯⊂ X_t_l = X, by some filtering method and then applying a hierarchical clustering method to each X_t in the filtration. Applying degree 0 homology H_0 we obtain at filtration parameter (ϵ, t) the k-vector space generated by the connected components of G_X_t, ϵ. The morphisms with respect to ϵ, for a fixed t, detect the merger of connected components, and hence are all surjective. The reader is referred to <cit.> for details. Fix a finite metric space (X, d). We used the following filtered hierarchical clustering method to generate our samples of biclustering modules. The filtering method applied is a K-nearest neighbour density estimator. 
Namely, for a fixed parameter K ∈ ℕ define a function ρ = ρ_K : X → ℝ by ρ(p) = ∑_i=1^K d(p, v_i), where the sum runs over a collection of K points v_1, v_2, …, v_K such that d(p,v_i) is minimal in the subspace X ∖ {v_j}_j<i. This yields a filtration {X_t}_t≥ 0 by subspaces of X, where p ∈ X_t if ρ(p) ≤ t. In our computations we fixed K=20. For every X_t we applied π_0 by computing the connected components of the geometric graphs G_X_t, ϵ for an increasing ϵ.

As finite metric space data (X,d) we used random point processes on the unit square [0,1] × [0,1] with d the standard Euclidean distance; point processes have gathered interest within the persistent homology community, see for example <cit.>. Random point processes are models for random point configurations based on statistical distributions. The interesting question is whether different point processes can be distinguished by their topological features encapsulated in persistence modules. Figure <ref> shows examples of the point processes used in our analysis (see definitions below). It is not obvious that persistent homology via the standard Vietoris-Rips filtration would contain enough information to distinguish these point clouds. Using filtered hierarchical clustering adds information from the density estimation, with the aim of adding more distinguishing features to the resulting biclustering modules.

We simulated instantiations of four point processes as follows; L is a random integer drawn from the interval [190,210].

* A (homogeneous) Poisson process: L points were randomly and uniformly distributed on the unit square.
* A Normal process: L coordinate pairs (x,y) were created, where both x and y are sampled from the normal distribution N(μ,σ^2) with mean μ=0.5 and standard deviation σ=0.2.
* A Matern cluster process: A Poisson process as above was simulated, but now with an expected number of 40 points. These represent parent points, or cluster centers, on the unit square. For each parent, a random number of child points C was drawn with expectation 5. A disk of radius 0.1 was centered on each parent point; then for each parent its associated number of child points C was uniformly randomly placed in the disk. Note that the parent points are not part of the actual metric space data.
* A Baddeley-Silverman process: The unit square was first divided into equal size tiles with side lengths 1/14. For each tile, a random number C was drawn from the Baddeley-Silverman distribution, which is a discrete distribution defined on the values (0,1,10) with respective probabilities (1/10,8/9,1/90). For each tile, the associated number of points C was then uniformly randomly distributed on the tile.

Parameter choices for the Matern and Baddeley-Silverman processes were such that they also had a number of points in the interval [190,210]. In the filtered clustering process the distance parameter ϵ was restricted to the interval [0,0.25]. The density filtration parameter t ranged from min_p ∈ X ρ(p) to max_p ∈ X ρ(p). The value ranges of both ϵ and t were discretised into 10 values, and at each pair (ϵ,t) of parameter steps homology H_0 was applied, resulting in biclustering modules M ∈ k𝒜_10,10. The dimension vectors [M] for the modules constructed for single realisations of the point processes are shown in Figure <ref>, with the distance parameter increasing along the x-axis and the density filtration along the y-axis. Recall that the gradient of a module M is given by the formal difference of the front and back k𝒜_10,10-modules, i.e. ∇[M] = [ϕ^* M] - [β^* M].
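The construction just described — sampling a point process on the unit square, computing the K-nearest-neighbour density ρ_K, and recording the dimension of H_0 of the geometric graphs over a grid of (ϵ, t) values — can be sketched as follows. This is our own illustration using numpy and scipy, not the authors' code; in particular, the Poisson distribution for the Matern child counts is an assumption, since only its expectation is specified above.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    rng = np.random.default_rng(0)

    def matern_cluster(n_parents=40, mean_children=5, radius=0.1):
        parents = rng.random((rng.poisson(n_parents), 2))
        pts = []
        for p in parents:
            for _ in range(rng.poisson(mean_children)):   # child count: Poisson assumed
                r, th = radius * np.sqrt(rng.random()), 2 * np.pi * rng.random()
                pts.append(p + [r * np.cos(th), r * np.sin(th)])
        return np.array(pts)

    X = matern_cluster()
    D = squareform(pdist(X))                        # pairwise Euclidean distances
    rho = np.sort(D, axis=1)[:, 1:21].sum(axis=1)   # rho_K with K = 20 (skip self-distance)

    eps_grid = np.linspace(0.025, 0.25, 10)
    t_grid = np.linspace(rho.min(), rho.max(), 10)
    dims = np.zeros((10, 10), dtype=int)            # dims[a, b] ~ dim_k H_0 at (eps_a, t_b)
    for b, t in enumerate(t_grid):
        keep = rho <= t                             # the subspace X_t of the density filtration
        Dt = D[np.ix_(keep, keep)]
        for a, eps in enumerate(eps_grid):
            n_comp, _ = connected_components(csr_matrix(Dt <= eps), directed=False)
            dims[a, b] = n_comp                     # number of clusters = dim of H_0
    print(dims)

The array dims plays the role of the dimension vector [M] visualised in Figure <ref>.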
Objects in the line digraph of 𝒜_10,10 have the form (i,j,j+1) or (i,i+1,j), where (i,j,j+1) stands for the edge from (i,j) to (i,j+1) and (i,i+1,j) stands for the edge from (i,j) to (i+1,j). Hence one has ∇[M](i,j,j+1) = dim_k M(i,j+1) − dim_k M(i,j), and similarly ∇[M](i,i+1,j) = dim_k M(i+1,j) − dim_k M(i,j). The resulting dimension vectors of the gradient modules are shown in Figure <ref>. Note that for a linear map f : V → W between finite dimensional vector spaces, the index ind(f) is given by dim(V) − dim(W); the values of the gradient dimension vectors, at every object of the line digraph, can thus be seen as minus the index of the corresponding homomorphism of M.

Comparing the modules in Figure <ref> with their associated gradients in Figure <ref>, we see that the gradient modules seem to better distinguish the point processes. In particular, the supports of objects where the gradient is of dimension zero, shown as blue vertices in Figure <ref>, are clearly different in all cases. While the regions on which the dimension vector vanishes do not necessarily inform about a sub-poset of 𝒜_10,10 on which M is constant, they do provide some nontrivial information. Since we assume that all horizontal homomorphisms are surjective, and since all vector spaces in sight are finite dimensional, one may conclude that M(i,j) → M(i+1,j) is in fact an isomorphism for each object (i,j) for which ∇[M](i,i+1,j) = 0: a surjection between finite dimensional vector spaces of the same dimension is necessarily an isomorphism.

In <cit.> (see also <cit.>) biclustering modules were studied through decomposing the associated modules. Decomposition methods have gathered considerable momentum in persistence theory <cit.>, guided by the success of the barcode in 1-parameter persistence. From our point of view we posit that decomposition methods are essentially global ways of understanding the structure of modules. Indeed, any indecomposable is still a module over the full underlying poset. Our calculus-based methods, on the other hand, are by definition local, and hence not tied to the size of the modules nor their representation type: our computations on k𝒜_10,10-modules could have been done on much larger posets, in contrast to the representation-theoretic limitations in <cit.>. The local information can be more manageable and complements the global information. From a more data-analytic perspective, our calculus approach tells how a module changes locally with respect to the filtration parameters, which might be valuable for a practitioner interested in how data sets cluster with respect to changes in the clustering parameters.

Another main value of the gradient in calculus is to define the gradient vector field of a scalar function, whose value at any point indicates the direction of greatest change of the function. To take this idea to biclustering modules for tracking the greatest changes along the filtration parameters, we propose tracing gradient paths in the k𝒜_10,10-modules. Concretely, we consider the posets 𝒜_10,10 as weighted digraphs according to Figure <ref>. Starting from the two minimal vertices in each digraph, we follow the paths that always choose the direction to the next vertex with the largest absolute value of the gradient; we continue until we arrive at a vertex where we can no longer make a choice. The gradient paths we obtain for every point process are sketched in Figure <ref>. Similarly to the distribution of points of dimension zero gradient in Figure <ref>, the gradient paths also follow different trajectories for each data set. Note that the gradient paths are determined by the modules themselves.
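To make the two constructions above concrete, the following sketch (ours, not the authors' code) computes the gradient values on the horizontal and vertical edges from an array of pointwise dimensions dim_k M(i,j) and traces a greedy gradient path; the exact starting vertices and tie-breaking rule used in the text may differ, and here the walk simply stops when no outgoing edge remains.

    import numpy as np

    def gradient_edges(dim):
        """dim is an (m+1) x (n+1) array with dim[i, j] = dim_k M(i, j)."""
        horiz = dim[1:, :] - dim[:-1, :]     # value on the edge (i,j) -> (i+1,j)
        vert = dim[:, 1:] - dim[:, :-1]      # value on the edge (i,j) -> (i,j+1)
        return horiz, vert

    def gradient_path(dim, start=(0, 0)):
        """From `start`, repeatedly follow the outgoing edge with the largest
        absolute gradient value (arbitrary tie-breaking)."""
        horiz, vert = gradient_edges(dim)
        i, j = start
        path = [(i, j)]
        while True:
            moves = []
            if i + 1 < dim.shape[0]:
                moves.append((abs(horiz[i, j]), (i + 1, j)))
            if j + 1 < dim.shape[1]:
                moves.append((abs(vert[i, j]), (i, j + 1)))
            if not moves:
                return path
            _, (i, j) = max(moves)
            path.append((i, j))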
Restricting to the paths gives us simplified modules over a line poset, encapsulating the information of maximal variability within the original module. A similar approach is taken in RIVET <cit.>, where 2-persistence modules are analysed through one-dimensional straight-line cross-sections. These cross-sections, however, have to be chosen, whereas the gradient paths are determined directly from the modules.
http://arxiv.org/abs/2307.01393v1
20230703231023
Spatio-Temporal Surrogates for Interaction of a Jet with High Explosives: Part I -- Analysis with a Small Sample Size
[ "Chandrika Kamath", "Juliette S. Franzman", "Brian H. Daub" ]
cs.LG
[ "cs.LG", "cs.NA", "math.NA" ]
Spatio-Temporal Surrogates for Interaction of a Jet with High Explosives: Part I - Analysis with a Small Sample Size

Chandrika Kamath, Juliette S. Franzman, and Brian H. Daub
Lawrence Livermore National Laboratory
7000 East Avenue, Livermore, CA 94551, USA
<kamath2, franzman1, daub1@llnl.gov>

9 June 2023
=================================================================================================================================================================================================

Computer simulations, especially of complex phenomena, can be expensive, requiring high-performance computing resources. Often, to understand a phenomenon, multiple simulations are run, each with a different set of simulation input parameters. These data are then used to create an interpolant, or surrogate, relating the simulation outputs to the corresponding inputs. When the inputs and outputs are scalars, a simple machine learning model can suffice. However, when the simulation outputs are vector valued, available at locations in two or three spatial dimensions, often with a temporal component, creating a surrogate is more challenging. In this report, we use a two-dimensional problem of a jet interacting with high explosives to understand how we can build high-quality surrogates. The characteristics of our data set are unique - the vector-valued outputs from each simulation are available at over two million spatial locations; each simulation is run for a relatively small number of time steps; the size of the computational domain varies with each simulation; and resource constraints limit the number of simulations we can run. We show how we analyze these extremely large data sets, set the parameters for the algorithms used in the analysis, and use simple ways to improve the accuracy of the spatio-temporal surrogates without substantially increasing the number of simulations required.

§ INTRODUCTION

Computer simulations are increasingly being used in science and engineering applications. However, it can be time-consuming to run these simulations for a given set of input parameters, especially when the problem being modeled is complex and requires high-performance computing resources. Surrogates, often based on a machine learning model <cit.>, are used to provide a fast, but approximate, alternative that relates the simulation outputs to the corresponding inputs. Such surrogates are relatively easy to create when the simulation inputs and outputs are both scalars. However, when the output is in two or three spatial dimensions and varies with time, relating these spatio-temporal outputs to the scalar inputs becomes more challenging. If, in addition, we are constrained by time or computer resources to generate data for only a small number of simulations, building a surrogate that is accurate becomes non-trivial.

We describe our work in creating surrogates for a problem in two spatial dimensions, plus time, in two reports. In this first report, we discuss the applications aspect of our work, focusing on issues related to the small number of simulations in our data set. We want to understand whether it is possible to build an accurate, predictive surrogate model under these conditions and to identify simple ways to improve the accuracy of these models. In a companion report <cit.>, we discuss in detail how we made our solution approach tractable, despite the large size of the spatial data generated at each time step of the simulation.
We start this report by describing the problem considered, namely, the interaction of a jet with high explosives, and the two-dimensional outputs generated by the simulations (Section <ref>). We then discuss the unique aspects of our problem and place it in the context of related work (Section <ref>). We describe how we address these unique challenges, focusing on ways in which we can improve the accuracy of the surrogate models (Section <ref>). Using several test cases, we explore how to set various parameters in the algorithms used in our solution approach and the metrics to evaluate the predictions of the surrogate models (Section <ref>). Finally, we conclude this report with the lessons learned in building accurate spatio-temporal surrogates for a limited number of simulations, each of which generates a large amount of data (Section <ref>).

§ DESCRIPTION OF THE PROBLEM AND DATA

We illustrate our ideas on building accurate spatio-temporal surrogates using simulation output from a problem describing the interaction of a jet with high explosives (HE). The domain of the problem is a right cylinder with its axis oriented horizontally as shown on the left in Figure <ref>. There is a steel plate, 1cm thick, near the right end of the cylinder, with the LX14 high explosive to the left of the plate. Both the plate and the HE have a fixed radius of 10cm. A copper jet, aligned along the center line of the cylinder, enters the HE from the left. The simulation models what happens as the jet moves through the HE and the plate. The jet is modeled initially as a uniform cylinder. It is 10 cm in length with a varying radius. The jet tip velocity is specified as an input parameter; a linearly-varying velocity profile is applied to the remainder of the cylinder representing the jet to approximate a stretching metal jet.

As the problem is radially symmetric about the axis of the cylinder, only the two-dimensional region shown by a dotted rectangle on the left in Figure <ref>, and schematically on the right, is simulated. There are three input parameters for the simulation: the radius of the jet, the length of the HE to be traversed by the jet, and the tip velocity of the jet in the positive x direction. At each time step, the simulation outputs variables of interest, such as mass and momentum, at different points on a grid in the two-dimensional region. By running the simulations at select values of these input parameters, and collecting the output at different time steps for each simulation, we can create a data set that could be used to build a surrogate model to predict the output at a new set of input parameters and a given time step. We are interested in determining, for example, whether the plate breaks; what is the final position of the plate; and, if the plate breaks, what is the velocity of the jet tip as it comes out on the other side of the plate.

To illustrate the instances in our data set, we use four simulations whose parameters are listed in Table <ref>. Figure <ref> shows the output variable, mass, at the first and last time steps for these four example simulations. As explained earlier, we have simplified the three-dimensional problem by assuming radial symmetry around the axis of the cylinder, so the output from the simulation is shown as two-dimensional images, with the axis of the cylinder shown at the bottom, that is, at y = 0.
The domain extent in x (along the length of the cylinder) varies as the length of the HE varies across simulations; however, the domain in y ranges from 0 to 11cm for all simulations. In Figure <ref>, the vertical plate, shown in red, is stationary at time t = 0. To the left of the plate is the HE shown in light blue. The jet is shown in red at the bottom of the domain to the left of the HE; it is quite thin relative to the radius of the cylinder, and is barely visible in the images. As the simulation evolves, the jet moves to the right, through the HE, which expands, pushing the plate to the right. At late time, depending on the simulation input parameters, the plate could:

* break, with the jet going through the plate and coming out clearly on the other side;
* almost break, with the jet either going completely through the plate but barely coming out the other side, or the jet going almost all the way through the plate, leaving it barely connected at the bottom;
* not break, with the plate remaining attached, either partially or completely, at the bottom. The plate could have moved from its original position at time t = 0.

We used the last two time steps in each simulation to assign one of these three class labels to the simulation. This label was not used in building the surrogate; it was used only to ensure we had good coverage of the design space. We selected the four example simulations in Figure <ref> to illustrate these three cases.

In our simulations, the output at each time step consists of the values of variables of interest that are generated at grid points in the two-dimensional rectangular domain. These grid points are on a regular grid, with Δ x = Δ y = 0.0125cm. There are three variables output: mass, x-momentum, and y-momentum; the latter two are shown later in Appendices <ref> and <ref>, respectively. The values of these variables are defined at the center of the cell formed by four nearby grid points. Thus the data appear as an image, with regularly spaced pixels. However, in general, the grid points in a simulation need not be on a regular grid; they could form an unstructured grid, as in a finite element mesh, or a locally structured grid, as in an Adaptive Mesh Refinement (AMR) mesh. As a result, unlike an image, most output from simulations also includes the (x,y) coordinates of the grid points. In our work, we retain this association of the coordinates with the grid points as it enables us to extract sub-domains from the larger domain for processing.

Each simulation is run for a fixed number of time steps, determined as ⌊ HE-length / jet-tip-velocity ⌋ + 23, with the output generated at each time step. As both the HE length and the jet tip velocity vary with the simulation, the number of time steps also varies across simulations. At early time, as the jet starts to move through the HE, there is little of interest in the simulation output. Once the jet is partway through the HE, as indicated by the first term in the expression above, it starts to influence the location of the plate; 23 μsec later, we expect to know the final status of the plate. In our work, we consider all the time steps in the analysis; an alternative would be to consider only the later 23 time steps. The output data for a variable at a time step in a simulation is referred to as a snapshot, so named as it is a snapshot of the evolution of the simulation at a particular point in time.
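For concreteness, the time-step count can be computed as below; the parameter values in the example are hypothetical and are not taken from Table <ref>.

    from math import floor

    def num_time_steps(he_length_cm, jet_tip_velocity_cm_per_us):
        return floor(he_length_cm / jet_tip_velocity_cm_per_us) + 23

    print(num_time_steps(10.0, 0.8))   # hypothetical inputs -> 35 time steps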
For the problem considered in this report, we generated the data set by identifying sample parameter values in the three-dimensional input space and running the corresponding simulations for the specified number of time steps. For each of the three output variables, the data set consists of the snapshots across all time steps of all simulations. Our eventual goal in building the spatio-temporal surrogates for the jet-HE interaction problem is to predict what happens to the jet and the plate at late time, specifically, does the jet go through the plate, what is the velocity of the jet tip when it goes through the plate, and what is the location of the plate at late time. However, in this initial study, we limit the scope of the work and explore options for creating an accurate surrogate when we have a small number of simulations. Specifically, we consider how to process large data sets, how to set parameters for the algorithms used in our solution approach, and how to improve the accuracy of the surrogates without substantially increasing the number of simulations required. We evaluate our ideas using a qualitative comparison of predicted outputs for seven test snapshots. §.§ Challenges to the analysis There are two main challenges to building spatio-temporal surrogates for our problem: * The first is how do we build a surrogate that is accurate? The predictive accuracy depends on two factors. The first is the quality of the training data. Simulating the jet-HE interaction is resource-intensive, requiring multiple processors of a high-performance computing system. This limits the number of sample points we can run in the three-dimensional input parameter space. These points have to be selected carefully; this is difficult as we do not know a priori the outcome of running the simulation at a specific sample point, which in turn implies that we do not know the range of parameter values to use. The second factor influencing the accuracy of the surrogate is the choice of the model used. We want to predict a two-dimensional output, given only four scalar inputs - the three simulation input parameters and a time step. A traditional machine learning model, where we predict scalar outputs for a set of scalar inputs, cannot be used in this case. * The second, and related, challenge is the very large size of our data set. Though the number of simulations is small, each snapshot has over two million grid points as shown in Table <ref>. In addition, as the simulations are run on multiple processors, each snapshot is split across multiple files. Any algorithms used to build a high-quality surrogate must be modified to account for both these factors. This report focuses on the first challenge of building accurate spatio-temporal surrogate models given a limited number of simulations. The second challenge of processing the extremely-high dimensional snapshots generated in a distributed manner is discussed in the companion report <cit.>. § RELATED WORK Our approach to creating accurate, two-dimensional, spatio-temporal surrogates builds on some early work in turbulence and pattern recognition, specifically the characterization and recognition of human face images. 
The early work of Sirovich and Kirby <cit.> showed that a data matrix, formed by an ensemble of face images, similar to our snapshots, can be transformed using the Karhunen-Loève expansion (similar to the principal component analysis (PCA) <cit.>) such that each face is written as a linear combination of two-dimensional basis functions, they called “eigenpictures”. A close approximation to a face is then obtained by truncating the linear combination to use only a small number of the initial, more important, basis functions and the corresponding weights, thus creating a lower-dimensional representation. Kirby and Sirovich also applied their ideas to problems in fluid flow, including data from simulations, and introduced the snapshot method and the concept of eigenflows <cit.>. Following this early work, Turk and Pentland <cit.> showed that these ideas enabled face recognition as we could recognize a new image as a specific face if its weights matched those of the specific face. They referred to the basis functions as “eigenfaces” as they were obtained using eigenanalysis of the data matrix. In later work, non-linear alternatives were explored to obtain a better representation for data that did not necessarily lie on a linear manifold. The techniques included locally-linear decompositions and neural-network-based auto-encoders, not only for face images, but also speech data in the form of time series and images of handwritten digits <cit.>. In problems where the data represents output snapshots from simulations run with different input parameters, at possibly different time steps, an obvious next step was to build a predictive model relating the simulation input parameters and time step to the weights characterizing a snapshot. This would enable the prediction of results at parameters not included in the original set of simulations. Such an approach was taken in the early work of Ly and Tran <cit.>, who used proper orthogonal decomposition (similar to PCA) for the decomposition, and spline interpolation for the predictive model. A similar idea was explored by Higdon et al. <cit.>, who also used PCA, but predicted the weights using Gaussian process models, and by Swischuk et al. <cit.>, who compared different machine learning models for predicting the weights. These ideas have become the subject of much recent research, especially as compute-intensive simulations have become an increasingly important part of design and engineering, requiring rapid generation of results. In particular, in the field on non-intrusive reduced-order modeling (ROM) <cit.>, many options have been proposed, both for the decomposition into a lower-dimensional representation and for the predictive model that relates this representation to the simulation inputs. While the dimension reduction is often obtained using PCA, which is a linear method, non-linear approaches developed in data mining <cit.>, have also been used, despite their greater complexity. These include locally-linear PCA, kernel PCA, and deep neural networks <cit.>. For the predictive models, a range of interpolation techniques have been used, including radial basis function regression, Gaussian processes, and deep neural nets <cit.>. 
Several unique aspects of our problem make it impractical to directly apply these ideas: * The very large size of each snapshot: Many of the problems considered in the literature for building spatio-temporal surrogates have data at spatial points that number in the thousands, or tens of thousands, while a few have hundreds of thousands of grid points. For example, among the larger-sized data sets, one test problem in fluid flow considered by Rajaram et al. <cit.> had 1047 snapshots, each with 41,796 grid points and the other had 1001 snapshots, each with 450,000 grid points. Both problems were static (no time dependence), with a low-rank, reduced-dimensional space. A problem with more complex geometry considered by Bērzinš et al. <cit.> had a structured grid with 2459 nodes and two data sets, one with 100 simulations and the other with 400 simulations, where each simulation was run for 100 time steps. In addition, 50 simulations each were generated for validation and testing. In contrast, each snapshot in our data set has over 2 million grid points, which makes it challenging to both manage and process the data, requiring the algorithms to be modified suitably. * The small number of simulations and time steps: Spatio-temporal models are typically built using a large number of simulations, numbering in the hundreds or thousands, as noted earlier. We found one example of a compute-intensive simulation <cit.> where just thirty simulations were run, each for 1000 time steps. While our constraints on resources also limit us to a similar number of simulations, we run each simulation for a much smaller number of time steps, as indicated in Table <ref>. * The variable size of the domain: Most problems considered in spatio-temporal modeling are formulated on a fixed domain, with grid points at the same fixed locations at all time steps across all simulations. There are a few exceptions; the work by Yeh et al. <cit.> considers a problem where the input parameters control the geometry of the domain, while the problem in Bērzinš et al. <cit.> had a moving grid with a fixed number of grid points. As we show in Section <ref>, substantial pre-processing of our data is required to bring all snapshots into a common grid before we can build the surrogate. Our contributions in this report are three-fold: First, we address the issues above and show how we process the large number of grid points per snapshot on domains that vary with each simulation. Second, we propose ways to determine the parameters for the algorithms used in building the surrogates. Finally, we investigate the accuracy of surrogates we might expect when the number of simulations and the number of time steps at which we run each simulation are both small. We show how we can use the small number of simulations to identify the range of input parameters and to create a data set with a sufficient diversity of outputs. We also consider simple ways in which we can improve the quality of the surrogate while keeping the number of simulations small. § SOLUTION APPROACH Our approach to building accurate spatio-temporal surrogates for jet-HE interaction problem is composed of multiple steps which we describe in detail in the following sections. We started by carefully generating sample points in the three-dimensional input parameter space (Section <ref>). 
We then pre-processed the output from each simulation so that all snapshots from all simulations had values defined at the same set of grid points, with the plate locations aligned at the initial time step, t = 0 (Section <ref>). This allowed us to represent the data in the form of a matrix, whose 1604 columns represented all the snapshots from the 45 simulations, and each row represented the value of the variable at a specific (x,y) grid point location. This data in the form of the snapshot matrix was used to build the spatio-temporal model (Sections <ref> and <ref>) and the accuracy of the surrogate evaluated on new simulations (Section <ref>). §.§ Sampling the input parameter space One of the challenges in our problem is that simulating the interaction of the jet with the HE is compute intensive, limiting the number of sample points at which we can run the simulations. Therefore, the location of these sample points has to be selected carefully, which is challenging as we do not know a priori the outcome of running the simulation at any sample point or the range of input parameters we should use in generating the samples. Therefore, we generated the sample points incrementally in the three-dimensional input space using a modified version of the best candidate algorithm <cit.> that selects samples randomly, but far apart from each other. We first generated a small set of 16 samples using a range of [5.0, 20.0]cm for HE-length, [0.125, 0.25]cm for the jet radius, and [0.6, 0.95]cm/μs for the jet tip velocity. We then excluded four sample points with HE-length greater than 16.2cm, as this length was too large for the jet to even reach the plate at late time. Next, restricting HE-length to be in the range [5.0, 16.2]cm, we ran 12 new samples. Of the resulting 24 samples, we found that we could shrink the range of HE-length further to [5.0, 15.0]cm and also exclude samples in the lower right corner of the (HE-length - jet-tip-velocity) plot where the jet tip velocity was too low for the plate to break. This left us with data from 20 simulations, from an initial set of 28 simulations. Having identified the range of values for generating simulations, we added 25 new samples in this region, for a total of 45 samples. As the best-candidate method is a progressive sampling algorithm, it allows us to add samples incrementally, while preserving the random and far-from-each-other property of the samples. Our data set, shown in Figure <ref>, indicates that at high jet tip velocity, but low HE-length, the plate breaks, while at low jet tip velocity and high HE-length, the jet does not penetrate the plate. This latter region is sparsely sampled as we are interested mainly in cases where the plate breaks. The class label (break, no break, or almost break) was assigned by examining the outputs at the last two time steps in each simulation. It is clear that our data set is unbalanced as we have 9 samples where the plate does not break, 31 where the plate breaks, and 5 that are almost break. Generating an appropriate data set for a problem like ours is challenging as we can run only a limited number of simulations. However, the boundary between the classes is poorly defined and we do not know the range of inputs that will give us sample points with the desired outcome. We erred on the side of having more break cases as these were of greater interest. The outputs for the no-break cases tended to be very similar, and we expected that a small number of such cases would suffice. 
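A rough sketch of best-candidate sampling over the three input parameters is given below; it is our own illustration, and the authors' modified version of the algorithm may differ in its details (for instance, in how candidates are generated or how distances are normalized). The initial parameter ranges are those quoted above.

    import numpy as np

    def best_candidate_samples(n_samples, bounds, n_candidates=100, rng=None):
        """Each new sample is the candidate farthest from the existing samples."""
        rng = rng or np.random.default_rng(0)
        lo, hi = np.array(bounds).T                    # bounds: [(min, max), ...]
        samples = [rng.uniform(lo, hi)]
        for _ in range(n_samples - 1):
            cands = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
            scaled = (cands - lo) / (hi - lo)          # normalized coordinates
            existing = (np.array(samples) - lo) / (hi - lo)
            # distance from each candidate to its closest existing sample
            d = np.linalg.norm(scaled[:, None, :] - existing[None, :, :], axis=2).min(axis=1)
            samples.append(cands[d.argmax()])
        return np.array(samples)

    # HE length [cm], jet tip velocity [cm/us], jet radius [cm]
    pts = best_candidate_samples(16, [(5.0, 20.0), (0.6, 0.95), (0.125, 0.25)])

Because the method is progressive, additional samples can be appended later while keeping the random, far-from-each-other character of the design.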
Admittedly, as shown in Section <ref>, our choice of sample points affects the accuracy of the spatio-temporal surrogates built using the data set. These 45 simulations generate a total of 1604 snapshots that vary in the number of grid points, as indicated for the four examples in Table <ref>. The simulation at the extreme corner of the input space, with HE-length, jet tip velocity, and jet radius equal to 5.0cm, 0.950cm/μs, and 0.125cm, respectively, is referred to as the baseline simulation. It has the smallest number of time steps, with 29 snapshots.

§.§ Pre-processing the data

To create a spatio-temporal surrogate for our data set, we first need to create a snapshot matrix for each of the three output variables, mass, x-momentum, and y-momentum. This snapshot matrix, X ∈ ℝ^D × N, is just a collection of the snapshots X = [ x_1, x_2, …, x_N ], where x_i ∈ ℝ^D, N = 1604 is the number of snapshots, and D is the number of grid points in a snapshot. However, a number of issues have to be addressed before we can generate the snapshot matrix where each snapshot has the same number of grid points at the same (x,y) coordinates. These issues, and our solution approach, are discussed in detail in the companion report <cit.> and described briefly below.

The data generated for each simulation, regardless of the size of the domain, are available in 360 files in HDF5 format <cit.> for each time step, as shown in Figure <ref> for two of the four example sub-domains. Each HDF5 file includes five variables — x and y coordinates, mass, x-momentum, and y-momentum. Within each file, the variables are in natural ordering, that is, ordered by increasing values of the y-coordinate, and for a fixed y-coordinate, ordered by increasing values of the x-coordinate. All simulations are on a regular grid with Δ x = Δ y = 0.0125cm. However, there are several differences in the data across the simulations that preclude just appending the snapshots to create the matrix X. Figure <ref> shows that each simulation has a different domain size, with the same range of values in y, but different ranges in x. In addition, while all simulations write the output to 360 files, their sizes are different as shown in Figure <ref>. We also observe that the plate, which is a key structure in the problem, is at different locations in the domain due to varying values of HE length in the simulations. A closer look at the (x,y) coordinates of the grid points indicated that the values of Δ x and Δ y are not exactly 0.0125cm across simulations, and while the coordinates in y, which have a fixed range of values, are identical across simulations, this is not the case for the x coordinates.

To address these differences across simulations, we first aligned the domain for each simulation so that the origin was at the lower right corner, which automatically aligned the plate at time t=0 across all simulations. Next, we cropped the left end, removing data with the shifted x coordinate outside the range [-32, 0.0], which corresponds to the smallest HE length of 5.0cm. Now, all snapshots have data on the same domain, though not at the same (x,y) coordinates. To bring them onto the same set of coordinates, we re-mapped the data to a common grid, using a simple 1-nearest neighbor algorithm. To make further processing of the large snapshot matrix feasible, we generated it in a block form X = [ X_b1; X_b2; …; X_bk ] by using a block form for the common grid that was used to remap the data.
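The alignment, cropping, and 1-nearest-neighbor remapping described above can be sketched as follows; this is our own simplified illustration, with assumed array layouts, using the grid spacing and crop range quoted in the text.

    import numpy as np
    from scipy.spatial import cKDTree

    def remap_to_common_grid(xy, values, x_min=-32.0, dx=0.0125, y_max=11.0):
        """xy: (n, 2) raw grid coordinates; values: (n,) variable values."""
        xy = xy.copy()
        xy[:, 0] -= xy[:, 0].max()             # align: shift so the right edge is at x = 0
        keep = xy[:, 0] >= x_min               # crop to the smallest HE length
        xy, values = xy[keep], values[keep]
        # common grid of cell centres shared by every simulation
        xs = np.arange(x_min + dx / 2, 0, dx)
        ys = np.arange(dx / 2, y_max, dx)
        gx, gy = np.meshgrid(xs, ys)
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        _, idx = cKDTree(xy).query(grid, k=1)  # 1-nearest neighbour in the raw data
        return values[idx]                     # one column of the snapshot matrix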
Each block was written to a separate file and contained all grid points in a specific non-overlapping range of y values, stored in natural order. Thus, concatenating the blocks in the order of their y values would result in a single snapshot-matrix file for each variable, with the grid points stored in natural order in the rows of the matrix. At the end of the pre-processing step, we have each of the three output variables in a separate snapshot matrix, where each matrix has 1604 columns and 2,180,799 rows. Each matrix is split by rows into 22 blocks, each block stored in a separate file. Figure <ref> shows the mass variable for the first and last snapshot, for each of our four example simulations, after the raw output data have been aligned, cropped, and remapped. The corresponding images for x-momentum and y-momentum are shown in Figures <ref> and <ref> in Appendix <ref> and <ref>, respectively.

Next, to illustrate the evolution of the simulations over time, we show the data, after the original output has been pre-processed, at multiple time steps in two example simulations, key r01_i017 and key r02_i028, corresponding to no-break and break cases, respectively. Figure <ref> shows the mass variable evolving with time in these two simulations. The corresponding images for x-momentum and y-momentum are shown in Figures <ref> and <ref> in Appendix <ref> and <ref>, respectively.

The pre-processing of the data described in this section is similar to that performed in face recognition, where the face images are all processed so they are the same size, with the face centered in the image and major features such as the eyes and nose aligned across images. The difference in our data is that unlike a face, which is stationary, the snapshots show the motion of the HE and the plate with time. Therefore, even though we aligned the plate at time t=0 in all simulations, there is a loss of alignment of the plate location as the simulations progress, which will influence the accuracy of any surrogate built from the data.

§.§ Surrogate using a linear transformation

After pre-processing the simulation output and converting it into a snapshot matrix, X, in a block form as shown in Equation <ref>, we build the spatio-temporal surrogate using the traditional approach outlined in Section <ref>. We start by linearly transforming the snapshot matrix X using a singular value decomposition (SVD) <cit.>, X = U Σ V^T, where X ∈ ℝ^D × N, U ∈ ℝ^D × D, Σ ∈ ℝ^D × N, and V ∈ ℝ^N × N. Here, Σ is a diagonal matrix with non-zero diagonal elements σ_ii, referred to as the singular values of the matrix X. These singular values are typically ordered in descending order, with the rows and the columns of the U and V matrices ordered correspondingly. Since D ≫ N in our problem, the top N rows of Σ will have non-zero diagonal elements, assuming rank(X) = N. If the rank, k, is less than N, then only the first k diagonal elements will be non-zero and the rest will be zero. The columns of the orthonormal matrices U and V are referred to as the left and right singular vectors of X. The matrix U = [ u_1, u_2, …, u_N ], where u_i ∈ ℝ^D, is also the matrix of orthonormal eigenvectors of X X^T, and the singular values σ_i are the square roots of the eigenvalues of X X^T (or X^T X). We refer to the vectors u_i as "eigen-snapshots", akin to the "eigenfaces" used in face recognition <cit.>.
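A small numerical sketch (ours) of this decomposition for a toy tall, skinny matrix illustrates the eigen-snapshot property of the left singular vectors; the sizes are illustrative, not those of the actual snapshot matrix.

    import numpy as np

    D, N = 2000, 30                                        # grid points x snapshots (toy sizes)
    X = np.random.default_rng(0).normal(size=(D, N))
    U, sigma, Vt = np.linalg.svd(X, full_matrices=False)   # U: D x N, sigma in descending order

    # the u_i are orthonormal and satisfy (X X^T) u_i = sigma_i^2 u_i
    assert np.allclose(U.T @ U, np.eye(N))
    assert np.allclose(X @ (X.T @ U), U * sigma**2)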
The u_i also form a basis in ℝ^D for the data, so each snapshot x_i can be written as a linear combination of the u_k as follows: x_i = ∑_k=1^N w_ki u_k, where the weight, w_ki, of the k-th basis for the i-th snapshot is given as w_ki = u_k^T x_i for k = 1, …, N. These weights are just the projection of the i-th snapshot onto each of the columns of the U matrix. Equation <ref> shows how the original snapshot can be reconstructed using the weights and the basis vectors. A reasonable approximation, x̃_i, to a snapshot x_i can then be obtained by using the u_i corresponding to the n, n < N, larger singular values: x̃_i = ∑_k=1^n w_ki u_k. Thus, if we can generate the SVD for X as in Equation <ref>, then Equations <ref> and <ref> enable us to generate an approximation, x̃_i, to the original snapshot, x_i.

These equations can be interpreted as generating a reduced representation in two ways - first, by creating an approximation that uses a smaller number of weights and basis functions, ignoring the other weights and basis functions, and second, by characterizing a snapshot x_i in terms of just its weights w_i = [ w_1i, w_2i, …, w_Ni ], which can be combined with the basis set ( u_k, k = 1, …, N ) to reconstruct the snapshots exactly. This representation of the i-th snapshot, in terms of its weights, w_ki, k = 1, …, N, can be related to another representation of the snapshot in terms of its simulation input parameters (h_i, v_i, r_i) and time step, t_i, through a predictive model w_ki ≈ f_k (h_i, v_i, r_i, t_i), where k = 1, …, N and i = 1, …, N. Here h_i, v_i, and r_i are the HE length, jet tip velocity, and jet radius for the i-th snapshot. Note that a separate model is created for each weight index, that is, the first or most important weight is predicted by one model, the second most important weight by another, and so on, resulting in the n (or N) models required to generate the snapshots approximately (or exactly). Then, to reconstruct approximately the two-dimensional output, or snapshot, at a new set of input parameters and time step, we can first use the predictive model for each of the n weights to predict the weight values at this new input and time step, and apply Equation <ref>. The spatio-temporal surrogate is thus composed of the basis functions obtained using the SVD and the predictive models created using a training set that relates the simulation inputs and time step to the weights associated with the basis functions.

There are several options available to calculate the SVD and to generate the predictive models; we next describe the options we selected for use in our problem. We considered two ways to generate the SVD of X and obtain the matrix U and the singular values σ_i:

* In the first approach, based on the normal equations, we form the matrix X^T X, of size N × N, and obtain its eigen-decomposition. The singular value σ_i of X is then the square root of the eigenvalue λ_i of X^T X. The columns u_i ∈ ℝ^D of U are the corresponding orthonormal eigenvectors of X X^T. Since we have calculated the eigenvectors, v_i ∈ ℝ^N, of the much smaller matrix X^T X, we can obtain the u_i as u_i = (1/√(λ_i)) X v_i. The division by √(λ_i), which is the length of the vector X v_i, results in orthonormal vectors u_i. This normal equations approach is very straightforward and easily extends to the case when X is in block form (Equation <ref>), as we can form the matrix X^T X by reading in a block at a time. However, it does suffer from the floating point issues associated with normal equations <cit.>.
* A more stable approach to the SVD is the QR decomposition, which does not involve forming the normal equations. Here, we first decompose the snapshot matrix as X = QR, where Q ∈ ℝ^D × D has orthonormal columns and R is upper triangular, and then generate the SVD of the smaller matrix R. Since X is a tall, skinny matrix, we use the thin/reduced version of the QR decomposition <cit.>: X = Q R = [ Q_1 Q_2 ][ R_1; 0 ] = Q_1 R_1, where X ∈ ℝ^D × N, Q ∈ ℝ^D × D, Q_1 ∈ ℝ^D × N, and R_1 ∈ ℝ^N × N. We next obtain the SVD of the much smaller matrix R_1, R_1 = U_R_1 Σ_R_1 V^T_R_1, where U_R_1 ∈ ℝ^N × N, Σ_R_1 ∈ ℝ^N × N, and V^T_R_1 ∈ ℝ^N × N, which gives X = ( Q_1 U_R_1 ) Σ_R_1 V^T_R_1. Thus, the singular values of X are just the singular values of R_1, and the left singular vectors of X, that is, the basis functions, are the columns of Q_1 U_R_1. To extend the QR decomposition to matrices in the block form, we used the work of Constantine and Gleich <cit.>; this implementation is less straightforward than the normal equations.

We have several options for the n predictive models to predict the weights in the reduced representation of the data. In our work, we use a machine learning model, specifically, a Gaussian process model <cit.>, as it is one of the models that is accurate for small data sets <cit.>. It also provides an estimate of the uncertainty on the prediction, which gave us an opportunity to investigate whether the uncertainty could be used to understand and explain the results, and to determine the number of weights, n, to use in the reduced representation. The GP is an expensive model to create, especially when we use automatic relevance determination <cit.>, where the hyper-parameters include the weights on each input. However, in our problem, we can create and apply the n models in parallel, which reduces the turnaround time for creating the models.

§.§ Surrogate using a locally-linear transformation

The singular value decomposition, described in Section <ref>, is a linear decomposition, and if the data do not lie on a linear manifold, the number of weights required for a reduced representation may be quite large. This can create problems with accurate prediction of the weights when we have a small number of simulations. A simple approach to introducing nonlinearity in the decomposition is to use a locally-linear decomposition, that is, cluster the snapshots by similarity and then apply the linear, SVD-based approach to each separate cluster <cit.>. Then, to predict the two-dimensional output at a new point in the simulation input space and time step, we identify which cluster the new point belongs to, and use the predictive model created for that cluster.

A challenge in clustering our data is the extreme high-dimensionality of the snapshots, as each snapshot has over 2.2M grid points. As described in the companion report <cit.>, we investigated multiple ways to address this problem. For iterative clustering methods, such as k-means, which require multiple passes through the data, processing large files in each iteration can be expensive. We considered two ways to avoid this processing: in the first approach, we reduced the dimension prior to clustering by using random projections <cit.> and, in the second approach, we clustered a different, but equivalent, representation of each snapshot, such as the weights obtained after SVD (Equation <ref>). We also investigated a non-iterative algorithm, hierarchical clustering <cit.>, which requires the pairwise distances between the snapshots.
By calculating the distance matrix once, we could experiment with different options for the method. We found that hierarchical clustering with Ward linkage gave remarkably similar results to k-means clustering. We chose the results with k-means and random projections as we could exploit the randomness of both the random projections and the initial choice of cluster centroids in k-means to identify the number of clusters.

A locally-linear transformation that combines clustering of snapshots, followed by an SVD on each cluster, is not the only option for non-linearly transforming our data. Other options include kernel PCA <cit.>, which requires the solution of an ill-posed inverse problem to reconstruct the two-dimensional output at a new point in the simulation input space, and a neural-network based auto-encoder, which, given the high-dimensionality of the snapshots, will be a challenge to implement, even with a small number of hidden layers <cit.>. In this work, we selected the locally-linear transformation as it is a simple method and allowed the re-use of software developed for the linear transformation.

§ EXPERIMENTS, RESULTS, AND DISCUSSION

We next present the results of our experiments to test the accuracy of our two spatio-temporal surrogates, one using a linear decomposition and the other using a locally-linear decomposition. Recall that the approach in both cases is similar: we start with a snapshot matrix, generate an SVD for it, and calculate the eigen-snapshots and the corresponding weights that can be used to perfectly reconstruct each snapshot. Next, we build Gaussian process models that can predict each of the initial, important weights based on the simulation inputs and time step. Then, to reconstruct an approximation to a new snapshot that is identified by its simulation input parameters and time step, we first predict the weights and then combine the weights with the eigen-snapshots. The two surrogates differ in the snapshot matrix that is used - the linear surrogate uses a matrix composed of all snapshots, while for the locally-linear surrogate, we cluster the snapshots, grouping similar snapshots together, and then build a linear surrogate for each cluster separately.

Our eventual goal in building these spatio-temporal surrogates is to predict what happens to the jet and the plate at late time, specifically, does the jet go through the plate, what is the velocity of the jet tip when it goes through the plate, and what is the location of the plate at late time. As the simulations are expensive, we want to build accurate surrogates using only a small number of data points in the simulation input space. As stated earlier, in this initial study, we qualitatively evaluate the surrogates to understand to what extent we can achieve our goals and to identify simple ways in which we can improve the accuracy of the surrogates without substantially increasing the number of simulations required. In our work, given the questions we want to address regarding the status of the plate and the jet at late time, we focus on the mass and x-momentum variables. As shown in Figures <ref> and <ref>, the y-momentum variable does not clearly define either the plate or the jet, and is therefore less useful in our analysis.
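The clustering route chosen above — a sparse random projection of each snapshot followed by k-means — can be sketched as follows; this is our own illustration using scikit-learn, and the projected dimension, the number of clusters, and the use of scikit-learn itself are assumptions made for the example rather than the authors' actual implementation.

    import numpy as np
    from sklearn.random_projection import SparseRandomProjection
    from sklearn.cluster import KMeans

    def cluster_snapshots(X, n_clusters=3, n_components=200, seed=0):
        """X: snapshot matrix of shape (D, N); snapshots are the columns."""
        proj = SparseRandomProjection(n_components=n_components, random_state=seed)
        Z = proj.fit_transform(X.T)           # one row per snapshot, reduced dimension
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(Z)
        return km.labels_                     # cluster label for each snapshot

Repeating this with different random seeds for both the projection and the k-means initialization gives a simple consistency check on the number of clusters.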
In the following sections, we describe the test data we use to evaluate the accuracy of reconstruction of new snapshots and discuss how we determine various parameters used in our approach, including the number of weights to use in the reconstruction, the number of clusters to use for the locally-linear surrogate, and the identification the cluster to which the new data point belongs. We also discuss ways to evaluate the reconstruction quality for the test snapshots. §.§ Test simulations To test our ideas and evaluate the accuracy of the linear and locally-linear surrogates created using the 45 simulations shown in Figure <ref>, we identified seven data points in this simulation input parameter space. These points are listed in Table <ref> and their locations are shown in Figure <ref>. The parameters for these data points were selected to be different from each other, so we could evaluate the quality of the predictions not only at different locations in the input parameter space, but also at different locations in the break, no-break, and almost-break space. Three of the seven points are clearly in the region where the plate breaks, with two points each in the almost-break region and the no-break regions. The latter four points were more difficult to identify as the no-break region is less well defined in the (HE-length - jet-tip-velocity) space, and the no-break points are near the boundary of the sampled region. Their locations make it likely that the output predictions at these four data points will be less accurate. To enable us to evaluate the outputs predicted by the surrogates by comparing with the actual outputs, we ran the simulation at these seven points in the input space and identified the status of the plate at the last time step. In practice, this last time step would be identified as described in Section <ref>. §.§ Generating the SVD There are several issues we need to address related to the implementation of the SVD. As explained in Section <ref>, we considered two implementations of the SVD algorithm - one based on the normal equations and the other on a QR decomposition. In early experiments, using a snapshot matrix with data from fewer than 45 simulations, we found that when select snapshots were reconstructed using all the weights, the error was greater when we used the normal equations formulation. This was expected behavior given the floating point issues associated with the normal equations. Therefore, all results in this report were generated using the QR decomposition. Another issue that is the subject of much confusion is whether the snapshot matrix should be “centered”, that is, the mean snapshot subtracted from each snapshot, before calculation of the SVD. As observed by Bērzinš et al. <cit.>, there are adherents on both sides of the issue. Jolliffe and Cadima <cit.> have discussed the topic at length, providing examples of where centering may or may not be meaningful. Some authors <cit.>, taking a pragmatic approach, have evaluated the results both ways, and selected the option that best met their needs. For our data set, the mean of the snapshot matrix with all 1604 snapshots is dominated by the plate at early time; this is because in most snapshots, the plate, which is aligned across all simulations at t = 0, has barely moved from the initial location. Therefore, when the mean snapshot is subtracted from the snapshots at late time (the ones of most interest to us), the initial position of the plate appears clearly. 
As a result, more eigen-snapshots are required to account for, say, 90% of the variation in the data. In addition, we found that the substantial visual change in the late time snapshots after centering made it difficult to determine how many weights to use in the reconstruction.

§.§ Clustering the snapshots

Clustering the 1604 snapshots for each of the three variables proved to be challenging for several reasons. In Section <ref>, we discussed how we addressed the issue of the extremely high dimensionality of each snapshot. We also wanted to understand if there was an inherent clustering in the collection of snapshots. A careful analysis of the snapshots, as shown in Figures <ref> and <ref> for mass, and Figures <ref> and <ref> for x-momentum, indicated that the first and last snapshots in a simulation are quite different, suggesting that multiple clusters exist in the data. However, clustering the snapshots would result in some neighboring snapshots, which are very similar, being assigned to two different clusters, which suggested that the clusters are not well separated. This made it difficult to identify the number of clusters and to evaluate the results of any clustering algorithms. A detailed discussion on how we addressed these issues is given in the companion report <cit.>.

Figure <ref> shows the clustering results we use for the mass and x-momentum variables. These results were obtained using the k-means clustering algorithm, combined with random projections to reduce the dimensionality of the snapshot matrix. Based on these results, for our problem, it is relatively simple to identify the cluster for each of the snapshots at which we want to predict the 2-D output. Our interest is in the late time cluster, which is cluster 2 for the mass variable and cluster 0 for the x-momentum variable. Using these clusters, we can predict the output at both the last time step and the time step that is 2 prior to the last time step; we refer to these time steps as tlast and (tlast-2). Our reason for predicting two late-time snapshots in each of the seven test simulations will become clear in Section <ref>.

§.§ Determining the number of singular values to keep

One of the key issues in reconstructing the simulation outputs at a new data point is the determination of n, which is the number of weights and basis vectors to use in building the approximation. Ideally, we want a good approximation to a snapshot using a small value for n. Let X_n be the matrix composed of snapshots reconstructed using only the top n weights. For our problem, with a tall, skinny X, the error in the reconstruction is given by ‖ X - X_n ‖_F^2 = ∑_i=n+1^N σ_ii^2, that is, the sum of squares of the singular values that were excluded in the reconstruction. If these singular values are small, the error in the approximation is small. The cumulative percentage variation explained by the first n singular vectors is ( ∑_i=1^n σ_ii^2 / ∑_i=1^N σ_ii^2 ) × 100.0. Thus, one way to determine n is to use a fixed percentage variation, say 90% or 95%, that we would like explained by the reconstructed data, and identify the n associated with it. This is the most popular approach in building spatio-temporal surrogates, where, having identified n, predictive models are created for each of these n weights and then used to predict the weights at a new data point. Table <ref> lists the values of n for our snapshot matrices.
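Choosing n from a fixed percentage variation can be sketched as follows (our own illustration; the singular values in the example are made up):

    import numpy as np

    def n_for_variation(sigma, threshold=90.0):
        """sigma: singular values in descending order; returns the smallest n whose
        cumulative percentage variation reaches the threshold."""
        energy = np.cumsum(sigma ** 2) / np.sum(sigma ** 2) * 100.0
        return int(min(np.searchsorted(energy, threshold) + 1, len(sigma)))

    sigma = np.array([10.0, 5.0, 2.0, 1.0, 0.5])
    print(n_for_variation(sigma, 90.0))   # -> 2, since the first two terms carry ~96%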
As the values of n can be quite large, and we have a small number of simulations, we wanted to confirm the quality of the n models with a leave-one-out approach prior to their use. We created each model with all but one of the snapshots, and used the model to predict the weight for the snapshot left out. For a good model, the plot of the predicted vs. actual weights should give points close to the y = x line. However, we found that the quality of the models tended to deteriorate quite quickly as the index of the weight increased. Figure <ref> shows a sample of these predictions for the mass variable, with all 1604 snapshots and with the 364 snapshots in the last cluster. We found that the most important weights usually predicted very well, but the scatter around the y = x line increased with the weight index. Sometimes, many of the predicted values would be along the y=x line, with some relatively large outliers. In other cases, most of the predictions were poor. Interestingly, some weights at higher indices had better predictions than the weights at lower indices. We suspect that this poor prediction of some weights is due to the small size of our data set, as we have just 45 simulations. Bērzinš et al. <cit.> observed that for one of their problems, increasing the data set from 100 to 400 simulations did not affect the linear transformation, but significantly improved the accuracy of the models created to predict the weights. The results in Figure <ref> indicated that we could not use the quality of the actual vs. predicted weight for the training data set to determine the number of weights for reconstruction. Even when the overall quality of predictions on the training data are poor, a weight prediction at a new data point could be accurate, and vice versa. We therefore decided to estimate the number of weights based on the uncertainty in prediction obtained from the Gaussian process surrogates. We predicted the values of the first 50 weights and started by using the largest number of weights that all had small variance in the predictions. Since this number is relatively small for the mass variable, we also generated the reconstructions using a larger number of weights to understand the inaccuracies they would introduce into the reconstructed outputs. §.§ Software and parallel implementation Many of the steps in our approach to creating spatio-temporal surrogates can be parallelized to reduce the computational time. In this initial implementation, we exploited parallelism wherever it was possible to do so easily. For example, tasks in the pre-processing of the HDF5 files, such as reading the files, aligning and cropping the domains, and remapping each simulation to the common grid, could all be done in parallel across simulations using the Python sub-process function. For the SVD using QR decomposition on our block snapshot matrix, we implemented a serial version of the parallel algorithm by Constantine and Gleich <cit.>. For the random projections used prior to clustering, we used a sparse random matrix that was generated on the fly, reducing the storage required, and enabling better use of the cache on the computer system. Where possible, we used pre-existing, highly optimized software from the double-precision BLAS <cit.> and LAPACK <cit.> libraries. These included DGEMV and DGEMM for matrix-vector and matrix-matrix multiply, respectively; GEQRF, GESVD, and ORGQR for the SVD using the QR decomposition; and DSYEVR for the eigenanalysis on the normal equations. 
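As an illustration of the parallelism mentioned above, per-simulation pre-processing jobs can be launched concurrently along the following lines; the script name and argument convention are hypothetical placeholders, not the authors' actual tooling, and the two simulation keys are simply the examples used earlier in this report.

    import subprocess

    sim_keys = ["r01_i017", "r02_i028"]              # simulation identifiers (examples)
    procs = [subprocess.Popen(["python", "preprocess_simulation.py", key])
             for key in sim_keys]                    # one pre-processing job per simulation
    for p in procs:
        p.wait()                                     # block until all remappings finish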
We also did not make any efforts to optimize the codes, for example by finding the optimal block size to use in storing the snapshot matrix or identifying ways in which consecutive steps in the processing could be merged for faster turnaround. Since our focus in this report is to understand what is possible when the number of simulations is small, we do not include compute times for various steps in our solution approach. §.§ Reconstruction results We next present the results of reconstruction of the tlast and (tlast-2) time steps for the seven test simulations using our two surrogates. For the mass variable, we show results for both linear and locally-linear transformations. As expected, the latter approach gave better results as the models to predict the weights were created using snapshots that were more similar to each other. Therefore, for the x-momentum variable, we considered only the locally-linear transformations. To determine the number of weights to use in reconstruction, we generated the predictions and the uncertainties for the first 50 weights using the Gaussian process models. We expected, based on Table <ref>, that a maximum of 50 weights should suffice to obtain a reasonable reconstruction for the mass and x-momentum variables. Typically, the weights reduce in value with increasing index number, and later weights that are near zero can be dropped. However, it is possible that for a specific snapshot, a later weight is too large to ignore. We combined the information in Table <ref> with the uncertainties in the weight predictions to determine the exact number of weights to use in reconstruction as follows: * mass variable using the linear transformation: Using the SVD results for all 1604 snapshots, we reconstructed the test snapshots with 8, 20, and 47 weights. We considered the first 8 weights as these weights were predicted with low uncertainty for all 14 test snapshots and Table <ref> indicated that we would capture more than 90% of the variation in the training data. We considered 20 weights as 17 weights allowed us to account for at least 95% variation in the training data, and adding weights 18 through 20, which were predicted with low variation for all 14 test snapshots, could only improve the results. Finally, we used 47 weights to understand the effects of adding more weights; we stopped at 47 as all test snapshots had higher uncertainty on prediction of weight 48. We observe that for many test snapshots, the six weights with indices ranging from 9 to 15 often had high uncertainty. We therefore expected reconstructions with more than 8 weights to be of poor quality, but we wanted to understand to what extent including weight predictions with larger uncertainties would influence the results. * mass variable using the locally-linear transformation: Using the SVD results for the 364 snapshots in the late-time cluster, we generated the reconstructed test snapshots with 30 weights. The errors in the initial weights tended to be small, especially for the (tlast-2) time steps. The first large error typically occurred at weight 31, resulting in our choice of 30 weights for reconstruction. * x-momentum variable using the locally-linear transformation: Table <ref> indicated we needed 40 weights to capture 90% variation in the reconstructed snapshots. We used all 50 weights as the error in the prediction of weights at higher indices was often very small. We next describe the reconstruction results using the two spatio-temporal surrogates.
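Before presenting those results, a minimal sketch of the reconstruction step itself may be useful: one Gaussian process model per retained weight is fit on the training snapshots, its prediction (with uncertainty) is obtained at the new point in the space of simulation inputs and time step, and the snapshot is rebuilt as a linear combination of the corresponding basis vectors. The sketch below uses scikit-learn's Gaussian process regressor with an RBF kernel purely for illustration; the report does not prescribe a particular GP implementation or kernel, and if the snapshot matrix was centered, the mean snapshot must be added back.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def fit_weight_models(params, weights, n):
    """One GP per retained weight; `params` is (n_snapshots, n_inputs),
    i.e., simulation inputs plus time step, and `weights` is (n_snapshots, N)."""
    models = []
    for j in range(n):
        kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(params.shape[1]))
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(params, weights[:, j])
        models.append(gp)
    return models

def reconstruct(models, basis, new_param):
    """Reconstruct a snapshot at `new_param` from the predicted weights.
    `basis` is (n_gridpoints, N); only its first len(models) columns are used."""
    preds, stds = [], []
    for gp in models:
        mu, sd = gp.predict(new_param.reshape(1, -1), return_std=True)
        preds.append(mu[0])
        stds.append(sd[0])
    w = np.array(preds)
    snapshot = basis[:, :len(models)] @ w
    # The returned stds flag weights predicted with high uncertainty.
    return snapshot, np.array(stds)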
First, in Appendix <ref>, we show the first ten eigen-snapshots for the mass variable generated with the linear and locally-linear transformations. The corresponding eigen-snapshots for the x-momentum variable, locally-linear transformation, are shown in Appendix <ref>. In addition, for each of the 14 snapshots being reconstructed, we present the following detailed results: * the predictions obtained from the Gaussian process surrogate: We plot the predictions for the first 50 weights, with the uncertainty in prediction shown as an error bar at 1 standard deviation. * the original snapshot, along with various reconstructed snapshots: We focus on the region around the plate as we want to predict what happens to the plate at late time. * a y-lineout using the actual values in the reconstructed snapshots: This plot of the variable values at y = 6.0063 and -15.5 ≤ x ≤ -10.5 (in our transformed coordinates) for the original and reconstructed snapshots gives a concise summary of the quality of reconstruction and makes the comparison more quantitative than comparing entire snapshots visually. These detailed results, including weight predictions, reconstructed snapshots, and the y-lineouts for the linear and locally-linear surrogates for the mass variable at time steps tlast and (tlast-2) for all seven test snapshots are shown in Appendix <ref>. The corresponding results for the x-momentum with the locally-linear surrogate are shown in Appendix <ref>. We repeat the y-lineout for the reconstruction of the 14 test snapshots for the mass variable in Figures <ref> and <ref> and use them to discuss the results and compare the different options used. §.§ Discussion We next summarize our observations on the reconstructed results presented for the mass and the x-momentum variables in Figures <ref> and <ref>, and in Appendices <ref> through <ref>. These results indicate the following: * For the mass variable, the reconstructions of the plate region using the linear surrogate are usually worse than the locally-linear surrogate. This is observed visually in the reconstructed snapshots and more clearly in the y-lineouts. When we use all snapshots in the linear surrogate, the location of the plate at t = 0 is captured prominently in the early eigen-snapshots. However, all the test snapshots are at late time, when the plate has moved to the right from its original location. Therefore, any errors in the predicted values of the early weights, when multiplied by the corresponding eigen-snapshots, appear as vertical streaks in the region to the left of the plate in the reconstructed snapshots. In contrast, for the locally-linear surrogate, the location of the plate in the eigen-snapshots is constrained to a smaller range of x values and closer to where we might expect the plate to be at late time. This is to be expected as the snapshot matrix in the locally-linear surrogate includes only the late time snapshots, instead of all the snapshots in the linear surrogate. The result is better quality reconstruction with the locally-linear surrogate. The creation of the locally-linear surrogate requires the additional step of clustering of the snapshots. However, the facts that the calculation of the SVD is for a smaller snapshot matrix and the Gaussian process models are built for a smaller data set, make the creation of the locally-linear surrogate computationally faster than the linear surrogate. 
* For both the mass and the x-momentum variables, the reconstruction at time step (tlast-2) is often better than at time step tlast. There are two contributing factors. First, the weights corresponding to higher indices at time step (tlast-2) are often smaller than the corresponding weights at time step tlast. So, when both snapshots are reconstructed with the same number of weights, ignoring the weights with higher indices, more useful information is ignored at time step tlast than at time step (tlast-2), resulting in better reconstruction of the latter. Second, the error in prediction of the weights at time step (tlast-2) is usually smaller than at time step tlast. This is to be expected. When we consider the sample points in the input region formed by the three simulation parameters and the time step, the point at the last time step is on the boundary of this region, while the point at (tlast-2) time step is near, but not at the boundary. Weight predictions at the boundary points are usually less accurate as they have fewer neighboring points around them. * However, for the two no-break simulations, r03_i008 and r03_i023, the reconstructions of the mass and x-momentum variables, at both the tlast and the (tlast-2) time steps are poor. These two simulations are near the boundary of the region formed by the training data in the space of the three input parameters, as shown in Figure <ref>. Simulation r03_i023 also has the smallest jet tip velocity. With fewer neighbors around these points, the weight predictions from the Gaussian process models are less accurate, resulting in poor quality reconstructions. * For the mass variable, we see different effects as we change the number of weights used in reconstruction with the linear surrogate. These effects are best understood through the y-lineplots. When the number of weights is small (=8), though the curve is very smooth, it is a poor fit to the plate profile, and extends far to the left of the plate. As the number of weights increases to 20 and then 47, the curves become a better fit to the profile of the plate and extend less to the left of the plate, but appear less smooth. There are several competing factors responsible for this behavior. The eigen-snapshots indicate that at a fixed y coordinate the values flip between positive and negative, with fewer sign changes in the initial eigen-snapshots and more in the later ones, somewhat akin to a Fourier series. This accounts for the smooth curve in the y-lineout at few weights, which becomes more jagged as the number of weights is increased and the “higher-frequency” eigen-snapshots are used to approximate the plate, which can be seen as a square wave. As we increase the number of weights, the approximation to the plate location becomes better, and the y-lineout curve, which extends far to the left of the plate at 8 weights, moves closer to the plate at 20 and 47 weights. However, the weights between 8 and 15 are often predicted with high error, which shows up as wiggles in the curve to the left of the plate. In contrast, for the locally-linear surrogate, the better localization of plate location in the eigen-snapshots and the lower error in weight prediction, lead to a better match of the reconstructed curve to the actual plate profile. * For the mass variable, the reconstructed snapshots have negative values, which appears physically incorrect. This is due to the limited number of eigen-snapshots used in reconstruction. 
Using all eigen-snapshots would result in near-perfect reconstruction and all positive values (to within floating point errors) for the mass variable. * For the x-momentum variable, for which we only present results generated with the locally-linear surrogate, we observe certain differences from the mass variable. The values of both the variable and the weights are smaller for x-momentum. In most cases, the weights for x-momentum go rapidly to zero, though the initial weights have higher uncertainty. The reconstruction quality based on the y-lineplots indicates that the no-break cases and some of the break cases could be improved. * We observe that in our problem, as the focus is on what happens to the plate and the jet at late time, we could have reduced the size of each snapshot further by focusing on just the region around the plate. However, such an option is problem dependent and may not be available in general. * Finally, we consider to what extent we can address the eventual goals of this effort, namely, whether, for a new point in the simulation input space, it is possible to use the surrogate to determine if the plate breaks at late time, to identify the final location of the plate, and to obtain the speed of the jet as it exits the plate in cases where the plate breaks. We address these questions using the reconstructed region around the plate and the y-lineouts for the cases where the snapshots have good reconstruction quality. Understandably, the reconstructed snapshots are an approximation to the actual snapshots, especially when a small number of weights are used. However, the y-lineouts indicate that it should be possible to obtain a good estimate of the plate location at late time by applying gradient-based image segmentation techniques. To determine whether the plate is a break, almost-break, or no-break case, we can look at the bottom region of the plate to see if it has detached from the bottom and if we can see the jet on the other side. We observe that it may be harder to distinguish between the no-break and the almost-break cases, but this may be due to the poor reconstruction of the no-break cases. However, the break cases appear to be easy to identify. This is despite the fact that the boundary of the plate is not as sharply defined in the reconstruction, since we have ignored the weights at higher indices. To understand whether we can meet our goals for the no-break cases, which are poorly reconstructed given our training data, we will first need to improve these cases using the ideas discussed next. Overall, we observe that the quality of the reconstructed test snapshots is a combination of several factors, including the suitability of the initial eigen-snapshots at capturing the plate location at late time, the error in the prediction of the weights, and the number of weights used in the reconstruction. This gives us some suggestions for generating better quality reconstructions, without increasing the number of simulations: * For our problem, the locally-linear surrogate created using only the late time snapshots gives better results than the linear surrogate created using all snapshots from the 45 simulations. We expect this result to hold in general. By clustering the snapshots and building linear surrogates for each cluster, we generate a better basis for the snapshots in that cluster, leading to more accurate predictions for new snapshots in that cluster.
This requires the identification of a cluster for the new snapshot, which can be obtained from the cluster assignment of all the snapshots. It also suggests that for our problem, other non-linear transform methods may be worth exploring. * To reduce the error in the prediction of the weights, we want the test point to be in the interior of the region formed by the training data in the space of the three input parameters and time step. Therefore, if our interest is in predicting what happens at late time steps, as in our jet-HE interaction problem, we should run the simulations used to create the training data for a few more time steps, so the test points lie in the interior of the region that forms the input to the Gaussian process models. This also means that the test points should lie in the interior of the space of the simulation input parameters. Since the locations of the test points may not always be known before the training data are generated, one approach to ensure that the test points are not too far from the training data is to cover the region of interest with random points far from each other. We accomplish this using the best candidate sampling. In addition, decisions not to add points in some regions of the input space, should be taken with care; our decision to exclude points in the high HE-length, low jet tip velocity region led to poor prediction for the no-break test cases. Further, it may be desirable to set aside a number of simulations to be run once the locations of the test points are known. * It still remains a challenge to determine the number of weights to use for reconstruction, especially as the quality of the weight prediction could be poor when the number of simulations is small. Using too few weights would result in poor reconstruction of sharp changes in the data, such as the plate boundary, while using more weights might introduce errors when the weight predictions are not accurate. Ideally, we want the weight values to decrease rapidly as it would indicate that a small number of initial weights is necessary for reconstruction. However, this may not be the case when there is large variation in the data that is not captured sufficiently by a small number of simulations. It therefore appears that we may require experimentation with different number of weights, with the number selected possibly varying with each test snapshot. § CONCLUSIONS In this report, we considered the problem of jet-HE interaction to determine if it is possible to build accurate, spatio-temporal surrogates when we can run only a small number of simulations to create a training data set. We showed how to process a data set where the size of the computational domain varies with each simulation and each snapshot has over two million grid points. Our results showed that a locally-linear surrogate, which builds separate surrogate models using groups of similar snapshots, is more accurate than one which builds a single surrogate using all the snapshots. We also identified other simple ways to improve the quality of surrogates when we are constrained to run only a limited number of simulations. 
These include better sampling of the training data points in the simulation input parameter space to cover the region uniformly so no test point is too far from a training point; selecting, if possible, the locations of the training data points such that the test snapshots are not near the boundary of the region defined by the training data; and setting aside a part of the simulation budget to run a few additional simulations once the test data points have been identified. § ACKNOWLEDGMENT We would like to thank the Defense Threat Reduction Agency (DTRA) for funding this work. The simulations of the interaction of the jet with high explosives were performed using the ARES code developed at Lawrence Livermore National Laboratory. LLNL-TR-850152 This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes. § APPENDIX: DATA FOR X-MOMENTUM VARIABLE § APPENDIX: DATA FOR Y-MOMENTUM VARIABLE § APPENDIX: EIGEN-SNAPSHOTS FOR MASS VARIABLE, WITH AND WITHOUT CLUSTERING § APPENDIX: RECONSTRUCTED MASS (BEFORE AND AFTER CLUSTERING) FOR ALL SEVEN TEST CASES § APPENDIX: EIGEN-SNAPSHOTS FOR X-MOMENTUM VARIABLE, WITH CLUSTERING § APPENDIX: RECONSTRUCTED X-MOMENTUM (AFTER CLUSTERING) FOR ALL SEVEN TEST CASES
http://arxiv.org/abs/2307.02269v1
20230705130818
SpaceNLI: Evaluating the Consistency of Predicting Inferences in Space
[ "Lasha Abzianidze", "Joost Zwarts", "Yoad Winter" ]
cs.CL
[ "cs.CL", "68T50", "I.2.7" ]
While many natural language inference (NLI) datasets target certain semantic phenomena, e.g., negation, tense & aspect, monotonicity, and presupposition, to the best of our knowledge, there is no NLI dataset that involves diverse types of spatial expressions and reasoning. We fill this gap by semi-automatically creating an NLI dataset for spatial reasoning, called SpaceNLI. [<https://github.com/kovvalsky/SpaceNLI>] The data samples are automatically generated from a curated set of reasoning patterns (see <ref>), where the patterns are annotated with inference labels by experts. We test several SOTA NLI systems on SpaceNLI to gauge the complexity of the dataset and the system's capacity for spatial reasoning. Moreover, we introduce a Pattern Accuracy and argue that it is a more reliable and stricter measure than the accuracy for evaluating a system's performance on pattern-based generated data samples. Based on the evaluation results we find that the systems obtain moderate results on the spatial NLI problems but lack consistency per inference pattern. The results also reveal that non-projective spatial inferences (especially due to the “between” preposition) are the most challenging ones. § INTRODUCTION Natural language inference (NLI) is a popular task that evaluates NLP systems on text reasoning skills. In the task, a system has to predict an inference relation from a premise text to a hypothesis sentence/phrase. Usually, the task is three- or two-way classification, depending on whether in the inference labels of entailment, neutral, and contradiction, the latter two are merged into non-entailment. The task is intended for evaluation of NLP systems on reasoning, however, the systems with competitive results on NLI benchmarks are often exploiting dataset biases (, inter alia) and their performance suffers from out-of-distribution NLI sample problems <cit.>. To better evaluate the reasoning skills of NLI systems, a series of works have been automatically or manually creating NLI datasets that specialize in certain semantic phenomena. While some of these datasets come with a training part, most of them are intended solely for evaluation. For example, several datasets have been dedicated to monotonicity reasoning <cit.>, negation was targeted by <cit.>, the dataset by <cit.> focuses on temporal and aspectual inferences, <cit.> semi-automatically generated NLI problems for implicatures and presuppositions. There are also NLI datasets that cover several semantic phenomena, having a separate section for each of the phenomena (, inter alia). While spatial reasoning has been included in several multi-modal QA datasets <cit.> and in a couple of text-based QA datasets <cit.>, to the best of our knowledge, no NLI dataset has specifically covered it.
[Even the FraCaS dataset <cit.>, which was curated by linguists and semanticists, doesn't cover spatial semantics within its nine sections.] This paper fills the gap by semi-automatically creating an NLI dataset for spatial inferences. First, we collected a diverse set of NLI problems inspired by the inference examples found in the literature on spatial semantics. Second, the NLI problems were manually converted into NLI patterns (see <ref>), and finally, we automatically generated a large number of NLI problems from the patterns. The paper makes two main contributions: C1. SpaceNLI: the spatial NLI dataset with diverse types of spatial inferences; the inference labels of the generated problems are highly faithful (97%) to the labels of the corresponding original patterns. C2. Pattern accuracy and its curve: they measure systems' performance on patterns and the consistency in predictions on samples from the same patterns. The conducted experiments answer the following research questions: Q1. How much spatial reasoning are current SOTA NLI systems capable of? A1. We found that the SOTA NLI systems have problems with fine-grained spatial inferences. Their performance drops by at least 24% compared to their results on common NLI datasets. Moreover, their consistency in predictions is sensitive to irrelevant lexical substitutions. Q2. What types of spatial inference problems are easy or challenging for the SOTA NLI systems? A2. The results showed that the non-projective spatial relations are the most challenging for the models. This was mainly due to the difficulty associated with “between” and its frequent occurrence in the evaluation dataset. § SPATIAL EXPRESSIONS AND INFERENCES §.§ Types of spatial expressions Spatial expressions consist of spatial prepositions and other expressions with spatial information (e.g., far, the left of, and in front of). They usually describe a relation between two entities, the figure and the ground. The site or path of the figure is the focus of the discussion and is characterized with respect to the ground. For example, in (9_1) and (10_1) from <ref>, Mary is a figure and the garden a ground. John is also a figure in the premise of (10_1). Spatial expressions are roughly divided into locative and directional expressions, where locatives can be further classified into projective and non-projective <cit.>. The locative expressions describe static, locative relations between the figure and the ground while directional ones describe a more dynamic relation involving a movement and/or path. An example with a directional preposition is Cindi walked into the market. The spatial expressions in <ref> are all locative except for from, which is directional. These locative expressions are non-projective since they require only the spatial location of the figure and the ground. In contrast, projective locatives additionally require further information from the ground in terms of a deictic frame of reference (i.e., an orientation structure). For example, the site of the house is not sufficient to interpret Mary's location in Mary is behind the house; it requires knowledge about the frame of reference of the house, in particular, what counts as the back side of the house. §.§ Types of spatial inferences We characterize spatial inferences depending on the type of spatial expressions licensing them. An inference might depend on several spatial expressions of different types, which makes partitioning the inferences challenging, if not impossible.
We define the following classes that represent a coarse-grained partition of spatial inferences. The classes will be later referred to in <ref>. [Licensing contradiction and neutral problems will be assumed from the perspective of a related entailment problem. For example, we assume that the neutral problem (16) in <ref> is licensed in the same way as its related entailment (15). Put differently, one can see (16) as an adversary to (15) and assume that solving (15) requires competence comparable to the one required for solving (16). ] Argument orientation In spatial literature, an argument orientation entailment identifies which argument of the verb is the figure of the spatial expression. For instance, (9_1) in <ref> show that Mary is the figure of the locative PP in the garden. In its original interpretation, the argument orientation entailment is not restricted to spatial expressions of a particular type. Here, we restrict the class of argument orientation to the entailment problems (and their neutral and contradiction counterparts) that come close to resolving a PP attachment. For example, correctly resolving the PP attachment in (9_1) boils down to the hypothesis. The problems in this class contain a hypothesis with a copula and a predicative spatial PP, where the PP is contrasted to a tightly related PP in the premise(s). For more examples of the NLI problems in the argument orientation class, see <ref>. Directional The directional class contains spatial inferences where directional spatial expressions play the key role. Examples of such inferences are given in <ref>. Some of these NLI problems pertain to a path-place relation: (47a) shows that walking into infers being outside; [Since moving along the path is related to the change of the location, sometimes spatial entailments interfere with tense and aspect. ] (41) entails being in the tunnel from the premise that states that the driving path was through the tunnel. (31a) combines a part-whole relation with the movement path. Projective This class contains inferences that hinge on a frame of reference introduced by projective spatial expressions. In principle, the frame of reference can introduce six directions that can be referred to using the expressions like front, behind, left, right, above, below, under, on top of, etc. (see the examples of NLI problems in <ref>). The NLI problems that contain on top of as only projective spatial expression, and when its projective interpretation is not crucial for the inference, are put in a different class. Non-projective We classify a problem as having non-projective inference if the inference is driven only by non-projective spatial expressions. Therefore, an occurrence of non-projective spatial expressions in a problem is necessary but not sufficient for assigning the problem to this class, e.g., see directional problems (31a) and (41). NLI problems that depend on spatial expressions with the semantics of order and proximity are also in this class, see between (80) and far (100) in <ref>. § DATASET CONSTRUCTION §.§ Pattern construction Patterns are labeled NLI problems with NPs replaced by variables as illustrated in <ref>. The NLI patterns are obtained from the seed NLI problems. To collect the latter, we extracted the initial 56 problems from <cit.> and <cit.>, where a majority of the problems were labeled as entailment due to obvious biases in the semantic literature towards licensing entailment. 
To create a representative and challenging NLI dataset for machine learning, we applied several revision phases to the problems: introducing new problems that either cover new semantic aspects of spatial expression or serve as a perturbed version of an existing problem. In the initial revision phase, four annotators divided the extracted problems and created slightly modified versions of them with an inference label different from the original. [ The annotators for the pattern construction consist of the authors of the paper, two linguist students, and one AI student. The guideline for creating inference problems can be found in the supplementary material. ] This was motivated by the current trends in the literature on adversarial, stress, and debiased datasets (, inter alia). For example, (16) is a perturbed example of (15). Where possible, NLI problems of a new type were also created using the similar spatial expressions found in the extracted problems. To validate the resulting pool of NLI problems (in total 162), following <cit.>, they were labeled on a 5-point Likert scale by three annotators. [ The question was to what extent the hypothesis sentence is true, given that the premises are true, with choices: definitely false, most likely false, unknown, most likely true, definitely true. We used two additional choices, difficult (unable to annotate due to the complex reasoning it requires) and skip (presence of an ungrammatical or nonsensical sentence). We used the brat annotation tool <cit.> for labeling. The annotation guideline is included in the supplementary material. ] After collecting the 5-point annotations, for each annotator, we picked a mapping of 5-point to 3-point that maximizes the inter-annotator agreement (avg. Cohen's κ=.71). The problems without majority labels were discarded and 111 problems remained. To better balance the inference labels and increase the coverage of spatial expressions, a second revision phase was carried out on the remaining problems. In several cases, problems with low annotator agreement were revised, e.g., changing the tense where it caused confusion or replacing a preposition with a weaker version (at↦near). All the new and revised problems (in total 63) were validated based on three samples: each problem was manually converted into a pattern by replacing NPs with variables, and three random NLI samples per pattern were generated (see <ref> for details), which were subsequently validated by three annotators. Finally, a third revision phase was carried out on the remaining problems to additionally decrease the overall and spatial type-specific label imbalance. The collected problems (in total 160) were treated as a seed by converting them into NLI patterns to generate a large amount of sample NLI problems from them. To illustrate the coverage of spatial expressions in the collected patterns, <ref> gives the complete list of spatial expressions for each entailment class. §.§ Sample generation We manually created NLI patterns from the initially collected NLI problems (<ref>) by replacing NPs with placeholders and specifying selection restrictions for them imposed by the verbs, spatial expressions, and gold inference labels (see <ref>). The selection restrictions imposed by spatial expressions are subtle and can affect gold labels or the naturalness of sentences. 
For example, if the figure is much larger than the ground, it can make the sentence infelicitous: the apple on the fridge and the apple near the fridge are preferred to the fridge under the apple and the fridge near the apple. Inferences driven by proximity-related spatial expressions are sensitive to the size of the objects. For instance, based on our conducted validations, Cindi is opposite to the cat is more likely to be neutral to Cindi is far from the cat, but the school is opposite to the house is more likely to contradict the school is far from the house. To meet selection restrictions and allow relative diversity of NPs in the generated samples, we defined a mini world with a domain containing 171 entities corresponding to common and proper nouns. The entities are organized in a taxonomy with 20 subclasses covering general types of entities (e.g., person, animal, vehicle), the projections of an argument in certain argument structures (e.g., enter in X, be in X, throw X), compatibility with projective spatial expressions, and size categories (S for entities comparable to small objects like book and cat, M to persons, and L to vehicles). Binary and ternary relations are defined based on the set unions of the products of entity sets and subclasses. To automatize the sampling of sound NLI problems from the patterns, we formatted the mini world in YAML and NLI patterns in XML. We implemented a procedure that samples problems from the patterns by filling in NP placeholders with definite NPs from the mini world and respecting the pattern-specific selection restrictions. For sanity checking, the procedure verifies that it can generate corresponding seed NLI problems for each pattern. To measure how faithfully the inference labels are transferred from seed and pattern NLI problems to the corresponding NLI samples, we used sampled problems in the second phase of validation when validating new NLI problems (see <ref>). The results showed that 79% of samples were unanimously labeled with the original label. After filtering out patterns with a relatively low agreement, this ratio increased to 97% for the samples generated from the validated patterns. The NLI problems sampled from the same pattern or related patterns are string-wise very close to each other, sometimes differing only in terms of occurrences of a single NP. Regardless of this similarity, we expect such problems to pose a challenge for NLI systems based on large language models (LLMs) as it has been shown that their predictions can be sensitive to a single-word substitution <cit.>. In addition to NPs, one could have allowed the replacement of other phrases in the NLI patterns, but this would have significantly complicated the definition of the mini world and generation of natural and sound NLI samples. § EXPERIMENTS §.§ Sample dataset We uniformly generated a spatial dataset of 32,000 NLI samples from 160 NLI patterns, i.e., 200 samples per pattern. We used the mini world as described in <ref>. The dataset statistics are given in <ref>. The inference labels are relatively balanced: each label being represented by at least 30% of the problems. Each spatial inference type counts at least 20% of the overall problems and 23% of label-specific problems. In contrast to the common biases in NLI datasets, a majority of the problems with negation are labeled as entailment, not contradiction. This is due to perturbed problems introduced in the revision phases (<ref>). 
Around 39% of problems have multiple premises, where three-premised problems occur only in the directional problems, the argument orientation problems contain only single-premised problems, and most of the multi-premised problems are in the non-projective problems. We refer to the generated dataset as SpaceNLI and use it in subsequent experiments. [We make the collection of the patterns, the generation code, and the sample dataset publicly available upon the acceptance of the paper. ] §.§ Evaluating SOTA NLI systems §.§.§ Standard accuracy We selected NLI models that have results comparable to the state of the art in NLI and evaluated them on SpaceNLI. The models were chosen based on their availability, tractable size, and high average accuracy (>90%) on the SNLI <cit.> and MNLI <cit.> datasets (see <ref>). The models are based on various large language models (LLMs) like DeBERTaV3 <cit.>, BART <cit.>, ALBERT <cit.>, XLNet <cit.>, etc. (see <ref>). The LLMs are fine-tuned on several NLI train datasets: SNLI, MNLI, FEVER-NLI <cit.>, ANLI <cit.>, LingNLI <cit.>, WANLI <cit.>. We use the models from the HuggingFace model hub [<https://huggingface.co/models>] and provide their corresponding hub names in <ref>. The results in <ref> show that DeBERTaV3-L#2, trained on a large collection of training datasets (885K problems in total), generalizes best on spatial reasoning (66.5%), achieving a substantial improvement (≥ 6.9%) over the other models. [The second best, DeBERTaV3-L#1, is based on the same LLM fine-tuned on a different combination of NLI datasets. Note that <cit.> deliberately removed SNLI from the training set as it negatively affected the accuracy of the model in their experiments. ] §.§.§ Consistency & pattern accuracy To evaluate the models on the consistency of their predictions for NLI problems from the same pattern, we define the pattern accuracy (PA) score and its curve. The PA curve records the PA score of a model for each consistency threshold. Informally, the PA score with a consistency threshold t is the ratio of NLI patterns for which a model correctly classifies at least a fraction t of the samples generated from them. For example, a PA of 50% with a threshold of 90% means that for half of the NLI patterns, the model is able to correctly classify at least 90% of their sample problems. The formal definition of the PA with a threshold t is: PA_t(Ŷ, 𝐲) = (1/N) ∑_i=1^N [ (∑_k=1^M_i δ(ŷ_k^i = y^i)) / M_i ≥ t ], where Ŷ = (ŷ^i_k)_1≤ i ≤ N, 1≤ k ≤ M_i are the predictions, with ŷ^i_k the prediction for the k-th sample of the i-th pattern, N is the number of patterns, M_i is the number of samples for the i-th pattern, 𝐲 = (y^i)_1≤ i ≤ N are the gold labels of the patterns, δ is the Kronecker delta, and the outer brackets denote the Iverson bracket (equal to 1 if the enclosed condition holds and 0 otherwise). While DeBERTaV3-L#2 gets the best score on the SpaceNLI problems, based on the PA scores in <ref>, it shows high consistency (PA_0.95 or PA_1.0) in fewer NLI patterns than the other two competing models, DeBERTaV3-L#1 and ALBERT-XXLv2. PA curves of the NLI models provide a closer look at this contrast (see <ref>). While the curve of DeBERTaV3-L#2 outperforms other models by a margin, it is noteworthy that it achieves this by correctly classifying sample problems from patterns for which it can hardly solve half of the samples (this is visible in the complete curves in <ref>). Its curve decreases drastically after the 95% consistency threshold, while ALBERT-XXLv2 and DeBERTaV3-L#1 maintain very high consistency for >47% of NLI patterns. This demonstrates that a high-performing model is not necessarily the most consistent across patterns.
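A direct implementation of PA_t and the PA curve follows immediately from the definition above. The sketch below is illustrative only; the toy predictions and label names are not taken from the dataset.

import numpy as np

def pattern_accuracy(pred_by_pattern, gold_by_pattern, t):
    """PA_t: fraction of patterns for which at least a fraction t of the
    generated samples is classified with the pattern's gold label."""
    hits = []
    for preds, gold in zip(pred_by_pattern, gold_by_pattern):
        per_pattern_acc = np.mean(np.asarray(preds) == gold)
        hits.append(per_pattern_acc >= t)
    return float(np.mean(hits))

def pa_curve(pred_by_pattern, gold_by_pattern, thresholds=np.linspace(0, 1, 101)):
    """PA_t evaluated over a grid of consistency thresholds."""
    return [pattern_accuracy(pred_by_pattern, gold_by_pattern, t) for t in thresholds]

# Illustrative usage: two patterns with 5 samples each.
preds = [["E", "E", "E", "N", "E"], ["C", "C", "N", "N", "C"]]
golds = ["E", "C"]
print(pattern_accuracy(preds, golds, t=0.8))   # -> 0.5 (only the first pattern qualifies)
print(pattern_accuracy(preds, golds, t=0.5))   # -> 1.0 (both patterns qualify)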
RoBERTa-L and BART-L obtain similar accuracy scores, but RoBERTa-L is more consistent in more NLI patterns than BART-L while the latter gets slightly more NLI problems for inconsistently predicted patterns. The complete curves in <ref> show how the curves swap places after the consistency threshold of 50%. This shows that the standard accuracy (i.e., based on NLI problem samples) can blur the fine distinction in consistency between the models. The dispersion of the curves at the lowest end of the consistency threshold is twice as large as at the highest end. This shows that the model predictions diverge more in the coverage of patterns than in the consistency per pattern. In other words, the contrast confirms the sensitivity of the models towards the inference-preserving word substitutions. §.§.§ Few-shot learning experiments We measured the difficulty of the SpaceNLI problems in terms of few-shot learning experiments. We used 100 samples per pattern as a test set while the other 100 samples per pattern were used for drawing a few samples for each pattern. In this way, the patterns are fully shared between the training and test sets, but no sample NLI problem is in both sets. For each number of shots, we carried out the sample drawing process three times. We used two NLI models: a high-performing NLI model, RoBERTa-L_SMFA from <cit.>, and a vanilla NLI model based on the large RoBERTa pretrained language model <cit.>. The results of the few-shot experiments are in <ref>. Finetuning RoBERTa-L_SMFA on a single sample of each pattern increases the sample-based accuracy on the test set by 14%. Each additional sample further boosts the model's accuracy. Almost perfect accuracy (>99%) is reached when 20 samples per pattern are seen during the finetuning. The results show that the lexical variability poses a challenge to the high-performing NLI model as it needs to be finetuned on at least five samples for every pattern of the test set to achieve a high score. The challenge coming from the lexical variability and the SpaceNLI patterns is further emphasized by the relatively low results of RoBERTa Large. Even after being finetuned on the 20 samples of each NLI pattern, the model is still far from the high performance on unseen samples (but seen patterns). The relatively low results can also be partially attributed to the low ratio between the number of training samples and the large number of the model's trainable parameters. § ANALYSIS To find out what type of inferences the models find challenging, we analyze the models' performance per inference type. <ref> shows the sample- and pattern-based accuracy scores of the models per spatial inference type as defined in <ref>. The model ranking based on the sample accuracy varies across the inference types. For instance, the best model, DeBERTaV3-L#2, remains at the top of the rankings for all inference types with quite a margin except for the projective type. On average, non-projective spatial inferences are the most challenging for the models. The easiest of the types is argument orientation, the type that is closest to the PP attachment task. For the other inference types, projective inferences are harder than directional ones. The apparent distinction in the scores between the inference types is also preserved for the PA_0.95 score (shown with the dark bars in <ref>).
The fine-grained analysis additionally shows that the best model, DeBERTaV3-L#2, suffers least in terms of consistency on the projective inferences while its performance on this inference type is not among the best. Based on the results in <ref>, the non-projective NLI patterns and samples are the most challenging for the SOTA models. When looking closer at the set of non-projective problems, it turns out that it contains a high number of problems (46%) with the spatial expression “between“ (as shown in <ref>), and these problems are specially challenging due to complex semantics of “between”. The average accuracy of the models on such NLI samples is 41.6%. This is lower than the average sample-based accuracy (46.1%) on entire SpaceNLI and much lower than the average sample-based accuracy (54.1%) on the other part of the non-projective samples. We further zoom in on the NLI patterns and measure a model's probabilistic predictions for the patterns. Namely, following <cit.>, we measure a model's confidence and variability. Originally the dataset cartography <cit.> was used to analyze the training dynamics of a model across the epochs and identify training samples that are easy or difficult for learning. In contrast, we use dataset cartography for analyzing evaluation dynamics across patterns and identifying easy and hard ones. [Put differently, iterative classification of the same training sample across epochs, is replaced with the classification of the same NLI pattern based on its samples. ] <ref> illustrates the pattern-based evaluation dynamics of RoBERTa-L <cit.>, an average model based on the evaluations. For instance, NLI pattern (102f) happens to have one of the most variable samples according to the model predictions: the mean and the standard deviation of the probabilities the model assigns to the entailment class of the samples of (102f) are 0.45 and 0.35, respectively. ()     NP_1 has hidden NP_2 behind NP_3.     NP_2 is not in NP_3. The evaluation cartography shows that the predictions vary mostly for entailment patterns (in green). Most of the hard patterns are neutral ones (in blue) and vice versa. Contradiction patterns (in red) tend to be easy with some variability. § RELATED WORK Several works have automatically sampled NLI problems from curated patterns/templates. <cit.> generated the implicature and presupposition diagnostic dataset IMPPRES from pre-defined templates. <cit.> constructed the HANS dataset by designing templates of NLI problems that support or refute certain inference heuristics, which were later used to generate NLI problems. <cit.> used the template language from <cit.> to produce NLI problems involving negation, Boolean connectives, quantifiers, cardinals, conditionals, and comparatives. These works all use restricted vocabulary while generating samples from the patterns. With its pattern-based construction and restricted vocabulary, SpaceNLI comes close to the IMPPRES <cit.> and HANS <cit.> datasets. Unlike these datasets, SpaceNLI involves multiple-premised problems and puts more emphasis on satisfying selection restrictions to prevent nonsensical sentences. Based on the nature of NLI problems, SpaceNLI resembles FraCaS <cit.> as both contain inference problems often found in textbooks on formal semantics. Unlike FraCaS, the inference labels of patterns in SpaceNLI are quite balanced and the number of spatial NLI patterns is twice the size of the largest section in FraCaS. 
There have been attempts to identify semantic phenomena in existing NLI datasets, including aspects of spatial reasoning. By looking up certain keywords, <cit.> automatically detect NLI problems in MultiNLI <cit.> that might contain spatial expressions. They create a mutated sample from the original NLI problem by negating the sentence with the potential spatial expression. <cit.> annotate MultiNLI problems based on the semantic aspects required by the inference label. Their taxonomic categories include the spatial subcategory, grouped with the relational, temporal, causal, and co-reference subcategories. The problems in SpaceNLI are substantially more diverse from a semantic perspective than the MultiNLI problems that were identified by <cit.> and <cit.>. The MultiNLI dataset is crowd-elicited and doesn't have problems with sufficient depth in spatial reasoning. § CONCLUSION To the best of our knowledge, we have created the first spatial inference dataset that involves diverse spatial inference types. The structure and the evaluation protocol are unique as we focus on performance on the NLI patterns and consistency across the samples in the pattern, instead of focusing on mere quantitative accuracy based on the NLI problems/samples. The evaluation protocol tests whether models can consistently recognize inference patterns while generalizing over irrelevant lexical substitutions. The more consistent a model is in its predictions, the less unexpected its behavior becomes. The SOTA NLI models show moderate generalization capacity on spatial problems. While the top-performing model gets the highest overall accuracy, it is ranked third when it comes to the consistency of predictions inside the patterns: correctly predicting at least 95% of the samples per pattern. The introduced pattern accuracy (PA) curves provide a more fine-grained distinction between the models: the models with comparable standard accuracy scores might substantially differ in the consistency of their predictions. Overall, the performance of models drops ca. 10% when raising the consistency threshold to 95%. This illustrates that the predictions of the SOTA models are sensitive to lexical replacements that have no effect on the semantics of the inference. The evaluation results revealed that the most challenging inference type is associated with non-projective locatives mainly due to the complex semantics of “between” while the argument orientation type is the easiest. The latter is somewhat expected as the problems in the argument orientation type are close to the task of PP attachment which LLMs are expected to be good at. § ACKNOWLEDGMENTS This work was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 742204). We would like to acknowledge the help from three student assistants with the data annotation and thank the anonymous reviewers for their helpful comments. § RESULTS <ref> represents the extended version of <ref>. Note that the area under the curve corresponds to the standard accuracy based on the NLI problems.
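The note above about the area under the PA curve can also be checked numerically: when every pattern contributes the same number of samples (200 in SpaceNLI), integrating PA_t over t in [0, 1] recovers the standard sample-based accuracy. A small sketch with simulated predictions (not actual system outputs) illustrates this:

import numpy as np

rng = np.random.default_rng(0)
n_patterns, n_samples = 160, 200
golds = rng.choice(["E", "N", "C"], size=n_patterns)
# Simulated predictions: each pattern gets its own error rate; "X" stands in
# for any incorrect label, which is all the accuracy computation needs.
error_rates = rng.random(n_patterns)
preds = [[g if rng.random() > err else "X" for _ in range(n_samples)]
         for g, err in zip(golds, error_rates)]

per_pattern_acc = np.array([np.mean(np.asarray(p) == g)
                            for p, g in zip(preds, golds)])
sample_accuracy = per_pattern_acc.mean()        # equal samples per pattern

thresholds = np.linspace(0.0, 1.0, 1001)
pa_curve = np.array([(per_pattern_acc >= t).mean() for t in thresholds])
auc = np.trapz(pa_curve, thresholds)            # area under the PA curve

print(f"sample accuracy = {sample_accuracy:.4f}")
print(f"PA-curve AUC    = {auc:.4f}")           # approximately equal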
http://arxiv.org/abs/2307.02757v1
20230706034115
Wireless Multi-Agent Generative AI: From Connected Intelligence to Collective Intelligence
[ "Hang Zou", "Qiyang Zhao", "Lina Bariah", "Mehdi Bennis", "Merouane Debbah" ]
cs.MA
[ "cs.MA" ]
Wireless Multi-Agent Generative AI: From Connected Intelligence to Collective Intelligence Hang Zou, Qiyang Zhao, Lina Bariah, Mehdi Bennis, and Mérouane Debbah. August 1, 2023 ========================================================================================== The convergence of generative large language models (LLMs), edge networks, and multi-agent systems represents a groundbreaking synergy that holds immense promise for future wireless generations, harnessing the power of collective intelligence and paving the way for self-governed networks where intelligent decision-making happens right at the edge. This article lays the stepping-stone for incorporating multi-agent generative artificial intelligence (AI) in wireless networks, and sets the scene for realizing on-device LLMs, where multi-agent LLMs are collaboratively planning and solving tasks to achieve a number of network goals. We further investigate the profound limitations of cloud-based LLMs, and explore multi-agent LLMs from a game theoretic perspective, where agents collaboratively solve tasks in competitive environments. Moreover, we establish the underpinnings for the architecture design of wireless multi-agent generative AI systems at the network level and the agent level, and we identify the wireless technologies that are envisioned to play a key role in enabling on-device LLMs. To demonstrate the promising potential of wireless multi-agent generative AI networks, we highlight the benefits that can be achieved when implementing wireless generative agents in intent-based networking, and we provide a case study to showcase how on-device LLMs can contribute to solving network intents in a collaborative fashion. We finally shed light on potential challenges and sketch a research roadmap towards realizing the vision of wireless collective intelligence. § INTRODUCTION Generative AI has evolved as a powerful branch of AI, endowing machines with algorithms that enable them to create original content, such as images, text, and human-like conversations. In particular, LLMs, generative AI models trained on massive unlabeled textual datasets, have shown impressive capabilities in many applications, such as question answering, language understanding, reading comprehension, code generation, and mathematical and common sense reasoning <cit.>. As a promising example of LLMs, the recent release of ChatGPT, based on GPT with 175B parameters (GPT-3) and 1 trillion parameters (GPT-4), has showcased remarkable capabilities of large-scale LLMs in content generation tasks. ChatGPT plugins allow the LLM to access up-to-date information and execute tasks while interacting with the real world. More recently, the open-source Falcon <cit.> has outperformed GPT-3 with only 40B parameters, 75% of the training cost, and 20% of the inference compute budget; similar results were achieved by LLaMA with 65B parameters <cit.>. On the other hand, the PaLM-2 model has excelled at advanced reasoning tasks including code generation and maths <cit.>. Such outstanding results of various LLMs have paved the way for integrating LLMs in different domains, such as robotics, telecom, and healthcare <cit.>. The maturity of LLMs constitutes a stepping stone towards realizing the vision of AGI, a concept broader than AI that encompasses highly autonomous systems possessing general intelligence and cognitive capabilities comparable to those of humans.
Within this context, LLM will play an essential role in AGI-based systems through their abilities to perform complex tasks on multi-modal data across many domains, with only a few examples <cit.>. The remarkable capabilities of LLMs are owed to the Transformer architecture, rooted in the self-attention mechanism <cit.>. Leveraging multi-head attention, Transformers capture long-range dependencies in parallel by attending a word to all previous words, or to all other words in the text. With pre-training on huge unlabeled corpus, generative models can learn universal knowledge representation, and can be further tuned to narrow their scopes into a particular domain. Tuning generative LLMs on specific tasks or domains can be done via 1) fine-tuning by adapting pre-trained weights to task-dependent loss or domain-specific dataset; 2) prompt-tuning by applying task description or instruction with a few shot examples to help LLM to understand the task; 3) instruct-tuning by RL to ground LLM into real-world context. Accordingly, LLMs running on wireless devices should understand domain specific knowledge and interact effectively with the environment to perform specific tasks. Grounding LLMs in a real-world context with modular and causal knowledge is crucial to produce scalable and lightweight on-device LLMs. Such knowledge should be transferred in wireless networks and encoded into on-device LLMs, to perform logical decisions <cit.>. This includes planning, that breaks down high-level goals into low-level tasks and reasoning, that deconstructs complex problems into priors and beliefs. Grounding, planning, and reasoning allow on-device LLM to act as autonomous agents. Wireless networks represent a promising field for the deployment of generative multi-agent system, where multiple on-device LLM plan and solve tasks in a collaborative manner. Specifically, multi-agent planning is essential to decompose a complex task that requires multi-modality sensors and multi-task executors across different wireless devices. Moreover, since on-device LLM encodes specific knowledge due to resource constraints, collective intelligence from multiple LLM is essential to effectively complete complex tasks. To achieve this, generative multi-agent games can be applied with LLM to solve advanced multi-task reasoning problems <cit.>. Furthermore, LLM can be used with multi-agent RL to learn optimal collaborative goal-oriented behaviours. §.§ Related work The release of ChatGPT has led to a fast growing research in autonomous and generative agents. Several recently emerged frameworks expand on ChatGPT from autonomous task planning and reasoning, to multi-agent generative systems. As an example, BabyAGI <cit.> is a task-driven autonomous agent built on LLM, that can generate, execute and prioritize tasks in real-time. Similarly, Auto-GPT <cit.> is an AI agent that attempts to achieve a goal specified in natural language. It chains together LLM "thoughts" in an infinite loop of reasoning, and planning the next action. AgentGPT is a user interface powered by the idea of BabyAGI and AutoGPT. Another line of work represented by HuggingGPT in <cit.> developed a collaborative system utilizing multiple AI models to complete a task. In HuggingGPT, LLM use language as an interface to connect numerous AI models in Hugging Face for solving complicated tasks. 
In this framework, the LLM acts as a controller to manage and organize the cooperation of expert AI models, which pushes the boundary of LLMs towards applications requiring domain-specific knowledge. Furthermore, several frameworks on multi-agent LLMs have been developed. A role-playing communicative agent framework called CAMEL has been developed in <cit.>, in which an agent receives an idea to implement from a human, then creates an AI assistant and an AI user agent. The AI user assigns tasks for the AI assistant to execute, and plans new tasks according to the results. The two agents collaboratively communicate by chatting with each other to solve the specified task. Moreover, a generative agent framework in a “west world" simulation has been developed in <cit.>. It is a sandbox of multiple AI agents that can interact with each other and simulate human behavior. Each agent can observe the environment, plan a sequence of actions to execute, create high-level reflections of observations in a memory stream, and formulate long-term plans. The agents communicate in full natural language using LLMs to collaboratively complete a planned task. The state of the art in generative agents thus explores multiple LLMs collaboratively planning and solving tasks. However, most works are executed in the cloud or in simulation environments, where the cost of communication, computing, and storage is ignored. In wireless generative agent networks, LLM models and their inter-agent communication require a new system design and optimization, which is the goal of this paper. §.§ Contributions Motivated by the above discussion, in this article we establish the initial foundation for integrating the generative agents paradigm in wireless networks. In particular, we lay the first building block toward realizing collective intelligence in 6G, where we articulate the collaborative role of multiple on-device LLMs in solving Telecom tasks, and we offer a future-oriented perspective on the architecture of multi-agent LLM-enabled wireless networks. We further provide insights into the technologies that have manifested themselves as enablers for multi-agent LLMs in wireless networks, focusing on: 1) multi-agent LLM planning and reasoning to break down high-level goals into low-level tasks; 2) multi-agent LLM games and RL to learn the optimal collaborative behaviours from competing actors to achieve a goal; 3) semantic communication that transfers knowledge in the network for system 2-type[System 2-type model refers to human-like cognition, based on reasoning and logical deduction, for solving complex problems and making calculated decisions.] on-device LLMs. With the aim of demonstrating its promising potential, we shed light on the anticipated applications of generative agents in wireless networks, and we present a case study showcasing a scenario where reasoning in multi-agent LLMs is required to achieve a network-level energy-saving goal while guaranteeing users' transmission rates. Finally, we explore future research directions for the successful realization of collective intelligence through on-device LLMs. § FROM CLOUD LLM TO MASSIVE ON-DEVICE LLMS §.§ Recent advances in cloud-based LLMs From an architectural perspective, the evolution of LLMs can be classified into encoder-only (BERT-like), decoder-only (GPT-like), and encoder-decoder (T5) models. The decoder-only GPT utilizes the autoregressive (AR) transformer with multiple stacked transformer decoder layers, equipped with masked self-attention.
This mechanism enables GPT-like LLMs to attend to all previous tokens when predicting the subsequent one. GPT-4 (with plug-ins and web browsing) was reported to successfully score in the top 10% of various academic and professional exams with improved reasoning capability. On the other hand, a number of lightweight LLMs have been released, such as LLaMA and Falcon. These models have fewer parameters (Falcon-1B) and a smaller number of pre-training tokens (1T), yielding the same performance as larger models <cit.>. The encoder-only BERT is pre-trained on masked language modeling (MLM) and next sentence prediction (NSP) tasks. Specifically, multiple tokens from a sequence are masked to force the model to acquire bidirectional contextual information during the pre-training process. The non-AR nature of MLM allows faster and parallelized computing by dynamically unfolding the masked tokens at all positions. Enhanced versions of BERT can be achieved by optimizing the hyper-parameters and the pre-training tasks, i.e., RoBERTa and ALBERT, respectively. On the other hand, the encoder-decoder architecture, such as T5, converts diverse tasks that involve generating sequences or text into a text-to-text framework for pre-training. However, this architecture is computationally expensive to train, and has limited contextual understanding and reasoning. Following the success of text-based LLMs, multi-modal LLMs have emerged to proliferate the potential use-cases of LLMs. Among others, DALL-E is a zero-shot text-to-image generative model <cit.> with two pre-training stages. During the first stage, a discrete VAE is trained to compress images into a grid of tokens. In the second stage, the image tokens are concatenated with text tokens in order to train an AR transformer to model the joint distribution over the text and image tokens. With an enhanced approach, DALL-E 2 utilizes a CLIP similarity matrix to train a text encoder and an image encoder. The text embedding is then passed through a diffusion or AR prior to produce an image embedding, and a diffusion decoder generates the images <cit.>. A comparison of common LLMs is given in Table <ref>. §.§ On-device LLMs Challenges Despite their promising capabilities, building intelligent wireless networks that are empowered by generative agents is extremely challenging using current LLMs. This is due to their huge parameter sizes, which limit their deployment at the edge. In particular, it was demonstrated that, in few-shot scenarios, models' memory requirements dramatically increase as the number of parameters increases. For example, it takes roughly 86 GB of GPU memory for Falcon-40B to run inference. In addition, LLMs have long inference times when solving sophisticated tasks (planning and reasoning), which significantly increases the communication latency. Finally, updating or synchronizing knowledge from a central LLM to on-device LLMs could be energy-consuming through conventional techniques, such as federated learning. With the aim of realizing lightweight LLMs, neural network compression techniques such as weight sharing, pruning, knowledge distillation, low-rank decomposition, and quantization can be applied. Within the context of compression, it should be highlighted that, since LLMs are few-shot learners rather than task-specific models, compression techniques should preserve the LLM's transfer abilities.
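To give a sense of scale for these memory figures, the following back-of-the-envelope sketch in Python estimates inference memory as parameters times bytes per parameter, scaled by an assumed overhead factor for activations and the KV cache; the overhead value and the resulting numbers are illustrative assumptions rather than measured requirements.

# Rough GPU-memory estimate for LLM inference: parameters x bytes per parameter,
# scaled by an assumed overhead factor for activations and the KV cache.
# The overhead factor and the printed figures are illustrative assumptions only.
def inference_memory_gb(params_billion: float, bits_per_param: int, overhead: float = 1.2) -> float:
    bytes_per_param = bits_per_param / 8
    return params_billion * 1e9 * bytes_per_param * overhead / 1e9

for name, params_b in [("Falcon-7B", 7), ("Falcon-40B", 40), ("GPT-3 (175B)", 175)]:
    print(f"{name}: ~{inference_memory_gb(params_b, 16):.0f} GB at 16-bit "
          f"vs ~{inference_memory_gb(params_b, 4):.0f} GB at 4-bit")

Such estimates illustrate why lowering the numerical precision is attractive for on-device deployment, and motivate the quantization and adapter-based techniques discussed next.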
In a different context, QLoRA is a quantization technique that quantizes an LLM to 4-bit precision and then adds a small set of learnable low-rank adapters, which are tuned by back-propagating gradients through the quantized weights. This significantly reduces the GPU memory and latency in fine-tuning and inference. Although compression and quantization can reduce the model size and computation requirements, due to the non-modular nature of LLMs' knowledge, such approaches can yield degraded performance on specific tasks. § MULTI-AGENT COLLECTIVE INTELLIGENCE VIA GENERATIVE LLM §.§ Multi-Agent LLM Network Architecture Deploying generative AI in wireless networks requires the realization of collective intelligence from multiple on-device LLMs that can interact with the environment, plan actionable tasks from a goal, and solve the tasks collaboratively by exchanging knowledge. Our aim in this section is to set the groundwork for integrating generative agents into future wireless networks, in which a wireless generative agent leverages LLMs to perform closed-loop task planning, execution, and optimization. As illustrated in Fig. <ref>, this process can be achieved through multiple steps. First, the agent deconstructs higher-level intents or goals and plans sequences of lower-level tasks over time. Afterwards, it perceives the environment, makes decisions and takes actions accordingly, and then optimizes the decision policy from the rewards. According to the results, the agent creates new tasks until the intent's goal is achieved, and then generates a final response. In order to realize collective intelligence, the aforementioned procedure can be performed through multiple wireless generative agents, in which the agents exchange knowledge through the network to collaboratively plan tasks, take actions, and optimize policies, based on the architecture shown in Fig. <ref>. Note that knowledge is an abstracted representation of the data, which can be encoded into LLMs for performing specific tasks. From the network perspective, intents from humans or machines are provided to generative agents through different wireless terminals. These terminals create prompts to an on-device LLM to complete each step, as shown in Fig. <ref>. The tasks are planned collaboratively among multiple generative agents, to best leverage the knowledge of different LLMs and the capabilities of different devices. The on-device LLM can obtain domain-specific knowledge from the cloud-based LLM, or from other on-device LLMs when the planned task is received from other agents. From the device perspective, a wireless generative agent has a perceiver (sensor) to observe the environment and an actor (controller) to execute the decisions. The on-device LLM extracts semantic information from the observed raw data in multiple modalities (text, image, sound) and stores it in a memory stream for planning new tasks in the future. Accordingly, to perform a particular task, the relevant semantic information is retrieved for taking an action. After completing its planned actions from the received higher-level tasks, the agent can further create lower-level tasks and send them to other agents to complete the goal. §.§ Multi-Agent LLM Technologies §.§.§ Planning and Reasoning A high-level goal for wireless agents usually contains complex tasks or problems, and therefore, planning is an essential step for the generative agents to create actionable tasks over time to achieve a specific goal.
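To make this closed-loop procedure concrete, the sketch below outlines one possible realization of the agent loop in Python; llm_plan, perceive, act, and goal_achieved are hypothetical placeholders for the on-device LLM, perceiver, and actor described above, not components of any specific framework.

# Minimal sketch of the closed-loop wireless generative agent described above.
# llm_plan, perceive, act and goal_achieved are hypothetical callables standing
# in for the on-device LLM, the perceiver (sensor) and the actor (controller).
def run_agent(intent, llm_plan, perceive, act, goal_achieved, max_rounds=10):
    memory = []                               # memory stream of observations and results
    tasks = llm_plan(intent, memory)          # deconstruct the intent into low-level tasks
    for _ in range(max_rounds):
        for task in tasks:
            observation = perceive(task)      # sense the environment for this task
            result = act(task, observation)   # execute the decision via the actor
            memory.append((task, observation, result))
        if goal_achieved(intent, memory):     # stop once the intent's goal is met
            return memory
        tasks = llm_plan(intent, memory)      # otherwise plan new tasks from the results
    return memory                             # log used for the final response or for other agents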
Currently available LLMs have limited capabilities in solving problems that require complex reasoning and multi-step planning. This can be a limiting factor in the deployment of wireless generative agents in networks with ultra-high reliability requirements. Therefore, system 2-type LLMs can endow wireless generative agents with reasoning capabilities. Recently, many works have shown the reasoning abilities of LLMs through prompt engineering. For instance, chain-of-thought prompting, which prompts the LLM with a sequence of short sentences serving as intermediate reasoning steps to solve problems, has demonstrated outstanding performance on a wide range of reasoning tasks. Thereafter, the self-consistency strategy was proposed to replace the greedy decoding of chain-of-thought prompting. Furthermore, tree of thoughts is another method which maintains a tree over each intermediate step of thought so that the LLM can deliberately evaluate multiple paths and adjust decisions as needed for optimal outcomes. As an alternative to prompt optimization, which can be regarded as outcome supervision, one possible solution is to supervise along the reasoning process. This requires LLMs to properly model and encode domain-specific knowledge for solving different tasks. Furthermore, multi-agent planning can reduce the cost of inference or fine-tuning by distributing the task between multiple LLM devices with domain-specific knowledge. §.§.§ Multi-agent LLM Games and RL Current LLMs are trained to generate content that is helpful, reliable, and harmless to the users. These attributes can be achieved only if one-to-one interaction between a human and an agent is considered. However, in the presence of multiple generative agents, it is still debatable whether these features can be realized. For instance, egocentric agents, which extensively serve their corresponding users, can hinder the normal operation of other agents or be unresponsive to some collaborative tasks. On the contrary, selfless agents which are instructed to be helpful can be vulnerable to malicious attacks. Hence, it is essential to strike a balance between selfishness and selflessness when designing generative agents. Within this context, game theory constitutes a suitable tool to model multi-agent LLMs and analyze their behaviour. Initial attempts to understand the interactions between multiple LLMs (i.e., LLM games) are presented in <cit.>. Multi-agent RL (MARL) is another approach to achieve the optimal collaborative behavior in a multi-agent system, which can also be applied to LLMs. In particular, it is essential for the wireless agents to interact with the environment, where RL tunes the actions generated by the LLM with environment rewards in order to ground the LLM in a real-world context. MARL-enabled LLMs can further model the interaction between the wireless agents, using the LLMs' knowledge, to learn the optimal collaboration policy and communication protocols among distributed LLM agents. In doing so, communication costs can be reduced as agents converge towards optimal decision policies. §.§.§ Semantic Information and Communication The interaction between the multiple wireless generative agents in collaborative systems will result in a huge amount of information being generated, exchanged, and perceived. Shannon communication, rooted in level A of Weaver's three-level model of communication, solely aims at transmitting symbols accurately and efficiently, while ignoring the conveyed meaning of the transmitted messages.
In contrast, in semantic communication, the transmitter sends only useful/relevant information to the receiver in order to solve a given downstream task. This paradigm is possible if the goal/objective of the communication is known on both sides. Recalling that wireless generative agents aim to collaboratively solve a specific problem, semantic communication can transfer task-specific knowledge to LLMs on targeted devices. Furthermore, agents can communicate based on their abstracted state information to improve multi-agent cooperation with efficient protocols. Similarly, semantic information can enable LLMs to integrate multi-modal raw data in a common concept space. In doing so, the data exchanged and stored among wireless devices can be largely reduced. Moreover, LLMs can exploit abstracted "thoughts" from the perceived data by modelling it using semantic information on a topological space. Hence, LLMs can perform effectively in short-term actions with lower-order information, and in long-term plans with higher-order abstractions. This also enables LLMs to encode domain- or task-specific knowledge and thereby reduce the model size and computing costs, making them suitable for resource-constrained devices. § APPLICATIONS OF MULTI-AGENT LLM NETWORKS §.§ Intent-driven Network Automation With the vision of enabling network automation in future wireless networks, 6G is expected to support intent-based networking, which aligns the network operation with particular intents and objectives. Although intent-driven networks are defined in 3GPP and IETF, their functionalities and their flexible deployment are still limited. On the one hand, intent translation and orchestration rely heavily on service models and policies, which limits their capability to handle new scenarios. On the other hand, intent-driven networking has not yet been applied in general networks and user devices, including radio access networks. LLMs can potentially extend the concept of intent-driven networks to a general-purpose, self-designed communication network. OpenAI's ChatGPT (GPT-4) has recently been equipped with a number of plugins, allowing an autonomous agent to execute the commands generated by GPT and produce outputs in natural language. Inspired by this, intent-driven networking can utilize LLMs to break down higher-level intents (e.g., reducing the network energy consumption by 5%) into a sequence of lower-level tasks (e.g., tuning transmit power, channel measurement), and instruct the related executors (e.g., a gNB) to take actions. The results are then prompted to the LLM, which will add subsequent tasks, if necessary, until the intent's goals are achieved. In a multi-agent LLM network, each network device will leverage LLMs to analyze the perceived information, including the observations from the environment and actions taken by other agents. Once the agent plans and prioritizes a set of sub-tasks, it takes actions on local tasks, which can be accomplished by the local actor. After that, it utilizes LLMs to generate a protocol language including the remaining and newly created sub-tasks, and sends it to other agents through the wireless network. Each agent conducts such loops and communicates with others until the added tasks are completed and the goal is achieved. §.§ Case Study: Wireless Energy Saving The vision of generative agent-enabled intent-driven networking requires LLMs to deconstruct a wireless network goal into sub-tasks or sub-problems and solve them via multi-agent cooperation.
One important aspect of this process is the efficiency of multiple LLM instances in formulating and solving problems pertinent to wireless networks. In this section, we consider a scenario with multiple mobile users playing a game to reduce the total power consumption. Our aim is to evaluate the capability of on-device LLMs in solving a wireless communication problem, and to show their potential in enabling an end-to-end autonomous network. In our setup, we assume a network with K = 4 users transmitting in a shared spectrum. The users have channel gains g = (1.21, 2.01, 0.58, 0.13), bandwidth b = 15 kHz, and noise n = 1 dB. The game starts with an initial transmit power p = (2, 4, 5, 6) W. The base station has the goal of "reducing network energy consumption by Δ p = 0.85 W" while maintaining individual transmission rates above r = (3.50, 15.80, 4.40, 1.00) kbps, respectively. We create an independent LLM instance for each user. The above environment information is given to each LLM instance at the beginning of the game. Moreover, the users compete for the power-saving goal under mutual interference. Note that, when implemented in a real wireless network, these parameters are measured by on-device perceivers and provided to the on-device LLMs. We generate our results using the GPT-4 model. The prompts we created for the LLMs to play this game are shown in Fig. <ref>. In the beginning, a prompt consisting of a general description, the game rules, the goal of the base station, and the individual goal of each user is passed to each LLM instance. In each round, the LLM instances are requested, through a prompt, to choose a new power level p_i given the actions of all users in the previous round. The transmission powers chosen by each GPT-4 agent in each round are illustrated in Fig. <ref>. The obtained results are compared with the minimum theoretical power required for the transmission rate r. It can be seen that the LLM-enabled agents managed to achieve the power-saving target Δ p at round 2, with three users allocated the theoretical minimum power needed to keep their transmission rates above r (the exception being user 1). This can be further noticed in Fig. <ref>, which shows the users' rate difference to r: user 1's rate could be reduced further, and hence additional transmission energy could be saved. Furthermore, users 1 and 4 violate the minimum transmission rate goal in round 3. The simulation results in Fig. <ref> also show that current GPT-4 experiences some difficulties in maintaining multiple goals when the number of agents increases. This use-case demonstrates that an LLM (GPT-4) can perform mathematical reasoning on a wireless communication problem in order to achieve a global power-saving target and individual transmission-rate targets. The takeaways of this study are: 1) LLMs have the potential to analyze a radio environment, and to formulate and solve a radio network problem to achieve a network operation goal; 2) multi-agent LLMs can achieve a cooperative goal while keeping their own KPIs; 3) training a wireless domain-specific LLM is essential for LLMs to understand wireless network goals effectively (without mathematical prompts), in order to achieve a fully autonomous intent-driven network.
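As a companion to this case study, the sketch below shows how a candidate power allocation can be checked against the rate and power-saving constraints; it assumes a standard Shannon-rate model that treats the other users' signals as interference and interprets the 1 dB noise figure as a linear power, which are our own simplifying assumptions rather than the exact model encoded in the GPT-4 prompts.

import math

# Check a candidate power allocation against the case-study constraints.
# The interference-as-noise Shannon model and the linear reading of the 1 dB
# noise term are illustrative assumptions, not the exact prompt formulation.
g = [1.21, 2.01, 0.58, 0.13]                 # channel gains
b = 15e3                                     # bandwidth in Hz
noise = 10 ** 0.1                            # 1 dB noise, taken as linear power
p0 = [2.0, 4.0, 5.0, 6.0]                    # initial transmit powers (W)
r_min = [3.50e3, 15.80e3, 4.40e3, 1.00e3]    # per-user rate targets (bps)
delta_p = 0.85                               # required total power saving (W)

def rates(p):
    return [b * math.log2(1 + g[i] * p[i] /
            (noise + sum(g[j] * p[j] for j in range(len(p)) if j != i)))
            for i in range(len(p))]

def feasible(p, tol=1e-9):
    rate_ok = all(r >= r_req for r, r_req in zip(rates(p), r_min))
    saving_ok = sum(p0) - sum(p) >= delta_p - tol
    return rate_ok, saving_ok

print(feasible(p0))                          # initial allocation: rates met, no saving yet
print(feasible([1.9, 3.95, 4.9, 5.35]))      # one feasible candidate under this model

In a deployment, such a verification routine could run at the base station or alongside each agent to validate the powers proposed by the LLMs before they are applied.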
§ CHALLENGES AND OPPORTUNITIES §.§ TelecomLLM with domain knowledge Building a TelecomLLM with domain-specific knowledge is a foundation for enabling wireless generative agents. The current GPT-4 can only provide general responses to a wireless network goal described in natural language. To operate an autonomous network, the LLM should give instructions to specific network functions or entities. This requires LLMs to be trained on Telecom-domain corpora, such as standards and product specifications. Moreover, grounding and calibrating on-device LLMs to the real-time wireless environment is important for wireless agents to make optimal decisions on specific wireless tasks. Furthermore, tuning the TelecomLLM with emerging wireless knowledge is crucial to keep the model updated with new wireless standards and features, in order to support efficient network upgrades. §.§ Wireless LLM Hallucination The model hallucination phenomenon refers to the event in which a generative model produces content that is nonsensical or untruthful in relation to certain sources. In wireless networks, this can be a result of continuous channel variations and network dynamicity. In particular, model hallucination can be experienced when the generative agent is not capable of capturing the variations of a rapidly evolving network, leading to an inaccurate representation of the wireless environment. Furthermore, generative agents may hallucinate due to the impact of inter-agent interference and the lack of relevant on-board data. Hence, efficient hallucination avoidance schemes are needed to ensure trustworthy agent behaviours across different network functionalities. §.§ Self-replicating Wireless Generative Agents The concept of self-replication within the context of generative agents indicates their capability of creating copies of themselves, including their knowledge and experiences, to communicate effectively with other agents. While still a speculative and hypothetical concept, it represents a promising skill that enables such agents to cope with the growing nature of wireless networks and to achieve autonomously scalable wireless networks. Also, replicated agents can adapt their models to their own wireless environments and tasks. Knowledge-driven wireless generative agents and inter-agent communications are key enabling technologies for self-replicating agents. For example, modularized LLMs have the potential to provide flexible knowledge inheritance for different generative agents, with reduced training and communication overhead. This calls for a thorough study of explainable LLMs which can encode knowledge on demand. It is worth emphasizing that, although self-replicating generative agents are aimed at improved wireless network performance, reliable wireless communication is essential for ensuring accurate knowledge transfer, and therefore for preventing replication errors and agent mutations. §.§ Resource management for on-device LLMs The inherent limitations on the available resources in wireless networks, particularly IoT systems, represent a performance bottleneck in the efficient realization of generative multi-agent systems in future wireless networks. While being a promising technology, due to the diverse capabilities of wireless nodes, on-device LLMs require a sophisticated balance between computing and energy resources, memory constraints, inference reliability, as well as users' experience and demands. Accordingly, it is essential to develop optimization and resource-management approaches that can be tailored to a certain use-case, by assessing the resource utilization performance of each agent and its impact on the CPU, memory, and energy efficiency, while meeting the particular requirements imposed by the communication scenario.
§.§ Multi-Agent Convergence Multi-agent convergence in generative systems can be particularly challenging in wireless networks, due to the dynamic nature of the latter and, therefore, the continuous need to adapt the agents' models to cope with evolving network conditions. Such a challenge is particularly pronounced when a massive number of agents are involved in achieving different network goals. Failing to converge might result in suboptimal solutions, which will then necessitate a high level of coordination among the multiple agents, contradicting the goal of achieving self-governance and automation among the agents. Accordingly, this opens up a new horizon for exploring collective approaches that allow the agents to achieve consensus in real time, regardless of the network dynamicity conditions. §.§ Multi-modal efficient on-device LLMs Generative agents employed in wireless networks are anticipated to handle multi-modal data, such as text, images, locations, and radio signals. Meanwhile, multi-modal on-device LLMs are constrained by communication and computing resources. This raises significant challenges in the model design, since current multi-modal LLMs that encode raw data are too large to be deployed on mobile devices. Knowledge-based LLMs that encode multi-modal data in a concept space can potentially reduce the model size and inference cost by learning a minimal structure of the data. Research efforts should be devoted to building effective concept spaces for multi-modal data. § CONCLUSION In this paper, we introduced the concept of a multi-agent generative AI network, which exploits the collective intelligence of on-device LLMs. We identified the challenges and key enabling technologies of on-device LLMs, focusing on LLM planning and reasoning to solve complex tasks; multi-agent LLM games and RL-based LLMs to achieve goals in competitive scenarios; and semantic information and communication to connect LLMs with knowledge for effective short-term and long-term task planning. Moreover, we discussed potential applications of multi-agent LLMs in 6G, including intent-driven network automation. We further demonstrated a use case where multi-agent LLMs play a game to achieve network energy-saving and user transmission-rate goals. Finally, we discussed potential research opportunities for multi-agent LLM networks, such as system 2 ML with strong reasoning, and human-agent interaction and collaboration. This paper initiates a new framework for future wireless network design to empower collective intelligence from large-scale wireless devices, opening up new research opportunities for generative AI in wireless networks. § BIOGRAPHIES Hang Zou (hang.zou@tii.ae) is a Researcher at Technology Innovation Institute, UAE. Qiyang Zhao (qiyang.zhao@tii.ae) is a Lead Researcher at Technology Innovation Institute, UAE. Lina Bariah (lina.bariah@tii.ae) is a Senior Researcher at Technology Innovation Institute, UAE. Mehdi Bennis (mehdi.bennis@oulu.fi) is a Professor at University of Oulu, Finland. Mérouane Debbah (merouane.debbah@ku.ac.ae) is a Professor at Khalifa University, UAE.
http://arxiv.org/abs/2307.01225v1
20230703031720
Interpretability and Transparency-Driven Detection and Transformation of Textual Adversarial Examples (IT-DT)
[ "Bushra Sabir", "M. Ali Babar", "Sharif Abuadbba" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
University of Adelaide CSIRO's Data61 Interpretability and Transparency-Driven Detection and Transformation of Textual Adversarial Examples (IT-DT) Bushra Sabir1,2, M. Ali Babar1, Sharif Abuadbba2 August 1, 2023 ============================================================================================================== Transformer-based text classifiers, such as BERT, RoBERTa, T5, and GPT-3, have achieved impressive performance in Natural Language Processing (NLP). However, their vulnerability to adversarial examples poses a significant security risk. Existing defense methods often lack interpretability and transparency, making it difficult to understand the reasoning behind adversarial classifications and identify vulnerabilities in the models. To address these limitations, we propose an approach that focuses on interpretability and transparency in the detection and transformation of textual adversarial examples. Our framework, titled Interpretability and Transparency-Driven Detection and Transformation (IT-DT), aims to provide insights into the decision-making process of the models and enable effective mitigation of adversarial attacks. The IT-DT framework leverages techniques such as attention maps, integrated gradients, and model feedback to achieve interpretability in the detection phase. By visualizing the attention and gradient information, we can identify the salient features and perturbed words that contribute to adversarial classifications. This interpretability enhances our understanding of the model's vulnerabilities and helps us identify potential adversarial inputs. In the transformation phase, the IT-DT framework utilizes pre-trained embeddings and model feedback to generate optimal replacements for the perturbed words. By finding suitable substitutions, we aim to convert the adversarial examples into non-adversarial counterparts that align with the model's intended behavior. This transformation process ensures that the modified inputs no longer deceive the model while preserving the overall meaning and context of the text. Moreover, the IT-DT framework emphasizes transparency by involving human experts in the loop. Human intervention plays a crucial role in reviewing and providing feedback on the detection and transformation results. This collaborative approach enhances the system's decision-making process, particularly in complex scenarios where automated methods may face challenges. Furthermore, the IT-DT framework generates valuable insights and threat intelligence that empower security analysts to identify vulnerabilities and improve model robustness. The framework aims to bridge the gap between the technical aspects of adversarial attacks and the human understanding required for effective mitigation. Through comprehensive experiments, we demonstrate the effectiveness of the IT-DT framework in detecting and transforming textual adversarial examples. Our approach enhances interpretability, provides transparency into the decision-making process, and enables accurate identification and successful transformation of adversarial inputs. By combining technical analysis and human expertise, the IT-DT framework significantly improves the resilience and trustworthiness of transformer-based text classifiers against adversarial attacks. § INTRODUCTION Transformer-based Large Language Models (TLLMs) and their variants have significantly transformed the field of Natural Language Processing (NLP) <cit.>.
These models have excelled in user review analysis, hate speech detection, content moderation, cyber-attack detection, and news classification on online platforms <cit.>. Prominent industry leaders, including Google, Facebook, and Amazon, have been actively leveraging Language Models (LLMs) for proactive text moderation purposes. These companies have implemented LLM-based models, such as Google's Perspective API [ https://www.perspectiveapi.com/], Facebook's CS2T model for hate speech detection [https://ai.facebook.com/blog/cs2t-a-novel-model-for-hate-speech-detection/], and Amazon Comprehend [https://aws.amazon.com/comprehend/], to effectively address text moderation challenges. However, these models, like their other DNN counterparts, are not foolproof and are vulnerable to carefully designed Adversarial Examples (AEs) <cit.>. AEs are generated by perturbing the original example in a way that retains its original semantics but induces test-time misclassification, thereby evading the target text classifier. For example, transforming “The Fish N Chips are excellent” to “The Fish N Chips are first-class” changes the target model's output (BERT for restaurant review classification) from a positive to a negative restaurant review. Adversaries utilize several obfuscation methods to generate AEs, ranging from substituting or inserting critical words (e.g., HotFlip <cit.>, TextFooler <cit.>), characters (e.g., DeepWordBug <cit.>, TextBugger <cit.>), or phrases (e.g., <cit.>) in the text to fool the target model. Among these techniques, Word Substitution Attacks (WSA) are a popular approach to creating adversarial examples against transformer-based models <cit.>. By replacing one or more critical words with mis-spelt versions, synonyms, or contextually similar words, these attacks can bypass the security measures of transformer-based models while maintaining the original text's meaning <cit.>. The ease with which WSA can be generated using pre-existing tools and libraries further enhances their significance <cit.>. Recent studies have shown that transformer-based models can misclassify over 90% of adversarial examples generated using WSA, highlighting the potential impact on users who rely on the model's output. Therefore, to ensure the reliability and integrity of these models, it is crucial to focus on developing defences against WSA. Failure to secure these models against WSA can lead to incorrect information being presented to users, affecting their satisfaction and trust in the model's output <cit.>. Therefore, techniques must be developed to protect transformer-based models from WSA, ensuring their reliability and safety in real-world applications <cit.>. The landscape of adversarial attacks against transformer-based models has been evolving rapidly, necessitating the development of sophisticated defenses to mitigate the associated risks <cit.>. While existing defense mechanisms, including Adversarial Training, Synonym Encoding, Frequency-Guided Word Substitution, and Discriminate Perturbations, have shown promising results in countering Word Substitution Attacks (WSA), they suffer from a lack of transparency and interpretability, thereby limiting their efficacy in real-world applications. These defenses often operate as black boxes, impeding their transparency and interpretability.
In practical scenarios, it is crucial to have actionable intelligence, such as raising alerts upon detecting adversarial examples, attempting autocorrection, and involving security analysts equipped with enriched, high-quality logs that aid in identifying the attack type, its source, and the vulnerabilities exploited by the attacker <cit.>. Human-centric AI defenses play a crucial role in bridging the gap in existing defenses by prioritizing transparency and understandability for humans <cit.>. These defenses aim to provide security analysts with actionable intelligence, enabling them to interpret attack types and sources and determine the need for human intervention <cit.>. Integrating comprehensive threat intelligence and fostering collaboration enhances preemptive capabilities. Strengthening defenses through alarms, transparency, and collaboration is vital for detecting and mitigating new attacks. Logging relevant information enables forensic analysis and the development of proactive countermeasures, aiding in attack investigations <cit.>. Human-centric AI defenses leverage human abilities to improve reliability, reduce false positives and negatives, and establish trust in AI systems. Developing such defenses is crucial for ensuring the integrity and reliability of transformer-based models in real-world applications. While previous studies have explored interpretability to demonstrate the significance of their methodologies <cit.>, to the best of our knowledge, none have achieved the detection and transformation of adversarial examples in a transparent and human-understandable manner. Additionally, existing defenses often operate as black boxes, offering limited insights into the nature of the attack or whether human intervention is necessary to address it. In this study, we propose a novel framework called Interpretability and Transparency-driven Detection and Transformation (IT-DT) that provides interpretable detection, identification, and correction of adversarial examples for transformer-based text classification models. The IT-DT framework leverages explainability features, such as attention maps and integrated gradients, to automate the detection and identification of adversarial examples and pinpoint the perturbed words in Word Substitution Attacks at multiple levels of granularity, including characters, words, and multiple word substitutions. By incorporating explainability, the framework offers insights into the distinguishing features and patterns that differentiate adversarial examples from legitimate ones. Furthermore, it identifies specific words that require transformation, enabling the conversion of adversarial examples into non-adversarial ones. The transformation process in the IT-DT framework utilizes pre-trained embeddings, attention weights, word frequency analysis, and model feedback to determine the words that need to be modified for effective transformation. Additionally, the framework logs intermediate information and identifies scenarios that require human intervention, providing valuable threat intelligence and interpretability to security analysts for identifying vulnerabilities in the target model and enhancing resilience against Word Substitution Attacks. We comprehensively evaluate our proposed framework and its components on two transformer-based architectures, namely BERT and RoBERTa, trained on four state-of-the-art text classification datasets: IMDB, YELP, AGNEWS, and SST2.
Additionally, we assess our framework against seven state-of-the-art Word Substitution Attacks at three levels of granularity: character-level, word-level, and multi-level. Our experimental results demonstrate that our approach significantly enhances the model's resistance to adversarial word substitution attacks, improving its reliability and effectiveness in practical applications. We achieve a median detection performance, measured by the Matthews Correlation Coefficient (MCC), of 0.846 and an F-score of 92% across the four datasets. Our method dramatically improves the median accuracy of the considered models on adversarial examples from zero to 92.98% against the state-of-the-art adversarial word substitution attacks. Contributions The main contributions of this work can be summarized as follows: * We develop a novel Adversarial Detector (D_adv) by extracting features from attention maps, integrated gradients, and frequency distribution to effectively differentiate between adversarial and clean examples. * We devise a hybrid approach that combines attention, frequency, and model feedback to identify perturbed words accurately. * We propose a novel transformation mechanism to find optimal replacements for perturbed words, converting adversarial examples into non-adversarial forms. Additionally, we identify specific scenarios that require human intervention. * We comprehensively evaluate our proposed detector and our identification and transformation modules over four datasets, two transformer-based architectures, and seven SOTA adversarial attacks. * Our framework provides valuable threat intelligence and actionable insights to security analysts, facilitating human-AI collaboration. These insights can enhance the robustness of the defence mechanism and identify vulnerabilities in the target model. Significance and Implications The proposed method, the Interpretability and Transparency driven Detection and Transformation of Adversarial Examples (IT-DT) framework, has several implications for both researchers and practitioners in the field of cyber-security, natural language processing (NLP) and machine learning. These implications are as follows: * (i) Advancement in Adversarial Defense Research: The IT-DT framework contributes to advancing research in adversarial defence techniques targeted explicitly at transformer-based models. By addressing the vulnerability of transformers to adversarial word substitution attacks, it offers a comprehensive defence mechanism that combines explainability, detection, and transformation methods. In addition, this research can inspire further investigations into enhancing the security and reliability of deep learning models against adversarial attacks. * (ii) Enhanced Transparency and Interpretability: The IT-DT framework emphasizes the importance of transparency and human involvement in understanding and defending against adversarial attacks. By incorporating explainability-based features and involving human intervention during the transformation process, the framework provides practitioners with insights into the reasons behind adversarial examples and allows them to intervene when necessary.
In addition, it promotes a more transparent and interpretable approach to defence, enabling practitioners to gain deeper insights into the vulnerabilities and robustness of their models. * (iii) Practical Defense Mechanism: The IT-DT framework offers a practical defence mechanism that can be implemented by practitioners working with transformer-based models. It provides a step-by-step process, including training an adversarial detector (TAD) and performing test-time detection and transformation (TDT). This practical guidance allows practitioners to effectively safeguard their models against adversarial attacks, enhancing the reliability and security of their NLP systems in real-world applications. * (iv) Generalization and Real-World Evaluation: The IT-DT framework has been evaluated extensively on various state-of-the-art word substitution attacks, popular text datasets and transformer architectures. This evaluation demonstrates the generalization ability of the defence mechanism across different attack scenarios and datasets, highlighting its effectiveness in countering adversarial manipulations at various granularity levels. Furthermore, the real-world evaluation conducted in the study provides practitioners with empirical evidence of the framework's performance, instilling confidence in its applicability in real-world scenarios. * (v) Practical Application in Industry: The IT-DT framework has implications for practitioners heavily relying on NLP applications, such as sentiment analysis, content filtering, and language understanding. By implementing the proposed defence mechanism, organizations can enhance the security and reliability of their transformer-based models, mitigating the risks associated with adversarial attacks and ensuring more reliable and robust NLP systems for their customers and users. Overall, the IT-DT framework contributes to the research landscape by addressing the vulnerability of transformer-based models to adversarial attacks and providing a practical defense mechanism. It also offers transparency, interpretability, and generalization, making it a valuable resource for practitioners seeking to secure their NLP systems in real-world applications. § BACKGROUND AND RELATED WORK The following section aims to provide a comprehensive overview of the relevant background and related research in the field. §.§ Preliminaries §.§.§ Transformer-based Text Classification Models The transformer architecture was first introduced in the paper “Attention is All You Need" <cit.>, and has since become a popular choice for Natural Language Processing (NLP) tasks. The Transformer-based text classifier T_m is a machine learning algorithm that takes as input a sequence X of word embeddings and outputs a probability distribution over the possible class labels y. The algorithm for T_m is given by Algorithm <ref>. T_m consists of multiple Transformer layers, each of which has a self-attention sub-layer and a feed-forward sub-layer. T_m takes an input sequence X, where each element x_i is a word embedding of dimension d, and then proceeds to iteratively process the input through the Transformer layers. Each layer first computes a self-attention sub-layer, which involves computing a set of H self-attention heads. The outputs from all H heads are concatenated and passed through a linear projection to produce the final output for the sub-layer.
The result is then passed through a residual connection and layer normalization operation before being processed by the feed-forward sub-layer. After each layer, the output is updated to be the input for the next layer. Once all the layers have been processed, the final output representation O_N is passed through a linear projection and a softmax function to produce a probability distribution over the possible class labels. During training, the model is optimized to minimize the cross-entropy loss between the predicted class probabilities and the true class labels. §.§.§ Adversarial Examples (AEs) Given a real example x with label y, an adversarial example AE against x can be defined as an input x' ← x + δ, where δ is a minor perturbation. The perturbation δ is constrained to satisfy a set of perturbation constraints P_const. Furthermore, the classifier F ∈ T_m predicts an incorrect label for the adversarial example, i.e., y' ← F(x') such that y' ≠ y. §.§.§ Word Substitution Attacks (WSA) Given an original text E with a set of important words W, WSA results in a new text E' where some of the words in W have been replaced with similar or mis-spelt words W'. The goal is to produce an altered text AE that modifies the output of an NLP model such that F(E) ≠ F(AE), where AE = E_subst(W,W') and E_subst represents the word substitution function. §.§.§ Prediction Confidence Score (PCS) PCS(x,y) of model F depicts the likelihood of an input x ∈ X having a label y ∈ Y. A smaller PCS(x,y) suggests F has low confidence that x has a label y. §.§.§ Explainability Explainability refers to the capability of comprehending and interpreting how a machine learning model reaches its decisions or forecasts. The importance of explainability lies in transparency, accountability, and trust in the decision-making process of machine learning models <cit.>. More formally, let F be a machine learning model that takes an input x and produces an output y. We can represent this relationship as y = F(x), where F is a complex function that maps the input space to the output space. To assess the explainability of F, we need to examine how it makes its predictions. One way to do this is to decompose F into simpler functions that can be more easily understood. For example, we can use feature importance analysis to identify which input features have the strongest influence on the output. Another approach is to use model-agnostic methods such as LIME (Local Interpretable Model-Agnostic Explanations) <cit.> or SHAP (SHapley Additive exPlanations) <cit.> to generate local explanations for individual predictions. These methods identify which input features were most important for a particular prediction, and how changes to those features would affect the output. §.§.§ Pre-Trained Embedding Pre-trained embeddings refer to the use of pre-existing word vectors that have been trained on large datasets using unsupervised machine learning techniques such as word2vec, GloVe, or fastText. These embeddings are trained on large corpora of text data, and the resulting word vectors capture semantic and syntactic relationships between words. By using pre-trained embeddings, it is possible to leverage the knowledge captured in these embeddings to improve the performance of natural language processing (NLP) tasks such as text classification, sentiment analysis, and machine translation, without the need to train a new embedding from scratch.
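To make the word-substitution threat model and the role of pre-trained embeddings concrete, the toy sketch below ranks substitution candidates for a word by cosine similarity in a hand-crafted embedding space; the vectors are invented for illustration, whereas real attacks and defences draw candidates from large pre-trained embeddings such as GloVe or counter-fitted vectors.

import numpy as np

# Toy illustration of embedding-based substitution candidates for a WSA:
# the vectors below are hand-made stand-ins, not real pre-trained embeddings.
emb = {
    "excellent":   np.array([0.90, 0.10, 0.02]),
    "first-class": np.array([0.85, 0.15, 0.05]),
    "superb":      np.array([0.88, 0.12, 0.03]),
    "terrible":    np.array([-0.90, 0.10, 0.02]),
    "fish":        np.array([0.05, 0.90, 0.30]),
    "chips":       np.array([0.02, 0.85, 0.40]),
}

def substitution_candidates(word, k=2):
    v = emb[word]
    sims = {w: float(v @ u) / (np.linalg.norm(v) * np.linalg.norm(u))
            for w, u in emb.items() if w != word}
    return sorted(sims, key=sims.get, reverse=True)[:k]

print(substitution_candidates("excellent"))  # e.g., ['superb', 'first-class']

An analogous neighbourhood search over pre-trained embeddings is what supplies the replacement candidates used later in the IT-DT transformation phase.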
§.§ Related Work In NLP, the research community has dedicated significant efforts to developing models that are robust against WSA. These efforts can be broadly categorized into two mainstreams, which we summarize in this section. (a) Robustness Enhancement defenses (RED): the first stream of research focus on developing resilient models with a goal to learn a robust model that can achieve high performance on both clean and adversarial test inputs <cit.>. These defenses include Adversarial Training <cit.>, Synonym Encoding <cit.>, Certifiable Robustness <cit.> However, these defences have four main limitations <cit.>: (i) they often require retraining the DNN with adversarial data augmented via known attack strategies, which can cause substantial retraining overhead and is computationally expensive. Implementing such defenses often demands a considerable amount of time and resources. (ii) defenses often have limited applicability. They may only be effective against specific types of attacks, making them less useful against more sophisticated and varied attacks. For example, certifiable defenses may be effective against simple word substitution attacks, but may not be effective against attacks that involve multiple word substitutions or more complex obfuscation methods. (iii) External information, such as knowledge graphs, is required to enhance the model's robustness. (iv) These defenses often rely on unrealistic assumptions, such as assuming that the attacker's embedding method is known, which is not always practical or feasible. (b) Detection, Blocking and Recovery Strategy (DBR): While robustness enhancement is a widely studied defense mechanism against word substitution attacks, there is another defense mechanism that involves identifying and recovering adversarial examples to prevent such attacks. However, this area of research is not as well-established compared to robustness enhancement defenses. One notable study in this category is DIScriminate Perturbation (DISP) <cit.>, which utilizes a discriminator to detect suspicious tokens and a contextual embedding estimator to restore the original word. Frequency-Guided Word Substitution <cit.> is another approach that leverages the frequency properties of replaced words to recover the original example. More recently, Word-level Differential Reaction (WDR) by Mosca et al. <cit.> investigated the variation of logits in the target classifier when perturbing the input text and trained an adversarial detector. In another study, Yoo et al. <cit.> extracted features from the last output layer of the model and adapted Robust Density Estimation (RDE) to detect adversarial samples. §.§ Comparison We present a comparison between our study and other relevant studies in Table <ref>. Each row represents a specific topic or feature addressed in the studies, while each column corresponds to a particular study. A checkmark () indicates that the corresponding study covers the topic, while a cross (×) indicates the topic is not addressed in that study. Our defence framework, IT-DT, falls under the category of detection and correction mechanisms. It incorporates explainability features inspired by TextShield <cit.>, but with notable distinctions. Unlike TextShield, which relies on computationally intensive saliency factors <cit.> and lacks the utilization of attention cues inherent in transformer architecture, IT-DT is specifically designed to tackle the challenges posed by large and complex transformer-based models. 
In terms of transformation, IT-DT focuses on finding optimal replacements rather than relying solely on rule-based substitutions, thereby enhancing its effectiveness in generating non-adversarial counterparts. Additionally, we comprehensively evaluate our identification and transformation models, setting our study apart from previous research. This thorough assessment enables us to determine the efficacy and performance of our approach. Transparency and interpretability are fundamental aspects of our framework. We go beyond by incorporating a human-in-the-loop component by integrating rich threat intelligence logs. This aspect facilitates the active engagement of security analysts, empowering them to make informed decisions based on the insights provided by our framework. Furthermore, compared to DISP <cit.>, our approach eliminates the need to train a discriminator and generator model directly from the embedding. Instead, we utilize explainability distributions to generate interpretable justifications for detection, identification of perturbed words, and transformation of adversarial examples. Additionally, we extensively evaluate our approach to seven SOTA WSA. We have compared our detection mechanism with the SOTA WDR. However, we could not find the replication package for TextShield for a direct comparison. Additionally, we tried to compare our results with DISP, but we faced challenges in replicating their experiments due to its high memory requirement for transformer-based methods. § IT-DT FRAMEWORK §.§ Motivating Example To illustrate the significance of the proposed IT-DT framework, consider a scenario where an organization employs a machine learning-based text classification system to filter incoming emails for potential cybersecurity threats. Attackers employ sophisticated techniques such as carefully crafted language patterns and evasion strategies to bypass the email filters. These adversarial emails, if undetected, can result in security breaches, data leaks, or system compromise. By integrating the IT-DT framework into the organization's text classification system, the capabilities of the email filters are significantly enhanced. The explainability component of IT-DT allows analysts to gain insights into the linguistic cues and syntactic structures that differentiate adversarial emails from legitimate ones. This understanding enables security professionals to adapt the classification model and update the filters to counter emerging attack patterns effectively. Moreover, the identification stage of the IT-DT framework provides valuable information about the specific attack vectors employed, such as Word Substitution Attacks or Sentence Modification Attacks. Armed with this knowledge, security analysts can develop targeted defense mechanisms and refine their detection strategies to stay one step ahead of the attackers. The transformation capabilities of the IT-DT framework play a vital role in neutralizing adversarial emails. However, in cases where the transformation process proves challenging, the framework triggers human intervention. Security analysts, equipped with their expertise, assess and classify the difficult examples manually, ensuring accurate classification and preventing potential security risks associated with misclassifications. In summary, the IT-DT framework addresses the challenges posed by adversarial examples in text classification for cybersecurity applications. 
By incorporating explainability, precise identification, effective transformation, and human intervention when needed, IT-DT provides a comprehensive solution to enhance the resilience of text classification systems against adversarial attacks. This framework empowers analysts, facilitates vulnerability identification, strengthens the overall security posture, and fosters collaboration between automated systems and human experts in the cybersecurity landscape. §.§ Overview of IT-DT Framework Figure <ref> shows the overview of our proposed framework. Our framework consists of two main phases: Training Adversarial Detector (TAD) and Test-Time Detection and Adversarial Example Transformation (TDT). TAD: Training Adversarial Detector The TAD phase aims to construct a machine learning (ML) model that discriminates between clean and adversarial examples. To achieve this objective, TAD takes the original and adversarial datasets as inputs and transfers them to the Explainability and Frequency-based Features Extraction (EFFE) stage. In the EFFE stage, the original and adversarial datasets are utilized as inputs to generate explainability maps, specifically attention maps (Amap) and integrated gradient (Igrad) distributions, for each example. This process employs a surrogate of the transformer model (ST_m), which acts as a shadow copy of the original transformer model (OT_m). The surrogate enables the detection of adversarial examples without impeding the regular functioning of the original model. Statistical features are extracted from the attention maps and integrated gradient distributions, and these features are combined with frequency-based statistical features to construct a feature set for training the ML models. Subsequently, we employ several supervised traditional ML classifiers to obtain the Adversarial Detector (D_adv). These classifiers undergo fine-tuning using K-fold cross-validation, and the most effective models are selected for each ML model type. Finally, an unknown test dataset is employed to evaluate the selected models, and the ML model exhibiting the highest validation performance is designated as the Trained Adversarial Detector (D_adv). TDT: Test-Time Detection and AE Transformation The objective of the TDT phase encompasses three key aspects: (i) Differentiating clean examples (CE) from adversarial examples (AE) using the test-time Adversarial Detector (D_adv) when the model is deployed. (ii) Transforming AE into an optimal non-adversarial variant (TF_E) that retains semantic similarity to avoid evasion attacks. (iii) Generating threat intelligence logs and identifying the need for human intervention in cases where the transformation of AE to a non-adversarial form fails. To accomplish this, the EFFE step first converts test-time data into a feature set and employs the D_adv model to detect AE and CE examples. Next, examples identified as CE are processed through the spelling-based transformation module and then passed to the T_m model for label prediction. On the other hand, the AE examples are directed to the identification step, where Perturbation Identification (PI) module determines the candidate perturbed words, denoted as P_cand, within the AE. Then, these candidates are inputted into the Embedding-based Transformation selection step, where a set of N replacement options (S_cand) is generated using pre-trained embeddings. 
Continuing the flow above, the AE is then reverse-engineered by selecting an optimal substitution for each P_cand, utilizing the outputs of both the T_m and D_adv models. The transformed AE, denoted TF_E, is further processed by the spelling-based transformation module to correct typos. Finally, TF_E is sent to the T_m model for label prediction. In addition to generating the transformed AE (TF_E), our framework produces analytical logs (A_logs) that record the success or failure of the conversion from AE to TF_E. These logs offer valuable insights for security analysts to enhance the model's robustness and counter word-substitution attacks. We discuss the details of each stage in the subsequent subsections. §.§ Explainability and Frequency based Feature Extraction (EFFE) EFFE is responsible for extracting the features used to build the ML models. In this study, we exploit explainability to distinguish adversarial examples from clean ones. This stage has two sub-phases, described below: Generate Explainability Maps In this step, we compute the A_maps and I_grad distributions over all n words of an input sequence E. The A_maps distribution measures the importance of each word based on how much attention it receives from the other words in the input sequence E; it represents the distribution of attention weights across the input sequence for a given prediction y_pred. The I_grad distribution, in contrast, estimates the significance of each input word based on how much it affects the output of the model; it identifies which words are most sensitive to changes in the input and which words are most important for making predictions. Together, these distributions provide useful information about how the Transformer model processes input sequences and generates its predictions. Our method for computing A_maps and I_grad is inspired by recent seminal work in computer vision <cit.>. In that work, the authors captured the contribution of each token (word/sub-word) to the predicted label y_pred to explain the prediction of the T_m models, as shown in Figure <ref>. Figure <ref> highlights the relevance of each word to the predicted label y_pred obtained by this method. We chose this explainability method because it is computationally efficient and does not require additional training or modification of the original transformer model. However, unlike that work, we explore the A_maps and I_grad distributions across all words to differentiate original from adversarial examples, rather than for explanation alone. Moreover, we only consider self-attention-based transformer models, owing to their popularity in text classification tasks <cit.>; nevertheless, our work can be extended to other transformer architectures. Algorithm <ref> summarizes the process of generating explainability maps. First, we initialize A_maps and I_grad with an identity matrix I_n×n, where n denotes the number of words in E; this signifies that, initially, each word is important to itself (the basis of the self-attention model). We then obtain the attention weights A_w^l across each layer l of the T_m model, which signify the relative importance of each word or sub-word in E for every other word or sub-word in E. These attention weights A_w^l are computed by taking the dot product of a query vector q with a set of key vectors k that represent each word or sub-word in E, and then applying a softmax function to normalize the resulting scores.
Here, the q vectors represent the position currently being processed (the word that is attending), whereas the k vectors represent the positions or contexts in the input sequence that it may attend to. Next, the attention gradients ∇ A_w^l are obtained by taking the derivative of the output y_pred with respect to the attention weights A_w^l. Finally, A_maps is determined using the Hadamard product of the attention weights and attention gradients, averaged across all the heads in layer l after removing negative contributions. Similarly, I_grad is calculated by averaging ∇ A_w^l across all the heads in l, with negative values clamped to zero. The outputs of this module are the d×k-dimensional A_maps and I_grad for all k examples. Extract Statistical Features We utilize statistical measures <cit.> such as minimum, maximum, mean, median, mode, variance, skewness, kurtosis, entropy, mean of gradient, and the sum of peaks to capture the central tendency, variability, and shape of four distinct types of distribution. Let E denote a text sample. The first distribution is the Attention Map (A_map), representing the overall attention received by all words in E. The second is the Integrated Gradient (I_grad) distribution, which enables us to compare the relative importance of individual words across different predicted labels y_pred and to capture the difference between expected and adversarial samples. The third is the Overall Frequency distribution, which characterizes the frequency of the words in E with respect to their frequency in the training dataset; the inclusion of these features is motivated by the results reported in <cit.>. Finally, we utilize the Outlier Frequency (OF) distribution, representing the frequency of outliers in A_map. To compute the OF distribution, we first identify potential outliers in A_map using the inter-quartile range (IQR) method. Let q_1 and q_3 be the first and third quartiles of A_map, respectively, and let iqr = q_3 - q_1 be the inter-quartile range. A token t_i with value w_i in A_map is considered a potential outlier if w_i > q_3 + 1.5 × iqr and t_i ∉ StopWords, i.e., if its value lies more than 1.5 times the IQR above the third quartile and it is not a stop word. Once the potential outliers are identified, we obtain the frequency of each outlier by counting the number of occurrences of its value in A_map. This frequency distribution is the OF distribution, which represents the frequency of outliers in A_map.
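To make the feature construction concrete, the sketch below computes the summary statistics for one score distribution (such as A_map or I_grad) and the IQR-based outlier-frequency (OF) values described above. The helper names, the entropy normalization, and the peak detection are illustrative assumptions rather than our exact implementation.

```python
import numpy as np
from scipy import stats

def distribution_features(values):
    """Summary statistics for one explainability distribution (A_map or I_grad)."""
    v = np.asarray(values, dtype=float)
    grad = np.diff(v) if v.size > 1 else np.zeros(1)
    peaks = v[1:-1][(v[1:-1] > v[:-2]) & (v[1:-1] > v[2:])] if v.size > 2 else np.zeros(1)
    uniq, counts = np.unique(v, return_counts=True)
    p = np.abs(v) / (np.abs(v).sum() + 1e-12)               # normalised for entropy
    return {
        "min": v.min(), "max": v.max(), "mean": v.mean(), "median": np.median(v),
        "mode": uniq[counts.argmax()], "var": v.var(),
        "skew": float(stats.skew(v)), "kurtosis": float(stats.kurtosis(v)),
        "entropy": float(stats.entropy(p)), "mean_grad": grad.mean(),
        "sum_peaks": peaks.sum(),
    }

def outlier_frequencies(tokens, a_map, stopwords):
    """OF distribution: occurrence counts in A_map of each IQR-outlier value."""
    a_map = np.asarray(a_map, dtype=float)
    q1, q3 = np.percentile(a_map, [25, 75])
    bound = q3 + 1.5 * (q3 - q1)
    outlier_values = [w for t, w in zip(tokens, a_map)
                      if w > bound and t not in stopwords]
    return [int(np.sum(a_map == w)) for w in outlier_values]
```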
§.§ Building ML Models To effectively detect adversarial examples, we developed machine learning (ML) models using the following steps: Clean Feature Set After extracting the statistical features, the feature set (F_set) was cleaned to eliminate irrelevant or redundant features that may have negatively impacted the models' performance. Any NaN (Not a Number) or infinite values in the statistical features were removed to ensure consistency and completeness. Moreover, we removed any duplicate values to improve feature quality, ensuring that each feature contributes unique information to the ML models. Train Model We trained multiple ML models on the cleaned F_set to identify the best-performing model for detecting adversarial examples, fine-tuning each by adjusting its hyperparameters. Tune Model To optimize the performance of our adversarial attack detection system, we tuned the hyperparameters using performance measures that are commonly used for imbalanced dataset problems, such as the Matthews Correlation Coefficient (MCC), a popular metric for binary classification on imbalanced datasets <cit.>. Additionally, to ensure the models' generalization ability and avoid overfitting, we used 10-fold cross-validation with a stratified split: the dataset is split into ten equal parts, the models are trained on nine parts and tested on the remaining part, and the process is repeated ten times with a different part used for testing each time. By fine-tuning the ML models with hyperparameter optimization and cross-validation, we identified the best-performing model for our dataset and selected the model with the best cross-validation performance as the optimal ML model D_adv for detecting AEs. Test Model Finally, we evaluated the effectiveness of the selected model in distinguishing AE from CE on an unseen dataset. §.§ Explainability-based AE Detection (EAD) After identifying the optimal adversarial detector D_adv during the training phase, the next step is to use the detector at test time. During this phase, the EFFE module extracts features from a given example X, which are then input into D_adv for classification as either an adversarial (AE) or a clean (CE) example. The AE examples are then processed by the identification phase, which is responsible for detecting potentially perturbed words, while the CE examples are directed to the spelling-based transformation step, which corrects any spelling errors that may exist in the input. §.§ Identification The purpose of this module is to identify potentially perturbed words within the adversarial example that have influenced the model's output. This is accomplished through a combination of explainability-based, model-based, and frequency-based identification techniques, as outlined below: Explainability-based Perturbation Identification (EPI): Let E = w_1, w_2, …, w_n be an input sequence, where w_i denotes the i-th word, and let AE be an adversarial example detected by D_adv. AE is fed into the EPI module, which computes the explainability scores A_map for each word w_i in AE. The words with attention scores greater than a threshold thres are identified as the potential perturbed candidates (P_cand) that have the most influence on the incorrect decision of the model, i.e., on y_pred in the case of adversarial examples. Formally, the set of influential perturbed words is defined as P_cand = { w_i | A_map(w_i) > thres ∧ w_i ∉ StopWords }, where A_map(w_i) denotes the explainability score of word w_i (section <ref>). Additionally, the words in P_cand are sorted in descending order of attention score, so that the word with the highest attention becomes the first candidate.
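As an illustration, the EPI rule above can be implemented as in the following sketch, using the q3-based default threshold adopted in our experiments; the function and variable names are illustrative.

```python
import numpy as np

def epi_candidates(tokens, a_map, stopwords, thres=None):
    """Explainability-based Perturbation Identification (EPI).

    Flags words whose explainability score exceeds the threshold (defaulting to
    the third quartile of A_map) and that are not stop words, returned with the
    most-attended word first.
    """
    scores = np.asarray(a_map, dtype=float)
    if thres is None:
        thres = np.percentile(scores, 75)              # q3 of the attention distribution
    cand = [(w, s) for w, s in zip(tokens, scores)
            if s > thres and w.lower() not in stopwords]
    cand.sort(key=lambda pair: pair[1], reverse=True)  # highest attention first
    return [w for w, _ in cand]
```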
Model-based Identification (MPI): The MPI approach uses the model score to determine the importance of words in the decision-making process. Following the BERT-Attack <cit.> strategy, it replaces each word in the adversarial example (AE) with a '[MASK]' token; if the Probability Confidence Score (PCS) of the surrogate model (ST_m) on the predicted label y_pred decreases by more than a certain threshold, the word is considered important. More specifically, the candidate set is extended as P_cand ← P_cand ∪ { w_i | PCS(AE)[y_pred] - PCS(AE ∖ w_i)[y_pred] > sc_thres }, where P_cand is the set of potential perturbed candidates, w_i is a word in the adversarial example AE, y_pred is the predicted label, PCS(AE)[y_pred] is the probability confidence score of the adversarial example for the predicted label y_pred, PCS(AE ∖ w_i)[y_pred] is the same score with the i-th word replaced by a '[MASK]' token, and sc_thres is the threshold that determines the level of importance. Frequency-based Identification (FPI): The FPI module is designed to detect potential perturbed words in adversarial examples at the character level and at the multi-level (word and character) granularity by treating frequency as the critical signal. To achieve this, we build upon the work of Mozes et al. <cit.>, which demonstrated that the frequency of perturbed words in an adversarial example is typically lower than the frequency of the original words in its non-perturbed counterpart. Specifically, our FPI module selects words whose frequency is below a defined threshold fq_thres and that are not pronouns or nouns, and adds them to the set of potential perturbed candidates: P_cand ← P_cand ∪ { w_i | Freq(w_i) < fq_thres ∧ w_i ∉ Pronouns }. Here, ∪ denotes set union and the curly braces define a set. By incorporating frequency-based identification into our overall approach, we can improve the accuracy and reliability of our adversarial example detection methods. §.§ Transformation Embedding-based Transformation (ET) Generation The objective of this module is to generate a set of potential candidate substitutions, referred to as S_cand, for each candidate word cand in the set P_cand. This is achieved using three distinct embedding techniques. First, synonym embedding is employed, which utilizes the WordNet lexical database to identify synonyms for a given word w_i; WordNet's synsets group words into sets of cognitive synonyms, each indicative of a distinct concept. Second, contextual embedding is utilized, which masks the target word and uses its surrounding context to predict a potential replacement, thereby capturing nuanced, context-dependent variations in meaning. Lastly, semantic pre-trained word embedding models such as Word2Vec or GloVe are used to identify semantically related words to the target word. The output of this process, S_cand, is a set of key-value pairs, where the key is the substitution S_w_i for w_i and the value is a similarity score simi_PCS indicating the absolute similarity between S_w_i and w_i.
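For illustration, the three embedding sources can be queried as sketched below to build S_cand for a single candidate word. The model identifiers shown (e.g., "glove-wiki-gigaword-50", "bert-base-uncased") mirror the experimental setup described later, but the scoring details and the nominal score assigned to WordNet synonyms are simplifying assumptions.

```python
import gensim.downloader as api
from nltk.corpus import wordnet as wn
from transformers import pipeline

glove = api.load("glove-wiki-gigaword-50")                    # semantic embedding
fill_mask = pipeline("fill-mask", model="bert-base-uncased")  # contextual embedding

def candidate_substitutions(sentence, word, top_k=15):
    """Build S_cand for one perturbed-word candidate as {substitution: similarity score}."""
    s_cand = {}
    # (1) synonym embedding: WordNet synsets
    for syn in wn.synsets(word):
        for lemma in syn.lemmas():
            name = lemma.name().replace("_", " ")
            if name.lower() != word.lower():
                s_cand.setdefault(name, 1.0)                  # nominal score for synonyms
    # (2) contextual embedding: mask the word and let BERT propose replacements
    masked = sentence.replace(word, fill_mask.tokenizer.mask_token, 1)
    if fill_mask.tokenizer.mask_token in masked:
        for pred in fill_mask(masked, top_k=top_k):
            s_cand.setdefault(pred["token_str"].strip(), abs(pred["score"]))
    # (3) semantic embedding: nearest neighbours in GloVe space
    if word in glove:
        for neighbour, sim in glove.most_similar(word, topn=top_k):
            s_cand.setdefault(neighbour, abs(sim))
    return s_cand
```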
Model-based Transformation (MT) Selection: Algorithm <ref> presents the Model-based Transformation (MT) Selection module, which aims to transform an original adversarial example (AE) into an optimal transformed adversarial example (TF_E) and to determine whether human intervention is required. To achieve this goal, the algorithm takes AE as input, along with other variables, and produces TF_E and a Boolean value Human indicating whether human verification is necessary. The algorithm first generates various transformed examples (TF_cand) by replacing each word in P_cand with all possible substitutions in S_cand. Then, for each transformed example, the module queries the T_m model to obtain its PCS and label, r_PCS and r_label, and computes the replacement score (r_score), i.e., the difference in the PCS of the T_m model before and after the replacement. The module also computes the frequency score of each substitution based on its frequency in the T_m model's training dataset. Next, the algorithm computes the optimization score (opt_score) by adding r_score, the similarity score between the original and transformed examples (simi_score), and the frequency score of the substitution in S_cand, and selects the candidate cand_opt with the maximum opt_score. The module then selects the transformed example with the highest optimization score (sel_id). If the replacement score of the selected example exceeds a certain threshold, or if the label of the selected example differs from the original label, the algorithm replaces AE with the selected example. Subsequently, the module uses the adversarial detector D_adv to determine whether AE is now detected as a clean example, i.e., whether D_adv assigns it the label zero and the label of the selected example is not the original label. If so, the module sets adv_flag and Human to false and sets humanmsg to "Converted to non-adversarial", indicating that the MT module has converted the example into a non-adversarial version and human verification is therefore unnecessary. If, instead, the adversarial PCS of the selected example is greater than that of the original example (adv_org), the module sets Human to true, indicating that human verification is necessary because the substitution has made the example more adversarial. Finally, the algorithm repeats these steps for a fixed number of attempts; if adv_flag is still true after all attempts, the module sets Human to true and humanmsg indicates that human verification is necessary. Spelling-based Transformation (ST) The Spelling-based Transformation (ST) phase is the final step in the Transformation module, where spelling correction is performed on every input example E for words that are neither pronouns nor present in the training dataset of T_m. This process first identifies typos involving homoglyphs, such as 'l' being replaced by '1' in "Hel10", and then uses the SymSpell library <cit.> to search for a closely resembling word within an edit distance of ed_ds. If a suitable replacement is found, the typo is replaced with the correct word, resulting in the transformed example TF_E. §.§ Final Label Prediction (FLP) The FLP stage uses the transformed example TF_E as input to T_m to predict the final label y_pred'. § EXPERIMENTAL DESIGN AND SETUP This section elaborates on the experimental setup we adopted for each phase of our proposed IT-DT framework. Moreover, we undertook a comprehensive assessment of the performance of the IT-DT framework by addressing three principal research questions, as outlined in Table <ref>. The subsequent sections elaborate on the experimental and evaluation setup used for each research question. §.§ RQ1: Explainability Effectiveness In order to evaluate the effectiveness of explainability in discriminating between legitimate and adversarial examples, we conducted a statistical analysis.
Initially, we calculated the distributions of explainability measures A_map and I_grad for all instances in the training (Tr), testing (Te), and adversarial (Ad) datasets, as described in section <ref>. The testing dataset was utilized to avoid any potential bias resulting from the fact that the explainability measures may differ for unseen examples. Subsequently, we computed the logarithm of the standard deviation of these measures for each distribution, and utilized Bayesian hypothesis testing <cit.> to evaluate the distributions. Specifically, we calculated the Bayes factor BF_10, which measures the degree to which the data support the alternative hypothesis (H1) that the distributions of the genuine datasets (Tr, Te) for A_map and I_grad differ from those of the adversarial dataset (Ad) and Tr and Te have no such difference, over the null hypothesis (H0) the converse. A higher BF_10 value indicates stronger evidence in favor of H1. Additionally, Cohen’s d effect sizes were calculated for all mean explainability measures A_map and I_grad. §.§ RQ2: Building and Evaluating Adversarial Detector To train the adversarial detector, we implemented the following experimental setup. Baseline Transformer Models and Datasets For our experimental setup, we chose two preeminent classifiers, namely BERT <cit.> and ROBERTA <cit.>, which have been trained on four benchmark datasets: IMDB, AGNEWS, YELP, and SST2. These pre-trained models were procured from the HuggingFace platform <cit.>. BERT and ROBERTA have been widely used in various machine learning applications and have demonstrated exceptional accuracy in text classification <cit.>. Furthermore, these models are currently being adapted in the cybersecurity domain to detect a range of attacks, including phishing and spam <cit.>. Recent studies have shown that these models are effective in detecting phishing emails <cit.>. Cybersecurity researchers often use these models in the context of adversarial ML to evaluate their proposed adversarial attacks or defenses, given that these models exhibit high accuracy and are vulnerable to attacks, as evidenced by their high attack success rates <cit.>. Hence, the selection of these models is appropriate for evaluating the effectiveness of the proposed framework in the context of text classification and cybersecurity. Lastly, the datasets we used in our experiments covered diverse topics and had varying lengths, thereby ensuring the generalizability of our approach. Table <ref> provides more information about the datasets used in our experiments. Adversarial Datasets This study explores Word Substitution attacks across three granularity levels: char-level, word-level, and multi-level. We employ the TextAttack Library <cit.> and utilize an adversarial dataset from a recent study <cit.> to generate adversarial examples for each model. The following attacks are employed in this study: * Char-level attack: For this attack, we employed the state-of-the-art DeepWordBug attack <cit.>. DeepWordBug (DWB) is a character-level adversarial attack method that utilizes a generative model based on a deep recurrent neural network to generate perturbations at the character level of the input text. <cit.>. * Word-level attack: We generate the adversarial examples at this level of granularity through four distinct word-level substitution attacks: (i) TextFooler (TF) <cit.>: a word-level adversarial attack that generates semantically similar but different sentences to deceive a text classification model. 
It uses semantic and syntactic techniques to generate adversarial examples. (ii) Bert_Attack (BAE) <cit.>: a word substitution attack that utilizes the BERT language model to generate adversarial samples. The attack aims to replace words in the input sentence with similar words that would not change the meaning of the sentence. BAE employs a combination of gradient-based word importance scoring and masking to identify words that are most likely to be changed without changing the sentence's meaning. (iii) Probability Word Saliency Substitution (PWWS) <cit.>: a word substitution attack method that selects the most salient words in the input text based on their importance for the model's prediction. PWWS then substitutes these words with semantically similar words to create an adversarial example that can fool the model while maintaining semantic meaning. (iv)TextFooler Adjusted (TF_adj) <cit.>: a variation of the TF algorithm that incorporates a more effective synonym replacement strategy. TF_adj replaces only the top-K most essential words in the input text by considering the probability of word saliency. It helps to generate more efficient and potent adversarial examples with fewer word substitutions, leading to better attack success rates. * Multi-level attack: For the multi-level attack, we employed the TextBugger attack (TB) <cit.>. TextBugger is a multi-level attack approach that can generate adversarial examples by modifying text at the character, word, and phrase levels. It utilizes strategies to generate adversarial examples, including synonym replacement, word deletion, and insertion. It is effective against a wide range of NLP models, including those based on RNN, CNN, and BERT architectures. Additionally, it has been shown to have a high attack success rate while maintaining a low distortion rate, making it a powerful tool for evaluating the robustness of NLP models. . Explainability Computation LIME and SHAP are the most widely used methods that provide explainability. However, these methods can be computationally and memory-intensive for large and complex Transformer models such as BERT and RoBERTa. This is due to the fact that these models have a large number of parameters, necessitating a significant amount of memory and processing power to evaluate input data. Additionally, the large number of synthetic samples generated by LIME and SHAP to approximate the model's behavior and compute the feature importance scores can become computationally infeasible for BERT and RoBERTa models due to the size and complexity of the models and input data. This can lead to memory crashes or out-of-memory errors when processing large datasets. To address these challenges, researchers have proposed various optimizations and approximations for explainability methods that are more suited to large and complex models like BERT and RoBERTa. For instance, attention-based methods like Attention-based Integrated Gradients and Attention Rollout can be used to compute feature importance scores for BERT and RoBERTa models by focusing on the attention weights of the model instead of individual parameters. These methods can be faster and more memory-efficient than LIME and SHAP for BERT and RoBERTa models. Therefore, our calculation of explainability scores for an input example x is motivated by Chefer.et.al's <cit.> explainability method for text classification, which is more appropriate for large and complex models like BERT and RoBERTa. 
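As a rough illustration of this attention-gradient style of explainability (and of the A_maps/I_grad computation summarized in Algorithm <ref>), the sketch below assumes a HuggingFace-style classifier that returns attention tensors; the layer-wise accumulation and the reduction to per-token scores are simplifications of the actual procedure, not a faithful reimplementation.

```python
import torch

def explainability_maps(model, tokenizer, text):
    """Sketch: relevance from attention weights and their gradients w.r.t. y_pred."""
    enc = tokenizer(text, return_tensors="pt")
    out = model(**enc, output_attentions=True)
    y_pred = out.logits.argmax(-1).item()
    grads = torch.autograd.grad(out.logits[0, y_pred], out.attentions)

    n = enc["input_ids"].shape[1]
    a_map, i_grad = torch.eye(n), torch.eye(n)       # each token initially "important to itself"
    for attn, grad in zip(out.attentions, grads):    # loop over layers
        cam = (grad * attn).clamp(min=0).mean(dim=1)[0]   # Hadamard product, head average
        a_map = a_map + torch.matmul(cam, a_map)          # accumulate layer contributions
        i_grad = i_grad + grad.clamp(min=0).mean(dim=1)[0]
    # per-token scores: how much relevance each token receives on average
    return a_map.mean(dim=0), i_grad.mean(dim=0)
```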
Building Machine Learning (ML) models To develop the Adversarial Detector, we utilized five popular and effective classifiers, namely XGBoost (XGB), LightGBM (LGBM), Random Forest Tree (RFT), Decision Tree (DT), and Logistic Regression (LR), as reported in <cit.>. Given the class imbalance present in our classification problem, we employed the Matthews Correlation Coefficient (MCC) as the primary performance criterion for training and fine-tuning the classifiers. To ensure a balanced class distribution in each fold, we employed attack-based stratified sampling with 10-fold cross-validation, and early-stopping criteria to identify the optimal parameters. MCC was used as the model-selection criterion because it considers all classes explicitly and is resilient against imbalanced datasets <cit.>. MCC values range from -1 to 1, with values near -1, 0, and 1 indicating a poor model (misclassifying both classes), a random model (classifying both classes randomly), and a good model (classifying both classes correctly), respectively. We used 10-fold cross-validation and Bayesian Optimization to fine-tune the models. We evaluated the performance of the classifiers using various metrics, including Accuracy (ACC), Balanced Accuracy (Bal_ACC), F1 Score, Precision, Recall, Area Under the Curve (AUC), False Positive Rate (FPR), and False Negative Rate (FNR), as reported in <cit.>. These metrics provide a comprehensive analysis of the classifiers' performance. Furthermore, we present the accuracy of the trained classifiers in detecting specific attacks to provide insights into their strengths and limitations in various attack scenarios. §.§ RQ3: TDT Module To assess the effectiveness of the TDT module, a random sampling technique was employed to select 200 adversarial examples for each of the six attacks described in Section <ref>, along with 200 clean (no-attack) examples. In addition to the known attacks, we also included an unseen word-level attack, Attack to Training (A2TY) <cit.>, a recent attack technique that utilizes gradient-based search to identify essential words in the text and replaces them with similar words using word embeddings, preserving Part of Speech (POS) and semantic similarity through Distil-BERT similarity scores. It is important to note that the adversarial examples generated by this attack were not utilized during the training and validation of the D_Adv model; instead, it represents a zero-day attack that enables us to assess the generalizability of our framework at test time. All of these adversarial examples were misclassified by the models, i.e., the ST_m model has zero accuracy on them, whereas its performance on clean examples is shown in Table <ref>. To avoid over-fitting bias, these examples were unseen by both the ST_m and D_adv models. Next, these examples were fed into the TDT stage for test-time evaluation. The TDT module's evaluation metric considers both the D_Adv detection error and the accuracy achieved in correctly detecting the label, as well as human intervention; thus, it evaluates the TDT module as a whole. The overall accuracy of the TDT phase was measured using Equation <ref>: Acc_all = ∑_i=1^n [ (GT_i = TF_i) ∨ (GT_i ≠ TF_i ∧ H_i) ] / n. Here, n represents the total number of examples, GT_i the ground-truth label of example i, TF_i the final predicted label of example i, and H_i a Boolean variable indicating whether human intervention is required.
This metric measures the end-to-end performance of the TDT phase by considering both correctly classified examples and examples flagged for human intervention. In addition, the accuracy of the D_adv module on the randomly selected examples, denoted ACC_Det, and the accuracy achieved by the TDT module in correctly identifying the label, denoted ACC, are also provided for clarity. Furthermore, to provide a more accurate picture of the strengths and weaknesses of the framework, to identify areas for improvement in each component, and to assess the impact of each component on the overall performance of the system, we evaluate the identification and transformation steps using the experimental setup below. (Note that the TDT module only processes examples that are detected as adversarial, except for the spelling-based correction sub-module; we therefore report the performance of these steps on detected adversarial examples only.) RQ3.1: Identification To identify perturbations based on explainability, we set the threshold thres at the third quartile (q3) of the attention distribution: any attention score above this threshold was considered to indicate an influential word for the model prediction. In addition, we set sc_thres to 0.3 and fq_thres to 0.001. A pilot study was conducted to determine these optimal thresholds, in which we varied the explainability threshold from q3 to q3 plus the inter-quartile range (IQR), a measure of the spread of the attention scores defined as the difference between the third quartile (q3) and the first quartile (q1) of the distribution. For sc_thres, we explored the range 0.1 to 0.5, and for fq_thres, we varied the threshold from 0.001 to 0.01. Based on our observations, q3, 0.3, and 0.001 were the optimal thresholds across all datasets for the EPI, MPI, and FPI modules, respectively; as a result, we used these thresholds consistently across all datasets and models. Evaluation Metrics The performance of the identification module is evaluated using key statistical measures, including Recall, Precision, F1-Score, False Positive Rate (FPR), and False Negative Rate (FNR). These measures provide a comprehensive evaluation of the algorithm's ability to accurately detect perturbed words while minimizing false positives and false negatives. Recall indicates the algorithm's capacity to capture perturbed words, while Precision is the proportion of correctly identified perturbed words out of all words identified as perturbed. The F1 score combines Precision and Recall into a single metric, providing a balanced measure of performance in detecting perturbed words, particularly in imbalanced data scenarios. FPR measures the proportion of non-perturbed words that are incorrectly classified as perturbed, assessing the rate of false alarms, whereas FNR measures the proportion of actual perturbed words that are incorrectly classified as non-perturbed, indicating the rate of missed detections. Collectively, these measures provide an accurate evaluation of the algorithm's performance in identifying perturbed words. Note that we did not consider MCC here because, rather than overall performance, we wanted a more targeted assessment of the method's ability to identify perturbed words <cit.>, and the selected measures are therefore an appropriate choice.
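To make the word-level evaluation explicit, the following sketch computes these measures for one example, assuming the ground-truth perturbed words are known from the attack logs; the function name and argument format are illustrative.

```python
def identification_metrics(predicted, ground_truth, all_words):
    """Word-level Recall/Precision/F1/FPR/FNR for perturbation identification.

    `predicted` is the set of words flagged as perturbed (P_cand),
    `ground_truth` the words actually perturbed by the attack, and
    `all_words` every word in the example.
    """
    predicted, ground_truth, all_words = set(predicted), set(ground_truth), set(all_words)
    tp = len(predicted & ground_truth)
    fp = len(predicted - ground_truth)
    fn = len(ground_truth - predicted)
    tn = len(all_words - predicted - ground_truth)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    fnr = fn / (fn + tp) if fn + tp else 0.0
    return {"recall": recall, "precision": precision, "f1": f1, "fpr": fpr, "fnr": fnr}
```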
RQ3.2: Transformation To implement the Embedding-based Transformation technique, we utilized multiple methods for word replacement. For synonym replacement, we leveraged WordNet synsets <cit.>; to avoid bias towards recent evasion attacks, we refrained from using the synonym embedding provided by counter-fitted vectors <cit.> <cit.>. For contextual replacement, we used the BERT masking model <cit.>. BERT is a state-of-the-art neural network architecture for natural language processing tasks and generates high-quality contextualized word embeddings; to generate contextually similar words, we used the BERT masking technique, which masks a word in a sentence and predicts the masked word from the surrounding context. For semantic replacement, we utilized GloVe, a pre-trained word embedding model developed by Stanford University. Specifically, we used the "glove-wiki-gigaword-50" model, trained on a combination of Wikipedia 2014 and Gigaword 5 data, which has a vocabulary of 400,000 words and was trained on 6 billion tokens; each word is represented as a 50-dimensional vector in the embedding space <cit.>. To ensure a wide range of replacement options, we selected 15 similar candidates from each embedding. For the model-based transformation, we set a threshold of 0.1: if the optimal candidate cand_opt yields a change of at least 0.1 in PCS or a change in the label, we apply the replacement; otherwise, we ignore it. The BERT model's ability to generate contextually similar words makes it a powerful tool for tasks involving text transformation and data augmentation, and its effectiveness in natural language processing tasks is widely recognized. Evaluation Metrics To evaluate the performance of the transformation module, we utilize the accuracy of transformation Acc_tf (Equation <ref>), which measures how effectively the transformation module converts adversarial examples into non-adversarial examples: Acc_tf = N_ct / N_det, where N_ct is the number of examples correctly transformed by the TDT module and N_det is the number of examples detected as adversarial by the detector module. RQ3.3: Human Intervention We evaluate Acc_human (Equation <ref>) to measure the transformation module's ability to flag examples for human intervention when it fails to convert adversarial examples to non-adversarial labels (i.e., Transform_Error, Equation <ref>). These metrics provide feedback to security analysts and help improve the system's overall robustness: Transform_Error = (N_det ∧ N_in) / N_det and Acc_human = (N_det ∧ N_in ∧ HI) / (N_det ∧ N_in), where HI is a Boolean variable indicating whether the example requires human intervention, N_in is the number of examples incorrectly classified by the Final Label Prediction (FLP) module, and N_det is the number of examples detected as adversarial by the detector module.
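These TDT-level metrics (together with Acc_all from Equation <ref>) can be computed from per-example records as in the sketch below; the record fields and the assumption that a "correctly transformed" example is one whose final label matches the ground truth are illustrative.

```python
def tdt_metrics(records):
    """Compute Acc_all, Acc_tf, Transform_Error and Acc_human from per-example records.

    Each record is assumed to provide: ground-truth label `gt`, final predicted
    label `tf`, human-intervention flag `human`, and detector output `detected`.
    """
    n = len(records)
    acc_all = sum((r["gt"] == r["tf"]) or (r["gt"] != r["tf"] and r["human"])
                  for r in records) / n
    det = [r for r in records if r["detected"]]            # N_det
    failed = [r for r in det if r["gt"] != r["tf"]]        # N_det and N_in
    n_ct = sum(r["gt"] == r["tf"] for r in det)            # correctly transformed (N_ct)
    acc_tf = n_ct / len(det) if det else 0.0
    transform_error = len(failed) / len(det) if det else 0.0
    acc_human = sum(r["human"] for r in failed) / len(failed) if failed else 0.0
    return {"Acc_all": acc_all, "Acc_tf": acc_tf,
            "Transform_Error": transform_error, "Acc_human": acc_human}
```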
Generating Logs During the TDT phase, a critical step is to continuously gather insights from the Identification and Transformation phases, collecting information that is relevant to security analysts. Specifically, we log the example ID, the adversarial score assigned by D_adv to the example, P_cand and its corresponding word replacements, the transformed text, the ground-truth label, the original prediction y_pred of the model before transformation, the final prediction y_pred' of the model after transformation, the confidence of the final prediction, the human-intervention flag, the human-intervention message, and the detected-adversarial flag. We use the following human-intervention messages: (i) "DETECTED AS ADVERSARIAL EXAMPLE, But NO REPLACEMENT DONE, Requires Human Intervention", when no replacement was found for P_cand that transforms the example into a non-adversarial one; (ii) "Converted to non-ADVERSARIAL EXAMPLE", if our algorithm successfully transformed a detected adversarial example into a non-adversarial one; (iii) "GOT MORE ADVERSARIAL after substitutions Requires Human Intervention", if our algorithm transformed the example but the adversarial detector became more confident that it is adversarial; (iv) "Detected as non Adversarial", if the example is detected as non-adversarial by D_adv; and (v) "STILL ADVERSARIAL EXAMPLE After n tries, Requires Human Intervention", if the algorithm failed to convert a detected adversarial example to a non-adversarial one after n tries. We used n=3 in our experiments. We believe this information is essential for analysts to understand the attack scenarios, assess the severity of the attack, and develop appropriate countermeasures. By gathering these insights throughout the TDT phase, analysts can gain a comprehensive understanding of the adversarial attack and improve the overall security of the model. § EVALUATION RESULTS AND ANALYSIS The results of our investigation are presented in the subsequent subsections. §.§ RQ1: Explainability Analysis Table <ref> presents the comparison between the AGNEWS, IMDB, SST2, and YELP datasets and the BERT and ROBERTA models used in the study, with effect sizes computed from Cohen's d values; a small effect size lies between 0.2 and 0.5, a medium effect size between 0.5 and 0.8, and a large effect size above 0.8 <cit.>. Overall, both the Cohen's d effect sizes and the BF_10 values indicate that the Tr and Te (legitimate) examples differ from the Ad (adversarial) examples in both the A_map and I_grad distributions across all the datasets and models considered. This suggests strong evidence that the distributions of explainability scores differ between genuine and adversarial examples. Specifically, the Cohen's d effect size is high and the BF_10 values are infinite for all the datasets and models. A BF_10 value of 1 indicates anecdotal evidence in favor of the alternative hypothesis, values greater than 3 indicate substantial evidence, values greater than 10 indicate strong evidence, and values greater than 30 indicate very strong evidence; thus, the BF_10 values suggest strong evidence in favor of the alternative hypothesis that the distributions of explainability scores differ between genuine and adversarial examples. In addition, when comparing the distributions of explainability scores for genuine examples between the training and testing datasets, no significant difference was found; the Cohen's d effect sizes for this comparison ranged from 0.033 to 0.105, indicating a low effect size for all models and datasets.
However, the BF_10 values, while still high, were lower than the values obtained for the previous comparison between adversarial and genuine examples. In fact, for AGNEWS BERT and SST2 ROBERTA models, the BF_10 values were less than 3, specifically 1.338 and 0.564, respectively, which indicates no significant difference between training and testing distributions. We believe that the lack of difference between the training and testing datasets is due to the fact that both datasets contain genuine examples. Additionally, we observed that the Cohen's d effect sizes for A_map and I_grad differed for each distribution pair, indicating that these measures contribute differently to distinguishing between adversarial and legitimate examples. [title=Summary RQ1] The analysis suggests that the distributions of both A_map and I_grad can be utilized for distinguishing between adversarial and legitimate examples. §.§ RQ2: Performance Evaluation of Adversarial Detector Table <ref> presents the overall performance of multiple classifiers in detecting adversarial examples, addressing our research question. The results indicate that the LGBM and XGB models consistently outperform other classifiers, achieving an average MCC of 0.87, F1 score greater than 93.5%, and a balanced accuracy of 93.0%. Notably, the LGBM model achieved impressive results, with the highest MCC value of 0.929 for ROBERTA models trained on the YELP dataset. Furthermore, for the LGBM classifier, the False Negative Rate (FNR) ranged from 3.6% to 13.6%, while the False Positive Rate (FPR) ranged from 0.6% to 5%. On the other hand, the Logistic Regression (LR) and Decision Tree (DT) classifiers demonstrated weak performance in most cases, with an average MCC of 0.78 and 0.789, respectively. The Random Forest Tree (RFT) showed good performance (average MCC of 0.854) on some datasets, but it was outperformed by XGB and LGBM classifiers in most cases. Therefore, the ensemble classifiers (XGB, LGBM, RFT) are better for detecting adversarial examples. Additionally, there is no clear winner regarding the transformer models used, as BERT and ROBERTA perform similarly across the datasets. However, the classifiers performed comparatively better for the BERT model than ROBERTA, attaining an average MCC of 0.843 and 0.822, respectively. Moreover, it is crucial to highlight that accuracy (ACC) often overestimates performance, especially for imbalanced datasets like ours. For example, the LGBM model achieved an ACC of 95.5% for the AGNEWS ROBERTA model, while the balanced accuracy was only 86.4%. Thus, ACC alone is not a reliable measure to demonstrate the overall performance of the models in the detection task. Based on these findings, we have selected the LGBM classifier as our adversarial detector D_Adv. Analysis of D_Adv with attack type (RQ1.1) Fig <ref> visually compares our proposed method and the WDR technique on the test dataset. Our method outperforms WDR in detecting adversarial attacks in most cases (53 out of 56 combinations of dataset, model, and attack). However, there are a few instances where our method shows slightly lower performance, specifically in BAE attacks against AGNEWS (BERT and ROBERTA) and TF-ADJ attacks against AGNEWS (ROBERTA), with an average accuracy reduction of 0.017 compared to WDR. Our method achieves high accuracy (>90%) in detecting all attacks except BAE. Additionally, it maintains a fidelity of 97.6% in correctly identifying clean examples. 
In comparison, WDR performs best with an average accuracy of 75% for TF and PWWS attacks, but our method surpasses it with an average accuracy of 91%. Considering the different datasets, our method demonstrates highly accurate detection for models trained on the IMDB, YELP, and SST2 datasets, with an average accuracy of 95%. However, for AGNEWS, our method achieves a lower average accuracy of 83.5%. This discrepancy may be due to AGNEWS being a multi-class problem, in which the distribution of adversarial examples can overlap with some class data. Regarding the target models, our method performs equally well, with an average accuracy of 93.2% for BERT and 91.77% for ROBERTA across all datasets and attacks; interestingly, the same trend is observed for WDR, with 68.82% for BERT and 66.55% for ROBERTA. [title=Summary RQ2] The LGBM classifier performed best in detecting WSA, achieving an impressive average accuracy of 97%. Consequently, the LGBM model was selected as the adversarial detector (D_Adv) for the TDT phase. Moreover, our detector consistently outperformed WDR across all attack types. §.§ RQ3: Overall Performance of TDT The analysis of Fig <ref>(a) reveals that the TDT module achieved a median accuracy of over 86% for all attacks, indicating its efficacy in mitigating attacks while retaining the accuracy of clean examples. The TDT module achieved the highest median accuracy on the DWB and TB attacks, with 97.7% and 97.2% respectively, followed by TF, no attack, and PWWS, with accuracy levels of 93.7%, 92.9%, and 90.0%, respectively. These results indicate that the TDT module effectively mitigates various attacks while maintaining high accuracy on clean examples. Furthermore, the TDT module demonstrated moderate accuracy against the TF-ADJ, A2TY, and BAE attacks, with 88%, 87%, and 86%, respectively. Particularly noteworthy is its effectiveness against the A2TY attack, which represents a zero-day attack in this evaluation: the module achieved 87% accuracy, indicating its capability not only to counter known attack types but also to generalize well to unknown attacks. Moreover, we also observe that the overall accuracy of the TDT module is directly proportional to the adversarial detection accuracy, as D_adv was able to detect more than 80% of the examples correctly; for instance, the method correctly identified a median of 93.5% of DWB examples and 88.5% of clean examples. In terms of accurately identifying the final prediction label of the example, the TDT module achieved a median accuracy of more than 80% for all attacks, including the no-attack scenario, with the highest accuracy levels of 94%, 92.9%, 91.3%, and 90.9% on the DWB, TB, NO, and TF attacks, respectively. These findings suggest that the TDT module is a reliable and effective method for defending against a wide range of attacks while ensuring high accuracy on clean examples. Furthermore, Fig <ref>(b) and (c) present a performance evaluation of the TDT module across the considered datasets and transformer models. The TDT module attained a high overall median accuracy of 89.26% across all the datasets. Notably, the YELP dataset achieved 87.83% in correctly identifying the accurate label for the examples, despite only 68.20% of examples being correctly detected by the D_adv.
Additionally, the TDT module exhibited the highest overall accuracy of 95.60% on IMDB, with a ACC_Det of 84.82% and correct labelling of 92.37% examples by the FLP module. However, the AGNEWS dataset's overall accuracy was relatively poor, achieving only 87.41% ACC_overall, which may be attributed to its multi-class label. Regarding the surrogate transformer models, BERT and Roberta models both attained an approximately equal overall median accuracy of 92.91% and 90.73%, respectively. Interestingly, the median detection accuracy of attacks on the Roberta model was 6% higher than the BERT model. In comparison, the accuracy of obtaining a correct label by the FLP module was 6.47% higher for BERT than for Roberta. These results suggest that while attacks on the Roberta model are more easily detectable by the D_adv model than the BERT model (4.62%), transforming the example to a non-adversarial example is more difficult for the Roberta model than the BERT model. §.§.§ RQ3.1: Performance of Identification Module Fig  <ref> (a-c) displays a boxplot of the performance evaluation results of our identification module across different attack methods, datasets and ST_m model (section  <ref>). Figure  <ref> (a) presents an analysis of our algorithm's performance in identifying perturbed words under different attack methods. Our algorithm exhibits varying levels of effectiveness across these attacks. Notably, our algorithm demonstrates remarkable proficiency in detecting perturbed words in the PWWS, TF, A2TY, DWB, and TB attacks, as indicated by their higher median F1-scores (69%, 65%, 63.98%, 62%, 59.44% respectively) and median recalls (84%, 77%, 79.67%, 73.66%, and 76% respectively). The higher recall rates suggest that the method successfully identifies the perturbed words in over 70% of cases for these attacks. Furthermore, the algorithm exhibits lower median False Positive Rates (FPR) and False Negative Rates (FNR) below 14% and 31% respectively, indicating its ability to minimize both types of errors in these attack scenarios. Conversely, our algorithm did not perform as effectively in the BAE and TF-ADJ attacks, as it exhibited higher FNR rates exceeding 50%. These results highlight opportunities for improvement in these specific attack scenarios. Overall, our findings underscore the effectiveness of our algorithm in identifying perturbed words, particularly in the PWWS, TF, and DWB attacks. However, they also suggest the need for further enhancements in the BAE and TF-ADJ scenarios. In our assessment of the performance of our algorithm in identifying perturbed words across multiple datasets, several essential deductions can be drawn from the results presented in Fig <ref> (b). Firstly, it is apparent that the algorithm displays its most robust performance in the SST2, Yelp, and AGNEWS datasets, where it achieves a median F1-score and FNR of greater than 60% and less than 33%, respectively, indicating its effectiveness in accurately identifying perturbed words in these contexts. However, there is room for improvement in the IMDB dataset, where the algorithm exhibits higher rates of false negatives (a median of 43.75%), suggesting some missed detections. Notably, the algorithm maintains a relatively low FPR in all datasets, particularly for IMDB and Yelp datasets, where it is 8.76% and 9.90% , respectively, indicating its ability to minimize incorrect identifications. 
These findings underscore the algorithm's potential and offer insights for further refinement, namely reducing the FNR to ensure the precise identification of perturbed words across diverse datasets. Fig <ref> (c) presents the performance of our algorithm in detecting perturbed words generated against the BERT and RoBERTa models. The algorithm demonstrates commendable recall values for both models, with BERT achieving a median recall of 76.81% and RoBERTa a median recall of 73.50%, indicating its capability to capture a significant portion of the perturbed words. Furthermore, the precision values are reasonable, with BERT achieving a median precision of 59.75% and ROBERTA 61.84%; these results suggest that a substantial proportion of the words identified as perturbed by the algorithm are indeed correct detections. A key area for improvement lies in reducing the FNR for ROBERTA, which stands at a median value of 40.00%; reducing the FNR would enhance the algorithm's ability to detect all relevant perturbed words for ROBERTA. Further, to conduct a thorough analysis, we deconstructed the identification module into its sub-components (features) and evaluated their performance, as depicted in Fig <ref> (d), which presents the results of various feature combinations in detecting perturbed words within adversarial examples. Among the evaluated combinations, "EXP_FREQ_MODEL" stood out as the most effective in terms of recall, achieving a value of 75.59%; this indicates that the "EXP_FREQ_MODEL" combination has a strong ability to correctly identify a significant portion of the perturbed words present in the adversarial examples. Comparing the individual features against the "EXP_FREQ_MODEL" combination, we observe the following. The "EXP" feature exhibited an F1-score of 60.16% and a recall of 65.56%; while these values are relatively close to those of the "EXP_FREQ_MODEL" combination, it is evident that combining additional features leads to improved performance. The frequency-based "FREQ" feature demonstrated an F1-score of 59.01% and a recall of 58.33%; comparing these values to the "EXP_FREQ_MODEL" combination, it becomes apparent that incorporating other features, such as "MODEL", contributes to better performance. The "MODEL" feature achieved an F1-score of 57.76% and a recall of 60.33%; these values are lower than those of the "EXP_FREQ_MODEL" combination, indicating that the inclusion of the frequency-based and explainability-based features enhances the overall performance. This analysis suggests that combining the explainability-based, frequency-based, and model-based features in "EXP_FREQ_MODEL" leads to improved detection of perturbed words, as the additional features enable a more comprehensive analysis and ultimately a higher recall rate. However, the "EXP_FREQ_MODEL" combination still exhibits room for improvement: its FPR of 12.22% and FNR of 33.33% indicate the presence of false positives and false negatives, respectively, highlighting the need for further research and refinement to minimize these errors and enhance the overall performance of the feature combination.
In conclusion, the "EXP_FREQ_MODEL" combination outperformed individual features in terms of recall, indicating its effectiveness in identifying perturbed words within adversarial examples. However, continued research efforts should focus on optimizing the feature combination to reduce false positives and false negatives, thereby further improving its performance in practical applications. Our identification module demonstrated superior effectiveness in detecting perturbed words in PWWS, TF, and DWB attacks, with higher median F1-scores and recalls. The algorithm showcased robust performance in the SST2, Yelp, and AGNEWS datasets, accurately identifying perturbed words with median F1-scores above 60% and FNR below 33%. Furthermore, the algorithm achieved commendable recall values for BERT and RoBERTa models, indicating its ability to capture a significant portion of perturbed words while maintaining reasonable precision. §.§.§ RQ3.2: Performance of Transformation Module Fig <ref> (a) presents the box-plot representation of the performance evaluation of our transformation module using the ACC_Transform metric, which offers profound insights into the efficacy of our transformation module in converting adversarial examples to non-adversarial labels across different attack types. Among the assessed attacks, the DWB attack exhibited the highest median ACC_Transform score of 96.42%. This remarkable achievement underscores the exceptional ability of our transformation module to successfully convert a substantial proportion of DWB adversarial examples into their benign counterparts, thereby effectively neutralizing the adversarial perturbations. In contrast, the NO attack, characterized by examples falsely identified as adversarial (false positives), displayed a comparatively lower median ACC_Transform of 45.25%. This observation indicates that our transformation module adeptly preserved the inherent non-adversarial characteristics of these instances, aligning with its intended objective. The attainment of a lower median ACC_Transform for the NO attack is highly desirable as it signifies the transformation module's astute identification of these examples as non-adversarial and its prudent avoidance of unnecessary label alterations. Overall, our transformation module demonstrated commendable performance across the evaluated attacks, with median ACC_Transform scores ranging from 85% to 96.33% for all attacks. These outcomes substantiate the remarkable efficacy of our transformation module in mitigating the adversarial nature of the attacks on average, thereby reinforcing its practical value as a potent defense mechanism. Additionally, the performance analysis of the transformation module was conducted based on the datasets and transformer models, as illustrated in Fig <ref> (b-c). Regarding the datasets on which the transformer models were trained, distinct accuracy were obtained. The AGNEWS dataset demonstrated a median accuracy of 82.49%, indicating the substantial effectiveness of the transformation module in mitigating adversarial influences on models trained on the AGNEWS dataset, albeit with a slightly lower ACC_Transform compared to other datasets. For the IMDB, SST2, and YELP datasets, notably high median ACC_Transform scores of 95.35%, 93.51%, and 94.41% were achieved, respectively. 
These exceptional results underscore the robust performance of the transformation module in successfully restoring the non-adversarial nature of instances from these datasets, thereby affirming its efficacy in countering adversarial attacks and preserving the integrity of the original labels. Furthermore, in terms of the adversarial examples targeted at the models, our method achieved an impressive ACC_Transform score of 95.66% for the BERT model, highlighting the capability of the transformation module to neutralize the adversarial perturbations introduced to BERT-based models and to accurately restore the original non-adversarial labels. Similarly, for the ROBERTA model, a slightly lower but still commendable ACC_Transform score of 89.62% was observed, underscoring the efficacy of the transformation module in mitigating the adversarial impact on ROBERTA-based models and recovering the underlying non-adversarial semantics with notable success. The evaluation results demonstrate the transformation module's impressive performance (median 94.65%) in converting adversarial examples to non-adversarial ones, fortifying the robustness of the models. §.§.§ RQ3.3: Performance of Human Intervention Lastly, we analyse the performance of our TDT module in detecting situations where human intervention is required. We present the results in terms of the transformation error and ACC_human (see section <ref>), as shown by the boxplot illustrations in Fig <ref>. Regarding the attacks (Fig <ref>(a)), we observed that the TDT module achieved a median ACC_Human of 78.61% for the BAE attack, indicating its effectiveness in flagging cases that need human intervention. The DWB attack showed a lower ACC_Human of 53.57%, suggesting a moderate level of accuracy in identifying examples requiring human intervention. Notably, the NO attack exhibited a significantly lower ACC_Human of 24.49%, indicating that the TDT module correctly identified a large portion of non-adversarial examples without the need for human intervention; this outcome aligns with the objective of minimizing false positives for adversarial examples. For the PWWS, TB, TF, and TF-ADJ attacks, the TDT module demonstrated reasonable ACC_Human values of 70.24%, 92.86%, 73.88%, and 70.43%, respectively. In terms of Transform_Error, higher values indicate a lower rate of false positives for adversarial examples. The results revealed that the TDT module exhibited a relatively low Transform_Error of 3.58% for the DWB attack, indicating its effectiveness in correctly converting adversarial examples to non-adversarial ones; similarly, the PWWS, TB, and TF attacks showed Transform_Error values of 3.67%, 5.35%, and 3.88%, respectively, indicating the module's ability to successfully transform adversarial examples. The NO attack had a significantly higher Transform_Error of 54.75%, suggesting that the TDT module correctly identified a majority of non-adversarial examples without unnecessary human intervention. Analyzing the results across datasets (Fig <ref> (b)), we found that the TDT module achieved diverse ACC_Human values.
For the AGNEWS dataset, the module attained an ACC_Human of 53.57%, suggesting its ability to identify cases requiring human intervention in this context. The IMDB dataset exhibited a higher ACC_Human of 84.17%, indicating the module's effectiveness in flagging examples that need manual inspection and transformation. Remarkably, the SST2 dataset achieved a perfect ACC_Human of 100.00%, indicating that the TDT module accurately identified all cases that required human intervention. The YELP dataset showed a relatively lower ACC_Human of 43.08%, suggesting the need for improvement in identifying cases requiring human intervention. Across datasets, the AGNEWS dataset exhibited the highest Transform_Error of 17.51%, indicating a relatively higher rate of false positives for adversarial examples. The IMDB, SST2, and YELP datasets demonstrated Transform_Error values of 4.65%, 5.48%, and 5.59%, respectively, indicating the module's ability to successfully transform adversarial examples while minimizing false positives. Considering the models (Fig <ref> (c)), both BERT and ROBERTA achieved similar ACC_Human values of 66.11% and 67.33%, respectively. These results suggest that the TDT module performed comparably in flagging examples for human intervention for both models. For the BERT and ROBERTA models, the TDT module achieved Transform_Error values of 4.34% and 10.03%, respectively. These results suggest that the module performed slightly better in converting adversarial examples for the BERT model compared to the ROBERTA model. Our TDT module generates logs and threat intelligence reports that help the analyst understand the behaviour of our framework and examine the logs to identify vulnerabilities in the underlying target model. A sample threat intelligence report is shown in <ref>. In summary, the TDT module exhibited satisfactory performance in identifying cases requiring human intervention and effectively converting adversarial examples to non-adversarial ones. The results varied across attacks, datasets, and models, indicating the module's adaptability and effectiveness in different scenarios. These findings highlight the TDT module's value as a robust defense mechanism against adversarial attacks and its potential for deployment in real-world applications. [title= Overall Summary RQ3] The TDT module exhibits formidable defensive capabilities in countering adversarial attacks, effectively preserving high levels of accuracy while thwarting malicious attempts. In addition, it demonstrates exceptional proficiency in mitigating DWB and TB attacks, showcasing its effectiveness in neutralizing sophisticated adversarial perturbations. Moreover, its commendable performance transcends multiple datasets, with noteworthy efficacy observed, particularly in the challenging IMDB dataset. These salient features position the TDT module as an indispensable tool for researchers and industry practitioners, providing a reliable and robust solution to safeguard against adversarial threats. § DISCUSSION AND THREATS TO VALIDITY In this section, we discuss the overall insights gathered by conducting this study. Resiliency to Adaptive Attacks. The proactive and transformative nature of the IT-DT framework poses a significant challenge for adaptive attacks, particularly score-based attacks that heavily rely on the model's feedback during the attack process.
By actively transforming adversarial examples and disrupting the attacker's feedback loop, the framework hinders the attacker's ability to accurately assess the model's vulnerabilities or refine their attack strategy. The transformed examples yield different feedback or prediction outcomes, making it difficult for adversaries to adapt and circumvent the system's defenses. This proactive approach increases the complexity and cost of launching successful attacks, enhancing the overall resiliency and security of the defended system. Human-in-the-Loop. The IT-DT framework takes a step towards a human-in-the-loop approach by generating alerts for human intervention and leveraging security analysts' expertise to enhance robustness. Using the provided logs and interpretability, analysts can perform case analysis, identify attack patterns, and propose countermeasures, enabling the framework to adapt to new attack types. This collaborative process fosters trust, improves interpretability, and facilitates continuous monitoring and evaluation. Involving security analysts in the feedback loop is crucial to enhance the framework further. Establishing a seamless workflow for analysts to review alerted examples, provide feedback, and refine detection and transformation mechanisms will bolster the framework's robustness. Scalability and Efficiency. The IT-DT framework features a fast and efficient detection phase (<1 sec per example), enabling quick identification of adversarial examples. However, the transformation process, which involves replacing perturbed words, can be time-consuming. The complexity of the transformation mechanism is O(n × d), where n represents the number of words to replace and d denotes the number of possible replacements for each word. While parallelization techniques can accelerate the transformation using multiple GPUs, the availability of computational resources can limit the extent of parallelization. Despite the time constraints, the transformation step is crucial for converting adversarial examples into non-adversarial ones and improving the model's robustness. Future research can focus on optimizing the transformation process and exploring efficient parallelization methods to reduce the time required. §.§ Strengths Our framework demonstrates the capability to detect and transform adversarial examples (AEs) into non-adversarial variants, while also incorporating human intervention when necessary, as demonstrated in the previous section <ref>. Unlike a simplistic approach of discarding AEs upon detection <cit.>, our framework recognizes the importance of maintaining a positive user experience. It actively transforms AEs and provides meaningful responses to users, only resorting to blocking examples that require human intervention. This user-centric approach ensures that users receive appropriate feedback, minimizing frustration and confusion. Furthermore, our framework generates comprehensive logs and analytical reports, offering valuable insights into the characteristics of AEs, the effectiveness of the transformations, and potential vulnerabilities. These logs enable security analysts to gain a deeper understanding of adversarial attacks, refine defense strategies, and enhance the overall security posture of the system. Moreover, by actively transforming AEs and providing appropriate responses, our framework acts as a deterrent against adversaries. This proactive approach makes it more challenging for adversaries to adapt and circumvent the system's defenses.
For instance, in countering score-based attacks that heavily rely on the model's feedback during the attack process to craft more effective AEs <cit.>, our framework poses challenges: by actively transforming AEs and disrupting the attacker's feedback loop, it hinders the successful execution of such attacks. The transformed examples yield different feedback or prediction outcomes, making it difficult for attackers to accurately assess the model's vulnerabilities or refine their attack strategy based on the model's responses. Lastly, the IT-DT framework is designed to address adversarial examples in real-world scenarios across various domains. Its effectiveness extends beyond theoretical settings, making it suitable for deployment in practical applications where robust defenses against adversarial attacks are crucial. §.§ Threats to Validity The limitations and threats to the validity of our approach are as follows: Firstly, our approach focuses solely on defending transformer-based models against adversarial attacks. In the future, we aim to extend this approach to other model architectures such as convolutional neural networks (CNNs) or long short-term memory (LSTM) models. By incorporating explainability methods that do not rely solely on attention mechanisms, we aim to generalize our framework and enhance its applicability across different model types. Additionally, our results indicate that certain attacks, such as TF-ADJ and BAE, pose challenges for the detection of adversarial examples. Moreover, detection performance on certain datasets, such as AGNEWS, leaves room for improvement. To address these limitations, we intend to explore additional features and techniques that can enhance the detection capabilities of our framework and improve its robustness against a wider range of adversarial attacks. Furthermore, our experiments were conducted solely on offline adversarial examples. In the future, we plan to deploy our framework in a client-server setup and evaluate its performance in real-time scenarios, particularly for query-based attacks. By simulating real-time attack scenarios, we can assess the effectiveness and efficiency of our framework in detecting and transforming adversarial examples in practical, dynamic environments. Lastly, although our framework generates security logs, their utility and effectiveness in assisting security analysts have not been thoroughly evaluated. Future work will involve conducting utility evaluations and gathering feedback from security analysts to assess the practical value of the generated logs and converted adversarial examples to further enhance the framework's usability and effectiveness in real-world settings. Addressing these limitations and exploring future directions will contribute to the continued development and improvement of our framework, enabling it to provide robust defense against adversarial attacks across various model architectures, datasets, real-time scenarios, and practical security analysis. § CONCLUSION In conclusion, this study introduces the IT-DT framework, which protects transformer-based models from evasion by adversarial examples. Our approach involves training an adversarial detector, utilizing explainability and frequency-based features to distinguish between clean and adversarial examples.
We then propose the Test-Time Detection and AE Transformation (TDT) method, which deploys the adversarial detector at test time to identify adversarial examples and transform them into non-adversarial variants. Our experimental results demonstrate the effectiveness of our approach, achieving a median detection MCC of 0.846 and an overall median accuracy of 92.98% for TDT. Furthermore, our framework successfully identifies perturbations with a median recall of 74%, transforms adversarial examples with a median accuracy of 94.65%, and flags cases for human intervention with a median accuracy of 70.43%. Moving forward, we aim to extend our framework to non-transformer-based models, enhance the utility of generated logs by involving security analysts, and evaluate the framework's performance in real-time scenarios. These future endeavours will further strengthen the capabilities of our framework and broaden its applicability in defending against adversarial attacks.
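To summarize the test-time workflow described in the Discussion and Conclusion above, the sketch below outlines a detect-and-transform loop in Python. Every helper name here (detector, identify_perturbed_words, candidate_replacements, needs_human_review, log) is a hypothetical placeholder rather than the IT-DT implementation; the nested replacement loop simply makes explicit the O(n × d) transformation cost discussed under Scalability and Efficiency.

def tdt_pipeline(text, model, detector, identify_perturbed_words,
                 candidate_replacements, needs_human_review, log):
    # Fast detection phase: clean inputs pass straight through to the target model.
    if not detector(text):
        return model(text), "clean"
    # Identification phase: n suspected perturbed words.
    words = identify_perturbed_words(text)
    transformed = text
    for w in words:                               # O(n) suspected words ...
        for cand in candidate_replacements(w):    # ... times O(d) candidate substitutions
            trial = transformed.replace(w, cand, 1)
            if not detector(trial):               # keep the first substitution judged benign
                transformed = trial
                break
    # Escalate low-confidence transformations to a security analyst.
    if needs_human_review(text, transformed):
        log({"input": text, "transformed": transformed, "action": "human_review"})
        return None, "escalated"
    log({"input": text, "transformed": transformed, "action": "transformed"})
    return model(transformed), "transformed"

In practice the inner replacement loop dominates the runtime, which is why the Discussion singles it out as the main target for parallelization.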
http://arxiv.org/abs/2307.02120v1
20230705084819
Multilingual Controllable Transformer-Based Lexical Simplification
[ "Kim Cheng Sheang", "Horacio Saggion" ]
cs.CL
[ "cs.CL" ]
§ INTRODUCTION Lexical Simplification (LS) is a process of reducing the lexical complexity of a text by replacing difficult words with simpler substitutes or expressions while preserving its original information and meaning <cit.>. For example, in Figure <ref>, the word “motive” is selected as a complex word, which is replaced by the word “reason”. Meanwhile, simplification can also be carried out at the syntax level, reducing a text's syntactic complexity. The task is called Syntactic Simplification (SS). Both LS and SS tasks are commonly used as sub-tasks of the broader task of Automatic Text Simplification <cit.>, which reduces the text's lexical and syntactic complexity. LS systems are commonly composed of different combinations of components such as 1) complex word identification; 2) substitute generation or extraction; 3) substitute filtering; 4) substitute ranking; and 5) morphological and contextual adaptation <cit.>.
Previous works on LS have relied on an unsupervised approach <cit.>, and many other systems are module-based <cit.>, requiring a pipeline of modules such as substitute generation, substitution selection, substitution filtering, and substitution ranking. The downside of the pipeline approach is that it is known to propagate errors across modules. In sheang-etal-2022-controllable, we proposed an end-to-end controllable LS system. However, this model lacks multilinguality; therefore, here we extend that work to show how it can be ported to other languages by jointly learning different languages simultaneously. We present the following contributions: * We improve the English monolingual LS model and propose a new multilingual LS model for English, Spanish, and Portuguese[The source code and data are available at <https://www.github.com/kimchengsheang/mTLS>]. * We show how to fine-tune a multilingual LS model by adding language-specific prefixes, control tokens, and Masked Language Model (MLM) candidates extracted from BERT-based pre-trained models. * We have conducted an extensive analysis comparing our models with several evaluation metrics, which allows us to capture the strengths and weaknesses of our approach. The rest of the paper is organized as follows: Section <ref> presents some related work on Lexical Simplification. Section <ref> explains our proposed model in detail. Section <ref> describes all the datasets being used, the baselines, the evaluation metrics, how the data is prepared, and the experimental setup. Section 5 discusses the results of the experiments, while Section 6 concludes the paper. § RELATED WORK Prior works on Lexical Simplification were mainly based on unsupervised approaches. Belder2010 used Latent Words Language Models to reduce text complexity for children. horn_learning_2014 proposed a Support Vector Machines (SVM) model trained on an automatic alignment between normal Wikipedia and Simple Wikipedia text. glavas_simplifying_2015 proposed an approach that utilized GloVe embeddings <cit.> for candidate generation and ranked the candidates using different features extracted from language models and word frequency. qiang_lsbert_2021 proposed LSBert, a LS system that uses a Masked Language Model (MLM) approach to extract candidates from a pre-trained BERT model <cit.> and rank them by different features such as MLM probability, word frequency, language model, similarity (FastText cosine similarity), and PPDB data <cit.>. martin_controllable_2019 was the first to introduce ACCESS, a Controllable Sentence Simplification system based on a sequence-to-sequence model, trained with four tokens: number of characters token, Levenshtein similarity token, Word Rank token (the inverse frequency order extracted from FastText), and dependency tree depth. These four tokens are used to control different aspects of the output sentences: 1) sentence compression, 2) the amount of paraphrasing, 3) lexical complexity, and 4) syntactical complexity. The approach was later adopted by sheang_controllable_2021 (fine-tuned with T5 <cit.>), martin_muss_2022 (fine-tuned with BART <cit.>), and maddela_controllable_2021 (fine-tuned with a larger T5). In sheang-etal-2022-controllable, we introduced ConLS, the first controllable Lexical Simplification system fine-tuned with T5 using three tokens: Word Length token, Word Rank token, and Candidate Ranking token.
The three tokens were used to control different aspects of the generated candidates: Word Length is often correlated with word complexity, Word Rank is the frequency order (word complexity is also correlated with frequency), and Candidate Ranking is for the model to learn how to rank the generated candidates through training. The model was fine-tuned with T5-large on TSAR-EN dataset <cit.> and tested on LexMTurk <cit.>, BenchLS <cit.>, and NNSeval <cit.>. There have been some works on Lexical Simplification for Spanish, namely, moreno_lexical_2019 proposed readability and understandability guidelines, alarcon_easier_2021 released the EASIER dataset, and alarcon_exploration_2021 explored the use of different word embeddings models from complex word identification, to substitute generation, selection, and ranking. In this work, we extend our previous work of ConLS, addressing multilinguality along with adding two new control tokens (Word Syllable and Sentence Similarity) and Masked Language Model candidates to improve the model's performance. § METHOD Building upon the work of ConLS, we propose a new multilingual controllable Transformer-based Lexical Simplification model that integrates language-specific prefixes alongside the control tokens and masked language model candidates to leverage the input-level information. We adopted the same three tokens from ConLS (Word Length, Word Rank, and Candidate Ranking) and integrated two additional tokens (Word Syllables and Sentence Similarity). We fine-tuned our English monolingual model with T5 <cit.> and multilingual model with mT5 <cit.>. Figure <ref> shows an overview of our multilingual model where each input is a sentence with a complex word annotated, and the output is a list of substitutes ranked from the most relevant and simplest to the least. The details of the Preprocessor are described in Section <ref>. Language-specific Prefixes are embedded into each input so that the model knows and learns to differentiate the three languages. We used three prefixes: “simplify en:” for English, “simplify es:” for Spanish, and “simplify pt:” for Portuguese. In addition, these prefixes serve another purpose. Due to the limited data for Spanish and Portuguese, training individual models for Spanish and Portuguese would make the model unable to generalize well, so to tackle this issue, we jointly trained the three languages in just one model. This way, all the weights are learned and shared between the three languages during the training. Control Tokens The following are the control tokens that were employed in our model to control different aspects of the generated candidates. Word Length, Word Rank (word frequency), and Word Syllables are known to be correlated well with word complexity, so we use them to help select simpler candidates. Candidate Ranking is used to help the model learn how to rank candidates through the training process so that, at the inference, the model could generate and sort candidates automatically, whereas Sentence Similarity is intended to help select relevant candidates based on semantic similarity. * Word Length (WL) is the proportion of character length between a complex word and its substitute. It is calculated by dividing the number of characters in the substitute by the number of characters in the complex word. * Word Rank (WR) is the inverse frequency of the substitute divided by that of the complex word. The frequency order is extracted from the FastText pre-trained model for its corresponding language. 
Words in FastText pre-trained model are sorted by frequency in descending order[<https://fasttext.cc/docs/en/crawl-vectors.html>]. * Word Syllables (WS) is the ratio of the number of syllables of the substitute divided by that of the complex word. It is extracted using PyHyphen library[https://github.com/dr-leo/PyHyphen]. The study of shardlow_complex_2020 shows that syllable count could help predict lexical complexity. * Candidate Ranking (CR) is the ranking order extracted from gold candidates in the training set and normalized to the following values: 1.00 for the first rank, 0.75 for the second rank, 0.5 for the third rank, 0.25 for the fourth rank, and 0.10 for the rest. For the validation set and test set, we set the value to 1.00 for each instance, as we already knew that the best ranking value is 1.00. * Sentence Similarity (SS) is the normalized sentence similarity score between the source and the target sentence. The target sentence is the source sentence with the complex word replaced by its substitute. The score is calculated with the cosine similarity between the embeddings of the two sentences extracted from Sentence-BERT <cit.>. This similarity score gives us a measure of the relation between the two sentences. In the experiments, we used the pre-trained model called “multi-qa-mpnet-base-dot-v1”[<https://www.sbert.net/docs/pretrained_models.html>] because it achieved the best performance on semantic search (tested on 6 datasets) and supported different languages such as English, Spanish, Portuguese, and more. Masked Language Model (MLM) Candidates The candidates are extracted using the masked language model approach following the same style as LSBert candidates generation. For each input sentence and its complex word, we give the model (e.g., BERT, RoBERTa) the sentence and the same sentence with the complex word masked. E.g., The motive for the killings was not known. </s> The [MASK] for the killings was not known. We then ask the model to predict the [MASK] token candidates and rank them by the returned probability scores. After that, we select only the top-10 ranked candidates and append them to the end of each input. We believe that adding the MLM candidates to the input sentence could help the model find and select better candidates. More details about how we chose the best pre-trained model for each dataset are described in Section <ref>. § EXPERIMENTS In this section, we describe in detail all the datasets, baselines, evaluation metrics, data preparation steps, model details, training, and evaluation procedures. §.§ Datasets In our experiments, we used monolingual English datasets such as LexMTurk <cit.>, BenchLS[<https://doi.org/10.5281/zenodo.2552393>] <cit.>, NNSeval[<https://doi.org/10.5281/zenodo.2552381>] <cit.>, and a multilingual dataset, TSAR-2022 shared dataset <cit.>. TSAR-2022 dataset contains three subsets: TSAR-EN for English, TSAR-ES for Spanish, and TSAR-PT for Brazilian Portuguese. Table <ref> shows three examples from the TSAR-2022 dataset, one from each language, and Table <ref> shows some statistics of the datasets. The average number of tokens (Avg #Tokens) shows that, on average, TSAR-ES has the longest text length, and TSAR-PT has the shortest text length. All datasets that are used in the experiments already have complex words annotated, so the complex word identification module is not needed. 
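Before turning to the baselines, the following minimal sketch makes the control-token and MLM-candidate definitions above concrete for a single complex-word/substitute pair. It is an illustration under stated assumptions, not the authors' released code: the helper arguments fasttext_rank and count_syllables are placeholders for the FastText frequency-order lookup and the PyHyphen syllable counter, and the similarity score is assumed to be plain cosine similarity. The Sentence-BERT checkpoint is the one named above, and "roberta-base" is one of the BERT-based checkpoints considered for English.

from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

sbert = SentenceTransformer("multi-qa-mpnet-base-dot-v1")  # SS model named above
fill_mask = pipeline("fill-mask", model="roberta-base")    # one MLM option for English

def candidate_ranking(gold_rank):
    # CR: 1.00 / 0.75 / 0.50 / 0.25 for the first four gold ranks, 0.10 otherwise.
    return {1: 1.00, 2: 0.75, 3: 0.50, 4: 0.25}.get(gold_rank, 0.10)

def control_tokens(sentence, complex_word, substitute, gold_rank,
                   fasttext_rank, count_syllables):
    wl = len(substitute) / len(complex_word)                          # Word Length ratio
    wr = fasttext_rank(substitute) / fasttext_rank(complex_word)      # Word Rank (frequency-order) ratio
    ws = count_syllables(substitute) / count_syllables(complex_word)  # Word Syllables ratio
    target = sentence.replace(complex_word, substitute, 1)
    embeddings = sbert.encode([sentence, target])
    ss = float(util.cos_sim(embeddings[0], embeddings[1]))            # Sentence Similarity
    return {"CR": candidate_ranking(gold_rank), "WL": wl, "WR": wr, "WS": ws, "SS": ss}

def mlm_candidates(sentence, complex_word, top_k=10):
    # "original sentence </s> sentence with the complex word masked", as in the example above.
    masked = sentence + " </s> " + sentence.replace(complex_word, fill_mask.tokenizer.mask_token, 1)
    return [pred["token_str"].strip() for pred in fill_mask(masked, top_k=top_k)]

At training time these values are computed against every gold candidate of an instance, while at validation and test time all tokens default to 1.00, as described in the Preprocessing subsection below.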
§.§ Baselines We compare the proposed models with the following strong baselines: LSBert uses Bert Masked Language Model (MLM) for candidate generation and ranks them by MLM probability, word frequency, language model, similarity (FastText cosine similarity), and PPDB database. ConLS is a controllable LS system fine-tuned on the T5 model with three control tokens. The candidate generation and ranking are learned through the fine-tuning process. Systems from the TSAR-2022 shared task: * CILS <cit.> generates candidates using language model probability and similarity score and ranks them by candidate generation score and cosine similarity. * PresiUniv <cit.> uses the Masked Language Model (MLM) for candidate generation and ranks them by cosine similarity and filters using the part-of-speech check. * UoM&MMU <cit.> uses a Language Model with prompts for candidate generation and ranks them by fine-tuning the Bert-based model as a classifier. * PolyU-CBS <cit.> generates candidates using MLM and ranks them by MLM probability, GPT-2 probability, sentence probability, and cosine similarity. * CENTAL <cit.> generate candidates using MLM and ranks them by word frequency and a binary classifier. * teamPN <cit.> generates candidates using MLM, VerbNet, PPDB database, and Knowledge Graph and ranks them by MLM probability. * MANTIS <cit.> generates candidates using MLM and ranks them by MLM probability, word frequency, and cosine similarity. * UniHD <cit.> uses prompts with GPT-3 (few-shot learning) for candidate generation and ranks them by aggregating the results. * RCML <cit.> generates candidates using lexical substitution and ranks them by part of speech, BERTScore, and SVM classifier. * GMU-WLV <cit.> generates candidates using MLM and ranks them by MLM probability and word frequency. * TSAR-LSBert is a modified version of the original LSBert to support Spanish and Portuguese and produce more candidates. * TSAR-TUNER is an adaptive version of the TUNER system (a rule-based system) <cit.> for the TSAR-2022 shared task. §.§ Evaluation Metrics We adopted the same evaluation metrics used in TSAR-2022 shared task <cit.>. The metrics used are as follows: * Accuracy@1 (ACC@1): the percentage of instances with the top-ranked candidate in the gold candidates. * Accuracy@N@Top1 (ACC@N@Top1): The percentage of instances where at least one of the top N predicted candidates match the most suggested gold candidates. * Potential@K: the percentage of instances where at least one of the top K predicted candidates are present in the gold candidates. * Mean Average Precision@K (MAP@K): the metric measures the relevance and ranking of the top K predicted candidates. To measure different aspects of the system's performance, we measured the results for different numbers of N and K candidates where N ∈ {1, 2, 3} and K ∈ {3, 5, 10}. ACC@1, MAP@1, and Potential@1 give the same results as per their definitions, so we report all of them as ACC@1 in the final results. §.§ Preprocessing For each instance in the training set, there is a sentence, a complex word, and a list of ranked gold candidates. Thus, we compute the token values between the complex word and each candidate (we used all the candidates), which means if there are 9 candidates, there will be 9 training examples created. Figure <ref> shows the preprocessing steps of an English sentence taken from the TSAR-EN dataset. The sentence contains the complex word "motive" and 9 ranked gold candidates; therefore, 9 training examples will be created. 
For each candidate and the complex word, we compute the tokens value, extract MLM candidates, and put all the values in the following format. Language prefix + Control Tokens + the input sentence with the complex word embedded in between [T] and [/T] + </s> + complex word + MLM candidates. For Spanish and Portuguese datasets, we follow the same process and change the prefix to “simplify es:” for Spanish and “simplify pt:” for Portuguese. For the validation set, we follow the same format as the training set, except all the token values are set with the values of 1.00. E.g., <CR_1.00> <WL_1.00> <WR_1.00> <WS_1.00> <SS_1.00>. We used these default values so that we could validate the model during the fine-tuning process and save the best model for evaluation. To choose the best pre-trained models for MLM candidates extraction, we ran a series of experiments on some of the most popular BERT-based pre-trained models (the popularity is based on the number of downloads available on Huggingface website[<https://huggingface.co/models>]). We compared them using the Potential metric since this metric measures the presence of the predicted candidates, which are matched with the gold candidates. For each model and each instance of a dataset, we extracted the top 10 candidates and computed the Potential. Table <ref> in the Appendix reports the results of the TSAR dataset, and Table <ref> in the Appendix shows the results of the LexMTurk, BenchLS, and NNSeval dataset. We did the experiments on the top 5, 10, 15, 20, 30, 40, and 50 candidates, and we found that the top 10 candidates worked the best in all of our experiments. So, these are the selected models that produce the best score in each dataset: “roberta-base” for TSAR-EN, “PlanTL-GOB-ES/roberta-base-bne” for TSAR-ES, “neuralmind/bert-large-portuguese-cased” for TSAR-PT, “bert-large-cased” for LexMTurk and BenchLS, and “bert-base-uncased” for NNSeval. §.§ Model Details In our experiments, we fine-tuned four different models: TLS-1, TLS-2, TLS-3, and mTLS. Each model was fine-tuned with the language prefix, control tokens, and MLM candidates, except for the TLS-3 model, which was without the MLM candidates. The following are the details of each model: * TLS-1 is an English monolingual based on T5-large. It was fine-tuned and validated with the TSAR-EN dataset (we split the dataset to 80% train, 20% validation) and then tested with LexMTurk, BenchLS, and NNSeval. This model is intended to compare with LSBert and ConLS. * TLS-2 is an English monolingual based on T5-large. It was fine-tuned, validated, and tested on the same dataset (TSAR-EN). The dataset was split into a 70% train, a 15% validation, and a 15% test. * TLS-3 (without MLM candidates) is an English monolingual based on T5-large. It was fine-tuned, validated, and tested on the TSAR-EN dataset. The dataset was split into a 70% train, a 15% validation, and a 15% test. * mTLS is a multilingual based on mT5-large. It was fine-tuned, validated, and tested with the whole TSAR-2022 dataset (TSAR-EN, TSAR-ES, TSAR-PT). We split the dataset of each language into a 70% train, a 15% validation, and a 15% test. We then preprocessed, randomized, and combined the data of all languages into one training and one validation sets. During the fine-tuning process, the model is randomly fed with parallel data (the source and target data created by the preprocessing steps as shown in Figure <ref>) from the three languages, allowing the model to learn and share all the weights. 
* The model TLS-2, TLS-3, and mTLS are intended to compare with the models from the TSAR-2022 shared task. In order to have a fair comparison between our model and the shared-task models, we only compared the results of the same 15% test sets. We implemented our approach using Huggingface Transformers library[https://huggingface.co] and Pytorch-lightning[https://lightning.ai]. Then we fine-tuned each model on an NVidia RTX 3090 GPU with a batch size of 4 (except mTLS, the batch size was set to 1 due to out-of-memory issues), gradient accumulation steps of 4, max sequence length of 210 (it was based on the number of tokens/wordpiece from all datasets), learning rate of 1e-5, weight decay of 0.1, adam epsilon of 1e-8. We fine-tuned it for 30 epochs, and if the model did not improve for four epochs, we saved the best model based on the highest validation score ACC@1@Top1 and stopped the fine-tuning process. All of our models took less than 15 epochs to converge. We used a Python library called Optuna <cit.> to perform hyperparameters search on T5-small and T5-base to speed up the process and then employed the same hyperparameters in the final larger models like T5-large and mT5-large. For the generation, we used beam search and set it to 15 to generate 15 candidates so that it is left with around 10 candidates after some filtering (duplicate or the candidate the same as the complex word). In addition, in our experiments, the performance of the models based on T5-small and T5-base performed lower than the model based on T5-large in all metrics. The same with the multilingual models mT5-small, mT5-base, and mT5-large, so for that reason, we only report the results of the models that are based on T5-large and mT5-large. §.§ At Inference For each model, we performed a tokens value search on the validation set of each corresponding dataset using Optuna <cit.> (the same tool used for hyperparameters search). We searched the value of each token ranging between 0.5 and 2.0 with the step of 0.05, but we skipped the search for the Candidate Ranking token as we already knew the best value of it would be 1.00 to obtain the best candidates. We ran the search for 200 trials, then selected the top 10 sets of values that maximized ACC@1@Top1 and used them for the evaluation of the test set. For each set of tokens, we kept them fixed for all instances of the whole test set. Finally, we report the results of the set that maximized ACC@1@Top1. Figure <ref> shows an example from the TSAR-EN test set and the simpler substitutes generated by our TLS-2 model. § RESULTS AND DISCUSSION In our experiments, we compared our model with all the systems submitted to the TSAR-2022 shared task on the TSAR dataset and the other two state-of-the-art models, LSBert and ConLS, on LexMTurk, BenchLS, and NNSeval datasets. We compared all of them with the same metrics used in the TSAR-2022 shared task, such as ACC@1, ACC@N@Top1, Potential@1, and MAP@K where N ∈ {1, 2, 3} and K ∈ {3, 5, 10}. Table <ref> presents the results of our model TLS-1 (a monolingual English model fine-tuned and validated on the TSAR-EN dataset) in comparison with LSBert and ConLS on LexMTurk, BenchLS, and NNSeval datasets. Our model achieves better results in all metrics across the board, and the results on Potential@K and MAP@K show a significant improvement. 
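For reference, the sketch below re-implements the reported metrics directly from their definitions in the Evaluation Metrics subsection. It is our own illustrative version rather than the official TSAR-2022 evaluation script; in particular, normalizing the average precision by K is an assumption about the exact MAP@K variant.

from collections import Counter

def acc_at_1(preds, golds):
    # preds: ranked candidate lists per instance; golds: gold candidate lists (with repeated suggestions).
    return sum(p[0] in g for p, g in zip(preds, golds)) / len(preds)

def acc_at_n_top1(preds, golds, n):
    # Hit if any of the top-n predictions equals one of the most frequently suggested gold candidates.
    hits = 0
    for p, g in zip(preds, golds):
        counts = Counter(g)
        top1 = {w for w, c in counts.items() if c == max(counts.values())}
        hits += any(c in top1 for c in p[:n])
    return hits / len(preds)

def potential_at_k(preds, golds, k):
    return sum(any(c in g for c in p[:k]) for p, g in zip(preds, golds)) / len(preds)

def map_at_k(preds, golds, k):
    # Average precision with binary relevance (a prediction is relevant if it appears among the gold candidates).
    total = 0.0
    for p, g in zip(preds, golds):
        gold, hits, ap = set(g), 0, 0.0
        for i, c in enumerate(p[:k], start=1):
            if c in gold:
                hits += 1
                ap += hits / i
        total += ap / k
    return total / len(preds)

Under these definitions ACC@1, MAP@1, and Potential@1 coincide, matching the note above.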
Table <ref> shows the results of our three models, English monolingual models (TLS-2, TLS-3), and multilingual model (mTLS), compared with all the systems from the TSAR-2022 shared task on the TSAR-EN dataset. Since all the models from the shared task are unsupervised approaches, we only compare the results on the same 15% test set. Our TLS-2 outperforms all the models in all metrics and performs equally to GPT-3 model (UniHD) on ACC@1 and ACC@1@Top1, it also performs significantly better on ACC@{2,3}@Top1 and MAP@{3,5,10} but lower on Potential@{3,5}. TLS-2 performs better than TLS-3 in all metrics except ACC@3@Top1, showing that adding MLM candidates does improve the model's performance. Our multilingual model (mTLS) performs better than the previous approaches, except for UniHD. The fact that the model's performance is notably inferior to its monolingual counterparts could be attributed to the following facts. First, the use of a multilingual model can reduce performance, as it contains a lot of irrelevant information from other languages. Second, the mT5-large pre-trained model is significantly larger than the T5-large, with around 1.2 billion parameters compared to 737 million of the T5-large. Given the large number of parameters that need to be updated, the mT5-large model requires significantly more data to learn from; therefore, we could not fine-tune the mT5-large model individually for Spanish or Portuguese. We had to fine-tune a multilingual model (mTLS) by randomly feeding the data from the three languages, allowing the model to learn and share all the weights. Table <ref> and Table <ref> present the results of our mTLS model in comparison with the TSAR-2022 official results on TSAR-ES and TSAR-PT datasets. Our model performs significantly better than all the participating systems in all metrics. However, there were unofficial results of UniHD that outperformed our mTLS model on TSAR-ES and TSAR-PT datasets. § CONCLUSION AND FUTURE WORK This paper proposed a new multilingual Controllable Transformer-based Lexical Simplification that integrates language-specific prefixes alongside dynamic control tokens and masked language model candidates to leverage the input-level information. This approach allows us to have the candidate generation and ranking within one model as well as multilingual. Moreover, our method enables the model to learn more effectively on the complex word and to have finer control over the generated candidates, leading the model to outperform all the previous state-of-the-art models in all datasets, including the GPT-3 model (UniHD) on some metrics. For future work, we want to explore the use of large language models (LLMs) like LLaMA <cit.> or MPT-7B[<https://www.mosaicml.com/blog/mpt-7b>] to perform instruction-based learning for Text Simplification. Recent work has shown that fine-tuning LLMs with instructions enables such models to achieve remarkable zero-shot capabilities on new tasks; this could have some potential for Text Simplification in situations where the training data is scarce. Moreover, since we only managed to assess the performance of our multilingual approach on a part of the TSAR-2022 corpus, we should explore ways to compare our trainable system with non-trainable ones in a more realistic setting. § ACKNOWLEDGEMENTS We thank the anonymous reviewers for their constructive comments and suggestions. 
We acknowledge partial support from the individual project Context-aware Multilingual Text Simplification (ConMuTeS) PID2019-109066GB-I00/AEI/10.13039/501100011033 awarded by Ministerio de Ciencia, Innovación y Universidades (MCIU) and by Agencia Estatal de Investigación (AEI) of Spain. We also acknowledge support from the project MCIN/AEI/10.13039/501100011033 under the Maria de Maeztu Units of Excellence Programme (CEX2021-001195-M) and partial support from Departament de Recerca i Universitats de la Generalitat de Catalunya.
http://arxiv.org/abs/2307.01166v2
20230703171850
Coupled Gradient Flows for Strategic Non-Local Distribution Shift
[ "Lauren Conger", "Franca Hoffmann", "Eric Mazumdar", "Lillian Ratliff" ]
cs.LG
[ "cs.LG", "math.AP" ]
We propose a novel framework for analyzing the dynamics of distribution shift in real-world systems that captures the feedback loop between learning algorithms and the distributions on which they are deployed. Prior work largely models feedback-induced distribution shift as adversarial or via an overly simplistic distribution-shift structure. In contrast, we propose a coupled partial differential equation model that captures fine-grained changes in the distribution over time by accounting for complex dynamics that arise due to strategic responses to algorithmic decision-making, non-local endogenous population interactions, and other exogenous sources of distribution shift. We consider two common settings in machine learning: cooperative settings with information asymmetries, and competitive settings where a learner faces strategic users. For both of these settings, when the algorithm retrains via gradient descent, we prove asymptotic convergence of the retraining procedure to a steady-state, both in finite and in infinite dimensions, obtaining explicit rates in terms of the model parameters. To do so we derive new results on the convergence of coupled PDEs that extend what is known for multi-species systems. Empirically, we show that our approach captures well-documented forms of distribution shifts like polarization and disparate impacts that simpler models cannot capture. § INTRODUCTION In many machine learning tasks, there are commonly sources of exogenous and endogenous distribution shift, necessitating that the algorithm be retrained repeatedly over time. Some of these shifts occur without the influence of an algorithm; for example, individuals influence each other to become more or less similar in their attributes, or benign forms of distributional shift occur <cit.>. Other shifts, however, are in response to algorithmic decision-making. Indeed, the very use of a decision-making algorithm can incentivize individuals to change or mis-report their data to achieve desired outcomes— a phenomenon known in economics as Goodhart's law. Such phenomena have been empirically observed, a well-known example being in <cit.>, where researchers observed a population in Colombia strategically mis-reporting data to game a poverty index score used for distributing government assistance. Works such as <cit.>, which investigate the effects of distribution shift over time on a machine learning algorithm, point toward the need for evaluating the robustness of algorithms to distribution shifts. Many existing approaches for modeling distribution shift focus on simple metrics like optimizing over moments or covariates <cit.>. Other methods consider worst-case scenarios, as in distributionally robust optimization <cit.>. However, when humans respond to algorithms, these techniques may not be sufficient to holistically capture the impact an algorithm has on a population. For example, an algorithm that takes into account shifts in a distribution's mean might inadvertently drive polarization, rendering a portion of the population disadvantaged. Motivated by the need for a more descriptive model, we present an alternative perspective which allows us to fully capture complex dynamics that might drive distribution shifts in real-world systems.
Our approach is general enough to capture various sources of exogenous and endogenous distribution shift including the feedback loop between algorithms and data distributions studied in the literature on performative prediction <cit.>, the strategic interactions studied in strategic classification <cit.>, and also endogenous factors like intra-population dynamics and distributional shifts. Indeed, while previous works have studied these phenomena in isolation, our method allows us to capture all of them as well as their interactions. For example, in <cit.>, the authors investigate the effects of dynamics in strategic classification problems— but the model they analyze does not capture individual interactions in the population. In <cit.>, the authors model the interaction between an algorithm and a population that repeatedly responds to algorithmic decision-making by shifting its mean. Additionally, <cit.> study settings in which the population has both exogenous and endogenous distribution shifts due to feedback, but much like the other cited work, the focus remains on average performance. Each of these works fails to account for diffusion or intra-population interactions that can result in important qualitative changes to the distribution. Contributions. Our approach to this problem relies on a detailed non-local PDE model of the data distribution which captures each of these factors. One term driving the evolution of the distribution over time captures the response of the population to the deployed algorithm, another draws on models used in the PDE literature for describing non-local effects and consensus in biological systems to model intra-population dynamics, and the last captures a background source of distribution shift. Coupling this with an ODE (lifted to a PDE) that describes the training of a machine learning algorithm results in a coupled PDE system, which we analyze to better understand the behaviors that can arise among these interactions. In one subcase, our model exhibits a joint gradient flow structure, where both PDEs can be written as gradient flows with respect to the same joint energy, but considering infinite dimensional gradients with respect to the different arguments. This mathematical structure provides powerful tools for analysis and has been an emerging area of study with a relatively small body of prior work, none of which relates to distribution shifts in societal systems, and a general theory for multi-species gradient flows is still lacking. We give a brief overview of the models that are known to exhibit this joint gradient flow structure: in <cit.> the authors consider a two-species tumor model with coupling through Brinkman's Law. A number of works consider coupling via convolution kernels <cit.> and cross-diffusion <cit.>, with applications in chemotaxis among other areas. In the models we consider here, the way the interaction between the two populations manifests is neither via cross-diffusion, nor via the non-local self-interaction term. A related type of coupling has recently appeared in <cit.>, however in the setting of graphs. Recent work <cit.> provides particle-based methods to approximately compute the solution to a minimax problem where the optimization space is over measures; following that work, <cit.> provides another particle-based method using mirror descent-ascent to solve a similar problem.
Other recent work <cit.> proves that a mean-field gradient ascent-descent scheme with an entropy annealing schedule converges to the solution of a minimax optimization problem with a timescale separation parameter that is also time-varying; in contrast, our work considers fixed timescale separation setting. <cit.> show that the mean-field description of a particle method for solving minimax problems has proveable convergence guarantees in the Wasserstein-Fisher-Rao metric. Each of these references considers an energy functional that is linear in the distribution of each species respectively; our energy includes nonlinearities in the distributions via a self-interaction term as well as diffusion for the population. Moreover, the above works introduce a gradient flow dynamic as a tool for obtaining and characterizing the corresponding steady states, whereas in our setting we seek to capture the time-varying behavior that models distributions shifts. In the other subcase, we prove exponential convergence in two competitive, timescale separated settings where the algorithm and strategic population have conflicting objectives. We show numerically that retraining in a competitive setting leads to polarization in the population, illustrating the importance of fine-grained modeling. § PROBLEM FORMULATION Machine learning algorithms that are deployed into the real world for decision-making often become part of complex feedback loops with the data distributions and data sources with which they interact. In an effort to model these interactions, consider a machine learning algorithm that has loss given by L(z,x) where x∈^d are the algorithm parameters and z∈^d are the population attributes, and the goal is to solve _x∈_z∼ρ L(z,x), where is the class of model parameters and ρ(z) is the population distribution. Individuals have an objective given by J(z,x) in response to a model parameterized by x, and they seek to solve _z ∈^d J(z,x). When individuals in the population and the algorithm have access to gradients, we model the optimization process as a gradient-descent-type process. Realistically, individuals in the population will have nonlocal information and influences, as well as external perturbations, the effects of which we seek to capture in addition to just minimization. To address this, we propose a partial differential equation (PDE) model for the population, that is able to capture nonlocal interactions between individuals on the level of a collective population. To analyse how the population evolves over time, a notion of derivative in infinite dimensions is needed. A natural, and in this context physically meaningful, way of measuring the dissipation mechanism for probability distributions is the Wasserstein-2 metric (see Definition <ref>). The following expression appears when computing the gradient of an energy functional with respect to the Wasserstein-2 topology. [First Variation] For a map G:(^d)↦ and fixed probability distribution ρ∈(^d), the first variation of G at the point ρ is denoted by δ_ρ G[ρ]:^d→, and is defined via the relation ∫δ_ρ G[ρ](z)ψ(z)ẓ = lim_ϵ→ 01/ϵ (G(ρ+ϵψ)-G(ρ)) for all ψ∈ C_c^∞(^d) such that ∫ψ̣= 0, assuming that G is regular enough for all quantities to exist. Here, (^d) denotes the space of probability measures on the Borel sigma algebra. Using the first variation, we can express the gradient in Wasserstein-2 space, see for example <cit.>. The gradient of an energy G:_2(^d)→ in the Wasserstein-2 space is given by ∇_W_2 G(ρ)=-÷ρ∇δ_ρ G[ρ] . 
Here, _2(^d) denotes the set of probability measures with bounded second moments, also see Appendix <ref>. As a consequence, the infinite dimensional steepest descent in Wasserstein-2 space can be expressed as the PDE ∂_t ρ = -∇_W_2 G(ρ) = ÷ρ∇δ_ρ G[ρ] . All the coupled gradient flows considered in this work have this Wasserstein-2 structure. In particular, when considering that individuals minimize their own loss, we can capture these dynamics via a gradient flow in the Wasserstein-2 metric on the level of the distribution of the population. Then for given algorithm parameters x∈^d, the evolution for this strategic population is given by ∂_t ρ = ÷ρ∇δ_ρ[ _z∼ρ J(z,x) +E(ρ)], where E(ρ) is a functional including terms for internal influences and external perturbations. In real-world deployment of algorithms, decision makers update their algorithm over time, leading to an interaction between the two processes. We also consider the algorithm dynamics over time, which we model as ẋ = -∇_x [ _z∼ρ L(z,x) ]. In this work, we analyze the behavior of the dynamics under the following model. The algorithm suffers a cost f_1(z,x) for a data point z under model parameters x in the strategic population, and a cost f_2(z,x) for a data point in a fixed, non-strategic population. The strategic population is denoted by ρ∈, and the non-strategic population by ∈. The algorithm aims to minimize _z∼ρL(z,x)=∫ f_1(z,x) ρ̣(z) + ∫ f_2(z,x) (z) + β/2x-x_0^2 , where the norm is the vector inner product x^2=⟨ x,x⟩ and β>0 weights the cost of updating the model parameters from its initial condition. We consider two settings: (i) aligned objectives, and (ii) competing objectives. Case (i) captures the setting in which the strategic population minimization improves the performance of the algorithm, subject to a cost for deviating from a reference distribution ∈. This cost stems from effort required to manipulate features, such as a loan applicant adding or closing credit cards. On the other hand, Case (ii) captures the setting in which the strategic population minimization worsens the performance of the algorithm, again incurring cost from distributional changes. §.§ Case (i): Aligned Objectives In this setting, we consider the case where the strategic population and the algorithm have aligned objectives. This occurs in examples such as recommendation systems, where users and algorithm designers both seek to develop accurate recommendations for the users. This corresponds to the population cost _z∼ρ,x∼μ J(z,x) = ∬ f_1(z,x) ρ̣(z) μ̣(x) + α KL(ρ | ), where KL(· | ·) denotes the Kullback-Leibler divergence. Note that the KL divergence introduces diffusion to the dynamics for ρ. The weight α>0 parameterizes the cost of distribution shift to the population. To account for nonlocal information and influence among members of the population, we include a kernel term E(ρ)=1/2∫ρ W ∗ρ ẓ, where (W∗ρ)(z) = ∫ W(z-z̅)ρ̣(z̅) is a convolution integral and W is a suitable interaction potential. §.§ Case (ii): Competing Objectives In settings such as online internet forums, where algorithms and users have used manipulative strategies for marketing <cit.>, the strategic population may be incentivized to modify or mis-report their attributes. The algorithm has a competitive objective, in that it aims to maintain performance against a population whose dynamics cause the algorithm performance to suffer. 
When the strategic population seeks an outcome contrary to the algorithm, we model strategic population cost as _z∼ρ,x∼μ J(z,x) = -∬ f_1(z,x) ρ̣(z) μ̣(x) + α KL(ρ | ). A significant factor in the dynamics for the strategic population is the timescale separation between the two "species"—i.e., the population and the algorithm. In our analysis, we will consider two cases: one, where the population responds much faster than the algorithm, and two, where the algorithm responds much faster than the population. We illustrate the intermediate case in a simulation example. § RESULTS We are interested in characterizing the long-time asymptotic behavior of the population distribution, as it depends on the decision-makers action over time. The structure of the population distribution gives us insights about how the decision-makers actions influences the entire population of users. For instance, as noted in the preceding sections, different behaviors such as bimodal distributions or large tails or variance might emerge, and such effects are not captured in simply looking at average performance. To understand this intricate interplay, one would like to characterize the behavior of both the population and the algorithm over large times. Our main contribution towards this goal is a novel analytical framework as well as analysis of the long-time asymptotics. A key observation is that the dynamics in (<ref>) and (<ref>) can be re-formulated as a gradient flow; we lift x to a probability distribution μ by representing it as a Dirac delta μ sitting at the point x. As a result, the evolution of μ will be governed by a PDE, and combined with the PDE for the population, we obtain a system of coupled PDEs, ∂_t ρ = ÷ρ∇_z δ_ρ[ _z∼ρ,x∼μ J(z,x) +E(ρ)] ∂_t μ = ÷μ∇_x δ_μ[ _z∼ρ,x∼μ L(z,x) ], where δ_ρ and δ_μ are first variations with respect to ρ and μ according to Definition <ref>. The natural candidates for the asymptotic profiles of this coupled system are its steady states, which - thanks to the gradient flow structure - can be characterized as ground states of the corresponding energy functionals. In this work, we show existence and uniqueness of minimizers (maximizers) for the functionals under suitable conditions on the dynamics. We also provide criteria for convergence and explicit convergence rates. We begin with the case where the interests of the population and algorithm are aligned, and follow with analogous results in the competitive setting. We show convergence in energy, which in turn ensures convergence in a product Wasserstein metric. For convergence in energy, we use the notion of relative energy and prove that the relative energy converges to zero as time increases. The relative energy of a functional G is given by G(γ|γ_∞)=G(γ)-G(γ_∞), where G(γ_∞) is the energy at the steady state. Since we consider the joint evolution of two probability distributions, we define a distance metric on the product space of probability measures with bounded second moment. The metric over _2(^d)×_2(^d) is called and is given by ((ρ,μ),(ρ̃,μ̃))^2 = W_2(ρ,ρ̃)^2 + W_2(μ,μ̃)^2 for all pairs (ρ,μ),(ρ̃,μ̃) ∈_2(^d)×_2(^d), and where W_2 denotes the Wasserstein-2 metric (see Definition <ref>). We denote by (^d):=(_2(^d)×_2(^d),) the corresponding metric space. §.§ Gradient Flow Structure In the case where the objectives of the algorithm and population are aligned, we can write the dynamics as a gradient flow by using the same energy functional for both species. 
Let G_a(ρ,μ):(^d) ×(^d)↦ [0,∞] be the energy functional given by G_a(ρ,μ) = ∬ f_1(z,x) ρ̣(z) μ̣(x) +∬ f_2(z,x) (z) μ̣(x) + α KL(ρ|)+ 1/2∫ρ W ∗ρ + β/2∫x-x_0^2 μ̣(x). This expression is well-defined as the relative entropy KL(ρ | ) can be extended to the full set (^d) by setting G_a(ρ,μ)=+∞ in case ρ is not absolutely continuous with respect to . In the competitive case we define G_c(ρ,x):(^d) ×^d ↦ [-∞,∞] by G_c(ρ,x) = ∫ f_1(z,x) ρ̣(z) + ∫ f_2(x,z') (z') - α KL(ρ|) - 1/2∫ρ W ∗ρ + β/2x-x_0^2. In settings like recommender systems, the population and algorithm have aligned objectives; they seek to minimize the same cost but are subject to different dynamic constraints and influences, modeled by the regularizer and convolution terms. In the case where the objectives are aligned, the dynamics are given by ∂_t ρ = ÷ρ∇_z δ_ρ G_a[ρ,μ] ∂_t μ = ÷μ∇_x δ_μ G_a[ρ,μ]. Note that (<ref>) is a joint gradient flow, because the dynamics can be written in the form ∂_t γ =÷γ∇δ_γ G_a(γ) , where γ=(ρ,μ) and where the gradient and divergence are taken in both variables (z,x). We discuss the structure of the dynamics (<ref>) as well as the meaning of the different terms appearing in the energy functional G_a in Appendix <ref>. In other settings, such as credit score reporting, the objectives of the population are competitive with respect to the algorithm. Here we consider two scenarios; one, where the algorithm responds quickly relative to the population, and two, where the population responds quickly relative to the algorithm. In the case where the algorithm can immediately adjust optimally (best-respond) to the distribution, the dynamics are given by ∂_t ρ = -÷ρ(∇_z δ_ρ G_c[ρ,x])|_x=(ρ) , (ρ) _x̅ G_c(ρ,x̅) . Next we can consider the population immediately responding to the algorithm, which has dynamics x = -∇_x G_c(ρ,x)|_ρ=(x) , (x) _ρ̂∈ -G_c(ρ̂,x) . In this time-scale separated setting, model (<ref>) represents a dyamic maximization of G_c with respect to ρ in Wasserstein-2 space, and an instantaneous minimization of G_c with respect to the algorithm parameters x. Model (<ref>) represents an instantaneous maximization of G_c with respect to ρ and a dynamic minimization of G_c with respect to the algorithm parameters x. The key results on existence and uniqueness of a ground state as well as the convergence behavior of solutions depend on convexity (concavity) of G_a and G_c. The notion of convexity that we will employ for energy functionals in the Wasserstein-2 geometry is (uniform) displacement convexity, which is analogous to (strong) convexity in Euclidean spaces. One can think of displacement convexity for an energy functional defined on _2 as convexity along the shortest path in the Wasserstein-2 metric (linear interpolation in the Wasserstein-2 space) between any two given probability distributions. For a detailed definition of (uniform) displacement convexity and concavity, see Section <ref>. In fact, suitable convexity properties of the input functions f_1, f_2, W and will ensure (uniform) displacement convexity of the resulting energy functionals appearing in the gradient flow structure, see for instance <cit.>. We make the following assumptions in both the competitive case and aligned interest cases. Here, _d denotes the d× d identity matrix, f denotes the Hessian of f in all variables, while ∇^2_xf denotes the Hessian of f in the variable x only. 
[Convexity of f_1 and f_2] The functions f_1,f_2 ∈ C^2(^d×^d;[0,∞)) satisfy for all (z,x)∈^d×^d the following: * There exists constants λ_1,λ_2≥ 0 such that f_1≽λ_1 _2d and ∇^2_xf_2≽λ_2 _d; * There exist constants a_i>0 such that x ·∇_x f_i(z,x) ≥ -a_i for i=1,2; [Reference Distribution Shape] The reference distribution ∈(^d) ∩ L^1(^d) satisfies log∈ C^2(^d) and ∇^2_zlog(z) ≼ -λ̃_d for some λ̃> 0. [Convex Interaction Kernel] The interaction kernel W∈ C^2(^d;[0,∞)) is convex, symmetric W(-z)=W(z), and for some D>0 satisfies z·∇_z W(z) ≥ -D, |∇_z W(z)|≤ D(1+|z|) ∀ z∈^d . We make the following observations regarding the assumptions above: * The convexity in Assumption <ref> can be relaxed and without affecting the results outlined below by following a more detailed analysis analogous to the approach in <cit.>. * If f_1 and f_2 are strongly convex, the proveable convergence rate increases, but without strict or strong convexity of f_1 and f_2, the regularizers KL(ρ|) and ∫x-x_0_2^2 x̣ provide the convexity guarantees necessary for convergence. For concreteness, one can consider the following classical choices of input functions to the evolution: * Using the log-loss function for f_1 and f_2 satisfies Assumption <ref>. * Taking the reference measure to be the normal distribution satisfies Assumption <ref>, which ensures the distribution is not too flat. * Taking quadratic interactions W(z)=12|z|^2 satisfies Assumption <ref>. To complete the arguments on convergence to equilibrium, we require sufficient regularity of solutions to the PDEs under consideration. In fact, it is sufficient if we can show that equations (<ref>), (<ref>), and (<ref>) can be approximated by equations with smooth solutions. Albeit tedious, these are standard techniques in the regularity theory for partial differential equations, see for example <cit.>, <cit.>, <cit.>, and the references therein. Similar arguments as in <cit.> are expected to apply to the coupled gradient flows considered here, guaranteeing existence of smooth solutions with fast enough decay at infinity, and we leave a detailed proof for future work. §.§ Analysis of Case (i): Aligned Objectives The primary technical contribution of this setting consists of lifting the algorithm dynamics from an ODE to a PDE, which allows us to model the system as a joint gradient flow on the product space of probability measures. The coupling occurs in the potential function, rather than as cross-diffusion or non-local interaction as more commonly seen in the literature for multi-species systems. Suppose that Assumptions <ref>-<ref> are satisfied and let λ_a:=λ_1+min(λ_2+β, αλ̃)>0. Consider solutions γ_t:=(ρ_t,μ_t) to the dynamics (<ref>) with initial conditions satisfying γ_0∈_2(^d)×_2(^d) and G_a(γ_0)<∞. Then the following hold: (a) There exists a unique minimizer γ_∞ =(ρ_∞,μ_∞) of G_a, which is also a steady state for equation (<ref>). Moreover, ρ_∞∈ L^1(^d), has the same support as , and its density is continuous. (b) The solution γ_t converges exponentially fast in G_a (· | γ_∞) and , G_a(γ_t | γ_∞)≤ e^-2λ_a t G_a(γ_0 | γ_∞) and (γ_t,γ_∞) ≤ c e^-λ_a t for all t≥ 0 , where c>0 is a constant only depending on γ_0, γ_∞ and the parameter λ_a. (Sketch) For existence and uniqueness, we leverage classical techniques in the calculus of variations. To obtain convergence to equilibrium in energy, our key result is a new HWI-type inequality, providing as a consequence generalizations of the log-Sobolev inequality and the Talagrand inequality. 
Together, these inequalities relate the energy (classically denoted by H in the case of the Boltzmann entropy), the metric (classically denoted by W in the case of the Wasserstein-2 metric) and the energy dissipation (classically denoted by I in the case of the Fisher information)[Hence the name HWI inequalities.]. Combining these inequalities with Gronwall's inequality allows us to deduce convergence both in energy and in the metric . §.§ Analysis of Case (ii): Competing Objectives In this setting, we consider the case where the algorithm and the strategic population have goals in opposition to each other; specifically, the population benefits from being classified incorrectly. First, we will show that when the algorithm instantly best-responds to the population, then the distribution of the population converges exponentially in energy and in W_2. Then we will show a similar result for the case where the population instantly best-responds to the algorithm. In both cases, we begin by proving two Danskin-type results (see <cit.>) which will be used for the main convergence theorem, including convexity (concavity) results. To this end, we make the following assumption ensuring that the regularizing component in the evolution of ρ is able to control the concavity introduced by f_1 and f_2. [Upper bounds for f_1 and f_2] There exists a constant Λ_1>0 such that ∇_z^2 f_1(z,x) ≼Λ_1 I_d for all (z,x)∈^d×^d , and for any R>0 there exists a constant c_2=c_2(R)∈ such that sup_x∈ B_R(0)∫ f_2(z,x) (z) < c_2 . Equipped with Assumption <ref>, we state the result for a best-responding algorithm. Suppose Assumptions <ref>-<ref> are satisfied with αλ̃> Λ_1. Let λ_b := αλ̃- Λ_1. Define G_b(ρ) := G_c (ρ,(ρ)). Consider a solution ρ_t to the dynamics (<ref>) with initial condition ρ_0∈_2(^d) such that G_b(ρ_0)<∞. Then the following hold: (a) There exists a unique maximizer ρ_∞ of G_b(ρ), which is also a steady state for equation (<ref>). Moreover, ρ_∞∈ L^1(^d), has the same support as , and its density is continuous. (b) The solution ρ_t converges exponentially fast to ρ_∞ with rate λ_b in G_b(· | ρ_∞) and W_2, G_b(ρ_t | ρ_∞)≤ e^-2λ_b t G_b(ρ_0 | ρ_∞) and W_2(ρ_t,ρ_∞) ≤ c e^-λ_b t for all t≥ 0 , where c>0 is a constant only depending on ρ_0, ρ_∞ and the parameter λ_b. (Sketch) The key addition in this setting as compared with Theorem <ref> is proving that G_b(ρ) is bounded above and uniformly displacement concave, and guaranteeing its smoothness via Berge's Maximum Theorem. This is non-trivial as it uses the properties of the best response (ρ). A central observation for our arguments to work is that δ_ρ G_b [ρ] = (δ_ρ G_c [ρ,x])|_x=(ρ). We can then conclude using the direct method in the calculus of variations and the HWI method. Here, the condition that αλ̃ must be large enough corresponds to the statement that the system must be subjected to a strong enough regularizing effect. In the opposite case, where ρ instantly best-responds to the algorithm, we show Danskin-like results for derivatives through the best response function and convexity of the resulting energy in x, which allows us to deduce convergence. Suppose Assumptions <ref>-<ref> are satisfied with α>Λ_1. Define G_d(x) := G_c((x),x). Then it holds: (a) There exists a unique minimizer x_∞ of G_d(x) which is also a steady state for (<ref>).
(b) The vector x(t) solving the dynamics (<ref>) with initial condition x(0)∈^d converges exponentially fast to x_∞ with rate λ_d:=λ_1+λ_2+β>0 in G_d and in the Euclidean norm: x(t)-x_∞ ≤ e^-λ_d tx(0)-x_∞ , G_d(x(t))-G_d(x_∞) ≤ e^-2λ_d t(G_d(x(0))-G_d(x_∞)) for all t≥ 0. In the proof, we use that the best response of ρ given a particular x is differentiable with respect to x. This can be ensured by the condition outlined in Lemma <ref>. In Lemma <ref>, we provide examples of additional assumptions guaranteeing that this condition holds, making sure the best response function is in fact differentiable. Another approach is to show suitable bounds on the second derivative of G_d(x) following arguments in <cit.>. A more detailed analysis of this condition is an interesting direction for future research. These two theorems illustrate that, under sufficient convexity conditions on the cost functions, we expect the distribution ρ and the algorithm x to converge to a steady state. In practice, when the distributions are close enough to the steady state there is no need to retrain the algorithm. While we have proven results for the extreme timescale cases, we anticipate convergence to the same equilibrium in the intermediate cases. Indeed, it is well known <cit.> (especially for systems in Euclidean space) that two-timescale stochastic approximations of dynamical systems, with appropriate stepsize choices, converge asymptotically, and that finite-time high-probability concentration bounds can also be obtained. These results have been leveraged in strategic classification <cit.> and Stackelberg games <cit.>. We leave this intricate analysis to future work. In the following section we show numerical results in the case of a best-responding x, best-responding ρ, and in between where x and ρ evolve on a similar timescale. Note that in these settings, the dynamics do not have a gradient flow structure due to a sign difference in the energies, requiring conditions to ensure that one species does not dominate the other. § NUMERICAL EXAMPLES We illustrate numerical results for the case of a classifier, which is used in scenarios such as loan or government aid applications <cit.>, school admissions <cit.>, residency match <cit.>, and recommendation algorithms <cit.>, all of which have some population which is incentivized to submit data that will result in a desirable classification. For all examples, we select classifiers of the form x∈, so that a data point z∈ is assigned a label of 1 with probability q(z,x) = (1+exp(-b^⊤ z + x))^-1 where b>0. Let f_1 and f_2 be given by f_1(z,x) = -log (1-q(z,x)) , f_2(z,x) = -log q(z,x). Note that f_1≽ 0 and ∇_x^2f_2≽ 0, so λ_1=λ_2=0. Here, the strict convexity of the functional comes from the regularizers rather than the cost functions, with the reference distribution taken to be a scaled normal distribution. We show numerical results for two scenarios, with additional settings in the appendix. First we illustrate competitive interests under three different timescale settings. Then we simulate the classifier taking an even more naïve strategy than gradient descent and discuss the results. The PDEs were implemented based on the finite volume method from <cit.>. §.§ Competitive Objectives In the setting with competitive objectives, we utilize G_c(ρ,x) with W=0, f_1 and f_2 as defined above with b=3 fixed, as it only changes the steepness of the classifier for d=1, and α=0.1 and β=0.05.
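For reference, a minimal sketch of this setup is given below; the function names are ours, only the classifier, the two costs, and a simple grid-based best response for x are shown, and the finite volume solver from the cited reference is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch of the 1D numerical setup (function names are ours; the finite volume solver
# from the cited reference is not reproduced here).
b, alpha, beta, x0 = 3.0, 0.1, 0.05, 0.0

def q(z, x):                       # probability of being assigned label 1
    return 1.0 / (1.0 + np.exp(-b * z + x))

def f1(z, x):                      # algorithm's cost on a strategic data point
    return -np.log(1.0 - q(z, x))

def f2(z, x):                      # algorithm's cost on a non-strategic data point
    return -np.log(q(z, x))

# Grid densities for the strategic and the fixed population (illustrative Gaussians).
zs = np.linspace(-4.0, 4.0, 801)
dz = zs[1] - zs[0]
rho_bar = np.exp(-0.5 * (zs - 1.0) ** 2) / np.sqrt(2.0 * np.pi)
rho = np.exp(-0.5 * (zs + 1.0) ** 2) / np.sqrt(2.0 * np.pi)

def algo_loss(x, rho):             # x-dependent part of G_c: E_rho[f1] + E_bar[f2] + beta/2 |x - x0|^2
    return (np.sum(f1(zs, x) * rho) * dz
            + np.sum(f2(zs, x) * rho_bar) * dz
            + 0.5 * beta * (x - x0) ** 2)

# Best response of the algorithm: minimize the loss over the scalar parameter x.
best_x = minimize_scalar(lambda x: algo_loss(x, rho), bounds=(-10, 10), method="bounded").x
print("best-response parameter:", best_x)
```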
In Figure <ref>, we simulate two extremes of the timescale setting: first when ρ is nearly best-responding and then when x is best-responding. The simulations have the same initial conditions and end with the same distribution shape; however, the behavior of the strategic population differs in the intermediate stages. When ρ is nearly best-responding, we see that the distribution quickly shifts mass over the classifier threshold. Then the classifier shifts right, correcting for the shift in ρ, which then incentivizes ρ to shift more mass back to the original mode. In contrast, when x best-responds, the right-hand mode slowly increases in size until the system converges. Figure <ref> shows simulation results from the setting where ρ and x evolve on the same timescale. We observe that the distribution shift in ρ appears to fall between the two extreme timescale cases, which we expect. We highlight two important observations for the competitive case. One, a single-mode distribution becomes bimodal, which would not be captured using simplistic metrics such as the mean and variance. This split can be seen as polarization in the population, a phenomenon that a mean-based strategic classification model would not capture. Two, the timescale on which the classifier updates significantly impacts the intermediate behavior of the distribution. In our example, when x updated slowly relative to the strategic population, the shifts in the population were greater than in the other two cases. This suggests that understanding the effects of timescale separation is important for minimizing volatility of the coupled dynamics. §.§ Naïve Behavior In this example, we explore the results of the classifier adopting a non-gradient-flow strategy, where the classifier chooses an initially-suboptimal value for x and does not move, allowing the strategic population to respond. All functions and parameters are the same as in the previous example. When comparing with the gradient descent strategy, we observe that while the initial loss for the classifier is worse for the naive strategy, the final cost is better. While this result is not surprising, since one can view this as a general-sum game where the best response to a fixed decision may be better than the equilibrium, it illustrates how our method provides a framework for evaluating how different training strategies perform in the long run against a strategic population. § FUTURE DIRECTIONS AND LIMITATIONS Our work presents a method for evaluating the robustness of an algorithm to a strategic population, and investigating a variety of robustness properties using our techniques opens a range of future research directions. Our application suggests many questions relevant to the PDE literature, such as: (1) Does convergence still hold with the gradient replaced by an estimated gradient? (2) Can we prove convergence in between the two timescale extremes? (3) How do multiple dynamic populations respond to an algorithm, or multiple algorithms? In the realm of learning algorithms, our framework can be extended to other learning update strategies and presents a way to model how we can design these update strategies to induce desired behaviors in the population. A challenge in our method is that numerically solving high-dimensional PDEs is computationally expensive and possibly infeasible. Here we note that in many applications, agents in the population do not alter more than a few features due to the cost of manipulation.
We are encouraged by the recent progress using deep learning to solve PDEs, which could be used in our application. LC is supported by an NDSEG fellowhip from the Air Force Office of Scientific Research. FH is supported by start-up funds at the California Institute of Technology. LR is supported by ONR YIP N00014-20-1-2571 P00003 and NSF Awards CAREER 1844729, and CPS 1931718. EM acknowledges support from NSF Award 2240110. We are grateful for helpful discussions with José A. Carrillo. § GENERAL STRUCTURE AND PRELIMINARIES In this section, we give more details on the models discussed in the main article, and introduce definitions and notation that are needed for the subsequent proofs. §.§ Structure of the dynamics For the case of aligned objectives, the full coupled system of PDEs (<ref>) can be written as ∂_t ρ = αΔρ + ÷ρ∇_z (∫ f_1μ̣- αlog + W∗ρ) , ∂_t μ = ÷μ∇_x (∫ f_1ρ̣+ ∫ f_2+ β/2x-x_0^2) . In other words, the population ρ in (<ref>) is subject to an isotropic diffusive force with diffusion coefficient α>0, a drift force due to the time-varying confining potential ∫ f_1μ̣(t)-αlog, and a self-interaction force via the interaction potential W. If we consider the measure μ to be given and fixed in time, this corresponds exactly to the type of parabolic equation studied in <cit.>. Here however the dynamics are more complex due to the coupling of the confining potential with the dynamics (<ref>) for μ(t) via the coupling potential f_1. Before presenting the analysis of this model, let us give a bit more intuition on the meaning and the structure of these dynamics. In the setting where μ represents a binary classifier, we can think of the distribution as modelling all those individuals carrying the true label 1, say, and the distribution ρ(t) as modelling all those individuals carrying a true label 0, say, where 0 and 1 denote the labels of two classes of interest. The term ∫ f_1(z,x)μ(t,x̣) represents a penalty for incorrectly classifying an individual at z with true label 0 when using the classifier μ(t,x). In other words, ∫ f_1(z,x)μ(t,x̣) ∈ [0,∞) is increasingly large the more z digresses from the correct classification 0. Similarly, ∫ f_1(z,x)ρ(t,ẓ)∈ [0,∞) is increasingly large if the population ρ shifts mass to locations in z where the classification is incorrect. The terminology aligned objectives refers to the fact that in (<ref>) both the population and the classifier are trying to evolve in a way as to maximize correct classification. Analogously, the term ∫ f_2(z,x)(ẓ) is large if x would incorrectly classify the population that carries the label 1. A natural extension of the model (<ref>) would be a setting where also the population carrying labels 1 evolves over time, which is simulated in Section <ref>. Most elements of the framework presented here would likely carry over the setting of three coupled PDEs: one for the evolution of ρ(t), one for the evolution of (t) and one for the classifier μ(t). The term αΔρ - α÷ρ∇log = α÷ρ∇δ_ρ KL(ρ | ) forces the evolution of ρ(t) to approach . In other words, it penalizes (in energy) deviations from a given reference measure . In the context of the application at hand, we take to be the initial distribution ρ(t=0). The solution ρ(t) then evolves away from over time due to the other forces that are present. Therefore, the term KL(ρ | ) in the energy both provides smoothing of the flow and a penalization for deviations away from the reference measure . 
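As an illustration of the ρ-equation displayed above, the following sketch uses a crude explicit conservative discretization in one dimension, with the classifier frozen at a Dirac mass μ=δ_x. It is a simplified stand-in for the finite volume scheme used in our experiments; the grid, the interaction kernel, and all parameter values are illustrative choices of ours, and the scheme makes no claim of accuracy or stability beyond this toy setting.

```python
import numpy as np

# Crude explicit 1D discretization of the rho-equation (a simplified stand-in for the
# finite volume scheme cited in the main text; all names and parameters are ours):
#   d_t rho = alpha * rho_zz + d_z( rho * d_z U ),   U(z) = f1(z, x) - alpha*log(rho_bar) + W*rho
alpha, x, b = 0.1, 0.0, 3.0
zs = np.linspace(-4.0, 4.0, 401); dz = zs[1] - zs[0]
rho_bar = np.exp(-0.5 * zs**2) / np.sqrt(2.0 * np.pi)   # Gaussian reference measure
rho = rho_bar.copy()

def f1(z, x):
    return np.logaddexp(0.0, b * z - x)                  # -log(1 - q(z, x)), written stably

W = 0.5 * np.subtract.outer(zs, zs)**2                   # quadratic kernel W(z-z') = |z-z'|^2/2 (set to 0 to disable)

dt = 0.2 * dz**2 / alpha                                 # small explicit time step
for _ in range(2000):
    U = f1(zs, x) - alpha * np.log(rho_bar) + (W @ rho) * dz
    dU = np.gradient(U, dz)
    flux = alpha * np.gradient(rho, dz) + rho * dU       # total flux alpha*rho_z + rho*U_z
    flux[0] = flux[-1] = 0.0                             # no-flux boundary conditions
    rho = rho + dt * np.gradient(flux, dz)
    rho = np.clip(rho, 1e-12, None)
    rho /= rho.sum() * dz                                # re-normalize to control numerical drift
print("approximate steady-state mean:", (zs * rho).sum() * dz)
```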
The self-interaction term W∗ρ introduces non-locality into the dynamics, as the decision for any given individual to move in a certain direction is influenced by the behavior of all other individuals in the population. The choice of W is application dependent. Very often, the interaction between two individuals only depends on the distance between them. This suggests a choice of W as a radial function, i.e. W(z)=ω(|z|). A choice of ω:→ such that ω'(r)>0 corresponds to an attractive force between individuals, whereas ω'(r)<0 corresponds to a repulsive force. The statement |z|ω'(|z|) = z·∇_z W(z) ≥ -D in Assumption <ref> therefore corresponds to a requirement that the self-interaction force is not too repulsive. Neglecting all other forces in (<ref>), we obtain the non-local interaction equation ∂_tρ = ÷ρ∇ W∗ρ which appears in many instances in mathematical biology, mathematical physics, and material science, and it is an equation that has been extensively studied over the past few decades, see for example <cit.> and references therein. Using the results from these works, our assumptions on the interaction potential W can be relaxed in many ways, for example by allowing discontinuous derivatives at zero for W, or by allowing W to be negative. The dynamics (<ref>) for the algorithm μ is a non-autonomous transport equation, ∂_tμ = ÷μ v , where the time-dependence in the velocity field v(t,x):= ∇_x (∫ f_1(z,x)ρ̣(t,z) + ∫ f_2(z,x)(z) + β/2x-x_0^2) , comes through the evolving population ρ(t). This structure allows to obtain an explicit solution for μ(t) in terms of the initial condition μ_0 and the solution ρ(t) to (<ref>) using the method of characteristics. Assume that there exists a constant c>0 such that ∫∇_x f_1(z,x) ρ̣(z) + ∫∇_x f_2(z,x) (z) ≤ c(1+ x) ∀ρ∈_2(^d) and ∀ x∈^d . Then the unique distributional solution μ(t) to (<ref>) is given by μ(t) = Φ(t,0,·)_#μ_0 , where Φ(t,s,x) solves the characteristic equation ∂_sΦ(s,t,x) +v(s,Φ(s,t,x))=0 , Φ(t,t,x)=x . Thanks to Assumption <ref>, we have that v∈ C^1(×^d;^d), and by (<ref>), we have v(t,x)≤ c(1+x) for all t≥ 0, x∈^d . By classical Cauchy-Lipschitz theory for ODEs, this guarantees the existence of a unique global solution Φ(t,s,x) solving (<ref>). Then it can be checked directly that μ(t) as defined in (<ref>) is a distributional solution to (<ref>). In the characteristic equation (<ref>), Φ(s,t,x) is a parametrization of all trajectories: if a particle was at location x at time t, then it is at location Φ(s,t,x) at time s. Our assumptions on f_1, f_2 and also ensure that Φ(s,t,·):^d→^d is a C^1-diffeomorphism for all s,t∈. For more details on transport equations, see for example <cit.>. Consider the special case where μ_0=δ_x_0 for some initial position x_0∈^d. Then by Proposition <ref>, the solution to (<ref>) is given by μ(t)=δ_x(t), where x(t):= Φ(t,0,x_0) solves the ODE ẋ(t) = -v(t,x(t)) , x(0)=x_0 , which is precisely of type (<ref>). For the case of competing objectives, the two models we consider can be written as ∂_t ρ = -÷ρ[∇ (f_1(z,(ρ)) - αlog(ρ/) - W ∗ρ] , (ρ) := _x̅∫ f_1(z,x̅) ρ̣(z) + ∫ f_2(x̅,z') (z') + β/2x̅-x_0^2 for (<ref>), and x = -∇_x ( ∫ f_1(z,x) (x)(ẓ) + ∫ f_2(x,z') (z') + β/2x-x_0^2 ) , (x) _ρ̂∈∫ f_1(z,x) ̣̂ρ(z) - α KL(ρ̂|) - 1/2∫ρ̂W ∗ρ̂ . for (<ref>). §.§ Definitions and notation Here, and in what follows, _d denotes the d× d identity matrix, and 𝕀 denotes the identity map. The energy functionals we are considering are usually defined on the set of probability measures on ^d, denoted by (^d). 
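The push-forward formula of the proposition can be illustrated directly: each initial parameter value is transported along the characteristic ODE with velocity -v. In the sketch below the population ρ is frozen in time purely so that the velocity field is explicit; in the coupled system ρ(t) would be updated alongside. All names and parameter values are illustrative choices of ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of mu(t) = Phi(t,0,.)_# mu_0: transport a few atoms of mu_0 along dX/dt = -v(t, X).
# rho is frozen (samples below) so that v is explicit; all names here are ours.
b, beta, x0 = 3.0, 0.05, 0.0
rng = np.random.default_rng(1)
z_rho = rng.normal(-1.0, 0.5, 400)       # samples of a frozen strategic population
z_bar = rng.normal(+1.0, 0.5, 400)       # samples of the fixed population

def q(z, x):                             # classifier from the numerical section
    return 1.0 / (1.0 + np.exp(-b * z + x))

def velocity(x):                         # v(x) = grad_x [ E_rho f1 + E_bar f2 + beta/2 |x - x0|^2 ]
    return (np.mean(-q(z_rho, x))        # grad_x of f1 = -log(1-q) equals -q
            + np.mean(1.0 - q(z_bar, x)) # grad_x of f2 = -log(q)   equals 1-q
            + beta * (x - x0))

x_init = np.linspace(-2.0, 2.0, 9)       # an atomic mu_0 supported on a few points
sol = solve_ivp(lambda t, xs: np.array([-velocity(xi) for xi in xs]), (0.0, 50.0), x_init)
print("support of mu(t) at t = 50:", sol.y[:, -1])
```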
If we consider the subset _2(^d) of probability measures with bounded second moment, _2(^d):={ρ∈(^d) : ∫_^dz^2ρ̣(z)<∞} , then we can endow this space with the Wasserstein-2 metric. The Wasserstein-2 metric between two probability measures μ, ν∈_2(^d) is given by W_2(μ, ν)^2 = inf_γ∈Γ(μ,ν)∫z-z'_2^2 γ̣(z,z') where Γ is the set of all joint probability distributions with marginals μ and ν, i.e. μ (ẓ) = ∫γ(ẓ,z')ẓ' and ν(ẓ') = ∫γ(z,ẓ') ẓ. The restriction to _2(^d) ensures that W_2 is always finite. Then the space (_2(^d), W_2) is indeed a metric space. We will make use of the fact that W_2 metrizes narrow convergence of probability measures. To make this statement precise, let us introduce two common notions of convergence for probability measures, which are a subset of the finite signed Radon measures (^d). Consider a sequence (μ_n)∈(^d) and a limit μ∈(^d). * (Narrow topology) The sequence (μ_n) converges narrowly to μ, denoted by μ_n ⇀μ, if for all continuous bounded functions f:^d→, ∫_^d f(z)μ̣_n(z) →∫_^d f(z)μ̣(z) . * (Weak-* topology) The sequence (μ_n) converges weakly-* to μ, denoted by μ_n μ, if for all continuous functions vanishing at infinity (i.e. f:^d→ such that for all ϵ>0 there exists a compact set K_ϵ⊂^d such that |f(z)|<ϵ on ^d∖ K_ϵ), we have ∫_^d f(z)μ̣_n(z) →∫_^d f(z)μ̣(z) . Let us denote the set of continuous functions on ^d vanishing at infinity by C_0(^d), and the set of continuous bounded functions by C_b(^d). Note that narrow convergence immediately implies that μ_n(^d)→μ(^d) as the constant function is in C_b(^d). This is not necessarily true for weak-* convergence. We will later make use of the Banach-Alaoglu theorem <cit.>, which gives weak-* compactness of the unit ball in a dual space. Note that (^d) is indeed the dual of C_0(^d) endowed with the sup-norm, and (^d) is the unit ball in (^d) using the dual norm. Moreover, if we can ensure that mass does not escape to infinity, the two notions of convergence in Definition <ref> are in fact equivalent. Consider a sequence (μ_n)∈(^d) and a measure μ∈(^d). Then μ_n⇀μ if and only if μ_nμ and μ_n(^d)→μ(^d). This follows directly from Definition <ref>. Here, the condition μ_n(^d)→μ(^d) is equivalent to tightness of (μ_n), and follows from Markov's inequality <cit.> if we can establish uniform bounds on the second moments, i.e. we want to show that there exists a constant C>0 independent of n such that ∫z^2μ̣_n(z) <C ∀ n∈N . A collection of measures (μ_n)∈(^d) is tight if for all ϵ>0 there exists a compact set K_ϵ⊂^d such that |μ_n|(^d∖ K_ϵ) <ϵ for all n∈N, where |μ| denotes the total variation of μ. Another classical result is that the Wasserstein-2 metric metrizes narrow convergence and weak-* convergence of probability measures, see for example <cit.> or <cit.>. Let μ_n, μ∈_2(^d). Then W_2(μ_n,μ)→ 0 if and only if μ_n ⇀μ and ∫_^dz^2μ̣_n(z) →∫_^dz^2μ̣(z) . Note that μ_n ⇀μ can be replaced by μ_n μ in the above statement thanks to the fact that the limit μ is a probability measure with mass 1, see Lemma <ref>. Next, we consider two measures μ,ν∈(^d) that are atomless, i.e μ({z})=0 for all z∈^d. By Brenier's theorem <cit.> (also see <cit.>) there exists a unique measurable map T:^d→^d such that T_#μ=ν, and T=∇ψ for some convex function ψ:^d→. Here, the push-forward operator ∇ψ_# is defined as ∫_^d f(z) ∇̣ψ_#ρ_0(z) = ∫_^d f(∇ψ (z) ) ρ̣_0(z) for all Borel-measurable functions f:^d↦_+. If ρ_1= ∇ψ_#ρ_0, we denote by ρ_s = [(1-s)𝕀+s∇ψ]_#ρ_0 the discplacement interpolant between ρ_0 and ρ_1. 
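In one spatial dimension these objects are easy to compute for empirical measures, since sorting equal-size samples realizes the optimal coupling. The following sketch (all names are ours) computes the empirical Wasserstein-2 distance, samples of the displacement interpolant, and the product metric on _2×_2 introduced earlier.

```python
import numpy as np

# 1D sketch: for equal-size samples, sorting realizes the optimal W_2 coupling.
def w2_empirical(a, b):
    a, b = np.sort(a), np.sort(b)
    return np.sqrt(np.mean((a - b) ** 2))

def displacement_interpolant(a, b, s):
    # Samples of rho_s = [(1-s) Id + s grad(psi)]_# rho_0, realized by the sorted coupling.
    a, b = np.sort(a), np.sort(b)
    return (1.0 - s) * a + s * b

def product_metric(rho0, mu0, rho1, mu1):
    # Metric on P_2 x P_2: distance^2 = W_2(rho0,rho1)^2 + W_2(mu0,mu1)^2
    return np.sqrt(w2_empirical(rho0, rho1) ** 2 + w2_empirical(mu0, mu1) ** 2)

rng = np.random.default_rng(0)
rho0, rho1 = rng.normal(-1, 0.5, 1000), rng.normal(1, 0.5, 1000)
print(w2_empirical(rho0, rho1))                          # approx 2.0 for these two Gaussians
print(displacement_interpolant(rho0, rho1, 0.5).mean())  # midpoint has mean approx 0
```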
We are now ready to introduce the notion of displacement convexity, which is the same as geodesic convexity in the geodesic space (_2(^d),W_2). We will state the definition here for atomless measures, but it can be relaxed to any pair of measures in _2 using optimal transport plans instead of transport maps. In what follows, we will use s to denote the interpolation parameter for geodesics, and t to denote time related to solutions of (<ref>), (<ref>) and (<ref>). A functional G:↦ is displacement convex if for all ρ_0,ρ_1 that are atomless we have G(ρ_s) ≤ (1-s)G(ρ_0) + s G(ρ_1) , where ρ_s = [(1-s)𝕀+s∇ψ]_#ρ_0 is the displacement interpolant between ρ_0 and ρ_1. Further, G:↦ is uniformly displacement convex with constant η>0 if G(ρ_s) ≤ (1-s)G(ρ_0) + s G(ρ_1)-s(1-s) η/2 W_2(ρ_0,ρ_1)^2 , where ρ_s = [(1-s)𝕀+s∇ψ]_#ρ_0 is the displacement interpolant between ρ_0 and ρ_1. In other words, G is displacement convex (concave) if the function G(ρ_s) is convex (concave) with ρ_s = [(1-s)𝕀+s∇ψ]_#ρ_0 being the displacement interpolant between ρ_0 and ρ_1. Contrast this with the classical notion of convexity (concavity) for G, where we require that the function G((1-s)ρ_0+sρ_1) is convex (concave). In fact, if the energy G is twice differentiable along geodesics, then the condition d^2/ds^2 G(ρ_s) ≥ 0 along any geodesic (ρ_s)_s∈[0,1] between ρ_0 and ρ_1 is sufficient to obtain displacement convexity. Similarly, when d^2/ds^2 G(ρ_s) ≥η W_2(ρ_0,ρ_1)^2, then G is uniformly displacement convex with constant η>0. For more details, see <cit.> and <cit.>. §.§ Steady states The main goal in our theoretical analysis is to characterize the asymptotic behavior for the models (<ref>), (<ref>) and (<ref>) as time goes to infinity. The steady states of these equations are the natural candidates to be asymptotic profiles for the corresponding equations. Thanks to the gradient flow structure, we expect to be able to make a connection between ground states of the energy functionals, and the steady states of the corresponding gradient flow dynamics. More precisely, any minimizer or maximizer is in particular a critical point of the energy, and therefore satisfies that the first variation is constant on disconnected components of its support. If this ground state also has enough regularity (weak differentiability) to be a solution to the equation, it immediately follows that it is in fact a steady state. To make this connection precise, we first introduce what exactly we mean by a steady state. Given ρ_∞∈ L^1_+(^d)∩ L^∞_loc(^d) with ρ_∞_1=1 and μ_∞∈_2(^d), then (ρ_∞,μ_∞) is a steady state for the system (<ref>) if ρ_∞∈ W^1,2_loc(^d), ∇ W∗ρ_∞∈ L^1_loc(^d), ρ_∞ is absolutely continuous with respect to , and (ρ_∞,μ_∞) satisfy ∇_z (∫ f_1(z,x)μ̣_∞(x) + αlog(ρ_∞(z)/(z)) + W∗ρ_∞(z)) = 0 ∀ z∈() , ∇_x (∫ f_1(z,x)ρ̣_∞(z) + ∫ f_2(z,x)(z) + β/2x-x_0^2) = 0 ∀ x∈(μ_∞) in the sense of distributions. Here, L^1_+(^d):= {ρ∈ L^1(^d) : ρ≥ 0}. Let ρ_∞∈ L^1_+(^d)∩ L^∞_loc(^d) with ρ_∞_1=1. Then ρ_∞ is a steady state for the system (<ref>) if ρ_∞∈ W^1,2_loc(^d), ∇ W∗ρ_∞∈ L^1_loc(^d), ρ_∞ is absolutely continuous with respect to , and ρ_∞ satisfies ∇_z (f_1(z,b(ρ_∞)) - αlog(ρ_∞(z)/(z)) - W∗ρ_∞(z) ) = 0 ∀ z∈^d , in the sense of distributions, where (ρ_∞) _x G_c(ρ_∞,x). The vector x_∞∈^d is a steady state for the system (<ref>) if it satisfies ∇_x G_d(x_∞)=0 . In fact, with the above notions of steady state, we can obtain improved regularity for ρ_∞. Let Assumptions <ref>-<ref> hold. Then the steady states ρ_∞ for (<ref>) and (<ref>) are continuous.
We present here the argument for equation (<ref>) only. The result for (<ref>) follows in exactly the same way by replacing f_1(z,b()) with -∫ f_1(z,x)μ̣_∞(x). Thanks to our assumptions, we have f_1(·,b()) + αlog(·) ∈ C^1, which implies that ∇ (f_1(·,b()) + αlog(·)) ∈ L_loc^∞. By the definition of a steady state, ρ_∞∈ L^1 ∩ L_loc^∞ and thanks to Assumption <ref> we have W∈ C^2, which implies that ∇ W ∗∈ L_loc^∞. Let h(z) (z)∇[ f_1(z,())+ αlog(z)-(W∗)(z)] . Then by the aforementioned regularity, we obtain h∈ L_loc^1 ∩ L_loc^∞. By interpolation, it follows that h∈ L_loc^p for all 1<p<∞. This implies that ÷ h∈ W_loc^-1,p. Since is a weak W_loc^1,2-solution of (<ref>), we have Δ= ÷h , and so by classic elliptic regularity theory we conclude ∈ W_loc^1,p. Finally, applying Morrey's inequality, we have ∈ C^0,k where k=p-d/p for any d < p < ∞. Therefore ∈ C(^d) (after possibly being redefined on a set of measure zero). § PROOF OF THEOREM <REF> For ease of notation, we write G_a:(^d)×(^d)↦ [0,∞] as G_a((ρ,μ))= α KL(ρ|) + (ρ,μ) + (ρ) , where we define (ρ,μ) = ∬ f_1(z,x) ρ̣(z) μ̣(x) + ∫ V(x)μ̣(x) , (ρ) = 1/2∬ W(z_1-z_2) ρ̣(z_1) ρ̣(z_2) , with potential given by V(x):= ∫ f_2(z,x)(z) + β/2x-x_0^2. In order to prove the existence of a unique ground state for G_a, a natural approach is to consider the corresponding Euler-Lagrange equations αlogρ(z)/(z) + ∫ f_1(z,x)μ̣(x) + ( W∗ρ)(z) = c_1[ρ,μ] for all z∈(ρ) , ∫ f_1(z,x)ρ̣(z) + V(x) = c_2[ρ,μ] for all x∈(μ) , where c_1,c_2 are constants that may differ on different connected components of (ρ) and (μ). These equations are not easy to solve explicitly, and we are therefore using general non-constructive techniques from calculus of variations. We first show continuity and convexity properties for the functional G_a (Lemma <ref> and Proposition <ref>), essential properties that will allow us to deduce existence and uniqueness of ground states using the direct method in the calculus of variations (Proposition <ref>). Using the Euler-Lagrange equation  <ref>, we then prove properties on the support of the ground state (Corollary <ref>). To obtain convergence results, we apply the HWI method: we first show a general 'interpolation' inequality between the energy, the energy dissipation and the metric (Proposition <ref>); this fundamental inequality will then imply a generalized logarithmic Sobolev inequality (Corollary <ref>) relating the energy to the energy dissipation, and a generalized Talagrand inequality (Corollary <ref>) that allows to translate convergence in energy into convergence in metric. Putting all these ingrediends together will then allow us to conclude for the statements in Theorem <ref>. Let Assumptions <ref>-<ref> hold. Then the functional G_a:×→ is lower semi-continuous with respect to the weak-* topology. We split the energy G_a into three parts: (i) KL(ρ|), (ii) the interaction energy , and (iii) the potential energy . For (i), lower semi-continuity has been shown in <cit.>. For (ii), we can directly apply <cit.> using Assumption <ref>. For (iii), note that V and f_1 are lower semi-continuous and bounded below thanks to Assumption <ref>, and so the result follows from <cit.>. Let α,β > 0. Fix γ_0, γ_1∈_2×_2 and let Assumptions <ref>-<ref> hold. Along any geodesic (γ_s)_s∈[0,1]∈_2×_2 connecting γ_0 to γ_1, we have for all s∈[0,1] G_a(γ_s) ≥λ_a (γ_0,γ_1)^2 , λ_a:= λ_1+min(λ_2+β, αλ̃) . As a result, the functional G_a:×→ is uniformly displacement convex with constant λ_a> 0. 
Let γ_0 and γ_1 be two probability measures with bounded second moments. Denote by ϕ, ψ:^d→ the optimal Kantorovich potentials pushing ρ_0 onto ρ_1, and μ_0 onto μ_1, respectively: ρ_1=∇ϕ_#ρ_0 such that W_2(ρ_0,ρ_1)^2 = ∫_^dz-∇ϕ(z)^2 ρ̣_0(z) , μ_1=∇ψ_#μ_0 such that W_2(μ_0,μ_1)^2 = ∫_^dx-∇ψ(x)^2 μ̣_0(x) . The now classical results in <cit.> guarantee that there always exists convex functions ϕ, ψ that satisfy the conditions above. Then the path (γ_s)_s∈[0,1]=(ρ_s,μ_s)_s∈[0,1] defined by ρ_s = [(1-s)𝕀+ s ∇_z ϕ]_#ρ_0 , μ_s = [(1-s)𝕀+s∇_x ψ]_#μ_0 is a -geodesic from γ_0 to γ_1. The first derivative of along geodesics in the Wasserstein metric is given by (γ_s) = [∬ f_1((1-s)z +s∇ϕ(z),(1-s)x +s∇ψ(x)) ρ̣_0(z)μ̣_0(x). .+ ∫ V((1-s)x +s∇ψ(x) ) μ̣_0(x) ] = ∬∇_x f_1((1-s)z +s∇ϕ(z),(1-s)x +s∇ψ(x))· (∇ψ(x)-x) ρ̣_0(z)μ̣_0(x) ∬∇_z f_1((1-s)z +s∇ϕ(z),(1-s)x +s∇ψ(x))· (∇ϕ(z)-z) ρ̣_0(z)μ̣_0(x) + ∫∇_xV((1-s)x +s∇ψ(x) )· (∇ψ(x)-x) μ̣_0(x) , and taking another derivative we have (γ_s) = -∬[ (∇ψ(x)-x); (∇ϕ(z)-z) ]^T · D_s(z,x) ·[ (∇ψ(x)-x); (∇ϕ(z)-z) ] ρ̣_0(z) μ̣_0(x) + ∬ (∇ψ(x)-x)^T ·∇^2_xV((1-s)x +s∇ψ(x) ) ·(∇ψ(x)-x) ρ̣_0(z) μ̣_0(x) ≥λ_1 (γ_0,γ_1)^2 + (λ_2+β) W_2(μ_0,μ_1)^2 , where we denoted D_s(z,x):= Hess(f_1)((1-s)z +s∇ϕ(z),(1-s)x +s∇ψ(x)), and the last inequality follows from Assumption <ref> and the optimality of the potentials ϕ and ψ. Following <cit.> and using Assumption <ref>, the second derivatives of the diffusion term and the interaction term along geodesics are given by KL(ρ_s|) ≥αλ̃ W_2(ρ_0,ρ_1)^2 , (ρ_s) ≥ 0. Putting the above estimates together, we obtain (<ref>). Alternatively, one could assume strong convexity of W, which would improve the lower-bound on the second derivative along geodesics. (Ground state) Let Assumptions <ref>-<ref> hold for α,β>0. Then the functional G_a:(^d)×(^d)→ [0,∞] admits a unique minimizer γ_*=(ρ_*,μ_*), and it satisfies ρ_*∈_2(^d)∩ L^1(^d), μ_*∈_2(^d), and ρ_* is absolutely continuous with respect to . We show existence of a minimizer of G_a using the direct method in the calculus of variations. Denote by γ=(ρ,μ)∈×⊂× a pair of probability measures as a point in the product space of Radon measures. Since G_a≥ 0 on × (see Assumption <ref>) and not identically +∞ everywhere, there exists a minimizing sequence (γ_n)∈×. Note that (γ_n) is in the closed unit ball of the dual space of continuous functions vanishing at infinity (C_0(^d) × C_0(^d))^* endowed with the dual norm γ_n_*=sup|∫ fρ̣_n + ∫ gμ̣_n|/(f,g)_∞ over f,g∈ C_0(^d) with (f,g)_∞:=f_∞ + g_∞≠ 0. By the Banach-Alaoglu theorem <cit.> there exists a limit γ_*= (ρ_*,μ_*)∈×=(C_0× C_0)^* and a convergent subsequence (not relabelled) such that γ_nγ_*. In fact, since KL(ρ_* | )<∞ it follows that ρ_* is absolutely continuous with respect to , implying ρ_*∈ L^1(^d) thanks to Assumption <ref>. Further, μ_* has bounded second moment, else we would have inf_γ∈× G_a(γ)=∞ which yields a contradiction. It remains to show that ∫ρ̣_*=∫μ̣_*=1 to conclude that γ_*∈×. To this aim, it is sufficient to show tightness of (ρ_n) and (μ_n), preventing the escape of mass to infinity as we have ∫ρ̣_n=∫μ̣_n=1 for all n≥ 1. Tightness follows from Markov's inequality <cit.> if we can establish uniform bounds on the second moments, i.e. we want to show that there exists a constant C>0 independent of n such that ∫z^2ρ̣_n(z) + ∫x^2μ̣_n(x) <C ∀ n∈N . To establish (<ref>), observe that thanks to Assumption <ref>, there exists a constant c_0∈ (possibly negative) such that -log(z)≥ c_0 +λ̃/4z^2 for all z∈^d. 
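The displacement convexity guaranteed by the proposition can also be checked numerically in one dimension by evaluating G_a along a quantile-interpolation geodesic; the sketch below does this for the log-loss example with W=0. The distributions and parameter values are illustrative choices of ours, and the check only verifies convexity of the resulting one-dimensional function of s, up to discretization and floating-point error.

```python
import numpy as np
from scipy.stats import norm

# Numerical sanity check of displacement convexity of G_a along a geodesic, in the 1D
# log-loss example with W = 0.  Geodesics are computed through quantile functions (exact
# in 1D); the KL term uses KL(rho_s|rho_bar) = int_0^1 [-log Q_s'(u) - log rho_bar(Q_s(u))] du.
b, alpha, beta, x_ref = 3.0, 0.1, 0.05, 0.0
rho_bar = norm(loc=1.0, scale=0.7)               # reference distribution (log-concave)
rho_a, rho_c = norm(-1.0, 0.5), norm(0.5, 0.8)   # endpoints rho_0, rho_1 of the geodesic
x_a, x_c = -1.0, 1.5                             # endpoints of the mu-geodesic (Dirac masses)

u = (np.arange(2000) + 0.5) / 2000               # quantile grid
Q0, Q1, Qbar = rho_a.ppf(u), rho_c.ppf(u), rho_bar.ppf(u)

def f1(z, x): return np.logaddexp(0.0, b * z - x)    # -log(1 - q)
def f2(z, x): return np.logaddexp(0.0, x - b * z)    # -log(q)

def G_a(s):
    Qs = (1 - s) * Q0 + s * Q1                   # quantile function of rho_s
    xs = (1 - s) * x_a + s * x_c                 # position of mu_s = delta_{xs}
    kl = np.mean(-np.log(np.gradient(Qs, u)) - rho_bar.logpdf(Qs))
    return (np.mean(f1(Qs, xs)) + np.mean(f2(Qbar, xs))
            + alpha * kl + 0.5 * beta * (xs - x_ref) ** 2)

ss = np.linspace(0.0, 1.0, 21)
vals = np.array([G_a(s) for s in ss])
second_diff = vals[:-2] - 2 * vals[1:-1] + vals[2:]
print("min discrete second difference:", second_diff.min())  # nonnegative (up to numerical error)
```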
Then αλ̃/4∫z^2ρ̣_n ≤ -α c_0 -α∫log(z)ρ̣_n Therefore, using ∫ρ̣_n=∫μ̣_n=1 and writing ζ:=min{αλ̃/4, β/2}>0, we obtain the desired uniform upper bound on the second moments of the minimizing sequence, ζ∬(z^2+x^2)ρ̣_nμ̣_n ≤ -α c_0 -α∫log(z)ρ̣_n + β∫x-x_0^2μ̣_n + βx_0^2 ≤ -α c_0 + βx_0^2 + G_a(γ_n) ≤ -α c_0 + βx_0^2 + G_a(γ_1) <∞ . This concludes the proof that the limit γ_* satisfies indeed γ_*∈×, and indeed ρ_*∈_2(^d) as well. Finally, γ_* is indeed a minimizer of G_a thanks to weak-* lower-semicontinuity of G_a following Lemma <ref>. Next we show uniqueness using a contradiction argument. Suppose γ_*=(ρ_*,μ_*) and γ_*'=(ρ_*',μ_*') are minimizers of G_a. For s∈[0,1], define γ_s := ((1-s)𝕀+sT,(1-s)𝕀+sS)_#γ_*, where T,S:^d↦^d are the optimal transport maps such that ρ_*' = T_#ρ_* and μ_*' = S_#μ_*. By Proposition <ref> the energy G_a is uniformly displacement convex, and so we have G_a(γ_s)≤ (1-s)G_a(γ_*) + s G_a(γ_*') = G_a(γ_*). If γ_*≠γ_*' and s∈ (0,1), then strict inequality holds by applying similar arguments as in <cit.>. However, the strict inequality G_a(γ_s) < G_a(γ_*) for γ_*≠γ_*' is a contradiction to the minimality of γ_*. Hence, the minimizer is unique. If λ_1>0, then the strict convexity of f_1 can be used to deduce uniqueness, and the assumptions on -log can be weakened from strict convexity to convexity. Let Assumptions <ref>-<ref> hold. Any minimizer γ_*=(ρ_*,μ_*) of G_a is a steady state for equation (<ref>) according to Definition <ref> and satisfies (ρ_*)=(). By Proposition <ref>, we have ρ_*, μ_*∈_2, as well as ρ_*∈ L^1_+, ρ_*_1=1, and that ρ_* is absolutely continuous with respect to . Since W∈ C^2(^d), it follows that ∇ W ∗ρ_* ∈ L_loc^1. In order to show that γ_* is a steady state for equation (<ref>), it remains to prove that ρ_*∈ W^1,2_loc∩ L^∞_loc. As γ^* is a minimizer, it is in particular a critical point, and therefore satisfies equations (<ref>). Rearranging, we obtain (for a possible different constant c_1[ρ_*, μ_*] ≠ 0) from (<ref>) that ρ_*(z) = c_1[ρ_*,μ_*] (z) exp[-1/α( ∫ f_1(z,x) μ_*(x) + W∗ρ_*(z))] on (ρ_*) . Then for any compact set K⊂^d, sup_z∈ Kρ_*(z) ≤ c_1[ρ_*,μ_*] sup_z ∈ K(z) sup_z ∈ Kexp(-1/α( ∫ f_1(z,x) μ_*(x) )) sup_z ∈ Kexp( -1/α W ∗ρ_* ). As f_1≥ 0 by Assumption <ref> and W≥ 0 by Assumption <ref>, the last two terms are finite. The first supremum is finite thanks to continuity of . Therefore ρ_* ∈ L_loc^∞. To show that ρ_*∈ W_loc^1,2, note that for any compact set K⊂^d, we have ∫_K |ρ_*(z)|^2 ẓ < ∞ as a consequence of ρ_* ∈ L_loc^∞. Moreover, defining T[γ](z) -1/α( ∫ f_1(z,x) μ(x) + W∗ρ(z))≤ 0, we have ∫_K |∇ρ_*|^2 ẓ = c_1[ρ_*,μ_*]^2 ∫_K |∇ + ∇ T[γ_*]|^2exp (2T[γ_*]) ẓ ≤ 2c_1[ρ_*,μ_*]^2 ∫_K |∇|^2 exp (2T[γ_*]) ẓ + 2c_1[ρ_*,μ_*]^2 ∫_K |∇ T[γ_*]|^2 ^2 exp (2T[γ_*]) ẓ , which is bounded noting that exp (2T[γ_*]) ≤ 1 and that T[γ_*](·), ∇ T[γ_*](·) and ∇ are in L^∞_loc, where we used that f_1, (·,x), W(·), (·)∈ C^1(^d) by Assumptions <ref>-<ref>. We conclude that ρ_*∈ W^1,2_loc, and indeed (ρ_*,μ_*) solves (<ref>) in the sense of distributions as a consequence of (<ref>). Next, we show that (ρ_*)=() using again the relation (<ref>). Firstly, note that (ρ_*) ⊂() since ρ_* is absolutely continuous with respect to . Secondly, we claim that exp[-1/α( ∫ f_1(z,x) μ_*(x) + W∗ρ_*(z))]>0 for all z∈^d. In other words, we claim that ∫ f_1(z,x) μ_*(x)<∞ and W∗ρ_*(z) <∞ for all z∈^d. Indeed, for the first term, fix any z∈^d and choose R>0 large enough such that z∈ B_R(0). 
Then, thanks to continuity of f_1 according to Assumption <ref>, we have ∫ f_1(z,x) μ_*(x) ≤sup_z∈ B_R(0)∫ f_1(z,x) μ_*(x)<∞ . For the second term, note that by Assumption <ref>, we have for any z∈^d and ϵ>0, W(z) ≤ W(0) + ∇ W(z)· z ≤ W(0) +1/2ϵ∇ W(z)^2 + ϵ/2z^2 ≤ W(0) +D^2/2ϵ(1+z)^2 + ϵ/2z^2 ≤ W(0) +D^2/ϵ + (D^2/ϵ + ϵ/2)z^2 =W(0) +D/√(2) + √(2) Dz^2 , where the last equality follows by choosing the optimal ϵ= √(2) D. We conclude that W∗ρ_*(z) ≤ W(0) +D/√(2) + √(2) D∫z-z̃^2 ρ_*(z̃) ≤ W(0) +D/√(2) + 2 √(2) D z^2 + 2 √(2) D∫z̃^2 ρ_*(z̃) , which is finite for any fixed z∈^d thanks to the fact that ρ_*∈_2(^d). Hence, (ρ_*)=(). If we have in addition that ∈ L^∞(^d), then the minimizer ρ_* of G_a is in L^∞(^d) as well. This follows directly by bounding the right-hand side of (<ref>). The following inequality is referred to as HWI inequality and represents the key result to obtain convergence to equilibrium. Define the dissipation functional D_a(γ):= ∬ |δ_γ G_a(z,x)|^2γ̣(z,x) . Assume α,β>0 and let λ_a as defined in (<ref>). Let γ_0,γ_1∈_2×_2 such that G_a(γ_0), G_a(γ_1),D_a(γ_0)<∞. Then G_a(γ_0)-G_a(γ_1)≤(γ_0,γ_1)√(D_a(γ_0)) - λ_a/2 (γ_0,γ_1)^2 For simplicity, consider γ_0,γ_1 that have smooth Lebesgue densities of compact support. The general case can be recovered using approximation arguments. Let (γ_s)_s∈[0,1] denote a -geodesic between γ_0,γ_1. Following similar arguments as in <cit.> and <cit.> and making use of the calculations in the proof of Proposition <ref>, we have . G_a(γ_s)|_s=0 ≥∬[ ξ_1(z); ξ_2(x) ]·[ (∇ϕ(z)-z); (∇ψ(x)-x) ] γ̣_0(z,x) , where ξ_1[γ_0](z) := α∇_zlog(ρ_0(z)/(z)) + ∫∇_zf_1(z,x)μ̣_0(x) + ∫∇_zW(z-z')ρ̣_0(z') , ξ_2[γ_0](x) := ∫∇_x f_1(z,x)ρ̣_0(z) + ∇_x V(x) . Note that the dissipation functional can then be written as D_a(γ_0)=∬(|ξ_1(z)|^2 + |ξ_2(x)|^2)γ̣_0(z,x) . Using the double integral Cauchy-Schwarz inequality <cit.>, we obtain . G_a(γ_s)|_s=0 ≥ -(√(∬[ ξ_1; ξ_2 ]_2^2 γ̣_0))(√(∬[ ∇ϕ(z)-z; ∇ψ(x)-x ]_2^2γ̣_0)) = -√(D_a(γ_0)) √(∫∇ϕ(z)-z^2 ρ̣_0 + ∫∇ψ(x)-x^2 μ̣_0 ) = -√(D_a(γ_0)) (γ_0,γ_1) . Next, we compute a Taylor expansion of G_a(γ_s) when considered as a function in s and use the bound on G_a from (<ref>): G_a(γ_1) = G_a(γ_0) + . G_a(γ_s)|_s=0 + ∫_0^1 (1-t) .( G_a(γ_s))|_s=t ṭ ≥ G_a(γ_0) - √(D_a(γ_0)) (γ_0,γ_1) + λ_a/2(γ_0,γ_1)^2 . The HWI inequality in Proposition <ref> immediately implies uniqueness of minimizers for G_a in the set {γ∈× : D_a(γ)<+∞}. Indeed, if γ_0 is such that D_a(γ_0)=0, then for any other γ_1 in the above set we have G_a(γ_0)≤ G_a(γ_1) with equality if and only if (γ_0,γ_1)=0. Denote by γ_* the unique minimizer of G_a. With λ_a as defined in (<ref>), any product measure γ∈_2×_2 such that G(γ), D_a(γ)<∞ satisfies D_a(γ)≥ 2λ_a G_a(γ|γ_*) . This statement follows immediately from Proposition <ref>. Indeed, let γ_1=γ_* and γ_0=γ in (<ref>). Then G_a(γ | γ_*) ≤(γ,γ_*)√(D_a(γ)) - λ_a/2 (γ,γ_*)^2 ≤max_t≥ 0(√(D_a(γ)) t - λ_a/2 t^2) = D_a(γ)/2λ_a . Denote by γ_* the unique minimizer of G_a. With λ_a as defined in (<ref>), it holds (γ,γ_*)^2 ≤2/λ_a G_a(γ | γ_*) for any γ∈_2×_2 such that G_a(γ)<∞. This is also a direct consequence of Proposition <ref> by setting γ_0=γ_* and γ_1=γ. Then G_a(γ_*)<∞ and D_a(γ_*)=0, and the result follows. The entropy term ∫ρlogρ produces diffusion in ρ for the corresponding PDE in (<ref>). As a consequence, solutions ρ_t to (<ref>) and minimizers ρ^* for G_a have to be L^1 functions. As there is no diffusion for the evolution of μ_t, solutions may have a singular part. 
In fact, for initial condition μ_0=δ_x_0, the corresponding solution will be of the form μ_t=δ_x(t), where x(t) solves the ODE (<ref>) with initial condition x_0. This follows from the fact that the evolution for μ_t is a transport equation (also see Section <ref> for more details). Results (a) and (b) are the statements in Proposition <ref>, Corollary <ref> and Corollary <ref>. To obtain (c), we differentiate the energy G_a along solutions γ_t to the equation (<ref>): /ṭ G_a(γ_t) = ∫δ_ρ G_a [γ_t](z)∂_tρ_tẓ + ∫δ_μ G_a [γ_t](x)∂_tμ_tx̣ = -∫∇_zδ_ρ G_a [γ_t](z)^2ρ̣_t(z) -∫∇_xδ_μ G_a [γ_t](x)^2μ̣_t(x) = -D_a(γ_t) ≤ -2λ_a G_a(γ_t | γ_*) , where the last bound follows from Corollary <ref>. Applying Gronwall's inequality, we immediately obtain decay in energy, G_a(γ_t | γ_*)≤ e^-2λ_a t G_a(γ_0 | γ_*) . Finally, applying Talagrand's inequality (Corollary <ref>), the decay in energy implies decay in the product Wasserstein metric, (γ_t,γ_*) ≤ c e^-λ_a t where c>0 is a constant only depending on γ_0, γ_* and the parameter λ_a. § PROOF OF THEOREM <REF> In the case of competing objectives, we rewrite the energy G_c(ρ,x):(^d) ×^d ↦ [-∞,∞] as follows: G_c(ρ,x) = ∫ f_1(z,x) ρ̣(z) + ∫ f_2(z,x) (z) +β/2x-x_0^2 -(ρ) , where (ρ) :=α KL(ρ|) + 1/2∫ρ W ∗ρ . Note that for any fixed ρ∈, the energy G_c(ρ,·) is strictly convex in x, and therefore has a unique minimizer. Define the best response by (ρ) _x̅ G_c(ρ,x̅) and denote G_b(ρ) G_c (ρ,(ρ)). We begin with auxiliary results computing the first variations of the best response and then the different terms in G_b(ρ) using Definition <ref>. The first variation of the best response of the classifier at ρ (if it exists) is δ_ρ[ρ](z) = -Q(ρ)^-1∇_x f_1(z,(ρ)) for almost every z∈^d , where Q(ρ)≽ (β+λ_1+λ_2)_d is a symmetric matrix, constant in z and x, defined as Q(ρ)β_d+∫∇_x^2 f_1(z,(ρ)) ρ̣(z) +∫∇_x^2 f_2(z,(ρ))(z) . In particular, we then have for any ψ∈ C_c^∞(^d) with ∫ψ ẓ=0 that lim_ϵ→ 01/ϵ[ρ+ϵψ] - [ρ]-ϵ∫δ_ρ[ρ](z)ψ(z)ẓ = 0 . Let ψ∈ C_c^∞(^d) with ∫ψ ẓ=0 and fix ϵ>0. Any minimizer of G_c(ρ+ϵψ,x) for fixed ρ must satisfy ∇_x G_c(ρ+ϵψ,b(ρ+ϵψ)) = 0 . Differentiating in ϵ, we obtain ∫δ_ρ∇_x G_c[ρ+ϵψ,b(ρ+ϵψ)]ψ(z) ẓ + ∇_x^2 G_c(ρ+ϵψ,b(ρ+ϵψ)) ∫δ_ρ b[ρ+ϵψ](z) ψ(z) ẓ= 0 . Next, we explicitly compute all terms involved in (<ref>). Computing the derivatives yields ∇_x G_c(ρ,x) = ∫∇_x f_1(z,x) ρ̣(z) +∫∇_x f_2(z,x) (z) + β( x- x_0) δ_ρ∇_x G_c [ρ,x] (z) = ∇_x f_1(z,x) ∇_x^2 G_c(ρ,x) =∫∇_x^2 f_1(z,x) ρ̣(z) + ∫∇_x^2 f_2(z,x) (z) + β_d. Note that ∇_x^2 G_c is invertible by Assumption <ref>, which states that f_1 and f_2 have positive-definite Hessians. Inverting this term and substituting these expressions into (<ref>) for ϵ=0 gives ∫δ_ρ b[ρ](z)ψ(z) ẓ = -[β_d+∫∇_x^2 f_1(z,(ρ)) ρ̣(z) + ∫∇_x^2 f_2(z,(ρ)) (z) ]^-1∫∇_x f_1(z,(ρ))ψ(z) ẓ =-∫ Q(ρ)^-1∇_x f_1(z,(ρ))ψ(z) ẓ . Finally, the lower bound on Q(ρ) follows thanks to Assumption <ref>. If we include the additional assumption that f_i ∈ C^3(^d ×^d;[0,∞)) for i=1,2, then the Hessian of b[ρ] is well-defined. More precisely, the Hessian is given by ^̣2/ϵ̣^2 b[ρ + ϵψ]|_ϵ=0 = Q(ρ)^-1( /ϵ̣ Q(ρ+ϵψ)|_ϵ=0 + ∫∇_x^2 f_1(z,b[ρ])ψ(z) ẓ) Q(ρ)^-1 u[ρ,ψ] where u[ρ,ψ] = ∫∇_x f_1(z,b[ρ]) ψ(z) ẓ and /ϵ̣Q_ij(ρ+ϵψ)|_ϵ=0 = ∫∂_x_i∂_x_j f_1(z,b[ρ]) ψ(z) ẓ - ∫∂_x_i∂_x_j∇_x f_1(z,b[ρ]) ψ(z) ρ(z) ẓ Q(ρ)^-1 u[ρ,ψ] - ∫∂_x_i∂_x_j∇_x f_2(z,b[ρ]) ψ(z) (z) ẓ Q(ρ)^-1 u[ρ,ψ]. Therefore, we can Taylor expand [ρ] up to second order and control the remainder term of order ϵ^2. 
The first variation of G_b is given by δ_ρ G_b[ρ](z) = h_1(z)+h_2(z)+β h_3(z) - δ_ρ[ρ](z) , where h_1(z):=δ/δρ(∫ f_1(z̃,(ρ)) ρ̣(z̃))(z) = ∫∇_x f_1(z̃,(ρ))ρ̣(z̃) , δ b/δρ[ρ](z)> + f_1(z,(ρ)) , h_2(z):=δ/δρ(∫ f_2(z̃,(ρ)) (z̃))(z) =∫∇_x f_2(z̃,(ρ)) (z̃), δ b/δρ[ρ](z)> , h_3(z):=1/2δ/δρ(ρ)-x_0^2 =(ρ)-x_0,δ b/δρ[ρ](z)> , and δ_ρ[ρ](z) = αlog (ρ(z)/(z)) + (W ∗ρ)(z) . We begin with general expressions for Taylor expansions of :(^d)→^d and f_i(z,(·)):(^d)→ for i=1, 2 around ρ. Let ψ∈𝒯 with 𝒯={ψ:∫ψ(z) ẓ =0}. Then b(ρ+ϵψ) = (ρ) + ϵ∫δ b/δρ[ρ](z')ψ(z') ẓ' + O(ϵ^2) and f_i(z,b(ρ+ϵψ)) = f_i(z,(ρ)) + ϵ∇_x f_i(z,(ρ)),∫δ b/δρ[ρ](z')ψ(z') ẓ'> + O(ϵ^2) . We compute explicitly each of the first variations: (i) Using (<ref>), we have ∫ψ(z) h_1(z) ẓ = lim_ϵ→ 01/ϵ[ ∫ f_1(z, b(ρ + ϵψ))(ρ(z) + ϵψ(z))ẓ-∫ f_1(z,(ρ))ρ(z) ẓ] = ∫∇_x f_1 (z,(ρ)) ρ̣(z) ,∫δ(ρ)/δρ[ρ](z')ψ(z')ẓ' > + ∫ f_1(z,(ρ)) ψ(z) ẓ = ∫∫∇_x f_1 (z,(ρ)) ρ̣(z) , δ(ρ)/δρ[ρ](z') > ψ(z')ẓ' + ∫ f_1(z,(ρ)) ψ(z) ẓ ⇒ h_1(z) = ∫∇_x f_1(z̃,(ρ))ρ̣(z̃) , δ b/δρ[ρ](z)> + f_1(z,(ρ)) . (ii) Similarly, using again (<ref>), ∫ψ(z) h_2(z) ẓ = lim_ϵ→ 01/ϵ[ ∫ f_2(z, b(ρ + ϵψ))(z)-∫ f_2(z,(ρ))(z) ẓ] = ∫∫∇_x f_2(z̃,(ρ)) (z̃), δ b/δρ[ρ](z)> ψ(z)ẓ ⇒ h_2(z) = ∫∇_x f_2(z̃,(ρ)) (z̃), δ b/δρ[ρ](z)> . (iii) Finally, from (<ref>) it follows that ∫ψ(z) h_3(z) ẓ = lim_ϵ→ 01/2ϵ[ b(ρ+ϵψ)-x_0,b(ρ+ϵψ)-x_0> - (ρ)-x_0,(ρ)-x_0> ] = ∫(ρ)-x_0,δ b/δρ[ρ](z)> ψ(z)ẓ ⇒ h_3(z) = (ρ)-x_0,δ b/δρ[ρ](z)> . Finally, the expression for δ_ρ[ρ] follows by direct computation Denote G_b(ρ) G_c (ρ,(ρ)) with (ρ) given by (<ref>). Then δ_ρ G_b[ρ] = .δ_ρ G_c[ρ] |_x=(ρ). We start by computing δ_ρ G_c(·,x)[ρ](z) for any z,x∈^d: δ_ρ G_c(·,x)[ρ](z) = f_1(z,x) - δ_ρ[ρ](z). Next, we compute δ_ρ G_b. Using Lemma <ref>, the first variation of G_b is given by δ_ρ G_b[ρ](z) = h_1(z)+h_2(z)+β h_3(z) - δ_ρ[ρ](z) = -[∫∇_x f_1(z̃,(ρ))ρ̣(z̃)+ ∫∇_x f_2(z̃,(ρ)) (z̃)+β((ρ)-x_0)],δ_ρ b[ρ](z)> + f_1(z,(ρ)) - δ_ρ[ρ](z) . Note that ∇_x G_c(ρ,x) = ∫∇_x f_1(z̃,x)ρ̣(z̃)+ ∫∇_x f_2(z̃,x) (z̃)+β(x-x_0) , and by the definition of the best response (ρ), we have ∇_x G_x(ρ,x)|_x=(ρ)=0. Substituting into the expression for δ_ρ G_b and using (<ref>), we obtain δ_ρ G_b[ρ](z) = f_1(z,(ρ)) - δ_ρ[ρ](z) = δ_ρ G_c(·,x)[ρ](z) |_x=(ρ) . This concludes the proof. Let Assumption <ref> hold. Then for any ρ∈(^d), we have (ρ)^2≤x_0^2 + 2(a_1+a_2)/β . By definition of the best response (ρ), we have ∫∇_x f_1(z,(ρ)) ρ̣_t + ∫∇_x f_2(z,(ρ))(z) + β((ρ) - x_0) = 0 . To show that that (ρ) is uniformly bounded, we take the inner product of the above expression with (ρ) itself β(ρ)^2 = β x_0 ·(ρ) - ∫∇_x f_1(z,(ρ)) ·(ρ) ρ̣(z) - ∫∇_x f_2(z,(ρ)) ·(ρ) (z) . Using Assumption <ref> to bound the two integrals, together with using Young's inequality to bound the first term on the right-hand side, we obtain β(ρ)^2 ≤β/2x_0^2 + β/2(ρ) + a_1+a_2 , which concludes the proof after rearranging terms. Let Assumptions <ref>-<ref> hold. The functional G_c:(^d)×^d → [-∞,+∞] is upper semi-continuous when (^d)×^d is endowed with the product topology of the weak-* topology and the Euclidean topology. Moreover, the functional G_b:(^d)→ [-∞,+∞] is upper semi-continuous with respect to the weak-* topology. The functional G_c:(^d)×^d → [-∞,+∞] is continuous in the second variable thanks to Assumption <ref>. Similarly, ∫ f_1(z,x)ρ̣(z) + ∫ f_2(z,x)(z) is continuous in ρ thanks to <cit.> using the continuity of f_1 and f_2. Further, - is upper semi-continuous using <cit.> and <cit.> thanks to Assumptions <ref> and <ref>. This concludes the continuity properties for G_c. 
The upper semi-continuity of G_b then follows from a direct application of a version of Berge's maximum theorem <cit.>. Let R:= x_0^2 + 2(a_1+a_2)/β>0. We define φ:((^d),W_2) ↠^d as the correspondence that maps any ρ∈(^d) to the closed ball B_R(0)⊂^d. Then the graph of φ is φ = (^d)×{B_R(0)}. With this definition of φ, the range of φ is compact and φ is continuous with respect to weak-* convergence, and so it is in particular upper hemicontinuous. Thanks to Lemma <ref>, the best response function (ρ) is always contained in B_R(0) for any choice of ρ∈(^d). As a result, maximizing -G_c(ρ,x) in x over ^d for a fixed ρ∈(^d) reduces to maximizing it over B_R(0). Using the notation introduced above, we can restrict G_c to G_c:φ→ and write G_b(ρ) max_x̂∈φ(ρ)-G_c(ρ,x̂). Because G_c(ρ,x) is upper semi-continuous when (^2)×^d is endowed with the product topology of the weak-* topology and the Euclidean topology, <cit.> guarantees that G_b(·) is upper semi-continuous in the weak-* topology. Let α,β > 0 and assume Assumptions <ref>-<ref> hold with the parameters satisfying αλ̃> Λ_1. Fix ρ_0,ρ_1∈(^d). Along any geodesic (ρ_s)_s∈[0,1]∈_2(^d) connecting ρ_0 to ρ_1, we have for all s∈[0,1] G_b(ρ_s) ≤ - λ_b W_1(ρ_0,ρ_1)^2 , λ_b:= αλ̃- Λ_1,. As a result, the functional G_b:_2(^d)→ [-∞,+∞] is uniformly displacement concave with constant λ_b> 0. Consider any ρ_0,ρ_1∈_2(^d). Then any W_2-geodesic (ρ_s)_s∈[0,1] connecting ρ_0 with ρ_1 solves the following system of geodesic equations: ∂_s ρ_s + ÷ρ_s v_s=0 , ∂_s(ρ_s v_s) + ÷ρ_s v_s ⊗ v_s=0 , where ρ_s:^d→ and v_s:^d ↦^d . The first derivative of G_b along geodesics can be computed explicitly as G_b(ρ_s) = ∫∇_z f_1(z,b(ρ_s)) · v_s(z) ρ_s(z) ẓ - (ρ_s) + [∫∇_x f_1(z,x) ρ̣_s(z) + ∫∇_x f_2(z,x) (z) + β(x-x_0)]|_x=b(ρ_s), b(ρ_s) >. The left-hand side of the inner product is zero by definition of the best response (ρ_s) to ρ_s, see (<ref>). Therefore G_b(ρ_s) = ∫∇_z f_1(z,b(ρ_s)) · v_s(z) ρ_s(z) ẓ - (ρ_s) . Differentiating a second time, using (<ref>) and integration by parts, we obtain G_b(ρ_s) = L_1(ρ_s) + L_2(ρ_s) - (ρ_s) , where L_1(ρ_s) := ∫∇_z^2 f_1(z,b(ρ_s)) · (v_s ⊗ v_s) ρ_s ẓ = ∫ v_s , ∇_z^2 f_1(z,b(ρ_s)) · v_s > ρ_s ẓ , L_2(ρ_s) := ∫ b(ρ_s) ·∇_x ∇_z f_1(z,b(ρ_s)) · v_s(z) ρ_s(z)ẓ . From (<ref>), we have that P̃(ρ_s) ≥αλ̃ W_2(ρ_0,ρ_1)^2 , and thanks to Assumption <ref> it follows that L_1(s) ≤Λ_1 W_2(ρ_0,ρ_1)^2 . This leaves L_2 to bound; we first consider the term b(ρ_s): b(ρ_s) = ∫δ_ρ b[ρ_s](z̃) ∂_s ρ_s(̣̃z) = -∫δ_ρ b[ρ_s](z̃) ÷ρ_s v_ṣ̃z = ∫∇_z δ_ρ b[ρ_s] (z̃) · v_s(z̃) ρ̣_s (z̃). Defining u(ρ_s)∈^d by u(ρ_s) ∫∇_x ∇_z f_1(z,b(ρ_s)) · v_s(z) ρ̣_s(z) , using the results from Lemma <ref> for ∇_z δ_ρ b[ρ_s], Assumption <ref> and the fact that Q(ρ) is constant in z and x, we have L_2(ρ_s) = -∬[Q(ρ_s)^-1∇_x ∇_z f_1(z̃,b(ρ_s)) · v_s(z̃)] ·∇_x ∇_z f_1(z,b(ρ_s)) · v_s(z) ρ̣_s(z) ρ̣_s(z̃) = -u(ρ_s),Q(ρ_s)^-1 u(ρ_s) > ≤ 0 Combining all terms together, we obtain G_b(ρ_s) ≤ -(αλ̃- Λ_1 ) W_2(ρ_0,ρ_1)^2 . Under some additional assumptions on the functions f_1 and f_2, we can obtain an improved convergence rate. In particular, assume that for all z,x∈^d, * there exists a constant Λ_2≥λ_2≥ 0 such that ∇^2_x f_2(z,x) ≼Λ_2 _d; * there exists a constant σ≥0 such that ∇_x ∇_z f_1(z,x)≥σ. Then we have -Q(ρ_s)^-1≼ -1/(β+Λ_1+Λ_2)_d. Using Lemma <ref>, we then obtain a stronger bound on L_2 as follows: L_2(ρ_s) ≤ -1/β+Λ_1+Λ_2u(ρ_s)^2 ≤ -1/β+Λ_1+Λ_2∫∇_x ∇_z f_1(z,b(ρ_s))^2 ρ̣_s(z) ∫v_s(z)^2 ρ̣_s(z) ≤ -σ^2/β+Λ_1+Λ_2 W_2(ρ_0,ρ_1)^2. 
This means we can improve the convergence rate in (<ref>) to λ_b:= αλ̃+σ^2/β+Λ_1+Λ_2-Λ_1. Let Assumptions <ref>-<ref> hold for αλ̃>Λ_1≥ 0 and β>0. Then there exists a unique maximizer ρ_* for the functional G_b over (^d), and it satisfies ρ_*∈_2(^d)∩ L^1(^d) and ρ_* is absolutely continuous with respect to . Uniqueness of the maximizer (if it exists) is guaranteed by the uniform concavity provided by Lemma <ref>. To show existence of a maximizer, we use the direct method in the calculus of variations, requiring the following key properties for G_b: (1) boundedness from above, (2) upper semi-continuity, and (3) tightness of any minimizing sequence. To show (1), note that ∇_z^2 (f_1(z,x)+αlog(z))≼ -(αλ̃-Λ_1)_d for all z,x∈^d×^d by Assumptions <ref> and <ref>, and so f_1(z,x)+αlog(z) ≤ c_0(x) - (αλ̃-Λ_1)/4 |z|^2 ∀ (z,x)∈^d×^d with c_0(x):= f_1(0,x)+αlog(0) + 1/αλ̃-Λ_1∇_z[f_1(0,x)+αlog(0) ]^2. Therefore, G_b(ρ) = ∫[f_1(z,(ρ))+αlog(z)] ρ̣(z) +∫ f_2(z,(ρ))(z) +β/2(ρ)-x_0^2 -α∫ρlogρ - ∫ρ W∗ρ ≤ c_0((ρ)) +∫ f_2(z,(ρ))(z) +β/2(ρ)-x_0^2 . To estimate each of the remaining terms on the right-hand side, denote R:= x_0^2 + 2(a_1+a_2)/β and recall that (ρ)≤ R for any ρ∈(^d) thanks to Lemma <ref>. By continuity of f_1 and log, there exists a constant c_1∈ such that sup_x∈ B_R(0) c_0(x)=sup_x∈ B_R(0)[f_1(0,x)+αlog(0) + 1/αλ̃-Λ_1∇_z(f_1(0,x)+αlog(0) )^2]≤ c_1 . The second term is controlled by c_2 thanks to Assumption <ref>. And the third term can be bounded directly to obtain G_b(ρ) ≤ c_1+c_2 + β (R^2 + x_0^2) . This concludes the proof of (1). Statement (2) was shown in Lemma <ref>. Then we obtain a minimizing sequence (ρ_n)∈(^d) which is in the closed unit ball of C_0(^d)^* and so the Banach-Anaoglu theorem <cit.> there exists a limit ρ_* in the Radon measures and a subsequence (not relabeled) such that ρ_n ρ_*. In fact, ρ_* is absolutely continuous with respect to as otherwise G_b(ρ_*)=-∞, which contradicts that G_b(·)>-∞ somewhere. We conclude that ρ_*∈ L^1(^d) since ∈ L^1(^d) by Assumption <ref>. To ensure ρ_*∈(^d), we require (3) tightness of the minimizing sequence (ρ_n). By Markov's inequality <cit.> it is sufficient to establish a uniform bound on the second moments: ∫z^2ρ̣_n(z) <C ∀ n∈N . To see this we proceed in a similar way as in the proof of Proposition <ref>. Defining K(ρ) := -∫[f_1(z,(ρ))+αlog(z)] ρ̣(z) + α∫ρlogρ ẓ + 1/2∫ρ W ∗ρ ẓ , we have K(ρ) = -G_b(ρ) + ∫ f_2(z,(ρ))(z) + β/2(ρ)-x_0^2. Then using again the bound on (ρ) from Lemma <ref>, K(ρ) ≤ -G_b(ρ) + sup_x∈ B_R(0)∫ f_2(z,x)(z) + β(R^2 + x_0^2) ≤ -G_b(ρ) + c_2 + β(R^2 + x_0^2) , where the last inequality is thanks to Assumption <ref>. Hence, using the estimates (<ref>) and (<ref>) from above, and noting that the sequence (ρ_n) is minimizing (-G_b), we have (αλ̃-Λ_1)/4∫z^2 ρ̣_n(z) ≤ c_0((ρ_n)) -∫[f_1(z,(ρ_n))+αlog(z)] ρ̣_n(z) ≤ c_1 + K(ρ_n) ≤ c_1 -G_b(ρ_n) + c_2 + β(R^2 + x_0^2) ≤ c_1 -G_b(ρ_1) + c_2 + β(R^2 + x_0^2) <∞ . which uniformly bounds the second moments of (ρ_n). This concludes the proof for the estimate (<ref>) and also ensures that ρ_*∈_2(^d). Any maximizer ρ_* of G_b is a steady state for equation (<ref>) according to Definition <ref>, and satisfies (ρ_*)=(). To show that ρ_* is a steady state we can follow exactly the same argument as in the proof of Corollary <ref>, just replacing -1/α∫ f_1(z,x) μ_*(x) with +1/α∫ f_1(z,(ρ_*). It remains to show that (ρ_*)=(). As ρ^* is a maximizer, it is in particular a critical point, and therefore satisfies that δ_ρ G_b[ρ_*](z) is constant on all connected components of (ρ_*). 
Thanks to Lemma <ref>, this means there exists a constant c[ρ_*] (which may be different on different components of (ρ_*)) such that f_1(z,b(ρ_*)) - αlog(ρ_*(z)/(z)) - W∗ρ_*(z) = c[ρ_*] on (ρ_*) . Rearranging, we obtain (for a possibly different constant c[ρ_*] ≠ 0) ρ_*(z) = c[ρ_*] (z) exp[1/α( f_1(z,b(ρ_*)) - W∗ρ_*(z))] on (ρ_*) . Firstly, note that (ρ_*) ⊂() since ρ_* is absolutely continuous with respect to . Secondly, note that exp1/αf_1(z,b(ρ_*))≥ 1 for all z∈^d since f_1≥ 0. Finally, we claim that exp(-1/α W∗ρ_*(z))>0 for all z∈^d. In other words, we claim that W∗ρ_*(z) <∞ for all z∈^d. This follows by exactly the same argument as in Corollary <ref>, see equation (<ref>). We conclude that (ρ_*)=(). If we have in addition that ∈ L^∞(^d) and f_1(·,x)∈ L^∞(^d) for all x∈^d, then the maximizer ρ_* of G_b is in L^∞(^d) as well. This follows directly by bounding the right-hand side of (<ref>). With the above preliminary results, we can now show the HWI inequality, which in turn implies a Talagrand-type inequality and a generalized logarithmic Sobolev inequality. Define the dissipation functional D_b(ρ):= ∫ |δ_ρ G_b[ρ](z)|^2ρ̣(z) . Assume α,β>0 such that αλ̃> Λ_1+σ^2, and let λ_b be as defined in (<ref>). Denote by ρ_* the unique maximizer of G_b. (HWI) Let ρ_0,ρ_1∈_2(^d) such that G_b(ρ_0), G_b(ρ_1),D_b(ρ_0)<∞. Then G_b(ρ_0)-G_b(ρ_1)≤ W_2(ρ_0,ρ_1)√(D_b(ρ_0)) - λ_b/2 W_2(ρ_0,ρ_1)^2 (logSobolev) Any ρ∈_2(^d) such that G_b(ρ), D_b(ρ)<∞ satisfies D_b(ρ)≥ 2λ_b G_b(ρ|ρ_*) . (Talagrand) For any ρ∈_2(^d) such that G_b(ρ)<∞, we have W_2(ρ,ρ_*)^2 ≤2/λ_b G_b(ρ | ρ_*) . The proof of this result follows analogously to the arguments presented in the proofs of Proposition <ref>, Corollary <ref> and Corollary <ref>, using the preliminary results established in Proposition <ref> and Proposition <ref>. Following the same approach as in the proof of Theorem <ref>, the results in Theorem <ref> immediately follow by combining Proposition <ref>, Corollary <ref> and Proposition <ref> applied to solutions of the PDE (<ref>). § PROOF OF THEOREM <REF> The proof of this theorem uses strategies similar to those used for Theorem <ref>, but considers the evolution of an ODE rather than a PDE. Recall that for any x∈^d the best response (x)(·)∈(^d) in (<ref>) is defined as (x) _ρ̂∈G_c(ρ̂,x) , where the energy G_c(ρ,x):(^d) ×^d ↦ [-∞,∞] is given by G_c(ρ,x) = ∫ f_1(z,x) ρ̣(z) + ∫ f_2(z,x) (z) +β/2x-x_0^2 -α KL(ρ|) - 1/2∫ρ W ∗ρ . Let Assumptions <ref>-<ref> hold and assume α>Λ_1. Then for each x∈^d there exists a unique maximizer ρ_*:=(x) solving _ρ̂∈_2 G_c(ρ̂,x). Further, (x)∈ L^1(^d), ((x)) =(), and there exists a function c:^d↦ such that the best response ρ_*(z)=(x)(z) solves the Euler-Lagrange equation δ_ρ G_c[ρ_*,x](z):= αlogρ_*(z) - (f_1(z,x)+αlog(z)) + (W∗ρ_*)(z)= c(x) for all (z,x)∈()×^d . Equivalently, consider the minimization problem for F(ρ) = -∫ f_1(z,x) ρ̣(z) + α KL(ρ | ) + 1/2∫ρ W ∗ρ with some fixed x. Note that we can rewrite F(ρ) as F(ρ) = α∫ρlogρ ẓ+ ∫ V(z,x) ρ̣(z) + 1/2∫ρ W ∗ρ where V(z,x):= - (f_1(z,x)+αlog(z)) is strictly convex in z for fixed x by Assumptions <ref> and <ref>. Together with Assumption <ref>, we can directly apply the uniqueness and existence result from <cit.>. The result on the support of (x) and the expression for the Euler-Lagrange equation follow by exactly the same arguments as in Corollary <ref> and Corollary <ref>. The density of the best response (x) is continuous on ^d for any fixed x∈^d. 
Instead of solving the Euler-Lagrange equation (<ref>), we can also obtain the best response (x) as the long-time asymptotics for the following gradient flow: ∂_tρ = ÷ρ∇δ_ρ F[ρ] . Following Definitions <ref> and <ref>, we can charaterize the steady states ρ_∞ of the PDE (<ref>) by requiring that ρ_∞∈ L^1_+(^d)∩ L^∞_loc(^d) with ρ_∞_1=1 such that ρ_∞∈ W^1,2_loc(^d), ∇ W∗ρ_∞∈ L^1_loc(^d), ρ_∞ is absolutely continuous with respect to , and ρ_∞ satisfies ∇_z ( -f_1(z,x) + αlog(ρ_∞(z)/(z)) + W∗ρ_∞(z) ) = 0 ∀ z∈^d , in the sense of distributions. Noting that because the energy functional F(ρ) differs from G_a(ρ,μ) only in the sign of f_1(z,x) if viewing G_a(ρ,μ) as a function of ρ only. Note that F(ρ) is still uniformly displacement convex in ρ due to Assumption <ref>. Then the argument to obtain that ρ_∞∈ C(^d) follows exactly as that of Lemma <ref>. Let i∈{1,...,d}. If the energy H_i:(^d)→^d given by H_i(ρ,x):= α/2∫ρ(z)^2/(x)(z) ẓ + 1/2∫ρ W∗ρ - ∫∂_x_if_1(z,x)ρ̣(z) , admits a critical point at x∈^d, then the best response (x)∈(^d) is differentiable in the ith coordinate direction at x∈^d. Further, the critical point of H_i is in the subdifferential ∂_x_i(x). First, note that DF[(x)](x)(u)=0 for all directions u∈ C_c^∞(^d) and for all x∈^d thanks to optimality of (x). Here, DF denotes the Fréchet derivative of F, associating to every ρ∈(^d) the bounded linear operator DF[ρ]:C_c^∞→ DF[ρ](u):=∫δ_ρ F[ρ](z)u(z)ẓ , and we note that F(ρ) depends on x through the potential V. Fixing an index i∈{1,...,d}, and differentiating the optimality condition with respect to x_i we obtain ∂_x_i DF[(x)](x)(u) + D^2F[(x)](x)(u,∂_x_i(x))=0 ∀ u∈ C_c^∞(^d) . Both terms can be made more explicit using the expressions for the Fréchet derivative of F: ∂_x_i DF[(x)](x)(u) = - ∫∂_x_i f_1(z,x) u(z) ẓ , and for the second term note that the second Fréchet derivative of F at ρ∈(^d) along directions u,v∈ C_c^∞(^d) is given by D^2F[ρ](x)(u,v)=α∫_(ρ)u(z)v(z)/ρ(z) ẓ + ∬_(ρ)×(ρ) W(z-z̃) u(z)v(z̃) ẓ̣̃z . In other words, assuming ((x))=()=^d, relation (<ref>) can be written as α∫∂_x_i(x)/(x)(z) u(z) ẓ + ∫(W∗∂_x_i(x) )(z) u(z)v ẓ - ∫∂_x_i f_1(z,x) u(z) ẓ=0 , For ease of notation, given (x)∈(^d), we define the function g:(^d)→ L^1_loc(^d) by g[ρ](z):= αρ(z)/(x)(z) +W∗ρ - ∂_x_i f_1(z,x) . The question whether the partial derivative ∂_x_i(x) exists then reduces to the question whether there exists some ρ_*∈(^d) such that ρ=ρ_* solves the equation g[ρ](z) = c for almost every z∈^d . and for some constant c>0. This is precisely the Euler-Lagrange condition for the functional H_i defined in (<ref>), which has a solution thanks to the assumption of Lemma <ref>. We observe that the first term in H_i is precisely (up to a constant) the χ^2-divergence with respect to (x), ∫(ρ/(x)-1)^2 (x) ẓ = ∫ρ^2/(x) ẓ -1 . Depending on the shape of the best response (x), the χ^2-divergence may not be displacement convex. Similarly, the last term - ∫∂_x_if_1(z,x)ρ̣(z) in the energy H_i is in fact displacement concave due to the convexity properties of f_1 in z. The interaction term is displacement convex thanks to Assumption <ref>. As a result, the overall convexity properties of H_i are not known in general. Proving the existence of a critical for H_i under our assumptions on f_1, f_2, and W would be an interesting result in its own right, providing a new functional inequality that expands on the literature of related functional inequalities such as the related Hardy-Littlewood-Sobolev inequality <cit.>. 
It remains to show that H_i indeed admits a critical point. Next, we provide examples of additional assumptions that would guarantee for Lemma <ref> to apply. If either C sup_z∈^d |W(z)|<∞, or C sup_z∈^d |αlog ((x)(z)/(z)) + f_1(z,x) + c|<∞ , then for each x∈^d and for large enough α>0, the best response (x) is differentiable with the gradient coordinate ∂_x_i(x) given by the unique coordinate-wise solutions of the Euler-Lagrange condition for H_i. We will show this result using the Banach Fixed Point Theorem for the mapping T_i:L^1(^d)→ L^1(^d) for each fixed i∈{1,...,d} given by T_i(ρ) = -(x)(z)/α[(W ∗ρ)(z) -∂_x_i f_1(z,x) + c] , noting that ρ_*=T_i(ρ_*) is the Euler-Lagrange condition for a critical point of H_i. It remains to show that T_i is a contractive mapping. For the first assumption, note that T_i(ρ)-T_i(ρ')_1 = 1/α∫(x) |W ∗ (ρ-ρ')| ẓ ≤1/α∬(x)(z) W(z-ẑ) |ρ(ẑ)-ρ'(ẑ)| ̣̂z ẓ ≤W_∞/α(∫(x)(z)ẓ)( ∫|ρ(ẑ)-ρ'(ẑ)| ̣̂z) ≤C/αρ-ρ'_1 . Similarly, for the second assumption we estimate T_i(ρ)-T_i(ρ')_1 = 1/α∫(x) |W ∗ (ρ-ρ')| ẓ ≤1/α∬(x)(z) W(z-ẑ) |ρ(ẑ)-ρ'(ẑ)| ̣̂z ẓ = 1/α∫ (W ∗(x))(z) |ρ(z)-ρ'(z)| ẓ ≤1/αW ∗(x)_∞ρ-ρ'_1 which requires a bound on W ∗(x)_∞. Using W ∗(x)_∞ = sup_z∈^d |αlog ((x)/) + f_1(z,x) + c| = C < ∞ , we conclude that T_i is a contraction map for large enough α. In both cases, we can then apply the Banach Fixed-Point Theorem to conclude that ∇_x (x) exists and is unique. Let (x) as defined in (<ref>). If (x) is differentiable in x, then we have ∇_x G_d(x) = .(∇_x G_c(ρ,x))|_ρ=(x). We start by computing ∇_x G_d(x). We have ∇_x G_d(x) =∇_x (G_c((x),x)) = ∫δ_ρ [G_c(ρ,x)]|_ρ=(x)(z) ∇_x (x)(z) ẓ + .(∇_x G_c(ρ,x))|_ρ=(x) = c(x) ∇_x ∫(x)(z) ẓ + .(∇_x G_c(ρ,x))|_ρ=(x) =.(∇_x G_c(ρ,x))|_ρ=(x) , where we used that (x) solves the Euler-Lagrange equation (<ref>) and that (x)∈(^d) for any x∈^d so that ∫(x)(z) ẓ is independent of x. Let Assumption <ref> hold. Then G_d:^d→∪{+∞} is strongly convex with constant λ_d:=λ_1+λ_2+β>0. The energy G_c(ρ,x) is strongly convex in x due to our assumptions on f_1, f_2, and the regularizing term x-x_0_2^2. This means that for any ρ∈, G_c(ρ,x) ≥ G_c(ρ,x') + ∇_x G_c(ρ,x')^⊤ (x-x') + λ_d/2x-x'_2^2 . Selecting ρ=(x'), we have G_c((x'),x) ≥ G_c((x'),x') + ∇_x G_c((x'),x')^⊤ (x-x') + λ_d/2x-x'_2^2 . Since G_c((x'),x) ≤ G_c((x),x) by definition of (x), we obtain the required convexity condition: G_d(x)=G_c((x),x) ≥ G_c((x'),x') + ∇_x G_c((x'),x')^⊤ (x-x') +λ_d/2x-x'_2^2 . For any reference measure ρ_0∈, we have G_d(x) ≥ G_c(ρ_0,x) ≥ -α KL(ρ_0 | ) - 1/2∫ρ_0 W∗ρ_0 + β/2x-x_0^2 and therefore, G_d is coercive. Together with the strong convexity provided by Lemma <ref>, we obtain the existence of a unique minimizer x_∞∈^d. Convergence in norm now immediately follows also using Lemma <ref>: for solutions x(t) to (<ref>), we have 1/2/ṭx(t)-x_∞^2 = -(G_d(x(t))-G_d(x_∞))· (x(t)-x_∞) ≤ -λ_d x(t)-x_∞^2 . A similar result holds for convergence in entropy using the Polyák-Łojasiewicz convexity inequality 1/2∇ G_d(x)_2^2 ≥λ_d (G_d(x)-G_d(x_∞)) , which is itself a direct consequence of strong convexity provided in Lemma <ref>. Then /ṭ(G_d(x(t))-G_d(x_∞)) = ∇_x G_d(x(t))·ẋ(t) = -∇_x G_d(x(t))^2 ≤ -2λ_d (G_d(x(t))-G_d(x_∞)) , and so the result in Theorem <ref> follows. § ADDITIONAL SIMULATION RESULTS We simulate a number of additional scenarios to illustrate extensions beyond the setting with provable guarantees and in the settings for which we have results but no numerical implementations in the main paper. 
First, we simulate the aligned objectives setting in one dimension, corresponding to (<ref>). Then we consider two settings which are not covered by our theory: (1) the previously-fixed distribution is also time varying, and (2) the algorithm does not have access to the full underlying distributions and instead samples from them to update the classifier. Lastly, we illustrate a classifier with the population attributes in two dimensions, which requires a different finite-volume implementation <cit.> than the one-dimensional version of the PDE, since fluxes must be tracked in two dimensions. §.§ Aligned Objectives Here we show numerical simulation results for the aligned objectives case, where the population and the classifier share the same cost function. In this setting, the dynamics are of the form ∂_t ρ = ÷ρ∇_z δ_ρ G_a[ρ,μ] = ÷ρ∇_z ( ∫ f_1(z,x) μ̣(x) + αlog(ρ/) + W ∗ρ) ẋ = -∇_x (∫ f_1(z,x) ρ̣(z) + ∫ f_2(z,x) (z) + β/2x-x_0^2 ) where f_1 and f_2 are as defined in section <ref>, and W=1/20(1+z)^-1, a consensus kernel. Note that W does not satisfy Assumption <ref>, but we still observe convergence in the simulation. This is expected; in other works such as <cit.>, the assumptions on W are relaxed and convergence results are proven given sufficient convexity of the other terms. The reference distribution is set to ρ_0, which models a penalty for the effort required of individuals to alter their attributes. The coefficient weights are α=0.1 and β=1, with discretization parameters dz=0.1, dt=0.01. In Figure <ref>, we observe the strategic distribution separating itself from the stationary distribution, improving the performance of the classifier and also improving the performance of the population itself. The strategic distribution and classifier appear to be stationary by time t=40. §.§ Multiple Dynamical Populations We also want to understand the dynamics when both populations are strategic and respond to the classifier. Here we study this setting numerically, and in future work we hope to prove additional results regarding convergence. This corresponds to modeling the previously-fixed distribution as time-dependent; let this distribution be τ∈_2. We consider the case where ρ is competitive with x and τ is aligned with x, with dynamics given by ∂_t ρ = -÷ρ∇_z ( f_1(z,x) - αlog (ρ/) - W∗ρ) ∂_t τ = ÷τ∇_z (f_2(z,x) + αlog(τ/) + W∗τ) ẋ = -∇_x ( ∫ f_1(z,x) ρ̣(z) + ∫ f_2(z,x) τ̣(z) + β/2x-x_0^2 ). We use W=0 and f_1, f_2 as in section <ref> and the same discretization parameters as in Section <ref>. In Figure <ref>, we observe that the τ population moves to the right, assisting the classifier in maintaining accurate scoring. In contrast, ρ also moves to the right, causing the right tail to be classified incorrectly, which is desirable for individuals in the ρ population but not desirable for the classifier. While we leave analyzing the long-term behavior mathematically for future work, the distributions and classifier appear to converge by time t=20. §.§ Sampled Gradients In real-world applications of classifiers, the algorithm may not know the exact distribution of the population, and must rely on sampling to estimate it. In this section we explore the effects of the classifier updating based on an approximate gradient, which is computed by sampling from the true underlying distributions. We use the same parameters for the population dynamics as in section <ref>, and for the classifier we use the approximate gradient ∇_x L(z,x_t) ≈1/n∑_i=1^n ( ∇_x f_1(z_i,x_t) + ∇_x f_2(z̅_i,x_t) ) + β (x_t-x_0), z_i ∼ρ_t, z̅_i ∼_t . 
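For concreteness, the sampled update can be illustrated with a short Python sketch. This is only a minimal illustration, not the implementation used for the figures: the one-dimensional costs f_1 and f_2 below reuse the logistic form of the two-dimensional example given later in this appendix, the densities are hypothetical Gaussians on a grid, and the samples z_i, z̅_i are drawn from the discretized densities.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_x_f1(z, x):
    # d/dx of f_1(z,x) = 1/2 (1 - 1/(1+exp(x*z))), the 1D analogue of the
    # logistic cost of the two-dimensional example (illustrative stand-in)
    s = 1.0 / (1.0 + np.exp(x * z))
    return 0.5 * z * s * (1.0 - s)

def grad_x_f2(z, x):
    # d/dx of f_2(z,x) = 1/2 (1/(1+exp(x*z)))
    s = 1.0 / (1.0 + np.exp(x * z))
    return -0.5 * z * s * (1.0 - s)

def sampled_grad(z_grid, dz, rho, rho_bar, x, x0, beta, n):
    """Monte Carlo estimate of grad_x L using n samples from each distribution."""
    z1 = rng.choice(z_grid, size=n, p=rho * dz / np.sum(rho * dz))
    z2 = rng.choice(z_grid, size=n, p=rho_bar * dz / np.sum(rho_bar * dz))
    return np.mean(grad_x_f1(z1, x) + grad_x_f2(z2, x)) + beta * (x - x0)

# illustrative densities on a grid (stand-ins for rho_t and the fixed distribution)
z_grid = np.linspace(-5.0, 5.0, 201)
dz = z_grid[1] - z_grid[0]
rho = np.exp(-0.5 * (z_grid - 1.0) ** 2); rho /= rho.sum() * dz
rho_bar = np.exp(-0.5 * (z_grid + 1.0) ** 2); rho_bar /= rho_bar.sum() * dz

x, x0, beta, dt = 0.0, 0.0, 1.0, 0.01
for _ in range(1000):  # explicit Euler step for the classifier update
    x -= dt * sampled_grad(z_grid, dz, rho, rho_bar, x, x0, beta, n=4)
```

In the best-response variant discussed next, the single Euler step would be replaced by an inner optimization of x against the current sample estimate, which is what amplifies the sampling noise.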
First, we simulate the dynamics with the classifier and the strategic population updating at the same rate, using α=0.05, β=1, and the same consensus kernel as used previously, with the same discretization parameters as in <ref>. In Figure <ref>, we observe no visual difference between the two results with n=4 versus n=40 samples, which suggests that not many samples are needed to estimate the gradient. Next, we consider the setting where the classifier is best-responding to the strategic population. Unlike the first setting, we observe in Figure <ref> a noticeable difference between the evolution of ρ_t with n=4 versus n=40 samples. This is not surprising: optimizing against a very poor estimate of the cost function at each time step causes x_t to vary wildly, and this method fails to take advantage of the correct "average" behavior that gradient descent provides. §.§ Two-dimensional Distributions In practice, individuals may alter more than one of their attributes in response to an algorithm, for example, both cancelling a credit card and reporting a different income in an effort to change a credit score. We model this case with z∈^2 and x∈^2, and simulate the results for the setting where the classifier and the population are evolving at the same rate. While this setting is not covered by our theory, it interpolates between the two timescale extremes. We consider the following classifier: f_1(z,x) = 1/2( 1-1/(1+exp(x^⊤ z)) ) f_2(z,x) = 1/2( 1/(1+exp(x^⊤ z)) ) with W=0. Again, the reference distribution corresponds to the initial shape of the distribution, instituting a penalty for deviating from the initial distribution. We use α=0.5 and β=1 for the penalty weights, and run until t=4 with dt=0.005 and dx=dy=0.2 for the discretization. In this case, the strategic population is competing with the classifier, with dynamics given by ∂_t ρ = -÷ρ∇_z ( f_1(z,x)- αlog(ρ/) ) ẋ = -∇_x (∫ f_1(z,x) ρ̣(z) + ∫ f_2(z,x) (z) + β/2x-x_0^2 ) In Figure <ref>, we observe the strategic population increasing mass toward the region of higher probability of being labeled "1" while the true underlying label is zero, with the probability plotted at time t=4. This illustrates behavior similar to the one-dimensional case, including the distribution splitting into two modes, which is another example of polarization induced by the classifier. Note that while in this example x∈^2 and we use a linear classifier, we could have x ∈^d with d>2 and different functions for f_1 and f_2 which yield a nonlinear classifier; our theory in the timescale-separated case holds as long as the convexity and smoothness assumptions on f_1 and f_2 are satisfied.
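To make the population side of these simulations concrete as well, the sketch below shows a one-dimensional explicit upwind finite-volume step of the kind used throughout this appendix (the actual implementation follows the cited finite-volume scheme; in two dimensions the same construction is applied with fluxes in both coordinate directions, as noted above). The cost f_1, the parameters, and the Gaussian reference distribution are illustrative assumptions, and the time step must satisfy the usual CFL-type restriction.

```python
import numpy as np

# grid and illustrative parameters (not the values used for the figures)
z = np.linspace(-5.0, 5.0, 201)
dz = z[1] - z[0]
dt = 1.0e-3
alpha, x = 0.1, 1.0

rho_ref = np.exp(-0.5 * z ** 2)
rho_ref /= rho_ref.sum() * dz          # reference (initial) distribution
rho = rho_ref.copy()

def f1(z, x):
    # 1D stand-in for the population cost (logistic form of the 2D example)
    return 0.5 * (1.0 - 1.0 / (1.0 + np.exp(x * z)))

def upwind_step(rho, x):
    """One step of d_t rho = -div( rho * grad_z( f1 - alpha*log(rho/rho_ref) ) ),
    i.e. the competitive dynamics with W = 0, written in conservation form."""
    xi = -f1(z, x) + alpha * np.log(np.maximum(rho, 1e-300) / rho_ref)
    v = -(xi[1:] - xi[:-1]) / dz                      # velocity at cell interfaces
    flux = np.where(v > 0.0, rho[:-1], rho[1:]) * v   # first-order upwind flux
    flux = np.concatenate(([0.0], flux, [0.0]))       # zero-flux boundary conditions
    return rho - dt / dz * (flux[1:] - flux[:-1])     # total mass is conserved

for _ in range(2000):
    rho = upwind_step(rho, x)
```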
http://arxiv.org/abs/2307.01596v1
20230704093610
Cosmic rays from star clusters
[ "Stefano Gabici" ]
astro-ph.HE
[ "astro-ph.HE" ]
Cosmic rays from star clusters
Stefano Gabici
==============================

Massive stars blow powerful winds and eventually explode as supernovae. By doing so, they inject energy and momentum into the circumstellar medium, which is pushed away from the star and piles up to form a dense and expanding shell of gas. The effect is larger when many massive stars are grouped together in bound clusters or associations. Large cavities form around clusters as a result of the stellar feedback on the ambient medium. They are called superbubbles and are characterised by the presence of turbulent and supersonic gas motions. This makes star clusters ideal environments for particle acceleration, and potential contributors to the observed Galactic cosmic ray intensity.

§ INTRODUCTION

More than one century after their discovery, revealing the origin of cosmic rays (CRs) remains one of the central open issues in high energy astrophysics. CRs are energetic particles that hit the Earth's atmosphere from outer space. They are mainly atomic nuclei (mostly protons, with a ∼ 10% contribution from helium and ∼ 1% of heavier nuclei) plus a contribution from electrons at the percent level <cit.>. Except for the (very few) highest energy particles, CRs are accelerated within the Milky Way, which therefore must host efficient and powerful particle accelerators. Any scenario proposed to explain the origin of Galactic CRs must satisfy (at least!) the following conditions, inferred from direct and indirect observations of cosmic particles (see e.g. Blasi's lecture notes in this volume):
* sources must inject CRs into the interstellar medium (ISM) at a rate of ∼ 10^41 erg/s <cit.>;
* the energy spectrum of the CRs injected in the ISM must be close to a power law ∝ E^-s with s ∼ 2.1-2.4 <cit.>;
* CR protons must be accelerated up to energies exceeding those of the CR knee, a steepening observed in the CR spectrum at a particle energy of a few PeV <cit.>;
* the observed differences between the composition of CRs and of cosmic (solar) matter must be explained (for a review on CR composition see <cit.> or <cit.> and references therein).
As seen in many of the Chapters in this book, the most common working hypothesis is that Galactic CRs are accelerated at supernova remnant (SNR) shocks via diffusive shock acceleration <cit.>. The main argument in favour of this hypothesis is that the rate at which mechanical energy is injected in the ISM by supernova explosions is ≈ 10^42 erg/s. Therefore, the observed intensity of CRs can be explained if ∼ 10% of such mechanical energy is somehow converted into accelerated particles (see point 1 in the list above). Moreover, even though the test-particle theory of diffusive shock acceleration predicts power law spectra of accelerated particles of slope s = 2, various kinds of non-linearities in the acceleration mechanism can be invoked to explain the required steeper spectra (point 2 in the list) <cit.>. On the other hand, points 3 and 4 above are more difficult to account for (see <cit.> for an extended critical review of the SNR paradigm). The acceleration of protons beyond PeV energies at SNR shocks requires very large shock velocities and large values of the magnetic field strength. 
These conditions might be achieved in the very early stages of the SNR lifetime (during the first few tens of years <cit.>), but it is not clear if in such a short time enough multi-PeV protons can be produced to match observations <cit.>. Another major difficulty encountered by the SNR scenario is the explanation of some anomalous isotopic ratios observed in CRs. Most notably, the ^22Ne/^20Ne ratio in CRs is a factor of ≈ 5 larger than the value found in Solar abundances <cit.>. This discrepancy can be explained if ejecta from Wolf-Rayet stars, which are enriched in ^22Ne, are accelerated and contribute to the observed CR intensity <cit.>. There are two ways to do so: either Wolf-Rayet material is accelerated at the stellar wind termination shock (WTS) <cit.> or Wolf-Rayet stellar winds pollute the ISM medium with ^22Ne, which is then accelerated by SNR shocks <cit.>. In both scenarios, star clusters are likely to play a prime role, as massive stars do not form isolated, but rather in groups. The study of particle acceleration in and around stellar clusters is therefore of great interest. During the first few million years of the lifetime of a star cluster, stellar winds dominate the mechanical energy output of the systems, and then, when the most massive stars begin to explode, supernovae take over. As a result of the combined effect of stellar winds and supernova explosions, large cavities are inflated around star clusters <cit.>. Such cavities are called superbubbles, and have been proposed as sites of particle acceleration alternative to SNRs. The interior of superbubbles is filled by an hot, tenuous, and most likely very turbulent medium. Turbulence may be generated by the mutual interactions of stellar ejecta, which can be either continuous winds or supernova explosions (e.g. <cit.> and references therein) Given these peculiar conditions, it is not clear if CR production in star clusters is simply the sum of the acceleration at recurrent SNR shocks <cit.> or if a different acceleration mechanism has to be invoked <cit.>. In both cases, the acceleration of particles at stellar wind termination shocks provides an additional contribution to the CR content of these objects (e.g. <cit.>). The interest in star clusters as particle accelerators was recently revived by the recent detection of a diffuse gamma-ray emission surrounding a number of such objects <cit.>. Further detections in both the GeV and TeV (and possibly multi-TeV) gamma-ray domain were reported (e.g. <cit.>). Such emission proves unambiguously that star clusters can accelerate particles beyond TeV energies. The goal of these lecture notes is to provide the basic ingredients to understand the mechanisms which are likely responsible for the acceleration of CRs in and around star clusters. The remaining of the Chapter is structured as follows. The formation and evolution of an interstellar bubble inflated by a single massive star will be described in Section <ref>, while the case of a superbubble inflated by the ensemble of stars that form a cluster will be treated in Section <ref>. Some basic concepts on particle acceleration in astrophysical environments are given in Section <ref>. The remainder of Section <ref> will be devoted to a description of possible mechanisms for particle acceleration in/around star clusters, operating both at the WTS <ref> and in the turbulent superbubble inflated around the cluster <ref>. Open problems in the field will be briefly reviewed in Section <ref>. 
§ INTERSTELLAR BUBBLE INFLATED BY A MASSIVE STAR WIND The first part of this lecture notes provides a description of how the combined effect of stellar winds and supernova explosions affects the medium surrounding a star cluster. However, it is convenient to consider first the case of an isolated early-type star located in a uniform and pressureless (cold) ISM of mass density ϱ_0 <cit.>. At some time t = 0 the star begins to emit a spherically symmetric and steady wind characterised by a mass loss rate Ṁ_w and by a constant terminal velocity u_w. The wind kinetic power is then L_w = Ṁ_w u_w^2 /2 and its density profile can be derived from mass conservation [∇ (ϱ_w u_w) = 0], to give: ϱ_w = Ṁ_w/4 π u_w R^2 . Here, R represents the distance from the star, which is treated as a point-like source of mechanical energy. In order to quantify the impact of stellar winds on the ambient ISM, let us recall that they are launched as a result of the transfer of momentum from the stellar photons to matter. This happens through the absorption and scattering of UV lines <cit.>. As the luminosity of a star L_* increases steeply with its mass, the most powerful winds are found around the most massive stars. Using the observed correlation between the momentum carried by the wind and that carried by stellar photons, Ṁ_w u_w ≈ (1/2) L_*/c, the kinetic power of the wind can be expressed as <cit.>: L_w ≈L_* u_w/4 c∼ 3 × 10^36( L_*/3 × 10^5 L_⊙) ( u_w/3000  km/s)   erg/s where L_⊙ is the luminosity of the Sun, and where quantities have been normalised to typical values of very massive stars (several tens of solar masses). At this point, we can estimate the total energy output integrated on the lifetime of the star τ_* (for very massive stars this is of the order of few million years) to get: E_w = L_w τ_* ∼ 4 × 10^50( L_*/3 × 10^5 L_⊙) ( u_w/3000  km/s) ( τ_*/4  Myr)   erg Remarkably, this is of the same order as the energy deposited in the ISM by a supernova explosion (≈ 10^51 erg), and therefore stellar winds from very massive stars are expected to impact dramatically on the ambient ISM. In particular, as a result of the injection of mechanical energy, cavities are inflated in the ISM around massive stars. Such cavities are called interstellar bubbles, and their evolution in time proceeds through a number of different phases, which will be described in the following. §.§ Free-expansion phase At first, the circumstellar matter is pushed away by the wind and accumulates in a dense, expanding, and spherical shell located at a distance R_s from the star. Initially, the shell of swept up interstellar gas contains very little mass, and therefore the wind expands freely (R_s ∼ u_w t, hence the name free expansion phase). As the shell moves at a highly supersonic velocity, a shock wave, called forward shock, forms ahead of it. Then, when the mass of interstellar gas swept up by the shock, M_sw = (4 π/ 3) ϱ_0 R_s^3, becomes comparable to the mass carried by the wind, Ṁ_w t, the inertia of the shell becomes important and the expansion decelerates. This happens at a time: τ_f = ( 3 Ṁ_w/4 πϱ_0 u_w^3)^1/2∼ 16 ( Ṁ_w/10^-6 M_⊙/ yr)^1/2( n_0/ cm^-3)^-1/2( u_w/3000  km/s)^-3/2 yr where the mass loss rate has been normalised to a value appropriate to describe a main sequence star of several tens of solar masses <cit.>, and the ambient gas number density n_0 = ϱ/μ m_H to a value characteristic of the interstellar gas <cit.>. 
Here, m_H is the mass of hydrogen, and μ∼ 1.4 accounts for the presence of helium in the ISM at the ∼ 10% level. Note that the phase of free expansion is several orders of magnitude shorter than the lifetime of a massive star (τ_*, few million years), and therefore will not be further discussed in the following. §.§ Adiabatic phase After the free expansion phase, the shell begins to decelerate, and therefore the wind no longer expands freely. The deceleration of the wind takes place at a spherical shock wave, called wind termination shock. The resulting structure is called interstellar bubble and consists of four regions (see Fig. <ref>). Proceeding from the star outwards they are: i) an highly supersonic wind; ii) a region containing the shocked wind material; iii) a shell containing the shocked interstellar gas; iv) the ambient ISM. Regions i and ii are separated by the WTS, located at R = R_w, regions ii and iii by a contact discontinuity (R = R_c), and regions iii and iv by the forward shock (R = R_s). At this point, it is useful to estimate the thickness of the shell of shocked interstellar gas, Δ R = R_s-R_c. As the forward shock, at least in the early phase of the expansion, is certainly very strong[This implies that the assumption of a cold ambient interstellar gas is appropriate. It can be seen by recalling that the flux of momentum crossing a shock which moves at velocity u_s is ϱ_0 u_s^2 + P_0, where ϱ_0 u_s^2 is the shock ram pressure and P_0 is the pressure of the ISM upstream of the shock. For a strong shock u_s ≫ c_s and therefore P_0 ∼ϱ_0 c_s^2 ≪ϱ_0 u_s^2 is much smaller than the ram pressure and therefore can be neglected.] (the sound speed in the warm ISM is ≈ 10 km/s), one can safely assume that the density of the gas in the shell is that of the ambient ISM compressed by a factor of 4 (see Caprioli's lecture notes in this volume or <cit.>). The mass of gas in the shell can be computed as M_sh∼ (4 π R_s^2 Δ R) (4 ϱ_0), and it must be equal to the total mass of the shocked ISM, M_sh = (4 π/3) R_s^3 ϱ_0. Equating the two definitions of M_sh gives Δ R ∼ 0.08 R_s, which means that the shell is quite thin. Therefore, in order to simplify the problem, the shell will be assumed to be infinitely thin (R_s ≡ R_c), which is, the position of the shell coincides with that of the forward shock. While this might seem to be a rather crude approximation, it provides in fact reasonably accurate results. As long as the system is adiabatic (i.e. radiative losses can be neglected), the expansion rate of the forward shock can be derived in a very simple way using dimensional analysis. This can be done because the wind kinetic power, L_w, is dissipated at the WTS and mostly converted into internal energy (and pressure) of the gas in region ii. The pressure of the gas in that region pushes onto the shell (region iii), whose inertia depends on the density of the ambient medium ϱ_0. It follows that the expansion rate of the forward shock must depend uniquely on the values of L_w and ϱ_0. As it is not possible to combine these two quantities to obtain a characteristic spatial or temporal scale of the problem, the solution has to be scale free, i.e., a power law: R_s ∝ t^α. The only possible scale-free solution is then: R_s ∼ a ( L_w/ϱ_0)^1/5 t^3/5 where a is a non-dimensional constant of order unity. 
The expansion velocity of the shell is given by: u_s = dR_s/ dt = 3/5R_s/t = 3/5 a ( L_w/ϱ_0)^1/5 t^-2/5 For a rigorous discussion on scale-free (or self-similar) solutions the reader is referred to <cit.>. Note that the shocked ambient gas will be heated up to very large temperatures. Behind a strong shock the temperature of the gas is (see Caprioli's lectures, this volume, or <cit.>): kT = 3/16μ m_H u_s^2 ∼ 44  a^2 ( L_w/10^36 erg/s)^2/5( n_0/ cm^-3)^-2/5( t/10^4  yr)^-4/5  eV where k is the Boltzmann constant, and Eq. <ref> was used to compute the second equality. A plasma characterised by such temperatures radiates in the UV/soft-X ray domain and cools in a characteristic time τ_c, which mostly depends on the gas temperature and density. Therefore, the system evolves in the adiabatic phase for τ_f < t < τ_ad≈τ_c. Radiative losses are conveniently described by a cooling function Λ(T) (erg cm^3/s) which depends on gas temperature and metallicity (here assumed to be solar). For the hot and ionised plasmas considered here, the cooling is dominated by both line and continuum thermal emission. In the range of temperatures 10^5  K≲ T ≲ 10^7.5 K the cooling function can be (roughly) approximated as Λ(T) = Λ_0 T^-1/2∼ 1.6 × 10^-19 T^-1/2 erg cm^3/s <cit.>. A fully ionised plasma characterised by an hydrogen number density n_H and an electron number density n_e ∼ 1.2 n_H (the numerical factor accounts for the presence of helium) cools at a rate L ≡ n_H n_e Λ(T) ∼ 1.2  n_H^2 Λ(T). Due to the ∝ n_H^2 scaling, the shell of shocked ambient gas cools first, as it is the densest region in the system (it contains most of the total mass concentrated in a very small volume). As the thermal energy density of a fully ionised plasma is ϵ_th∼ 2.3 (3/2) n_H k T, where k is the Boltzmann constant, the cooling time of the plasma in the shell can be written as: τ_c = ϵ_th/L = 2.3 (k T)^3/2/3.2  n_0 Λ_0 k^1/2∼ 2.5 × 10^4 ( n_0/ cm^-3)^-1( kT/0.1  keV)^3/2 yr where n_H = 4 × n_0 to account for shock compression. The cooling time of the shell can be now estimated by equating τ_c to the age of the system t. When that is done (using Eq. <ref>) one gets a duration of the adiabatic phase equal to: t_ad∼ 8.6 × 10^3   a^15/11( L_w/10^36 erg/s)^3/11( n_0/ cm^-3)^-8/11 yr which is much smaller than the lifetime of the system τ_*. For this reason, the adiabatic phase will not be further discussed. §.§ Partially radiative, or snowplow phase After t_ad, then, the shell cools but the material injected by the wind into zone ii is still adiabatic. The density in region ii will be shown to be orders of magnitudes smaller than the density in the shell, and therefore the interior will cool much later. It follows that interstellar bubbles spend most of their life in this partially radiative phase, which deserves to be studied in detail. Remarkably, Eq. <ref> provides a good description of the expansion rate of the forward shock also in this phase. Calculations more accurate than those performed here show that the only difference is that the value of the constant a is equal to 0.88 in the fully adiabatic phase, and decreases to 0.76 when the shell becomes radiative <cit.>. In order to understand why this is the case, assume that all the kinetic energy that flows across the forward shock is radiated away. The rate at which the system loses energy is then: L_rad = ( 4 π R_2^2 ) ( 1/2ϱ_0 u_s^3 ) Such a rate is constant in time if the scalings R_s ∝ t^3/5 and u_s ∝ t^-2/5 are adopted. 
This means that it is possible to define an effective injected power as L_eff = L_w - L_rad, which is also constant in time. Thus, Eq. <ref> and <ref> are still solutions of the problem after the substitution L_w → L_eff. After setting a = 0.76 one finally gets: R_s ∼ 26 ( L_w/10^36 erg/s)^1/5( n_0/ cm^-3)^-1/5( t/ Myr)^3/5 pc u_s ∼ 15 ( L_w/10^36 erg/s)^1/5( n_0/ cm^-3)^-1/5( t/ Myr)^-2/5 km/s The total energy in the system at a time t can be computed from Eq. <ref>, <ref>, and <ref> as: E_tot∼ (L_w-L_rad) t = [ 1 - 2 π( 3/5)^3 a^5 ] L_w t Recalling that during this phase the system is composed by a cold and dense expanding shell, pushed by an hot and rarefied interior, the total energy can be written as the sum of the kinetic energy of the shell: E_k = 1/2 M_sh u_s^2 = 2 π/3( 3/5)^2 a^5 L_w t plus the thermal energy of the hot interior: E_th = ( 3/2 P ) ( 4 π/3 R_s^3 ) where P is the average gas pressure in region ii. Combining Eq. <ref>, <ref>, and <ref> one can see that E_k/E_tot∼ 0.3 and E_th/E_tot∼ 0.7 and that the pressure in the hot interior is <cit.>: P = 4.5 × 10^-12( L_w/10^36 erg/s)^2/5( n_0/ cm^-3)^3/5( t/ Myr)^-4/5  erg/cm^3 The expressions above for R_s, u_s, and P have been derived under the assumption of a cold (pressureless) ambient medium. Such assumption is valid as long as the forward shock is strong. The shock Mach number is obtained dividing the shock velocity by the sound speed of the ISM of temperature T_0, c_s,0 = √((5/3) kT_0/μ m_H), which gives: M_s ∼ 1.5 ( L_w/10^36 erg/s)^1/5( n_0/ cm^-3)^-1/5( T_0/10^4  K)^-1/2( t/ Myr)^-2/5 This shows that, for a warm ISM characterised by a temperature of T_0 ∼ 10^4 K, Eq. <ref>, <ref>, and <ref> are valid only up to t_s ≲ 1 Myr. After that, the pressure of the ambient medium starts to be important, and the expansion rate of the shell drops significantly: the bubble enters the pressure-confined phase <cit.>. §.§.§ The internal structure of interstellar bubbles Although the radiative cooling of the shell has little impact on the expansion rate of the forward shock, it strongly affects the internal structure of the system. First of all, as a consequence of radiative cooling, the shell collapses and becomes extremely thin and dense. This can be easily seen by considering an isothermal forward shock, i.e., a shock were radiative losses in the denser downstream region are so effective to cool the gas down to the initial (upstream) temperature <cit.>. If the temperature is constant across the shock transition, the sound speed, which depends on temperature only, will be equal to the interstellar value c_s,0 on both sides of the shock. This means that the pressure will depend on density only, P_i = ϱ_i c_c,0^2, where the subscript refers to the upstream (i = 1) or downstream (i = 2) region. The conservation of momentum flux across the shock then reads: ϱ_1 u_1^2 + ϱ_1 c_s,0^2 = ϱ_2 u_2^2 + ϱ_2 c_s,0^2 which can be divided by ϱ_1 u_1^2 and combined with the condition for mass conservation ϱ_1 u_1 = ϱ_2 u_2 to give: ( r - M^2 ) (r - 1) = 0 where M = u_1/c_s,0 is the shock Mach number and r = ϱ_2/ϱ_1 = u_1/u_2 is the shock compression factor. Neglecting the solution r = 1, which is unphysical (no shock wave), one is left with r = M^2. Then, for strong shocks the compression factor can largely exceed 4 and as a consequence the shell becomes very thin (hence the name snowplow phase as shocked ambient matter accumulates just behind the forward shock). 
It follows that the approximation made above of an infinitesimally thin shell is even more appropriate during the partially radiative phase as long as the Mach number M_s is significantly large. The arbitrarily large compression for an arbitrary large Mach number implied by r = M^2 is of course not physical. In fact, also the interstellar magnetic field will be compressed at the shock, as B_2 ∼ r B_1. Such compression induces an increase of the downstream magnetic pressure with the shock compression factor scaling as P_B,2 = B_2^2/8 π∝ r^2. This scaling is steeper than that of the downstream thermal pressure ϱ_2 u_2^2 ∝ r. Therefore, for large compression factors, the pressure downstream of the shock is largely dominated by the magnetic one. For an highly supersonic (the upstream gas pressure can be neglected) and highly superalfvenic (the upstream magnetic pressure can be neglected[A superalfvenic shock moves at a speed larger than the Alfvén one, u_1 > v_A = B_1/(4 πϱ_1)^1/2. This can be rewritten as ϱ_1 u_1^2 > B_1^2/4 π = 2 P_B,1. Then, for highly superalfvenic shocks (u_1≫ v_A) the magnetic pressure upstream is negligible when compared to the ram pressure.]) momentum conservation simplifies to: ϱ_1 u_1^2 ∼ r^2 B_1^2/8 π which implies that the compression factor does not increase indefinitely with the Mach number M, but is bounded to the value <cit.>: r ∼( 8 πϱ_1 u_1^2/B_1^2)^1/2 = √(2) M_A ≫ 1 where M_A = u_1/v_A is the alfvenic Mach number. Eq. <ref> shows that for a strong and magnetised shock the compression factor can still be very large, but never diverges. The thin, cold, and magnetised shell of swept up ISM bounds the low density cavity, which is filled with shocked wind material and is therefore hot. This has two consequences. First, a hot gas is characterised by a large speed of sound. Under these conditions sound waves can cross the cavity in a time which is shorter than the age of the system. Therefore, the pressure P in region ii can be assumed to be (roughly) spatially uniform. Second, thermal conduction will operate at the interface between the cold shell and the hot interior, causing cold gas to evaporate from the shell into the cavity and mix with the shocked wind material <cit.>. Due to thermal conduction, then, the boundary between region ii and iii is not sharp, but it is smeared out. It is convenient to describe the transition region in the rest frame where the inner boundary of the shell is at rest, and to assume that the inward flow of evaporating material is well described by a stationary one dimensional (plane-parallel) isobaric flow. If radiative losses are assumed to be unimportant in the transition region, and if the role of the magnetic field is ignored, the gas flow is obtained after balancing the outward heat flux due to thermal conduction with the inward mechanical energy flow carried by the evaporating gas. Heat flow from region ii to region iii is proportional to the temperature difference between the two regions, and can be written as: F_h = -K ∂ T/∂ z where z is the distance from the shell and the minus sign indicates that heat flows towards the colder region. The proportionality coefficient K is called thermal conductivity and depends quite strongly on the gas temperature: K = C  T^5/2 <cit.>. On the other hand, C depends weakly on temperature (through the Coulomb logarithm) and will be therefore treated as a constant: C = 1.2 × 10^-6 erg/cm/s/K^7/2 <cit.>. 
If radiative losses in the transition region are neglected, at equilibrium the heat flow has to be balanced by a mechanical flow in the opposite direction, that can be estimated as: F_m = 5/2 P v where v is the flow speed and (5/2) P is the specific enthalpy of the gas. Balancing the flows gives: P ≈2  C  T^7/2/5  R_s  u_s where the crude approximations ∂/∂ z ≈ R_s and v ≈ u_s were made. Equating Eq. <ref> and <ref> one gets the expression for the time evolution of the temperature in the hot interior: T ∼ 1.0 × 10^6 ( L_w/10^36 erg/s)^8/35( n_0/ cm^-3)^2/35( t/ Myr)^-6/35 K and that for the hydrogen density (as P ∼ 2.3 n k T): n ∼ 1.3 × 10^-2( L_w/10^36 erg/s)^6/35( n_0/ cm^-3)^19/35( t/ Myr)^-22/35 cm^-3 It should be noted that the contribution from evaporated matter to the total mass inside the bubble is largely dominant when compared to the mass injected by the stellar wind Ṁ_w t. This can be easily seen by computing the density one would expect if only shocked wind material were present in the cavity. Such a density would be: n ∼Ṁ_w t/4 π/3 R_s^3∼ 4 × 10^-4( Ṁ_w/10^-6 M_⊙/ yr) ( L_w/10^36 erg/s)^-3/5( n_0/ cm^-3)^3/5( t/ Myr)^-4/5 cm^-3 which is much smaller than the value provided by Eq. <ref>. §.§.§ The wind termination shock Once the internal structure of the bubble has been determined, the only missing piece of information is the evolution in time of the WTS. An estimate of the position of the shock can be obtained by equating the ram pressure of the wind, ϱ_w u_w^2, to the thermal pressure inside the bubble, provided by Eq. <ref>. By making use of Eq. <ref>, this gives <cit.>: R_w ∼ 3.5 ( L_w/10^36 erg/s)^3/10( n_0/ cm^-3)^-3/10( u_w/3000  km/s)^-1/2( t/ Myr)^2/5 pc This implies that the WTS expands at a rate which is slower than that of the forward shock, and when the system is well into the snowplow phase the condition R_w ≪ R_s is always satisfied. The main results obtained in this Section are summarised in Fig. <ref>, where the evolution in time of the main physical quantities defining an interstellar bubble has been plotted. § INTERSTELLAR BUBBLE INFLATED BY A CLUSTER OF MASSIVE STARS What happens when a bubble is not inflated by a single star, but rather by a group of them, bundled in a star cluster? This situation is indeed very relevant, as most massive stars form in groups or clusters, as the result of the gravitational collapse of dense molecular clouds <cit.>. Their short lifetime, combined with a relatively low velocity dispersion, explains why very massive stars are often found in associations. This is because they explode as supernovae before having the time to move away from the site of their formation. This fact has a very important implication: all the massive stars belonging to a given cluster deposit large amounts of kinetic energy (in form of wind or supernova ejecta) within a small volume. Most of the energy is deposited by cluster stars of mass ≳ 10  M_⊙. These stars emits powerful winds and eventually explode as supernovae. They are also characterised by a short lifetime τ_*, which correlates with the initial stellar mass M_*, as derived from the stellar evolution model shown in the left panel of Fig. <ref>. 
It can be seen from the plot that the star lifetime is a decreasing function of its mass, and spans from a few tens of Myr for stars of ≈ 10 M_⊙, down to a few Myr for the most massive stars of mass ≈ 100-200 M_⊙. It follows that, during the first few Myr of the life of a star cluster, stellar winds are the only relevant sources of kinetic energy in the surrounding ambient medium. Stellar evolution models also provide an estimate of the wind power throughout the star's life <cit.>. Massive stars spend most of their life on the main sequence, and move to the red supergiant phase at the end of their lives, or to the Wolf-Rayet phase if their mass is large enough (M_* ≳ 20 M_⊙). Main sequence and Wolf-Rayet winds provide the largest contributions to the total output of kinetic energy. In particular, the Wolf-Rayet phase lasts for a quite short time, of the order of a few times 10^5 yr, but winds of Wolf-Rayet stars are much more powerful than the main sequence ones, and are likely to dominate the total wind-related kinetic energy output from a star. The wind power as a function of the initial mass of the star is shown in the right panel of Fig. <ref> for both the main sequence and the Wolf-Rayet phase <cit.>. For definiteness, consider a cluster composed of N_* massive stars with masses in the range 8-150 M_⊙. The distribution of stellar masses at formation dn_*/ dM_* is called the initial mass function and has been constrained from observations <cit.>. It is well described by a power law dn_*/ dM_* = A  M_*^-α with slope in the range α∼ 2.3-2.7. The initial mass function can be sampled in order to simulate the masses of all the stars in a cluster. Then, using the information from Fig. <ref>, it is possible to evaluate the cumulative mechanical power injected by all stellar winds in the cluster. This was done in <cit.>, where it was assumed α = 2.3 and that stars with masses larger than 20 M_⊙ at the end of their life go through a Wolf-Rayet phase lasting 320 kyr. Results are shown in Fig. <ref> with red and blue dot-dashed lines referring to clusters containing N_* = 500 and 100 massive stars, respectively. The power injected by winds stays roughly constant for the first few Myr of the life of the cluster, and then drops quite quickly as the most massive stars explode as supernovae. The average power of the winds of massive stars over the entire lifetime of the cluster (∼ 35 Myr) is of the order of ≲ 10^35 erg/s/star. Once stars begin to explode, the injection of mechanical energy is dominated by supernova explosions. The solid curves in Fig. <ref> represent the total (winds plus supernovae) power in the cluster, and have been computed assuming that each supernova releases 10^51 erg of mechanical energy over a relaxation time of about 1 Myr. The curves show that the power injection from supernovae stays roughly constant for a few tens of Myr, which corresponds to the explosion time of the lightest stars (M_* ≲ 10 M_⊙). The average power of supernovae over the cluster lifetime is ≲ 10^36 erg/s/star. Therefore, the total power (winds plus supernovae) is ∼ 10^36 erg/s/star, with winds contributing at the 10% level <cit.>. Despite significant fluctuations, the total average power injected by stars stays remarkably constant over a few tens of Myr for massive clusters (more than ∼100 massive stars). Its value is indicated with dashed lines in Fig. <ref>, and can be written as: P_tot = ⟨ L_w ⟩ N_* ∼ 3 × 10^51( N_*/100) erg/Myr and can be used to estimate the expansion rate of a bubble inflated by a star cluster. 
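As an illustration of the estimate above, the following Python sketch draws stellar masses from a power-law initial mass function by inverse-transform sampling and evaluates the cluster's time-averaged mechanical power using the per-star averages quoted in the text (≈ 10^35 erg/s for winds and ≈ 10^36 erg/s for supernovae). The flat per-star powers are an assumption made here for simplicity, rather than values read off the stellar-evolution tracks of Fig. <ref>, so the numbers are only indicative.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_imf(n_stars, m_min=8.0, m_max=150.0, alpha=2.3):
    """Inverse-transform sampling of dn/dM proportional to M^-alpha on [m_min, m_max]."""
    u = rng.random(n_stars)
    a = 1.0 - alpha
    return (m_min ** a + u * (m_max ** a - m_min ** a)) ** (1.0 / a)

N_star = 100
masses = sample_imf(N_star)                 # stellar masses in solar units

# time-averaged per-star mechanical powers quoted in the text (order of magnitude)
L_wind_avg = 1.0e35                         # erg/s, stellar winds
L_sn_avg = 1.0e36                           # erg/s, supernova explosions

P_tot = N_star * (L_wind_avg + L_sn_avg)    # erg/s
Myr = 3.15e13                               # seconds in a megayear

print(f"Wolf-Rayet progenitors (M > 20 Msun): {np.sum(masses > 20.0)} of {N_star}")
print(f"P_tot ~ {P_tot:.1e} erg/s ~ {P_tot * Myr:.1e} erg/Myr")   # ~3e51 erg/Myr for N_* = 100
```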
§.§ Expansion rate of the forward shock The expansion rate of the forward shock of a bubble inflated by a massive star cluster can be derived exactly as for the case of a single stellar wind, substituting in Eq. <ref> L_w with P_tot. The forward shock radius and velocity read <cit.>: R_s ∼ 2.6 × 10^2 ( ηN_*/100)^1/5( n_0/ cm^-3)^-1/5( t/10  Myr)^3/5 pc u_s ∼ 15 ( ηN_*/100)^1/5( n_0/ cm^-3)^-1/5( t/10  Myr)^-2/5 km/s where η is a correction factor that can be derived from more accurate studies (e.g. a better description of radiative losses, or of the interface between the shell and the interior, etc. Such a parameter can be estimated thanks to numerical simulations of interstellar bubbles <cit.> or, more pragmatically, from observations <cit.>. The latter method gives, with a quite large uncertainty, η≈ 0.22 <cit.>. Also the density and temperature inside the bubble follow from the same procedure used to derive Eq. <ref> and <ref>, and are equal to <cit.>: T ∼ 2.0 × 10^6 ( ηN_*/100)^8/35( n_0/ cm^-3)^2/35( t/10  Myr)^-6/35 K n ∼ 7.0 × 10^-3( ηN_*/100)^6/35( n_0/ cm^-3)^19/35( t/10  Myr)^-22/35 cm^-3 The time evolution of the radius and velocity of the forward shock are shown in the top panel of Fig. <ref>. As for in Fig. <ref>, curves are plotted in the range of times spanning from the end of the adiabatic phase to the beginning of the pressure-confined one (M∼ 1). The radius of the bubble becomes larger than the half thickness of the Galactic disk (∼ 100 pc, indicated as a dashed line in the figure) before entering the pressure-confined phase. When that happens, the bubble becomes more and more elongated in a direction perpendicular to the disk, as it is easier to expand in an ambient medium of lower density. Eventually, the bubble breaks out in the Galactic halo, creating collimated structures called chimneys, through which matter and energy are transported to the halo <cit.>. §.§ The wind termination shock: compact and loose clusters The expansion rate of the forward shock and the properties of the gas in the bubble (region ii) have been derived above following exactly the same procedure adopted for the case of a bubble inflated by a single stellar wind. On the other hand, this cannot be done for the innermost region, i.e. that contained within the WTS (region i). The reason for that is that star clusters are not point-like objects, and therefore the mechanical energy is injected by stellar winds in a spatially extended region. This scenario was investigated in <cit.> and will be briefly summarised here. Consider a cluster composed of N_* massive stars distributed homogeneously over a spherical region of size R_c. Typical values for R_c are of the order of few parsecs <cit.>. For simplicity, take stars to be all identical, each blowing a wind of mass loss rate Ṁ_w and injecting mechanical energy at a rate L_w. Assume also that winds from individual stars will merge to form a collective outflow of matter (a situation where this is not the case will be described below). Then, the total rate of injection of matter and mechanical energy are Ṁ_tot = N_* Ṁ_w and P_tot = N_* L_w. For R ≫ R_c, the cluster can indeed be considered as a point source of mass and energy, and therefore the stationary solution given by Eq. <ref> must be recovered, with the terminal velocity given by u_w = 2 P_tot/Ṁ_tot. This implies that the position of the WTS can be computed exactly as done in Eq. 
<ref>, to give: R_w ∼ 35 ( ηN_*/100)^3/10( n_0/ cm^-3)^-3/10( u_w/3000  km/s)^-1/2( t/10  Myr)^2/5 pc which has been plotted in the top panel of Fig. <ref>, together with the WTS velocity: Ṙ_w ∼ 1.4 ( ηN_*/100)^3/10( n_0/ cm^-3)^-1/5( u_w/3000  km/s)^-1/2( t/10  Myr)^-3/5 km/s On the other hand, if energy is injected in an extended and roughly spherical region of radius R_c, symmetry imposes that the fluid velocity in the centre of the star cluster (R = 0) must vanish. Therefore, the fluid has to accelerate from a velocity u = 0 in R = 0 to u = u_w for R ≫ R_c. This is possible only if the gas pressure does not vanish (P_w 0), but rather decreases towards larger radii, so that the gas is pushed outward by the ∇ P_w force. It follows that the sound speed in the wind c_s,w is also non vanishing, and therefore the Mach number of the wind termination shock will remain finite. It can be shown (following a somewhat lengthy calculation that can be found here <cit.>), that at large enough radii the shock Mach number scales as: M_w = u_w-Ṙ_w/c_s,w∼u_w/c_s,w ∼ 13 ( ηN_*/100)^1/5( n_0/ cm^-3)^-3/10( u_w/3000  km/s)^-1/3( R_c/3  pc)^-2/3( t/10  Myr)^4/15 Note that for R_c → 0 the Mach number diverges, and this justifies why the WTS of an individual (point-like) star is invariably assumed to be very strong. In fact, for an isolated star, R_c would correspond to the region of wind launching, which is very small, being of the order of few stellar radii <cit.>. The Mach number of the WTS is shown in the bottom panel of Fig. <ref>, together with the Mach number of the forward shock. Remarkably, they follow an opposite trend: the Mach number of the forward shock gradually decrease, while that of the WTS increases with time. In particular, the WTS is weak (Mach number of the order of a few) for a quite long time, and, as discussed in the following, this might have an impact on particle acceleration. To conclude, a discussion on the actual formation of the WTS is on order. The assumption of a spatially extended injection of mechanical energy introduces a scale length into the problem, i.e. the radius of the star cluster R_c. In deriving Eq. <ref> it was implicitly assumed that the shock does form around the cluster, but a necessary condition for that to happen is R_w > R_c. Star clusters can then be classified as compact when R_c ≪ R_w or loose in the opposite case R_c ≫ R_w. The formation (or non formation) of the collective WTS in the former (latter) case has been confirmed by means of hydrodynamical simulation <cit.>. In loosely bound clusters, each star may form its own, strong, WTS, and no large scale collective shock appears. §.§ Final remarks on interstellar bubbles inflated by star clusters Fig. <ref>, taken from <cit.>, shows the density profile for a compact cluster. Energy is injected in an extended region (driving source region) having the size of the star cluster. Such region is characterised by a mildly varying density. Moving outwards one finds the wind region (ϱ∝ R^-2), the bubble containing the shocked wind material (roughly constant density), and the dense shell where the shocked ISM is accumulated. As seen above, the density profile of a loose cluster will differ in the innermost region, as each star will form its own WTS, and a collective shock will not form around the cluster <cit.>. Fig. <ref> provides an appropriate description for the density profile around a star cluster during the first few megayears of its lifetime only. 
After this time, supernovae will begin to explode, and this will have a dramatic impact on the density profile. In fact, as seen in Fig. <ref>, for rich clusters the average mechanical power injected by stellar winds and supernova explosions stays roughly constant throughout the entire cluster lifetime. This implies that the evolution of the shell (forward shock) radius versus time does not change when supernovae overcome stellar winds as sources of energy. In fact, numerical simulations showed that the R_s ∝ t^3/5 scaling still provides a good descriptions of the evolution of interstellar bubbles even in the case of poor clusters, where only few supernovae explode (e.g. <cit.>). On the other hand, the internal structure of the bubble is different before and after the onset of stellar explosions. This is illustrated by the cartoon in Fig. <ref>, where a sketch of the structure of a young and compact cluster is given on the left, while an older cluster is represented on the right. As it will be discussed extensively in the following, the acceleration of particles in young clusters is likely to take place at the collective WTS (or at the individual WTSs for loose clusters), and a relatively simple (i.e., spherically symmetric, quasi-stationary) setup can be adopted to describe acceleration. This is not the case for older clusters, where acceleration is expected to take place in the turbulent bubble, whose gas is repeatedly swept by a series of SNR shocks, possibly colliding with each other and maintaining in this way an enhanced level of turbulence. In most cases, such systems are not expected to be spherically symmetric nor quasi-stationary, making the study of the acceleration mechanisms at work a very complicated issue. Finally, all the results presented in this Chapter have been derived by assuming an homogeneous ISM outside of the bubble. In fact, the ISM is a multi-phase plasma, made of cold and dense clouds surrounded by dense warm envelopes which are in turn embedded in a diffuse and hot gas that occupies most of the volume <cit.>. The forward shock of the interstellar bubble propagates then in the diffuse phase of the ISM. On the other hand, dense clouds can survive the passage of the forward shock and, once inside of the bubble they begin to evaporate, loading the system with mass. It has been shown that in this case the evolution of the forward shock scales with time as R_s ∝ t^α, with α = 7/10, which slightly differs from the canonical α = 3/5 derived above <cit.>. § STAR CLUSTERS AS PARTICLE ACCELERATORS Three classic questions in particle acceleration in astrophysical environments are (e.g. <cit.>): * What is the origin of accelerate particles? * What is the origin of the energy that the particles acquire? * Where are the acceleration sites? or, equivalently: What are the acceleration mechanisms? The first question deals with CR composition. As discussed in the Introduction, some isotopic anomalies observed in the local flux of CRs require that a small but non negligible fraction of the particles which are accelerated come from Wolf-Rayet wind material <cit.>. Data are best explained if such material is directly accelerated at the stellar WTS, and not injected in the circumstellar bubble to be then accelerated by e.g. a SNR shock <cit.>. For this reason, the acceleration of particles at WTS will be discussed in Sec. <ref> below. As seen in Sec. 
<ref>, the overall mechanical power of massive stars is dominated by supernova explosions, while stellar winds contribute roughly at the 10% level. This means that the acceleration of particles at the WTS cannot provide the necessary amount of energy to explain Galactic CRs (second question in the list above). For this reason, Sec. <ref> will be devoted to the description of the acceleration of particles in superbubbles at late times, i.e., when supernovae have already begun to explode. The acceleration mechanism is not simply diffusive acceleration at SNR shocks, but it is likely the result of the interplay of SNR shocks and plasma turbulence <cit.>. Understanding particle acceleration in superbubbles is extremely important. The reason is that most stars form in clusters, and therefore the contribution to Galactic CRs from star clusters is likely to exceed that from isolated SNRs. Somewhat surprisingly, despite this fact the standard model for CR origin relies on particle acceleration at isolated SNR shocks. What was said above also addresses question number three in the list: the particle acceleration sites in and around star clusters are most likely the WTS and the diluted region containing the shocked wind material. The forward shock might also accelerate particles, but its slow velocity (tens of km/s) does not allow particles to be accelerated to extremely high energies <cit.>. Moreover, as seen in Sec. <ref>, during most of the bubble lifetime the forward shock is radiative. As most of the energy flowing through the shock is radiated away, it is very likely that particle acceleration will be quite ineffective. The remainder of this Section will be devoted to an estimate of the maximum energy that accelerated particles can achieve in star clusters, and to some simplified calculations aimed at estimating the shape of the particle spectra emerging in these objects. Remarkably, the estimate of the maximum energy can be obtained using a very simple argument based on basic electrodynamics, while particle spectra will be obtained solving partial differential equations. §.§ The maximum energy of accelerated particles: the Hillas criterion All acceleration mechanisms taking place in astrophysical environments rest on the interaction between charged particles and electromagnetic fields. In order to be accelerated, a particle carrying an electric charge e must be subject to a force having a non-negligible component along the particle direction of motion, defined by its velocity v⃗. This rules out static magnetic fields B⃗ as particle accelerators, as they exert a force F⃗ = (e/c) v⃗×B⃗ orthogonal to the velocity of the particle. On the other hand, a static electric field E⃗ will accelerate a charged particle via the electrostatic force F⃗ = e E⃗. Consider now a region of space of size L where a uniform electric field is present. A particle crossing the region will gain an energy: Δ E = e E L which can be very large if an intense electric field occupies a large region of space. Unfortunately, astrophysical plasmas are characterised by very large values of the electric conductivity. This means that any charge excess in a plasma (let's say of charge density ϱ_e) will be rapidly neutralised by the motion of charges of opposite sign in the plasma, making it very difficult to maintain a static, strong, and large scale electric field, as ∇·E⃗ = 4 πϱ_e ∼ 0.
In turbulent plasmas, time varying magnetic fields induce electric fields, as stated by Faraday's law: ∇×E⃗ = - 1/c ∂B⃗/∂ t To obtain an order of magnitude estimate of the intensity of the induced electric fields, the equation above can be simplified by setting ∇×→ 1/L and ∂/∂ t → 1/T, where L and T are the characteristic length and time scales over which electromagnetic fields vary. Introducing also the characteristic velocity of motions in the plasma, which has to be of the order U = L/T, one gets E ∼ (U/c) B. Substituting into Eq. <ref> and setting Δ E = E_max gives: E_max ∼ ( e/c) B U L which is universally known as the Hillas criterion <cit.> and represents the maximum energy that a particle can attain in an accelerator of size L, characterised by plasma motions of velocity U, and containing a magnetised plasma of magnetic field strength B. The implicit assumption made to derive the Hillas criterion is that particles do not suffer energy losses, and therefore the value of E_max has to be considered the most optimistic one (for a treatment of energy losses in this context see <cit.>). The Hillas criterion is widely used because of its predictive power and its simplicity. It provides an estimate of the maximum particle energy allowed by electrodynamics, without the need to specify the nature of the acceleration mechanism! Unfortunately, while the size L and the characteristic plasma velocity U can be measured for a large number of astrophysical objects, the magnetic field strength B is very often unknown as it is difficult to constrain it from observations <cit.>. It is therefore convenient to rewrite Eq. <ref> as: B ∼ 3 × 10^2 ( E_max/ PeV) ( U/1000  km/s)^-1( L/ pc)^-1 μ G which defines the minimum magnetic field strength necessary to accelerate CR protons up to an energy E_max. The expression above can be applied, for example, to the collective WTS of a very compact (point like) star cluster. In this case, the characteristic length would be the radius of the WTS, while the characteristic plasma velocity would be the wind terminal velocity. Setting (see Eq. <ref> and/or Fig. <ref>) U = u_w ≈ 3000 km/s and L = R_w ≈ 10 pc one gets that, in order to accelerate protons up to the energy of the CR knee (about 4 PeV), the magnetic field strength should be at least of the order of B ≈ 40  μG. Such a value of the magnetic field corresponds to a magnetic pressure of P_m = B^2/8 π, which can be compared to the shock ram pressure P_r = ϱ_w u_w^2. Making use of Eqns. <ref> and <ref>, the ratio between these two pressures reads: P_m/P_r = 1/4 ( c/e)^2 E_max^2/( P_tot u_w) ∼ 1.3 ( E_max/4  PeV)^2 ( P_tot/3 × 10^51 erg/Myr)^-1( u_w/3000  km/s)^-1 In order to conserve energy, the magnetic pressure should not exceed the ram pressure, and in fact a realistic condition would read P_m/P_r ≪ 1. This implies that acceleration at the WTS up to the particle energies that characterise the knee is possible only for very powerful clusters, having mechanical luminosities significantly exceeding ∼ 3 × 10^51 erg/Myr ∼ 10^38 erg/s. Remarkably, the very same result was obtained from a sophisticated study of particle acceleration at the WTS <cit.>, and this demonstrates that the Hillas criterion is a very powerful tool. The Hillas criterion can also be used to constrain the maximum energy of particles accelerated in the turbulent and rarefied interstellar bubble <cit.>. In this case, the size of the accelerator can be taken to be equal to the radius of the bubble, L = R_s.
The value of the parameter U may be taken to be equal to the velocity u_t of turbulent motions inside the bubble. The energy density of the turbulent gas is ϱ u_t^2, where ϱ is the gas density inside the bubble, while that of the magnetic field is B^2/8 π. To conserve energy, both these energy densities will have to be at most of the order of the thermal energy density (3/2) n k T, as estimated from Eqns. <ref> and <ref>. From these conditions, and making use of Eq. <ref>, an upper limit on the maximum proton energy that can be achieved in a superbubble can be derived. It reads: E_max ≪ 1  ( η P_tot/3 × 10^51 erg/Myr)^18/35( n_0/ cm^-3)^9/70( t/10  Myr)^4/35  PeV and shows that it is highly unlikely that turbulent superbubbles are able to accelerate protons beyond PeV energies. §.§ Particle acceleration at the wind termination shock The spectrum of energetic particles accelerated at a spherical WTS can be derived by solving the transport equation for CRs, first derived in <cit.> (see also Blasi's lecture notes, this volume). The transport equation describes the evolution in time t of the isotropic part of the particle distribution function f(t,p,R⃗), which is also a function of the particle momentum p and of the spatial coordinate R⃗. In this notation, the number density of energetic particles at a given time and place is n = 4 π∫ dp  p^2 f. The steady state (time independent) solution of the problem is obtained solving the equation: u ∂ f/∂ R = 1/R^2 ∂/∂ R( R^2 D ∂ f/∂ R) + p/3 R^2 d( u R^2 )/ d R ∂ f/∂ p where spherical symmetry has been assumed. Here, u(R) represents the velocity profile of the gas and D(R,p) the diffusion coefficient of particles of momentum p. The term on the left hand side describes the advection of particles with the flow, while the two terms on the right hand side account for the spatial diffusion of energetic particles in the turbulent ambient magnetic field and for the particle acceleration/deceleration induced by fluid compression/decompression. Radiative energy losses are ignored (and for CR protons this is very often a safe assumption). In general, Eq. <ref> is solved numerically (e.g. through a finite differences scheme), as an exact analytic solution is known only for the (quite unphysical, unfortunately) case of a diffusion coefficient which is independent of particle momentum <cit.>. However, approximate analytic solutions can still be obtained in the limit of both large and small particle momenta. This can be seen by comparing the advection and diffusion terms in the equation, i.e., the terms depending on the spatial variation of CRs in the system. In general, u, D, and f may all vary with position. However, in order to obtain an order of magnitude estimate the following substitutions can be made: u(R) → U, D(R) → κ, ∂/∂ R → 1/L, where U, κ, and L represent some characteristic values for the fluid velocity, the diffusion coefficient, and the spatial scale over which significant variations of the various physical quantities occur, respectively. Once these substitutions are applied, the ratio between the advection and the diffusion term in Eq. <ref> is <cit.>: u ∂ f/∂ R / [ 1/R^2 ∂/∂ R( R^2 D ∂ f/∂ R) ] ⟶ U L/κ As the CR diffusion coefficient increases with particle momentum (see Blasi's lecture, this volume), a low and a high energy regime can be defined according to the conditions U L/κ ≫ 1 and U L/κ ≪ 1, respectively. In the low energy regime, then, advection dominates over diffusion, while the opposite is true in the high energy domain.
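To get a feeling for where the boundary U L/κ = 1 falls in energy, one can set U = u_w, L = R_w and κ = D_w, and assume Bohm diffusion, D_w = R_L c/3, for relativistic protons. The short sketch below solves u_w R_w/D_w(E) = 1 for the energy; the magnetic field strength B_w = 10 μG and the other numbers are purely illustrative assumptions, not values taken from the text:

# Energy at which u_w R_w / D_w(E) = 1 for Bohm diffusion, D_Bohm = R_L c / 3,
# with R_L = E/(e B) for relativistic protons. All parameter values below are
# illustrative assumptions (in particular B_w = 10 microgauss).

e_esu  = 4.803e-10   # proton charge [esu]
c_cgs  = 2.998e10    # speed of light [cm/s]
pc_cm  = 3.086e18    # parsec [cm]
eV_erg = 1.602e-12   # eV in erg

def transition_energy_eV(B_muG=10.0, uw_kms=3000.0, Rw_pc=10.0):
    """Energy (in eV) at which D_Bohm(E) equals u_w * R_w."""
    B, uw, Rw = B_muG * 1e-6, uw_kms * 1e5, Rw_pc * pc_cm
    # D_Bohm = (1/3) (E / (e B)) c  ->  E = 3 e B u_w R_w / c
    return 3.0 * e_esu * B * uw * Rw / c_cgs / eV_erg

print(f"transition energy ≈ {transition_energy_eV() / 1e15:.1f} PeV")

For these (assumed) numbers the transition falls at a few PeV: below this energy particles are in the advection-dominated (low energy) regime, above it in the diffusion-dominated (high energy) regime.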
Approximate analytic solutions have been derived for both the low <cit.> and high <cit.> energy limits. What are the appropriate values for the physical quantities U, κ, and L? For definiteness of discussion, consider a setup of the problem where the diffusion coefficient downstream of the WTS is so small that for all practical purposes its value can be considered to be very close to 0. Such an extreme assumption can be justified by recalling that the magnetised plasma downstream of a shock is expected to be highly turbulent <cit.>, and that in a highly turbulent medium particles are scattered very effectively and therefore diffusion is strongly suppressed. It follows that accelerated particles downstream of the shock will simply follow the fluid flow and be advected outwards up to the edge of the bubble, at R_s, where they will freely escape into the ISM (as the diffusion coefficient there is much larger). If a velocity profile scaling as u ∝ R^-2 is adopted in the WTS downstream region (as done, e.g., in <cit.> and <cit.>) the transport equation (Eq. <ref>) reduces to a description of pure advection, u   df/ dR=0. The CR particle distribution function is then spatially homogeneous within the bubble (R_w < R < R_s) and equal to f_w ≡ f(R = R_w), regardless of the position of the forward shock R_s. It follows that L = R_s is not a good choice, as R_s does not influence f_w at all. The other two spatial scales in the problem are the radius of the star cluster R_c and that of the WTS R_w. However, if one makes the further simplifying assumption that mechanical energy is injected in a very small (almost pointlike, i.e. R_c ≪ R_w) region, then the only possible choice is to set L = R_w. Assuming a point-like source of energy injection also implies that the wind velocity is constant for any R < R_w (see Sec. <ref>) and therefore U = u_w. In order to choose the value of κ, notice that the problem simplifies significantly under the assumption that D → 0 as R → 0 <cit.>. If this is the case, outward advection dominates close to R = 0, implying that the boundary condition for the CR particle distribution function must be f(R = 0) = 0. At this point, to ease computations, a linear scaling of the diffusion coefficient with the radial coordinate is often assumed (see e.g. <cit.> or <cit.>): D(R,p) = D_w(p) ( R/R_w) where D_w is the CR diffusion coefficient immediately upstream of the WTS. After introducing this parameterisation, it seems convenient to set κ = D_w, so that the boundary between the low and high energy regimes is set by the condition u_w R_w/D_w= 1 (see Eq. <ref>). §.§.§ The low energy limit The low energy limit is defined by the condition u_w R_w/D_w ≫ 1, which can be rewritten as: l_d = D_w/u_w ≪ R_w where l_d is called the diffusion length. Ignoring for a moment the spatial dependence of D_w, the quantity l_d represents the diffusion length of particles ahead (upstream) of the shock. In other words, accelerated particles are not able to reach distances from the shock significantly exceeding l_d, as in that case outward advection dominates over spatial diffusion. This can be easily proven by recalling that in a time t advection would displace particles by an amount l_a = u_w t, while diffusion would spread particles over a region of size l_d ∼√(D_w t). The two displacements are equal for a characteristic time t_d, which gives l_a ≡ l_d = D_w/u_w. For times longer than t_d advection dominates over diffusion and keeps accelerated particles within a diffusion length from the shock surface.
Thus, the condition expressed by Eq. <ref> means that particle acceleration happens in a region upstream of the shock whose extension is much smaller than the WTS radius. Therefore, the sphericity of the shock can be ignored when studying CR acceleration at low enough particle energies. Diffusive acceleration at plane shocks has been discussed by Caprioli (this volume). The spectrum of particles accelerated at a plane shock can be obtained by solving the CR transport equation (the analogue of Eq. <ref> in one dimension and cartesian coordinates). It is a power law in particle momentum f(p) ∝ p^-α where the slope α depends on the shock Mach number M or on the shock compression factor r as: α = 3 r/(r-1) = 4  M^2/( M^2 - 1) . These dependences are shown in Fig. <ref>. It is interesting to remark that, as seen in Sec. <ref>, the Mach number of the WTS of a compact star cluster is not very large (see Eq. <ref> and Fig. <ref>), and therefore the spectrum of accelerated particles is expected to be slightly steeper than 4. For example, a slope α = 4.1 (4.4) would correspond to a Mach number M = 6.4 (3.3). Slopes slightly larger than 4 are those needed to explain Galactic cosmic rays (see Introduction), but one should remember that WTSs can only provide a minor contribution to the observed intensity of CRs. Therefore such agreement between predictions and expectations should probably be considered a coincidence. §.§.§ The high energy limit In the high energy limit the advection term can be neglected, as u_w R_w/D_w ≪ 1, and the transport equation reduces to: 1/R^2 ∂/∂ R( R^2 D ∂ f/∂ R) + p/3 R^2 d( u R^2 )/ d R ∂ f/∂ p = 0 Integrating between R_w^- = R_w-ϵ and R_w^+ = R_w + ϵ, where ϵ is arbitrarily small, one gets: [ R^2 D ∂ f/∂ R]_R_w^- + R_w^2 u_w (r-1)/(3 r) p ∂ f_w/∂ p = 0 where we used D(R_w^+) = 0 and the fact that the fluid velocity immediately upstream (downstream) of the shock is u_w (u_w/r), r being the shock compression factor. The solution of Eq. <ref> is obtained by setting f(R,p) = f_w(p) f_r(R) and noticing that combining Eqns. <ref> and <ref> gives f_r(R) = (R/R_w)^γ, where γ will be determined later. Eqns. <ref> and <ref> can now be rewritten as: p/f_w ∂ f_w/∂ p = - 3/2 γ (2 + γ) ( D_w/u_w R_w) and p/f_w ∂ f_w/∂ p = - αγ( D_w/u_w R_w) , respectively. Combining them one gets γ = (2 α - 6)/3. Finally, if CR diffusion proceeds at the Bohm rate, D_w ∝ p, a simple integration gives the high energy behaviour of the spectrum of particles accelerated at the WTS: f_w(p) ∝ exp[ - (2 α - 6) α/3 ( D_w/u_w R_w) ] This asymptotic solution indicates that the CR spectrum is exponentially suppressed at large energies. A very rough description of the CR spectrum at the WTS in the entire energy domain can be obtained combining the low and high energy asymptotic solutions[This solution is not very accurate for particle energies marking the transition between a power law and an exponential cutoff spectral behaviour. A numerical solution of the problem can be found in <cit.>, showing that small bumps may appear in the spectrum just before the cutoff.]: f_w(p) ∝ p^-α exp[ - (2 α - 6) α/3 ( D_w/u_w R_w) ] . Note that, expressing the exponential cutoff in terms of the particle energy, f_w ∝ exp[-E/E_max], and making use of the definition of Bohm diffusion: D_w = 1/3 R_L c and of the Larmor radius of a proton of charge e gyrating around a magnetic field of strength B_w: R_L = pc/(eB_w) one gets: E_max = 9/[2 (α -3) α] ( e/c) u_w R_w B_w which is equivalent to the Hillas criterion derived in Sec. <ref> (see Eq. <ref>).
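These expressions are easy to check numerically (a minimal sketch evaluating the slope-Mach number relation and the numerical factor entering E_max; the Mach numbers are just illustrative values):

# Test-particle DSA slope as a function of the shock Mach number, together with
# the numerical factor 9/[2 (alpha - 3) alpha] entering E_max, evaluated from the
# expressions derived above.

def slope(M):
    """alpha = 4 M^2 / (M^2 - 1), valid for M > 1."""
    return 4.0 * M**2 / (M**2 - 1.0)

def emax_prefactor(alpha):
    """Numerical factor 9 / [2 (alpha - 3) alpha] in E_max."""
    return 9.0 / (2.0 * (alpha - 3.0) * alpha)

for M in (3.3, 6.4, 10.0, 100.0):
    a = slope(M)
    print(f"M = {M:6.1f}  ->  alpha = {a:.2f},  9/[2(alpha-3)alpha] = {emax_prefactor(a):.2f}")

A strong shock (M ≫ 1) gives α → 4 and a prefactor close to unity, so that E_max essentially reduces to the Hillas estimate of Sec. <ref>, while weaker shocks give steeper spectra and somewhat lower cutoff energies.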
For values of the spectral slope in the range α = 4 ... 5 the function 9/[2 (α - 3) α ] varies from ∼ 1.1 to ∼ 0.45. §.§ Particle acceleration in superbubbles Studying the acceleration of particles in turbulent superbubbles is a very difficult task. Acceleration of CRs may take place at WTSs and at SNR shocks. Occasionally, shock-shock collisions may happen. Pre-existing CRs can be reaccelerated due to second order Fermi acceleration in the highly turbulent environment. The level of turbulence might differ in the core of the bubble, where mechanical energy is injected, and in its outskirts. Accelerated particles can diffusively escape from the bubble, or can be advected into the halo when bubbles break out into the halo and form chimneys. A description of sophisticated theoretical models attempting to tackle this very complex problem goes beyond the scope of this Chapter, and the interested reader is referred to the following publications. Models for CR acceleration at multiple shock waves can be found here <cit.>, while models including (or trying to include) all the other physical ingredients mentioned above can be found here <cit.>. Unfortunately, testing these models is not trivial, as observations of superbubbles are quite sparse. Probably, the two most relevant signatures of particle acceleration in superbubbles are intermittency and structured particle spectra (contrary to the featureless power laws expected when diffusive shock acceleration operates). Intermittency is a consequence of the fact that supernova explosions are the main source of mechanical energy in a superbubble. Assuming that all stars in a cluster are born together at time t = 0, the last supernova will explode in the cluster at a time equal to the lifetime of a star of ≈ 10 M_⊙, i.e. τ_* ≈ 35 Myr (see Fig. <ref>). If the cluster contains N_* massive stars that will end their lives as supernovae, then a very rough estimate of the typical time between two consecutive explosions is Δτ_* ≈ τ_*/N_*. This can be compared with the CR diffusive escape time from the bubble, which is τ_esc ≈ R_s^2/D, where R_s is the radius of the forward shock and D the energy dependent CR diffusion coefficient. If τ_esc ≪ Δτ_*, CRs will be able to escape the system before the shock generated by the next supernova injects new energetic particles. Therefore, the bubble will empty of CRs between explosions, and this intermittent behaviour will also be reflected in the emission (for example in gamma rays) resulting from the interactions between the accelerated particles and the ambient gas. Remarkably, this might explain why some superbubbles have been detected in gamma rays and some others not, despite their similarity (see discussion and references in <cit.>). The total CR energy stored in a bubble as a function of its age is shown in Fig. <ref>. The left panel refers to the total energy, while the right one to the energy density. The latter decreases with time as the bubble volume increases. Notice that, while for very rich clusters, hosting more than 100 massive stars, the total CR energy stays constant, for smaller clusters large fluctuations appear. This is indeed expected, as a smaller number of stars implies a longer time between consecutive explosions, Δτ_*. Moreover, fluctuations are more pronounced if the bubble is less turbulent. This can be seen by comparing the blue and yellow lines in Fig.
<ref>, which have been computed assuming that the fraction of the mechanical energy injected in the system that is converted into turbulent motions is η_T = 30% and 1%, respectively. This is a consequence of the fact that CR particles are confined more effectively (i.e. their diffusion coefficient D is smaller) if the level of turbulence is large. A large diffusion coefficient corresponds to a short escape time from the system and therefore implies more intermittency. As the diffusion coefficient is an energy dependent quantity, the level of intermittency will also depend on particle energy. This is illustrated in Fig. <ref>, where the spectra of CRs contained within a superbubble are plotted for different times, different numbers of massive stars in the cluster, and different levels of turbulence (see figure caption). The figure shows that the amount of lower energy particles stored in a superbubble does not fluctuate much. Also in this case, the reason is that low energy CRs are characterised by a smaller diffusion coefficient D, and are better confined inside bubbles. On the contrary, very large fluctuations in time are observed at large particle energies (large diffusion coefficients). Another important result emerging from Fig. <ref> is that particle spectra are very structured, and do not resemble at all the featureless power laws which are a signature of diffusive shock acceleration. This is due to the fact that the acceleration proceeds in a different way depending on the energy of the particles. At low energies, second order Fermi turbulent reacceleration and Coulomb energy losses dominate, and a very pronounced bump appears in the spectrum at trans-relativistic energies. On the contrary, at high energies particles are loss-free, and the spectral shape is determined by an interplay of diffusive acceleration at SNR shocks and at the WTS and diffusive escape from the system. To conclude, a large variety of spectra could be produced inside superbubbles, and this constitutes the most important prediction to be tested with future observations of these objects. § OPEN PROBLEMS AND CONCLUSIONS The need to explain anomalies in the composition of CRs (especially the excess in the ^22Ne/^20Ne ratio <cit.>) led to the suggestion that the WTSs of Wolf-Rayet stars might act as powerful particle accelerators <cit.>. However, it was immediately recognised that stellar winds could provide only a fraction of the mechanical energy needed to explain the bulk of Galactic CRs, and such early estimates have been confirmed by recent studies <cit.>. Then, in order to explain both the bulk of CRs and the isotopic anomalies, a scenario emerged where (at least) two classes of sources accelerate the CRs observed locally. Supernova explosions provide the bulk of the energy <cit.>, with WTSs adding a small but non-negligible contribution (e.g. <cit.>). Massive stars, then, may provide the energy of all Galactic CRs. As massive stars are rarely isolated, star clusters become natural candidate sources of CRs. The interest towards this class of objects was recently revived by the detection of gamma-ray emission from a number of them, or from their immediate vicinity <cit.>. Particle acceleration in star clusters is likely to proceed in a different way for young and old systems. In clusters younger than a few million years, stellar winds are the main source of mechanical energy, and diffusive acceleration at the WTS will most likely produce power law spectra of CRs.
For the most massive clusters, the acceleration mechanism might be fast enough to accelerate protons up to the PeV domain (e.g. <cit.>). On the other hand, in older clusters the main input of energy is provided by supernova explosions. In this case, the acceleration mechanism is not well understood, and is probably defined by an interplay between diffusive shock acceleration and reacceleration of particles in the turbulent plasma that fills the bubble <cit.>. The main difficulty in testing acceleration models in star clusters was connected to the scarcity of high energy observations of these objects. However, the number of detections in gamma rays has increased steadily in the past few years, and the advent of multi-TeV detectors of unprecedented sensitivity such as LHAASO <cit.> promises to radically impact this field of research, especially as concerns the search for CR PeVatrons. On the theoretical side, the most pressing issue is the understanding of the acceleration mechanism operating in superbubbles. To do so, a better knowledge of the plasma flow and of the magnetic field strength and structure is mandatory. In fact, recent simulations show that even the simplest case of young (no supernova explosions) and compact star clusters blowing a wind requires detailed studies, as such systems are far from the idealised spherically symmetric setup that is often assumed <cit.>. A solid understanding of the acceleration mechanism is also necessary in order to produce reliable predictions on the contribution of star clusters to the flux of Galactic CRs, and to estimate their impact on CR composition. In this respect, very recent results indicate that SNR shocks expanding in the collective wind around a compact star cluster might accelerate particles well beyond PeV energies, making SNRs inside star clusters potential sources of CRs up to the transition to extragalactic CRs <cit.>. Finally, the fact that the Solar system is located within a superbubble (the local bubble <cit.>) inflated by a star cluster formed about 14 million years ago <cit.> has very important implications. The transport of CRs in the very local ISM might be significantly affected by the magnetic field topology shaped by the inflation of the bubble, especially for low energy particles <cit.>. The low ambient gas density inside the local bubble might also induce effects on the production of CR secondaries <cit.>. Finally, the presence of nearby (in both time and space) massive stars and supernova explosions <cit.> must be taken into account when interpreting local CR data. Our entire view of CRs may be biased by our location inside a superbubble. The author acknowledges the organisers of the school (especially Carmelo Evoli) for their invitation and Thibault Vieu, Vincent Tatischeff, and Lioni-Moana Bourguinat for discussions about cosmic rays in star clusters. He also acknowledges support from Agence Nationale de la Recherche (project CRitiLISM, ANR-21-CE31-0028). CRbooks Berezinskii, V. S., Bulanov, S. V., Dogiel, V. A. Ptuskin, V. S. (Ed. Ginzburg, V. L.) Astrophysics of cosmic rays (Amsterdam: North Holland) 1990; Gaisser, T. K., Engel, R., Resconi, E. Cosmic rays and particle physics (Cambridge University Press) 2016 CRpower Strong, A. W., ApJ 722 (2010) L58 andyreview Strong, A. W., Moskalenko, I. V., Ptuskin, V. S. ARNPS 57 (2007) 285 etienne Parizot, E. Nucl Phys B 256 (2014) 197 composition Wiedenbeck, M. E. Space Sci Rev 130 (2007) 415 vincent Tatischeff, V., Raymond, J. C., Duprat, J., Gabici, S., Recchia, S.
MNRAS 508 (2021) 1321 vincentreview Tatischeff, V. Gabici, S. ARNPS 68 (2018) 377 luke Drury, L. O'C. Rep Prog Phys 46 (1983) 973 blasireview Blasi, P. A&A Rev 21 (2013) 70; Amato, E. Int J Mod Phys D 23 (2014) 1430013 lukerecent Drury, L. O'C. Astropart Phys 39 (2012) 52 drift Zirakashvili, V. N. Ptuskin, V. S. AIP Conf Proc 1085 (2008) 336; Caprioli, D. JCAP 7 (2012) 38; Bell, A. R., Matthews, J. H., Blundell, K. M. MNRAS 488 (2019) 2466 myreview Gabici, S., Evoli, C., Gaggero, D., Lipari, P., Mertsch, P., Orlando, E., Strong, A., Vittino, A. IJMPD 28 (2019) 1930022-339 schure1 Schure, K. M. Bell, A. R. MNRAS 435 (2013) 1174 schure2 Schure, K. M. Bell, A. R. MNRAS 437 (2014) 2802; Cristofari, P., Blasi, P., Amato, E. Astropart Phys 123 (2020) 102492 22Ne Binns, W. R. New Astron Rev 52 (2008) 427; ApJ 634 (2005) 351; Boschini, M. J. ApJS 250 (2020) 27 casse Cassé, M., Paul, J. A. ApJ 237 (1980) 236, ApJ 258 (1982) 860 cesarsky Cesarsky, C. J. Montmerle, T. Space Sci Rev 36 (1983) 173 higdon Higdon, J. C. Lingenfelter, R. E. ApJ 590 (2003) 822 SB McCray, R. Kafatos, M. ApJ 317 (1987) 190; Mac Low, M.-M. McCray, R. ApJ 324 (1988) 776 bykovrev Bykov, A. M. A&A Rev 22 (2014) 77 thibaultPhD Vieu, T. Superbubbles and the origin of cosmic rays (PhD Thesis, Université Paris Cité) 2022 SBtheoryLingenfelter Higdon, J. C., Lingenfelter, R. E., Ramaty, R. ApJ 509 (1998) L33; Higdon, J. C. Lingenfelter, R. E. ApJ 628 (2005) 738; Lingenfelter, R. E. Adv Space Res 62 (2018) 2750 SBtheoryBykov Bykov, A. M. Fleishman, G. D. MNRAS 255 (1992) 269; Bykov, A. M. Toptygin, I. N. Astr Lett 27 (2001) 625; Ferrand, G. Marcowith, A. A&A 510 (2010) A101 etienne1 Parizot, E., Marcowith, A., van der Swaluw, E., Bykov, A. M., Tatischeff, V. A&A 424 (2004) 747 thibault Vieu, T., Gabici, S., Tatischeff, V., Ravikularaman, S. MNRAS 512 (2022) 1275 felixmassivestars Aharonian, F. A., Yang, R., de Oña Wilhelmi, E. Nature Astronomy 3 (2019) 561 clustersgamma Sun, X.-N. A&A 639 (2020) A80; Aharonian, F. A. A&A 666 (2022) A124; Cao, Z. Nature 594 (2021) 33 weaver Castor, J., McCray, R., Weaver, R. ApJ 200 (1975) L107; Weaver, R., McCray, R., Castor, J., Shapiro, P., Moore, R. ApJ 218 (1977) 377; Ostriker, J. P. McKee, C. F. Rev Mod Phys 60 (1988) 1 winds Lamers, H. J. G. L. M. Cassinelli, J. P. Introduction to stellar winds (Cambridge University Press) 1999; Kudritzki, R.-P. Puls, J. ARA&A 38 (2000) 613; Puls, J., Vink, J. S., Najarro, F. A&A Rev 16 (2008) 209; Smith, N. ARA&A 52 (2014) 487 krumholzbook Krumholz, M. R. Star formation (World Scientific, Singapore) 2017 ISM Ferrière, K. M. Rev Mod Phys 73 (2001) 1031; Cox, D. P. ARA&A 43 (2005) 337 landau Landau, L. D. Lifschitz, E. M. Fluid mechanics (Pergamon Press) 1959 zeldovich Zeldovich, Ya. B. Raizer, Yu. P. Physics of Shock Waves and High-Temperature Hydrodynamic Phenomena (Dover Publications, Inc.) 2002 cioffi Raymond, J. C., Cox, D. P., Smith, B. W. ApJ 204 (1976) 290; Cioffi, D. F., McKee, C. F., Bertschinger, E. ApJ 334 (1988) 252 koo Koo, B.-C. McKee, C. F. ApJ 388 (1992) 93 shu Shu, F. The physics of astrophysics: gas dynamics (University Science Books) 2010 radiativeshock McKee, C. F. Hollenbach, D. J. ARA&A 18 (1980) 219 spitzer Spitzer, L. Physics of fully ionised gases (Interscience Publishers) 1962; Zel'dovich, Ya. B. Pikel'ner, S. B. JETP 29 (1969) 170; Penston, M. V. Brown, F. E. MNRAS 150 (1970) 373; Cowie, L. L. McKee, C. F. ApJ 211 (1977) 135 morlino Morlino, G., Blasi, P., Peretti, E., Cristofari, P. MNRAS 504 (2021) 6096 limongi Limongi, M. Chieffi, A. ApJ 647 (2006) 483 seo Seo, J., Kang, H., Ryu, D. JKAS 51 (2018) 37 IMF Salpeter, E. E. ApJ 121 (1955) 161; Chabrier, G. PASP 115 (2003) 763; Kroupa, P. in Planets, stars and stellar systems Vol. 5, p. 115, 2013 yadav Yadav, N., Mukherjee, D., Sharma, P., Nath, B. B.
MNRAS 465 (2017) 1720 gupta Gupta, S., Nath, B. B., Sharma, P., Eichler, D. MNRAS 493 (2020) 3159 SBhalo Norman, C. A. Ikeuchi, S. ApJ 345 (1989) 372; Mac Low, M.-M., McCray, R., Norman, M. L. ApJ 337 (1989) 141 chevalier Chevalier, R. A. Clegg, A. W. Nature 317 (1985) 44; Cantó, J., Raga, A. C., Rodríguez, L. F. ApJ 536 (2000) 896 krumholzreview Krumholz, M. R., McKee, C. F., Bland-Hawthorn, J. ARA&A 57 (2019) 227 wtssonic Pauldrach, A., Puls, J., Kudritzki, R. P. A&A 164 (1986) 86 gupta2 Gupta, S., Nath, B. B., Sharma, P., Eichler, D. MNRAS 473 (2018) 1537 diehl Krause, M. G. H. Diehl, R. ApJ 794 (2014) L21 threephases Königl, A. MNRAS 205 (1983) 471 hillas Hillas, A. M. ARA&A 22 (1984) 425; J Phys G: Nucl Part Phys 31 (2005) R95; Bell, A. R. Astropart Phys 43 (2012) 56 derishev Aharonian, F. A., Phys Rev D 66 (2002) 023005 Bfields Beck, R. Space Sci Rev 99 (2001) 243; Vallée, J. P. New Astron Rev 48 (2004) 763; Kulsrud, R. M. Zweibel, E. G. Rep Prog Phys 71 (2008) 046901 thibaulthillas Vieu, T., Reville, B., Aharonian, F. A. MNRAS 515 (2022) 2256 parker Parker, E. N. Planet Space Sci 13 (1965) 9 WTStheory Völk, H. J. Forman, M. ApJ 253 (1982) 188; Webb, G. M., Axford, W. I., Forman, M. A. ApJ 298 (1985) 684 florinski Florinski, V. Jokipii, J. R. ApJ 591 (2003) 454 fisk Fisk, L. A. Lee, M. A. ApJ 237 (1980) 620 giacalone Giacalone, J. Jokipii, J. R. ApJ 663 (2007) L41 multipleDSA Bell, A. R. MNRAS 182 (1978) 443; White, R. L. ApJ 289 (1985) 698; Achterberg, A. A&A 231 (1990) 251; Melrose, D. B. Pope, M. H. PASA 10 (1993) 222; Pope, M. H. Melrose, D. B. PASA 11 (1994) 175; Klepach, E. G., Ptuskin, V. S., Zirakashvili, V. N. Astropart Phys 13 (2000) 161; Vieu, T., Gabici, S., Tatischeff, V. MNRAS 510 (2022) 2529 lhaaso Cao, Z. Chinese A&A 43 (2019) 457 bykovnew Badmaev, D. V., Bykov, A. M., Kalyashova, M. E. MNRAS 517 (2022) 2818 thibaultnew Vieu, T. Reville, B. MNRAS, accepted (2022), arXiv:2211.11625 localbubble Welsh, B. Y. Shelton, R. L. Ap&SS 323 (2009) 1; Cox, D. P. Lecture Notes in Physics 506 (1998) 121; Breitschwerdt, D., Space Sci Rev 78 (1996) 183 bubbleage Zucker, C. Nature 601 (2022) 334 localbubblescreen Bouyahiaoui, M., Kachelriess, M., Semikoz, D. JCAP 01 (2019) 046; Silsbee, K. Ivlev, A. V. ApJ 879 (2019) 14; Phan, V. H. M. PhD Thesis, Université Paris Cité 2020 (https://www.theses.fr/2020UNIP7070) gabicilow Gabici, S. A&A Rev 30 (2022) 4 LiBeBLB Streitmatter, R. E. Stephens, S. A. Adv Space Res 27 (2001) 743; Donato, F., Maurin, D., Taillet, R. A&A 381 (2002) 539 dieternature Breitschwerdt, D., Nature 532 (2016) 73
http://arxiv.org/abs/2307.03313v1
20230706215515
InfoSync: Information Synchronization across Multilingual Semi-structured Tables
[ "Siddharth Khincha", "Chelsi Jain", "Vivek Gupta", "Tushar Kataria", "Shuo Zhang" ]
cs.CL
[ "cs.CL", "cs.CY", "cs.IR" ]
Information Synchronization of semi-structured data across languages is challenging. For instance, Wikipedia tables in one language should be synchronized across languages. To address this problem, we introduce a new dataset and a two-step method for tabular synchronization. InfoSync contains 100K entity-centric tables (Wikipedia Infoboxes) across 14 languages, of which a subset (∼3.5K pairs) are manually annotated. The proposed method includes 1) Information Alignment to map rows and 2) Information Update for updating missing/outdated information for aligned tables across multilingual tables. When evaluated on InfoSync, information alignment achieves an F1 score of 87.91 (en ↔ non-en). To evaluate information updation, we perform human-assisted Wikipedia edits on Infoboxes for 603 table pairs. Our approach obtains an acceptance rate of 77.28% on Wikipedia, showing the effectiveness of the proposed method. § INTRODUCTION English articles across the web are updated in a more timely manner than articles in other languages on particular subjects. Meanwhile, cultural differences, topic preferences, and editing inconsistency lead to information mismatches across multilingual data, e.g., outdated or missing information <cit.>. Online encyclopedias, e.g., Wikipedia, contain millions of articles that need to be updated constantly, involving expanding existing articles, modifying content such as correcting facts in sentences <cit.> and altering Wikipedia categories <cit.>. However, more than 40% of Wikipedia's active editors edit in English, while only 15% of the world population speak English as their first language. Therefore, information in languages other than English may not be as up to date <cit.>. See Figure <ref> for an example of an information mismatch for the same entity across different languages. In this work, we look at synchronizing information across multilingual content. To overcome the above-mentioned problem, we formally introduce the task of Information Synchronization for multilingual articles, which includes paragraphs, tables, lists, categories, and images. However, due to its magnitude and complexity, synchronizing all of the information across different modalities on a webpage is daunting. Therefore, this work focuses on semi-structured data, i.e., table synchronization across a few languages, as a first step toward our mission. We consider Infoboxes, a particular type of semi-structured Wikipedia table <cit.>, which contain entity-centric information and in which we observe various information mismatches, e.g., missing rows (cf. Figure <ref>). One intuitive idea to address them is translation-based. However, Infoboxes contain rows with implicit context; translating these short phrases is prone to errors and leads to ineffective synchronization <cit.>. To systematically assess the challenge, we curate a dataset, namely InfoSync, consisting of 100K multilingual Infobox tables across 14 languages and covering 21 Wikipedia categories. ∼3.5K table pairs of English to non-English or non-English to non-English are sampled and manually synchronized. We propose a table synchronization approach that comprises two steps: (1) Information Alignment, which aligns table rows, and (2) Information Update, which updates missing or outdated rows across language pairs to resolve the inconsistency. The information alignment component aims to align the rows in multilingual tables.
The proposed method uses corpus statistics across Wikipedia, such as key- and value-based similarities. The information update step relies on an effective rule-based approach. We manually curate nine rules: row transfer, time-based, value trends, multi-key matching, append value, high to low resource, number of row differences, and rare keys. Both tasks are evaluated on InfoSync to demonstrate their effectiveness. Apart from the automatic evaluation, we deploy an online experiment that submits the mismatches detected by our method to Wikipedia, strictly following Wikipedia editing guidelines. We monitor the number of accepted and rejected edits by Wikipedia editors to demonstrate its efficacy. All proposed edits are performed manually, in accordance with Wikipedia's editing policies and guidelines[<https://en.wikipedia.org/wiki/Wikipedia:List_of_policies_and_guidelines>], rule set[<https://en.wikipedia.org/wiki/Wikipedia:Simplified_ruleset>], and policies[<https://en.m.wikipedia.org/wiki/Wikipedia:Editing_policy>]. These changes were subsequently accepted by Wikipedia editors, demonstrating the efficacy of our methodology. The contributions in this work are as follows: 1) We investigate the problem of Information Synchronization across multilingual semi-structured data, i.e., tables, and construct a large-scale dataset, InfoSync; 2) We propose a two-step approach (alignment and updation) and demonstrate superiority over existing baselines; 3) The rule-based updation system achieves excellent acceptance when utilized for human-assisted Wikipedia editing. Our dataset and method source code are available at <https://info-sync.github.io/info-sync/>. § MOTIVATION §.§ Challenges in Table Synchronization We observe the following challenges when taking Wikipedia Infoboxes as a running example. Note this is not an exhaustive list. MI: Missing Information represents the problem where information appears in one language and is missing in others. This may be due to the fact that the table is out-of-date or to cultural, social, or demographic preferences for modification (cf. Figure <ref>). OI: Outdated Information denotes that information is updated in one language but not in others. IR: Information Representation varies across languages. For example, one attribute about "parents" can be put in a single row or in separate rows ("Father" and "Mother"). UI: Unnormalized Information presents cases where table attributes can be expressed differently. For example, "known for" and "major achievements" of a person represent the same attribute (i.e., they are paraphrases). LV: Language Variation means that information is expressed in different variants across languages. This problem is further exacerbated by the implicit context in tables when translating. E.g., "Died" in English might be translated to "Overleden" (Pass Away) or "overlijdensplaats" (Place of Death) in Dutch due to missing context. SV: Schema Variation denotes that the schema (template structure) varies. For example, extraction of "awards" in Musician tables can be challenging due to dynamic on-click lists (Full Award Lists). EEL: Erroneous Entity Linking is caused by mismatched linkages between table entities among multiple languages, e.g., "ABV" and "Alcohol by Volume". §.§ Wikipedian "Biases" Wikipedia is a global resource spanning over 300 languages. However, the information is skewed toward English-speaking countries <cit.>, as English has the largest Wikipedia, covering 23% (11%) of total pages (articles). Most users' edits (76%) are also made on the English Wikipedia.
English Wikipedia also has the highest number of page reads (49%) and page edits (34%), followed by German (20% and 12%) and Spanish (12% and 6%), respectively. Outside the top 25 languages, the total number of active editors, pages, and edits is less than 1% <cit.>. Multilingual Wikipedia articles evolve separately due to cultural and geographical bias <cit.>, which prevents information synchronization. For example, information on "Narendra Modi" (India's Prime Minister) is more likely to be better reflected in Hindi Wikipedia than in other Wikipedias. This means that in addition to the obvious fact that smaller Wikipedias can be expanded by incorporating content from larger Wikipedias, larger Wikipedias can also be augmented by incorporating information from smaller Wikipedias. Thus, information synchronization could assist Wikipedia communities by ensuring that information is consistent and of good quality across all language versions. § THE INFOSYNC DATASET To systematically assess the challenge of information synchronization and evaluate the methodologies, we aim to build a large-scale table synchronization dataset based on entity-centric Wikipedia Infoboxes. §.§ Table Extraction We extract Wikipedia Infoboxes from pages appearing in multiple languages on the same date to simultaneously preserve Wikipedia's original information and potential discrepancies. The extracted tables span 14 languages and cover 21 Wikipedia categories. Language Selection. We consider the following languages: English (en), French (fr), German (de), Korean (ko), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), Cebuano (ceb), Spanish (es), Swedish (sv), Dutch (nl), Turkish (tr), and Afrikaans (af). Of these 14 languages, four are low resource (af, ceb, hi, tr; fewer than 6,000 tables), seven are medium resource (ar, ko, nl, sv, zh, ru, de; 6,000–10,000 tables), and the remaining three are high resource (en, es, fr), w.r.t. the total number of Infobox tables (see Table 1). Our choices were motivated by the following factors: a) they cover all the continents, and thus a large and diverse population (out of the chosen languages, 7 are European: English, French, German, Spanish, Swedish, Dutch, and Turkish); b) they have sufficient pages with Infoboxes, and each entity Infobox is present in at least five languages; and c) an adequate number of rows (5 and above) facilitates better data extraction. Categories. Extracted tables cover twenty-one simple, diverse, and popular topics: Airport, Album, Animal, Athlete, Book, City, College, Company, Country, Food, Monument, Movie, Musician, Nobel, Painting, Person, Planet, Shows, and Stadiums. We observe that Airport has the largest number of entity tables, followed by Movie and Shows, as shown in Table <ref>. Other extraction details are provided in Appendix <ref>. §.§ Tabular Information Mismatch We analyze the extracted tables in the context of the synchronization problem and identify the information gap. The number of tables is biased across languages, as shown in Table <ref>. We observe that Afrikaans, Hindi, and Cebuano have significantly fewer tables. Similarly, the table size is biased across several languages; Dutch and Cebuano have the fewest rows. In addition, the number of tables across categories is uneven; refer to Table <ref>. Airport and Movie have the highest number of tables. Table <ref> also reports the average number of rows for a category.
Planet, Company, and Movie have the highest average number of rows. When synchronizing a table from one language to another, we observe from Column 1 in Table <ref> that the maximum number of tables can be transferred from English, French, and Spanish. Afrikaans, Hindi, and Cebuano have the least overlapping information (Column 3) with all other languages. The number of rows (Column 5) varies substantially between languages, with Spanish and Arabic having the highest number. §.§ Evaluation Benchmark We construct the evaluation benchmark by manually mapping table pairs in two languages. The table pairs we consider can be broadly split into English ↔ Non-English and Non-English ↔ Non-English. The annotations are conducted as follows. English ↔ Non-English: We sample 1964 table pairs, where a minimum of 50 pairs for each category and language is guaranteed. We divide the annotated dataset into validation and test sets in a ratio of 1:2. The non-English tables are translated into English first and then compared against the English version. Furthermore, native speakers annotated 200 table pairs for English ↔ Hindi and English ↔ Chinese to avoid minor machine translation errors. Non-English ↔ Non-English: We consider six non-English languages: two from each of high resource (French, Russian), medium resource (German, Korean), and low resource (Hindi, Arabic), w.r.t. the number of tables in InfoSync. We sample and annotate 1589 table pairs distributed equally among these languages, where we choose an average of ∼ 50 tables for all pairs of languages. Both are translated into English before manually mapping them. In addition, for more detailed analysis, we also annotate metadata around table synchronization challenges such as MI, IR, LV, OI, UI, SV, and EEL, as discussed in <ref>. § TABLE SYNCHRONIZATION METHOD This section explains our proposed table synchronization method for addressing missing or outdated information. This method includes two steps: information alignment and update. The former approach aims to align rows across a pair of tables, and the latter helps to update missing or outdated information. We further deploy our update process in a human-assisted Wikipedia edit framework to test its efficacy in the real world. §.§ Information Alignment An Infobox consists of multiple rows where each row has a key and value pair. Given a pair of tables T_x = [..., (k_x^i, v_x^i), ...] and T_y = [..., (k_y^j, v_y^j), ...] in two languages, table alignment aims to align all the possible pairs of rows, e.g., (k_x^i, v_x^i) and (k_y^j, v_y^j) that refer to the same information should be aligned. We propose a method that consists of five modules, each of which relaxes matching requirements in order to create additional alignments. M1. Corpus-based. A pair of rows (k_x, v_x) in T_x and (k_y, v_y) in T_y is supposed to be aligned if cosine(em(tr_x^en(k_x)), em(tr_y^en(k_y))) > θ_1, where em is the embedding, θ_1 is the threshold, and tr_y^en(·) denotes the English translation of a key if it is not already in English. In order to achieve accurate key translations, we adopt a majority voting approach, considering multiple translations of the same key from different category tables. We consider the key's values and categories as additional context for better translation during the voting process. To simplify the voting procedure, we pre-compute mappings by selecting only the most frequent keys for each category across all languages. M2. Key-only. This module attempts to align the pairs left unaligned by module 1.
Using their English translations, it first computes the cosine similarity for all possible key pairs. k_x will be aligned to k_y only if they are mutually the most similar keys and the similarity is above a certain threshold θ_2. This is similar to maximum bipartite matching, treating similarity scores as edge weights, followed by threshold-based pruning, and it ensures that we capture the highest-similarity mapping from both language directions. Note that here we use only keys as the text for similarity computation. M3. Key-value bidirectional. This module is similar to step 2, except that it uses the entire table row for computing similarities, i.e., key + value, using threshold θ_3. M4. Key-value unidirectional. This module further relaxes the bidirectional mapping constraint of step 3, removing the requirement that the highest similarity score match from both sides. We shift to unidirectional matching between row pairs, i.e., consider the highest similarity in either direction. However, this may result in adding spurious alignments. To avoid this, we use a higher threshold (θ_4) than in the prior step. M5. Multi-key. The previous modules only take the most similar key for alignment if it exceeds the threshold. In this module, we further relax the constraint and select multiple keys (at most two), provided they exceed a threshold (θ_5). Multi-key mappings are sparse, but the above procedure would lead to dense mappings. To avoid this, we introduce a soft constraint for value-combination alignment, where multi-key values are merged. We consider a multi-key alignment valid when the merged value-combination similarity score exceeds that of the most similar key. The thresholds of the five modules are tuned in the sequence stated above. §.§ Information Updation Information modification includes Row Append (adding missing rows), Row Update (replacing or adding values), and Merge Rows. We propose a rule-based heuristic approach for information updates. The rules are in the form of logical expressions (∀_(R_T_x,R_T_y) L ↦ R) applied to Infobox tables, where R_T_x and R_T_y represent table rows for languages x and y, respectively. These rules are applied sequentially according to their priority rank (P.R.). The rules are described below. R1. Row Transfer. This rule follows the logical expression ∀_(R_T_x,R_T_y) Al^T_y_T_x(R_T_x;R_T_y) = 0 ↦ T_y ∪ tr_x^y(R_T_x) ⋀ Al^T_y_T_x(R_T_x; tr_x^y(R_T_x))=1, where Al^T_y_T_x(·;·) represents the alignment mapping between the two tables T_y and T_x: unaligned rows are transferred from one table to the other. R2. Multi-Match. We update the table by removing multi-alignments and replacing them with merged information to handle multi-key alignments. R3. Time-based. We update aligned values using the latest timestamp. R4. Trends (positive/negative). This update applies to cases where the value is highly likely to follow a monotonic pattern (increasing or decreasing) w.r.t. time, e.g., athlete career statistics. The positive/negative trend lists were curated by the authors. R5. Append Values. Additional value information from an up-to-date row is appended to the outdated row. R6. HR to LR. This rule transfers information from a high resource to a low resource language to update outdated information. R7. #Rows. This rule transfers information from bigger (more rows) to smaller (fewer rows) tables. R8. Rare Keys (Non-Popular). We transfer rows with rare (non-popular) keys, which are likely to have been added recently, from the up-to-date table to the outdated table. The list of non-popular keys is also curated by the authors.
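To make the two-step procedure concrete, the sketch below illustrates the core of the alignment stage (mutual best match over embedded, translated rows, in the spirit of M2/M3) followed by the simplest update rule, R1 (row transfer). It is a simplified illustration, not the actual InfoSync implementation: embed, translate_to_en, and translate_x_to_y stand for any sentence encoder and machine translation system, and the threshold value is only indicative.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def align_rows(table_x, table_y, embed, translate_to_en, theta=0.54):
    """Return index pairs (i, j) aligned by mutual best match above threshold theta.

    table_x / table_y: lists of (key, value) string pairs in two languages.
    """
    ex = [embed(translate_to_en(k + " " + v)) for k, v in table_x]
    ey = [embed(translate_to_en(k + " " + v)) for k, v in table_y]
    sim = np.array([[cosine(a, b) for b in ey] for a in ex])
    aligned = []
    for i in range(len(table_x)):
        j = int(np.argmax(sim[i]))
        # keep the pair only if it is the best match in both directions
        # and the similarity survives threshold-based pruning
        if int(np.argmax(sim[:, j])) == i and sim[i, j] >= theta:
            aligned.append((i, j))
    return aligned

def transfer_missing_rows(table_x, table_y, aligned, translate_x_to_y):
    """Rule R1: append rows of table_x that have no counterpart in table_y."""
    matched_x = {i for i, _ in aligned}
    updated_y = list(table_y)
    for i, (k, v) in enumerate(table_x):
        if i not in matched_x:
            updated_y.append((translate_x_to_y(k), translate_x_to_y(v)))
    return updated_y

In the full method, modules M1-M5 progressively relax the matching constraints before the remaining update rules (R2-R8) are applied in priority order.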
The detailed formulation of the logical rules and their priority ranking are listed in Table <ref>. Figure <ref> in the Appendix shows an example of a table update. Human-assisted Wikipedia Infobox Edits: We apply the above rules to assist humans in updating Wikipedia Infoboxes. Following Wikipedia edit guidelines[<https://en.wikipedia.org/wiki/Wikipedia:List_of_policies_and_guidelines>], rule set[<https://en.wikipedia.org/wiki/Wikipedia:Simplified_ruleset>], and policies[<https://en.m.wikipedia.org/wiki/Wikipedia:Editing_policy>], we accompany our update request with a description that provides evidence, which contains (a) the URL of the up-to-date entity page in the source language, (b) the exact table row information, the source language, and the details of the changes, and (c) one additional citation discovered by the editor for extra validation. [We use search engines such as Google, Bing, you.com, perplexity.ai, etc. to find the additional citation.] We also make updates beyond our heuristic-based rules for rows that are aligned through our information alignment method. § EXPERIMENTS Our experiments assess the efficacy of our proposed two-stage approach by investigating the following questions. - What is the efficacy of the unsupervised multilingual method for table alignment? (<ref>) - How significant are the different modules of the alignment algorithm? (<ref> and <ref>) - Is the rule-based updating approach effective for information synchronization? (<ref>) - Can the two-step approach assist humans in updating Wikipedia Infoboxes? (<ref>) §.§ Experimental Setup Baseline Models. We compare our approach with LaBSE <cit.>, SimCSE <cit.>, and multilingual sentence-transformer embeddings <cit.>, in which we include mBERT (cased) with mean pooling (mp) <cit.> and its distilled version (distill mBERT) <cit.>, all in base form. We also compare with XLM-RoBERTa (XLM-R) <cit.> with mean pooling and its distilled version <cit.> trained via an MPNet-based teacher model (MPNet) <cit.>. For all baseline implementations, we use the Hugging Face transformers <cit.> and sentence-transformers <cit.> libraries for the multilingual models. Hyper-parameter Tuning. For our method, we embed the translated English keys and values using the MPNet model <cit.>. We tune the threshold hyper-parameters using the validation set, 1/3 of the total annotated set. We sequentially tune the threshold hyper-parameters (θ_1 to θ_5) in module order. The optimal thresholds after tuning are θ_1 = (0.8, 0.8); θ_2 = (0.64, 0.6); θ_3 = (0.54, 0.54); θ_4 = (0.9, 0.54); θ_5 = (0.88, 0.96) for T_en ← T_x and T_x ← T_y, respectively. We retain the default settings for the other models' specific hyperparameters. Information Alignment. We consider English as our reference language for alignment. Specifically, we translate all multilingual tables to English using the effective table translation approach of XInfoTabS <cit.>. Then, we apply the incremental modules as discussed in <ref>. We tune independently on the validation set for Non-English ↔ Non-English and English ↔ Non-English. The method is assessed with two sets of metrics: (a) matched score, which measures the F1-score between ground-truth matched rows and predicted alignments, and (b) unmatched score, which measures the F1-score between independent (unmatched) rows in the ground truth and predicted unaligned rows. See Figure <ref> for the explanation of these metrics. Information Updation. We apply the heuristic-based approach and deploy the predicted updates for human-assisted edits on Wikipedia Infoboxes.
532 table pairs are edited, distributed among T_en → T_x, T_x → T_y, and T_x → T_en, where x and y are non-English languages. §.§ Information Alignment Algorithm Efficacy. Table <ref> reports the matched and unmatched scores. For match scores, we observe that the corpus-based module achieves an F1 score exceeding 50 for all language pairs. Using the key-only module boosts the performance by about 5-15 points. Taking the whole row context (key-value pair) with a strict constraint on bidirectional mapping, i.e., two-way similarity, improves performance substantially (more than 16 points). Further relaxing the bidirectional constraint to unidirectional matching (one-way similarity) improves our results only marginally, by less than 0.5 points. Thus, relaxing the bidirectional mapping constraint does not lead to significantly better alignments. The multi-key module, which considers one-to-many alignments, further improves the accuracy marginally. The reason for the marginal improvement is the very small number of instances of one-to-many mappings. For unmatch scores, we see results similar to the match scores. The only significant difference is in key-only performance, where we observe a 0.5x performance improvement compared to match scores. We also analyze the precision-recall trade-off in Tables <ref>, <ref>, <ref> and <ref> of Appendix <ref>. We observe that the precision decreases and the recall increases for match scores with each module addition, whereas the reverse is true for unmatch scores. The number of alignments increases as we add more modules with relaxed constraints. This increases the number of incorrect alignments, reducing the precision but increasing the recall. [There are O(n^2) possible incorrect alignments compared to O(n) correct alignments.] Similarly, we note that the accuracy on unaligned rows increases because more incorrect alignments are added with relaxed constraints. We also report each module's coverage in Appendix <ref>. The performance of our proposed approach grouped by language, category, and row keys is detailed in Appendix <ref>. Error Analysis. Error analyses (cf. <ref>) for matched and unmatched rows are reported in Tables <ref> and <ref>, respectively. Our proposed method works sequentially, relaxing constraints, and the number of falsely aligned rows increases with module addition (cf. Table <ref>). Different modules contribute unequally to the mistakes: (25%, 56%) of the mistakes come from the corpus-based module, (39%, 22%) from the key-only module, (17%, 35%) from the key-value bidirectional module, (7%, 4%) from the key-value unidirectional module, and (7.6%, 5%) from the multi-key alignment module, for T_en ↔ T_x and T_x ↔ T_y, respectively. The corpus-based module performs worst for T_x ↔ T_y because of the difficulty of multilingual mapping. The key-only module performs worst for T_en ↔ T_x because it is the first relaxation in the algorithm. Further analysis of the error cases is in Appendix (<ref>). §.§ Information Updation Table <ref> reports the results for the different types of update rules explained in <ref>. We observe that the row addition rule accounts for the most updates, ∼64% of total updates for gold and predicted aligned table pairs. The flow of information from high resource to low resource languages accounts for ∼13% of the remaining updates, whereas transfer from tables with more rows to those with fewer rows adds another 8% of the updates. ∼9% of the updates are done by the value update rule. All the other rules combined give 8% of the remaining suggested updates.
From the above results, most information gaps can be resolved by row transfer. The magnitude of rules like value updates and multi-key shows that table information needs to be synchronized regularly. Examples of edited infoboxes using the proposed algorithm are shown in Appendix Figures <ref> and <ref>. Table <ref> reports a similar analysis for human-assisted Wikipedia infobox edits. We also report Wikipedia editors' accept/reject rate for the above-deployed system in Table <ref>. We obtained an acceptance rate of 77.28% (as of May 2023), with the highest performance obtained when information flows across non-English languages. The lowest performance is obtained when the information flows from non-English to an English info box. This highlights that our two-step procedure is effective in a real-world scenario. Examples of live updates are shown in Appendix Figures <ref> and <ref>. § RELATED WORKS Information Alignment. Multilingual Table attribute alignment has been previously addressed via supervised <cit.> and unsupervised methods <cit.>. Supervised methods trained classifiers on features extracted from multilingual tables. These features include cross-language links, text similarity, and schema features. Unsupervised methods made use of corpus statistics and template/schema matching for alignments. Other techniques by <cit.> focus on using external knowledge graphs such as DBpedia for the updation of Infoboxes or vice versa. In their experiments, most of these methods use less than three languages, and machine translation is rarely used. Additionally, we don't require manual feature curation for strong supervision. We study the problem more thoroughly with grouped analysis along languages, categories, and keys direction. The works closest to our approach are <cit.>, both of which use cross-language hyperlinks for feature or entity matching. <cit.> uses translations before calculating text similarity. Utilizing cross-language links can provide a robust alignment supervision signal. In contrast to our approach, we do not use external knowledge or cross-language links for alignments. This additional information is rarely available for languages other than English. Information Updation. Prior work for information updates <cit.> covers Wikipedia or news articles than semi-structured data like tables. <cit.> studies the problem of updating multilingual news articles across different languages over 15 years. They classify the edits as addition, deletion, updates, and retraction. These were the primary intuitions behind our challenge classified in <ref>. <cit.> focused on automating article updates with new facts using large language models. <cit.> focused on generating updated headlines when presented with new information. Some prior works also focus on the automatic classification of edits on Wikipedia for content moderation and review <cit.>. Evening modeling editor's behavior for gauging collaborative editing and development of Wikipedia pages has been studied <cit.>. Other related works include automated sentence updation based on information arrival <cit.>. None of these works focus on tables, especially Wikipedia Infoboxes. Also, they fail to address multilingual aspects of information updation. § CONCLUSION AND FUTURE WORK Information synchronization is a common issue for semi-structured data across languages. Taking Wikipedia Infoboxes as our case study, we created and proposed a two-step procedure that consists of alignment and updation. 
The alignment method outperforms baseline approaches with an F1-score greater than 85; the rule-based method received a 77.28 percent approval rate when suggesting updates to Wikipedia. We identify the following future directions. [(a)] * Beyond Infobox Synchronization. While our technique is relatively broad, it is optimized for Wikipedia Infoboxes. We want to test whether the strategy applies to technical, scientific, legal, and medical domain tables <cit.>. It will also be intriguing to widen the updating rules to include social, economic, and cultural aspects. * Beyond Pairwise Alignment. Currently, independent language pairs are considered for (bi) alignment. However, multiple languages can be utilized jointly for (multi) alignment. * Beyond Pairwise Updates. Similar to (multi) alignment, one can jointly update all language variants simultaneously. This can be done in two ways: (1.) With English as pivot language : To update across all languages. Here, English act as a central server with message passing. (2.) Round-Robin Fashion: where pairwise language updates between language pairs are transferred in a round-robin ring across all language pairs. In every update, we selected a leader similar to a leader election in distributed systems. * Joint Alignment and Updation. Even while our current approach is accurate, it employs a two-step process for synchronization, namely alignment followed by updating. We want to create rapid approaches aligning and updating in a single step. * Text for Updation: Our method doesn't consider Wikipedia articles for updating tables <cit.>. § LIMITATIONS We only consider 14 languages and 21 categories, whereas Wikipedia has pages in more than 300 languages and 200 broad categories. Increasing the scale and diversity will further improve method generalization. Our proposed method relies on the good multilingual translation of key and value from table pairs. Although we use key, value, and category together for better context, enhancement in table translation <cit.> will benefit our approach. Because our rule-based system requires manual intervention, it has automation limits. Upgrading to completely automated methods based on a large language model may be advantageous. We are only considering updates for semi-structured tables. However, updating other page elements, such as images and article text, could also be considered. Although a direct expansion of our method to a multi-modal setting is complex <cit.>. § ETHICS STATEMENT We aimed to create a balanced, bias-free dataset regarding demographic and socioeconomic factors. We picked a wide range of languages, even those with limited resources, and we also ensured that the categories were diversified. Humans curate the majority of information on Wikipedia. Using unrestricted automated tools for edits might result in biased information. For this reason, we adhere to the "human in the loop" methodology <cit.> for editing Wikipedia. Additionally, we follow Wikipedia editing guidelines[<https://en.wikipedia.org/wiki/Wikipedia:List_of_policies_and_guidelines>], rule set[<https://en.wikipedia.org/wiki/Wikipedia:Simplified_ruleset>], and policies[<https://en.m.wikipedia.org/wiki/Wikipedia:Editing_policy>] for all manual edits. Therefore, we ask the community to use our method only as a recommendation tool for revising Wikipedia. As a result, we ask that the community utilize strictly for scientific and non-commercial purposes from this point forward. 
§ ACKNOWLEDGEMENTS We thank members of the Utah NLP group for their valuable insights and suggestions at various stages of the project, and reviewers for their helpful comments. Additionally, we appreciate the inputs provided by Vivek Srikumar and Ellen Riloff. Vivek Gupta acknowledges support from Bloomberg’s Data Science Ph.D. Fellowship. acl_natbib § APPENDIX §.§ Table Extraction Details Table formats and HTML code styles differ from one language to another and even across categories in the same language. Extraction is modified to handle these variations, which requires the following steps: [(a)] * Detecting Infoboxes: We locate Wikipedia infoboxes that appear in at least five languages. * Extracting HTML: After detection, we extract HTML and preprocess to remove images, links, and signatures. * Table Representation: we convert the extracted table and store them in JSON. Row Difference Across Paired Languages: There is substantial variation in the number of rows for infobox across different languages, i.e., rows difference = 1/|L|∑_ln ∈ L∖C1 ||R_c1| - |R_ln||, where L is set of all 14 languages under consideration. Table <ref> shows that German followed by Arabic and Afrikaans, has the highest row difference. This indicates that tables in these languages are incomplete (with missing rows). §.§ Table Updation Examples An example of table updation is shown in the Figure <ref>. §.§ Precision and Recall We also evaluated precision-recall values in information alignment for matched and unmatched scores (<ref>). Precision recall values for T_en ↔ T_x, T_x ↔ T_y, T_en * T_hi and T_en * T_zh are reported in Tables <ref>, <ref>, <ref>, and <ref>, respectively. §.§ Algorithm Coverage We measure the coverage on the entire corpus, the rate of rows aligned w.r.t. the smaller table in a table pair. Table <ref> reports ablations results of coverage for various modules. Our proposed method aligns 72.54% and 67.96% of rows for T_en ↔ T_x and T_x ↔ T_y, respectively. Corpus-based is the most constrained module, focusing more on precision; hence removing corpus-based gives better coverage for both cases. Key-Only-Unidirectional is the most important module for coverage, followed by the Key-Only module for both cases. §.§ Domain and Language Wise Analysis Table <ref>, <ref>, and <ref> show the performance of our proposed method grouped by languages, domains, and keys, respectively. Group-wise Analysis. From Table <ref>, for T_en ↔ T_x, Cebuano, Arabic, German, and Dutch are the worst performing languages with F1-score close to 85 for alignment. Whereas Turkish, Chinese, and Hindi have F1-score greater than 90. Korean, German, and Swedish are the lowest-performing language groups, with an F1-Score close to 86 for unaligned settings. Cebuano, Turkish, and Dutch get the highest score for unaligned metrics (greater than 90). For non-English language pairs, the lowest F1-score for match table pairs is observed for German-Arabic and Hindi-Korean pairs with an F1-score close to 78, as shown in Table <ref>. The highest F1-score is observed for Russian-German and Hindi-German, with F1-scores exceeding 88.8. For unmatched data, Korean-Hindi, French-Hindi, French-Korean, and Russian-Korean pairs have the lowest F1 scores, less than 85. In contrast, German-Hindi and Russion-German have exceeded the unaligned F1-Score of 90. Category-wise Analysis. As reported in Table <ref>, our method performs worst in Airport and College categories for match settings when one of the languages is English. 
For non-English matched settings, Movie and City are the worst-performing categories. For the unmatched setting with English as one of the languages, Airport and Painting have the lowest F1-scores, whereas Movie and Stadium have the lowest performance for non-English languages. Key-wise Analysis. Table <ref> shows the average F1-scores across tables for frequent and non-frequent keys. We observe an F1-score degradation of 10 points for rare, low-occurrence keys compared to frequent keys. §.§ Ablation Study We report ablation performance to highlight the significance of each module in Table <ref>. Key-value bidirectional mapping (two-way) is the most critical module, followed by the key-only and corpus-based modules. We also observe that uni-directional mapping is the second most important module for non-English alignments. The multi-key module was consistently the least significant, for the same reason discussed above (very few instances). Similar observations hold for the unmatched scores. §.§ Further Details: Error Analysis We discussed challenges to table information synchronization across languages in <ref>. Table <ref> (main paper) shows the number of instances of these challenges in the evaluation for matched cases after applying the various modules of the alignment algorithm. * The corpus-based module solves approximately (40%, 56%) of outdated information, (31%, 21%) of schema variation, (10%, 30%) of language variation, (13%, 25%) of unnormalized information, and (37%, 26%) of erroneous entity linking challenges in T_en ↔ T_x and T_x ↔ T_y, respectively. * Adding the key-only similarity module resolves a further (24%, 13%) of outdated information, (36%, 21%) of schema variation, (13%, 5%) of language variation, (19%, 17%) of unnormalized information, and (22%, 8%) of erroneous entity linking challenges in T_en ↔ T_x and T_x ↔ T_y, respectively. * Applying the key-value bidirectional module resolves another (12%, 13.5%) of outdated information, (18%, 6.6%) of information representation, (54%, 45%) of language variation, (40%, 46%) of unnormalized information, and (34%, 53%) of erroneous entity linking challenges in T_en ↔ T_x and T_x ↔ T_y, respectively. * The key-value uni-directional and multi-key modules together solve another (18.5%, 7.5%) of the information representation challenges in T_en ↔ T_x and T_x ↔ T_y, respectively, but are not effective against the other challenges. §.§ Other Related Work Tabular Reasoning. Addressing NLP tasks on semi-structured tabular data has received substantial attention. There is work on tabular NLI <cit.>, question answering <cit.>, and table-to-text generation <cit.>. Tabular Representation and Learning. There are also several works on representing Wikipedia tables, such as TAPAS <cit.>, StrucBERT <cit.>, Table2vec <cit.>, TaBERT <cit.>, TABBIE <cit.>, TabStruc <cit.>, TabGCN <cit.>, RCI <cit.>, TURL <cit.>, and TableFormer <cit.>. Some papers such as <cit.> study pre-training for tabular tasks. Papers related to tabular probing include <cit.>. Tabular Datasets. There are several tabular task datasets on (a.) tabular NLI: <cit.>; (b.) tabular QA: WikiTableQA <cit.>, HybridQA <cit.>, WikiSQL <cit.>, SQUALL <cit.>, OpenTableQA <cit.>, FinQA <cit.>, FeTaQA <cit.>, TAT-QA <cit.>, SQA <cit.>, NQ-Tables <cit.>; and (c.) table generation: ToTTo <cit.>, Turing Tables <cit.>, LogicNLG <cit.>. Furthermore, there are also several works on web table extraction, retrieval, and augmentation <cit.>, and on utilizing transformer models for table representation <cit.>.
http://arxiv.org/abs/2307.00954v1
20230703115621
HODINet: High-Order Discrepant Interaction Network for RGB-D Salient Object Detection
[ "Kang Yi", "Jing Xu", "Xiao Jin", "Fu Guo", "Yan-Feng Wu" ]
cs.CV
[ "cs.CV", "eess.IV" ]
HODINet: High-Order Discrepant Interaction Network for RGB-D Salient Object Detection Kang Yi, Jing Xu, Xiao Jin, Fu Guo and Yan-Feng Wu Manuscript received 14 June 2022. This work was supported in part by Tianjin Natural Science Foundation, China (22JCQNJC01650, 19JCQNJC00300), and Fundamental Research Funds for the Central Universities of Nankai University (63201192, 63211116). (Kang Yi and Jing Xu contributed equally to this work.) (Corresponding author: Xiao Jin.) Yi Kang, Jing Xu, Xiao Jin and Yan-Feng Wu are with the College of Artificial Intelligence, Nankai University, Tianjin 300350, China (e-mail: jinxiao@nankai.edu.cn). Guo Fu is with Tandon School of Engineering, The New York University, New York 11201, USA. August 1, 2023 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== RGB-D salient object detection (SOD) aims to detect the prominent regions by jointly modeling RGB and depth information. Most RGB-D SOD methods apply the same type of backbones and fusion modules to identically learn the multimodality and multistage features. However, these features contribute differently to the final saliency results, which raises two issues: 1) how to model discrepant characteristics of RGB images and depth maps; 2) how to fuse these cross-modality features in different stages. In this paper, we propose a high-order discrepant interaction network (HODINet) for RGB-D SOD. Concretely, we first employ transformer-based and CNN-based architectures as backbones to encode RGB and depth features, respectively. Then, the high-order representations are delicately extracted and embedded into spatial and channel attentions for cross-modality feature fusion in different stages. Specifically, we design a high-order spatial fusion (HOSF) module and a high-order channel fusion (HOCF) module to fuse features of the first two and the last two stages, respectively. Besides, a cascaded pyramid reconstruction network is adopted to progressively decode the fused features in a top-down pathway. Extensive experiments are conducted on seven widely used datasets to demonstrate the effectiveness of the proposed approach. We achieve competitive performance against 24 state-of-the-art methods under four evaluation metrics. RGB-D SOD, cross-modality fusion, high-order attention mechanism, transformer, discrepant interaction. § INTRODUCTION SALIENT object detection (SOD) refers to locating the most attractive parts of a given scene <cit.>. It plays an important role in the field of computer vision and has been applied to various scenarios. Although the SOD methods have made notable progress and improvement, the performance often degrades when facing complex and indistinguishable cases. With the rapid development of 3D image sensors, people can obtain high-quality depth information easily, which has attracted growing interest among researchers in RGB-D SOD methods. 
Numerous studies have shown that utilizing RGB-D image pairs for saliency detection can achieve superior performance in some challenging scenes. §.§ Motivation The motivation of this work mainly derives from two aspects: On one hand, even though RGB images and depth maps contain completely different information, most of the existing RGB-D SOD methods still employ the same type of backbones to identically extract features from different modalities <cit.>. On the other hand, different stages of backbones focus on extracting distinct spatial and channel features, most of the existing RGB-D SOD methods only apply the same kind of fusion modules <cit.>. Concretely, there still remains a few problems that demand further discussions and improvements in RGB-D SOD. §.§.§ how to model discrepant characteristics of RGB images and depth maps The RGB images contain local color and texture contents, while the depth maps provide shape and position clues in a global view <cit.>. The RGB and depth features with mismatched information will generate suboptimal fusion outcomes, and result in a negative effect on the performance. Most previous methods employ the same type of backbones or siamese networks to extract features identically, which is lack of emphasis on the complementary information between different modalities. One possible solution to this problem is to adopt more specific backbones based on the characteristics of the modalities. For example, local continuity and structure information should be emphasized when encoding RGB images with rich texture features. Since the practical receptive fields are much smaller than the theoretical ones <cit.>, the backbone for the depth branch needs to take a stack of convolutional layers to include global position relationship rather than local details. §.§.§ how to fuse features from two modalities in different stages Spatial and channel attention plays a crucial role in fusion modules and brings significant performance improvements. Nevertheless, due to the discrepancy in resolutions and channels of features maps in different stages, utilizing the same fusion module in all stages cannot guarantee satisfactory results. The neural networks tend to obtain some specific features in the shallow layers, while some abstracted semantic features are learned in the deep layers. Different types of features should be discrepantly integrated and fused to achieve better experimental results. Therefore, it is significant to design suitable fusion modules that take into account the diversity of different stages. §.§ Contribution To tackle the above issues, we propose a high-order discrepant interaction network (HODINet) for RGB-D SOD. It is a dual-stream asymmetric framework where transformer-based and CNN-based architectures are used as backbones to extract features from RGB images and depth maps. Moreover, high-order statistical information has been proved to be able to distinguish subtle differences between images in fine-grained image classification <cit.>. Inspired by this, to exploit complementary information between different modalities, high-order representations are carefully embedded into spatial and channel attention to fuse the features from RGB images and depth maps. Since low-level features contain spatial information and high-level features concentrate on channel dimensions, a high-order spatial fusion (HOSF) module and a high-order channel fusion (HOCF) module are introduced to provide detailed and accurate guidance for feature fusion. 
Besides, an effective cascaded pyramid reconstruction network (CPRN) is adopted for cross-stage decoding. Extensive experiments are conducted on seven benchmark datasets, and the results indicate that the proposed HODINet achieves competitive performance against 24 RGB-D SOD methods. Fig. <ref> illustrates a comparison of detection results of our HODINet and other conventional fusion methods. In summary, the main contributions in this paper can be described as follows. * We present a high-order discrepant interaction network (HODINet) for RGB-D SOD, which is a dual-stream asymmetric framework using transformer-based and CNN-based architectures to extract features from RGB images and depth inputs. * To distinguish complementary information between two modalities, we design a high-order spatial fusion (HOSF) module and a high-order channel fusion (HOCF) module, which learn high-order representations in spatial and channel attention. * We also develop a cascaded pyramid reconstruction network (CPRN) to progressively decode the multi-scale fused features in a top-down pathway. * The proposed method is evaluated on seven publicly available datasets under four widely used metrics. Compared with 24 state-of-the-art approaches, HODINet shows comparable performance. The rest of this paper is organized as follows. In Section II, the related work is presented. Then, the detailed methodology is described in Section III. The experimental settings and result discussion are shown in Section IV. Finally, Section V concludes this paper. § RELATED WORK §.§ RGB-D Salient Object Detection Early RGB-D SOD methods usually design handcrafted features and various fusion strategies to integrate RGB and depth inputs. Following this direction, numerous models have been proposed to detect salient objects <cit.>. However, they merely regard the depth stream as auxiliary information, which results in unsatisfactory performance. With the rapid development of deep learning, CNN-based methods have achieved remarkable progress <cit.>. Huang et al. <cit.> explore cross-modality interactions between RGB and depth features using linear and bilinear fusion strategies. Zhang et al. <cit.> estimate depth maps by the corresponding RGB images and integrate depth features to enhance the saliency detection performance. Huang et al. <cit.> capture cross-level and cross-modal complementary information by multi-branch feature fusion and selection. Jiang et al. <cit.> incorporate intra-view and correlation information in RGB-D SOD by a novel cross-modality saliency generative adversarial network. Liu et al. <cit.> build an attentive cross-modal fusion network based on residual attention for RGB-D SOD. Zhou et al. <cit.> present a cross-level and cross-scale adaptive fusion network to detect RGB-D saliency. Mao et al. <cit.> propose a novel cross-modality fusion and progressive integration network to address saliency prediction on stereoscopic 3D images. With the recent success in other computer vision tasks <cit.>, the transformer-based models have opened up new directions in RGB-D SOD. Liu et al. <cit.> design a pure transformer framework, which utilizes the T2T-ViT to divide images into patches and the RT2T transformation to decode patch tokens to saliency maps. Liu et al. <cit.> introduce a triplet transformer scheme to model the long-range relationship among different stages, which serves as supplementary encoding strategy for feature fusion. Liu et al. <cit.> propose SwimNet for RGB-D SOD. 
They fully explore the advantages of CNN and transformer in modeling local and global features. Due to the inherent discrepancy of the two modalities, using the exact same type of backbones to extract features indiscriminately may not obtain satisfactory results. Thus, the proposed HODINet is designed as a dual-stream asymmetric framework using transformer-based and CNN-based architectures to extract features from RGB images and depth maps. Section <ref> presents this part in details. §.§ High-Order Attention Mechanism In some visual tasks, high-order relationship has been proved to be important for performance improvements. For example, fine-grained image categorization is a task need to distinguish tiny differences between images. High-order relationship has demonstrated its effectiveness in this task. Lin et al. <cit.> propose bilinear models that use outer product to represent local pairwise feature interactions. Chen et al. <cit.> design a high-order attention module that utilizes the high-order polynomial predictor to capture subtle differences among objects in the image. Zhao et al. <cit.> introduce a graph-based relation discovery approach to learn positional and semantic features. When applied to RGB-D SOD, the high-order representation helps the encoder to distinguish tiny differences between two modalities. Complementary information in this visual task can be further investigated. Besides, various forms of attention mechanisms are exploited to focus on the pivotal information in the image, and have significantly improved the performance of computer vision tasks. Hou et al. <cit.> present a mixed pooling module that captures long-range dependencies and also focuses on local details simultaneously. Fu et al. <cit.> design a position attention module and a channel attention module to model the semantic dependencies in spatial and channel dimensions, respectively. Yang et al. <cit.> observe the mechanism of human attention and propose a 3D attention module, which uses an energy function to calculate the attention weights. Hou et al. <cit.> decompose the channel attention into two 1D feature coding processes. Inspired by these above attention mechanisms, we design the HOSF and HOCF modules to promote the interactions in cross-modality features, which will be elaborated in Section <ref> and Section <ref>. § PROPOSED METHOD In this section, we describe the proposed HODINet in detail. §.§ Overview The whole framework is shown in Fig. <ref>. Given an RGB image and a corresponding depth map, we use transformer-based and CNN-based networks as backbones to extract features from RGB and depth inputs, respectively. Then, we divide the backbones into four stages . Inspired by the recent success of high-order representation <cit.>, output features from the first two stages are fed into high-order spatial fusion (HOSF) modules to enhance the spatial characterization, while output features from the last two stages are put into high-order channel fusion (HOCF) modules to obtain the channel information. Subsequently, the fused features are input to the cascaded pyramid reconstruction network (CPRN). The CPRN progressively upsamples the resolution of feature maps to integrate the multistage information. In addition, we use deep supervision <cit.> at each stage of the decoder to enhance the expression of different resolution features. The binary cross-entropy (BCE) loss is replaced by the hybrid loss <cit.> for accurate boundary prediction and higher performance. 
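To summarize the data flow just described, the skeleton below sketches one forward pass in PyTorch. It is illustrative only: the backbone wrappers, the internals of the HOSF/HOCF modules, and the CPRN decoder are reduced to placeholders, the SSIM term of the hybrid loss is omitted for brevity, and none of the class or function names are taken from the authors' implementation.
[caption=Illustrative skeleton of the HODINet data flow, label=lst:hodinet-sketch]
# Illustrative skeleton only; modules are placeholders and names are not the authors'.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HODINetSketch(nn.Module):
    def __init__(self, rgb_backbone, depth_backbone, hosf1, hosf2, hocf3, hocf4, cprn):
        super().__init__()
        self.rgb_backbone = rgb_backbone      # e.g., a PVTv2-B3 wrapper returning four stage features
        self.depth_backbone = depth_backbone  # e.g., a ResNet-101 wrapper returning four stage features
        self.fusions = nn.ModuleList([hosf1, hosf2, hocf3, hocf4])  # HOSF for stages 1-2, HOCF for stages 3-4
        self.cprn = cprn                      # cascaded pyramid reconstruction network (top-down decoder)

    def forward(self, rgb, depth):
        rgb_feats = self.rgb_backbone(rgb)        # [f1, f2, f3, f4]
        depth_feats = self.depth_backbone(depth)  # depth map copied to three channels beforehand
        fused = [fuse(r, d) for fuse, r, d in zip(self.fusions, rgb_feats, depth_feats)]
        return self.cprn(fused)                   # [P1, P2, P3, P4], each upsampled to the input size

def hybrid_loss_sketch(pred, gt):
    """Per-stage deep-supervision objective: BCE + IoU terms (SSIM term omitted here)."""
    bce = F.binary_cross_entropy(pred, gt)
    inter = (pred * gt).sum(dim=(1, 2, 3))
    union = (pred + gt - pred * gt).sum(dim=(1, 2, 3))
    iou = 1.0 - (inter / union.clamp(min=1e-6)).mean()
    return bce + iou

# Total loss: sum of the per-stage losses over the four supervised predictions P1..P4;
# only P1 is kept as the final saliency map at test time.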
Table <ref> lists some operations and notations used in this paper. §.§ Encoder When encoding the input data of the RGB and depth streams, we need to fully consider the differences between these two modalities. As discussed before, the RGB images contain abundant local structure information, while the depth maps intuitively describe object position relationships in the whole image. Therefore, we enhance the local continuity in the RGB branch by a transformer-based backbone equipped with overlapping patch embedding. For the depth stream, we take a stack of convolutional layers to cover the position information in a global view. RGB Stream. We employ PVTv2-B3 <cit.> as the backbone of the RGB stream, because it emphasizes local continuity by overlapping patch embedding. PVTv2 can be divided into four stages when generating features of different scales. At each stage, the input image is decomposed into patches with position embedding. Then, these patches pass through a transformer-based encoder with several layers. Afterwards, the output is reshaped into feature maps for multi-stage prediction tasks. Moreover, another advantage of PVTv2 is that the resolution of the output feature maps of each stage is always the same as in ResNet models <cit.>, making it easy to combine with CNN-based networks. Depth Stream. To fully investigate the position information in a global view, we stack more convolutional layers in the backbone of the depth branch. Since actual receptive fields are often much smaller than the theoretical ones <cit.>, a deeper backbone network helps to obtain sufficient global information. Some recent works also use deep networks <cit.> or large convolutional kernels <cit.> to preserve a large receptive field in the depth stream. In this paper, we adopt ResNet-101 <cit.> as the backbone of the depth branch. To apply it to the RGB-D SOD task, we remove the last global average pooling layer and the 1000-way fully-connected layer in the original ResNet-101 network. As a result, all of the remaining blocks are divided into four stages, except for the first 7×7 convolution and the max pooling layer. To be more specific, we take the feature maps of the 11th, 23rd, 92nd, and 101st layers as the outputs of the four stages. §.§ High-Order Spatial Fusion (HOSF) Module Complementary information between RGB images and depth maps plays a crucial role in RGB-D SOD <cit.>. In some visual tasks, high-order representation has shown its effectiveness in distinguishing tiny differences between images. Inspired by this, we propose two high-order attention modules to further improve the performance of RGB-D SOD. The feature maps extracted from shallower layers have a larger resolution and often contain more spatial details. Based on this, we design a High-Order Spatial Fusion (HOSF) module to integrate the RGB modality with the depth features effectively. The HOSF module utilizes the spatial high-order representation to enhance complementary information. The complete architecture of HOSF is illustrated in Fig. <ref>. Specifically, we obtain the RGB features ℱ^RGB_i and the depth features ℱ^Depth_i from their corresponding backbones, where i∈{ 1,2 } indicates the stage indexes of the encoder.
Since outputs of different backbones are inconsistent in channel dimension, we first apply a 1×1 convolutional layer to align the number of channels, f^RGB_i = ReLU( BN( Conv_1×1(ℱ^RGB_i))), f^Depth_i = ReLU( BN( Conv_1×1(ℱ^Depth_i))), where Conv_1×1(·) denotes a 1×1 convolutional layer, BN(·) represents a batch normalization (BN) layer <cit.> and ReLU(·) symbolizes a rectified linear unit (ReLU) activation function. Then, we extract high-order representations <cit.> to jointly learn the spatial relationship between f^RGB_i and f^Depth_i, 𝒜^sp_i= Norm( MO((f^RGB_i)^T ⊗ f^Depth_i)) ⊗ (f^RGB_i)^T, where T stands for the transposition operation, ⊗ indicates the element-wise multiplication operation, MO(x)=sign(x) · x^-1/2 denotes the moment, and Norm(x)=x/ x _2^2 represents the l-2 normalization. Depth data is generally obtained by depth sensors. The foreground and background have different depth values because of their distances to the sensor, which allows backbone networks to learn complementary information apart from RGB data. However, not all depth information can bring beneficial improvements to the RGB-D SOD task. The low-quality depth maps will introduce additional noise to the predicted saliency maps <cit.>. To fuse the depth maps effectively, we integrate the depth information selectively by using a global max pooling operation followed by two 3×3 convolutional layers, which can be expressed as, f^DW_i = σ( Conv_3 × 3( Conv_3 × 3( GMP(f^Depth_i)))), where GMP=(·) denotes the global max pooling operation, σ(·) represents the sigmoid activation function. Then, we supplement the depth clues to the fused features by element-wise multiplication operation. Finally, a residual connection is used to retain the global information in RGB stream. The final fused feature maps is formulated as: ℱ^fus_i = (𝒜^sp_i ⊗ f^DW_i) ⊕ f^RGB_i, i ∈{1,2}, where ⊕ represents the element-wise addition operation. The HOSF module is proposed to interact the feature maps in shallow layers of the encoders. In the next subsection, we will introduce anther high-order attention module for deep layers in the backbones. §.§ High-Order Channel Fusion (HOCF) Module In encoder networks, the features extracted from deep layers have more channels than the features in shallow layers. Each channel represents a type of semantic information. By modeling the channel interactions in a high-order manner, we can emphasize complementary feature representation for specific semantics. Thus, we introduce the High-Order Channel Fusion (HOCF) module to exploit the channel relationships between two modalities. The illustration of HOCF is shown in Fig. <ref>. In the HOCF module, we first adopt a 1×1 convolutional layer to align the number of channels, f^RGB_i = ReLU( BN( Conv_1×1(ℱ^RGB_i))), f^Depth_i = ReLU( BN( Conv_1×1(ℱ^Depth_i))), where i∈{ 3,4 } is the indexes of stage in encoder backbones. Then, the global average pooling operation is added to generate the corresponding channel attention mask. We employ the element-wise multiplication operation to calculate the interactive channel attention map 𝒜^ch_i ∈ℝ^C× C, 𝒜^RGB_i = GAP(f^RGB_i), 𝒜^Depth_i = GAP(f^Depth_i), 𝒜^ch_i = 𝒜^RGB_i ⊗𝒜^Depth_i. After these operations, we obtain the maximum value of each row and each column in the interactive channel attention map, respectively. 
Then, we apply a series of operations to further integrate the most prominent features, 𝒜̃^ch_i = σ( FC( Cat[ GMP^C × 1(𝒜^ch_i); GMP^1 × C(𝒜^ch_i)]), where Cat[·,·] denotes a concatenation operation, FC(·) stands for a fully connected (FC) layer, σ(·) is a sigmoid activation function. Here, GMP^C × 1(𝒜^ch_i) calculates how each channel of RGB feature interacts with the depth modality, and GMP^1 × C(𝒜^ch_i) obtains how each channel of depth feature correlates with the RGB modality. Finally, we use element-wise multiplication operation and residual connection to obtain fused feature maps, ℱ^fus_i = (𝒜̃^ch_i⊗ f^Depth_i) ⊕ f^RGB_i, i ∈{3,4}. These feature maps are then fed into the decoder to recover the resolutions. §.§ Decoder In the decoder, if we upsample the feature maps directly to the raw size of the inputs, it will produce excessive noise and imprecise results <cit.>. This is unacceptable for pixel-level prediction tasks. As mentioned in <cit.>, combining upsample operations with convolutional layers is crucial for restoring the resolution in decoding process. Inspired by the principle of PVT <cit.> and FPN <cit.>, we employ a cascaded pyramid reconstruction network (CPRN) to gradually integrate multi-level features via a top-down pathway. Given the output ℱ^fus_i, i ∈{1,2,3,4}, from four high-order fusion modules, we first apply two 1×1 convolutional layers to reduce the number of channels. Then, the feature maps are upsampled two times with bilinear interpolations to make the resolution compatible to next stage. After that, we concatenate these feature maps. The above operations are continued until the feature map is restored to the size of original inputs. For brevity, we first denote a nonlinear feature enhancement (NFE) unit before introducing the cascaded pyramid reconstruction network, NFE(·)= ReLU( BN( Conv_1×1(·))). Thus, we use the following formulas to present the process of cascaded pyramid decoder, ℱ_4^out = NFE( NFE(ℱ^fus_4)), ℱ_3^out = NFE( NFE( Cat(ℱ^fus_3; Up_× 2(ℱ_4^out)))), ℱ_2^out = NFE( NFE( Cat(ℱ^fus_2; Up_× 2(ℱ_3^out)))), ℱ_1^out = NFE( NFE( Cat(ℱ^fus_1; Up_× 2(ℱ_2^out)))). Meanwhile, we add deep supervisions <cit.> to the outputs ℱ^out_i, i ∈{1,2,3,4}, of the cascaded pyramid decoder to speed up convergence. The predicted saliency maps can be formulated as, P_i= Up_ori( Conv_1 × 1( ReLU( BN( Conv_3 × 3(ℱ^out_i))))), i ∈{1,2,3,4}, where P_i represents the predictions in the i-th stage, Up_ori(·) denotes upsampling the feature maps to the original resolution of the input image. Note that only P_1 is used as the final saliency map, while other three predictions are omitted in the test phase. §.§ Loss Function For each of the decoder stage, we adopt the hybrid loss function <cit.> as the loss function, which is a combination of binary cross-entropy (BCE) loss <cit.>, SSIM loss <cit.> and IoU loss <cit.>. It can be defined as below, ℒ_i^hybrid = ℒ_i^BCE + ℒ_i^SSIM + ℒ_i^IoU, i ∈{1,2,3,4}. BCE loss measures the pixel-level difference between the predicted saliency map P∈ [0,1 ]^W × H and ground truth G∈{ 0,1 } ^W× H, which can be denoted as, ℒ^BCE = - ∑_w,h=1^W,H[G_whlog(P_wh) + (1 - G_wh)log(1 - P_wh)]. SSIM loss evaluates the structural similarity, which is defined as, ℒ^SSIM = 1 - (2μ _Pμ _G + C_1)(2σ _PG + C_2)/(μ _P^2 + μ _G^2 + C_1)(σ _P^2 + σ _G^2 + C_2), where μ _P, μ _G and σ _P, σ _G are the means and the standard deviations of the two patches from the predicted saliency map P and the ground truth G, respectively. σ _PG is their covariance. 
Additionally, C_1 = 0.01^2 and C_2 = 0.03^2 are set to avoid the numerator being divided by zero. IoU loss is adopted to capture fine structures in image level, which is computed as, ℒ^IoU = 1 - ∑_w,h=1^W,HP_whG_wh/∑_w,h=1^W,H (P_wh + G_wh - P_whG_wh ). As a result, the total loss can be represented as, ℒ_total= ∑_i=1^4ℒ_i^hybrid(P_i,G), i ∈{1,2,3,4}. All the predicted saliency maps and the ground truths have the same resolution as original input images. § DISCUSSION §.§ The novelty of the proposed HOSF and HOCF The key of RGB-D SOD lies in the complementary information between the two modalities, and high-order information has been proven to distinguish subtle differences between images in some other computer vision tasks. Although high-order attention mechanism has been proposed for many years, it is often only adopted for single modality feature enhancement. The cross-modality higher-order attention obtains relatively less research interests. To the best of our knowledge, we are the first to use higher-order statistical information for this cross-modality vision task, namely RGB-D SOD. As we can see in the ablation experiment in Section <ref>, HOSF and HOCF actually improve the performance. §.§ The choices of backbones in RGB branch and depth branch In this paper, we employ PVTv2 and ResNet-101 as the backbones for the RGB branch and the depth branch, respectively. In the preliminary experiment, we found that this implementation achieved the best results. The reason may be that RGB images contain more valuable information. If both streams use PVTv2 as encoder identically, there will be a slight decrease in performance. Compared with other transformer-based SOD methods <cit.>, we still achieved competitive results. In other fields of computer vision, more advanced backbone networks are naturally used to improve the performance <cit.>. § EXPERIMENTS In this section, we first introduce the experimental setup and implementation details. Then, we compare the proposed HODINet with other state-of-the-art RGB-D SOD methods on seven benchmark datasets and analyse the experimental results. The ablation studies are also conducted to verify the effectiveness of each component. Finally, some failure cases are discussed. §.§ Datasets and Evaluation Metrics Datasets. To demonstrate the generalization capability of the proposed model, we conduct experiments on seven publicly available datasets. NLPR <cit.> includes 1,000 stereo images from 11 types of indoor and outdoor scenes. DUT-RGBD <cit.> contains 1,200 images captured by Lytro Illum camera. NJUD <cit.> is composed of 1,985 image pairs from 3D movies and photographs. STEREO <cit.> collects 1,000 pairs of binocular images from the Internet. SSD <cit.> and LFSD <cit.> are two small-scale datasets with only 80 and 100 image pairs, respectively. SIP <cit.> involves 929 images with high quality depth maps. It covers many challenging scenes of salient objects and persons. Following the same settings as previous work <cit.>, we use a combined set of three datasets to train the proposed model, including 700 images from NLPR, 800 pairs from DUT-RGBD and 1,485 samples from NJUD. The remaining images with corresponding depth maps in these datasets are used for test. Evaluation metrics. We adopt four widely used metrics to comprehensively compare our HODINet with other methods, including mean absolute error (MAE) <cit.>, S-measure (S_α) <cit.>, max F-measure (F_β) <cit.> and max E-measure (E_ξ) <cit.>. 
MAE calculates the mean absolute difference in pixel-level between the predicted saliency map and the ground truth. The S-measure is designed to evaluate the structural similarities in object-level and region-level. Both F-measure and E-measure first obtain the binary saliency maps by varying the thresholds. The max F-measure computes the harmonic mean of average precision and average recall in multiple thresholds, while E-measure utilizes global image-level and local pixel-level information to measure the converted binary maps. §.§ Implementation Details We conduct all experiments with an NVIDIA GeForce RTX 2080Ti GPU on the Pytorch framework. The transformer-based and CNN-based backbones are pretrained on ImageNet <cit.>. The other layers are initialized to the default settings in Pytorch. During the training and test phase, we simply copy the input depth maps to three channels and resize all images to 256×256. Besides, multiple data augmentations, such as random cropping, flipping, rotating and color enhancement, are used in the training phase to avoid overfitting. We use the Adam optimizer <cit.> to train the proposed network in an end-to-end manner with a batch size of 4 for 100 epochs. The initial learning rate is 1e-4 and then decreases by a factor of 0.9 every epoch. The whole training procedure takes about 5 hours for convergence. The inference speed is around 35 FPS when testing an image with a size of 256×256 on a single NVIDIA RTX 2080Ti. §.§ Comparison With State-of-the-art Methods In this subsection, we compare the proposed HODINet with 24 state-of-the-art RGB-D SOD methods, including SMAC <cit.>, CDINet <cit.>, DCF <cit.>, DSA2F <cit.>, DQSD <cit.>, DRLF <cit.>, HAINet <cit.>, CDNet <cit.>, RD3D <cit.>, DRSD <cit.>, S3Net <cit.>, ATSA <cit.>, CoNet <cit.>, DANet <cit.>, BBSNet <cit.>, A2dele <cit.>, SSF <cit.>, UCNet <cit.>, S2MA <cit.>, JLDCF <cit.>, D3Net <cit.>, DMRA <cit.>, CPFP <cit.>, and TANet <cit.>. We directly report the quantitative results in the published papers if accessible. For the other baselines, we calculate the evaluation metrics by the saliency maps provided by the authors. All the metrics are calculated by the official evaluation tools. §.§.§ Quantitative Analysis Table. <ref> illustrates the quantitative results of our method. We calculate four evaluation metrics on seven datasets for comprehensive comparisons. Higher values of S_α, F_β and E_ξ indicate better performance. On the contrary, MAE is the opposite. We place other baselines in reverse chronological order from top to bottom. Compared with other state-of-the-art methods, the proposed HODINet obtains comparable performance. Concretely, the results of HODINet are the best under all the evaluation metrics, except that F_β on LFSD and E_ξ on SSD are the second best. Especially on the SIP dataset, the performance gain of our method achieves 2.8% of F_β, 1.6% of S_α, 1.0% of E_ξ and 20.4% of MAE score, when comparing to the second best approach. Besides, we also improves 1.4% of F_β, 1.1% of S_α, 1.4% of E_ξ and 24.1% of MAE score on the DUT-RGBD dataset. As a consequence, all the results demonstrate the effectiveness of our model. §.§.§ Qualitative Analysis For qualitative comparison, we provide some representative saliency maps predicted by HODINet and other SOTA methods in Fig. <ref>. 
These samples include a variety of challenging scenarios, such as fine structures (1st row), fine-grained details (2nd row), large objects (3rd-4th rows), small objects (5th row), poor-quality depth maps (6th-7th rows), multiple objects (8th row), low contrast (9th row) and complex scenes (10th-11th rows). For example, we achieve structural integrity and internal consistency of the salient objects in the 2nd, 3rd and 7th images. On the contrary, other methods either miss detailed parts or mistakenly recognize the background as the foreground. §.§ Ablation Study To further investigate the effectiveness of each main component in the proposed HODINet, we conduct a series of ablation studies. These mainly include four aspects: 1) why bridging transformer-based and CNN-based networks to extract features of RGB and depth modalities; 2) how to effectively utilize the high-order attention mechanisms to discrepantly interact with different stages of feature maps; 3) whether or not can we replace the cascaded pyramid reconstruction network (CPRN) with convolutions and upsamplings to decode feature maps directly; 4) how much does the hybrid loss contribute to the performance. Note that we only modify one component at a time and retrain the networks with the same settings in Section <ref>. Since the trends of experimental results found on these seven datasets are similar, we only reported ablation results on NLPR, DUT-RGBD, and LFSD in this subsection. §.§.§ the necessity of using CNN-based and transformer-based networks as backbones The backbones for RGB and depth streams focus on different aspects in extracting features. In this subsection, we conduct experiments to verify the impact of different backbone choices. For example, if we employ CNN-based model to encode both RGB images and depth maps, the performance will decrease. We denote this attempt as “CNN-CNN" in Table. <ref>. The reason can be explained as that PVTv2 enhances the local continuity in RGB images by overlapping patch embedding <cit.>. However, compared with other CNN-based methods in Table. <ref>, we still obtain comparable results, when only using CNN-based backbones for encoding both RGB and depth streams. If we apply transformer-based backbone to the depth stream, the performance is slightly lower than that of CNN-based model. Fig. <ref> shows the comparison of different backbone choices. §.§.§ the significance of the HOSF and HOCF modules Complementary information between RGB images and depth maps is critical in RGB-D SOD. High-order relationship has shown its effectiveness in modeling tiny differences between images <cit.>. In our framework, HOSF and HOCF play an important role in fusing cross-modality features. We directly substitute them with two convolutions and a concatenation operation, which are denoted as “w/o HOSF” and “w/o HOCF” in Table. <ref>. On top of that, we also exchange the sequence of HOSF and HOCF modules to investigate whether they have the same effect on different stages. Under different fusion strategies, we observe varying degrees of performance degradation in the above three datasets. This study fully indicates that the proposed high-order attention mechanisms capture critical information well in both spatial and channel dimensions. The representative examples are visualized in Fig. <ref>. §.§.§ the effectiveness of the CPRN To demonstrate the importance of the proposed CPRN, we remove all cross-stage connections and directly predict the final feature maps. 
Specifically, we employ convolutions to reduce the number of channels to one. Then, the predictions are upsampled to the original input size. The results are shown as “w/o CPRN” in Table <ref>. Compared to the complete HODINet, we find that the performance of this variant degrades severely on all three datasets. The reason is that the top-down integration pathway decodes adjacent feature maps gradually, preserving consistent saliency information. In the penultimate column of Fig. <ref>, we show some qualitative examples for this setting. §.§.§ the usefulness of the hybrid loss Several loss functions have been used in the RGB-D SOD task, achieving remarkable results <cit.>. The hybrid loss <cit.> improves the integrity of salient objects with sharp and clear boundaries. In this ablation experiment, we replace the hybrid loss with the binary cross-entropy (BCE) loss. As shown in the last row “only ℒ^BCE” in Table <ref>, we can see significant improvements due to the hybrid loss, especially on the E_ξ metric and the MAE score. Fig. <ref> shows some qualitative comparisons of the BCE loss and the hybrid loss. §.§ Failure Cases and Analyses It is also worthwhile to analyze the reasons for failure cases. Fig. <ref> shows a few samples where our model fails. First, the performance of the proposed method decreases when the object is composed of fine structures. Since these contiguous pixels are easily adjacent to backgrounds, we cannot obtain ideal boundaries, e.g., the first row of Fig. <ref>. In addition, as shown in the second and third rows of Fig. <ref>, blurry depth maps provide little useful information, while the salient objects do not have explicit semantic information in the RGB images. In this circumstance, our method cannot produce satisfactory results. Finally, in some scenes with multiple different objects, our model fails to detect the salient objects due to contradictory depth cues. The fourth row of Fig. <ref> is an example. § CONCLUSION In this paper, we propose a novel high-order discrepant interaction network (HODINet) for RGB-D SOD. In the proposed framework, we adopt transformer-based and CNN-based architectures to discrepantly extract features from different modalities. Inspired by the recent success of high-order representations in other visual tasks, we design high-order spatial fusion (HOSF) and high-order channel fusion (HOCF) modules to select the most effective information. A cascaded pyramid reconstruction network (CPRN) is also employed to gradually decode the feature maps in a top-down pathway. The experimental results demonstrate the effectiveness of the proposed model. Compared with 24 state-of-the-art methods, HODINet achieves competitive performance on seven popular datasets. CRediT authorship contribution statement Kang Yi: Software, Data Curation, Writing - Original Draft. Jing Xu: Supervision, Resources. Xiao Jin: Conceptualization, Methodology, Software, Writing - Original Draft, Writing - Review & Editing. Fu Guo: Validation. Yan-Feng Wu: Validation. In the final published version, Xiao Jin has renounced the authorship of this manuscript.
http://arxiv.org/abs/2307.03279v1
20230706203035
TEASER: Simulation-based CAN Bus Regression Testing for Self-driving Cars Software
[ "Christian Birchler", "Cyrill Rohrbach", "Hyeongkyun Kim", "Alessio Gambi", "Tianhai Liu", "Jens Horneber", "Timo Kehrer", "Sebastiano Panichella" ]
cs.SE
[ "cs.SE" ]
TEASER: Simulation-based CAN Bus Regression Testing for Self-driving Cars Software Christian Birchler, Cyrill Rohrbach, Hyeongkyun Kim, Alessio Gambi, Tianhai Liu, Jens Horneber, Timo Kehrer and Sebastiano Panichella ============================================================= Software systems for safety-critical systems like self-driving cars (SDCs) need to be tested rigorously. In particular, electronic control units (ECUs) of SDCs should be tested with realistic input data. In this context, a communication protocol called Controller Area Network (CAN) is typically used to transfer sensor data to the SDC control units. A challenge for SDC maintainers and testers is the need to manually define the CAN inputs that realistically represent the state of the SDC in the real world. To address this challenge, we developed TEASER, a tool that generates realistic CAN signals for SDCs obtained from sensors of state-of-the-art car simulators. We evaluated TEASER based on its integration capability into the DevOps pipeline of our industrial partner, a company in the automotive sector. Concretely, we integrated TEASER into a Continuous Integration (CI) pipeline configured with Jenkins. The pipeline executes the test cases in simulation environments and sends the sensor data over the CAN bus to a physical CAN device, which is the test subject. Our evaluation shows the ability of TEASER to generate and execute CI test cases that expose simulation-based faults (using regression strategies); the tool produces CAN inputs that realistically represent the state of the SDC in the real world. This result is of critical importance for increasing the automation and effectiveness of simulation-based CAN bus regression testing for SDC software. Tool: <https://doi.org/10.5281/zenodo.7964890> GitHub: <https://github.com/christianbirchler-org/sdc-scissor/releases/tag/v2.2.0-rc.1> Documentation: <https://sdc-scissor.readthedocs.io> Autonomous systems, Regression Testing, Simulation Environment, CAN Bus § INTRODUCTION In recent years, with the deployment of autonomous systems such as self-driving cars (SDCs) and unmanned aerial vehicles, several accidents have happened <cit.>. These incidents underline the importance of testing safety-critical systems such as SDCs. Using simulation environments to test SDCs brings several advantages over real-world testing in the field, especially with respect to reproducibility, safety, and determinism of the test cases. However, testing in simulation is costly in terms of computational power and time; therefore, it must be done effectively. Testing on the system level of SDCs focuses on the correct interaction of different components of the vehicle, such as the engine control module, transmission control module, brake control module, etc. Those components, also known as electronic control units (ECUs), interact with each other via a common protocol. In the automotive domain, the CAN bus protocol is a widely used communication standard for ECUs <cit.>. The CAN bus allows different ECUs in a vehicle to communicate over a shared bus system using a standardized protocol <cit.>. The main challenge for SDC maintainers and testers is to generate realistic test CAN signals that accurately reflect the state of an SDC in the real world, since CAN signals are still generated manually for testing purposes nowadays (e.g., at our industrial partner). Research on testing with the CAN bus focuses on security, model-based testing <cit.>, and CAN queuing <cit.>. Research on CAN signal generation based on simulation environments, however, has mainly been conducted outside the SDC domain <cit.>.
Hence, to the best of our knowledge, there is no tool that supports regression testing for SDC software on ECUs based on the CAN bus protocol with realistic input data. We aim to do simulation-based regression testing for self-driving cars and their ECUs by using different simulators and the vehicles' CAN bus system. To enable research on this problem, we developed TEASER (simulaTion basEd cAn buS tEsting), a tool for simulation-based CAN bus testing that translates simulated sensor data of an SDC, obtained from a simulation environment, for CAN bus transmission. We conjecture that the use of sensor data from multiple different simulation environments produces more realistic CAN signals for testing, which helps to detect software defects of ECUs. Furthermore, TEASER mitigates the currently common practice of manually generating CAN signals to test ECUs (as done by our industrial partner). The contribution of this paper is threefold: [(i)] * TEASER is publicly available on GitHub as a feature component of SDC-Scissor <cit.> with a GPLv3 [<https://www.gnu.org/licenses/gpl-3.0>] license. * TEASER reduces the time for generating realistic CAN bus signals for testing CAN devices, as demonstrated at our industrial partner. * We qualitatively evaluated the usefulness of TEASER in the industrial setting of our partner by integrating it into their DevOps pipeline for testing a physical CAN device. § THE TOOL §.§ Architecture overview and main scenarios The high-level architecture of the system is illustrated in Figure <ref>. TEASER is fully integrated as a component in SDC-Scissor <cit.>, which is a tool that uses machine learning to select simulation-based test cases for SDCs, and extends the existing tool with CAN bus functionalities. With TEASER we can generate CAN signals based on simulated scenarios from different simulators such as BeamNG.tech and CARLA. Therefore, TEASER enables regression testing based on CAN signals that are realistically simulated by the virtual environments. §.§ Simulation environments: BeamNG.tech and CARLA TEASER supports two simulation environments to generate CAN signals from. The first simulator is BeamNG.tech, which simulates soft-body physics behavior in its virtual environment. The second simulator is CARLA. In contrast to BeamNG.tech, CARLA simulates rigid-body physics behavior. Both simulators are widely used in academia and in practice <cit.>. §.§ Approach and technological overview TEASER's main objective is to extend the test runner of SDC-Scissor to enable CAN bus testing. The tool uses two open-source Python libraries: the python-can [<https://github.com/hardbyte/python-can>] and cantools [<https://github.com/cantools/cantools>] packages. The python-can library allows communication with the CAN bus over specific interfaces (e.g., sockets). Complementary to the first package, the cantools library provides functionality to compose the CAN messages to send on the CAN bus. Specifically, cantools allows the user to specify a CAN database file, which defines how signals are encoded into CAN messages. Listing <ref> illustrates how the wheel speed, throttle, brake, and steering angle are encoded in a CAN message by specifying it in a CAN database file.
[caption=Sample entries of a CAN database file, label=lst:can-dbc]
...
BO_ 177 sampleFrame2: 4 Vector__XXX
 SG_ wheelspeed : 16|16@1+ (0.2,0) [0|13107] "rpm" Vector__XXX
BO_ 161 sampleFrame1: 7 Vector__XXX
 SG_ throttle : 16|16@1+ (0.0001,0) [0|1] "
 SG_ brake : 0|16@1+ (0.0001,0) [0|1] "
 SG_ steering : 32|17@1- (0.01,0) [-655.36|655.35] "degree" Vector__XXX
...
In a nutshell, all implementations were done in the context of the label-tests subcommand of SDC-Scissor.
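To make the encoding step concrete, the following sketch shows how simulated sensor values could be packed into CAN frames with the two libraries named above and sent on the configured channel. The DBC file, frame and signal names, and bus settings mirror Listing <ref> and the configuration file; the helper function and the way TEASER wires this into the test runner are assumptions for illustration only.
[caption=Illustrative use of cantools and python-can for simulated sensor data, label=lst:can-sketch]
# Hedged sketch: frame/signal names follow the sample DBC; the helper is illustrative.
import can
import cantools

db = cantools.database.load_file("beamng_pipeline_sample.dbc")
drive_frame = db.get_message_by_name("sampleFrame1")  # throttle, brake, steering
wheel_frame = db.get_message_by_name("sampleFrame2")  # wheelspeed

# For socketcan the bitrate is configured on the OS interface (e.g., vcan0);
# TEASER reads interface, channel, and bitrate from its YAML configuration.
bus = can.Bus(interface="socketcan", channel="vcan0")

def send_simulated_state(throttle, brake, steering_deg, wheelspeed_rpm):
    """Pack one simulation step into CAN messages and put them on the bus."""
    data1 = drive_frame.encode({"throttle": throttle, "brake": brake, "steering": steering_deg})
    data2 = wheel_frame.encode({"wheelspeed": wheelspeed_rpm})
    bus.send(can.Message(arbitration_id=drive_frame.frame_id, data=data1, is_extended_id=False))
    bus.send(can.Message(arbitration_id=wheel_frame.frame_id, data=data2, is_extended_id=False))

# Example: values as they might be sampled from the simulator at one time step.
send_simulated_state(throttle=0.35, brake=0.0, steering_deg=-4.2, wheelspeed_rpm=850)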
The input number of arguments for the subcommand is increased by CAN-specific information as illustrated in Listing <ref>. [label=lst:can-args, caption=Configuration file with highlighted CAN arguments] command: 'label-tests' options: home: 'C:.tech.v0.24.0.2.drive-0.24.0.2.13392' user: 'C:.drive' tests: 'C:-scissor' rf: 1.5 oob: 0.3 max_speed: 50 interrupt: false obstacles: false bump_dist: 20 delineator_dist: null tree_dist: null field_of_view: 120 (*@canbus: true@*) (*@can_stdout: true@*) (*@can_dbc: '/path/to/beamng_pipeline_sample.dbc'@*) (*@can_dbc_map: '/path/to/dbc_map_beamng.json'@*) (*@can_interface: 'socketcan'@*) (*@can_channel: 'vcan0'@*) (*@can_bitrate: 250000@*) (*@influxdb_bucket: 'your_InfluxDB_bucket'@*) (*@influxdb_org: 'your_InfluxDB_organization'@*) § USING TOOL §.§ Requirements The following external software systems are required: [(i)] * Windows 10 * Python 3.10 * Pip *  [<https://beamng.tech/>] v0.24 and/or  [<https://carla.org/>] * Poetry [<https://python-poetry.org/>] (optional), and * InfluxDB [<https://www.influxdata.com/>] (optional) The following instructions assume a full installation of the mentioned requirements on a Windows 10 machine. §.§ Instructions Get with the directly from Zenodo [<https://doi.org/10.5281/zenodo.7964890>], GitHub [<https://github.com/christianbirchler-org/sdc-scissor/releases/tag/v2.2.0-rc.1>] or PyPI [<https://pypi.org/project/sdc-scissor/2.2.0rc1/>] and run the component by invoking the subcommand. For more details, also consolidate the demonstration video [<https://doi.org/10.5281/zenodo.7965263>]. [] git clone https://github.com/christianbirchler-org/sdc-scissor.git cd sdc-scissor poetry install poetry run sdc-scissor label-tests [args...] An overview of all subcommands with their options is provided when invoking the flag. [] poetry run sdc-scissor label-tests –help ... Specifying the commands and their options can also be done inside a configuration file as illustrated in Listing <ref>. To invoke with the configuration file the option is provided: [] poetry run sdc-scissor -c /path/to/config.yml extends the existing argument options; we need to use the highlighted arguments in Listing <ref>. Table <ref> is an overview of the arguments with their according data type and description. If an InfluxDB instance is in use, then the respective API access token and host must be specified as environment variables. provides the option to declare the environment variable in a file: [] INFLUXDB_TOKEN="SeCrEtToKeN" INFLUXDB_URL="https://influxdb.example.org:PORT" Alternatively, the environment variables can be set explicitly from the Windows control panel. The component provides different options to output the CAN messages: [i)] * Standard output () * Physical CAN interface defined in the configuration * dumping the signals to an InfluxDB instance, or * any combination of the previous possibilities. This output behavior is achieved through implementing them by applying the decorator design pattern. § EVALUATION As demonstrated in <cit.>, achieves an accuracy of 70%, a precision of 65%, and a recall of 80% in selecting tests leading to a fault and improved testing cost-effectiveness. The usefulness of with in an industrial context is also demonstrated and explained <cit.>, where a tester at requires two days to produce CAN signals manually for 15 test cases. 
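For completeness, the decorator-based output design described in the previous section can be sketched as follows; all class and method names here are hypothetical and only illustrate how the standard-output, CAN-bus, and InfluxDB sinks might be stacked in any combination.
[]
# Hypothetical sketch (names invented) of the decorator pattern for the
# CAN output channels; any combination of sinks can be stacked.
from abc import ABC, abstractmethod
import can

class CanOutput(ABC):
    @abstractmethod
    def emit(self, frame_name: str, signals: dict) -> None: ...

class NullOutput(CanOutput):
    def emit(self, frame_name, signals):
        pass  # base of the decorator chain

class CanOutputDecorator(CanOutput):
    def __init__(self, wrapped: CanOutput):
        self._wrapped = wrapped
    def emit(self, frame_name, signals):
        self._wrapped.emit(frame_name, signals)

class StdoutOutput(CanOutputDecorator):
    def emit(self, frame_name, signals):
        super().emit(frame_name, signals)
        print(frame_name, signals)

class CanBusOutput(CanOutputDecorator):
    def __init__(self, wrapped, bus, db):
        super().__init__(wrapped)
        self._bus, self._db = bus, db
    def emit(self, frame_name, signals):
        super().emit(frame_name, signals)
        msg = self._db.get_message_by_name(frame_name)
        self._bus.send(can.Message(arbitration_id=msg.frame_id,
                                   data=msg.encode(signals),
                                   is_extended_id=False))

# Usage (illustrative): stdout plus a physical or virtual CAN interface.
# output = CanBusOutput(StdoutOutput(NullOutput()), bus, db)
# output.emit("sampleFrame1", {"throttle": 0.42, "brake": 0.0, "steering": -12.5})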
The automation with significantly reduces the time to generate realistic CAN signals since they are generated at runtime, where on average, a single test case in simulation with requires 49 seconds <cit.>. Furthermore, the video [<https://doi.org/10.5281/zenodo.7964959>] shows the integration at use-case whose infrastructure is illustrated in Figure <ref>. At we have a simulation environment installed on a Windows machine. The simulation starts when a build job from Jenkins is triggered. When the simulation starts, the component produces the CAN frames and sends them over the CAN bus. A Raspberry Pi, which represents a physical CAN device, receives the messages. A separate application (JamaicaEDG) connects to the CAN device and displays on a dashboard the transmitted values from the CAN bus. Specifically, the speed and throttle values are represented. § IMPLICATIONS & FUTURE WORK With , regression testing for SDCs is not limited towards the fully black box approach as initially done by <cit.> by neglecting ECU components and their interactions; instead, testing of individual physical CAN devices is feasible based on input data obtained from a simulation environment to test the CAN system of SDCs. provides the technological possibility of testing ECUs individually focusing on CAN messages as input and output. Instead of manually testing CAN devices by defining specific CAN messages upfront, which is the standard industrial approach of our evaluation partner , using simulated signals sent over the CAN bus enables more realistic input data for the CAN devices since the CAN messages are based on simulated scenarios. Furthermore, with the modular architecture of , regression testing for SDCs can be enabled in co-simulation environments by implementing the given APIs. This enables future research on SDC testing involving co-simulation environments. The component extends the tool by supporting data transmission over the CAN bus to test CAN devices. We showed the usefulness of the tool in practice by integrating it into the DevOps pipeline of . The tool enables regression testing for SDC CAN devices based on signals generated from simulators such as or . We believe that enables future research on testing CAN devices and SDCs in general based on state-of-the-art simulation data. § ACKNOWLEDGEMENTS We thank the Horizon 2020 (EU Commission) support for the project COSMOS (DevOps for Complex Cyber-physical Systems), Project No. 957254-COSMOS) and the DFG project STUNT (DFG Grant Agreement n. FR 2955/4-1). IEEEtran
http://arxiv.org/abs/2307.02487v1
20230705175958
A Precursor Plateau and Pre-Maximum [O II] Emission in the Superluminous SN2019szu: A Pulsational Pair-Instability Candidate
[ "Aysha Aamer", "Matt Nicholl", "Anders Jerkstrand", "Sebastian Gomez", "Samantha R. Oates", "Stephen J. Smartt", "Shubham Srivastav", "Giorgos Leloudas", "Joseph P. Anderson", "Edo Berger", "Thomas de Boer", "Kenneth Chambers", "Ting-Wan Chen", "Lluís Galbany", "Hua Gao", "Benjamin P. Gompertz", "Maider González-Bañuelos", "Mariusz Gromadzki", "Claudia P. Gutiérrez", "Cosimo Inserra", "Thomas B. Lowe", "Eugene A. Magnier", "Paolo A. Mazzali", "Thomas Moore", "Tomás E. Müller-Bravo", "Miika Pursiainen", "Armin Rest", "Steve Schulze", "Ken W. Smith", "Jacco H. Terwel", "Richard Wainscoat", "David R. Young" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.SR" ]
We present a detailed study on SN2019szu, a Type I superluminous supernova at z=0.213, that displayed unique photometric and spectroscopic properties. Pan-STARRS and ZTF forced photometry shows a pre-explosion plateau lasting ∼ 40 days. Unlike other SLSNe that show decreasing photospheric temperatures with time, the optical colours show an apparent temperature increase from ∼15000 K to ∼20000 K over the first 70 days, likely caused by an additional pseudo-continuum in the spectrum. Remarkably, the spectrum displays a forbidden emission line even during the rising phase of the light curve, inconsistent with an apparently compact photosphere. We show that this early feature is [O II] λλ7320,7330. We also see evidence for [O III] λλ4959, 5007, and [O III] λ4363 further strengthening this line identification. Comparing with models for nebular emission, we find that the oxygen line fluxes and ratios can be reproduced with ∼0.25 M_⊙ of oxygen-rich material with a density of ∼10^-15 g cm^-3. The low density suggests a circumstellar origin, but the early onset of the emission lines requires that this material was ejected within the final months before the terminal explosion, consistent with the timing of the precursor plateau. Interaction with denser material closer to the explosion likely produced the pseudo-continuum bluewards of ∼5500 . We suggest that this event is one of the best candidates to date for a pulsational pair-instability ejection, with early pulses providing the low density material needed for the forbidden emission line, and collisions between the final shells of ejected material producing the pre-explosion plateau. supernovae: general – supernovae: individual: SN2019szu – transients: supernovae – stars: massive
§ INTRODUCTION Superluminous supernovae (SLSNe) are a class of supernovae (SNe), initially categorised as events with absolute magnitudes exceeding M<-21 mag <cit.>. In addition to their high luminosities, these events radiate ∼10^51 erg when integrated over their broad light curves <cit.>. The luminous nature of these events means we can observe them out to redshifts z>4 <cit.>, and so even though the volumetric rate of these events is ∼ 1 in a few thousand SNe <cit.>, they make up roughly 1% of the SNe discovered today <cit.>. This is aided by the wide-field surveys available which can probe the entire night sky instead of targeting only nearby massive galaxies. SLSNe were missed by earlier surveys due to their preference for metal-poor dwarf galaxies as hosts <cit.>. As more of these events have been discovered, the strict magnitude cut-off for classification has since been replaced by spectral classification around peak luminosity <cit.>, driven by events with SLSN-like spectra but intermediate luminosities <cit.>. SLSNe can further be classified into Type I and Type II SLSNe, analogous to their less luminous counterparts. Type II SLSNe often resemble lower luminosity SNe IIn, with narrow hydrogen lines and a small subset displaying broad hydrogen lines <cit.>. Interaction with circumstellar material (CSM) is thought to be the main power source for Type II SLSNe <cit.>. Type I SLSNe (often simply called SLSNe) lack hydrogen in their spectra. 
These spectra are characterised by a steep blue continuum indicative of high temperatures, and often show prominent O II absorption lines at early times <cit.>, eventually evolving to be similar to SNe Ic when at comparable temperatures <cit.>. However, a small fraction of these events, show evidence of Hα at late times <cit.> but this is not necessarily related directly to the power source at maximum light, and instead may be a product of interaction with its environment at scales ≳ 10^16 cm <cit.>. A handful of events have also shown evidence for helium in their photospheric spectra <cit.>. One of the biggest questions remaining about SLSNe pertains to their powering mechanism. Typical hydrogen-poor SNe are powered by the decay of radioactive nickel (^56Ni), but this explanation does not seem to work for most SLSNe for a number of reasons. Powering the peak luminosities of SLSNe with nickel decay would require ∼ 5-20 M_⊙ of ^56Ni, which is too high compared to the inferred ejected mass from the light curves <cit.>. This amount of nickel could likely only be produced in Pair-Instability SNe (PISN) of stars with initial masses M ≳ 140 M_⊙ <cit.>. One of the best PISN candidates to date SN2018ibb is thought to be powered by 25-44 M_⊙ of freshly synthesised ^56Ni produced by a star with a helium core mass of 120-130 M_⊙ <cit.>. Although stars in this mass range required for PISNe have been observed, mass-loss on the main sequence makes PISN formation challenging except at very low metallicities <cit.>. However, some models suggest a magnetic field at the surface of the star could quench the mass loss for stars at solar metallicity, allowing enough mass to remain for the PI mechanism <cit.>. More problematic for SLSNe, any model producing this amount of nickel would result in a spectrum dominated by iron-group elements, which is inconsistent with the blue spectra of SLSNe <cit.>. However, these models often assume interaction between the SN ejecta and CSM is negligible which would only likely be the case if observed early enough that explosive nucleosynthesis is not affecting the layers of ejecta <cit.>. Some theories suggest an internal power source such as the spin down energy of a magnetar or an accreting black hole could power SLSNe. However the accreted mass required in the latter scenario (≫ 100 M_⊙) often exceeds the mass of any reasonable star <cit.>. In the magnetar scenario, the remnant is a fast rotating neutron star with a very strong magnetic field B ∼ 10^13-10^14 G <cit.>. Nearly 10% of newly born neutron stars have B-fields in the range 10^13-10^15 G lasting over 1000 years after their birth, and so it is plausible that these could exist to produce SLSNe <cit.>. This mechanism could explain the long duration of the light curves as the magnetar releases its rotational energy at the dipole spin-down rate, which remains high for days to weeks in this range of magnetic field strengths <cit.>. Another possible explanation to power SLSNe is interaction with CSM. This theory proposes a core collapse supernova that has large amounts of CSM created through stellar winds and ejections throughout the life of the progenitor star. The SN ejecta is able to catch up to this material because it has much higher velocities, and is rapidly decelerated if the CSM is massive enough. This creates a shock that deposits energy in the ejecta and CSM, the cooling of which can produce a bright and long-lived light curve <cit.>. 
The mass of CSM required to efficiently power a bright light curve must be comparable to the ejecta mass <cit.>, ranging from a few solar masses up to a few tens <cit.>. Getting this much mass close to the star just before explosion is difficult to explain using stellar winds, even in Wolf Rayet stars with high mass loss rates <cit.>. An alternative way to produce massive CSM is through discrete outbursts. Stars with masses in the range 70-140 M_⊙ are thought to undergo pulsational pair instability PPI eruptions <cit.>. These stars are not massive enough to experience terminal pair instability, instead the star violently expels up to tens of solar masses worth of material towards the end of its life due to this mechanism <cit.>. This has been suggested as a way to get sufficiently massive CSM to power SLSNe <cit.>. The CSM model has been questioned as the main power source for hydrogen-poor SLSNe. We would naively expect to see narrow lines in the spectra from slow moving material if interaction was at play, but this is not seen in all SLSNe <cit.>. Instead, there is evidence that CSM interaction may play a role in powering some SLSNe including late time interaction producing Hα emission <cit.>, post peak bumps in the light curves <cit.>, blue pseudo-continua, and early appearances of forbidden emission lines such as [O II] and [O III] <cit.>. Light curve and spectral modelling suggest that both central engines and CSM interaction may help to power this class of events <cit.>. Spectra taken at different phases of the SN evolution allow us to probe different regions of the ejecta. At early times the ejected material from the explosion is still optically thick and obscures the view of the inner layers. As the ejecta expands it becomes less dense, leading to more states leaving local thermal equilibrium (LTE) and lower populations of excited states, reducing the number of optically thick lines and bound-free continua. This spectral transition from photospheric to nebular is also driven by decreasing temperatures which results in fewer lines that are capable of significant cooling, and fewer excited states with enough population to provide opacity. This transition occurs typically on the timescale of hundreds of days and results in a spectrum dominated by low-lying forbidden transitions <cit.>. This contradicts observations in which some SLSNe have shown these forbidden emission lines early on during their photospheric phase. This includes SN2018ibb which displayed signs of a possible [Ca II] λλ7291,7323 at -1.4 days before peak, becoming prominent by 30 days later <cit.>. Other SLSNe have also shown signs of early forbidden emission lines including the earliest spectra of SN2007bi <cit.>, and LSQ14an <cit.>, in which this line ∼50 days after peak was also attributed to [Ca II]. This suggests some lower density regions exist in the ejecta or their surroundings. In principle these lines could have appeared even earlier if earlier spectra were obtained, challenging our understanding of the structure of this massive ejecta. In this paper we present and analyse SN2019szu, a slowly evolving SLSN that showed forbidden emission lines remarkably soon after the time of explosion, at least 16 days before maximum light. We identify these lines as singly- and doubly-ionized oxygen, arising in a low density, hydrogen-poor CSM, and use this to place important constraints on the progenitor of this event. The structure of this paper is as follows. Section <ref> outlines the data collected for this object. 
Section <ref> covers the analysis of the host galaxy, and the photometric and spectroscopic data collected for the target. In Section <ref>, we discuss spectral models to fit this event as well as mosfit models of the light curve. We then discuss the implications of these results and how they fit into our understanding of SN2019szu in Section <ref>. Lastly, in Section <ref> we present our conclusions. § OBSERVATIONS §.§ Discovery and Classification The Asteroid Terrestrial-impact Last Alert System (ATLAS) project <cit.> discovered SN2019szu with the designation ATLAS19ynd on 2019-10-21. The ATLAS-HKO (Haleakala) unit detected the supernova in the c band at 19.4 mag following a shallow non-detection 2 days prior at a limiting magnitude of o=17.7 mag <cit.>. The transient was identified on multiple images and as it was coincident with a faint host galaxy (see Section <ref>), the ATLAS Transient server reported it as a SN candidate <cit.>. An earlier detection was made by the Zwicky Transient Facility (ZTF) on 2019-10-19 under the name ZTF19acfwynw <cit.>, with the data visible in the Lasair broker [https://lasair-ztf.lsst.ac.uk/object/ZTF19acfwynw/] <cit.>. Gaia also detected this transient on 2019-11-02 with an internal name Gaia19fcb <cit.>. It was later classified as a SLSN-I by <cit.> as part of the C-SNAILS survey at the Liverpool Telescope (LT). It was initially given this classification using the host galaxy redshift of z=0.213 (based on using the [O III] doublet emission at 4959 and 5007 from the SLSN spectrum), and therefore an absolute magnitude M=-21 mag indicating a very luminous event. This redshift corresponds to a distance of d=1060 Mpc assuming a Planck cosmology <cit.>. The absolute magnitude coupled with the very blue shape of the spectral continuum, the dwarf nature of the host galaxy and its strong emission lines similar to other SLSN hosts <cit.> cemented the SLSN designation. However, since the initial spectrum did not cover Hα, later spectra were needed to confirm its lack of hydrogen and type I designation. SN2019szu was also included as part of a large population study by <cit.>. This sample consisted of 78 H-poor SLSNe detected by ZTF over the span of 3 years. §.§ Photometry Observations of this target were obtained from a number of telescopes. Follow-up observations in gri were obtained with Las Cumbres Observatory (LCO) using a number of their 1m telescopes across multiple observatories in the network. After  350 days, deeper images were obtained with the ESO New Technology Telescope (NTT) in gri with the ESO Faint Object Spectrograph and Camera (EFOSC2) as part of the Extended Public ESO Spectroscopic Survey of Transient Objects <cit.>. These images were reduced using the ePESSTO pipeline for the NTT images <cit.>, and the BANZAI[https://github.com/LCOGT/banzai] pipeline for the LCO images. Photometry on these images was performed with the use of photometry sans frustration[https://github.com/mnicholl/photometry-sans-frustration], a python wrapper for point-spread function (PSF) photometry using Astropy and Photutils. Zeropoints were calculated by cross matching sources in the field with the Pan-STARRS catalog <cit.>. The photometry in gri was template subtracted using archival PS1 images as templates. This was especially important in the late time photometry where we believe the host plays a significant contribution to the flux detected. 
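Schematically, the zeropoint calibration against Pan-STARRS field stars amounts to the following; this is a simplified sketch rather than the actual pipeline code, and the star fluxes and magnitudes below are invented for illustration only.

import numpy as np

def zeropoint_from_field_stars(instrumental_flux, catalog_mag):
    # Median zeropoint from stars matched to a reference catalogue
    # (e.g. Pan-STARRS), given their background-subtracted counts.
    instrumental_mag = -2.5 * np.log10(np.asarray(instrumental_flux))
    return np.median(np.asarray(catalog_mag) - instrumental_mag)

def calibrated_magnitude(sn_flux, zeropoint):
    # Apparent magnitude of the transient from its PSF-fit counts.
    return zeropoint - 2.5 * np.log10(sn_flux)

# Illustrative numbers only: three field stars and one SN measurement.
zp = zeropoint_from_field_stars([52000.0, 8100.0, 23000.0], [16.8, 18.8, 17.7])
m_sn = calibrated_magnitude(1500.0, zp)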
The Neil Gehrels Swift Observatory (hereafter ) began observations of the field of SN2019szu on the 21st November 2019 using the UV Optical Telescope <cit.>. SN2019szu was detected in all 6 optical/UV UVOT filters. Summed images were created by combining individual exposures taken during observations. Source counts were extracted from these summed images using a source region of 5" radius. Background counts were extracted using a circular region of radius 20" located in a source-free region. The count rates were obtained from the image lists using the Swift tool uvotsource. The count rates were then converted to AB magnitudes using the UVOT photometric zero points <cit.>. Data were also collected from the ZTF forced photometry server <cit.> and the ATLAS forced photometry server <cit.>. ZTF g, r, and ATLAS c band data points were binned together in a daily cadence whereas the o band data were binned together every 2 days to reduce noise. Following the discovery of SN2019szu, we examined forced photometry at the location of the SN in Pan-STARRS1 and Pan-STARRS2 images obtained in survey mode <cit.> from MJD 57362 (2015-12-06) onwards. Typically, 4×45 second exposures are obtained in survey mode in one of w, i or z filters on any given night, and photometric calibration and difference imaging is performed via the image processing pipeline <cit.>. The w filter is a broadband composite g+r+i, with measured AB magnitudes roughly equivalent to those in the r band. The individual measurements for each nightly quad were stacked in order to improve signal to noise, and to obtain deeper upper limits in case of a non-detection. All absolute magnitudes are calculated using the distance modulus and a simple K-correction of 2.5 log(1+z). §.§ Polarimetry An epoch of polarimetry was also obtained on 17-01-2020 (31 days after peak in rest frame) using the Alhambra Faint Object Spectrograph and Camera (ALFOSC) instrument at the Nordic Optical Telescope (NOT) in the V band. The reduction and analysis is described in <cit.>. The signal-to-noise ratio (S/N) is quite low compared to values traditionally needed for linear polarimetry. We find S/N ≳ 100 only for a small aperture size of ≤9 pixels, and falls quickly to  35 for aperture sizes above 20 pixels (larger apertures are necessary to account for any difference in point spread function between the ordinary and extraordinary beams). Although we measure an overall polarisation of P=3.1±1.3% for SN2019szu, compared to a an interstellar polarisation P_ ISP = 0.70±0.21% measured from a bright nearby star, the low S/N of the observation precludes a strong claim of polarized emission from SN2019szu. This is described in detail by <cit.>. §.§ Spectroscopy An initial spectrum of SN2019szu was obtained using the Spectrograph for the Rapid Acquisition of Transients (SPRAT) instrument on the Liverpool Telescope (LT). Spectroscopic follow-up observations of this target were then undertaken by ePESSTO using the NTT with EFOSC2 <cit.>. Most of these spectra were obtained with Gr#13 except for one epoch with Gr#16 on 2020-01-17 to extend the wavelength coverage redwards. The latter was averaged in the overlapping region with the Gr#13 spectrum obtained one day prior. Another spectrum was obtained on 2020-08-21 from MMT using the Binospec spectrograph covering a similar wavelength range to Gr#13 <cit.>. A full breakdown of observations is given in Table <ref>. 
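For reference, the absolute-magnitude convention quoted above (distance modulus plus a simple 2.5 log(1+z) K-correction) can be written as a short helper; this is a minimal sketch rather than the exact calculation used in the paper, the precise distance modulus depends on the adopted cosmology, and the sign convention is chosen so that the w ≈ 21.6 mag pre-explosion plateau discussed later maps to M_w ≈ -18.7 mag.

import numpy as np

def absolute_magnitude(m_app, d_mpc=1060.0, z=0.213):
    # Distance modulus for d = 1060 Mpc is ~40.1 mag; the simple
    # K-correction term 2.5*log10(1+z) is ~0.21 mag at z = 0.213.
    mu = 5.0 * np.log10(d_mpc * 1e6 / 10.0)
    k_simple = 2.5 * np.log10(1.0 + z)
    return m_app - mu - k_simple

print(absolute_magnitude(21.6))   # ~ -18.7, the plateau absolute magnitude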
All data were reduced using dedicated instrument-specific pipelines that apply de-biasing, flat-fielding, trace extraction, wavelength calibration and flux calibration using standard stars observed with the same setup. Spectra were then also flux corrected using both r and i-band photometry. § ANALYSIS §.§ Host Galaxy The host galaxy of SN2019szu is a faint dwarf detected in the Pan-STARRS catalogue (PSO J002.5548-19.6923). There is no catalogued redshift or distance information and so estimates for the redshift were derived from the [OIII] λλ4959,5007 narrow host lines observed in the SN spectra. This gave a redshift of z=0.213±0.0003 which is also in agreement with Hα and Hβ measurements from latter spectra. The host is detected only in the Pan-STARRS r and i bands with a Kron magnitude r=21.75 ± 0.08 mag <cit.>. It is not catalogued in the NASA Extragalactic Database <cit.>. In <cit.>, the median SLSN host galaxy had M_B = -17.10 ± 1.45 mag. The host for SN2019szu has an absolute magnitude M_r = -18.38 ± 0.08 mag (at this redshift, the r-band is similar to rest-frame B). This indicates the SN2019szu host galazy is well within the normal range of host luminosities (a proxy for masses) found in <cit.>. Hydrogen line ratios were measured based on the narrow emission lines observed in the late-time spectra of SN2019szu in order to estimate any reddening due to the host galaxy. This gave a ratio of Hα/Hβ = 3.53 ± 0.13 for the spectrum at +211 days, and Hα/Hβ = 4.52 ± 0.45 for the spectrum at +262 days. The Hγ/Hβ value could only be calculated for the +211 day spectrum and resulted in a value of Hγ/Hβ = 0.73 ± 0.17. Both of these values are above the expected ratios of Hα/Hβ = 2.86, and Hγ/Hβ = 0.47 <cit.>. While the Hγ/Hβ ratio supports negligible extinction, the Hα/Hβ ratio indicates significant reddening from the host, which is unexpected for a galaxy of this size. We can quantify the relation between Balmer decrement and colour excess E(B-V) described by <cit.> giving E(B-V) = 0.29 ± 0.05 and optical extinction of A_V = 1.2 ± 0.3. This is much larger than typically inferred for SLSN host galaxies, which is generally <0.5 mag and averages ∼0.1 mag <cit.>. It is therefore likely that our measured Balmer decrement is unreliable, due to contamination from the SN spectrum. A low host extinction is also supported by light curve models (Section <ref>) and the lack of NaI D λλ5890, 5896 absorption which is used as a indicator of dust extinction <cit.>. We therefore neglect host extinction in our analysis, as applying more host extinction didn't affect our spectral measurements significantly. The Milky Way extinction in the direction of SN2019szu is E(B-V)=0.018 <cit.>, and this correction was applied to all spectra. The metallicity of the galaxy was calculated from the R_23 method outlined in <cit.>, which uses the fluxes of the [O II] λ3727, [O III] λλ4959,5007, and Hβ lines. As the metallicity appeared to be in the region between the metal rich and the metal poor branches of this relation, it was calculated for both branches. The metal rich branch yields a value of 12+log(O/H)=8.36±0.07, and the metal poor branch a value of 12+log(O/H)=8.36±0.05. Both of these values are in agreement with one another, and also consistent with work by <cit.> who found SLSNe host galaxies tended to have values of 12 + log(O/H) < 8.4. Limits on the metallicity can also be placed by measuring the ratio of [N II] λ6583 to Hα. 
[N II] is not observable in SN2019szu and so using a 2σ and 3σ limit on this detection yields 12 + log(O/H) < 8.06 and 12 + log(O/H) < 8.14 respectively <cit.>. This reinforces the low metallicity nature of the host. The host for SN2019szu shows strong Hα and [O III] λ5007 emission lines. In the +211 day Binospec sepctrum, these lines have equivalent widths of 148 and 127 respectively which are lower limits due to contamination from the SN continuum. This is similar to the sample of SLSN host galaxies studied by <cit.>, where ∼50% of the sample occurred in extreme emission line galaxies. Using the star formation rate (SFR) diagnostics in <cit.> provides two different measures of the SFR. The first uses the strength of Hα and gives a value of SFR = 0.4 M_⊙ yr^-1. The second uses [O II] λ3727 and gives SFR = 0.3-0.4 M_⊙ yr^-1. Comparing to the sample of SFRs found in <cit.> which ranged from 0.01-6.04 M_⊙ yr^-1, this is a typical star formation rate for SLSNe hosts. §.§ Light Curve The multi-band light curve of SN2019szu is shown in Figure <ref> with a range spanning over 600 days in the observer frame. The event reaches a peak magnitude of M_g,peak= -21.59±0.06, which is close to the volume corrected median peak magnitude M_peak = -21.31±0.73 mag described by <cit.>, and M_g,peak = -21.14±0.75 mag calculated by <cit.> for SLSNe, where the error represents the 1σ spread. A more recent study by <cit.> found M_g,peak = -21.54^+1.12_-0.61 mag (not corrected for Malmquist bias). The main rising light curve is preceded by a plateau lasting 40 days in the rest frame. This commences at MJD 58700, and hovers around w=21.6 mag before beginning to rise after MJD 58750. This is equivalent to an absolute magnitude of M_w∼-18.7 mag and a luminosity ∼10^43 erg s^-1. This plateau was also observed in the r-band. Figure <ref> shows historical photometry in the i-band showing deep upper limits down to 22.5 mag indicating this plateau is a emergent feature. Other SLSNe have shown signs of early excesses such as SN2006oz, in which a precursor plateau lasting 10 days was observed before the full monotonic rise. This was thought to be due to a recombination wave in the surrounding CSM consistent with the transition from O III to O II <cit.>. Similarly SN2018bsz showed a slowly rising plateau lasting ∼30 days <cit.>. In LSQ14bdq the precursor peak was suggested to be caused by the cooling of extended stellar material <cit.>. In both of these cases the precursor events may have occurred after explosion, unlike in SN2019szu where the long plateau clearly precedes the explosion date inferred from the rising light curve. This feature in SN2019szu is also unusual due to its very flat nature over a long timescale which is not consistent with cooling material and may require an additional source of energy injection. This light curve rise is captured well by ZTF and ATLAS. We estimated the date of maximum light to be MJD 58826 in the g band by fitting a low-order polynomial, giving a rise time of around 80 days. We take this to be the time of peak throughout. We caution that our fits to the peak are somewhat limited by SN2019szu entering solar conjuction around 50 days after discovery. Observations resumed when it became visible again around 90 days later. The light curve appears to peak in the bluer bands first and has a rather flat shape or possible plateau at peak, which lasts longer in redder bands. In the r band this flattening lasts approximately 80 days. 
This is similar to other events such as SN2020wnt which also showed this plateau behaviour <cit.>. However, SN2020wnt also showed indications of an initially faster decline in bluer bands, which is not apparent in this event (Figure <ref>). The UV bands all show rising light curves until the solar conjunction, indicating a peak later in the evolution for these bands. To parameterise the light curve peak, the exponential rise and decline timescales were determined giving an e-folding rise and decline of τ_g-rise∼ 48 days, and τ_g-decline∼ 100 days respectively. These timescales were determined by fitting low order polynomials to the g band light curve. In general SLSNe tend to have τ_rise∼τ_decline / 2 with slower evolving events also having slower rise times <cit.>. The rise and decline timescales of SN2019szu are consistent with this expectation. Some events however show a more skewed relation such as SN2017egm, which had a fast rise time τ_g-rise∼ 20 days, and slow decline with an estimated e-folding decline time of τ_g-decline∼ 60 days <cit.>. A small bump in the light curve can be seen around MJD 59100, corresponding to ∼200 days post peak. This is not unusual for SLSNe and a large fraction of SLSNe-I show these undulations <cit.>. This undulation is observed in the gr bands, which have sufficient coverage to observe variations at this phase. As discussed in <cit.>, these undulations can be the result of collisions of the ejecta with shells or clumps of material. This interaction with circumstellar material can produce bluer colours (g-r) during the interaction due to heating of the ejecta <cit.>. An alternative theory is variation in the power output from a central engine such as the energy output from a magnetar <cit.>. Although this would produce variability on timescales shorter than the observed bumps, this can be smoothed at early times if the variation is shorter than the photon diffusion time through the ejecta. This also implies that the undulations are more likely visible at later times as the ejecta becomes more transparent – this is supported by the fact that 73% of undulations are found post peak <cit.>. A central engine can also produce variations in the ejecta opacity via increased ionisation and hence more electron scattering even with constant energy input. It does this by creating an ionisation front that propagates outwards and breaks out from the front of the ejecta leading to a rebrightening <cit.>. As the ejecta cools it can recombine, leading to a change in opacity again which could result in an observed undulation. <cit.> prefer this latter explanation for SN2019szu as this mechanism allows for a higher UV flux due to the decrease in bound-bound transitions for ionised metals. Late time observations of SN2019szu (Figure <ref> show deep upper limits in both i and w, at levels below the precursor plateau observed. This supports the idea that this plateau is related to the SN event. §.§.§ Colour The colour evolution of SN2019szu is shown in Figure <ref>. The colour was calculated using Superbol, a python package that interpolates light curves in order to perform spectral energy distribution (SED) fits <cit.>. The g-r colour was calculated using our g-band data, and interpolated r-band points from Superbol using a polynomial fit. The pre-peak colour shows a dramatic evolution to the blue, from an initial value of g-r=0.05±0.13 mag, dropping down to g-r=-0.51±0.15 mag just before maximum light. 
This is not behaviour exhibited by other SLSNe, which tend to show a general colour increase, becoming redder over time shown by the sample of events in Figure <ref>. The colour of SN2019szu, exhibits only a very gradual change in colour after peak, hovering around g-r∼-0.3 mag. This is similar to other SLSNe which show a steady blue colour around peak before evolving dramatically towards the red, consistent with fast cooling after peak. Slow SLSNe such as SN2007bi, PTF12dam, and SN2015bn show a much more gradual colour evolution to the red <cit.>, similar to the post peak evolution of SN2019szu. PS1-14bj is another event showing near constant colour but is overall a much redder event <cit.>. After the break in data, the colour appears to redden, consistent with slow cooling as seen in other SLSNe. At around 200 days post peak the colour once again begins to decrease. This turning point corresponds to the bump in the light curve mentioned in Section <ref>. This observation could be explained by an ionisation front breaking out of the ejecta <cit.>, or interaction with circumstellar material <cit.>. Both of these mechanisms would heat the ejecta and therefore create a bluer colour. This is not something seen in other bumpy light curves, for example SN2015bn does not have a dramatic colour change around its bump at +50 days <cit.>. We also caution that the late time g-r colour could be affected by the strong emission lines from the host found in the r-band wavelength range. §.§.§ Bolometric Luminosity The bolometric luminosity of SN2019szu was calculated using Superbol <cit.>, as shown in Figure <ref>. To do this the light curve in each band was interpolated to epochs with g-band data. A constant colour relation was assumed for bands with fewer data points, or by fitting a low-order polynomial to capture the general shape of the light curve. The flux was corrected for time-dilation assuming a redshift of z=0.213, and extinction corrected assuming a value of E(B-V)=0.018 <cit.>. As discussed in Section <ref>, we assume negligible extinction from the host galaxy. The data was also K-corrected to shift the fluxes and effective filter wavelengths to their rest-frame values. The resulting spectral energy distribution (SED) was fit with a modified blackbody function suppressed below a cut-off wavelength <cit.>: f_λ(T, R) = (λ/λ_0)^β f_λ, BB(T, R) for λ < λ_0 f_λ, BB(T, R) for λ > λ_0 Where f_λ is the wavelength dependent flux, λ is the wavelength, β is a nominal index for which we used a value of 3, and a cutoff wavelength of λ_0=3000 Å was used. These values were chosen based on fitting Eq. <ref> to each SED and averaging the best fits. Multicolour information was not available for the pre-explosion plateau and so this data was not included in the SED fitting. Fitting this equation using data from all bands produced the blackbody (BB) temperature (T) and radius (R). Superbol calculates the bolometric luminosity (L_bol) by integrating numerically under the observed SED points, and extrapolating the missing flux outside the wavelength range using the best-fitting absorbed BB model (Figure <ref>). The initial peak is fit using data from the uvw1, uvm2, uvw2, and Ugcwroi filters; the B and V bands were ignored due to their very sparse data points. Both bands were also very noisy and so did not provide much extra information compared to the much cleaner g band. 
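For concreteness, the suppressed blackbody described by this equation can be written as a short function. This is a minimal sketch in cgs units rather than the Superbol implementation itself, with β = 3 and λ_0 = 3000 Å as adopted above.

import numpy as np

H = 6.626e-27    # Planck constant [erg s]
C = 2.998e10     # speed of light [cm/s]
K_B = 1.381e-16  # Boltzmann constant [erg/K]

def blackbody_flambda(wav_aa, T, R, d_cm):
    # Blackbody flux density f_lambda [erg/s/cm^2/AA] from a photosphere of
    # radius R [cm] and temperature T [K] observed at distance d_cm [cm].
    lam = np.asarray(wav_aa, dtype=float) * 1e-8           # AA -> cm
    b_lam = 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * K_B * T))
    return np.pi * b_lam * (R / d_cm) ** 2 * 1e-8           # per AA

def suppressed_blackbody(wav_aa, T, R, d_cm, lam0=3000.0, beta=3.0):
    # Blackbody suppressed by (lambda/lambda_0)^beta bluewards of lambda_0,
    # following the SED model above.
    wav = np.asarray(wav_aa, dtype=float)
    f = blackbody_flambda(wav, T, R, d_cm)
    return np.where(wav < lam0, (wav / lam0) ** beta * f, f)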
After the first break due to solar conjunction, only gri data was used in order not to extrapolate too far in time in the other bands, instead opting to extrapolate further in wavelength. As we will discuss in section Section <ref>, the SED shape appears flatter than a blackbody in the redder bands. We therefore performed additional fits excluding the i band. These experiments showed no significant difference to the best-fit L_bol, but did marginally affect T and R (Figure <ref>). We also caution that a late times (>200 days) the SED does not resemble a blackbody as evidenced by the nebular spectra in Figure <ref>, and so the bolometric luminosity should be treated with a degree of caution. T, R and L_bol were calculated for each epoch, however, it is important to note that at later times the blackbody fits are more contaminated by nebular lines and as the event transitions from photopheric to nebular, the blackbody fit becomes less reliable. Beyond 300 days, we have detections in only one band, which is insufficient to measure a temperature and radius. However, assuming that the colour does not change dramatically, we are able to estimate the bolometric luminosity at the time of this last detection. In Figure <ref>, we can see our event compared to some other well observed SLSNe with published Superbol data. Even compared to these other events, the slow nature of this event is apparent with only PS1-14bj having a comparable gradual decline. It is much harder to compare the rise to peak of these events as they are not all as well sampled, but we note that SN2019szu has a much faster rise to peak than PS1-14bj. Integrating the bolometric light curve gives us an energy of E=2.6×10^51 erg. This is a lower limit on the energy radiated by this event due to our finite sampling in time and wavelength. It also does not consider the energy released outside of time span covered by our photometry. The energy is consistent with other SLSNe which typically radiate ∼10^51 erg over their lifetimes <cit.>. §.§.§ Blackbody Temperature and Radius Figure <ref> shows the blackbody temperature and radius compared to other SLSNe, calculated using the method discussed in <ref>. Although this method allowed us to extrapolate the SED outside of the observed bands and measure the total luminosity, the continuum shape visible in our spectra clearly deviates from a blackbody even at early times as seen in Figure <ref>. This will be discussed in Section <ref>, but for our purposes here this means that forcing a blackbody fit on this data may result in temperature and radius measurements that are not physically meaningful. As the deviations become apparent at rest-frame wavelengths longer than 5500 , these SED fits were repeated with the i band removed. Removing this band did not produce any changes in temperature or radius larger than the 1σ uncertainties on these quantities at early times, however some late time points between 200-300 days show significant deviations. The evolution excluding the i band shows significant fluctuations in short timescales for both temperature and radius, as well as much larger uncertainties. For this reason the fits including the i band were chosen for further analysis. By looking at Figure <ref>, it is apparent that the temperature evolution is not consistent with other SLSNe, as it appears to increase over the first ∼100 days. This is in stark contrast with most other events, which tend to show a decreasing temperature <cit.>. 
However, it is consistent with the colour evolution found in Section <ref> which indicated the SN became bluer with time around peak. At later times the temperature of SN2019szu has larger errors which makes it harder to constrain but it appears as though the SLSN stays much hotter than the other events, and also remains roughly constant rather than increasing or decreasing drastically. SN2020wnt and PS1-14bj both show increases in temperature post explosion <cit.>. In SN2020wnt this increase is only before the peak of the light curve and lasts ∼50 days post explosion before decreasing in temperature. PS1-14bj instead shows a steady increase over the entire time frame, however both events stay cooler than SN2019szu. <cit.> suggest that late-time heating, due to X-ray to UV breakout from a central engine, could explain this increase. We also note that the large discrepancy in the temperature evolution of SN2019szu compared to other events might suggest that indeed a blackbody is not an accurate representation of its SED. <cit.> also analysed the ZTF light curve of SN2019szu in a population paper of 78 SLSNe-I observed from March 17, 2018 to October 31, 2020. That paper highlighted the anomalous nature of SN2019szu (designated ZTF19acfwynw). They presented the same increasing temperature profile that we found in Figure <ref>. <cit.> provide possible explanations of CSM interaction providing an additional heating source, or ejecta being ionised by a central engine such as a magnetar. Alternatively the apparently rising temperature may be due to mismatch between the true SED and the assumption of a thermal spectrum. These possibilities will be discussed in Section <ref>. Looking at Figure <ref>, we can see the radius appears to peak at R = (2.13 ± 0.22) × 10^15 cm at around 20 days before maximum light. Afterwards, the radius appears to either decrease very slowly or remain relatively constant up to the break in the data. Fitting this initial rise up to -20 days with a linear function appears to show the photosphere expanding at v ∼ 1700 . After the break the radius appears to have decreased significantly, where it continues a slow, gradual decline down to R = (4.31 ± 1.26) × 10^14 cm by 300 days. Overall the blackbody radius of SN2019szu appears quite compact compared to other SLSNe in our comparison sample, by a factor of few. Slower moving ejecta for SN2019szu could be one possible explanation for this difference. However, it may also be that the deviations of the spectrum from a true blackbody, as indicated by the apparently increasing temperature (and discussed further in the next section), lead to an underestimate of the true radius. This is supported by SN2018ibb which had an ejecta velocity of 8500 but displayed a steady photophere radius of ∼5×10^15 cm over the course of 100 days <cit.>, comparable to the radius of other events. §.§ Spectra Figure <ref> shows the evolution of the spectra of SN2019szu, starting at -30 days pre-peak, up to 207 days post peak. The first spectrum shows a featureless blue continuum and although quite noisy, the narrow [O III] doublet at λ4959, and λ5007 from the host galaxy is visible. Later spectra were obtained using NTT and MMT, a full breakdown of this is given in Table <ref>. The transition between a photospheric spectrum and a nebular spectrum occurs in the gap between the spectra at +31 and +211 days. Unfortunately this transition was not observed due to constraints from the Sun for ground-based telescopes. 
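Line velocities quoted in the following subsections are derived from the Doppler shift of line centroids relative to their rest wavelengths, after removing the host redshift of z = 0.213. A minimal helper illustrates the convention; the example centroid below is an invented value chosen to reproduce the ∼1500 km/s blueshift discussed later for the [O II] doublet.

C_KMS = 2.998e5  # speed of light [km/s]

def line_velocity(lambda_rest_frame_centroid, lambda_0):
    # Doppler velocity [km/s]; negative values indicate a blueshift.
    return C_KMS * (lambda_rest_frame_centroid - lambda_0) / lambda_0

# Illustrative: a centroid near 7288 AA for the [O II] 7320,7330 doublet
# (mean rest wavelength ~7325 AA) corresponds to roughly -1500 km/s.
print(line_velocity(7288.0, 7325.0))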
§.§.§ Photospheric Spectra The `early' spectra here are defined as those taken before the break and showing photospheric absorption lines. In this case they run from -30 days to 30 days post peak in the rest frame. Narrow host-galaxy lines are seen with the [O III] emission lines at λλ4959, 5007, and Hα at λ6563. The characteristic steep blue continuum and O II absorption lines associated with SLSNe are apparent in the 3000-5000 range, as well as typical Fe II and Fe III lines around 5000 blended with the O II absorption lines. Figure <ref> shows the most prominent lines that can be seen in the spectra. An approximate SN velocity can be determined by measuring the blueshift of the O II absorption lines <cit.> yielding an ejecta velocity of v_ej∼ 4500 km s^-1. <cit.> found in their sample of events that the median O II derived velocity was 9700 around time of maximum light. SN2019szu has a considerably slower velocty but still within the range measured by their sample. These spectra also include a broad emission line at 7300 , an unusual feature at such an early phase for any SN, though a similar feature has been seen as early as ∼50 days post-peak in a handful of slowly evolving SLSNe <cit.>. There is only one other event SN2018ibb, in which this feature has been observed around peak <cit.>. We will discuss this in detail in Section <ref>. In Figure <ref> one can clearly see a lack of evolution in SN2019szu between -16 and +30 days. Other SLSNe have shown similar periods with minimal spectral evolution around maximum light; for example, SN2015bn maintained a constant spectrum dominated by O II and Fe III between at least -27 and +7 days <cit.>, but its spectrum then cooled and evolved quickly between 7 and 30 days. SN2019szu shows little evolution until at least 30 days, and if it did undergo any period of rapid cooling this was unfortunately unobserved while the object was in solar conjunction. This could be due to all of these spectra being obtained during the plateau at peak. The lack of line velocity evolution over this time also may favours the magnetar central engine for this event <cit.>. The only feature that appears to change in this time period is the small bump that increases around 4340 . This wavelength is consistent with [O III] λ4363, indicated by the first dashed grey line on Figure <ref>. This line has not been identified previously in such an early spectrum of a SLSN, and we explore this possible identification in Section <ref>. Any variation in [O III] λλ4959,5007 over this time frame is hard to determine due to its blend with Fe II lines. There is also an unidentified emission line that varies in flux around 3850 . If this was caused by Ca II H K, it would require a blueshift of 7500 , and if it was caused by [O II] λ3727 it would require a redshift of 10500 , both of which seem unlikely due to the lack of other features with similar blue/redshifts. §.§.§ Continuum Shape The shape of the continuum in the early time spectra shows a relatively flat region between 5500-7000 combined with a steep blue shape below ∼5500 . We attempted to fit the continuum with a single blackbody (Figure <ref>) and with the sum of two blackbodies of independent temperature. Our best fits indicate a singular source at ∼13000 K which captures most of the shape of the flat region. Adding a second component for a double blackbody did not improve this fit and in fact preferred assigning both components the same temperature. 
Moreover, the continuum shape between 5500-7000 does not resemble other SLSNe at a similar epoch, most of which can be well approximated by a simple blackbody. A possible explanation for this could be an additional pseudo-continuum. In this case interaction with CSM produces a forest of narrow Fe II lines blended together, bluewards of ∼5500 <cit.>. This has been observed in various types of interacting supernovae including Type Ia-CSM and Type Ibn such as SN2014av and SN2006jc <cit.>. Figure <ref> shows a spectrum for SN2021csp, a SN Icn that has been argued to have exploded within H and He-poor CSM. This event shows a strong pseudo-continuum bluewards of ∼5000 attributed to a forest of Fe lines produced due to interaction with the surrounding medium <cit.>. Figure <ref> shows a composite spectrum created by summing a blackbody at 20000 K to an arbitrarily scaled spectrum of SN2021csp. A hotter blackbody was needed as the best fit at 13000 K under predicted in the bluer wavelengths. This approximately recreates the unique continuum shape of SN2019szu, with a flat red region combined with the steep continuum in the blue. In reality, the component originating from the SN ejecta is more complicated than the simple blackbody used here (e.g. the SN spectrum contains O II absorption lines). However we use this composite model purely to show we can achieve a similar continuum shape to SN2019szu with an additional interaction component. The pseudo-continuum in SN2019szu is apparent in the spectrum obtained at -16 days relative to peak. This suggests the SN was already interacting with CSM by this phase. §.§.§ The 7300 Line SN2019szu shows a prominent broad emission line at ∼7300 throughout its photospheric phase. While this line has been observed in some SLSNe in the late photospheric phase, it is already apparent in the earliest spectrum of SN2019szu that covers this wavelength range, meaning it is present at least 16 days before maximum light (Figure <ref>). The line itself does not evolve much over the course of our observations, with a similar full width at half maximum (FWHM) of ∼7000 during the photospheric phase. This velocity drops to ∼5000 FWHM in the late-time spectra. The total line flux stays relatively consistent during the early spectra at around 2×10^-14 erg cm^-2 s^-1 and decreases to 3×10^-15 erg cm^-2 s^-1 in the late-time spectra. Other slow-evolving events such as SN2007bi, PTF12dam, SN2015bn, and LSQ14an have shown early emission of forbidden and semi forbidden lines ranging from 50-70 days post peak <cit.>. One of the slowest events SN2018ibb displayed forbidden emission lines even earlier, apparent in its earliest spectrum at -1.4 days relative to peak <cit.>. Forbidden lines are formed when the radiative de-excitation dominates rather than collisional de-excitation. The conditions needed to form these forbidden emission lines are generally not seen until the SN reaches its nebular phase, when material is much more diffuse, temperatures are lower, and the energy deposition from the power source is lower <cit.>. In previous SLSNe, the line at 7300 was usually identified as [Ca II] λλ7291,7323. SN2018ibb is an interesting case as it is one the earliest examples of a SLSN showing nebular emission lines at 1.4 days before peak <cit.>. This was the earliest spectrum obtained of this object but still showed evidence for an emission line at 7300 that strengthened over time. 
The line profile shifted from a top-hat shape to bell shaped and also shifted by a few redwards over the first 100 days. <cit.> attribute this to the profile originally displaying [Ca II] and slowly becoming dominated by [O II] λλ7320,7330. In the case of LSQ14an, the line was strong in the earliest spectrum obtained 55 days after peak. <cit.> attribute this line to emission from [O II] λλ7320,7330 as opposed to [Ca II] as identified in other SLSNe spectra, due to the strength of this line and the lack of [O I] λ6300 emission throughout the nebular phase, suggesting oxygen was primarily in higher ionisation states throughout the ejecta. <cit.> suggest this line is a combination of both [O II] and [Ca II] in order for the line to match the widths of other [O III] lines present. The line also appears to be slightly asymmetric with a small narrow peak on the blue side of the profile which is similar to the shape of the line in SN2019szu (Figure <ref>). This similar asymmetry could indicate the presence of [Ca II] in the SN2019szu spectra as the blue peak is close to the rest-frame wavelength of [Ca II] λ7291. However the low spectral resolution and signal-to-noise ratio makes it difficult using the line profile alone to determine to what extent [Ca II] or [O II] may be contributing. SN2007bi also showed a strong emission line at 7300 in its earliest spectrum at 50 days post-peak <cit.>, though in this case it exhibited a more typical nebular phase dominated by [O I] λ6300. Other effects could also play a role in the asymmetry seen for the 7300 line in SN2019szu. Continuous scattering can be caused by free electrons or dust resulting in a profile with a blueshifted peak with a longer red tail <cit.>. Considering the first epoch in Figure <ref>, the base velocity of the line is ∼6000 with a peak at ∼-1500 . This would put us in the regime where the electron scattering optical depth τ_e=2-3 <cit.>. The evolution of the line then would be consistent with a declining τ_e. In the nebular phase τ_e≲ 1, leading to a diminished impact on the perceived blueshift of the line <cit.>. But semi-nebular lines can still emerge even with electron scattering depths of a few, and so the true velocity of the line may be slower. In some Type Ia SNe a similar effect of a blueshifted profile is thought to be caused by an aspherical explosion. <cit.> show that in SN1991bg-like events, the central peak can be shifted off-center by ∼1000 . Alternatively the profile could be explained by the SN blocking the view of the fastest receeding material, thus resulting in a slight blueshift. Although the 7300 line appears to be double peaked in the +30 day and +262 day spectra, we believe this is an artefact from the smoothing. This is apparent by looking at the line in the +211 day spectrum obtained using Binospec which has a better wavelength resolution and does not indicate a double peaked nature. However, double peaked features have been observed in SLSNe such as SN2018bsz which displayed a Hα profile with a strong redshifted peak, and a weaker blueshifted peak. An asymmetric, disk-like CSM structure was used to explain how this profile could form <cit.>. SN2019szu is the first SLSN that shows the 7300 line in a significantly pre-maximum spectrum, present alongside the characteristic O II absorption lines. At the early times seen in this event, the expected high radiation temperatures would imply that Ca II would be ionised and so lines from this species would not be observed. 
This is supported by the singly ionised oxygen lines evident in Figure <ref>; O II has a higher ionisation potential than Ca II. This suggests that [O II] may be a more likely explanation for an emission line at 7300 appearing so early in the SN evolution. We can test whether [O II] is a valid identification for this line by comparing to spectral models to predict other lines we expect to appear in these spectra. Nebular models from <cit.> predict emission from [O III] λλ4959, 5007, and [O III] λ4363 to also appear if strong [O II] is present. Looking at Figure <ref>, one can see evidence for these lines with the same blueshift as the [O II] emission line. This is even more apparent in Figure <ref>: adding the nebular model from <cit.> showing the [O II] and [O III] lines to a photospheric spectrum of SN2015bn greatly improves the match with SN2019szu in the areas with the [O II] and [O III] lines. It also recreates the profile seen around 5000 Å due to the blend of [O III] with Fe II/III. With this in mind we identify the emission line at 7300 as [O II] λλ7320,7330 with a net blueshift indicating a velocity ∼ 1500 . We will explore the physical implications of this line in Section <ref>. §.§.§ Late Spectra At late times the spectrum has transitioned into its nebular phase, defined by the broad emission lines and lack of thermal continuum. The spectra of SN2019szu at these phases are much more consistent with other SLSNe, as can be seen in Figure <ref>. Prominent lines (labelled in Figure <ref>) include [O II] λ3727, [O II] λλ7320,7330, Ca II λλ3934,3969, [O III] λ4363, [O III] λλ4959,5007, Mg I] λ4571, and a blend of iron lines around 5200 including [Fe II] λ5250. One notable difference is the lack of [O I] λ6300 emission in SN2019szu. This line is visible in most of the spectra of the other SLSNe at a similar phase and so reflects a high and persistent degree of oxygen ionisation in SN2019szu. Although LSQ14an displays very little [O I] in the spectrum obtained at +111 days, it shows strong [O I] in a late time spectrum obtained +410 days <cit.>. In the sample of SLSN nebular spectra analysed by <cit.>, three out of 12 events showed evidence for weak [O I] λ6300 and with [O II] dominating the 7300 region. This could be due to runaway ionisation which is believed to occur in the magnetar central engine scenario for low ejecta masses and a high power pulsar wind nebula. The result of this is a sharp switch of the spectra from O I dominated to O II/O III dominated. In the O II/O III dominated space this leads to a suppression of [O I] λ6300 emission as seen in SN2019szu <cit.>. The [O II] and [O III] emission lines, previously blended with broad absorption lines of O II and Fe III, are now much more easily isolated, confirming our earlier identification of [O II] λλ7320,7330. § MODELLING AND INTERPRETATION We have demonstrated that SN2019szu is a bright and slowly evolving SLSN, which displays surprising and persistent [O II] and [O III] emission lines even in early spectra obtained 16 days before maximum light. Our goal for this section is to determine the physical parameters of the oxygen emitting region and the ejecta overall, in order to understand how forbidden emission can arise at a time when the density in the ejecta is typically expected to be too high. 
The typical electron density of expanding ejecta can be written as: n_e = 2×10^9μ^-1(M_ ej/M_⊙) (v_ ej/3000 km s^-1)^-3 ×(x_e/0.1) (t/200 days)^-3 (f/0.1)^-1 cm^-3 where μ is the mean atomic weight, M_ej and v_ej are the ejecta mass and velocity respectively, x_e is the electron fraction, t is the time from explosion, and f is the filling factor <cit.>. The critical density for the [O II] line is n_ crit≈10^7 cm^-3 <cit.>. Above this density, collisional de-excitation dominates rather than radiative de-excitation, therefore suppressing the formation of forbidden lines. Assuming the ejecta is mostly oxygen, with a velocity of ∼10000 , at 80 days after explosion this results in n_e∼ 10^10 (M_ej/M_⊙) cm^-3. For typical SLSN ejecta masses of around 10 M_⊙ <cit.>, this is several orders of magnitude greater than the [O II] critical density. This suggests that [O II] emission is more likely to arise from lower density material, presumably expelled before the first detection of SN2019szu. The early appearance of these lines requires material close to the SN site, therefore excluding material other than that originating from the SN or its progenitor. This argument still holds even if the 7300 line has been misidentified, since the critical density for [Ca II] is similar to that for [O II]. As discussed in Section <ref>, the unusual shape in the spectral continuum may be attributed to interaction, providing further evidence for a (presumably H-poor) CSM close to the explosion site. §.§ Models for the Oxygen Emission Lines <cit.> investigated the emission from long-duration SLSNe during their nebular phase using models from their SUMO code. These models explore oxygen-rich compositions and single zones, as this allowed exploration of parameters agnostic to the powering mechanism <cit.>. Different compositions included a pure oxygen zone (pureO), and carbon burning ashes (Cburn). Here we choose the latter series to explore fully, as it is more physically motivated. Element abundances for this model were taken from the ONeMg zone in the <cit.> models assuming a star M_ ZAMS = 25 M_⊙ collapsing into a supernova. The model assumes 100 randomly distributed spherical clumps with vacuum in between, at 400 days post explosion. The ejecta is also assumed to have been travelling at a constant velocity of 8000 . The energy deposited is assumed to be from high energy sources such as gamma rays. Each model had varying ejecta mass (M_ej), filling factor (f), and energy deposition (E_dep). A small subset of models were chosen from the grid of parameters to explore based on the line strengths of [O I] λλ6300, 6364 compared to [O II] λλ7320, 7330, where any models with visible [O I] emission compared to [O II] were rejected. This is motivated by Figure <ref>, where these particular lines are isolated from other lines in the late-time spectra of SN2019szu, and the lack of [O I] emission is clear. In the models, a lack of [O I] can occur due to near-complete oxygen ionisation. A full breakdown of models passing our selection is given in Table <ref>. The 104_400 model also creates [O III] lines at 4363 , 4959 , and 5007 which when scaled and added to an early spectrum of SN2015bn in Figure <ref> recreates the line profile seen in SN2019szu around 5000 . The 4363 line also lines up with the emission feature seen in Figure <ref> that appears to increase in strength over time. This provides solid evidence that these additional features can be explained by oxygen rich material. 
All of these oxygen lines also have a similar blueshift derived from the peak of the [O II] λλ7320,7330 and [O III] λ4363 lines, which indicates a velocity of v=1500 . We now seek to determine the heating rate and the density of oxygen needed to explain the [O II] line luminosity in SN2019szu. In the sample of four models, the luminosity of the 7300 line is roughly proportional to the energy deposited. From this we can estimate the energy deposition that gives the correct line luminosities for the SN2019szu spectra. The luminosity of the [O II] line in SN2019szu is ∼ 1 × 10^42 erg s^-1 at early times and ∼ 4 × 10^41 erg s^-1 in our nebular spectra, which would require E_dep, early∼ 4 × 10^42 erg s^-1 and E_dep, late∼ 1 × 10^42 erg s^-1 assuming the [O II] luminosity to be directly proportional to deposition. The required deposition at early times is outside the range explored by <cit.> but only by a factor of 2. Extrapolating to higher energies is not trivial, as changing the deposition does not only affect the line luminosities; it will also influence the temperature and ionisation of the ejecta, and hence the line ratios. In order to keep the ionisation state constant for different energies and masses, the following relation must hold: γ/α n_e = constant, where α is the recombination rate, and n_e is the electron number density. γ is the ionisation rate per particle, defined as: γ = e_dep x_ion,OII/χ n_OII Here x_ion,OII is the fraction of deposited energy used in ionising O II to O III, e_dep is the energy deposition per unit volume, and χ is the ionisation potential. We can approximate this as x_ion,OII∼ x_ionx_OII, where x_OII is the fraction of O II in the gas. We assume n_e to be proportional to the oxygen number density n_O, which is then proportional to the density of the material ρ for our sample of models. This is motivated by the grid of models which show a ratio of n_e/n_O∼1-1.3 for the subset used <cit.>. If α is not too strong a function of temperature, we can assume it will be constant for all of these models with the same composition. This leads to the relation E_dep x_ion/ρ^2 = constant which can then be used to calculate the density of the oxygen-emitting material needed in order to maintain a constant ionisation fraction for a given energy deposition. The fraction of energy going into ionisations, x_ion, decreases with higher energy depositions, but the SUMO models show that this function varies very slowly and we may assume it to be constant in the regime we are working in. To use the relation in Equation <ref>, we first identified models that produced lines consistent with SN2019szu. This was done by comparing the line ratios of [O II] λλ7320, 7330, [O III] λ4363, and [O III] λλ4959, 5007 in the model series to our observed spectra. This is shown in Figure <ref> for the spectrum at +262 days. This method was only applicable to the final two nebular spectra in Table <ref>, as the lines were not isolated enough in earlier spectra. Both spectra indicated E_dep≃ 1× 10^42 erg s^-1 provided roughly the correct ionisation balance for a 3 M_⊙ model with a filling factor f=0.1. This model has ρ_model = 6.7 × 10^-16 g cm^-3. We verified that applying a significant host galaxy extinction (Section <ref>) did not change the line ratios significantly and so we disregarded this effect. We can thus scale the deposition to match the observed spectra, and determine the corresponding density to match the inferred ionisation state. 
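To make this scaling concrete, the short sketch below (our own illustration, not code from the paper) applies the relation E_dep x_ion/ρ^2 = constant, with x_ion treated as constant, to the reference model and the deposition values quoted in this section; the variable and function names are ours.

```python
import math

# Illustrative sketch (not the authors' code): scale the energy deposition from
# the best-matching SUMO model to the values inferred for SN2019szu, and use
# E_dep * x_ion / rho^2 = const (x_ion ~ constant) to solve for the density.

E_DEP_MODEL = 1e42    # erg/s, deposition of the 3 Msun, f = 0.1 reference model
RHO_MODEL = 6.7e-16   # g/cm^3, density of that model

def density_for_deposition(e_dep, e_dep_ref=E_DEP_MODEL, rho_ref=RHO_MODEL):
    """Density that keeps the ionisation balance fixed: rho scales as sqrt(E_dep)."""
    return rho_ref * math.sqrt(e_dep / e_dep_ref)

# Deposition estimates quoted in the text for the early and late [O II] luminosities.
cases = {"early": 4e42, "late": 1e42}   # erg/s

for epoch, e_dep in cases.items():
    rho = density_for_deposition(e_dep)
    print(f"{epoch:5s}: E_dep = {e_dep:.0e} erg/s -> rho ~ {rho:.1e} g/cm^3")
# early: rho ~ 1.3e-15 g/cm^3 (quoted as ~1e-15); late: rho ~ 6.7e-16 (~7e-16)
```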
Using E_dep, early and E_dep, late, this scaling resulted in ρ_early∼ 1× 10^-15 g cm^-3 and ρ_late∼ 7× 10^-16 g cm^-3. However, as the ionisation state cannot be constrained directly in the early spectra, the early measurements assume a constant ionisation state over the course of the SN lifetime, and so ρ_early should be taken as a rough guide only. This density will be used further in Section <ref> to constrain the CSM parameters. §.§ Magnetar Model and Ejecta Mass Estimate Modelling the light curve allows us to estimate the mass and velocity of the SN ejecta. Semi-analytic magnetar-powered models have been extensively tested in the literature and shown to be able to reproduce the light curves of most observed SLSNe <cit.>. Although we have found evidence for CSM interaction in this event through the material needed to produce the 7300 line, the low density of this material suggests that it is likely not sufficient to power the full luminosity at the bright peak of the light curve, so multiple power sources may be at play. <cit.> modelled 24 SLSNe light curves using formulae detailed by <cit.>, and showed that densities of 10^-12 g cm^-3 were needed to match the rise-decline relation seen in SLSN light curves. Although these analytic CSM models are widely used <cit.>, the complicated geometry of the interaction and additional flexibility from separate ejecta and CSM density profiles can make the results difficult to interpret, and lead to mass estimates that are quite discrepant with hydrodynamical models <cit.>. We therefore restrict our mass estimates to the magnetar model only, but note that the presence of CSM introduces an additional systematic uncertainty. To constrain the magnetar parameters that would be needed to power SN2019szu, and to estimate the ejecta mass and velocity, we employed the Modular Open Source Fitter for Transients (mosfit) <cit.>. mosfit is a fully Bayesian code that fits physical models to the multi-band light curves of transients. The plateau was excluded from this fitting as it is believed to be a result of pre-explosion activity, and the magnetar model would not be able to fit such a feature. For the magnetar model, the energy input at time t is assumed to be given by: F_mag(t) = (E_mag/t_mag) / (1+t/t_mag)^2, with E_mag ∝ P^-2 and t_mag ∝ P^2 B_⊥^-2, where E_mag is the rotational energy of the magnetar, t_mag is the timescale on which it spins down, P is the spin period of the magnetar, and B_⊥ is the component of the magnetic field perpendicular to the spin axis. The default mosfit priors were edited to account for the specifics of this event. In initial fits the value of A_V,host railed against the upper end of the default prior, with A_V,host approaching 0.5 mag, inconsistent with a dwarf host galaxy. The parameter fit in the models for this is the hydrogen column density (n_H), which is related to A_V,host by A_V = n_H / (1.8× 10^21). We therefore fixed n_H to 10^16 cm^-2 to ensure that an unrealistic extinction did not bias our fits. The prior for the scaling velocity (v_ej) was set to be a Gaussian distribution with mean = 5000 and standard deviation = 3000, based on the blueshift of the O II lines in the photospheric spectra as described in Section <ref>. This approximation can be made for SLSNe around peak as the O II lines form in a region close to the photosphere. The shallow nature of the photosphere means it can be described using a single velocity <cit.>, which can be used as a proxy for the ejecta velocity. 
The broad Gaussian also allows for a more typical SLSN velocity ∼ 10,000 as the lines may not have formed in the same location as the photosphere, and hence could have differing velocities. The prior on the minimum temperature (T_min) at late times was adjusted to cover a broader range from 10^3 to 10^5 K with a flat distribution in log space, the physical motivation being the unusual behaviour of the colour temperature of SN2019szu (Section <ref>). The prior for the magnetic field (B) was expanded to include lower values from 10^12 G to 10^15 G to allow for the slow evolution of SN2019szu. The optical opacity κ was fixed to 0.15 cm^2 g^-1, the median value found in the population study by <cit.>. Model fits are shown in Figure <ref>, and the derived posteriors are given in Figure <ref>. Overall the model fits cannot fully capture the bumps and wiggles in the light curves, which would require more nuanced models. The model also underpredicts the luminosity in some of the bluer bands such as V, g, and U, whilst overpredicting the i band luminosity at peak, and the r band at late times. This is perhaps a reflection of the unusual shape of the SED for this event. The magnetic field B=0.37^+0.02_-0.02× 10^14 G is at the low end for the population studied by <cit.> (B = 0.8^+1.1_-0.6×10^14 G), which is perhaps unsurprising given the slow decline together with Equation <ref>. A lower value of B results in energy deposition over a longer time frame, leading to a longer lived SLSN. We can also see that the spin period found for SN2019szu (P=1.97^+0.16_-0.14 ms) is within the 1σ range of the median value found by <cit.> (P=2.4^+1.6_-1.2 ms). The spin period sets the luminosity scale, as F_mag(t) ∝ E_mag/t_mag ∼ P^-4. As SN2019szu has a luminosity comparable to other SLSNe, this is unsurprising. Our estimated ejected mass (M_ej=30.2^+4.5_-4.5) is also on the upper end of the general population, with a median and 1σ distribution of M_ej, median = 4.8^+8.1_-2.6 <cit.>. A larger population study by <cit.> found this distribution of masses spanned 3.6-40 M_⊙ for the general population, and so SN2019szu also lies on the upper end of this. The mass estimate is derived from the light curve width and diffusion timescale, and is not as dependent on the type of model chosen provided the power source is internal to the ejecta (e.g. ^56Ni decay or magnetar engine). An interaction model is more complex, as the CSM can provide additional diffusive mass. However, since we have calculated the density of the CSM to be low relative to typical models for SLSNe, we would expect that the diffusion time is still dominated by the ejecta. For this reason, we expect our estimate of the total mass should be the right order of magnitude, even if interaction with the CSM is also contributing to the light curve. In SN2019szu, the large mass creates a longer diffusion time, contributing to the slow evolution. mosfit also provides an estimated explosion time relative to the first data point of t ∼ -37.96^+2.12_-2.33 days. Combining this with our time of maximum light, we estimate a total rise time of ≈82 days. §.§ Constraints on Pre-explosion Mass Loss From previous analysis in Section <ref>, we know that SN2019szu has a region of low-density, oxygen-rich material needed to produce the forbidden oxygen lines at early times. The density of this material was calculated to be ρ∼ 10^-15 g cm^-3 in Section <ref>. This density is too low to be from the SN itself and so this region must be comprised of pre-expelled circumstellar material. 
Using the velocities of the CSM and ejecta, we can constrain when this material was ejected before explosion, such that the ejecta catches the CSM prior to the first NTT spectrum. The width of the [O II] λλ 7320,7330 line did not decrease much over time, changing from 7000 to 5000 over the course of ∼300 days. Throughout this period, it maintains a constant blueshift of ≈1500 . We assume that this blueshift corresponds to the CSM velocity and explore the consequences of this, though we acknowledge that it is also possible to achieve a blueshifted profile with electron scattering <cit.>. The mosfit model provides a measurement for the scaling velocity v_ej = 4700 , which is consistent with the SN velocity estimated from the O II absorption lines measured in Section <ref>. Another velocity estimate comes from the increase in radius (Figure <ref>). Assuming a R=vt relation gives v_ej = 1700 over the first 30 days which is similar to the velocity estimates from the [O II] emission lines. This could suggest that at early times there is a photosphere within the CSM, however it is important to remember the limitations of the SED fitting used to produce these radii. The presence of the [O II] emission line at ≈66 days from the estimated explosion date indicates that the SN ejecta has already caught up with the CSM at this phase, giving an upper limit on the CSM radius of 2.6 × 10^15 cm. This places an upper limit on the ejection time which must be more recent than 200 days before the peak, or ∼120 days before explosion, for a CSM velocity of v_CSM∼ 1500 and SN velocity v_SN∼ 4500 . Mass ejection on a timescale of only months before the explosion is also supported by the precursor plateau in the light curve. Using the estimated velocity and expansion time to determine the CSM radius at different times, we can also estimate the mass of emitting CSM. Using the approximate radius ≈ 2.6 × 10^15 cm at the light curve peak results in M_early∼0.1 M_⊙, whereas in the late spectra at ≈250 days post peak we find M_late∼0.25 M_⊙ based on the densities found in Section <ref> and assuming spherical, uniform CSM. This increase in apparent mass could be due to the SN having expanded into a larger volume, and therefore being able to interact with more material. It is also important to note that the ionisation state derived from the late-time spectra may not strictly hold at earlier times. Using our CSM mass estimate of around 0.25 M_⊙ and the velocity of this material we can calculate the total kinetic energy (KE). We obtain a value of KE ∼ 10^49 erg adopting the velocity from the [O II] and [O III] emission lines. This is a small fraction of the total energy obtained by integrating the bolometric light curve of E∼ 3× 10^51 erg and so only a small fraction of the total energy would be released via these line emissions. § DISCUSSION We have found that the early emission lines in SN2019szu require 0.25 M_⊙ of CSM ejected less than 200 days before maximum light. Explaining the mass ejected from the star in such a short amount of time requires creative explanations, as stellar wind mass loss rates, even for Wolf Rayet stars, are too low to explain the entirety of this CSM <cit.>. Typical mass loss rates for these types of stars range between (0.2-10)× 10^-5M_⊙yr^-1 <cit.>, and so more explosive mechanisms are required to explain this amount of CSM. A consistent picture must also be able to reproduce the luminous precursor plateau in the light curve. 
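As a quick consistency check of the constraints summarised above, the following back-of-the-envelope sketch (ours, not the authors' code) recovers the CSM radius limit, the ejection-time limit, and the kinetic energy from the velocities and epochs quoted in the text; it assumes the same spherical geometry and should be read as order-of-magnitude only.

```python
# Rough order-of-magnitude sketch of the CSM constraints discussed above.
# All inputs are the values quoted in the text; this is not the paper's code.

DAY = 86400.0        # s
MSUN = 1.989e33      # g
KM = 1e5             # cm

v_sn = 4500 * KM     # cm/s, SN ejecta velocity (~4500 km/s)
v_csm = 1500 * KM    # cm/s, CSM velocity from the [O II]/[O III] blueshift
t_line = 66 * DAY    # s, first [O II] detection, ~66 d after explosion

# Upper limit on the CSM radius: the ejecta must already have caught up with it.
r_csm = v_sn * t_line
print(f"R_CSM < {r_csm:.1e} cm")                      # ~2.6e15 cm

# Latest possible ejection epoch: time for CSM at v_csm to reach that radius.
t_travel = r_csm / v_csm
print(f"ejected < {t_travel / DAY:.0f} d before the first [O II] detection")
# i.e. within ~200 d of peak, or roughly 120 d before explosion for an ~82 d rise

# Kinetic energy of ~0.25 Msun of CSM moving at v_csm.
ke = 0.5 * 0.25 * MSUN * v_csm**2
print(f"KE ~ {ke:.0e} erg")                           # ~6e48 erg, of order 1e49
```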
Eruptions from luminous blue variable (LBV) stars have been proposed as a way of generating CSM in the context of powering Type II SLSNe. Giant LBV eruptions like that of η Carinae can have mass loss rates ≳0.5 M_⊙yr^-1 <cit.>, which could be sufficient to produce the quantity of CSM surrounding SN2019szu. However, these events are too faint to explain the pre-explosion activity seen in SN2019szu, with events such as η Carinae reaching a maximum bolometric absolute magnitude of M_bol∼-14 mag <cit.>. SN2009ip was an event that bridged the gap between LBVs and SNe. Initially recognised as an LBV after a series of outbursts, the event transitioned into an SN-II reaching a maximum absolute magnitude M_r=-17.5 mag <cit.>. Its spectra displayed broad Balmer lines with P-Cygni profiles characteristic of SNe-II. As LBV outbursts tend to occur in luminous stars in the blue supergiant phase rather than in stripped stellar cores, this scenario struggles to explain the lack of hydrogen and helium emission from the oxygen-emitting region of SN2019szu. However, we note that at least one precursor outburst has been seen in a Type Ibn SN <cit.>. Another possible explanation for the origin of this CSM is pulsational pair instability (PPI) ejections. <cit.> explores the evolution of stars thought to undergo PPI and describes temperature and luminosity models for these events. Less massive cores lead to smaller PPI shell masses at times closer to core collapse. In the models with He core masses of 48-52 M_⊙ (CO cores of ≳40 M_⊙), the shells of material ejected in each pulse collide with one another and produce sustained luminosities above ∼10^43 erg s^-1 from 40 days before explosion. This could provide an explanation for the pre-explosion plateau observed in SN2019szu and, if true, would be the first direct observation of PPI ejections. This subset of models also describes typical timescales for the onset of the first PPI ejections of >3.6 ×10^6 s before core collapse. This is consistent with the timescales obtained from the 7300 line, which constrained the time of mass ejection to within the last 120 days before the time of maximum light. However, this subset of models produces ejected masses in the region of 6-8 M_⊙. This is significantly more massive than the ≈0.25 M_⊙ of CSM responsible for the oxygen emission lines in SN2019szu, which would instead be consistent with ≈30 M_⊙ cores. However, at lower masses the PPI eruptions occur only in the final days before explosion, inconsistent with the light curve plateau. We suggest that a ∼40 M_⊙ CO core is more consistent with the observations, and that the low-mass, low-density material producing the forbidden lines is the outermost material from the earliest pulse. This assumption is supported by <cit.>, who show that the density of the PPI ejecta decreases radially outwards for a 50 M_⊙ He core (∼ 40 M_⊙ CO core). Their work also suggests that these pulses may have velocities of a few thousand , in alignment with our measured velocity from the 7300 line. Later successive pulses interact with each other to produce the plateau <cit.>, and the ejecta interacting with these shells could be the origin of the pseudo-continuum. We also note that many factors could affect our calculated CSM mass, such as the clumpiness of the ejecta or how much of the material has been excited. We assumed spherical geometry and that the material is shock excited, but it could also be radiatively excited. 
Nonetheless, it is striking that our estimates for the energy and timescale of CSM ejection are consistent with existing PPI models. It is also important to note that in all non-rotating PPI models the star collapses into a black hole, and none of these models reach the luminosity of SLSNe unless the initial star was rotating rapidly enough to create a magnetar <cit.>. Indeed, <cit.> estimated that to power the observed long-lived SLSNe via CSM interaction, densities ρ_CSM≳ 10^-12g cm^-3 and masses comparable to the ejected M_ ej are probably needed. This is much higher than our inferred density and CSM mass for SN2019szu, and so it is not clear that this event can be solely powered by CSM interaction. We also note that there are lots of uncertainties associated with the final stages of massive star evolution, and so other unknown scenarios involving eruptive mass loss from stripped stars could provide an alternative explanation for SN2019szu. Even if another power source is required, the definitive presence of nearby CSM could explain other shared aspects of the SLSN population. The combination of a pre-explosion plateau and the very early appearance of nebular line emission in SN2019szu shows that nebular line emission during the photospheric phase of SLSNe can be an indicator of mass ejection shortly before explosion. The 7300 Å lines in the spectra of other SLSNe such as SN2007bi, PTF12dam, LSQ14an and SN2015bn may therefore reveal recent mass ejection, potentially driven by the PPI mechanism, in those events too. We note that these are all long-lived SLSNe, likely indicating large ejecta (and therefore progenitor core) masses. Many SLSNe also show signs of a pre-maximum bump in their light curves <cit.>, which could potentially be explained by multiple shells of CSM produced by PPI eruptions <cit.>. However, if this was the case we might expect to see spectral evidence of this interaction. Early spectra of other SLSNe such as LSQ14bdq with pre-maximum bumps do not show evidence of broad emission lines <cit.>. Therefore, other explanations such as post-shock cooling of extended stellar material or a recombination wave in the ejecta may be more plausible explanations for some events <cit.>. However, no spectrum has been obtained during the pre-maximum bump of a SLSN, so it is also possible that spectroscopic signatures of CSM during the bump could be erased by the time of maximum light, e.g. if the ejecta has overrun the CSM. We can see clear observational evidence for circumstellar material in other SLSNe. In iPTF16eh, a Mg II resonance doublet was observed to change from blueshifted emission to redshifted emission over time <cit.>. This is explained by reflection of light from a detached shell of CSM surrounding the SN. This material had a velocity of 3300 and was thought to have been ejected 32 years prior to the supernova explosion <cit.>, a much longer timescale than derived for SN2019szu. In the case of SN2019szu, we do not see this changing Doppler shift, as the ejecta has already collided with the CSM. However, the mechanism proposed to produce the CSM in iPTF16eh is also pair instability ejections, but in that case from a more massive progenitor with a He core mass of ∼ 51-53 M_⊙, which experiences the PPI earlier before explosion. Some SLSNe show evidence for interaction only at late times with the appearance of broad Hα emission. In SN2018bsz, this feature is multi-component and appears at ∼30 days, accompanied by other hydrogen lines. 
<cit.> explain this using highly aspherical CSM with several emitting regions. In iPTF15esb, iPTF16bad and iPTF13ehe, this feature emerged at +73, +97 and +251 days respectively, implying the progenitors lost their hydrogen envelopes several decades before the SN explosion, leading to a neutral hydrogen shell <cit.>. <cit.> suggest this eruptive mass loss could be common in SLSN progenitors. iPTF15esb in particular had a triple-peaked light curve, which could be explained by shells of CSM with a total mass ∼ 0.01 M_⊙. Collisions between these shells, or between shells and ejecta, could provide the excess luminosity to power the light curve undulations in iPTF15esb. Undulations during the declining phase in other SLSNe have also been attributed to interaction <cit.>, though central engine flaring or ionisation fronts are an alternative explanation <cit.>. Other energetic SNe have shown signs of interaction with oxygen-rich material, including the unusual Type Ic SN2010mb <cit.>. This event had a slowly declining light curve thought to be the result of interaction with ∼3 M_⊙ of CSM. This resulted in spectral features such as a blue quasi-continuum and a strong [O I] λ5577 emission line at late times. However, this emission line had a narrow core and required densities of ∼ 10^-14 g cm^-3. This is denser than the CSM surrounding SN2019szu, but could be partially explained by the slower velocity of the CSM in SN2010mb at 800 . <cit.> also suggest PPI ejections could be the source of this material. Other events that fall into this emerging population of Ic-CSM include SN2022xxf and SN2021ocs, both of which show signs of interaction with H/He-poor CSM <cit.>. Looking at the inferred properties of the progenitor can also provide contextual clues for pair-instability candidates. For example, the Type I SN2016iet was estimated to have had a CO core mass of ∼ 55-120 M_⊙ prior to explosion <cit.>. This event was best modelled with interaction with ∼35 M_⊙ of CSM, ejected within the last decade before explosion. This leads to a mass loss rate of ∼7 M_⊙ yr^-1, much higher than the inferred rate for SN2019szu. These high masses, coupled with the low-metallicity host galaxy, place this event within the regime of PPI or pair-instability supernovae and are consistent with PPI models by <cit.>. In summary, the short timescale between explosion and observation of the 7300 line in SN2019szu supports the theory that the CSM producing this line was ejected by its progenitor very shortly before explosion. This is supported by the precedent set by other SLSNe which have shown evidence for eruptions close to explosion, albeit over a longer range of timescales. The relatively tight constraints on the CSM mass and timing of ejection make SN2019szu one of the strongest candidates for a PPI SN to date, and suggest that some of these events can form the engines required to reach superluminous magnitudes. 
Using models of nebular SN spectra from <cit.>, we were able to not only determine that this line at ∼ 7300 originated from [O II] λλ7320,7330 but also deduce parameters of the material of origin. We found that SN2019szu had at least ∼0.25 M_⊙ of H-poor and O-rich material with a density of ∼10^-15 g cm^-3. The spectra of this event also showed a steep continuum in the blue, combined with a relatively flat continuum redwards of ∼5500. This unique spectral shape was not well fit by a simple blackbody. Instead we showed that this shape could be recreated by combining a hot blackbody spectrum with that of an interacting SN. Combined with the bumpy light curve, sustained blue colours, and the O-rich CSM needed to produce the [O II] line at early times, this provides strong evidence that SN2019szu was interacting with nearby CSM. In order for the interaction to occur by 16 days before maximum light, it must have been ejected less than 120 days before explosion (assuming a CSM velocity of 1500 based on the blueshift of the emission lines), suggesting that mass ejection is also responsible for the light curve plateau. We conclude that producing ∼0.25 M_⊙ of hydrogen-poor CSM close to the time of explosion is not feasible using known mechanisms, such as stellar winds or eruptions from luminous blue variables. Instead we suggest pulsational pair-instability (PPI) ejections are a promising possibility. The PPI mechanism also can explain the lack of H and He in this CSM. PPI models from a stripped ∼ 40 M_⊙ CO core are consistent with our estimated CSM energetics and ejection timescale, the duration and luminosity of the pre-explosion plateau, and the estimated ejecta mass from the SN light curve. The detailed study of SN2019szu introduces a new observational approach that can be used to find signatures of PPI interactions. Early observations of nebular emission lines alongside the characteristic O II absorption lines could be used to probe the structure of SLSNe. Obtaining these observations as soon as possible after explosion could help provide stricter constraints on when and how much mass is ejected during these PPI ejections. Observing pre-explosion activity will also provide more information on the progenitors with spectroscopic observations during this time helping to unravel the composition and velocity of this material. As growing numbers of SLSNe show evidence for interaction with CSM, adopting this approach will also help answer questions about the explosion mechanisms involved. This will be especially useful in future survey telescopes such as the Vera Rubin Observatory, which will be able to detect precursor activity in time for more detailed follow-up. § ACKNOWLEDGEMENTS We would like to thank Peter Blanchard for help reducing the Binospec spectrum. AA and MN are supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 948381). MN also acknowledges funding from the UK Space Agency. This work was funded by ANID, Millennium Science Initiative, ICN12_009. TWC thanks the Max Planck Institute for Astrophysics for hosting her as a guest researcher. 
LG, MGB, CPG and TEMB acknowledge financial support from the Spanish Ministerio de Ciencia e Innovación (MCIN), the Agencia Estatal de Investigación (AEI) 10.13039/501100011033 under the PID2020-115253GA-I00 HOSTFLOWS project, from Centro Superior de Investigaciones Científicas (CSIC) under the PIE project 20215AT016, and the program Unidad de Excelencia María de Maeztu CEX2020-001058-M. LG also acknowledges support from the European Social Fund (ESF) "Investing in your future" under the 2019 Ramón y Cajal program RYC2019-027683-I. CPG also acknowledges financial support from the Secretary of Universities and Research (Government of Catalonia) and by the Horizon 2020 Research and Innovation Programme of the European Union under the Marie Skłodowska-Curie and the Beatriu de Pinós 2021 BP 00168 programme. TEMB also acknowledges support from the European Union Next Generation EU/PRTR funds under the 2021 Juan de la Cierva program FJC2021-047124-I. GL is supported by a research grant (19054) from VILLUM FONDEN. SS acknowledges support from the G.R.E.A.T. research environment, funded by Vetenskapsrådet, the Swedish Research Council, project number 2016-06012. SJS acknowledges funding from STFC Grant ST/X006506/1 and ST/T000198/1. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, as part of ePESSTO+ (the advanced Public ESO Spectroscopic Survey for Transient Objects Survey). ePESSTO+ observations were obtained under ESO program IDs 1103.D-0328, 106.216C, 108.220C (PI: Inserra). LCO data have been obtained via OPTCON proposals (IDs: OPTICON 19B-009, 20A/015 and 20B/003). The OPTICON project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 730890. This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. ATLAS is primarily funded to search for near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen's University Belfast, the Space Telescope Science Institute, and the South African Astronomical Observatory. The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE). Based on observations obtained with the Samuel Oschin 48-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grant No. 
AST-1440341 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW. § DATA AVAILABILITY All data in this paper will be made publicly available via WISeREP <cit.>. mnras § AFFILIATIONS ^1Institute for Gravitational Wave Astronomy and School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK ^2Astrophysics Research Centre, School of Mathematics and Physics, Queen's University Belfast, Belfast BT7 1NN, UK ^3The Oskar Klein Centre, Department of Astronomy, Stockholm University, AlbaNova, SE-10691 Stockholm, Sweden ^4Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA 02138, USA ^5Space Telescope Science Institute, 3700 San Martin Dr, Baltimore, MD 21218, USA ^6Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK ^7DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, 2800 Kgs. Lyngby, Denmark ^8European Southern Observatory, Alonso de Córdova 3107, Casilla 19, Santiago, Chile ^9Millennium Institute of Astrophysics MAS, Nuncio Monsenor Sotero Sanz 100, Off. 104, Providencia, Santiago, Chile ^10Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI 96822 ^11Technische Universität München, TUM School of Natural Sciences, Physik-Department, James-Franck-Straße 1, 85748 Garching, Germany ^12Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans, s/n, E-08193 Barcelona, Spain. ^13Institut d’Estudis Espacials de Catalunya (IEEC), E-08034 Barcelona, Spain. ^14Universitat Autònoma de Barcelona, E-08193 Bellaterra (Barcelona), Spain ^15Astronomical Observatory, University of Warsaw, Al. Ujazdowskie 4, 00-478 Warszawa, Poland ^16Cardiff Hub for Astrophysics Research and Technology, School of Physics & Astronomy, Cardiff University, Queens Buildings, The Parade, Cardiff, CF24 3AA, UK ^17Max-Planck-Institut für Astrophysik, Karl-Schwarzschild Straße 1, 85748 Garching, Germany ^18Astrophysics Research Institute, Liverpool John Moores University, IC2, Liverpool Science Park, 146 Brownlow Hill, Liverpool L3 5RF, UK ^19Department of Physics and Astronomy, The Johns Hopkins University, Baltimore, MD 21218, USA, ^20School of Physics, Trinity College Dublin, The University of Dublin, Dublin 2, Ireland ^21Isaac Newton Group (ING), Apt. de correos 321, E-38700, Santa Cruz de La Palma, Canary Islands, Spain
http://arxiv.org/abs/2307.01188v2
20230703175337
Dark energy, D-branes, and Pulsar Timing Arrays
[ "Debika Chowdhury", "Gianmassimo Tasinato", "Ivonne Zavala" ]
hep-th
[ "hep-th", "astro-ph.CO" ]
http://arxiv.org/abs/2307.01844v3
20230704174602
Advancing Wound Filling Extraction on 3D Faces: Auto-Segmentation and Wound Face Regeneration Approach
[ "Duong Q. Nguyen", "Thinh D. Le", "Phuong D. Nguyen", "Nga T. K. Le", "H. Nguyen-Xuan" ]
cs.CV
[ "cs.CV" ]
Facial wound segmentation plays a crucial role in preoperative planning and optimizing patient outcomes in various medical applications. In this paper, we propose an efficient approach for automating 3D facial wound segmentation using a two-stream graph convolutional network. Our method leverages the Cir3D-FaIR dataset and addresses the challenge of data imbalance through extensive experimentation with different loss functions. To achieve accurate segmentation, we conducted thorough experiments and selected a high-performing model from the trained models. The selected model demonstrates exceptional segmentation performance for complex 3D facial wounds. Furthermore, based on the segmentation model, we propose an improved approach for extracting 3D facial wound fillers and compare it to the results of the previous study. Our method achieved a remarkable accuracy of 0.9999986% on the test suite, surpassing the performance of the previous method. From this result, we use 3D printing technology to illustrate the shape of the wound filling. The outcomes of this study have significant implications for physicians involved in preoperative planning and intervention design. By automating facial wound segmentation and improving the accuracy of wound-filling extraction, our approach can assist in carefully assessing and optimizing interventions, leading to enhanced patient outcomes. Additionally, it contributes to advancing facial reconstruction techniques by utilizing machine learning and 3D bioprinting for printing skin tissue implants. Our source code is available at <https://github.com/SIMOGroup/WoundFilling3D>. § INTRODUCTION Nowadays, many people lose part of their body to traffic accidents, occupational accidents, birth defects, or disease. Among these injuries, defects of the head and face account for a relatively high proportion <cit.>. Wound regeneration is an important aspect of medical care, aimed at restoring damaged tissues and promoting wound healing in patients with complex wounds <cit.>. However, the treatment of craniofacial and facial defects can be challenging due to the many specific requirements of the tissue and the complexity of the anatomical structure of that region <cit.>. Traditional methods used for wound reconstruction often involve grafting techniques using autologous grafts (from the patient's own body) or allogeneic grafts (from a donor) <cit.>. However, these methods have limitations such as limited availability, donor-site morbidity, and the potential for rejection. In recent years, the development of additive manufacturing technology has promoted the creation of advanced techniques in several healthcare industries <cit.>. The implementation of 3D printing technology in the preoperative phase enables clinicians to establish a meticulous surgical strategy by generating an anatomical model that accurately reflects the patient's unique anatomy. This approach facilitates the development of customized drilling and cutting instructions, precisely tailored to the patient's specific anatomical features, thereby accommodating the potential incorporation of a pre-formed implant <cit.>. 
Moreover, the integration of 3D printing technology and biomaterials assumes a pivotal role in advancing innovative remedies within the field of regenerative medicine, addressing the pressing demand for novel therapeutic modalities <cit.>. The significance of wound reconstruction using 3D bioprinting in the domain of regenerative medicine is underscored by several key highlights, as outlined below: - Customization and Precision: 3D bioprinting allows for the creation of patient-specific constructs, tailored to match the individual's wound geometry and requirements. This level of customization ensures a better fit and promotes improved healing outcomes. - Tissue Regeneration: The ability to fabricate living tissues using 3D bioprinting holds great promise for wound reconstruction. The technique enables the deposition of cells and growth factors in a controlled manner, facilitating tissue regeneration and functional restoration <cit.>. - Reduced Donor Dependency: The scarcity of donor tissues and the associated risks of graft rejection are significant challenges in traditional wound reconstruction methods. 3D bioprinting can alleviate these limitations by providing an alternative approach that relies on the patient's own cells or bioinks derived from natural or synthetic sources <cit.>. - Complex Wound Healing: Certain wounds, such as large burns, chronic ulcers, or extensive tissue loss, pose significant challenges to conventional wound reconstruction methods. 3D bioprinting offers the potential to address these complex wound scenarios by creating intricate tissue architectures that closely resemble native tissues. - Accelerated Healing: By precisely designing the structural and cellular components of the printed constructs, 3D bioprinting can potentially enhance the healing process. This technology can incorporate growth factors, bioactive molecules, and other therapeutic agents, creating an environment that stimulates tissue regeneration and accelerates wound healing <cit.>. Consequently, 3D bioprinting technology presents a promising avenue for enhancing craniofacial reconstruction modalities in individuals afflicted by head trauma. Wound dimensions, including length, width, and depth, are crucial parameters for assessing wound healing progress and guiding appropriate treatment interventions <cit.>. For effective facial reconstruction, measuring the dimensions of a wound accurately can pose significant challenges in clinical and scientific settings <cit.>. Firstly, wound irregularity presents a common obstacle. Wounds rarely exhibit regular shapes, often characterized by uneven edges, irregular contours, or irregular surfaces. Such irregularity complicates defining clear boundaries and determining consistent reference points for measurement. Secondly, wound depth measurement proves challenging due to undermined tissue or tunnels. These features, commonly found in chronic or complex wounds, can extend beneath the surface, making it difficult to assess the wound's true depth accurately. Furthermore, the presence of necrotic tissue or excessive exudate can obscure the wound bed, further hindering depth measurement. Additionally, wound moisture and fluid dynamics pose significant difficulties. Wound exudate, which may vary in viscosity and volume, can accumulate and distort measurements. Excessive moisture or the presence of dressing materials can alter the wound's appearance, potentially leading to inaccurate measurements. 
Moreover, the lack of standardization in wound measurement techniques and tools adds to the complexity. Currently, deep learning has rapidly advanced in computer vision and medical imaging, emerging as the predominant technique for wound image segmentation. <cit.>. Based on the characteristics of the input data <cit.>, three deep learning methods are used for segmentation and wound measurement, as shown in Fig. <ref>. The study of Anisuzzaman et al. <cit.> presents case studies of these three methods. The methods used to segment the wound based on the characteristics of the input data are as follows - 2D image segmentation: Deep learning methods in 2D for wound segmentation offer several advantages. Firstly, they are a well-established and widely used technique in the field. Additionally, large annotated 2D wound segmentation datasets are available, facilitating model training and evaluation. These methods exhibit efficient computational processing compared to their 3D counterparts, enabling faster inference times and improved scalability. Furthermore, deep learning architectures, such as convolutional neural networks, can be leveraged for effective feature extraction, enhancing the accuracy of segmentation results. However, certain disadvantages are associated with deep learning methods in 2D for wound segmentation. One limitation is the lack of depth information, which can restrict segmentation accuracy, particularly for complex wounds with intricate shapes and depth variations. Additionally, capturing the wound's full spatial context and shape information can be challenging in 2D, as depth cues are not explicitly available. Furthermore, these methods are susceptible to variations in lighting conditions, image quality, and perspectives, which can introduce noise and affect the segmentation performance. - 2D to 3D reconstruction: By incorporating depth information, the conversion to 3D enables a better capture of wounds' shape and spatial characteristics, facilitating a more comprehensive analysis. Moreover, there is a potential for improved segmentation accuracy compared to 2D methods, as the additional dimension can provide richer information for delineating complex wound boundaries. Nevertheless, certain disadvantages are associated with converting from 2D to 3D for wound segmentation. The conversion process itself may introduce artifacts and distortions in the resulting 3D representation, which can impact the accuracy of the segmentation. Additionally, this approach necessitates additional computational resources and time due to the complexity of converting 2D data into a 3D representation <cit.>. Furthermore, the converted 3D method may not completely overcome the limitations of the 2D method. -3D mesh or point cloud segmentation: Directly extracting wound segmentation from 3D data (mesh/point cloud) offers several advantages. One notable advantage is the retention of complete 3D information on the wound, enabling accurate and precise segmentation. By working directly with the 3D data, this method effectively captures the wound's intricate shape, volume, and depth details, surpassing the capabilities of both 2D approaches and converted 3D methods. Furthermore, the direct utilization of 3D data allows for a comprehensive analysis of the wound's spatial characteristics, facilitating a deeper understanding of its structure and morphology. 
Hence, employing a 3D (mesh or point cloud) segmentation method on specialized 3D data, such as those obtained from 3D scanners or depth sensors, can significantly improve accuracy compared to the other two methods. The use of specialized 3D imaging technologies enables the capture of shape, volume, and depth details with higher fidelity and accuracy <cit.>. Consequently, the segmentation results obtained from this method are expected to provide a more precise delineation of wound boundaries and a more accurate assessment of wound characteristics. Therefore, this method can enhance wound segmentation accuracy and advance wound assessment techniques. Besides, facial wounds and defects present unique challenges in reconstructive surgery, requiring accurate localization of the wound and precise estimation of the defect area <cit.>. The advent of 3D imaging technologies has revolutionized the field, enabling detailed capture of facial structures. However, reconstructing a complete face from a 3D model with a wound remains a complex task that demands advanced computational methods. Accurately reconstructing facial defects is crucial for surgical planning, as it provides essential information for appropriate interventions and enhances patient outcomes <cit.>. Some prominent studies, such as Sutradhar et al. <cit.> utilized a unique approach based on topology optimization to create patient-specific craniofacial implants using 3D printing technology; Nuseir et al. <cit.> proposed the utilization of direct 3D printing for the fabrication of a pliable nasal prosthesis, accompanied by the introduction of an optimized digital workflow spanning from the scanning process to the achievement of an appropriate fit; and some other prominent studies presented in survey studies such as <cit.>. However, these methods often require a lot of manual intervention and are prone to subjectivity and variability. To solve this problem, the method proposed in <cit.> leverages the power of modeling <cit.> to automate the process of 3D facial reconstruction with wounds, minimizing human error and improving efficiency. To extract the filling for the wound, the study <cit.> proposed the method of using the reconstructed 3D face and the 3D face of the patient without the wound. This method is called outlier extraction by the authors. These advancements can be leveraged to expedite surgical procedures, enhance precision, and augment patient outcomes, thereby propelling the progression of technology-driven studies on facial tissue reconstruction, particularly in bio 3D printing. However, this method still has some limitations as follows - The method of extracting filler for the wound after 3D facial reconstruction has not yet reached high accuracy. - In order to extract the wound filling, the method proposed by [12] necessitated the availability of the patient's pre-injury 3D facial ground truth. This requirement represents a significant limitation of the proposed wound filling extraction approach, as obtaining the patient's pre-injury 3D facial data is challenging in real-world clinical settings. To overcome these limitations, the present study aims to address the following objective: - Train the 3D facial wound container segmentation automatic model using a variety of appropriate loss functions to solve the data imbalance problem. - Propose an efficient approach to extract the 3D facial wound filling by leveraging the face regeneration model in the study <cit.> combined with the wound segmentation model. 
- Evaluate the experimental results of our proposed method and the method described in the study by Phuong et al. <cit.>. One case study will be selected to be illustrated through 3D printing. § METHODOLOGY In recent years, 3D segmentation methods have made notable advancements in various fields, including computer vision and medical imaging <cit.>. Prominent approaches such as PointNet <cit.>, PointNet++ <cit.>, PointCNN <cit.>, MeshSegNet <cit.>, and DGCNN <cit.> have demonstrated their efficacy in segmenting 3D data. Among these methods, the two-stream graph convolutional network (TSGCNet) <cit.> for 3D segmentation stands out as an exceptional technique, demonstrating outstanding performance and potential in the field. This network exploits many of the powerful geometric features available in mesh to perform segmentation tasks. Consequently, in this study, we have chosen this model as the focal point to investigate its applicability and effectiveness in the context of our research objectives. In <cit.>, the proposed methodology utilizes two parallel streams, namely the C stream and the N stream. To extract high-level geometric representations from the coordinates and normal vectors, TSGCNet incorporate input-specific graph-learning layers. Subsequently, the features acquired from these two complementary streams are amalgamated in the feature-fusion branch to facilitate the acquisition of discriminative multi-view representations, specifically for segmentation purposes. We introduce the details of each active flow of the two-stream graph convolutional network. §.§ Architecture of the C-stream The C-stream is designed to capture the essential topological characteristics derived from the coordinates of all vertices. The C-stream receives an input denoted as 𝐅_𝐜^0, which is an M × 12 matrix representing the coordinates. Each row of this matrix represents a node, and the columns correspond to the coordinates of the cell in a three-dimensional space. This stream incorporates an input-transformer module to align the input data with a canonical space. This module comprises shared Multilayer Perceptrons (MLPs) across nodes, as previously described by Charles et al. <cit.>. The objective of this module is to acquire the parameters of an affine transformation matrix, represented as 𝐓∈ℝ^12 × 12. The affine transformation is applied to the original coordinate matrix 𝐅^0_𝐜 using matrix multiplication. The resulting transformed matrix is represented as 𝐅̂^̂0̂_̂𝐜̂, and the transformation operation can be expressed as: 𝐅̂^̂0̂_̂𝐜̂ = 𝐅^0_𝐜𝐓. By multiplying the input matrix 𝐅^0_𝐜 with the affine transformation matrix 𝐓, the C-stream aligns the coordinates of the nodes to a standard reference or canonical space. The alignment operation plays a vital role in maintaining the consistency and stability during the extraction of geometric features in the subsequent layers of the network. By stabilizing the feature extraction process, the network can more effectively capture essential topological information from the input data. The C-stream progressively integrates a consecutive set of graph-attention layers along the forward path to systematically exploit multi-scale geometric attributes derived from the coordinate aspect. Let 𝐅^l_c ∈ℝ^M × d represent the matrix learned by the (l-1)-th graph attention layer. The row vector 𝐟_i^l ∈ℝ^d within 𝐅^l_c signifies the representation of the i-th node p_i. 
Subsequently, high-level geometric representations 𝐅_c^l+1∈ℝ^M× k is further extracted by the following l-th graph-attention layer in four steps. Step 1. Constructing the Dynamic KNN Graph: A graph G(V, E) is designed in terms of 𝐅_c^l, where V = {p_1,p_2,...,p_M} signifies the collection of M nodes and E denotes the corresponding set of edges which determined by the K-nearest neighbors (KNN) connectivity. It should be emphasized that each node p_i ∈ V merely connects to its KNNs, which are represented by the symbol 𝒩(i). Step 2. Local Information Calibration: For each p_i∈ V, we update the representation 𝐟_ij^l of the j-th nearest neighbor p_ij∈𝒩(i) by integrating its own representation, more precisely: 𝐟̂_ij^l = 𝐌𝐋𝐏^l(𝐟_i^l⊕𝐟_ij^l), where ∀ p_ij∈𝒩(i) and ⊕ represents the operation for channel-wise concatenation. This operation combines the channels from multiple arrays into a single array, preserving the information from each channel (a concatenation operation is just a stacking operation). The p_ij can be the nearest neighbor of many central nodes p_i. Therefore, the information provided by p_ij via the 𝐟̂_ij^l can be more consistent with the central node p_i. Step 3. Estimating Attention Weights: The estimation of attention weights for the neighborhood of each node p_i is accomplished by utilizing a lightweight network shared among nodes. This network is designed to capture the local geometric characteristics. To be more precise, γ_i j^l=σ((𝐟_i^l-𝐟_i j^l) ⊕𝐟_i j^l), ∀ p_i j∈𝒩(i), where γ_i j^l∈ℝ^k is attention weight of neighbor p_ij in the l-th layer, σ(·) is implemented as a lightweight MLP. Step 4. Aggregating Neighborhood Information: The feature aggregation in the l-th layer is mathematically expressed as: 𝐟_i^l+1=∑_p_i j∈𝒩(i)γ_i j^l ⊙𝐟̂_i j^l, where ⊙ denotes the element-wise multiplication operation performed between two feature vectors. §.§ Architecture of the N-stream The C-stream primarily focuses on extracting the basic structure and topology from the coordinates of the nodes. While it can capture general geometric information, it lacks the sensitivity to distinguish subtle boundaries between adjacent nodes with different classes (e.g., the boundary between the injured and non-injured areas). To overcome this limitation, the N-stream is introduced. The N-stream is designed to extract boundary representations based on the normal vectors associated with the nodes. Normal vectors provide information about the orientation and surface properties of the nodes. By integrating the normal vectors of all nodes as inputs, the N-stream can effectively capture detailed and precise boundary information during the learning process. Before extracting the higher-level geometric features, a transformation is applied to return the normal vectors to the same normal space. This normalization helps to ensure consistency in the representation of normal vectors and facilitates further processing. The N-stream is constrained to utilize the same KNN graphs constructed in the C-stream to learn boundary representations within local regions and mitigate interference from distant nodes sharing similar normal vectors belonging to distinct classes. This constraint ensures that the N-stream focuses on capturing meaningful boundary information while accounting for the local context of each node. This shared KNN graph ensures that the relationships between nodes captured by both streams are consistent and aligned. 
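A minimal sketch of the graph-attention update described in Steps 1-4 is given below. This is our own illustrative PyTorch-style code, not the authors' implementation: the layer widths, the choice of K, the sigmoid squashing in the attention MLP, and the retention of the self-loop in the KNN graph are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class GraphAttentionLayer(nn.Module):
    """Sketch of one C-stream graph-attention layer (Steps 1-4 above).
    Widths, K and the sigmoid choice are illustrative assumptions."""

    def __init__(self, d_in, d_out, k=16):
        super().__init__()
        self.k = k
        # Step 2: shared MLP that calibrates each neighbour against its centre node
        self.calib = nn.Sequential(nn.Linear(2 * d_in, d_out), nn.ReLU())
        # Step 3: lightweight MLP producing per-channel attention weights
        self.attn = nn.Sequential(nn.Linear(2 * d_in, d_out), nn.Sigmoid())

    def forward(self, f):                        # f: (M, d_in) node features
        # Step 1: dynamic KNN graph from pairwise feature distances
        # (the self-loop is kept for simplicity)
        idx = torch.cdist(f, f).topk(self.k, largest=False).indices   # (M, k)
        nbr = f[idx]                             # (M, k, d_in) neighbour features
        ctr = f.unsqueeze(1).expand_as(nbr)      # (M, k, d_in) centre copies
        # Step 2: local information calibration, f_hat_ij = MLP(f_i (+) f_ij)
        f_hat = self.calib(torch.cat([ctr, nbr], dim=-1))              # (M, k, d_out)
        # Step 3: attention weights, gamma_ij = sigma((f_i - f_ij) (+) f_ij)
        gamma = self.attn(torch.cat([ctr - nbr, nbr], dim=-1))         # (M, k, d_out)
        # Step 4: weighted aggregation over the neighbourhood
        return (gamma * f_hat).sum(dim=1)        # (M, d_out)

# e.g. first layer of the coordinate stream (12-D input per mesh cell):
# layer = GraphAttentionLayer(12, 64)
# out = layer(torch.randn(15000, 12))           # (15000, 64)
```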
By sharing the KNN graphs, the N-stream can focus on local regions and their associated boundaries. Using graph max-pooling layers in the N-stream distinguishes it from the graph-attention layers employed in the C-stream. This differentiation is essential as the normal vectors encompass unique geometric information that differs from the coordinates of the nodes. Because the normal vector carries only geometry information, the N stream prefers to use max-pooling layers instead of graph-attention layers as in the C stream. Similar to the C stream, the l-th graph max-pooling layer modifies the information locally for each p_i node by updating 𝐟_i^l to 𝐟̂_i^l based on Eq. (<ref>). In order to create the boundary representation for each center p_i, channel-wise max-pooling is applied to aggregate the 𝐟̂_i^l corrected features given by the following formula 𝐟_i^l+1=maxpooling{𝐟̂_i j^l, ∀ p_i j∈ N(i)}. §.§ Combination of features Research <cit.> used MLP to represent a high-level view by concatenating features from each layer 𝐅_c^l and 𝐅_n^l, denoted as 𝐅_𝐜=𝐌 𝐋 𝐏_𝐜(𝐅_𝐜^1 ⊕𝐅_𝐜^2 ⊕𝐅_𝐜^3), 𝐅_𝐧=𝐌 𝐋 𝐏_𝐧(𝐅_𝐧^1 ⊕𝐅_𝐧^2 ⊕𝐅_𝐧^3). These representations help capture information from local to global features in each stream. Subsequently 𝐅_𝐜 and 𝐅_𝐧 are normalized to the mesh according to the following formula ϑ_𝐜=|𝐟_𝐧^i|/|𝐟_𝐜^i|+|𝐟_𝐧^i|, ϑ_𝐧=|𝐟_𝐜^i|/|𝐟_𝐜^i|+|𝐟_𝐧^i|, where 𝐟_𝐜^i ∈𝐅_𝐜 and 𝐟_𝐧^i ∈𝐅_𝐧. After that, the feature vectors 𝐟_𝐜^i and 𝐟_𝐧^i calculated after normalization have the following form 𝐟̂_̂𝐜̂^̂î=ϑ_𝐜𝐟_𝐜^i, 𝐟̂_̂𝐧̂^̂î=ϑ_𝐧𝐟_𝐧^i . Eqs. (<ref>)-(<ref>) are the purpose of bridging the gap between features 𝐟̂_̂𝐜̂^̂î and 𝐟̂_̂𝐧̂^̂î. Nevertheless, feature 𝐟̂_̂𝐧̂^̂î uses the max-pooling operator, which can result in a higher numerical magnitude compared to feature 𝐟̂_̂𝐜̂^̂î. To solve this problem, the self-attention mechanism was applied to balance the contributions of 𝐟̂_̂𝐜̂^̂î and 𝐟̂_̂𝐧̂^̂î. Specifically, 𝐌 𝐋 𝐏_𝐀 𝐭 𝐭 is a lightweight MLP, the calculation of the self-attention weight is determined by the following equation: β_i=𝐌 𝐋 𝐏_𝐀 𝐭 𝐭(𝐟̂_̂𝐜̂^̂î⊕𝐟̂_̂𝐧̂^̂î), where β_i has the same shape as (𝐟̂_̂𝐜̂^̂î⊕𝐟̂_̂𝐧̂^̂î). The quantification of the multi-view geometric feature representation for p_i is expressed as follows: 𝐟̂^̂î=β_i ⊙(𝐟̂𝐜̂^̂î⊕𝐟̂𝐧̂^̂î), where the symbol ⊙ denotes the element-wise multiplication operation. Then, the feature matrix 𝐅̂ = (𝐟̂^̂1̂, 𝐟̂^̂2̂, …, 𝐟̂^̂M̂) is defined represents the multi-view features for all nodes. The feature matrix undergoes a transformation through a MLP, resulting in the generation of an M × C matrix denoted as 𝐏. Each row of 𝐏 corresponds to the probabilities of a particular node belonging to C distinct classes. § EXPERIMENTAL TESTS This section introduces our experimental setup, including data set information and implementation details. §.§ Dataset We use a dataset of 3D faces of craniofacial injuries named Cir3D-FaIR <cit.>. The dataset utilized in this study is generated through simulation within a virtual environment, replicating realistic facial wound locations. A set of 3,678 3D mesh representations of uninjured human faces is utilized to simulate facial wounds. Specifically, each face in the dataset is simulated with ten distinct wound locations. Consequently, the dataset has 40,458 human head meshes, encompassing uninjured faces and wounded in many different positions. Each 3D face mesh consists of 15,000 faces and is labeled according to the face location of the mesh, indicating specifically the presence of the wounds. 
This simulation dataset has been reviewed by expert physicians to assess the complexity associated with the injuries. Figure <ref> showcases several illustrative examples of typical cases from the dataset. The dataset is randomly divided into separate subsets for training and validation, following an 80:20 ratio: 80% of the data is assigned to training and 20% to validation. The objective is to perform automated segmentation of the 3D facial wound region and to integrate it with the defect-face reconstruction of Phuong et al. <cit.> in order to extract the wound-filling part specific to the analyzed face. §.§ Training strategy We use the model of <cit.> to segment the wound area on the patient's 3D face. This model is particularly effective at discriminating boundaries between regions belonging to distinct classes. Our dataset comprises two distinct classes, namely facial abnormalities and normal regions. Because facial wounds occupy a much smaller proportion of the mesh than the normal area, the training strategy must handle this class imbalance effectively. We therefore consider loss functions that are well suited to imbalanced semantic segmentation: focal loss <cit.>, dice loss <cit.>, cross-entropy loss, and weighted cross-entropy loss <cit.>. 1) Focal loss is defined as: ℒ_focal_loss = -α_t(1 - p_t)^γ log(p_t), where p_t represents the predicted probability of the true class; α_t is the balancing factor that assigns different weights to different classes; and γ is the focusing parameter that modulates the rate at which easy and hard examples are emphasized. Focal loss reduces the loss contribution from well-classified examples and focuses on samples that are difficult to classify correctly, which helps handle class imbalance and improves the model's performance on minority classes. 2) Dice loss, derived from the Sørensen–Dice coefficient, is defined as: ℒ_dice_loss = 1 - (2∑_i=1^N p_i y_i + ϵ)/(∑_i=1^N p_i^2 + ∑_i=1^N y_i^2 + ϵ), where p represents the predicted probability or output of the model; y is the ground truth or target label; N is the number of elements in the predicted and ground truth vectors; and ϵ is a small constant added to avoid division by zero. 3) Cross-entropy segmentation loss is defined as: ℒ_cross_entropy_segmentation_loss = -∑_i=1^M ∑_c=1^C y_ic log(p_ic), where y_ic denotes the ground truth label for the i-th sample and c-th class; p_ic represents the predicted probability for the i-th sample and c-th class; M is the total number of samples; and C is the number of classes. 4) Weighted cross-entropy loss is defined as: ℒ_weighted_cross_entropy_loss = -1/N ∑_i=1^N [w r_i log(p_i) + (1-r_i) log(1-p_i)], where w = (N - ∑_n p_n)/∑_n p_n represents the weight assigned to each point based on its class. The model is trained repeatedly, minimizing each of these loss functions in turn, in order to select the most effective model. The training process was conducted on a single NVIDIA Quadro RTX 6000 GPU over 50 epochs. The Adam optimizer was employed with a mini-batch size of 4. The initial learning rate was set to 1e-3 and decayed by a factor of 0.5 every 20 epochs. The experimental process for identifying the most efficient model for wound segmentation is described in Algorithm <ref>.
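For concreteness, the four loss functions above can be written as short NumPy routines. This is only an illustrative sketch: the ϵ constants, the mean reductions, and the assumption that per-cell class probabilities are supplied directly are our own choices, not the training code used in the study.

```python
import numpy as np

def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
    """p_t: predicted probability of the true class for each mesh cell."""
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + 1e-8))

def dice_loss(p, y, eps=1e-6):
    """p: predicted wound probability per cell; y: binary wound label per cell."""
    return 1.0 - (2.0 * np.sum(p * y) + eps) / (np.sum(p**2) + np.sum(y**2) + eps)

def cross_entropy_loss(P, Y):
    """P: (M, C) predicted class probabilities; Y: (M, C) one-hot ground truth."""
    return -np.sum(Y * np.log(P + 1e-8)) / P.shape[0]

def weighted_cross_entropy_loss(p, r):
    """p: predicted wound probability per cell; r: binary wound label per cell.
    The weight w up-weights the (rare) wound class, following the formula above."""
    w = (p.size - np.sum(p)) / np.sum(p)
    return -np.mean(w * r * np.log(p + 1e-8) + (1.0 - r) * np.log(1.0 - p + 1e-8))

# toy check on random predictions for a 100-cell mesh with ~10% wound cells
rng = np.random.default_rng(0)
y = (rng.random(100) < 0.1).astype(float)
p = np.clip(rng.random(100), 1e-3, 1 - 1e-3)
print(focal_loss(np.where(y == 1, p, 1 - p)), dice_loss(p, y),
      weighted_cross_entropy_loss(p, y))
```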
§ RESULTS AND DISCUSSION The model was trained on the dataset using four iterations of experiments, wherein different loss functions were employed. The outcomes of these experiments are presented in Table <ref>. The utilized loss functions demonstrate excellent performance in the training phase, yielding highly satisfactory outcomes on large-scale unbalanced datasets. Specifically, we observe that the model integrated with cross-entropy segmentation loss exhibits rapid convergence, requiring only 16 epochs to achieve highly favorable outcomes. As outlined in Section <ref>, the model exhibiting the most favorable outcomes, as determined by the cross-entropy segmentation loss function, was selected for the segmentation task. This particular model achieved an impressive mIoU score of 0.9999986. Some illustrations for the segmentation result on a 3D face are shown in Fig. <ref>. The results demonstrate the effectiveness of the two-stream graph convolutional network in accurately segmenting complex and minor wounds, indicating that the model successfully captures the geometric feature information from the 3D data. From the above segmentation result, our primary objective is to conduct a comparative analysis between our proposed wound fill extraction method and a method with similar objectives as discussed in the studies by Phuong et al. <cit.>. A notable characteristic of the Cir3D-FaIR dataset is that all meshes possess a consistent vertex order. This enables us to streamline the extraction process of the wound filler. Utilizing the test dataset, we employ the model trained in the study by Phuong et al. <cit.> for the reconstruction of the 3D face. Subsequently, we apply our proposed method to extract the wound fill from the reconstructed 3D face. Let v(x,y,z) ∈ℳ(V, F) is the vertices of the mesh containing the wound. In which V and F are the set of vertices and faces of the mesh, respectively. As previously stated, we introduce a methodology for the extraction of wound filling, which is comprehensively elucidated in Algorithm <ref> and Fig. <ref>. For the purpose of notational convenience, we designate the filling extraction method presented in the study by Phuong et al. <cit.> as the "old proposal". We conduct a performance evaluation of both our proposed method and the old proposal method on a dataset consisting of 8090 meshes, which corresponds to 20% of the total dataset. A comprehensive description of the process for comparing the two methods is provided in Algorithm <ref>. The results show that our proposal has an average accuracy of 0.9999986%, while the method in the old proposal is 0.9715684%. The accuracy of the fill extraction method has been improved, which is very practical in the medical reconstruction problem. After that, the study randomly extracted the method outputs from the test set, depicted in Fig. <ref>. We have used 3D printing technology to illustrate the results of the actual model, which is significantly improved compared to the old method, as shown in Fig. <ref>. Our research only stops at proposing an efficient wound-filling extraction method with high accuracy. Therefore, further research can focus on developing personalized surgical planning tools based on reconstructed 3D models. § CONCLUSIONS This study explored the benefits of using a two-stream graph convolutional network to segment 3D facial trauma defects automatically. Furthermore, we have proposed an improved method to extract the wound filling for the face. 
The results show the most prominent features as follows - An auto-segmentation model was trained to ascertain the precise location and shape of 3D facial wounds. We have experimented with different loss functions to give the most effective model in case of data imbalance. The results show that the model works well for complex wounds on the Cir3D-FaIR face dataset with an accuracy of 0.9999986%. - Concurrently, we have proposed a methodology to enhance wound-filling extraction performance by leveraging both a segmentation model and a 3D face reconstruction model. By employing this approach, we achieve higher accuracy than previous studies on the same problem. Additionally, this method obviates the necessity of possessing a pre-injury 3D model of the patient's face. Instead, it enables the precise determination of the wound's position, shape, and complexity, facilitating the rapid extraction of the filling material. - This research proposal aims to contribute to advancing facial reconstruction techniques using AI and 3D bioprinting technology to print skin tissue implants. Printing skin tissue for transplants has the potential to revolutionize facial reconstruction procedures by providing personalized, functional, and readily available solutions. By harnessing the power of 3D bioprinting technology, facial defects can be effectively addressed, enhancing both cosmetic and functional patient outcomes. - From this research direction, our proposed approach offers a promising avenue for advancing surgical support systems and enhancing patient outcomes by addressing the challenges associated with facial defect reconstruction. Combining machine learning, 3D imaging, and segmentation techniques provides a comprehensive solution that empowers surgeons with precise information and facilitates personalized interventions in treating facial wounds. § ACKNOWLEDGMENTS We would like to thank Vietnam Institute for Advanced Study in Mathematics (VIASM) for hospitality during our visit in 2023, when we started to work on this paper. unsrt
http://arxiv.org/abs/2307.03029v1
20230706144544
Modelling the response of a CsI(Tl)-PiN photodiode Microscintillator Detector
[ "Justin Tabbett", "Karen L. Aplin" ]
physics.ins-det
[ "physics.ins-det" ]
Justin Tabbett (corresponding author, jt16596@bristol.ac.uk) and Karen L. Aplin, Faculty of Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR, United Kingdom. The full instrument response of a CsI(Tl)-PiN photodiode radioactivity detector, intended for deployment on a meteorological radiosonde, has been modelled by combining a physics-based model of the sensor with the detector circuit response, obtained via an LTSpice simulation. The model uses the incident energy of a gamma ray as an input, and produces the pulse expected from the detector. The detector response was verified by comparing the simulated energy calibration with laboratory radioactive sources. The Schmitt trigger part of the measurement circuit is found to control the observed minimum detectable energy of 223 keV. Additionally, the energy sensitivity of the PiN detector was found to be 0.529 ± 0.010 mV/keV in the 200-800 keV range. The simulation and laboratory calibrations were consistent to better than 20% over the operating range of the instrument, decreasing to 0.34% at 800 keV. Keywords: Radioactivity detector; Ionisation; Scintillator; PiN photodiode; Simulation; LTSpice § INTRODUCTION There is a lack of readily available instrumentation to study ionisation in the atmosphere, and the effects of energetic particles on weather and climate <cit.>. The creation of atmospheric ions by galactic cosmic rays, solar UV radiation, and electron precipitation events means that the effects of these ions manifest in different regions of the atmosphere. For instance, neutron monitoring stations provide a global understanding of galactic cosmic ray intensities; however, at altitudes below 5 km, it has been shown that there is little correlation between neutron counts and measured ionisation rates <cit.>. Predominantly, satellites have been used for primary particle detection <cit.>, and ground-based instruments for secondary particle detection; the intermediary region therefore creates an opportunity for a new, balloon-borne detector. A novel microscintillator ionisation detector, here called the PiN detector, capable of measuring energy and count rate, has been developed for deployment on meteorological radiosondes (weather balloons). Hundreds of these balloons are launched daily for weather forecasting purposes, but as they are not routinely retrieved, and have limited capability to carry additional payloads, the cost is limited to a few hundred pounds and the mass to tens of grams. Geigersondes are suitable for balloon applications but do not offer energy detection <cit.>. A miniaturised CsI(Tl) scintillator coupled to a PiN diode both meets the radiosonde power and mass requirements, and adds energy detection capability. The detector was first deployed on a meteorological radiosonde in 2016 to investigate the transition region where surface-borne energetic particles cease to be the dominant source of ionising radiation, in favour of high-energy particles in the free troposphere <cit.>. During ongoing development of the detector, it has been used to measure background radiation from natural sources such as radon gas. During balloon deployment in 2018, the detector unexpectedly observed stratospheric X-rays, which was corroborated by NOAA POES spacecraft data <cit.>. Deployment of the detector is well established; however, the work presented in this paper analyses and discusses the instrumentation in more detail than previously.
This will allow both for retrospective analysis of previous flight data and for future development. The PiN detector sensor is an Advatech CsI(Tl) scintillator, measuring 10×10×8 mm3, coupled to a silicon PiN RD100 photodiode <cit.>. There are two stages in particle detection within the sensor. First, incident radiation generates a light pulse in the scintillator, described by characteristic decay times. The second stage of particle detection involves the conversion of a light pulse to a current pulse. The magnitude of the current pulse is proportional to the energy of the incident ionising radiation. The next stage of particle detection involves the current passing through the electronics of the detector. Figure <ref> shows the detection process where the current flows from the PiN photodiode to a transimpedance amplifier, located physically close to the photodiode on the board to minimise losses. The subsequent signal passes through a frequency dependent gain stage, followed by the signal conditioning circuitry. There are three key components in the signal conditioning circuit: a Schmitt trigger, an analogue pulse, and a negative peak detection circuit (valley detector). The trigger activates an interrupt routine in the PIC16F676 microcontroller code, signalling the microcontroller to measure the voltages on the analogue pulse and the valley signal. A pulse height, dV, is calculated by the difference between the analogue pulse and the valley trace. The measured analogue pulse value is larger than the measured valley value, as the analogue pulse is a negative pulse. Regarding the detector data collection, the microcontroller uses a 10-bit analogue-digital converter (ADC) to measure voltages. The microcontroller is held at a 5 V bias, where 5 V is recorded at 1023 ADC counts. The reference voltage level rests at ∼ 3.211 V (657 ADC). The pulse height should then not exceed 657 ADC. For data output, the microcontroller measures the reference and valley voltage values separately, additionally assigning a time-stamp for the event, to a serial output. We present, for the first time, a full-stack model of the detector response, from the initial interaction of the scintillator with ionising radiation, through to the production of voltage traces which would be passed to the microcontroller for measurement. A physics-based model written in Python emulates the response of the sensor, the signals from which are used as input to a Simulation Program with Integrated Circuit Emphasis (SPICE) simulation to emulate the electronics response of the detector. Software packages such as Geant4, FLUKA, and MCNPX are often used to simulate the interactions between ionising radiation and scintillator crystals <cit.>. FLUKA was not suitable as the desired light pulse output needed to be in the time-domain, to combine with the photodiode response; whereas FLUKA produces quantities such as spatial energy depositions, emission spectra, and particle fluence, among others. Using Geant4 in combination with SPICE programs has been suggested as a method of investigating the overall response of a detector and would likely be a valid approach <cit.>; however, as the scintillator is a commercial product with its typical response given, it would be unnecessary to model its response from first principles. Therefore, an analytical approach was deemed sufficient. 
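To make the data-acquisition step concrete, the sketch below emulates what happens after a Schmitt-trigger interrupt: both ADC channels are read, the pulse height dV is formed, and a time-stamped record is produced. The function names and record format are hypothetical; only the 10-bit scale (5 V at 1023 counts) and the ~3.211 V reference level are taken from the description above.

```python
ADC_FULL_SCALE_V = 5.0      # 5 V maps to 1023 counts on the 10-bit ADC
ADC_MAX_COUNTS = 1023

def volts_to_adc(v):
    return round(v / ADC_FULL_SCALE_V * ADC_MAX_COUNTS)

def adc_to_mv(counts):
    return counts / ADC_MAX_COUNTS * ADC_FULL_SCALE_V * 1e3

def on_trigger(analogue_pulse_adc, valley_adc, timestamp):
    """Emulates the interrupt routine: read both channels, form the pulse
    height dV, and return the record written to the serial output."""
    dV = analogue_pulse_adc - valley_adc      # analogue pulse (reference) > valley
    return {"t": timestamp, "reference": analogue_pulse_adc,
            "valley": valley_adc, "dV_adc": dV, "dV_mV": adc_to_mv(dV)}

print(volts_to_adc(3.211))             # the ~3.211 V reference level: 657 counts
print(on_trigger(659, 639, 12.3))      # smallest pulse seen in the data: dV = 20 ADC
```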
Additionally, the simulation of the total response of the detector was the desired outcome, therefore smaller deviations likely to arise between a numerical and analytical approach <cit.>, could be eclipsed by the response of electronic components used in the signal conditioning circuit. Like the scintillator, the photodiode is governed by characteristic decay times (rise- and fall-times); however, as a commercial component, only typical rise times, at a specific reverse bias and for a given wavelength, are provided. Therefore, the rise time has been determined by using estimated parameters, and the fall time was determined during a tuning phase of the model development. Convolution has been used as the method of interfacing the scintillator and photodiode responses <cit.>, ultimately producing a current pulse. Figure <ref> illustrates the detection process, noting the output from each stage of the Python model. Finally, the SPICE simulation has been created using LTSpice; the current pulse from the physics model is used as an input for the simulation. Relevant circuit components have been recreated in LTSpice, allowing for analogue voltage traces to be measured. Accompanying the model and simulation, a laboratory calibration with radioactive sources is presented, serving as the primary method of validation for the model. Terrestrial gamma radiation, originating from the 238U decay series, has additionally been used in the detector calibration. Section <ref> details the model method, with the scintillator response in Section <ref>, the photodiode response in Section <ref>, and the current pulse in Section <ref>. The simulation of the electronics of the detector are detailed in Section <ref>, and the laboratory calibration is explored in Section <ref>. Finally, a comparison of the model and detector is given in Section <ref>. § MODEL METHOD Modelling the PiN detector response was split into two sections: the physics model, and the electronics simulation. Within the physics model, the responses of the scintillator and photodiode are considered, resulting in their combination which produces a current pulse proportional to the incident radiation. The current pulse serves as an input for the electronics simulation. The simulation models the remainder of the detector until the data acquisition stage. §.§ CsI(Tl) Scintillator Response To model the production of photons by the interaction of radiation with the scintillator, the following assumptions were made <cit.>: * Total number of photons generated are determined by the light yield (γ/MeV) of the scintillator * Photons are emitted over a range of wavelengths in different proportions, described by the emission spectrum * The light pulse generated is characterised by at least two decay times The scintillator emission decay curve is described by Equation <ref> L(t)=(1-exp[-t/τ_rs])-a_1(1-exp[-t/τ_1])-a_2(1-exp[-t/τ_2]) where τ_rs is scintillator rise time, τ_1,2 are the fast and slow decay times, respectively, and a_1,2 their proportions. In the instance of the CsI(Tl) crystal present in the PiN detector, the manufacturer reports a single decay time, therefore in the model a_2=0. The model uses discretised wavelength steps to determine the energy generated by scintillation photons during an interaction. 
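A minimal sketch of the scintillator decay curve L(t) is given below, using the 50 ns rise and 900 ns decay times quoted later in the paper; setting a_1 = 1 (so that the single-component pulse returns to zero) and the choice of time grid are assumptions made for illustration.

```python
import numpy as np

# 50 ns rise and 900 ns decay (values quoted in the text); a2 = 0 for the
# single-decay-component CsI(Tl) crystal, and a1 = 1 is assumed here.
TAU_RS, TAU_1, A1 = 50e-9, 900e-9, 1.0

def scint_pulse(t):
    """Scintillator emission decay curve L(t) for a single event."""
    return (1.0 - np.exp(-t / TAU_RS)) - A1 * (1.0 - np.exp(-t / TAU_1))

t = np.linspace(0.0, 10e-6, 2001)
L = scint_pulse(t)
print(f"L(t) peaks at ~{t[np.argmax(L)]*1e9:.0f} ns with value {L.max():.2f}")
```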
For each wavelength, λ, a number of photons, N(λ) are produced described by the light yield LY, incident energy E_in, and emission spectrum proportion S(λ): N(λ)=LY · E_in· S(λ) The emission spectrum, shown in Figure <ref>, was digitised using a Python-based plot digitiser and linear interpolation <cit.>. This processes allows for the scintillator emission spectrum and the photodiode responsivity to be evaluated at the same steps (every 10 nm) over the model's wavelength range (400-800 nm). Therefore, a summation of the energy of a single wavelength photon, E_s= hc/λ, multiplied by the corresponding number of photons, over this range, yields the total energy emitted by the scintillator for a single event. E_Tot=∑_λ=400^800 E_s(λ)· N(λ) It is necessary to convert the energy emitted from the scintillator into a power quantity, because the photodiode responsivity relates incident optical power to an output photocurrent - discussed in Section <ref>. Therefore, a cutoff time, t_co, has been empirically determined. The cutoff time was set to 3.05 μs as this, in combination with the photodiode fall time, gave the desired response when comparing the simulated and laboratory 662 keV pulse height. The final expression for the power output from the scintillator is then P(t)=E_Tot/t_co· L(t) §.§ PiN Photodiode Response The temporal response of the photodiode is governed by the rise time τ_rp, typically composed of: drift and diffusion times, and the RC time constant of the diode-circuit. The relationship between the three components is given by: τ_rp=√(τ_RC^2+τ_drift^2+τ_diff^2) Where τ_RC, τ_drift, and τ_diff are the RC, drift times, and diffusion times, given in Equation <ref>, <ref>, and <ref> respectively. The derivation of the RC time constant and its dependencies are given in Equations <ref>-<ref> which denote the RC time constant, the junction capacitance C_j, the depletion width W_d, and the series resistance R_s <cit.>: τ_RC=2.2 C_j (R_s+R_L) where R_L=50 Ω is the load resistance which has been estimated; C_j=ϵ_Siϵ_0A/W_d where ϵ_Si = 11.9 F/m is the relative permittivity of silicon, ϵ_0 is the permittivity of free space, and A= 100 mm2 is the active area of the photodiode; W_d=√(2ϵ_Siϵ_0/qN_n(V_A+V_Bi)) where q is the charge of an electron, V_A=12 V is the applied bias, V_Bi=0.65 V is estimated value of the built-in bias of silicon, and N_n is the doping concentration; R_s=R_c+(W_s-W_d)ρ/A where ρ=(qN_nμ)^-1 is the conductivity of silicon, R_c=0 Ω is the assumed contact resistance in the diode, and W_s is the silicon substrate width. The substrate width and doping concentration N_n were estimated by considering the case where V_A=100 V, and the photodiode is completely depleted, meaning W_s=W_d. Using the previous assumption, Equation <ref> and the typical value of the junction capacitance, 50 pF, under such conditions, the substrate width is estimated to be W_s=210.7252 μm. Following, an estimate for the doping concentration can be obtained using this result and Equation <ref>, giving N_n=2.985 ×10^18. Additionally, the electron mobility of silicon is taken to be 1350 cm2(Vs)-1 <cit.>. The drift time component is given by: τ_drift=W_d^2/2μ(V_A+V_Bi) The diffusion time is given by: τ_diff= q(W_s-W_d)^2/μ_h k T where μ_h is the hole mobility equal to 480 cm2(Vs)-1 <cit.>. A summary of the photodiode model parameters are given in Table <ref>. The PiN photodiode response curve is given by C(t)=-exp[-t/τ_fp]+exp[-t/τ_rp] where the photodiode fall-time τ_fp > τ_rp. 
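The rise-time estimate can be reproduced numerically from the equations above using the parameter values quoted in the text; assuming T = 300 K and reading the doping concentration in SI units, the result is close to the 15.08 μs adopted in the model. The diffusion term clearly dominates over the RC and drift terms.

```python
import numpy as np

q, eps0, kB, T = 1.602e-19, 8.854e-12, 1.381e-23, 300.0   # T = 300 K assumed
eps_si, A_pd = 11.9, 100e-6          # relative permittivity; active area (m^2)
V_A, V_Bi = 12.0, 0.65               # applied and built-in bias (V)
N_n = 2.985e18                       # doping concentration (m^-3, as estimated)
W_s = 210.7252e-6                    # substrate width (m)
mu_e, mu_h = 0.135, 0.048            # electron / hole mobility (m^2 V^-1 s^-1)
R_L, R_c = 50.0, 0.0                 # load and contact resistance (ohm)

W_d = np.sqrt(2 * eps_si * eps0 / (q * N_n) * (V_A + V_Bi))   # depletion width
C_j = eps_si * eps0 * A_pd / W_d                              # junction capacitance
rho = 1.0 / (q * N_n * mu_e)                                  # silicon resistivity
R_s = R_c + (W_s - W_d) * rho / A_pd                          # series resistance
tau_RC = 2.2 * C_j * (R_s + R_L)
tau_drift = W_d**2 / (2 * mu_e * (V_A + V_Bi))
tau_diff = q * (W_s - W_d)**2 / (mu_h * kB * T)
tau_rp = np.sqrt(tau_RC**2 + tau_drift**2 + tau_diff**2)
print(f"rise time = {tau_rp*1e6:.2f} us")    # ~15 us, cf. the quoted 15.08 us
```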
The rise time governs the rate at which the photodiode can respond to a pulse of light. Typical literature values for the rise time are reported for the case where the photodiode is operated in its fully depleted mode at at reverse bias of 75 V. It was therefore necessary to obtain a rise time for the conditions employed in practice, achieved through the above process. A reasonable rise time of 15.08 μs was obtained as a result. The photodiode fall time was determined by matching the ADC pulse height for the 137Cs 662 keV energy peak in the simulation and the calibration and remained fixed at this value. This parameter, ultimately, serves as a “catch-all” for inaccuracies which might arise in the estimation of the doping concentration of silicon, or the substrate width. Therefore, while model parameters have been chosen in Table <ref>, there are other possible combinations which yield the same rise time. §.§ Current Pulse The current pulse is given by a convolution of the scintillator and photodiode response, using the photodiode responsivity - digitised and shown in Figure <ref> - to convert from the scintillator power to electrical current <cit.>. The current pulse, I(t) is given by: I(t)=R_λP(t)*C(t) where R_λ is the photodiode responsivity over the same wavelength range as the scintillator emission spectrum. The responsivity is the ratio of current generated to optical power. The magnitude of the current pulse is given explicitly by: I_max=1/|L(t)*C(t)|∑_λ=400^800E_s(λ)N(λ)R_λ(λ)/t_co Where, |L(t)*C(t)|, denotes the maximum value of the convolution therefore serving as a normalisation factor. §.§ Electronics Simulation The current pulse from the physics model has been used as a current source input in LTSpice. The current source is connected to the transimpedence amplifier, yielding a voltage trace. The remainder of the circuit - until the microcontroller - was built in LTSpice, following the layout given in Figure <ref>. The measurement of the pulse analogue and valley detector voltages were completed manually, converting the voltage into an analogue-to-digital count. By varying the input energy, a relationship between incident energy and pulse height was determined. §.§ Laboratory Calibration To enable the validation of the model, a PiN detector was calibrated using Barium-133 (133Ba) and Cesium-137 (137Cs) radioactive sources. The experimental setup for the 137Cs source can be seen in Figure <ref>, and for the 133Ba source in Figure <ref>; further experimental details are given in Table <ref>. During 137Cs data collection, the PiN detector was placed directly opposite the source, meanwhile for 133Ba, the source was placed adjacent to the detector. The background spectrum was collected in the same location with the sources shielded. § COMPARISON OF DETECTOR AND MODEL §.§ Simulation and Laboratory Results The calibration allows for voltage pulses caused by specific gamma energies to be identified. The binned count rate data are given in Figure <ref>, showing the relative intensities of background and source energies. Figure <ref> displays the count rate data with an asymmetric least-squares baseline (p=0.001, λ=0.0001) subtraction applied <cit.>. The peaks associated with each respective source are labelled. Regions of higher count rate in the spectra in Figures <ref> and <ref> are important when identifying energy peaks - not solely the peaks themselves. 
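As a summary of the model chain described in this section, the sketch below convolves the scintillator and photodiode responses to obtain the shape of the current pulse I(t). The wavelength-resolved sum over the emission spectrum and responsivity curves is collapsed here into a single effective responsivity, and the total optical energy per event is an assumed placeholder value, so only the pulse shape and order of magnitude are meaningful.

```python
import numpy as np

dt = 10e-9
t = np.arange(0.0, 200e-6, dt)

# scintillator emission L(t): 50 ns rise, 900 ns decay (single component)
L = (1.0 - np.exp(-t / 50e-9)) - (1.0 - np.exp(-t / 900e-9))
# photodiode response C(t): rise 15.08 us, fall 32 us (values used in the model)
C = -np.exp(-t / 32e-6) + np.exp(-t / 15.08e-6)

# normalise the convolution so the peak magnitude equals R_eff * E_tot / t_co,
# mirroring the expression for I_max; R_eff and E_tot are assumed values
R_eff, E_tot, t_co = 0.35, 1.0e-14, 3.05e-6      # A/W, J, s
conv = np.convolve(L, C)[: t.size]
I = R_eff * (E_tot / t_co) * conv / np.abs(conv).max()
print(f"peak current ~ {np.abs(I).max()*1e9:.2f} nA (negative-going pulse)")
```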
For example, a 137Cs source will induce an increased count rate before its characteristic energy peak, due to the Compton edge, and back-scatter <cit.>. Comparing the raw 137Cs and background counts, it can be seen that there is an increase over the whole range above 20 ADC, approaching 9 times as many counts. Figure <ref> shows the relationship between the incident energy (keV) and the pulse height given in ADC for the simulation and laboratory calibration. The calibration for the simulation is: PH = (0.103 ± 0.003) E - (3 ± 1) and for the calibration: PH = (0.108 ± 0.002) E - (7.1 ± 0.9) The laboratory calibration yields an energy resolution of 9.3 ± 0.2 keV and an energy sensitivity of 0.529 ± 0.010 mV/keV. Figure <ref> indicates the percentage difference between the two calibrations, and highlights how appropriate the model is for simulating the gamma response of the detector. At an energy of 432.6 keV, the percentage difference between the laboratory and simulation is less than 5 %, and 800 keV, the difference is 0.34 %. For energies below 432.6 keV, the large difference in the intercepts of the linear fits becomes more apparent as the fits diverge. §.§ Minimum detectable energy The minimum detectable energy is the energy that produces a pulse height of 1 ADC. The predicted response denotes this to be a 39 keV gamma ray; for the laboratory calibration, 75 keV. The enclosure containing the detector is anticipated to permit through 95% of gammas with energy ≥160 keV, which might provide a practical lower limit However in practice, the lowest recorded pulse height in the laboratory data is 20 ADC corresponding to 223 and 251 keV in the simulation and laboratory, respectively. While the pulse analogue and valley in the simulation respond to incident radiation of any energy, the trigger does not. For example, at the anticipated minimum detectable energy of 39 keV, the pulse analogue does not exceed the Schmitt trigger threshold; in practice this means no event would be recorded by the microcontroller. Further, at a simulated pulse of 223 keV the Schmitt trigger does not appreciably activate, remaining at 5 V within 1 μV. Figure <ref> shows the minimum value of the simulated Schmitt trigger with input energies varying from 230-260 keV. In normal operation the Schmitt trigger will drop from 5 V to 0 V, as illustrated in Figure <ref>, however if the voltage drop is sufficiently small, the microcontroller will not register the change as a flag for the measurement routine. Over this range, the three lowest energies in Figure <ref> would not be detected by the microcontroller because the trigger has not activated, despite there being a 22-23 ADC difference in the reference and valley voltages. Pulses corresponding to energy greater than or equal to 250 keV would be measured in the physical system certainly, as there would be full activation of the trigger. However for simulated gamma rays with energy in the 245-250 keV range, the Schmitt trigger minimum would need to decrease below 1 V as defined by the microcontroller specification. For all of the energies tested in Figure <ref>, the threshold voltage was 639±1 ADC meaning the pulse analogue would need to decrease below this in order for the trigger to activate. In all simulation runs, the reference ADC was 657 ADC, therefore suggesting the smallest possible pulse would be 18 ADC. The smallest pulse recorded in the laboratory data was 20 ADC, where the reference was 659 ADC and the valley was 639 ADC. 
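The laboratory calibration line can be inverted to move between pulse heights and energies; the short sketch below reproduces the energy of the smallest recorded pulse (20 ADC) and the quoted sensitivity, using the 5 V / 1023-count ADC scale.

```python
GAIN_ADC_PER_KEV, OFFSET_ADC = 0.108, -7.1      # laboratory fit: PH = 0.108*E - 7.1

def pulse_height_adc(energy_kev):
    return GAIN_ADC_PER_KEV * energy_kev + OFFSET_ADC

def energy_kev(pulse_height):
    return (pulse_height - OFFSET_ADC) / GAIN_ADC_PER_KEV

print(round(energy_kev(20), 1), "keV for the smallest recorded pulse")   # ~251 keV
# sensitivity in mV/keV on the 5 V / 1023-count ADC scale
print(round(GAIN_ADC_PER_KEV * 5000.0 / 1023, 3), "mV/keV")              # ~0.528
```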
In practice there can be fluctuations in the reference level, which is measured from the pulse analogue, but these fluctuations will not affect the minimum detectable energy. This is because the valley and trigger sub-circuits are driven by the pulse analogue voltage line. For example if the pulse analogue is 10 ADC higher than normal, the resting valley voltage will be 10 ADC higher, and the trigger threshold will be higher than normal. Conversely, the maximum detectable energy could be governed by a number of aspects. Assuming, normal behaviour of the reference level, the maximum pulse height could be 657 ADC, corresponding to an energy of 6.1 MeV, purely considering the available voltage drop in the valley. However, more likely the limiting factor on the maximum detectable energy would be the absorption efficiency of the scintillator. For similar sized CsI(Tl) crystals, the absorption efficiency has been shown to decrease to 0% at 10 MeV <cit.>. The smallest pulse considerations are given in Table <ref>, as the model, laboratory data, and trigger threshold each suggest different values (24, 20, 18 ADC, respectively). These ADC span an energy range of 203.9 - 288.0 keV across both calibrations. The wide energy range could be a result of the limited number of calibration points in the low energy region. In summary, the trigger threshold voltage currently determines the minimum detectable energy. The threshold value could be adjusted to increase sensitivity to lower-energy particles, but this would need to be carefully traded off against the level of noise observed in the detector. §.§ Model considerations Due to the commercial nature of the photodiode used in the detector, a number of the parameters had to be determined or estimated. As a result there are degenerate combinations of parameters which result in the same photodiode response, when considering the pulse height obtained by the simulation. For example, if the pulse height is too small, the scintillator cutoff time can be decreased (producing a larger optical power), or the photodiode fall time can be increased, or the photodiode rise time can be decreased, all of which result in the final pulse being larger. The latter most is dependent on the doping concentration, the electron and hole mobilities, which have typical non-specific values, and resistances in the diode circuit. Therefore, flexibility is available in the model to fine tune its performance to more closely match the actual system. Future work could include a comparison of voltage traces for single events, measuring the pulse height by the voltage traces, mirroring the model, and comparing them with the microcontroller measured pulse heights. This would highlight the effect of different parameters as an altered photodiode rise/fall time will yield a different pulse shape overall. Matching the simulated and actual pulse shapes would provide more understanding on which parameters are more important to the agreement of model and detector. An aspect to consider for this approach is that matching voltage traces offers no energy calibration information to compare against the model energy input. § CONCLUSION A model has been developed to emulate the sensor and electronics response of a miniaturised CsI(Tl) PiN photodiode ionisation detector. The rise and decay times for the scintillator are given to be 50 ns and 900 ns, respectively. The final photodiode rise time was given to be 15.08 μs and fall time was determined through fitting to be 32 μs. 
The combination of a physics model and an electronics simulation has resulted in a detector model which has been verified against laboratory calibrations with 133Ba and 137Cs sources. At 200 keV the difference between the model and laboratory data was 18.3 %, and at 800 keV, it was 0.34 %. This modelling, together with new calibration techniques, has improved the detector energy resolution compared to previous work <cit.><cit.>, to typically 9.3 ± 0.2 keV over the 200-800 keV range. This is suitable for the intended meteorological radiosonde application, as data rate limitations typically further degrade the energy resolution. The model has been used to suggest that the Schmitt trigger threshold voltage is the principal factor determining the minimum detectable energy. According to the simulation, the minimum energy would be ∼250 keV, producing a pulse height of 24 ADC. In the experimental data, the minimum pulse height was 20 ADC. The difference of 4 ADC (19.6 mV) between experiment and model represents <1% of the ADC range available, as the reference rests at 657 ADC (3211 mV). Agreement between the predicted and actual response means that components in the electronics can confidently be changed and tested in the simulation before implementation, speeding up development. Despite the PiN photodiode being a commercial product, with some physical quantities kept as proprietary information, the model affords flexibility in choosing, and matching, the photodiode fall time to accommodate parameters which have otherwise been estimated or assumptions which may not hold. It can also be adapted to simulate changes in the system such as a different scintillator material or the response at different temperatures. Overall this research offers increased confidence in the response of the detector, especially when considering the minimum detectable energy. This aspect is particularly valuable when evaluating the atmospheric effects of bremsstrahlung X-rays from energetic electron precipitation during space weather events <cit.>. In summary, the model and laboratory calibration were consistent to better than 20% over the operating range of the instrument and to 5% for energies >400 keV. The detailed understanding now acquired of this novel miniaturised instrument will allow for more effective analysis and interpretation of existing data and future design improvements. Data availability Data is available at Tabbett, Justin; Aplin, Karen (2023), “PiN Modelling Data”, Mendeley Data, V1, doi: 10.17632/mctr9ksk4r.1 Acknowledgements We would like to thank Dr Alessandro Narduzzo at the Department of Physics, University of Bath for access to radioactive sources. Funding EPSRC studentship and A-Squared Technologies Ltd.
http://arxiv.org/abs/2307.00376v2
20230701161148
The Spark of Symmetric Matrices Described by a Graph
[ "Louis Deaett", "Shaun Fallat", "Veronika Furst", "John Hutchens", "Lon Mitchell", "Yaqi Zhang" ]
math.CO
[ "math.CO", "05C50, 15A18 (Primary) 15A29 (Secondary)" ]
We investigate the sparsity of null vectors of real symmetric matrices whose off-diagonal pattern of zero and nonzero entries is described by the adjacencies of a graph. We use the definition of the spark of a matrix, the smallest number of nonzero coordinates of any null vector, to define the spark of a graph as the smallest possible spark of a corresponding matrix. We study connections of graph spark to well-known concepts including minimum rank, forts, orthogonal representations, Parter and Fiedler vertices, and vertex connectivity. Keywords: Null vectors, maximum nullity, spark, zero forcing, forts, connectivity, generic nullity, minimum rank. AMS subject classification: 05C50, 15A18 (primary) 15A29 (secondary). § INTRODUCTION Denote the set of all real symmetric n × n matrices by S_n(ℝ), and suppose A=[a_ij] ∈ S_n(ℝ). We say G(A) is the graph of A if G(A) has the vertex set V={v_1,v_2,…,v_n} and edge set E={v_iv_j | a_ij≠ 0, i ≠ j}. Note that G(A) is independent of the values of the diagonal entries of A. On the other hand, if G is a graph of order n (i.e., |G| = |V(G)| =n) with vertex set {v_1,v_2,…,v_n}, then the set of real symmetric matrices described by the graph G is given by 𝒮(G) = {A ∈ S_n(ℝ) | G(A) = G }. Here and in what follows, we consider only simple, undirected graphs G = (V(G), E(G)). One of the most captivating and unresolved problems associated with the class 𝒮(G) is the so-called inverse eigenvalue problem for graphs, abbreviated as IEP-G (see <cit.>). This fundamental problem asks for a complete description of the possible spectra realized by the set 𝒮(G) for a given graph G. The IEP-G has garnered significant attention over the past 30 years with many fascinating advances and applications (see, for example, the books <cit.> and the references therein). However, a complete general resolution is still very much elusive. Notwithstanding this, researchers have developed a wealth of results, implications, and applications tied to the IEP-G (see <cit.> for a recent example). In particular, a number of related concepts and parameters have been explored and have shed light on different aspects of the IEP-G. The minimum rank of a graph G is defined to be mr(G) = min{rank(A) | A ∈ 𝒮(G)}. The maximum nullity (or maximum corank) of a graph G is defined to be M(G) = max{null(A) | A ∈ 𝒮(G)} = n - mr(G), where null(A) denotes the nullity of A, that is, the dimension of the null space of A, written as N(A). The minimum semidefinite rank mr_+(G) is defined analogously as the minimum rank over all positive semidefinite matrices in 𝒮(G). (We refer the reader to the works <cit.>.) The column space of A will be denoted col(A). While the primary focus on the IEP-G has been on the potential list of eigenvalues of matrices in 𝒮(G), there is also justified interest in studying the associated eigenvectors or the zero/nonzero patterns of the associated eigenvectors. One of the earliest results along these lines is by Fiedler <cit.> where the eigenvectors of matrices associated with connected acyclic graphs (or trees) were studied. One by-product of this work was the realization that investigating the zero coordinates of an eigenvector leads to certain implications about a graph (or in the case of <cit.> a tree).
Since Fiedler's pioneering work in 1975, research into the possible patterns of eigenvectors for matrices associated with a graph has been developed along a number of lines, including: nodal domains, Laplacian eigenvectors (e.g, Fiedler vectors), and more recently zero forcing on graphs (see also <cit.>). We note here that it is sufficient to study the zero/nonzero patterns of null vectors of A ∈(G), since any eigenvector x corresponding to the eigenvalue λ of A can be considered as a null vector of the matrix A - λ I ∈(G). As our work relies heavily on the theory of graphs, we list some useful notation and provide some relevant terminology here before we discuss zero forcing and spark for graphs. A subgraph H = (V(H), E(H)) of G = (V(G),E(G)) is a graph with V(H) ⊆ V(G) and E(H) ⊆ E(G); H is an induced subgraph of G if E(H) = {vw ∈ E(G) | v, w ∈ V(H)}. The complement of G = (V,E) is the graph G = (V, E), where E consists of all pairs of vertices in V that are not contained in E. We say two vertices v,w are adjacent, or are neighbors, if vw ∈ E, and we may write this as v ∼ w. Let N_G(v) = {w∈ V | w ∼ v} be the open neighborhood of v and denote its cardinality by (v) = |N_G(v)|. The closed neighborhood of v is N_G[v] = N_G(v) ∪{v}. The minimum degree of the graph is δ= δ(G)= min{(v) | v ∈ V(G) }. A path is a graph, denoted P_n, with V = {v_1,…,v_n}, where v_1,…,v_n are distinct, and E={v_iv_i+1| i = 1,…, n - 1}. A cycle C_n on n vertices V = {v_1,…,v_n} has E = {v_iv_i+1| i = 1,…, n - 1}∪{v_nv_1}. A graph is connected if for every pair of distinct vertices v and u there is a path from v to u (and thus also from u to v). A tree is a connected graph with no cycles. A complete graph K_n on n vertices has E = {v_iv_j | i≠ j}. A complete bipartite graph K_m,n has vertex set V = V_1 ∪ V_2, where |V_1| = m and |V_2|=n, and edge set E = {v_iv_j | v_i∈ V_1, v_j∈ V_2}. If G=(V(G), E(G)) and H=(V(H), E(H)) are two graphs, then the Cartesian product of G and H, denoted by G H, is the graph with vertex set V(G) × V(H) and two vertices (u,v) and (w,z) are adjacent in G H if and only if u=w and vz ∈ E(H) or uw ∈ E(G) and v=z. Zero forcing is a coloring process involving the vertices of a graph. At the beginning of the process, each vertex is either blue or white, and each type of zero forcing follows a specific color change rule which can change the color of a white vertex to blue. The process stops when no more vertices can be colored blue. The standard zero forcing color change rule is to change the color of a white vertex w to blue if w is the unique white neighbor of a blue vertex v. If an initial subset of blue vertices can, after repeated application of the color change rule, change all vertices to blue, then this subset is referred to as a zero forcing set. Zero forcing was introduced to provide a combinatorial upper bound for M(G) and, in particular, detects subsets of coordinates of a null vector x of any A∈(G) that, if designated as zero, imply x must in fact be the zero vector. As such, it seems natural to study the zero coordinates in null vectors (see <cit.> for more details). More precisely, given a real vector x, the support of x is the collection of indices i for which x_i ≠ 0. We denote the support of x by (x). Suppose A ∈(G) and Ax=0. A basic consequence of the zero forcing process outlined above is that if (x) is disjoint from a zero forcing set for G, then x=0. 
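The colour-change process described above is straightforward to state algorithmically. The following sketch computes the closure of an initial blue set; the dictionary representation of the graph and the vertex labels are our own conventions.

```python
def zero_forcing_closure(adj, blue):
    """adj: dict mapping each vertex to the set of its neighbours."""
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for v in list(blue):
            white = adj[v] - blue
            if len(white) == 1:        # v forces its unique white neighbour
                blue |= white
                changed = True
    return blue

# path P_4: the single endpoint 1 is a zero forcing set
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(zero_forcing_closure(adj, {1}))     # {1, 2, 3, 4}
print(zero_forcing_closure(adj, {2}))     # {2}: vertex 2 has two white neighbours
```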
Suppose B ⊆ V is initially colored blue, and that B' is the set of all blue vertices obtained from B by repeatedly applying the color change rule. We call B' the closure of B. If nonempty, the subset V ∖ B' (the remaining white vertices) is known as a fort in G. In fact, a fort in a graph is a nonempty subset F of vertices such that no vertex outside of F is adjacent to exactly one vertex of F (see <cit.>). Forts are naturally connected to the support of null vectors. As we are interested in sparse null vectors, we seek to determine forts of minimum size in a given graph. Finally, it is a simple observation in basic linear algebra that if x is in N(A), for any matrix A, then the columns of A that correspond to supp(x) must form a linearly dependent set. This leads us to the notion of the spark of a matrix, which we present in the next section. This paper is organized into sections combining various topics with the spark of a graph. In Section 2, we define the spark of a graph and explore a connection with forts in the graph. In Section 3, we discuss relationships between the concepts of spark and rank. Then in Section 4, we investigate an association between spark and the vertex connectivity of a graph, and we generalize a theorem concerning orthogonal representations of graphs. In Section 5, we pay particular attention to graphs with small spark, and we close our work with some further connections in Section 6. § SPARK AND FORTS OF GRAPHS As our main focus is studying the support of null vectors, and, in particular, to exhibit null vectors that have small support, we begin with the notion of the spark of a matrix. Namely, the spark of a matrix A is the smallest integer s such that there exists a set of s columns in A which are linearly dependent, i.e., spark(A) is the minimum size of the support of a null vector of A. If A ∈ S_n(ℝ) is nonsingular, spark(A) is defined to be n+1. Sparse solutions to underdetermined linear systems, and thereby the concept of spark, have gained significant attention in compressed sensing (see <cit.>). Computing the spark of a matrix is known to be NP-hard <cit.>. We define the spark of a graph G to be spark(G) = min_A∈𝒮(G) spark(A). Note that, for every graph G, the Laplacian matrix of G gives a singular matrix in 𝒮(G), showing that spark(G) ≤ n. In addition, it is not hard to see that spark(G)=1 if and only if G contains an isolated vertex. Furthermore, if G is disconnected, then spark(G) is obtained by simply minimizing the spark across all of the connected components of G. Thus, we assume from this point on that all graphs considered are connected and hence contain no isolated vertices. We illustrate the above notions with the following example. Let G be a graph on 5 vertices consisting of a 5-cycle on vertices {1,2,3,4,5} with two additional edges 13 and 25. Suppose A ∈ 𝒮(G) is given by A = [ 1 1 1 0 1 ; 1 1 1 0 1 ; 1 1 3 1 0 ; 0 0 1 3 1 ; 1 1 0 1 3 ] and x = [ 1 -1 0 0 0 ]^T. Observe that Ax=0, and hence spark(A) ≤ 2. Since G is connected (or more precisely has no isolated vertices) it is clear that spark(G) > 1. Thus spark(A) = spark(G) = 2. Finally, we note that the pair {1,2} forms a fort in G. The connection between the support of a null vector x for some A ∈ 𝒮(G) and a fort in G in the previous example is a known result (although it may not be published); we provide a proof here for completeness, as our primary aim is studying the support of null vectors. Recall that the columns of A ∈ 𝒮(G) are indexed by the vertices of G, and we use the column indices and the graph vertices interchangeably.
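The example can be checked directly by brute force: the helper below computes the spark of a matrix by scanning column subsets (exponential time, but fine for n = 5), verifies that Ax = 0, and confirms that {1,2} is a fort of G. The adjacency dictionary encodes the 5-cycle with the two chords 13 and 25.

```python
import numpy as np
from itertools import combinations

def matrix_spark(A, tol=1e-9):
    """Smallest number of linearly dependent columns; n+1 if A is nonsingular."""
    n = A.shape[1]
    for s in range(1, n + 1):
        for cols in combinations(range(n), s):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < s:
                return s
    return n + 1

A = np.array([[1, 1, 1, 0, 1],
              [1, 1, 1, 0, 1],
              [1, 1, 3, 1, 0],
              [0, 0, 1, 3, 1],
              [1, 1, 0, 1, 3]], dtype=float)
x = np.array([1, -1, 0, 0, 0], dtype=float)
print(np.allclose(A @ x, 0), matrix_spark(A))              # True 2

# the 5-cycle 1-2-3-4-5 with chords 13 and 25; {1,2} is a fort
adj = {1: {2, 3, 5}, 2: {1, 3, 5}, 3: {1, 2, 4}, 4: {3, 5}, 5: {1, 2, 4}}
F = {1, 2}
print(all(len(adj[v] & F) != 1 for v in adj.keys() - F))   # True
```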
For any matrix A ∈(G), the support of any nonzero null vector of A is a fort of G. Conversely, for any fort F of G and any vector x whose support is F, there is a matrix A ∈(G) that has x as a null vector. That is, (G) is the cardinality of a minimum fort of G. Given a matrix A∈(G) and a vector x≠ 0 such that Ax=0, let W = {j ∈ V(G) | j ∈(x)}. Suppose there exists i∈ V(G) ∖ W with exactly one neighbor j in W. Then 0 = [Ax]_i = a_ij x_j, where a_ij≠ 0. Thus, x_j = 0, contradicting j∈(x). So W is a fort of G. Conversely, assume F is a fort of G such that F = {i∈ V(G) | i∈(x)} for some nonzero vector x. We construct A by performing the following steps: * First let A be the adjacency matrix of G. In the next two steps, we modify certain nonzero entries of A. * Let S={i | x_i=0} = V(G)∖ F. For i∈ S, let B_i= N_G(i) ∖ S = N_G(i) ∩ F. Note that |B_i|≠ 1 by the definition of a fort. For j∈ B_i and j≠max B_i, set A[i,j]=A[j,i]=1/x_j; if j=max B_i, then set A[i,j]=A[j,i]=(1-|B_i|)/x_j. * For k∈(x), assign A[k,k]=-[Ax]_k/x_k = -∑_j≠ k a_kjx_j/x_k. For i∈ F, [Ax]_i=a_iix_i+∑_j≠ i a_ijx_j=-∑_j≠ i a_ijx_j/x_ix_i+∑_j≠ i a_ijx_j=0. For i∈ V(G)∖ F, [Ax]_i = ∑_j ia_ijx_j+a_iix_i+∑_j∼ ia_ijx_j=∑_j∼ ia_ijx_j = ∑_j∈ N_G(i)∩ Sa_ijx_j+∑_j∈ N_G(i)∖ S a_ijx_j = ∑_j∈ B_ia_ijx_j = ∑_j∈ B_i j≠maxB_i1/x_jx_j+1-|B_i|/x_max B_ix_max B_i=0. Although (G) is defined in reference to the matrices in 𝒮(G), Theorem <ref> shows that in fact this parameter can be defined entirely in graph-theoretic terms. That is, the spark of a graph does not have to be defined in terms of the spark of any matrices. The next two propositions explore possible sizes of forts of graphs in more detail. Let G be a graph with minimum degree δ. Then every subset W ⊆ V(G) with |W| = n-m +1 is a fort of G if and only if m ≤δ. Assume m≤δ, and consider W ⊆ V(G) with |W| = n-m +1 ≥ n-δ + 1. For any vertex v ∉ W, there are at most δ -2 vertices not in W ∪{v}. Conversely, suppose m > δ and v_0 ∈ V(G) such that N_G(v_0) ={ v_1,v_2, …, v_δ}; we can then label V(G) ={ v_0,v_1, …, v_δ, s_δ +2, …, s_n }. Since m+1≥δ + 2, the set W = {v_1, s_m+1, …, s_n } satisfies |W|=n-m+1 but is not a fort. If every k-subset of V(G) is a fort of G, then every (k+1)-subset of V(G) is a fort of G. Let every k-subset of V(G) be a fort of G, and let W ⊆ V(G) such that |W|=k+1. In particular |W|≥ 2. Assume W is not a fort, so there exists x ∈ V(G)∖ W such that N_G(x) ∩ W ={w_1}, where W = {w_1,w_2, …, w_k+1}. Then W' = W∖{w_2} is not a fort since x ∈ V(G) ∖ W' and N_G(x) ∩ W' = {w_1}. But |W'|=k, giving us a contradiction. In light of Proposition <ref>, it is interesting to note that simply adding vertices to a fort does not guarantee that the resulting set is a fort. We present examples of graphs that skip fort sizes after Theorem <ref>. Related to the notion of a fort is the notion of a failed zero forcing set <cit.>. This is simply a subset of vertices that is not a zero forcing set; however, it is interesting to ask for the largest size of such a set. This is known as the failed zero forcing number (G) of the graph G <cit.>. The complement of a failed zero forcing set has also been called a zero blocking set, with the smallest size of such a set called the zero blocking number, and denoted by (G) <cit.>. As noted in <cit.>, a set is a failed zero forcing set of maximum size if and only if its complement is a fort of minimum size. Using Theorem <ref>, it follows that (G) and (G) are the same. In summary, we have the following observation. Let G be a graph on n vertices. 
Then spark(G) equals the zero blocking number of G, and the failed zero forcing number of G equals n - spark(G). Hence, the problem of determining the failed zero forcing number of a graph and the problem of determining its zero blocking number are both equivalent to determining the spark of the graph. In fact, this problem, like that of computing the spark of a matrix, is NP-hard <cit.>. § SPARK AND RANK OF MATRICES ASSOCIATED WITH A GRAPH The spark and rank of a matrix A ∈ 𝒮(G) are clearly related, as the definitions give directly that spark(A) ≤ rank(A) + 1. In this section, we investigate when this inequality becomes an equality. We say a matrix has full spark if spark(A) = rank(A) + 1. Analogously to mr(G) and mr_+(G), we define the minimum full spark rank of a graph G as the minimum of rank(A) over all A ∈ 𝒮(G) with rank(A) = spark(A) - 1, and the corresponding minimum full spark rank for positive semidefinite matrices analogously. The next result is a core result in linear algebra and lays the groundwork for establishing a relationship between spark and rank. Let A be a symmetric n × n real matrix with rank(A) = k. If k=n then each k × k principal submatrix of A is nonsingular and A has full spark. If k < n and X is an n × (n-k) real matrix with rank(X) = n-k such that AX=0, then the following are equivalent: * Each k × k principal submatrix of A is nonsingular. * Each (n-k) × (n-k) submatrix of X is nonsingular. * spark(A)=k+1. If k=n the result is clear, so assume k < n. (1) ⇔ (2): Without loss of generality we can write A = [ B C^T ; C M ] and X = [ Y ; Z ], where B is k× k and Z is (n-k) × (n-k), and argue that B is singular if and only if Z is singular. If Z is singular, then there exists a nonzero vector v with Zv = 0. Since rank(X)=n-k, Yv must be nonzero, and so Xv is a nonzero vector in the nullspace of A. Then Yv is a nonzero vector in the nullspace of B, so B is singular. If B is singular, then there exists a nonzero vector v with v^TB=0. If v^TC^T ≠ 0, then v^TC^TZ = 0 implies we are done. So assume v^TC^T = 0. Then [ B ; C ]v=0, so that [ v ; 0 ] is a nonzero vector in the nullspace of A. Since rank(X)=n-k, the columns of X are a basis for the nullspace of A. Thus the vector [ v ; 0 ] is a nontrivial linear combination of the columns of X, and so Z is singular. (1)⇒ (3): If each k × k principal submatrix of A is nonsingular, then each set of k columns of A is linearly independent and spark(A) ≥ k+1; that is, A has full spark. (3)⇒ (1): Assume spark(A) = k+1, and suppose that there exists a k× k principal submatrix B of A that is singular. Without loss of generality, write A = [ B C^T ; C M ] in block form. Let X = [ Y ; Z ] in similar block form be a matrix whose columns form a basis for the nullspace of A, so that AX=0. By the proof of (1) ⇔ (2), the (n-k) × (n-k) matrix Z must also be singular, and there exists a nonzero vector w with Zw = 0. But then [ Y ; Z ]w = [ Yw ; 0 ] is a nonzero vector (if Yw=0, then the columns of X are linearly dependent) in the nullspace of A, so that BYw=CYw = 0. But then [ B ; C ] Yw = 0 implies that the columns of [ B ; C ] are linearly dependent, which contradicts spark(A)=k+1. We next consider a bordering-type result concerning the spark of a symmetric matrix. Suppose A is an n × n real symmetric matrix. Consider the bordered (n+1) × (n+1) symmetric matrix given by B = [ x^TAx x^TA ; Ax A ] for some vector x. Then * rank(B)=rank(A); * if |supp(x)| = k, then spark(B) ≤ k+1. Statement (1) is trivial. For (2), observe that the vector [ -1 ; x ] is a null vector for B, and the result follows. A simple consequence of the above lemma can be deduced if we assume in addition that A is invertible.
Then (B)=n, and thus it follows that (B)=|(x)| +1, since the dimension of the null space of B is one. Given a graph G with order n, we are interested in finding all possible ordered pairs of integers (k, s), 1≤ k≤ n and 2≤ s≤ n+1, such that there exists A∈(G) with (A) = k and (A) = s. Note that k=n if and only if s = n+1. After finding a matrix with some fixed spark s and minimum corresponding rank k≥ s-1, the next result shows that all higher ranks are achievable with the same spark. Before we state this result, we recall the following observation: for any symmetric matrix A, the jth standard basis vector e_j ∈(A) if and only if j ∉(x) for all x ∈ N(A), which follows easily from the fact that the null space of a symmetric matrix A is the orthogonal complement of the column space of A. If A ∈(G) such that (A)=k < n-1 and (A)=s, then there exists a matrix B ∈(G) such that (B)=k+1 and (B) =s. Assume A ∈(G) such that (A)=k < n-1 and (A)=s. Let us choose a basis for the null space of A N(A) = {η_1,η_2, η_3, …, η_n-k}, such that |(η_1)| =s. Let η_i = (y_i1,y_i2, …, y_in)^T for 1 ≤ i ≤ n-k. We claim there is a matrix B = A +D ∈(G) where D is a diagonal matrix, such that η_1 ∈ N(B), η_2 ∉N(B) and N(B) ⊂ N(A). Since |(η_1)| =s we know (η_1) ≠(η_2), otherwise there would be a null vector whose support size is smaller than s. So we can choose j ∈(η_2) ∖(η_1). Choose a new basis for N(A) such that N(A) = {η_1,η_2, η_3', …, η_n-k' }, with η_i' = y_ij/y_2jη_2 - η_i for 3 ≤ i ≤ n-k. Then j ∉(η_i') for 3 ≤ i ≤ n-k and j ∉(η_1). Let e_j represent the jth standard basis vector in ^n and consider e_j e_j^T = E_jj. We see that (A + E_jj)η = A η + E_jjη =E_jjη = e_j e_j^T η for η∈ N(A). Notice that e_j ∉(A) since e_j^T η_2 = y_2j≠ 0 and (A) = N(A)^⊥. By <cit.>, this implies (A+E_jj) = (A) +(E_jj) = k+1. Now η_2 ∉N(A+E_jj) but {η_1, η_3', …, η_n-k'}⊂ N(A+E_jj) since each vector in the set is orthogonal to e_j. This gives us N(A+E_jj) = {η_1, η_3', …, η_n-k' } since this is a set of n-k-1 linearly independent vectors in N(A+E_jj) where (N(A+E_jj)) = n-k-1. Since N(A+E_jj) ⊂ N(A), |(η_1)| =s, and η_1 ∈ N(A+E_jj), we have that (A+E_jj) = s. We note here that given A ∈(G) we cannot necessarily find another matrix B ∈(G) such that (B)=(A)+1 and (B)=(A). This follows, in part, due to the fact that if G has a fort of size s this may not guarantee that G has a fort of size s+1 or s-1. Define the fort sequence of G to be the sequence of the form (s_2,s_3,…, s_n) where G has n vertices and s_i is the number of forts in G with i vertices. Note that we are beginning the fort sequence at s_2; since we only consider graphs without isolated vertices, all graphs we consider have s_1=0. There are many examples of graphs that skip fort sizes. For example, consider a spider graph (also known as a generalized star), which is a tree with one vertex having degree greater than 2, the central vertex, and all other vertices having degree at most 2. The paths radiating out from the central vertex are called the legs and do not contain the central vertex. We denote such a graph as sp(n_1,n_2,…,n_l) where l is the degree of the largest-degree vertex (i.e., the number of legs) and n_j is the number of vertices in each leg. So the order of sp(n_1,n_2,…,n_l) is 1+∑ n_j. Now consider a special class of spider graphs of the form sp(m,1,1), where m>3, depicted in Figure <ref>. The smallest fort size is 2 corresponding to the unique minimum fort {a,b}. 
Any fort F⊆ V with |F| ≥ 3 must contain the vertex v_m, otherwise v_i+1 where i = max{j<m | v_j∈ F} (here considering c as v_0) would be adjacent to only one vertex in F. The next smallest fort is a minimum fort for P_m+2, which arises as the induced subgraph of G on either {a,c,v_1,…,v_m} or {b,c,v_1,…,v_m}. The path P_m+2 has a minimum fort of size ⌈m+3/2⌉; note that the minimum fort will not contain c for either parity of m. So the fort sequence for sp(m,1,1) is of the form (1,0, …, 0, s_⌈m+3/2⌉, …, s_m+3) with s_i≠ 0 for ⌈m+3/2⌉≤ i ≤ m+3. Moreover, if sp(m,1,…, 1) has 3≤ l<m legs, then the fort sequence is of the form (s_2,…,s_l-1,0 …, 0, s_⌈m+3/2⌉, …, s_m+l), where s_i≠ 0 for 2 ≤ i ≤ l-1 or ⌈m+3/2⌉≤ i ≤ m+l. Hence there is no bound on size of the gap between two nonzero fort sizes in a graph or constraints on where in the sequence a gap can occur. An example of a graph that is not a tree and skips a fort size is the friendship graph F_3 shown in Figure <ref>. This graph has forts of size 2 and 4 but no forts of size 3. Indeed, any pair of adjacent non-central vertices forms a fort, but any set S of three vertices in V=V(F_3) must leave at least one vertex in V∖ S adjacent to only one vertex in S; any pair of adjacent pairs of non-central vertices forms a fort of size 4. In fact F_3 has fort sequence (3,0,11,12,7,1). § SPARK AND CONNECTIVITY OF GRAPHS The vertex connectivity of a graph G, denoted by κ(G), is defined as the minimum size of a set of vertices whose deletion disconnects the graph. Such a set of vertices is known as a cut set. Further, we say a graph is k-connected if κ(G) ≥ k. For a graph G, a (faithful) orthogonal representation of G of dimension k is a set of vectors in ℝ^k, one corresponding to each vertex, with the property that two vertices are nonadjacent if and only if their corresponding vectors are orthogonal. An orthogonal representation of G in ℝ^k is in general position if every subset of k vectors is linearly independent. Note that this is equivalent to the existence of a positive semidefinite matrix A∈(G) with k = (A) = (A) - 1; A is called the Gram matrix of the orthogonal representation <cit.>. For a graph G with n vertices, the following are equivalent: * G is (n - k)-connected. * G has a general position orthogonal representation in ℝ^k. * G has an orthogonal representation in ℝ^k consisting of unit vectors such that for each vertex v the vectors representing the vertices not adjacent to v are linearly independent. A consequence of Theorem <ref> is that the minimum semidefinite full spark rank is dictated by the connectivity of the graph, with (G) = n-κ(G) for every graph G. Indeed, if the rank of a positive semidefinite matrix drops below this threshold, then the spark may be forced to drop even further (recall that, in general, δ(G) ≥κ(G)): If A is a positive semidefinite matrix for a graph G with vertex connectivity κ(G) and A < n - κ(G), then A ≤ n - δ(G) -1. If A > n - δ(G) - 1, then every set of n - δ(G) - 1 vertices is linearly independent. Since every vertex has at most that many non-neighbors, every set of non-neighbors is linearly independent, which implies G is (n-(A))-connected by Theorem <ref>. The minimum semidefinite full-spark rank of a graph may be strictly larger than the minimum semidefinite rank, as demonstrated by the following example. Let G = C_4 P_t be the Cartesian product of the cycle C_4 and the path P_t on t≥ 2 vertices. 
The minimum semidefinite rank of G is 4t-4 <cit.>, but δ(G) = 3 = κ(G), so any positive semidefinite matrix in (G) with rank 4t-4 cannot have full spark, and the smallest possible rank of a full spark positive semidefinite matrix in (G) is 4t-3. That is, 4t-3 = (G) > (G) = 4t-4. The minimum rank and minimum semidefinite rank of G = C_4 P_t coincide, with (G) = 4t-4 <cit.>, so we can ask if there exists a full spark symmetric matrix for G that has minimum rank but is not positive semidefinite. Perhaps surprisingly, we show in our next result that it is not possible to achieve a lower full-spark rank with arbitrary symmetric matrices. That is, (G) = (G) = n-κ(G) for every graph G. A graph G is (n-k)-connected if and only if there exists A ∈(G) with k = (A) = (A)-1. One direction follows from Theorem <ref>. For the other direction, let A ∈(G) with k = (A) = (A)-1 and suppose that G is not (n-k)-connected. Then there exists a cut set α of n-k-1 vertices that leaves at least two connected components and we can write the matrix A(α) where the rows and columns corresponding to α are removed in block form as A(α) = [ B 0 0 C ]. Since A has rank k, A(α) has rank at most k but size (k+1) × (k+1), so that A(α) must be singular. Without loss of generality, that means that B must also be singular. Since B is at most a k × k matrix, A(α), and thus A, contains a singular k × k principal submatrix (any k × k submatrix of A(α) that has B as a submatrix will also be singular because of the block structure). But this contradicts Theorem <ref>. In light of Theorem <ref>, it is natural to ask if all of Theorem <ref> could extend to arbitrary symmetric matrices. While Theorem <ref> tells us A ∈(G) with k=(A)= (A) - 1 has each k × k submatrix invertible, smaller submatrices need not be invertible. For example, [ 0 0 -3 1 4; 0 2 -4 4 4; 3 4 -4 0 0; 1 4 -0 4 0; 4 4 -0 0 8 ] is a rank-three matrix in (K_2,3) (for the bipartite graph K_2,3, note that κ(K_2,3) = 2) with every 3× 3 principal submatrix nonsingular but with singular principal submatrices of sizes two and one. In particular, the 1× 1 principal submatrix corresponding to vertex 1, the non-neighbor of one of the degree-three vertices, is singular. Thus we cannot extend Theorem <ref> by focusing on principal submatrices; however, we do find a full generalization by looking instead (in the spirit of spark) at linearly independent columns. For a graph G with n vertices, the following are equivalent: * G is (n - k)-connected. * There exists A ∈(G) with k = (A) = (A)-1. * There exists A ∈(G) with k = (A) and such that for any vertex v of G the columns of A corresponding to v and its non-neighbors are linearly independent. * There exists A ∈(G) with k = (A) and such that for any vertex v of G the columns of A corresponding to the non-neighbors of v are linearly independent. The equivalence of (1) and (2) is the content of Theorem <ref>, and we will use it to show (2) implies (3). If (2) is true, then the matrix A also satisfies (3): by (1) and using n-k ≤κ(G) ≤δ(G), each vertex v has at most n - δ(G) - 1 ≤ n - κ(G) - 1 ≤ n - (n-k) - 1 = k - 1 non-neighbors; since (A) = k+1, the at-most-k columns of A corresponding to v and its non-neighbors must be linearly independent. Since (3) is stronger than (4), the main work will now be to prove that (4) implies (1). Suppose A ∈(G) with (A) = k is such that for any vertex v the columns corresponding to the non-neighbors of v are linearly independent but G is not n-k connected. 
Then we can find a cut set C with n-k-1 vertices and can write A = [ M_1 0 N_1^T 0 M_2 N_2^T N_1 N_2 M_3 ] where M_3 corresponds to the vertices of C. Let each M_i have size d_i × d_i (so d_3 = n-k-1) and rank r_i. Finally, let n_i =d_i - r_i for each i∈{1,2}. We wish to show that (A) ≥ d_1 + d_2 in order to get a contradiction. If d_1 = r_1 and d_2 = r_2, we are done. So assume without loss of generality that n_1 > 0 and n_1 ≥ n_2. By our assumption of (4), the column rank of [M_1 0 N_1] is d_1 and the column rank of [0 M_2 N_2] is d_2. However, unlike the positive semidefinite case, all we can say is that the column rank of [M_1 0 0 M_2 N_1 N_2 ] is at least d_1 + r_2. And yet, by symmetry, the row rank of [M_1 0 N_1^T] is d_1, and thus so is its column rank. Because n_1 > 0, M_1 is singular, so there exists a set of r_1 columns of M_1 and n_1 columns of N_1^T that is linearly independent. Let S be the set consisting of the first d_1 columns of A, and let T be the n_1 columns among the last d_3 columns of A corresponding to the selected columns of N_1^T. Since S is a linearly independent set and the selected n_1 columns of N_1^T are not in (M_1), S∪ T is a linearly independent set. Find a basis of r_2 vectors for (M_2), and let U denote the corresponding columns of A. A linear dependence relation among the vectors in S∪ U∪ T would imply a linear dependence relation among the vectors in U∪ T since the entries in rows d_1+1 through d_1+d_2 are zeros in each vector in S. Moreover, the entries in rows 1 through d_1 are zeros in each vector of U, implying a linear dependence relation among the vectors in T. Each of the sets S, U, and T is linearly independent, so working backwards, all three linear dependence relations must be trivial, and S∪ U∪ T is linearly independent. Thus A has at least d_1 + r_2 + n_1 ≥ d_1 + r_2 + n_2 = d_1 + d_2 linearly independent columns. § GRAPHS WITH SMALL SPARK Recall from Section <ref> that (G) ≥ 2 for any graph G with no isolated vertices and that (G) is the size of the smallest possible fort in G. The following lemma characterizes graphs G with (G) = 2. Let G be a graph. Then (G)=2 if and only if there exists u,v∈ V(G) such that (1) uv∈ E(G) and N_G[u]=N_G[v] or (2) uv∉ E(G) and N_G(u)=N_G(v). Assume (G) = 2. By Theorem <ref>, G has a minimum fort of size 2, say F = {u,v}. By the definition of a fort, every vertex in V(G)∖ F is adjacent to neither or both of the vertices in F, implying either condition (1) or (2). Conversely, if condition (1) or (2) hold, then F = {u,v} is a fort in G; having size 2, F must be a minimum fort. If u,v ∈ V(G) satisfy either condition (1) or condition (2) of Lemma <ref>, then we will refer to them as duplicate vertices. Let G be a graph of order n≥ 3. Then G must have duplicate vertices if either of the following conditions hold: * (G) ≤ 2, * κ(G) ≥ n-2. Note that by Theorem <ref>, (G) ≤ n-κ(G), so condition (2) implies condition (1). Therefore, it suffices to prove that (1) implies the existence of duplicate vertices. Assume (G) ≤ 2. By Theorem 9 of <cit.>, G can be expressed as the union of at most two complete graphs and of bipartite graphs. Suppose G consists of only complete graphs, in which case it must consist of the union of exactly two complete graphs since G is connected. Then G is a complete bipartite graph of order n≥ 3 and therefore has two duplicate vertices. On the other hand, suppose G has a complete bipartite graph as a component. 
This component must contain at least three vertices, and we can let u and v be vertices in the same partite set; then u and v are nonadjacent with N_G(u) = N_G(v), so u and v are duplicate vertices in G. If G is a graph with (G) ≥ 3, then (G) ≥ 3 and κ(G) ≤ n-3. In particular, if (G)=3 and A∈(G) with (A) = 3, then A is not full spark. Since (G) ≥ 3, G has no duplicate vertices by Lemma <ref>. The result then follows by Lemma <ref>. If (G) = 2, then either G is a path on three vertices or (G) < n-1. If (G) =2, then G has a pair of duplicate vertices by Lemma <ref>. If (G) = n-1 then G is a path P_n on n vertices <cit.>, which can contain duplicate vertices only if n=3. Naturally, we can ask if an analogous result holds for graphs with larger spark. Unfortunately, increasing the spark by 1 does not necessarily decrease the minimum rank's bound by 1, as is illustrated by the following example. Let G = C_n be a cycle on n vertices and let H be obtained from G by adding the edges v_1v_4 and v_1v_n-2. Then, for n ≥ 5, it follows that H has no duplicate vertices, so (H) ≥ 3. On the other hand, {v_1, v_3, v_n-1} forms a fort in H. Hence (H)=3. Finally, it is not difficult to deduce that (H)=n-2. We saw in Theorem <ref> that considering matrices in (G) does not provide an advantage over positive semidefinite matrices in achieving minimum rank and full spark. We may also consider matrices of minimum rank and minimum spark. This is not necessarily achievable with a positive semidefinite matrix, as the next example demonstrates. The hypercube graph Q_3 = C_4 P_2 has minimum rank 4. The matrix H_3 given in <cit.> for the graph Q_3 has rank 4 and spark 3. Since Q_3 has no duplicate vertices, (Q_3) > 2 by Lemma <ref>, so H_3 achieves minimum rank and minimum spark for Q_3. If A ∈(Q_3) is positive semidefinite and (A)=4, then (A)=4. We have κ(Q_3) = δ(Q_3)= 3. Thus (A) ≤ 4 by Corollary <ref>. Since Q_3 has no duplicate vertices, (A) > 2 by Lemma <ref>. Suppose that (A)=3. Then, in the orthogonal representation corresponding to A, we can find three vectors that are linearly dependent. That is, the dimension of their span must be at most two. Consider the subgraph corresponding to the three vectors. It cannot be complete as K_3 is not a subgraph of Q_3. If it has no edges, then all three vectors are orthogonal and cannot be linearly dependent. If there is just one edge, then two of the vectors must be orthogonal to the third, which would make them linearly dependent (in a one-dimensional subspace), contradicting (A)>2. If there are two edges, then two of the vectors must be orthogonal to each other, say v⃗_1 and v⃗_2, and the third must then be a linear combination of both: αv⃗_1 + βv⃗_2 with αβ≠ 0. But then the vector representing the third neighbor (in Q_3) of the vertex represented by αv⃗_1 + βv⃗_2 would have nonzero dot product with at least one of v⃗_1 and v⃗_2, a contradiction. § FURTHER CONNECTIONS Let A∈(G) and v∈ V(G). Denote by A(v) the principal submatrix of A obtained by deleting v. The vertex v∈ V(G) is a Parter vertex (P-vertex) for A∈(G) if (A(v)) = (A) + 1 (⇔(A) = (A(v)) + 2) (see the original works <cit.>, and <cit.> for more recent related work on these topics). A Fiedler vertex (F-vertex) v∈ V(G) for A∈(G) is a vertex that satisfies (A(v)) ≥(A). Both Parter and Fiedler vertices are interconnected with zero coordinates in null vectors: Let A∈(G) and v∈ V(G). Then (A(v)) ≥(A) if and only if every null vector of A has a 0 in the v-th coordinate.
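This lemma is easy to check numerically. The short Python sketch below is ours and purely illustrative (we read the displayed comparison as one of nullities, in line with the definitions of P- and F-vertices above, and all function names are our own): it deletes a vertex v from A, compares the nullities of A(v) and A, and tests whether every vector in a null-space basis vanishes in coordinate v, using the adjacency matrix of the path P_3 as a toy example.

import numpy as np

def nullity(M, tol=1e-10):
    # Nullity of a square matrix, computed from its singular values.
    if M.size == 0:
        return 0
    s = np.linalg.svd(M, compute_uv=False)
    return M.shape[0] - int(np.sum(s > tol))

def null_space_basis(M, tol=1e-10):
    # Columns form an orthonormal basis of the null space of M.
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

def check_lemma(A, v, tol=1e-10):
    # Left side: nullity(A(v)) >= nullity(A), where A(v) deletes row and column v.
    Av = np.delete(np.delete(A, v, axis=0), v, axis=1)
    lhs = nullity(Av, tol) >= nullity(A, tol)
    # Right side: every null vector of A has a 0 in coordinate v,
    # i.e. row v of a null-space basis is (numerically) zero.
    N = null_space_basis(A, tol)
    rhs = True if N.size == 0 else bool(np.allclose(N[v, :], 0.0, atol=1e-8))
    return lhs, rhs

if __name__ == "__main__":
    # Adjacency matrix of P_3; its null space is spanned by (1, 0, -1).
    A = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
    for v in range(3):
        print(v, check_lemma(A, v))   # the two truth values agree for every v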
According to Kim and Shader <cit.>, if we partition a singular symmetric matrix A as A=[ a x^T x B ], then vertex 1 is an F-vertex if and only if [ a x ] is not in the column span of [ x^T B]; and vertex 1 is a P-vertex if and only if x is not in the column span of B. Since a positive semidefinite matrix automatically has the row/column inclusion property <cit.>, a positive semidefinite matrix cannot have a P-vertex. And if A is positive semidefinite, then a ≥ y^TBy where x=By, so vertex 1 is a F-vertex if and only if a > y^TBy. In that case, we can decrease the rank of A by exactly one if we replace a with y^TBy. Thus a positive semidefinite matrix in (G) of minimum (semidefinite) rank cannot have an F-vertex. Instead of just considering the support of a particular null vector, there is also interest in considering the support of the null space. That is, for a matrix A in S_n(ℝ), we define the support of the null space of A as N(A) = { i | x_i≠ 0 x ∈ N(A)}. An important well-known fact for matrices in (T), where T is a tree, is the following: Suppose T is a tree and A ∈(T). If N(A)=V(T), then N(A)=1. The minimum semidefinite rank of a tree T on n vertices is n-1. Hence, the converse of Proposition <ref> states that a matrix A realizing this minimum must have full null support. The following theorem shows that this in fact holds not just for trees but for all graphs. If a positive semidefinite matrix A ∈(G) has (A) = (G), then N(A) = V(G). A positive semidefinite matrix of minimum (semidefinite) rank cannot have an F-vertex, so no vertex has a zero component in every null vector. For trees, full null space support turns out to be equivalent to full spark for singular matrices. Let T be a tree and A ∈(T) be singular. Then the following statements are equivalent: * A has full spark: (A) = (A)+1. * A has full null space support, that is, N(A)=V(T). * A does not have a Parter vertex. (1) ⇔ (2): Suppose k = (A) = (A) - 1. By Theorem <ref>, T must be (n-k)-connected, which implies k = n-1 since T is a tree and A is singular. Since N(A) = 1 and A has full spark, Theorem <ref> implies N(A) = V(T). Conversely, suppose N(A) = V(T). By Proposition <ref>, N(A) = 1. This implies that every nontrivial null vector of A has only nonzero entries. By Theorem <ref>, A must have full spark. (2) ⇔ (3) follows from Lemma <ref>. We note here for completeness that if (G) < (G), then a minimum rank matrix need not have an F-vertex. Suppose G=K_2,3. Then (G)=2 and _+(G)=3. The adjacency matrix of G is a minimum rank (indefinite) matrix that does not have an F-vertex. In <cit.> Hogben and Shader define a real matrix X to be generic if every square submatrix of X is nonsingular. Then the generic nullity of a nonzero A ∈ℝ^n× n is (A) = max{ k | X ∈ℝ^n× k, AX=0, X is generic}, and the maximum generic nullity of a graph is (G) = max{(A) | A ∈(G) }. For any graph G, note that (G)≥ 1 since the all-ones vector belongs to the null space of the graph's Laplacian matrix. We end with an interesting relation between rank, spark, and maximum generic nullity of a graph. If there exists A ∈(G) such that (A) = k and (A) = k+1, then (G) ≥ n-k. By Theorem <ref>, each k× k principal submatrix of A is nonsingular. If k=n, the result is clear, so assume k < n. Let X be a n × (n-k) matrix whose columns form a basis for the nullspace of A, so that AX = 0. By Theorem <ref>, X is generic. Thus (A) = n-k and (G) ≥ n-k. Theorem <ref> also has an implication for generic nullity. 
An immediate consequence, by Theorem <ref>, is that (G) ≥κ(G) for any graph G (Corollary 4.2 of <cit.>). If the inequality is strict, we can say more: If (G) > κ(G), then any matrix A ∈(G) with (A) = (G) satisfies (A) < (A). Suppose (G) = k where k > κ(G), and let A∈(G) with (A) = k. Then there exists a generic matrix X∈^n× k such that AX = 0, implying (A) ≥ k. Suppose (A) = k. Then (A) = n-k and A is full spark by Theorem <ref>. But this contradicts (G) = n-κ(G). § ACKNOWLEDGMENTS Shaun M. Fallat was supported in part by an NSERC Discovery Research Grant, Application No.: RGPIN–2019–03934. The research of Yaqi Zhang was partially supported by Simons Foundation grant 355645 and NSF grant DMS 2000037. This project began as part of the “Inverse Eigenvalue Problems for Graphs and Zero Forcing” Research Community sponsored by the American Institute of Mathematics (AIM). We thank AIM for their support, and we thank the organizers and participants for contributing to this stimulating research experience.
http://arxiv.org/abs/2307.00557v1
20230702124311
Parameterized proximal-gradient algorithms for L1/L2 sparse signal recovery
[ "Na Zhang", "Xinrui Liu", "Qia Li" ]
math.OC
[ "math.OC" ]
Entropy Accumulation under Post-Quantum Cryptographic Assumptions Rotem Arnon-Friedman August 1, 2023 ================================================================= The ratio of ℓ_1 and ℓ_2 norms (ℓ_1/ℓ_2), serving as a sparse promoting function, receives considerable attentions recently due to its effectiveness for sparse signal recovery. In this paper, we propose an ℓ_1/ℓ_2 based penalty model for recovering sparse signals from noiseless or noisy observations. It is proven that stationary points of the proposed problem tend to those of the elliptically constrained ℓ_1/ℓ_2 minimization problem as the smoothing parameter goes to zero. Moreover, inspired by the parametric approach for the fractional programming, we design a parameterized proximal-gradient algorithm (PPGA) as well as its line search counterpart (PPGA_L) for solving the proposed model. The closed-form solution of the involved proximity operator is derived, which enable the efficiency of the proposed algorithms. We establish the global convergence of the entire sequences generated by PPGA and PPGA_L with monotone objective values by taking advantage of the fact that the objective of the proposed model is a KL function. Numerical experiments show the efficiency of the proposed algorithms over the state-of-the-art methods in both noiseless and noisy sparse signal recovery problems. § INTRODUCTION Compressed sensing (CS) is an advance in signal acquisition and attracts much attention in many research areas, such as signal processing, image processing and information theory <cit.>. Let x_0∈ℝ^n be a sparse or approximately sparse signal and A∈ℝ^m× n (m<<n) be the sensing matrix. CS aims at recovering the sparse signal x_0 from a small number of its possibly noisy measurements b = Ax_0+e, where e∈ℝ^m is the noise vector. When there is no noise in the transmission, i.e., b = Ax_0, the CS problem can be mathematically formulated as min{x_0:Ax = b,x∈ℝ^n}, where ·_0 is the ℓ_0 “norm” which counts the number of nonzero entries of the vector. Since (<ref>) is NP-hard <cit.> to solve, a popular approach in CS is to replace ℓ_0 norm by the convex ℓ_1 norm, and then many efficient algorithms were designed for solving the problem <cit.>. It has been demonstrated that a sparse signal x_0 can be recovered exactly by minimizing the ℓ_1 norm over A^-1{b} if x_0 is sufficiently sparse and the matrix A satisfies the restricted isometry property (RIP) <cit.>. In recent years, nonconvex approximation functions of the ℓ_0 norm attract considerable amount of attentions due to their shaper approximations to ℓ_0 compared with the ℓ_1 norm. Some popular nonconvex functions include ℓ_p(0<p<1) <cit.>, capped ℓ_1 <cit.>, transformed ℓ_1 <cit.>, ℓ_1-ℓ_2 <cit.> and so on. We study in this paper the ratio of the ℓ_1 and ℓ_2 norms (ℓ_1/ℓ_2) to approximate the desired ℓ_0 norm. The ℓ_1/ℓ_2 function may date back to <cit.> as a sparsity measure. It was explicitly pointed out <cit.> that similar with the ℓ_0 norm, ℓ_1/ℓ_2 is a scale-invariant measure which may allow the ℓ_1/ℓ_2 based model to reconstruct signals and images <cit.> with high dynamic range. In order to show the advantage of the ℓ_1/ℓ_2 function, the authors of <cit.> also exhibited a specific example in which the ground truth signal x_0 is a solution of min{x_1/x_2:Ax = b,x∈ℝ^n}, while x_0 is not a solution of the ℓ_1, ℓ_p(p = 1/2) or ℓ_1-ℓ_2 based models. The equivalence between ℓ_1/ℓ_2 and ℓ_0 for reconstructing signals under suitable conditions was studied in <cit.>. 
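To make the scale-invariance property mentioned above concrete, the following few lines of Python (ours, purely for illustration) compare ℓ_0, ℓ_1 and ℓ_1/ℓ_2 on a sparse vector and on rescaled copies of it: the ℓ_1 norm grows with the scale, whereas ℓ_0 and ℓ_1/ℓ_2 do not, and the ratio always lies between 1 and the square root of the number of nonzero entries, so small values indicate sparsity.

import numpy as np

def l1_over_l2(x):
    # Sparsity surrogate: ratio of the l1 and l2 norms (x must be nonzero).
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

x = np.zeros(100)
x[[3, 17, 42]] = [5.0, -2.0, 0.5]          # a 3-sparse vector

for scale in (1.0, 10.0, 1000.0):
    y = scale * x
    print(f"scale={scale:8.1f}  l0={np.count_nonzero(y):3d}  "
          f"l1={np.linalg.norm(y, 1):10.2f}  l1/l2={l1_over_l2(y):.4f}")
# l0 and l1/l2 are unchanged by rescaling, while l1 is not; the ratio always
# lies between 1 and sqrt(l0), so small values indicate a sparse vector.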
Since ℓ_1/ℓ_2 is nonconvex and nonsmooth, it is difficult to find a global minimizer of problem (<ref>) and many algorithms are proposed to search for a stationary point of problem (<ref>). In <cit.>, the authors tried to utilize the alternating direction method of multipliers (ADMM) to solve problem (<ref>) and showed its efficiency empirically in numerical simulations. It has been proven in <cit.> that under mild conditions the sequence generated by ADMM has a subsequence convergent to a stationary point of (<ref>). Three accelerated algorithms for solving (<ref>), namely, the ℓ_1/ℓ_2 minimization via bisection search (BS) and the ℓ_1/ℓ_2 minimization via adaptive selection method (A1 or A2), were proposed in <cit.> and their subsequential convergence were established. Generally, there is noise in the measurement in practice. When the noise is assumed to be white Gaussian noise, it is customary to relax the equality constraint Ax = b to an inequality constraint <cit.> Ax-b_2≤ϵ for 0<ϵ<b_2. Thus, the ℓ_1/ℓ_2 based model for reconstructing sparse signals from the noisy measurements can be formulated as min{x_1/x_2:Ax-b_2≤ϵ,x∈ℝ^n}. The existence of solutions of problem (<ref>) can be theoretically guaranteed <cit.> by assuming the kernel of A has the s-spherical section property <cit.> and there exists x∈ℝ^n such that x_0<m/s and Ax-b_2≤ϵ for some s>0. Although algorithms for solving (<ref>) with ℓ_1 norm or ℓ_p(0<p<1) quasi-norm in place of ℓ_1/ℓ_2 have been extensively studied in <cit.>, these algorithms are not directly applicable for solving (<ref>) due to the fractional objective. In <cit.>, the authors applied the moving-balls-approximation (MBA) techniques to solve problem (<ref>) and showed the global sequential convergence of the proposed algorithm. Another popular model to deal with noisy compressed sensing is the ℓ_1/ℓ_2 regularized least squares problem, whose objective is the sum of ℓ_1/ℓ_2 and the square of the distance between Ax and b. Very recently, <cit.> derived the analytic solution of the proximity operator of ℓ_1/ℓ_2 which relies on two unknown values related to solutions of the problem. Then, by appropriately splitting the variable into two blocks, nonconvex ADMM with global convergence was applied to solve the ℓ_1/ℓ_2 least squares problem<cit.>. In this paper, we focus on the ℓ_1/ℓ_2 based signal recovery from noiseless or noisy measurements. First, it should be noted that the optimal value of problem (<ref>) may be not attainable for some A, b and ϵ. For instance, let A be a 2× 3 matrix whose first row is (1,0,0) and second row is (0,1,0), b=(1,1)^⊤ and ϵ=0. One can check that the optimal value of problem(<ref>) is 1. However, the objective value is strictly bigger than 1 at any feasible point of the problem. In order to ensure the existence of optimal solutions of the problem as long as the constraint set is nonempty, it is straightforward to restrict the variable in a ball. This leads to the following problem min{x_1/x_2:Ax-b_2≤ϵ,x_2≤ d, x∈ℝ^n}, where d>0. We also note that the parameter ϵ may impact the solution of problem (<ref>) significantly. Particularly, if A does not have full row rank and b is not in the range of A, the constraint set of problem (<ref>) will be empty when ϵ is small enough. To overcome this deficiency, we propose to replace the elliptic constraint {x∈ℝ^n: Ax-b_2≤ϵ} with a smooth fractional penalty function and then obtain an ℓ_1/ℓ_2 penalty problem, whose optimal value is definitely attainable. 
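Before turning to the approximation properties of the proposed penalty problem, we illustrate numerically why the extra ball constraint is useful. In the 2× 3 example given above, every feasible point has the form x = (1,1,t), and the short sketch below (ours, for illustration only) evaluates ℓ_1/ℓ_2 along this ray: the values approach 1 as |t| grows but stay strictly above it, so the infimum is not attained unless the variable is confined to a ball.

import numpy as np

# Example from the text: A has rows (1,0,0) and (0,1,0), b = (1,1), eps = 0,
# so the feasible set of the constrained problem is { (1, 1, t) : t real }.
def l1_over_l2_on_ray(t):
    x = np.array([1.0, 1.0, t])
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

for t in (0.0, 1.0, 10.0, 1e2, 1e4, 1e6):
    print(f"t = {t:12.1f}   objective = {l1_over_l2_on_ray(t):.6f}")
# The ratio (2 + |t|)/sqrt(2 + t^2) tends to 1 as |t| -> infinity, yet it is
# strictly greater than 1 for every finite t, so the infimum is not attained.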
Actually, the proposed ℓ_1/ℓ_2 penalty problem can be viewed as an approximate model to problem (<ref>) through smoothing the nonsmooth indicator function on the elliptic constraint by a smooth fractional function. We prove that as the smoothing parameter goes to zero, stationary points of the proposed penalty problem tends to those of problem (<ref>). Observing that the objective function of the proposed penalty problem is a ratio of two functions, we develop a parameterized proximal-gradient algorithm and its line search counterpart for solving the problem, inspired by the parametric approach for the fractional programming. Numerical experiments show that on noise-free cases, our proposed algorithms perform comparable with the state-of-the-art algorithms for solving ℓ_1/ℓ_2 signal recovery model in terms of the success rate. For noisy data, on one hand, the proposed algorithms can obtain competitive recovery results but use much less CPU time compared with MBA. On the other hand, compared with other sparse promoting models, our proposed algorithms perform better in most cases than the competing algorithms in terms of the signal recovery accuracy and computing time. The main contributions of this paper are summarized below: (i) In order to recover sparse signals from noiseless or noisy measurements, we propose an ℓ_1/ℓ_2 based penalty problem, which is derived by replacing the elliptic constraint of problem (<ref>) with a smooth fractional penalty function. We prove that as the smoothing parameter goes to zero, the stationary points of the proposed penalty problem tend to those of problem (<ref>). (ii) We study a general structured single-ratio fractional optimization problem, of which the proposed penalty problem is a special case, and propose a parameterized proximal-gradient algorithm and its line search counterpart for solving it. When the proposed algorithms are applied to the proposed model, the analytic solution of the proximity operator of the corresponding function is derived, which enable the efficiency of the algorithms. (iii) We establish that the entire sequence generated by the proposed parameterized proximal-gradient algorithm or its monotone line search counterpart converges globally to a stationary point of the proposed problem. Numerical simulations demonstrate the efficiency of the proposed algorithms. The remaining part of this paper is organized as follows. In section <ref>, we introduce notation and some necessary preliminaries. Section <ref> is devoted to the proposed model. In section <ref>, we propose the parameterized proximal-gradient algorithm and its line search counterpart for solving the proposed problem. The convergence results of the proposed algorithms are established in section <ref>. We present in section <ref> some numerical results for ℓ_1/ℓ_2 sparse signal recovery problem with and without noise. Finally, we conclude this paper in the last section. § NOTATION AND PRELIMINARIES We start by our prefered notation. Let ℝ^n and ℕ be the n-dimensional Euclidean space and the set of nonnegative integers respectively. By ⟨·,·⟩, ·_1 and ·_2, we denote the standard inner product, the ℓ_1 norm and ℓ_2 norm of vectors, respectively. For a positive integer n, we let 0_n be the n-dimensional zero vector. For x∈ℝ, let [x]_+:= max{0,x} and ⌈ x⌉:= max{t∈ℕ:t≤ x}. For any closed set S⊆ℝ^n, the distance from x∈ℝ^n to S is defined by dist(x,S):= inf{x-z_2:z∈ S}. We denoted by intS the set of all interior points of S. 
The indicator function on S is defined by ι_S(x):= { 0, if x∈ S, +∞, otherwise. . Given a function φ:ℝ^n→ℝ:=(-∞,+∞], we use dom(φ) to denote the domain of φ, that is, dom(φ):={x∈ℝ^n:φ(x)<+∞}. The proximity operator of φ, denoted by prox_φ, is defined at x∈ℝ^n as prox_φ(x):= arg min{φ(y)+1/2y-x_2^2:y∈ℝ^n}. In the remaining part of this section, we present some preliminaries on Fréchet subdifferential and limiting-subdifferential as well as the Kurdyka-Łojasiewicz (KL) property. These concepts play an important role in our theoretical and algorithmic development. §.§ Fréchet subdifferential and limiting-subdifferential Let φ:ℝ^n→ℝ be a proper function. The Fréchet subdifferential of φ at x∈ dom(φ), denoted by ∂̂φ(x), is defined as ∂̂φ(x):={y∈ℝ^n: lim inf_z→ x, z≠ xφ(z)-φ(x)-⟨ y,z-x⟩/z-x_2}. The set ∂̂φ(x) is convex and closed for x∈ dom(φ). If x∉ dom(φ), we let ∂̂φ(x) = ∅. We say φ is Fréchet subdifferentiable at x∈ℝ^n when ∂̂φ(x)≠ 0. We also need the notion of limiting-subdifferentials. The limiting-subdifferential of φ at x∈ dom(φ) is defined by ∂φ(x):={y∈ℝ^n:∃ x^k→ x, φ(x^k)→φ(x), y^k∈∂̂φ(x^k)→ y}. It is straightforward that ∂̂φ(x) ⊆∂φ(x) for all x∈ℝ^n. We say φ is regular at x∈ dom(φ) if ∂̂φ(x) = ∂φ(x). Moreover, if φ is convex, then ∂̂φ(x) and ∂φ(x) reduce to the classical subdifferential in convex analysis, i.e., ∂̂φ(x) = ∂φ(x) = {y∈ℝ^n: φ(z)-φ(x)-⟨ y,z-x⟩≥ 0, ∀ z∈ℝ^n}. We next recall some simple and useful calculus results on ∂̂ and ∂. For any α>0 and x∈ℝ^n, ∂̂(αφ)(x) = α∂̂φ(x) and ∂(αφ)(x) = α∂φ(x). Let φ_1, φ_2: ℝ^n→ℝ be proper and lower semicontinuous and let x∈ dom(φ_1+φ_2), then, ∂̂φ_1(x)+∂̂φ_2(x)⊆∂̂(φ_1+φ_2)(x). If φ_1 is strictly continuous at x while φ_2(x) is finite, then ∂(φ_1+φ_2)(x)⊆∂φ_1(x)+∂φ_2(x). If φ_2 is differentiable at x, then ∂̂φ_2(x) = {∇φ_2(x)} and ∂̂(φ_1+φ_2)(x)= ∂̂φ_1(x)+∇φ_2(x). Furthermore, if φ_2 is continuously differentiable at x, then ∂φ_2(x) = {∇φ_2(x)} and ∂(φ_1+φ_2)(x)= ∂φ_1(x)+∇φ_2(x). We next recall some results of the Fréchet subdifferential for the quotient of two functions. <cit.>(Subdifferential calculus for quotient of two functions). Let f_1:ℝ^n→ℝ be proper and f_2:ℝ^n→ℝ. Define ρ:ℝ^n→ℝ at x∈ℝ^n as ρ(x):={ f_1(x)/f_2(x), if x∈ dom(f_1) and f_2(x) ≠ 0, +∞, otherwise. . Let x∈ dom(ρ) with a_1:=f_1(x)≥ 0 and a_2:= f_2(x) > 0. If f_1 is continuous at x relative to dom(f_1) and f_2 is locally Lipschitz continuous at x, then ∂̂ρ(x) = ∂̂(a_2f_1-a_1f_2)(x)/a_2^2. Furthermore, if f_2 is differentiable at x, then ∂̂ρ(x) = a_2∂̂f_1(x)-a_1∇ f_2(x)/a_2^2. §.§ Kurdyka-Łojasiewicz (KL) property We next recall the Kurdyka-Łojasiewicz (KL) property. (KL property <cit.>). A proper function φ:ℝ→ℝ is said to satisfy the KL property at x̂∈ dom(∂φ) if there exist η∈ (0,+∞], a neighborhood 𝒪 of x̂ and a continuous concave function φ: [0,η)→ [0,+∞], such that: (i) φ(0) = 0, (ii) φ is continuously differentiable on (0,η) with φ^'>0, (iii) For any x∈𝒪∩{x∈ℝ^n: φ(x̂)<φ(x)<φ(x̂)+η} there holds φ^'(φ(x)-φ(x̂)) dist(0,∂φ(x)) ≥ 1. A proper lower semicontinuous function φ:ℝ→ℝ is called a KL function if φ satisfies the KL property at all points in dom(∂φ). The notion of the KL property plays a crucial rule in analyzing the global sequential convergence; see <cit.>. A framework for proving global sequential convergence using the KL property is provided in <cit.>. We review this result in the next proposition. Let φ:ℝ→ℝ be a proper lower semicontinuous function. Consider a sequence satisfies the following three conditions: (i) (Sufficient decrease condition.) 
There exists a > 0 such that φ(x^k+1) + ax^k+1-x^k_2^2≤φ(x^k) holds for any k∈ℕ; (ii) (Relative error condition.) There exists b > 0 and w^k+1∈∂φ(x^k+1) such that w^k+1_2≤ bx^k+1-x^k_2 holds for any k∈ℕ; (iii) (Continuity condition.) There exists a subsequence {x^k_j: j ∈ℕ} and x^⋆ such that x^k_j→ x^⋆ and φ(x^k_j) →φ(x^⋆), as j→∞. If φ satisfies the KL property at x^⋆, then ∑_k=1^∞x^k-x^k-1_2<+∞, lim_k→∞x^k = x^⋆ and 0∈∂φ(x^⋆). § THE PROPOSED MODEL As discussed in section <ref>, the constraint set of problem (<ref>) may be empty for some cases when ϵ is small enough. In this section, we first propose an ℓ_1/ℓ_2 based penalty problem by replacing the elliptic constraint of problem (<ref>) with a smooth fractional penalty function. Then, we prove under mild conditions that stationary points of the proposed penalty problem tend to those of problem (<ref>) as the smoothing parameter goes to zero. Before introducing our proposed model, we first recall the notation of Moreau envelope, which is a useful tool for smoothing functions. The Moreau envelope of a function ϕ:ℝ^n→ℝ̅ with parameter λ>0, denoted by env_ϕ^λ, is defined at x∈ℝ^n as env_ϕ^λ(x):= min{ϕ(y) + 1/2λx-y_2^2: y∈ℝ^n}. Let C: = {x∈ℝ^m:x-b_2≤ϵ}. It is observed that, for x∈ℝ^m, env_ι_C^λ(x) = min{1/2λx-y_2^2:y-b_2≤ϵ} = { 0, if x-b_2≤ϵ, 1/2λ(x-b_2-ϵ)^2, otherwise, . =1/2λ(x-b_2-ϵ)_+^2. It is known <cit.> that env_ι_C^λ∘ A is Lipschitz continuously differentiable with a Lipschitz constant 1/λA_2^2 and ∇( env_ι_C^λ∘ A)(x) = (Ax-b_2-ϵ)_+/Ax-b_2A^T(Ax-b)/λ ={ 0_n, if Ax-b_2≤ϵ, (1-ϵ/Ax-b_2)A^T(Ax-b)/λ, otherwise. . Noting that ( env_ι_C^λ∘ A)/·_2 is continuously differentiable on ℝ^n\{0_n}, we propose the following optimization problem for sparse signal recovery through smoothing ι_C∘ A of problem (<ref>) by ( env_ι_C^λ∘ A)/·_2 min{λx_1+ env_ι_C^1(Ax)/x_2:x∈ D, x≠ 0_n}, where D:={x∈ℝ^n:x_2≤ d}. Problem (<ref>) is actually a penalty problem for (<ref>), utilizing ( env_ι_C^λ∘ A)/·_2 as the penalty function. In the remaining part of this section, we dedicate to establishing the relationship between solutions and stationary points of problems (<ref>) and (<ref>). Prior to that, we first show the existence of optimal solutions to problems (<ref>) and (<ref>). For convenient of presentation, we define Q_λ:ℝ^n→ℝ̅ at x∈ℝ^n as Q_λ(x):= { λx_1+ env_ι_C^1(Ax)/x_2, if x∈ D and x≠ 0_n, +∞, else. . The following Lemma concerns the lower semicontinuity of function Q_λ for any λ>0. For any λ>0, Q_λ is lower semicontinuous. Let x∈ℝ^n. If x∈ dom(Q_λ), one has lim inf_y→ x Q_λ(y) = Q_λ(x) due to the continuity of ·_1, env_ι_C^1 and ·_2. If x∉ dom(Q_λ), then x∉ D or x=0_n. For the case x∉ D, lim inf_y→ x Q_λ(y) = +∞ = Q_λ(x) because D is a closed set. When x=0_n, it is obvious that lim_y→ xλy_1 + env_ι_C^1(Ay) = 1/2(b_2-ϵ)_+^2>0 from the assumption that b_2>ϵ. Thus, lim inf_y→ x Q_λ(y) = +∞ = Q_λ(x). We complete the proof immediately. With the help of Lemma <ref>, we show the existence of optimal solutions to problems (<ref>) and (<ref>) in the next proposition. The optimal solution set of (<ref>) is nonempty. Moreover, if the constraint set of problem (<ref>) is nonempty, then the optimal solution set of problem (<ref>) is nonempty. It is obvious that problem (<ref>) has optimal solutions when the constraint set is nonempty since it minimizes a continuous function over a bounded closed set. In order to show the existence of optimal solutions to problem (<ref>), we first rewrite it to an equivalent formulation min{Q_λ(x):x∈ℝ^n}. 
Clearly, Q_λ is level bounded since dom(Q_λ) = D\{0_n}. Lemma <ref> ensures that Q_λ is lower semicontinuous. Therefore, Q_λ has minimizers. We then get the desired results. We next exhibit in the following theorem that optimal solutions to problem (<ref>) tends to optimal solutions to problem (<ref>) as λ goes to zero. Since its proof is very similar with that of Theorem 17.1 in <cit.>, we omit it here. Let {λ_k>0:k∈ℕ} be a decreasing sequence satisfying lim_k→∞λ_k= 0. Suppose x^k∈ℝ^n is an optimal solution of problem (<ref>) with λ = λ_k for k∈ℕ. There hold: (i) {x^k:k∈ℕ} is bounded; (ii) Any accumulation point x^⋆ of {x^k:k∈ℕ} is a feasible point of problem (<ref>), i.e., x^⋆∈ D and Ax^⋆∈ C; (iii) Any accumulation point x^⋆ of {x^k:k∈ℕ} is an optimal solution to problem (<ref>). Recall that x^⋆ is a (Fréchet) stationary point of F if and only if 0∈∂̂ F(x^⋆). In the remaining part of this section, we dedicate to exploring the relationship between stationary points of problems (<ref>) and (<ref>). To this end, we first give a characterization of stationary points of problems (<ref>) and (<ref>) in the next lemma. The following statements hold if int D∩ int{x:Ax∈ C}≠∅: (i) The vector x^⋆∈ℝ^n is a stationary point of (<ref>) if and only if x^⋆∈ D, Ax^⋆∈ C and 0∈∂ (·_1+ι_D)(x^⋆) - x^⋆_1/x^⋆_2∇x^⋆_2 + ∂ (ι_C∘ A )(x^⋆); (ii) The vector x^⋆∈ℝ^n is a stationary point of (<ref>) if and only if 0_n≠ x^⋆∈ D and 0∈λ∂ (·_1 + ι_D)(x^⋆) - Q_λ(x^⋆)∇x^⋆_2 + ∇ ( env_ι_C^1∘ A)(x^⋆). Noting that env_ι_C^1∘ A, ι_C∘ A and ι_D are all convex functions, all of them are regular functions. Due to the continuously differentiable property of ·_2 and env_ι_C^1∘ A on ℝ^n\{0_n}, we obtain from Corollary 1.111(i) in <cit.> that at any x∈ℝ^n\{0_n}, ·_1/·_2 and λ·_1+ env_ι_C^1∘ A/·_2 are regular and there hold: ∂̂x_1/x_2 = ∂x_1/x_2 = ∂x_1 - x_1/x_2∇x_2/x_2, ∂̂λx_1+ env_ι_C^1(Ax)/x_2 = ∂λx_1+ env_ι_C^1(Ax)/x_2 = λ∂x_1+∇ ( env_ι_C^1∘ A)x-λx_1+ env_ι_C^1(Ax)/x_2∇x_2/x_2. We first prove Item (i). The vector x^⋆≠ 0_n is a stationary point of problem (<ref>) if and only if 0∈ ∂x^⋆_1/x^⋆_2+∂(ι_D+ι_C∘ A)(x^⋆), = ∂x^⋆_1/x^⋆_2+∂ι_D(x^⋆)+∂ (ι_C∘ A)(x^⋆), where the first inclusion follows from the locally Lipschitz continuity of ·_1/·_2 at x^⋆ as well as the regularity of ·_1/·_2 and ι_D+ι_C∘ A in ℝ^n\{0_n}, the second equality holds thanks to Corollary 16.38 in <cit.> and intD∩ int{x:Ax∈ C}≠∅. For x∈ℝ^n\{0_n}, it follows from x_2>0 as well as aι_S=ι_S for any a>0 and S⊆ℝ^n that x_2∂ι_D(x) = ∂ι_D(x) and x_2 ∂ (ι_C∘ A )(x) = ∂ (ι_C∘ A )(x). Therefore, we deduce from (<ref>) and (<ref>) that (<ref>) is equivalent to 0 ∈∂x^⋆_1 - x^⋆_1/x^⋆_2∇x^⋆_2 + ∂ι_D(x^⋆) + ∂ (ι_C∘ A) (x^⋆), =∂(·_1+ι_D)(x^⋆) - x^⋆_1/x^⋆_2∇x^⋆_2 + ∂ (ι_C∘ A) (x^⋆), where the last equality follows from Corollary 16.38 in <cit.>. This proves Item (i). We next prove Item (ii) by a similar method. The vector x^⋆∈ D\{0_n} is a stationary point of problem (<ref>) if and only if 0∈∂(λ·_1+ env_ι_C^1∘ A/·_2)(x^⋆)+∂_ι_D(x^⋆) from the regularity of λ·_1+ env_ι_C^1∘ A/·_2 and ℓ_D as well as the locally Lipschitz continuity of λ·_1+ env_ι_C^1∘ A/·_2. With the help of (<ref>) and (<ref>), (<ref>) holds if and only if 0∈ λ∂x^⋆_1 + ∇ ( env_ι_C^1∘ A)(x^⋆) +∂_ι_D(x^⋆) - λx^⋆_1+ env_ι_C^1(Ax^⋆)/x^⋆_2∇x^⋆_2 = λ∂ (·_1 + ι_D)(x^⋆) + ∇ ( env_ι_C^1∘ A)(x^⋆) - Q_λ(x^⋆)∇x^⋆_2, where the last equality follows from λι_D = ι_D. We get Item (ii) immediately. We are now ready to reveal the relationship between stationary points of problems (<ref>) and (<ref>). 
Recall first that when d>0 and int{x∈ℝ^n:Ax∈ C}≠∅, there hold <cit.> ∂ι_D(x) = { {0_n}, if x_2<d, {tx:t≥ 0}, if x_2=d, . ∂(ι_C∘ A )(x) = { {0_n}, if Ax - b_2<ϵ, {tA^T(Ax-b):t≥ 0}, if Ax - b_2=ϵ. . Let {λ_k>0:k∈ℕ} be a decreasing sequence satisfying lim_k→∞λ_k= 0. Suppose x^k∈ℝ^n is any stationary point of problem (<ref>) with λ = λ_k for k∈ℕ. Then, {x^k:k∈ℕ} is bounded. Furthermore, if int D∩ int{x:Ax∈ C}≠∅ and there exists a x̂∈{x∈ D:Ax∈ C} such that Q_λ_k(x^k)/λ_k≤x̂_1/x̂_2 for all k∈ℕ, then there hold for any accumulation point x^⋆ of {x^k:k∈ℕ} that: (i) x^⋆ is a feasible point of problem (<ref>), i.e., x^⋆∈ D and Ax^⋆∈ C; (ii) x^⋆ is a stationary point of problem (<ref>). It is obvious that {x^k:k∈ℕ} is bounded since x^k_2≤ d for all k∈ℕ. We first prove Item (i). We have that 0≤ env_ι_C^1(Ax^k)/x^k_2≤ Q_λ_k(x^k)≤λ_kx̂_1/x̂_2. By the fact that lim_k→∞λ_k = 0, the above inequality implies that lim_k→∞ env_ι_C^1(Ax^k) = 0. We then obtain Ax^⋆∈ C from the definition and continuity of env_ι_C^1. We also have x^⋆∈ D because x^k_2≤ d for k∈ℕ. Since b_2>ϵ and Ax^⋆∈ C, we have x^⋆≠ 0_n. We next prove Item (ii). We consider two different cases: Ax^⋆-b_2<ϵ and Ax^⋆-b_2 = ϵ, separately. Case 1. Suppose first that x^⋆ satisfies Ax^⋆-b_2<ϵ. In this case ∂(ι_C∘ A)(x^⋆) = {0_n}. Then there exists {x^k_j:j∈ℕ} and J>0 such that lim_j→∞x^k_j = x^⋆ and Ax^k_j-b_2<ϵ for all j>J. By Lemma <ref>, x^k_j∈ D\{0_n} satisfies 0∈λ_k_j∂ (·_1+ι_D)(x^k_j) - Q_λ_k_j(x^k_j)∇x^k_j_2 + ∇( env_ι_C^1∘ A)(x^k_j). Since Ax^k_j - b_2<ϵ for j>J, there hold Q_λ_k_j(x^k_j) = λ_k_jx_1/x_2 and ∇( env_ι_C^1∘ A)(x^k_j) = 0_n. Therefore, one has from (<ref>) that 0∈∂(·_1+ι_D)(x^k_j)- x^k_j_1/x^k_j_2∇x^k_j_2. Taking limits on both sides of this inclusion, we deduce that 0∈∂(·_1+ι_D)(x^⋆)-x^⋆_1/x^⋆_2∇x^⋆_2, which indicates that x^⋆ is a stationary point of problem (<ref>) by Lemma <ref> due to ∂(ι_C∘ A)(x^⋆) = {0_n}. Case 2. Suppose now that x^⋆ satisfies Ax^⋆-b_2 = ϵ and lim_j→∞x^k_j = x^⋆. Lemma <ref> leads to that -1/λ_k_j (∇( env_ι_C^1∘ A)(x^k_j) - env_ι_C^1(Ax^k_j)/x^k_j_2∇x^k_j_2) ∈∂(·_1+ι_D)(x^k_j) - x^k_j_1/x^k_j_2∇x^k_j_2. We first prove {τ_j:=(Ax^k_j - b_2-ϵ)_+/λ_k_jAx^k_j-b_2:j∈ℕ} is bounded by contradiction. Assume that {τ_j:j∈ℕ} is unbounded. Without loss of generality, we suppose lim_j→∞τ_j = +∞. Then, it follows from env_ι_C^λ, ∇( env_ι_C^λ∘ A) and (<ref>) that -A^T(Ax^k_j-b) + Ax^k_j-b_2(Ax^k_j-b_2-ϵ)_+/2x^k_j_2∇x^k_j_2 ∈(∂(·_1+ι_D)(x^k_j) - x^k_j_1/x^k_j_2∇x^k_j_2)/τ_j. Since ∂(·_1+ι_D) = ∂·_1+∂_ι_D, ∪_j = 1^∞∂x^k_j_1 is bounded, x^⋆≠ 0_n and Ax^⋆ - b_2 = ϵ, taking limits on both sides of (<ref>) and invoking (<ref>), we obtain -A^T(Ax^⋆ - b) = t_⋆x^⋆ for some t_⋆≥ 0. This implies that x^⋆ satisfies x^⋆∈ arg min{η(x) := 1/2Ax-b_2^2+t_⋆/2x_2^2:x∈ℝ^n}. If t_⋆ = 0, (<ref>) contradicts to the fact that int D∩ int{x:Ax∈ C}≠∅ due to Ax^⋆ - b_2=ϵ. Thus, t_⋆ should be larger than zero. In this case, we claim that x^⋆_2 = d. Otherwise, there exists J^'>0 such that for all j>J^', x^k_j_2<d and ∂ι_D(x^k_j) = {0_n}. Then, the limitation of the righthand of (<ref>) is {0_n}, which implies that t_⋆ = 0 contradicting to t_⋆ > 0. Then, η(x^⋆) = 1/2ϵ^2+t^⋆/2d^2. Due to int D∩ int{x:Ax∈ C}≠∅, for any x∈ int D∩ int{x:Ax∈ C}, η(x)<η(x^⋆), contradicting to (<ref>). We obtain {τ_j:j∈ℕ} is bounded immediately. Without loss of generality, we assume lim_j→∞τ_j = τ_⋆. It is obvious that lim_j→∞ env_ι_C^1(Ax^k_j)/λ_k_jx^k_j_2∇x^k_j_2 = lim_j→∞τ_jAx^k_j-b_2(Ax^k_j-b_2-ϵ)_+/2x^k_j_2∇x^k_j_2 = 0_n. 
Thus, we deduce by taking limits on both sides of (<ref>) that -τ_⋆A^T(Ax^⋆ - b)∈∂(·_1+ι_D)(x^⋆) - x^⋆_1/x^⋆_2∇x^⋆_2, which implies that x^⋆ is a stationary point of problem (<ref>) by Lemma <ref> from ∂(ι_C∘ A)(x^⋆) = {tA^T(Ax-b):t≥ 0}. We complete the proof immediately. We make several remarks on Theorem <ref>. First, we do not assume any constraint qualification condition here, although it is a common assumption when analyze the penalty methods. Second, the assumption int D∩ int{x: Ax∈ C} can be easily satisfied as long as d is sufficiently large and int{x: Ax∈ C}≠∅. Finally, the assumption that there exists a x̂∈{x∈ D: Ax∈ C} such that Q_λ_k(x^k)/λ_k≤x̂_1/x̂_2 for all k∈ℕ can be satisfied by appropriately choosing the initial point of the algorithm for solving the penalty problem (<ref>), if the algorithm generates nonincreasing objective sequence. For instance, let x̂∈{x∈ D: Ax∈ C}. The initial point x^k,0∈ D of the algorithm for solving (<ref>) with λ = λ_k can be chosen as x^k,0 = x̂. Then, any accumulation point x^k of the sequence generated by the algorithm satisfies Q_λ_k(x^k)/λ_k≤Q_λ_k(x̂)/λ_k=x̂_1/x̂_2 due to Ax̂∈ C. § PARAMETERIZED PROXIMAL-GRADIENT ALGORITHMS FOR SOLVING PROBLEM (<REF>) In this section, we first study a general structured fractional optimization problem, of which problem (<ref>) is a special case. We propose a parameterized proximal-gradient algorithm and its line search counterpart to solve the fractional optimization problem. Then, the proposed algorithm are applied to solving problem (<ref>) and the closed-form solution of the involved proximity operator is given. §.§ A general fractional optimization problem We first study in this subsection the following single-ratio fractional optimization problem min{f(x)+h(x)/g(x):x∈Ω∩ dom(f)}, where f:ℝ^n→ℝ is proper lower semicontinuous, g, h:ℝ^n→ℝ and Ω:={x∈ℝ^n:g(x)≠ 0}. By defining F:ℝ^n→ℝ at x∈ℝ^n as F(x):={ f(x)+h(x)/g(x), if x∈Ω∩ dom(f), +∞, otherwise, . problem (<ref>) can be written as min{F(x):x∈ℝ^n}. We suppose in the whole sections <ref> and <ref> that f, g and h satisfy the following blanket assumptions. Assumption 1. (i) f is locally Lipschitz continuous on dom(f)∩Ω; (ii) g is locally Lipschitz continuously differentiable and positive on Ω∩ dom(f); (iii) h is Lipschitz differentiable with a Lipschitz constant L>0; (iv) f+h is non-negative on dom(f), Ω∩ dom(f)≠∅; (v) prox_f-γ g(x)≠∅ for any x∈ℝ^n and γ≥0; (vi) F is lower semicontinuous and level bounded. One can check that problem (<ref>) is a special case of problem (<ref>) with f = λ·_1 + ι_D, h = env_ι_C^1∘ A and g = ·_2. It is clear that Items (i) and (ii) in Assumption 1 are satisfied in this case. Item (iii) in Assumption 1 holds since env_ι_C^1∘ A is Lipschitz differentiable with a Lipschitz constant A_2^2 <cit.>. Obviously, f+h≥ 0 and Ω∩ dom(f) = {x∈ℝ^n:0<x_2≤ d}≠∅ as d>0 in this case. Because the function f-γ g = λ·_1-γ·_2+ι_D is proper, lower semicontinuous and bounded below on ℝ^n, prox_f-γ g(x)≠∅ for any x∈ℝ^n and γ≥0. The lower semicontinuity and level boundedness of F follow from Lemma <ref> and the boundedness of D respectively. Thus, problem (<ref>) satisfies Assumption 1. §.§ The parameterized proximal-gradient algorithm (PPGA) for solving problem (<ref>) In order to solve the single-ratio fractional optimization problem (<ref>), we first review the parametric programming for problem (<ref>), which motivates us to develop the parameterized proximal-gradient algorithm. 
The parametric approach, which may date back to Dinkelbach's algorithm <cit.>, applied to problem (<ref>) generates the k-th iteration by solving a subproblem x^k+1∈ arg min{f(x)-C_kg(x)+h(x):x∈Ω}, where C_k is updated by C_k:=f(x^k)+h(x^k)/g(x^k). In each iteration, one can apply proximal-gradient methods to the subproblem (<ref>) to obtain a stationary point of problem (<ref>). However, such algorithm may be not efficient enough since solving the nonconvex nonsmooth subproblem (<ref>) in each iteration can yield high computational cost. Inspired by the above analysis, we intend to solve a much easier problem instead of problem (<ref>). We propose to use a quadratic approximation for h(x) and solve the following subproblem in each iteration x^k+1∈ arg min{f(x)-C_kg(x)+h(x^k)+⟨∇ h(x^k),x-x^k⟩+x-x^k_2^2/2α_k:x∈Ω}, where α_k>0 for k∈ℕ. With the help of the notion of proximity operators, (<ref>) can be equivalently reformulated as x^k+1∈ prox_α_k(f-C_kg)(x^k-α_k∇ h(x^k)). It is worth noting that solving (<ref>) is much easier than solving problem (<ref>) in many applications, since the proximity operator of f-C_kg can be explicitly computed in these applications. We will show this point in subsection <ref>. The proposed algorithm for solving problem (<ref>) is summarized in Algorithm 1. Since Algorithm 1 involves in the proximity operator of α_k(f-C_kg), the gradient of h and the parameter C_k, we refer to it as the parameterized proximal-gradient algorithm (PPGA). We remark here that the step size α_k in Algorithm 1 is required to be in (α,α)⊂(0,1/L) to ensure the convergence, which will be established in section <ref>. §.§ PPGA with line search (PPGA_L) In this subsection, we incorporate a line search scheme for adaptively choosing α_k into PPGA. In PPGA, the step size α_k should be less than 1/L for k∈ℕ to ensure the convergence. However, the step size may be too small in the case of large L and thus may lead to slow convergence of PPGA. To speed up the convergence, we take advantage of the line search technique in <cit.> to enlarge the step size and meanwhile guarantee the convergence of the algorithm. The PPGA with line search scheme (PPGA_L) is described in Algorithm <ref>. From the inequality (c) of Algorithm <ref>, {F(x^k):k∈ℕ} is monotone when N=0, while it is generally nonmonotone as N>0. For convenience of presentation, we refer to PPGA_L as PPGA_ML if N=0 and PPGA_NL if N>0. Set Δ x:=x^k-x^k-1, Δ h:= ∇ h(x^k)- ∇ h(x^k-1). Motivated from <cit.>, we usually choose α_k,0 in the following formula α_k,0 = { max{α, min{α,Δ x_2^2/|⟨Δ x,Δ h⟩|}}, if ⟨Δ x,Δ h⟩≠ 0, α, otherwise. . This choice of α_k,0 can be viewed as an adaptive approximation of 1/L via some local curvature information of h. We will prove in the section <ref> that Step 2 in PPGA_L terminates in finite steps and PPGA_ML converges globally. §.§ Applications of PPGA and PPGA_L to problem (<ref>) We apply in this subsection PPGA and PPGA_L to problem (<ref>). As mentioned in subsection <ref>, problem (<ref>) is a special case of problem (<ref>) with f = λ·_1+ι_D, h = env_ι_C^1∘ A and g = ·_2. Then, the Lipschitz constant L of h should be A_2^2 in this case. We next dedicate to showing that prox_α_k(f-C_kg) can be efficiently computed in this case. For γ>0, we define ρ_γ:ℝ^n→ℝ as ρ_γ := ·_1 - γ·_2 + ι_D. Then, α_k(f-C_kg) = α_kλρ_γ with γ = C_k/λ for problem (<ref>). We establish the closed-form solution of prox_βρ_γ for β,γ>0 in the next proposition, which is inspired by Lemma 3.1 in <cit.>. 
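To fix ideas, the sketch below shows how the fixed-step variant of PPGA might be assembled for the proposed ℓ_1/ℓ_2 penalty problem, taking f to be λ times the ℓ_1 norm plus the indicator of D, h the Moreau envelope term composed with A, and g the ℓ_2 norm. The code is ours and purely illustrative: function names, the constant step size 0.99/L with L the squared spectral norm of A, and all parameter values are our own choices; the gradient of h follows the formula recalled in the previous section, and the proximal step implements only the generic case of the closed form established in the next proposition, in which the largest magnitude of the prox argument exceeds β (the boundary cases are omitted).

import numpy as np

def grad_h(A, b, eps, x):
    # Gradient of h(x) = 0.5 * ( ||Ax - b||_2 - eps )_+^2  (Moreau envelope term).
    r = A @ x - b
    nr = np.linalg.norm(r)
    if nr <= eps:
        return np.zeros_like(x)
    return (1.0 - eps / nr) * (A.T @ r)

def prox_rho(y, beta, gamma, d):
    # prox_{beta * rho_gamma}(y); only the generic case ||y||_inf > beta is handled.
    assert np.max(np.abs(y)) > beta, "boundary cases of the closed form are omitted"
    z = np.sign(y) * np.maximum(np.abs(y) - beta, 0.0)   # soft thresholding S(y, beta)
    nz = np.linalg.norm(z)
    if nz + beta * gamma <= d:
        return z * (nz + beta * gamma) / nz
    return z * (d / nz)

def ppga(A, b, eps, lam, d, x0, max_iter=1000, tol=1e-9):
    # Fixed-step PPGA sketch:
    #   x^{k+1} in prox_{alpha_k (f - C_k g)}( x^k - alpha_k grad h(x^k) ),
    # with C_k the current objective value and alpha_k < 1/L, L = ||A||_2^2.
    L = np.linalg.norm(A, 2) ** 2
    alpha = 0.99 / L
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        env = 0.5 * max(np.linalg.norm(A @ x - b) - eps, 0.0) ** 2
        Ck = (lam * np.linalg.norm(x, 1) + env) / np.linalg.norm(x)
        y = x - alpha * grad_h(A, b, eps, x)
        x_new = prox_rho(y, beta=alpha * lam, gamma=Ck / lam, d=d)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, s = 24, 64, 4
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
    b = A @ x_true
    x_hat = ppga(A, b, eps=0.0, lam=1e-3, d=10.0 * np.linalg.norm(x_true), x0=A.T @ b)
    print("residual :", np.linalg.norm(A @ x_hat - b))
    print("rel. err :", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

The line-search counterpart PPGA_L would instead start each iteration from the adaptive step α_k,0 above and backtrack until inequality (c) of Algorithm <ref> is satisfied.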
Before that, we first define the soft thresholding operator 𝒮:ℝ^n×ℝ_+→ℝ^n at (y,α)∈ℝ^n×ℝ_+ as 𝒮(y,α) := { y-α, if y>α, 0, if |y|≤α, y+α, otherwise. . Let y∈ℝ^n, β,γ>0 and ρ_γ be defined by (<ref>). Then the following statements hold: (i)If y_∞>β, prox_βρ_γ(y) = { z(z_2 + βγ)/z_2, if z_2≤ d-βγ, zd/z_2, otherwise, . where z = 𝒮(y,β); (ii) If y_∞ = β, x^⋆∈ prox_βρ_γ(y) if and only if x^⋆_2 = min{βγ,d}, x_i^⋆ = 0 for i∈{j∈ℕ_n:|y_j|<β} and x_i^⋆y_i≥ 0 for i∈{j∈ℕ_n: |y_i| = β}; (iii)If (1-γ)β<y_∞<β, x^⋆∈ prox_βρ_γ(y) if and only if x^⋆ is a 1-sparse vector satisfying x^⋆_2 = min{y_∞+(γ-1)β,d}, x_i^⋆y_i≥ 0 for i∈ℕ_n, x_i^⋆ = 0 for i∈{j∈ℕ_n:|y_j|<y_∞}; (iv)If y_∞≤ (1-γ)β, prox_βρ_γ(y) = {0_n}. It is straightforward to obtain the following relations about the sign and order for the components of any x^⋆ in prox_βρ_γ(y): x_i^⋆{ ≥ 0, if y_i>0, ≤ 0, else, . and |x_i^⋆|≥ |x_j^⋆| if |y_i|≥ |y_j|. Otherwise, we can always change the sign of x^⋆ or swap the absolute values of x_i^⋆ and x_j^⋆ to obtain a smaller objective. Therefore, we can assume without loss of generality that y_1≥ y_2≥⋯≥ y_n≥ 0. Let G:ℝ^n→ℝ and H:ℝ^n→ℝ be defined at x∈ℝ^n as G(x):=x_1 - γx_2 + 1/2βx-y_2^2 and H(x):=d-x_2, respectively. Then, the Lagrange function for the optimization problem prox_βρ_γ(y) = arg min{x_1-γx_2+1/2βx-y_2^2:x_2≤ d} has the form of L(x,η) = G(x) - η H(x), where η≥ 0 is the Lagrange multipliers. Thus, any x^⋆∈ prox_βρ_γ(y) must satisfy the following KKT condition for some η^⋆≥ 0, due to the Lipschitz continuity of the objective, { 0∈∂_xL(x^⋆,η^⋆), x^⋆_2≤ d, η^⋆≥ 0, η^⋆(d-x^⋆_2) = 0. . Recall that ∂·_2(0_n) = {γ∈ℝ^n:γ_2≤ 1} and ∂(·_1 - ·_2)⊆∂·_1 - ∂·_2. Thus, the first relation in (<ref>) implies that one of the following relations holds for some p∈∂·_1(x^⋆): (1-βγ-η^⋆β/x^⋆_2)x^⋆ = y - β p and x^⋆≠ 0_n, y-β p_2≤ |η^⋆β-βγ| and x^⋆ = 0_n. One can easily check that for any x^⋆ satisfying (<ref>), there holds G(x^⋆) = x^⋆_1 - γx^⋆_2 + 1/2βx^⋆_2^2 + 1/2βy_2^2 - ⟨ x^⋆,p+(1/β-γ-η^⋆/x^⋆_2)x^⋆⟩ =-γx^⋆_2 + 1/2βx^⋆_2^2 - (1/β-γ-η^⋆/x^⋆_2)x^⋆_2^2 + 1/2βy_2^2 = -1/2βx^⋆_2^2 + 1/2βy_2^2 - η^⋆x^⋆_2 ≤ G(0_n). Therefore, we need to find x^⋆ with the largest norm among all x^⋆ satisfying (<ref>). In the following, we discuss the four cases listed in this proposition respectively. (i) y_1>β. In this case, y_1-β p_1>0. Then (<ref>) and (<ref>) yield that 1-βγ-η^⋆β/x^⋆_2>0. Thus, x_i^⋆ = 0 for i∈{j∈ℕ_n:y_j≤β}. Then, we deduce from (<ref>) that y-β p = 𝒮(y,β) and x^⋆_2 = 𝒮(y,β)_2 + βγ -η^⋆β. By the KKT condition (<ref>) and (<ref>), we have η^⋆ = { 0, if 𝒮(y,β)_2+βγ≤ d, 𝒮(y,β)_2-d/β + γ, otherwise, . x^⋆_2 = { 𝒮(y,β)_2 + βγ, if𝒮(y,β)_2 + βγ≤ d, d, otherwise. . The above two relations and (<ref>) imply the desired results. (ii) y_1 = β. We conclude from (<ref>) that the left side of (<ref>) is 0_n. Thus, we have x_i^⋆ = 0 for i∈{j∈ℕ_n:y_j<β} due to p∈∂·_1(x^⋆) and x^⋆_2 = βγ-η^⋆β. By the KKT condition and (<ref>), x^⋆_2 = min{d,βγ} and η^⋆ = βγ- min{d,βγ}/β. Then, any x^⋆∈ prox_βρ_γ(y) should satisfy x^⋆_2 = min{d,βγ}, x_i^⋆y_i≥ 0 for i∈ℕ_n and x_i^⋆ = 0 for i∈{j∈ℕ_n:y_j<β}. We next show that any x^⋆ satisfying the above conditions belongs to prox_βρ_γ(y), i.e., G(x^⋆) = const. One can check it by (<ref>) and Item (ii) follows immediately. (iii) (1-γ)β<y_1<β. One has x_1^⋆>0 from (<ref>) and (<ref>). Then, it follows from (<ref>) that 1-βγ-η^⋆β/x^⋆_2<0, which leads to x^⋆_2<(γ-η^⋆)β. We first show x_i^⋆=0 for i∈{j∈ℕ_n:y_j<y_1}. Otherwise, there exist x_i^⋆>0 and y_i<y_1. 
Then y_1-β p_1 = y_1-β = (1-βγ-η^⋆β/x^⋆_2)x_1^⋆≤ (1-βγ-η^⋆β/x^⋆_2)x_i^⋆ = y_i - β p_i = y_i - β, which contradicts y_i<y_1. We know from (<ref>) that x^⋆_2 = (γ-η^⋆)β-y-β p_2. By (<ref>) and (<ref>), we should find x^⋆ satisfying (<ref>) with the largest norm. If d≥ y_1+(γ-1)β, we get from (<ref>) that x^⋆ should be chosen as a 1-sparse vector with x^⋆_2 = βγ - (β-y_1) = y_1 + (γ - 1)β≤ d. Else, x^⋆_2 should be d. From (<ref>), η^⋆ should be chosen as the largest one among all η^⋆ satisfying the KKT condition (<ref>). Then, (<ref>) encourages us to choose x^⋆ to be a 1-sparse vector with x^⋆_2 = d and η^⋆ = (y_1+(γ-1)β-d)/β>0. We then obtain Item (iii) immediately. (iv) y_1≤ (1-γ)β. It is clear that (<ref>) has no solution and (<ref>) holds for x^⋆ = 0_n and η^⋆ = 0. Then, we get the desired result immediately. § CONVERGENCE ANALYSIS This section is devoted to the convergence analysis of PPGA and PPGA_L for problem (<ref>). To this end, we first propose a necessary and sufficient condition for stationary points of F, and then present a sufficient condition based on proximity operators and gradients of related functions for stationary points of F. Next, we establish the convergence of objective function values and the subsequential convergence for PPGA. By assuming the KL property of the objective, we further prove the convergence of the whole sequence generated by PPGA. Then, we show subsequential convergence of PPGA_NL and global sequential convergence of PPGA_ML under the KL property assumption. Finally, by showing that the objective function Q_λ of problem (<ref>) is semialgebraic, we obtain the global sequential convergence of PPGA and PPGA_ML applied to solving problem (<ref>). §.§ Stationary points of F We study in this subsection the stationary points of F. A necessary and sufficient condition for stationary points of F based on the Fréchet subdifferential of f and the gradients of g and h is first proposed. Then, utilizing proximity operators and gradients of the related functions, we present a sufficient condition for stationary points of F. This will help us prove that any accumulation point of the sequence generated by PPGA or PPGA_L is a stationary point of F. We establish a necessary and sufficient condition for stationary points of F in the next proposition. The vector x^⋆∈ dom(F) is a stationary point of F if and only if 0∈∂̂f(x^⋆) + ∇ h(x^⋆) - C_⋆∇ g(x^⋆), where C_⋆:=F(x^⋆). By Proposition <ref>, we know that for x∈ dom(F), ∂̂F(x) = (a_2(∂̂f(x)+∇ h(x))-a_1∇ g(x))/a_2^2, where a_1 = f(x) + h(x) and a_2 = g(x). We then obtain this proposition immediately. We next derive a sufficient condition for stationary points of F in the following proposition. If x^⋆∈ dom(F) satisfies x^⋆∈ prox_α(f-C_⋆g)(x^⋆-α∇ h(x^⋆)) for some α>0 and C_⋆ = F(x^⋆), then x^⋆ is a stationary point of F. By the definition of proximity operators, (<ref>) is equivalent to x^⋆∈ arg min{α(f(x)-C_⋆g(x))+1/2x-x^⋆+α∇ h(x^⋆)_2^2:x∈ℝ^n}. From the generalized Fermat's rule, (<ref>) leads to 0∈α∂̂f(x^⋆) - α C_⋆∇ g(x^⋆) + α∇ h(x^⋆), which implies that x^⋆ is a stationary point of F by Proposition <ref>. §.§ Subsequential convergence of PPGA We prove in this subsection that the sequence {x^k:k∈ℕ} generated by PPGA is bounded and that any accumulation point of it is a stationary point of F. To this end, we first show in the next theorem that x^k∈ dom(F) for k∈ℕ and that {F(x^k):k∈ℕ} is decreasing and convergent.
The sequence {x^k:k∈ℕ} generated by PPGA falls into dom(F) and the following statements hold: (i) F(x^k+1)+1/α_k-L/2g(x^k+1)x^k+1-x^k_2^2≤ F(x^k) for k∈ℕ; (ii) lim_k→∞C_k = lim_k→∞F(x^k) = C with C≥ 0; (iii) lim_k→∞1/α_k-L/g(x^k+1)x^k+1-x^k_2^2 = 0. We first prove x^k∈ dom(F) for k∈ℕ by induction. First, the initial point x^0∈ dom(F). Suppose x^k∈ dom(F) for some k∈ℕ. From PPGA and the definition of proximity operators, we have f(x^k+1) - C_kg(x^k+1) + 1/2α_kx^k+1-x^k+α_k∇ h(x^k)_2^2 ≤ f(x^k) - C_kg(x^k) +1/2α_kα_k∇ h(x^k)_2^2, which together with C_k = (f(x^k)+h(x^k))/g(x^k) implies that f(x^k+1) - C_kg(x^k+1) + 1/2α_kx^k+1-x^k_2^2 + ⟨ x^k+1 - x^k,∇ h(x^k)⟩≤ -h(x^k). From Item (iii) of Assumption 1, there holds h(x^k+1)≤ h(x^k) + ⟨∇ h(x^k), x^k+1-x^k⟩ + L/2x^k+1-x^k_2^2. Summing (<ref>) and (<ref>) leads to f(x^k+1) + h(x^k+1) + 1/α_k-L/2x^k+1 - x^k_2^2≤ C_kg(x^k+1). Assume x^k+1∉ dom(F). We know that x^k+1∉Ω and g(x^k+1) = 0 due to x^k+1∈ dom(f-C_kg) = dom(f) and dom(F) = dom(f) ∩Ω. By Item (iv) of Assumption 1 and 0<α_k<1/L, we deduce that x^k+1 = x^k from (<ref>). This contradicts x^k∈ dom(F) and thus implies x^k+1∈ dom(F). Therefore, we conclude that x^k∈ dom(F) for all k∈ℕ. Dividing both sides of (<ref>) by g(x^k+1), we obtain Item (i) immediately. Item (ii) follows immediately from F≥ 0 and 0<α_k<1/L. Item (iii) is a direct consequence of Items (i) and (ii). We complete the proof. We are now ready to present the main result of this subsection. Let {x^k:k∈ℕ} be generated by PPGA. Then {x^k:k∈ℕ} is bounded and any accumulation point of it is a stationary point of F. The boundedness of {x^k:k∈ℕ} follows from the level boundedness of F and Theorem <ref> (i). Let {x^k_j:j∈ℕ} be a subsequence such that lim_j→∞x^k_j = x^⋆. From Theorem <ref> (i) and the lower semicontinuity of F, we deduce that F(x^⋆)≤ lim_j→∞F(x^k_j)≤ F(x^0), which implies that x^⋆∈ dom(F). Thus, g(x^⋆)≠ 0 and x^⋆∈ dom(f). By Theorem <ref> and α_k≤α, we obtain F(x^k_j) + 1/α-L/2g(x^k_j)x^k_j - x^k_j-1_2^2≤ F(x^k_j-1). Using Theorem <ref> (ii), α<1/L and the continuity of g at x^⋆, we deduce that lim_j→∞x^k_j - x^k_j-1_2 = 0 and lim_j→∞x^k_j-1 = x^⋆. Without loss of generality, we assume lim_j→∞α_k_j-1 = α with 0<α≤α≤α. From PPGA, we get x^k_j∈ prox_α_k_j-1(f-C_k_j-1g)(x^k_j-1-α_k_j-1∇ h(x^k_j-1)). As f, g, h and ∇ h are continuous on dom(F), we obtain (<ref>) by passing to the limit in the above relation. Finally, by Proposition <ref> we have that x^⋆ is a stationary point of F. §.§ Global sequential convergence of PPGA In this subsection, we investigate the global convergence of the entire sequence {x^k:k∈ℕ} generated by PPGA. We shall show that {x^k:k∈ℕ} converges to a stationary point of F under the KL property assumption. Our analysis in this subsection mainly takes advantage of Proposition <ref>, which is based on the KL property. If F satisfies the KL property, from Proposition <ref> and Theorem <ref>, we can establish the global convergence of PPGA by showing the boundedness of the generated sequence and Items (i)-(iii) of Proposition <ref>. The boundedness of {x^k:k∈ℕ} is obtained in Theorem <ref>, and Item (iii) of Proposition <ref> is a direct consequence of the continuity of F. We next prove Items (i)-(ii) in the following lemma. Let {x^k:k∈ℕ} be generated by PPGA. Then the following statements hold: (i) There exists a>0 such that F(x^k+1) + a/2x^k+1-x^k_2^2≤ F(x^k) for all k∈ℕ; (ii) There exist b>0 and w^k+1∈∂ F(x^k+1) such that w^k+1_2≤ bx^k+1-x^k_2 for all k∈ℕ. We first prove Item (i).
Let a:=(1/α-L)/M with M:= sup{g(x):F(x)≤ F(x^0)}. Since g is continuous and F is level bounded and lower semicontinuous, M is finite. Thus a>0. This together with Theorem <ref> (i) and α_k≤α yields Item (i). We next prove Item (ii). By Theorem <ref> and Theorem <ref>, we know that x^k∈Ω for k∈ℕ and any accumulation point x^⋆ of {x^k:k∈ℕ} satisfies g(x^⋆)>0. Therefore, there exists t>0 such that g(x^k)≥ t for k∈ℕ, since {x^k:k∈ℕ} is bounded and g is continuous on Ω. Let S be the closure of {x^k:k∈ℕ}. By Theorem <ref>, S is bounded and S⊆ dom(F). Then, it can be easily checked that F is globally Lipschitz continuous on S, with the help of Assumption 1 (i)-(iii). We denote the Lipschitz constant of F on S by L̃. From PPGA, the definition of proximity operators and the generalized Fermat's rule, we obtain that x^k-x^k+1-α_k∇ h(x^k)∈α_k∂̂f(x^k+1)-α_kC_k∇ g(x^k+1), which implies that x^k-x^k+1/α_kg(x^k+1) - ∇ h(x^k)/g(x^k+1) + C_k∇ g(x^k+1)/g(x^k+1)∈∂̂f(x^k+1)/g(x^k+1). Since ∂̂F = (g(∂̂f + ∇ h)-(f+h)∇ g)/g^2 on dom(F) by Proposition <ref>, we have w^k+1∈∂̂F(x^k+1) with w^k+1:=x^k-x^k+1/α_kg(x^k+1)+∇ h(x^k+1)-∇ h(x^k)/g(x^k+1) + (C_k-C_k+1)∇ g(x^k+1)/g(x^k+1). A direct computation yields that w^k+1_2≤ (1/α_kt+L/t+L̃∇ g(x^k+1)_2/t)x^k+1-x^k_2. Since {x^k:k∈ℕ} is bounded and ∇ g is continuous on Ω, there exists β>0 such that ∇ g(x^k+1)_2≤β for k∈ℕ. Due to α_k≥α>0, we finally obtain that w^k+1_2≤ bx^k+1-x^k_2 for k∈ℕ, where b:=(1/α+L+L̃β)/t. We complete the proof by noting that ∂̂F(x^k+1)⊆∂ F(x^k+1). The main result of this subsection is established in the following theorem. Let {x^k:k∈ℕ} be generated by PPGA. If F satisfies the KL property at any point in dom(F), then ∑_k=1^∞x^k-x^k-1_2< +∞ and {x^k:k∈ℕ} converges to a stationary point of F. From Theorem <ref>, it suffices to show that ∑_k=1^∞x^k-x^k-1_2<+∞ and that {x^k:k∈ℕ} is convergent. According to Proposition <ref>, we obtain this theorem immediately by utilizing Theorem <ref> and Lemma <ref>. §.§ Convergence analysis of PPGA_L This subsection is devoted to the convergence of PPGA_L. We show that any accumulation point of {x^k:k∈ℕ} generated by PPGA_L is a stationary point of F. Furthermore, the global convergence of the entire sequence generated by PPGA_ML is established under the KL property assumption. Because the analysis in this subsection is similar to that of <cit.>, we only present the main result in the next theorem and omit its proof due to space limitations. The following statements hold: (i) Step 2 of PPGA_L terminates at some α_k≥α in at most T iterations, where α:=η/(aM+L), M:= sup{g(x):F(x)≤ C_0}, T:=⌈- log(α(aM+L))/ logη + 1⌉; (ii) Let {x^k:k∈ℕ} be generated by PPGA_L. Then {x^k:k∈ℕ} is bounded and any accumulation point of it is a stationary point of F; (iii) Let {x^k:k∈ℕ} be generated by PPGA_ML, i.e., PPGA_L when N=0. Then, ∑_k = 1^∞x^k-x^k-1_2<+∞ and {x^k:k∈ℕ} converges to a stationary point of F, if F satisfies the KL property at any point in dom(F). §.§ Convergence of PPGA and PPGA_L applied to problem (<ref>) In this subsection, we establish the convergence of PPGA and PPGA_L when they are applied to solving problem (<ref>). As discussed in subsections <ref> and <ref>, the global sequential convergence of PPGA and PPGA_ML relies on the KL property of the objective. The next lemma shows that the objective function Q_λ of problem (<ref>) is a semialgebraic function and thus satisfies the KL property. For the definition of semialgebraic functions and their relation to the KL property, we refer readers to <cit.>.
For any λ>0, Q_λ is a semialgebraic function. According to the stability properties of semialgebraic functions <cit.>, we know that finite sums and products of semialgebraic functions are semialgebraic. Therefore, it suffices to show that λ·_1, env_ι_C^1∘ A, 1/·_2, and the indicator function on {x∈ℝ^n:x_2≤ d, x≠ 0_n} are semialgebraic. It has been shown in <cit.> that λ·_1 is semialgebraic. Then we prove that env_ι_C^1∘ A and 1/·_2 are semialgebraic, i.e., Graph(env_ι_C^1∘ A) and Graph(1/·_2) are semialgebraic sets. We have Graph( env_ι_C^1∘ A) = {(x,s)∈ℝ^n×ℝ:1/2(Ax-b_2-ϵ)_+^2 = s} = {(x,s)∈ℝ^n×ℝ:Ax-b_2≤ϵ, s=0}∪ {(x,s)∈ℝ^n×ℝ:-Ax-b_2 ≤ -ϵ, Ax-b_2^4-(2ϵ^2+4s)Ax-b_2^2+4s^2-4ϵ^2s+ϵ^4 = 0}, Graph(1/·_2) = {(x,s)∈ℝ^n×ℝ:1/x_2 = s} = {(x,s)∈ℝ^n×ℝ: s^2x_2^2 = 1}. The above equalities imply that both env_ι_C^1∘ A and 1/·_2 are semialgebraic. We next prove that the indicator function on {x∈ℝ^n:x_2≤ d,x≠ 0_n} is semialgebraic, which amounts to showing that {x∈ℝ^n:x_2≤ d,x≠ 0_n} is a semialgebraic set. It is clear that {x∈ℝ^n:x_2≤ d, x≠ 0_n} = {x∈ℝ^n:x_2≤ d, -x_2^2<0}, which indicates that this set is semialgebraic. Then, we complete the proof immediately. With the help of Lemma <ref>, Theorems <ref> and <ref>, we obtain the main result of this subsection in the following theorem. The following statements hold: (i) If {x^k:k∈ℕ} is generated by PPGA or PPGA_ML applied to problem (<ref>), then ∑_k = 1^∞x^k-x^k-1_2<+∞ and {x^k:k∈ℕ} converges to a stationary point of Q_λ. (ii) If {x^k:k∈ℕ} is generated by PPGA_NL, then {x^k:k∈ℕ} is bounded and any accumulation point of it is a stationary point of Q_λ. In view of Lemma <ref>, Q_λ satisfies the KL property at any point in dom(F). We then obtain this theorem by Theorems <ref> and <ref>. § NUMERICAL EXPERIMENTS In this section, we conduct numerical simulations on sparse signal recovery to test the efficiency of our proposed algorithms, namely, PPGA, PPGA_ML, and PPGA_NL. All the experiments are carried out on a Mac with an Intel Core i5 CPU (2.7GHz) using Matlab 2019b. We consider sparse signal recovery problems on two types of matrices: random Gaussian and random oversampled discrete cosine transform (DCT) matrices. A random Gaussian matrix has i.i.d standard Gaussian entries, while a random oversampled DCT matrix is specified as A = [a_1,...,a_n]∈ℝ^m× n with a_j = 1/√(m) cos(2π w j/F), j = 1,2,...,n, where w∈ℝ^m has i.i.d entries uniformly chosen in [0,1], and F>0 is a parameter that controls the coherence in a way that a larger F yields a more coherent matrix. In order to generate an s-sparse ground truth signal x_g∈ℝ^n with at most s nonzero entries, we first randomly choose a support set J⊂{1,2,...,n} with |J| = s. Then, following the work of <cit.>, when the sensing matrix is an oversampled DCT matrix, the nonzero elements are generated by the following MATLAB command: xg = sign(randn(s,1)).*10.^(D*rand(s,1)), where randn and rand are the MATLAB commands for the standard Gaussian distribution and the uniform distribution respectively, and D is an exponential factor that controls the dynamic range Θ(x_g) = max{|x_g|}/ min{|x_g|} of x_g. Following the work of <cit.>, when the sensing matrix is a random Gaussian matrix, x_g(J) has i.i.d standard Gaussian entries. The initial point x^0 of all the compared algorithms throughout this section except in <ref> is chosen as an approximate solution of the ℓ_1 problem, which is obtained by applying 2n ADMM iterations to the problem min{0.08x_1+1/2Ax-b_2^2:x∈ℝ^n}.
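For concreteness, the oversampled-DCT test instances described above can be generated along the following lines (a MATLAB sketch of our own; the variable names are ours and only the noise-free DCT setting is shown):

  m = 64; n = 1024; s = 5; F = 5; D = 2;            % example sizes used in this section
  w = rand(m, 1);                                   % i.i.d. entries uniform on [0,1]
  A = cos(2*pi*w*(1:n)/F) / sqrt(m);                % column j equals cos(2*pi*w*j/F)/sqrt(m)
  J = randperm(n, s);                               % random support of size s
  xg = zeros(n, 1);
  xg(J) = sign(randn(s,1)) .* 10.^(D*rand(s,1));    % dynamic range of the nonzeros grows with D
  b = A * xg;                                       % noise-free measurements

For the Gaussian instances, A would instead have i.i.d. standard Gaussian entries and the nonzeros of xg would be drawn as randn(s,1), as described above.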
The stopping criteria for all the compared algorithms are the relative error x^k+1-x^k_2/x^k_2≤ 10^-8 or k>500n. §.§ Comparison of PPGA, PPGA_ML and PPGA_NL In this subsection, we compare the computational efficiency of our proposed algorithms, namely, PPGA, PPGA_ML and PPGA_NL. We consider the noise-free sparse signal recovery problem on random oversampled DCT matrices with m = 64, n = 1024, s = 5, F∈{1,5} and D∈{1,2,3}. In this case, ϵ = 0 and we set the parameters d = 10^7, λ = 0.008 for problem (<ref>). For PPGA, we set α_k = 0.999/A_2^2; for PPGA_ML and PPGA_NL, we set η = 0.5, a = 10^-8 and N = 4. Figure <ref> plots the objective function values of problem (<ref>), that is Q_λ(x^k), versus the iteration number k∈ℕ for PPGA, PPGA_ML and PPGA_NL. We find that in all the cases, Q_λ(x^k) generated by PPGA and PPGA_ML decreases as k increases. This verifies Theorem <ref> (i) and Algorithm 2 Step 2 (c) when N = 0. It can be observed from Figure <ref> that the objective function values of PPGA_ML and PPGA_NL decrease much faster than those of PPGA. For some cases, PPGA_ML and PPGA_NL perform very similarly; see the top four plots of Figure <ref>. In contrast, PPGA_ML (resp., PPGA_NL) performs slightly better in terms of computational efficiency than PPGA_NL (resp., PPGA_ML) in the bottom right plot (resp., bottom left plot) of Figure <ref>. We also find from Figure <ref> that as the difficulty of the sparse signal recovery problem increases, that is, as F and D increase, all the compared algorithms need more iterations to converge. We also plot x^k-x^⋆_2 (in logarithmic scale) against the number of iterations in Figure <ref>, where x^⋆ is the approximate solution produced by the corresponding algorithm. It is obvious that the sequence generated by PPGA_ML or PPGA_NL converges much faster than that generated by PPGA. As can be seen from Figure <ref>, the sequences generated by all three algorithms appear to converge R-linearly, although we have no theoretical results concerning the convergence rate of the algorithms. §.§ Noise-free case In this subsection, we focus on the ℓ_1/ℓ_2 based sparse signal recovery problem in the noise-free case for oversampled DCT matrices. Throughout this subsection, the oversampled DCT matrices have size 64× 1024. Since the efficiency of the ℓ_1/ℓ_2 based model compared with other sparsity-promoting models has been extensively studied in <cit.>, we only compare the signal recovery capacity of our proposed algorithms with that of the state-of-the-art algorithms for solving the ℓ_1/ℓ_2 minimization problem (<ref>), including A1, A2 and BS, which were recently proposed in <cit.>, and ℓ_1/ℓ_2_box [We use the Matlab code at https://github.com/yflouucla/L1dL2 with default parameter settings for A1, A2, BS and ℓ_1/ℓ_2_box.] <cit.>, which applies ADMM to solve the ℓ_1/ℓ_2 minimization problem with a box constraint, i.e., problem (<ref>) with a box constraint {x∈ℝ^n: d_1≤ x_i≤ d_2, i = 1,...,n}. We evaluate the performance of sparse signal recovery in terms of the success rate, that is, the number of successful trials over the total number of trials. A success is declared if the relative error of the reconstructed solution x^⋆ to the ground truth signal x_g is no more than 10^-3, i.e., x^⋆-x_g_2/x_g_2≤ 10^-3.
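As a small illustration of this evaluation protocol (ours, not taken from the paper; the handles make_instance and solve are placeholders for an instance generator and a solver under test), the success rate of a method over repeated trials can be computed as follows.

  % Success rate over repeated random trials (illustrative sketch only).
  function rate = success_rate(make_instance, solve, trials)
    success = 0;
    for trial = 1:trials
      [A, b, xg] = make_instance();                  % draw a random instance
      xstar = solve(A, b);                           % run the method under test
      if norm(xstar - xg, 2) / norm(xg, 2) <= 1e-3   % success criterion of this subsection
        success = success + 1;
      end
    end
    rate = success / trials;
  end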
In all the settings of this subsection, we set λ = 0.001 when F ∈{1,5}, D = 1; λ = 0.004 when F ∈{1,5}, D = 3; λ = [0.004,0.004,0.1,0.2,0.5,0.5] as s varies over [2,6,10,14,18,22] when F ∈{1,5}, D = 5; the other parameters of PPGA, PPGA_ML and PPGA_NL are the same as those in subsection <ref>, and the parameter settings of A1, A2, BS and ℓ_1/ℓ_2_box are chosen as suggested in <cit.> and <cit.>. Figure <ref> exhibits the success rate over 100 trials of all the compared algorithms for the sparse recovery problem with s∈{2,6,...,22}, F∈{1,5} and D∈{1,3,5} for oversampled DCT matrices. Since PPGA, PPGA_ML and PPGA_NL obtain almost the same success rates in all the cases, we only include the results of PPGA_NL for the sake of clarity of the plots. We observe from Figure <ref> that our proposed algorithm is comparable to A1, A2 and BS, and achieves much higher success rates compared with ℓ_1/ℓ_2_box. §.§ Noisy data In this subsection, we consider the sparse signal recovery problem with white Gaussian noise. In order to test the performance of the proposed algorithms in solving ℓ_1/ℓ_2 based sparse recovery problems, we present in subsection <ref> various computational aspects of the proposed algorithms together with a comparison to MBA <cit.>. Then, we compare various sparsity-promoting models in subsection <ref>. §.§.§ Algorithmic comparison We compare our proposed algorithms with MBA [We use the Matlab code at https://www.polyu.edu.hk/ama/profile/pong/MBA_l1vl2/ with default parameter settings for MBA.] <cit.> for ℓ_1/ℓ_2 sparse signal recovery with white Gaussian noise in this subsection. The test instances and experiment settings are similar to those in <cit.>. Specifically, A is an m× n random oversampled DCT matrix, the ground truth x_g is generated as in (<ref>), and b = Ax_g+0.01e, where e∈ℝ^m has i.i.d standard Gaussian entries. Here, m = 64, n = 1024, s∈{8,12}, F∈{5,15} and D∈{2,3}. The initial points of the compared algorithms in this subsection are chosen as x^0 = { A^†b+ϵ(x_ℓ_1-A^†b)/Ax_ℓ_1 - b_2, if Ax_ℓ_1 - b_2>ϵ, x_ℓ_1, otherwise, . where x_ℓ_1 is an approximate solution of (<ref>) and A^† is the pseudoinverse of A. We point out that such an x^0 lies in the constraint set of problem (<ref>). The recovery error RecErr = x^⋆ - x_g_2/ max{1,x_g_2}, with x^⋆ being the output of the corresponding algorithm, is used to evaluate the recovery capacity of the algorithm. For our proposed algorithms, ϵ is set to be 3× 10^-3√(m) and all the other parameters are chosen as those in subsection <ref>. For MBA, we consider two parameter settings. The first one, referred to as MBA_1, chooses the same ϵ as our proposed algorithms, i.e., ϵ=3× 10^-3√(m). The second one, referred to as MBA_2, takes ϵ=1.2·0.01e_2, which is the same as that in <cit.>. All the other parameters for MBA_1 and MBA_2 are determined as suggested in <cit.>. Table <ref> presents the computational results (averaged over 20 trials) including the CPU time and the recovery error of all the compared algorithms. Since PPGA costs too much computational time when (s,F,D) = (12,15,3), we do not include PPGA here. It can be observed from Table <ref> that compared with MBA, our proposed PPGA_ML and PPGA_NL spend less CPU time to converge and obtain recovered signals with smaller recovery error in most cases. §.§.§ Comparison with other sparsity-promoting models This subsection is devoted to the comparison of different sparsity-promoting models, including ℓ_1/ℓ_2, ℓ_1, ℓ_1-ℓ_2 and ℓ_1/2, for Gaussian noise sparse recovery with Gaussian matrices.
Here, the ℓ_1, ℓ_1-ℓ_2 and ℓ_1/2 models are the least-squares problems with the corresponding sparsity-promoting regularizers. The solutions of the ℓ_1/2 and ℓ_1-ℓ_2 models are obtained by the half-thresholding method <cit.> and the forward-backward splitting (FBS) method [We use the Matlab code at https://github.com/mingyan08/ProxL1-L2 with default parameter settings for ℓ_1-ℓ_2 (FBS).] <cit.>, respectively. The experimental setup follows that of <cit.>. Specifically, we consider a signal x of length n = 512 with s = 130 non-zero elements. The number of measurements m varies from 230 to 330. The sensing matrix A is generated by normalizing each column of a Gaussian matrix to be zero-mean and unit norm. The standard deviation of the white Gaussian noise is set to be σ = 0.1. We use the mean-square-error (MSE) to quantify the recovery performance, i.e., MSE = x^⋆ - x_g_2, where x^⋆ is the output of the corresponding algorithm. If the support of the ground truth signal is known, denoted as Λ = supp(x_g), one can compute the MSE of an oracle solution as σ^2 tr((A_Λ^TA_Λ)^-1) as a benchmark, where A_Λ denotes the columns of the matrix A restricted to Λ. In our experiments, the parameters for the ℓ_1-ℓ_2 and ℓ_1 models are chosen to be the same as those in <cit.>. The parameters of the ℓ_1/2 model are determined as suggested in <cit.> and the parameter μ is set to be 0.05. For the proposed PPGA_ML and PPGA_NL, we set ϵ=0.05√(m), d = 10^7, η=0.5, a=10^-8 and N=4. Since the proposed algorithms are designed for solving problem (<ref>), which is an approximation of model (<ref>), the smaller the λ chosen in model (<ref>), the closer model (<ref>) is to model (<ref>). Hence, we initially set λ=0.01, then halve it every ten iterations and fix it once the total number of iterations exceeds 500. Figure <ref> presents the average MSE of the compared algorithms over 20 instances. One can observe that the ℓ_1/ℓ_2 based model (solved by PPGA_ML and PPGA_NL) achieves the smallest MSE among all the competing sparsity-promoting models when m≥ 250. Both PPGA_ML and PPGA_NL perform very similarly in this experiment. As considered in <cit.>, we show in Table <ref> the mean and standard deviation of the MSE as well as the computational time of the compared algorithms at m = 238, 250, 276, 300. It is demonstrated in Table <ref> that the proposed algorithms spend much less CPU time to converge while achieving smaller MSE than the other compared models when m=250, 276, 300. § CONCLUSIONS In this paper, we study algorithms for solving ℓ_1/ℓ_2 sparse signal recovery problems. A commonly used ℓ_1/ℓ_2 model for Gaussian noise compressed sensing is to minimize ℓ_1/ℓ_2 over an elliptic constraint. We first propose to smooth the indicator function on the elliptic constraint by a smooth fractional function. It is proven that stationary points of the proposed model tend to those of the elliptically constrained problem as the smoothing parameter tends to zero. Inspired by the parametric method of single-ratio fractional programming, we develop a parameterized proximal-gradient algorithm (PPGA) and its line search counterpart (PPGA_L) for solving the proposed problem. The global convergence of the sequences generated by PPGA and PPGA_L with monotone objective values is established. Numerical experiments demonstrate that our proposed algorithms are efficient for ℓ_1/ℓ_2 sparse signal recovery in both the noiseless and noisy cases, compared with the state-of-the-art methods. § ACKNOWLEDGMENTS We would like to thank Dr Chao Wang for his discussion on the ℓ_1/ℓ_2_box Matlab code.
http://arxiv.org/abs/2307.01732v1
20230704140310
Sparse Graphs of Twin-width 2 Have Bounded Tree-width
[ "Benjamin Bergougnoux", "Jakub Gajarský", "Grzegorz Guśpiel", "Petr Hliněný", "Filip Pokrývka", "Marek Sokołowski" ]
math.CO
[ "math.CO", "cs.DM", "cs.DS", "05C75, 68R10" ]
Twin-width is a structural width parameter introduced by Bonnet, Kim, Thomassé and Watrigant [FOCS 2020]. Very briefly, its essence is a gradual reduction (a contraction sequence) of the given graph down to a single vertex while maintaining limited difference of neighbourhoods of the vertices, and it can be seen as widely generalizing several other traditional structural parameters. Having such a sequence at hand allows to solve many otherwise hard problems efficiently. Our paper focuses on a comparison of twin-width to the more traditional tree-width on sparse graphs. Namely, we prove that if a graph G of twin-width at most 2 contains no K_t,t subgraph for some integer t, then the tree-width of G is bounded by a polynomial function of t. As a consequence, for any sparse graph class C we obtain a polynomial time algorithm which for any input graph G ∈ C either outputs a contraction sequence of width at most c (where c depends only on C), or correctly outputs that G has twin-width more than 2. On the other hand, we present an easy example of a graph class of twin-width 3 with unbounded tree-width, showing that our result cannot be extended to higher values of twin-width. § INTRODUCTION Twin-width is a relatively new structural width measure of graphs and relational structures introduced in 2020 by Bonnet, Kim, Thomassé and Watrigant <cit.>. Informally, the twin-width of a graph measures how diverse the neighbourhoods of the graph vertices are. For instance, cographs—the graphs which can be built from singleton vertices by repeated operations of a disjoint union and taking the complement— are exactly the graphs of twin-width 0, which means that the graph can be brought down to a single vertex by successively identifying twin vertices. (Two vertices x and y are called twins in a graph G if they have the same neighbours in V(G) ∖{x,y}.) Hence the name, twin-width, for the parameter. Importance of this new concept is clearly witnessed by numerous recent papers on the topic, such as the follow-up series <cit.> and more related research papers represented by, e.g., <cit.>. In particular, twin-width, as a structural width parameter, has naturally algorithmic applications in the parameterized complexity area.
Among the most important ones we mention that the first order (FO) model checking problem – that is, deciding whether a fixed first-order sentence holds in an input graph – can be solved in linear FPT-time <cit.>. This and other algorithmic applications assume that a contraction sequence of bounded width is given alongside the input graph, since in general we do not know how to construct such a sequence efficiently. Consequently, finding an efficient algorithm for computing the twin-width of an input graph G together with its twin-width decomposition (known as a contraction sequence of G) is a central problem in the area. Currently, very little is known about this problem. It is known that one can check whether a graph has twin-width 0 or 1 and compute the corresponding contraction sequence in polynomial time <cit.>. On the other hand, in general the problem of deciding the exact value of twin-width is NP-hard, and in particular, deciding whether a graph has twin-width 4 is NP-hard <cit.>. This means that even for fixed k, the best one can hope for is an approximation algorithm which for a given input graph G either correctly outputs that G has twin-width more than k or produces a contraction sequence of G of width at most k', where k' is some fixed number with k < k'. However, to the best of our knowledge, no such efficient algorithm is currently known, even for graph classes of twin-width 2. The importance and popularity of twin-width largely stem from the fact that it generalizes many well-known graph-theoretic concepts. For example, graph classes of bounded clique-width, planar graphs and more generally graph classes defined by excluding a fixed minor, posets of bounded width and various subclasses of geometric graphs have been shown to have bounded twin-width <cit.>. In many cases, the proof of having bounded twin-width is effective, meaning that for such a class there exist d and a polynomial algorithm which takes a graph G as input and outputs a contraction sequence of G of width at most d. Apart from establishing that some well-known graph classes have bounded twin-width, there are also results relating twin-width to various graph classes from the other direction – one considers graph classes of bounded twin-width which are restricted in some sense, and proves that such classes fall within some well-studied framework. Prime examples of this approach are results stating that K_t,t-free graph classes of bounded twin-width have bounded expansion <cit.> or that stable graph classes of bounded twin-width have structurally bounded expansion <cit.>. Results of this form allow us to use the structural and algorithmic tools which exist for (structurally) bounded expansion and apply them to restricted classes of bounded twin-width. Our research is motivated by the following conjecture, which also fits into this line of research and which we learned about from É. Bonnet. Let C be a class of graphs of twin-width at most 3 such that there exists t such that no G ∈ C contains K_t,t as a subgraph. Then C has bounded tree-width. For graph classes of twin-width at most one, one can easily argue that Conjecture <ref> is true as follows. It is known <cit.> that if a graph class C is such that every G ∈ C has a contraction sequence in which red components (see Section <ref>) have bounded size, then C has bounded clique-width. Since in a contraction sequence of width one each red component has size one, this implies that graph classes of twin-width one have bounded clique-width.
In combination with the fact that graph classes of bounded clique-width which exclude K_t,t as a subgraph have bounded tree-width, this proves the result. However, for twin-width 2 or 3 we cannot assume that red components of trigraphs occurring in contraction sequences have bounded size, and so to attack Conjecture <ref> one has to employ a more fine-tuned approach by directly analyzing contraction sequences. *Our contribution We confirm Conjecture <ref> for the case of graph classes of twin-width 2 and disprove the conjecture for graph classes of twin-width 3. Namely, on the positive side, we prove the following. Let t be an integer and let C be a class of graphs of twin-width at most 2 such that no G ∈ C contains K_t,t as a subgraph. Then the tree-width of C is bounded by a polynomial function of t, namely at most O(t^20). Theorem <ref> allows us to apply the existing tools for bounded tree-width to sparse classes of twin-width 2, making many hard problems efficiently solvable on such classes. Moreover, Theorem <ref> also leads to the following algorithmic corollary. Let C be a class of graphs such that there exists t such that no G ∈ C contains K_t,t as a subgraph. Then there exists a constant c depending on C, and a polynomial-time algorithm which for every G ∈ C either outputs a contraction sequence of G of width at most c, or correctly outputs that G has twin-width more than 2. On the negative side, we exhibit an example of a graph class C of twin-width 3 such that no G ∈ C contains a K_2,2≃ C_4 subgraph, and that C has unbounded tree-width. § RELATED DEFINITIONS AND TOOLS We use standard graph-theoretic terminology and notation. All graphs considered in this paper are finite and simple, i.e., without loops and multiple edges. For the sake of completeness, we include the following folklore definition and tool. A tree-decomposition of a graph G is a pair (X,T) where T is a tree, whose vertices we call nodes, and X = {X_i | i ∈ V(T)} is a collection of subsets of V(G) such that * ⋃_i ∈ V(T) X_i = V(G), * for each edge vw ∈ E(G), there is i ∈ V(T) such that v, w ∈ X_i, and * for each v ∈ V(G) the set of nodes {i | v ∈ X_i} forms a subtree of T. The width of a tree-decomposition ({X_i | i ∈ V(T)}, T) is equal to max_i ∈ V(T){|X_i| - 1}. The tree-width of a graph G is the minimum width over all tree-decompositions of G. For a graph G and A, B ⊆ V(G), an A-B path is a path with one endvertex in A and the other endvertex in B. We will use Menger's theorem (see for example <cit.>). Let G be a graph and A,B ⊆ V(G). The minimum size of a set S ⊆ V(G) such that there is no A-B path in G - S is equal to the maximum number of disjoint A-B paths in G. For an integer N, a graph H is called an N× N wall (or hexagonal grid) if it consists of N disjoint paths P_1,…,P_N, each P_i with vertices v^i_1,…,v^i_N in this order in P_i, together with the edges given by the following rule: if both i,j ∈{1,…, N} have the same parity and i < N, then v^i_j is adjacent to v^i+1_j. For an integer N, a graph H is called an N× N cubic mesh if the maximum degree of H is 3 and the following holds: we can write H=Q_1∪ Q_2, where each Q_i, i=1,2, is formed as a vertex-disjoint union of N paths, the ends of paths of Q_i are disjoint from V(Q_3-i) for i=1,2, and every component–path of Q_1 intersects every component–path of Q_2 in precisely one (common) subpath. These paths of Q_1 (resp. of Q_2) are called the rows (resp. columns) of the mesh H, and the vertices of H of degree 3 are called the branching vertices of H.
The intersection of any row and any column of H must be a subpath of nonzero length since Δ(H)=3, and it contributes two branching vertices. Any subdivision of the classical (2N+2)×(2N+2) wall (hexagonal grid) contains an N× N cubic mesh as a subgraph. See <Ref>. The notion of twin-width can in general be considered over arbitrary binary relational structures of a finite signature, but here we will define it and deal with it for only finite simple graphs. It is based on the following concept. A trigraph is a simple graph G in which some edges are marked as red, and with respect to the red edges only, we naturally speak about red neighbours and red degree in G (while the terms neighbour and degree without `red' refer to all edges of G inclusive of red ones, and edges which are not red are also called black). For a pair of (possibly not adjacent) vertices x_1,x_2∈ V(G), we define a contraction of the pair x_1,x_2 as the operation creating a trigraph G' which is the same as G except that x_1,x_2 are replaced with a new vertex x_0 (said to stem from x_1,x_2) such that: * the (full) neighbourhood of x_0 in G' (i.e., including the red neighbours), denoted by N_G'(x_0), equals the union of the neighbourhoods N_G(x_1) of x_1 and N_G(x_2) of x_2 in G except x_1,x_2 themselves, that is, N_G'(x_0)=(N_G(x_1)∪ N_G(x_2))∖{x_1,x_2}, and * the red neighbours of x_0, denoted here by N_G'^r(x_0), inherit all red neighbours of x_1 and of x_2 and add those in N_G(x_1)Δ N_G(x_2), that is, N_G'^r(x_0)=(N_G^r(x_1)∪ N_G^r(x_2)∪(N_G(x_1)Δ N_G(x_2)))∖{x_1,x_2}, where Δ denotes the symmetric set difference. A contraction sequence of a trigraph G is a sequence of successive contractions turning G into a single vertex, and its width d is the maximum red degree of any vertex in any trigraph of the sequence. We say that G has or admits a d-contraction sequence if there is a contraction sequence of G of width at most d. The twin-width of a trigraph G is the minimum width over all possible contraction sequences of G. To define a contraction sequence and the twin-width of an ordinary graph G, we consider G as a trigraph with no red edges. In a summary, a graph has twin-width at most d, if and only if it admits a d-contraction sequence. For our purpose, the following “inverted” view of a contraction sequence will be useful. A partitioned graph is a graph G associated with an unordered vertex partition P=(P_1,…,P_m) of V(G). The partitioned trigraph of (G, P) is a trigraph H on the vertex set V(H)= P such that {P_1,P_2}∈ E(H) if and only if G contains an edge from P_1 to P_2, and such edge {P_1,P_2} is red if and only if not all pairs of P_1× P_2 are edges of G. An uncontraction sequence of an n-vertex graph G is a sequence of partitioned graphs (G, P^i) for i=1,…,n, where P^1={V(G)} and P^n is the partition of V(G) into singletons, and for 1<i≤ n the partition P^i is obtained from P^i-1 by splitting an arbitrary one part of P^i-1 into two. It is easy to observe that if G_0=G, G_1,…, G_n-1 is a contraction sequence of the n-vertex graph G, then for the trigraph G_n-k (on k vertices) we may form the corresponding vertex partition P^k=(P_1^k,…,P_k^k) of V(G) by setting P_i^k to be the set of vertices of G contracted into w_i∈ V(G_n-k), i=1,…,k. The partitioned trigraph of (G, P^k) is hence isomorphic to the trigraph G_n-k of the former contraction sequence, and these two possible approaches to twin-width in <Ref> and <Ref> exactly coincide. § PROOF OF THEOREM <REF> Our proof proceeds by contradiction. 
Thus, we assume that there is a K_t,t-free graph G of large tree-width which has twin-width at most 2. We then proceed in two steps: 3pt * Step I: Since G has large tree-width, it has to contain a subdivision of a large wall, and hence a large cubic mesh, as a subgraph. Using this and the assumption that G has an uncontraction sequence of width at most 2, we show that there has to be a point in time during the uncontraction sequence such that there are four parts X_1,X_2,X_3, X_4 which form a red path in the corresponding trigraph and there are many disjoint paths of G fully contained in X_1∪ X_2 ∪ X_3 ∪ X_4 with one endpoint in X_1 and end the other in X_4. * Step II: Using a carefully chosen invariant we show that the structure found in Step I can be maintained indefinitely during the subsequent steps of the uncontraction sequence (still of width at most 2). This yields a contradiction, since the final partition of the uncontraction sequence consists of singletons and there are no red edges. So, we actually prove the following alternative formulation of <Ref>. If G is a simple graph not containing a K_t,t subgraph, but containing a subgraph (not necessarily induced) H⊆ G which is an N× N cubic mesh for N=16·(13t)^2, then G is of twin-width at least 3. §.§ Proving Step I We start with a trivial claim which is crucial in our arguments: Assume a partitioned graph (G, P) such that G has no K_t,t subgraph, and X_1,X_2∈ P such that |X_1|,|X_2|≥ t. Then the partitioned trigraph of (G, P) cannot contain a black edge {X_1,X_2}. In other words, whenever G has an edge from X_1 to X_2, there is a red edge {X_1,X_2} in the partitioned trigraph of (G, P). The first step in the proof of <Ref> is precisely formulated and proved next. Let t be an integer. Assume that G is a simple graph not containing a K_t,t subgraph, but containing a subgraph (not necessarily induced) H⊆ G which is an N× N cubic mesh, where N=16·(13t)^2. If there is an uncontraction sequence (G, P^i) for i=1,…,|V(G)| of width at most 2, then there exists m∈{1,…,|V(G)|} such that the following holds: There are parts X_1,X_2,X_3,X_4∈ P^m which induce a red path in this order in the partitioned trigraph of (G, P^m), there is no edge between X_1 and X_4, and the set X:=X_1∪ X_2∪ X_3∪ X_4 induces in G a subgraph containing s≥ 4t pairwise vertex-disjoint paths from X_1 to X_4. Set k := 13t. Let m∈{1,…,|V(G)|} be the least index such that every part of P^m contains less than 4k^2 branching vertices of H. Then there is a part Z∈ P^m such that Z contains at least 2k^2 branching vertices of H (one of the two having resulted by the last splitting before P^m). If Z intersected less than k rows and less than k columns of H, by the condition on intersecting rows and columns we would have less than 2k^2 branching vertices in Z altogether. Hence, up to symmetry, we may assume that Z hits at least k rows of H. Moreover, since N= 16·(13t)^2 = 16· k^2, we have that Z contains branching vertices of less than 4k^2=N/4 rows of H. Let L_2, L_1 ,Z,R_1,R_2∈ P^m denote the (at most) four parts that are connected to Z by a red path of length ≤2 in the partitioned trigraph of (G, P^m). That is, we have a red path on, (L_2,L_1,Z,R_1,R_2) in this order (for simplicity, we silently ignore if some of the parts do not exist). Again, each of L_1,L_2,R_1,R_2 contains branching vertices of less than 4k^2=N/4 rows of H. Altogether, every row of H hit by Z has at least 2N-5· N/4=3· N/4>0 branching vertices not contained in ⋃ Z where Z={L_2,L_1,Z,R_1,R_2}. 
We have thus got, as subpaths of the rows hit by Z, a collection Q = {Q_1,…,Q_k} of k vertex-disjoint paths in G which connect Z to parts in P^m∖ Z. Let the paths in Q be chosen as inclusion-minimal and for any Q_i∈ Q let a_i and b_i be the endpoints of Q_i. We now proceed assuming that all L_1,L_2,R_1,R_2 have size at least t, the case when one of {L_1,L_2} or one of {R_1,R_2} is smaller than t is addressed below. Every path Q∈ Q, by its minimality, connects a vertex of Z to a vertex of Y∈ P^m∖ Z where Y is a neighbour (red or black) of the set Z in the partitioned trigraph of (G, P^m). For any X∈{L_2,L_1,Z,R_1,R_2}, since |X|>t, the union of the parts adjacent to X by black edges in the partitioned trigraph is of cardinality less than t, or we have a K_t,t subgraph. Together, at most 5t of the paths of Q in G may end in parts Y∈ P^m∖ Z which are not red neighbours of Z in the partitioned trigraph. Since we have at most two red neighbours of parts of Z among the parts of P^m∖ Z (namely, the “outside” red neighbours of L_2 and of R_2), up to symmetry, the (unique) red neighbour L_3∈ P^m∖ Z of L_2 contains ends of at least (k-5t)/2=4t of the paths in Q. In particular, L_3 has to exist, and L_3 is not adjacent to Z in the partitioned trigraph by <Ref>. We thus conclude by choosing X_1=L_3, X_2=L_2, X_3=L_1 and X_4=Z. Next, we address the case when exactly one of {L_1,L_2} or {R_1,R_2} contains W such that |W| < t. Without loss of generality assume that W ∈{R_1,R_2} and so both L_1 and L_2 have size at least t. If W = R_1, set Z':= {L_2,L_1,Z}, otherwise set Z':={L_2,L_1,Z,R_1}. Every path Q∈ Q, by its minimality, connects a vertex of Z to a vertex of Y∈ P^m∖ Z' where Y is a neighbour (red or black) of the set Z' in the partitioned trigraph of (G, P^m). For any X in Z, since |X|>t, the union of the parts adjacent to X by black edges in the partitioned trigraph is of cardinality less than t, or we have a K_t,t subgraph. Together, at most 4t of the paths of Q in G may end in parts Y∈ P^m∖ Z' which are not red neighbours of Z' in the partitioned trigraph, and at most t paths of Q can go through W. Ignoring all these at most 4t paths, all the remaining |Q| - 4t > 4t paths have to end in the red neighbor of L_2; call this neighbor L_3. We thus again conclude by choosing X_1=L_3, X_2=L_2, X_3=L_1 and X_4=Z. Finally, we consider the case when there is W_L ∈{L_1,L_2} with |A| < t and W_r ∈{R_1,R_2} with |W_R|<t. We will show that this leads to a contradiction. We first fix the choice of W_L and W_R more precisely. If both L_1 and L_2 have size less than t, then we choose L_1 as W_L and similarly, if both R_1 and R_2 have size less than t, we choose R_1 as W_R. Then in the path L_2 L_1 Z R_1 R_2 all parts between W_L and W_R have size at least t. Since each part X between W_L and W_R has size at least t, the union of the parts adjacent to X by black edges has size less than t, as otherwise we have a K_t,t as a subgraph. Thus, the union of W_L, W_R, and all black neighbors of (at most 3) parts between W_L an W_R has size at most 5t. This means that there are at most 5t vertices which separate {a_1,…,a_k} from {b_1,…, b_k}. Since there are k > 5t disjoint paths between A:={a_1,…,a_k} and B:={b_1,…, b_k}, this is a contradiction to Menger's theorem. From <Ref> it follows that the s paths from X_1 to X_4 claimed in <Ref> must each intersect also X_2 and X_3 (possibly many times there and back). §.§ Proving Step II For our convenience, we introduce the following notation. 
Given a graph G, an uncontraction sequence (G, P^i) for i = 1, …, |V(G)|, an integer j ∈{1, …, |V(G)|}, and X ∈ P^j, we define N_j^b(X) as the set of parts of P^j that have a black edge to X in the partitioned trigraph of (G, P^j); we call this set the black neighborhood of X in (G, P^j). Also, we set N_j^b(X) := ∑_Y ∈ N_j^b(X) |Y| to be the number of vertices of G in any part of the black neighborhood of X in (G, P^j). Let G be an arbitrary simple graph not containing a K_t,t as a subgraph, and let (G, P^i) for i=1,…,|V(G)| be an uncontraction sequence for G. Suppose that, for some m∈{1,…,|V(G)|}, there are parts X_1,X_2,X_3,X_4∈ P^m which induce a red path in this order in the partitioned trigraph of (G, P^m), there is no edge between X_1 and X_4, and the set X_1 ∪ X_2 ∪ X_3 ∪ X_4 induces in G a subgraph containing s ≥ 4t pairwise vertex-disjoint paths from X_1 to X_4. Then the width of this uncontraction sequence is greater than 2. For a contradiction, suppose that the width of the considered uncontraction sequence of G is at most 2. We are going to formulate an invariant which is true for (G, P^m), and which will remain true at every subsequent step of the uncontraction sequence. Since this invariant, at the same time, will preclude the finest partition into singletons, the assumed sequence of width ≤2 cannot exist. Invariant. At step j≥ m of the uncontraction sequence, in the graph (G, P^j) and its partitioned trigraph, the following holds. There are 4 parts X_1, X_2, X_3, X_4∈ P^j, each of size at least t, forming a red path in this order in the partitioned trigraph of (G, P^j), and parts X_1 and X_4 are not adjacent. Denote by s_j the maximum number of vertex-disjoint paths in G[X_1 ∪ X_2 ∪ X_3 ∪ X_4], starting in X_1 and ending in X_4. Then s_j + N_j^b(X_2) + N_j^b(X_3) ≥ 4t. Note that in the base case we have s_m ≥ 4t, so the invariant is trivially satisfied for j = m. Also, without loss of generality we can assume that all the vertex-disjoint paths in G[X_1 ∪ X_2 ∪ X_3 ∪ X_4] are inclusion-wise minimal; so in particular, each path contains exactly one (starting) vertex in X_1 and exactly one (ending) vertex in X_4. See also an informal illustration in <Ref>. Now suppose the invariant holds for some j ∈{m, …, |V(G)| - 1} and prove that it is also preserved after the j-th uncontraction. First observe that for each i ∈{1, 2, 3, 4}, G contains a bipartite clique with sides X_i and ⋃ N_j^b(X_i). Since |X_i| ≥ t and G does not contain K_t,t as a subgraph, we have N_j^b(X_i)≤ t - 1. Therefore, s_j ≥ 4t - (t - 1) - (t - 1) ≥ 2t. Each of the s_j disjoint paths in G[X_1 ∪ X_2 ∪ X_3 ∪ X_4] must intersect each of the sets X_1, X_2, X_3, X_4 (<Ref>), so actually |X_i| ≥ s_j ≥ 2t for each i ∈{1, 2, 3, 4}. We now analyze all the possible cases for an uncontraction step. Denote by x∈ P^j the part that is split into new parts y,z∈ P^j+1. Also select some s_j ≥ 2t inclusionwise-minimal pairwise vertex-disjoint paths P_1, …, P_s_j from X_1 to X_4 in G[X_1 ∪ X_2 ∪ X_3 ∪ X_4]. By minimality, <Ref> and <Ref>, each path has its first vertex in X_1, its second vertex in X_2, its penultimate vertex in X_3, and its last vertex in X_4. Also notice that there cannot be any edges between X_1 and X_3, or any edges between X_2 and X_4 (otherwise, <Ref> would apply and the red degree of X_2 or X_3 would be too large). Recall also there are no edges between X_1 and X_4. 
If x∉{X_1,X_2,X_3,X_4}, the black neighbourhood of X_2 and X_3 in the partitioned trigraph can only increase; also, s_j+1 = s_j as the vertex-disjoint paths in G[X_1 ∪ X_2 ∪ X_3 ∪ X_4] in (G, P^j) are preserved in the uncontraction step. Hence, the invariant is satisfied. Suppose x=X_1 or x=X_4. Without loss of generality, x=X_1 (or suppose the red path is actually X_4, X_3, X_2 ,X_1), as illustrated in <Ref>. Since at the j-th step, there were at least s_j ≥ 2t inclusion-wise minimal pairwise vertex-disjoint paths from X_1 to X_4 in G[X_1 ∪ X_2 ∪ X_3 ∪ X_4], at least t of these start in y or at least t of these start in z; again without loss of generality, assume that y contains at least t starts of the paths (so naturally |y| ≥ t). As each of those paths have its second vertex in X_2, we know that yX_2 is an edge in the partitioned trigraph of (G, P^j+1); and it must be a red edge, because both y and X_2 have size at least t and <Ref> applies. We now claim that the red path y, X_2, X_3, X_4 preserves the invariant in (G, P^j+1). Since y, z ⊆ X_1, both y and z are non-adjacent to X_3 and X_4. Observe also that z cannot be red-adjacent to X_2 in (G, P^j+1) as in this case, the red degree of X_2 would be at least 3. However, z can be either black-adjacent or non-adjacent to X_2. If z is non-adjacent to X_2, then z cannot contain the start of any of the s_j paths P_i as the second vertex of each such path is in X_2. Hence we do not lose any path P_i by removing z from X_1 ∪ X_2 ∪ X_3 ∪ X_4, so s_j+1 = s_j and the invariant still holds. If z is black-adjacent to X_2, then by a counting argument z contains at most |z| starts of the considered paths P_i; hence, s_j+1≥ s_j - |z|. However, the number of vertices of G in the black neighborhood of X_2 also increases by |z|, because z was split out of X_1 which was a red neighbour of X_2. Hence the inequality from the invariant is satisfied: s_j+1 + N_j+1^b(X_2) + N_j+1^b(X_3) ≥ (s_j - |z|) + ( N_j^b(X_2) + |z|) + N_j^b(X_3) ≥ 4t. It is also easy to verify the remaining conditions of the invariant. It remains to consider x=X_2 or x=X_3. Without loss of generality, x=X_2 (or suppose the red path X_4,…,X_1). We first prove that at least one of y, z is red-adjacent to X_1 in (G, P^j+1). Assume otherwise; then y is either non-adjacent to X_1 (and then |N_G(X_1) ∩ y| = 0) or black-adjacent to X_1 (and then |y| ≤ t - 1 by <Ref>, so |N_G(X_1) ∩ y| ≤ t - 1). Analogously, |N_G(X_1) ∩ z| ≤ t - 1. Since X_2 = y ∪ z, we conclude that |N_G(X_1) ∩ X_2| ≤ 2t - 2. But each of s_j ≥ 2t vertex-disjoint paths P_1, …, P_s_j have two consecutive vertices in X_1 and X_2, so |N_G(X_1) ∩ X_2| ≥ s_j – a contradiction. Hence at least one of y, z is red-adjacent to X_1. Repeating the same argument with X_3 instead of X_1 implies that at least one of y, z is red-adjacent to X_3. Additionally, X_3 is already red-adjacent to X_4, so exactly one of y, z is red-adjacent to X_3. Without loss of generality, assume that y is red-adjacent to X_3. We now consider two separate cases, depending on whether y is red-adjacent to X_1. First assume that y is red-adjacent to X_1, as illustrated in <Ref>. We claim that the red path X_1, y, X_3, X_4 preserves the invariant in (G, P^j+1). Observe that z cannot be red-adjacent to y due to the red degree condition of y in (G, P^j+1). If z is non-adjacent to y and X_3, no inclusion-wise minimal path from X_1 to X_4 in G[X_1 ∪ X_2 ∪ X_3 ∪ X_4] can be routed through z as all neighbors of z in X_1 ∪ X_2 ∪ X_3 ∪ X_4 are in X_1. 
Then s_j+1 = s_j and it is easy to verify the remaining parts of the invariant. Next suppose z is black-adjacent to X_3; then by <Ref> we have |z| ≤ t-1, hence |y| = |X_2| - |z| ≥ t. Also observe that at most |z| paths P_i intersect z, so s_j+1≥ s_j - |z|. Therefore, the inequality from the invariant is preserved: s_j+1 + N_j+1^b(X_2) + N_j+1^b(X_3) ≥ (s_j - |z|) + N_j^b(X_2) + ( N_j^b(X_3) + |z|) ≥ 4t, and the remaining conditions of the invariant are easy to verify. Finally suppose z is non-adjacent to X_3 and black-adjacent to y. Then each path P_i must intersect y, hence |y| ≥ s_j > t. Moreover, s_j+1≥ s_j - |z| and again N_j+1^b(X_2) ≥ N_j^b(X_2) + |z|, so the inequality from the invariant is preserved; once again, the remaining parts of the invariant follow. It remains to consider the case where y is not red-adjacent to X_1 (i.e., either non-adjacent or black-adjacent to X_1), as illustrated in <Ref>. We claim that z, y, X_3, X_4 is a red path preserving the invariant in (G, P^j+1). We first prove that |y|, |z| ≥ t. First assume that |y| ≤ t - 1. Then |z| = |X_2| - |y| ≥ 2t - (t - 1) ≥ t, so by <Ref>, z is not black-adjacent to X_3; and it is not red-adjacent to X_3 as X_3 is already red-adjacent to y and X_4. Thus z is non-adjacent to X_3, so each path P_i must intersect y (this is because X_2 = y ∪ z and each path P_i contains an edge between X_2 and X_3). But then |y| ≥ s_t > t - 1 – a contradiction. Similarly, if |z| ≤ t - 1, then |y| = |X_2| - |z| ≥ t, so by <Ref>, y is not black-adjacent to X_1; and it is not red-adjacent to X_1 by our assumption. So y is non-adjacent to X_1 and each path P_i must intersect z – a contradiction. Hence, |y|, |z| ≥ t. Applying <Ref> three times, we find that y is non-adjacent to X_1; z is non-adjacent to X_3; and y is not black-adjacent to z. Since y must be adjacent to z (otherwise X_1 and X_4 would be in separate connected components of G[X_1 ∪ X_2 ∪ X_3 ∪ X_4]), we get that y is red-adjacent to z. Hence the subgraph of the partitioned trigraph of (G, P^j+1) induced by {X_1, y, z, X_3, X_4} contains red edges X_1y, yz, zX_3 and X_3X_4 and no black edges. Therefore, the second vertex of each path P_i is actually in y; so we can remove a prefix from each path P_i so that each path starts in y, finishes in X_4 and is contained in y ∪ z ∪ X_3 ∪ X_4. This witnesses that there exist s_j vertex-disjoint paths from y to X_4 in G[y ∪ z ∪ X_3 ∪ X_4], so s_j+1≥ s_j. Moreover, N_j+1^b(z) = N_j^b(X_2) (as the black neighborhoods of y and z in (G, P^j) are equal to the black neighborhood of X_2 in (G, P^j)), and N_j+1^b(X_3) = N_j^b(X_3) (as the black neighborhood of X_3 did not change during the uncontraction). We conclude that s_j+1 + N_j+1^b(z) + N_j+1^b(X_3)≥ 4t, as required, and all the satisfaction of the remaining parts of the invariant is clear. §.§ Concluding the Main Proof Observe that the basic assumptions of <Ref> are the same as those of <Ref>, and the assumptions of <Ref> follow from <Ref>. Hence, as detailed in the formal proof below, <Ref> and <Ref> together imply <Ref>. To extend this to a proof of <Ref>, we use the following tool – an excluded-grid theorem for tree-width in the currently strongest published formulation (modulo polylog factors which we have “rounded up” for simplicity): There is a function f(n)∈ O(n^10) such that, for every positive integer N, if a graph G is of tree-width at least f(N), then G contains a subdivision of the N× N wall as a subgraph. Let f be the function of <Ref>, and choose f_1(t)=f(2·16·(13t)^2+2)∈ O(t^20). 
If our graph G has tree-width at least f_1(t), then G contains a subdivision of the (2·16·(13t)^2+2)×(2·16·(13t)^2+2) wall by <Ref>. Consequently, by <Ref>, G contains a subgraph which is a (16·(13t)^2)×(16·(13t)^2) cubic mesh. We now invoke <Ref>, assuming G admits an uncontraction sequence of width at most 2; so, we obtain the parts X_1,X_2,X_3,X_4 as claimed by <Ref> and required by <Ref>. By the subsequent invocation of <Ref>, we immediately arrive at a contradiction to having the uncontraction sequence of width at most 2. Consequently, the tree-width less than f_1(t)=f(2·16·(13t)^2+2)∈ O(t^20). Let t be an integer and C a graph class such that no G ∈ C contains K_t,t as a subgraph. Let f_1 be the polynomial function from (the proof of) <Ref> and set k:=f_1(t), which is a constant depending on C. Let G ∈ C be the input graph. We first use the linear time algorithm of Bodlaender <cit.> to test whether G has tree-width at most k. If the answer is no, then we know by <Ref> that G cannot have twin-width at most 2. Otherwise, G has tree-width at most k, and we easily turn the decomposition into a branch-decomposition of width at most k+1. This directly gives a boolean-width decomposition of width at most k+1 <cit.> (with virtually the same decomposition tree). Finally, by the result of <cit.>, from a boolean-width decomposition of G of width k+1 one can obtain a contraction sequence of width at most 2^k+2-1 and an easy inspection of the proof shows that this can be done in polynomial time. Thus, in the end (with respect to the fixed class C) we compute in polynomial time a contraction sequence of G of width at most 2^poly(t). If the class C and implicit t were to be considered as parameters, the discussed algorithm would have an FPT runtime. §.§ Case of Twin-width 3 Lastly, we show that in the class of graphs of twin-width 3, no upper bound on the tree-width is possible even if we exclude K_2,2. For every positive integer N, there exists an (N^2+N)-vertex graph with no K_2,2 subgraphs whose twin-width is at most 3 and tree-width is at least N. Let N be a positive integer. We take G the graph obtained from the disjoint union of N paths P_1,…,P_N of length N, and for each i∈ [N], we add a new vertex adjacent to the i-th vertex of each path P_1,…,P_N. See <Ref> for an illustration. Obviously G has N^2+N vertices and no K_2,2 as subgraph. Observe that by contracting each path P_1,…,P_N, we obtain the complete bipartite graph K_N,N. As G admits K_N,N as a minor, we deduce that its tree-width is at least N. We claim that the twin-width of G is at most 3. We can assume without loss of generality that the edges of P_1 are red. We contract P_1 and P_2 by iteratively contracting their i-th vertices as illustrated in <Ref>. The maximum red degree of each encountered trigraph is 3 and we end up with a trigraph isomorphic to G-P_2. We can repeat this operation to end up with a trigraph isomorphic to G- (P_2∪…∪ P_N). Then we can contract each pendant vertex with its unique neighbor to obtain a red path whose twin-width is obviously at most 2. We conclude that the twin-width of G is at most 3. § CONCLUSIONS We have shown that sparse classes of graphs of twin-width 2 have bounded tree-width. This means that the existing rich machinery of structural and algorithmic tools for tree-width is applicable to such graph classes. One might wonder how one can relax the requirement on C being K_t,t-free to relate graphs of twin-width 2 to other well-studied graph notions. 
One could even try to drop any restrictions altogether and conjecture that any class of twin-width at most 2 has bounded clique-width. This cannot be true, since the class of unit interval graphs has twin-width 2 and unbounded clique-width <cit.>. However, we conjecture the following. Let C be a stable class of graphs of twin-width at most 2. Then C has bounded clique-width. Here being stable means that there exists k such that no G ∈ C contains a half-graph of order k as a semi-induced subgraph. We refer to <cit.> for details about stability and a discussion on how it relates to tree-width, clique-width and twin-width.
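To make the construction in the proof above concrete (N disjoint paths on N vertices each, plus one new vertex per index i joined to the i-th vertex of every path), the following Python sketch builds the graph and checks that it has N^2+N vertices and no K_{2,2} subgraph; the code and all identifiers in it are our own illustration, not part of the paper.

from itertools import combinations

def build_graph(N):
    # N disjoint paths P_1..P_N on N vertices each, plus an apex a_i per index i
    # joined to the i-th vertex of every path.
    adj = {}
    def add_edge(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for j in range(N):                       # path P_j
        for i in range(N - 1):
            add_edge(("p", j, i), ("p", j, i + 1))
    for i in range(N):                       # apex a_i
        for j in range(N):
            add_edge(("a", i), ("p", j, i))
    return adj

def has_K22(adj):
    # a graph contains K_{2,2} as a subgraph iff some pair of vertices
    # has at least two common neighbours
    return any(len(adj[u] & adj[v]) >= 2 for u, v in combinations(adj, 2))

G = build_graph(6)
print(len(G), has_K22(G))    # 42 vertices (= N^2 + N), False

Iteratively contracting the i-th vertices of two paths, as in the proof, mirrors the contraction sequence of red degree at most 3 described above.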
http://arxiv.org/abs/2307.01040v2
20230703141859
Möbius Homology
[ "Amit Patel", "Primoz Skraba" ]
math.AT
[ "math.AT", "cs.CG", "math.CO", "math.CT" ]
arrows L>l<C>c<1]Amit Patel2]Primoz Skraba[1]Department of Mathematics, Colorado State University[2]School of Mathematical Sciences, Queen Mary University LondonMöbius Homology [ =============== This paper introduces Möbius homology, a homology theory for representations of finite posets into abelian categories. While the connection between poset topology and Möbius functions is classical, we establish a direct connection between poset topology and Möbius inversions. More precisely, the Möbius homology categorifies the Möbius inversion because its Euler characteristic is equal to the Möbius inversion of the dimension function of the representation. We also introduce a homological version of Rota's Galois Connection Theorem which relates the Möbius homology over two posets connected by a Galois connection. Our main application is to persistent homology over general posets. We show that under one definition, the persistence diagram is an Euler characteristic over a poset of intervals and hence Möbius homology is a categorification of the persistence diagram. This provides a new invariant for persistent homology over general posets. Finally, we use our homological Rota's Galois Connection Theorem to prove several results about the persistence diagram.This work is partially funded by the Leverhulme Trust grant VP2-2021-008. § INTRODUCTION Let be an abelian group and P a finite poset. For all functions m : P →, there exists a unique function ∂ m : P → such that for all b ∈ P, m(b) = ∑_a : a ≤ b∂ m(a). The function ∂ m is called the Möbius inversion of m. Often the function m is the shadow of a richer algebraic structure. That is, there is often an abelian category and a P-module, i.e., a functor, M : P → whose dimension function is m. Here, the dimension function is the assignment to every b ∈ P the representative [ M(b) ] of the object M(b) in the Grothendieck group of . In this case, it is natural to ask for an invariant of M that decategorifies to the Möbius inversion ∂ m. This paper introduces a homology theory for P-modules we call Möbius homology. Every module M gives rise to a simplicial cosheaf over the order complex of P. We define the Möbius homology of M at an element b ∈ P, denoted _∗(b; M), as the homology of the simplicial cosheaf localized to b. Our first result (Theorem <ref>) states that the Euler characteristic of the Möbius homology is the Möbius inversion. That is, for all b ∈ P, ∂ m(b) = ∑_d ≥ 0 (-1)^d [ _d (b; M) ]. Our second result (Theorem <ref>) is a homological version of Rota's Galois connection theorem. Rota's theorem is an important tool for studying and computing Möbius inversions, and likewise we use our result to prove several properties of Möbius homology. Previous Work Poset topology is the study of the order complex associated to a poset. It originates in Rota's seminal 1964 paper on Möbius functions <cit.> and is now a rich subject motivated by questions from diverse fields <cit.>. Interest in the Möbius function stems from its connection to the Euler characteristic of the order complex <cit.>. However, a topological approach to the Möbius inversion was never fully developed. To the best of our knowledge, Backlawski was the first to study the simplicial sheaf associated to a P-module in his definition of Whitney homology for geometric lattices <cit.>. In a followup paper, Backlawski studies Galois connections between these sheaves but falls short of making a clear connection to Möbius inversions <cit.>. 
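As a concrete illustration of the defining relation m(b) = Σ_{a ≤ b} ∂m(a), the following Python sketch computes the Möbius inversion of an integer-valued function on a small poset by peeling off the values of strictly smaller elements; the toy poset, the values of m, and all identifiers are arbitrary choices of ours, used only to make the recursion explicit.

def mobius_inversion(leq, m):
    # leq[b] is the set of elements a with a <= b (including b itself);
    # returns the unique dm with m(b) = sum of dm(a) over all a <= b.
    dm = {}
    for b in sorted(m, key=lambda x: len(leq[x])):   # process in a linear extension
        dm[b] = m[b] - sum(dm[a] for a in leq[b] if a != b)
    return dm

# toy poset with a <= b and a <= c (b and c incomparable)
leq = {"a": {"a"}, "b": {"a", "b"}, "c": {"a", "c"}}
m = {"a": 2, "b": 5, "c": 3}                 # e.g. dimensions of M(a), M(b), M(c)
dm = mobius_inversion(leq, m)
print(dm)                                    # {'a': 2, 'b': 3, 'c': 1}
assert all(m[b] == sum(dm[a] for a in leq[b]) for b in m)

When m is the dimension function of a P-module, these integers are exactly what the Möbius homology introduced below categorifies.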
Application: Persistent Homology Our discovery of Möbius homology is motivated by questions in persistent homology. Traditionally, persistent homology starts with a family of topological spaces parameterized by a finite, totally ordered poset P. Apply homology using field coefficients and the result is a P-module of vector spaces M_∗ : P →$⃗, one for each dimension. The persistence diagram of eachM_∗is a complete combinatorial invariant that encodes the birth and death of its generators alongP. There are many equivalent definitions of the persistence diagram. The definition proposed by Cohen-Steiner, Edelsbrunner, and Harer uses an inclusion-exclusion formula <cit.>. From this viewpoint, the persistence diagram ofM_∗is the Möbius inversion of a-valued function on the poset of intervals ofP<cit.>. There are now two well developed generalizations of this definition to modules over any finite poset. The persistence diagram of Kim and Mémoli is a Möbius inversion of the rank invariant of the module <cit.> whereas the persistence diagram of Gülem, McCleary, and Patel is the Möbius inversion of the birth-death invariant of the module <cit.>. WhenPis not totally ordered, neither are complete invariants of the module. We show how Möbius homology categorifies the birth-death approach and is a stronger invariant than simply the Möbius inversion. Finally, we demonstrate how the homological version of Rota's Galois connection theorem helps in computing persistent Möbius homology. § BACKGROUND This section provides a brief introduction to Möbius inversions and simplicial cosheaves. We intentionally avoid the language of derived functors in our treatment of cosheaf homology in the hope of reaching a wider audience. §.§ Möbius Inversion LetPbe a finite poset. Fora ≤ c, the (closed) interval[a,c] := { b ∈ P : a ≤ b ≤ c }. Let Pbe the set of all intervals. The -incidence algebra onP, denoted(P), is the set of all functionsα : P →with three operations: scaling, addition, and multiplication. The first two are clear. Multiplication of two functionsαandβis(α∗β)[a,c] := ∑_b:a ≤ b ≤ cα[a,b] ·β[b,c].The multiplicative identity is [a,b] = 1 if a = b 0 otherwise. The zeta function isζ [a,b] = 1, for alla ≤ b. The zeta function is invertible, and its inverse, denotedμ, is the Möbius function. Let P be a finite poset and an abelian group. The Möbius inversion of any function f : P → is the function ∂ f : P → defined as ∂ f(c) = ∑_b: b ≤ c f(c) ·μ[b,c]. The Möbius inversion is the unique function satisfying the following equation for allc ∈ P: ∑_b: b ≤ c∂ f(b) = ∑_b: b ≤ c∂ f(b) ·ζ[b,c] = ∑_b: b ≤ c ( ∑_a: a ≤ b f(a) ·μ[a,b] ) ·ζ[b,c] = ∑_a: a ≤ c f(a) ( ∑_b : a ≤ b ≤ cμ[a,b] ·ζ [b,c] ) = ∑_b: b ≤ c f(b) ·[b,c] = f(c). That is,∂ fis the derivative off. §.§ Simplicial Cosheaves LetKbe a finite simplicial complex. We writeτ≥σto meanτis a coface ofσandτ >_1 σto meanτ = σ +1. We also think ofKas a category whereτ→σwheneverτ≥σ. For the moment, letbe any abelian category. A simplicial cosheaf over K valued in is a functor A : K →. We now construct the chain complexC_∙ (K; A)associated to a simplicial cosheafA. Orient every simplex ofK. Thed-th chain object isC_d (K; A) := ⊕_σ: σ = dA(σ).Forτ >_1 σ, restrict the orientation onτto an orientation onσ. Write[τ:σ] = 1if the restriction agrees with the orientation onσ, and write[τ:σ] = -1if the restriction disagrees. The boundary operator∂_d : C_d (K; A) → C_d-1 (K; A)is generated summand-wise as follows. 
For ad-simplexτ, letι_τ : A(τ) → C_d(K; A)be the canonical morphism into the direct sum, and letπ_τ : C_d(K;A) →A(τ)be the canonical morphism out of the direct sum. Define the restriction of∂_dtoτas∂_d |_τ := ∑_σ: τ >_1 σ [τ:σ] ·( ι_σ∘A(τ≥σ) ∘π_τ).The full boundary operator is the sum∂_d := ∑_τ : τ = d∂_d |_τ.Of course, the ordinary boundary operator applied twice to any simplex is zero. This fact combined with commutativity ofA, implies∂_d-1∘∂_d = 0. Thus, we have a chain complex C_∙ (K; A) : ⋯[r] C_2(K; A) [r, "∂_2"] C_1 (K;A) [r, "∂_1"] C_0 (K; A ) [r, "∂_0"] 0. The d-th cosheaf homology of A, denoted H_d ( K; A), is the d-th homology object of the chain complex C_∙ (K; A). We are also interested in relative simplicial cosheaf homology. LetL ⊆ Kbe a subcomplex, and letB : L →be the restriction of the cosheafAtoL. The associated relative chain complex is C_∙ (K, L; A) : ⋯[r] C_2(K; A)C_2(L; B)[r, "∂_2"] C_1 (K;A)C_1 (L;B)[r, "∂_1"] C_0 (K; A )C_0 (L; B )[r, "∂_0"] 0. The d-th relative cosheaf homology of A with respect to the subcomplex L ⊆ K, denoted H_d ( K, L; A), is the d-th homology object of the chain complex C_∙ (K, L; A). We now explore the Euler characteristic of a simplicial cosheaf valued in an abelian category. Assumeis equivalent to a small category. Denote by[A]the isomomorphism class of an objectAin. The Grothendieck group of , denoted K(), is the free abelian group generated by the set of isomorphism classes of objects in with the relation [B] = [A] + [C] for every short exact sequence 0 → A → B → C → 0 in . We now give three working examples of abelian categories along with their Grothendieck groups. The calculation of each group is left to the reader. If $⃗ is the category of finite dimensional-vector spaces, for some fixed field, thenK()⃗≅. For an objectA, the corresponding element[A] ∈ K()⃗is the dimension ofA. Let () be the category of endomorphisms on finite dimensional complex vector spaces. That is, an object is an endomorphism ϕ : ^m →^m, and a morphism from an object ϕ : ^m →^m to an object ψ : ^n →^n is a linear map f : ^m →^n such that f ∘ϕ = ψ∘ f. Its Grothendieck group, K( () ), is isomorphic to ⊕_λ∈. For an object A, the corresponding element [A] ∈ K( () ) is the map → that assigns to every λ∈ the number of times λ occurs as an eigenvalue of A. Let be the category of finite abelian groups. Its Grothendieck group, K(), is isomorphic to ⊕_p prime. For an object A, the corresponding element [A] ∈ K() is the assignment to each prime the number of times it occurs in the prime decomposition of A. For example, if A is / 16, then [A] is the assignment 2 ↦ 4. The Euler characteristic of a simplicial cosheaf A : K →, denoted χ(K; A), is the following element of K(): χ (K; A) := ∑_d ≥ 0∑_σ : (σ) = d (-1)^d [ A(σ) ]. If we think of[ H_i (K; A) ] ∈ K()as thei-th “Betti number” ofA, then the Euler characteristic ofAis the familiar signed sum of Betti numbers as follows. χ (K; A) = ∑_d ≥ 0 (-1)^d [H_d(K;A) ]. Recall the chain complex C_∙(K; A). Let Z_d (K; A) := ∂_d and B_d (K; A) := ∂_d+1. Consider the following short exact sequences: 0 [r] Z_d (K; A) [r, hookrightarrow] C_d (K; A) [r, twoheadrightarrow, "∂_d"] B_d-1 (K; A) [r] 0 0 [r] B_d (K ; A) [r, hookrightarrow] Z_d (K ; A) [r, twoheadrightarrow] H_d (K ;A) [r] 0. The two sequences imply [ C_d (K; A) ] = [ Z_d (K ;A)] + [B_d-1 (K ;A)] and [ H_d (K ; A ) ] = [ Z_d (K ; A) ] - [ B_d (K ; A) ] in K(). 
Thus χ(K; A) = ∑_d ≥ 0∑_σ: σ = d (-1)^d [ A(σ) ] = ∑_d ≥ 0 (-1)^d [ C_d (K ; A)] = ∑_d ≥ 0 (-1)^d [ Z_d (K ;A)] + ∑_d ≥ 0 (-1)^d[ B_d-1 (K ; A)] = ∑_d ≥ 0 (-1)^d ( [ Z_d (K ; A)] - [ B_d (K ; A)] ) = ∑_d ≥ 0 (-1)^d [ H_d (K; A)]. The relative Euler characteristic of A with respect to the subcomplex L ⊆ K, denoted χ(K,L; A), is χ(K,L; A) := ∑_d ≥ 0∑_σ∈ K ∖ L: (σ) = d (-1)^d [ A(σ) ]. Similarly, the relative Euler characteristic is the signed sum of relative Betti numbers. χ(K, L; A) = ∑_d ≥ 0 (-1)^d [ H_d (K, L; A) ]. § MÖBIUS INVERSIONS AND EULER CHARACTERISTICS We define two invariants associated to aP-module valued in an abelian category. The first is the Möbius inversion of the module's dimension function. The second is Möbius homology. Our main theorem (Theorem <ref>) shows that the Möbius inversion is the Euler characteristic of the Möbius homology. Let P be a finite poset and any small abelian category. A P-module is a functor M : P →. §.§ Dimension Function The first invariant considers only the objects ofMwhile forgetting all its morphisms. The dimension function of M is the function m : P → K() that assigns to every a ∈ P the element [ M(a) ] ∈ K(). We now look at two similar examples of the dimension functionmand its Möbius inversion∂ min two different categories. Consider the poset P and the two P-modules M and N in Figure <ref>. The dimension function m : P → for both modules are the same. The Möbius inversion ∂ m is ∂ m(b) = 1, ∂ m(a) = 0, and ∂ m(c) = 1.
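The computation above only uses ranks of boundary maps, so it can be checked numerically on a tiny example. The sketch below (our own sanity check, not from the paper) takes the constant cosheaf of 1-dimensional vector spaces on the hollow triangle and verifies that the alternating sum of chain dimensions equals the alternating sum of homology dimensions.

import numpy as np

# hollow triangle: vertices v0, v1, v2 and edges e01, e02, e12;
# constant cosheaf assigning a 1-dimensional vector space to every simplex.
# d1 : C_1 -> C_0 with the usual simplicial signs (columns = edges, rows = vertices)
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

n0, n1 = 3, 3                          # dim C_0 and dim C_1
r1 = np.linalg.matrix_rank(d1)

dim_H0 = n0 - r1                       # H_0 = C_0 / im(d1), since d0 = 0
dim_H1 = n1 - r1                       # H_1 = ker(d1), since there is no d2

print(dim_H0, dim_H1)                  # 1 1
assert n0 - n1 == dim_H0 - dim_H1      # chi via chains == chi via homology (= 0)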
http://arxiv.org/abs/2307.02858v2
20230706085029
Deep Ensemble Learning with Frame Skipping for Face Anti-Spoofing
[ "Usman Muhammad", "Md Ziaul Hoque", "Mourad Oussalah", "Jorma Laaksonen" ]
cs.CV
[ "cs.CV" ]
Deep Ensemble Learning with Frame Skipping for Face Anti-Spoofing Usman Muhammad12, Md Ziaul Hoque1, Mourad Oussalah1 and Jorma Laaksonen 2 1 Center for Machine Vision and Signal Analysis, University of Oulu, Finland 2 Department of Computer Science, Aalto University, Finland August 1, 2023 ======================================================================================================================================================================================================================================================= Face presentation attacks (PA), also known as spoofing attacks, pose a substantial threat to biometric systems that rely on facial recognition systems, such as access control systems, mobile payments, and identity verification systems. To mitigate the spoofing risk, several video-based methods have been presented in the literature that analyze facial motion in successive video frames. However, estimating the motion between adjacent frames is a challenging task and requires high computational cost. In this paper, we rephrase the face anti-spoofing task as a motion prediction problem and introduce a deep ensemble learning model with a frame skipping mechanism. In particular, the proposed frame skipping adopts a uniform sampling approach by dividing the original video into video clips of fixed size. By doing so, every nth frame of the clip is selected to ensure that the temporal patterns can easily be perceived during the training of three different recurrent neural networks (RNNs). Motivated by the performance of individual RNNs, a meta-model is developed to improve the overall detection performance by combining the prediction of individual RNNs. Extensive experiments were performed on four datasets, and state-of-the-art performance is reported on MSU-MFSD (3.12%), Replay-Attack (11.19%), and OULU-NPU (12.23%) databases by using half total error rates (HTERs) in the most challenging cross-dataset testing scenario. Face Presentation Attack Detection, Ensemble Learning, Frame Skipping, Recurrent Neural Network, Deep Learning § INTRODUCTION Face recognition technology has been used in various fields, such as automated teller machines (ATMs), retail and marketing, automatic border control, security and law enforcement <cit.>. However, there are various physical and digital attacks, such as 3D mask attacks <cit.>, deep fake face swapping <cit.>, face morphing <cit.>, face adversarial attacks <cit.>, and so on, that can limit the applications of facial recognition technology. Therefore, developing robust countermeasures to detect face representation attacks is critical to improving the security of biometric systems and ensuring the widespread adoption of this technology. The main issue for face PAD is to identify and analyze the unique visual and behavioral characteristics of both live and spoofed facial images because both classes contain spatiotemporal information. Recent studies show that video-based methods <cit.> can potentially be more effective than image-based methods <cit.>. This is due to the fact that video provides additional information and context that can help discriminate real and fake faces. For instance, a real face not only exhibits the appearance of the face but also small and subtle movements, such as eye blinks and facial expressions, which can reveal important cues about the authenticity of the face <cit.>. 
Moreover, temporal cues (i.e., the temporal consistency of the face over time) provide additional information, whereas a fake face may appear rigid and static. Despite the success of video-based methods, domain generalization is one of the main challenges that still need to be addressed in the face anti-spoofing domain. In our work, we define generalizability as the extent to which a model is trained and tuned on one or multiple databases and then applied to out-of-sample unseen data. There are many generalization-related research topics such as ensemble learning, data augmentation, transfer learning, meta-learning, and so on. In particular, domain generalization is important in face anti-spoofing because new datasets are expensive and time-consuming to collect and annotate in real-world scenarios. Moreover, testing data can come from different illumination conditions or environments than the training data. Therefore, in order to improve the generalization, various methods, such as adversarial learning <cit.>, meta pattern learning <cit.>, generative domain adaptation <cit.>, hypothesis verification <cit.>, or cross-adversarial learning <cit.>, have been proposed that train the model on multiple datasets, but the detection performance remains limited due to a substantial distribution difference among source domains. In a typical PAD video, not all the frames are equally important, and processing every frame is computationally expensive <cit.>. Thus, frame selection approaches have been often used to reduce the computational cost. For instance, Usman et al. <cit.> proposed to convert the video sequences into a single RGB frame based on the Gaussian weighting function. The authors claimed that frame aggregation can amplify motion cues, such as head movements, and surface edges. Another approach to frame selection is to identify key frames that capture important moments or events in the video sequence <cit.>. In particular, the optical flow has been used for motion analysis and to identify significant changes between two adjacent frames <cit.>. It provides a set of motion vectors that represent the motion between frames. Moreover, the global motion was found to be critical in face anti-spoofing that discriminates camera motion and natural motion of the objects along the time axis <cit.>. The frame selection methods that rely on clip order prediction <cit.> or a sequence sorting task <cit.> in self-supervised learning enable the model to learn meaningful representations from unlabeled video data. However, these approaches extract features from all the frames of the video. We argue that a frame selection approach that involves motion estimation <cit.> between the frames can be costly to train and may lead to potential stability issues. Thus, effective handling of such spatiotemporal variations is pivotal to improving the performance of face anti-spoofing systems. As mentioned above, most of the motion estimation methods are designed to satisfy the PAD performance, while computational cost is mostly ignored. In an attempt to fill this gap, we propose to use a subset of frames by selecting a uniform sampling that does not require estimating the motion between adjacent frames and assumes that selected frames convey relevant information about human faces. Fig. 1 illustrates our approach. In particular, we attempt to make an alternative way of predicting the temporal changes through training different recurrent neural networks (RNNs) that model the sequential information across the video frames. 
Motivated by this, we also address the domain generalization issue by combining the predictions of each RNN and then develop a meta-model based on the idea of stacking-based ensemble learning <cit.>. Intuitively, this idea has at least three main advantages. First, using 4 to 7 frames is sufficient, since it does not require any additional analysis or processing of the video frames. Second, the computational cost is significantly reduced because only a few images will be enough for subsequent analysis, instead of all frames during both training and test phases. Third, we can easily control the number of frames selected from the video sequence by adjusting the sampling rate. The rest of this paper is organized as follows. Section II explains all the steps of the proposed method. Section III discusses the details of experimental results with state-of-the-art performance comparison. Finally, Section IV summarizes the main outcomes with conclusive statements and perspective works. § METHODOLOGY Our work focuses on predicting the temporal dynamics across video frames. Since the 2D CNNs are well-suited for capturing spatial information (i.e., edges, corners, and textures), we use different variants of recurrent neural networks for capturing temporal dependencies across video frames. Firstly, frame skipping is applied to select only a subset of the frames, which can help to reduce the need of computational resources and improve the efficiency of the analysis. Secondly, a CNN is utilized to extract spatial information. Then, several RNN-based models are trained on the same dataset, and their predictions are combined into a single meta-model. Each of the above steps is explained in the following sub-sections. §.§ Frame Skipping Mechanism Selecting important frames for face anti-spoofing is a relatively new area. A fundamental yet challenging problem in detecting real and attack videos is dealing with temporal variations. Since there is no fixed duration of real and attack videos in most of the PAD datasets, this boils down to the question of how many frames in the video contribute to face anti-spoofing. Moreover, the high computational cost associated with the optical flow method makes it impractical for real-time applications or large-scale video processing tasks. Conventionally, motion estimation is performed between every two adjacent frames in a video clip. However, in the context of face anti-spoofing, the frames that show changes in facial expression or movement between two adjacent frames also vary. Thus, computing the motion between every pair of frames in a video clip can be computationally expensive, especially when two adjacent frames are highly similar. Motivated by the aforementioned observations and the emergence of new methods for focusing on the most relevant frames only <cit.>, we propose the integration of a frame skipping mechanism to show that 4 to 7 frames are sufficient for state-of-the-art face presentation attack detection. In particular, we adopt a uniform frame sampling method that selects a subset of frames from a video sequence to improve the efficiency of the detection process. More specifically, to generate a subset of frames, a video is equally divided into non-overlapping segments where the total number of frames in the video are represented as T, and each segment consists of thirty frames. Let's assume that n segments are generated, and only the last frame of each segment is saved. 
Thus, we can use the following equation: L = min(T, n · 30) where L represents the last frame saved, T is the total number of frames in the video, n is the number of segments, and we take the minimum between T and n · 30 to ensure that we only select and save one frame from each segment while considering the available frames in the video. §.§ Deep Ensemble Learning An illustration of the overall framework is provided in Fig. 2. First, we employed a pre-trained CNN for extracting spatial information. Our proposed feature extraction part extracts the deep features from the pooling layer of a CNN model. In the next stage, instead of training several different CNN architectures, three different RNNs are trained to get the benefit of the frame skipping mechanism. To achieve this, ensemble learning is proposed by combining the strength of each RNN model to form a single meta-model. The meta-model takes the predictions from the base models as input and produces a final detection. Our motivation behind stacking is that by combining the predictions of multiple models, the overall performance of the system can be improved. Stacking, also known as a stacked generalization, is a technique used in ensemble learning, which involves training a meta-model that learns to combine the predictions of multiple base models <cit.>. In our work, three bases models, such as Long short-term memory (LSTM) <cit.>, Bidirectional long short-term memory (BiLSTM) <cit.>, and Gated recurrent unit (GRU) <cit.> are adopted. The first base model (LSTM) captures the temporal patterns and dynamics across frames by processing the CNN features sequentially. In particular, LSTM analyzes the sequential input, one by one, maintaining an internal memory that retains information from previous sequence while considering the current sequence. Specifically, LSTM contains memory cells, which enable it to selectively remember or forget information over time <cit.>. The second base model (BiLSTM) processes the sequential features in temporal order, either forward or backward, allowing it to capture both past and future temporal dependencies in the video data <cit.>. The third base model (GRU) showed that gating is indeed helpful in general and useful to capture the temporal dependencies of the face over time <cit.>. The model processes visual features in a temporal sequence by updating its hidden state based on the current input and its previous hidden state. After training these sub-models, we simply concatenate the outcome (predictions) of all the models and then train another model based on LSTM <cit.> architecture. We call this meta-model because it is applied to leverage the strengths of individual base models by integrating their predictions in a way that complements the overall predictive power. Pseudo-code is provided in Algorithm 1. § EXPERIMENTAL SETUP To evaluate the performance of the proposed ensemble learning, OULU-NPU database <cit.> (denoted as O), MSU Mobile Face Spoofing database <cit.> (denoted as M), Idiap Replay-Attack database <cit.> (denoted as I), and CASIA Face Anti-Spoofing database (denoted as C) are used in our work. The Half Total Error Rate (HTER) is reported by taking the average of the false acceptance rate (FAR) and the false rejection rate (FRR). §.§ Implementation details The Densely Connected Convolutional Networks (DenseNet-201) <cit.> architecture is employed and all the frames are resized to 224 × 224 to extract deep features. 
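A minimal sketch of this feature-extraction step is given below, assuming the standard torchvision DenseNet-201 (with pretrained ImageNet weights loaded in practice); the function name and the random clip are our own illustration.

import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.densenet201()        # pretrained ImageNet weights would be loaded here
backbone.eval()

@torch.no_grad()
def extract_features(frames):
    # frames: (num_frames, 3, 224, 224) float tensor, already resized and normalized.
    # Returns one descriptor per frame from the last pooling stage; the classifier
    # head of the network is left unused.
    fmap = backbone.features(frames)                          # (N, 1920, 7, 7)
    fmap = F.relu(fmap)
    return F.adaptive_avg_pool2d(fmap, (1, 1)).flatten(1)     # (N, 1920)

clip = torch.randn(7, 3, 224, 224)     # e.g. the few frames kept after frame skipping
print(extract_features(clip).shape)    # torch.Size([7, 1920])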
In particular, 1920-dimensional features from the last pooling layer are extracted without fine-tuning the model. For the ensemble learning, the LSTM is trained by using the Adam optimizer with the hidden layer dimension of 1000, validation frequency 30, and mini-batch size of 32. The He initializer <cit.> is utilized as weight initialization technique and the learning rate is adjusted to 0.0001. Since no fixed size epochs were used, an early stopping function <cit.> was utilized to prevent overfitting and improve the generalization of the model. For training the BiLSTM model, we keep the same parameters as used for training the LSTM except for the size of the hidden layer dimension which is set to 500. Finally, the third model, GRU, also follows the same tuning parameters and only the hidden layer size was decreased to 20. To develop our meta-model, we train the LSTM model based on the hidden layer dimension 20, epochs size (i.e., 100), and by following the same parameters used to train the sub-models. §.§ Comparison against the state-of-the-art methods In Table I, we report the performance of the proposed method against state-of-the-art deep learning methods. In particular, various domain generalization methods train the model on three source databases and test it on a completely unseen database using the leave-one-out (LOO) strategy. To make a fair comparison with them, we first compute the equal error rate (EER) on the testing set of source databases and then HTER is calculated directly on the target (unseen) dataset. One can see that the proposed meta-model achieves the best performance on three protocols of O&C&I to M, O&C&M to I, and I&C&M to O. We also use another evaluation metric such as area under the ROC Curve (AUC) to measures the overall performance of a classifier by measuring the trade-off between the true positive rate (sensitivity) and false positive rate (1 – specificity). It can be observed that deep ensemble learning achieves more than 90% AUC on all the datasets. In comparison to AUC, HTER shows decreased performance than AUC because HTER focuses on the equal error rate threshold and may not be the most relevant or representative measure of overall performance. Moreover, we also visualize ROC curves in Fig. 3. The meta-model (ensemble learning) indicates a better discrimination capability with a higher curve (closer to the upper left corner). In comparison to other domain generalization-related methods, such as adversarial learning <cit.>, disentangled representation learning <cit.>, and meta-learning <cit.>, the proposed ensemble learning demonstrates that predicting the motion across video frames can lead to a better generalization capability. Furthermore, since the motion is not computed between adjacent frames, we do not compare the computational time with any other motion estimation method such as optical flow. Thus, the proposed skipping mechanism reduces the amount of data to process, allowing for faster analysis and only sacrificing the performance on one dataset (i.e., CASIA) in comparison to state-of-the-art methods. § CONCLUSIONS In this paper, we addressed the domain generalization issue based on ensemble learning and the frame skipping mechanism. In particular, the skipping mechanism provides an alternative way of predicting the temporal changes through training different recurrent neural networks rather than proposing a motion estimation method between the adjacent frames. 
Since only certain frames were involved during training, this approach reduces the computational load and can be implemented in a scenario where real-time performance is crucial. Moreover, we show that the performance of LSTM, BiLSTM and GRU remains limited without using the meta-model. Thus, we conclude that the proposed ensemble learning can compensate the weaknesses of multiple sub-models and reduce the overall error rate. Based on the experimental results on four benchmark datasets, the proposed method exhibits state-of-the-art performance on three datasets. However, our approach may not be suitable for applications that require precise frame-by-frame analysis or rely heavily on temporal information. Thus, our future work will focus on the development of deep learning-based methods that can estimate the motion directly between the frames in end-to-end learning. § DECLARATION OF COMPETING INTEREST The authors have no conflict of interest that could have appeared to influence the work reported in this paper. § ACKNOWLEDGMENT This work is financially supported by ‘Understanding speech and scene with ears and eyes (USSEE)’’ (project number 345791). The first author also acknowledges the support of the Ella and Georg Ehrnrooth foundation. IEEEtran
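For completeness, the frame-skipping rule described in the methodology above (non-overlapping 30-frame segments, keeping the last frame of each, so that L = min(T, n · 30) indexes the last saved frame) can be sketched as follows; the function and variable names are our own and the 200-frame clip is only an example.

def skip_frames(frames, segment_len=30):
    # split the video into non-overlapping segments of `segment_len` frames
    # and keep only the last frame of each complete segment
    n = len(frames) // segment_len
    return [frames[(i + 1) * segment_len - 1] for i in range(n)]

video = list(range(200))               # a 200-frame clip, frames stood in for by indices
kept = skip_frames(video)
print(len(kept), kept)                 # 6 frames kept: indices 29, 59, 89, ..., 179

For a clip of this length the rule keeps 6 frames, in line with the 4 to 7 frames reported as sufficient above.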
http://arxiv.org/abs/2307.00836v1
20230703082013
Trading-Off Payments and Accuracy in Online Classification with Paid Stochastic Experts
[ "Dirk van der Hoeven", "Ciara Pike-Burke", "Hao Qiu", "Nicolo Cesa-Bianchi" ]
stat.ML
[ "stat.ML", "cs.LG" ]
Greedy Selection for Heterogeneous Sensors Kaushani Majumder, Student Member, IEEE, Sibi Raj B. Pillai, Satish Mulleti, Member, IEEE K. Majumder, S. R. B. Pillai, and S. Mulleti are with the Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, 400076, India. Emails: kaushanim@iitb.ac.in, bsrajs@gmail.com, mulleti.satish@gmail.com August 1, 2023 ================================================================================================================================================================================================================================================================================================================================================ We investigate online classification with paid stochastic experts. Here, before making their prediction, each expert must be paid. The amount that we pay each expert directly influences the accuracy of their prediction through some unknown Lipschitz “productivity” function. In each round, the learner must decide how much to pay each expert and then make a prediction. They incur a cost equal to a weighted sum of the prediction error and upfront payments for all experts. We introduce an online learning algorithm whose total cost after T rounds exceeds that of a predictor which knows the productivity of all experts in advance by at most 𝒪(K^2(ln T)√(T)) where K is the number of experts. In order to achieve this result, we combine Lipschitz bandits and online classification with surrogate losses. These tools allow us to improve upon the bound of order T^2/3 one would obtain in the standard Lipschitz bandit setting. Our algorithm is empirically evaluated on synthetic data. § INTRODUCTION We investigate online classification in the framework of prediction with expert advice where, in each round, the learning agent predicts an unknown binary label by aggregating the stochastic predictions of a number of experts. At the end of each round, the learner observes the true label and updates the function used to aggregate experts. In the variant considered in this work, we assume that at the beginning of a round the learner allocates a payment to each expert which affects the expert's performance in that round. This payment model of expert advice is realistic in many scenarios since human annotators will often only give useful advice if they are adequately compensated, and machine annotators may require more computation to return accurate predictions. Moreover, monetary incentives have been studied in crowdsourcing <cit.>. Although this is a different setting to that considered here, it is natural to study the effect of these payments in online binary classification with stochastic expert advice. Motivated by results in crowdsourcing—e.g., <cit.>—we assume that each expert has a productivity function which determines the probability that they predict the label correctly given the payment they received. The productivity function can be different for each expert and is initially unknown to the learner. In each round, the learner pays each expert j=1,…, K some amount c_j ∈ [0,1] before observing their advice. The accuracy of the advice that expert j returns depends on the amount they are paid through the unknown productivity function, p_j : [0,1] → [0,1], where p_j(c) is the probability that expert j is correct when the payment is c. The learner can use the expert advice to improve their prediction, but they also want to minimize the payments to the experts. 
Therefore, they must trade-off between the price of the expert advice, and any improvements to prediction accuracy it may bring. We define the learner's cost over a sequence of T rounds as the sum of classification mistakes and payments to experts. If the probabilities p_j(c_j) are known for some c_1,…,c_K and for all experts j=1,…, K, then we can write down the Bayes-optimal cost. In particular, if the events that each expert makes a mistake are independent, then the error probability of the Bayes-optimal aggregated prediction is known to decrease exponentially fast with exponent -2(γ_1^2(c_1)+⋯+γ_K^2(c_K)), where γ_j(c_j) = (p_j(c_j)-) for j=1,…,K <cit.>. Given productivity functions = (p_1,…,p_K), we then define the optimal cost over T rounds by T(), where () = min_c_1,…, c_K ∈ [0,1]{ e^-2∑_j=1^K γ_j(c_j)^2 + λ∑_j=1^K c_j} and λ >0 is a given parameter introduced to balance the trade-off between accuracy and payments. In this paper, we consider the case when is unknown and the learner's goal is to minimize the regret, R_T = ∑_t=1^T ( _Z(≠ y_t) + λ∑_j=1^K c_t,j) - T() where, in each round t, ∈{-1,1} is the learner's prediction (which depends on the stochastic expert advice Z), y_t ∈{-1,1} is the adversarially chosen true label, and c_t,j is the payment for expert j. Following the standard online learning protocol, we assume the true label y_t is revealed at the end of each round. The learner can then use y_t and the expert advice to learn the productivity functions. Regret minimization in this problem presents several challenges. The first is the need to trade-off cost and accuracy while simultaneously learning how the payments affect the accuracy. Indeed, we only observe the predictions of the experts with the payments that we pay them. This introduces an additional exploration-vs-exploitation trade-off as is typically seen in bandit problems. However, as we discuss in Section <ref>, this is more challenging than in standard bandit problems since the relationship between the payments that we choose and the regret is more complex and possibly non-smooth. A further significant challenge is to combine the predictions from all experts when we only have estimates of their accuracy. In particular, if we have estimated an experts productivity function p̂_t,j(c_j), as being close to 0 or 1 for a specific c_j in round t, directly using a majority or weighted majority aggregation approach could lead to undesirable scaling of the regret with 1/min{p̂_t,j(c_j),1-p̂_t,j(c_j)} which can be arbitrarily large. Our approach to overcome these issues is based on a combination of several ideas. First, we discretize the payment interval and for each payment and expert combination we estimate the probability of a correct classification. To deal with the exploration vs exploitation challenge we rely on the optimism in the face of uncertainty principle—see, for example, <cit.>. While this is a standard approach for stochastic losses, if it were to be directly applied to the discretized payments here, it would lead to an undesirable regret bound of (T^2/3).[ To see why standard methods give a T^2/3 rate consider a simplified setting with one expert. Then, the mistake probability is equivalent to the productivity function of that expert, denoted by p. Since we assume that p is Lipschitz, the problem is now reduced to Lipschitz bandits, for which a T^2/3 bound is known to be unavoidable <cit.>. 
] Instead, we combine the optimistic principle with tools from online classification with surrogate losses to obtain a (√(T)) regret bound. Specifically, we use the randomized predictions of <cit.> which gains us considerable leeway in the analysis, see Section <ref> for the details. To avoid regret that scales with 1/min{p_j(c),1-p_j(c)}, we propose a modified aggregation approach. This approach simply follows the advice of one expert if we believe they are very likely to be correct (or wrong, in which case we use the opposite of their prediction). This aggregation approach allows us to have multiplicative control over estimation errors rather than additive control. Combined with the randomized prediction technique, multiplicative control over the estimation error is a crucial element in our analysis, and one of the major differences compared to standard analysis of aggregated classifiers. Combining these ideas with tight error bounds on the productivity function allows us to obtain the following result (implied by Theorem <ref>). The regret of LCB-GAPTRON (Algorithm <ref>) satisfies R_T = 𝒪(K^2(ln T)√(T)) where K is the number of experts and T is the number of rounds. This result represents an improvement on the T^2/3 regret bound that would be achievable if we were to simply use an optimistic algorithm with discretized costs. We also demonstrate that our algorithm significantly outperforms this naive algorithm in several simulated environments. Our experiments also indicate that the most computationally demanding step of our algorithm can be replaced by a cheaper approximation with little impact on the regret. §.§ Related Work Online aggregation of experts is also studied in online boosting <cit.>, a setting where there are no payments and the predictions of experts are adversarial. When the average accuracy of the experts is unknown, <cit.> prove a mistake bound of 8/γ^2 KT + 𝒪(K/γ^2), where 1/2-γ is an upper bound on the fraction of mistakes made by any expert. Note that, due to the adversarial assumption on the experts, the leading term in this bound vanishes at rate K^-1, as opposed to the exponential rate e^-K achievable in our stochastic setting. Our setting is also related to the framework of online prediction with limited expert advice <cit.>, where the predictions of experts and the payments are both chosen by an adversary. In this model, the learner can buy advice from any subset of experts at the price posted by the adversary. As the payments are not chosen by the learner, the trade-off between payments and accuracy is different from the one studied in this work. Although our setting is online learning, solving classification tasks by aggregating the predictions of stochastic experts naturally arises also in the context of crowdsourced labeling <cit.>. <cit.> study a setting where a requester has a set of n homogeneous binary labeling tasks to be assigned to m workers arriving in random order. Each worker j is characterized by an unknown probability p_j of labeling correctly any task and by a capacity T_j. At each round, the requester observes the capacity of the current worker and selects a subset of tasks respecting the worker's capacity. After the m workers have been all assigned tasks, the requester aggregates their predictions to infer a single prediction for each task. This model has been extended to consider heterogeneous tasks (so that each worker j has a different accuracy p_i,j for each task i) and costly access to ground truth labels for each task type <cit.>. 
<cit.> also extend the crowdsourcing model to a setting where workers have fixed and known costs and the requester must allocate tasks while respecting a budget constraint. From an online learning viewpoint, these crowdsourcing papers consider a dual pool-based model, where a set of unlabeled points is preliminary given and experts arrive online. In contrast, in our problem the set of experts is fixed and new instances are considered in each round, thus the problem studied here is quite distinct from crowdsourcing. In our work, we also consider payments that influence the workers' accuracy, which is not included in the classical crowdsourcing model. Monetary incentives in crowdsourcing have been considered by <cit.>, and although the crowdsourcing setting is distinct from ours, these works help motivate our setting of paid stochastic experts. <cit.> empirically showed the effect of monetary incentives on the quality of the predictions in crowdsourcing. <cit.> introduce an online stochastic model where workers, who act strategically, are drawn i.i.d. from a fixed and unknown distribution of worker types. Each type determines the workers' productivity function and the workers' effort function, where the latter controls their strategic behavior. Because of strategic behaviors, their payment scheme is more complex than ours. On the other hand, in their model the requester's utility cannot be increased by aggregating workers on the same task. Another example of strategic workers is investigated by <cit.>, where they compute minimal payments sufficient to incentivize workers to predict labels that they are sure of, and abstain on the others. § PRELIMINARIES In each round t ∈ [T] of online classification with paid experts, the learner chooses a payment c_t,j∈ [0,1] for each expert j ∈ [K]. After receiving the payments, the experts reveal their predictions Z_t,1, …, Z_t,K∈{-1, 1}^K for the true label y_t ∈{-1,1}. For each j ∈ [K], the prediction Z_t,j is stochastic and satisfies (Z_t,j≠ y_t) = p_j(c_t,j) where p_j is the productivity function of expert j. Based on the expert advice, the learner then predicts ∈{-1, 1}, observes y_t, and suffers the zero-one loss 𝕀[≠ y_t]. The sequence y_1,y_2,… of true labels is arbitrary and deterministic[Our results continue to hold even when the labels are stochastic, provided that the events {Z_t,j≠ y_t} remain independent.], and the events {Z_t,j≠ y_t} for t ∈ [T] and j ∈ [K] are stochastic and assumed to be independent. We let Z_t=(Z_t,1, …, Z_t,K) be the vector of all experts' predictions in round t. We assume the experts' productivity functions p_1,…,p_K : [0,1] → [0,1] are L-Lipschitz, |p_j(c) - p_j(c')| ≤ L|c - c'| c,c' ∈ [0,1] . This class of productivity functions is broad enough to capture most realistic settings, including the special cases where the productivity funtion is monotonic, logistic, or where the experts are restricted to predict the correct label with probability greater than 0.5. These productivity functions are initially unknown to the learner. For any round t=1,…, T, we define the filtration _t = σ(Z_1, B_1, y_1, …, Z_t-1, B_t-1, y_t-1) where B_s represents any internal randomization used by the learner in round s≤ t. The learner can use any information in _t to estimate the productivity functions, decide on the payments and aggregate expert predictions in round t. The learners objective is to select payments and aggregate expert advice to minimize the cumulative regret defined in (<ref>). 
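Purely to make the protocol concrete, the sketch below simulates the interaction with hand-picked Lipschitz productivity functions and fixed payments, aggregates the advice with the weights w(p) = ln(p/(1-p)) analysed in the next paragraph, and compares the empirical error with the exponential bound exp(-2 Σ_j (p_j(c_j) - 1/2)^2) from the introduction; every numerical choice here (K, T, λ, the productivity functions, the payments) is our own illustration rather than part of the paper.

import math, random

random.seed(0)
K, T, lam = 5, 20000, 1e-3
p = [lambda c, s=s: 0.5 + 0.45 * min(1.0, s * c)      # Lipschitz productivity functions
     for s in (0.6, 0.8, 1.0, 1.2, 1.5)]
c = [0.5] * K                                         # some fixed payments

mistakes, paid = 0, 0.0
for t in range(T):
    y = random.choice([-1, 1])        # the paper allows an arbitrary label sequence
    Z = [y if random.random() < p[j](c[j]) else -y for j in range(K)]   # expert advice
    w = [math.log(p[j](c[j]) / (1.0 - p[j](c[j]))) for j in range(K)]
    y_hat = 1 if sum(w[j] * Z[j] for j in range(K)) >= 0 else -1
    mistakes += (y_hat != y)
    paid += sum(c)

gamma_sq = sum((p[j](c[j]) - 0.5) ** 2 for j in range(K))
print("empirical error:", mistakes / T)
print("exponential bound:", math.exp(-2 * gamma_sq))
print("cost per round:", mistakes / T + lam * paid / T)

Lowering the payments cheapens each round but weakens every p_j(c_j); this is the trade-off that the optimal cost and the regret defined above formalise.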
To understand the ideas behind the algorithm and the involved challenges, we first consider a simplified setting where payments do not affect the prediction accuracy. In other words, p_j(c) = p'_j for all c ∈ [0, 1], for all j ∈ [K], and for some p_1',…,p_K' ∈ [0,1]^K. As argued in the introduction, if p'_1,…,p'_K are known, then the learner's expected number of mistakes is exponentially decreasing in K. To see this, let w(p') = lnp'/1-p' and assume the learner's prediction is = ( w(p_j')Z_t,j) . Similarly to the analysis of AdaBoost <cit.>, we can upper bound the zero-one loss with the exponential loss and write _Z_t(≠ y_t) ≤_Z_t[exp(-y_t w(p_j')Z_t,j)] = ∏_j = 1^K (p_j' √(1-p_j'/p_j') + (1-p_j')√(p_j'/1-p_j')) = ∏_j = 1^K √(4(1 - ( - p_j')^2))≤exp(-2 ( - p_j')^2) . The first equality holds because of our choice of w(p') and the assumption that the predictions of experts are independent. The last inequality uses 1 + x ≤ e^x. The tightness of the bound (<ref>) is easily verified in the special case ( - p_j')^2 = γ^2 for all j ∈ [K]. Then is the majority vote, which is clearly Bayes optimal and, assuming K is odd, _Z(≠ y_t) = Binom(K, ⌊K/2⌋, 1/2+γ) = Ω(√(1/K) e^K/2ln(1-4γ^2)) where Binom(n,m,p) is the probability of at most m heads in n flips of a coin with bias p, see <cit.>. This implies that, in the worst case, we are sub-optimal by a factor K^-1/2 as soon as we apply the first inequality in (<ref>). We now explain our approach for learning the unknown probabilities p_j'. Let _t,j be the estimate of p_j' in round t. Following the derivation of (<ref>) and using prediction (<ref>) with estimates _j in lieu of the true probabilities p_j', we see that _Z_t(≠ y_t) ≤∏_j = 1^K (p_j' √(1-_j/_j) + (1-p_j')√(_j/1-_j)) A first challenge is to control the difference between (<ref>) and (<ref>). This involves controlling terms of order √((1-_j)/_j) - √((1-p_j')/p_j') . Via standard online learning analysis, we would obtain a regret bound scaling linearly with the Lipschitz constant of the function √((1-p')/p'), which is of order 1/p'. But this would require enforcing that min_j p_j' be bounded away from 0 (and from 1 for the symmetric function √(p'/(1-p'))). A second challenge is learning the optimal cost for each expert. This is a bandit problem over continuously many actions, because choosing a payment c ∈ [0,1] for some expert does not provide any information about payments c' ≠ c. Using Lipschitzness of the productivity functions, we can discretize the payments. However, as we argued above, the key function √((1-p')/p'), which controls the error estimating the probability of mistake, is not Lipschitz in [0,1]. This necessitates further algorithmic developments and novel analyses. In the following section we introduce our algorithm and explain how we overcome the aforementioned challenges. § ALGORITHM Our algorithm LCB-GAPTRON for online classification with paid experts is presented in Algorithm <ref>. At a high level, our algorithm selects payments using the pessimistic principle (since we receive bandit feedback for the payments), and then uses a randomized weighted aggregation procedure to predict a label based on the expert advice. However, as indicated above, several adjustments need to be made to account for the intricacies of the problem and ensure that we can obtain a (√(T)) regret bound. We detail these below. LCB-GAPTRON requires as input a discrete set of payments. 
To learn the optimal payment in for each expert, it is helpful to maintain an empirical estimate _t+1, j(c) = ∑_s = 1^t𝕀[c_s,j = c]1+Z_s, jy_t/2 n_t+1, j(c) of the probability of success for each c ∈, where n_t+1, j(c) = ∑_s = 1^t𝕀[c_s,j = c] is the number of times we have paid expert j c up to the end of round t. We then construct optimistic estimates _t(c) in line <ref> based on the empirical Bernstein bound <cit.>—see Lemma <ref> in the Appendix—which are used when computing the payments to the experts in (<ref>). The algorithm also requires a parameter β≥ 0, whose role is to control the cutoff value α_t, j(c) of the estimated probabilities for each c ∈. As discussed around (<ref>), a key challenge in this problem is that the function that we optimize is subject to large changes when the estimated probabilities are close to 0 or 1. To solve this problem, when p_t,j(c) is very large or very small, we simply follow experts j that we estimate are either very good or very bad (in the latter case, the weight w_t,j is negative). However, this does not resolve our troubles completely. Indeed, standard online methods would still suffer regret inversely proportional to the cutoff value because they need to control the difference between the estimated probabilities and the true probabilities. To overcome this issue, we show that we can estimate the true probabilities up to a multiplicative factor of 32. This is also a result of using the empirical Bernstein bound to construct the confidence intervals. Indeed, the empirical Bernstein bound allows us to have both additive and multiplicative control of the estimated probabilities which is essential for our analysis. See Section <ref> for further details. To avoid suffering additional regret for only being able to estimate the probabilities up to a multiplicative factor, we use the ideas of <cit.>. In particular, whenever all the estimated probabilities are bounded away from 0 and 1, we use randomized predictions. These randomized predictions = (B_t,Z_t) defined in (<ref>)—where B_t is the internal randomization used by Algorithm <ref> at step t—satisfy (see Lemma <ref>) _B_t(≠ y_t | Z_t, _t) ≤ e^ - y_t w_t, j(_t,j(c_t,j))Z_t,j where c_t,j∈ is the payment to expert j in round t and _t,j(c_t,j) is the estimate of the accuracy of expert j when the payment is c_t,j. This gains us a factor compared to the bound we used in Section <ref> (see equation (<ref>)) and allows us to compensate for the multiplicative factor introduced by estimating the probabilities. § ANALYSIS In this section we prove the regret bounds in the introduction. To condense notation slightly, let Φ(()̧) = e^-2(1/2 - p_j(c_j))^2 where ()̧ is the vector with elements p_j(c_j). Then, () = min_∈̧[0,1]^KΦ(()̧) + λ c_j. Our proof of the regret bound follows from the below steps. Step 1: Bound the error in estimated probabilities We start by showing that we can control the difference between true probabilities p_j(c) and estimated probabilities _t,j(c). For any δ∈ (0,1) and t ≥ 1, let Λ_δ, t be the event that |_t, j(c) - p_j(c)| ≤3/n_t, j(c)ln3/δ + √(_t, j(c)(1-_t, j(c))/n_t, j(c)2ln3/δ) simultaneously for all j ∈ [K] and c ∈. lemmalemdifcontrol Fix δ∈ (0,1). Suppose that t > N. Then Λ_δ, t holds with probability at least 1 - δ TNK. The proof of Lemma <ref> follows from an application of the union bound and the empirical Bernstein bound <cit.> and is provided in the Appendix. Step 2: Bound the prediction error We now turn our attention to controlling the number of mistakes we make. 
In the following, we bound the probability that we make a mistake in any given round t. We split the analysis into two cases: either all p̂_t, j(c_t,j) ∈[α_t, j, 1 - α_t, j(c_t,j)], or at least one p̂_t, j(c_t,j) is outside of the specified range (meaning that some expert is certain so we can predict just using their advice). We denote by _t the event that in round t we are in the first case, that is that p̂_t, j(c_t,j) ∈[α_t, j(c_t,j), 1-α_t, j(c_t,j)] for all j. As a first step, observe that when _t holds we issue a randomized prediction, see equation (<ref>) in Algorithm <ref>. This prediction has the following crucial property. lemmalemgaptronbound For any round t > N, assuming _t holds, the prediction in equation (<ref>) satisfies P_B_t(≠ y_t|_t, Z_t) ≤ e^-y_t w_t, j(_t,j(c_t,j))Z_t,j. The proof of Lemma <ref> can be found in the appendix and essentially follows from <cit.> although our proof is simpler. As a second step, let us integrate out the randomness in Z_t in the bound in Lemma <ref>. Using the definition of w_t,j we obtain: P_B_t,Z_t( ≠ y_t|_t) ≤1/2∏_j = 1^K (p_j(c_t,j) √(1-_t, j(c_t,j)/_t, j(c_t,j)) + (1-p_j(c_t,j))√(_t, j(c_t,j)/1-_t, j(c_t,j))). Now, under the assumptions that all p̂_t, j(c_t,j) ∈[α_t, j(c_t,j), 1-α_t, j(c_t,j)] and that Λ_δ, t holds, for all β > 0 |_t,j(c_t,j) - p_j(c_t,j)| = (1/β_t,j(c_t,j)) . To see why this holds, observe that p̂_t, j(c_t,j) ≥α_t, j(c_t,j) = (β/n_t, j(c_t,j)), which we can use to bound the right-hand side of the equation in the definition of Λ_δ, t in equation (<ref>). We can then substitute this into equation (<ref>) and, after some manipulation together with a careful choice of β, show that when _t and Λ_δ,t both hold, _B_t,Z_t(≠ y_t|_t) ≤3/4∏_j = 1^K √(1 - 4(-_t,j(c_t,j))^2) after which, using 1 + x ≤exp(x), we arrive at the statement of Lemma <ref>. lemmalemmultiplicativeboundone Let β = 18ln(3/δ)K^2. Then for any round t > N and δ∈ (0, 1), assuming Λ_δ, t and _t both hold, _B_t,Z_t(≠ y_t|_t) ≤3/4Φ(_t(_̧t)) Step 3: Relate Φ(()̧) to Φ(()̧) In the next lemma we show how to control the difference between the right-hand side of Lemma <ref> and the same equation, but with p_j(c_t,j) instead of _t,j(c_t,j). lemmalemmultiplicativeboundtwo For any round t > N and δ∈ (0, 1), assuming Λ_δ, t holds, 3/4Φ(_t(_̧t)) ≤7/8Φ((_̧t)) + 96 K (2ln(3/δ)/n_t,j(c_t,j) + (3ln(3/δ)/n_t,j(c_t,j))^2). To prove Lemma <ref> (see the Appendix for a detailed proof) we show that for some constant b >0 and for all ,' ∈ [0,1]^K, Φ() - Φ(') ≤18Φ() + b K |q_j - q'_j|^2 . This means that 34Φ() = 78Φ() - 18Φ() ≤7/8(Φ(') + b K |q_j - q'_j|^2) . Since we assumed that the event Λ_δ, t holds, we can replace q_j and q_j' by _t,j(c_t,j) and p_j(c_t,j) to prove Lemma <ref>. Step 4: Bound the loss from pessimistic choice of payments Next, we need to control the difference between paying the experts our chosen costs versus the optimal costs. Using the optimism in the face of uncertainty principle, the same ideas we used to prove Lemma <ref>, and the Lipschitz assumption in equation (<ref>) we arrive at the following lemma, whose proof can be found in the Appendix. lemmalemoptimism Let be such that for any c^⋆∈ [0,1] there is a c̃∈ that satisfies |c̃ - c^⋆| ≤ε. Then for any round t > N and δ∈ (0, 1), assuming Λ_δ, t holds, 7/8Φ((_̧t)) + λ c_t,j ≤min_∈̧[0,1]^K{Φ(()̧) + λ c_j} + (4L + λ)Kε + 96 K(2ln(3/δ)/n_t,j(c_t,j) + (3ln(3/δ)/n_t,j(c_t,j))^2) . 
Step 5: Control the regret in when estimated probabilities are too large or small Combined, Lemmas <ref>, <ref>, and <ref> give us control over the regret in rounds where all p̂_t, j(c_t,j) ∈[α_t, j(c_t,j), 1-α_t, j(c_t,j)]. In the case where there is at least one p̂_t, j(c_t,j) which is not in the above range, then we can control the regret by using the following observation. In rounds where _t does not hold we follow the (flipped) prediction of an expert j that satisfies p̂_t, j(c_t,j) ∉[α_t, j(c_t,j), 1-α_t, j(c_t,j)]. For simplicity suppose that we just follow the actual prediction of that expert, which implies that p̂_t, j(c_t,j) ≤α_t, j(c_t,j). In this case does not depend on B_t and we have that, assuming Λ_δ, t and _t^c both hold, _Z_t(≠ y_t |_t) = p_j(c_t,j) ≤_t, j(c_t,j) + |p_j(c_t,j) - _t, j(c_t,j)| = (α_t,j(c_t,j)) = (β/n_t, j(c_t,j)) , where the third equality follows from the assumption that events Λ_δ, t and _t^c hold. To see why, observe that since _t, j(c_t,j) ≤α_t, j(c_t,j) equation (<ref>) is of order α_t, j(c_t,j). The formal result can be found in Lemma <ref> below, the full proof of which can be found in the Appendix. lemmalemnotinrange For any round t > N and δ∈ (0, 1), assuming _t^c and Λ_δ,t both hold, _Z_t(≠ y_t |_t ) + λ c_t, j ≤() + 2β + 4ln(3/δ)/n_t, j_t(c_t,j_t) + (4L + λ)Kε + 96 K(2ln(3/δ)/n_t,j(c_t,j) + (3ln(3/δ)/n_t,j(c_t,j))^2). Step 6: Sum up the per-round regret We now have a per-round control of the cost and probability of making a mistake. Thus, by combining Lemmas <ref>, <ref>, <ref>, <ref>, and <ref> and summing over rounds we arrive at the main result of the paper in Theorem <ref> below, whose result is implied by Theorem <ref> in the Appendix. As stated below, the choice of in Algorithm <ref> that delivers the bounds of Theorem <ref> is the uniform ε-grid on [0,1]. theoremmaintheoremO Let be such that for any c^⋆∈ [0,1] there is a c̃∈ that satisfies |c̃ - c^⋆| ≤ε, let β = 18ln(3/δ)K^2, and let δ = 1/(1 + λ K)T^2K. Then the regret of Algorithm <ref> satisfies R_T = ( ε KT (4L + λ K) + N (λ + K^3 ln(λ K T)^2 ) ) . Theorem <ref> implies that if N = (ε^-1) for ε of order √((λ + K^3 ln(λ K T)^2 )/(KT (4L + λ K))), then our final bound is R_T = ( K^2√(T (4L + λ)(λ + ln(1λ K T)^2 ))). A slight modification of the proof shows that if we compete with the best costs in , rather than the best costs in [0,1]^K, then the corresponding regret, R_T() = [(𝕀[≠ y_t] + λ c_t, j)] - T min_∈̧{Φ(()̧) + λ c_j} can be bounded by (N (λ + K^3 ln(λ K T)^2 )). Note that in this case, we remove the first term of the regret in Theorem <ref> as there is no discretization error. Moreover, if we have a discrete set of costs , then we do not require the costs to be Lipschitz. § ALTERNATIVE APPROACHES In this section we formulate alternative notions of regret that make the problem of online classification with paid experts amenable to solution via standard bandit algorithms that predict using the advice of a single expert. We start by considering a finite set of costs and assume that in each round the learner must pick a single expert and a cost c ∈ to pay them. Assuming p_j(c) ≥1/2 for all experts j ∈ [K] and costs c ∈, we may define the regret R^band_T = [(𝕀[≠ y_t] + λ c_t)] - T min_j ∈ [K], c ∈(p_j(c) + λ c ) . R^band_T presents a different trade-off between costs and mistakes. 
In particular, while the term accounting for the costs is considerably smaller (as only one expert is paid in each round), the term accounting for the expected number of mistakes is exponentially larger (in the number of experts) because there is no expert aggregation. Treating each expert and cost pair as an arm, we can use LCB (the variant of the UCB algorithm using lower confidence bounds to minimize losses instead of maximizing rewards) and immediately obtain the bound R^band_T = (√(K N T)). When the productivity functions are L-Lipschitz, see (<ref>), we can define a harder notion of regret R^cont_T in which the comparator is defined with respect to the best cost c ∈ [0,1], as opposed to the best c∈ which we used in the definition of R^band_T. Now, if we discretize the interval [0,1] using the grid of Theorem <ref> with || = (ε^-1), then we pay an approximation error of ε T(L+λ). By running LCB on × K arms, and adding the approximation error, we get, after tuning ε, R^cont_T = (T^2/3(KL)^1/3). Although these bounds on R^band_T and R^cont_T are not directly comparable to the bounds on our different notion of regret (<ref>), in the next section we perform an empirical comparison between variants of Algorithm <ref> and the instance of LCB run on K × N arms which we used to bound R^band_T. § EXPERIMENTS Our experiments use two sets of productivity functions defined on a uniform (random) grid of N payments on [1/2,1]. The first productivity function is linear with the same slope for all experts: We sampled N numbers c_1,…,c_N uniformly at random from [1/2,1] and defined p_j(c_i) = c_i for all i ∈ [N] and all j ∈ [K]. The second productivity function is sigmoidal, with a different slope for each expert: We sampled N numbers c_1,…,c_N uniformly at random from [0,1] and K integers θ_1,…,θ_K uniformly at random from [1,10]. For each j ∈ [K] we then set p_j(c_i) = exp(θ_j c_i)/1+exp(θ_j c_i) . A fresh sample of the productivity functions for each expert is drawn in each repetition of our experiments. Consistent with our definition of regret, we measure the performance of the algorithms in terms of their cost; i.e., the total number of mistakes plus the total payments to the experts. The running time for finding the optimal costs in T rounds using our optimistic estimates (<ref>) in LCB-GAPTRON (Algorithm <ref>) is (TN^K), which prevents running experiments for moderately large values of the parameters. Therefore, we designed two simple and efficient approximations of the optimal payments defined in (<ref>). The first one, selfish, optimizes the cost for each expert independently of the others. The second one, local, computes the optimal cost of each expert iteratively, in a round-robin fashion, while keeping the cost of the other experts fixed. We call brute the inefficient implementation of LCB-GAPTRON that directly optimizes (<ref>) using brute force search. Finally, we call LCB the instance of LCB run over K × N actions which we defined in Section <ref>. In our first experiment we pick sufficiently small values for the parameters so that we can run LCB-GAPTRON-brute. Figure <ref> shows that for both choices of the productivity function, all instances of LCB-GAPTRON have nearly indistinguishable performance and outperform LCB. Thus, we can safely drop LCB-GAPTRON-brute and run a second set of experiments using a larger time horizon T=10^5. 
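Since the experimental setup above is described only in words, the following sketch illustrates how the two productivity functions and the three payment-search strategies (brute, selfish, local) might be implemented. The generic objective callables, the initialization of the local search, and all names are illustrative assumptions rather than the authors' code; brute enumerates all N^K payment combinations and is only feasible for small parameters, as noted above.

import itertools
import numpy as np

rng = np.random.default_rng(0)

def sample_linear(N, K):
    # N payment levels drawn uniformly from [1/2, 1]; p_j(c_i) = c_i for every expert.
    costs = np.sort(rng.uniform(0.5, 1.0, size=N))
    return costs, np.tile(costs, (K, 1))

def sample_sigmoid(N, K):
    # N payment levels from [0, 1] and integer slopes theta_j in [1, 10];
    # p_j(c_i) = exp(theta_j * c_i) / (1 + exp(theta_j * c_i)).
    costs = np.sort(rng.uniform(0.0, 1.0, size=N))
    theta = rng.integers(1, 11, size=K)
    e = np.exp(theta[:, None] * costs[None, :])
    return costs, e / (1.0 + e)

def brute(joint_objective, N, K):
    # Exhaustive search over all N^K payment index combinations, O(N^K) per round.
    return min(itertools.product(range(N), repeat=K), key=joint_objective)

def selfish(single_objective, N, K):
    # Each expert's payment index is optimized independently of the others.
    return tuple(min(range(N), key=lambda i: single_objective(j, i)) for j in range(K))

def local(joint_objective, N, K, sweeps=3, init=None):
    # Round-robin coordinate descent: re-optimize one expert at a time,
    # keeping the payments of the other experts fixed.
    idx = list(init) if init is not None else [0] * K
    for _ in range(sweeps):
        for j in range(K):
            idx[j] = min(range(N),
                         key=lambda i: joint_objective(tuple(idx[:j] + [i] + idx[j + 1:])))
    return tuple(idx)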
In our second experiment, we test LCB-GAPTRON-selfish and LCB-GAPTRON-local against LCB using the second productivity function (plots using the first productivity function are similar, see Appendix <ref>). The results in Figure <ref> show that for small value of the scaling parameters (K=10 and N=10) and large cost units (λ = 10^-2), LCB is eventually on par with LCB-GAPTRON. Recall that LCB pays a single expert at each round, which is the reason that LCB catches up with LCB-GAPTRON. However, for larger values of the scaling parameters (K=20 and N=50) or for smaller values of the cost units (λ = 10^-3), both variants of LCB-GAPTRON dominate again. In Appendix <ref>, we report the performance of LCB, LCB-GAPTRON-selfish, and LCB-GAPTRON-local on both productivity functions for T = 10^5, K ∈{10,20}, N ∈{10,50,100}, and λ∈{10^-2,10^-3}. These plots essentially confirm the observations made in the discussion of Figure <ref>. In Appendix <ref> we also provide the figures for the experiments with LCB-GAPTRON-brute with λ∈{10^-2,10^-3}, N = 5, K = 5, T = 10^5, and both productivity functions, which also confirm the observations made in the discussion of Figure <ref>. § FUTURE WORK In this paper we have studied online classification with paid stochastic experts, and presented an algorithm which has sub-linear regret in terms of the number of rounds T. We have also demonstrated that this algorithm performs well in several experimental settings. Although the algorithm is computationally expensive, we have shown empirically that using approximations does not lead to much deterioration of performance. Several questions remain open. For example, whether a computationally efficient algorithm with similar regret guarantees can be developed, and whether the K^2 factor in our regret bound K^2(ln T)√(T) can be improved on. On the other hand, we conjecture the √(T) rate is optimal, because of the need of estimating the discretized productivity functions from bandit feedback. There is scope to extend our model in several directions by considering, for example, strategic experts (as in <cit.>), or making the experts' performance to depend also on contextual information. It is also an open question whether faster rates can be achieved with stronger parametric assumptions on the productivity function. For example, if the productivity function is a sigmoid, p_j(c) = exp(a+bc)/(1+exp(a+bc)) with unknown constants a,b, ∈ℝ, can the regret be significantly improved? § ACKNOWLEDGEMENTS This work was mostly done while DvdH was at the University of Milan partially supported by the MIUR PRIN grant Algorithms, Games, and Digital Markets (ALGADIMAR) and partially supported by Netherlands Organization for Scientific Research (NWO), grant number VI.Vidi.192.095. HQ and NCB are partially supported by the MIUR PRIN grant Algorithms, Games, and Digital Markets (ALGADIMAR), by the EU Horizon 2020 ICT-48 research and innovation action under grant agreement 951847, project ELISE (European Learning and Intelligent Systems Excellence), and by the FAIR (Future Artificial Intelligence Research) project, funded by the NextGenerationEU program within the PNRR-PE-AI scheme (M4C2, investment 1.3, line on Artificial Intelligence). CPB gratefully acknowledges the support of the Imperial College European Partners Fund. apalike § PROOF DETAILS Let X_1, …, X_t be i.i.d. random variables with [X] = μ taking their values in [0, 1]. Denote by X̅_t = 1/t∑_s=1^t X_s and by V_t = 1/t∑_s=1^t (X_s - X̅_t)^2. 
Then with probability at least 1 - δ |X̅_t - μ| ≤√(V_t/t2ln(3/δ)) + 3ln(3/δ)/t. * We denote by x_t = w_t, j(_t,j(c_t,j)) Z_t, j The proof also follows from Lemma 1 by <cit.>. Suppose that (x_t) = (y_t). Then [𝕀[≠ y_t]|_t, Z_t] = exp(-y_t x_t). Now suppose that (x_t) ≠(y_t). Then _[𝕀[≠ y_t]|_t, Z_t] = 1 - exp(y_t x_t) = 1 + exp(-y_t x_t) - exp(y_t x_t) - exp(- y_t x_t) ≤exp(-y_t x_t), where the last inequality is due to exp(y_t x_t) + exp(- y_t x_t) ≥ 1, which holds by Jensen's inequality. * First, note that since t>N n_t(c) ≥ 1 for all c ∈. Denote by V_t, j(c) = 1/|𝒩_t|∑_i ∈𝒩_t(∑_i' ∈𝒩_t1 + Z_i', jy_t/2|𝒩_t| - 1+Z_i, jy_t/2)^2, where 𝒩_t = {i<t: 𝕀[c_i, j = c]} and denote by V̂_t, j(c) = 1/n_t,j(c)∑_s = 1^t-1𝕀[c_s, j = c](∑_s' = 1^t-1𝕀[c_s', j = c]1+Z_s', jy_t/n_t,j(c) - 1+Z_s, j(c_s,j)y_t/2)^2 = _t,j(c)(1-_t,j(c)) Now, by reindexing the sum we can see that 1/n_t,j(c)∑_s = 1^t-1𝕀[c_s, j = c](∑_s' = 1^t-1𝕀[c_s', j = c]1+Z_s', jy_t/n_t,j(c) - 1+Z_s, j(c_s,j)y_t/2)^2 = 1/|𝒩_t|∑_i ∈𝒩_t(∑_i' ∈𝒩_t1 + Z_i', jy_t/2|𝒩_t| - 1+Z_i, jy_t/2)^2. Therefore, we can see that P(|_t, j(c) - p_j(c)| ≥√(V̂_t, j(c)/n_t, j(c)2ln(3/δ)) + 3ln(3/δ)/n_t, j(c)) ≤ P(∃ 𝒩_h ∈{𝒩_N+1, …, 𝒩_t} s.t. |∑_i ∈𝒩_h1 + Z_i, j(c)y_t/2|𝒩_h| - p_j(c)| ≥√(V_h, j(c)/|𝒩_h|2ln(3/δ)) + 3ln(3/δ)/|𝒩_h|) ≤∑_h = N+1^t P(|∑_i ∈𝒩_h1 + Z_i, j(c)y_t/2|𝒩_h| - p_j(c)| ≥√(V_n, j(c)/|𝒩_h|2ln(3/δ)) + 3ln(3/δ)/|𝒩_h|) ≤ T δ, where the last inequality is due to Lemma <ref>. Taking the union bound we can see that the above holds for all j and c ∈ with probability at most TNKδ. * Since both Λ_δ,t and _t hold we may use Lemma <ref> to obtain _Z[_[𝕀[≠ y_t]]] ≤_Z[exp(-y_t x_t)] = ∏_j = 1^K (P(Z_t, j(c_t,j)y_t = 1)√(1 - _t,j(c_t,j)/_t,j(c_t,j)) + P(Z_t, j(c_t,j)y_t = -1)√(_t,j(c_t,j)/1 - _t,j(c_t,j))) = ∏_j = 1^K (p_j(c_t,j)√(1 - _t,j(c_t,j)/_t,j(c_t,j)) + (1 - p_j(c_t,j))√(_t,j(c_t,j)/1 - _t,j(c_t,j))) = ∏_j = 1^K (2√(_t,j(c_t,j)(1 - _t,j(c_t,j))) + (p_j(c_t,j) - _t, j(c_t,j))√(1 - _t,j(c_t,j)/_t,j(c_t,j)) + (_t, j(c_t,j) - p_j(c_t,j))√(_t,j(c_t,j)/1 - _t,j(c_t,j))). Recall that that α_t, j(c_t,j) = min{β/n_t, j(c_t,j), }. This implies that if α_t, j(c_t,j) = event _t can only hold if _t, j(c_t,j) =, in which case we trivially have (p_j(c_t,j) - _t, j(c_t,j))√(1 - _t,j(c_t,j)/_t,j(c_t,j)) + (_t, j(c_t,j) - p_j(c_t,j))√(_t,j(c_t,j)/1 - _t,j(c_t,j)) = 0 We do the remainder of the analysis under the assumptions that α_t, j(c_t,j) ≠ and p_j(c_t,j) - _t, j(c_t,j) < 0. The case where p_j(c_t,j) - _t, j(c_t,j) ≥ 0 follows from symmetrical arguments. We have that (p_j(c_t,j) - _t, j(c_t,j))√(1 - _t,j(c_t,j)/_t,j(c_t,j)) + (_t, j(c_t,j) - p_j(c_t,j))√(_t,j(c_t,j)/1 - _t,j(c_t,j)) ≤(_t, j(c_t,j) - p_j(c_t,j))√(_t,j(c_t,j)/1 - _t,j(c_t,j)) ≤(√(V̂_t, j(c_t,j)/n_t, j(c_t,j)2ln(3/δ)) + 3ln(3/δ)/n_t, j(c_t,j))√(_t,j(c_t,j)/1 - _t,j(c_t,j)), where the last inequality holds because of the assumption that Λ_δ,t holds. Since p̂_t, j(c_t,j) ∈ [α_t, j(c_t,j), 1-α_t, j(c_t,j)] we have that (√(_t,j(c_t,j)(1-_t,j(c_t,j))/n_t, j(c_t,j)2ln(3/δ)) + 3ln(3/δ)/n_t, j(c_t,j))√(_t,j(c_t,j)/1 - _t,j(c_t,j)) ≤(√(2/βln(3/δ)) + 3/βln(3/δ))√(_t,j(c_t,j)(1 - _t,j(c_t,j))) where we used that α_t, j(c_t,j) ≤β/n_t, j(c_t,j)≤ 1 -. 
The above implies that _Z[_[𝕀[≠ y_t]]] ≤(1 + (√(2/βln(3/δ)) + 3/βln(3/δ)))^K∏_j = 1^K 2√(_t,j(c_t,j)(1 - _t,j(c_t,j))) Since β = 18ln(3/δ)K^2 we have that (1 + (√(2/βln(3/δ)) + 3/βln(3/δ)))^K ≤ (1 + 1/3K)^K ≤ 1/(1 - 1/3) = 3/2 and thus _Z[_[𝕀[≠ y_t]]] ≤3/4∏_j = 1^K 2√(_t,j(c_t,j)(1 - _t,j(c_t,j))) = 3/4∏_j = 1^K (1 - 4( - _t, j(c_t, j))^2) ≤3/4exp(-2( - _t,j(c_t,j))^2) where the last inequality is due to 1 + x ≤exp(x). * Denote by g_t = exp(-2( - _t,j(c_t,j))^2). Using the fact that exp(x) and ( - x)^2 are convex in x we can see that 3/4exp(-2( - _t,j(c_t,j))^2) = 7/8exp(-2( - p_j(c_t,j))^2) + 7/8exp(-2( - _t,j(c_t,j))^2) - 7/8exp(-2( - p_j(c_t,j))^2) - 1/8 g_t ≤7/8exp(-2( - p_j(c_t,j))^2) + 28/8 g_t |p_j(c_t,j) - _t,j(c_t,j)| - 1/8 g_t ≤7/8exp(-2( - p_j(c_t,j))^2) + g_t^2/12 + 3 (28/8)^2 K |p_j(c_t,j) - _t,j(c_t,j)|^2 - 1/8 g_t ≤7/8exp(-2( - p_j(c_t,j))^2) + 3 (28/8)^2 K |p_j(c_t,j) - _t,j(c_t,j)|^2, where the third inequality is because ab ≤a^2/2η + η/2b^2 for η > 0 and the last inequality is due to the fact that g_t ≥ g_t^2. Therefore, we have that 3/4exp(-2( - _t,j(c_t,j))^2) ≤7/8exp(-2( - p_j(c_t,j))^2) + 3 (28/8)^2 K |p_j(c_t,j) - _t,j(c_t,j)|^2 ≤7/8exp(-2( - p_j(c_t,j))^2) + 6 (28/8)^2 K (2ln(3/δ)/n_t,j(c_t,j) + (3ln(3/δ)/n_t,j(c_t,j))^2), where the last inequality is due to the assumption that Λ_δ,t holds. * Since we assume that Λ_δ,t holds we have that min_∈̧{exp(-2( - p_j(c_j))^2) + λ c_j}≥exp(-2( - _t, j(c_t, j))^2) + λ c_t, j. Denote by h_t = exp(-2( - p_j(c_t, j))^2) By the above inequality and the fact that exp(x) and ( - x)^2 are convex, we have that exp(-2( - p_j(c_t,j))^2) + λ c_j - min_∈̧{exp(-2( - p_j(c_j))^2) + λ c_j} ≤exp(-2( - p_j(c_t,j))^2) - exp(-2( - _t, j(c_t, j))^2) ≤ 4 h_t|_t, j(c_t, j) - p_j(c_t,j)| ≤1/8h_t^2 + 48 K|_t, j(c_t, j) - p_j(c_t,j)|^2 ≤1/8h_t^2 + 96K(2ln(3/δ)/n_t,j(c_t,j) + (3ln(3/δ)/n_t,j(c_t,j))^2) where the third inequality is because ab ≤a^2/2η + η/2b^2 for η > 0 and the last inequality is due to the assumption that Λ_δ,t holds. Thus, we have that 7/8exp(-2( - p_j(c_t,j))^2) + λ c_j = exp(-2( - p_j(c_t,j))^2) + λ c_j - h_t ≤min_∈̧{exp(-2( - p_j(c_j))^2) + λ c_j} + 96K(2ln(3/δ)/n_t,j(c_t,j) + (3ln(3/δ)/n_t,j(c_t,j))^2), where we used that h_t^2 ≤ h_t. Denote by = _∈̧{exp(-2( - p_j(c_j))^2) + λ c_j} ^̧⋆ = _∈̧[0,1]^K{exp(-2( - p_j(c_j))^2) + λ c_j} Now, since is such that for any c^⋆∈ [0,1] there is a c̃∈ that satisfies |c̃ - c^⋆| ≤ε we have that exp(-2( - p_j(c̃_j))^2) + λc̃_j - (exp(-2( - p_j(c^⋆_j))^2) + λ c^⋆_j) ≤ 4|p_j(c̃_j) - p_j(c^⋆_j)|+ λ |c̃_j - c^⋆_j| ≤ (4L + λ) |c̃_j - c^⋆_j| ≤ (4L + λ)Kε, where in the first inequality we used the same bound as we used in equation (<ref>). By combining equations (<ref>) and (<ref>) we can see that 7/8exp(-2( - p_j(c_t,j))^2) + λ c_j≤min_∈̧[0,1]^K{exp(-2( - p_j(c_j))^2) + λ c_j} + (4L + λ)Kε + 96 K(2ln(3/δ)/n_t,j(c_t,j) + (3ln(3/δ)/n_t,j(c_t,j))^2) * We work in the case where p̂_t, j_t(c_t,j_t) ≤α_t, j_t(c_t,j_t). The case where 1 - p̂_t, j_t(c_t,j_t) ≤α_t, j_t(c_t,j_t) follows from symmetric arguments. 
Since we still work in the case where Λ_δ,t holds we have that _Z[𝕀[≠ y_t]] + λ c_t, j = p_j_t(c_t, j_t) + λ c_t, j ≤p̂_t, j_t(c_t,j_t) + |p_j_t(c_t, j_t)-p̂_t, j_t(c_t,j_t)| + λ c_t, j ≤α_t, j_t(c_t,j_t) + √(p̂_t, j_t(c_t,j_t)/n_t, j(c_t,j_t)2ln(3/δ)) + 3ln(3/δ)/n_t, j_t(c_t,j_t) + λ c_t, j ≤α_t, j_t(c_t,j_t) + √(α_t, j_t(c_t,j_t)/n_t, j(c_t,j_t)2ln(3/δ)) + 3ln(3/δ)/n_t, j_t(c_t,j_t) + λ c_t, j = β + 3ln(3/δ) + √(2 βln(3/δ))/n_t, j(c_t,j_t) + λ c_t, j ≤2β + 4ln(3/δ)/n_t, j_t(c_t,j_t) + λ c_t, j ≤min_∈̧[0,1]^K{exp(-2( - p_j(c_j))^2) + λ c_j} + 2β + 4ln(3/δ)/n_t, j_t(c_t,j_t) + (4L + λ)Kε + 96 K(2ln(3/δ)/n_t,j(c_t,j) + (3ln(3/δ)/n_t,j(c_t,j))^2), where the second inequality follows from the assumption that Λ_δ,t holds, the third inequality follows from p̂_t, j_t(c_t,j_t) ≤α_t, j_t(c_t,j_t), and the final inequality follows from adding 7/8Φ((_̧t)) and Lemma <ref>. Let be such that for any c^⋆∈ [0,1] there is a c̃∈ that satisfies |c̃ - c^⋆| ≤ε and let β = 18ln(3/δ)K^2. Then for any δ∈ (0, 1) _Z[_[𝕀[≠ y_t]] + λ c_t, j] ≤ T min_∈̧[0,1]^K{exp(-2( - p_j(c_j))^2) + λ c_j} + (1 + λ K)T^3Kδ + T (4L + λ)Kε + N(1 + λ K + 2592 K^2(ln(3/δ))^2 + (1 + log(T))(K(2β + 4ln(3/δ)) + 576 K^2 ln(3/δ)) )) First, since 𝕀[≠ y_t] + λ c_t,j≤ 1 + K λ we have that [𝕀[≠ y_t] + c_t,j] ≤ N(1 + K λ) + [𝕀[≠ y_t]+ c_t,j]. By the Tower rule we have that [𝕀[≠ y_t] + c_t,j] = [[𝕀[≠ y_t] + c_t,j|_t ] ] = [𝕀[Λ_δ, t] [𝕀[≠ y_t] + c_t,j|_t ] ] + [𝕀[Λ_δ, t^c] [𝕀[≠ y_t] + c_t,j|_t ] ] ≤[𝕀[Λ_δ, t] [𝕀[≠ y_t] + c_t,j|_t]] + δ K N T^2 (λ K+1), where the last inequality follows from Lemma <ref>. The remainder of the proof consists of controlling the conditional expectation on the r.h.s. of the above equation. In the remainder of the proof we assume Λ_δ, t. By Lemmas <ref>, <ref>, <ref>, and <ref> we have that [𝕀[≠ y_t] + c_t,j|_t] ≤min_∈̧[0,1]^K{Φ(()̧) + λ c_j} + 2β + 4ln(3/δ)/n_t, j_t(c_t,j_t) + (4L + λ)Kε + 288 K(2ln(3/δ)/n_t,j(c_t,j) + (3ln(3/δ)/n_t,j(c_t,j))^2) For t ≥ N = || we have that n_t, j(c) ≥ 1. We continue by bounding ∑_t=N+1^T (2ln(3/δ)/n_t,j(c_t,j) + (3ln(3/δ)/n_t,j(c_t,j))^2) = ∑_t=N+1^T ∑_c ∈𝕀[c_t,j = c](2ln(3/δ)/n_t,j(c) + (3ln(3/δ)/n_t,j(c))^2) ≤ K N ((1 + log(T))2ln(3/δ) + 9(ln(3/δ))^2) , where the inequality is ∑_t=1^T 1/t≤ 1 + log(T) and ∑_t=1^T 1/t^2≤ 2. Similarly, we have ∑_t=N+1^T 2β + 4ln(3/δ)/n_t, j_t(c_t,j_t)≤ K N (2β + 4ln(3/δ))(1 + log(T)) Thus, combining the above equations we obtain [𝕀[≠ y_t]] + λ c_t, j] ≤ T min_∈̧[0,1]^K{exp(-2( - p_j(c_j))^2) + λ c_j} + N(1 + λ K) + (1 + λ K)NT^2Kδ + K N (2β + 4ln(3/δ))(1 + log(T)) + T (4L + λ)Kε + 288 K^2 N ((1 + log(T))2ln(3/δ) + 9(ln(3/δ))^2) = T min_∈̧[0,1]^K{exp(-2( - p_j(c_j))^2) + λ c_j} + (1 + λ K)NT^2Kδ + T (4L + λ)Kε + N(1 + λ K + 2592 K^2(ln(3/δ))^2 + (1 + log(T))(K(2β + 4ln(3/δ)) + 576 K^2 ln(3/δ)) )), which completes the proof after replacing β = 8ln(3/δ)K^2. § ADDITIONAL EXPERIMENTS Here we present the additional experimental results announced in Section <ref>.
http://arxiv.org/abs/2307.01927v1
20230704212059
Safe Connectivity Maintenance in Underactuated Multi-Agent Networks for Dynamic Oceanic Environments
[ "Nicolas Hoischen", "Marius Wiggert", "Claire J. Tomlin" ]
eess.SY
[ "eess.SY", "cs.SY" ]
IEEEexample:BSTcontrol HJHamilton-Jacobi HJIHamilton-Jacobi-Isaacs DPDynamic Programming ODEOrdinary Differential Equation MPCModel Predictive Control MDPMarkov Decision Processes RBF-NNRadial Basis Function Neural Network RMSERoot Mean Squared Error MSE[MSE]mean-squared-error RLReinforcement Learning PDEPartial Differential Equation ASVAutonomous Surface Vehicle MASMulti-Agent-Systems H-MASHierarchical Control of Multi-Agent-Systems IPMIsolated Platform Metric FRSForward Reachable Set IPMIsolated Platform Metric HJRHamilton Jacobi Reachability HJR-ReactiveHamilton Jacobi Reachability Reactive Control HJR-FlockingHamilton Jacobi Reachability Flocking Control CBFsControl Barrier Functions LISICLow Interference Safe Interaction Controller Safe Connectivity Maintenance in Underactuated Multi-Agent Networks for Dynamic Oceanic Environments Nicolas Hoischen^*1, Marius Wiggert^*2, and Claire J. Tomlin^2 ^* Both authors contributed equally to this research. ^1N.H. is with ETH Zurich, Switzerland, ^2 M.W., and C.J.T. are with EECS at the University of California, Berkeley, USA. For inquiries contact: mariuswiggert@berkeley.edu The authors gratefully acknowledge the support of the C3.ai Digital Transformation Institute, the NASA ULI Grant on Safe Aviation Autonomy, the DARPA Assured Autonomy Program, the SRC CONIX Center, the ONR BRC program and the Zeno Karl Schindler Foundation. July 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Autonomous Multi-Agent Systems are increasingly being deployed in environments where winds and ocean currents can exert a significant influence on their dynamics. Recent work has developed powerful control policies for single agents that can leverage flows to achieve their objectives in dynamic environments. However, in the context of multi-agent systems, these flows can cause agents to collide or drift apart and lose direct inter-agent communications, especially when agents have low propulsion capabilities. To address these challenges, we propose a Hierarchical Multi-Agent Control approach that allows arbitrary single agent performance policies that are unaware of other agents to be used in multi-agent systems, while ensuring safe operation. We first develop a safety controller solely dedicated to avoiding collisions and maintaining inter-agent communication. Subsequently, we design a low-interference safe interaction (LISIC) policy that trades-off the performance policy and the safety controller to ensure safe and optimal operation. Specifically, when the agents are at an appropriate distance, LISIC prioritizes the performance policy, while smoothly increasing the safety controller when necessary. We prove that under mild assumptions on the flows experienced by the agents our approach can guarantee safety. Additionally, we demonstrate the effectiveness of our method in realistic settings through an extensive empirical analysis with underactuated Autonomous Surface Vehicles (ASV) operating in dynamical ocean currents where the assumptions do not always hold. 
The use of Autonomous Multi-Agent systems in various applications, such as inspection and surveying, data collection, and open ocean aquaculture, has increased in recent years. However, agents are exposed to winds and currents that render communication challenging. Recent work has demonstrated that agents can take advantage of these flows to achieve their respective with very little energy. This paper focuses on enabling multi-agent networks to stay connected, avoid collisions, and satisfy single agent performance objectives in complex flows. We propose a Hierarchical Multi-Agent-Systems (H-MAS) approach with low single agent performance interference while maintaining connectivity and avoiding collisions in the network. The problem is decomposed into two simpler, loosely coupled problems, where the lower level solves for single-agent performance controllers that optimize independent objectives, and the higher-level ensures low-interference safe interactions. We demonstrate the effectiveness of our approach through extensive empirical analysis in under-actuated ASV agents operating in non-linear, time-varying dynamical ocean currents. § INTRODUCTION Autonomous Multi-Agent systems from drones, to balloons, to ocean surface vessels and underwater robots are increasingly explored for various applications from inspection and surveying, to providing internet access, collecting data and open ocean aquaculture <cit.>. In many applications the agents communicate amongst each other for various purposes such as coordination to achieve a joint objective, to ensure internet coverage <cit.>, or to reduce the amount of communication needed to an external centralized controller. Local communication often relies on limited-range systems e.g. sonar or radar, requiring agents to stay close to each other for network connectivity (see Fig. <ref>). When an agent operates in the oceans and air it is exposed to winds and currents. Most approaches consider these as disturbances for which an overactuated control needs to compensate. What if instead, the agent takes advantage of these flows? Recent work demonstrated that by going with the flow and using small actuation strategically to nudge itself into favorable flows, the agent is able to achieve its objective with very little energy, be it station-keeping of balloons or navigating in the oceans <cit.>. Inspired by these advances, we want to bring these powerful controllers to multi-agent networks operating in complex flows. DP approaches provide a value function, that yields optimal individual agent controls for an arbitrarily high number of agents without any additional cost beyond a cheap gradient computation. This is especially powerful for multi-agent problems where the objective can be decomposed in the sum of independent single agent objectives, e.g. in floating seaweed farms, the seaweed growth of each agent does not depend on the growth of other agents <cit.>. Given such individual agent performance controllers, our goal is to develop a method that ensures safe interaction of the multiple agents. Specifically, we focus on network connectivity and avoiding collisions. Our insight is that this structure allows us to tackle the multi-agent problem with three different controllers in a H-MAS approach (Fig. <ref>). From the control perspective, this is challenging because of two key reasons. First, in the underactuated setting disconnections are sometimes unavoidable as the non-linear, time-varying flows can push agents in opposing directions. 
The safe interaction controller needs to be resilient and recover connectivity after it was lost. Second, constraint satisfaction needs to be traded-off intelligently with the performance objective of each agent as they can often be in conflict. For example, a time-optimal controller for an agent would prefer staying in strong flows whereas the safe interaction controller needs to trade this off with maintaining connected to other agents (Fig. <ref>). H-MAS can be viewed from the perspective of hierarchical reinforcement learning <cit.>, which shares substantial similarities with our approach of breaking down complex tasks into two more manageable sub-tasks. In H-MAS, agents are organized into multiple levels of hierarchy, with higher-level agents having more authority and control over lower-level agents, designated as followers <cit.>. For instance, <cit.> solves path planning and ocean vehicles coordination separately with a leader-follower hierarchical structure. Graph methods provide a solid theoretical foundation for analyzing connectivity properties in distance-based control applications <cit.>. Our interest is drawn to flocking techniques incorporating a single agent navigation task via a dynamic virtual leader γ-agent <cit.>, which maintain connectivity by influencing the agents' behavior to follow the movement of their neighbors, while avoiding collisions. Extensions to multiple virtual leaders are proposed in <cit.> holding promise for the generalization of virtual leaders as performance controllers. While most of the flocking schemes generally assume simple double integrator dynamics, adaptive flocking also emerged to handle to non-linear uncertain dynamics <cit.>. More recently, MPC has been successful in ensuring connectivity and collision-free operation in MAS <cit.>. MPC approaches to connectivity or safety <cit.>, in addition to being computationally intensive, often assume position-invariant dynamics, which do not hold in dynamic ocean environments. This motivates a cheaper reactive strategy approach in this context. Determining the follower position with respect to the leader in <cit.> requires solving an additional optimization problem to achieve formation control, which adds complexity and feasibility issues for underactuated agents. On the other hand, adaptive and robust flocking schemes rely on overactuated agents to overcome the disturbances. Adaptive flocking in underactuated systems has been considered using a RBF-NN to approximate uncertain non-linear dynamics <cit.>, but can easily overfit the training data, which is a significant concern given the sparse and stochastic nature of ocean data. Existing literature on flocking has primarily focused on tracking a reference trajectory of a virtual leader or multiple virtual leaders. However, in complex flows, an optimal feedback control policy leads to significantly better results than tracking a reference trajectory <cit.>. Graph theoretic methods have proven to be effective in providing a theoretical foundation for analyzing the connectivity properties of the agents' interaction topology for consensus problems or formation control to name a few <cit.>, but have so far found limited application to dynamic environment with non-linear complex flows. Nevertheless, reactive approaches remain interesting for our application where the communication and collision constraints can be expressed as relative distances amongst agents. 
With the objective of ensuring local communication in view, the advantage of flocking is that it reproduces the Reynolds heuristic rules, namely flock centring, collision avoidance, and velocity matching <cit.>. Furthermore, flocking does not prescribe a specific structure beyond keeping agents close to each other, which makes it promising for our low-interference hierarchical controller compared to other distance-based control methods such as formation control, which would add unnecessary complexity to the connectivity problem. Even though MPC has often achieved state-of-the-art performance in imposing connectivity and collision-free constraints on MAS, its performance relies on accurate dynamics modelling, which becomes challenging in dynamic oceanic environments where only forecasts are available over the prediction horizon; this also affects the trade-off between reactive and predictive approaches, with prediction being most relevant when open-loop dynamics such as background flows dominate. Interpreting the collision-free and connected network configuration as a safe set, similarities can be drawn with MPC safety filters, which also blend a performance control input <cit.> with safety constraints. However, their application to under-actuated multi-agent autonomous systems has not been explored so far to the best of the authors' knowledge, and might become quite computationally intensive. Another perspective on H-MAS can be seen through the lens of hierarchical reinforcement learning, as demonstrated in <cit.>. This framework shares significant similarities with our approach of breaking down a complex task into two smaller, more manageable sub-tasks, in our context (1) connectivity maintenance and collision avoidance for the MAS network and (2) deriving the performance controller on the single-agent level. Analogous to the hierarchical reinforcement learning framework presented in <cit.>, the principle of achieving a complex control task through a hierarchy of simpler controllers in H-MAS has appeared in many applications, for instance in energy grids <cit.>, based on a variety of methods including reinforcement learning <cit.>. There are also applications to smart control of microgrid systems and multilevel converters <cit.>. Especially relevant, although not explicitly framed as a hierarchical approach, is the similar context of limited-speed vehicles sensitive to ocean currents, where dynamic formation control intrinsically accounts for connectivity maintenance and collision avoidance. Nevertheless, the use of optimization techniques to enforce regular polygon formations can be quite conservative and increases the complexity for the purely distance-based constraints of this work. Most of the above literature based on reactive or flocking approaches considers simple single- or double-integrator dynamics. In realistic scenarios such as ocean navigation, however, the agents are subject to non-linear, time-varying dynamics, and small autonomous vessels are often under-actuated. Therefore, disconnections are sometimes unavoidable and the multi-agent scheme needs to be resilient and able to re-establish connectivity. Flocking handles the group objective through the introduction of a dynamic γ-agent, known as the virtual leader <cit.>. The virtual leader can be dynamic or static and is considered a moving rendezvous point.
As the γ-agent is described by its own intrinsic dynamics, possibly different from that of any agent <cit.>, the flock is set to track this virtual leader using a navigational feedback term in the control law. Extensions to multiple leaders have been proposed, such as in <cit.>, <cit.>, <cit.>, and are of particular interest since we assume that a performance controller is known for each member of the flock. Flocking is generally applied to agents with double-integrator dynamics, where the control is an acceleration input. While flocking robustness and adaptability have been extended to more challenging environments with non-linearities and parameter uncertainties under certain assumptions <cit.>, <cit.>, <cit.>, <cit.>, it is rarely combined with under-actuated agents. Some adaptive flocking control methods handling under-actuation have considered approximating the uncertain nonlinear dynamics with RBF Neural Networks <cit.>. However, learning approaches for approximating the uncertain dynamics are more challenging when only sparse data is available on a grid, as is the case for ocean forecasts. Additionally, if the performance controller is a planning algorithm <cit.>, it may itself be subject to uncertainty due to forecast errors and the lack of a realistic distribution of the flows. To address the above shortcomings, we propose a Low Interference Safe Interaction Controller (LISIC), blending a single-agent performance controller with a flocking-inspired safety controller. In addition to ensuring safe operation of the network in terms of collisions and communication maintenance, our approach also enables recovery in case of connectivity failures. The dynamics of communication and information sharing could later be handled on another level with a Plug-and-Play control scheme <cit.>, but this is not in the scope of this work. In Sec. <ref>, we introduce important background and relevant metrics to measure connectivity in terms of time and network topology, as well as the trade-off with single-agent performance. We state our theoretical results in Sec. <ref> and Sec. <ref>. Finally, we assess the performance of our approach by conducting a large-scale empirical evaluation with agents that are underactuated in the sense that their propulsion is smaller than the surrounding flows.
Flocking algorithms generally embody the three heuristic rules of Reynolds, known as flock centering, collision avoidance and velocity matching <cit.>. In this work, we focus on the setting where the multi-agent objective can be decomposed into single agent objectives that can be optimized independently. For example, in floating seaweed farms, the seaweed growth of each agent does not depend on the growth of other agents. As a state-space system implication, this means the multi-agent system is not strictly interconnected i.e. the state of each agent does not depend on the other agent states of the system. However, in such applications the agents are still coupled by constraints, such as not colliding with each other and maintaining inter-agent connectivity. This motivates a hierarchical approach to decompose the objective into two simpler, smaller problems, similar to hierarchical reinforcement learning principles <cit.>. H-MAS have foremost seen application to smart control of microgrid systems <cit.> or control of modular multilevel converters <cit.>. Although not explicitly listed as a hierarchical approach, <cit.> solves path planning and ocean vehicles coordination separately with a leader-follower description. Especially relevant is the similar context of limited-speed vehicles sensitive to ocean currents, where dynamic formation control intrinsically accounts for connectivity maintenance and collision avoidance. Nevertheless, the use of optimization techniques to enforce regular polygon formations could reveal quite conservative and increase the complexity for the solely distance-based constraints of this work. In this context of H-MAS, a performance controller is assumed to be known for each agent and the goal is to minimally interfere with the individual agent objective while maintaining connectivity and collision avoidance. For example, a time-optimal control to a target is available for each agent and constraint satisfaction should be traded-off intelligently with the performance objective. In most of the above literature based on reactive or flocking approaches, simple integrator or double integrator are considered. However, in realistic scenarios such as ocean navigation, the agents are subject to non-linear time varying dynamics and small autonomous vessels are often under-actuated. Therefore, disconnections are sometimes not avoidable and the multi-agent scheme need to be resilient and able to achieve connectivity. As local communication are to be maintained, a flock configuration maintaining an ideal distance between agents is of particular interest. In this work, we propose a flocking-inspired multi-agent control approach blending a nominal performance single agent control with constraint violation penalization using potential fields <cit.>. While flocking robustness and adaptability have been extended to more challenging environments with non-linearities and parameter uncertainties under certain assumptions <cit.>, <cit.> <cit.> <cit.>, it is rarely combined with under-actuated agents. Some adaptive flocking control method handling under-actuation have considered approximating the uncertain nonlinear dynamics through RBF Neural Networks <cit.>. However, learning approaches for approximating the uncertain dynamics are more challenging when only sparse data is available along a grid, such as for ocean forecasts. 
Additionally, if the performance controller is a planning algorithm <cit.>, it might be also subject to uncertainty due to forecast error and lack of realistic distribution of the flows. Therefore, we stress the importance of our approach for recovery abilities in case of connectivity failures, as the communication network evolves over time. The dynamics of communication and information sharing could then later be handled on a another level with a Plug-and-Play control scheme <cit.>, but is not in the focus of this work. We demonstrate our control hierarchy in realistic, challenging non-linear, under-actuated dynamics and compare controllers empirically. Performance is quantified in terms of connectivity quality, the performance objective satisfaction, and the recovery abilities in case of connectivity failures. In this section, we describe the relevant literature spanning The related work consists of Lots of great related work! It's definitly way too much for the paper, needs to be clustered/shortened to the key aspects/differences to us. For the thesis write it out more. The structure I like most for Introduction is: Motivation (what is the problem in words), then related work (what have others done to solve similar problems), then what is our contribution and how does it with with existing work. Maybe that helps to prioritize which aspects to keep. §.§ Multi-Agent Systems background: * <cit.>, survey on the control of multi-agents with formation control, consensus, pairwise connectivity maintenance problem, using pairwise connectivity constraint set, rendezvous algorithm <cit.> * Usage of Reachability, Level Set Method in non linear complex flows but for formation control of ocean surface vessels <cit.> * Multi Agent network analysis based on graph theory. Interesting because chapter 7 introduces mobile robots an Δ disk proximity graph, and cooperative dynamics (limited to single integrator, linear, no uncertainty). Propose decentralized control laws for dynamic graphs (graph chaing as the agents move in and out of each other's sensory range). Indicator value to know whether two agents are connected, weighted graph-based feedback. Similar to flocking, presents decentralized control laws over neighbors as a function of the edge tension. Raises a total tension energy of the graph <cit.>. * Base Paper for our reactive control approach, (again the notion of pairwise connectivity introduced in <cit.>. Decentralized, communication and collision avoidance while leading each robot to their (individual goal), but single integrator dynamics (linear). Hard switching between modes (achieve connectivity, maintain connectivity, navigate towards the goal) <cit.> * Work on connectivity maintenance, proximity-based communication models (e.g. disk based). Employs adjacency and Laplacian matrix and spectral properties. Centralized or distributed algorithms approaches to maintain, increase control connectivity in mobile robot networks. Approaches with convex optimization, subgradient-descent to maximize algebraic connectivity, but also potential fields. Application to rendezvous, flocking and formation control <cit.> * Approach based on "k-hop" connectivity, centralized control framework guaranteeing graph connectivity at all time. 
algebraic model for connectivity: k-connectivity matrix that captures the connectivity property of a graph.<cit.>, * BUT none of them address non linear dynamics and underactuated setting §.§ Flocking behaviour: * Base paper for flocking <cit.> * Notion of artificial potential fields. Connectivity condition is translated to differentiable constraints on individual agent motion by considering the dynamcis of the Laplacian matrix and its spectral properties: <cit.> * Use flocking for network that change arbitrarily (no dwell time between consecutive switches), guarantee stabilization of inter-agent distances with flocking <cit.> §.§ Flocking with NonLinear/Robust approaches: Here all are listed for 2nd order systems. Generally have a velocity matching term that is modified to account for nonlinearities or disturbances. We dont have a velocity matching term as our dynamics are first order (but our flocking control could be extended with one of their approach if the system was 2nd order) * Nonlinear dynamics, agent coupling weights are perturbed by asymmetric uncertain parameters. <cit.> * Flocking with heterogeneous non linear dynamics, while enabling agents to move with same velocity. mild assumption that initial network should be connected. Nonlinear dynamical single virtual leader <cit.> * Adaptive Flocking with uncertain parameters. Design adaptvie flocking of nonlinear multi agents systems with uncertain parameters <cit.> * Robust Flocking for 2nd order non linear systems with a leader and external disturbances. Also cope with the different intrinsic dynamic of the followers. assumption that external disturbance is bounded <cit.> * Flocking with uncertain non linea multi-agent systems. Two distributed adaptive event triggered control algorithms for leaderless and leader-follower flocking separately <cit.> * Most interesting paper for us: Flocking control, unknown nonlinear dynamics with virtual leader information. Bounded assumption on the unknown nonlinear dynamics considered, weaker than Lipschitz condition. To avoid fragmentation: construct new potential function based on a penalty idea when intial network is disconnected (this is interesting for us as disconnection can happen during operation!). <cit.> * OK so literature has considered non-linear dynamics but what about underactuated ?? §.§ Underactuated work on flocking * Underacatuated hovercrafts circular flocking with collision avoidance. Design control in function of equation of motion. Only circular flocking <cit.> * Leader-follower flocking of underactuated ASVs in presence of uncertain dynamics. Design a distributed adaptive flocking controller, and employ RBF NN to approximate the uncertain nonlinear dynamics in the system. Also very model based <cit.> §.§ Multiple virtual leaders Finally, for our application we assume that all agents have access to their single nominal controller, which is different than most of the single virtual leader-follower flocking schemes * Flock of autonomous agents to follow multiple virtual leaders, using only position information (still 2nd order dynamics though) <cit.> * Flocking with multiple virtual leader based on connectivity preserving. Idea is to design stable control law with a navigation feedback to follow agents equipped with virtual leaders <cit.>. §.§ Conclusion: our application blends nonlinear, time varying dynamics + underactuated agents + multiple virtual leaders + uncertain dynamics, which literature has (to the best of my knowledge not tackled all together). 
Difficulty is that adaptive, robust approaches to flocking might work if we were not underactuated: here sometimes there is nothing that we can do to avoid fragmentation. (Only predictive could anticipate it in advance). So our best shot is to have a potential function that allows to achieve connectivity again. We need to argue: (1) why our hierarchical approach of nominal and flocking, (2) If we do not include MPC, then we need to argue why not somehow (maybe b.c. too compute expensive, no formulation available?) We increasingly deploy autonomous systems in the air and oceans. Beyond airplanes and ships, there are emerging applications such as balloons in the stratosphere for delivering internet access <cit.>, airships, ocean gliders and active drifters for collecting ocean data <cit.>, floating solar farms storing energy in fuels <cit.>, and floating seaweed farms for biomass and carbon sequestration <cit.>. The overactuated navigation approaches used in ships and planes require significant power to overcome the drag forces inherent in navigating through fluids. The power required scales with P = F_Drag·Δ s/Δ t∝ A_C · v^3, where A_C is the cross sectional area and v the velocity relative to the surrounding fluid. This makes overactuated control prohibitively expensive for energy constraint applications such as long duration environmental monitoring systems and floating structures with large cross-sectional areas. Hence, we investigate an energy-efficient steering paradigm leveraging the winds and ocean currents around these systems: navigating agents by hitchhiking complex flows. By using the drift of these non-linear, time-varying flows for propulsion, only a minimal amount of energy is required to nudge the agents into beneficial flows, e.g., balloons going up and down to use different winds or small horizontal propulsion in the oceans to navigate bifurcations in the flow. Because the required power scales cubically with the relative velocity, navigating with 1/10th of the speed means only 1/1000th of the power is required which enables a host of new applications. From a control perspective, there are three core challenges when navigating by hitchhiking these flows. First, the dynamics of the wind and ocean flows are non-linear and time-varying. Second, in realistic scenarios only coarse forecasts of the flows are available and these differ from the true currents <cit.>. Third, when agents are underactuated they cannot easily compensate for these forecast errors with classical methods such as robust control. There is a rich literature on planning time- and energy-optimal paths in flows both in the oceans <cit.> and the air <cit.>. For planning in known flows researchers have proposed HJ reachability <cit.>, non-linear programming <cit.>, evolutionary algorithms <cit.>, and graph-based techniques such as A* <cit.>, RRT <cit.>, and time-varying Dijkstra <cit.>. Non-linear programming, evolutionary algorithms, and graph-based methods suffer from discretization error and the non-convexity of the optimization problem which can cause solvers to get stuck in local minima and infeasible solutions. In contrast, HJ reachability is guaranteed to obtain time-optimal paths when the flows are known, as it solves the exact continuous-time control problem via dynamic programming. As the wind and ocean forecasts are never perfect, different paradigms and their combinations have been explored for navigation when the true flow is not known. 
When we have access to a realistic distribution of the flows we can use probabilistic methods to optimize the expected energy or travel time of a policy using stochastic reachability <cit.> or MDP <cit.>. Previous work also explored risk-optimal path planning in this stochastic setting <cit.>. Unfortunately, for ocean surface currents, only daily deterministic forecasts are available from the leading ocean forecasting providers HYCOM <cit.> and EU Copernicus Marine <cit.>. There are ocean dynamics models with stochastic forcing to compute accurate probabilistic forecasts <cit.>, however these are too computationally expensive for real-time. A heuristics approach is then to assume Gaussian noise <cit.> or to increase prediction uncertainty around high velocity currents <cit.> which does not capture the complexity of the forecast errors. Another paradigm is to plan on the deterministic forecast flows and follow the planned path with tracking algorithms which compensate for the drifts and can guarantee a tracking error bound <cit.>. While this is important in tight spaces around obstacles, minimizing tracking error does not necessarily lead to time and energy optimal paths. The MPC paradigm uses frequent replanning from the positions of the agent to handle the dynamics error <cit.>. In the robust control methods we assume a bound on the forecast error to derive a path and controls that reach the target despite bounded adversarial disturbances using reach-avoid HJ reachability formulations <cit.> or approximations <cit.>. However, for underactuated agents often there exists no robust control for realistic forecast error bounds. Lastly, recent work has explored the application of deep RL to navigate in flows <cit.>. The controller trained by Gunnarson et. al. on vortical flows using only local current information navigates successfully in these flows but unsurprisingly it fails in other flow structures <cit.>. The Loon team trained an RL agent for station-keeping of balloons with forecasts as inputs and it performed well in long-duration real-world experiments after training on an immense distribution of flows <cit.>. In this work, we focus on the problem of reliable navigation of underactuated agents leveraging flows in the realistic setting when regular deterministic forecasts are available. We define reliability empirically as the success rate of a controller in navigating from a start point to a target region over a set of start-target missions, as developed in Section <ref>. In this paper we make three core contributions: (1) We propose a control approach enabling full time-horizon replanning at every time step with a single computation. For that we build on the recent Multi-Time HJ Reachability formulation <cit.> for computing a time-optimal value function (Time-to-reach) on the latest forecast and use it for closed-loop control. This can be thought of as full-time horizon MPC at every step; (2) We are the first to evaluate and compare the reliability of closed-loop control schemes for underactuated agents in ocean flows in the setting of daily forecasts with realistic forecast error. 
We evaluate performance across a large set of multi-day start-to-target missions distributed spatially across the Gulf of Mexico and temporally across four months using HYCOM and Copernicus Ocean Forecasts <cit.> We compare several methods on this dataset across multiple metrics and find that our control architecture significantly outperforms other methods in terms of reliability; (3) We quantify how the reliability of various control methods is affected by the forecast error. This paper is organized as follows: In Sec. <ref> we define the problem; followed by Sec. <ref> which details the proposed control architecture and our algorithm to compute the time-optimal value function. Sec. <ref> contains the closed-loop performance evaluation of our methods and baselines and we conclude with Sec. <ref> and outline future work. § PROBLEM FORMULATION In this section we first describe the systems dynamics and give a brief summary of connectedness in communication graphs. Then we define our problem statement and the metrics we use to measure constraint violation. §.§ System Dynamics We consider a swarm of N agents and use 𝒱 to describe the set of all agents. The dynamics for each agent i ∈𝒱 are given by: = v(,t) + g(,, t), t ∈ [0, T] ∈^n denotes the position of agent i in the n dimensional state space, where n=2 for a surface vessel on the ocean. The movement of the agent i depends on the drift of the surrounding flow v(,t) and the bounded control ∈𝕌∈^n_u where n_u is the dimensionality of the control. Let the agent trajectory resulting from Eq. <ref> be described by _i with _i(t) the state at t. For the global system of all N agents we use = [_1^⊤, _2^⊤, …,_N^⊤]^⊤, = [_1^⊤, _2^⊤, …, _N^⊤]^⊤, and respectively to describe the state, control, and trajectory. §.§ Communication Graph Preliminaries * Introduce concepts of edges, vertices, Laplacian matrix, connectivity so that it is clear also from the notation what we mean later in the paper * Also introduce the concept of alpha lattices in flocking The network topology of our MAS with state can be represented with a graph abstraction in order to model interactions among agents. The communication graph G(t) can be built from a set of finite vertices 𝒱 = {1, 2 … N} representing individual agents and a time-varying set of edges ℰ(t) ⊆{(i,j) ∈ (𝒱×𝒱), j ≠ i} representing direct communication between agents. We focus on undirected graphs implying that information can flow between agents in both directions. We further assume that only neighbors that are spatially close with respect to a distance measure d(, ) can communicate directly with each other. Given an upper communication threshold , the pair of vertices i,j is connected by an edge d(, ) < (i,j) ∈ℰ(t). The graph G(t) is said to be connected if there exists an undirected path between every pair of distinct vertices. We can analyze the connectivity of the graph with its Laplacian matrix L which is a symmetric and positive semi-definite matrix based on the adjacency and the degree matrix which we define next. The adjacency matrix A(t) is an n × n binary matrix that encodes which vertices are connected to each other A(G(t)) = [a_ij(t)] ∈{0,1} with a_ij(t) = 1 (i,j) ∈ℰ(t). a_i j(t)= 1, if (i, j) ∈ e 0, else. The valency or degree of a vertex i is denoted by deg(i, t) and represents the number of its incident edges which is the row-sum of the adjacency matrix deg(i, t) = ∑_j=1^N A_i j(t). The degree matrix D is then defined as the diagonal matrix D(G(t)) =diag(deg(i, t)). 
The Laplacian matrix L can then be inferred as L(G(t)) = D(G(t)) - A(G(t), which is a symmetric and positive semi-definite matrix. The eigenvalues of L let us measure the graphs connectivity. In particular, the second smallest eigenvalue λ_2(L(G(t)) is commonly called the algebraic connectivity or Fiedler value of the network, and captures the robustness of the network to link failures. The graph G(t) is connected only if and only if it is strictly positive i.e. λ_2(L(G(t))) > 0 <cit.>. §.§ Problem Statement We focus on multi-agent problems where the joint objective is the sum of independent objectives ℙ_i which can be sketched out as: min_π ∑_i=1^N ℙ_i(_i, (·)) s.t. ∀ t ∈ [t_0, T] (t) = v((t),t) + π((t)) global dynamics Eq. <ref> d(_i(t), _j(t)) > (i,j) ∈ V × V, i ≠ j λ_2(L(G((t), ))) > 0 The goal is to find the control policy π. The agents are coupled in only two constraints: the collision constraint (Eq. <ref>) where d(, ) represents the distance between agent i and j and the minimum safe distance, and second in maintaining a graph where all agents are connected to each other based on the communication range (Eq. <ref>). Our insight is that in this settings we can decompose the problem and handle the objectives and constraints on different levels with (1) a performance controller for each agent, (2) a safety controller , and (3) a low-interference safe interaction controller trading-off the two (Fig. <ref>). The performance controller of an agent i minimizes its ()_i = _π_iℙ_i(_i, (·)) only considering its own dynamics (Eq. <ref>). can be an arbitrary control policy from a fixed control signal to a feedback controller based on learning or dynamic programming (Sec. <ref>). In challenging settings like ours with non-linear, time-varying dynamics it is easier to design single agent feedback controllers than solving the coupled multi-agent problem above, e.g. for time-optimal navigation, reference tracking, or optimizing seaweed growth <cit.>. The safety controller , determines the control for all agents to ensure the interaction constraints are satisfied (Eq. <ref>, <ref>). Lastly, based on the control inputs and from the respective policies, the safe interaction controller decides the agents final control inputs = (, ). To achieve good performance the safe interaction controller should not interfere too much with while still ensuring connectivity and avoiding collisions. In this work we focus on designing and for an arbitrary . In Sec. <ref> we prove that our method guarantees constraint satisfaction under certain conditions on the maximum magnitude of the control 𝕌 and the flow field velocities v across the agents. Additionally, we test how our method performs in realistic ocean currents where these conditions are not always met and quantify the constraint violations with the metrics we defined below. §.§ Constraint Violation Metrics As in some settings it is impossible to guarantee constraint satisfaction, we now define the metrics we use to evaluate how much the constraints are violated in our experiments in Sec. <ref>. A collision happens between any of the agents in the swarm if ∃ t ∈ [0,T] at which Eq. <ref> is violated. We denote this with the collision indicator 𝕀_coll∈{0,1}. To measure various aspects of losing connectivity we use three metrics. First, for a binary measure if disconnections occur we define the disconnection indicator 𝕀_disconn∈{0,1} which is 1 if ∃ t ∈ [0,T] at which Eq. <ref> is violated and zero otherwise. 
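To make the graph quantities above concrete, here is a minimal sketch of how the adjacency matrix, Laplacian, and Fiedler value could be computed from the agent positions at a single time step, together with the resulting disconnection test. The Euclidean distance measure, the numerical tolerance, and the function names are assumptions made for illustration.

import numpy as np

def graph_laplacian(positions, r_comm):
    # positions: (N, n) array of agent states; r_comm: upper communication threshold.
    # a_ij = 1 iff the pairwise distance is strictly below the communication range.
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    A = (dist < r_comm).astype(float)
    np.fill_diagonal(A, 0.0)            # no self-loops
    D = np.diag(A.sum(axis=1))          # degree matrix D = diag(deg(i))
    return D - A                        # Laplacian L = D - A

def fiedler_value(positions, r_comm):
    # Second-smallest eigenvalue of L (algebraic connectivity); > 0 iff G is connected.
    return np.linalg.eigvalsh(graph_laplacian(positions, r_comm))[1]

def disconnected(positions, r_comm, tol=1e-9):
    # Per-time-step disconnection test underlying the indicator defined above.
    return fiedler_value(positions, r_comm) <= tol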
As we are interested in the network robustness against connectivity losses or link failures we additionally measure the minimum Fiedler value over time, the higher the more robust the communication network (Sec. <ref>): λ_2^min = min_t ∈ [0, T]λ_2(L(G((t), )) Lastly, it often matters for how long an agent is isolated from all other agents. Therefore, we introducing a new measure that we call IPM IPM = 1/T∫_t=0^T M(deg(i, t)=0) dt where M(deg(i, t)=0) counts the number of disconnected vertices, which corresponds to the number of zeros in the diagonal of the graph degree matrix D(G((t),) (Sec. <ref>). In the realistic setting, we compare the constraint violation of controllers empirically over a large, representative set of missions 𝕄 by evaluating the collision rate 𝔼_(t_0), t_0 ∼𝕄[𝕀_coll], the disconnection rate 𝔼_(t_0), t_0 ∼𝕄[𝕀_disconn], as well as the distributions of IPM and λ_2^min. In our setting where the performance objectives ℙ_i are minimum time-to-target for each agent i, the connectivity constraint often leads to a trade-off with the performance objective. Hence, we also quantify the degradation of the performance controller by quantifying the minimum distance the swarm center got to the target area 𝒯 over the mission time t ∈ [0, T] as d_min(𝒯). . In this section, we define the problem in terms of the agent and flow model and further explain and justify the notion of reliability as our performance measure. §.§ Agent and Flow Dynamics We consider an agent operating in a general time-varying, non-linear flow field v(x,s) →ℝ^n where x∈ℝ^n represents the state, s the time and n the dimensionality of the domain e.g. n=2 for a surface vehicle on the ocean and n=3 for agents operating in the atmosphere or underwater. Let the agent's actuation signal be denoted by from a bounded set 𝕌∈ R^n_u where n_u is the dimensionality of the control. Then the dynamics of the system with given initial conditions are governed by an ODE of the following form () = f((),(), ) = v((), ) + g((),(), ), ∈ [0, ] where represents the trajectory and () ∈ℝ^n the state at time s. For more intuitive notation we use x for the state whenever possible. The system dynamics <ref> are further assumed to be continuous, bounded and Lipschitz continuous in uniformly in <cit.>. The movement of the agent in the flow depends on its control g(x(s),(s), s) and the drift of the surrounding flow v(x(s), s). The control can be holonomic when the agent can directly actuate in each dimension e.g. g(x,(s), s) = or non-holonomic e.g. a balloon can only actuate up and down along the vertical axis. This makes the common assumption that the drift of the agent directly affect its state and neglects any inertial effects. §.§ Problem Setting The goal of the agent is to navigate reliably from a start state x_0 to a target region ∈ℝ^n while being underactuated max_u||g(x(s),(s), s)||_2 ≪ ||v(x(s), s)||_2 most of the time. During operation the agent is given a forecast of the flow v̂(x,s) which differs from the true flow v(x,s) by the stochastic forecast error δ(x,s;ω) where ω is a random variable. The error field δ can be characterized based on different metrics such as RMSE of the velocities or vector correlation <cit.>. The agent receives a new forecast at regular intervals (typically daily) that can be used to improve performance. Our goal is reliable navigation of underactuated agents in realistic complex flows occurring in nature. 
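To make the dynamics concrete, the following hedged sketch integrates the agent dynamics dx/dt = v(x, t) + u with a simple forward-Euler scheme for a single agent under an arbitrary feedback policy; the function names and the explicit clipping of the control to the bounded set 𝕌 are our own illustrative choices, not the authors' implementation.

import numpy as np

def simulate(x0, policy, flow, u_max, t0, t_end, dt):
    """Forward-Euler rollout of dx/dt = v(x, t) + u with ||u|| <= u_max.

    x0     : (n,) initial state
    policy : callable (x, t) -> desired control vector (clipped below)
    flow   : callable (x, t) -> ambient flow velocity v(x, t)
    """
    xs, ts = [np.asarray(x0, dtype=float)], [t0]
    while ts[-1] < t_end:
        x, t = xs[-1], ts[-1]
        u = np.asarray(policy(x, t), dtype=float)
        norm = np.linalg.norm(u)
        if norm > u_max:              # enforce the bounded control set U
            u = u * (u_max / norm)
        xs.append(x + dt * (flow(x, t) + u))
        ts.append(t + dt)
    return np.array(xs), np.array(ts)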
The strongest notion of reliability is robustness to a bounded disturbance, which guarantees reaching the target despite a worst-case forecast error δ. However, proving robustness is not possible in our setting, where the agent is significantly underactuated and the average forecast error is larger than the actuation. Nevertheless, we compare against a robust control baseline in Sec. <ref>. A weaker notion of reliability is a probabilistic bound, i.e., the agent reaches the target with high probability. Probabilistic bounds could be established by making strong assumptions on the distribution of the forecast error fields δ and using a simple flow field; this would render the results less meaningful for the realistic settings we consider. For these reasons we define reliability empirically as the success rate of a controller navigating from a start point to a target over a set of start-target missions 𝕄 in realistic flows. If the agent reaches the target within a maximum allowed time T_max the mission is successful, otherwise it fails. In our experiments we sample missions 𝕄 over a large spatial region and over a period of four months. § METHOD Our method tackles the multi-agent problem with a hierarchical control approach. The low-interference safe interaction controller ensures both performance and safe control by blending an arbitrary performance controller with a safety controller (see Fig. <ref>). We first introduce our flocking-inspired safety controller based on potential functions and then detail the design of the safe interaction controller. For ease of understanding we assume holonomic actuation g(x,u,t) = u, but note that the method can be generalized. §.§ Rationale In previous flocking literature, single-agent performance tasks have mostly been formulated as tracking one or multiple virtual leaders via navigational feedback <cit.>. However, in certain scenarios the unavailability of virtual leaders or the unpredictability of a reference trajectory can hinder the implementation of a flock navigation objective. To overcome this limitation, we generalize the navigational term to any nominal controller, whether it involves linear or non-linear inputs, HJ reachability time-optimal control, or a learned single-agent policy. A particularly promising way to derive a single-agent performance controller is dynamic programming, which produces a value function from which the optimal control can be obtained for an arbitrary number of agents at no additional cost beyond the computation of a gradient.
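As a hedged illustration of this point, the sketch below extracts a control for every agent from one shared, pre-computed value function by interpolating its spatial gradient. Here value_grad_interp is a hypothetical interpolator over the dynamic-programming grid, and we use the convention that the optimal input descends the value function (i.e., minimizes the gradient inner product); under the opposite argmax convention the sign flips.

import numpy as np

def performance_controls(value_grad_interp, positions, t, u_max):
    """Per-agent time-optimal control from a single shared value function.

    value_grad_interp : callable (x, t) -> grad_x J*(x, t)   (hypothetical)
    positions         : (N, n) float array of agent states
    """
    controls = np.zeros_like(positions)
    for i, x in enumerate(positions):
        grad = np.asarray(value_grad_interp(x, t), dtype=float)
        norm = np.linalg.norm(grad)
        if norm > 1e-9:
            # steer against the gradient of the value function
            controls[i] = -u_max * grad / norm
    return controls

The only per-agent cost is one gradient interpolation, which is what makes sharing a single value function across a swarm attractive.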
We refer the interested reader to <cit.> for a reliable navigation approach in a dynamic ocean environment using a recent multi-time HJR formulation with . In <cit.>, the performance control input for agent i can be obtained from an optimal value function 𝒥^* at time t as (t)^* = _u ∈𝕌 g(, , t) · 𝒥^*(,t), which we will leverage in section . Finally, we show in the next section that our low safe interaction controller embeds an augmented potential function <cit.>, to attract disconnected agents back into the communication range to be more robust in the context of challenging dynamical environment. Maintaining and achieving connectivity in realistic ocean environments can be difficult as disconnections can occur even if the initial network is connected, given the under-actuated setting and the presence of challenging non-linear flows. Utilizing potential fields for connectivity and collision avoidance can minimize interference with the nominal controller's performance, especially when agents are within communication range and in a collision-free configuration. Integrating the nominal controller into each agent's flocking input presents similarities with the literature on multiple virtual leaders. In <cit.> a distributed flocking control algorithm for multiple virtual leaders is proposed. Each virtual leader γ_i has it's own acceleration dynamics, given by f_γ_i(_γ_i, _γ_i) and the agents are set to track the virtual leader position and velocity through navigational feedback terms given by _γ_i-_i and _γ_i-_i respectively. To create the flock centering effect, the control law includes the gradient of a pairwise potential function ψ(-_γ_i j) where =_i-_j and _γ_i j=_γ_i-_γ_j accounts for the virtual leaders relative positions. §.§ Nominal Control with Multi-Time-Hamilton-Jacobi-Reachability stay general? Or introduce HJ? Maybe too prominent here, core method is next part. §.§ Flocking-Inspired Safety Controller Rewrite section as follows: * First paragraph: Blend a safe input control with a performance. Introduce weighting/normalizing scheme (feature scaling), where safe input is prioritized if communication is about to be lost or for a collision event. * Second paragraph: Then explain how we design our safe controller. Inspired from flocking, use a potential function to maintain connectivity AND achieve connectivity in the framework of underactuated agents. Underbrace in equation for inside comm range and outside, refer to figure. Say switch between two terms in the pot function with σ, that we now define. * Say why we need this hysteresis scheme. Strong gradient when we pass the boundary of communication, similar to contractive properties of reciprocal CBF. The sole objective of the safety controller is to ensure proper distances between the agents such that their communication graph is connected and they do not collide with each other. We are inspired by the reactive flocking approaches for achieving ideal inter-agent distances without prescribing a formation. Hence, we design our safety controller based on the gradients of a potential function ψ. To explain the principle let us first focus on just two agents i and j that are connected and are at an inter-agent distance = -. Consider the following bowl shaped potential function ψ_connected() = κ/(-), where κ > 0 is a tuning factor to adjust the bell shape (see left of in Fig. <ref>). Let the safety controllers for i be iψ_connected() and for j jψ_connected(_ji) = - iψ_connected(). 
When those two agents are getting too close → 0 the potential ψ() go to infinite, so the gradient-controllers are a strong repulsive force that pushes them away from each other. Conversely, when the two connected agents are at risk of losing their communication link → then ψ() →inf which means the gradient-controllers result in a strong attractive force that brings them closer again. For multiple agents the control becomes the sum of gradient potential terms of the other agents and the magnitude of the gradients helps prioritize the critical inter-agent distances . When the agents are disconnected, which is sometimes unavoidable in underactuated settings where strong flows push them apart, we want them to be able to reconnect. Therefore, our final potential function is augmented with a second term accounting for agents outside their communication range to encourage achieving connectivity between disconnected agents <cit.>. This results in our final potential function ψ(z): _≥ 0→_>0 that is also visualized in Fig. <ref>: ψ()=σ_ijκ R_com/(R_com-)_for connected agents +(1-σ_ij) √((-R_com + ϵ))_for disconnected agents. where σ_ij is an edge indicator similar to a_ij in Sec. <ref> but with an hysteresis parameter ϵ defined below in <ref>. Hence, ψ() switches between two terms whether the pair of agents (i,j) are within communication range (σ_ij=1) or disconnected (σ_ij=0 )<cit.>. The hysteresis mechanism avoids constant switching of the dynamical network with multiple agents for edges close to and helps preserve connectivity in reactive control schemes <cit.>. σ_ij[t] = 0, if ((σ_ij[t^-]=0) ∩(≥ R_com-ε)) ∪((σ_ij[t^-]=1) ∩(≥ R_com)), 1, if ((σ_ij[t^-]=1) ∩(<R_com)) ∪((σ_ij[t^-]=0) ∩(<R_com-ε)), where ϵ > 0 is the switching threshold inducing a hysteresis in the process of adding new links to the flock. Our approach yields a relatively low attraction force for agents far outside of their communication range. This is a design choice in the context of underactuated agents in dynamic oceanic environment, where remote flock members can experience strong divergent flows and direct connectivity may be infeasible or undesirable to achieve. The final safe interaction controller for each agent i with maximum propulsion U_max,i is then defined as ()= - ∑_j=1^Niψ()/∑_j=1^Niψ() U_max,i §.§ Low Interference Safe Interaction Controller For our that trades-off the performance inputs with the safety input we propose an approach which weights these control vector inputs for each agent i depending on the risk of losing connectivity or colliding. = (, ) =c_i^(1) + c_i^(2), ∀ i ∈ where c_i^(1) and c_i^(2) are weighting factors. Note that =() depends on the other agents positions to guarantee safe interactions. When collisions or connectivity losses are imminent, should be able to rapidly tend to to prioritize the safe interaction safety over performance, i.e. c_i^(1)→ 1 and c_i^(2)→ 0 (Fig. <ref> B, C). Conversely, when the network is well connected and there is low danger of collisions, should align with to have low interference with the agent's performance control, i.e. c_i^(1)→ 0 and c_i^(2)→ 1 (Fig. <ref> A). Hence we defined a weighting function α(): ^N → [0,1] such that c_i^(1) = α() and c_i^(2) = 1-α(). <ref>. This function α() measures the urgency of to converge to and we define it c_i^(1) = α(∑_j=1^Niψ() ) The function α can be thought as a monotonically increasing safety activation function taking values between [0,1] depending on the (unbounded) magnitude of it's argument. 
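A minimal sketch of this safety controller is given below. The connected branch of ψ is one plausible bowl-shaped choice that diverges at z → 0 and z → R_com, as the text requires; its exact form is an assumption, as are the defaults κ = 2 and ϵ = 300 m (taken from the experimental section, assuming positions in metres). The gradient is computed by finite differences so the sketch stays agnostic to the analytic expression.

import numpy as np

def potential(z, sigma_ij, r_com, kappa=2.0, eps=300.0):
    """Pairwise potential psi(z); the connected branch is a reconstruction."""
    if sigma_ij:                                    # within communication range
        return kappa * r_com / (z * (r_com - z))    # diverges at 0 and r_com
    return np.sqrt(max(z - r_com + eps, 0.0))       # pulls disconnected pairs back

def grad_potential_i(x_i, x_j, sigma_ij, r_com, h=1e-3, **kw):
    """Numerical gradient of psi(||x_i - x_j||) with respect to x_i."""
    g = np.zeros_like(x_i)
    for k in range(x_i.size):
        dx = np.zeros_like(x_i)
        dx[k] = h
        g[k] = (potential(np.linalg.norm(x_i + dx - x_j), sigma_ij, r_com, **kw)
                - potential(np.linalg.norm(x_i - dx - x_j), sigma_ij, r_com, **kw)) / (2 * h)
    return g

def safety_control(i, positions, sigma, r_com, u_max):
    """u_safe_i = -u_max * sum_j grad_i psi / ||sum_j grad_i psi||."""
    total = np.zeros(positions.shape[1])
    for j in range(positions.shape[0]):
        if j != i:
            total += grad_potential_i(positions[i], positions[j], sigma[i, j], r_com)
    norm = np.linalg.norm(total)
    return -u_max * total / norm if norm > 1e-9 else np.zeros_like(total)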
From the definition of ψ() in <ref>, lim_→ 0ψ() = ∞ and lim_→ψ() = ∞. Hence, in critical situations ∑_j=1^Niψ() gets very large so that c_i^(1) saturates to 1 and c_i^(2) to 0, thus prioritizing the network safety for the concerned agents i ∈ i.e. →, over each agent individual objective . In other words, ψ() has a contractivity property for agent inter-distances at the boundary of the safe set, defined by 0 and , similarly to CBFs <cit.>. With this design, we ensure that agents coming from a disconnected status σ_ij=0 to a connected status σ_ij=1, experience a strong attracting gradient to avoid escaping the communication range again. From Fig. <ref> it is also clear that when the network is close to being ideally connected, the gradient norm of the potential function ∑_j=1^Niψ() is low, so that agent's i control input is dominated by the performance controller since c_i^(1)→ 0 and c_i^(2)→ 1. =-∑_j ∈ N_i(t)iψ(-_γ_i j) -∑_j ∈ N_i(t) a_i j()(_i j-_γ_i j)+c_1(_γ_i-q_i) +c_2(_γ_i-_i)+f_γ_i(_γ_i, _γ_i) with f_γ_i(q_γ_i, p_γ_i) the acceleration dynamics of the virtual leader for agent i, N_i(t) the neighborhood region for agent i, =col(_1, …, _N) and =col(_1, …, _N) the position and velocity vectors of all agents respectively, =q_i-q_j and _γ_i j=_γ_i- _γ_j. The navigational feedback can be tuned through the positive weights c_1 and c_2. The first term of the control law is the gradient of a pairwise potential function ψ(-_γ_i j) which creates the flock centering effect. The second term produces the velocity matching behavior from the Reynold Rules, whilst the navigational feedback ensures proper tracking of each agent's respective virtual leader. Thus, we can identify some similarities to our application, where where ^grad = - ∑_j=1^niψ()/∑_j=1^niψ() and = -. To achieve a desired trade-off between the navigational objective given by () and the flock-inspired dynamics, a general normalization formulation with parameters c_i^(1) and c_i^(2) is introduced, to maintain the same feature scaling if () changes magnitude. Specifically, c_i^(1) =f( ∑_j=1^Niψ() , (), U_max) and c_i^(1) + c_i^(2) = U_max. U_max represents the maximum actuation power of the agents, such that 0 ≤ c_i^(1,2)≤ U_max. Due to the potential function shape, ψ() →∞ when the distance between two agents reaches the boundaries of the safe set. (make link to reciprocal CBF here). Hence, c_i^(1)→ U_max when there is a risk of collision or connectivity losses and c_i^(1)→ 0 if the agent inter-distances is within the ideal communication range. Here maybe just put a general normalization function and then say in the simulation that we used softmax and compared it to standard normalization c_i^(1), c_i^(2)←softmax(‖^grad‖, ‖() ‖) where the softmax is the function softmax(𝐱)_i=e^x_i∑_j=1^K e^x_j for i=1, …, K and x=(x_1, …, x_K) ∈ℝ^K In this section, we first outline the motivation behind our method and then detail our closed-loop control strategy. §.§ Motivation In realistic settings, only deterministic flow forecasts are available to the agent. To ground our discussion we look at ocean currents which are on the order of magnitude of 0.5-2m/s and underactuated agents with limited actuation ||max(g(,x,s))||_2=0.1m/s. The forecast error of the ocean current forecasts by the HYCOM model is estimated to be RMSE(δ)= 0.2 m/s after extensive validation analysis <cit.>. 
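For reference, the forecast error statistic quoted above can be estimated along the following lines; this is a simplified sketch over gridded forecast and hindcast current fields, whereas the cited validation additionally relies on drifter data and further metrics such as vector correlation.

import numpy as np

def forecast_rmse(u_fc, v_fc, u_hc, v_hc):
    """Velocity RMSE of the forecast error delta = v_hat - v over a grid.

    u_fc, v_fc : zonal/meridional forecast components (any broadcastable arrays)
    u_hc, v_hc : corresponding hindcast ("true") components
    """
    err_sq = (u_fc - u_hc) ** 2 + (v_fc - v_hc) ** 2
    return float(np.sqrt(np.nanmean(err_sq)))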
In this challenging setting we can neither use probabilistic methods without making unrealistic assumptions about the error distribution δ nor can we apply robust control as the average error δ is larger than the actuation of the agent. How than can we achieve reliable navigation in this setting? Our approach builds on the MPC paradigm of regular replanning with deterministic dynamics to compensate for imperfect knowledge of the dynamics and achieve reliable navigation. Intuitively, the higher the frequency of replanning, the more we can adapt the control to the dynamics experienced. While we could use any planning algorithm for non-linear time-varying dynamics, as mentioned in the related work, HJ reachability is the state-of-the-art, as it guarantees finding the optimal solution and can even handle time-varying obstacles and targets <cit.>. While we can obtain time-optimal trajectories from both classic forward and backward HJ reachability, the value function of backwards reachability is more useful. The backwards reachability objective is to minimize the distance of the agent to the target set at a terminal time. A key insight is that the value function of this objective can be used for closed-loop control as for every state and time in the domain we can extract the optimal control that minimizes this objective. This provides a notion of replanning at every step even when the deterministic dynamics are not accurate. However, there is a problem with directly applying classic backwards HJ reachability: the value function that is minimized is the distance to the target at a fixed terminal time. This poses the problem of which terminal time to choose to calculate the value function? If we choose it too distant in the future the system will ”loiter”, not making progress towards the goal. If we choose it too close to the current time, it might be impossible to reach the target and the agent will minimize the distance to the target in the short term, potentially at the cost of long term progress. We can compute the earliest possible arrival time, using forward reachability, and then compute backwards reachability from that time to get our value function for closed loop control (a baseline in <ref>). A more elegant approach is the recent multi-time reachability formulation which requires only one backwards computation and produces a value function that yields the time-optimal control everywhere, not just at the zero level-set, as the classic reachability value function. We found that using time-optimal control is a good proxy for reliability in our setting. §.§ Multi-Time Reachability for Closed-Loop Control In the following we summarize the multi-time reachability technique for completeness. Details for more general systems and applications are available in <cit.>. Multi-time reachability uses dynamic programming to derive a controller that (a) if possible, will get the system to the target in the minimum time, and (b) if not, will get as close to the target as possible. In order to achieve this behavior, we define the following cost function J: = d(,)_Terminal distance from target set - ∫_^𝕀_()d_Time spent in target set where d(x,) is a distance metric of a point x to the target set , s is the position of the agent at time s, when starting at x at time t with dynamics given in (<ref>). 𝕀_(x) is the identity function that is 1 when the state is in the target x ∈ and 0 otherwise. 
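A discretized evaluation of this cost for a sampled trajectory might look as follows; treating d(x, 𝒯) as the Euclidean distance to the boundary of a circular target region is our assumption, matching the circular targets used later in the experiments.

import numpy as np

def multi_time_cost(traj, times, target_center, target_radius):
    """J = d(x_T, T) - integral of indicator_T(x_s) ds on a sampled trajectory.

    traj  : (K, n) array of states x_s
    times : (K,) array of the corresponding times s
    """
    dist = np.linalg.norm(traj - target_center, axis=1)
    inside = dist <= target_radius                    # indicator of the target set
    terminal_dist = max(dist[-1] - target_radius, 0.0)
    time_in_target = np.trapz(inside.astype(float), times)
    return terminal_dist - time_in_target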
The consequence of this cost function is that if the agent can reach the destination, minimizing the cost implies reaching as quickly as possible. If the agent cannot reach the destination, the optimal control will attempt to reduce the terminal distance to the target. Note that for underactuated systems we want to consider the dynamics of the agent only until it reaches the target. Within the target we switch off the dynamics to reward the agent for staying in the target. Given this cost function, we use the principle of dynamic programming to derive an HJ PDE whose viscosity solution is <cit.>: = 1  () ∈ - min_[· f(,,)]  otherwise (, T) = d(,) Since the value of contains information about the minimum time it takes to reach the destination, we can extract an informative time-to-reach map 𝒟^* from it: 𝒟^*(,) = + - , ∀ (,) s.t., ≤0 If the target can be reached from x starting at t (implied by ≤ 0), then 𝒟^*(,) is the minimum duration required to reach . Which means that inside the target 𝒟^*(,) = 0 ∀ t, x∈. The optimal control ^*(x,t) is then the value that minimizes the Hamiltonian. ^*(x,t) = _∈𝕌 f(x, , t) ·∇_x = _∈𝕌 g(x, , t) ·∇_x Where (6) follows from (5) because v(x,t) does not depend on . Note that ∇_x𝒟^*(,) = ∇_x which means we can use 𝒟^*(,) and interchangeably to obtain the optimal control. This formulation solves the two key issues with classic backwards reachability as described above: a) we do not need to fix an arrival time a priori, instead we can run backwards Multi-Time Reachability from a large maximum time backwards until the current time t; and b) the control obtained from this value function provides the time-optimal control at all states making it more useful for reliable navigation. The key insight here is that, the value function allows us to extracted the time-optimal control for each state x at each time t thereby enabling frequent replanning at low computational cost. This use of the reachability value function differs significantly from the ways it has been used in the literature. Much of the previous work in reachability-based control has focused on least restrictive control for safety specifications <cit.>, where the terminal cost encodes constraints that must be satisfied for safety, and the solution of the corresponding HJ PDE provides both a reachable set of states that satisfy the constraints, as well as the control ^* to apply at the boundary of the reachable set in order to stay within this safe set. Such a least restrictive framework has been applied to both safe trajectory planning <cit.> and learning-based control <cit.>. In this work, we propose to solve this Multi-time reachability problem once per forecast, which are received at regular (daily) intervals ( Fig.<ref> and Alg.<ref>). The most recent time-to-reach map is then used for closed-loop control which by construction provides reliability through what can be thought of as time-optimal replanning at every time-step. For an intuitive example, take the setting in Fig. <ref>: at x_0 the agent applies ^*_0 based on the planned time-to-reach map. As the true currents v are different than the forecast v̂, the agent finds itself at x_1, a different state than expected. Based on our forecast the time-optimal control from this state onwards can again directly be computed from the time-to-reach map. 
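The closed-loop scheme can be sketched as below. solve_multi_time_hj is a hypothetical placeholder for the backward computation performed once per forecast, returning an interpolator for the spatial gradient of the time-to-reach map; the daily forecast interval, the time unit of hours, and the gradient-descent sign convention are assumptions consistent with the text.

import numpy as np

def closed_loop_mission(x0, t0, t_max, dt, get_forecast, solve_multi_time_hj,
                        true_flow, u_max, in_target):
    """Closed-loop control from a time-to-reach map, re-planned once per forecast."""
    x, t = np.asarray(x0, dtype=float), t0
    grad_d, next_forecast_t = None, t0
    while t < t_max and not in_target(x):
        if t >= next_forecast_t:                  # a new forecast arrives daily
            grad_d = solve_multi_time_hj(get_forecast(t))
            next_forecast_t += 24.0               # hours (assumption)
        g = np.asarray(grad_d(x, t), dtype=float)
        norm = np.linalg.norm(g)
        # time-optimal feedback: steer against the gradient of D*
        u = -u_max * g / norm if norm > 1e-9 else np.zeros_like(x)
        x = x + dt * (true_flow(x, t) + u)        # simulate on the true currents
        t += dt
    return x, t, in_target(x)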
There are two advantages of this method over classic MPC replanning based on fast non-linear programming or graph-based methods: (a) much higher replanning frequencies are possible, because deriving the optimal control from the value function is computationally cheap compared to solving the optimal control problem from a new state at every time step, and (b) less discretization error, because the HJ PDE solves the continuous-time problem whereas classic MPC methods rely on discretization in time and space to enable fast planning in the loop. § THEORETICAL ANALYSIS In this section, we analyze under which conditions our safe interaction controller is able to maintain connectivity and avoid collisions <cit.>. Energy-based analysis and LaSalle's invariance principle are commonly utilized to obtain analytical guarantees for flocking. These methods establish the stability of the system and demonstrate that the flock converges to a lattice structure while preventing inter-agent collisions, as demonstrated in <cit.>. The structural collective dynamics can be derived by using a moving referential <cit.> with respect to the flock centroid x_c. The relative coordinates are given by x̃_i = x_i - x_c, so that x̃_ij = x̃_i - x̃_j = x_ij. Therefore, ψ(x̃_ij) = ψ(x_ij), and the total tension or potential energy of the structural dynamics in the relative coordinates is H(x̃) = 1/2 ∑_i ∑_j≠ i ψ(x̃_ij), with structural dynamics ẋ̃ = -∇ H(x̃) + u^perf(x̃), where H(x̃) is the collective potential function of the flock. A possible approach, although conservative, is to show that a global tension energy decrease of the system, Ḣ = ∑_i=1^N Ḣ_i ≤ 0, can be achieved by guaranteeing a local tension energy decrease for all i ∈ 𝒱. Assume that G(t) switches at times t_l for l=0,1,2,… and that Ḣ ≤ 0 on each interval [t_l, t_l+1). Then, at a switching time t_k at which an edge is added, H(t_k) = H(t_k^-) + ψ(R_com - ϵ) <cit.>. As the graph topology becomes fixed after a certain time and only a finite number of edges can be added, the energy can be shown to be bounded for any subsequent time. The time derivative of H_i along the trajectory of agent i yields Ḣ_i = ẋ̃_i^⊤ ∑_j≠ i ∇_i ψ(x̃_ij), where we exploited the relation ∇_i ψ(x̃_ij) = -∇_j ψ(x̃_ij).
Substituting ẋ̃_i = ẋ_i - ẋ_c into <ref> and applying the Cauchy–Schwarz inequality, after a few manipulations we obtain that Ḣ_i ≤ 0 holds if: ‖ c_i^(2) u_i^perf + v(x_i) - avg(ẋ_N_i) ‖ ≤ c_i^(1) U_max,i, where avg(·) denotes the average and the set N_i = 𝒱∖{i} the neighboring agents of i. The dynamics of the other agents are also defined by their surrounding flow and their individual control inputs, such that avg(ẋ_N_i) = avg(v(x_N_i)) + avg(u_N_i). The agents do not necessarily need to be overactuated despite strong flows to achieve a local energy decrease (Ḣ_i ≤ 0). If the currents experienced by agent i are of similar magnitude and direction to the average current experienced by the neighboring agents, then v(x_i) compensates avg(v(x_N_i)) and <ref> can be fulfilled even if ‖ v(x_i) ‖ > U_max,i. The neighboring flocking control inputs avg(u_N_i) also help to account for the current difference term v(x_i) - avg(v(x_N_i)). If ‖ v(x_i) - avg(v(x_N_i)) ‖ ≫ U_max,i, satisfying <ref> becomes challenging, which can happen if agents experience strong divergent currents. Under these assumptions, we can show that Ḣ ≤ 0, which allows us to bound the maximum energy and apply LaSalle's Invariance Principle <cit.>, <cit.>, thus ensuring that no collisions or disconnections happen, since ψ(z) → ∞ when z → 0 or z → R_com. Condition <ref> is sufficient but not necessary to guarantee Ḣ < 0, as negative local energies can compensate positive ones. Finally, the dynamics in the relative coordinates ẋ̃_i = ẋ_i - ẋ_c can be expressed as: ẋ̃_i = (n-1)/n (u_i + v(x_i)) - 1/n ∑_l≠ i ẋ_l. § SIMULATION STUDY In the following section, the proposed flocking control scheme is evaluated on realistic ocean currents. We use multi-time HJR as the single-agent performance controller, since it generates a value function yielding the time-optimal control everywhere <cit.>. §.§ Experimental Set-Up We study the effectiveness of different controllers in maneuvering a two-dimensional ASV with holonomic actuation of fixed thrust magnitude u = 0.1 m/s. The control input in this context is the thrust angle θ. We consider a group of identical n=30 ASVs with omnidirectional communication capabilities, navigating in strong ocean currents v(x, t) ∈ [0.3 m/s, 2 m/s], where each agent aims to reach a common pre-defined target, so that the group objective can be considered as the flock center reaching the target. In the following, we describe the creation of simulation experiments in a realistic, uncertain ocean environment and how we obtain a large set of missions to best illustrate the trade-offs between single-agent performance and flock connectivity maintenance. Realistic Simulation of Ocean Conditions Inspired by <cit.>, we focus on the Gulf of Mexico region (Fig. <ref>), as it presents interesting and challenging currents.
Moreover, we employ two ocean current data sources, that we refer to as HYCOM hindcast <cit.> and Copernicus hindcast <cit.> that we use as forecast for realistic scenarios. In our context, the ocean forecast data represents predicted currents v̂_FC while the hindcast ocean data are true flows v. The ocean current data and the forecast error introduced are particularly relevant to the multi-time HJ reachability controller, as it uses the currents to plan on, which impacts the performance of the time-optimal nominal controller. We propose two settings to investigate our approach, namely (a) performance HJR planning on hindcast and multi-agent simulation on hindcasts (HC-HC) and (b) performance HJR planning on forecast and multi-agent simulation on hindcasts (FC-HC). The first allows us to assess performance in an idealized setting where true flows are known whilst the second reflects a realistic application in dynamic ocean environments. Large Representative Set of Missions We assume that all agents start a navigation mission to a target region at the same time t_0. The navigation objective is to drive the ASVs from their start states (_1(t_0) …_n(t_0) ) to within a maximum allowed time T_timeout. The target is defined as a circular region with center coordinates _ and fixed radius r_ = 0.1^∘ around it. To obtain a diverse set of missions 𝕄, the starting times t_0 are uniformly sampled between April 2022 and December 2022. T_timeout is set to 144h and the start points are sampled such that they can reach the target in [72, 144]h to ensure that missions are by definition feasible on true flows and temporally representative enough of realistic scenarios. To prevent stranding side-effects, we impose a minimum distance of 111km between the target area and the land, and a minimum distance of 40km between each ASV's initial position and the land. We generate a total of |𝕄| = 1000 missions of initially connected and collision-free networks, see Fig. <ref>. §.§ Baseline controllers * Baseline HJ single agent (naive multi-agent control) * Reactive approach: from reactive control paper * flocking We build on recent work that proposed a reliable multi-time HJR controller for underactuated agents utilizing complex flows <cit.>. This approach directly extends to multiple agents with little extra compute, and the feedback controller for agent i can be obtained from an optimal value function 𝒥^* at time t as (t)^* = _u_i ∈𝕌 g(, , t) ·i𝒥^*(,t). All evaluated controllers use the multi-time HJR formulation as a single agent performance control. Our baseline scheme, called HJR-Baseline, involves each agent only utilizing its time-optimal performance control HJR without considering multi-agent interactions. Thus our baseline also provides a good likelihood estimation of collisions and communication losses if each agent were to rely solely on it's performance control. In addition, we define a second baseline controller from <cit.>, denoted as HJR-Reactive. This controller operates in three modes: achieveConnectivity, maintainConnectivity, and GoToGoal, which are selected based on the ASVs' relative positions. The maintainConnectivity and GoToGoal modes employ a general navigation function for each agent, which we instantiate to our HJR performance controller. This approach is easily integrated with the time-optimal control HJR, and the reactive control term can be implemented in a decentralized manner. 
Finally, we define our LISIC as HJR-Flocking with the safe interaction controller <ref>, where we also implement HJR as the single agent performance controller . The trade-off between each agent's navigational objective and the safe network interaction can be tuned with two parameters. First, the potential function shape <ref> can be more or less flat around the ideal distance /2. In this application, we set κ=2. Furthermore, we now detail our weighting scheme for c_i^(1) and c_i^(2) via the definition of α <ref> as a softmax-like function c_1^(i) = e^∑_j=1^niψ()/e^∑_j=1^niψ() + e^ρ, ∀ i ∈𝒱. where the parameter ρ≥ 0 can be adjusted to achieve faster saturation of the potential function gradient term (). §.§ Additional Evaluation Metrics The upper connectivity bound in <ref> and <ref> is set to 9km, which corresponds approximately to radio communication capabilities for ASV and we set the collision lower threshold from <ref> to =100m, such that the ASVs would still have some margin in real conditions. Moreover, we also set the edge hysteresis parameter from <ref> to ϵ= 300m. We use the euclidean norm to measure inter-agent distances d(, ) and the minimum flock center distance to target d_min(). §.§ Numerical results The results over a-priori known true currents (HC-HC) and realistic scenario (FC-HC) are presented in Table <ref>. Both HJR-Flocking and HJR-Reactive exhibit superior performance in terms of connectivity and collision metrics compared to the baseline HJR. Thus we conduct statistical testing to compare HJR-Reactive and HJR-Flocking. Regarding the disconnection and collision rate, we perform a one-sided two-sample z proportion test for HJR-Flocking against HJR-Reactive. Let Γ be the rate collision or disconnection over 𝕄 with the null hypothesis be H_0: Γ_HJR-Flocking = Γ_HJR-Reactive to reject in favor of the alternative hypothesis H_A: Γ_HJR-Flocking < Γ_HJR-Reactive. HJR-Flocking is statistically significantly better than HJR-Reactive at avoiding disconnections in both (HC-HC) and (FC-HC) scenarios, with p-values of p=6.3e^-69 and p=1.7e^-114, respectively. However, it is not significantly better than HJR-Reactive at avoiding collisions. To compare the means over || of μ() and μ(λ_2^min) for connectivity and d_min(𝒯) for the performance trade-off, we perform a Welch's t-test due to the unequal variances of HJR-Reactive and HJR-Flocking. HJR-Flocking leads to statistically significantly better results for the network connectivity with p < 1e^-30 for μ() and μ(λ_2^min) for both (HC-HC) and (FC-HC) scenarios. Because of its higher value of μ(λ_2^min), HJR-Flocking is more robust against disconnections, see Fig. <ref> and should be the preferred control choice for communication maintenance Moreover, we plot the IPM for the three controllers in Fig <ref> for two cases, (1) the IPM evaluated on the full set of missions || (2) on a subset of missions where flocking failed to maintain connectivity. Among the three evaluated controllers, HJR-Flocking has the lowest IPM. Considering the missions where HJR-Flocking failed to maintain connectivity, it still achieves a smaller time of disconnection or fewer disconnected agents than the HJR-Baseline, but it is not as distinguishable from the HJR-Reactive controller. Interestingly, HJR-Reactive yields a statistically significantly better outcome for the objective trade-off μ(d_min(𝒯)) with p-values p < 1e^-40 in both (HC-HC) and (FC-HC). Finally, Fig. 
<ref> illustrates a navigation mission, comparing a naive multi-agent approach (HJR-Baseline) to our safe interaction controller, HJR-Flocking. §.§ Discussion * MPC might be challenging with the forecast error, especially that we cannot impose fixed neighbor sets over the prediction horizon, as it evolves over time and agents are under-actuated * A general criterion to be optimized could be the fiedler value directly as in <cit.>, however would need to introduce slack variable as sometimes connectivity cannot be guaranteed due to our under-actuation and strong divergent currents. * Reactive ctrl performs better in term of collision, but also because it can only achieve connectivity with a maximum amount of two neighbors, so there is implicitely less chance of collision since the agents are not in a close to α-lattice configuration as for flocking. Reactive ctrl has a single mode for navigating to target that when activated ignores the other platform positions, less constraints to achieve the objective. In flocking, especially for 30 agents, the gradient term might be always a bit active. * Nominal controller and same target for all agents is an implicit regulator, as with HJ-nominal close agents try to nudge in the same flows to reach the destination. Regulator strength can be varied by adjusting the target size, which also influences the collision rate. => This schema works well if the performance controllers are fairly aligned e.g. in going to same target or maximizing growth of seaweed (CITE matthias paper) It is clear that HJR-Flocking outperforms HJR-Reactive and the HJR-Baseline in terms of connectivity metrics. Interestingly, HJR-Flocking leads to a slightly higher collision rate in Table <ref> than HJR-Reactive. We believe that it is mainly due to two reasons: (1) In HJR-Reactive the expected risk of collisions is inherently lower as each agent can achieve connectivity with a maximum amount of two other agents while HJR-Flocking achieves a similar structure to a lattice configuration <cit.> (2) In our example, all agents navigate to the same target, which also increases the risk of collisions, as it is a common implicit regularizer. We expect improvement in terms of collision rate for application to autonomous ASVs, where each agent maximizes an objective along it's trajectory <cit.>. The discrepancy between the performance trade-off with each agent target reaching objective d_min(𝒯) in Table <ref> is less noticeable in the (FC-HC) setting, since the HJR performance is also degraded because of the stochastic error when planning on forecasts <cit.>. In this section we evaluate our control schema of using multi-time HJ reachability for closed-loop control and compare it to baseline methods on realistic ocean currents. §.§ Experimental Set-Up We investigate the reliability of various controllers in navigating a two dimensional ASV with holonomic actuation of fixed magnitude ||g(, x, t)||_2 = ||||_2 = 0.1 m/s. The control is the thrust angle θ and the ASV is navigating in strong ocean currents v(x, t) ∈ [0.3m/s, 2m/s] which it wants to hitchhike to get to the targets. In the following we describe how we ensure realistic ocean forecast simulation and obtain a large set of missions. Then we explain the baselines and evaluation metrics. Realistic Ocean Forecast Simulation The Ocean forecast data we employ are the HYCOM forecast and hindcast <cit.> and hindcasts from Copernicus <cit.> for the Gulf of Mexico region. 
To simulate realistic conditions we provide the control methods daily with the HYCOM forecast as it becomes available while simulating the system dynamics with the hindcast as the true flow v(x,x)(Fig. <ref>). We investigate two settings (a) planning on HYCOM forecasts and simulating on HYCOM hindcasts (HYCOM-HYCOM) and (b) planning on HYCOM forecasts and simulating on Copernicus hindcasts (HYCOM-Copernicus) (<ref>). To estimate how realistic our simulations are we compare the simulated forecast error δ across our start-target mission set 𝕄 with the HYCOM forecast error as estimated by Metzger et. al. using extensive drifter buoy data <cit.>. In Fig.<ref> we visualize two metrics, the velocity RMSE and the vector correlation, where 2 represents perfect correlation and 0 no correlation <cit.>. We find that the HYCOM-HYCOM setting underestimates the forecast error, especially in the first 24h where the forecast is perfect. The HYCOM-Copernicus setting is realistic as the simulated forecast error is of similar magnitude as the actual HYCOM forecast error. Large Representative Set of Missions To obtain a set of start-target missions 𝕄 we first fix 18 regularly spaced starting times t_i between November 2021 and February 2022. For each starting time t_i we uniformly sample 16 start points x_Start^t_i, j spatially over the Gulf of Mexico. In our underactuated setting many start-target missions are impossible even if the true currents are known. Hence, for the test set 𝕄 we need to ensure each mission is fundamentally feasible given the true currents. To generate only feasible missions from each starting points x_Start^t_i, j we calculate the forward reachable set (FRS) starting at t_i for a maximal time-horizon of T_max=120h using HJ reachability. The FRS is the set of all states x_s at time s for which there exists a control signal (·) such that ^_t_i, x_Start^t_i, j = x_s. To get a variety of mission durations we sample 4 relative times Δ t_k ∈ [20, 120] h and sample a target point x_^i,j,k from within the forward reachable set at s=t_i + Δ t_k. Then x_^i,j,k is guaranteed to be reachable from x_Start^t_i, j and the target is the circular region of radius r_=0.1^∘ around it. This gives us a large and diverse start-target set 𝕄 of 1152 missions ranging in duration from 20h to 120h as visualized in Fig. <ref>, together with their respective time-optimal trajectories based on the true currents. Baseline Controllers We compare the performance of our multi-time HJ reachability closed-loop strategy with four baselines. The first baseline is the HJ Closed-Loop method which is the same as our method except that it employs the classic backwards reachability value function. The backwards reachability value function is calculated based on the earliest possible arrival time and used for closed-loop control, as explained in Sec. <ref>. Our second baseline (Robust HJI Closed-Loop) is a robust controller which is equivalent to HJ Closed-Loop but instead of solving for the classic backward reachability value function it solves for the reach-avoid value function with a small bounded disturbance of d=0.05m/s <cit.>. This means that the control extracted from it in closed-loop is a conservative control that can arrive at or get as close as possible to at the earliest arrival time despite worst-case disturbance d. The third method (HJ + Waypoint Tracking) we compare with is based on the idea of tracking waypoints to compensate for the forecast error. 
It plans the time-optimal path on each new forecast using HJ and at each time actuates towards the next waypoint in this plan. Lastly, we compare against a Naive-to-Target approach which ignores the forecasts altogether and always actuates towards the center of the target region. Note that we do not compare against any graph-based methods such as RRT and A* because the only approximate the time-optimal control problem which HJ reachability solves in continuous time and space. Hence, we expect them to be strictly worse. Evaluation Metrics The key performance metric for our controllers is reliability, defined as the success rate of a controller over the set of missions 𝕄 (Sec. <ref>). To further elucidate the controller performances we look at two additional metrics. Lastly, we investigate how the reliability of a controller changes with the forecast error. To evaluate how fast a controller reaches a target we compute the time-optimality ratio ρ. We define it as the time it took the controller to reach divided by the fastest it could have reached under perfect knowledge of the currents. It is calculated over the set of all missions in which the controller succeeds (𝕄_ctrl) as ρ = 1/||𝕄_ctrl||∑_𝕄_ctrlΔ t_arrival ctrl/Δ t_arrival best-in-hindsight A value of 1 means the controller was as fast as possible and 1.1 means it took 10% longer than the fastest possible. The minimum distance to target measures the closest the controller got to during the full simulation horizon T_max in degree latitude, longitude. The minimum distance to target is positive, except for when the controller succeeds in a mission, and then it is 0. Lastly, our diverse set of missions 𝕄 allows us to investigate how the reliability of various controllers changes with increasing forecast error. For that we calculate the average forecast error RMSE(δ) for each mission spatially over a regional box containing the start and target region and temporally over the 5 day horizon of each forecast. We then group the missions by their RMSE(δ) into 20 bins and calculate for each controller the success rate across each bin. To make the trends more visible we fit a weighted linear regression over these 20 success rate - RMSE(δ) points and weight each bin by its number of missions. §.§ Experimental Results Simulation Setting HYCOM-HYCOM In this setting we evaluate the controller performance over ||𝕄|| = 1152 start-target missions and run the simulation for T_max=150h. If the controller reaches the target region within that time, it is successful otherwise it failed. Our control approach achieves a success rate of 99% and outperforms the baselines (Table <ref>). The time to reach the target is on average only 3% higher than the fastest possible, here HJ Closed-Loop performs slightly better with 2%. The Naive-to-Target controller succeeds only in 84.9% of missions. These results highlight that highly reliable navigation is possible with low (short-term) forecast error. However, as of now the actual forecast error of current ocean models is significantly higher (Fig.<ref>) which makes this an optimistic performance estimate. To evaluate the statistical significance of our results we perform a one-sided two sample z proportion test for each of the controllers against the Naive-to-Target baseline. With Γ denoting the success rate of a controller, our null hypothesis is H_0: Γ_Naive-to-Target = Γ_controller and the alternative hypothesis is H_A: Γ_Naive-to-Target < Γ_controller. 
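The two evaluation statistics can be computed as in the following sketch; the binning and the weighted linear fit mirror the procedure described above, with helper names of our own choosing.

import numpy as np

def time_optimality_ratio(arrival_ctrl, arrival_best):
    """rho over the missions the controller succeeded on (equal-length arrays
    of arrival durations for the controller and the best-in-hindsight path)."""
    return float(np.mean(np.asarray(arrival_ctrl) / np.asarray(arrival_best)))

def success_vs_forecast_error(rmse, success, n_bins=20):
    """Bin missions by forecast-error RMSE, compute per-bin success rates,
    and fit a weighted linear regression (weights = missions per bin)."""
    rmse, success = np.asarray(rmse), np.asarray(success, dtype=float)
    edges = np.linspace(rmse.min(), rmse.max(), n_bins + 1)
    idx = np.clip(np.digitize(rmse, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    counts = np.array([np.sum(idx == b) for b in range(n_bins)])
    rates = np.array([success[idx == b].mean() if counts[b] else np.nan
                      for b in range(n_bins)])
    valid = counts > 0
    slope, intercept = np.polyfit(centers[valid], rates[valid], 1, w=counts[valid])
    return centers, rates, (slope, intercept)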
We obtain that the success rate of all controllers is higher than Naive-to-Target in a statistically significant way (p-values Multi-Time HJ CL p=3.9e^-36, HJ CL p=2.8e^-27, Robust HJI CL p=1.36e^-22, HJ + Waypoint Tracking p=3.65e^-15). Simulation Setting HYCOM-Copernicus To ensure comparability across settings we take the same set of missions 𝕄, however with the Copernicus hindcasts only 837 of the missions are fundamentally feasible, which makes the set 𝕄 smaller. Again our control approach achieves a success rate of 82.3% and outperforms the baselines but with less of a margin than in HYCOM-HYCOM (Table <ref>). We perform the same statistical significance test and find only our approach has statistically significant higher success rate than Naive-to-Target (p=0.012). We want to emphasize that a 4.4% increase over the baseline in this challenging setting with large forecast errors is a sizeable improvement. Moreover, the results from HYCOM-HYCOM indicate that performance improves with better forecasts. Our closed-loop control schema is easily extendable to include learning about the currents while operating in them and using this information to improve its estimate of the future currents and thereby improve performance. Figures <ref> show how the success rate of the controllers changes for missions with varying forecast errors. As expected we see that the success rate decreases with increasing forecast error with different slopes for different controllers. The performance of our Multi-Time HJ Closed-Loop approach decreases slower than the baselines. In the HYCOM-HYCOM setting it is almost unaffected by higher forecast error. Note that we would expect Naive-to-Target to be indifferent to the forecast error as it does not consider the forecast. However, for Naive-to-Target we observe a significant drop in performance for missions with high forecast error. We hypothesize that the forecast error is higher in regions with complex and strong currents, conditions which are inherently more challenging to navigate. This could explain why Naive-to-Target fails more frequently with higher forecast error. § CONCLUSION AND FUTURE WORK In this work, we proposed a H-MAS approach to maintain network connectivity in complex dynamical flows while satisfying single agent level objectives when feasible. Our method blends a network safety controller for collisions and connectivity maintenance with a performance control policy, which allows us to decompose a complex multi-agent problem effectively. Our LISIC prioritizes a safe control input from a flocking-inspired potential function in critical scenarios. We showed that connectivity can be maintained and collision avoided in underactuated agents, as long as the flow dynamics divergence between neighboring agent can be compensated. Our empirical results in realistic ocean dynamics showed that our method efficiently maintains connectivity and avoids collisions in most of the scenarios, while reasonably trading off with each agent's performance objective. Future work includes leveraging the agent's dynamics with forecast flows to predict future disconnections or collisions using predictive methods <cit.>. We anticipate that these methods will perform well on the true flow scenario (HC-HC) but may exhibit a performance drop when stochastic error is present as in the (FC-HC) scenario. It will be interesting to evaluate whether the additional computational cost of predictive methods pays off. 
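For completeness, a minimal implementation of this test reads as follows; the pooled-proportion standard error is the usual choice for this test but is not spelled out in the text.

import math

def one_sided_two_sample_z_prop(successes_a, n_a, successes_b, n_b):
    """One-sided two-sample z proportion test for H_A: p_a > p_b,
    e.g. comparing a controller's success rate against Naive-to-Target."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # one-sided p-value via the standard normal survival function
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value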
The safety interaction controller involves a gradient term of a potential function that is prioritized in case of critical scenarios. We then show that our control law guarantees connectivity maintenance and collision avoidance even for underactuated agents, provided that agent neighboring dynamics divergence can be compensated by the maximum actuation magnitude. We evaluated our method over a large set of spatially and temporally representative missions in realistic ocean dynamics and compared it against a naive single agent baseline and another reactive approach <cit.>. Our empirical evaluation demonstrated that our approach is strongly efficient in maintaining connectivity and avoiding collisions while reasonably trading off with each agent's performance objective. Our future work includes leveraging the agent's dynamics with forecast flows to predict future disconnections or collisions using predictive methods <cit.>. We expect these methods to perform well on the true flow scenario (HC-HC) and will assess whether the additional computational cost pays off when using the (FC-HC) scenario, where stochastic error is present. In this work, a H-MAS approach is proposed to maintain network connectivity in complex dynamical flows, while satisfying single agent level objectives when feasible. We decompose a complex multi-agent problem by blending a network safety controller in terms of collisions and connectivity maintenance with a performance control policy. The safety interaction controller term involves a gradient term of a potential function that is prioritized in case of critical scenarios. We then show that our low interference safe interaction controller can guarantee connectivity maintenance and collision avoidance even for underactuated agents, on the condition that agent neighboring dynamics divergence can be compensated by the maximum actuation magnitude. Our method performance is evaluated over a large set of spatially and temporally representative missions in realistic ocean dynamics against a naive single agent baseline and another reactive approach <cit.>. The empirical evaluation suggests that our approach is strongly efficient to maintain, achieve connectivity and avoid collisions while reasonably trading-off with each agent's performance objective. Future work includes leveraging the agent's dynamics with forecast flows to predict future disconnections or collisions using <cit.>. We expect predictive methods to perform well on the true flow scenario (HC-HC), but expect a performance drop using (FC-HC) due to the stochastic error, such that it will be interesting to assess whether the additional computational cost pays off. In this work we have demonstrated that planning with Multi-Time HJ Reachability on daily forecasts and using the value function for closed-loop control enables reliable navigation of underactuated agents leveraging complex flows. The reliability of our method stems from the fact that the optimal control extracted from the value function at every time step is equivalent to full horizon time-optimal replanning. There are two key advantages over classic MPC with non-linear programming or graph-based methods. First, our method has lower computational cost as it only requires computing the value function once per forecast instead of having to solving an optimal control problem at every time-step. 
Second, our method solves the continuous-time optimal control problem and does not require spatial or temporal discretization typically employed by MPC for fast computation. This leads to high reliability at low computational cost which enables reliable autonomy for resource-constrained systems navigating in flows in the atmosphere and the oceans. We evaluated the performance of our method in realistic ocean currents over a large set of multi-day start-to-target missions distributed spatially across the Gulf of Mexico and temporally across four months. In the setting of using forecasts from HYCOM and simulating the true currents with HYCOM hindcasts, our method achieves a 99% success rate and outperforms the baselines. However, this setting underestimates the actual HYCOM forecast error. Hence, we also evaluated our method in a setting with forecast errors that reflect more realistic operations: planning on HYCOM forecasts and simulating the true currents with Copernicus hindcasts. In this more challenging setting our method again outperforms the baselines achieving a 82.3% success rate. While we showcased our method on 2D ocean currents, we want to emphasize that it is directly applicable to other flows e.g. 3D flows in the air and underwater. Lastly, we demonstrated that our method is less affected by increasing forecast errors than the baselines. The stark difference between 99% and 82.3% success rate in the two simulation settings highlights that more accurate forecasts increase reliability. Hence, we anticipate that if the agent learns about its surrounding flow field using online flow measurements to improve its estimate of the future flows, it can increase reliability significantly above the current 82.3% in realistic settings. Therefore, our future work we will focus on this learning of the surrounding currents which can directly be integrated in our method. To further validate our method, we plan to perform field tests with multiple autonomous surface vehicles in the ocean. IEEEtran
http://arxiv.org/abs/2307.02236v1
20230705122614
D-optimal Subsampling Design for Massive Data Linear Regression
[ "Torsten Reuter", "Rainer Schwabe" ]
stat.ME
[ "stat.ME", "math.ST", "stat.TH", "Primary: 62K05. Secondary: 62R07" ]
Corresponding author: Torsten Reuter, Otto von Guericke University Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany; e-mail: torsten.reuter@ovgu.de. Rainer Schwabe, Otto von Guericke University Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany; e-mail: rainer.schwabe@ovgu.de. MSC 2020: Primary 62K05; Secondary 62R07. Data reduction is a fundamental challenge of modern technology, where classical statistical methods are not applicable because of computational limitations. We consider linear regression for an extraordinarily large number of observations, but only a few covariates. Subsampling aims at the selection of a given percentage of the existing original data. Under distributional assumptions on the covariates, we derive D-optimal subsampling designs and study their theoretical properties. We make use of fundamental concepts of optimal design theory and an equivalence theorem from constrained convex optimization. The resulting subsampling designs provide simple rules for whether to accept or reject a data point, allowing for an easy algorithmic implementation. In addition, we propose a simplified subsampling method that differs from the D-optimal design but requires lower computing time. We present a simulation study, comparing both subsampling schemes with the IBOSS method. D-Optimal Subsampling Design for Massive Data Linear Regression Torsten Reuter and Rainer Schwabe =============================================================== § INTRODUCTION Modern technology allows us to collect huge amounts of data. Technological advances in computing power, however, often do not keep pace with the amount of data, creating a need for data reduction. We speak of big data whenever the full data size is too large to be handled by traditional statistical methods. We usually distinguish between the case where the number of covariates is large and the case where there are very many observations. The first case is usually referred to as high-dimensional data, and numerous methods have been studied to deal with such data, most notably LASSO by <cit.>, which utilizes ℓ_1 penalization to find sparse parameter vectors, thus fusing subset selection and ridge regression. We consider the second case, referred to as massive data. To deal with huge numbers of observations, typically one of two strategies is applied: one is to divide the data into several smaller datasets and process them separately, known as divide-and-conquer, see <cit.>. Alternatively, one can find an informative subsample of the full data. This can be done in a probabilistic fashion, creating random subsamples in a nonuniform manner. Among the prominent studies are <cit.>, <cit.> and <cit.>. They present subsampling methods for linear regression models called algorithmic leveraging, which sample according to probabilities based on the normalized statistical leverage scores of the covariate matrix. More recently, <cit.> studied volume sampling, where subsamples are chosen proportional to the squared volume of the parallelepiped spanned by their observations. Conversely, subdata can be selected in a deterministic way. <cit.> present such a method, which maximizes the minimal distance between two observations in the subdata. Most prominently, <cit.> have introduced information-based optimal subdata selection (IBOSS) to tackle big-data linear regression in a deterministic fashion based on D-optimality. The IBOSS approach selects the outermost data points of each covariate successively.
Other subsampling methods for linear regression include the work by <cit.>, who introduced orthogonal subsampling, inspired by orthogonal arrays, which selects observations in the corners of the design space, and the optimal-design-based subsampling scheme by <cit.>. Subsampling is becoming increasingly popular, leading to more work outside linear models. <cit.> extend the idea of the IBOSS method from the linear model to logistic regression, and other work on generalized linear regression includes the papers by <cit.> and <cit.>. <cit.> have recently considered subsampling for Cox regression, whereas <cit.> focused on non-parametric models and make use of the information in the dependent variables. For a more thorough recent review of design-inspired subsampling methods see the work by <cit.>. In this paper we assume that both the model and the distribution of the covariates are known. We search for D-optimal continuous subsampling designs of measure α that are bounded from above by the distribution of the covariates. <cit.> and <cit.> were the first to study such directly bounded designs. <cit.> considered this setting using subsampling designs standardized to one and bounded by α^-1 times the distribution of the covariates. More recently, the same has been studied by <cit.> in the context of sequential subsampling. In <cit.> we have studied bounded D-optimal subsampling designs for polynomial regression in one covariate, using many ideas similar to those used here. We stay with the unstandardized version, emphasizing the subsampling character of the design. For the characterization of the optimal subsampling design, we will make use of an equivalence theorem from <cit.>. This equivalence theorem allows us to construct such subsampling designs for different settings of the distributional assumptions on the covariates. Based on this, we propose a simple subsampling scheme for selecting observations. This method includes all data points in the support of the optimal subsampling design and rejects all other observations. Although this approach is in principle probabilistic, as it allows for selection probabilities, the resulting optimal subsampling design is purely deterministic, since it depends only on the acceptance region defined by the optimal subsampling design. We comment on the asymptotic behavior of the ordinary least squares (OLS) estimator based on the D-optimal subsampling design. Since the proposed algorithm requires a computing time of the same magnitude as calculating the OLS estimator on the full data, we also propose a simplified version with lower computing time that takes the variances of the covariates into account while disregarding the covariances between them. The rest of this paper is organized as follows. After introducing the model in Section <ref>, we present the setup and establish the necessary concepts and notation in Section <ref>. Section <ref> illustrates our methodology for linear regression in one explanatory variable. We construct optimal subsampling designs for multiple linear regression in Section <ref>. In Section <ref> we consider the case of a fixed subsample size, and then examine the performance of our method in simulation studies in Section <ref>. We make concluding remarks in Section <ref>. Proofs are deferred to an Appendix. § MODEL SPECIFICATION We have pairs (_i, y_i) of data, where y_i is the value of the response variable Y_i and the _i are realizations of the d-dimensional i.i.d.
random vectors _i of covariates with probability density function f_() for unit i = 1,…,n. We suppose that the dependence of the response variable on the covariates _i is given by the multiple linear regression model Y_i = β_0 + β_1 X_i,1 + β_2X_i,2 + … + β_dX_i,d + ε_i with independent, homoscedastic errors ε_i with zero mean and [ε_i] = σ_ε^2 < ∞ which we assume to be independent of all _i'. We assume that the number of observations n is very large. The aim is to estimate the regression parameter = (β_0 ,…, β_d)^⊤, where β_0 is the intercept and β_k is the slope parameter in the k-th component x_k of = (x_1, …, x_d)^⊤ for k = 1,…,d. For notational convenience we write the multiple linear regression model as a general linear model Y_i = (_i)^⊤β + ε_i, i=1,…,n , where () = (1,^⊤)^⊤. § SUBSAMPLING DESIGN We consider a scenario where the y_i are expensive to observe and therefore only a percentage α (0 < α < 1) of the y_i are observed, given all _i. Another possible setting is that all y_i and _i are available, but parameter estimation is only computationally feasible on a percentage α of the data. Either setup leads to the question which subsample of pairs (_i,y_i) yields the best estimation of the parameter β. Throughout this section we assume, that the distribution of _i and its density f_() are known. We consider continuous designs ξ with measure α on ^d with density functions f_ξ() that are bounded from above by the density of the covariates f_() such that ∫ f_ξ() = α and f_ξ() ≤ f_() for all ∈^d. The resulting set of all such designs ξ is denoted by Ξ^f_. A subsample can then be generated according to such a continuous design by accepting units i with probability f_ξ(_i)/f_(_i). Let (ξ) = ∫()()^⊤ξ() be the information matrix of ξ. We require [‖_i‖^2_2] < ∞ as some entries of the information matrix can be infinite otherwise. (ξ) measures the quality of the OLS estimator based on a subsample according to ξ in the sense that √(α n)(-) asymptotically follows a normal distribution with mean zero and covariance matrix σ_ε^2(ξ)^-1 when n tends to infinity. To find an appropriate subsampling design ξ∈Ξ^f_, we aim to minimize the design criterion for D-optimality Ψ(ξ)= ln(((ξ)^-1)). Then, the D-optimal design minimizes the determinant of the asymptotic covariance matrix of the parameter least squares estimator and can be interpreted as minimizing the volume of the respective confidence ellipsoid of . The optimal subsampling design that minimizes Ψ(ξ) in Ξ^f_ is denoted by ξ^* with density f_ξ^*(). Further, we make use of the directional derivative of Ψ at design ξ in the direction of any other design η with measure α, not necessarily in Ξ^f_. This directional derivative can be calculated as F_Ψ(ξ,η) = (d+1) - ((ξ)^-1(η)) <cit.> which reduces to F_Ψ(ξ,ξ_) = (d+1) - α()^⊤(ξ)^-1() for a one-point measure with weight α on denoted by ξ_. Equivalently, we consider the sensitivity function ψ(, ξ) = α()^⊤(ξ)^-1(), which incorporates the essential part of the directional derivative (ψ(, ξ) = (d+1) - F_Ψ(ξ,ξ_)). For the characterization of the D-optimal continuous subsampling design, we apply the constrained equivalence theorem under Kuhn-Tucker conditions <cit.> to the present case of multiple linear regression in the following theorem. 
In multiple linear regression with d ≥ 2 covariates with density f_() of the covariates _i, the subsampling design ξ^* is D-optimal if and only if there exist a subset ^* ⊂^d and a threshold s^* such that (i) ξ^* has density f_ξ^*() = f_() _^*() (ii) ψ(,ξ) ≥ s^* for ∈^*, and (iii) ψ(,ξ) < s^* for ∉^*. Here, _A() denotes the indicator function, i. e. _A() = 1, if ∈ A and _A() = 0 otherwise. Before treating the general case of subsampling design in multiple linear regression, we briefly present some results from <cit.> for the case of ordinary linear regression in one covariate for illustrative purposes. §.§ Subsampling Design in one Covariate In the case of linear regression in one covariate X_i, we have d = 1, (x)=(1,x)^⊤ and β=(β_0,β_1)^⊤. We assume the known distribution of the covariate X_i to be symmetric (f_X(-x) = f_X(x)) and to have a finite second moment ([X^2] < ∞). We use the linear equivariance of the regression function, (h(x))=(1, -1)(x), where (…) denotes a diagonal matrix, and the invariance of the D-criterion w.r.t. the sign change h(x) = -x to show that any design ξ is dominated by its symmetrization ξ̅ = (ξ + ξ^h)/2 such that Ψ(ξ̅) ≤Ψ(ξ) <cit.>. Thus we can restrict our search for a D-optimal subsampling design ξ^* to designs in Ξ^f_X that are invariant to sign change, ξ^h=ξ. For an invariant ξ^* we find for the off-diagonal entries of the information matrix ∫ x f_ξ^*(x) x = 0. (ξ^*) = (α, m) is thus a 2 × 2 diagonal matrix, where m = ∫ x^2 f_ξ^*(x) x. As a consequence the sensitivity function ψ(x,ξ^*) = α x^2/m is a polynomial of degree two as a function in x which is symmetric in x, ψ(-x,ξ^*)= ψ(x,ξ^*). Because (ξ^*), and thus (ξ^*)^-1, are positive definite, the coefficient of the leading term of ψ(x,ξ^*) is positive. We use from Theorem <ref> that there exists a threshold s^* such that f_ξ^*(x) = f_X(x) if ψ(x,ξ^*) ≥ s^* and f_ξ^*(x) = 0 elsewhere. Paired with the symmetry of ψ(x,ξ^*), we find ^* = (-∞,-a]∪[a,∞), where a ≥ 0 and conclude that the density f_ξ^* of the D-optimal subsampling design is of the form f_ξ^*(x) = f_X(x) _(-∞,-a]∪[a,∞)(x). Since we require ξ^*() = α, we can easily see that a must be equal to the (1 - α/2)-quantile of the distribution of X_i. This approach applies accordingly to all distributions which are obtained by a location or scale transformation of a symmetric distribution: units will be accepted if their values of the covariate lie in the lower or upper (α/2)-tail of the distribution. This procedure can be interpreted as a theoretical counterpart in one dimension to the IBOSS method proposed by <cit.>. [normal distribution] If the covariate X_i comes from a standard normal distribution, then the optimal boundaries are the (α/2)- and the (1 - α/2)-quantile ± z_1 - α/2, and unit i is accepted when |x_i| ≥ z_1 - α/2. We find (ξ^*) = (α, m), where m = α + √(2/π) z_1-α/2exp(-z_1-α/2^2/2). Figure <ref> shows the density f_ξ^* of the optimal subsampling design ξ^* and the corresponding sensitivity function ψ(x,ξ^*) for α = 0.5. The horizontal dotted line displays the threshold s^*. The quantiles ± z_1 - α/2 are shown as vertical dotted lines. For X_i having a general normal distribution with mean μ_X and variance σ_X^2, the optimal boundaries remain to be the (α/2)- and (1 - α/2)-quantile μ_X ±σ_X z_1 - α/2 <cit.>. §.§ Multiple Linear Regression Subsampling Design We now examine the case of multiple linear regression Y_i = (_i)^⊤β + ε_i , i=1,…,n, where _i is a d-dimensional random vector with d ≥ 2 and () = (1, ^⊤)^⊤. 
We assume the distribution of the _i with density f_() to be invariant to the special orthogonal group SO(d), i. e. rotations about the origin in ^d. This implies, in particular, that [_i] = 0 and all the covariates follow the same symmetric distribution and have covariance matrix σ_^2𝕀_d, where 𝕀_d denotes the identity matrix of dimension d. For instance, the multivariate standard normal distribution satisfies this condition with σ_^2=1. To make use of the rotational invariance, we will characterize subsampling designs in their hyperspherical coordinate representation, where a point in ^d is represented by a radial coordinate or radius r and a (d-1)-dimensional vector of angular coordinates , indicating the direction in the space. We make use of the transformation T: [0,∞) × [0,π]^d-2× [0,2π) →^d, T(r,) =, where = (θ_1,…,θ_d-1)^⊤, x_k = rcos(θ_k)∏_j=1^k-1sin(θ_j) for k=1,…,d-1, and x_d=r∏_j=1^d-1sin(θ_j). We identify all points with radius zero with the origin and denote the inverse of the transformation T by S = T^-1. Then, for a subsampling design ξ∈Ξ^f_ on ^d, the induced subsampling design ξ^S is the same subsampling design in hyperspherical coordinates, i. e. on [0,∞) ×𝔹, where 𝔹 = [0,π]^d-2× [0,2π). We also write ξ^S = ξ_R,Θ, which can be decomposed into the product ξ_R ⊗ξ_Θ|R of the marginal design ξ_R on the radius, and the conditional design ξ_Θ | R on the vector of angles given R = r as a Markov kernel. In particular ξ_Θ | R = r(𝔹) = 1 for any r ≥ 0. Subsequently, for ξ_Θ , R∈Ξ^f_ it has to hold that ξ_R([0,∞)) = α and the density of ξ_R is bounded from above by the marginal density f_R() of _i on the radius. Next, we want to show that there exists a continuous D-optimal subsampling design that is invariant w.r.t. the special orthogonal group SO(d). This requires to employ a left Haar measure μ on SO(d). For the representation in hyperspherical coordinates this is, up to a constant, a product of Lebesgue measures λ on the components of the angle vector <cit.>. We set μ = ⊗_i = 1^d-2λ/π⊗λ/(2π) such that μ(𝔹) = 1, where ⊗ denotes the common product of measures. In the case d=2, the transformation S is a mapping to the standard polar coordinates and we can decompose the subsampling design into a measure on the radius R and a conditional one on the single angle θ. Of particular interest are subsampling designs that are invariant to rotations about the origin, i. e. invariant w.r.t SO(d). We start by showing the equivalence between invariance w.r.t. the special orthogonal group and decomposing a subsampling design in a measure on the radius and a uniform measure on the angle. In multiple linear regression with d ≥ 2 covariates, a design ξ is invariant with respect to SO(d) if and only if ξ can be decomposed into the marginal measure ξ_R on the radius and the Haar measure μ on the angle, i. e. ξ = ξ_R ⊗μ. For a subsampling design ξ∈Ξ^f_ with marginal design ξ_R on the radius, we denote the symmetrized measure ξ_R⊗μ of ξ by ξ. In multiple linear regression with d ≥ 2 covariates, let ξ = ξ_R⊗ξ_Θ|R∈Ξ^f_. Then its symmetrization ξ = ξ_R⊗μ is also in Ξ^f_. Note that ξ is invariant w.r.t. SO(d) by Lemma <ref>. Next, we establish an equality between the arithmetic mean of information matrices of rotated subsampling designs and the information matrix of ξ. In multiple linear regression with d ≥ 2 covariates, let G be the finite group of rotations about the d axes that map the d-dimensional cross-polytope onto itself. Then 1/|G|∑_g∈ G(ξ^g) = (ξ). 
We make use of this to prove that any subsampling design can be improved by its symmetrized subsampling design ξ, which allows us to restrict the search for an optimal subsampling design from Ξ^f_ to the essentially complete class of rotation invariant subsampling designs in Ξ^f_. In multiple linear regression with d ≥ 2 covariates, let Φ be a convex optimality criterion that is invariant w.r.t. SO(d), i. e. Φ(ξ^h) = Φ(ξ) for any h ∈ SO(d), ξ∈Ξ^f_. Then for any ξ = ξ_R⊗ξ_Θ | R∈Ξ^f_ it holds that Φ(ξ) ≤Φ(ξ), with ξ = ξ_R⊗μ. The regression model is linearly equivariant w.r.t. SO(d) as (h( )) = _h ( ) = [ 1 0; 0 H ]( ) , for h ∈ SO(d) and H its respective orthogonal matrix with determinant one, i. e. h() = H. Further note that Ψ(ξ)=Ψ(ξ^h) for any ξ∈Ξ^f_, h∈ SO(d), since ((ξ^h)) = (_h)^2((ξ)) and (_h) = 1 for all h ∈ SO(d). The D-optimality criterion Ψ(ξ)= ((ξ)^-1) is indeed convex and invariant w.r.t. SO(d). Theorem <ref> applies to other optimality criteria as well such as Kiefer's Φ_q-criteria including the A-criterion or the integrated mean squared error (IMSE) criterion. Before we construct the optimal subsampling design in the subsequent theorem we make some preliminary remarks. By Theorem <ref> we can restrict our search for a D-optimal subsampling design to invariant designs. We study the shape of the sensitivity function ψ(,ξ^* ) of an invariant D-optimal subsampling design ξ^*∈Ξ^f_ in the direction of a one-point measure with weight α on . Since ξ^* is composed of the Haar measure on the vector of angles, one can easily verify that all off-diagonal entries ∫ x_jξ^*() and ∫ x_jx_kξ^*() of the information matrix of ξ^* are equal to zero, j,k = 1,…, d, j ≠ k. The (d+1)×(d+1) information matrix is thus (ξ^*) = (α, m, …, m), where m = ∫ x_1^2ξ^*(). As a consequence, the sensitivity function simplifies to ψ(,ξ^*) = α(1 , ^⊤) (1/α,1/m,…,1/m) ([ 1; ]) = 1 + α/m‖‖_2^2 . The sensitivity function is thus invariant to SO(d) in the sense that ψ(h(),ξ^*) = ψ(,ξ^*) for all h ∈ SO(d) because ‖ h() ‖_2^2= ‖‖_2^2. Theorem <ref> states that for a subsampling design to be optimal, it must hold that sup_∈^*ψ(,ξ^*) ≥inf_∉^*ψ(,ξ^*), where ^* is the support of ξ^*. Given that ψ(,ξ^*) is constant on the d-sphere for all radii r > 0, this suggests that the optimal subsampling design is equal to zero in a d-sphere around the origin and equal to the bounding distribution everywhere else. For multiple linear regression with d ≥ 2 covariates and any SO(d) invariant distribution of the covariates _i, the density of the continuous D-optimal subsampling design ξ^* is f_ξ^*() = f_()_[q_1-α,∞)(‖‖_2^2) , where q_1-α is the (1-α)-quantile of the distribution of ‖_i‖^2_2. [multivariate standard normal distribution] We apply Theorem <ref> to the case of _i∼_d(0, 𝕀_d ). Then q_1-α = χ^2_d,1-α is the (1-α)-quantile of the χ^2 distribution with d degrees of freedom and the (d+1) × (d+1) information matrix (ξ^*) is of the form (α,m,…,m) with m = 2χ^2_d,1-α/df_χ^2_d(χ^2_d,1-α) + α. Note that m > α for all α∈ (0,1), because χ^2_d,1-α > 0 for α∈ (0,1) and f_χ^2_d(w) > 0 for all w>0- In comparison to Example <ref>, we see that equation (<ref>) also holds for d = 1. We will use this example to examine the performance of the subsampling design in Section <ref>. So far we have assumed that the covariates have mean zero, all covariates have the same variance, and there is no correlation between the covariates. Let _i be such covariates that are invariant w.r.t. SO(d). 
Now we consider covariates _i = A_i + which are location-scale transformations of _i with non-singular transformation matrix A. These covariates _i with mean and non-singular covariance matrix Σ_ where A is a root of Σ_, i. e. Σ_ = AA^⊤. Because of equivariance of the D-criterion w.r.t. to such transformations, we find D-optimal subsampling designs in this case by transforming the observations back to the former situation by subtracting the mean and multiplying with A^-1. We show that this indeed constitutes a D-optimal subsampling design in the following lemma and derive the respective density in the subsequent theorem. In multiple linear regression with d ≥ 2 covariates, let the distribution of covariates _i ∈^d be invariant w.r.t. SO(d) and let ζ^*∈Ξ^f_ be the corresponding D-optimal subsampling design. Let A be a non-singular d × d matrix and μ a constant in ^d. Then the D-optimal subsampling design ξ^*∈Ξ^f_ for the covariates _i = A_i + μ is given by ξ^*(B) = ζ^*(A^-1(B-μ)) for any measurable set B ⊂^d. In multiple linear regression with d ≥ 2 covariates, let the distribution of the covariates _i∈^d be invariant w.r.t. SO(d). Let A be a non-singular d × d matrix and μ a constant in ^d. Then the density of the D-optimal subsampling design ξ^* for covariates _i = A_i + μ is f_ξ^*() = f_() _[q_1-α,∞)((-μ)^⊤Σ^-1(-μ)), where Σ = AA^⊤ and q_1-α is the (1-α)-quantile of ‖_i‖_2^2. [multivariate normal distribution] Here we extend our findings from Example <ref> to the case of a general multivariate normal distribution of the covariates with mean and covariance matrix Σ_, i. e. _i = A_i +, where A is a root of Σ_ and _i follows a multivariate standard normal distribution. By Theorem <ref> we know that the D-optimal subsampling design is equal to the distribution of the _i outside of an ellipsoid given by (-)^⊤Σ_^-1(-) < q_1-α, where again q_1-α = χ^2_d,1-α is equal to the (1-α)-quantile of the χ^2 distribution with d degrees of freedom. § FIXED SAMPLE SIZE Unlike in the previous section, where we selected a certain percentage of the full data, we now want to select a fixed, sufficiently large, number of k instances out of the total n data points. This implies that we want to select a decreasing percentage α_n = k/n of the full data. The subsampling design ξ_n with measure α_n has non-standardized information matrix _n(ξ_n) = n ∫()()^⊤ξ_n( ). Based on the subsampling design ξ_n, we suggest a simple acceptance-rejection strategy to implement an optimal subsampling from Theorem <ref>. If k is large, the asymptotic properties in the previous section give rise to consider the inverse information matrix _n(ξ_n)^-1 as an approximation to the covariance matrix of β̂_n based on k out of n observations. Hence, it seems to be reasonable to make use of the optimal continuous subsampling design ξ_n^* for subsampling a proportion α_n=k/n according to Theorem <ref>. To select subdata of mean subsample size k, we accept all points that lie in the support of ξ_n^* and reject all others. To decide whether a point is in the support of ξ_n^*, we first center the data, then transform the n × d covariate matrix by a multiplication with the inverse A^-1 of a d × d root of the covariance matrix Σ_. This requires computing time (nd^2). Note that finding the inverse root of the covariance matrix is negligible, as computation of the inverse only depends on the number of covariates d and we work under the assumption that d ≪ n. 
Then we derive the threshold for the Euclidean distance from the origin as shown in Theorem <ref>, and compare it to the Euclidean distance of the point to the origin. This part can be performed in (nd). This procedure results approximately in a sample size k. Alternatively, to ensure that the subsample size is exactly equal to k, we can select the k units with the largest Euclidean distance to the origin. This also avoids the derivation of the threshold quantile in Theorem <ref> and even does not require the knowledge of the distribution of the covariates apart from the assumption of rotational invariance. This design will be denoted by ξ_k,n^*. Computing the OLS estimator based on k observations requires a computing time of (kd^2). When n ≫ d it is reasonable to assume that k ≤ n/d. Then the expected computing time for the entire procedure is (nd^2), the same magnitude as computing the OLS estimator on the full data. To reduce the computing time, we propose a second method. There we merely standardize each covariate X_ij by its standard deviation σ_j. We use à = (σ_1,…,σ_d) as the root of Σ̃_, containing only the diagonal entries of Σ_, for transformation of the data. We obtain transformed covariates _i = Ã^-1 (_i - ) with mean zero and covariance matrix equal to the correlation matrix of _i. Then, since Ã^-1 is a diagonal matrix, the transformation can be performed in (nd) instead of (nd^2). Afterwards we select the k data points with the largest Euclidean distance from the origin, which can also be performed in (nd) and thus the entire procedure has computing time (nd). The simplified method has one more advantage. It is easier to implement in practice when there is no prior knowledge of the covariance matrix of the covariates as estimating only the variances of the covariates on a small uniform random subsample (prior to the actual subsampling procedure) is much easier than estimating the entire covariance matrix. We will see in the simulation study in Section <ref> that this second method is indeed viable. For now, however, we first want to study the performance of the initial method, where the full covariance matrix of the covariates is used for the transformation of the data. As a measure of quality of the method with a fixed sample size k we use the covariance matrix [β̂ ; ξ_k,n^*] of the OLS estimator given a subsample according to ξ_n^*. For large k, the covariance matrix may be approximated by the inverse of the information matrix of the corresponding optimal continuous design. [β̂ ; ξ_k,n^*] ≈σ_ε^2_n(ξ_n^*)^-1. In the literature, the main interest is often in the covariance matrix for the vector _slope= (β̂_1,…,β̂_d)^⊤ of slope parameter estimators. Therefore, we will adopt this approach here. Note that the D-optimal subsampling design for _slope is the same as for the full parameter vector because (^-1) and the determinant of the lower right d × d submatrix of ^-1 differ only by the constant factor α. Then, under the optimal subsampling design ξ_n^* from Theorem <ref>, we find for _i with mean and covariance matrix Σ_ [_slope;ξ_k,n^*] ≈σ_ε^2 (nm_n)^-1Σ_^-1 , where m_n is the essential term in the information matrix _n(ζ_n^*) = n(k/n, m_n, …, m_n) with ξ_n^*(B) = ζ_n^*(A^-1(B - )). 
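The two selection rules just described (the full D-OPT rule, which whitens with a root of the covariance matrix, and the simplified D-OPT-s rule, which scales by the standard deviations only) can be summarized in a minimal Python sketch. This is an illustration under assumed inputs, not the authors' implementation: X is the n x d covariate matrix, and the mean, covariance matrix or standard deviations are assumed known, as in the setting above; the multivariate normal example below then follows by comparing the whitened squared norms with the corresponding chi-squared quantile.

```python
import numpy as np

def dopt_subsample(X, mu, Sigma, k):
    """Full D-OPT rule: whiten with a root of Sigma, keep the k points
    with the largest Euclidean norm (O(n d^2))."""
    A_inv = np.linalg.inv(np.linalg.cholesky(Sigma))   # inverse of one root of Sigma
    Z = (X - mu) @ A_inv.T                             # whitened covariates
    norms = np.einsum("ij,ij->i", Z, Z)                # squared Euclidean norms
    return np.argsort(norms)[-k:]                      # indices of the selected rows

def dopt_s_subsample(X, mu, sigma, k):
    """Simplified D-OPT-s rule: scale each covariate by its standard
    deviation only, ignoring correlations (O(n d))."""
    Z = (X - mu) / sigma
    norms = np.einsum("ij,ij->i", Z, Z)
    return np.argsort(norms)[-k:]
```

Selecting exactly the k largest norms corresponds to the fixed-sample-size variant; comparing the whitened squared norms against the (1 - k/n)-quantile of the appropriate chi-squared distribution instead would give a subsample of size approximately k, as noted above.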
[multivariate standard normal distribution] In the case of normally distributed covariates _i∼_d(0, 𝕀_d), we have from equations (<ref>) and (<ref>) [_slope ; ξ_k,n^*] ≈σ_ε^2( k + nq_1 - (k/n)^d/2e^-q_1-(k/n)/2/d 2^(d/2)-1Γ(d/2))^-1𝕀_d , where q_1 - (k/n) is the (1 - (k/n))-quantile of the χ^2 distribution with d degrees of freedom. With this we can approximate the trace of the covariance matrix of _slope, which is equal to the mean squared error (_slope) = [ ‖_slope - _slope‖_2^2 ], since the OLS estimator is unbiased. Moreover (_slope) divided by d is equal to any of the diagonal entries of the covariance matrix, e.g. the variance [β̂_1;ξ_n] of the slope parameter estimator of the first covariate. In Figure <ref> the lines depict the approximation from equation (<ref>) of ( _slope )/d for standard normal covariates in dependence of the size of the full data n given a fixed size k = 1000 of the subsample. The symbols depict the respective simulated values. The simulation procedure is given in section <ref>, with the only difference that the number of runs for each combination of number of covariates d and full sample size n here is only 1000, since the computations for n = 10^7 take much longer. We see that ( _slope )/d tends to zero as n →∞, but substantially slower for higher dimensions d as more parameters need to be estimated. Moreover, the approximation in equations (<ref>) and (<ref>) turn out to be useful because they are very close to the values obtained by simulation, at least, for small to moderate dimensions d. To demonstrate the advantage of ξ_k,n^*, we consider uniform random subsampling as a natural choice to compare with. The uniform random subsampling design ξ_n^u has density f_ξ_n^u() = k/n f_ (). As a measure of quality, we study the D-efficiency of ξ_n^u w.r.t. the D-optimal subsampling design ξ_k,n^*. For estimating the slopes, the D-efficiency of a subsampling design ξ_n with subsampling proportion α_n=k/n is defined as _D_slope(ξ_n) = (([_slope ; ξ_k,n^*])/([_slope ; ξ_n]))^1/d and can be approximated by replacing the covariances by the inverse information matrices for the slopes. For this definition the homogeneous version (([_slope ; ξ_n]))^1/d of the D-criterion is used, satisfying the homogeneity condition ((λ[_slope ; ξ_n]))^1/d = λ (([_slope ; ξ_n]))^1/d for all λ > 0 <cit.>. For uniform random subsampling, the information matrix is given by _n(ξ_n^u) =k ∫()()^⊤ f_() = k (1,Σ_). The D-efficiency _D_slope(ξ_n^u) of uniform random subsampling can be nicely interpreted: the sample size required to obtain the same precision (in terms of the D_slope-criterion), as when the D_slope-optimal subsampling design ξ_n^* with subsample size k is used, is equal to the inverse of the efficiency _D_slope(ξ_n^u) times k. For example, if the efficiency _D_slope(ξ_n^u) is equal to 0.5, then twice as many observations would be needed under uniform random sampling than for a D_slope-optimal subsampling design. [normal distribution] Consider _i∼_d(, Σ_). Then with the approximation in equation (<ref>) ([_slope ; ξ_n^u]) ≈σ_ε^2d k^-d(Σ_)^-1. For the information matrix of ξ_k,n^* we have ([_slope ; ξ_k,n^*]) ≈σ_ε^2d (nm_n)^-d(Σ_)^-1, where m_n = 2q_1-(k/n)/df_χ^2_d(q_1-(k/n)) + k/n. from equation (<ref>). Thus _D_slope(ξ_n^u) ≈ k/(nm_n). 
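As a quick numerical illustration of these approximations (a sketch under assumed values of n, k and d, not results reported in the paper), the closed-form term m_n and the implied efficiency of uniform random subsampling can be evaluated, and m_n can be cross-checked by Monte Carlo:

```python
import numpy as np
from scipy import stats

def m_closed_form(alpha, d):
    """m = 2*q*f_chi2_d(q)/d + alpha, with q the (1-alpha)-quantile of chi^2_d,
    as in the standard normal example above."""
    q = stats.chi2.ppf(1.0 - alpha, df=d)
    return 2.0 * q * stats.chi2.pdf(q, df=d) / d + alpha

n, k, d = 100_000, 1_000, 50       # assumed illustrative sizes
alpha = k / n
m_n = m_closed_form(alpha, d)
print("m_n:", m_n)
print("approx. efficiency of uniform subsampling:", alpha / m_n)   # k/(n*m_n)

# Monte Carlo cross-check of m = E[X_1^2 * 1{||X||^2 >= q}] for X ~ N(0, I_d).
rng = np.random.default_rng(0)
Xs = rng.standard_normal((200_000, d))
q = stats.chi2.ppf(1.0 - alpha, df=d)
m_mc = np.mean(Xs[:, 0] ** 2 * (np.sum(Xs ** 2, axis=1) >= q))
print("Monte Carlo estimate of m_n:", m_mc)
```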
The efficiency of uniform random subsampling turns out to be of exactly the same shape as the approximated /d, only multiplied by k/σ_ε^2 such that the efficiency ranges between zero and one as indicated by the second vertical axis on the right side of Figure <ref>. § SIMULATION STUDY We divide our simulation study into two parts. First, we study the performance of the optimal subsampling designs derived from Theorem <ref> in the case of multivariate normally distributed and multivariate t-distributed covariates with three degrees of freedom, respectively, both with and without correlation between the covariates. For the t-distribution, we choose three degrees of freedom to maximize dispersion, while maintaining existence of the variance. Second, we use the simplified method discussed in Section <ref> that only takes the variance of the covariates into account while ignoring the correlation. The latter can be performed in less computation time, (nd). For better comparability, the simulation is structured similar to those in the work by <cit.>. The data is generated from the linear model Y_i = β_0 + _i^⊤_slope + ε_i, i=1,…,n, where _slope = (β_1,…,β_d) with d = 50. The parameter vector was generated from a multivariate normal distribution in each iteration. Note, however, that the value of does not have any influence on the results. For the errors we choose independent ε_i ∼(0,1). The subdata is of fixed size k = 1000, whereas the size of the full data takes the values n = 5000, 10000, 100000, and one million. For each value of n, we apply our subsampling methods and calculate the OLS estimator β̂ for each method. We repeat this S = 10000 times. We select subdata based on our approach (D-OPT) and the IBOSS method (IBOSS) by <cit.>. Further we select subdata by uniform sampling (UNIF) and give a comparison to estimates based on the full data (FULL) to give context to our approach and the IBOSS method. In each iteration s, we form the subsample in the k × d matrix X_(s) (based on the respective method) and calculate its inverse information matrix 𝒞_(s) = ((1_k,X_(s))^⊤ (1_k ,X_(s)))^-1, where 1_k is a k-dimensional vector with all entries equal to one. We then take the average of these (d+1) × (d+1) covariance matrices 𝒞 = 1/S∑_s = 1^S𝒞_(s) and partition this matrix the following way. 𝒞 = [ c_0 c_1^⊤; c_1 𝒞_1, ] where c_0≥ 1/k with equality if c_1 = 0. The submatrix 𝒞_1 is of dimension d × d. Note that 𝒞 and 𝒞_1 are the simulated covariance matrices of and _slope, respectively. The mean of the covariance matrices is taken instead of the mean of the information matrices, which has been the target quantity for asymptotic behavior. Note that the inverse of the mean information matrix is a lower bound for the mean covariance matrix by Jensen's inequality. Then, we calculate the determinant of 𝒞_1 and scale it to homogeneity, i. e. (𝒞_1)^(1/d). Alternatively to using (𝒞_1)^(1/d) to compare the different methods, we have also used the MSE of _slope, i. e. _β̂_slope = S^-1∑_s = 1^S‖β̂_slope^(s) - β_slope‖_2^2, where β̂_slope^(s) is the estimator of the s-th iteration. Results were however very similar in all cases and, importantly, the comparison between them does not change. In particular note that the trace of 𝒞_1 is equal to __slope. Consider the special case of homoscedastic covariates. Then all diagonal elements of the theoretical counterpart of (_slope) are equal and all off-diagonal entries are equal to zero. 
Thus in theory we have the MSE divided by d is equal to the term ((_slope))^1/d of interest in our simulations. In this situation, the D-optimal subsampling design is equal to the A-optimal subsampling design for the slope parameters, which minimizes ((_slope)) and thus the MSE. As A- and D-optimal subsampling designs are not equal in other cases we recommend using an A-optimal subsampling design as a benchmark for other methods when the MSE is used as the measure of comparison, but we will not follow this line further here. All simulations are performed using R Statistical Software <cit.>. §.§ Optimal Subsampling Designs Here we use subsampling designs that are constructed by Theorem <ref>. Let Σ_ = 𝕀_50 or Σ_0.5, where Σ_0.5 = (𝕀_50 + 1_501_50^⊤)/2 represents compound symmetry with ρ = 0.5. Figure <ref> shows the results for normally distributed covariates _i with 𝕀_50 and Σ_0.5 as the covariance matrix respectively. Figure <ref> shows the results for the t-distribution with three degrees of freedom where 𝕀_50 and Σ_0.5 are the respective scale matrices, so again we have compound symmetry with correlation ρ = 0.5 in the latter case. Here, we omit the uniformly selected subsample for better visibility because uniform subsampling performs substantially worse. For uniform random subsampling, the determinant is close to constant at around 4.6 × 10^-4 for all four values of n in the case of no correlations and similarly around 8.5 × 10^-4 in the case with correlations. As expected, regardless of the distribution of the covariates, for uniform random subsampling the full sample size n has no impact on (𝒞_1)^(1/d), which is equal to n/k times that value of the full data. With the prior knowledge of the distribution of the covariates, our method is able to outperform the IBOSS method. As is to be expected, our approach can increase its advantage over the IBOSS method when there is correlation between the covariates. The advantage is however minor for the heavy-tailed t-distribution, where both methods perform much closer to the full data. In particular, for large n both perform basically as good as the full data. For reference, in the case of positive correlations the relative efficiency of the IBOSS method with respect to the D-OPT method, i.e. the ratio of the corresponding values of D-OPT divided by IBOSS, ranges from approximately 0.951 to 0.928 for the different values in full sample size n. §.§ Simplified Method Finally, we want to study the simplified version of the D-OPT method that only scales by standard deviations and can be performed in (nd). In this method, we ignore the correlations between the covariates. Consider the diagonal matrix Σ̃ = (σ_1^2,…,σ_d^2) containing only the diagonal entries of Σ_ and use the diagonal matrix à = (σ_1,…,σ_d) of standard deviations to transform the data. The inverse Ã^-1 = (σ_1^-1,…,σ_d^-1) is a diagonal matrix and thus the transformation Ã^-1X only requires computing time (nd). We examine this method in the case of normally distributed covariates and refer to the simplified D-OPT method as “D-OPT-s" in the figures. However, we now need to select the k data points with the largest Euclidean distance from the origin, instead of comparing to the (1-k/n)-quantile of the χ^2 distribution with d degrees of freedom, as the transformed data is not standard normally distributed, when correlation is present. Note that in the case of no correlation between the covariates the simplified method is equal to the D-OPT method of the previous section. 
Thus results for this scenario can be inherited from Figure <ref>(A). Further, we consider compound symmetry with ρ=0.05 and ρ=0.5. The 50 × 50 covariance matrices of the _i are Σ_ = Σ_0.05 or Σ_0.5, with σ_0.05,ii = 1 and σ_0.05,ij = 0.05 for i ≠ j and Σ_0.5 as before. Figure <ref> shows the results for normally distributed covariates _i with Σ_0.05 and Σ_0.5 as the covariance matrix, respectively. While the advantage of the D-optimal subsampling design over the IBOSS method is reduced, there are still scenarios where it can outperform the IBOSS method such as the one of covariance matrix Σ_0.05 with small correlations. However, if correlations are particularly large as in the case of covariance matrix Σ_0.5, the simplified method D-OPT-s seems to perform much worse and only slightly outperforms uniform subsampling. § CONCLUDING REMARKS We have constructed optimal subsampling designs ξ^* for multiple linear regression, first for distributions that are invariant w.r.t. SO(d), then for distributions that can be generated from such a distribution via location-scale transformation. We have given two methods of implementation and discussed that the computing time of the D-optimal method is (nd^2), whereas the simplified version can be performed in (nd). We have compared these implementations to the IBOSS method of <cit.> in simulation studies with the expected result that the full method outperforms IBOSS as well as the simplified method outperforms the IBOSS method in certain settings with small correlations between the covariates. Besides applications where the covariance matrix of the covariates is known, our method can be used as a benchmark for other methods that do not require prior knowledge of the distribution of the covariates. Note, that the proposed subsampling designs depend both on the distribution of the covariates and the model. If either is incorrect, the subsampling designs will no longer be optimal. Recent work on subsampling for model discrimination is done by <cit.>. § ACKNOWLEDGMENTS The work of the first author is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within GRK 2297 MathCoRe. § PROOFS The result follows immediately from Corollary 1 (c) in the work by <cit.> as in Theorem 3.1. in <cit.>. We define h as a mapping from ℝ_>0×𝔹 to itself. Let h = (h_0, h_1), where h_0(r) = r is the identity on the radius and h_1∈ SO(d) acts on the angle . First note that for any B = B_R× B_Θ with B_R∈ℬ(_>0) and B_Θ∈ℬ(𝔹) and any h_1∈ SO(d) the mapping h only affects the set B_Θ on the angle. And since μ is a left Haar measure w.r.t SO(d) it holds that μ(h_1^-1(B_Θ)) = μ(B_Θ) for any h_1∈ SO(d) and any B_Θ∈ℬ(𝔹). We first prove that the composition ξ=ξ_R ⊗μ of a measure ξ_R on the radius and the Haar measure μ implies invariance. For any B = B_R× B_Θ and any h_1∈ SO(d), we have ξ^h(B) = (ξ_R⊗μ)(h^-1(B_R× B_Θ)) = ξ_R(B_R)μ(h_1^-1(B_Θ)) = ξ_R(B_R)μ (B_Θ) = ξ(B). Because the σ-algebra ℬ(ℝ_>0)⊗ℬ(𝔹) is generated by ℬ(ℝ_>0)×ℬ(𝔹), we conclude that ξ is invariant w.r.t. SO(d). Conversely, let us assume ξ = ξ_R⊗ξ_Θ | R (this decomposition exists by the Radon–Nikodym Theorem) is invariant w.r.t. SO(d) and there exist sets B_R∈ℬ(_>0) with ξ_R(B_R) > 0 and B_Θ∈ℬ(𝔹) such that ξ_Θ | R∈ B_R(B_Θ) ≠μ(B_Θ) > 0. Then there exists a rotation h_1∈ SO(d) such that ξ_Θ | R ∈ B_R(B_Θ) ≠ξ_Θ | R ∈ B_R(h_1^-1(B_Θ)), and subsequently we have ξ^h(B_R× B_Θ) ≠ξ(B_R× B_Θ). 
This contradicts the invariance assumption and we conclude that SO(d) invariance of ξ implies that ξ_Θ | R(B_Θ) = μ(B_Θ) almost everywhere w.r.t. ξ_R. This concludes the proof. The _i are invariant w.r.t. SO(d) and thus we can write the density of the _i as f_() = f_R()(r) f_μ(θ) by Lemma <ref>. We write the density of ξ as f_ξ() = f_R(ξ)(r)f_Θ|R(θ). We have f_R(ξ)(r) ≤ f_R()(r), because ξ∈Ξ^f_. As a result f_ξ()=f_R(ξ)(r) f_μ(θ) ≤ f_R()(r)f_μ(θ) and thus ξ∈Ξ^f_. Consider the information matrix of a subsampling design ξ in hyperspherical coordinates, i. e. with the transformation T and its inverse S. The Jacobi matrix of T is denoted by _T(r,). Then (ξ) = ∫_^d( )( )^⊤ξ( ) = ∫_S(^d)(T(r,))(T(r,))^⊤ξ^S( (r,)) = ∫_[0,∞)∫_𝔹(T(r,))(T(r,))^⊤ |(_T(r,))| ξ_Θ | R=r() ξ_R( r). Now we study the sum of information matrices of rotated subsampling designs. 1/|G|∑_g ∈ G(ξ^g) = 1/|G|∑_g ∈ G∫_[0,∞)∫_𝔹(T(r,))(T(r,))^⊤ |(_T(r,))| ξ^g_Θ | R=r() ξ_R( r) = 1/|G|∑_g ∈ G∫_[0,∞)∫_𝔹(g^-1(T(r,)))(g^-1(T(r,)))^⊤ |(_T(r,))| ξ_Θ | R=r() ξ_R( r) = ∫_[0,∞)∫_𝔹∑_g ∈ G1/|G|(g^-1(T(r,)))(g^-1(T(r,)))^⊤ |(_T(r,))| ξ_Θ | R=r() ξ_R( r). The inner sum can be regarded as the information matrix of a design putting equal weight on the vertices of a rotated d-dimensional cross-polytope. This is equal to the information matrix of the uniform distribution on the d-sphere, see <cit.> or <cit.>. Then1/|G|∑_g ∈ G(ξ^g) = ∫_[0,∞)∫_𝔹∫_𝔹(T(r,γ))(T(r,γ))^⊤ |(_T(r,γ))| μ(γ) ξ_Θ | R=r() ξ_R( r) = ∫_[0,∞)∫_𝔹(T(r,γ))(T(r,γ))^⊤ |(_T(r,γ))| μ(γ) ∫_𝔹ξ_Θ | R=r() ξ_R( r) = ∫_[0,∞)∫_𝔹(T(r,γ))(T(r,γ))^⊤ |(_T(r,γ))| μ(γ) ξ_R( r) = (ξ_R⊗μ). In the third equality we used ∫_𝔹ξ_Θ | R=r()=1. By the result (ξ) = 1/|G|∑_g∈ G(ξ^g) of Lemma <ref> we have Φ((ξ)) ≤1/|G|∑_g∈ GΦ((ξ^g)) by the convexity of Φ. Note that g ∈ G ⊂ SO(d). We then utilize that Φ is invariant w.r.t. SO(d), i. e. Φ((ξ^g))=Φ((ξ)), and obtainΦ((ξ)) ≤Φ((ξ)). We apply Theorem <ref>. ξ^* is defined such that it is equal to the bounding distribution on ^* = {∈: ‖‖^2_2≥ q_1-α} and equal to zero on {∈: ‖‖^2_2 < q_1-α}. The deduced sensitivity function from equation (<ref>) is ψ(,ξ^*) = α/m‖‖_2^2. Then we can immediately see that s^* = inf_x∈^*ψ(,ξ^*) = sup_x∉^*ψ(,ξ^*) = α/m q_1-α. and thus conditions (i)-(iii) are satisfied, which concludes the proof. Note that for any d × d matrix A and any constant μ∈^d, there exists a bijection Ξ^f_→Ξ^f_, where every subsampling design ζ∈Ξ^f_ is mapped to ξ∈Ξ^f_, which is defined as ξ(B) = ζ(A^-1(B - μ)) for any measurable set B ⊂^d. Let ζ∈Ξ^f_. Consider the information matrix of the subsampling design ξ. (ξ) = ∫_^d()()^⊤ξ() = ∫_^d(A+μ)(A+μ)^⊤ζ () = ∫_^d[ 1 0^⊤; μ A ]() ()^⊤[ 1 μ^⊤; 0 A^⊤ ]ζ() = [ 1 0^⊤; μ A ](ζ) [ 1 μ^⊤; 0 A^⊤ ]. Then the determinant of the information matrix can be calculated as follows. ((ξ)) = (AA^⊤)((ζ)). Thus the determinant of any subsampling design in Ξ^f_ is equal to the determinant of the corresponding subsampling design in Ξ^f_ scaled by the constant factor (AA^⊤). Therefore if ζ^* minimizes Ψ(ζ) in Ξ^f_, then ξ^* minimizes Ψ(ξ) in Ξ^f_. From Lemma <ref> we have that ξ^*(B) = ζ^*(A^-1(B - μ)) for any measurable set B ⊂^d, where ζ^* is the optimal subsampling design for covariates _i in the setting of Theorem <ref>. We inherit the desired result by applying Theorem <ref>. Let _i = (X_i,1,…,X_i,d) ∼_d(0, 𝕀_d ) with density f_(). From Theorem <ref>, we know that the support of ξ^* is {∈^d: ‖‖^2_2≥χ^2_d,1-α} on which it is equal to the d-dimensional standard normal distribution. 
By definition, the information matrix of ξ^* is (ξ^*) = ∫_^d()()^⊤ξ^*(). Any off-diagonal entries ∫_^dx_jξ^*() and ∫_^dx_jx_kξ^*(), j,k=1,…,d, j≠ k are equal to zero. The upper left element of the matrix is ξ^*(^d) = α by the definition of a subsampling design. The other elements on the main diagonal are equal because ξ^* is invariant to SO(d) and thus ∫_^d x_j^2ξ^*() = ∫_^d x_k^2ξ^*() for any j,k = 1,…,d. Note that W = ‖_i‖^2_2 follows a χ^2 distribution with d degrees of freedom. We denote the density of the χ^2 distribution with d degrees of freedom f_χ^2_d(w). We start to calculate m by writing it as the expected value of W. m = ∫_^* x_1^2f_() = [X_i,1^2_{‖_i‖^2_2≥χ^2_d,1-α}] = 1/d[‖_i‖^2_2_{‖_i‖^2_2≥χ^2_d,1-α}] = 1/d[W _{W ≥χ^2_d,1-α}]. We then write the expectation as an integral and insert the density f_χ^2_d of the χ^2 distribution with d degrees of freedom. m = 1/d∫_χ^2_d,1-α^∞ w f_χ^2_d(w) w = 1/d∫_χ^2_d,1-α^∞ w 1/2^d/2Γ(d/2) w^(d/2) - 1 e^-w/2 w = 1/d 2^d/2Γ(d/2)∫_χ^2_d,1-α^∞ w^d/2 e^-w/2 w. We have moved constants outside the brackets to simplify the following integration by parts which yields m = 1/d 2^d/2Γ(d/2)([-2w^d/2e^-w/2]_χ^2_d,1-α^∞ + d∫_χ^2_d,1-α^∞ w^(d/2)-1 e^-w/2 w ) = 1/d 2^d/2Γ(d/2)(2 (χ^2_d,1-α) ^d/2e^- χ^2_d,1-α /2 + d∫_χ^2_d,1-α^∞ w^(d/2)-1 e^-w/2 w ) = (χ^2_d,1-α) ^d/2e^- χ^2_d,1-α /2/d 2^(d/2)-1Γ(d/2) + ∫_χ^2_d,1-α^∞w^(d/2)-1e^-w/2/ 2^d/2Γ(d/2) w. After multiplying out, the latter term simplifies to α because the integrand is the density of the χ^2 distribution with d degrees of freedom. m = (χ^2_d,1-α) ^d/2e^- χ^2_d,1-α /2/d 2^(d/2)-1Γ(d/2) + α = 2 χ^2_d,1-α/df_χ^2_d( χ^2_d,1-α ) + α. We conclude by expressing the left term by the density f_χ^2_d of the χ^2 distribution with d degrees of freedom. We get from equation (<ref>) that the covariance matrix of is [;ξ_k,n^*] ≈σ_ε^2(_n(ξ_n^*))^-1. As in the proof of Lemma <ref>, _n(ξ_n^*) = [ 1 0^⊤; A ]_n(ζ_n^*) [ 1 ^⊤; 0 A^⊤ ]. Recall that _n(ζ_n^*) =n (k/n,m_n,…,m_n), where m_n = ∫ x_1^2ζ_n^*(). We get for the covariance matrix of the asymptotic distribution of the parameter estimator [β̂;ξ_k,n^*] ≈σ_ε^2( [ 1 0^⊤; A ]_n(ζ_n^*) [ 1 ^⊤; 0 A^⊤ ])^-1 = σ_ε^2[ 1 ^⊤; 0 A^⊤ ]^-1_n(ζ_n^*)^-1[ 1 0^⊤; A ]^-1 = σ_ε^2[ 1 -(A^-1)^⊤; 0 (A^-1)^⊤ ] n^-1((k/n)^-1,m_n^-1,…,m_n^-1) [ 1 0^⊤; -A^-1 A^-1 ]. The approximation of the covariance matrix of the slope parameters estimators _slope is given by the lower right block of the matrix above. [_slope;ξ_k,n^*] ≈σ_ε^2 (A^-1)^⊤(nm_n)^-1𝕀_dA^-1 = σ_ε^2 (n m_n)^-1Σ_^-1. plainnat
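To connect the acceptance rule with the comparison reported in the simulation study above, here is a condensed end-to-end sketch. The paper's simulations were run in R with d = 50, k = 1000 and S = 10000 repetitions; the Python version below uses a deliberately small toy configuration and only a uniform-subsampling comparator, so it is illustrative rather than a reproduction of the reported figures.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, d, S = 20_000, 500, 10, 200                  # toy sizes, far smaller than the paper's
Sigma = 0.5 * (np.eye(d) + np.ones((d, d)))        # compound symmetry with rho = 0.5
L = np.linalg.cholesky(Sigma)
L_inv = np.linalg.inv(L)

sums = {"D-OPT": np.zeros((d + 1, d + 1)), "UNIF": np.zeros((d + 1, d + 1))}
for _ in range(S):
    X = rng.standard_normal((n, d)) @ L.T          # covariates ~ N(0, Sigma), mean zero
    W = X @ L_inv.T                                # whitened covariates
    norms = np.einsum("ij,ij->i", W, W)
    subsets = {"D-OPT": np.argsort(norms)[-k:],
               "UNIF": rng.choice(n, size=k, replace=False)}
    for name, rows in subsets.items():
        Z = np.column_stack([np.ones(k), X[rows]])
        sums[name] += np.linalg.inv(Z.T @ Z)       # covariance matrix C_(s) of the subsample

for name, total in sums.items():
    C1 = (total / S)[1:, 1:]                       # slope block of the averaged matrix
    print(name, np.linalg.det(C1) ** (1.0 / d))    # homogeneous D-criterion det(C_1)^(1/d)
```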
http://arxiv.org/abs/2307.01771v1
20230704152223
AT2023fhn (the Finch): a Luminous Fast Blue Optical Transient at a large offset from its host galaxy
[ "A. A. Chrimes", "P. G. Jonker", "A. J. Levan", "D. L. Coppejans", "N. Gaspari", "B. P. Gompertz", "P. J. Groot", "D. B. Malesani", "A. Mummery", "E. R. Stanway", "K. Wiersema" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA" ]
AT2023fhn (the Finch): a Luminous Fast Blue Optical Transient at a large offset from its host galaxy A. A. Chrimes, P. G. Jonker, A. J. Levan, D. L. Coppejans, N. Gaspari, B. P. Gompertz, P. J. Groot, D. B. Malesani, A. Mummery, E. R. Stanway, K. Wiersema ========================================================================================================================================================= Luminous Fast Blue Optical Transients (LFBOTs) - the prototypical example being AT 2018cow - are a rare class of events whose origins are poorly understood. They are characterised by rapid evolution, featureless blue spectra at early times, and luminous X-ray and radio emission. LFBOTs thus far have been found exclusively at small projected offsets from star-forming host galaxies. We present Hubble Space Telescope, Gemini, Chandra and Very Large Array observations of a new LFBOT, AT 2023fhn. The Hubble Space Telescope data reveal a large offset (>3.5 half-light radii) from the two closest galaxies, both at redshift z∼0.24. The isolated environment of AT 2023fhn is in stark contrast with previous events, is challenging to explain with most LFBOT progenitor models, and calls into question the homogeneity of LFBOTs as a class. supernovae:individual:AT 2023fhn – transients:supernovae – transients:tidal disruption events – transients:neutron star mergers § INTRODUCTION The development of wide-field, high-cadence and deep optical surveys in recent years - including the Zwicky Transient Facility <cit.>, Asteroid Terrestrial-impact Last Alert System <cit.>, Panoramic Survey Telescope and Rapid Response System <cit.>, Gravitational-wave Optical Transient Observer <cit.> and Black hole Gravitational-wave Electromagnetic counterpart array <cit.>, to name a few - is leading to ever more transient detections in the extremes of parameter space. This trend is set to continue with the Vera Rubin Observatory <cit.>. Such surveys led to the discovery of fast blue optical transients (FBOTs), first identified as a class by <cit.> in ZTF. FBOTs rise and fade on timescales of days, and have (early-time) g-r colours of -0.3 or bluer. These events also have featureless, black-body-like spectra at early times with inferred temperatures >10^4 K <cit.>. It has since become clear that the majority are infant supernovae with low ejecta masses <cit.>, but a small number fade too rapidly to be powered by Ni-56 decay (faster than 0.2-0.3 magnitudes per day), have peak absolute magnitudes rivalling superluminous supernovae (<-20), and have accompanying luminous X-ray and radio emission. These bright, multi-wavelength FBOTs have been dubbed luminous FBOTs (LFBOTs) <cit.>, the first example of which is AT 2018cow <cit.>. Since AT 2018cow, several other LFBOTs have been discovered (both in real time and in archival searches), with varying degrees of multi-wavelength coverage. These include ZTF18abvkwla <cit.>, CSS161010 <cit.>, ZTF20acigmel <cit.>, AT2020mrf <cit.> and AT 2022tsd <cit.>. There are also a number of other lower-confidence candidates <cit.>. Despite the growing number of LFBOT discoveries, these events are intrinsically rare - the volumetric rate of AT 2018cow-like LFBOTs is estimated to be no more than 0.1 per cent of the local supernova rate <cit.>. The nature of LFBOTs remains unclear.
The timescale of their light-curve evolution, X-ray and radio luminosity, late-time UV emission in the case of AT 2018cow <cit.>, and preference for star-forming dwarf and spiral hosts have proved challenging to explain with a single self-consistent model. Circumstellar medium interactions around young supernovae are a plausible origin for the early-time spectra and X-ray/radio emission of some FBOTs <cit.>, as well as for the optical polarisation behaviour <cit.>. However, the peak absolute magnitude, rapid subsequent fading, high radio/X-ray luminosity and peculiar optical and radio polarisation of LFBOTs <cit.> require an alternative explanation. Following AT 2018cow, a few main classes of model emerged. These include central engines born in low-ejecta core-collapse events, powered by black hole accretion or magnetar spin-down <cit.>; mergers of stellar-mass black holes and hydrogen-poor stars <cit.>; or the tidal disruption of a white dwarf by an intermediate mass black hole <cit.>. The former is motivated by the rapid light-curve decay and multi-wavelength evolution which severely limits the possible ejecta mass; the latter two also by the timescale - which is too fast for a supermassive black hole (SMBH) tidal disruption event (TDE) - and the weak (initially absent) hydrogen lines in the spectra. Many of these scenarios face challenges. For example, magnetar central engines can power the early or late-time UV emission in AT 2018cow, but not both <cit.>, while the locations of LFBOTS thus far - at small offsets within star-forming dwarfs and spirals - favour a short-lived, massive star progenitor over an IMBH TDE. Further insight will come from similarly detailed studies of other LFBOTs, to establish which features are common to all objects in this class, and to understand the variety among them. In this letter, we present multi-wavelength observations of a new LFBOT, AT2023fhn ("the Finch"). The transient is significantly offset from the nearest galaxies, representing a deviation in terms of its environment from previous LFBOTs, and posing a challenge to the favoured LFBOT progenitor models. This letter is structured as follows. In Section <ref> we detail how AT 2023fhn was discovered and classified. Section <ref> presents follow-up observations, including Hubble Space Telescope (HST) imaging and Gemini spectroscopy. In Section <ref> we discuss possible interpretations, and conclusions are drawn in Section <ref>. We adopt a cosmology with H_0=69.6 km s^-1 Mpc^-1, Ω_ m=0.29 and Ω_Λ=0.71 <cit.>. Uncertainties are given as 1σ unless otherwise stated, and magnitudes are quoted in the AB system <cit.>. § DISCOVERY AND CLASSIFICATION §.§ Early photometry and spectra AT 2023fhn was discovered on 10 Apr 2023 with m(r)=19.74 by ZTF <cit.>. The blue colour of g-r ∼ -0.47 and rapid ∼0.2 mag day^-1 evolution immediately classified AT 2023fhn as an FBOT <cit.>. <cit.> subsequently obtained Gemini GMOS-S spectroscopy of AT 2023fhn on 19-04-2023 (programme GS-2023A-Q-127), finding a featureless blue spectrum. On 20 Apr 2023 they obtained a spectrum of the nearby spiral galaxy (∼5 arcsec offset), yielding a redshift of z∼0.24. At this redshift, the earliest ZTF g-band (12 Apr 2023) absolute magnitude is -21.5. §.§ X-ray and radio observations We triggered Chandra X-ray Observatory ToO observations (PI: Chrimes; program 24500143; Obs ID 26624), which were obtained on 25 Apr 2023. The faint-mode ACIS-S exposure lasted 30 ks. 
The data were reduced and analysed with standard ciao (v4.13, caldb v4.9.3) procedures including reprocessing, energy filtering and source measurement with srcflux. Assuming a power-law with a photon index Γ=2 <cit.>, the unabsorbed source flux after correction for the Galactic neutral hydrogen column density of NH=2.4×10^20cm^-2 <cit.> is 7.6^-1.8_+2.2× 10^-15 erg cm^-2 s^-1 (0.5-7.0 keV). At the redshift of the spiral, this corresponds to a luminosity of 1.3^-0.3_+0.4×10^42 erg s^-1, comparable to other LFBOTs at the same epoch <cit.>. Early radio observations (within a few weeks of discovery) produced non-detections, including a 10 GHz Northern Extended Millimeter Array upper limit of 2× 10^29 erg s^-1 Hz^-1 on the luminosity <cit.>, and upper limits from our own programme (SC240143, PI: Chrimes) on the Karl G. Jansky Very Large Array (VLA). We observed AT 2023fhn on 22 Apr 2023 (≈12 days post detection) in standard phase-referencing mode using 3C286 as a flux density and bandpass calibrator, with J1014+2301 and J1016+2037 as complex gain calibrators. The observations were calibrated using the VLA Calibration Pipeline 2022.2.0.64 in CASA version 6.4.1 with additional manual flagging. We imaged the data using the task tclean in CASA with Briggs weighting with a robust parameter of 1. No significant emission was detected at the source location. We provide the upper-limits in Table <ref>. These early-time non-detections are consistent with the behaviour of previous LFBOTs. The transient was subsequently detected with the VLA on 15 Jun 2023 <cit.> with luminosity 7.6× 10^28 erg s^-1 Hz^-1 (at 10 GHz), also similar to other LFBOTs at the same epoch <cit.>. The rapid evolution (timescale of a few days) and peak absolute magnitude of -21.5 places AT 2023fhn firmly within the LFBOT region of timescale/peak luminosity parameter space <cit.>. Along with the hot featureless optical spectrum, X-ray and radio detections, AT 2023fhn is unambiguously identified as a new AT 2018cow-like LFBOT. § FOLLOW-UP OBSERVATIONS §.§ Hubble Space Telescope Imaging §.§.§ Data and reduction HST WFC3/UVIS observations were taken with the F555W and F814W filters on 17 May 2023 (PI: Chrimes; proposal ID 17238). Three 364 s exposures with sub-pixel dithers were taken in each filter. The _flc images were combined using astrodrizzle[Part of drizzlepac, <http://drizzlepac.stsci.edu/>] <cit.>, with pix_frac = 0.8 and a final pixel scale of 0.025 arcsec pixel^-1. The transient is clearly identified in the reduced images, as shown in Figure <ref>. In this high-resolution imaging, two adjacent galaxies are fully resolved: a barred spiral to the south and a closer dwarf/irregular to the southeast. These galaxies have Sloan Digital Sky Survey (SDSS) data release 16 <cit.> IDs SDSS J100803.73+210422.5 and SDSS J100803.87+210425.8 respectively. §.§.§ Photometry We perform photometry on AT 2023fhn with three methods. The first two use standard photutils aperture photometry procedures in python <cit.>, but the background level is calculated in two ways. The first uses the MedianBackground estimator (using the whole image for the estimate). The second used an annulus around the source (with inner and outer radii of 1.5 and 4 times the aperture radius, and pixel values in the annulus clipped at ± 3 σ). 
For each of these background estimations, two aperture sizes are used - 0.2 and 0.4 arcsec - with the appropriate aperture corrections for F555W and F814W applied[<https://hubblesite.org/sites/www/home/hst/instrumentation/wfc3/data-analysis/photometric-calibration/uvis-encircled-energy>]. AB magnitudes are derived from the photflam and photplam header values and the published conversion procedures[<https://hst-docs.stsci.edu/wfc3dhb/chapter-9-wfc3-data-analysis/9-1-photometry>]. For the third method we use dolphot <cit.>. dolphot performs PSF photometry on each _flc image separately; these measurements are then combined to give the reported value and its error. dolphot provides instrumental magnitudes in the Vega system, but we instead report AB magnitudes using conversions calculated with stsynphot <cit.>. Magnitude measurements for each combination of filter and methodology are given in Table <ref>. §.§.§ Galaxy offsets and enclosed flux radii The sky-projected spatial offset of a transient from its host is a key piece of information for understanding its origin. Host-normalised offsets, offsets divided by the half-light radius of the host, are widely used in the literature (see Figure <ref>) as they account for the projected extent of the host galaxy. In order to measure the offsets and host-normalised offsets of AT 2023fhn from the two nearby galaxies, we measure their centroids and half-light radii r_50 (from Petrosian profile fitting) using the python package statmorph <cit.>. We require at least 5 adjacent pixels, each 1 σ above the background, to recognize objects. The resultant segmentation maps are convolved with a uniform filter of size 10 pixels and these filtered segmentation maps are used to select galaxy-associated pixels (by requiring values >0.5). We note that the transient lies outside the pixels selected as associated with the galaxy in both cases. At z=0.238 - the redshift of the spiral (and its satellite, see Section <ref>) - the physical scale is 3.80 kpc arcsec^-1. From the centre of the spiral, the projected offset of AT 2023fhn δ r is 16.51±0.09 kpc. From the centre of the satellite, the offset is 5.35±0.06 kpc (uncertainties as described below). The half-light radius (enclosing 50 per cent of the flux, r_50) is measured to be 4.5±0.2 kpc and 1.34±0.06 kpc for the spiral and satellite in F555W respectively. In F814W, these values are 3.90±0.13 kpc and 1.15±0.04 kpc, respectively. This corresponds to host-normalised offsets (r_ n= δ r/r_50) of 3.67±0.17 and 3.98±0.17 in F555W, while in F814W, r_ n = 4.25±0.14 and 4.66±0.17 (for the spiral and satellite respectively). The quoted uncertainties on the offsets are the quadrature sum of transient positional uncertainty (given by FWHM/(2.35×SNR), where FWHM is the full-width at half-maximum and SNR the signal-to-noise ratio) and the uncertainty on the centroid (x_ c,y_ c) of the galaxies. The centroid uncertainties are calculated by re-sampling the input _flc image set 100 times using their [ERR] extensions, re-drizzling each re-sampled image set, and measuring the morphological properties with statmorph on each iteration of the re-drizzled image <cit.>. The mean and standard deviation of the resultant x_ c, y_ c and r_50 distributions are used, along with the AT 2023fhn positional uncertainties, to calculate the values and their uncertainties quoted above. 
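The host-normalised offsets and their uncertainties follow directly from these measurements; a minimal sketch of the arithmetic (the FWHM, SNR and centroid uncertainty used below are illustrative placeholders, not the measured values) is:

import numpy as np

def positional_uncertainty(fwhm_arcsec, snr):
    # Centroid uncertainty of an unresolved source: FWHM / (2.35 x SNR)
    return fwhm_arcsec / (2.35 * snr)

def host_normalised_offset(dr_kpc, dr_err_kpc, r50_kpc, r50_err_kpc):
    # r_n = dr / r50 with standard error propagation
    rn = dr_kpc / r50_kpc
    rn_err = rn * np.sqrt((dr_err_kpc / dr_kpc) ** 2 + (r50_err_kpc / r50_kpc) ** 2)
    return rn, rn_err

scale = 3.80                                            # kpc per arcsec at z = 0.238
sigma_transient = positional_uncertainty(0.07, 30.0)    # assumed FWHM (arcsec) and SNR
sigma_centroid = 0.02                                   # assumed galaxy centroid uncertainty (arcsec)
dr_err_kpc = scale * np.sqrt(sigma_transient ** 2 + sigma_centroid ** 2)  # quadrature sum

# F555W values quoted above for the spiral host:
rn, rn_err = host_normalised_offset(16.51, 0.09, 4.5, 0.2)
print(f"offset error ~ {dr_err_kpc:.2f} kpc, r_n = {rn:.2f} +/- {rn_err:.2f}")  # ~3.67 +/- 0.17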
§.§.§ Search for extended emission Given the apparently isolated location of AT 2023fhn, it is prudent to search for any underlying (extended) emission at the transient location, such as a knot of star formation, cluster or background galaxy. To establish whether the emission is unresolved, we first select a reference point source in the image (the object at coordinates α = 10h08m03.13s, δ = +21d04m22.8s). Cutouts around AT 2023fhn and the reference star are interpolated onto a pixel grid with twice the resolution (enabling sub-pixel shifts), before subtraction of the reference image from the one containing AT 2023fhn. The reference is scaled in peak flux and shifted in x,y to minimize the standard deviation at the location of AT 2023fhn in the residual image. The transient, reference and residual images are shown in Figure <ref>. To determine if the residuals are consistent with no underlying extended emission, we perform photutils aperture photometry (with an annulus) as described above. We find 3σ upper limits of 26.7 and 26.3 mag in F555W and F814W, respectively, demonstrating that any underlying (non-transient) source contributing significantly to the flux must be precisely co-located and also unresolved (the physical scale at this distance is 95 pc pixel^-1). To search for extended emission on a larger scale, we smooth the images with a Gaussian filter (σ=1.5) and scale them to show diffuse background light. The inset panels of Figure <ref> show cutouts of the smoothed and scaled images. Faint emission can be seen extending northwest of the satellite in F814W, under the position of AT 2023fhn, plausibly a tidal stream arising from interactions between the spiral and satellite galaxy. §.§ Gemini spectroscopy We obtained two epochs of Gemini/GMOS-S spectroscopy on 22/23 Apr 2023 and 12 May 2023, ∼10 and ∼26 days post discovery respectively (PI: Chrimes, programme GS-2023A-DD-102). The first epoch consisted of 4×500s exposures with the R400 grating, 1 arcsec slit width and two central wavelengths (two exposures at 520 nm and two at 565 nm). The second epoch consisted of 4×1845s exposures, again with the R400 grating and 1 arcsec slit, but all with central wavelength 675 nm. Data reduction was performed using the python package dragons <cit.>. Associated arcs, flats and bias frames were taken as part of the programme. Sky lines and unusable regions (e.g. due to the amplifier 5 failure[<https://www.gemini.edu/node/21963>]) are manually masked. We bin the pixels by a factor of 6 along the wavelength axis to increase the signal-to-noise ratio, and combine the 520 nm and 565 nm centred spectra by taking the mean where they overlap. We correct for Galactic extinction by adopting E(B-V)=0.025 <cit.>, and calculate the extinction at each wavelength with the python extinction <cit.> module assuming R_ V=3.1. For flux calibration, spectro-photometric standard stars observed with the closest-matching observing set-up were found in the Gemini archive. For the 525 nm data we use spectra of EG274 (programme GS-2023A-FT-205), for the 565 nm data we use LTT6248 (GS-2022A-Q-315) and for the 675 nm data we use LTT1020 (GS-2022B-Q-126). The final Galactic extinction-corrected spectra are plotted in Figure <ref>. In our first epoch of spectroscopy (22/23 Apr), AT 2023fhn is detected as shown in Figure <ref>. 
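The Galactic extinction correction itself is straightforward to reproduce; a hedged sketch with the extinction module is given below (the wavelength grid and flux array are placeholders, and the choice of the Fitzpatrick 1999 reddening curve is an assumption, since the text specifies only E(B-V)=0.025 and R_V=3.1):

import numpy as np
import extinction

wave = np.linspace(4000.0, 9000.0, 2000)     # placeholder observed-frame wavelengths (Angstrom)
flux = np.ones_like(wave)                     # placeholder flux-calibrated spectrum

ebv, rv = 0.025, 3.1                          # values adopted in the text
a_lambda = extinction.fitzpatrick99(wave, rv * ebv, rv)   # A(lambda) in magnitudes
flux_corr = extinction.remove(a_lambda, flux)              # de-reddened flux

z = 0.238                                     # redshift of the spiral/satellite
wave_rest = wave / (1.0 + z)                  # shift to the rest frame before fitting

The corrected, rest-frame spectrum is then the input to the black-body and power-law fits discussed next.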
Fitting a black-body to the Galactic extinction-corrected, rest-frame spectrum yields a temperature of 24.8^+2.4_-2.3 ×10^3 K (χ^2_ν=3.66 with 282 degrees of freedom, where uncertainties are derived from the local standard deviation of the spectrum). This compares with a temperature of 17.5^+1.2_-1.0 ×10^3 K derived from FORS2 photometry taken on the following night <cit.>, but the large χ^2_ν indicates that the uncertainties on our fit have likely been underestimated. Increasing the uncertainties by a factor of 2 produces χ^2_ν∼ 1 with T=24.8^+5.0_-4.1 ×10^3 K. We also note that a power-law produces a fit of similar quality. Taking F_λ∝ν^2-β, we find a best-fit power-law index β = -1.24^+0.06_-0.09, with χ^2_ν=3.63 (or β = -1.24^+0.13_-0.17 and χ^2_ν∼ 1 if the uncertainties are again increased by a factor of two). Nevertheless, temperatures of ∼20 ×10^3 K are comparable to AT 2018cow, which had a black-body temperature of 19.3^+0.7_-0.8 ×10^3 K at a similar rest-frame epoch <cit.>. No correction for host-intrinsic extinction has been made, however as revealed in the HST imaging, the transient appears to be far away from any significant sources of dust, as it lies outside the bulk of the optical light of both nearby galaxies. In the second epoch of spectroscopy (12 May) the transient had faded sufficiently to result in a non-detection, with an upper limit on Hα emission at its location (taking an aperture with the same radius as the seeing) of ≲ 8×10^-18 erg s cm^-2 Å^-1. The slit was also placed on the edge of the satellite galaxy. From the centroid and width of the Hα line, we derive a redshift z=0.238±0.004, consistent with the spiral redshift of ∼0.24 reported by <cit.>, and backing up the satellite interpretation for this galaxy. We have adopted z=0.238 for all relevant calculations in this letter. § DISCUSSION All published LFBOTS to date have occurred in star-forming dwarfs <cit.> or spirals <cit.>. AT 2023fhn also has a star-forming host, assuming one of the spiral or dwarf (both are strong Hα emitters) is the galaxy of origin. However, in contrast with LFBOTs so far, it lies far away from the bulk of the host light for either choice of host galaxy. Such offsets are atypical for core-collapse transients due to the short (10-100 Myr lifetimes) of the progenitor stars. Figure <ref> compares the physical projected offsets and host-normalised offsets of a range of transients compiled from the literature, including long gamma-ray bursts (LGRBs), short gamma-ray bursts (SGRBs), superluminous supernovae (SLSNe), other core-collapse supernovae (CCSNe), fast radio bursts (FRBs), Ca-rich and type Ia SNe. The host offsets of four previous LFBOTs are also shown (r_n values were not reported for these events). AT 2023fhn lies much further out from its host than other LFBOTs to date, and its r_ n values of ∼4 are in the extreme tail of the distribution for core-collapse transients - being more typical of FRBs, Ca-rich SNe or binary neutron star mergers (i.e. SGRBs). Since no spectroscopic redshift for the transient has been measured, we now consider the probability of a chance alignment P_ chance between AT 2023fhn and the two galaxies <cit.>. P_ chance is calculated using SDSS DR16 r-band magnitudes for SDSS J100803.73+210422.5 (the spiral) and SDSS J100803.87+210425.8 (the satellite), which are 18.94±0.02 and 22.61±0.14, respectively. For the spiral we find P_ chance=0.78 per cent, and for the satellite P_ chance=1.38 per cent. 
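A minimal sketch of such a chance-alignment estimate is given below; it assumes the commonly used Bloom et al. (2002) galaxy number-count parameterisation and takes the offset itself as the effective radius, so it should not be expected to reproduce the quoted values exactly:

import numpy as np

def sigma_le_m(m_r):
    # Surface density (arcsec^-2) of galaxies brighter than r-band magnitude m_r,
    # using the Bloom et al. (2002) number-count approximation.
    return (1.0 / (0.33 * np.log(10.0))) * 10.0 ** (0.33 * (m_r - 24.0) - 2.44)

def p_chance(radius_arcsec, m_r):
    # Probability of at least one unrelated galaxy of magnitude <= m_r
    # falling within the given radius of the transient.
    return 1.0 - np.exp(-np.pi * radius_arcsec ** 2 * sigma_le_m(m_r))

scale = 3.80                                    # kpc per arcsec at z = 0.238
for name, dr_kpc, m_r in [("spiral", 16.51, 18.94), ("satellite", 5.35, 22.61)]:
    radius = dr_kpc / scale                     # simplest choice: the offset itself
    print(f"{name}: P_chance ~ {100 * p_chance(radius, m_r):.2f} per cent")

The precise values depend on the adopted effective radius, which is why this simplified version differs at the tens-of-per-cent level from the numbers quoted above.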
Therefore, AT 2023fhn is likely to be associated with one of the two galaxies. Assuming a massive star progenitor originating in the nearby spiral (satellite), the offset of 16.51 (5.35) kpc requires even a long-lived progenitor with a lifetime of 100 Myr <cit.> to have a spatial velocity of ≳500 (100) km s^-1. Only a small fraction of massive stars are predicted, and observed, to have such high velocities - arising either from ejection by the supernova of a companion <cit.>, or from dynamical interactions around an SMBH <cit.>. For a more typical lifetime of 10 Myr, the velocities are comparable to the fastest hypervelocity stars known. This poses a challenge to LFBOT models which invoke a massive star <cit.>. Another possibility is the delayed merger of a star and compact object <cit.>. Although the velocities of bound binaries following the first supernova are typically lower than those of unbound companions <cit.>, a long-lived stellar companion may offer a solution. Alternatively, as shown in the inset panels of Figure <ref>, the progenitor may have originated in a faint tidal stream. Based on our early-time radio and Hα upper limits (Sections <ref> and <ref>), and using the star formation rate (SFR) calibrations of <cit.>, we derive 3 σ upper limits on the underlying SFR at the location of AT 2023fhn of ∼6 M_⊙yr^-1 (at 6.05 GHz, the strongest radio constraint) and ∼50 M_⊙yr^-1 (Hα). However, the low surface brightness of the extended emission and its prominence in F814W versus F555W suggest a lack of bright young stars in the vicinity and hence an old stellar population. This does not solve the offset problem for massive star models, but allows for the possibility of a progenitor with a long delay time and low systemic velocity <cit.>. Type Ia and Ca-rich SNe can occur at similar offsets to AT 2023fhn <cit.>, but have peak luminosities many magnitudes too faint to explain AT 2023fhn or other LFBOTs <cit.>. The offset of AT 2023fhn is also reminiscent of the locations of compact object mergers, whose progenitor systems can have large natal kicks and long in-spiral timescales. Binary neutron star / neutron-star black hole mergers give rise to SGRBs and their afterglows, and SGRB afterglow optical luminosities easily exceed that of AT 2023fhn at early times <cit.>. However, afterglows turn red on a timescale of days, and a black-body (rather than synchrotron) spectrum <cit.> is inconsistent with such a scenario <cit.>. Kilonovae - emission following BNS mergers driven by the production of r-process elements - share many features in common with LFBOTs, such as rapid evolution and featureless blue spectra at early times <cit.>. Their peak absolute magnitudes are far fainter than GRB afterglows <cit.>; models for magnetar-boosted KNe can predict LFBOT-like luminosities and colours, but also predict fading even faster than AT 2023fhn <cit.>. Although late-time colour constraints are lacking for most LFBOTs, it is noteworthy that AT 2018cow remains blue several years post-explosion <cit.> - and had polarisation too extreme for a supernova or kilonova origin <cit.> - whereas AT 2023fhn has already turned red (F555W-F814W ∼ 0.1, see Table <ref>). The offset of AT 2023fhn is also similar to the largest FRB offsets, and we note that one FRB has been seen in a globular cluster ∼21 kpc outside M81 <cit.>.
At z∼0.24, even the brightest and largest globular clusters would have optical apparent magnitudes of ∼30 - far fainter than the source in the HST images (Table <ref>) - and angular extents too small to be spatially resolved <cit.>. Therefore an origin of AT 2023fhn in a globular cluster cannot be ruled out. Given the otherwise isolated location, an IMBH TDE explanation would require there to be an underlying cluster, since a dense stellar environment is necessary to make encounters, and hence tidal disruptions, likely to occur <cit.>. § CONCLUSIONS In this letter, we have presented Hubble Space Telescope, Gemini, Chandra and Very Large Array observations of AT 2023fhn. As the first LFBOT to lie at a large offset from its host galaxy, AT 2023fhn raises the question of whether all LFBOTs have the same physical origin. Given the observational evidence, we deem one of the following three progenitors channels most likely, although all face challenges. First - the core-collapse of a runaway/hypervelocity massive star with a central engine, along with minimal ejecta; second - a merger involving at least one compact object, possibly boosted by the formation of a magnetar; third - an IMBH TDE occurring in an undetected globular cluster. Late-time follow-up imaging will reveal if AT 2023fhn, like AT 2018cow, stays UV-bright over long timescales, while detailed modelling of the spectra and multi-wavelength light-curve is needed to reveal more about the origin of this enigmatic transient. § ACKNOWLEDGEMENTS This work is part of the research programme Athena with project number 184.034.002, which is (partly) financed by the Dutch Research Council (NWO). Observations analysed in this work were taken by the NASA/ESA Hubble Space Telescope under program 17238. This research has made use of software provided by the Chandra X-ray Center (CXC) in the application of the CIAO package <cit.>. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Based on observations obtained at the international Gemini Observatory, a program of NSF’s NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership. § DATA AVAILABILITY The data used are available upon request, scripts and parameter files are available at <https://github.com/achrimes2/Finch>. mnras
http://arxiv.org/abs/2307.00665v1
20230702205958
Implementation of nonlocal non-Fourier heat transfer for semiconductor nanostructures
[ "Roya Baratifarimani", "Zahra Shomali" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "physics.comp-ph" ]
Roya Baratifarimani^1, Zahra Shomali^1,*
^1 Department of Physics, Basic Sciences Faculty, Tarbiat Modares University, Tehran, Iran
* Corresponding author, Tel.: +98 (21) 82883147, shomali@modares.ac.ir
The study of heat transport in micro/nanoscale structures is of considerable interest owing to its applications, especially in nanoelectronics. In particular, the precise simulation of the temperature distribution inside transistors is consequential in designing and building more reliable devices that reach lower maximum temperatures during operation. The present study constitutes a framework for micro/nanoscale heat transport analysis which leads to the calculation of accurate temperature/heat flux profiles at low computational cost. The newly introduced non-dimensional parameter γ, representing the strength of the nonlocality, is utilized through nonlocal DPL (NDPL) modeling. Alongside the calculation of the nonlocality coefficient, the factors already appearing in DPL, including the temperature jump coefficient and the phase-lagging ratio, are revisited. The factor γ is found to have a linear relationship with the Knudsen (Kn) number, being 3.5 for Kn=10 and 0.035 for Kn=0.1. Although the nonlocality is most pronounced for large Knudsen numbers, it also plays a vital role for low Knudsen number structures, especially at earlier times. Further, it is found that introducing γ is critical for obtaining accurate temperature and heat flux distributions, which are very close to the practical results of the phonon Boltzmann equation. Dual Phase Lag nonlocality Phonon Boltzmann equation Nanoscale heat transport Thermal management § INTRODUCTION Micro/nanoscale heat conduction in nano-materials, especially semiconductors, has attracted significant attention in recent years. This interest relates, on the one hand, to practical applications in nanoelectronics <cit.> and, on the other hand, to a worthwhile understanding of the fundamental principles behind the behavior of nanoscopic heat carriers <cit.>. Fourier's law, q⃗(r⃗,t)=-k∇⃗T(r⃗,t), which is established as the accurate model to simulate heat conduction in macroscopic structures, fails when length/time scales are, respectively, comparable to the phonon mean free paths and relaxation times <cit.>. Previously, many studies have focused on the temporal behavior of heat transfer at the nanoscale. Specifically, the constitutive macroscopic equation known as the dual-phase-lag (DPL) model has been suggested: q⃗(r⃗,t+τ_q)=-k∇⃗T(r⃗,t+τ_t) In Eq. <ref>, the two phase lags τ_q and τ_t represent, respectively, the phase lag of the heat flux and of the temperature gradient. In comparison with existing microscopic models, the lagging behavior stands in place of electron/phonon coupling in metals <cit.>, umklapp and normal phonon scattering <cit.>, and other internal-energy relaxation mechanisms <cit.>. Both thermal phase-lag values are determined experimentally <cit.> and theoretically <cit.>. Meanwhile, significantly less research has been devoted to the spatial behavior of non-Fourier heat transfer. In principle, the quasi-ballistic heat flux at a given location and at a given time intrinsically depends on the temperature gradient at other locations, in addition to earlier times <cit.>. In other words, the constitutive law is no longer localized in the non-diffusive transport regime, but instead possesses memory in space as well as in time.
From a microscopic point of view, a convolution kernel κ^*, which is called the nonlocal thermal conductivity, fully contains the spatio-temporal memory of the heat flux with respect to the temperature gradient <cit.>. This parameter appears in the following postulated constitutive law, the so-called flux-gradient relation (FGR), which holds when the conventional Fourier law no longer does: q(x,t)=-∫_0^t dt'∫_-∞^∞ dx' κ^*(x-x',t-t')∂ T/∂ x(x',t') Here, the convolution kernel κ^* is the nonlocal thermal conductivity kernel of the medium. κ^* acts as an intrinsic material characteristic of the heat-conducting medium. Under the relaxation time approximation (RTA), the Boltzmann transport equation (BTE) is consistent with the postulated constitutive law. After Fourier and Laplace transformations, and inserting the analytical single-pulse response of the RTA-BTE, one obtains a generic expression in terms of the underlying microscopic phonon dynamics <cit.>. Once the kernel is obtained, the nonlocal heat transport is fully determined. For thermal transport over length and time scales that are relatively larger than the phonon mean free paths and relaxation times, κ^* approaches a constant value and, consequently, the FGR depicted by Eq. <ref> becomes localized and reduces to Fourier's law. On the other side, from a macroscopic viewpoint, the equivalence between the spatial nonlocal behavior and the lagging response in time had been confirmed <cit.>. However, in the absence of a supporting constitutive model, the two were not considered concurrently. In 2010, Tzou and Guo extended the concept of thermal lagging, in time, to the nonlocal response in space <cit.>. The study was supported by phonon scattering arguments, the Guyer–Krumhansl model, and the thermomass model, which express the same nonlocal behavior as first- and second-order effects. At first, nonlocal behavior with thermal lagging was motivated by the famous Guyer–Krumhansl model <cit.>. The model, described by the following equation, includes the parameters which characterize heat-wave propagation through dielectric crystals: q+τ_R∂q/∂ t+(Cc^2τ_R/3)∇ T=(τ_R τ_N c^2/5)[∇^2q+2∇(∇·q)]. The above constitutive equation for heat transport in phonon systems introduces two relaxation times, τ_R and τ_N, which, respectively, describe the umklapp (momentum-nonconserving) and normal (momentum-conserving) processes in a phonon framework. The relaxation times are related to the two phase lags of DPL via τ_q=τ_R and τ_T=9τ_N/5 <cit.>. The factor τ_R τ_N c^2/5 on the right-hand side of Eq. <ref> has the dimension of a length squared. The lengths τ_R c and τ_N c are proportional to the phonon path traveled during τ_R and τ_N, and so τ_R τ_N c^2/5≈ l^2. The first-order effect in "l" does not appear in the Guyer–Krumhansl equation. It is necessary to combine this spatial effect with the temporal one in order to fully describe heat transport in phonon systems. Besides, the thermomass (TM) model <cit.> is another framework that supports the presence of nonlocal behavior together with thermal lagging in heat transport. In the TM model, where a finite mass calculated from Einstein's mass-energy relation is attributed to phonons, the energy and constitutive equations of heat transport are derived, respectively, from the continuity and the momentum equations. In consequence, the continuity equation reduces to the energy equation in the phonon gas: ∂ρ_h/∂ t+∂(ρ_h u_h)/∂ x=0, i.e., -∂ q/∂ x=C∂ T/∂ t.
Also, the one-dimensional constitutive equation reads: ρ_h(∂ u_h/∂ t+u_h∂ u_h/∂ x)+∂ P_h/∂ x+f_h=0, where the term in parentheses is the material derivative D u_h/D t, and τ_TM∂ q/∂ t- l C ∂ T/∂ t+l∂ q/∂ x -M^2k∂ T/∂ x+k ∂ T/∂ x+q=0. Here, τ_TM, the lagging time in the thermomass model, is about two orders of magnitude larger than the heat flux phase lag in the CV model. Also, l=qk_p/[2γ C(CT)^2] and M are, respectively, the length parameter and the thermal Mach number of the drift velocity with respect to the thermal wave speed. When the heat flux is eliminated between Eq. <ref> and Eq. <ref>, one obtains an energy equation that contains only the temperature: k(1-M^2)/Cτ_TM∂^2T/∂ x^2=1/τ_TM∂ T/∂ t+2l/τ_TM∂^2 T/∂ x ∂ t+∂^2 T/∂ t^2. The parameters involved in the above equations are ρ_h, u_h, P_h, and f_h, which are, respectively, defined as the phonon gas thermomass density, the drift velocity, the pressure of the phonon gas, and the resistance force per unit volume. Although the wave behavior is captured by the second-order time derivative, ∂^2 T/∂ t^2, Eq. <ref> also contains a mixed-derivative term, ∂^2 T/∂ x ∂ t. This term is new to the field of microscale heat transfer. The standard DPL model, which considers only the phase lags, does contain mixed-derivative terms, but they are of even order with respect to x. As can be seen, the length parameter l in Eq. <ref> is proportional to the heat flux q. The presence of the length parameter in Eqs. <ref> and <ref> accounts for the inclusion of nonlocal behavior in space, in addition to the lagging time of the CV model. Such nonlocality and lagging are incorporated into Fourier's law through the following statement: q(x+λ_q,t+τ_q)=-K∂ T/∂ x(x,t). The equation states that the temperature gradient across the material volume at position x and time t is proportional to the heat flux vector flowing across another volume element positioned at x+λ_q at a later time t+τ_q. With the parameters K=k(1-M^2), τ_q=τ_TM, and λ_q=2l, the nonlocal model is equivalent to the TM model. In the next step, the nonlocality is applied to the more accurate DPL model <cit.>. Consequently, the nonlocal DPL equation becomes: q(x+λ_q,t+τ_q)=-K∂ T/∂ x(x,t+τ_T). On the other hand, the heat transport study of transistors is particularly significant, as their reliability is determined by the maximum temperature obtained during operation <cit.>. When the temperature and heat flux distributions are calculated, we obtain information about how much of a temperature increase each position inside a single transistor, or on a whole die, experiences <cit.>. This temperature rise in a MOSFET occurs in response to the existing self-heating <cit.>. Consequently, precisely predicting the temperature and heat flux distributions, on which the reliability depends, can help engineers design MOSFETs with optimal thermal conditions. Here, the nonlocal DPL model has been utilized for the thermal investigation of 1-D MOSFETs. New nonlocality, phase-lagging, and temperature-jump coefficients are derived. Explicitly, the obtained scaled parameters are found to differ appreciably from those of the localized standard DPL model. The following study confirms that including the nonlocal effects makes the simulation yield notably more accurate results <cit.>. To be more precise, although the phenomenological DPL model produces less accurate results for low Knudsen number devices, accounting for nonlocality in space within the DPL leads to much more accurate temperature profiles and heat flux plots.
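Before moving on, the first-order expansion of this nonlocal constitutive relation and its combination with the energy balance (carried out in the Mathematical Modeling section below) can be checked symbolically. The following sympy sketch is illustrative only and not part of the paper's workflow; it verifies that the expanded constitutive law together with -∂q/∂x = C∂T/∂t and the resulting temperature-only equation impose the same relation on an exponential ansatz:

import sympy as sp

x, t = sp.symbols('x t')
K, C, tau_q, tau_T, lam_q, a, b = sp.symbols('K C tau_q tau_T lambda_q a b', nonzero=True)

# Exponential ansatz T = exp(a x + b t); the energy balance -dq/dx = C dT/dt
# then fixes the matching heat flux (up to an irrelevant additive term):
T = sp.exp(a * x + b * t)
q = -(C * b / a) * sp.exp(a * x + b * t)

# Residual of the first-order expansion of q(x + lambda_q, t + tau_q) = -K T_x(x, t + tau_T):
constitutive = (q + lam_q * sp.diff(q, x) + tau_q * sp.diff(q, t)
                + K * sp.diff(T, x) + K * tau_T * sp.diff(T, x, t))

# Residual of the temperature-only equation obtained after eliminating q:
combined = ((K / (C * tau_q)) * sp.diff(T, x, 2)
            + (K * tau_T / (C * tau_q)) * sp.diff(T, x, 2, t)
            - sp.diff(T, t) / tau_q
            - (lam_q / tau_q) * sp.diff(T, x, t)
            - sp.diff(T, t, 2))

# Both residuals vanish on the same relation between a and b:
print(sp.simplify(sp.expand(constitutive * a - combined * C * tau_q)))  # -> 0

The printed result is zero, confirming that eliminating the heat flux is algebraically consistent.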
In this paper, first, in Sec. <ref>, the studied geometry and the relevant boundary conditions are given. Then in Sec. <ref>, the mathematical modeling for NDPL is developed. Section. <ref> is devoted to explaining about the numerical method. Finally, the results are pointed out in Sec. <ref> and, the paper is concluded in Sec. <ref>. § GEOMETRY AND BOUNDARY CONDITIONS A one-dimensional MOSFET device is modeled in two cases. First, as shown in Fig. <ref> (a), a thin slab of silicon transistor with a constant initial temperature of T_0=300 K is studied. The system is the same as that of the one investigated by Ghazanfarian and Abbassi <cit.>. The temperature of the bottom boundary has abruptly increased. This results in the rearrangement of the temperature distribution in the slab. The heat transport in the thin film is studied considering the lagging responses and the nonlocality in space simultaneously. During the calculation, the applied temperature at the left boundary is kept constant at room temperature. The size of the transistor, L, is selected as the characteristic length of the simulation and manages the value of the Knudsen number, defined as the ratio of the phonon mean free path to the characteristic length, Kn=λ/L. § MATHEMATICAL MODELING In 1995, Tzou proposed a non-Fourier approximation for heat conduction, in which the heat flux vector at a point in the material at time t+τ_q corresponds to the temperature gradient at the same point at time t+τ_t: q⃗(r⃗,t+τ_q)=-k∇⃗T(r⃗,t+τ_t). Where τ_q and τ_t, respectively, represent the heat flux and the temperature gradient phase lag, both positive and inherent characteristics of the material. The thermal phase-lag values, like any other thermal characteristics for engineering materials, can be determined theoretically or experimentally under various conditions. Tzou <cit.> has tabulated some analytical findings for phase lags. Also, Basirat et al. <cit.> have reported values for the phase lags of some metal films. In the present research, the DPL model will be called the standard DPL. In 2010, Tzou generalized the DPL model by introducing the new term measuring the nonlocality of the heat flux vector. Consequently, the novelly proposed equation is: q(x+λ_q,t+τ_q)=-K∂ T/∂ x(x,t+τ_t). The heat flux, τ_q, and temperature, (τ_t), phase lags, are considered small in comparison to the time scale, and hence only the first order Taylor expansion of the DPL equation relative to τ_q and τ_t are kept. The situation is the same while considering nonlocality for the heat flux. The correlating length (λ_q) is also modest compared to the space dimension. Accordingly, the Eq. <ref> containing the first-order effects of τ_q, τ_T, and λ_q then yields: q(x,t)+λ_q∂ q(x,t)/∂ x+τ_q∂ q(x,t)/∂ t= -K∂ T/∂ x(x,t)-Kτ_T∂^2 T/∂ x ∂ t(x,t). Moreover, the conventional energy equation is, -∂ q/∂ x=C∂ T/∂ t. Taking the derivative of the Eq. <ref> relative to the position, one can eliminate the heat flux terms in Eq. <ref> using the energy equation, (K/Cτ_q)∂^2 T/∂ x^2+(Kτ_T/Cτ_q)∂^3 T/∂ x^2 ∂ t=1/τ_q∂ T/∂ t+(λ_q/τ_q)∂^2 T/∂ x ∂ t+∂^2 T/∂ t^2. In the next step, to obtain the normalized equations, the following non-dimensional parameters are defined: θ=T-T_0/T_W-T_0, t^*=t/τ_q, B=τ_t/τ_q, η=x/L, Kn=λ/L, γ=λ_q/L. Here, λ and L, respectively, the phonon mean free path and the length of the system, determine the Knudsen number, Kn. Using the non-dimensional parameters, the Eq. 
<ref> is rewritten as: ∂θ/∂ t^*+γ∂^2 θ/∂η∂ t^*+∂^2 θ/∂ t^*^2=Kn^2/3∂^2 θ/∂η^2+BKn^2/3∂^3 θ/∂η^2 ∂ t^*. Previously, in 2002, Chen <cit.> simulated the one-dimensional silicon transistor using the one-dimensional ballistic-diffusive equations and the Boltzmann equation for phonons. In the following, the accuracy of our results obtained utilizing NDPL will be checked by comparison with Chen's findings. §.§ Boundary conditions As the DPL model does not consider boundary phonon scattering effects, enforcing no-jump boundary conditions leads to unsatisfactory results, especially near the boundaries <cit.>. Hence, here, the temperature jump boundary condition is used to fix the problem: θ_s-θ_w=-α Kn(∂θ/∂η)_w. Here, θ_s and θ_w are, respectively, the jump temperature at the wall and the boundary (wall) temperature. Also, α is the coefficient which is adjusted to satisfy the boundary condition. The two unknown parameters B and γ, alongside α, appearing in Eq. <ref>, are determined such that the results of the NDPL model become compatible with the solution of the phonon Boltzmann equation (PBE). Interestingly, it is found that the values of these parameters depend on the Knudsen number and on time. Notably, according to the available data, the following values for α, B, and γ are attained:
α = 0.35t for t ≤ 0.1; 0.27t for 0.1 ≤ t ≤ 1; 0.55 for t > 1 & Kn > 1; 0.65-0.1×Kn for t > 1 & Kn < 1.
B = 0.45t for t ≤ 0.1; 0.35t for 0.1 ≤ t ≤ 1; 0 for t > 1.
γ = 0.35×Kn for all t.
Using the above constants results in temperature and heat flux profiles that are very close to what is calculated from the Boltzmann equation and also from the ballistic-diffusive equations. It should be mentioned that, here, thermal properties such as the thermal conductivity, sound velocity, and specific heat are considered constant. § NUMERICAL SOLUTION Eq. <ref> is solved using a fully implicit first- and second-order finite difference method. All derivatives are discretized centrally, and a stable and convergent three-level finite difference scheme is utilized <cit.>. Accordingly, the discretizations are written as below: ∂θ/∂ t^*=1/2Δ t[T^n+1_i - T^n-1_i], ∂^2 θ/∂η∂ t^*=1/4Δ x Δ t[(T^n+1_i+1-T^n+1_i-1)-(T^n-1_i+1-T^n-1_i-1)], ∂^2 θ/∂ t^*^2=1/Δ t^2[T^n+1_i-2T^n_i+T^n-1_i], ∂^2 θ/∂η^2=1/Δ x^2[T^n+1_i+1-2T^n+1_i+T^n+1_i-1], and ∂^3 θ/∂η^2 ∂ t^*=1/2Δ x^2Δ t[(T^n+1_i+1-2T^n+1_i+T^n+1_i-1)-(T^n-1_i+1-2T^n-1_i+T^n-1_i-1)]. Here, a weighted average is employed for stability and convergence. The convergence of the numerical method improves as the Knudsen number is reduced; that is, for high Knudsen numbers the solution depends explicitly on the marching step size. § RESULTS AND DISCUSSIONS The results of modeling the heat transport in the 1-D MOSFET using the numerical simulation of the NDPL model are presented in Figs. <ref>-<ref>. Specifically, comparisons of the non-dimensional temperature profiles and heat flux distributions obtained from the nonlocal DPL, the standard DPL, and the PBE are presented. It should be made clear that verification of the output profiles is performed using the results obtained from the PBE. Alongside the present NDPL simulation, the profiles for standard DPL modeling are also reproduced by the authors. In the work <cit.>, the temperature and heat flux profiles for 1-D MOSFETs with different Knudsen numbers, at various times, have been presented using the standard DPL model. As the Knudsen number grows, the deviation of the results from the precise profiles of the PBE increases.
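For concreteness, a minimal numpy sketch of the three-level implicit scheme described above is given below. It assembles the linear system for θ^{n+1} from the listed discretizations; the grid size and time step are placeholders, B is treated as a constant although the tabulated values are time-dependent, simple Dirichlet wall values stand in for the temperature-jump condition, and the weighted averaging mentioned above is not included:

import numpy as np

def solve_ndpl_1d(Kn=1.0, B=0.1, gamma=0.35, n_x=101, dt=1e-3, n_steps=1000):
    # Three-level fully implicit scheme for
    #   theta_t + gamma*theta_{eta t} + theta_tt
    #       = (Kn^2/3)*theta_{eta eta} + B*(Kn^2/3)*theta_{eta eta t}.
    # Dirichlet walls (theta = 1 hot, theta = 0 cold) stand in for the
    # temperature-jump condition, which is omitted here for brevity.
    dx = 1.0 / (n_x - 1)
    k2 = Kn ** 2 / 3.0

    # Coefficients of theta^{n+1}_{i-1}, theta^{n+1}_i, theta^{n+1}_{i+1}
    aW = -gamma / (4 * dx * dt) - k2 / dx ** 2 - B * k2 / (2 * dx ** 2 * dt)
    aC = 1 / (2 * dt) + 1 / dt ** 2 + 2 * k2 / dx ** 2 + B * k2 / (dx ** 2 * dt)
    aE = +gamma / (4 * dx * dt) - k2 / dx ** 2 - B * k2 / (2 * dx ** 2 * dt)

    A = np.zeros((n_x, n_x))
    A[0, 0] = A[-1, -1] = 1.0                     # boundary rows (Dirichlet)
    for i in range(1, n_x - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = aW, aC, aE

    theta_old = np.zeros(n_x)                     # theta^{n-1}
    theta = np.zeros(n_x)                         # theta^{n}
    theta_old[0] = theta[0] = 1.0                 # hot wall switched on at t = 0

    idx = np.arange(1, n_x - 1)
    for _ in range(n_steps):
        d = np.zeros(n_x)
        d[0], d[-1] = 1.0, 0.0
        d[idx] = (theta_old[idx] / (2 * dt)
                  + gamma * (theta_old[idx + 1] - theta_old[idx - 1]) / (4 * dx * dt)
                  + (2 * theta[idx] - theta_old[idx]) / dt ** 2
                  - B * k2 * (theta_old[idx + 1] - 2 * theta_old[idx] + theta_old[idx - 1])
                    / (2 * dx ** 2 * dt))
        theta_new = np.linalg.solve(A, d)
        theta_old, theta = theta, theta_new
    return theta

For example, solve_ndpl_1d(Kn=10.0, gamma=3.5, B=0.0) corresponds to the large-time limit in which, according to the tabulated values, the phase-lag ratio vanishes.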
The present study will try to solve the large Knudsen number issue, using the nonlocal heat transport concept. More precisely, the nonlocality of heat flux is considered in the system by introducing the non-dimensional correlating length of γ=3.5. This almost large γ confirms that taking into consideration the nonlocality is very crucial for large Knudsen number nano-structures. Trying to clarify, the better estimate of the results obtained from the NDPL as compared to that of the standard DPL model is shown in Fig. <ref> for the 1-D transistor with the Knudsen number of 10. Specifically, Fig. <ref> (a) presents the temperature profile for the case with Kn=10. As the figure suggests, the behavior of the temperature distributions obtained from NDPL for the scaled times of 0.01 and 0.1, are very much like the PBE solutions. It is obvious that the compatibility of the NDPL-obtained temperature distribution with PBE results is always better than those attained from DPL. This consistency is so that the NDPL calculated temperature behavior, completely fits the PBE result for scaled coordinates below x^*=0.1 and 0.8, respectively, for t^*=0.01 and 0.1. When t^*=0.01, both the size and time are very low, and the non-Fourier behavior of the system is very significant. For t^*=0.1, the system still experiences non-Fourier conditions, but it is a little less in comparison to that of the t^*=0.01. As evident in Fig. <ref> (a), considering nonlocality in the DPL model, handles non-Fourier behavior in nearly all positions such that the temperature distribution almost equals the PBE result. This is true while the result for big times like t^*=10, does not vary considerably with adding nonlocality characteristics. This department is justified as although the size of the system is petite, but at significant times it reaches a steady state, and consequently the non-Fourier behavior is adjusted. The heat flux versus position is demonstrated in Fig. <ref> (b). The very better consistency of the NDPL modeling compared to standard DPL, with the PBE data, is further seen. When t^*=0.1, for positions below x^*=0.01, similar to the temperature profile, the heat flux matches well with PBE results. Moreover, when t^*=0.1 and for x^*>0.2, the NDPL obtained heat flux profile displays considerable improvement over what the standard DPL calculates. More concretely, for large Knudsen numbers, the heat flux distribution obtained from DPL modeling behaves worse than the computed temperature profile in a way that gives a much inferior approximation of the exact outcomes of PBE. Interestingly, adding nonlocality to the DPL model, solves the problem and leads to the more correct solutions. For larger times, the nonlocality, and lagging behavior, loses its importance and γ and B become zero. This is expected as the transistors reach the steady state and the role of non-Fourier heat transfer becomes much less prominent. As Fig. <ref> (b) suggests, α, the temperature jump coefficient, is the only responsible parameter for achieving appropriate temperature and heat flux profiles. In this research, for large ts, the Knudsen number dependent α is determined such that it severely rectifies the very different standard DPL obtained heat flux profile. Figure. <ref> shows the state of the 1-D transistor with the Knudsen number of 1. This Kn is neither high nor low. The trend for temperature and heat flux behavior is almost similar to that for Kn=10. Here we analyze the results in more detail. 
For lower Knudsen numbers, it is anticipated that the non-Fourier attitude and, consequently the nonlocality effect fades. The non-dimensional nonlocality presenter, γ, is 0.35, which is one-tenth the one for Kn=10. As Fig. <ref> (a) confirms, this almost low γ, is essential in getting results that are more congruent to the PBE solutions. Also, as non-Fourier behavior has become a little less important in comparison to the case with Kn=10, considering γ makes the NDPL calculated profiles to be very accurately fitted to the available precise data <cit.>. In other words, almost the complete deviation of the standard DPL results from PBE outputs', is filled via the nonlocality contribution. For instance, when t^*=0.1, for a broader range of position, say for x^*<1.5 and x^*>2, the NDPL temperature profile finding, provides the results found out solving the PBE. The consistency is more noticeable when we are dealing with t^*=1. For this time, compatibility and fitness exist for almost all positions. In consequence, taking into account the temperature profile obtained from standard DPL, it becomes pretty clear how our new model is doing much better. Although, for Kn=10, the NDPL distributions were close to the PBE solutions, the better agreeableness at Kn=1.0, is attributed to the slightly weaker non-Fourier behavior, which is practically covered by the simultaneous consideration of nonlocality and phase lagging. When a long time passes, the situation becomes like in nano-structures with larger Kn numbers. In similarity, the parameters γ and B reach zero, and the effect of nonlocality and phase lagging disappears. On the other side, in Fig. <ref> (b), the heat flux demeanor versus position is illustrated. Same as the case with Kn=10, here also, the NDPL modeling assures the results very close to the available atomistic data. As it is viewed, when t^*=0.1, like what happens to temperature profile, the NDPL calculated heat flux, is consistent with available data for x^*<0.1. Only as it is the case for Kn=10, the heat flux at x^*=0, is a bit larger than that of PBE. Moreover, heat flux fits preciously for x^*>0.2, and well for x^*<0.2, with the existing data. One can conclude that although NDPL predicts the heat flux correctly, a slight deviation exists for meager non-dimensional positions. It is important to note that the accuracy of the heat flux profile obtained from standard DPL is reported to be always less than the DPL calculated temperature distribution, while the accuracy is defined in closeness to the PBE results. Intriguingly, the NDPL resolves the heat flux inaccuracy and gives a highly meticulous distribution. Also, for a larger scaled time of t^*=10, when γ and B are zero, the newly defined Kn dependent α, reproduces the PBE solution. At last, the temperature and heat flux distributions are studied for low Knudsen numbers. The results are shown in Fig. <ref>. As low Knudsen number systems have larger sizes, one anticipates less anomalous heat transfer inside. This does not mean that the effect of nonlocality on heat flux disappears. Even though the non-dimensional nonlocality of the heat flux for Kn=0.1, is γ=0.035, which is two orders of magnitude less than γ=3.5 for Kn=10, it is crucial to consider such nonlocality to get more precise plots. As Fig. 
<ref>(a) suggests, for the lower scaled time of t^*=1, where the non-Fourier behavior is pronounced compared with the larger times of t^*=10 and t^*=100, the NDPL presents a temperature profile with significantly better consistency with the PBE relative to the standard DPL. On the other hand, small Knudsen numbers and large scaled times, say t^*>1, change the conditions in favor of removing the nonlocality and phase lag effects. So, while γ and B become zero, the MOSFET reaches the steady state, and consequently, both the DPL and the NDPL calculate distributions nearly the same as the PBE findings. The imprecision in calculating the heat flux for low Knudsen numbers and smaller times using the DPL model is also resolved by applying the newly proposed NDPL. As is evident in Fig. <ref>, for Kn=0.1 and t^*=1, the NDPL-obtained heat flux presents a remarkably improved behavior, such that at x^*=0 it is closer to the PBE result and it also fits the PBE for a much wider range of positions. For larger times, as previously mentioned for the temperature distribution, both DPL and NDPL work well while the non-Fourier effect subsides. § CONCLUSION The current research tries to establish a framework for micro/nanoscale heat transport investigation that produces precise results while concurrently having low computational cost. To achieve this aim, the 1-D nonlocal DPL model, with the possibility of generalization to higher dimensions, is developed. This method considers the nonlocality of the heat flux alongside the phase lagging, while the temperature jump is also enforced. In particular, a non-dimensional parameter γ is introduced to determine the strength of the nonlocality, in addition to α, the temperature jump coefficient, and B, the phase lagging ratio, both defined in DPL. Then, the formulated computational procedure is applied to a 1-D silicon MOSFET. The parameters γ, α, and B are newly evaluated by verifying the obtained results against available PBE-calculated data. The presence of the Knudsen-dependent nonlocality coefficient, γ, is found to be crucial for obtaining accurate temperature and heat flux distributions. In further detail, as expected, for large Knudsen numbers the nonlocality of the heat flux is very pronounced. For instance, when Kn is 10, the γ parameter becomes 3.5, which is one hundred times the γ value for Kn=0.1. This does not mean that nonlocality is not essential for low Knudsen numbers. Taking the small γ into account also makes the results very close to the available PBE data, especially for earlier times. In addition, introducing nonlocality to the DPL model resolves the problem of the inaccurate heat flux obtained from DPL and corrects it to a remarkably acceptable value. Finally, when the heat transport inside a MOSFET is predicted well, it is more feasible to propose new transistors supporting thermal management solutions. 10 Mahajan2002 R. Mahajan, R. Nair, V. Wakharkar, J. Swan, J. Tang, G. Vandentop, Emerging directions for packaging technologies, Intel Technol. J. 6 (2002) 2. Moghadam2014 M. Moghaddam, J. Ghazanfarian, A. Abbassi, Implementation of DPL-DD model for the simulation of nanoscale MOS devices, IEEE Transactions on Electron Devices, 61(9), 3131-3138, 2014. Minnich2012 A.J. Minnich, Determining phonon mean free paths from observations of quasiballistic thermal transport, Physical review letters, 109(20), 205901, 2012. Shomali2019 J. Ghazanfarian, Z. Shomali, S. Xiong, 21st Century Nanoscience–A Handbook: Nanophysics Sourcebook (Volume One), Sattler, K. D.
(Ed.), Chapter 4, CRC Press, 2019. Chiu2005 Y. H. Chiu, V. V. Deshpande, H. C. Postma, C. N. Lau, C. Miko, L. Forro, and M. Bockrath, Ballistic phonon thermal transport in multiwalled carbon nanotubes, Physical review letters, 95(22), 226101, 2005. Alvarez2007 F. X. Alvarez, and D. Jou, Memory and nonlocal effects in heat transport: From diffusive to ballistic regimes, Applied physics letters, 90(8), 083109, 2007. Shomali2018 Z. Shomali, R. Asgari, Effects of low-dimensional material channels on energy consumption of nano-devices, International Communications in Heat and Mass Transfer, 94, 77-84, 2018. Qiu1992 T. Q. Qiu and C. L. Tien,Short-pulse laser heating on metals, International Journal of Heat and Mass Transfer, 35, 719, 1992. Guyer1966 R. A. Guyer and J. A. Krumhansl, Solution of the linearized Boltzmann equation, Physical Review, 148, 766, 1966. Gurtin1968 M. E. Gurtin and A. G. Pipkin, A general theory of heat conduction with finite wave speed, Archive for Rational Mechanics and Analysis, 31,113, 1968. Roetzel2003 W. Roetzel, N. Putra, S. K. Das, International Journal of Thermal Science 42 (2003) 541. Basirat2009 H. Basirat Tabrizi, S. Andarwa, International Communication in Heat and Mass Transfer 36 (2009) 186. Mahan1988 G. D. Mahan, and F. Claro, Nonlocal theory of thermal conductivity. Physical Review B 38(3) (1988) 1963. Vermeersch2014 B. Vermeersch, and A. Shakouri, Nonlocality in microscale heat conduction, arXiv preprint arXiv:1412.6555 (2014). Tzo95a Da Yu Tzou, The generalized lagging response in small-scale and high-rate heating, International Journal of Heat and Mass Transfer 38(17) (1995) 3231. DYTzou1997 D.Y. Tzou, Macro- to Microscale Heat Transfer: The Lagging Behavior. Taylor & Francis, Washington, D.C., USA, 1997. DYTzou2010 D. Y. Tzou, and Z. Y. Guo, Nonlocal behavior in thermal lagging, International Journal of Thermal Sciences 49(7) (2010) 1133. Cao2007 B. Y. Cao, Z. Y. Guo, Equation of motion of a phonon gas and non-Fourier heat conduction, J. Appl. Phys. 102 (2007) 053503. Ghazanfarian2015 J. Ghazanfarian, Z. Shomali, A. Abbassi, Macro-to nanoscale heat and mass transfer: the lagging behavior, Int. J. Thermophys 36 (2015) 1416. Shomali2021 Z. Shomali, R. Kovács, P. Ván, I.V. Kudinov, and J. Ghazanfarian, Recent Progresses and Future Directions of Lagging Heat Models in Thermodynamics and Bioheat Transfer, Continuum Mechanics and Thermodynamics, 34:637–679, 2022. Samian2013 R.S. Samian, A. Abbassi, J. Ghazanfarian, Thermal investigation of common 2d FETs and new generation of 3-d FETs using Boltzmann transport equation in nanoscale, Int. J. Mod. Phys. C 24 (2013) 1350064. Samian2014 R.S. Samian, A. Abbassi, J. Ghazanfarian, Transient conduction simulation of a nanoscale hotspot using finite volume lattice Boltzmann method, Int. J. Mod. Phys. C 25 (04) (2014) 1350103. Moore2014 A.L. Moore, L. Shi, Emerging challenges and materials for thermal management of electronics, Mater. Today 17 (4) (2014) 163. Shomali2012 J. Ghazanfarian and Z. Shomali, Investigation of dual-phase-lag heat conduction model in a nanoscale metal-oxide-semiconductor field-effect transistor, International Journal of Heat and Mass Transfer, 55(21-22):6231-7, 2012. Shomali20152 Z. Shomali, J. Ghazanfarian, A. Abbassi, Investigation of bulk/film temperature-dependent properties for highly non-linear DPL model in a nanoscale device: the case with high-k metal gate MOSFET, Superlattices and Microstructures, 83:699, 2015. Shomali2016 Z. Shomali, A. Abbassi, J. 
Ghazanfarian, Development of non-Fourier thermal attitude for three-dimensional and graphene-based MOS devices, Appl. Therm. Eng. 104 (2016) 616. Shomali2017 Z. Shomali, B. Pedar, J. Ghazanfarian, A. Abbassi, Monte-Carlo Parallel Simulation of Phonon Transport for 3D Nano-Devices, International Journal of Thermal Sciences, 114:139-154, 2017. 2Shomali2017 Z. Shomali, J. Ghazanfarian, A. Abbassi, 3-D Atomistic Investigation of Silicon MOSFETs, In Proceedings of CHT-17 ICHMT International Symposium on Advances in Computational Heat Transfer, ICHMT Digital Library Online, Begel House Inc., 2017. Shomali2022 M. H. Fotovvat and Z. Shomali, A time-fractional dual-phase-lag framework to investigate transistors with TMTC channels (TiS_3, In_4Se_3) and size-dependent properties, 168, 207304, 2022. Shomali2023 Z. Shomali, An investigation into the reliability of newly proposed MoSi_2N_4/WSi_2N_4 field effect transistor: A monte carlo study, arXiv preprint arXiv:2305.04327 (2023). EPop2005 E. Pop, R.W. Dutton, K.E. Goodson, Monte Carlo simulation of joule heating in bulk and strained silicon, Appl. Phys. Lett. 86 (2005) 082101. EPop2006 E. Pop, S. Sinha, K.E. Goodson, Heat generation and transport in nanometer-scale transistors, Proc. IEEE 94 (8) (2006) 1587. EPop2010 E. Pop, Energy dissipation and transport in nanoscale devices, Nano Res. 3 (3), (2010) 147. Gong2015 S. Gong, L. Chen, H. Feng, Z. Xie, F. Sun, Constructal optimization of cylindrical heat sources surrounded with a fin based on minimization of hot spot temperature, Int. Commun. Heat Mass 68 (2015) 1. Ghazanfarian2009 J. Ghazanfarian, A. Abbassi, Effect of boundary phonon scattering on Dual-Phase-Lag model to simulate micro-and nanoscale heat conduction, International Journal of Heat and Mass Transfer, 52(15-16) (2009) 3706. Basirat2006 H. Basirat, J. Ghazanfarian, P. Forooghi, Implementation of dual-phase- lag model at different Knudsen numbers within slab heat transfer, in: Proceedings of International Conference on Modeling and Simulation (MS06), August, 2006, Konia, Turkey, pp. 895899. Chen2002 G. Chen, Ballistic-diffusive equations for transient heat conduction from nano to macroscale, ASME J. Heat Transfer 124 (2001) 320–328. Dai2004 W. Dai, L. Shen, R. Nassar, and T. Zhu, A stable and convergent three-level finite difference scheme for solving a dual-phase-lagging heat transport equation in spherical coordinates, Int. J. Heat Mass Transfer 47 (2004) 1817.
http://arxiv.org/abs/2307.02087v1
20230705075147
Different Games in Dialogue: Combining character and conversational types in strategic choice
[ "Alafate Abulimiti" ]
cs.CL
[ "cs.CL", "68T50", "I.2.4; I.2.7" ]
In this paper, we show that investigating the interaction of conversational type (often known as language game or speech genre) with the character types of the interlocutors is worthwhile. We present a method for calculating the decision-making process for selecting dialogue moves that combines character type and conversational type. We also present a mathematical model that illustrates these factors' interactions in a quantitative way. § INTRODUCTION <cit.> introduced language games/speech genres as notions tying diversity of linguistic behaviors to activity. Building on this, and insights of pragmaticists such as <cit.>, and earlier work in AI by <cit.>, <cit.> showed how to associate global structure with conversations in a way that captures the range of possible topics and idiosyncratic moves. <cit.> is also the basis for an approach to building spoken dialogue systems (SDS) which is essentially domain general, offers a fine-grained treatment of grounding interaction, and which was extended to clarification interaction in <cit.>. This body of work does not, however, address the issue of strategic choice in conversation, which is the core issue underlying work in Game Theory. <cit.> used game theoretic tools to develop a theory of strategic choice for dialogue. Although there are a variety of interesting insights captured in this approach, it is based on two assumptions that apply only to a restricted class of language games—games continue indefinitely and there exists a jury that assigns winning conditions to participants. We seek to develop an approach to strategic choice applicable to the general case of dialogical interaction, where termination is an important consideration and where assessment is internal to the participants. Strategic choice is modelled by combining structure from conversational types with psychological and cognitive notions associated with the players. Character type, as a relatively independent factor abstracted out of the conversational type, is important for dialogue. Although there is some analysis concerning both character effects and conversational types in dialogue, combining them and analyzing their interactions in a quantitative way has not, to our knowledge, been carried out before. The purpose of this paper is, hence, to combine character type effects and conversational type analysis to yield a method that could help to analyse strategic choice in dialogue. § BACKGROUND The starting point of <cit.> is the framework of Banach-Mazurckiewicz games. They modify this framework to make it more amenable to analyzing certain kinds of NL dialogue, the emergent framework being BM messaging games. <cit.> argued that each dialogue potentially continues indefinitely and has a winner adjudged by a third party jury. This is useful for modelling political discourse between rival groups or individual contenders in the public domain. But clearly this sort of conception is not appropriate for a variety of, arguably most, types of dialogue.[Strictly speaking, <cit.> allowed also for finite games, by allowing for games to consist of a special move from a certain point onwards, but this would seem to either defeat the purpose of assuming potential extendibility or to be purely formal.]
These have beginnings (InitState) and a variety of distinct terminations (FinState) <cit.>, and there is no `external jury' in most cases. <cit.> developed a formal model called social meaning games which explains how social variants affect the linguistic style of dialogue participants. And conversely, how the speaker's intrinsic linguistic style affects dialogue moves. <cit.> shows that linguistic style is an independent and meaningful way of exploring personality. There is evidence that people's personality traits influence their use of language. For instance, extroverted individuals are able to deceive others more easily, while neurotic individuals are less likely to be deceived <cit.>. The charisma of a speaker has been shown to be closely related to an extroverted character <cit.>. There is also a strong relation between extroversion and conscientiousness and positive influences, as well as between neuroticism and dissent and various negative influences<cit.>. Thus, an individual's personality does affect decision-making process in dialogue. Cooperation in dialogue is a widespread phenomenon, and <cit.> identified four features of cooperation: cognitive consideration, joint purpose, ethical consideration and trust. When engaging in a collaborative dialogue, where the interlocutor decides his next move based on the intentions of the other and a variety of information deriving from the context of the dialogue, it seems that character has a broad influence on the course of the dialogue. Thus, it seems natural that a dialogue participant (DP) should also take into account the other's character traits in order to choose the appropriate move. In the next section, we will explain the method we propose of combining character type effects and conversational type. § METHODOLOGY In this section, we wish to explore the interaction between character type and conversational type, by considering how given a possible range of moves, a quantitative analysis can be provided for move selection. §.§ Character Type Researchers have developed a relatively unanimous consensus on personality description patterns, proposing the Big Five model of personality<cit.>. Within this model there are five traits that can be used to capture many aspects of human personality. The Big Five Personality (OCEAN) can be assessed by the NEO-PI-R<cit.>. The traits can be described as follows: * Openness: the ability to be imaginative, aesthetic, emotional, creative, intelligent, etc. * Conscientiousness: displays characteristics such as competence, fairness, organization, due diligence, achievement, self-discipline, caution, restraint, etc. * Extroversion: exhibits qualities such as enthusiasm, sociability, decisiveness, activity, adventure, optimism, etc. * Agreeableness: the qualities of trust, altruism, straightforwardness, compliance, humility, empathy, etc. * Neuroticism: difficulty balancing emotional traits such as anxiety, hostility, repression, self-awareness, impulsivity, vulnerability, etc. <cit.> gave a pertinent method to quantify character types in terms of 5 dimensional vector [o, c, e, a, n]. We define χ_s for the self character type scale vector and χ_o represents other character type scale vector. In addition, with the development of machine learning and deep learning methods within NLP, a variety of approaches have been implemented for automatic recognition of personality in conversation and text <cit.>. 
<cit.> used attentive network and contextual embeddings to detect personality traits from monologues and multiparty dialogues. Given a text(or utterance), one can calculate the character type scale vector of this sentence with a robust prediction model. We define c_i as ith dialogue move vector prediction. We note that by calculating the similarity between the χ and c_i, we obtain the extent to which a given dialogue move can fit either the self character type or the other character type. Note that χ_s is a dialogue interlocutor's intrinsic property which does not show great change during conversation, but considering one's imperfect information situation, χ_o will change once new evidence arises and can be modified by applying Bayes' rule. §.§ Conversational Type <cit.> also indicated that linguistic style gets influenced by the situation in which the interlocutor find themselves. <cit.> provided a topological perspective on the space of conversational types based on the distribution of Non-Sentential Utterances (NSU) within each type. <cit.> developed a model of a conversational type couched in TTR <cit.>. On this view, a conversational type is a 4-tuple {ConvRules, InitState, FinState, G}, here ConvRules represents a set of conversational rules, transition rules between different dialogue states (dialogue gameboards) of type DGBType <cit.>, InitState and FinState are the initial and final DGB states. DGBType ↦ spkr: Ind addr: Ind utt-time: Time c-utt: adressing(spkr, addr, utt-time) FACTS: set(Proposition) Pending: list(Locutionary Proposition) Moves: list(Locutionary Proposition) QUD: poset(Question) G is the grammar which serves for the different conversational language use. An Example Considering a commercial transaction conversational type involving shopping in a bakery, so that we have: Bakery = participant: InteractionGroup ∧̣ c1:customer(A) c2:baker(B) qnud-set = QS : poset(question) c1 : {λ x.InShopBuy(A,x), λ P.P(A), λ P.P(B), λ x.Pay(A,x) }∈ QS moves : list(IllocProp) this involves: * participants: Baker and customer. * qnud-set: questions to be discussed during the interaction * moves: Dialogue moves made. <cit.> argued that clarification interactions provide data showing that interlocutors can be uncertain about the conversational type that classifies the interaction they are in, as for instance in (<ref>): (Context: A is being interviewed by B) A: Hi B: Hi . . . B (1): have you seen Blackklansman yet? A (2): Wait— is this an informal chat or a formal interview? B: A bit of both. (Example (54) from <cit.>. Thus, the DP has an opaque or uncertain "guess" as to the conversational type and this also influences the decision-making process. Following probabilistic TTR theory <cit.>, we assume that the DP has a probabilistic conversational type in the private part of information state and this can be updated during dialogue by Bayesian inference. We model the information state as follows: InformationState = private = CharacterType: Self: Vector Other: Vector Goals: Set(Prop) Tmp: private ConvType : ConvType Conv-prob : [0, 1] dgb :DGBType In the private part, we have the CharacterType stores for self and other's character type vector; Goals tracks a set of (futurate) propositions that the DP wants realized; Tmp is a backup of the previous private state. 
Its use is for reasoning about the current CharacterType and updating the Conv-prob; ConvType is the conversational type and here we introduce Conv-prob, which indicates the probabilistic conjecture concerning the current conversational type. §.§ Move Space Individuals choose the moves from different kind of possible options. <cit.> provided a taxonomy of the response space of queries. 14 possible categories are given by the study of English and Polish corpus. The field of Nature Language Generation has some useful probabilistic methods to generate the range of potential moves, such as the Sequence-to-Sequence model<cit.>. <cit.> proposed a new conditional weight control technique to make the response space more human-like. The popular pre-training model like BERT<cit.> exemplifies NLG capability with fine-tuning. We define here A for a dialogue move space vector composed by a_i representing the ith move. In the current work we will not discuss how to specify the possible moves, but will leave this for future work. §.§ Decision Making §.§.§ Global View <cit.> proposed that speakers monitors themselves while speaking. In other words, individuals have a self-criticism mechanism enabling them to reflect on their behaviors, emotions and speaking. We dub this the SelfMonitor. Given our analysis in the first two subsections, a DP's move choice in the dialogue is influenced by her character type, the other interlocutor's character type, and the conversational type. When a DP tries to respond to the other party's dialogue move, she first constructs a dialogue move space, which yields a set of possible utterances that the DP can use. The DP typically makes a conjecture about the other's character type in terms of individual personality traits, based on her a priori knowledge of that individual and the current state of the dialogue. In addition, the DP has a probabilistic assumption about the present conversational type given her cognitive state. Subsequently, the DP's SelfMonitor determines which factor is more valuable in this context. Eventually a move is selected considering each possible move's value by comparing the affinity of each move for each factor with the skewness of each factor as determined by the SelfMonitor. §.§.§ Mathematical Modeling After the above analysis, we offer a mathematical model to explicate this process. In evaluating possible moves, we have three important factors: self character type, other character type, and the conversational type. We want to provide a real valued function ρ to evaluate each move in the dialogue move space. * a_i: ith move. * χ_s: Self character type vector. * c_i: character type vector for ith move. * α Weight for the self character type effect. * χ_o: Current other character type vector estimation. * β: Weight for the other character effect. * p: Probabilistic conjecture of the conversational type. * d_i: ith move conformity with the conversational type from -1 to 1. * γ Weight for probabilistic conversational type. * W = [α, β, γ]. Conformity represents the degree to which a dialogue move conforms to the current conversational type. In other words, it can be modeled as the evaluation score generated by this dialogue move in dialogue context. 
In order to calculate the "affinity" between the character type vectors {χ_s, χ_o} and the move vector c_i, we use cosine similarity, defined as follows: simi(A, B) = cos(θ) = (A · B) / (‖A‖ ‖B‖) Then we define the scoring function ρ(a_i): ρ(a_i) = α · simi(c_i, χ_s) + β · simi(c_i, χ_o) + γ · d_i · p and, letting X = [simi(c_i, χ_s), simi(c_i, χ_o), d_i · p]^T be the decision factor vector, we obtain: ρ(A) = W · X where α + β + γ = 1. The weights α, β, γ are in fact estimated by the SelfMonitor based on information deriving from the information state. We believe these weights are mainly fixed at the beginning of the conversation, because great changes in strategic choice lead to the suspicion of deception in some cases <cit.>. This estimation process can be reverse-engineered by observing the DP's selection of moves in the dialogue. After the calculation of ρ, we get a score for each move. This score alone cannot determine the final decision, since we need to take into account features that we have not yet discussed or observed; we therefore convert the scores into a probability distribution over the move space using the softmax function: softmax(a_i) = exp(ρ(a_i)) / ∑_j exp(ρ(a_j)) We then obtain a probability estimate for each move: the greater the probability, the more inclined the DP is to choose this move. §.§ Example Here we illustrate our proposed approach with an example. §.§.§ Scenario In a bakery, we observe the buying and selling exchange between a customer and a baker during the COVID-19 pandemic. Goals For simplicity we fix the final goals as follows: * Customer goal: buy two croissants. * Baker goal: sell the two croissants and obtain the desired price. It is worth noting that both players may have more complex goals or a series of goals. For example, the customer might want to have appropriate desserts for dinner. In order to achieve this kind of goal, one often divides it into several specific and simpler sub-goals. In such a case, the sub-goals might be: * What are the best desserts in this bakery? * From among those best desserts, which one will fit the dishes I prepare tonight? For our current purposes, however, we will avoid such complications, despite their importance in a variety of cases. §.§.§ Move Space We assess the baker's possible responses to the customer's initial utterance: "2 croissants". We assume the following possible responses by the baker: client 2 croissants. baker (i) 1.90. (ii) Get out of the bakery, you're not wearing a mask. (iii) Please would be nice. (iv) 1.90 and please would be nice. (i) would lead to a quick end that meets both parties' needs. This "style" is used in most everyday cases, disregarding other conversational factors and attending only to the baker's final goals. If the baker considers particularly the conversational type's impact, he would choose (i) to advance the conversation, thereby looking to finalize the conversation early and achieve his goal. (ii) would lead to an "unpleasant" conversation. The baker thinks that the customer's behavior is disrespectful (a short demand lacking politeness). He therefore uses the lack of a mask as a pretext, or has in mind a competing goal of fighting against this disrespect. In the end, neither participant's goals are achieved. (iii) This choice shows that the baker wants to have a pleasant and respectful conversation above all. It is clear that the baker does not assign much weight to his final goals or the conversational type. Instead he prioritizes his psychological needs.
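Before turning to the remaining consequences of these candidate moves, the scoring procedure defined above (cosine affinity with the self and other character-type vectors, a weighted combination with the conformity term, and a softmax over the move space) is small enough to sketch directly. The following Python snippet is an illustrative implementation under the definitions of this section; the function and variable names are ours, and the numerical inputs are those assumed in the worked example that follows, so the printed values can be compared with the figures reported there.

```python
import numpy as np

def cosine_similarity(a, b):
    """simi(A, B) = A . B / (|A| |B|)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def move_scores(moves_c, chi_s, chi_o, d, p, weights):
    """rho(a_i) = alpha*simi(c_i, chi_s) + beta*simi(c_i, chi_o) + gamma*d_i*p."""
    alpha, beta, gamma = weights
    scores = []
    for c_i, d_i in zip(moves_c, d):
        rho = (alpha * cosine_similarity(c_i, chi_s)
               + beta * cosine_similarity(c_i, chi_o)
               + gamma * d_i * p)
        scores.append(rho)
    return np.array(scores)

def softmax(x):
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())          # shifted for numerical stability
    return e / e.sum()

# Inputs taken from the worked bakery example (first parameter setting).
chi_s = np.array([0.0, 0.3, 0.0, 0.0, 0.5])       # baker's own [o, c, e, a, n]
chi_o = np.array([0.0, 0.0, -0.1, -0.4, 0.2])     # current estimate of the customer
moves_c = [np.array([0.0, 0.0, -0.1, -0.4, 0.2]),   # (i)   "1.90"
           np.array([0.3, -0.5, 0.0, -0.7, 0.8]),   # (ii)  "Get out of the bakery..."
           np.array([0.2, 0.0, 0.3, 0.7, -0.2]),    # (iii) "Please would be nice"
           np.array([0.5, 0.6, 0.4, 0.7, -0.4])]    # (iv)  "1.90 and please would be nice"
d = [0.8, -1.0, 0.3, 0.7]                         # conformity with the bakery type
p = 0.98                                          # Conv-prob
weights = (0.1, 0.1, 0.8)                         # (alpha, beta, gamma)

rho = move_scores(moves_c, chi_s, chi_o, d, p, weights)
print(np.round(rho, 4))           # compare with the rho values reported in the example
print(np.round(softmax(rho), 4))  # compare with the reported move probabilities
```

The helper deliberately leaves the SelfMonitor and the Bayesian update of χ_o outside the sketch; only the per-move evaluation and normalisation are shown.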
The question under discussion would shift to another topic and the conversation might evolve into a dispute, though with lower probability than in (ii). (iv) indicate that the baker wants to persist with the trade without disturbing his final goals and his psychological needs. This seems to be very much of a compromise move, but still the final state would depend on the customer's interpretation of the dialogue pragmatics. §.§.§ A worked example For this example, we have a character type vector of [o, c, e, a, n] we assume that baker character vector is χ_s = [0.0, 0.3, 0.0, 0.0, 0.5] showing conscientiousness and relatively high neuroticism and Conv-prob is p = 0.98 assuming the baker's conformity with the bakery conversational type is high. Following the initial state S_0, the baker hears "2 croissants" from client, after which the baker updates the information state then the SelfMonitor chooses values for α, β, γ. For example, we assume α = 0.1, β = 0.1 and γ = 0.8, we assume, in this circumstance, that the baker thinks more about the conversational type rule than self character type and other character type, then: CharacterType.Other= χ_o χ_o=μ(⇑ Tmp.CharacterType.Other, ⇑ dgb) μ is the other's character type updating function with parameters: the previous state's character type vector and the current DGB. We assume that after the other's character type update we have χ_o = [0.0, 0.0, -0.1, -0.4, 0.2] assuming the customer is not very agreeable (-0.4) and a little neurotic (0.2). (i): We assume that "1.90" shows disagreeableness (-0.4) and slight neuroticism (0.2) so c_1 = [0.0, 0.0, -0.1, -0.4, 0.2] and d_1 = 0.8 shows "1.90" shows high conformity with the bakery conversational type. Then according to function (<ref>) we have: ρ(a_1) = 0.7646 (ii): We assume that "Get out of the bakery, you're not wearing a mask" shows high disagreeableness (-0.7), low conscientiousness (-0.5) and high neuroticism (0.8) so c_2 = [0.3, -0.5, 0.0, -0.7, 0.8] and d_2 = -1.0 shows incongruity with the bakery type. As a result, we obtain ρ(a_2) = -0.7080 (iii): We assume that "Please would be nice" shows high agreeableness (0.7) and slight extroversion (0.3) c_3 = [0.2, 0.0, 0.3, 0.7, -0.2] and d_3 = 0.3 shows low conformity with the conversational type. Consequently, we obtain ρ(a_3) = 0.1201 (iv): We assume that "1.90 and please would be nice" shows relatively high openness (0.5), conscientiousness (0.6) and extroversion (0.4), high agreeableness (0.7) and low neuroticism (-0.4) so c_4 = [0.5, 0.6, 0.4, 0.7, -0.4] and d_4 = 0.7 shows high conformity with the bakery type. So we obtain, ρ(a_4) = 0.4727 We then apply those preference score with the softmax function: softmax(ρ(a_i)) = [0.3998, 0.0917, 0.2099, 0.2986] This indicates that the probability that the baker selects (i) is 39.98% (iv) is the next ranked score. All this assuming that the baker's character type is relative high neuroticism and the commitment degree to the conversational type is high. Given his character type, it is reasonable to conclude that he would not be extremely polite (option iv) but not rude because of the constraint of conformity with the conversational type. Now we want to illustrate that if we change the distribution of certain of the parameters, things can flip significantly. We change the distribution of different variables (α,β,γ) to [0.3, 0.1, 0.6] showing that the baker is concerned a little more about his own character type (0.3 rather than 0.1) but slightly less than before on conversational type (0.6). 
We further assume that the baker's character type vector is [0.5, 0.7, 0.3, 0.8, -0.5], which shows high conscientiousness (0.7), low neuroticism (-0.5) and high agreeableness (0.8). Then we obtain ρ(A) = [0.3457, -0.7277, 0.3217, 0.6946]. The resulting probability distribution is softmax(ρ(a_i)) = [0.2677, 0.0915, 0.2613, 0.3795] This indicates that the baker would choose option (iv) for the next move with 38% probability, an option which balances the baker's final goal and personal psychological needs (high conscientiousness). Finally, we modify (α,β,γ) to [0.8, 0.1, 0.1] and assume that the baker's character type vector is [0.2, -0.3, 0, -0.5, 0.8], which shows high neuroticism (0.8) and low agreeableness (-0.5). We obtain ρ(A) = [0.8007, 0.7652, -0.5229, -0.5032], which yields the probability distribution softmax(ρ(a_i)) = [0.3996, 0.3856, 0.1064, 0.1085] This indicates that the probability of choosing (i) and (ii) is now about the same. In this scenario the baker focuses more on his own character type. (ii) shows complete incongruity with the bakery type, which indicates a complete violation of the baker's original final goal. Hence, at this point, the baker is in a dilemma. It is worth noting that this state will lead to a "breakthrough point": if the baker chooses (ii), it means that the baker chooses to change his final goal or the conversational type. The consequence of this is that the Goals, ConvType and Conv-prob should be changed in the private part of the information state. How to effect this in a formal way, we leave to future work. § CONCLUSION Character is a person's stable attitude towards reality, and it also affects one's performance in dialogue. Conversational type has, in one way or another, been one of the principal notions of dialogue research since the early days of AI work on dialogue. It reflects domain-specific interaction, in particular the move space given to the conversational participants. We have tried to show in this paper that investigating the interaction of these two factors is worthwhile. In particular, we present a method for the decision-making process for moves that combines character type and conversational type. We present a mathematical model that combines these factors after assigning numerical values to the various parameters involved, and demonstrate by means of an example how this works. § FUTURE WORK In this paper, we have made a preliminary proposal concerning the modelling of character types and their combination with conversational types. We aim to refine this in future work in ways that include the following: * <cit.> gave a formal method to classify conversations into types. We believe that under realistic conditions, there are often multiple conversational types involved in a single conversation, which may involve sequential transformations or overlapping phenomena. * In the approach sketched here, we use probabilistic TTR to classify the conversational type. However, in practice this assessment can change as the dialogue unfolds. We hope to develop methods that incorporate such dynamism. * Personality analysis based on the Big Five theory is robust, but inference about each other's character types is also in flux during conversations, and examining the impact of this change process on our approach is a worthwhile direction for future work. * <cit.> provided a categorical approach to responses to questions. We hypothesize that the conversational type has the role of delimiting the range of possible moves.
We aim to characterize this variability. * This article introduces the concept of move conformity, which future work will need to detect automatically based on the different types of dialogue. This could be achieved by modifying an existing NLG evaluation model (e.g. BLEU <cit.>, ADEM <cit.>). * This paper discussed several cases in detail, but they are all constructed examples, so fitting this model to actual conversations (e.g. the British National Corpus) or to scripted dialogues annotated for character types (e.g. FriendQA <cit.>), with experimental predictions, is desirable. §.§ Acknowledgments This work was supported by an internship funded by the Institut Universitaire de France, within Jonathan Ginzburg's project Unifying Verbal and Non-verbal Interaction. We also acknowledge the support of the French Investissements d'Avenir-Labex EFL program (ANR-10-LABX-0083). Thanks to Prof. Ginzburg for his patient guidance during the internship. Thanks also to the three anonymous reviewers for SemDial for their detailed and perceptive comments.
http://arxiv.org/abs/2307.02725v1
20230706022056
Near-Field Wall-Modeled Large-Eddy Simulation of the NASA X-59 Low-Boom Flight Demonstrator
[ "Emily Williams", "Gonzalo Arranz", "Adrián Lozano-Durán" ]
physics.flu-dyn
[ "physics.flu-dyn", "physics.comp-ph" ]
On efficient linear and fully decoupled finite difference method for wormhole propagation with heat transmission process on staggered grids This work is supported by the National Natural Science Foundation of China grants 12271302, 12131014. Xiaoli Li School of Mathematics, Shandong University, Jinan, Shandong, 250100, P.R. China. Email: xiaolimath@sdu.edu.cn. Ziyan Li Department of Mathematics, City University of Hong Kong, Hong Kong SAR, China. Email: ziyali3-c@my.cityu.edu.hk. Hongxing Rui Corresponding author. School of Mathematics, Shandong University, Jinan, Shandong, 250100, P.R. China. Email: hxrui@sdu.edu.cn. =============================================================================================================================================================================================================================================================================================================================================================================================================================================== Wall-modeled large-eddy simulation (WMLES) is utilized to analyze the experimental aircraft X-59 Quiet SuperSonic Technology (QueSST) developed by Lockheed Martin at Skunk Works for NASA's Low-Boom Flight Demonstrator project. The simulations utilize the charLES solver and aim to assess the ability of WMLES to predict near-field noise levels under cruise conditions, considering various subgrid-scale (SGS) models and grid resolutions. The results are compared with previous numerical studies based on the Reynolds-averaged Navier-Stokes (RANS) equations. Our findings demonstrate that WMLES produces near-field pressure predictions that are similar to those of RANS simulations at a comparable computational cost. Some mild discrepancies are observed between the WMLES and RANS predictions downstream the aircraft. These differences persist for finest grid refinement considered, suggesting that they might be attributed to underresolved interactions of shock waves and expansions waves at the trailing edge. 
§ NOMENCLATURE @l @ = l@ α_ij resolved velocity gradient tensor C_dsm constant for dynamic Smagorinsky model C_vre constant for Vreman model C_p specific heat at constant pressure C_v specific heat at constant volume δ_ij Kronecker delta γ ratio of specific heats h_wm wall-normal matching location for the wall model κ Kármán constant L_BODY streamwise body-length of the X-59 model M Mach number μ dynamic viscosity μ_M Mach angle μ_t,wm wall-model dynamic eddy viscosity Pr Prandtl number Pr_t turbulent Prandtl number Pr_t,wm turbulent Prandtl number for the wall model p pressure p_∞ farfield pressure p_s pressure measured by the probe q_j heat flux vector R gas constant Re Reynolds number ρ density ρ_∞ farfield density S_ij rate-of-strain tensor σ_ij shear-stress tensor τ_ij subgrid-scale tensor τ_w,wm wall shear stress for the wall model T temperature T_wm wall-model temperature u_i velocity component u_wm magnitude of the wall-parallel velocity for the wall model U_∞ farfield streamwise velocity ε_p relative difference in the pressure signature between the RANS and WMLES x_i spatial coordinate X_N transformed x-coordinate y_n wall-normal direction for wall model ν kinematic viscosity ν_t eddy viscosity CFD computational fluid dynamics DLR German Aerospace Center EQWM equilibrium wall-stress model ECS environmental control system LES large-eddy simulation M Mach number based on the freestream SA Spalart-Allmaras RANS Reynolds-averaged Navier–Stokes equations SGS subgrid scale WMLES wall-modeled large-eddy simulation § INTRODUCTION The use of computational fluid dynamics (CFD) for external aerodynamic applications with increasing functionality and performance has greatly improved our understanding and predictive capabilities of complex flows <cit.>. However, applications to the modern aerospace industry often entail the presence of multi-scale, complex flow physics, which have since exposed limitations in even the most advanced CFD solvers <cit.>. One outstanding challenge in CFD is the prediction of quantities of interest in high-speed vehicles. In this work, we investigate the capability of wall-modeled large-eddy simulation (WMLES) to predict the near-field pressure signature of the X-59 Quiet SuperSonic Technology (QueSST). The primary objective of NASA's Low-Boom Flight Demonstrator project is to assess the viability of supersonic flight over land while minimizing noise levels. In this regard, the X-59 QueSST experimental aircraft, developed by Lockheed Martin at Skunk Works for NASA's project, serves as a practical platform for validating high-speed aircraft CFD <cit.>. Previous research efforts, documented in the American Institute of Aeronautics and Astronautics (AIAA) Sonic Boom Prediction Workshop series, have encompassed near-field CFD models and subsequent atmospheric propagation to analyze noise signatures specific to the X-59 QueSST <cit.>. The workshop involved utilizing various CFD solvers to simulate near-field signatures, which were then propagated to the ground for noise level calculations. Currently, CFD studies of the X-59 QueSST mostly rely on closure models for the Reynolds-averaged Navier-Stokes (RANS) equations <cit.>, along with other variations <cit.>. Here, we use as a reference the near-field RANS simulations from <cit.>. The RANS cases were performed with the German Aerospace Center (DLR) TAU code, which is based on an unstructured finite-volume approach for solving the RANS equations on hybrid grids <cit.>. 
An improved second-order accurate advection upstream splitting method (AUSM) upwind scheme was applied for the spatial discretization of the convective fluxes, and an implicit lower upper symmetric Gauss Seidel scheme is used for time stepping. The AUSM scheme is a flux splitting scheme with an aim at removing numerical dissipation on a contact discontinuity for accurate and robust resolution for shocks <cit.>. The gradients were computed using a Green Gauss approach. The Spalart-Allmaras (SA)-negative turbulence model was used. For a comprehensive overview of the RANS simulations, refer to the work by <cit.>. Current RANS methodologies have demonstrated high performance in predicting shock waves far from their interactions with boundary layers <cit.>. Yet, there is still no practical RANS model that can accurately address the wide range of flow regimes relevant to aerodynamic vehicles. These regimes include separated flows, afterbodies, mean-flow three-dimensionality, shock waves, aerodynamic noise, fine-scale mixing, laminar-to-turbulent transition, and more. RANS predictions in these scenarios often lack consistency and reliability, particularly when dealing with geometries and conditions representative of the flight envelope of airplanes <cit.>. One notable example is the inadequate prediction of the onset and extent of three-dimensional separated flow in wing-fuselage junctures, where RANS-based approaches have exhibited poor performance <cit.>. Furthermore, CFD experiments with aircraft at high angles of attack have revealed that RANS-based solvers struggle to accurately predict maximum lift, the corresponding angle of attack, and the physical mechanisms underlying stall. While algebraic RANS closures, such as the Baldwin–Lomax model <cit.>, offer reasonable predictions at high Mach number boundary layer flows <cit.>, they are still deficient in capturing separated flows <cit.>. Recently, large-eddy simulation (LES) has gained momentum as a tool for aerospace applications that might offer a higher degree of generality compared to RANS approaches <cit.>. In LES, the large eddies containing most of the energy are directly resolved, while the dissipative effect of the small scales is modeled through a subgrid-scale (SGS) model. By incorporating a near-wall flow model that resolves only the large-scale motions in the outer region of the boundary layer, WMLES reduces the grid-point requirements, scaling at most linearly with increasing Reynolds number <cit.>. Some representative studies of WMLES at high-speed conditions include shock/boundary-layer interactions <cit.>, compressible channels <cit.>, the shock wave boundary layer interaction over compression ramps <cit.>, flow past a single fin <cit.>, flow past two parallel fins <cit.>, turbulent boundary layers <cit.> and turbulent channel flows <cit.>, transitional flows <cit.>, the wall-mounted hump and axisymmetric compression ramp <cit.>, and transonic airfoils <cit.>. Although most SGS/wall models are rooted in aero/thermal equilibrium assumptions, which are only valid for incompressible flows, the general consensus is that WMLES is able to capture key flow physics. Nonetheless, the use of WMLES for the prediction of realistic supersonic aircraft has been explored to a lesser extent, which is the goal of this work. Our long-term objective is to develop a SGS model for WMLES that provides accurate predictions across a wide range of flow regimes, including subsonic and supersonic flows <cit.>. 
The results from the present study will inform the advances required to extend our SGS modeling approach in the presence of shock waves. This paper is organized as follows. The methodology of WMLES is presented in section <ref>, which contains details of the numerical solver, and the physical and computational setups are discussed in section  <ref>. The numerical results for the quantities of interest are presented in section <ref>. Finally, conclusions and future work are offered in section <ref>. § METHODOLOGY §.§ SGS Models The performance of WMLES is investigated for two SGS models. Both models adopt the eddy viscosity form τ_ij = -2 ν_t ( S_ij - 1/3S_kkδ_ij) + 1/3τ_kkδ_ij, where repeated indices imply summation, τ_ij the SGS tensor, ν_t is the eddy viscosity, S_ij is the rate-of-strain, τ_kkδ_ij / 3 is the isotropic component of the SGS tensor with Kronecker delta δ_ij, and ϕ = ρϕ/ρ is the Favre average with (·) denoting the grid filter. The first SGS model considered is the dynamic Smagorinsky model (labeled as DSM) <cit.>, where the eddy viscosity is given by ν_t = C_dsm |S|. The constant C_dsm is dynamically computed as <cit.> C_dsm = 1/2⟨ℒ_ijℳ_ij⟩/⟨ℳ_ij^2 ⟩ where ℒ_ij = ρu_i u_j - u_i u_j, ℳ_ij = Δ/Δ^2 | S | S_ij - 1/3S_kk - | S |S_ij - 1/3S_kk, with (·) being the test filter with filter width Δ larger than that of the grid filter Δ, and | S |^2 = 2 S_ijS_ij. The second SGS model considered is the Vreman model <cit.> (labeled as Vreman). The eddy viscosity is given by ν_t = C_vre√(B_β/α_ijα_ij), where α_ij = ∂u_j / ∂ x_i, β_ij = Δ^2 α_miα_mj, and B_β = β_11β_22 - β_12^2 + β_11β_33 - β_13^2 + β_22β_33 - β_33^2. The model constant C_vre≈ 0.07. For both the Vreman model and the DSM, the term τ_kkδ_ij is neglected on the grounds that it is small compared to the thermodynamic pressure. The SGS model for the heat flux in the energy equation is given by the eddy diffusivity model q_i = -ρν_t/Pr_t ∂T/∂ x_i, where T is the temperature, and Pr_t is the turbulent Prandtl number which is set equal to 0.9. The equation of state for the fluid is a calorically perfect gas p = ρ R T. Additional details about the computational approach can be found in <cit.>. §.§ Wall Model We use a compressible equilibrium wall-stress model (EQWM) to model the small-scale flow motions near the wall. Figure <ref> shows a schematic of the wall modeling approach. The form of the wall model is given by two ordinary differential equations derived from the conservation of momentum and energy by neglecting the wall-parallel fluxes <cit.>, d/ d y_n[(μ+μ_t,wm) d u_wm/ d y_n] = 0, d/ d y_n[(μ + μ_t,wm) u_wm d u_wm/ d y_n + C_p (μ/Pr + μ_t,wm/Pr_t,wm) d T_wm/ d y_n] = 0, where y_n is the wall-normal direction, u_wm is the magnitude of the wall-parallel velocity for the wall model, T_wm is the wall-model temperature, μ is the dynamic viscosity, μ_t,wm is the wall-model eddy viscosity, Pr and Pr_t,wm are Prandtl number and turbulent Prandtl number, respectively, and C_p is the specific heat at constant pressure. The eddy viscosity is taken from the mixing-length model as μ_t,wm = κρ y_n √(τ_w,wm/ρ)[1 - exp(-y_n^+/A^+)]^2, where κ =0.41, Pr_t,wm=0.9, A^+=17, ρ is the density, τ_w,wm is the instantaneous wall shear stress, and superscript + denotes quantities normalized by τ_w and μ. At y_n = 0, Eq. (<ref>) is complemented with the non-slip and adiabatic wall boundary conditions. At y_n = h_wm, u_wm and T_wm are set equal to the instantaneous values from the LES solution. 
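For concreteness, the Vreman eddy viscosity defined above can be evaluated cell by cell from the resolved velocity-gradient tensor as in the following sketch. This is our own minimal NumPy illustration, not the charLES implementation; the filter width and gradient tensor are assumed inputs, and B_β is implemented in Vreman's original form, in which the final term is β_22β_33 − β_23².

```python
import numpy as np

C_VRE = 0.07  # model constant quoted above

def vreman_eddy_viscosity(grad_u, delta):
    """Vreman eddy viscosity for one cell.

    grad_u : 3x3 array with alpha_ij = d(u_j)/d(x_i), the resolved velocity gradient
    delta  : local filter width
    """
    alpha = np.asarray(grad_u, dtype=float)
    alpha_sq = np.sum(alpha * alpha)           # alpha_ij alpha_ij
    if alpha_sq < 1e-30:                       # uniform flow: no subgrid activity
        return 0.0
    beta = delta**2 * alpha.T @ alpha          # beta_ij = delta^2 alpha_mi alpha_mj
    b_beta = (beta[0, 0] * beta[1, 1] - beta[0, 1]**2
              + beta[0, 0] * beta[2, 2] - beta[0, 2]**2
              + beta[1, 1] * beta[2, 2] - beta[1, 2]**2)
    return C_VRE * np.sqrt(max(b_beta, 0.0) / alpha_sq)

# Example: pure shear du/dy = 100 1/s with a 1 mm filter width.
grad_u = np.zeros((3, 3))
grad_u[1, 0] = 100.0
print(vreman_eddy_viscosity(grad_u, delta=1e-3))  # 0.0: B_beta vanishes for pure shear
```

The zero result for pure shear in the usage example reflects a known property of the Vreman closure, which is constructed so that the eddy viscosity vanishes in several laminar flows.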
The matching location h_wm is taken at the centroid of the first control volume attached to the wall. Effectively, given an instantaneous wall-parallel velocity and temperature at some height above the wall, Eq. <ref> is solved for u_wm and T_wm in an overlapping layer between y_n = 0 and y_n = h_wm. The solution from the wall shear stress (τ_w,wm) and the wall heat flux (q_w,wm) from the wall model are returned to the LES solver as boundary conditions at the wall. §.§ Numerical Solver and Computational Resources The simulations are performed using the CPU version of the code charLES from Cascade Technologies (Cadence). The validation of the algorithm can be found in <cit.>. The solver integrates the filtered Navier-Stokes equations using a second-order accurate finite volume formulation. The numerical discretization relies on a flux formulation that is approximately entropy preserving in the inviscid limit, thereby limiting the amount of numerical dissipation added into the calculation. No special treatment was employed for the shock waves, and the simulations in the present work are conducted without a shock capturing scheme or sensor. The time integration is performed with a third-order Runge-Kutta explicit method. The mesh generator is based on a Voronoi hexagonal close packed point-seeding method which automatically builds high-quality meshes for arbitrarily complex geometries with minimal user input. All simulations in this work are performed using high-performance computing (HPC) resources provided by the MIT Supercloud. The MIT Supercloud is a collaboration with MIT Lincoln Laboratory on a shared facility that is optimized for streamlining open research <cit.>. The system is designed to support work that requires significant compute or memory resources. The system has 480 nodes with Intel Xeon Platinum 8260 processors with 2 × 24 CPUs/node and 192GB of RAM per node. § COMPUTATIONAL SETUP The simulation is performed using the C608 Low-Boom Flight Demonstrator geometry provided by the AIAA Sonic Boom Prediction workshop shown in Figure <ref>. This geometry configuration is an early iteration of the X-59 final design and is desirable for comparing against RANS results using the same setup. Simulations are performed at cruise conditions at Mach number M = 1.4 and Reynolds number per inch of Re/in = 109,776. Freestream and boundary conditions are specified in Table <ref>. The C608 has propulsion and environmental control system (ECS) boundary conditions as shown in Figure <ref>. The engine bypass exhaust is a semicircular region between the nozzle and aft deck, and the ECS inlet is located in the wing root <cit.>. The size of the computational domain is 2.7L_BODY× 2.7L_BODY× 5.4L_BODY in the streamwise, spanwise, and vertical directions, respectively, where L_BODY is the streamwise body-length of the model. Figure <ref> shows a schematic of the computational domain, where the red line denotes the pressure probe placed at one-body length below directly under the aircraft. A total of six WMLES cases are conducted for three different grid resolutions using both DSM and Vreman models. In all cases, the domain is discretized using a Mach-aligned cone of length 2.1L_BODY and whose vertex is located at 0.1L_BODY upstream the aircraft nose, and an additional grid refinement in the probe-containing plane where the pressure signatures are reported. 
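Returning briefly to the equilibrium wall model described above: because its momentum equation integrates to a constant-stress balance, (μ + μ_t,wm) du_wm/dy_n = τ_w,wm, the wall stress can be obtained by a one-dimensional root search in which a candidate τ_w is integrated from the wall up to h_wm and adjusted until the predicted velocity matches the LES value there. The sketch below illustrates this for the momentum equation only, with constant μ and ρ and the mixing-length eddy viscosity quoted above; the energy equation, property variations and the actual solver implementation are deliberately omitted, and all numbers in the usage example are placeholders.

```python
import numpy as np
from scipy.optimize import brentq

KAPPA, A_PLUS = 0.41, 17.0

def u_at_hwm(tau_w, h_wm, mu, rho, n=200):
    """Integrate du/dy = tau_w / (mu + mu_t(y)) from the wall to y = h_wm."""
    y = np.linspace(0.0, h_wm, n)
    u_tau = np.sqrt(tau_w / rho)
    y_plus = y * u_tau * rho / mu
    damping = (1.0 - np.exp(-y_plus / A_PLUS))**2     # near-wall damping factor
    mu_t = KAPPA * rho * y * u_tau * damping          # mixing-length eddy viscosity
    dudy = tau_w / (mu + mu_t)
    # trapezoidal integration of the velocity gradient
    return float(np.sum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y)))

def wall_stress(u_les, h_wm, mu=1.8e-5, rho=1.2):
    """Find tau_w such that the wall-model velocity matches u_les at h_wm."""
    f = lambda tau: u_at_hwm(tau, h_wm, mu, rho) - u_les
    return brentq(f, 1e-8, 1e4)   # bracket chosen for this illustrative case

# Example: matching a 50 m/s wall-parallel velocity at h_wm = 1 mm.
print(wall_stress(u_les=50.0, h_wm=1e-3))  # wall shear stress in Pa (illustrative)
```

The returned τ_w would play the role of the boundary condition handed back to the outer LES, as described above.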
This additional refined region extends from the aircraft nose to 2.2L_BODY in the streamwise direction, to 0.25L_BODY in the spanwise direction (enclosing the wing of the aircraft), and to 1.5L_BODY below the aircraft. The cases also include enhanced refinement around the body of the aircraft geometry. Table <ref> compiles the information for the six cases, including the grid size in the Mach-aligned cone and the probe-containing refined plane. As an example, Figure <ref> shows the domain discretization for one of the WMLES cases in a slice of the x-z plane. Figure <ref> shows a view of the domain from the front of the aircraft at a slice halfway through the domain. The front and side views show the refinement configuration with a red line to denote the pressure probe. Figure <ref> shows a top view of the domain discretization. § RESULTS §.§ Shock Waves First, we discuss the results for the locations and intensities of shock waves observed from both the side and top views, depicted in Figure <ref> and Figure <ref>, respectively. We label the shocks that arise due to the aircraft's geometry and the boundary conditions. Additionally, we provide computational flow visualizations of the side and top views in Figure <ref> and Figure <ref>, respectively. These visualizations are generated to capture the contours in the relative pressure, denoted as p, which is defined as the percent difference deviation from the freestream pressure p = Δ p/p_∞× 100, where Δ p = p_s - p_∞ where p_s is the pressure measured by the probe and p_∞ = ρ_∞/γ(U_∞/M)^2. where the subscript ∞ denotes freestream quantity, γ = 1.4, and M = 1.4. The pressure visualizations show the surface pressure distribution on the aircraft, also quantified using p above. The nose creates a shock and expansion pair, followed by a smooth compression. There is a rapid succession of shocks and expansions generated by the wing leading edge, vortex generators, and ECS inlet. The wing trailing edge, horizontal stabilizer, aft deck, tail, plume, and vortices result in additional shocks and expansions along with other complex interactions. The surface pressure distribution indicates that the inlet spillage creates an area of high pressure over the wing. These results are consistent with those reported through the AIAA Sonic Boom Prediction Workshop <cit.>. §.§ Pressure Signatures The WMLES pressure signatures at one-body length away from the aircraft are compared with RANS results from <cit.> for a grid resolution of 5.74 in/cell. We define a transformed x-coordinate given by X_N = x - L_BODY/tan(μ_M), where the distance from the nose in the freestream direction x is normalized by the Mach angle μ_M = sin^-1(1/M) using the radial distance to the model, L_BODY in this case. This transformation is common in previous sonic boom simulations <cit.>. Figure <ref> shows the prediction of the pressure signature by WMLES with the DSM for the different grid refinement configurations outlined in Table <ref>. As the grid is refined, the prediction provided by WMLES approaches the RANS results for the near-field pressure signature. Results computed using the Vreman SGS model were comparable and are omitted for the sake of brevity. Figure <ref> shows the finest resolution case comparison with both SGS models. The WMLES solutions show reasonably accurate agreement with the RANS results for X_N/L_BODY < 0.9 for both SGS models, particularly at the finest grid resolution. 
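The signatures discussed in this section are reported as the relative pressure p = Δp/p_∞ × 100 along the Mach-angle-transformed coordinate X_N. A small helper of the following form can apply both definitions to a probe trace; it is an illustrative sketch, and the array names, units and example numbers are our own assumptions rather than values from the study.

```python
import numpy as np

def transform_signature(x, p_s, p_inf, mach, l_body):
    """Return (X_N / L_BODY, relative pressure in percent) for a probe trace.

    x      : streamwise probe coordinates, same length unit as l_body
    p_s    : pressures measured along the probe
    p_inf  : freestream pressure
    mach   : freestream Mach number
    l_body : radial distance used in the transformation (one body length here)
    """
    mu_m = np.arcsin(1.0 / mach)              # Mach angle
    x_n = x - l_body / np.tan(mu_m)           # transformed streamwise coordinate
    p_rel = (p_s - p_inf) / p_inf * 100.0     # percent deviation from freestream
    return x_n / l_body, p_rel

# Illustrative usage with synthetic values at M = 1.4; body length is arbitrary.
x = np.linspace(0.0, 2.0, 5) * 30.0
p_s = np.array([101325.0, 101830.0, 101200.0, 101500.0, 101325.0])
print(transform_signature(x, p_s, p_inf=101325.0, mach=1.4, l_body=30.0))
```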
This is also the case in the vicinity of the ECS and bypass, located approximately at X_N/L_BODY≈ 0.45 and X_N/L_BODY≈ 0.85, respectively. However, we observe that WMLES results do not match the RANS solution as effectively in the downstream region and after the aircraft. This disparity is present for both SGS models and persists for the finest grid resolution considered. Although the specific cause of this disparity remains unclear, it is likely attributed to the intricate interactions and rapid sequence of shocks and expansions occurring near various components, including the wing trailing edge, horizontal stabilator, aft deck, T-tail, engine plume, and tip vortices <cit.>. Figure <ref> contains the results for the pressure signature as the grid is refined. The X_N/L_BODY locations of interest are shown in the schematic in Figure <ref> for reference to give insight into which areas of the geometry are impactful at the location of the pressure probe. The RANS solution is also included for reference as dashed lines, which is not dependent on the grid resolution. The relative difference in the pressure signature between the RANS and WMLES is computed as ε_p = p_WMLES - p_RANS/p_∞, where the subscripts denote either the WMLES or RANS solution. The results for ε_p, shown in Figure <ref>, reveal that the difference between WMLES and RANS solution decreases with grid refinement for all locations considered and both SGS models. §.§ Computational Cost Cost-efficiency is part of the motivation for using WMLES for turbulence modeling. Table <ref> outlines the calculations for the computational cost for different grid refinement configurations for the WMLES cases with the Vreman SGS model. This estimate is based on five flow passes through the domain to ensure convergence in the solution of the pressure signature propagated to one-body length below the aircraft. The number of characteristic steps in the simulation is determined using Δ t estimated from the Courant number. The computational time in core-s is then converted to CPU-hr. Comparing the costs between WMLES and RANS simulations is not a straightforward task. In this regard, we refer to the cost reported by <cit.> as a reference. <cit.> performed RANS simulations for the AIAA Sonic Boom Prediction Workshop on the X-59 aircraft using the NASA Ames Electra cluster. Their setup involved one Skylake node equipped with two Intel Xeon Gold 6148 processors, utilizing a total of 40 cores for each run. The total computational time for their finest adapted grid, consisting of 29.6 million cells, was reported to be 1431 CPU-hours. As an example, the WMLES simulation using a grid configuration of 10 inches per cell for the cone and 5 inches per cell for the plane contains 18 million control volumes, which is comparable to the RANS simulation with 29.6 million control volumes. The estimated total computational time for WMLES using this specific solver is approximately 3000 CPU-hours. This indicates that, for a comparable grid resolution, WMLES is computationally more demanding than RANS but still exhibits a similar cost. It is important to note that improvements in computational cost can be achieved by optimizing the solver, utilizing faster CPUs, or employing GPUs, which can potentially lead to 7 to 10 times faster turnaround times. Nevertheless, our cost estimates highlight the motivation for future efforts in developing SGS models that offer higher accuracy for relatively coarser grids. 
Such advancements would directly result in reduced computational costs for WMLES solutions. § CONCLUSIONS Current RANS methodologies have shown reasonable performance in predicting shock waves in regions far from their interactions with boundary layers. However, a practical RANS model capable of accurately addressing the broad range of flow regimes relevant to aerodynamic vehicles has yet to be developed. In recent years, WMLES has emerged as a tool for aerospace applications, offering a higher level of generality compared to RANS approaches. Nevertheless, the application of WMLES for predicting realistic supersonic aircraft has received less attention. In this work, we evaluated the performance of WMLES for realistic supersonic aircraft. WMLES was utilized to analyze the experimental aircraft X-59 QueSST developed by Lockheed Martin at Skunk Works for NASA's Low-Boom Flight Demonstrator project. The simulations employed the charLES solver and aimed to assess the ability of WMLES to predict near-field noise levels under cruise conditions. The study considered two SGS models, the dynamic Smagorinsky model and the Vreman model, combined with an equilibrium wall model. The grid resolutions ranged from 20 in/cell to 5 in/cell in different areas of interest in the domain. The results were compared with previous numerical studies based on the RANS equations. Our findings demonstrated that WMLES produced near-field pressure predictions that were similar to those of RANS simulations for the finest grid resolutions investigated at a comparable computational cost. This was the case for both SGS models. Some discrepancies were observed between the WMLES and RANS predictions downstream the aircraft. These differences persisted for the finest grid refinement considered, suggesting that they might be attributed to the intricate interactions of shock waves and expansion waves at the trailing edge. Note that the refinement for the finest grid resolution was limited to a rectangular region below the aircraft. As such, the interactions of shock waves and expansion waves outside the refined area might be still severely underresolved. Here, our focus has been on WMLES with traditional SGS models along with comparisons with RANS predictions. Future simulations of the X-59 Low-Boom Flight Demonstrator will follow the wind tunnel experiments conducted NASA Glenn 8- by 6-Foot Supersonic Wind Tunnel <cit.>. Our long-term objective is to develop a SGS model suitable for WMLES that offers accurate predictions across various flow regimes, encompassing both subsonic and supersonic flows. The findings obtained from the present study will guide the advancements needed to expand our current machine-learning based SGS modeling efforts <cit.> in the presence of shock waves. § FUNDING SOURCES This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Department of Energy Computational Science Graduate Fellowship under Award Number DE-SC0023112. This paper was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. 
Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. § ACKNOWLEDGMENTS The authors acknowledge the MIT Supercloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper.
http://arxiv.org/abs/2307.00904v1
20230703100136
Efficient and fully-automatic retinal choroid segmentation in OCT through DL-based distillation of a hand-crafted pipeline
[ "Jamie Burke", "Justin Engelmann", "Charlene Hamid", "Megan Reid-Schachter", "Tom Pearson", "Dan Pugh", "Neeraj Dhaun", "Stuart King", "Tom MacGillivray", "Miguel O. Bernabeu", "Amos Storkey", "Ian J. C. MacCormick" ]
eess.IV
[ "eess.IV", "cs.AI", "q-bio.QM" ]
Fully-automatic retinal choroid segmentation in OCT J. Burke et al. School of Mathematics, University of Edinburgh jamie.burke@ed.ac.ukSchool of Informatics, University of Edinburgh justin.engelmann@ed.ac.ukCentre for Medical Informatics, University of Edinburgh Edinburgh Clinical Research Facility and Imaging, University of EdinburghUniversity Hospital Wales, NHS Wales British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh Centre for Clinical Brain Sciences, University of EdinburghThe Bayes Centre, University of Edinburgh Centre for Inflammation Research, University of Edinburgh Institute for Adaptive and Neural Computation, University of Edinburgh Efficient and fully-automatic retinal choroid segmentation in OCT through DL-based distillation of a hand-crafted pipeline Jamie BurkeEqual contribution.1, Justin Engelmann12,3, Charlene Hamid4, Megan Reid-Schachter4, Tom Pearson5, Dan Pugh6, Neeraj Dhaun6, Stuart King1, Tom MacGillivray4,7, Miguel O. Bernabeu2,3,8, Amos Storkey2 Ian J.C. MacCormick9,10 ==================================================================================================================================================================================================================================================== Retinal vascular phenotypes, derived from low-cost, non-invasive retinal imaging, have been linked to systemic conditions such as cardio-, neuro- and reno-vascular disease. Recent high-resolution optical coherence tomography (OCT) allows imaging of the choroidal microvasculature which could provide more information about vascular health that complements the superficial retinal vessels, which current vascular phenotypes are based on. Segmentation of the choroid in OCT is a key step in quantifying choroidal parameters like thickness and area. Gaussian Process Edge Tracing (GPET) is a promising, clinically validated method for this. However, GPET is semi-automatic and thus requires time-consuming manual interventions by specifically trained personnel which introduces subjectivity and limits the potential for analysing larger datasets or deploying GPET into clinical practice. We introduce DeepGPET, which distils GPET into a neural network to yield a fully-automatic and efficient choroidal segmentation method. DeepGPET achieves excellent agreement with GPET on data from 3 clinical studies (AUC=0.9994, Dice=0.9664; Pearson correlation of 0.8908 for choroidal thickness and 0.9082 for choroidal area), while reducing the mean processing time per image from 34.49s (±15.09) to 1.25s (±0.10) on a standard laptop CPU and removing all manual interventions. DeepGPET will be made available for researchers upon publication. § INTRODUCTION The retinal choroid is a complex, extensively interconnected vessel network positioned between the retina and the sclera. The choroid holds the majority of the vasculature in the eye and plays a pivotal role in nourishing the retina. Optical coherence tomography (OCT) is an ocular imaging modality that uses low-coherence light to construct a three-dimensional map of chorioretinal structures at the back of the eye. Standard OCT imaging does not visualise the deeper choroidal tissue well as it sits beneath the hyperreflective retinal pigment epithelium layer of the retina. 
Enhanced Depth Imaging OCT (EDI-OCT) overcomes this problem and offers improved visualisation of the choroid, thus providing a unique window into the microvascular network which not only resides closest to the brain embryologically, but also carries the highest volumetric flow per unit tissue weight compared to any other organ in the body. Since the advent of OCT, interest in the role played by the choroid in systemic health has been growing <cit.>, as non-invasive imaging of the choroidal microvasculature may provide a novel location to detect systemic, microvascular changes early. Indeed, changes in choroidal blood flow, thickness and other markers have been shown to correspond with patient health such as choroidal thickness in chronic kidney disease <cit.> and choroidal area and vascularity in Alzheimer's dementia <cit.>. Quantification of the choroid in EDI-OCT imaging requires segmentation of the choroidal space. However, this is a harder problem than retinal layer segmentation due to poor signal penetration from the device — and thus lower signal-to-noise ratio — and shadows cast by superficial retinal vessels and choroidal stroma tissue. This results in poor intra- and inter-rater agreement even with manual segmentation by experienced clinicians, and manual segmentation is too labour intensive and subjective to be practical for analysing large scale datasets. Semi-automated algorithms improve on this slightly but are typically multi-stage procedures, requiring traditional image processing techniques to prepare the images for downstream segmentation <cit.>. Methods based on graph theory such as Dijkstra's algorithm <cit.> or graph cut <cit.>, as well as on statistical techniques including level sets <cit.>, contour evolution <cit.>, and Gaussian mixture models <cit.> have been proposed previously. Concurrently, deep learning(DL)-based approaches have emerged. <cit.> used a DL model for choroid layer segmentation, but with traditional contour tracing as a post-processing step. Other DL-based approaches, too, combine traditional image processing techniques as pre- or post-processing steps <cit.> whereas others are fully DL-based <cit.>, the latter of which is in a similar vein to the proposed method. More recently, DL has been used to distil existing semi-automatic traditional image processing pipelines into a fully-automatic method <cit.>. Gaussian Process Edge Tracing (GPET), based on Bayesian machine learning <cit.>, is a particularly promising method for choroid layer segmentation that has been clinically and quantitatively validated <cit.>. Gaussian process (GP) regression is used to model the upper and lower boundaries of the choroid from OCT scans. For each boundary, a recursive Bayesian scheme is employed to iteratively detect boundary pixels based on the image gradient and the GP regressor's distribution of candidate boundaries. We aim to distil GPET into a fully-automatic method, DeepGPET, for choroidal segmentation that can process images without manual intervention in a fraction of the time — permitting analysis of large scale datasets and potential deployment into clinical care and research practice without prior training. § METHODS §.§ Data: Study populations and imaging devices We used 715 OCT images belonging to 82 subjects from three studies: OCTANE <cit.>, a study looking at renal function and impairment in chronic kidney disease patients. 
i-Test, a study recruiting pregnant women of any gestation or those who have delivered a baby within 6 months, including controls and individuals at high risk of complications. Normative, data from 30 healthy volunteers as a control group <cit.>. All studies conformed with the Declaration of Helsinki and received relevant ethical approval and informed consent from all subjects. <ref> provides an overview of basic population characteristics and number of subjects/images of these studies. Two Heidelberg spectral domain OCT SPECTRALIS devices were used for image acquisition: the Standard Module (OCT1 system) and FLEX Module (OCT2 system). The FLEX is a portable version that enables imaging of patients in a ward environment. Both machines imaged a 30^∘ (8.7 mm) region, generating a macular, cross-sectional OCT B-scan at 768×768 pixel resolution. Notably, 14% of the OCT B-scans were non-EDI and thus present more challenging images with lower signal-to-noise ratio in the choroidal part of the OCT. We split the data into training (603 images, 64 subjects), validation (58 images, 9 subjects) and test sets (54 images, 7 subjects) at the patient-level, stratified on scan types (EDI/non-EDI), cohorts, and image quality. §.§ DeepGPET Our approach is to distil GPET into a DL model to obtain an efficient and fully-automatic method. We fine-tune a UNet <cit.> with MobileNetV3 <cit.> backbone pre-trained on ImageNet for 60 epochs with batch size 16 using AdamW <cit.> (lr=10^-3, β_1 = 0.9, β_2 = 0.999, weight decay=10^-2). After epoch 30, we maintain an exponential moving average (EMA) of model weights which we then use as our final model. We use the following data augmentations: brightness and contrast changes, horizontal flipping, and simulated OCT speckle noise by applying Gaussian noise followed by multiplicative noise (all p=0.5); Gaussian blur and random affine transforms (both p=0.25). To reduce memory-load, we crop the black space above and below the OCT B-scan and process images at a resolution of 544 × 768 pixels. Images are standardised by subtracting 0.1 and dividing by 0.2, and no further pre-processing is done. We used Python 3.11, PyTorch 2.0, Segmentation Models PyTorch <cit.> and the timm library <cit.>. Our code is available here [to be added upon publication]. §.§ Evaluation and statistical analysis We used Dice coefficient and Area Under the ROC Curve (AUC) for evaluating agreement in segmentations, as well as the Pearson correlation r and Mean Absolute Error (MAE) for segmentation-derived choroid thickness and area. The calculation of thickness and area from the segmentation is described in more detail in <cit.>. Briefly, for thickness the average of 3 measures is used, taken at the fovea and 2,000 microns from it in either direction by drawing a perpendicular line from the upper boundary to the lower boundary to account for choroidal curvature. For area, pixels are counted in a region of interest 3,000 microns around the fovea, which corresponds to the commonly used Early Treatment Diabetic Retinopathy Study (ETDRS) macular area of 6,000 × 6,000 microns <cit.>. We compare DeepGPET's agreement with GPET's segmentations against the repeatability of GPET itself. The creator of GPET, J.B., made both the original and repeated segmentations with GPET. Since both segmentations were done by the same person there is no inter-rater subjectivity at play here. 
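To make the training recipe above concrete, the backbone, optimiser and input standardisation can be instantiated roughly as follows with Segmentation Models PyTorch. This is an illustrative sketch rather than the released DeepGPET code: the exact MobileNetV3 encoder variant and the loss function are not stated in the text, so the timm MobileNetV3-Large encoder and the Dice-plus-BCE objective below are assumptions, and the exponential moving average of weights is omitted.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with an ImageNet-pretrained MobileNetV3 encoder (variant assumed).
model = smp.Unet(
    encoder_name="timm-mobilenetv3_large_100",
    encoder_weights="imagenet",
    in_channels=1,   # single-channel OCT B-scans
    classes=1,       # binary choroid mask
)

optimiser = torch.optim.AdamW(
    model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=1e-2,
)

# Loss is not stated in the paper; Dice + BCE is a common choice for this task.
dice = smp.losses.DiceLoss(mode="binary")
bce = torch.nn.BCEWithLogitsLoss()

def normalise(img):
    """Standardisation used in the paper: subtract 0.1, divide by 0.2."""
    return (img - 0.1) / 0.2

# One illustrative training step on a dummy batch of cropped 544x768 B-scans.
x = normalise(torch.rand(2, 1, 544, 768))
y = (torch.rand(2, 1, 544, 768) > 0.5).float()
logits = model(x)
loss = dice(logits, y) + bce(logits, y)
loss.backward()
optimiser.step()
optimiser.zero_grad()
```

The 544x768 input size matches the cropped resolution described above and is divisible by 32, which the U-Net encoder requires.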
Thus, the intra-rater agreement measured here is a best case scenario and forms an upper-bound for agreement with the original segmentation. In addition to quantitative evaluations, we also compared segmentations by GPET and DeepGPET for 20 test set OCT images qualitatively by having them rated by I.M., an experienced clinical ophthalmologist. We selected 7 examples with the highest disagreement in thickness and area, 6 examples with disagreement closest to the median, and 7 examples with the lowest disagreement. Thus, these 20 examples cover cases where both methods are very different, cases of typical disagreement, and cases where both methods are very similar. In each instance, I.M. was blinded to which method produced which segmentation, and was asked to rate each one along three dimensions: Quality of the upper boundary, the lower boundary and overall smoothness using an ordinal scale: “Very bad”, “Bad”, “Okay”, “Good”, “Very good”. § RESULTS §.§ Quantitative <ref> shows the results for DeepGPET and repeat GPET, compared to the initial GPET segmentations as “ground-truth”. §.§.§ Agreement in segmentation. Both methods have excellent agreement with the original segmentations. DeepGPET's agreement is comparable to the repeatability of GPET itself, with DeepGPET having a slightly higher AUC (0.9994 vs 0.9812) and slightly lower Dice coefficient (0.9664 vs 0.9672). DeepGPET having a higher AUC but lower Dice suggests that for pixels where it disagrees with GPET after thresholding the confidence is lower than for ones where it agrees with GPET and so the raw DeepGPET probabilities provide a reasonable measure of uncertainty. §.§.§ Processing speed and manual interventions. Both methods were compared on the same standard laptop CPU. DeepGPET processed the images in only 3.6% of the time that GPET needed. DeepGPET ran fully-automatic and successfully segmented all images, whereas GPET required 1.27 manual interventions on average, including selecting initial pixels and manual adjustment of GPET parameters when the initial segmentation failed. This results in massive time savings: A standard OCT volume scan consists of 61 B-scans. With GPET, processing such a volume for a single eye takes about 35 minutes during which a person has to select initial pixels to guide tracing (for all images) and adjust parameters if GPET initially failed (for about 25% of images). In contrast, DeepGPET could do the same processing in about 76 seconds on the same hardware, during which no manual input is needed. DeepGPET could even be GPU-accelerated to cut the processing time by another order of magnitude. The lack of manual interventions required by DeepGPET means that no subjectivity is introduced unlike GPET, particularly when used by different people. Additionally, DeepGPET does not require specifically trained analysts and could be used fully-automatically in clinical practice. §.§.§ Agreement in choroid area and thickness. GPET showed very high repeatability for thickness (Pearson r=0.9527, MAE=10.4074 μm) and area (Pearson r=0.9726, MAE=0.0486 mm^2). DeepGPET achieved slightly lower, yet also very high agreement for both thickness (Pearson r=0.8908, MAE=13.3086 μm) and area (Pearson r=0.9082, MAE=0.0699 mm^2). <ref> shows correlation plots for thickness and area. DeepGPET's agreement with GPET does not quite reach the repeatability of GPET itself, when used by the same experienced analyst, but it is quite comparable and high in absolute terms. 
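The derived measures compared in this section can be computed from a binary choroid mask in a few lines. The sketch below follows the earlier description (thickness averaged over measurements at the fovea and at ±2,000 microns, area as the pixel count within ±3,000 microns of the fovea), but it simplifies the perpendicular-to-boundary thickness measurement to vertical columns and treats the pixel-to-micron scale factors as assumed inputs, so it should be read as an approximation of the published metric rather than a re-implementation.

```python
import numpy as np

def choroid_metrics(mask, fovea_col, microns_per_px_x, microns_per_px_y):
    """Approximate choroid thickness (microns) and area (mm^2) from a binary mask.

    mask                  : 2D array (rows x cols), nonzero inside the choroid
    fovea_col             : column index of the fovea
    microns_per_px_x / _y : lateral and axial pixel spacing in microns
    """
    mask = np.asarray(mask, dtype=bool)

    # Thickness: average vertical extent at the fovea and +/- 2000 microns.
    offsets_px = [int(round(d / microns_per_px_x)) for d in (-2000, 0, 2000)]
    thicknesses = [mask[:, fovea_col + off].sum() * microns_per_px_y
                   for off in offsets_px]
    thickness_um = float(np.mean(thicknesses))

    # Area: pixels within a 3000-micron half-width region around the fovea.
    half_w = int(round(3000 / microns_per_px_x))
    roi = mask[:, fovea_col - half_w : fovea_col + half_w + 1]
    area_mm2 = roi.sum() * (microns_per_px_x * microns_per_px_y) * 1e-6
    return thickness_um, float(area_mm2)

# Illustrative call with a synthetic 768x768 mask; pixel spacings are placeholders
# (axial and lateral scales differ on real SPECTRALIS scans).
mask = np.zeros((768, 768))
mask[400:430, :] = 1   # a flat band 30 pixels thick
print(choroid_metrics(mask, fovea_col=384,
                      microns_per_px_x=8700 / 768, microns_per_px_y=8700 / 768))
```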
Especially noteworthy is that the MAE of repeated GPET is only 21% lower for thickness and 30% lower for area than that of DeepGPET; thus DeepGPET comes quite close to optimal performance, i.e. the best-case repeatability where the same experienced analyst produced both sets of annotations. Furthermore, the regression fits in both derived measures for DeepGPET are closer to the identity line than for the repeated GPET measurements. For thickness, the linear fit estimated a slope value of 1.043 (95% confidence interval of 0.895 to 1.192) and intercept of -7.308 μm (95% confidence interval of -48.967 μm to 34.350 μm). For area, the linear fit estimated a slope value of 1.01 (95% confidence interval of 0.878 to 1.137) and an intercept of 0.016 mm^2 (95% confidence interval of -0.195 mm^2 to 0.226 mm^2). All confidence intervals contain 1 and 0 for the slope and intercepts, respectively, suggesting no systematic bias or proportional difference between GPET and DeepGPET <cit.>. §.§ Qualitative <ref> shows the results of the adjudication between DeepGPET and GPET itself for the 20 test set OCT images. The upper boundary was rated as “Very good” for both methods in all 20 cases. However, DeepGPET was rated as “Bad” in 2 cases for the lower boundary and in 1 case for smoothness. Otherwise, both methods performed very similarly. <ref> shows some examples. In (a), DeepGPET segments more of the temporal region than GPET does, providing a full-width segmentation which was preferred by the rater. Additionally, both approaches are able to segment a smooth boundary, even in regions with stroma fluid obscuring the lower boundary (red arrow). In (b), the lower boundary for this choroid is very faint and is actually below the majority of the vessels sitting most posterior (red arrow). DeepGPET produced a smooth and concave boundary preferred by the rater, while GPET fell victim to hugging the most posterior vessels in the subfoveal region. In (c), DeepGPET rejected the true boundary in the low contrast region (red arrow) and opted for a more well-defined one, while GPET segmented the more uncertain path. Since GPET permits human intervention, there is more opportunity to fine-tune its parameters to fit what the analyst believes is the true boundary. Here, the rater preferred GPET, while DeepGPET's under-confidence led to under-segmentation, and ultimately to a bad rating. In (d) — the test image with the largest disagreement in thickness and area — the lower boundary is difficult to delineate due to a thick suprachoroidal space (red arrow) and thus a lack of lower boundary definition. Here, the rater gave a bad rating to DeepGPET and preferred GPET, while remarking that GPET actually under-segmented the choroid by intersecting through posterior vessels. § DISCUSSION We developed DeepGPET, a fully-automatic and efficient method for choroid layer segmentation, by distilling GPET, a clinically validated semi-automatic method. DeepGPET achieved excellent agreement with GPET on held-out data in terms of segmentation and derived choroidal metrics, approaching the repeatability of GPET itself. Most importantly, DeepGPET does not require specialist training and can process images fully-automatically in a fraction of the time, enabling analysis of large-scale datasets and potential deployment in clinical practice. While the observed agreement was very high, it was not perfect.
However, even higher agreement with GPET would not necessarily produce a better method, as GPET itself is not perfect. Conceptually, there is debate around the exact location of the choroid-scleral interface (CSI), i.e. the lower choroid boundary in OCT images. The CSI is commonly defined, e.g. by the original authors behind EDI-OCT <cit.>, as the smooth inner boundary between the choroid and sclera, or just below the most posterior vessels but excluding the suprachoroidal space. However, even that definition is still debated and can be hard to discern in practice. Not all choroids are smooth, and there are edge cases like vessels passing from the sclera into the choroid, or stroma fluid obscurations that make the boundary even more ambiguous. For quantitative analysis of choroidal phenotypes, the specific definition of the CSI is secondary to applying the same, consistent definition across and within patients. Here, fully-automatic methods like DeepGPET provide a large benefit by removing the subjectivity present in semi-automatic methods. In GPET, the initial points being selected determine what edge is being traced as the CSI, and thus two analysts with different understandings could produce vastly different segmentations. With DeepGPET, the same image is always segmented in the same way, removing subjectivity. In the present work, we used data from three studies, two OCT devices and included both EDI and non-EDI scans. However, we only used data from subjects that were either healthy or had systemic but not eye disease, to which DeepGPET might not be robust. In future work, we plan to externally validate DeepGPET and include cases of ocular pathologies. A further limitation is that while GPET has been clinically validated, not all segmentations used for training DeepGPET were entirely perfect. Thus, revisiting some of the existing segmentations and manually improving them to a “gold standard” for purposes of training the model could improve DeepGPET. For instance, GPET does not always segment the whole width of the choroid. Interestingly, DeepGPET is already able to do that in some cases (e.g. <ref>(a)), but also emulates the incomplete segmentations by GPET in other cases. A model trained on enhanced “gold standard” segmentations would produce even better segmentations. Finally, we have focused on segmentation as it is the most important and most time-consuming step of choroidal analysis. However, the location of the fovea on OCT images needs to be identified to define the region of interest for derived measurements such as thickness, area and volume. Identifying the fovea is less time-consuming and less ambiguous than choroid segmentation, and so we plan to extend DeepGPET to output the fovea location. This would make DeepGPET a fast and efficient end-to-end framework capable of converting a raw OCT image into a set of clinically meaningful segmentation-derived measurements. Likewise, segmenting the choroidal vessels is a very challenging task even for humans and would be prohibitively time-consuming to do manually, but in the future we aim to explore whether DeepGPET can automatically segment the vasculature within the choroid as well. § ACKNOWLEDGEMENTS This work was supported by the Medical Research Council (grant MR/N013166/1) and the UKRI Centre (grant EP/S02431X/1), as part of the Precision Medicine DTP and Biomedical AI CDT with the University of Edinburgh, respectively.
For the purpose of open access, the authors have applied a creative commons attribution (CC BY) licence to any author accepted manuscript version arising. The authors would also like to thank all participants in the studies used in this paper, as well as all staff at the Edinburgh Imaging Facility who contributed to image acquisition for this study.
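As a practical companion to the training details in §.§ DeepGPET, the following minimal Python sketch shows how the model, optimiser and weight averaging could be wired together. The encoder string, EMA decay, data loader and device are assumptions made for illustration and do not reproduce the released code; augmentations and cropping are omitted for brevity.
import copy
import torch
import segmentation_models_pytorch as smp

def build_model():
    # UNet with an ImageNet-pretrained MobileNetV3 backbone (encoder name assumed).
    return smp.Unet(
        encoder_name="timm-mobilenetv3_large_100",
        encoder_weights="imagenet",
        in_channels=1,   # grayscale OCT B-scans
        classes=1,       # binary choroid mask
    )

def finetune(model, train_loader, epochs=60, ema_start=30, device="cuda"):
    # train_loader is assumed to yield batches of 16 standardised images, (x - 0.1) / 0.2.
    model.to(device)
    optim = torch.optim.AdamW(model.parameters(), lr=1e-3,
                              betas=(0.9, 0.999), weight_decay=1e-2)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    ema_model, ema_decay = None, 0.99  # decay value is an assumption

    for epoch in range(epochs):
        model.train()
        for images, masks in train_loader:
            images, masks = images.to(device), masks.to(device)
            optim.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            optim.step()

            # After epoch 30, maintain an exponential moving average of the weights,
            # which is used as the final model.
            if epoch >= ema_start:
                if ema_model is None:
                    ema_model = copy.deepcopy(model)
                with torch.no_grad():
                    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
                        p_ema.mul_(ema_decay).add_(p, alpha=1.0 - ema_decay)

    return ema_model if ema_model is not None else model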
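Similarly, the segmentation-derived measures used in the evaluation (thickness averaged over the fovea and 2,000 microns either side of it, and area over a 3,000-micron region either side of the fovea) can be approximated from a binary mask as sketched below. The pixel spacings and fovea column are assumed inputs, and the column-wise thickness deliberately ignores the perpendicular-to-boundary correction used in the actual analysis.
import numpy as np

def choroid_measures(mask, fovea_col, um_per_px_x, um_per_px_y):
    """mask: binary (H, W) choroid segmentation, 1 inside the choroid."""
    H, W = mask.shape

    def column_thickness_um(col):
        rows = np.flatnonzero(mask[:, col])
        if rows.size == 0:
            return np.nan
        return (rows.max() - rows.min() + 1) * um_per_px_y

    # Average of three measurements: at the fovea and 2,000 microns either side.
    offset_px = int(round(2000.0 / um_per_px_x))
    cols = [fovea_col - offset_px, fovea_col, fovea_col + offset_px]
    thickness_um = np.nanmean([column_thickness_um(c) for c in cols if 0 <= c < W])

    # Area over the region of interest 3,000 microns either side of the fovea.
    half_width_px = int(round(3000.0 / um_per_px_x))
    lo, hi = max(0, fovea_col - half_width_px), min(W, fovea_col + half_width_px + 1)
    area_mm2 = mask[:, lo:hi].sum() * (um_per_px_x * um_per_px_y) / 1e6

    return thickness_um, area_mm2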
http://arxiv.org/abs/2307.03199v1
20230704090418
The key role of Lagrangian multiplier in mimetic gravitational theory in the frame of isotropic compact star
[ "G. G. L. Nashed" ]
gr-qc
[ "gr-qc", "hep-th" ]
nashed@bue.edu.eg Centre for Theoretical Physics, The British University in Egypt, P.O. Box 43, El Sherouk City, Cairo 11837, Egypt Center for Space Research, North-West University, Potchefstroom 2520, South Africa Recently, the mimetic gravitational theory has gained much attention in the frame of cosmology as well as in the domain of astrophysics. In this study, we show that in the frame of mimetic gravitational theory we are not able to derive an isotropic model. As a result, our focus shifts towards combining mimetic gravitational theory with the Lagrangian multiplier. The field equations of a static isotropic gravitational system that controls the geometry and dynamics of star structure are studied in the frame of mimetic theory coupled with a Lagrangian multiplier using a non-linear equation of state. A form of the energy density is assumed, from which all the other unknowns are fixed and a new isotropic model is derived. The physical analysis of this model is studied from different viewpoints, and consistent results compatible with a realistic isotropic star are investigated analytically and graphically. Ultimately, we demonstrate the stability of the model in question by employing the adiabatic index technique. 04.50.Kd, 04.25.Nx, 04.40.Nr The key role of Lagrangian multiplier in mimetic gravitational theory in the frame of isotropic compact star G. G. L. Nashed 0000-0001-5544-1119 August 1, 2023 § INTRODUCTION Recently, converging evidence has suggested that Einstein's theory of general relativity (GR) needs to be modified. The justifications for this observation stem from the challenges in renormalizing general relativity and the uncertain behavior exhibited in high-gravity regions, such as the exterior of black holes and neutron stars. Additionally, the confirmed accelerated expansion of our universe cannot be explained by GR. For GR to explain the phenomenon of accelerated expansion of our universe, the presence of exotic matter fields like dark matter and dark energy must be assumed. To date, no direct experimental support for these has been found. An alternative approach is to amend Einstein's GR in a way that retains its basic successes. Despite these issues, GR remains successful in the solar system. The success of the Event Horizon Telescope in capturing the image of a black hole's shadow in M87 <cit.> and the advancements made in detecting gravitational waves <cit.> have significantly bolstered the position of General Relativity as the preeminent gravitational theory compared to other theories of gravity. However, the aspects that GR has not clarified must also be faced. We advocate the perspective that a modification of the governing field equations within the geometric sector holds the key to addressing these unresolved concerns. Modified gravitational theories have made significant progress in explaining some of the unsolved issues in GR. There are many modified theories of gravity that can overcome the shortcomings of GR. One modification of GR is to add a scalar field. In the frame of a scalar field coupled with the Ricci scalar, static neutron stars have been investigated using two types of cosmological inflationary attractor theories, i.e., the induced inflationary attractors and the quadratic inflationary attractors <cit.>. Among these modifications is the f(R) gravitational theory <cit.>.
In the frame of f(R) a study of the neutron star phenomenology of R^p attractor theories in the Einstein frame have been analyzed <cit.>. In this study we are interested in another modification of GR, i.e., we are focused on the gravitational mimetic theory which has been suggested as a fresh approach to studying the problem of dark matter.<cit.>. This concerns the introduction of a mimetic scalar field denoted as η, which, despite lacking dynamics in its construction, plays a crucial role in imparting dynamism to the longitudinal degree of freedom within the gravitational system. The gravitational system's dynamic longitudinal degree serves as an analogue to pressureless dark matter, mimicking its properties <cit.>. The problem of cosmological singularities <cit.> and the singularity at the core of a black hole <cit.> can be effectively tackled through the altered variant of mimetic gravitational theory. Furthermore, the gravitational theory of mimetic has substantiated that the propagation of gravitational waves at the speed of light aligns in perfect accordance with the discoveries made from the event GW170817 and the corresponding optical observations <cit.>. Furthermore, it has been demonstrated that mimetic theory can explore the coherent rotational patterns observed in spiral galaxies without relying on the existence of dark matter particles <cit.>. Lately, there has been a significant surge of enthusiasm surrounding the cosmological framework due to the emergence of the mimetic theory <cit.> and black holes physics <cit.>. The theory has further extended its scope to include f(R) mimetic gravity, incorporating additional insights and explanations <cit.> and mimetic Gauss-Bonnet gravity <cit.>. Specifically, a comprehensive framework combining early inflation and late-time acceleration within the context of mimetic f(R) gravity was formulated <cit.>. It has been stressed that within the context of mimetic f(R) gravity, the period of inflation can be identified <cit.>. Nowadays, the mimetic theory is one of the most compelling theories of gravity, which without introducing any additional field of matter, represents the dark side of the universe which is represented as a geometric effect. The observations ensure that approximately 26% of the energy content of the universe is related to the dark matter sector, while approximately 69% construct the dark energy <cit.>. Many facts ensure the presence of dark matter and dark energy <cit.>. Dark energy, which has gained significance in recent times, is believed to be a smooth element characterized by negative pressure. It possesses an anti-gravity characteristic and is actively propelling the universe, playing a key role in the accelerated expansion of the cosmos <cit.>. Dark matter performs two crucial functions in the development of the universe: Firstly, it provides the necessary gravitational force for the rotation of spiral galaxies and galaxy clusters. Secondly, it plays a significant role as an essential component in the amplification of disturbances and the formation of structures during the early stages of the universe. As a result, dark matter begins to condense into an intricate system of dark matter halos, whereas regular matter undergoes collapse due to photon radiation, eventually settling into the well-formed potential of dark matter. In the absence of dark matter, the process of galaxy formation in the universe would be significantly more extensive than what has been observed. 
The structure of the current investigation is outlined as follows: In Sec. <ref>, we introduce the fundamental principles of the mimetic theory combined with the Lagrangian multiplier. In Sec. <ref>, we list the necessary conditions that must be obeyed by any realistic isotropic model. Also, in Sec. <ref> we show, analytically and graphically, that the model under consideration satisfies all the necessary conditions that must be possessed by any realistic isotropic star. In Section <ref> we study the stability of the model presented in this study using the adiabatic index. The final section is devoted to discussing the main results of the present study. § ISOTROPIC SOLUTION IN THE MIMETIC THEORY COMBINED WITH THE LAGRANGE MULTIPLIER What is called “mimetic dark matter” was introduced to the scientific community in <cit.>. Despite this, mimetic theories had already been discussed in <cit.>. According to the mimetic theory, the physical metric g_αβ is linked to an auxiliary metric g̅_αβ and to the mimetic scalar field η by the conformal transformation: g_αβ=- (g̅^μν∂_μη∂_νη) g̅_αβ , so that when the auxiliary metric undergoes a conformal transformation, g̅^αβ→Ωg̅^αβ, the physical metric g^αβ remains invariant. In the present study, we consider mimetic-like gravity coupled with the Lagrange multiplier. The action of the gravity model resembling the mimetic theory, in conjunction with the Lagrange multiplier λ and the function ω, has the form[It was shown in <cit.> that the function ω is necessary in the construction of the mimetic theory which allows us to achieve the geometric characteristics of a black hole, including the presence of one or more horizons.] <cit.>: S=∫d^4x√(-g){ R +λ(g^μνω∂_μη∂_νη +1)}+L_matt , where L_matt is the Lagrangian of the matter field and η is the mimetic scalar field. The field equations can be obtained by taking the variation of the action (<ref>) with respect to the metric tensor g_μν, resulting in the following expressions: 0=R_μν-1/2g_μνR +1/2g_μν{λ( g^ρσω∂_ρη∂_ση+1) }-λ∂_μη∂_νη +1/2T_μν , where T_μν is the energy-momentum tensor corresponding to the matter field. Additionally, differentiating the action (<ref>) with respect to the mimetic field η yields: 2∇^μ (λω ∂_μη)=0 . When the action (<ref>) is varied with respect to the Lagrange multiplier λ, the following outcome is obtained: g^ρσω∂_ρη∂_ση=-1 . We now apply the field equations (<ref>) and (<ref>), together with the constraint of Eq. (<ref>), to the following spherically symmetric spacetime ds^2=f(r)dt^2-dr^2/f_1(r)-r^2(dθ^2+sin^2θ dϕ^2) , to derive an interior solution. The functions f(r) and f_1(r) are unknown functions that will be determined by solving the set of field equations. Furthermore, we assume that η depends solely on r. Applying Eqs. (<ref>) and (<ref>) to the spacetime (<ref>) yields the following set of differential equations. The (t,t)-component of the field equation (<ref>) is: ρ(r)= (1 - f_1 - rf'_1)/r^2 , and the (r,r)-component of the field equation (<ref>) is:
p(r)= ( f_1 f'r-f + f f_1-λω(r)η'^2 ff_1 r^2 )/(r^2f) , the (θ,θ)- and (ϕ,ϕ)-components of the field equation (<ref>) take the form: p(r)= ( 2 f_1 f''f r- f'^2f_1 r+f ( 2 f_1 + f'_1r ) f' +2 f'_1 f^2 )/(4f^2r) , and the field equation (<ref>) takes the form: 0=2 λ'ω f r+ [ ω' f r+ω( f' r+4 f ) ] λ , where f ≡ f(r), f_1 ≡ f_1(r), ω ≡ω(r), λ ≡λ(r), f'=df/dr, f'_1=df_1/dr, η'=dη/dr, ω'=dω/dr, and λ'=dλ/dr. Furthermore, we assume that the energy-momentum tensor of the isotropic fluid can be represented in the following manner: T_μν = (ρ+p)u_μ u_ν+p g_μν , where ρ represents the energy density, p denotes the pressure, and u^μ is a timelike vector defined as u^μ = [1, 0, 0, 0]. In this investigation, the matter content is characterized by the energy density ρ and the pressure p.[In the entirety of this research, we will utilize geometrized units where the constants G and c are set to 1.] We assume that the fluid is a perfect fluid obeying an equation of state (EoS) p=p(ρ). The conservation law of matter gives: 0 = ∇^μ T_μ r =2fdp/dr+f' ( ρ + p ) . In this context, the energy density and pressure are assumed to vary with the radial coordinate. If an EoS of the form ρ=ρ(p) is given, then Eq. (<ref>) yields: 1/2ln f = - ∫^r (dp/dr)/(ρ + p) dr = - ∫^p(r) dp/(ρ(p) + p) . In the interior of the star, Eq. (<ref>) can be used, but in the exterior, Eq. (<ref>) cannot be used. Nevertheless, it is possible to assume that both f(r) and its derivative f'(r) are continuous at the surface of the star. Considering the count of unknown functions and independent equations, we find a total of six unknown functions within the compact star. To address this, we will employ the constraint given by Eq. (<ref>), which states that η=1/√(-ω f_1). These unknown functions include two metric potentials, namely f(r) and f_1(r) in (<ref>), as well as the Lagrangian multiplier, the function ω, the energy density ρ, and the pressure p present in the action (<ref>). However, we have five independent equations, namely the three components, Eqs. (<ref>), (<ref>), and (<ref>), of the field equation (<ref>), the equation of state ρ=ρ(p), and the conservation law (<ref>). As we mentioned above, the scalar field equation (<ref>) can be obtained from the field equations (<ref>) of the mimetic theory and the conservation law ∇_μ T^μν=0 of the matter, and therefore the scalar field equation (<ref>) is not independent. Hence, we have 6-5=1 undetermined function remaining from the equations. To fix this remaining freedom, we choose a specific configuration of the energy density ρ=ρ(r) within the compact star. Moreover, outside the star, we have ρ=p=0, and the unknown functions are f(r), f_1(r) and η (or ω(η)). We have three independent equations, namely the three components, Eqs. (<ref>), (<ref>), and (<ref>). Thus there is again one function left undetermined by the equations. If we consider a compact star such as a neutron star, one usually considers an EoS of one of the following forms: * Energy-polytrope p = k ρ^(1 + 1/s) , where k and s are constants. It is well known that for a neutron star, s lies in the interval s∈[0.5 , 1]. * Mass-polytrope ρ = ρ_m + s_1 p , p = m_m ρ_m^(1+1/s_m) , with ρ_m being the rest mass energy density and m_m, s_1, and s_m constants. Now, let us turn our attention to the study of the energy-polytrope case.
Then EoS (<ref>) can be rewritten as: ρ = k̃ p^(1 + 1/s̃) , k̃≡ k^-1/1+1/s , s̃≡1/1/1+1/s - 1 = - 1 - s . Eq. (<ref>) can take the form: 1/2ln f = - ∫^p(r)dp/k̃ p^1 + 1/s̃ + p = c_1/2 + s̃ln(1+k̃^-1p^-1/s̃) = c_1/2 - (1+s) ln(1+ k ρ^1/s) , where c_1 is a constant of integration. Using the same method of polytrope we get for mass-polytrope (<ref>) the function f as: 1/2ln f = c̃/2 + ln( 1 - k_m ρ_m^1/s_m) , where c̃ is a constant of integration. To provide an illustrative example, taking into account one of the mentioned equations of state, we can make an assumption about the profile of ρ=ρ(r) as follows. ρ={[ ρ_0 ( 1 - r/R) r<R; 0 r≥ R ]. . In this scenario, ρ_0 represents a fixed value expressing the energy concentration at the core of the condensed celestial object, whereas R symbolizes an additional fixed value denoting the size of the compact star's outer boundary. As clear from Eq. (<ref>), the energy density ρ vanishes at the surface r=R. By using the energy-polytrope EoS (<ref>) or the mass-polytrope EoS (<ref>), we find that the pressure p also vanishes at the surface. We have introduced the mass parameter M as a constant value, which is associated with the mass of the compact star. This parameter is defined specifically for the polytropic equation of state (EoS) as follows: M =4π∫_0^R y^2 ρ(y) dy = πρ_0 R^3/3 . Then Eq. (<ref>) gives, f = ^c_1/( 1 + k ρ_0 ( 1 - r/R) )^4 . Using Eq. (<ref>) in (<ref>) we get: f_1 =1-8π r^2/3+2r^3/R . The Lagrangian multiplier of the above model has the form λ(r)= c_2 ( R+kR-kr ) ^2/r^5/2 . To finalize the determination of the remaining unknowns we assume, for simplicity, the form of the function ω=c_3 r and the mimetic scalar field becomes: η(r)= √(c_3 r(8π r^2/3-2r^3/R-1)) . To finalize this section we stress on the fact that if we follow the same procedure and put λ(r)=0 in Eqs. (<ref>), (<ref>), and (<ref>) we get a system that we cannot derive from it an isotropic model. § NECESSARY CONDITIONS FOR A REAL PHYSICAL STAR Any physical reliable isotropic star model must obey the below conditions in the interior configurations: ∙ It is essential for the metric potentials (g_tt and g_rr) to provide precise explanations for the density and momentum components by establishing unambiguous definitions and displaying consistent patterns within the core of the star and its interior. ∙ Within the interior of the star, it is required that the energy density ρ maintains a non-negative value, that is, ρ≥ 0. Additionally, the energy density has a finite positive value at the central region of the star and shows a decreasing pattern as it extends towards the surface, characterized by the condition dρ/dr≤ 0. ∙ Inside the fluid configuration, it is necessary for the pressure p to be positive or zero (p ≥ 0). Furthermore, within the interior of the star, it is expected that the pressure decreases with respect to the radial coordinate, as indicated by the condition dp/dr < 0. On the outermost boundary of the star, specifically at the surface where the distance from the center is denoted as r = R, the pressure p should be precisely zero. This implies that there is no pressure exerted at the star's outer edge. ∙ The energy conditions of an isotropic fluid sphere are given by: (i) The null energy condition (NEC) implies that the energy density ρ must be greater than zero. (ii) According to the weak energy condition (WEC), the sum of the pressure p and the energy density ρ must be greater than zero, i.e., p + ρ > 0. 
(iii) In accordance with the strong energy condition (SEC), the sum of the energy density ρ and three times the pressure p must be greater than zero, i.e., ρ + 3p > 0. ∙Furthermore, to ensure a realistic model, the causality condition must be satisfied within the interior of the star. This condition imposes a restriction on the speed of sound, requiring it to be less than 1. In this context, assuming the speed of light c is equal to 1, the condition can be expressed as dp/dr > 0 and 1 > dp/dr. ∙ Finally, the adiabatic index must has a value more than 4/3. It is our purpose to study the above conditions on the isotropic model and see if it is real model or not. § THE PHYSICAL CHARACTERISTICS OF THE MODEL To determine whether the model described by Eqs. (<ref>), (<ref>), and (<ref>) corresponds to a realistic stellar structure, we will examine the following aspects: §.§ Non singular model i- The components of the metric potentials g_tt and g_rr fulfill the following conditions:[We will rewrite all the physical quantities like metric potentials, density and pressure in terms of the dimensionless quantity x where x=r/R.], f(x→ 0)=e^c/(1+kρ_c)^4 and f_1(x→ 0)=1. As a consequence of this requirement, it is necessary for the metric potentials g_tt and g_rr to possess finite values at the central point of the stellar configuration. Furthermore, their derivatives also possess finite values at the center of the star, specifically: a'(r→0)=0 and a'_1(r→0)=4c_1k/R(1+k)^5. The mentioned limitations guarantee that the measurement remains consistent at the core and demonstrates a positive characteristic within the inner region of the star. ii-At center of star, the density (<ref>) and pressure (<ref>) take the following form: ρ(x→ 0)=ρ_0 p(x→ 0)=kρ_0^2. By examining Eq. (<ref>), when ρ_0>0 and k>0, it becomes evident that the density and pressure in the central region of the star remain consistently positive. Apart from that, if ρ_0 or k is non-positive, the density and pressure can become negative. These observations align with the information depicted in Figures <ref> fig:pot2, <ref> fig:density, and <ref> fig:pressure1, providing further consistency to the discussed aspects. iii-The density and pressure gradients of our model are provided in the following manner: ρ'=-ρ_0, p'=-2kρ_0^2 (1-x) . Here ρ'=dρ/dx and p'=dp/dx. Equation (<ref>) illustrates that the components of the energy-momentum derivatives exhibit negative values. iv-The calculation of the speed of sound, using relativistic units where the values of c and G are equal to 1, is achieved by <cit.>: v_r^2=dp/dρ=2 k ρ_0 (1-x) . At this point, we are prepared to graphically represent the aforementioned conditions in order to observe their behaviors. In Figure <ref> fig:pot2, we illustrate the characteristics of the metric potentials. As depicted in Figure <ref> fig:pot2, the metric potentials take on the values a_1(x→ 0)=0.4 and a(x→ 0)=1 as x approaches 0. This implies that within the central region of the star, both metric potentials exhibit finite positive values. We proceed to create graphs illustrating the density and pressure, as outlined in Equation (<ref>), represented in Figure <ref> fig:density and <ref> fig:pressure1. As depicted in Figure <ref> fig:density and <ref> fig:pressure1, the components of the energy-momentum exhibit positive values, which are consistent with predictions for a reasonable stellar arrangement. 
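The same conditions can also be confirmed with a quick numerical check of the closed-form profiles ρ(x), p(x) and v_r^2(x) used above; the parameter values in the sketch below are illustrative assumptions (chosen only so that k ρ_0 < 0.5) and are not fitted to a particular star.
import numpy as np

k, rho0 = 0.3, 1.0                  # assumed EoS parameter and central density, with k*rho0 < 0.5
x = np.linspace(0.0, 1.0, 201)      # dimensionless radius x = r/R

rho = rho0 * (1.0 - x)              # assumed density profile
p = k * rho**2                      # energy polytrope with s = 1, so p(0) = k*rho0^2
v2 = 2.0 * k * rho0 * (1.0 - x)     # speed of sound squared, dp/drho

checks = {
    "rho >= 0 everywhere": bool(np.all(rho >= 0)),
    "p >= 0 everywhere": bool(np.all(p >= 0)),
    "rho non-increasing": bool(np.all(np.diff(rho) <= 0)),
    "p non-increasing": bool(np.all(np.diff(p) <= 0)),
    "NEC/WEC: rho + p > 0 in the interior": bool(np.all((rho + p)[:-1] > 0)),
    "SEC: rho + 3p > 0 in the interior": bool(np.all((rho + 3.0 * p)[:-1] > 0)),
    "causality: v_r^2 < 1": bool(np.all(v2 < 1.0)),
}
for name, ok in checks.items():
    print(name, ":", ok)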
Moreover, as depicted in Figure <ref> fig:density and fig:pressure1, the components of the energy-momentum tensor exhibit elevated values at the core and gradually decrease as they approach the periphery. These observed patterns are characteristic of a plausible star. Figure <ref> illustrates the presence of adverse values in the derivatives of the components of the energy-momentum tensor, indicating a uniform decrease in both density and pressure throughout the entirety of the star's structure. Figure <ref> presents visual depictions of the speed of sound, the relationship between mass and radius, and the compactness parameter. As depicted in Fig. <ref>fig:dpr, the speed of sound is found to be less than one, which verifies that the causality condition is not violated within the interior of the stellar configuration when the parameter of the equation of state (EoS) is k<0.5. Furthermore, Fig. <ref>fig:comp illustrates that the compactness of our model is restricted within the interval of 0<C<0.003, where C is defined as the ratio of M to xR in the stellar arrangement. The energy conditions are depicted in Figure <ref>, showcasing the characteristics of each condition. More specifically, Fig. <ref> fig:NEC, fig:WEC, and fig:SEC display the presence of positive values for the NEC (Null Energy Condition), WEC (Weak Energy Condition), and SEC (Strong Energy Condition) respectively. This verification provides assurance that all energy conditions are met across the entire stellar configuration, aligning with the criteria for a physically feasible stellar model. Figure <ref> represents the plot of the Equation of State (EoS). In particular, Figure <ref> fig:EoS indicates that the EoS exhibits a linear behavior. § THE MODEL'S STABILITY At this point, we are prepared to examine the stability concern of our model by conducting tests involving the the index of adiabatic and the static case. §.§ An adiabatic index To investigate the stable balance of a spacetime that possesses spherical symmetry, an analysis of the adiabatic index can be conducted. The adiabatic index plays a vital role in evaluating the stability requirement, serving as an essential instrument for this purpose. Specifically, the adiabatic perturbation, denoted as Γ, is defined as follows <cit.>: Γ=(ρ+p(x)/p(x))(dp(x)/dρ(x)) . If the adiabatic index Γ is greater than 4/3, a Newtonian isotropic sphere will have a stable equilibrium <cit.>. Using Eq. (<ref>), we get Γ=2+2k ρ_0(1-x). Figure <ref> displays the adiabatic index Γ. The graph clearly indicates that the value of Γ remains consistently above 4/3 throughout the star's interior. Therefore, we can infer that the stability requirement is met. §.§ Stability in the stationary state Another approach to validate the stability of model (<ref>) involves investigating the static state proposed by Harrison, Zeldovich, and Novikov <cit.>. In a previous study by Harrison, Zeldovich, and Novikov, it was established that a stable configuration of a star necessitates a positive and increasing derivative of mass with respect to the central density ρ(x → 0), denoted as ∂ M/∂ρ_0 > 0. By applying this condition, we can determine the specific form of the central density as follows: ρ(r→ 0)=ρ_0 . Subsequently, utilizing Eq. (<ref>), we can derive the mass corresponding to the central density, denoted as: M(ρ_0)4π∫_0^R y^2 ρ(y) dy = πρ_0 R^3/3 . The behavior of the mass derivative with respect to the central density can be described by the following pattern: ∂ M(ρ_0)/∂ρ_0=π R^3/3 . 
Equations (<ref>) and (<ref>) guarantee the verification of stability condition of our model. § DISCUSSION AND CONCLUSIONS The primary objective of this investigation is to analyze a compact star's static configuration, which possesses both spherical symmetry and is within the framework of mimetic gravity theory. This theory incorporates two main components: a scalar field and a Lagrangian multiplier. In our formulation, we demonstrated the ability to construct a model that accurately replicates the profile of a given spherically symmetric spacetime, regardless of the equation of state (EoS) for matter and energy density. The shape of the density function ρ(x) is of utmost importance in determining the radius R and mass M of the compact star. By manipulating the Lagrangian multiplier λ(η), it becomes feasible to establish a flexible connection between the radius R and the compact star. This creates a situation where the Lagrangian multiplier λ(x) and the equation of state (EoS) describing the model exhibit a degenerate relationship. As a result, it is clear that relying solely on the mass-radius relation is inadequate for fully constraining the model. To illustrate this further, we take a closer look at the polytrope equation of state (EoS) given by (<ref>). By selecting a specific form for the density in Eq.  (<ref>), we construct a practical isotropic model. We then proceed to investigate the physical characteristics of this model. Through rigorous analysis using different analytical techniques and validations, we carefully scrutinize the obtained analytic solution. This comprehensive examination enables us to observe and analyze the physical behavior manifested by our solution. It is important to highlight that the preceding discussion confirms the satisfaction of all the physical conditions by the spherically symmetric interior spacetime configuration considered in this study within the framework of mimetic gravitational theory coupled with a Lagrangian multiplier. However, it should be noted that an alternative form of mimetic gravitational theory that does not involve the coupling with a Lagrangian multiplier may not support the existence of the isotropic model, as evidenced by equations (<ref>), (<ref>), and (<ref>). Moreover, the field equations governing the equilibrium of rapidly rotating neutron stars in scalar-tensor theories of gravity, as well as representative numerical solutions are discussed in <cit.>. New models of slowly rotating, perfect-fluid neutron stars are constructed by extending the classical Hartle-Thorne formalism to generic scalar-tensor theories of gravity <cit.>. An investigation of self-consistently slowly rotating neutron and strange stars in R-squared gravity is investigated in <cit.>. A study of static neutron stars in the framework of a class of non-minimally coupled inflationary potentials have been presented in <cit.>. Are all the previous studies presented in <cit.> can be discussed in the framework of the present study? This will be done elsewhere. 97 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Akiyama et al.(2019)Akiyama et al.]EventHorizonTelescope:2019dse author author K. Akiyama et al. (collaboration Event Horizon Telescope), 10.3847/2041-8213/ab0ec7 journal journal Astrophys. J. Lett. 
volume 875, pages L1 (year 2019), http://arxiv.org/abs/1906.11238 arXiv:1906.11238 [astro-ph.GA] NoStop [Abbott et al.(2016)Abbott et al.]LIGOScientific:2016aoc author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), 10.1103/PhysRevLett.116.061102 journal journal Phys. Rev. Lett. volume 116, pages 061102 (year 2016), http://arxiv.org/abs/1602.03837 arXiv:1602.03837 [gr-qc] NoStop [Oikonomou(2023a)]Oikonomou:2023lnh author author V. K. Oikonomou, 10.1088/1361-6382/acc2a7 journal journal Class. Quant. Grav. volume 40, pages 085005 (year 2023a), http://arxiv.org/abs/2303.06270 arXiv:2303.06270 [gr-qc] NoStop [Nojiri et al.(2019)Nojiri, Odintsov, and Oikonomou]Nojiri:2019dqc author author S. Nojiri, author S. D. Odintsov, and author V. K. Oikonomou, 10.1016/j.nuclphysb.2019.02.008 journal journal Nucl. Phys. B volume 941, pages 11 (year 2019), http://arxiv.org/abs/1902.03669 arXiv:1902.03669 [gr-qc] NoStop [Odintsov and Oikonomou(2019)]Odintsov:2019ofr author author S. D. Odintsov and author V. K. Oikonomou, 10.1088/1361-6382/ab0505 journal journal Class. Quant. Grav. volume 36, pages 065008 (year 2019), http://arxiv.org/abs/1902.01422 arXiv:1902.01422 [gr-qc] NoStop [Oikonomou(2023b)]Oikonomou:2023dgu author author V. K. Oikonomou, 10.1093/mnras/stad326 journal journal Mon. Not. Roy. Astron. Soc. volume 520, pages 2934 (year 2023b), http://arxiv.org/abs/2301.12136 arXiv:2301.12136 [gr-qc] NoStop [Chamseddine and Mukhanov(2013)]Chamseddine:2013kea author author A. H. Chamseddine and author V. Mukhanov, 10.1007/JHEP11(2013)135 journal journal JHEP volume 11, pages 135 (year 2013), http://arxiv.org/abs/1308.5410 arXiv:1308.5410 [astro-ph.CO] NoStop [Chamseddine and Mukhanov(2017a)]Chamseddine:2016uef author author A. H. Chamseddine and author V. Mukhanov, 10.1088/1475-7516/2017/03/009 journal journal JCAP volume 1703, pages 009 (year 2017a), http://arxiv.org/abs/1612.05860 arXiv:1612.05860 [gr-qc] NoStop [Chamseddine and Mukhanov(2017b)]Chamseddine:2016ktu author author A. H. Chamseddine and author V. Mukhanov, 10.1140/epjc/s10052-017-4759-z journal journal Eur. Phys. J. , pages 183 (year 2017b), http://arxiv.org/abs/1612.05861 arXiv:1612.05861 [gr-qc] NoStop [Casalino et al.(2018)Casalino, Rinaldi, Sebastiani, and Vagnozzi]Casalino:2018tcd author author A. Casalino, author M. Rinaldi, author L. Sebastiani, and author S. Vagnozzi, 10.1016/j.dark.2018.10.001 journal journal Phys. Dark Univ. volume 22, pages 108 (year 2018), http://arxiv.org/abs/1803.02620 arXiv:1803.02620 [gr-qc] NoStop [Casalino et al.(2019)Casalino, Rinaldi, Sebastiani, and Vagnozzi]Casalino:2018wnc author author A. Casalino, author M. Rinaldi, author L. Sebastiani, and author S. Vagnozzi, 10.1088/1361-6382/aaf1fd journal journal Class. Quant. Grav. volume 36, pages 017001 (year 2019), http://arxiv.org/abs/1811.06830 arXiv:1811.06830 [gr-qc] NoStop [Sherafati et al.(2021)Sherafati, Heydari, and Karami]Sherafati:2021iir author author K. Sherafati, author S. Heydari, and author K. Karami, @noop (year 2021), http://arxiv.org/abs/2109.11810 arXiv:2109.11810 [gr-qc] NoStop [Vagnozzi(2017)]Vagnozzi:2017ilo author author S. Vagnozzi, 10.1088/1361-6382/aa838b journal journal Class. Quant. Grav. volume 34, pages 185006 (year 2017), http://arxiv.org/abs/1708.00603 arXiv:1708.00603 [gr-qc] NoStop [Sheykhi and Grunau(2021)]Sheykhi:2019gvk author author A. Sheykhi and author S. Grunau, 10.1142/S0217751X21501864 journal journal Int. J. Mod. Phys. 
A volume 36, pages 2150186 (year 2021), http://arxiv.org/abs/1911.13072 arXiv:1911.13072 [gr-qc] NoStop [Chamseddine et al.(2014)Chamseddine, Mukhanov, and Vikman]Chamseddine:2014vna author author A. H. Chamseddine, author V. Mukhanov, and author A. Vikman, 10.1088/1475-7516/2014/06/017 journal journal JCAP volume 06, pages 017 (year 2014), http://arxiv.org/abs/1403.3961 arXiv:1403.3961 [astro-ph.CO] NoStop [Baffou et al.(2017)Baffou, Houndjo, Hamani-Daouda, and Alvarenga]Baffou:2017pao author author E. H. Baffou, author M. J. S. Houndjo, author M. Hamani-Daouda, and author F. G. Alvarenga, 10.1140/epjc/s10052-017-5291-x journal journal Eur. Phys. J. , pages 708 (year 2017), http://arxiv.org/abs/1706.08842 arXiv:1706.08842 [gr-qc] NoStop [Dutta et al.(2018)Dutta, Khyllep, Saridakis, Tamanini, and Vagnozzi]Dutta:2017fjw author author J. Dutta, author W. Khyllep, author E. N. Saridakis, author N. Tamanini, and author S. Vagnozzi, 10.1088/1475-7516/2018/02/041 journal journal JCAP volume 1802, pages 041 (year 2018), http://arxiv.org/abs/1711.07290 arXiv:1711.07290 [gr-qc] NoStop [Sheykhi(2018)]Sheykhi:2018ffj author author A. Sheykhi, 10.1142/S0218271819500573 journal journal Int. J. Mod. Phys. D volume 28, pages 1950057 (year 2018)NoStop [Abbassi et al.(2018)Abbassi, Jozani, and Sepangi]Abbassi:2018ywq author author M. H. Abbassi, author A. Jozani, and author H. R. Sepangi, 10.1103/PhysRevD.97.123510 journal journal Phys. Rev. D volume 97, pages 123510 (year 2018), http://arxiv.org/abs/1803.00209 arXiv:1803.00209 [gr-qc] NoStop [Matsumoto(2016)]Matsumoto:2016rsa author author J. Matsumoto, @noop (year 2016), http://arxiv.org/abs/1610.07847 arXiv:1610.07847 [astro-ph.CO] NoStop [Sebastiani et al.(2017)Sebastiani, Vagnozzi, and Myrzakulov]Sebastiani:2016ras author author L. Sebastiani, author S. Vagnozzi, and author R. Myrzakulov, 10.1155/2017/3156915 journal journal Adv. High Energy Phys. volume 2017, pages 3156915 (year 2017), http://arxiv.org/abs/1612.08661 arXiv:1612.08661 [gr-qc] NoStop [Sadeghnezhad and Nozari(2017)]Sadeghnezhad:2017hmr author author N. Sadeghnezhad and author K. Nozari, 10.1016/j.physletb.2017.03.039 journal journal Phys. Lett. , pages 134 (year 2017), http://arxiv.org/abs/1703.06269 arXiv:1703.06269 [gr-qc] NoStop [Gorji et al.(2019)Gorji, Mukohyama, and Firouzjahi]Gorji:2019ttx author author M. A. Gorji, author S. Mukohyama, and author H. Firouzjahi, 10.1088/1475-7516/2019/05/019 journal journal JCAP volume 05, pages 019 (year 2019), http://arxiv.org/abs/1903.04845 arXiv:1903.04845 [gr-qc] NoStop [Gorji et al.(2018a)Gorji, Mukohyama, Firouzjahi, and Hosseini Mansoori]Gorji:2018okn author author M. A. Gorji, author S. Mukohyama, author H. Firouzjahi, and author S. A. Hosseini Mansoori, 10.1088/1475-7516/2018/08/047 journal journal JCAP volume 08, pages 047 (year 2018a), http://arxiv.org/abs/1807.06335 arXiv:1807.06335 [hep-th] NoStop [Bouhmadi-Lopez et al.(2017)Bouhmadi-Lopez, Chen, and Chen]Bouhmadi-Lopez:2017lbx author author M. Bouhmadi-Lopez, author C.-Y. Chen, and author P. Chen, 10.1088/1475-7516/2017/11/053 journal journal JCAP volume 1711, pages 053 (year 2017), http://arxiv.org/abs/1709.09192 arXiv:1709.09192 [gr-qc] NoStop [Gorji et al.(2018b)Gorji, Hosseini Mansoori, and Firouzjahi]Gorji:2017cai author author M. A. Gorji, author S. A. Hosseini Mansoori, and author H. 
Firouzjahi, 10.1088/1475-7516/2018/01/020 journal journal JCAP volume 1801, pages 020 (year 2018b), http://arxiv.org/abs/1709.09988 arXiv:1709.09988 [astro-ph.CO] NoStop [Chamseddine et al.(2019a)Chamseddine, Mukhanov, and Russ]Chamseddine:2019bcn author author A. H. Chamseddine, author V. Mukhanov, and author T. B. Russ, 10.1140/epjc/s10052-019-7075-y journal journal Eur. Phys. J. C volume 79, pages 558 (year 2019a), http://arxiv.org/abs/1905.01343 arXiv:1905.01343 [hep-th] NoStop [Russ(2021)]Russ:2021ede author author T. B. Russ, @noop (year 2021), http://arxiv.org/abs/2103.12442 arXiv:2103.12442 [gr-qc] NoStop [de Cesare et al.(2020)de Cesare, Seahra, and Wilson-Ewing]deCesare:2020swb author author M. de Cesare, author S. S. Seahra, and author E. Wilson-Ewing, 10.1088/1475-7516/2020/07/018 journal journal JCAP volume 07, pages 018 (year 2020), http://arxiv.org/abs/2002.11658 arXiv:2002.11658 [gr-qc] NoStop [Cárdenas et al.(2021)Cárdenas, Cruz, Lepe, and Salgado]Cardenas:2020srs author author V. H. Cárdenas, author M. Cruz, author S. Lepe, and author P. Salgado, 10.1016/j.dark.2021.100775 journal journal Phys. Dark Univ. volume 31, pages 100775 (year 2021), http://arxiv.org/abs/2009.03203 arXiv:2009.03203 [gr-qc] NoStop [Hosseini Mansoori et al.(2021)Hosseini Mansoori, Talebian, and Firouzjahi]HosseiniMansoori:2020mxj author author S. A. Hosseini Mansoori, author A. Talebian, and author H. Firouzjahi, 10.1007/JHEP01(2021)183 journal journal JHEP volume 01, pages 183 (year 2021), http://arxiv.org/abs/2010.13495 arXiv:2010.13495 [gr-qc] NoStop [Arroja et al.(2018)Arroja, Okumura, Bartolo, Karmakar, and Matarrese]Arroja:2017msd author author F. Arroja, author T. Okumura, author N. Bartolo, author P. Karmakar, and author S. Matarrese, 10.1088/1475-7516/2018/05/050 journal journal JCAP volume 05, pages 050 (year 2018), http://arxiv.org/abs/1708.01850 arXiv:1708.01850 [astro-ph.CO] NoStop [Deruelle and Rua(2014)]Deruelle:2014zza author author N. Deruelle and author J. Rua, 10.1088/1475-7516/2014/09/002 journal journal JCAP volume 1409, pages 002 (year 2014), http://arxiv.org/abs/1407.0825 arXiv:1407.0825 [gr-qc] NoStop [Myrzakulov and Sebastiani(2015)]Myrzakulov:2015sea author author R. Myrzakulov and author L. Sebastiani, 10.1007/s10714-015-1930-4 journal journal Gen. Rel. Grav. volume 47, pages 89 (year 2015), http://arxiv.org/abs/1503.04293 arXiv:1503.04293 [gr-qc] NoStop [Myrzakulov et al.(2016)Myrzakulov, Sebastiani, Vagnozzi, and Zerbini]Myrzakulov:2015kda author author R. Myrzakulov, author L. Sebastiani, author S. Vagnozzi, and author S. Zerbini, 10.1088/0264-9381/33/12/125005 journal journal Class. Quant. Grav. volume 33, pages 125005 (year 2016), http://arxiv.org/abs/1510.02284 arXiv:1510.02284 [gr-qc] NoStop [Ganz et al.(2019)Ganz, Karmakar, Matarrese, and Sorokin]Ganz:2018mqi author author A. Ganz, author P. Karmakar, author S. Matarrese, and author D. Sorokin, 10.1103/PhysRevD.99.064009 journal journal Phys. Rev. D volume 99, pages 064009 (year 2019), http://arxiv.org/abs/1812.02667 arXiv:1812.02667 [gr-qc] NoStop [Chen et al.(2018)Chen, Bouhmadi-Lopez, and Chen]Chen:2017ify author author C.-Y. Chen, author M. Bouhmadi-Lopez, and author P. Chen, 10.1140/epjc/s10052-018-5556-z journal journal Eur. Phys. J. , pages 59 (year 2018), http://arxiv.org/abs/1710.10638 arXiv:1710.10638 [gr-qc] NoStop [Ben Achour et al.(2018)Ben Achour, Lamy, Liu, and Noui]BenAchour:2017ivq author author J. Ben Achour, author F. Lamy, author H. Liu, and author K. 
Noui, 10.1088/1475-7516/2018/05/072 journal journal JCAP volume 05, pages 072 (year 2018), http://arxiv.org/abs/1712.03876 arXiv:1712.03876 [gr-qc] NoStop [Brahma et al.(2018)Brahma, Golovnev, and Yeom]Brahma:2018dwx author author S. Brahma, author A. Golovnev, and author D.-H. Yeom, 10.1016/j.physletb.2018.05.039 journal journal Phys. Lett. B volume 782, pages 280 (year 2018), http://arxiv.org/abs/1803.03955 arXiv:1803.03955 [gr-qc] NoStop [Zheng et al.(2017)Zheng, Shen, Mou, and Li]Zheng:2017qfs author author Y. Zheng, author L. Shen, author Y. Mou, and author M. Li, 10.1088/1475-7516/2017/08/040 journal journal JCAP volume 1708, pages 040 (year 2017), http://arxiv.org/abs/1704.06834 arXiv:1704.06834 [gr-qc] NoStop [Shen et al.(2019)Shen, Zheng, and Li]Shen:2019nyp author author L. Shen, author Y. Zheng, and author M. Li, 10.1088/1475-7516/2019/12/026 journal journal JCAP volume 12, pages 026 (year 2019), http://arxiv.org/abs/1909.01248 arXiv:1909.01248 [gr-qc] NoStop [Nashed et al.(2019)Nashed, El Hanafy, and Bamba]Nashed:2018qag author author G. G. L. Nashed, author W. El Hanafy, and author K. Bamba, 10.1088/1475-7516/2019/01/058 journal journal JCAP volume 01, pages 058 (year 2019), http://arxiv.org/abs/1809.02289 arXiv:1809.02289 [gr-qc] NoStop [Chamseddine et al.(2019b)Chamseddine, Mukhanov, and Russ]Chamseddine:2019pux author author A. H. Chamseddine, author V. Mukhanov, and author T. B. Russ, 10.1007/JHEP10(2019)104 journal journal JHEP volume 10, pages 104 (year 2019b), http://arxiv.org/abs/1908.03498 arXiv:1908.03498 [hep-th] NoStop [Gorji et al.(2020)Gorji, Allahyari, Khodadi, and Firouzjahi]Gorji:2020ten author author M. A. Gorji, author A. Allahyari, author M. Khodadi, and author H. Firouzjahi, 10.1103/PhysRevD.101.124060 journal journal Phys. Rev. D volume 101, pages 124060 (year 2020), http://arxiv.org/abs/1912.04636 arXiv:1912.04636 [gr-qc] NoStop [Sheykhi(2020)]Sheykhi:2020fqf author author A. Sheykhi, 10.1007/JHEP07(2020)031 journal journal JHEP volume 07, pages 031 (year 2020), http://arxiv.org/abs/2002.11718 arXiv:2002.11718 [gr-qc] NoStop [Nashed(2018b)]Nashed:2018aai author author G. G. L. Nashed, 10.1142/S0219887818501542 journal journal Int. J. Geom. Meth. Mod. Phys. volume 15, pages 1850154 (year 2018b)NoStop [Nashed and Nojiri(2021)]Nashed:2021ctg author author G. G. L. Nashed and author S. Nojiri, 10.1103/PhysRevD.104.044043 journal journal Phys. Rev. D volume 104, pages 044043 (year 2021), http://arxiv.org/abs/2107.13550 arXiv:2107.13550 [gr-qc] NoStop [Chamseddine(2021)]Chamseddine:2021xhw author author A. H. Chamseddine, 10.1140/epjc/s10052-021-09656-x journal journal Eur. Phys. J. C volume 81, pages 977 (year 2021), http://arxiv.org/abs/2106.14235 arXiv:2106.14235 [hep-th] NoStop [Zheng(2021)]Zheng:2018cuc author author Y. Zheng, 10.1007/JHEP01(2021)085 journal journal JHEP volume 01, pages 085 (year 2021), http://arxiv.org/abs/1810.03826 arXiv:1810.03826 [gr-qc] NoStop [Bakhtiarizadeh(2021)]Bakhtiarizadeh:2021pyo author author H. R. Bakhtiarizadeh, @noop (year 2021), http://arxiv.org/abs/2107.10686 arXiv:2107.10686 [gr-qc] NoStop [Nashed(2021)]Nashed:2021pkc author author G. G. L. Nashed, 10.3847/1538-4357/ac19bb journal journal Astrophys. J. volume 919, pages 113 (year 2021), http://arxiv.org/abs/2108.04060 arXiv:2108.04060 [gr-qc] NoStop [Nojiri and Odintsov(2014)]Nojiri:2014zqa author author S. Nojiri and author S. D. Odintsov, 10.1142/S0217732314502113 journal journal Mod. Phys. Lett. 
, pages 1450211 (year 2014), http://arxiv.org/abs/1408.3561 arXiv:1408.3561 [hep-th] NoStop [Odintsov and Oikonomou(2016a)]Odintsov:2015wwp author author S. D. Odintsov and author V. K. Oikonomou, 10.1103/PhysRevD.93.023517 journal journal Phys. Rev. D volume 93, pages 023517 (year 2016a), http://arxiv.org/abs/1511.04559 arXiv:1511.04559 [gr-qc] NoStop [Oikonomou(2016a)]Oikonomou:2016pkp author author V. K. Oikonomou, 10.1142/S0217732316501911 journal journal Mod. Phys. Lett. A volume 31, pages 1650191 (year 2016a), http://arxiv.org/abs/1609.03156 arXiv:1609.03156 [gr-qc] NoStop [Oikonomou(2016b)]Oikonomou:2016fxb author author V. K. Oikonomou, 10.1142/S0218271816500784 journal journal Int. J. Mod. Phys. , pages 1650078 (year 2016b), http://arxiv.org/abs/1605.00583 arXiv:1605.00583 [gr-qc] NoStop [Oikonomou(2016c)]Oikonomou:2015lgy author author V. K. Oikonomou, 10.3390/universe2020010 journal journal Universe volume 2, pages 10 (year 2016c), http://arxiv.org/abs/1511.09117 arXiv:1511.09117 [gr-qc] NoStop [Myrzakulov and Sebastiani(2016)]Myrzakulov:2016hrx author author R. Myrzakulov and author L. Sebastiani, 10.1007/s10509-016-2779-z journal journal Astrophys. Space Sci. volume 361, pages 188 (year 2016), http://arxiv.org/abs/1601.04994 arXiv:1601.04994 [gr-qc] NoStop [Odintsov and Oikonomou(2015)]Odintsov:2015cwa author author S. D. Odintsov and author V. K. Oikonomou, 10.1016/j.aop.2015.10.013 journal journal Annals Phys. volume 363, pages 503 (year 2015), http://arxiv.org/abs/1508.07488 arXiv:1508.07488 [gr-qc] NoStop [Odintsov and Oikonomou(2016b)]Odintsov:2016imq author author S. D. Odintsov and author V. K. Oikonomou, 10.1007/s10509-016-2826-9 journal journal Astrophys. Space Sci. volume 361, pages 236 (year 2016b), http://arxiv.org/abs/1602.05645 arXiv:1602.05645 [gr-qc] NoStop [Odintsov and Oikonomou(2016c)]Odintsov:2016oyz author author S. D. Odintsov and author V. K. Oikonomou, 10.1103/PhysRevD.94.044012 journal journal Phys. Rev. D volume 94, pages 044012 (year 2016c), http://arxiv.org/abs/1608.00165 arXiv:1608.00165 [gr-qc] NoStop [Nojiri et al.(2017)Nojiri, Odintsov, and Oikonomou]Nojiri:2017ygt author author S. Nojiri, author S. D. Odintsov, and author V. K. Oikonomou, 10.1016/j.physletb.2017.10.045 journal journal Phys. Lett. , pages 44 (year 2017), http://arxiv.org/abs/1710.07838 arXiv:1710.07838 [gr-qc] NoStop [Odintsov and Oikonomou(2018)]Odintsov:2018ggm author author S. D. Odintsov and author V. K. Oikonomou, 10.1016/j.nuclphysb.2018.01.027 journal journal Nucl. Phys. B volume 929, pages 79 (year 2018), http://arxiv.org/abs/1801.10529 arXiv:1801.10529 [gr-qc] NoStop [Bhattacharjee(2021)]Bhattacharjee:2021kar author author S. Bhattacharjee, @noop (year 2021), http://arxiv.org/abs/2104.01751 arXiv:2104.01751 [gr-qc] NoStop [Kaczmarek and Szczesniak(2021)]Kaczmarek:2021psy author author A. Z. Kaczmarek and author D. Szczesniak, 10.1038/s41598-021-97907-y journal journal Sci. Rep. volume 11, pages 18363 (year 2021), http://arxiv.org/abs/2105.05050 arXiv:2105.05050 [gr-qc] NoStop [Chen et al.(2021)Chen, Guo, and Liu]Chen:2020zzs author author J. Chen, author W.-D. Guo, and author Y.-X. Liu, 10.1140/epjc/s10052-021-09504-y journal journal Eur. Phys. J. C volume 81, pages 709 (year 2021), http://arxiv.org/abs/2011.03927 arXiv:2011.03927 [gr-qc] NoStop [Astashenok et al.(2015)Astashenok, Odintsov, and Oikonomou]Astashenok:2015haa author author A. V. Astashenok, author S. D. Odintsov, and author V. K. Oikonomou, 10.1088/0264-9381/32/18/185007 journal journal Class. Quant. Grav. 
http://arxiv.org/abs/2307.03331v1
20230706234952
Convergence of the momentum method for semi-algebraic functions with locally Lipschitz gradients
[ "Cédric Josz", "Lexiao Lai", "Xiaopeng Li" ]
math.OC
[ "math.OC" ]
Convergence of the momentum method for semi-algebraic functions with locally Lipschitz gradients Cédric Josz<cj2638@columbia.edu>, IEOR, Columbia University, New York. Research supported by NSF EPCN grant 2023032 and ONR grant N00014-21-1-2282. Lexiao Lai<ll3352@columbia.edu>, IEOR, Columbia University, New York. Xiaopeng Li<xl3040@columbia.edu>, IEOR, Columbia University, New York. ==================================================================================================================================================================================================================================================================================================== Abstract: We propose a new length formula that governs the iterates of the momentum method when minimizing differentiable semi-algebraic functions with locally Lipschitz gradients. It enables us to establish local convergence, global convergence, and convergence to local minimizers without assuming global Lipschitz continuity of the gradient, coercivity, and a global growth condition, as is done in the literature. As a result, we provide the first convergence guarantee of the momentum method starting from arbitrary initial points when applied to principal component analysis, matrix sensing, and linear neural networks. Keywords: Kurdyka-Łojasiewicz inequality, ordinary differential equations, semi-algebraic geometry. § INTRODUCTION The gradient method with constant momentum and constant step size (or momentum method for short <cit.>) for minimizing a differentiable function f: ℝ^n→ℝ consists in choosing initial points x_-1,x_0∈ℝ^n and generating a sequence of iterates according to the update rule x_k+1 = x_k + β(x_k-x_k-1) - α∇ f(x_k + γ(x_k-x_k-1)),    ∀ k ∈ℕ, where α>0 is the step size and β∈ (-1,1) and γ∈ℝ are constant momentum parameters, as implemented in PyTorch <cit.> and TensorFlow <cit.>. When γ = 0, this reduces to Polyak's heavy ball method <cit.>, and when β = γ, it reduces to Nesterov's accelerated gradient method <cit.>. If the objective function is strongly convex and satisfies some regularity assumptions, the former has a nearly optimal local convergence rate <cit.> <cit.>, while the latter has a globally optimal convergence rate <cit.>. This holds with a suitable choice of parameters α, β, and γ. Many objective functions of current interest are however not convex, including principal component analysis, matrix sensing, and linear neural networks, respectively given by (X,Y)∈ℝ^m× r×ℝ^n× r↦‖XY^⊤ - M‖_F^2, (X,Y)∈ℝ^m× r×ℝ^n× r↦∑_i=1^m(⟨ A_i,XY^⊤⟩_F-b_i)^2, (W_1,…,W_l) ∈ℝ^n_1× n_0×⋯×ℝ^n_l× n_l-1↦‖W_l ⋯ W_1X̅ - Y̅‖^2_F. Above, M,A_1,…,A_m ∈ℝ^m× n, b_1,…,b_m∈ℝ, X̅∈ℝ^n_0× n, Y̅∈ℝ^n_l× n, and ‖·‖_F = √(⟨·,·⟩_F) is the Frobenius norm. These problems have in common that they are semi-algebraic and have locally Lipschitz continuous gradients. However, they do not have globally Lipschitz continuous gradients, they are not coercive, and whether they satisfy a global growth condition is unknown and hard to check. In other words, the commonly used assumptions H1 (sufficient decrease), H2 (relative error), and H3 (continuity) due to Attouch et al. <cit.>, adapted to the momentum method in <cit.>, do not hold, and H4 (global growth) is unknown. As a result, it is not known whether the momentum method, in particular the heavy ball method and Nesterov's accelerated gradient method, would converge if the initial points lie close to a local minimizer of f. A fortiori, nothing is known if they are chosen arbitrarily or at random in ℝ^n.
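To make the update rule above and its special cases concrete, here is a minimal NumPy sketch (our own illustration, not a reference implementation) that runs the momentum method on the principal component analysis objective ‖XY^⊤ - M‖_F^2; the variable names, the step size, and the momentum parameters are chosen by us for stability and are not the constants analyzed in this paper.

import numpy as np

def momentum_step(x_curr, x_prev, grad, alpha, beta, gamma):
    # x_{k+1} = x_k + beta*(x_k - x_{k-1}) - alpha*grad(x_k + gamma*(x_k - x_{k-1}))
    # gamma = 0 recovers the heavy ball method; gamma = beta recovers Nesterov's method.
    look_ahead = x_curr + gamma * (x_curr - x_prev)
    return x_curr + beta * (x_curr - x_prev) - alpha * grad(look_ahead)

rng = np.random.default_rng(0)
m, n, r = 20, 15, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target matrix

def unpack(z):
    return z[:m * r].reshape(m, r), z[m * r:].reshape(n, r)

def grad_f(z):
    # Gradient of f(X, Y) = ||X Y^T - M||_F^2 with respect to the stacked variable (X, Y).
    X, Y = unpack(z)
    R = X @ Y.T - M
    return np.concatenate([(2 * R @ Y).ravel(), (2 * R.T @ X).ravel()])

z_prev = 0.1 * rng.standard_normal(m * r + n * r)  # x_{-1}
z_curr = z_prev.copy()                             # x_0 = x_{-1}
alpha, beta, gamma = 1e-4, 0.9, 0.0                # small, hand-picked parameters

for _ in range(20000):
    z_curr, z_prev = momentum_step(z_curr, z_prev, grad_f, alpha, beta, gamma), z_curr

X, Y = unpack(z_curr)
print("objective value after 20000 iterations:", np.linalg.norm(X @ Y.T - M, "fro") ** 2)

Setting gamma = beta in the same loop runs Nesterov's accelerated gradient method instead of the heavy ball method.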
Even if one assumes that the iterates are bounded, a common assumption in the literature which implies H3, it is not known whether the iterates would converge. Indeed, choose a step size α>0 and suppose that the iterates are bounded. Let L>0 denote a Lipschitz constant of the gradient on the convex hull of the iterates. If α⩾ 2/L, then the argument employed in <cit.> and <cit.>, which consists of taking a subsequence and invoking the Kurdyka-Łojasiewicz inequality <cit.> <cit.>, fails to establish convergence. Unfortunately, there is no way to control the size of L before choosing the step size α. We next review the literature on the momentum method in the nonconvex setting. All the results in the literature require that f has an L-Lipschitz continuous gradient with L>0, along with other assumptions that we next describe. First, we discuss convergence when the initial points are near a local minimizer. If the objective function f satisfies the Kurdyka-Łojasiewicz inequality at a local minimizer x^*∈ℝ^n, α∈ (0,2(1-β)/L), β∈ [0,1), γ = 0, and a global growth condition <cit.> is satisfied, then the momentum method converges to a local minimizer when initialized sufficiently close to x^* <cit.>. The growth condition implies the existence of constants a,b>0 such that f(x) + bx-y^2 ⩾ f(x^*) - ay-x^*^2 for all x,y∈ℝ^n, and in particular f(x) ⩾ f(x^*) - ax-x^*^2 for all x∈ℝ^n. Second, we discuss convergence when the initial points are arbitrary. Under the same parameter settings for α,β,γ, if f is lower bounded, then the gradients ∇ f(x_k) converge to zero <cit.> for any initial points x_-1,x_0 ∈ℝ^n. If in addition the function is coercive and satisfies the Kurdyka-Łojasiewicz inequality <cit.> at every point and x_-1=x_0, then the iterates have finite length <cit.>. If the Łojasiewicz gradient inequality holds <cit.>, then a local convergence rate can be deduced <cit.>. If instead the function satisfies an error bound and its level sets are properly separated, then with α∈ (0,1/L), β = γ∈ [0,1/√(1+Lα)), and x_-1=x_0, the iterates and the function values converge linearly to a critical point and a critical value respectively <cit.>. Finally, if the function satisfies the Kurdyka-Łojasiewicz inequality and the iterates are bounded, then they have finite length <cit.> under the same parameter settings. Third, we discuss convergence when the initial points are chosen outside a zero measure set. The momentum method is known to converge to a local minimizer for almost every initial point under several conditions. First, f should be coercive, twice differentiable, and should satisfy the Kurdyka-Łojasiewicz inequality. Second, the Hessian of f should have a negative eigenvalue at all critical points of f that are not local minimizers. Third, the parameters of the momentum method (<ref>) should either satisfy α∈(0,2(1-β)/L), β∈(0,1) and γ=0 <cit.>, or α∈(0,4/L), β∈(max{0,-1+α L/2},1) and γ=0 <cit.>. Our contributions are as follows. We consider objective functions f:ℝ^n→ℝ that are semi-algebraic and differentiable with locally Lipschitz gradients. The generalization to arbitrary o-minimal structures on the real field <cit.> is immediate and omitted for the sake of brevity. We show that the length of the iterates generated by the momentum method is upper bounded by an expression depending on the objective function variation, the step size, and a desingularizing function. 
This length formula enables us to show that global Lipschitz continuity of the gradient and the global growth condition are superfluous when establishing local convergence. It also enables us to establish global convergence under the assumption that the continuous-time gradient trajectories of f are bounded, which is satisfied by problems (<ref>), (<ref>), and (<ref>), as discussed in <cit.> (provided RIP <cit.> holds in (<ref>)). As a result, we bypass the need for coercivity and globally Lipschitz gradients. Finally, the length formula enables us to guarantee convergence to local minimizers almost surely, under second-order differentiability and the strict saddle property <cit.>. This paper is organized as follows. Section <ref> contains the statement of the length formula and the ensuing convergence results. Section <ref> contains the proof of the length formula. Section <ref> contains the conclusion. Finally, several proofs are deferred to the Appendix for ease of readability. § CONVERGENCE RESULTS We begin by recalling standard notations and definitions. Let ℕ := {0,1,2,} and let · be the induced norm of an inner product ⟨·, ·⟩ on ℝ^n. Let B(a,r) and B(a,r) respectively denote the closed and open balls of center a∈ℝ^n and radius r ⩾ 0. Given k∈ℕ, A⊂ℝ^n and B ⊂ℝ^m, let C^k(A,B) be the set of continuous functions f:A→ B such that, if k ⩾ 1 then f is k times continuously differentiable on the interior of A. Let C^1,1_ loc(A,B) denote the set of functions in C^1(A,B) whose first-order derivative is locally Lipschitz continuous on the interior of A. When B=ℝ, C^k(A,B) and C^1,1_ loc(A,B) are abbreviated as C^k(A) and C^1,1_ loc(A) respectively. Let ∂ f:ℝ^n⇉ℝ^n denote the Clarke subdifferential <cit.> of a locally Lipschitz continuous function f:ℝ^n→ℝ. Given a locally Lipschitz function f:ℝ^n→ℝ, a real number v is a critical value of f in S if there exists x ∈ S such that v = f(x) and 0∈∂ f(x). A real number v is a critical value of f if it is a critical value of f in ℝ^n. A subset S of ℝ^n is semi-algebraic <cit.> if it is a finite union of sets of the form { x ∈ℝ^n: p_i(x) = 0,   i = 1,,k ;   p_i(x) > 0,   i = k+1,,m } where p_1,,p_m are polynomials defined from ℝ^n to ℝ. A function f:ℝ^n→ℝ is semi-algebraic if its graph, that is to say { (x,t) ∈ℝ^n+1: f(x)=t }, is a semi-algebraic set. We recall the following useful result. Let f:ℝ^n→ℝ be locally Lipschitz and semi-algebraic. Then f has finitely many critical values. Given a subset of S of ℝ^n, let S and S denote the interior and closure of S in ℝ^n respectively. A function ψ:S → S is a homeomorphism if it is a continuous bijection and the inverse function ψ^-1 is continuous. ψ:S → S is a diffeomorphism if S≠∅, ψ is a homeomorphism, and both ψ and ψ^-1 are continuously differentiable on S. We are now ready to state the key lemma upon which rest all the convergence results in this manuscript. It is entirely new to the best of our knowledge. Let f∈ C^1,1_ loc(ℝ^n) be semi-algebraic, X⊂ℝ^n be bounded, β∈ (-1,1), γ∈ℝ, and δ⩾ 0. There exist α̅,η,κ>0 and a diffeomorphism ψ:[0,∞) → [0,∞) such that, for all K∈ℕ, α∈(0,α̅], and sequences (x_k)_k∈{-1}∪ℕ generated by the momentum method (<ref>) for which x_-1,, x_K ∈ X and x_0-x_-1⩽δα, we have ∑_k=0^Kx_k+1-x_k ⩽ ψ(f(x_0)-f(x_K)+ηα) + κα. The significance of this formula is that it relates the length of the iterates with the objective function variation, in spite of the fact that the objective function values generated by the momentum method are notoriously nonmonotonic. 
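As a rough numerical illustration of the two quantities that the formula relates (this is not a verification of the bound, since ψ, η, and κ are not computed here), one can accumulate the path length ∑_k ‖x_k+1-x_k‖ and record the objective values along heavy ball iterates on a simple semi-algebraic test function; the function and parameters below are our own choices.

import numpy as np

f = lambda x: (x**2 - 1.0)**2            # a toy semi-algebraic objective (our choice)
grad = lambda x: 4.0 * x * (x**2 - 1.0)  # locally, but not globally, Lipschitz gradient

alpha, beta, gamma = 0.02, 0.8, 0.0      # heavy ball
x_prev, x_curr = 2.0, 2.0                # x_{-1} = x_0
length, values = 0.0, [f(x_curr)]

for _ in range(400):
    x_next = x_curr + beta * (x_curr - x_prev) \
             - alpha * grad(x_curr + gamma * (x_curr - x_prev))
    length += abs(x_next - x_curr)       # contributes to sum_k ||x_{k+1} - x_k||
    values.append(f(x_next))
    x_prev, x_curr = x_curr, x_next

print("total path length      :", length)
print("f(x_0) - f(x_K)        :", values[0] - values[-1])
print("steps where f increased:", sum(b > a for a, b in zip(values, values[1:])))

The last line typically reports several increases of f along the trajectory, which is the nonmonotonicity mentioned above.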
The proof of Lemma <ref> is quite involved so we defer it to Section <ref>. There, the reader will learn that one can actually take ψ to be a desingularizing function of f on X (see Proposition <ref>). We next provide some intuition on the constants in the length formula. They are constructed explicitly using the regularity of the objective function and the momentum parameters. Both η and κ increase with the number of critical values of f in X and the initial velocity δ in the momentum method. The constant η increases with the minimal Lipschitz constant of ∇ f over a certain bounded set and with the magnitude of the momentum |β|. The constant κ increases with the minimal Lipschitz constant of f over a certain bounded set. Before we proceed, we state the following simple fact regarding the gradient of the objective function at iterates produced by the momentum method. Its proof is in Appendix <ref>. Let f∈ C^1,1_ loc(ℝ^n), X⊂ℝ^n be bounded, and β,γ∈ℝ. For all α>0, there exists b_α>0 such that for all K ∈ℕ, if x_-1,, x_K+1∈ X are iterates of the momentum method (<ref>), then ∇ f(x_k)⩽ b_α z_k+1-z_k for k=0,,K where z_k:=(x_k,x_k-1)∈ℝ^2n. If M>0 is a Lipschitz constant of ∇ f on S+max{|β|,|γ|}(S-S) where S is the convex hull of X, then one may take b_α :=√(2)max{1/α,|β|/α+M|γ|}. We are now ready to state our first convergence result. Let f∈ C^1,1_ loc(ℝ^n) be semi-algebraic, β∈ (-1,1), γ∈ℝ, δ⩾ 0, and x^* ∈ℝ^n be a local minimizer of f. For all ϵ>0, there exist α̅,ξ>0 such that for all α∈(0,α̅] and for all sequence (x_k)_k∈{-1}∪ℕ generated by the momentum method (<ref>) for which x_0-x_-1⩽δα and x_0 ∈ B(x^*,ξ), (x_k)_k∈{-1}∪ℕ converges to a local minimizer of f in B(x^*,ϵ). Without loss of generality, we may assume that f(x)⩾ f(x^*) for all x ∈ B(x^*,2ϵ). Since f is continuous and has finitely many critical values by the semi-algebraic Morse-Sard theorem (Lemma <ref>), we may also assume that f(x^*) is the unique critical value in B(x^*,2ϵ). By Lemma <ref>, there exist α,η>0, κ>δ, and a diffeomorphism ψ:[0,∞) → [0,∞) such that, for all K∈ℕ, α∈(0,α], and sequences (x_k)_k∈{-1}∪ℕ generated by the momentum method (<ref>) for which x_-1,, x_K ∈ B(x^*,ϵ) and x_0-x_-1⩽δα, we have ∑_k=0^Kx_k+1-x_k ⩽ ψ(f(x_0)-f(x_K)+ηα) + κα. By the continuity of f, there exists ξ∈(0, ϵ/2] such that f(x) - f(x^*) ⩽1/2 ψ^-1(ϵ/6)  ,  ∀ x ∈ B(x^*,ξ). Let α̅ := min{α, ϵ/3κ, 1/3ηψ^-1(ϵ/6) }. We fix any α∈ (0,α̅] from now on. Let (x_k)_k∈{-1}∪ℕ be a sequence generated by the momentum method (<ref>) for which x_0-x_-1⩽δα and x_0 ∈ B(x^*,ξ). If K := inf{k ∈ℕ : x_k ∉ B(x^*,ϵ)}<∞, then ψ^-1(ϵ/6) = ψ^-1(1/2ϵ - κϵ/3κ) ⩽ψ^-1((ϵ-ξ) - κα̅) ⩽ψ^-1( (x_K-x^*-x_0-x^*) - κα̅) ⩽ψ^-1(x_K-x_0 - κα) ⩽ψ^-1( ∑_k=0^K-1x_k+1-x_k - κα) ⩽ f(x_0) - f(x_K-1) + ηα ⩽ f(x_0) - f(x^*) +ηα̅ ⩽1/2ψ^-1(ϵ/6) + 1/3ψ^-1(ϵ/6) < ψ^-1(ϵ/6). As ψ^-1(ϵ/6)>0, a contradiction occurs and thus K = ∞. Above, the arguments of ψ^-1 in (<ref>) are equal. (<ref>) through (<ref>) rely on the fact that ψ^-1 is an increasing function. (<ref>) is due to ξ⩽ϵ/2 and α̅⩽ϵ/(3κ) by the definition of α̅ in (<ref>). (<ref>) holds because x_K ∉ B(x^*,ϵ) and x_0∈ B(x^*,ξ). (<ref>) and (<ref>) are consequences of the triangular inequality. (<ref>) is due to the length formula (<ref>) and the fact that x_0,,x_K-1∈ B(x^*,ϵ) and x_-1∈ B(x_0,δα) ⊂ B(x_0,δα̅) ⊂ B(x^*,ξ + δϵ/(3κ))⊂ B(x^*,ϵ/2 + δϵ/(3δ)) ⊂ B(x^*,ϵ) by the definition of α̅ in (<ref>). (<ref>) is due to f(B(x^*,ϵ)) ⊂ [f(x^*),∞). 
Finally, (<ref>) is due to x_0 ∈ B(x^*,ξ), the choice of ξ as in (<ref>), and α̅⩽ψ^-1(ϵ/6)/(3η) by definition of α̅ in (<ref>). We have shown that (x_k)_k∈{-1}∪ℕ⊂ B(x^*,ϵ). By the length formula (<ref>), we have ∑_k = 0^∞x_k+1 - x_k⩽ψ( max_B(x^*,ξ) f - min_B(x^*,ϵ) f +ηα) +κα. Thus the sequence admits a limit x^♯∈ B(x^*,ϵ). Combining with Fact <ref>, x^♯ must be a critical point of f. As f(x^*) is the unique critical value in B(x^*,2ϵ), f(x^♯) = f(x^*) ⩽ f(x) for all x ∈ B(x^♯,ϵ) ⊂ B(x^*,2ϵ). Note that once local convergence is established, <cit.> can be applied in order to obtain local convergence rates of the iterates. Indeed, the reader will be able to check later that the assumptions <cit.> then hold (using Lemma <ref> and Lemma <ref> below). The rates also rely on the fact that one can take the diffeomorphism ψ to be of the form ψ(t) = ct^θ where c>0 and θ∈ (0,1] for semi-algebraic functions (using Proposition <ref> below). In order to go from local convergence to global convergence, we make an assumption regarding the continuous-time gradient trajectories of the objective function. Given f ∈ C^1(ℝ^n), we refer to maximal solutions to x'(t) = - ∇ f(x(t)) for all t ∈ (0,T) where T ∈ (0, ∞] as continuous gradient trajectories (see <cit.>, <cit.>, and <cit.> for background and properties). We say that a continuous gradient trajectory x: [0,T) →ℝ^n is bounded if there exists c>0 such that x(t)⩽ c for all t∈ [0,T). This assumption enables us to use a generalized version of a tracking lemma recently proposed by Kovachki and Stuart <cit.>. Their result states that the momentum method tracks continuous gradient trajectories up to any given time for all sufficient small constant sizes. In Lemma <ref> below, we relax their strong regularity assumptions which require the objective function to be thrice differentiable with bounded derivatives. We instead only require it to be differentiable with a locally Lipschitz gradient. In order to do so, we redefine the key quantity M_k in the proof of <cit.>, which regulates the tracking error and depends on the Hessian of the objective function, so that it depends only on the gradient. In contrast to <cit.>, we also make the tracking uniform with respect to the initial point in a bounded set. Choosing an upper bound on the step size in order to achieve this requires some care and cannot be deduced from the proof of <cit.>. Below, we use the notation ⌊ t ⌋ to denote the floor of a real number t which is the unique integer such that ⌊ t ⌋⩽ t < ⌊ t ⌋ + 1. Let f∈ C^1,1_ loc(ℝ^n) be a lower bounded function, β∈(-1,1), γ∈ℝ, δ⩾ 0. For any bounded set X_0 ⊂ℝ^n and ϵ,T>0, there exists α̅>0 such that for all α∈ (0, α̅] and for any sequence x_-1,x_0, x_1, …∈ℝ^n generated by the momentum method (<ref>) for which x_0∈ X_0 and x_0-x_-1⩽δα, there exists x(·) ∈ C^1([0,T],ℝ^n) such that x'(t) = -1/1 - β∇ f(x(t)),   ∀ t ∈ (0,T),     x(0) ∈ X_0, for which x_k - x(kα) ⩽ϵ for k = 0, …, ⌊ T/α⌋. The proof of Lemma <ref> is deferred to Appendix <ref>. We will also use the following simple fact in order to control the length of a single step of the momentum method as a function of the step size. Its short proof can be found in Appendix <ref>. Let f:ℝ^n→ℝ be differentiable and Lipschitz continuous on X ⊂ℝ^n, β∈(-1,1), γ∈ℝ, and δ_0 ⩾ 0. 
There exists δ_1 ⩾ 0 such that for all α>0, K ∈ℕ, and sequence (x_k)_k∈{-1}∪ℕ generated by the momentum method (<ref>) for which x_-1,,x_K-1∈ X and x_0-x_-1⩽δ_0α, we have x_k-x_k-1 ⩽δ_1α, k = 0,…, K, z_k-z_k-1 ⩽√(2)δ_1α, k = 1,…, K, where z_k:=(x_k,x_k-1)∈ℝ^2n. If L>0 is a Lipschitz constant of β̅f on S+γ(S-S) where S is the convex hull of X and β̅:= (1-β)^-1, then we may take δ_1:=δ_0+L. Finally, we will use the following result. Let f ∈ C^1(ℝ^n) be a semi-algebraic function with bounded continuous gradient trajectories. If X_0 ⊂ℝ^n is bounded, then σ(X_0)<∞ where σ(X_0) := sup_x ∈ C^1(ℝ_+,ℝ^n)  ∫_0^∞x'(t)dt   subject to   {[ x'(t) = - ∇ f(x(t)),  ∀ t > 0,; x(0) ∈ X_0. ]. We are now ready to state our second convergence result. It shows that, similar to the gradient method <cit.>, the momentum method is endowed with global convergence if continuous gradient trajectories are bounded. In the gradient method, one considers the supremum of the lengths of all discrete gradient trajectories over all possible initial points in a bounded set and over all possible step sizes <cit.>. This enables one to reason by induction on the initial set and the upper bound on the step sizes. When dealing with momentum, one needs to additionally consider an upper bound on the initial velocity x_0-x_-1/α between two initial points in the inductive reasoning. Fact <ref> guarantees that the velocity x_k-x_k-1/α remains bounded within each induction step. This enables one to reinitialize the momentum method after an arbitrary large number of iterations. Note that the length formula in Lemma <ref> admits an error term ηα that is not present in the gradient method <cit.>. This requires additional care. Let f∈ C^1,1_ loc(ℝ^n) be semi-algebraic with bounded continuous gradient trajectories. Let β∈ (-1,1), γ∈ℝ, δ⩾ 0 and X_0 be a bounded subset of ℝ^n. There exist α̅,c>0 such that for all α∈ (0,α̅], there exists c_α>0 such that any sequence x_-1,x_0,x_1,∈ℝ^n generated by the momentum method (<ref>) that satisfies x_0 ∈ X_0 and x_0-x_-1⩽δα obeys ∑_i=0^∞x_i+1-x_i⩽ c    and   min_i=0,…,k∇ f(x_i)⩽c_α/k+1,  ∀ k∈ℕ. Let β∈ (-1,1), γ∈ℝ, δ_0⩾ 0, and X_0 be a bounded subset of ℝ^n. Without loss of generality, we may assume that X_0 ≠∅. We will show that there exists α̅>0 such that σ(X_0,α̅,δ_0)<∞ where σ(X_0,α̅,δ_0) := sup_[ x ∈ (ℝ^n)^ℕ; α∈ (0,α̅] ]  ∑_k=0^∞x_k+1-x_k s.t.  {[ x_k+1 = x_k + β(x_k-x_k-1) -α∇ f (x_k + γ(x_k-x_k-1)), ∀ k ∈ℕ,; x_0 ∈ X_0,  x_0-x_-1⩽δ_0 α. ]. Letting c := σ(X_0,α̅,δ_0), the convergence rate is easily deduced. Indeed, for any feasible point ((x_k)_k ∈{-1}∪ℕ,α) of (<ref>), we have x_-1,x_0,x_1, ∈ B(X_0,max{c,δ_0 α}):= X_0 + B(0,max{c,δ_0 α}) and thus ∇ f(x_k)⩽ b_αz_k+1-z_k for all k ∈ℕ for some constant b_α>0 by Lemma <ref>, where z_k:= (x_k,x_k-1). Hence ∑_k=0^∞∇ f(x_k) ⩽∑_k=0^∞ b_αz_k+1 - z_k ⩽ b_αx_0-x_-1+2b_α∑_k=0^∞x_k+1-x_k ⩽ b_α (δ_0 α + 2 c) =: c_α and min_i=0,…,k∇ f(x_i)⩽1/k+1∑_i=0^k∇ f(x_i)⩽c_α/k+1. Let Φ:ℝ_+ ×ℝ^n →ℝ^n be the continuous gradient flow of f defined for all (t,x_0) ∈ℝ_+ ×ℝ^n by Φ(t,x_0) := x(t) where x(·) is the unique continuous gradient trajectory of f initialized at x_0. Uniqueness follows from the Picard–Lindelöf theorem <cit.>. Let Φ_0 := Φ(ℝ_+,X_0) and let C be the set of critical points of f in Φ_0. C is compact by Lemma <ref> and <cit.>. Thus there exists ϵ >0 such that either X_0 ⊂ C or X_0∖B(C,ϵ/6) ≠∅ where B(C,ϵ/6) := C + B(0,ϵ/6). Indeed, either X_0 ⊂ C or there exists x ∈ X_0 ∩ C^c where the complement C^c of C is open since C is closed. 
Thus there exists ϵ >0 such that B(x,ϵ/6) ⊂ C^c. Thus x ∉B(C,ϵ/6) (otherwise there exists x' ∈ C such that x-x'< ϵ/6, i.e., C ∋ x' ∈ B(x,ϵ/6) ⊂ C^c) and x ∈ X_0 ∖B(C,ϵ/6). By Fact <ref>, there exists δ_1 > δ_0 such that for all α>0, K ∈ℕ, and sequence (x_k)_k∈{-1}∪ℕ generated by the momentum method (<ref>) for which x_-1,,x_K-1∈ B(Φ_0,ϵ):= Φ_0 + B(0,ϵ) and x_0-x_-1⩽δ_0α, we have x_k - x_k-1⩽δ_1 α for k = 0, …, K. By Lemma <ref>, there exist α,η>0, κ⩾δ_1, and a diffeomorphism ψ:[0,∞)→[0,∞) such that for all K∈ℕ∖{0}, α∈(0,α], and sequence (x_k)_k∈{-1}∪ℕ generated by the momentum method (<ref>) for which x_-1,,x_K-1∈ B(Φ_0,ϵ) and x_0-x_-1⩽δ_1α, we have ∑_k=0^K-1x_k+1-x_k ⩽ ψ(f(x_0)-f(x_K-1)+ ηα) + κα. Since f is continuous, there exists ξ∈ (0,ϵ/2) such that f(x) - max_C f ⩽1/4ψ^-1(ϵ/3),   ∀ x∈ B(C,ξ). Let L>0 be a Lipschitz constant of β̅f on the convex hull of B(Φ_0,ϵ) and let α̂ := min{α,ξ/3L ,ϵ/6κ,ψ^-1(ϵ/3)/4η}>0 where β̅:= 1/(1-β). If X_0⊂ C, then let α̅ := α̂ and k^* := 0. Otherwise X_0∖B(C,ϵ/6) ≠∅. Since β̅∇ f is continuous, its norm attains its infimum ν on the non-empty compact set Φ_0 ∖B(C,ξ/3). It is non-empty because Φ_0 ∖B(C,ξ/3) ⊃ X_0∖B(C,ϵ/6) ≠∅ and ξ < ϵ/2. If ν=0, then there exists x^* ∈Φ_0 ∖B(C,ξ/3) such that ∇ f(x^*) = 0. Then x^*∈ C ∖B(C,ξ/3), which is a contradiction. We thus have ν>0. Hence we may define T := 2σ(X_0)/ν where σ(X_0) = sup_x ∈ C^1(ℝ_+,ℝ^n)  ∫_0^∞x'(t)dt   subject to   {[ x'(t) = - β̅∇ f(x(t)),  ∀ t > 0,; x(0) ∈ X_0, ]. is finite by Lemma <ref>. The factor β̅>0 does not change the optimal value because x(·) is a feasible point of the above problem if and only if x(·/β̅) is a feasible point of the problem in Lemma 2.7 and ∫_0^∞x'(t/β̅)/β̅dt = ∫_0^∞x'(t)dt. Note that σ(X_0)>0 and thus T>0 because X_0⊄C. In addition, since f is semi-algebraic and has bounded continuous gradient trajectories, it is lower bounded by its smallest critical value[Indeed, assume to the contrary that there exists x_0 ∈ℝ^n such that f(x_0) is less than the smallest critical value of f. The continuous gradient trajectory initialized at x_0 converges to a critical point x^* since it is bounded. This limit satisfies f(x_0) ⩾ f(x^*), yielding a contradiction.]. By Lemma <ref>, there exists α̅∈ (0,α̂] such that for any feasible point ((x_k)_k∈{-1}∪ℕ , α) of (<ref>), there exists an absolutely continuous function x:[0,T]→ℝ^n such that x'(t) = -β̅∇ f(x(t)),   ∀ t ∈ (0,T),     x(0) ∈ X_0, for which x_k - x(kα) ⩽ξ/3 for k = 0, …, ⌊ T/α⌋. Now suppose that x'(t)⩾ 2σ(X_0)/T for all t∈ (0,T). Then we obtain the following contradiction σ(X_0) < T 2σ(X_0)/T⩽∫_0^Tx'(t)dt ⩽∫_0^∞x'(t)dt ⩽σ(X_0). Hence, there exists t^* ∈ (0,T) such that x'(t^*) = β̅∇ f(x(t^*)) < 2σ(X_0)/T = 2σ(X_0)/(2σ(X_0)/ν) = ν. Since x(t^*) ∈Φ_0 and the infimum of the norm of β̅∇ f on Φ_0 ∖B(C,ξ/3) is equal to ν, it must be that x(t^*) ∈B(C,ξ/3). Hence there exists x^* ∈ C such that x(t^*)-x^*⩽ξ/3. Since α⩽α̂⩽ξ/(3L), there exists k^* ∈ℕ such that t_k^* := k^* α∈ [ t^* - ξ/(3L) , t^*]. Thus x_k^*-x^*⩽x_k^*-x(t_k^*) + x(t_k^*) - x(t^*) + x(t^*)-x^*⩽ξ/3 + L | t_k^*-t^* | + ξ/3 ⩽ξ/3+ξ/3+ξ/3 = ξ. To obtain the second inequality, we used the fact that for all t ⩾ 0, we have x'(t) = β̅∇ f(x(t))⩽ L since x(t) ∈ B(Φ_0,ϵ). Above, we defined α̅∈ (0, α̂] if X_0 ⊂ C or X_0 ⊄C (in which case X_0∖B(C,ϵ/6) ≠∅). We now consider a feasible point ((x_k)_k∈{-1}∪ℕ , α) of (<ref>) regardless of whether X_0 ⊂ C. Based on the above and the fact that X_0 ≠∅, there exists k^* ∈ℕ such that x_k^*∈ B(C,ξ). 
If K := inf{k ⩾ k^* : x_k ∉ B(C,ϵ)}<∞, then ψ^-1(ϵ/3) = ψ^-1(1/2ϵ - κϵ/6κ) ⩽ψ^-1((ϵ-ξ) - κα̂) ⩽ψ^-1(( x_K-x^*-x_k^*-x^*) - κα̅) ⩽ψ^-1(x_K-x_k^* - κα) ⩽ψ^-1(∑_k=k^*^K-1x_k+1-x_k - κα) ⩽ f(x_k^*) - f(x_K-1) + ηα ⩽max_C f + 1/4ψ^-1(ϵ/3) - f(x_K-1) +ηα̂ ⩽max_C f + 1/2ψ^-1(ϵ/3) - f(x_K-1) Above, the arguments of ψ^-1 in (<ref>) are equal. (<ref>) through (<ref>) rely on the fact that ψ^-1 is an increasing function. (<ref>) is due to ξ<ϵ/2. (<ref>) holds because x_K ∉ B(C,ϵ), x^* ∈ C, and x_k^*∈ B(x^*,ξ). (<ref>) and (<ref>) are consequences of the triangular inequality. (<ref>) is due to the length formula (<ref>) and the fact that x_k^*,,x_K-1∈ B(C,ϵ) ⊂ B(Φ_0,ϵ). (<ref>) is due to x_k^*∈ B(x^*,ξ) and (<ref>). Finally, (<ref>) is due to α̂⩽ψ^-1(ϵ/3)/(4η) by definition of α̂ in (<ref>). We remark that K⩾ k^*+2 since x_k^*+1-x^*⩽x_k^*+1-x_k^* +x_k^*-x^*⩽δ_1 α+ξ⩽δ_1ϵ/(6κ) +ϵ/2 ⩽ϵ/6 +ϵ/2 < ϵ. It also holds that x_-1,,x_K-2∈ B(Φ_0,ϵ), x_K-1 belongs to X_1  :=  B(C,ϵ)  ⋂ { x∈ℝ^n : f(x) ⩽max_C f - 1/2ψ^-1(ϵ/3) }, and x_K-1 - x_K-2⩽δ_1 α. Thus, by the length formula (<ref>) and the definition of σ(·,·,·) in (<ref>) we have ∑_k=0^∞x_k+1-x_k = ∑_k=0^K-2x_k+1-x_k + ∑_k=K-1^∞x_k+1-x_k ⩽ψ(sup_X_0 f - min_B(Φ_0,ϵ) f + ηα̅) + κα̅ +max{0, σ(X_1,α̅,δ_1)}. Note that the inequality still holds if K = ∞, since in that case (<ref>) implies ∑_k=0^∞x_k+1-x_k⩽ψ(sup_X_0 f - min_B(Φ_0,ϵ) f + ηα̅) + κα̅. Hence σ(X_0,α̅,δ_0) ⩽ψ(sup_X_0 f - min_B(Φ_0,ϵ) f + ηα̅) + κα̅ +max{0, σ(X_1,α̅,δ_1)}. It now suffices to replace X_0 by X_1, δ_0 by δ_1, and repeat the entire proof. Since f(Φ(t,x_1)) ⩽ f(Φ(0,x_1)) ⩽max_C f - ψ^-1(ϵ/3)/2< max_C f for all t⩾ 0 and x_1∈ X_1, the maximal critical value of f in Φ(ℝ_+,X_1) is less than the maximal critical value of f in Φ(ℝ_+,X_0). By the semi-algebraic Morse-Sard theorem (Lemma <ref>), f has finitely many critical values. Thus, it is eventually the case that one of the sets X_0,X_1, is empty. In order to conclude, one simply needs to choose an upper bound on the step sizes α̅' corresponding to X_1 that is less than or equal to the upper bound α̅ used for X_0. σ(X_0,·,δ_0) is finite when evaluated at the last upper bound thus obtained. Indeed, the recursive formula above still holds if we replace α̅ by any α∈ (0,α̅]. In particular, we may take α := α̅'. The inequalities in (<ref>) imply that the iterates converge to a critical point of f (i.e., a point x^* ∈ℝ^n such that ∇ f(x^*)=0). The previously known global convergence rate of the momentum method is O(1/√(k)) for coercive differentiable functions with a Lipschitz continuous gradient <cit.> if x_-1=x_0 and γ = 0. Without the coercivity assumption, O(1/√(k)) is also the rate of a modified version of the momentum method which does not capture the heavy ball method and Nesterov's accelerated gradient method as special cases <cit.>. Our third and final convergence result gives sufficient conditions for convergence to a local minimizer. Let f ∈ C^2(ℝ^n) be semi-algebraic with bounded continuous gradient trajectories. Let β∈ (-1,1) ∖{0}, γ∈ℝ, and δ⩾ 0. If the Hessian of f has a negative eigenvalue at all critical points of f that are not local minimizers, then for any bounded subset X_0 of ℝ^n, there exists α̅>0 such that, for all α∈ (0,α̅] and for almost every (x_-1,x_0) ∈ℝ^n × X_0, any sequence x_-1,x_0,x_1,∈ℝ^n generated by the momentum method (<ref>) that satisfies x_0-x_-1⩽δα converges to a local minimizer of f. 
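The remark that follows describes how such initial points can be drawn in practice; a minimal sketch of that sampling step (assuming NumPy and taking X_0 to be a box for concreteness) is given below.

import numpy as np

def random_initialization(lo, hi, alpha, delta, rng):
    # Draw x_0 uniformly in the box X_0 = [lo_1, hi_1] x ... x [lo_n, hi_n] and
    # x_{-1} uniformly in the closed ball of radius delta*alpha centered at x_0.
    n = lo.shape[0]
    x0 = rng.uniform(lo, hi)
    direction = rng.standard_normal(n)
    direction /= np.linalg.norm(direction)
    radius = delta * alpha * rng.uniform() ** (1.0 / n)  # uniform sampling in the ball
    return x0 + radius * direction, x0                   # returns (x_{-1}, x_0)

rng = np.random.default_rng(0)
lo, hi = -np.ones(4), np.ones(4)
x_minus1, x_0 = random_initialization(lo, hi, alpha=1e-2, delta=1.0, rng=rng)
assert np.linalg.norm(x_0 - x_minus1) <= 1.0 * 1e-2 + 1e-12

The pair (x_minus1, x_0) can then be fed to the momentum iteration as the two initial points.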
In practice, if X_0 has positive measure and δ>0, Theorem <ref> means that one can generate x_0 uniformly at random in X_0 and generate x_-1 uniformly at random in the ball of radius δα centered at x_0 in order to guarantee convergence to a local minimizer almost surely. In contrast to the gradient method <cit.>, we need to assume that the Hessian of f has a negative eigenvalue at local maxima of f. Indeed, the function values are not necessarily decreasing along the iterates. While the proof of Theorem <ref> crucially depends on the length bound in Theorem <ref>, it mostly requires extending well-known arguments regarding the center and stable manifolds theorem <cit.>. For this reason, we defer its proof to Appendix <ref>. § PROOF OF THE LENGTH FORMULA Given λ>0, consider the following Lyapunov function proposed by Zavriev and Kostyuk <cit.>: [ H_λ: ℝ^n ×ℝ^n ⟶ ℝ; (x,y) ⟼ f(x) + λx-y^2. ] For certain values of λ, it is known to be monotonic along the iterates if γ = 0 <cit.> <cit.> or β = γ <cit.> <cit.>, but not for general β∈ (-1,1) and γ∈ℝ. This justifies the need for Lemma <ref> in which it will be convenient to rewrite the update rule of the momentum method (<ref>) as y_k^β = x_k + β(x_k-x_k-1), y_k^γ = x_k + γ(x_k-x_k-1), x_k+1 = y_k^β - α∇ f(y_k^γ), for all k ∈ℕ. Likewise, bounds on the norm of the gradient of the objective function and the Lyapunov function are only known if γ = 0 <cit.> or β = γ <cit.> <cit.>, which calls for Lemma <ref>. Let f∈ C^1,1_ loc(ℝ^n), X⊂ℝ^n be bounded, β∈ (-1,1), and γ∈ℝ. There exists α̅>0 such that for all α∈(0,α̅], there exist λ^+>λ^->0 such that for all λ∈(λ^-,λ^+), there exists c_1>0 such that for all K∈ℕ, if x_-1,, x_K+1∈ X are iterates of the momentum method (<ref>), then for k=0,,K we have H_λ(x_k+1,x_k) ⩽ H_λ(x_k,x_k-1) - c_1(x_k+1 - x_k^2+x_k - x_k-1^2). If M>0 is a Lipschitz constant of ∇ f on S+max{|β|,|γ|}(S-S) where S is the convex hull of X, then one may take α̅:=min{1/M,1-β^2/2(β^2+2|β-γ|)M}, λ^-:=(1/2α+M/2)β^2+|β-γ|M/2,   λ^+:=1/2α-|β-γ|M/2, and    c_1 : = min{λ - λ^-, λ^+ - λ}. Consider α̅ as defined in (<ref>) and let α∈(0,α̅]. Given K∈ℕ, let x_-1,, x_K+1∈ X be iterates generated by the momentum method (<ref>). A bound on the Taylor expansion of f yields f(x_k+1) ⩽ f(y_k^β) + ⟨∇ f(y_k^β), x_k+1-y_k^β⟩ + M/2 x_k+1-y_k^β^2, f(x_k) ⩾ f(y_k^β) + ⟨∇ f(y_k^β), x_k-y_k^β⟩ - M/2 x_k-y_k^β^2, where k ∈{0,,K}. Subtracting (<ref>) from (<ref>) yields f(x_k+1)-f(x_k) ⩽ ⟨∇ f(y_k^β),x_k+1-x_k⟩ + M/2( x_k+1-y_k^β^2+ x_k-y_k^β^2) = ⟨∇ f(y_k^β)-∇ f(y_k^γ),x_k+1-x_k⟩+ ⟨∇ f(y_k^γ),x_k+1-x_k⟩ +M/2( x_k+1-y_k^β^2+ x_k-y_k^β^2) = ⟨∇ f(y_k^β)-∇ f(y_k^γ),x_k+1-x_k⟩+ 1/α⟨ x_k+1-y_k^β,x_k-x_k+1⟩ +M/2( x_k+1-y_k^β^2+ x_k-y_k^β^2). By the Cauchy-Schwarz and AM-GM inequalities, we have ⟨∇ f(y_k^β)-∇ f(y_k^γ),x_k+1-x_k⟩ ⩽∇ f(y_k^β)-∇ f(y_k^γ) x_k+1-x_k ⩽ M y_k^β- y_k^γ x_k+1-x_k = M|β-γ| x_k-x_k-1 x_k+1-x_k ⩽|β-γ|M/2( x_k-x_k-1^2+ x_k+1-x_k^2). By the cosine rule, for any a,b,c∈ℝ^n, it holds that ⟨ a-b,c-a⟩ = 1/2( b-c^2- a-b^2- c-a^2). By letting a:=x_k+1, b:=y_k^β and c:=x_k, we have ⟨ x_k+1-y_k^β,x_k-x_k+1⟩ =1/2( y_k^β-x_k^2- x_k+1-y_k^β^2- x_k-x_k+1^2). Combining (<ref>), (<ref>) and (<ref>), we find that f(x_k+1) -f(x_k) ⩽ -(1/2α-M/2) y_k^β-x_k+1^2 - (1/2α-|β-γ|M/2) x_k+1-x_k^2 + [(1/2α+M/2)β^2+|β-γ|M/2] x_k-x_k-1^2. Let λ∈(λ^-,λ^+) where λ^- and λ^+ are defined in (<ref>). Note that λ^-<λ^+ due to the fact that α∈ (0,α̅]. 
By definition of H_λ, it readily follows that H_λ(x_k+1,x_k) -H_λ(x_k,x_k-1) ⩽ -(1/2α-M/2) y_k^β-x_k+1^2 - (1/2α-|β-γ|M/2-λ) x_k+1-x_k^2 - [λ-(1/2α+M/2)β^2-|β-γ|M/2] x_k-x_k-1^2. The desired inequality is guaranteed by taking c_1 as defined in (<ref>). Let f∈ C^1,1_ loc(ℝ^n), X⊂ℝ^n be bounded, and β,γ∈ℝ. For all α,λ>0, there exist c_2>0 such that for all K ∈ℕ, if x_-1,, x_K+1∈ X are iterates of the momentum method (<ref>), then max{∇ H_λ(z_k),∇ H_λ(z_k+1)}⩽ c_2 z_k+1-z_k, for k=0,,K where z_k:=(x_k,x_k-1)∈ℝ^2n. If M>0 is a Lipschitz constant of ∇ f on S+max{|β|,|γ|}(S-S) where S is the convex hull of X, then one may take c_2 :=√(2)max{1/α,|β|/α+M(|γ|+1)+4λ}. Using Fact <ref>, for k=0,,K we have ∇ H_λ(z_k) ⩽∇ f(x_k) + 2λ(x_k - x_k-1) + 2λ (x_k - x_k-1) ⩽∇ f(x_k) + 4λx_k - x_k-1 ⩽√(2)max{1/α,|β|/α+M|γ|+4λ}z_k+1 - z_k. Similarly, ∇ H_λ(z_k+1) ⩽∇ f(x_k+1) + 2λ(x_k+1 - x_k) + 2λ (x_k+1 - x_k) ⩽∇ f(x_k+1) + 4λx_k+1 - x_k ⩽∇ f(x_k) + ∇ f(x_k+1)-∇ f(x_k) + 4λx_k+1 - x_k ⩽∇ f(x_k) + (4λ+M) x_k+1 - x_k ⩽√(2)max{1/α,|β|/α+M|γ|+4λ+M}z_k+1 - z_k. In order to proceed, we recall two results from the literature. Given x∈ℝ^n, consider the distance of x to S defined by d(x,S) := inf{x-y : y ∈ S }. Given a set-valued mapping F:ℝ^n⇉ℝ^m and y ∈ℝ^m, let F^-1(y) := { x ∈ℝ^n : F(x) ∋ y }. Let f:ℝ^n→ℝ be locally Lipschitz and semi-algebraic. Let X be a bounded subset of ℝ^n and v ∈ℝ be a critical value of f in X. There exists ρ>0 and a strictly increasing continuous semi-algebraic function ψ:[0,ρ)→ (0,∞) which belongs to C^1((0,ρ)) with ψ(0) = 0 such that ∀ x ∈ X,     |f(x)-v| ∈ (0,ρ)    ⟹    d(0,∂ (ψ∘ |f-v|)(x)) ⩾ 1. Let f:ℝ^n→ℝ be locally Lipschitz and semi-algebraic. Let X be a bounded subset of ℝ^n and V be the set of critical values of f in X if it is non-empty, otherwise V:={0}. There exists a concave semi-algebraic diffeomorphism ψ:[0,∞)→[0,∞) such that ∀ x ∈ X ∖ (∂f̃)^-1(0),     d(0,∂ (ψ∘f̃)(x)) ⩾ 1, where f̃(x) := d(f(x),V) for all x∈ℝ^n. We say that ψ:[0,∞)→[0,∞) in Proposition <ref> is a desingularizing function of f on X. The uniform Kurdyka-Łojasiewicz inequality (<ref>) enables one to relate the length of the iterates of the gradient method in any bounded region to the function variation <cit.>. If one uses the Kurdyka-Łojasiewicz inequality (<ref>) instead, then the function values evaluated at the iterates would be restricted to a potentially small range around a critical value. If one uses the uniformized KL property <cit.>, then the iterates would need to lie in a uniform neighborhood of a compact subset of the critical points of f where f is constant. In order to prove <cit.>, one uses the fact that the objective function is a Lyapunov function for all sufficiently small step sizes in the gradient method. However, in the momentum method the Lyapunov function depends on the step size, as can be seen in Lemma <ref>. The main challenge that we thus face is to obtain an upper bound on the length of the iterates that is independent of the step size. Otherwise, it could blow up as the step size gets small. Such is the object of the following results. Proposition <ref> takes a first step by showing that the length is bounded by a constant times a desingularizing function evaluated at the Lyapunov function variation plus a constant multiple of the step size. Both constants are independent of the step size. Proposition <ref> ensures that the desingularizing function no longer depends on the step size. 
Finally, Lemma <ref> gets rid of the dependence on the step size in the argument of the desingularizing function. Proposition <ref> below generalizes <cit.> from the gradient method to the momentum method. While in the gradient method we have c_3 = 2 in (<ref>), in the momentum method obtaining an expression for c_3 that does not depend on the step size requires some care. Let ℕ^* := {1,2,3,}. We will use the following simple lemma. If h:ℝ_+→ℝ_+ is concave, h(0) = 0, and a,b⩾ 0, then |h(b)-h(a)| ⩽ h(|b-a|). We first show that h is increasing. Since h is concave, for any 0 ⩽ x < y < z, we have h(z) ⩽h(y)-h(x)/y-x(z-y)+h(y). If (h(y)-h(x))(y-x)<0, then we obtain the contradiction 0 ⩽lim sup_z→∞h(z)=-∞, establishing that h is increasing. It thus suffices to show that h(b)-h(a)⩽ h(b-a) for any 0 ⩽ a ⩽ b. Since h is concave, we have a/bh(0) + b-a/bh(b) ⩽ h(b-a)    and   b-a/bh(0) + a/bh(b) ⩽ h(a). Summing these two inequalities and using the fact that h(0)=0 yields the desired result. Let f∈ C^1,1_ loc(ℝ^n) be semi-algebraic, X⊂ℝ^n be bounded, β∈ (-1,1), γ∈ℝ, δ⩾ 0, and m ∈ℕ^* be an upper bound on the number of critical values of f in X. There exist α̅,c_3,ζ>0 such that for all α∈(0,α̅], there exists λ>0 such that, for any desingularizing function ψ_λ of H_λ on X × X and for all K∈ℕ, if (x_k)_k∈{-1}∪ℕ are iterates of the momentum method (<ref>) for which x_-1,,x_K∈ X and x_0-x_-1⩽δα, then 1/2m∑_k=0^Kz_k+1-z_k ⩽  c_3ψ_λ(H_λ(z_0)-H_λ(z_K)/2m)+ζα and H_λ(z_0)⩾⋯⩾ H_λ(z_K) where z_k:=(x_k,x_k-1)∈ℝ^2n. If L>0 and M>0 are Lipschitz constants of β̅f and ∇ f respectively on S+max{|β|,|γ|}(S-S) where S is the convex hull of X and β̅:= (1-β)^-1, then one may take the same α̅ as in (<ref>), c_3 := 8√(2)(2+|γ|+3|β|)/1-β^2, ζ:=2√(2)(δ+L), and λ := β^2+1+Mβ^2α/4α. Consider α̅ and c_3 as defined in (<ref>) and (<ref>) respectively. Given α∈(0,α̅], let λ∈ (λ^-,λ^+) where λ^- and λ^+ are defined in (<ref>). Let ψ_λ be a desingularizing function of H_λ on X× X. By Lemma <ref> and <ref>, for k=0,…,K-1 we have H_λ(z_k+1) - H_λ(z_k) ⩽ - c_1z_k+1 - z_k^2 ⩽ -c_1/c_2∇ H_λ(z_k)z_k+1 - z_k and H_λ(z_k+1) - H_λ(z_k) ⩽ - c_1z_k+1 - z_k^2 ⩽ -c_1/c_2∇ H_λ(z_k+1)z_k+1 - z_k. Since ∇ H_λ(x,y) = (∇ f(x)+2λ(x-y) , 2λ(y-x))^⊤, the critical values of f in X are the same as those of H_λ in X× X. We let V denote this set of critical values if they exist, otherwise V:={0}. Assume that [H_λ(z_K),H_λ(z_0)) excludes the elements of V and the averages of any two consecutive elements of V.[The point of excluding elements in V and the averages of two consecutive elements in V is to guarantee that there is a unique closest element in V that works for all H_λ(z_K),…,H_λ(z_0) and this element is either greater than or equal to all of them or less than all of them.] If H_λ(z_1)=H_λ(z_0), then z_1 = z_0 by (<ref>). Thus ∇ f(x_0) = 0 and z_K = … = z_0 by induction. Otherwise, we have that H_λ(z_1)<H_λ(z_0). With H_λ := d(H_λ,V), we thus have 0 ∉∂H_λ(z_k) and 1 ⩽∇(ψ_λ∘H_λ)(z_k) = ψ_λ'(H_λ(z_k)) ∇H_λ(z_k) for k=1,,K by the uniform Kurdyka-Łojasiewicz inequality (<ref>). Let k ∈{1,… K-1}. If H_λ (z_k) ⩾H_λ(z_k+1), multiplying (<ref>) by ψ_λ'(H_λ(z_k)) and using concavity of ψ_λ, we find that z_k+1-z_k ⩽c_2/c_1ψ_λ'(H_λ(z_k))(H_λ(z_k) - H_λ(z_k+1)) = c_2/c_1ψ_λ'(H_λ(z_k))(H_λ(z_k) - H_λ(z_k+1)) ⩽c_2/c_1( ψ_λ (H_λ(z_k)) - ψ_λ (H_λ(z_k+1))). 
If H_λ (z_k) ⩽H_λ(z_k+1), multiplying (<ref>) by ψ_λ'(H_λ(z_k+1)) and using concavity of ψ_λ, we find that z_k+1-z_k ⩽c_2/c_1ψ_λ'(H_λ(z_k+1))(H_λ(z_k) - H_λ(z_k+1)) = c_2/c_1ψ_λ'(H_λ(z_k+1))(H_λ(z_k+1) - H_λ(z_k)) ⩽c_2/c_1( ψ_λ (H_λ(z_k+1)) - ψ_λ (H_λ(z_k))). As a result, z_k+1-z_k⩽c_2/c_1 |ψ_λ (H_λ(z_k)) - ψ_λ (H_λ(z_k+1))|,     k = 1,,K-1. We obtain the telescoping sum ∑_k=0^K z_k+1-z_k ⩽z_1 - z_0 +∑_k=1^K-1c_2/c_1| ψ_λ(H_λ(z_k)) - ψ_λ(H_λ(z_k+1)) | + z_K+1 - z_K = z_1 - z_0 +c_2/c_1|ψ_λ(H_λ(z_0)) - ψ_λ(H_λ(z_K))| + z_K+1-z_K ⩽√(2)(δ+L)α + c_2/c_1(ψ_λ( | H_λ(z_0) - H_λ(z_K) |) - ψ_λ(0))+ √(2)(δ+L)α = c_2/c_1ψ_λ( H_λ(z_0) - H_λ(z_K))+ζα where ζ is defined in (<ref>). Above, (<ref>) and (<ref>) are due to the monotonicity of H_λ(z_0), … , H_λ(z_K). We use Lemma <ref> and Fact <ref> to obtain (<ref>). We next consider the general case where [H_λ(z_K),H_λ(z_K_p+1)) ∪∪ [H_λ(z_K_2),H_λ(z_K_1+1)) ∪ [H_λ(z_K_1),H_λ(z_0)) excludes the elements of V and the averages of any two consecutive elements of V. For notational convenience, let K_0 := -1 and K_p+1 := K. Since p ⩽ 2m-1, we have ∑_k=0^K z_k+1-z_k = ∑_i=0^p ∑_k=K_i+1^K_i+1z_k+1-z_k ⩽∑_j=0^p (c_2/c_1ψ_λ( H_λ(z_K_i+1) - H_λ(z_K_i+1) + ζα) ⩽c_2/c_1∑_i=0^p ψ_λ(H_λ(z_K_i+1) - H_λ(z_K_i+1)) + (p+1)ζα ⩽c_2/c_1(p+1)  ψ_λ( 1/p+1∑_i=0^p (H_λ(z_K_i+1) - H_λ(z_K_i+1)) ) + (p+1)ζα ⩽c_2/c_1(p+1)  ψ_λ( H_λ(z_0)-H_λ(z_K)/p+1) + (p+1)ζα ⩽c_2/c_12m  ψ_λ( L(z_0)-L(z_K)/2m) + 2mζα. Indeed, (<ref>) follows from Jensen's inequality and (<ref>) follows from the fact that s↦ sψ_λ(a/s) is increasing over (0,∞) for any constant a>0. Substituting c_1 and c_2 using (<ref>), (<ref>), and (<ref>), we find that c_2/c_1 = √(2)max{1/α,|β|/α+M(|γ|+1)+4λ}/min{λ-(1/2α+M/2)β^2-|β-γ|M/2, 1/2α-|β-γ|M/2-λ} = 2√(2)max{1,|β|+M(|γ|+1)α+4λα}/min{2λα-(1+α M)β^2-|β-γ|Mα,1-|β-γ|Mα-2λα}. If we take λ to be the midpoint of (λ^-,λ^+), i.e., λ=(β^2+1+Mβ^2α)/(4α), then this simplifies to c_2/c_1 = 4√(2)(|β|+M(|γ|+1)α+β^2+1+Mβ^2α)/1-β^2-(β^2+2|β-γ|)Mα. Notice that c_2/c_1 is a increasing function of α over (0,α̅], where we recall that α̅=min{1/M,(1-β^2)/(2(β^2+2|β-γ|)M)}. As a result, c_2/c_1 ⩽4√(2)(|β|+M(|γ|+1)α̅+β^2+1+Mβ^2α̅)/1-β^2-(β^2+2|β-γ|)Mα̅ ⩽4√(2)(|β|+M(|γ|+1)1/M+β^2+1+Mβ^21/M)/1-β^2-(β^2+2|β-γ|)M1-β^2/2(β^2+2|β-γ|)M = 4√(2)(|β|+|γ|+1+β^2+1+β^2)/(1-β^2)/2 ⩽8√(2)(2+|γ|+3|β|)/1-β^2=:c_3 > 0. If the objective function satisfies the Łojasiewicz gradient inequality, then the Lyapunov function also satisfies it according to <cit.>. Proposition <ref> below generalizes <cit.> from functions satisfying the Łojasiewicz gradient inequality to functions satisfying the uniform Kurdyka-Łojasiewicz inequality. We show that a suitable choice of desingularizing function for the objective is a common desingularizing function for the Lyapunov functions for all sufficiently large parameters. Let f:ℝ^n→ℝ be a locally Lipschitz semi-algebraic function and X⊂ℝ^n be bounded. The family of functions (H_λ)_λ⩾ 1/4 admits a common desingularizing function on X× X. By Proposition <ref>, there exists a desingularizing function ψ of f on X. Without loss of generality, we may assume that ψ'(t) ⩾ 1/√(t) for all t>0, after possibly replacing ψ by t↦∫_0^t max{ψ'(s),1/√(s)}ds, which is semi-algebraic[To see why, note that {s >0 : ψ'(s)⩾ 1/√(s)} is semi-algebraic and hence a finite union of open intervals and points. Thus the integral is equal to ψ up to a constant on finitely many intervals of ℝ_+, and t↦ 2√(t) up to a constant otherwise. The graph of such a function is hence semi-algebraic.] 
and concave since the integrand is decreasing. We may also multiply ψ by 1/min{1,c}⩾ 1 where c:= inf{ψ'((v_2-v_1)/2)θ((v_1+v_2)/2): v_1,v_2∈ V,   v_1<v_2,   (v_1,v_2)∩ V = ∅} >0, θ(v):= inf{ d(0,∂ f(x)) : x∈ X, f(x) = v } for all v ∈ℝ, and V is the set of critical values of f in X if it is non-empty, otherwise V:={0}. Note that c>0 as it is the infimum of finitely many positive real numbers. Indeed, ψ'(t)>0 for all t>0 and θ(v)>0 for all v ∉V. To see why the latter statement holds, assume the contrary that θ(v) = 0 for some v ∉V. Then there exists (x_k,s_k)_k∈ℕ⊂ X ×ℝ^n such that f(x_k) = v, s_k ∈∂ f(x_k), and s_k → 0. As X is bounded, (x_k)_k∈ℕ admits a limit point x̅. We have that f(x̅) = v by continuity of f and 0 ∈∂ f(x̅) by <cit.>. Thus v ∈ V and a contradiction occurs. By <cit.> we have ∂ H_λ(x,y) = (∂ f(x) + {2λ(x-y)})×{2λ (y - x)}, so that 0∈∂ H_λ(x,y) if and only if 0 ∈∂ f(x) and x=y. Therefore, the set of critical values of f in X and the set of critical values of H_λ in X× X coincide. Accordingly, let f̃:=d(f,V) and H_λ:=d(H_λ,V). Now fix λ⩾ 1/4. For all x,y∈ X such that 0 ∉∂H_λ(x,y), we have d(0,∂H_λ (x,y)) = d(0,∂ H_λ(x,y)) = d(0,(∂ f(x) + {2λ(x-y)})×{2λ (y - x)}) = √(d(0, ∂ f(x) + 2λ(x-y))^2 + 2λ(x-y)^2) ⩾√(η_1 d(0,∂ f(x))^2 - η_22λ(x-y)^2 + 2λ(x-y)^2) = √(η_1 d(0,∂ f(x))^2 + (1-η_2)4λ^2x-y^2) ⩾√(η_1 d(0,∂ f(x))^2 + (1-η_2)λx-y^2) ⩾√(η_1/ψ'(f̃(x))^2 + 1-η_2/ψ'(λx-y^2)^2) ⩾√(min{η_1,1-η_2}/ψ'(max{f̃(x),λx-y^2})^2) ⩾√(min{η_1,1-η_2}/ψ'(f̃(x)+λx-y^2/2)^2) ⩾√(min{η_1,1-η_2})/ψ'(H_λ(x,y)/2). Above, (<ref>) holds because 0 ∉∂H_λ(x,y) and thus H_λ(x',y') - H_λ(x,y) = ± (H_λ(x',y') - H_λ(x,y)) for all (x',y') in neighborhood of (x,y) where the sign is constant. (<ref>) is due to (<ref>). (<ref>) holds because the distance function is defined using the Euclidean norm. The existence of the constants η_1>0, η_2∈ (0,1) in (<ref>) are guaranteed by <cit.>. (<ref>) comes from a factorization. (<ref>) is due to the fact that λ⩾ 1/4. (<ref>) is due to the uniform Kurdyka-Łojasiewicz inequality (<ref>) and the fact that ψ'(t) ⩾ 1/√(t) for all t>0. Indeed, if 0 ∉∂f̃(x), then d(0,∂ f(x)) = d(0,∂f̃(x)) ⩾ 1/ψ'(f̃(x)) by <cit.>. If 0 ∈∂f̃(x), then f̃(x) = 0 or f(x) = (v_1+v_2)/2 for some v_1,v_2 ∈ V such that v_1< v_2 and (v_1,v_2)∩ V = ∅. In the former case, d(0,∂ f(x)) ⩾ 1/ψ'(f̃(x)) = 1/ψ'(0) = 1/∞ = 0 where ψ'(0):=lim_a ↘ 0ψ'(a). In the latter case, we have f̃(x) = (v_2-v_1)/2 and thus d(0,∂ f(x)) ⩾θ((v_1+v_2)/2)⩾ 1/ψ'((v_2-v_1)/2) = 1/ψ'(f̃(x)) by (<ref>). (<ref>) and (<ref>) hold because ψ is concave and thus ψ' is decreasing. (<ref>) is due to the fact that 0<H_λ(z) = d(f(x)+λx-y^2,V) ⩽ d(f(x),V)+λx-y^2 = f̃(x)+λx-y^2. We conclude that t ∈ [0,∞) → 2ψ(t/2)/√(min{η_1,1-η_2}) is a desingularizing function of H_λ on X × X for all λ⩾ 1/4, which is actually also a desingularizing function of f on X. Thanks to Proposition <ref> and Proposition <ref>, we are now ready to prove the length formula. Let m∈ℕ^* be an upper bound of the number of critical values of f in X. We apply Proposition <ref> to the set X and let α̅∈ (0,1], c_3>1, and ζ>0 be given by the proposition. Let ψ̅ be a common desingularizing function of (H_λ)_λ⩾ 1/4 on X × X given by Proposition <ref>. Let α∈ (0,α̅] and let λ := (β^2+1+Mβ^2α)/(4α)⩾ 1/4 as defined in (<ref>). Since c_3>1, ψ(t):=2c_3mψ̅(t/(2m)) is also a desingularizing function of H_λ on X× X. Let κ := 2m ζ and η: = 2mδ^2(β^2+1+Mβ^2)/4 where M>0 is a Lipschitz constant of ∇ f on S+max{|β|,|γ|}(S-S) and S is the convex hull of X. 
It follows from (<ref>) that ∑_k=0^Kx_k+1-x_k ⩽ψ(H_λ(x_0,x_-1)-H_λ(x_K,x_K-1)) + κα ⩽ψ(f(x_0)-f(x_K) + λx_0-x_-1^2) + κα ⩽ψ(f(x_0)-f(x_K) + λδ^2α^2) + κα ⩽ψ(f(x_0)-f(x_K) + ηα)+ κα. § CONCLUSION This work departs from the commonly accepted assumptions in the literature, and hence guarantees convergence of the momentum method for a larger class of functions which includes important problems in data science. Future directions include extending the analysis to nonsmooth and stochastic settings. § ACKNOWLEDGMENTS We thank the reviewers and the associate editor for their valuable feedback. § PROOF OF FACT <REF> By definition of y_k^β and y_k^γ in (<ref>), for k=0,,K, we have ∇ f(x_k) ⩽∇ f(y_k^γ) + ∇ f(x_k) - ∇ f(y_k^γ) ⩽x_k+1-y_k^β/α + M|γ| x_k - x_k-1 ⩽x_k+1 - x_k/α + (|β|/α +M|γ|) x_k - x_k-1 ⩽√(2)max{1/α,|β|/α+M|γ|}z_k+1 - z_k, § PROOF OF LEMMA <REF> The matrix A: = [ 1+β -β; 1 0 ]⊗ I_n is diagonalizable as the Kronecker product of two such matrices <cit.>. Thus there exist an invertible matrix P ∈ℝ^2n× 2n and a diagonal matrix D∈ℝ^2n× 2n such that A = PDP^-1. Let x_P := P^-1x. Then for any X∈ℝ^2n× 2n, we have X_P : = sup{Xv_P : v_P ⩽ 1} = sup{P^-1XPv : v⩽ 1} = P^-1XP. By equivalence of norms, there exist c_1,c_2,c_3>0 such that c_1x⩽x_P ⩽ c_2x and X_P ⩽ c_3X. Similar to <cit.>, let β̅:=1/(1-β). Also, let L>0 and M>0 respectively denote Lipschitz constants of β̅f and β̅∇ f with respect to the Euclidean norm · on S+γ(S-S) where S denotes the convex hull of B(X_0,σ_T(X_0)+δ+1):= X_0 + B(0,σ_T(X_0)+δ+1) and σ_T(X_0) := sup_x ∈ C^1(ℝ_+,ℝ^n)  ∫_0^T x'(t)dt   subject to   {[ x'(t) = -β̅∇ f(x(t)),  ∀ t > 0,; x(0) ∈ X_0. ]. Let c_4 := ML(1/2+|β|/2+|γ|-β|γ|) and c_5 := c_2 M√(1+ 2γ + 2γ^2). Without loss of generality, we may assume that X_0 ≠∅. The feasible set of (<ref>) is thus non-empty (i.e., σ_T(X_0)>-∞) because f is lower bounded and belongs to C^1,1_loc(ℝ^n) <cit.>. Notice that we also have σ_T(X_0)<∞. Indeed, by the Cauchy-Schwarz inequality any feasible point x(·) of (<ref>) satisfies ∫_0^T x'(t)dt ⩽√(T)√(∫_0^T x'(t)^2 dt) = √(T)√(∫_0^T ⟨ - β̅∇ f(x(t)),x'(t)⟩ dt) = √(T)√(β̅f(x(0)) - β̅f(x(T))) ⩽√(Tβ̅(sup_X_0 f - inf_ℝ^n f))<∞. It is easy to check that L and ML are respectively Lipschitz and gradient Lipschitz constants on [0,T] of any feasible point x(·) of (<ref>). Indeed, let x(·) be a feasible point of (<ref>). Since x(t) ∈ B(X_0,σ_T(X_0)) for all t∈ [0,T], we have x'(t) = β̅∇ f(x(t))⩽ L. By the mean value theorem, for all s,t ∈ [0,T] we have x'(t)-x'(s) = β̅∇ f(x(t)) - β̅∇ f(x(s))⩽ M x(t)-x(s)⩽ M L | t-s |. As a byproduct, we get the Taylor bound x(t)-x(s) - x'(s)(t-s)⩽ML/2(t-s)^2. Let ϵ∈ (0,δ+1] and α̅ := min{1,ϵ c_1c_2^-1[e^c_5 T (|β|δ + 2L-Lβ +c_4c_5^-1) -c_4c_5^-1)]^-1}>0. Let x_-1,x_0, x_1, …∈ℝ^n be a sequence generated by the gradient method with momentum and step size α∈ (0, α̅] for which x_0∈ X_0 and x_0-x_-1⩽δα. Let x(·) be a feasible point of (<ref>) such that x(0) = x_0. Similar to the proof of <cit.>, let x̅_k := x(kα) for all k ∈ℕ. We next reason by induction. We have x_0-x̅_0 = 0 ⩽ϵ. Assume that x_k-x̅_k⩽ϵ for k=0,,K for some index K∈ℕ. For k=1,,K, we have x̅_k+1 - x̅_k + αβ̅∇ f(x̅_k) ⩽ MLα^2/2, x̅_k-1 - x̅_k - αβ̅∇ f(x̅_k) ⩽ MLα^2/2. Multiplying (<ref>) by |β| and adding it to (<ref>) yields x̅_k+1 - x̅_k - β(x̅_k-x̅_k-1) + α∇ f(x̅_k) ⩽ ML(1+|β|)α^2/2, where we use the fact that β̅ - ββ̅ = 1. We also have ∇ f(x̅_k+γ(x̅_k-x̅_k-1)) - ∇ f(x̅_k) ⩽ M(1-β)|γ| x̅_k-x̅_k-1 ⩽ ML(1-β)|γ| α. 
Hence by combining the above two inequalities, we have x̅_k+1 -x̅_k - β(x̅_k-x̅_k-1) + α∇ f(x̅_k+γ(x̅_k-x̅_k-1))⩽ c_4α^2 where c_4 = ML(1/2+|β|/2+|γ|-β|γ|). Let e_k = x_k - x̅_k. We have e_k+1 -e_k - β(e_k-e_k-1) + α [∇ f(x_k+γ(x_k-x_k-1)) - ∇ f(x̅_k+γ(x̅_k-x̅_k-1)) ]⩽ c_4α^2 by using the update rule of momemtum method (<ref>). Thus e_k+1 -e_k - β(e_k-e_k-1) + α M_k (e_k+γ(e_k-e_k-1))⩽ c_4α^2 where M_k is the linear application such that M_k(a_k-b_k) := ∇ f(a_k) - ∇ f(b_k), M_kx := 0 for all x ∈span(a_k-b_k)^⊥, a_k := x_k+γ(x_k-x_k-1), b_k := x̅_k+γ(x̅_k-x̅_k-1) if a_k ≠ b_k, otherwise M_k:=0. Let v_k = (e_k,e_k-1)∈ℝ^2n. We have v_k+1 - Av_k + α B_kv_k ⩽ c_4α^2 where B_k := [ 1+γ -γ; 0 0 ]⊗ M_k. We also have B_k = [ 1+γ -γ; 0 0 ] M_k ⩽ M√(1+ 2γ + 2γ^2) since x̅_k+γ(x̅_k-x̅_k-1) and x_k+γ(x_k-x_k-1) belong to S+γ(S-S). The latter inclusion follows from the induction hypothesis and the fact that ϵ⩽δ+1. Hence v_k+1 - Av_k + α B_kv_k _P ⩽ c_2 c_4α^2 and thus v_k+1_P ⩽ (A_P + αB_k_P)v_k_P + c_2c_4α^2 ⩽ (A_P + α c_3B_k)v_k_P + c_2c_4α^2 ⩽ (1+c_5 α)v_k_P + c_2c_4α^2 where c_5 = c_3 M√(1+ 2γ + 2γ^2). By induction, we find that x_k+1-x̅_k+1 = e_k+1⩽v_k+1⩽ c_1^-1v_k+1_P ⩽ c_1^-1(1+c_5 α)^k v_1_P + c_1^-1c_2c_4α^2 ∑_i=0^k-1 (1+c_5 α)^i = c_1^-1c_2(1+c_5 α)^k v_1 + c_1^-1c_2c_4α^2 (1+c_5 α)^k-1/c_5α ⩽ c_1^-1c_2e^c_5 kαe_1 + c_1^-1c_2c_4c_5^-1(e^c_5 kα-1)α ⩽ c_1^-1c_2e^c_5 T (x_1-x_0+x_0-x̅_0 + x̅_0-x̅_1) + c_1^-1c_2c_4c_5^-1(e^c_5 T-1)α ⩽ c_1^-1c_2e^c_5 T (β(x_0-x_-1) - α∇ f(x_0 + γ(x_0-x_-1)) +Lα) + c_1^-1c_2c_4c_5^-1(e^c_5 T-1)α ⩽ c_1^-1c_2e^c_5 T (|β| δα + L(1-β) α +Lα) + c_1^-1c_2c_4c_5^-1(e^c_5 T-1)α ⩽ c_1^-1c_2[e^c_5 T (|β|δ + 2L-Lβ +c_4c_5^-1) -c_4c_5^-1]α̅ = ϵ since α̅ = ϵ c_1c_2^-1[e^c_5 T (|β|δ + 2L-Lβ +c_4c_5^-1) -c_4c_5^-1)]^-1. § PROOF OF FACT <REF> For k = -1,…,K-1, we have x_k+1 - x_k = β (x_k - x_k-1) - α∇ f (y_k^γ) = β^2 (x_k-1 - x_k-2) - α(β∇ f (y_k-1^γ)+∇ f (y_k^γ))  ⋮ = β^k+1(x_0-x_-1) - α∑_i=0^kβ^i ∇ f(y_k-i^γ) where y_k^γ := x_k + γ(x_k-x_k-1). Let L be a Lipschitz constant of β̅f on S+γ(S-S) where S is the convex hull of X. Since x_0-x_-1⩽δ_0α, we have x_k+1 - x_k⩽ |β|^k+1x_0-x_-1+α Lβ̅^-1∑_i=0^k|β|^i ⩽ (δ_0|β|^k+1+L)α. Given that β∈ (-1,1), it suffices to take δ := δ_0+L. In addition, z_k-z_k-1 = (x_k-x_k-1^2+x_k-1-x_k-2^2)^1/2⩽√(2)δα for k=1,…,K. § PROOF OF THEOREM <REF> It is known that if F∈ C^1(ℝ^n,ℝ^n), X is an open subset of ℝ^n such that rank (F'(x)) = n for all x∈ X, and F(X)⊂ X, then for almost every x ∈ X, F^k(x) does not converge as k→∞ to any fixed point of F in X whose spectral radius is greater than one <cit.>. The sequence (F^k)_k∈ℕ is defined by F^k+1 := F ∘ F^k for all k∈ℕ where F^0:ℝ^n→ℝ^n is the identity. In order to prove Proposition <ref> below, and ultimately Theorem <ref>, we relax the assumption that F(X)⊂ X and instead only require that x_k ∈ X for all k∈ℕ. Below, we let μ(·) and ρ(·) denote the Lebesgue measure and the spectral radius respectively. If F∈ C^1(ℝ^n,ℝ^n) and X is an open subset of ℝ^n such that rank(F'(x)) = n for all x∈ X, then μ({x∈ℝ^n: ∀ k ∈ℕ,  F^k(x) ∈ X, lim_k→∞F^k(x)∈ Y})=0, where Y := { x ∈ X: F(x) = x,  ρ(F'(x)) > 1}. Since F∈ C^1(ℝ^n,ℝ^n) and rank(F'(x)) = n for all x∈ X, by the inverse function theorem <cit.> F is a local diffeomorphism over X. By the center-stable manifold theorem <cit.>, for all y∈ Y, there exists an open neighborhood B_y of y such that its associated local center stable manifold W^sc_ loc(y):= {x∈ℝ^n: ∀ k ∈ℕ, F^k(x) ∈ B_y} has Lebesgue measure zero. 
Since {B_y:y∈ Y} is an open cover of Y, by Lindelöf’s lemma <cit.> there exists {y_i}_i∈ℕ⊂ Y such that Y ⊂∪_i=0^∞ B_y_i. We seek to show that the set W:={x∈ℝ^n: ∀ k ∈ℕ,  F^k(x) ∈ X, lim_k→∞F^k(x)∈ Y} has Lebesgue measure zero. In order to do so, we consider the sequence V_0,V_1,V_2, : Y ⇉ X defined by V_0(·):=W^sc_ loc(·)∩ X and V_k+1:=(F_|X)^-1∘ V_k for all k∈ℕ where F_|X denotes the restriction of F to X. We will show that W ⊂⋃_i=0^∞⋃_k=0^∞ V_k(y_i). It is then easy to show by induction that μ(V_k(y_i)) = 0 for all k,i∈ℕ. Indeed, on the one hand μ(V_0(y_i))⩽μ(W^sc_ loc(y_i))=0. On the other hand, if μ(V_k(y_i))=0, then by <cit.> μ(V_k+1(y_i))=μ((F_|X)^-1(V_k(y_i)))=0 since rank(F'(x)) = n for all x∈ X. We conclude that μ(W)⩽∑_i=0^∞∑_k=0^∞μ(V_k(y_i))=0. Let x∈ W and y:=lim_k→∞F^k(x). Since y ∈ Y ⊂∪_i=0^∞ B_y_i, there exists j∈ℕ such that y∈ B_y_j. Since B_y_j is open, there exists K∈ℕ such that F^k(x)∈ B_y_j for all k⩾ K, or equivalently, F^k(F^K(x))∈ B_y_j for all k∈ℕ. Thus F^K(x)∈ W^sc_ loc(y_j) and in fact F^K(x)∈ W^sc_ loc(y_j)∩ X = V_0(y_j). Since x∈ W and F^K(x)∈ V_0(y_j), we have F^K-1(x)∈ F^-1(F^K(x)) ∩ X= (F_|X)^-1(F^K(x))⊂ (F_|X)^-1(V_0(y_j))=V_1(y_j). By induction, it follows that x∈ V_K(y_j)⊂∪_i=0^∞∪_k=0^∞ V_k(y_i). Given an objective function f∈ C^2(ℝ^n) with an L-Lipschitz continuous gradient, the momentum method (<ref>) does not converge to any critical point whose Hessian has a negative eigenvalue for almost every initial point if α∈(0,2(1-β)/L), β∈(0,1) and γ=0 <cit.>, or α∈(0,4/L), β∈(max{0,-1+α L/2},1) and γ=0 <cit.>. In order to prove Theorem <ref>, we enlarge the set of allowable momentum parameters. Below, we let λ_min(·) denote the minimal real eigenvalue of a matrix. Let f∈ C^2(ℝ^n), X ⊂ℝ^n be bounded, β∈(-1,1)∖{0}, and γ∈ℝ. There exists α̅>0 such that for all α∈(0,α̅] and for almost every (x_-1,x_0)∈ℝ^2n, the limit of any convergent sequence x_-1,x_0,x_1,∈ X generated by the momentum method (<ref>) does not belong to C^-:= { x ∈ℝ^n: ∇ f(x) = 0, λ_min(∇^2 f(x)) < 0 }. Since X is bounded, there exists an open bounded set X such that X⊂X. Let M:=sup{ρ(∇^2f(x)): x∈X+γ(X-X)}<∞, α̅:= |β|/(1+|γ| M), and α∈ (0,α̅]. Any sequence x_-1,x_0,x_1,∈ℝ^n generated by the momentum method (<ref>) follows the update rule z_k+1=F(z_k) for all k∈ℕ where z_k:=(x_k,x_k-1) and F:ℝ^2n→ℝ^2n is defined by F(x,y) := [ x +β(x-y) - α∇ f(x+γ(x-y)); x ] for all (x,y)∈ℝ^n ×ℝ^n. In order to prove the desired result, we claim that it suffices to check the two following facts: * rank(F'(z)) = 2n for all z∈X×X, * {(x,x)∈ℝ^2n:x ∈ C^-}⊂ Z, where Z := {z∈ℝ^2n : F(z) = z,  ρ(F'(z)) > 1}. Indeed, by applying Lemma <ref> to F∈ C^1(ℝ^2n,ℝ^2n) and the open subset X×X of ℝ^2n, it follows that for almost every (x_0,x_-1) ∈ℝ^2n, the limit of any convergent sequence (x_0,x_-1), (x_1,x_0), …∈ X× X such that F(x_k,x_k-1)=F(x_k+1,x_k) for all k ∈ℕ does not belong to Y where Y := Z∩ (X×X) ⊃{(x,x)∈ℝ^2n:x ∈ C^-}∩ (X×X) = {(x,x)∈ℝ^2n:x ∈ C^-∩X}. In particular, for almost every (x_0,x_-1) ∈ℝ^2n, the limit of any convergent sequence x_-1,x_0,x_1, …∈ X generated by the momentum method (<ref>) does not belong to C^-∩X. Since X⊂X, such a limit must belong to X, and thus does not belong to C^-. We next prove the two facts above. First, for any (x,y) ∈X×X, we have F'(x,y) = [ (1+β)I_n-α(1+γ)∇^2f(x+γ(x-y)) -β I_n +αγ∇^2f(x+γ(x-y)); I_n 0 ] where I_n ∈ℝ^n× n is the identity matrix. Since |β| ⩾α(1+|γ|M) > α|γ| M, by <cit.> for all (x,y) ∈X×X we have det(F'(x,y)) = det(β I_n -αγ∇^2f(x+γ(x-y))) ≠ 0 and thus rank(F'(x,y)) = 2n. 
Second, let x ∈ C^- and z := (x,x). We seek to show that z ∈ Z. Since ∇ f(x) = 0, we have F(z) = F(x,x) = (x,x) = z. Since f ∈ C^2(ℝ^n), ∇^2f(x) is symmetric and therefore admits an eigendecomposition ∇^2f(x)=PDP^⊤ where D=diag(d_1,…,d_n) and P is an orthogonal matrix. Again by <cit.>, we have det(λ I_2n-F'(x,x)) = det([λ^2-(1+β)λ+β]I_n - [αγ-λα(1+γ)]∇^2f(x)) = det([λ^2-(1+β)λ+β]I_n - [αγ-λα(1+γ)]PDP^⊤) = det([λ^2-(1+β)λ+β]I_n - [αγ-λα(1+γ)]D) = ∏_i=1^n ([λ^2-(1+β)λ+β] - [αγ-λα(1+γ)]d_i) = ∏_i=1^n φ_i(λ), where φ_i(λ) := λ^2+[α(1+γ)d_i-(1+β)]λ+β-αγ d_i. Since x∈ C^-, there exists j∈{1,…,n} such that d_j<0. Since φ_j is a quadratic function whose leading coefficient is positive and φ_j(1) = α d_j < 0, φ_j has a root that is greater than 1. Thus ρ(F'(z))>1. We are now ready to prove Theorem <ref>. Let f∈ C^2(ℝ^n) be a semi-algebraic function with bounded continuous gradient trajectories. Let β∈ (-1,1) ∖{0}, γ∈ℝ, and δ⩾ 0. Assume that the Hessian of f has a negative eigenvalue at all critical points of f that are not local minimizers. Let X_0 be a bounded subset of ℝ^n. By Theorem <ref>, there exist α̅,c>0 such that for all α∈ (0,α̅], there exists c_α>0 such that any sequence x_-1,x_0,x_1,…∈ℝ^n generated by the momentum method (<ref>) that satisfies x_0 ∈ X_0 and ‖x_0-x_-1‖⩽δα obeys ∑_i=0^∞‖x_i+1-x_i‖⩽ c    and   min_i=0,…,k‖∇ f(x_i)‖⩽ c_α/(k+1),  ∀ k∈ℕ. It hence converges to a critical point of f and belongs to the bounded set B(X_0,c). By Proposition <ref>, after possibly reducing α̅>0, for all α∈(0,α̅] and for almost every (x_-1,x_0)∈ℝ^2n, the limit of any convergent sequence x_-1,x_0,x_1,…∈ B(X_0,c) generated by the momentum method (<ref>) is not a critical point of f at which the Hessian admits a negative eigenvalue. We conclude that for all α∈ (0,α̅] and for almost every (x_-1,x_0) ∈ℝ^n × X_0, any sequence x_-1,x_0,x_1,…∈ℝ^n generated by the momentum method (<ref>) that satisfies ‖x_0-x_-1‖⩽δα converges to a local minimizer of f.
http://arxiv.org/abs/2307.00384v1
20230701165218
CasTGAN: Cascaded Generative Adversarial Network for Realistic Tabular Data Synthesis
[ "Abdallah Alshantti", "Damiano Varagnolo", "Adil Rasheed", "Aria Rahmati", "Frank Westad" ]
cs.LG
[ "cs.LG", "cs.AI" ]
CasTGAN: Cascaded Generative Adversarial Network for Realistic Tabular Data Synthesis August 1, 2023 ==================================== Generative adversarial networks (GANs) have drawn considerable attention in recent years for their proven capability in generating synthetic data which can be utilized for multiple purposes. While GANs have demonstrated tremendous successes in producing synthetic data samples that replicate the dynamics of the original datasets, the validity of the synthetic data and the underlying privacy concerns represent major challenges which are not sufficiently addressed. In this work, we design a cascaded tabular GAN framework (CasTGAN) for generating realistic tabular data with a specific focus on the validity of the output. In this context, validity refers to the dependency between features that can be found in the real data, but is typically misrepresented by traditional generative models. Our key idea is that by employing a cascaded architecture in which a dedicated generator samples each feature, the synthetic output becomes more representative of the real data. Our experimental results demonstrate that our model well captures the constraints and the correlations between the features of the real data, especially on the high dimensional datasets. Furthermore, we evaluate the risk of white-box privacy attacks on our model and subsequently show that applying some perturbations to the auxiliary learners in CasTGAN increases the overall robustness of our model against targeted attacks. § INTRODUCTION Facilitating information and knowledge sharing within and between organisations is increasingly sought after for attaining growth and development. From a healthcare and medical standpoint, information exchange subsequently contributes to better understanding of diseases and risk factors, more intuitive prognosis by practitioners and effective treatment planning based on previously obtained knowledge. In the financial sector, sharing information between stakeholders leads to improved prediction of corporate bankruptcy and quicker identification of suspicious transaction behaviour that can be potentially linked to organised financial crime. For both fields, sharing the data which contains sensitive patient and client information is subject to GDPR regulations to maintain the confidentiality and privacy of such information. Therefore, institutions are continuously seeking new data anonymisation and synthetic data generation techniques for exchanging domain knowledge without exposing sensitive information. Ever since their development, generative adversarial networks (GANs) <cit.> have been increasingly studied for their ability to approximate and model complex data distributions. Despite the early GAN applications being densely focused on the computer vision domain and image generation, GANs have recently been researched in other fields such as natural language processing <cit.> and time-series anomaly detection <cit.>. In addition, more properties of GANs have emerged such as conditionally generating samples based on a specific target class <cit.> and generation in conjunction with variational auto-encoders <cit.>. In contrast, GANs have been significantly less explored for tabular data generation. A tabular dataset typically comprises a mixture of continuous variables and categorical features. Tabular data is common in the medical and the financial domains where fields such as age, gender, profession, and income can be commonly found in databases containing numerous records.
As opposed to purely numerical data, representing datasets with categorical variables can be particularly difficult in presence of highly-dimensional and strongly correlated features. Furthermore, quantifying the validity of a synthetic tabular dataset can be practically impossible without closely inspecting every generated data sample and deciding whether to accept or reject each examined data record. Notwithstanding, there currently exists no straightforward and unified criteria for evaluating the validity of the output generated by tabular GANs. To this end, we introduce CasTGAN, which is a generative network framework characterised by multiple generators connected sequentially; each of which is designed to generate a single feature. Meanwhile, a single discriminator validates the output of all the generators while being trained on the output of the final generator in the cascade. In addition, each generator is chained to a corresponding auxiliary learner in order to obtain more insightful losses specific to the individually generated features. This is motivated by the fact that it has been shown that adding more auxiliary classifiers can enhance the quality of the synthetic output images <cit.>. Therefore, we posit that CasTGAN aims to capture the highly correlated and hierarchical relationship between features, such that the synthetic output produced by our model closely resembles the real data while minimising the inconsistencies in the generated data. This is particularly important for applications where data is widely shared between professionals, and the slightest irregularities in the data can lead to undesired outcomes. We can thereby summarise our contributions in this work as: * Generative architecture: A cascaded based generative framework for producing realistic tabular output which greatly emulates the original data, while significantly reducing the number of invalid synthetic samples. * Synthetic data evaluation: A new metric for quantifying the realisticness of the synthetic data when lacking the domain knowledge for the provided data, and extensively evaluate our framework and existing works. * Privacy assessment: We launch white-box privacy attacks on our model and analyse how the privacy guarantees and quality of the output are impacted when perturbing the input data during the model training. The remainder of this paper is structured as follows. In section <ref> we present an overview of GANs and the types of GAN privacy attacks, while we further examine the relevant studies in section <ref>. Section <ref> presents a discussion of CasTGAN and a detailed description of our model's structure. In section <ref>, we discuss the experimental setup used in this work and the evaluation criteria. We demonstrate our results and a thorough analysis in section <ref> and conclude in section <ref>. § BACKGROUND §.§ Generative Adversarial Networks A GAN is characterised by a generator G and a discriminator D playing an adversarial game, where each component attempts to maximise its own benefit <cit.>. The generator receives a noise input sampled from a random distribution z ∼ p_z and learns to generate an output in the distribution x ∼ p_g that matches the structure of the unseen real data x ∼ p_data. Meanwhile, the discriminator has access to the samples produced by the generator and the real data and learns to distinguish between its real and fake inputs. 
While the output generated by G improves during training as a result of the loss it obtains from the discriminator, the discriminator also becomes increasingly clever in recognising the data produced by the generator. Subsequently, GANs are particularly challenging to train since it must be guaranteed that both the generator and the discriminator maintain their competitiveness without outperforming each other early in the training phase. In the classic GANs, the generator and the discriminator attempt to maximise their objective by minimising the Jenson-Shannon Divergence (JSD), however, using JSD does not guarantee the convergence of losses, hence leading to training instability <cit.>. The Wasserstein GAN (WGAN) has been proposed as an alternative to the standard GANs in order to augment the training stability in generative models by replacing JSD with the Wasserstein Distance (WD) <cit.>. The use of the Wasserstein Distance ensures that the model is continuously learning even if the quality of the output is poor, and this is attributed to the smooth gradients produced by the Wasserstein cost function. The initial WGAN relied on weight clipping to enforce the confinement of the discriminator's weights within a specified range [-c, c], however, the authors demonstrate that clipping can lead to difficulties with model optimisation <cit.>. Instead, the use of gradient penalty with WGAN has been proposed to mitigate against the exploding and vanishing gradients of the weights <cit.>. In this setting, gradients are found using the linear interpolations x̂∼ p_x̂ between the real and the fake samples, where the distribution of linear interpolation is resembled by p_x̂. Additionally, the gradient penalty coefficient λ_GP is used as a parameter for controlling the level at which the gradient penalty affects the discriminator. The objective function for the WGAN-GP can therefore be represented as: Gmin Dmax V(D) 𝔼_x ∼ p_data[ D(x) ] - 𝔼_z ∼ p_z[ D ( G(z) ) ] - λ_GP 𝔼_x∼ p_x[ ( ‖∇_x D ( x) ‖_2 - 1 )^2] WGAN-GP is increasingly becoming more prevalent than the classic GANs in applications such as image generation and tabular data generation, as it contributes to more stable learning. In addition, WGAN-GP minimises the effect of mode collapse - that is when the generator learns to "trick" the discriminator by producing a limited number of modes which the discriminator incorrectly classifies as real samples, instead of utilising the entire data feature space. §.§ Privacy Attacks In machine learning, membership inference attacks (MIA) aim to identify whether a data sample was used in the training of a machine learning model <cit.>. For instance, the attackers might try to identify whether the records belonging a client were used for training a loan default prediction model. In this case, the attackers' objective would be to determine whether the client has taken a bank loan, with such information being used for targeted fraud attempts. Privacy guarantees in machine learning have been extensively studied in the form of analysing the connection to model overfitting <cit.> and in differential privacy <cit.>. More recently, MIA have also been explored for generative models. Privacy attacks applied on synthetic samples generated by GAN models aim to reconstruct the real data samples which were used in GAN training. 
In principle, membership attacks on GANs can be categorised into three types of attacks <cit.>: full black-box in which attackers have access to only synthetic samples, partial black-box where the attackers have access to the synthetic samples and the latent codes used to generate them, and white-box which assumes that the attackers are able to access the internal parameters of the generator, the discriminator or both. The trade-off between the quality of synthetic samples and the privacy guarantees of GANs has been additionally examined in existing works <cit.>. § RELATED WORKS Tabular data is broadly used in regression and classification tasks, which facilitates a growing interest in tabular data synthesis for machine learning applications, especially in domains with limited training data. Bayesian networks can be used for generating synthetic records by approximating the conditional probability distribution from the data <cit.>. While the Bayesian networks can in practice be additionally used for exploring causal relationships between the independent variables, estimating the distributions is often built on simplifying assumptions on the data <cit.>. Meanwhile, tree-based methods were first utilised for generating partial synthetic data in <cit.>, and have been further explored in <cit.> where adversarial random forest has demonstrated comparable performance to deep learning techniques in terms of synthetic data quality. Deep neural networks have been widely studied for synthetic data generation, thanks to their capabilities for handling and approximating the distributions of large datasets. Variational auto-encoders (VAEs) <cit.> estimate the probabilistic distribution of the data by finding a lower-dimensional latent representation. The application of VAEs has been extended to image data generation <cit.>, oversampling of anomaly event data <cit.> and tabular data synthesis <cit.>. An underlying limitation with variational autoencoders is the assumption of a simple parametric form of the latent space, which leads to a difficulty in capturing complex data distributions <cit.>. Invertible neural networks have also been proposed for tabular data synthesis through variants based on neural ordinary differential equations <cit.>, copula flows <cit.> and normalizing flows for private tabular data generation <cit.>. Within deep learning based generative models, GANs are favourable due to their ability to generate complex synthetic data in an adversarial and unsupervised setting <cit.>. table-GAN <cit.> is one of the earliest generative models for producing tabular synthetic output based on adversarial training. Using convolutional neural networks, table-GAN demonstrates that GANs unsurprisingly outperform anonymisation techniques while highlighting the potential privacy risks arising from membership attacks. Meanwhile, in medGAN <cit.>, an autoencoder based generative model is developed for generating high-dimensional medical patient records while shedding light on the privacy risks attributed to the generated data <cit.>. A long-short term memory (LSTM) architecture for the generator was adopted in <cit.>, demonstrating the potential of recurrent neural networks in synthetic data generation. A GAN approach based on an autoencoder and the Kullback-Leibler divergence has also been proposed to tackle mode-collapse in GANs, producing synthetic output comparable to state-of-the-art methods.
In CTGAN <cit.>, conditional training of a GAN is carried out by instructing the generator and the discriminator to sample based on a randomly selected feature category in every training iteration, where a highly realistic tabular output can be observed from their evaluation. In <cit.>, the authors propose cWGAN, which is a GAN-based oversampling technique focusing on the generation of samples belonging to the minority class in financial credit datasets. Zhao et al. <cit.> build up on existing tabular GAN models to improve the representation of skewed distributions of numerical features of the synthetic output. Finally, Strelcenia et al. comprehensively review the existing tabular GAN literature and evaluate the studied variants for a class imbalance augmentation task <cit.>. While there is no shortage of novelties in the synthetic tabular data generation literature, an underlying challenge remains the proposal of evaluation techniques and criteria for quantifying the reliability and the statistical properties of the synthetic data. A further limitation is the sufficient analysis of data with hierarchical and interdependent variables. Therefore, our focus in this work is proposing a new framework that alleviates the two aforementioned deficiencies in the synthetic tabular data domain. § METHODOLOGY Generating synthetic data from unknown and correlated distributions is a non-trivial task. The architecture of CasTGAN is tailored for generating mixed-type features that have similar distributions to the ones observed in a real dataset. Additionally, cascaded structures enable modelling correlations among features in a sequential manner. Given a dataset X with M features, the features of the dataset can be represented as {m_1, m_2, …, m_M}. §.§ Model Architecture The proposed CasTGAN framework is characterised by a cascade of multiple generators connected sequentially and coupled with auxiliary learners as illustrated in Figure <ref>. The blocks in Figure <ref> denoted with G_i and AL_i, for i = 1,…, M, are respectively individual generators and auxiliary learners, one per feature m_i. Each generator G_i focuses on generating its target feature using a primary neural network, and is laid out sequentially such that it obtains its inputs from a given noise vector whose components are standard Gaussian and i.i.d., and from the outputs of the previous generator (the only exception being the first generator which only takes a vector of random noise as its input). From a visual perspective, a generic generator G_i may then be depicted as in Figure <ref>. Notation wise, the logic shown in Figure <ref> is such that generator G_i takes as inputs two objects: the useful outputs coming from G_i-1 (i.e., the vector Ǧ_i-1(ϕ_i-1)) and the noise vector z (note that the same vector is fed to all the generators as depicted in Figure <ref>). The generator G_i then produces one output, i.e., G_i(ϕ_i), that may nonetheless be logically split into three distinct components: Z_i, that will be considered redundant information and that will not be used by the next generator; X_i, that is the target feature of generator i; and Ǧ_i-1(ϕ_i-1), that is simply the information from the past generator that will be forwarded to the next one. This means that the output of G_i can be formally presented as G_i(ϕ_i) = Ǧ_i(ϕ_i) ⊕ Z_i , while the information that generator G_i will pass to G_i+1 is Ǧ_i(ϕ_i) = X_i if i = 1, and Ǧ_i(ϕ_i) = X_i ⊕Ǧ_i-1(ϕ_i-1) if i ≥ 2.
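To make the bookkeeping in the two expressions above concrete, the following PyTorch-style sketch shows how the noise vector and the forwarded output Ǧ_i-1(ϕ_i-1) could be concatenated to form ϕ_i, and how each generator's output could be split into the redundant part Z_i, the target feature X_i and the forwarded information. It is a simplified illustration under assumed layer sizes, not the released CasTGAN implementation, and the class and variable names are ours; the primary/secondary network split is described in more detail right after this sketch.

import torch
import torch.nn as nn

class ToyCascadeGenerator(nn.Module):
    # One generator of the cascade: a primary network producing the target feature X_i
    # and a secondary network producing the redundant output Z_i from the noise alone.
    def __init__(self, noise_dim, prev_dim, feat_dim, redundant_dim):
        super().__init__()
        self.primary = nn.Sequential(nn.Linear(noise_dim + prev_dim, 128), nn.LeakyReLU(0.2),
                                     nn.Linear(128, feat_dim))
        self.secondary = nn.Sequential(nn.Linear(noise_dim, 64), nn.LeakyReLU(0.2),
                                       nn.Linear(64, redundant_dim))

    def forward(self, z, prev_out):
        phi = torch.cat([z, prev_out], dim=1) if prev_out is not None else z   # phi_i
        x_i = self.primary(phi)                                                # target feature X_i
        z_i = self.secondary(z)                                                # redundant output Z_i
        # forwarded information: X_i alone for i = 1, otherwise X_i concatenated with the past output
        forwarded = x_i if prev_out is None else torch.cat([x_i, prev_out], dim=1)
        return forwarded, z_i

def cascade_forward(generators, z):
    # Chain the generators, feeding the same noise z to each one; the last forwarded
    # vector plays the role of the synthetic sample X.
    forwarded = None
    for gen in generators:
        forwarded, _ = gen(z, forwarded)
    return forwarded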
Note that the generator is actually composed by two distinct neural networks: the primary one, whose input is ϕ_i = z if i = 1 z ⊕Ǧ_i-1(ϕ_i-1) if i ≥ 2 , and the secondary neural network, whose input is the noise vector z above and whose output Z_i is the redundant information output mentioned above, that will not be passed forward to G_i+1 but will instead be used by AL_i. We note that, as the losses are not backpropagated to the secondary neural network, G_i retains its primary objective of generating its target feature based on the input provided to it. Summarizing, the overall cascaded generator structure can be denoted as G⃗( z ) = X , where X is the generated synthetic output. Based on our literature survey, we observe that some of state-of-the-art tabular GANs employ a conditional setting to enforce the representation of features in both the generator and discriminator <cit.>. We note that while this is indeed an effective strategy for representing discrete categories and preventing mode collapse, this approach is considerably inefficient for sampling datasets with a large number of categories and few data records, since conditioning on a single random category at every training iteration might not be sufficient to cover all the existing categories in the dataset. In this paper we seek to analyse whether, how much and under which conditions resorting to a series of auxiliary learners – one for each feature – may encourage the models to learn to represent based on the losses traversed, rather than explicitly constraining the model output. The hypothesis is indeed that if multiple auxiliary losses are computed in parallel, the model might be able to improve the learning of categorical interdependence and scale up accordingly to highly dimensional tabular datasets. §.§ Auxiliary Learners Architecture GANs for tabular data synthesis are known to be prone to training instability and mode collapse due to the imbalanced feature categories <cit.>. Conditional GANs <cit.> have been deployed to generate synthetic output belonging to specific classes. Conditioning both on the generator and the discriminator has been shown to stabilise the training process of GANs. On the other hand, the use of auxiliary learning for predicting the target variable given the data represents an alternative approach for capturing the characteristics of the data attributed to given target feature. It has been demonstrated that the auxiliary loss further stabilises the training process in comparison to conditional generation, and leads to a representation that it is independent of target label <cit.>. We observe that while auxiliary learners are traditionally embedded within the discriminator <cit.>, we instead propose designing auxiliary learners as independent structures. In the CasTGAN, we craft M auxiliary learners {AL_1, …, AL_M} for learning to predict the individual features. Due to its scalability on large datasets and the relatively fast convergence speed, we focus on building the auxiliary learners using the Light Gradient Boosting decision trees (LightGBM) <cit.>, which we pre-train prior to the GAN training. An auxiliary learner AL_i corresponding to feature m_i is trained on X_∉i in order to predict X_i. Following standard strategies for such tasks, for predicting the numerical features, the mean-squared error loss is used in the training of the auxiliary learners, whereas cross-entropy loss is used for predicting the categorical and binary variables. 
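A minimal sketch of how such per-feature auxiliary learners could be pre-trained is given below, assuming a pandas DataFrame whose categorical columns are already integer-encoded; it uses the LightGBM scikit-learn interface with default settings and is only meant to illustrate the one-learner-per-feature setup described above, not the exact training configuration used in the paper.

import lightgbm as lgb

def fit_auxiliary_learners(df, categorical_cols):
    # One LightGBM model per feature m_i, trained on all remaining features X_{not i}.
    learners = {}
    for target in df.columns:
        X = df.drop(columns=[target])
        y = df[target]
        if target in categorical_cols:
            model = lgb.LGBMClassifier()   # cross-entropy objective for categorical/binary targets
        else:
            model = lgb.LGBMRegressor()    # mean-squared error objective for numerical targets
        model.fit(X, y)
        learners[target] = model
    return learners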
As with other decision tree based models, there is no need to one-hot encode the categorical features in X_∉i, but instead the categories are converted into integer encodings. Meanwhile, the LightGBM auxiliary learners are capable of handling numerical features with extreme magnitudes, and therefore numerical features are not scaled for auxiliary training. As LightGBM models have low computational complexity and are generally fast to train <cit.>, assigning an auxiliary learner for every feature is a reasonable approach for representing the auxiliary loss L_AL for predicting a feature given all the other features. It is worth noting that for the early auxiliary learners in the cascaded sequence AL_1 to AL_⌈ M/2 ⌉, the generated data feature space X_∉i is heavily dominated by redundant variables Z which subsequently lead to increased auxiliary losses. However, these losses help the early generators in producing features that closely match the distributions of the training data. Meanwhile, the task for the later generators and auxiliary learners in the cascade becomes increasingly focused towards generating features that can be predicted from the initially generated target features Ǧ_i-1(ϕ_i-1). To ensure that the auxiliary losses {L_AL_1, …, L_AL_M} do not overwhelm the generators' own losses, there is the need for scaling down the losses from the auxiliary learners. In this paper we analyze the choice of performing this scaling down by means of constant coefficients λ_AL. In principle, λ_AL could be a single scalar value that is applied to all the auxiliary losses. However, we consider a vector of auxiliary loss coefficients {λ_AL_1, … , λ_AL_M} since it is fundamentally important for the early generators to generate variables that conform to the original feature distributions, as this will prompt the next generators in the cascade to effectively learn the feature correlations. Though this is a hyperparameter, we set λ_AL_1 = 0.75 and λ_AL_M = 0.10 while the auxiliary coefficients in between are linearly and equidistantly scaled in the [0.75, 0.10] range in our experiments. The same auxiliary setting is applied for all the datasets in this work. §.§ Training Data Transformation The main novelty in this paper comes from testing the effects of designing multiple generators in a cascaded layout, where each generator focuses on generating a single data feature. To represent the numerical features we propose to use Variational Gaussian Mixture models (VGM) <cit.> to estimate the number of modes for a numerical feature, as it has been demonstrated that correctly representing multi-modal numerical data objectively reduces the incidence of mode collapse <cit.>. In this context, each mode is essentially a Gaussian model on its own, where in the transformation process a single mode is selected for a feature and a scalar value is calculated for quantifying the magnitude of the mode. As such, the transformation of a numerical feature changes the initial unscaled real number into a vector of size equivalent to the number of Gaussian mixture models + 1 (the 1 being the magnitude value of the respective mode and the vector being a one-hot encoded representation of the selected mode). The representation of continuous features using variational Gaussian models is not exclusive to this work as it has been adopted with notable success in earlier tabular GANs <cit.>. Meanwhile, the categorical features of the training data are transformed into one-hot encodings before being fed to the discriminator.
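As an illustration of the mode-specific representation described above, a variational Gaussian mixture can be fitted per numerical column and each value replaced by a one-hot mode indicator plus a scalar magnitude. The sketch below uses scikit-learn's BayesianGaussianMixture with an assumed cap of 10 modes and a tanh-bounded magnitude; it is only indicative of the general recipe, not the exact CasTGAN transformation.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def vgm_transform(column, max_modes=10):
    x = np.asarray(column, dtype=float).reshape(-1, 1)
    vgm = BayesianGaussianMixture(n_components=max_modes, weight_concentration_prior=1e-3,
                                  max_iter=200, random_state=0)
    vgm.fit(x)
    modes = vgm.predict(x)                               # selected mode per sample
    means = vgm.means_.ravel()[modes]
    stds = np.sqrt(vgm.covariances_.ravel()[modes])
    scalar = np.tanh((x.ravel() - means) / (4 * stds))   # bounded magnitude within the chosen mode
    one_hot = np.eye(max_modes)[modes]                   # one-hot encoding of the selected mode
    return np.column_stack([scalar, one_hot])            # vector of size (number of modes + 1)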
For GANs, the one-hot encoding vectorisation of the categorical features presents an intuitive approach to process the data as it can be scaled and can be appropriately used by the model without issues such as exploding gradients. Furthermore, the use of one-hot encoding simplifies the task of introducing non-linearities by the generator for guaranteeing that the model gradients are differentiable. It is worth reiterating that categorical and numerical transformations of the data for use by the generator and discriminator differ from the representation of the same data used for training and evaluating the auxiliary learners. §.§ Generators and Discriminator The generators receive input in the form of noise vector z and the untransformed meaningful output Ǧ_i-1(ϕ_i-1). As with other GAN applications, we highlight that using a larger noise vector leads to a better output of the features and can mitigate against mode collapse <cit.>. Throughout all the experiments, we use a noise vector of size 128, though this is a hyperparameter that can be tuned accordingly <cit.>. Since each generator dedicates its effort into generating one feature at a time, we use a simple primary neural network of hidden sizes (128, 64). Additionally, we use layer normalisation after the hidden dimension <cit.> for standardizing the weights into zero mean and unit variance and for speeding up the training process. We also use the LeakyReLU activation function with a small negative slope as opposed to ReLU in order to remove the constraints associated with setting the negative gradients to zero. The dimension of the output layer of the generator for producing the target feature is equivalent to the number of one-hot encodings if the feature is categorical or equal to the number of VGM modes +1 if the target feature is numerical. A hyperbolic tangent () activation is applied to the scalar value of the numerical VGM representation. For the categorical output and the one-hot encoded vector of the VGM vector we use gumbel softmax activations for introducing non-linearities to the output. The Gumbel-softmax works by adding noise from the Gumbel distribution to the vectorised logit output of the generator while maintaining the differentiable nature of the GAN training. The Gumbel-softmax function exhibits also a temperature parameter τ that may be used to control the diversity of the output generated by the function. In our experiments τ = 0.8 was assessed as proper to generate a diversified output that reduces the effects of mode collapse, while conforming to the distribution of categories of the feature within the training set. We then note the risk that the discriminator may learn to distinguish between real and generated data by discriminating between the hard one-hot encoded real data and the float values from the generator. To minimize this risk we add an i.i.d. Gaussian noise distributed as 𝒩(0,0.01) <cit.> to the columns of the real samples before feeding them to the discriminator. Consequently, all the inputs to the discriminator (i.e., numerical and categorical features of real and generated data) are float values. The weights of the discriminator are then trained using only the outputs from the final generator G_M. The parameters of the various generators are though updated based on the loss of the discriminator, that is thus computed for this reason. We maintain a simple architecture for the discriminator comprising two hidden layers of sizes (256, 128). 
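The generator output heads just described can be sketched as follows: a small primary network with layer normalisation and LeakyReLU activations, followed by a Gumbel-softmax activation for categorical (and VGM mode) outputs and a tanh for the scalar VGM magnitude. This is a hedged illustration of the layer choices mentioned in the text (hidden sizes (128, 64), τ = 0.8), not the exact released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureHead(nn.Module):
    def __init__(self, in_dim, n_categories=None):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 128), nn.LayerNorm(128), nn.LeakyReLU(0.2),
                                  nn.Linear(128, 64), nn.LayerNorm(64), nn.LeakyReLU(0.2))
        self.n_categories = n_categories
        out_dim = n_categories if n_categories is not None else 1
        self.out = nn.Linear(64, out_dim)

    def forward(self, phi, tau=0.8):
        logits = self.out(self.body(phi))
        if self.n_categories is not None:
            # differentiable one-hot-like output for categorical features / VGM mode indicators
            return F.gumbel_softmax(logits, tau=tau, hard=False)
        return torch.tanh(logits)   # scalar magnitude of the numerical VGM representation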
As with the generator, layer normalization and LeakyReLU activations are used between the hidden layers. The final layer consists of a single output node without an activation function. To mitigate mode collapse and GAN training instability issues, we additionally compute the Wasserstein loss <cit.> with gradient-penalty <cit.> for the calculation of the discriminator losses. As the CasTGAN is built up using M generators, we have multiple min-max games between the generators and the discriminator. Therefore, the value function for the discriminator can be expressed as min_G⃗ max_D V(D) = - 𝔼_x ∼ p_data[ D(x) ] - 𝔼_z ∼ p_z[ D ( G⃗(z) ) ] - λ_GP 𝔼_x̂∼ p_x̂[ ( ‖∇_x̂ D ( x̂) ‖_2 - 1 )^2] and the value function for generator G_i is hence given by min_G_i max_D V(G_i) = - 𝔼_ϕ∼ p_ϕ[ D ( G_i(ϕ) ) ] + λ_AL_i 𝔼_ϕ∼ p_ϕ[ L_AL_i] . § EXPERIMENTAL SETUP Evaluating the performance of GANs is a non-trivial task, and this is evident from the variety of GAN evaluation criteria in the literature. In this section, we describe the experimental design for CasTGAN, with a thorough discussion of the metrics used for the model's analysis. We implemented our model using PyTorch in Python on a Linux Ubuntu 20.04 machine running on AMD Ryzen Threadripper 3990X and Nvidia GeForce RTX 3090. Our CasTGAN source code is publicly available[https://github.com/abedshantti/CasTGAN]. §.§ Datasets Our CasTGAN is designed for synthesis of tabular data that can be typically found in the financial and healthcare sectors. We therefore use tabular mixed datasets characterised by a combination of categorical and numerical features. We use four datasets where the task is the binary classification of the target label - Adult <cit.>, Bank Marketing <cit.>, Taiwan Credit <cit.> and Diabetes <cit.>. Meanwhile, we use the House Prices <cit.> and Cars <cit.> datasets for regression. We additionally highlight that binary columns in the datasets are handled as categorical variables. An overview of the datasets used is presented in Table <ref>. For synthesising data with CasTGAN and other baselines, we use 50% of the datasets' total number of samples for training the models. The remaining 50% of the data samples are dedicated to evaluating the generated synthetic output. §.§ Baselines We compare the synthetic output of CasTGAN against five state-of-the-art tabular generative adversarial network models: table-GAN <cit.>, medGAN <cit.>, CTGAN <cit.>, cWGAN <cit.> and CTAB-GAN <cit.>. Given that some of the datasets that we use in this study were not evaluated previously by the existing methods, we selected the optimal hyperparameters recommended by the baseline methods' authors. §.§ Evaluation Criteria §.§.§ Train on Synthetic, Test on Real (TSTR) We measure the reliability of the synthetic output produced by CasTGAN by training machine learning models on the generated data. We fitted three machine learning models on the generated output - namely AdaBoost, random forest and logistic regression for classification tasks and AdaBoost, decision trees and Linear SVM for regression tasks. We then used the trained models to predict the target label of the test data and we report the precision-recall area under curve (PR-AUC) for binary classification tasks and the R2 score for regression tasks. Since our main objective is to measure the machine learning utility of the generated data rather than assessing the individual performance of each classifier, we average the metrics produced by the three machine learning models.
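A minimal sketch of the TSTR protocol just described, for a binary classification dataset, could look as follows; the model list and metric follow the text (with scikit-learn's average precision used as the PR-AUC estimate), while the data handling is simplified and any preprocessing is assumed to be done elsewhere.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

def tstr_pr_auc(X_synth, y_synth, X_test, y_test):
    # Train on synthetic data, test on real data; report the average PR-AUC over the three models.
    models = [AdaBoostClassifier(), RandomForestClassifier(), LogisticRegression(max_iter=1000)]
    scores = []
    for model in models:
        model.fit(X_synth, y_synth)
        proba = model.predict_proba(X_test)[:, 1]
        scores.append(average_precision_score(y_test, proba))
    return float(np.mean(scores))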
§.§.§ Univariate Distributions We also assess how well do the individual features generated by CasTGAN resemble the features of the real data. As we quantitatively analyse how well our model learns the univariate feature distributions, we first one-hot encode and normalise the synthetic and real datasets. We calculate the dimension-wise mean of the individual features of the synthetic output and training data and report the RMSE between the real and the synthetic output. Additionally, we report the Kolmogorov-Smirnov two-sample test score <cit.> between the real univariate features and the synthetic ones. §.§.§ Correlation and Diversity Measuring the validity of the synthetic output represents a significant challenge for GAN frameworks. In computer vision applications of GANs, the synthetic images can be in some cases distinguished from the real images by a human observation of irregularities in the output such as pupil orientation in human eyes and out-of-place pixels. In tabular GANs, similar challenges exist as there is no standard approach in the literature for quantifying the proportion of invalid samples in the synthetic output. For instance, a common example in the tabular synthesis literature is highlighting how an entry such as gender = "Female" and diagnosis = "Prostate Cancer" is by definition an invalid record, as no such entry exists in the original data, nor can a female be diagnosed with prostate cancer by a physician. In financial datasets, such nested relationships also exist between features if for example we inspect a dataset with a city column and a country column. In such cases, a record with city = "Buenos Aires" and country = "Malta" is by definition invalid. Meanwhile, entries with city = "Alexandria" and country = "United States" is valid as Alexandria exists in the United States even though it is more commonly attributed to the country "Egypt". It is for this reason that quantifying the invalid output generated by tabular GANs is no easy task, even in the presence of domain knowledge. While a GAN model needs to ensure that its synthetic output is as valid as possible, there also needs to be some considerations for the diversity of the generated output. As such, the generative models should not significantly restrict the possible feature combinations between the different categorical features. Ensuring the diversity of the synthetic output enables the model to be less deterministic and increases its robustness against privacy attacks that aim to identify sensitive information. Therefore, the GAN model should be encouraged to explore unique feature combinations as long as such combinations can be considered valid. Given that public datasets are used in this work for the purpose of reproducability, we do not have the full domain knowledge for these datasets, thus, we propose an alternative method for quantifying the validity of the synthetic data. First, we consider calculating the difference in feature correlations between the training data and the fake data. For computing the correlations between numerical features we use the Pearson's correlation coefficient, while the Cramer's V measure is used for capturing the correlation between categorical features. The correlation score is found by calculating the RMSE score between the elements of the triangular matrix of the synthetic dataset and the real dataset. For measuring the diversity of the categorical output, we propose a new metric - Unique Pairwise Categorical Combinations (UPCC). 
In essence, we count the total number of unique interactions between any pair of categorical features in the dataset. For instance, in the Adult dataset the combinations [education = "Bachelors" and marital-status = "Never-married"], [education = "Bachelors" and sex = "Male"] and [marital-status = "Never-married" and sex = "Male"] each counts as a single pairwise combination, regardless of how many times they appear in the data. A reliable model should therefore ensure that the UPCC of its output is comparable to that of the original data. Subsequently, the UPCC Ratio is the number of unique pairwise combinations of the synthetic output divided by the total number of unique combinations of the training data. Finally, we estimate the validity of the model's output by dividing the correlation RMSE score of the model by the UPCC Ratio. We name this measure the CORDV score. A lower CORDV score indicates that the model is able to minimise the difference in feature correlation between its synthetic output and the real data, while simultaneously not impeding its ability to generate unique feature combinations. Meanwhile, a worse generative model is reflected by a greater CORDV score, indicating that the model poorly captures the correlations while potentially restricting the uniqueness of the categorical pairs. §.§.§ White-box privacy attacks Traditionally, white-box membership inference attacks on GANs assume that the attacker has access to the synthetic data and at least one generative component of the model. In this work, we formulate white-box privacy attacks in a different setting. We highlight that while using multiple auxiliary learners helps in generating more realistic and reliable synthetic output, the use of multiple auxiliary learners leads to a model that is more susceptible to privacy breaches by attackers. In this work, we devise white-box attacks by assuming that an attacker has access to the trained auxiliary learners and attempts to reconstruct training samples through an iterative process of estimating a hidden feature. In essence, the attacker with the synthetic data will at a given time remove one column from the data, use the corresponding auxiliary learner to predict the masked feature using the remaining features, and then replace the masked column with the predicted output from the auxiliary learners. In this setting, a single iteration refers to a walk-through over all the auxiliary learners and subsequently replacing all the columns in the dataset once. For evaluating how effective such white-box attacks are on our model, we control the training of the auxiliary learners using a perturbation parameter ϵ. The perturbation parameter translates to the proportion of label samples that are modified when training the auxiliary learners prior to the GAN training. For an auxiliary learner corresponding to a numerical column X_i, we perturb the numerical variables such that the perturbed variable for a given sample x̃ can be calculated as x̃_i = x_i + α x_i, where α is a floating-point number randomly sampled from [-1.0, 1.0]. Meanwhile, we perturb the categorical features by randomly selecting a category from the list of all the unique categories of the said feature. In our analysis, we experiment with ϵ = 0.0, implying that no perturbation takes place, and gradually increment this value to ϵ = 0.3, implying that 30% of the samples for each auxiliary learner, chosen at random, were perturbed prior to the auxiliary training.
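For concreteness, the attack iteration and the label perturbation just described could be sketched along the following lines; the auxiliary learners are assumed to be the per-feature models described earlier, the data is assumed to live in a pandas DataFrame, and the helper names are ours.

import numpy as np

def perturb_labels(y, categories=None, eps=0.1, seed=0):
    # Perturb a fraction eps of the auxiliary-learner labels before training.
    rng = np.random.default_rng(seed)
    y = np.array(y, copy=True)
    idx = rng.choice(len(y), size=int(eps * len(y)), replace=False)
    if categories is None:                       # numerical target: x_i + alpha * x_i, alpha in [-1, 1]
        y[idx] = y[idx] + rng.uniform(-1.0, 1.0, size=len(idx)) * y[idx]
    else:                                        # categorical target: resample a random category
        y[idx] = rng.choice(categories, size=len(idx))
    return y

def white_box_attack(synth_df, learners, n_iters=5):
    # Iteratively mask each column and replace it with the corresponding auxiliary learner's prediction.
    attacked = synth_df.copy()
    for _ in range(n_iters):
        for col, model in learners.items():
            attacked[col] = model.predict(attacked.drop(columns=[col]))
    return attacked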
Furthermore, we analyse whether an attacker possessing the original data preprocessing transformers has an additional advantage in recovering the training samples. We hypothesise that an attacker with access to the data transformations used for the auxiliary training can simply convert the synthetic data to a data structure that aligns with the existing transformer. On the other hand, an attacker without access to the preprocessors needs to fit and transform the data independently before launching membership attacks. This is especially prevalent for categorical features, where transforming the categories into ordinal encodings that do not match the ones learned by the auxiliary learners might lead to less effective attacks. § RESULTS §.§ Machine Learning Utility We use the synthetic data generated by CasTGAN for fitting supervised classification and regression machine learning models on the six datasets and compare our performance on the test set against models fitted on the training set and models fitted on the synthetic outputs of the five baseline methods. The results are reported in Table <ref>. From Table <ref> we can observe that the TSTR metrics for our CasTGAN are consistently among the best performing synthetic outputs, outperforming all the baselines on three out of six datasets. Meanwhile, we note that medGAN demonstrated the best results on regression datasets. In addition, CTGAN and CTAB-GAN have comparatively close performance to our method and the training examples. We note that we were unable to run the highly-dimensional Cars dataset on CTAB-GAN due to exceedingly large memory requirements attributed to representing wide datasets in convolutional neural network GAN-based approaches. §.§ Univariate Similarity To quantitatively analyse how well CasTGAN represents the feature distributions of the original data, we compare the Euclidean distance RMSE and Kolmogorov-Smirnov statistic between the synthetic data and the real data in Table <ref>. Generally, we note that our CasTGAN, medGAN and CTAB-GAN dominate the dimension-wise statistical similarity test. It can be observed that CasTGAN represents the features of the Adult and Cars datasets particularly well, while performing comparably on the remaining datasets. We can therefore deduce that CasTGAN is particularly useful for datasets with a greater number of unique categories. §.§ Output Validity We quantify the validity of the output by considering the number of unique pairwise categorical combinations (UPCC), the correlation error between the synthetic and the real data, and the CORDV score, which is essentially the correlation divided by the UPCC ratio. From Table <ref>, we can observe how the different generative methods rank among the three aforementioned metrics. First, we note that CTGAN and medGAN perform well in generating a large number of unique feature combinations, in most cases, even more than the number of combinations that can be found in the training data. This is particularly impressive for the Cars dataset, where both models managed to generate more than twice the number of categorical combinations of the training set. Similarly, we also observe that CTAB-GAN performs relatively well in exploring diverse categorical combinations. Meanwhile, we notice that our CasTGAN is more conservative when it comes to generating unique categorical combinations. For all the datasets, CasTGAN produces a marginally lower number of pairwise combinations than can be typically found in the training set.
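For reference, the UPCC count and the derived CORDV score reported in these comparisons could be computed along the following lines. This is a sketch under the definitions given earlier, with the correlation term limited here to the numerical columns for brevity (the full criterion also uses Cramer's V for categorical pairs); the function names are ours.

import numpy as np
from itertools import combinations

def upcc(df, categorical_cols):
    # Count unique pairwise categorical combinations across all pairs of categorical features.
    return sum(df[[a, b]].drop_duplicates().shape[0] for a, b in combinations(categorical_cols, 2))

def cordv(real_df, synth_df, categorical_cols, numerical_cols):
    upcc_ratio = upcc(synth_df, categorical_cols) / upcc(real_df, categorical_cols)
    corr_real = real_df[numerical_cols].corr().to_numpy()
    corr_synth = synth_df[numerical_cols].corr().to_numpy()
    iu = np.triu_indices_from(corr_real, k=1)                      # upper-triangular entries
    corr_rmse = np.sqrt(np.mean((corr_real[iu] - corr_synth[iu]) ** 2))
    return corr_rmse / upcc_ratio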
Moreover, the UPCC can be a good indicator of mode collapse and this is reflected by the significantly low UPCC values for table-gan and cWGAN, where it can be deduced that these models generated a limited number of modes for some categories. In contrast, it can be observed from Table <ref> that CasTGAN generally outperforms the other baselines in capturing the feature correlations of the datasets. The lower correlation RMSE score entails that CasTGAN prioritises the representation of correlations and feature interdependence in the real data. Meanwhile, the CORDV score aims to quantify the trade-off between the diversity and the proximity of the synthetic data to the real data. We observe from the results that the highest CORDV scores are evenly split among our CasTGAN and CTGAN. Despite lacking the full domain knowledge in our datasets, we nonetheless measure the synthetic invalidity on the Adult, Cars and Housing datasets as follows: * Adult dataset: we use the fields "relationship" and "sex" to calculate the number of invalid records. We posit that if "relationship" = "Husband", then the sex feature needs to be set to "Male". Likewise, the "relationship" = "Wife" needs to align the gender field assigned as "Female". This is not based on our assumptions, but rather running explanatory data analysis on the training set confirms that the records are matched in such a manner. * Cars dataset: we use the fields "make" and "model" and classify a synthetic sample as invalid if the synthetic car's "model" does not in fact belong to the "make" that can be found in training data. * Housing dataset: the housing dataset consists of the fields "year built" and "year renovation". Logically, a property cannot be renovated before it was built, and further inspecting the data indeed confirms that there are no observations with "year renovation" before "year built". Based on the aforementioned fronts, the ratio of invalid records generated by CasTGAN and three baseline methods are demonstrated in Table <ref>. We chose not to include table-GAN and cWGAN in the comparison as the synthetic output is characterised by major mode collapse. From Table <ref> we can observe that CasTGAN remarkably reduced the number of invalid synthetic records of the Adult dataset. Our method also significantly decreased the number of invalid records in the Cars dataset. This resembles a major improvement from CTGAN, while noting the challenging nature of modelling the Cars dataset due to the large number of categories present. Furthermore, it is evident that CasTGAN outperforms the other generative approaches on the numerical features of the Housing dataset. §.§ Robustness Against Privacy Attacks For conducting white-box privacy attacks, we set the number of attacking iterations to five, where each feature in the synthetic data is updated five times based on the output of the auxiliary learners. Moreover, the membership attacks are launched on 10% of the total number of overall samples. We highlight that the ratio of attacked samples does not impact the evaluation of the robustness of our approach as we only compute the attack distance metrics with respect to the attacked samples. The Euclidean distance of the attacked samples to the training data and to the synthetic data prior to the membership attacking is computed in Table <ref>. From Table <ref>, it can be observed that the perturbation coefficient ϵ greatly impacts the closeness of attacked samples to the training data. 
For unperturbed and minimally perturbed data features, it can be noticed that the attacked samples are relatively close to the training samples, indicating that the attackers might succeed in recovering training datapoints. We observe that the distance to the training samples increases for greater ϵ values, which demonstrates the additional privacy guarantees that can be provided when altering the labels. We additionally notice how access to the data processors gives a major advantage to the attackers attempting to recover the training samples. This holds true especially for the Adult dataset, where using the trained label encodings of the auxiliary learners leads to more targeted attacks that are even closer to the training samples than attackers on unperturbed data with no access to the data preprocessors. Another interesting observation is that the attacks on the Housing dataset are greatly impacted by increments in ϵ, which is plausible, given that the dataset mainly consists of numerical features. In addition to the proximity to the training samples, we also investigate whether perturbing the labels of the auxiliary learners can contribute to a reduction in the quality of unattacked synthetic data as demonstrated in Table <ref>. For the Adult, Bank and Credit datasets it is evident that applying perturbations insignificantly impacts the evaluation metrics of the synthetic datasets. We observe that the PR-AUC scores on the test data and KS statistic for univariate distributions are minimally influenced by the changes in ϵ. In contrast, it appears that perturbing the data impacts the CORDV scores as a result of the correlation errors between the synthetic and the real datasets. Interestingly, applying perturbations on the Housing dataset appears to improve the quality of the synthetic output in addition to increasing its robustness against white-box privacy attacks. § CONCLUSION In this work, we presented CasTGAN as a generative framework for creating synthetic tabular data samples that are representative of the real data attributes. Our motivation for this work stems from the need for realistic tabular data that can be exchanged amongst experts, while focusing on the reliability and the sensitivity of such information. We therefore directed our focus towards generating fake output that captures the correlations and interdependence between the data features. We demonstrated that our cascaded generator architecture supported by auxiliary learners is able to generate realistic output given highly dimensional and largely imbalanced tabular datasets. Our results indicate that CasTGAN is capable of significantly reducing the number of invalid records while exhibiting strong statistical and correlational similarities to the real data. We further evaluated the robustness of our model against targeted privacy attacks and showed that applying small-scale perturbations to the auxiliary learners can mitigate attacks aiming to recover the real data samples. Given the challenging nature of generating realistic synthetic tabular data, there are several paths for future work. This work can be extended by incorporating additional data types within the tabular data generation such as free text and timestamps. Moreover, we intend to explore how our approach can generate more diversified combinations of categories, while maintaining its ability to minimise the number of invalid data records. § ACKNOWLEDGMENTS This project was supported by DNB ASA through the funding of this research.
§ TRAINING DETAILS §.§ Hyperparameters §.§ Losses §.§ Training Elapsed Duration § FURTHER RESULTS §.§ TSTR §.§ Statistical Similarity
http://arxiv.org/abs/2307.03180v1
20230706175800
String operators for Cheshire strings in topological phases
[ "Nathanan Tantivasadakarn", "Xie Chen" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mes-hall", "quant-ph" ]
Walter Burke Institute for Theoretical Physics and Department of Physics, Walter Burke Institute for Theoretical Physics and Department of Physics, Institute for Quantum Information and Matter, Elementary point charge excitations in 3+1D topological phases can condense along a line and form a descendant excitation called the Cheshire string. Unlike the elementary flux loop excitations in the system, Cheshire strings do not have to appear as the boundary of a 2D disc and can exist on open line segments. On the other hand, Cheshire strings are different from trivial excitations that can be created with local unitaries in 0d and finite depth quantum circuits in 1d and higher. In this paper, we show that to create a Cheshire string, one needs a linear depth circuit that acts sequentially along the length of the string. Once a Cheshire string is created, its deformation, movement and fusion can be realized by finite depths circuits. This circuit depth requirement applies to all nontrivial descendant excitations including symmetry-protected topological chains and the Majorana chain. String operators for Cheshire strings in topological phases Xie Chen August 1, 2023 =========================================================== “Cheshire strings” describe line-like excitations in topological states which contain a mysterious hidden charge along the length of the string<cit.>. The charge content cannot be pinpointed to one or a few local regions on the string and therefore costs no extra energy compared to the zero charge state. The degeneracy is a result of the spontaneous symmetry breaking, or more precisely `Higgsing', of (part of) the gauge symmetry in the topological phase and the condensation of the corresponding gauge charges along the string. Since the gauge charge is condensed, adding more does not change the energy of the condensate. Cheshire strings have been included as an essential part in the formulation of a complete mathematical description of 3+1D topological orders in terms of higher categories<cit.>. In Ref. Else2017, the example of 3+1D Toric Code was discussed explicitly where fundamental objects in the higher category include 1d objects like the Cheshire string and magnetic gauge flux loop and the 0d domain walls between them which includes the electric gauge charge excitation. From a physics perspective, it is not immediately clear why a consistent theory of the 3+1D ℤ_2 topological order needs to involve the Cheshire string. After all, the elementary excitations – the gauge charge and gauge flux – capture all the fractional statistics we see in the model and their corresponding string and membrane operators capture the full logical operations in the degenerate ground space. The Cheshire string is made of gauge charges and in this sense a `descendant' excitation<cit.>. Are they really necessary in the description? Some observations point to the intrinsic nature of Cheshire strings in 3+1D topological phases. First of all, they do exist as a 1d excitations in the model and can hence appear next to magnetic flux loops and change their feature. In non-abelian gauge theories, it has long been known that magnetic flux loops are intrinsically “Cheshire" as the flux loop breaks the non-abelian gauge symmetry down to a subgroups and therefore gauge charges that transform nontrivially under the broken part of the gauge group have to condense along the loop <cit.>. In abelian gauge theories with nontrivial three-loop braiding<cit.>, some flux loops are intrinsically “Cheshire” as well. 
Such flux loops can be thought of as the boundary between different 2+1D gauge theories and can only be gapped if the gauge charge is condensed along the loop<cit.>. The following Q&A may help clarify the situation. * Do we need to include Cheshire points in our description of topological order as well? We already have. A Cheshire point can be obtained by shrinking the Cheshire string to a point and is a direct sum of all charge states. In the ℤ_2 case, it is 1 ⊕ e. When the condensate is 0d, the degeneracy between the two charge states will generically split and we end up with either 1 or e. * Why did we not include Cheshire strings in the description of 2+1D topological order? 2+1D topological orders are characterized in terms of braided fusion categories whose fundamental objects are point excitations – the anyons. Cheshire strings appear at one higher dimension and hence do not mix with the anyons. However, they do play an important role as defects and boundaries of the topological states and are described as (bi)module categories over the bulk fusion category, as explained in Ref. Kitaev2012. * Apart from Cheshire strings, are there other types of descendant excitations? Yes. Cheshire strings correspond to the 1+1D symmetry breaking phase of the gauge charge. There are also symmetry-protected topological phases in 1+1D. When the gauge charge is a fermion, there is also the Majorana chain. Cheshire strings are a non-invertible excitations while the latter two correspond to invertible descendant excitations. (Refs. Yoshida2015,Yoshida17,roumpedakis2022higher,Kong2022arxiv,Barkeshli23 construct examples of such excitations.) Going to higher dimensions, we would also need to consider membrane-like descendant excitations of point and loop excitations. * What is the benefit of including Cheshire string in the description of topological orders? One benefit is that it makes the correspondence between bulk and boundary more natural (at least in a mathematical sense as shown in Refs. Kitaev2012,Kong2020center, Kong2017, Kong2015arxiv). * The elementary excitations are generated with unitary string and membrane operators. Does this also apply to Cheshire string? Yes, this is what we are going to show in this paper. Previous discussions have mostly focused on generating Cheshire string by changing the Hamiltonian of the topological state or using projection operations<cit.>. We want to emphasize that Cheshire strings can also be generated using unitary circuits and the way it is generated using unitary circuits makes clear its nontrivialness as a descendant excitation. Magnetic fluxes are nontrivial loop excitations because they can only be generated as the boundary of a unitary membrane operator. Because of that, they have to form closed loops. Cheshire strings, on the other hand, do not have to appear on the boundary of a membrane and can exist on open string segments. However, if we would like to generate a Cheshire string with a unitary circuit along the string, the circuit depth has to grow linearly with the length of the string. The linear depth of the circuit distinguishes Cheshire strings from trivial line-like excitations that can be generated with finite depth circuits. In particular, as we show below, the linear depth circuit has a sequential structure. That is, each layer contains only one local gate set acting on a local region in the string. The gate set moves from one end point of the string to the other and acts on the string in sequential way. 
This is hence an example of a Sequential Quantum Circuit<cit.>. Once a Cheshire string has been created, we show that it can be deformed, moved, and fused using finite depth circuits. Therefore, the equivalence relation between Cheshire excitations can be established with finite depth circuits, similar to the case of elementary point and loop excitations[Equivalence relations between point excitations are given by local unitary transformations.]. The paper is structured as follows: in Section <ref>, we show how to generate a Cheshire string in 2+1D Toric Code using a sequential linear depth circuit; in Section <ref>, we show how to deform, move and fuse Cheshire strings in 2+1D Toric Code; in Section <ref>, we show how similar circuits work in 3+1D. In fact, in the 2+1D Toric Code, there are five types of nontrivial defects<cit.>. Their creation and fusion can be achieved similarly with sequential linear depth circuits / finite depth circuits. We demonstrate this in Appendix <ref>. In the summary section (section <ref>), we discuss the relation between different types of excitations in topological phases and the unitary operation that generates them. § GENERATION OF CHESHIRE STRING IN 2+1D Let us first consider the generation of a Cheshire string in the 2+1D Toric Code. Consider the Toric Code on a two dimensional lattice defined with the Hamiltonian H = -∑_p A_p - ∑_v B_v = -∑_p ∏_e∈ p X_e - ∑_v ∏_e ∋ v Z_e. Let's call the excitation of the A_p terms the gauge charge excitation labeled by e and the excitation of the B_v terms the gauge flux excitation labeled by m. Applying Z_e on one edge creates two gauge charge excitations on the neighboring plaquettes. Having a charge condensate corresponds to enforcing -Z_e as the Hamiltonian term so that the ground state remains invariant under the pair creation or hopping of gauge charges between the neighboring plaquettes. If such a term is enforced on a string of edges on the dual lattice (dotted blue line in Fig. <ref>), we get a Cheshire string for the gauge charge e. While the Cheshire string can be generated with projection operators 1/2(1+Z_e) acting on the edges along the string all at the same time, generating it with unitary operations takes a number of steps that scales linearly with the length of the string. Consider creating a Cheshire string on the dual lattice, which starts at a plaquette p_0 and ends at a plaquette p_N, where adjacent plaquettes in the sequence p_i and p_i+1 share an edge e_i,i+1. The sequential circuit to create the Cheshire charge is given by U= ∏_i=N^1 R( A_p_i)R(Z_e_i-1,i), where we define R(𝒪) ≡ e^-iπ/4𝒪, which has the property that for Pauli operators P and Q, R(Q) P R(Q)^† = P if [P,Q]=0, and R(Q) P R(Q)^† = iPQ if {P,Q}=0. Note that the R gates do not commute with each other so the ordering is important. Our convention is that gates to the right are applied before gates to the left. Therefore, the sequence of gates is R(Z_e_0,1), R(A_p_1), R(Z_e_1,2), R(A_p_2), and so forth. Since A_p and Z_e commute with the vertex terms, the circuit leaves the vertex stabilizers invariant while mapping A_p_i→ Z_e_i-1,i for i = 1,...,N, and A_p_0→∏_i=0^N A_p_i∏_i=1^N Z_e_i-1,i. Therefore, after the circuit, the edges along the dual string are in the condensate state |0⟩ stabilized by Z_e. The total charge of the condensate measured by ∏_i=0^N A_p_i remains invariant in the ground state, while the individual charges A_p_i are no longer conserved.
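The conjugation rule for R(𝒪) = e^-iπ𝒪/4 quoted above is easy to verify numerically on a couple of qubits. The short check below is our own illustration using dense matrices rather than a stabilizer formalism; it confirms, for example, that R(Z⊗Z) maps X⊗I to i(X⊗I)(Z⊗Z) in the anticommuting case and leaves a commuting operator invariant.

import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

def R(O):
    return expm(-1j * np.pi / 4 * O)   # R(O) = exp(-i*pi*O/4)

Q = kron(Z, Z)    # a two-body Z-type generator, e.g. Z Z on an edge shared by two plaquettes
P = kron(X, I2)   # a Pauli operator that anticommutes with Q

lhs = R(Q) @ P @ R(Q).conj().T
print(np.allclose(lhs, 1j * P @ Q))              # True: anticommuting case gives i P Q
print(np.allclose(R(Q) @ Q @ R(Q).conj().T, Q))  # True: commuting case leaves the operator invariant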
Adding an e charge using the string of Z operators along the dashed green line changes the total charge of the condensate, but does not affect the local terms in the condensate. Therefore, the two total charge states are degenerate on the Cheshire string. Generating a Cheshire string – a condensate for the gauge charge e – in the Toric Code corresponds to opening up a slit of vacuum state inside the topological bulk with a `smooth' boundary between the two. Similar operations (or the inverse) have been discussed in Refs. Dennis2002,Aguado2008,Liu2022. In appendix <ref>, we are going to discuss how a similar condensate of the gauge flux m corresponds to a `rough' boundary between Toric Code and the vacuum. § FUSION OF CHESHIRE STRING IN 2+1D Once a Cheshire string is created, it can be deformed, moved, and fused using finite depth circuits. In other words, finite depth circuits establish the equivalence relation between Cheshire string excitations. As shown in Fig. <ref>, if we start from a Cheshire string in the bottom row ((1) with dashed black edges in state |0⟩), we can make it thicker (2) and then thinner (3) using finite depth circuits. The individual gate sets (the blue dot pairs) take the same (inverse) form as in Fig. <ref>. The difference is that, now the gate sets are oriented in parallel, rather than connecting head to toe. It can be easily checked that parallel gate sets commute with each other and hence can be applied simultaneously. Going from (1) to (3) moves the Cheshire string perpendicular to its length by one step. The total charge of the condensate, measured by ∏ X along the red loop, is conserved in the whole process. Two Cheshire strings c fuse as c × c = 2c which can be realized with finite depth circuit as well. Using the circuits discussed above, we can always move the two strings with finite separation right next to each other as shown in Fig. <ref>(1) with a finite depth circuit. Before fusing, ∏ X around the red rectangles measures the charge on each string. Along the domain wall separating the two strings, the Hamiltonian involves pairwise ZZ terms as shown by the green bond in Fig. <ref>(a). A single Z on the domain wall tunnels a charge from one string to another. It spoils the conservation of charge on each string while preserving their sum. The fusion of the strings can be achieved with the finite depth circuit shown in Fig. <ref>(b). Each arrow represents a controlled-NOT gate with the start of the arrow being the control and the end of the arrow being the target. All the gates commute and can be applied in one step. After the circuit, the domain wall separating the two strings become completely decoupled from topological bulk. The ZZ couplings along the domain wall remain invariant, resulting in a two fold degeneracy between the |00...0⟩ state and the |11...1⟩ state. When the domain wall is in the |00...0⟩ state, it merges naturally with the condensates on the two sides into a single condensate. When the domain wall is in the |11...1⟩ state, a simple one step rotation would take it into the |00...0⟩ state and the same conclusion holds. Therefore, two Cheshire strings merge into one Cheshire string with a pre-factor of 2. If one wants to put the fused string into a standard form (e.g. of width 1), this can be done with another finite depth circuit of the form in Fig. <ref>(3). § CHESHIRE STRING IN 3+1D A similar construction holds in 3+1D Toric Code as well. Consider the 3+1D Toric Code on a cubic lattice with Z_2 qubits on the plaquettes. 
The Hamiltonian contains a charge term A_c around each cube c and a flux term B_e around each edge e, as shown in Fig. <ref>(1): H = -∑_c A_c - ∑_e B_e = -∑_c ∏_p∈ c X_p - ∑_e ∏_p∋ e Z_p. Similar to the 2+1D case, applying Z_p on one plaquette creates two gauge charge excitations in the neighboring cubes. A Cheshire string corresponds to enforcing -Z_p on the plaquettes along a dual string (the dotted blue line in Fig. <ref>(1)). As shown in Fig. <ref>(1), a Cheshire string from cube c_0 to c_N can be created with a sequential circuit U = ∏_i=N^1 R(A_c_i)R(Z_p_i-1,i). The circuit leaves the B_e terms invariant while mapping A_c_i→ Z_p_i-1,i, i = 1,...,N, and A_c_0→∏_i=0^N A_c_i∏_i=1^N Z_p_i-1,i. Similar to the 2+1D case, the shaded plaquettes are mapped to the |0⟩ state of the condensate while the total charge of the condensate measured by ∏_i=0^N A_c_i remains invariant in the ground state. Adding an e charge using the string of Z operators along the dashed green line maps between the two degenerate states of the Cheshire string. By generating a Cheshire string in the bulk of the Toric Code state, we create the vacuum state in a line-like region with a smooth gapped boundary to the topological bulk. Similar operations (or the inverse) have been discussed in Ref. Chen2022arxiv. Once a Cheshire string is created, it can be deformed ((1) to (2) or (2) to (3)) and moved ((1) to (3)) with finite depth circuits. Fusion of two Cheshire strings proceeds in a similar way as in 2+1D. As shown in Fig. <ref> (1), when two Cheshire strings (gray shaded plaquettes) lie next to each other, the plaquettes on the domain wall between them couple with ZZ terms. The operator ∏ X around each red rectangular cuboid measures the total charge on each string. (The front and back faces of the cuboid are not shaded red for clarity.) Applying the controlled-NOT gates indicated by the arrows in Fig. <ref>(2) decouples the domain wall qubits from the topological bulk. The ZZ coupling remains, giving rise to a two fold degeneracy of |00...0⟩ and |11...1⟩, each of which merges with the two condensates into one condensate. § SUMMARY By discussing the generation and fusion of the Cheshire string using unitary circuits, we put it under the same framework as other excitations of a topological system. Table <ref> summarizes the different cases. Anyons in 2+1D and gauge charge/gauge flux excitations in 3+1D are generated as the boundary of a higher dimensional unitary operator. Anyons and gauge charges are generated as the end point of a string operator while gauge fluxes are generated as the boundary of a membrane operator. When the excitation is abelian, the string/membrane operator can be implemented with a finite depth circuit. In other words, small pieces of the string/membrane operator can be connected without defect. When the excitation is non-abelian, the string/membrane operator has to be implemented sequentially, and requires a sequential linear depth circuit in 1d/2d. Such excitations are called elementary excitations. Descendant excitations like the Cheshire string, on the other hand, do not have to be created as a closed loop and hence can live on open strings (or higher dimensional discs). They are nontrivial in the sense that, when they are created in the dimension they are in, a sequential linear depth circuit is needed, while trivial excitations are created with finite depth circuits. Among the descendant excitations, some are invertible, such as SPT states and Majorana chains, etc. 
The invertible excitations can also be created as the boundary of a higher dimensional unitary and in this case a finite depth circuit is enough. The non-invertible ones, on the other hand, cannot be created with finite depth circuit even as the boundary of one higher dimension. Once the excitations are created, their equivalence is established by local unitary operations if the excitation is 0d and by finite depth circuits of nd if the excitation is nd. That is, the excitations can be deformed, moved, and fused using such unitaries. We are indebted to inspiring discussions with Liang Kong, John McGreevy and Xiao-Gang Wen. X.C. is supported by the National Science Foundation under award number DMR-1654340, the Simons collaboration on “Ultra-Quantum Matter” (grant number 651438), the Simons Investigator Award (award ID 828078) and the Institute for Quantum Information and Matter at Caltech. N.T. and X.C. are supported by the Walter Burke Institute for Theoretical Physics at Caltech. § OTHER TYPES OF DEFECTS IN 2+1D TORIC CODE Following the notation in Ref. roumpedakis2022higher, the Cheshire string discussed in section <ref> and <ref> is a defect of the 2+1D Toric Code labeled by S_e. There are four more types of nontrivial defects in 2+1D Toric Code – S_m, S_ψ, S_em and S_me. Their generation and fusion follow very similar rules. To generate these nontrivial descendant defects on an open interval, a linear depth sequential circuit is needed. To deform, move or fuse these defect, a finite depth circuit is sufficient. In this section, we demonstrate explicitly the generation of S_m, S_ψ and the fusion of S_em× S_e = S_e, S_ψ× S_e = S_m and S_ψ× S_ψ = S_1, where S_1 is the trivial defect. All other generation and fusion processes can be derived from here. For the fusion process, we will show how the circuit works in the bulk of the defect without involving the end points. The end points usually lead to extra complications but do not change the fusion result. In all figures, dashed black edges are in the |0⟩ state stabilized by Z and dash-dotted black edges are in the |+⟩ state stabilized by X. No-arrow connectors represent the controlled-Z gate: U_CZ = [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 -1 ]. One-arrow connectors represent the controlled-X gate with the arrow pointing to the target: U_CX = [ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0 ]. Two-arrow connectors represent the X-controlled-Not gate, which is the controlled-X gate with the control qubit conjugated by the Hadamard gate: U_XCX = 1/2[ 1 1 1 -1; 1 1 -1 1; 1 -1 1 1; -1 1 1 1 ]. The gauge flux m is dual to the gauge charge e in Toric Code by switching between lattice and dual lattice and between X and Z operators. Therefore, to generate the S_m defect from vertex v_0 to vertex v_N, a sequential circuit of the form U = ∏_i=N^1 R(B_v_i)R(X_e_i-1,i) as shown in Fig. <ref> can be applied, where again R(O) = e^-iπ/4O. This circuit maps B_v_i to X_e_i-1,i while leaving all plaquette Hamiltonian terms invariant. It hence opens up a slit of vacuum state with a `rough' boundary to the topological bulk. The S_ψ defect is an invertible domain wall inside the topological bulk which permutes e and m excitations. That is, if an e excitation goes through the defect, it comes out as m and vice versa. The S_ψ defect can be generated with a sequential circuit as shown in Fig. <ref>. Denote the blue term in Fig. <ref>(1) around plaquette 1 as O_1. The first step of the circuit is U = ∏_i=N^1 R(O_i) This step is a sequential linear circuit. 
The second step is composed of controlled-X gates represented by the one-arrow connectors in the bottom plaquettes, as well as R(Z)s represented by blue diamonds, as shown in Fig. <ref>(2). The gates in the second step all commute with each other; therefore, the second step has depth one. Hamiltonian terms before and after the circuit are shown with corresponding colors (red, yellow and purple) in (1) and (2). The resulting terms take the same form as in Fig. 1 of Ref. Kitaev2012. A different version of the circuit was proposed in Ref. Lensky2023 and implemented in Ref. Google2023. Fig. <ref> shows the fusion of S_e and S_m into S_em. In Fig. <ref> (1), the bottom defect is the S_e defect with smooth boundaries on the two sides and the top defect is the S_m defect with rough boundaries on the two sides. Applying the controlled-Not gates represented by the one-arrow connectors in (2) maps the three body Hamiltonian terms on the domain wall between the two defects (shown in (1)) to single X and Z terms (shown with corresponding colors in (2)). Therefore, after the finite depth circuit, the two defects merge into one, S_em, with a smooth boundary at the bottom and a rough boundary at the top. Fig. <ref> shows the fusion of S_e and S_ψ into S_em. In Fig. <ref>(1), the bottom defect is the S_e defect with smooth boundaries on the two sides and the top defect is the S_ψ defect taking the form shown in Fig. <ref>. The circuit is composed of controlled-Z gates (represented by the no-arrow connectors) and the controlled-Not gates (represented by the one-arrow connectors). All gates commute and the circuit has depth one. The yellow and green terms are mapped to single qubit terms after the circuit while the purple term becomes a three body plaquette term on the rough side of the boundary. Therefore, S_e and S_ψ fuse into S_em, as shown in Fig. <ref>(2), which takes the same form as in Fig. <ref>(2). S_ψ is an invertible defect and S_ψ× S_ψ = S_1. Fig. <ref> shows how the fusion can be realized with a finite depth circuit. The first step of the circuit ((1) to (2)) is composed of controlled-Z gates (represented by the no-arrow connectors) and the controlled-Not gates (represented by the one-arrow connectors) as shown in (2). After this step the yellow terms map to decoupled qubits in the |+⟩ state on the middle line. The second step of the circuit ((2) to (3)) is composed of X-controlled-Not gates (Eq. <ref>) represented by the two-arrow connectors in (3). The purple and blue terms map to decoupled qubits in this step. Finally, with controlled-Not gates in the last step (shown in (4) with the one-arrow connectors), the red terms map to decoupled qubits. The green terms become the plaquette term of the Toric Code bulk. All gates in each step of the circuit commute with each other; therefore, the circuit has finite depth. It might seem that instead of recovering the regular Toric Code on the square lattice, we end up with a dislocation on the square lattice. But this is not a problem because a dislocation can be generated or removed with a finite depth circuit in the Toric Code, as shown in Fig. <ref>. Starting from a regular Toric Code on the square lattice, diagonal edges can be added into each plaquette with the dark blue gate sets as shown in Fig. <ref> (1), dividing each square into two triangles. All the dark blue gate sets commute with each other and can be applied in one step. Next, the vertical edges between the triangles can be removed with the light blue gate sets as shown in Fig. 
<ref> (2), merging two triangles into a parallelogram. All light blue gate sets commute with each other as well, so we have another depth one circuit. After these two steps, we have introduced a dislocation defect into the square lattice.
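The stabilizer bookkeeping used in the fusion and dislocation arguments above reduces to how conjugation by controlled-X and controlled-Z gates transports X- and Z-type Pauli operators. The following numpy sketch (our illustration; the particular Paulis checked are arbitrary) verifies the standard conjugation rules that the figures rely on.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Controlled-X with qubit 0 as control and qubit 1 as target, and controlled-Z.
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def conj(U, P):
    # Return U P U^dagger.
    return U @ P @ U.conj().T

# CX: X on the control spreads to the target, Z on the target spreads to the control.
assert np.allclose(conj(CX, np.kron(X, I2)), np.kron(X, X))
assert np.allclose(conj(CX, np.kron(I2, Z)), np.kron(Z, Z))
# Z on the control and X on the target are left unchanged.
assert np.allclose(conj(CX, np.kron(Z, I2)), np.kron(Z, I2))
assert np.allclose(conj(CX, np.kron(I2, X)), np.kron(I2, X))
# CZ: X on either qubit picks up a Z on the other.
assert np.allclose(conj(CZ, np.kron(X, I2)), np.kron(X, Z))
assert np.allclose(conj(CZ, np.kron(I2, X)), np.kron(Z, X))
print("controlled-gate conjugation rules verified")
```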
http://arxiv.org/abs/2307.01981v1
20230705014519
A ChatGPT Aided Explainable Framework for Zero-Shot Medical Image Diagnosis
[ "Jiaxiang Liu", "Tianxiang Hu", "Yan Zhang", "Xiaotang Gai", "Yang Feng", "Zuozhu Liu" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG" ]
A ChatGPT Aided Explainable Framework for Zero-Shot Medical Image Diagnosis. Jiaxiang Liu, Tianxiang Hu, Yan Zhang (equal contribution), Xiaotang Gai, Yang Feng, Zuozhu Liu. Affiliations: Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, China; National University of Singapore, Singapore; Angelalign Inc., Shanghai, China. Correspondence: Zuozhu Liu (zuozhuliu@intl.zju.edu.cn). Zero-shot medical image classification is a critical process in real-world scenarios where we have limited access to all possible diseases or large-scale annotated data. It involves computing similarity scores between a query medical image and possible disease categories to determine the diagnostic result. Recent advances in pretrained vision-language models (VLMs) such as CLIP have shown great performance for zero-shot natural image recognition and exhibit benefits in medical applications. However, an explainable zero-shot medical image recognition framework with promising performance is still under development. In this paper, we propose a novel CLIP-based zero-shot medical image classification framework supplemented with ChatGPT for explainable diagnosis, mimicking the diagnostic process performed by human experts. The key idea is to query large language models (LLMs) with category names to automatically generate additional cues and knowledge, such as disease symptoms or descriptions other than a single category name, to help provide more accurate and explainable diagnosis in CLIP. We further design specific prompts to enhance the quality of the texts generated by ChatGPT that describe visual medical features. Extensive results on one private dataset and four public datasets along with detailed analysis demonstrate the effectiveness and explainability of our training-free zero-shot diagnosis pipeline, corroborating the great potential of VLMs and LLMs for medical applications. § INTRODUCTION Large-scale pretrained vision-language models (VLMs), such as Contrastive Language-Image Pre-Training (CLIP), have shown great performance in various visual and language tasks, especially in zero-shot recognition tasks <cit.>. In the standard zero-shot image classification scenario, CLIP computes similarity scores between a query image and different category names (texts), and the category with the highest similarity score is regarded as the classification result <cit.>. Recent work extends ideas in CLIP to medical image analysis, e.g., how CLIP benefits medical image classification or the training of large-scale VLMs such as MedCLIP in the medical domain <cit.>. One of the key tasks in extending VLMs to medical domains is zero-shot medical image classification, which is critical in real-world scenarios where we may not have access to all possible diseases or annotated medical images are hardly available <cit.>. However, this significant task remains seldom explored, especially with VLMs, let alone frameworks for explainable diagnosis with visual or textual medical information. The feasibility of directly transferring VLMs like CLIP to medical domains remains to be checked for at least two reasons. On one hand, CLIP and many other VLMs are pre-trained with natural image-text pairs, which can lack attention to medical information and lead to abysmal performance <cit.>. 
On the other hand, medical image classification does embrace model interpretability, while the category texts of medical images tend to be highly abstract medical lexicons that are inherently more challenging to interpret and analyze with existing VLMs <cit.>. It is reasonable to follow the standard zero-shot classification paradigm in natural image recognition for medical image classification <cit.>. As illustrated in Figure <ref> (top row), an input image, such as a brain MR or fundus image, and the possible categories could be fed into the multimodal CLIP model to compute corresponding similarity scores and subsequently make decisions. However, such a standard pipeline suffers from the aforementioned limitations with inferior performance and limited interpretability, i.e., the attention map generated by the representations of the image and category name show that the model fails to focus on the area of interest for identifying the diagnostic category <cit.>. A simple idea is providing additional information about each disease to assist diagnosis, and if possible, providing them in a scalable way to avoid time-consuming hand-crafting. For instance, one may consider utilizing generative models to create the descriptions, with which CLIP can inference based on certain symptoms, very much like the diagnostic process performed by human experts in practice. In this paper, we propose a novel framework for explainable zero-shot medical image classification. The key idea is to leverage LLMs to automatically generate additional cues and knowledge, such as disease symptoms or descriptions other than the standalone category name, to help provide more accurate and explainable diagnosis. The effectiveness of our method is illustrated in Figure <ref> (bottom row), where the model obviously pays more attention to relevant tissues after incorporating the information of more detailed descriptions. In particular, besides leveraging VLMs like CLIP, we supply our model with the recently released ChatGPT model to automatically provide detailed category descriptions. Considering that ChatGPT may deliver inaccurate information in medical queries, we further design a prompt to query ChatGPT to get textual descriptions of useful visual symptoms to identify diagnostic categories. Extensive experimental results demonstrate the superiority of our method. The main contributions could be summarized as: * We have shown for the first time in the medical domain the feasibility of incorporating ChatGPT for better CLIP-based zero-shot image classification, revealing the potential of LLMs-aided designs for medical applications. Through ablation studies, we have also elucidated the considerable scope for augmenting the performance of our approach through prompt designs. * We propose a novel CLIP-based zero-shot medical image diagnosis paradigm. In comparison to the conventional CLIP-based approach, our proposed paradigm exhibits significant enhancements in medical image classification accuracy while concurrently offering a notable level of explainability in various disease diagnosis. * We comprehensively evaluate our method on five medical datasets, including pneumonia, tuberculosis, retinopathy, and brain tumor. Extensive experiments and analysis demonstrate the promising zero-shot recognition performance and a considerable level of interpretability of our approach. 
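As a concrete illustration of the symptom-generation step described above, one could query ChatGPT per diagnostic category roughly as follows. This is a hedged sketch rather than the authors' released code: it assumes the openai-python 0.x ChatCompletion interface, and the model name, placeholder API key, and line-based parsing of the reply are our own choices; the prompt wording echoes the designed prompt discussed later in the paper.

```python
# Minimal sketch (not the authors' code) of collecting per-category symptom
# descriptions from ChatGPT; all API details here are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

PROMPT = ("Q: According to published literature, what are useful medical "
          "visual features for distinguishing {category} in a photo?")

def query_symptoms(category, model="gpt-3.5-turbo"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(category=category)}],
        temperature=0.0,
    )
    text = response["choices"][0]["message"]["content"]
    # Keep non-empty lines of the reply as individual symptom phrases.
    return [line.strip("-• ").strip() for line in text.splitlines() if line.strip()]

symptoms = {c: query_symptoms(c) for c in ["Pneumonia", "Normal lungs"]}
```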
§ RELATED WORK §.§ Large Language Model Large language models use deep neural networks with billions of parameters to learn patterns from large amounts of text data <cit.>, and their primary objective is to generate human-like responses based on the context they are provided <cit.>. ChatGPT is a state-of-the-art large language model that specializes in language understanding and inference. With the advanced transformer architecture and massive training dataset, ChatGPT is able to deliver coherent and meaningful responses to a wide variety of prompts, including questions consisting of a few words and complex dialogues. It has shown impressive performance in many language tasks such as machine translation, text summarization, conversation generation, and sentiment analysis <cit.>. Reports have noted that ChatGPT earned a decent grade on the US Medical Licensing Exam, but the well-known fact that ChatGPT can generate made-up knowledge has raised a dispute over whether ChatGPT can be used in medical diagnosis. Our proposed approach introduces an innovative and prospective paradigm that can integrate LLMs like ChatGPT into medical diagnosis, yielding promising performance in explainable zero-shot medical image diagnosis. §.§ Vision-language Pre-training Visual-language (VL) pre-training is meant to pre-train multi-modal models on large-scale datasets that contain both visual and textual information <cit.>, e.g., images and captions, to learn joint representations that capture the complex interactions between the two modalities. In practice, due to the high cost of acquiring manually annotated datasets, most visual-language models <cit.> are trained with image-text pairs captured from the Internet <cit.>. As an important example, with pre-training on 400 million image-text pairs from the Internet, the model CLIP <cit.> gained rich cross-modal representations and achieved impressive results on a wide range of visual tasks without any fine-tuning. Based on the abundant knowledge CLIP has learned, we are able to establish the framework to tackle medical image diagnosis tasks in a training-free manner. § METHOD §.§ Problem Formulation and Method Pipeline We focus on zero-shot medical image classification tasks where we compute similarity scores for image-text query pairs (x,c), c∈ C, and the category ĉ with the highest similarity score is regarded as the classification result of image x, where C is the label set. Note that there is no need for model training as long as we have pretrained VLMs or LLMs in our framework. The pipeline is illustrated in Figure <ref>. The image x is processed through the visual encoder of CLIP to obtain its visual representation: f = VisualEncoder(x). In parallel, ChatGPT is queried with our designed prompt to generate major symptoms for each diagnostic category: s_1^c,..., s_m^c = ChatGPT(prompt, c), where m denotes the total number of generated symptoms (see details in the next section). The symptom phrases are then sent into the text encoder of CLIP to obtain text representations: g_1^c, ..., g_m^c = TextEncoder(s_1^c,..., s_m^c). A score function S is defined to evaluate the similarity of the image-text pair (x,c) at the feature level: S(x, c) = 1/m∑_i=1^m f · g_i^c. We use average aggregation to ensure fair evaluation for classes with different numbers of symptoms, and more aggregation strategies are evaluated in the experiments. A high score of x and c suggests a significant degree of relevance between the medical image and the class. 
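The score S(x,c) can be sketched with the open-source CLIP package as below (our illustration, not the released implementation); the checkpoint name, image path, and example symptom phrases, which partly echo symptoms discussed later in the paper, are placeholders.

```python
# Minimal sketch of S(x, c) = (1/m) * sum_i f . g_i^c with average aggregation.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def class_score(image_path, symptoms):
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    tokens = clip.tokenize(symptoms).to(device)
    with torch.no_grad():
        f = model.encode_image(image)   # visual representation f
        g = model.encode_text(tokens)   # one embedding g_i per symptom phrase
    f = f / f.norm(dim=-1, keepdim=True)
    g = g / g.norm(dim=-1, keepdim=True)
    return (f @ g.T).mean().item()      # average aggregation over the m symptoms

symptoms_per_class = {
    "Pneumonia": ["Air bronchogram sign", "Patchy consolidation of the lung"],
    "Normal lungs": ["Clear and distinct lung borders", "Absence of pleural effusions"],
}
scores = {c: class_score("example_xray.png", s) for c, s in symptoms_per_class.items()}
```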
Going over all categories c ∈ C, the one with the maximum similarity score is finally taken as the predicted diagnosis of the input image x: ĉ = *argmax_c ∈ C S(x,c) = *argmax_c ∈ C1/m∑_i=1^m f · g_i^c. §.§ The Designed Prompt High-quality symptom descriptions are essential for the success of our method. In this work, we query ChatGPT for useful symptoms for the diagnosis of certain diseases. A baseline prompt choice could be “Q: What are useful visual features for distinguishing {Diagnostic Category} in a photo?". However, in experiments we found that such baseline prompts can produce misleading symptom descriptions in the returned answer. Hence, we secure the fidelity of the generated information by designing the prompts to work more professionally. First, noticing that the objective of the method is to replicate the diagnostic process carried out by medical professionals, we emphasize in the prompt that the generation should focus on medical features instead of general descriptions of the kind we would get when querying explanations for the class of a general natural image. In addition, we adjust the prompt to direct ChatGPT's attention more toward published literature. Figure <ref> exhibits the symptoms generated by ChatGPT using our designed prompt. As expected, the generated symptoms center around the diagnostic category, which typically involves the presence or absence of certain structures, the location and clarity of relevant tissues, descriptions of organ boundaries, etc. § EXPERIMENTS §.§ Datasets and Experimental Setup §.§.§ Datasets Pneumonia Dataset: The Pneumonia Chest X-ray Dataset consists of images selected from retrospective cohorts of pediatric patients of one to five years old from Guangzhou Women and Children’s Medical Center in Guangzhou, Guangdong Province, China <cit.>. A total of 5,232 chest X-ray images of children are collected and labeled, including 3,883 characterized as having pneumonia (2,538 bacterial and 1,345 viral) and 1,349 normal. Montgomery Dataset: The Montgomery County X-ray Set was made in the tuberculosis control program of the Department of Health and Human Services of Montgomery County, MD, USA <cit.>. The dataset contains 138 posterior-anterior X-rays, of which 80 X-rays are normal and 58 X-rays are abnormal with manifestations of tuberculosis, covering a wide range of abnormalities including effusions and miliary patterns. Shenzhen Dataset: The Shenzhen Hospital X-ray Set was collected by Shenzhen No.3 Hospital in Shenzhen, Guangdong Province, China <cit.>, where the X-rays are acquired as part of the routine care at the hospital. There are 326 normal X-rays and 336 abnormal X-rays showing various manifestations of tuberculosis. IDRID Dataset: The Indian Diabetic Retinopathy Image Dataset is the first database representative of an Indian population regarding diabetic retinopathy, and is the only dataset that constitutes typical diabetic retinopathy lesions <cit.>. The dataset has 516 retinal images in total, and provides information on the disease severity of diabetic retinopathy (DR) for each image according to medical experts' grading with a variety of pathological conditions of DR. BrainTumor Dataset: All datasets mentioned previously are composed of publicly available images which may potentially be part of the resource for CLIP's pre-training. 
To check the capability of our method on data that is absolutely invisible for CLIP, we originally construct a dataset of brain tumor images, specifically targeting glioblastoma multiforme and primary central nervous system lymphoma which are often hard to discriminate, even for medical professionals. The dataset encompasses 338 glioblastoma multiforme images and 255 primary central nervous system lymphoma images. §.§.§ Experimental setup All implementations are based on the PyTorch framework and on an Ubuntu server with a single NVIDIA GEFORCE RTX 3090 GPU. For visual and textual encoders, we utilize five CLIP versions including RN50, RN101, RN50x64, ViT-B/32, and ViT-L/14, covering different levels of parameter size. §.§ Main Results §.§.§ Classification Performance We evaluate our method on the five datasets across different CLIP versions. Results are reported in Table <ref> where the best accuracies are displayed in the bottom row. Results show that our approach can achieve consistent improvements over the standard zero-shot classification using CLIP across all datasets, where the method has raised the zero-shot binary classification accuracy to over 62%, and the effectiveness of the method is especially evident on the Pneumonia dataset and Shenzhen dataset with improvements of up to 11.73% and 17.37% respectively. Remarkably, on the Shenzhen dataset, applying different CLIP versions without incorporating LLM yields a uniform accuracy of 50.76% due to the incapability of diagnosis which results in the misclassification such that all X-ray images are identified as abnormal (Shenzhen dataset includes 336 abnormal X-rays and 326 normal X-rays, 50.76%≈336/336+326), yet our method can reach accuracy as high as 68.13%. Such findings indicate the presence of inherent potential within CLIP for undertaking medical image classification tasks such as identifying “Tuberculosis", where the potential remains largely untapped when CLIP is employed directly, and can be unleashed to a considerable extent with our designs. §.§.§ Interpretability As shown in Figure <ref>, we conduct statistical analysis on selected medical images where our method successfully predicts the diagnostic category yet CLIP fails. For any such an image, specifically, we calculate the similarity degrees between the image and generated texts for both the true image class and the alternative. The pink bars in our visualizations indicate the similarity between the medical images and the text characteristics of the true category hinted by ChatGPT, while the green bars indicate the similarity between the medical images and the text (identified by ChatGPT) of the category inferenced by the CLIP. It is evident that the accuracy of our framework judgment is due to the high similarity between the images and the correct category characteristics. For instance, Figure 4 (a) shows that the characteristics of normal lungs exhibit higher similarity compared to those of tuberculosis in a comprehensive manner, as the majority of normal lung characteristics displays superiority. Our framework thereby identifies the image to be normal lungs according to prominent texts such as “No visible cavities or consolidations", “Absence of pleural effusions", “Clear and distinct lung borders". Figure 4 (b) shows that “Venous beading and loops" and “Neovascularization" exhibit the highest similarity among all characteristics, which leads to the identification of “Severe Nonproliferative Retinopathy". 
In the other two instances as illustrated in Figure 4 (c) and Figure 4 (d), dominant characteristics are observed. Specifically, Figure 4 (c) and Figure 4 (d) shows the conspicuous prominence of the similarity exhibited by “Air bronchogram sign" and “Restricted diffusion on MRI" which leads to successful identification of “Pneumonia" and “Primary Central Nervous System Lymphoma" respectively. Generally, a significant portion of generated visual symptoms for the true diagnosis possess a dominant similarity, indicating that our framework recognizes unique visual patterns to arrive at an accurate diagnosis. §.§.§ Case Study Furthermore, we also provide two case studies with visualizations to see how the symptoms generated by ChatGPT contribute to classification decisions, as illustrated in Figure <ref>. In particular, we compare attention maps from our method with detailed symptoms to those produced by CLIP with only diagnostic category names. The first case exhibits proliferative retinopathy. We can observe that with the additional information provided by symptom texts such as “Fibrous proliferation", “Tractional retinal detachment", “Vitreous hemorrhage", the model's attention is increasingly drawn to the scar tissue on the retina. One notable example is “Tractional retinal detachment", which refers to the separation of the retina from the retinal pigment epithelium due to the pull of hyperplasia of fibrous tissue or scar tissue. In this case, ChatGPT generates the symptom text “Tractional retinal detachment", while CLIP focuses on the scar tissue in accordance with the text generated by ChatGPT, leading to the successful identification of “Proliferative Retinopathy". Likewise, the second case of primary central nervous system lymphoma indicates that symptoms such as “Absence of calcifications" and “Homogeneous contrast enhancement" direct the model to concentrate more on the tumor area. §.§ Ablation Study §.§.§ Effectiveness of the designed prompt We compare the performance of the designed prompt “Q: According to published literature, what are useful medical visual features for distinguishing {Diagnostic Category} in a photo?" to the baseline prompt “Q: What are useful visual features for distinguishing {Diagnostic Category} in a photo?". Table <ref> shows the superiority of ours on four out of five datasets, including the private one that CLIP has absolutely never encountered before. We attribute the advancement to preciser information acquired through a more appropriate querying, which is further demonstrated in Figure <ref>, where our prompt results in more attention around the upper lobes of lungs, the area of interest for identifying tuberculosis. Figure <ref> also shows that the baseline prompt can generate noisy symptoms, which may confuse the diagnosis. §.§.§ Effectiveness of different aggregations strategies to compute S(x,c) For computing S, we consider operations mean and max to aggregate f · g_i^c. Table <ref> in appendix records prediction accuracies of each approach across all CLIP versions where the best results are displayed in the bottom row. Results indicate that aggregation operation mean performs best on almost all datasets. §.§ Our framework vs. OpenFlamingo We conduct a comparison between our framework and existing open source multimodal large models, such as OpenFlamingo <cit.>. Flamingo <cit.>/OpenFlamingo <cit.> incorporates new gated cross-attention-dense layers within a frozen pre-trained LLM to condition the LLM on visual inputs. 
The keys and values in these layers are derived from vision features, while queries are derived from language inputs. For medical image diagnosis in OpenFlamingo, medical visual question answering is adopted, where the question is “Is this an image of {Diagnostic Category}?" In our experiments, we use OpenFlamingo 9B model, an open-source replica of the DeepMind Flamingo, trained on 5M samples from the new Multimodal C4 dataset <cit.> and 10M samples from LAION-2B. Experiments show that our framework outperforms OpenFlamingo in most datasets for medical image diagnosis. While OpenFlamingo achieves 100% accuracy for the Shenzhen dataset, it falls short in other datasets. The 100% accuracy may be attributed to the inclusion of the Shenzhen dataset within OpenFlamingo's pre-training dataset. In the other datasets, our method achieves a performance gain of 2.59% to 5.80% over OpenFlamingo for the zero-shot task. Notably, our interpretable framework surpasses the accuracy of OpenFlamingo by 5.59% in our constructed private dataset, BrainTumor. In summary, experimental results indicate that our interpretable framework can be superior to large multimodal pre-training models such as OpenFlamingo in terms of medical diagnosis. §.§ Discussion Our work presents an initial trial of zero-shot medical image diagnosis with LLMs and VLMs. Our proposed paradigm can greatly unleash the power of VLMs (We use CLIP in our experiments) to provide explainability within the medical image diagnosis process and achieve noteworthy zero-shot medical image classification accuracy boost. Except for one dataset, using our method, the zero-shot image classification accuracy with CLIP has been raised to over 62%, and that on two datasets over five exhibits an improvement over 10%, which certainly increases the optimism of a decent diagnosis accuracy using such a low-cost approach that requires no extra network training at all, motivating investigation of more scenarios with VLMs and LLMs, such as data-efficient and few-shot learning in multimodal medical data analysis. It is intriguing that with a slight modification of the prompt, the prediction accuracy is improved on almost all datasets, which indicates the potential of enhancement that can be brought by better querying, calling for more works concerning well-designed prompts. Another aspect to be discovered for upgrading is the multi-modal feature aggregation mechanism. We have explored the mean, max approaches in our experiments, in which mean performs the best. More strategies can be examined in future work. For example, instead of using mean, one may consider evaluating the significance of different symptoms, perhaps with the help of ChatGPT, and create a more effective weight setting accordingly. One shortcoming of our method is its unsatisfactory performance on the IDRID dataset, which could be a result of the inherent challenge in the recognition task. Rather than to detect the presence of certain diseases as in every other dataset, IDRID requires to evaluate the severity, in which the distinction space between classes is undoubtedly smaller. The difficulty has also been reflected in the suboptimal accuracy values of supervised learning methods <cit.>, which are reported very recently. In addition, the performance of zero-shot classification is still not on par with the supervised counterparts, as shown in Table <ref> in Appendix. We envision this gap would soon narrow down with better designs of architectures and prompts. 
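Returning to the aggregation point raised in the discussion above, the mean and max strategies, as well as a softmax-weighted variant of the kind suggested there, can all be computed from the same vector of per-symptom similarities. The sketch below is our illustration only; the temperature and example similarity values are arbitrary.

```python
import torch

def aggregate(sims, mode="mean", temperature=10.0):
    # sims: tensor of shape [m] holding the per-symptom similarities f . g_i^c
    if mode == "mean":      # default aggregation used in the paper
        return sims.mean()
    if mode == "max":       # alternative evaluated in the ablation
        return sims.max()
    if mode == "softmax":   # weighted variant: emphasize the most similar symptoms
        weights = torch.softmax(temperature * sims, dim=0)
        return (weights * sims).sum()
    raise ValueError(f"unknown aggregation mode: {mode}")

sims = torch.tensor([0.21, 0.25, 0.18, 0.30])
print(aggregate(sims, "mean"), aggregate(sims, "max"), aggregate(sims, "softmax"))
```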
Another limitation of our method is that it does not address the issue of hallucinations or inaccuracies in disease diagnosis that may be caused by ChatGPT in more complex medical scenarios. This is a significant concern in medical image recognition, and advanced algorithms must be developed to improve the accuracy of disease diagnosis provided by ChatGPT. While the prompt design proposed in this paper provides a solution, it is not a perfect one. We plan to further investigate this issue in the future to mitigate the problem of ChatGPT hallucinations in the medical field. § CONCLUSION In this work, we propose an explainable framework for zero-shot medical image classification by integrating ChatGPT and CLIP. Extensive experiments on five challenging medical datasets, including pneumonia, tuberculosis, retinopathy, and brain tumor, demonstrate that our method is able to carry out explainable diagnosis and boost zero-shot image classification accuracy. We hope our work could encourage more in-depth research that leverages VLMs and LLMs for efficient, accurate and explainable medical diagnosis. 34 urlstyle [Alayrac et al.(2022)Alayrac, Donahue, Luc, Miech, Barr, Hasson, Lenc, Mensch, Millican, Reynolds, et al.]alayrac2022flamingo Alayrac, J.-B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:0 23716–23736, 2022. [Awadalla et al.(2023)Awadalla, Gao, Gardner, Hessel, Hanafy, Zhu, Marathe, Bitton, Gadre, Jitsev, et al.]awadalla2023openflamingo Awadalla, A., Gao, I., Gardner, J., Hessel, J., Hanafy, Y., Zhu, W., Marathe, K., Bitton, Y., Gadre, S., Jitsev, J., et al. Openflamingo, 2023. [Brown et al.(2020)Brown, Mann, Ryder, Subbiah, Kaplan, Dhariwal, Neelakantan, Shyam, Sastry, Askell, et al.]brown2020language Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neural information processing systems, 33:0 1877–1901, 2020. [Chefer et al.(2021)Chefer, Gur, and Wolf]chefer2021generic Chefer, H., Gur, S., and Wolf, L. Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 397–406, 2021. [Chen et al.(2019)Chen, Li, Yu, El Kholy, Ahmed, Gan, Cheng, and Liu]chen2019uniter Chen, Y.-C., Li, L., Yu, L., El Kholy, A., Ahmed, F., Gan, Z., Cheng, Y., and Liu, J. Uniter: Learning universal image-text representations. 2019. [Chen et al.(2020)Chen, Li, Yu, El Kholy, Ahmed, Gan, Cheng, and Liu]chen2020uniter Chen, Y.-C., Li, L., Yu, L., El Kholy, A., Ahmed, F., Gan, Z., Cheng, Y., and Liu, J. Uniter: Universal image-text representation learning. In European conference on computer vision, pp. 104–120. Springer, 2020. [Devlin et al.(2018)Devlin, Chang, Lee, and Toutanova]devlin2018bert Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. [Do et al.(2021)Do, Nguyen, Tjiputra, Tran, Tran, and Nguyen]do2021multiple Do, T., Nguyen, B. X., Tjiputra, E., Tran, M., Tran, Q. D., and Nguyen, A. Multiple meta-model quantifying for medical visual question answering. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 64–74. Springer, 2021. 
[Eslami et al.(2021)Eslami, de Melo, and Meinel]eslami2021does Eslami, S., de Melo, G., and Meinel, C. Does clip benefit visual question answering in the medical domain as much as it does in the general domain? arXiv preprint arXiv:2112.13906, 2021. [Houlsby et al.(2019)Houlsby, Giurgiu, Jastrzebski, Morrone, De Laroussilhe, Gesmundo, Attariyan, and Gelly]houlsby2019parameter Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pp. 2790–2799. PMLR, 2019. [Jaeger et al.(2014)Jaeger, Candemir, Antani, Wáng, Lu, and Thoma]jaeger2014two Jaeger, S., Candemir, S., Antani, S., Wáng, Y.-X. J., Lu, P.-X., and Thoma, G. Two public chest x-ray datasets for computer-aided screening of pulmonary diseases. Quantitative imaging in medicine and surgery, 40 (6):0 475, 2014. [Jang et al.(2022)Jang, Girard, and Thiery]jang2022explainable Jang, S.-I., Girard, M. J., and Thiery, A. H. Explainable and interpretable diabetic retinopathy classification based on neural-symbolic learning. arXiv preprint arXiv:2204.00624, 2022. [Jia et al.(2021)Jia, Yang, Xia, Chen, Parekh, Pham, Le, Sung, Li, and Duerig]jia2021scaling Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., and Duerig, T. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pp. 4904–4916. PMLR, 2021. [Karimi Mahabadi et al.(2021)Karimi Mahabadi, Henderson, and Ruder]karimi2021compacter Karimi Mahabadi, R., Henderson, J., and Ruder, S. Compacter: Efficient low-rank hypercomplex adapter layers. Advances in Neural Information Processing Systems, 34:0 1022–1035, 2021. [Kermany et al.(2018)Kermany, Goldbaum, Cai, Valentim, Liang, Baxter, McKeown, Yang, Wu, Yan, et al.]kermany2018identifying Kermany, D. S., Goldbaum, M., Cai, W., Valentim, C. C., Liang, H., Baxter, S. L., McKeown, A., Yang, G., Wu, X., Yan, F., et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. cell, 1720 (5):0 1122–1131, 2018. [Li et al.(2022)Li, Li, Xiong, and Hoi]li2022blip Li, J., Li, D., Xiong, C., and Hoi, S. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. arXiv preprint arXiv:2201.12086, 2022. [Li et al.(2020)Li, Yin, Li, Zhang, Hu, Zhang, Wang, Hu, Dong, Wei, et al.]li2020oscar Li, X., Yin, X., Li, C., Zhang, P., Hu, X., Zhang, L., Wang, L., Hu, H., Dong, L., Wei, F., et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pp. 121–137. Springer, 2020. [Liu et al.(2021)Liu, Zhan, and Wu]liu2021contrastive Liu, B., Zhan, L.-M., and Wu, X.-M. Contrastive pre-training and representation distillation for medical visual question answering based on radiology images. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part II 24, pp. 210–220. Springer, 2021. [Luo et al.(2020)Luo, Xue, and Feng]luo2020automatic Luo, L., Xue, D., and Feng, X. Automatic diabetic retinopathy grading via self-knowledge distillation. Electronics, 90 (9):0 1337, 2020. [Mahabadi et al.(2021)Mahabadi, Ruder, Dehghani, and Henderson]mahabadi2021parameter Mahabadi, R. K., Ruder, S., Dehghani, M., and Henderson, J. 
Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks. arXiv preprint arXiv:2106.04489, 2021. [Mahapatra et al.(2021)Mahapatra, Bozorgtabar, and Ge]mahapatra2021medical Mahapatra, D., Bozorgtabar, B., and Ge, Z. Medical image classification using generalized zero shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3344–3353, 2021. [Mahapatra et al.(2022)Mahapatra, Ge, and Reyes]mahapatra2022self Mahapatra, D., Ge, Z., and Reyes, M. Self-supervised generalized zero shot learning for medical image classification using novel interpretable saliency maps. IEEE Transactions on Medical Imaging, 410 (9):0 2443–2456, 2022. [Menon & Vondrick(2022)Menon and Vondrick]menon2022visual Menon, S. and Vondrick, C. Visual classification via description from large language models. arXiv preprint arXiv:2210.07183, 2022. [Ouyang et al.(2022)Ouyang, Wu, Jiang, Almeida, Wainwright, Mishkin, Zhang, Agarwal, Slama, Ray, et al.]ouyang2022training Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022. [Porwal et al.(2018)Porwal, Pachade, Kamble, Kokare, Deshmukh, Sahasrabuddhe, and Meriaudeau]porwal2018indian Porwal, P., Pachade, S., Kamble, R., Kokare, M., Deshmukh, G., Sahasrabuddhe, V., and Meriaudeau, F. Indian diabetic retinopathy image dataset (idrid): a database for diabetic retinopathy screening research. Data, 30 (3):0 25, 2018. [Radford et al.(2019)Radford, Wu, Child, Luan, Amodei, Sutskever, et al.]radford2019language Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are unsupervised multitask learners. OpenAI blog, 10 (8):0 9, 2019. [Radford et al.(2021)Radford, Kim, Hallacy, Ramesh, Goh, Agarwal, Sastry, Askell, Mishkin, Clark, et al.]radford2021learning Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763. PMLR, 2021. [Sharma et al.(2018)Sharma, Ding, Goodman, and Soricut]sharma2018conceptual Sharma, P., Ding, N., Goodman, S., and Soricut, R. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2556–2565, 2018. [Shen et al.(2021)Shen, Li, Tan, Bansal, Rohrbach, Chang, Yao, and Keutzer]shen2021much Shen, S., Li, L. H., Tan, H., Bansal, M., Rohrbach, A., Chang, K.-W., Yao, Z., and Keutzer, K. How much can clip benefit vision-and-language tasks? arXiv preprint arXiv:2107.06383, 2021. [Sirshar et al.(2021)Sirshar, Hassan, Akram, and Khan]sirshar2021incremental Sirshar, M., Hassan, T., Akram, M. U., and Khan, S. A. An incremental learning approach to automatically recognize pulmonary diseases from the multi-vendor chest radiographs. Computers in Biology and Medicine, 134:0 104435, 2021. [Szepesi & Szilágyi(2022)Szepesi and Szilágyi]szepesi2022detection Szepesi, P. and Szilágyi, L. Detection of pneumonia using convolutional neural networks and deep learning. Biocybernetics and Biomedical Engineering, 420 (3):0 1012–1022, 2022. [Wang et al.(2022)Wang, Wu, Agarwal, and Sun]wang2022medclip Wang, Z., Wu, Z., Agarwal, D., and Sun, J. 
Medclip: Contrastive learning from unpaired medical images and text. arXiv preprint arXiv:2210.10163, 2022. [Wu et al.(2020)Wu, Shi, Chen, Shi, Chen, Coatrieux, Yang, Luo, and Li]wu2020coarse Wu, Z., Shi, G., Chen, Y., Shi, F., Chen, X., Coatrieux, G., Yang, J., Luo, L., and Li, S. Coarse-to-fine classification for diabetic retinopathy grading using convolutional neural network. Artificial Intelligence in Medicine, 108:0 101936, 2020. [Zhu et al.(2023)Zhu, Hessel, Awadalla, Gadre, Dodge, Fang, Yu, Schmidt, Wang, and Choi]zhu2023multimodal Zhu, W., Hessel, J., Awadalla, A., Gadre, S. Y., Dodge, J., Fang, A., Yu, Y., Schmidt, L., Wang, W. Y., and Choi, Y. Multimodal c4: An open, billion-scale corpus of images interleaved with text. arXiv preprint arXiv:2304.06939, 2023. § APPENDIX
http://arxiv.org/abs/2307.00547v1
20230702114721
Is Risk-Sensitive Reinforcement Learning Properly Resolved?
[ "Ruiwen Zhou", "Minghuan Liu", "Kan Ren", "Xufang Luo", "Weinan Zhang", "Dongsheng Li" ]
cs.LG
[ "cs.LG" ]
Is Risk-Sensitive Reinforcement Learning Properly Resolved? Ruiwen Zhou, Minghuan Liu, Weinan Zhang (Shanghai Jiao Tong University); Kan Ren, Xufang Luo, Dongsheng Li (Microsoft Research). Correspondence: Weinan Zhang (wnzhang@sjtu.edu.cn), Kan Ren (kan.ren@microsoft.com). Because learning applicable policies requires managing risk, risk-sensitive reinforcement learning (RSRL) has been recognized as an important direction. RSRL is usually achieved by learning risk-sensitive objectives characterized by various risk measures, under the framework of distributional reinforcement learning. However, it remains unclear if the distributional Bellman operator properly optimizes the RSRL objective in the sense of risk measures. In this paper, we prove that the existing RSRL methods do not achieve unbiased optimization and cannot guarantee optimality or even improvements regarding risk measures over accumulated return distributions. To remedy this issue, we further propose a novel algorithm, namely Trajectory Q-Learning (TQL), for RSRL problems with provable convergence to the optimal policy. Based on our new learning architecture, we are free to introduce a general and practical implementation for different risk measures to learn disparate risk-sensitive policies. In the experiments, we verify the learnability of our algorithm and show how our method effectively achieves better performance toward risk-sensitive objectives. § INTRODUCTION Reinforcement learning (RL) has shown its success on various tasks <cit.>, which usually requires the agent to take an enormous number of trial-and-error steps. However, most real-world applications are sensitive to failure and attach more importance to risk management, and thus need to turn to the help of risk-sensitive reinforcement learning (RSRL). Typically, RSRL can be achieved by building upon existing distributional RL approaches <cit.>. For instance, risk-sensitive actor-critic methods like <cit.> first learn distributional critics as normal distributional RL methods do, but take various distortion risk measures representing different risk preferences as the objective for the actor (e.g., conditional value-at-risk (CVaR)), which is computed based on the critic output. However, as we reveal in this paper, such solutions lead to biased optimization, fail to converge to an optimal solution in terms of risk-sensitive returns along the whole trajectory, and can sometimes lead to an arbitrarily bad policy. Therefore, an algorithm for unbiased optimization is desired in RSRL. Although some works have proposed solutions by defining risk in a per-step manner <cit.>, it is still challenging to resolve RSRL problems where a policy is learned to optimize the risk measure over accumulated return distributions. In this paper, we provide an in-depth analysis of the biased optimization issue of existing RSRL methods, identify the reason, and present the intuition behind our solution. Correspondingly, we propose Trajectory Q-Learning (TQL), a novel RSRL framework that is proven to learn the optimal policy w.r.t. various risk measures. Specifically, TQL learns a historical value function that models the conditional distribution of accumulated returns along the whole trajectory given the trajectory history. We give an extensive theoretical analysis of the learning behavior of TQL, indicating the convergence of policy evaluation, policy iteration, policy improvement, and (no) value iteration. 
Notably, we prove that the policy iteration of TQL can achieve unbiased optimization in RSRL. To the best of our knowledge, TQL is the first algorithm that can converge to the optimal risk-sensitive policy for all kinds of distortion risk measures. Experimentally, we verify our idea on both discrete mini-grid and continuous control tasks, showing TQL can be practically effective for finding optimal risk-sensitive policies and outperforms existing RSRL learning algorithms. § PRELIMINARIES §.§ Distributional Reinforcement Learning We consider a Markov decision process (MDP), denoted as a tuple (𝒮,𝒜,P,r,γ), where 𝒮 and 𝒜 represent the state and action spaces, P(s'|s,a) is the dynamics transition function (when it is deterministic we use s'=M(s,a) to represent the transition), r(s,a) is the reward function, and γ denotes the discount factor. The return Z^π(s,a)=∑_t=0^∞γ^t r(s_t,a_t) is a random variable representing the sum of the discounted rewards. The history h_t={s_0,a_0,⋯,s_t} is a state-action sequence sampled by the agent in the environment, and its space is ℋ=⋃_t[(∏_i=0^t-1(𝒮×𝒜))×𝒮]. The objective of reinforcement learning (RL) is to learn a policy that performs the action a∼π on a given state or history and maximizes the expected cumulative discounted reward 𝔼_π[Z^π(s,a)]. The optimization typically requires computing the state-action value function Q(s,a)=𝔼_π[Z^π(s,a)], which can be characterized by the Bellman operator 𝒯_B^π: 𝒯_B^π Q^π(s,a):=𝔼[r(s,a)]+γ𝔼_s'∼ P,a'∼π[Q^π(s',a')]. The optimal policies can be obtained by learning the optimal value Q^*=Q^π^* through the Bellman optimality operator 𝒯^*_B: 𝒯^*_B Q(s,a):=𝔼[r(s,a)]+γ𝔼_s'∼ P[max_a' Q(s',a')]. Instead of utilizing a scalar value function Q^π, which can be seen as optimizing the expectation of the distribution over returns, distributional RL considers modeling the whole distribution <cit.>. From a distributional perspective, we regard the return Z^π as a mapping from state-action pairs to distributions over returns, named the value distribution. Analogous to traditional RL, the goal is seeking a policy that maximizes the expected return over trajectories: π^*∈arg max_π𝔼_S_0,A_0∼π(·|S_0)[Z^π(S_0,A_0)], where Z^π(S_0, ·) represents the return distribution of the trajectory starting from a random initial state S_0 and following π. We can define a distributional Bellman operator 𝒯^π that estimates the return distribution Z^π: 𝒯^πZ(s,a):=^D R(s,a)+γ Z(S',A'), where A:=^D B means that random variables A and B follow the same distribution, R(s,a) is the reward distribution, S'∼ P(·|s,a) and A'∼π(·|s'). Correspondingly, the distributional Bellman optimality operator is 𝒯^*Z(s,a):=^D R(s,a)+γ Z(S',arg max_a'∈𝒜𝔼[Z(S',a')]). In this paper, we use capital letters to denote random variables and emphasize their random nature. §.§ Distortion Risk Measure and Risk-Sensitive RL As a type of risk measure, a distortion risk measure <cit.> β for a random variable X with the cumulative distribution function (CDF) F_X(x) is defined as β[X]=∫_-∞^∞ x ∂/∂ x(h_β∘ F_X)(x)  dx, where h_β:[0,1]→[0,1], called a distortion function, is a continuous non-decreasing function that transforms the CDF of X into (h_β∘ F_X)(x). Intuitively, a distortion function distorts the probability density of a random variable to give more weight to either higher- or lower-risk events. CVaR, for example, is among the most commonly used distortion risk measures. For readers unfamiliar with distortion risk measures, we list some typical examples and their definitions in ap:exp-vs-distortion. 
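To make the definition above concrete, the following numpy sketch (ours, not from the paper) estimates CVaR, written as the distortion risk measure with h_β(τ)=min(τ/α, 1), in two equivalent ways: by integrating the empirical quantile function against dh_β and by averaging the worst α-fraction of samples; the sample distribution and the risk level α are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=200_000)  # stand-in return samples
alpha = 0.25                                            # risk level (arbitrary)

# (1) Average of the worst alpha-fraction of outcomes.
sorted_x = np.sort(samples)
k = int(np.ceil(alpha * len(sorted_x)))
cvar_tail = sorted_x[:k].mean()

# (2) Quantile-integral form of the distortion risk measure,
#     beta[X] = int_0^1 F_X^{-1}(tau) dh_beta(tau), with h_beta(tau) = min(tau/alpha, 1).
taus = (np.arange(1000) + 0.5) / 1000
dh = np.where(taus < alpha, 1.0 / alpha, 0.0)           # dh_beta / dtau
quantiles = np.quantile(sorted_x, taus)
cvar_distorted = np.mean(quantiles * dh)

print(cvar_tail, cvar_distorted)  # both close to -1.27 for N(0,1) at alpha = 0.25
```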
Thereafter, risk-sensitive reinforcement learning (RSRL) is natural to combine various distortion risk measures with distributional RL for achieving a risk-sensitive behavior. In the sequel, a risk-sensitive optimal policy with distortion risk measure β can be defined as a deterministic policy π_β^* by the risk-sensitive return over random variable S_0∼ρ_0 representing the initial state: π_β^*∈max_π _S_0∼ρ_0,A_0∼π[β[Z^π(S_0,A_0)]] . We call eqn:risk-sensitive-policy the RSRL objective, as it seeks a policy that maximizes the risk measure of accumulated return over whole trajectory given the initial state distribution. Such a formulation was initially implemented in <cit.>, by directly changing the objective to risk measures computed from the value distribution. Some other works define and optimize risk in a per-step manner <cit.>, but this paper only focuses on the RSRL objective, as it is the natural risk-sensitive extension of RL. For readers interested in per-step risk definition, we give a brief introduction in ap:dynamic-vs-static. §.§ Metrics for Convergence In distributional RL, since the value function is modeled as a distribution, researchers utilize a maximal form of the Wasserstein metric to establish the convergence of the distributional Bellman operators <cit.> d̅_p(Z_1, Z_2) := sup_x, a d_p(Z_1(x,a), Z_2(x,a)), where Z_1, Z_2 ∈ are two value distributions and denotes the space of value distributions with bounded moments. The p-Wasserstein distance d_p is the L_p metric on inverse CDF, i.e., quantile functions <cit.>, which is defined as an optimal transport metric for random variables U and V with quantile functions F_U^-1 and F_V^-1 respectively: d_p(U, V) = ( ∫_0^1 |F_U^-1(ω) - F_V^-1(ω)|^p dω)^1/p. This can be realized as the minimal cost of transporting mass to make the two distributions identical. Requiring the distributional Bellman operators to converge in the metric of d̅_p indicates that we must match the value distribution. While in policy evaluation the distributional Bellman operator ^π (eq:dis-bellman) is shown to be a contraction in p-Wasserstein, in the control setting proving the distributional Bellman optimality operator ^* (eq:dis-opt-bellman) is hard (see <cit.> for more details) and is not always necessary in practical cases. Instead, we may only need to achieve convergence in the sense of distributional statistics or measures. For example, we only require the learned value distribution to have the same of the optimal value distribution so that the policy learns to achieve the optimal return expectation, or we match a risk measure (like ) of the optimal value distribution to learn a policy that achieves the optimal risk preference of the return distribution. As these measures upon value distributions are real functions w.r.t. states and actions, the convergence of distributional Bellman operators only need to lie in the infinity norm, a L_∞ metric: f_1 - f_2_∞=sup_x, af_1(x,a) - f_2(x,a). § MISMATCH IN RSRL OPTIMIZATION Although the RSRL objective eqn:risk-sensitive-policy seems reasonable, existing dynamic programming (DP) style algorithms does not optimize eqn:risk-sensitive-policy properly, as we will reveal in this section. §.§ What are Current RSRL Algorithms Optimizing? 
Recalling the RL objective eqn:conventional-rl-obj or considering setting β as in the RSRL objective eqn:risk-sensitive-policy, we can optimize π by Bellman equation in a dynamic programming style following the distributional Bellman optimality operator, i.e., there is a deterministic policy that maximizes the return at every single step for a given return distribution Z: π_(s)∈max_a∈𝒜[Z(s,a)] . And the distributional Bellman optimality operator is equivalent to: 𝒯^*Z(s,a):=^D R(s,a) +γ Z(S^',π_(S^')), S'∼ . Although <cit.> have shown that 𝒯^* itself is not a contraction in d̅_p such that it cannot be used for finding the optimal value distribution, we can realize 𝒯^* as a “contraction" in L_∞ from the perspective of , which induces a pointwise convergence. In other words, the of the value distribution Z will converge to the of the value distribution Z^*. Recursively applying the distributional Bellman optimality operator Z_k+1=^*Z_k on arbitrary value distribution Z_0 solves the objective eqn:risk-sensitive-policy when β is exactly where the optimal policy is obtained via eqn:conventional-mean-policy, and for Z_1, Z_2 ∈, we have: ^*Z_1-^*Z_2_∞≤γ Z_1- Z_2_∞ , and in particular Z_k → Z^* exponentially quickly. The proof is just the proof of value iteration and Lemma 4 in <cit.>. For completeness, we include it in ap:value-iteration. In the context of distributional RL, we can explain it as the of value will converge to the of optimal value. Motivated by and simply resemble eqn:conventional-mean-policy, previous implementation like <cit.> and <cit.> optimized a risk-sensitive policy: π_β(s)∈max_a∈𝒜β[Z(s,a)] . From a practical perspective, this can be easily achieved by only a few modifications to distributional RL algorithms towards any given distortion risk β, which implies a dynamic programming style updating following a risk-sensitive Bellman optimality operator _β^* w.r.t. risk measure β: _β^*Z(s,a):=^D R(s,a)+γ Z(S',A') , where S'∼(·|s,a) and A'∼π_β(·|s') are random variables. Note that the optimal risk-sensitive policy defined in eqn:conventional-risk-sensitive-policy is generally different from eqn:risk-sensitive-policy. The key difference is that eqn:conventional-risk-sensitive-policy tends to maximize the risk measure everywhere inside the MDP, yet eqn:risk-sensitive-policy only requires finding a policy that can maximize the risk measure of trajectories started from the initial state S_0. Although this is equivalent when β is , when it is not, the equivalence does not ever hold when updating follows _β^* in a dynamic programming style, which leads to the divergence in RSRL optimization, as we will reveal below. §.§ 𝒯_β^* Leads to Biased Optimization 𝒯_β^* is not contraction except β is . To show why _β^* leads to biased optimization, we first provide an analysis that optimizing towards the Bellman optimality operator _β^* w.r.t. risk measure β does not converge at all, i.e., there is no contraction property for 𝒯_β^*. For simplicity and starting from the easiest case, in the rest of this paper, we mainly discuss deterministic dynamics, i.e., instead of s'∼(·|s,a), we simply consider s'=M(s,a) and thus _β becomes: _β^*Z(s,a):=^D R(s,a)+γ Z(s',A'), A'∼π_β(s') . 
We already know that _β^* cannot be a contraction in d̅_p, but different from lemma:value-iteration, even from the perspective of the risk measure β if β is not , it still cannot be realized as a “contraction" in L_∞; in other words, the risk measure of the value distribution β [Z] is not guaranteed to converge to β of the value distribution β [Z^*]. Thus, _β^* does not help to find an optimal solution to solve the RSRL objective eqn:risk-sensitive-policy. Recursively applying risk-sensitive Bellman optimality operator _β^* w.r.t. risk measure β does not solve the RSRL objective eqn:risk-sensitive-policy and β [Z_k] is not guaranteed to converge to β [Z^*] if β is not or an affine in . The formal proof can be referred to ap:no-contraction-theorem. The above theorem of no contraction indicates that optimizing towards _β may lead to arbitrarily worse solutions than the optimal solution of the RSRL objective eqn:risk-sensitive-policy. To better understand how the problem occurs, let us consider a naive contradictive example on a 3-state MDP (fig:3state-mdp, left), where the agents have a constant reward of -5 for conducting a_1 and a binomial reward for a_0, as shown in the first row of fig:3state-mdp (right). In such context, we optimize towards β=(η=0.1). Now consider the initial value estimation Z to be accurate at s_1 (fig:3state-mdp, right). We list the value of Z and its corresponding risk measure β[Z]; the results _β^* Z when updating Z on s_0 using _β^* and its corresponding risk measure. In this case, when updating Z(s_0), Z(s_1) will always indicate to use a_1, although this can lead to a worse risk measure evaluated along the whole trajectory starting from s_0, and prevent the agent from finding the optimality, i.e., applying a_0 at both states. History return distribution matters. The reason why the risk-sensitive Bellman optimality operator _β^* diverges and optimizing following _β^* does not lead to an optimal policy w.r.t. risk measure β comes from the fact that the risk measure over future return distributions cannot be maximized everywhere inside an MDP, hence it is not reasonable to use such a Bellman optimality operator to update the return distribution following the risk-sensitive optimal policy. In other words, updating π_β under _β^* only ensures to improve the risk measure of the trajectory starting from s', i.e., β[Z^π(s')], which does not guarantee to move towards a better risk measure along the whole trajectory β[Z^π(s_0)], and can be totally different from optimizing the RSRL objective eqn:risk-sensitive-policy. Therefore, to achieve unbiased optimization, at every state the policy should take into account the return distribution along the past trajectory starting from s_0. § SOLVING RSRL To remedy the biased optimization issue of bellman-style update, we propose a novel algorithm that lies in a non-Markovian formulation without dynamic programming style optimization. As we pointed out before, the key problem that leads the risk-sensitive optimal Bellman operator _β^* into biased optimization is that the risk measure over future return distributions cannot be maximized everywhere inside an MDP. Thereafter, the dynamic programming style optimization that only utilizes the information forward, i.e., in the future, does not help to find the policy that maximizes the risk measures along the whole trajectories as defined in eqn:risk-sensitive-policy. 
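The failure mode described above is easy to check numerically. The sketch below is our illustration, not the authors' code: it enumerates the exact two-step return distributions of the toy MDP, using the figure's illustrative rewards (+100 w.p. 0.9 / -10 w.p. 0.1 for a_0, a constant -5 for a_1) and β = CVaR_0.1. The per-state greedy rule prefers a_1 at s_1, yet the trajectory-level CVaR of the policy (a_0, a_0) is far higher than that of any policy the greedy bootstrap can reach.

```python
import numpy as np

# Illustrative check of the counterexample (values loosely follow the paper's example):
# two sequential decisions, at s0 and s1; gamma = 1 for clarity.
# a0: reward +100 w.p. 0.9, -10 w.p. 0.1;  a1: constant reward -5.

def cvar(values, probs, eta=0.1):
    """Exact eta-CVaR (lower-tail conditional expectation) of a finite distribution."""
    order = np.argsort(values)
    v, p = np.asarray(values, float)[order], np.asarray(probs, float)[order]
    mass, acc = eta, 0.0
    for vi, pi in zip(v, p):
        take = min(pi, mass)
        acc += take * vi
        mass -= take
        if mass <= 0.0:
            break
    return acc / eta

risky = ([100.0, -10.0], [0.9, 0.1])   # per-step reward distribution of a0
safe  = ([-5.0], [1.0])                # per-step reward distribution of a1

def two_step_dist(r0, r1):
    """Exact distribution of the summed reward for a fixed action choice at s0 and s1."""
    dist = {}
    for v0, p0 in zip(*r0):
        for v1, p1 in zip(*r1):
            dist[v0 + v1] = dist.get(v0 + v1, 0.0) + p0 * p1
    return list(dist.keys()), list(dist.values())

# Per-state greedy comparison at s1 (what the bootstrapped operator uses):
print("CVaR_0.1 at s1:  a0 ->", cvar(*risky), "   a1 ->", cvar(*safe))   # -10 vs -5: greedy picks a1
# Trajectory-level CVaR of all four deterministic policies:
for name, pair in {"(a0,a0)": (risky, risky), "(a0,a1)": (risky, safe),
                   "(a1,a0)": (safe, risky), "(a1,a1)": (safe, safe)}.items():
    print(name, "-> trajectory CVaR_0.1 =", round(cvar(*two_step_dist(*pair)), 2))
# (a0,a0) attains ~79, while every policy that takes a1 at s1 stays at -10 or below.
```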
Thus, when we compute the value distribution at certain states, we must include information backward, i.e., in the past, to help with modeling the risk measure along the whole trajectory. This motivates us to model the history-action value distribution Z^π(h_t,a_t)∼, called historical return distribution, instead of the state-action value distribution Z^π(s_t,a_t), along with a history-based (non-Markovian) policy A∼π(·|h): Z^π(h_t,a)≜∑_i=0^t-1γ^i R(s_i,a_i)+γ^t Z^π({s_t},a) = ∑_i=0^tγ^i R(s_i,a_i)+γ^t+1 Z^π({s_t+1},A_t+1)) , where A_t+1∼π(·|h_t+1), s_t+1=M(s_t, a_t), h_t={s_0,a_0,⋯,s_t}∈ denotes the history sequence that happened before reaching (including) state s_t. Therefore, the history-action value Z^π(h_t,a) just records the discounted return of the whole trajectory given history h_t backward and moves forward following policy π. Note that the policy is now Markovian under the history-based MDP, i.e., the policy gives action only based on the current history. §.§ Policy Evaluation Similar to Bellman operators, we now define a new type of operator, named the history-relied () operator, that defines the principle of updating the history-action value. ^π_h Z(h_t,a):=^D R_0:t+γ^t+1 Z({s_t+1},A_t+1) , where A_t+1∼π(·|h_t+1), s_t+1=M(s_t, a_t), R_0:t=∑_i=0^tγ^i r_i is the discounted return accumulated before the timestep t. To continue, we show our first theoretical result, that the policy evaluation with  operator converges in the metric of d̅_p. ^π_h:→ is a γ-contraction in the metric of the maximum form of p-Wasserstein distance d̅_p. The proof of Theorem <ref> can be referred to ap:pe-contraction-theorem. Using Theorem <ref> and combining Banach's fixed point theorem, we can conclude that ^π_h has a unique fixed point. By inspection, this fixed point must be Z^π as defined in eqn:new-z since ^π_h Z^π = Z^π. §.§ Policy Improvement and (No) Value Iteration So far, we have considered the value distribution of a fixed policy π and the convergence of policy evaluation. Now let's turn to the control setting and find out the optimal value distribution and its corresponding policy under the risk-sensitive context. In the form above, we want to find the optimal risk-sensitive policy that maximizes the risk measure over the whole trajectory given the initial state distribution as defined in eqn:risk-sensitive-policy, which is equivalent, π^*(h)∈max_a∈𝒜 _h∼,a∼π[β[Z^π(h,a)]] . Suppose and are both finite, the solution of eqn:risk-sensitive-policy-form2 will always exist (but may not be unique!). Denote the optimal risk-sensitive policy set is Π^*, where ∀π_1^*,π_2^* ∈Π^*, we have their return distribution β[Z_1^*]=β[Z_2^*] and they must satisfy the risk-sensitive  optimality equation: β[Z^*_1(h_t,a)] = β[R_0:t+γ^t+1 Z^*_2({s_t+1},a_t+1^*)] a_t+1^* ∈max_a∈ β[Z^*_2(h_t+1,a)] . We can prove eq:hr-optimal-equation is also sufficient for eqn:risk-sensitive-policy-form2, see ap:optimal-operator. Hereby, we define the risk-sensitive  optimality operator ^*_h,β: ^*_h,β Z(h_t,a) ← R_0:t+γ^t+1 Z({s_t+1},a_t+1) a_t+1=π^'(h_t+1)=max_a∈ β[Z(h_t+1,a)] , where the policy is obtained by deterministically maximizing the history-action value under risk measure β. And eq:hr-optimal-equation implies some “fixed" points for eqn:risk-sensitive-policy or eqn:risk-sensitive-policy-form2 from the perspective of risk measure β for ^*_h,β. 
Correspondingly, we can present our second theoretical result, that the policy improvement under  optimality operator is also guaranteed to converge into the risk-sensitive optimal policy. For two deterministic policies π and π^', if π^' is obtained by ^*_h,β: π^'(h_t)∈max_a∈𝒜 β[Z^π(h_t,a)] , then the following inequality holds β[Z^π(h_t,π(h_t))]≤β[Z^π^'(h_t,π^'(h_t))] . The formal proof can be referred to ap:risk-policy-improvement-theorem. When the new greedy policy π^', is as good as, but not better than, the old policy π in the sense of risk measures, we have that: β[Z^π(h_t,a_t)] = β[^*_h,β Z^π(h_t,a_t)]  , Unfolding the right side, we get: β[Z^π(h_t,a_t)] = β[R_0:t+γ^t+1Z^π({s_t+1},a_t+1)] a_t+1=π^'(h_t+1) ∈max_a∈ β[Z(h_t+1,a)] , which is exactly the risk-sensitive HR optimality equation eq:hr-optimal-equation. Therefore, we conclude that utilizing _h,β^* for policy improvement will give us a strictly better policy except when the original policy is already optimal. In the sequel, we understand that if the optimal solution of eqn:risk-sensitive-policy-form2 exists, there exists at least a sequence of distributional value function {Z_0, Z_1, ⋯, Z_n, Z_1^*, ⋯, Z^*_k} induced by the sequence of policy {π_0, π_1, ⋯, π_n, π^*_1, ⋯, π^*_k} such that β[Z_1]≤β[Z_2]≤⋯≤β[Z_n]≤β[Z^*_1]=⋯=β[Z^*_k]. However, starting from an arbitrary Z (which may not correspond to any policy), it is non-trivial to prove ^*_h,β converges to β[Z^*_i]. For Z_1, Z_2 ∈,  optimality operator ^*_h,β has the following property: β[^*_h,βZ_1]-β[^*_h,βZ_2]_∞≤β[Z_1]-β[Z_2]_∞ , The proof is in ap:risk-value-iteration. Theorem <ref> told us that the value iteration for ^*_h,β may not converge. Specifically, our proposed  operator can be realized as a “nonexpensive mapping" from the perspective of risk measure β in L_∞. For our cases of limited spaces, we might expect there exists some “fixed" point Z^*, and the best we can hope is a pointwise convergence such that β Z converges to β Z^* after recursively applying  optimality operator ^*_h,β w.r.t. risk measure β. However, from theo:risk-value-iteration, we know that β Z_n is not assured to be converged to β Z^* at any speed, hence the starting from arbitrary value distribution Z_0, ^*_h,β does not necessarily solve the RSRL objective eqn:risk-sensitive-policy. As a result, β Z_n may possibly fall on a sphere around Z^*. §.§ Trajectory Q-Learning As discussed above, by estimating the historical return distribution and improving the policy accordingly, we can now derive our practical RSRL algorithm, namely Trajectory Q-Learning (TQL). Representing the policy π, the historical value function Q as neural networks parameterized by ϕ and θ respectively, and denoting the historical return distribution approximated by critics as Z_θ(h,a)=1/N∑_j=0^N-1  Dirac[ Q_θ(h,a;τ_j)] , we optimize the following loss functions: J_π(ϕ)=  β[Z_θ(h,a)] , J_Q(θ)=  _a^'∼π; τ_i,τ_j^'∼ U([0,1])[ρ_τ^κ(∑_t=0^s_t=sγ^t r(s,a) +γQ̅_θ^'({s^'},a^';τ_j^') -Q_θ(h,a;τ_i))] , where ρ_τ^κ represents the quantile Huber loss (see ap:qr-loss for details). In practice, to accurately estimate Z({s^'},·) which is just a normal state-based value function <cit.>, we model Q̅_θ^'({s^'},a^';τ_j^') with an extra Markovian value function Q_ψ(s,a;τ), updated by J_Q(ψ) =_a^'∼π; τ_i,τ_j^'∼ U([0,1])[ρ_τ^κ(r(s,a) +γQ̅_ψ^'(s^',a^';τ_j^') -Q_ψ(s,a;τ_i))] , In total, the algorithm learns a policy π_ϕ, a history-based value function Z_θ, and a Markovian value function Z_ψ. 
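To make the critic update concrete, the sketch below shows one way the quantile Huber regression in J_Q(θ) could be written; the shapes, names and PyTorch framing are our assumptions, not the authors' released implementation. The history-based critic regresses onto the discounted return R_{0:t} already accumulated along the trajectory plus a bootstrapped Markovian quantile estimate at the next state.

```python
import torch

# Minimal sketch (assumed shapes/names, not the authors' code) of the TQL critic target:
# Q_theta(h_t, a_t; tau_i) regresses onto R_{0:t} + gamma^{t+1} * Q_psi'(s_{t+1}, a_{t+1}; tau_j')
# via the quantile Huber loss.

def quantile_huber_loss(pred, target, taus, kappa=1.0):
    """pred: (B, N) quantiles at fractions taus (B, N); target: (B, N') target quantiles."""
    td = target.unsqueeze(1) - pred.unsqueeze(2)                  # (B, N, N') pairwise TD errors
    huber = torch.where(td.abs() <= kappa, 0.5 * td ** 2, kappa * (td.abs() - 0.5 * kappa))
    loss = (taus.unsqueeze(2) - (td.detach() < 0).float()).abs() * huber / kappa
    return loss.mean(dim=2).sum(dim=1).mean()

def tql_critic_loss(q_hist, q_markov_next, ret_so_far, gamma, t, taus):
    """q_hist:        Q_theta(h_t, a_t; tau_i)             -> (B, N)
       q_markov_next: Q_psi'({s_{t+1}}, a_{t+1}; tau_j')   -> (B, N'), treated as a fixed target
       ret_so_far:    R_{0:t} = sum_i gamma^i r_i           -> (B,)
       t:             current timestep index                -> (B,)"""
    target = ret_so_far.unsqueeze(1) + gamma ** (t.unsqueeze(1) + 1) * q_markov_next
    return quantile_huber_loss(q_hist, target.detach(), taus)
```

The Markovian loss J_Q(ψ) is the same quantile Huber regression, applied with the usual one-step target r(s,a) + γ Q̄_ψ'(s', a'; τ').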
At each timestep, Z_ψ and Z_θ are updated according to eqn:loss-Q-markovian and eqn:loss-Q, and the policy π_ϕ is optimized with eqn:loss-pi. For discrete control, we can omit ϕ and implement π by taking argmax from β[Z(h,a)]. We list the step-by-step algorithm in alg:tql-disc (discrete) and alg:tql-cont (continuous). § RELATED WORK §.§ Distributional Reinforcement Learning Distributional RL considers the uncertainty by modeling the return distribution, enabling risk-sensitive policy learning. <cit.> first studied the distributional perspective on RL and proposed C51, which approximates the return distribution with a categorical over fixed intervals. <cit.> proposed QR-DQN, turning to learning the critic as quantile functions and using quantile regression to minimize the Wasserstein distance between the predicted and the target distribution. <cit.> further proposed IQN, improving QR-DQN by quantile sampling and other techniques, which further investigate risk-sensitive learning upon various distortion risk measures. §.§ Risk in Reinforcement Learning Risk management in RL towards real-world applications can be roughly divided into two categories, i.e., safe and constrained RL and distributional risk-sensitive RL. Safe and constrained RL formulates the risk as some kind of constraint to the policy optimization problem. For instance, <cit.> proposed a Lagrangian method which provides a theoretical bound on cost function while optimizing the policy; <cit.> built a safe layer to revise the action given by an unconstrained policy; <cit.> used the Lyapunov approach to systematically transform dynamic programming and RL algorithms into their safe counterparts. When the form of risks is either too complex or the constraints are hard to be explicitly defined, safe RL algorithms can be challenging to learn. In that case, distributional RL provides a way to utilize risk measures upon the return distributions for risk-sensitive learning. Among them, <cit.> modeled the return distribution via its mean and variance and then learned an actor optimizing the of the return distribution; <cit.> proposed a novel optimistic version of the distributional Bellman operator that moves probability mass from the lower to the upper tail of the return distribution for sample-efficient learning of optimal policies in terms of ; <cit.> modified SAC <cit.> with distributional critics and discussed its application to risk-sensitive learning; <cit.> proposed their offline risk-averse learning scheme based on IQN <cit.> and BCQ <cit.>; <cit.> proposed CODAC, which adapts distributional RL to the offline setting by penalizing the predicted quantiles of the return for out-of-distribution actions; recently, <cit.> proposed a solution to resolve a similar issue specifically in optimizing policies towards , yet our TQL is general for any risk measure with theoretical guarantees. A more detailed comparison can be referred to ap:compare. In comparison, our proposed TQL is not only designed for a specific risk measure but is general for all kinds of risk measures. There are also several works <cit.> utilizing dynamic risk measures as their objective, which considers per-step risk instead of the static (trajectory-wise) risk in this paper. Dynamic risk has the advantage of time-consistency, but can be hard to estimate practically and short-sighted due to per-step optimization. We present a more detailed discussion on dynamic and static risk in ap:dynamic-vs-static. 
§ EXPERIMENTS In this section, we design a series of experiments aimed to seek out: RQ1: Can our proposed TQL fit the ground-truth risk measures? RQ2: Can TQL find the optimal risk-sensitive policy and achieve better overall performance? Environments. In order to examine the ability to optimize risk-sensitive policy, we design two specified environments for discrete and continuous control, respectively. For discrete actions, we design a risky mini-grid task shown in Fig. 2a; for continuous control, we augment extra risky penalties upon the continuous Mountain-Car environment <cit.>, see subsec:exp-results for more details. Implementation, baselines, and metric. For discrete action space, we implement TQL based on IQN <cit.> to obtain the value distribution, and compare TQL with vanilla IQN and CVaR-DRL, a specific solution for learning a risk-sensitive policy towards better , proposed by <cit.>. For continuous control problems, we combine TD3 <cit.> with IQN <cit.>, named IQTD3, by replacing the critics in TD3 with distributional critics. We further build TQL upon IQTD3 and take IQTD3 as the baseline algorithm. For comparison, each algorithm is optimized towards various risk-sensitive objectives that are represented by different risk measures, including , , , and , whose detailed description is in Appendix ap:exp-vs-distortion; and the evaluation metrics are also those risk measures. §.§ Results and Analysis Value distribution analysis on 3-state MDP. In subsec:biased-obj, we have illustrated in fig:3state-mdp that vanilla distributional RL is not able to reveal the global optimal risk-sensitive policy and its value. To validate, we learn the return distribution with a tabular version of vanilla IQN and TQL respectively, and visualize the learned return distribution in fig:simple-results. The results show that vanilla distributional RL tends to learn Z(s_1,·) first as it is irrelevant to a_0, and thus a_1=1 will be chosen under s_1. However, when learning Z(s_0,·), Bellman update will use Z(s_1,a_1) in target, ignoring all trajectories where a_1=0. On the contrary, TQL learns historical value distribution Z̃, which enforces the agent to consider all possible trajectories and thus reveal the optimal solution where a_0=a_1=0. Discrete control evaluations. We first show the result of the discrete mini-grid task, which is designed for learning objective. We present the learning curves of TQL and vanilla IQN in Fig. 2b, which indicates that IQN consistently converges to the sub-optimal solution of visiting blue grids, similar to its behavior in the above-mentioned 3-state MDP. CVaR-DRL does improve the of historical return distribution to some extent, while it still produces sub-optimal policies (see ap:compare for a more detailed analysis). In contrast, TQL is able to discover a better policy that achieves significantly higher than the vanilla IQN baseline. Furthermore, in Fig. 2c, we visualize the final policy's return distribution. The blue and green bars indicate the frequency of episode returns for vanilla IQN and TQL respectively, and the corresponding dashed lines show the of return distribution for two policies. TQL is very likely to obtain high positive returns with little risk of negative returns, while vanilla IQN's return is always negative due to its Markovian policy. To better understand the difference in the optimization process, we further illustrate how the policy evolves during the training process in tab:grid-evolve. 
In particular, we observe that vanilla IQN converges from the end of the episode to the beginning due to its updating mechanism of dynamic programming, and its property of Markovian prevents it from finding the global optimum; moreover, CVaR-DRL fails due to its approximation in estimation but leads to a slightly-better policy. However, TQL is always doing a global search and thus finally reveals the optimal policy. Continuous control evaluations. To further learn on continuous risk-sensitive control problems with TQL, we design a risky penalty for the Mountain-Car environment: R_risky(s,a)={[ -c· (2-|a|), p=1/4-3|a|; 0, p=1-1/4-3|a| ]. . where c∈[0,1] is a scaling factor that controls the degree of risk related to the scale of actions. At each timestep, we augment the original reward with the risky penalty R_risky. Generally, actions close to 0 will result in higher expected accumulated rewards. However, to complete the task as fast as possible, the agent should choose larger actions that are close to 1, leading to more risky penalties. We compare IQTD3 with the proposed TQL and show the results for various risk measures in fig:mcc-results. Overall, when the potential risk is larger (i.e., larger risky penalty c∈{0.5, 0.75, 1.0}), TQL significantly outperforms IQTD3. The Markovian policy learned by IQTD3 can hardly find out how to complete the control task due to its short-sighted decision-making, while TQL consistently learns a better policy. When the risk is smaller, namely c∈{0.25, 0.1, 0.0}, the difference between TQL and IQTD3 becomes smaller, and both algorithms can learn an optimal risk-sensitive policy. § CONCLUSION AND FUTURE WORK In this paper, we present an in-depth analysis of the biased objective issue of the existing RSRL methods, and correspondingly propose Trajectory Q-Learning (TQL), a distributional RL algorithm for learning the optimal policy in RSRL. We justify the theoretical property of TQL and prove it converges to the optimal solution. Our experiments and the detailed analysis on both discrete and continuous control tasks validate the advantage of TQL in risk-sensitive settings. In future work, we plan to extend TQL to more complex tasks and real-world applications. § ACKNOWLEDGEMENTS We thank Zhengyu Yang, Ming Zhou and Zheyuan Hu for their helpful discussions. The SJTU team is supported by “New Generation of AI 2030” Major Project (2018AAA0100900), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102) and National Natural Science Foundation of China (62076161). The author Minghuan Liu is also supported by Wu Wen Jun Honorary Doctoral Scholarship, AI Institute, Shanghai Jiao Tong University. We sincerely thank all anonymous reviewers for their helpful feedback to revise our first manuscript. ACM-Reference-Format § ADDITIONAL BACKGROUNDS §.§ Typical Distortion Risk Measures Distortion risk measure is a large family including various measures. Here we briefly introduce how we derive those used in our experiments from its framework and their properties. * is obtained by an identity distortion function: h_ mean^-1(τ)=τ, ∀τ∈[0,1] . treats each quantile equally and serves as a risk-neutral measure, which indicates the unconditioned overall performance of a random variable. * η- is obtained by a linear projection of fractions: h_η- CVaR^-1(τ)=ητ . is always risk-averse, and smaller η makes it more conservative. * η- is a simple measure that controls risk preference by translating the c.d.f. of standard Gaussian: h_η- Wang^-1(τ)=Φ(Φ^-1(τ)+η) . 
where Φ is the c.d.f. of standard Gaussian distribution. Positive η corresponds to risk-seeking, and negative η corresponds to risk-aversity. * η- is introduced in <cit.>, it is neither globally risk-averse nor risk-seeking. h_η- CPW^-1(τ)=Φ(Φ^-1(τ)+η) . <cit.> proposed that η=0.71 matches human subjects well. * η- is proposed in <cit.>. It is a simple power formula for risk-averse (η<0) or risk-seeking (η>0) policies: h_η- POW^-1(τ)={[ τ^1/1+|η|, if η≥ 0; 1-(1-τ)^1/1+|η| otherwise ].. §.§ Quantile Regression and Quantile Huber Loss Quantile regression. <cit.> first to approximate the return distribution with quantiles and optimize the value function with quantile regression <cit.>, which uses the quantile regression loss to estimate quantile functions of a distribution. The quantile regression loss is an asymmetric convex loss function that penalizes overestimation and underestimation errors with different weights. For a distribution Z with c.d.f. F_Z(z), and a given quantile τ, the value of quantile function F_Z^-1(τ) can be characterized as the minimizer of the following quantile regression loss: _ QR^τ(θ):=_ẑ∼ Z[ρ_τ(ẑ-θ)] , where ρ_τ(u)=u(τ-δ_u<0) , ∀ u∈ Quantile Huber loss. <cit.> found that the non-smoothness at zero could limit performance when using non-linear function approximation, hence proposed a modified quantile loss, namely the quantile Huber loss. The Huber loss is given by <cit.>: _κ(u)={[ 1/2u^2 if |u|≤κ; κ(|u|-1/2κ) otherwise ]. , and the quantile Huber loss is then an asymmetric version of the Huber loss: ρ^κ_τ(u)=|τ-δ_u<0|_κ(u) . §.§ Comparison of Our Work to <cit.> The recent work of <cit.> imposes a similar issue of optimizing in previous RSRL methods, and proposes modifications to existing algorithm methods that use a moving threshold for episodic estimation and include a new distributional Bellman operator. They show that the optimal policy corresponds to the fixed point of their proposed distributional Bellman operator. However, <cit.> does not show the convergence property of their proposed distributional Bellman operator, and their solution is available specifically for policy optimization. Furthermore, their proposed moving threshold is approximated with the learned return distribution, which also impacts the ability to find the optimal policy. In contrast, our proposed TQL is provided with convergence property and enables optimization under all kinds of risk measures. Experiments in subsec:exp-results also show the advantage of TQL over <cit.> in practical tasks. §.§ Static Risk vs. Dynamic Risk In this paper, we consider static risk measures β over the whole trajectory. However, some other works <cit.> consider dynamic risk measures ρ, which is defined recursively over a trajectory τ=(s_0,a_0,s_1,a_1,⋯): ρ[τ]=ρ[R(s_0,a_0)+γρ[R(s_1,a_1)+γρ[R(s_2,a_2)+⋯]]] Dynamic risk measures have the advantage of time consistency, and <cit.> has conducted an in-depth analysis of the policy optimization based on dynamic risk measures. However, dynamic risk measures are more short-sighted and hard to estimate in practical tasks due to their per-step definition, and in the real-world people care more about the overall risk over the whole decision process. Therefore, static risk measures are most commonly used for evaluation in fields such as finance and medical treatments. 
Furthermore, the control problem over static risk measures have been shown to be non-Markovian and non-stationary <cit.>, hence it can be hardly resolved by DP style RSRL algorithms, which motivates the use of episodic return distribution and history-dependent policy in our TQL algorithm. § ALGORITHMS Discrete control. We present the step-by-step algorithm of TQL for the policy with discrete actions in alg:tql-disc. Continuous control. We present the step-by-step algorithm of TQL for continuous control in alg:tql-cont. § PROOF §.§ Proof of Lemma <ref> Recursively applying the distributional Bellman optimality operator ^*Z_k+1=Z_k on arbitrary value distribution Z_0 solves the objective eqn:risk-sensitive-policy when β is exactly where the optimal policy is obtained via eqn:conventional-mean-policy, and for Z_1, Z_2 ∈, we have: ^*Z_1-^*Z_2_∞≤γ Z_1- Z_1_∞ , and in particular Z_k → Z^* exponentially quickly. The RSRL objective for policy is max_A_0,⋯,A_T-1 𝔼[Z(S_0,A_0)] . From the Bellman equation, we can unfold Z(S_0,A_0) and get an equivalent form: max_A_0,⋯,A_T-1 𝔼[R(S_0,A_0)+γ Z(S_1,A_1)] , where S_1∼(S_1|S_0,A_0). Notice that is linearly additive, hence eqn:value-iteration-1 can be further divided into two parts of optimization: max_A_0 𝔼[R(S_0,A_0)+γmax_A_1,⋯,A_T-1 𝔼[Z(S_1,A_1)]] . Repeating this process on Z(S_1,A_1) and further, we will obtain the dynamic programming objective as following: max_A_0 𝔼[R(S_0,A_0)+γmax_A_1,⋯,A_T-1 𝔼[Z(S_1,A_1)]] ⇔  max_A_0 𝔼[R(S_0,A_0)+γ Z(S_1,max_A_1𝔼[R(S_1,A_1)...     +...γmax_A_2,⋯,A_T-1𝔼[Z(S_2,A_2)]])] ⇔  ⋯ which will finally get to the single-step target. For Z_1, Z_2 ∈, using the linearly additive property of , we have ^*_D Z_1- ^*_D Z_2_∞ =^*_E  Z_1-^*_E  Z_2_∞ ≤γ Z_1- Z_2_∞ where ^*_D denotes the distributional operator and ^*_E denotes the usual operator. §.§ Proof of Theorem <ref> Recursively applying risk-sensitive Bellman optimality operator _β^* w.r.t. risk measure β does not solve the RSRL objective eqn:risk-sensitive-policy and β [Z_k] is not guaranteed to converge to β [Z^*] if β is not or an affine in . We want to show that β[_β^*Z_1(s,a)]-β[_β^*Z_2(s,a)]]_∞≤γβ[Z_1(s,a)]-β[Z_2(s,a)]_∞ is not guaranteed to be true. We prove this by a counterexample. Given a risk measure β, consider two value distributions Z_1 and Z_2 where Z_1(s,a)=Z_2(s,a), ∀ s∈,a∈. Assume there are more than one optimal actions a∈^*_1⊂ (|^*_1|>1) at state s_1 in terms of risk measure β[Z_i(s_1,·)], i∈{0,1}, i.e., ∀ a^*∈^*_1, a^'∈, β[Z_i(s_1,a^*)]≥β[Z_i(s_1,a^')] . and these optimal actions correspond to different value distributions, i.e. Z_i(s_1,a)≠^D Z_i(s,a^'), ∀ a,a^'∈^*_1, i∈{0,1} . In case where we do not have a strict preference over ^*_1, the update formula for Z_i(s_0,a_0), i∈{0,1} where M(s_0,a_0)=s_1 will be Z_i(s_0,a_0)← R(s_0,a_0)+γ Z_i(s_1,a), ∀ a∈^*_1 . When eq:not-contraction-update uses a=a_1∈^*_1 in the right-hand side, it holds that Z_i(s_0,a_0)≠^D R(s_0,a_0)+γ Z_i(s_1,a_1^'), a_1^'∈^*_1 and a_1^'≠a_1 . Although β[Z_i(s_1,a)], ∀ a∈^*_1, i∈{0,1} are all the same, consider one update as follows: Z_1^'(s_0,a_0) ← R(s_0,a_0)+γ Z_1(s_1,a), a∈^*_1 , Z_2^'(s_0,a_0) ← R(s_0,a_0)+γ Z_2(s_1,a^'), a^'∈^*_1 and a^'≠a . Notice that β[Z_1]-β[Z_2]_∞=0. Since R(s_0,a_0) and Z_i(s_1,·) can be arbitrary distributions, we can always find such R(s_0,a_0) and Z_i(s_1,·) that Z_1^'≠Z_2^' when β is not or an affine in , as we show an example for as follows. [ 1cβ=(η=0.1) 1cs_0,a_0 1cs_1, a_0 1cs_1, a_1; R [c]{ 100, p=0.9 -10, p=0.1 . [c]{ 100, p=0.9 -10, p=0.1 . 
-10; Z_i [c]{ 90, p=0.9 -20, p=0.1 . [c]{ 100, p=0.9 -10, p=0.1 . -10; β[Z_i] -20 -10 -10; 4l Z_1^'(s_0,a_0)← R(s_0,a_0)+γ Z_1(s_1,a_0), Z_2^'(s_0,a_0)← R(s_0,a_0)+γ Z_2(s_1,a_1); Z_1^' lightsunflower[c]{ 200, p=0.81 90, p=0.18 -20, p=0.01 . [c]{ 100, p=0.9 -10, p=0.1 . -10; Z_2^' lightsunflower[c]{ 90, p=0.9 -20, p=0.1 . [c]{ 100, p=0.9 -10, p=0.1 . -10; β[Z_1^'] lightsunflower 79 -10 -10; β[Z_2^'] lightsunflower -20 -10 -10; ] In this case β[Z_1^']-β[Z_2^']_∞>γβ[Z_1]-β[Z_2]_∞=0. Therefore, β[Z_1^']-β[Z_2^']_∞≤γβ[Z_1]-β[Z_2]_∞ is not guaranteed to be true. Finally, a straight forward deduction is that, repeating using the following two formula stochastically to update Z will never reach a convergence. Z^'(s_0,a_0) ← R(s_0,a_0)+γ Z(s_1,a), a∈^*_1 , Z^'(s_0,a_0) ← R(s_0,a_0)+γ Z(s_1,a^'), a^'∈^*_1 and a^'≠a . §.§ Proof of Theorem <ref> ^π_h:→ is a γ-contraction in the metric of the maximum form of p-Wasserstein distance d̅_p. Let's first define the transition operator P^π :→ P^π Z(h, a) ≜ Z(h', A') h' = M(h, a)= M(s, a), A' ∼π(· | H'), where h' is actually s' that also falls in the space of . Then, we have: d_p (^π_h Z_1(h,a), ^π_h Z_2(h,a) ) =  d_p (R_0:t+γ^t+1P^π Z_1(h,a), R_0:t+γ^t+1P^π Z_2(h,a) ) ≤  γ^t+1 d_p (P^π Z_1(h,a),P^π Z_2(h,a) ) ≤  γ^t+1 sup_h',a'd_p (P^π Z_1(h',a'),P^π Z_2(h',a') ) ≤  γ sup_h',a'd_p (P^π Z_1(h',a'),P^π Z_2(h',a') ) . Then it is easy to see d̅_p (^π_h Z_1(h,a), ^π Z_2(h,a) ) = sup_h,a d_p (^π_h Z_1(h,a), ^π_h Z_2(h,a) ) ≤ γ sup_h',a'd_p (P^π Z_1(h',a'),P^π Z_2(h',a') ) = γd̅_p(Z_1, Z_2) . §.§ Proof of Theorem <ref> For two deterministic policies π and π^', if π^' is obtained by ^*_h,β: π^'(h_t)=max_a∈𝒜 β[Z^π(h_t,a)] , then the following inequality holds β[Z^π(h_t,π(h_t))]≤β[Z^π^'(h_t,π^'(h_t))] . As π^' is a greedy policy w.r.t. β[Z^π], we have β[Z^π(h_t,π(h_t))]≤β[Z^π(h_t,π^'(h_t))] . Since π^' is a deterministic policy, we can denote a_t^'=π^'(h_t), and unfold Z^π(h_t,a_t^'), we have Z^π(h_t,a_t^') = R_0:t-1+γ^t R(s_t,a_t^') +γ^t+1 Z^π({s̃_t+1},π(·|h_t∪{a_t^',s̃_t+1})) =Z^π(h_t∪{a_t^',s̃_t+1},π(·|h_t∪{a_t^',s̃_t+1})) , where s̃_t+1=M(s_t,a_t^'). Denoting h̃_t+1=h_t∪{a_t^',s̃_t+1}, ã_t+1=π(h̃_t+1) and ã_t+1^'=π^'(h̃_t+1), we further have Z^π(h_t,a_t^')=Z^π(h̃_t+1,ã_t+1)) β[Z^π(h̃_t+1,A_t+1))]≤β[Z^π(h̃_t+1,ã_t+1^')] . Putting them together, we obtain β[Z^π(h_t,a_t))] ≤  β[Z^π(h_t,a_t^')] =  β[Z^π(h̃_t+1,ã_t+1))] ≤  β[Z^π(h̃_t+1,ã_t+1^')] =  β[Z^π(h̃_t+1∪{a_t+1^',s̃_t+2},ã_t+2^')] ≤  ⋯⋯ ≤  β[Z^π^'(h_t,a_t^')] §.§ Proof of eq:hr-optimal-equation implying eqn:risk-sensitive-policy-form2 We only consider episodic tasks[Note such tasks can also be represented in a uniform notation of infinite horizon by adding an absorbing state after termination states, see <cit.>.], where each episode terminates at a certain termination state, say, at time step T+1. Hereby, we show we can obtain eqn:risk-sensitive-policy-form2 given eq:hr-optimal-equation by induction: * Considering timestep T, where ∀π, Z^π(h_T,a_T) = R_0:T, eqn:risk-sensitive-policy-form2 holds, i.e., β[Z^*(h_T,a_T)]=β[Z^π^*(h_T,a_T)]. * Given that eqn:risk-sensitive-policy-form2 holds at timestep t+1, namely β[Z^*(h_t+1,·)]=β[Z^π^*(h_t+1,·)]. Consider eq:hr-optimal-equation at timestep t: β[Z^*(h_t,a_t)] = β[R_0:t+γ^t+1 Z^*({s_t+1},a^*_t+1)] = β[Z^*(h_t∪{a_t,s_t+1},a^*_t+1)] = β[Z^π^*(h_t∪{a_t,s_t+1},a^*_t+1)] = β[Z^π^*(h_t,a_t)] . This indicates eqn:risk-sensitive-policy-form2 still holds at timestep t. * Given the two statements above, eq:hr-optimal-equation is sufficient for eqn:risk-sensitive-policy-form2. 
§.§ Proof of Theorem <ref> For Z_1, Z_2 ∈_h,  optimality operator ^*_h,β has the following property: β[^*Z_1]-β[^*Z_2]_∞≤β[Z_1]-β[Z_1]_∞ , ∀ s∈, a ∈, s'=M(s,a), β[_β^*Z_1(h,a)]-β[_β^*Z_2(h,a)]_∞ =  max_a'β[R_0:t+γ^t+1 Z_1({s'},a')]]     -max_a'β[R_0:t+γ^t+1 Z_2({s'},a')]]_∞ ≤  β[R_0:t+γ^t+1 Z_1({s'},a')]]     -β[R_0:t+γ^t+1 Z_2({s'},a')]]_∞ =  β[Z_1(h,a)]-β[Z_2(h,a)]_∞ § IMPLEMENTATION DETAILS §.§ Practical Modeling of Episodic Value Distribution We propose to involve the history information in sec:tql, which enables the learning of episodic value distribution. In practice, the summarization of history information can be achieved via any sequence model, e.g. GRU, LSTM, and Transformer. For our experiments on both MiniGrid and MountainCar tasks, we use a single-layer GRU network for simplicity. We first encode the observations {s_t} and {a_t} into representations of the same dimension d with two encoders enc_s:→^d and enc_a:→^d. Then we concatenate the representations enc_s(s_t) and enc_a(a_t) into [ enc_s(s_t), enc_a(a_t)]∈^2d, and stack the representations until current timestep as the input for the GRU network. The GRU network produces the summarization of history h_t as repr_t= GRU([[ enc_s(s_0), enc_a(a_0)],⋯,[ enc_s(s_t), enc_a(a_t)]]) and we finally model the episodic value distribution and history-dependent policy via MLPs with two hidden layers, taking repr_t and a_t as input. It is worth noting that, as conditioning the policy and value function on history will inflate the searching space, we use a rolling window of fixed length L=10 on the history trajectory when computing repr_t on continuous Mountain-Car tasks, i.e. repr_t= GRU([[ enc_s(s_t-L+1), enc_a(a_t-L+1)],⋯,[ enc_s(s_t), enc_a(a_t)]]) , which will significantly reduce the computation cost and complexity. §.§ Hyperparameters We list the hyperparameters of TQL for discrete MiniGrid tasks in tab:hyper-param-discrete. We list the hyperparameters of TQL for continuous Mountain-Car tasks in tab:hyper-param-continuous. § LIMITATIONS OF THIS WORK In this paper, we are the first to pose the biased objective problem in the optimization of previous risk-sensitive reinforcement learning (RSRL) algorithms. Accordingly, we propose Trajectory Q-Learning (TQL), as a general solution for RSRL, and verify its effectiveness, both theoretically and empirically. Although we believe this solution is general to a large set of RSRL problems and can be inspiring to the following study on RSRL, it still has limitations in many aspects. First, our solution assumes deterministic environmental dynamics, which implies a limitation the uncertainty only comes from the randomness of the reward function. In some cases, it is possible to translate a problem with stochastic dynamics structures into one with deterministic dynamics and random rewards. However, RSRL with stochastic dynamics is valuable and worth studying. Regarding this, we point out that a potential solution can be building a transition model and sampling transitions for policy optimization similar to <cit.>, combined with the proposed method in this work. Due to the workload and the value of the problem, we leave it as future work and make this work a good start for a more general solution for RSRL. Beyond deterministic dynamics, this work is also limited to evaluating the method on simple and toy environments and lacks the results on large-scale domains and real-world problem settings. 
We will try our best to test the method on more complex and realistic problems and report on the progress we make.
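As a complement to the implementation-details appendix above, the following sketch shows a possible form of the single-layer GRU history encoder with a rolling window; layer sizes and names are our assumptions for illustration, not the released code.

```python
import torch
import torch.nn as nn

# Rough sketch of the history encoder described in the implementation-details appendix
# (single-layer GRU over concatenated state/action embeddings, optional rolling window).
# Dimensions and names are assumptions, not the authors' exact architecture.

class HistoryEncoder(nn.Module):
    def __init__(self, state_dim, action_dim, d=64, window=10):
        super().__init__()
        self.enc_s = nn.Linear(state_dim, d)       # enc_s : S -> R^d
        self.enc_a = nn.Linear(action_dim, d)      # enc_a : A -> R^d
        self.gru = nn.GRU(input_size=2 * d, hidden_size=2 * d, batch_first=True)
        self.window = window                       # rolling window L used on MountainCar

    def forward(self, states, actions):
        """states: (B, T, state_dim), actions: (B, T, action_dim) -> repr_t: (B, 2d)."""
        x = torch.cat([self.enc_s(states), self.enc_a(actions)], dim=-1)
        x = x[:, -self.window:]                    # keep only the last L (s, a) pairs
        out, _ = self.gru(x)
        return out[:, -1]                          # summary of the history for the policy/critic heads
```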
http://arxiv.org/abs/2307.01367v1
20230703214735
Optimized Geometric Constellation Shaping for Wiener Phase Noise Channels with Viterbi-Viterbi Carrier Phase Estimation
[ "Andrej Rode", "Wintana Araya Gebrehiwot", "Shrinivas Chimmalgi", "Laurent Schmalen" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Optimized Geometric Constellation Shaping for Wiener Phase Noise Channels with Viterbi-Viterbi Carrier Phase Estimation Andrej Rode, Wintana Araya Gebrehiwot, Shrinivas Chimmalgi, and Laurent Schmalen August 1, 2023 ======================================================================================================================= Communications Engineering Lab (CEL), Karlsruhe Institute of Technology (KIT), rode@kit.edu 1.1 =10000 The vv algorithm is well understood for QPSK and 16-QAM, but modifications are required for higher-order modulation formats. We present an approach to extend the standard vv algorithm for higher-order modulation formats by modifying the transmit constellation with geometric constellation shaping. ©2023 The Author(s) § INTRODUCTION This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101001899). In recent years, the optimization of geometric constellation shaping for communication systems impaired by Wiener phase noise has been a topic of great interest <cit.>. The optimization is typically performed using a state-of-the-art auto-encoder approach in a symbol-wise or bit-wise manner <cit.>. To incorporate effects of the communication channel and subsequent carrier phase estimation into the optimization, either a residual phase noise process <cit.> or a Wiener phase noise process with a carrier phase estimation <cit.> has been included in the training. In the first case, the influence of the constellation on the operation of the carrier phase estimation is neglected and only the impairment of the signal at the output of the carrier phase estimation is considered in the training. Compared to the second approach, this simplifies the optimization and training. In the second case, to perform e2e optimization with gradient descent the operations of the carrier phase estimation have to be implemented in a differentiable manner. In <cit.>, we have introduced a differentiable bps to optimize constellations geometrically. In this work, we explore this approach for the vv algorithm and introduce geometrically optimized 64-ary constellations for a Wiener phase noise channel with vv cpe. We modify the vv algorithm to include a learnable and differentiable partitioning in the geometrical constellation shaping optimization process. § FEED-FORWARD CARRIER PHASE ESTIMATION For high-rate coherent optical communication receivers, the choice of the cpe algorithm is driven by the symbol rate compared to the processing speed of dsp. As the symbol rate is multiple orders of magnitude higher than the signal processing rate of dsp, the use of feedback cpe—common in communication receivers e.g. for wireless communications—is not possible. A popular feed-forward cpe implementation is the bps <cit.>, which is widely used in modern optical communication receivers. Another feed-forward cpe implementation is the vv cpe <cit.>, where computational complexity is traded off for performance. To perform cpe with the bps, the squared distance to each constellation point needs to be calculated for each received complex symbol. For square qam, the distance all of the constellation points must be calculated for 1/4 of the test phases. For higher-order constellations, this increases the computational complexity significantly. 
For vv-based cpe, the complexity does not scale with the number of constellation points, but with μ, since the μ-th power of each received complex symbol is computed followed by averaging and an estimation of the phase. For square qam constellations, usually μ=4 is chosen due to the four-fold symmetry. Since we apply geometric shaping, the parameter μ can be chosen freely to increase or decrease the symmetry in the constellation with respect to square qam. This is of particular interest for the robustness to cycle slips, as a lowered μ reduces the rotational symmetry of the constellation and the region of phase unambiguity increases. Additionally, a smaller μ leads to a less complex cpe. cpe based on the vv algorithm are not well-suited for higher order qam, as not all constellation points are located on the symmetry lines, and therefore the inclusion of those points will result in reduced phase estimation performance. Previous approaches to improve vv-based cpe use a system of partitioning the received constellation symbols based on their amplitude into multiple classes <cit.>. In the second step, the vv algorithm is only applied to symbols in a certain class. For the case of 16-qam, only the outer- and innermost points are selected for the phase estimation. A solution that includes more points to improve the phase estimate of the vv cpe rotates the received symbols to the symmetry lines <cit.>. Another approach to improve the cpe performance uses a multi-stage approach, where the first stage performs a coarse phase estimate at a lower complexity, while the following stages can refine the phase estimate using pre-compensated complex symbols <cit.>. Adding additional stages to the phase estimation increases latency and computational complexity. Therefore, we look at approaches that leave a single-stage vv algorithm in place and instead adapt the transmit constellation to improve the performance. § GEOMETRIC CONSTELLATION SHAPING WITH MODIFIED VV ALGORITHM To improve the performance of the vv algorithm for higher-order constellations, we apply geometric constellation shaping using the bitwise auto-encoder approach. The vv algorithm to obtain the phase estimate at time k for the received complex symbol z_k φ_k,est = 1/μunwrap((∑_k'=k-K^k+Kz_k'^μ)), is already a differentiable operation. Therefore we can insert the vv in the e2e training with the bitwise auto-encoder as shown in <ref>. To perform geometric shaping, while still following the approach of partitioning, the previously presented partitioning approach <cit.> needs to be modified. Hard partitioning selecting certain symbols based on the amplitude will leave the optimization without any useful gradients and the partitioning cannot be directly included in the optimization. We introduce a novel, differentiable selection function that carries out the partition, with the goal to include many constellation points in the constellation optimization and phase estimate. The proposed selection function is z'_k,l = σ(s_l(|z_k| - θ_0,l)) σ(s_l(θ_1,l - |z_k|)) z_k/|z_k|, which will apply a selection on each received complex symbol z_k to obtain z'_k,l. The function is parameterized by the trainable parameters s_l, θ_0,l, and θ_1,l and we apply the non-linear Softplus activation function σ. To have multiple partitioning rings, we define multiple functions indexed by l ∈{0,…,L-1} and calculate the phase estimate according to z_k,avg = ∑_k'=k-K^k+K(∑_l=0^L-1 z'_k',l)^μ φ_k,est,mod = 1/μunwrap((z_k, avg)). 
For L=0, no partitioning is applied to the received symbols. In (<ref>), a weighted average is calculated over 2K+1 neighboring phasors. The phase estimate φ_k,est,mod for the symbol at time step k is then obtained by taking the complex argument and performing phase unwrapping to obtain a continuous phase variation. In a subsequent step, we perform averaging across the neighboring phase estimates included in the partitioning to obtain phase estimates for constellation symbols that were not included in the partitioning step. § RESULTS Performing gcs without partitioning for μ∈{2,3,4}, we obtain constellations with a significant number of constellation points located close to the corresponding symmetry lines. We display the constellations for vv cpe without partitioning in <ref>. With a bit-wise autoencoder approach, the bit labeling is also included in the optimization, and we print the obtained bit labels for each constellation symbol in <ref>. The training and validation were performed at a symbol rate of R_S = 32 GBaud. For training, the snr was fixed to 20 dB, and the laser linewidth was fixed to 100 kHz. For the validation, we use a genie-aided cycle-slip compensation to simulate a system operating with a phase offset which can be corrected unambiguously with a vv cpe. This can be achieved with a first-stage cpe employing a pilot-based <cit.> or pilot-less <cit.> approach. Comparing the performance of the geometrically optimized constellations for varying μ in <ref>, we observe that for lower snr the system with μ=4 shows the best performance, while for high snr the constellation and system with μ=5 also approach the performance of μ=4. From this, we conclude that the classical choice of μ=4 for square qam also provides the best performance for a geometrically optimized constellation. The constellation trained with L=1 for partitioning shown in <ref> shows consistently better performance compared to the constellation without partitioning. Since the partitioning is performed in a way where most of the symbols are included in the vv cpe, the increased performance might stem from the additional averaging step performed after the partitioning. § CONCLUSIONS We present an approach to optimize the gcs of higher-order modulations for vv-based cpe. We first introduce our e2e optimization system based on the bitwise auto-encoder and then present a modified vv cpe, which applies a differentiable partitioning to the constellation to further improve the performance. We compare the results of our approaches in terms of the bmi and find that optimization of the gcs allows for the application of the vv cpe to higher-order modulation formats without any partitioning. Application of partitioning further improves the performance, while adding some additional computational complexity to the dsp. Furthermore, an analysis of the exponent μ used in the vv algorithm shows that for 64-ary constellations and optimized gcs a value of μ=4 provides the best performance.
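As a compact illustration of the estimators discussed in this paper, the sketch below implements the plain vv phase estimate φ_k,est and the soft ring-selection function z'_k,l defined above in NumPy; window length, exponent and partition parameters are placeholders rather than values from the paper, and in the e2e training described above these operations would be written in a differentiable framework. The toy example uses axis-aligned QPSK so that z^μ carries no intrinsic modulation phase.

```python
import numpy as np

# Sketch of the (optionally partitioned) Viterbi-Viterbi phase estimator described above.
# K, mu and the partition parameters (s, theta0, theta1) are illustrative placeholders.

def softplus(x):
    return np.logaddexp(0.0, x)

def soft_partition(z, s, theta0, theta1):
    """Soft ring selection: weight symbols whose magnitude lies roughly in [theta0, theta1]."""
    r = np.abs(z)
    return softplus(s * (r - theta0)) * softplus(s * (theta1 - r)) * z / r

def vv_phase_estimate(z, mu=4, K=32, rings=None):
    """rings: list of (s, theta0, theta1); None reproduces the plain V&V estimator."""
    zp = z ** mu if rings is None else sum(soft_partition(z, *p) for p in rings) ** mu
    avg = np.convolve(zp, np.ones(2 * K + 1), mode="same")   # sliding sum over 2K+1 symbols
    return np.unwrap(np.angle(avg)) / mu

# Toy example: axis-aligned QPSK with Wiener phase noise (illustrative parameters).
rng = np.random.default_rng(1)
sym = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 4096))
phase = np.cumsum(rng.normal(0.0, 2e-3, 4096))               # Wiener phase-noise walk
rx = sym * np.exp(1j * phase) + 0.05 * (rng.normal(size=4096) + 1j * rng.normal(size=4096))

est = vv_phase_estimate(rx, mu=4, K=32)
est_part = vv_phase_estimate(rx, mu=4, K=32, rings=[(10.0, 0.5, 1.5)])   # one soft ring near |z|=1
print("RMS tracking error [rad]:", np.sqrt(np.mean((est - phase) ** 2)))  # should be ~1e-2 rad here
```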
http://arxiv.org/abs/2307.01100v3
20230703152403
Fuzzy Dark Matter, the Dark Dimension, and the Pulsar Timing Array Signal
[ "Luis A. Anchordoqui", "Ignatios Antoniadis", "Dieter Lust" ]
hep-ph
[ "hep-ph", "hep-th" ]
MPP-2023-139 LMU-ASC 24/23 Department of Physics and Astronomy, Lehman College, City University of New York, NY 10468, USA Department of Physics, Graduate Center, City University of New York, NY 10016, USA Department of Astrophysics, American Museum of Natural History, NY 10024, USA Laboratoire de Physique Théorique et Hautes Énergies - LPTHE Sorbonne Université, CNRS, 4 Place Jussieu, 75005 Paris, France Max–Planck–Institut für Physik, Werner–Heisenberg–Institut, 80805 München, Germany Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians-Universität München, 80333 München, Germany 2mm We propose a new dark matter contender within the context of the so-called “dark dimension”, an innovative 5-dimensional construct that has a compact space with characteristic length-scale in the micron range. The new dark matter candidate is the radion, a bulk scalar field whose quintessence-like potential drives an inflationary phase described by a 5-dimensional de Sitter (or approximate) solution of Einstein equations. We show that the radion could be ultralight and thereby serve as a fuzzy dark matter candidate. We advocate a simple cosmological production mechanism bringing into play unstable Kaluza-Klein graviton towers which are fueled by the decay of the inflaton. We demonstrate that the fuzzy radion can accommodate the signal recently observed in pulsar timing arrays. 95.35.+d, 11.25.-w Fuzzy Dark Matter, the Dark Dimension, and the Pulsar Timing Array Signal Dieter⁠ Lüst July 2023 ============================================================================= § INTRODUCTION While there are various lines of evidence for the existence of dark matter in the universe, the nature of the dark matter particle remains a challenging dilemma at the interface of astrophysics, cosmology, and particle physics <cit.>. There is a large variety of dark matter candidates with masses spanning many orders of magnitude. Of particular interest here, fuzzy dark matter (FDM) is made up of non-interacting ultralight bosonic particles that exhibit coherent dynamics and a wave-like behaviour on galactic scales <cit.>. On sub-galactic length scales, FDM brings to light a distinctive phenomenology alternative to that of cold dark matter (CDM). However, FDM predictions are indistinguishable from those of CDM on large scales, and so benefits from the remarkable success of ΛCDM cosmology. The main parameter regulating the two FDM regimes is the particle's mass, with a range that spans three decades of energy, 10^-24 m/ eV 10^-22  eV. Particles of such tiny mass and with typical velocities v found in haloes hosting Milky Way-sized galaxies, acquire a very long de Broglie wavelength, λ_ dB≡2 π/mv = 4.8  kpc(10^-23  eV/m) (250  km/s/v) , to deliver the wave-like behavior at galactic scales. FDM can populate the galactic haloes with large occupation numbers and behave as self-gravitating dark matter waves. This engenders a pressure-like effect on macroscopic scales which catalyzes a flat core at the center of galaxies, with a relatively marked transition to a less dense outer region that follows the typical CDM-like distribution. Before proceeding, we take note of a serious challenge for FDM models. The criticism centers on numerical simulations to accommodate Lyman-α forest data which provide bounds on the fraction of FDM <cit.>. In response, it was noted that these bounds strongly depend on the modelling of the intergalactic medium <cit.>. 
Whichever point of view one may find more convincing, it seems most conservative at this point to depend on experiment (if possible) rather than numerical simulations to resolve the issue. In this paper we show that the dark dimension scenario <cit.> embraces a well motivated FDM candidate and that such ultralight particle could be the carrier of the signal recently observed at pulsar timing arrays <cit.>. The layout is as follows. In Sec. <ref> we give a concise overview of the signal uncovered by pulsar timing arrays along with a discussion of its possible connection to FDM and open challenges. In Sec. <ref> we first outline the basic setting of the dark dimension scenario and discuss general aspects of the effective low energy theory inherited from properties of the overarching string theory. After that, we identify the radion as a FDM candidate and we show that it can accommodate the nHz signal recorded by pulsar timing arrays. In Sec. <ref> we discuss the process of radion production to estimate the corresponding relic abundance. Conclusions are given in Sec. <ref>.[A plethora of models have been proposed to explain the data including string inspired scenarios <cit.> and alternative dark matter interpretations <cit.>.] § SEARCHING FOR FUZZY DARK MATTER WITH PULSAR TIMING ARRAYS Pulsar timing arrays (PTAs) make use of widely distributed and extremely well-timed millisecond pulsars as a Galaxy-scale interferometer (with arms extending between Earth and each pulsar in the array) to measure gravitational waves <cit.>. The typical gravitational wave signal of PTAs originates in the coalescence of supermassive black hole binaries, which induce sinusoidally oscillating arrival-time variations with frequency ω and characteristic strain h_c <cit.>. The power spectrum of the gravitational wave strain is usually approximated as a power law with amplitude A and spectral index α, h_c (f) = A (f/f_ yr)^α , at reference frequency f_ yr = 1  yr^-1≃ 3.17 × 10^-8  Hz <cit.>. The time residual amplitude of the stochastic gravitational wave background is found to be Δ t_ GW = h_c/ωsin[ω D (1-cosθ)/2] (1 + cosθ) sin (2ϕ) , where ϕ is the polarization angle of the gravitational wave, D is the distance to the pulsar, and θ is the angle between the directions to the pulsar and source of the gravitational wave <cit.>. For ω D ≫ 1, the root mean square value of the time residual, √(⟨Δ t^2_ GW⟩) = 1/√(3)h_c/ω , is obtained by averaging over θ, ϕ, and D. PTAs also provide a brand new pathway for probing the long-distance effective field theory of our Universe. This is because FDM inside galactic haloes behaves as perfect fluid with oscillating pressure of frequency ω = 2m <cit.>. These oscillations induce fluctuations in the local stress tensor, which in turn generate fluctuations in metric perturbations via Einstein’s equations. Thus, for any propagating signal, the metric perturbations affect the photon geodesic on its path from the pulsar and generate a timing residual with amplitude Δ t_ FDM = 2 ψ_c/ωsin[ω D/2 + ϖ (x⃗_ d) - ϖ(x⃗_ p) ] , where ϖ is the phase of the FDM scalar field at the locations of the pulsar x⃗_ p and the detector x⃗_ d, ψ_c = ρ_ DM/8 M_p^2 m^2 , and where ρ_ DM is the local dark matter density and M_p the reduced Planck mass <cit.>. Averaging the square of (<ref>) over the distance to the pulsar leads to √(⟨Δ t^2_ FDM)⟩ = √(2)ψ_c/ω . 
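The numbers quoted in this section follow from the expressions above by straightforward unit conversion. The short script below (ours, with standard constants) reproduces the de Broglie wavelength and oscillation frequency, and evaluates ψ_c, the rms timing residual, and the equivalent strain 2√3 ψ_c used in the comparison that follows; residual differences with the quoted figures are at the level of rounding and of the adopted value of ρ_DM.

```python
import numpy as np

# Back-of-the-envelope check of the FDM quantities used above (natural units, hbar = c = 1).
# Constants are standard values; small differences with the numbers quoted in the text
# come from rounding and the choice of rho_DM.
hbar_eVs  = 6.582e-16                    # eV * s
hbarc_eVm = 1.973e-7                     # eV * m
Mpl_eV    = 2.435e27                     # reduced Planck mass in eV
GeVcm3_to_eV4 = 1e9 / (1e-2 / hbarc_eVm) ** 3   # 1 GeV/cm^3 expressed in eV^4
kpc_m     = 3.086e19

m_eV   = 1e-23                           # FDM mass
v_c    = 250e3 / 2.998e8                 # galactic velocity in units of c
rho_eV4 = 0.4 * GeVcm3_to_eV4            # local DM density, 0.4 GeV/cm^3

lam_dB = 2 * np.pi / (m_eV * v_c) * hbarc_eVm / kpc_m     # de Broglie wavelength in kpc
f_Hz   = m_eV / (np.pi * hbar_eVs)                        # oscillation frequency f = m / pi
psi_c  = rho_eV4 / (8 * Mpl_eV**2 * m_eV**2)              # metric-oscillation amplitude
dt_rms = np.sqrt(2) * psi_c / (2 * m_eV / hbar_eVs)       # rms timing residual in seconds

print(f"lambda_dB ~ {lam_dB:.1f} kpc, f ~ {f_Hz:.1e} Hz")  # ~4.8 kpc, ~4.8e-9 Hz
print(f"psi_c ~ {psi_c:.1e}, rms residual ~ {dt_rms*1e9:.0f} ns, "
      f"h_c = 2*sqrt(3)*psi_c ~ {2*np.sqrt(3)*psi_c:.1e}")
```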
A direct comparison of (<ref>) and (<ref>) shows that the FDM time delay that would be captured by PTA experiments is similar to that of the stochastic gravitational wave background, at characteristic strain h_c = 2√3 ψ_c ≃ 2.7 × 10^-15 (ρ_DM/0.4 GeV/cm^3) (10^-23 eV/m)^2 and frequency f ≡ ω/(2π) ≃ 4.8 × 10^-9 Hz (m/10^-23 eV) . Very recently, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) collaboration reported 4σ evidence for a stochastic signal with A = 2.4^+0.7_-0.6 × 10^-15 at f_yr assuming a fiducial α = -2/3, which is the typical spectral index of an isotropic background of gravitational waves radiated by inspiraling supermassive black hole binaries <cit.>. The signal has also been observed with the Parkes Pulsar Timing Array (PPTA), with A = 3.1^+1.3_-0.9 × 10^-15 and α = -0.45 ± 0.20 for a 68% credible interval, whereas when the spectral index is predetermined in the data analysis (α = -2/3) then A = 2.04^+0.25_-0.22 × 10^-15 <cit.>. The European Pulsar Timing Array (EPTA) and the Chinese Pulsar Timing Array have also captured the signal <cit.>. From (<ref>) it is straightforward to see that the predicted strain from FDM at the reference frequency appears to be too small to match the data.[We note in passing that the signal is also inconsistent with gravitational waves of primordial origin <cit.>.] The NANOGrav collaboration also searched for fingerprints of FDM restricting the prior to frequencies in the range f/nHz ∈ [3.05, 6.09] and found mild evidence for a signal with f ∼ 4 nHz, corresponding to m ∼ 8 × 10^-24 eV <cit.>. For a spectrum ∝ f^-0.45 ± 0.2 with amplitude in the 68% credible interval, the extrapolated strain is 3.7 × 10^-15 ≲ h_c(4 nHz) ≲ 1.7 × 10^-14. The EPTA collaboration also searched for signals of FDM with 10^-24.0 ≲ m/eV ≲ 10^-23.2, and concluded that within such a mass range FDM cannot constitute 100% of the total ρ_DM <cit.>. Now, taking into account the uncertainties on ρ_DM <cit.>, from (<ref>) we see that the predicted strain by FDM at 4 nHz can easily accommodate the PTA data, even if FDM does not constitute 100% of ρ_DM. The gravitational wave spectral energy density associated with a PTA signal is conventionally expressed by Ω_GW(f) = 2π^2/(3 H_0^2) f^2 h_c^2(f) , where H_0 = 100 h km/s/Mpc is the Hubble constant <cit.>. From (<ref>) it is straightforward to calculate the radion monochromatic energy density: Ω_GW(f) h^2 = 2 × 10^-10. In Fig. <ref> we show a comparison of the radion monochromatic signal and the spectral energy density reported by the PTA experiments. In summary, the NANOGrav collaboration reported mild evidence for FDM with m ∼ 8 × 10^-24 eV <cit.>. Even though the excess is not statistically significant yet, it is interesting to entertain the possibility that it corresponds to a real signal of new physics. It is this that we now turn to investigate within the context of the dark dimension scenario. § THE GOOD, THE BAD, AND THE FUZZY The Swampland program seeks to understand which are the “good” low-energy effective field theories (EFTs) that can couple to gravity consistently (e.g. the landscape of superstring theory vacua) and distinguish them from the “bad” ones that cannot <cit.>. In theory space, the frontier discerning the good theories from those downgraded to the swampland is drawn by a family of conjectures classifying the properties that an EFT should call for/avoid to enable a consistent completion into quantum gravity. 
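The following sketch checks the frequency–mass correspondence used above (the FDM-induced pressure oscillates at ω = 2m, so f = m/π in natural units); the eV-to-Hz conversion is a standard constant, not a number from the text.

```python
# Frequency-mass relation for FDM: f = omega/(2*pi) with omega = 2*m.
import math

EV_TO_INV_S = 1.519_267e15   # 1 eV / hbar in s^-1 (standard conversion)

def fdm_frequency_Hz(m_eV):
    return 2 * m_eV * EV_TO_INV_S / (2 * math.pi)

print(f"m = 1e-23 eV -> f = {fdm_frequency_Hz(1e-23):.2e} Hz")   # ~4.8e-9 Hz, as in the strain formula
print(f"m = 8e-24 eV -> f = {fdm_frequency_Hz(8e-24):.2e} Hz")   # ~3.9e-9 Hz, i.e. the ~4 nHz NANOGrav hint
```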
These conjectures provide a bridge from quantum gravity to astrophysics, cosmology, and particle physics <cit.>. For example, the distance conjecture (DC) forecasts the appearance of infinite towers of states that become exponentially light and trigger the collapse of the EFT at infinite distance limits in moduli space <cit.>. Connected to the DC is the anti-de Sitter (AdS) distance conjecture, which correlates the dark energy density to the mass scale m characterizing the infinite tower of states, m ∼ |Λ|^α, as the negative AdS vacuum energy Λ→ 0, with α a positive constant of O(1) <cit.>. Besides, under the hypothesis that this scaling behavior holds in dS (or quasi dS) space, an unbounded number of massless modes also pop up in the limit Λ→ 0. As demonstrated in <cit.>, the generalization of the AdS-DC to dS space could help elucidate the radiative stability of the cosmological hierarchy Λ_4/M_p^4 ∼ 10^-120, because it connects the size of the compact space R_⊥ to the dark energy scale Λ_4^-1/4, R_⊥ ∼ λ Λ_4^-1/4 , where the proportionality factor is estimated to be within the range 10^-4 < λ < 10^-1. This implies that the new (dark) dimension characterized by a length-scale in the micron range has an associated Kaluza-Klein (KK) tower that opens up at the mass scale m_KK ∼ 1/R_⊥. Within this set-up, the 5-dimensional Planck scale (or species scale where gravity becomes strong <cit.>) is given by Λ_QG ∼ m_KK^1/3 M_p^2/3. The dark dimension hosts a rich phenomenology <cit.>. For example, it was noted in <cit.> that the universal coupling of the SM fields to the massive spin-2 KK excitations of the graviton in the dark dimension provides a dark matter candidate. Complementary to the dark gravitons, it was observed in <cit.> that primordial black holes with Schwarzschild radius smaller than a micron could also be good dark matter candidates, possibly even with an interesting close relation to the dark gravitons <cit.>. Next, in line with our stated plan, we propose a new dark matter candidate within this framework. It is unnatural to entertain that the size of the dark dimension would remain fixed during the evolution of the Universe right at the species scale. To accommodate this hierarchy we need to inflate the size of the dark dimension. To see how this works explicitly, we consider that the inflationary phase can be described by a 5-dimensional dS (or approximate) solution of Einstein equations <cit.>. All dimensions (compact and non-compact) expand exponentially in terms of the 5-dimensional proper time. This implies that when inflation starts the radius R of the compact space is small and the 4-dimensional Planck mass is of order the 5-dimensional Planck scale, which is taken to be the string scale M_s. However, when inflation ends the radius of the compact space is on the micron-scale size and the 4-dimensional Planck scale is much bigger, M_p^2 ∼ M_s^3 R_⊥ . A straightforward calculation shows that the compact space requires 42 e-folds to expand from the fundamental length 1/M_s to the micron size. We can interpret the solution in terms of 4-dimensional fields using 4-dimensional Planck units from the relation (<ref>), which amounts to going to the 4-dimensional Einstein frame. This implies that if R expands N e-folds, then the 3-dimensional space would expand 3N/2 e-folds as a result of a uniform 5-dimensional inflation <cit.>. 
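Below is an illustrative evaluation of the dark-dimension scales entering the discussion above, under the assumptions Λ_4^1/4 ≃ 2.3 × 10^-3 eV (the observed dark-energy scale), M_p ≃ 2.4 × 10^27 eV, λ = 10^-2, and M_s ∼ Λ_QG; none of these specific numerical choices is fixed by the text.

```python
# Illustrative dark-dimension scales and the e-fold count for the compact space.
import math

LAMBDA4_QUARTER_eV = 2.3e-3     # Lambda_4^(1/4), dark-energy scale (assumed input)
M_P_eV             = 2.435e27   # reduced Planck mass (standard value)
HBAR_C_eV_um       = 0.1973     # eV*um

lam  = 1e-2                               # proportionality factor, within 1e-4 .. 1e-1
m_kk = LAMBDA4_QUARTER_eV / lam           # m_KK ~ 1/R_perp ~ Lambda_4^(1/4)/lambda
R_um = HBAR_C_eV_um / m_kk                # R_perp in microns
L_qg = m_kk**(1/3) * M_P_eV**(2/3)        # species scale Lambda_QG ~ m_KK^(1/3) M_p^(2/3)

# e-folds to stretch the dark dimension from ~1/M_s (assuming M_s ~ Lambda_QG) up to R_perp
N = math.log(L_qg / m_kk)
print(f"m_KK ~ {m_kk:.2g} eV, R_perp ~ {R_um:.2g} um, "
      f"Lambda_QG ~ {L_qg/1e9:.2g} GeV, N ~ {N:.0f}")   # micron-range size and N ~ 43, close to the 42 quoted
```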
Altogether, the 3-dimensional space has expanded by about 60 e-folds to solve the horizon problem, while connecting this particular solution to the generation of a mesoscopic size dimension. A consistent model requires the size of the dark dimension to be stabilized at the end of inflation; an investigation along this line is already underway <cit.>. The 5-dimensional action of uniform dS (or approximate) inflation S_5 = ∫ d^4x dy ( (1/2) Λ_QG^3 R_(5) - Λ_5 ) , leads to a runaway potential for the radion R coming from the 5-dimensional cosmological constant, where R_(5) is the higher dimensional curvature scalar and Λ_5 is the 5-dimensional cosmological constant at the end of inflation. The quintessence-like potential of the radion is seen explicitly upon dimensional reduction to 4 dimensions. The resulting 4-dimensional action in the Einstein frame is found to be S_4 = ∫ d^4x { (1/2) M_p^2 R_(4) - (3/4) M_p^2 (∂R/R)^2 - (2π⟨R⟩)^2 Λ_5/(2π R) } , where R_(4) is the Ricci scalar and ⟨R⟩ is the vacuum expectation value (vev) of R after the end of inflation. Because the radion field R is not canonically normalized, we define ϕ = √(3/2) ln(R/⟨R⟩). In terms of the normalized field ϕ the scalar potential takes the advertised quintessence-like form V(ϕ) = 2π Λ_5 ⟨R⟩ e^-√(2/3)ϕ , and the radion mass is given by m ∼ √(V_ϕϕ)/M_p , where V_ϕ ≡ dV/dϕ. We take Λ_5 ⟨R⟩ ∼ Λ_4/λ^8 and so for λ ∼ 5 × 10^-3 we obtain the desired result to accommodate the PTA signal. An important aspect of this model is that the coupling of the radion to SM fields must be suppressed to avoid conflicts with limits on long range forces. Herein, we assume that the radion has a localized kinetic term (generated, e.g., through an expectation value of a brane field) that suppresses the coupling to matter. Alternatively, in the absence of a scalar potential, the 5-dimensional radion is equivalent to a Brans-Dicke scalar with a parameter ω = -4/3. It has been argued that an appropriate modification of such theories due to bulk quantum corrections can lead to a logarithmic scale (time) dependence of ω that suppresses the radion coupling to matter, consistently with the experimental limits <cit.>. An investigation along these lines is obviously important and remains to be done <cit.>. § PRODUCTION OF THE FUZZY RADION AND ITS RELIC ABUNDANCE The issue that remains to be assessed is whether there is a mechanism which allows enough radion production to accommodate the relic dark matter density. An interesting possibility emerges if the inflaton has roughly equal couplings to brane and bulk fields, such that on decay it will produce the SM fields while also populating the KK towers. We begin by considering a tower of equally spaced dark gravitons, indexed by an integer l, with mass m_l = l m_KK. We assume that the cosmic evolution of the dark sector is mostly driven by “dark-to-dark” decay processes that regulate the decay of KK gravitons within the dark tower. The proposed decay model then provides a particular realization of the dynamical dark matter model <cit.>. The intra-KK decays in the bulk require a spontaneous breakdown of the translational invariance in the compact space such that the 5-dimensional momenta are not conserved. An explicit realization of this idea, in which the KK modes acquire a nonzero vev ⟨φ_l⟩, has been given in <cit.>. Following <cit.>, we further assume transitions by instanton-induced tunneling dynamics associated with such vacuum towers. 
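The sketch below reproduces the order-of-magnitude estimate of the radion mass from the potential above, m ∼ √(V_ϕϕ)/M_p with Λ_5⟨R⟩ ∼ Λ_4/λ^8 and λ = 5 × 10^-3; the values of Λ_4 and M_p are standard inputs and the O(1) factors are only indicative.

```python
# Radion mass estimate for V(phi) = 2*pi*Lambda_5*<R>*exp(-sqrt(2/3)*phi).
import math

M_P_eV  = 2.435e27            # reduced Planck mass (standard value)
LAMBDA4 = (2.3e-3) ** 4       # eV^4, dark-energy density scale (assumed input)
lam     = 5e-3                # value quoted in the text

V0  = 2 * math.pi * LAMBDA4 / lam**8   # V(0) = 2*pi*Lambda_5*<R>, in eV^4
Vpp = (2.0 / 3.0) * V0                 # V_phiphi at phi = 0 for this exponential potential
m   = math.sqrt(Vpp) / M_P_eV          # radion mass in eV

print(f"V(0) ~ {V0:.1e} eV^4, m ~ {m:.1e} eV")   # ~5e8 eV^4 and ~7e-24 eV, close to the quoted ~6e8 eV^4 and ~8e-24 eV
```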
The effect of the instanton processes is to accelerate the cascade dynamics to collapse into the radion. Bearing this in mind, we calculate the total decay width of the massive KK tower into the radion, as a sum over the decay of a particular KK l into the radion and any of the lighter KK gravitons, Γ^KK_R = ∑_l'<l Γ^l_Rl' ∼ (1/2π) m_l m_KK^3/(m M_p^2) ∑_l'<l ⟨φ_l'⟩ . Following <cit.>, we now take all the vevs to be the same, ⟨φ_l⟩ = ⟨φ⟩ ∼ M_s, so that (<ref>) can be recast as Γ^KK_R ∼ (1/2π) m_l m_KK^3 R_⊥ ⟨φ⟩/(m M_p^2) = (1/2π) m_l m_KK^3/(m M_s^2) , where in the second rendition we have made use of (<ref>). Substituting in (<ref>) for our fiducial values (m_KK ∼ 10 eV and m ∼ 8 × 10^-24 eV), with m_l ∼ m_KK and M_s ∼ Λ_QG, we obtain Γ^KK_R ∼ 2 × 10^6 s^-1. This implies that the energy the inflaton deposited in the KK tower collapses entirely into the radion well before the QCD phase transition (with characteristic temperature ∼ 150 MeV and age ∼ 20 μs). Qualitatively, radion cosmology resembles that of ultralight axion-like particles <cit.>. Namely, the radion equation of motion is given by ϕ̈ + 3 H ϕ̇ + m^2 ϕ = 0 , where H is the Hubble parameter. At very early times, when m < 3H, the radion field is overdamped and frozen at its initial value by Hubble friction. During this epoch the equation of state is w_ϕ = -1 and the radion behaves as a sub-dominant cosmological constant. Once the Universe expands to the point where m ∼ 3H, the driving force overcomes the friction and the field begins to slowly roll. Finally, when m > 3H, the field executes undamped oscillations. The equation of state oscillates around w_ϕ = 0 and the energy density scales as CDM. The radiation energy density can be conveniently expressed as ρ_R = (∑_B g_B + 7/8 ∑_F g_F) (π^2/30) T^4 ≡ (π^2/30) N(T) T^4 , where g_B(F) is the total number of boson (fermion) degrees of freedom and the sum runs over all boson (fermion) states with m_B(F) ≪ T, and where N(T) is the number of effective degrees of freedom (the factor of 7/8 is due to the difference between the Fermi and Bose integrals). The expansion rate as a function of the temperature in the plasma is given by H(T) = [π^2 N(T)/90]^1/2 T^2/M_p ∼ 0.33 √(N(T)) T^2/M_p . By inspection of (<ref>) we can immediately see that for m ∼ 3H, only photons and neutrinos contribute to the sum in (<ref>), yielding N = 7.25. This corresponds to a temperature T_osc ∼ 53 eV, for which the total energy density of radiation (<ref>) is ρ_R(T_osc) ∼ 2 × 10^7 eV^4. Now, let ρ_ϕ(T_osc) be the background energy density of the radion field at T_osc. As the universe expands, the ratio of dark matter to radiation grows as 1/T, and in ΛCDM cosmology they are supposed to become equal at the temperature T_MR ∼ 1 eV of matter-radiation equality. This implies that [ρ_ϕ(T_osc)/ρ_R(T_osc)] (T_osc/T_MR) ∼ 1 , which leads to ρ_ϕ(T_osc) ∼ 3.8 × 10^5 eV^4 <cit.>. Note that ρ_ϕ(T_osc) ≪ V(0) ∼ 6 × 10^8 eV^4. Finally, today's radion abundance easily accommodates the observed dark matter density <cit.>, i.e., ρ_ϕ,0 ∼ ρ_DM ∼ 1.26 keV/cm^3. In closing, we note that when oscillations start, before the matter-radiation equality, the radion is non-relativistic and therefore Δ N_eff (the number of “equivalent” light neutrino species in units of the density of a single Weyl neutrino <cit.>) stays unaffected at the earliest observationally verified landmarks (viz. big bang nucleosynthesis and cosmic microwave background). As a consequence, our model remains consistent with the bounds derived in <cit.>. 
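The following bookkeeping check reproduces the numbers quoted in this paragraph from the paper's own inputs (N = 7.25, T_osc ∼ 53 eV, T_MR ∼ 1 eV); only the standard (π²/30) radiation-density prefactor is added.

```python
# Oscillation-onset energy budget: rho_R(T_osc) and rho_phi(T_osc).
import math

N_eff = 7.25    # photons + neutrinos, as quoted
T_osc = 53.0    # eV
T_MR  = 1.0     # eV, matter-radiation equality

rho_R_osc   = (math.pi**2 / 30) * N_eff * T_osc**4   # ~ 2e7 eV^4
rho_phi_osc = rho_R_osc * T_MR / T_osc               # from (rho_phi/rho_R)*(T_osc/T_MR) ~ 1

print(f"rho_R(T_osc)   ~ {rho_R_osc:.1e} eV^4")      # ~ 1.9e7 eV^4
print(f"rho_phi(T_osc) ~ {rho_phi_osc:.1e} eV^4")    # ~ 3.6e5 eV^4, cf. the ~3.8e5 eV^4 quoted
```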
§ CONCLUSIONS We have introduced a new dark matter contender within the context of the dark dimension. The dramatis personae is the radion, a bulk scalar field whose quintessence-like potential drives an inflationary phase described by a 5-dimensional de Sitter (or approximate) solution of Einstein equations. We have shown that within this set-up the radion could be ultralight and thereby serve as a fuzzy dark matter candidate. We have put forward a simple cosmological production mechanism bringing into play unstable KK graviton towers which are fueled via inflaton decay. Finally, we demonstrated that the fuzzy radion can accommodate the signal observed by PTAs. We end with an observation. The dS conjecture postulates that for V > 0, there is a lower bound |V_ϕ|/V > c in any consistent theory of gravity, with c being an O(1) parameter in reduced Planck units <cit.>. Our proposal accommodates this requirement. Moreover, exponential potentials of the form e^-αϕ are constrained by cosmological and astrophysical observations. The existing data lead to an upper bound α ≲ 0.8 <cit.>. Curiously, the upper limit on the allowed value of α is the one predicted by (<ref>). § ACKNOWLEDGMENTS We have greatly benefited from discussions with Nima Arkani-Hamed. We thank Andrea Mitridate and Sunny Vagnozzi for valuable email correspondence about the PTA signal. The research of LAA is supported by the U.S. National Science Foundation (NSF grant PHY-2112527). The work of D.L. is supported by the Origins Excellence Cluster and by the German-Israel-Project (DIP) on Holography and the Swampland. Feng:2010gw J. L. Feng, Dark Matter Candidates from Particle Physics and Methods of Detection, Ann. Rev. Astron. Astrophys. 48, 495-545 (2010) doi:10.1146/annurev-astro-082708-101659 [arXiv:1003.0904 [astro-ph.CO]]. Hu:2000ke W. Hu, R. Barkana and A. Gruzinov, Cold and fuzzy dark matter, Phys. Rev. Lett. 85, 1158-1161 (2000) doi:10.1103/PhysRevLett.85.1158 [arXiv:astro-ph/0003365 [astro-ph]]. Irsic:2017yje V. Iršič, M. Viel, M. G. Haehnelt, J. S. Bolton and G. D. Becker, First constraints on fuzzy dark matter from Lyman-α forest data and hydrodynamical simulations, Phys. Rev. Lett. 119, no.3, 031302 (2017) doi:10.1103/PhysRevLett.119.031302 [arXiv:1703.04683 [astro-ph.CO]]. Armengaud:2017nkf E. Armengaud, N. Palanque-Delabrouille, C. Yèche, D. J. E. Marsh and J. Baur, Constraining the mass of light bosonic dark matter using SDSS Lyman-α forest, Mon. Not. Roy. Astron. Soc. 471, no.4, 4606-4614 (2017) doi:10.1093/mnras/stx1870 [arXiv:1703.09126 [astro-ph.CO]]. Rogers:2020ltq K. K. Rogers and H. V. Peiris, Strong Bound on Canonical Ultralight Axion Dark Matter from the Lyman-Alpha Forest, Phys. Rev. Lett. 126, no.7, 071302 (2021) doi:10.1103/PhysRevLett.126.071302 [arXiv:2007.12705 [astro-ph.CO]]. Hui:2016ltb L. Hui, J. P. Ostriker, S. Tremaine and E. Witten, Ultralight scalars as cosmological dark matter, Phys. Rev. D 95, no.4, 043541 (2017) doi:10.1103/PhysRevD.95.043541 [arXiv:1610.08297 [astro-ph.CO]]. Montero:2022prj M. Montero, C. Vafa and I. Valenzuela, The dark dimension and the Swampland, JHEP 02, 022 (2023) doi:10.1007/JHEP02(2023)022 [arXiv:2205.12293 [hep-th]]. NANOGrav:2023gor G. Agazie et al. [NANOGrav], The NANOGrav 15-year Data Set: Evidence for a Gravitational-Wave Background, doi:10.3847/2041-8213/acdac6 [arXiv:2306.16213 [astro-ph.HE]]. Reardon:2023gzh D. J. Reardon, A. Zic, R. M. Shannon, G. B. Hobbs, M. Bailes, V. Di Marco, A. Kapur, A. F. Rogers, E. Thrane and J. Askew, et al. 
Search for an isotropic gravitational-wave background with the Parkes Pulsar Timing Array, doi:10.3847/2041-8213/acdd02 [arXiv:2306.16215 [astro-ph.HE]]. Antoniadis:2023ott J. Antoniadis, P. Arumugam, S. Arumugam, S. Babak, M. Bagchi, A. S. B. Nielsen, C. G. Bassa, A. Bathula, A. Berthereau and M. Bonetti, et al. The second data release from the European Pulsar Timing Array III. Search for gravitational wave signals, [arXiv:2306.16214 [astro-ph.HE]]. Xu:2023wog H. Xu, S. Chen, Y. Guo, J. Jiang, B. Wang, J. Xu, Z. Xue, R. N. Caballero, J. Yuan and Y. Xu, et al. Searching for the nano-Hertz stochastic gravitational wave background with the Chinese Pulsar Timing Array Data Release I, doi:10.1088/1674-4527/acdfa5 [arXiv:2306.16216 [astro-ph.HE]]. Megias:2023kiy E. Megias, G. Nardini and M. Quiros, Pulsar Timing Array Stochastic Background from light Kaluza-Klein resonances, [arXiv:2306.17071 [hep-ph]]. Ellis:2023tsl J. Ellis, M. Lewicki, C. Lin and V. Vaskonen, Cosmic Superstrings Revisited in Light of NANOGrav 15-Year Data, [arXiv:2306.17147 [astro-ph.CO]]. Lazarides:2023ksx G. Lazarides, R. Maji and Q. Shafi, Superheavy quasi-stable strings and walls bounded by strings in the light of NANOGrav 15 year data, [arXiv:2306.17788 [hep-ph]]. Han:2023olf C. Han, K. P. Xie, J. M. Yang and M. Zhang, Self-interacting dark matter implied by nano-Hertz gravitational waves, [arXiv:2306.16966 [hep-ph]]. Guo:2023hyp S. Y. Guo, M. Khlopov, X. Liu, L. Wu, Y. Wu and B. Zhu, Footprints of Axion-Like Particle in Pulsar Timing Array Data and JWST Observations, [arXiv:2306.17022 [hep-ph]]. Yang:2023aak J. Yang, N. Xie and F. P. Huang, Nano-Hertz stochastic gravitational wave background as hints of ultralight axion particles, [arXiv:2306.17113 [hep-ph]]. Anholm:2008wy M. Anholm, S. Ballmer, J. D. E. Creighton, L. R. Price and X. Siemens, Optimal strategies for gravitational wave stochastic background searches in pulsar timing data, Phys. Rev. D 79, 084030 (2009) doi:10.1103/PhysRevD.79.084030 [arXiv:0809.0701 [gr-qc]]. Thorne:1987af K. S. Thorne, Gravitational radiation, in 300 years of Gravitation (Cambridge University Press, Ed. S. Hawking and W. Israel, 1987) p.330. ISBN: 9780521379762 NANOGrav:2020bcs Z. Arzoumanian et al. [NANOGrav], The NANOGrav 12.5 yr Data Set: Search for an Isotropic Stochastic Gravitational-wave Background, Astrophys. J. Lett. 905, no.2, L34 (2020) doi:10.3847/2041-8213/abd401 [arXiv:2009.04496 [astro-ph.HE]]. Wen:2011xc Z. L. Wen, F. A. Jenet, D. Yardley, G. B. Hobbs and R. N. Manchester, Constraining the coalescence rate of supermassive black-hole binaries using pulsar timing, Astrophys. J. 730, 29 (2011) doi:10.1088/0004-637X/730/1/29 [arXiv:1103.2808 [astro-ph.CO]]. Khmelnitsky:2013lxt A. Khmelnitsky and V. Rubakov, Pulsar timing signal from ultralight scalar dark matter, JCAP 02, 019 (2014) doi:10.1088/1475-7516/2014/02/019 [arXiv:1309.5888 [astro-ph.CO]]. Vagnozzi:2023lwo S. Vagnozzi, Inflationary interpretation of the stochastic gravitational wave background signal detected by pulsar timing array experiments, [arXiv:2306.16912 [astro-ph.CO]]. NANOGrav:2023hvm A. Afzal et al. [NANOGrav], The NANOGrav 15-year Data Set: Search for Signals from New Physics, Astrophys. J. Lett. 951, no.1, L11 (2023) doi:10.3847/2041-8213/acdc91 [arXiv:2306.16219 [astro-ph.HE]]. Smarra:2023ljf C. Smarra, B. Goncharov, E. Barausse, J. Antoniadis, S. Babak, A. S. B. Nielsen, C. G. Bassa, A. Berthereau, M. Bonetti and E. Bortolas, et al. 
The second data release from the European Pulsar Timing Array VI: Challenging the ultralight dark matter paradigm, [arXiv:2306.16228 [astro-ph.HE]]. Garbari:2011dh S. Garbari, J. I. Read and G. Lake, Limits on the local dark matter density, Mon. Not. Roy. Astron. Soc. 416, 2318-2340 (2011) doi:10.1111/j.1365-2966.2011.19206.x [arXiv:1105.6339 [astro-ph.GA]]. Guo:2020rcv R. Guo, C. Liu, S. Mao, X. X. Xue, R. J. Long and L. Zhang, Measuring the local dark matter density with LAMOST DR5 and Gaia DR2, doi:10.1093/mnras/staa1483 [arXiv:2005.12018 [astro-ph.GA]]. Vafa:2005ui C. Vafa, The string landscape and the swampland, [arXiv:hep-th/0509212 [hep-th]]. vanBeest:2021lhn M. van Beest, J. Calderón-Infante, D. Mirfendereski and I. Valenzuela, Lectures on the Swampland Program in string compactifications, Phys. Rept. 989, 1-50 (2022) doi:10.1016/j.physrep.2022.09.002 [arXiv:2102.01111 [hep-th]]. Palti:2019pca E. Palti, The swampland: introduction and review, Fortsch. Phys. 67, no.6, 1900037 (2019) doi:10.1002/prop.201900037 [arXiv:1903.06239 [hep-th]]. Agmon:2022thq N. B. Agmon, A. Bedroya, M. J. Kang and C. Vafa, Lectures on the string landscape and the swampland, [arXiv:2212.06187 [hep-th]]. Ooguri:2006in H. Ooguri and C. Vafa, On the Geometry of the String Landscape and the Swampland, Nucl. Phys. B 766, 21-33 (2007) doi:10.1016/j.nuclphysb.2006.10.033 [arXiv:hep-th/0605264 [hep-th]]. Lust:2019zwm D. Lüst, E. Palti and C. Vafa, AdS and the Swampland, Phys. Lett. B 797, 134867 (2019) doi:10.1016/j.physletb.2019.134867 [arXiv:1906.05225 [hep-th]]. Dvali:2007hz G. Dvali, Black holes and large N species solution to the hierarchy problem, Fortsch. Phys. 58, 528-536 (2010) doi:10.1002/prop.201000009 [arXiv:0706.2050 [hep-th]]. Dvali:2007wp G. Dvali and M. Redi, Black hole bound on the number of species and quantum gravity at LHC, Phys. Rev. D 77, 045027 (2008) doi:10.1103/PhysRevD.77.045027 [arXiv:0710.4344 [hep-th]]. Anchordoqui:2022ejw L. A. Anchordoqui, Dark dimension, the swampland, and the origin of cosmic rays beyond the Greisen-Zatsepin-Kuzmin barrier, Phys. Rev. D 106, no.11, 116022 (2022) doi:10.1103/PhysRevD.106.116022 [arXiv:2205.13931 [hep-ph]]. Anchordoqui:2022txe L. A. Anchordoqui, I. Antoniadis and D. Lüst, Dark dimension, the swampland, and the dark matter fraction composed of primordial black holes, Phys. Rev. D 106, no.8, 086001 (2022) doi:10.1103/PhysRevD.106.086001 [arXiv:2206.07071 [hep-th]]. Blumenhagen:2022zzw R. Blumenhagen, M. Brinkmann and A. Makridou, The dark dimension in a warped throat, Phys. Lett. B 838, 137699 (2023) doi:10.1016/j.physletb.2023.137699 [arXiv:2208.01057 [hep-th]]. Anchordoqui:2022tgp L. A. Anchordoqui, I. Antoniadis and D. Lüst, The dark universe: Primordial black hole dark graviton gas connection, Phys. Lett. B 840, 137844 (2023) doi:10.1016/j.physletb.2023.137844 [arXiv:2210.02475 [hep-th]]. Gonzalo:2022jac E. Gonzalo, M. Montero, G. Obied and C. Vafa, Dark Dimension Gravitons as Dark Matter, [arXiv:2209.09249 [hep-ph]]. Anchordoqui:2022svl L. A. Anchordoqui, I. Antoniadis and D. Lüst, Aspects of the dark dimension in cosmology, Phys. Rev. D 107, no.8, 083530 (2023) doi:10.1103/PhysRevD.107.083530 [arXiv:2212.08527 [hep-ph]]. Anchordoqui:2023oqm L. A. Anchordoqui, I. Antoniadis, N. Cribiori, D. Lüst and M. Scalisi, The Scale of Supersymmetry Breaking and the Dark Dimension, JHEP 05, 060 (2023) doi:10.1007/JHEP05(2023)060 [arXiv:2301.07719 [hep-th]]. vandeHeisteeg:2023uxj D. van de Heisteeg, C. Vafa, M. Wiesner and D. H. 
Wu, Bounds on Field Range for Slowly Varying Positive Potentials, [arXiv:2305.07701 [hep-th]]. Noble:2023mfw N. T. Noble, J. F. Soriano and L. A. Anchordoqui, Probing the Dark Dimension with Auger data, Phys. Dark Univ. (in press) [arXiv:2306.03666 [hep-ph]]. Anchordoqui:2023wkm L. A. Anchordoqui, I. Antoniadis and J. Cunat, The Dark Dimension and the Standard Model Landscape, [arXiv:2306.16491 [hep-ph]]. AAA L. A. Anchordoqui, I. Antoniadis, and N. Arkani-Hamed Large extra dimensions from higher-dimensional inflation, to appear. Albrecht:2001cp A. Albrecht, C. P. Burgess, F. Ravndal and C. Skordis, Exponentially large extra dimensions, Phys. Rev. D 65, 123506 (2002) doi:10.1103/PhysRevD.65.123506 [arXiv:hep-th/0105261 [hep-th]]. Dienes:2011ja K. R. Dienes and B. Thomas, Dynamical Dark Matter: I. Theoretical Overview, Phys. Rev. D 85, 083523 (2012) doi:10.1103/PhysRevD.85.083523 [arXiv:1106.4546 [hep-ph]]. Mohapatra:2003ah R. N. Mohapatra, S. Nussinov and A. Perez-Lorenzana, Large extra dimensions and decaying K K recurrences, Phys. Rev. D 68, 116001 (2003) doi:10.1103/PhysRevD.68.116001 [arXiv:hep-ph/0308051 [hep-ph]]. Dienes:2008qi K. R. Dienes and B. Thomas, Cascades and Collapses, Great Walls and Forbidden Cities: Infinite Towers of Metastable Vacua in Supersymmetric Field Theories, Phys. Rev. D 79, 045001 (2009) doi:10.1103/PhysRevD.79.045001 [arXiv:0811.3335 [hep-th]]. Marsh:2015xka D. J. E. Marsh, Axion Cosmology, Phys. Rept. 643, 1-79 (2016) doi:10.1016/j.physrep.2016.06.005 [arXiv:1510.07633 [astro-ph.CO]]. Planck:2018vyg N. Aghanim et al. [Planck], Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641, A6 (2020) [erratum: Astron. Astrophys. 652, C4 (2021)] doi:10.1051/0004-6361/201833910 [arXiv:1807.06209 [astro-ph.CO]]. Steigman:1977kc G. Steigman, D. N. Schramm and J. E. Gunn, Cosmological Limits to the Number of Massive Leptons, Phys. Lett. B 66, 202-204 (1977) doi:10.1016/0370-2693(77)90176-9 Cicoli:2012aq M. Cicoli, J. P. Conlon and F. Quevedo, Dark radiation in LARGE volume models, Phys. Rev. D 87, no.4, 043520 (2013) doi:10.1103/PhysRevD.87.043520 [arXiv:1208.3562 [hep-ph]]. Higaki:2012ar T. Higaki and F. Takahashi, Dark Radiation and Dark Matter in Large Volume Compactifications, JHEP 11, 125 (2012) doi:10.1007/JHEP11(2012)125 [arXiv:1208.3563 [hep-ph]]. Obied:2018sgi G. Obied, H. Ooguri, L. Spodyneiko and C. Vafa, De Sitter Space and the Swampland, [arXiv:1806.08362 [hep-th]]. Barreiro:1999zs T. Barreiro, E. J. Copeland and N. J. Nunes, Quintessence arising from exponential potentials, Phys. Rev. D 61, 127301 (2000) doi:10.1103/PhysRevD.61.127301 [arXiv:astro-ph/9910214 [astro-ph]].
http://arxiv.org/abs/2307.02666v2
20230705214224
Chiplet Cloud: Building AI Supercomputers for Serving Large Generative Language Models
[ "Huwan Peng", "Scott Davidson", "Richard Shi", "Shuaiwen Leon Song", "Michael Taylor" ]
cs.AR
[ "cs.AR" ]
Chiplet Cloud: Building AI Supercomputers for Serving Large Generative Language Models Large language models (LLMs) such as OpenAI's ChatGPT and GPT-4 have demonstrated the unprecedented capabilities of autoregressive generation in multiple AI tasks, triggering disruptive technology innovations around the world. However, with the increase of model size and context length, and the slowdown of Moore's Law, it becomes increasingly challenging to serve these large models efficiently on existing cloud hardware platforms that are powered by TPUs and GPUs. Hardware inefficiency has become a major factor limiting the democratization of LLMs. In this paper, we propose Chiplet Cloud, a chiplet-based ASIC AI-supercomputer architecture that optimizes total cost of ownership (TCO) per generated token for serving large generative language models to reduce the overall cost to deploy and run these applications in the real world. The Chiplet Cloud architecture utilizes a unique approach to scale-up cloud computing by leveraging thousands of replicated chiplet accelerator modules to collaboratively perform token generation at an unprecedented TCO per token. A key architectural feature to achieve this is the ability to fit all model parameters inside the on-chip SRAMs of the chiplets to eliminate bandwidth limitations. Doing so is non-trivial as the amount of memory required is very large and growing for modern LLMs. This has led to larger chips with worse die yield and server-level thermal dissipation issues, thus increasing the total cost of the system. By using chiplets, we can moderate the die size to improve the system cost while leveraging software mappings to exploit parallelism found within the computation to overcome the potential data communication overhead. To explore the software-hardware co-design space and perform software-mapping-aware Chiplet Cloud optimizations across the architectural design space, we propose a comprehensive design methodology that not only accurately explores a spectrum of major design trade-offs in the joint space of hardware and software, but also generates a detailed performance-cost analysis on all valid design points and then outputs the Pareto frontier. We design and evaluate Chiplet Cloud on four popular LLMs on the market representing a range of model sizes. Compared to A100 GPU clouds and TPUv4 clouds, our results show that Chiplet Cloud can achieve up to 94× and 15× improvement in TCO/Token respectively. This significantly reduces the cost for realistically serving modern LLMs. August 1, 2023 ================== § INTRODUCTION Over the past few months, a new generative Large Language Model (LLM) called ChatGPT <cit.> has gained significant attention around the world due to its unprecedented ability to perform a variety of natural language tasks. Compared with previous language models, ChatGPT is much better at understanding user intent, generating human-like responses, and keeping multi-round conversations coherent with context. While the technology behind ChatGPT is impressive, it is the way ChatGPT exposed LLMs to the general population in a genuinely useful form, together with the new business model it introduced for these AI systems, that has sent a shock wave of excitement throughout the technology industry. Since ChatGPT, we have already seen several announcements of LLMs being integrated with other services such as web search, word processing, and programming IDEs <cit.>. 
LLMs are currently driving a technology revolution at planet-scale, changing the way we interact with AI models on a daily basis. Much of the transformational increase in ML capabilities comes from the unprecedented scale of the LLMs being deployed. For example, ChatGPT is based on OpenAI's GPT-3.5, an updated version of GPT-3 <cit.> and one of the most powerful large language models available at the time of deployment. The model is trained on 570GB of text data and has more than 175B parameters, requiring a huge amount of hardware resources for both training and inference. For instance, GPT-3 requires 23 days to train on 1536 A100 GPUs <cit.>. Inference alone also requires a significant amount of hardware resources, e.g., serving GPT-3 typically requires 8 A100 GPUs simply to satisfy the minimum memory requirements. Serving generative transformer-based large language models on commodity hardware, like GPUs, is already hitting a scalability wall. State-of-the-art GPT-3 throughput on GPU is 18 tokens/sec per A100 <cit.>. ChatGPT and the promise of integrating large-language models into various existing technologies (e.g. web-search) put into question the scalability and profitability of large-language models. For example, Google Search processes over 99,000 queries per second <cit.>. If GPT-3 is embedded in every query, and assuming each query generates 500 tokens, Google needs 340,750 NVIDIA DGX servers (2,726,000 A100 GPUs) to keep up. The cost of these GPUs exceeds $40 billion in capital expenditure alone. Energy consumption will also be huge. Assuming a 50% utilization, the average power would be over 1 gigawatt, which is enough to power 750,000 homes. The CO2 emissions of generating 1 gigawatt of power are equivalent to the annual emissions of more than 200,000 cars. Hardware inefficiencies will significantly limit the impact of large generative language models in the future. To address both the high capital expenditure and energy cost of running LLMs, we must design and build hardware systems that attain significantly better total-cost-of-ownership (TCO) per token served. We propose Chiplet Cloud, a chiplet-based ASIC AI-supercomputer architecture for LLMs which aims to reduce the TCO per generated token. Figure <ref> shows the TCO/Token and latency Pareto frontier of Chiplet Cloud for GPT-3 <cit.> and PaLM 540B <cit.>. Compared to A100 GPU <cit.> and TPUv4 <cit.> clouds, our design achieves up to 94.4× and 15.2× TCO/Token improvement respectively. The cost analysis is based on Lambda GPU Cloud <cit.> and Google Cloud <cit.>. Chiplet Cloud employs a unique scale-up architecture to design a full cloud-scale system for running large generative language models at unrivaled TCO per performance that will drive and enable future LLM applications to run at planet-scale. To achieve this, we aggressively customize the architecture of Chiplet Cloud for the targeted LLMs. Driven by the design trade-offs between cost and performance, the architecture of Chiplet Cloud stores all model parameters and KV values in on-chip SRAM memory (Sec.<ref>). On-chip memories such as SRAM have better read latency and read/write energy than external memories such as DDR or HBM but require more silicon per bit. 
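As a sanity check of the back-of-envelope numbers above, the sketch below redoes the arithmetic; the per-DGX power figure of 6.5 kW is an assumption (a typical system maximum) and is not taken from the text.

```python
# Back-of-envelope for the Google-search thought experiment.
queries_per_s         = 99_000
tokens_per_query      = 500
tokens_per_s_per_a100 = 18          # GPT-3 decoding throughput per A100, as cited above

gpus_needed  = queries_per_s * tokens_per_query / tokens_per_s_per_a100
dgx_needed   = gpus_needed / 8      # 8 A100s per DGX server
avg_power_gw = dgx_needed * 6.5e3 * 0.5 / 1e9   # 50% utilization, assumed ~6.5 kW per DGX

print(f"A100s needed : {gpus_needed:,.0f}")     # ~2.75M, cf. 2,726,000 quoted
print(f"DGX servers  : {dgx_needed:,.0f}")      # ~344k, cf. 340,750 quoted
print(f"avg power    : ~{avg_power_gw:.1f} GW") # over 1 GW, as stated
```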
We show this design choice wins on TCO per performance for serving large generative language models but requires careful consideration with respect to the chiplet die size, chiplet memory capacity and total number of chiplets to balance the fabrication cost and model performance (Sec.<ref>). We observe that the inter-chiplet communication issues can be effectively mitigated through proper software-hardware co-design leveraging mapping strategies such as tensor and pipeline model parallelism <cit.>. To explore the massive hardware-software co-design space of Chiplet Cloud and find TCO per performance optimal system mapping strategies, we propose a two-phase design-search methodology for hardware exploration and software evaluation. The hardware exploration phase (Sec.<ref>) conducts a bottom-up design space exploration of Chiplet Cloud hardware architecture from a flexible accelerator architecture up to a 1U rack-mounted server architecture taking power budget, floorplan, and thermal limits into account. The software evaluation phase (Sec.<ref>) then performs a detailed performance and TCO analysis of the server designs given a specific workload while simultaneously searching for software mapping strategies that complement the server architecture. While software mapping strategies for LLMs are now considered standard techniques for improving performance on existing hardware platforms, our design methodology flips the order and allows us to explore mapping strategies across all possible Chiplet Cloud hardware platforms to further optimize the TCO per performance capabilities of Chiplet Cloud. In summary, this paper makes the following contributions: * We conduct a thorough study on the current hardware limitations for serving generative LLMs and motivate the need for building ASIC supercomputers (Sec.<ref>); * We propose Chiplet Cloud, a chiplet-based ASIC supercomputer architecture for serving generative LLMs, aiming at better TCO/Token (Sec.<ref>); * We present a comprehensive software-hardware co-design methodology that enables an accurate Chiplet Cloud design space exploration with a software-mapping-optimization-aware search (Sec.<ref>); * We design and evaluate Chiplet Cloud on four popular LLMs representing a variety of model sizes (Sec.<ref>). Compared to A100 GPU and TPUv4, our results demonstrate that Chiplet Cloud achieves up to 94× and 15× improvement in TCO/Token (Sec.<ref>). § DEMOCRATIZING LLMS THROUGH SPECIALIZATION As large generative language models are being widely used in more applications, we are heading towards a scalability wall due to inefficiencies of generalized commodity hardware platforms such as GPUs. This will cause the cost of running LLMs to increase over time as these workloads grow in scale and complexity, leading us to a world where very powerful AI systems are prohibitively expensive to use for the layman. In order to democratize this disruptive technology, specialized hardware will be the best way to go. The rest of this section is dedicated to a brief background on large generative language models in the context of what makes them difficult to run on modern hardware platforms and how moving to ASIC supercomputers is more financially feasible despite the daunting upfront cost of developing and designing such a machine. §.§ Large Generative Language Models The architecture of a generative language model is shown in the top of Figure <ref>. 
The architecture of a generative language model is built around the transformer decode block, with each block defining a layer of the model <cit.>. Within the decoder layer is a multi-head self-attention mechanism followed by a 2-layer feed-forward network. In Figure <ref> we show the number of operations for each step of the decode block. For modern large language models, fully connected (FC) layers dominate the runtime of the decode block since the model dimension d is much larger than the input length l and the context length l_ctx, thus O(ld^2) ≫ O(l_ctx^2 d). For example, more than 99% of MAC operations in GPT-3 are spent on just the FC layers. Generative language models use autoregressive inference to generate tokens, as shown at the bottom of Figure <ref>. The first pass through the model takes the input sequence as a whole and generates a single output token. That output token is then used as the input for the next pass, continuing until a specific “end of text” token is generated. This dependency between generated tokens poses a challenge for massively parallel LLM inference, limiting system utilization. A common technique used to reduce the amount of computation required for every token generated after the initial input sequence is known as KV caching, where intermediate results of the multi-headed self-attention mechanism from previous input tokens are cached and don't need to be recomputed. The maximum size of the KV cache is dependent on the context length of the model and the batch size. §.§ A High-Performance and More Profitable Hardware Solution: ASIC In order to run LLMs at scale and achieve high performance and high energy efficiency, we propose building ASIC supercomputers specifically for LLMs. ASICs are known to have the potential to deliver better performance and energy efficiency than CPUs and GPUs, since they are optimized for specific tasks. One major factor limiting the deployment of ASICs is non-recurring engineering (NRE) costs <cit.>. The barrier for overcoming NRE is primarily about the opportunity cost of running the workload on the current hardware offerings. The difference in TCO between running a workload on an ASIC supercomputer vs. the currently available hardware platform determines the break-even point for NRE, where the NRE cost directly equals the savings from using an ASIC supercomputer. The current cost of running workloads like web search with integrated LLMs is so massive that it not only justifies the cost of creating ASIC supercomputers but also justifies going further and co-optimizing those supercomputers for specific LLMs for additional improvements in TCO per token. The NRE of ASIC supercomputers includes silicon mask cost, CAD tools, IP licensing, flip-chip BGA packaging, server designs, and labor. We extend the NRE model from Moonwalk <cit.> to use a 7nm technology node and estimate the NRE of a 7nm ASIC accelerator for large language models to be approximately $35M. In Figure <ref>, we compare the cost to generate 1K tokens on GPT-3 on modern GPU clouds vs ASIC supercomputers. The GPU cost is based on the state-of-the-art throughput from DeepSpeed-Inference <cit.> and the best available rental price of $1.1/GPU/hour from Lambda <cit.>. The ASIC cost combines the NRE and TCO over a lifetime of 1.5 years, while the performance is based on our TCO-optimized Chiplet Cloud. The shaded region shows the savings per 1K tokens when switching from GPU to ASIC, with the break-even point for GPT-3 being approximately 46,000 tokens/sec. 
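The break-even reasoning in the last paragraph can be sketched as follows; the ASIC TCO per 1K tokens used here is a placeholder assumption (the paper's own TCO numbers are not reproduced), chosen only to illustrate how a break-even throughput in the tens of thousands of tokens per second falls out of the quoted NRE, rental price, throughput, and lifetime.

```python
# Break-even throughput: aggregate tokens/s at which per-token savings repay the NRE over the lifetime.
SECONDS_PER_YEAR = 365 * 24 * 3600

gpu_cost_per_1k = 1.1 / (18 * 3600) * 1000   # $1.1/hr rental at 18 tokens/s per A100 -> ~$0.017 per 1K tokens
asic_tco_per_1k = 0.001                      # assumed ASIC TCO per 1K tokens (placeholder, not from the paper)
nre_dollars     = 35e6                       # ~$35M NRE estimate quoted above
lifetime_years  = 1.5

savings_per_token = (gpu_cost_per_1k - asic_tco_per_1k) / 1000
breakeven_tps = nre_dollars / (savings_per_token * lifetime_years * SECONDS_PER_YEAR)

print(f"GPU cost/1K tokens    ~ ${gpu_cost_per_1k:.4f}")
print(f"break-even throughput ~ {breakeven_tps:,.0f} tokens/s")  # lands near the ~46,000 tokens/s quoted, for this placeholder
```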
This shows that the NRE cost of ASIC supercomputers is justifiable for modern workloads and thus customized hardware still remains the best solution to democratize LLMs. § CHIPLET CLOUD: A TCO-OPTIMIZED ASIC ARCHITECTURE FOR LLMS This section introduces the architecture and main insights of Chiplet Cloud. Instead of just optimizing raw hardware performance, more companies have started to design accelerators for better TCO per performance <cit.>. The TCO of an accelerator consists of the capital expenditures (CapEx, mainly from chip fabrication) and the operating expenditures (OpEx, mainly from power consumption); thus, TCO = CapEx + life × OpEx. We list some of the main challenges of designing ASICs for large generative language models, and give our solutions on how to optimize the TCO per performance. We believe that an architecture that enables these solutions, i.e. Chiplet Cloud, is what is needed for a large ASIC LLM serving system. §.§ Main Challenges §.§.§ Memory Bandwidth Significantly Limits LLMs' Inference Performance. The inference performance of LLMs on GPU is limited by DRAM bandwidth, rather than tensor core utilization. Under smaller batch sizes and normal inference sequence lengths, the performance is limited by the memory bandwidth in reading model parameters, i.e., loading the parameters from HBM to on-chip registers <cit.>. This is due to the low operational intensity of GeMM in FC layers, which requires loading a new parameter for almost every operation. As we discussed previously, FC layers dominate the computation with small inputs. Under larger batch sizes and long sequence lengths, the memory bandwidth bottleneck occurs when reading the KV cache. In this case, the KV cache size can be much greater than the model parameter size itself. For example, for GPT-3 with a context length of 2K, every input sequence needs 2 GB of KV cache. With a batch size of 256, the KV cache further grows to 512 GB while the total model size is 350 GB. Figure <ref> shows the roofline of GPT-2 on a V100 GPU, where most kernels are bounded by the memory bandwidth. To fully utilize the 112 TFLOPS computing power on V100, we need a total bandwidth of 85000 GB/s, which is about 100× the HBM bandwidth. Reducing parameter and KV access latency and power is critical for better TCO/Token in LLM inference. §.§.§ Chip Costs Dominate TCO Due to the autoregressive feature of the LLMs, the next token depends on previously generated tokens, which greatly limits the hardware utilization. The best hardware utilization in state-of-the-art implementations is around 50% <cit.> on GPUs and 40% on TPUs (during the decoding phase) <cit.>. Note that these are achieved with a very large batch size (e.g., 1024); the utilization can be as low as 1% when batch sizes are small (e.g., 4) <cit.>, which is the most common case for real-world LLM inference. Another issue with the current systems is that the chips used (e.g., A100 and TPUv4) are massive and close to the wafer reticle limit of around 800 mm^2, which poses a huge challenge to controlling the fabrication cost. Under such low utilization and high chip fabrication cost, the capital expenditures will account for a significant portion of the TCO. According to our evaluation, at 50% utilization, an A100 GPU purchased at the manufacturer's suggested retail price has a TCO that is 97.7% CapEx. Even if people tape out their own GPUs, the CapEx percentage can still be as high as 58.7%. Reducing chip costs will be the key to lowering the TCO per generated token. 
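The bandwidth claim above follows from simple roofline arithmetic: dividing the quoted 112 TFLOPS by the quoted 85 TB/s implies an operational intensity of roughly 1.3 FLOPs/byte for these kernels. The sketch below, which assumes the standard ~900 GB/s HBM2 figure for the V100, makes that arithmetic explicit.

```python
# Roofline-style bandwidth requirement: BW = peak_FLOPs / operational_intensity.
peak_flops   = 112e12      # V100 fp16 tensor-core peak, as quoted above
hbm_bw_bytes = 900e9       # V100 HBM2 bandwidth (~900 GB/s, standard spec; assumed here)

def required_bw(peak_flops, op_intensity_flops_per_byte):
    """Bandwidth needed to keep the compute units busy at a given operational intensity."""
    return peak_flops / op_intensity_flops_per_byte

for oi in (1.0, 1.3, 2.0):
    bw = required_bw(peak_flops, oi)
    print(f"OI = {oi:>3} FLOPs/B -> need {bw/1e12:5.1f} TB/s ({bw/hbm_bw_bytes:4.0f}x HBM)")
# At ~1.3 FLOPs/byte this is ~85 TB/s, i.e. roughly 100x the HBM bandwidth, as stated.
```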
§.§ Key Architectural Solutions §.§.§ Design Tradeoffs Driven by TCO per Performance SRAM has a much higher bandwidth and much lower access energy than external memory such as HBM, but has lower density. To address the memory bandwidth bottleneck, we want to argue that from a better TCO per performance perspective, buffering all model parameters and intermediate data (such as K, V) in on-chip memory such as SRAM is a preferred design option for our case. Consider a language model accelerator with die area (area) and average power (power) that provides a total memory bandwidth of BW. The accelerator's TCO can be approximately modeled as α · area + β · power, where α and β are constant coefficients. The chip cost actually grows superlinearly with the area. We use a linear model here for simplicity, which will not affect our subsequent analysis. As discussed earlier, since LLM inference is usually memory bandwidth bounded, throughput (token/sec) will be proportional to the bandwidth BW. As a result, we have TCO/Token ∝ α area/BW + β power/BW. To optimize the TCO/Token, we want a smaller area/BW and power/BW. Assuming that other modules in the chip remain unchanged, the area, read energy, and bandwidth of the parameter memory will be the key factors that affect this figure of merit. We list typical memory blocks of DDR4, HBM2e and SRAM in the table in Fig. <ref>, and plot the area per total bandwidth and the read energy per total bandwidth when we use these blocks to store model parameters. Compared with DDR4 and HBM2e, on-chip SRAM has much smaller area/BW (mm^2/GB/s) and power/BW (pJ/bit/GB/s). Although the same memory technology can have blocks of different sizes, which affects the bandwidth and read energy, the trend shown in Fig. <ref> remains the same. For example, SRAM has an order-of-magnitude better area per bandwidth and read energy per bandwidth than DRAM. Thus, although buffering all parameters on-chip requires more silicon, it will likely reduce the TCO per generated token. Recently, there has been an industry trend to deploy more on-chip memory on deep learning accelerators to reduce the excessive off-chip memory access. Microsoft's Brainwave <cit.> pins DNN model weights in distributed on-chip SRAM. Google's TPUv4i <cit.> contains 144 MB SRAM, and GraphCore's IPU 2 <cit.> has 896 MB SRAM. Adding more on-chip memory to provide higher bandwidth is easier, cheaper, and lower power than doubling the HBM bandwidth, which benefits the TCO-per-performance design objective. §.§.§ Design Choice: Chiplet for Reducing Fabrication Cost. An extreme case of adding on-chip memory is to go wafer-scale. Cerebras WSE-2 <cit.> is a 46,255 mm^2 chip with 40 GB on-chip memory. The niche wafer-scale designs are expensive to manufacture, resulting in limited potential for TCO reduction. Instead, we believe that chips with large SRAM should remain at a relatively small size, to reduce the fabrication cost. We argue that chiplets are a key method for managing the TCO of LLM supercomputers. Chiplet technology has recently become a new trend in the industry. It breaks down a traditional monolithic silicon chip into multiple small chiplets and integrates them into a single package. This approach improves fabrication yield, reduces manufacturing costs and enables die-level reuse for different system scales. For TSMC 7nm technology with a defect density of 0.1 per cm^2, the price per mm^2 of a 750 mm^2 chip is about twice that of a 150 mm^2 chip. 
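A minimal sketch of the TCO/Token proxy derived above; the relative numbers only encode the qualitative statement in the text that SRAM is roughly an order of magnitude better than DRAM in both area per bandwidth and read energy per bandwidth, and are not the values in Fig. 4.

```python
# TCO/Token proxy: alpha*area/BW + beta*power/BW (lower is better).
def tco_per_token_proxy(area_per_bw, energy_per_bw, alpha=1.0, beta=1.0):
    """Both terms are per unit of parameter-read bandwidth, in normalized units."""
    return alpha * area_per_bw + beta * energy_per_bw

dram_like = tco_per_token_proxy(area_per_bw=10.0, energy_per_bw=10.0)  # ~10x worse on both axes (per the text)
sram_like = tco_per_token_proxy(area_per_bw=1.0, energy_per_bw=1.0)
print(f"DRAM-like / SRAM-like proxy ratio ~ {dram_like / sram_like:.0f}x")
```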
It is a currently available commodity technology that all architects can use, aligning with our TCO/token optimization focus. One potential drawback of chiplet design is the high inter-chiplet communication overhead. Studies on GPUs and TPUs have shown that proper mapping strategies (e.g., tensor and pipeline parallelism <cit.>) can effectively reduce the inter-node communication. Additionally, by storing all parameters and keeping the KV-cache in SRAM on each chiplet, NUMA issues are largely avoided since access times to local memory structures become much more uniform. To find the optimal mapping strategy for Chiplet Cloud, a software-hardware co-design methodology is essential (Section 4). §.§ Our Proposal: Chiplet Cloud Architecture Combining the two solutions above, we propose a chiplet-based cloud-scale system design for LLM inference, called Chiplet Cloud. Our general Chiplet Cloud architecture is shown in Figure <ref>, which includes the high-level abstract architecture at different levels, from processing unit (PU) to chiplet module, server, and cloud. Starting from the bottom left is the heart of Chiplet Cloud, a language model accelerator chiplet module that scales to data center inference. The module adopts a frequently-used PU-based architecture, with additional attention, nonlinear function, and normalization units. The Chiplet Cloud PU includes a local memory for holding parameters, multiple parallel MAC (multiply-accumulate) units, and an accumulation buffer. Model parameters and KVs are stored in a large on-chip global memory, as shown in Figure <ref>. No external memory like DRAM is required. The input buffer reads activations from another chiplet or the host through a chip-to-chip (C2C) router and broadcasts them to all PUs. Meanwhile, PUs load parameters into local memories. The PUs employ an output stationary dataflow pattern <cit.> with spatial activation reuse and temporal partial sum accumulation reuse. This dataflow was selected as FC layers are the most compute-intensive operation in the transformer decode layer and cannot exploit any weight reuse unless the batch size is greater than 1. The design maximizes parallelization and eliminates the need for inter-PU communication, reducing on-chip network overhead. The output of a PU either flows into the attention unit and then goes back to the PU array, or feeds into nonlinear units such as GELU and normalization and is then sent off-chip. Figure <ref> also shows one important feature of our design—a single chiplet module is a package, and multiple chiplets are connected through the board. Conventionally, chiplets are integrated into a single package via a package-level solution such as a silicon interposer (e.g., <cit.>) or organic substrate (such as Simba <cit.> and AMD EPYC processors <cit.>) for better inter-chiplet communication. However, the package-level integration brings design challenges and overhead. Silicon interposers have a limited signal length. The organic substrate solution brings design challenges in package-level routing and power delivery <cit.>. Instead, we use board-level chiplets in our Chiplet Cloud design according to the communication requirements. With an optimized workload partitioning and mapping strategy, we can reduce data traffic across chiplets. 
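A toy NumPy sketch of the output-stationary FC dataflow described above: the activation is broadcast to every PU, each PU holds a slice of the weight matrix and accumulates its own partial sums locally, so no inter-PU communication is required. Sizes and the row-wise partitioning are illustrative, not the paper's configuration.

```python
# Output-stationary matrix-vector product partitioned across PUs.
import numpy as np

d_in, d_out, n_pus = 512, 512, 8            # toy sizes
rng = np.random.default_rng(1)
W = rng.standard_normal((d_out, d_in))      # FC weights, partitioned row-wise across PUs
x = rng.standard_normal(d_in)               # activation broadcast to every PU

def pu_compute(w_slice, x_bcast):
    """One PU: multiply-accumulate its weight rows against the broadcast activation."""
    acc = np.zeros(w_slice.shape[0])        # output-stationary accumulation buffer
    for j in range(w_slice.shape[1]):       # temporal reuse of partial sums
        acc += w_slice[:, j] * x_bcast[j]
    return acc

slices = np.array_split(W, n_pus, axis=0)
y = np.concatenate([pu_compute(s, x) for s in slices])
assert np.allclose(y, W @ x)                # matches the reference matrix-vector product
```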
Chiplets have individual packages and are connected together via a 2D torus on-PCB network, which is able to accommodate the potentially different mapping strategies for different models and still reuse the same server. Compared to conventional package-level chiplet integration, the board-level chiplet architecture eliminates the cost of advanced packaging. Each Chiplet Cloud server contains a printed circuit board (PCB) with multiple chiplets, a controller and an off-PCB interface. The controller, which can be an FPGA or a microcontroller, dispatches remote procedure calls (RPCs) from the off-PCB interface to all chiplets via the on-PCB 2D torus network. In the analysis of this paper, we model the use of ground-referenced signaling (GRS) links <cit.> for inter-chiplet communication. Each chiplet module connects with adjacent chiplets over a full-duplex data link capable of 25 GB/s with up to 80 mm reach over the PCB. Other candidate chip-to-chip interfaces include high-speed PCIe, which has been widely used as an interconnect for many deep learning chips <cit.>, or custom-designed links such as Google TPU's Inter-Core Interconnect <cit.> and Graphcore's IPU-links <cit.>. Off-PCB interfaces could be 10/100 Gigabit Ethernet or InfiniBand, enabling communication between adjacent servers. §.§ Design Space Discussion The design space of Chiplet Cloud spans multiple cross-stack aspects that affect end-to-end performance, including (1) Chiplet Module Size: small chips benefit from higher yields while incurring more per-chip overhead; (2) Per Chiplet Memory Size: more memory per chip means fewer chips are required but fewer FLOPS per chip; (3) Silicon Per Server: more silicon per server reduces the communication between servers, but it is limited by the chiplet size, power and the cooling system; and (4) Software Mapping: the tradeoff between tensor model and pipeline model parallelism affects utilization and interconnect data communication. Since all of these aspects are highly coupled, a comprehensive design methodology is critical to optimize the end-to-end performance and TCO. § DESIGN METHODOLOGY: CHIPLET CLOUD A key challenge for optimizing the TCO of large scale-out systems is balancing the cost per operation and the watts per operation. To address this challenge, we propose a design methodology that can accurately explore the large design space of the Chiplet Cloud architectures and perform a cost-performance analysis of serving large generative language models. This design methodology, shown in Figure <ref>, is a two-phase software-hardware co-design methodology: first a hardware exploration phase followed by a software evaluation phase. The methodology uses a brute-force exploration, with feasibility checks and pruning throughout to maintain reasonable runtime. This methodology is unlike traditional software-hardware co-design methodologies which typically start with extracting software characteristics and then derive hardware parameterizations based on preconceived assumptions about best practices. The design space for Chiplet Cloud is very large with many hardware parameterizations that need a careful balance to optimize for TCO/token. This makes traditional software-hardware co-design ineffective at finding truly optimal hardware design points. 
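A schematic, runnable skeleton of the two-phase methodology just described; every cost and feasibility model in it is a toy placeholder, and only the overall structure (brute-force enumeration with pruning, then mapping-aware evaluation) mirrors the text.

```python
# Two-phase search skeleton: hardware exploration with pruning, then software evaluation.
from itertools import product

def feasible(mem_mb, pus):
    """Phase-1 pruning stand-in: toy area and power-density checks."""
    area_mm2 = mem_mb * 0.7 + pus * 0.002          # toy area model
    tdp_w    = pus * 2e9 * 1e-12                   # 2 GFLOP/s per PU, toy W/FLOP
    return area_mm2 <= 400 and tdp_w / area_mm2 <= 1.0

def phase1_hardware():
    """Enumerate chiplet candidates, keep only ones passing the feasibility checks."""
    return [(m, p) for m, p in product((64, 128, 256), (8192, 16384, 32768)) if feasible(m, p)]

def phase2_software(candidates, model_mb=350_000, mappings=((8, 96), (32, 96), (64, 48))):
    """For each (server, TP x PP mapping) pair whose shard fits in SRAM, emit a toy cost point."""
    points = []
    for (mem_mb, pus), (tp, pp) in product(candidates, mappings):
        if model_mb / (tp * pp) <= mem_mb:         # weights must fit on-chip per chiplet
            points.append(((mem_mb, pus), (tp, pp), tp * pp * mem_mb))   # toy TCO proxy
    return points

print(phase2_software(phase1_hardware()))
```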
To avoid the need to bake in these assumptions, our approach starts with a feasibility-constrained design space exploration to generate every possible hardware design and then purge designs with power budgets, thermal outputs, or physical layouts that cannot be realized. Only then do we start evaluating hardware design points against specific generative LLMs to find a co-optimized Chiplet Cloud. §.§ Phase 1: Hardware Exploration The hardware exploration phase of the Chiplet Cloud design methodology (as shown in Figure <ref>(a)) is a bottom-up, LLM-agnostic design space exploration resulting in thousands of varying Chiplet Cloud server designs. The exploration starts with a flexible architectural description model of the Chiplet Cloud accelerator module described in more detail in Section <ref>. This flexible architecture allows users to scale the on-chip memory capacity and the number of processing units within the constraints determined by microarchitectural limitations, resulting in candidate architectures with varying peak power (TDP), peak performance (TFLOPs), and on-chip memory capacity (MBs). Candidate architectures are then evaluated against a set of server-level constraints including the max power budget and physical layout constraints. These constraints are based on the limitations of building a traditional 1U 19-inch rack-mount server as found in ASIC Cloud <cit.>. The floorplan of the Chiplet Cloud server is derived from the optimal server floorplan found leveraging computational fluid dynamics simulations of airflow cooling of a 1U server. The floorplan is 8 lanes of silicon chips with physical ducting separating each lane to reduce turbulence. Each candidate Chiplet Cloud accelerator is evaluated with these constraints to determine the best die size before undergoing thermal power analysis. Thermal analysis includes determining the total power dissipation per lane that can be achieved. A key factor to consider is the size of the chiplet die as this impacts the heatsink selection and thermal distribution throughout the server. Smaller chiplets allow for greater cooling efficiency within a lane by physically distributing the heat more evenly, thus reducing hotspots. The constraint analysis for Chiplet Cloud server designs as well as the overall TCO estimation that is performed during the second phase of the design methodology are heavily reliant on accurate power, area and performance estimation modeling of the accelerator module. Area Estimation. The area estimation model for the Chiplet Cloud accelerator is derived from actual 7nm ASIC tapeouts found in the literature <cit.>, giving us a model for the bytes/um^2 and FLOPS/um^2 for the on-chip memory and PU array respectively. The silicon area is likely dominated by compute and memory device area. However, for a flexible accelerator architecture, auxiliary components of the chip can start to have a significant area overhead when the amount of memory and compute is small. Therefore, we also include the area overhead for the attention unit, activation unit, IO, on-chip networks, and control logic in the model. Power Estimation. The power estimation model is also derived from the actual 7nm ASIC tapeout numbers found in the literature <cit.>. This gives us the TDP for running these 7nm ASICs at 100% utilization. 
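A sketch of the chiplet area and TDP estimators described in this phase; the per-byte and per-FLOP densities and the W/FLOP figure below are illustrative placeholders standing in for the values calibrated against 7nm tapeouts, so only the structure of the model is meant to be faithful.

```python
# Chiplet area and TDP estimators (structure only; coefficients are placeholders).
def chiplet_area_mm2(sram_bytes, peak_flops,
                     bytes_per_mm2=1.5e6, flops_per_mm2=2.5e11, overhead_mm2=10.0):
    # memory area + PU-array area + fixed overhead (attention/activation units, IO, NoC, control)
    return sram_bytes / bytes_per_mm2 + peak_flops / flops_per_mm2 + overhead_mm2

def chiplet_tdp_w(peak_flops, w_per_flop=1e-12):
    return peak_flops * w_per_flop            # TDP at 100% utilization

area = chiplet_area_mm2(sram_bytes=128 * 2**20, peak_flops=16e12)
tdp  = chiplet_tdp_w(peak_flops=16e12)
print(f"area ~ {area:.0f} mm^2, TDP ~ {tdp:.0f} W, "
      f"density ~ {tdp/area:.2f} W/mm^2 (must stay under the 1 W/mm^2 cap)")
```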
We normalize this 100%-utilization TDP to a W/FLOP figure and use that value in combination with the parameterized compute power of the flexible architecture to determine a 100% utilization TDP estimation for our Chiplet Cloud accelerator. Combined with the area model, we limit the power density to be no more than 1 W/mm^2; however, during server design optimization we will further refine the peak power density limitations based on the full-server thermal analysis. Performance Modeling. The performance model for the Chiplet Cloud accelerator used during hardware exploration is the peak performance of the chiplet and is based on the number of PUs which is determined by the compute parameterization of the architecture. Every PU in the chiplet has a peak performance of 2 operations per cycle (multiply-accumulate) and is estimated to run at 1 GHz, giving us 2 GFLOPS per PU. During the software evaluation phase of the design methodology, we will further refine the performance as a function of the software kernel, microarchitectural utilization and IO communication. §.§ Phase 2: Software Evaluation The second phase of the design methodology models the execution time of specific workloads across the hardware design points and searches for optimal Chiplet Cloud architectural configurations. The software evaluation flow is shown in Figure <ref>(b). The first step is the software optimizer which takes the Chiplet Cloud server designs from phase 1 and a generative LLM workload and performs several optimizations including tensor parallelism and pipeline parallelism with microbatch tuning <cit.>. The optimizer will first look at the hyperparameters of the LLM, such as the model dimension d_model, number of layers, context length, attention mechanism type (multi-head or multi-query), as well as the expected batch size. Then it will decompose the full model into a collection of small kernels that can be mapped to the individual chiplets throughout the system. In cases where the model cannot fit into a single server, the server will be replicated to scale up the entire system until there are enough resources to execute the application. This results in a system mapping which has the portion of the model that each chiplet in the whole system will be responsible for executing, based on the chosen tensor and pipeline parallelism optimizations. There is also a chiplet memory profile and a chiplet compute profile for the portion of the model that will be running on the individual chiplet, which allow us to accurately model the end-to-end performance of the full system. The memory profile contains information about the required memory for weights, activations, and the KV cache, while the compute profile contains the size and operations that the chiplet will need to perform, including reduction operations caused by tensor parallelism optimizations. The end-to-end performance model for the Chiplet Cloud system starts with understanding the compute latency inside each chiplet. This is based on an analytical analysis of the dataflow and compute latency of the mapped portion of the model as described by the compute profile. The compute profile defines which operations each chiplet will perform and the size of those operations. We analyze the size and shape of these operations as they would be executed at the microarchitectural level to get a kernel compute latency and thus a kernel-level utilization.
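A similarly hedged sketch of the power and peak-performance estimates is shown below; the watts-per-FLOP constant is an assumed placeholder rather than the tapeout-derived number, and the utilization scaling mirrors the kernel-level analysis just described.

```python
# Illustrative sketch of the chiplet power / performance model.
# W_PER_FLOP is an assumed placeholder, not the tapeout-derived constant.
W_PER_FLOP = 0.5e-12   # watts per (FLOP/s) at 100% utilization, assumed
PU_GFLOPS = 2.0        # 2 ops/cycle (multiply-accumulate) at the estimated 1 GHz clock

def peak_flops(num_pus: int) -> float:
    """Peak chiplet performance in FLOP/s, set by the number of PUs."""
    return num_pus * PU_GFLOPS * 1e9

def tdp_watts(num_pus: int) -> float:
    """TDP at 100% utilization (power-density caps are checked separately)."""
    return peak_flops(num_pus) * W_PER_FLOP

def avg_kernel_power(num_pus: int, kernel_utilization: float) -> float:
    """Average power of a kernel, obtained by scaling TDP by its utilization."""
    return tdp_watts(num_pus) * kernel_utilization

print(peak_flops(8192) / 1e12, "TFLOPS peak,", avg_kernel_power(8192, 0.6), "W avg")
```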
This kernel level utilization is then used to scale the TDP of the chiplet to estimate the average kernel computational power. Since the model has been split up across all of the chiplets, we must also model the data communication latency between the chiplets including all-reduce operations between collaborating chiplets that are working on operations that have been optimized with tensor parallelism. As mentioned in Section <ref>, each chiplet is equipped with a 25 GBps link to all the adjacent chiplets. However, the system mapping might have two chiplets that must communicate that exist on different servers. When this occurs, the performance of this link goes over a 10 Gbps Ethernet connection. This bandwidth reduction discourages many system mappings, particularly those in which all-reduce operations must cross over server boundaries. These design points are generally removed during the Pareto optimization search given their large performance penalty. Improved Software-Hardware Co-Design Model. The TCO model is a refined version of the model by Barroso et al. <cit.>, which includes both capital expenditures (CapEx) and operating expenses (OpEx) from the system as well as the datacenter hosting the system. The CapEx for our server includes the silicon die cost, package cost, PCB cost, power supply unit cost, DC/DC converter costs, heatsink cost, fan costs, Ethernet controller cost, and control processor cost. The OpEx is calculated based on the power consumption of the system. Based on the system TDP and utilization, the full system average power consumption is used to determine the power draw from the silicon dies of the Chiplet Cloud accelerators. Additionally, the power cost due to inefficiencies of on-server DC/DC converters and the server power supply unit is also taken into consideration, as is the power draw of the control processor, Ethernet controller and server fans. To estimate the die cost, we first calculate the number of fully patterned dies per wafer (DPW). This is the number of rectangular dies with the given die size dimensions that we can slice out of a traditional 300mm circular wafer. Cost per die is then calculated as: cost_die = (cost_wafer/DPW + cost_test)/Y_die, where cost_wafer is the wafer price, cost_test is the testing cost, and Y_die is the die yield. We use the classical negative binomial model for yield, which is as follows: Y_die = (1 + A·D_0/α)^-α, where A is the die area, D_0 is the defect density and α is the cluster parameter. Since manufacturing yields drop with chip area, it makes economic sense to design smaller chips. For each feasible server design that we found in the hardware exploration phase of the design methodology, we will profile the memory, compute and power for a given software optimization found from the original workload specification. With the memory profile, we can determine if we have enough storage for the software's per-chip kernel with respect to the parameters, activation and KV cache size. If we do not have enough, then we consider this server configuration a non-feasible point to run this specific workload-software optimization combination. The number of servers is then selected to ensure that we have enough resources to fit the entire model given the per-chip memory profile. Additionally, compiling a network topography and assigning bandwidth numbers for all chiplet-to-chiplet communication links in the system depend on whether the communication is on-server or inter-server.
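The die-cost and yield equations above translate directly into a short calculation; in the sketch below, the wafer price, test cost, defect density and cluster parameter are illustrative assumptions rather than the values used in our cost model.

```python
import math

def dies_per_wafer(die_w_mm: float, die_h_mm: float, wafer_d_mm: float = 300.0) -> int:
    """Fully patterned rectangular dies per circular wafer (area term minus edge loss)."""
    area = die_w_mm * die_h_mm
    return int(math.pi * (wafer_d_mm / 2) ** 2 / area
               - math.pi * wafer_d_mm / math.sqrt(2 * area))

def die_yield(area_mm2: float, d0_per_mm2: float = 0.002, alpha: float = 3.0) -> float:
    """Classical negative binomial yield model: Y = (1 + A*D0/alpha)^-alpha."""
    return (1 + area_mm2 * d0_per_mm2 / alpha) ** (-alpha)

def cost_per_die(die_w_mm: float, die_h_mm: float,
                 wafer_cost: float = 9000.0, test_cost: float = 50.0) -> float:
    """cost_die = (cost_wafer / DPW + cost_test) / Y_die."""
    dpw = dies_per_wafer(die_w_mm, die_h_mm)
    return (wafer_cost / dpw + test_cost) / die_yield(die_w_mm * die_h_mm)

# A 4x larger die costs far more than 4x as much once yield is accounted for:
print(round(cost_per_die(14, 14)), "vs", round(cost_per_die(28, 28)), "USD per die")
```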
The network topography, multi-server architecture and chip compute profile are then all used to find the full end-to-end inference runtime of the workload. This performance along with the multi-server architecture and chip power profile are then fed into our TCO estimation engine where we can compute all TCO-related metrics such as TCO/Token. §.§ Generalizing the Design Methodology While this work is focused on trying to find cloud-scale architectures with best-in-class TCO/Token performance, the methodology of designing scale-up cloud systems is still applicable to existing ASIC architectures or architectures designed for programmable devices such as CGRAs or FPGAs. Given an appropriate power, performance and area estimation model for the accelerator module, this methodology is applicable. § CASE STUDIES To evaluate our architecture and design methodology, we perform case studies of Chiplet Cloud on four language models at different scales, including GPT-2 <cit.> and GPT-3 <cit.> proposed by OpenAI, Turing NLG (T-NLG) <cit.> proposed by Microsoft and NVIDIA, and PaLM <cit.> proposed by Google. Dimensions of these models are shown in Table <ref>. Note that PaLM uses multi-query attention, where key and value are shared across the attention heads, which reduces the size of the KV cache by a factor of the number of heads. All Chiplet Cloud chip designs utilize TSMC 7nm technology in this paper. §.§ Design Space Exploration We did a thorough design exploration on each model. For each one, we explored 3 different context length scenarios: 256, 2048 and 8192. Normally, the longer the context length, the better the model performance, but it requires more memory and more computation. For the analysis below, the context length is 2048 if not explicitly mentioned. We also explore input batch sizes from 1 to 1024. In total we generate over 10M valid design points. Each design point combines the result from both hardware exploration and software evaluation, which includes hardware design (chip and server), software mapping (tensor parallelism size, pipeline parallelism size, batch size and micro-batch size), cost (power, CapEx and TCO) and performance (latency and throughput), etc. Figure <ref> shows all valid design points for 4 models. The color indicates the batch size. The rich information from design space exploration helps designers find the most suitable design given any type of constraint and any optimization goal. In Table <ref>, we show the latency and TCO/Token optimal designs for four models. Each design is optimized just for the model. Latency-optimal designs always prefer a batch size of 1, since it requires fewer operations. These designs also tend to use large chips with higher TOPS to minimize inter-node communication and computation latency. At the same time, the TCO per token of these designs is high, which mainly depends on utilization. It is well known that LLMs have low hardware utilization when the batch size is small. For the TCO-optimal designs, the batch sizes are much larger (64 to 128). A large batch size is good for utilization while requiring a larger KV cache and thus more silicon. This means we either need bigger chips which greatly increase CapEx, or more chips which generate more inter-node traffic and hurt throughput. An appropriate batch size to balance each factor is essential to achieve good TCO/Token, but can be difficult to find.
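The tension between batch size, context length and attention type described above can be seen from a back-of-the-envelope KV-cache estimate; the sketch below uses the standard cache-size formula with publicly reported GPT-3-like and PaLM-like hyperparameters, and is not our actual memory profiler.

```python
def kv_cache_gib(n_layers: int, d_model: int, n_heads: int, context: int,
                 batch: int, bytes_per_elem: int = 2, multi_query: bool = False) -> float:
    """KV cache size in GiB: 2 tensors (K and V) per layer, per token, per batch."""
    width = d_model // n_heads if multi_query else d_model  # multi-query keeps one K/V head
    return 2 * n_layers * context * batch * width * bytes_per_elem / 2**30

# Multi-head (GPT-3-like: 96 layers, d_model 12288, 96 heads) vs
# multi-query (PaLM-like: 118 layers, d_model 18432, 48 heads), batch 64, context 2048:
print(round(kv_cache_gib(96, 12288, 96, 2048, 64)), "GiB vs",
      round(kv_cache_gib(118, 18432, 48, 2048, 64, multi_query=True)), "GiB")
```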
The 8 optimal design points all have different chip and server designs, and different mapping strategies, demonstrating the importance of our design methodology—every aspect of the system affects performance and cost, and different workloads require different optimizations. §.§ Design Insights We first study how chip size affects TCO and performance. Figure <ref> shows the results of GPT-3 in two different scenarios. On the left is how we should choose die size to lower TCO for a given minimum throughput requirement. Compared to chips over 700mm^2, which is the size of many traditional large monolithic chips, a chip around 200 mm^2 reduces TCO by about 1.5× and still meets the throughput constraint. We can also see that CapEx exceeds 80% of TCO for most designs. The figure on the right shows the best latency for chips of different sizes given the TCO budget. Except for the smallest chip size, all others achieve similar latencies. This shows that a proper chip size can effectively reduce TCO without compromising performance. We further investigate how the batch size affects TCO/Token. Figure <ref> shows the results on 4 different models and 3 different context lengths. When the batch size is increased from 1, TCO/Token becomes better because the compute unit utilization increases. Larger batch sizes also provide more opportunities to exploit pipeline parallelism to improve overall system utilization. As the batch size continues to increase, the utilization will reach a peak. For the traditional multi-head models, more silicon is required for the KV cache at large batch sizes and long contexts, which significantly increases TCO/Token. Chiplet Cloud supports batch sizes up to 64 with near-optimal TCO/Token for these models. For the multi-query model PaLM, Chiplet Cloud supports batch sizes up to 1024 with near-optimal TCO/Token. The cost of longer contexts is negligible, especially when the batch size is not too large. Lastly, we study how the mapping strategy affects TCO/Token for a given batch size. As shown in Figure <ref>, when the number of pipeline stages (i.e. the pipeline parallelism size) is close to the batch size, the system utilization is the largest and the TCO/Token is optimal. When these two numbers are similar, the system can take full advantage of pipeline parallelism with a micro-batch size of 1, so the number of micro-batches is also close (if not equal) to the number of pipeline stages <cit.>. This helps balance the latency of micro-batches passing through all pipeline stages and pipeline stages completing all micro-batches. § EVALUATION In this section, we evaluate the performance and cost of Chiplet Cloud for serving large language models. The key metric we are targeting is TCO/Token. TCO/Token is measured as the cost per generated token and is the key factor in the ability to democratize LLMs. One of the most popular business models for generative LLMs is also to charge users per generated token. Lower TCO/Token not only adds more profit margin, but also makes LLMs more approachable. We compare Chiplet Cloud to state-of-the-art GPU and TPU cloud implementations. We also demonstrate the benefits of our architectural design choices and evaluate the flexibility of Chiplet Cloud architectures. §.§ Comparison with GPUs and TPUs In Table <ref>, we compare Chiplet Cloud versus state-of-the-art GPU <cit.> and TPU <cit.> implementations. Neither work is specifically optimized for TCO/Token.
For our comparison, we choose the throughput-optimal result for the GPU, and the utilization-optimal result for the TPU, which are key indicators that a design is close to TCO/Token optimal. TCO for GPUs and TPUs is based on the best cloud rental price we could find <cit.>. Compared to A100 GPU and TPUv4 clouds, Chiplet Cloud achieves 94× and 15× improvements in TCO/Token, respectively. §.§ Design Choice Sanity Check To demonstrate the benefits of our design choices, including fitting all parameters in on-chip SRAM and using many smaller chiplet accelerators, we compare Chiplet Cloud with two baseline systems. The first system (HBM) is a conventional HBM-based accelerator system. We assume each chip has an 8GB HBM die that supports up to 900GB/s bandwidth. All model parameters and KV cache are on HBM. The second system (Large Chip) is Chiplet Cloud without chip size moderation. Like a traditional large monolithic chip, it tries to put as much memory and as many processing units as possible on the chip. We pass the hardware specs of both systems to our design methodology and search for the optimal mapping strategies. Figure <ref> shows the optimal TCO/Token of both systems on 4 models. All numbers are normalized to Chiplet Cloud. On average, Chiplet Cloud outperforms the Large Chip system by 2.49× in TCO/Token and the HBM system by 1.24× in TCO/Token. §.§ Generic Chiplet Cloud Servers Up until now, we have only shown Chiplet Cloud optimized for a single workload (one language model with a fixed context length). However, the separation of hardware exploration and software evaluation in our design methodology makes it possible to find suitable server designs for multiple workloads simultaneously, thus increasing the flexibility of the resulting Chiplet Cloud. This is done by applying a performance-cost analysis of each Chiplet Cloud server design across multiple workloads simultaneously to find design points that are in aggregate TCO/Token optimal. During hardware exploration, we generate 1073 valid unique server designs. In Figure <ref>, we list the top 10 servers that achieve the best aggregated performance (TCO/Token) on different models (left), and on the same model but with different context lengths (right). The performance shown for each server across all workloads is normalized to the aggregated performance achieved by the optimal server for each individual workload. If a server achieves the optimal performance on all 4 workloads, it would have an aggregated performance of 4.0. Compared to the aggregated performance of the individually optimized servers, the best single-server TCO/Token across the 4 LLMs only dropped 12% while the best single-server TCO/Token across the 3 context lengths only dropped 9%. § RELATED WORK Training and serving large language models on GPUs and TPUs has attracted a lot of attention in recent years. Megatron-LM <cit.> proposes a tensor partitioning strategy that reduces the inter-node communication. <cit.> improves pipeline parallelism and combines it with Megatron-LM and achieves high aggregate throughput. DeepSpeed <cit.> proposes a multi-GPU training and inference solution to minimize latency while maximizing throughput. PaLM <cit.> trains a 540B parameter model on 6144 TPUv4 chips using Pathways <cit.>. <cit.> develops an analytical model to select the mapping strategy optimized for inference on TPUv4. Many ASIC accelerators for transformer NNs have been proposed. SpAtten <cit.> exploits the token and head sparsity and quantization opportunities in the attention block.
ELSA <cit.> presents an approximation scheme for the attention mechanism. EdgeBERT <cit.> leverages dynamic voltage-frequency scaling based on early exit prediction of ALBERT <cit.>. <cit.> designs a transformer accelerator in 28nm using approximate computing and sparsity speculation. These designs focus on optimizing the attention block, which is usually not the bottleneck for LLMs. To exploit chiplet technology on deep learning accelerators, Simba <cit.> implements the first 36-chiplet prototype multi-chip-module (MCM) system for deep learning inference. COMB-MCM <cit.> uses chiplets to improve the scalability of their computing-on-memory-boundary NN processors. NN-Baton <cit.> proposes a framework to explore the chiplet design space for convolutional NNs. These works focus on small-scale optimizations—chiplets in a single package—and do not demonstrate scalability to support LLMs. § CONCLUSION This paper presents Chiplet Cloud, a chiplet-based ASIC AI supercomputer architecture that achieves unprecedented TCO/Token for serving large generative language models. Chiplet Cloud fits all model parameters inside the on-chip SRAMs to eliminate bandwidth limitations, moderates the die size to improve system cost, and leverages software mappings to overcome the data communication overhead. We propose a comprehensive design methodology that accurately explores a spectrum of major design trade-offs in the joint hardware-software space and generates a detailed performance-cost analysis on all valid design points. With this methodology, we design Chiplet Cloud systems for four language models of different sizes and achieve 94× and 15× better TCO/Token compared to A100 GPU and TPUv4, respectively. We believe Chiplet Cloud to be the best solution to democratize modern and future large generative language models.
http://arxiv.org/abs/2307.01751v1
20230704144303
Shapes of the Cosmological Low-Speed Collider
[ "Sadra Jazayeri", "Sébastien Renaux-Petel", "Denis Werth" ]
hep-th
[ "hep-th", "astro-ph.CO", "gr-qc" ]
Shapes of the Cosmological Low-Speed Collider Sadra Jazayeri,[jazayeri@iap.fr] Sébastien Renaux-Petel,[renaux@iap.fr] and Denis Werth[werth@iap.fr] Sorbonne Université, CNRS, UMR 7095, Institut d'Astrophysique de Paris, 98 bis bd Arago, 75014 Paris, France Abstract Massive particles produced during inflation leave specific signatures in soft limits of correlation functions of primordial fluctuations. When the Goldstone boson of broken time translations acquires a reduced speed of sound, implying that de Sitter boosts are strongly broken, we introduce a novel discovery channel to detect new physics during inflation, called the cosmological low-speed collider signal. This signal is characterised by a distinctive resonance lying in mildly-soft kinematic configurations of cosmological correlators, indicating the presence of a heavy particle, whose position enables us to reconstruct its mass. We show that this resonance can be understood in terms of a non-local single-field effective field theory, in which the heavy field becomes effectively non-dynamical. This theory accurately describes the full dynamics of the Goldstone boson and captures all multi-field physical effects distinct from the non-perturbative particle production leading to the conventional cosmological collider signal. As such, this theory provides a systematic and tractable way to study the imprint of massive fields on cosmological correlators. We conduct a thorough study of the low-speed collider phenomenology in the scalar bispectrum, showing that large non-Gaussianities with new shapes can be generated, in particular beyond weak mixing. We also provide a low-speed collider template for future cosmological surveys. § INTRODUCTION The observed cosmological structures are believed to be of primordial origin. Thus, analysing correlation functions of density inhomogeneities can provide valuable insights into the physics of the early universe, possibly beyond the Standard Model. While the two-point correlation function of scalar fluctuations has been well measured, higher-point correlators are the main target of current and upcoming cosmological surveys, see e.g. <cit.> for a recent review and the references therein. The discovery of primordial non-Gaussianities would represent a tremendous achievement both on theoretical and observational grounds, as they provide a probe of interactions during inflation. Such a detection may enable us to uncover new degrees of freedom around the Hubble scale, which can be as high as 10^14 GeV. Additionally, this offers the prospect of identifying mass spectra and spins, characterising interactions, and inferring dynamical properties such as sound speeds or, more generally, dispersion relations. All this information would help us establish a “standard model of inflationary cosmology”. The primordial fluctuations in any theory of inflation are at least of two types: the Goldstone boson associated with the spontaneous breaking of time translations π,[As for any spontaneously broken symmetry, there is a massless particle—the Goldstone boson—associated with this symmetry breaking. Its Lagrangian can be constructed without any knowledge of the specific mechanism that led to this breaking. This places inflaton fluctuations on the same footing as e.g. pions of the chiral Lagrangian, phonons of solids, or longitudinal polarizations of the W and Z bosons.
The Goldstone boson π is related to the observed curvature perturbation fluctuation by ζ = -H π + 𝒪(π^2).] and the transverse dynamical part of the metric γ_ij. The field π being massless,[More precisely, the field π is exactly massless in the decoupling limit, M_pl→∞ and Ḣ→ 0 while keeping M_pl^2Ḣ fixed.] it is fully characterised at leading order in gradient expansion by its speed of sound c_s, that is, its dispersion relation reads ω = c_s k_p + 𝒪(k_p^2), where k_p = k/a is the physical momentum. As a consequence of π non-linearly realising Lorentz symmetry, the size of non-linearities is correlated to the amount of symmetry breaking at the linear level. This generates well-known equilateral non-Gaussianities whose amplitude is related to the speed of sound by symmetry, f_NL^eq∼ 1/c_s^2 <cit.> (see <cit.> for current constraints). Possibly arising from a fundamental non-perturbative UV description of inflation <cit.>, a reduced sound speed can also be commonly generated by integrating out heavy fields with masses above the scale Λ_⋆ beyond which primordial fluctuations become strongly coupled. Accordingly, c_s is the primary parameter to be constrained by observations that encodes UV physics. The detection of sizeable equilateral non-Gaussianities then poses the question of what the subsequent steps in the investigation should entail. In addition to the observed massless fluctuations π, other degrees of freedom covering the entire mass spectrum are generically expected to be present during inflation. These fields can arise from UV completions of inflation, for example appearing in the Kaluza-Klein spectrum or as stringy states <cit.>. Here we focus on fields with masses typically of the same order as the expansion rate. Their gravitational production and the subsequent decay of these species into inflaton fluctuations give rise to potentially observable signatures in the soft limits of cosmological correlators. These patterns, known as cosmological collider signals <cit.>, exhibit distinctive oscillatory behaviours whose frequencies are determined by the masses of heavy fields and whose amplitudes are Boltzmann-suppressed. The vast phenomenology of such heavy field signatures in cosmological correlators has been actively reported in e.g. <cit.>. Although prospects for detecting these signatures in both future galaxy and 21cm surveys are rather optimistic <cit.>, the experimental challenge to observe them is enormous as it requires probing a large hierarchy of scales with high precision to resolve the oscillations. In this paper, building on the previous work <cit.>, we present a new discovery channel to detect additional heavy particles during inflation when the Goldstone boson has a reduced speed of sound c_s ≪ 1, which implies that de Sitter boosts are strongly broken. These so-called cosmological low-speed collider signals are characterised by a distinct resonance in mildly-soft (internal) kinematic configurations of cosmological correlators, signalling the presence of a massive particle lighter than H/c_s. This regime covers the entire parameter space if the additional field is lighter than the strong coupling scale Λ_⋆. In analogy with terrestrial collider experiments in which massive particles leave resonant signatures in the cross section of lighter particles' collisions when the center of mass energy coincides with the particle masses √(s)∼ m, the low-speed collider resonance is located at k_L/k_S∼ c_s m/H, where k_L and k_S are the long and short modes.
After the detection of equilateral-type non-Gaussianities and hence the measurement of c_s, we would enter an era of precision measurements, analogous to electroweak precision physics in colliders. Measuring the low-speed collider peak would reveal the existence of an additional field coupled to the Goldstone boson and would give indications about its mass, which could be later confirmed by the measurement of the conventional cosmological collider signal in ultra-soft limits, see Fig. <ref>. As we will see, a strong breaking of de Sitter boosts uncovers a concealed elegance within the analytical structure of cosmological correlators. In de Sitter spacetime, dealing with massive fields even in perturbation theory is notoriously challenging. Indeed, as a consequence of the inflationary background altering the free propagation of fields, their mode functions take the form of complicated special functions rather than simple plane waves. Consequently, except for certain soft limits of correlators, fully analytical predictions are very rare in the literature (see e.g. <cit.> for recent heroic computations). However, the picture drastically simplifies when π has a reduced speed of sound c_s ≪ 1. To better appreciate the underlying physics behind this simplicity, let us imagine the two-to-two scattering of Goldstone bosons at the energy scale ω and consider a particle with mass m≲ω/c_s mediating a force between them. In this exchange process, the energy and the momentum of the off-shell particle are of order ω and ω/c_s, respectively, implied by energy and momentum conservation. As a result, similar to the low-speed limit of the Yukawa theory, the massive field propagator can be simplified as 1/(ω^2 - k^2 - m^2) ∼ -1/(k^2 + m^2). This approximation corresponds to replacing the real-space Feynman propagator of the massive field D_F by a Yukawa potential D_F(x, t; y, t') → δ(t-t') e^-m|x - y|/(4π |x - y|), reflecting the instantaneous propagation of the massive field.[Notice that the local EFT limit is recovered from the non-local one when the massive field is heavier than the transferred momentum m≳ω/c_s. In this limit, the real-space propagator becomes D_F(x, t; y, t') → m^-2δ(t-t') δ^(3)(x - y).] Similarly, for the computation of cosmological correlators, we show that the relativistic heavy field can be integrated out, albeit in a non-local manner in space, resulting in a non-local single-field EFT for the Goldstone boson. This non-local EFT comes in extremely handy when computing cosmological correlators for mainly two reasons. First, the only dynamical field in this EFT is the massless field π which has a simple mode function, at least when the effect of the mixing with the additional field is weak. Second, exchange diagrams reduce to contact ones, which drastically simplifies computations. Corrections to the leading non-local EFT can also be systematically captured by adding extra higher-order time-derivative terms. Eventually, note that as a consequence of integrating over the dynamics of the heavy field, the non-perturbative particle production leading to the conventional cosmological collider signal cannot be captured in the realm of the non-local EFT. However, this simple theory, by essence, accurately describes the physics of the cosmological low-speed collider. Outline. In this paper, we present an exhaustive study of the low-speed collider signals in the three-point correlation function of π. In Sec.
<ref>, we start by constructing the effective action (<ref>) that couples the massless Goldstone boson π to an additional massive scalar field σ. In Sec. <ref>, we show how a non-local single-field EFT (<ref>) arises when π has a reduced speed of sound c_s ≪ 1. We then characterise this theory by inspecting the dispersion relation, showing that it encompasses several previously-known regimes, and discuss the regimes of validity of this non-local EFT. In Sec. <ref>, we divide our computation of cosmological correlators into two parts. When the massive field at the linear level is weakly mixed with the Goldstone boson, we analytically compute the two- and three-point correlators induced by the non-local EFT. We show that all correlators, including the (would-be) double- and triple-exchange diagrams, are obtained by acting with weight-shifting operators on a set of seed correlators. At strong mixing, we use the recently developed cosmological flow approach <cit.> to numerically compute these correlators. In Sec. <ref>, we present an in-depth analysis of the low-speed collider phenomenology, paying particular attention to its strong mixing regime. We determine the size of non-Gaussianities and show that we can accommodate large non-Gaussianities while remaining under perturbative control. We then make an extensive analysis of the bispectrum shapes and give a separable template (<ref>) that accurately describes the low-speed collider resonance. Additional appendices expose technical details and derivations. We present our conclusions in Sec. <ref>. Notation and Conventions. We use the mostly-plus signature for the metric (-, +, +, +). Spatial three-dimensional vectors are written in boldface k. We use Greek letters (μ, ν, …) for space-time indices, and Latin letters (i, j, …) for spatial indices as usual. Overdots and primes denote derivatives with respect to cosmic (physical) time t and conformal time τ defined by dτ = dt/a, respectively. A prime on a correlator is defined to mean that we drop the momentum conserving delta function, i.e. ⟨𝒪_k_1…𝒪_k_n⟩ = δ^(3)(k_1 + … + k_n) ⟨𝒪_k_1…𝒪_k_n⟩'. § INFLATIONARY FLUCTUATIONS Cosmology is about observing inhomogeneities and studying their statistical properties. According to the inflationary paradigm, these perturbations emerge from quantum fluctuations of the inflationary fields, which ultimately provide the seeds for small density fluctuations in the late universe. Remarkably, the simplest explanation of cosmological data involves weakly coupled inflationary fluctuations. It means that in practice they can be described by an action that takes the form of a series expansion in powers of fluctuations and their derivatives. At every order, the terms are fixed by assumed symmetries up to a finite number of free coefficients. Such a description is valid up to a UV cutoff scale, beyond which the UV physics becomes important. However, at energies below the UV cutoff, higher-order terms in the series expansion become irrelevant as they are suppressed by increasing powers of the UV cutoff. In the end, only a finite number of terms is sufficient to accurately describe the physics of these fluctuations. This is the very idea of effective field theories of inflationary fluctuations, which provides a model-independent framework to study the physics of degrees of freedom that are directly linked to observations, making manifest the implications of symmetries.
In this section, we identify the relevant degrees of freedom to describe inflationary fluctuations and construct a theory describing them. More specifically, we start by reviewing the effective action for the Goldstone boson of broken time translations interacting with an additional massive scalar field. Next, we provide bounds on the couplings that give the maximal size of the mixing interactions while keeping the effective theory under control. §.§ Goldstone Description The effective field theory (EFT) of inflationary fluctuations (see the original papers <cit.> or the review <cit.> for more details) is based on the need for a physical “clock”, i.e. a preferred time slicing of spacetime, that breaks time-translation symmetry in order for inflation to end. This pattern of symmetry breaking gives rise to a Goldstone boson π that transforms as π→π - ξ(x, t) under a time shift t → t + ξ(x, t). Together with fluctuations of the metric, it is guaranteed to be active during inflation and hence is the minimal relevant degree of freedom describing inflationary fluctuations. Goldstone action. In the unitary gauge, in which the fluctuations of the “clock” are absorbed by the metric, the most general action is constructed out of all operators that are invariant under time-dependent spatial diffeomorphisms x^i → x^i + ξ^i(x, t). At leading order in derivatives, the action can be written S_π = ∫ d^4x √(-g)[ 1/2 M_pl^2 R + M_pl^2 Ḣ g^00 - M_pl^2(3H^2 + Ḣ) + ∑_n=2^∞ M_n^4(t)/n! (δ g^00)^n + …] , where δ g^00 = g^00+1 and the M_n(t) are general time-dependent mass scales. One recovers slow-roll inflation in the limit M_n→ 0. The coefficients of the first two operators have been fixed to remove all tadpoles so that the action starts quadratic in the fluctuations. The breaking of time diffeomorphisms during inflation is analogous to the breaking of the gauge group by the mass term in massive Yang-Mills theory, where the dynamical degrees of freedom are the longitudinal modes π^a of the vector fields A^a_μ. There, one can make them explicit via the Stückelberg trick. In the context of the EFT of inflationary fluctuations, this trick boils down to performing a spacetime-dependent time reparameterization t→ t+π(x, t) in Eq. (<ref>). In general, the resulting action is rather complicated because it mixes the Goldstone mode with metric fluctuations. However, being interested just in effects that are not dominated by the mixing with gravity, one can neglect the metric fluctuations by taking the so-called decoupling limit M_pl→∞ and Ḣ→ 0 while keeping the product M_pl^2Ḣ fixed. In this regime, the transformation reduces to δ g^00→ -2 π̇ - π̇^2 + (∂_i π)^2/a^2, and the Goldstone boson π is associated with the longitudinal scalar mode of the metric, which in the end is related to the curvature perturbation by ζ = -H π on super-Hubble scales. The Goldstone action becomes S_π = ∫ d^4x √(-g) (M_pl^2|Ḣ|/c_s^2)[π̇^2 - c_s^2(∂̃_iπ)^2 + (1 - c_s^2)(π̇^3 - π̇(∂̃_iπ)^2) - (4/3)(M_3^4 c_s^2/(M_pl^2|Ḣ|))π̇^3] , where ∂̃_i = ∂_i/a and c_s^-2 = 1 - 2M_2^4/(M_pl^2|Ḣ|) is the intrinsic speed of sound for the propagation of π. The non-linearly realised symmetry makes it explicit that a small value of c_s generates an enhanced cubic interaction π̇(∂_i π)^2. The action (<ref>) accurately describes the dynamics of π up to slow-roll corrections ε = -Ḣ/H^2. Additional sector. We are interested in coupling the Goldstone boson associated with the breaking of time diffeomorphisms to an additional sector composed of a single massive scalar field σ.
We will exclusively consider the case where this scalar field is heavy with mass around the Hubble scale so that no additional internal symmetry is a priori needed in order for this sector to be radiatively stable (see e.g. <cit.> where additional light fields are considered). The action for the σ-sector we consider is S_σ = - ∫ d^4x √(-g)[1/2(∂_μσ)^2 + 1/2 m^2σ^2 + μσ^3 + …] , where the ellipses denote higher-order terms.[Up to cubic terms in the field, this is the most general Lagrangian one can write down up to dimension-4 operators because σ̇σ^2 is a total derivative. Including dimension-5 operators demands that we also consider the following operators: σ̇^2σ and (∂̃_i σ)^2σ. These terms, being of higher dimension, would be suppressed by an additional high-energy scale. Therefore, the cubic self-interaction σ^3 is responsible for the leading non-Gaussian signal in the σ-sector.] The non-linearities in this sector are entirely characterised by μ. Mixing sector. We now describe the coupling of σ to the Goldstone boson π, see e.g. <cit.> for details. In the unitary gauge, we therefore construct the mixing sector out of the building blocks δ g^00 and σ, invariant under time-dependent spatial diffeomorphisms. Among the terms allowed by symmetry, some give tadpoles that should add up to zero in order not to alter the background solution. The action then has to start at quadratic order in fluctuations. At leading order in derivatives, hence considering the leading deformation to the slow-roll action, the mixing action is[By construction, the Lagrangian is constructed out of building blocks that are invariant under time-dependent spatial diffeomorphisms. Neglecting operators involving the extrinsic curvature, terms with higher-order derivatives must therefore be contracted in a fully diffeomorphism-invariant way. At leading order in derivatives, this allows us to consider the operator g^0μ∂_μσ that would generate the additional quadratic mixing π̇σ̇ and dimension-6 cubic operators. These terms have been fully classified in <cit.>. In this work, we do not consider such interactions and only focus on the leading deformations of the slow-roll action.] S_π-σ = - ∫ d^4x √(-g)[ M̃_1^3 δ g^00σ + M̃_2^2 δ g^00σ^2 + M̃_3^3 (δ g^00)^2 σ] . This action simplifies when reintroducing the Goldstone boson and considering the decoupling limit in which couplings to metric fluctuations are negligible. Up to cubic interactions, one obtains S_π-σ = ∫ d^4x √(-g)[2M̃_1^3 π̇σ + M̃_1^3(∂_μπ)^2σ + 2M̃_2^2π̇σ^2 - 4M̃_3^3π̇^2σ] . The parameter M̃_1 fully controls the size of the mixing π̇σ and the cubic interaction (∂̃_i π)^2 σ. This is a consequence of the non-linearly realised symmetry, i.e. field operators are related at different orders in perturbation theory. Full theory. Up to cubic order in fluctuations, collecting the various sectors gives the following full theory that we will consider S = ∫ d^4x √(-g)( 1/2π̇_c^2 - (c_s^2/2)(∂̃_i π_c)^2 - 1/2(∂_μσ)^2 - 1/2 m^2 σ^2 + ρπ̇_c σ - λ_1 π̇_c (∂̃_i π_c)^2 - λ_2 π̇_c^3 - (1/Λ_1)(∂̃_i π_c)^2σ - (1/Λ_2)π̇_c^2 σ - λπ̇_c σ^2 - μσ^3) , where we have canonically normalised the Goldstone boson π_c = c_s^-3/2 f_π^2 π with f_π^4 ≡ 2c_s M_pl^2 |Ḣ| being the symmetry breaking scale <cit.>. We have redefined the coupling constants as ρ ≡ 2c_s^3/2 M̃_1^3/f_π^2 , λ_1 ≡ -c_s^3/2 (1-c_s^2)/(2 f_π^2) , λ_2 ≡ -(c_s^3/2/(2f_π^2))[1 - c_s^2 - (8/3)(M_3^4/f_π^4)c_s^3] , Λ_1^-1 ≡ -2c_s^3 M̃_1^3/f_π^4 , Λ_2^-1 ≡ -(4c_s^3/f_π^4)(M̃_1^3 + 4M̃_3^3) , λ ≡ -2c_s^3/2 M̃_2^2/f_π^2 .
Note that there is a priori no model-building requirement on the size of the quadratic mixing ρ/H. In full generality, we therefore allow this coupling to exceed unity in Hubble units. Importantly, because of the quadratic mixing, there is a priori no one-to-one correspondence between fields and particle states. Instead, there is a mixing between the field content and the particle spectrum, and this mixing can be sizeable. In this paper, we will concentrate on the case of an additional heavy particle, which, as we will see, corresponds to m and/or ρ≳𝒪(H). The single-field dimensionless power spectrum, i.e. in the limit ρ/H=0, is given by Δ_ζ, 0^2 = (k^3/2π^2)⟨ζ_kζ_-k⟩' = (1/4π^2)(H/f_π)^4 . The measured amplitude of the dimensionless power spectrum is Δ_ζ^2 = 2.2×10^-9 <cit.>. §.§ Perturbativity Bounds on Couplings The introduced coupling constants are free parameters of the theory. However, they must satisfy some bounds to ensure that the effective description of fluctuations is under theoretical control. Here, we give bounds on these couplings based on perturbativity, at weak and strong mixing. We will give additional details on these regimes in Sec. <ref>, and refer to <cit.> for a detailed analysis (see also <cit.> for methods to derive perturbativity bounds). We also comment on theoretically motivated values for the mass of the heavy field. Weak mixing. In the weak mixing regime, imposing that the interactions associated with relevant/marginal operators can be treated perturbatively at sound horizon crossing and that the strong coupling scales of irrelevant operators be larger than the Hubble scale gives ρ/H ≲ c_s^-1/2 , H/Λ_2 ≲ c_s , λ≲ c_s^1/2 , μ/H ≲ 1 . In the insert below, we show how to obtain these bounds from simple dimensional analysis arguments. For a sufficiently small mass m/H≲ 1/c_s, which is the case of interest for the low-speed collider regime, we will see that non-trivial physics also occurs outside the horizon, and it can be treated perturbatively only if one also imposes ρ≲ m, see the analysis in Sec. <ref>. This bound is more stringent and therefore defines the weak mixing regime.[Remarkably, the analysis in Sec. <ref> will reveal that in the range m ≲ρ≲ H/c_s, the perturbative analysis still approximately holds, simply upon considering the effective mass m^2 → m^2+ρ^2.] Perturbativity bounds.—In this insert, we derive the perturbativity bounds (<ref>) by estimating the various terms in the Lagrangian (<ref>). As the Goldstone boson propagates with a non-unity speed of sound, it is not possible to treat time and space on the same footing. Equivalently, we need to keep track of how the various terms in the Lagrangian scale with energy and momentum. To do this, let us fix the physical momentum k_p as a reference. At weak mixing, the dispersion relation for the Goldstone boson is ω_π = c_s k_p and that of the massive mode reads ω_σ = k_p, from which we get ω_π = c_s ω_σ. In the following, we will write all quantities in terms of ω≡ω_π. Note that placing ourselves deep in the UV, we neglect the mass contribution for simplicity. In the regime of interest for the low-speed collider c_s m/H≪ 1, one can indeed show that the mass term in the Lagrangian is negligible. We can directly read from the Lagrangian that π_c ∼ c_s^-3/2ω and σ∼ c_s^-1ω by dimensional analysis. With these scalings, we deduce that the Goldstone boson kinetic term scales as π̇_c^2 ∼ c_s^2(∂̃_i π_c)^2 ∼ c_s^-3ω^4 and that of the field σ scales as σ̇^2∼ (∂̃_i σ)^2 ∼ c_s^-4ω^4.
Note that the presence of a reduced sound speed suppresses the kinetic term of π_c compared to that of σ. Similarly, the quadratic mixing scales as ρπ̇_c σ∼ c_s^-5/2ρ ω^3. Finally, requiring that this mixing be smaller than the most suppressed kinetic term, i.e. π̇_c^2, leads to ρ/ω≲ c_s^-1/2, which after evaluating the expression at sound-horizon crossing, k_p ∼ H/c_s—equivalent to ω∼ H—gives the first bound in (<ref>). The same analysis leads to the perturbativity bounds for the cubic interaction coupling constants. Note that the bound obtained from the cubic interaction (∂̃_i π_c)^2σ fixed by the quadratic mixing—ρ/H ≲ c_s^3/2(2 πΔ_ζ)^-1—is less constraining than the one found after imposing perturbativity of the quadratic mixing. Finally, one can perform the same analysis in the strong mixing regime with a modified dispersion relation to obtain the perturbativity bounds (<ref>). Strong mixing and modified dispersion relation. We will see in Sec. <ref> that in the strong mixing regime, the quadratic theory is dominated by the mixing term and one can realise that (<ref>) can be reduced to an effective single-field theory with a non-linear dispersion relation for the Goldstone boson <cit.>. It is then easy to obtain the following bounds ρ/H ≲ c_sκ^1/2Δ_ζ^-1 , H/Λ_2 ≲ c_s^5/4(ρ/H)^3/4 , λ ≲ c_s^1/4(ρ/H)^3/4 , μ/H ≲ c_s^-3/4(ρ/H)^3/4 , where κ = 2Γ(5/4)^2/π^3≈ 0.053. These bounds allow non-Gaussian signals to be large without breaking perturbativity, see Sec. <ref>. Goldstone boson UV cutoff. For the Goldstone boson sector, the UV cutoff scale Λ_⋆, beyond which perturbative unitarity is violated, famously approaches the Hubble scale when the speed of sound is reduced. Indeed, it is given by <cit.> Λ_⋆^4 = (24π/5) f_π^4 c_s^4/(1 - c_s^2) , which gives Λ_⋆∼ c_s f_π in the limit c_s ≪ 1. Note that this strong coupling scale is associated with the operator (∂_i π_c)^4, giving the most stringent UV cutoff. Imposing that Λ_⋆ should exceed the Hubble scale at weak mixing gives a theoretical lower bound on the sound speed c_s ≥ 0.0087. Heavy particles with masses above the strong coupling scale cannot be produced on-shell. However, they can be consistently coupled to the massless mode π_c as long as the energy carried by the Goldstone boson does not exceed Λ_⋆. This is analogous to the chiral Lagrangian where pions can be coupled to heavy baryons.[We thank Paolo Creminelli for helpful discussions on this topic.] In our situation, as we will see in Sec. <ref>, heavy particles with masses far above the Hubble scale can be integrated out in the usual local manner. Instead, we will see in the following that the low-speed collider signature appears when m ≲ H/c_s. This is automatically satisfied if the mass is smaller than the strong coupling scale, i.e. m ≲Λ_⋆≈ 100 H c_s ≲ H/c_s as soon as c_s ≲ 0.1. § NON-LOCAL EFT OF INFLATION We have identified the relevant degrees of freedom describing the theory of inflationary fluctuations that we consider, namely the (canonically normalised) Goldstone boson of broken time translations π_c and an additional massive field σ. Our central interest is the low-speed collider signal which manifests itself as a resonance in squeezed configurations of the primordial three-point correlator, provided that the sound speed of π_c is reduced, c_s < H/m. This phenomenologically interesting signal is of course well captured by the full theory (<ref>).
However, the multi-field nature of this theory makes analytical computations of correlators complex—if not intractable—and conceals the physics of the low-speed collider. In this section, we derive an effective single-field theory that captures the physics of this signal in an uninvolved way. We discuss how one can obtain an accurate single-field, albeit non-local, effective description of the full dynamics, in almost the entire quadratic parameter space, by integrating out the heavy degree of freedom from the mass spectrum (see e.g. <cit.> for other instances of non-local single-field EFTs). §.§ Effective Action In a specific regime that we will determine later, we can integrate out the heavy field σ to obtain an effective action for π_c. Formally, at the level of the partition function path integral,[In general, integrating out heavy degrees of freedom induces non-unitary effects—such as decoherence and dissipation—in the low-energy effective theory. When computing in-in cosmological correlators, these effects arise from the interference between the two branches of the in-in contour (see e.g. <cit.> for more details). In addition, truncating the path integral at finite time leads to unusual boundary conditions for the fields that fix the non-vanishing homogeneous solution of the heavy field equation of motion. In contrast, for flat-space in-out scattering amplitudes, these boundary terms essentially vanish at the infinite past and future. An alternative approach to incorporate these subtle effects consists in working at the level of the wavefunctional path integral (see e.g. <cit.> for recent developments). Given that the integrating out procedure in cosmology is not as well established as it is in particle physics, we will stick to the commonly used approach, and keep only the particular solution to (<ref>) below.] we can define a single-field effective action S_eff by performing the path integral over the heavy state e^i S_eff[π_c] = ∫𝒟σ e^i S[π_c, σ] . Generically and as we will see later, performing such a path integral produces non-local interactions in the Goldstone boson sector. A common procedure is then to perform an operator product expansion on the non-local interactions to produce local interactions in the effective theory. Note that in the usual ħ expansion of the effective action, hence accounting for loop corrections, this is often referred to as the process of “matching”. In this work, we only focus on the leading-order term in this expansion which describes tree-level processes. The tree-level effective Lagrangian is particularly easy to determine as it can be obtained by performing a saddle point approximation of the functional integral over σ, i.e. e^i S_eff[π_c] = e^iS[π_c, σ_cl] , where σ_cl is the classical solution of the equation of motion δ S[π_c, σ_cl]/δσ_cl = 0. The resulting effective Lagrangian ℒ_eff is genuinely non-local and falls off for momenta larger than the mass of the heavy field. Varying the action (<ref>) with respect to σ_cl gives the following equation of motion (-□ + m^2) σ_cl = ρπ̇_c + … , where □ ≡ g^μν∇_μ∇_ν = -∂_t^2 - 3H∂_t + ∂̃_i^2 denotes the d'Alembert operator on dS_4, and the ellipses denote non-linear corrections. Note that it is sufficient to use the linear equation of motion for σ_cl and to plug it back in the original Lagrangian. Indeed, non-linear corrections to σ_cl coming from cubic interactions identically vanish up to cubic order.[This can be shown in general terms as follows.
The Lagrangian we are interested in schematically reads L = -1/2 σ O σ + σ J(π) with the equation of motion O σ = J for the field σ. Replacing σ by O^-1 J in the action gives L = 1/2 (O^-1 J) J, which, upon splitting J = J_1 + J_2 + … into terms linear in π, quadratic etc, gives the Lagrangian L = 1/2 (O^-1 J_1)J_1 + 1/2 (O^-1 J_1)J_2 + 1/2 (O^-1 J_2)J_1 up to cubic order. Instead, if one simply replaces σ by the solution to its linear equation of motion O^-1 J_1, one obtains L = 1/2 (O^-1 J_1)J_1 + (O^-1 J_1)J_2. It is easy to see that both actions agree in the regime of validity of the single-field EFT. For instance, in the local EFT where O^-1 is written as a series of □^n/m^(2(n+1)) terms, the identification follows from ∫ d^4x √(-g) J_2 □^n J_1 = ∫ d^4x √(-g) J_1 □^n J_2 upon integrations by parts. Analogous arguments hold in the non-local EFT regime when considering ∫ d^3x J_2 D^-1 J_1 = ∫ d^3x J_1 D^-1 J_2, see the definition of D^-1 after Eq. (<ref>), which is transparent in Fourier space.] Inverting the equation of motion for σ_cl gives σ_cl = ρ (-□ + m^2)^-1 π̇_c . As long as the low-energy modes described by the effective action obey a non-relativistic dispersion relation such that ω^2 ≪ k^2, which is the case when the intrinsic speed of sound of the Goldstone boson is (significantly) reduced, the d'Alembert operator is dominated by spatial gradients. In <ref>, we give the precise regime of validity of this approximation. Therefore, one can write the non-local operator in Eq. (<ref>) as a time derivative expansion[Note that ∂̃_i = ∂_i/a bears a hidden (cosmic) time dependence in the scale factor a(t). As a result, the order of the operators matters.] 1/(-□ + m^2) = 1/(-∂̃_i^2 + m^2) ∑_n=0^∞ (-1)^n [(∂_t^2 + 3H∂_t)(-∂̃_i^2 + m^2)^-1]^n . Plugging back Eq. (<ref>) in the action (<ref>) and keeping only the leading order term in (<ref>) leads to the following non-local single-field theory S_eff = ∫ d^4x √(-g)( 1/2 π̇_c[1 + ρ^2 𝒟^-1]π̇_c - (c_s^2/2)(∂̃_i π_c)^2 - λ_1 π̇_c (∂̃_i π_c)^2 - λ_2 π̇_c^3 - (ρ/Λ_1)(∂̃_i π_c)^2 𝒟^-1π̇_c - (ρ/Λ_2)π̇_c^2 𝒟^-1π̇_c - λρ^2 π̇_c [𝒟^-1π̇_c]^2 - μρ^3 [𝒟^-1π̇_c]^3) , where we have introduced the non-local differential operator 𝒟^-1 = (-∂̃_i^2 + m^2)^-1. This effective theory should be understood as the leading order contribution in the time derivative expansion (<ref>) where higher-order terms are encoded in an infinite number of operators. As we will see in the next section, the non-local theory (<ref>) accurately describes the dynamics of the Goldstone boson in almost the entire parameter space, i.e. for almost all ranges of ρ/H and m/H. However, because this theory is intrinsically single-field, it does not capture the non-perturbative particle production, hence does not describe the conventional cosmological collider signal, visible in ultra-soft limits of cosmological correlators. Indeed, the effects of the massive field are encoded in (non-local) contact interactions which reflect the fact that it propagates instantaneously. As such, the theory (<ref>) does not describe long-range interactions coming from the exchange of an additional massive field. Let us stress that the non-local operator 𝒟^-1 should be considered as a building block on its own, and cannot in general be expressed in terms of a series of simpler operators. When the field is sufficiently heavy, one can write 𝒟^-1 = (1/m^2) ∑_n (∂̃_i^2/m^2)^n as a sum of local operators, with the caveat that this only provides an asymptotic expansion.
However, the regime of parameters of interest for the low-speed collider rather corresponds to m^2 being negligible compared to ∂̃_i^2 around the critical time of sound horizon crossing, where this expansion is clearly not applicable. Conversely, one can wonder whether a tentative fully non-local expansion 𝒟^-1 = ∂̃_i^-2∑_n (m^2 ∂̃_i^-2)^n as a series of inverse Laplacian operators would be relevant in that situation, but in fact it is not. A simple way to see this is that this expansion is analytic in m, hence it can never reproduce the dependence of correlators in log(m) that is found when considering the full 𝒟^-1 <cit.>. Note also that for the interactions in (<ref>), each term in this tentative expansion would be infrared divergent. Physically, this obstruction comes from the fact that interactions are not only localized around sound-horizon crossing, as usual for local theories, but rather span over a range of time, from sound horizon crossing to mass-shell crossing where ∂̃_i^2 ∼ m^2, and beyond. Hence, any expansion centred around the former event (or any event) is doomed to fail. In the same way as propagators (ω^2 - k^2 - m^2)^-1 in amplitudes cannot be expanded around resonances, the resonance of the cosmological low-speed collider signal is intrinsically non-perturbative. These physical explanations will find their counterparts below in the mathematical properties of the seed function _1 in (<ref>), see the discussion “Properties of the EFT seed integrals” in Sec. <ref>. §.§ Dispersion Relation The quadratic part of the non-local action (<ref>) describes various phenomenologically interesting regimes for the Goldstone boson dynamics, that can be deciphered through its dispersion relation. Here, we clarify the dynamics of the Goldstone boson by looking at extreme regimes. We explicitly show that the non-local theory interpolates between the reduced speed of sound regime and the modified dispersion relation regime, analogously to the analysis in <cit.> for c_s=1. When the energy of the Goldstone boson is above the Hubble scale ω≫ H, its dynamics is well described by plane-wave solutions. This motivated approximation can be extended up to horizon crossing where Hubble friction kicks in and π_c starts to freeze. The quadratic part of the non-local action (<ref>) leads to the following dispersion relation ω^2 = c_s^2 k_p^2/(1 + ρ^2/(k_p^2 + m^2)) , where k_p = k/a is the physical momentum carried by the Goldstone boson. Let us now take the large mass and the large quadratic mixing limits. Reduced effective speed of sound. When the mass of the heavy field becomes large, it dominates the gradient term in the non-local differential operator 𝒟^-1≈ 1/m^2. The heavy field therefore leaves its imprint in the dynamics of the Goldstone boson in the form of an induced speed of sound c̃_s^-2 = c_s^-2(1 + ρ^2/m^2) . The massless mode then freezes at the effective sound horizon |c̃_s kτ| ∼ 1 while having a linear dispersion relation ω ≈ c̃_s k_p. Clearly, this limiting regime is valid as long as the physical momentum k_p of the Goldstone boson verifies k_p≪ m. Using the dispersion relation, one can see that this regime is an accurate description of the quadratic dynamics if ω≪ω_new=c̃_s m, where ω_new marks the energy scale at which higher-order terms in the derivative -∂̃_i^2/m^2 expansion should be taken into account. In order to be able to initialise the mode functions in the proper vacuum, one must require that ω_new be well above the Hubble scale i.e. ω_new≫ H.
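As a simple numerical illustration of how this dispersion relation interpolates between the two regimes discussed in this subsection, the following sketch (with arbitrary example values of c_s, ρ and m, not tied to data) compares the full expression with its two limiting forms.

```python
import numpy as np

# Dispersion relation omega^2 = c_s^2 k_p^2 / (1 + rho^2 / (k_p^2 + m^2)),
# in Hubble units, for arbitrary illustrative parameters.
c_s, rho, m = 0.05, 20.0, 5.0
k_p = np.logspace(-1, 3, 5)  # physical momentum in units of H

omega_full = np.sqrt(c_s**2 * k_p**2 / (1 + rho**2 / (k_p**2 + m**2)))
omega_cs_eff = np.sqrt(c_s**2 / (1 + rho**2 / m**2)) * k_p  # reduced effective sound speed (k_p << m)
omega_mod = c_s * k_p**2 / rho                              # modified dispersion relation (m << k_p << rho)

for kp, full, eff, mod in zip(k_p, omega_full, omega_cs_eff, omega_mod):
    print(f"k_p={kp:8.1f}  full={full:8.3f}  ct_s*k_p={eff:8.3f}  c_s*k_p^2/rho={mod:8.3f}")
```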
Therefore, the reduced effective speed of sound regime is valid if H/c̃_s≪ m . It should be noted that the region with a significantly reduced effective speed of sound compared to the “bare” one c̃_s≪ c_s is rather limited as it requires ρ≫ m while maintaining (<ref>). This region of parameter space does not give rise to the low-speed collider and is not the one of interest in this paper. In the two-field picture of this regime, we show in the insert below that the energy gap between the heavy and the light modes increases due to the quadratic mixing. Modified dispersion relation. In the regime where ≫ m, the gradient term dominates the mass in the non-local differential operator 𝒟^-1≈ -∂̃_i^-2. Additionally, for ρ≫, the second term containing 𝒟^-1 in the quadratic part of the action (<ref>) dominates over the usual π̇_c^2 term. This regime is similar but still different from ghost inflation <cit.>, and describes the dynamics of the Goldstone boson having a modified dispersion relation ω≈ c_s ^2/ρ. Evaluating this dispersion relation at the Hubble scale ω = H, one can see that the Goldstone boson freezes at the time scale set by c_s^1/2|kτ| ∼√(ρ/H), before the usual horizon crossing |kτ|∼ 1.[In this regime, the quadratic theory can be fully solved analytically. The equation of motion is found to be π̈_c + 5H π̇_c + c_s^2/ρ^2 ^4 π_c = 0 . After quantizing the theory and imposing Bunch-Davis initial conditions, the mode functions are completely determined in term of the Hankel function (see <cit.> for more details) π_c, k(τ) = √(π/4)H/ρH/√(2k^3) (-kτ)^5/2^̋(1)_5/4(c_s/2H/ρ(kτ)^2) . ] 4pt Let us now derive the precise regime of validity of this regime. Using the dispersion relation, the first condition ≫ m evaluated at the Hubble scale gives c_s m^2/ρ≪ H. This defines a new energy scale M ≡ c_s m^2/ρ. When M approaches the Hubble scale, the dynamics becomes sensitive to higher-order powers of m^2/∂̃_i^2, i.e. these terms are no longer irrelevant and the effective description of the Goldstone boson having a non-linear dispersion relation breaks down. The second condition ρ/≫ 1, upon using the dispersion relation at ω=H, gives ρ/H≫ 1/c_s. To sum up, the modified dispersion relation regime is valid as long as M≡ c_s m^2/ρ≪ H , andρ/H≫1/c_s . 4pt From the two asymptotic regimes above, it is clear that the dispersion relation (<ref>) gives a unified description in the (c_s, m, ρ) parameter space of the full quadratic dynamics, except in the regime where m<H and ρ<H, where the non-local EFT is not valid. Indeed, if the additional field is light and weakly mixed to π_c, the Goldstone boson is continuously sourced by σ on super-Hubble scales. The dynamics is then sensitive to the amount of e-folds elapsed from the moment at which the mode crosses the horizon to the end of inflation (see <cit.> for an exhaustive analysis of the full parameter space). In the next section, we derive the precise regime of validity of the non-local single-field theory (<ref>). Reduced effective sound speed energy gap.—In this insert, we examine the energy gap between the heavy and the massless modes in the reduced effective speed of sound regime when c_s ≪ 1. In flat-space, the quadratic two-field dynamics can be solved analytically. The exact dispersion relations are given in Eq. (<ref>). The leading-order solution at weak mixing, corresponding to the completely decoupled case ρ/m → 0, is given by ω_-^2 = c_s^2 ^2 and ω_+^2 = ^2 + m^2. Let us now examine the perturbative limit ρ≲ m. 
Taking this limit in the exact dispersion relations (<ref>) at next-to-leading order leads to ω_-^2 ≈ c_s^2 ^2 - ρ^2 c_s^2 ^2/(1-c_s^2)^2 + m^2 ,andω_+^2 ≈^2 + m^2 + ρ^2 ^2+m^2/(1-c_s^2)^2 + m^2 . At reduced sound speed c_s≪ 1, the heavy mode dispersion relation reduces to ω_+^2 ≈^2 + m_eff^2 where m_eff^2 = m^2 + ρ^2. For the massless mode, assuming (1-c_s^2)^2≪ m^2, the dispersion relation boils down to ω_-^2≈ c_s^2 (1-ρ^2/m^2)^2, recovering the induced sound speed c̃_s^2 = c_s^2 (1+ρ^2/m^2)^-1≈ c_s^2 (1 - ρ^2/m^2) in the limit ρ≲ m, see Eq. (<ref>). Note that the Goldstone boson cannot acquire an effective mass from the quadratic mixing because it is protected by shift symmetry. We summarise the effect of the quadratic coupling at weak mixing in the following energy-level diagram [line width=1. pt, scale=2] [thick, ->] (-0.2, -0.5) – (-0.2, 1.7); at (-0.2, 1.9) ω^2(k); [very thick, -, pyblue] (0.3, 0.9) – (1.2, 0.9); at (0.75, 1.1) k^2+m^2; [very thick, -, pyred] (0.3, 0.4) – (1.2, 0.4); at (0.8, 0.2) c_s^2k^2; [very thick, -, pyblue] (1.9, 1.4) – (2.8, 1.4); at (2.35, 1.6) k^2+m_eff^2; [very thick, -, pyred] (1.9, -0.1) – (2.8, -0.1); at (2.35, -0.3) c̃_s^2k^2; [thick, -, pygreen] (-0.2, 0.9) – (-0.2, 1.4); [thick, -, pygreen] (-0.3, 0.9) – (-0.1, 0.9); [thick, -, pygreen] (-0.3, 1.4) – (-0.1, 1.4); at (-0.9, 1.25) effective mass; at (-0.9, 1.05) effect; [thick, -, pygreen] (-0.2, -0.1) – (-0.2, 0.4); [thick, -, pygreen] (-0.3, -0.1) – (-0.1, -0.1); [thick, -, pygreen] (-0.3, 0.4) – (-0.1, 0.4); at (-0.9, 0.25) induced sound; at (-0.9, 0.05) speed effect; at (0.8, -0.7) uncoupled limit; at (0.8, -0.9) ρ/m→ 0; at (2.4, -0.7) ρ/m correction; §.§ Regimes of Validity It is important to determine the precise regime of validity of the non-local theory (<ref>), as the leading-order term in the derivative expansion (<ref>). Indeed, we wish to treat higher-order terms as perturbative corrections to the effective action. Flat-space intuition. Before considering cosmological correlators, for pedagogical reasons, it is instructive to analyse the dynamics in the flat-space limit H→ 0. Let us come back to the full two-field picture (<ref>) and consider the four-point amplitude of Goldstone bosons due to the exchange of a σ field, with interactions gπ_c^2σ for definiteness. This amplitude is given by 𝒜 = 𝒜_s + 𝒜_t + 𝒜_u , where the three contributions are the s, t and u channels respectively. We denote the particle four-momenta as p_a^μ = (ω_a, p_a) where a∈{1, …, 4} labels the momentum of each particle. Importantly, since boost invariance is broken, the scattering amplitude is no longer a function of the Lorentz-invariant momentum inner products. Instead, the s-channel amplitude is given by 𝒜_s = -g^2/(ω_1+ω_2)^2 - (p_1 + p_2)^2 - m^2 . where we use all incoming particles convention so that ∑_a ω_a = 0 and ∑_a p_a = 0. When writing the non-local effective theory (<ref>), we have precisely considered that the term (ω_1+ω_2)^2 is negligible compared to the term (p_1 + p_2)^2, which is equivalent to considering that the time derivative is negligible compared to the gradient term in the equation of motion of σ. Because the Goldstone boson propagates with a reduced sound speed, we have (ω_1+ω_2)^2 = c_s^2(p_1+p_2)^2 ≈ 4c_s^2 k^2, where k is the characteristic momentum carried by incoming particles. 4pt Let us now consider two situations. 
The first one is when the transfer momentum carried by the exchanged particle, denoted q=|q|, is of the same magnitude as the incoming particle, that is q^2≡ (p_1 + p_2)^2 ≈ 4k^2. If the sound speed of the Goldstone boson is sufficiently reduced c_s ≪ 1, then we always have (ω_1+ω_2)^2 ≪ (p_1 + p_2)^2 and the non-local theory correctly describes the dynamics for this kinematic configuration. The second case is that of the transfer momentum being soft. In this case, k is the hard momentum. We immediately see that in the ultra-soft limit—i.e. when q/k ≪ 2c_s—, the term (ω_1+ω_2)^2 is no longer negligible compared to (p_1 + p_2)^2. In other words, the propagator is no longer dominated by the gradient term and the non-local theory (<ref>) breaks down. Crucially, this means that the regime of validity of the non-local effective theory not only depends on the parameters of the theory but also on the kinematics, and fails to correctly describe ultra-soft limits of scattering amplitudes or—as we will see in the next paragraph—cosmological correlators. Soft limits of cosmological correlators. We now extend this picture to cosmological correlators. The time dependence of the background and of the physical momenta makes the analysis more intricate. However, when the energy of the Goldstone boson is above the Hubble scale ω≳ H, we still can use the notion of dispersion relation to derive the precise regime of validity of the non-local effective theory. It should be noted that when ω≲ H, corrections due to Hubble friction become important. Although these corrections can be accounted for systematically by working at the level of the mode functions, we do not follow this path for simplicity of presentation (see <cit.> for more details). 4pt In the full two-field picture, let us consider a generic cubic interaction describing the decay of a massive field σ into two Goldstone bosons appearing on the external legs of a correlator. We are voluntarily being agnostic on the precise form of this interaction because it is unimportant for the following argument. Such process is described by (- +m^2)σ = J(π_c) where the sourcing J is quadratic in the field π_c. Depending on the precise channel that we consider, the massive field σ can carry a hard (i.e. short) or a soft (i.e. long) momentum. In the following, we examine the most constraining situation, where the Goldstone bosons are the short modes = k_S/a and the massive field is the long mode = k_L/a. Considering only the leading order term in the time derivative expansion (<ref>) enforces the following inequality to hold for the effective action to be correct: ω^2() ≲^2 + m^2 . Here, ω^2() should be understood as the sum of the energies of both Goldstone bosons. Of course, we stress that the stronger the inequality is, the better the approximation of working with the effective action will be. The complication when dealing with cosmological correlators is that (<ref>) depends on time, which is hidden in the physical momenta. In order for the non-local EFT to give an accurate description of cosmological correlators, we do not need this inequality to hold at all times (the fields are effectively decoupled deep inside the horizon ω≫ H where the system asymptotically reaches the vacuum state), but it should be valid at the critical time of horizon crossing of the Goldstone boson ω∼ H.[A crucial aspect of the low-speed collider is that non-trivial interactions also occur after horizon crossing, where the plane-wave analysis ceases to be valid. 
The full analysis done is <cit.> shows that limiting the reasoning to ω∼ H is sufficient for our purpose here.] Let us now examine the cases of weak and strong mixing separately. 4pt At weak mixing, the Goldstone boson dynamics is described by a linear dispersion relation ω^2() = c_s^2 ^2. Evaluating this expression for energies of order the Hubble scale gives c_sk_S = aH. Plugging back this condition in (<ref>) gives 1 ≲1/c_s^2(k_L/k_S)^2 + (m/H)^2 . Two situations appear. If the field σ is heavy m≳ H, then (<ref>) is always valid. If the mass of the field σ is of order the Hubble scale m∼ H, then one needs to ensure that the first term on the right-hand side of the inequality is greater than unity, i.e. k_L/k_S≳ c_s. As a result, the momentum carried by σ cannot be arbitrarily soft.[We now see that if we had chosen one of the Goldstone boson to be the long mode and the other one to be the short mode, then the massive field would have been a short mode. Therefore, ω on the left-hand side of (<ref>) would have been the sum of the energies of a long and a short mode, therefore completely dominated by the short mode, ultimately creating no hierarchy of scales.] Notice that within the regime of the low-speed collider, i.e. for masses such that m≲ H/c_s, one can neglect the mass term for ω() ∼ H, as the non-local dynamics is dominated by the gradients. However, as explained in Sec. <ref>, it is important to keep in mind that it can not be thrown away as it plays the role of an infrared regulator for the non-local theory. Also, from (<ref>), it is clear that close to equilateral configurations k_L∼ k_S, the effective action is valid as long as the sound speed is reduced. As we will see later in Sec. <ref>, the weak mixing condition (<ref>) also extends to the interesting strongly-mixed low-speed collider regime m/H ≲ρ/H ≲ 1/c_s, see Fig. <ref>. 4pt At strong enough mixing, the Goldstone boson dynamics is described by the non-linear dispersion relation ω^2() = c_s^2 ^4/ρ^2. For energies around the Hubble scale, this gives c_s k_S^2 = a^2 H ρ. Substituting this in (<ref>) yields 1 ≲1/c_s(k_L/k_S)^2 ρ/H + (m/H)^2 . If m≳ H, this inequality is always satisfied. Likewise the weak mixing case, if m∼ H, one needs to require k_L/k_S≳√(c_s H/ρ). Certainly, as we have illustrated in the flat-space picture, ultra-soft limits of cosmological correlators are not accurately described by the non-local effective theory. For the reader's convenience, we have collected the established regimes of validity in Fig. <ref>. § COSMOLOGICAL CORRELATORS We will now compute the effects of the additional massive field σ on correlators of the Goldstone boson. However, the quadratic mixing—which can be strong—leads to complications. In this section, we treat the cases of weak and strong mixings separately, using two different approaches. 4pt In the weak mixing regime, we will treat the quadratic coupling perturbatively. The various cubic interactions in the full theory (<ref>) therefore gives rise to single-, double-, and triple-exchange diagrams for the bispectrum. Not being interested in the cosmological collider signal here, but rather in the more striking low-speed collider signature, we will extract it by using the effective single-field theory (<ref>), within which all diagrams collapse into simple (non-local) contact ones. 4pt In the strong mixing regime, we will exploit the cosmological flow approach <cit.> to numerically compute the desired correlators. 
This way, we will confirm the validity of our analytical, albeit necessarily approximate, reasonings, see Sec. <ref>. §.§ Weak Mixing Before specifying our analysis to the non-local EFT, it is worth taking a detour and coming back to the full theory (<ref>). Following the bootstrap approach used in <cit.> (see <cit.> for a recent review), we will first trade the Goldstone boson for an auxiliary field φ with mass m_φ^2 = 2H^2 that propagates with a speed of sound c_s, and identify a set of seed correlators that serve as building blocks for all desired correlators. We then introduce well-chosen weight-shifting operators that map correlators of φ to correlators of Goldstone bosons as external fields, which is the case of interest in this work. This strategy enables one to obtain a complete set of correlators, regardless of the precise interactions. 4pt In a second phase, we will see that the seed correlators collapse into simple contact diagrams when the exchanged massive field is integrated out in a non-local manner. Eventually, within the framework of the non-local EFT (<ref>), we will give explicit solutions for single-, double- and triple-exchange correlators of massless external fields. §.§.§ Weight-Shifting Operators Let us introduce an auxiliary scalar field φ(x, t) with mass m_φ^2 = 2H^2 that propagates with a speed of sound c_s.[For c_s=1, φ reduces to a conformally coupled field in dS_4.] The associated mode function is determined by solving its free equation of motion and imposing the Bunch-Davies state as initial condition. It is given by φ_k(τ) = -H/√(2c_s k) τ e^-ic_s k τ , where τ is the conformal time. The fundamental objects that generate all correlators—regardless of the precise form of the interactions—are the equal-time four- and six-point correlation functions of φ generated by the interactions ℒ/a^3 = - g_1φ^2 σ - g_2φ^2 σ^2 - g_3σ^3 , where g_i are arbitrary coupling constants. Let us stress that by considering this proxy theory—which may seem over-complicated at first sight—we are able to easily obtain all correlators generated by arbitrary interactions, even beyond those considered in (<ref>), see App. <ref> for more details. Using scaling symmetry, we define the seed functions F_1, 2, 3 as the corresponding solutions of the following diagrams ⟨φ_k_1…φ_k_4|_⟩(1)' = -/3[line width=1. pt, scale=2.5] [black] (0.2, 0) – (0.4, -0.6); [black] (0.6, 0) – (0.4, -0.6); [black] (1, 0) – (1.2, -0.6); [black] (1.4, 0) – (1.2, -0.6); [pyblue] (0.4, -0.6) – (1.2, -0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1.2, -0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (1.6, 0); at (0.2, 0.15) _1; at (0.6, 0.15) _2; at (1, 0.15) _3; at (1.4, 0.15) _4; = g_1^2 (∏_i=1^4τ_0c_s k_i) ×1_12F_1(c_s k_12_12,c_s k_34_12) , ⟨φ_k_1…φ_k_6|_⟩(2)' = -/3[line width=1. pt, scale=2.5] at (0.2, 0.15) _1; at (0.6, 0.15) _2; at (0.85, 0.15) _3; at (1.2, 0.15) _4; at (1.42, 0.15) _5; at (1.85, 0.15) _6; [black] (0.2, 0) – (0.4, -0.6); [black] (0.6, 0) – (0.4, -0.6); [black] (0.8, 0) – (1, -0.6); [black] (1.2, 0) – (1, -0.6); [black] (1.4, 0) – (1.6, -0.6); [black] (1.8, 0) – (1.6, -0.6); [pyblue] (0.4, -0.6) – (1, -0.6); [pyblue] (1, -0.6) – (1.6, -0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1, -0.6) circle (.03cm); [fill=black] (1.6, -0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (2, 0); =g_1^2g_2 (∏_i=1^6τ_0c_s k_i) ×1^3_12F_2(c_s k_12_12,c_s k_34_12,c_s k_56_12,_56_12) , ⟨φ_k_1…φ_k_6|_⟩(3)' = -/3[line width=1. 
pt, scale=2.5] [black] (0.2, 0) – (0.4, -0.6); [black] (0.6, 0) – (0.4, -0.6); [black] (0.8, 0) – (1, -0.3); [black] (1.2, 0) – (1, -0.3); [black] (1.4, 0) – (1.6, -0.6); [black] (1.8, 0) – (1.6, -0.6); [pyblue] (1, -0.3) – (1, -0.6); [pyblue] (0.4, -0.6) – (1, -0.6); [pyblue] (1, -0.6) – (1.6, -0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1, -0.6) circle (.03cm); [fill=black] (1, -0.3) circle (.03cm); [fill=black] (1.6, -0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (2, 0); at (0.2, 0.15) _1; at (0.55, 0.15) _2; at (0.83, 0.15) _3; at (1.15, 0.15) _4; at (1.43, 0.15) _5; at (1.8, 0.15) _6; =g_1^3 g_3 (∏_i=1^6τ_0c_s k_i) ×1^3_12F_3(c_s k_12_12,c_s k_34_12,c_s k_56_12,_34_12,_56_12) , where _ij = |k_i + k_j|, k_ij = k_i + k_j, and τ_0 marks the end of inflation surface. Having defined these seed functions, we now derive correlators of the Goldstone boson by introducing a set of weight-shifting operators that map diagrams with external fields π_c to those with external fields φ. The reason why this procedure is universal, regardless of the form of the considered interactions, is because the mode functions of both fields can be easily related by simple differential operators. Importantly, as the weight-shifting operators only act on external fields of a specific diagram, they are valid for any bulk theory, i.e. be it the full two-field picture or within the effective single-field theory. To avoid clutter, we leave their precise derivation in Appendix <ref>. Power spectrum correction. The effect of the quadratic mixing π̇_cσ can be obtained from the cubic interaction φ^2σ after taking a suitable soft limit on the external field. From this, one obtains the leading correction to the two-point correlator of π_c as -/2[line width=1. pt, scale=2.5] [red] (0.2, 0) – (0.4, -0.6); [pyred] (1.4, 0) – (1.2, -0.6); [pyblue] (0.4, -0.6) – (1.2, -0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1.2, -0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (1.6, 0); at (0.4, -0.8) π̇_cσ; at (1.2, -0.8) π̇_cσ; at (0.2, 0.15) _1; at (1.4, 0.15) _2; = ρ^22c_s^21k_1^3 F_1(c_s,c_s) . The corresponding weight-shifting operator simply consists in taking the soft limit of two external legs k_2 and k_3 while keeping the diagram connected. Single-exchange diagrams. Both single-exchange diagrams arising from the interactions π̇_c^2σ and (∂_i π_c)^2σ can be obtained from the same seed function F_1 <cit.>. They read -/2[line width=1. pt, scale=2.5] [pyred] (0.2, 0) – (0.4, -0.6); [pyred] (0.6, 0) – (0.4, -0.6); [pyred] (1.4, 0) – (1.2, -0.6); [pyblue] (0.4, -0.6) – (1.2, -0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1.2, -0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (1.6, 0); at (0.4, -0.8) (∂_i π_c)^2σ; at (1.2, -0.8) π̇_cσ; at (0.2, 0.15) _1; at (0.6, 0.15) _2; at (1.4, 0.15) _3; = -ρ2c_s^3HΛ_11(k_1 k_2 k_3)^2_1·_2k_1 k_2 ×(1-k_12∂∂ k_12+k_1 k_2 ∂^2∂ k_12^2) F_1(c_s k_12k_3,c_s) , -/2[line width=1. pt, scale=2.5] [pyred] (0.2, 0) – (0.4, -0.6); [pyred] (0.6, 0) – (0.4, -0.6); [pyred] (1.4, 0) – (1.2, -0.6); [pyblue] (0.4, -0.6) – (1.2, -0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1.2, -0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (1.6, 0); at (0.4, -0.8) π̇_c^2σ; at (1.2, -0.8) π̇_cσ; at (0.2, 0.15) _1; at (0.6, 0.15) _2; at (1.4, 0.15) _3; = -ρ4 c_s^5H/Λ_2k_1 k_2(k_1 k_2 k_3)^2∂^2∂ k_12^2 F_1(c_s k_12k_3,c_s) . Double-exchange diagram. 
The double-exchange diagram is related to the double-exchange six-point function F_2 by the relation -/2[line width=1. pt, scale=2.5] [pyred] (0.2, 0) – (0.4, -0.6); [pyred] (1.8, 0) – (1.6, -0.6); [pyred] (1, 0) – (1, -0.6); [pyblue] (0.4, -0.6) – (1, -0.6); [pyblue] (1, -0.6) – (1.6, -0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1, -0.6) circle (.03cm); [fill=black] (1.6, -0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (2, 0); at (0.4, -0.8) π̇_cσ; at (1.6, -0.8) π̇_cσ; at (1, -0.8) π̇_cσ^2; at (0.2, 0.15) _1; at (1, 0.15) _2; at (1.8, 0.15) _3; = - (λρ^2/c_s^3) 1/(k_1^4 k_2 k_3) F_2(c_s, c_s k_2/k_1, c_s k_3/k_1, k_3/k_1) . Triple-exchange diagram. The triple-exchange diagram is related to the seed function F_3 after taking suitable soft limits. It is given by -/2[line width=1. pt, scale=2.5] [pyred] (0.2, 0) – (0.4, -0.6); [pyred] (1.8, 0) – (1.6, -0.6); [pyred] (1, 0) – (1, -0.3); [pyblue] (1, -0.3) – (1, -0.6); [pyblue] (0.4, -0.6) – (1, -0.6); [pyblue] (1, -0.6) – (1.6, -0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1, -0.6) circle (.03cm); [fill=black] (1.6, -0.6) circle (.03cm); [fill=black] (1, -0.3) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (2, 0); at (0.4, -0.8) π̇_cσ; at (1.6, -0.8) π̇_cσ; at (1, -0.8) σ^3; at (1.2, -0.3) π̇_cσ; at (0.2, 0.15) _1; at (1, 0.15) _2; at (1.8, 0.15) _3; = - (μρ^3/c_s^3) 1/(k_1^4 k_2 k_3) F_3(c_s, c_s k_2/k_1, c_s k_3/k_1, k_2/k_1, k_3/k_1) . The seed function F_1 with c_s=1, corresponding to the de Sitter-invariant four-point function, was first analytically computed in <cit.>, where the solution is given in terms of a power series representation. Its extension to the case of a reduced sound speed was obtained in <cit.>, which allowed one to bootstrap correlators arising from interactions that strongly break boosts. As a result, more general correlators can be obtained after deformation of the four-point function, see <cit.> for an alternative approach. Determining a closed-form solution for F_2, 3 in the full two-field theory requires more sophisticated tools and is beyond the scope of this paper. However, when the heavy field is integrated out, the seed functions F_2, 3 collapse into contact interactions that can be easily computed. §.§.§ EFT Seed Integrals and Correlators The non-local single-field EFT at leading order in the time-derivative expansion (<ref>) has a remarkable property: the two- and three-point correlators can be derived from a single EFT seed integral by applying a set of bespoke weight-shifting operators. Exploiting this feature, in this section, we give closed-form expressions for the power spectrum correction and all three-point correlators. 4pt As illustrated in Fig. <ref>, within the non-local single-field EFT framework, interactions that mix φ and σ become effective contact interactions ℒ_eff/a^3 = - g_1^2φ^2 𝒟^-1φ^2 - g_1^2 g_2φ^2 [𝒟^-1φ^2]^2 - g_1^3 g_3[𝒟^-1φ^2]^3. The seed functions F_1, 2, 3 in Eqs. (<ref>) then take the following simple forms F_1 = c_s k_t_12 I_1(c_s k_t_12mH) , F_2 = c_s k_T _122_56^2 I_2(c_s k_T_12mH,c_s k_T_56mH) , F_3 = 3c_s^3 k_T^3 _122_34^2 _56^2 I_3(c_s k_T_12mH,c_s k_T_34mH,c_s k_T_56mH) , with k_t = ∑_i=1^4 k_i, k_T = ∑_i=1^6 k_i, and where we have defined the EFT seed integrals as _1(x) = ∫_-∞^0 du sin(u)/(u^2 + x^2) , _2(x, y) = ∫_-∞^0 du u^2 sin(u)/[(u^2 + x^2)(u^2 + y^2)] , _3(x, y, z) = ∫_-∞^0 du u^2 sin(u)/[(u^2 + x^2)(u^2 + y^2)(u^2 + z^2)] . All correlators can therefore be derived by acting with weight-shifting operators on these building-block integrals. 
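As a concrete cross-check of these building blocks, the following minimal numerical sketch (not part of the original analysis; function names and sample values are purely illustrative) evaluates _1, _2 and _3 directly from their integral definitions with oscillatory quadrature, and can be compared against the closed form in terms of exponential integrals and the partial-fraction identities quoted in the next paragraph.

```python
# Minimal numerical sketch (illustrative, not from the paper): direct evaluation of
# the EFT seed integrals I_1, I_2, I_3 defined above using QUADPACK's oscillatory
# quadrature. Since the integrands are odd in u, int_{-inf}^0 = -int_0^{+inf}.
import numpy as np
from scipy.integrate import quad
from scipy.special import expi  # principal-value exponential integral Ei

def I1(x):
    val, _ = quad(lambda u: 1.0 / (u**2 + x**2), 0, np.inf, weight='sin', wvar=1.0)
    return -val

def I2(x, y):
    val, _ = quad(lambda u: u**2 / ((u**2 + x**2) * (u**2 + y**2)),
                  0, np.inf, weight='sin', wvar=1.0)
    return -val

def I3(x, y, z):
    val, _ = quad(lambda u: u**2 / ((u**2 + x**2) * (u**2 + y**2) * (u**2 + z**2)),
                  0, np.inf, weight='sin', wvar=1.0)
    return -val

def I1_closed(x):
    # closed form quoted below: [e^x Ei(-x) - e^{-x} Ei(x)] / (2x), for x > 0
    return (np.exp(x) * expi(-x) - np.exp(-x) * expi(x)) / (2.0 * x)

for x in (0.1, 0.5, 2.0, 10.0):
    print(f"x = {x:5.2f}: I1 quadrature = {I1(x):+.6f}, closed form = {I1_closed(x):+.6f}")

# Small-x logarithmic behaviour log(x) + gamma - 1 discussed below:
print(I1(0.1), np.log(0.1) + np.euler_gamma - 1.0)

# Partial-fraction identities quoted below, checked numerically:
x, y = 0.3, 1.2
print(I2(x, y), (x**2 * I1(x) - y**2 * I1(y)) / (x**2 - y**2))
x, y, z = 0.3, 0.8, 1.5
print(I3(x, y, z),
      x**2 * I1(x) / ((x**2 - z**2) * (y**2 - x**2))
      + y**2 * I1(y) / ((y**2 - z**2) * (x**2 - y**2))
      + z**2 * I1(z) / ((z**2 - y**2) * (x**2 - z**2)))
```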
Properties of the EFT seed integrals. Anticipating the following development, it is useful to give some properties of the seed integrals. First, the integral _1 can be readily integrated analytically, giving _1(x) = [e^x Ei(-x) - e^-x Ei(x)]/(2x) , where Ei is the exponential integral. Note that one uses Ei(-x)=1/2[Ei(-x+iϵ)+Ei(-x-iϵ)] due to the branch cut on the negative real axis. Remarkably, the integrals _2 and _3 can be written in terms of _1, which therefore constitutes the building block of all correlators. Indeed, decomposing the rational functions of u^2 in the integrands as sums of simple poles, one obtains _2(x, y) = [x^2 _1(x) - y^2 _1(y)]/(x^2 - y^2) , _3(x, y, z) = x^2 _1(x)/[(x^2 - z^2)(y^2 - x^2)] + y^2 _1(y)/[(y^2 - z^2)(x^2 - y^2)] + z^2 _1(z)/[(z^2 - y^2)(x^2 - z^2)] . It is straightforward to see that the apparent singularities when some arguments coincide are artificial, and to deduce the expressions _2(x, x) = _1(x) + (x/2) ∂_1/∂ x(x) , _3(x, x, x) = -3/(8x) ∂_1/∂ x(x) - (1/8) ∂^2 _1/∂ x^2(x) , _3(x, x, y) = x/2(x^2 - y^2)∂_1/∂ x(x) + y^2 [_1(x) - _1(y)]/(x^2 - y^2)^2 . These forms of the integrals, all related to _1 as we have seen, appear in the correlators of the Goldstone boson. 4pt We have explained in Sec. <ref> that the differential operator D^-1 should be considered as a building block on its own, and that any expansion either has pitfalls (in the large mass regime) or is doomed to fail (in the low-speed collider regime). Let us now explain how these statements translate into properties of the seed integral _1, respectively for x ≫ 1 and x ≪ 1 (see also Sec. 6.2 of <cit.>). As the rapid oscillations of the sin(u) term in the integrand of (<ref>) imply that most of the contribution to the integral comes from |u| ≲ 1, it is natural, for x ≫ 1, to simply Taylor expand the denominator (u^2+x^2)^-1 for large x. After proper Wick rotations of the different frequencies in sin(u), one then obtains the asymptotic expansion _1=-∑_n=0^∞(2n)!/x^2(n+1), corresponding to the local EFT expansion in the large mass regime. Naturally, what this computation misses is the contribution from |u| ≥ x, and hence the poles of the Wick-rotated integrand at the Euclidean times u=± i x. As for x ≪ 1, the naive expansion of the denominator (u^2+x^2)^-1=∑_n=0^∞ (-1)^n x^2n/u^2(n+1), valid for x < |u|, cannot work simply because the integrand gets most of its contribution precisely from |u| ∼ x.[Note that the corresponding integrals, divergent near u=0, could be regularized by an IR cutoff, but this would not change the fact that the corresponding expansion is anyway not valid for |u|≤ x, and hence cannot provide a reliable estimate of _1.] In the end, only the full function can reproduce the logarithmic behaviour _1=log(x)+γ-1+… for small x. Correlators. Using Eqs. (<ref>) as the corresponding forms of the seed functions within the single-field EFT and using the weight-shifting operators defined in <ref>, we now explicitly give analytical expressions of the two- and three-point correlators generated by (<ref>). The curvature perturbation dimensionless power spectrum is given by Δ_ζ^2 = Δ_ζ, 0^2[1 + 2c_s^2 (ρ/H)^2 _1(2c_sm/H) ] , where Δ_ζ, 0^2 is defined in Eq. (<ref>). The three-point correlator of the Goldstone boson associated with the interaction (∂̃_i π_c)^2𝒟^-1π̇_c is ⟨π_c, k_1π_c, k_2π_c, k_3|'⟩/H^3 = 1/2c_s^6ρ/Λ_1k_1 k_2/(k_1k_2k_3)^3k_1·k_2/K ×[1 + (K^2/k_1k_2 - K(k_1+k_2)/k_1k_2 + c_sm/HK/k_3)ℐ_1(c_sm/HK/k_3) . 
.- c_sm/HK/k_3K(k_1+k_2)/k_1k_2ℐ_1'(c_sm/HK/k_3)] + 2 perms , where we have defined K = k_1 + k_2 + k_3, and that associated with π̇_c^2 𝒟^-1π̇_c reads ⟨π_c, k_1π_c, k_2π_c, k_3|'⟩/H^3 = 1/4c_s^4ρ/Λ_21/(k_1 k_2 k_3)^2k_1 k_2/K k_3[1 + (c_s m/HK/k_3)^2 ℐ_1(c_s m/HK/k_3)] + 2 perms . The expressions (<ref>)-(<ref>) agree with the ones in <cit.>. The double-exchange correlator of Goldstone bosons is ⟨π_c, k_1π_c, k_2π_c, k_3|'⟩/H^3 = -λ/2(ρ/H)^2 k_3^2K/c_s^2(k_1 k_2 k_3)^3 ℐ_2(c_sm/HK/k_1, c_sm/HK/k_2) + 2 perms , while the triple-exchange correlator of Goldstone bosons is ⟨π_c, k_1π_c, k_2π_c, k_3|'⟩/H^3 = -3/2μ/H(ρ/H)^3 K^3/(k_1 k_2 k_3)^3 ℐ_3(c_sm/HK/k_1, c_sm/HK/k_2, c_sm/HK/k_3) . With these expressions at hand, using ζ = -H c_s^3/2 f_π^-2π_c, one can obtain the corresponding correlators of ζ. Non-locality. Before moving on, let us make some comments on (non)-locality. Loosely speaking, locality is related to the effects of a field decreasing with distance. For instance, in the non-relativistic limit, a massive scalar field with mass m generates the well-known Yukawa potential V(r) = ^-m r/4π r when sourced by a point-like charge. This action at distance is non-local, but its effect is confined in a region r≲ 1/m, i.e. it is a mild form of non-locality. In contrast, an inverse gradient term of the form m^2/-∂̃_i^2∼ m^2 r^2 becomes increasingly important for r≳ 1/m and represents a severe violation of locality. Our single-field effective theory is non-local, as can be seen from the presence of the non-local differential operator 𝒟^-1 = (-∂̃_i^2 + m^2)^-1 in cubic and quadratic interactions. The action in real space indeed contains terms evaluated at two distinct positions, i.e. S ∼∫dt ∫d^3x 𝒟^-1 J(x) = - ∫dt ∫d^3x ∫d^3y Π(x, y) J(y) where Π(x, y) = e^-m |x - y|/4π |x - y| and J(x) is a general source term. However, being of Yukawa-type, we see that this represents a mild form of non-locality. 4pt Recently, the Manifestly Local Test (MLT) was proposed as a check of correlators arising from a manifestly-local theory <cit.>. It is stated at the level of wavefunction coefficients <cit.>, and reads in our case .∂/∂ k_iψ_3(k_1, k_2, k_3) |_k_i = 0 = 0 . Its derivation is based on a simple property satisfied by the mode function of a massless field in de Sitter. We have explicitly checked that our correlators satisfy the MLT for Re ψ_3, and extends to the full wavefunction coefficient ψ_3 by inspecting the MLT derivation from a bulk perspective. The reason is that the non-local differential operator 𝒟^-1 appearing in the diagram vertices cannot be singular as the momentum is taken to be soft due to the non-zero mass, which is naturally related to the mild non-locality that we have just discussed. To put it short, a manifestly local theory satisfies the MLT, but satisfying the MLT does not offer a diagnosis of the underlying theory being manifestly local. §.§ Strong Mixing In the strong mixing regime, the quadratic theory cannot be solved analytically. Therefore, it is necessary to numerically compute the correlators. Recently, the cosmological flow approach has been developed to systematically compute the two- and three-point correlators in any theory of inflationary fluctuations <cit.>. Here, we briefly summarise this method. Flow equations. The cosmological flow is based on solving differential equations in time satisfied by the correlators, and is completely equivalent to the in-in formalism <cit.>. 
Essentially, this emerges from: i) the evolution equation d⟨𝒪⟩/dt = i ⟨ [H,𝒪] ⟩ for expectation values of operators 𝒪 without explicit time-dependence (Ehrenfest theorem), and ii) the fact that the commutator between 𝒪 and the Hamiltonian H can be systematically computed in perturbation theory using the canonical commutation relations and the fact that H takes a polynomial form in fields and momenta. Working at tree-level, one deduces that the time evolution of two- and three-point correlators in any theory takes the form d/dt ⟨X^a X^b⟩ = u^a_c⟨X^cX^b⟩ + u^b_c⟨X^aX^c⟩ , d/dt ⟨X^aX^b X^c⟩ = u^a_d⟨X^dX^bX^c⟩ + u^a_de⟨X^bX^d⟩⟨X^cX^e⟩ + (2 perms) , where X^a encompasses all phase-space variables in Fourier space (the index a thus refers to a Fourier wavevector as well as to a field or momentum, (π_c, σ, p_π, p_σ) in our situation of interest), and the entire theory dependence is encoded in the precise form of the coefficients u^a_b and u^a_bc, which can be systematically deduced from the action. 4pt Solving Eq. (<ref>) is completely equivalent to fully solving the quadratic theory. De facto, solving these equations, which couple all two-point correlators of fields and momenta, is equivalent to dressing the two-point correlators with an infinite number of quadratic mixing insertions—hence performing a resummation—when using an interaction scheme where quadratic mixings are treated perturbatively. Similarly, Eq. (<ref>) couples all types of three-point correlators. In contrast, it is noteworthy that each kinematic configuration follows its flow independently of the others: Eqs. (<ref>) and (<ref>) can be conveniently solved triangle by triangle. Eventually, considering the Bunch-Davies vacuum, the coefficients u^a_b and u^a_bc fully determine the initial conditions for the correlators ⟨X^aX^b⟩ and ⟨X^aX^bX^c⟩. § PHENOMENOLOGY The presence of a reduced sound speed for the propagation of the Goldstone boson leads to a striking signature of heavy fields—referred to as the low-speed collider signal—that is captured by the single-field non-local EFT (<ref>). For heavy particles with masses below H/c_s, this signal manifests itself as a resonance in the squeezed limit of the bispectrum. In this section, we discuss in detail both the size and the shape dependence of this signal, in the weak and the strong mixing regimes. §.§ The Physics of the Low-Speed Collider Before diving into the phenomenology, it is worth highlighting that the low-speed collider signal has a simple physical explanation, related to the presence of two characteristic time scales in the dynamics, see <cit.> for more details. Because modes of different wavelength freeze or decay at different times, these time scales are encoded in the momentum dependence of primordial correlators. Indeed, due to a reduced sound speed c_s, a given short Goldstone boson mode freezes when it exits the sound horizon c_s k_S/a(t_1) ∼ H. Likewise, a long mode of the massive field starts to decay when it crosses the mass-shell horizon k_L/a(t_2) ∼ m. When m≪ H/c_s, the time t_1 occurs before t_2 and the Goldstone boson interacts with the massive field during the period of time t_1<t<t_2 as if the latter were massless. This effect leads to the amplification of the bispectrum and mimics local non-Gaussianities close to equilateral configurations for c_s m/H≲ k_L/k_S, equivalent to the condition t_1≲ t_2. 
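To make these two time scales concrete, here is a small illustrative sketch (not from the paper; all parameter values are arbitrary). In de Sitter, a(τ) = -1/(Hτ), so sound-horizon crossing of the short mode and mass-shell crossing of the long mode occur at |τ_1| = 1/(c_s k_S) and |τ_2| = (m/H)/k_L respectively, and comparing them reproduces the condition just stated.

```python
# Illustrative only (not from the paper): compare the conformal times of
# sound-horizon crossing of the short pi_c mode (c_s k_S = a H) and of
# mass-shell crossing of the long sigma mode (k_L = a m) in de Sitter,
# with a(tau) = -1/(H tau). Earlier times correspond to larger |tau|.
c_s, m_over_H, k_S = 0.1, 2.0, 1.0      # sample values with m < H/c_s

for ratio in (0.8, 0.3, 0.05):          # ratio = k_L/k_S
    k_L = ratio * k_S
    tau_1 = 1.0 / (c_s * k_S)           # |tau| at sound-horizon crossing of k_S
    tau_2 = m_over_H / k_L              # |tau| at mass-shell crossing of k_L
    if tau_1 > tau_2:
        label = "t_1 before t_2: sigma still effectively massless (local-like amplification)"
    else:
        label = "t_2 before t_1: sigma already decaying (bispectrum fades)"
    print(f"k_L/k_S = {ratio:5.2f}: |tau_1| = {tau_1:5.1f}, |tau_2| = {tau_2:5.1f} -> {label}")

print("resonance expected near k_L/k_S ~ c_s m/H =", c_s * m_over_H)
```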
In the ultra-squeezed limit k_L/k_S≲ c_s m/H, equivalent to t_2≲ t_1, the bispectrum fades away due to the decay of the massive field, which is then encoded in the conventional cosmological collider signal. When k_L/k_S∼ c_s m/H, equivalent to both time scales coinciding t_1∼ t_2, the bispectrum displays a characteristic resonance. Instead, for m ≫ H/c_s, the time t_1 occurs after t_2 for all kinematic configurations, and the situation qualitatively resembles the one with c_s=1. Namely, the heavy field can be integrated out in the usual local manner, corresponding to the regime “Reduced effective speed of sound" in Sec. <ref>, and the bispectrum shapes generated by all interactions are of the conventional equilateral type. Consequently, the low-speed collider phenomenology is governed by the dimensionless parameter α≡ c_s m/H , with α≳ 1 corresponding to usual equilateral shapes, and α≲ 1 to the genuine low-speed collider regime, on which we focus from now on. 4pt The physics of the low-speed collider explained above is valid in the weak mixing regime, i.e. when the quadratic mixing ρπ̇_c σ can be treated perturbatively. This corresponds to ρ≲ m, as we show in the insert below. A natural question is how this description is changed at strong mixing. We show in the insert that the effect of the mixing beyond perturbative treatments is to induce an effective mass m^2→ m_eff^2 = m^2 +ρ^2 for the heavy mode, see also <cit.> for c_s=1. Moreover, we show that the mixing, despite having this important effect for the dynamics of the heavy mode, has a small effect on the dynamics of the Goldstone boson inside the horizon as long as m_eff≲ H/c_s, corresponding to the parameter α_eff≡ c_s m_eff/H , being less than unity. This entails that the physics of the low-speed collider valid at weak mixing ρ≲ m, also holds qualitatively in the regime m ≲ρ≲ H/c_s, simply upon considering the effective mass m_eff, i.e  with α→α_eff. If the mixing is further increased to ρ/H ≳ 1/c_s, corresponding to α_eff≳ 1, the dynamics of the system enters the regime where the Goldstone boson has a modified dispersion relation, see Sec. <ref> and the conditions (<ref>),[Note that when m/H≲ 1/c_s and 1/c_s≲ρ/H, the condition m^2/ρ≲ H/c_s is automatically satisfied.] giving rise to standard equilateral-type shapes. In Fig. <ref>, we summarise the low-speed collider phenomenology at weak and strong mixing. 4pt We will see that the strong mixing regime of the low-speed collider, i.e. the effective mass regime depicted in green in Fig. <ref>, is particularly interesting. Indeed, the low-speed collider signal can be observationally large and is not dwarfed by the ever-present equilateral-type non-Gaussianities coming from self-interactions of the Goldstone boson. With this in mind, one aspect is worth highlighting: fluctuations in this regime can be strongly mixed without being strongly coupled. Strong mixing refers to the fact that the quadratic mixing ρπ̇_c σ cannot be treated in perturbation theory. At the same time, because of the non-linearly realized time diffeomorphism invariance, this interaction is necessarily accompanied by the dimension-5 cubic interaction ∝ρ (∂̃_i π)^2 σ. Hence, one can wonder whether large values of ρ in this regime are such that this interaction becomes strongly coupled at energies of order the Hubble scale, making the theory strongly coupled. This is not the case. 
As the arguments above show, all reasonings in the weak mixing regime approximately hold also in the effective mass regime upon considering the effective mass m_eff.[Note that the bound on ρ in Eq. (<ref>), coming from being able to treat the quadratic mixing perturbatively, is ineffective by definition in the strong mixing regime.] Thus, see the insert in Sec. <ref>, the bound on ρ coming from avoiding strong coupling reads ρ/H ≲ c_s^3/2(2 πΔ_ζ)^-1∼ 100 for the representative value c_s ∼ 0.1, showing that the theory remains healthy in the strong mixing regime of the low-speed collider ρ/H ≲ 1/c_s. Dispersion relation analysis.—In this insert, we examine the full two-field quadratic dynamics. First, we give the corresponding dispersion relations and we show that ρ≲ m is required in order to treat the quadratic mixing perturbatively. Then, we explain that in the parameter space of interest c_s m/H ≲ 1 for the low-speed collider, m_eff plays the role of the mass of the heavy mode. Furthermore, going beyond weak mixing, we show that α_eff≡ c_s m_eff/H ≲ 1 is the correct non-perturbative criterion for the existence of the low-speed collider, giving rise to the “effective mass" regime of Fig. <ref>. Remarkably, this regime is amenable to an approximate analytical understanding. 4pt Taking the flat-space limit H→ 0 of the full theory (<ref>), the quadratic dynamics can be solved exactly after injecting plane-wave solutions. The dispersion relations for both degrees of freedom are given by ω_±^2 = 1+c_s^2/2 ^2 + m_eff^2/2±1/2√(m_eff^4 + (1-c_s^2)^2^4 + 2(1+c_s^2)^2[ρ^2 + 1-c_s^2/1+c_s^2 m^2]) , where m_eff^2 = m^2+ρ^2 is simply a short-hand notation at this stage. We will look below at physically interesting limits in which this exact solution simplifies, but first, as a sanity check, we note that the uncoupled limit ρ/H=0 gives ω_-^2 = c_s^2 ^2 and ω_+^2 = ^2 + m^2. We recover that the spectrum is then composed of a massless mode propagating with a speed of sound c_s and a relativistic heavy mode with mass m. 4pt We now explain the criterion ρ≲ m as the perturbativity bound on the quadratic coupling, directly at the level of the non-local EFT (<ref>). To do this, we need to estimate the propagator 𝒟^-1=(^2+m^2)^-1 at the relevant time scales of the dynamics, which, as we have stressed in Sec. <ref>, includes sound-horizon crossing but also mass-shell crossing and beyond. At the sound-horizon crossing of the massless mode ∼ H/c_s and in the parameter space of interest with c_s m/H≲ 1, the propagator scales as 𝒟^-1≈ c_s^2/H^2. Therefore, the correction to the quadratic Lagrangian scales as c_s^2 (ρ/H)^2≲ c_s^2 (m/H)^2≲ 1 in the would-be weak mixing regime, and is indeed negligible. At and after the mass-shell crossing of the heavy mode ≲ m, it scales as 𝒟^-1≈ m^-2, giving a correction of order (ρ/m)^2, which is consistently negligible as long as ρ≲ m. From the exact dispersion relations (<ref>), it is then interesting to consider the first correction to the decoupled dynamics of the heavy mode in this weak mixing regime ω_+^2 ≈^2 + m^2 + ρ^2 ^2+m^2/(1-c_s^2)^2 + m^2 . One can appreciate that for a low sound speed c_s≪ 1, the last term in the heavy mode dispersion relation ω_+^2 is given by ρ^2, and therefore combines with the mass parameter to generate the combination m_eff^2, which indeed appears as the physical mass of the heavy excitations. This effect is small by definition in the weak mixing regime. However, remarkably, this picture holds beyond weak mixing for α≡ c_s m/H≲ 1. 
Indeed, in the regime of small α, the exact dispersion relations boil down to ω_-^2 ≈ c_s^2 ^2 ^2 + m^2/^2 + m_eff^2 ,andω_+^2 ≈^2 + m_eff^2 + c_s^2ρ^2 ^2/^2 + m_eff^2 . These relations are non-perturbative in the quadratic mixing as they do not assume weak mixing. Notice that the last term, c_s^2ρ^2 ^2/^2 + m_eff^2≲ c_s^2ρ^2, is negligible compared to the term ρ^2 appearing in m_eff^2. Therefore, in the regime α≲ 1, the heavy mode dynamics is governed by ω_+^2 = ^2 + m_eff^2. This explains why the low-speed collider resonance occurs also at strong mixing as long as α_eff≡ c_s m_eff/H ≲ 1. Indeed, in this regime, the mixing between the massless and the heavy mode is important enough to generate an effective mass different from the bare one, but it does not appreciably affect the dynamics of the Goldstone boson, and the heavy mode can still be considered as effectively massless at sound horizon crossing for π_c. To see this, let us recall that the massless mode dispersion relation is only valid for ω_- ≳ H as Hubble friction becomes important at smaller energies. The requirement that the dispersion relation is not affected by the mixing even at this energy, corresponding to momenta ∼ H/c_s (the two modes obviously decouple in the large limit), is equivalent to 1+α^2/1+α_eff^2≃ 1, which is satisfied for α_eff≪ 1. This condition also guarantees that the effective mass in ω_+^2 = ^2 + m_eff^2 is negligible for ∼ H/c_s, giving rise to the peculiar local-like behaviour of the bispectrum for near equilateral configurations. Furthermore, one can check that in this effective mass regime, the corrections to the dispersion relation of the Goldstone mode around sound horizon crossing, ω_-^2/(c_s^2 ^2)-1≃ -c_s^2 ρ^2/H^2, takes exactly the same form as in the weak mixing regime. Eventually, as m_eff also controls the dynamics at late times, i.e the quasi-normal modes of the coupled system (a statement that does not depend on sound speeds), the decay of the heavy mode starts at effective mass crossing ∼ m_eff, hence giving rise to the low-speed collider resonance at the location k_L/k_S∼ c_s m_eff/H. This analysis suggests that, to a good approximation, the analytical results of the low-speed collider in the weak mixing regime can be transposed to the effective mass regime m/H ≲ρ/H ≲ 1/c_s, simply upon considering the “dressed” effective mass m^2→ m_eff^2 = m^2 +ρ^2, with any small discrepancy attributed to the intermediate regime m_eff≲≲ H/c_s. 4pt In this work, we are interested in the low-speed collider signatures of effectively heavy fields, corresponding to m_eff≳ H, a situation that is amenable to a description in terms of a single-field (non-local) description. But interesting signatures also arise in the complementary regime of a light field weakly mixed to the Goldstone boson. In this regime, notice that remarkably, the bootstrap analytical formulae for the single-exchange diagrams derived in <cit.> are also valid after proper substitution μ→ -i ν. The low-speed collider signal still manifests itself as a resonance in mildly-squeezed configurations, with the usual quasi-single field scalings taking over in the ultra-squeezed configuration, and with the resonance ultimately fading away as the field becomes massless. We leave the detailed analysis of this region of parameter space for an upcoming work. §.§ Size of non-Gaussianities Let us estimate the size of the bispectrum generated by the non-linear interactions in (<ref>). 
In the weak mixing regime, one can easily deduce the size of the bispectrum from the analytical results presented in the previous section. However, here, we will present a systematic procedure to quickly estimate the size of non-Gaussianities based on dimensional analysis, which is also valid at strong mixing. Following standard conventions, we first define the momentum dependence of the three-point correlator of ζ through the dimensionless shape function S such that ⟨ζ_k_1ζ_k_2ζ_k_3|'⟩≡ (2π)^4 S(k_1, k_2, k_3)/(k_1 k_2 k_3)^2 Δ_ζ^4 . We then use the standard measure of the size of the non-Gaussian signal by the following non-linearity parameter f_NL≡10/9 S(k, k, k) , with the shape evaluated in the equilateral configuration k_1=k_2=k_3=k. Using the single-field non-local EFT (<ref>), the size of the bispectrum can be easily recovered using purely dimensional analysis arguments because the theory contains only one dynamical degree of freedom. An estimate of the size of the bispectrum is then given by f_NL∼ℒ_3/ℒ_2 Δ_ζ^-1, where mass-dimension quantities in the Lagrangian are evaluated at the energy scale probed during inflation, i.e. ω∼ H. This method provides a fast estimate without setting up the full calculation, and ultimately gives a more intuitive physical explanation of the various regimes covered by the full theory (<ref>).[Note that in the low-speed collider regime, this method cannot reproduce the peculiar logarithmic dependence of the bispectrum on α, precisely as the latter comes from the super-Hubble regime ω≲ H, see the discussions in Sec. <ref> and <ref>.] In practice, after inspecting the Lagrangian (<ref>), we need to estimate the non-local differential operator 𝒟^-1 and the typical size of the fluctuations π_c/H in the various regimes of interest. 4pt In the weak mixing regime, and as we have seen in Sec. <ref>, also approximately in the effective mass regime, the Goldstone boson freezes while having a linear dispersion relation ω = c_s, with =k/a being the physical momentum. Evaluating at ω∼ H, the gradient term in the propagator 𝒟^-1 dominates over the mass term for the parameter space of interest m≲ H/c_s. In the end, we obtain 𝒟^-1∼ (c_s/H)^2. Additionally, one deduces from the action (<ref>) that π_c/H∼ c_s^-3/2, in agreement with the power spectrum (<ref>). In this regime, the quadratic part of the Lagrangian is dominated by the first term in (<ref>), i.e. ℒ_2∼π̇_c^2∼ H^2 π_c^2. 4pt Deep in the strong mixing regime ρ/H ≳ 1/c_s, the Goldstone boson freezes while having a modified dispersion relation ω=c_s ^2/ρ. Evaluating the gradient term in the propagator—which also dominates over the mass term—at ω∼ H leads to 𝒟^-1∼ c_s/(ρ H). From the action (<ref>), one deduces the typical size of the fluctuations π_c/H∼ c_s^-5/4 (ρ/H)^1/4, in agreement with (<ref>). Note that this respects continuity between the weak and strong mixing regimes at the threshold value ρ/H∼ 1/c_s. Naturally in this regime, the quadratic part of the Lagrangian is given by ℒ_2 ∼ρ^2 π̇_c𝒟^-1π̇_c. We now have the necessary ingredients to estimate the size of non-Gaussianities for all interactions in (<ref>). 4pt Before giving the results, let us stress that we include for completeness the modified dispersion relation regime, but that shapes there are of conventional equilateral type and do not exhibit the characteristic resonances of the low-speed collider. 
The latter arise both for ρ/H ≲ m/H (the weak mixing regime) and for m/H ≲ρ/H ≲ 1/c_s (the effective mass regime), which make up the low-speed collider regime, see Fig. <ref>. Remarkably, this regime can be described in a unified manner. * Let us first consider the interaction (∂̃_i π_c)^2σ, suppressed by the scale Λ_1 ∼ c_s^-3/2f_π^2/ρ. Using the estimates previously derived, the size of the bispectrum is given by f_NL∼ρ/Λ_1(∂̃_i π_c)^2𝒟^-1π̇_c/ℒ_2 Δ_ζ∼(ρ/H)^2 (low-speed collider) , c_s^-1ρ/H (modified dispersion relation) . In the low-speed collider regime ρ/H ≲ 1/c_s, the signal can be as large as 1/c_s^2, parametrically of the same size as the signal from self-interactions, see Sec. <ref>. Note that the signal grows with the strength of the quadratic mixing in the strong mixing regime, in which the perturbativity bound on the quadratic coupling given in Sec. <ref> yields f_NL< Δ_ζ^-1. * We now consider the interaction π̇_c^2σ, whose coupling constant is not fixed by the quadratic mixing ρ. The size of the corresponding bispectrum is given by f_NL∼ρ/Λ_2π̇_c^2𝒟^-1π̇_c/ℒ_2 Δ_ζ∼ c_s^1/2Δ_ζ^-1ρ/Λ_2 (low-speed collider) , c_s^-5/4Δ_ζ^-1(H/Λ_2)(ρ/H)^-3/4 (modified dispersion relation) . 37pt The perturbativity bounds on the couplings give f_NL < c_s^3/2 Δ_ζ^-1 ρ/H < c_s^1/2 Δ_ζ^-1 in the low-speed collider regime, where the first inequality comes from the bound on Λ_2 and the second one is obtained by pushing ρ/H to 1/c_s, and f_NL < Δ_ζ^-1 at strong mixing. As opposed to the previous interaction, in the strong mixing regime, f_NL decreases when the quadratic mixing increases. As we will see now, this is a common feature of all other cubic interactions. * Next, we consider the interaction π̇_cσ^2, governed by the dimensionless coupling λ. The non-Gaussian signal is f_NL∼λρ^2 π̇_c[𝒟^-1π̇_c]^2/ℒ_2 Δ_ζ∼ c_s^5/2 Δ_ζ^-1λ(ρ/H)^2 (low-speed collider) , c_s^-1/4 Δ_ζ^-1λ (ρ/H)^-3/4 (modified dispersion relation) . 37pt Considering perturbativity bounds on the associated couplings yields f_NL<c_s^3 Δ_ζ^-1(ρ/H)^2< c_s Δ_ζ^-1 in the low-speed collider regime, and f_NL < Δ_ζ^-1 at strong mixing. * Finally, we consider non-linearities in the σ-sector which are fully characterised by the interaction σ^3. The amplitude of the non-Gaussian signal is f_NL∼μρ^3 [𝒟^-1π̇_c]^3/ℒ_2 Δ_ζ∼ c_s^9/2 Δ_ζ^-1 (μ/H)(ρ/H)^3 (low-speed collider) , c_s^3/4 Δ_ζ^-1 (μ/H) (ρ/H)^-3/4 (modified dispersion relation) , 37pt which gives f_NL<c_s^9/2Δ_ζ^-1 (ρ/H)^3< c_s^3/2Δ_ζ^-1 in the low-speed collider regime and f_NL<Δ_ζ^-1 at strong mixing. In the modified dispersion regime and for the interactions in (∂̃_i π_c)^2σ and σ^3, the defining property of the perturbativity bounds, f_NLΔ_ζ∼ L_3/ L_2 < 1, makes it automatic that saturating them gives f_NL∼Δ_ζ^-1. However, this was not guaranteed for the other interactions π̇_c^2σ and π̇_cσ^2, and it deserves an explanation. For definiteness, let us consider the first one, stemming from M̃_3^3 (δ g^00)^2 σ in the unitary gauge mixing action (<ref>). The Stückelberg transformation generates other interactions beyond the one in π̇_c^2σ, and perturbative unitarity requires all of them to be suppressed by a scale greater than Hubble. However, these other interactions are necessarily suppressed by an energy scale larger than the one suppressing π̇_c^2σ once the theoretical upper bound on ρ in Eq. (<ref>) is considered. 
Indeed, the latter ultimately comes precisely from ensuring that the quadratic terms are negligible for ω∼ H compared to the linear one in the combination δ g^00→ -2 π̇ - π̇^2 + (∂_i π)^2/a^2, as the linear one generates the dominant term ρπ̇_c σ in the quadratic action, and the one in (∂_i π)^2 generates the corresponding cubic interaction (∂̃_i π_c)^2σ (the other interaction being negligible in the modified dispersion relation regime). Therefore, among all the requirements ( L_n/ L_2)_ω=H <1 for the interactions generated by the Stückelberg transformation, the one n=3 is the most stringent and provides the perturbativity bound, whose saturation then leads to f_NL∼Δ_ζ^-1. 4pt In the low-speed collider regime, notice instead that the maximal sizes of f_NL compatible with perturbative unitarity are suppressed by powers of c_s compared to Δ_ζ^-1. This comes from the fact that the upper bound on ρ set by the existence of the low-speed collider regime is more stringent than the corresponding perturbativity bound, or equivalently, that fluctuations in the strong mixing regime of the low-speed collider are automatically weakly coupled. Nonetheless, we stress that our results show that non-Gaussianities in this regime can be observationally large. Additionally, in general, requiring the theory to be technically natural, i.e. stable under radiative corrections, significantly lowers non-Gaussian signals (see e.g. <cit.> for more details). To produce large non-Gaussianities, i.e. f_NL>1, this theory requires new physics or fine-tuning, except for the interaction σ^3. However, as we will see in the next section, this interaction does not generate a visible low-speed collider resonance. Eventually, one should keep in mind that the scalings above provide estimates for the size of non-Gaussianities in the equilateral configuration, and do not take into account the enhancement of the signal at the low-speed collider resonance peak, which can all the more increase the size of the overall signal. §.§ Shapes of the Low-Speed Collider In this section, we describe the low-speed collider shapes, as defined in Eq. (<ref>). Without loss of generality, we order the momenta such that k_3≤ k_2 ≤ k_1 and fix k_1 by virtue of scale invariance. Therefore, the shape information only depends on the two dimensionless ratios k_2/k_1 and k_3/k_1 that satisfy 0≤ k_3/k_1 ≤ k_2/k_1 ≤ 1 and k_1 ≤ k_2 + k_3. The second condition is the triangle inequality, enforcing the three momenta to close and form a triangle. We show the shape functions, using this parametrization, in Fig. <ref> and <ref> for the four interactions (∂_i π_c)^2σ, π̇_c^2σ, π̇_cσ^2 ad σ^3, both at weak and strong mixing, for α_eff = 0.1 and α_eff = 0.05, respectively. The shapes are normalised to unity in the equilateral configuration to ease comparison with the standard equilateral template, represented in gray. 4pt As we understood previously, the striking feature of the low-speed collider is the presence of a resonance in the squeezed limit. This resonance is readily visible in the shape generated by the interaction (∂_i π_c)^2σ. At weak mixing, the location and the amplitude of the peak are governed by α. For smaller α, the resonance is visible in more squeezed configurations and has a larger amplitude, proportional to α^-1. The shape generated by the interaction π̇_c^2 σ—whose amplitude is not fixed by the quadratic coupling—presents a more complex resonance characterised by a double-peak structure, i.e. a first small peak and a second one more noticeable. 
The shape generated by the interaction π̇_cσ^2—which leads to double-exchange diagrams at weak mixing—have a resonance less pronounced than those generated by (∂_i π_c)^2σ and π̇_c^2 σ. As for the interaction σ^3, it does not lead to resonances, which can be traced back to the presence of additional propagators in the corresponding interactions once the heavy field σ has been non-locally integrated out. This is peculiar though to the situation considered here for simplicity of one additional heavy field, and it is straightforward to generalize our results to more generic setups.[More specifically, the natural generalisation of (<ref>) is obtained by introducing the operators ∑_a ρ_a σ_a π̇_c, ∑_a (Λ_1,a)^-1 σ_a (∂̃_i π_c)^2, ∑_a (Λ_2,a)^-1σ_aπ̇_c^2, ∑_a bλ_a bσ_a σ_bπ̇_c and ∑_a,b,cμ_abcσ_aσ_bσ_c, with σ_a=1,…, N standing for N heavy scalars with masses m_a, and ρ_a, (Λ_1,a)^-1, (Λ_2,a)^-1, λ_a b, μ_abc for flavour-indexed coupling constants. At weak mixing, these interactions induce single-, double- and triple-exchange diagrams, incorporating internal lines with different masses for the last two ones. The computation is then identical to the case with only one heavy field, mutatis mutandis. In fact, the expressions for the correlators become identical to Eqs. (<ref>)-(<ref>), after a trivial adjustment for the coupling constants and accounting for the mass of each propagator inside the arguments of I_1,2,3 in Eq. (<ref>).] An interesting case then is the shape generated by the triple-exchange diagram of several heavy fields having different masses, say m_1, m_2 and m_3. When m_2/H, m_3/H≥ 1/c_s, the corresponding fields can be integrated out in a local manner, and the shape therefore resembles the one generated by π̇_c^2σ, with an amplitude suppressed by (H/m_2)^2 (H/m_3)^2. In this case, the interaction only consists of the propagator associated with the field of mass m_1 and the shape displays a resonance around k_3/k_1∼ c_s m_1/H. Similarly, if only one field is heavier than 1/c_s, the resonance would be sensitive to the lightest field. In essence, the low-speed collider resonance is a natural discovery channel to detect heavy fields with masses not far from the Hubble scale when the Goldstone boson has a reduced speed of sound, in analogy with resonance peaks in ground-based colliders proving the existence of new particles. 4pt At strong mixing, in the “effective mass regime" depicted in Fig. <ref>, the low-speed collider physics leads to a similar phenomenology of shapes, though the resonances are more pronounced. We have understood that this regime can be qualitatively described by the weak mixing shapes after replacing α by α_eff. Here, the exact computation using the cosmological flow approach gives a quantitative measure of the quality of this approximation. Indeed, if this substitution was exactly accurate, the shapes at weak and strong mixing in Fig. <ref> and <ref> would coincide. The slight mismatch signals that non-trivial physics occurs between sound-horizon crossing of π_c and mass-shell crossing of σ in the strong mixing regime that cannot be inferred from the dispersion relation analysis in Sec. <ref>. As such, only an exact numerical calculation can in turn be decisive. Remarkably, substituting α→α_eff in the analytical shape generated by π̇_c^2σ at weak mixing gives an excellent match, and the “effective mass regime" still provides an accurate description of the low-speed collider physics at strong mixing. 
In this respect, note that we have checked that this also applies to the amplitude of the signal, and not only to the shape information as shown in the figures.

In the following, we give a quantitative analysis of the low-speed collider shapes by inspecting their overlap with the standard templates.

§.§ Shape Correlations

To further characterise the low-speed collider shapes, we now compute their correlations with the standard local <cit.>, equilateral <cit.> and orthogonal shapes <cit.> used in data analysis, that are defined by S^loc(k_1, k_2, k_3) = 1/3 (k_1^2/(k_2 k_3) + 2 perms), S^eq(k_1, k_2, k_3) = (k_1/k_2 + 5 perms) - (k_1^2/(k_2 k_3) + 2 perms) - 2, S^orth(k_1, k_2, k_3) = 3 S^eq(k_1, k_2, k_3) - 2. Given two shapes S^a and S^b, we define their correlation by <cit.> 𝒞(S^a, S^b) = ⟨ S^a, S^b⟩ / √(⟨ S^a, S^a⟩⟨ S^b, S^b⟩), where the inner product is ⟨ S^a, S^b⟩ = ∫_0^1 dx_2 ∫_1-x_2^1 dx_3 S^a(1, x_2, x_3) S^b(1, x_2, x_3). This correlation provides a quantitative measure of how distinguishable two shapes are, i.e. two shapes are highly correlated if |𝒞| is close to unity. For definiteness, we only focus on the shape (<ref>) generated by the interaction (∂_i π_c)^2σ, that leads to the most sizeable low-speed collider resonance. We show in Fig. <ref> the correlations with the standard templates (<ref>) as functions of α. As anticipated, for α≈ 1, the shape is strongly correlated with the equilateral template. As the parameter α decreases, the low-speed collider resonance appears and becomes more pronounced. As a consequence, the shape correlation with S^eq decreases while that with S^loc increases. For sufficiently small α, it is interesting to notice that we obtain a non-negligible correlation with S^loc, commonly attributed to the presence of multiple light fields during inflation. In contrast, the low-speed collider shape has been derived from an (effective) single-field theory after integrating out a heavy field. Overall though, in the bulk of the low-speed collider parameter space 0.01 ≲α≲ 0.1, there is a poor overlap between the representative shape S^(∂_i π_c)^2σ and the conventional templates.

This analysis thus reveals that probing the bispectrum using only the standard templates S^loc, S^eq and S^orth could lead to a biased interpretation of data about the physics active in the primordial universe. Namely, conventional templates would not be blind to the low-speed collider signal, but a non-zero response to the equilateral/orthogonal and local templates could then be wrongly attributed to a scenario with multiple light fields endowed with a non-trivial sound speed, like in multifield DBI inflation <cit.>. Therefore, it appears essential to give a proper treatment to the low-speed collider shapes.

§.§ Low-Speed Collider Template

For convenience of data analysis, we give a simple and universal template for the shape that captures the low-speed collider resonance. The derived analytical shapes are not convenient to deal with numerically because of fine cancellations appearing in Eq. (<ref>). For this reason, we give a simpler template expressed in terms of elementary functions. To construct it, we realise that the low-speed collider shape behaves like the local one close to equilateral configurations and like the equilateral one in the squeezed limit, resulting in the low-speed collider resonance located in mildly-squeezed configurations. Naturally, the peak is shifted from the equilateral configuration to the squeezed limit as the parameter α decreases.
A simple template for such a shape is given by

S^α(k_1, k_2, k_3) = S^eq(k_1, k_2, k_3) + (1/3) (k_1^2/(k_2 k_3)) [1 + (α k_1^2/(k_2 k_3))^2]^-1 + 2 perms ,

where permutations only apply to the second term and the equilateral shape S^eq is defined in Eq. (<ref>). By construction, the shape S^α interpolates between the usual equilateral and local shapes, and can be used for the observational search of primordial non-Gaussianity. As illustrated in Fig. <ref>, it captures well the location and the amplitude of the low-speed collider resonance for the α range of interest. Remarkably, this simple template also reproduces the qualitative properties of the shapes generated by the non-universal single- and double-exchange diagrams, with π̇_c^2σ and π̇_c σ^2 interactions respectively, albeit replacing α by the effective parameters α^π̇_c^2σ_eff = 2αlog_10(α) and α^π̇_cσ^2_eff = α/2. Note that this template is different from the one obtained in the context of multiple light fields with different sound speeds, where resonances were already noticed <cit.>. In this case, when adjusting parameters of such a template to reproduce the location of the peak, the template misses the relative amplitude of the equilateral signal compared to the resonant one. Contrariwise, the shape (<ref>) results from the exchange of a massive field. One can therefore observationally distinguish, despite similar features, the exchange of light fields from signatures of massive fields.

It is well known that the computational complexity is drastically reduced when looking for a template that is product separable, in the sense of a (sum of) product of functions of momentum variables. This is due to numerical challenges of integrating highly oscillating functions that arise when projecting cosmological observables onto two-dimensional redshift surfaces <cit.>. One might rightly worry that the second term in Eq. (<ref>) ruins factorizability. However, this apparent problem can be solved by introducing a Mellin parameter[A similar trick is used in <cit.> where ubiquitous behaviours of the bispectrum related to what is now referred to as the total energy pole are made product separable by introducing a Schwinger parameter (k_1 + k_2 + k_3)^-n = Γ(n)^-1∫_0^∞ dx x^n-1 e^-(k_1+k_2+k_3)x.]

[1 + (α k_1^2/(k_2 k_3))^2]^-1 = 1/(2iπ) ∫_c-i∞^c+i∞ ds Γ(s) Γ(1-s) α^-2s (k_1^2/(k_2 k_3))^-2s ,

where 0<c<1 such that the contour separates the poles of Γ(s) from those of Γ(1-s). The integrand is manifestly product separable in the momenta.[Note that it is not possible to write the term that ruins factorizability as a geometric series because it does not converge close to equilateral configurations for some permutations.] Along the contour, e.g. by setting s→ 1/2 + i s̃, the Mellin-Barnes integral can be accurately computed numerically. More generally, a similar trick can also be used when the shape has a non-trivial pole structure.

§.§ Self-interaction Contamination

In the presence of a reduced speed of sound, the non-linearly realised time diffeomorphism invariance unavoidably generates large Goldstone boson self-interactions which contaminate the low-speed collider signature. This is attributed to the cubic interaction π̇_c(∂_i π_c)^2 in Eq. (<ref>), whose size is fixed by c_s and that generates an equilateral-type bispectrum of amplitude ∼ 1/c_s^2 at weak mixing.
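For readers who wish to experiment with the template and the correlation measure of the previous subsections numerically, the following minimal sketch (not part of the original analysis; the function names, the fiducial α values and the squeezed-limit cutoff x_min are our own choices) evaluates S^α together with the standard local and equilateral templates and computes 𝒞 with the flat inner product defined above. A small cutoff on the momentum ratios is needed because the local-local norm diverges in the exact squeezed limit, mimicking the finite k_min/k_max of any realistic survey.

```python
import numpy as np
from scipy import integrate

def S_loc(k1, k2, k3):
    # local template S^loc
    return (k1**2 / (k2 * k3) + k2**2 / (k1 * k3) + k3**2 / (k1 * k2)) / 3.0

def S_eq(k1, k2, k3):
    # equilateral template S^eq
    single = k1/k2 + k2/k1 + k1/k3 + k3/k1 + k2/k3 + k3/k2
    return single - 3.0 * S_loc(k1, k2, k3) - 2.0

def S_alpha(k1, k2, k3, alpha):
    # low-speed collider template: equilateral piece plus a screened local piece
    def piece(a, b, c):
        x = a**2 / (b * c)
        return x / (1.0 + (alpha * x)**2) / 3.0
    return S_eq(k1, k2, k3) + piece(k1, k2, k3) + piece(k2, k1, k3) + piece(k3, k1, k2)

def inner(Sa, Sb, xmin=1e-2):
    # <Sa,Sb> = int dx2 int_{1-x2}^1 dx3 Sa(1,x2,x3) Sb(1,x2,x3), with a squeezed cutoff
    f = lambda x3, x2: Sa(1.0, x2, x3) * Sb(1.0, x2, x3)
    val, _ = integrate.dblquad(f, xmin, 1.0,
                               lambda x2: max(1.0 - x2, xmin), lambda x2: 1.0)
    return val

def correlation(Sa, Sb):
    return inner(Sa, Sb) / np.sqrt(inner(Sa, Sa) * inner(Sb, Sb))

# overlap of the alpha-template with the standard templates for a few values of alpha
for alpha in (1.0, 0.1, 0.01):
    Sa = lambda k1, k2, k3: S_alpha(k1, k2, k3, alpha)
    print(alpha, correlation(Sa, S_eq), correlation(Sa, S_loc))
```

As expected from the discussion above, the overlap with the equilateral template decreases and the overlap with the local template grows as α is lowered; the precise numbers depend on the chosen squeezed-limit cutoff.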
Naturally, by tuning the Wilson coefficient setting the size of the other self-interaction π̇_c^3, it is possible to lower the amplitude of the total self-interaction signal precisely at the location of the low-speed collider resonance, making it more visible.[In that respect, note that the “multi-speed” shapes put forth in <cit.> are not observable without a severe fine-tuning, as they are necessarily accompanied by a much larger equilateral-type shape that dwarfs them.] We do not allow this in the following: we concentrate on the signal fixed by symmetries, so that our assessment of the observability of the low-speed collider resonance is conservative.

Let us first estimate the relative size of the low-speed collider resonance with respect to the self-interaction contamination at weak mixing. Focusing on the interaction (∂_i π_c)^2 σ, the shape in the squeezed limit around k_3/k_1∼α scales as S^(∂_i π_c)^2 σ∼α^-1(ρ/H)^2, see Eq. (<ref>), where the α^-1 is attributed to the resonance. The standard equilateral-type shape decreases like k_3/k_1 so that S^π̇_c(∂_i π_c)^2∼ c_s^-2α. Therefore, the ratio of both signals is S^(∂_i π_c)^2 σ/S^π̇_c(∂_i π_c)^2∼ (ρ/m)^2 ≲ 1. In the strict weak mixing regime ρ≪ m, the low-speed collider resonance is dwarfed by the ever-present self-interaction signal. In Fig. <ref>, we show the exact low-speed collider signal and the corresponding self-interaction contamination, as well as the total shape, for different scenarios. At weak mixing (top left), the low-speed collider is indeed not visible in the total signal one would measure. As a consequence, it can only be detected after properly subtracting the equilateral-type contamination, considered as “background noise" in this case. Although the scalings above miss 𝒪(1) coefficients, the two contributions become comparable for ρ∼ m, which motivates considering strong mixing, where the situation is more interesting. In the “effective mass regime", as discussed previously, replacing m by the effective mass m_eff gives an accurate description of the low-speed collider physics at strong mixing. The low-speed collider resonance therefore scales as S^(∂_i π_c)^2 σ∼α_eff^-1(ρ/H)^2, where α_eff≈ c_s ρ/H, while the equilateral-type shape is S^π̇_c(∂_i π_c)^2∼ c_s^-2α_eff. The ratio leads to S^(∂_i π_c)^2 σ/S^π̇_c(∂_i π_c)^2∼ 1. It means that the resonance is not suppressed, and is well visible in the total shape, as seen in the top right panel of Fig. <ref>. Note that the estimates of the self-interaction signal parametrically hold, as the propagation of π_c is not much affected by its mixing with σ for the regime of interest α_eff≲ 1. However, the full dynamics is taken into account in the numerical results of Fig. <ref>.

It is straightforward to generalise these estimates to the other interactions, which can lead to low-speed collider signals with even larger amplitude as their corresponding cubic couplings are not tied to ρ/H. For definiteness, the case of the interaction π̇_c σ^2 is considered in the bottom panels of Fig. <ref>, for two different values λ=± 0.01 of the coupling constant in Eq. (<ref>) ensuring perturbative control. The low-speed collider resonance is well visible, notably due to the enhancement of the signal by Δ_ζ^-1. Additionally, note that when the signal generated by the mixing cubic interaction and the one generated by self-interactions have opposite signs, the low-speed collider signature is well distinct in the total shape.
Measuring equilateral-type non-Gaussianities would constrain the speed of sound c_s, and the position of the low-speed collider resonance would then pinpoint the (effective) mass of the additional heavy field. While this picture is correct at weak mixing, it is in general more subtle at strong mixing. Indeed, the total signal near equilateral configurations is not entirely fixed by c_s, as the low-speed collider and self-interaction signals can have a comparable amplitude there, see for instance the situations in the bottom panels in Fig. <ref>.

§ CONCLUSIONS

In this paper, we have studied the imprint of massive fields on cosmological correlators in the presence of a reduced speed of sound for the Goldstone boson of broken time translations. This minimal framework leads to a whole new class of signatures—that we call cosmological low-speed collider signals—which are characterised by a distinctive resonance in the mildly-squeezed limit of the bispectrum. This work generalises <cit.> to include all leading cubic interactions, which at weak mixing lead to single-, double- and triple-exchange three-point diagrams, and the corresponding strong mixing regime in which the unavoidable quadratic mixing cannot be treated perturbatively. Let us summarise our results.

* Using the framework of the EFT of inflationary fluctuations, we have coupled the Goldstone boson of broken time translations π to a massive scalar field σ, identifying the allowed leading cubic interactions giving sizeable non-Gaussianities. When the speed of sound of π is reduced, c_s ≲ 1, the heavy field can be integrated out in an unusual manner, yielding a non-local single-field theory, see Eq. (<ref>). This theory emerges as the leading-order term in a time-derivative expansion. Although the heavy field becomes effectively non-dynamical, this simple theory accurately describes the full dynamics of the Goldstone boson and captures all multi-field physical effects—except for the non-perturbative particle production leading to the conventional cosmological collider signal—in almost the entire parameter space.

* We have characterised this non-local EFT by analysing the dispersion relation, and have shown that this theory encapsulates several known regimes like the effective reduced sound speed and the modified dispersion relation regimes. This theory provides a systematic and tractable single-field EFT treatment of the imprint of massive fields on cosmological correlators. We have also determined the precise regimes of validity of this theory and shown how to systematically incorporate higher-order corrections in the derivative expansion.

* At weak mixing, we have used bootstrap techniques to analytically compute the corresponding bispectra. We have demonstrated that all three-point correlators originate from a unique simple seed correlator, which in turn is mapped to correlators of π by well-chosen weight-shifting operators. At strong mixing, we have used the cosmological flow approach to obtain exact numerical results for the correlators of interest. Remarkably, we have also shown that the analytical understanding at weak mixing extends to the strong mixing regime of the low-speed collider. In all cases, our approaches allow for a complete characterisation of correlators in all kinematic configurations.

* The phenomenology of the low-speed collider physics is summarised in Figs. <ref> and <ref>.
We have determined the size of non-Gaussianities and have studied the corresponding shape dependence of the bispectra, whose distinctive signature is governed by a single parameter, see Eq. (<ref>). For observational relevance, we give a simple product-separable template for the low-speed collider shape in Eq. (<ref>). Our results are cautiously optimistic: within the regime of validity of the EFT, we can accommodate observationally large non-Gaussianity and sizeable low-speed collider signatures, in particular in the strong mixing regime.

The promising and rich low-speed collider phenomenology exposed in this paper is an invitation to extend and generalise these signatures to spinning intermediate fields and mixed correlators involving gravitons. Finally, it would be interesting from a theoretical perspective to investigate the possibility of systematically constructing a set of boost-breaking effective field theories that are non-local in space, yet still consistent with a local UV completion.

Acknowledgements. We thank Giovanni Cabass, Sebastian Cespedes, Thomas Colas, Paolo Creminelli, Gregory Kaplanek, Scott Melville, Enrico Pajer, Andrew Tolley, Xi Tong and Yuhang Zhu for helpful discussions. SJ, SRP and DW are supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement No 758792, Starting Grant project GEODESI). This article is distributed under the Creative Commons Attribution International Licence (https://creativecommons.org/licenses/by/4.0/ CC-BY 4.0).

§ WEIGHT-SHIFTING OPERATOR DERIVATION

It was pointed out in <cit.> that de Sitter invariant four-point diagrams consisting of external massless fields can be related to those of external conformally coupled fields through a set of weight-shifting operators. In <cit.>, it was shown that even without de Sitter boost symmetry similar relationships continue to hold between the correlators of the Goldstone boson and the ones of φ (defined to be a massive field with m_φ^2=2H^2 that propagates with the same speed of sound as π_c).[In <cit.>, φ was taken to be the conformally coupled field, hence having a relativistic dispersion relation. The speed of sound for the Goldstone boson was then adjusted by rescaling the energies of the external legs, i.e. k→ c_s k. In contrast, here we start with a theory for φ that has a reduced sound speed. The end results of both approaches are identical up to easily identifiable factors of c_s.] In this appendix, we generalise the result in the latter paper to the double- and triple-exchange diagrams that contribute to the bispectrum of π_c. We achieve this by linking these graphs to similar ones of φ's four- and six-point functions, defined in (<ref>). For simplicity of presentation, here we concentrate on the non-local EFT limit where σ can be integrated out. However, the final weight-shifting operators also apply to the same diagrams in the full two-field theory, where σ is a dynamical degree of freedom.

In the non-local theory, the heavy field is non-dynamical, therefore its associated bulk-to-bulk propagator at leading order in derivatives is proportional to a Dirac delta function, i.e.

[σ internal line carrying momentum s] = D^-1(s,τ) δ(τ-τ')/a(τ) .

Therefore, we only need to identify operators that map the vertices of the diagrams in the theory containing φ onto those in the theory for the Goldstone boson π_c.
Specifically, we want to find the map between the following building block

𝔹_φ^2(k_1,k_2,τ) = [two external φ legs meeting at a φ^2 vertex] = a^4(τ) φ_k_1(τ) φ_k_2(τ) φ_k_1^*(τ_0) φ_k_2^*(τ_0) ,

and the building blocks below, associated with graphs with external π lines:

𝔹_π̇_c(k,τ) = [one external π_c leg at a π̇_c vertex] = a^3(τ) π'_c,k(τ) π_c,k^*(τ_0) ,

𝔹_π̇_c^2(k_1,k_2,τ) = [two external π_c legs at a π̇_c^2 vertex] = a^2(τ) π'_c,k_1(τ) π'_c,k_2(τ) π_c,k_1^*(τ_0) π_c,k_2^*(τ_0) ,

𝔹_(∂_iπ_c)^2(k_1,k_2,τ) = [two external π_c legs at a (∂_i π_c)^2 vertex] = -a^2(τ) k_1·k_2 π_c,k_1(τ) π_c,k_2(τ) π_c,k_1^*(τ_0) π_c,k_2^*(τ_0) .

It can be easily verified that all the above components can be mapped onto 𝔹_φ^2 via the following boundary operators

𝔹_π̇_c(k,τ) = -2 c_s k_soft/(H τ_0^2) lim_k_soft→ 0 𝔹_φ^2(k,k_soft,τ) ,

𝔹_π̇_c^2(k_1,k_2,τ) = H^2/(2 c_s^2 τ_0^2 k_1 k_2) ∂^2/∂ k_12^2 [(k_1 k_2) 𝔹_φ^2(k_1,k_2,τ)] ,

𝔹_(∂_iπ_c)^2(k_1,k_2,τ) = H^2/(c_s^4 τ_0^2 k_1^2 k_2^2) (k_1·k_2/(k_1 k_2)) (1 - k_12 ∂/∂ k_12 + k_1 k_2 ∂^2/∂ k_12^2) [(k_1 k_2) 𝔹_φ^2(k_1,k_2,τ)] ,

where we have defined k_12=k_1+k_2. Notice that the combination (k_1 k_2) 𝔹_φ^2(k_1,k_2,τ) only depends on k_12, therefore acting with ∂_k_12 is unambiguous. It should be noted that, following the same reasoning presented above, other building blocks associated with higher-derivative vertices in π_c can be straightforwardly related to 𝔹_φ^2 via similar (albeit higher-derivative) boundary operators.

In the non-local EFT, all exchange contributions reduce to simple contact terms. Therefore, the in-in expressions for diagrams are given by the imaginary part of the product of their corresponding building blocks, followed by an integration over time and multiplied by appropriate coupling constants and symmetry factors. For example, for the double-exchange diagram contributing to the six-point function, we have

[double-exchange six-point diagram of φ] = -2× 2^4 g_1^2 g_2 Im ∫_-∞^0 dτ 𝔹_φ^2(k_1,k_2,τ) D^-1(k_1+k_2,τ) 𝔹_φ^2(k_3,k_4,τ) D^-1(k_5+k_6,τ) 𝔹_φ^2(k_5,k_6,τ) .

The double-exchange diagram of π, on the other hand, reads
pt, scale=2.5] [pyred] (0.2, 0) – (0.4, -0.6); [pyred] (1.8, 0) – (1.6, -0.6); [pyred] (1, 0) – (1, -0.6); [pyblue] (0.4, -0.6) – (1, -0.6); [pyblue] (1, -0.6) – (1.6, -0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1, -0.6) circle (.03cm); [fill=black] (1.6, -0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (2, 0); at (0.4, -0.8) π̇_cσ; at (1.6, -0.8) π̇_cσ; at (1, -0.8) π̇_cσ^2; at (0.2, 0.15) _1; at (1, 0.15) _2; at (1.8, 0.15) _3; = Im ∫_-∞^0 dτ 𝔹_π̇_c(k_1,τ) D^-1(k_1,τ) ×𝔹_π̇_c(k_1,τ) D^-1(k_2,τ) 𝔹_π̇_c(k_3,τ) . With the two expressions above and using the relationships between the building blocks of the two diagrams in (<ref>), one arrives at (<ref>). Equations (<ref>) and (<ref>) follow from a similar procedure. We should emphasise again that identical relationships apply to the full two-field theory, and all the more so to the sub-leading contributions arising from integrating out σ at higher time-derivative orders. § HIGHER-ORDER NON-LOCAL EFT CORRECTIONS We have explicitly computed the power spectrum and the bispectra for all the interactions in the non-local EFT (<ref>). However, let us recall that this theory has been derived by considering only the leading-order term in the time derivative expansion (<ref>). In this section, we show how one can systematically correct correlators of π_c by going to higher order in the derivative expansion (<ref>). While adding more terms to the non-local EFT improves the accuracy to which correlators are determined, and can be used to diagnose failures of the EFT, see <cit.>, it does not account for particle production effects, resulting in some level of unavoidable, but exponentially suppressed errors. 4pt The seed correlators of the field φ attributed to the single-, double- and triple-exchange diagrams, introduced in Eq. (<ref>), at arbitrary order in the time-derivative expansion, are given by ⟨φ_k_1…φ_k_4|_⟩(1)' = -16g_1^2 (∏_i=1^4 φ_k_i^*(τ_0)) ×Im∫_-∞^0 τ̣a^4 φ_k_1φ_k_2[∑_n=0^∞𝒪̂_n(_34)φ_k_3φ_k_4] + 2 perms , ⟨φ_k_1…φ_k_6|_⟩(2)' = -96g_1^2 g_2 (∏_i=1^6 φ_k_i^*(τ_0)) ×Im∫_-∞^0 τ̣a^4 φ_k_1φ_k_2[∑_n=0^∞𝒪̂_n(_34)φ_k_3φ_k_4] [∑_n=0^∞𝒪̂_n(_56)φ_k_5φ_k_6] + 15 perms , ⟨φ_k_1…φ_k_6|_⟩(3)' = -96g_1^3 g_3 (∏_i=1^6 φ_k_i^*(τ_0)) ×Im∫_-∞^0 τ̣a^4 [∑_n=0^∞𝒪̂_n(_12)φ_k_1φ_k_2][∑_n=0^∞𝒪̂_n(_34)φ_k_3φ_k_4] [∑_n=0^∞𝒪̂_n(_56)φ_k_5φ_k_6] + 15 perms , To avoid clutter, above we have omitted the (conformal) time dependence of a(τ), φ_k(τ), as well as the differential operators 𝒪̂_n that is defined by 𝒪̂_n(k) = (-1)^n/(kτ)^2 + (m/H)^2{[τ^2 ∂_τ^2 - 2τ∂_τ][(kτ)^2 + (m/H)^2]^-1}^n . The above expressions for the correlators of systematically correct the leading-order results given in (<ref>). Up to any given number of (time) derivatives, one can compute the time-integrals given above and straightforwardly extract the quantities F_1,2,3 that were defined in (<ref>). Subsequently, in order to compute with the same precision e.g. the power spectrum and the bispectrum of ζ (induced by the single-, double-, and triple-exchange graphs), one should act with the bespoke weight-shifting operators dictated by Eqs. (<ref>)-(<ref>). § NON-LOCAL EFT RESUMMATION In this appendix, as an interesting curiosity, we show how one can fully resum the non-local EFT in flat space. We will see that the effective calculation reproduces the one computed in the two-field theory in some particular limit. Additionally, we will comment on the singularity structure of these correlators. Flat-space propagators. 
In flat space, after setting H=0, the correlators are computed at a fixed time t_0=0, which is analogous to the end of inflation surface τ_0. Let us consider a theory composed of a massless field ϕ propagating at speed c_s and a relativistic field σ with mass m. We assume that both fields are coupled at cubic order through the interaction ℒ⊃ -g/2ϕ^2σ where g is some coupling constant. 4pt Following the diagrammatic rules developed in <cit.> albeit adapted to flat space, the bulk-to-boundary propagator of ϕ (also called the Wightman propagator) and the bulk-to-bulk propagator of σ (also called the Feynman propagator) read W_k(t, t') = 1/2c_s ke^-ic_s k (t - t') , G_k(t, t') = 1/2ω(e^-iω(t-t')θ(t-t') + e^iω(t-t')θ(t'-t)) , with ω^2 = k^2+m^2. In the following, we will consider the s-channel four-point function ⟨ϕ_k_1ϕ_k_2ϕ_k_3ϕ_k_4|$⟩. Thet- andu- channels can be easily found by suitable permutations of the external momenta. Exchange diagram. In the two-field theory, the correlator is composed of four different sub-diagrams, corresponding to the four different time ordering for the two vertices of the exchange diagram. Using the Feynman rules, the first contribution is I_++ = -/3[line width=1. pt, scale=2.5] [pyred] (0.2, 0) – (0.4, -0.6); [pyred] (0.6, 0) – (0.4, -0.6); [pyred] (1, 0) – (1.2, -0.6); [pyred] (1.4, 0) – (1.2, -0.6); [pyblue] (0.4, -0.6) – (1.2, -0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1.2, -0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (1.6, 0); = -g^2 ∫_-∞^0 ṭ' ṭ” W_k_1(0, t') W_k_2(0, t') G_s_12(t', t”) W_k_3(0, t”) W_k_4(0, t”) = g^2/16c_s^4 k_1 k_2 k_3 k_4(1/E E_L E_R + 1/2ω E_L E_R) , where we have defineds_ij = |k_i + k_j|andω^2 = s_12^2 + m^2being the exchanged energy of the massive field. We also have introduced the total energyE ≡c_s(k_12 + k_34)withk_ij = k_i + k_jand the energy flowing into the left (resp. right) vertexE_L ≡c_s k_12 + ω(resp.E_R ≡c_s k_34 + ω). After inspecting the properties of the propagators, it is easy to show thatI_– = I_++^*, so that one hasI_+++I_– = 2Re(I_++). However in our case,I_++is real so that the sum of both diagrams is just twice the result obtained in (<ref>). TheI_+-contribution is I_+- = -/2[line width=1. pt, scale=2.5] [pyred] (0.2, 0) – (0.4, -0.6); [pyred] (0.6, 0) – (0.4, -0.6); [pyred] (1, 0) – (1.2, 0.6); [pyred] (1.4, 0) – (1.2, 0.6); [fill=black] (0.4, -0.6) circle (.03cm); [fill=black] (1.2, 0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0, 0) – (1.6, 0); [pyblue] (0.4, -0.6) – (1.2, 0.6); = g^2 ∫_-∞^0 ṭ' ṭ” W_k_1(0, t') W_k_2(0, t') W_ω(t”, t') W_k_3(t”, 0) W_k_4(t”, 0) = g^2/16c_s^4 k_1 k_2 k_3 k_41/2ω E_L E_R . Similarly to the previous case, we haveI_-+=I_+-^*so that one just needs to multiply by a factor two the result in (<ref>) to obtainI_+- + I_-+. Summing all the four contributions, the final correlator is ⟨ϕ_k_1ϕ_k_2ϕ_k_3ϕ_k_4|'⟩ = g^2/8c_s^4 k_1 k_2 k_3 k_4(1/E E_L E_R + 1/ω E_L E_R) . The correlator has a singularity when the total energy vanishesE→0, and when partial energies flowing into the left or right vertices add up to zeroE_L, R→0. Note that these singularities are never reached for physical momentum configurations. In fact, the residue of the total energy singularity is the corresponding scattering amplitude lim_E → 0⟨ϕ_k_1ϕ_k_2ϕ_k_3ϕ_k_4|'⟩ = 1/8c_s^4 k_1 k_2 k_3 k_4×𝒜_4/E , where𝒜_4 = -g^2/𝒮with𝒮 = c_s^2k_12^2 - ω^2being the (non-relativistic) Mandelstam variable. 
When the sound speed ofϕis sufficiently reduced, an interesting limit is that ofc_s k_12, 34 ≪ω, corresponding to the massive fieldσbeing much more energetic than the external fieldϕ. In this case, we haveE_L, R = ω+ 𝒪(c_sk_12, 34/ω)and the correlator can be written as an expansion inc_sk_12, 34/ωwhose leading order term reads ⟨ϕ_k_1ϕ_k_2ϕ_k_3ϕ_k_4|'⟩ = g^2/8c_s^4 k_1 k_2 k_3 k_4(1/Eω^2 + 𝒪(c_sk_12, 34/ω)) . An interesting feature of this limit is that the partial energy singularities effectively disappear. Indeed in this limit, the massive field can be integrated out and the exchange diagram becomes an effective contact one. Non-local contact diagrams. Following the development of Sec. <ref>, let us now explicitly integrate out the massive field at the level of the Lagrangian. This procedure generates an infinite tower of non-local contact operators ℒ⊃ -g^2/4ϕ^2∑_n=0^∞𝒪_n ϕ^2 , with𝒪_n = 1/-∂_i^2 + m^2(-∂_t^2)^n/(-∂_i^2 + m^2)^n . A considerable simplification, compared to the cosmological case, is that the physical momentum does not redshift and the operator ordering does not matter. This Lagrangian leads to an infinite number of contact diagrams to evaluate. However, interestingly enough, one can easily compute them all. The contributionI_+reads I_+ = -/2[line width=1. pt, scale=2.5] [pyred] (0.2, 0) – (0.5, -0.6); [pyred] (0.4, 0) – (0.5, -0.6); [pyred] (0.6, 0) – (0.5, -0.6); [pyred] (0.8, 0) – (0.5, -0.6); [fill=black] (0.5, -0.6) circle (.03cm); [lightgray2, line width=0.8mm] (0.1, 0) – (0.9, 0); = ig^2/2∫_-∞^0 ṭ W_k_1(0, t) W_k_2(0, t) ∑_n=0^∞(-1)^n/ω^2∂_t^2n/ω^2n W_k_3(0, t) W_k_4(0, t) + (k_12↔ k_34) = ig^2/32c_s^4 k_1 k_2 k_3 k_41/ω^2∫_-∞^0 ṭ e^ic_s k_12t∑_n=0^∞(-1)^n/ω^2n∂_t^2n e^ic_s k_34t + (k_12↔ k_34) = ig^2/32c_s^4 k_1 k_2 k_3 k_41/ω^2(∑_n=0^∞(c_s k_34/ω)^2n) ∫_-∞^0 ṭ e^ic_s(k_12 + k_34)t + (k_12↔ k_34) = g^2/32c_s^4 k_1 k_2 k_3 k_41/E(1/ω^2 - c_s^2 k_12^2 + 1/ω^2 - c_s^2 k_34^2) , where in the third line we have used∂_t^2n e^ic_s k_ijt = (-1)^n (c_s k_ij)^2n e^ic_s k_ijt. The second contribution isI_- = I_+. In the end, the infinite sum of contact diagrams has the following simple and compact form ⟨ϕ_k_1ϕ_k_2ϕ_k_3ϕ_k_4|'⟩ = g^2/16c_s^4 k_1 k_2 k_3 k_41/E(1/ω^2 - c_s^2 k_12^2 + 1/ω^2 - c_s^2 k_34^2) , where we note that the same result is also found if the massive field is integrated out in a local manner, in that case resumming an infinite number of local contact operators. This explicit correlator has a few interesting features. First of all, we do not fully recover the correlator computed in the two-field theory (<ref>). This means that the effective single-field theory used here, integrating out the fieldσà la amplitude, does not faithfully capture the effects present in the original theory, see below. Second, we note that the correlator diverges whenc_s k_12, 34 →ωwhich is equivalent to the limitE_L, R →2ω. This is expected because in this configuration the geometric series in (<ref>) does not converge. Physically, it corresponds to the case when the energy of two external fieldϕcoincides with the exchanged energy of the intermediate particle. In this configuration, the massive field could not have been integrated out in the first place. The most interesting configuration is when we place ourselves deep inside the radius of convergence of the geometric seriesc_s k_12, 34 ≪ω, which corresponds to the massive field carrying way more energy than the external fields. 
In this limit, the correlator simplifies to

⟨ϕ_k_1ϕ_k_2ϕ_k_3ϕ_k_4⟩' = g^2/(8 c_s^4 k_1 k_2 k_3 k_4) (1/(E ω^2) + 𝒪(c_s k_12,34/ω)) .

This form perfectly matches the correct limit of the four-point correlator computed in the original theory (<ref>) and reproduces the residue at the total energy singularity. However, higher-order corrections do not coincide. This apparent mismatch, as well as the others mentioned above, has recently been resolved by adding the proper boundary term accounting for the homogeneous solution to the equation of motion for the heavy field, which we have discarded <cit.>.
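As a quick numerical sanity check of the flat-space expressions above (an illustrative sketch, not part of the paper; the momenta, sound speed and coupling below are arbitrary choices), one can verify that the resummed non-local contact correlator approaches the full two-field exchange result when the exchanged energy dominates, c_s k_12,34 ≪ ω.

```python
import numpy as np

def full_exchange(k, s12, m, cs, g=1.0):
    # s-channel four-point function in the two-field flat-space theory
    k1, k2, k3, k4 = k
    w = np.sqrt(s12**2 + m**2)
    E = cs * (k1 + k2 + k3 + k4)
    EL = cs * (k1 + k2) + w
    ER = cs * (k3 + k4) + w
    pref = g**2 / (8 * cs**4 * k1 * k2 * k3 * k4)
    return pref * (1.0 / (E * EL * ER) + 1.0 / (w * EL * ER))

def resummed_contact(k, s12, m, cs, g=1.0):
    # resummed correlator of the infinite tower of non-local contact operators
    k1, k2, k3, k4 = k
    w2 = s12**2 + m**2
    E = cs * (k1 + k2 + k3 + k4)
    k12, k34 = k1 + k2, k3 + k4
    pref = g**2 / (16 * cs**4 * k1 * k2 * k3 * k4)
    return pref / E * (1.0 / (w2 - (cs * k12)**2) + 1.0 / (w2 - (cs * k34)**2))

# the two expressions agree in the limit cs*k12,34 << omega (heavy, energetic internal field)
k = (1.0, 1.2, 0.9, 1.1)
s12 = 1.5
for m in (5.0, 20.0, 100.0):
    a = full_exchange(k, s12, m, cs=0.05)
    b = resummed_contact(k, s12, m, cs=0.05)
    print(m, b / a)  # ratio tends to 1 as the mass grows
```

The residual difference at finite mass reflects the higher-order corrections and the missing boundary term discussed above.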
http://arxiv.org/abs/2307.00188v1
20230701014053
Coordination of DERs for Grid Reliability via Day-ahead Demand-Supply Power Bounds
[ "Thomas Navidi", "Abbas El Gamal", "Ram Rajagopal" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Coordination of DERs for Grid Reliability via Day-ahead Demand-Supply Power Bounds Thomas Navidi, Abbas El Gamal, Life Fellow, IEEE, and Ram Rajagopal, Member, IEEE T. Navidi and A. El Gamal are with the Department of Electrical Engineering at Stanford University. R. Rajagopal is with the Departments of Civil and Environmental Engineering and Electrical Engineering at Stanford University. August 1, 2023

A previous study has shown that coordinating DERs to protect the distribution grid can significantly reduce the infrastructure upgrades needed to address future increases in DER and electrification penetrations. Implementing such coordination in the real world, however, is challenging due to the temporal and spatial uncertainties about the loads and renewable generation, smart meter and network delays, incomplete information about the grid, different consumer objectives and privacy constraints, and scalability of the coordination scheme. This paper describes a day-ahead 2-layer DER coordination scheme that addresses these challenges. A global controller uses historical load data to compute day-ahead hourly demand upper and lower bounds for each consumer node. It then solves a largest volume axis-aligned box optimization problem to determine corresponding supply power bounds which, if followed, ensure grid reliability. A local controller at each consumer node then determines the DER power injections which satisfy the consumer's objectives while obeying its supply bounds. Simulation results demonstrate, for example, that this scheme can capture 62% of the reduction in transformer violations achievable by the perfect-foresight centralized controller used in the aforementioned previous study. Distributed energy resources, DERMS, distribution grids.

§ INTRODUCTION

Coordination of the power injections from distributed energy resources (DER), such as EV chargers, battery storage units, and flexible thermal loads, across a distribution network can provide several benefits, including reducing peak network load during extreme weather conditions, providing grid services, and safeguarding grid infrastructure, such as transformers and voltage regulators. Existing programs for DER coordination, such as virtual power plants (VPPs) and demand response, aim to provide grid services but do not consider distribution grid reliability. In a recent study <cit.>, the authors showed that coordinating DERs for grid reliability has the potential to significantly reduce the grid infrastructure upgrades needed to support future increases in DER and electrification penetrations, while also reducing peak network load.
The perfect foresight centralized controller used in <cit.> to demonstrate these benefits, however, is not implementable and it has remained unclear how much of these benefits can be attained by practical coordination schemes which must operate under challenging requirements, including: (i) temporal and spatial uncertainties about the loads and renewable generation, (ii) smart meter and communication delays that can be up to several hours <cit.> as well as communication network delays, (iii) incomplete information about the physical characteristics of the network, (iv) different objectives and privacy constraints of the consumers who own the DERs, and (v) scalability to large numbers of consumers. In this paper, we present a day-ahead DER coordination scheme whose main objective is to minimize transformer and voltage violations across the distribution network under the aforementioned real-world constraints. We demonstrate that this scheme can reap a significant fraction of the benefits of the perfect foresight controller with a moderate increase in consumer electricity cost. To address the first two requirements above, we use the 2-layer control architecture first reported in <cit.> which comprises a global controller (GC) that could be operated by the DSO or via an aggregator cooperating with the DSO, and local controllers (LC) located at consumers' sites. Communication takes place only between the GC and the LCs, and the GC sends updates to the LCs once per day. Each day, the GC uses past power injection data obtained from the smart meters to compute hourly power demand upper and lower bounds over the following day for every consumer or group of consumers in the network. The GC then computes corresponding hourly power supply upper and lower bounds by solving a maximum volume axis-aligned box optimization problem <cit.> whose objective is to maximize the volume of the box defined by the supply bounds subject to power flow constraints and fairness across consumers while staying within the given upper and lower power demand bounds. This formulation guarantees that any set of power injections that satisfies the supply bounds does not violate the reliability constraints. The supply power bounds for each consumer node are then sent to its LC, which determines the power injections over the span of the following day by minimizing a combination of its own objective and deviation from the supply bounds subject to its DER constraints. The scheme satisfies the third requirement above by using a data-driven linear model of the network learned through smart meter data. It further satisfies the fourth requirement by not requiring knowledge of any consumer constraints, objectives or DER data, such as the SOC of storage, flexible load, or EV charging rate. The scheme is scalable, satisfying the last requirement for implementable coordination, since the maximum volume axis-aligned box problem is convex and the constraint space is independent of the number of DERs in the network. This paper provides several significant enhancements and improvements over our earlier conference paper <cit.>. While in <cit.> the supply bounds were computed using a heuristic, the new GC algorithm provides guarantees on grid reliability when the bounds are satisfied and the linear power flow model is accurate.
This paper also considers flexible thermal load as another controllable DER in addition to the EV charging and storage considered in <cit.>. While <cit.> included very limited simulation results, this paper provides extensive results for 11 3-phase distribution networks representing varying climates, sizes, and mixtures of commercial and residential consumers over a 3-decade horizon with recent projections of DER penetrations from <cit.>. We further benchmark the performance of our scheme relative to the bookend local and perfect foresight centralized controllers in <cit.>. In addition to <cit.>, there have been other DER coordination schemes, sometimes referred to as management systems (DERMS), that aim to protect the grid. For example, <cit.> describes a hierarchical control scheme in which a centralized controller uses power flow to calculate distribution grid locational marginal prices (DLMPs) for each DER. Each local DER controller then attempts to minimize its assigned DLMP. This scheme, however, assumes that the objective of all consumers is to minimize the DLMP rather than allowing each of them to define its own objective. The strategy in <cit.> uses a centralized controller to determine Lipschitz constraints that guarantee the safe operation of a local reinforcement learning algorithm for voltage stability. Their control strategy, however, is only applied to voltage stability and not to other objectives such as transformer overloading or individual DER objectives, and is only applicable to a neural network based local controller. In <cit.> the authors present a method to calculate flexibility bounds at a single point in the distribution grid based on voltage limits. They do not consider transformer limits and do not fully explore the extension of their method to calculating bounds for multiple points in the grid simultaneously. Finally, the authors in <cit.> determine bounds for each node by solving a similar maximum volume box problem. In contrast to the work presented in this paper, however, their paper (i) only considers voltage constraints, (ii) determines the bounds based only on the network model with no consideration of consumer loads (e.g., demand bounds), which can lead to unfairly allocating too much flexibility to nodes with smaller loads at the expense of nodes with larger loads, (iii) does not include any local controllers, (iv) has very limited simulation results, and (v) does not provide any performance evaluation relative to other DER coordination schemes. In the following section we introduce the grid and DER models and the various assumptions we make in formulating the bounds coordination problem. In Section <ref>, we describe the bounds scheme. In Section <ref>, we present the simulation setup used in evaluating our scheme. Since this setup is the same as that in <cit.>, we provide only a brief summary and refer the readers to that paper for more details. In Section <ref>, we compare the performance of our scheme to the local and the perfect-foresight controllers used in <cit.> to bookend the performance of any implementable DER coordination scheme.

§ MODELS AND ASSUMPTIONS

We consider control of distributed energy resources within a 3-phase distribution network modeled by a graph with a set of nodes [1:N] and a set of edges (lines) ℒ. Node i=1 corresponds to the substation, which is connected to the transmission system and supplies power to the rest of the nodes. We denote the set of consumer nodes by 𝒞⊂ [1:N], where |𝒞| = N_C.
The set of transformers in the network is specified by a set of node pairs (i,j) ∈𝒯, where |𝒯|=N_T. Each consumer node comprises a collection of uncontrollable loads, solar PV, energy storage, EV chargers, and flexible thermal loads located under a single secondary transformer or a single smart meter. This collection can represent one or several residential consumers or a larger commercial building. We consider the steady state operation of the network over time steps t∈{1,2,…}, each of length δ, which we assume to be 15 min in our simulations. The total complex net power consumption at each node i∈ [1:N] in the network in timestep t is denoted by s_i(t). The steady state power flows in the network determine the voltage magnitude v_i(t) for each node i at time t. In the following we provide the models and assumptions on the loads and controllable DERs we assume throughout the paper.

Uncontrollable load. This includes a stochastic uncontrollable load, with constant power factor, and solar PV generation, which is assumed to be real and independent of the power factor. The combined uncontrollable load at node i and timestep t is denoted by p_i(t).

Energy storage. We assume that battery storage unit k at node i has maximum capacity (energy) Q_ik^max. At time t, the storage unit at node i has a charging power rate of c_ik(t)≥ 0, a discharging rate of d_ik(t)≥ 0, and a state of charge (energy) of Q_ik(t). We assume that each storage unit only charges and discharges real power, and that storage charges and discharges with efficiency 0 ≤γ^c_ik≤ 1 and 0 ≤γ^d_ik≤ 1, respectively. Hence, the storage energy at time t is modeled by Q_ik(t) = Q_ik(t-1) + γ^c_ikδ c_ik(t) - γ^d_ikδ d_ik(t). The maximum charging and discharging energy rates are denoted by c_ik^max and d_ik^max, respectively.

Electric vehicle charging. The model we use for EV charging is similar to that for energy storage with maximum discharge power d_ik^max=0 (we do not consider vehicle-to-grid in this paper) and c_ik(t) bounded above by the EV charger rated power. Each EV k ∈ℰ_i, where ℰ_i is the set of EVs at node i, is to be charged within non-overlapping time windows 𝒲_ik over the simulation time horizon. The end time of window w ∈𝒲_ik is denoted by t^end_w. An additional constraint on EV charging comes from the requirement that it must be charged to a desired level Q_ikw^final by the end of each window, i.e., Q_ik(t^end_w) = Q_ikw^final, w ∈𝒲_ik and k ∈ℰ_i.

Flexible thermal load. These include loads for air conditioning, ventilation, electric space and water heating, and cooking. We denote the total thermal load (power consumption) at node i and timestep t by u_i(t) and its maximum over the simulation horizon by u_i^max, and assume that it has a constant power factor of 0.92. We further denote the fraction of flexible thermal energy that can be shifted to another time within the same day by ϕ, which depends on the network and whether the node is commercial or residential. We express the thermal energy at time step t ∈ [1:T_d] as Q_ik(t) = Q_ik(t-1) + δ c_ik(t), where Q_ik(0)=0. In practice, the thermal energy consumed by the flexible load is determined by the consumer's desired temperature band for each thermal load. As the appliance consumes energy, the temperature either increases or decreases. To provide grid services, the controller adjusts the thermal load energy consumption while keeping the temperature within the band.
Accurately relating the thermal energy consumption to temperature can be very complex, requiring detailed data about the appliances, thermal models of the buildings, climate conditions, and consumer behavior. Since this paper is focused on the bounds coordination scheme and on comparing its performance to the bookend controllers in <cit.> rather than on thermal load modeling, operation, and consumer behavior, we will use the simplified temperature to thermal load energy consumption model in Section <ref>.

In summary, the dynamics for the battery storage, EV charging, and flexible load can all be expressed as Q_ik(t) = Q_ik(t-1) + γ^c_ikδ c_ik(t) - γ^d_ikδ d_ik(t), where d_ik(t) = 0 for EV charging, and γ^c_ik = 1 and d_ik(t) = 0 for flexible thermal load, with the additional constraints for each DER as described above.

§.§ Linearized inverse power flow

To maintain reliable operation, the network operator needs to manage both the voltage magnitudes at each node and the complex power flowing through each transformer to keep them within their respective safe ranges. The relationship between the net load and the voltage at each node i∈ [1:N] is governed by the AC power flow equations and accompanying constraints <cit.>. We denote the maximum rated apparent power squared of transformer (i,j) by τ_ij^max, and the upper and lower limits on the voltage magnitude v_i at node i by v_i^min and v_i^max. We say that a set of power injections s_i for i∈ [1:N] is feasible if it satisfies these transformer and voltage limits. In general, the power flow equations are non-convex, which can lead to computationally intractable optimization problems. Under some conditions, this can be addressed through convex relaxation; however, the commonly used SDP and SOCP relaxations do not provide a correct solution when the objective function is a decreasing function of the real power injection, as is the case for the maximum volume box problem discussed later in this paper <cit.>. This may be alleviated through the use of sequential convexification and modifications to the relaxation via the method described in <cit.>. However, this method still requires knowledge of the physical parameters of the network, which are often not available even to the DSO. Moreover, since we are only interested in scheduling, as in other settings such as day-ahead markets, we use the following linearized power flow equations <cit.> v(t) = A s(t) + a, τ(t) = (Fs(t) + f)^2 + (G s(t) + g)^2, where the bold symbols refer to the vector versions of the variables defined earlier, A, a, F, f, G, g are the linear model coefficients learned from the data, and the square of a vector refers to an element-wise square operation. In addition to significantly reducing the computational complexity of computing the supply bounds, this model can in fact be more accurate than physics-based approximations <cit.> because the parameter values of such models, such as power line admittances and the configurations and parameters of capacitor banks, voltage regulators, and switches, are often not completely known to the GC, whose function is to characterize the set of feasible power injections via the supply bounds. In contrast, the power and voltage measurement data needed to train the above model are readily available through existing smart metering infrastructure. Furthermore, this data-driven model can readily adapt to changes in the states of the network assets, such as voltage regulators and switches, without needing to know the specifics of such changes.
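As an illustration of how such data-driven coefficients could be obtained (a minimal sketch under our own assumptions about array shapes; this is not the authors' code), an ordinary least-squares fit of the affine maps from historical smart meter and transformer measurements might look as follows, with the same helper reused for the voltage model (A, a) and for the real and reactive transformer flow models (F, f) and (G, g).

```python
import numpy as np

def fit_affine(S_hist, Y_hist):
    """Least-squares fit of Y ≈ S W^T + b from historical data.

    S_hist: (T, 2N) stacked real and reactive net injections per timestep.
    Y_hist: (T, M) measured targets (voltage magnitudes, or transformer P or Q flows).
    Returns W of shape (M, 2N) and b of shape (M,).
    """
    T = S_hist.shape[0]
    X = np.hstack([S_hist, np.ones((T, 1))])           # append a bias column
    coef, *_ = np.linalg.lstsq(X, Y_hist, rcond=None)  # shape (2N+1, M)
    return coef[:-1].T, coef[-1]

# voltage model  v ≈ A s + a
# A, a = fit_affine(S_hist, V_hist)
# transformer model  tau ≈ (F s + f)^2 + (G s + g)^2, fit from measured P/Q transformer flows
# F, f = fit_affine(S_hist, P_xfmr_hist)
# G, g = fit_affine(S_hist, Q_xfmr_hist)
```

In practice one would fit separate coefficients per hour or per operating condition and refresh them as new smart meter data arrives, consistent with the adaptivity argument above.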
§.§ Reactive power modeling Since the voltages, hence, the feasible set of real power injections, depend on the reactive power injections, to establish the real power injection supply bounds we need to know the relationship between the reactive power at each consumer node and its real power. Since the GC does not have information about the composition of the load at each node, we estimate this relationship using the historical net load data from the smart meters and the transformer monitoring equipment. Since the supply bounds need to guarantee that the complex power is within a certain feasible region, we use this relationship to establish an upper and lower percentile estimate of the reactive power as a function of the real power. The specific percentile can be adjusted to tune how conservative of an estimate to use for the range of possible reactive power injections. In this study, we use a linear least squares estimator trained on the 90th percentile of reactive power injections for the upper bound and the 10th percentile for the lower bound. The resulting upper and lower bounds on the reactive power (as a function of the real power) are q^u(t) = H^u(t)p^u(t) + h^u(t), q^l(t) = H^l(t)p^l(t) + h^l(t), where q^u(t) and q^l(t) are the upper and lower bounds on reactive power, and H^u, h^u,H^l, h^l are the linear model coefficients. The model coefficients are a function of t since there are different coefficients for each hour in the day. § DAY-AHEAD DER COORDINATION SCHEME A block diagram of the day ahead 2-layer DER coordination scheme is given Figure <ref>. In the following subsections we describe the GC and the LC in detail. §.§ Global controller The AC power flow equations provide a mapping from the nodes real and reactive power injections to their voltages, hence the set of feasible node voltages, i.e., voltages within tolerable limits, can be represented by a set of feasible power injections. Similarly, the transformer constraints represent another set of feasible power injections. The intersection of these two sets represents the feasible set of power injections in the network, that is, the power injections that can be reliably supported by the distribution network. Since each consumer's LC has access only to its own power injections and does not have any information about the power injections at other nodes, the LCs cannot jointly operate at any arbitrary point in this feasible set of power injections. Hence in our scheme, the GC provides each LC with day-ahead hourly “supply" upper and lower bounds denoted by the 24-dimensional vectors p_i^l and p_i^u on its net power injections for each node i ∈𝒞 such that if every LC operates within its given supply bounds, the resulting set of power injections in the network is feasible. Such bounds define a “supply box" in the N_C-dimensional space of consumer node power injections for each hour of the next day. Determining the supply bounds by finding the axis-aligned supply box with the maximum volume that fits within the set of feasible power injections, however, can lead to situations in which consumers in favorable locations in the network receive significantly more power injection flexibility relative to their actual demands, resulting in much higher inevitable supply bound violations at other nodes. To address this problem, node demands must be taken into consideration in finding the supply bounds. 
This is achieved by first finding upper and lower hourly “demand" bounds p^ld_i, p^ud_i on the net power injections for each node i ∈𝒞 and for each hour of the next day as follows. To find the hourly demand bounds for a consumer node, the GC uses the node's historical load data corresponding to the next day, for example, if the next day is a weekday, the load data would be from the past weekdays in the past 5 weeks, similarly for weekends. The GC then sets the lower demand bound for each hour to be the lowest of either 0 or the power injection value of the selected historical data for that hour, and the upper demand bound for each hour is the highest of either 1kW or the power injection value of the selected historical data for that hour. The derived demand bounds define a demand box in the N_C-dimensional space of consumer node power injections for each hour of the following day; see Figure <ref>. From the above discussion, to ensure maximum flexibility the GC aims to find the largest volume axis-aligned supply box (defined by the supply bounds) which is in the intersection of the axis-aligned demand box and the set of feasible power injections. Further to prevent unfairly allocating more flexibility to nodes with lower demands at the expense of other nodes, we normalize the difference between the supply and demand bounds for each node by the difference between the upper and lower demand bounds for that node (for each hour) as follows Δ_i^u = p_i^ud - p_i^u/p_i^ud - p_i^ld, Δ_i^l = p_i^l - p_i^ld/p_i^ud - p_i^ld, i ∈𝒞, where 0 ≤Δ_i^u,Δ_i^l<1 and Δ_i^u + Δ_i^l < 1. The reason this normalization ensures fairness in flexibility across all consumer nodes is that the maximum box volume solution tries to make the supply box as close as possible to a hypercube within the demand box, such that shorter demand box dimensions have their corresponding supply box side lengths prioritized over the longer dimensions. This leads to the following supply bounds optimization problem formulation for each hour of the following day (we do not introduce an index for hour). maximize: ∑_i∈𝒞 log(1 - Δ_i^u - Δ_i^l) subject to: Δ_i^u ≥0, Δ_i^l ≥0, Δ_i^u + Δ_i^l < 1, i ∈𝒞 p_i^u = p_i^ud - (p_i^ud - p_i^ld)Δ_i^u, p_i^l = p_i^ld + (p_i^ud - p_i^ld)Δ_i^l, i ∈𝒞 q^u = H^up^u + h^u, q^l = H^lp^l + h^l, v = A s + a, s ∈[s^l, s^u] v_i^min ≤v_i ≤v_i^max, i ∈[1:N] τ = (Fs + f)^2 + (G s + g)^2, s ∈[s^l, s^u] τ_ij ≤τ^max_ij, (i,j) ∈𝒯. The objective function (<ref>) is the sum of the log of the normalized width for each node's supply bounds. Maximizing this objective is equivalent to maximizing the normalized volume of the axis aligned supply box. The constraints (<ref>)-(<ref>) define the relationship between the demand and the supply bounds. The constraints (<ref>) are the mappings from real power supply bounds to the reactive power injection bounds for each node. The constraints (<ref>)-(<ref>) are the data-driven mappings of real and reactive power to the voltage magnitude and limits at each node and the apparent power and limits at each transformer, respectively. Solving this problem requires ensuring that every complex power injection s∈[s^l, s^u] satisfies the voltage and transformer constraints, hence the problem appears intractable. However, since both the set of power injections arising from the voltage constraints and from power injections corresponding to the transformer constraints are convex, their intersection with the demand box is also convex. 
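To make the global controller concrete, the following sketch (our own illustration, not the authors' implementation) solves one hour of the supply-bounds problem with cvxpy, already enforcing the voltage and transformer limits only at the two extreme vertices of the box, which is justified in the next paragraph. For simplicity it uses the affine transformer map B s + b introduced there, assumes per-node reactive-power regression coefficients, and assumes that only consumer-node injections enter the learned models, with all other effects absorbed into the intercepts.

```python
import numpy as np
import cvxpy as cp

def supply_bounds_one_hour(p_ud, p_ld, Hu, hu, Hl, hl,
                           A, a, B, b, v_min, v_max, tau_max):
    """Largest normalized axis-aligned supply box for one hour.

    p_ud, p_ld: (n,) hourly demand upper/lower bounds of the n consumer nodes.
    Hu, hu, Hl, hl: (n,) per-node reactive-power regression coefficients.
    A, a: voltage model v ≈ A s + a with s = [p; q] stacked, A of shape (N, 2n).
    B, b: affine transformer model tau ≈ B s + b, B of shape (N_T, 2n).
    """
    n = p_ud.size
    Du = cp.Variable(n, nonneg=True)   # normalized shrink from the upper demand bound
    Dl = cp.Variable(n, nonneg=True)   # normalized shrink from the lower demand bound
    width = p_ud - p_ld

    p_u = p_ud - cp.multiply(width, Du)
    p_l = p_ld + cp.multiply(width, Dl)
    q_u = cp.multiply(Hu, p_u) + hu
    q_l = cp.multiply(Hl, p_l) + hl
    s_u = cp.hstack([p_u, q_u])
    s_l = cp.hstack([p_l, q_l])

    # check the limits only at the two extreme vertices of the box,
    # splitting the model matrices into their positive and negative parts
    Ap, Am = np.maximum(A, 0.0), np.minimum(A, 0.0)
    Bp, Bm = np.maximum(B, 0.0), np.minimum(B, 0.0)
    v_hi = Ap @ s_u + Am @ s_l + a
    v_lo = Ap @ s_l + Am @ s_u + a
    t_hi = Bp @ s_u + Bm @ s_l + b
    t_lo = Bp @ s_l + Bm @ s_u + b

    constraints = [v_hi <= v_max, v_hi >= v_min,
                   v_lo <= v_max, v_lo >= v_min,
                   t_hi <= tau_max, t_lo <= tau_max]
    objective = cp.Maximize(cp.sum(cp.log(1.0 - Du - Dl)))
    cp.Problem(objective, constraints).solve()
    return p_u.value, p_l.value
```

The logarithmic objective is concave and all constraints are affine, so the problem is a small convex program that the GC can solve once per hour of the following day.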
Thus, we can take advantage of the convexity to simplify the problem as in <cit.> and its expanded version in <cit.> in which the constraints reduce to those needed to define the convex feasible set. To do so, we split the power flow mapping matrices and coefficients into positive and negative components A_+, A_-,a_+ and a_- such that A_+ + A_- = A, a_+ + a_- = a, and similarly for F_+, F_- f_+, f_- and G_+, G_-, g_+, g_-. The transformer mapping will be referred to as B_+, B_-, b_+, b_- to simplify the notation. Then we can verify that every power injection within the supply bounds satisfies the constraints by only checking the upper and lower vertices of the box as follows v^u = A_+ s^u + a_+ + A_- s^l + a_- v^l = A_+ s^l + a_+ + A_- s^u + a_- v_i^min≤ v_i^u≤ v_i^max, v_i^min≤ v_i^l≤ v_i^max, i ∈ [1:N] τ^u = B_+ s^u + b_+ + B_- s^l + b_- τ^l = B_+ s^l + b_+ + B_- s^u + b_- τ_ij^u≤τ^max_ij, τ_ij^l≤τ_ij^max, (i,j) ∈𝒯. Replacing the constraints (<ref>)-(<ref>) with the above constraints, yields a computationally efficient and scalable optimization problem which the GC solves for each hour of the next day. The resulting supply upper and lower bounds for each consumer node are then sent to its local controller. As long as each local controller is able to restrict the power injections to within its supply bounds, the network constraints are satisfied provided the linear mappings from the power injection to node voltages and transformer powers are accurate. Relationship to maximum loadability. The maximum volume axis aligned box problem is related to the problem of finding the maximum loadability of a transmission system <cit.> for voltage stability analysis. The maximum loadability problem aims to maximize the power injection at a single node in the transmission grid constrained by the power flow equations and voltage limits. Key differences between this previous work and our box problem are: (i) we find both a maximum and minimum range of power injections for all nodes in the network rather than a single injection for all the nodes, (ii) we consider the variability of load over time to limit the power injection of nodes in the distribution network depending on their demands ahead of time, and (iii) we include transformer power limits in the constraint set. §.§ Local controller The local controller determines the power injections for the stationary storage, EV chargers, and flexible thermal load within each consumer's node. It aims to satisfy the consumer's objectives and preferences, such as minimizing the total electricity cost, charging EVs on time, and maintaining comfortable climate while minimizing the deviation of the net power injection from its supply bounds. Why would consumers consider the supply bounds at the expense of less flexibility in satisfying their objectives, however? To address this key question, we propose incentivizing consumers via a tiered rate plan similar to ones currently offered by some utilities such as PG&E <cit.>. Under our proposed incentive, consumers would pay a reduced Time of Use (TOU) rate when their net power injection is within the supply bounds and regular TOU rate when they violate the bounds. The exact discounted rate, however, only needs to be sufficient for consumers to prioritize following the bounds over other cost saving strategies, such as energy arbitrage. 
Although our scheme allows for consumers to have different objectives, to evaluate the performance of the scheme, in Section <ref> we assume that all local controllers have the same objectives. In particular, to determine the power injections for the DERs at each node, the local controller solves the following optimization problem for every time step t, i.e., every δ=15 minutes of each day:
minimize: λ_b L_i^b(t) + ∑_k=1^K_i λ_ik L_ik^f(t)
subject to: 0 ≤ c_i(t) ≤ c_i^max(t), 0 ≤ d_i(t) ≤ d_i^max(t),
Q_i^min(t) ≤ Q_i(t) ≤ Q_i^max(t),
Q_ik(t) = Q_ik(t-1) + γ^c_ik δ c_ik(t) - γ^d_ik δ d_ik(t), k ∈ [1:K_i],
Q_ik(t^end_w) = Q_ikw^final, w ∈𝒲_ik, k ∈ℰ_i.
The first term in the above objective function corresponds to the deviation from the supply bounds: L_i^b(t) = [ p_i(t) + ∑_k=1^K_i(c_ik(t) - d_ik(t)) - p_i^u(t) - ϵ_i^u(t) ]_+ + [ p_i^l(t) + ϵ_i^l(t) - p_i(t) - ∑_k=1^K_i(c_ik(t) - d_ik(t)) ]_+, where p_i^u(t) and p_i^l(t) are the supply bounds during the hour in which t lies, and ϵ_i^u(t) and ϵ_i^l(t) are the sums of the past deviations of the net power injection from the upper and lower bounds during the hour, respectively, which allows the controller to ensure that the average power across the hour can satisfy the bounds. The second term in the objective function for each DER k is L_ik^f(t) = |Q_ik(t) - Q_ik^target(t)|, where Q_ik^target(t) is the desired state of DER k at time t, which depends on the specific objective of the consumer. We assume that the objective is to minimize the cost of electricity based on the given or reduced time-of-use (TOU) rate. Thus, the desired state for each DER is Q_ik^target(t) = Q^max_ik(t) for t in the 12-hour period before the start of the peak price period (from early morning to late afternoon), and Q_ik^target(t) = Q^min_ik(t) for t during the peak price period (from late afternoon to late evening). At all other times t (night time), when the TOU rate is lowest and there is no solar generation, the objective is L_ik^f(t) = 0. To determine the limits on the thermal load energy consumption profiles Q^min_ik(t) and Q^max_ik(t) that correspond to the ends of the desired temperature band, we use the following simple model. We assume that the given baseline thermal load energy consumption profile Q_ik^base(t) (see Section <ref> for how it is determined for the simulations) corresponds to the midpoint of the desired temperature band and express the limits on the thermal load energy as Q_ik^max(t) = min{Q_ik^base(t) + ϕ Q_ik^base(T_d), Q_ik^base(T_d)}, Q_ik^min(t) = max{0, Q_ik^base(t) - ϕ Q_ik^base(T_d), Q_ik^base(T_d) - (T_d - t) δ u^max_i }. These limits ensure that the total shifted energy consumption does not exceed a fraction ϕ of the total thermal energy consumption during the day and is equal to the baseline consumption by the end of the day. § SIMULATION SETUP We simulate and evaluate our DER coordination scheme using the same methodology, data and assumptions detailed in <cit.>. To make the presentation of our results somewhat self-contained, in the following we provide a summary of the simulation setup and the DER bookend controllers. We use the power flow and optimization system in <cit.>, in which the input data includes the linearized network models, load profiles, DER operating parameters, DER penetrations, and prescribed electricity tariffs. Using this data, a network scenario is generated by randomly choosing the EV charging windows and assigning rooftop PV, stationary storage, and flexible loads to the consumer nodes in the network.
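For illustration, the supply-bound deviation term L_i^b(t) can be evaluated as in the following sketch; the function name and the load-positive sign convention for charging powers are our assumptions.

```python
import numpy as np

def bound_deviation(p_i, c, d, p_u, p_l, eps_u=0.0, eps_l=0.0):
    """Deviation of a node's net injection from its hourly supply bounds
    (the term L_i^b(t)). c and d are arrays of the DER charging and
    discharging powers at time t; eps_u and eps_l carry the accumulated
    deviations from the upper and lower bounds earlier in the hour."""
    net = p_i + np.sum(c) - np.sum(d)
    over = max(net - p_u - eps_u, 0.0)    # [.]_+ part above the upper bound
    under = max(p_l + eps_l - net, 0.0)   # [.]_+ part below the lower bound
    return over + under
```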
The operation of the chosen DER control scheme is then simulated to determine the DER power injection profiles for all DERs, which allows us to run quasi-static power flow simulation using OpenDSS <cit.> to determine the voltage at each node and apparent power flow through each transformer. This information is then used to compute the transformer and voltage reliability metrics and determine the cost of electricity for each node. This process is repeated for several scenarios and reliability statistics are computed. As in <cit.>, the metric we use for steady-state voltage declares a violation at a node if its voltage magnitude exceeds the specifications in the ANSI C84.1 standard, which represent a deviation of ± 5% from the nominal voltage magnitude on average across any 1-hour window. The metric for transformer thermal overloading declares a violation at a transformer if its average apparent power is greater than 120% of its rated capacity over one or more 2-hour windows. Table <ref> summarizes the main characteristics of the eleven 3-phase distribution networks we used to evaluate our scheme. The columns are the assigned name indicating location and source of network data, type of network data, the number of nodes, transformers, and consumer nodes for each network. The asterisk by the network name indicates a network that aggregates consumers under the secondary transformer. The networks without asterisk have models for individual consumers under each secondary transformer. To construct the load profiles, we first use the 2018 residential and commercial consumer load profiles dataset in <cit.> to construct 1-year baseline profiles for each node in each network based on its geographical census code. We then scale these profiles for each node so that its average daily peak load matches that of the network. The reactive power component of the load for each node is chosen such that the power factor is randomly distributed between 0.9 and 0.95 as is commonly assumed, e.g., see <cit.>. To determine the electrification increases, we use the projections up to 2050 in <cit.>, which is categorized by space heating, water heating, and other loads. We uniformly scale the electrification due to loads other than thermal by their electrification projection percentage. We then randomly choose a consumer that does not already have electric space heating or water heating, and assign to it a load profile of either space or water heating that is converted from the thermal energy in <cit.> to electric energy using the median published efficiency for the corresponding electric appliance in <cit.>. This process is repeated until the total added electric energy in a network equals its electrification projection. The sum of air conditioning, space heating, and water heating load profiles at each node represents its baseline thermal load profile referred to in Section <ref>. We declare a percentage of the thermal load profile as flexible load using the projections in <cit.>, which specifically considers electric furnaces, air conditioners, and water heaters. EV penetration is defined as the EV charging energy as a percentage of the total energy in the network with no solar included, and assuming penetrations from the high electrification projections in <cit.>. We assume the Workplace and home L2 EV chargers to have a rated charging power of 6.3kW <cit.> and that charging power can be varied between any value less than this rated power. 
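As a sketch of how the two reliability metrics described above can be evaluated from the simulated time series, assuming a 15-minute resolution (4 steps per hour); the function names are ours.

```python
import numpy as np

def rolling_mean(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

def voltage_violation(v_pu, steps_per_hour=4):
    """True if the 1-hour moving average of the per-unit voltage magnitude
    leaves the ANSI C84.1 band of +/-5% at any point."""
    m = rolling_mean(np.asarray(v_pu), steps_per_hour)
    return bool(np.any(m > 1.05) or np.any(m < 0.95))

def transformer_violation(apparent_kva, rating_kva, steps_per_hour=4):
    """True if the 2-hour moving average of apparent power exceeds 120% of
    the transformer rating over one or more windows."""
    m = rolling_mean(np.asarray(apparent_kva), 2 * steps_per_hour)
    return bool(np.any(m > 1.2 * rating_kva))
```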
To determine the EV charging windows, we use the synthetic data generator in <cit.> to produce sample charging schedules each consisting of a starting time, ending time, charging energy demand, and whether charging is done at home or at work over the 1-year simulation horizon. We continue to add new EVs to the network at random until the total energy consumed by EVs as determined by the data generator is equal to the total projected EV electrification penetration energy. PV penetration is defined as the energy generated by solar PV as a percentage of the total energy consumed by the network, including for EV charging. We set the PV penetration in 2050 for each network to 23% of the total potential rooftop solar PV generation of its nearest city <cit.>, which corresponds to approximately the same amount of solar PV capacity as the high penetration scenario for nationwide distributed PV projected in <cit.>. To determine the PV penetration for the years prior to 2050, we scale the penetrations in proportion to the nationwide PV capacity provided in <cit.>. The PV generation profiles for each network are obtained from the solar data of the closest geographic region to the network provided in <cit.>. Nodes are assigned a PV generation profile at random until reaching the PV penetration. The profile is scaled such that the ratio of solar PV energy generation to the total energy consumed by the node is randomly distributed between 40-90%, which roughly corresponds to the upper quartile estimate of residential PV system sizes given in <cit.>. We assume that only nodes with solar PV can have stationary storage and define storage penetration as a percentage of the nodes with PV. The storage penetrations we assume correspond to the highest adoption rates given in <cit.>, which range between 30% and 70% depending on the year. Each storage unit is randomly assigned a capacity which is between 40-80% of the average daily PV energy generation, which corresponds to the upper quartile estimate of residential battery system sizes in <cit.>, a maximum c-rate of 0.5 <cit.> and a round-trip efficiency of 0.86 <cit.>. The TOU tariffs <cit.> used in the simulations include peak, part-peak, and off peak rates for both residential and commercial consumers. The hours during which these rates apply are chosen such that the middle of the peak price hour corresponds to the most frequent total network peak demand hour. In addition, EV TOU rates <cit.> are included (see <cit.> for details). We assume that the reduced rate used when the consumers obey their respective supply bounds is 80% of the regular TOU rates. Bookend controllers. To evaluate our bounds scheme, we compare its performance in terms of transformer and voltage violations to two bookend control schemes. The first is a centralized perfect foresight controller that has perfect knowledge of all future loads and DER states 2-days in advance and has full control over all DERs. This centralized controller determines the power injections for all DERs by minimizing a weighted sum of the electricity cost, voltage violations and transformer overloading metrics, and storage operation cost with the grid reliability objectives having the highest weight. The constraints are the linearized inverse power flow mapping and the storage, EV charging, and flexible load dynamics. The second bookend control scheme is a local controller that runs fully autonomously at each node and operates its DERs with the goal of minimizing the consumer's total electricity cost. 
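The random PV assignment described above can be sketched as follows; the 40-90% sizing draw follows the text, while the data layout and function name are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_pv(node_load_kwh, pv_shape, target_penetration):
    """Randomly assign scaled copies of a normalized PV profile to nodes until
    the network-wide PV energy reaches the target penetration (expressed as a
    fraction of total consumption, EV charging included in node_load_kwh)."""
    n_nodes, _ = node_load_kwh.shape
    pv = np.zeros_like(node_load_kwh)
    total_load = node_load_kwh.sum()
    for i in rng.permutation(n_nodes):
        if pv.sum() >= target_penetration * total_load:
            break
        frac = rng.uniform(0.4, 0.9)                        # PV energy / node energy
        scale = frac * node_load_kwh[i].sum() / pv_shape.sum()
        pv[i] = scale * pv_shape
    return pv
```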
It is a simple heuristic that does not presume knowledge or forecasts of any variable values, except for the end of the current EV charging window and the percentage of thermal load that is flexible, which may be specified by the consumer. At each time step, the controller checks its electricity tariff to find the lowest-cost time periods and restricts operation to within those periods as much as possible. Software implementation. All data used in the simulations are publicly available and referenced in this paper and in <cit.>. The code for the simulations will be available on GitHub after publication of this paper. § SIMULATION RESULTS We simulated our DER coordination scheme using the setup outlined in the previous section for the years 2020 to 2050 using the same scenarios as in <cit.> to provide a fair comparison of our scheme to the two bookend controllers. Figure <ref> plots the means and standard errors across the 11 networks for the transformer overloading and voltage violation metrics between 2020 and 2050 using our bounds scheme as well as the means with the bookend controllers. Note that from 2020 to 2030, the transformer overloading and voltage violations steadily increase for all three control schemes. From 2035 to 2045, the bounds scheme and centralized controller see a slower increase in violations as the amount of flexibility and stationary storage begin to offset the downsides of increased electrification. Finally, after 2045, the amount of load flexibility becomes high enough to actually decrease the violations for both the bounds scheme and the centralized controller. In 2050, the means of the transformer and voltage violations for the bounds scheme are 49% and 20%, respectively, compared to 29% and 18% for the centralized controller and 81% and 28% for the local controller. Hence the bounds scheme captures 62% and 75% of the violation reductions of the centralized controller versus the local controller. Figure <ref> shows the means and standard errors of the violation percentages for each individual network averaged across all 16 scenarios in 2050. The bounds scheme is able to reduce the number of overloaded transformers significantly across all networks with the exception of the commercial SF network, which has much less flexible load due to having only commercial consumers as opposed to primarily residential consumers as for all other networks. Note that all networks exhibit a noticeable reduction in voltage violations with the bounds scheme. However, there are significant variations across networks because some are naturally more prone to voltage violations than others due to their specific configuration of power lines and consumers. As pointed out in <cit.>, DER coordination provides benefits to reliability beyond reducing the transformer and voltage violations according to the binary-valued metrics. Figure <ref> plots the empirical probability that the percentage magnitude of the transformer apparent power, across all networks and scenarios in 2050, is greater than the value on the x-axis. Note that shifting the distribution of the magnitudes of all transformer apparent powers towards the y-axis represents fewer violations. The bounds controller has an average magnitude of 192% for transformer overloading events, compared to 243% and 159% for the fully local and fully centralized controllers, respectively.
The results for voltage violations are similar, with the bounds controller having an average violation magnitude of 5.47%, compared to 5.93% and 5.32% for the local and centralized controllers, respectively. §.§ Impact on peak load In addition to reducing the number of transformers that experience overloading and nodes with voltage violations, the bounds scheme reduces the peak network load over the simulation horizon relative to the local controller. As shown in Figure <ref>, the peak load with the bounds scheme is reduced by 12% on average in 2050 compared to the bookend local controller, versus an 18% decrease with the centralized control scheme. §.§ Impact on electricity cost In <cit.>, we showed the centralized controller increases electricity cost by 5.1% in 2050 over the local controller, which focuses only on minimizing electricity cost. With the bounds controller, we find that when calculating the cost with no incentive to follow the bounds, the increase is between 9.8% and 11.1% relative to the local controller. When factoring in the 80% incentive we assumed in our simulations, the cost of electricity is reduced to between 93.4% and 95.0% of the cost under the local bookend controller. §.§ Impact of load flexibility The projections of load flexibility from <cit.> include a base case and an enhanced case; the enhanced case, which we assumed in the above results, projects significantly more load flexibility. In order to determine the impact of the amount of load flexibility on the performance of the bounds scheme, Figure <ref> shows the mean and standard errors of the percentage of overloaded transformers averaged across all networks and scenarios in 2050 with the three control schemes for the base and enhanced flexibility cases. The means of the transformer and voltage violation percentages for the bounds scheme with the base load flexibility are 69% and 25%, respectively, compared to 49% and 20% with the enhanced load flexibility. Thus, load flexibility represents a significant contribution to the performance of the bounds scheme. § CONCLUSION We presented a DER coordination scheme which aims to protect distribution grids with high penetrations of DER and electrification. We demonstrated that this scheme can capture a significant fraction of the reliability benefits of a perfect foresight centralized controller relative to current autonomous DER control <cit.>. Another important benefit of our scheme relative to autonomous DER control is that it reduces peak network power even though this objective is not explicitly included in the bounds problem formulation. We showed our scheme addresses all the requirements of an implementable DER coordination scheme. Our bounds scheme can be readily deployed using existing computing and communication infrastructure and does not require the development or deployment of any new hardware (the GC and LC operations can both be handled using existing cloud infrastructure and DER embedded controllers) or grid infrastructure, making it very cost-effective, especially when considering the significant savings it provides to grid infrastructure upgrades. Deploying our scheme, however, requires the involvement of distribution grid operators (DSO) not only to provide the necessary information about the grid, e.g., the linearized grid models used in this paper, but also to ensure broad consumer adoption of the scheme through various incentives, such as our proposed reduced TOU rate.
Our scheme can also operate seamlessly with existing demand response and virtual power plants (VPP) programs which aim to reduce electricity cost during extreme climate conditions. In such cases, the LCs would adjust their control objectives to satisfy the requests for grid services from the DSO, DER vendors or third party DER aggregators while still aiming to keep power injections within the supply bounds. To evaluate the bounds scheme, we assumed that the demand bounds are determined by the GC using historical smart meter data. Other ways to determine these bounds include incentivizing consumers to submit their own demand bound forecasts. Having more accurate demand bounds would allow the GC to better allocate supply bounds to other nodes who may need the additional capacity. We also assumed that all LCs use the same control objective. The scheme, however, allows consumers to have different objectives provided they include staying within the given supply bounds. The 80% of TOU incentive assumed in this paper is only meant to illustrate the potential for our scheme to provide savings to both the consumers who use their DER flexibility for grid reliability and the DSO who saves on distribution asset upgrade costs. The exact values of the incentive will have to be calculated on an individual network basis by considering the cost savings of deferring transformer and voltage upgrades and differences in cost of electricity over time. These costs can vary widely depending on the DSO network equipment, location, and electricity generation portfolio. § ACKNOWLEDGMENT The authors would like to thank Anthony Degleris for bringing to their attention the maximum volume axis aligned box problem in <cit.>. The work presented in this paper was supported under ARPA-E award DE-AR0000697 and US DOE award DE-OE0000919. IEEEtran
http://arxiv.org/abs/2307.01323v1
20230703195256
Semantic enrichment towards efficient speech representations
[ "Gaëlle Laperrière", "Ha Nguyen", "Sahar Ghannay", "Bassam Jabaian", "Yannick Estève" ]
cs.CL
[ "cs.CL", "cs.SD", "eess.AS" ]
Over the past few years, self-supervised learned speech representations have emerged as fruitful replacements for conventional surface representations when solving Spoken Language Understanding (SLU) tasks. Simultaneously, multilingual models trained on massive textual data were introduced to encode language agnostic semantics. Recently, the SAMU-XLSR approach introduced a way to make profit from such textual models to enrich multilingual speech representations with language agnostic semantics. By aiming for better semantic extraction on a challenging Spoken Language Understanding task and in consideration with computation costs, this study investigates a specific in-domain semantic enrichment of the SAMU-XLSR model by specializing it on a small amount of transcribed data from the downstream task. In addition, we show the benefits of the use of same-domain French and Italian benchmarks for low-resource language portability and explore cross-domain capacities of the enriched SAMU-XLSR. Index Terms: Spoken language understanding, deep learning, self-supervised model, semantic speech representations, language portability, cross-lingual § INTRODUCTION Spoken language understanding (SLU) consists of various Natural Language Processing tasks that extract semantics from speech <cit.>, such as call routing, named entity recognition from speech, or slot filling tasks in the context of human-machine dialogue. This work focuses on end-to-end neural approaches for speech-to-concept, one of the most challenging SLU tasks. It distinguishes itself from conventional cascade approaches <cit.> by using a single neural model to directly extract the semantics from speech signals <cit.> with the advantages of: (1) jointly optimizing the Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU) parts, and (2) mitigating error propagation. Nonetheless, end-to-end models' main challenge resides in the lack of bimodal annotated data, i.e. audio speech recordings with semantic manual annotations. To overcome it, transfer learning techniques <cit.> and artificial augmentation of training data with speech synthesis <cit.> have been proposed. In this paper, we investigated a recently proposed approach to remedy the aforementioned problem with the use of self-supervised learning (SSL) for the SLU task. SSL models, which are pre-trained from huge amounts of unlabelled data, have lately become very trendy as they show promising results in a wide range of speech tasks <cit.> when substantially alleviating the need of costly annotated speech data. At the same time, similar ideas had been successfully applied to text to allow semantics information extraction <cit.>. Several attempts to unify both textual and speech modalities can be found in <cit.>. Inspired by this new challenge, <cit.> proposed a framework named SAMU-XLSR (Semantically-Aligned Multimodal Utterance-level Cross-Lingual Speech Representation) which produces a semantically-aligned multimodal and multilingual sentence-level representation. To do so, the authors combined the well-known multilingual frame-level speech representation learning model XLS-R <cit.> with the Language Agnostic BERT Sentence Embedding generator LaBSE <cit.>.
More interestingly, <cit.> shows that SAMU-XLSR can also be used as a frame-level speech encoder for a challenging end-to-end SLU task, as they find that this model may create semantically aware frame-level speech representations. This study shows that specializing SAMU-XLSR representations, by exploiting a small amount of transcribed data without costly semantic annotation, offers very strong improvements in semantics extraction. Another main finding is the scoring equivalence obtained with computationally cost-effective experiments guided by a layer-wise analysis of the enriched SSL models. We finally investigate how portability to same-domain or same-language data could be beneficial in order to make the semantically enriched representations more accurate. § SAMU-XLSR Self-supervision for speech representations has emerged recently as a strong alternative to conventional handcrafted approaches such as filter-bank and MFCC features, being capable of leveraging large amounts of unlabelled speech data to learn useful latent speech features. Outstanding representatives like wav2vec 2.0 <cit.>, HuBERT <cit.>, and WavLM <cit.> have proven their effectiveness in various speech tasks such as ASR <cit.>, speaker verification <cit.> and emotion recognition <cit.>. While such models are designed to learn speech embeddings at the acoustic frame level, i.e., for short speech segments of 20 milliseconds, <cit.> proposed the SAMU-XLSR framework to modify the speech embeddings of a pre-trained SSL model in order to capture semantic information, by exploiting audio/text paired data independent of the downstream tasks. Higher-level semantics were proven useful for cross-lingual speech-to-text mining and zero-shot speech-to-text translation settings, hinting that other speech tasks such as SLU might benefit from this kind of speech representation. In detail, as shown in Figure <ref>, SAMU-XLSR learns to generate a sentence-level embedding using an Attentive Pooling block taking as input the frame-level contextual representations of a pre-trained XLS-R model <cit.>. In the same semantic space, this speech sentence embedding is pulled towards a textual sentence embedding generated by the pre-trained LaBSE model <cit.>, by a cosine similarity loss. This way, the weights of the speech components (the pre-trained XLS-R and the Attentive Pooling block) are updated to predict the text embeddings provided by the frozen LaBSE model for the corresponding transcripts. Furthermore, since both XLS-R and LaBSE are multilingual models, SAMU-XLSR is inherently multimodal and cross-lingual. After the training process, the SAMU-XLSR model can produce either frame-level or sentence-level embeddings for the downstream tasks. § MEDIA AND ITALIAN PORTMEDIA DATASETS In this study, we used two datasets of recorded phone calls for hotel booking, in two languages: Italian (part of the PortMEDIA corpus <cit.>) and French (MEDIA corpus <cit.>). They are taken from the MEDIA Evaluation Package[<http://catalog.elra.info/en-us/repository/browse/ELRA-E0024/>] [International Standard Language Resource Number: 699-856-029-354-6] distributed by ELRA and freely accessible for academic research. All recorded participants gave their permission to create and distribute the data collections. These datasets are dedicated to semantic information extraction from speech in the context of human-machine dialogues collected by using the Wizard-of-Oz method.
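To illustrate the training objective just described, here is a hedged PyTorch sketch of the attentive pooling block and the cosine-similarity loss; `xlsr` and `proj` stand in for the pre-trained XLS-R encoder and a projection to the LaBSE embedding dimension, and all names here are placeholders rather than the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentivePooling(nn.Module):
    """Collapses frame-level features (B, T, D) into one utterance vector."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, frames):                      # frames: (B, T, D)
        w = torch.softmax(self.score(frames), dim=1)
        return (w * frames).sum(dim=1)              # (B, D)

def samu_xlsr_step(xlsr, pool, proj, waveforms, labse_sentence_emb):
    """One specialization step: pull the pooled speech embedding towards the
    frozen LaBSE embedding of the paired transcript with a cosine loss."""
    frames = xlsr(waveforms)                        # (B, T, D) frame features
    speech_emb = proj(pool(frames))                 # map to the LaBSE dimension
    loss = 1 - F.cosine_similarity(speech_emb, labse_sentence_emb, dim=-1).mean()
    return loss
```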
Only the users' turns are annotated with both manual transcriptions and complex semantic annotations, and are used in this study. Table <ref> presents the description of both corpora in terms of hours and word occurrences. The Italian PortMEDIA corpus <cit.> [<http://www.elra.info/en/projects/archived-projects/port-media/>] is made up of 604 dialogues from more than 150 Italian speakers. Its semantic dictionary includes 83 basic attributes and 19 specifiers <cit.>. This corpus is only available with “full" semantic tags, which include specifiers and modifiers, totaling 139 different concepts. The French MEDIA corpus, containing 1258 dialogues from around 250 speakers, was used for more in-domain data during SAMU-XLSR specializations, and to conduct experiments on SLU language portability from French to Italian. The end-to-end SLU models we built in this work predict a transcription enriched with semantic labels, such as: I reservation would like to book room-number one room-type double room . Two metrics are commonly used for this benchmark: the Concept Error Rate (CER) and the Concept-Value Error Rate (CVER). Both are computed similarly to the Word Error Rate (WER), except that CER only takes into account the concept occurrences, while CVER considers the correctness of the complete concept/value pair (room-number one ). We also evaluate the transcription (excluding concepts) generated by our models with the WER. § GENERAL SLU ARCHITECTURE The end-to-end model used for our SLU task (Figure <ref>) consists of a frozen or fine-tuned speech encoder (XLS-R or one of the SAMU-XLSRs presented in section <ref>), followed by 3 bi-LSTM layers of 1024 neurons for segment contextualization. This is followed by a fully connected layer of the same dimension, activated with LeakyReLU and a softmax function. We optimize the CTC loss, with greedy decoding, by using an Adam optimizer with learning_rate=0.0001 for the speech encoders and Bi-LSTM layers, and an Adadelta optimizer with learning_rate=1.0 for the fully connected layer. We also tested other RNNs (Bi-GRU, Li-GRU, LSTM) with different numbers of blocks (0 to 4) and 0 to 4 fully connected layers, all for multiple neural dimensions (512, 2048). The models presented in this paper learn 387.8M parameters in 16.5 hours on Italian PortMEDIA and 27 hours on MEDIA with a single v100-32G GPU, when fully fine-tuned for 100 epochs. Freezing the speech encoders reduces the parameters by 316.3M and the running time by around 6.5 hours on Italian PortMEDIA (-40%) and 12 hours (-44%) on MEDIA. Note that all following SLU experiments need to take into consideration a possible 0.4% variation of the Error Rates, estimated from 5 training runs on PortMEDIA with different seeds. § CONTRIBUTIONS In this paper, we propose a semantically enriching specialization of the SAMU-XLSR representations for an in-domain downstream task in closely related languages with very few hours of speech. Our investigation also focuses on a computationally cost-effective way to approach a challenging SLU task. We then analyze its new semantics extraction abilities in terms of language portability, by training our SLU model for 100 epochs on the French MEDIA dataset, followed by 100 epochs on the Italian PortMEDIA dataset. These experiments will be named FR→IT. The main limitation of our study lies in the similarity of the French and Italian languages, MEDIA and Italian PortMEDIA being the only currently existing multilingual datasets for same-semantics extraction.
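A minimal PyTorch sketch of the decoder part of this architecture is given below; the speech encoder is assumed to output (B, T, enc_dim) features, the hyper-parameters follow the description above, and the layer ordering around the softmax is simplified relative to the paper's exact recipe.

```python
import torch
import torch.nn as nn

class SLUHead(nn.Module):
    """Bi-LSTM contextualizer plus linear decoder on top of a (frozen or
    fine-tuned) speech encoder, trained with CTC on transcripts enriched
    with semantic concept tags."""
    def __init__(self, enc_dim, n_tokens, hidden=1024):
        super().__init__()
        self.blstm = nn.LSTM(enc_dim, hidden, num_layers=3,
                             bidirectional=True, batch_first=True)
        self.act = nn.LeakyReLU()
        self.fc = nn.Linear(2 * hidden, n_tokens)

    def forward(self, feats):                        # feats: (B, T, enc_dim)
        x, _ = self.blstm(feats)
        return torch.log_softmax(self.fc(self.act(x)), dim=-1)

ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
# Two optimizers, as described above:
#   Adam (lr=1e-4) for the speech encoder and Bi-LSTM layers,
#   Adadelta (lr=1.0) for the fully connected layer.
```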
§.§ Specializing SAMU-XLSR We explored a “specialization" of the SAMU-XLSR model. To do so, we fine-tuned the model presented in Figure <ref> on our downstream task's data without semantic tags, to make SAMU-XLSR speech representations closer to their corresponding LaBSE text representations, as done in the original work <cit.>. This way, we expect to adapt the semantic alignment of SAMU-XLSR representations to our SLU task, thanks to in-domain transcriptions. These specialized speech encoders are then used in our end-to-end SLU model as shown by Figure <ref>. We specialized the SAMU-XLSR model on either MEDIA, Italian PortMEDIA, or the combination of the two, for 100 epochs, with the Adam optimizer's initial learning rate set to 0.0001. These three models are respectively named: SAMU-XLSR _FR, SAMU-XLSR _IT, and SAMU-XLSR _IT ⊕ FR, while SAMU-XLSRs refers to the original non-specialized SAMU-XLSR and all the models derived from it. During validation, Italian PortMEDIA's dev set was used on SAMU-XLSR _IT and SAMU-XLSR _IT ⊕ FR, whereas MEDIA's dev set was used on SAMU-XLSR _FR. All three models have 316.2M parameters, and were trained for respectively 20 hours, 13.3 hours, and 33.3 hours in total, on a single v100-32G GPU. In Table <ref>, we studied the impact of these specializations on the Italian PortMEDIA SLU task. When freezing the different speech encoders, specialized SAMU-XLSRs significantly outperform both XLS-R and SAMU-XLSR. We emphasize that freezing the speech encoder results in a model of only 71.5M parameters to be trained, in comparison with the 387.8M parameters to be updated while fine-tuning it. Table <ref> demonstrates no further improvements when fine-tuning SAMU-XLSR _IT during the SLU training process. This can be explained by the fact that the speech encoder has already been trained on the downstream training data to extract the general semantics of a sentence during the specialization process. However, considering the very small amount of Italian PortMEDIA data, fine-tuning SAMU-XLSR _IT ⊕ FR, which was specialized on more in-domain cross-lingual data, resulted in significant score improvements. Our end-to-end SLU models also surpass our previous work <cit.>, scoring 18.5% CER on the MEDIA “full" mode with a multilingual wav2vec 2.0, compared to 26.1% with the French LeBenchmark wav2vec 2.0, and 20.3% with additional CommonVoice French data. §.§ Layer-wise speech encoder analysis Figures <ref> and <ref> present, respectively, a layer-wise WER and CER analysis of the specialized SAMU-XLSRs, compared to the original SAMU-XLSR, obtained by removing the upper layers of each encoder one by one before extracting our speech embeddings. The kept layers of the encoders are frozen for the “Frozen" architecture, or fine-tuned by supervision to solve the Italian PortMEDIA SLU task, leading to the “Fine-Tuned" results. Fine-tuning specialized speech encoders during Italian PortMEDIA training leads to no significant improvement. However, specialized SAMU-XLSRs generate far better representations than SAMU-XLSR in terms of semantic (CER) and linguistic (WER) encoding when frozen for the SLU task. As we already know from <cit.>, SAMU-XLSR captures and encodes semantics from the speech up to its top layers thanks to the semantically-aligned LaBSE embeddings. Yet, since LaBSE is language agnostic, linguistic information is less well represented in the last SAMU-XLSR layers, even after specialization.
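The layer-wise truncation used in this analysis can be sketched generically as keeping only the lowest k transformer blocks of the encoder and freezing them; the attribute that actually holds the blocks depends on the model implementation, so this is only an illustrative helper.

```python
import torch.nn as nn

def truncate_and_freeze(encoder_layers: nn.ModuleList, keep: int) -> nn.ModuleList:
    """Keep only the lowest `keep` transformer blocks of a speech encoder and
    freeze them (e.g. keep=17 of 24 in the frozen setting discussed below)."""
    kept = nn.ModuleList(list(encoder_layers)[:keep])
    for p in kept.parameters():
        p.requires_grad = False
    return kept
```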
The most important observation appears in Figure <ref>: freezing a specialized SAMU-XLSR with only 17 layers leads to a WER of 15.3%, nearly identical to the 15.4% WER obtained by fine-tuning the full 24-layer original SAMU-XLSR. This result was obtained thanks to specializations performed with in-domain transcriptions, aiming neither at ASR nor at our final SLU task, but only at injecting semantics into the speech representations with the use of LaBSE. §.§ Portability This section explores the impact of SAMU-XLSR's specialization on in-domain cross-lingual experiments (FR→IT), followed by cross-domain same-language experiments with the Italian CommonVoice and French PortMEDIA datasets. §.§.§ Cross-Lingual New state-of-the-art Italian PortMEDIA CER results were obtained with language portability experiments (Table <ref>), by fine-tuning the speech encoders on MEDIA and then on Italian PortMEDIA (FR→IT). SAMU-XLSR _IT ⊕ FR led to 25.1% CER, compared to the 26.2% obtained with a bigger architecture and the non-specialized SAMU-XLSR <cit.>. With portability experiments, SAMU-XLSR _IT ⊕ FR significantly outperforms SAMU-XLSR _FR in terms of WER and CVER thanks to its specialization on the combined in-domain French and Italian transcriptions. The 100 pre-training epochs done on French data allow the model to learn more about the task semantics but degrade the Italian transcriptions, going from at best 14.5% WER with Italian-only training to 16.4% WER with fully fine-tuned FR→IT training. Freezing the speech encoders resulted in a far better CER for FR→IT training with SAMU-XLSR _IT ⊕ FR (26.5% CER, not considering SAMU-XLSR _IT ⊕ FR _(17)) than for Italian-only training with SAMU-XLSR _IT ⊕ FR (29.4% CER). In Figure <ref>, we noticed that a frozen SAMU-XLSR _IT ⊕ FR with only 17 layers achieved nearly the best WER obtained. Therefore, we explored the effectiveness of this far smaller model on the FR→IT task, naming the model SAMU-XLSR _IT ⊕ FR _(17) in Table <ref>. Training for only 9.5 hours on a single v100-32G GPU impressively led to the best WER and CVER scores for the portability experiments, while approaching the best 25.1% CER obtained with fully fine-tuned speech encoders in 39 hours, and matching the 25.6% CER of the Italian-only experiments obtained in 11 hours. §.§.§ Cross-Domain This section aims to determine whether the specializations significantly degrade or improve the scores, rather than to provide competitive results for these tasks; this is why only inference was performed, instead of full training. Considering completely out-of-domain tasks, we chose to test our Italian PortMEDIA models with the CommonVoice Italian dataset [https://commonvoice.mozilla.org/fr/datasets], a corpus of recorded broadcast news unrelated to hotel booking. Since this corpus is devoid of semantic annotations, we only evaluated ASR performance with the Character and Word Error Rates (ChER and WER). Experiments resulted in almost 100% WER for all speech encoders, meaning that our SLU models could not transcribe words from far cross-domain data. Even the ChER, expected to be lower since the phoneme inventory is shared, was 90.7% when fine-tuning XLS-R, compared to up to 98% with the specialized SAMU-XLSRs, which lost their initial ability to transcribe broadcast Italian after being specialized on phone-call audio. To pursue this analysis on close-domain tasks despite the lack of suitable Italian data, we turned to the French PortMEDIA dataset, jointly distributed with Italian PortMEDIA.
This dataset was designed for portability purposes with the French MEDIA corpus, its dialogues being close-domain phone calls for spectacle (show) booking. The corpus shares 19 semantic annotations with MEDIA, and integrates 17 others for the spectacle task, allowing us to evaluate both CER and CVER. Despite French scores that are highly degraded compared to the 11.5% WER, 18.5% CER and 29.5% CVER obtained on MEDIA with SAMU-XLSR _IT ⊕ FR, Table <ref> demonstrates significant score improvements when freezing or fine-tuning specialized speech encoders, with SAMU-XLSR _IT ⊕ FR leading to almost all of the best results in WER and CER, clearly surpassing SAMU-XLSR. We can positively state that such specialization can enhance both semantics extraction and audio transcription for a close-domain task like French PortMEDIA. § CONCLUSIONS This paper investigates how to effectively enrich the frame-level speech representations of an SSL model for a complex SLU task, by proposing a specialized semantics injection with the recently introduced SAMU-XLSR approach. Each specialized SAMU-XLSR is then used as a speech encoder for a challenging semantic extraction task, with the low-resource Italian PortMEDIA benchmark. After conducting their layer-wise capacity analysis, the paper experiments with both their cross-lingual and cross-domain portability. Results show how this enrichment of SAMU-XLSR can significantly improve semantic concept extraction, leading to the new state-of-the-art 25.1% CER obtained on Italian PortMEDIA with language portability experiments. The analysis also demonstrates the usefulness of such specialization for close cross-domain datasets. One very interesting discovery also lies in the computational cost-efficiency of such specialized speech encoders, which can approximate the state-of-the-art results of a fully fine-tuned SLU model with only 17 of the 24 layers of a frozen SAMU-XLSR. Furthermore, the SSL model semantic enrichment done at the sentence level, rather than at the character or word level, opens promising perspectives for exploiting imperfect transcriptions in the ASR and SLU domains. § ACKNOWLEDGEMENTS This paper was inspired by insights gained from JSALT 2022 at JHU, gift-funded by Amazon, Microsoft and Google. This work was performed using HPC resources from GENCI/IDRIS (grant 2022 AD011012565) and received funding from the EU H2020 SELMA (grant No 957017), ESPERANTO (grant No 101007666) and PSPC AIDA: 2019-PSPC-09 (funded by BPI-France) projects. We would like to especially thank our colleagues Sameer Khurana and Antoine Laurent for inventing SAMU-XLSR and inspiring its use for SLU. IEEEtran
http://arxiv.org/abs/2307.00871v1
20230703091002
Disentangling the two sub-populations of early Herbig Be stars using VLT/X-Shooter spectra
[ "B. Shridharan", "Blesson Mathew", "R. Arun", "T. B. Cysil", "A. Subramaniam", "P. Manoj", "G. Maheswar", "T. P. Sudheesh" ]
astro-ph.SR
[ "astro-ph.SR" ]
Shridharan et al. Department of Physics and Electronics, CHRIST (Deemed to be University), Bangalore 560029, India Indian Institute of Astrophysics, Sarjapur Road, Koramangala, Bangalore 560034, India Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India Early Herbig Be (HBe) stars are massive, young stars accreting through the Boundary Layer mechanism. However, given the rapid (< 2 Myr) evolution of early Herbig stars to the main-sequence phase, studying the evolution of the circumstellar medium around these stars can be a cumbersome exercise. In this work, we study the sample of early (B0-B5) HBe stars using the correlation between Hα emission strength and near–infrared excess, complemented by the analysis of various emission features in the X-Shooter spectra. We segregate the sample of 37 early HBe stars based on the median values of Hα equivalent width (EW) and near–infrared index (n(J-H)) distributions. The stars with |Hα EW| > 50 Å and n(J-H) > -2 are classified as intense HBe stars and stars with |Hα EW| < 50 Å and n(J-H) < -2 as weak HBe stars. Using the VLT/X–Shooter spectra of five intense and eight weak HBe stars, we visually checked for the differences in intensity and profiles of various HI and metallic emission lines commonly observed in Herbig stars. We propose that the intense HBe stars possess an inner disk close to the star (as apparent from the high near-infrared excess) and an active circumstellar environment (as seen from high Hα EW value and presence of emission lines belonging to FeII, CaII, OI and [OI]). However, for weak HBe stars, the inner disk has cleared, and the circumstellar environment appears more evolved than for intense HBe stars. Furthermore, we compiled a sample of ∼58,000 emission-line stars published in Gaia DR3 to identify more intense HBe candidates. Further spectroscopic studies of these candidates will help us to understand the evolution of the inner (∼a few au) disk in early HBe stars. Disentangling the two sub-populations of early Herbig Be stars using VLT/X-Shooter spectra B. Shridharan^1E-mail: shridharan.b@res.christuniversity.in, Blesson Mathew^1, R. Arun^2, T.B. Cysil^1, A. Subramaniam^2, P. Manoj^3, G. Maheswar^2, T.P. Sudheesh^1 Received xxxx; accepted xxxx ======================================================================================================================================================================== § INTRODUCTION Herbig Ae/Be stars are pre-main sequence (PMS) stars belonging to A and B spectral types (HAeBe) with their mass ranging from 2 to 10 M_⊙<cit.> . The optical and infrared (IR) spectra of HAeBe stars are known to show emission lines of various allowed and forbidden transitions such as Hα, Hβ, CaII, OI, and [OI] <cit.>. In addition, they show excess flux in the infrared wavelengths due to the energy reprocessing by dust grains in the outer disk <cit.>. The circumstellar medium of HAeBe stars is active with different emission lines arising from circumstellar disk, accretion columns and disk winds around them (refer for a recent review on HAeBe stars). However, in recent years, studies have noted the distinction between various properties of HBe and HAe stars. Since the mass varies drastically between B and A spectral types, the processes occurring in the stellar interiors also differ between HBe and HAe stars. The properties of accretion <cit.> and variability <cit.> have been reported to differ between HAe and HBe stars. 
One of the significant differences between them is the mode of accretion from the circumstellar disk. HAe and late HBe stars are known to accrete through the magnetospheric accretion (MA) paradigm (see <cit.> for a review), where the star's magnetosphere truncates the circumstellar disk at a few stellar radii and funnels the material onto the star through the magnetospheric field lines. Due to the absence of a strong magnetic field <cit.>, early HBe stars do not accrete through MA and instead do so through the Boundary Layer (BL) mechanism. In BL mode, the Keplerian circumstellar disk almost reaches the stellar surface and material is transferred onto the star through the interface between the disk and the stellar surface (known as the boundary layer) <cit.>. An interesting HBe star to note in the context of BL accretion is MWC 297, which is a nearby (∼400 pc; <cit.>), massive young star of B1.5V spectral type. This HBe star has been extensively studied and is known to be actively accreting from its circumstellar disk. It is known to have strong Balmer emission (|Hα EW| > 200 Å), double-peaked [OI] emission and double-peaked CO emission, which is suggestive of the presence of a nearly edge-on circumstellar disk <cit.>. Further, interferometric observations and modelling of MWC 297 did not reveal a cavity or inner gap in the disk <cit.>. Thus, MWC 297 accretes material through the BL mechanism. Similar direct pieces of evidence are not available for other early HBe stars. Hence, this demands the study of a large sample of HBe stars, belonging to diverse environments, to evaluate the mode of accretion and to understand the nature of the circumstellar medium. The motivation for this work comes from <cit.>, who performed an HI line analysis of HAeBe stars using X-Shooter spectra. In addition, they studied the statistical presence of higher-order HI emission lines in HAeBe stars with respect to the stellar parameters, from which they noticed a bimodality in the higher-order emission lines in the sample of early (B0-B5) HBe stars. Given that the ages (pre-main-sequence lifetimes) of early HBe stars are less than 2 Myr and that they have already lost most of their inner circumstellar disk, the analysis of these objects is vital to bridge the stages of intermediate-mass star formation. Using this as a lead, we checked for differences between the properties of the two populations identified from the distinction in the presence of higher-order Balmer emission lines. In the present work, we did further analysis and found the presence of two populations of early HBe stars (B0-B5) in the Hα versus near-infrared (NIR) excess space. Since Hα and NIR excess in HAeBe stars are known to be associated with the inner (∼a few au) circumstellar medium <cit.>, we suspect a difference in the circumstellar medium of the two populations. Hence, a study of these two groups can help in understanding the evolution of the inner circumstellar disk in stars undergoing BL accretion. From a search in the literature, we found three instances where two populations of HBe stars are reported. <cit.> found that most of the Group III stars (with low NIR excess) belong to the B spectral type, and <cit.> proposed a class of "weak-line" HAeBes for stars with no forbidden line emission and Hα equivalent width (EW) < 15 Å. <cit.> observed a dichotomy in the NIR excess of their group I sources. The group I sources either showed high (> 25%) or low (< 10%) NIR excess.
They also found that the strength of NIR excess is correlated with the radius at which CO is detected. The sources with low NIR excess show CO emission at larger radii compared to sources with high NIR excess, which implies that the inner region is depleted in low NIR excess sources. In this work, we report the detection of two populations of early HBe stars and discuss the distinction between their circumstellar medium. The paper is structured as follows. The dataset used in this work is explained in Section 2. The differences in the NIR excess, Hα and spectral features between the two populations are described in Section 3. We also evaluate a few interesting questions regarding the identification of two sub-populations of early HBe stars in Section 4, followed by a summary of this work in Section 5. § DATA USED FOR THIS STUDY We combined the well-studied HAeBe catalogues of <cit.> and <cit.> to compile a comprehensive set of stellar parameters. From this combined catalogue, we selected 37 Herbig Be stars with spectral type earlier than B5 (as estimated by ) for our analysis. The NIR magnitudes (J, H and K_S) used in the study are the photometric data from Two Micron All Sky Survey (2MASS, ) and the mid-infrared (MIR) magnitudes (W1, W2 and W3) are taken from Wide-field Infrared Survey Explorer (WISE, ). Further, the medium-resolution spectra of HBe stars observed with the X-Shooter instrument are retrieved from the archive and are used for the present study. X-Shooter is an Echelle spectrograph mounted at the UT3 Cassegrain focus of VLT telescope (Cerro Paranal, Chile), that provides spectra covering a large wavelength range of 3000–23000 Å, split into three arms and taken simultaneously. The arms are split into the following: the UVB arm from 3000–5600 Å, the VIS arm from 5500–10200 Å, and the NIR arm from 10200–24800 Å. The smallest slits available with widths of 0.5”, 0.4” and 0.4” were used to provide the highest possible resolutions of R = 9700, 18400 and 11600 for the respective UVB, VIS and NIR arms. The X-Shooter spectra were directly downloaded from the ESO Phase 3 archive. The spectra are corrected for barycentric radial velocity based on the observation date. We made use of the Gaia ELS catalogue along with their estimated astrophysical parameters to identify new candidates of intense B-type emission stars, as detailed in Appendix <ref>. § ANALYSIS AND RESULTS In this section, we segregate the early HBe stars into two distinct populations based on observed Hα EW and NIR excess seen in the SED. Further, we outline the differences in the spectroscopic features between two populations of early HBe stars. §.§ Bi-modality in Infrared Excess and Hα EW space Previously, <cit.> had shown the explicit dependence of NIR excess on the observed Hα EW, which suggested the presence of an inner hot disk in HAeBe stars. With the advent of Gaia <cit.> and IPHAS <cit.> surveys, homogeneous mass accretion rate studies have been performed in recent years for a well-categorised sample of HAeBe stars <cit.>. As explained, several indirect pieces of evidence suggest that the late HBe and HAe stars accrete through magnetospheric columns, whereas the MA paradigm may not have an active role in early HBe stars. Since early B-type stars do not have large-scale strong magnetic fields, they accrete through the BL mechanism <cit.>. This change in accretion mode for early B-type stars significantly changes the evolution of the circumstellar disk. 
To decode this, we checked for a correlation between NIR excess and Hα EW by segregating them based on the spectral types and found that two distinct populations of early HBe stars exist in the NIR excess and Hα EW space. The Hα EW for our sample of HBe stars is taken from <cit.>. We use the Lada[The equation defining the Lada index is as follows, n_λ_1-λ_2 = log(λ_2F_λ_2/λ_1F_λ_1)/log(λ_2/λ_1) , where λ_1 and λ_2 represent the wavelength of two bands for which spectral index is calculated. F_λ_1 and F_λ_2 represent the extinction corrected absolute flux measured at λ_1 and λ_2. ] indices to quantify the observed excess in each wavelength regime <cit.>. The IR magnitudes are corrected for extinction using respective A_λ conversion coefficients from <cit.>. The presence of two sub-populations is visualised in the SED plot (Figure <ref>a) with the colour of each line denoting the Hα EW of the star. For visual purposes, we normalised the flux values of all the stars at the J band. As seen, for stars with high Hα EW (> 50 Å), the SED show excesses at shorter wavelengths (J or H bands). And for stars with low Hα EW, the excesses start at longer wavelengths (K_S or more). Even though some outliers show relatively high NIR excess for low Hα emission strength, the overall trend of stars with high Hα EW having high NIR excess is visible. The presence of two distinct populations is also seen in Figures <ref>b and <ref>c, which shows the correlation between NIR indices and Hα EW. It is clear from Figures <ref>a-c, that there exist two different populations of early HBe stars: one with high NIR excess and high Hα EW and the other with low NIR excess and low Hα EW in the sample of early HBe (B0-B5) stars. Since NIR continuum excess is primarily from the inner hot disk, an intense NIR emission and its correlation with Hα emission suggests the presence of an inner hot disk. It is still possible that HBes may have a high MIR/FIR excess caused by the presence of a dusty outer disk. It should be noted that the MIR excess, evaluated using the n(K_S-W1) and n(W1-W3) indices (Figure <ref>d,e), does not distinguish the two populations that is seen in NIR–Hα EW space. It is pertinent to note that the two populations discussed here only differ in the NIR wavelength region and show similar indices in MIR region, which suggests a similarity in the outer disk of these stars. For further analysis, we separate the sample of 37 HBe stars into "intense" and "weak" HBe stars based on the cutoffs shown in Figure <ref>b. The extinction-corrected NIR indices lie close to the photospheric colours for stars with |Hα EW| < 50 Å. We separate the two populations based on median values of the Hα EW and n(J-H) Lada index distributions. The cutoffs used to distinguish intense and weak HBes are |Hα EW| = 50 Å and n(J-H) = -2. Hence, 15 HBe stars satisfying |Hα EW| > 50 Å and n(J-H) > -2 are classified as "intense HBe" stars and 18 HBe stars with |Hα EW| < 50 Å and n(J-H) < -2 are classified as "weak HBe" stars. The remaining four stars did not fit under these definitions and hence were removed from further analysis. The basic stellar parameters of 15 intense and 18 weak HBe stars are provided in Appendix Table <ref>. It should be noted that non-simultaneity of observation exists between Hα EW, 2MASS and WISE observations. However, this would not affect our analysis as HBe stars are not highly variable compared to HAe stars (refer to Fig. 8 of <cit.>). 
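As an illustration of the Lada index and of the intense/weak split defined above, the following is a small sketch; it assumes extinction-corrected fluxes in consistent units, and the function names are ours.

```python
import numpy as np

def lada_index(lam1, lam2, f_lam1, f_lam2):
    """Spectral index n_(lam1-lam2) = log(lam2*F_lam2 / (lam1*F_lam1)) / log(lam2/lam1),
    computed from dereddened fluxes at two bands (e.g. J and H)."""
    return np.log10((lam2 * f_lam2) / (lam1 * f_lam1)) / np.log10(lam2 / lam1)

def classify_hbe(halpha_ew_abs, n_jh):
    """Intense/weak classification used in this work (|EW| in Angstrom)."""
    if halpha_ew_abs > 50 and n_jh > -2:
        return "intense"
    if halpha_ew_abs < 50 and n_jh < -2:
        return "weak"
    return "unclassified"
```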
Moreover, the role of A_V is critical in estimating the NIR excess of the Herbig stars. However, as discussed in section 4.1, the uncertainty in the A_V value does not affect the presence of two sub-populations in NIR excess vs Hα space. §.§ Difference in the spectral features of HBe stars belonging to two subpopulations In addition to the differences in NIR excess and Hα EW of the two sub-populations, there exist clear differences in various emission lines that are often found in HAeBe stars. Based on the presence of higher-order HI emission lines in Herbig stars, <cit.> pointed out that some young stars lose the dynamic inner circumstellar medium very early in their PMS phase. The difference in the circumstellar medium manifests in the presence/absence and the intensity of spectral lines. Hence, we evaluate the distinction in the spectral lines/features using the VLT/X-Shooter spectra. Only 13 out of 37 HBe stars in our list have X-Shooter spectra. To maintain the homogeneity in the analysis, we use only the available X-Shooter spectrum and do not consider spectra from other instruments. We plan to perform a spectroscopic survey of all 37 HBe stars in the future, which is beyond the scope of this work. This section details differences in various spectral features observed in the X-Shooter spectrum belonging to five intense and eight weak HBe stars. §.§.§ HI emission lines Figure <ref> shows the differences in Balmer emission lines of intense and weak HBe emitters. Over the years, it has been clear that the Hα profile, especially in the case of YSOs, is a combination of emission through different mechanisms such as accretion, disk, and/or winds. Furthermore, it is susceptible to optical depth effects <cit.>. We see that (in the top right panel of Figure <ref>) the Hα profiles of intense emitters are not symmetrical and show weak blueshifted absorption in all the cases. This is different from the profiles of weak emitters, in which most of them show symmetrical double peaks and only one (HD 141926) shows blueshifted absorption. These profile variations also indicate the differences in the inner circumstellar environment of HBe stars, similar to what we assessed from the SED analysis. The blueshifted absorptions are seen at velocities of ∼150-300 km/s, similar to the wind velocities reported in <cit.>. These differences are not only seen in Hα but also in Hβ and Hγ lines (Figure <ref>). The detection of blueshifted absorption features and non-detection of redshifted absorption features in HBe stars suggests the possibility of BL being the favourable accretion mechanism <cit.>. In addition, we explored the advantage of X-Shooter spectra to study the profiles of Paschen and Brackett lines observed during the same epoch, which are shown in Figures <ref> and <ref>, respectively. We see that the blueshifted features are not prominent in Paβ, Paγ and Brγ lines (Figure <ref>), suggesting that these features can be wavelength dependent and/or due to the contribution from different emission regions. However, the blueward asymmetry is still present in the lower-order lines of the Paschen and Brackett series. It should be noted that in the case of higher-order lines of the Paschen series, the asymmetry in the profiles can be seen to an extent. However, the asymmetry completely disappears for higher-order lines of the Brackett series. This again points to the wavelength-dependent opacity of the medium, causing the blueshifted absorption. 
Interestingly, there is no distinction between intense and weak emitters when it comes to the presence of higher-order lines of the Brackett series (Figure <ref>). §.§.§ FeII emission lines HAeBe stars show emission lines belonging to metallic species such as FeII, CaII triplet, OI and [OI] lines in their spectra <cit.>. We also observe differences in intense and weak emitters based on the presence of these metallic lines. FeII lines (particularly those present in the UVB arm of X-Shooter spectra) are seen in emission in intense HBe stars, whereas they are completely absent in weak emitters (Figure <ref>). Since FeII lines (especially 4924 Å) are seen in P-Cygni morphology in three of five intense emitters, they may arise from a region of outflow. Furthermore, many weak emission lines in the wavelength range 4350–4650 Å of intense HBe stars are completely absent in weak HBe stars. §.§.§ CaII emission lines CaII triplet lines (8498 Å, 8542 Å, and 8662 Å) require lower energy for ionisation and can form in cooler regions of the circumstellar disk. CaII triplet lines are seen in three (out of five) intense emitters and in one (MWC 953) of the weak emitters (Figure <ref>). <cit.> studied the CaII triplet emission in HAeBe stars and noted that 71% of the HBe stars show CaII triplet in emission. They propose that the CaII triplet emission should come from denser regions of the hot star. The low-ionisation energy (Energy of the upper level in the transition [E_upper] ∼ 3.1 eV) of the triplet means they arise from cooler regions away from the star and may not be associated with the inner hot gaseous disk. Further, the observed ratios of CaII triplet emission is 1:1:1 instead of ∼1:10:6, as expected by the ratio of oscillator strengths (log(gf))[https://physics.nist.gov/PhysRefData/ASD/lines_form.html]. Hence they arise from an optically thick region. It is also interesting to note that three stars showing CaII triplet also show CaII doublet emission (8912 Å and 8927 Å). The CaII doublet ratio is also close to unity with 8927 Å slightly more intense than 8912 Å. As discussed in <cit.>, the CaII may be arising due to greater saturation of triplet lines caused by the optical thickness of the CaII emitting medium. The difference in energy levels of CaII triplet (E_upper ∼ 3.1 eV) and doublet (E_upper ∼ 8.4 eV) lines points to different regions of line formation. Interestingly, a weak HBe star, MWC 953, shows both CaII triplet and CaII doublet lines in emission (although weak compared to intense emitters). §.§.§ OI emission lines <cit.> showed that the most likely excitation mechanism for the formation of OI lines is Lyβ fluorescence. They observed that the emission strengths of 8446 Å and 11287 Å are more intense than the adjacent OI lines at 7774 Å and 13165 Å. We observed OI lines in emission in all five intense HBe stars and one (MWC 953) of the weak emitters (Figure <ref>). It should be noted that CaII triplet emission and OI lines seen in MWC 953 are weaker and narrower compared to the lines observed in intense emitters. For intense HBe stars, V921 Sco and Hen 3-1121, the ratio of the observed pair of lines (8446/7774 and 13165/11287) follows the ratio observed in <cit.>. The observed [log(F(8446)/F(7774)), log(F(11287)/F(13165)] ratios are [0.79, 0.68] and [0.59, 0.74] for V921 Sco and Hen 3-1121, respectively[We did not measure ratios for other intense HBe stars since OI lines show p-cygni profile, which can affect the calculation]. 
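The line-ratio diagnostics used in the last two subsections (the CaII triplet ratios compared with the expectation from oscillator strengths, and the logarithmic OI flux ratios) can be scripted in a few lines. In the sketch below the log(gf) values are rounded, illustrative numbers (the analysis above relies on the NIST database) and all fluxes are invented:

import math

# Approximate oscillator strengths of the CaII IR triplet (illustrative values;
# the text uses NIST). In the optically thin limit the emitted fluxes scale
# roughly with gf, which is the origin of the ~1:10:6 expectation quoted above.
LOG_GF = {8498: -1.31, 8542: -0.36, 8662: -0.62}

def expected_triplet_ratios(log_gf=LOG_GF):
    gf = {lam: 10.0 ** value for lam, value in log_gf.items()}
    ref = gf[8498]
    return {lam: round(g / ref, 1) for lam, g in gf.items()}

def observed_ratios(fluxes):
    """Normalise measured line fluxes to the 8498 A component."""
    ref = fluxes[8498]
    return {lam: round(f / ref, 2) for lam, f in fluxes.items()}

def log_flux_ratio(f_num, f_den):
    """log10 of a line-flux ratio, e.g. log(F(8446)/F(7774)) for the OI diagnostic."""
    return round(math.log10(f_num / f_den), 2)

print(expected_triplet_ratios())                                       # roughly 1 : 9 : 5
print(observed_ratios({8498: 1.1e-13, 8542: 1.2e-13, 8662: 1.0e-13}))  # close to 1 : 1 : 1
print(log_flux_ratio(6.2e-13, 1.0e-13))                                # 0.79

A CaII triplet ratio close to unity, far below the gf expectation, is what motivates the optically thick interpretation given above.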
We also note that the intensity of OI directly correlates with the spectral type. The intensity is maximum for V921 Sco (B0-B1), which falls off for Hen 3-1121 (B1-B2) and eventually becomes negligible for PDS 37 (B2-B3). The OI lines (especially the IR lines) are non-existent for PDS 27 and HD 323771 (B4-B5). Considering [OI] lines, which are used as wind indicators in HAeBe stars <cit.>, we see that V921 Sco, PDS 37, and HD 323771 show both 6300 Å and 6363 Å lines. The rest of the stars studied do not show [OI] in emission. §.§.§ HeI 10830 Å emission feature More importantly, the meta-stable HeI 10830 emission line is used as a probe to study the dynamics of mass flows around HAe/Be stars<cit.>. The blueshifted features observed in HI lines are also seen in the HeI 10830 Å line of intense HBe stars (Figure <ref>. The blueshifted (∼ 150-300 km/s) HeI 10830 Å line mimicking the blueshifted feature in Balmer profiles shows that the absorption mechanisms in both cases can be physically related. Hence, the blueshifted absorption feature in Balmer lines may come from stellar wind in the inner circumstellar region or an expanding envelope along the line of sight to the star, which is missing in weak HBes. § DISCUSSION §.§ Role of A_V Since B-type stars evolve rapidly and reach the main sequence faster than A-type stars, HBe stars are more likely to be associated with the pre-natal cloud. Hence, the line-of-sight extinction can be higher towards the early-type stars. The estimation of A_V towards the early B type depends on identifying accurate spectral type using high-resolution spectra. Hence, the A_V estimation will always carry an inherent uncertainty. To ensure that the NIR excess estimates are not influenced due to the underestimation or overestimation of the A_V value, we varied the A_V by 25% and 50% to see the change in the indices. Figure <ref> shows the change in indices if the A_V is varied. We can see that even if the A_V is varied by 50%, the Lada indices do not vary significantly. The pattern of two populations in n(J-H) is still valid even if the A_V is under/overestimated by 50%. Hence, the A_V value used in this study does not affect the identification of two sub-populations in early HBe stars. The NIR excess seen in intense HBe stars is genuine and is not affected by the uncertainty in the A_V parameter. §.§ Bimodality in accretion rates There are important studies (e.g. , and ) which have estimated the accretion rates of HBe stars homogeneously using Hα EW values. Using the correlation between accretion rates and stellar mass, <cit.> showed that the accretion mode changes at around 4 M_⊙. It also can be seen from their analysis (Figure 7(right) of <cit.>) that there exists a large scatter in the mass accretion rates of Herbig Be stars. Although there exists no clear distinction, we can see that the range of mass accretion rates can vary from 10^-7 to 10^-3 M_⊙ yr^-1. However, as mentioned, the Hα emission line has many contributions and suffers from opacity issues. As seen from the Hα profiles of intense HBe stars, there is a blueshifted absorption component, which reduces the total Hα strength and underestimates the mass accretion rate. It is possible to overcome the opacity issue using the Brγ line, as proposed by <cit.>. As pointed out by them, there is a change in slope of log (Ṁ_acc) vs log(M_⋆) plot for high-mass objects when compared with low-mass HAes. Though there is no direct sample overlap between our work and <cit.>, we point to Figure 11 of <cit.>. 
The figure mimics our finding that two different populations exist in mass accretion rate values for stars belonging to the B0-B5 spectral range. The Ṁ_acc values in <cit.> for early HBe stars ranges from 10^-7 to 10^-3 M_⊙ yr^-1. One set of early HBe stars have Ṁ_acc in the range of 10^-4.5 to 10^-3 M_⊙ yr^-1 and another set of stars have values in the range 10^-6 to 10^-7 M_⊙ yr^-1. Under the assumption that Brγ emission directly correlates with the mass accretion rate, the two to three orders of magnitude difference in the mass accretion rates between the two populations is evidence of intense HBe stars undergoing accretion through BL and that the active accretion phase has ended in weak HBe stars. §.§ Spectral features observed in other spectral ranges The interesting spectral features seen in intense HBe stars, such as FeII and [FeII] lines in the UVB band of X-Shooter spectra, are not limited to this spectral type. We checked all the X-Shooter spectra of Herbig stars to see the incidence of intense FeII and [FeII] emission lines. We note that stars belonging to B5-B9 (HD 259431, PDS 133 and HD 85567) and A0-A5 (V380 Ori and Z CMa) also show spectral features similar to intense HBe stars. Interestingly, some of these stars have been studied in detail using various techniques. The stars V380 Ori and Z CMa are known to be associated with Herbig-Haro outflows and bipolar jets <cit.>. HD 85567 and HD 259431 stars are shown to have an inner compact gaseous disk of size <1 au <cit.>. Even though they belong to later spectral types, the commonality between the spectral features could mean that the properties of the emission medium can be identical. This can be considered as indirect evidence for the presence of inner gaseous disks and intense outflows in the sample of intense HBe stars. §.§ Comparison with other types of Be stars A natural question with the classification of weak HBe stars is whether they truly belong to the Herbig category. Given the mass range of B0-B5 stars (5 - 18 M_⊙), they typically stay in the PMS phase for less than 2 Myr (MIST; ). Hence, finding them in the PMS phase is difficult due to the rapid evolutionary timescales. Thus, the validity of these massive stars being in the Herbig phase is always under scrutiny. Since the photometric and spectroscopic signatures of weak HBes and Classical Be (CBe) stars are very similar, CBe stars may be masquerading as the weak HBes in our sample. Similarly, over the years, stars such as MWC 137, classified as young HBe stars, turned out to be evolved supergiant stars by recent studies <cit.>. In the context of intense HBe stars, it is essential to mention the stars classified in the literature as "B-type stars showing B[e] phenomenon". B[e] phenomenon was described by <cit.> as B-type stars showing high NIR excess and intense Hα emission along with allowed and forbidden (especially [OI], [FeII] and [NII]) emission lines in their spectra. The B[e] phenomenon does not occur at a particular evolutionary stage. Instead, it has been observed in pre-main sequence stars, supergiants, Symbiotic binaries, and Planetary Nebulae (PNe). B[e] stars are highly reddened, and their spectra do not show any photospheric lines. It has been shown by <cit.> that stars showing B[e] are not part of young clusters or nebulosity. The similarity in spectral and NIR characteristics of our intense HBes and B[e] stars can affect the classification. Hence, the validity of the young nature of our HBe sample needs to be rechecked. 
In this section, we assess the information available in the literature to confirm the HBe nature of our sample of stars. Intense HBe stars: Hen 3-1121 is confirmed to be a young HAeBe star by <cit.> through high-resolution spectroscopy. PDS 37 and 27 are studied in detail to be massive YSO stars undergoing intense accretion (10^-3 to 10^-4.5 M_⊙ yr^-1) and are on track to become O-type stars. They are observed to have flattened inner disk structure <cit.>. V921 Sco was first classified as a Bep supergiant star by <cit.>. However, due to the non-detection of the CO bandhead and the observed emission region of Brγ, it is classified as a pre-main sequence star by <cit.>. There is no relevant literature about HD 323771 star, except that it belongs to B5V spectral type <cit.>. Weak HBe stars: There has been considerable ambiguity regarding the evolutionary phase of HD 76534 in the literature. It belongs to Vela R2 association <cit.>, which is a young star formation region, and the star has a nebular region associated with it. Recently, <cit.> have modelled the circumstellar disk and have shown the presence of a passive, optically thick disk. From interferometric studies, it has been suggested that HD 141926 possibly accrete through the BL mechanism <cit.>. PDS 241 was suggested to be CBe star due to its low MIR flux <cit.> but has been studied as a HAeBe star in the recent works of <cit.> and <cit.>. HD 313571 and MWC 953 are confirmed as HAeBe stars by high-resolution optical spectroscopy <cit.>. They are also known to be associated with nebulosity in Spitzer images. There are no relevant literature about the nature of PDS 286 and HD 36408. It should be noted that there is no distinction between the intense and weak HBe stars based on age, mass and luminosity values (based on the values listed in <cit.>). Hence, it could be said that the intense and weak HBe stars belong to similar evolutionary phases. However, we are aware of the caveat here that the characteristics of intense HBe stars such as, intense HI emission, [FeII] emission and blue-shifted absorption features in HI lines are observed in supergiant B[e] (sgB[e]) stars as well <cit.>. A fail-safe method to distinguish HBe and sgB[e] stars is through the detection of the ^13CO bandhead emission and ratio of ^12CO/^13CO features <cit.>. Hence, high-resolution K-band spectroscopy is required to confirm the validity of the young nature of the intense HBe stars. In addition, V921 Sco is identified to have a very close (∼0.025") late B-type companion star through interferometry <cit.>. They suggest that the B[e] phenomenon observed in V921 Sco is consequence of binary interactions, where the forbidden-line emitting material gets ejected during the episodic interaction phases. Thus, it could be possible to explain the features of intense HBe stars through interaction with their undetected close binary companions. To explore the possibility of close undetected companions and to confirm the evolutionary phase of intense HBe stars, we plan to perform high angular resolution interferometry and high-resolution K-band spectroscopy in the near future. Since previous studies indicate that most of the intense and weak HBe stars are found to be young and associated with the nebulosity, we treat them as young PMS stars in this work. § SUMMARY AND CONCLUSIONS In this work, we made use of the archival photometric and spectroscopic data available for early HBe stars to study the two sub-populations within the sample of early HBe stars. 
We analyse the correlation between NIR excess and Hα emission strength, supported by the distinction in the spectral features, to show that the inner circumstellar medium belonging to intense and weak HBe stars are different. Though there have been indications for two sub-populations of early HBe stars, this is the first work to point out the clear differences between them using IR photometry and various spectroscopic features using VLT/X-Shooter spectra. The major results are summarised below, * The NIR excess, as quantified by the n(J-H) Lada index, shows that one population of HBe stars have weak NIR excess, whereas the other population of HBe stars have very high NIR excess. Since the NIR excess corresponds to the inner circumstellar medium of the Herbig star, the dichotomy between the populations also corresponds to the difference in the evolution of the inner circumstellar medium. * Furthermore, the stars with high NIR excess also seem to show higher Hα EW and vice-versa. The emission strength of Hα has always been directly used to estimate the mass accretion rate in PMS stars. The high Hα EW values, thus also correspond to higher rate of accretion in these stars. * Since the early HBe stars are known to accrete through the BL mechanism, we propose that the difference in Hα and NIR excess indirectly points to two types of early HBe stars – those HBe stars with an inner disk and accreting through BL mechanism, and a second population with an evolved circumstellar medium where the BL accretion has stopped. * This is also supported by the differences in the spectral features seen in X-Shooter spectra for a subsample of HBe stars. Most importantly, the Hα profiles of all intense HBe stars show blueshifted absorption features. However, in the weak HBe stars, the Hα profiles are symmetrical. * The blueshifted absorption in intense HBe stars is also seen in other HI lines such as Hβ, Paβ, and Brγ. It should be noted that the blueshifted absorption is not relatively well pronounced in Brackett and Paschen lines, but the blueward asymmetry of the profiles is clear. * The metallic lines, OI, [OI], and CaII triplet often seen in HAeBe stars also show the distinction between intense and weak HBes. OI 7774 Å, 8446 Å and [OI] 6300 Å, 6363 Å lines are only present in intense HBe and completely absent in weak HBe. Since [OI] lines form due to outflows from the star, it can be concluded that there are no stellar winds or outflows from weak HBe stars. * HeI 10830 Å, a meta-stable transition line, has been studied extensively by <cit.> to estimate accretion and mass flow around Herbig stars. The non-detection of blueshifted HeI 10830 Å in weak HBe shows no mass flow from the star. It acts as an indirect indicator of dampened activity in the circumstellar medium of weak HBe stars. The blueshifted velocities of the HeI 10830 Å absorption feature also correspond with the blueshifted velocities seen in Hα and Hβ profiles in intense HBe stars. * Many lines belonging to different multiplets of FeII and [FeII] transitions are seen in the wavelength range 4200–4450 Å in intense HBe stars and are absent in the weak HBe stars. Another interesting spectral feature we noted in the presence of CaII doublet lines at 8912 Å and 8927 Å  in three intense HBe stars. * Most of the stars used in this study are classified as Herbig Be stars from extensive, independent studies. Hence, the possibility of B[e] stars and CBe stars being classified as intense HBe and weak HBe, respectively, can be neglected. 
* We have identified 44 intense HBe candidates from the latest Gaia DR3 database. A detailed spectroscopic analysis is necessary to confirm the intense HBe nature of these 44 candidates (Appendix <ref>). We want to thank the Science & Engineering Research Board (SERB), a statutory body of the Department of Science & Technology (DST), Government of India, for funding our research under grant number CRG/2019/005380. AS and RA acknowledge the financial support from SERB POWER fellowship grant SPF/2020/000009. The authors are grateful to the Centre for Research, CHRIST (Deemed to be University), Bangalore, for the research grant extended to carry out the current project (MRPDSC-1932). We thank the SIMBAD database and the online VizieR library service for helping us with the literature survey and obtaining relevant data. This work has made use of the ESO Phase 3 science archive facility. aa § IDENTIFICATION OF NEW INTENSE HBE CANDIDATES FROM GAIA DR3 As shown in previous sections, two distinct populations of massive Herbig stars seem to have drastically different inner circumstellar environments. The works of HAeBe stars over the years have been skewed towards late HBe and HAe stars. Only in the recent works (e.g., ), effort has been taken to include more HBe stars into the analysis. Using the latest Gaia DR3, it is possible to identify new intense HBe candidates. Gaia DR3 provides spectroscopic parameters for 2.3 million hot stars and a catalogue of 57,511 emission-line stars based on the pseudo-equivalent width (pEW) measurements. The catalogue provides various ELS classes such as `beStar', `TTauri', `HerbigStar' and `PN' based on their BP/RP spectra. We use the ELS catalogue given by Gaia DR3 and 2MASS colours to search for more intense HBe candidates. In order to search for new intense HBe stars from Gaia DR3, we queried the ELS catalogue using the Gaia ADQL facility. We cross-matched the stars classified as `HerbigStar' in Gaia DR3 with the 2MASS catalogue and got 3825 sources. <cit.> proposed a second-order polynomial to convert pseudo-EW to observed Hα EW of emission-line stars. We used the highest piece-wise fit parameter values in Figure 3 of <cit.> to estimate the observed Hα EW of the stars. Then, 325 stars with the estimated |Hα EW| > 50 Å  are retained. To find the hot stars from this sample, we selected those stars with teff_gspphot > 15000 K. The estimate teff_esphs was not used because it was not available for a significant fraction of our candidate objects. After the teff_gspphot cutoff, 65 stars are retained. Further, n(J-H) was estimated for these 65 sources using the extinction value provided by Gaia DR3. Finally, we have identified 44 potential intense HBe candidates from this analysis. Of 44 candidates, 36 are already recorded in the SIMBAD database. A table containing information regarding the possible intense HBe candidates is provided in Table <ref>. Even though Gaia DR3 has classified these stars as `HerbigStar', one should be cautious about accepting this classification. Some stars given in Table <ref> may belong to PNe and supergiant phases. Through this section, we intend to demonstrate the use of large-scale surveys such as Gaia to identify interesting set of objects such as intense HBe stars. A detailed and careful spectroscopic study is needed to prove the young nature of these stars. § TABLE CONTAINING THE DETAILS OF EARLY HBE STARS STUDIED IN THIS WORK
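For reference, the sequence of cuts used in the Gaia DR3 search for new intense HBe candidates described in the appendix above can be summarised in a short script. The column names and the coefficients of the second-order pEW-to-EW polynomial below are placeholders, not the values actually adopted (those come from the cited calibration), so the snippet documents the order of the cuts rather than the exact query:

import pandas as pd

# Hypothetical coefficients of the second-order pseudo-EW -> observed EW polynomial.
A2, A1, A0 = 1.0, 1.0, 0.0

def select_intense_candidates(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the appendix cuts to a Gaia DR3 'HerbigStar' x 2MASS cross-match.
    Assumed columns: pew_halpha (A), teff_gspphot (K), n_jh (Lada index)."""
    df = df.copy()
    df["halpha_ew"] = A2 * df["pew_halpha"] ** 2 + A1 * df["pew_halpha"] + A0
    strong_emitter = df["halpha_ew"].abs() > 50.0   # |Halpha EW| > 50 A
    hot_star = df["teff_gspphot"] > 15000.0         # keep early B-type temperatures
    nir_excess = df["n_jh"] > -2.0                  # extinction-corrected Lada index cut
    return df[strong_emitter & hot_star & nir_excess]

# df = pd.read_csv("els_herbigstar_x_2mass.csv")    # hypothetical cross-match table
# print(len(select_intense_candidates(df)))         # 44 candidates in the present work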
http://arxiv.org/abs/2307.02265v2
20230705130426
Manifold-constrained free discontinuity problems and Sobolev approximation
[ "Federico Luigi Dipasquale", "Bianca Stroffolini" ]
math.AP
[ "math.AP", "math.FA" ]
We study the regularity of local minimisers of a prototypical free-discontinuity problem involving both a manifold-valued constraint on the maps (which are defined on a bounded domain Ω⊂^2) and a variable-exponent growth in the energy functional. To this purpose, we first extend to this setting the Sobolev approximation result for special functions of bounded variation with small jump set originally proved by Conti, Focardi, and Iurlano <cit.> for special functions of bounded deformation. Secondly, we use this extension to prove regularity of local minimisers. § INTRODUCTION Let Ω⊂^2 be a bounded open set, p : Ω→ [1,+∞) be a measurable function (called a variable exponent in what follows), and ℳ be a compact, connected, smooth Riemannian manifold without boundary. In this paper, we deal with special functions of bounded variation from Ω into ℳ whose approximate differential is integrable with respect to the variable exponent p(·) over Ω and whose jump set has finite ℋ^1-measure. The space of such functions will be denoted by the symbol SBV^p(·)(Ω, ℳ); we refer the reader to Section <ref> for its formal definition as well as for basic material on functions that are integrable with respect to variable exponents. Specifically, SBV^p(·)(Ω, ℳ) is the subspace of SBV(Ω, ^k) made up of those functions u such that u(x) ∈ℳ for almost every x ∈Ω, the approximate differential ∇ u of u is p(·)-integrable over Ω, and the jump set J_u of u has finite ℋ^1-measure (in dimension n, finite ℋ^n-1-measure). In this work, we first prove an approximation result for maps of class SBV^p(·)(B_ρ, ℳ) with small jump set by functions that are also of Sobolev class in slightly smaller balls. (Here, B_ρ denotes any ball in ^2.) Then, we apply such a result to the study of the regularity of local minimisers among ^k-1-valued maps, where ^k-1 denotes the unit sphere in ^k and k ≥ 2, of an energy functional with nonstandard growth. More precisely, we consider, for u ∈ SBV(Ω, ^k-1), the following energy functional: F(u, Ω) := ∫_Ω∇ u^p(x) x + ℋ^1(J_u ∩Ω). The functional F(u, Ω) is finite exactly on SBV^p(·)(Ω, ^k-1), and we say that a function u belonging to SBV^p(·)(Ω, ^k-1) is a local minimiser (among ^k-1-valued maps) of F if and only if F(u, Ω) ≤ F(v, Ω) for every v ∈ SBV(Ω, ^k-1) such that { u ≠ v }⊂⊂Ω. The functional F may be seen as a prototypical energy involving both a manifold-valued constraint on the maps and a nonstandard growth (more precisely, p(·)-growth) in the energy functional. Note that the functional F reduces to the p(·)-energy w ↦∫_Ω∖J_u∇ w^p(x) x out of the closure of the jump set J_u of u. 
Since bounded special functions of bounded variation are of Sobolev class outside J_u, we see that any local minimiser of F is a local minimiser among ^k-1-valued maps of the p(·)-energy (i.e., a p(·)-harmonic map into ^k-1) in the open set Ω∖J_u. However, although J_u is always negligible for the Lebesgue measure, there is no reason, a priori, for J_u to not be the whole Ω and, as we shall see in the next paragraphs, proving that J_u is “small” (more precisely, essentially closed) is indeed the most difficult step towards the regularity of local minimisers of F. Main results In order to state our main results, let us set the basic notation, referring to Section <ref> for more terminology. In all this work, unless stated otherwise, the following assumptions on the variable exponent p : Ω→ [1,+∞) will be in force: p_1 p ; p_2 x ∈Ω, 1 < p^- ≤ p(x) ≤ p^+ < 2, where p^- := _x ∈Ωp(x) and p^+ := _x ∈Ωp(x). The assumption of log-Hölder conti-nuity is customary in the study of functionals with p(·)-growth, c.f., eg., <cit.>. We recall this notion along with its geometric meaning in Definition <ref> and in Lemma <ref> below. (We are going to comment on the assumption p^+ < 2 in the next paragraphs.) On the manifold ℳ, we assume throughout it is a compact, connected, smooth Riemannian manifold without boundary, isometrically embedded in ^k, for some k ∈. Under these assumptions, it well-known that there exists a locally smooth retraction 𝒫 : ^k ∖𝒳→ℳ, where 𝒳 is a smooth complex of codimension 2, with locally q-integrable gradient, for every q ∈ [1,2). (See Section <ref> for more details.) This retraction plays a key rôle in the proof of our first main result, Theorem <ref> below. Let p : Ω→ (1,+∞) be a variable exponent satisfying (p_1), (p_2). Let ℳ be a compact, connected, smooth Riemannian manifold without boundary, isometrically embedded in ^k, for some k ∈. There exist universal constants ξ, η > 0 such that for any s ∈ (0,1) and any u ∈^p(·)(B_ρ, ℳ) (where B_ρ is any ball in ^2) satisfying ℋ^1(J_u ∩ B_ρ) < η(1-s) ρ/2, there exists a function w ∈ W^1,p(·)(B_sρ, ℳ) ∩^p(·)(B_ρ, ℳ) and a family ℱ of balls such that the following holds. The function w coincides with u a.e. outside of the union of the balls in the family ℱ. Such a union is contained in B_(1+s)ρ/2 and controlled in measure and perimeter by ξ, η, ρ, and ℋ^1(J_u ∩ B_ρ). Moreover, w has less jump than u in B_ρ and ∫_B_ρ∇ w^p(x) x is controlled by ρ, p^-, p^+, the log-Hölder constant of p(·), ℳ, 𝒫, k, and ∇ u_L^p(·)(B_ρ). Theorem <ref> is an abridged version of a more precise and descriptive statement, Theorem <ref> below. Theorem <ref> may be seen as an extension to our manifold-valued, variable-exponent setting of a result by Conti, Focardi, and Iurlano <cit.>, which was developed in the context of planar domains, constant exponents, and (unconstrained) ^p functions (i.e., for special functions of bounded deformation with approximate symmetric gradient in L^p and jump set with finite ℋ^1-measure) with small jump set, and later applied in <cit.> to the study of Griffith's type brittle fracture functionals. As we shall see in more detail later, the result is confined to the two dimensional setting and cannot be directly extended to higher dimensions. The seminal idea for studying the regularity of local minimisers of free discontinuity problems for maps of class is due to De Giorgi, Carriero, and Leaci <cit.> and lies in showing that the jump set of any local minimiser u is essentially closed, i.e., satisfies ℋ^1( Ω∩( J_u∖ J_u ) ) = 0. 
In the constant exponent case, once this is done, standard elliptic regularity yields that u ∈ C^1( Ω∖J_u). Indeed, out of S_u, the functional F reduces to the p-harmonic energy, i.e., to the functional w ↦∫_Ω∇ w^p x. These ideas, firstly developed for scalar-valued functions in <cit.>, were extended to the case of ^k-1-valued maps (and constant p) in <cit.>. Very recently, they have been adapted to the scalar-valued, variable-exponent setting in <cit.> (for a larger class of convex functionals, of which (<ref>) is the prototype). In all these works, a major technical tool is the Sobolev-Poincaré inequality for -functions due, once again, to De Giorgi, Carriero, and Leaci <cit.>. Here, we follow the approach devised by Conti, Focardi, and Iurlano <cit.>, which avoids the use of truncations (fundamental in the proof of the Sobolev-Poincaré inequality in <cit.>), by relying, instead, on the Sobolev approximation. With the aid of Theorem <ref>, we prove Theorem <ref> below, which is our second main result in this paper. Before stating the theorem, we have to introduce a strengthening on the assumption (p_1): p_1' p , see Definition <ref> and Remark <ref> about this condition and its rôle in the proof of Theorem <ref>. Let Ω⊂^2 be a bounded, open set. Let p(·) be a variable exponent satisfying (p_1'), (p_2). Assume u ∈^p(·)(Ω, ^k-1) is any local minimiser of the functional F(·,Ω) defined by (<ref>). Then, the jump set J_u of u is essentially closed, i.e., ℋ^1(Ω∩(J_u∖ J_u)) = 0. Moreover, if in addition p ∈ C^0,α(Ω) for some α∈ (0,1], then there exists a relatively open set Ω_0 ⊂Ω∖J_u and β_0 ∈ (0,1), with ℋ^2-p^-( ( Ω∖J_u) ∖Ω_0) = 0, such that u ∈ C^1,β_0_ loc(Ω_0, ^k-1). The number β_0 depends only on k, ℳ, 𝒫, p^-, p^+, [p]_0,α, α. In particular, u ∈ C^1(Ω_0 ∖J_u, ^k-1), which is (almost) precisely the result expected from <cit.> (in <cit.>, local C^1,β_0-regularity outside of J_u was proven in two dimension even in the ^p setting). In the particular case where p is constant, we can take Ω_0 = Ω in Theorem <ref> (see Remark <ref>), recovering classical results in <cit.> via a different technique. Before illustrating the ideas of the proofs of Theorem <ref> and Theorem <ref>, let us motivate the interest towards our results. Background and motivations In recent years, there have been an incredible amount of papers accounting, under different perspectives, for approximations of functions of special bounded variation or even special bounded deformation <cit.>. The arising of such variety of approximation results is due to their multiple applications in many different problems in the calculus of variations, connected (especially but not exclusively) with image segmentation and fracture mechanics. The most known examples are probably the variational problems associated with the nowadays classical Mumford-Shah functional and the Griffith static one. Among the above-quoted works, the most relevant to our purposes are <cit.>. In particular, in <cit.>, relying on the Sobolev approximation, the authors were able to prove integral representation results and existence of strong minimisers for Griffith's functional (actually, for a wider class of functionals) defined over ^p(Ω), where Ω is a two-dimensional domain. The Sobolev approximation result has been extended to any dimension in <cit.>. However, in <cit.>, differently from <cit.> and Theorem <ref> above, a small, exceptional set whose perimeter and volume are controlled by the size of the jump has to be removed from the domain. 
It is a priori unclear whether removing this small set is really necessary or not. Here, in Appendix <ref> we show that there is no counterpart, in dimension higher than 2, of the construction of the Sobolev approximation in <cit.> and in the present paper under assumption (<ref>) alone. This suggests that the construction in <cit.> is, in fact, optimal. We are facing a new model in two dimension, where the elastic part has a variable exponent dependence in the gradient term. In the calculus of variations, energy functionals with p(·)-growth in the gradient have been proposed in the modelling of materials which exhibit a strongly anisotropic behaviour starting from the works <cit.>. In the Sobolev setting, the regularity of minimisers has been analysed to a certain degree of generality, both in the scalar and in the vector-valued, unconstrained case, see, e.g., <cit.>. Less literature is available for manifold-constrained maps. However, the regularity problem for p(·)-harmonic maps has been recently considered in <cit.>. In the last years, several works <cit.> have undertaken the study of energy functionals with nonstandard growth defined over spaces of maps of class ^p(·) or even ^p(·) or ^ψ. The aims of these works range from lower semicontinuity results <cit.> to integral representation theorems <cit.>, up to regularity in the scalar-valued case <cit.>. Spaces of functions of (special) bounded variation and values into a Riemannian manifold ℳ have been recently studied in <cit.>. The authors of <cit.> are mainly concerned with constructing liftings of such mappings from the manifold ℳ to its universal cover ℳ. The interest towards such problem was initially stimulated by applications to the Landau-de Gennes theory of liquid crystals and to the Ginzburg-Landau model of superconductivity. The present work is, to the best of our knowledge, the first that considers the prototypical energy functional F in (<ref>) for manifold-valued special functions of bounded variation with L^p(·)-integrable approximate differential. The functional F in (<ref>) is itself a particular case of a more general one: ℱ(u, Ω) := ∫_Ω∇ u^p(x) x + ∫_J_uϕ_0(u^+,u^-, ν) ^̋1 where u∈^p(x)(Ω, ℳ) and ϕ_0 is -elliptic, see <cit.>, and bounded away from zero. Functionals like these appear in the theory of liquid crystals, for example nematics and, in this case, the manifold is isomorphic to ^1. Another model, related to smectic thin films, can be found in a recent paper <cit.>. Here a free discontinuity problem is proposed in order to describe surface defects in a smectic thin film. The free energy functional contains an interfacial energy, which penalises dislocations of the smectic layers at the jump. The function space is a subspace of function of bounded variation ^2(Ω, ℝ^2) with values in a suitable manifold 𝒩. The reason why we are confining ourselves in two dimension is that we are relying on a generalization of a Sobolev approximation for SBV functions, that was originally proved for the fixed exponent p in <cit.>. Based on this result, we would prove the existence of strong minimisers and essential closeness of the jump set, see <cit.>. Let us make a quick overview on the various approximations proposed. 
In <cit.>, motivated by problems in Γ-convergence, the authors prove that any function in L^∞ ∩ SBV^p can be approximated by functions which have polyhedral jump set and are of class C^∞ outside, in such a way that the difference of the corresponding values of any lower semicontinuous functional involving a bulk and a surface energy density (the prototype is the so-called Mumford-Shah functional, but anisotropies are allowed here) tends to zero along the approximating sequence. In <cit.>, instead, the approximation is done with functions whose jump set is contained in a rectifiable set R_j and which are of class C^1 outside R_j. In addition, the ℋ^N-1 measure of the symmetric difference between the jump sets of the approximating sequence and that of the original function tends to zero. More recently, in <cit.>, the authors have been able to prove the approximation of SBV functions in the strong BV topology. In particular, they provide three approximation results. The first one, Theorem A, concerns general SBV functions; the second one, Theorem B, concerns SBV functions with absolutely continuous part of the gradient in L^p, p>1, but with no constraint on the measure of the jump set; and the third one, Theorem C, concerns SBV^p functions. The last result generalizes the previously known approximation theorems for SBV^p functions. As for functionals involving the symmetric gradient, we would like to mention the papers of Conti, Focardi, and Iurlano <cit.> and <cit.>. In the paper <cit.>, the authors prove an approximation of GSBD^p functions having small jump set with Sobolev functions in this way: first they cover the jump set with countably many balls with finite overlap and then in each ball B they construct a piecewise affine approximation using a suitable triangular grid. In the case of SBV functions, the corresponding approximation would be realised through piecewise constant interpolations, relying on the Fundamental Theorem of Calculus. In the most recent paper <cit.>, instead, the approximation is proven in any dimension for displacements in GSBD^p, for every p>1. However, the displacement with small jump set coincides with a W^1,p function only up to a small set whose perimeter and volume are controlled by the size of the jump. In particular, there is no counterpart, in dimension higher than 2, of the construction of a grid that does not intersect the jump set, see Appendix <ref>. Here, we will follow an idea originating in <cit.>, in the different context of integral representation for functionals defined on ^p(Ω) in two dimensions. The key idea of our proof is to combine an approximation result for functions in SBV^p(x)(Ω, ℳ) with small jump set by means of functions in W^1,p(x)(Ω, ℳ) with a regularity result along the lines of <cit.> (see also <cit.>). In doing so, we will prove a compactness result for SBV^p(x), extending Ambrosio's compactness theorem to the case of variable exponents. Although this result is probably well-known to experts, we have not found any explicit proof in the literature. Hence, for the reader's convenience, we provide a detailed proof. Proofs of the main results: a sketch The proof of Theorem A (better, of Theorem <ref>) is contained in Section <ref>. Essentially, it proceeds in two steps: first, we prove an analogous statement for unconstrained maps with values into ^k (Proposition <ref>), following very closely the original argument in <cit.> and exploiting the assumption of log-Hölder continuity of the variable exponent. 
Then, we use the aforementioned retraction 𝒫 to obtain Theorem <ref> from its unconstrained counterpart, by retraction onto the manifold of the image of the unconstrained approximating maps. The restriction on the dimension of domain in Theorem <ref> comes from the fact that, to craft the Sobolev approximation, we use the same construction as in <cit.>, which is strictly two dimensional and cannot be extended (without heavy modifications) to higher dimensions (see <cit.> and Remark <ref> and Appendix <ref> below for more details on this point). The restriction p^+ < 2 in (p_2) is due instead to the usage of the retraction 𝒫 in the proof of Theorem <ref> along with the fact that we merely require connectedness on ℳ (so to allow for ℳ to be a circle, an important case in potential applications – for instance, to liquid crystals), see Lemma <ref> and Remark <ref> for more details. However, since we work in dimension 2, such a restriction is not a dramatic drawback, in the sense that this is the subcritical regime for Sobolev-Morrey's embedding and maps of class W^1,p(·)(Ω, ℳ) are not automatically continuous (neither in Ω nor in open subsets of Ω with positive measure). In other words, this is the regime in which all the essential complications in the study of the regularity of local minimisers of the functional F in (<ref>) already show up, without the additional ones due to possibly large oscillations of the variable exponent. As alluded few lines above, the assumption of log-Hölder continuity of the variable exponent is extremely important in the proof of Theorem <ref> and this can be easily realised by looking at its geometric meaning (see Lemma <ref>). Indeed, roughly speaking, such an assumption boils down to the possibility of locally “freezing” the variable exponent, up to a controlled error. Joint with the very precise estimates for the approximating Sobolev map coming from <cit.>, this yields a rather direct extension of the results of <cit.> to our variable-exponent setting, at least in the unconstrained case (compare the proofs of Proposition <ref> and Proposition <ref> with those of <cit.>). The proof of Theorem <ref> proceeds in various steps, following the pioneering approach from <cit.>. The key point relies in proving a suitable power-decay of the energy in small balls with the radius of the ball. This is done in Theorem <ref> by means of a classical argument by contradiction and a blow-up analysis, relying on assumption (p_1'). We adapt the strategy of <cit.> to exploit, in the blow-up analysis, Theorem <ref> (with ℳ = ^k-1 and k ≥ 2) instead of the classical Sobolev-Poincaré inequality in (adapted to the variable-exponent framework in <cit.>). Once the decay lemma is obtained, another classical argument originating from <cit.> yields suitable density lower bounds for F(u, B_ρ(x)), where u is a local minimiser of F, x ∈J_u, and ρ is small. In turn, such density lower bounds readily imply, by a standard argument in geometric measure theory, that J_u is essentially closed. This step requires more than log-Hölder continuity and indeed we ask strong log-Hölder continuity in the statement of Theorem <ref>. To the purpose of proving essential closedness of the jump set, strong log-Hölder continuity turns out to be enough and, actually, also local minimality can be weakened to quasi-minimality (see Definition <ref>). 
The full strength of Hölder continuity and of local minimality are instead needed to prove that u ∈ C^1,β_0_ loc(Ω_0, ^k-1), where Ω_0 is as in the statement of Theorem <ref>. Indeed, here we use the regularity result <cit.> (in the simplified form provided by Theorem <ref> below) for the p(·)-harmonic energy for manifold-valued maps, i.e., for local minimisers (among compactly supported perturbations) of the functional W^1,p(·)(Ω, ℳ) ∋ w ↦∫_Ω∇ w^p(x) x, which requires p ∈ C^0,α(Ω), for some α∈ (0,1], among its assumptions. Organisation of the paper In Section <ref>, we establish notation and recall the basics facts about spaces of functions integrable with respect to variable exponents. In Section <ref>, we prove Theorem <ref>. In Section <ref>, we prove Theorem <ref>. The paper is completed by a series of appendices containing mostly technical material for which we were not able to find explicit proof in the literature. In Appendix <ref>, we exhibit an example which shows that the approximation procedure in <cit.> and, in turn, in Section <ref> cannot work in higher dimensional domains under the mere assumption of ℋ^1-smallness of the jump set. equationsection definitionsection theoremsection § PRELIMINARIES §.§ Notation * We use the symbol {x_n} to denote a sequence, indexed by n ∈, of elements x_n of a certain set E. Usually, for the sake of a lighter notation, we do not relabel subsequences. * In inequalities like A ≲ B, the symbol ≲ means that there exists a constant C, independent of A and B, such that A ≤ CB. * We denote B_ρ^n(x_0) := { x ∈^n : x-x_0 < ρ} the open ball of radius ρ and centre x_0 in ^n. Since we work almost exclusively in dimension n=2, we drop the superscript “2” for balls in ^2. Often, the centre of the ball will be irrelevant and we shall omit it from the notation. Given a ball B_ρ, the symbol B_sρ denotes the ball concentric with B_ρ and radius dilated by the factor s > 0. * We denote by ℳ a compact, connected, smooth Riemannian manifold without boundary, of dimension m ≥ 1. Without loss of generality, we may always view at ℳ as a compact, connected, m-dimensional smooth submanifold in ^k, for some k ∈. This can always be achieved, for instance, by means of Nash's isometric embedding theorem. If not specified otherwise, we will always assume to embed ℳ in ^k via Nash's embedding. By compactness of ℳ, we can find M > 0, depending only on ℳ and the choice of the isometric embedding, such that ℳ⊂ℬ^k_M := { y ∈^k : y < M }. In the following course, when saying that a quantity “depends on ℳ”, we will always mean that it depends on ℳ and the chosen isometric embedding of ℳ into ^k. However, by compactness of ℳ, the choice of the isometric embedding is essentially irrelevant, in the sense that changing the embedding can result, at worst, in enlarging k and the constants that depend on ℳ. * In the special case ℳ is a sphere, we always look at it as a submanifold of ^m+1 the obvious way. In particular, we will denote ^k-1_t := { x ∈^k : x = t } the (k-1)-dimensional sphere in ^k of radius t > 0 and centre the origin, endowed with the canonical metric. We set ^k-1 := ^k-1_1. The canonical orthonormal basis of ^k will be denoted { e_1, …, e_k }. * For any δ > 0, we will denote Ω_δ := { x ∈Ω : (x, ∂Ω) > δ}. the portion of the δ-neighborhood of ∂Ω that lies in the interior of Ω. §.§ Variable exponent Lebesgue In this section, we recall some basic facts about variable-exponent Lebesgue spaces. The reader can consult the monographs <cit.> for more details. 
A measurable function p : Ω→ [1,+∞) will be called a variable exponent. For every A ⊂Ω (measurable and not empty) we define p^+_A := _x∈ A p(x) p^-_A := _x ∈ A p(x). Of course, we can take A = Ω, and in this case we write p^+ and p^- in place of p^+_Ω and p^-_Ω, respectively. We say that a variable exponent p is bounded if p^+ < +∞. The modular ϱ_p(·)(u) of a measurable function u : Ω→^m with respect to the variable exponent p(·) is defined by ϱ_p(·)(u) := ∫_Ω |u(x)|^p(x) x and the Luxembourg norm (henceforth, simply norm) of u by u_L^p(·)(Ω) := inf{λ > 0 : ϱ_p(·)(u/λ)≤ 1}. The variable exponent Lebesgue space L^p(·)(Ω, ^k) is defined as the set of measurable functions u : Ω→^k such that there exists λ > 0 so that ϱ_p(·)(u/λ) < +∞. We collect the properties of variable exponent Lebesgue space that we will use in the following propositions. Let p : Ω→ [+1,∞) be a variable exponent and assume that p^+ < +∞. Then: * L^p(·)(Ω,^k) coincides with the set of measurable functions u : Ω→^k such that ϱ_p(·)(u) is finite. * ·_L^p(·)(Ω) is a norm on L^p(·)(Ω, ^k ), under which L^p(·)(Ω, ^k) is a Banach space. * If p^- > 1, then L^p(·)(Ω,^k) is uniformly convex and its dual space is isomorphic to L^p'(·)(Ω, ^k), where p' is the variable exponent satisfying 1/p + 1/p' = 1 a.e. in Ω. * The following inequalities hold: ϱ_p(·)(u)^1/p^+≤u_L^p(·)(Ω)≤ϱ_p(·)(u)^1/p^-, u_L^p(·)(Ω) > 1, ϱ_p(·)(u)^1/p^-≤u_L^p(·)(Ω)≤ϱ_p(·)(u)^1/p^+, u_L^p(·)(Ω)≤ 1, Modulars satisfy appropriate versions of Fatou's lemma, of the monotone convergence theorem, and of the dominated convergence theorem. Here we give a particular case, enough for our purposes, of a much more general result, c. f. <cit.>. Let p : ^n → [1,+∞) be a variable exponent and let {f_h} and f be measurable functions from Ω into ^k, for some k ∈. Then, * If f_h → f ℒ^n-almost everywhere in Ω as h → +∞, then ϱ_p(·)(f) ≤lim inf_h→+∞ϱ_p(·)(f_h). * If f_h↗f ℒ^n-almost everywhere in Ω as h → +∞, then ϱ_p(·)(f) = lim_h → +∞ϱ_p(·)(f_h). * If f_h → f ℒ^n-almost everywhere in Ω as h → +∞ and there exists g ∈ L^p(·)(Ω, ^k) such that f_h≤g a.e., then f_h → f in L^p(·)(Ω, ^k) as h → +∞. The following proposition extends to the variable exponent setting the classical embedding property of Lebesgue spaces on sets with finite (Lebesgue) measure. Let p, q : Ω→ [1, +∞) be bounded variable exponents on a bounded set Ω⊂^n. Then L^p(·)(Ω) ↪ L^q(·)(Ω) if and only if q ≤ p a.e. in Ω. The embedding constant is less than or equal to the minimum between 2(1+ ℒ^n(Ω)) and 2 max{ℒ^n(Ω)^(1/q - 1/p)^+, ℒ^n(Ω)^(1/q - 1/p)^-}. The following definition specify a crucial quantitative notion of continuity for variable exponents, called log-Hölder continuity. This condition was firstly introduced in <cit.> as a way to avoid the Lavrentiev phenomenon for minimisers of variational integrals and suddenly became customary in this context (c.f., e.g., <cit.>). We say that a variable exponent p : Ω→ [1,+∞) satisfies the log-Hölder condition in Ω if and only if ∃ C_p > 0 : ∀ x,y ∈Ω : 0 < x-y≤1/2, p(x) - p(y)≤C_p/- lnx-y. Equivalently, p : Ω→ [1,+∞) is log-Hölder continuous if and only if its modulus of continuity ω satisfies lim sup_ρ→ 0ω(ρ) log( 1/ρ) < +∞. We call C_p the log-Hölder constant of p. The following lemma, an abridged version of <cit.> which is sufficient to our purposes, illustrates the geometrical meaning of log-Hölder continuity. Let p : Ω→ [1,+∞) be a bounded, continuous variable exponent. 
The following conditions are equivalent: * p is log-Hölder continuous; * There exists a constant ℓ > 0 such that, for all open balls B ⊂Ω, there holds ℒ^n(B)^p^-_B - p^+_B≤ℓ. A bounded, log-Hölder continuous variable exponent defined on a bounded set Ω⊂^n can always be extended to a bounded, log-Hölder continuous variable exponent q which is defined on the whole of ^n and that satisfies C_q ≤(p^-)^2C_p, ℓ_q ≤(p^-) ℓ, q^- = p^-, q^+ = p^+, and (<ref>) for any open ball B⊂^n see, e.g., <cit.>. For our purposes in this paper, there is no loss of generality in assuming from the very beginning that p is defined on the whole ^n. Later in this work we will need the following strengthening of the log-Hölder condition (<ref>). We say that a variable-exponent p(·) : Ω→ [1,+∞) satisfies the strong log-Hölder condition in Ω if and only if its modulus of continuity ω satisfies lim sup_ρ→ 0ω(ρ) log( 1/ρ) = 0. The extension to the whole ^n of a strongly log-Hölder continuous variable exponent is still strongly log-Hölder continuous. §.§ Variable exponent Sobolev spaces and p(·)-harmonic maps Let Ω⊂^n be an open set and p(·) : ^n → be a variable exponent. As in the classical case, we define W^1,p(·)(Ω, ^k) := { u ∈(W^1,1∩ L^p(·))(Ω, ^k ) : ∇ u ∈ L^p(·)(Ω,^k× n) }. The norm u_W^1,p(·)(Ω) := u_L^p(·)(Ω) + ∇ u_L^p(·)(Ω), turns W^1,p(·)(Ω,^k) into a Banach space, separable if p^+ < +∞ and reflexive if p^- > 1 and p^+ < +∞. We address the reader to <cit.> for full details about variable exponent Sobolev spaces. We recall from the following extension of the classical Meyer-Serrin theorem in bounded, Lipschitz domain (see, e.g., <cit.>). Let Ω⊂^n be a bounded open set with Lipschitz boundary. Assume that p : ^n → [1,+∞) is a bounded, log-Hölder continuous, variable exponent. Then, C^∞(Ω,^k) is dense in W^1,p(·)(Ω, ^k). As in the classical case, we define W^1,p(·)(Ω, ℳ) := { W^1,p(·)(Ω, ^k) : u(x) ∈ℳ x ∈Ω}. Assume Ω⊂^n is a bounded, Lipschitz domain and p is a bounded, log-Hölder continuous variable exponent satisfying p^- > 1. Then, the trace u |_∂Ω of a function u belonging to W^1,p(·)(Ω, ^k) is well-defined and it is, in particular, an element of L^p(·)(∂Ω,^k). Moreover, since u ∈ W^1,p(·)(Ω,^k) implies, in particular, that u belongs to W^1,1( Ω, ^k), the trace u |_∂Ω is specified at least ℋ^n-1-a.e. on ∂Ω, see <cit.> for details. Therefore, it makes sense to say that u |_∂Ω takes values in ℳ, by saying that u |_∂Ω takes values in ℳ if and only if u |_∂Ω(y) ∈ℳ for ℋ^n-1∂Ω-a.e. y ∈∂Ω. Next, following, e.g., <cit.>, we define a p(·)-harmonic map as a local minimiser, with respect to compactly supported perturbations, of the functional W^1,p(·)( Ω, ℳ) ∋ w ↦∫_Ω∇ w^p(x) x. The following partial regularity theorem for p(·)-harmonic maps is a simplified version of a more general statement proven in <cit.>. (We use the symbol [x] to denote the integer part of x.) Assume p : ^n → (1,+∞) is a variable exponent satisfying p ∈ C^0,α(^n), α∈ (0,1], 1 < p^- ≤ p(x) ≤ p^+ < +∞ x ∈Ω. Let ℳ be a compact, ([p^+]-1)-connected, smooth Riemannian manifold, without boundary. Let u ∈ W^1,p(·)(Ω,ℳ) be a p(·)-harmonic map. Then, there exists a relatively open set Ω_0 ⊂Ω such that u ∈ C^1,β_0_ loc(Ω_0, ℳ) for some β_0 ∈ (0,1) depending only on n, k, ℳ, p^-, p^+, [p]_0,α, α. Moreover, ℋ^n-p^-(Ω∖Ω_0 ) = 0. We recall that, given an integer j ≥ 0, a manifold ℳ is said to be ℓ-connected iff its first j homotopy groups vanish identically, that is, iff π_0(ℳ) = … = π_j(ℳ) = 0. 
We observe that, although <cit.> (which in turn is partly borrowed from <cit.>) is stated assuming j ≥ 1 (i.e., ℳ simply connected), it holds as well for j = 0 (c.f., e.g., <cit.> and Lemma <ref> below). This fact allows us to construct maps with all the properties required by <cit.> without assuming ℳ simply connected, c.f., e.g. Lemma <ref> below. Besides <cit.>, the assumption that ℳ is simply connected is never used in the proof of <cit.> (and indeed removed in the follow up <cit.> of <cit.>). Consequently, <cit.> holds as well even if ℳ is merely connected, provided that p^+ < 2. In turn, Theorem <ref> holds if ℳ is merely connected, provided that p^+ < 2, which are precisely the assumptions under which we work in this paper. §.§ The space ^p(·)(Ω,ℳ) Here, we collect some basic facts about functions that are used throughout this paper and we define precisely the space of mappings of class ^p(·) from an open set Ω⊂^n into a Riemannian manifold ℳ, where p(·) is a bounded, variable exponent. We address the reader to <cit.> for details about (and ) functions as well as for all the notions from Geometric Measure Theory (<cit.>) that we will use in this work. Let Ω⊂^n be an open set and k ∈. We recall that the set (Ω,^k) is the linear subspace of L^1(Ω,^k) made up by those functions u whose distributional gradient u is a bounded Radon measure with no Cantor part. Endowed with the norm u_(Ω) := u_L^1(Ω) + u(Ω), the space (Ω,^k) is a Banach space. For any u ∈(Ω,^k), its distributional gradient u splits as u = (∇ u )ℒ^2_:=^au + [(u^+ - u^-) ⊗ν_u ]ℋ^n-1 J_u_:=^ju, where ∇ u denotes the approximate differential of u, u^± the traces of u on the two sides of the jump set J_u, and ν_u is the normal field to J_u (see, e. g., <cit.>). We recall from <cit.> that J_u is a countably ℋ^n-1-rectifiable Borel set in Ω. Let p ≥ 1 and ^p(Ω,^k) := { u ∈(Ω,^k) : ∇ u ∈ L^p(Ω,^k× n) ℋ^n-1(J_u) < +∞}. Endowed with the restriction of the norm, ^p(Ω,^k) is a Banach subspace of (Ω,^k) that proved useful in handling free discontinuity problems at least since <cit.>. On the relation between the spaces , ^p, W^1,1, W^1,p, we recall that: for any open set Ω⊂^n, u ∈ W^1,1( Ω, ^k) u ∈( Ω, ^k ) ℋ^n-1( J_u ) = 0. If Ω is a bounded open set with Lipschitz boundary, then, by iterated application of Sobolev embedding, u ∈ W^1,p( Ω, ^k) u ∈^p( Ω, ^k ) ℋ^n-1( J_u ) = 0. A straightforward generalisation of the space ^p(Ω,^k) has been introduced in <cit.>, by defining the space ^p(·)(Ω,^k), where p(·) : Ω→ [1,+∞) is any bounded variable exponent, as the subspace of those u ∈(Ω,^k) whose jump set has finite ℋ^n-1-measure and whose approximate differential is L^p(·)-integrable on Ω. In this work, we are mostly interested in maps taking values into a compact Riemannian manifold without boundary. By viewing ℳ as a submanifold of ^k, for some k ∈ (for instance, by means of Nash's isometric embedding theorem), we can define ^p(·)(Ω,ℳ) := { u ∈^p(·)(Ω,^k) : u(x) ∈ℳ x ∈Ω}. This space enjoys closure and compactness properties analogous to those of the classical space ^p(Ω,^k), see Appendix <ref> for more details. For later use, we notice that if Ω is any bounded open set, then u ∈ W^1,p(·)(Ω, ℳ) u ∈^p(·)(Ω, ℳ) ℋ^n-1(J_u) = 0 by compactness of ℳ (which implies u ∈ L^∞(Ω,^k)). 
Moreover, relaxing the assumption of boundedness of the functions, but assuming instead that Ω is a bounded open set with sufficiently nice boundary (Lipschitz is enough) and that p^- < n p^+ < ( p^- )^*, we have the continuous embedding W^1,p^-(Ω, ^k ) ↪ L^p^+(Ω, ^k), see <cit.>. Consequently, the following lemma holds. Assume Ω⊂^n is a bounded open set with Lipschitz boundary and that p : Ω→ [1,+∞) is a log-Hölder continuous, variable exponent satisfying (<ref>). Then, u ∈ W^1,p(·)(Ω, ^k) u ∈^p(·)(Ω, ^k) ℋ^n-1(J_u) = 0 The direction ⇒ is obvious. For the direction ⇐, we argue as in the classical case of constant exponents, exploiting assumption (<ref>): by assumption and (<ref>), u ∈ W^1,1(Ω,^k), which implies, by Sobolev embedding, u ∈ L^1^*( Ω, ^k). If 1^* ≥ p^-, then u ∈ W^1,p^-( Ω, ^k), which implies (in view of (<ref>)) u ∈ L^p^+(Ω,^k), hence u ∈ L^p(·)(Ω,^k ) by Proposition <ref>. Since we know by assumption that ∇ u ∈ L^p(·)(Ω,^k × n), it follows that u ∈ W^1,p(·)(Ω,^k). If instead 1^* < p^-, then u ∈ W^1,1^*(Ω,^k), and we can repeat the above step. After finitely many iterations, we obtain 1^**…*≥ p^- and we conclude as in the above. Assumption (<ref>) is clearly satisfied if n=2 and p^+ < 2. § SOBOLEV APPROXIMATION OF FUNCTIONS IN ^P(·)(Ω,ℳ) WITH SMALL JUMP SET In this section, we prove Theorem <ref> below, which clearly entails Theorem <ref>. Let p : ^2 → (1,∞) be a bounded, log-Hölder-continuous, variable exponent satisfying p^- > 1 and p^+ < 2. Let k ∈ and ρ > 0. Assume ℳ is a connected, compact Riemannian manifold without boundary, isometrically embedded into ^k. There exist universal constants ξ, η > 0 such that for any s ∈ (0,1) and any u ∈^p(·)(B_ρ, ℳ) satisfying ℋ^1(J_u ∩ B_ρ) < η(1-s) ρ/2, the following holds. There are a countable family ℱ = {B} of closed balls, overlapping at most ξ times, of radius r_B < (1-s)ρ and centre x_B ∈B_s ρ, and a function w ∈^p(·)(B_ρ, ℳ) such that * ∪_ℱ B ⊂ B_1+s/2ρ and 1/ρ∑_ℱℒ^2(B) + ∑_ℱℋ^1(∂ B) ≤2 πξ/ηℋ^1(J_u ∩ B_ρ). Moreover, ∑_ℱℒ^2(B) ≤min{2πξ/ηρℋ^1(J_u ∩ B_ρ), π( ξ/ηℋ^1(J_u ∩ B_ρ) )^2 } * w = u ℒ^2-a.e. on B_ρ∖∪_ℱ B. * w ∈ W^1,p(·)(B_s ρ; ℳ) and ℋ^1(J_w ∖ J_u) = 0. * There holds ∫_B_ρ∇ w^p(x) x ≲(1+ρ^2) max{∇ u_L^p(·)(B_ρ)^p^-, ∇ u_L^p(·)(B_ρ)^p^+} where the implicit constant in the right hand side is independent of u and w and depends only on the quantities listed in Remark <ref> and on the log-Hölder constant of p(·). Moreover, if p is constant, then (<ref>) holds without the factor (1+ρ^2) at right hand side. The proof of Theorem <ref> proceeds essentially in three steps. After having isometrically embedded ℳ into some Euclidean space ^k, for some k ∈ (Step 1), we prove an analogous approximation result (Proposition <ref>) for unconstrained maps, i.e., for maps with values into ^k (Step 2). Then, we obtain Theorem <ref> from Proposition <ref> by means of a suitable retraction onto ℳ (Step 3). As mentioned in Section <ref>, Step 1 can always accomplished by exploiting Nash's isometric embedding theorem. (However, since ℳ is compact the choice of the embedding is essentially irrelevant, as it can only imply, at worst, enlarging the constant in (<ref>).) Step 2 is heavily based on arguments in <cit.> and worked out in Section <ref> below. We complete the proof of Theorem <ref> in Section <ref>. The factor (1+ρ^2) at right hand side in (<ref>) is due to the use of the embedding theorem for variable-exponent Lebesgue spaces (Proposition <ref>) in the proof of Proposition <ref> below. 
For constant exponents, this is not needed and one can instead stick with the clever, optimal argument in <cit.> and obtain (<ref>) without the extra-factor (1+ρ^2), c.f. (<ref>), (<ref>) below. However, the dependence on ρ at right hand side of (<ref>) is harmless for our purposes in this paper, as we apply Theorem <ref> only to maps defined on a fixed, common ball (c.f. the blow-up analysis in the proof of Theorem <ref>). §.§ The unconstrained case For the unconstrained case, we follow very closely the argument of <cit.>, developed in the setting of constant exponents and functions of class ^p. As mentioned at the beginning of the section, the main result here is Proposition <ref>. The key tool in the proof of Proposition <ref> is Proposition <ref> below. The statement of Proposition <ref> is similar to that of the corresponding result in <cit.>, i.e., <cit.>, up to the complications arising because of the variable exponent. In particular, we shall need the following assumption (H): H u p(·) p^- < n p^+ ≤(p^-)^*. This assumption is always satisfied in this work, because we deal with maps with values into a compact Riemannian manifold ℳ and moreover, as mentioned in the Introduction and explained in more details in Section <ref> below, to approximate them with maps with values into ℳ (which is only assumed to be connected, in general) we need to assume p^+ < 2. Assumption (H) ensures that every function of class W^1,1 with L^p(·)-integrable gradient is also of class W^1,p(·) and, in turn, that functions belonging to ^p(·) with no jump are also of class W^1,p(·) (see the discussion in Section <ref> and the end of Step <ref> below for more details on this point). Finally, we notice that also in the constant-exponent case the regime p<n is the most interesting one (and that the condition p < p^* is trivially satisfied in such case). Let p : ^2 → (1,+∞) be a bounded, log-Hölder-continuous, variable exponent satisfying p^- > 1 and let k ∈. There exist a universal constant η and a constant c_k, depending only on k, such that for any r > 0, if J ∈ℬ(B_2 r) is any Borel set in B_2r that satisfies ℋ^1(J) < η(2r), then there exists R ∈ (r,2r) for which the following holds. Let u ∈^p(·)(B_2r,^k) be any function such that ℋ^1(J_u ∩ B_2r∖ J) = 0, and suppose, in addition, that assumption ( H) holds. Then, there exists ϕ(u) ∈^p(·)(B_2r,^k) ∩ W^1,p(·)(B_R, ^k) such that the following properties are satisfied. * ℋ^1(J_u ∩∂ B_R) = 0. * There holds ∫_B_R∇ϕ(u)^q x ≤c_k ∫_B_R∇ u^p^- x, for every q ∈ [1, p^-]. Moreover, ∫_B_R∇ϕ(u)^p(x) x ≲(1+ R^2) max{∇ u_L^p(·)(B_R)^p^+, ∇ u_L^p(·)(B_R)^p^-}, where the implicit constant at right hand side depends only on the log-Hölder constant of p. * There holds u - ϕ(u)_L^1(B_R)≲ R u(B_R), where the implicit constant at right hand side depends only on k. * ϕ(u) = u a.e. on B_2r∖B_R and ℋ^1(J_ϕ(u)∩∂ B_R)=0. * ϕ(u)_L^∞(B_2r)≤u_L^∞(B_2r). The strategy of the proof is borrowed from <cit.>, which deals with maps of class ^p(B_2r), for 1 ≤ p < +∞ a constant exponent. In rough words, the idea is to choose a “good radius” R ∈ (r,2r) for a ball B_R, concentric with B_2r, and to construct a grid B_R by triangles, refining towards the boundary of B_R, whose vertices are chosen in such a way to satisfy essentially three properties: (i) they are Lebesgue points of the function u to approximate; (ii) the one-dimensional restriction of u to each edge of the grid is of class W^1,1, and (iii) the value of u at the vertices is close to the local average. 
(See, more precisely, properties (P_1)–(P_5) below.) A crucial point is that, in 2D, such a grid can be constructed with universally bounded geometry under the mere assumption (<ref>). Once the grid has been built, one can then define a piecewise affine map ϕ(u) in B_R by prescribing that it agrees with u at the vertices of each triangle of the grid. Outside B_R, one sets ϕ(u) := u, and it turns out that the map ϕ(u) so defined satisfies all the properties in the statement. Here, proceeding in a sequence of steps, we modify the approach of <cit.> to handle the complications arising because of the variable exponent, at the same time benefiting from some simplifications occurring because we deal with maps of class . The main idea is to exploit the constancy of ∇ϕ(u) in each triangle of the grid to make a sort of local “L^∞-L^p^--interpolation”, to estimate locally the p(·)-modular of ∇ϕ(u), c.f. Step <ref>. Then, we obtain global estimates summing over all triangles of the grid. The price to pay is the slight loss of exponent at right hand side in (<ref>). Nevertheless, the estimate (<ref>) will be enough to our purposes, i.e., for the blow-up analysis in Theorem <ref>. [Construction of the grid] Following the argument in <cit.>, we can pick R ∈ (r,2r) in such a way that the following holds: ℋ^1(J ∩∂ B_R) = 0. and ℋ^1( J ∩( B_R∖ B_R(1-2^-h)) ) < 10 ηδ_k, h ∈. (C.f. <cit.>.) Once R has been chosen as in the above, we construct a triangular grid 𝒯 for B_R, refining towards the boundary of B_R, in a completely analogous way to as in first part of the proof of <cit.>. We recall the main properties of 𝒯 and the main steps of its construction, referring the reader to <cit.> for full details. The construction of 𝒯 starts in a completely geometrical way, by considering the circles of radii R_h := R - δ_h, with h ∈ and δ_h := R 2^-h, and, for each such circle, the 2^h distinguished points x'_h,j given by x'_h,j := R_h(cos2π j/2^h, sin2π j/2^h), j ∈{1,…,2^h}. As in <cit.>, we observe that any pair x', y' of neighbouring points in 𝒱' := { x'_h,j}_h,j (the notion of “neighbouring points” in 𝒱' coincides with the intuitive one and it is formalised in <cit.>) satisfies c_1 δ_h ≤x' - y'≤ c_2 δ_h, where c_1, c_2 > 0 are universal constants. Connecting all such vertices, we obtain a grid 𝒯' with universally bounded geometry. Now, we fix any function u ∈^p(·)(B_2r, ^k) such that assumption (H) is satisfied (so, if p does not satisfy (<ref>), we assume in addition that u is bounded) and such that ℋ^1(J ∖ J_u) = 0. We are going to adapt the grid 𝒯' to the function u, obtaining a good grid 𝒯 on which constructing a piecewise linear approximation of u. To the claimed purpose, we follow the argument in <cit.> and we start by taking any triangle T' of the grid 𝒯'. We denote x', y', z' be the vertices of T', and by s_x',y', s_x',z', s_y',z' the segments connecting them. We define Q_x',y' := {ξ∈ B_R : (ξ, s_x',y') < x' - y'/(8c_2) }, and, in the same way, the sets Q_x',z', Q_y',z'. Next, we define α := c_1/(8c_2) and we consider, for each pair of indexes (h,j), the ball B( x'_h,j, αδ_h), of radius αδ_h and center x'_h,j. Let x', y' any two neighbouring points in 𝒱'. The sophisticated iterative procedure in <cit.> shows that one can determine two universal constants η > 0 (bounding from above the ratio ℋ^1(J) / (2r)) and c > 0 so that there is a “good” set 𝒢⊂ B( x', αδ_h ) × B(y', αδ_h ) with positive measure and whose points (x,y) satisfy the following properties: (P_1) u^ν_z ∈(s_x,y, ^k). 
(P_2) ℋ^0(J_u^ν_z) = 0, so that u^ν_z ∈ W^1,1(s_x,y, ^k). (P_3) There holds ∫_s_x,y(u^ν_z)^' t ≤c/δ_h∫_O_x',y'∇ u x^', where O_x',y' denotes the convex envelope of the union B(x',αδ_h) ∪ B(y',αδ_h). (P_4) For ξ∈{x,y}, there holds u(ξ) - u_x',y'≤c/δ_hu( Q_x',y'), where u_x',y' denotes the average of u in Q_x',y'. (P_5) x and y are Lebesgue point of u, In the above, u^ν_z denotes the 1D-section of u along the direction ν determined by x, y, given by u^ν_z := u(z+t ν), z := ( Id - ν⊗ν) x, ν := x-y/x-y. These properties are exactly the analogues of properties (P_1)–(P_5) in <cit.> and they are proven with exactly the same reasoning as in <cit.>, just replacing the symmetric gradient with the full gradient of u and the rigid motions with averages. Arguing word-by-word as in <cit.>, we can extract out of 𝒢 a set of points 𝒱 which are good vertices for a new triangulation 𝒯, which is adapted to u in the following sense: connecting all neighbouring points in 𝒱 by edges (agreeing, as in <cit.>, that two points in 𝒱 are neighbours if and only if they come from points that were neighbours in the previous sense), we obtain a grid 𝒯 by triangles T whose vertices satisfy (P_1)-(P_5). (In particular, the edges of T do not intersect the jump set of u.) In addition, any pair of neighbouring vertices in 𝒱 satisfy (<ref>), and hence the angles of the triangles T are uniformly bounded away from 0 and π. Moreover, for later purposes, we observe that the triangles T' of 𝒯' and the triangles T of 𝒯 are in a one-to-one correspondence. For any given T ∈𝒯, we denote C_T the following convex envelope: C_T := conv( ∪ B(x', αδ_h)), where the union runs over the vertices x' of the triangle T' in 𝒯' which correspond to T. Clearly, O_x',y'⊂ C_T for any two vertices x', y' of T' and T ⊂ C_T. Moreover, following <cit.>, we observe that there is a universal constant κ such that any x ∈ B_R belongs to at most κ of the convex envelopes C_T. In addition, we remark that, by construction, there exists two universal constants λ_1, λ_2 > 0 such that λ_1 ≤min{ℒ^2(C_T)/ℒ^2(Q_x',y'), ℒ^2(T)/ℒ^2(Q_x',y'∩ T ), ℒ^2(B(x',αδ_h))/ℒ^2(T), ℒ^2(O_x',y')/ℒ^2(C_T)} ≤max{ℒ^2(C_T)/ℒ^2(Q_x',y'), ℒ^2(T)/ℒ^2(Q_x',y'∩ T ), ℒ^2(B(x',αδ_h))/ℒ^2(T), ℒ^2(O_x',y')/ℒ^2(C_T)}≤λ_2 . Let us set λ := max{λ_1,λ_2 }. [Definition and main properties of the approximating map] We define, as in <cit.>, a function ϕ(u) by letting ϕ(u) = u in B_2r∖B_R while we take ϕ(u) to be the piecewise affine function determined by 𝒱 on B_R. More precisely, in each triangle T ∈𝒯, we define ϕ(u) as the (uniquely determined) affine function obtained by setting ϕ(u)(x) = u(x), for each vertex x of T. In particular, on the edges of T, ϕ(u) is the linear interpolation between the values of u at the vertices and ∇ϕ(u) is a constant (k × 2)-matrix in each T ∈𝒯. As a consequence, given any vertices x, y of T, we have ϕ(u)(x-y) = ϕ(u)(x) - ϕ(u)(y) = (∇ϕ(u))(x-y) = u(x) - u(y). More specifically, since u is of class , following the argument in <cit.>, we obtain the analogues of (P_1)–(P_5) in <cit.> with the symmetric gradient replaced by the full approximate gradient and the rigid motions replaced by the averages. In particular, we obtain the following analogue of <cit.>: ∫_s_x,y(u^ν_z)^' t ≲1/δ_h∫_C_T∇ u x^', where the implicit constant at right hand side is universal and u^ν_z := u(z+t ν), with z := ( Id - ν⊗ν) x and ν := x-y/x-y for any two vertices x, y of T and any T ∈𝒯. Notice that u^ν_z ∈ W^1,1(s_x,y, ^k) because u does not jump on s_x,y. 
With the aid of the fact that u^ν_z ∈ W^1,1(s_x,y) and of (<ref>), we can compute the elements of the matrix ∇ϕ(u) and its L^∞-norm in complete analogy with <cit.>, obtaining the following counterparts of <cit.>. For each T ∈𝒯, every pair x, y of vertices of T, and every j ∈{1,…,k}, by the fundamental theorem of calculus on s_x,y and (<ref>) we have ∇ϕ(u) ν· e_j = (ϕ(u)(x) - ϕ(u)(y))· e_j/x-y = _s_x,y(u^ν_z · e_j)^' t, hence, by (<ref>) and (<ref>), ∇ϕ(u) ν· e_j ≤_s_x,y(u^ν_z)^' t (<ref>),(<ref>)≤cλ_C_T∇ u x. Thus, letting x, y vary in the set of vertices of T and j run in {1,…,k}, ∇ϕ(u)_L^∞(T)≤cλ_C_T∇ u x, for each T ∈𝒯. From (<ref>) and Jensen's inequality, we obtain ∫_T ∇ϕ(u)^q ≤cλ∫_C_T∇ u^q x for any q ∈ [1,p^-] and any T ∈𝒯. Recalling that the sets C_T overlap at most κ-times, and that κ is a universal constant, by taking the sum of (<ref>) over T ∈𝒯 it follows from (<ref>) that ϕ(u) belongs to W^1,q(B_R, ^k) ∩^q(B_2r, ^k), for any q ∈ [1,p^-]. In addition, by construction of the grid, the trace of u on ∂ B_R and the Sobolev trace of ϕ(u) on ∂ B_R agree ℋ^1-a.e., hence no additional jump is created by the above process (see <cit.> for details). Consequently, ϕ(u) has less jump than u in B_2r. [Proof of (i), (iii), (iv), (v)] Item (v) is obvious. Item (i) follows immediately, because of (<ref>) and since ℋ^1(J∖ J_u) = 0. The first assertion in (iv) follows by definition of ϕ(u) and the argument in <cit.>. The second assertion in (iv) follows because, by construction, J_ϕ(u) = J_u on B_2r∖B_R and, as observed in the previous step, u and ϕ(u) agree ℋ^1-a.e. on ∂ B_R. To prove (iii), we argue similarly to as in <cit.>. Let T' be any triangle of the original, geometrical grid 𝒯' and let x', y', z' be its vertices, connected by the segments s_x',y', s_x',z', s_y',z'. Let Q_x',y' be defined as in (<ref>). By Proposition <ref> and (<ref>), we have u - u_x',y'_L^1(Q_x',y')≤ c_k δ_h u(Q_x',y'), where u_x',y' denotes the average of u in Q_x',y' and c_k := c(k) c_2, where c(k) ≡ C(2,k) is the constant, depending only on k, provided by Proposition <ref> (for n=2) and c_2 is the constant in (<ref>). Moreover, by (P_4), it follows that u(ξ) - u_ζ',ξ'≲δ_h^-1u(Q_ζ',ξ') ζ', ξ' ∈{x',y',z'} ζ'≠ξ', where the implicit constant at right hand side is universal. Next, for every T ∈𝒯, let T' ∈𝒯' be the corresponding triangle in the original grid, with vertices x', y', z'. By triangle inequality we have ∫_T u - ϕ(u) x ≤∫_T u - u_x',y' x + ∫_T u_x',y' - ϕ(u) x. About the first term above, we notice that, again by triangle inequality, there holds ∫_T u - u_x',y' x ≤∫_T u - u_T x + ∫_T u_T - u_x',y' x, where u_T denotes the average of u in T. For the first term on the right in (<ref>), by Proposition <ref> and (<ref>), we get u - u_T_L^1(T)≤ c_k δ_h u(T), where, again, c_k = c(k) c_2. To estimate the second term in (<ref>), we first bound the constant u_T - u_x',y' using (<ref>) and (<ref>) as follows: ℒ^2(Q_x',y'∩ T) u_T - u_x',y' = ∫_Q_x',y'∩ Tu_T - u_x',y' x ≤∫_Q_x',y'u_x',y'-u x + ∫_Tu - u_T x ≤ c_k δ_h u(Q_x',y') + c_k δ_h u(T). Thus, ∫_T u_x',y'-u_T x ≤c_k ℒ^2(T)/ℒ^2(Q_x',y'∩ T)( δ_h u(Q_x',y') + δ_h u(T) ). Thus, from (<ref>), ∫_T u_x',y'-u_T x ≤ c_k λδ_h ( u(Q_x',y') + u(T) ) ≤c_k δ_h u(C_T), where c_k := c_k λ is a constant depending only on k. Combining (<ref>), (<ref>), and (<ref>), we obtain ∫_T u-u_x',y' x ≤c_k δ_h u(C_T), where c_k depends only on k. Concerning the second term in (<ref>), we proceed in two steps. 
First, we observe that the function u_x',y' - ϕ(u) is an affine function, and hence its modulus achieve its maximum at one of the vertices of T. Denoting ξ such a vertex, we have ∫_T u_x',y' - ϕ(u) x ≤ℒ^2(T) u_x',y' - ϕ(u)_L^∞(T) = ℒ^2(T) u_x',y' - ϕ(u)(ξ) Recalling that u(ξ) = ϕ(u)(ξ), if ξ = x or ξ =y then, by (<ref>), ℒ^2(T) u_x',y' - u(ξ)≤cδ_h u(Q_x',y') ≤cδ_h u(C_T), If instead ξ = z, then we write u_x',y' - u(z)≤u_x',y' - u_x',z' + u_x',z'-u(z) The second term above is again estimated by (<ref>), which yields u_x',z'-u(z)≲δ_h^-1u(Q_x',z') ≲ c δ_h^-1u(C_T), where the constant c is universal. Next, we estimate the constant u_x',y' - u_x',z' as follows: ℒ^2(B(x',αδ_h))u_x',y' - u_x',z' = ∫_B(x',αδ_h)u_x',y' - u_x',z' x ≤∫_B(x',αδ_h)u_x',y' - u x + ∫_B(x',αδ_h)u - u_x',z' x ≤∫_Q_x',y'u_x',y' - u x + ∫_Q_x',z'u - u_x',z' x ≲δ_h ( u(Q_x',y') + u(Q_x',z')) ≲δ_h u(C_T), where in the last line we used again Proposition <ref> and the implicit constant depends only on k. Therefore, using again (<ref>) ∫_T u_x',y' - u_x',z' x = ℒ^2(T)u_x',y' - u_x',z'≲δ_h u(C_T), up to constant depending only on k. Combining (<ref>), (<ref>), and (<ref>), we obtain ∫_T u_x',y' - ϕ(u) x ≲δ_h u(C_T), where the implicit constant at right hand side depends only on k. Combining (<ref>) and (<ref>), by (<ref>) we get ∫_T u - ϕ(u) x ≲δ_h u(C_T), where the implicit constant at right hand side depends only on k. Therefore, taking the sum of the above inequalities as T ranges over 𝒯 (recalling once again that the sets C_T have finite overlap), we obtain (<ref>). [L^∞-L^p^- interpolation] We notice that if Ω⊂^n is any measurable set, f ∈ (L^p^-∩ L^∞)(Ω,^k) is arbitrary, and p(·) is any bounded, variable exponent, with 1 ≤ p^- ≤ p(x) ≤ p^+ < ∞, then for a.e. x ∈Ω there holds f(x)^p(x)≤f_L^∞(Ω)^p(x)-p^-f(x)^p^-. Hence, ∫_Ωf(x)^p(x) x ≤∫_Ωf_L^∞(Ω)^p(x)-p^-f(x)^p- x ≤max{ 1, f^p^+-p^-_L^∞(Ω)}∫_Ωf(x)^p^- x. [Conclusion] For each T ∈𝒯, we apply (<ref>) with Ω = T and f = ∇ϕ(u) (which is constant in T, being ϕ(u) affine in T). By (<ref>), (<ref>) and Hölder's inequality, we obtain ∫_T ∇ϕ(u)^p(x) x ≤ max{ 1, ∇ϕ(u)^p^+-p^-_L^∞(T)}∫_C_T∇ u^p^- x, where C_T is defined as in Step <ref>. On the other hand, using again (<ref>), ∇ϕ(u)^p^+-p^-_L^∞(T)≲ max{ 1, ( _C_T∇ u)^p^+ - p^-}∇ u_L^p^-(C_T)^p^- where the implicit constant at right hand side is universal. By (<ref>) and (<ref>), ∫_T ∇ϕ(u)^p(x) x ≲max{∇ u_L^p^-(C_T)^p^-, ℒ^2(C_T)^p^-_T - p^+_T∇ u_L^p^-(C_T)^p^+} ≲max{∇ u_L^p^-(C_T)^p^-, ℓ∇ u_L^p^-(C_T)^p^+} where the last inequality follows from the log-Hölder condition (<ref>) (applied to the smallest ball B_T containing C_T) and ℓ is precisely the constant in (<ref>). Thus, by (<ref>) and Proposition <ref>, ∫_T ∇ϕ(u)^p(x)≲ 2 ℓ (1 + ℒ^2(B_2r)) max{∇ u_L^p(·)(C_T)^p^-, ∇ u_L^p(·)(C_T)^p^+}, up to a universal constant. The inequality (<ref>) clearly implies (<ref>) at the local level. Taking the sum of the inequalities (<ref>) as T varies in 𝒯 (recalling again that the sets C_T have finite overlap), we obtain (<ref>), completing the proof of (ii). By assumption (H) and either (<ref>) or Lemma <ref>, it follows that, ∫_B_Rϕ(u)^p(x) x < +∞, and therefore, by (<ref>), ϕ(u) ∈^p(·)(B_2r,^k) ∩ W^1,p(·)(B_R, ^k). This concludes the proof. On the account of the inequalities (<ref>), inequality (<ref>) readily implies ∫_B_R∇ϕ(u)^p(x) x ≲(1+R^2)max {∫_B_R∇ u^p(x) x, ( ∫_B_R∇ u^p(x) x )^p^+/p^-,. .( ∫_B_R∇ u^p(x) x )^p^+/p^-}, where the implicit constant depends only on the log-Hölder constant of p. 
Consequently, since ϕ(u) = u in B_2r∖B_R, the same inequality holds with 2r in place of R. The inequalities (<ref>) and (<ref>) are far from being sharp, both because of the loss of exponent in Step <ref> above and because we did not try to optimise with respect to the constants at right hand side. Nonetheless, they are enough for our purposes in this paper. As in <cit.>, Proposition <ref> and, consequently, Proposition <ref> and Theorem <ref> below, is valid only in 2D. This is due to the fact only in 2D a condition like (<ref>) is strong enough to ensure that the jump set can be avoided by a grid made by simplexes with universally bounded edges and angles. To obtain a similar outcome in the higher dimensional setting, it seems to be really necessary to exclude a “small” set from the domain, along the lines of <cit.>. See Appendix <ref> for more details on these points. Next, along the lines of the argument in <cit.>, we use Proposition <ref> and the ^p(·) compactness theorem (Corollary <ref>) to prove Proposition <ref> below (which is the counterpart of <cit.>). Before stating the theorem, we recall from <cit.> the following useful lemma. Let s ∈ (0,1), η∈ (0,1), and ρ > 0. Let J be a ℋ^1-rectifiable Borel set in B_ρ such that ℋ^1(J) < η(1-s)ρ/2. Then, for ℋ^1-almost every x ∈ J ∩ B_sρ there exists a radius r_x ∈ (0,(1-s)ρ/2) such that ℋ^1(J ∩∂ B_r_x(x)) = 0, η r_x ≤ℋ^1(J ∩ B_r_x(x)) ≤ℋ^1(J ∩ B_2r_x(x)) < 2η r_x . We recall the a set E ⊂^n is ℋ^d-rectifiable if and only if it is countably ℋ^d-rectifiable and ℋ^d(E) < +∞, see <cit.>. For the reader's convenience, we provide a detailed proof of Lemma <ref>, because this gives us the occasion to fix some small drawbacks of that in <cit.> and to provide some additional details. Since J is a Borel set with finite ℋ^1-measure, the restricted measure ℋ^1 J is a Radon measure in ^2. Fix any x ∈ J ∩ B_2sρ. Then, for all λ∈ ((1-s)ρ,2(1-s)ρ) except at most countably many we have ℋ^1(J ∩∂ B_λ_x / 2^k(x)) = 0 for all k ∈ (see, e. g., <cit.>), i.e., for which (<ref>) holds. Choose any such λ_x ∈ ((1-s)ρ,2(1-s)ρ) and define r_x := max{λ_x/2^k : k ∈, ℋ^1(J∩ B_λ_x/2^k(x) ) ≥ηλ_x/2^k}. Since J is a Borel set in ^2 and it is ℋ^1-rectifiable, the ℋ^1-density of J is 1 at ℋ^1-almost every x ∈ J (for instance, by Besicovitch-Marstrand-Mattila theorem, see, e. g., <cit.>). Clearly, J ∩ B_2 sρ is ℋ^1-rectifiable as well. Therefore, the set I_x := {λ_x/2^k : k ∈, ℋ^1(J∩ B_λ_x/2^k(x) ) ≥ηλ_x/2^k } is nonempty for ℋ^1-almost every x. Indeed, otherwise we could find a set E ⊂ J ∩ B_2sρ with ℋ^1(E) > 0 such that ∀ k ∈, ℋ^1( J∩ B_λ_x/2^k(x) ) < ηλ_x/2^k. Thus, we would have lim_k → +∞ℋ^1( J∩ B_λ_x/2^k(x))/λ_x/2^k≤η. Since η < 1, we would obtain that, for any x ∈ E, the ℋ^1-density of J ∩ B_sρ at x is less than 1, a contradiction. Thus, for ℋ^1-a.e. x ∈ J ∩ B_2s ρ, the set I_x has a least upper bound, which is actually a maximum, because the only possible accumulation point of I_x is 0. If the max in the definition of r_x is attained for k ≥ 2, then (<ref>) holds by definition. If instead r_x is attained for k = 1, then 2 r_x = λ_x and (<ref>) holds as well, otherwise ℋ^1( J ∩ B_2sρ) ≥ℋ^1( J ∩ B_2 r_x(x)) = ℋ^1( J ∩ B_λ_x(x)) ≥ηλ_x > η(1-s)ρ, a contradiction. Thus, (<ref>) is verified. This concludes the proof. Let p : ^2 → (1,∞) be a bounded, log-Hölder-continuous, variable exponent satisfying p^- > 1 and let k ∈. There exist universal constants ξ, η > 0 such that the following holds. 
Provided that Assumption (H) is satisfied, then for any s ∈ (0,1) and any u ∈^p(·)(B_ρ, ^k) satisfying ℋ^1(J_u ∩ B_ρ) < η(1-s) ρ/2, there are a countable family ℱ = {B} of closed balls, overlapping at most ξ times, of radius r_B < (1-s)ρ and centre x_B ∈B_s ρ, and a function w ∈^p(·)(B_ρ, ^k) such that * ∪_ℱ B ⊂ B_1+s/2ρ and 1/ρ∑_ℱℒ^2(B) + ∑_ℱℋ^1(∂ B) ≤2πξ/ηℋ^1(J_u ∩ B_ρ). Moreover, ∑_ℱℒ^2(B) ≤min{πξ/ηρℋ^1(J_u ∩ B_ρ), π( ξ/ηℋ^1(J_u ∩ B_ρ) )^2 } * ℋ^1(J_u ∩∪_ℱ∂ B) = ℋ^1((J_u ∩ B_s ρ) ∖∪_ℱ B) = 0. * w = u ℒ^2-a.e. on B_ρ∖∪_ℱ B. * w ∈ W^1,p(·)(B_s ρ; ^k) and ℋ^1(J_w ∖ J_u) = 0. * For each B ∈ℬ, one has w ∈ W^1,p(·)(B, ^k), with ∫_B ∇ w^q x ≲∫_B ∇ u^q x for any q ∈ [1,p^-], where the implicit constant at right hand side is universal. Moreover, ∫_B | ∇ w |^p(x) x ≲(1+r^2) max{∇ u_L^p(·)(B)^p^-, ∇ u_L^p(·)(B)^p^+} where the implicit constant in the right hand side depends only on the log-Hölder constant of p. * If, in addition, u ∈ L^∞(B_ρ; ^k), then w ∈ L^∞(B_ρ; ^k) with w_L^∞(B_ρ)≤u_L^∞(B_ρ). With Proposition <ref> at hand, the proof is completely analogous to that of <cit.>. We sketch the key steps to point out the main differences, addressing the reader to <cit.> for the missing details. Let s ∈ (0,1) be fixed and set J = J_u. Then, J is a Borel set in B_ρ, ℋ^1-rectifiable, and such that ℋ^1(J ∩ B_ρ) < η (1-s)ρ/2. According to Lemma <ref>, the set E of points x ∈ J_u ∩ B_sρ at which at least one between (<ref>) or (<ref>) does not hold is a ℋ^1-null subset of J_u ∩ B_sρ. Then, removing E from J_u ∩ B_sρ, we have ℋ^1((J_u ∩ B_sρ) ∖ E) = 0 and, moreover, a function r : (J_u ∩ B_sρ) ∖ E → (0,(1-s)ρ/2) is defined everywhere on (J_u ∩ B_sρ) ∖ E. By Besicovitch covering theorem <cit.> applied to (J ∩ B_sρ) ∖ E, we can find a universal number ξ∈ of countable families ℱ'_j of disjoint closed balls {B_j^i}_i ∈ (for j ∈{1,…,ξ}), of radius r_B_j^i < (1-s)ρ/2 and centre x_B_j^i∈B_sρ, that cover ℋ^1-almost all J_u ∩ B_s ρ. Upon setting ℱ := ∪_j=1^ξℱ'_j, we have ℋ^1((J_u∩ B_sρ) ∖∪_ℱ B) = 0, and, again as in the proof of <cit.>, ∑_B ∈ℱℋ^1(∂ B) ≤2πξ/ηℋ^1(J_u ∩ B_sρ). Moreover, since all the radii r_B of the balls B in ℱ are smaller than ρ, we get ∑_ℱ r_B^2 ≤ρ∑_ℱ r_B, yielding ℒ^2(∪_ℱ B) ≤πξ/ηρℋ^1( J_u ∩ B_2sρ). Furthermore, since Σ_ℱ r_B^2 ≤( Σ_ℱ r_B )^2, (<ref>) follows, too. By (<ref>), there holds ℋ^1(J_u ∩∪_ℱ∂ B) = 0, and this completes the proof of <ref>. On each ball B^i_1 of the first family ℱ'_1, we consider the function ϕ^i_1(u) given by Proposition <ref>. For each h ∈, we define w^h_1 := ϕ^i_1(u) B_h^i, i≤ h, u . Properties (ii) and (iv) in Proposition <ref> imply that w^h_1 ∈^p(·)(B_sρ, ^k), w^h_1 ∈ W^1,p(·)(∪_i ≤ h B_1^i), w^h_1 =u ℒ^2 B_ρ∖∪_i ∈ B_1^i , and ℋ^1(J_w_1^h∖ J_u) = 0 . In addition, by (ii) in Proposition <ref>, it also follows that, for any h ∈, ∫_B_ρ∇ w^h_1^p(x) x ≲(1+ρ^2) max{∇ u_L^p(·)(B_ρ)^p^-, ∇ u_L^p(·)(B_ρ)^p^+} up to a constant depending only on the log-Hölder constant of p(·). Moreover, for any h ∈ there holds w_1^h(B_ρ) ≤u( B_ρ∖∪_i≤ h B_1^i ) + c∫_∪_i≤ h B_1^i∇ u x, for a universal constant c > 0. Furthermore, by (iii) of Proposition <ref>, w^h_1 - w^h-1_1_L^1(B_ρ) = w^h_1-u_L^1(B^h_1)≤ c_k ρu(B_1^h), and, for any j ≥ h ≥ 1, w^h_1 - w^j_1_L^1(B_ρ)≤ c_k ρu(B_1^h), where c_k > 0 depends only on k. Thus, as in <cit.>, we see that the sequence {w^h_1} converges to w_1 in L^1(B_ρ,^k) as h → +∞, where w_1 := ϕ(u) ∪_i ∈ B^i_1, u , and ϕ(u) := ∑_i ∈ϕ^i_1(u) χ_B_1^i. By (ii), (iii) in Proposition <ref>, it follows that ϕ(u) ∈^p(·)(B_ρ, ^k) ∩ W^1,p(·)(∪_ℱ_1' B, ^k). 
Therefore, by (<ref>), (<ref>), and Corollary <ref>, we have w_1 ∈^p(·)(B_ρ,^k). Exactly as in <cit.>, since ℋ^1( J_w^h_1∩∪_i ∈ B^i_1 ) = ℋ^1(J_u ∩∪_i ≥ h+1 B_1^i ), we conclude that ℋ^1 ( J_w_1∩∪_i ∈ B^i_1 ) ≤lim inf_h→ +∞ℋ^1 ( J_w^h_1∩∪_i ∈ B^i_1 ) = 0 and therefore, by (<ref>) or Lemma <ref> (depending on which of the two conditions in Assumption (H) is verified), it follows that w_1 ∈ W^1,p(·)(∪_i ∈ B^i_1). In addition, by construction, w_1 = u ℒ^2-a.e. on B_ρ∖∪_ℱ_1' B and ℋ^1(J_w_1∖ J_u) = 0. Iterating the previous construction for any integer l with 1 < l ≤ξ, considering the l-th family ℱ'_l, the sequence w^h_l := ϕ^i(w_l-1) on B^i_l for i≤ h and w^h_l := w_l-1 otherwise, and its L^1-limit w_l := ϕ(w_l-1) on ∪_i ∈ B^i_l and w_l := w_l-1 otherwise, with ϕ(w_l-1) := ∑_i ∈ϕ^i(w_l-1) χ_B_l^i, we see that there holds w_l ∈^p(·)(B_ρ,^k), w_l ∈ W^1,p(·)(∪_j ≤ l∪_ℱ'_j B, ^k), w_l = w_l-1 ℒ^2-a.e. on B_ρ∖∪_ℱ'_l B, and ℋ^1(J_w_l∖ J_u) = 0. Now, setting w := w_ξ, we have w ∈^p(·)(B_ρ, ^k), w ∈ W^1,p(·)(∪_ℱ B, ^k), w = u ℒ^2-a.e. on B_ρ∖∪_ℱ B, and ℋ^1(J_w ∖ J_u) = 0. Thus, the conclusion follows exactly as in <cit.>. In the special case in which p is constant, (<ref>) reduces to (<ref>) (in particular, as observed in Remark <ref>, the factor 1+r^2 at right hand side in (<ref>) is not present in this case). As an obvious consequence of items (i) and (iii) of Theorem <ref> and of (<ref>), it follows that ∫_B_ρ ∇ w^p(x) x ≲(1+ρ^2) max{∇ u_L^p(·)(B_ρ)^p^-, ∇ u_L^p(·)(B_ρ)^p^+} ≲(1+ρ^2) max{∫_B_ρ∇ u^p(x) x, (∫_B_ρ∇ u^p(x) x)^p^-/p^+, (∫_B_ρ∇ u^p(x) x)^p^+/p^-}, where the implicit constant at right hand side depends only on the log-Hölder constant of p(·). §.§ The constrained case In this section, we prove Theorem <ref>. Our argument is reminiscent of that of <cit.> and it combines Proposition <ref> and the following topological property (see, e.g., <cit.>, <cit.>, <cit.>, <cit.>). We recall that, given a topological space A and B ⊆ A a subset of A, a retraction of A onto B is a continuous map ϱ : A → B satisfying ϱ(z) = z for any z ∈ B. If ℳ is a compact m-dimensional smooth submanifold of ^k with π_0(ℳ) = π_1(ℳ) = … = π_j(ℳ) = 0, then there exist a compact (k-j-2)-dimensional smooth complex 𝒳 in ^k and a locally smooth retraction 𝒫 : ^k ∖𝒳→ℳ so that ∇𝒫(y)≤C/ dist(y, 𝒳) for any y ∈^k ∖𝒳 and some constant C=C(ℳ, k, 𝒳). Moreover, in a neighborhood of ℳ, 𝒫 is smooth of constant rank m. By construction, 𝒳 lies strictly away from ℳ. To the best of our knowledge, Lemma <ref> was first proved by Hardt and Lin <cit.>, with 𝒳 a Lipschitz polyhedron and 𝒫 a locally Lipschitz retraction. Later, it was realised that the same argument, at the price of very minor complications, allows one to construct 𝒳 as a smooth complex and 𝒫 as a locally smooth retraction, c.f., e.g., <cit.>. Up to a bounded change of metric in ^k, (<ref>) implies ∫_B_R∇𝒫(x)^p x < +∞ for every 1 ≤ p < j+2 and every 0 < R < +∞, see, e.g., <cit.> or <cit.>. We import almost verbatim a useful remark from <cit.>. Notice that 𝒫_a |_ℳ : y ∈ℳ↦𝒫(y-a), for a small enough, defines a smooth family of maps ℳ→ℳ such that 𝒫_0 |_ℳ = 𝒫|_ℳ = Id_ℳ. Therefore, the implicit function theorem implies that 𝒫_a |_ℳ has a smooth inverse ( 𝒫_a |_ℳ)^-1 : ℳ→ℳ for a sufficiently small (depending only on ℳ and 𝒫). The following observation, explicitly pointed out in the proof of <cit.> (but already implicitly used in <cit.>, <cit.>), will be useful as well.
For any positive numbers σ, Λ, any v ∈ L^∞(Ω,^k) with v_L^∞(Ω)≤Λ, any measurable function g : Ω→ [0,+∞) and any Borel function f : ^k → [0,+∞), there holds ∫_ℬ_σ^k( ∫_Ω g(x) f(v(x)-y) x ) y ≤∫_Ω g(x) x ∫_B^k_σ+Λ f(z) z Just apply Fubini's theorem and make the change of variable z := v(x) - y in the integral with respect to y. Next, we combine Remark <ref> and Lemma <ref> to obtain the following auxiliary result. (A related statement is proven in <cit.>.) Let Ω⊂^n be a bounded open set with Lipschitz boundary. Let j ≥ 0 be an integer, let k ∈ and assume that ℳ is a compact, j-connected, smooth Riemannian manifold without boundary, isometrically embedded into ^k. Let p : Ω→ (1,+∞) be a bounded, variable exponent, satisfying 1 < p^- ≤ p(x) ≤ p^+ < j + 2 for every x ∈Ω. Then, for any w ∈(L^∞∩ W^1,p(·))(Ω, ^k ) so that w |_∂Ω takes values in ℳ, there exists w∈ W^1,p(·)(Ω, ℳ) such that w = w a.e. on { x ∈Ω : w(x) ∈ℳ} and w = w on ∂Ω (in the sense of traces). Moreover, ∫_Ω∇w^p(x) x ≤ C ∫_Ω∇ w^p(x) x, where C > 0 is a constant independent of w and w and that depends only on the quantities listed in Remark <ref>. The proof of Lemma <ref> follows very closely arguments and computations in that of <cit.>, taking advantage of several observations in <cit.>. For the reader's convenience, we provide full details. During the proof, we will keep track of the precise dependencies of the constant C in (<ref>), gathering them in Remark <ref> below. Let 𝒫 : ^k ∖𝒳→ℳ be given by Lemma <ref>. For a ∈^k, define 𝒳_a := { y + a : y ∈𝒳} and 𝒫_a : ^k ∖𝒳_a →ℳ by 𝒫_a(y) : y ∈^k ∖𝒳_a ↦𝒫(y-a). By Lemma <ref>, the map 𝒫_a is well-defined and locally smooth and, by Remark <ref>, we may find σ > 0 so small that the restricted map 𝒫_a |_ℳ : ℳ→ℳ is a diffeomorphism for all a ∈ℬ^k_σ := { y ∈^k : y < σ}. Up to further reducing σ below a threshold value σ̅ (depending only on 𝒫 and ℳ), if necessary, the inverse function theorem implies that λ := sup_a ∈ℬ_σ(𝒫_a |_ℳ)^-1 is a finite number depending only on 𝒫 and ℳ. Now, we notice that the set N := { (x,y) ∈Ω×^k : w(x) - y ∈𝒳} is measurable and ℋ^n+k-j(N) = 0, because each slice N ∩( {x}×^k ) = w(x) - 𝒳 ha dimension k-j-2. By Fubini's theorem, it follows that ℋ^n(N ∩( Ω×{a}) ) = 0 for a.e. a ∈^k, so 𝒫_a∘ w is well-defined for a.e. a ∈^k and it is a measurable function. Moreover, by the chain rule, for a.e. (x,y) ∈Ω×^k we have ∇(𝒫_a ∘ w) = ∇(𝒫∘ (w(x)-a) ) = (∇𝒫)(w(x)-a)∇ w(x). Applying Lemma <ref> with v = w, f = ∇𝒫^p(x), g = ∇ w^p(x), Λ = w_L^∞(Ω) and σ > 0 as above, we obtain ∫_B_σ^k∫_Ω∇(𝒫_a ∘ w)(x)^p(x) x a ≤∫_Ω∇ w(x)^p(x)∫_ℬ^k_σ+Λ(∇𝒫_a)(w(x))^p(x) a x ≤∫_Ω∇ w(x)^p(x)∫_ℬ^k_σ+Λ(∇𝒫)(w(x)-a)^p(x) a x ≤∫_Ω∇ w(x)^p(x)( ∫_ℬ_σ^k(∇𝒫)(y)^p(x) y) x ≤ C' ∫_Ω∇ w(x)^p(x) x < +∞, where in the last line we used the obvious inequality ∇𝒫(y)^p(x)≤ 1 + ∇𝒫(y)^p^+ (which holds for almost every (x,y) ∈Ω×^k) and Remark <ref>. Thus, the constant C' depends only on p^+, ℳ and 𝒫 (through j, k, and ∫_B_σ̅∇𝒫^p^+ y, where the latter is finite by (<ref>), since p^+ < j+2). Therefore, by Chebyshev inequality, we may choose a ∈ℬ_σ^k so that ∫_Ω∇ (𝒫_a ∘ w)(x)^p(x) x ≤ C' ℬ_σ^k^-1∫_Ω∇ w(x)^p(x) x, and the right hand side does not depend on a. We conclude that the map w := (𝒫_a |_ℳ)^-1∘𝒫_a ∘ w belongs to W^1,p(·)(Ω, ℳ) and it agrees with w a. e. on the set { x ∈Ω : w(x) ∈ℳ} and on ∂Ω in the sense of traces (because 𝒫_a is a retraction onto ℳ and (𝒫_a |_ℳ)^-1∘𝒫_a is the identity on ℳ). Finally,  (<ref>) follows from (<ref>), with C := max{λ^p^+,λ^p^-} C' ℬ_σ^k^-1 depending only on p^+, p^-, ℳ, 𝒫, and k. 
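In the model case of the round sphere (recorded only as an illustration of the objects 𝒳 and 𝒫 introduced above), take ℳ = 𝕊^k-1⊂^k with k ≥ 2, which is (k-2)-connected, so that one may choose j = k-2, 𝒳 = {0} (a compact smooth complex of dimension k-j-2 = 0), and 𝒫(y) := y/|y|. Then |∇𝒫(y)| ≲ 1/|y| = 1/ dist(y,𝒳) and, for every constant exponent 1 ≤ p < j+2 = k, ∫_B_R |∇𝒫(y)|^p dy ≲∫_0^R r^k-1-p dr < +∞. In particular, for k = 2 the condition p < j+2 = 2 matches the assumption p^+ < 2 under which we work in this paper, and the maps 𝒫_a(y) = (y-a)/|y-a| appearing in the proof above are the familiar averaged radial projections onto the sphere.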
The constant C in (<ref>) depends only on p^+, p^-, k, ℳ, and 𝒫. Recall that, for simplicity, we are assuming from the beginning to embed ℳ in ^k by means of Nash's isometric embedding theorem. The constant C tacitly depends also on this choice. (Recall that we agreed that “depending on ℳ” means “depending on ℳ and its Nash's embedding”.) However, there is nothing special about Nash's embedding, besides being always available for any Riemann manifold (of finite dimension). In particular, Nash's embedding is neither unique nor canonical, and one may want to choose another Euclidean embedding in some situations. (For instance, when ℳ is a sphere, in which case Nash's embedding does not coincide with the obvious, canonical one.) The results above are true for any isometric embedding of ℳ into a Euclidean space, but k, 𝒫 (and hence 𝒳), and C depend on this choice. We are now ready for the proof of Theorem <ref>. Given u ∈^p(·)(B_ρ, ℳ) satisfying (<ref>), for any fixed s ∈ (0,1), let v ∈ W^1,p(·)(B_sρ, ^k) ∩^p(·)(B_ρ, ^k) and ℱ be, respectively, the function and the family of balls provided by Proposition <ref>. Since u takes values in ℳ, by (<ref>) it follows u_L^∞(B_ρ)≤ M and by item <ref> of Proposition <ref> we have, in addition, v_L^∞(B_ρ)≤ M. Set v_s := v |_B_s ρ. Clearly, v_s ∈ W^1,p(·)(B_sρ, ^k), v_s_L^∞(B_sρ)≤ M, and v_s |_∂ B_s ρ takes values in ℳ. By Lemma <ref> with j = 0, a function w_s ∈ W^1,p(·)(B_sρ, ℳ) exists such that w_s = v_s a. e. on the set { x ∈Ω : v_s(x) ∈ℳ} and such that w_s |_∂Ω = v_s |_∂Ω = v |_∂Ω. (In particular, w_s |_∂Ω takes values in ℳ.) Moreover, by (<ref>), ∫_B_sρ∇ w_s^p(x) x ≤ C ∫_B_sρ∇ v_s^p(x) x, where, by Remark <ref>, C > 0 is a constant that depends only on p^+, p^-, k, ℳ, 𝒫, where 𝒫 is the retraction provided by Lemma <ref>. Upon setting w := u B_ρ∖B_sρ w_s B_sρ, we see that w ∈ W^1,p(·)( B_s ρ, ℳ) ∩^p(·)(B_ρ, ℳ) and that ℋ^1(J_w ∖ J_u) = 0 (indeed, in view of (<ref>), ℋ^1(J_w ∖ J_v) = 0). Moreover, w |_∂Ω = w_s |_∂Ω. Now, we claim that the function w and the family ℱ are a function and a family of balls satisfying all the properties required by the statement. Indeed, item <ref> is obvious (as it already follows from Proposition <ref>). Item <ref> holds because u = v a. e. on B ∖∪_ℱ B, so that in particular v_s = u on B_sρ∖∪_ℱ B, and w_s coincides with v_s when the latter takes values in ℳ (in particular, on the set {u = v}∩ B_sρ). Item <ref> is an immediate consequence of the above discussion. Finally, item <ref> follows from (<ref>), (<ref>), and the definition of w. Since generalised functions of bounded deformation are not closed under composition with (locally) Lipschitz or even smooth mappings, there is no immediate extension of Theorem <ref> to the case of functions belonging, for instance, to ^p(·)( B_ρ, ^1 ). For simplicity, we stated Theorem <ref> assuming that ℳ is merely connected. However, a quick inspection of the proof shows that it is valid, with identical proof, if we assume ℳ is j-connected and we replace the constraint p^+ < 2 with p^+ < j+2. § REGULARITY OF LOCAL MINIMISERS In this section, we prove Theorem <ref>. We will follow ideas that originated in <cit.> in the scalar case and that were later adapted to the context of sphere valued maps in <cit.>. Differently from <cit.>, we will not use the Sobolev-Poincaré inequality for functions and we will not make use of medians or truncations. Instead, we will follow the approach of <cit.> and employ the Sobolev approximation from Section <ref>. 
As explained in the Introduction, the key point lies in proving that the jump set of local minimisers is essentially closed. Let Ω⊂^2 be a bounded open set and let k ∈. In this section, we shall need the following variants of the integral functional F in (<ref>): for A ⊂^2 a Borel set, we consider F(u,c,A) := ∫_A ∇ u^p(x) x + c ℋ^1(J_u ∩ A), defined on _ loc(A, ^k), where c > 0. Clearly, F(u,c,A) reduces to the functional (<ref>) for A = Ω and c = 1. Let c > 0 and Ω⊂^2 be a bounded, open set. We say that a map u ∈^p(·)(Ω, ^k-1) is a local minimiser (among maps with values into ^k-1) of the functional F(·, c, Ω) iff there holds F(u,c,Ω) ≤ F(v, c, Ω) for any v ∈^p(·)( Ω, ^k-1) such that { v ≠ u }⊂⊂Ω. For every open set A ⊂^2, every t > 0 and every u ∈_ loc(Ω, ^k) such that u = t a.e. in Ω, we define Φ(u,c,A,t) := inf{ F(v,c,A) : v ∈_ loc(Ω,^k), v = u on Ω∖A, v = t a.e. in Ω}. If Φ(u,c,A,t) < +∞, we define the deviation from minimality of u as (u,c,A,t) := F(u,c,A) - Φ(u,c,A,t). Equivalently, (u,c,A,t) can be characterised as the smallest number κ∈ [0,+∞] so that F(u,c,A) ≤ F(v,c,A) + κ for all v ∈_ loc(Ω,^k) satisfying {v ≠ u}⊂⊂ A and such that v = t a.e. in Ω. For convenience, when c = 1, we set F(u,A) := F(u,1,A), (u,A,t) := (u,1,A,t), (u,A) := (u,1,A,1). Similarly to <cit.>, we establish the following definition. A function u ∈_ loc(Ω,^k-1) is a quasi-minimiser (among maps with values into ^k-1) of the functional F(·, Ω) in Ω if there exists a constant κ' ≥ 0 such that for all x ∈Ω and all balls B_ρ(x) ⊂Ω, (u, B_ρ(x)) ≤κ' ρ^2. By a standard comparison argument, we get an immediate upper bound on the local energy of quasi-minimisers, which will be used in the proof of Theorem <ref> below. Let u be any quasi-minimiser of F(·, Ω) in Ω. Then, for all x ∈Ω and for all balls B_ρ(x) ⊂Ω, we have F(u, B_ρ(x)) ≤ 2 πρ + κ' ρ^2. The proof is almost identical to that of <cit.>; the only difference is that we are compelled to consider competitors with values into ^k-1. (See also <cit.> and the remark thereafter.) Let u ∈_ loc(Ω, ^k-1) be a quasi-minimiser of F(·,Ω) in Ω, take any ρ' > 0 with ρ' < ρ and put v := u χ_B_ρ(x) ∖ B_ρ'(x) + e_k χ_B_ρ'(x). Then, v ∈_ loc(B_ρ(x), ^k-1) and, by quasi-minimality of u, F(u,B_ρ'(x)) ≤ℋ^1(J_v ∩B_ρ') + (u,B_ρ(x)) (<ref>)≤ 2 πρ + κ' ρ^2. The conclusion follows by letting ρ' →ρ. The main result of this section is the following theorem. Let Ω⊂^2 be a bounded, open set. Assume that p ∈ C^0,α(Ω), for some α∈ (0,1], is a variable exponent satisfying 1 < p^- ≤ p(x) ≤ p^+ < +∞ for all x ∈Ω. Let u ∈^p(·)(Ω, ^k-1) be a local minimiser of the functional F in (<ref>), (<ref>), in the sense of Definition <ref>. Then, the jump set J_u of u is essentially closed, i.e., ℋ^1(Ω∩(J_u∖ J_u)) = 0. Moreover, u ∈ C^1,β_0( Ω_0, ^k-1), where β_0 ∈ (0,1) depends only on k, p^+, p^-, [p]_0,α, and α, and Ω_0 is a relatively open set in Ω∖J_u satisfying ℋ^1(Ω∖Ω_0) = 0. The strategy for the proof of Theorem <ref> is classical and goes back to <cit.>. Later, it has been extended in various directions; see, e.g., <cit.>. The key point consists in proving that the jump set of u is essentially closed. Indeed, this means that Ω∖J_u is an open set in Ω, with ℒ^2(Ω∖J_u) = ℒ^2(Ω). In the classical case of a constant exponent p, this allows for using standard elliptic regularity to infer C^1-regularity out of J_u. Here, in the variable-exponent setting, we will rely more heavily on local minimality, exploiting Theorem <ref>. To obtain that J_u is essentially closed, we again argue along the lines of <cit.>.
The heart of matter is proving, in any set Ω_δ⊂Ω defined as in (<ref>), inequality (<ref>) below. Let p : ^2 → (1,+∞) be a variable exponent satisfying (p_1') and (p_2). Let δ > 0 and Ω_δ be defined by (<ref>). There exist θ_δ and ρ_δ depending only on p^-, p^+, and δ with the property that, if u ∈^p(·)(Ω,^k-1) is a quasi-minimiser of F in Ω, then F(u, B_ρ(x)) > θ_δρ for all balls B_ρ(x) ⊂Ω_δ with centre x ∈J_u and radius ρ < ρ_δ' := ρ_δ/κ', where κ' is the constant in (<ref>). Moreover, ℋ^1(Ω∩(J_u∖ J_u)) = 0. Once again, the idea of the proof of Theorem <ref> goes back to De Giorgi, Carriero, and Leaci <cit.>. It is based on proving a power-decay property of the energy in small balls with respect to the radius of the balls which, after a classical iteration argument, yields the conclusion. Given such a decay property, that in our case is provided by Theorem <ref> below, the proof is purely “algebraic” and it does not depend in any way on the target manifold of the considered maps. However, some modifications are needed with respect to the argument in <cit.> and also with respect to the recent adaptation to the variable-exponent setting provided by <cit.> because we need to avoid recurring to medians and truncations. The key technical tool that allows us to do so is provided by Proposition <ref> in Appendix <ref>, which gives us a sufficient condition to conclude that a given point does not belong to the jump set of a map u and that is proved using only the Sobolev approximation results from Section <ref>. Once Proposition <ref> is obtained, the proof of Theorem <ref> follows by the classical argument in <cit.>, exactly as in <cit.>. The bulk of this section is instead devoted to proving Theorem <ref>, a task which requires to modify in a nontrivial way both the arguments in <cit.> and those in <cit.>. Let p : ^2 → (1,+∞) be a variable exponent satisfying (p_1') and (p_2), let δ > 0, and let Ω_δ be defined as in (<ref>). There exists a constant C_δ = C(p^-,p^+,δ) > 0 with the following property. For every τ∈ (0,1) there exist = (τ,δ) and θ = θ(τ,δ) in (0,1) such that, if u ∈(Ω, ^k-1) satisfies, for all x ∈Ω_δ and all σ < ^2 such that B_σ(x) ⊂⊂Ω_δ, the conditions F(u, B_σ(x)) ≤σ, (u, B_σ(x)) ≤θ F(u, B_σ(x)), then F(u, B_τσ(x)) ≤ C_δτ^2 F(u,B_σ(x)). The idea of the proof of Theorem <ref> is classical and rooted in an argument by contradiction. However, the combined non-standard growth of the energy functional and the fact that we deal with maps with values into spheres make it somewhat trickier than usual. To handle the first complication, i.e., the non-standard growth of the functional, we borrow some ideas from the recent work <cit.>, where the functional F is considered on scalar-valued functions of class ^p(·). Note that our argument requires, as in <cit.>, the variable exponent to satisfy the strong log-Hölder condition, c.f. (<ref>) and Remark <ref> below. To take the sphere-valued constraint into account, we reason as in <cit.> but, instead of using medians, truncations, and the Sobolev-Poincaré inequality for functions, we exploit the Sobolev approximation results from Section <ref>. A careful reading of <cit.> shows that the decay lemma and the density lower bounds there (respectively, <cit.> and <cit.>) are both still valid, with exactly the same proof, for vector-valued unconstrained maps. Moreover, the proof of the density lower bounds consists in an essentially “algebraic” iteration argument, which does not depend at all on the target space of the maps. 
Hence, once Theorem <ref> (and Proposition <ref>) is proven, Theorem <ref> follows exactly as in <cit.>. Nonetheless, for the proof of Theorem <ref> we shall need several auxiliary results in the spirit of <cit.>. Let u ∈(B_r, ^k). For every c > 0 and t > 0, the functions ρ↦ F(u,c, B_ρ) and ρ↦(u,c,B_ρ, t) are non-decreasing in (0,r). The proof of Lemma <ref> is straightforward and, as in <cit.>, left to the reader. Let B ⊂^2 be a ball of radius s ∈ [1/2,1). For any h ∈, let t_h > 0, w_h : B →^k-1_t_h be a measurable function, and set λ_h^(1/2) := w_h_B_1/2. Assume that t_h → +∞ as h → +∞ and that * There holds lim_h → +∞ t_h1 - λ_h^(1/2)/t_h = d, for some constant d ≥ 0. * There exists w_∞ : B →^k such that w_h - λ^(1/2)_h → w_∞ a.e. in B as h → +∞. Then, there exists a hyperplane Π⊂^k such that w_∞(x) ∈Π for ℒ^2-almost every x ∈ B. The proof is along the lines of that of the corresponding assertion in the proof of <cit.>. Up to rotations in the target space, we may assume that for any h ∈ there holds w_h_B_1/2 = λ^(1/2)_h e_k, where e_k denotes the last vector of the canonical basis of ^k. Following <cit.>, we observe that the trivial identity w_h - λ_h^(1/2) + λ_h^(1/2)^2 = t_h^2 yields w_h - λ_h^(1/2)^2/t_h + 2(w_h - λ_h^(1/2))·λ_h^(1/2)/t_h = t_h(1-λ_h^(1/2)/t_h) ( 1+λ_h^(1/2)/t_h). Hence, by (<ref>) (which implies lim_h → +∞λ_h^(1/2)/t_h = 1) and assumption (ii) we obtain 0 + 2 w_∞ · e_k = 2d a.e. in B, so that d = w_∞ · e_k_B_1/2 = lim_h → +∞( w_h · e_k_B_1/2 - λ_h^(1/2)) = 0, hence w_∞ · e_k = 0 a.e. in B. The conclusion follows. For later use, we record here the following trivial linear algebra lemma. Let {Π^(s)}_s ∈ [1/2,1) be a family of (proper) hyperplanes in ^k, such that Π^(s)⊆Π^(t) for any s, t ∈ [1/2,1) with s ≤ t. Then, there exists s̅∈ [1/2,1) such that Π := Π^(s̅) contains all the hyperplanes Π^(s). By assumption, the hyperplane Π^(1/2) is a vector subspace, of dimension r ≤ k-1, contained in all the hyperplanes Π^(s). Let { a_1, …, a_r} be r linearly independent vectors in Π^(1/2). If r = k-1 or r coincides with the maximal dimension of the hyperplanes Π^(s), we are done. So, let us assume r < k-1 and that there is s > 1/2 such that Π^(1/2) is a proper subspace of Π^(s). Let r^(s)≤ k-1 be the dimension of Π^(s). Then, in Π^(s) we can find r^(s)-r vectors a_r+1, …, a_r^(s) which are linearly independent both of each other and of a_1,…,a_r. Again, if r^(s) = k-1, we are done, otherwise we repeat the process. In at most k-r-1 steps, we get the conclusion. In order to simplify the proof of Theorem <ref>, we state as an independent lemma a clever trick, due to Carriero and Leaci <cit.>, that we shall use to construct competitors. Let {γ_h}⊂ be a sequence such that γ_h → +∞ as h → +∞. Assume that {f_h}, {g_h} are sequences of mappings belonging to (B_1, ^k) and satisfying ∀ h ∈, ℒ^2 ( { f_h ≠ g_h }∩ B_1 ) ≲(ℋ^1( { f_h ≠ g_h }∩ B_1 ))^2, ( ℋ^1( { f_h ≠ g_h }∩ B_1 ) )^2 = o(γ_h^-1) as h → +∞. Then, denoting by f̃_h and g̃_h the precise representatives of f_h and g_h respectively, there holds lim_h→ +∞γ_h ℋ^1( {f̃_h≠g̃_h}∩∂ B_ρ) = 0 for almost every ρ∈ (0,1). Arguing as in <cit.>, we define a_h := γ_h ∫_0^1 ℋ^1( {f̃_h≠g̃_h}∩∂ B_ρ) dρ for every h ∈. By the coarea formula, a_h = γ_h ℒ^2( { f_h ≠ g_h }∩ B_1 ) and, by (<ref>) and (<ref>), a_h = γ_h ℒ^2( { f_h ≠ g_h }∩ B_1 ) ≲γ_h ( ℋ^1( { f_h ≠ g_h }∩ B_1 ) )^2 = o(1) as h → +∞. Thus, lim_h→+∞ a_h = 0, and the claim follows by an obvious contradiction argument. Finally, we recall the following L^∞-L^1 estimate for p-harmonic functions, see <cit.>.
We report here a statement which follows as a special case of a more general result proved for differential forms. Let u ∈ W^1,p(Ω,^k) be p-harmonic in Ω. Let B_r be a ball with r ∈ (0,1] such that B_r ⊂Ω. Then there exists a positive constant C_0 which depends only on n, k, and p such that sup_B_r/2∇ u^p ≤ C_0 _B_r∇ u^p y. We are now ready to prove Theorem <ref>. As already mentioned, the argument combines ingredients from <cit.>, <cit.>, and <cit.>. Fix δ > 0. It is enough to assume τ∈ (0,1/2) (otherwise, just take C_δ = 4). By contradiction, assume that (<ref>) does not hold. Then, there exist sequences {u_h}⊂(Ω,^k-1); {_h}, {θ_h}⊂ (0,1), {σ_h}⊂ (0,+∞); {x_h}⊂Ω_δ such that lim_h → +∞_h = lim_h → +∞θ_h = 0; ∀ h ∈, σ_h ≤_h^2 B_σ_h(x_h) ⊂⊂Ω_δ, and, moreover, F(u_h, B_σ_h(x_h)) ≤_h σ_h, (u_h, B_σ_h(x_h)) ≤θ_h F(u_h, B_σ_h(x_h)), F(u_h, B_τσ_h(x_h)) > C_1 τ^2 F(u_h, B_σ_h(x_h)), where C_1 = 2 C_0 and C_0 is the constant at right hand side in (<ref>). Furthermore, up to the extraction of a (not relabeled) subsequence, we may assume that {x_h} converges to some x_0 ∈Ω_δ. Let us define p_0 := p(x_0). We now proceed step-by-step, following the argument in <cit.>. [Scaling, Part 1] For all h ∈, we define the translated maps and exponents u_h : B_1 →^k-1, u_h(y) := u_h(x_h + σ_h y), p_h : B_1 → (1,+∞), p_h(y) := p(x_h + σ_h y), where y ∈ B_1. We notice that for any h ∈ and any y ∈ B_1, 1 < p^- ≤ p^-_h ≤ p_h(y) ≤ p^+_h ≤ p^+ < 2. Moreover, each rescaled variable exponent p_h(·) is still strongly log-Hölder continuous and this implies that the sequence {p_h} converges to p_0 uniformly in B_1 as h → +∞. Indeed, upon setting p^0_h := p_h(0) = p(x_h), we have sup_y ∈ B_1p_h(y) - p_0≤sup_y ∈ B_1{p_h(y) - p_h^0 + p_h^0 - p_0} = ω(σ_h) + o(1), which tends to zero as h→ +∞ thanks to (<ref>). Concerning the translated maps u_h, we observe that ∫_B_1∇u_h^p_h(y) y ≤_h, so that, in particular, sup_h ∈∫_B_1∇u_h^p_h(y) y ≤ 1, ∀ h ∈, ℋ^1(J_u_h∩ B_1) ≤_h. where the second inequality follows from (<ref>) and the obvious fact that σ_h ℋ^1( J_u_h∩ B_1 ) = ℋ^1( J_u_h∩ B_σ_h(x_h) ) for any h ∈ (c. f. <cit.>). Next, we set γ_h := 1/_h, t_h := (σ_h γ_h)^1/p_h^0/σ_h. We define the rescaled functions v_h : B_1 →^k, v_h := t_h u_h, and we observe that, for all h ∈, v_h = t_h a.e. in B_1, so that v_h ∈^p(·)(B_1,^k-1_t_h). From (<ref>) and the definition of γ_h, there holds t_h ≥γ_h^2-1/p_h^0, so that (being p_h^0 ≥ p^- > 1), t_h → +∞ as h → +∞. [Scaling, Part 2] For any h ∈, we define F_h(v,γ_h,B_σ) := ∫_B_σ ∇ v^p_h(y) y + γ_h ℋ^1(J_v ∩ B_σ). From the definitions of p_h and v_h, Equations (<ref>), (<ref>), (<ref>) become, respectively, F_h(v_h,γ_h,B_1) ≤ 1, _h(v_h,γ_h,B_1,t_h) ≤θ_h, F_h(v_h,γ_h,B_τ) > C_1 τ^2 F_h(v_h,γ_h,B_1), where _h denotes the deviation from minimality related to F_h. By (<ref>), there holds ℋ^1(J_v_h∩ B_1) ≤_h, and, since γ_h σ_h ≤ 1, we also have ∫_B_1∇ v_h^p_h(y) y = γ_hσ_h ∫_B_1(∇ u_h)(x_h + σ_h y)^p_h(y) y ≤ 1. [Convergence of the integrands to one with standard growth] Now, we define f_h : B_1 ×^k → by letting ∀ (y,ξ) ∈ B_1 ×^k, f_h(y, ξ) := (γ_h σ_h)^1-p_h(y)/p_h^0ξ^p_h(y), and f_0 : B_1 ×^k → by letting ∀ (y,ξ) ∈ B_1 ×^k, f_0(y,ξ) ≡ f_0(ξ) := ξ^p_0. We claim that {f_h} converges to f_0 uniformly on compact subsets of B_1 ×^k. To prove the claim, we argue in two steps, similarly to <cit.>. In the first place, we define f_h(ξ) := γ_h σ_h f(x_h, (γ_hσ_h)^-1/p_h^0ξ) = ξ^p^0_h (ξ∈^k), and we notice that, by (<ref>), {f_h} converges uniformly to f_0 as h → +∞ on compact subsets of B_1 ×^k.
Therefore, the claim is proven once we prove that f_h(y,ξ) - f_h(ξ) vanishes uniformly on compact subsets of B_1 ×^k as h → +∞ or, in other words, that for all y ∈ B_1 and all ξ∈^k such that ξ≤ R, there holds f_h(y,ξ) - f_h(ξ)≤ω_h,R, where ω_h,R→ 0 as h → +∞. To this purpose, we notice that, for all y ∈ B_1 and all ξ∈^K, f_h(y,ξ) - f_h(ξ) = (γ_h σ_h)^1-p_h(y)/p_h^-ξ^p_h(y) - ξ^p^0_h ≤( 1/γ_h σ_h)^p_h(y)/p_h^0-1ξ^p_h(y) - ξ^p^0_h + ξ^p_0 (γ_h σ_h)^p_h(y)/p^0_0-1 -1 =: I_h + II_h. The estimate of the two terms I_h and II_h proceeds exactly as in <cit.>. Addressing the reader to <cit.> for details, we just recall that, upon using the log-Hölder condition to estimate the coefficient of I_h, we find I_h ≤ c ξ^p_h(y) - ξ^p^0_h for all h ∈, where c is a constant independent of h. Then, by (<ref>), the functions y ↦ξ^p_h(y) - ξ^p^0_h are uniformly vanishing in B_1 for any fixed in ξ∈^k as h → +∞, so that I_h → 0, uniformly on compact subsets of B_1 ×^k as h → +∞. What is more, by strong log-Hölder condition, we actually obtain lim_h → +∞( 1/γ_h σ_h)^p_h(y)/p_h^0 - 1 = 1. Hence, II_h → 0 as h → +∞, uniformly on compact subsets of B_1 ×^K. [Approximation and lower bound] For any s ∈ [1/2,1), thanks to (<ref>), we can use Theorem <ref> to associate with any u_h (for any h sufficiently large), a function z_h^(s) belonging to W^1,p_h(·)(B_s,^k-1) ∩^p_h(·)(B_1,^k-1) and a family of balls ℱ^(s)_h satisfying, in particular, ∫_B_1∇z_h^(s)^p_h(y) y ≲max{u_h_L^p_h(·)(B_1)^p_h^-, u_h_L^p_h(·)(B_1)^p_h^+}, where the implicit constant at right hand side depends only the log-Hölder constant of p(·) and, by (<ref>) and (<ref>), ℒ^2( ∪_ℱ^(s)_h B ) (<ref>), (<ref>)≲( ℋ^1(J_u_h∩ B_1) )^2 ≤_h^2 ⟶ 0. as h → +∞. Moreover, ∀ s, t ∈ [1/2,1), z_h^(s) = z_h^(t) = u_h B_1 ∖( ∪_ℱ^(s)_h B ⋃∪_ℱ^(t)_h B ). Next, for any s ∈ [1/2,1) and any sufficiently large h ∈ as in the above, we denote λ_h^(s) := z_h^(s)_B_s and we let z_h^(s) := t_h z_h^(s). We observe that z_h^(s)∈ W^1,p_h(·)(B_s,^k-1_t_h) ∩^p_h(·)(B_1,^k-1_t_h) for any h and any s ∈ [1/2,1) and that each map z_h^(s) coincides with the Sobolev approximation of v_h provided by Theorem <ref>. In particular, ∀ s, t ∈ [1/2,1), z_h^(s) = z_h^(t) = v_h B_1 ∖( ∪_ℱ^(s)_h B ⋃∪_ℱ^(t)_h B ) and, by (<ref>) and (<ref>), for any s ∈ [1/2,1) we have ℒ^2( ∪_ℱ^(s)_h B ) (<ref>), (<ref>)≲( ℋ^1(J_u_h∩ B_1) )^2 = ( ℋ^1(J_v_h∩ B_1) )^2 ≤_h^2 ⟶ 0 as h → +∞. Moreover, for any s ∈ [1/2,1), there holds z_h^(s)_B_s = t_h z_h^(s)_B_s, λ_h^(s) := z_h^(s)_B_s = t_h λ_h^(s). We claim that, for any s ∈ [1/2,1), lim_h → +∞λ_h^(s)/t_h = 1, lim_h → +∞ t_h^(p^-)^*1 - λ_h^(s)/t_h^(p^-)^* = d, for some d ≥ 0. Indeed, the second equality entails the first and we have, for any s ∈ [1/2,1), t_h^(p^-)^* 1 - λ_h^(s)/t_h^(p^-)^* = t_h^(p^-)^* 1 - λ_h^(s)^(p^-)^* = t_h^(p^-)^*(_B_s1 - λ_h^(s) x )^(p^-)^* = t_h^(p^-)^*(_B_sz_h^(s) - z_h^(s)_B_s x)^(p^-)^*≤ t_h^(p^-)^*_B_sz_h^(s) - z_h^(s)_B_s^(p^-)^* x ≤t_h^(p^-)^*/π∇z_h^(s)_L^p^-(B_s)^(p^-)^*(<ref>)≲∇ z_h^(s)_L^p^-(B_s)^(p^-)^*≲∇ z_h^(s)_L^p(·)(B_1)^(p^-)^*(<ref>), (<ref>)≲ 1, where we used that z^(s)_h = 1 a.e., Jensens's inequality, the classical Sobolev-Poincaré inequality, and Proposition <ref>. By (<ref>), it follows, in particular, that z_h^(s) - z_h^(s)_B_s_L^q(B_1)≲∇ z_h^(s)_L^p^-(B_s)≲ 1 for any q ∈ [1, (p^-)^*], up to a constant independent of s. Furthermore, we observe that λ^(s)_h - λ^(1/2)_h≲ 1. Indeed, by (<ref>) and (<ref>), λ^(s)_h - λ^(1/2)_h(<ref>)= t_h λ^(s)_h - λ^(1/2)_h≤ t_h( λ^(s)_h - 1 + 1-λ^(1/2)_h) (<ref>)≲ 1. 
Therefore, for any s ∈ [1/2,1), the sequence { z^(s)_h - λ^(1/2)_h } is bounded in W^1,p^-(B_s, ^k ) and we can find a subsequence and a limiting map z^(s) such that { z^(s)_h - λ^(1/2)_h } converges to some z^(s) weakly in W^1,p^-(B_s), strongly in L^p^-(B_s), and pointwise almost everywhere in B_s. By (<ref>) and (<ref>), there holds z^(s) = z^(t) almost everywhere in B_s for all s, t ∈ [1/2,1) with s ≤ t. Hence, we can define a map z ∈ W^1,p^-_ loc(B_1,^k) by setting z := z^(s) in B_s for any s ∈ [1/2,1). In particular, we have ∇ z^(s)_h ⇀∇ z in L^1(B_s). Moreover, the uniform convergence of {p_h} to p_0 yields, by De Giorgi's semicontinuity theorem[The reference <cit.> is somewhat hard to find. The reader may equally well refer to the more general Ioffe's theorem, e. g. <cit.>.] <cit.> and since ∇ z_h^(s) = ∇ v_h a.e. in B_1 ∖∪_ℱ^(s)_h B, ∫_B_s∇ z^p_0 y ≤lim inf_h→+∞∫_B_s(∇ z_h^(s)) χ_B_1∖∪_ℱ_h^(s) B ^p_h(y) y = lim inf_h→+∞∫_B_s∖∪_ℱ_h^(s) B ∇ z_h^(s)^p_h(y) y = lim inf_h→+∞∫_B_s∖∪_ℱ_h^(s) B ∇ v_h ^p_h(y) y ≤lim inf_h→+∞∫_B_s∇ v_h ^p_h(y) y ≤lim inf_h → +∞ F_h(v_h, γ_h, B_s) As (<ref>) holds for any s ∈ [1/2,1), we can let s ↑ 1 in the above inequalities and find both ∫_B_1∇ z^p_0 y ≤lim inf_h→+∞∫_B_1∇ v_h^p_h(y) y, and ∫_B_1∇ z^p_0 y ≤lim inf_h→+∞ F_h(v_h, γ_h, B_1). In addition, Lemma <ref> applied to the sequence { z^(s)_h - λ^(1/2)_h } in B = B_s, for any s ∈ [1/2,1), yields that each map z^(s) takes value in a (proper) hyperplane Π^(s), that we may assume to be contained in the hyperplane { x_k = 0 }. Moreover, by Lemma <ref>, we can find a maximal hyperplane Π containing all the Π^(s). Finally, since p^- < 2 and p^+ < (p^-)^*, we have z ∈ W^1,p_0(B_1, Π), by (<ref>), Poincaré's inequality and the fact that p_0 ≤ p^+. [Convergence of the energy and p_0-minimality of z] We now improve the lower bound in the previous step by showing that z is a local minimiser of the p_0-energy with respect to compactly supported perturbations. We start by noticing that, since the function s ↦ F_h(v_h, γ_h, B_s) is increasing and uniformly bounded for s ∈ (0,1], by Helly's selection theorem and possibly passing to a (not relabelled) subsequence, we may assume that the limit lim_h → +∞ F_h(v_h, γ_h, B_s) =: α(s) exists for all s ∈ (0,1], where the function s ↦α(s) is non-decreasing. (Hence, continuous for all but at most countably many s ∈ (0,1].) Let v ∈ W^1,p_0(B_1, Π) be such that { v ≠ z }⊂⊂ B_1. Let {v^}⊂ W^1,∞(B_1,^k) be any sequence of Lipschitz functions strongly converging to v in W^1,p_0(B_1). Take ρ', ρ∈ (0,1), with ρ' < ρ and ρ a continuity point of α(·). For any h large enough, by (<ref>) we have ℋ^1(v_h ∩ B_1 ) < η(1-ρ), where η is the universal constant in Theorem <ref>, and therefore the approximation z^(ρ) of v_h, constructed as in the previous step, is well-defined. Let ζ∈ C_c^∞(B_ρ, [0,1]) be a cut-off function with ζ≡ 1 in B_ρ' and ∇ζ≤2/ρ-ρ'. We may assume that { v ≠ z }⊂⊂ B_ρ and define (for h large enough) w_h^ := ζ( v^ + λ^(1/2)_h ) + (1-ζ) z_h^(ρ). By definition, w_h^∈ W^1,p_h(·)(B_ρ, ^k), w_h^∈^p_h(·)(B_1, ^k), and w_h^ = z_h^(ρ) in B_1 ∖ B_ρ (so that, in particular, w_h^ = v_h a.e. in B_1 ∖ B_1+ρ/2). Moreover, there holds w_h^ = z_h^(ρ) in the sense of (Sobolev) traces on ∂ B_ρ. By writing w_h^ = z_h^(ρ) + ζ( (v^ - v ) + (v - z) + z + λ^(1/2)_h - z_h^(ρ)), we see that, for any small enough > 0 and any h ∈ large enough, w_h^· e_k ≥ t_h(1-) B_ρ. Indeed, v · e_k = z · e_k = 0 because v and z take values in Π, v^(y) → v(y) at a.e. 
y ∈ B_1 as → 0 by strong convergence, and z + λ^(1/2)_h - z_h^(ρ)→ 0 a.e. in B_ρ as h → +∞, which follows as a consequence of the weak convergence of {z_h^(ρ) - λ^(1/2)_h } to z in W^1,p^-(B_ρ) as h → +∞ and of the Sobolev-Poincaré inequality (<ref>). This also implies that z_h^(ρ)· e_k ≥ t_h(1-) a. e. for h large enough, which yields the claim. As a consequence of the above discussion, for any >0 small enough and any h ∈ large enough, we can define w_h^ := t_h w_h^/w_h^ and we have w_h^∈ W^1,p_h(·)(B_ρ,_t_h^k-1), w_h^∈^p_h(·)(B_1, _t_h^k-1) and w_h^ = z_h^(ρ) a.e. in B_1 ∖ B_ρ. Thus, ∇ w_h^≤1/1-∇w^_h B_1 and, in turn, ∫_B_ρ∇ w_h^^p_h(y) y ≤1/(1-)^p^+∫_B_ρ∇w^_h^p_h(y) y ≤1/(1-)^p^+{∫_B_ρ'∇ v^^p_h(y) y + ∫_B_ρ∖ B_ρ'∇ z_h^(ρ)^p_h(y) y + ∫_B_ρ∖ B_ρ'∇w^_h^p_h(y) y } ≤1/(1-)^p^+{∫_B_ρ'∇ v^^p_h(y) y + ∫_B_ρ∖ B_ρ'∇ z_h^(ρ)^p_h(y) y . . + 4^p^+[ ∫_B_ρ∖ B_ρ'∇ v^^p_h(y) y_(I_h,) + ∫_B_ρ∖ B_ρ'∇ z_h^(ρ)^p_h(y) y _(II_h)+ 1/(ρ - ρ')∫_B_ρ∖ B_ρ'v^ + λ_h^(1/2) - z_h^(ρ)^p_h(y) y_(III_h)] }. For any fixed > 0, the sequence {∇ v^^p_h(·)} is equibounded in B_1 (because v^ is Lipschitz and {p_h} converges uniformly), and therefore lim_h → +∞(I_h,) = ∫_B_ρ'∇ v^^p_0 y Consequently, by the strong convergence v^→ v in W^1,p_0(B_1), lim_→ 0lim_h → +∞(I_h,) = ∫_B_ρ'∇ v^p_0 y. Moreover, since ∫_B_1∇ v_h^p_h(y) y ≤ 1 by (<ref>), recalling (<ref>), item <ref> of Theorem <ref> and (<ref>), we obtain that ∫_B_ρ∖ B_ρ'∇ z_h^(ρ)^p_h(y) y ≲(∫_B_ρ∖ B_ρ'∇ v_h^p_h(y) y )^p^-/p^+≲( F_h(v_h,γ_h,B_ρ∖ B_ρ') )^p^-/p^+ for any h ∈, where the implicit constants depend only on the log-Hölder constant of p. On the other hand, by the very definition of deviation from minimality, there holds F_h(v_h,γ_h, B_ρ) - _h(v_h,γ_h,B_ρ,t_h) ≤ F_h(w_h^(ρ), γ_h, B_ρ) for all h ∈. Thus, passing to the limit superior in (<ref>) and using (<ref>) (recalling that, by Lemma <ref>, the function s ↦_h(·, ·, ·, B_s, ·) is increasing, for any h ∈), we obtain lim sup_h → +∞ F_h(v_h,γ_h, B_ρ) ≤lim sup_h → +∞ F_h(w_h^(ρ), γ_h, B_ρ) By (<ref>), it follows that lim_h→ +∞(III_h) = 0, and by (<ref>) lim sup_h → +∞(II_h) = c( α(ρ) - α(ρ') ), where the constant c is universal. Therefore, by (<ref>), (<ref>), (<ref>), and the last two estimates, lim sup_h → +∞ F_h(v_h,γ_h, B_ρ) ≤1/(1-)^p^+{∫_B_ρ'∇ v^^p_0 y + 4^p^+∫_B_ρ∖ B_ρ'∇ v^^p_0 y + c(α(ρ)-α(ρ'))}, and letting ρ' →ρ (recalling that ρ is a continuity point of α(·)), lim sup_h → +∞ F_h(v_h,γ_h, B_ρ) ≤1/(1-)^p^+∫_B_ρ∇ v^^p_0 y. As this holds for any > 0 (small enough), we can let → 0 and thanks to (<ref>) we obtain ∫_B_ρ∇ z^p_0 y ≤lim_h → +∞ F_h(v_h,γ_h, B_ρ) = α(ρ) ≤∫_B_ρ∇ v^p_0 y, Since we can take, in particular, v = z, we obtain ∫_B_ρ∇ z^p_0 y = α(ρ) for all but countably many ρ∈ (0,1). In addition, since the left hand side of (<ref>) is a continuous function of ρ∈ (0,1], we have that α(·) is actually a continuous function of ρ∈ (0,1]. Thus, the above argument holds for all ρ∈ (0,1) and we conclude that z minimises the p_0-energy locally in B_1 with respect to compactly supported perturbations. [Conclusion] By (<ref>), we have sup_B_τ∇ z^p y ≤ C_0 τ^2 ∫_B_1∇ z^p y and from this, (<ref>), (<ref>), and (<ref>), we deduce lim_h → +∞ F_h(v_h, γ_h, B_τ) (<ref>),(<ref>)=∫_B_τ∇ z^p_0 y ≤sup_B_τ∇ z^p_0τ^2 ℒ^2(B_1) ≤ C_0 τ^2 ∫_B_1∇ z^p_0 y (<ref>)= C_0 τ^2 lim_h → +∞ F_h(v_h, γ_h, B_1) < C_1 τ^2 lim_h → +∞ F_h(v_h, γ_h, B_1) a contradiction to (<ref>). 
We remark that, in the proof of Theorem <ref>, we used the strongly log-Hölder condition only to ensure the uniform convergence of the rescaled variable exponents p_h(·) to a constant. We are now in the position to prove Theorem <ref>. The proof is similar to that of <cit.>, which adapts to the variable exponent setting the classical argument from <cit.> (see also <cit.>). We work out the main steps, addressing the reader to the aforementioned references for the missing details. [Small energy in a ball implies power-decay in smaller balls] Fix δ > 0 and τ∈ (0,1) such that √(τ)≤ 1 / C_δ and set _δ := (τ, δ), where C_δ and (τ,δ) are given by Theorem <ref>. Fix, in addition, σ∈ (0,1) such that σ≤_δ/C_δ(2 π +1 ), and set ρ_δ := min{ 1, (σ,δ)^2, _δτ^2 θ(τ,δ), _δσθ(σ,δ) }, where θ(τ,δ), (σ,δ) and θ(σ,δ) are the numbers provided by Theorem <ref> corresponding to the pairs (τ,δ) and (σ,δ), respectively. Arguing exactly as in <cit.>, from Theorem <ref> we obtain that for any ρ < ρ_δ':= ρ_δ/κ' and any ball B_ρ⊂⊂Ω_δ the condition F(u,B_ρ(x)) ≤(σ,δ) ρ implies ∀ h ∈, F(u, B_στ^h ρ(x) ) ≤_δσ^h/2(στ^h ρ). [Local lower bounds for the energy in balls centred at jump points] Assume that (<ref>) holds for some x ∈Ω_δ, with B_ρ(x) ⊂⊂Ω_δ and ρ < ρ_δ'. From (<ref>), we easily obtain that lim_ρ→ 01/ρF(u, B_ρ(x)) = 0, hence Proposition <ref> and Remark <ref> (applied with ℳ = ^k-1) imply that x ∉J_u. Consequently, the lower bound F(u, B_ρ(x)) > (σ,δ) ρ must hold for every x ∈Ω_δ∩ J_u and actually for every x ∈Ω_δ∩J_u (because (σ,δ) does not depend on the particular choice of x ∈Ω_δ∩ J_u). Set θ_δ := (σ,δ). [J_u is locally essentially closed] We now prove that ℋ^1( Ω_δ∩( J_u∖ J_u ) ) = 0 for any δ > 0. To this purpose, it is enough to prove that the difference between Ω_δ∩( J_u∖ J_u ) and a ℋ^1-null set is itself a ℋ^1-null set. To do this, we argue as in the last part of the proof of <cit.> and we consider the set Σ_δ := { x ∈Ω_δ : lim sup_r → 0∫_B_r∇ u^p(y) y > 0 }. Since ∇ u^p(·)∈ L^1(Ω), it follows that ℋ^1(Σ_δ) = 0 (see, e.g., <cit.>). On the other hand, if x ∈Ω_δ∩(J_u∖Σ_δ), from (<ref>) we obtain Θ^*(J_u, x) := lim sup_r → 0ℋ^1(J_u ∩ B_r(x))/2 π r≥θ_δ > 0. Define the Radon measure μ := ℋ^1 J_u and consider the Borel set E_δ := Ω_δ∩( J_u∖ J_u ) ∖Σ_δ. Since Θ^*(J_u, x) ≥θ_δ for every x ∈ E_δ, we deduce that μ( (Ω_δ∩(J_u∖ J_u ) ∖Σ_δ) ) ≥θ_δℋ^1(Ω_δ∩(J_u∖ J_u ) ∖Σ_δ). Since ℋ^1(Σ_δ) = 0, θ_δ > 0, and μ( (Ω_δ∩(J_u∖ J_u )∖Σ_δ) ) = ℋ^1(J_u ∩ (Ω_δ∩(J_u∖ J_u ) ∖Σ_δ_=∅ )) = 0, we obtain (<ref>). [Conclusion] The essential closedness of J_u, i.e., (<ref>), now follows by an easy argument by contradiction. Assume ℋ^1(Ω∩(J_u∖ J_u)) > 0 and let δ_h ↓ 0 be any decreasing sequence (so that {Ω_δ_h} is an ascending sequence of sets such that Ω = ∪_h=1^∞Ω_δ_h). Then, by the σ-subadditivity of ℋ^1 and (<ref>), 0 < ℋ^1(Ω∩(J_u∖ J_u)) ≤∑_h=1^∞ℋ^1(Ω_δ_h∩(J_u∖ J_u)) = 0, a contradiction. Let J_u^* the set of points of J_u with density one, namely J_u^* := { x ∈ J_u : lim_ρ→ 0ℋ^1(J_u ∩ B_ρ(x))/πρ = 1 }. Given Theorem <ref> and Theorem <ref>, the following corollary is proven exactly as in <cit.>. Let Ω⊂^2 be a bounded, open set. Let p ∈ C^0,α(Ω), for some α∈ (0,1], a variable exponent satisfying (p_2). Then, for any δ > 0, the set Ω_u^(δ) := { x ∈Ω : F(u, B_ρ(x)) < θ_δρ ρ∈(0, ρ_δ' ∧(x, ∂Ω) ) }, where θ_δ and ρ_δ' are given by Theorem <ref>, is open and obeys Ω_u^(δ)∩J_u^* = ∅. Consequently, the set Ω_u := ⋃_δ > 0Ω_u^(δ) is open and obeys Ω_u ∩J_u^* = ∅. Moreover, ℋ^1(Ω_u ∩ J_u) = 0 and Ω∩S_u = Ω∩J_u = Ω∖Ω_u. 
We are now ready for the proof of Theorem <ref>. Let u ∈^p(·)(Ω, ^k-1) be any local minimiser of the functional F(·,Ω) given by (<ref>). Then, u is a quasi-minimiser of F(·,Ω), according to Definition <ref>. Consequently, Theorem <ref> tells us that J_u is essentially closed (i.e., (<ref>) holds for u). Let Ω' := Ω∖J_u. Then, Ω' is open and not empty (actually, ℋ^1(Ω∖Ω') = 0, by (<ref>) and the rectifiability of J_u). By (<ref>) we have u ∈ W^1,p(·)( Ω', ^k-1), and moreover u is a local minimiser of the functional (<ref>) in Ω'. By Theorem <ref>, it follows that u ∈ C^1,β_0_ loc(Ω_0, ^k-1), for some β_0 ∈ (0,1) depending only on k, p^-, p^+, [p]_0,α, α and for a set Ω_0, relatively open in Ω' (hence, in Ω) and satisfying ℋ^2-p^-( Ω' ∖Ω_0 ) = 0 (implying also ℋ^1(Ω∖Ω_0) = 0). The conclusion follows. In the particular case p is constant, we can apply standard elliptic regularity results <cit.> to obtain u ∈ C^1,β_0_ loc(Ω∖J_u, ^k-1), for some β_0 ∈ (0,1) depending only on k and p. Although we did not try to establish sharp assumptions on the regularity of the variable exponent in our argument, we managed to add regularity assumptions only when needed, with the aim to make it clear where, precisely, they are used. To summarise: * We assumed log-Hölder continuity from the beginning because without the Lavrentiev gap phenomenon happens (see, for instance, <cit.> and references therein). In addition, the log-Hölder condition is crucially used in the proof of Theorem <ref>. * We assumed the strong log-Hölder condition in Theorem <ref> in order to ensure uniform convergence of the rescaled variable exponents p_h towards a constant map in the blow-up analysis. (But it is unclear whether it is optimal or not, c.f. Remark <ref> above.) Consequently, we had to assume strong log-Hölder continuity in Theorem <ref>. * In order to apply Theorem <ref>, in Theorem <ref> we had to assume p to be Hölder continuous. Note that, under strong log-Hölder continuity assumption, any quasi-minimiser of F(·, Ω) has essentially closed jump set. However, even assuming Hölder continuity, our argument does not allow for deducing further regularity without local minimality. Data Availability Statement Data sharing not applicable to this article as no datasets were generated or analysed during the current study. 10pt Declarations The authors have no competing interests to declare that are relevant to the content of this article. § CLOSURE AND COMPACTNESS THEOREMS FOR ^P(·) The following results are certainly known to experts but we have not found any explicit proof in literature, hence we provide one here, for the reader's convenience. Let Ω⊂^n be a bounded open set, let p : Ω→ (1,+∞) be a bounded, log-Hölder continuous variable exponent satisfying 1 < p^- ≤ p(x) ≤ p^+ < +∞. Assume, in addition, θ : [0,+∞) → [0,+∞] is an increasing lower semicontinuous functions satisfying θ(t)/t → +∞ as t → 0, and that {u_h}⊂^p(·)(Ω,^k) is a sequence satisfying sup_h∈{∫_Ω∇ u_h^p(x) x̣ + ∫_J_u_hθ(u^+_h - u^-_h) H^n-1(J_u_h) } < +∞. If {u_h} weakly^* converges in (Ω,^k) to some u, then u ∈^p(·)(Ω,^k), the approximate gradients ∇ u_h weakly converge to ∇ u in L^1(Ω,^k × n), ^ju_h weakly^* converge to ^ju in Ω, and ∫_Ω∇ u^p(x) x ≤lim inf_h →∞∫_Ω∇ u_h^p(x) x, and ∫_J_uθ(u^+ - u^-) ℋ^n-1≤∫_J_u_hθ(u^+_h - u^-_h) ℋ^n-1 if θ is concave. The conclusion follows by combining the classical closure theorem for (e.g., <cit.>) with the lower semicontinuity result given by <cit.> in the context of variable exponents. 
Although short, for clarity we divide the easy proof in two steps. Since Ω is bounded, we have, in particular, {u_h}⊂^p^-(Ω,^k). In addition, by assumption, the sequence {u_h} weakly^* converges in (Ω,^k) to u ∈(Ω,^k). By the classical closure theorem of (Ω,^k) (<cit.>, with φ(t) = t^p^- and θ as in the statement), it follows that u ∈(Ω,^k) (actually, <cit.> also gives u ∈^p^-(Ω,^k), because of the inequality (4.5) there). By <cit.> and <cit.>, (<ref>) follows as well. From Step 1, we know that u_h → u in L^1(Ω,^k) and that u ∈(Ω,^k). Therefore, we can apply <cit.> (with f(x,u,z) = z^p(x), Ψ≡ 1, a ≡ 0, and any c > 0) and deduce that (<ref>) holds. But then it follows that u ∈^p(·)(Ω,^k), which is the desired conclusion. Assume ℳ is a smooth, compact Riemannian manifold without boundary, isometrically embedded in ^k. If {u_h}⊂^p(·)(Ω,ℳ) and the assumptions of Theorem <ref> hold, then the limit function u provided by Theorem <ref> belongs to ^p(·)(Ω,ℳ). Indeed, the strong convergence u_h → u in L^1(Ω, ^k) implies (up to extraction of a — not relabelled — subsequence) u_h(x) → u(x) for a.e. x ∈Ω, hence u(x) ∈ℳ for a.e. x ∈Ω. Let Ω⊂^n be a bounded open set and let p : Ω→ (1,+∞) be a bounded, log-Hölder continuous variable exponent satisfying 1 < p^- ≤ p(x) ≤ p^+ < +∞. Let θ : [0,+∞) → [0,+∞] be an increasing lower semicontinuous function satisfying θ(t)/t → +∞ as t → 0. Finally, assume that {u_h}⊂^p(·)(Ω,^k) is a uniformly bounded sequence in (Ω,^k) and that (<ref>) holds. Then, we may find u ∈^p(·)(Ω,^k) and extract a (not relabelled) subsequence such that u_h → u weakly^* in (Ω,^k). Moreover, the Lebesgue part and the jump part of the derivative converge separately, i.e., ∇ u_h →∇ u and ^ju_h →^ju weakly^* in Ω. Again, we divide the simple proof in two steps. Assumption (<ref>), along with p^- > 1 and the boundedness of Ω, implies that the sequence {∇ u_h} of the approximate differentials is equiintegrabile (see, e.g., <cit.>). On the other hand, by assumption, we also have that {u_h} is uniformly bounded in (Ω,^k). Since, by (<ref>), there holds sup_h ∫_J_u_hθ( u^+_h - u^-_h) ℋ^n-1 < +∞, the hypotheses of the classical compactness theorem for (Ω,^k), as stated for instance in <cit.> (see Theorem 1.4 there), are satisfied. Therefore, there exist u ∈(Ω,^k) and a (not relabelled) subsequence such that u_h → u as h → +∞, with separate convergence for the Lebesgue part and the jump part of the derivative. Since u_h → u weakly^* in (Ω,^k), Theorem <ref> yields u ∈^p(·)(Ω,^k). This concludes the proof. § POINCARÉ'S INEQUALITY FOR BOUNDED VARIATION FUNCTIONS IN CONVEX SETS Let Ω⊂^n be an open set and k ∈ be an integer. We recall that every function in (Ω,^k) can be approximated by a sequence smooth functions. More precisely, the following characterisation holds (see, e. g., <cit.>): a function u ∈ L^1(Ω,^k) belongs to (Ω,^k) if and only if there exists a sequence {u_h}⊂(W^1,1∩ C^∞)(Ω,^k) converging to u in L^1(Ω,^k) and such that L := lim_h→∞∫_Ω∇ u_h x ≤u(Ω). By this approximation result and Poincaré's inequality for Sobolev functions in convex sets (e.g. <cit.>), we derive the following more precise version of the classical Poincaré's inequality in (see, e.g., <cit.>), showing up the explicit dependence on the diameter of Ω. Since we have not found an explicit proof of this statement in the literature, we provide a detailed one here, for reader's convenience. Let Ω⊂^n be a bounded, convex open set. 
There exists a constant C=C(n,k), depending only on n and k, such that for any u ∈(Ω,^k) there holds u-u_Ω_L^1(Ω,^k)≤ C(n,k) ((Ω)) u(Ω) Since u belongs to (Ω,^k), by <cit.> there exists {u_h}⊂(W^1,1∩ C^∞)(Ω,^k), a sequence of smooth functions in Ω, such that u_h → u in L^1(Ω,^k) as h → +∞ and such that (<ref>) holds. To the functions u_h, we may apply Poincaré's inequality for Sobolev functions in convex sets (e.g., <cit.>), from which it follows that u_h - u_h_Ω_L^1(Ω)≤ C(n,k) (Ω)∇ u_h_L^1(Ω), where the constant C(n,k) depends only on n and k. On the other hand, by (<ref>), up to discarding finitely many terms of the sequence {u_h}, we may assume that sup_h ∈∇ u_h_L^1(Ω)≤u(Ω). Combining (<ref>) and (<ref>) with the fact that the L^1-convergence of {u_h} to u obviously implies that u_h - u_h_Ω_L^1(Ω)→u - u_Ω_L^1(Ω) as h → +∞, the conclusion follows. § A COUNTEREXAMPLE IN HIGHER DIMENSION In this appendix, expanding on Remark <ref>, we provide an example which shows that the construction of the Sobolev approximation in Section <ref> cannot work in higher dimensions under the mere assumption of smallness of the jump set. Carefully looking at the proof of <cit.>, it is seen that the only point in which the assumption on the dimension is used is in the proof of property ( P_2), and more precisely in the passage from <cit.> to <cit.>, which relies on inequality (<ref>) (i.e., on <cit.>). Again by inspection of the proof of ( P_2), it is easily realised that, in order for the construction to work in higher dimensions, for any η∈ (0,1) and any given Borel set J ⊂ B_2r^n with ℋ^n-1(J) < η (2r)^n-1 we must be able to find a radius R ∈ (r,2r) satisfying both ℋ^n-1( J ∩∂ B_R^n ) = 0 and ℋ^n-1( J ∩( B_R^n ∖ B_R -δ_h^n) ) < η C_n δ_h^n-1, h ∈, where δ_h := R 2^-h and C_n is a dimensional constant. Now, for every n ∈, given any Borel set J in B_2r^n with ℋ^n-1(J) < +∞, the measure ℋ^n-1 J is a Radon measure, hence equation (<ref>) holds for all R ∈ (r,2r) except at most countably (indeed, Radon measures can charge at most countably many boundaries of encapsulated sets; c.f., e.g., the discussion in <cit.>). However, (<ref>) may fail for n ≥ 3. We exhibit an example below for n = 3 (for ease of notation) that can be easily adapted to any dimension. For any ∈ (0,1), any C > 0, and any r > 0, there exists a ℋ^2-rectifiable, star-shaped, relatively closed and essentially closed Borel set J ⊂ B^3_2r with the following properties: * ℋ^n-1(J) < (2r)^n-1. * There exists h_0 ∈ so that ℋ^2( J ∩(B_R ∖ B_R(1 - 2^-h_0)) ) ≥ C δ_h_0^2 for every R ∈ (r,2r). Fix arbitrarily ∈ (0,1) and C > 0. By scaling, it suffices to consider the case 2r = 1. Let us set, for brevity, 𝔹 := B_1^3. Let {x_j} be an enumeration of ℚ^3 ∩∂𝔹. For any j ∈, let 𝒞_j^() be the cone with apex the origin, axis the radius O x_j, and opening angle 2^-j/40π (the reason for this choice will be clear after (<ref>)). For any j ∈, let ∂ℬ^2_j := 𝒞_j^()∩^2_1/2 denote the boundary of the geodesic ball cut on 𝕊^2_1/2 = ∂ B^3_1/2 by the solid cone bounded by 𝒞_j^(). Correspondingly, the same solid cone cuts geodesic balls ℬ_j^2 on ^2, whose radius is twice that of ℬ^2_j, for each j ∈. By Vitali's covering lemma, there exists a subsequence {x_j_l} such that the balls ℬ^2_j_l have disjoint closures and such that the geodesic balls 5 ℬ^2_j_l, concentric with the balls ℬ^2_j_l but with radius 5 times larger, are a covering of ∪_h ∈ℬ_j^2. We notice that, by the choice of the open angle of the cones, ∑_l ∈ℋ^1(∂ℬ^2_j_l) ≤∑_l ∈ 2 π( 2^-h_l/40π) < /10. 
Upon setting J := 𝔹∩⋃_l ∈𝒞_j_l^(), we have that J is a countably ℋ^2-rectifiable, star-shaped (because all the cones 𝒞^()_j intersect at the origin), closed set in 𝔹. Moreover, since for each l ∈ the geodesic ball ℬ^2_j_l has radius twice that of ℬ^2_j_l and since the slant height of each (truncated) cone 𝔹∩𝒞^()_j is one, by (<ref>) there holds ℋ^2(J) = ∑_l ∈ℋ^1(∂ℬ^2_j_l) ≤ 10 ∑_l ∈ℋ^1(∂ℬ^2_j_l) < , which gives both item (i) and the ℋ^2-rectifiability of J. Then, J := 𝔹∩J is a Borel set in 𝔹 (because it is relatively closed in 𝔹); moreover, J is ℋ^2-rectifiable, star-shaped, essentially closed (because J = J∖(J∩∂𝔹), hence ℋ^2(J∖ J) = 0), and it satisfies ℋ^2(J) <. It remains to prove (<ref>). To this purpose, let κ := ∑_l ∈ 2^-j_l and notice that for any choice of R ∈ (1/2,1) and any h ∈ we have ℋ^2( J ∩( B_R∖ B_R - δ_h) ) ≥δ_h ∑_l ∈ 2π(R-δ_h) sin( 2^-j_l/40 π) ≥1/40δ_h (R-δ_h) ∑_l ∈ 2^-j_l = κ/40δ_h ( R - δ_h ) , where δ_h := R 2^-h. From here, (<ref>) follows as soon as h_0 is chosen so that κ/40 (R-δ_h_0) δ_h_0 = κ/40 R^2 (1-2^-h_0) 2^-h_0≥ C R^2 2^-2h_0, i.e., for h_0 ≥log_2(1+40 C/κ). Let ∈ (0,1), r > 0, C > 0 be arbitrary, let J ⊂ B^3_2r be the corresponding Borel set provided by Proposition <ref>, and let Ω be the union of the solid cones determined by the sets 𝒞^()_j_l defined as in the proof of Proposition <ref>, intersected with 𝔹. Define u : B^3_2r→^k by setting u := e_1 Ω, 0 B_1 ∖Ω. Then, u ∈^p(B_2r^3, ^k) for any p ≥ 1, J_u = J, ℋ^2(J ∖ J_u) = 0, ℋ^2(J_u) <, J_u is essentially closed and it does not satisfy (<ref>). § A CRITERION FOR BEING OUT OF THE JUMP SET In this appendix, we prove a sufficient criterion that allows for excluding that a point belongs to the jump set. An analogous, but slightly stronger, statement concerning the whole approximate discontinuity set S_u is classical (see <cit.> and <cit.>) but the classical proof uses medians and truncations, which we want to avoid. Here, we use only the tools provided by the Sobolev approximation results from Section <ref>. Let Ω⊂^2 be a bounded open set and p : Ω→ (1,+∞) be a log-Hölder continuous, variable exponent satisfying p^- > 1 and p^+ < 2. Let u ∈(L^∞∩^p(·))(Ω, ^k) and x_0 ∈Ω. If lim_ρ↓ 01/ρ[ ∫_B_ρ(x_0)∇ u^p(x) x + ℋ^1(J_u ∩ B_ρ(x)) ] = 0, then x_0 ∉J_u. We argue by contradiction, exploiting the tools provided by the Sobolev approximation. Suppose, for the sake of a contradiction, that x ∈ J_u. Then, by definition (c.f., <cit.>), there exist a, b ∈^k and ν_0 ∈^1 such that a ≠ b and lim_ρ↓ 0_B_ρ^+(x_0,ν_0)u(x)-a x = 0, lim_ρ↓ 0_B_ρ^+(x_0,ν_0)u(x)-b x = 0. Moreover, the triple (a,b,ν_0) is uniquely determined, up to a permutation and a change of sign. Observe that, thanks to the trivial pointwise inequality ∀ξ∈^k, ξ^p^-≤ 1 + ξ^p(x), holding at each point x ∈Ω, condition (<ref>) implies lim_ρ↓ 01/ρ[ ∫_B_ρ(x_0)∇ u^p^- x + ℋ^1(J_u ∩ B_ρ(x) ) ] = 0, We are going to show that, given (<ref>), we can find a sequence ρ_j ↓ 0 as j → +∞ and m ∈^k such that lim_j → +∞_B_ρ_j(x_0)u - m x = 0. To this purpose, define the blow-up u_ρ of u at x_0 by letting u_ρ(y) := ρ^(1-p^-)/p^- u(x_0 + ρ y) for all y ∈ B_1 and each ρ > 0. Then, condition (<ref>) can be recast as lim_ρ↓ 0[ ∫_B_1∇ u_ρ^p^- y + ℋ^1(J_u_ρ∩ B_1 ) ] = 0. Now, take any sequence {ρ_h} such that ρ_h ↓ 0 as h → +∞. Then, for any h ∈, ∫_B_1∇ u_ρ_h^p^- y + ℋ^1(J_u_ρ_h∩ B_1 ) = σ_h, where σ_h → 0 as h → + ∞. For each h ∈ so large that 2σ_h < η, where η is the universal constant in Proposition <ref>, let s_h := 1 - 2σ_h/η. 
Then, let w^(s_h)_ρ_h∈ W^1,p^-(B_s_h, ^k) ∩^p^-(B_1, ^k) be the approximation of u_ρ_h provided by Theorem <ref>. Recall that, since u (and hence each map u_ρ) is bounded, each map w^(s_h)_ρ_h is bounded as well and, indeed, w^(s_h)_ρ_h_L^∞(B_1)≤u_L^∞(B_1) for any h. Set m^(s_h)_ρ_h := _B_s_h w^(s_h)_ρ_h y. Since w^(s_h)_ρ_h_L^∞(B_1) is uniformly bounded, the sequence { m^(s_h)_ρ_h}_h is bounded in ^k. Hence, we can find a sequence {h_j}_j and m ∈^k so that m_j := m^(s_h_j)_ρ_h_j→ m j → +∞. Let us also set, for brevity's sake, ρ_j := ρ_h_j, s_j := s_h_j, u_j := u_ρ_h_j, w_j := w^(s_h_j)_ρ_h_j. for each j ∈. We claim that (<ref>) holds for {ρ_j} and m as in Step <ref>. Indeed, (<ref>) is equivalent to lim_j → +∞∫_B_1u_j - m y = 0, and, for any j ∈, we have ∫_B_1 u_j - m y ≤∫_B_1u_j - w_j y + ∫_B_1w_j - m y ≤∫_B_1u_j - w_j y_:= ( I)_j + ∫_B_1w_j - m_j y_:= ( II)_j + B_1m_j - m By construction, the last term at right hand side above tends to 0 as j → +∞. As for ( I)_j, by items <ref> and <ref> of Proposition <ref> we have ∫_B_1u_j - w_j y ≤ 2 u_L^∞(Ω){ u_j ≠ w_j }≲( ℋ^1(J_u_j∩ B_1) ) = o(1) as j → +∞. We now estimate ( II_j), using that w_j is a Sobolev function in B_s_j and item <ref> of Proposition <ref>: ∫_B_1w_j - m_j y = ∫_B_s_jw_j - m_j y + ∫_B_1 ∖ B_s_jw_j - m_j y ≤∫_B_s_j∇ w_j y + 2 u_L^∞(Ω)π(1 - s_j^2 ) ≲B_s_j^1-1/p^-( ∫_B_s_j∇ u_j^p^- y )^1/p^- + o(1) = o(1) as j → +∞, because s_j → 1 as j → +∞ and thanks to item <ref> of Proposition <ref> and to (<ref>). By the above estimates,  (<ref>) follows immediately. [Conclusion] By assumption, we should have lim_j → +∞_B_ρ_j^+(x_0,ν_0)u(x)-a x = 0, lim_j → +∞_B_ρ_j^+(x_0,ν_0)u(x)-b x = 0, with a ≠ b and some ν_0 ∈^1, while (<ref>) implies lim_j → +∞_B_ρ_j^+(x_0,ν)u(x)-m x = 0, lim_j → +∞_B_ρ_j^+(x_0,ν)u(x)-m x = 0, for any ν∈^1, and this a contradiction because the triple (a,b,ν_0) is uniquely determined, up to a permutation and a change of sign. In particular, the assumptions of Proposition <ref> are satisfied if u takes values into a compact Riemannian manifold ℳ and satisfies (<ref>). A slightly stronger version of Proposition <ref> is obtained by replacing the assumption u ∈ L^∞(Ω, ^k) with the weaker assumption lim_ρ↓ 0_B_ρ(x_0)u^q x < +∞ for some q > 1, which is exactly the hypothesis required in the original formulation of the criterion from <cit.> (in place of the boundedness of u). Up to the fact that we also need to assume (<ref>) (in order to apply Proposition <ref>), the details are exactly as in <cit.> or, equivalently, <cit.> and, therefore, omitted. § COMPACTNESS OF SEQUENCES OF ^P(·)-FUNCTIONS WITH VANISHING JUMP SET Here, arguing along the lines of <cit.>, we use Theorem <ref> and compactness results in variable exponent Sobolev spaces (see, e.g., <cit.>) to prove a compactness and lower semicontinuity result for sequences with vanishing jump set. Such a result is the counterpart of <cit.> and it can be applied to regularise sequences of functions in ^p(x)(B_ρ,ℳ) with vanishing jump set. Let p : ^2 → (1,+∞) be a variable exponent satisfying (p_1) and (p_2). Let k ∈ and let (ℳ,g) be either ^k (endowed with the canonical metric) or a compact, connected, smooth Riemannian manifold without boundary, isometrically embedded in ^k. Assume {u_h}⊂^p(x)(B_ρ,ℳ) is a sequence satisfying sup_h∈∫_B_ρ∇ u_h^p(x) x ≤Λ_* < +∞, ℋ^1(J_u_h) → 0 h → +∞. Then, we can find a (not relabelled subsequence) and a function u ∈ W^1,p(·)(B_ρ,ℳ) such that u_h → u in L^p(·)(B_ρ) as h → +∞. Moreover, ∫_B_ρ∇ u^p(x) x ≤lim inf_h → +∞∫_B_ρ∇ u_h^p(x) x. 
The proof is along the lines of <cit.>. We reproduce the main steps, to point out the relevant changes. For any s ∈ [1/2,1) and any h ∈ so large that ℋ^1( u_h ) < η(1-s), where η is the universal constant provided in Theorem <ref>, we let w^(s)_h ∈^p(·)(B_ρ,ℳ) ∩ W^1,p(·)(B_s ρ,ℳ) and ℱ^(s)_h be the function and the family of balls obtained by Theorem <ref>. By item <ref> in Theorem <ref>, there holds ∫_B_sρ∇ w^(s)_h^p(x) x ≲max{∇ u_h_L^p(·)(B_ρ)^p^-, ∇ u_h_L^p(·)(B_ρ)^p^+} where the implicit constant at right hand side depends only on the quantities listed in Remark <ref>. Moreover, by Poincaré's inequality (<cit.>) and (<ref>), there holds w^(s)_h - m^(s)_h_L^p(·)(B_sρ)≲(1+ρ^2) max{∇ u_h_L^p(·)(B_ρ), ∇ u_h_L^p(·)(B_ρ)^p^-/p^+, ∇ u_h_L^p(·)(B_ρ)^p^+/p^-} where m^(s)_h := ⟨ w^(s)_h ⟩_B_s ρ is the average of w^(s)_h on B_s ρ and the implicit constant at right hand side depends only on the quantities listed in Remark <ref>. For any s ∈ [1/2,1), by the compactness of ℳ, the sequence { m^(s)_h} is obviously bounded and hence can extract from it a subsequence which converges to some m^(s) in ^k. Also, again by the compactness of ℳ, we get m^(s)_h - m^(1/2)_h_L^p(·)(B_ρ)≲ 1, up to a constant which depends only on ℳ. Therefore, by triangle inequality and (<ref>), the sequence {w^(s)_h - m^(1/2)_h} is bounded in W^1,p(·)(B_s ρ, ^k) for any s ∈ [1/2,1) and we can find a subsequence (depending on s and not relabelled) such that it converges to some w^(s) weakly in W^1,p(·)(B_sρ, ^k), strongly in L^p(·)(B_sρ,^k) (by <cit.>), and pointwise a.e. on B_sρ (because, in particular, we have weak convergence in W^1,p^-(B_s ρ, ^k) and hence strong convergence in L^p^-(B_s ρ, ^k) by the classical Rellich-Kondrachov theorem). By item <ref> in Theorem <ref>, for any s ∈ [1/2, 1) and, correspondingly, any h sufficiently large, there holds ℒ^2( ∪_ℱ^(s)_h B ) ≤2πξ/ηρℋ^1(J_u_h), where ξ is the same universal constant as in Theorem <ref>. By (<ref>) and (<ref>), it follows that w^(s) = w^(t) ℒ^2-a.e. on B_sρ if 1/2 ≤ s ≤ t < 1. Thus, we can define a limit function u on B_ρ by letting u = w^(s) in B_sρ for all s ∈ [1/2,1). By definition, it is clear the u∈ W^1,p(·)_ loc(B_ρ,^k). Moreover, for any s ∈ [1/2,1), ∫_B_sρ∇u^p(x) x = ∫_B_sρ∇ w^(s)^p(x) x ≤lim inf_h→+∞∫_B_sρ∇(w_h^(s)χ_B_ρ∖∪_ℱ_h^(s) B )^p(x) x ≤lim inf_h→+∞∫_B_ρ∖∪_ℱ_h^(s) B ∇ w_h^(s)^p(x) x ≤lim inf_h→+∞∫_B_ρ∇ u_h^p(x) x. Indeed, the first inequality follows from the weak convergence w^(s)_h - m^(1/2)_h to w^(s) in B_sρ, the lower semicontinuity of the modular with respect to the weak convergence (Proposition <ref>), and (<ref>). The second inequality follows because w^(s)_h = u_h a.e. outside ∪_ℱ^(s)_h B. Thus, we can let s ↑ 1 in (<ref>) and obtain ∫_B_ρ∇u^p(x) x ≤lim inf_h→+∞∫_B_ρ∇ u_h^p(x) x. In particular, u∈ W^1,p(·)(B_ρ, ^k). By (<ref>) and (<ref>), we can find a (not relabelled) subsequence and a vector m∈^k such that m_h^(1/2)→m. Furthermore, by (ii) in Theorem <ref> and since ℒ^2( ∪_ℱ^(s)_h B) is infinitesimal, {u_h - m_h^(1/2)} converges in measure to u, hence also pointwise a.e., up to extraction of a (not relabelled) subsequence. But then u := u + m belongs to W^1,p(·)(B_ρ,^k), and we have u_h→ u almost everywhere in B_ρ as h → +∞. In turn, this implies u(x) ∈ℳ for a.e. x ∈ B_ρ. Moreover, by dominated convergence, u_h→ u in L^p(·)(B_ρ) as h → +∞. Finally, (<ref>) follows from (<ref>) because u and u differ only by a constant. 
Acknowledgements The authors are supported by the project Star Plus 2020 - Linea 1 (21-UNINA-EPIG-172) “New perspectives in the Variational modelling of Continuum Mechanics”. The authors thank the Hausdorff Research Institute for Mathematics (HIM) for the warm hospitality during the Trimester Program “Mathematics of complex materials”, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2047/1 – 390685813, when part of this work was carried out. The authors are members of, and have been partially supported by, GNAMPA-INdAM.
http://arxiv.org/abs/2307.02883v1
20230706093746
Quantized vortex nucleation in collisions of superfluid nanoscopic helium droplets at zero temperature
[ "Ernesto García-Alfonso", "Francesco Ancilotto", "Manuel Barranco", "Martí Pi", "Nadine Halberstadt" ]
physics.atm-clus
[ "physics.atm-clus" ]
Laboratoire Collisions, Agrégats, Réactivité (LCAR), Université de Toulouse, CNRS, 31062, Toulouse, France Dipartimento di Fisica e Astronomia “Galileo Galilei” and CNISM, Università di Padova, via Marzolo 8, 35122 Padova, Italy CNR-IOM Democritos, via Bonomea, 265 - 34136 Trieste, Italy Departament FQA, Facultat de Física, Universitat de Barcelona, Av. Diagonal 645, 08028 Barcelona, Spain. Institute of Nanoscience and Nanotechnology (IN2UB), Universitat de Barcelona, Barcelona, Spain. Departament FQA, Facultat de Física, Universitat de Barcelona, Av. Diagonal 645, 08028 Barcelona, Spain. Institute of Nanoscience and Nanotechnology (IN2UB), Universitat de Barcelona, Barcelona, Spain. Laboratoire Collisions, Agrégats, Réactivité (LCAR), Université de Toulouse, CNRS, 31062, Toulouse, France We address the collision of two superfluid ^4He droplets at non-zero initial relative velocities and impact parameters within the framework of liquid ^4He time-dependent density functional theory at zero temperature. In spite of the small size of these droplets (1000 He atoms in the merged droplet) imposed by computational limitations, we have found that quantized vortices may be readily nucleated for reasonable collision parameters. At variance with head-on collisions, where only vortex rings are produced, collisions with non-zero impact parameter produce linear vortices which are nucleated at indentations appearing on the surface of the deformed merged droplet. Whereas for equal-size droplets vortices are produced in pairs, an odd number of vortices can appear when the colliding droplet sizes are different. In all cases vortices coexist with surface capillary waves. The possibility for collisions to be at the origin of vortex nucleation in experiments involving very large droplets is discussed. An additional surprising result is the observation of the drops coalescence even for grazing and distal collisions at relative velocities as high as 80 m/s and 40 m/s, respectively, induced by the long-range Van der Waals attraction between the droplets. Quantized vortex nucleation in collisions of superfluid nanoscopic helium droplets at zero temperature Nadine Halberstadt August 1, 2023 ======================================================================================================== § INTRODUCTION Superfluid helium droplets are routinely produced in beams obtained by expanding the high purity gas or liquid through a nozzle into vacuum. The temperature T_0 and pressure P_0 values at the source chamber and the characteristics of the nozzle determine the appearance of the jet and the size and velocity of the droplets.<cit.> Once formed, drops cool down by evaporative cooling, eventually becoming superfluid. The activity in the field has been comprehensively presented in a recent monograph.<cit.> The study of vortices in helium droplets has been a subject of continuous interest since they were first detected in droplets made of ∼ 10^8-11 atoms,<cit.> hereafter referred to as “very large droplets” (VLD). VLD are believed to acquire angular momentum as they pass through the nozzle. As a result of the superfluid transition, most angular momentum deposited in the droplet is stored in nucleated quantized vortices, while some remains as surface capillary waves in the deformed droplets and some is taken away by evaporated He atoms. 
The morphology of these VLD has been addressed in detail,<cit.> and the coexistence of quantized vortices and capillary waves has been established.<cit.> Capture of impurities by droplets may also lead to vortex nucleation. Indeed, it has been shown that impurity capture by droplets made of N=1000 atoms produces vortex rings and vortex loops.<cit.> However, detecting vortices in these small droplets is a challenge. Methods based on studying the absorption spectrum of atomic impurities attached to the vortex cores have been proposed<cit.> but so far vortices in small droplets have eluded detection. In this work we concentrate on the study of an alternative vortex formation mechanism, namely droplet-droplet collisions at non-zero impact parameter. Experiments on He drops collisions, although feasible in principle, have not been carried out; they would be technically challenging and require rather expensive cryogenic cooling. At variance, molecular-beam scattering experiments where a beam of He droplets interacted with a secondary beam of Ar or Kr atoms have been performed to determine the appearance of ^4He and ^3He droplets made of O(10^3-10^4) atoms.<cit.> Let us mention that coalescence experiments of helium droplets magnetically levitated have been carried out. Using a static magnetic field, drops of less than 1 cm radius at a temperature of 0.7 K were confined and made to collide at velocities as small as a few cm/s.<cit.> Recently, experimental activity has been conducted on the fragmentation of thin liquid helium jets into vacuum.<cit.> It has been found that under suitable conditions, equidistant droplets with almost uniform size are produced from the breakup of the jet, and that sometimes these drops coalesce downstream.<cit.> These droplet collisions can occur because of the spread of droplet velocities inside the jet,<cit.> which although small can be the source of non-zero relative velocity and impact parameter. Our goal is to describe binary collisions of zero temperature superfluid ^4He droplets within the ^4He density functional (He-DFT) approach.<cit.> This approach is similar, in the superfluid ^4He phase, to the Gross-Pitaevskii (GP) approach which has successfully been applied to the description of cold gases in the superfluid Bose-Einstein condensate phase, in particular in the study of quantized vortices.<cit.> In a recent work, some of us have addressed the coalescence of superfluid ^4He droplets,<cit.> initially at rest, which were drawn together by their mutual Van der Waals (vdW) long-range attraction. The merging of vortex-free helium droplets has unveiled the appearance of vortex-antivortex ring pairs nucleated at the droplet surface, that either wrap around the coalesced droplet or penetrated into it, eventually annihilating each other yielding an intense roton burst. This work has been later extended to the case of vortex-hosting droplets.<cit.> To our knowledge, no other description of superfluid (i.e. inviscid and irrotational) ^4He droplet collisions is available in the literature. We want to mention the existence of theoretical and experimental studies on head-on collisions of “quantum droplets” made of a very low temperature gas of ^39K atoms in two different hyperfine states constituting a superfluid Bose mixture,<cit.> which bears some similarities with the problem of ^4He droplets collisions. Binary collisions of droplets made of viscid fluids occur in, e.g., raindrop formation or spray processes. 
Besides initial velocity and impact parameter, the collision outcome depends on the rheological properties of the droplets: droplet bouncing, droplet coalescence and drop stretching separation have been found with increasing Weber number. It is worth mentioning that ^3He droplets collisions were described long ago in the Vlasov dynamics.<cit.> These drops were found to bear collision properties that, on the one hand, are common to classical mesoscopic systems, like e.g., mercury drops<cit.> and, on the other hand, are common to heavy-ion reactions, like fusion-fission and deep-inelastic processes.<cit.> Classically, binary collisions are addressed by solving the Navier-Stokes (NS) and continuity equations subject to appropriate boundary conditions, see e.g. Ref. Nik09 and references therein. It is naturally assumed that the solution of the NS equation for small enough viscosities should be nearly indistinguishable from the inviscid limit.<cit.> However, as emphasized in Ref. Anc23, neither time-dependent GP nor He-DFT equations appropriate for superfluids reduce to the zero-viscosity limit of the NS equation (Euler equation) for a barotropic fluid in irrotational flow.<cit.> Indeed, in the superfluid case an extra term appears involving the gradient of the so-called quantum pressure Q Q=ħ^2/2m_4 ∇^2 ρ^1/2/ρ^1/2 , where m_4 is the mass of the ^4He atom and ρ is the atom density. This term plays a crucial role when the density is highly inhomogeneous, as is the case near the core of a quantized vortex for instance. Quantum pressure is a key ingredient naturally included in our time-dependent He-DFT approach. Helium density functional and time-dependent density functional (He-TDDFT) methods have proven to be very powerful tools to study the properties and dynamics of superfluid ^4He droplets. Within the He-DFT approach, the finite range of the helium-helium van der Waals (vdW) interaction is explicitly incorporated in the simulations. As a consequence, the liquid-vacuum interface has a non-zero surface width, which is important in the description of nanoscopic ^4He droplets like those studied in the present work. The finite compressibility of the fluid is taken into account, and therefore possible density excitations (ripplons, phonons and rotons) are naturally reproduced. The possibility for atom evaporation from the ^4He sample during the real-time dynamics is also included.<cit.> This work is organized as follows. In Sect. II we briefly present the He-DFT approach. In Sect III we discuss the results obtained for the collision dynamics. Due to the computational burden associated with fully three-dimensional He-DFT simulations, we only address a few illustrative cases corresponding to selected values of the initial droplets velocity and impact parameter. A summary with some concluding remarks is presented in Sect. IV. In complement to the main text, the supplementary material provides movies of the real-time dynamics of the ^4He droplet collisions addressed in this work. This multimedia material constitutes an important part of this work, since it helps capture physical details which would otherwise escape the written account. § METHOD To describe the droplet-droplet collisions we have applied the ^4He density functional (DFT) and time-dependent density functional (TDDFT) methods thoroughly described in Refs. Anc17,dft-guide. 
Let us briefly recall that within DFT, the energy of the droplet is written as a functional of the atom density ρ(𝐫) as E[ρ] = T[ρ] + E_c[ρ] = ħ^2/2m_4∫ d 𝐫 |∇Ψ(𝐫)|^2 + ∫ d𝐫 E_c[ρ] where the first term is the kinetic energy with ρ(𝐫)= |Ψ(𝐫)|^2 and the functional E_c contains the interaction term (in the Hartree approximation) and additional terms which describe non-local correlation effects.<cit.> The droplet equilibrium configuration is obtained by solving the Euler-Lagrange equation resulting from the functional variation of Eq. (<ref>), {-ħ^2/2m_4∇^2 + δ E_c/δρ}Ψ(𝐫) ≡ H[ρ] Ψ(𝐫) = μΨ(𝐫) , where μ is the ^4He chemical potential corresponding to the number of He atoms in the droplet, N = ∫ d r|Ψ( r)|^2. To prepare the collision, we have first calculated the equilibrium structure of a ^4He_500 droplet. We have found it convenient to obtain the structure of each single droplet inside the larger calculation box where the dynamics will be carried out, placing their centers of mass so that their dividing surfaces (loci where the helium density equals half the liquid density value, R=r_0 N^1/3, with r_0= 2.22 Å) are 8 Å apart and the impact parameter equals the chosen value. This yields two equal density profiles centered at different points of the calculation box, ρ_1(𝐫) and ρ_2(𝐫). Next, we build an effective wave function giving the droplets opposite velocities in the z direction as follows: Ψ(𝐫, t=0) = e^-i k z√(ρ_1(𝐫))+e^i k z√(ρ_2(𝐫)) , where the wave number k is related to the droplet velocity v as v = ħ k/m_4. The TDDFT equation iħ∂/∂ tΨ(𝐫, t) = H[ρ] Ψ(𝐫, t) is solved taking Eq. (<ref>) as the starting effective wave function. We have set (y,z) as the reaction plane and the z axis as direction of incidence. The angular momentum, written in units of ħ thorough this paper, is calculated as L(t) = - i ∫ d 𝐫 Ψ^*(𝐫, t) ( y ∂/∂ z- z ∂/∂ y)Ψ(𝐫, t) , where -i (y∂/∂ z- z ∂/∂ y = L̂_x is the angular momentum operator in the x-direction. In practice, Eqs. (<ref>) and (<ref>) have been solved using the ^4He-DFT-BCN-TLS computing package,<cit.> see Refs. Anc17 and dft-guide and references therein for details. We work in cartesian coordinates, with ρ_i(𝐫) and Ψ(𝐫) defined at the nodes of a 3D grid inside a calculation box large enough to accommodate the droplets in such a way that the He density is sensibly zero at the box surface. Periodic boundary conditions are imposed so that the convolutions involved in the DFT mean field H[ρ] can be carried out using Fast Fourier Transform.<cit.> The differential operators in H[ρ] are approximated by 13-point formulas. All the simulation grids had 288 points equally spaced by 0.4 Å in each direction, except for the head-on collision for which the grid had to be extended along the incidence axis (z) to 576 points. The TDDFT equation is solved using Hamming's predictor-modifier-corrector method<cit.> initiated by a fourth-order Runge-Kutta-Gill method,<cit.> with a time step of 0.1 fs. During the time evolution some helium may evaporate from the droplets, eventually reaching the cell boundary. To prevent this material from reentering the cell due to the imposed periodic boundary conditions, we include an absorption buffer of 2 Å inside the calculation box<cit.> in each direction. This particle –and thus angular momentum and energy– leaking is obviously physical. Space and time steps have been chosen to keep energy and angular momentum well conserved in the absence of atom evaporation. 
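As a rough illustration of how the boosted initial effective wave function and the angular momentum integral defined above can be evaluated on a uniform grid, the following Python sketch builds two counter-propagating toy droplets and computes L_x by finite differences. It is emphatically not the ^4He-DFT-BCN-TLS implementation: the sharp-surface Fermi profiles, the coarser 112^3 grid, the 1 Å surface-width parameter and all function names are placeholder assumptions introduced only for illustration.

```python
import numpy as np

# Physical constants (SI units).
HBAR = 1.054571817e-34            # J s
M4 = 4.002602 * 1.66053907e-27    # kg (mass of the 4He atom)

def boosted_initial_psi(rho1, rho2, z, v):
    """Initial effective wave function of the form used above: the droplet
    densities rho1, rho2 (same grid, m^-3) are boosted along z with opposite
    velocities -v and +v (m/s) through the phases exp(-+ i k z), k = m4 v/hbar."""
    k = M4 * v / HBAR
    return np.exp(-1j * k * z) * np.sqrt(rho1) + np.exp(1j * k * z) * np.sqrt(rho2)

def angular_momentum_x(psi, y, z, dx):
    """Discretised L_x = -i * integral of psi^* (y d/dz - z d/dy) psi, in hbar units."""
    dpsi_dz = np.gradient(psi, dx, axis=2)
    dpsi_dy = np.gradient(psi, dx, axis=1)
    val = np.sum(np.conj(psi) * (-1j) * (y * dpsi_dz - z * dpsi_dy)) * dx**3
    return np.real(val)

# --- toy setup: two 500-atom droplets, b = 3R/2, v = 10 m/s -------------------
# Coarser grid than the 288^3 / 0.4 A one quoted in the text, for speed only.
n, dx = 112, 0.8e-10
ax1d = (np.arange(n) - n / 2) * dx
x, y, z = np.meshgrid(ax1d, ax1d, ax1d, indexing="ij")

R = 2.22e-10 * 500 ** (1 / 3)     # sharp radius of He_500, ~17.6 A
b = 1.5 * R                       # impact parameter (offset along y)
D = 2 * R + 8.0e-10               # centre-to-centre distance (surfaces 8 A apart)
z0 = np.sqrt(D**2 - b**2)         # offset along the incidence axis z

def fermi_droplet(y0, zc, natoms):
    """Placeholder sharp-surface density profile, normalised to natoms atoms."""
    r = np.sqrt(x**2 + (y - y0) ** 2 + (z - zc) ** 2)
    prof = 1.0 / (1.0 + np.exp((r - R) / 1.0e-10))
    return natoms * prof / (np.sum(prof) * dx**3)

rho1 = fermi_droplet(-b / 2, +z0 / 2, 500)   # droplet moving towards -z
rho2 = fermi_droplet(+b / 2, -z0 / 2, 500)   # droplet moving towards +z

v = 10.0
psi0 = boosted_initial_psi(rho1, rho2, z, v)
print("numerical L_x(t=0) [hbar]:", angular_momentum_x(psi0, y, z, dx))
print("analytic estimate N m4 b v/hbar:", 500 * M4 * b * v / HBAR)   # ~8.3e2
```

The numerical value should come out close to the analytic estimate N m_4 b v/ħ for this geometry; small deviations reflect the coarse toy grid and the placeholder density profiles rather than the method itself.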
§ RESULTS Our main goal is to investigate whether vortices could be nucleated during a droplet-droplet collision for reasonable values of their initial velocity and impact parameter. Let us first obtain a crude estimate of the critical impact parameter b_cr leading to vortex nucleation. As stated in the Introduction, we assume that the droplet collision may result from the finite velocity dispersion in the droplet beam. If Δ v is the velocity spreading in the jet system of reference, which moves with a velocity v_j with respect to the laboratory system, the maximum relative velocity is 2 Δ v. The angular momentum L created in the merged droplet is given by L ħ = b N m_4 Δ v, where N is the number of helium atoms in each colliding droplet. For a vortex line along the diameter of the coalesced spherical droplet one has L=2N, which yields the critical impact parameter b_cr = 2 ħ/m_41/Δ v Let us take as an example the velocity range 29 ≤ v_j ≤ 310 m/s explored in the experiments of Kolatzki et al.,<cit.> in which droplet beams are obtained by fragmentation of a thin liquid helium jets into vacuum.<cit.> Under these experimental conditions, Δ v/v ∼ 0.01,<cit.> i.e., 0.3 ≤Δ v ≤ 3 m/s, hence 106.7 ≤ b_cr≤ 1067 Å. For a grazing collision, b_cr = 2R hence 53.4 ≤ R ≤ 534 Å, thus giving 1.4 × 10^4 ≤ N ≤ 1.4 × 10^7. As indicated above, this is only a crude estimate. Even if enough angular momentum is available from the start, a vortex will not necessarily be nucleated since part of the angular momentum will be stored in capillary waves.<cit.> This is all the more true since the merged droplet will be deformed for quite some time. Also, it is not obvious a priori if a grazing collision can lead to droplet coalescence. On the other hand, less angular momentum is required to nucleate a non-centered vortex line.<cit.> The previous estimate makes it clear that a realistic simulation of the collision process between droplets arising from jet breaking in usual experimental conditions is beyond the TDDFT capabilities due to the large size of the involved droplets. On the other hand, the whole collision process can be simulated in detail for smaller droplets. Experimentally, they are obtained in a different expansion regime, called regime 1 or supercritical in the recent review by Toennies,<cit.> in which droplets are formed by gas condensation. Depending on experimental conditions, their size can vary from several atoms up to about 10000 atoms. For instance in a 5 μm diameter nozzle at P_0=80 bar and T_0=24 K, the maximum of the log-normal size distribution has been measured to be 1930.<cit.> In these conditions the beam velocity is 480 m/s<cit.> and the velocity spread is Δ v/v≈ 2%.<cit.> These conditions would give Δ v≈ 10 m/s and b_cr=31.8 Å. For a grazing collision this would correspond to a droplet radius of about 16 Å, slightly smaller than the radius of a 500-atom droplets (17.6 Å) investigated in our work. Higher nozzle temperatures lead to log-normal size distributions peaking at lower sizes. We address here the collision, at non-zero relative velocity and impact parameter, of two ^4He_500 droplets of radius R=r_0 N^1/3 with r_0= 2.22 Å, i.e., R=17.6 Å. From Eq. (<ref>), for a not so grazing collision with b =3R/2, Δ v=12 m/s whereas for a more central collision with b=R, Δ v= 18 m/s. Since part of the angular momentum will go into capillary waves or will be taken away by atom evaporation, in our study we have also considered two larger values for Δ v, namely 20 and 40 m/s. 
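For concreteness, the back-of-the-envelope estimates above can be reproduced with a few lines of Python. The sketch below evaluates the critical impact parameter b_cr = 2ħ/(m_4 Δv) and the deposited angular momentum L = b N m_4 v/ħ for the parameter values considered in this work; it is a plain arithmetic check, not part of the simulation code, and it reproduces the numbers quoted in the text to within about 1% (the small differences come only from rounding of the physical constants).

```python
HBAR = 1.054571817e-34            # J s
M4 = 4.002602 * 1.66053907e-27    # kg (mass of the 4He atom)
R0 = 2.22e-10                     # r_0 in the sharp-radius formula R = r_0 N^(1/3)

def b_critical(dv):
    """Critical impact parameter b_cr = 2*hbar/(m4*dv) for nucleating a
    centred vortex line in the merged droplet; dv in m/s, result in metres."""
    return 2.0 * HBAR / (M4 * dv)

def collision_L(b, n_per_droplet, v):
    """Angular momentum L = b*N*m4*v/hbar (in hbar units) deposited in the
    merged droplet; v is the speed of each droplet in the CM frame, so the
    relative velocity is 2*v, as in the estimate above."""
    return b * n_per_droplet * M4 * v / HBAR

# Jet-breakup regime: dv between ~0.3 and ~3 m/s (text quotes ~1067 and ~107 A).
for dv in (0.3, 3.0):
    print(f"dv = {dv:3.1f} m/s  ->  b_cr = {b_critical(dv) * 1e10:7.1f} A")

# Supercritical-expansion regime: dv ~ 10 m/s, grazing collision b_cr = 2R.
bcr = b_critical(10.0)
R = bcr / 2.0
print(f"dv = 10 m/s -> b_cr = {bcr*1e10:.1f} A, grazing radius R = {R*1e10:.1f} A,"
      f" N = {(R / R0) ** 3:.0f} atoms")

# Angular momenta for the He_500 + He_500 cases studied below
# (text quotes 825, 1650, 3300, 2200, 4400 and 2750, respectively).
R500 = R0 * 500 ** (1.0 / 3.0)
for b_over_R, v in [(1.5, 10.0), (1.5, 20.0), (1.5, 40.0),
                    (2.0, 20.0), (2.0, 40.0), (2.5, 20.0)]:
    L = collision_L(b_over_R * R500, 500, v)
    print(f"b = {b_over_R:.1f} R, v = {v:4.1f} m/s  ->  L ~ {L:6.0f} hbar")
```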
Specifically, we have chosen as cases of study the following combinations of droplet velocity v and impact parameter b: * b=0, v= 40 m/s (head-on collision). * b=3R/2, v=10, 20, and 40 m/s. * b=2R, v=20 and 40 m/s (grazing collision). * b=5R/2, v=20 m/s (distal collision, b> 2R). These selected values allow for comparing the results at a given impact parameter as a function of the initial velocity, and the other way around. Since droplets are equal, the relative velocity in the collisions is v_rel=2v. We have also studied a non-symmetric case of two droplets, one of 300 atoms and the other of 700 atoms. §.§ Head-on collision at v= 40 m/s This is a zero-angular momentum collision. Figure <ref> shows snapshots of 2D density cuts in the (y,z) collision plane during the real-time dynamics. Superimposed to the density we have plotted the superflow current. This format is common to all 2D density figures in this paper. Upon droplet contact, due to the fairly large relative velocity, a density bulge develops at the collision region (frame at 7 ps) which expands laterally because of the large incompressibility of helium. This bulge is absent in the simulation of two droplets drawn against each other<cit.> only by the vdW attraction because of the smaller velocity involved in that process. One may see the nucleation of vortex rings at surface indentations (frame at 40 ps). Due to the symmetry of the process, vortices appear in pairs of rings-antirings. As in Ref. Esc19, vortex rings/antirings are also nucleated at the density protrusions symmetrically placed along the collision direction (frame at 59 ps). These ring pairs eventually collide and annihilate, producing a roton burst (frame at t= 83 ps). As discussed in the following, the density waves produced by the rings annihilation induce He atom evaporation as they reach the droplet surface. After the fusion, the merged droplet in the figure undergoes wide amplitude oscillations. Interestingly, two satellite droplets appear (frame at 226 ps) which eventually detach from the fused droplet (frame at 250 ps). Any residual friction/viscosity remaining in the system might hinder this process. Density oscillations are also expected to be damped for the same reason. Notice that atom evaporation, which also contributes to the damping, is naturally occurring in our simulations. Figure <ref> shows the time evolution of the energy and atom number in the system. The roton burst observed in the t= 83 ps snapshot induces strong helium atom evaporation: ∼ 8 atoms in ∼ 20 ps, dissipating ∼110 K (about 14 K/atom). This is followed by slower atom evaporation and energy dissipation. During the time elapsed by the simulation (265 ps), 13 He atoms are emitted taking away an average energy of about 11 K/atom. §.§ Collisions with impact parameter b=3R/2 §.§.§ v= 10 m/s collision The angular momentum involved in this collision is L=825. Figure <ref> shows snapshots of the 2D density during the collision process which display several interesting features. A low-density bridge appears between the droplets before touching due to the long range attractive vdW mutual interaction, here exemplified by the frame at 10 ps. Interestingly, in spite of the small velocity, quantized vortices are nucleated at surface indentations appearing at the droplets contacting region (frame at 70 ps). 
It is worth noting that a linear vortex can be nucleated even though L=825 is smaller than the the number of atoms in the merged droplet N_He=1000 : The formula Lħ=N_Heħ is strictly valid only for a linear vortex along the symmetry axis of an axisymmetric droplet, L being smaller for vortex lines displaced off the symmetry axis.<cit.> Here these vortices appear in pairs because of the symmetry of the system; they are vortex lines (not rings) of equal circulation, constituting a vortex dimer.<cit.> Surface protrusions appear as in the head-on collision case (frame at 82 ps), and their collapse nucleates a pair of vortex-antivortex rings (frame at 100 ps). The interaction of the vortex dimer with the pair of vortex rings inside the small volume of the fused droplet causes the annihilation of the ring pair and the appearance of a roton burst, leaving the droplet into a turbulent state (frame at 128 ps). Eventually, the fused droplet pacifies yielding a droplet in apparent rotation alongside with the vortex dimer inside it, as shown in the t=457 ps frame. We thus see that vortices are readily nucleated in the course of the collision even for moderate values of the relative velocity and impact parameter. At long times, the merged droplet “rotates” adopting an ellipsoidal-like shape, inside which the vortex dimer moves. We shall estimate later (section <ref>) how angular momentum is shared between capillary waves, responsible for the apparent rotation of the droplet, and the vortex dimer. Figure <ref> shows the time evolution of the energy and atom number in the system. Atom evaporation starts around 130 ps, when turbulence sets in, then it gradually slows down. During the first 480 ps, about 15 He atoms are evaporated, taking away an average energy of 8 K per atom and an angular momentum of about 2.3 units per atom. Note that the initial excess energy with respect to a vortex free 1000-He atom droplet is 476 K in this case, so that the merged droplet still contains a significant amount of internal energy at the end of the simulation, even taking into account the additional energy contained in a vortex-hosting droplet. Unfortunately it is not possible to continue the simulation to much longer times. The results for all the collisions studied in this work are collected in Table <ref>. §.§.§ v= 20 m/s and 40 m/s collisions The angular momentum involved in these collisions is L=1650 and 3300, respectively. The collision dynamics is similar to the case with v=10 m/s, see the corresponding movies in the supplementary material. Vortices are nucleated by the same mechanism at indentations appearing on the fused droplet surface. We have found that the number of vortices of equal circulation increases from two at 10 m/s, to four at 20 and 40 m/s. During the real time evolution of the fused droplet, some of these vortices are evaporated. This does not mean that L changes, angular momentum simply goes into capillary waves. The interplay between vortices and capillary waves is readily seen in these movies, which also show the tendency of increasing the number of stable nucleated vortices with increasing droplet velocity. The time evolution of the total angular momentum for v=20 m/s is displayed in Fig. <ref>, in addition to that of the total energy and number of atoms in the merged droplet. As expected, the decrease in angular momentum follows that in energy and it is due to helium atoms evaporating from the droplet which are removed from the simulation box by the action of the absorbing buffer. 
During the 360 ps covered by the simulations, about 10 He atoms are evaporated for the collision at v=20 m/s taking away an energy of 9.2 K/atom and an angular momentum of 10 units per atom. At v=40 m/s, we have found that 15 He atoms are evaporated, taking away an average energy of 6.3 K/atom and an angular momentum of 16 units per atom. §.§ Collisions with impact parameter b=2R Grazing collisions are especially relevant since it is not obvious that the vdW attraction between the colliding droplets may compensate the kinetic energy in the colliding droplets and lead to droplet coalescence. §.§.§ v= 20 m/s and 40 m/s collisions The angular momentum involved in these distal collisions is L=2200 and 4400, respectively. Figure <ref> shows snapshots of the 2D density for the v=40 m/s case. A density bridge perpendicular to the collision direction appears, connecting both droplets (frame at 10 ps). A vortex dimer is nucleated at t=20 ps, and another dimer appears at 50 ps. The interplay between capillary waves and vortices leads to the evaporation of one of the vortex dimers (frame at 110 ps) which is nucleated again later on (frame a 240 ps) and re-evaporated at t=365 ps. The coalesced droplet is very stretched due to the angular momentum deposited in the system. During the 475 ps elapsed by the real time simulations, about 5 He atoms are evaporated, taking away an energy of 13.6 K/atom and an angular momentum of 23 units per atom. As shown in the movies, the evolutions at v=20 and 40 m/s are qualitatively similar. §.§ Distal collision at b=5R/2 and v= 20 m/s The angular momentum involved in this distal collision is L=2750. This collision highlights the relevance of the finite range of the vdW interaction in the outcome of the process. Indeed, if the collision were modelled by a surface tension plus kinetic energy model, inherent to any classical model based on the NS approach, it would lead to the non-interaction of the approaching droplets. At variance, we have found that the colliding droplets merge. Figure <ref> shows snapshots of the 2D density. A tiny, low-density bridge is clearly visible at 52 ps. Eventually, droplets merge yielding a vortex-free droplet for a relatively long amount of time, as illustrated by the frame at t= 190 ps, where the merged droplet undergoes a complete rotation with all the angular momentum stored in the form of capillary waves. Eventually, a vortex dimer starts being nucleated at t= 245 ps by the familiar surface indentations mechanism; it is clearly visible e.g. at t= 412 ps. The vortex dimer later evaporates (frame at t= 483 ps) but it is nucleated again at t= 555 ps. This evaporation-nucleation process continues until the end of the real time simulation (691 ps). During the time elapsed by the simulation, about 3 He atoms are evaporated, taking away an energy of 3.7 K/atom and an angular momentum of 22 units per atom. §.§ Asymmetric collisions We have seen that, due to the symmetry of the binary collision between two identical droplets, vortices are nucleated in pairs by the surface indentation mechanism. A less symmetric collision might lead to the nucleation of an odd number of vortices. To check this possibility, and see the influence of the asymmetry on the collision outcome, we have conducted one simulation with droplets of different sizes, namely N_1=300 and N_2=700. 
The initial conditions v_1=28 m/s, v_2= 12 m/s, b = (25/21)(3R/2) were chosen so as to be as close as possible to the case of identical droplets with v = 20 m/s and b = 3R/2, in order to compare the collision processes for the same relative velocity and total angular momentum. Figure <ref> shows snapshots of the 2D density. The usual density bridge can be seen at 15 ps, and one single vortex is nucleated at 30 ps. Yet, another vortex is later nucleated at 60 ps, and a third one appears at 80 ps. The latter panel also shows a surface protrusion whose collapse yields a vortex ring and a series of density waves propagating inside the droplet. One of the vortices gets ejected at t=215 ps, but then it gets nucleated again as can be seen in the final snapshot at t=306 ps. During the whole simulation, about 7 He atoms are evaporated, taking away an energy of 8.6 K/atom and an angular momentum of 16.1 units per atom. §.§ Sharing angular momentum between capillary waves and vortex lines It is well known that angular momentum in superfluid ^4He droplets can be stored in the form of capillary waves and/or quantized vortices, see e.g. Refs. Anc18,Oco20,Pi21. As discussed in the previous Sections, and as it is clearly apparent from the figures, both vortices and capillary waves appear in the merged droplets. It is quite natural to ask oneself how much angular momentum is stored into vortices and how much is in capillary waves. This question has not a rigorous answer, as one cannot split the effective wave function of the superfluid Ψ(𝐫, t) into a component arising from vortex contributions and another one from capillary waves, both being intimately entangled. A simple estimate of vortex (L_v) and capillary wave (L_cap) contributions to the total angular momentum can be obtained as done in Refs. Oco20,Pi21, when the shape of the rotating droplet is approximately ellipsoidal. It consists in determining L_cap from the angular velocity ω of the apparent rotation of the merged droplet and using an ellipsoid approximation for the droplet shape, since the angular momentum of an ellipsoid made of an irrotational fluid rotating around a principal axis at angular velocity ω is known.<cit.> L_v is obtained as L-L_cap. These are only estimates that could be more meaningful near the end of the simulations when the droplet reaches a quasi steady rotational state. We have proceeded as follows. We first determine the classical axes of inertia by diagonalizing the classical matrix of inertia in the lab frame, I_jk=m_4 ∫ d𝐫 (r^2 δ_jk- r_j r_k) ρ(𝐫) . Since the x axis is maintained constant by symmetry, the instantaneous inertia axes were determined by rotation by a single angle θ about x. The angular velocity ω is then calculated as ω=Δθ/Δ t . The angular momentum due to capillary waves is finally expressed as L_cap=ℐ_irrω, where<cit.> ℐ_irr= m_4 N_tot [⟨ y^2 ⟩ -⟨ z^2 ⟩]^2/⟨ y^2 ⟩+⟨ z^2⟩ is the irrotational moment of inertia calculated in the rotating frame, with ⟨ y^2⟩=1/N_tot∫ dr y^2 ρ(r) and ⟨ z^2 ⟩=1/N_tot∫ dr z^2 ρ(r) , N_tot being the total number of atoms in the merged droplet. For vortex-free droplets, the above expressions have been found to reproduce the DFT results within 5%.<cit.> As illustrative examples, we show in Fig. <ref> the total angular momentum L (which changes with time due to atom evaporation) and vortex contribution L_v for the collision corresponding to b=3R/2 and v= 10 m/s, for times t > 250 ps. 
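A minimal sketch of the decomposition procedure just described is given below, assuming the helium density is available on a uniform grid at two close times. Instead of diagonalising the full classical inertia matrix defined above, the sketch diagonalises the second-moment (quadrupole) matrix in the reaction plane, which has the same principal axes; the function names, the use of numpy and the two-snapshot estimate of ω = Δθ/Δt are illustrative assumptions, not the actual implementation used to produce the figures.

```python
import numpy as np

HBAR = 1.054571817e-34            # J s
M4 = 4.002602 * 1.66053907e-27    # kg (mass of the 4He atom)

def tilt_angle(rho, y, z, dV):
    """Angle of the long principal axis of the density in the (y,z) reaction
    plane, from the 2x2 second-moment matrix (same principal axes as the
    classical inertia matrix)."""
    n = np.sum(rho) * dV
    yy = np.sum(rho * y * y) * dV / n
    zz = np.sum(rho * z * z) * dV / n
    yz = np.sum(rho * y * z) * dV / n
    return 0.5 * np.arctan2(2.0 * yz, yy - zz)

def capillary_angular_momentum(rho1, rho2, y, z, dV, dt):
    """Estimate L_cap = I_irr * omega (in hbar units) from two density
    snapshots separated by a small dt (seconds), coordinates in metres.
    dt must be short compared to the rotation period and theta has to be
    tracked continuously (the principal axis is only defined modulo pi)."""
    th1 = tilt_angle(rho1, y, z, dV)
    th2 = tilt_angle(rho2, y, z, dV)
    omega = (th2 - th1) / dt                      # apparent angular velocity

    # second moments evaluated in the instantaneous principal (rotating) frame
    yp = np.cos(th2) * y + np.sin(th2) * z
    zp = -np.sin(th2) * y + np.cos(th2) * z
    n_tot = np.sum(rho2) * dV
    y2 = np.sum(rho2 * yp**2) * dV / n_tot
    z2 = np.sum(rho2 * zp**2) * dV / n_tot

    i_irr = M4 * n_tot * (y2 - z2) ** 2 / (y2 + z2)   # irrotational moment of inertia
    return i_irr * omega / HBAR

# usage sketch: with rho1, rho2 the densities at times t and t+dt on a grid of
# spacing dx, and L_tot the total angular momentum of the effective wave function,
#   L_cap = capillary_angular_momentum(rho1, rho2, y, z, dx**3, dt)
#   L_v   = L_tot - L_cap
```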
Figure <ref> shows the same quantities for the collision corresponding to b=3R/2 and v= 20 m/s, and times t > 150 ps. We want to stress here that these qualitative results should be taken with caution, as some of the considered droplet configurations are not as ellipsoidal as they should be to justify the application of the above expressions. § SUMMARY AND CONCLUDING REMARKS We have addressed binary collisions of superfluid helium drops within the He-DFT approach. The simulations have been carried out for ^4He_500 droplets and several values of the impact parameter and relative velocities. To see the influence of droplet asymmetry, we have also addressed the collision of two droplets with different number of atoms. Asymmetric collisions seem to favor the appearance of an odd number of vortices, whereas this number can only be even in binary collisions of equal size droplets. Not surprisingly, collisions of superfluid ^4He droplets display similarities with classical droplet collisions. In both cases, the merged droplet is highly deformed and rotates in order to maintain the angular momentum involved in the collision; compare, e.g., the morphology of the droplets shown in Ref. Nik09 with those displayed in this work. The substantial difference between both situations is the ubiquitous appearance of quantized vortices in the case of helium, made possible within the He-TDDFT framework. Besides this important point, the He-TDDFT approach differs from classical ones based on the solution of the Navier-Stokes or Euler equations in which the former takes into account the finite range of the van der Waals interaction which facilitates droplet merging for grazing and distal collisions, whereas it is not possible for the latter approaches where droplet interaction is mediated by surface tension and kinetic energy of the colliding droplets. Computational limitations make it impossible to implement the He-TDDFT method in the experimental conditions under which very large droplets are made, which involve much larger number of atoms at smaller relative velocities.<cit.> Yet, an interesting conclusion is readily transferable to that experimental situation. Using a nozzle shape specifically devised to reduce angular momentum acquisition when droplets travel through the source chamber, a recent experiment<cit.> still identified a few vortex-hosting droplets from the appearance of Xe filament-shaped structures in x-ray diffraction images. These observations suggest that droplet collisions produced during the expansion from the source chamber might be the cause of angular momentum acquisition and subsequent vortex nucleation. Our calculations make this scenario plausible. On the one hand, we have found that quantum vortices are readily nucleated by the surface indentations mechanism, yielding vortex rings (which carry no angular momentum) for head-on collisions, and off center vortices (which carry angular momentum, although smaller that centered vortices) for non-zero impact parameter collisions. Since indentations appear whenever droplets merge, the indentation mechanism is independent on the droplet size. In addition, we have unexpectedly found that, even for grazing and distal collisions, droplets coalesce at relative velocities as large as 40 m/s instead of stretching and separating again; these velocities are much larger than those found in the experiments.<cit.> Thus, droplet collisions in a broad interval of impact parameters and relative velocities would lead to vortex nucleation. 
Our simulations also show that droplet-droplet collisions could also nucleate vortices in smaller droplets, in the range of a thousand atoms. Their appearance would be favored in conditions where velocity spread is larger. So far no convincing way of detecting them in small droplets has been demonstrated. He-TDDFT simulations have other unavoidable limitations. On the one hand, the method is strictly a zero temperature approach and there is no dissipation; energy can only be lost by atom evaporation, whereas any residual viscosity remaining in the system would contribute to stabilize the merged droplet and damp density oscillations. On the other hand, due to the limited time elapsed by the simulations, droplets do not reach the stationary state of apparent rotation and stabilized vortex array structures found in the experiments.<cit.> § SUPPLEMENTARY MATERIAL See supplementary material for the video files showing the real time evolution of the processes discussed in the present work. We are very indebted to Rico Tanyag and Thomas Möller for useful exchanges. A computer grant from CALMIP high performance computer center (grant P1039) is gratefully acknowledged. This work has been performed under Grant No. PID2020-114626GB-I00 from the MICIN/AEI/10.13039/501100011033 and benefitted from COST Action CA21101 “Confined molecular systems: form a new generation of materials to the stars” (COSY) supported by COST (European Cooperation in Science and Technology). § AUTHOR DECLARATIONS §.§ Conflict of Interest The authors have no conflicts to disclose. §.§ DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon reasonable request 99 Toe04 J. P. Toennies and A. F. Vilesov, Angew. Chem. Phys. 43, 2622 (2004). Sle22 Molecules in superfluid helium nanodroplets, A. Slenczka and J. P. Toennies Eds. Topics in Applied Physics 145, Springer, (2022). Gom14 L. F. Gomez, K. R. Ferguson, J. P. Cryan, C. Bacellar, R. M. P. Tanyag, C. Jones, S. Schorb, D. Anielski, A. Belkacem, C. Bernando, R. Boll, J. Bozek, S. Carron, G. Chen, T. Delmas, L. Englert, S. W. Epp, B. Erk. L. Foucar, R. Hartmann, A. Hexemer, M. Huth, J. Kwok, S. R. Leone, J. H. S. Ma, F. R. N. C. Maia, E. Malmerberg, S. Marchesini, D. M. Neumark, B. Poon, J. Prell, D. Rolles, B. Rudek, A. Rudenko, M. Seifrid, K. R. Siefermann, F. P. Sturm, M. Swiggers, J. Ullrich, F. Weise, P. Zwart, C. Bostedt, O. Gessner, and A. F. Vilesov, Science 345, 906 (2014). Ges19 O. Gessner and A. F. Vilesov, Annu. Rev. Phys. Chem. 70, 173 (2019). Lan18 B. Langbehn, K. Sander, Y. Ovcharenko, C. Peltz, A. Clark, M. Coreno, R. Cucini, M. Drabbels, P. Finetti, M. Di Fraia, L. Giannessi, C. Grazioli, D. Iablonskyi, A. C. LaForge, T. Nishiyama, V. Oliver Álvarez de Lara, P. Piseri, O. Plekan, K. Ueda, J. Zimmermann, K. C. Prince, F. Stienkemeier, C. Callegari, T. Fennel, D. Rupp, and T. Möller, Phys. Rev. Lett. 121, 255301 (2018). Oco20 S. M. O. O'Connell, R. M. P. Tanyag, D. Verma, Ch. Bernando, W. Pang, C. Bacellar, C. A. Saladrigas, J. Mahl, B. W. Toulson, Y. Kumagai, P. Walter, F. Ancilotto, M. Barranco, M. Pi, Ch. Bostedt, O. Gessner, and A. F. Vilesov, Phys. Rev. Lett. 124, 215301 (2020). Pi21 M. Pi, J. M. Escartín, F. Ancilotto, and M. Barranco, Phys. Rev. B 104, 094509 (2021). Mat14 D. Mateo, A. Leal, A. Hernando, M. Barranco, M. Pi, F. Cargnoni, M. Mella, X. Zhang, and M. Drabbels, J. Chem. Phys. 140, 131101 (2014). Lea14 A. Leal, D. Mateo, A. Hernando, M. Pi, M. Barranco, A. Ponti, F. Cargnoni, and M. 
Drabbels, Phys. Rev. B 90, 224518 (2014). Cop17 F. Coppens, A. Leal, M. Barranco, N. Halberstadt, and M. Pi, J. Low Temp. Phys. 187, 439 (2017). Clo98 J. D. Close, F. Federmann, K. Hoffmann, and N. Quaas, J. Low Temp. Phys. 111, 661 (1998). Her08 A. Hernando, M. Barranco, R. Mayol, M. Pi, and M. Krosnicki, Phys. Rev. B 77, 024513 (2008). Gar20 E. García-Alfonso, F. Coppens, M. Barranco, M. Pi, F. Stienkemeier, and N. Halberstadt, J. Chem. Phys. 152, 194109 (2020). Har98 J. Harms, J. P. Toennies, and F. Dalfovo, Phys. Rev. B 58, 3341(1998). Har01 J. Harms, J. P. Toennies, M. Barranco, and M. Pi, Phys. Rev. B 63, 184513 (2001). Vic00 C. L. Vicente, C. Kim, H. J. Maris, and G. M. Seidel, J. Low Temp. Phys. 121, 627 (2000). Tan20 R. M. P. Tanyag, A. J. Feinberg, S. M. O. O'Connell, and A. F. Vilesov, J. Chem. Phys. 152, 234306 (2020). Kol22 K. Kolatzki, M. L. Schubert, A. Ulmer, T. Möller, D. Rupp, and R. M. P. Tanyag, Phys. Fluids 34, 012002 (2022). Ulm23 A. Ulmer, A. Heilrath, B. Senfftleben, S. M. O. O'Connell-Lopez, B. Kruse, L. Seiffert, K. Kolatzki, B. Langbehn, A. Hoffmann, T. M. Baumann, R. Boll, A. S. Chatterley, A. De Fanis, B. Erk, S. Erukala, A. J. Feinberg, T. Fennel, P. Grychtol, R. Hartmann, M. Ilchen, M. Izquierdo, B. Krebs, M. Kuster, T. Mazza, J. Montaño, G. Noffz, D. E. Rivas, D. Schlosser, F. Seel, H. Stapelfeldt, L. Strüder, J. Tiggesbäumker, H. Yousef, M. Zabel, P. Ziolkowski, M. Meyer, Y. Ovcharenko, A. F. Vilesov, T. Möller, D. Rupp, and R. M. P. Tanyag, arXiv:2302.07355v3, to be published in Phys. Rev. Lett. (2023). Gri03 R. E. Grisenti and J. P. Toennies, Phys. Rev. Lett. 90, 234501 (2003). Anc17 F. Ancilotto, M. Barranco, F. Coppens, J. Eloranta, N. Halberstadt, A. Hernando, D. Mateo, and M. Pi, Int. Rev. Phys. Chem. 36, 621 (2017). dft-guide M. Barranco, F. Coppens, N. Halberstadt, A. Hernando, A. Leal, D. Mateo, R. Mayol, and M. Pi, Zero temperature DFT and TDDFT for ^4He: A short guide for practitioners. <https://github.com/bcntls2016/DFT-Guide/blob/master/dft-guide.pdf> Bar06 M. Barranco, R. Guardiola, S. Hernández, R. Mayol, J. Navarro, and M. Pi, J. Low Temp. Phys. 142, 1 (2006). Dal95 F. Dalfovo, A. Lastri, L. Pricaupenko, S. Stringari, and J. Treiner, Phys. Rev. B 52, 1193 (1995). Pit16 L. Pitaevskii and S. Stringari, Bose-Einstein Condensation and Superfluidity, International Series of Monographs on Physics vol. 164 (Oxford University Press, 2016). Bar16 C. F. Barenghi and N. G. Parker, A Primer on Quantum Fluids, Springer Briefs in Physics (2016). Tsu09 M. Tsubota and W.P. Halperin (Eds.), Progress in Low Temperature Physics, vol. XVI (Elsevier, Amsterdam and London, 2009). Esc19 J. M. Escartín, F. Ancilotto, M. Barranco, and M. Pi, Phys. Rev. B 99, 140505(R) (2019). Esc22 J. M. Escartín, F. Ancilotto, M. Barranco, and M. Pi, Phys. Rev. B 105, 024511 (2022). Fer19 G. Ferioli, G. Semeghini, L. Masi, G. Giusti, G. Modugno, M. Inguscio, A. Gallemí, A. Recati, and M. Fattori, Phys. Rev. Lett. 122, 090401 (2019). Cik21 V. Cikojević, L. Vranješ Markić, M. Pi, M. Barranco, F. Ancilotto, and J. Boronat, Phys. Rev. Res. 3, 043139 (2021). Gui95 M. Guilleumas, M. Pi, M. Barranco, and E. Suraud, Z. Phys. D 34, 35 (1995). Men93 A. Menchaca-Rocha, A. Cuevas, M. Chapa, and M. Silva, Phys. Rev. E 47, 1433 (1993). Men97 A. Menchaca-Rocha, F. Huidobro, A. Martinez-Davalos, K. Michaelian, A. Perez, and N. Cârjan, J. Fluid Mech. 346 291 (1997). Ngo86 C. Ngô, Prog. Part. Nucl. Phys. 16, 139 (1986). Nik09 N. Nikolopoulos, A. Theodorakakos, and G. Bergeles, Int. J. 
of Heat and Mass Transfer 52, 4160 (2009). Ant19 C. R. Anthony, P. M. Kamat, M. T. Harris, and O. A. Basaran, Phys. Rev. Fluids 4, 093601 (2019). Hoe13 J. Hoepffner and G. Paré, J. Fluid Mech. 734, 183 (2013). Anc23 F. Ancilotto, M. Barranco, and M. Pi, J. Chem. Phys. 158, 144306 (2023). Gar22 E. García-Alfonso, M. Barranco, D. A. Bonhommeau, N. Halberstadt, M. Pi, and F. Calvo, J. Chem. Phys. 157, 014106 (2022). Anc05 F. Ancilotto, M. Barranco, F. Caupin, R. Mayol, and M. Pi, Phys. Rev. B 72, 214522 (2005). Pi17 M. Pi, F. Ancilotto, F. Coppens, N. Halberstadt, A. Hernando, A. Leal, D. Mateo, R. Mayol, and M. Barranco, 4He-DFT BCN-TLS: A Computer Package for Simulating Structural Properties and Dynamics of Doped Liquid Helium-4 Systems. <https://github.com/bcntls2016/> Fri05 M. Frigo and S.G. Johnson, Proc. IEEE 93, 216 (2005). Ral60 A. Ralston and H. S. Wilf, Mathematical methods for digital computers, John Wiley and Sons, New York. (1960). Mat11 D. Mateo, D. Jin, M. Barranco, and M. Pi, J. Chem. Phys. 134, 044507 (2011). Bau95 G. H. Bauer, R. J. Donnelly, and W.F. Vinen, J. Low Temp. Phys. 98, 47 (1995). Leh03 K. K. Lehmann and R. Schmied, Phys. Rev. B 68, 224520 (2003). Toe22 J. P. Toennies, in Molecules in superfluid helium nanodroplets, A. Slenczka and J. P. Toennies Eds. Topics in Applied Physics 145, Springer, (2022), p. 1. Lew93 M. Lewerenz, B. Schilling and J. P. Toennies, Chem. Phys. Lett. 206, 381 (1993), Anc18 F. Ancilotto, M. Pi, and M. Barranco, Phys. Rev. B 97, 184515 (2018). Cop17b F. Coppens, F. Ancilotto, M. Barranco, N. Halberstadt, and M. Pi, Phys. Chem. Chem. Phys. 19, 24805 (2017).
http://arxiv.org/abs/2307.02125v1
20230705090242
3HWC J0631+107/LHAASO J0631+1040: a TeV halo powered by the pulsar J0631+1036?
[ "Dong Zheng", "Zhongxiang Wang", "Yi Xing" ]
astro-ph.HE
[ "astro-ph.HE" ]
Department of Astronomy, School of Physics and Astronomy, Yunnan University, Kunming 650091, China; wangzx20@ynu.edu.cn 0000-0003-1984-3852]Zhongxiang Wang Department of Astronomy, School of Physics and Astronomy, Yunnan University, Kunming 650091, China; wangzx20@ynu.edu.cn Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, China Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, China PSR J0631+1036 is a middle-aged pulsar with properties similar to those of the nearby Geminga pulsar. It is bright in γ-rays, and has been noted as the only source possibly associated with the TeV source 3HWC J0631+107 (also the LHAASO J0631+1040). For understanding the nature of the TeV source, we analyze the GeV γ-ray data obtained with the Large Area Telescope (LAT) onboard the Fermi Gamma-ray Space Telescope for the source region. We are able to remove the pulsar's emission from the region through timing analysis, and find that the region is rather clean without possible GeV γ-ray emission present as the counterpart to the TeV source. By comparing this pulsar to Geminga and considering the spectral feature of the TeV source, we argue that it is likely the TeV halo powered by the pulsar. § INTRODUCTION PSR , discovered by <cit.>, is a middle-aged pulsar with spin period P≃ 0.288 s, characteristic age τ_c≃43.6 kyr, and spin-down luminosity Ė≃ 1.7× 10^35 erg s^-1. Based on the new electron-density model for the Galaxy <cit.>, its distance is D≃ 2.1 kpc given in the Australia Telescope National Facility Pulsar Catalogue <cit.>. In X-rays, observational studies have not detected the pulsar down to ≃ 4.9× 10^30 (D/2.1)^2 erg s^-1 (in 0.5–2.0 keV; ). This seemingly `normal' pulsar, along with several tens of others, has been selected as one likely associated with the Galactic very-high energy (VHE; >100 GeV) TeV sources, namely reported by the High-Altitude Water Cherenkov (HAWC) Observatory in the third HAWC catalog (3HWC; ) and by the Large High Altitude Air Shower Observatory (LHAASO; ) in The First LHAASO Catalog of Gamma-Ray Sources <cit.>. The reasons for finding associations between VHE sources and pulsars are the following. First, pulsars with τ_c≤ 100 kyr can have pulsar wind nebulae (PWNe), which are considered as one primary type of the Galactic TeV sources (e.g., ). Second, as inspired by the detection of the extended TeV emissions around two nearby pulsars, Geminga and Monogem <cit.>, a new type of TeV sources, the so-called TeV halos that are powered by middle-aged pulsars, has been proposed <cit.>. Third, among more than 100 sources detected in recent years with the VHE facilities, mainly the High Energy Stereoscopic System (HESS; ), the HAWC, and the LHAASO, a significant number of the sources do not have typical known counterparts, i.e., the PWNe, the supernova remnants (SNRs), or other types of VHE emitters <cit.>. Given these and intrigued by the third one described above, we have been carrying out multi-wavelength studies of the TeV sources that do not have obvious counterparts at other energy bands <cit.>. In our studies, we noted that 3HWC  (hereafter ), also LHAASO  given the positional match between these two sources, has a clean field in high energies.
There are no known PWNe or SNRs found in, e.g., the TeV online catalog (TeVCat; ), and PSR  is the only notable source based on the position matches (e.g., ). Interestingly, this pulsar has bright GeV emission, detected with the Large Area Telescope (LAT) onboard the Fermi Gamma-ray Space Telescope (Fermi) from the early observations <cit.>. We thus conducted a detailed analysis of the LAT data for the pulsar. The analysis and results are described below in Section <ref>, based on which we argue that is likely a TeV halo powered by PSR ; the related discussion is presented in Section <ref>. § DATA ANALYSIS §.§ LAT data and source model We selected the 0.1–500 GeV LAT data in the time range from 2008 Aug. 04 15:43:36 (UTC) to 2023 Feb. 16 00:00:00 (UTC) from the latest Pass 8 database. The region of interest (RoI) was 15^∘× 15^∘, centered at PSR . As recommended by the LAT team[http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/], the events with quality flags of `bad' and zenith angles ≥ 90^∘ were excluded. We used the latest LAT 12-year source catalog (4FGL-DR3; ) to construct a source model. The sources within 15^∘ of the pulsar in the catalog were included in the source model, and their catalog spectral forms were used. The background Galactic and extragalactic diffuse spectral models were also included, using the files gll_iem_v07.fits and iso_P8R3_SOURCE_V3_v1.txt, respectively. §.§ Timing analysis of PSR  PSR  is bright in the LAT energy band and located in a clean field, as shown in the left panel of Figure <ref>, a test statistic (TS) map calculated for the source region from the whole LAT data (Section <ref>). The pulsar is the only 4FGL-DR3 source in the 2^∘× 2^∘ TS map. Also seen is its positional match to . In order to check if there are other sources hiding in the bright emission of the pulsar, we worked to obtain its pulsed emission through timing analysis. On the first try, we folded the photons within a 6^∘ radius (∼size of the point spread function of LAT at 100 MeV[https://www.slac.stanford.edu/exp/glast/groups/canda/lat_Performance.htm]) aperture centered at the pulsar according to the ephemeris given in the LAT Gamma-ray Pulsar Timing Models Database[https://confluence.slac.stanford.edu/display/GLAMCOG/LAT+Gamma-ray+Pulsar+Timing+Models] <cit.>, but no clear pulse profile over the ≃14.5 yr long data could be obtained. We then switched to the method fully described in <cit.>. In essence, we divided the data into sets of 200 d, and assigned pulse phases to the photons according to the ephemeris in the Database <cit.> by using the TEMPO2 plugin <cit.>. We were able to obtain empirical Fourier template profiles before and after MJD 56770, generate the times of arrival (TOAs) for each set of ∼ 200 d data, and update the ephemeris by fitting the TOAs with high-order frequency derivatives. The main parameters (f, f1, and f2) of the updated ephemeris during the two time periods are provided in Table <ref>. We could not extend the ephemeris to times longer than MJD 57930, probably due to the glitches of the pulsar at MJD 58341 & 58352 <cit.>. With the two ephemerides (Table <ref>), the photons during the two time periods were folded respectively. The two pulse profiles had a phase mismatch of ≃0.3075 (which was directly read off from the profiles because of the clear pulse shape). After applying the phase shift to the photons of the second time period, the pulse profile over nearly 9 yr was obtained (Figure <ref>).
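For reference, the folding step itself amounts to evaluating the rotational phase from a Taylor expansion of the spin frequency about the reference epoch. A minimal sketch is given below; the names are illustrative, and the barycentric corrections and full timing model are left to TEMPO2:

```python
import numpy as np

def pulse_phases(t, t0, f, f1, f2, phase_offset=0.0):
    """Rotational phase (mod 1) for barycentered photon times t [s], given the
    spin frequency f [Hz] and its derivatives f1, f2 at the epoch t0.
    Schematic only; TEMPO2 evaluates the full timing model."""
    dt = t - t0
    phase = f * dt + 0.5 * f1 * dt**2 + f2 * dt**3 / 6.0 + phase_offset
    return np.mod(phase, 1.0)

def folded_profile(phases, nbins=32):
    """Fold the assigned phases into a pulse profile (counts per phase bin)."""
    counts, edges = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    return counts, 0.5 * (edges[:-1] + edges[1:])
```

A constant shift, such as the ≃0.3075 mismatch quoted above, would enter simply through phase_offset before the modulo is taken.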
Based on the pulse profile, we defined phase 0.0625–0.5625 as the onpulse phase range and phases 0.0–0.0625 and 0.5625–1.0 as the offpulse phase ranges. §.§ Likelihood and spectral analysis §.§.§ Whole and onpulse data We performed standard binned likelihood analysis of the 0.1–500 GeV LAT data during the whole ∼14.5 yr time period and the onpulse phase range during the ∼9 yr time period. The spectral parameters of the sources within 5^∘ of the pulsar in the source model were set free, while the spectral parameters of the other sources were fixed at the values given in 4FGL-DR3. In addition, the normalizations of the two background components were set free. For the emission at the pulsar's position, in the whole data or those of the onpulse phase, we used a sub-exponentially cutoff power-law model (PLSEC; ), dN/dE = N_0 (E/E_0)^{-Γ - (d/2)ln(E/E_0) - (db/6)ln^2(E/E_0) - (db^2/24)ln^3(E/E_0)}, where Γ and d are the photon index and the local curvature at E_0 respectively, and b is a measure of the shape of the exponential cutoff. We fixed b=2/3, which is the value set for most of the γ-ray pulsars in the LAT catalogs (e.g., ). The likelihood analysis results, including the TS values, are provided in Table <ref>. The parameter values of the pulsar are consistent with those given in 4FGL-DR3. A 0.1–500 GeV TS map of a 2^∘× 2^∘ region centered at the pulsar was calculated and is shown in the left panel of Figure <ref>. We extracted the spectrum of PSR  in the onpulse phase data. The spectral bands were set as 10 bands evenly divided in logarithm from 0.1 to 500 GeV. In the analysis to obtain the fluxes in the bands, the spectral normalizations of the sources within 5^∘ of the pulsar were set as free parameters, while all the other parameters of the sources were fixed at the values obtained from the above binned likelihood analysis. For the spectral data points, we kept those with TS≥4 and derived 95% flux upper limits otherwise. The obtained spectrum is shown in Figure <ref>. §.§.§ Offpulse data With the offpulse phase ranges obtained above (Figure <ref>), we examined the source region by first calculating a TS map in 0.1–500 GeV from the offpulse data. No source emission could be detected at the pulsar's position, with a 95% flux upper limit of ∼10^-13 erg cm^-2 s^-1 (assuming Γ=2 in 0.1–500 GeV; cf., Figure <ref>). However, residual emission (TS∼50) southeast of the pulsar is seen (middle panel of Figure <ref>). To further check the residual emission, a TS map in 1–500 GeV was also obtained (right panel of Figure <ref>). As can be seen, there seemingly are two sources. We ran gtfindsrc in Fermitools to determine their positions. The obtained best-fit positions are (R.A., Decl.) = (98.03^∘, 10.59^∘) and (R.A., Decl.) = (98.34^∘, 10.52^∘) for point source 1 (PS1) and 2 (PS2), respectively, with 1σ nominal uncertainties of 0.11^∘ and 0.13^∘ (indicated in Figure <ref>). By including PS1 or PS1+PS2 in the source model, we performed the likelihood analysis of the offpulse data. We found that PS2 was not significantly detected, indicated by the likelihood value being similar to that when only PS1 was in the source model (see Table <ref>). We extracted the spectrum of PS1 (Figure <ref>), which could be fitted with a power law with Γ≃ 2.66. § DISCUSSION We conducted an analysis of the LAT data for PSR , because of its possible association with the TeV source and the absence of PWN/SNR-like counterparts in the source region. By timing the pulsar, we were able to remove its pulsed emission in a ∼9 yr time period of the data.
No offpulse emission was detected at the pulsar's position. Residual emission, PS1, was seen approximately 0.16^∘ east of the pulsar. The emission was soft, mostly detectable at ≲ 1 GeV (Figure <ref>). We checked the SIMBAD database for possible counterparts to it, but no obvious ones (particularly in radio or X-rays) were found within its error circle. The nature of PS1 is thus not clear. Given the positional offset and its soft emission, it is not likely associated with the pulsar or the TeV source. It is then straightforward to note the similarities of PSR  to Geminga. They have similar P values and are both bright, while the former's τ_c is smaller by a factor of ∼8 and Ė higher by a factor of ∼5. Given these and our analysis results for the field, we thus argue that is likely the TeV halo of PSR . In Figure <ref>, we compare this pulsar to Geminga by scaling the latter's X-ray, , and TeV halo fluxes to the distance of the former, where the nominal distance 250 pc is used for Geminga <cit.>. As can be seen, the X-ray upper limit on PSR  <cit.> is approximately consistent with the X-ray fluxes of Geminga or its PWN <cit.>, where the fluxes of all the components of the latter's PWN are added together as the flux in 0.3–8 keV. Because the TeV halo of Geminga is extended with fine structures <cit.>, we adopt the flux measurement from the second HAWC catalog <cit.>, in which a 2^∘ extension was used. The scaled flux at 7 TeV is ∼7× 10^-16 TeV^-1 cm^-2 s^-1, approximately 1/6 of that of . Interestingly, the ratio is similar to that in Ė of Geminga to PSR . The size of the TeV halo of PSR  would be roughly 0.24^∘ by taking that of Geminga as a standard <cit.>, smaller than the upper limit of 0.30^∘ set by the LHAASO <cit.>. We further note that the emission of is hard, as LHAASO detected it in 25–100 TeV with Γ≃ 3.3 but did not detect it in 1–25 TeV (Figure <ref>). This spectral feature is similar to that of Geminga's TeV halo, indicated by the LHAASO detection of Γ≃ 1.7 in 1–25 TeV and Γ≃ 3.7 in ≥25 TeV (i.e., the spectrum likely peaks around ∼25 TeV). This type of spectrum is harder than those of PWNe, since the latter have Γ≳ 2 in 1–10 TeV and thus some of them can be detected with LAT ( and references therein); indeed, part of the purpose of this work was to search for a PWN in the offpulse data. Hopefully, with more data collected with LHAASO in the near future, the similarity of the spectrum of to that of Geminga's TeV halo can be established, thus firmly confirming the TeV halo nature of . This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research is supported by the Basic Research Program of Yunnan Province (No. 202201AS070005), the National Natural Science Foundation of China (12273033), and the Original Innovation Program of the Chinese Academy of Sciences (E085021002).
http://arxiv.org/abs/2307.02621v1
20230705194848
A global higher regularity result for the static relaxed micromorphic model on smooth domains
[ "Dorothee Knees", "Sebastian Owczarek", "Patrizio Neff" ]
math.AP
[ "math.AP", "35Q74, 35B65, 49N60, 74A35, 74G40" ]
A global higher regularity result for the static relaxed micromorphic model on smooth domains Dorothee Knees Institute of Mathematics, University of Kassel, Heinrich-Plett Str. 40, 34132 Kassel, Germany, dknees@mathematik.uni-kassel.de , Sebastian OwczarekFaculty of Mathematics and Information Science, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland, sebastian.owczarek@pw.edu.pl , Patrizio NeffLehrstuhl für Nichtlineare Analysis und Modellierung, Fakultät für Mathematik, Universität Duisburg-Essen, Campus Essen, Thea-Leymann Str. 9, 45127 Essen, Germany, patrizio.neff@uni-due.de ======================================================================================================================================================== We derive a global higher regularity result for weak solutions of the linear relaxed micromorphic model on smooth domains. The governing equations consist of a linear elliptic system of partial differential equations that is coupled with a system of Maxwell-type. The result is obtained by combining a Helmholtz decomposition argument with regularity results for linear elliptic systems and the classical embedding of H(;Ω)∩ H_0(;Ω) into H^1(Ω). Keywords: global regularity; smooth domain; relaxed micromorphic model; elasticity coupled with Maxwell system; Helmholtz decomposition; generalized continua, dislocation model AMS Subject Classification 2020: 35Q74, 35B65, 49N60, 74A35, 74G40. § INTRODUCTION The relaxed micromorphic model is a novel generalised continuum model allowing one to describe size effects and band-gap behaviour of microstructured solids with effective equations ignoring the detailed microstructure <cit.>. As a micromorphic model it couples the classical displacement u:Ω⊂^3→^3 with a non-symmetric tensor field P:Ω⊂^3→^3× 3 called the microdistortion through the variational problem ∫_Ω (⟨(∇ u-P),(∇ u-P)⟩ +⟨ P, P⟩ + ⟨ P, P⟩ -⟨ f,u⟩-⟨ M,P⟩) x ⟶ min w.r.t. (u,P) , subject to suitable boundary conditions. The tensor introduces a size-dependence into the model in the sense that smaller samples respond relatively more stiffly. The existence and uniqueness in the static case follow from the incompatible Korn's inequality <cit.>. The constitutive tensors , and are to be found by novel homogenisation strategies <cit.>. Letting → +∞, the model's response tends to the linear Cosserat model <cit.>. A range of engineering-relevant analytical solutions is already available for the relaxed micromorphic model <cit.>. The solution is naturally found as u∈ H^1(Ω) and P∈ H(;Ω), thus the microdistortion P may have jumps in the normal direction. The implementation in the finite element context needs standard element formulations for the displacement u, but e.g. Nédélec spaces for P in order to achieve optimal convergence rates <cit.>. However, it is sometimes preferred to circumvent the Nédélec framework and to work with H^1(Ω) for the microdistortion tensor P. For these cases it is mandatory to clarify in advance whether the regularity of P allows for a faithful result.
In this spirit, we continue here the investigation of regularity in the static case and we will be able to derive a global higher regularity result for weak solutions of the relaxed micromorphic model (global as opposed to only interior regularity). It extends the local result from <cit.> to smooth domains. The latter is formulated on a bounded domain Ω⊂^3 and the Euler-Lagrange equations to (<ref>) read as follows (<cit.>): given positive definite and symmetric material dependent coefficient tensors _e:Ω→((3),(3)), _micro:Ω→((3),(3)) and _c:Ω→(^3× 3,^3× 3) determine a displacement field u:Ω→^3 and a non-symmetric microdistortion tensor P:Ω→^3× 3 satisfying 0 =(_e(∇ u - P)) + f in Ω, 0 = -(_c P) + _e(∇ u - P) - _micro P + M in Ω together with suitable boundary conditions. Here, f:Ω→^3 is a given volume force density, M:Ω→^3× 3 a body moment tensor and σ=_e(∇ u - P) is the symmetric force stress tensor while m= P is the non-symmetric moment tensor. The main result of the present contribution is Theorem <ref> which states that on smooth domains and with smooth coefficients weak solutions of (<ref>) are more regular and satisfy u∈ H^2(Ω) , P∈ H^1(Ω) , σ∈ H^1(Ω) , m∈ H(;Ω) , m ∈ H^1(Ω) , where the last regularity in (<ref>) follows from equation (<ref>)_2. Moreover, if _c has a special block diagonal structure (see Corollary <ref>), then we in addition have m∈ H^1(Ω) , P∈ H^1(Ω) . The results (<ref>) are obtained by a combination of the Helmholtz decomposition for the matrix field P, regularity results for linear elliptic systems of elasticity-type and the classical Maxwell embedding recalled in Theorem <ref>. The additional regularity formulated in (<ref>) relies on a weighted version of the Maxwell embedding theorem, <cit.>. § BACKGROUND FROM FUNCTION SPACE THEORY §.§ Notation, assumptions For vectors a, b∈^n, we define the scalar product ⟨ a,b⟩:=∑_i=1^n a_ib_i, the Euclidean norm a^2:=⟨ a,a⟩ and the dyadic product a⊗ b=(a_ib_j)_i,j=1^n∈^n× n, where ^m× n will denote the set of real m× n matrices. For matrices P,Q∈^m× n, we define the standard Euclidean scalar product ⟨ P,Q⟩:=∑_i=1^m∑_j=1^n P_ijQ_ij and the Frobenius-norm P^2:=⟨ P,P⟩. P^T∈^n× m denotes the transposition of the matrix P∈^m× n and for P∈^n× n, the symmetric part of P will be denoted by P=1/2(P+P^T)∈(n). From now on we fix d=3. Let Ω⊂^3 be a bounded domain. As a minimal requirement, we assume in this paper that the boundary ∂Ω is Lipschitz continuous, meaning that it can locally be described as the graph of a Lipschitz continuous function, see <cit.> for a precise definition. In a similar spirit we speak of C^1 or C^1,1-regular boundaries. For a function u=(u^1,…, u^m)^T:Ω→^m (with m∈), the differential ∇ u is given by ∇ u =[ ∇ u^1; ⋮; ∇ u^m ]∈^m× d , with (∇ u^k)_l= ∂_x_l u^k for 1≤ k≤ m and 1≤ℓ≤ d and with ∇ u^k∈^1× d. For a vector field w:Ω→^3, the divergence and the curl are given as w=∑_i=1^ 3 w^i_, x_i, w =(w^3_,x_2-w^2_,x_3,w^1_,x_3-w^3_,x_1,w^1_,x_2-w^2_,x_1) . For tensor fields Q:Ω→^n× 3 (n∈), Q and Q are defined row-wise: Q=[ Q^1; ⋮; Q^n ]∈^n× 3 , and Q=[ Q^1; ⋮; Q^n ]∈^n , where Q^i denotes the i-th row of Q. With these definitions, for u:Ω→^m we have consistently ∇ u=0∈^m × d. The Sobolev spaces <cit.> used in this paper are H^1(Ω)={u∈ L^2(Ω) | ∇ u∈ L^2(Ω)} , u^2_H^1(Ω):=u^2_L^2(Ω)+∇ u^2_L^2(Ω) , H(;Ω)={v∈ L^2(Ω;^d) | v∈ L^2(Ω)} , v^2_H( curl;Ω):=v^2_L^2(Ω)+ curl v^2_L^2(Ω) , H(;Ω)={v∈ L^2(Ω;^d) | v∈ L^2(Ω)}, v^2_H(;Ω):=v^2_L^2(Ω)+ v^2_L^2(Ω) , spaces for tensor valued functions are denoted by H(;Ω) and H(;Ω). 
Moreover, H_0^1(Ω) is the completion of C_0^∞(Ω) with respect to the H^1-norm and H_0( curl;Ω) and H_0(;Ω) are the completions of C_0^∞(Ω) with respect to the H()-norm and the H()-norm, respectively. By H^-1(Ω) we denote the dual of H_0^1(Ω). Finally we define H(,0;Ω) ={u∈ H(;Ω) | u=0} , H(,0;Ω) ={u∈ H(;Ω) | u=0} and set H_0(,0;Ω) =H_0(;Ω)∩ H(,0;Ω) , H_0(,0;Ω) =H_0(;Ω)∩ H(,0;Ω) . Assumption A: We assume that the coefficient functions , and in (<ref>) are fourth order elasticity tensors from C^0,1(Ω;(^3× 3;^3× 3)) and are symmetric and positive definite in the following sense * For every σ,τ∈(3), η_1,η_2∈^3× 3 and all x∈Ω: ⟨ℂ_e(x)σ,τ⟩ = ⟨σ,ℂ_e(x)τ⟩ , ⟨ℂ_micro(x)σ,τ⟩ = ⟨σ,ℂ_micro(x)τ⟩ , ⟨𝕃_c(x)η_1,η_2⟩ = ⟨η_1, 𝕃_c(x)η_2⟩ . * There exists positive constants C_e, C_micro and L_c such that for all x∈Ω, σ∈(3) and η∈^3× 3: ⟨(x)σ,σ⟩≥ C_e|σ|^2 , ⟨(x)σ,σ⟩≥ C_micro|σ|^2 , ⟨(x)η,η⟩≥ L_c|η|^2 . §.§ Helmholtz decomposition, embeddings, elliptic regularity Based on the results from Section 3.3 of the book <cit.> (Corollary 3.4), see also <cit.>, the following version of the Helmholtz decomposition will be used: Let Ω⊂^3 be a bounded domain with a Lipschitz boundary. Then L^2(Ω;^3)= ∇ H_0^1(Ω)⊕ H(,0;Ω) and hence, for every p∈ L^2(Ω;^3) there exist unique v∈ H_0^1(Ω) and q∈ H(,0;Ω) such that p=∇ v + q. An immediate consequence of the Helmholtz decomposition theorem is Let Ω⊂^3 be a bounded domain with a Lipschitz boundary and let p∈ H_0(;Ω) with p=∇ v + q, where v∈ H_0^1(Ω) and q∈ H(,0;Ω) are given according to the Helmholtz decomposition. Then ∇ v∈ H_0(,0;Ω) and q∈ H(,0;Ω)∩ H_0(;Ω). By standard arguments it follows that ∇ v=0 in the distributional sense, and hence ∇ v∈ H(,0;Ω). This implies that ∇ v× n (with n the exterior unit normal vector field on ∂Ω) is well defined as an element from H^-1/2(∂Ω), <cit.>. Moreover, for all ϕ∈ C^∞(Ω,^3) one finds ⟨ (∇ v)× n,ϕ⟩ =∫_Ω⟨∇ v, ϕ⟩ -∫_Ω⟨(∇ v), ϕ⟩ . The second integral vanishes since ∇ v=0. Applying Gauss' Theorem to the first integral and taking into account that v|_∂Ω=0 implies that the first integral vanishes, as well. Hence, we finally obtain ∇ v× n=0 on ∂Ω in the sense of traces, <cit.>. Since q=p-∇ v and since ∇ v,p∈ H_0(,Ω) the assertion on q is immediate. The next embedding theorem is for instance proved in <cit.>: Let Ω⊂^3 be a bounded domain with a C^1,1-smooth boundary ∂Ω. Then H(;Ω)∩ H_0(;Ω)⊂ H^1(Ω), H_0(;Ω)∩ H(;Ω)⊂ H^1(Ω) and there exists a constant C>0 such that for every p∈ H(;Ω)∩ H_0(;Ω) or p∈ H_0(;Ω)∩ H(;Ω) we have p_H^1(Ω)≤ C(p_H(;Ω) + p_H(;Ω)) . A version of this result for Lipschitz domains is for instance available in <cit.>. For the previous embedding theorem there are also some versions with weights, and we cite here Theorem 2.2 from <cit.> with k=1 and ℓ=0. We assume that the weight function ε:Ω→^3× 3 for every x∈Ω is symmetric and positive definite, uniformly in x. Let Ω⊂^3 be a bounded domain with a C^2-smooth boundary and let ε∈ C^1(Ω,^3× 3) be a symmetric and positive definite weight function. Assume that p:Ω→^3 belongs to one of the following spaces: p∈ H_0(;Ω) and ε p∈ H(;Ω) or p∈ H(;Ω) and ε p∈ H_0(;Ω) . Then p∈ H^1(Ω) and there exists a constant C>0 (independent of p) such that p_H^1(Ω)≤ C(p_H(;(Ω) + (ε p)_L^2(Ω)) . Assuming higher regularity on the weight function ε and the smoothness of ∂Ω (i.e. ε∈ C^k(Ω,^3× 3) and ∂Ω∈ C^k+1), Theorem 2.2 from <cit.> guarantees a corresponding higher regularity of p. In the proof of Theorem <ref> we will decompose the microdistortion tensor P as P=∇ q + Q and apply Theorem <ref> to Q. 
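Purely as a numerical illustration of this splitting (and not of the proof, which relies on the boundary conditions encoded in H_0^1(Ω) and H(div,0;Ω)), the projection p = ∇v + q can be realized with Fourier transforms on a periodic box; all names below are illustrative:

```python
import numpy as np

def helmholtz_split_periodic(p, box=1.0):
    """Split a real vector field p of shape (3, n, n, n) on a periodic box into
    p = grad_v + Q with div(Q) = 0, via Fourier projection.  Illustration only:
    the decomposition used in the paper lives on a bounded domain with
    H_0^1 / H(div,0) boundary conditions, which a periodic FFT cannot see.
    For tensor fields the same projection is applied row-wise."""
    n = p.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    K = np.stack(np.meshgrid(k, k, k, indexing="ij"))   # wave vectors, shape (3,n,n,n)
    k2 = np.sum(K**2, axis=0)
    k2[0, 0, 0] = 1.0                                   # avoid 0/0 at k = 0

    p_hat = np.fft.fftn(p, axes=(1, 2, 3))
    grad_hat = K * (np.sum(K * p_hat, axis=0) / k2)     # projection onto gradients
    grad_hat[:, 0, 0, 0] = 0.0                          # mean value stays in Q
    Q_hat = p_hat - grad_hat                            # divergence-free remainder

    grad_v = np.real(np.fft.ifftn(grad_hat, axes=(1, 2, 3)))
    Q = np.real(np.fft.ifftn(Q_hat, axes=(1, 2, 3)))
    return grad_v, Q
```

On the bounded domain of the theorem, the gradient part is instead characterized by a weak Poisson problem for v with homogeneous Dirichlet data, which is why v lies in H_0^1(Ω).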
The regularity for the displacement field u and the vector q then is a consequence of an elliptic regularity result that we discuss next. Let us consider the following auxiliary bilinear form: for (u, q)∈ H^1_0(Ω;^3+3) and (u, v)∈ H^1_0(Ω;^3+3) we define ã([ u; q ],[ v; w ]) = ∫_Ω⟨(x)[ ∇ u; ∇ q ] , [ ∇ v; ∇ w ]⟩ , where :Ω→(((3),(3)))^4 is defined by the following formula (x)=[ (x) -(x); - (x) (x) + (x) ] . Let Assumption A_(ii) be satisfied. Then, there exists a positive constant C_ such that for all x∈Ω and σ=(σ_1,σ_2)∈(3)×(3) we have: ⟨(x)σ,σ⟩≥ C_|σ|^2 . Fix x∈Ω and σ=(σ_1,σ_2)∈(3)×(3), then ⟨(x)σ,σ⟩= ⟨(σ_1-σ_2),σ_1-σ_2⟩+⟨(σ_2),σ_2⟩ . Assumption A_(ii) implies ⟨(x)σ,σ⟩ ≥ C_e|σ_1-σ_2|^2 +C_micro|σ_2|^2≥min{C_e,C_micro}(|σ_1-σ_2|^2+|σ_2|^2) ≥2/9min{C_e,C_micro}(|σ_1|^2+|σ_2|^2)=2/9min{C_e,C_micro}|σ|^2 and the proof is completed. Now, for all (u, q)∈ H^1_0(Ω;^3+3) we have ã([ u; q ],[ u; q ]) ≥ C_(∇ u^2_L^2(Ω)+∇ q^2_L^2(Ω)) ≥ C_ C_K(u^2_H^1_0(Ω)+q^2_H^1_0(Ω)) , where the constant C_K is a constant resulting from the standard Korn's inequality <cit.>. This shows that the bilinear form (<ref>) is coercive on the space H^1_0(Ω;^3+3). The form (<ref>) defines the following auxiliary problem: for (F_1,F_2)∈ L^2(Ω;^3+3) find (u, q)∈ H^1_0(Ω;^3+3) with ã(([ u; q ]),([ v; w ]))= ∫_Ω⟨([ F_1; F_2 ]),([ v; w ]) ⟩ for all (v, w)∈ H^1_0(Ω;^3+3). Modifying the results concerning the regularity of elliptic partial differential equations, it would be possible to obtain the existence of a solution for system (<ref>) with the regularity (u, q)∈ H^1_0(Ω;^3+3)∩ H^2(Ω;^3+3) (see for example <cit.>). However, system (<ref>) fits perfectly into the class considered in <cit.>. There, the global regularity of weak solutions to a quasilinear elliptic system with a rank-one-monotone nonlinearity was investigated. As an application of the result from <cit.> we obtain Let Ω⊂^3 be a bounded domain with a C^1,1-smooth boundary ∂Ω. Let furthermore (F_1,F_2)∈ L^2(Ω;^3+3) and ,∈ C^0,1(Ω; (^3× 3;^3× 3)). Then the problem (<ref>) has a unique solution (u, q)∈ H^1_0(Ω;^3+3)∩ H^2(Ω;^3+3). Coercivity (<ref>) of the bilinear form (<ref>) and the Lax-Milgram Theorem imply the existence of exactly one weak solution (u, q)∈ H^1_0(Ω;^3+3). In order to prove higher regularity of this solution we will use Theorem 5.2 of <cit.>. Let us introduce 𝔹:Ω×^(3+3)× 3→^(3+3)× 3 as the unique x-dependent linear mapping satisfying ⟨𝔹(x, ([ A_1; A_2 ]) ), ([ B_1; B_2 ])⟩ =⟨(x) ([ A_1; A_2 ]) , ([ B_1; B_2 ]) ⟩ for all A_1,A_2,B_1,B_2∈^3× 3 and x∈Ω. Then for every x∈Ω, A=(A_1,A_2)∈^(3+3)× 3, ξ=(ξ_1,ξ_2)∈^3+3 and η∈^3 we have thanks to the positive definiteness of ⟨ 𝔹(x,([ A_1; A_2 ]) +ξ⊗η) -𝔹(x, ([ A_1; A_2 ]) ),ξ⊗η ⟩ ≥ ≥ C_((ξ_1⊗η)^2 + (ξ_2⊗η)^2) . Since (ξ_i⊗η)^2=1/2(ξ_i^2η^2 + ⟨ξ_i,η⟩^2), this ultimately implies that 𝔹 is strongly rank-one monotone/satisfies the Legendre-Hadamard condition. Due to the Lipschitz continuity of and the remaining assumptions of Theorem 5.2 of <cit.> can easily be verified. Hence, <cit.> implies (u, q)∈ H^2(Ω;^3+3). § WEAK FORMULATION OF THE RELAXED MICROMORPHIC MODEL For u, v∈ H^1_0(Ω,^3) and P, W∈ H_0(;Ω) the following bilinear form is associated with the system (<ref>) a((u,P),(v,W)) =∫_Ω(⟨(∇ u-P),(∇ v-W)⟩ +⟨ P, W⟩ + ⟨ P, W⟩) x ≡∫_Ω⟨[ ∇ u; P ] , [ ∇ v; W ]⟩ + ⟨ P, W⟩) x where the tensor is defined in (<ref>). Here, homogeneous boundary conditions u|_∂Ω=0 and (P× n)|_∂Ω=0 are considered. 
Let Ω⊂^3 be a bounded domain with a Lipschitz boundary and assume that ,,∈ L^∞(Ω;(^3× 3;^3× 3)) comply with the symmetry and positivity properties formulated in (<ref>) - (<ref>). Then for every f∈ H^-1(Ω) and M∈(H_0(;Ω))^∗ there exists a unique pair (u,P)∈ H_0^1(Ω)× H_0(;Ω) such that ∀ (v,W)∈ H_0^1(Ω)× H_0(;Ω): a((u,P),(v,W))=∫_Ω⟨ f,v⟩ + ⟨ M,W⟩ x . For a bounded domain Ω⊂^3 with Lipschitz boundary ∂Ω the incompatible Korn's inequality <cit.> implies that there is a constant c̃>0 such that P^2_L^2(Ω)≤c̃ ( P^2_L^2(Ω)+ P^2_L^2(Ω)) for all P∈ H_0(;Ω). Positive definiteness of the tensors and entail a((u,P),(u,P))≥ C_(∇ u^2_L^2(Ω)+ P^2_L^2(Ω))+L_c P^2_L^2(Ω) . Hence, by (<ref>) and Korn's inequality the bilinear form (<ref>) is coercive on H_0^1(Ω)× H_0(;Ω) and the Lax-Milgram Theorem finishes the proof. Thanks to the Helmholtz decomposition, weak solutions can equivalently be characterized as follows: Let the assumptions of Theorem <ref> be satisfied, f∈ H^-1(Ω) and M∈(H_0(;Ω))^∗. Let furthermore (u,P)∈ H_0^1(Ω)× H_0(;Ω) and let (q,Q)∈ H_0^1(Ω;^3)× H(,0;Ω) such that P=∇ q + Q. Then the following (a) and (b) are equivalent: (a) (u,P) is a weak solution of (<ref>) in the sense of (<ref>). (b) For all (v,W)∈ H_0^1(Ω)× H_0(;Ω) the triple (u,q,Q) satisfies ∫_Ω⟨ [ ∇ u; ∇ q + Q ] , [ ∇ v; W ]⟩ + ∫_Ω⟨ Q, W⟩= ∫_Ω⟨ f,v⟩ + ⟨ M,W⟩ x . (a) (b): Assume that (u,P)∈ H_0^1(Ω)× H_0(;Ω) is a weak solution of (<ref>) in the sense of (<ref>). Then Theorem <ref> implies that for i=1,2,3 there exists unique q_i∈ H_0^1(Ω) and Q_i∈ H_0(,0;Ω) such that P_i=∇ q_i + Q_i, where P_i denotes the rows of the matrix P. Inserting P=(∇ q_1 + Q_1, ∇ q_2 + Q_2, ∇ q_3 + Q_3)^T into (<ref>) we obtain (<ref>), where Q=(Q_1, Q_2, Q_3)^T . (b) (a): Let for all (v,W)∈ H_0^1(Ω)× H_0(;Ω) the triple (u,q,Q)∈ H_0^1(Ω;^3)× H_0^1(Ω;^3)× H(,0;Ω) satisfy (<ref>). Then (<ref>) can be written in the form ∫_Ω⟨ [ ∇ u; ∇ q + Q ], [ ∇ v; W ]⟩ + ∫_Ω⟨ (∇ q+Q), W⟩ = ∫_Ω⟨ f,v⟩ + ⟨ M,W⟩ x and the function (u,∇ q+Q) satisfies (<ref>) for all (v,W)∈ H_0^1(Ω)× H_0(;Ω). Uniqueness of a weak solution of the problem (<ref>) implies that P=∇ q+Q. § GLOBAL REGULARITY ON SMOOTH DOMAINS The aim of this section is to prove the following regularity theorem Let Ω⊂^3 be a bounded domain with a C^1,1-smooth boundary. Moreover, in addition to the assumptions of Theorem <ref> let ,,∈ C^0,1(Ω; (^3× 3;^3× 3)). Finally, we assume that f∈ L^2(Ω) and M∈ H(;Ω). Then for every weak solution (u,P)∈ H_0^1(Ω)× H_0(;Ω) we have u∈ H^2(Ω) , P∈ H^1(Ω) , P∈ H(;Ω) and there exists a constant C>0 (independent of f and M) such that u_H^2(Ω) + P_H^1(Ω) + 𝕃_c P_H(;Ω)≤ C(f_L^2(Ω) + M_H(;Ω)) . The proof relies on the Helmholtz decomposition of P, the embedding Theorem <ref> and Theorem <ref> about the global regularity for the auxiliary problem (<ref>). Let (u,P)∈ H_0^1(Ω)× H_0(;Ω) satisfy (<ref>) with f∈ L^2(Ω) and M∈ H(;Ω). We first show that (u,P)∈ H^2(Ω)× H^1(Ω). Let P=∇ q + Q, where q∈ H_0^1(Ω;^3) and Q∈ H(,0;Ω) are given according to the Helmholtz decomposition. By Proposition <ref> it follows that Q∈ H(,0;Ω)∩ H_0(;Ω) and thanks to the assumed regularity of ∂Ω, Theorem <ref> implies that Q∈ H^1(Ω). Next, choosing W=∇ w for w∈ C_0^∞(Ω;^d), the weak form (<ref>) in combination with a density argument implies that for all v,w∈ H_0^1(Ω) we have ∫_Ω⟨ [ ∇ u; ∇ q ] , [ ∇ v; ∇ w ]⟩ = ∫_Ω⟨ Q,∇ v⟩ -∫_Ω⟨( +) Q,∇ w⟩ +∫_Ω⟨ f, v⟩ + ⟨ M,∇ w⟩ . 
Since Q∈ H^1(Ω) and M∈ H(;Ω), by partial integration the right hand side of (<ref>) can be rewritten as ∫_Ω⟨ F_1, v⟩ + ⟨ F_2, w⟩ with functions F_1,F_2∈ L^2(Ω). Theorem <ref> implies that u,q∈ H^2(Ω) and hence P=∇ q +Q∈ H^1(Ω). Let us next choose v=0 and W∈ C_0^∞(Ω) in (<ref>). Rearranging the terms we find that ∫_Ω⟨ P, W⟩ = ∫_Ω⟨ M,W⟩ + ∫_Ω⟨∇ u - ( + ) P , W⟩ which implies that ( P)∈ L^2(Ω) and P∈ H(;Ω). If we additionally assume that 𝕃_c has a block-diagonal structure, we may also achieve P∈ H^1(Ω) by applying the weighted embedding Theorem <ref>. In addition to the assumptions of Theorem <ref> let 𝕃_c∈ C^1(Ω;(^3× 3,^3× 3)) be of block diagonal structure, meaning that there exist 𝕃_i∈ C^1(Ω;^3× 3), 1≤ i≤ 3, such that for every W∈^3× 3 we have (𝕃_c W)_i-th row= 𝕃_i (W_i-th row). Then 𝕃_c P∈ H^1(Ω) and P∈ H^1(Ω). We focus on the i-th row P^i of P. Let ε=(𝕃_i)^-1. Then ε∈ C^1(Ω;^3× 3) and ε(x) is symmetric and uniformly positive definite with respect to x∈Ω. Clearly, ε𝕃_i P^i= P^i ∈ H(,0;Ω). Moreover, for the normal trace we find: for every ϕ∈ C^∞(Ω;) ⟨ (ε𝕃_i P^i)· n,ϕ⟩_∂Ω = ⟨ ( P^i) · n,ϕ⟩_∂Ω =∫_Ω⟨ P^i, ∇ϕ⟩ + ∫_Ω⟨ϕ, ( P^i)⟩ =⟨ P^i× n,ϕ⟩_∂Ω + ∫_Ω⟨ P^i,∇ϕ⟩ + ∫_Ω⟨ϕ, ( P^i)⟩ , where we applied the corresponding Green's formulae <cit.>. Since all the terms in the last line are zero, we obtain 𝕃_i P^i∈ H(;Ω) and ε𝕃_i P^i ∈ H_0(,0;Ω) . The weighted embedding Theorem <ref> implies 𝕃_i P^i∈ H^1(Ω), and since ε=𝕃_i^-1 is a multiplier on H^1(Ω), we finally obtain P^i∈ H^1(Ω). The previous result may be applied to the simple uni-constant isotropic curvature case L^2_c P^2. It is clear that the same higher regularity result can be established for the linear Cosserat model <cit.>. Open Problem: It would be interesting to establish higher regularity also for m=𝕃_c P:=𝕃_c P with 𝕃_c positive definite on symmetric arguments. However, the simple extension of the present argument fails since our argument relies on the a-priori information that P belongs to H(;Ω). For the more general model involving 𝕃_c P we have weak solutions in H(;Ω) only, meaning that it is not clear whether, in this case, P∈ L^2(Ω). § GLOBAL REGULARITY FOR A GAUGE-INVARIANT INCOMPATIBLE ELASTICITY MODEL The method presented above for obtaining regularity of solutions for the static relaxed micromorphic model can be directly applied to the following gauge-invariant incompatible elasticity model <cit.> 0=-[ e] - e- e+M , where the unknown function is the non-symmetric incompatible elastic distortion e:Ω→^3×3 while M:Ω→^3× 3 is a given body moment tensor. The constitutive tensors , are positive definite fourth order tensors (fulfilling the Assumption A) while :(3)→(3) is positive semi-definite. The system (<ref>) is considered with homogeneous tangential boundary conditions, i.e. e_i(x)× n(x) =0 for x∈∂Ω , where × denotes the vector product, n is the unit outward normal vector at the surface ∂Ω, e_i (i=1,2,3) are the rows of the tensor e. Problem (<ref>) generalises the time-harmonic Maxwell-type eigenvalue problem <cit.> from the vectorial to the tensorial setting. On the other hand, equation (<ref>) corresponds to the second equation of (<ref>) upon setting ≡ 0, assuming ≡ 0, identifying the elastic distortion e with e=∇ u-P and observing that - P=(-P)=(∇ u- P)= e . Smooth solutions of (<ref>) satisfy the balance of linear momentum equation ( e+ e)= M . Gauge-invariance means here that the solution e is invariant under ∇ u →∇ u+∇τ , P→ P+∇τ , an invariance which is only possible since ≡ 0 (> 0 breaks the gauge-invariance).
Here τ is a space-dependent (or local) translation vector. The elastic energy can then be expressed as ∫_Ω⟨ e, e⟩ +⟨ e, e⟩-⟨ M,e⟩ x in which the first term accounts for the energy due to elastic distortion, the second term takes into account the energy due to incompatibility in the presence of dislocations and the last term is representing the forcing. For e, v∈ H_0(;Ω) the following bilinear form is associated with the system (<ref>) b(e,v)=∫_Ω⟨ e, v⟩+⟨ e, v⟩+⟨ e, v⟩ x . Let us assume that M∈ H(;Ω). Coercivity of the bilinear form (<ref>) (the generalized incompatible Korn's inequality (<ref>)) and the Lax-Milgram Theorem imply the existence of exactly one weak solution e∈ H_0(;Ω) of the system (<ref>). The Helmholtz decomposition yields that e=∇ q + Q, where q∈ H_0^1(Ω;^3) and Q∈ H(,0;Ω). By Proposition <ref> it follows that Q∈ H(,0;Ω)∩ H_0(;Ω) and Theorem <ref> implies that Q∈ H^1(Ω). Inserting the decomposed form of the tensor e into the weak form of system (<ref>), we obtain ∫_Ω⟨ (∇ q + Q), W⟩+∫_Ω⟨(∇ q + Q), W⟩ + ∫_Ω⟨ (∇ q+Q), W⟩ = ∫_Ω⟨ M,W⟩ x for all W∈ H_0(;Ω). Again, choosing W=∇ w for w∈ C_0^∞(Ω;^3), the weak form (<ref>) in combination with a density argument implies that for all w∈ H_0^1(Ω) we have ∫_Ω⟨ ∇ q, ∇ w⟩+∫_Ω⟨∇ q, ∇ w⟩ = ∫_Ω⟨ M,∇ w⟩ x-∫_Ω⟨ Q, ∇ w⟩-∫_Ω⟨ Q, ∇ w⟩ Since Q∈ H^1(Ω) and M∈ H(;Ω), by partial integration the right hand side of (<ref>) can be rewritten as ∫_Ω⟨ F_2, w⟩ with functions F_2∈ L^2(Ω). Note that the auxiliary problem (<ref>) obtained this time is a problem from standard linear elasticity. Thus, in this case we do not need to go through Lemma <ref>: just from the standard theory of regularity in the linear elasticity we obtain that q∈ H^2(Ω) (<cit.>). The result is that on smooth domains and with smooth coefficients the weak solution of (<ref>) is more regular and satisfies e∈ H^1(Ω) , e∈ H(;Ω) , ( e) ∈ H^1(Ω) , where the last regularity in (<ref>) follows from equation (<ref>). Moreover, if has a special block diagonal structure, then we have e∈ H^1(Ω) , e ∈ H^1(Ω) . §.§.§ Acknowledgment The authors wish to thank Dirk Pauly (TU Dresden) for inspiring discussions on this subject dating to 2013. Patrizio Neff and Dorothee Knees acknowledge support in the framework of the Priority Programme SPP 2256 "Variational Methods for Predicting Complex Phenomena in Engineering Structures and Materials" funded by the Deutsche Forschungsgemeinschaft (DFG, German research foundation): P. Neff within the project "A variational scale-dependent transition scheme - from Cauchy elasticity to the relaxed micromorphic continuum" (Project-ID 440935806), D. Knees within the project "Rate-independent systems in solid mechanics: physical properties, mathematical analysis, efficient numerical algorithms" (Project-ID 441222077). D.K. and P.N. enjoyed the welcoming athmosphere at the Hausdorff Research Institute for Mathematics, Bonn, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2047/1 – 390685813. plain 10 adamssobolev R. Adams and J.F. Fournier. Sobolev Spaces, volume 140 of Pure and Applied Mathematics. Elsevier/Academic Press, Amsterdam, second edition, 2003. exprolingalberdi R. Alberdi, J. Robbins, T. Walsh, and R. Dingreville. Exploring wave propagation in heterogeneous metastructures using the relaxed micromorphic model. Journal of the Mechanics and Physics of Solids, 155, 10 2021. Alberi_maxwel G. S. Alberti and Y. Capdeboscq. 
Elliptic regularity theory applied to time harmonic anisotropic Maxwell's equations with less than Lipschitz complex coefficients. SIAM Journal on Mathematical Analysis, 46(1):998–1016, 2014. BaPaSc16 S. Bauer, D. Pauly, and M. Schomburg. The Maxwell compactness property in bounded weak Lipschitz domains with mixed boundary conditions. SIAM Journal on Mathematical Analysis, 48(4):2912–2943, 2016. Costabel90 M. Costabel. A remark on the regularity of solutions of Maxwell's equations on Lipschitz domains. Mathematical Methods in the Applied Sciences, 12(4):365–368, 1990. Neff_unfol_22 F. Demore, G. Rizzi, M. Collet, P. Neff, and A. Madeo. Unfolding engineering metamaterials design: Relaxed micromorphic modeling of large-scale acoustic meta-structures. Journal of the Mechanics and Physics of Solids, 168:104995, 2022. Ghibadyn I.-D. Ghiba, P. Neff, A. Madeo, L. Placidi, and G. Rosi. The relaxed linear micromorphic continuum: existence, uniqueness and continuous dependence in dynamics. Mathematics and Mechanics of Solids, 20(10):1171–1197, 2015. Ghiba2022CosseratME I.-D. Ghiba, G. Rizzi, A. Madeo, and P. Neff. Cosserat micropolar elasticity: classical Eringen vs. dislocation form. Journal of Mechanics of Materials and Structures, 18(1):93–123, 2023. GiTr86 D. Gilbarg and N. S. Trudinger. Elliptic Partial Differential Equations of Second Order. Class. Math. Berlin: Springer, reprint of the 1998 ed. edition, 2001. Giraultbook V. Girault and P.-A. Raviart. Finite Element Methods for Navier-Stokes Equations, volume 5 of Springer Series in Computational Mathematics. Springer-Verlag, Berlin, 1986. Theory and Algorithms. OptimalKMSIneq F. Gmeineder, P. Lewintan, and P. Neff. Optimal incompatible Korn-Maxwell-Sobolev inequalities in all dimensions. Calculus of Variations and Partial Differential Equations, 62(182), 2023. Grisvard P. Grisvard. Elliptic Problems in Nonsmooth Domains, volume 24 of Monographs and Studies in Mathematics. Pitman (Advanced Publishing Program), Boston, MA, 1985. KON23 D. Knees, S. Owczarek, and P. Neff. A local regularity result for the relaxed micromorphic model based on inner variations. Journal of Mathematical Analysis and Applications, 519(2):126806, 2023. Lazarqauge M. Lazar and C. Anastassiadis. The gauge theory of dislocations: Static solutions of screw and edge dislocations. Philosophical Magazine, 89(3):199–231, 2009. Lewintan_Korn2020 P. Lewintan, S. Müller, and P. Neff. Korn inequalities for incompatible tensor fields in three space dimensions with conformally invariant dislocation energy. Calculus of Variations and Partial Differential Equations, 60(150), 2021. lewintan_neff_2021 P. Lewintan and P. Neff. L^p-trace-free generalized Korn inequalities for incompatible tensor fields in three space dimensions. Proceedings of the Royal Society of Edinburgh: Section A Mathematics, pages 1–32, 2021. NeffLP P. Lewintan and P. Neff. Nečas-Lions lemma revisited: An L^p-version of the generalized Korn inequality for incompatible tensor fields. Mathematical Methods in the Applied Sciences, 44(14):11392–11403, 2021. MADEO2016 A. Madeo, P. Neff, I.-D. Ghiba, and G. Rosi. Reflection and transmission of elastic waves in non-local band-gap metamaterials: A comprehensive study via the relaxed micromorphic model. Journal of the Mechanics and Physics of Solids, 95:441–479, 2016. neff_2006 P. Neff. On Korn's first inequality with non-constant coefficients. Proceedings of the Royal Society of Edinburgh Section A: Mathematics, 132(1):221–243, 2002. Neffhomog P. Neff, B. Eidel, M.V. 
d'Agostino, and A. Madeo. Identification of scale-independent material parameters in the relaxed micromorphic model through model-adapted first order homogenization. Journal of Elasticity, 139:269–298, 2020. Lazar_Neff_gauge P. Neff, I. D. Ghiba, M. Lazar, and A. Madeo. The relaxed linear micromorphic continuum: well-posedness of the static problem and relations to the gauge theory of dislocations. The Quarterly Journal of Mechanics and Applied Mathematics, 68(1):53–84, 01 2015. NGperspective P. Neff, I. D. Ghiba, A. Madeo, L. Placidi, and G. Rosi. A unifying perspective: the relaxed linear micromorphic continuum. Continuum Mechanics and Thermodynamics, 26(5):639–681, 2014. NK08 P. Neff and D. Knees. Regularity up to the boundary for nonlinear elliptic systems arising in time-incremental infinitesimal elasto-plasticity. SIAM Journal on Mathematical Analysis, 40(1):21–43, 2008. neff2012poincare P. Neff, D. Pauly, and K.-J. Witsch. Poincaré meets Korn via Maxwell: extending Korn's first inequality to incompatible tensor fields. Journal of Differential Equations, 258(4):1267–1302, 2015. Owczghibaneffexist S. Owczarek, I.-D. Ghiba, and P. Neff. Existence results for non-homogeneous boundary conditions in the relaxed micromorphic model. Mathematical Methods in the Applied Sciences, 44(2):2040–2049, 2021. OGNdynamicreg S. Owczarek, I.-D. Ghiba, and P. Neff. A note on local higher regularity in the dynamic linear relaxed micromorphic model. Mathematical Methods in the Applied Sciences, 44(18):13855–13865, 2021. Rizzi_boundary_2022 G. Rizzi, M. V. d'Agostino, P. Neff, and A. Madeo. Boundary and interface conditions in the relaxed micromorphic model: Exploring finite-size metastructures for elastic wave control. Mathematics and Mechanics of Solids, 27(6):1053–1068, 2022. rizzi2021torsion G. Rizzi, G. Hütter, H. Khan, I.-D. Ghiba, A. Madeo, and P. Neff. Analytical solution of the cylindrical torsion problem for the relaxed micromorphic continuum and other generalized continua (including full derivations). Mathematics and Mechanics of Solids, 27(3):507–553, 2022. homo_1 M. Sarhil, L. Scheunemann, J. Schröder, and P. Neff. Size-effects of metamaterial beams subjected to pure bending: on boundary conditions and parameter identification in the relaxed micromorphic model. Computational Mechanics, 2023. schroder2021lagrange J. Schröder, M. Sarhil, L. Scheunemann, and P. Neff. Lagrange and H(,ℬ) based Finite Element formulations for the relaxed micromorphic model. Computational Mechanics, 70:1309–1333, 2022. SKY2022_primal A. Sky, M. Neunteufel, I. Muench, J. Schöberl, and P. Neff. Primal and mixed finite element formulations for the relaxed micromorphic model. Computer Methods in Applied Mechanics and Engineering, 399:115298, 2022. sky2021hybrid A. Sky, M. Neunteufel, I. Münch, J. Schöberl, and P. Neff. A hybrid H^1×H() finite element formulation for a relaxed micromorphic continuum model of antiplane shear. Computational Mechanics, 68:1–24, 2021. valent T. Valent. Boundary Value Problems of Finite Elasticity, volume 31 of Springer Tracts in Natural Philosophy. Springer-Verlag, New York, 1988. Weber81 C. Weber and P. Werner. Regularity theorems for Maxwell's equations. Mathematical Methods in the Applied Sciences, 3(1):523–536, 1981. YIN_gauge H.-M. Yin. Regularity of weak solution to Maxwell's equations and applications to microwave heating. Journal of Differential Equations, 200(1):137–161, 2004.
http://arxiv.org/abs/2307.01256v1
20230703180001
Can cuspy dark matter dominated halos hold cored stellar mass distributions?
[ "Jorge Sanchez Almeida", "Angel R. Plastino", "Ignacio Trujillo" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.CO" ]
0000-0003-1123-6003]Jorge Sánchez Almeida Instituto de Astrofísica de Canarias, La Laguna, Tenerife, E-38200, Spain Departamento de Astrofísica, Universidad de La Laguna 0000-0001-5848-0770]Angel R. Plastino CeBio y Departamento de Ciencias Básicas, Universidad Nacional del Noroeste de la Prov. de Buenos Aires, UNNOBA, CONICET, Roque Saenz Peña 456, Junin, Argentina 0000-0001-8647-2874]Ignacio Trujillo Instituto de Astrofísica de Canarias, La Laguna, Tenerife, E-38200, Spain Departamento de Astrofísica, Universidad de La Laguna According to the current concordance cosmological model, the dark matter (DM) particles are collision-less and produce self-gravitating structures with a central cusp which, generally, is not observed. The observed density tends to a central plateau or core, explained within the cosmological model through the gravitational feedback of baryons on DM. This mechanism becomes inefficient when decreasing the galaxy stellar mass so that in the low-mass regime (M_⋆≪ 10^6 M_⊙) the energy provided by the baryons is insufficient to modify cusps into cores. Thus, if cores exist in these galaxies they have to reflect departures from the collision-less nature of DM. Measuring the DM mass distribution in these faint galaxies is extremely challenging, however, their stellar mass distribution can be characterized through deep photometry. Here we provide a way of using only the stellar mass distribution to constrain the underlying DM distribution. The so-called Eddington inversion method allows us to discard pairs of stellar distributions and DM potentials requiring (unphysical) negative distribution functions in the phase space. In particular, cored stellar density profiles are incompatible with the Navarro, Frenk, and White (NFW) potential expected from collision-less DM if the velocity distribution is isotropic and the system spherically symmetric. Through a case-by-case analysis, we are able to relax these assumptions to consider anisotropic velocity distributions and systems which do not have exact cores. In general, stellar distributions with radially biased orbits are difficult to reconcile with NFW-like potentials, and cores in the baryon distribution tend to require cores in the DM distribution. § INTRODUCTION The current concordance cosmological model assumes the dark matter particles to be cold and collision-less <cit.>. Thus, the cold dark matter (CDM) particles interact with themselves and with the baryons through gravitational forces only. Given the initial conditions set by the cosmological model, the CDM particles evolve under their own gravity to collapse into halos with cusps <cit.>, i.e., where the density is represented by the iconic NFW profile (Navarro, Frenk, and White ) that grows boundlessly when approaching the center of the gravitational potential. This prediction contrasts with the fact that the observed dark matter (DM) haloes often show cores, i.e., their density tend to be constant as one approaches the center <cit.>. This apparent contradiction is solved within the current CDM paradigm because the baryon dynamics modifies the global gravitational potential also affecting the DM distribution and transforming the cusps into cores <cit.>. This mechanism of feedback of baryons onto DM becomes inefficient when decreasing the galaxy mass, because the halo-to-stellar mass ratio increases with decreasing mass <cit.>, reaching a point where the energy provided by star formation is simply not enough to modify the cusp of the CDM haloes <cit.>. 
The larger stellar mass unable to modify the inner slope of the DM profile is somewhat model dependent <cit.>, but it roughly corresponds to stellar masses M_⋆ < 10^6 M_⊙ or halo masses M_h < 10^10 M_⊙ <cit.>. Thus, if galaxies with M_h ≪ 10^10 M_⊙ show DM cores, they are not due to baryon feedback but have to reflect the nature of DM: whether it is fuzzy, self-interacting, warm, or else <cit.>. At these low masses, discerning observationally whether the DM halos have cores is extremely challenging, if not impossible. DM measurements require high spectral resolution spectroscopy to infer dynamical masses (whether optical, infrared, or radio wavelengths are used). The light is spread into small wavelength bins and so getting high signal-to-noise ratios is expensive observationally. On the contrary, stellar mass determinations depend on broad-band photometry, which is orders of magnitude faster than spectroscopy. Thus, measuring the baryon mass distribution in these low-mass objects is doable <cit.> and, interestingly, low-mass galaxies tend to show cores in their stellar mass distribution <cit.>. Since low-mass galaxies are often extremely DM dominated systems, one could naively think that the cores observed in stars just reflect the underlying DM mass distribution. If this conjecture turned out to be correct, it would provide a unique channel to study DM in low-mass galaxies, in a regime particularly informative to reveal the nature of DM <cit.>. Thus, the question arises as to whether the cores in the stellar mass distribution of DM dominated systems trace or not cores in the DM distribution. The purpose of this work is bringing up the question in the title to try to give an answer in fairly broad terms. Thus, we show to be unlikely (although not impossible) that DM dominated systems with a central cusp have a stellar profile with a central core. Therefore, our work provides a gateway to investigate the inner shape of the DM distribution in ultra-low mass galaxies using only their starlight. We address the question using the so-called Eddington inversion method <cit.>. Simply put, it provides the distribution function (DF) in the phase space f corresponding to a stellar mass density distribution ρ immersed in a gravitational potential Φ. Given two arbitrary ρ and Φ, there is no guarantee that f > 0 everywhere, which is the absolutely minimum requirement for ρ and Φ to be physically consistent. In this paper, we study the f resulting from different combinations of ρ (tracing the stars) and Φ (dictated only by the DM in ultra-low mass galaxies). We will show that unless the potential Φ is created by a mass distribution with a core, cored ρs often give nonphysical f < 0. The computations in the paper neglect the contribution of the baryons to the overall potential, which we regard as a reasonable working hypothesis for the galaxies of interest. Thus, the gas in the ultra-low mass galaxies is not treated explicitly in the paper, but should play only a minor role in the analysis since it interacts with the stars only through its contribution to the gravitational potential. Therefore, as soon as the gas mass is much smaller than the total mass of the system, its presence can be neglected. The paper is organized as follows: Sect. <ref> puts forward the Eddington inversion method together with the main equations used in our analysis. The more lengthly derivations are separated in Appendixes <ref> to <ref>. Unphysical pairs ρ – Φ yielding f < 0 somewhere are analyzed in Sect. <ref>. 
Among which one finds the prototypical cored ρ immersed a NFW potential with isotropic velocities. Examples and particular cases are worked out in Sect. <ref> to conclude that most often the cores in baryons trace DM cores in DM dominated self-gravitating systems. These results and their practical application to real galaxies are analyzed in Sect. <ref>, including the effect of relaxing assumptions like spherical symmetry. Table <ref> lists consistent and inconsistent combinations of ρ and Φ resulting from our analysis. In what follows, we use the terms baryons, stars, or particles indistinctly to refer to the component of the gravitationally bound system that provides the density ρ. Moreover, in the context of this paper, the term low-mass galaxy is used to describe galaxies where the potential is approximately set by the DM because the gravity produced by the baryons can be neglected. § THE EDDINGTON INVERSION METHOD IN OUR CONTEXT This section provides a summary of the Eddington inversion method, and so, of the expressions used in Sects. <ref> and <ref> to study whether cored baryon density distributions happen to be inconsistent with the gravitational potential created by CDM alone. We closely follow the approach and terminology by <cit.>, but there are several alternative references on the subject <cit.>. The main assumptions made when using the Eddington inversion method are <cit.>: (1) the gravitational potential is smooth, (2) the trace particles (e.g., stars) have lifetimes larger than the crossing time, (3) the trace particles are collision-less, (4) the system is spherically symmetric, and (5) the system is described by a steady-state DF in the phase space. We take these assumptions as working hypotheses, which may not be fulfilled by particular objects but which may be good enough to describe large populations. For example, after a major merger the steady-state may require a few Gyr to be recovered <cit.>, however, most galaxies only have a few such events during their lifetimes concentrated early on, therefore, many galaxies should be in a quasi-steady state today. Based on these premises, we first consider particle systems with an isotropic velocity distribution. Sect. <ref> explains how to use the Eddington inversion method to recover the phase space DF from the three first spatial derivatives of the baryon density and of the gravitational potential. The general expressions are particularized to specific mass distributions and gravitational potentials in Appendixes <ref> and <ref>. Section <ref> relaxes the assumption on the velocity isotropy, working out the expression of the DF for the Osipkov-Merritt velocity anisotropy model. Other anisotropic velocity models are considered too. Even if contrived from a physical stand point, any gravitational potential is consistent with any density if the particles are arranged in perfectly circular orbits. The mixing model in Sect. <ref> describes the linear superposition of such a DF with circular orbits plus another DF with an isotropic velocity distribution. Finally, Sect. <ref> treats the case of constant velocity anisotropy. These physical systems and the corresponding DFs were chosen for simplicity, because they provide clear-cut constraints on the potential with relatively simple arguments. There are extensions of the Eddington inversion method for other more general DFs that in principle could be used for similar diagnostics <cit.>, but their study remains to be carried out, a task that requires specific follow-up work (Sect. 
<ref>). §.§ Systems with isotropic velocity distribution For spherically symmetric systems of particles with isotropic velocity distribution, the phase-space DF f(ϵ) depends only on the particle energy ϵ. Then, the space density ρ(r) turns out to be <cit.>, ρ(r) = 4 π√(2) ∫_0^Ψ(r) f(ϵ) √(Ψ(r) - ϵ) dϵ. Here ϵ = Ψ - 1/2 v^2 is the relative energy (per unit mass) of a particle, and Ψ(r) = Φ_0 - Φ(r) is the relative potential energy, where Φ(r) is the gravitational potential energy and Φ_0 is the gravitational potential energy evaluated at the edge of the system. For realistic systems, the relative potential Ψ is a monotonically decreasing function of the distance from the center r. Consequently, ρ can be regarded as a function of Ψ. Differentiating ρ with respect to Ψ, d ρ/d Ψ = 2π√(2) ∫_0^Ψ f(ϵ)/√(Ψ - ϵ) dϵ. Inverting this Abel integral leads to Eddington's celebrated equation (e.g., , Eq. [4.46]) for the phase-space DF f(ϵ) in terms of the spatial density ρ(r), f(ϵ) = 1/2 √(2)π^2 d/d ϵ ∫_0^ϵ dρ/dΨ dΨ/√(ϵ - Ψ). Integrating by parts twice, f(ϵ) = 1/√(2)π^2 [ 1/2 √(ϵ) (dρ/dΨ)_Ψ=0 + √(ϵ) (d^2ρ/dΨ^2)_Ψ=0 + ∫_0^ϵ d^3ρ/dΨ^3 √(ϵ - Ψ) dΨ]. The derivatives at the boundary, (dρ/dΨ)_Ψ=0 and (d^2ρ/dΨ^2)_Ψ=0, are in practice zero (see Appendix <ref>), therefore, f(ϵ) = 1/√(2)π^2 ∫_0^ϵ d^3ρ/dΨ^3 √(ϵ - Ψ) dΨ. To evaluate numerically the integral appearing in Eq. (<ref>), it is convenient to change the integration variable from Ψ to r, because only ρ(r) and Ψ(r) are known explicitly. To use r as integration variable, we need to express the derivatives of ρ with respect to Ψ in terms of the derivatives of ρ and Ψ with respect to r, i.e., dρ/dΨ = dρ/dr/dΨ/dr, d^2ρ/dΨ^2 = ( dΨ/dr)^-3 [ ( d^2ρ/dr^2) ( dΨ/dr) - ( dρ/dr) ( d^2Ψ/dr^2) ], d^3ρ/dΨ^3 = ( d^3ρ/dr^3) ( dΨ/dr)^-3 - 3( d^2ρ/dr^2)( d^2Ψ/dr^2) ( dΨ/dr)^-4 - ( dρ/dr) ( d^3Ψ/dr^3) ( dΨ/dr)^-4 + 3 ( dρ/dr ) ( d^2Ψ/dr^2)^2 ( dΨ/dr )^-5. We now change the integration variable in the integral appearing in Eq. (<ref>), ∫_0^ϵ d^3ρ/dΨ^3 √(ϵ - Ψ) dΨ = ∫_r_m^R dΨ/drd^3ρ/dΨ^3 √(ϵ - Ψ) dr = - ∫_R^r_m dΨ/drd^3ρ/dΨ^3 √(ϵ - Ψ) dr, where R is the value of r such that Ψ(R) = ϵ, and r_m is the maximum value of r, corresponding to the outer edge of the system. When the system has infinite spatial extent r_m→∞. Replacing the expression (<ref>) for d^3 ρ/ dΨ^3 into the integral (<ref>), ∫_0^ϵ d^3ρ/dΨ^3 √(ϵ - Ψ) dΨ = ∫_R^r_m [ - ( d^3ρ/dr^3) ( dΨ/dr)^-2 + 3( d^2ρ/dr^2)( d^2Ψ/dr^2) ( dΨ/dr)^-3. + . ( dρ/dr) ( d^3Ψ/dr^3) ( dΨ/dr)^-3 - 3 ( dρ/dr) ( d^2Ψ/dr^2)^2 ( dΨ/dr)^-4] √(ϵ - Ψ) dr. In short, according to Eqs. (<ref>) and (<ref>), the DF f(ϵ) corresponding to a density ρ(r) in a potential Φ(r) can be deduced from the first three derivatives of ρ(r) and Ψ(r) (=Φ_0-Φ). Appendix <ref> works them out in various practical cases involving polytropic ρ and NFW densities and potentials. Examples of ρ and f(ϵ) will be shown in Sect. <ref>, Figs. <ref> – <ref>, <ref>, and <ref>. §.§ Systems with anisotropic velocity distribution: the Osipkov-Merritt model The systems described in Sect. <ref> have DFs depending on the particle energy only, which holds when the dispersion of velocities in the three independent spatial directions is the same. In terms of the so-called anisotropy parameter, these systems have β(r) =0, with β (r) = 1 - σ_θ^2 + σ_ϕ^2/2σ_r^2, where σ_r is the radial velocity dispersion, and σ_θ and σ_ϕ are the tangential velocity dispersions in spherical coordinates. 
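In practice, β(r) can be estimated directly from this definition when particle positions and velocities are available (e.g., from an N-body snapshot). The following minimal sketch, in Python with numpy (the function name, the binning choice, and the assumption of negligible net rotation are ours, not part of the text), bins particles in spherical radius and evaluates the anisotropy profile:

```python
import numpy as np

def beta_profile(pos, vel, r_edges):
    """Velocity anisotropy beta = 1 - (sig_theta^2 + sig_phi^2)/(2 sig_r^2)
    in spherical radial bins; pos and vel are (N, 3) Cartesian arrays."""
    r = np.linalg.norm(pos, axis=1)
    r_hat = pos / r[:, None]
    v_r = np.einsum('ij,ij->i', vel, r_hat)           # radial velocity component
    v_t2 = np.einsum('ij,ij->i', vel, vel) - v_r**2   # v_theta^2 + v_phi^2
    beta = np.full(len(r_edges) - 1, np.nan)
    for i in range(len(beta)):
        m = (r >= r_edges[i]) & (r < r_edges[i + 1])
        if m.sum() > 10:                              # require a minimum bin population
            sig_r2 = np.var(v_r[m])
            # assuming no net streaming, <v_theta> = <v_phi> = 0, so that
            # sig_theta^2 + sig_phi^2 is approximately <v_theta^2 + v_phi^2>
            sig_t2 = np.mean(v_t2[m])
            beta[i] = 1.0 - sig_t2 / (2.0 * sig_r2)
    return beta
```

Isotropic bins return β≈0, radially biased bins β>0, and tangentially biased bins β<0, matching the limiting cases discussed below.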
The velocity isotropy requirement can be relaxed assuming f to depend not only on ϵ but also on the modulus of the angular momentum L. This is done in the Osipkov-Merritt model, which assumes a radial dependence of the anisotropy given by, β(r) = r^2/r^2 + r_b^2, where the anisotropy radious r_b sets the spatial scale of the changes in anisotropy. For r ≪ r_b the velocity distribution is isotropic, while for r ≫ r_b it is fully anisotropic with β→ 1 and the orbits becoming mostly radial (σ_θ^2 + σ_ϕ^2 ≪ 2σ_r^2). The assumption on β(r) in the Osipkov-Merritt model (Eq. [<ref>]) may look artificial driven by analytical simplicity, but it is not quite so. This type of radial dependence of the anisotropy parameter is obtained in cosmological numerical simulations of galaxy formation in the low mass end of the mass spectrum <cit.>. In these simulations, the stars tend to have isotropic orbits in the center of the potential that turn into radial orbits in the outskirts. In the same numerical simulations, the DM haloes are more isotropic all over (discussed further in Sect. <ref>). Following <cit.>, the phase-space DF of the Osipkov-Merritt model depends on the particle position and velocity through the quantity, Q = ϵ - L^2/2 r_b^2. It is convenient to define ρ_ OM(r) = (1 + r^2/r_b^2) ρ(r). The connection between the mass density and the phase space density can be expressed, in terms of ρ_ OM, in a way similar to the one corresponding to isotropic systems. Indeed, one has, ρ_ OM(r) = 4 π√(2) ∫_0^Ψ(r) f_ OM(Q) √(Ψ(r) - Q) dQ, d ρ_ OM/d Ψ = 2 √(2)π ∫_0^Ψ f_ OM(Q)/√(Ψ - Q) dQ, and f_ OM(Q) = 1/2 √(2)π^2 d/d Q ∫_0^Q dρ_ OM/dΨ dΨ/√(Q - Ψ). Therefore, expressions (<ref>) – (<ref>) also hold in this case replacing ρ with ρ_ OM and ϵ with Q. §.§ Systems with anisotropic velocity distribution: the mixing model A particle system having only circular orbits has always radial velocity equals zero, and so, σ_r=0, which leads to β = -∞ everywhere. A system with such an extreme velocity anisotropy can reproduce any pair potential – density with a DF, denoted here as f_c, guaranteed to be positive everywhere (see Appendix <ref>). A fairly general system with anisotropic velocity distribution can be constructed as a linear superposition of a system with circular orbits f_c and a system with isotropic velocity distribution f_i <cit.>, so that f( r, v) = μ f_i[Ψ (r)-v^2/2]+(1-μ) f_c[ r, v_r,v_θ,v_ϕ], with r and v the position and velocity in the 6D phase space and μ parameterizing the mixing fraction (0≤μ≤ 1). The symbols v_r,v_θ,v_ϕ represent the three coordinates of the velocity vector in a reference system where v_r is the component in the radial direction set by r. Equation (<ref>) explicitly shows that f_i depends on the velocity through v^2=v_r^2+v_θ^2+v_ϕ^2, a property used in Sect. <ref> to discuss the feasibility of DFs from the mixing model. In this case, the anisotropy parameter at a fixed radius is β=-1-μ/μ σ_θ^2+σ_ϕ^2|_c/σ_θ^2+σ_ϕ^2|_i≤ 0, where σ_θ and σ_ϕ stand for the velocity dispersion in the two tangential coordinates and |_i and |_c point out the isotropic and the circular velocity DF, respectively. The resulting orbits are between circularly biased and isotropic, but never radially biased. §.§ Systems with anisotropic velocity distribution: constant velocity anisotropy An extension of the above Eddington formalism deals with anisotropic velocity distributions of constant β. 
One starts from the DF, f(ϵ, L) = L^-2β f_ϵ(ϵ), which represents a leading order approximation for a wide class of DFs having the anisotropy parameter β constant <cit.>. In these systems the DF depends not only on the relative energy ϵ but also on the modulus of the angular momentum L. Under this assumption, the mass volume density can be written as (, Eq. [4.66]), r^2βρ(r) = κ_β ∫_0^Ψf_ϵ(ϵ)/(Ψ-ϵ)^β-1/2 dϵ, where κ_β is a positive numerical value independent of the radius r. Note that this equation is formally quite similar to Eq. (<ref>) provided β < 1/2, and so will be used in Sect. <ref> to point out the inconsistency of a large number of densities and potentials in way that parallels the isotropic case. This DF is also closely connected the so-called cusp slope-central anisotropy theorem by <cit.>, which links the inner slope of a density profile with the velocity anisotropy. It is examined in our context in Appendix <ref>. § POSITIVITY OF THE PHASE-SPACE DISTRIBUTION FUNCTION Positivity is the basic requirement for any physically sensible phase-space distribution. Given a relative potential function Ψ and a mass density profile ρ, it is not guaranteed that the phase-space distribution yielded by the Eddington inversion method is positive everywhere in the phase space. A negative distribution function implies that the assumptions made when applying Eddington's method are physically inconsistent: there is no phase-space DF that can reproduce the mass density ρ under the assumed potential Ψ. We use this idea here and in Sect. <ref> to analyze the consistency of several combinations of Ψ and ρ that may be of practical importance. Requiring f to be non-negative constrains the properties of the centers of low-mass galaxies in fairly general terms. Equation (<ref>) leads to a sufficient condition for the physical incompatibility between ρ(r) and Ψ(r) <cit.>. If, for a given Ψ (and, consequently, a given r), d ρ / d Ψ vanishes, then, it follows from Eq. (<ref>) that the phase-space density f(ϵ) yielded by the Eddington method must reach negative values. (For the integral [<ref>] to be zero with f(ϵ)≠0, f(ϵ) < 0 somewhere within the interval 0≤ϵ≤Ψ.) Thus, if d ρ / d Ψ=0 somewhere, then no isotropic distribution is compatible with the given ρ(r) and Ψ(r). Taking into account the relation, dρ/dΨ = dρ/dr/dΨ/dr, it follows that a cored mass density, defined as having lim_r→ 0dρ/dr = 0, is inconsistent with a NFW background potential, which has lim_r→ 0dΨ/dr =-V_c/2 r_s^2 0; see Eq. (<ref>), with the constants V_c and r_s defined in Appendix <ref>. The condition for f>0 derived above has a twist when the requirement of having a core (Eq. [<ref>]) is somewhat relaxed. Consider a power law baryon density profile, ρ∝ r^-α, with α=0 for a cored profile. Consider also a power law for the density profile generating the potential, ρ_p∝ r^-α_p, with α_p=1 for a NFW profile. The relative potential Ψ follows from ρ_p so that for α≠ 0 one finds[The relation is given explicitly in Sect. <ref>, Eqs. (<ref>) and (<ref>).], dρ/dr/dΨ/dr≃ A α r^-(2+α-α_p), with A> 0 provided 0< α_p < 3. (The case α=0 is controlled by a second term to be added to the RHS - right hand side - of Eq. [<ref>], and is treated in Appendix <ref>.) Thus the inconsistency between baryons and potential when r→ 0 disappears when α > 0 since (dρ/dr)/(dΨ/dr)→∞ when r→ 0. Note, however, that f may still be negative somewhere else with r≠0 even when α >0, as we will show to be often the case (Sect. <ref>). 
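This r→0 diagnostic is easy to evaluate explicitly. The short Python sketch below (illustrative parameters in arbitrary units; not the code used for the figures of this paper) computes dρ/dΨ = (dρ/dr)/(dΨ/dr) for a cored Schuster-Plummer ρ in a NFW relative potential, using the analytic derivatives given in Appendix <ref>. The ratio decreases linearly with r near the center, so dρ/dΨ → 0 and, through Eq. (<ref>), f must become negative somewhere:

```python
import numpy as np

# Illustrative parameters (arbitrary units): a cored stellar profile (rho0, r0)
# inside a NFW relative potential (Vc, rs).
rho0, r0 = 1.0, 1.0
Vc, rs = 1.0, 10.0

def drho_dr(r):   # Schuster-Plummer density derivative (Appendix A)
    return -5.0 * rho0 * r / r0**2 * (1.0 + (r / r0)**2) ** (-3.5)

def dpsi_dr(r):   # derivative of the NFW relative potential (Appendix A)
    return Vc / r**2 * (r / (r + rs) - np.log1p(r / rs))

r = np.logspace(-4, 0, 5)
print(drho_dr(r) / dpsi_dr(r))   # = d(rho)/d(Psi); tends to 0 as r -> 0
```

Replacing the cored profile with a weak cusp, ρ ∝ r^-α with α>0, removes the vanishing of the ratio at r=0, in agreement with the discussion above.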
Note also that all profiles with α < 0 (i.e., density decreasing toward the center) are discarded for any potential since the derivative Eq. (<ref>) is either zero or negative when r→ 0. The above results hold for systems where the velocity anisotropy is zero, however, they can be extended to others more general anisotropic systems. The Osipkov-Merritt model, which has an anisotropy parameter given by Eq. (<ref>), follows a relation for the DF (Eq. [<ref>]) formally identical to Eq. (<ref>). Provided the density profile has a core (i.e., provided it follows Eq. [<ref>]), lim_r→ 0dρ_OM/dr=lim_r→ 0 (2rρ/r_b^2+[1+r^2/r_b^2]dρ/dr)=0, which implies that cored density profiles are incompatible with a NFW potential even when the velocity is anisotropic following an Osipkov-Merritt model. As we stress in Sect. <ref>, this model for the radial variation of the anisotropy parameter is not as contrived as one may think since it is roughly followed by the low-mass model galaxies resulting from cosmological numerical simulations of galaxy formation. The constraints posed above happen to be a consequence of a more-general cusp slope-central anisotropy theorem by <cit.>. These authors showed that for systems with constant velocity anisotropy β (i.e., those described in Sect. <ref>), the need for the DF to be positive provides a constraint on the inner slope of the density profile α (i.e., ρ∝ r^-α when  r→ 0), α≥ 2β. This holds independently of the gravitational potential Ψ. As we show in Appendix <ref>, when this is combined with the constraint in Eq. (<ref>) set by having a NFW background potential, it leads to α > 2β. The inequality (<ref>) has a number of implications: (1) cores (α=0) are inconsistent with isotropic velocities (β=0), as we have shown already, (2) cores are inconsistent with radially biased orbits (i.e., only β< 0 is allowed for α=0), (3) radially biased orbits (β > 0) require cuspy baryon density profiles (α > 0), and (3) circular orbits do not impose any restriction on the inner slope α since their β = -∞. Actually, it is already known that strongly tangentially biased orbits can reconcile a cored stellar density profile with a cuspy CDM-like background potential <cit.>. The constraint on galaxies having radially biased orbits (β >0) is particularly important from a practical point of view since these orbits seem to be the natural outcome of the formation of dwarf galaxies in ΛCDM cosmological numerical simulations: see, e.g., <cit.> and <cit.>. Moreover, even if the uncertainties are large, values of β≳ 0 are also observed among the DM dominated satellites of the MW <cit.> There is also a fairly general family of tangentially biased DFs that can be discarded right away. It is described by the mixing model (Sect. <ref>) and covers the whole range of tangentially biased anisotropies from β=-∞ to 0. The mixing model in Sect. <ref> combines circular orbit DFs (f_c with β = -∞) and isotropic velocity DFs (f_i with β = 0) to produce tangentially biased DFs with β < 0 (Eq. [<ref>]). One may naively think that the always positive f_c may compensate f_i<0 to yield a positive physically sensible DF f =μ f_i+(1-μ) f_c (Eq. [<ref>]). However, all linear combinations can be discarded for any μ≠0 if f_i < 0 somewhere. The argument goes as follows: assume that f_i<0 at r= r_1 and v= v_1=(v_r1,v_θ 1,v_ϕ 1) (see the dependencies of the DF on position r and velocity v in Eq. [<ref>]). 
Then f_i is also <0 at r_1 and v_2=(v_r2,0,0) provided v^2_r2=v_r1^2+v_θ 1^2+v_ϕ 1^2, since f_i depends on v only through its modulus v. However, f_c( r_1,v_r2,0,0)=0 because, by definition, f_c only represents circular orbits that must have v_r=0. Thus, Eq. (<ref>) shows that f( r_1, v_2) <0 and thus unphysical, with the only possible workaround of μ=0, and so, of all orbits being circular. On the basis of the above arguments, ultra-low mass galaxies for which the stellar mass distribution is well fitted with cored density profiles are dynamically incompatible with a NFW profile for the dark-mass component, at least if one assumes that the phase-space DF of the stellar component depends only on the stellar energy, or that is described by a Osipkov-Merritt model, or is anisotropic with radially biased orbits, or anisotropic with tangentially biased orbits following the mixing model. These arguments cannot rule out a NFW background potential if other types of anisotropic phase-space distribution for the stars are assumed. For instance, a cored stellar profile may be compatible with a star distribution having a constant tangentially biased anisotropy (that is, having a constant and negative β). To make the range of compatibilities more clear, Table <ref> lists pairs of densities and potentials together with whether they are consistent or inconsistent. We note an important property of the consistency of a pair ρ – Ψ based on whether f(ϵ) ≥ 0 ∀ϵ. If a particular pair is consistent or inconsistent, then any global factor affecting the density profile will not modify this character since f(ϵ) scales linearly with a multiplicative factor in ρ (see Eqs. [<ref>] and [<ref>]). Thus, any of the inconsistencies brought out here hold true independently of the (typically unknown) mass ratio between the stars and the DM halo creating the potential. § NUMERICAL RESULTS This section illustrates with specific examples the general results put forward in Sect. <ref>, analyzes the behavior outside the core of the system, and deals with profiles with shapes more complex than the ones considered in Sect. <ref>. We check whether the DF resulting from particular pairs becomes negative at some point, which would discard the combination. β = 0 is assumed, so the DF f(ϵ) follows from Eqs. (<ref>) and (<ref>). Equation (<ref>) is integrated numerically for every ϵ applying a Simpson's rule. The radial derivatives of ρ and Ψ in Eq. (<ref>) are computed analytically whenever possible using the equations in Appendix <ref> (Sect. <ref>). Otherwise we compute them numerically (Sect. <ref>). §.§ Cored density in a NFW potential The computation of f(ϵ) is straightforward when ρ is a Schuster-Plummer profile, ρ(r)=ρ(0)/[1+(r/r_0)^2]^5/2, and Ψ is described by a NFW potential (Appendix <ref>), a combination used here as reference of cored density profile immersed in a CDM-only potential. The Schuster-Plummer profile is the polytrope of order m=5, and was chosen as reference because it provides a fair representation of the stellar mass distribution in real dwarf galaxies <cit.>. As the rest of polytropes, this density profile has a core, therefore, it is not consistent with the potential derived from the cuspy NFW profile (Sect. <ref>). An example is shown in Fig. <ref>. The parameters that define this polytrope and the potential have been tuned to represent a realistic galaxy with stellar mass M_⋆≃ 10^6 M_⊙, core radius r_0=1.4  kpc, and total mass around 10^4 times the stellar mass <cit.>. 
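The kind of calculation just described can be reproduced with a few lines of code. The sketch below (Python with numpy and scipy, the packages acknowledged in this work; parameter values, grids, and function names are illustrative choices of ours) evaluates f(ϵ) for a Schuster-Plummer ρ in a NFW relative potential using the analytic derivatives of Appendix <ref>, integrates Eq. (<ref>) with a Simpson rule, and flags negative values of the DF:

```python
import numpy as np
from scipy.integrate import simpson
from scipy.optimize import brentq

rho0, r0 = 1.0, 1.0        # Schuster-Plummer stellar density (arbitrary units)
Vc, rs = 1.0, 10.0         # NFW relative potential V(r) = Vc ln(1 + r/rs)/r

def D1(r): return -5*rho0*r/r0**2 * (1 + (r/r0)**2)**-3.5
def D2(r): return (-5*rho0/r0**2 * (1 + (r/r0)**2)**-3.5
                   + 35*rho0*r**2/r0**4 * (1 + (r/r0)**2)**-4.5)
def D3(r): return (105*rho0*r/r0**4 * (1 + (r/r0)**2)**-4.5
                   - 315*rho0*r**3/r0**6 * (1 + (r/r0)**2)**-5.5)
def V(r):  return Vc*np.log1p(r/rs)/r
def V1(r): return Vc/r**2 * (r/(r+rs) - np.log1p(r/rs))
def V2(r): return -2*Vc/r**3 * (r/(r+rs) - np.log1p(r/rs)) - Vc/(r*(r+rs)**2)
def V3(r): return (6*Vc/r**4 * (r/(r+rs) - np.log1p(r/rs))
                   + 2*Vc/(r**2*(r+rs)**2) + Vc*(3*r+rs)/(r**2*(r+rs)**3))

def f_eddington(eps, r_max=1e4, n=4001):
    """Isotropic DF from the change-of-variable form of the Eddington integral."""
    R = brentq(lambda r: V(r) - eps, 1e-8, r_max)          # radius where V(R) = eps
    r = np.logspace(np.log10(R) + 1e-6, np.log10(r_max), n)
    bracket = (-D3(r)/V1(r)**2 + 3*D2(r)*V2(r)/V1(r)**3
               + D1(r)*V3(r)/V1(r)**3 - 3*D1(r)*V2(r)**2/V1(r)**4)
    weight = np.sqrt(np.clip(eps - V(r), 0.0, None))
    return simpson(bracket*weight, x=r) / (np.sqrt(2.0)*np.pi**2)

psi0 = V(1e-8)                                             # ~ relative potential at the center
eps_grid = np.linspace(0.05, 0.95, 25) * psi0
f_vals = np.array([f_eddington(e) for e in eps_grid])
print("f < 0 somewhere?", bool(np.any(f_vals < 0)))        # True flags an unphysical pair
```

One expects the negative values of f to appear at the high-ϵ end; if a particular grid misses them, extending and refining the sampling toward ϵ→Ψ(0) should reveal them, since the analytical argument of Sect. <ref> guarantees their existence for any cored ρ in a NFW potential.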
The stars are immersed in the NFW potential generated by the matter distribution represented as the gray solid line in the left panel of Fig. <ref>. (Note that this density fully defines Ψ through Poisson's equation and independently of the velocity distribution of the particles creating the potential.) This component completely dominates the mass and the potential of the system: the mass within the gray solid line, M_p, has M_p/M_⋆≃ 10^4. As expected, f< 0 for some ϵ signaling that this combination of baryons and potential is unphysical. For self gravitating systems, Poisson equation guarantees that f(ϵ) ≥ 0 ∀ϵ. For the stellar density profile shown in Fig. <ref>, f(ϵ) is analytic (Eq. [<ref>]). We use this fact to check the numerical integration scheme used to derive f(ϵ). §.§ Double power law density and potential A cored density in a NFW potential are inconsistent, as we have showed. Here we expand the range of shapes to figure out how much the conditions for a core and a NFW potential can be relaxed and still getting inconsistent results. In our study, we use a family of density profiles commonly used in the literature <cit.>, ρ_abc(r) = ρ_s/x^c(1+x^a)^(b-c)/a, with x= r/r_s, that encompasses both the NFW profile (a=1, b=3,  and  c=1) and the Schuster-Plummer profile (a=2, b=5,  and  c=0) shown in Fig. <ref>. Actually, for a=2, b=m, and c=0, ρ_abc approximately accounts for the inner region of a polytrope of index m <cit.> which is important in this context since polytropes describe density profiles of self-gravitating N-body systems when reaching thermodynamical equilibrium <cit.>. The constants r_s and ρ_s in Eq. (<ref>) provide the global scaling for radius and density, respectively. The parameter c gives the inner logarithmic slope, lim_r→ 0dlogρ_abc/dlog r=-c. The three-parameter function ρ_abc can be folded into a single parameter family using a=2-c and b=5-2c, ρ_c(r) = ρ_s/x^c(1+x^2-c)^(5-3c)/(2-c), which seamlessly scans from Schuster-Plummer to NFW when c goes from 0 to 1 (see Fig. <ref>). In order to compute the DF of the baryons, one needs the derivatives of the density profile and the potential (Eqs. [<ref>] and [<ref>]). Using the Poisson equation for a spherically symmetric system <cit.>, the potential and the required derivatives can be obtained in terms of the inner mass, M_p(<r) = 4π ∫_0^r t^2 ρ_p(t) dt, so that Ψ =G M_p(<r)/r+4π G ∫_r^∞ t ρ_p(t) dt, dΨ/dr= - G M_p(<r)/r^2, d^2Ψ/dr^2=2GM_p(<r)/r^3-4π Gρ_p, and d^3Ψ/dr^3=-6GM_p(<r)/r^4+8π G ρ_p/r-4π Gdρ_p/dr, which for ρ_p=ρ_abc can be computed analytically only for certain values of a, b and c <cit.>. Employing, Eq. (<ref>) and Eqs. (<ref>) – (<ref>), one can integrate numerically Eq. (<ref>) to obtain f(ϵ) via Eq. (<ref>). Using this approach, we have scanned through a large number of pairs baryon densities and potentials both characterized by double exponential density profiles but with different parameters. Unless otherwise stated explicitly, we employ the simplified version of the density given in Eq. (<ref>). The main results of our numerical exercise will be discussed next and are also summarized in Table <ref>. To prevent confusion during the description, the parameters corresponding to the density profile that creates the potential are labeled with the subscript p whereas those of the baryon density do not have any subscript. * If the baryons have a core (i.e., if c=0) then the density generating the potential must also have a core to be consistent (i.e., c_p=0). 
This is shown by the numerical simulations (a counter-example with c=0, c_p=0.05 producing f<0 is shown in Fig. <ref>), but it also follows analytically from the study carried out in Appendix <ref> and discussed in the item <ref> below. The graphical summary in Fig. <ref> shows that when c=0, c_p must be zero for f to be >0 everywhere. * If the baryon core is not perfect (c ≳ 0; denoted as soft-core in Table <ref>), then a NFW profile may or may not be compatible with it. Figure <ref> shows examples of incompatible (top panels) and compatible (bottom panels) pairs. We have scanned a range of values for c and c_p (-0.001 < c < 1 and -0.1 < c_p < 1.1), with the compatibility summarized in Fig. <ref>. Roughly speaking, NFW potentials (c_p=1) are inconsistent with densities having c≲ 0.1. * Any density profile with c=0 and a>2 is physically irrealizable (see Fig. <ref>), independently of the potential. The inconsistency remains even with the potential created by the self-gravity of the density, and means that no β=0 DF is able to reproduce a>2 profiles. The behavior, summarized in Fig. <ref>, is predicted analytically in Appendix <ref> and discussed further in item <ref>. * A density profile significantly broader than the potential also yields inconsistent distribution functions. According to Fig. <ref>, r_s/r_sp≲ 2 for the DF to be non-negative, a constraint that may be used in real galaxies to set a lower limit to the size of the DM halo from the size of the observed starlight. As we mention in Sect. <ref>, the density contrast between the density and the density producing the potential (ρ_s/ρ_sp) is irrelevant since it cannot change the sign of f. * Note that the region where c > c_p, i.e., where the baryons are more cuspy than the halo, presents no inconsistency in the summary plots of Figs. <ref> and <ref>. We bring this fact up because some numerical simulations of ultra-low mass galaxies seem to show compact stellar concentrations having c > c_p≃ 1 <cit.>. These structures are physically feasible within the logical framework of our work, which, among others, assumes negligible stellar mass (M_⋆≪ M_p) and spherical symmetry. * For the abc densities that we are considering, the derivative used to diagnose the positivity of the DF in Sect. <ref> (Eq. [<ref>]) turns out to be (Eq. [<ref>], Appendix <ref>), dρ/dr/dΨ/dr ≃ D/r^{2+c-c_p} [c+(b-c) r^a/r_s^a], with D> 0 for c_p < 3. If c≠0, the first term in the RHS of Eq. (<ref>) dominates the behavior of the ratio when r→ 0. This term is identical to Eq. (<ref>) with α=c and α_p=c_p, and to allow for the derivative to differ from zero (and so for the DF to be positive), it only demands c > c_p-2. This is a very loose constraint and, actually, most of the forbidden (red) region in Figs. <ref> and <ref> meets this requirement. These two results, i.e., having dρ/dΨ≠ 0 but f<0 somewhere, are consistent because positivity of f everywhere is more demanding than dρ/dΨ > 0: the latter only requires the (weighted) integral of f to be positive (Eq. [<ref>]), which can be met even when f<0 for some values of ϵ (see the example in the top panel of Fig. <ref>). When c=0, the second term in the RHS of Eq. (<ref>) rules, and then the potential and the density profiles would be inconsistent when 2-c_p-a < 0, since the ratio of derivatives goes to zero when r→ 0. For a=2, as expected for polytropes, c_p> 0 is ruled out and the potential must have a core to be consistent with the core in the density profile.
This condition for c=0 is truly restrictive, and is strictly followed by the simulations in Fig. <ref>. § DISCUSSION AND CONCLUSIONS lccl[h] Summary of the compatibility between baryon density profile (ρ) and potential Baryons & Potential, Velocity Consistency Comments Section (1) (2) (3) (4) Core ^† & NFW ^, isotropic Eqs. (<ref>) and (<ref>). β=0 ^*. Fig. <ref>   Sect. <ref> Power law ^ & Power law, isotropic α > 0 ^  α < 0 . Eq. (<ref>). β=0 Sects.  <ref>, <ref> Core & Soft-core ^#, isotropic β=0. Fig. <ref>. Fig. <ref> Sect. <ref>, App. <ref> Core & Core, isotropic β=0. a≤ 2 a > 2 . Fig. <ref> Sects. <ref>, <ref>, App. <ref> Soft-core & NFW, isotropic β=0. Figs. <ref>, <ref>. c≳ 0.1 c ≲ 0.1 . Sects. <ref>, <ref> Soft-core & Soft-core, isotropic β=0. Figs. <ref>, <ref> Sects. <ref>, <ref> r_s ≳ 2 r_sp , c> c_p Sect. <ref> Core & NFW, O-M model β (≠ 0) in Eq. (<ref>) Sect. <ref> Core & NFW, radially biased Constant β. β > 0 Sect. <ref>, App. <ref> Core & Any, radially biased Constant β. β > 0 Sect. <ref>, App. <ref> Power-law & Any, anisotropic Constant β. α > 2β Sect. <ref>, App. <ref> Core & NFW, circular β= -∞ App. <ref> Any & Any, circular β= -∞ App. <ref> Any & Any, tangentially biased β <0. Eq. (<ref>). f_i<0 Sects. <ref>, <ref> ^† Core ≡ dlogρ/dlog r→ 0 when r→ 0. ^ Navarro, Frenk, and White potential (Eq. [<ref>]) produced by a NFW profile (Eq. [<ref>]). ^* Velocity anisotropy parameter β defined in Eq. (<ref>). ^ ρ∝ r^-α. ^# Soft-cores defined in Eqs. (<ref>) and (<ref>), and illustrated in Fig. <ref>. Power laws ^ are a particular type of those. (1) Description of the baryon density, the gravitational potential, and the velocity distribution. (2) The symbols , , and stand for compatible, incompatible, and may or may not, respectively. (3) Additional comments and keywords. (4) Section of the text where the combination described in (1) is discussed. According to the current concordance cosmological model, the DM particles are collision-less and, evolving under their own gravity, produce self-gravitating structures that approximately follow the iconic NFW profile with a cusp in its center (CDM haloes). These cusps (ρ∝ r^-1) are generally not observed in galaxies. The total density often presents a central plateau or core (ρ∼ constant), which is believed to be produced by the coupling with baryons through gravity. Star-formation driven outbursts modify the overall gravitational potential, affecting the CDM distribution too. This mechanism of baryon feedback becomes inefficient when decreasing the galaxy stellar mass, reaching a point where the energy provided by baryons is simply not enough to modify the cusp of the CDM haloes (see, Sect. <ref> for references and details). Despite all uncertainties and model dependencies, this threshold mass roughly corresponds to isolated galaxies with stellar masses < 10^6 M_⊙ or halo masses < 10^10 M_⊙. Thus, if these ultra-low mass galaxies show cores, they are not due to baryon feedback processes but have to reflect the nature of DM: whether it is fuzzy, self-interacting, warm, or any of the other possibilities put forward in the literature. Direct measurements of the DM mass distribution in these faint galaxies are difficult since they require high spectral resolution spectroscopy, which is observationally extremely challenging. However, there may be a shortcut if the starlight somehow follows the DM since, even in low-mass low-luminosity galaxies, deep photometry is doable <cit.>. 
One may naively think that stars must trace DM in these systems whose potential is fully dominated by DM. Nevertheless, stars are so weakly coupled with the DM that can potentially maintain a mass distribution differing from the DM distribution for longer than the age of the Universe <cit.>. Thus, in order to use the observable stellar mass distribution as a proxy for the elusive DM distribution, one has to show that somehow starlight traces DM in this DM dominated systems. More specifically, we know that low-mass galaxies often show cores in their stellar mass distribution (Sect. <ref>). The question arises as whether this cored baryon distribution is or not consistent with the DM distribution expected from CDM particles (aka NFW profile). We address the question using the so-called Eddington inversion method. Under mildly restrictive assumptions (gravity from baryons negligible, stationary-state, smooth potential, and spherical symmetry; see, Sect. <ref>), the method provides the DF in the phase space f corresponding to a mass density distribution immersed in a gravitational potential. Given two arbitrary density and potential, there is no guarantee that f > 0 everywhere, which is required for them to be physically consistent. In this paper, we have studied different combinations of baryon density and gravitational potential that may help us to discern whether DM profiles in ultra-low mass galaxies have or not a core. We focus on the consistency of the various gravitational potentials with baryon density profiles showing a core (Eq. [<ref>]) or soft-core (Eq. [<ref>], with c≳ 0). The main conclusions of our analysis are summarized in Table <ref> and can be expanded as follows: - Stellar cores in a NFW potential are incompatible provided the velocity distribution is isotropic (β=0). - Stellar cores and potentials stemming from a density with a quasi-core (c_p >0) are incompatible too. This result holds for isotropic velocities (β =0). - As expected for physical consistency, stellar cores and potentials resulting from cored density profiles are consistent in isotropic (β=0) and radially biased systems (β >0). - Stellar cores and NFW potentials are also incompatible in systems with anisotropic velocities provided they follow the Osipkov-Merritt model. Even if artificial, it approximately describes the global trend expected in ultra-low mass galaxies, with β∼ 0 in the center and then increasing outwards (β >0). - Stellar cores and NFW potentials are incompatible in systems with radially biased orbits (constant β > 0). Actually, a stellar core is incompatible with any potential without a core in systems with constant radially biased orbits (β > 0). - Circular orbits (β=-∞) can accommodate any combination of baryion density and potential, including a cored stellar density in a NFW potential. These configuration is very artificial, though. Unlikely to happen in real dwarf galaxies where orbits are expected to be radially biased (see the discussion below). - The linear superposition of two DFs is also a DF. Thus, one may think that the addition of a positive DF for circular orbits may compensate the negative DF for isotropic orbits to yield a positive physically sensible DF. However, this is not the case. Independently of the relative weight, the mixing of an unphysical DF for isotropic velocities (β=0) with a physically realizable DF for circular orbits (β =-∞) always yields unphysical DFs (Sect. <ref>). 
- We denote as soft-cores those profiles where inner slope is not exactly zero but close to it (c ≳ 0). Soft-cores are inconsistent with NFW profiles when c ≲ 0.1 while they are consistent when c ≳ 0.1. When the density profile that characterizes the potential also has a soft core (i.e., when 0≤ c_p≤ 1), then the situation is more complicated as shown in, e.g., Fig. <ref>. This statements hold for isotropic velocity distributions. - The inner slope of a soft stellar core and the radial anisotropy are related so that c > 2β. In other words, large radially biased orbits are strongly inconsistent with soft stellar cores. - Positive inner slope in the stellar distribution, where the density grows outwards, is discarded in every way. - For stellar densities and potentials with the same shape (whether cored or not), the stellar density distribution cannot be broader than twice the width of the density equivalent to the potential. This result refers to isotropic velocities and may be used in real galaxies to set a lower limit to the size of the DM halo from the size of the observed starlight. - Pairs of density and potential where the inner slope of the density is larger than that of the potential (c ≥ c_p) are not inconsistent. This result refers to isotropic velocities too. - The above conclusions do not depend on a scaling factor on stellar density profile, therefore, they do not depend on the (unknown) ratio between the stellar mass and the total mass of the system. - The functions used to represent the density and the potential are flexible enough to describe the central region in any polytrope of arbitrary index m (Eq. [<ref>], with a=2, b=m, and c=0). Polytropes are important in the context of self-gravitating systems since they describe the density expected in N-body systems reaching thermodynamical equilibrium <cit.>. In other words, they portray the DM density distribution expected if the DM were not collision-less <cit.>. How useful the above constraints are very much depends on the anisotropy of the velocity field β (Eq. [<ref>]). In general, radially biased (β >0) and isotropic (β=0) orbits are more difficult to reconcile with a cuspy gravitational potential than tangentially biased orbits (β < 0). The question arises as what is the anisotropy to be expected in real galaxies. This issue can be addressed from two complementary directions, namely, what is the anisotropy observed in the smallest galaxies, and what is the anisotropy recovered for the smallest galaxies formed in cosmological numerical simulations. Even if the uncertainties are large because the estimates rely on measuring velocities of individual stars, the DM dominated satellites of the Milky Way (MW) tend to have β≳ 0 <cit.>. Note that these objects are not isolated galaxies and their internal baryon structure may be strongly mediated by the presence of the MW and its circum-galactic medium through tidal forces, ram-pressure, and starvation <cit.>. However, the observed trend is consistent with numerical simulations. Radial anisotropies seems to be the natural outcome of the formation of dwarf galaxies in ΛCDM cosmological numerical simulations: see, e.g., <cit.> and <cit.>. Moreover, β tends to zero when approaching the center of the gravitational potential, where the stellar cores may be present and have to be observed. Thus, β≳ 0 at the centers seems to be a sensible conjecture when interpreting stellar mass distributions in real galaxies. 
One of the seemingly most restrictive assumptions leading to the constraints in Table <ref> is the spherical symmetry of the density and potential. As with the isotropy of the velocity field, the question arises of whether this is a good assumption for real ultra-low mass galaxies. Actually, the two issues are closely connected since, in real galaxies, both are set by the history of star formation driven by cosmological gas accretion and mergers <cit.>. In general, the smallest simulated galaxies tend to be rounded, although not perfectly spherical, with the DM component closer to sphericity <cit.>. On the other hand, the observed dwarf isolated galaxies are triaxial, but with three axes of similar lengths <cit.>. In addition to whether real ultra-low mass galaxies are well fitted or not by spherically symmetric models, independent theoretical arguments point out that this assumption is not so critical since the incompatibilities may still hold when it is dropped. The extensions of the Eddington inversion method for axi-symmetric systems <cit.> lead to expressions for the DF similar to Eqs. (<ref>), (<ref>), and (<ref>). They are expected to lead to restrictions similar to those worked out in this paper. We are presently exploring them with promising results. There are also extensions or variants of the Eddington inversion approach, suitable for other more general spherically symmetric DFs, that in principle could be used for diagnostics and would be worth considering <cit.>, but their analysis remains to be carried out. The constraints in Table <ref> result from treating particular cases, each one with its own peculiarities. The analysis of other cases (e.g., the study of axi-symmetric systems mentioned above) will enlarge a list which at present contains only a fraction of the constraints yet to be discovered. In this sense, our work is only a pathfinder that shows how the traditional Eddington method can be used to study DM haloes in ultra-low mass galaxies. Given the observed stellar distribution, the method seriously limits the properties of the DM halo where it resides. Moreover, its interest probably exceeds the original scope that motivated the present study, and it may find application in other astrophysical systems where the stars represent only a minor fraction of the total mass, for example, the intra-cluster light as a tracer of the DM galaxy cluster potential <cit.>. In short, the question in the title of the paper, Can CDM matter halos hold cored stellar mass distributions?, has no simple yes or no answer. Instead, we find it to be unlikely, although not impossible, that cored stellar mass distributions can be hosted in NFW DM haloes, provided the system is spherically symmetric. Thus, our work supports the interest of determining surface brightness profiles of ultra-low mass galaxies to constrain the nature of DM. This work can be used as a guide to interpret observations, so that the closer the observed galaxies are to the hypotheses (spherical symmetry, stationarity, velocity isotropy, etc.), the more useful the constraints in Table <ref>. Our ultimate goal is applying the mathematical tools developed in this paper to observed dwarf galaxies with masses low enough to constrain the nature of DM (Sect. <ref>). This challenging task still requires several intermediate steps to be completed.
In our roadmap, we would like to test the machinery with the few local group galaxies for which independent information on the DM halo and on the stellar distribution is available <cit.>, to see whether the constraints imposed by the Eddington inversion method and by the kinematical measurements are consistent. We also need to know what is the signal-to-noise ratio and the number of targets required to make firm claims. Having a hundred targets with surface brightness profiles reaching down to 30 mag arcsec^-2 seems to be doable <cit.> but, does it suffice? Finally, we have to carefully select the actual data set of faint isolated dwarf galaxies. The two requirements are in tension since intrinsically faint galaxies are nearby and so tend to be satellites, but both are needed. One obvious possibility is waiting for better data <cit.>. Alternatively, one can also think of studying the ultra-faint dwarfs of the local group <cit.> cherry-picking those where the tidal forces and other environmental effects may be minimal (e.g., with large pericentic passage) and which truly proceed from low mass progenitors <cit.>. Tidal forces change the internal structure of satellites and reduce their stellar mass content, thus blurring any clear-cut interpretation of the observed DM distribution in terms of the DM nature, a caveat to keep in mind if this pathway is chosen. All these works are currently ongoing or planed. Thanks are due to Claudio Dalla-Vechhia for insightful discussions during the early stages of the work, and to Giuseppina Battaglia, Arianna Di cintio, and Ruben Sánchez-Janssen for references. Thanks are due to Matthew Orkney and Justin Read for discussions and clarifications on the velocity anisotropy and mass profile of the galaxies in their simulations. JSA acknowledges financial support from the Spanish Ministry of Science and Innovation (MICINN), project PID2019-107408GB-C43 (ESTALLIDOS). His visit to La Plata was partly covered by the MICINN through the Spanish State Research Agency, under Severo Ochoa Centers of Excellence Programme 2020-2023 (CEX2019-000920-S). JSA also wants to explicitly thank Angel Luis Platino and the Facultad de Ciencias Económicas de La Universidad Nacional de La Plata for their hospitality during this visit. ARP acknowledges support to visit the IAC from the Fundación Jesús Serra and the IAC under their Visiting Researcher Programme 2020–2022. IT acknowledges support from the Project PCI2021-122072-2B, financed by MICIN/AEI/10.13039/501100011033, and the European Union NextGenerationEU/RTRP and the ACIISI, Consejería de Economía, Conocimiento y Empleo del Gobierno de Canarias and the European Regional Development Fund (ERDF) under grant with reference PROID2021010044 and from the State Research Agency (AEI-MCINN) of the Spanish Ministry of Science and Innovation under the grant PID2019-107427GB-C32 and IAC project P/302302, financed by the Ministry of Science and Innovation, through the State Budget and by the Canary Islands Department of Economy, Knowledge and Employment, through the Regional Budget of the Autonomous Community. numpy <cit.>, scipy <cit.> § ANALYTIC DERIVATIVES OF THE DENSITY AND THE POTENTIAL According to Eqs. (<ref>) and (<ref>), the DF f(ϵ) corresponding to a density ρ(r) in a potential Ψ(r) can be deduced from the first three derivatives of ρ(r) and Ψ(r). This appendix works them out for various practical cases that involve polytropes and NFW potentials. They all are used in the main text. 
§.§ Distribution function for a Schuster-Plummer stellar mass density in a NFW potential The Schuster-Plummer density (Eq. [<ref>]) is defined as, D(r) = ρ(0) [1 + r^2/r_0^2]^-5/2, so that, d D/dr = D_1(r) = -5ρ(0)/r_0^2 r [1 + r^2/r_0^2]^-7/2, d^2 D/dr^2 = D_2(r) = -5ρ(0)/r_0^2[1 + r^2/r_0^2]^-7/2 + 35 ρ(0)/r_0^4 r^2 [1 + r^2/r_0^2]^-9/2, and d^3 D/dr^3 = D_3(r) = 35 ρ(0)/r_0^4 r [1 + r^2/r_0^2]^-9/2 + 70 ρ(0)/r_0^4 r [1 + r^2/r_0^2]^-9/2 - 315 ρ(0)/r_0^6 r^3 [1 + r^2/r_0^2]^-11/2. On the other hand, the NFW density profile is defined as, ρ_ NFW(r)=ρ_s/(r/r_s)(1+r/r_s)^2, with r_s and ρ_s two constants. It creates a potential given by <cit.>, Φ_ NFW(r) = - V_c/rln(1 + r/r_s), with V_c=4π Gρ_s r_s^3. Then the relative potential Ψ(r), denoted for the NFW profile as V(r), turns out to be, V(r) = Φ_ NFW(∞) - Φ_ NFW(r) =V_c/rln(1 + r/r_s), with its derivatives given by, dV/dr = V_1(r) = V_c/r^2[ r/r+r_s - ln(1 + r/r_s) ], d^2V/dr^2 = V_2(r) = - 2 V_c/r^3[ r/r+r_s - ln(1 + r/r_s) ] - V_c/r (r + r_s )^2, and d^3V/dr^3 = V_3(r) = 6 V_c/r^4[ r/r+r_s - ln(1 + r/r_s) ] + 2V_c/r^2 (r + r_s )^2 + V_c(3r + r_s)/r^2 (r + r_s )^3. Using Eqs. (<ref>) and (<ref>), the DF corresponding to a density given by Eq. (<ref>) and a potential set by Eq. (<ref>) turns out to be, f(ϵ) = 1/π^2 √(2)∫_R^∞ dr √(ϵ - V(r)) [ - D_3/V_1^2 + 3 D_2 V_2/V_1^3 + D_1 V_3/V_1^3 - 3 D_1 V_2^2/V_1^4], with the limit R implicitly defined as ϵ = V(R). §.§ Distribution function for a Schuster-Plummer stellar mass density in a Schuster-Plummer potential The gravitational potential corresponding to the mass density in Eq. (<ref>) is <cit.>, Φ_ SP(r) = - W_c (1 + r^2/r_0^2)^-1/2, with W_c = (4π/3) G r_0^2, so that the corresponding relative potential becomes, W(r) = Φ_ SP(∞) - Φ_ SP(r) = W_c (1 + r^2/r_0^2)^-1/2, with its derivatives given by, dW/dr = W_1(r) = - W_c/r_0^2 r (1 + r^2/r_0^2)^-3/2, d^2W/dr^2 = W_2(r) = - W_c/r_0^2(1 + r^2/r_0^2)^-3/2 + 3W_c/r_0^4 r^2 (1 + r^2/r_0^2)^-5/2, and d^3W/dr^3 = W_3(r) = 9 W_c/r_0^4 r (1 + r^2/r_0^2)^-5/2 - 15W_c/r_0^6 r^3 (1 + r^2/r_0^2)^-7/2. Using Eqs. (<ref>) and (<ref>), the DF corresponding to a density given by Eq. (<ref>) and a potential set by Eq. (<ref>) turns out to be, f(ϵ) = 1/π^2 √(2)∫_R^∞ dr √(ϵ - W(r)) [ -D_3/W_1^2 + 3 D_2 W_2/W_1^3 + D_1 W_3/W_1^3 - 3 D_1 W_2^2/W_1^4], with the radius R implicitly defined as ϵ = W(R). In the case of a self-gravitating system, so that the Schuster-Plummer potential is the one created by the Schuster-Plummer mass density, then Eq. (<ref>) can be integrated analytically to yield, f(ϵ) = ρ(0)/W_c^5120/(2π)^3/2Γ(9/2) ϵ^7/2, an expression used to check our numerical evaluations of f(ϵ). § THE TERMS AT Ψ=0 IN THE EDDINGTON INVERSION METHOD The relative potential Ψ generated by a spherically symmetric system of finite total mass behaves as Ψ∝ r^-1 for r →∞ (e.g., Eq. [<ref>]). Consider objects where ρ∝ r^-b for r→∞ (e.g., Eq. [<ref>]). Combining the asymptotic behaviors of Ψ and ρ, one finds that dρ/dΨ∝ r^1-b and d^2ρ/dΨ^2 ∝ r^2-b. Consequently, ( dρ/dΨ)_Ψ = 0 = ( d^2ρ/dΨ^2 )_Ψ = 0 = 0 provided b > 2. The above argument is not strictly valid if Ψ stands for the NFW potential, because it does not correspond to a mass distribution with finite total mass, and it behaves as Ψ∝ r^-1 ln r for r→∞. However, if b > 2, using Eq. (<ref>) one can show that still ( dρ/dΨ)_Ψ = 0 = ( d^2ρ/dΨ^2 )_Ψ = 0 = 0 when r→∞, in spite of the logarithmic factor appearing in the NFW potential. 
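The analytic DF of the self-gravitating Schuster-Plummer sphere quoted above (f(ϵ) ∝ ϵ^{7/2}) provides a compact end-to-end check of the Eddington machinery. In that case ρ = ρ(0) (Ψ/W_c)^5, so d^3ρ/dΨ^3 = 60 ρ(0) Ψ^2/W_c^5 and the isotropic Eddington integral can be evaluated directly in Ψ. A minimal Python sketch follows (illustrative units; note that, for the dimensions to work out, the constant of the potential must carry the central density, W_c = (4π/3) G ρ(0) r_0^2, as follows from Poisson's equation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

G, rho0, r0 = 1.0, 1.0, 1.0                  # illustrative units
Wc = 4.0*np.pi/3.0 * G * rho0 * r0**2        # central relative potential of the sphere

def f_numeric(eps):
    # Eddington integral with d^3(rho)/d(Psi)^3 = 60 rho0 Psi^2 / Wc^5
    integrand = lambda psi: 60.0*rho0*psi**2/Wc**5 * np.sqrt(eps - psi)
    return quad(integrand, 0.0, eps)[0] / (np.sqrt(2.0)*np.pi**2)

def f_analytic(eps):
    # closed-form DF of the self-gravitating Schuster-Plummer sphere quoted above
    return rho0/Wc**5 * 120.0/((2.0*np.pi)**1.5 * gamma(4.5)) * eps**3.5

for eps in (0.1*Wc, 0.5*Wc, 0.9*Wc):
    print(f_numeric(eps), f_analytic(eps))   # the two columns should agree
```

The same comparison, carried out with the derivatives taken with respect to r as in the expressions of this appendix, is the consistency test mentioned at the end of the Schuster-Plummer subsection.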
§ DISTRIBUTION FUNCTION OF AN SPHERICALLY-SYMMETRIC SYSTEM WITH CIRCULAR ORBITS Any density ρ(r) can be reproduced with a system of spherically-symmetric circular orbits <cit.>. By definition, their radial velocity is zero, v_r=0, and their tangential velocity is equal to the circular velocity, v_t=v_c(r), with v_c^2(r)=G M_p(<r)/r. The symbol M_p(<r) stands for the inner mass creating the potential Φ. A general DF with the required properties is, f_c(r,v_t,v_r) = F(r) δ(v_t-v_c) δ(v_r), where δ represents a Dirac-delta function and F is a function to be set by the density. Since, ρ is recovered from the integral of f_c over all velocities, ρ(r) =2π∫∫ f_c v_t dv_t dv_r , then F(r)=ρ(r)/2π v_c(r), which, together with Eq. (<ref>), uniquely defines F for any combination of ρ and Φ. Note that even if the DF in Eq. (<ref>) is not explicitly written in terms of ϵ and L, it is straightforward to verify that it is a stationary DF. § THE THEOREM BY AN & EVANS IN OUR CONTEXT Section <ref> puts forward the DF, f(ϵ, L) = L^-2β f_ϵ(ϵ), which represents a leading order approximation for a wide class of DFs having the anisotropy parameter β constant <cit.>. In this case the DF depends not only on the energy ϵ but also on the modulus of the angular momentum L. Under this assumption, the mass volume density can be written as (, Eq. [4.66]), r^2βρ(r) = κ_β ∫_0^Ψf_ϵ(ϵ)/(Ψ-ϵ)^β-1/2 dϵ, where κ_β is a positive numerical value independent of the radius r. As we argued in Sect. <ref>, for the integral in the RHS of Eq. (<ref>) to be zero, f_ϵ < 0 somewhere which through Eq. (<ref>) makes f unphysical. Assuming ρ(r)∝ r^-α when r→ 0, then the left-hand-side of Eq. (<ref>) differs from zero if 2β-α≤ 0, which is the theorem proved by . Here we go a step further and provided β <1/2 (Eq. [<ref>] diverges when taking derivatives and β > 1/2), one obtains ( Eq. [4.67]), d[r^2βρ(r)]/dr/dΨ/dr = κ_β (1/2-β)∫_0^Ψf_ϵ(ϵ)/(Ψ-ϵ)^β+1/2 dϵ. In the case of Ψ given by a NFW potential, its radial derivative at r=0 differs from zero and is negative (Eq. [<ref>]) therefore, to avoid the RHS of Eq. (<ref>) to be less or equal to zero (and so to avoid an unphysical f_ϵ<0), 2β-α≠ 0 and 2β-α-1 ≤ 0. The 2nd condition is automatically met because 2β-α is already ≤ 0 according to . Together with this inequality, the first condition implies that for f>0 then α > 2β. Our derivation assumes β < 1/2, however, it is not difficult to show that the inequality still holds in the limit case when β=1/2 and Eq. (<ref>) is not valid. There are several obvious consequences of the inequality in Eq. (<ref>): (1) cores (α=0) are inconsistent with isotropic velocities (β=0), (2) cores are inconsistent with radially biased velocities (i.e., only β< 0 is allowed), (3) radially biased orbits (0< β < 1/2) require cuspy baryon density profiles (α > 0), and (3) circular orbits do not pose any problem since β=-∞. § VALUE OF (DΡ/DR)/(DΨ/DR) WHEN R→ 0 AND DΡ/DR → 0 Starting out from the definition of ρ_abc in Eq. (<ref>), one finds for r→ 0, dρ_abc/dr≃ -ρ_s/r_s c+(b-c) x^a/x^1+c, with x=r/r_s. Similarly, Eq. (<ref>) provides the 1st order approximation, dΨ/dr≃ -B r^1-c_p, where c_p is the value of c of the density profile assumed to generate the potential Ψ and B is a positive constant. Putting together the two previous equations, one finds, dρ/dr/dΨ/dr≃D/r^2+c-c_p [c+b-c/r_s^a r^a], with D> 0 for c_p < 3. In the range of interest for galaxies, the parameters c and c_p are from ∼ 0 to ∼ 1, with c ≤ c_p, whereas a is between ∼ 1 and ∼ 2. 
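The leading-order expression for dρ_abc/dr above is straightforward to verify numerically before discussing its limits; the following short sketch (illustrative parameter values of our choosing) compares it with a centered finite difference of the exact ρ_abc:

```python
import numpy as np

# Small-r check of the leading-order derivative of rho_abc (illustrative parameters).
a, b, c, rs, rhos = 2.0, 5.0, 0.3, 1.0, 1.0

def rho_abc(r):
    x = r / rs
    return rhos / (x**c * (1.0 + x**a)**((b - c) / a))

def drho_exact(r, h=1e-6):
    return (rho_abc(r*(1 + h)) - rho_abc(r*(1 - h))) / (2.0*r*h)  # centered difference

def drho_leading(r):
    x = r / rs
    return -(rhos/rs) * (c + (b - c)*x**a) / x**(1 + c)

for r in (1e-1, 1e-2, 1e-3):
    print(drho_exact(r), drho_leading(r))   # the two values converge as r -> 0
```

The same kind of check applied to dΨ/dr, computed from the inner mass M_p(<r) of Sect. <ref>, reproduces the r^{1-c_p} scaling used below.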
If c≠0, the first term in the RHS of Eq. (<ref>) dominates the behavior of the ratio when r→ 0. This term is identical to Eq. (<ref>) with α=c and α_p=c_p, and its behavior is discussed in detail in Sect. <ref>. The case when c=0 is particularly interesting since it represents a cored density profile. Only the 2nd term in the RHS of Eq. (<ref>) is not zero and it turns out to be, dρ/dr/dΨ/dr ≃ (D b/r_s^a) (1/r^{2-c_p-a}). Thus, the potential and the density profiles would be inconsistent when 2-c_p-a < 0 since the ratio of derivatives goes to zero when r→ 0. It implies that when a=2 (e.g., Schuster-Plummer profile; Appendix <ref>), all c_p > 0 are physically irrealizable (see Fig. <ref> for c=0). It also implies that when c_p=0, and so the potential has the same core as the density, the pair density and potential are unphysical for a>2. This somewhat surprising behavior has been checked numerically (Figs. <ref> and <ref>).
http://arxiv.org/abs/2307.00936v1
20230703112109
OpenAPMax: Abnormal Patterns-based Model for Real-World Alzheimer's Disease Diagnosis
[ "Yunyou Huang", "Xianglong Guan", "Xiangjiang Lu", "Xiaoshuang Liang", "Xiuxia Miao", "Jiyue Xie", "Wenjing Liu", "Li Ma", "Suqin Tang", "Zhifei Zhang", "Jianfeng Zhan" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Alzheimer's disease (AD) cannot be reversed, but early diagnosis will significantly benefit patients' medical treatment and care. Recent works on AD diagnosis rest on the primary assumption that all categories are known a priori—a closed-set classification problem, which contrasts with the open-set recognition problem. This assumption hinders the application of such models in natural clinical settings. Although many open-set recognition technologies have been proposed in other fields, they are challenging to use for AD diagnosis directly, since 1) AD is a degenerative disease of the nervous system with similar symptoms at each stage, and it is difficult to distinguish from its pre-state, and 2) diversified strategies for AD diagnosis are challenging to model uniformly. In this work, inspired by the concerns of clinicians during diagnosis, we propose an open-set recognition model, OpenAPMax, based on abnormal patterns to address AD diagnosis in real-world settings. OpenAPMax first obtains the abnormal pattern of each patient relative to each known category through statistics or a literature search, clusters the patients' abnormal patterns, and finally uses extreme value theory (EVT) to model the distance between each patient's abnormal pattern and the center of their category and to modify the classification probability. We evaluate the performance of the proposed method against recent open-set recognition methods and obtain state-of-the-art results. Alzheimer's Disease, Abnormal Patterns, Open-set Recognition, OpenAPMax. OpenAPMax: Abnormal Patterns-based Model for Real-World Alzheimer's Disease Diagnosis Yunyou Huang, Xianglong Guan, Xiangjiang Lu, Xiaoshuang Liang, Xiuxia Miao, Jiyue Xie, Wenjing Liu, Li Ma, Suqin Tang, Zhifei Zhang, and Jianfeng Zhan Y. Huang, X. Lu, X. Liang, X. Miao, J. Xie, W. Liu, and S. Tang are with the Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, and the Guangxi Key Laboratory of Multi-Source Information Mining and Security, Guangxi Normal University. X. Guan is with the School of Electronic and Information Engineering & School of Integrated Circuits, Guangxi Normal University, Guilin 530015, China. He is also with the Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University. J. Zhan is with the State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100086, China. Z. Zhang is with the Department of Physiology and Pathophysiology, Capital Medical University, Beijing, 100069, China. L. Ma is with the Guilin Medical University, Guilin 541001, China. She is also with the Guangxi Key Lab of Multi-Source Information Mining & Security. 
Correspondence authors: Jianfeng Zhan (zhanjianfeng@ict.ac.cn), Zhifei Zhang (zhifeiz@ccmu.edu.cn), or Suqin Tang (sqtang@gxnu.edu.cn) August 1, 2023 § INTRODUCTION Alzheimer's Disease (AD) is an incurable disease that causes significant harm to patients. Currently, the number of AD patients is approximately 50 million, increasing dramatically with societal ageing <cit.>. Although an AD diagnosis does not affect whether the disease can be cured, a correct diagnosis leads to proper nursing and treatment that alleviate the patient's symptoms, thereby significantly reducing the burden of the disease <cit.>. Artificial intelligence technology is considered one of the most promising technologies for improving medical services and has been introduced into many medical fields, including Alzheimer's disease <cit.>. Currently, many AI models are proposed for AD diagnosis to enhance the quality of medical services and expand AD screening. Zeng et al. adopted a traditional machine learning method, proposing a new switching delayed particle swarm optimization (SDPSO) algorithm to optimize the SVM parameters; the optimized SVM was then used to diagnose Alzheimer's disease <cit.>. Khan et al. adopted a deep learning method and used transfer learning to build a VGG architecture for classification <cit.>. To improve the generalization of the model, Lu et al. used transfer learning to construct a deep convolutional neural network ResNet model for classification on a large-scale dataset <cit.>. These methods can achieve good results in the classification and diagnosis of Alzheimer's disease in a closed-set environment. However, all of the recent works treat AD diagnosis as a classification problem in the closed world rather than an open-set recognition problem in the real world, which makes it impossible to implement these AI models in clinical settings. To make AI models applicable in the real world, a large number of open-set recognition technologies have been proposed in various fields. Hitham et al. used the peak side ratio to characterize the posterior probabilities generated by a Platt-calibrated support vector machine (SVM) and then used thresholds determined by this process to classify open audio data <cit.>. Mehadi et al. 
proposed a dual-loss neural network model that classifies open datasets by determining the distance between different instances<cit.>, and Abhijit et al. proposed a new OpenMax layer that applies meta-recognition to the activation layer, improving the performance of the network in an open-set environment<cit.>. Open-set recognition technologies are roughly divided into discriminative models and generative models <cit.>. Discriminative models generally train a classifier model in a closed set first and then observe the distribution of the classifier's output or intermediate results on the training set, observe the distribution of the intermediate results, or transform the output or intermediate results to another form to observe their distribution. Finally, it modifies the output of the classifier according to the distribution. However, as shown in Figure <ref>(b), AD is similar to many other diseases (such as MCI). In a clinical setting, an AD diagnosis takes 2.7 years on average <cit.>, which means that during the diagnosis process, AD may be indistinguishable from many very similar diseases, and it is difficult to distinguish unknown categories by observing the distribution of the models' output or the intermediate results. The generative model mainly recognizes samples of unknown classes through reconstruction loss or data augmentation. <cit.>. The model's key is whether the generator can fit the distribution of known samples or samples of unknown categories. <cit.>. However, as shown in Figure <ref>(b), this goal is different from current research because each sample is composed of the same one or more types of data, and the type and amount of data for each sample may be different based on different diagnostic strategies. For example, diagnosing an AD patient p_A in the training set may involve a combination strategy that includes a cognitive examination and MRI scanning, while diagnosing another AD patient p_B in the training set may involve a combination strategy that includes a cognitive examination and a genetic examination. Furthermore, AD patient p_C in the test set may have been diagnosed using a combination strategy that does not appear in the training set. It is challenging to fit the sample's distribution based on the generation model due to the diversity of diagnostic methods; not only are the data of different categories of subjects distributed differently, but the data of patients in the same category are also distributed differently. Both the discriminative model and generation model seek to classify patients with similar data into the same category. However, while investigating AD diagnosis in real clinical settings, we found that at the beginning of diagnosis, clinicians usually pay more attention to whether a patient's indicator is abnormal rather than the similarity between the patient's data and the data of the standard patient. In this work, inspired by the above phenomenon, we propose the abnormal pattern-based open-set recognition, named OpenAPMax, to diagnose AD patients in real-world settings. First, we selected indicators related to Alzheimer's disease through literature research. All of these indicators are only included in the basic information obtained through consultation and the information from routine cognitive tests by the outpatient department. Second, the patient's indicator value is compared with the normal value of each known category to obtain the patient's abnormal pattern. Third, a classifier that is able to distinguish AD from CN is trained. 
Finally, the extreme value theory (EVT) is utilized to model the distance between each patient's abnormal pattern and the center of their category and modify the classification probability. To summarize, our contributions are as follows: (1) Different from directly using the clinical data of subjects for open-set recognition, we introduce subjects' abnormal patterns into the model by imitating the clinician's concerns. This approach might inspire a rethinking of clinician behavior in real-world settings. To the best of our knowledge, this work is the first to use abnormal patterns in diagnosis based on open-set recognition. (2) OpenAPMax can be integrated into other models without changing the model's architecture. Compared to OpenMax<cit.> and OVRN<cit.>, OpenAPMax can provide better performance in complex diagnosis tasks. (3) The experiments show that the model with OpenAPMax performs best in clinical settings with different complexities. § RELATED WORK AD diagnosis model. Early and accurate diagnosis of Alzheimer's disease allows patients to receive timely treatment and slows the progression of symptoms<cit.>. Li et al. <cit.> proposed a classification method based on multi-cluster dense convolutional neural networks for AD diagnosis. This method can jointly learn features and classification without domain expert knowledge. Scholar et al. <cit.> proposed a new AD classification method that extracts features from defined regions of interest and performs texture analysis, and Basaia et al. <cit.> used a single cross-sectional brain structure MRI scan to predict Alzheimer's disease based on a convolutional neural network classification approach. Liu et al. <cit.> proposed a deep multitask multichannel learning framework that identifies discriminative anatomical landmarks from MR images and then extracts multiple image patches around these detected landmarks. Open-set Recognition. To address the real-world problem, many open-set recognition technologies have been proposed and are divided into discriminative models and generative models <cit.>. Alina et al. <cit.> combined the current closed-set models with multiple novelty detection strategies employed in general action classification to develop a model that recognizes previously unseen classifier behaviors and proposed a new OpenDrive & Act benchmark. Wentao et al. <cit.> proposed a new deep-evidence action recognition method that recognizes actions in an open testing set. They used an evidence-based neural network to predict class-wise evidence from a given video, forming a Dirichlet distribution that determines the multiclass probability and prediction uncertainty of the input. Rafael et al. <cit.> combined hash functions and classification methods to estimate when probe samples are known. Specifically, they combined a Partial Least Squares Network and a fully connected network to generate an open-set face recognition classification model. Ryota et al. <cit.> developed a classifier for classification reconstruction learning for open-set recognition using latent representation learning. This classifier enables robust unknown detection without compromising known classification accuracy. § AD DIAGNOSIS The subject dataset is denoted by D. Let (X_i, y_i) denote the ith sample in dataset D, where X_i={x_1, x_2, ..., x_k} is the subject's clinical data (obtained by enquiring about the subject or conducting a clinical examination), and y_i is the subject's label. 
Open-set recognition can not only classify the categories that have appeared according to the input but can also correctly judge the categories that do not appear in the input (unknown categories). There are many forms of AD diagnosis based on open-set recognition, one of which can be transformed to minimize the objective function L_o(W) as shown in Equation <ref>: L_o(W) = ∑_{i=1}^{n} l_1(ŷ_i, y_i) + l_2(W), with ŷ_i = f(W(X_i)) and ŷ_{i,un} = 1 - ∑_{j=1}^{2} ŷ_i[j]. Here f is a score modifier based on EVT <cit.>, and ŷ_{i,un} is the probability that the sample belongs to an unknown category. § METHODS §.§ The proposed model The framework of the AD diagnosis model with OpenAPMax consists of four components: the AD diagnosis clinical data encoder (Encoder), the clinical data decoder (Decoder), the Classifier, and the probability corrector OpenAPMax, as shown in Fig. <ref>. The Encoder consists of 3 layers of bidirectional LSTM (B-LSTM); it is able to accept variable-length input and merge all of the clinical data, since the length of that data changes with the diagnosis strategy. Note that all types of clinical image data are extracted into vectors using the pretrained model DenseNet201 before being input into the model <cit.>. The Decoder and Encoder structures are similar and are used to reconstruct the data; the Decoder helps the Classifier retain more features that are irrelevant to AD and CN classification and improves the recognition performance for unknown-category samples in the open environment. The Classifier consists of three dense layers and a softmax layer, which is used to classify subjects in a closed setting. To enable the model to diagnose Alzheimer's disease in the real world, an open-set recognition mechanism based on abnormal patterns, OpenAPMax, replaces the softmax layer after model training. §.§ OpenAPMax The deep learning network can be regarded as a feature extractor, and the output of the AV layer can be regarded as a characteristic of the sample. However, as the cause of Alzheimer's disease has not yet been determined, and because it is highly similar to many neurological diseases, it is difficult to distinguish between Alzheimer's disease and related diseases due to their overlapping characteristics. As shown in Figure <ref>, we introduce the abnormal pattern to modify the open-set recognition mechanism. First, we selected 14 indicators related to the diagnosis of Alzheimer's disease according to the diagnostic guidelines for Alzheimer's disease <cit.>. Note that all indicators are obtained only through inquiry or conventional cognitive testing: because different patients usually receive different diagnostic strategies, we can only select the part where all patient data overlap. Second, for each category, the normal value range of each indicator is calculated as the basis for determining whether each indicator is normal relative to this category for each patient. The abnormal pattern is a sequence of 0s and 1s: 0 represents a normal indicator, and 1 represents an abnormal indicator. Third, as Algorithm <ref> shows, OpenAPMax divides the abnormal patterns of each category into subcategories using a clustering algorithm, since a disease usually has multiple subtypes; then, it obtains centers for the category by averaging. The Weibull distribution is used to model the distance between the abnormal pattern and the center of the category. To adapt our model for open-set recognition of AD diagnosis, the meta-recognition model is used to correct the output of the softmax layer. 
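The three steps above can be condensed into a short sketch. The code below is our reading of the procedure rather than the authors' released implementation: the function names are ours, K-means stands in for the unspecified clustering algorithm, and a scipy Weibull fit stands in for the meta-recognition model; the inputs are the 0/1 abnormal-pattern vectors defined above and the two-class (AD, CN) softmax probabilities.

import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import weibull_min

def fit_category(patterns, n_clusters=3, tail_frac=0.2):
    """Cluster one category's training abnormal patterns and fit a Weibull
    to the largest pattern-to-center distances (the tail used for recalibration)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(patterns)
    centers = km.cluster_centers_
    d = np.min(np.linalg.norm(patterns[:, None, :] - centers[None], axis=-1), axis=1)
    tail = np.sort(d)[-max(3, int(tail_frac * len(d))):]
    shape, loc, scale = weibull_min.fit(tail, floc=0.0)
    return centers, (shape, loc, scale)

def openapmax(softmax_probs, pattern, models):
    """Recalibrate the closed-set probabilities (AD, CN) and append an 'unknown' score.
    `models` holds one (centers, weibull_params) pair per known category."""
    probs = np.asarray(softmax_probs, dtype=float)
    adjusted = np.empty_like(probs)
    for j, (centers, (shape, loc, scale)) in enumerate(models):
        d = np.min(np.linalg.norm(centers - pattern, axis=1))
        w = weibull_min.cdf(d, shape, loc=loc, scale=scale)   # how far in the tail the pattern lies
        adjusted[j] = probs[j] * (1.0 - w)                    # down-weight implausible categories
    unknown = max(0.0, probs.sum() - adjusted.sum())
    out = np.append(adjusted, unknown)
    return out / out.sum()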
As shown in Algorithm <ref>, OpenAPMax provides a process that modifies the softmax output by using the distance between the abnormal pattern and the centers of the different categories. It is an optional step that further enhances the weight of abnormal patterns when identifying samples of unknown categories. §.§ EVT Modeling Extreme value theory is often used to predict the likelihood of small-probability events and is an effective method for modeling model prediction scores <cit.>. This approach allows us to estimate the tail probabilities of a random variable beyond a high threshold. The Pickands-Balkema-de Haan formulation, which models probabilities conditioned on random variables exceeding high thresholds, has found widespread application in the field of open-set recognition, as exemplified by its use in studies like <cit.> and <cit.>. We focus on determining the distribution function F_W of a random variable W for values that exceed the threshold u. The conditional excess distribution function is then defined as F_U(ω) = P(W - u ≤ ω | W > u) = [F_W(u + ω) - F_W(u)] / [1 - F_W(u)], where P(·) denotes the probability measure. F_U can be well approximated by the Generalized Pareto Distribution (GPD), G(ω; ζ, μ) = 1 - (1 + ζω/μ)^{-1/ζ} if ζ ≠ 0, and G(ω; ζ, μ) = 1 - e^{-ω/μ} if ζ = 0, such that -∞ < ζ < +∞, 0 < μ < +∞, ω > 0, and ζω > -μ. § EXPERIMENTS A performance evaluation of OpenAPMax was conducted through three investigations to 1) explore the feasibility of the abnormal pattern for unknown-subject identification, 2) demonstrate the superiority of OpenAPMax in different clinical settings, and 3) assess the transplantability of OpenAPMax to models with different network structures. §.§ Dataset The data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (<http://adni.loni.usc.edu>). The ADNI was launched in 2003 as a public-private partnership led by Principal Investigator Michael W. Weiner, MD. For up-to-date information, see <http://www.adni-info.org>. The data contain study data, image data, and genetic data compiled by ADNI between 2005 and 2019. Considering the examinations commonly used and of particular concern in AD diagnosis by clinicians, 13 categories of data are selected: base information (usually obtained through consultation, including demographics, family history, medical history, etc.), cognition information (usually obtained through consultation and testing, including the Alzheimer's Disease Assessment Scale, Mini-Mental State Exam, Montreal Cognitive Assessment, etc.), cognition testing (usually obtained through testing, including the ANART, Boston Naming Test, Category Fluency-Animals, etc.), neuropsychiatric information (usually obtained through consultation, including the Geriatric Depression Scale, Neuropsychiatric Inventory, etc.), function and behavior information (usually obtained through consultation, including the Function Assessment Questionnaire, Everyday Cognitive Participant Self Report, etc.), physical and neurological examination (usually obtained through testing, including physical characteristics, vitamins, etc.), blood testing, urine testing, nuclear magnetic resonance scan (MRI), positron emission computed tomography scans with 18-FDG, positron emission computed tomography scans with AV45, gene analysis, and cerebral spinal fluid analysis. In this work, 2127 subjects with 9593 visits are included. 
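For readers who want to experiment with the tail model described in the EVT Modeling subsection above, a minimal peaks-over-threshold fit of the GPD can be written with scipy; the synthetic scores, the 90th-percentile threshold, and the variable names below are our own assumptions, not the paper's.

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
scores = rng.normal(size=5000)            # stand-in for distances / prediction scores

u = np.quantile(scores, 0.90)             # high threshold
excesses = scores[scores > u] - u         # omega = w - u, the values modeled by F_U

# Fit the GPD G(omega; zeta, mu); loc is pinned to 0 because we model excesses over u.
zeta, loc, mu = genpareto.fit(excesses, floc=0.0)

# Tail probability P(W > u + omega) = P(W > u) * (1 - G(omega))
omega = 1.0
p_tail = (scores > u).mean() * genpareto.sf(omega, zeta, loc=0.0, scale=mu)
print(zeta, mu, p_tail)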
To develop the AI model, 85% AD, and cognitively normal (CN) subjects were divided into the training set, 5% of the AD and CN subjects were divided into the validation set, and 20% AD and CN subjects, 100% MCI subjects, and 100% SMC subjects were divided into the test set. Note that the MCI and SMC are labeled as unknown to simulate an open setting. A subject's visit may require different categories of examination. In this work, every combination of those examinations represents a diagnostic strategy. The training set contains 1025 subjects with 3986 visits and generates 180682 strategies according to the collected data. The validation set contains 73 subjects with 254 visits and generates 11898 strategies according to the collected data, and the test set contains 1460 subjects with 5353 visits. In the test set, there were 35 different diagnosis strategies according to different subject situations and 40 different examination abilities of medical institutions. §.§ Experimental setup Our model was optimized using mini-batch stochastic gradient descent with Adam and a base learning rate of 0.0005 <cit.>; the model's code will be published to facilitate understanding of the implementation details. All comparison models were constructed and trained according to the needs of AD diagnosis tasks and the settings of their official code. The experiments were conducted on a Linux server equipped with Tesla P40 and Tesla P100 GPUs. §.§ Abnormal pattern According to the AD diagnosis guidelines and recommendations from the expert group in ADNI, as shown in Table <ref>, 14 indicators are selected. Among the indicators shown in Table <ref>, except for the MMSE and MOCA, which are able to directly find the normal range of the AD category and CN category in the AD diagnostic guide, the normal range of other indicators is determined by statistics. In this work, for a category, we sort the indicator values, and the normal range for a category is obtained by the 5th and 95th percentiles of the indicator values' distribution. [1]Nausea, Vomiting, Diarrhoea, Constipation, Abdominal discomfort, Sweating, Dizziness, Low energy, Drowsiness, Blurred vision, Headache, Dry mouth, Shortness of breath, Coughing, Palpitations, Chest pain, Urinary discomfort (e.g., burning), Urinary frequency, Ankle swelling, Musculoskeletal pain, Rash, Insomnia, Depressed mood, Crying, Elevated mood, Wandering, Fall, Other. [2]Nausea to Rash [3]Nausea to Other [4]The CCI scale can be found at <https://adni.bitbucket.io/reference/cci.html>. [5]CCI1 to CCI12 [6]CCI1 to CCI20 [7]The CDR scale can be found at <https://adni.bitbucket.io/reference/cdr.html>. [8]The Alzheimer's Disease Assessment Scale-Cognitive scale can be found at<https://adni.bitbucket.io/reference/adas.html>. [9]Q1 to Q11 [10]Q1 to Q13 [11]The Mini Mental State Exam scale can be found at <https://adni.bitbu cket.io/reference/mmse.html>. [12]The Montreal Cognitive Assessment scale can be found at<https://adni.bitbu cket.io/reference/moca.html>. [13]The calculation method of the Preclinical Alzheimer's Cognitive Composite can be found at <https://ida.loni.usc.edu/pages/access/studyData.jsp? categoryId=16 subCategoryId=43>. To investigate the role of abnormal patterns in identifying unknown samples in AD diagnosis, we compared the identification of unknown samples by using the similarity of patient data and the identification of unknown samples by using the familiarity of patients' abnormal patterns. 
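A sketch of how such percentile-based normal ranges and the resulting 0/1 abnormal patterns can be computed is given below (array names and the toy data are ours; as stated above, the MMSE and MoCA ranges would instead come from the diagnostic guidelines). The comparison of similarity measures itself is described next.

import numpy as np

def normal_ranges(train_values, train_labels, category, q_low=5, q_high=95):
    """Per-indicator normal range for one category: the 5th-95th percentile band
    of that indicator's values among training subjects of the category."""
    vals = train_values[train_labels == category]          # shape (n_subjects, n_indicators)
    return np.stack([np.percentile(vals, q_low, axis=0),
                     np.percentile(vals, q_high, axis=0)], axis=1)

def encode_abnormal_pattern(subject, ranges):
    """1 = indicator outside the category's normal range, 0 = inside."""
    return ((subject < ranges[:, 0]) | (subject > ranges[:, 1])).astype(int)

# Toy usage with 14 indicators (values are random placeholders, not ADNI data)
rng = np.random.default_rng(1)
X, y = rng.normal(size=(200, 14)), rng.integers(0, 2, size=200)   # 0 = CN, 1 = AD
ranges_ad = normal_ranges(X, y, category=1)
pattern = encode_abnormal_pattern(X[0], ranges_ad)
print(pattern)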
First, we directly used all of the patients' basic information and cognitive testing information to calculate the similarity between samples and the similarity between samples and the known category centers. As shown in Figure <ref>(a), we find that the distribution of unknown samples highly overlaps with that of known samples (AD and CN), and it is difficult to separate unknown samples from known samples. The reason for this phenomenon is that unknown samples contain a large number of MCI patients, while MCI patients are highly similar to AD patients. The boundary between MCI patients and AD patients is blurred, and some MCI patients will become AD patients in the near future. Second, we used the 14 indicator values selected from the basic information and cognitive testing information above to calculate the similarity between samples and the similarity between samples and known category centers. As shown in Figure <ref>(b), although the distribution of unknown samples is obviously different from that of known samples, most unknown samples still overlap with known samples. The process of selecting 14 indicators can be considered feature selection in machine learning, which can help improve the performance of model classification. However, it is not very helpful for identifying unknown samples. Finally, according to the 14 selected indicators above and the normal indicator range of each category, we obtained the abnormal pattern of each sample and used it to calculate the similarity between samples and the similarity between samples and known category centers. As shown in Figure <ref>(c), the samples of known categories are distributed in the upper left and lower right corners of the figure. At the same time, the remaining space is occupied by samples of unknown categories. Although samples of known categories and samples of unknown categories still overlap, most samples of unknown categories have obvious differences from known samples, which means that abnormal patterns are important information for identifying unknown patients. §.§ Methods for comparison We validate the effectiveness of OpenAPMax by comparing it with related works: the discriminative models, OpenMax (2016) <cit.> and OVRN (2022) <cit.>, and the generative models, Gen-Dis (2020) <cit.>, in different clinical settings. In addition, we also compared the performance of OpenAPMax with the related work image-based models (DSA-3D-CNN (2016) <cit.>, CNN-LRP (2019) <cit.>, VoxCNN-ResNet (2017) <cit.>, Dynamic-VGG (2020) <cit.>) and the multimodal inputs model (FCN-MLP (2020) <cit.>). §.§ Performances in different methods To investigate the performance of OpenAPMax, we built a simple screening dataset Sc_Dataset, a single diagnostic strategy dataset S_Dataset and a multiple diagnostic strategy dataset M_Dataset. For Sc_Dataset, every subject only contains basic information and cognitive testing information. Sc_Dataset can be collected outside of the hospital. For S_Dataset, every subject contains basic information, cognitive testing information, and MRI. S_Dataset construction rules are the dataset construction rules that most of the recent research has followed. M_Dataset contains 13 categories of commonly used information during AD diagnosis. and every sample contains one or more categories according to the diagnosis strategies. Four models, the OpenMax-based discriminative model, the generative model, the OVRN model, and the OpenAPMax-based discriminative model, were run on Sc_Dataset, S_Dataset and M_Dataset for comparison. 
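The comparisons reported below quote per-category sensitivity and AUC scores; one minimal way to compute such numbers is sketched here (scikit-learn based, with an assumed label convention of 0 = AD, 1 = CN, 2 = unknown, matching the three-way output of OpenAPMax).

import numpy as np
from sklearn.metrics import roc_auc_score

def per_class_sensitivity(y_true, y_pred, n_classes=3):
    # fraction of samples of each class that are predicted as that class
    return [np.mean(y_pred[y_true == k] == k) for k in range(n_classes)]

def one_vs_rest_auc(y_true, scores, k):
    # scores: (n_samples, 3) probabilities in the order (AD, CN, unknown)
    return roc_auc_score((y_true == k).astype(int), scores[:, k])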
As shown in Table <ref>, OpenAPMax achieved the best performance in out-of-hospital screening, the traditional single-diagnosis strategy setting, and the real-world multi-diagnosis strategy clinical setting. The performance of OpenMax and the generative model Gen-Dis are poor in different settings, especially in real-world multi-strategy settings. Note that OpenMax, Gen-Dis, and OVRN all sacrifice the ability to recognize unknown samples (0.29, 0.33, and 0.51) in exchange for the ability to recognize known samples in a real-world setting. The reasons for this phenomenon may be as follows: (1) The subjects' basic information and cognitive information are subjective, and these data may present a complex distribution according to different patients and different collectors; (2) MRI data from more than 60 institutions with different instruments makes the data distribution very complex; and (3) the training set has more than 4000 different diagnostic strategies, which further increases the complexity of the data distribution. Comparing the performance of different models in different clinical settings, the use of multimodal data and multi-strategy combination combinations can enhance the models performance. However, compared with AD diagnosis in the closed world, AD diagnosis in the real world still has huge room for improvement, and further exploration is needed to promote the implementation of AD diagnosis in real clinical practice  <cit.>. §.§ OpenAPMax vs. OVRN vs. OpenMax with different models To investigate the transferability of OpenAPMax, we applied OpenMax, OVRN and OpenAPMax to several AD diagnostic models that were trained on closed sets. As shown in Table <ref>, for all compared models, OpenAPMax demonstrates great advantages over OpenMax and OVRN. Compared with AD subjects, all models with OpenAPMax have better recognition performance for CN subjects. Furthermore, except for Dynamic-VGG-OpenAPMax, the AUC scores of all other compared models are larger than 0.9. The AUC scores of CN subjects in the results generated by the OpenAPMax method are generally higher than those of AD. The reason for the above phenomenon may be that subjects of unknown categories are more similar to AD subjects but different from CN subjects. Thus, CN subjects are easy to identify. Compared with OpenMax and OVRN, OpenAPMax exhibits excellent sensitivity to subjects of unknown categories and maintains acceptable sensitivity for subjects of AD and CN, which proves once again that abnormal patterns play an important role in excluding unknown subjects. §.§ Potential clinical applications In real-world clinical settings, both the subject and medical institution are complex and various. There is no one-size-fits-all strategy for diagnosing every subject in every medical institution. For example, AD subjects usually need a nuclear magnetic resonance scan, while CN subjects usually do not, and subjects in hospitals in developed regions may receive a PET examination, while those in non-developed regions do not suffer due to the lack of PET equipment in hospitals. In this work, to model the AD diagnosis task in a real-world setting, the classification model simultaneously modelled the samples under different diagnosis strategies. In the training set, 180682 diagnosis strategies generated 180682 samples for model training. Note that there are 4096 different diagnosis strategies in the training set since similar subjects may share the same diagnosis strategy. 
In the test set, 1460 subjects with 5353 visits were diagnosed by 5353 strategies that correspond to 35 different diagnosis strategies . The training set contains almost all possible AD diagnosis strategies, while the test set contains all commonly used AD diagnostic strategies. Compared with the fixed diagnosis strategy model, our model has great potential to be applied in complex and changeable clinical settings. Although the accuracy of AD diagnosis in the closed-set setting is close to 100%, the accuracy of AD diagnosis in the open-set environment is only approximately 80% <cit.>. At present, it seems that the AD diagnostic model based on open-set recognition has a high probability of misdiagnosis, and we need to further improve the model's performance. However, there is no clear consensus on the model's accuracy before it can be applied to the real-world clinical setting. Thus, it is also an urgent problem to explore the standard of the AD model in clinical settings. § CONCLUSION The first contribution of this paper is that it provides a new perspective for identifying unknown subjects from known subjects; this perspective integrates the prior knowledge of clinicians through abnormal patterns. We propose the abnormal-pattern-based open-set recognition mechanism OpenAPMax, which considers the abnormal pattern of subjects rather than the similarity shown by subjects' raw data. The experimental results demonstrate that OpenAPMax directly yields state-of-the-art performance on AD diagnosis in different clinical settings. Furthermore, the experiment shows high portability in that OpenAPMax is able to integrate into other models without changing their architecture. Finally, we expect that subjects' abnormal patterns will serve as useful information for developing other disease diagnosis models in an open-set setting or for related analysis in future works, similar to the role it plays in this paper. § ACKNOWLEDGEMENTS § ACKNOWLEDGEMENT This work is supported by the Project of the National Natural Science Foundation of China (Grant No. 61967002), the Project of Guangxi Science and Technology (Grant No. GuiKeAD20297004), and the Key Program of the National Natural Science Foundation of China (Grant No. U21A20474). IEEEtran [ < g r a p h i c s > ]Yunyou Huang received a B.S. degree and an M.S. degree from Guangxi Normal University in 2012 and 2015, respectively, and a PhD degree in computer software and theory from the University of Chinese Academy of Sciences in 2020. He has been an assistant professor in computer science at Guangxi Normal University since 2020. His research interests focus on big data and machine learning. [ < g r a p h i c s > ]Xianglong Guan received a B.S. degree from the Tongda College of Nanjing University of Posts and Telecommunications in 2020. He is now studying for an M.S. degree at the School of Electronics and Information Engineering & Integration School of Guangxi Normal University. His research interests focus on artificial medical intelligence and open-set recognition. [ < g r a p h i c s > ]Xiangjiang Lureceived a B.S. degree in Software Engineering from the Guilin University of Technology, China, in 2021. He is currently pursuing an M.S. degree at the School of Computer Science and Engineering & School of Software, Guangxi Normal University, China. His research interests include clinical AI and medical image analysis. [ < g r a p h i c s > ]Xiaoshuang Liangreceived a B.S. 
degree in Computer Science and Technology from Guangxi Normal University, China, in 2021. She is currently pursuing the Master degree at the School of Computer Science and Engineering & School of Software, Guangxi Normal University, China. Her research interests include Clinical AI, Interpretability of Clinical Medicine. [ < g r a p h i c s > ]Xiuxia Miaoreceived a B.S. degree in Software Engineering from Hezhou University, China, in 2020. She is currently pursuing a Master's degree at the School of Computer Science and Engineering & School of Software, Guangxi Normal University, China. Her research interests include clinical benchmark and clinical benchmark test systems. [ < g r a p h i c s > ]Jiyue Xiereceived a B.S. degree from the Guangxi Normal University for Nationalities in July 2022 and is currently studying for a Master of Software Engineering at Guangxi Normal University. His research interests include deep learning. [ < g r a p h i c s > ]Wenjing Liureceived a B.S. degree in Software Engineering from the Hunan Institute of Science and Technology, China, in 2017. She is pursuing her M.S. degree at the Computer Science and Engineering & School of Software at Guangxi Normal University, China. Her research interests include clinical AI and virtual patients. [ < g r a p h i c s > ]Li Mareceived an M.S. degree in computer software and theory in 2009. She is currently a full-time computer teacher at Guilin Medical University. Her current research interests include medical information processing, knowledge graphs, and artificial intelligence ethics. [ < g r a p h i c s > ]Suqin Tang, PhD and Professor, received a B.S. and M.S. degrees from Guangxi Normal University, Guilin, China. In addition, she received a PhD degree from Central South University, Changsha, China. Her main research interests include ontology, description logic, knowledge engineering, and intelligent tutoring systems. [ < g r a p h i c s > ]Zhifei Zhang received an M.S. degree in physiology and a PhD degree in aetiology from Capital Medical University, Beijing, China, in 2003 and 2011, respectively, where she is currently an Associate Professor with the Beijing Key Laboratory of Respirology. Her research interests include medical big data and circulatory physiology. [ < g r a p h i c s > ]Jianfeng Zhanis a Full Professor at the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), and University of Chinese Academy of Sciences (UCAS). He is also the director of the Software Systems Labs, ICT, CAS. He has supervised over 90 graduate students, postdocs, and engineers in the past two decades. His research areas span from chips and systems to benchmarks. A common thread is benchmarking, designing, implementing, and optimizing parallel and distributing systems. He has made strong and effective efforts to transfer his academic research into advanced technology to impact general-purpose production systems. Several technical innovations and research results, including 36 patents from his team have been widely adopted in benchmarks, operating systems and cluster and cloud system software with direct contributions to the advancement of parallel and distributed systems in China and throughout the world. Dr. Jianfeng Zhan founded and chairs BenchCouncil, and he has served as the IEEE TPDS Associate Editor since 2018. 
He received the second-class Chinese National Technology Promotion Prize in 2006, the Distinguished Achievement Award of the Chinese Academy of Sciences in 2005, and the IISWC best paper award in 2013. Jianfeng Zhan received his B.E. in Civil Engineering and MSc in Solid Mechanics from Southwest Jiaotong University in 1996 and 1999, and his PhD in Computer Science from the Institute of Software, CAS and UCAS in 2002.
http://arxiv.org/abs/2307.02464v1
20230705173801
AxonCallosumEM Dataset: Axon Semantic Segmentation of Whole Corpus Callosum cross section from EM Images
[ "Ao Cheng", "Guoqiang Zhao", "Lirong Wang", "Ruobing Zhang" ]
eess.IV
[ "eess.IV", "cs.CV" ]
The electron microscope (EM) remains the predominant technique for elucidating intricate details of the animal nervous system at the nanometer scale. However, accurately reconstructing the complex morphology of axons and myelin sheaths poses a significant challenge. Furthermore, the absence of publicly available, large-scale EM datasets encompassing complete cross sections of the corpus callosum, with dense ground truth segmentation for axons and myelin sheaths, hinders the advancement and evaluation of holistic corpus callosum reconstructions. To surmount these obstacles, we introduce the AxonCallosumEM dataset, comprising a 1.83×5.76 mm EM image captured from the corpus callosum of the Rett Syndrome (RTT) mouse model, which entails extensive axon bundles. We meticulously proofread over 600,000 patches at a resolution of 1024×1024, thus providing a comprehensive ground truth for myelinated axons and myelin sheaths. Additionally, we extensively annotated three distinct regions within the dataset for the purposes of training, testing, and validation. Utilizing this dataset, we develop a fine-tuning methodology that adapts the Segment Anything Model (SAM) to EM image segmentation tasks, called EM-SAM, which outperforms other state-of-the-art methods. Furthermore, we present the evaluation results of EM-SAM as a baseline. § INTRODUCTION The intricate network of nerves within the brain forms a complex web of interactions, enabling the remarkable cognitive abilities exhibited by higher organisms. Understanding the structural organization of these nerves and their interplay is of paramount importance in deciphering the functioning of the brain. With recent advancements in electron microscopy (EM) technology, it has become feasible to acquire neuronal structures of the brain <cit.>. Utilizing high-throughput and accurate segmentation methods <cit.>, several gray matter EM datasets have been reconstructed and published <cit.>. However, only a few unreleased EM datasets <cit.> focus on the white matter's corpus callosum, a prominent structure interconnecting the two cerebral hemispheres that enables the exchange of information and the coordination of various cognitive processes between the brain's left and right sides and contains vast quantities of axon bundles with complex morphology. To the best of our knowledge, no existing EM reconstruction from any whole corpus callosum cross section provides densely and fully proofread ground truth for myelinated axons and myelin sheaths. Moreover, current state-of-the-art methods from other public EM datasets cannot overcome the challenges of the corpus callosum reconstruction task. As shown in Fig. <ref>, the morphology of axons varies significantly across the three regions of the corpus callosum, making state-of-the-art neuron segmentation methods based on convolutional neural networks (CNN) inadequate. In Fig. <ref> (a), we demonstrate the challenge posed by scale differences and the dense distribution of myelinated axons, which require segmentation methods with large perceptual fields to accurately categorize pixels as inner cell or background. Additionally, Fig. 
<ref> (b) highlights the need for segmentation methods to fully understand the semantic information of myelinated axons and their myelin sheaths due to the presence of special cases such as demyelination and the Node of Ranvier. Finally, Fig. <ref> (c) showcases the long-range axons and complex morphology present in the corpus callosum. Regions with long-range axons often lack distinct boundary features, necessitating the development of more robust methods to differentiate these axons from the common circular axons. Therefore, we need a large-scale corpus callosum dataset to evaluate current methods and foster new researches to address the these challenges. To aim this, we have created a large-scale benchmark for a 2D axon semantic segmentation dataset called AxonCallosumEM. Our dataset is the Rett Syndrome (RTT) mouse model <cit.>, comprising an extensive collection of over 600,000 patches. Each patch is rendered at a resolution of 1024×1024 with a pixel size of 4 nm. Moreover, we provide a comprehensive ground truth for myelinated axons and myelin sheaths. Our dataset demonstrates the reconstruction of the entire corpus callosum cross section and showcases various morphologies and distributions of myelinated axons and myelin sheath, enabling further comparative studies across different mammalian species and mouse models. Furthermore, we have fine-tuned the state-of-the-art nature images segmentation model, Segment Anything Model (SAM <cit.>), to adapt specifically to EM segmentation tasks, thereby transforming it into EM-SAM. This adaptation effectively overcomes the majority of challenges encountered in AxonCallosumEM, while also presenting opportunities for exploring the adaptability of models originally designed for nature images to the biomedical applications. §.§ Related work For the segmentation tasks, conventional methods such as morphology-based processing <cit.>, region growing and labeling <cit.>, watershed <cit.>, and contours-based analysis <cit.> are proposed. Recently, deep-learning based methods (e.g UNet<cit.>, FPN<cit.>) have been proven capable of EM segmentation tasks and outperform the conventional methods. Moreover, predicting affinity map <cit.> and additional targets <cit.> are proposed for further improvement. Such approaches are used to process 2D <cit.> and 3D <cit.> EM datasets. However, as the limitation of perceptive field from convolution kernel, it always fails on segmenting large-scale cellular structures. More recently, vision transformer (Vit) <cit.>, which is first used in classification tasks and are adopted from sequence-to-sequence modeling in natural language processing, are proposed to solve such issues utilizing its long-range dependency. In the field of medical image segmentation, Vit-based <cit.> network utilized the transformer blocks as strong encoder and outperform CNN-based methods. In order to maximize the capability of feature extraction of vit-based encoder, Tang et al. <cit.> and Cheng et al. <cit.> proposed the masked image modeling (MIM) pre-training and fine-tuning paradigm on Swin <cit.> and Vit <cit.> respectively. The MIM method (e.g DAE <cit.>, MAE<cit.>, BeiT<cit.>) is a self-supervised learning method that learns representations from the image itself. In the field of CV, the pre-training methods <cit.> continue to develop and several large models <cit.> have proven the domination on downstream tasks. § AXONCALLOSUMEM DATASET The proposed AxonCallosumEM dataset encompasses the entire cross section of the mouse corpus callosum. 
We conducted dense annotations of myelinated axons and myelin sheaths in order to evaluate semantic segmentation methods and facilitate biomedical analysis. §.§ Dataset discription To construct the dataset, we utilized the Rett Syndrome (RTT) mouse model, specifically the Mecp2-mutant mice <cit.>. The EM images were acquired at a resolution of 4nm per pixel, encompassing the entire cross section of the corpus callosum, including the genu, the body, and the splenium of the corpus callosum <cit.>. The dataset was rendered with a patch size of 1024×1024 at 4nm per pixel resolution. Overall, the dataset comprises 448 and 1408 patches along the x and y directions, respectively, resulting in a total of over 600,000 patches. §.§ Dataset annotation For reconstruction purposes, we adopted a semi-automatic approach. Initially, we manually annotated myelinated axons and myelin sheaths within a region of 224×160 patches, with an 8nm per pixel resolution. Considering the complex morphology of axons, the annotation process commenced across the entire x-axis, as the features exhibited significant variations (see Fig. <ref>) along the y-axis. Subsequently, we fine-tuned the image encoder of the vit-based model <cit.> using the public released SAM model <cit.> (SAM-Base) with the annotated labels. This allowed us to predict the remaining region along the y-axis. The proofreader then inspected and manually corrected errors in the VAST <cit.>. Following this pipeline, we iteratively accumulated ground truth semantic segmentation by gradually expanding the annotated patches along the y-axis to reconstruct the entire cross section of the corpus callosum. §.§ Dataset analysis The segmented mouse corpus callosum is shown in Fig. <ref> (a). For biomedical analysis, we downsampled the labels to a resolution of 16nm per pixel. Additionally, a binary mask was applied to exclusively retain the region of the corpus callosum for subsequent computations, based on a patch size of 1024×1024. The following aggregate metrics <cit.> were computed: * Axon diameter mean and standard deviation. Arithmetic mean and standard deviation of the distribution of equivalent axon diameters (computed for each axon object as √((4× Area/π))). * Axon density. Number of axons per patches. * Axon volume fraction (AVF). The ratio between area of axons and total area of the region. * Myelin volume fraction (MVF). The ratio between area of myelin and total area of the region. * G-ratio. The ratio between axon diameter and myelinated fiber (axon + myelin) diameter, which can be estimated with the following: 1/√(1+(MVF/AVF)). The results are shown in Fig. <ref> (b) to (f). To enhance contrast, we applied max-min normalization to all the results. It was observed that the average g-ratio was approximately 0.7. Note that the distribution map of g-ratio of the RTT mouse model exhibited different results with known anatomy of other mouse models <cit.>. § METHODS We categorized state-of-the-art segmentation methods not only on EM datasets, but also from challenges involving natural images. The state-of-the-art methods on EM dataset <cit.> employ CNN to predict binary map<cit.>, affinity map <cit.>, and extra targets include contours<cit.> and distance <cit.>. As shown in Fig. <ref>, CNN-based methods are unable to overcome certain challenges, including scale differences, Node of Ranvier, and long-range axons. Hence, we employed a vision transformer (ViT)-based model to address these limitations. 
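Before turning to the encoder comparison, note that the aggregate metrics defined in the Dataset analysis subsection above can be reproduced from a semantic label map with a few lines. The sketch below is our own illustration: it assumes a 2D label array with our own label convention (1 = myelinated axon, 2 = myelin sheath) and the 16 nm pixel size of the downsampled labels, and uses scipy's connected-component labelling for the per-axon equivalent diameters.

import numpy as np
from scipy import ndimage

AXON, MYELIN = 1, 2   # assumed label ids in the semantic map

def aggregate_metrics(label_map, pixel_um=0.016):
    """Axon diameter stats, axon density, AVF, MVF and g-ratio for one patch/region."""
    axon = label_map == AXON
    myelin = label_map == MYELIN
    cc, n_axons = ndimage.label(axon)                       # individual axon objects
    areas = ndimage.sum(axon, cc, index=np.arange(1, n_axons + 1)) * pixel_um**2
    diam = np.sqrt(4.0 * areas / np.pi)                     # equivalent diameters
    total = label_map.size
    avf, mvf = axon.sum() / total, myelin.sum() / total     # area (volume) fractions
    g_ratio = 1.0 / np.sqrt(1.0 + mvf / avf)                # as defined in the text
    return {"diam_mean": diam.mean(), "diam_std": diam.std(),
            "axon_density": n_axons, "AVF": avf, "MVF": mvf, "g_ratio": g_ratio}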
We compared the publicly available pre-trained ViTs, including MAE <cit.> and BEiT <cit.>. Additionally, we fine-tuned the image encoder at the base size of large-scale natural image segmentation model SAM <cit.> to EM-SAM. It is important to note that unlike <cit.>, we removed prompt encoder and the neck part of image encoder from SAM's <cit.>, as our automatic segmentation pipeline does not involve any prompts. For more detailed information about the models, we refer the readers to the original papers. § EXPERIMENTS §.§ Implementation Details To ensure a fair comparison of ViT-based models, we utilized the same decoder structure but incorporated 2D convolutional layers as proposed in <cit.>. During training, we applied the same data augmentation techniques and employed the Warmup Cosine Decay learning rate schedule <cit.>. We used the SGD optimizer and updated all the parameters, including the encoder. The total number of training iterations was set to 200,000 with a mini-batch size of 2. The input size for the pre-trained ViTs from MAE <cit.> and BEiT <cit.> was 224x224, while for EM-SAM <cit.>, it was 1024x1024. We employed Binary Cross-Entropy (BCE) loss during training and conducted the training on 2 NVIDIA 3090 GPUs. As for the CNN-based method, we replicated the state-of-the-art multi-task learning-based U-Net <cit.> in its 2D version. Moreover, we adopted mean Intersection over Union (mIoU) as the evaluation metric for qualitative analysis. §.§ Benchmark results on AxonCallosumEM dataset We assess the performance of the previous state-of-the-art approach (UNet <cit.>) and fine-tune the latest pre-training methods based on vit (MAE <cit.>, BEiT <cit.>) on our AxonCallosumEM dataset. Additionally, we investigate the influence of random initialization (scratched ViT <cit.>) and input size (EM-SAM <cit.>). Specifically, the AxonCallosumEM dataset is divided into training, validation, and testing subsets, consisting of the genu, the body, and the splenium of the corpus callosum. We determine the hyperparameters for each model using the validation dataset and evaluate their final performance on the testing dataset. As illustrated in Table <ref>, the UNet-based method with multi-target prediction achieved the lowest mIoU. In Fig. <ref> (b), the UNet method demonstrates limitations in segmenting large or heterogeneous objects due to its limited receptive field. Nevertheless, it remains a reliable method as it accurately segments objects of common sizes. The ViT-based method <cit.> exhibits a 0.028 improvement in mIoU compared to UNet <cit.>. Despite the smaller input size of the ViT-based method <cit.> compared to UNet <cit.>, its ability to capture long-range dependencies through self-attention overcomes the challenge of scale differences among axons. Additionally, we explore the impact of fine-tuning publicly available pre-trained models and large-scale segmentation models trained on natural images. In Table <ref>, fine-tuning the pre-trained MAE <cit.> and BEiT <cit.> models yields enhancements of 0.019 and 0.015, respectively, compared to training from scratch. Furthermore, the fine-tuned EM-SAM <cit.> achieves state-of-the-art performance due to its original enormously large training dataset and 4× larger input size, comparing with MAE <cit.> and BEiT <cit.>. As shown in Fig. <ref> (a), the fine-tuned EM-SAM <cit.> not only accurately segments large objects but also effectively handles challenges related to long-range morphology and demyelination. 
However, since the model does not fully comprehend the semantic information of myelinated axons and myelin sheaths, some segmentation errors persist. In Fig. <ref> (c), the EM-SAM <cit.> model misclassifies the gaps inside the myelin sheath or the cell nucleus with thicker membranes as myelinated axons, and struggles to identify boundaries and morphology with unclear features. Please noted that, only nature images were fed to the model of MAE <cit.>, BEiT <cit.>, and EM-SAM <cit.> before our fine-tuning procedure. Moreover, our experimental results provide support for the adaption of large-scale natural image models to biomedical image modeling, enabling positive impacts for downstream tasks through fine-tuning (See Fig. <ref> (a) and (d)) <cit.>. § CONCLUSION In this paper, we introduce a large-scale dataset for semantic segmentation of myelinated axons and myelin sheaths, which reveals the limitations of current state-of-the-art methods in handling complex morphologies. Similar to ImageNet <cit.> for natural images, our densely annotated 2D AxonCallosumEM dataset holds potential for various applications beyond its original task, such as pre-training; adaptability across nature images modeling and biomedical images modeling; evaluating latest methods; and biomedical analyses across mouse models. Moreover, our EM-SAM <cit.> provides the idea of adapting large model of nature images processing to automatic high-throughput method of EM images processing. § CONFLICT OF INTEREST STATEMENT The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. § ACKNOWLEDGMENTS We would like to thank the Prof. Qiu Zilong, from Shanghai Institutes for Biological Sciences, for providing the mouse of Rett Syndrome (RTT) model. unsrt 10 eberle2015em AL Eberle, S Mikula, R Schalek, JW Lichtman, ML Tate, and D Zeidler. High-resolution, high-throughput imaging with a multibeam scanning electron microscope. Journal of Microscopy, 2018. ronneberger2015unet Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015. lee2017superhuman Kisuk Lee, Jonathan Zung, Peter Li, Viren Jain, and H Sebastian Seung. Superhuman accuracy on the snemi3d connectomics challenge. arXiv preprint arXiv:1706.00120, 2017. lucchi2011supervoxel Aurélien Lucchi, Kevin Smith, Radhakrishna Achanta, Graham Knott, and Pascal Fua. Supervoxel-based segmentation of mitochondria in em image stacks with learned shape features. IEEE transactions on medical imaging, 31(2):474–486, 2011. wei2020mitoem Donglai Wei, Zudi Lin, Daniel Franco-Barranco, Nils Wendt, Xingyu Liu, Wenjie Yin, Xin Huang, Aarush Gupta, Won-Dong Jang, Xueying Wang, et al. Mitoem dataset: large-scale 3d mitochondria instance segmentation from em images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 66–76. Springer, 2020. wei2021axonem Donglai Wei, Kisuk Lee, Hanyu Li, Ran Lu, J Alexander Bae, Zequan Liu, Lifu Zhang, Márcia dos Santos, Zudi Lin, Thomas Uram, et al. Axonem dataset: 3d axon instance segmentation of brain cortical regions. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part I 24, pages 175–185. Springer, 2021. 
abdollahzadeh2021deepacson Ali Abdollahzadeh, Ilya Belevich, Eija Jokitalo, Alejandra Sierra, and Jussi Tohka. Deepacson automated segmentation of white matter in 3d electron microscopy. Communications biology, 4(1):179, 2021. vashi2019treating Neeti Vashi and Monica J Justice. Treating rett syndrome: from mouse models to human therapies. Mammalian Genome, 30, 2019. kirillov2023segment Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. cuisenaire1999automatic Olivier Cuisenaire, Eduardo Romero, Claude Veraart, and Benoit MM Macq. Automatic segmentation and measurement of axons in microscopic images. In Medical Imaging 1999: Image Processing, volume 3661, pages 920–929. SPIE, 1999. romero2000automatic Eduardo Romero, Olivier Cuisenaire, Jean-François Denef, Jean Delbeke, Benoît Macq, and Claude Veraart. Automatic morphometry of nerve histological sections. Journal of neuroscience methods, 97(2):111–122, 2000. more2011semi Heather L More, Jingyun Chen, Eli Gibson, J Maxwell Donelan, and Mirza Faisal Beg. A semi-automated method for identifying and measuring myelinated nerve fibers in scanning electron microscope images. Journal of neuroscience methods, 201(1):149–158, 2011. zhao2010automatic Ximei Zhao, Zhenkuan Pan, Jinyan Wu, Guomin Zhou, and Yanjun Zeng. Automatic identification and morphometry of optic nerve fibers in electron microscopy images. Computerized Medical Imaging and Graphics, 34(3):179–184, 2010. beucher1992watershed Serge Beucher. The watershed transformation applied to image segmentation. Scanning Microscopy, 1992(6):28, 1992. wang2012segmentation Yi-Ying Wang, Yung-Nien Sun, Chou-Ching K Lin, and Ming-Shaung Ju. Segmentation of nerve fibers using multi-level gradient watershed and fuzzy systems. Artificial intelligence in medicine, 54(3):189–200, 2012. begin2014automated Steve Bégin, Olivier Dupont-Therrien, Erik Bélanger, Amy Daradich, Sophie Laffray, Yves De Koninck, and Daniel C Côté. Automated method for the segmentation and morphometry of nerve fibers in large-scale cars images of spinal cord tissue. Biomedical optics express, 5(12):4145–4161, 2014. zaimi2016axonseg Aldo Zaimi, Tanguy Duval, Alicja Gasecka, Daniel Côté, Nikola Stikov, and Julien Cohen-Adad. Axonseg: open source software for axon and myelin segmentation and morphometric analysis. Frontiers in neuroinformatics, 10:37, 2016. lin2017fpn Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 936–944, 2017. lin2021nucmm Zudi Lin, Donglai Wei, Mariela D Petkova, Yuelong Wu, Zergham Ahmed, Silin Zou, Nils Wendt, Jonathan Boulanger-Weill, Xueying Wang, Nagaraju Dhanyasi, et al. Nucmm dataset: 3d neuronal nuclei instance segmentation at sub-cubic millimeter scale. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 164–174. Springer, 2021. zaimi2018axondeepseg Aldo Zaimi, Maxime Wabartha, Victor Herman, Pierre-Louis Antonsanti, Christian S Perone, and Julien Cohen-Adad. Axondeepseg: automatic axon and myelin segmentation from microscopy data using convolutional neural networks. Scientific reports, 8(1):3816, 2018. 
motta2019dense Alessandro Motta, Manuel Berning, Kevin M Boergens, Benedikt Staffler, Marcel Beining, Sahil Loomba, Philipp Hennig, Heiko Wissler, and Moritz Helmstaedter. Dense connectomic reconstruction in layer 4 of the somatosensory cortex. Science, 366(6469):eaay3134, 2019. Vaswani2017transformer Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. Dosovitskiy2021google_transformer_experiments Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. hatamizadeh2022unetr Ali Hatamizadeh, Yucheng Tang, Vishwesh Nath, Dong Yang, Andriy Myronenko, Bennett Landman, Holger R Roth, and Daguang Xu. Unetr: Transformers for 3d medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 574–584, 2022. hatamizadeh2022swinunetr Ali Hatamizadeh, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger R Roth, and Daguang Xu. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In International MICCAI Brainlesion Workshop, pages 272–284. Springer, 2022. tang2022self3dswin Yucheng Tang, Dong Yang, Wenqi Li, Holger R Roth, Bennett Landman, Daguang Xu, Vishwesh Nath, and Ali Hatamizadeh. Self-supervised pre-training of swin transformers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20730–20740, 2022. cheng2023learning Ao Cheng, Jiahao Shi, Lirong Wang, and Ruobing Zhang. Learning the heterogeneous representation of brain's structure from serial sem images using a masked autoencoder. Frontiers in Neuroinformatics, 17:1118419, 2023. liu2021swin Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, pages 10012–10022, 2021. vincent2008dae Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103, 2008. vincent2010dae Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol, and Léon Bottou. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of machine learning research, 11(12), 2010. he2021mae Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In arXiv, 2021. bao2021beit Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021. peng2022beit Zhiliang Peng, Li Dong, Hangbo Bao, Qixiang Ye, and Furu Wei. Beit v2: Masked image modeling with vector-quantized visual tokenizers. arXiv preprint arXiv:2208.06366, 2022. zhou2021ibot Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. ibot: Image bert pre-training with online tokenizer. arXiv preprint arXiv:2111.07832, 2021. 
Wei_2022_maskedfeat Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14668–14678, June 2022. xie2022simmim Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9653–9663, 2022. caron2021dino Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the International Conference on Computer Vision (ICCV), 2021. oquab2023dinov2 Maxime Oquab, Timothée Darcet, Theo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Russell Howes, Po-Yao Huang, Hu Xu, Vasu Sharma, Shang-Wen Li, Wojciech Galuba, Mike Rabbat, Mido Assran, Nicolas Ballas, Gabriel Synnaeve, Ishan Misra, Herve Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision, 2023. johnson2023merged G Allan Johnson, Yuqi Tian, David G Ashbrook, Gary P Cofer, James J Cook, James C Gee, Adam Hall, Kathryn Hornburg, Catherine C Kaczorowski, Yi Qi, et al. Merged magnetic resonance and light sheet microscopy of the whole mouse brain. Proceedings of the National Academy of Sciences, 120(17):e2218617120, 2023. berger2018vast Daniel R Berger, H Sebastian Seung, and Jeff W Lichtman. Vast (volume annotation and segmentation tool): efficient manual and semi-automatic labeling of large 3d image stacks. Frontiers in neural circuits, 12:88, 2018. west2018experimental Kathryn L West, Nathaniel D Kelm, Robert P Carson, Daniel C Alexander, Daniel F Gochberg, and Mark D Does. Experimental studies of g-ratio mri in ex vivo mouse brain. Neuroimage, 167:366–371, 2018. arnett2001tnfalpha Heather A Arnett, Jeff Mason, Mike Marino, Kinuko Suzuki, Glenn K Matsushima, and Jenny P-Y Ting. Tnfα promotes proliferation of oligodendrocyte progenitors and remyelination. Nature neuroscience, 4(11):1116–1122, 2001. mason2001episodic JL Mason, C Langaman, P Morell, K Suzuki, and GK Matsushima. Episodic demyelination and subsequent remyelination within the murine central nervous system: changes in axonal calibre. Neuropathology and applied neurobiology, 27(1):50–58, 2001. ma2023segment Jun Ma and Bo Wang. Segment anything in medical images. arXiv preprint arXiv:2304.12306, 2023. wu2023medical Junde Wu, Rao Fu, Huihui Fang, Yuanpei Liu, Zhaowei Wang, Yanwu Xu, Yueming Jin, and Tal Arbel. Medical sam adapter: Adapting segment anything model for medical image segmentation. arXiv preprint arXiv:2304.12620, 2023. loshchilov2016sgdr Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. goyal2017warmup Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. In CVPR, 2017. raghu2019transfusion Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Transfusion: Understanding transfer learning for medical imaging. Advances in neural information processing systems, 32, 2019. 
frid2018improving Maayan Frid-Adar, Avi Ben-Cohen, Rula Amer, and Hayit Greenspan. Improving the segmentation of anatomical structures in chest radiographs using u-net with an imagenet pre-trained encoder. In Image Analysis for Moving Organ, Breast, and Thoracic Images: Third International Workshop, RAMBO 2018, Fourth International Workshop, BIA 2018, and First International Workshop, TIA 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16 and 20, 2018, Proceedings 3, pages 159–168. Springer, 2018.

deng2009imagenet Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.