Dataset columns:
- entry_id: string (length 33)
- published: string (length 14)
- title: string (length 15–199)
- authors: list
- primary_category: string (length 5–18)
- categories: list
- text: string (length 1–461k)
http://arxiv.org/abs/2307.02932v1
20230706114322
When No-Rejection Learning is Optimal for Regression with Rejection
[ "Xiaocheng Li", "Shang Liu", "Chunlin Sun", "Hanzhao Wang" ]
cs.LG
[ "cs.LG" ]
Authors alphabetically ordered. Learning with rejection has been a prototypical model for studying the interaction between humans and AI on prediction tasks. The model has two components, a predictor and a rejector, working in a team. Upon the arrival of a sample instance, the rejector first decides whether to accept or reject the sample; if accepted, the predictor fulfills the prediction task just as in the standard machine learning problem, and if rejected, the prediction of the sample is deferred to humans. The learning problem thus requires estimating a predictor and a rejector simultaneously from data. This changes the structure of the conventional loss function and often results in non-convexity and inconsistency issues. For the classification with rejection problem, several works develop surrogate losses for jointly learning the predictor and the rejector with provable consistency guarantees; in parallel, there has been less work for the regression counterpart. In this short note, we study the regression with rejection (RwR) problem and investigate in particular the no-rejection learning strategy, which treats the RwR problem just as a standard regression problem to learn the predictor. We first establish that the suboptimality of the no-rejection learning strategy observed empirically in the literature can be mitigated by enlarging the richness of the underlying function class for the predictor. Then we introduce the truncated loss, which singles out the learning problem for the predictor, and we show that a consistent surrogate property can be established for the predictor individually in an easier way than for the predictor and the rejector jointly. Our findings advocate for a two-step learning procedure that first uses all the data to learn the predictor and then calibrates the prediction loss to obtain the rejector. It is better aligned with the common intuition that more data samples lead to a better prediction model, and it calls for more effort on a better design of calibration algorithms for learning the rejector. While our discussion has mainly focused on the regression problem, the theoretical results and insights generalize to the classification problem as well.

§ INTRODUCTION

The problem of learning with rejection or prediction with rejection models the teaming of humans and AI systems in accomplishing a prediction task, and it has received increasing attention in recent years due to the advent of powerful AI tools and critical applications such as medicine and healthcare. A learning with rejection model has two components: (i) a predictor which predicts the label or target value Y given a feature X, and (ii) a rejector which abstains from making predictions when it detects high uncertainty/low confidence. Upon rejection, the sample is deferred to human experts, and the rejection decision incurs an immediate cost. The rejector can be viewed as a binary classifier that assigns the prediction task to either the predictor or the human, and thus it enables the predictor to focus on predicting the unrejected samples, ideally samples with high confidence. This problem is also studied under variants that take a similar setup, known as learning under triage <cit.>, learning to defer <cit.>, learning under human assistance <cit.>, learning to complement humans <cit.>, and selective prediction <cit.>.
The learning with rejection problem can also be categorized into two classes based on the underlying task: (i) regression with rejection and (ii) classification with rejection. In this paper, we mainly focus on the regression with rejection problem, while some results also generalize to the classification with rejection problem. A natural way to tackle the learning problem of prediction with rejection is to modify the loss function to incorporate both the predictor loss and the rejection cost. This enables a joint learning of the predictor and the rejector by minimizing the modified loss on the training data. If we single out the learning of the predictor, such an approach only uses part of the samples (the unrejected ones) to train the predictor; this is justified by the observation that the prediction performance on the rejected samples does not affect the final performance because those samples are deferred to humans <cit.>. In classification problems with the reject option, this approach is also supported by generalization bounds derived using surrogate loss <cit.>. However, using only part of the training data to train the predictor runs counter to the common intuition that more samples lead to a better-trained predictor. For regression with rejection, learning the predictor based on a part of the training data can lead to overfitting on the wrong subset <cit.> or result in a local optimum that performs significantly worse than the global optimum (see Proposition <ref>). So in this paper, we explore the conditions under which no-rejection learning, which treats prediction with rejection just as a standard ML task and utilizes all the training data, achieves consistency and optimality.

Our Contribution: We first establish the optimality of no-rejection learning under a weak realizability condition which requires that the predictor function class covers the conditional expectation function. This implies that the sub-optimality of no-rejection learning only appears when the underlying function class is not rich enough. Then we proceed with the analysis without this condition and introduce the truncated loss. The truncated loss allows full flexibility for the rejector and thus singles out the learning of the predictor. We show that consistency and a surrogate property can be derived for this truncated loss in an easier manner than for the joint learning problem. Consequently, it leads to a generalization bound for the regression with rejection problem. To complement our analysis, we present a nonparametric algorithm to calibrate the conditional loss of the predictor, which naturally leads to the rejector.

Related literature: Regression with Rejection: <cit.> characterize the optimal predictor and rejector for the function class of all measurable functions, and propose a nonparametric algorithm to learn them. <cit.> present a new neural network-based algorithm supported by numerical illustrations. <cit.> quantify the prediction uncertainty directly, which can also be applied to address the regression with rejection problem. In comparison to these works, we directly consider the cost of rejection, while they investigate the budgeted setting. <cit.> aim to minimize the rejection rate under a specified level of loss without considering rejection costs.
<cit.> consider both rejection costs and rejection budgets, highlighting the challenges of optimizing the predictor and rejector even for training samples, and develop a greedy algorithm to solve the problem approximately. Classification with Rejection: While the regression with rejection literature has largely focused on deriving heuristic algorithms, there have been more theoretical developments for the classification with rejection problem, and the key is to introduce a proper surrogate loss. Several works have proposed various surrogate loss functions that share the same optimal solution as the original loss function for binary classification and multi-class classification <cit.>. Moreover, <cit.> establish an explicit relationship between the excess loss of the original loss and the surrogate loss. Comparatively, we give the first surrogate loss and consistency analysis for the regression with rejection problem.

§ PROBLEM SETUP

In this section, we introduce the problem of regression with rejection (RwR). Consider n data samples {(X_i,Y_i)}_i=1^n drawn independently from an unknown distribution 𝒫. The feature vector is X_i ∈𝒳⊆ℝ^d, and the target is Y_i∈𝒴⊆ℝ. The RwR problem considers the regression problem with a rejection option. Specifically, it consists of two components: (i) a regressor f:𝒳→𝒴, which predicts the target from the feature, and (ii) a rejector r:𝒳→{0,1}, which decides whether to apply the regressor f to the feature X (when r(X)=1) or to defer the sample to a human (when r(X)=0). Compared to the standard regression problem, the RwR problem introduces a deferral option. Deferred samples are typically handled by humans at a fixed deferral cost of c>0. Consequently, the loss function is defined as follows: l_𝚁𝚠𝚁(f, r;(X,Y)) = r(X)· l(f(X),Y) + (1-r(X))· c, where l(·,·) represents the standard regression loss. In this paper, we adopt the squared loss as our regression loss function, denoted as l(Ŷ,Y) = (Ŷ-Y)^2. The underlying intention of this loss structure is to encourage the deferral of high-risk samples to humans, as indicated by a large value of l(f(X),Y). The RwR problem aims to find a regressor and a rejector that jointly minimize the expected loss min_f∈ℱ,r∈𝒢 L_𝚁𝚠𝚁(f,r) ≔ 𝔼[l_𝚁𝚠𝚁(f, r;(X,Y))], where the expectation is taken with respect to (X,Y)∼𝒫. Here ℱ and 𝒢 denote the sets of candidate regressors and rejectors, respectively. The following proposition from <cit.> characterizes the optimal solution of (<ref>) when only measurability is imposed on the function classes ℱ and 𝒢. According to the proposition, the optimal regressor is the conditional expectation, and the optimal rejector rejects samples with a conditional variance larger than c. Suppose ℱ contains all measurable functions that map from 𝒳 to 𝒴, and 𝒢 contains all measurable functions that map from 𝒳 to {0,1}. The optimal regressor f^*(X) and rejector r^*(X) for (<ref>) are f^*(x) = f̅(x) ≔ 𝔼[Y|X=x], and r^*(x) = 1 if 𝔼[(Y-f^*(X))^2|X=x] ≤ c, and r^*(x) = 0 otherwise. Proposition <ref> provides a valuable characterization of the optimal solution, but it does not provide much insight into the learning procedure based on data samples. In the following, we present several results that shed light on the learning procedure for the RwR problem.

§.§ Learning with weak realizability

The weak realizability condition states that the function class ℱ includes the conditional expectation function (as a function mapping 𝒳 to 𝒴).
We first derive a few results under this condition and move on without it in the next section.

The distribution 𝒫 and the function class ℱ satisfy weak realizability if f̅(x) ≔ 𝔼[Y|X=x] ∈ℱ. We refer to the condition as weak realizability in that it only requires the conditional expectation function to belong to ℱ, but does not require the existence of a function that achieves zero loss (as the standard realizability condition does). Under such a weak realizability condition, the RwR problem exhibits a nice learning structure, and in fact, the result also extends to the classification problem (see Appendix <ref>). If the weak realizability condition holds, i.e., f̅(·)∈ℱ, then f̅(·) minimizes the expected RwR loss, that is, f̅(·) ∈ argmin_f∈ℱ L_𝚁𝚠𝚁(f,r) for any measurable rejector function r(·). While the joint learning of the regressor and the rejector can be challenging in general due to the nonconvexity of the loss function, Proposition <ref> states that the challenge disappears when the function class ℱ is rich enough to cover the conditional expectation f̅(x). Specifically, f̅(x) minimizes the RwR loss pointwise for any rejector. In other words, we do not need to bother with the rejector when learning the regressor, and even further, we can ignore the RwR loss and treat the problem as a standard regression problem to learn the regressor. Consider a parameterized family of functions ℱ={f_θ: f_θ:𝒳→𝒴, θ∈Θ} with a compact parameter space Θ. Suppose the function is Lipschitz with respect to θ, i.e., | f_θ_1(X)- f_θ_2(X) |≤ L‖θ_1-θ_2‖_2 with some L>0 for all θ_1,θ_2∈Θ and (X,Y)∈𝒳×𝒴. In addition, assume 𝒳 and 𝒴 are bounded. Let f_θ_n(·) = argmin_f∈ℱ∑_i=1^n l( f(X_i),Y_i ), and r_n(x)= 1 if 𝔼[(Y-f_θ_n(X))^2|X=x] ≤ c, and 0 otherwise. Under the weak realizability condition, we have L_𝚁𝚠𝚁(f_θ_n, r_n) → L_𝚁𝚠𝚁(f^*, r^*) in probability as n→∞, where f^* and r^* are defined in Proposition <ref>. Corollary <ref> states that the minimizer of the standard empirical loss (<ref>) converges to that of the RwR loss. Its assumption is standard and applies to many popular classes of ML prediction functions; the corollary itself is more for demonstration purposes than for technical depth. Note that the function f_θ_n is learned under the original loss l (e.g., squared loss), but it still works optimally for the RwR loss. In other words, under the weak realizability condition, the learning of the regressor is optimal without taking the RwR structure into account. In fact, this is what we mean by no-rejection learning. Specifically, consider the empirical version of the RwR loss min_f∈ℱ, r∈𝒢∑_i=1^n r(X_i)· l(f(X_i),Y_i) + (1-r(X_i))· c. If we take the perspective of learning the regressor f, the above empirical loss (<ref>) essentially only uses part of the training samples (those where r(X_i)=1) to learn f. This runs counter to the common intuition that more training samples lead to a better model. Proposition <ref> and Corollary <ref> point out that no-rejection learning, which uses all the training samples and simply treats (<ref>) as a standard regression task, is optimal when the underlying function class ℱ is rich enough. Moreover, for the classification with rejection problem, while the existing works propose surrogate losses to convexify (<ref>), our result says that such a design is only necessary when the underlying function class is not rich enough to include the Bayes optimal classifier.
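For concreteness, the following minimal Python sketch contrasts no-rejection learning over a function class that is too poor with one that is rich enough to approximately satisfy weak realizability; the synthetic distribution, the polynomial classes, and all names are illustrative assumptions and not part of the original setup. The reported quantity is the RwR loss obtained with the pointwise-optimal rejector for the learned regressor.

```python
import numpy as np

rng = np.random.default_rng(1)
c, n = 1.0, 5000
# Synthetic data: E[Y|X=x] = sin(3x) and Var(Y|X=x) = (0.2 + x)^2 are known in closed form.
X = rng.uniform(0.0, 1.0, size=n)
Y = np.sin(3 * X) + (0.2 + X) * rng.normal(size=n)

def fit_no_rejection(degree):
    """No-rejection learning: least squares over ALL samples, ignoring the rejector."""
    coefs = np.polyfit(X, Y, degree)
    return lambda x: np.polyval(coefs, x)

def rwr_loss_with_oracle_rejector(f):
    # Best attainable RwR loss for f: reject exactly where the true conditional loss
    # (squared bias plus noise variance) exceeds the deferral cost c.
    cond_loss = (f(X) - np.sin(3 * X)) ** 2 + (0.2 + X) ** 2
    return np.mean(np.minimum(cond_loss, c))

for degree in (1, 7):   # degree 1 violates weak realizability; degree 7 nearly satisfies it
    print(degree, rwr_loss_with_oracle_rejector(fit_no_rejection(degree)))
```

Under these assumptions, the richer class attains a noticeably smaller RwR loss even though both regressors are trained with no-rejection learning, matching the discussion above.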
We make the following remarks:

* <cit.> consider the classification with rejection problem and use examples to demonstrate that ignoring the rejector structure and simply performing standard no-rejection learning can result in a suboptimal predictor/classifier. On one hand, the findings highlight the special structure of the learning with rejection problem; on the other hand, our results above show that such suboptimality of the learned predictor/classifier (under no-rejection learning) can be mitigated by adopting a richer family of functions ℱ. We further illustrate this intuition in Figure <ref>.

* <cit.> consider the k-nearest neighbor (k-NN) method for the RwR problem. Our results above generalize and justify their choice of k-NN. Specifically, for k-NN or a nonparametric method in general, one usually imposes an assumption on the Lipschitzness/smoothness of the conditional expectation function f̅. This is in fact a special case of the weak realizability condition, where the Lipschitzness/smoothness assumption ensures that the function class ℱ of the nonparametric estimators is rich enough to cover the true f̅. Hence in their algorithm development, they use all the training samples to learn the k-NN regressor. One goal of our work is to justify such a no-rejection learning procedure with and without the presence of the weak realizability condition, and more importantly, beyond the scope of nonparametric methods.

§.§ Challenge of the joint learning of f and r

For the classification with rejection problem, many existing works propose surrogate losses for the original nonconvex loss and establish consistency results for prediction models learned from the surrogate losses <cit.>. In comparison, there is no consistent surrogate loss developed for the RwR problem to our knowledge. Without a consistent surrogate loss, brute-force joint learning of f and r, or alternating optimization of f and r, can lead to bad local minima. The following proposition provides a simple example (among others) showing that the joint learning of f and r in general prevents a good theoretical guarantee for the RwR loss. For any deferral cost c>1, there exists a regression task for which a locally optimal regressor-rejector pair incurs an RwR loss that is (c-1)/3 larger than that of the global optimum.

§ LEARNING WITHOUT WEAK REALIZABILITY

In this section, we extend the previous results and explore the case when weak realizability does not hold. Our aim is to establish the standard squared loss as a surrogate loss of the RwR loss and derive generalization bounds for no-rejection learning, which ignores the RwR structure.

§.§ Truncated loss, squared loss, and surrogate property

We first define the truncated loss as follows: L̃(f)=𝔼[ 𝔼[ l(f(X),Y)| X ] ∧ c ], where a∧b = min{a,b} for a,b∈ℝ. Here the inner expectation is with respect to the conditional distribution Y|X and the outer expectation is with respect to the marginal distribution of X. Basically, the truncated loss truncates the expected loss if it exceeds the threshold c given X. We have:
(a) It holds for any measurable regressor f and any rejector r that L̃(f) ≤ L_𝚁𝚠𝚁(f,r).
(b) Suppose σ^2_f(x) ≔ 𝔼[l(f(X),Y)| X=x] is a measurable function of x. Then L̃(f) = min_r∈𝒢 L_𝚁𝚠𝚁(f,r), where 𝒢 is the class of all measurable rejectors.
(c) For a regressor class ℱ, a rejector class 𝒢, and any f∈ℱ, L̃(f) ≥ min_r∈𝒢 L_𝚁𝚠𝚁(f,r) - max_f∈ℱ min_r∈𝒢(L_𝚁𝚠𝚁(f,r)-L̃(f)).
Proposition <ref> establishes the truncated loss as a proxy of the RwR loss and states the relationship between the two. One can view the truncated loss as an equivalent of the RwR loss when the rejector class 𝒢 covers all measurable functions. The gap between the two can be bounded by a term (the last term in (<ref>)) related to the richness of the rejector class. We set aside (for the moment) the questions of whether it is reasonable to assume the rejector class is rich enough, and of what happens if it is not. The truncated loss L̃ indeed provides convenience for the algorithm design and analysis in that it singles out the regressor learning problem. Moreover, learning against such a truncated loss is justifiable as the truncated loss appears in both the upper and lower bounds of the RwR loss in Proposition <ref>. Yet, the non-convexity issue still persists for the truncated loss, just as for the original RwR loss. Fortunately, the simple squared loss is a provable surrogate loss for the truncated loss, and this paves the way for the theoretical development without the previous weak realizability condition.

Part (a) of Proposition <ref> states that the truncated loss is always a lower bound of the RwR loss. Part (b) provides the equality condition of Part (a), and asserts that for any predictor f, the corresponding truncated loss equals the optimal attainable RwR loss min_r∈𝒢 L_𝚁𝚠𝚁(f,r) as long as the rejector class 𝒢 is sufficiently rich. Thus, we can interpret the discrepancy term min_r∈𝒢 L_𝚁𝚠𝚁(f,r)-L̃(f) as a measure of the richness of the rejector class for a given predictor f. Similarly, we can use the discrepancy term max_f∈ℱ min_r∈𝒢(L_𝚁𝚠𝚁(f,r)-L̃(f)) to measure the richness of the rejector class given a predictor class ℱ. Part (c) of the proposition characterizes the gap between the truncated loss and the RwR loss using this discrepancy term. Next, based on Proposition <ref>, we make the following remarks on the reasons for using the truncated loss instead of the RwR loss.

* Independence: The minimization of the new truncated loss is independent of the rejector class, in contrast to (<ref>). Thus, we no longer face the challenge of jointly learning the predictor and the rejector.

* Equivalence: The truncated loss and the RwR loss coincide when the rejector class is sufficiently rich. As shown in Part (b) of Proposition <ref>, we can derive (<ref>) by solving (<ref>) directly with respect to the rejector when only measurability is imposed on the rejector class. In addition, if we impose some Lipschitzness assumptions on the conditional expectation function f̅ ≔ 𝔼[Y|X] and the conditional variance function 𝔼[(f̅(X)-Y)^2|X], this equivalence (<ref>) still holds under a weaker assumption on the richness of the rejector class, which can be fulfilled by applying nonparametric methods when learning the rejector <cit.>.

* Approximation: Even if the richness of the rejector class is not guaranteed, we can still approximate the RwR loss by the truncated loss. Specifically, based on Parts (b) and (c), the gap between these two loss functions for any predictor in ℱ can be bounded by the discrepancy term max_f∈ℱ min_r∈𝒢(L_𝚁𝚠𝚁(f,r)-L̃(f)). When this discrepancy term is small, we obtain an accurate approximation of the RwR loss by the truncated loss, and can find a nearly-optimal predictor for the RwR loss by minimizing the truncated loss.

Therefore, we can conclude that optimizing the truncated loss instead of the RwR loss is a viable alternative when the rejector class exhibits sufficient richness.
This richness can be empirically assessed using the discrepancy term max_f∈ℱ min_r∈𝒢(L_𝚁𝚠𝚁(f,r)-L̃(f)). Despite the advantages of using the truncated problem (<ref>) as an alternative to the RwR loss, we encounter two challenges when optimizing this new loss. First, the truncated loss (<ref>) remains non-convex, even when the feature and target spaces 𝒳 and 𝒴 are singletons. To illustrate this point, consider the case where 𝒳=𝒴={0}. Then, (<ref>) can be written as 𝔼[ 𝔼[ l(f(X),Y)| X ] ∧ c ] = f(0)^2 ∧ c, which is clearly not convex with respect to the predictor function. Another problem is that (<ref>) cannot be accurately estimated from our training samples {(x_i,y_i)}_i=1^n. To be specific, the sample average approximation of (<ref>) can be expressed as 1/n∑_i=1^n(𝔼[ l(f(X),Y)| X=x_i ] ∧ c). However, estimating the conditional expected loss 𝔼[ l(f(X),Y)| X=x ] from the samples {(x_i,y_i)}_i=1^n is challenging in practice, because each x_i is usually unique when the features are continuous. To tackle these issues, we employ a surrogate loss in what follows.

Define the squared loss as L_2(f) ≔ 𝔼[(f(X)-Y)^2]. The following proposition establishes the squared loss as a surrogate loss of the truncated loss, following the analysis of <cit.>. For any measurable function f, we have L̃(f)-L̃^* ≤ L_2(f)-L_2^*, where L̃^* ≔ min_f∈ℱ L̃(f) and L_2^* ≔ min_f∈ℱ L_2(f), with ℱ here being the class of all measurable functions. Proposition <ref> upper bounds the excess loss under the truncated loss L̃ by that under the squared loss L_2. Hence learning and optimization against the squared loss L_2 are aligned with the truncated loss L̃, and a performance guarantee under the former transfers directly to one under the latter. Note that the squared loss indeed utilizes all the samples and corresponds to no-rejection learning. Thus we advocate no-rejection learning in the following sense: it essentially aims for the squared loss L_2, which is a provable surrogate of the truncated loss; the truncated loss in turn is a proxy of the RwR loss, with the gap controlled by the richness of the rejector class 𝒢. We remark that this entire chain of arguments holds without the weak realizability condition. In the following subsection, we pursue this path further and establish error bounds for no-rejection learning. Importantly, the error bounds are expressed in terms of the standard estimation error and approximation error, which, we believe, are more tangible than the inevitable gap between local minima and the global minimum in the joint learning of f and r.

§.§ Error bounds for no-rejection learning

We make the following boundedness assumption. [Boundedness] We assume |Y|≤ B and |f(X)|≤ B for any regressor f∈ℱ. Let f̂_n(·) denote the optimal regressor of the empirical squared loss for a given regressor class ℱ: f̂_n(·)= argmin_f∈ℱ∑_i=1^n(f(X_i)-Y_i)^2. Then, we have the following error bound for f̂_n(·) under the truncated loss. Under Assumption <ref>, the following inequality holds with probability no less than 1-1/n^2: L̃(f̂_n) - L̃^* ≤ 16B·ℛ_n(ℱ)+8B^2·log n/√(n) (estimation error) + (min_f∈ℱ L_2(f) - L_2^*) (approximation error), where ℛ_n(ℱ) denotes the Rademacher complexity of ℱ. Proposition <ref> bounds the error under L̃ by an estimation error term and an approximation error term under the squared loss. The estimation part comes from the standard analysis of the generalization bound, and it is then combined with Proposition <ref> to obtain the bound above.
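As a quick numerical sanity check of the surrogate relation L̃(f)-L̃^* ≤ L_2(f)-L_2^*, the sketch below evaluates both excess losses exactly on a small discrete toy distribution where all conditional quantities are available in closed form; the distribution and all constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Discrete toy model: X uniform on three points; Y|X=x has mean mu[x] and variance var[x],
# so E[(Y - f(x))^2 | X=x] = var[x] + (f(x) - mu[x])^2 can be computed exactly.
mu, var, c = np.array([0.0, 1.0, 2.0]), np.array([0.5, 2.0, 5.0]), 1.5

L2      = lambda f: np.mean(var + (f - mu) ** 2)                  # squared loss
L_trunc = lambda f: np.mean(np.minimum(var + (f - mu) ** 2, c))   # truncated loss

f_bar = mu                     # the conditional mean minimizes both losses
rng = np.random.default_rng(0)
for _ in range(5):
    f = f_bar + rng.normal(scale=2.0, size=3)      # an arbitrary alternative predictor
    lhs, rhs = L_trunc(f) - L_trunc(f_bar), L2(f) - L2(f_bar)
    assert lhs <= rhs + 1e-12                       # excess truncated <= excess squared loss
    print(f"{lhs:.3f} <= {rhs:.3f}")
```

The assertion never fires under these assumptions, reflecting that the truncation can only shrink the pointwise excess loss, which is the essence of the surrogate property.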
We remark that the approximation error term becomes zero under the weak realizability condition. Combining Proposition <ref> with Proposition <ref> yields the following theorem on the error bound under the RwR loss. Under Assumption <ref>, the following inequality holds with probability no less than 1-1/n^2 for any regressor class ℱ and rejector class 𝒢: min_r∈𝒢 L_𝚁𝚠𝚁(f̂_n,r) - L_𝚁𝚠𝚁^* ≤ 16B·ℛ_n(ℱ)+8B^2·log n/√(n) (estimation error) + (min_f∈ℱ L_2(f) - L_2^*) (approximation error) + max_f∈ℱ min_r∈𝒢(L_𝚁𝚠𝚁(f,r)-L̃(f)) (richness of 𝒢), where L_𝚁𝚠𝚁^*=L_𝚁𝚠𝚁(f^*,r^*) with f^* and r^* given in Proposition <ref>. Theorem <ref> upper bounds the error of no-rejection learning f̂_n by three terms: the estimation error, the approximation error for the regressor class ℱ, and a term related to the richness of 𝒢. The latter two terms both decrease as we enlarge the underlying regressor and rejector function classes. The approximation error is not specific to no-rejection learning but appears for any learning algorithm. The last term is the price paid for no-rejection learning, i.e., for ignoring the RwR structure. The shrinkage of this term (as the rejector class becomes richer) reinforces our previous argument that whether one should bother with the problem structure depends on the richness of the underlying function class. Lastly, we note that the left-hand side of the error bound uses the optimal rejector. If we replace it with a learned rejector, an additional estimation error term appears on the right-hand side. This fact, together with the last term in the error bound, motivates the use of nonparametric methods for the rejector, which ensure both the richness of the function class and consistency.

To the best of our knowledge, Theorem <ref> provides the first generalization bound for the regression with rejection problem. Technically, if we compare our analysis in terms of surrogate loss and generalization bound against the analyses of the classification with rejection problem such as <cit.>, there are several differences. First, the introduction of L̃ singles out the regressor and makes the analysis much more convenient than deriving a surrogate loss for the joint learning of both the classifier and the rejector. While it seems much more challenging to derive a joint surrogate loss for the regression problem, it does not hurt to follow this "single-out" approach, theoretically and algorithmically. Second, the truncated loss for the classification problem truncates the binary loss, which is non-convex, while the truncated loss for the regression problem truncates the squared loss, which is convex. For the non-convex binary loss, the derivation of one more layer of surrogate loss is inevitable, and this raises the question of whether one can derive a surrogate loss for the classifier and rejector jointly in one shot. Third, from a practical viewpoint, our approach endorses no-rejection learning, which is well-motivated for modern machine learning. While the theoretical understanding of joint learning for linear classes such as linear regression and linear SVM can be of theoretical interest, neural network models have the capacity to meet the weak realizability condition or to ensure that the last term in (<ref>) is small.

§.§ Calibrating the rejector

In the previous subsections, we have focused on the learning of the regressor f.
Given a learned regressor, say f̂_n, the learning of the rejector is closely related to uncertainty quantification/calibration for regression models. While the existing literature on calibrating regression models mainly aims at predicting quantiles or a distribution of Y|X, the goal of the rejector essentially requires a prediction of the conditional expected loss 𝔼[(Y-f̂_n(X))^2|X=x] as a function of x. Here we present a nonparametric method that calibrates the conditional expected loss and works as a rejector. The method is by no means the optimal one for learning the rejector, but it is simple to implement and generally compatible with any regressor. Algorithm <ref> learns the rejector based on a given regressor f̂. It first uses a nonparametric approach to estimate the conditional expected loss 𝔼[(Y-f̂(X))^2|X=x] at a new data point. Then, based on a thresholding rule, it assigns the point to either the regressor or the human. We make several remarks on the algorithm. First, the algorithm is agnostic of the underlying regressor f̂. Second, it utilizes an independent validation dataset different from the one that trains f̂. Third, it can be viewed as a generalization of the k-NN rejector in <cit.>, which aims to ensure a certain rejection rate. For the theoretical guarantee of the learned rejector, we refer to the approach of <cit.>, which can easily be adapted to the context here. We provide the pseudo-code of our approach in Algorithm <ref>, which can be summarized in the following three steps:

* Train predictor: We train the predictor neural network f̂ on the training data by empirical risk minimization through the standard training process, i.e., SGD for optimizing a neural network. The choice of neural networks as predictor candidates is due to their large capacity for recovering f̅(x)=𝔼[Y|X=x].

* Calibrate loss: We then calibrate the empirical loss of the trained predictor f̂ using a kernel-based nonparametric method (<ref>) on the calibration data. Since this step is a standard regression task on the data {(x_i,l_i)}_i=n+1^n+n', method (<ref>) can be replaced by any other regression algorithm, such as linear regression or a neural network, to construct the loss estimator L̂(x).

* Output rejector: Finally, we output the rejector. We replace 𝔼[(Y-f^*(X))^2|X=x] in Proposition <ref> by its estimator L̂(x) and obtain the rejector r̂(x).

The first two steps are similar to the estimation strategy proposed by <cit.>, where they use kNN-based estimators in both steps as a specific case (and also test random forest and SVM-based estimators in experiments), while we use a neural network and a kernel-based nonparametric estimator in the two steps, respectively. However, the final step is different: our output rejector r̂(x) aims to minimize the RwR loss l_𝚁𝚠𝚁 given the deferral cost c>0, while their proposed plug-in ϵ-predictor minimizes the loss with the overall rejection rate restricted to a given ϵ (and c=0).

§ NUMERICAL EXPERIMENTS

In this section, we implement no-rejection learning for the regressor and Algorithm <ref> for learning the rejector. We compare Algorithm <ref> against several benchmarks: Differentiable Triage (Triage) <cit.>, SelectiveNet (SelNet) <cit.>, and the kNN predictor with reject option (kNN) <cit.>. Since all these benchmark algorithms are originally designed for the scenario where the overall rejection rate is capped at a given rate (and with zero deferral cost), we modify them for our setting where the deferral cost is positive and there is no restriction on the rejection rate.
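Before turning to the experimental details, the following minimal sketch illustrates the calibration step of Algorithm <ref> described above (a kernel estimate of the conditional loss on held-out data, thresholded at the deferral cost c); the RBF kernel form, the bandwidth handling, and all function and argument names are assumptions for illustration, not the exact implementation.

```python
import numpy as np

def fit_rejector(f_hat, X_cal, Y_cal, c, sigma=1.0):
    """Calibration step (sketch): Nadaraya-Watson (RBF-kernel) estimate of the conditional
    loss E[(Y - f_hat(X))^2 | X = x] on held-out data, thresholded at the deferral cost c.
    Assumes X_cal has shape (n, d) and f_hat maps an (m, d) array to an (m,) array."""
    losses = (Y_cal - f_hat(X_cal)) ** 2                 # per-sample losses on calibration data

    def rejector(x):                                     # x: a single feature vector of shape (d,)
        w = np.exp(-np.sum((X_cal - x) ** 2, axis=1) / sigma)   # RBF kernel weights
        loss_estimate = np.sum(w * losses) / np.sum(w)
        return 1 if loss_estimate <= c else 0            # accept iff estimated conditional loss <= c

    return rejector
```

A call such as fit_rejector(f_hat, X_val, Y_val, c=1.0) then plays the role of the rejector r̂(x) output by Algorithm <ref>, with the kernel estimate standing in for the true conditional loss in Proposition <ref>.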
All algorithms are tested on public regression datasets from the UCI repository <cit.>: Airfoil, Concrete, Wine, and Energy. More details about the datasets, implementations, architectures, and hyperparameter tuning for all algorithms can be found in Appendix <ref>. We test Algorithm <ref> and the benchmark algorithms with deferral costs c∈{1,2,3,4,5} for the Airfoil and Concrete datasets, and c∈{0.2,0.4,0.6,0.8,1} for Wine and Energy. For all datasets, the train-validate-test split is 70%-20%-10%. Each test is repeated 10 times, and we report the average performance on the test set alongside its standard deviation. Table <ref> reports the performance in terms of the averaged RwR loss and rejection ratio on the test data with respect to different deferral costs c. In general, no-rejection learning together with Algorithm <ref> has an advantage over the other benchmarks across different datasets and deferral costs. We should note that the cost c itself is a baseline benchmark: always rejecting every sample results in an RwR loss of c. Only our algorithm uniformly outperforms this baseline on all datasets. As for the rejection rate on the test data, we find that for our algorithm it generally decreases in the deferral cost, as expected: fewer samples should be rejected when the deferral cost is large. Another observation is that the Triage algorithm always rejects all samples. Since Triage is the only implemented algorithm that uses only a (potentially small) subset of the training data when training the regressor, the trained regressor can overfit and thus has large errors in general, which causes almost all test samples to be rejected.

Concluding remarks. In this paper, we study the problem of regression with rejection. In particular, we focus on analyzing the properties of no-rejection learning, which utilizes all the samples in training the regressor. We hope our results provide a starting point for future research on this no-rejection learning perspective of regression with rejection, and also of classification with rejection. While our paper focuses on the setting where there is a fixed deferral cost c, a parallel formulation considers the regression with rejection problem with a fixed deferral budget. The fixed deferral cost c gives rise to a new calibration problem that may inspire new calibration algorithms. In addition, future research is warranted on establishing more connections between the two settings for both algorithm design and analysis.

§ EXPERIMENT DETAILS

Datasets.

* Airfoil Self-Noise (Airfoil) <cit.> was collected by NASA from tests of two- and three-dimensional airfoil blade sections conducted in an anechoic wind tunnel. It has 1503 instances with five features. The output is the scaled sound pressure level.

* Concrete Compressive Strength (Concrete) contains 1030 instances with eight features. The output is the compressive strength of concrete, which is a highly non-linear function of its ingredients (seven features) and age (one feature), as indicated in the original data description <cit.>.

* Wine Quality (Wine) <cit.> aims to predict wine quality based on physicochemical tests. We use the red wine part of the dataset, which contains 1599 instances with 11 features.

* Energy Efficiency (Energy) <cit.> aims to predict the energy efficiency of buildings as a function of building parameters. It contains 768 instances and 8 features.

Algorithm implementations.
The details regarding the implementation of our method as well as the benchmarks are as follows:

* Our method: We build a neural network as the regressor f̂. We then implement Algorithm <ref> with the radial basis function kernel k(x_1,x_2)=exp(-‖x_1-x_2‖^2_2/σ) as the kernel choice, where the kernel length scale σ is a hyperparameter. We use the validation dataset as the calibration dataset.

* Triage: It first trains a neural network regressor. At training epoch t, it only uses the samples in the mini-batch with empirical mean square loss (based on the predictor fitted at epoch t-1) smaller than the deferral cost c, so that the training process focuses on the non-rejected samples. To improve the robustness of training, when the number of used samples from the mini-batch is smaller than 32, it uses the 32 samples with the smallest empirical mean square losses. After training the predictor, it trains a neural network binary classifier as the rejector on the validation dataset, where a sample is labeled positive if its empirical mean square loss (w.r.t. the fitted predictor) is less than c and negative otherwise. We highlight that Triage is the only implemented algorithm that uses a subset of the training data for training the predictor.

* SelNet: It contains one main body block, whose last layer is the representation layer extracting the data features, and three output heads for prediction, selection (rejection), and auxiliary prediction. The loss function averages the original mean square loss of the auxiliary prediction head and the weighted mean square loss of the prediction head, where the weights are the rejection confidences output by the selection head, plus a penalty term to guarantee that the overall rejection ratio is smaller than a pre-determined ratio γ. Thus, SelNet jointly trains the predictor (prediction head) and the rejector (selection head), while utilizing the auxiliary prediction head to push the main body to learn all training instances and avoid overfitting.

* kNN: It first trains a kNN-based predictor f̂_y(x) for 𝔼[Y| X=x] and another kNN-based estimator f̂_l(x) for the loss 𝔼[l(f̂_y(x),Y)| X=x]. For a test sample x, the algorithm rejects it if f̂_l(x)>c and accepts it otherwise.

Architectures and hyperparameter tuning. The details regarding the architectures and hyperparameter tuning of our method as well as the benchmarks are as follows. For the benchmark algorithms, the architectures and hyperparameter tuning processes are set to be identical to the original works where applicable.

* Our method: The neural network architecture for the regressor has one hidden layer activated by ReLU with 64 neurons and a following fully-connected, linearly activated output neuron. The kernel length scale σ for the Gaussian kernel is selected to minimize the RwR loss on the validation data among σ∈{ 10^j: j=-3,-2,-1,0,1,2,3}.

* Triage: The neural network architecture for the predictor/regressor has one hidden layer activated by ReLU with 64 neurons and a following fully-connected, linearly activated output neuron. The rejector also has one hidden layer activated by ReLU with 64 neurons and a following Sigmoid-activated neuron.

* SelNet: The neural network architecture is identical to the original work for the regression task: the main body block has one hidden layer activated by ReLU with 64 neurons. Both the prediction and auxiliary heads are fully connected with one linearly activated neuron.
The selection head has one hidden layer activated by ReLU with 16 neurons and a following Sigmoid-activated neuron. To select the rejection ratio γ as a hyperparameter of SelNet, we use the validation dataset to evaluate the validation loss for γ∈{0,0.2,0.4,0.6,0.8,1} and select the γ with the minimum validation loss. All other hyperparameters are identical to the original work.

* kNN: We use the same hyperparameter tuning process to choose the number of neighbors k as the original work, employing 10-fold cross-validation to select the parameter k ∈{5, 10, 15, 20, 30, 50, 70, 100, 150} for the two kNN models respectively.

All the neural network predictors/regressors are optimized by the ADAM algorithm with a learning rate of 5× 10^-4, weight decay of 1× 10^-4, a mini-batch size of 256, and 800 training epochs, which are all identical to the original SelNet setup <cit.> for the regression task. In addition, the neural network rejector in Triage is optimized by the ADAM algorithm with a learning rate of 1× 10^-3, a mini-batch size of 128, and 40 training epochs, which is identical to the setup in the posted code of the Triage algorithm <cit.>.

§ EXTENSION TO CLASSIFICATION

In this part, we consider binary classification with rejection under the weak realizability condition and show a statement parallel to Proposition <ref>. The setting of classification with rejection is the same as regression with rejection when we set the target space to be 𝒴={0,1}. Specifically, this classification problem also encompasses two components: (i) a classifier f:𝒳→{0,1}, which predicts the label from the feature, and (ii) a rejector r:𝒳→{0,1}, which decides whether to predict with the classifier or defer the prediction to a human. Similarly, we denote by ℱ the set of candidate classifiers, and the goal is to find a pair of classifier and rejector to minimize the following expected loss, which in this case is based on the 0-1 loss: L_𝚁𝚠𝚁(f,r) = 𝔼[ r(X) · I{f(X)≠Y} + (1-r(X)) · c ]. In this case, Definition <ref> cannot be satisfied in general. The reason is that the conditional expectation f̅(x)=𝔼[Y|X=x] may take fractional values, and thus it cannot itself be a classifier. Therefore, we first redefine weak realizability for classification: the joint distribution of the feature-label pair (X,Y) and the function class ℱ satisfy weak realizability if the classifier consistent with the conditional expectation function f̅ is in the function class. That is, the following classifier is in ℱ: f̃(x) = 1 if f̅(x)=𝔼[Y|X=x]≥0.5, and f̃(x) = 0 otherwise. Then, similar to Proposition <ref> for regression with rejection, we have an analogous result for classification with rejection: if the weak realizability condition for classification holds, then f̃(·) minimizes the expected loss, i.e., f̃(·) ∈ argmin_f∈ℱ L_𝚁𝚠𝚁(f,r) for any measurable rejector function r(·). Similar to the proof of Proposition <ref>, we directly minimize the expected loss for any fixed feature X∈𝒳. For any fixed X∈𝒳, we have 𝔼_Y[l_𝚁𝚠𝚁(f,r)|X] = 𝔼_Y[(Y-f(X))^2|X]· r(X) + (1-r(X))· c, where only the first term depends on the classifier f. If r(X)=0, this term is 0 given the feature X, and thus all classifiers minimize the loss at that feature. If r(X)=1, we have 𝔼_Y[(Y-f(X))^2|X] = 𝔼[(Y-f̅(X))^2|X]+𝔼[(f(X)-f̅(X))^2|X], where the first term is independent of the choice of the classifier, and the second term is minimized at f=f̃. We then finish the proof.
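To make the classification counterpart concrete, the following sketch (illustrative only; the synthetic conditional probability and the cost value are assumptions) evaluates the RwR loss of the thresholded classifier f̃ together with the rejector that is pointwise optimal for it, i.e., accepting exactly when the conditional 0-1 risk min{p(x), 1-p(x)} is at most c.

```python
import numpy as np

rng = np.random.default_rng(2)
c, n = 0.2, 5000                         # deferral cost for the 0-1 loss setting

# Toy binary problem with known conditional probability p(x) = P(Y = 1 | X = x).
X = rng.uniform(0.0, 1.0, size=n)
p = 0.5 + 0.45 * np.sin(6 * X)
Y = (rng.uniform(size=n) < p).astype(int)

f_tilde = (p >= 0.5).astype(int)                    # classifier consistent with E[Y|X]
r_star = (np.minimum(p, 1 - p) <= c).astype(int)    # accept iff conditional 0-1 risk <= c

rwr_loss = np.mean(r_star * (f_tilde != Y) + (1 - r_star) * c)
print("classification RwR loss:", rwr_loss, "rejection rate:", 1 - r_star.mean())
```

Under these assumptions, raising c shrinks the rejection region, mirroring the behavior of the regression rejector in Proposition <ref>.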
§ AUXILIARY LEMMAS

We first introduce a lemma related to the uniform laws of large numbers, which is a useful tool for proving consistency. Let ℱ={f_θ,θ∈Θ} be a collection of measurable functions defined on 𝒳 indexed by a bounded subset Θ⊆ℝ^d. Suppose that there exists a constant M such that | f_θ_1(X)- f_θ_2(X) |≤ M‖θ_1-θ_2‖_2. Then, sup_f∈ℱ|1/n∑_i=1^n f(X_i) - 𝔼[f(X)] | → 0 almost surely as n→∞, where the expectation is taken with respect to the i.i.d. samples {X_i}_i=1^∞. We refer to Theorem 19.5 (Glivenko–Cantelli) and 19.7 in <cit.>.

Next, we introduce the Rademacher complexity, which measures the richness of a function class and is an important tool for establishing generalization bounds. Specifically, for any fixed class ℱ of functions defined on 𝒳, its Rademacher complexity can be defined as ℛ_n(ℱ) ≔ 𝔼[ sup_f∈ℱ| 1/n∑_i=1^nσ_i f(X_i) | ], where {σ_i}_i=1^n denotes a set of i.i.d. random signs satisfying ℙ(σ_1=1)=ℙ(σ_1=-1)=1/2, and {X_i}_i=1^n⊆𝒳 are n training samples drawn i.i.d. from some distribution. With this definition, we have the following two lemmas.

Let T ⊂ℝ^n be an arbitrary set and let ϕ_i : ℝ→ℝ be α-Lipschitz and satisfy ϕ_i(0)=0 for all i=1,...,n. Then, we have 𝔼[ sup_(t_1,...,t_n)∈ T| 1/n∑_i=1^nσ_i·ϕ_i(t_i) | ] ≤ 2α·𝔼[ sup_(t_1,...,t_n)∈ T| 1/n∑_i=1^nσ_i· t_i | ], where {σ_i}_i=1^n denotes a set of i.i.d. random signs. We refer to Theorem 4.12 in <cit.>.

Let ℱ be a class of functions f: 𝒳→ [a,b], and let {X_i}_i=1^n be i.i.d. random variables taking values in 𝒳. Then the following inequality holds for any s>0: ℙ( sup_f∈ℱ| 1/n∑_i=1^n f(X_i) - 𝔼[f(X_1)] | ≥ 2ℛ_n(ℱ) +s ) ≤ exp(-2ns^2/(b-a)^2), where {σ_i}_i=1^n denotes a set of i.i.d. random signs satisfying ℙ(σ_i=1)=ℙ(σ_i=-1)=1/2. We refer to Theorem 4.10 in <cit.>.

§ PROOFS OF SECTION <REF>

§.§ Proof of Proposition <ref>

To prove the statement, we directly minimize the expected loss for any fixed feature X∈𝒳 and rejector r. We first show that for any fixed feature X and any fixed rejector r, the conditional expectation f̅(X)=𝔼[Y|X] minimizes the conditional loss 𝔼_Y[(Y-f(X))^2· r(X)|X] with respect to the predictors f∈ℱ. To see this, note that when r(X)=0, the conditional loss is 0 and all predictors minimize it; when r(X)=1, 𝔼_Y[(Y-f(X))^2· r(X)|X] = 𝔼_Y[(Y-f(X))^2|X], which is minimized by f̅(X)=𝔼[Y|X]. Then, since the loss is minimized pointwise by f̅(·), we have that when the weak realizability condition holds, i.e., f̅∈ℱ, f̅(·) also minimizes the expected RwR loss L_𝚁𝚠𝚁.

§.§ Proof of Corollary <ref>

We first show that 𝔼[(Y-f_θ_n(X))^2] converges to 𝔼[(Y-f^*(X))^2] almost surely as n goes to infinity. This part is an application of the uniform laws of large numbers. If the conditions listed in the statement of Corollary <ref> hold, by Lemma <ref>, we have that 𝔼[(Y-f_θ_n(X))^2] converges to inf_θ∈Θ𝔼[(Y-f_θ(X))^2] almost surely as n→∞. Then, we show that inf_θ∈Θ𝔼[(Y-f_θ(X))^2]=𝔼[(Y-f^*(X))^2]. If this equality holds, then 𝔼[(Y-f_θ_n(X))^2] converges to 𝔼[(Y-f^*(X))^2] as n→∞, and the proof is finished. To see that this equality holds, we can apply Proposition <ref> with the constant rejector r(X)=1. Then, we can show that L_𝚁𝚠𝚁(f_θ_n, r_n) → L_𝚁𝚠𝚁(f^*, r^*) by applying Proposition <ref> and noticing that L_𝚁𝚠𝚁(f_θ_n,r_n)=L̃(f_θ_n) and L_𝚁𝚠𝚁(f^*, r^*) = L̃^*.

§.§ Proof of Proposition <ref>

We consider a one-dimensional regression task with deferral cost c>1, where the data {(X_i,Y_i)}_i=1^n are sampled i.i.d. with equal probability from {(0,1),(0,-1),(2,1),(2,-1), (3,c+1),(3,c+3)}.
We consider the predictor class ℱ={θ_1 ·1_{x=0}+θ_2 ·1_{x=2}+θ_3 ·1_{x=3}: θ_1,θ_2,θ_3∈ℝ}, and the rejector class consists of all indicator functions of a threshold on x, i.e., {1_{x≤θ_r}: θ_r ∈ℝ}. Then one globally optimal pair of regressor and rejector is f^*(x)=0·1_{x≠ 3}+(c+2) ·1_{x=3} and r^*(x)≡ 1, with RwR loss 1. For the pair f(x)≡ 0 and r(x)=1_{x< 2.5}, note that its RwR loss is (2+c)/3 and any small perturbation around (θ_1,θ_2,θ_3,θ_r)=(0,0,0,2.5) cannot improve its loss. Thus, we conclude that it is a local optimum whose RwR loss is (c-1)/3 larger than that of the above global optimum.

§ PROOFS OF SECTION <REF>

§.§ Proof of Proposition <ref>

We show the three parts of Proposition <ref> as follows.

(a) We first show that the expected RwR loss is bounded from below by the truncated loss. Specifically, for any fixed feature X, the loss is no less than the truncated loss 𝔼_Y[(Y-f(X))^2|X] ∧ c for any predictor f and rejector r. To see this, 𝔼_Y [l_𝚁𝚠𝚁(f,r;(X,Y))|X] = 𝔼_Y[(Y-f(X))^2 |X]· r(X) + c·(1-r(X)) ≥ ( 𝔼_Y[(Y-f(X))^2|X ] ∧ c)· r(X)+ ( 𝔼_Y[(Y-f(X))^2|X ] ∧ c)· (1-r(X)) = 𝔼_Y[(Y-f(X))^2|X] ∧ c, where the first equality comes from the definition of the RwR loss, the inequality from the monotonicity of the minimum, and the last equality from direct calculation. Then, taking the expectation with respect to the feature X, we finish the proof of Part (a).

(b) We can directly obtain the result by plugging f^*(·)=f̅(·) and r^*(x) = 1 if 𝔼[(Y-f̅(X))^2|X=x]≤ c, and 0 otherwise, into the expected RwR loss.

(c) This part is obtained by the definition of the discrepancy term max_f∈ℱ min_r∈𝒢(L_𝚁𝚠𝚁(f,r)-L̃(f) ). Specifically, max_f' ∈ℱ min_r∈𝒢(L_𝚁𝚠𝚁(f',r)-L̃(f') ) ≥ min_r∈𝒢(L_𝚁𝚠𝚁(f,r)-L̃(f) ) holds for any f∈ℱ by definition. Then, by rearranging the terms in this inequality, we finish the proof of Part (c).

§.§ Proof of Proposition <ref>

We first show that both the expected squared loss and the truncated loss are minimized by the conditional expectation function f̅(X). For any predictor, its squared loss can be rewritten as follows: L_2(f) = 𝔼[(Y-f(X))^2] = 𝔼[(Y-f̅(X)+f̅(X)-f(X))^2] = 𝔼[(Y-f̅(X))^2]+𝔼[(f̅(X)-f(X))^2]+2𝔼[(Y-f̅(X))(f̅(X)-f(X))] = 𝔼[(Y-f̅(X))^2]+𝔼[(f̅(X)-f(X))^2], where the first line comes from the definition of the squared loss, the second and third lines come from direct calculation, and the last line from the equality 𝔼[(Y-f̅(X))(f̅(X)-f(X))]=0 obtained by the definition of f̅(X). In the last line of (<ref>), the first term is independent of the chosen predictor. The second term is non-negative, and it is zero only when f̅(X)=f(X) with probability one. Hence, for any predictor f≠f̅, the corresponding squared loss is no less than the squared loss of f̅. Thus, the squared loss is minimized by f̅. For the truncated loss, the proof is similar. We have L̃(f) = 𝔼_X[𝔼_Y[(Y-f(X))^2|X] ∧ c] = 𝔼_X[(𝔼_Y[(Y-f̅(X))^2|X]+𝔼_Y[(f̅(X)-f(X))^2|X]) ∧ c] ≥ 𝔼_X[𝔼_Y[(Y-f̅(X))^2|X] ∧ c], where the first line comes from the definition of the truncated loss, the second line from similar steps as in (<ref>), and the last line from the non-negativity of 𝔼_Y[(f̅(X)-f(X))^2|X]. Thus, similarly, the truncated loss is also minimized by f̅. Then, recall that L̃^* ≔ min_f L̃(f) and L_2^* ≔ min_f L_2(f), where the minima are taken over all measurable functions. Thus, we have shown L̃(f̅)=L̃^* and L_2(f̅) = L_2^*. Next, we show the calibration condition (<ref>).
For any predictor f, we have L̃(f) - L̃^* = L̃(f) - L̃(f̅) = 𝔼_X[(𝔼_Y[(Y-f̅(X))^2|X]+𝔼_Y[(f̅(X)-f(X))^2|X]) ∧ c - 𝔼_Y[(Y-f̅(X))^2|X] ∧ c] ≤ 𝔼_X[𝔼_Y[(f̅(X)-f(X))^2|X] ∧ c] ≤ 𝔼[(f̅(X)-f(X))^2], where the first line is obtained by (<ref>), the second and last lines come from (<ref>) and (<ref>), the third line comes from the inequality min(x,c)+min(y,c)≥ min(x+y,c) for any non-negative numbers x,y, and the fourth line comes from x≥ min(x,c) for all x. Finally, to show the calibration condition, it suffices to show that the last line of (<ref>) equals L_2(f)-L_2^* for any predictor f. This can be obtained by (<ref>): L_2(f)-L_2^* = L_2(f)-L_2(f̅) = 𝔼[(Y-f̅(X))^2]+𝔼[(f̅(X)-f(X))^2] - 𝔼[(Y-f̅(X))^2] = 𝔼[(f̅(X)-f(X))^2], where the first line comes from (<ref>), the second line comes from (<ref>), and the last line comes from cancelling 𝔼[(Y-f̅(X))^2].

§.§ Proof of Proposition <ref>

The proof is a direct application of the Rademacher complexity and the calibration condition in Proposition <ref>. We first show the Lipschitzness of the squared loss with respect to the predictor, then apply Lemmas <ref> and <ref> to show that f̂_n achieves a good true squared loss, and finally show the generalization bound. To see the Lipschitzness, for any two predictors f_1,f_2, the following equality and inequalities hold: |(f_1(X)-Y)^2-(f_2(X)-Y)^2| = | 2Y(f_2(X)-f_1(X)) + (f_1(X)-f_2(X))(f_1(X)+f_2(X)) | ≤ | 2Y(f_2(X)-f_1(X)) | + |(f_1(X)-f_2(X))(f_1(X)+f_2(X)) | ≤ 2B| f_2(X)-f_1(X) | + |(f_1(X)-f_2(X))(f_1(X)+f_2(X))| ≤ 4B| f_2(X)-f_1(X) |. Here, the first line comes from expanding the squared terms, the second line comes from the triangle inequality for the absolute value function, the third line comes from the boundedness of f_1(X), f_2(X) and Y in Assumption <ref>, and the last line comes from merging the same terms. Thus, the loss function is 4B-Lipschitz with respect to all predictor functions in ℱ for all (X,Y)∈𝒳×𝒴. Moreover, by a similar argument to (<ref>), we have l(f(X),Y)≤ 4B^2 for all (X,Y)∈𝒳×𝒴 and f∈ℱ. Next, we show the generalization bound for the squared loss via the Rademacher complexity. Specifically, since the squared loss is 4B-Lipschitz with respect to the predictor class, by Lemma <ref>, the Rademacher complexity of the composite function class l(ℱ) ≔ { l(f(X),Y): f∈ℱ} satisfies ℛ_n(l(ℱ))≤ 8Bℛ_n(ℱ). Then, applying Lemma <ref>, we have with probability no less than 1-1/n^2, sup_f∈ℱ|1/n∑_i=1^n (f(X_i)-Y_i)^2 - L_2(f)|≤ 8B·ℛ_n(ℱ)+4B^2log n/√(n). Then, let f_2^*(·)= argmin_f∈ℱ L_2(f) denote the optimal predictor in ℱ with respect to the squared loss. Inequality (<ref>) further implies L_2(f̂_n)-L_2(f_2^*) ≤ 16B·ℛ_n(ℱ)+8B^2log n/√(n). Finally, adding and subtracting L_2(f_2^*), we finish the proof by L̃(f̂_n)-L̃^* ≤ L_2(f̂_n)-L_2^* = L_2(f̂_n)-L_2(f_2^*)+L_2(f_2^*)-L_2^* ≤ 16B·ℛ_n(ℱ)+8B^2log n/√(n) + L_2(f_2^*)-L_2^* = 16B·ℛ_n(ℱ)+8B^2log n/√(n) + min_f∈ℱ L_2(f) - L_2^*. Here, the first line comes from the calibration condition in Proposition <ref>, the second line from adding and subtracting the same term, the third line from (<ref>), and the last line from the definition of f_2^*.

§.§ Proof of Theorem <ref>

The statement of Theorem <ref> can be obtained directly from Proposition <ref> and Part (c) of Proposition <ref>.
Specifically, we have min_r∈𝒢 L_𝚁𝚠𝚁(f̂_n,r) - L^*_𝚁𝚠𝚁 ≤ L̃(f̂_n) - L̃^* + max_f∈ℱ min_r∈𝒢(L_𝚁𝚠𝚁(f,r)-L̃(f) ) ≤ 16B·ℛ_n(ℱ)+8B^2log n/√(n) + min_f∈ℱ L_2(f) - L_2^* + max_f∈ℱ min_r∈𝒢(L_𝚁𝚠𝚁(f,r)-L̃(f) ), where the first inequality is obtained by Part (c) of Proposition <ref>, and the second inequality comes from Proposition <ref>.
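The counterexample in the proof of Proposition <ref> above can also be verified numerically. The sketch below (with an arbitrary illustrative choice c=3) reproduces the RwR losses 1 and (2+c)/3 for the global and local optima, and hence the (c-1)/3 gap.

```python
import numpy as np

c = 3.0   # any deferral cost c > 1
pts = [(0, 1), (0, -1), (2, 1), (2, -1), (3, c + 1), (3, c + 3)]   # equally likely support

def rwr(f, r):
    """RwR loss over the six-point support: squared error if accepted, cost c if rejected."""
    return np.mean([r(x) * (f(x) - y) ** 2 + (1 - r(x)) * c for x, y in pts])

f_global = lambda x: (c + 2) if x == 3 else 0.0   # global optimum: the conditional mean
r_global = lambda x: 1                            # accept everything
f_local  = lambda x: 0.0                          # local optimum described in the proof
r_local  = lambda x: 1 if x < 2.5 else 0          # threshold rejector at 2.5

print(rwr(f_global, r_global))                                  # = 1
print(rwr(f_local, r_local))                                    # = (2 + c) / 3
print(rwr(f_local, r_local) - rwr(f_global, r_global), (c - 1) / 3)   # gap = (c - 1) / 3
```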
http://arxiv.org/abs/2307.02326v1
20230705143041
Security Defect Detection via Code Review: A Study of the OpenStack and Qt Communities
[ "Jiaxin Yu", "Liming Fu", "Peng Liang", "Amjed Tahir", "Mojtaba Shahin" ]
cs.SE
[ "cs.SE" ]
Security Defect Detection via Code Review: A Study of the OpenStack and Qt Communities

Jiaxin Yu^1,2, Liming Fu^1,2, Peng Liang^1,2 (This work is funded by the NSFC with Grant No. 62172311 and the Special Fund of Hubei Luojia Laboratory. Amjed Tahir is supported by a MU SREF grant.), Amjed Tahir^3, Mojtaba Shahin^4
^1 School of Computer Science, Wuhan University, Wuhan, China
^2 Hubei Luojia Laboratory, Wuhan, China
^3 School of Mathematical and Computational Sciences, Massey University, Palmerston North, New Zealand
^4 School of Computing Technologies, RMIT University, Melbourne, Australia
{jiaxinyu, limingfu, liangp}@whu.edu.cn, a.tahir@massey.ac.nz, mojtaba.shahin@rmit.edu.au
August 1, 2023

Background: Despite the widespread use of automated security defect detection tools, software projects still contain many security defects that could result in serious damage. Such tools are largely context-insensitive and may not cover all possible scenarios in testing potential issues, which makes them susceptible to missing complex security defects. Hence, thorough detection entails a synergistic cooperation between these tools and human-intensive detection techniques, including code review. Code review is widely recognized as a crucial and effective practice for identifying security defects. Aim: This work aims to empirically investigate security defect detection through code review. Method: To this end, we conducted an empirical study by analyzing code review comments derived from four projects in the OpenStack and Qt communities. Through manually checking 20,995 review comments obtained by keyword-based search, we identified 614 comments as security-related. Results: Our results show that (1) security defects are not prevalently discussed in code review, (2) more than half of the reviewers provided explicit fixing strategies/solutions to help developers fix security defects, (3) developers tend to follow reviewers' suggestions and action the changes, (4) Not worth fixing the defect now and Disagreement between the developer and the reviewer are the main causes for not resolving security defects. Conclusions: Our research results demonstrate that (1) software security practices should combine manual code review with automated detection tools, achieving more comprehensive coverage in identifying and addressing security defects, and (2) promoting appropriate standardization of practitioners' behaviors during code review remains necessary for enhancing software security.

Index Terms: Code Review, Security Defect, OpenStack, Qt, Empirical Study
§ INTRODUCTION

Security defects can have serious consequences, such as data breaches, intellectual property theft, and disruption of services <cit.>. Numerous studies have emphasized the significance of keeping software under control to reduce the risk of exploitation <cit.>. Nevertheless, the practice of leaving a large number of security defects unaddressed in the production environment for extended periods of time and only patching them after they have been released publicly <cit.> has a negative impact on software quality and leads to increased maintenance costs. Therefore, effectively minimizing the financial and reputational costs of security incidents by detecting security defects as early as possible remains a major focus for the stakeholders involved in software production. Many organizations are shifting security practices to earlier stages of software development, hoping to address security concerns before they become more difficult and expensive to fix <cit.>. In this context, code review has proven to be an effective method to identify and locate security defects early <cit.>. Code review is a valuable practice of systematically and internally examining revisions before code is released to production to detect defects and ensure quality, and it is one of the most important practices of modern software development <cit.>. Compared with security defect detection tools, code review participants are mostly project members who can take full account of the code context <cit.>; thus, they are in a position to identify security defects effectively. Several studies have focused on security defect detection in code review (e.g., <cit.>). Bosu et al. investigated the distribution and characteristics of security defects identified by code reviewers <cit.>, while Paul et al. focused on the security defects that were missed during code review <cit.>. However, most of the research has concentrated on the identification of security defects, rather than delving into their resolution procedures. Specifically, little is known about the actions taken by practitioners and the challenges they face when resolving identified security defects in code review. Exploring these aspects could help increase the fixing rate of identified security defects during code review.
To this end, this work aims to explore the resolution of security defects through the means of code review, thus contributing to develop a more comprehensive body of knowledge on security defect detection via code review. We first collected 432,585 review comments from four active projects of two widely known communities: OpenStack (Nova and Neutron) and Qt (Qt Base and Qt Creator). After a keyword-based search on these review comments, we manually analyzed 20,995 potential security-related comments, resulting in 614 comments that actually identified security defects. We then studied the types of security defects identified, how the practitioners treat the identified defects, and why some of them are finally unresolved in code review. Our findings show that: (1) security defects are not widely identified in code review; (2) when faced with security defects, most reviewers express their opinions on fixing them and provide specific solutions, which are generally agreed and adopted by developers; (3) Disagreement between the developer and reviewer and Not worth fixing the defect now are the most frequent causes of not resolving security defects. The contributions of this work are: (1) We highlight the importance of manual and context-sensitive security review of code, which may reveal security defects undetected by automated tools. (2) We complement the datasets of previous works on the types of security defects identified during code review. (3) We provide the best practices for practitioners' behaviour in modern code review for security defects detection. § RELATED WORK §.§ Security Defect Detection A body of research has focused on the current status of security defect detection across software ecosystems. Alfadel et al. discussed vulnerabilities propagation, discovery, and fixes in Python ecosystem <cit.>. It was found that most exposed security defects were not being fixed in a timely manner. A similar study of packages demonstrated that delays in fixing security defects were often caused by the fact that the fix was bundled with other features and did not receive the necessary prioritization <cit.>. Lin et al. investigated the security defect management in Debian and Fedora ecosystems <cit.>, and found that over 50% of security defects fixes in Linux distributions can be integrated within one week. Our work differs from the aforementioned studies in that the security defects discussed in these works are publicly disclosed, while in our work we focused on security defects that practitioners may notice during their daily coding activities (but may not have been already disclosed). Security defects can be detected through automated approaches or manually. Tudela et al. utilized hybrid analysis to detect the OWASP Top Ten security vulnerabilities and discussed the performance of different tool combinations <cit.>. Singh et al. compared the difference in automated (belong to DAST) and manual approaches for penetration testing, indicating that humans can locate security defects missed by automated scanners <cit.>. Osterweil et al. formulated a framework using IAST to improve human-intensive approaches in security defect detection and proved its effectiveness <cit.>. Inspired by the above-mentioned studies, we were motivated to explore an effective human-intensive practice for detecting security defects, i.e., code review, and pave the way for further integrating automated tools into the code review process. 
§.§ Security Defect Detection in Code Review Several studies have studied security defect detection in code review. For example, di Biase et al. explored the value of modern code review for system security and investigated the factors that can affect security testing based on the Chromium project  <cit.>. Thompson et al. conducted a large-scale analysis of the dataset obtained from GitHub <cit.> and reaffirmed the crucial relationship between code review coverage and software security. There is a growing interest in improving the effectiveness of security code review. Paul et al. analyzed 18 attributes of a code review to explore factors that influence the identification of security defects, in order to pinpoint areas of concern and provide targeted measures <cit.>. Braz et al. analyzed the impact of two external assistance measures on the identification of security defects <cit.> and found that explicitly requiring practitioners to concentrate on security can greatly increase the probability of finding security defects, while the further provision of security checklists did not show better results. Some studies qualitatively analyzed the implementation of security defect detection in code review. Alfadel et al. investigated security-related reviews in packages <cit.> to analyze the proportion, types, and solutions of identified security defects in these reviews. In comparison, we targeted different data sources and provided a more in-depth analysis, which includes the causes for not resolving security defects and the actions of developers and reviewers when facing security defects in code review; therefore providing a holistic understanding of the current status of security code review. Motivated by these related works, we aim to bridge the knowledge gap with a view to inspire new research directions and enhance the effectiveness of detecting security defects. § METHODOLOGY §.§ Research Questions The goal of this study is to examine the implementation of security defect detection in code review. Specially, we analyzed review comments to investigate how security defects are identified, discussed, and resolved by reviewers and developers. To achieve this goal, we formulated the following Research Questions (RQs): RQ1: What types of security defects are identified in code reviews? Previous studies have explored the distribution of security defects found in code reviews <cit.>. However, those studies have largely focused on specific systems and the types of security defects may vary in different systems, warranting additional research encompassing diverse projects to establish more general findings <cit.>. Driven by this, RQ1 investigates the frequency of each security defect type within the OpenStack and Qt communities, aiming to complement the findings from existing studies. RQ2: How do developers and reviewers treat security defects identified in code reviews? Given that strict reviewing criteria were mostly abandoned in modern code review <cit.>, it is necessary to establish a good understanding of the current practices employed by practitioners and how they influence the quality of security code review, so as to capture the undesirable behaviors and formulate corresponding suggestions for best practices. This RQ aims to explore concrete actions of developers and reviewers after security defects were identified. Answering this RQ helps to better understand the resolution process and the extent to which manual security defect detection is implemented in code review. 
In addition, the common solutions of each security defect type extracted from the changed source code can be used to support developers in addressing security defects in the future. This RQ is further decomposed into four sub-RQs: RQ2.1: What actions do reviewers suggest to resolve security defects? RQ2.2: What actions do developers take to resolve security defects? RQ2.3: What is the relationship between the actions suggested by reviewers and those taken by developers? RQ2.4: What are the common solutions to each security defect type identified in code reviews? RQ3: What are the causes for developers not resolving the identified security defects? In some cases, security defects are identified by reviewers but not ultimately resolved by developers. However, little research has been conducted to understand the reasons behind these cases, which could shed light on potential obstacles developers encounter and help in facilitating the resolution of identified security defects. As a result, RQ3 explores potential causes of why some defects are not fixed, with the objective of filling this gap and providing valuable insights. §.§ Data Collection The data collection, labelling, extraction, and analysis process is described below (an overview is shown in Fig. <ref>). §.§.§ Projects Selection This study analyzes security defects in code reviews collected from four projects of two communities: Nova[<https://github.com/openstack/nova>] and Neutron[<https://github.com/openstack/neutron>] from OpenStack[<https://www.openstack.org/>], and Qt Base[<https://github.com/qt/qtbase>] and Qt Creator[<https://github.com/qt-creator/qt-creator>] from Qt[<https://www.qt.io/>]. These two communities are selected based on the following two criteria <cit.>: 1) Reviewing Policy - the community has established a strong review process, and 2) Traceability - the review process of the community should be traceable. OpenStack is a platform that builds and manages public or private cloud, with a set of projects responsible for processing different core cloud computing services. Qt is a cross-platform application for creating GUI applications. We deemed these two communities to be appropriate for our study as they have a large number of code reviews, which are performed using a traceable code review tool - Gerrit[<https://www.gerritcodereview.com/>]. Gerrit offers on-demand tracking of the review process <cit.>. The projects from the two communities have been widely used in previous code review studies (e.g., <cit.>). Similar to Hirao et al. <cit.>, we selected two active projects from OpenStack (i.e., Nova and Neutron) and Qt (i.e., Qt Base and Qt Creator), which have the highest number of patches. §.§.§ Review Comments Collection Using the RESTful API provided by Gerrit, we obtained a total of 432,585 review comments from the four projects (166,237 review comments from OpenStack and 266,348 from Qt) spanning from January 2017 to June 2022, the time when we started this work. Considering that our study aims to analyze the practices of developers and reviewers when dealing with security defects, any comments made by bots should be excluded. Hence, we filtered out the review comments of which the author is a bot account (i.e., “Zuul” in OpenStack and “Qt Sanity Bot” in Qt). We also removed review comments in files that do not correspond to any programming language or are clearly outside the scope of code review, by checking the filename extension (e.g., “” and “”). 
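To make this collection step concrete, the sketch below shows one way review comments could be pulled from a Gerrit server over its REST API and filtered for bot authors and out-of-scope files. It is an illustrative sketch under stated assumptions, not the authors' actual script: the server URL, query window, page size, and excluded-extension list are assumptions, and only the bot account names ("Zuul" and "Qt Sanity Bot") come from the text above.

"""Illustrative sketch: collect review comments from a Gerrit server and drop
bot-authored comments and comments on out-of-scope files. Endpoint paths follow
the public Gerrit REST API; the server URL, query window, page size, and the
excluded extensions are assumptions for illustration."""
import json
import requests

GERRIT_URL = "https://review.opendev.org"        # example: OpenStack's Gerrit instance
BOT_ACCOUNTS = {"Zuul", "Qt Sanity Bot"}         # bot authors named in the paper
EXCLUDED_EXT = (".txt", ".rst", ".md")           # hypothetical non-code extensions


def gerrit_get(path, params=None):
    """GET a Gerrit REST endpoint and strip the )]}' anti-XSSI prefix line."""
    resp = requests.get(f"{GERRIT_URL}{path}", params=params, timeout=30)
    resp.raise_for_status()
    return json.loads(resp.text.split("\n", 1)[1])


def review_comments(project, after="2017-01-01", before="2022-06-30", limit=100):
    """Yield (file, comment message) pairs for changes of a project.

    Pagination over the full change list is omitted to keep the sketch short.
    """
    query = f"project:{project} after:{after} before:{before}"
    for change in gerrit_get("/changes/", params={"q": query, "n": limit}):
        comments = gerrit_get(f"/changes/{change['id']}/comments")
        for filename, entries in comments.items():
            if filename.endswith(EXCLUDED_EXT):
                continue                         # skip files outside the review scope
            for entry in entries:
                author = entry.get("author", {}).get("name", "")
                if author in BOT_ACCOUNTS:
                    continue                     # skip bot comments
                yield filename, entry.get("message", "")


if __name__ == "__main__":
    for fname, message in review_comments("openstack/nova"):
        print(fname, "->", message[:80])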
§.§.§ Potential Security-related Comments Collection We employed a keyword-based search approach to identify security-related review comments We adopted the keyword set proposed in Paul et al.'s work <cit.>, as it is considered the most comprehensive keyword set in previous research, with the largest number of types and keywords. The set includes 103 keywords, which were classified into 11 security defect types and an extra Common Keywords type, with each security defect type containing Common Weakness Enumerations (CWEs) <cit.> to clarify its definition. After thoroughly analyzing the keyword set proposed by Paul et al. <cit.>, we made the following adjustments to the set: First, we adapted parts of the types of security defect and corresponding keywords. For example, we split Denial of Service (DoS) from the Denial of Service (DoS) / Crash type defined in Paul et al.'s work <cit.>, since we considered DoS as one clear security defect type based on its definition in CWEs. The keywords relevant to DoS were also separated and reclassified into the new DoS type. Second, we collected differentiated keywords and security defect types from previous studies <cit.> and extended the keyword set obtained from the last step. One additional security defect type was added (i.e., the Command Injection type <cit.>). Moreover, another one additional security defect type was created since part of keywords from <cit.> could not be mapped into the existing keyword set (i.e., Use After Free was created to include “use-after-free” and “dynamic” based on the definition of CWEs). 19 differentiated keywords collected from previous studies were assigned to specific types (including Common Keywords) according to their meanings, (e.g., adding “crypto” to the Encrypt type). After that, the initial keyword set of our study was formulated and presented in Table <ref>. We ultimately obtained 122 keywords, which were categorized into 15 security defect types and the Common Keywords type. To explicitly illustrate our adjustments, the sources of each type are presented, and newly added keywords compared to the keywords from Paul et al.'s work <cit.> are emphasized in italics. Given that the effectiveness of the keyword-based approach heavily depends on the set of keywords used, we followed the approach proposed by Bosu et al. <cit.> to refine the initial set of keywords, which includes the following steps: * build a corpus by searching for review comments that contain at least one keyword of our initial set of keywords (e.g., “racy”, “overflow”) in the review comments collected in Section <ref>. * perform tokenization to each document on the corpus. Considering code snippets contained in review comments, we also applied the identifier splitting rules in this progress (e.g., “FlavorImageConflict” becomes “Flavor Image Conflict”, security_group becomes “security group”). * remove stopwords, punctuations, and numbers from the corpus and convert all tokens into lowercase. * use from the NLTK toolkit <cit.> to obtain the stem of each token (e.g., “merged”, “merging”, and “merges” have the same token “merg”). * create a Document-Term matrix  <cit.> from the corpus and identify the additional words that frequently co-occur with each of our initial keywords (co-occurrence probability of 0.05 in the same document, as also utilized in <cit.>). * manually analyze the additional words to determine whether to include them into the initial keyword set. 
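As a concrete illustration of the refinement steps listed above, the following sketch implements one plausible reading of the pipeline: identifier splitting, stopword removal, stemming, a binary document-term matrix, and the 0.05 co-occurrence check. The identifier-splitting regex, the use of PorterStemmer (the specific NLTK stemmer is not named above), the small seed keyword list, and the exact definition of "co-occurrence probability" are illustrative assumptions, not the authors' implementation.

"""Illustrative sketch of the keyword-refinement pipeline: preprocess review
comments and look for extra terms that frequently co-occur with seed keywords.
The preprocessing details and the co-occurrence definition are assumptions."""
import re

import numpy as np
from nltk.corpus import stopwords            # requires nltk.download("stopwords")
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer

STEMMER = PorterStemmer()
STOPWORDS = set(stopwords.words("english"))
SEED_KEYWORDS = ["overflow", "racy", "leak"]  # small illustrative subset of the keyword set


def preprocess(comment):
    """Split identifiers, lowercase, drop stopwords/punctuation/numbers, and stem."""
    text = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", comment)  # FlavorImageConflict -> Flavor Image Conflict
    text = text.replace("_", " ")                        # security_group -> security group
    tokens = re.findall(r"[a-z]+", text.lower())
    return " ".join(STEMMER.stem(t) for t in tokens if t not in STOPWORDS)


def cooccurring_terms(comments, threshold=0.05):
    """Return candidate terms whose document co-occurrence rate with any seed
    keyword reaches the threshold (one plausible reading of the criterion)."""
    docs = [preprocess(c) for c in comments]
    vectorizer = CountVectorizer(binary=True)
    dtm = vectorizer.fit_transform(docs).toarray()       # binary document-term matrix
    vocab = vectorizer.get_feature_names_out()
    seed_stems = {STEMMER.stem(k) for k in SEED_KEYWORDS}
    seed_cols = [i for i, term in enumerate(vocab) if term in seed_stems]
    candidates = {}
    for col in seed_cols:
        has_seed = dtm[:, col] == 1                      # documents containing this seed keyword
        if not has_seed.any():
            continue
        rates = dtm[has_seed].mean(axis=0)               # fraction of those documents containing each term
        for idx in np.where(rates >= threshold)[0]:
            if idx not in seed_cols:
                candidates[vocab[idx]] = max(candidates.get(vocab[idx], 0.0), float(rates[idx]))
    return candidates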
No additional words were found that co-occurred with any one of the initial keywords. Therefore, we were of the opinion that the present keyword set is adequate for supporting keywords-based search and filtering. After that, a script was developed to search for code review comments that contain at least one of the keywords identified in Table <ref>. All these steps led to 20,995 review comments from the four projects, which is called potential security-related review comments. §.§ Manual Labelling The 20,995 potential security-related review comments obtained from the previous step may contain many false positives. Hence, we manually inspected the content of these comments, their corresponding discussions, and related source code to determine and label whether they are actually security-related. We defined the labelling criteria, i.e., the review comment should be clearly related to security and meet the definition of one of the CWEs <cit.> presented in Table <ref>. Aimed at ensuring consistency and improving inter-rater reliability, a pilot labelling was independently conducted by the first and second authors on 200 potential security-related comments randomly selected from the Nova project. The labelling results were compared and the level of agreement between the two authors was measured using Cohen's Kappa coefficient test <cit.>. For review comments in which the judgements of two raters differ, they were reviewed, evaluated, and discussed with the third author until a consensus was reached. The calculated Cohen's Kappa coefficient is 0.87, thus indicating that the two authors reached a high level of agreement. The first author proceeded to label all the remaining potential security-related comments, and the review comments that the first author was unsure were discussed with the second author to reach a consensus. This process led to the identification of a total of 614 security-related review comments for further analysis and the distribution of data points across the four projects is presented in Table <ref>: §.§ Data Extraction and Analysis A set of data items (see Table <ref>) was formulated and extracted from the contextual information of each of the 614 security-related comments, including their corresponding discussion thread and source code, to answer our RQs. §.§.§ RQ1 We classified 614 security-related review comments into 15 security defect types predefined in Table <ref>. Based on this table, for each review comment, we identified the CWE corresponding to the issue described in the comment, and categorized the comment under the security defect type to which that CWE belongs. As shown in the example below, the reviewer pointed out that the calculation of may overflow and lead to undefined behavior, which is consistent with the description of CWE-109, that is “The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value”, hence is labelled as Integer Overflow. [label=lst:sample,breaklines=true,breakatwhitespace=true,captionpos=b,basicstyle=,escapeinside=||,frame=single] |Link:| |<http://alturl.com/td7ej>| |Project:| Qt Base |Type:| Integer overflow |Reviewer: This is UB when pos + n overflows...| |Developer:| Done §.§.§ RQ2 We categorized the actions suggested by reviewers into three categories with reference to what was formulated by Tahir et al. in <cit.>. * Fix: recommend fixing the security defect. 
* Capture: detect the security defect, but do not provide any further guidance. * Ignore: recommend ignoring the security defect. Confronted with review comments posted by reviewers, there are three possible behaviors for developers: * Resolve: The developer resolved the security defect identified by the reviewer. * Not resolve: The developer ignored the security defect identified by the reviewer. * Unknown: We are unable to determine the behavior of the developer. We defined Unknown to describe the case that the developer responds to the reviewer with a promise to fix the security defect in the future, but we could not obtain specific resolution evidence from the source code due to the overwhelming amount of manual inspection of unlimited commits. An example of such a case is shown below: [label=lst:sample,breaklines=true,breakatwhitespace=true,captionpos=b,basicstyle=,escapeinside=||,frame=single] |Link:| |<http://alturl.com/8g2p9>| |Project:| Nova |Type:| Buffer overflow |Developer:| ...In a future patch that adds the ability to configure the executor type, we will need to deal with the issue you raise here. We inspected the discussion and the follow-up submitted code to determine whether a security defect has been resolved. A security defect was considered resolved only when the situation meets the following three possible categories: * Code is modified in the subsequent patchsets by the developer to resolve the security defect before the code change is merged. * Developer mentioned clearly in the reply to comments that the security defect has been fixed in another code change. * The code change with the security defect was abandoned. As insecure code is not merged, it would not pose a harmful threat to the source code base. As shown in Fig. <ref>, the developer added an assert statement to check the buffer size in Line 70 of in patchset 5 to fix the buffer overflow, thus we can confirm that the security defect identified in this review comment was resolved. Employing the open coding and constant comparative method <cit.>, we used MAXQDA[<https://www.maxqda.com/>] as the coding tool and extracted the solutions developers adopted from the specific code modification for fixing security defects in resolved instances, so as to investigate the common solutions of each security defect types. §.§.§ RQ3 To further understand why unresolved security defects were ultimately ignored by practitioners, we also utilized the open coding and constant comparative method <cit.> to examine the discussions between developers and reviewers. For the purpose of minimizing bias, this data extraction was performed by the first author and verified by two other co-authors. Any conflicts were discussed and addressed by the three authors, using a negotiated agreement approach <cit.>. The complete extraction results in this step is available online <cit.>. § RESULTS §.§ RQ1: Category of Security Defects Identified in Code Reviews As explained in Section <ref>, 614 review comments were identified as security-related comments, which account for less than 1% of all comments in code reviews. As detailed in Table <ref>, the majority of security defects (539 out of 614, 87.8%) were identified by reviewers, which is considerably more than those raised by developers. Therefore this study is based on 539 security-related review comments that meet the former case. As described in Section <ref>, we have predefined 15 types of security defects with their distribution (see Table <ref>). 
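The summary statistics reported below (the type frequencies in Table <ref> and the later per-type fix rates) can be tabulated from the manually labelled comments in a few lines; a minimal pandas sketch is given here for illustration. The file name and the column names ("defect_type", "developer_action") are hypothetical and only indicate the shape of the analysis, not the authors' actual data schema.

"""Minimal sketch: tabulate defect-type frequencies and developer fix rates from
the manually labelled comments. File and column names are hypothetical."""
import pandas as pd

# One row per reviewer-identified security-related comment (539 rows in the paper).
labelled = pd.read_csv("security_related_comments.csv")

# Frequency and share of each security defect type (cf. RQ1).
type_counts = labelled["defect_type"].value_counts()
type_share = (type_counts / len(labelled) * 100).round(1)
print(pd.DataFrame({"count": type_counts, "percent": type_share}))

# Fix rate per defect type: share of comments whose defect the developer resolved (cf. RQ2.2).
fix_rate = (
    labelled.assign(resolved=labelled["developer_action"].eq("Resolve"))
            .groupby("defect_type")["resolved"]
            .mean()
            .mul(100)
            .round(1)
            .sort_values(ascending=False)
)
print(fix_rate)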
On the whole, we found that Race Condition is the most frequently identified type and was discussed in as many as 39.0% of instances. The second and third most frequently identified types are Crash and Resource Leak, accounting for 22.8% and 10.9%, respectively. There are 41 (7.6%) review comments identified Integer Overflow, followed by Improper Access with 31 (5.8%) instances. As can be seen in Table <ref>, there are also nine types that were identified on rare occasions with proportions lower than 5%. Although SQL Injection is a common network attack and listed as the top 10 web application security risks by the Open Web Application Security Project (OWASP) in the past 15 years <cit.>, no instance of this type was found in this study. [colback=white,colframe=black,left=0.05cm,right=0.05cm,top=0.05cm,bottom=0.05cm, sharp corners, boxrule=0.8pt] RQ1 summary: Security defects are not prevalently discussed in code review, with the proportion less than 1%. Of those security-related review comments, a considerable amount of review comments detected the security defects race condition (39.0%), crash (22.8%), and resource leak (10.9%). §.§ RQ2: Treatment of Security Defects by Developers and Reviewers RQ2.1: Table <ref> shows that over half of the reviewers (290 out of 539, 53.8%) expected developers to fix the identified security defects. A large portion (251 out of 290, 86.6%) of these cases include specific solutions to assist developers in resolution, which may provide suggestions or even detailed code snippets for resolving the defects. Below is an example where the reviewer recommended that the developer should add verification logic to avoid the Buffer Overflow defect. [label=lst:sample,breaklines=true,breakatwhitespace=true,captionpos=b,basicstyle=,escapeinside=||,frame=single] |Link:| |<http://alturl.com/tk8t9>| |Project:| Qt Base |Type:| Buffer Overflow |Reviewer: you should verify that this matches| |chunkSize|,otherwise the buffer may overflow. Only 13.4% (39 out of 290) of those fixes asserted that the defect needed to be fixed, without any guiding solutions. For example: [label=lst:sample,breaklines=true,breakatwhitespace=true,captionpos=b,basicstyle=,escapeinside=||,frame=single] |Link:| |<http://alturl.com/783po>| |Project:| Qt Base |Type:| Integer Overflow |Reviewer:| This could overflow too...|You'll need to| |fix it| otherwise the commit won't integrate. There are also 210 (39.0%) cases where reviewers only identified security defects without indicating the next step that the developer should follow, which fall under the definition of Capture type. Besides that, there were a few reviewers (39, 7.2%) who explicitly suggested ignoring the identified security defects for various reasons, such as the issues not worth fixing. RQ2.2: We inspected the discussion and subsequent patchsets to determine whether a security defect was fixed finally. As shown in Table <ref>, developers chose to fix the identified security defect more often, accounting for 65.9%. The actions developers took to each security issue are presented in Table <ref>. The overall result is that almost every type of security defect has a fix rate upwards of 50.0%, except for Deadlock, as low as 36.4%. As analyzed in RQ1, defects of type Race Condition, Crash, and Resource Leak are the top three frequently identified security defects. As demonstrated in Table <ref>, these types of security defect are also frequently addressed by developers in code reviews with fix rates of 64.3%, 71.5%, and 79.7%, respectively. 
In addition to the top three types of security defects, there are another 11 security defect types from “Integer Overflow” to “Format String”, totalling 147, and the fixing rate of these 11 types is comparatively low, at 57.8% (85 out of 147). RQ2.3: The relationship between the action developers took and the action reviewers suggested is illustrated in Fig. <ref>. When reviewers provide a clear idea for fixing the identified security defects with specific solutions (Fix with a specific solution), the fix rate by developers reaches 81.3% (204 out of 251). When reviewers only point out that fixing is needed but do not offer any guidance (Fix without a specific solution), developers choose to address these defects in 61.5% (24 out of 39) of the cases. Furthermore, when reviewers indicate the existence of security defects without further instructions (Capture), only 59.5% (125 out of 210) of these issues are fixed. Based on our findings, it can be speculated that reviewers' suggestions that include guidance in code review, such as whether and how to resolve defects, are crucial to improving the overall fix rate of identified security defects. As shown in Fig. <ref>, for the instances in which the actions suggested by reviewers are the Fix type, 78.6% (228 out of 290) of developers fixed the identified security defects. For the instances in which reviewers suggested ignoring the defects, nearly all (36 out of 39, 92.3%) developers ignored the defects. Overall, it can be concluded that the majority of developers tend to agree with reviewers' opinions when the reviewers express clear perspectives on defect handling. Hence, the participation and enthusiasm of reviewers are crucial for detecting security defects during code review. RQ2.4: The coding solutions adopted by developers to resolve different security defects are further investigated and presented in Table <ref>. In order to make the sample size large enough to ensure the credibility of the conclusions, we only selected the top three security defects based on their prevalence for analysis, i.e., Race Conditions, Crash, and Resource Leak. In terms of Race Condition, the most common approach adopted by developers is to take thread-safety measures. These measures include using thread-safe functions, such as atomic operations, the function, or synchronization functions that utilize signals and slots (e.g., in Qt). They also employed custom logic when working with resources, including measures such as adding locks, usage limitations, and updating before usage to ensure consistency. Code refactoring is also an important solution for Race Condition, with 33 instances. A few cases adopted concurrency management, which includes passing messages between threads and adding wait functions. Additionally, 7 developers solved the issue by handling side effects, which means dealing with the consequences of Race Conditions indirectly, such as capturing exceptions. In the instances of Crash, there are five possible solutions. Code refactoring and adding condition check are the two main solutions adopted by developers to fix the Crash defects. In 13 review comments, developers captured exceptions by try/catch block to avoid Crash. Furthermore, 6 developers safely terminated execution in advance to prevent damage caused by an abrupt Crash, and a specific example of this case is to add an assert statement to immediately trigger an exception and terminate the execution of the program, if a certain condition or constraint is not met. 
There are also 4 cases where developers used safe functions that can eliminate potential exceptions, thus improve the overall stability of the program and minimize the likelihood of crashing. Approximately half of Resource Leak defects are fixed by adding resource release functions, where developers may explicitly close resources or prevent skipping of the deletion function through modification in code logic. 9 developers also used resource-management techniques, such as smart pointers, Resource Acquisition Is Initialization (RAII), or bridge technologies during fixing. Additionally, 8 developers reduced resource allocation to avoid leaks through converting to passing by reference, transferring resource ownership and so on. Only 4 cases involve code refactoring as a solution, while just 2 cases addressed the security defects through handling side effects, as previously mentioned. [colback=white,colframe=black,left=0.05cm,right=0.05cm,top=0.05cm,bottom=0.05cm, sharp corners, boxrule=0.8pt] RQ2 summary: 53.8% of the reviewers indicated a need to fix the identified security defects after their detection, and most of them were willing to provide specific solutions for developers to fix the defects. From the developers' perspective, majority of developers tend to agree with reviewers' suggestions, and over half of the identified security defects were resolved by developers. §.§ RQ3: Causes of Not Resolving Security Defects According to the aforementioned result of RQ2.2, there are 161 instances where the identified security defects were not resolved. By manually inspecting the discussion for each review comments, we excluded 64 (39.8%) review comments neither developers nor reviewers involved in these instances clearly indicate the causes for ignoring the identified security defects, leaving us with 97 instances (60.2%) for further analysis. The statistical results of the remaining instances can be found in Table <ref>, and six causes were then identified. Nearly half of (44, 45.4%) unresolved security defects are because either developers or reviewers think it is Not worth fixing the defect now, which is the most common cause of not resolving the identified security defects. From the perspective of security defects, the identified security defects in these cases may be harmless and acceptable for developers, or the occurrence scenarios of security defects are so tricky that they will not become system hazards under normal utilization. It may also be because that there are other security defects in the code that will have a greater impact, and those currently found are negligible comparatively. On the developer side, fixes might cost too much effort and require tons of changes. If existing solutions had other adverse effects on the system and were irreconcilable, developers would also choose to ignore the identified defects in light of the benefit of current code changes. In addition, some developers noted that the resolutions for identified security defects were not an immediate concern and could be considered in the future. 
Two examples corresponding to the above two situations are presented below, the cruxes has been emphasized in bold: [label=lst:sample,breaklines=true,breakatwhitespace=true,captionpos=b,basicstyle=,escapeinside=||,frame=single] |Link:| |<http://alturl.com/hm4o7>| |Project:| Nova |Developer:| after discussing it on IRC [1], we went on a consensus that it's |acceptable| to remove the VIF from the metadata since the NIC on the VM already detached, even if the Neutron action could potentially fail. [label=lst:sample,breaklines=true,breakatwhitespace=true,captionpos=b,basicstyle=,escapeinside=||,frame=single] |Link:| |<http://alturl.com/xcb6n>| |Project:| Qt Base |Developer:| I'll leave it as it is - otherwise I'll have to |change the example too much and write| |tons of code| obscuring the real example. We attribute Disagreement between the developer and the reviewer to be the reason why developers do not resolve the security defects in 33 review comments (34.0%). In these cases, some developers could not comprehensively understand reviewers' opinions, while others indicated that the identified security defects did not exist. Furthermore, some developers believed that fixing was unnecessary or the solution was unreasonable. In the following example, the developer objected to the reviewer's suggestion to control traffic by adding a security group, asserting that no modification was required. [label=lst:sample,breaklines=true,breakatwhitespace=true,captionpos=b,basicstyle=,escapeinside=||,frame=single] |Link:| |<http://alturl.com/9uwgv>| |Project:| Neutron |Reviewer:| i suspect this requires security groups. |Developer:| Why we need this? Could you explain because I don't think we need anything here. Due to the lack of knowledge or limitation by other system logic, 11.3% of identified security defects were ignored for the reason that practitioners had no effective solution to thoroughly resolve the defects, and below is an example of this case: [label=lst:sample,breaklines=true,breakatwhitespace=true,captionpos=b,basicstyle=,escapeinside=||,frame=single] |Link:| |<http://alturl.com/t5paz>| |Project:| Qt Creator |Reviewer:| ...If you are not happy with a crash you can add a check against 0. This will avoid the crash here but I am pretty sure that it will crash sooner or later on a different location... In 6.2% review comments, the reason for not resolving security defects is that the resolution is considered out of the scope of the commit. As shown in the example below, the identified security defect was historical and thus orthogonal with the feature of this commit. Accordingly, the developers reckoned that those defects should wait to be resolved in specific logic changes in the future, rather than now. [label=lst:sample,breaklines=true,breakatwhitespace=true,captionpos=b,basicstyle=,escapeinside=||,frame=single] |Link:| |<http://alturl.com/joasw>| |Project:| Nova |Developer:| ...I think that the multi-attach problem is |orthogonal| and should be investigated in another patch. In addition, three occasional instances were found, in two (2.1%) of which the developers believed that it was users' responsibility to make correct choices to guarantee the system running appropriately, and no any modification was conducted to the source code. While in the remaining one (1.0%), the developer clearly indicated that he/she had no time to rework and left the reviewer to accept identified defects or directly abandon the whole change. 
[colback=white,colframe=black,left=0.05cm,right=0.05cm,top=0.05cm,bottom=0.05cm, sharp corners, boxrule=0.8pt] RQ3 summary: Generally speaking, 39.8% related instances did not provide the cause of failure to resolve. Not worth fixing the defect now and Disagreement between the developer and the reviewer are the main reasons of ignoring security defects. § IMPLICATIONS Here we discuss several implications of the findings reported in this paper. A two-step of detection mechanism is suggested to conduct security practices in software development. Our study found that in the process of code review, the majority of reviewers provided useful suggestions to fix identified security defects, and developers usually agreed and adopted solutions suggested by reviewers. This indicates that reviewers' assessment of security defects is trustworthy for developers. Generally speaking, code review is effective in detecting and addressing security defects. Although various tools (e.g., SAST, DAST, IAST) have been used in modern code review to speed up the review process, these tools test only based on known scenarios and have limitations in test coverage, thus resulting in potential false positives <cit.>. Experienced and knowledgeable code reviewers, due to their deeper understanding of code context, can capture security defects which do not conform to known patterns and cannot be detected by tools. Therefore, automated tools and code review, as two significant approaches of security defect detection, need to complement each other. We recommend a two-step detection mechanism that combines the two approaches: tools to conduct scalable and fast security defect detection as the first check, and then the reviewers to conduct code review referring to the detection results of the tool. During the second step, the reviewers check the results generated by tools to provide developers with instructions for further action, and at the same time review the submitted code to find defects that the tool failed to detect. This mechanism not only improves efficiency, but also enhances the comprehensiveness of security defect detection. The characteristics of the project can affect the type and quantity of security defects found in it. We found that XSS (Cross-Site Scripting) and SQL Injection are less discussed during code review, which is consistent with the findings of Paul et al. <cit.>, but contrary to the results of di Biase et al. <cit.>, which demonstrated that XSS was a frequently identified security defect with a relatively higher number than other types. The projects used in this study (Nova, Neutron, Qt Base, and Qt Creator) and Paul et al.'s work (Chromium OS) are the projects that provide infrastructure for higher-level applications to run on, with less direct interaction with users' inputs and outputs, while di Biase et al. selected Chromium, a Web browser that has multiple ways of directly interacting with users. One possible reason for this result is that the likelihood of potential input/output-related security defects in core components and projects may be low. This further confirms that project characteristics can influence the types and quantity of security defects that may exist in this project. Reviewers need to pay more attention to high-risk code with the use of multi-threading or memory allocation. Race Condition and Resource Leak related security defects are frequently identified in code review. These two defect types are also widely recognized as common defects in software development <cit.>. 
Hence, we encourage code reviewers to conduct a rigorous inspection of code involving multi-threading and memory allocation during code review, as they can potentially introduce Race Condition and Resource Leak defects, making them more susceptible to security risks. Appropriate standardization of practitioners' behaviors in code review is critical for better detection of security defects. In modern code review, strict reviewing criteria are not mandated <cit.>. We found that some developers' and reviewers' actions result in ambiguity during the code review process. For example, some comments that identified security defects were neither responded nor had corresponding code modifications. Hence, code reviews may not foster a sufficient amount of discussion <cit.>, increasing the time and effort of the development process and having a negative impact on software quality. Here are several specific recommendations regarding standardization: (1) For security defects that remain unresolved due to disagreement between developers and reviewers, reviewers could further assess the risk of the security defects. We found that the main reason for not resolving security defects is Disagreements between the developer and the reviewer in which the developer did not agree with the reviewer's assessment, and thus decided not to fix the security defects identified. However, due to the different knowledge and experience, it is likely for the developer to merge risky security defects into the source code. Hence, we suggest that when there is a disagreement, reviewers should further assess the risk of identified security defects and communicate with developers if necessary. (2) It is preferable for developers to resolve identified security defects. However, when developers decide not to address a security defect (possibly due to risk assessment or cost-benefit considerations), they should provide clear reasons for this decision in the discussion. It was found that in 40% of the cases, the identified security defects were left unresolved, with no reasons provided. This negatively impacts adequate communication between reviewers and developers, making review details opaque and untraceable. Therefore, we recommend that when a security defect was decided to be left unresolved, sufficient justifications should be provided in the discussion to facilitate further handling of the unresolved security defects. (3) Unresolved security defects should be properly documented, and the developers who decide to fix them in the future should be clearly scheduled for resolution in subsequent stages. According to the results of RQ2.2, 29.9% of security defects were unresolved and merged into source code Documenting unresolved security defects in code review helps to effectively track and manage them. Clearly scheduling unresolved security defects that developers decide to fix in the future can ensure they are actually resolved in a timely manner, thus preventing them from causing damage to the system. Therefore, we encourage practitioners to document unresolved defects and schedule needed fixes. § THREATS TO VALIDITY Internal Validity: During the data processing phase, there are comments that were either generated by bots or related to non-review target files, which could influence the accuracy of the final results. We filtered these comments to mitigate bias. 
Furthermore, we employed a keyword-based search approach to obtain potential security-related comments, which can lead to missing security-related comments that do not contain the exact keywords. To reduce this bias, we collected all the keywords utilized in previous studies into the keyword list and refined the list according to the approach proposed by Bosu et al. <cit.>, ensuring a comprehensive set of keywords to cover all eligible review comments as much as possible. External Validity: We selected four projects from the OpenStack and Qt communities (two each) as the primary data source of our study. However, these projects may not fully represent the entire landscape of security defects across all software systems. This limitation poses a potential threat to the generalizability of our results. To address this concern, we compared and discussed with the previous studies that explored similar questions to supplement our own findings and reduce the risk of interpretation bias. Construct Validity: Since this study predefined the types of security defects and matched practical scenarios with security defect types through manual inspection, there is a potential cognitive bias arising from subjective judgments. To reduce this bias, we based the classification on the security defect types proposed in previous works <cit.> and clarified these security defects by CWEs, thus ensuring the concepts of each type are accurate, appropriate, and consistent throughout the entire research process. In addition, all the data labeling and extraction processes in this study were carried out manually, which introduces the possibility of subjective and potentially misleading conclusions. Therefore, during the data labelling phase, the first and second authors conducted a pilot data labelling independently and reached a consensus on labelling criteria through discussions. During the data extraction phase, while the first author performed the extraction work, the second and third authors reviewed the results to ensure the accuracy and comprehensiveness of the data extraction results. Reliability: We drafted a protocol outlining the detailed procedure before conducting our study. The protocol was reviewed and confirmed by all authors to ensure the clarity and repeatability of the method. We also made our full dataset available online for future replications <cit.>. § CONCLUSIONS In this work, we investigated the security defects identified in code review comments. We analyzed the data from four open source projects of two large communities (OpenStack and Qt) that are known for their well-established code review practices. More specifically, we manually inspected 20,995 review comments obtained by keyword-based search and identified 614 security-related comments. We extracted the following data items from each comment: 1) the type of security defect, 2) the action taken by reviewers and developers, 3) reasons for not resolving identified defects from these comments. Our main results are: (1) security defects are not widely discussed in code reviews, and when discussed, Race Condition and Crash security defects are the most frequently identified types; (2) the majority of the reviewers express explicit fixing suggestions of the detected security defects and provide specific solutions. 
Most of the developers are willing to agree with reviewers' opinions and adopt their proposed solutions; (3) Not worth fixing the defect now and Disagreement between the developer and the reviewer are the main reasons for not resolving security defects.
http://arxiv.org/abs/2307.00797v1
20230703072809
A full waveform model for arbitrarily axis-symmetric black hole mergers
[ "Song Li", "Wen-Biao Han" ]
gr-qc
[ "gr-qc", "astro-ph.HE" ]
leesong@shao.ac.cn Shanghai Astronomical Observatory, Shanghai, 200030, China School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing, 100049, China wbhan@shao.ac.cn School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China Shanghai Astronomical Observatory, Shanghai, 200030, China School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing, 100049, China International Centre for Theoretical Physics Asia-Pacific, Beijing/Hangzhou, China Shanghai Frontiers Science Center for Gravitational Wave Detection, 800 Dongchuan Road, Shanghai 200240, China In this work, we present a non-GR full waveform for general parametrization of axisymmetric black holes by extending our previous PSI model. Our model comprises two main components: an inspiral part obtained by using phenomenological method in frequency-domain and a ringdown part derived from quasinormal modes associated with photon motion. For quantitatively revealing the influence of the deviation from Kerr black holes on the waveforms, we specify our model to the bumpy black holes, which are typical examples of non-GR black holes. The results show that the deviation from the Kerr quadrupole moment could be measured in a high accuracy. The new waveform model can be directly used to test black holes for the LIGO-Virgo-KAGRA observations, the third generation detectors and space-borne interferometers. A full waveform model for arbitrarily axis-symmetric black hole mergers Wen-Biao Han August 1, 2023 ========================================================================= § INTRODUCTION In the past decade, there have been significant advancements in the study of gravitational waves (GWs). The detection of GW150914 by the LIGO-Virgo Collaboration in 2015 marked a major breakthrough as it was the first observation of gravitational wave events from a binary black hole system <cit.>. Since then, more compact binary systems have been discovered and reported by the LIGO-Virgo-KAGRA Collaboration. To date, over 90 compact binary systems have been identified <cit.>, including two neutron star and black hole systems that are distinct from other binary black hole systems <cit.>. These unique compact binaries provide excellent scenarios for testing general relativity (GR) <cit.>, gaining new insights into compact objects <cit.> and potentially discovering new theories beyond GR. Ground-based detectors such as LIGO, Virgo, and KAGRA have opened up new avenues for understanding compact binary physics and astrophysics. Future space-based detectors like Laser Interferometer Space Antenna (LISA), Taiji<cit.>, and Tianqin<cit.> will offer different scenarios to improve our comprehension of the Universe further. The no-hair theorem states that black holes in general relativity can be uniquely characterized by only a few fundamental properties, namely their mass, electric charge, and angular momentum. All other details, such as the matter that formed the black hole, are "hairless" and do not affect its external gravitational field. In 1915, a black hole model was introduced by Schwarzschild to characterize stationary and spherical spacetime. Subsequently, Kerr devised a broader black hole model, commonly referred to as the Kerr black hole, that possesses both stationary and axisymmetric properties. 
These two classical metrics, which are asymptotically flat, are defined by two fundamental parameters: the mass M and the spin a. The no-hair theorem also allows a black hole to carry information about its electric charge q, as in the Reissner-Nordstrom black hole or the Kerr-Newman black hole. However, for real astrophysical black holes, the charge can usually be ignored. Gravity theories yield a wide range of metrics. Early metrics were Ricci-flat solutions of Einstein's field equations but had naked singularities or other pathologies <cit.>. Over recent years, researchers have studied the Zipoy-Voorhees spacetime (also known as the δ-metric, q-metric, or γ-metric) <cit.>, while for a more general solution they have proposed and studied the δ-Kerr metric <cit.>, a nonlinear superposition of the Zipoy-Voorhees metric with the Kerr metric that represents a deformed Kerr solution. Going beyond the parametrized post-Newtonian (PPN) framework, which is only valid in the weak-field region far away from sources, Konoplya et al. introduced the KRZ metric <cit.> to describe the most general black hole spacetimes through a finite number of adjustable quantities. The abundance of metrics derived from general relativity or alternative theories of gravity presents fertile ground for investigating various astronomical phenomena in future research. The Event Horizon Telescope (EHT) Collaboration recently captured shadow images of the collapsed objects located at the centers of two galaxies: M87* in an elliptical galaxy and Sagittarius A* (SgrA*) in the Milky Way <cit.>. These observations have sparked a new area of research focused on testing gravity theories and black hole solutions within a gravitational field regime that has never been tested before <cit.>. These images are a result of light bending in the gravitational field of the source and have been extensively studied in GR <cit.>. Theoretical explanations for such images can be found in references such as <cit.>. The concept of the black hole shadow, which is associated with the photon sphere surrounding the central object, was explored in Ref. <cit.>. Over several years, research on the photon sphere has included various approaches such as different images of black holes <cit.>, ray-tracing codes <cit.>, and accretion disks <cit.>. These methods have provided effective ways to study black holes and general relativity. Several authors have used photon motion to investigate the black hole shadow in different metrics, including the Kerr metric <cit.>, the Kerr-Newman metric <cit.>, and other rotating regular black hole metrics <cit.>. Furthermore, the M87* shadow has been utilized by researchers to evaluate alternative theories of gravity such as superspinars <cit.> and conformal massive gravity <cit.>. McWilliams proposed a new waveform model for GR black holes, called the backwards one-body (BOB) method, that only considers photon motion without any phenomenological degrees of freedom. The coalescence of two black holes involves three stages: inspiral, merger, and ringdown. During the inspiral phase, the black holes gradually approach each other; this phase refers to the last few seconds before the merger. During the merger, they combine to form a single black hole. Finally, during the ringdown phase, the newly formed black hole undergoes oscillations, gradually releasing excess energy and angular momentum in the form of gravitational waves, until it reaches a stable state of equilibrium. Different phases are described by different methods.
In general, the inspiral phase is described by post-Newtonian theory, while numerical relativity simulations provide a good description of the merger phase. Quasinormal modes are used to study the ringdown phase, during which the spacetime is still slightly perturbed away from a stationary black hole. Black-hole perturbation theory can therefore be used to study the ringdown waveform, and well-known waveform models such as EOBNR and IMRPhenom rely on this idea <cit.>. Currently, the Teukolsky equation is the most widely used method for calculating perturbations. This linear partial differential equation describes how scalar, vector, and gravitational perturbations of Kerr black holes evolve. By solving this equation with an outgoing boundary condition at infinity and an ingoing boundary condition at the horizon, one obtains the spectrum of complex frequencies known as quasinormal modes (QNMs) that characterize the emitted gravitational waves. In addition to the fundamental QNMs, overtones refer to the higher-order vibrational modes that a black hole can exhibit during its ringdown phase. If the ringdown phase is considered early enough, before these overtones have decayed, it has been shown that including them enhances waveform template accuracy, increases the signal-to-noise ratio (SNR), and allows more precise testing of the no-hair theorem <cit.>. Arnab <cit.> studied the insertion of negative-frequency modes (counter-rotating), known as mirror modes, in addition to positive-frequency modes, known as regular modes, in the gravitational waveform, and showed that this provides a more accurate description of the gravitational wave signal. Yang et al. <cit.> developed a method for calculating QNMs through the photon sphere. In this study, we propose a full waveform model for non-GR black holes by combining inspiral and ringdown components. The inspiral component is constructed using a phenomenological method, while the ringdown component is obtained from the photon sphere. We refer to this model as PSI-FD (Ψ_FD), which is an extension of our previous Ψ (PSI: Photon sphere + Inspiral) model <cit.>. The Ψ model consists of an inspiral part calibrated from the post-Newtonian approximation and a ringdown part derived from the photon sphere. Ref. <cit.> demonstrates the high accuracy of the Ψ waveform model compared to numerical relativity (NR) waveforms. This article is structured as follows: Sec. <ref> provides an introduction to the KRZ metric and highlights key considerations when using this metric. We then combine the KRZ metric with the bumpy black hole metric to establish the relationship between the parameters of the KRZ metric and the quadrupole moment. In Sec. <ref>, we present a methodology for extracting the inspiral waveform from photon motion. Subsequently, Sec. <ref> outlines the derivation of quasinormal modes through the photon sphere and describes how to obtain the ringdown waveform using these QNMs. Finally, we combine the inspiral and ringdown waveforms to obtain the complete waveform. Our results and findings are presented in Sec. <ref>. We have fixed units such that G=c=1. § PARAMETERIZED BLACK HOLE METRIC AND THE APPLICATION The KRZ metric, constructed by Konoplya, Rezzolla, and Zhidenko <cit.>, provides a parametrization for general stationary and axisymmetric black holes.
They develop a model-independent framework that parameterizes the most generic black hole geometry by a finite number of tunable quantities. By adjusting these quantities, a number of famous black hole metrics, such as the Kerr metric, are found exactly and in the whole space. The metric with this form: d s^2 = -N^2(r̃, θ)-W^2(r̃, θ) sin ^2θ/K^2(r̃, θ) d t^2 -2 W(r̃, θ) r̃sin ^2θ d t d ϕ +K^2(r̃, θ) r̃^2sin ^2θ d ϕ^2 +Σ(r̃, θ)(B^2(r̃, θ)/N^2(r̃, θ) d r̃^2+r̃^2d θ^2), where r̃=r/M, ã=a / M and the other metric functions are defined as: Σ = 1+a^2cos ^2θ/r̃^2 , N^2 = ( 1-r_0/r̃) [ 1-ϵ_0r_0/r̃+( k_00-ϵ_0)r_0^2/r̃^2+δ_1r_0^3/r̃^3] +[ a_20r_0^3/r̃^3+a_21r_0^4/r̃^4+k_21r_0^3/r̃^3L]cos^2θ , B = 1+δ_4 r_0^2 / r̃^2+δ_5 r_0^2cos ^2θ / r̃^2 , W = [w_00 r_0^2 / r̃^2+δ_2 r_0^3 / r̃^3+δ_3 r_0^3 / r̃^3cos ^2θ] / Σ , K^2 = 1+a W / r +{k_00 r_0^2 / r̃^2+k_21 r_0^3 / r̃^3L cos ^2θ} / Σ , L = [1+k_22(1-r_0 / r̃)/1+k_23(1-r_0 / r̃)]^-1 . This paper adopts the parameters defined as follows: a_20 = 2 ã^2 / r_0^3, a_21 = -ã^4 / r_0^4+δ_6, ϵ_0 = (2-r_0) / r_0, k_00 = ã^2 / r_0^2 , k_21 = ã^4 / r_0^4-2 ã^2 / r_0^3-δ_6, w_00 = 2 ã / r_0^2, k_22 = -ã^2 / r_0^2+δ_7, k_23 = ã^2 / r_0^2+δ_8, The dimensionless parameter δ_i, where i=1,2,3,4,5,6,7,8, describes the deformation of various parameters in metric (<ref>). Specifically, g_tt is deformed by δ_1, while δ_2 and δ_3 correspond to spin deformations. Additionally, δ_4 and δ_5 relate to deformations of g_rr, and δ_6 is for event horizon deformation. When all values of δ_i are zero (δ_i = 0), the KRZ metric reduces to the Kerr metric specified in (<ref>). Furthermore, if ã=0, it reduces to the Schwarzschild metric. The parameter r_0 represents the equatorial radius of the event horizon. Some papers have utilized the KRZ metric to explore the general parametrization of axisymmetric black holes. However, these papers erroneously employed the definition r_0=M+√(M^2-a^2), which is only valid in the context of the Kerr metric. We provide two examples to illustrate this point. §.§.§ EDGB black-hole metric Einstein-dilaton Gauss-Bonnet (EDGB) gravity is a modified theory of gravity that extends General Relativity (GR) by including an additional scalar field (dilaton) and the Gauss-Bonnet term, a curvature term that arises from higher-dimensional theories of gravity. The authors of this paper<cit.> provided the parametrization for an EDGB black hole. r_0 = 2 M(1-χ^2/4-49 ζ/80+128171 χ^2ζ/588000)+𝒪(χ^4, ζ^2) δ_1 = -17 ζ/60(1-324899 χ^2/166600)+𝒪(χ^4, ζ^2), δ_2 = -63 χζ/160+𝒪(χ^3, ζ^2), δ_3 = 𝒪(χ^3, ζ^2), δ_4 = -361 ζ/240(1-51659 χ^2/176890)+𝒪(χ^4, ζ^2), δ_5 = 175629/196000χ^2ζ+𝒪(χ^4, ζ^2). χ≡a/M=J/M^2 ,ζ≡16πα^2/β M^4 where α and β are two coupling constants in Einstein-dilaton-Gauss-bonnet theory, α represents the coupling of higher curvature, while β accounts for the coupling with the scalar field. §.§.§ Dilaton black-hole metric Additionally, the authors provided the parametrization for a dilaton black hole: r_0 =√((μ+b+√(μ^2-a^2) )^2-b^2) δ_1 = 2(μ+b)[2 b^2+r_0^2+(2 r_0-3 b) √(r_0^2+b^2)]/r_0^2√(r_0^2+b^2) -3 r_0^2+a^2/r_0^2, δ_2 = 2 a(μ+b)(b+r_0-√(r_0^2+b^2))/r_0^3, δ_3 = 0, δ_4 = r_0/√(r_0^2+b^2)-1, δ_5 = 0. where μ and b are the dilaton parameters. Equations (<ref>) and (<ref>) demonstrate that when considering specific metrics, such as the EDGB or Dilaton metrics, the equatorial radius of the event horizon (r_0) is not equal to r_0=M+√(M^2-a^2). Therefore, when utilizing the KRZ metric to obtain the value of r_0, it is crucial to focus on the appropriate metric. 
This concept is easily comprehensible because the KRZ metric is a general metric. However, if the value of r_0 is fixed using the definition r_0=M+√(M^2-a^2), then the universality of the metric is lost. §.§ Bumpy Black hole General relativity predicts the existence of compact objects known as black holes, whose spacetimes are solely determined by their mass, spin, and charge in vacuum, in accordance with the "no-hair" theorem. Collins and Hughes<cit.> proposed the existence of an exception, called bumpy black holes. These objects possess a multipolar structure closely resembling that of black holes but with some deviation. When the deviation is set to zero, bumpy black holes reduce to standard black holes such as the Schwarzschild black hole or the Kerr black hole. The bumpy Kerr black hole metric can be expressed in the Boyer-Lindquist coordinates, as shown below: d s^2= -e^2 ψ_1(1-2 M r/Σ) d t^2+ e^2 ψ_1-γ_1(1-e^γ_1) 4 a^2 M r sin ^2 θ/ΔΣ d t d r -e^2 ψ_1-γ_14 a M r sin ^2 θ/Σ d t d ϕ +e^2 γ_1-2 ψ_1(1-2 M r/Σ)^-1 [1+e^-2 γ_1(1-2 e^γ_1) a^2 sin ^2 θ/Δ- e^4 ψ_1-4 γ_1(1-e^γ_1)^2 4 a^4 M^2 r^2 sin ^4 θ/Δ^2 Σ^2] d r^2 -2(1-e^γ_1) a sin ^2 θ[e^-2 ψ_1(1-2 M r/Σ)^-1- e^2 ψ_1-2 γ_14 a^2 M^2 r^2 sin ^2 θ/ΔΣ(Σ-2 M r)] d r d ϕ +e^2 γ_1-2 ψ_1Σ d θ^2+Δ [e^-2 ψ_1(1-2 M r/Σ)^-1- e^2 ψ_1-2 γ_14 a^2 M^2 r^2 sin ^2 θ/ΔΣ(Σ-2 M r)] sin ^2 θ d ϕ^2 The bumpy Kerr black hole metric can be expressed in the form g_αβ=g^Kerr_αβ+b_αβ, where g^Kerr_αβ denotes the Kerr metric. In the above equation, Δ≡ r^2-2Mr+a^2, and γ_1 and ψ_1 denote the perturbation potentials arising from the mass moment and spin moment perturbations, respectively. The definitions of γ_1 and ϕ_1 are detailed in <cit.>. The bumpy Kerr black hole metric reduces to the Kerr black hole metric, i.e., γ_1=ϕ_1=0, in the absence of perturbations. b_t t = -2(1-2 M r/Σ) ψ_1, b_t r = -γ_1 2 a^2 M r sin ^2 θ/ΔΣ, b_t ϕ = (γ_1-2 ψ_1) 2 a M r sin ^2 θ/Σ, b_r r = 2(γ_1-ψ_1) Σ/Δ b_r ϕ = γ_1[(1-2 M r/Σ)^-1-4 a^2 M^2 r^2 sin ^2 θ/ΔΣ(Σ-2 M r)] a sin ^2 θ, b_θθ = 2(γ_1-ψ_1) Σ b_ϕϕ = [(γ_1-ψ_1) 8 a^2 M^2 r^2 sin ^2 θ/ΔΣ(Σ-2 M r)-2 ψ_1(1-2 M r/Σ)^-1] Δsin ^2 θ . Vigeland and Hughes<cit.> proposed the quadrupole bumps(i.e., l=2) in the Boyer-Lindquist coordinates: ψ_1^l=2(r, θ) =B_2 M^3/4√(5/π)1/d(r, θ, a)^3[3 L(r, θ, a)^2 cos ^2 θ/d(r, θ, a)^2-1], γ_1^l=2(r, θ) =B_2 √(5/π) [L(r, θ, a)/2 [c_20(r, a)+c_22(r, a) cos ^2 θ+c_24(r, a) cos ^4 θ]/d(r, θ, a)^5-1]. where d(r, θ, a) =√(r^2-2 M r+(M^2+a^2) cos ^2 θ) L(r, θ, a) =√((r-M)^2+a^2 cos ^2 θ) and c_20(r, a) =2(r-M)^4-5 M^2(r-M)^2+3 M^4, c_22(r, a) =5 M^2(r-M)^2-3 M^4+a^2[4(r-M)^2-5 M^2], c_24(r, a) =a^2(2 a^2+5 M^2) . By selecting the appropriate parameters, the KRZ metric can be reduced to the bumpy black hole metric. Since the KRZ metric does not provide an exact value for the black hole's quadrupole moment, we aim to utilize the quadrupole moment in the bumpy black hole metric to correspond to δ_i. Upon performing these calculations, we determined that selecting specific values for δ_i results in the reduction of the KRZ metric to the bumpy black hole metric: δ_1 = {[(2ψ_1+1)(1-2M/r)](1-δ_6r_0^3/r^3cos^2θ)-(1-r_0/r ) } /[r_0^3/r^3(1-r_0/r ) ], δ_6 = 2ψ_1r^5/r_0^3tan^2θ, δ_2 = δ_3=δ_4=δ_5=0. The quadrupole moment is given by the following equation: Q=-Ma^2-B_2M^3√(5/4π)=Q_K+ΔQ where B_2 is the parameter that appears in Eq. (<ref>). Fig. <ref> displays the relationship between δ_1, δ_6, and ΔQ with different spins a ranging from 0.1 to 0.7. The left panel displays the values of δ_1 for different spins and Δ Q values. 
A larger Δ Q corresponds to a lower spin for the same value of δ_1, suggesting that δ_1 has a greater impact on the quadrupole moment at lower values of spin. The right panel illustrates that the influence of spin on δ_6 is negligible. Additionally, the figure reveals that Δ Q increases with increasing values of δ_1 and δ_6, which represent the deviation of the Kerr metric. Furthermore, the increase in Δ Q is more pronounced with increasing δ_6. § THE INSPIRAL WAVEFORM IN KRZ BLACK HOLES The inspiral phase waveform can be calculated through the geodesic motion, and we will focus on the deformation parameter δ_1 in this section. For simplicity, we assume that all other deformation parameters are zero when considering one single parameter. In the following section, we will provide a brief overview of the derivation for the deformation phase, with details available in the paper<cit.>. In Sec. <ref>, we have demonstrated that this value of r_0=M+√(M^2-a^2) is only applicable in the Kerr metric. Therefore, r_0= needs to be determined for varied non-GR black holes. As an example, we concentrate on a specific metric, namely the bumpy black hole metric, to determine the corresponding horizon radius r_0 and the deformation phase. To gain a deeper understanding, we investigate the quadrupole moment Q rather than the deformation parameters δ_i. The normalization of the four-velocity requires that u^μu_μ=-1. To simplify the equation, we choose θ̇ equal to zero. Solving the equation(i.e, u^μu_μ=-1) yields the following result: V_eff=g_rrṙ^2=-1-g_ttṫ^2-g_ϕϕϕ̇^2, To simplify the above equation, we can use the specific energy(the energy per unit mass) and specific angular momentum(the angular momentum per unit mass) of a particle: E=-(g_ttu^t +g_tϕu^ϕ), L=g_ϕ tu^t +g_ϕϕu^ϕ. and they are constants because the KRZ spacetime is stationary and axisymmetric. Through Eq. (<ref>) and (<ref>), we can get the expressions of ṫ and ϕ̇: ṫ=-Lg_tϕ+Eg_ϕϕ/g_ttg_ϕϕ-g_tϕ^2 , ϕ̇=Lg_tt+Eg_tϕ/g_ttg_ϕϕ-g_tϕ^2 . then with Eq. (<ref>) and (<ref>) we can rewrite V_eff in terms of E, L as: V_eff= -1+E^2+2 M/r+L^2(2 M-r)/r^3+8 δ_1 M^3(2 M-r)/r^4 +8 δ_1 L^2 M^3(2 M-r)/r^6+𝒪[δ_1^2] . The determination of the energy and angular momentum of circular orbits relies on the condition that V_eff=dV_eff/dr=0. By satisfying this condition, it is possible to express the energy and angular momentum as the sum of the general relativity (GR) term and a small perturbation that relates to the deformation parameter δ_1: E=E^GR+δ E, L=L^GR+δ L. where E^GR = √(4 M^2-4 M r+r^2/(r-3 M) r), L^GR = √(M r^2/r-3 M), δ E =-2 M^3(r-2 M)/r^5 / 2(r-3 M)^3 / 2δ_1+𝒪[δ_1^2], δ L =-6 M^5 / 2(r-2 M)^2/r^2(r-3 M)^3 / 2δ_1+𝒪[δ_1^2]. By considering the far-field limit in which L=r^2ϕ̇→ r^2Ω (where Ω=dϕ/dt refers to the angular velocity of the body as observed by a distant observer), one can obtain the following result: Ω^2=M/r^3[1+3 M/r+9 M^2/r^2-12 M^2/r^2δ_1+𝒪(δ_1^2, M^3/r^3)] . For circular orbits, it is possible to express the system's total energy (E_T) as the effective energy of a single body in the rest frame of the other. E_T = m+E_b = m[1+2η(E_eff-1)], the parameters in the equation include the binding energy, E_b, the symmetric mass ratio, η=μ/m, and the reduced mass, μ, which is defined as μ=m_1m_2/m (m=m_1+m_2, where m_1 and m_2 are the masses of the two bodies). 
E_eff=g_tt(1+L^2/r^2)^1/2, We separate the rest-mass energy m from the binding energy E_b to express the latter as a sum of its general relativity term and a correction: E_b=E_b^GR-η m^2/2 r[4 δ_1(m/r)^2+𝒪(δ_1^2, m^3/r^3)] To simplify calculations, it is possible to express the binding energy, E_b, as a function of the orbital frequency, ν=Ω/2π: E_b(ν)/μ=E_b^GR(ν)/μ-4 δ_1(2 π m ν)^2+𝒪[δ_1^2,(2 π m ν)^8 / 3] . The orbital phase is given by: ϕ(ν)=∫^νΩ d t=∫^ν1/Ė(d E/d Ω) Ω d Ω, where Ė is the rate at which the binding energy changes due to the emission of gravitational waves; it comprises a conservative sector and a dissipative sector. Our focus is primarily on the conservative sector of the gravitational wave emission, and the dissipative sector is assumed to be unaffected. Following <cit.>, we only need to employ the quadrupole formula at 0PN order to determine the modification of the binding energy: Ė_GR^0PN=-32/5η^2m^2r^4Ω^6 Then we can get the expression for the orbital phase evolution: ϕ(ν )=ϕ^0PN_GR(ν )-25/eη(2π mν)^-1/3δ_1+𝒪[δ_1^2] where ϕ^0PN_GR(ν )=-1/32η(2π mν)^-5/3. In the stationary phase approximation, the frequency-domain phase is given by Ψ_GW(f)=2ϕ(t_0)-2π ft_0, where t_0 is the stationary time, ν(t_0)=f/2, and f is the Fourier frequency. Then we can get: Ψ_GW(f)=Ψ^GR,0PN_GW(f)-75/8u^-1/3η^-4/5δ_1+𝒪[δ_1^2], where Ψ^GR,0PN_GW(f)=-3u^-5/3/128, and u=ηπ mf. So the phase deformation due to δ_1 can be expressed as: ϕ^δ_1_KRZ = -75/8u^-1/3η^-4/5δ_1 The same method can be applied to the remaining deformation parameters. Deformation parameter δ_2: ϕ^δ_2_KRZ = -85/3η[1+log(u)]δ_2. Deformation parameter δ_3: ϕ^δ_3_KRZ = -C_1δ_3cos^2(θ)f^5/3, C_1 = 48·2^-1/3m^5/3/(ηπ). Deformation parameter δ_6: ϕ^δ_6_KRZ = -C_2δ_6cos^2(θ)f^-1, C_2 = 40m/η. The deformation parameters δ_4 and δ_5 have no effect on the geodesic motion and therefore cannot be constrained within this framework. Having obtained the phase deformation associated with the deformation parameters δ_i, we can apply it to a more accurate model such as the PhenomD model <cit.>. The PhenomD model divides the GW signal into three stages: inspiral, intermediate, and ringdown. In this work, we employ the PhenomD model to represent the inspiral and intermediate waveforms. To model the ringdown waveform, we introduce the Ψ model, which we explain in detail later in the paper. §.§ Inspiral The phase ansatz in the inspiral stage is given by: ϕ_Ins= ϕ_TF 2(M f ; Ξ) +1/η(σ_0+σ_1 f+3/4σ_2 f^4 / 3+3/5σ_3 f^5 / 3+1/2σ_4 f^2) +ϕ_KRZ where η=m_1 m_2/M^2, M = m_1+m_2, and ϕ_TF 2 is the full TaylorF2 phase: ϕ_TF 2= 2 π f t_c-φ_c-π / 4 +3/128 η(π f M)^-5 / 3∑_i=0^7φ_i(Ξ)(π f M)^i / 3 The constants σ_i (i = 0, 1, 2, 3, 4) are related to the mass and spin of the system, while the phase deformation arising from the general parameterized black hole is denoted by ϕ_KRZ. Varying the values of δ_1, δ_2, δ_3, and δ_6 will result in different phases. φ_i(Ξ) are the PN expansion coefficients that are related to the intrinsic binary parameters. The detailed information on σ_i and φ_i(Ξ) can be found in Appendix B of <cit.>. §.§ Intermediate The intermediate stage follows the inspiral stage, and its phase is given by <cit.>: ϕ_Int=1/η(β_0+β_1 f+β_2 log (f)-β_3/3 f^-3) where β_i (i=0, 1, 2, 3) are constants related to the mass and spin of the system. 
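The deformation term ϕ_KRZ entering the inspiral ansatz above can be evaluated directly from the expressions for ϕ^δ_i_KRZ given in this section. The sketch below (Python) does this on a dimensionless frequency grid; the masses, inclination θ, and δ_i values are placeholders for illustration, log(u) is read as the natural logarithm, and the coefficient 85/3η is read as (85/3)/η, both of which are our assumptions.

import numpy as np

def phi_krz(f, m1, m2, theta, d1=0.0, d2=0.0, d3=0.0, d6=0.0):
    """KRZ dephasing phi_KRZ = sum_i phi_KRZ^{delta_i} as quoted in the text.

    f is the GW frequency and the masses are in the same (geometric) units,
    so that m*f is dimensionless.
    """
    m = m1 + m2
    eta = m1 * m2 / m**2
    u = eta * np.pi * m * f
    phi1 = -75.0 / 8.0 * u**(-1.0 / 3.0) * eta**(-4.0 / 5.0) * d1
    phi2 = -85.0 / (3.0 * eta) * (1.0 + np.log(u)) * d2
    c1 = 48.0 * 2.0**(-1.0 / 3.0) * m**(5.0 / 3.0) / (eta * np.pi)
    phi3 = -c1 * d3 * np.cos(theta)**2 * f**(5.0 / 3.0)
    c2 = 40.0 * m / eta
    phi6 = -c2 * d6 * np.cos(theta)**2 / f
    return phi1 + phi2 + phi3 + phi6

if __name__ == "__main__":
    # Placeholder equal-mass system with total mass m = 1 (geometric units).
    mf = np.linspace(0.002, 0.02, 5)   # dimensionless m*f in the inspiral regime
    print(phi_krz(mf, 0.5, 0.5, theta=0.3, d1=1e-2, d6=1e-2))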
The detailed information on β_i can likewise be found in Appendix B of <cit.>. Because the duration of the intermediate stage is short, we directly use this phase to construct our waveform model. § THE RINGDOWN WAVEFORM IN KRZ BLACK HOLES This section describes how the ringdown signal is obtained within the PSI (Ψ: Photon Sphere + Inspiral) model <cit.>. This model operates in the proximity of the photon sphere and is based on the BOB (Backwards One-Body) waveform model first presented in Ref. <cit.>. The BOB model is an analytic, phenomenological model for the late inspiral, merger, and ringdown signal of a BBH. It takes into account the motion of photons without introducing any additional phenomenological degrees of freedom. In this paper, we derive the complete waveforms by attaching these photon-sphere ringdown waveforms to the inspiral waveforms described above. We then scrutinize the complete waveforms for diverse spins and mass ratios and apply them to the parameterized black holes (i.e., the KRZ metric). To utilize the Ψ waveforms, it is essential to derive the QNM parameters ω_R and ω_I. The real part of the QNM frequency, ω_R, can be decomposed into two directional components, namely θ and ϕ: ω_R=LΩ_θ(m/L)+mΩ_prec(m/L) Here, Ω_θ is the frequency of the polar motion, i.e., the rate at which the photon oscillates above and below the equatorial plane; the oscillation period is T_θ=2π/Ω_θ. In addition to the polar motion, the particle also undergoes a periodic motion in the azimuthal (ϕ) direction over the oscillation period T_θ, with magnitude Δϕ. The deviation of Δϕ from ± 2π is commonly known as the "precession angle": Δϕ_prec=Δϕ - {[ -4π (corotating orbit) ,; +4π (counter-rotating orbit) ]. Ω_prec=Δϕ_prec/T_θ L=l+1/2 The values of l and m enter through the conditions V^r(r, ω_R)=∂ V^r/∂ r|_(r, ω_R)=0, where V^r is the potential in the radial Teukolsky equation. The imaginary part of the QNM frequency, ω_I, is directly linked to the Lyapunov exponent, which determines the rate at which a circular null geodesic expands its cross-sectional area under infinitesimal radial perturbations. The detailed calculation of ω_I can be found in Ref. <cit.>. We can therefore derive ω_R and ω_I from the photon motion in 3D. Figure <ref> illustrates the correlation between the real and imaginary parts of the QNM frequency, ω_R and ω_I, and the deformation parameters δ_1 and δ_6. Varying δ_6 has the more significant effect on ω_R. In the bottom-left panel, the green and orange lines would nearly overlap beyond a certain spin value; however, since δ_6 cannot exceed 0.5, this regime is not reached, because the quadrupole moment deviation Δ Q would become excessively large. In the right panel, it is evident that the parameter δ_1 significantly impacts the value of ω_I. The amplitude of the gravitational wave can be expressed as follows, based on <cit.> and <cit.>: | h_lm|^2∼d/dt( Ω_lm^2) , where Ω_lm is the orbital frequency. From this relation we obtain the GW waveform: h_22=X sech[γ(t-t_p)] e^-i Φ̃_22(t) , where X is a constant related to the amplitude of the waveform, γ is the Lyapunov exponent characterizing the rate of divergence of nearby null geodesics, t_p is the time of maximum amplitude of the waveform, and Φ̃_22(t) is the phase. 
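The assembly of ω_R from the photon-sphere frequencies and the sech-shaped amplitude envelope are simple enough to write down directly. The sketch below (Python) assumes Ω_θ and Ω_prec have already been extracted from the photon-sphere geodesics at the appropriate m/L (the numbers used here are placeholders, not computed values), and evaluates ω_R=LΩ_θ+mΩ_prec with L=l+1/2 together with |h_22|=X sech[γ(t-t_p)]:

import numpy as np

def omega_r(l, m, omega_theta, omega_prec):
    # Real part of the QNM frequency from the photon-sphere decomposition,
    # with omega_theta and omega_prec evaluated at the corresponding m/L.
    L = l + 0.5
    return L * omega_theta + m * omega_prec

def h22_envelope(t, x0, gamma, t_p):
    # BOB-style amplitude envelope |h_22| = X sech[gamma (t - t_p)].
    return x0 / np.cosh(gamma * (t - t_p))

if __name__ == "__main__":
    # Placeholder photon-sphere frequencies (not derived here).
    omega_theta, omega_prec = 0.19, 0.02
    print("omega_R(l=m=2) =", omega_r(2, 2, omega_theta, omega_prec))
    t = np.linspace(-50.0, 50.0, 5)
    print(h22_envelope(t, x0=1.0, gamma=0.08, t_p=0.0))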
We can also derive the phase equation: Φ̃_22= ∫_0^tΩ d t^'=arctan _++arctanh_+ -arctan _--arctanh_--ϕ_0, where: {[ arctan _±≡κ_±τ[arctan(Ω/κ_±)-arctan(Ω_0/κ_±)] ,; arctanh_±≡κ_±τ[arctanh(Ω/κ_±)-arctanh(Ω_0/κ_±)] ]. κ_±≡{Ω_0^4± k[1 ∓tanh((t_0-t_p)/τ)]}^1 / 4 , Ω={Ω_0^4+k[tanh((t-t_p)/τ)-tanh((t_0-t_p)/τ)]}^1 / 4 , k=(Ω_QNM^4-Ω_0^4)/(1-tanh[(t_0-t_p) / τ]) , where τ=γ^-1, Ω_QNM=ω_QNM/m (here ω_QNM is just ω_R), and ϕ_0, Ω_0, t_0 are constants that can be freely chosen. We need to focus on Eqs. (<ref>) and (<ref>) above: the presence of terms with an even power in these equations imposes an extra constraint on Ω_0. Our objective is to determine the minimum value of Ω_0, which can be achieved by setting the expression inside Eq. (<ref>) to zero. This yields: Ω_0^4=k[-tanh((t-t_p)/τ)+tanh((t_0-t_p)/τ)] . Substituting Eq. (<ref>) into Eq. (<ref>), we can obtain the solution of Eq. (<ref>) (we only consider the positive solution): Ω_0^4=Ω_QNM^4(tanh [(t-t_p)/τ]-tanh [(t_0-t_p)/τ])/(-1+tanh [(t_0-t_p)/τ])(1-tanh [(t-t_p)/τ]/1-tanh [(t_0-t_p)/τ]+tanh [(t_0-t_p)/τ]/1-tanh [(t_0-t_p)/τ]) . With Eq. (<ref>), we get the minimum value of Ω_0. For convenience, we choose t equal to t_p, so Eq. (<ref>) simplifies to: Ω_0^4=Ω_QNM^4tanh [(t_0-t_p)/τ] . Thus, the minimum value of Ω_0 is Ω_QNM (i.e., the allowed region is Ω_0 ≥ Ω_QNM). To finalize the analysis, it is essential to establish a connection between the waveforms originating from the photon sphere and the inspiral waveforms. This connection can be made at any point between the innermost stable circular orbit (ISCO) and the light ring (LR); in this study, we select the peak of the waveform as our matching point. This choice allows us to derive the optimal values of ϕ_0, Ω_0, and t_0. Then, with Eqs. (<ref>), (<ref>), and (<ref>), we can get the full waveform, which we refer to as PSI-FD (Ψ_FD). In Fig. <ref>, we show the full waveform for spin χ_1=χ_2=0.85, mass ratio 1:1, and different quadrupole moment deviations Δ Q. The waveform from Kerr black holes is plotted in each panel for comparison. Both positive and negative values of the deviation Δ Q were selected to ensure a more thorough comparison. The waveforms suggest that the quadrupole moment deviation Δ Q has a minor impact on the ringdown part but significantly influences the inspiral part. The analysis of Figs. <ref> and <ref> explains this: the ringdown part is mainly influenced by the values of ω_R and ω_I, and these figures show that ω_R and ω_I undergo only minor changes when Δ Q varies at the level of 0.001. The full waveforms for the same relative quadrupole moment Δ Q/Q_Kerr at different spins are shown in Fig. <ref>. Our analysis indicates that at small spin, the quadrupole moment deviation Δ Q has a considerably greater impact on the overall waveform. Furthermore, to allow a more quantitative comparison, we employ the overlap, defined as: F= ⟨ h_1| h_2⟩/√(⟨ h_1| h_1⟩⟨ h_2| h_2⟩), ⟨ h_1| h_2⟩=4 Re∫_f_min^f_maxh̃_1(f) h̃_2^*(f)/S_n(f) d f, where h_1 is the waveform derived from the PSI model, h_2 is the comparison waveform (e.g., SEOBNRv4 or SXS), and S_n(f) is the power spectral density of the detector noise; in this work we use the aLIGO sensitivity curve <cit.>. The match is defined as: FF=max[⟨ h_1| h_2⟩/√(⟨ h_1| h_1⟩⟨ h_2| h_2⟩)], and the mismatch of two waveforms is defined as 1-FF. 
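The overlap and match defined above can be evaluated with a simple discretized version of the noise-weighted inner product. The sketch below (Python) assumes the two frequency-domain waveforms h̃_1(f), h̃_2(f) and the one-sided PSD S_n(f) are already available on a common frequency grid, and maximizes only over a grid-quantized relative time shift and a constant phase, which is a simplification of the full match; the toy signals and flat PSD are placeholders.

import numpy as np

def inner(h1, h2, psd, df):
    # <h1|h2> = 4 Re integral h1(f) h2*(f) / Sn(f) df  (discretized)
    return 4.0 * df * np.real(np.sum(h1 * np.conj(h2) / psd))

def overlap(h1, h2, psd, df):
    return inner(h1, h2, psd, df) / np.sqrt(inner(h1, h1, psd, df) *
                                            inner(h2, h2, psd, df))

def match(h1, h2, psd, df):
    # Maximize over a relative time shift via the inverse FFT of the weighted
    # cross-spectrum, and over a constant phase via the absolute value.
    norm = np.sqrt(inner(h1, h1, psd, df) * inner(h2, h2, psd, df))
    corr = np.fft.ifft(h1 * np.conj(h2) / psd)
    return 4.0 * df * len(h1) * np.max(np.abs(corr)) / norm

if __name__ == "__main__":
    # Toy example: two slightly de-phased chirp-like signals on a flat PSD.
    f = np.linspace(20.0, 512.0, 4096)
    df = f[1] - f[0]
    psd = np.ones_like(f)
    h1 = np.exp(2j * np.pi * (0.1 * f**1.2))
    h2 = np.exp(2j * np.pi * (0.1 * f**1.2 + 0.01 * f))
    print("overlap  =", overlap(h1, h2, psd, df))
    print("match    =", match(h1, h2, psd, df))
    print("mismatch =", 1.0 - match(h1, h2, psd, df))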
Fig. <ref> displays the match between the Kerr waveforms and the bumpy black hole waveforms for various relative quadrupole moments Δ Q/Q_Kerr. The curves in each panel correspond to different spin values, specifically a=0.10, 0.30, 0.50, 0.70, and 0.90. The dashed black line marks a match value of 0.995, which corresponds to a signal-to-noise ratio of approximately 40; this value follows from the criterion F=1-D/(2 SNR_min^2), where F is the match between the signals and D is the number of intrinsic parameters of the model (we choose D=14 in this work). The left panel shows the case of mass ratio 1:1 and the right panel the case of mass ratio 1:2. The match is smaller at lower spin configurations, which can be explained using the definition of the quadrupole moment. According to Sec. <ref>, the quadrupole moment Q can be written as: Q = -a_20r_0^3-1/3Ma^2, where r_0 is the horizon radius. From this we can derive: (Q_Kerr -Q)/Q_Kerr = Δ Q/Q_Kerr =2/3-a_20r_0^3/(M a^2), where the parameter a_20 is directly proportional to the deformation parameters δ_i. It is evident that as the value of δ_i increases, the deviation from the Kerr case becomes more pronounced, leading to a decrease in the match values. Based on this, we can rewrite the above equations as follows: Match∝ (2/3-Δ Q/Q_Kerr)Ma^2/r_0^3 From this equation, we see that when the value of Δ Q/Q_Kerr is fixed, a smaller value of a results in a smaller match (the difference in a^2/r_0^3 is very small for the different spins). We also study the overlap between the Kerr waveform and Ψ_FD for next-generation detectors such as the Laser Interferometer Space Antenna (LISA) and the Einstein Telescope (ET), as shown in Fig. <ref>. The bottom panel shows the overlap for different relative quadrupole moments Δ Q/Q_Kerr with the LISA <cit.> and ET-D <cit.> sensitivities for the same mass ratio 1:2. They show the same trend as for LIGO: the overlap is smaller at lower spin configurations. Compared with the top panel, the overlap is smaller for LISA and ET, which means the next-generation detectors will be better able to distinguish such deviations in the gravitational waveform. § CONCLUSION In this article, we investigate a general parameterization of axisymmetric black holes using the KRZ metric. Previous studies have erroneously defined the equatorial radius of the event horizon, r_0, using the Kerr expression, an issue we demonstrate with the EDGB and dilaton metrics. We compare the KRZ metric with the bumpy black hole metric, focusing on the δ_1 and δ_6 parameters for simplicity; the bumpy black hole metric has a multipolar structure that closely resembles, but is not exactly equal to, that of a Kerr black hole. Notably, a clear correlation between δ_1, δ_6, and the quadrupole moment Q is identified. In Section <ref>, we analyze the inspiral waveform phase in the KRZ metric, exploring the impact of varying the parameters. The ringdown waveform is derived from the properties of the photon sphere <cit.> around black holes <cit.>; we give a brief overview of how to obtain quasinormal modes using the photon sphere and how to derive the ringdown waveform from these modes. Next, we establish a connection between the inspiral and ringdown waveforms, taking the peak as the matching point, and refer to this full waveform model as Ψ_FD. We plot the full waveform for different spins a and different quadrupole moment deviations Δ Q and find that Δ Q has a significant influence on the inspiral waveform. 
We also calculate the overlap between the waveforms from KRZ and Kerr binary black holes. If the SNR is enough (roughly around 40), LIGO-Virgo-Kagra (LVK) may constrain the deviation of quadrupole moment from the Kerr black hole in a relative error ∼ 10^-3. However in the previous detections by LVK, there is no event with such high SNR. We may participate in the O4 run, LVK will find candidates with higher SNRs. For the next-generation detectors such as ET and space-borne detectors Taiji and LISA, GW signals with larger SNRs could be easily detected, therefore the next-generation will have a better measurement of the deviation from Kerr black holes. We also found that the effect of deviation Δ Q on the waveforms has a relation with the spin parameter a. Our findings revealed that as the spin increases, the influence of Δ Q/Q_Kerr diminishes. This trend can be explained by considering the definition of the quadrupole moment Q in the KRZ metric. We believe that our waveform model can be useful for testing the No-hair theorem with GWs. In an upcoming work, our waveform template will be employed to conduct data analysis to test non-GR black holes with LVK events. This will provide insights into their properties and offer a potential avenue for future tests of General Relativity. § ACKNOWLEDGEMENTS This work is supported by The National Key R&D Program of China (Grant No. 2021YFC2203002), NSFC (National Natural Science Foundation of China) No. 11773059 and No. 12173071. W. H. is supported by CAS Project for Young Scientists in Basic Research YSBR-006. apsrev4-1 89 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Abbott et al.(2016a)Abbott et al.]GW150914 author author B. P. Abbott et al. (collaboration LIGO Scientific Collaboration and Virgo Collaboration), 10.1103/PhysRevLett.116.061102 journal journal Phys. Rev. Lett. volume 116, pages 061102 (year 2016a)NoStop [Abbott et al.(2016b)Abbott et al.]GW_events_1 author author B. P. Abbott et al. (collaboration LIGO Scientific Collaboration and Virgo Collaboration), 10.1103/PhysRevLett.116.241103 journal journal Phys. Rev. Lett. volume 116, pages 241103 (year 2016b)NoStop [Abbott et al.(2019a)Abbott et al.]GW_events_2 author author B. P. Abbott et al. (collaboration LIGO Scientific Collaboration and Virgo Collaboration), 10.1103/PhysRevX.9.031040 journal journal Phys. Rev. X volume 9, pages 031040 (year 2019a)NoStop [Venumadhav et al.(2020)Venumadhav, Zackay, Roulet, Dai, and Zaldarriaga]GW_events_3 author author T. Venumadhav, author B. Zackay, author J. Roulet, author L. Dai, and author M. Zaldarriaga, 10.1103/PhysRevD.101.083030 journal journal Phys. Rev. D volume 101, pages 083030 (year 2020)NoStop [Abbott et al.(2021a)Abbott et al.]GW_events_4 author author R. Abbott et al. (collaboration LIGO Scientific Collaboration and Virgo Collaboration), 10.1103/PhysRevX.11.021053 journal journal Phys. Rev. X volume 11, pages 021053 (year 2021a)NoStop [Abbott et al.(2021b)Abbott et al.]BN author author R. Abbott et al., 10.3847/2041-8213/ac082e journal journal Astrophys. J. Lett volume 915, pages L5 (year 2021b)NoStop [Okounkova(2020)]Test_GR_1 author author M. Okounkova, 10.1103/PhysRevD.102.084046 journal journal Phys. Rev. D volume 102, pages 084046 (year 2020)NoStop [Isi et al.(2019)Isi, Giesler, Farr, Scheel, and Teukolsky]Test_GR_2 author author M. 
Isi, author M. Giesler, author W. M. Farr, author M. A. Scheel, and author S. A. Teukolsky, 10.1103/PhysRevLett.123.111102 journal journal Phys. Rev. Lett. volume 123, pages 111102 (year 2019)NoStop [Abbott et al.(2019b)Abbott et al.]Test_GR_3 author author B. P. Abbott et al. (collaboration LIGO Scientific Collaboration and Virgo Collaboration), 10.1103/PhysRevLett.123.011102 journal journal Phys. Rev. Lett. volume 123, pages 011102 (year 2019b)NoStop [Nair et al.(2019)Nair, Perkins, Silva, and Yunes]Test_GR_4 author author R. Nair, author S. Perkins, author H. O. Silva, and author N. Yunes, 10.1103/PhysRevLett.123.191101 journal journal Phys. Rev. Lett. volume 123, pages 191101 (year 2019)NoStop [Abbott et al.(2019c)Abbott et al.]Test_GR_5 author author B. P. Abbott et al. (collaboration The LIGO Scientific Collaboration and the Virgo Collaboration), 10.1103/PhysRevD.100.104036 journal journal Phys. Rev. D volume 100, pages 104036 (year 2019c)NoStop [Abbott et al.(2021c)Abbott et al.]Test_GR_6 author author R. Abbott et al. (collaboration LIGO Scientific Collaboration and Virgo Collaboration), 10.1103/PhysRevD.103.122002 journal journal Phys. Rev. D volume 103, pages 122002 (year 2021c)NoStop [Abbott et al.(2019d)Abbott et al.]CO_1 author author B. P. Abbott et al., 10.3847/2041-8213/ab3800 journal journal The Astrophysical Journal Letters volume 882, pages L24 (year 2019d)NoStop [Huang et al.(2017)Huang, Gong, Xu, Amaro-Seoane, Bian, Chen, Chen, Fang, Feng, Liu, Li, Li, Luo, Shao, Spurzem, Tang, Wang, Wang, Zang, and Lau]taiji author author S. Huang, author X. Gong, author P. Xu, author P. Amaro-Seoane, author X. Bian, author Y. Chen, author X. Chen, author Z. Fang, author X. Feng, author F. Liu, author S. Li, author X. Li, author Z. Luo, author M. Shao, author R. Spurzem, author W. Tang, author Y. Wang, author Y. Wang, author Y. Zang, and author Y. Lau, 10.1360/SSPMA2016-00438 journal journal Scientia Sinica Physica, Mechanica & Astronomica volume 47, pages 010404 (year 2017)NoStop [Luo et al.(2016)Luo, Chen, Duan, Gong, Hu, Ji, Liu, Mei, Milyukov, Sazhin, Shao, Toth, Tu, Wang, Wang, Yeh, Zhan, Zhang, Zharov, and Zhou]tianqin author author J. Luo, author L.-S. Chen, author H.-Z. Duan, author Y.-G. Gong, author S. Hu, author J. Ji, author Q. Liu, author J. Mei, author V. Milyukov, author M. Sazhin, author C.-G. Shao, author V. T. Toth, author H.-B. Tu, author Y. Wang, author Y. Wang, author H.-C. Yeh, author M.-S. Zhan, author Y. Zhang, author V. Zharov, and author Z.-B. Zhou, 10.1088/0264-9381/33/3/035010 journal journal Classical and Quantum Gravity volume 33, eid 035010 (year 2016)NoStop [Gair et al.(2008)Gair, Li, and Mandel]Ricci_1 author author J. R. Gair, author C. Li, and author I. Mandel, 10.1103/PhysRevD.77.024035 journal journal Phys. Rev. D volume 77, pages 024035 (year 2008)NoStop [Johannsen(2013)]Ricci_2 author author T. Johannsen, 10.1103/PhysRevD.87.124017 journal journal Phys. Rev. D volume 87, pages 124017 (year 2013)NoStop [Manko and Novikov(1992)]Ricci_3 author author V. S. Manko and author I. D. Novikov, 10.1088/0264-9381/9/11/013 journal journal Classical and Quantum Gravity volume 9, pages 2477 (year 1992)NoStop [Papadopoulos et al.(1981)Papadopoulos, Stewart, and Witten]ZV_1 author author D. Papadopoulos, author B. Stewart, and author L. Witten, 10.1103/PhysRevD.24.320 journal journal Phys. Rev. D volume 24, pages 320 (year 1981)NoStop [Chowdhury et al.(2012)Chowdhury, Patil, Malafarina, and Joshi]ZV_2 author author A. N. Chowdhury, author M. Patil, author D. 
Malafarina, and author P. S. Joshi, 10.1103/PhysRevD.85.104031 journal journal Phys. Rev. D volume 85, pages 104031 (year 2012)NoStop [Boshkayev et al.(2016)Boshkayev, Gasperín, Gutiérrez-Piñeres, Quevedo, and Toktarbay]ZV_3 author author K. Boshkayev, author E. Gasperín, author A. C. Gutiérrez-Piñeres, author H. Quevedo, and author S. Toktarbay, 10.1103/PhysRevD.93.024024 journal journal Phys. Rev. D volume 93, pages 024024 (year 2016)NoStop [Toshmatov et al.(2019)Toshmatov, Malafarina, and Dadhich]ZV_4 author author B. Toshmatov, author D. Malafarina, and author N. Dadhich, 10.1103/PhysRevD.100.044001 journal journal Phys. Rev. D volume 100, pages 044001 (year 2019)NoStop [Toshmatov and Malafarina(2019)]ZV_5 author author B. Toshmatov and author D. Malafarina, 10.1103/PhysRevD.100.104052 journal journal Phys. Rev. D volume 100, pages 104052 (year 2019)NoStop [Quevedo and Mashhoon(1991)]deltaKerr_1 author author H. Quevedo and author B. Mashhoon, 10.1103/PhysRevD.43.3902 journal journal Phys. Rev. D volume 43, pages 3902 (year 1991)NoStop [Allahyari et al.(2020)Allahyari, Firouzjahi, and Mashhoon]deltaKerr_2 author author A. Allahyari, author H. Firouzjahi, and author B. Mashhoon, 10.1088/1361-6382/ab6860 journal journal Classical and Quantum Gravity volume 37, eid 055006 (year 2020)NoStop [Toktarbay and Quevedo(2014)]deltaKerr_3 author author S. Toktarbay and author H. Quevedo, 10.1134/S0202289314040136 journal journal Gravitation and Cosmology volume 20, pages 252 (year 2014)NoStop [Konoplya et al.(2016)Konoplya, Rezzolla, and Zhidenko]KRZ_16 author author R. Konoplya, author L. Rezzolla, and author A. Zhidenko, 10.1103/PhysRevD.93.064015 journal journal Phys. Rev. D volume 93, pages 064015 (year 2016)NoStop [Akiyama et al.(2019a)Akiyama et al.]EHT_1 author author K. Akiyama et al. (collaboration The Event Horizon Telescope Collaboration), 10.3847/2041-8213/ab0ec7 journal journal The Astrophysical Journal Letters volume 875, pages L1 (year 2019a)NoStop [Akiyama et al.(2019b)Akiyama et al.]EHT_2 author author K. Akiyama et al. (collaboration The Event Horizon Telescope Collaboration), 10.3847/2041-8213/ab0c96 journal journal The Astrophysical Journal Letters volume 875, pages L2 (year 2019b)NoStop [Akiyama et al.(2019c)Akiyama et al.]EHT_3 author author K. Akiyama et al. (collaboration The Event Horizon Telescope Collaboration), 10.3847/2041-8213/ab0c57 journal journal The Astrophysical Journal Letters volume 875, pages L3 (year 2019c)NoStop [Akiyama et al.(2022)Akiyama et al.]EHT_S author author K. Akiyama et al. (collaboration The Event Horizon Telescope Collaboration), 10.3847/2041-8213/ac6674 journal journal The Astrophysical Journal Letters volume 930, pages L12 (year 2022)NoStop [Gralla(2021)]Test_GR_Sha_1 author author S. E. Gralla, 10.1103/PhysRevD.103.024023 journal journal Phys. Rev. D volume 103, pages 024023 (year 2021)NoStop [Glampedakis and Pappas(2021)]Test_GR_Sha_2 author author K. Glampedakis and author G. Pappas, 10.1103/PhysRevD.104.L081503 journal journal Phys. Rev. D volume 104, pages L081503 (year 2021)NoStop [Bambi et al.(2019)Bambi, Freese, Vagnozzi, and Visinelli]Test_GR_Sha_3 author author C. Bambi, author K. Freese, author S. Vagnozzi, and author L. Visinelli, 10.1103/PhysRevD.100.044057 journal journal Phys. Rev. D volume 100, pages 044057 (year 2019)NoStop [Virbhadra(2009)]Light_Bend_1 author author K. S. Virbhadra, 10.1103/PhysRevD.79.083004 journal journal Phys. Rev. 
D volume 79, pages 083004 (year 2009)NoStop [Virbhadra and Ellis(2000)]Light_Bend_2 author author K. S. Virbhadra and author G. F. R. Ellis, 10.1103/PhysRevD.62.084003 journal journal Phys. Rev. D volume 62, pages 084003 (year 2000)NoStop [Bozza(2002)]Light_Bend_3 author author V. Bozza, 10.1103/PhysRevD.66.103001 journal journal Phys. Rev. D volume 66, pages 103001 (year 2002)NoStop [Gralla et al.(2019)Gralla, Holz, and Wald]Light_Bend_4 author author S. E. Gralla, author D. E. Holz, and author R. M. Wald, 10.1103/PhysRevD.100.024018 journal journal Phys. Rev. D volume 100, pages 024018 (year 2019)NoStop [Bambi and Freese(2009)]Image_1 author author C. Bambi and author K. Freese, 10.1103/PhysRevD.79.043002 journal journal Phys. Rev. D volume 79, pages 043002 (year 2009)NoStop [Hioki and Maeda(2009)]Image_2 author author K. Hioki and author K.-i. Maeda, 10.1103/PhysRevD.80.024042 journal journal Phys. Rev. D volume 80, pages 024042 (year 2009)NoStop [Amarilla et al.(2010)Amarilla, Eiroa, and Giribet]Image_3 author author L. Amarilla, author E. F. Eiroa, and author G. Giribet, 10.1103/PhysRevD.81.124045 journal journal Phys. Rev. D volume 81, pages 124045 (year 2010)NoStop [Amarilla and Eiroa(2012)]Image_4 author author L. Amarilla and author E. F. Eiroa, 10.1103/PhysRevD.85.064019 journal journal Phys. Rev. D volume 85, pages 064019 (year 2012)NoStop [Amarilla and Eiroa(2013)]Image_5 author author L. Amarilla and author E. F. Eiroa, 10.1103/PhysRevD.87.044057 journal journal Phys. Rev. D volume 87, pages 044057 (year 2013)NoStop [Javed et al.(2019)Javed, Abbas, and Övgün]Image_6 author author W. Javed, author J. Abbas, and author A. Övgün, 10.1103/PhysRevD.100.044052 journal journal Phys. Rev. D volume 100, pages 044052 (year 2019)NoStop [Younsi et al.(2016)Younsi, Zhidenko, Rezzolla, Konoplya, and Mizuno]Image_7 author author Z. Younsi, author A. Zhidenko, author L. Rezzolla, author R. Konoplya, and author Y. Mizuno, 10.1103/PhysRevD.94.084025 journal journal Phys. Rev. D volume 94, pages 084025 (year 2016)NoStop [Cunha et al.(2015)Cunha, Herdeiro, Radu, and Rúnarsson]Image_8 author author P. V. P. Cunha, author C. A. R. Herdeiro, author E. Radu, and author H. F. Rúnarsson, 10.1103/PhysRevLett.115.211102 journal journal Phys. Rev. Lett. volume 115, pages 211102 (year 2015)NoStop [Ghasemi-Nodehi et al.(2015)Ghasemi-Nodehi, Li, and Bambi]Image_9 author author M. Ghasemi-Nodehi, author Z. Li, and author C. Bambi, 10.1140/epjc/s10052-015-3539-x journal journal The European Physical Journal C volume 75, pages 315 (year 2015)NoStop [Ayzenberg and Yunes(2018)]Image_10 author author D. Ayzenberg and author N. Yunes, 10.1088/1361-6382/aae87b journal journal Classical and Quantum Gravity volume 35, pages 235002 (year 2018)NoStop [Claudel et al.(2001)Claudel, Virbhadra, and Ellis]Claude_01 author author C.-M. Claudel, author K. S. Virbhadra, and author G. F. R. Ellis, 10.1063/1.1308507 journal journal Journal of Mathematical Physics volume 42, pages 818 (year 2001)NoStop [Lu et al.(2023)Lu et al.]Lu_23 author author R.-S. Lu et al., 10.1038/s41586-023-05843-w journal journal Nature volume 616, pages 686 (year 2023)NoStop [Psaltis and Johannsen(2011)]Ray_1 author author D. Psaltis and author T. Johannsen, 10.1088/0004-637X/745/1/1 journal journal The Astrophysical Journal volume 745, pages 1 (year 2011)NoStop [kwan Chan et al.(2013)kwan Chan, Psaltis, and Özel]Ray_2 author author C. kwan Chan, author D. Psaltis, and author F. 
Özel, 10.1088/0004-637X/777/1/13 journal journal The Astrophysical Journal volume 777, pages 13 (year 2013)NoStop [Pihajoki et al.(2018)Pihajoki, Mannerkoski, Nättilä, and Johansson]Ray_3 author author P. Pihajoki, author M. Mannerkoski, author J. Nättilä, and author P. H. Johansson, 10.3847/1538-4357/aacea0 journal journal The Astrophysical Journal volume 863, pages 8 (year 2018)NoStop [Pelle et al.(2022)Pelle, Reula, Carrasco, and Bederian]Ray_4 author author J. Pelle, author O. Reula, author F. Carrasco, and author C. Bederian, 10.1093/mnras/stac1857 journal journal Monthly Notices of the Royal Astronomical Society volume 515, pages 1316 (year 2022)NoStop [Dexter and Agol(2009)]Acc_1 author author J. Dexter and author E. Agol, 10.1088/0004-637X/696/2/1616 journal journal The Astrophysical Journal volume 696, pages 1616 (year 2009)NoStop [Marck(1996)]Acc_2 author author J.-A. Marck, 10.1088/0264-9381/13/3/007 journal journal Classical and Quantum Gravity volume 13, pages 393 (year 1996)NoStop [Li and Bambi(2014)]Li_14 author author Z. Li and author C. Bambi, 10.1088/1475-7516/2014/01/041 journal journal Journal of Cosmology and Astroparticle Physics volume 2014, pages 041 (year 2014)NoStop [Wei et al.(2019)Wei, Zou, Liu, and Mann]Wei_19 author author S.-W. Wei, author Y.-C. Zou, author Y.-X. Liu, and author R. B. Mann, 10.1088/1475-7516/2019/08/030 journal journal Journal of Cosmology and Astroparticle Physics volume 2019, pages 030 (year 2019)NoStop [Cunha et al.(2016)Cunha, Herdeiro, Radu, and Rúnarsson]Cunha_16 author author P. V. P. Cunha, author C. A. R. Herdeiro, author E. Radu, and author H. F. Rúnarsson, 10.1142/S0218271816410212 journal journal International Journal of Modern Physics D volume 25, pages 1641021 (year 2016)NoStop [de Vries(2000)]Vries_00 author author A. de Vries, 10.1088/0264-9381/17/1/309 journal journal Classical and Quantum Gravity volume 17, pages 123 (year 2000)NoStop [Abdujabbarov et al.(2016)Abdujabbarov, Amir, Ahmedov, and Ghosh]Ahmadjon_16 author author A. Abdujabbarov, author M. Amir, author B. Ahmedov, and author S. G. Ghosh, 10.1103/PhysRevD.93.104004 journal journal Phys. Rev. D volume 93, pages 104004 (year 2016)NoStop [Li et al.(2021)Li, Abdujabbarov, and Han]Li_21 author author S. Li, author A. A. Abdujabbarov, and author W.-B. Han, 10.1140/epjc/s10052-021-09445-6 journal journal The European Physical Journal C volume 81, pages 649 (year 2021)NoStop [Li et al.(2022)Li, Mirzaev, Abdujabbarov, Malafarina, Ahmedov, and Han]Li_22 author author S. Li, author T. Mirzaev, author A. A. Abdujabbarov, author D. Malafarina, author B. Ahmedov, and author W.-B. Han, 10.1103/PhysRevD.106.084041 journal journal Phys. Rev. D volume 106, pages 084041 (year 2022)NoStop [Jusufi et al.(2020)Jusufi, Jamil, Chakrabarty, Wu, Bambi, and Wang]Jusufi_20 author author K. Jusufi, author M. Jamil, author H. Chakrabarty, author Q. Wu, author C. Bambi, and author A. Wang, 10.1103/PhysRevD.101.044035 journal journal Phys. Rev. D volume 101, pages 044035 (year 2020)NoStop [Pürrer(2016)]Model_Nor_1 author author M. Pürrer, 10.1103/PhysRevD.93.064041 journal journal Phys. Rev. D volume 93, pages 064041 (year 2016)NoStop [Pan et al.(2008)Pan, Buonanno, Baker, Centrella, Kelly, McWilliams, Pretorius, and van Meter]Model_Nor_2 author author Y. Pan, author A. Buonanno, author J. G. Baker, author J. Centrella, author B. J. Kelly, author S. T. McWilliams, author F. Pretorius, and author J. R. van Meter, 10.1103/PhysRevD.77.024014 journal journal Phys. Rev. 
D volume 77, pages 024014 (year 2008)NoStop [Khan et al.(2016a)Khan, Husa, Hannam, Ohme, Pürrer, Forteza, and Bohé]Model_Nor_3 author author S. Khan, author S. Husa, author M. Hannam, author F. Ohme, author M. Pürrer, author X. J. Forteza, and author A. Bohé, 10.1103/PhysRevD.93.044007 journal journal Phys. Rev. D volume 93, pages 044007 (year 2016a)NoStop [Husa et al.(2016)Husa, Khan, Hannam, Pürrer, Ohme, Forteza, and Bohé]Model_Nor_4 author author S. Husa, author S. Khan, author M. Hannam, author M. Pürrer, author F. Ohme, author X. J. Forteza, and author A. Bohé, 10.1103/PhysRevD.93.044006 journal journal Phys. Rev. D volume 93, pages 044006 (year 2016)NoStop [Buonanno and Damour(2000)]Model_Nor_5 author author A. Buonanno and author T. Damour, 10.1103/PhysRevD.62.064015 journal journal Phys. Rev. D volume 62, pages 064015 (year 2000)NoStop [Barausse and Buonanno(2010)]Model_Nor_6 author author E. Barausse and author A. Buonanno, 10.1103/PhysRevD.81.084024 journal journal Phys. Rev. D volume 81, pages 084024 (year 2010)NoStop [Buonanno et al.(2009)Buonanno, Pan, Pfeiffer, Scheel, Buchman, and Kidder]Model_Nor_7 author author A. Buonanno, author Y. Pan, author H. P. Pfeiffer, author M. A. Scheel, author L. T. Buchman, and author L. E. Kidder, 10.1103/PhysRevD.79.124028 journal journal Phys. Rev. D volume 79, pages 124028 (year 2009)NoStop [Damour et al.(2008)Damour, Nagar, Hannam, Husa, and Brügmann]Model_Nor_8 author author T. Damour, author A. Nagar, author M. Hannam, author S. Husa, and author B. Brügmann, 10.1103/PhysRevD.78.044039 journal journal Phys. Rev. D volume 78, pages 044039 (year 2008)NoStop [Pan et al.(2010)Pan, Buonanno, Buchman, Chu, Kidder, Pfeiffer, and Scheel]Model_Nor_9 author author Y. Pan, author A. Buonanno, author L. T. Buchman, author T. Chu, author L. E. Kidder, author H. P. Pfeiffer, and author M. A. Scheel, 10.1103/PhysRevD.81.084041 journal journal Phys. Rev. D volume 81, pages 084041 (year 2010)NoStop [Hannam et al.(2014)Hannam, Schmidt, Bohé, Haegel, Husa, Ohme, Pratten, and Pürrer]Model_Nor_10 author author M. Hannam, author P. Schmidt, author A. Bohé, author L. Haegel, author S. Husa, author F. Ohme, author G. Pratten, and author M. Pürrer, 10.1103/PhysRevLett.113.151101 journal journal Phys. Rev. Lett. volume 113, pages 151101 (year 2014)NoStop [Giesler et al.(2019)Giesler, Isi, Scheel, and Teukolsky]Giesler_19 author author M. Giesler, author M. Isi, author M. A. Scheel, and author S. A. Teukolsky, 10.1103/PhysRevX.9.041060 journal journal Phys. Rev. X volume 9, pages 041060 (year 2019)NoStop [Ota and Chirenti(2020)]Iara_20 author author I. Ota and author C. Chirenti, 10.1103/PhysRevD.101.104005 journal journal Phys. Rev. D volume 101, pages 104005 (year 2020)NoStop [Dhani(2021)]Arnab_21 author author A. Dhani, 10.1103/PhysRevD.103.104048 journal journal Phys. Rev. D volume 103, pages 104048 (year 2021)NoStop [Yang et al.(2012)Yang, Nichols, Zhang, Zimmerman, Zhang, and Chen]Yang_12 author author H. Yang, author D. A. Nichols, author F. Zhang, author A. Zimmerman, author Z. Zhang, and author Y. Chen, 10.1103/PhysRevD.86.104006 journal journal Phys. Rev. D volume 86, pages 104006 (year 2012)NoStop [Li and Han(2022)]Psi_22 author author S. Li and author W.-B. Han, 10.1103/PhysRevD.106.104013 journal journal Phys. Rev. D volume 106, pages 104013 (year 2022)NoStop [Collins and Hughes(2004)]Bumpy_BH_1 author author N. A. Collins and author S. A. Hughes, 10.1103/PhysRevD.69.124022 journal journal Phys. Rev. 
D volume 69, pages 124022 (year 2004)NoStop [Vigeland and Hughes(2010)]Bumpy_BH_2 author author S. J. Vigeland and author S. A. Hughes, 10.1103/PhysRevD.81.024030 journal journal Phys. Rev. D volume 81, pages 024030 (year 2010)NoStop [Shashank and Bambi(2022a)]KRZ_ins author author S. Shashank and author C. Bambi, 10.1103/PhysRevD.105.104004 journal journal Phys. Rev. D volume 105, pages 104004 (year 2022a)NoStop [Shashank and Bambi(2022b)]inspiral_KRZ author author S. Shashank and author C. Bambi, 10.1103/PhysRevD.105.104004 journal journal Phys. Rev. D volume 105, pages 104004 (year 2022b)NoStop [Khan et al.(2016b)Khan, Husa, Hannam, Ohme, Pürrer, Forteza, and Bohé]phenomD author author S. Khan, author S. Husa, author M. Hannam, author F. Ohme, author M. Pürrer, author X. J. Forteza, and author A. Bohé, 10.1103/PhysRevD.93.044007 journal journal Phys. Rev. D volume 93, pages 044007 (year 2016b)NoStop [McWilliams(2019)]McWilliams_19 author author S. T. McWilliams, 10.1103/PhysRevLett.122.191102 journal journal Phys. Rev. Lett. volume 122, pages 191102 (year 2019)NoStop [Ma et al.(2021)Ma, Giesler, Varma, Scheel, and Chen]BOB_b author author S. Ma, author M. Giesler, author V. Varma, author M. A. Scheel, and author Y. Chen, 10.1103/PhysRevD.104.084003 journal journal Phys. Rev. D volume 104, pages 084003 (year 2021)NoStop [Harry and (forthe LIGO Scientific Collaboration)(2010)]Harry_2010 author author G. M. Harry and author (forthe LIGO Scientific Collaboration), 10.1088/0264-9381/27/8/084006 journal journal Classical and Quantum Gravity volume 27, pages 084006 (year 2010)NoStop [Amaro-Seoane et al.(2023)Amaro-Seoane et al.]LISA author author P. Amaro-Seoane et al., 10.1007/s41114-022-00041-y journal journal Living Reviews in Relativity volume 26, pages 2 (year 2023)NoStop [Hild et al.(2011)Hild et al.]ET_D author author S. Hild et al., 10.1088/0264-9381/28/9/094013 journal journal Classical and Quantum Gravity volume 28, pages 094013 (year 2011)NoStop
http://arxiv.org/abs/2307.03096v1
20230706161147
HETDEX Public Source Catalog 1 -- Stacking 50K Lyman Alpha Emitters
[ "Dustin Davis", "Karl Gebhardt", "Erin Mentuch Cooper", "William P. Bowman", "Barbara Garcia Castanheira", "John Chisholm", "Robin Ciardullo", "Maximilian Fabricius", "Daniel J. Farrow", "Steven L. Finkelstein", "Caryl Gronwall", "Eric Gawiser", "Gary J. Hill", "Ulrich Hopp", "Lindsay R. House", "Donghui Jeong", "Wolfram Kollatschny", "Eiichiro Komatsu", "Chenxu Liu", "Maja Lujan Niemeyer", "Alberto Saldana-Lopez", "Shun Saito", "Donald P. Schneider", "Jan Snigula", "Sarah Tuttle", "Laurel H. Weiss", "Lutz Wisotzki", "Gregory Zeimann" ]
astro-ph.GA
[ "astro-ph.GA" ]
0000-0002-8925-9769]Dustin Davis Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA 0000-0002-8433-8185]Karl Gebhardt Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA 0000-0002-2307-0146]Erin Mentuch Cooper Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA 0000-0003-4381-5245]William P. Bowman Department of Astronomy, Yale University, New Haven, CT 06520 Baylor University, Department of Physics, Waco TX 76798, USA 0000-0002-0302-2577]John Chisholm Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA 0000-0002-1328-0211]Robin Ciardullo Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA 0000-0002-7025-6058]Maximilian Fabricius Max-Planck Institut für extraterrestrische Physik, Giessenbachstrasse 1, 85748 Garching, Germany Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians Universität München, Scheinerstr. 1, 81679 München, Germany 0000-0003-2575-0652]Daniel J. Farrow Centre of Excellence for Data Science, Artificial Intelligence and Modelling (DAIM), University of Hull, Cottingham Road, Kingston-upon-Hull HU6 7RX, UK E. A. Milne Centre for Astrophysics, University of Hull, Cottingham Road, Kingston-upon-Hull HU6 7RX, UK 0000-0001-8519-1130]Steven L. Finkelstein Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA 0000-0001-6842-2371]Caryl Gronwall Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA 0000-0003-1530-8713]Eric Gawiser Department of Physics & Astronomy, Rutgers, The State University of New Jersey, Piscataway, NJ 08854, USA 0000-0001-6717-7685]Gary J. Hill McDonald Observatory, The University of Texas at Austin, Austin, TX 78712, USA Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA 0000-0003-1008-225X]Ulrich Hopp Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians Universität München, Scheinerstr. 1, 81679 München, Germany Max-Planck Institut für extraterrestrische Physik, Giessenbachstrasse 1, 85748 Garching, Germany 0000-0002-1496-6514]Lindsay R. House Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA 0000-0002-8434-979X]Donghui Jeong Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA 0000-0002-0417-1494]Wolfram Kollatschny Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany 0000-0002-0136-2404]Eiichiro Komatsu Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany Kavli Institute for the Physics and Mathematics of the Universe (WPI), Todai Institutes for Advanced Study, the University of Tokyo, Kashiwanoha, Kashiwa, Chiba 277-8583, Japan 0000-0001-5561-2010]Chenxu Liu South-Western Institute for Astronomy Research, Yunnan University, Kunming, Yunnan, 650500, People’s Republic of China Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA 0000-0002-6907-8370]Maja Lujan Niemeyer Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 
1, 85748 Garching, Germany 0000-0001-8419-3062]Alberto Saldana-Lopez Department of Astronomy, University of Geneva, 51 Chemin Pegasi, 1290 Versoix, Switzerland 0000-0002-6186-5476]Shun Saito Institute for Multi-messenger Astrophysics and Cosmology, Department of Physics, Missouri University of Science and Technology, 1315 N. Pine St., Rolla MO 65409, USA Kavli Institute for the Physics and Mathematics of the Universe (WPI), Todai Institutes for Advanced Study, the University of Tokyo, Kashiwanoha, Kashiwa, Chiba 277-8583, Japan 0000-0001-7240-7449]Donald P. Schneider Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA 0000-0003-4044-5357]Jan Snigula Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians Universität München, Scheinerstr. 1, 81679 München, Germany Max-Planck Institut für extraterrestrische Physik, Giessenbachstrasse 1, 85748 Garching, Germany 0000-0002-7327-565X]Sarah Tuttle Department of Astronomy, University of Washington, Seattle, 3910 15th Ave NE, Room C319, Seattle WA 98195-0002 0000-0002-4974-1243]Laurel H. Weiss Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, 14482 Potsdam, Germany 0000-0003-2307-0629]Gregory Zeimann Hobby Eberly Telescope, University of Texas, Austin, Austin, TX, 78712, USA We describe the ensemble properties of the 1.9 < z < 3.5 Lyman Alpha Emitters (LAEs) found in the HETDEX survey's first public data release, HETDEX Public Source Catalog 1 <cit.>. Stacking the low-resolution (R ∼ 800) spectra greatly increases the signal-to-noise ratio, revealing spectral features otherwise hidden by noise, and we show that the stacked spectrum is representative of an average member of the set. The flux limited, signal-to-noise ratio restricted stack of 50K HETDEX LAEs shows the ensemble biweight “average" z ∼ 2.6 LAE to be a blue (UV continuum slope ∼ -2.4 and E(B-V) < 0.1), moderately bright (M_UV∼ -19.7) star forming galaxy with strong emission (log L_Lyα ∼ 42.8 and W_λ() ∼ 114Å), and potentially significant leakage of ionizing radiation. The restframe UV light is dominated by a young, metal poor stellar population with an average age 5-15 Myr and metallicity of 0.2-0.3 Z_⊙. § INTRODUCTION The Hobby-Eberly Telescope Dark Energy Experiment <cit.> is a multi-year, untargeted, low-resolution (R ∼ 800) spectroscopic survey conducted with the Hobby-Eberly Telescope <cit.> and the Visible Integral-field Replicable Unit Spectrograph <cit.>. Object spectra are identified post-observation via examination of the spatial and spectral clustering of the individual optical spectra from the ∼35K fibers in the array (<ref>). The HETDEX Public Source Catalog 1, hereafter, HPSC-1 <cit.> contains spectra for more than 200K objects from the first three years of HETDEX observations. Within this data set are more than 50K Lyman Alpha Emitting (LAE) galaxies at 1.9 < z < 3.5 identified by their emission lines, which HETDEX is specifically designed to detect. Using the 3D positions of these LAEs, which serve as biased mass tracers, HETDEX aims to constrain the Hubble Parameter, H(z), and the Angular Diameter Distance, D_A(z), at z∼2.4 to better than 1% accuracy <cit.>. 
The success of this primary science goal is predicated on the accurate measurement of the redshifts, and hence the correct identification of the emission line, of the roughly one million LAEs out of the several million total galaxies and other astrophysical sources expected to be observed over the course of the HETDEX survey. The spectra obtained from the untargeted observations are exploited to this end <cit.>. The HETDEX spectral range is 3500-5500  with λ /Δλ∼ 800 <cit.>. Most, ∼ 80%, of all HETDEX emission line detections (or ∼ 70% of all HETDEX objects when including those identified from their continuum rather than emission lines) are in spectra without significantly detected continua and thus provide very little extra information beyond their emission line specified redshifts. However, with the classifications and redshifts of these objects known to high precision and with little misidentification, especially for <cit.>, it becomes practical to stack the spectra to explore the average properties of sub-populations of galaxies (specifically LAEs, for this work). Throughout this paper, we use the biweight measure of the central location [a.k.a the “biweight location". Also referred to as the “biweight average" or just the “biweight" in this and other works.] <cit.> as a measure of the average properties in the stack. This is very similar to the median in the limit of a large sample size, such as presented here (see also <ref>), but provides an improved robustness to outliers, particularly when the sample distribution is non-Gaussian. When the sample is Gaussian-like, the difference is ≪ 1%. For simple comparisons and convenience, the mean or median and standard deviation may also be used, but are explicitly identified. Here we define LAE to mean 1.88 < z < 3.52 galaxies that do not host Active Galactic Nuclei (AGN). With the HETDEX emission line search, these galaxies are nearly all (≳ 96%) classical Lyman Alpha Emitters by definition of a 20  or greater rest-frame equivalent width emission <cit.>. Only a relative few these “LAEs" have sufficiently bright continua to be selected as a Lyman Break Galaxies <cit.>. Generally, the LAEs in the HETDEX redshift window are compact, low-metallicity, rapidly star forming galaxies <cit.>. The stacking of spectra, whether for LAEs or other phenomena, is not new <cit.>. HETDEX is unusual in its lack of preselection coupled with the large number of available spectra, which will number in the millions by the end of the survey. High-quality spectral observations of individual z>2 galaxies remain extremely rare and limit our ability to extrapolate to a population description, where stacking, by its statistical nature, describes population features. The large number of spectra from the observations included in HPSC-1 <cit.> provide significant statistical leverage. Given the untargeted nature of the HETDEX observations and assuming the spectra are down-selected without significant restriction on their on-sky locations, the stacks are largely unbiased with respect to the LAEs' environments. Additionally, these statistically large stacks marginalize over galaxy orientation, dust geometry, star formation stochasticity, and lines of sight through the IGM. That last marginalization allows for a more straightforward application of IGM attenuation corrections, which are inherently statistical averages <cit.>, while the other properties embody the random variability in the galaxy population and, in particular, their radiative transfer processes <cit.>. 
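Since the biweight location underlies all of the averages quoted in this paper, a minimal reference implementation is sketched below (Python/NumPy, following the standard median-centered, MAD-scaled definition with the conventional tuning constant c=6; library routines such as astropy.stats.biweight_location compute the same quantity). The synthetic data are only for illustration.

import numpy as np

def biweight_location(x, c=6.0):
    """Biweight estimate of the central location of a 1-D sample."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0.0:
        return med                      # no spread: fall back to the median
    u = (x - med) / (c * mad)
    mask = np.abs(u) < 1.0              # points beyond c*MAD get zero weight
    w = (1.0 - u[mask] ** 2) ** 2
    return med + np.sum(w * (x[mask] - med)) / np.sum(w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.normal(0.0, 1.0, 10000)
    sample[:50] += 40.0                 # a few strong outliers
    print("mean    :", sample.mean())
    print("median  :", np.median(sample))
    print("biweight:", biweight_location(sample))

For a nearly Gaussian sample the three estimates agree closely, while the mean is dragged by the outliers, which is the robustness property described above.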
Further, with 10K-50K spectra in each of the stacks in this work, the signal-to-noise ratio (SNR) is boosted by a factor of 100 or more. This makes possible the measurement of signal that is otherwise buried in noise in the individual spectra. In this work, we stack subsamples of the HPSC-1 spectra to explore the basic ensemble properties of 1.9 < z < 3.5 LAEs and preview future analyses using even larger collections of HETDEX spectra. The remainder of this paper is organized as follows: Section <ref> briefly describes the HETDEX observations and reduction pipeline. Section <ref> provides an overview of the selection and stacking mechanics. Section <ref> presents the results and discusses the various explored properties of the data sets. Throughout the paper, the Planck 2018 cosmology <cit.> with Ω_m= 0.31 and H_0 = 67.7 km s^-1 Mpc^-1 is assumed. All magnitudes are in the AB system. § OBSERVATIONS AND DATA SELECTION For a more thorough and complete description of the HETDEX observations, data reduction, and the particular composition of HPSC-1, we refer the readers to <cit.> and <cit.>. In brief, HETDEX is a multi-year spectroscopic survey conducted at the McDonald Observatory with the 10 m Hobby-Eberly Telescope. Observations began in 2017 and will continue through 2024 with a planned coverage of some 540 deg^2 in two large fields pointing out of the Galactic plane, one centered near 193^∘+53^∘ and the other near 22^∘+0^∘. Pointings within the two fields are untargeted, and the spectra are collected with up to 78 pairs of integral field unit (IFU) spectrographs in VIRUS spread out over the HET's focal plane with a fill factor of 1:4.5. Each IFU is fed by 448 fibers <cit.>, and each observation consists of 3 exposures of 6.1-minute duration that are dithered to fill in the coverage within the IFU footprints. The resulting spectra, ∼ 33K per observation, are calibrated, sky subtracted, and scanned for emission lines (or continua) without any color or magnitude pre-selection, and, where lines (or continua) are found, the spectra from the surrounding fibers inside a 3.5″ radius aperture are combined into a single, point-spread function (PSF) weighted spectrum <cit.>. Though the apertures are large, for the 1.7″ average seeing FWHM, 90% of the light for the spectrum comes from the inner-most 1.2″. The PSF weighted spectra are classified and assigned a redshift with the ELiXer software <cit.>[The HPSC-1 classifications derive from an earlier version of the ELiXer software than is presented in <cit.>] using multiple analyses and incorporating archival photometric imaging. Brighter (g<22) spectra are classified with support from Diagnose[<https://github.com/grzeimann/Diagnose>], a software package developed for the Hobby-Eberly Telescope VIRUS Parallel Survey which is based on template fitting <cit.>. Additional support is also provided by the Dark Energy Explorers[<https://www.zooniverse.org/projects/erinmc/dark-energy-explorers>] citizen science project <cit.>. For this work, we select the 51,863 HPSC-1 spectra from objects classified as 1.88 < z < 3.52 galaxies, explicitly excluding objects containing Active Galactic Nuclei (AGN), as identified by ELiXer or <cit.> (see also <ref>). The redshift range is defined by where Lyα falls within the HETDEX spectral bandpass. 
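As a quick check of that window, the rest-frame Lyα wavelength (1215.67 Å) together with the 3500-5500 Å coverage quoted above reproduces the stated redshift limits:

LYA_REST = 1215.67            # rest-frame Lya wavelength in Angstroms
BLUE, RED = 3500.0, 5500.0    # HETDEX spectral coverage in Angstroms

z_min = BLUE / LYA_REST - 1.0
z_max = RED / LYA_REST - 1.0
print(f"Lya falls in the bandpass for {z_min:.2f} < z < {z_max:.2f}")  # ~1.88 < z < 3.52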
The AGN classifications made by ELiXer within this redshift range are based on (1) the detection of found with other emission line combinations consistent with AGN, such as , , , etc, (2) broad emission ≳1200 , or (3) a position and photometric magnitude match to an object from an external catalog that is classified as an AGN. More exhaustive AGN classifications are made by <cit.> based on single emission line and line pair matching with enhanced, multi-Gaussian fitting, the continuum profile, and visual inspection of all candidate AGN HETDEX spectra along with archival photometry.
§ METHODS
The details on the HETDEX data pipeline, including calibrations, sky subtraction, and spectra extraction, are found in <cit.>. The creation and general description of the public data catalog release, HPSC-1, is described in <cit.>. All galaxy spectra used in this work come from the aperture extracted, wavelength-rectified, PSF weighted, atmospheric dispersion corrected, Milky Way dust de-reddened data presented in HPSC-1. The additional corrections and operations applied to those spectra are described in the subsections that follow.
§.§ Sky Subtraction
Though the sky subtraction methodology is described in detail in <cit.>, because of its direct relevance to several topics in this work, we present a brief overview of the salient points. The goal of the sky subtraction is to remove background light while preserving the light from astronomical sources. The background light comes from the atmosphere (literally, the “sky"), as well as instrumental effects such as thermal noise and scattered light in the optics. Furthermore, any large diffuse astrophysical source will be included in the sky background; for example, light from unresolved faint foreground and background galaxies that are uniformly distributed in the sky will add to the estimate of the background. There are additional complications discussed in <cit.> such as wavelength distortion in the spectrographs that can lead to sky background residuals. Here we make use only of the local form of the HETDEX sky subtraction, as it is more stable than the full field sky subtraction in the pipeline version used to create the HPSC-1, the source of the spectra for this work. The on-sky size of the local sky is 50×12, whereas the full sky is derived from the entire 21 diameter focal plane. While both sky subtractions use similar methods, the local sky subtraction employs 112 fibers for each amplifier (4 per IFU), and the full field sky subtraction uses all fibers in an exposure (∼ 35K). The full sky necessitates additional adjustments for amplifier-to-amplifier variations and other complications. For emission line source detection, we rely on the local sky since it is more robust to instrumental effects and generates smaller background residuals. For the local sky subtraction, fibers containing obvious continua are removed, including those with counts more than 3× the biweight scale of the fibers on the amplifier. The remaining fibers are then used to compute the per-wavelength bin background from their biweight locations. Typically, 70-80% of the fibers in an amplifier are used for this calculation, though it can be as low as 25-30% when there is a star or other bright object in or near the FoV of the amplifier. As HETDEX detections can include fibers from more than one amplifier, due to their size and/or their position within an IFU, the fibers included in their detection aperture can have different local sky subtraction corrections.
One potential consequence of the local sky subtraction, which we revisit later, is that it can partly remove low-level flux actually associated with detected galaxies if that flux is spatially extended over a large portion of an IFU. §.§ Sky Subtraction Residual Similar to the Background Residual Correction presented in <cit.>, the LAE spectra stacked in this work have an observed-frame residual correction applied prior to their shift to the restframe. The purpose of this correction is to account for small systematics in the calibration, and the average extra light in the extraction apertures that come from faint, undetected background and foreground sources that are undetectable at the individual detection level. In <cit.>, apparently empty individual“Sky Fibers" are selected, sorted, and stacked on a per observation basis and used to correct the individual fibers of the LAE detection from the matched observation. However, in this work, given the large number statistics of the 250× larger LAE sample, we select entire apertures rather than individual fibers and average over many observations instead of applying separate corrections for each observation. Using the observations from June 2018, when the IFU array became significantly populated, through June 2020, the end of data acquisition for HPSC-1, we collect up to 200 spectra from random “empty" 3 5 radius apertures within each of the ∼ 2000 observations in this range; this radius matches the size of the standard HETDEX aperture <cit.>, and the spectra are extracted using the same method as the normal HETDEX detections. The center of each aperture is randomly selected from the footprint of the observation's IFU coverage with the following constraints: * No aperture center can be within 1 5 of another. This means the apertures can overlap but not near their central regions where the PSF weights are highest. * There must be at least 15 fibers included in the aperture over the three exposures comprising the observation. This avoids apertures where a significant portion falls off the edge of an IFU. * The measured g magnitude in the PSF weighted spectra extracted under the aperture must be fainter than 24, and have a median flux density between 3900   and 5400   greater than -4 × 10^-18 . This satisfies the “empty" requirement, given the HETDEX depth is close to 25 in g in good conditions but can be near 24 in poor seeing <cit.>. The negative lower limit accounts for the possibility of some over subtraction and for excursions due to noise. * There can be no detected emission lines within the extracted spectrum and no HETDEX detections within 2 0. These criteria yield more than 300K PSF weighted spectra (not all observations yield the target 200 “empty" apertures) that are stacked along their native HETDEX 2  wavelength bins using the biweight measure of central location <cit.> in the same method described in <cit.>. The result is shown in Figure <ref> with the residual spectrum flux in blue and the flux uncertainties for each wavelength bin in gray. The reported uncertainties are statistical and are defined as σ_b/√(N) where σ_b is the biweight scale of the N contributing “empty" aperture measured spectra for that wavelength bin. The spectrum is also made available in electronic form. Sky subtraction residual spectra created by stacking under alternate sub-selections of the full ∼300K sample are stable with respect to variations in the seeing FHWM, throughput and instrument response, date, and observed declination. 
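As a rough illustration of how the residual spectrum and its reported uncertainties can be built from the “empty"-aperture spectra described above, the following sketch (not the actual pipeline; the input array shape is an assumption) combines each observed-frame wavelength bin with the biweight location and quotes σ_b/√N:

```python
# Sketch only: `empty_spec` is assumed to be an (N_aperture, N_wave) array of
# PSF-weighted "empty"-aperture spectra on the native 2 A observed-frame grid.
import numpy as np
from astropy.stats import biweight_location, biweight_scale

def sky_residual(empty_spec):
    """Per-bin biweight stack of the empty apertures and a standard-error-like
    uncertainty defined as the biweight scale over sqrt(N contributing spectra)."""
    residual = biweight_location(empty_spec, c=6.0, axis=0, ignore_nan=True)
    sigma_b = biweight_scale(empty_spec, c=9.0, axis=0, ignore_nan=True)
    n_good = np.sum(np.isfinite(empty_spec), axis=0)
    return residual, sigma_b / np.sqrt(n_good)
```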
Each LAE spectrum is corrected by subtracting this sky subtraction residual in its observed frame. For individual spectra the correction is less than 1%, far below the noise level; the systematic becomes meaningful only when stacking 100s or 1000s of galaxy spectra. Figure <ref> shows a distinct rise in the far blue caused by instrumental limitations and is slightly negative redward of about 4000  indicating a small over-subtraction by the reduction pipeline. Both these effects are more than 100× smaller than the typical HETDEX flux limits <cit.> and are inconsequential for individual spectra. They only become significant when stacking large numbers of spectra. Future data releases will include improvements in the data reduction pipeline to better address these issues. §.§ Shift to Restframe As described in the previous subsection, the spectra are first corrected for the sky subtraction residual in their observed frame. Depending on the needs of the analysis, all spectra are corrected for wavelength- and redshift-dependent fractional IGM transmission. The IGM correction applied in this work is from CIGALE <cit.>, which derives from <cit.>. The individual application of the IGM transmission correction can be highly uncertain since IGM attenuation is dependent on the particular sight line <cit.>. However, since we are averaging over many thousands of spectra, this becomes naturally statistical <cit.>. Lastly, the spectra are shifted to their own restframe so that they can be aligned for stacking. The restframe shift has two components, one for the wavelength bins and one for the flux density at each bin center. Since the reported HETDEX wavelengths are in air, the observed wavelengths are converted to vacuum <cit.> and then simply adjusted by (1+z) where z is determined from the Gaussian fitted line center of the emission line <cit.>. As discussed in <cit.> and later in this work (<ref>), there is no correction applied for any velocity offset from an individual galaxy's systemic redshift. Depending on the science needs, this wavelength-only shift, by itself, may be sufficient, but it only represents the observed flux at rest wavelengths: when stacking, this can create a bias against higher redshift objects due to increased cosmological dimming. For this work, and to counteract this bias, we also convert the observed flux density (f_λ) to a luminosity density (L_λ or L_ν, as needed). Please note that here we use “luminosity density" as an analog to flux density, not as a luminosity per unit volume. This conversion is a straightforward application of: L_λ = 4π D_L^2 f_λ (1+z), where D_L is the luminosity distance. Where L_ν is needed, we simply multiply L_λ by λ^2/c. Unless stated otherwise, z (specifically, 1+z) is defined as the ratio of the observed frame, vacuum-shifted Gaussian-fitted emission line center wavelength to the vacuum restframe wavelength. §.§ Stacking Once the previously described corrections and shifts are made, the spectra are stacked using the same method described in <cit.>. The restframe wavelength spacing is adopted from the highest redshift object to be stacked, with the blue and red wavelength endpoints coming from the highest and lowest redshift objects respectively. All spectra are then linearly interpolated onto this wavelength grid and stacked in each wavelength bin using the weighted biweight statistic <cit.>, as a modification of the biweight statistic described in <cit.>. 
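A condensed sketch of the restframe shift and stacking steps just described is given below. This is not the HETDEX pipeline code and the array names are illustrative: each observed-frame spectrum is converted to a rest-frame luminosity density via Eq. <ref>, L_λ = 4π D_L² f_λ (1+z), resampled onto a common rest-frame grid, and combined per wavelength bin. The plain astropy biweight is used here as a stand-in for the weighted biweight used for the actual stacks.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18          # close to the adopted Planck 2018 values
from astropy.stats import biweight_location

def to_rest(wave_obs, flux_lam, z):
    """wave_obs [A, vacuum], flux_lam [erg s^-1 cm^-2 A^-1] -> rest wave, L_lambda."""
    d_cm = Planck18.luminosity_distance(z).to(u.cm).value
    return wave_obs / (1.0 + z), 4.0 * np.pi * d_cm**2 * flux_lam * (1.0 + z)

def stack(spectra, redshifts, wave_grid):
    """spectra: list of (wave_obs, flux_lam) pairs; returns stacked L_lambda and
    the number of contributing spectra in each rest-frame wavelength bin."""
    resampled = np.full((len(spectra), wave_grid.size), np.nan)
    for i, ((w, f), z) in enumerate(zip(spectra, redshifts)):
        w_rest, L_lam = to_rest(w, f, z)
        resampled[i] = np.interp(wave_grid, w_rest, L_lam, left=np.nan, right=np.nan)
    n_contrib = np.sum(np.isfinite(resampled), axis=0)
    return biweight_location(resampled, c=6.0, axis=0, ignore_nan=True), n_contrib
```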
The weighted biweight alters the biweight location slightly by including an additional weight for each element in the sample. The weight used in the stack is the inverse of the uncertainty on the flux measures within each wavelength bin, such that the more uncertain fluxes have a reduced contribution to the biweight location. We note that for these large datasets (≫10^4 spectra), the biweight and weighted biweight perform very similarly to a median average, with a brief comparison of the two methods provided in <ref>. In all cases, the default tuning constants are used. For the biweight location (or weighted biweight location) the constant is 6 and for the biweight scale it is 9. While it is common practice to normalize the spectra before stacking, often to the flux near 1500 Å for LBGs and LAEs, <cit.>, the nature of the HETDEX data makes this impractical. For the vast majority of the spectra presented in this work, only the emission is detectable above the flux limits, so there is no measured continuum against which to normalize. Instead, we stack the un-normalized spectra first and then, where appropriate for the analysis, we normalize the stack against the flux over a section of its wavelengths. Since the HETDEX wavelength window is fixed at 3500-5500 Å and the LAEs span 1.9 < z < 3.5 <cit.>, only the wavelength region immediately around rest-frame can receive contributions from all spectra (the number of contributing spectra for each wavelength bin is included as a column in the stacked spectrum data file). The fluxes in wavelength bins increasingly blueward of are populated by LAEs at increasingly higher redshifts, while the opposite is true moving redward of . § RESULTS The stack of the full 50K 1.9 < z < 3.5 LAEs (median z = 2.55), not corrected for IGM transmission, is presented in Figure <ref> in terms of L_λ normalized to the median between 1475 and 1525 Å (L_1500). The data for this figure, including the number of contributing spectra and uncertainties for each wavelength bin, is available in electronic form. The emission line fitting is performed with ELiXer <cit.> and the SNR has increased from a median of 6 for the individual galaxies in the sample to near 1000 in the stack. The uncertainties on the stack (shown as the orange curve in the top panel of Figure <ref> as absolute uncertainties ) are ∼5%, except near the wavelengths farthest from , where they grow to ≳25% in the far red and spike above 100% in the far blue. The latter effect is due to the declining CCD sensitivity in the blue <cit.> and the decreasing number of contributing objects (shown as the green curve in the bottom panel of Figure <ref>) at the spectral extremes (16% of the maximum in the far blue and 4% in the far red). These uncertainties are defined like standard errors with σ_b/√(n_λ), where σ_b is the biweight scale and n_λ is the number of spectra contributing to the wavelength bin centered on λ. Figure <ref> compares the stack to a typical HPSC-1 LAE (ID: 2100541366; RA,Dec: 150.127594 +2.295267), selected for the similarity of its properties (z = 2.566, g = 25.3, SNR = 5.9, log Luminosity = 42.81 (ergs s^-1), and W_λ() = 87 Å) to the ensemble and sample averages. The figure illustrates the SNR improvements gained through stacking. 
Using wavelengths redward of (1250-1650 Å, rest) and defining the SNR simply as the mean flux divided by the mean (analogous) error on that flux <cit.> and the standard error-like definition above, there is an increase of more than 300× from the mean SNR ≈ 0.06 for the continuum in an individual detection to the mean SNR ≈ 21 in the stack. As noted earlier, the differences between the biweight and median methods are small for large sample sizes. For the spectra in this work, the median of the differences between stacks using the biweight method (<ref> and Figure <ref>) and using a median is 2.6%, measured from restframe 850Å to 1850Å, or 3.1% over the entire wavelength range, 768Å to 1918Å. For the same stacks, the biweight location of the differences is nearly identical to the median differences at 2.6% and 3.0% for the same two wavelength ranges. Here we define the difference per wavelength bin as: |L_λ^b - L_λ^m| / [½ (L_λ^b + L_λ^m)], where L_λ^b is the luminosity in wavelength bins from the biweight “averaged" stack and L_λ^m is the luminosity in the same wavelength bins from the median “averaged" stack (see the METHODS section (<ref>) for the details on the stacking processes). For Gaussian distributions, the biweight location and median differ by ≪ 1% and the standard deviation, σ, and the biweight scale, σ_b, by < 1%. Selected features of the stack (Figure <ref>) are discussed and interpreted, quantitatively and qualitatively, in subsequent subsections.
§.§ Caveats
Before proceeding further, it is necessary to acknowledge several caveats that can affect the analyses and interpretations. Contamination of the sample of HPSC-1 LAEs by misclassified emission, in particular that of , is quite low at an estimated value of ∼2.5-3.0% <cit.>. However, since it is not zero, it may slightly dilute the signal of the underlying continuum. Another 0.5-2.0% of contamination could come from other individually detected lines such as , , or , some of which are associated with possible AGN. And though we make efforts to exclude AGN through the identification of broadline (> 1200 km s^-1) pairs (e.g., + , + , + , etc), emission line profile fitting, catalog matching, and visual inspection <cit.>, some AGN (particularly single narrow-line Type-II AGN and some broadline objects where a second line is not identified by the pipeline) may remain in the LAE sample <cit.>. Some fraction of the detections, particularly at lower emission line SNR, may also be false detections of noise. While work is on-going to better quantify the specific rates of these false detections, for the LAEs of HPSC-1, the analysis described in Section 6.6 of <cit.> positively confirms 91% of the detections in a sub-sample of LAEs with repeat observations and thus sets an upper limit of 9% on the total fraction of contaminants. Unlike <cit.>, the spectra in the stack have not been selected to exclude LAEs with nearby neighbors, as that information is absent from this data release. Similarly, the spectra have not been deblended to remove the contribution of flux by neighboring objects <cit.>. While the removal of the sky subtraction residual should largely handle the average contribution of faint line-of-sight interlopers, excess flux from these sky-adjacent neighbors can still be present in the individual galaxy spectra and find its way into the stacks. While the HETDEX catalog has no galaxy preselection, its emission line detections are flux limited <cit.>, and the HPSC-1 catalog excludes detections with emission line SNR < 5.5 <cit.>.
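The two simple diagnostics above can be written as one-line helpers (a sketch; arrays are assumed to share the same rest-frame wavelength grid):

```python
import numpy as np

def continuum_snr(wave, flux, err, lo=1250.0, hi=1650.0):
    """Mean flux divided by the mean error over the quoted rest-frame window."""
    sel = (wave >= lo) & (wave <= hi)
    return np.mean(flux[sel]) / np.mean(err[sel])

def fractional_difference(L_b, L_m):
    """Per-bin difference between the biweight (L_b) and median (L_m) stacks."""
    return np.abs(L_b - L_m) / (0.5 * (L_b + L_m))
```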
There is thus a bias against detecting galaxies with observationally fainter , which necessarily increases with increasing redshift. However, the use of the median-like weighted biweight mechanics <cit.> in the stack does mitigate the influence of bright outliers, which helps maintain the representative nature of the stack. As previously stated and further discussed in <ref>, the spectra are aligned for stacking based on their Gaussian fitted line. This can lead to a ∼1Å smearing of spectral features due to the individual redshift offsets from the galaxies' systemic redshifts. Lastly, as noted in <ref>, the wavelength regions farther from have fewer contributing spectra and thus increased uncertainty in the stack compared to wavelengths closer to the line. Furthermore, we have to assume that evolution in the galaxies over those redshift ranges of each stack is minor, in order to consider the stacked spectrum, as a whole, as representative of the underlying sample.
§.§ Representative Stack
Table <ref> presents a summary of several properties (see the table note for descriptions) of the full dataset and full stack along with three additional subselections that divide the sample by redshift. The redshift distribution, with mean z=2.6, for this work is shown in Figure <ref>. Additional binnings based on other properties will be presented in future works. With the exception of the average r magnitude, which is included from various archival photometric imaging catalogs that overlap the HETDEX observations <cit.>, the properties are measured from the individual spectra or the stacked spectra. A comparison of the biweight location (x̃_b) values of g, Luminosity, and equivalent width to the values derived from the stacked spectra demonstrates that they agree very well, suggesting the stacks are, indeed, similar to an average representation of the samples. We acknowledge that both the biweights of g and W_λ() are more limited comparisons as HETDEX cannot measure continuum magnitudes fainter than ∼25 in g, hence W_λ() is a lower limit <cit.>.

Table: Summary of Ensemble Properties by Redshift Range
Redshift | N | ⟨g⟩ | Stack g | ⟨r⟩ | ⟨L_Lyα⟩ | Stack L_Lyα | ⟨W_λ()⟩ | Stack W_λ()
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9)
1.9 < z < 3.5 | 52K | x̃_b=25.9, σ_b=1.30 | 25.6 | x̃_b=25.5, σ_b=1.16 | x̃_b=42.92, σ_b=0.212 | 42.83 | x̃_b=93.1, σ_b=84.0 | 114.0
2.0 < z < 2.5 | 21K | x̃_b=25.8, σ_b=1.35 | 25.3 | x̃_b=25.3, σ_b=1.21 | x̃_b=42.86, σ_b=0.211 | 42.77 | x̃_b=89.9, σ_b=86.5 | 108.8
2.5 < z < 3.0 | 18K | x̃_b=25.8, σ_b=1.27 | 25.6 | x̃_b=25.4, σ_b=1.12 | x̃_b=42.91, σ_b=0.187 | 42.84 | x̃_b=80.9, σ_b=72.6 | 101.7
3.0 < z < 3.5 | 11K | x̃_b=26.2, σ_b=1.18 | 26.2 | x̃_b=26.0, σ_b=0.83 | x̃_b=43.03, σ_b=0.180 | 42.96 | x̃_b=107, σ_b=81.4 | 130.0

^(1) Redshift range of (sub)selection. ^(2) Number of galaxies in the (sub)selection to the nearest 1000. ^(3) The biweight location (x̃_b) and biweight scale (σ_b) of the SDSS-g magnitude of individual LAE detections as computed from the HETDEX spectra using the speclite Python package (; ). Though always computed, values ≳25 are fainter than the HETDEX detection limit. ^(4) The SDSS-g magnitude computed from the observed frame stack of HETDEX spectra, again using speclite. Given the high SNR of the stack, the formal error on the fit magnitude is < 0.01. ^(5) The biweight location and biweight scale of the r magnitudes of individual LAE detections as computed from photometric imaging with r coverage. Depth is catalog dependent <cit.>, with 75% of HPSC-1 ≳26.
Non-detections in the imaging are included as their respective limits. ^(6)The log_10 of the biweight location and biweight scale of the luminosity [erg s^-1] of the   line of the individual LAE detections. ^(7)The log_10 of the luminosity [in erg s^-1] of the Gaussian fitted line of the stack. Given the high SNR, uncertainties on the fits are ≪1% ^(8)The biweight location and biweight scale of the restframe equivalent width [] of in the individual LAE detections. Since the continuum is often undetected, this is a lower limit. ^(9)The restframe equivalent width [] of in the stack. Given the high SNR, the errors on the fit are < 0.05. §.§ Lyman Alpha Velocity Offset Since we align the individual spectra based on their fitted, restframe emission line centers, our figures show the stacked line centered at our adopted wavelength of 1215.67 Å. However, due to the complexities of radiative transfer and the suppression of the flux near and just blueward of restframe <cit.>, we are often fitting to the red peak of , and there is an offset with respect to each galaxy's systemic redshift. In the stack, this manifests as a slight offset between the expected and observed positions of the other emission and absorption features. Additionally, as the offset from systemic is variable by galaxy, there can also be a smearing/broadening of these other (stacked) spectral lines. With only detected in the vast majority of the HETDEX LAE spectra, and generally at SNR ≲6 for the HPSC-1, our ability to correct for the velocity offset of individual galaxies prior to stacking is limited <cit.>. However, as we estimate the typical velocity offset to be at most a few hundred km s^-1 (see below), the impact to this work is small and we do not refine it further here. To estimate the average velocity offset in our ensemble, we repeatedly fit the center lines of several emission and absorption features using a simple Markov Chain Monte Carlo (MCMC) Gaussian fitting code and compute a velocity offset from the assumed fiducial wavelength. These features are selected as they are clear, have high SNR and are not significantly blended with any other lines (or can have the blended portion easily masked). The results are summarized in Table <ref>. The velocity offsets from the features are similar, though there is some obvious scatter likely as a combination of ISM and IGM confusion, outflows, and other kinematics. The overall mean, 235 ± 18 km s^-1, provides a good estimate of the velocity offset for stack and thus for the typical HPSC-1 LAE. Where stated for the remainder of this work, we adopt a rounded value of 250 km s^-1 for the velocity offset from systemic for the stack. Though LAEs can exhibit a sizeable difference in individual velocity offsets, this average is consistent with those of LAEs and LBGs found in <cit.> (∼ 360 ), <cit.> (∼ 240 ), <cit.> (∼ 230 ), <cit.> (∼ 300 ), and <cit.> (∼ 170 ). A more rigorous investigation is presented in <cit.>. These estimates generally agree with and bracket the ∼200 km s^-1 reported for the much smaller, z > 3 sample in <cit.>. This equates to a less than 1Å offset in the adopted Lyman Continuum region, 880-910Å. That work finds no significant impact to the Lyman Continuum estimate measured over 30Å and no apparent change when applying a correction such as that in <cit.>. This small velocity offset is also consistent with the expectation of enhanced Lyman Continuum leakage <cit.>. 
Table: Lyman Alpha Velocity Offsets
Line | Fiducial λ [Å] | Offset [km s^-1]
Lyβ (a) | 1025.72 | 320 ± 58
C3 | 1175.71 | 330 ± 37
 (b) | 1549.48 | 199 ± 27
 (c) | 1640.42 | 125 ± 41
O3] (d) | 1666.15 | 198 ± 28
mean | NA | 235 ± 18
Velocity offsets of various spectral lines from the aligned HPSC-1 stack (Figure <ref>) as measured against the MCMC Gaussian fitted line centers. The last row is a simple, unweighted mean. Scatter is likely due to a combination of ISM and IGM confusion, outflows, and other kinematics. (a) subject to ISM and IGM confusion. (b) doublet; λ as unweighted mean. (c) shows stellar (broad) and nebular (narrow) components. (d) doublet; using the red peak only.

§.§ Lyman Alpha Troughs
On either side of the emission line there are deep, negative “troughs." While absorption is expected near the emission, the depths of the troughs, as shown in Figure <ref>, are enhanced by our reductions. Though scattering of photons near resonance by the H I in and around the LAEs is expected, the majority of the depths of these troughs are a result of the HETDEX sky subtraction and reduction pipeline (<ref>). The sky subtraction assumes that the sky off-source is the same as the sky on-source. Any differences in this assumption could then create a feature in the resulting stacked spectra. As one possible scenario, the version of the pipeline in this data release could over-subtract faint emission from halos around the LAEs that extend ∼10 or more arcseconds. In this case, we could create a self-subtraction of the emission and cause the troughs to go negative. Another possible scenario is that a diffuse UV background exists as part of the general sky background we measure. In this case, the on-source sky could be different from the off-source sky due to absorption around . A more complete discussion of the artificial troughs and real scattering is presented in <cit.>. For the measurements of this work, we simply avoid the troughs by masking them.
§.§ Lyman Alpha Luminosity
The calculations of the luminosity for both individual galaxy spectra and the stack are similar. For an individual galaxy, the identified emission line is fit with a simple Gaussian <cit.> whose area is the integrated flux, noting that the continuum level is a free parameter and allowed to be negative. That flux is then converted to a luminosity using Eq. <ref>, but without the trailing (1+z) term. For the L_λ stacked spectra, since we are already in the rest frame and in luminosity density, we simply fit a Gaussian where the continuum is set by the L_λ redward of and whose area is the integrated line luminosity with the trough regions (<ref>) masked out. This is the observed luminosity of the escaping . There are no additional corrections applied for extinction or attenuation by the restframe dust and ISM. As shown in Table <ref> and Figure <ref>, the individual galaxy log luminosities (i.e., log_10(L_Lyα/[erg s^-1])) in this sample range from 42.26 to 44.45 with a biweight location of 42.92 ± 0.212 and 1.9% of the emission lines greater than 43.50. At the brighter end, it is probable that some of the galaxies host unidentified AGN and some luminosities may be inflated due to measurement uncertainty. The full-sample stacked spectrum has a luminosity of 42.83, about 20% lower than the sample biweight, though well within the uncertainty. The fit to the line of the stack is a more precise “average" of the sample as a whole due to the increased SNR, since the continuum level is a free parameter for both the stack and the individual detections.
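A minimal sketch of this line measurement on the stacked spectrum is shown below. The paper's fits use ELiXer and an MCMC Gaussian fitter; a scipy least-squares fit stands in for them here, and the trough mask edges are illustrative values only, not the masks actually adopted.

```python
# Fit a Gaussian plus a free (possibly negative) continuum to the Lya region of
# the stack with the trough windows masked, then integrate the Gaussian for the
# line luminosity; the ratio to the fitted continuum gives the restframe
# equivalent width used in the next subsection.
import numpy as np
from scipy.optimize import curve_fit

LYA = 1215.67  # A, adopted rest wavelength

def gauss_cont(w, amp, mu, sigma, cont):
    return cont + amp * np.exp(-0.5 * ((w - mu) / sigma) ** 2)

def fit_lya(wave, L_lambda, fit_half=30.0,
            trough=((1195.0, 1210.0), (1220.0, 1230.0))):   # illustrative mask edges
    sel = np.abs(wave - LYA) < fit_half
    for lo, hi in trough:
        sel &= ~((wave > lo) & (wave < hi))
    p0 = [L_lambda[sel].max() - np.median(L_lambda[sel]), LYA, 3.0,
          np.median(L_lambda[sel])]
    (amp, mu, sigma, cont), _ = curve_fit(gauss_cont, wave[sel], L_lambda[sel], p0=p0)
    line_lum = amp * sigma * np.sqrt(2.0 * np.pi)   # integrated line luminosity
    ew_rest = line_lum / cont                       # restframe equivalent width [A]
    return line_lum, ew_rest, mu
```

The same Gaussian-plus-continuum machinery, applied to the other features listed in the velocity-offset table with their fiducial wavelengths, gives the offsets as c(μ_fit − λ_fiducial)/λ_fiducial.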
The stacked continuum is well detected, whereas continuum is rarely detected in individual HETDEX LAEs, and the greatest spread in luminosities in the sample is associated with the lowest signal-to-noise detections, which represent the largest fraction of the sample (Figure <ref>). Table <ref> naively shows a weak trend of increasing luminosity with increasing redshift bin, which could indicate some small evolution with redshift <cit.>. However, the trend is also consistent with the loss of fainter detections to the HETDEX flux limits and cosmological dimming.
§.§ Lyman Alpha Equivalent Widths
The restframe equivalent widths, W_λ(), are not explicitly reported in the HPSC-1 but are shown in Figure <ref>, along with the sample biweight location and the corresponding stack value. Each W_λ() is computed from the integrated flux and a combined estimation of the continuum from the spectrum and associated photometry <cit.> divided by (1+z). For each of the restframe stacked spectra (Table <ref>), W_λ() is simply the ratio of the integrated luminosity to the fitted continuum around the line, again with the troughs masked. As with the luminosities, the stacked spectrum measurement of W_λ() is consistent with the biweight measure from the sample, but the stack may provide a more robust average as the continuum is detected; HETDEX rarely detects the continua of z ≳ 1.9 galaxies. The HPSC-1 equivalent width distribution is similar to that found with MUSE in <cit.>. HPSC-1 reports that 15% of its LAEs have W_λ() >240 and the full sample has a biweight location value of 93.1 ± 84.0 ; <cit.> reports that 16% (in their full sample) of the galaxies have W_λ() >240 and the characteristic W_λ() is 95.5 . Restricting the comparison of the two samples to their LAEs within the overlapping redshift coverage, 2.9 < z < 3.5, the distributions deviate a bit more, but remain similar (see Figure <ref>). The redshift restricted MUSE sample of 591 LAEs (out of 1920 LAEs of the full sample) has 16% with W_λ() above 240 Å vs 11% for the redshift restricted HPSC-1 sample of 13.5K LAEs. The biweight locations of the restricted MUSE W_λ() and HPSC-1 samples are 85.4 Å and 102.7 Å, respectively. The HPSC-1 equivalent widths given in Table <ref> and Figure <ref> might also suggest some weak evolution with redshift similar to findings in <cit.>. However, this is even less clear with the HPSC-1 data than the possible trend with luminosity as the equivalent width measures are less secure due to the lack of detected continua and are also biased toward the detection of brighter at higher redshifts due to cosmological dimming.
§.§ UV Continuum Slope and UV Magnitude
The observed UV continuum slope, β, modeled as f(λ) ∝λ^β, is taken from the full stack (Figure <ref>) and measured from 1250 Å to 1850 Å by fitting a simple least squares optimized power law after masking the features near 1260, 1302, 1335, 1394, 1549, 1640, and 1665 Å. Given the wavelength range, this is similar to β_18 in <cit.>. The best fit β, -2.36 ± 0.09, implies that our “average" LAE has a young, low-metallicity stellar population, a large ionizing photon production efficiency, and very little dust extinction <cit.>. This is in line with that expected for the z ∼ 3 LAE population (see also <ref>). This also suggests an increased likelihood of Lyman Continuum escape and may indicate a recent (5-15 Myr) burst of star formation <cit.>.
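A sketch of the slope measurement follows (not the code used in the paper): mask the listed spectral features, then fit f(λ) ∝ λ^β by linear least squares in log-log space over 1250-1850 Å. The half-width of the masking windows is an assumption for illustration.

```python
import numpy as np

FEATURES = [1260, 1302, 1335, 1394, 1549, 1640, 1665]     # A, masked features

def uv_beta(wave, lum, lo=1250.0, hi=1850.0, half_width=7.0):
    """Fit log10(L_lambda) vs log10(lambda); the slope is the UV beta."""
    keep = (wave >= lo) & (wave <= hi) & (lum > 0)
    for line in FEATURES:
        keep &= np.abs(wave - line) > half_width
    slope, _intercept = np.polyfit(np.log10(wave[keep]), np.log10(lum[keep]), 1)
    return slope        # ~ -2.4 for the full HPSC-1 stack
```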
The absolute UV Magnitude (M_UV or M_1500) of the full stack is computed from the mean and median luminosity density between 1400 and 1600 Å in the restframe, using the standard 3631 Jy AB magnitude zero point scaled to L_ν,0 of 4.3454×10^20 erg s^-1 Hz^-1 at 10 pc. The M_UV from the mean values corresponds to a moderately bright galaxy with -19.67 ± 0.13, while the median values are essentially identical, -19.68 ± 0.13. The errors are statistical and come from the propagation of the standard deviation of the flux density between 1400 and 1600 Å in the restframe.
§.§ P Cygni Profiles
P Cygni line profiles <cit.> are clearly visible in Figure <ref>, for example O6 (1032,1038 Å), (1241 Å), and, to a lesser degree, (1549 Å). These profiles are complex combinations of nebular and stellar absorption and emission, and contain a wealth of information on the stellar population, star formation history, and the metal enrichment of the galaxy, but a proper decomposition and fitting is beyond the intended scope of this work. However, qualitatively speaking, the P Cygni profile is a tell-tale indicator of strong stellar winds, massive stars, and recent star formation <cit.>. While not exhibiting a P Cygni profile, the extremely broad (1640 Å) emission, 940 ± 130 km s^-1, is also strongly suggestive of a very young population <cit.>. This is consistent with our picture of LAEs, and the clarity of the profile in the stack further highlights the SNR gains and additional physics that is not directly accessible in the individual HPSC-1 spectra.
§.§ SED Fitting
To model and gain some understanding of the underlying stellar population, we perform Spectral Energy Distribution (SED) fitting on the stacked spectrum (Figure <ref>). Since we are limited by the wavelength coverage in this spectrum to the restframe UV, we are only probing more recent star formation. The SED fitting is performed with the Python package for Fitting the stellar Continuum of UV Spectra or FiCUS[<https://github.com/asalda/FiCUS>] <cit.>. Four separate runs use the Starburst99 <cit.> and the Binary Population and Spectral Synthesis <cit.> single-burst stellar population models, along with the dust attenuation models from <cit.> and the SMC extinction model by <cit.>. All runs assume the same <cit.> Initial Mass Function (IMF) with a 100 M_⊙ upper limit and fit for ten stellar ages (1, 2, 3, 4, 5, 8, 10, 15, 20, and 40 Myr) and four metallicities (5%, 20%, 40%, and 100% of Z_⊙). We use the IGM corrected version of the full stack spectrum (Figure <ref>) with an applied velocity offset of 250 km s^-1 (<ref>) and fit over 915 - 1915 Å in the rest-frame, assuming a mean redshift of 2.604 for the 50K contributing galaxies. Key results of the 4 runs are summarized in Table <ref> with the fit with the best χ^2 (row 2, SB99 + SMC) shown in Figure <ref>. We are not modelling Lyman Continuum escape with FiCUS, as we are only fitting the continuum redward of the Lyman Limit and are not yet confident using the restricted wavelength range for the z > 3.0 HETDEX LAEs where Lyman Continuum could be observed. While there are some small differences in the results from the four runs, with BPASS favoring slightly older, less enriched stellar populations and the R16 dust model favoring slightly increased reddening, a consistent picture emerges of an ensemble-averaged galaxy with a young (10-15 Myr), metal-poor (0.2-0.3 Z_⊙) stellar population with minimal reddening (E(B-V) ≈ 0.03-0.10).
The observed and intrinsic UV continuum slopes (ranging β = -2.10 to -2.05 and -2.65 to -2.54, respectively) for the four runs are fit over 1250 - 1850 Å as described in <ref> and bracket the -2.36 UV continuum slope measured directly from the HPSC-1 stack. <cit.>, with support from <cit.>, <cit.>, and <cit.>, suggests that SB99 + SMC may best represent the conditions for these LAEs and indeed, though perhaps coincidentally, that combination does produce the lowest χ^2 fit. This model (SB99 + SMC) does favor the youngest, UV light-weighted average stellar age with a significant fraction, just over 75%, of the light coming from stars with ages less than 5 Myr.

Table: Summary of FiCUS SED Fitting
Model | χ^2 | Age (Myr) | < 5 Myr | Z (Z_⊙) | E(B-V) | UVβ | UVβ^Int
SB99 + R16 | 1.80 | 12.06 ± 1.41 | 0.75 | 0.26 ± 0.02 | 0.099 ± 0.002 | -2.05 ± 0.01 | -2.61 ± 0.01
SB99 + SMC | 1.77 | 10.86 ± 1.41 | 0.75 | 0.28 ± 0.02 | 0.037 ± 0.001 | -2.05 ± 0.01 | -2.65 ± 0.01
BPASS + R16 | 1.80 | 15.68 ± 0.88 | 0.35 | 0.21 ± 0.02 | 0.078 ± 0.003 | -2.10 ± 0.01 | -2.54 ± 0.01
BPASS + SMC | 1.78 | 14.79 ± 1.00 | 0.40 | 0.22 ± 0.02 | 0.030 ± 0.001 | -2.07 ± 0.01 | -2.57 ± 0.01
Results of four independent runs of the FiCUS stellar continuum-SED fitting code (<ref>). Errors are statistical only. χ^2 is the goodness of fit to the HPSC-1 stacked spectrum (Figure <ref>). Age (Myr) is the model UV light-weighted average stellar age. < 5 Myr is the fraction of the fitted stellar population younger than 5 Myr. Z is the model UV light-weighted average metallicity as a fraction of Solar metallicity (Z_⊙). UVβ is the UV continuum slope of the observed-flux FiCUS model between 1250 and 1850 Å described in <ref>. UVβ^Int uses the same fitting but applied to the FiCUS intrinsic flux model.

§.§ Relative Lyman Continuum Escape and Comparison to Reference Sample
For this measurement, we use L_ν instead of f_ν as explained in <ref>. We sub-select the 11K HPSC-1 LAEs with z > 3, where the restframe Lyman Continuum region, defined as [880 - 910 Å] for consistency with literature <cit.>, falls within the HETDEX spectral range. The L_900 value used is the weighted biweight location of the 68 wavelength bin luminosities in the residual subtracted (<ref>) and IGM transmission corrected <cit.> L_ν spectrum between 880 and 910 . The reported error is statistical: a standard-error analog defined as the biweight scale of the same data divided by √(n), where n = 68. Since 1500  is not available in the HETDEX spectral window for z > 3 galaxies, the L_1500 value is estimated using two methods. The first estimate is based on a scaling of the weighted biweight flux density between 1268 and 1296  using the slope fit between 1250 and 1525 from our 2.6 < z < 3.5 stack of 24K LAEs. That wider redshift range is selected as the narrowest window that includes sufficient wavelength coverage to the red of to capture the L_1500 region. The L_1500 normalized fitted slope is 5.11 (± 0.561)×10^-4 L_ν/Å, and results in a stack average IGM corrected L_900/L_1500(out) = 0.169 ± 0.0361 (or L_900/L_1500(obs) = 0.069 ± 0.0137, without the IGM correction) as an upper limit, given the aforementioned caveats. The labels, out and obs, adopt the convention in <cit.>. The second L_1500 estimate simply uses the weighted biweight luminosity density of the 113 wavelength bins between 1475 and 1525 Å taken directly from the 2.6 < z < 3.5 stack in the same method as L_900 described earlier. This result differs by ≪ 1%, completely consistent with the first estimate.
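The core of the second estimate can be sketched as follows (illustrative only; the plain astropy biweight replaces the weighted biweight used in the paper, and the input is assumed to be the IGM-corrected, residual-subtracted L_ν stack on its rest-frame grid):

```python
import numpy as np
from astropy.stats import biweight_location, biweight_scale

def l900_over_l1500(wave_rest, L_nu):
    """Biweight-average L_nu over 880-910 A and 1475-1525 A (rest) and take the
    ratio; the quoted uncertainty is a standard-error analog on the LyC window."""
    lyc = (wave_rest >= 880.0) & (wave_rest <= 910.0)
    uv = (wave_rest >= 1475.0) & (wave_rest <= 1525.0)
    L900 = biweight_location(L_nu[lyc], c=6.0)
    L1500 = biweight_location(L_nu[uv], c=6.0)
    err900 = biweight_scale(L_nu[lyc], c=9.0) / np.sqrt(lyc.sum())
    return L900 / L1500, err900 / L1500
```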
Repeating the same computation, but substituting the median and standard deviation for the weighted biweight location and biweight scale, yields an IGM transmission corrected L_900/L_1500(out) = 0.159 ± 0.0455, or L_900/L_1500(obs) = 0.066 ± 0.0172 when not correcting for the IGM. To place this measurement in some context, we compare against the stack of the subset (26 out of 124 galaxies) of the LBG selected, z ∼ 3 galaxies in the Keck Lyman Continuum Spectroscopic Survey (KLCS) <cit.> that are also classical LAEs with (W_λ() > 20Å  <cit.>. The overplotted stacks are presented in Figure <ref>. Both stacks are normalized to their own flux near 1500Å  and interpolated onto the same wavelength grid and smoothed with a 1-pixel (0.44 Å) Gaussian kernel (σ). The HETDEX (HPSC-1) stack has also been shifted to correct for its approximate 250 km s^-1 velocity offset (<ref>). Though the individual spectra contributing to the KLCS stack are of higher SNR with longer exposures, they have similar resolving powers, with R∼800 for HETDEX and for KLCS at λ < 5000 and R∼1400 for KLCS at λ > 5000 <cit.>. Our L_900/L_1500 estimates are defined in a similar way as the <f_900/f_1500>_out measurements of <cit.> and others. As a brief aside, the troughs (<ref>) shown in the HPSC-1 stack are much deeper than in the KLCS stack. The KLCS LAE subsample is comprised of UV bright, LBG selected galaxies with weaker , as compared to the majority of the 50K HPSC-1 LAEs. This difference in selection may probe galaxies with somewhat different halo and internal properties which could contribute to some differences in the troughs. However, as discussed in <cit.> and briefly in <ref>, while the existence of the troughs is physically motivated, their manifestation within the HPSC-1 stack is substantially enhanced by the HETDEX data reduction pipeline. While our relative Lyman Continuum values, L_900/L_1500(out), are ∼1.5-2.0× higher than what is found in <cit.> for the galaxies in their highest (W_λ() > 20) equivalent width bin, based on a comparison to the properties of the HPSC-1 sample, we might expect a larger escape of Lyman Continuum photons. Additionally, the z ≥ 3 for ∼900 Å restframe IGM transmission in the model used in this work <cit.> is about 10% higher than that in <cit.>, making the IGM correction here slightly smaller. As can be seen in Figure <ref>, the equivalent width of the HPSC-1 line is roughly 3× larger than that of the KLCS LAE subsample, with HPSC-1 measuring 114Å  (or 130Å  for the 3.0 < z < 3.5 subsample, see Table <ref>) and the KLCS LAE subsample measuring 36Å using the same MCMC fitting code in <ref>. Measuring the UV continuum slope as in <ref>, we find the KLCS subsample stack with a shallower, but still very blue, β = -1.88 vs the -2.36 of the HPSC-1 stack. The FiCUS analyses for the KLCS subsample stack, with the same configurations as in <ref>, suggest a similarly young stellar population component with similar metallicity but a somewhat larger E(B-V) as compared to the HPSC-1 sample. Using the SB99+SMC configuration, though all 4 runs show the same relative differences, we see the UV light-weighted average stellar age = 9.36 ±1.39 Myr, Z(Z_⊙) = 0.21 ± 0.02, and E(B-V) = 0.052 ± 0.001. Both the KLCS and the HPSC-1 LAEs show consistent ages for their recent star formation as well as comparable metallicities, suggesting similar ionizing photon production. 
However, the stronger emission, lower E(B-V) and steeper UVβ slope in the HPSC-1 sample promotes the expectation of increased leakage of those photons from the HPSC-1 LAEs <cit.>. We emphasize that, while this relative L_900/L_1500 measurement can be used in context with other works, it is only a step in obtaining an estimate of the average, intrinsic escape fraction of ionizing photons from these galaxies. We also caution again (see <ref>) that the HPSC-1 sample is not as strictly controlled for contamination from on-sky neighbors as in <cit.> and that may influence the results. The subtraction of the average “empty" aperture (<ref>) helps compensate for contributions of light from faint interlopers, but there is no exclusion of LAEs with detected, sky-adjacent neighbors as in <cit.> nor is there any de-blending of light from such neighbors as is performed in <cit.>. As such, this result is an upper limit. A more complete and extended study of the Lyman Continuum escape from z > 3 LAEs will be presented in <cit.>. § SUMMARY We have taken the 50K low spectral resolution (R∼800), generally low SNR LAE spectra ( SNR ∼6, continuum SNR ≪1) from the HETDEX Public Source Catalog 1 <cit.>, applied a ∼1% level correction for residual light, and stacked those spectra in the restframe using the weighted biweight method. This procedure increased the spectral SNR by factors of several hundred and revealed a variety of otherwise noise obscured features associated with the LAEs. In stacking these large numbers of spectra, we marginalize over lines of sight, IGM transmission, galaxy orientation, and star formation stochasticity to yield a generally robust description of the “average" or typical member of the set, though at the loss of peculiar features of individual members. The luminosity (<ref>), equivalent width (<ref>), and g magnitude of the stack (and redshift based substacks), are consistent with the corresponding median values of the LAE distribution, supporting the view of the stack as a valid “average" representation (<ref>). The stack shows negative, asymmetric troughs (<ref>) to either side of the emission line. While real physics is behind the existence of the troughs, they are artificially enhanced by the HETDEX data reduction pipeline and are excluded in the analyses of this work. <cit.> will explore the physics of the troughs and the HETDEX pipeline updates that address them. The HETDEX LAE stack is a bit bluer with stronger emission and less dust than the continuum selected LAE stack from the KLCS (<ref> and Figure <ref>), but overall, is remarkably similar. We find that the properties of stacked spectra show our “average" z ∼ 2.6 LAE is very blue (UVβ ∼ -2.4) with a significant light contribution from a young, metal poor stellar population (most of the UV light from stars with ages in 5-15 Myr, Z ∼ 0.2 Z_⊙, with strong P Cygni profiles and weak metal absorption). The steep UVβ, low dust attenuation (E(B-V) < 0.1), strong emission (log L_Lyα ∼ 42.8, W_λ() ∼ 114Å), and substantial L_900/L_1500(out) (≲ 17%) all suggest a high intrinsic escape fraction of ionizing radiation. This supports the idea that the higher redshift analogs of the HETDEX LAEs could be major drivers of Reionization. Forthcoming research will expand and improve on these analyses with larger and more carefully curated LAE samples that will allow for finer binning and better exploration of the ensemble properties and their evolutions. 
HETDEX is led by the University of Texas at Austin McDonald Observatory and Department of Astronomy with participation from the Ludwig-Maximilians-Universität München, Max-Planck-Institut für Extraterrestrische Physik (MPE), Leibniz-Institut für Astrophysik Potsdam (AIP), Texas A&M University, The Pennsylvania State University, Institut für Astrophysik Göttingen, The University of Oxford, Max-Planck-Institut für Astrophysik (MPA), The University of Tokyo, and Missouri University of Science and Technology. In addition to Institutional support, HETDEX is funded by the National Science Foundation (grant AST-0926815), the State of Texas, the US Air Force (AFRL FA9451-04-2-0355), and generous support from private individuals and foundations. Observations were obtained with the Hobby-Eberly Telescope (HET), which is a joint project of the University of Texas at Austin, the Pennsylvania State University, Ludwig-Maximilians-Universität München, and Georg-August-Universität Göttingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. VIRUS is a joint project of the University of Texas at Austin, Leibniz-Institut für Astrophysik Potsdam (AIP), Texas A&M University (TAMU), Max-Planck-Institut für Extraterrestrische Physik (MPE), Ludwig-Maximilians-Universität Muenchen, Pennsylvania State University, Institut fur Astrophysik Göttingen, University of Oxford, and the Max-Planck-Institut für Astrophysik (MPA). In addition to Institutional support, VIRUS was partially funded by the National Science Foundation, the State of Texas, and generous support from private individuals and foundations. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing high performance computing, visualization, and storage resources that have contributed to the research results reported within this paper. URL:http://www.tacc.utexas.edu The Institute for Gravitation and the Cosmos is supported by the Eberly College of Science and the Office of the Senior Vice President for Research at the Pennsylvania State University. KG acknowledges support from NSF-2008793. EG acknowledges support from NSF grant AST-2206222. ASL acknowledges support from Swiss National Science Foundation. SS acknowledge the support for this work from NSF-2219212. SS is supported in part by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. This research benefits from the open-source projects Python <cit.>, astropy <cit.>, numpy <cit.>, photutils <cit.>, and others in the open-source community.
http://arxiv.org/abs/2307.00487v1
20230702061833
On $θ$-Hurewicz and $α$-Hurewicz Topological spaces
[ "Gaurav Kumar", "Sumit Mittal", "Brij K. Tyagi" ]
math.GN
[ "math.GN", "54D20, 54C08, Secondary 54A10, 54D10" ]
On θ-Hurewicz and α-Hurewicz Topological spaces
Gaurav Kumar^1, Sumit Mittal^2, Brij K. Tyagi
Gaurav Kumar, Department of Mathematics, University of Delhi, Delhi – 110007, India. gauravkumar.maths@dsc.du.ac
Sumit Mittal, Department of Mathematics, University of Delhi, Delhi – 110007, India. sumitmittal1105@maths.du.ac.in
Brij K. Tyagi, Atma Ram Sanatan Dharma College, University of Delhi – 110021, India. brijkishore.tyagi@gmail.com
In this paper, we introduce the α-Hurewicz and θ-Hurewicz properties in a topological space X and investigate their relationship with other selective covering properties. We show that for extremally disconnected semi-regular spaces, the properties Hurewicz, semi-Hurewicz, α-Hurewicz, θ-Hurewicz, almost Hurewicz, nearly Hurewicz and mildly Hurewicz are equivalent. We also prove that for an extremally disconnected space X, every finite power of X has the θ-Hurewicz property if and only if X has the selection principle U_fin(θ-Ω, θ-Ω). The preservation of the α-Hurewicz and θ-Hurewicz properties under several types of mappings is also discussed. Also, we show that if X is a mildly Hurewicz subspace of ω^ω, then X is bounded.
[2020]Primary 54D20; 54C08; Secondary 54A10; 54D10
§ INTRODUCTION AND PRELIMINARIES
The classical Hurewicz property has a long history, beginning with the paper <cit.>. A topological space X has the Hurewicz property if for each sequence (𝒜_k : k ∈ℕ) of open covers of X there exists a sequence (ℬ_k: k ∈ℕ) where, for each k, ℬ_k is a finite subset of 𝒜_k such that for each x∈ X, x∈⋃ℬ_k for all but finitely many k. The Hurewicz property is weaker than σ-compactness and stronger than the Lindelöf property.[The author^1 acknowledges the fellowship grant of University Grant Commission, India. The author^2 acknowledges the fellowship grant of Council of Scientific & Industrial Research, India.] Recently, several weak variants of the Hurewicz property have been studied, obtained by applying the interior and the closure operators in the definition of the Hurewicz property. Other variants have also been examined in which the sequences of open covers are replaced by covers consisting of generalized open sets. For the study of the variants of Hurewicz spaces, the reader may see <cit.>. In this paper, we examine the covering properties α-Hurewicz and θ-Hurewicz, which are akin to the classical Hurewicz property. The following generalizations of open sets will be used for the definitions of variations on the Hurewicz property. A subset A of a topological space X is said to be:
* θ-open <cit.> if for every x in A, there exists an open set B in X such that x ∈ B ⊂ Cl(B) ⊂ A.
* α-open <cit.> if A⊂ Int(Cl(Int(A))), or equivalently, if there exists an open set B in X such that B ⊂ A ⊂ Int(Cl(B)). The complement of an α-open set is α-closed. Moreover, a set A is α-closed in X if Cl(Int(Cl(A))) ⊆ A.
* semi-open <cit.> if there exists an open set B in X such that B ⊂ A ⊂ Cl(B), or equivalently, if A ⊂ Cl(Int(A)). The set SO(X) denotes the collection of all semi-open sets. The complement of a semi-open set is semi-closed; sCl(A) denotes the semi-closure of A, that is, the intersection of all semi-closed sets containing A. A⊆ X is semi-closed if and only if sCl(A) = A.
Clearly, we have: clopen ⇒ θ-open ⇒ open ⇒ α-open ⇒ semi-open. A space X is said to be α-compact <cit.> (θ-compact <cit.>) if every α-open (θ-open) cover of X has a finite subcover.
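For quick reference, the defining conditions just listed can be collected in display form together with the implication chain used throughout the paper (this is only a restatement of the definitions above, written in LaTeX notation):

```latex
% Defining conditions for the generalized open sets (A \subseteq X):
\begin{align*}
  A \text{ is } \theta\text{-open}
    &\iff \forall x \in A \;\exists\, B \text{ open}:\;
          x \in B \subseteq \operatorname{Cl}(B) \subseteq A,\\
  A \text{ is } \alpha\text{-open}
    &\iff A \subseteq \operatorname{Int}(\operatorname{Cl}(\operatorname{Int}(A))),\\
  A \text{ is semi-open}
    &\iff A \subseteq \operatorname{Cl}(\operatorname{Int}(A)),
\end{align*}
% which gives the chain
\[
  \text{clopen} \;\Rightarrow\; \theta\text{-open} \;\Rightarrow\; \text{open}
  \;\Rightarrow\; \alpha\text{-open} \;\Rightarrow\; \text{semi-open}.
\]
```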
Recall that a space X is said to be semi-regular <cit.> if for each x∈ X and for each semi-closed set A such that x∉A, there exist disjoint semi-open sets B and C of X such that x ∈ B and A ⊂ C. <cit.> For a space X the following statements are equivalent: (i) X is semi-regular; (ii) For each x ∈ X and A∈ SO(X) such that x ∈ A, there exists a B ∈ SO(X) such that x ∈ B ⊂ sCl(B) ⊂ A. A space X is called semi-Hurewicz <cit.> (resp., mildly Hurewicz <cit.>, θ-Hurewicz, α-Hurewicz) if for each sequence (𝒜_k : k ∈ℕ) of semi-open (resp., clopen, θ-open, α-open) covers of X, there exists a sequence (ℬ_k : k ∈ℕ) such that for each k∈ℕ, ℬ_k is a finite subset of 𝒜_k and for each x∈ X, x∈⋃ℬ_k for all but finitely many k. Evidently, we have the following implications: semi-Hurewicz ⇒ α-Hurewicz ⇒ Hurewicz ⇒ θ-Hurewicz ⇒ mildly Hurewicz. The above properties can also be written in the form of selection principles. Let 𝒜 and ℬ be collections of subsets of a space X. Then a space X satisfies the selection principle U_fin(𝒜, ℬ) if for each sequence (𝒜_k : k ∈ℕ) in 𝒜 there exists a sequence (ℬ_k : k ∈ℕ), where for each k, ℬ_k is a finite subset of 𝒜_k, such that {∪ℬ_k: k = 1,2,3,...} is in ℬ <cit.>. An infinite cover 𝒞 of X is said to be a γ-cover (resp., c-γ-cover, θ-γ-cover, α-γ-cover, s-γ-cover) if each element of 𝒞 is open (resp., clopen, θ-open, α-open, semi-open) and for each x∈ X, the set {U ∈𝒞 : x∉U} is finite. Let Γ, c-Γ, θ-Γ, α-Γ, s-Γ denote the collections of all γ, c-γ, θ-γ, α-γ, s-γ covers of X, respectively, and let 𝒪, 𝒞𝒪, θ-𝒪, α-𝒪, s-𝒪 denote the collections of all open, clopen, θ-open, α-open, semi-open covers of a space X, respectively. Then the Hurewicz, mildly Hurewicz, θ-Hurewicz, α-Hurewicz, semi-Hurewicz properties of X are equivalent to the selection principles U_fin(𝒪, Γ), U_fin(𝒞𝒪, c-Γ), U_fin(θ-𝒪, θ-Γ), U_fin(α-𝒪, α-Γ), U_fin(s-𝒪, s-Γ), respectively. In this paper we study the α-Hurewicz and θ-Hurewicz properties in detail. Throughout the paper, X and (X, τ) denote a topological space, and |X| denotes the cardinality of X. For a subset A of a space X, Int(A) and A or Cl(A) denote the interior and the closure of A, respectively. Further, ω and ω_1 denote the first infinite cardinal and the first uncountable cardinal, respectively.
§ THE Θ-HUREWICZ SPACES AND Α-HUREWICZ SPACES
First, recall that the family of all θ-open (resp., α-open) sets of a space (X, τ) forms a topology on X, denoted by τ_θ <cit.> (resp., τ_α <cit.>). Further, τ_θ⊆τ⊆τ_α. The role of θ-open and α-open sets has been investigated in many papers (see <cit.>). Clearly, a space (X, τ) is θ-Hurewicz (resp., α-Hurewicz) if and only if the space (X, τ_θ) (resp., (X, τ_α)) is Hurewicz. Every countable space X has the α-Hurewicz property. Let X={x_1, x_2, ...., x_n,....} be a countable space. Let (𝒜_k : k ∈ℕ) be a sequence of α-open covers of X. For each k∈ℕ, consider ℬ_k = {A_k,1, A_k,2,......A_k,k}, where for each i∈{1,2,....k}, A_k,i∈𝒜_k is such that x_i∈ A_k,i. Then ℬ_k is a finite subset of 𝒜_k and for each x∈ X, x∈⋃ℬ_k for all but finitely many k. Similarly we can prove that every countable space has the θ-Hurewicz property.
* Every α-compact space is α-Hurewicz, but the converse is not true. The real line ℝ with the cocountable topology is α-Hurewicz, being semi-Hurewicz <cit.>, but it is not α-compact, since every α-compact space is compact.
* Every θ-compact space is θ-Hurewicz, but the converse is not true. Let X be a countably infinite discrete space.
Then the space X has the θ-Hurewicz property but the θ-open cover {{x}: x∈ X} has no finite subcover.
* The real line ℝ is a Hurewicz space but it is not semi-Hurewicz <cit.>.
* The Sorgenfrey line S does not have the α-Hurewicz property because it does not have the Hurewicz property.
Let A be a finite subset of an uncountable set X. Then τ = {ϕ,A, X} is a topology on X. Clearly, the space (X, τ) is Hurewicz. Moreover, sets of the form A ∪{p}, for p ∈ X ∖ A, are α-open in (X, τ). For each k∈ℕ, put 𝒜_k = {A ∪{p}: p∈ X∖ A }. Then the sequence (𝒜_k : k ∈ℕ) witnesses that (X, τ) is not an α-Hurewicz space because the cover 𝒜_k does not have a countable subcover. Let p be a fixed point of an uncountable set X. Then τ_p = {O ⊆ X : p ∈ O } together with the empty set is an uncountable particular point topology on X. It is shown in <cit.> that the space X is not Lindelöf, so X cannot be Hurewicz since every Hurewicz space is Lindelöf. Note that X is the only closed set containing p. Then Cl(A) = X for each A≠∅, A∈τ_p. Hence ϕ and X are the only θ-open sets. Therefore X is a θ-Hurewicz space. A space X is said to be nearly Hurewicz <cit.> (resp., almost Hurewicz <cit.>) if for each sequence (𝒜_k : k∈ℕ) of open covers of X, there exists a sequence (ℬ_k : k∈ℕ), where for each k∈ℕ, ℬ_k is a finite subset of 𝒜_k such that for each x∈ X, x∈∪{Int(Cl(B)) : B∈ℬ_k} (resp., x∈∪{Cl(B) : B∈ℬ_k}) for all but finitely many k. Evidently, the following implications follow from the definitions: Hurewicz ⇒ nearly Hurewicz ⇒ almost Hurewicz. The following theorem describes a relation between almost Hurewicz and θ-Hurewicz spaces. Every almost Hurewicz space is θ-Hurewicz. Let X be an almost Hurewicz space and (𝒜_k : k ∈ℕ) be a sequence of θ-open covers of X. Then for each k ∈ℕ and each x∈ X there is an open set B_x,k such that x∈ B_x,k⊂ Cl(B_x,k)⊂ A_k for some A_k∈𝒜_k. For each k, put ℬ_k = {B_x,k : x∈ X}. Then each ℬ_k is an open cover of X. Since X is almost Hurewicz, for each k∈ℕ there is a finite subset ℬ'_k of ℬ_k such that for each x∈ X, x∈∪{Cl(B') : B' ∈ℬ'_k} for all but finitely many k. For each B'∈ℬ'_k, there is an A_k,B'∈𝒜_k such that Cl(B') ⊂ A_k,B'. Let 𝒜'_k = {A_k,B'∈𝒜_k : B'∈ℬ'_k}. Then the sequence (𝒜'_k : k∈ℕ) witnesses that X is θ-Hurewicz. Next we determine a class of spaces in which the above variants of the Hurewicz property are equivalent. Recall that a space X is called extremally disconnected if the closure of every open set is open. For an extremally disconnected semi-regular space X, the following statements are equivalent:
* X is semi-Hurewicz;
* X is α-Hurewicz;
* X is Hurewicz;
* X is nearly Hurewicz;
* X is almost Hurewicz;
* X is θ-Hurewicz;
* X is mildly Hurewicz.
The implications (1) ⇒ (2) ⇒ (3) ⇒ (4) ⇒ (5) ⇒ (6) ⇒ (7) have already been established. For (7) ⇒ (1), let (𝒜_k : k∈ℕ) be a sequence of semi-open covers of X. Then by Lemma <ref>, for each x ∈ X, we have a B_k,x∈ SO(X) such that x∈ B_k,x⊂ sCl(B_k,x) ⊂ A for some A ∈𝒜_k. For k∈ℕ, put ℬ_k = {B_k,x: x∈ X}. Then (ℬ_k : k∈ℕ) is a sequence of semi-open covers of X. As X is extremally disconnected, by <cit.>, we have B⊂ Int(Cl(B)) for each B∈ SO(X), and Cl(Int(Cl(B))) is clopen in X. Put 𝒞_k = {Cl(Int(Cl(B))) : B∈ℬ_k}. Then (𝒞_k : k∈ℕ) is a sequence of clopen covers of X. As X is mildly Hurewicz, there exists a sequence (𝒞'_k : k∈ℕ), where for each k, 𝒞'_k is a finite subset of 𝒞_k such that for each x∈ X, x∈∪𝒞'_k for all but finitely many k. Observe that for each subset A of X, Int(Cl(A))⊂ sCl(A), and from the extremal disconnectedness of X, sCl(A) = Cl(A) for each A∈ SO(X).
From the above construction, for each C' ∈𝒞'_k we have a A_C'∈𝒜_k such that C'⊂ A_C'. Then for k∈ℕ, let 𝒜'_k = {A_C' : C'∈𝒞'_k}. Hence the sequence (𝒜'_k : k∈ℕ) witnesses that X is semi-Hurewicz. In the following examples, we show that the extremal disconnectedness and semi-regularity are neccessary conditions in Theorem <ref>. Consider the real line ℝ with usual topology. Then ℝ is semi-regular mildly Hurewicz space but it is not an extremally disconnected space . On the other hand, ℝ is not semi-Hurewicz <cit.>. Let X be an uncountable cofinite space, that means an uncountable set X with cofinite topology. Then X is an extremally disconnected mildly Hurewicz space. But X does not have semi-Hurewicz property, since the semi-open cover { X∖{x} : x∈ X} has no countable subcover. In an extremally disconnected space, zero-dimensionality and semi-regularity are equivalent (<cit.>, Theorem 6.4). We have the following corollary: For an extremally disconnected, zero-dimensional space X, the following statements are equivalent: * X is semi-Hurewicz; * X is α-Hurewicz; * X is Hurewicz; * X is nearly Hurewicz; * X is almost Hurewicz * X is θ-Hurewicz; * X is mildly Hurewicz. A space X is called S-paracompact <cit.> if for every open cover of X has a locally finite semi-open refinement. A S-paracompact Hausdorff space X is semi-regular <cit.>. Hence the properties mentioned in Theorem <ref> are also equivalent for an extremally disconnected S-paracompact Hausdorff spaces. It is known that the Stone-C̆ech compactification of a discrete space is extremally disconnected compact Hausdorff space. Thus the class of Stone-C̆ech compactification of discrete spaces contained in the class of extremally disconnected S-paracompact Hausdorff spaces and it is turns out to be subclass of extremally disconnected semi-regular spaces. For a space X, the following statements are equivalent: * X has θ-Hurewicz property; * X satisfies U_fin(θ-Ω, θ-𝒪) 1 ⇒ 2. It follows from the fact that each θ-γ-cover of X is a θ-open cover of X. 2 ⇒ 1. Let (𝒜_k : k ∈ℕ) be a sequence of θ-open covers of X. Let ℕ = Y_1 ∪ Y_2 ∪...∪ Y_m ∪ ... be a partition of ℕ into countably many pairwise disjoint infinite subsets. For each k, let ℬ_k contains all sets of the form A_k_1∪ A_k_2∪...∪ A_k_n, k_1 ≤...≤ k_n, k_i ∈ Y_k, A_k_i∈𝒜_k, i≤ n, n∈ℕ. Then for each k, ℬ_k is a θ-ω-cover of X. Applying U_fin(θ-Ω, θ-𝒪) on the sequence (ℬ_k : k ∈ℕ), there is a sequence (𝒞_k : k ∈ℕ), where for each k, 𝒞_k is a finite subset of ℬ_k such that x∈ X x∈∪𝒞_k for all but finitely many k. Assume that 𝒞_k = {C^1_k, .....C^m_k_k}, then by the above construction, C^i_k = A^k_i_1_k∪.....∪ A^k_i_n_k, C_k^i ∈𝒞_k. Thus for each k, we have a finite subset 𝒜'_k of 𝒜_k such that ∪𝒞_k⊆∪𝒜'_k. Hence X has the θ-Hurewicz property. On the similar lines, we can prove that a space X has the α-Hurewicz property if and only if X satisfies the selection principle U_fin(α-Ω, α-𝒪) If each finite power of space X is θ-Hurewicz, then X satisfy U_fin(θ-Ω, θ-Ω). Let (𝒜_k: k ∈ℕ) be a sequence of open θ-ω-covers of X. For each l ∈ℕ, we put ℬ_k = {A^l: A ∈𝒜_k}. For each l ∈ℕ, applying the θ-Hurewicz property to the sequence (ℬ_k: k∈ℕ) of θ-open covers of X^l, for each k ∈ℕ we have finite subfamilies 𝒞_k of ℬ_k such that x∈ X^l, x∈∪ C_k for all but finitely many k. For k ∈ℕ, let 𝒜'_k = {A∈𝒜_k : A^l∈𝒞_k}. Then the sequence (𝒜'_k : k∈ℕ) witnesses that X satisfies U_fin(θ-Ω, θ-Ω). In a Similar way, we can prove that if each finite power of space X is α-Hurewicz, then X satisfy U_fin(α-Ω, α-Ω). 
<cit.> Let X be an extremally disconnected space. Then for each θ-ω-cover 𝒜 of X^k, k∈ℕ, there exists a θ-ω-cover ℬ of X such that the θ-open cover {B^k: B ∈ℬ} of X^k refines 𝒜. Let X be an extremally disconnected space. If X has a property U_fin(θ-Ω, θ-Ω), then for each n∈ℕ, X^n also has this property. Let (𝒜_k: k ∈ℕ) be a sequence of θ-ω-covers of X^n. Then by Lemma <ref>, there exists a θ-ω-cover ℬ_k of X such that {B^n: B ∈ℬ_k} refines 𝒜_k. Apply the condition U_fin(θ-Ω, θ-Ω) of X on the sequence (ℬ_k : k∈ℕ), then for each k∈ℕ, there exists a finite subset 𝒞_k of ℬ_k such that {∪𝒞_k : k∈ℕ} forms a θ-ω-cover of X. Since {B^n: B ∈ℬ_k} refines 𝒜_k, for each C∈𝒞_k, we have A_C∈𝒜_k such that C^n⊂ A_C. For k∈ℕ, let 𝒜'_k = { A_C∈𝒜_k : C∈𝒞_k}. Thus the sequence (𝒜'_k : k∈ℕ) witnesses that X^n has a property U_fin(θ-Ω, θ-Ω). Thus from Theorem <ref>, Theorem <ref> and Theorem <ref>, we obtained the following corollary. Let X be an extremally disconnected space. Then every finite power of X is θ-Hurewicz if and only if X satisfies U_fin(θ-Ω, θ-Ω). § PRESERVATION IN SUBSPACES AND MAPPINGS In this section, we analyse the properties of α-Hurewicz and θ-Hurewicz spaces. We investigate the behaviour of these properties under subspaces and various type of mappings. In the following example we show that α-Hurewicz is not a hereditarty property. Let x_0 be a fixed point of an uncountable set X. Then the family τ= {A⊂ X : x_0∉ A}∪{A⊂ X : X∖ A is finite} of subsets of X forms a topology on X. It is easy to prove that the space (X, τ) is α-Hurewicz. Consider the subspace Y = X∖{x_0} of (X, τ). Then one point set {x}, x∈ Y is α-open in Y. Then the α-open cover 𝒜 = {{x} : x∈ Y} of Y has no countable subcover. Hence the subspace Y of the space (X, τ) is not α-Hurewicz. Note that, Y is also an open (α-open) subset of (X, τ). Hence the open (α-open) subspace of a α-Hurewicz space need not be α-Hurewicz. Remark: Let X be the space considered in Example <ref> It is also easy to prove that X is θ-Hurewicz space. On the other hand the open subspace Y = X∖{x_0} of X is not θ-Hurewicz. It means that θ-Hurewicz property is also not hereditary. However the α-Hurewicz & θ-Hurewicz properties are preserved under clopen subsets as shown below in Proposition <ref>. A clopen subspace of a α-Hurewicz (θ-Hurewicz) space is α-Hurewicz (θ-Hurewicz). Let Y be a clopen subspace of a α-Hurewicz space X. Let (𝒜_k :k ∈ℕ) be a sequence of α-open covers of Y. Then (ℬ_k :k ∈ℕ) is a sequence of α-open covers of X, where ℬ_k= 𝒜_k∪{X∖ Y} for each k. Since X is α-Hurewicz, there is a sequence (ℬ'_k : k ∈ℕ), where ℬ'_k is a finite subset of ℬ_k such that x∈ X, x∈⋃ℬ'_k for all but finitely many k. We observe that for each y∈ Y, y∈⋃ℬ'_k∖{X∖ Y}. That means that Y is an α-Hurewicz space. Similarly, we can prove for θ-Hurewicz space. Let Y be a subspace of a space X. If Y is θ-Hurewicz, then for each sequence (𝒜_k : k∈ℕ) of covers of Y by θ-open sets of X, there is a sequence (ℬ_k : k∈ℕ), where for each k, ℬ_k is a finite subset of 𝒜_k such that for each y∈ Y, y∈∪ℬ_k for all except finitely many k. Let Y be θ-Hurewicz subspace of a space X. Let (𝒜_k : k∈ℕ) be a sequence of covers of Y by θ-open sets of X. Put ℬ_k = {Y∩ A : A∈𝒜_k}. Then (ℬ_k : k∈ℕ) is a sequence of θ-open covers of Y and Y is θ-Hurewicz, there exists a finite subset 𝒞_k of ℬ_k such that y∈ Y, y ∈∪𝒞_k for all but finitely many k. Let 𝒜'_k = {A∈𝒜_k : A∩ Y ∈𝒞_k}. Then the sequence (𝒜'_k : k∈ℕ) witnesses our requirement. 
In the following example we show that the converse of the above theroem does not hold. Let U= {u_α : α < ω_1}, V={v_i: i∈ω} and W={⟨ u_α, v_i⟩ : α < ω_1, i∈ω}. Let X=W∪ U∪{x'}, x∉W∪ U. Topologize X as follows: for u_α∈ U, α<ω_1 the basic neighborhood takes of the form A_u_α(i) = {u_α}∪{⟨ u_α, v_j⟩ : j≥ i, i∈ω}, the basic neighborhood of x' takes of the form A_x'(α)= {x'}∪⋃{⟨ u_β, v_i⟩: β > α, i∈ω}, α < ω_1 and each point of W is isolated. Consider the subspace Y = {u_α : α< ω_1}∪{x'} of the space X. Observe that, the singleton set {y}, y∈ Y, is θ-open in Y. Thus the family {{y}: y∈ Y} is an uncountable θ-open cover of Y, which has no countable subcover. Hence Y is not θ-Hurewicz. Next, we show that Y for each sequence (𝒜_k : k∈ℕ) of covers of Y by θ-open sets of X, there exists a sequence (ℬ_k : k∈ℕ), where for each k, ℬ_k is a finite subset of 𝒜_k such that for each y∈ Y, y∈∪ℬ_k for all but finitely many k. Let (𝒜_k : k∈ℕ) be a family of θ-open sets of X such that for each k∈ℕ, Y⊆∪𝒜_k. Then for each k∈ℕ, there is an open set B_k and A_k∈𝒜_k such that x'∈ B_k⊂B_k⊂ A_k. From the construction of topology on X for each k, there exists a β_k <ω_1 such that A_x'(β_k)⊆ B_k, A_x'(β_k)⊆B_k. Thus for each k, {u_α: α>β_k}∪{x'}⊆B_k⊂ A_k and Y'_k= ⋃_α≤β_ku_α is countable. Thus ( ⋃_k∈ℕ Y'_k )∩ Y is countable. As similiar to Theorem <ref>, we can find a subset 𝒜'_k of 𝒜_k such that for each y∈( ⋃_k∈ℕ Y'_k )∩ Y, y∈∪𝒜'_k for all but finitely many k. For each k∈ℕ, let 𝒜”_k = 𝒜'_k ∪{A_k}. Then 𝒜”_k is a finite subset of 𝒜_k and for each y∈ Y, y∈∪𝒜”_k for all but finitely many k. The mapping f : X→ Y from a space X to a space Y is said to be : 1. α-continuous <cit.> (α-irresolute <cit.>) if the preimage of each open (α-open) set of Y is α-open in X. 2. α-open (strongly α-open) if the image of each α-open set of X is α-open (open) in Y. 3. θ-continuous (<cit.>, <cit.>) (resp., strongly θ-continuous <cit.>) if for each x ∈ X and each open set B of Y containing f(x) there exists an open set A of X containing x such that f(Cl(A)) ⊂ Cl(B) (resp., f(Cl(A)) ⊂ B). An α-continuous image of an α-Hurewicz space is Hurewicz. Let X be an α-Hurewicz space and f : X → Y be an α-continuous map from X onto a space Y. Let (𝒜_k :k ∈ℕ) be a sequence of open covers of Y. Since f is α-continuous, then for each k∈ℕ {f^-1(A_k) : A_k∈𝒜_k}, is a α-open cover of X. Since X is α-Hurewicz, there is a sequence (ℬ_k : k ∈ℕ) where for each k, ℬ_k is a finite subset of 𝒜_k such that x∈ X, x∈⋃{ f^-1(B) : B∈ℬ_k)} for all but finitely many k. Consider A_B_k = f(B_k), k ∈ℕ. Then the sequence (f(B) : B∈ℬ_k & k∈ℕ) witness that Y is Hurewicz. Similarly, we can prove the following theorem. A α-irresolute image of an α-Hurewicz space is α-Hurewicz. Since each continuous map is α-continuous, we have the following corollary: A continuous image of an α-Hurewicz space is Hurewicz. A strongly θ-continuous image of a θ-Hurewicz space X is Hurewicz. Let X be a θ-Hurewicz space and f: X→ Y be a strongly θ-continuous map from X onto a space Y. Consider the sequence (𝒜_k : k ∈ℕ) of open covers of Y. Then for each k, and for each x∈ X, f(x) ∈ A_k for some A_k∈𝒜_k. From the strongly θ-continuity of f, there is an open set B_x,k containing x such that f(Cl(B_x,k))⊂ A_k. This means that f^-1(A_k) is θ-open. Then for each k∈ℕ, {f^-1 (A) : A ∈𝒜_k} is a θ-open cover of X. As X is θ-Hurewicz, there is a sequence (𝒞_k : k ∈ℕ) where for each k, 𝒞_k is a finite subset of 𝒜_k such that x∈ X, x∈⋃{f^-1(C) : C∈𝒞_k} for all but finitely many k. 
Then we have Y = f(X) = f(⋃_C∈𝒞_kf^-1(C) ) = ⋃𝒞_k. Hence, Y is Hurewicz. A θ-continuous image of a θ-Hurewicz space X is θ-Hurewicz. Let f: X→ Y be a θ-continuous map from a θ-Hurewicz space X onto a space Y. Let (𝒜_k : k ∈ℕ) be a sequence of θ-open covers of Y. Then for each k∈ℕ, {f^-1 (A) : A∈𝒜_k} is a θ-open cover of X, because f is θ-continuous. Since X is θ-Hurewicz, there is a sequence (ℬ_k : k ∈ℕ), where for each k, ℬ_k is a finite subset of 𝒜_k such that x∈ X, x∈⋃_B∈ℬ_kf^-1(B) for all but finitely many k. For each k, and for each B_k∈ℬ_k, we may choose A_k ∈𝒜_k such that B_k = f^-1 (A_k ). Then we have Y = f(X) = f(⋃_B∈ℬ_kf^-1(B) ) = ⋃ℬ_k. Hence, Y is θ-Hurewicz. Since continuity implies θ-continuity, we have the following corollary: A continuous image of a θ-Hurewicz space is θ-Hurewicz. For a space (X, τ), the following statements are equivalent: * (1) (X, τ) is α-Hurewicz; * (2) (X, τ) admits a strongly α-open bijection onto a Hurewicz space (Y, τ'). (1) ⇒ (2): Let (X, τ) be an α-Hurewicz space, then (X, τ_α) is Hurewicz. The identity map I_X:(X, τ)→ (X, τ_α) is a strongly α-open bijection. (2)⇒ (1): Assume that f: (X, τ)→ (Y,τ') is a strongly α-open bijection from a space (X,τ) onto a Hurewicz space (Y, τ'). Let (𝒜_k :k ∈ℕ) be a sequence of α-open covers of (X, τ). Then for each k∈ℕ, {f(A_k) : A_k∈𝒜_k} is an open cover of Y. Since Y is Hurewicz space, there exists a sequence (ℬ_k : k∈ℕ), where for each k, ℬ_k is a finite subset of 𝒜_k such that for each y∈ Y, y∈⋃{f(B) : B∈ℬ_k} for all but finitely many k. Hence for each x∈ X, x∈⋃ℬ_k for all but finitely many k. § CHARACTERIZATIONS OF VARIANTS OF HUREWICZ SPACES Let ω^ω be the set of all functions f: ω→ω. The set ω^ω is equipped with the product topology. Define a relation ≤^* on ω^ω as follows: f≤^* g if f(n) ≤ g(n) for all but finitely many n. Then the relation ≤^* on ω^ω is reflexive and transitive. Let H be a subset of ω^ω. We say H is a bounded if H has an upper bound with respect to ≤^*, otherwise H is unbounded. We say H is dominating if it is cofinal in (ω^ω, ≤^*). Let 𝔟 be the smallest cardinality of an unbounded subset of ω^ω with respect to ≤^*. The cardinal 𝔟 is known as the bounding number. It is not difficult to prove that ω_1 ≤𝔟≤𝔡≤𝔠 and it is known that ω_1 < 𝔟 = 𝔠, ω_1 < 𝔡 = 𝔠 and ω_1 ≤𝔟 < 𝔡 = 𝔠 are all consistent with the axioms of ZFC for more details (see <cit.>). Let X⊂ω^ω. If X is a mildly Hurewicz space, then X is bounded. Let us assume that X be an unbounded subset of ω^ω. For f_x ∈ X and n∈ω, let A_n^f_x = {h∈ X : h(i) ∈{f_x(1), f_x(2),...f_x(n)}, 1≤ i≤ n }. Then A_n ^f_x is a basic open set of X, containing f_x. Moreover, if g∉A_n^f_x, then there exists an i∈ω such that 1≤ i≤ n and g(i)∉{f_x(1), f_x(2),...f_x(n)}. Then we have an open set B_n^g = {h∈ X : h(i)∈ω∖{f_x(1), f_x(2),...f_x(n)}} of X containing g such that B_n^g∩ A_n^f_x = ∅. This implies that g∉Cl_X(A_n^f_x). Hence Cl_X(A_n^f_x)⊆ A_n^f_x which implies that A_n^f_x is closed. For each n∈ω, put 𝒜_n = {A_n ^f_x : f_x∈ X}. Then 𝒜_n is a clopen cover of X and (𝒜_n : n∈ω) is a sequence of clopen covers of X. For n∈ω and for any finite subset ℬ_n of 𝒜_n. Let n_f_x = max {f_x(1), f_x(2), ..... f_x(n) : f_x∈ A_n^f_x}. Define a function f :ω→ω as follows: f(n) = max {n_f_x : A_n^f_x∈ℬ_n } +1. From the assumption of unboundedness of X, there exists f'∈ X such that f'≰^* f, that is f(n) < f'(n) for infinitely many n. Hence for infinitely many n, f'∉A_n^f_x, A_n^f_x∈ℬ_n. Thus f'∉⋃ℬ_n for infinitely many n. 
This means that X does not have mildly Hurewicz property. This completes the proof. The dominating subset D of ω^ω is not mildly Hurewicz. By Theorem <ref>, the following corollaries follows directly: Let X be a θ-Hurewicz subspace of ω^ω, then X is bounded. Let X be a nearly Hurewicz subspace of ω^ω, then X is bounded. Let X be an almost Hurewicz subspace of ω^ω, then X is bounded. Let X be a θ-Hurewicz space. Then every θ-continuous image of X in ω^ω is bounded. Let F: X →ω^ω be a θ-continuous map from a θ-Hurewicz space X to ω^ω. Then F(X) is a θ-Hurewicz space. Hence F(X) is bounded. Every continuous image of a θ-Hurewicz space X in ω^ω is bounded. Every continuous image of a Hurewicz space X in ω^ω is bounded. Every continuous image of a nearly Hurewicz space X in ω^ω is bounded. Every continuous image of an almost Hurewicz space X in ω^ω is bounded. Let X be a θ-Lindelof space. If the cardinality of X is less than 𝔟, then X is θ-Hurewicz Let X be a θ-Lindelof space with |X| < 𝔟. If X is not a θ-Hurewicz space. Then there exists a sequence (𝒜_n : n∈ω) of θ-open covers of X such that for each n and for each finite subset ℬ_n of 𝒜_n, there exists a x∈ X such that x∉∪ℬ_n for infinitely many n. Since X is θ-Lindelof, assume that for each n, 𝒜_n = {A_n^j : j∈ω}. For each x∈ X, define f_x : ω→ω as : f_x(n) = min{j : x∈ A_n^j}. Let D = {f_x : x∈ X}. Then D is an unbounded set. If D is bounded. Then there exists a f∈ω^ω such that f_x≤^* f for all f_x∈ D. For n∈ω, put ℬ_n = {A_n^j : j≤ f(n)}. Then for each x∈ X, x∈∪ℬ_n for all but finitely many n. This leads to be a contradiction to the fact the there is a x∈ X such that x∉∪ℬ_n for infinitely many n. Thus D is an unbounded set. Hence 𝔟≤ |D|. Since |X| <𝔟 and it is mapped surjectively to D. This leads to a contradiction. Hence X must be a θ-Hurewicz space. Let X be a mildly Lindelof space. If the cardinality of X is less than 𝔟, then X is mildly Hurewicz The proof is on similar lines of the proof of Theorem <ref>. Let X be a subset of real line ℝ. If X is not θ-Hurewicz, then |X|≥𝔟. Let X be a subset of real line ℝ. If X is not mildly Hurewicz, then |X|≥𝔟. Remark: In <cit.>, Velicho defined the θ-closure operator which is denoted by Cl_θ(A). For A⊂ X, Cl_θ(A) = {x ∈ X : for each neighbourhood U of x, Cl(U) ∩ A ≠ϕ} and Cl(A)⊆ Cl_θ(A). Many papers have been published on θ-closure operator (see <cit.>). Using the θ-closure operator it is interesting to investigate the following class of spaces. A space X is called θ-almost Hurewicz if for each sequence (𝒜_k : k ∈ℕ) of open covers of X there exists a sequence (ℬ_k : k ∈ℕ), where ℬ_k is a finite subset of 𝒜_k for each k, such that for each x∈ X, x∈⋃{Cl_θ(Cl(B)) : B∈ℬ_k} for all but finitely many k. Observe that every almost Hurewicz spaces is almost θ-Hurewicz. Conflicts of interests: The authors have no relevant financial or non-financial interests to disclose. 99 A Al-zoubi K.Y., S-paracompact spaces, Acta Math. Hungar, 110 (2006), 165–174. BN Aqsa and Khan Moiz ud Din, On nearly Hurewicz spaces, Open Math. 2019; 17:1310–1318. ZZ Bonanzinga M., Cammaroto F. and Koc̆inac Lj.D.R., Star-Hurewicz and related properties, Appl. Gen. Topol., 5(2004), 79-89. CL1Dickman R. F., Porter J. R., θ-Closed subsets of Hausdorff spaces, Pac. J. Appl. Math., 59 (1975) 407-415. T Dorsett C., Semi-regular spaces, Soochow J. Math., 8 (1982), 45–53. WEngelking R., General Topology (revised and completed edition), Sigma Series in Pure Mathematics, vol.6, Heldermann, Berlin, 1989. CL2 Espelie M. 
S., Joseph J.E., Some properties of θ-Closure, Can. J. Math, Vol. XXXIII, 1 (1981), 142-149. CL3 Evans D., Role of θ-closure in the study of H-closed spaces, Thesis, Morgan State University. Q Fomin S.V., Extensions of topological spaces, Doklady Akad. Nauk SSSR, 32 (1941), 114–116. R Fomin S., Extensions of topological spaces, Ann. Math., 44 (1943), 471-480. MNA1 G. Kumar, brij K. Tyagi, On variants of θ-Menger spaces, Communicated. H Hurewicz W., Über die Verallgemeinerung des Borelschen Theorems, Math. Z. 24 (1925), 401-425, XYZ Hurewicz W., Über Folgen stetiger Functionen, Fund. Math. 9, 193-204, 1927. UJankovic D., A note on mappings of extremally disconnected spaces, Acta Math. Hungar, 46 (1985), 83–92. KSDS Koc̆inac Lj.D.R., Sabah A., Khan M.D. and Djamila Seba, Semi-Hurewicz spaces, Hacettepe Journal of Mathematics and Statistics Volume 46 (1) (2017), 53-66. MM Koc̆inac Lj. D.R, On Mildly Hurewicz Spaces, International Mathematical Forum, Vol. 11, 2016, no. 12, 573-582. AZ Koc̆inac Lj.D.R., The Pixley-Roy topology and selection principles, Questions Answers Gen. Topology. 19, 219-225, 2001. KD Kohli J. K. and Das A. K., A class of spaces containing all generalized absolutely closed (almost compact) spaces, Appl. Gen. Topol., 7 (2006) 233-244. LN Levine N., Semi-open sets and semi-continuity in topological spaces, Amer. Math., 70 (1963), 36–41. J Long, P. E. and Herrington, L., The T_θ-topology and faintly continuous functions, Kyungpook Math. J., 22 (1982), 7–14. N Long P.E., Herrington L., Strongly θ-continuous functions, J. Korean Math. Soc., 18 (1981), 21-28 M Maheshwari S.N. and Thakur S.S.: On α-compact spaces, Bull. Inst. Math. Acad. Sinica, 13 (1985), 341-347. O Maheshwari S.N., Thakur S.S., On α-irresolute mappings, Tamkang J. Math. 11 (1980), 209–214. P Mashhour A.S., Abd El-Monsef M.E., El-Deep S.N., On precontinuous and weak precontinuous mappings, Proc. Math. Phys. Soc. Egypt 53 (1982), 47-53. ND Njåstad O., On some classes of nearly open sets, Pacific J. Math., 15 (1965), 961–970. MNA Porter J.R., Woods G., Extensions and Absolutes of Hausdorff Spaces, Springer-Verlag New York Berlin Heidelberg London Paris Tokyo. X Scheepers M., Combinatorics of open covers I: Ramsey theory, Topology Appl. 69 (1996), 31-62. G Song Y.-K. and Li R., The almost Hurewicz spaces, Quest. Ans. Gen. Topology. 31 (2013), 131–136. L Steen L.A., Seebach J.A., Counterexamples in Topology, Springer-Verlag, United States, 1970. B Ve1icko N.V., H-closed topological spaces, Mat. Sbobnik 70 (1966), 98–11 (Amer. Math. Soc. Transl. 78 (1968), l03–118).
http://arxiv.org/abs/2307.02041v1
20230705055510
Multimodal Imbalance-Aware Gradient Modulation for Weakly-supervised Audio-Visual Video Parsing
[ "Jie Fu", "Junyu Gao", "Changsheng Xu" ]
cs.CV
[ "cs.CV" ]
myitemize2[1][] ∙ Journal of Class Files, Vol. 14, No. 8, August 2015 Shell et al.: Bare Demo of IEEEtran.cls for IEEE Journals Multimodal Imbalance-Aware Gradient Modulation for Weakly-supervised Audio-Visual Video Parsing Jie Fu, Junyu Gao, and Changsheng Xu, Fellow, IEEE Jie Fu is with Zhengzhou University, ZhengZhou 450001, China, and also with the State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China. (email: fujie@gs.zzu.edu.cn). Junyu Gao and Changsheng Xu are with the State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing 100190, P. R. China, and with School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China. Changsheng Xu is also with Peng Cheng Laboratory, ShenZhen 518055, China. (e-mail: junyu.gao@nlpr.ia.ac.cn; csxu@nlpr.ia.ac.cn). ====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Weakly-supervised audio-visual video parsing (WS-AVVP) aims to localize the temporal extents of audio, visual and audio-visual event instances as well as identify the corresponding event categories with only video-level category labels for training. Most previous methods pay much attention to refining the supervision for each modality or extracting fruitful cross-modality information for more reliable feature learning. None of them have noticed the imbalanced feature learning between different modalities in the task. In this paper, to balance the feature learning processes of different modalities, a dynamic gradient modulation (DGM) mechanism is explored, where a novel and effective metric function is designed to measure the imbalanced feature learning between audio and visual modalities. Furthermore, principle analysis indicates that the multimodal confusing calculation will hamper the precise measurement of multimodal imbalanced feature learning, which further weakens the effectiveness of our DGM mechanism. To cope with this issue, a modality-separated decision unit (MSDU) is designed for more precise measurement of imbalanced feature learning between audio and visual modalities. Comprehensive experiments are conducted on public benchmarks and the corresponding experimental results demonstrate the effectiveness of our proposed method. Imbalance-aware, Gradient modulation, Weakly-supervised, Audio-visual video parsing. § INTRODUCTION Recently, many different audio-visual video understanding tasks such as audio-visual action recognition <cit.>, audio-visual separation <cit.> and audio-visual event localization <cit.> have been proposed and achieve impressive progresses. 
The above audio-visual parsing models are all learned based on the assumption that audio and visual modalities are temporally aligned and the corresponding fine-grained frame-level annotations of different modalities are also provided for training. However, in real-world scenes, the synchronization between audio and visual modalities is not always satisfied and annotating frame-level labels for massive videos is time-consuming and unfeasible. To mitigate the above issues, weakly-supervised audio-visual video parsing task <cit.> is proposed. In the task, only video-level event category labels are annotated for training the model, which attempts to identify the starting and ending timestamps of each event instance and predict the corresponding event categories in terms of modalities (i.e., audio, visual, or both) during the inference stage. Inspired by the prior knowledge that multiple modalities can provide more effective information from different aspects than only single one, most existing WS-AVVP methods <cit.> are designed in the same pipeline to aggregate multimodal information for more precise audio-visual video parsing: Firstly, the HAN cross-attention <cit.> is utilized to enhance the audio and visual feature representation. Afterwards, the enhanced feature of different modalities are input into the same attention module and classification head to generate the modality-specific attention weights and classification scores, which are aggregated to produce the final video-level classification predictions. Throughout the training process, a uniform learning objective and joint training strategy are utilized to optimize the sub-networks of different modalities. By going in depth into the pipeline mentioned above, an intuitive deficiency can be concluded that the audio and visual modalities are optimized equally and the natural discrepancy between them is overlooked. Concretely, as shown in motivation, it is difficult to judge whether `Basketball bounce' is happening by just listening to the audio stream, but it is obvious enough in visual data. Consequently, during the training process, the modality conveying more salient semantic information will dominate the training process and obtain more optimization attention, and the modality containing relatively confusing information will not be fully optimized <cit.>. Ultimately, variant modalities often trend to convergence at different rates, which will further result in uncoordinated convergence issue <cit.>. To cope with the above issue, a Dynamic Gradient Modulation (DGM) mechanism is explored to balance the feature learning processes of audio and visual modalities. Concretely, a novel metric function is first designed to measure the imbalanced feature learning degree between different modalities, which followed by utilizing the imbalance degree to modulate the backward gradients of the sub-networks for different modalities, so as to drive the AVVP model to pay more optimization attention to the suboptimal modality. So far, a similar imbalance metric function <cit.> based on the predicted scores of the correct categories in different modalities has been proposed. However, it neglects the global prediction distribution information of all categories, which is also beneficial for precise imbalance assessment. Consequently, in our DGM, a more thoughtful imbalance metric function considering both above two information is designed. 
Furthermore, in traditional WS-AVVP pipeline, the cross-attention operation is always utilized to exchange audio and visual information, which will lead to the confusing multimodal information. In this case, it is hard to measure the imbalanced feature learning in different modalities purely, which will further damage the effectiveness of our proposed DGM. In order to address the above issue, according to the principle analysis, we design a modality-separated decision unit (MSDU) structure, which is embeded between the modality-specific feature encoders and cross-attention block of the traditional pipeline for more precise measurement of imbalanced feature learning between different modalities. In MSDU, the calculation of different modalities is separated completely, which is proved to be beneficial for enlarging the performance gains brought by our DGM mechanism. Ultimately, to evaluate the performance of our proposed method, we conduct extensive comparison and ablation studies on several widely utilized audio-visual benchmarks. Experimental results show that our proposed method achieves the state-of-the-art performance. To summarize, our main contributions are three-fold: * We analyze the imbalanced feature learning issue in WS-AVVP task. To this end, a dynamic gradient modulation mechanism is proposed to modulate the gradients of variant sub-networks for different modalities according to their contributions to the learning objective, so as to make the multimodal framework pay more attention to the suboptimal modality. * We observe that the confusing calculation of different modalities will disturb the precise measurement of multimodal imbalanced feature learning, which will further damage the effectiveness of our proposed DGM mechanism, thus we design a Modality-Separated Decision Unit, which can cooperate with the proposed DGM for a significant improvement. * Comprehensive experimental results show that our proposed model outperforms all current state-of-the-arts, which verifies the effectiveness of our proposed DGM mechanism and MSDU structure. § RELATED WORK In this section, we review the most related work to our method including audio-visual learning and understanding as well as weakly-supervised audio-visual video parsing. §.§ Audio-visual learning and understanding As the two most common and fundamental sensory information, visual and auditory data have attracted a large amount of research attention and derived many different audio-visual understanding tasks, such as audio-visual feature representation learning <cit.>, audio-visual action recognition <cit.>, sound source localization <cit.>, audio-visual video captioning <cit.>, audio-visual event localization <cit.> and audio-visual sound separation <cit.>. Most audio-visual learning methods are designed based on the assumption that audio and visual modalities are synchronized and temporal correlated. Concretely, <cit.> attempt to learn audio-visual feature representation jointly by utilizing the temporal alignment information between audio and visual modalities as the self-supervised guidance. In <cit.>, a novel pretext task is proposed to extract correspondent feature for correlated video frame and audio, where the audio-visual feature pairs belonging to the same temporal snippet are gathered and the features from unpaired video snippets are separated. In <cit.>, the unsupervised multimodal clustering information is utilized as the supervision for cross-modality feature correlation learning. 
To enhance the object detection capability, the correlation information between audio and object motion is taken into consideration <cit.>. The methods mentioned above have achieved some progress by utilizing the synchronization between different modalities, which is not always satisfied in realistic scenes. §.§ Weakly-supervised audio-visual video parsing WS-AVVP aims to parse an arbitrary untrimmed video into a group of event instances associated with semantic categories, temporal boundaries and modalities (i.e., audio, visual and audio-visual), where only the video-level labels are provided as the supervision information for training. Tian et al. <cit.> first propose the WS-AVVP setting and design a hybrid attention network in a multimodal multiple instance learning (MMIL) pipeline, where the intra- and cross-modality attention strategies are utilized to capture cross-modality contextual information. Afterwards, Lin et al. <cit.> further explore cross-video and cross-modality complementary information to facilitate WS-AVVP, where both the common and diverse event semantics across videos are exploited to identify audio, visual and audio-visual events. Thereafter, to refine the event labels individually for each modality, Wu et al. <cit.> propose a novel method by swapping audio and visual modalities with other unrelated videos. JoMoLD <cit.> utilizes the cross-modality loss pattern to help remove the noisy event labels for each modality. Although the methods mentioned above have enhanced the cross-modality feature learning and mitigated the issue of modality-specific noisy labels, none of them have noticed the imbalanced multimodal feature learning. §.§ Imbalanced audio-visual learning Compared with the uni-modality learning, multimodal learning can integrate more information from different aspects. However, there always exists a discrepancy between different modalities, which makes the extraction and fusion of information from different modalities become more challenging. Concretely, the information volume and complexity in different modalities are often variant, thus the training difficulty of the networks for variant modalities is also different, which will lead to uncoordinated optimization issue <cit.>: the dominated modality <cit.> conveying salient information will achieve more optimization attention and better performance during the whole training process, which will suppress the optimization progress of the other one. To cope with the above issue, Wang et al. <cit.> propose a metric to measure the overfitting-to-generalization ratio (OGR) and design a novel training scheme to minimize the OGR via an optimal blend of multiple supervised signals. To enhance the feature embedding capability of the suboptimal modality, Du et al. propose to distill reliable knowledge from the well-optimized model to help strengthen the optimization of the other one. Similar to our work, OGM <cit.> proposes to measure the optimization discrepancy between different modalities by calculating the ratio of predicted scores for the correct categories in different modalities, which is then utilized to modulate the gradients of different modality-specific models. However, OGM only takes the prediction scores for the correct event categories into consideration, which is not sufficient for reliable imbalance assessment. Consequently, we further consider the global prediction distribution on all categories. 
Meanwhile, to mitigate the negative effect of multimodal confusing calculation on our proposed DGM mechanism, we design the MSDU, which can cooperate with DGM for more significant improvement. § OUR APPROACH §.§ Problem Formulation In this paper, we follow the standard protocol of WS-AVVP  <cit.>: Formally, given a multimodal training video sequence { a^t, v^t}_t=1^T with T snippets, only the coarse video-level label Y∈{ 0, 1}^C is available during the training process, where a^t and v^t indicate the t-th audio and visual snippets and C denotes the number of event categories. In the testing phase, the trained model should predict the snippet-level semantic category label 𝐲_t={ y_t^a, y_t^v, y_t^av} for audio, visual and audio-visual modalities respectively, where y_t^av=y_t^a× y_t^v. In other words, the audio-visual events happen only when the corresponding audio and visual events belonging to the same semantic category occurr at the same time. Notably, the video-level event category label (multi-hot vector) can be obtained by aggregating the snippet-level event category annotations along the temporal dimension in each video. Due to the lack of fine-grained snippet-level supervision information during the training process, current WS-AVVP models are all designed in the multimodal multiple instance learning (MMIL) pipeline: in the training process, the snippet-level predictions from different modalities in the same video are aggregated to form the video-level prediction, which is supervised by the annotated video-level label. §.§ Optimization Analysis and Our DGM The pipeline of our proposed framework is illustrated in framework: Compared with our proposed framework, the traditional WS-AVVP pipeline (i.e., our proposed framework without the component in the light green background box) contains three main components including audio and visual feature encoders ψ_a/v( ·) for snippet-level video feature learning, cross-attention mechanism cro_att(·) for exchanging multimodal information, as well as multimodal attention Atn_a/v(·) and classification φ_a/v(·) modules for snippet-level video action prediction. During the training process, an arbitrary training video containing audio a and visual v modalities is input into ψ_a/v( ·) to produce the modality-specific snippet-level video feature, which is then fed into the cross-attention function cro_att(·) to extract multimodal cross-attented feature representation. Afterwards, Atn_a/v(·) and φ_a/v(·) are utilized to generate the modality-specific attention weights and classification predictions. Ultimately, the attention weights and classification predictions in different modalities are combined together to produce the video-level action classification prediction, which is supervised by the video-level labels provided by the annotators. To optimize the model, the optimization objective of the traditional pipeline can be formulated as follows: L = -1/N C∑_n=1^N∑_c=1 ^C [Y_n[c]log P_n[c] + (1 - Y_n[c]) log (1 - P_n[c])] where N denotes the number of training samples in a mini-batch and Y[ c ] indicates the c-th element of Y. P_n is the classification prediction for the n-th video, which is obtained via sigmoid function. 
Concretely, video-level classification prediction P_n can be formally calculated as follows: P_n = φ( f^n_av) = α^w [f^n_a, f^n_v] + [α^b_a , α^b_v] = α^w_a· f^n_a + α^b_a + α^w_v· f^n_v + α^b_v = α^w_a·π_a(a_n; θ_a) + α^b_a + α^w_v·π_v(v_n; θ_v) + α^b_v where α^w and α^b denote the weight and bias of modality-specific classifiers, π(·) is the formalization of feature embedding function including the modality-specific feature encoder ψ(·) and cross-attention block cro_att(·), and θ indicates the hyper-parameter of π(·). In the following content, we omit the bias matrix α^b for more brevity. During the optimization process of our model, the gradient descent strategy is utilized to update the parameters of our model. Formally, the optimization processes of different parameters in our model can be formulated as follows (The superscript and subscript a / v are omitted for brevity): α^w α^w + λ∂L/∂φ( f_av)∂φ( f_av)/∂α^w = α^w + λ1/N∑_n=1^N∂L/∂φ( f^n_av)· f^n θ θ + λ∂L/∂φ( f_av)∂φ( f_av)/∂θ = θ + λ1/N∑_n=1^N∂L/∂φ( f^n_av)∂φ( f^n_av)/∂θ where λ is the learning rate of the optimizer. Obviously, the common item in the above two equations is (Refer to Supplementary Material for detailed deduction of grad): ∂L/∂φ( f_av^n)[c] = 1/1 + e^- φ( f_av^n)[c] - Y_n[c] = 1/1 + e^- [α^w_a·π_a(a_n; θ_a) + α^w_v·π_v(v_n; θ_v)][c] - Y_n[c] From the above formulations, we can intuitively draw the conclusion that if the semantic information contained in the audio modality is much more obvious than the visual one, the audio will achieve higher predicted confidence scores. Furthermore, the back-propagation gradient (grad) is more contributed by α^w_a·π_a(a_n; θ_a), which will make the audio achieve more optimization attention. Consequently, the visual modality will have a relatively lower prediction confidence and limited optimization efforts will be paid to it during the training process. Ultimately, although the training process of the whole multimodal model has converged, the modality containing relatively weak semantic information could not be fully optimized. To cope with this issue, we propose a simple but effective dynamic gradient modulation (DGM) strategy (i.e., the component shown in the light green background box of framework) to balance the feature learning of audio and visual modalities. Specifically, inspired by the factor that fully optimized model will predict higher classification scores for those correct categories and the discrepancy between the predicted scores for correct and wrong categories will be large, we assess the relative optimization progress between visual and audio modalities as follows: ω_v-a = ∑_n∑_c s_n^v[c] · Y_n[c] + ∑_ns̅_n^v/∑_n∑_c s_n^a[c] · Y_n[c] + ∑_ns̅_n^a where s_n^v and s_n^a are visual and audio classification confidence scores of the n-th sample, and s̅_n denotes the corresponding discrepancy between the average prediction scores of the correct and wrong event categories. Similarly, ω_a-v is the reciprocal of ω_v-a. If the ω_v-a is larger than 1, the optimizer will pay more attention to the visual modality, which will result in suboptimal audio feature learning. As a result, the balance coefficients of different modalities can be designed as follows: μ^v = { 1 - tanh(γ·ω_v-a), if ω_v-a > 1 1, if ω_v-a≤ 1 . where tanh(·) denotes the activation function and γ is a hyper-parameter managing the modulation degree. μ^a can be obtained in a similar way, but we omit the corresponding description for clarity. 
Afterwards, we utilize the balance coefficients μ to modify the gradients of different sub-networks during the back-propagation process: W W + λ·μ·∂L/∂W where W indicates the parameter to be optimized (i.e., θ and α^w in our model). Moreover, according to <cit.>, the back-propagation gradients in each batch follow a Gaussian distribution and an appropriately large gradient covariance will lead to better generalization ability. However, the gradient covariance modified by our DGM mechanism becomes μ^2·σ^2(∂L/∂W), which is smaller than the original one σ^2 (∂L/∂W) because μ∈ (0, 1]. To make up for this deficiency, we add an extra term to the modified gradient covariance as follows: W W + λ·E(∂L/∂W) + λε, where ε ∼𝒩(0, (μ^2 + 1) ·σ^2(∂L/∂W)) where E(·) and σ^2(·) are the expectation and covariance. §.§ Modality-Separated Imbalance Measurement Obviously, in our proposed DGM mechanism, the core idea is to utilize the discrepancy between the predictions of different modalities to assess the imbalanced feature learning between audio and visual modalities, which is further applied to modulate the gradient of each modality-specific feature encoder during the optimization process. However, almost all existing WS-AVVP models <cit.> are designed in the following pipeline as shown in trad_safe(a): audio a and visual v data are first input into the feature encoders ψ_a/v( ·) to produce the modality-specific feature. Thereafter, the cross-modality attention mechanism cro_att(·) is utilized to exchange the related information in different modalities and produced the cross-attented features f_a and f_v. Afterwards, f_a and f_v are separately fed into the classifier φ(·) and attention module Atn(·) shared by audio and visual modalities to produce the snippet-level classification probabilities (i.e., P_a and P_v) and temporal attention weights (i.e., A_a and A_v) for different modalities. Ultimately, P_a, P_v, A_a and A_v are aggregated together to generate the video-level event classification prediction P, which is supervised by the video-level event category label provided by the annotators. Formally, the above pipeline can be written as follows: f_a, f_v = cro_att( ψ_a(a), ψ_v(v)) P_a, P_v = φ(f_a), φ(f_v), A_a, A_v = Atn(f_a), Atn(f_v) P = A gg_T(A_a * P_a + A_v * P_v) where `*' and `+' denote the broadcast multiplication and element-wise addition, and Agg_T(·) is the aggregation operation along the temporal dimension. In the traditional case, P_a/v and A_a/v are utilized to calculate s^a/v and s̅^a/v for the assessment of imbalanced feature learning between different modalities. By going in depth into the traditional pipeline mentioned above, we can find that due to the cross-attention operation between audio and visual modalities, both f_a and f_v encode the information from different modalities. Consequently, the information of different modalities are confusing, which makes it hard to purely measure the imbalanced feature learning between different modalities by directly utilizing the attention A_a/v and classification predictions P_a/v produced based on the confusing multimodal feature f_a/v. According to the above analysis, we modify the traditional pipeline and propose a modality-separated decision unit (MSDU) for more pure assessment of imbalanced feature learning between different modalities. Concretely, MSDU is embeded between the modality-specific feature encoders ψ_a/v( ·) and cross-attention block cro_att(·). Our proposed pipeline is shown in trad_safe(b). 
Obviously, in our pipeline, apart from our proposed MSDU (i.e., the component shown in yellow background box), the rest is similar to the traditional WS-AVVP pipeline: audio a and visual v data is first input into the modality-specific feature encoders to produce the modality-specific video feature e_a/v, which are then fed into the cross-attention module for more robust feature extraction. Afterwards, the cross-attented multimodal feature f_a and f_v are further taken as the inputs of modality-specific attention blocks Atn_a/v(·) and classification modules φ_a/v(·), which produce snippet-level action attention weights A_a/v and classification scores P_a/v. To achieve more pure measurement of imbalanced multimodal feature learning, in our proposed MSDU, another group of modality-specific attention blocks Atn_a/v^ms(·) and classification modules φ_a/v^ms(·) are designed to predict the snippet-level action attentions A_a/v^ms and classification scores P_a/v^ms based on pure audio and visual modality features e_a/v respectively. Formally, the pipeline of our method can be written as follows: e_a, e_v = ψ_a(a), ψ_v(v) P_a^ms, P_v^ms = φ_a^ms(e_a), φ_v^ms(e_v), A_a^ms, A_v^ms = Atn_a^ms(e_a), Atn_v^ms(e_v) f_a, f_v = cro_att( e_a, e_v) P_a, P_v = φ_a(f_a), φ_v(f_v), A_a, A_v = Atn_a(f_a), Atn_v(f_v) P = A gg_T(A_a * P_a + A_v * P_v) In our pipeline, P_a/v^ms and A_a/v^ms are utilized to calculate s^a/v and s̅^a/v in mu. P_a/v and A_a/v are utilized as the final predcitions of our proposed model. In this manner, P_a/v^ms and A_a/v^ms are both based on single-modality data, which will lead to purer measurement of multimodal feature learning. § EXPERIMENTS §.§ Experimental Settings Dataset. Following the standard protocol, we mainly evaluate the performance of our proposed method for WS-AVVP task on LLP dataset <cit.>. LLP contains 11849 YouTube video clips spanning over 25 semantic categories, where each video clip is 10-second long. To perform training, validation and testing, we take the common operation to split the dataset into a training set with 10000 videos, a validation set with 649 videos and a testing set with 1200 videos. To comprehensively investigate the effectiveness of our proposed DGM strategy, we also conduct experiments on CREMA-D <cit.> and AVE <cit.> datasets: CREMA-D is an audio-visual benchmark for speech emotion recognition. In this dataset, 7442 video clips from 91 actors are collected. This dataset is divided into the training and validation subsets, which contain 6698 and 744 videos separately. AVE is an audio-visual benchmark for audio-visual event localization learning. In this benchmark, there are 4143 10-second videos from 28 event categories. In our experiments, the split of this dataset follows <cit.>. Evaluation Metrics. To quantitatively investigate the effectiveness of our proposed method, we evaluate the performance of our model on all kinds of different events (audio, visual and audio-visual) under both segment-level and event-level metrics. The segment-level metric is utilized to evaluate the snippet-level prediction performance. In addition, to measure the event-level F-score results, we concatenate the positive consecutive snippets with the same event category to produce the event-level prediction, where 0.5 is selected as the mIoU threshold to determine the positive snippets. Afterwards, the event-level F-score for each event prediction is calculated. 
To comprehensively assess the overall performance of our proposed model, the aggregated results are also measured: Type@AV denotes the average value of audio, visual and audio-visual event localization results. Instead of just averaging audio, visual and audio-visual metrics, Event@AV measures the results considering all audio and visual events. For more brevity, in all experimental tables, `A', `V' and `AV' represent the audio, visual and audio-visual events respectively. `Type' and `Event' denotes the Type@AV and Event@AV metrics. On AVE and CREMA-D, we utilize the same evaluation metrics as <cit.> and <cit.> respectively. Implementation Details. As the standard protocol, given an arbitrary 10-second long video, we first uniformly divide it into a group of 10 non-overlapping snippets, where each snippet contains 8 frames. To extract the visual input, the pre-trained ResNet152 <cit.> and R(2+1)D <cit.> networks are utilized to produce the snippet-level appearance and motion feature respectively, which are concatenated at the channel dimension to form the input visual feature. In terms of the audio input, a pre-trained VGGish <cit.> model is adopted to extract the audio feature. In all experiments, we utilize JoMoLD <cit.> as the baseline and the training batch size is set to 128. Each model is trained for 25 epochs by using the Adam optimizer. D and γ are set to 512 and 0.1 respectively. The initial learning rate for our model is 5e-4 and drops by a factor of 0.25 for every 6 epochs. In the experimens on AVE and CREMA-D, our reproduced PSP <cit.> and OGM <cit.> methods are utilized as the baselines separately. §.§ Comparison with State-of-the-art Methods To comprehensively validate the effectiveness of our model, we compare it with different state-of-the-art methods, including the weakly-supervised sound event detection model TALNet <cit.>, weakly-supervised video action detection models CMCS <cit.> and STPN <cit.>, modified audio-visual event localization algorithms AVE <cit.> and AVSDN <cit.>, as well as the recent weakly-supervised audio-visual video parsing methods HAN <cit.>, MA <cit.>, CVCM <cit.>, MM-Pyramid <cit.> and JoMoLD <cit.>. For fair comparison, all WS-AVVP models are trained by using the LLP training set with the same videos and features. Detailed experimental results of our proposed model and other state-of-the-art methods on the LLP testing subset are reported in sota. Intuitively, our proposed model performs favorably against all other compared methods and achieves the best performance on all audio-visual video parsing sub-tasks under both the segment-level and event-level evaluation metrics. Concretely, compared with the most recent JoMoLD model, our proposed model achieves 1.48 average performance gains for Audio, Visual, Audio-visual, Type@AV and Event@AV tasks under the segment-level evaluation metric. Meanwhile, the average performance improvement for different sub-tasks under the event-level metric is 1.44. The above results significantly demonstrate the effectiveness of our proposed method. Additionally, to comprehensively investigate the effectiveness of our proposed method, more experiments are conducted on different audio-visual benchmarks including CREMA-D and AVE. The corresponding experimental results are summarized in discre_CRENAD and discre_AVE. Consistent performance improvement verifies the effectiveness and good generalization of our proposed method. §.§ Ablation Studies Effectiveness of each component. 
In this part, to further investigate the effectiveness of each component in our method, we conduct comprehensive ablation studies on LLP dataset. The corresponding experimental results are reported in abl_module. From the results, we can observe that our proposed DGM can consistently improve the audio-visual video parsing performance of different network structures (i.e., JoMoLD and our proposed MSDU structure). Concretely, when we insert our proposed DGM mechanism into JoMoLD, the average performance improvements for Audio, Visual, Type@AV and Event@AV sub-tasks are 0.525 and 0.7 under the Segment-level and Event-level evaluation metrics. However, the performance for audio-visual sub-task drops slightly, which can be attributed to the factor that the confusing multimodal feature will lead to unreliable imbalance measurement. Compared with `JoMoLD+DGM', MSDU can bring additional 1.14 and 0.94 average performance improvements for different sub-tasks under the Segment-level and Event-level metrics respectively. Obviously, our proposed DGM mechanism can bring more performance gains for `JoMoLD+MSDU' than JoMoLD. We attribute the experimental phenomenon to the reason that the calculations of different modalities are confused in JoMoLD, which is not beneficial for pure assessment of imbalanced feature learning between different modalities. However, in our designed MSDU block, the calculation of different modalities is separated and pure, which can boost the effectiveness of DGM. To sum up, we can draw the following conclusions from the above experimental results: (1) Our proposed DGM mechanism can improve the audio-visual video parsing performance by balancing the feature learning between different modalities; (2) The MSDU structure can effectively separate the calculation of different modalities, thus further promote the effectiveness of our proposed DGM mechanism. (3) Our proposed DGM mechanism can be adapted to different backbone structures, achieving consistent performance improvement in different WS-AVVP sub-tasks. Optimization imbalance between different modalities. To reliably measure the imbalanced feature learning between audio and visual modalities, we investigate three different methods: (1) For each training video, we first calculate the sum of predicted scores for the correct categories in audio and visual modalities separately. Afterwards, the ratio of the summed predicted scores is calculated as the imbalance degree between two modalities. (2) We calculate the discrepancies between the average predicted scores for the correct and false semantic categories in different modalities firstly. Thereafter, the ratio of the discrepancies in different modalities is utilized to measure the imbalanced feature learning between audio and visual modalities. (3) The above two methods are combined together to assess the imbalanced optimization between audio and visual feature encoders. Detailed experimental results are listed in abl_mea. `score', `discrepancy' and `fusion' denote the three different methods mentioned above. From the experimental results, we can conclude that all three methods can measure the optimization imbalance between audio and visual feature encoders effectively. When we combine the two different information utilized in the first and second strategies, our DGM mechanism can achieve the most performance gains (i.e., `JoMoLD+DGM+MSDU' in abl_module). 
We attribute this phenomenon to the reason that `score' strategy only takes the predictions for the correct categories into consideration, which neglects the global distribution of the classification predictions. Similarly, `discrepancy' method only considers the discrepancy between the predictions for correct and false event categories while ignores the original prediction scores for the correct categories. When we combine the above two strategies, more exhaustive information will provide more reliable assessment of the optimization imbalance between different feature encoders, which will better balance the multimodal feature learning and improve the audio-visual video paring performance. Effectiveness of γ in Eq.(7) of our main paper. To analyze the effect of γ in Eq.(7) on our proposed DGM mechanism, we conduct comprehensive ablations and the corresponding results are reported in gamma. From the experimental results, we can conclude that when γ is equal to 0.1, our model achieves the best audio-visual video parsing performance. Additionally, when γ increases from 0.2 to 0.9, our model performance decreases slightly. However, compared with `JoMoLD' model in Table (4) of our main paper, our proposed DGM mechanism can always improve the performance no matter what γ is, which verifies that our proposed DGM can robustly balance the optimization processes of different modalities and further improve the performance of our proposed WS-AVVP model. Balanced optimization between different modalities. To intuitively verify that our proposed DGM mechanism can balance the optimization between audio and visual modalities, we analyze the training losses of different modalities before and after the gradients are modulated by our proposed DGM mechanism. As illustrated in loss, x-coordinate and y-coordinate denote the training epoch and the corresponding loss respectively. Intuitively, we can observe that there is a large gap between the losses of audio and visual modalities before the gradients of our designed WS-AVVP model are modulated. Meanwhile, after audio and visual sub-networks are modulated by our proposed DGM, the traning losses of different modalities are balanced, which strongly proves that our proposed DGM mechanism can make the suboptimal modality achieve more optimization attention and further balance the feature learning processes of different modalities. Generalization of our proposed method. To verify the generalization ability of our proposed DGM strategy, we embed it into different WS-AVVP models including HAN <cit.> and JoMoLD <cit.>. The corresponding experimental results are summarized in generalization. Obviously, our proposed method can consistently improve the audio-visual video parsing performance of differnt baseline models, which proves that our method can cooperate with variant model structures for better AVVP performance without being limited by the specific model structure. Qualitative Results. To investigate the effectiveness of our proposed method more intuitively, we compare the qualitative audio-visual parsing predictions produced by the baseline and our proposed complete model (i.e., JoMoLD+MSDU+DGM) in visualization. From the results, we can conclude that our complete method can localize more accurate audio, visual, audio-visual event instances than baseline structure, which verifies that our proposed DGM can effectively improve the WS-AVVP performance. 
In addition, we observe that, compared with the baseline model, our proposed DGM mechanism helps detect more precise event instances in the relatively weak modality. For example, as shown in subfigure (c) of visualization, it is hard to judge whether `Frying Food' is happening by listening to the audio alone, but it is obvious in the visual modality. Consequently, the baseline method can only detect accurate event instances in the visual modality but not in the audio modality, whereas our proposed method precisely localizes the events in both. We therefore conclude that DGM improves the video parsing performance in relatively weak modalities, which further supports that our proposed DGM balances the optimization processes between different modalities and makes the weak modalities receive more optimization attention.

§ CONCLUSION
In this paper, we first analyze the imbalanced feature learning between different modalities in the WS-AVVP task. To mitigate this issue, a dynamic gradient modulation strategy is designed to modulate the gradients of the feature encoders of the different modalities, so that the model pays more optimization attention to the suboptimal branch. Meanwhile, to address the negative effect of the mixed multimodal features on our proposed DGM, we design a modality-separated decision unit for more precise measurement of the imbalanced feature learning between the audio and visual modalities. Comprehensive experiments verify the effectiveness of our proposed method.
http://arxiv.org/abs/2307.03138v1
20230706170705
Hierarchical generalization of dual unitarity
[ "Xie-Hang Yu", "Zhiyuan Wang", "Pavel Kos" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech", "hep-th", "nlin.CD" ]
http://arxiv.org/abs/2307.02616v1
20230705194147
Federated Epidemic Surveillance
[ "Ruiqi Lyu", "Bryan Wilder", "Roni Rosenfeld" ]
stat.AP
[ "stat.AP", "cs.AI", "cs.CY", "stat.ME" ]
The surveillance of a pandemic is a challenging task, especially when crucial data is distributed and stakeholders cannot or are unwilling to share. To overcome this obstacle, federated methodologies should be developed to incorporate less sensitive evidence that entities are willing to provide. This study explores the feasibility of pushing hypothesis tests behind each custodian's firewall and then using meta-analysis to combine the results, and investigates how best to reconstruct the hypothesis test and optimize the inference. We propose a hypothesis testing framework to identify a surge in the indicators and conduct power analyses and experiments on real and semi-synthetic data to showcase the properties of our proposed hypothesis test and suggest suitable methods for combining p-values. Our findings highlight the potential of using p-value combination as a federated methodology for pandemic surveillance and provide valuable insights into integrating available data sources.

§ INTRODUCTION
The prompt detection of outbreaks is critical for public health authorities to take timely and effective measures. Providing early warning, whether regarding the emergence of a new pathogen or a renewed wave of an existing epidemic, allows for preparatory action either to reduce transmission or to prepare for increased load on the health system. Nonetheless, real-time surveillance is challenging, particularly in countries such as the United States where relevant data is typically held by many separate entities such as hospitals, laboratories, insurers and local governments. These entities are often unable or unwilling to routinely share data for a variety of reasons, including patient privacy, regulatory compliance, competitiveness and commercial value. Accordingly, creating effective surveillance pipelines currently requires public health authorities to mandate reporting for particular conditions of interest. This process is both cumbersome and reactive: a new reporting pipeline cannot be created until well into a public health emergency. We propose and evaluate the feasibility of an alternative approach that we refer to as federated epidemic surveillance. The core concept is that health information, including even aggregate counts, never leaves the systems of individual data custodians. Rather, each custodian shares only specified statistics of their data, for example, the p-value from a specified hypothesis test. These statistics are then aggregated to detect trends that represent potential new outbreaks. Leveraging inputs from a variety of data custodians provides significantly improved statistical power: trends which are only weakly evident in any individual dataset may be much more apparent if data could be pooled together. To illustrate, consider COVID-19 hospitalizations in Seattle reported by four facilities to the US Department of Health & Human Services (HHS), as shown in Figure <ref>. As the trends observed at different facilities vary substantially, it would be difficult to catch the overall increase by looking at any single facility. However, if the combined data from all facilities are available, a rapid increase in hospitalizations is clearly visible starting in March.
Our goal is to detect outbreaks with comparable statistical power as if the data could be pooled together, but without individual data providers disclosing even their time series of counts. Our analysis reveals that federated surveillance is indeed possible, often attaining performance similar to that possible with fully centralized data. We analyze a simple two-step approach: first, conduct separate hypothesis tests on the occurrence of a "surge" at different sites and subsequently use a meta-analysis framework to combine the resulting p-values into a single hypothesis test for an outbreak. More elaborate approaches (e.g., based on homomorphic computation or other cryptographic techniques) could allow more sophisticated computations under strong privacy guarantees. However, our goal is to demonstrate that high-performance federated surveillance is achievable using even simple, easily implementable methods. Our results indicate that effective epidemic surveillance is possible even in environments with decentralized data, and provide a roadmap towards modernizing surveillance systems in preparation for current and future public health threats. Before delving into the details of the framework, a concise introduction to the notation employed in this article is provided in Appendix Section 1.

§ RESULTS
We explore the potential for simple federated surveillance methods to detect surges in a condition of interest using a variety of real and semi-synthetic data. To start, we more formally introduce our objective. Precisely defining what constitutes a surge or outbreak is difficult. We operationalize a surge as a sufficiently large increase in the rate of new cases over a specified length of time. Formally, we model the time series k_t of interest (e.g., cases or hospitalizations with a particular condition) as following a Poisson process k_t ∼ Poi(λ_t) for some time-varying rate parameter λ_t. At a testing time T, we compare to a baseline period T-l, ..., T-1 and say that a surge occurs when the rate increases by at least a factor of θ during the testing period compared to the baseline period. For simplicity, we model counts in the baseline period as following a Poisson distribution with a constant parameter λ_B: k_Bj ∼ Poi(λ_B), j = T-l, ..., T-1. Similarly, during the testing period, we model k_T ∼ Poi(λ_T) for a new parameter λ_T. We say that a surge occurs when λ_T/λ_B > 1 + θ. We will analyze methods which test this hypothesis using the realized time series k_t, effectively asking whether a rise in counts must be attributed to a rise in the rate of new cases or whether it could be explained by Poisson-distributed noise in observations instead. Importantly, none of our results rely on the assumption that the data actually follows this generative process; indeed, we will evaluate using real epidemiological time series where such assumptions are not satisfied. Rather, our aim is to show that decentralized versions of even this simplified hypothesis test can successfully detect surges. Formally, we test the null hypothesis that the Poisson rate ratio λ_T/λ_B is not larger than 1+θ. We apply the uniformly most powerful (UMP) unbiased test for this hypothesis <cit.>, which has the p-value Pr[r ≥ k_T], where r is a Binomial random variable r ∼ Bin(∑_j = 1^l k_Bj + k_T, (1+θ)/(1+θ+l)). That is, to calculate the p-value of the Binomial test, we sum up the probabilities of observing values at least as extreme as k_T under this Binomial split of the total counts between the baseline and test periods.
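Since the test reduces to a one-sided Binomial tail probability, it is straightforward to compute. The following sketch is our illustration (not the authors' code) and assumes counts are reported per baseline/testing period:

```python
from scipy.stats import binom

def surge_pvalue(k_baseline, k_test, theta=0.3):
    """p-value of the conditional (UMP unbiased) Binomial test described above.

    k_baseline: counts over the l baseline periods; k_test: count in the test
    period.  Conditioned on the total n, the test-period count is
    Binomial(n, (1 + theta) / (1 + theta + l)) under the null boundary, and the
    p-value is Pr[r >= k_test].
    """
    l = len(k_baseline)
    n = sum(k_baseline) + k_test
    rho_T = (1 + theta) / (1 + theta + l)
    return binom.sf(k_test - 1, n, rho_T)   # sf(k - 1) = Pr[r >= k]

# toy example: four baseline weeks around 50 cases, then a week with 90 cases
print(surge_pvalue([48, 52, 50, 47], 90, theta=0.3))
```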
Section <ref> includes more details about the hypothesis test. In the federated setting, each data custodian computes p-values for this hypothesis test using only their own time series. The p-values are then combined using methods from meta-analysis. (See Section <ref> for more discussions.) Considering p_1, …, p_n are p-values obtained from n independent hypothesis tests and the joint null hypothesis for the p-values is H_0: p_i ∼ U[0, 1], i = 1, …, N <cit.>, several commonly used statistics and their corresponding distributions can be computed accordingly. We listed some popular ones in Table <ref>. §.§ Efficacy of Federated Surveillance We start by studying the statistical power and sensitivity of federated surveillance methods compared to centralized data, i.e., whether decentralized hypothesis tests allow comparable accuracy in detecting surges compared to the (unattainable) ideal setting where all data could be pooled for a single test. We assess decentralized methods using both their theoretical expected accuracy on data drawn from our simplified generative model and on two real Covid-19 datasets. Figure <ref> shows the expected statistical power of each meta-analytic method for combining p-values on data drawn from our generative model, compared to the statistical power of a centralized version of the same hypothesis test and to a version which uses only the counts from a single data provider. We fix a threshold of θ = 0.3 for a surge. The x axis varies the true rate of growth under the alternative hypothesis, with higher power to detect surges when they deviate more significantly from the null. To ensure a fair comparison, we calibrate the rejection threshold for each method to match the nominal α = 0.05 rejection rate when the true growth rate is exactly 30% (i.e., precisely satisfying the null). We simulate a total of 200 counts distributed between 2 sites (Figure <ref>) and 8 sites (Figure <ref>). We find that, in this idealized setting, the top-performing federated method (Stouffer's method) almost exactly matches the power of the centralized-data test. Conversely, significant power is lost by using only the p-value from a single site, indicating that sharing information across sites is necessary for good performance. The other meta-analytic methods exhibit lower power; in later sections, we will examine the settings in which different meta-analytic methods for combining the p values lead to better or worse performance. In order to validate the robustness of different methods in the real world, we utilize two real datasets. These datasets provide a more realistic representation of the complexities and challenges encountered in real-world scenarios, allowing us to assess the performance of the methods under more diverse conditions. The data we use are (1) The "COVID-19 Reported Patient Impact and Hospital Capacity by Facility" dataset, obtained through the Delphi Epidata API <cit.>, covers the period from 2020-07-10 to 2023-03-03. This dataset provides facility-level data on a weekly basis and primarily includes the "total adult patients hospitalized" metric for case counts. (2) The "Counts of claims with confirmed COVID-19" dataset provided by Change Healthcare covering the period from 2020-08-02 to 2022-07-30. This dataset provides county-level claim data on a daily basis. More information about the real data is put in Appendix Section 4. 
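Before turning to the real-data analysis, the following sketch implements the textbook (unweighted) forms of some of the combination rules listed in Table <ref> — Fisher, Stouffer and Tippett; it is our illustration rather than the authors' code, and the exact set of methods in the table may differ.

```python
import numpy as np
from scipy.stats import chi2, norm

def combine_pvalues(pvals, method="stouffer"):
    """Unweighted p-value combination rules (standard textbook forms)."""
    p = np.asarray(pvals, dtype=float)
    N = len(p)
    if method == "fisher":       # -2 * sum(log p_i) ~ chi2 with 2N DFs
        return chi2.sf(-2 * np.log(p).sum(), df=2 * N)
    if method == "stouffer":     # sum_i Phi^{-1}(1 - p_i) / sqrt(N) ~ N(0, 1)
        return norm.sf(norm.isf(p).sum() / np.sqrt(N))
    if method == "tippett":      # smallest p-value: Pr[min_i U_i <= p_min]
        return 1.0 - (1.0 - p.min()) ** N
    raise ValueError(f"unknown method: {method}")

print(combine_pvalues([0.04, 0.20, 0.08], method="fisher"))
```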
In our real data analysis, we utilize hospitalization data at the facility level to generate county-level alarms based on the p-values, as well as claim data at the county level to create alarms. The term "alarm" in this context signifies the occurrence of something suspicious or noteworthy. In this particular case, the alarm indicates that the confidence level of rejecting the null hypothesis is below a predetermined threshold α = 0.05. For more details of evaluation, see Section <ref>. Due to variations in reporting frequency and the number of sites, the hospitalization data tends to have larger counts distributed in fewer sites, while the claim data has smaller counts distributed in more sites. Based on our previous discussion, we anticipate that Stouffer's method would be more suitable for the hospitalization data, while Fisher's method would be better suited for the claim data. By applying these naive meta-analysis methods to our two datasets, we obtain the recall-precision curves as Figure <ref>. The results demonstrate that the federated test, with appropriate method selection, can effectively reconstruct centralized information, as indicated by F1 scores greater than 0.9 for both datasets. On the other hand, relying solely on the single largest facility yields suboptimal results. Furthermore, the selection of the optimal combination method aligns with our expectations. §.§ How Different Settings Influence the Surveillance Based on our discussion on theoretical analysis (see Section <ref>) and the experiments on the real-world data above, we have drawn preliminary conclusions regarding our proposed test. Firstly, employing a combined test using a meta-analysis framework generally yields superior performance compared to relying solely on a single entity, even if the entity contributes a significant portion of the counts. Secondly, the optimal choice of method depends on specific features of the data. If the reported counts are less distributed and have relatively large magnitudes, Stouffer's method is preferable. However, if the counts are more evenly distributed, it is better to consider log p-based Fisher's method. To gain a comprehensive understanding of the impact of various properties on the combination process, it is beneficial to selectively modify one factor while keeping others constant. However, it is important to note that the settings encompass multiple dimensions of variation. Firstly, as the number of entities in a region increases, combining the data becomes more challenging due to the accumulation of combination errors. Secondly, the magnitude of counts n influences the selection of p-value combination methods, as evident from the power formula in Equation <ref>. Thirdly, the imbalance in the shares of different sites challenges the robustness of the combination methods. To conduct a rigorous analysis of these factors while effectively controlling for other variables, we employ a semi-synthetic data analysis (see Section <ref>) utilizing real COVID-related claims data reported on a weekly basis. The underlying prevalence of the Poisson-distributed counts is estimated by applying a moving average smoother to the real data. Simulated observations are then generated, assuming the observed data follows a Poisson distribution with the smoothed data as the rate parameter. 
This approach enables us to investigate the systematic errors resulting from noise in both the centralized and federated settings, as well as to compare the performance of different combination methods in the presence of this noise. In Figure <ref>, each plot represents an analysis where one dimension is varied while keeping the others fixed. Figure <ref> focuses on changing the number of sites while maintaining equal shares among them. Figure <ref> incorporates a multiplier to the underlying prevalence, allowing us to examine the influence of the magnitude of the counts. Figure <ref> explores the case where the number of sites is fixed at N = 5 and the degree of imbalance in the shares is varied. The level of imbalance is quantified using the normalized entropy metric S = -(∑_i=1^N s_i log s_i)/log N, where a value of 1 indicates perfectly equal shares. Our findings indicate that federated analysis performs well compared to the centralized setting. However, relying solely on a single facility, even one with a relatively large share of the counts, yields poor results. Fisher's method demonstrates the highest stability when the number of sites varies in an equal-share setting. Therefore, when the data is highly distributed among numerous sites, Fisher's method is the preferred choice for analysis. Additionally, we observe that Fisher's method performs slightly better when the magnitude of the reports is relatively small. Furthermore, our analysis suggests that selecting the largest site as the representative can outperform naive combination methods only when there is one dominant site in the entire region, as indicated by a normalized entropy of less than 0.7 and the largest site having a share of 65%.

§.§ Enhancing Federated Surveillance with Auxiliary Information
In the context of meta-analysis, most discussions and methodologies assume that p-values are uniformly distributed under the null hypothesis. However, this assumption does not hold in many cases (as discussed in Section <ref>). In light of this, we can explicitly write out the equations for the p-values and explore an alternative perspective on meta-analysis: approximating the p-values and then combining these approximations. Directly combining the p-values of the individual entities (Equation <ref>) into the p-value of the summation of their counts (Equation <ref>) without any loss is not possible. Therefore, our goal is to find better approximations and to develop improved methods for combining them. The choice between different meta-analysis methods is analogous to choosing a better approximation. In addition, we can incorporate estimated shares of the different data providers and assign weights to the studies, as weighting is a common approach for integrating evidence <cit.>. Based on our analysis in Section <ref>, we conclude that the optimal weights for Stouffer's method are the square roots of the shares. In other words, it is formulated as Equation <ref>. p = Φ(∑_i = 1^N √(s_i) Φ^-1(p_i)) where p is the combined p-value and the p_i are the site-level p-values. For Fisher's method, we observe that the wFisher method proposed by Yoon et al. <cit.> is the most stable. In this method, weights are assigned to the degrees of freedom (DFs) of the Gamma distribution based on the shares, while the total DFs are kept equal to those of the naive Fisher's method. Specifically, the framework is formulated as Equation <ref>.
p = 1 - F_χ^2_2N(∑_i = 1^N F^-1_Gam(s_i N/2, 1/2)(1-p_i)) The power curves in Appendix Section 3 Figure 2 demonstrate that both Fisher's method and Stouffer's method improve upon the inclusion of weights. Notably, Stouffer's method is able to closely approximate the centralized Binomial power curve even in highly unbalanced scenarios. Moreover, even in the extremely unbalanced setting, the largest site alone fails to achieve a significantly high test power.

In addition to adding weights, we can make a further modification to Stouffer's method if we have an estimate of the total reports of all sites n = ∑_i = 1^N n_i. The modification incorporates a continuity correction term to make the results less conservative, which is particularly useful when the total counts are small. The modified method can be expressed as Equation <ref>. p = Φ(∑_i = 1^N √(s_i) Φ^-1(p_i) + (1-N)/(2√(ρ(1-ρ)n))) By comparing the performance of selecting the largest site with Stouffer's and Fisher's methods before and after the modifications, using our semi-synthetic data analysis framework, we observe that both methods improve after adding weights. Furthermore, the combined methods outperform selecting the largest site, even in extreme settings where the largest of the five sites accounts for 80% of the share. Specifically, the weighted Stouffer's method closely approximates the performance of the centralized setting when the shares of the sites are similar, while wFisher demonstrates stability across different share settings. However, in real-time scenarios, the shares of the different sites may not be readily available, necessitating the training and updating of the weights in our framework. Various approaches can be employed depending on the accessibility of information. One option is to utilize coarse-grained reports with lags. Incorporating such information can improve the accuracy of the generated alarms, provided that a reasonable training scheme is implemented. The impact of the reporting cycle and lag of the auxiliary information on the performance of adding weights in the real datasets is illustrated in Appendix Section 5 Figure 4. Generally speaking, these factors have minimal influence on the improvement achieved through weighted combinations. Another viable option is the integration of auxiliary information, such as bed usage and ICU data, which also has the potential to further improve performance. The details of this integration can be found in Appendix Section 5. The performance of both the naive and modified versions of Stouffer's and Fisher's methods is depicted in Figure <ref>. As anticipated, the weighted versions of both methods show improvement. Additionally, the inclusion of continuity correction proves beneficial for the daily-reported claim data with smaller counts. Conversely, for the weekly-reported hospitalization data with larger counts, continuity correction does not significantly impact the results. In conclusion, in most cases, adding weights to the combination of p-values can be beneficial, provided that the weights are accurately estimated. When the magnitudes of the reports are relatively small, incorporating a continuity correction, if feasible, can help make the test less conservative.

§ DISCUSSION
The surveillance of pandemics presents significant challenges, especially when crucial data is distributed and stakeholders are unwilling or unable to share it.
In this study, we have introduced a federated methodology for pandemic surveillance, which involves conducting hypothesis tests behind each custodian's firewall and subsequently performing a meta-analysis to combine the results. Through power analyses and experiments using real and semi-synthetic data, we have demonstrated the feasibility and effectiveness of our proposed hypothesis testing framework. Our study's results have shown the effectiveness of the suggested hypothesis testing framework in identifying surges in the indicators assumed to follow a Poisson distribution. This framework offers a statistical method for determining whether the rate increase surpasses a user-defined threshold. By utilizing p-values, we have been able to combine results from multiple sites while addressing privacy concerns, as p-values alone do not pose a significant risk of privacy leakage. Furthermore, our findings have indicated that the choice of combination method in meta-analysis depends on the data's characteristics. Stouffer's method is more suitable for less distributed data with a larger magnitude of reports, while Fisher's method performs better in a more distributed and unbalanced setting. Moreover, if we have access to additional information about the entities' shares and even the estimated total counts in a given region, we can further enhance our evidence combination and achieve results that are almost as good as those in a centralized setting. However, our study has several potential extensions that warrant further investigation. Firstly, our focus has been primarily on detecting surges in indicators assumed to follow a Poisson distribution, which may not capture the full spectrum of pandemic surveillance needs. Exploring the detection of other patterns and developing integrated statistical models for multiple indicators would improve the accuracy and timeliness of surveillance efforts. Additionally, addressing the challenges of defining privacy boundaries and establishing consensus on data-sharing protocols remains an ongoing obstacle that requires attention. In conclusion, our findings underscore the potential of utilizing p-value combination as a federated methodology for pandemic surveillance. By leveraging less sensitive evidence from multiple custodians, we can overcome the challenges posed by data distribution and privacy concerns. Our proposed framework enables the rapid detection of surges and the integration of available data sources, empowering health authorities to take prompt and effective measures in response to epidemic outbreaks. § METHODS §.§ Poisson rate ratio test for detecting a surge As the simplest idea, the Poisson distribution is employed to model disease indicators expressed as count data concerning additivity. The Poisson distribution characterizes the probability of the number of occurrences within a fixed interval, assuming a constant mean rate and independent occurrences <cit.>. The Poisson rate parameter λ serves as both the expectation and variance of the distribution of k. In a short time period such as a week or a month, if there is no surge, the counts of a certain indicator follow a Poisson distribution with a fixed rate parameter λ, which can be estimated based on observations from the previous period. However, in the presence of a sudden surge, the distribution changes, and the rate parameter λ increases. By monitoring and analyzing the changes, we can effectively identify the occurrence of a surge. 
To achieve this, we propose conducting a hypothesis test to determine whether the increase in Poisson rates exceeds a user-defined threshold, denoted as θ. This threshold can be tailored to the inherent characteristics of different indicators, allowing for adaptive control of the false discovery rate. Specifically, a surge is defined as a Poisson rate that increases by at least θ during the testing period, with Poisson rate parameter λ_T, compared to the baseline period, with parameter λ_B, as shown in Equation <ref>. H_0: λ_T/λ_B ≤ 1+θ We propose utilizing the UMP unbiased test, a conditional test of the ratio of two Poisson rates first proposed by Przyborowski and Wilenski <cit.>, shown as Equation <ref>. The traditional conditional test is known for being exact while conservative, as the actual significance level is always below the nominal level <cit.>. H'_0: λ_T/(λ_T + lλ_B) ≤ (1+θ)/(1+θ+l) Essentially, the UMP test corresponds to a Binomial test that examines the indicator during the testing period, conditioning on the total counts of both the baseline and testing periods. Using this test, we can determine the p-value through a one-tailed exact test. The formulation of the p-value is given in Equation <ref>. p = Pr(r ≥ k_T | ∑_j = 1^l k_Bj + k_T, (1+θ)/(1+θ+l)) = ∑_r = 0^c (n choose r) (1-ρ)^(n - r) ρ^r where c := ∑_j = 1^l k_Bj, n := ∑_j = 1^l k_Bj + k_T, ρ := l/(1+θ+l).

The power of a hypothesis test is defined as the probability of rejecting the null hypothesis in favor of a specific alternative hypothesis. It measures the test's ability to distinguish between the null hypothesis and a particular alternative hypothesis. In the case of the Binomial test under the null hypothesis, we need to determine the critical value k_cr for k_T at a given type I error rate α. This critical value represents the minimum number of successes in the sample required to reject the null hypothesis in favor of the alternative hypothesis. Mathematically, the critical value is the smallest k_cr that satisfies the condition Pr(r ≥ k_cr | n = ∑_j = 1^l k_Bj + k_T, (1+θ)/(1+θ+l)) ≤ α. Under the alternative hypothesis, characterized by a higher growth rate of the Poisson rate θ' > θ, the power is computed as Pr(r ≥ k_cr | n = ∑_j = 1^l k_Bj + k_T, (1+θ')/(1+θ'+l)). The power quantifies the test's ability to detect a surge when it truly exists. The analytical power formula for the discrete distribution is hard to use directly. As a practical alternative, a Gaussian approximation of the Binomial distribution with continuity correction can be employed. This approximation removes the impact of rounding errors on the power calculation while maintaining an acceptable level of precision. With continuity correction, we can compute the power as Equation <ref>. power = Φ( √(nl)(θ'-θ)/((1+θ+l)√(1+θ')) - Z_α(1+θ'+l)√(1+θ)/((1+θ+l)√(1+θ')) - (1+θ'+l)/(2√(nl(1+θ'))) ) The expression inside Φ(·) comprises three terms, each capturing a specific aspect of the analysis. The first term quantifies the impact of the magnitude of the total counts, while the second term relates to the type I error rate. The third term corresponds to the continuity correction, which can be ignored when the sample size n is sufficiently large. This formula provides an approximation of the power and allows for a more intuitive understanding of the influence of the different parameters.
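A minimal sketch of this power approximation, transcribing the formula above with Z_α interpreted as the upper-α standard normal quantile (our illustration, not the authors' code):

```python
import numpy as np
from scipy.stats import norm

def approx_power(n, l, theta, theta_prime, alpha=0.05):
    """Gaussian approximation (with continuity correction) of the power of the
    conditional Binomial test; n is the total count over the baseline and test
    periods and theta_prime > theta is the true growth rate."""
    z_a = norm.isf(alpha)                                   # upper-alpha quantile
    denom = (1 + theta + l) * np.sqrt(1 + theta_prime)
    term1 = np.sqrt(n * l) * (theta_prime - theta) / denom
    term2 = z_a * (1 + theta_prime + l) * np.sqrt(1 + theta) / denom
    term3 = (1 + theta_prime + l) / (2 * np.sqrt(n * l * (1 + theta_prime)))
    return norm.cdf(term1 - term2 - term3)

# e.g. 200 total counts, 4 baseline periods, a 30% surge threshold and a true
# growth rate of 60%
print(approx_power(200, 4, 0.3, 0.6))
```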
§.§ Overview of meta-analysis methods
Meta-analysis is known for its ability to enhance statistical power by combining signals of moderate significance, effectively controlling false positives, and enabling comparisons and contrasts across tests and time <cit.>. The properties of various p-value combination methods have been extensively studied. For instance, Heard and Rubin-Delanchy <cit.> observed that Tippett's and Fisher's methods are more sensitive to smaller p-values, while Pearson's method is more sensitive to larger p-values. They also suggested that Fisher's and Pearson's methods are more suitable for testing positive-valued data under the alternative hypothesis, with Fisher's method performing better for larger values and Pearson's method for smaller values. Additionally, Stouffer's method is often preferred for testing real-valued data that approximates a Gaussian distribution. Among the various combination methods, Stouffer's and Fisher's methods have received the most attention in the meta-analysis literature. Elston <cit.> noted that Fisher's method favors p-values below 1/e when the number of sites is infinite. Similarly, Stouffer's method favors a threshold point of 1/2, while Pearson's method favors 1 - 1/e. Rice <cit.> suggested that methods like Stouffer's are more appropriate when all tests address the same null hypothesis, as the combined p-value can be interpreted as a "consensus p-value". On the other hand, Fisher's method is particularly useful when testing against broad alternatives; it specifically tests whether at least one component test is significant. It has shown superiority in certain scenarios, such as genome-wide association studies (GWAS), where there may be significant differences in effect sizes between different populations <cit.>. In contrast, Stouffer's and Lancaster's methods tend to lose power when combining more unassociated p-values. This highlights Fisher's method's advantage in handling potential negative correlations between entities, which can arise due to factors such as competition.

The meta-analysis methods discussed in Table <ref> rely on the assumption that the test statistics follow probability distributions that are continuous under their respective null hypotheses. Additionally, the joint null hypothesis for the p-values is stated as H_0: p_i ∼ U[0, 1], i = 1, …, N <cit.>. However, these assumptions do not hold for the hypothesis test proposed in our study, for two reasons. Firstly, for discrete distributions such as the Binomial, the p-values are not uniformly distributed even under the null hypothesis. Secondly, when the null hypothesis is rejected, the distribution of the p-values shifts, necessitating strategies to mitigate the resulting loss of power during meta-analysis. To address these issues, we present a perspective that leverages the explicit form of the p-values in both the centralized and distributed settings, followed by a discussion of how to combine them with minimal loss. In the Binomial test of the distributed setting, the p-value for site i can be expressed as Equation <ref>. p_i = Pr(r ≥ k_Ti | ∑_j = 1^l k_Bij + k_Ti, (1+θ)/(1+θ+l)) = ∑_r = 0^c_i (n_i choose r) (1-ρ)^(n_i - r) ρ^r where c_i := ∑_j = 1^l k_Bij, n_i := ∑_j = 1^l k_Bij + k_Ti. It is evident that directly combining the p-values of different sites, i.e., going from Equation <ref> to Equation <ref> without any loss, is not feasible.
Therefore, we delve deeper into the discussion of combination methods, specifically Stouffer's and Fisher's methods, within the approximate-then-combine framework.

§.§ Stouffer's method
Stouffer's method is applied via the Gaussian approximation of the Binomial parameter, which is based on the central limit theorem. This approach is commonly used when analyzing the Binomial and Poisson distributions, especially when the counts are sufficiently large. To test the success probability ρ using Stouffer's method, the distribution of c/n - ρ is approximated by N(0, ρ(1-ρ)/n). The z-score can then be calculated as z = (c - nρ)/√(nρ(1-ρ)) <cit.>. The p-value of the Binomial exact test is determined by the cumulative distribution function F_Bin(c; n, ρ). Consider the rounding error, or fluctuation term, ϵ_r = 1/2 - {(nρ + z√(nρ(1-ρ))) - ⌊ nρ + z√(nρ(1-ρ)) ⌋}, which takes values in the interval [-1/2, 1/2]. The p-value approximation with error terms can then be expressed as Equation <ref> <cit.>. p = Φ(z) + ((1-2ρ)(1-z^2)/6 + ϵ_r) Φ(z)/√(nρ(1-ρ)) + 𝐎(n^-1) It should be noted that the denominator √(nρ(1-ρ)) in the first-order error term indicates that Stouffer's method may be unreliable for small sample sizes or when the probability is close to 0 or 1 <cit.>.

After applying the Gaussian approximations, the combination of the p-values becomes the next focus. One limitation of the naive meta-analysis methods is the assumption of equal contributions across studies, which may not hold, especially when the studies have significantly different sizes. Determining appropriate weights for different studies poses a challenging task. In the case of Stouffer's method, some studies suggest using the inverse of the standard error or the square root of the sample size as weights <cit.>. In our proposed test, we can combine the approximations without any loss by introducing a certain form of weights. Obtaining the centralized p-value approximation p̃ = Φ((∑_i = 1^N c_i - ρ∑_i = 1^N n_i)/√(ρ(1-ρ)∑_i = 1^N n_i)) from the distributed p-value approximations p̃_i = Φ((c_i - ρ n_i)/√(ρ(1-ρ)n_i)) via the relation Φ^-1(p̃) = ∑_i = 1^N √(s_i) Φ^-1(p̃_i), we find that the most suitable weight for aggregating the p-values is the square root of the share of each entity. Furthermore, improvements can be made by incorporating a continuity correction when an estimate of n = ∑_i = 1^N n_i, the total counts of all entities, is available. Due to the discreteness of the Binomial distribution and the continuity of the normal distribution, the correction is helpful when n is not sufficiently large. One commonly used correction is Yates' correction in the Binomial test, which involves subtracting 1/2 from the absolute difference between the observed count c and the expected count nρ. Considering our case where c < nρ, the approximation of the p-value can be rewritten as p̃ = Φ((c + 1/2 - ρ n)/√(ρ(1-ρ)n)). Similarly, we can derive the combination formula as: Φ^-1(p̃) = ∑_i = 1^N √(s_i) Φ^-1(p̃_i) + (1-N)/(2√(ρ(1-ρ)n)) The additional term (1-N)/(2√(ρ(1-ρ)n)) accounts for the effect of the continuity correction on the combined p-value. The correction becomes more important when the number of entities N is large but the total counts during the baseline and testing periods are relatively small, indicating more dispersed data. In such cases, the correction term becomes more significant.
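A minimal sketch of the resulting weighted Stouffer combination with the optional continuity correction (our illustration; the shares s_i, the total count n and the null parameter ρ = l/(1+θ+l) are assumed to be supplied or estimated):

```python
import numpy as np
from scipy.stats import norm

def weighted_stouffer(pvals, shares, n_total=None, rho=None):
    """Weighted Stouffer combination with square-root-of-share weights,
    Phi^{-1}(p) = sum_i sqrt(s_i) Phi^{-1}(p_i), plus the optional continuity
    correction (1 - N) / (2 sqrt(rho (1 - rho) n)) when an estimate of the
    total count n and the null parameter rho are available."""
    p = np.asarray(pvals, dtype=float)
    s = np.asarray(shares, dtype=float)
    z = np.sqrt(s) @ norm.ppf(p)
    if n_total is not None and rho is not None:
        z += (1 - len(p)) / (2 * np.sqrt(rho * (1 - rho) * n_total))
    return norm.cdf(z)

# three sites with shares 0.5 / 0.3 / 0.2, l = 4 baseline periods, theta = 0.3
print(weighted_stouffer([0.03, 0.15, 0.4], [0.5, 0.3, 0.2],
                        n_total=250, rho=4 / (1 + 0.3 + 4)))
```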
If other types of coarse-grained evidence are available to estimate the counts' magnitudes, a correction term can be added to make the combined p-value less conservative.

§.§ Fisher's method
The statistical tests based on Fisher's method and Pearson's method involve taking the logarithm of the p-values and summing them. The rationale behind the log-sum approaches is that the p-value F_Bin(c; n, ρ) is upper and lower bounded by exponential functions, as in Equation <ref>. (See Appendix Section 6 for the proof.) (1/√(2n)) exp(-nD(c/n ‖ ρ)) ≤ p ≤ exp(-nD(c/n ‖ ρ)) where D(c/n ‖ ρ) represents the relative entropy (Kullback-Leibler divergence) between (c/n, (n-c)/n) and (ρ, 1-ρ) (Equation <ref>). D(c/n ‖ ρ) = (c/n) log((c/n)/ρ) + ((n-c)/n) log(((n-c)/n)/(1-ρ)) Taking the logarithm of the inequality in Equation <ref>, the ratio of the logarithm of the p-value to -nD(c/n ‖ ρ) approaches 1 as n approaches infinity. Thus, we have Equation <ref>: log(p)/(-nD(c/n ‖ ρ)) = 1 + log(n)/(2nD(c/n ‖ ρ)) + 𝐎(1)/(nD(c/n ‖ ρ)) The error terms in Equation <ref> indicate that the choice of ρ is relatively flexible for Fisher's method. In essence, the logarithm of the p-value can be approximated by n times the estimated Kullback-Leibler divergence, which makes the sum of logarithmic p-values from different sources meaningful. This rationale supports the use of Fisher's method when testing the larger side of the null hypothesis <cit.>.

Different weighting strategies for Fisher's method have been investigated, and various modifications have been proposed <cit.>. However, the optimal weighting scheme remains uncertain. Some studies suggest employing adaptively weighted statistics combined with permutation tests <cit.> or using Monte Carlo algorithms to approximate the rejection region and determine optimal weights. Another approach involves constructing Good's statistic <cit.>, a weighted statistic defined as -2∑_i = 1^N w_i log p_i with weight w_i for site i. Under the null hypothesis, it follows a chi-squared distribution with 2∑_i = 1^N w_i DFs. Here, we use the weighting scheme w_i = s_i N, where s_i represents the share of each site. This weighting ensures that the resulting chi-squared statistic has a total of 2N DFs, i.e., -2∑_i = 1^N s_i N log p_i ∼ χ^2_2N. Additionally, some methods leverage the fact that the (1-p) quantile of the Gamma distribution Gam(α = 1, β) is -(1/β) log p, i.e., F_Gam(α = 1, β)(-(1/β) log p) = 1-p, where β = 1/2 corresponds to Fisher's method. For example, Lancaster's method <cit.> sets β = 1/2 and transforms each p_i to the (1-p_i) quantile of the Gamma distribution with α = s_i/2. This transformation yields X_i = F^-1_Gam(s_i/2, 1/2)(1-p_i) ∼ χ^2_s_i. By additivity, we have ∑_i = 1^N X_i ∼ χ^2 with ∑_i = 1^N s_i DFs. In summary, Lancaster's method generalizes Fisher's method by assigning different weights to the DFs of each source, resulting in larger total DFs than Fisher's method. However, Yoon et al. <cit.> demonstrated that large DFs cause the individual distributions to approach the normal distribution, leading to a significant decrease in power. Yoon et al. consequently proposed the wFisher method, which employs a similar weighting scheme but shrinks the total DFs to match those of the original Fisher's method. Specifically, they formulate the framework as ∑_i = 1^N F^-1_Gam(w_i N/2, 1/2)(1-p_i) ∼ χ^2_2N. We observe that the wFisher method exhibits greater stability than the other weighting methods.
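A minimal sketch of the wFisher combination described above, using the rate-1/2 Gamma parameterisation (scale 2 in scipy); this is our illustration rather than the reference implementation of <cit.>:

```python
import numpy as np
from scipy.stats import gamma, chi2

def wfisher(pvals, shares):
    """wFisher-style combination: each p_i is mapped to the (1 - p_i) quantile
    of Gam(shape = s_i * N / 2, rate = 1/2) and the sum of the transformed
    values is referred to a chi-squared distribution with 2N DFs."""
    p = np.asarray(pvals, dtype=float)
    s = np.asarray(shares, dtype=float)
    N = len(p)
    # rate 1/2 corresponds to scale = 2 in scipy's parameterisation
    stat = gamma.isf(p, a=s * N / 2, scale=2.0).sum()   # isf(p) = F^{-1}(1 - p)
    return chi2.sf(stat, df=2 * N)

print(wfisher([0.03, 0.15, 0.4], [0.5, 0.3, 0.2]))
```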
§.§ Evaluation of the surge detection task
The evaluation of the surge detection task focuses on the promptness of surge detection, resulting in binary sequences indicating whether a surge exists or not. Alarms generated from p-values occur when the p-values fall below a specified threshold, and alarms based on growth rates are generated when the growth rates surpass a certain threshold. In the analysis of real data, the ground truth is established using the p-value alarms from the centralized setting, while the semi-synthetic data analysis employs the growth alarms derived from the prevalence as the ground truth. The other alarms are then evaluated by comparing them to the established ground truth. For each ground-truth alarm, a reconstructed alarm is considered a true positive if it falls within a specified time window, e.g., no earlier than one week before and no later than two weeks after. Otherwise, the reconstructed alarm is classified as a false positive. Moreover, any true alarms not matched by a reconstructed alarm are deemed false negatives. Following this rule, Precision and Recall can be calculated. Precision represents the ratio of true positives (TP) to the sum of true positives and false positives (FP), TP/(TP + FP). Recall denotes the ratio of true positives to the sum of true positives and false negatives (FN), TP/(TP + FN). These metrics are computed for different confidence-level thresholds. Finally, the Precision-Recall curve is obtained, and the power (equal to Recall) is evaluated while controlling the False Discovery Rate (FDR, which equals 1 - Precision) at 0.1, allowing for an assessment of method performance. It should be noted that the term "power" in this context refers to the power of the classification task, as opposed to the power associated with the Binomial test mentioned earlier.

§.§ Semi-synthetic data analysis
The semi-synthetic analysis is conducted under the assumption of noisy data, where the observed signal deviates from the true underlying prevalence. This analysis has several objectives. Firstly, it investigates the effects of various factors, such as the number of sites, the magnitudes of the reports, and the imbalance of the shares, while controlling for the other dimensions. Secondly, it facilitates the comparison of the systematic errors arising from noise in the centralized and federated settings, as well as the assessment of the combination loss during the meta-analysis process. By starting from real data and employing a semi-synthetic approach, the analysis creates an idealized setting that enables the examination of specific aspects of the data and their impact. The generation of the semi-synthetic data starts from COVID-related claims data reported on a daily basis. Initially, a 7-day moving average smoother is applied to the data, and the resulting smoothed values are treated as the underlying prevalence of the Poisson-distributed counts. Poisson sampling is then performed to generate simulated observations, assuming that the observed data are drawn from a Poisson distribution with the smoothed data serving as the rate parameter. Once the simulated counts are obtained, the next step is to compute the growth rate of the prevalence and the p-values of the hypothesis test at each time point. Alarms are then determined based on these computed values and the predetermined thresholds.
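The following sketch illustrates the generation and alarm pipeline just described; the window lengths, thresholds and the exact growth-rate definition are placeholders of ours rather than the paper's settings.

```python
import numpy as np
from scipy.stats import binom

def semi_synthetic_alarms(real_counts, l=4, theta=0.3, alpha=0.05,
                          rng=np.random.default_rng(0)):
    """Sketch of the semi-synthetic pipeline: the 7-day moving average of the
    real counts is treated as the underlying prevalence, simulated observations
    are Poisson draws from it, and alarms are raised from the prevalence growth
    rate (ground truth) and from the conditional Binomial test."""
    prevalence = np.convolve(real_counts, np.ones(7) / 7, mode="same")
    simulated = rng.poisson(prevalence)

    growth_alarms, pvalue_alarms = [], []
    rho_T = (1 + theta) / (1 + theta + l)
    for t in range(l, len(simulated)):
        # ground-truth alarm: prevalence growth over the baseline window
        if prevalence[t] / max(prevalence[t - l:t].mean(), 1e-8) - 1 > theta:
            growth_alarms.append(t)
        # p-value alarm: conditional Binomial (UMP) test on simulated counts
        n = simulated[t - l:t].sum() + simulated[t]
        if binom.sf(simulated[t] - 1, n, rho_T) < alpha:
            pvalue_alarms.append(t)
    return growth_alarms, pvalue_alarms
```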
The observed discrepancy between the growth alarms of the prevalence and the centralized p-value alarms can be attributed to the intrinsic characteristics of the Poisson assumption and the proposed Poisson rate ratio test. Furthermore, the disparity between the centralized and decentralized p-value alarms emerges as a consequence of the meta-analysis procedure. By examining and comparing the errors that arise from this two-stage process, we gain valuable insights into the influence of the recombination cost of the different combination methods and of the presence of systematic noise. Our findings indicate that the recombination cost has an impact on the performance of the combination methods comparable to that of the systematic noise. Furthermore, the modified versions of Stouffer's method and the weighted Fisher's method exhibit stability across diverse settings, showcasing their robustness in practical scenarios.

§ ACKNOWLEDGMENTS
This material is based upon work supported by the United States of America Department of Health and Human Services, Centers for Disease Control and Prevention, under award number U01IP001121 and contract number 75D30123C1590. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States of America Department of Health and Human Services, Centers for Disease Control and Prevention.
http://arxiv.org/abs/2307.01157v1
20230703170529
A novel approach for predicting epidemiological forecasting parameters based on real-time signals and Data Assimilation
[ "Romain Molinas", "César Quilodrán Casas", "Rossella Arcucci", "Ovidiu Şerban" ]
cs.LG
[ "cs.LG", "cs.NE", "cs.SI" ]
This paper proposes a novel approach to predict epidemiological parameters by integrating new real-time signals from various sources of information, such as novel social media-based population density maps and Air Quality data. We implement an ensemble of Convolutional Neural Network (CNN) models using various data sources and a fusion methodology to build robust predictions and simulate several dynamic parameters that could improve the decision-making process for policymakers. Additionally, we use data assimilation to estimate the state of our system from the fused CNN predictions. The combination of meteorological signals and social media-based population density maps improved the performance and flexibility of our prediction of the COVID-19 outbreak in London. The proposed approach outperforms standard models, such as the compartmental models traditionally used in disease forecasting (SEIR), and generating robust and consistent predictions allows us to increase the stability of our model while increasing its accuracy. Highlights:

* A novel model to predict epidemiological parameters of COVID-19, such as the number of infections and deaths.

* The proposed models are based on a flexible fused architecture which allows easy swapping of individually trained models.

* Integrating social-media data as a real-time signal for population behaviour in a densely populated area.

* Using meteorological and air quality metrics to characterise epidemiological spread in London.

§ INTRODUCTION
In epidemiological forecasting, traditional models and measurements take days to collect, aggregate, and integrate into existing statistical models. These predictions are extremely important for understanding the geographical spread of disease and help authorities in the decision-making process <cit.>. We propose a novel approach to predict some of the epidemiological parameters, such as the number of new infections and deaths, by integrating new real-time signals that are traditionally not considered by experts in this field. Moreover, to compensate for the uncertainty, we apply Machine Learning (ML) and Data Assimilation (DA) techniques by integrating observations into our forecasting model, which tends to increase model stability and decrease the error rate. Constraints in standardised case definitions and timely and noisy data sources can affect the accuracy of predictive models. Due to the lack of granular data, resource-limited environments pose specific challenges for accurate disease prediction <cit.>. Therefore, epidemiological forecasting using predictive modelling is an effective resource for anticipating outbreaks and tailoring responses. Over the past two decades, various approaches to epidemiological modelling have been explored to provide sufficient resources for understanding and analysing outbreaks <cit.>. Spatial information has successfully improved the understanding of epidemic dynamics through various spatial analytics, such as geographical information, residential mobility, sensed environmental data, and spatio-temporal clustering <cit.>. Moreover, compartmental models have proved very effective in emulating epidemic outbreaks through a set of differential equations.
Optimal solutions of these deterministic equations can reasonably infer the inner mechanisms of an epidemic outbreak, such as the spread of the disease <cit.>. The introduction of government action parameters in the transmission rate, such as social and political measures, improved the modelling of the 1918 influenza pandemic in the UK <cit.> and the recent COVID-19 pandemic outbreak in Wuhan, China <cit.>. The prediction of the transmission rate has also shown potential for forecasting based on parameter estimation <cit.>. Recent bio-surveillance systems were able to detect earlier disease outbreaks on real-time noisy social media data <cit.>. In this paper, we present our implementation of a combination of CNN models from various data sources to build robust predictions. Our fused CNN architectures track spatial and temporal relationships across the real-time data stream and thus emulate the dynamic parameters. Then, our approach's performance and flexibility are assessed on the COVID-19 outbreak using temporal and spatial public information sources from various official institutions, social media and air quality data. Finally, we expose the benefit of DA to robustly estimate the system's state with the fused CNN and its consistent prediction. § RELATED WORK The accuracy of prediction models can be affected by constraints in conventional case definitions and timely and noisy data sources. Resource-constrained areas present unique obstacles for reliable emerging epidemic prediction caused by a lack of actionable insights <cit.>. The inherent uncertainty, not just about the contagious virus itself but also about the interconnected human, societal, and political aspects that co-evolve and keep the future of outbreaks open-ended, makes projecting future events in the pandemic difficult <cit.>. Over the past two decades, various approaches to epidemiological modelling have been explored to provide sufficient resources for understanding and analysing outbreaks<cit.>. Compartmental models, a population-wide established approach, have been proven to be very effective in emulating epidemic outbreaks by modelling individuals as a finite number of discrete states. The states reflect the progression of an infection with a time dependency: a person may become infected, recover or deceased. The dynamics of the states are governed by a parametric nonlinear ordinary differential equation (ODE). Optimal solutions of these deterministic equations can reasonably infer the inner mechanisms of epidemic outbreaks and provide essential quantities of infectious disease outbreaks <cit.>. The compartmental model offers high dimensional systems able to precisely discretise populations into groups of individuals with similar disease infectious dynamics. However, the spread of infectious diseases can be influenced by inherent virus properties, human characteristics (e.g. ages, sex, genome, genetic diseases), environment (e.g. hygiene, culture, household) and mobility (e.g. local mobility on daily journeys, national and international mobility)<cit.>, meteorological factors <cit.> and probably other unknown factors might also affect the dynamics of the spread. Consequently, integrating those factors increases the complexity of the ODE for a profit of information gain, difficult to evaluate and optimise by estimating appropriate parameters for the equation especially based on a real-time stream of information. Recently, a first step toward integrating these factors into disease model parameters has been made. 
In particular, spatial information has successfully improved the understanding of epidemic dynamics through various spatial analytics, such as geographical information, residential mobility, sensed environmental data, and spatio-temporal clustering <cit.>. The prediction of the transmission rate has also shown potential for forecasting based on parameter estimation <cit.>. Activating this knowledge opens up interesting possibilities for creating robust and efficient epidemic prediction models <cit.>. Simultaneously, social media data has gained attention over the past few years for the enormous wealth of information and knowledge it encapsulates. Social media and messaging apps are the most common communication methods. Advances have shown that it is possible to extract valuable knowledge from this colossal amount of information despite the saturation of content, and to systematize complex decision-making processes in the service of the marketing and commercial strategies of companies <cit.>. The potential of social media-based analysis tools has begun to be explored to help decision-making during crises, especially health crises. In the same direction, Facebook proposed Disaster Maps for crisis analysis and response. They are built by comparing counts of a certain event during a crisis with expectations from pre-crisis periods, based on a two-week aggregation of anonymous data from Facebook users <cit.>. Recent bio-surveillance systems could also detect disease outbreaks earlier by using DA on real-time noisy social media data. A software framework based on modular models processing real-time data and using deep learning to classify health-related tweets has shown encouraging results for forecasting influenza outbreaks in some regions of the USA <cit.>. Along the same lines, in work on diagnosing clinical infection at early stages <cit.> from a small amount of local patient data, the use of readily available social and weather data increased prediction performance by 9%. Motivated by the latest outbreak of COVID-19, analyses of the public's reaction and perception of information during emerging infectious diseases, based on emotion intensity, message volume <cit.> and content, have improved the ability to capture the acceptance and adoption of sanitary measures and the related compartments <cit.>. Considerable research effort has gone into building robust and accurate epidemic models. In this regard, a novel data-driven epidemic model has been developed, which focuses on marked temporal point processes precisely engineered to allow fine-grained spatio-temporal estimates of the spread of the disease within a population <cit.>. It offers unprecedented spatio-temporal resolution to quantify the effects of tracing, testing, and lockdown. In this context, the cyclical nature of the phases of epidemics has motivated research to capture the underlying dynamics, and several models using autoregression approaches have emerged. They rely on predicting COVID-19 dynamics, namely the numbers of confirmed cases, confirmed recoveries and deaths, using ARIMA with added weather data <cit.> or seasonal models <cit.>. The heterogeneous autoregressive model has been preferred to better capture long-memory features of the data <cit.>. Deep learning models, more precisely RNNs and LSTMs, have also been tested to predict the dynamics of an epidemic using a trade-off between long and short memory across the multiple sources of data <cit.>.
However, until the new trend and a stationary regime are picked up, ARIMA and other autoregressive models based on statistical curve fitting generally show very poor performance in periods of abrupt, non-stationary growth in the number of active COVID-19 cases that cannot be attributed to "seasonal" components. Also, individual model predictions have been shown to be highly sensitive to parameter assumptions, according to a study of multiple COVID-19 model predictions applied in ten countries <cit.>. An ensemble model approach demonstrated higher robustness and flexibility by reducing uncertainties from various data sources, capturing the constantly changing dynamics of COVID-19 transmission at local levels. The advances reviewed above show the opportunity to create more robust and efficient data-driven models for epidemic outbreak forecasting. However, most of these models are still highly sensitive to parameter estimation, which impacts the performance of epidemic dynamics forecasting when knowledge about the health situation at the onset of an outbreak is lacking, or when changes in behaviour are induced by sanitary measures (e.g., lockdown, vaccination).

§ RESEARCH OBJECTIVES
Our approach focuses on creating a real-time data-driven model to emulate the dynamics of epidemic outbreaks. The characteristics of our model include the flexibility and robustness to process a wide range of sources of potentially noisy real-time signals. The objective is to build a model capable of selecting and processing the most useful information to simulate the underlying dynamics of an outbreak system by automatically activating and adjusting parameters to maximise the information gain. The information is processed to evaluate the population environment dynamically in time and space, providing informative features and insights for our model to forecast epidemiological parameters precisely. Our main objectives are listed below:

* A novel model to predict epidemiological parameters of COVID-19, such as the number of infections and deaths.

* The proposed models are based on a flexible fused architecture which allows easy swapping of individually trained models.

* Integrating social-media data as a real-time signal for population behaviour in a densely populated area.

* Using meteorological and air quality metrics to characterise epidemiological spread in London.

Our research focuses on combining observational data with the output of a numerical model to obtain an optimal estimate of the evolving state of our system, bringing robustness and resilience to noisy real-time signals.

§ PROPOSED METHOD
§.§ Combining Real-Time Signals Forecast Models
Inspired by the practicality of the CNN fusion model proposed by <cit.>, our approach uses a multiplicative fusion method that combines several CNNs trained on different sources, resulting in robust predictions by amplifying or suppressing feature activations depending on their agreement, as shown in Figure <ref>. The proposed approach is intended to identify each network's important features and give a higher prediction score only when several networks agree with each other. For instance, if both networks generate high activations at the same location, the activations of the feature maps from the convolutional layer will be intensified; otherwise, they will be discarded. To formulate the fusion method, we consider two CNNs: a temporal and a spatial network.
From each network, let us consider the matrices A ∈ ℝ^{d × M} and B ∈ ℝ^{d × N} to be the extracted features of the last convolutional layer. M and N are the numbers of feature maps, determined by the CNN architectures, and d is the size of each feature map. The output of the fusion method is:

c_k = ( ∑_{i=1}^{M} α_{ki} a_i + γ_k ) ⊙ ( ∑_{j=1}^{N} β_{kj} b_j + δ_k )

where a_i and b_j are the i-th and j-th columns of each matrix, each corresponding to one feature map. The operator ⊙ is the element-wise product, γ and δ are bias terms, while α and β are learnable weights for each feature map. They are preponderant in selecting good features in each network, giving higher weights to the features that are useful for prediction. This layer is trained with standard back-propagation and stochastic gradient descent. In our work, we proposed an extension of the above fusion method by adding extra hyperparameters that can be fine-tuned during training and by feeding the results into another neural network to predict epidemic outcomes. Our proposed fusion ingredients are two CNNs: a temporal CNN and a spatial CNN, trained respectively on the temporal and the spatial stream of information. All the models are pre-trained on their respective datasets. Then, weighted sums of the feature maps from the last convolutional layer of both networks are computed. These networks are then joined by element-wise multiplication into a convolutional layer, followed by fully connected layers that perform the regression. The proposed layers are trained with standard back-propagation and optimised using Stochastic Gradient Descent (SGD). This method provides different features extracted from every tracked dataset. The combination of the CNN outputs allows cross-communication between networks and consequently enhances the knowledge gained from the other streams. The full architecture is shown in Figure <ref>. This approach has been preferred to a single full CNN model because the reduced size of the sub-models significantly decreases the computational complexity. If required, re-training is also faster, which is highly desirable for real-time forecasting. Additionally, multiple sub-models provide a simpler understanding of each extracted feature and an easier explanation of the inferred influence of mobility and population on the parameters of the epidemic.
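To make the multiplicative fusion layer above concrete, the snippet below gives a minimal PyTorch-style sketch. The module name, tensor shapes and the single fully connected head are our own illustrative assumptions, not the exact implementation used in this work.

```python
import torch
import torch.nn as nn

class MultiplicativeFusion(nn.Module):
    """Minimal sketch of the multiplicative fusion layer.

    A and B are the last-convolutional-layer feature maps of the temporal and
    spatial CNNs, with shapes (batch, M, d) and (batch, N, d).
    alpha, beta are learnable weights; gamma, delta are learnable biases.
    """

    def __init__(self, m_maps: int, n_maps: int, d: int, k_out: int):
        super().__init__()
        self.alpha = nn.Parameter(0.01 * torch.randn(k_out, m_maps))
        self.beta = nn.Parameter(0.01 * torch.randn(k_out, n_maps))
        self.gamma = nn.Parameter(torch.zeros(k_out, d))
        self.delta = nn.Parameter(torch.zeros(k_out, d))
        self.head = nn.Linear(k_out * d, 1)  # illustrative regression head

    def forward(self, A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
        # Weighted sums of feature maps: shape (batch, k_out, d)
        a = torch.einsum('km,bmd->bkd', self.alpha, A) + self.gamma
        b = torch.einsum('kn,bnd->bkd', self.beta, B) + self.delta
        c = a * b                       # element-wise product: agreement gating
        return self.head(c.flatten(1))  # predicted epidemiological state

# Usage sketch: fuse (pre-trained) temporal and spatial CNN feature maps.
fusion = MultiplicativeFusion(m_maps=96, n_maps=96, d=64, k_out=32)
A = torch.randn(8, 96, 64)   # temporal CNN feature maps for a batch of 8
B = torch.randn(8, 96, 64)   # spatial CNN feature maps
y_hat = fusion(A, B)         # shape (8, 1)
```

In the actual architecture described above, the fused maps feed a convolutional layer followed by fully connected layers, and the whole stack is trained with back-propagation and SGD.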
§.§ Improving the forecasting using Data Assimilation

Our data-driven approach is composed of several time series of multi-dimensional observations y_k ∈ ℝ^p of an unknown process x_k ∈ ℝ^m:

y_k = H_k(x_k) + ϵ_k^y

Observations are indexed by 0 ≤ k ≤ K, where k corresponds to the time t_k. H_k represents the observation operator H_k : ℝ^m → ℝ^p. We assume that the observation error is known and represented by ϵ_k^y, for which the assumption of a normal distribution with zero mean and covariance matrix R_k holds. We suppose that the process x obeys partial differential equations, and the prediction of the following state at time t_{k+1} given the state at t_k is defined as:

x_{k+1} = G_k(x_k) + ϵ_k^x

where ϵ_k^x is the error of the surrogate model of the PDE mentioned in equation (<ref>), following a Gaussian distribution with zero mean and covariance matrix P_k. The random vectors ϵ_k^x and ϵ_k^y represent the modelling and observation errors, respectively. They are assumed to be independent, white-noise processes with Gaussian/normal distributions:

ϵ_k^y ∼ 𝒩(0, R_k), ϵ_k^x ∼ 𝒩(0, P_k)

The representation of the surrogate model uses the deep learning step. It is composed of concatenated CNN sub-models from the spatio-temporal streams of information. We represent our model with a parametric function G_W(x):

G_W(x_k) = x_k + f_CNNs(x_k, W)

The function f_CNNs is a deep neural network pre-trained with its corresponding weights W.

§.§ Stochastic Ensemble Kalman Filter

Analysis Step: The goal of the stochastic EnKF is to perform, for each member of the ensemble, an analysis of the form <cit.>:

x_i^a = x_i^f + K_i [ y - H(x_i^f) ]

where i = 1, …, m represents the member index in the ensemble and x_i^f the i-th forecast state vector, defining the prior at the given analysis time. K is identified with the Kalman gain:

K_i = P^f H^T ( H P^f H^T + R )^{-1}

This quantity is estimated from the ensemble statistics. The forecast error covariance matrix is:

P^f = (1/(m-1)) ∑_{i=1}^{m} (x_i^f - x̄^f)(x_i^f - x̄^f)^T,  with  x̄^f = (1/m) ∑_{i=1}^{m} x_i^f

Using normalised perturbations from <cit.>, the gain can be computed using only the anomaly matrices:

K_i = X_f Y_f^T ( Y_f Y_f^T )^{-1}

where:

[X_f]_i = (x_i^f - x̄^f) / √(m-1),   [Y_f]_i = (H x_i^f - u_i - H x̄^f + ū) / √(m-1)

The perturbation u_i (drawn from the Gaussian distribution 𝒩(0, R), where R is the observation error covariance matrix) is added to the observation vector for each member of the ensemble. This provides a remedy for the divergence of the EnKF generated by an underestimation of the error covariances <cit.>. In equation (<ref>) the term X_f Y_f^T is a sample estimate of P^f H^T, taken from equation (<ref>). Similarly, Y_f Y_f^T is a sample estimate of H P^f H^T + R. In this form, it is interesting to note that the updated perturbations are linear combinations of the forecast perturbations: the new perturbations are found within the subspace spanned by the original ensemble perturbations.

Forecasting Step: In the forecasting step, the updated ensemble obtained during the analysis step is propagated by the Deep Learning model (equation <ref>) over one time step and for all the particles of the ensemble i = 1, …, m, where G_k is defined by equation <ref>:

x_{i,k+1}^f = G_k(x_{i,k}^a)

In our work, the forecast error covariances are estimated from the forecast perturbations. Regarding the forecast, two different computations have been explored: the mean and the median of the forecast ensemble. With the significant exception of the Kalman gain computation, all operations on the ensemble members are independent, which means that all the training can be done in parallel. The use of the EnKF may be a bit excessive for the amount of data available in our case. However, we built this approach as a flexible data-driven methodology capable of self-adaptation based on the availability and granularity of the data. More importantly, EnKF filtering is able to incorporate the uncertainty of input parameters used in models forecasting the spread of diseases <cit.>. Considering the noise in our ground truth data and the uncertainty around its collection, we strongly believe the DA improves the stability of our results. The sequential Bayesian filter implements an ensemble of state vectors to represent the distribution of the system state vector x. Considering the choice of ground truth data, the states that can potentially be assimilated are the daily deaths and the daily lab-confirmed cases. This means that the other features should be concatenated with the assimilated states to forecast the next state, as DA requires the previous state of the forecast as input. The implementation of the EnKF is inspired by the Stochastic EnKF algorithm proposed by <cit.>.
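A minimal NumPy sketch of one stochastic EnKF analysis/forecast cycle following the equations above is given below. Function and variable names are ours, the observation operator is taken to be linear, and `surrogate_forecast` stands in for the fused-CNN propagator G_W; this is an illustrative sketch, not the exact implementation.

```python
import numpy as np

def enkf_step(X_f, y, H, R, surrogate_forecast, rng):
    """One stochastic EnKF analysis + forecast cycle (illustrative sketch).

    X_f : (n, m) forecast ensemble (n state variables, m members)
    y   : (p,)   observation vector
    H   : (p, n) linear observation operator
    R   : (p, p) observation-error covariance
    """
    n, m = X_f.shape
    # Observation perturbations u_i ~ N(0, R), one per ensemble member
    U = rng.multivariate_normal(np.zeros(len(y)), R, size=m).T          # (p, m)

    # Normalised anomaly matrices X_f', Y_f'
    Xa = (X_f - X_f.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)       # (n, m)
    HX = H @ X_f                                                         # (p, m)
    Yp = HX - U
    Ya = (Yp - Yp.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)          # (p, m)

    # Kalman gain K = X' Y'^T (Y' Y'^T)^(-1)
    K = Xa @ Ya.T @ np.linalg.inv(Ya @ Ya.T)                             # (n, p)

    # Analysis: each member is updated with its own perturbed observation
    X_a = X_f + K @ (y[:, None] + U - HX)                                # (n, m)

    # Forecast: propagate every member with the (pre-trained) surrogate model
    X_next = np.column_stack([surrogate_forecast(X_a[:, i]) for i in range(m)])
    return X_a, X_next

# Toy usage with an identity observation operator and a random-walk surrogate
rng = np.random.default_rng(0)
n_state, members = 2, 50
H = np.eye(n_state)
R = 0.1 * np.eye(n_state)
ensemble = rng.normal(size=(n_state, members))
obs = np.array([120.0, 15.0])              # e.g. daily cases and deaths
walk = lambda x: x + rng.normal(scale=0.5, size=x.shape)
X_a, X_f_next = enkf_step(ensemble, obs, H, R, walk, rng)
```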
Our adaptation of this algorithm is shown in Algorithm <ref>. The observation model operator is the identity matrix H ∈ ℝ^{dim(x) × dim(x)}, whose size is given by the length of the system state vector x. The forward model operator is the fused CNN, represented by the parametric function G_W. All the parameters of the combined CNNs and their respective hyperparameters are included in the W term. The selection operator S provides an estimate from the ensemble forecast; simple functions, such as the mean or the median, are considered in our experiments. With the significant exception of the Kalman gain computation, all operations on the ensemble members in Algorithm <ref> are independent, which means that the algorithm can easily be executed in parallel or distributed across a cloud environment. The DA process estimates the state of the system while the combination of Deep Learning models emulates the parameters of the dynamic model <cit.>. The forecast probability is compared to the observation ensemble and produces the analysis. The output analysis is fed into the CNNs to update the surrogate model. This iterative cycle is shown in Figure <ref>.

§ EMPIRICAL ANALYSIS

§.§ Datasets

Novel data streams, such as epidemic case incidence data provided by digital disease detection tools, demographic data estimates aided by geospatial mapping tools, and advances in mathematical modelling, can support efforts to control emerging outbreaks. They can also provide useful input to public health authorities, complementing existing data sources and monitoring structures. Predictive models may exploit these innovative data sources to include timely case count forecasts and the potential geographic spread of an evolving outbreak in real time, as pointed out in <cit.>. Although conventional disease surveillance remains the bedrock of epidemic evaluation and regular data collection, informal disease surveillance systems enable faster dissemination and identification of case occurrence data. More importantly, emerging data sources will provide valuable insight for epidemic models in areas that do not have public health networks in place due to violent conflict or inadequate infrastructure. They also have benefits for data collection at the global level, since data is obtained using a standardised format <cit.>. Population maps provided by Facebook[More details at: <https://dataforgood.fb.com/docs/high-resolution-population-density-maps-demographic-estimates-documentation/>], namely the Facebook Population Map (Tile Level), have been exploited to infer the spatial dynamics of the pathogen. These are location density maps, or heat maps, which show where people are located before, during and after a disaster and where populations have increased or decreased. We can compare this information to historical records, such as population estimates based on satellite images. Comparing these datasets can help response organisations understand the areas impacted by a natural disaster. Recent medical studies show a strong correlation between air quality metrics and the risk of death <cit.>; therefore, in our experiments we use weather and air quality measurements in the London area. To get as close as possible to real-time processing, we decided to focus on data with the highest frequency. Simultaneously, we wanted to combine knowledge from both the temporal and the spatial streams of information to infer as much as possible about the parameters of the disease dynamics.
Thus, we have selected the London Air Pollution data to cross-correlate these potential environmental drivers of epidemics with the spread of certain pathogens. The London Air Pollution database[Available online at <http://www.londonair.org.uk/london/asp/datadownload.asp>) or through the Openair API.] provides atmospheric data measurements from various local authorities across the London region. This stream of information is subject to noise from the method of acquisition and its pre-processing by the London Air Pollution teams. To cope with missing data, we use nearest neighbour interpolation. The nearest neighbour algorithm selects the value of the nearest point without considering the values of neighbouring points at all, resulting in piece-wise constant interpolation. The feature selection has been made keeping sight of our project objective: developing an approach capable of processing real-time signals from various public sources and able to detect patterns in disease dynamics. Two datasets are considered to run our experiments: a temporal stream and a spatial stream. The dataset for the temporal stream is composed of meteorological information:

* Barometric Pressure: measurement of air pressure in the atmosphere (mbar).
* Solar Radiation: the power per unit area received from the Sun in the form of electromagnetic radiation (W/m2).
* Temperature: measurement of how fast the atoms and molecules of a substance are moving (Celsius).
* Wind Speed: the speed of the weather-related air movement from one place to the next (m/s).
* Relative Humidity: the amount of water vapour actually in the air (percentage).
* PM10 & PM2.5 Particulates: the amount of particulate matter 10 & 2.5 micrometres or less in diameter per cubic metre of air; PM2.5 is generally described as fine particles (ug/m3).
* Carbon Monoxide: the amount of carbon monoxide per cubic metre of air (mg/m3).
* Nitric Oxide: the amount of nitric oxide per cubic metre of air (ug/m3).
* Nitrogen Dioxide: the amount of nitrogen dioxide per cubic metre of air (ug/m3).
* Oxides of Nitrogen: the amount of oxides of nitrogen per cubic metre of air (ug/m3).
* Ozone: the amount of ozone per cubic metre of air (ug/m3).
* Sulphur Dioxide: the amount of sulphur dioxide per cubic metre of air (ug/m3).

During the research, we discovered that information from multiple public institutions is available. The Department of Health and Social Care (DHSC) and Public Health England (PHE) have maintained a dashboard since the COVID-19 outbreak in March[available at <https://coronavirus.data.gov.uk/>], where the daily case count and daily deaths are reported. This is our ground truth data:

* Daily and cumulative lab-confirmed cases: the number of individuals with a lab-confirmed positive COVID-19 antigen test on or before the sampling date or reporting date, published by Public Health England (PHE).
* Daily Deaths: the number of deceased who were hospitalised in England or had either tested positive for COVID-19 or had COVID-19 mentioned on the death certificate. All counts are recorded against the date of death rather than the announcement date.

The selected data spans from the 6th of March 2020 until the 24th of June 2020, covering 115 days, including the COVID-19 spring peak of the epidemic in the London area. The training dataset contains 80 days, the test set has 23 days, and the validation sample contains 12 days within the given period.
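As an illustration of this pre-processing, the snippet below sketches how missing readings could be filled with nearest-neighbour interpolation and how a chronological 80/23/12-day split could be taken. The file name, column layout and split order are hypothetical and not part of the original pipeline.

```python
import pandas as pd

# Hypothetical daily feature table with a "date" column and the features above
df = pd.read_csv("london_air_weather.csv", parse_dates=["date"])
df = df.sort_values("date").reset_index(drop=True)

# Nearest-neighbour interpolation (requires SciPy): each gap takes the value of
# the closest observed point, giving a piece-wise constant fill
feature_cols = [c for c in df.columns if c != "date"]
df[feature_cols] = df[feature_cols].interpolate(method="nearest").ffill().bfill()

# Chronological split over the 115 days (assumed order: train, test, validation)
train = df.iloc[:80]
test = df.iloc[80:103]
val = df.iloc[103:115]
```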
To get an insight into the spatial dynamics, we chose the Facebook Population Map (Tile Level). This data stream contains information about population density at 8 am, 4 pm and midnight on a daily basis. With a view to setting up a baseline model, we considered this choice to be the best compromise between a good level of spatial information about mobility, the highest level of granularity, and computational complexity. The dataset for the spatial stream of information highlights the evolution of the geospatial data captured periodically:

* Population at London region level: the aggregate number of people seen at a location at 8 am, 4 pm and midnight on a daily basis, averaged into one daily frame.

To assess the best features for our experiments, we computed the correlation matrix of the features and the ground truth data. Large values demonstrate significant predictive power between the variables included in this matrix. However, the absence of strong correlations does not necessarily mean a lack of predictive power, since only individual variables are considered in this experiment. Figure <ref> shows the correlation matrix for all the temporal features analysed.

§.§.§ Feature Selection

As expected, the correlation between the daily number of cases and the daily number of deaths is high: 0.87. Additionally, the levels of PM10 and PM2.5 are significantly correlated with our ground truth data: (0.58, 0.53) for the daily lab-confirmed cases and (0.58, 0.55) for the daily deaths. According to the London Air Pollution Centre, these particles emanate from road traffic, including carbon dioxide from engines, small metal parts and rubber from vehicle wear and braking, and road surface dirt. Others contain industrial and construction materials as well as wind-blown dust, sea salt, pollen and soil particles. It is an indicator of intense human activity in a region. The remarkable correlation with the dynamics of COVID-19 might have a simple explanation: a region with considerable human activity implies a high density of individuals, which increases the density of polluting vehicles in the area. This also leads to an increase in the potential transmission rates and risk of infection.

§.§.§ Feature Scaling

Scaling the features is essential before training a neural network; therefore, we perform a mean normalisation. The training data is used to calibrate the mean and standard deviation for the normalisation, so scaling is performed independently of the validation and test sets.

§.§ Epidemiological states forecasting using Temporal stream of information

CNN architectures can reproduce the underlying processes and structures of the data for various image recognition tasks. Inspired by this, we explore a deep-learning framework for time series forecasting. In time series analysis, the well-established ARMA and ARIMA models for forecasting univariate and multivariate time series are widely used. The MA part stands for Moving Average: it indicates that the output variable depends linearly on current and past values of a stochastic term. We adapted this idea by introducing a parameter, the WINDOW_SIZE, which influences the behaviour of the temporal CNN. The relative weight of the information contained in past data, combined with the present information, may be of interest for understanding the dynamics of the spread of a pathogen. This parameter is evaluated in our experiments in Section <ref>.
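As an illustration, the snippet below sketches the kind of sliding-window batching that such a WINDOW_SIZE parameter controls. The array shapes and the pairing of each window with the following day's target are our own assumptions; the actual data loader used in this work is described in the next paragraphs.

```python
import numpy as np

def make_windows(features: np.ndarray, targets: np.ndarray, window_size: int):
    """Turn a daily feature matrix (days, n_features) into overlapping windows
    of length `window_size`, each advancing by one day, paired with the target
    of the day that follows the window (an illustrative choice)."""
    X, y = [], []
    for start in range(len(features) - window_size):
        X.append(features[start:start + window_size])
        y.append(targets[start + window_size])
    return np.stack(X), np.array(y)

# e.g. 80 training days, 13 temporal features, a window of 7 days
feats = np.random.rand(80, 13)
cases = np.random.rand(80)
X, y = make_windows(feats, cases, window_size=7)
print(X.shape, y.shape)   # (73, 7, 13) (73,)
```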
This feature is implemented in our data loader function: each data loader batch request returns sequences of the chosen window length. For example, as shown in Figure <ref>, if the batch size is set to 16 and the sequence length to 5, then the input data ends up consisting of 16 windows of length 5, each one advancing by a day. However, the architecture of the Temporal CNN depends on the window size of the sequence. The baseline model is a simple CNN with two convolutional layers with pooling layers and three fully connected layers on top, illustrated in Figure <ref>. The network outputs the prediction of the selected epidemiological state and is trained to predict the daily number of infected people and the daily number of deaths. The input data has the shape N × m, where N is the length of the sequence, corresponding to the chosen window explained above, and m is the number of features. Convolution operations are executed on the input layer with convolution filters. A non-linear ReLU function is used for activation on each convolutional layer. The hyperparameters considered for tuning include the number of filters l and the kernel size K_1 × K_2. Given the implementation of the time-series sequence windows, there is a dependency between the WINDOW_SIZE and the height of the kernel K_1, as the kernel height cannot be greater than the window size; hence K_1 ≤ WINDOW_SIZE. The optimal values of the parameters are determined by a grid search. A feature map is divided into multiple segments of equal length, and each segment is then represented by its average or maximum value. The benefit of the pooling operation is that the output bands of the convolutional layers are down-sampled, thereby reducing uncertainty in the hidden activations. We decided to fix the pooling filter size to 2 × 2, in order to reduce the dimension of the search space for the hyperparameter optimisation and to avoid computational problems with the variable architecture of the CNN. The initial time sequence, after consecutive convolutional and pooling layers, is characterised by a series of feature maps. The fully connected layers combine the most relevant feature maps and aggregated information from the previous convolutional layers, and output the regression of the desired state. The last fully connected layer has a fixed size, equal to that of the last fully connected layer of the Spatial CNN. This is done in order to perform an equal convolution operation in the fused CNN.

§.§.§ Temporal CNN architecture: Set up

In this section, we present the results of our search for the best Temporal CNN architecture mentioned in Section <ref>. Every MAE score was computed using 10 cross-validation runs on unseen data. We use a grid search to find the optimal parameters; Table <ref> provides a sample of the results with the best MAE score for each set of parameters. The value 5 for both kernel dimensions is the most appropriate. Compared with the analysis of the convolutional filter size provided by <cit.>, the same kernel size value results in the best prediction performance. The kernel represents the local features of the time series input or feature maps. If the size is too small, the characteristics of the waveforms cannot be represented; inversely, if it is too large, it becomes difficult to reflect local features. Additionally, the optimal values for the number of filters are 16 and 96. Similarly to the kernel size, the filters reflect the local features of the time series according to the above analysis.
The optimal number of filters is a trade-off between having enough filters to collect sufficient information about the underlying structure of the extracted data and not having too many, to avoid ineffective filtering. We also investigate the influence of past data, combined with the present data, on the forecast by introducing the window size parameter into the data loader function. The results (Table <ref>) show that the optimal value is 7, which means that the most relevant combination of information is obtained by taking 7 days of records before the actual day of prediction.

§.§ Epidemiological state forecasting using a spatial stream of information

The resolution of a spatio-temporal model for predicting the spread of an epidemic is highly correlated with the effectiveness of local protective measures, with an impact on the local economy. One effective option to model individual mobility patterns is contact tracing <cit.>. In our case, we prefer to use aggregated data, as contact tracing patterns are fairly difficult to collect and may pose privacy concerns <cit.>. As pointed out in the discussion of the final feature selection in Section <ref>, the daily density estimate from the Facebook Population Map (Tile Level), which provides a distribution of the human population for the whole London region, has been selected. These maps provide statistics on the aggregate number of people seen at a location (tiles or administrative polygons) in 8-hour increments during a crisis compared to a pre-crisis baseline period. We acknowledge that our approach depends on the information made publicly available. However, one of our major goals is to build the most flexible model possible, and therefore components can be swapped in or out depending on data availability. Some geographic regions might be affected by the low granularity of the available data and, consequently, the prediction accuracy may rely on a lower spatio-temporal resolution. Our spatial model is very similar to the temporal one: a simple CNN with two convolutional layers with pooling and four fully connected layers on top. The architecture is illustrated in Figure <ref>. The network outputs the prediction of the selected epidemiological state, similarly to the temporal CNN. The input data has the shape H × W, where H is the height of the maps and W is the width of the maps. The raw maps are an array of latitude and longitude coordinates mapping the population density. We decided to map this array into a rectangular array, or tensor, to simulate a spatial picture of the density in London at a given time. It generates (172 × 287) tensors. The input data is pre-processed similarly to the temporal data. The number of filters is L and the kernel size is K_1 × K_2. The optimal values of the parameters are determined by a grid search. A non-linear ReLU function activates each convolutional layer. We decided to fix the pooling filter size to 2 × 2, to reduce the dimension of the search space for the hyperparameter optimisation and to avoid computational problems with the variable architecture of the CNN. The four fully connected layers combine the most relevant feature maps and aggregated spatial information from the previous convolutional layers. The last fully connected layer has a fixed size, equal to that of the last fully connected layer of the Temporal CNN. This is done to perform an equal convolution operation in the fused CNN. Epidemic dynamics are successfully emulated by various discrete states representing the health situation from different angles at a given time.
For instance, several compartments, such as Susceptible, Infected, and Recovered, categorise population groups, possibly based on their sex or age, to reproduce and anticipate the propagation of the infection. Ultimately, the finer the classification adopted, the more precise the simulation is. Inspired by this approach, we decided to investigate what type of data describing the state of the epidemic is publicly available and when it is publicly released on the Internet. We only focus on publicly available data for the sake of flexibility and reproducibility of our work.

§.§.§ Spatial CNN architecture: Set up

For the Spatial CNN, we investigate the architecture parameters to find the optimal combination. We selected various kernel sizes and numbers of filters in the grid search. From the sample of experimental results in Table <ref>, 1 × 1 convolutions produce interesting results. They offer a good trade-off between accuracy and reducing the computational load through dimensionality reduction. This operation also introduces additional non-linearity into the network <cit.>. However, the 7 × 7 convolution is more robust and more stable on this data according to the experiments. For the selected input features (172 × 287), it is not always relevant to increase the number of convolutional filters, as they will increase the redundancy without a significant impact on the final prediction while increasing the computational cost. The optimal numbers were respectively 32 and 96.

§.§ Combined CNN Forecast

In this section, we compare the Temporal and Spatial baseline models with the Fused-CNN based on the combination described in Section <ref>. We adopted two commonly used CNN architectures for the temporal and spatial networks, pre-trained with their optimal parameters identified previously. The Fused-CNN has been trained with its optimal parameters, found in the same manner as for the CNN-T and CNN-S. All the assessments of MAE scores have been done using 10 cross-validation runs. Table <ref> shows the performance of the three networks implemented, demonstrating an average accuracy improvement of 32.8%. The fusion operation potentially avoids overfitting, given that only the pairs of features that the two networks agree on can contribute to the regression; all the features with the potential for overfitting were dropped.

§.§ Performance evaluation of combining multiple data sources

In this section, we explore the Data Assimilation step implemented to forecast the state of the model, based on the description provided in Section <ref>. Sequential Bayesian filters are sensitive to the ensemble initialisation. To assess the sensitivity of our filter, multiple initialisations of the state model have been explored. It is well known that the stochastic Kalman Filter introduces stochastic noise. This could affect the performance of the filter to an extent that depends on the exact configuration of the DA step, on the nonlinearity of the dynamics, and, above all, on the size of the ensemble <cit.>. Increasing the variance of the ensemble perturbation improved the convergence of the filter. Table <ref> summarises the key information. The best set of parameters is 0.1 for the observation covariance matrix R, an ensemble size of 50 (a value commonly seen in the literature), and zero-initialised ensemble states. The variance of the ensemble perturbation was fixed at 1.
§.§ Architecture Flexibility

In this last experiment, we test the flexibility of our approach by removing initial states and models to check the relative improvements with respect to the default model. For example, we tested the prediction of a single epidemiological state: CNN-T_cases predicts the daily number of COVID-19 cases, CNN-T_deaths the daily number of deaths, and CNN-T_cases,deaths predicts both. The model's learning capabilities depend on the state label and the features. As seen in Table <ref>, the CNN-T_deaths is more likely to extract and select relevant patterns from the features than the CNN-T_cases is. Additionally, it can be inferred that some intrinsic characteristics of both states are shared across layers, as the simultaneous prediction of the two states scores close to the average of the single-state predictions. The more states are predicted, the more information is cross-correlated and shared across the layers and, potentially, the more robust and accurate the prediction might be. Especially when some relevant state labels are missing due to a lack of available data, the other states might help activate other internal connections between features to emulate the dynamics of the infection correctly. Contrary to the Temporal CNN, forecasting the number of cases is more precise than forecasting deaths: the CNN-S_cases seems to learn the underlying infection dynamics better (Table <ref>). We understand this as follows: since the input data are daily frames of the population density in London, the difference in population density between two successive frames is somewhat related to population movement. This movement could perpetuate the virus transmission through human contact. With the multiple time-frame analysis performed by the convolutional layers, the algorithm could infer this notion of transmission and, consequently, better predict the number of cases, as it is directly linked to it. Additionally, it is possible that the algorithm analyses the hottest points on the map and directly maps them to an increase in the case counts, considering these points as areas of potential virus transmission. Granted, it does not take into consideration social distancing and hygiene measures. However, this information might be included in future work.

§ RESULTS AND DISCUSSION

§.§ Evaluation of noise insensitivity

The combination of CNNs shows its ability to merge knowledge from the Temporal and Spatial CNNs. The previous experiments highlight an attractive characteristic: the CNN-T is better at predicting the number of deaths and the CNN-S the number of cases; yet both networks must agree on feature maps at the same locations (though probably with different intensities), otherwise the performance would not have increased. It is probable that, without the other network, some activations would not be intense enough to contribute. Hence, the knowledge is well shared, and identical parts of the epidemic dynamics are indirectly emulated. The EnKF forecast provides minimal improvement (Figure <ref>). However, from the previous experiments, incorporating uncertainties into the state provides robustness and consistency during the forecast step performed by the Fused-CNN. This is particularly interesting when forecasting states from multiple streams of information.

§.§ Comparison with compartmental models

We compared our model to a variety of compartmental models.
The first model, SEIR <cit.>, is a standard compartmental model in which the population is divided into Susceptible (S), Exposed (E), Infectious (I), and Recovered (R) individuals. The second comparison model is an extended SEIR model <cit.>, introducing checkpoints to change some model parameters during the simulation. The checkpoints introduced in this model are closely linked to the evolution of government action regarding the surge of COVID-19 in the UK. They correspond to starting a form of social distancing with the general lockdown announcement (on March 23rd 2020) and then stopping social distancing (on May 10th 2020). These two standard compartmental models capture significant aspects of infectious disease dynamics. Still, they are deterministic mean-field models that assume uniform population mixing (every individual in the population is equally likely to interact with every other individual). Then, we decided to compare our model against a network SEIR model <cit.>. When investigating disease transmission, it is often essential to account for stochasticity, heterogeneity, and the structure of contact networks, where trying to limit the spread can be viewed as perturbing the contact network (e.g. social distancing) or making use of it (e.g. contact tracing). All the parameters are inferred from the recent COVID-19 pandemic outbreak in Wuhan <cit.>. The population is the size of the Greater London region at the starting day of our experiment (March 2nd 2020). Figure <ref> shows the results of the comparative experiments. Compared to the SEIR models, additional data sources can be beneficial in reducing uncertainty. However, they can also add extra noise, which can bias results significantly. In our model (EnKF), the use of Data Assimilation, the combination of multiple data sources and the fused CNN approach overcome this issue. The Extended SEIR model is designed to track behavioural changes in the population, helping it make better overall predictions. Our model automatically learns these patterns from the data through the social media and air quality datasets, which can be used as proxies for human activity indicators. Regarding COVID-19, these indicators tend to capture when people started social distancing, which is strongly related to the quarantine and self-isolation measures governments took. EnKF also offers superior flexibility compared to traditional SEIR models, as it is purely data-driven and built upon a modular architecture. When a data source stops being useful for making predictions or for serving as a proxy for human activity, the stream can be replaced: a new CNN architecture can be trained, and its output fused with the other CNN models available.

§ CONCLUSION

This paper introduces a novel approach to predicting epidemiological parameters by integrating daily signals from various sources of information. The combination of CNN models from different data streams has successfully been implemented to build robust predictions that simulate the parameters of the infection dynamics. The proposed approach outperforms standard CNN predictions by an increase of up to 32.8% in accuracy on COVID-19 temporal and spatial data. The Data Assimilation step also introduces stronger robustness and consistency to our prediction. There are a few avenues to explore in this research project. One of the most noticeable additions is integrating other data streams with different modalities, sample rates and timelines.
For instance, additional social media information, such as aggregated message counts, can be a good proxy for human activity, especially in strict lockdown periods. In the Greater London Area, Transport for London offers near real-time activity monitoring on all transport links: tube, buses, trains and cycling information, which is another good predictor of how well the lockdown is enforced. If given access to a stream of private datasets, such as card payments or aggregated supermarket delivery information, this model could demonstrate that the flexibility of the ensemble is a powerful asset. Moreover, our approach may require further assessment and validation on larger datasets. This could be done by assessing its flexibility and adaptability on a new city or a new pathogen.

§ AUTHOR CONTRIBUTIONS

Romain Molinas: Conceptualization, Methodology, Software, Formal analysis, Data Curation, Writing - Original Draft, Visualization; César Quilodrán Casas: Conceptualization, Methodology, Writing - Review & Editing, Supervision; Rossella Arcucci: Methodology, Writing - Review & Editing; Ovidiu Serban: Conceptualization, Methodology, Writing - Original Draft, Supervision.

§ COMPETING INTERESTS

The authors have no competing interests to declare.
http://arxiv.org/abs/2307.02407v1
20230705162823
Quantum Fisher Information and multipartite entanglement in spin-1 chains
[ "Federico Dell'Anna", "Sunny Pradhan", "Cristian Degli Esposti Boschi", "Elisa Ercolessi" ]
quant-ph
[ "quant-ph", "cond-mat.other", "cond-mat.stat-mech" ]
Dipartimento di Fisica e Astronomia dell’Università di Bologna, I-40127 Bologna, Italy INFN, Sezione di Bologna, I-40127 Bologna, Italy INFN, Sezione di Bologna, I-40127 Bologna, Italy CNR-IMM, Sezione di Bologna, via Gobetti 101, 40129, Bologna, Italy Dipartimento di Fisica e Astronomia dell’Università di Bologna, I-40127 Bologna, Italy INFN, Sezione di Bologna, I-40127 Bologna, Italy In this paper, we study the ground state Quantum Fisher Information (QFI) in one-dimensional spin-1 models, as a witness to Multipartite Entanglement. The models addressed are the Bilinear-Biquadratic model, the most general isotropic SU(2)-invariant spin-1 chain, and the XXZ spin-1 chain, both with nearest-neighbor interactions and open boundary conditions. We show that the scaling of the QFI of strictly non-local observables can be used for characterizing the phase diagrams and, in particular, for studying topological phases, where it scales maximally. Analysing its behavior at the critical phases, we are also able to recover the scaling dimensions of the order parameters both for local and string observables. The numerical results have been obtained by exploiting the Density Matrix Renormalization Group algorithm and Tensor Network techniques. Quantum Fisher Information and multipartite entanglement in spin-1 chains Elisa Ercolessi Received ; accepted =========================================================================

§ INTRODUCTION

In addition to being a crucial resource for quantum-enhanced metrology <cit.> and quantum computation <cit.>, entanglement has been used to characterize quantum phases and quantum phase transitions (QPTs) in many-body models, particularly for low-dimensional systems, and has also been important to uncover exotic states of matter like topological spin liquids <cit.> or to describe many-body localization <cit.>. Bipartite entanglement has been the primary focus in the literature <cit.>, with the area law <cit.> serving as a benchmark for relating the amount of entanglement between two partitions of a quantum many-body system to the surface area between the blocks <cit.>. It has been proved <cit.> that the ground state of some spin chains should exhibit Multipartite Entanglement (ME), but this topic has somehow received less attention <cit.>, despite the fact that many-body quantum states are far more complex than what can be captured with bipartite entanglement only. A possible estimator of multipartite entanglement is the Quantum Fisher Information (QFI), a quantity which is introduced in the context of the problem of phase estimation in metrology <cit.> and is of use in the study of the sensitivity of atomic interferometers beyond the shot-noise limit <cit.>. The QFI associated with local operators has recently been used to observe ME in models exhibiting Ginzburg-Landau-type quantum phase transitions <cit.> and in spin systems such as the Ising and XY models <cit.>, where ME is expected to diverge at criticality. It has been pointed out, however, that the use of local operators in this method fails to detect ME at topological quantum phases and transitions. To address this issue, QFI-based methods need to be extended to include also non-local operators, as first outlined in <cit.>. In this paper we are going to study the ME in two paradigmatic spin-1 systems with nearest-neighbor interactions: the Bilinear-Biquadratic (BLBQ) model and the XXZ model, two models with rich phase diagrams which both exhibit a topological Haldane phase.
More specifically, we will show that the QFI of local and non-local order parameters (such as string-order parameters <cit.>) is able to classify all phases of the models as well as to give us information about universal critical exponents at phase transitions. The paper is structured as follows. In Sec. <ref>, we briefly review ME and QFI, and their relationship. In Sec. <ref> we discuss the BLBQ model; after describing its phase diagram, we analyze the scaling of the QFI with respect to some selected operators. The same is done in the last Sec. <ref> for the XXZ model. A summary of the obtained results is discussed in the conclusions in Sec. <ref>, with some possible outlooks for future research.

§ QUANTUM FISHER INFORMATION AND MULTIPARTITE ENTANGLEMENT

In this section we concisely review the concepts of ME and QFI, elucidating their relationship <cit.>. A pure state of N particles is k-producible if it can be written as:

|ψ_k-prod⟩ = ⊗_{l=1}^{M} |ψ_l⟩

where |ψ_l⟩ is a state of N_l ≤ k particles and M is the number of parties into which it is possible to split up the state, so that ∑_{l=1}^{M} N_l = N. A state is k-entangled if it is k-producible but not (k-1)-producible. Therefore, a k-particle entangled state can be written as a product (<ref>) which contains at least one state |ψ_l⟩ of N_l = k particles which does not factorize further. So, in this notation, a state |ψ_1-ent⟩ is fully separable while a state |ψ_N-ent⟩ is maximally entangled. These definitions can be extended to mixed states via convex combination. The QFI is a fundamental quantity in the context of phase estimation and is crucial to prove that entanglement can increase the sensitivity of an interferometer beyond the shot noise, up to the Heisenberg limit. The QFI for a general observable Ô and a mixed probe state ρ = ∑_i p_i |ϕ_i⟩⟨ϕ_i|, with p_i > 0 and ∑_i p_i = 1, is given by

F_Q[ρ, Ô] = 2 ∑_{i,i'} (p_i - p_{i'})^2 / (p_i + p_{i'}) |⟨ϕ_i|Ô|ϕ_{i'}⟩|^2.

In the case of a pure state |ψ⟩ the QFI has a simple expression and is directly proportional to the variance of the operator:

F_Q[|ψ⟩, Ô] = 4 (ΔÔ)^2 ≡ 4 ( ⟨Ô^2⟩ - ⟨Ô⟩^2 ).

For separable states ρ_sep, F_Q[ρ_sep, Ô] is bounded from above <cit.>:

F_Q[ρ_sep, Ô] ≤ N (λ_max - λ_min)^2

where λ_max and λ_min are the maximum and minimum eigenvalues of Ô. This is not a fundamental limit, since it can be surpassed by using proper entangled states. Indeed, for general probe pure states |ψ⟩ of N particles, we have <cit.>

F_Q[|ψ⟩, Ô] ≤ N^2 (λ_max - λ_min)^2,

where the equality is saturated only by maximally entangled states. This gives the Heisenberg limit in phase estimation and quantum interferometer theory. There is a direct relationship between ME and QFI, as has been shown in <cit.>. For any k-producible state |ψ⟩_k-prod of N particles the QFI is bounded by

F_Q[|ψ⟩_k-prod, Ô] ≤ (s k^2 + r^2)(λ_max - λ_min)^2

where s = ⌊N/k⌋ (the integer part of N/k) and r = N - sk. Therefore, a violation of (<ref>) indicates (k+1)-particle entanglement. The quantity in (<ref>) has been rescaled by the factor (λ_max - λ_min)^2, which in the case of spin-1 operators is equal to 4. By a straightforward calculation it is possible to see that this bound is saturated by the product of s GHZ states of k particles and a GHZ state of the remaining r particles:

|ψ⟩ = ⊗_{i=1}^{s} ( (|λ_max⟩^{⊗k} + |λ_min⟩^{⊗k}) / √2 )_i ⊗ ( (|λ_max⟩^{⊗r} + |λ_min⟩^{⊗r}) / √2 ),

where |λ_max⟩ and |λ_min⟩ denote single-particle eigenstates of Ô with maximal and minimal eigenvalue. If we introduce the QFI density f_Q[|ψ⟩, Ô] ≡ F_Q[|ψ⟩, Ô] / N, then (<ref>) can immediately be read as f_Q[|ψ⟩_k-prod, Ô] ≤ k, where, for simplicity, we set s = N/k. It has been proved that f_Q > 1 is a sufficient condition for multipartite entanglement <cit.>.
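As a simple numerical check of these bounds, the script below builds a GHZ state of N spin-1 particles out of the S^z = ±1 eigenstates and verifies that the rescaled QFI density of the collective operator Ô = ∑_i S_i^z reaches the maximal value f_Q = N. This is an illustrative script of ours and not part of the numerics of the following sections.

```python
import numpy as np
from functools import reduce

def kron_all(ops):
    return reduce(np.kron, ops)

N = 4                                   # number of spin-1 sites
Sz = np.diag([1.0, 0.0, -1.0])          # single-site S^z in the |+1>,|0>,|-1> basis
Id = np.eye(3)

# Collective operator O = sum_i S_i^z
O = sum(kron_all([Sz if j == i else Id for j in range(N)]) for i in range(N))

# GHZ state (|+1,...,+1> + |-1,...,-1>)/sqrt(2)
up = np.array([1.0, 0.0, 0.0])
dn = np.array([0.0, 0.0, 1.0])
psi = (kron_all([up] * N) + kron_all([dn] * N)) / np.sqrt(2)

var = psi @ O @ O @ psi - (psi @ O @ psi) ** 2
FQ = 4 * var / (1 - (-1)) ** 2          # rescale by (lambda_max - lambda_min)^2 = 4
print(FQ / N)                           # QFI density: equals N (= 4), the Heisenberg limit
```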
In this paper, the observables we consider are constructed by using the spin-1 operators S_i^α, where α = x, z, and their non-local counterparts S̃^α. The latter are defined as follows:

S̃^x_j = S_j^x ( e^{iπ ∑_{l>j} S_l^x} ),   S̃^z_j = ( e^{iπ ∑_{l<j} S_l^z} ) S_j^z.

These operators are obtained by applying a non-local unitary transformation to the spin degrees of freedom. For more details regarding the origin of this transformation we refer to the discussion of the AKLT model in Appendix <ref>.

§ BILINEAR-BIQUADRATIC MODEL

In this section we consider the Bilinear-Biquadratic (BLBQ) model on a chain of N sites:

H = J ∑_{i=1}^{N} [ S_i · S_{i+1} - β (S_i · S_{i+1})^2 ],

where S_i = (S_i^x, S_i^y, S_i^z) is the spin-1 operator for site i, J is the nearest-neighbor coupling and β is a real parameter expressing the ratio between the bilinear and biquadratic terms. This is the most general SU(2)-invariant isotropic spin-1 Hamiltonian with nearest-neighbor interactions only. It is sometimes convenient to rewrite the Hamiltonian (<ref>) as

H = J' ∑_{i=1}^{N} [ cos(θ) S_i · S_{i+1} - sin(θ) (S_i · S_{i+1})^2 ]

where we have set J = J' cos(θ) and β = tan(θ), with the angular parameter θ varying in [-π, π]. By fixing J' = 1, the phase diagram can be drawn by letting θ vary in [-π, π], as shown in Fig. <ref>. In the following we describe the phases of the BLBQ model and some remarkable points.

§.§ Phase Diagram

The Haldane phase corresponds to the region -π/4 < θ < +π/4: here the system is massive, with a unique ground state and exponentially decaying correlation functions <cit.>. We recognize the antiferromagnetic Heisenberg model for θ = 0 <cit.>. For θ = -arctan(1/3) we recover the AKLT model, whose ground state is a Valence-Bond State (VBS), in which each spin-1 is thought of as made of two spin-1/2's that couple with the spins of neighboring sites in a singlet (entangled) state. A pictorial image of the AKLT state for a six-site chain is given in the upper panel of Fig. <ref>. The ground state has an exact description as a Matrix Product State, which is very useful for performing exact calculations. In particular, it can be shown that the local correlation functions have an exponential decay (see Appendix <ref>). The Dimer phase corresponds to π/4 < θ < 3π/4: the system has a two-fold degenerate ground state and a small excitation gap <cit.>. The degeneracy is due to the broken translation symmetry, since neighboring spins tend to couple in pairs. A good approximation of the ground state in the whole phase is given by the dimer state <cit.>:

|d⟩_± = ⊗_{i=1}^{L/2} (1/√3) ( |+⟩_{2i} |-⟩_{2i±1} + |-⟩_{2i} |+⟩_{2i±1} - |0⟩_{2i} |0⟩_{2i±1} ),

which is shown in the lower panel of Fig. <ref>. The Haldane and Dimer phases are separated by the so-called Takhtajan-Babujian critical point, at θ = π/4. Here the Hamiltonian is integrable by means of the Bethe Ansatz technique <cit.> and its universality class is that of an SU(2)_k Wess-Zumino-Witten conformal field theory with k = 2, and therefore with central charge c = 3/2 <cit.>. In the range -π/2 < θ < -π/4, there is another antiferromagnetic phase, called the Trimer phase since the ground state tends to be invariant under translations of three sites. This is a gapless phase <cit.>. At θ = -π/4, it is separated from the Haldane phase by a continuous phase transition.
This point corresponds to the so-called Lai-Sutherland model, which has an enhanced SU(3) symmetry, the Hamiltonian being equivalent to

∑_{i=1}^{N-1} [ S_i · S_{i+1} + (S_i · S_{i+1})^2 ] = N/3 + (1/2) ∑_{i=1}^{N-1} ∑_{a=1}^{8} λ_i^a λ_{i+1}^a

where the λ^a are the Gell-Mann matrices, the eight generators of the SU(3) algebra. It is in the universality class of the SU(3)_k Wess-Zumino-Witten conformal field theory with k = 1 <cit.>. Here we will not consider the last phase present in Fig. <ref>, namely the ferromagnetic phase, which corresponds to an ordered and separable ground state. The BLBQ model has a hidden symmetry (see Appendix), which forces one to introduce non-local order parameters (NLOPs) <cit.> to classify all phases. NLOPs, which are also called string order parameters, are defined as follows:

C̃^α = lim_{r→∞} ⟨ S^α_0 ( ∏_{k=2}^{r-1} e^{iπ S_k^α} ) S^α_r ⟩

where α = x, y, z. The NLOPs C̃^α have a non-zero expectation value only in the Haldane phase. In the following we examine both the expectation value and the QFI of the non-local operator

Õ^z ≡ ∑_{j=1}^{N} S̃_j^z,   with   S̃^z_j ≡ ( e^{iπ ∑_{l<j} S_l^z} ) S_j^z,

evaluated on the ground state |ψ⟩ in the different phases of the BLBQ model. With some algebra one finds:

⟨Õ^z⟩ = ∑_{l=1}^{N} ⟨ ( ∏_{j=1}^{l-1} Ω(j) ) S_l^z ⟩

and

⟨(Õ^z)^2⟩ = ∑_{l=1}^{N} ⟨ (S^z_l)^2 ⟩ - 2 ∑_{l<m} ⟨ S^z_l ( ∏_{j=l+1}^{m-1} Ω(j) ) S^z_m ⟩,

where we have used Ω(l) = e^{iπ S_l^z}, Ω^2(l) = 𝕀 and S_l^z Ω(l) = -S_l^z. These expressions are used to calculate the QFI

F_Q[|ψ⟩, Õ^z] = ⟨ψ|(Õ^z)^2|ψ⟩ - ⟨ψ|Õ^z|ψ⟩^2,

which coincides with (<ref>) up to the factor 4 that we have neglected, since we are dealing with spin-1 operators with λ_max = -λ_min = 1.

§.§ Numerical results

To rewrite (<ref>), it is useful to define the following N × N upper-triangular matrix:

M = ( ⟨(S_1^z)^2⟩   ⟨S_1^z S_2^z⟩   ⟨S_1^z Ω(2) S_3^z⟩   ⋯   ⟨S_1^z Ω(2)⋯Ω(N-1) S_N^z⟩
      0             ⟨(S_2^z)^2⟩     ⟨S_2^z S_3^z⟩         ⋯   ⟨S_2^z Ω(3)⋯Ω(N-1) S_N^z⟩
      0             0               ⟨(S_3^z)^2⟩           ⋯   ⋯
      ⋯             ⋯               ⋯                     ⋯   ⋯
      0             0               0                     ⋯   ⟨S_{N-1}^z S_N^z⟩
      0             0               0                     ⋯   ⟨(S_N^z)^2⟩ ),

where each matrix element M_ij is given by

M_ij = ⟨ S_i^z Ω(i+1) ⋯ Ω(j-1) S_j^z ⟩ if i ≤ j, and M_ij = 0 otherwise.

Similarly, for the term (<ref>) we can define the N-dimensional vector

V = ( ⟨S_1^z⟩, ⟨Ω(1) S_2^z⟩, …, ⟨Ω(1)⋯Ω(N-1) S_N^z⟩ ),

such that ⟨Õ^z⟩ turns out to be the sum of all its elements. In this way, the QFI can be written as

F_Q[|ψ⟩, Õ^z] = ∑_{i=1}^{N} M_ii - 2 ∑_{i=1}^{N-1} ∑_{j>i}^{N} M_ij - ( ∑_{i=1}^{N} V_i )^2.

Simulations to compute the elements of M and V can easily be implemented numerically. The states can be represented with Matrix Product States (MPSs) and the ground states can be obtained with the DMRG algorithm. The numerical simulations have been done using the ITensor library <cit.>, and the DMRG computations have been performed with bond dimensions up to χ = 300 and a truncation error cutoff set to 10^-12, for higher precision. In order to investigate the scaling of the QFI density f_Q = F_Q/N, we have fitted a function of the form q + bN^δ (for the Haldane phase and the critical points) or q + b ln N (for the dimer and trimer phases), for system sizes up to N = 120. However, when the data showed a particularly flat trend, we fitted against a constant function, in order to minimize the standard error on the parameters. The results of the numerical calculations are summarized in Table <ref> for the Haldane phase and in Table <ref> for the Dimer and Trimer phases. The fits and their errors are computed using standard methods, such as those provided by Mathematica <cit.>.
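The snippet below sketches how, once the correlators entering M and V have been measured (e.g. with DMRG), the QFI above and the scaling fit f_Q = q + bN^δ could be assembled; the arrays are placeholders standing in for the measured expectation values (here mimicking the AKLT estimate f_Q ≃ 2/9 + 4N/9 derived below), not actual data from the tables.

```python
import numpy as np
from scipy.optimize import curve_fit

def qfi_from_correlators(M: np.ndarray, V: np.ndarray) -> float:
    """F_Q = sum_i M_ii - 2 sum_{i<j} M_ij - (sum_i V_i)^2, with M upper triangular."""
    diag = np.trace(M)
    offdiag = np.triu(M, k=1).sum()
    return diag - 2.0 * offdiag - V.sum() ** 2

# Fit of the QFI density f_Q = F_Q / N against q + b * N**delta
def scaling_law(N, q, b, delta):
    return q + b * N ** delta

sizes = np.array([20, 40, 60, 80, 100, 120])
f_q = np.array([9.1, 18.0, 26.9, 35.8, 44.7, 53.6])   # placeholder values ~ 2/9 + 4N/9
(q, b, delta), cov = curve_fit(scaling_law, sizes, f_q, p0=(0.2, 0.4, 1.0))
errs = np.sqrt(np.diag(cov))
print(delta, errs[2])   # fitted exponent and its standard error
```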
To analyze these results, let us start from the AKLT point, where the ground state is known exactly. To calculate the QFI analytically we can exploit Lemma 2.6 of <cit.>, extended to a string observable. Let O be an observable and N the system size; then for any l ≤ N such that the support of O is contained in l sites we have

lim_{N→∞} ⟨Ω^N_{αβ}| O |Ω^N_{αβ}⟩ / ⟨Ω^N_{αβ}|Ω^N_{αβ}⟩ = ∑_{α,β} ⟨Ω^l_{αβ}| O |Ω^l_{αβ}⟩ / ∑_{α,β} ⟨Ω^l_{αβ}|Ω^l_{αβ}⟩

where |Ω^N_{αβ}⟩ is one of the four ground states of the AKLT model (see Appendix <ref>). This gives us an operational way to analytically calculate the terms of the QFI on the infinite-volume ground state from (<ref>) for a finite chain. It turns out that each diagonal term is equal to 2/3, while each of the N(N-1)/2 off-diagonal terms quickly approaches -4/9 (i.e. the asymptotic value of the NLOP (<ref>)) as N becomes larger. As the last addend in (<ref>) is negligible, the QFI density for a system of N sites scales linearly as

f_Q(|ψ_AKLT⟩, Õ^z) ≃ 2/9 + (4/9) N,

as confirmed by the numerical results in Table <ref>. The same argument holds for the Heisenberg point, where the asymptotic value of the NLOP is known to be ≃ 0.36 <cit.>. Furthermore, we observe that the QFI density keeps a linear scaling in the whole Haldane phase, as shown in Fig. <ref>. One can notice that the slope of the curves progressively decreases as we move away from the AKLT point. When moving outside the Haldane phase, the scaling in the dimer and trimer phases becomes sublinear, as can be seen in Fig. <ref>. In the dimer phase, the numerical results can be compared with the analytical calculation performed on the dimer state (<ref>), which can be considered a good approximation, as mentioned in Sec. <ref>. The resulting QFI density f_Q(|d⟩, Õ^z) yields 4/3, corresponding to a 2-partite entanglement structure, which is expected since the state (<ref>) is a product of two-site blocks. Then, assuming that Õ^z is a good choice for the whole dimer phase, we can appreciate how good this approximation is at the different points of this phase by comparing the various scalings with the exact value 4/3. As we show in Table <ref> and Fig. <ref>, a good function that fits the data is of the form q + b log N, with b progressively decreasing as β goes to infinity. We want to stress the crucial difference between the Haldane phase and the dimer and trimer ones. From the point of view of the QFI criterion, the multipartite entanglement structure, in other words the k in (<ref>), grows linearly with the system size in the Haldane phase, while in the other two phases k grows sub-linearly. This may suggest that the ground state in the Haldane phase may not be factorizable into blocks of finite length in the thermodynamic limit, and this can be shown using only non-local operators. However, we cannot have direct information on the exact value of k using only Õ^z, because we cannot be sure that this is the operator saturating the ground state QFI. Let us now analyse the scaling behaviour at the transition points β = ±1. The spin-spin correlations are asymptotically given by the fundamental WZW primary fields, leading to the prediction that, in an infinite system, the dominant antiferromagnetic correlations decay as a power law:

⟨ S^α_0 S^α_r ⟩ ∼ (-1)^r / r^η,

where η = 2Δ and the scaling dimension Δ = h + h̄ can be obtained from the primary field conformal weights of a general SU(n) level-k WZW model <cit.>:

h = h̄ = (n^2 - 1) / (2n(n+k)).

As we said in the previous sections, β = ±1 (θ = ±π/4) are described by SU(2)_2 and SU(3)_1 conformal theories, which means that their values of η are equal to 3/4 and 4/3, respectively.
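For completeness, these two values follow directly from the formula above: since η = 2Δ = 4h = 2(n^2 - 1)/(n(n+k)), one gets η = (2·3)/(2·4) = 3/4 for SU(2)_2 and η = (2·8)/(3·4) = 4/3 for SU(3)_1.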
We recover this power-law scaling of the correlators both for string and local operators, as we show in Fig. <ref>. For β = 1, the numerical data display small oscillations between even and odd N, due to the double degeneracy that emerges in the dimer phase. To increase the accuracy of the fitting, we have decided to consider only odd values of N; this, however, does not modify the value of the exponents in the thermodynamic limit, since these oscillations tend to zero as N increases. As shown in <cit.>, the QFI density of one-dimensional models at the critical point is supposed to scale as f_Q(O^α) ∼ N^δ_Q (up to a non-universal prefactor and subleading corrections), with δ_Q = 1 - 2Δ^α, where Δ^α is the scaling dimension of the operator O^α. We can recover this result from our approach and numerical data as well. Indeed, considering that the first sum in (<ref>) grows as ∼ N (so it brings just a constant contribution to f_Q) and neglecting V (because we are at the critical point), the only relevant contribution is given by the sum of the off-diagonal terms of the M matrix. Exploiting (<ref>) in the continuum limit, we get:

∑_{r'=1}^{N-1} ∑_{r>r'}^{N} ⟨ S^α_{r'} S^α_r ⟩ ⟶ ∫_1^N dr' ∫_{r'}^N dr / r^η ∼ N^{2-η},

so that

f_Q(O^α) ∼ N^{1-2Δ^α}.

The same holds for string operators, up to a non-universal prefactor and subleading corrections. It is now evident why we get the expected numerical value δ ≃ δ_Q = 1 - 2Δ = 1/4 for the string magnetization, as reported in Table <ref>. A similar reasoning can be put forward for the calculation of f_Q(O^z_st) for the local staggered magnetization operator along the z-axis, defined as

O^z_st = ∑_{j=1}^{N} (-1)^j S_j^z.

Our numerical results for the QFI density of O^z_st yield q = -3.770 ± 0.002, b = 3.201 ± 0.001 and δ = 0.244 ± 0.001. Thus we are able to read off the critical exponent of the operator from its QFI. At the Lai-Sutherland point β = -1, the numerical data display small oscillations with a periodicity of three sites, due to the trimer configuration that emerges for β < -1. Unfortunately, from the data we observe what is most probably a flat trend, but we are not able to distinguish a constant fit from one that decreases exponentially or, as it should in this case, as a power law with a negative exponent. We believe that prefactors and subleading terms that depend on N might contribute to masking the predicted behaviour at criticality.

§ XXZ SPIN-1 MODEL

§.§ Phase diagram

The XXZ spin-1 chain is a well-studied quantum system that exhibits an interesting phase diagram as a function of the anisotropy parameter J_z. It has the following Hamiltonian:

H = ∑_{i=1}^{N-1} [ J_xy (S^x_i S^x_{i+1} + S^y_i S^y_{i+1}) + J_z S^z_i S^z_{i+1} ],

where we take J_xy = 1 and let J_z vary. It can also be considered as a particular case of the so-called λ-D model <cit.>, which also includes a single-ion anisotropy term of the form ∑_{i=1}^{N} D (S^z_i)^2. The quantum phase diagram of this Hamiltonian has been extensively studied <cit.>. It includes the Haldane phase for 0 < J_z ≲ 1. A second-order phase transition occurs from the Haldane phase to an antiferromagnetic (AF) phase; it belongs to the same universality class as the 2D Ising model, with central charge c = 1/2. Various numerical techniques, including Monte Carlo <cit.> and DMRG <cit.>, have determined the critical value J_z^(IS) = 1.186. A Berezinskii-Kosterlitz-Thouless (BKT) transition occurs at J_z^(BKT) = 0 between the Haldane phase and a gapless disordered XY phase (-1 < J_z < 0). The value of J_z^(BKT) is theoretically predicted to be exactly zero, using bosonization techniques <cit.>.
Numerically, this has been verified via finite-size scaling <cit.> and DMRG <cit.>. The entire XY phase (including the BKT transition point) is a critical phase with conformal symmetry and central charge c=1. Finally, at J_z = -1, a first-order phase transition from the XY phase to a ferromagnetic (F) phase takes place <cit.>. We will not examine this ferromagnetic phase in detail in the following. §.§ Numerical results Given the symmetries of the Hamiltonian, we consider the scaling behaviour of the QFI density of local and string operators along the x and z axes, including the staggered ones. The ones that show an extensive scaling, at least in some phases of the model, are the following: Õ^z=∑_i=1^N S̃_i^z , Õ^x=∑_i=1^N S̃_i^x , O^x_st=∑_i=1^N (-1)^iS_i^x, where, as usual, the operators with the tilde symbol are string operators. Similarly to the previous section, the numerically computed QFI density is fitted against the function f=q+bN^δ, or against a constant if the data present an essentially flat trend. In Fig. <ref> we plot the QFI densities of the operators (<ref>) in the different phases of the model for a chain with N=30 sites. The results of the fitting of the scaling with N are given in Tables <ref>, <ref> and <ref>, and some details of the scaling are reported in Fig. <ref>. Let us analyze each operator below. The operator O^x_st takes its maximal value close to the F-XY transition point and then decreases progressively moving toward the Haldane phase. In particular, analyzing its scaling with N (see Fig. <ref> and Table <ref>) reveals a power-law behaviour in the XY phase with the coefficient δ =0.8376 ± 0.0001 at J_z = -1/2, which gradually reduces (e.g. δ = 0.7574 ± 0.0002 at J_z=0) until it vanishes for J_z ≳ 1. Regarding the string operators (see Tables <ref> and <ref>), it is possible to observe that f(Õ^x) has a power-law scaling in the whole XY phase (including J_z=0), whereas f(Õ^z) appears to be almost flat (δ=0.138 ± 0.003). In the Haldane phase, the QFI for both these operators shows a linear scaling (δ≃ 1) with a slope that increases with J_z, reaching the maximal values at J_z=0.8 and J_z=1, respectively. For J_z=1 we recover the Heisenberg model, where both have the same scaling coefficients, as expected at an isotropic point. The data on QFI can be used to extract information about the critical exponents of relevant operators at phase transition points and about correlation functions in general. At the critical point J_z^(IS), we predict that the scaling dimension of the order parameter is Δ=1/8, in accordance with the universality class of the 2D Ising model, since δ=1-2Δ≃ 3/4. This holds true for the string order operator Õ^x, see Table <ref>, and for the local staggered magnetization O^z_st. The latter is defined similarly to O^x_st in (<ref>), and for it we obtained δ = 0.76 ± 0.01. More generally, we can consider the asymptotic behaviour of local staggered and string correlation functions C^α_st(r) = (-1)^r ⟨ S_0^α S_r^α ⟩, C̃^α(r) = ⟨ S^α_0( ∏_k=2^r-1 e^i π S_k^α) S^α_r ⟩ which are known to have the following behavior for large r in the (massive) Haldane phase <cit.>: C^α=a_0 e^-r/a_1/√(r), C̃^α=a_2+ a_0 e^-r/a_1/r^2 where a_0, a_1 and a_2 are fitting parameters and α=x,z as usual, while at the transition point they scale algebraically: C^z=C̃^x= a_0/r^1/4, C^x=a_0 e^-r/a_1/r^1/4, C̃^z=a_2+ a_0/r^2. The fitting parameters reported in Tables <ref>, <ref> and <ref> and in Fig. <ref> are in agreement with these theoretical predictions.
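To make the finite-size fits quoted above concrete, the following sketch shows how a QFI density can be fitted against f = q + bN^δ with SciPy. This is our own illustration rather than the authors' code: the (N, f) values below are synthetic placeholders standing in for the entries of the tables, and the initial guess is arbitrary.

```python
# Minimal sketch of the finite-size fit f(N) = q + b * N**delta used for the
# QFI densities.  The data points are synthetic placeholders, not values taken
# from the paper's tables.
import numpy as np
from scipy.optimize import curve_fit

def qfi_scaling(N, q, b, delta):
    """Fitting function for the QFI density as a function of the chain length N."""
    return q + b * N**delta

# Hypothetical (N, f) pairs mimicking a power-law trend with delta close to 3/4.
N_vals = np.array([12, 18, 24, 30, 36, 42, 48], dtype=float)
f_vals = -3.0 + 2.5 * N_vals**0.75 + np.random.default_rng(0).normal(0.0, 0.05, N_vals.size)

popt, pcov = curve_fit(qfi_scaling, N_vals, f_vals, p0=(0.0, 1.0, 0.5))
perr = np.sqrt(np.diag(pcov))            # one-sigma uncertainties on (q, b, delta)

for name, val, err in zip(("q", "b", "delta"), popt, perr):
    print(f"{name} = {val:.3f} +/- {err:.3f}")

# A fitted delta close to 3/4 would be consistent with the Ising prediction
# delta = 1 - 2*Delta = 1 - eta at the Haldane-AF transition point.
```

A flat trend (δ ≃ 0) can be handled in the same way by fitting a constant, as stated in the text.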
In order to understand the results, two comments are necessary. The first one is that in the Haldane phase and at the critical points the only relevant contribution to the QFI density comes from (<ref>), i.e. the M matrix built from the spin-spin correlators. The second one is that, as we said previously for the BLBQ model, from our data it is not possible to distinguish the flat scaling of f from an exponential or power-law decay with δ<0. Then, considering the correlations (<ref>) and (<ref>), we can understand that for string operators in the Haldane phase the elements M_ij approach a_2. This leads to an f that scales linearly, with slope b ≃ a_2. At the critical point, our computations give δ equal to 0.757± 0.001 and 0.727 ± 0.001 for O_st^z and Õ^x, respectively, which is comparable to 1 - η as expected. Finally, when -1 < J_z < 0, the system is in the XY phase. In this extended area of criticality, also called the “critical fan”, the Hamiltonian can be replaced by the Hamiltonian of a Gaussian model <cit.>, which admits two primary operators with conformal dimensions: Δ_1=1/8, Δ_2=χ(J_z)/4, where χ is a function of the coupling J_z such that χ(0)=1/2 and χ(-1)=0. The explicit form of the function χ depends on the details of how the lattice model is mapped to the Gaussian model at criticality. This means that there exists one operator for which the critical index δ of the QFI density is constantly 3/4, and one for which it varies between 3/4 and 1. We identify these operators with Õ^x and O^x_st, respectively, as suggested by the data of Tables <ref> and <ref>: at J_z=0 the values of their fitting parameters are extremely close to each other and close to 0.75; moving toward J_z=-1/2, f(Õ^x) remains fixed at a similar value (δ =0.745 ± 0.002) while f(O^x_st) has δ=0.8376 ± 0.0001, and the latter continues to increase, as suggested by Fig. <ref>. § CONCLUSIONS AND OUTLOOKS In this paper we have shown how the QFI is able to detect multipartite entanglement (ME) in spin-1 chains with short-range interactions. A key aspect of these calculations is the use of string operators, whereas the QFI relative to local operators fails to detect ME, especially in the topological phases of these models, i.e., the Haldane phase. For the BLBQ model, given the symmetries of the Hamiltonian, we chose the string magnetization along z and obtained an extensive behavior in the topological phase, signaling the divergence of ME with the system size. The same applies to the Haldane phase of the XXZ model as well. In the dimer and trimer phases we found a sublinear behaviour; in particular, for the dimer phase we also propose to use the QFI density to estimate how well the two-site product state approximates the various ground states in this phase. Furthermore, we recover the expected power-law scaling of the QFI density for these 1D models in the critical phases. In fact, by knowing the critical exponent η of the correlators, or the scaling dimension Δ of the operator with which the QFI is calculated, it is possible to predict how f will scale at these critical points: δ=1-2Δ. From numerical simulation we obtained δ≃0.25 at the Takhtajan-Babujian point of the BLBQ model and δ≃0.75 at the AF-Haldane transition point of the XXZ model, as expected. Throughout the “critical fan” (XY phase) of the XXZ model, we observe a power-law behavior of f with two different trends of δ: one fixed at the constant value of 3/4 (string operator Õ^x), the other varying between 3/4 and 1 (staggered magnetization O^x_st), in analogy to what was done in <cit.>.
In the light of these promising results, it would be interesting to investigate whether it is feasible to use it for systems with more complicated degrees of freedom, such as models with higher symmetry groups <cit.> or with long range interactions <cit.>. The authors would like to thank D. Vodola and S. Tibaldi for the helpful discussions. The work is partially supported by INFN through the project QUANTUM. E.E. is also supported by the the QuantERA 2020 Project QuantHEP. § THE AKLT MODEL The AKLT model is the projection point at β = - 1/3, where the Hamiltonian can be expressed as a sum over the projection operators P_j(i,i+1). Each projector acts on a pair of interacting spins for a given value of the total spin j = 0,1, 2. Thus, it can be written as: H_AKLT = -2/3NJ +2J ∑_i=1^NP_2(i,i+1) where P_2(i,i+1) = 1/3 + 1/2( S_i ·S_i+1 +1/3(S_i ·S_i+1)^2 ). As shown in <cit.>, the system can be thought of as made up of two spin-1/2 variables for each site. By introducing the valence bond basis, it is possible to build the ground state, called a valence bond solid (VBS), so that in the chain there is always a bond between two neighboring spins (see upper panel of Fig. <ref>). The VBS state |VBS⟩ satisfies P_2(i,i+1) |VBS⟩ = 0 ∀ i. In the spin-1/2's computational basis ψ_1 = |0⟩, ψ_2 = |1⟩, we can construct an orthogonal basis for the s=1 state space, by taking the symmetrized tensor products: ψ_αβ = 1/√(2)(ψ_α⊗ψ_β + ψ_β⊗ψ_α) Then, in order to contract a pair of spin-1/2's to form a singlet, we use the Levi-Civita tensor of rank two: Ω_αβ = ϵ^γδψ_αγ⊗ψ_δβ, where the indices α and β refer to the outer spin-1/2's. It is now easy to generalize the construction for a chain of length N: Ω_αβ = ϵ^β_1 α_2⋯ϵ^β_N-1α_Nψ_αβ_1⊗ψ_α_2 β_2⊗⋯⊗ψ_α_N β. The AKLT model has exponentially decaying correlations and this applies to the whole Haldane phase. In fact, this can be shown by computing the two-point correlation function in the limit N →∞, which yields: lim_N →∞⟨Ω|S_0^aS_r^b|Ω⟩ = δ^ab(-1)^r 4/3 3^-r. showing, as anticipated, an exponentially decaying correlation function with correlation length ξ = ln(3)^-1. Therefore, one may conclude that there is no order in this phase but, as we will see, a different kind of hidden order is actually there. We are going to show this fact on the valence bond state. As it can be easily understood from Fig. <ref>, in a finite chain the ground state of AKLT model is four-fold degenerate due to the effective free spin-1/2's at the boundaries. Let us write the ground state of AKLT as Φ_σ, where σ is a string of +'s, -'s and 0's so that Φ_σ can be expressed as a tensor product of a single site states |+⟩, |-⟩ and |0⟩. If the first spin-1/2 of the chain is in the |↑⟩ state, then for the first site we cannot have a |-⟩ state but only |+⟩ or |0⟩. In the latter case we still must have the first non-zero character to be a + in σ in order to satisfy the construction of the valence bond state. It can be verified that there has to be the same number of +’s and -’s alternating all along the σ string, with no further restrictions on the number of 0’s between them. Therfore, a typical allowed state Φ_σ in the AKLT model could look like this: Φ_σ = |000+-0+-+0-+0-+-0⟩ A look at (<ref>) reveals that is a sort of Néel order (antiferromagnetic order) if we ignore the 0’s. Still, we cannot predict what two spins in two distant sites will be, as we have no control on the number of the 0’s. 
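As a quick numerical cross-check of the projector form quoted above, the sketch below (our own illustration, not code from the paper) builds P_2(i,i+1) from the spin-1 matrices and verifies that it is the rank-5 projector onto the total-spin-2 sector of a pair of sites, and that 2P_2 reproduces the two-site term of H_AKLT up to the stated constant.

```python
# Sketch: verify that P_2 = 1/3 + (1/2) * (S.S + (S.S)^2 / 3) projects two
# spin-1's onto their total-spin-2 (five-dimensional) subspace.
import numpy as np

# Spin-1 operators in the basis {|+1>, |0>, |-1>}.
s = 1.0 / np.sqrt(2.0)
Sx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Sy = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# Heisenberg coupling S_i . S_{i+1} on the two-site Hilbert space (dimension 9).
SS = sum(np.kron(S, S) for S in (Sx, Sy, Sz))

P2 = np.eye(9) / 3 + 0.5 * (SS + SS @ SS / 3)

assert np.allclose(P2 @ P2, P2)             # idempotent: a genuine projector
assert np.isclose(np.trace(P2).real, 5.0)   # rank 5 = dimension of the spin-2 multiplet

# Hamiltonian rewriting: 2*P_2 = S.S + (S.S)^2/3 + (2/3)*Id, i.e. the two-site
# term of the chain at the AKLT point up to the constant shift in H_AKLT.
assert np.allclose(2 * P2, SS + SS @ SS / 3 + 2 * np.eye(9) / 3)
print("P_2 is the projector onto total spin 2 of two neighbouring spin-1's.")
```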
Indeed there is no local order parameter that can be found to be non-zero in the Haldane phase and that can be used to distinguish this phase from the others. But, there is actually a non-local order parameter, the string order parameter, that is able to reveal the hidden order of the Haldane phase. In order to see how we can arrive at its definition, let us to introduce the non-local unitary transformation U = ∏_k=1^N∏_j=1^k-1exp( iπ S^z_j S^x_k ), where N is the number of sites, such that Consider a typical AKLT state Φ_σ, for example (<ref>). On this state, the operator U acts as UΦ_σ = (-1)^z(σ)Φ_σ̅, where z(σ) is the number of 0 characters in odd sites and σ̅ is the new transformed string. It is defined as follows: * if σ_i = + (or -) and the number of non zero characters to the left of the site i is odd then σ̅_i = - (or +). * otherwise σ_i = σ̅_i where σ_i is the i-th character of the string σ. In particular, if we apply this transformation on the allowed state (<ref>), it becomes: UΦ_σ = |000++0+++0++0+++0⟩. Then this unitary transformation aligns all the non-zero spins i.e. if the first non-zero character is + (or -) all the other non-zero characters become + (or -). It is also evident that U^-1 = U. Under the action of U, the spin operators transform as follows: S^x_j = US_j^x U^† = S_j^x (e^iπ∑_l>j S_l^x ), S^y_j = US_j^y U^† = ( e^iπ∑_l<j S_l^z ) S_j^y ( e^iπ∑_l>j S_l^x ), S^z_j = US_j^z U^† = ( e^iπ∑_l<j S_l^z S_j^z ). Notice that the local operators have been mapped onto non-local operators, as they contain a sum of spin operators acting on different sites. This is not surprising, given that U itself is a non-local unitary transformation. It is reasonable to expect that also the local Hamiltonian H is mapped onto a non-local one H = U H U^-1, but it turns out that H is still, in fact, local: H = J ∑_j[h_j + β (h_j)^2 ], where h_j = -S^x_j S^x_j+1 + S_j^y e^i π(S_j^z + S^x_j+1) S^y_j+1 - S^z_j S^z_j+1 The transformed Hamiltonian H still has the same symmetries of H, but they may not be local anymore. Actually, the only local symmetry of H is related to its invariance under rotations of π about each coordinate axis. This symmetry group is equivalent to ℤ_2×ℤ_2: indeed the product of two π-rotations about two different axes produce a π- rotation about the third one. It is possible to prove <cit.> that at the AKLT point the transformed Hamiltonian has four ground states, which are product states and break such symmetry. These four degenerate ground states of H_AKLT converge to a single ground state in the infinite volume limit. The same is not true for the ground states of H_AKLT, as they converge to four distinct states in the infinite volume limit, even though the two Hamiltonians are related by a unitary transformation. In a sense, the non-locality of the transformation U does not guarantee a one-to-one correspondence between the ground states in the infinite volume limit. Finally, we can understand the role of the string order parameter (<ref>). In fact, it is straightforward to verify that S^α_0( ∏_k=2^r-1 e^i π S_k^α) S^α_r =-U^-1S_0^αS_r^α U. This shows that the NLOPs in (<ref>) reveal the ferromagnetic order in the language of the non-local spins (<ref>) or, equivalently, the breaking of the hidden symmetry in the original system. Such a symmetry breaking holds in the whole Haldane phase, not just the AKLT model. Indeed, in the dimer phase the symmetry is completely unbroken and the string order parameter (<ref>) will vanish for every α.
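The action of U on the σ strings is easy to implement directly. The short sketch below is our own illustration of the flipping rule stated above (the overall sign (-1)^z(σ) is not tracked); it reproduces the mapping of the example configuration to its fully aligned form.

```python
# Hidden-order rule: flip a '+' or '-' whenever the number of non-zero
# characters strictly to its left is odd; '0' characters are left untouched.
def transform_sigma(sigma: str) -> str:
    flip = {"+": "-", "-": "+"}
    out, nonzero_left = [], 0
    for ch in sigma:
        if ch in "+-":
            out.append(flip[ch] if nonzero_left % 2 else ch)
            nonzero_left += 1
        else:
            out.append(ch)
    return "".join(out)

sigma = "000+-0+-+0-+0-+-0"       # the allowed AKLT configuration from the text
print(transform_sigma(sigma))     # -> 000++0+++0++0+++0 : all non-zero spins aligned
```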
http://arxiv.org/abs/2307.06844v1
20230704012915
Garbage in, garbage out: Zero-shot detection of crime using Large Language Models
[ "Anj Simmons", "Rajesh Vasa" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.CV" ]
Garbage in, garbage out: Zero-shot detection of crime using Large Language Models Anj Simmons, Rajesh Vasa Applied Artificial Intelligence Institute, Deakin University, Geelong, Australia Email: {a.simmons, rajesh.vasa}@deakin.edu.au August 1, 2023 =============================================================================================================================================================== This paper proposes exploiting the common sense knowledge learned by large language models to perform zero-shot reasoning about crimes given textual descriptions of surveillance videos. We show that when video is (manually) converted to high quality textual descriptions, large language models are capable of detecting and classifying crimes with state-of-the-art performance using only zero-shot reasoning. However, existing automated video-to-text approaches are unable to generate video descriptions of sufficient quality to support reasoning (garbage video descriptions into the large language model, garbage out). large language models, chain of thought § INTRODUCTION Intelligence and law enforcement agencies are tasked with detecting threats and preventing crime. Such agencies have access to increasing volumes of data; however, as the amount of information available greatly exceeds the capacity of humans available to monitor it, it is impossible to fully monitor this deluge of information and respond in a timely manner. Therefore, there is a need for more sophisticated techniques to ingest information and surface up just the cases that indicate a potential crime of a category relevant to the agency. Automatically detecting crime poses a challenge, as while there are known categories of crime (abuse, arson, assault, burglary, stealing, vandalism, etc.) the ways in which each category of crime can be committed are diverse. Furthermore, the losses caused by crime follow a power law distribution <cit.> in which rare events (e.g., terrorist incidents) cause disproportionate losses. Contrast this to the assumption of supervised learning approaches, which perform well on categories with sufficient training data, but perform poorly on categories with limited training data. Although supervised learning approaches may still have a role to play in extracting specific features relevant to crime, e.g., the presence of a weapon, it is necessary to consider the broader context of the scenario to determine if a crime is occurring, which has traditionally required a human. In this paper, we propose exploiting the zero-shot reasoning capabilities <cit.> of Large Language Models (LLMs) for the task of detecting and reasoning about crime. Zero-shot reasoning allows prompting the LLM to reason about whether a description of events suggests evidence of a crime, without the need to provide training examples (other than the corpus the LLM has been pre-trained on). LLMs capture common sense knowledge (albeit at a surface level) <cit.> which is important to support this reasoning process and avoid false positives, for example, it is not a crime to take an item from a shop if one pays for it before leaving. An example of the proposed approach is shown in <ref>. To test the approach, we evaluate the ability of a state-of-the-art LLM (GPT-4 <cit.>) to detect crimes given textual descriptions of events in real-world surveillance videos. For the purposes of this paper, we manually created descriptions of 168 surveillance videos. 
We also explored approaches for automatically generating descriptions from video, which would allow for a fully automated approach to detecting crime in surveillance video, but found that the quality of the automatically generated descriptions was insufficient for the LLM to accurately detect and reason about crimes (garbage in, garbage out). The key contributions of this paper are: * A dataset of textual descriptions derived from real-world surveillance videos which can be used to benchmark the ability of LLMs to reason about crime, and * Identification of obstacles in existing video-to-text approaches that prevent a fully-automated approach. Data and code for this paper are publicly available online[<https://github.com/anjsimmo/zero-shot-crime-detection>]. § BACKGROUND AND RELATED WORK Previous work has considered the task of activity recognition in videos. However, this requires large training datasets, which are not available in the case of rare types of crime. Previous work has also considered training multimodal models to support zero-shot reasoning about images and other input modalities. However, such models have limited transparency about which information in the images are used to inform the decision, which is essential to know and control in the case of crime detection. §.§ Activity Recognition Large-scale datasets have been collected for the task of describing activities in video, such as VaTeX <cit.> (consisting of 41,250 videos with annotations). For the task of crime detection, the UCF-Crime dataset <cit.> consists of 128 hours of surveillance video obtained from YouTube and LiveLeak. However, existing activity recognition methods perform poorly on the UCF-Crime dataset, only achieving 28.4% classification accuracy <cit.>. Furthermore, rare crime categories such as terrorism are not included. In contrast to previous crime detection approaches, this paper explores a zero-shot approach to circumvent the need for large training datasets. §.§ Multimodal Models Multimodal models are trained on two or more input modalities, such as both text and images. Such models can accept image inputs directly, allowing reasoning about content in images without the need for an intermediate step to first convert video to text. Prior work has also explored integrating LLMs with pre-trained image encoders to allow reasoning about the content of images <cit.>. There are three reasons why we focus on reasoning about textual descriptions of videos in this paper rather than training a multimodal model to operate on the video directly. Firstly, using textual descriptions allows the use of existing state of the art LLMs, such as GPT-4 and derivatives of LLaMA, without the need to retrain or fine-tune them to accept new input modalities. Although the GPT-4 model itself is multimodal <cit.>, OpenAI do not yet provide a way for the public to input images to GPT-4 via the API. Secondly, we may wish to incorporate information other than images in future, for example, descriptions of sounds detected or new kinds of sensors. When all information sources are represented in textual form, it is trivial to integrate new information. Thirdly, textual descriptions provide a way to restrict which information the LLM has access to by censoring details the LLM should not use in its decision, such as race and gender. Inspecting the chain-of-thought produced by the LLM for bias is insufficient, as LLMs may produce unfaithful explanations that do not reveal the underlying factors that influenced the decision <cit.>. 
Censoring these details is more difficult when the input includes images/videos. § METHOD This section explains the method by which we converted surveillance videos into textual descriptions and evaluated the ability of GPT-4 to detect and reason about crime. An overview of the process is shown in <ref>. §.§ Dataset We test our approach on the UCF-Crime dataset of surveillance videos <cit.>. The authors of the UCF-Crime <cit.> propose two tasks. The first task is detection of anomalous events in surveillance videos, on which the original paper scores 75.41 AUC, and current state of the art scores 86.98 AUC[<https://paperswithcode.com/sota/anomaly-detection-in-surveillance-videos-on>]. However, this first task only involves identifying which frames relate to an anomalous event, not reasoning about the type of anomalous event. The second task is anomalous activity recognition. This involves determining the category of crime (if any) that occurs in a surveillance video. The authors of the UCF-Crime dataset explain that state of the art activity recognition methods perform poorly, only achieving 28.4% accuracy. This second task is the focus of this paper. §.§ Human Captions The first author manually wrote descriptions of people, objects, and their interactions in a sample of 12 surveillance videos from 14 categories (13 crime categories + normal), resulting in a dataset of 168 surveillance video descriptions. While the author was aware of the ground truth crime category for videos in the UCF-Crime dataset, the descriptions were written to be as objective as possible rather than suggesting a particular interpretation of the events taking place. The manually written descriptions do not include race or gender. An example of a manually created description is: “A white car approaches a silver car. Two people come out. One of them goes near the front wheel of the car. The other then opens the bonnet. After 30 seconds, the two people return to the white car.” §.§ GPT-4 Prompt We prompt GPT-4, a state of the art LLM, to reason about the most likely category of crime given the textual description of the surveillance video. We include the phrase “let's think step by step” in the prompt to trigger zero-shot reasoning <cit.>. The full prompt is: SURVEILLANCE VIDEO DESCRIPTION: A white car approaches a silver car ... TASK: List possible explanations, and categorise them as one of [Abuse, Arrest, Arson, Assault, Burglary, Explosion, Fighting, Road Accident, Robbery, Shooting, Shoplifting, Stealing, Vandalism, Normal]. Then determine which category is most likely. Let's think step by step. Finally, output one line containing a single category in quotes. Do not include anything other than the category on the final line. GPT-4 reasons about possible explanations for the observed events, followed by what it determines to be the most likely category. An example response from GPT-4 is: Possible explanations: 1. The two people are helping the driver of the silver car with a mechanical issue [Normal] 2. The two people are attempting to steal parts from the silver car [Stealing] 3. The two people are tampering with the silver car to cause harm [Vandalism] 4. The two people are involved in a car theft [Robbery] Most likely category: “Normal” We evaluate how accurately GPT-4 is able to recover the ground truth label for the video. §.§ Automated Captions We also explore the accuracy with which GPT-4 can recover the ground truth on automatically generated captions of videos. 
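Before turning to the automated captions, the zero-shot prompting and answer-parsing step just described can be sketched as follows. This is our own illustration rather than the paper's released code: the OpenAI chat-completions client, the model name, and the parsing of the final line are assumptions about one reasonable implementation.

```python
# Sketch of the zero-shot classification step: build the prompt, query the LLM,
# and keep only the category on the final line of the reply.
from openai import OpenAI   # assumes the OpenAI Python client; any chat LLM API would do

CATEGORIES = ["Abuse", "Arrest", "Arson", "Assault", "Burglary", "Explosion",
              "Fighting", "Road Accident", "Robbery", "Shooting", "Shoplifting",
              "Stealing", "Vandalism", "Normal"]

PROMPT_TEMPLATE = (
    "SURVEILLANCE VIDEO DESCRIPTION: {description}\n\n"
    "TASK: List possible explanations, and categorise them as one of {categories}. "
    "Then determine which category is most likely. Let's think step by step. "
    "Finally, output one line containing a single category in quotes. "
    "Do not include anything other than the category on the final line."
)

def classify(description: str, client: OpenAI, model: str = "gpt-4"):
    prompt = PROMPT_TEMPLATE.format(description=description, categories=CATEGORIES)
    reply = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content
    # The final line should contain exactly one category, possibly in quotes.
    last_line = reply.strip().splitlines()[-1].strip().strip('"').strip("'")
    return last_line if last_line in CATEGORIES else None   # None marks invalid output

# Example call (hypothetical description; requires an API key in the environment):
# client = OpenAI()
# print(classify("A white car approaches a silver car. Two people come out. ...", client))
```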
To automatically generate a description of the video, we sample a frame every 10 seconds, pass it through a image-to-text model, then concatenate the results along with time stamps. The choice to sample a frame every 10 seconds was to balance the need to capture key changes in the video while avoiding long and repetitive descriptions that may exceed the input token limit of GPT-4. The method for converting these descriptions to prompts and evaluating the classification accuracy is identical to the process for human captions. §.§.§ GIT Captions To generate image captions, we used a pre-trained Generative Image-to-text Transformer (GIT) <cit.>, specifically, [<https://huggingface.co/microsoft/git-large-coco>] which has been fine-tuned by Microsoft on the COCO dataset. An example of the final description generated for a video by this process is: “10 sec: a car is seen parked on the street. 20 sec: a car is seen parked on the street. 30 sec: a car is seen passing another car. 40 sec: a man is sitting on the ground next to a car. 50 sec: a car is parked on the street and another car is parked behind it. 60 sec: a man is opening the trunk of a car. 70 sec: a man is trying to get a car out of the back of a car. 80 sec: a car is parked in front of a house. 90 sec: a car is parked on the street. 100 sec: a car is parked on the street. 110 sec: the car is parked on the street. 120 sec: a car is seen driving down a street.” §.§.§ LLaVA Descriptions Large Language and Vision Assistant (LLaVA) <cit.> is a visual instruction tuned version of LLaMa. Specifically, we use the 13 billion parameter version, , which was the largest and most recent release of the LLaVA model at the time of conducting the experiment. To generate descriptions, we prompt LLaVA with an image from the video and the question “What is it?” An example of the final description generated for a video by this process is: “10 sec: In the image, there is a black car parked on a street next to a building. The car appears to be parked in a parking space, and there are potted plants nearby. The car is facing a house with a fence and a gate. There is also a person standing on the sidewalk, possibly observing the car or the surrounding area. To assist the user further, more context or specific questions about the scene would be needed. 20 sec: In the image, there is a gray car parked on the side of the street, and a white van is driving down the street. The scene takes place on a residential street with a fence and a house in the background. To assist you better, I would need more information about the situation or a specific question related to the image. 30 sec: ...” §.§.§ YOLO-v8 + ByteTrack We make use of a YOLO-based object tracking library[<https://github.com/mikel-brostrom/yolo_tracking>] to track people and objects in surveillance videos. Specifically, we use [<https://github.com/ultralytics/ultralytics>] (the largest and most accurate version of the model) combined with the ByteTrack <cit.> tracking method. The tracks are updated every frame (30 frames per second) to support keeping track of identities; however, we only output the current state every 10 seconds. To convert this to text, for each tracked object, we state the object class (person, car, etc.), identity of the tracked object (e.g. “car 2”) and position (e.g. “bottom-left”). An example of the final description generated for a video by this process is: “0 sec: car 1 is at the bottom-left of the image. 10 sec: car 1 is at the bottom-left of the image. 
car 2 is at the top-middle of the image. 20 sec: car 1 is at the bottom-left of the image. car 2 is at the middle of the image. 30 sec: car 1 is at the bottom-left of the image. car 2 is at the middle of the image. person 3 is at the bottom-middle of the image. person 5 is at the middle of the image. 40 sec: car 1 is at the bottom-left of the image. car 2 is at the middle of the image. person 5 is at the middle of the image. person 6 is at the bottom-middle of the image. 50 sec: ...” § RESULTS We tested each method on 12 videos from 14 categories (168 videos total) and report the classification accuracy in <ref>. For comparison, we also include a random baseline (1/14), and the Tube Convolutional Neural Network (TCNN) baseline reported by <cit.> in the UCF-Crime paper. In cases where GPT-4 was unable to process the input (e.g. exceeded token length) or did not output a valid response (i.e. the final line of output was not a valid category in the expected format) we exclude these from the accuracy calculation. There was 1 case of invalid output for GPT-4 + Human captions, 1 case of invalid output for GPT-4 + GIT Captions, and 2 cases of input that could not be processed due to exceeding token length for GPT-4 + LLaVA Captions. There were no cases of invalid input or output for GPT-4 + YOLO-v8 + ByteTrack. § DISCUSSION Our results show that while GPT-4 was able to determine the crime category with state of the art performance when provided a human generated caption of the video, it performed poorly when provided with automatically generated captions. In the rest of this section, we elaboration on the limitations of automatic caption generation approaches that need to be overcome to support a fully automated approach to crime detection. §.§ Image captions lack detail Reasoning about crime requires details of who did what. However, image captioning models only describe the scene at a high level. For example, consider the caption generated for <ref>. The image captioning model correctly identifies that the image contains a man sitting on the ground next to a car. However, it provides no details about which man and which car. Without this detail, there is insufficient information to reason about whether the man is sitting on the ground to repair their own car, or is sitting on the ground to steal something from someone else's car. §.§ LLM based vision models hallucinate In contrast to image captioning models, LLM based vision models are capable of generating detailed descriptions of images, but may hallucinate about objects present and actions being performed. Furthermore, the descriptions they generate are biased towards a particular interpretation of the scenario, an example of which is shown in <ref>. Reasoning about crime requires objective descriptions. §.§ Object tracking algorithms cannot maintain identity of objects over long time periods Reasoning about crime requires linking the identity of actors across time. For example, it is not a crime for an actor to take an item from a store if they pay for it before leaving. However, in our experiments, we observed that object tracking was unable to maintain a constant identity for people and objects over long time periods. For example, in a video where there were only two people, the description generated by applying object tracking refers to “person 3”, “person 5”, “person 6” and “person 8”, making it difficult to link the actions of people across time. 
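To illustrate the tracking-to-text conversion discussed here and in the Method section, a rough sketch of ours follows. It relies on the ByteTrack tracker bundled with the ultralytics package rather than the separate tracking library used in the paper, and the grid-position wording only approximates the described output format, so treat it as an assumption-laden illustration.

```python
# Sketch of the tracking-to-text step: track people/vehicles with YOLOv8 +
# ByteTrack and emit one line of text for every 10 seconds of video.
from ultralytics import YOLO

def region(xn: float, yn: float) -> str:
    """Map normalised box-centre coordinates to a coarse 3x3 grid label."""
    col = ["left", "middle", "right"][min(int(xn * 3), 2)]
    row = ["top", "middle", "bottom"][min(int(yn * 3), 2)]
    return "middle" if (row, col) == ("middle", "middle") else f"{row}-{col}"

def track_to_text(video_path: str, fps: int = 30, every_s: int = 10) -> str:
    model = YOLO("yolov8x.pt")                         # largest YOLOv8 detector
    lines = []
    results = model.track(source=video_path, tracker="bytetrack.yaml",
                          stream=True, verbose=False)
    for i, frame in enumerate(results):
        if i % (fps * every_s):                        # keep one frame every 10 s
            continue
        if frame.boxes.id is None:                     # nothing tracked in this frame
            continue
        t = i // fps
        ids = frame.boxes.id.int().tolist()
        classes = frame.boxes.cls.int().tolist()
        centres = frame.boxes.xywhn[:, :2].tolist()    # normalised (x, y) box centres
        for obj_id, cls_id, (xc, yc) in zip(ids, classes, centres):
            name = model.names[cls_id]                 # e.g. "person", "car"
            lines.append(f"{t} sec: {name} {obj_id} is at the {region(xc, yc)} of the image.")
    return " ".join(lines)

# Example (hypothetical file name):
# print(track_to_text("surveillance_clip.mp4"))
```

Identity switches of the kind described above show up directly in this output as new integer identifiers for the same physical person.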
§.§ Curated object detection datasets do not include weapons Large benchmark datasets for object detection, such as COCO, do not include weapons. While it is understandable that technology companies that curate datasets for training machine learning algorithms may want to distance themselves from undesirable uses of AI, if we wish for AI systems to be able to help prevent violence, then it is important for dataset curators to include depictions of weapons and violence. § THREATS TO VALIDITY GPT-4 may already have seen images from UCF-Crime videos in its training data. However, we test on textual descriptions of the videos rather than the videos themselves, and these textual descriptions have not been released before. As such, it is unlikely that the performance of GPT-4 reported in this paper is a result of overfitting to training data. The author was aware of the ground-truth category when creating the human captions for videos, which may have biased the descriptions. Furthermore, in busy scenes, it was not practical to describe every action taking place, hence the descriptions may be biased towards describing only the actions relevant to the crime. As such, the performance of GPT-4 on human captions should be taken only as an indicator of what is possible; it may not be fully achievable by an automated system even if the obstacles raised in this paper are overcome. § CONCLUSION This paper demonstrated that with high-quality textual descriptions, large language models are capable of detecting and classifying crimes with state-of-the-art performance using only zero-shot reasoning. Unfortunately, existing automated video-to-text approaches were unable to generate video descriptions of sufficient quality to support reasoning, thus fully automated detection of crime is not yet possible. The failure of these approaches to generate descriptions suitable for reasoning about crime indicates that such models are not as general purpose as widely perceived, and that these models require domain adaptation for downstream tasks. Future research is needed to overcome the loss of objective detail that occurs during the video-to-text conversion process. § ACKNOWLEDGMENT This paper was supported by research funding from the National Intelligence Postdoctoral Grant program (NIPG-2021-006).
http://arxiv.org/abs/2307.02784v1
20230706052237
On the Spatial-Wideband Effects in Millimeter-Wave Cell-Free Massive MIMO
[ "Seyoung Ahn", "Soohyeong Kim", "Yongseok Kwon", "Joohan Park", "Jiseung Youn", "Sunghyun Cho" ]
cs.IT
[ "cs.IT", "cs.NI", "eess.SP", "math.IT" ]
Journal of Class Files, Vol. 14, No. 8, August 2023 Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals 0000–0000/00$00.00  YYYY IEEE On the Spatial-Wideband Effects in Millimeter-Wave Cell-Free Massive MIMO Seyoung Ahn, Student Member, IEEE, Soohyeong Kim, Yongseok Kwon, Joohan Park, Jiseung Youn, and Sunghyun Cho, Member, IEEE Manuscript received MM DD, YYYY; revised MM DD, YYYY. S. Ahn, S. Kim, Y. Kwon, and J. Youn are with the Department of Computer Science and Engineering, Major in Bio Artificial Intelligence, Hanyang University, Ansan, South Korea (e-mail: tpdud1014@hanyang.ac.kr; dreammusic23@hanyang.ac.kr, totoey200@hanyang.ac.kr, yjs1104@hanyang.ac.kr). J. Park and S. Cho are with the Department of Computer Science and Engineering, Hanyang University, Ansan, South Korea (e-mail: 1994pjh@hanyang.ac.kr; chopro@hanyang.ac.kr). ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== In this paper, we investigate the spatial-wideband effects in cell-free massive MIMO (CF-mMIMO) systems in mmWave bands. The utilization of mmWave frequencies brings challenges such as signal attenuation and the need for denser networks like ultra-dense networks (UDN) to maintain communication performance. CF-mMIMO is introduced as a solution, where distributed access points (APs) transmit signals to a central processing unit (CPU) for joint processing. CF-mMIMO offers advantages in reducing non-line-of-sight (NLOS) conditions and overcoming signal blockage. We investigate the synchronization problem in CF-mMIMO due to time delays between APs. It proposes a minimum cyclic prefix length to mitigate inter-symbol interference (ISI) in OFDM systems. Furthermore, the spatial correlations of channel responses are analyzed in the frequency-phase domain. The impact of these correlations on system performance is examined. The findings contribute to improving the performance of CF-mMIMO systems and enhancing the effective utilization of mmWave communication. Cell-free massive MIMO, spatial-wideband effects, synchronization, OFDM, spatial correlations, mmWave. § INTRODUCTION As mobile communication systems evolved, and the demands for communication technologies utilizing extremely high-frequency bands such as millimeter-wave (mmWave) and terahertz (THz) bands have been aroused. For the utilization of the higher-frequency band, we should consider that the signal can be easily attenuated by the obstacle or distance between transceivers. Attenuating signals by distance may decrease cell coverage, and communication systems require densified networks, namely, ultra-dense networks (UDN), to maintain communication performance. However, in the UDN, some potential drawbacks of the multi-cell networks may be deepened, such as inter-cell interference (ICI) or pilot contamination. 
Several cooperative communication methods, such as the coordinated multi-point (CoMP) and network MIMO system, have been studied to mitigate ICI and improve communication performance in the cell-edge region. However, cooperative communications suffer from synchronizing all serving BSs because the serving BSs share the channel state information and independently process the received signals by their own RF chain. Cell-free massive MIMO (CF-mMIMO) has been proposed as the complete form of cooperative communication to address the synchronization problem. Specifically, in CF-mMIMO, a single central processing unit (CPU) processes the transceived signals from the users by the distributed BSs called the access points (APs) connected by the fronthaul links. The distributed structure of CF-mMIMO yields another advantage for the higher-frequency band communications to reduce the non-line-of-sight (NLOS) probability. As the frequency band increases, the diffraction becomes more severe due to the shorter wavelength, and even signal blockage may occur. The CF-mMIMO system places APs in closer proximity to the user compared to the conventional massive MIMO BS in cellular networks. It can decrease the distance between AP and the user and consequently provides the same effect as removing the obstacles. Most existing studies on CF-mMIMO systems tend to assume perfect synchronization, but in higher frequency bands, the synchronization issues may still remain. In massive MIMO systems of cellular networks, the signal processing only considers the phase difference because distances between UE and antennas are almost the same. Adversely, CF-mMIMO is a form in which numerous antennas in the antenna array of cellular massive MIMO are distributed as the APs throughout the system. To consider this distributed nature of CF-mMIMO, the signal processing for CF-mMIMO systems should consider phase differences caused not only by the antenna in each AP but also by the different distances between UE and APs. In this letter, we first investigate the synchronization problem due to the time delays between different APs. The time delays are not negligible and may cause severe inter-symbol interference (ISI) in the OFDM system. We provide the minimum length of the cyclic prefix for reducing the ISI. Furthermore, we investigate the spatial correlations for the channel responses in the frequency-phase domain. We decompose the spatial correlation matrix into two types of correlations, such as the micro-correlation in the antenna of each AP and the macro-correlation among APs. Consequently, we investigate the effect of correlations by introducing the numerical results. Notation: Boldface lowercase and uppercase letters, such as x and X, denote column vectors and matrices, respectively. The superscripts T, *, and H denote the transpose, conjugate, and conjugate transpose, respectively. The matrix diag(x) denotes the diagonal matrix with the elements in the vector x on the diagonal. The M-dimensional identity matrix and the vector with all-one elements denote 𝐈_M and 1_M, respectively. We use ≜ for definitions. The complex set with the M × N dimensional elements denotes ℂ^M × N. The function Re{·} stands for the real part of a complex signal. The operators ⊗ and ∘ stand for the Kronecker and Hadamard products, respectively. The expectation of the matrix X denotes 𝔼{X}. § SYSTEM MODEL Considering a CF-mMIMO system consisting of L APs and K UEs equipped with a single antenna as illustrated in the left-hand side of Figure <ref>. 
Each AP is equipped with a M-antenna uniform linear array (ULA) and connected to a CPU via a fronthaul link. Each antenna is separated by an antenna spacing . We assume that the CF-mMIMO system works in a time division duplex (TDD) protocol. In a TDD protocol, we only consider the uplink channel training because the uplink channel state information (CSI) can be utilized for both the uplink and downlink transmission by channel reciprocity. Orthogonal frequency-division multiplexing (OFDM) with P subcarriers is adopted. In the OFDM system, the carrier frequency and wavelength denote f_c and λ_c, respectively. The subcarrier spacing will be η=W/P when the transmission bandwidth is W. We consider that UE k sends the planar wave to the AP l with the N physical paths. The direction of arrival (DoA) of the signal from UE k to the AP l is denoted by θ_k, l, n for the n-th physical path. The schematic descriptions of the signal model in AP l are illustrated by the right-hand side of Figure <ref>. First of all, we assume that the first antenna of the nearest AP to UE k is perfectly synchronized, and other antennas and APs are received the delayed version of the signal. we define the time delay τ_k, l, m, n of the mth antenna of the AP l. Specifically, τ_k, l, m, n consists of two types of delay caused by the distance d_k, l between user k and AP l and the adjacent antennas at AP l that can be explicitly computed as d_k, l/c and msinθ_k, l, n/c, respectively. Note that c denotes the speed of light. Therefore, we can represent the time delay as follows: τ_k, l, m, n=d_k, l/c + msinθ_k, l, n/c, The baseband signal generated by the user k can be represented as x_k(t) = ∑_i=-∞^+∞s_k[i]δ(t-iT_s), where δ(·) is the pulse shaping function. The user transmits this signal by modulating to the corresponding passband signal as Re{x_k(t)e^j2 π f_ct}. The antenna m at the AP l receives the delayed passband signal in the nth path as Re{α_k, l, nx_k(t - τ_k, l, m, n)e^j2 π f_c(t - τ_k, l, m, n)} = Re{α_k, l, nx_k(t - τ_k, l, m, n)e^j2 π f_c(t - d_k, l/c - msinθ_k, l, n/c)} = Re{α_k, l, nx_k(t - τ_k, l, m, n)e^-j2 πd_k, l/λ_c e^-j2 π msinθ_k, l, n/λ_c e^j2 π f_ct}, where α_k, l, n is the corresponding complex channel gain. Then, we can formulate the corresponding received baseband signal at the t time instant as y_k, l, m(t) = ∑_n=1^Nα_k, l, nx_k(t - τ_k, l, m, n)e^-j2 πd_k, l/λ_c e^-j2 π msinθ_k, l, n/λ_c. § SPATIAL-WIDEBAND EFFECTS OF MMWAVE CELL-FREE MASSIVE MIMO SYSTEMS In this section, we investigate the spatial-wideband effect of cell-free massive MIMO systems utilizing mmWave bands. Specifically, we formulate the channel model with the spatial-wideband effects. Based on the channel model, we analyze the spatial-wideband effects in three different aspects: delay spread in the angular-delay domain, frequency spread in the frequency-phase domain, and spatial correlations. In the analysis, we confirm the synchronization problem and errors in the frequency channel responses. As discussed previously, if the transmission bandwidth and antenna spacing are narrow or the number of antennas is small, the ISI can be negligible due to the sufficiently short time delay compared to the symbol period. However, the cell-free massive MIMO system has a similar structure where the existing massive MIMO antennas are spread throughout the network in the form of APs, resulting in a large number of antennas and very wide spacing between them. 
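To give a sense of the magnitudes behind this claim, the following sketch evaluates the per-AP propagation delays, the per-antenna array delays, and the resulting cyclic-prefix requirement; the per-antenna term is written with the array spacing made explicit. All numerical values (carrier, bandwidth, distances, antenna spacing, DoA) are illustrative assumptions of ours, not parameters taken from the paper.

```python
# Rough magnitudes of the spatial-wideband effect in a mmWave cell-free setup:
# per-AP propagation delays, per-antenna array delays, and the cyclic-prefix
# length needed to absorb the delay spread.  All values are illustrative.
import numpy as np

c = 3e8                          # speed of light [m/s]
fc = 28e9                        # carrier frequency [Hz]
W = 400e6                        # transmission bandwidth [Hz]
M = 8                            # ULA antennas per AP
spacing = c / fc / 2             # assumed half-wavelength antenna spacing [m]
theta = np.deg2rad(30.0)         # assumed DoA of one physical path

d_ap = np.array([40.0, 95.0, 160.0])   # assumed UE-AP distances d_{k,l} [m]

# Delay of antenna m at each AP: tau = d/c + m * spacing * sin(theta) / c.
m_idx = np.arange(M)
tau = d_ap[:, None] / c + m_idx[None, :] * spacing * np.sin(theta) / c

print("delay spread across APs   : %.1f ns" % ((d_ap.max() - d_ap.min()) / c * 1e9))
print("delay spread across a ULA : %.4f ns" % ((M - 1) * spacing * np.sin(theta) / c * 1e9))

# Minimum CP length (in samples at rate W) absorbing the inter-AP delay spread.
cp_min = int(np.ceil(W * (d_ap.max() - d_ap.min()) / c))
print("minimum CP length         : %d samples" % cp_min)

# Strength of the frequency dependence of the extra phase e^{-j 2*pi*f*d/c}:
# number of full rotations across the band for each AP.
print("phase rotations over band :", np.round(W * d_ap / c, 1))
```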
Furthermore, the mmWave band can be worked as an obstacle by amplifying the effect of spatial-wideband effects although the cell-free massive MIMO is one of the key technologies to enable the high-frequency bands. §.§ Channel model The uplink spatial-time channel of user k at the mth antenna of AP l can be modeled as [𝐡^ST_k, l(t)]_m = ∑_n=1^Nα_k, l, nδ(t - τ_k, l, m, n)e^-j2 πd_k, l/λ_ce^-j2 π msinθ_k, l, n/λ_c = ∑_n=1^Nα_k, l, nδ(t - τ_k, l, m, n)e^-j2 π msinθ_k, l, n/λ_c, where the corresponding complex channel gain is α_k, l, n≜α_k, l, ne^-j2 πd_k, l/λ_c. Then, based on the continuous-time Fourier transform, we can obtain the spatial-frequency channel response as [𝐡^SF_k, l(f)]_m = ∫_-∞^+∞[𝐡^ST_k, l(t)]_me^-j2 π ftdt = ∑_n=1^Nα_k, l, ne^-j2 π msinθ_k, l, n/λ_ce^-j2 π fd_k, l/ce^-j2 π fmsinθ_k, l, n/c By the obtained response and <cit.>, we can confirm the extra component of the phase shift represented as e^-j2 π fd_k, l/c. Different from the conventional massive MIMO system, we can get an insight that implementing the cell-free massive MIMO systems with OFDM should not neglect the synchronization problem due to the time delay among APs. Based on the (<ref>), we can get the total channel response of the user k for the L APs in the entire systems as 𝐡_k(f) = vec([𝐡^SF_k, 0(f), 𝐡^SF_k, 1(f), ..., 𝐡^SF_k, L-1(f)]^T) ∈ℂ^LM × 1. We formulate two vectors such as the complex gain vector α_k, n∈ℂ^L × 1 and macro-steering vector 𝐝_k(f) ∈ℂ^L × 1 respectively as α_k, n = [α_k, 0, n, α_k, 1, n, ..., α_k, L-1, n]^T, 𝐝_k(f) = [e^-j2 π fd_k, 0/c, e^-j2 π fd_k, 1/c, ..., e^-j2 π fd_k, L-1/c]^T. Moreover, Θ(f) stands for the (L × M)-dimensional phase-shift matrix whose (l, m)-th element is [Θ_n(f)]_l, m≜ e^-j2 π f_c(1 + f/f_c)msinθ_k, l, n/c. Based on (<ref>), (<ref>), and (<ref>), we can represent the spatial-channel response in (<ref>) as follows: 𝐡_k(f) = ∑_n=1^Nvec(diag(α_k, n∘𝐝_k(f))Θ_n(f)). In (<ref>), the macro-steering vector and phase-shift matrix are frequency-dependent, which stands for the beam-squint effect. §.§ Beam-squint effects We then investigate the spatial-wideband effects for the macro-steering vector and phase-shift matrix by transforming (<ref>) and (<ref>) to the virtual angle domain by discrete Fourier transform (DFT)<cit.>. Let F_L be the L-dimensional normalized DFT matrix. §.§ Inter-symbol interference in OFDM system The difference in the time delay may cause the synchronization error because a single data symbol can arrive at each AP at different time instances in the CF-mMIMO system. The data symbols arrived at different times may cause inter-symbol interference without sufficient cyclic prefix (CP) length. In this section, we introduce the minimum CP length to remove the inter-symbol interference. Let ψ_k, l denote the phase shift for the frequency f in the spatial-frequency channel responses as follows ψ_k, l = e^-j2 π f(d_k, l/c+msinθ_k, l, n/c). The delay difference for all subcarriers can denote as τ_k, l, m^P = Pη(d_k, l/c+msinθ_k, l, n/c) = W(d_k, l/c+msinθ_k, l, n/c). Assuming that indices of APs are sorted by the distances from the UE and the following inequations hold d_k, 0≤ d_k, 1≤ ... ≤ d_k, L-1. The minimum CP length should be larger than the difference between the maximum and minimum delay differences. Therefore, we can calculate the minimum CP length CP_min as follows CP_min ≥|τ_k, L-1, M-1^P - τ_k, 0, 0^P| = W|d_k, L-1-d_k, 0/c+(M-1)sinθ_k, L-1, n/c| ≈ W|d_k, L-1-d_k, 0/c|. 
Here, the delay difference among the antenna elements is negligible compared with the delay difference among the APs. § NUMERICAL RESULTS We employ the UMi-Street Canyon NLOS channel model introduced in 3GPP TR 38.901 <cit.>. § CONCLUSION
http://arxiv.org/abs/2307.02302v1
20230705140124
Energy optimization for Full-Duplex Wireless-Powered IoT Networks using Rotary-Wing UAV with Multiple Antennas
[ "Leyla Fathollahi", "Mahmood Mohassel Feghhi", "Mahmoud Atashbar" ]
eess.SY
[ "eess.SY", "cs.SY", "eess.SP" ]
Energy optimization for Full-Duplex Wireless-Powered IoT Networks using Rotary-Wing UAV with Multiple Antennas Leyla Fathollahi, Mahmood Mohassel Feghhi[Leyla Fathollahi and Mahmood Mohassel Feghhi are with the Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran (Email: L_fathollahi98@ms.tabrizu.ac, mohasselfeghhi@tabrizu.ac.ir).], Mahmoud Atashbar[Mahmoud Atashbar is with the Department of Electrical Engineering, Azarbaijan Shahid Madani University, Tabriz, Iran (Email: atashbar@azaruniv.ac.ir).] Received March 10, 2023; accepted May 12, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================================================== In this paper, we propose a novel design for the rotary-wing unmanned aerial vehicle (UAV)-enabled full-duplex (FD) wireless-powered Internet of Things (IoT) networks. In this network, the UAV is equipped with an antenna array, and the K IoT sensors, which are distributed randomly, use single-antenna to communicate. By sending the energy, the UAV as a hybrid access point, charges the sensors and collects information from them. Then, to manage the time and optimize the energy, the sensors are divided into N groups, so that the UAV equipped with multi-input multi-output (MIMO) technology can serve the sensors in a group, during the total time T. We provide a simple implementation of the wireless power transfer protocol in the sensors by using the time division multiple access (TDMA) scheme to receive information from the users. In other words, the sensors of each group receive energy from the UAV, when it hovers over the sensors of the previous group, and also when the UAV flies over the previous group to the current group. The sensors of each group send their information to the UAV, when the UAV is hovering over their group. Under these assumptions, we formulate two optimization problems: a sum throughput maximization problem, and a total time minimization problem. Numerical results show that our proposed optimal network provides better performance than the existing networks. In fact, our novel design can serve more sensors at the cost of using more antennas compared to that of the conventional networks. § INTRODUCTION Due to the quick expansion of the use of Internet of Things (IoT), this technology is ready to become one of the key requirements in the future. The structure of the IoT allows the devices in the network to connect to each other or to the central processor and send information to them. This technology can be used in distant areas or regions, such as transportation (sea, road, rail, air, …), navy management, logistics, solar energy, oil and gas extraction, smart measuring tools, agriculture, environmental monitoring and mining, especially the areas which are not technically or economically accessible <cit.>. The widespread use of the IoT has turned energy consumption management into a big challenge. These devices, which are usually compact, use small batteries such as coin cells with a life of approximately two years <cit.>. Replacing or recharging these batteries is inconvenient due to their location in remote areas or economic issues. 
Therefore, IoT industry researchers decided to use wireless sensors. These devices do not need to follow a specific pattern in placement, and the network designer can lead the network more flexibly. An electromagnetic field is created by the combination of electric and magnetic fields. In the frequency range of electromagnetic waves, the wavelength is reduced and wireless transmission of power over long distances is possible. The use of a technology such as radio frequency (RF) energy transferring, which can charge devices with a fixed or mobile location from a distance of tens of kilometers, is promising <cit.>. In recent years, the wireless power transfer (WPT) technology, which means supplying the required power to a device without physical contact with the source, has attracted the attention of many researchers <cit.>. The IoT networks, with long-range communication and low-power consumption ability, are generally composed of the ground users and the hybrid access points (HAPs). The HAP sends energy to the sensors and receives the information from them for further processing. On the other hand, the sensors send the required data to the HAP, using the received energy, via the energy harvesting (EH) protocol. In conventional networks, a fixed HAP serves the sensors; however, it is challenging in large networks, when the ground users are placed sparsely. In fact, the sensors, which are located far from the energy source, receive less energy. The solution is to increase the number of these energy sources, which is not economical. After a plenty of research, the researchers came to the conclusion that moving HAPs can provide more stable and reliable energy for sensors, due to their flexibility in movement <cit.>. Due to the unique characteristics of UAVs, eventually replacing them as mobile HAPs with traditional fixed HAPs has created a huge revolution in wireless communication networks. Recently, the WPT using UAV is a hopeful technology to prepare stable energy for low-power sensors in large-scale networks, thanks to the movability, flexibility, maneuverability, simplicity of positioning, and relatively low cost. Flexible flight allows the energy transmitters placed on the UAV to charge ground devices more effectively and increase the capacity of existing cellular systems. In contrast to typical ground base stations, the UAV as an aerial base station has many capabilities, such as height controlling, Ability to the line of sight (LOS) communication with the sensors, and avoiding of the obstacles <cit.>. The security of the communication among the UAV and the sensors in the network is one of the most fundamental issues, so that the UAV can perform its tasks without leaking sensitive information to unknown users. In <cit.>, the user anonymity and the mutual authentication are analyzed as security parameters against attackers. Efficient 3D deployment and the UAV trajectory optimization, as the most important issues in the UAV-equipped networks, have been investigated in <cit.>. It is a function of the network placement environment, the location of the sensors, the flight altitude and the characteristics of the channel among the UAV and the sensors. Evaluating network performance to analyze the impact of design parameters is another fundamental issue for designing UAV-based communication systems. 
The UAVs as independent aerial base stations can be used for improvement of the downlink coverage and rate performance of a device-to-device communication system, co-exists with a single UAV, as it is done in <cit.>. The flexibility, the LOS communication, the energy limitations and the backhaul connectivity in network planning for a UAV-based network are some challenging issues in the field of wireless communications. A new method for optimizing the joint deployment of cellular base stations and backhaul links as a multi-objective optimization problem has been presented in <cit.>, considering the backhaul capacity and the outage probability as constraints. The research in <cit.> is about a communication network equipped with a UAV with rotary-wing. In this system, the target is to minimize the consumption of the UAV driving energy and the communication energy, while ensuring that the ground sensors receive enough energy to perform the operations. In <cit.>, a technique for energy efficiency to improve the network lifetime has been discussed. This paper tries to minimize the maximum energy consumption of all sensors by optimizing the sensors' wake up schedule and the path of the UAV. In <cit.>, a set of energy receivers in known locations are served by the UAV as an energy transmitter. The optimal solution to transfer the maximum energy to the users is obtained, when the UAV hovers in one place during the entire charging period. In this paper, we focus on a new design for the rotary-wing UAV-enabled FD wireless-powered IoT networks. In this wide network, K single-antenna sensors with sparse distribution are served by a UAV with an antenna array. Under these assumptions, in order to optimize the total operation time and energy consumption of the UAV, we divide the sensors into N groups. In our scheme, each group of sensors receives energy from the UAV, when the UAV is hovering over the previous sensor group, and also when the UAV flies over the previous sensor group to the current sensor group. Each group of sensors sends their information to the UAV, when the UAV is hovering over them. As a result, unlike <cit.>, the UAV serves a group of sensors instead of one sensor each time the UAV flies and hovers. Unlike <cit.>, obviously, if the UAV transmits wireless power to the sensors of the next group only when it is flying between two groups, it will cause a waste of time resources. The contributions of this paper are as follows: * Unlike the works done in <cit.>, <cit.>, <cit.> and <cit.>, the UAV in our scheme has more than one antenna to simultaneously transfer energy to sensors and receive information from them. Compared to the UAV in <cit.>, the UAV, in our proposed system model, has more antennas to receive information from the sensors to increase the information transmission rate and optimize the flight and hovering time. * In contrast to <cit.>, the causality of energy is also considered in our method. That is, each sensor can only use energy to transmit information to the UAV, which is received from the UAV in the previous time frame. * In the research conducted in <cit.>, the flight speed of the UAV was not considered; however, the maximum speed is one of the limitations of the optimization problems, in our research. * Due to the real needs of the receiver, the limitation of WPT has also been one of the main criteria in our scheme. A case not considered in <cit.>. 
The rest of the paper is as follows: Section <ref> describes a rotary-wing UAV-enabled FD wireless-powered IoT networks using the antenna array in the UAV. In Section <ref> and  <ref>, we formulate the sum throughput maximization (STM) and the total time minimization (TTM) problems and describe their optimal solution in closed forms. In section <ref>, we obtain the numerical results using proposed generalized algorithms, and compare them with the simulation results of the previous algorithms. Section <ref> draws some conclusions. § SYSTEM STRUCTURE According to Figure <ref>, we propose a new scheme for the UAV-enabled FD wireless-powered IoT networks using the antenna array in the UAV. In this network, K single-antenna ground users are sparsely located in the network. Also, we use uniform linear array (ULA) to arrange the antennas in the UAV. A ULA is a set of antenna elements, which are arranged along a straight line with equal distances from each other. The purpose of using this type of array is to enhance the signal to noise ratio in a specific direction <cit.>. The UAV sends the required energy to the sensors using WPT and collects information from them. In contrast, the sensors receive energy from the UAV and send their information to it, using wireless information transfer (WIT) protocol. Due to the limitation in time resources, it is important to use effective methods for time management during the operations. For this reason, the sensors in the network can be divided into N groups of sensors. A UAV equipped with MIMO technology collects information from the sensors in a group by several antennas during the total time T and sends energy to them with other antennas. In this case, the power gain of the channel is different in the downlink and the uplink channels. Also, the primary antennas of the UAV are installed with the purpose of providing energy and other antennas in the antenna array to collect information from the sensors. Assuming that the ground users are sparsely distributed in the IoT network and are not mobile, we consider the flight height of the UAV is constant. The changes in the UAV altitude may have effects on the performance of WPT and WIT. The movement of the UAV in a fixed path from the beginning to the end of the network causes the sensors, which are far away from the path of the UAV, to receive less energy. Therefore, we can consider the vertical and horizontal paths in the entire network, so that the UAV can serve all the sensors effectively. In the following three subsections, we analyze the proposed network based on mathematical formulas. §.§ Analysis of the placement of network elements The location of the sensors in the network and the image of the location of the UAV antennas in two-dimensional (2D) coordinates can be expressed by (<ref>) and (<ref>), respectively: w_i = [x_i,y_i]^T, ∀ i ∈𝒦Δ = { 1,2,...,K} q_k(t) = [x_k(t),y_k(t)]^T ,∀ k ∈ℳΔ = { 1,2,...,M} According to (<ref>) and (<ref>), the communication distance among the UAV antennas and the sensors at any moment t can be shown as d_ki(t) = √(( L_ki(t))^2 + A^2), where L_ki(t) = q_k(t) - . w_i. is the horizontal distance between the image of the location of the k-th UAV antenna and the i-th sensor at time t. In our scheme, we assume that the amount of wireless power transmission is bounded, and the maximum distance of wireless power transmission between the UAV and the sensor is determined by d_max. This means, outside the range of d_max, the signal power is ineffective for energy harvesting operation. 
Therefore, the maximum horizontal distance among the antennas in the UAV antenna array and the sensors is also limited and is calculated as L_max = √(d_max^2 - A^2). In order to transmit reliable information among the UAV and the sensors, we are looking for optimal points. The UAV stops at optimal points and hovers over a group of sensors. In this case, the closest distance between the UAV and the sensors of the desired group is established. Considering the existence of N groups of sensors, n∈ℵ = { 1,2,...,N} and S_n as the set of all sensors belonging to the n-th group, it can be said that the distance between the sensors of the n-th and (n-1)-th groups is equal to dis_n(t) . This distance is defined as dis_n(t) = [q_1(t)| i ∈S_n.] - [q_1(t)| i ∈S_n - 1.], which must satisfy the two conditions dis_n(t) ≤d_max and dis_n(t) + dis_n+1(t) > d_max. Due to the sparsly distribution of the sensors, the one-dimensional (1D) model, instead of 2D model of the network is shown in Figure <ref>. The UAV hovers over the sensors of the n-th group to collect information, while simultaneously charging the sensors of the (n+1)-th group. The hovering time of the UAV on the sensors of the n-th group is indicated by τ_n. Here τ_0 is the hovering time of the UAV at the starting location of the operation. The UAV can collect information from the sensors only when hovering over a group, so the time to collect information from n-th group is equal to τ_n. Also, ζ_n is the flight time of the UAV from the sensors of the (n-1)-th group to the sensors of the n-th group. One of the most important factors for the UAV operations is the total flight time due to limited energy resources. Therefore, we use a FD technology to simultaneously perform data collection and wireless power transmission, which optimizes the use of limited time resources. In the following, according to the drawn network scheme, we assume that the UAV moves in the several horizontal paths, so that it can serve all available sensors effectively. Therefore, the UAV movement paths on different groups of sensors are r ∈Δ = { 1,2,...,R}. Also, we consider ℕ_r as the set of all existing odd or even horizontal paths. In general, the coordinates of the reference antenna of the antenna array on the UAV in the time interval of movement from the sensors of the (n-1)-th group to the sensors of the n-th group will be equal to the following relations: x(t) = v_n t + x_n - 1 y(t) = ỹ, where ỹ is the vertical component of the desired movement path. If we consider D_n as the distance between the stopping points between the (n-1)-th group and the n-th group, the average speed of the UAV while flying from the (n-1)-th group to the n-th group will be equal to D_nζ_n. So [x_n,y_n] can be defined as the coordinates of the reference antenna of the antenna array on the UAV as follows: x_n = v_n ζ_n + x_n-1 y_n = ỹ, The movement of the UAV is always perpendicular to the direction of movement, so the direction of the movement of the UAV and the direction of the antenna array on the UAV will be perpendicular to each other. In this case, L_ki^(n) can be rewritten as follows: L_ki^(n) = √((v_nζ _n + x_n - 1 - x_i)^2 + (y^' + (k - 1)δ - y_i)^2), where δ is the fixed distance among the UAV antennas. §.§ Channel Model In this system, we call the transmission constant power P_t and consider the self-interference cancellation from the transmitter antenna to the receiver antenna. 
The digital and analog interferences, as self-interference cancellation, are negligible and at the noise level. This will be accurate, when the receiver antenna has a true estimate of the transmitted signal. In addition, beamforming is also useful for WPT. The UAV flies quickly during the operation and the channel among the UAV and the sensors is constantly changing, and the beamforming technology usually needs real information about the state of this channel. Here, the environment is rural and the target channel among the sensors and the UAV is considered without obstacles such as tall buildings. Therefore, we assume that the channel among the sensors and the UAV is the LOS wireless channel. As a result, the path loss will be 1d^2 , which is widely used in UAV-equipped wireless communication and wireless power transmission papers. In information collection stage, the UAV receives the data from the sensors of the n-th group using M-1 antennas and sends the wireless power to the sensors of the (n+1)-th group with the other single antenna. Assuming that the first antenna in the UAV antenna array is installed to send energy to the sensors and the other existing antennas are installed to receive information from the sensors, the uplink channel power gain from the i-th sensor in the nth group to the k-th antenna of the UAV and the downlink channel power gain from the k-th antenna of the UAV to the i-th sensor in the (n+1)-th group are calculated by (<ref>) and (<ref>), respectively. For ease of illustration, we consider all the channels of network have block fading. In this type of channels, the characteristics of the channel remain constant during a period of transmission in a block, but may change from one block to another. h_ki^(n)(t) = k_0/( d_ki^(n)(t))^2 = k_0/( L_ki^(n)(t))^2 + A^2, k ∈{ 2,...,M} g_ki^(n + 1)(t) = k_0/( d_ki^(n + 1)(t))^2 = k_0/( L_ki^(n + 1)(t))^2 + A^2, k = 1 where k_0 is the channel power gain at a distance of one meter from the sensor. In addition, we assume that the UAV is aware of the channel power gain, and receives the information of its next destination at the current location. This means the sensors of the n-th group inform the UAV of the location of the (n+1)-th group during the period of sending their information. As a consequence, in this 2D model, the UAV can be moved sequentially from the first group to the N-th group. It should be noted that the uplink channel is only used to send information from the sensor to the UAV, and the UAV antenna array is fixed on the desired group of sensors, so the gain of the uplink channel is not a function of time. This means: h_ki^(n) = k_0/( d_ki^(n))^2 = k_0/( L_ki^(n))^2 + A^2 = k_0/(x_n - 1 - x_i)^2 + (ỹ + (k - 1)δ - y_i)^2 + A^2 , k ∈{ 2,...,M} §.§ Energy Harvesting Using TDMA structure, we consider energy harvesting and information transmission through a transmission block in Figure <ref>, where the sensors in each group can send information only in the time slot, allocated to them. So, there is no interference between the information sent by the sensors of different groups. Also, in each block, the UAV broadcasts the energy signal to all users with a fixed transmission power, using one of its antennas. In contrast to frequency division multiple access (FDMA) and orthogonal frequency division multiple access (OFDMA), the TDMA architecture enables simple implementation of the WPT protocol in the sensors. 
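For illustration, the following minimal Python sketch evaluates the LOS channel power gain k_0/d^2 between a sensor and the k-th antenna of the ULA at a given UAV reference position; the numerical values of the altitude, antenna spacing, and k_0 are placeholders and are not taken from the paper's simulation setup.

```python
import numpy as np

def channel_gain(x_uav, y_ref, k, sensor, delta, A, k0):
    """LOS channel power gain between the k-th antenna of the ULA (reference antenna
    k = 1) and a sensor at (x_i, y_i), for a UAV reference position (x_uav, y_ref)
    at altitude A; gain = k0 / d^2 with d the 3-D antenna-sensor distance."""
    x_i, y_i = sensor
    L2 = (x_uav - x_i) ** 2 + (y_ref + (k - 1) * delta - y_i) ** 2  # squared horizontal distance
    return k0 / (L2 + A ** 2)

# Illustrative hover position and parameters (not the paper's simulation values)
A, delta, k0, M = 5.0, 0.1, 1e-3, 3
sensor = (2.0, 1.0)
g_down = channel_gain(0.0, 0.0, 1, sensor, delta, A, k0)                    # k = 1: WPT antenna
h_up = [channel_gain(0.0, 0.0, k, sensor, delta, A, k0) for k in range(2, M + 1)]  # data antennas
print(g_down, h_up)
```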
When the distance among the UAV and the sensors is less than the maximum communication distance to receive effective energy, the sensors in the desired group can receive energy. It should be noted that only the first antenna of the UAV performs the act of energy supply, i.e., k=1. Therefore, the total energy received by the i-th sensor of the n-th group can be calculated as follows: E_ki^(n)(t) = E_ki^(n),hh(t) + E_ki^(n),hf(t), ∀ i ∈{1,2,…,K}, where E_1i^(n),hf(t) is the energy received by the i-th sensor of the n-th group from the UAV, when the UAV is flying from the (n-1)-th group to the n-th group. The E_1i^(n),hh(t), as the energy received by the i-th sensor of the n-th group from the UAV, when the UAV is hovering over the n-th group, can be computing as (<ref>). E_1i^(n),hh(t) = η_iP_tg_1i^(n)τ_n-1, where η_i ∈(.0,1]. is the effective energy harvesting factor for the i-th sensor. Here, a linear energy harvesting model is considered, since when the RF signals reach the ground sensors, the input power in the sensor is in the linear zone. The operation of energy harvesting by the sensors during the UAV flight is depends on the different available air to ground channels according to the distance among the UAV and the sensors. Next, E_1i^(n),hf(t) is equal to the following equation: E_1i^(n),hf = η _i∫_0^ζ _nP_tg_1i^(n)(t)dt = η _i∫_0^ζ _nP_tk_0/( L_1i^(n)(t))^2 + A^2dt According to the odd and even horizontal paths of the UAV movement in the network, the calculations of the energy harvesting operation will be as follows: E_1i^(n),hf = η _i∫_0^ζ _nP_tk_0/(v_nt + x_n - 1 - x_i)^2 + (ỹ - y_i)^2 + A^2 dt n ∈ℕ_r, r is odd η _i∫_0^ζ _nP_tk_0/( - v_nt + x_n - 1 - x_i)^2 + (ỹ - y_i)^2 + A^2 dt n ∈ℕ_r, r is even Finally, the total received energy E_i^(n) can be rewritten as (<ref>) using (<ref>) and (<ref>). E_i^(n) = η _iP_tk_0( a_i^(n)τ _n - 1 + b_i^(n)ζ _n) where we have a_i^(n) = 1/( x_n - x_i)^2 + ( ỹ - y_i)^2 + A^2 b_i^(n) = 1/D_n√(A^2 + ( ỹ - y_i)^2)[ arctan( D_n + x_n - 1 - x_i/√(A^2 + ( ỹ - y_i)^2)) - arctan( x_n - 1 - x_i/√(A^2 + ( ỹ - y_i)^2))], n ∈ℕ_r, r is odd 1/D_n√(A^2 + ( ỹ - y_i)^2)[ arctan( D_n - x_n - 1 + x_i/√(A^2 + ( ỹ - y_i)^2)) - arctan( - x_n - 1 + x_i/√(A^2 + ( ỹ - y_i)^2))], n ∈ℕ_r, r is even It is assumed that the sensor does not have a battery and only uses a super capacitor for communication, so the entire energy received by the sensor is spent on transmitting information in the corresponding time interval. The instantaneous transfer rate per bandwidth unit for the i-th sensor of the n-th group is as follows: { R^(n)} = ∑_i ∈S_n∑_k = 2^M R_ki^(n) ≤1/2log[ 1 + η _iP_tk_0∑_i ∈S_n∑_k = 2^M h_ki^(n)( a_i^(n)τ _n - 1 + b_i^(n)ζ _n)( h_ki^(n))^T/τ _nσ ^2] =1/2log[1 +γ ^(n)( τ _n - 1∑_i ∈S_na_i^(n)+ζ _n∑_i ∈S_nb_i^(n))/τ _n], i ∈{1,2,…,K},(Nats/Sec/Hz) where γ ^(n) = η _iP_tk_0/σ ^2∑_i ∈S_n∑_k = 2^M h_ki^(n)( h_ki^(n))^T, and σ ^2 is the noise power of the channel. § SUM THROUGHPUT MAXIMIZATION In this section, we have considered an optimization problem for the distribution of the UAV flight and hovering time in order to achieve the maximum amount of the sum throughput of an IoT network equipped with a UAV. The optimal problem can be solved by using mutual coupling conditions in convex optimization problems and generalizing the relations used in <cit.>. At the same time, a relationship is established between the flight time of the UAV and the time it hovers over the groups. 
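Before formulating the optimization problems, the following sketch assembles the quantities defined above that serve as their data: the coefficients a_i^(n) and b_i^(n) of the harvested energy E_i^(n) = η_i P_t k_0 (a_i^(n) τ_{n-1} + b_i^(n) ζ_n) and the per-group rate R^(n). It treats the per-antenna gains h_ki^(n) as scalars (so the h(h)^T term reduces to h^2), takes the gain matrix h as input (it can be assembled with the channel_gain sketch above), and uses hypothetical values; it is a sketch of the formulas, not of the full protocol.

```python
import numpy as np

def ab_coefficients(sensors, x_prev, x_n, y_bar, A, D_n, odd_path=True):
    """Coefficients a_i^(n), b_i^(n) of the harvested energy
    E_i^(n) = eta_i * P_t * k0 * (a_i^(n) * tau_{n-1} + b_i^(n) * zeta_n)."""
    a, b = [], []
    for x_i, y_i in sensors:
        c = A ** 2 + (y_bar - y_i) ** 2
        a.append(1.0 / ((x_n - x_i) ** 2 + c))
        r = np.sqrt(c)
        if odd_path:   # UAV flying in the +x direction on this horizontal sweep
            num = np.arctan((D_n + x_prev - x_i) / r) - np.arctan((x_prev - x_i) / r)
        else:          # -x direction on even sweeps
            num = np.arctan((D_n - x_prev + x_i) / r) - np.arctan((-x_prev + x_i) / r)
        b.append(num / (D_n * r))
    return np.array(a), np.array(b)

def group_rate(tau_prev, tau_n, zeta_n, a, b, h, eta, P_t, k0, sigma2):
    """Per-bandwidth uplink sum rate of group n (nats/s/Hz) while the UAV hovers for
    tau_n; h[k, i] are the uplink gains of the M-1 receive antennas (scalar gains,
    so the h (h)^T term reduces to h**2)."""
    gamma_n = eta * P_t * k0 / sigma2 * np.sum(np.asarray(h) ** 2)
    return 0.5 * np.log(1.0 + gamma_n * (tau_prev * a.sum() + zeta_n * b.sum()) / tau_n)
```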
We have considered the initial hovering time of the UAV on the flight starting point to be as small as possible and we have set the UAV's flight speed to the maximum state. Then, the other hovering and flying times of the UAV on the desired groups are obtained according to the algorithm <ref>. We will see that for a feasible problem, ∑_k = 2^M ∑_i = 1^K L_ki^(n)≤V_maxT holds, so a feasible value for the distribution of flight and hovering time can be obtained, when the UAV moves from the first group to the last group of sensors in the network in the time period T. By representing the non-negative duration as τ_n and ζ_n with the vectors τ = {τ _0,...,τ _N} ^T and ζ = {ζ _1,...,ζ _N} ^T, the STM problem is defined as follows: max _τ ,ζ∑_n = 1^N τ _nR_n , s.t. τ_0 ≥ 0, ∀ n∈ℵ, V_max≥D_n/ζ_n, ∀ n ∈ℵ, ∑_n=0^N τ_n + ∑_n=1^N ζ_n≤ T. The constraint (<ref>) is due to the non-negativity property of the hover time, (<ref>) is the considered limit for the maximum speed of the UAV and (<ref>) is the time limit for a complete operation. It is considered, that all sensors have a minimum amount of information to send. To solve the optimization problem raised in (<ref>)-(<ref>), according to its constraints, we have used the proof in <cit.>. The Throughput function for the n-th group of sensors can be written as φ _n(τ ,ζ ) = τ _nR_n,∀ n ∈ℵ, which is a concave function with respect to the flight and hovering time of the UAV. To prove that (<ref>)-(<ref>) is a convex optimization problem, we first give the proposition <ref>. If we define the throughput function of group n as H_n(τ ,ζ ) Δ = τ _nR_n, then the H_n is a concave function of τ and ζ. All constraints of this problem are also affine. Therefore, the STM is a convex optimization problem, and can be solved using convex optimization tools such as CVX <cit.>. However, due to the high complexity in this type of calculations with the increase in the number of the sensors in the network, another algorithm has been used to solve the problem. To obtain the optimal flight and hovering time of the UAV for the STM problem, we introduce two variables F_1 and F_2: F_1Δ = [ T - ∑_n = 2^N - 1( ∑_c = n + 1^N ∏_z = n + 1^c a_z/f_z + 1 + f_n/b_n)b_nζ _n/f_n - ( 1 + f_N/b_N)b_Nζ _N/f_N]∏_n = 1^N f_n and F_2Δ = ∏_n = 1^N f_n + b_1( ∏_n = 2^N f_n + ∏_n = 2^N a_n + ∑_n = 2^N - 1∏_c = 2^n a_c∏_z = n + 1^N f_z) where f_n is obtained as coupling variables as follows: f_N = 1/γ _N( γ _Nb_N - 1/𝒲( ( γ _Nb_N - 1)exp^ - μ _N - 1) - 1), f_n = 1/γ _N(1/ - 𝒲( - exp( γ _n + 1a_n + 1/1 + γ _n + 1f_n + 1 - γ _Nb_N/1 + γ _Nf_N - ( μ _N + 1)))- 1),∀ n ∈{N-1,…,1} where 𝒲(.) is known as the Lambert W function <cit.>. The μ_N is a non-negative dual Lagrange variable, which can be calculated as μ _N = Root( γ _1b_1/Y_1 - γ _Nb_N/Y_N = μ _N) by replacing n=N in (<ref>). Here we have Y_NΔ = γ _Nb_N - 1/𝒲( ( γ _Nb_N - 1)exp^ - μ _N - 1), Y_1 = G_1( ...G_N - 1(Y_N)...) and G_n(Y_n) Δ = 1/ - 𝒲( - exp( γ _na_n/Y_n - γ _Nb_N/Y_N - ( μ _N + 1))). Also, Root(.) means to find the root of the desired equation. Finally, the optimal fly and hovering time of the UAV for all groups is obtained using Algorithm <ref>. § TOTAL TIME MINIMIZATION In this part, we have assumed that each sensor has the minimum amount of information, I_n, to send to the UAV. Based on this, an optimization problem is created to minimize the flight time of the UAV and the time it hovers over the desired groups in the form of ∑_n=0^N τ_n + ∑_n=1^N ζ_n. 
Finally, the problem of minimizing the total time of each operation is defined as follows: min _τ ,ζ∑_n = 0^N τ _n + ∑_n = 1^N ζ _n, s.t. τ_n R_n ≥ I_n, ∀ n∈ℵ, τ_0 ≥ 0, τ_n≥ 0, ∀ n∈ℵ, V_max≥D_n/ζ_n, ∀ n ∈ℵ, The equation (<ref>) guarantees that there is the minimum information required to be sent from the sensor to the UAV. According to the TTM problem and its constraints, the optimal flight and hovering time of the UAV for each group of sensors is achieved. τ _n = I_n/𝒲( ( 1 - a_n + 1/b_n + 1)γ _nb_n - 1/exp (1)) + 1, ∀ n ∈{1,…,N-1} τ _N = I_N/𝒲( γ _nb_n - 1/exp (1)) + 1, ζ _n = τ _n/γ _n( exp( I_n/τ _n - 1)) - a_nτ _n - 1/b_n, ∀ n ∈{2,…,N}, ζ _1 = τ _1/γ _1( exp( I_1/τ _1 - 1))/b_1. By implementing Algorithm <ref>, the complexity created through convex optimization calculation tools can be reduced. Also, you can use some values, which have already been obtained in Algorithm <ref>. § NUMERICAL SIMULATIONS In this section, we analyze the efficiency of our network scheme and compare it with the results of the hover-and-fly energy harvesting (HF-EH) network scheme in <cit.>. All simulations are done using MATLAB 9.8.0 (R2020a) software. First, we investigate the effect of changes in the transmission power of the UAV, the number of groups of the sensors and, consequently, the number of sensors in the network in the STM problem. Next, we discuss the effect of changes in the transmission power of the UAV as well as changes in its maximum horizontal speed in the TTM problem. All results are averaged over 100000 independent channel realizations in the Monte-Carlo experiments. In the simulations, we determine the distances among the groups of sensors are random variables, uniformly distributed in the interval D_n=[20,30) m. Then, we consider the frequency of the network as f_0=3 GHz, the number of antennas as M=3, and the distance among the antennas in the UAV antenna array as 0.1 m. In this work, we limit the maximum speed experienced by the UAV during its flight as V_max=10 m/sec. The time to complete each UAV operation in the target network is fixed at T=1000 sec. The simulation parameters are given in Table <ref>. §.§ The STM Problem In Figure <ref>, according to the results obtained in this research, we evaluate the effect of changes in P_t on the sum throughput. We divide K=20 sensors into N=4 groups. As can be seen, the final operational capacity of both plans increases with the increase of P_t of the UAV, since more P_t allows the sensors to receive more energy to send data to the UAV and the rate of sending information increases. According to the diagram, it is clear that our proposed scheme, provide better performance than the HF-EH scheme, and we see a double growth of the values in the proposed network diagram. Also, when P_t increases, the distance between two graphs also increases. This shows the importance of time allocation as the P_t increases. Using the MIMO antenna array in the UAV, and collecting data simultaneously from several sensors at each stop, allows us to increase the rate of sending information from the ground sensors to the UAV. In Figure <ref>, we evaluate the effect of changes in the number of group of sensors on the final throughput. We consider the transmission constant power P_t=4 dB and draw the plots for N=5 to N=10 groups. In both designs, as the number of groups increases, the final throughput also increases imperceptibly, until the graphs reach to a stable level. 
The numerical results reported in Table <ref> show that our proposed scheme outperforms the HF-EH scheme by approximately 75%. Increasing the number of sensors in the network increases the number of times the UAV hovers and flies; however, since the duration of each operation is fixed, the duration of each individual hover and flight decreases. As a result, in the HF-EH design, the UAV has less time to collect information from each sensor. The grouping of the sensors and the use of MIMO technology in our proposed scheme resolve this problem and yield better results. The benefit of increasing the transmission power from P_t = 4 dB to P_t = 8 dB is also evident when comparing Figures <ref> and <ref>: our proposed scheme performs 92% better than the HF-EH scheme. §.§ The TTM Problem Figure <ref> analyzes the effect of changes in the transmission power on the total time of each operation. Here, each sensor sends the same minimum amount of information to the UAV; to assess the impact of this minimum on the system performance, we consider two values, I=10 Nats and I=30 Nats. As Figure <ref> shows, the time required to perform each operation decreases with increasing transmission power for both our proposed scheme and the HF-EH scheme: a higher transmission power shortens the time needed to transfer information from the sensors to the UAV, so the total flying and hovering time of the UAV, and hence the total time of each operation, is reduced. Under equal conditions, because the sensors transmit as a group and the UAV collects their information with its antenna array, our proposed scheme requires less time than the HF-EH scheme; in fact, the time required for each operation is roughly halved (for example, when I=30 Nats and P_t=2 dB, the total time of our proposed scheme is 1007 s versus 2015 s for the HF-EH scheme). However, as the transmission power grows, the slope of all curves decreases, due to the limitations specified in Algorithm <ref>, until a stable level is reached. As indicated by (<ref>), every IoT system requires a minimum total flight time for the UAV. Increasing the minimum amount of information to be sent increases the required time in both schemes, so both schemes are equally sensitive to changes in the amount of information loaded onto the network; nevertheless, according to (<ref>), a stable level is again reached. As expected, sending more information requires more time. Figure <ref> shows the effect of the maximum UAV flight speed between the groups on the total time of each operation. As the UAV speed increases, the flight time decreases and the total time of each operation approaches its minimum level; beyond that, due to the limitations in (<ref>)-(<ref>), the curve flattens to a constant level. As in Figure <ref>, sending more information requires more time; however, the more information is sent, the larger the gap between the curves of the two schemes. This means that our proposed scheme performs better precisely when more information must be sent from the sensors to the UAV.
Figure <ref> is drawn with the same values as Figure <ref>, while we have chosen the transmission power P_t=15 dB. A decrease in the transmission power component increases the minimum total time of each operation in all graphs. Increasing the minimum amount of information sent and simultaneously increasing the transmission power creates a greater distance in the graphs, which means that less time is needed in the network. § CONCLUSION In this research, we propose a novel scheme for the rotary-wing UAV-enabled full-duplex wireless-powered IoT network, using the antenna array in the UAV. In this network, K single antenna ground users are sparsely distributed. With the help of its antennas, the UAV provides wireless energy to the ground users and collects information from them. In contrast, sensors receive energy from the HAP and send their information to it. Due to the limitations in time resources, we used effective methods to manage time, and optimize it during operations. For this reason, we considered the sensors in the form of N groups of sensors so that the UAV equipped with MIMO antenna array technology can serve the sensors as a group in the total time T. By using the TDMA scheme to receive information from the users, we have implemented a simple WPT technology, i.e., each group of sensors receives energy from the UAV, when the UAV is hovering over the previous sensor group, and also when the UAV flies over the previous sensor group to the current sensor group. Each group of sensors sends its information to the UAV, when the UAV is hovering over it. Under this design, the sum throughput maximization (STM) problem and the total time minimization (TTM) problem are two optimization problems, which have been investigated in this research. In the STM problem, by making changes in the transmission power of the UAV, we checked its effect on the network throughput and observed that with the increase in the transmission power, the network throughput will also increase. Then, we increased the number of groups of sensors, and observed that the final operational capacity of the network also increased at the same time. Also, instead of collecting information from a single sensor, the UAV can collect information from several sensors in a group at the same time. This made optimal use of the hovering time and thus increased the data transmission rate. In the TTM problem, we increased the transmission power. In this case, the amount of time needed to perform each operation was also reduced to a stable level. The same result was obtained by increasing the maximum UAV flight speed. In our proposed design, we have considered the total time of each operation in the STM problem fixed, while this time can be adjusted according to the amount of energy consumed by the UAV. We have also investigated the effect of a UAV equipped with an antenna array. Now it is possible to use several UAVs with the same feature in networks, which have a larger scale to increase the level of service coverage. The channel model used in this work is the quasi-static block fading channel. In order to use this network in the real world and model it accurately, the channel model can be considered dynamic and fast fading, and based on the characteristics of these types of channels, the problem can be analyzed. The energy considered in this work is only spent on flying and hovering the UAV, while servicing the sensors. 
However, receiving, storing, and processing the collected information at the UAV also requires energy; this is not considered in the current research and is left as part of our future work. § PROOF OF PROPOSITION 1 According to the results obtained in <cit.>, if f:R^n→ R is a function, then the perspective function <cit.> of f is the function g:R^n+1→ R defined as g(x,t)=tf(x/t) with domain dom g={(x,t) | x/t ∈dom f, t>0} (dom denotes the domain of a function). Since the perspective operation preserves convexity and f is a concave function, the function g is also concave. Thus, since the following function is concave in R_++^2, f(τ _n - 1,ζ _n) = 1/2log[ 1 + γ ^(n)( τ _n - 1∑_i ∈S_na_i^(n) + ζ _n∑_i ∈S_nb_i^(n))], its perspective function, defined as g( τ _n - 1,ζ _n,τ _n) = 1/2τ _nlog[ 1 + γ ^(n)( τ _n - 1∑_i ∈S_na_i^(n) + ζ _n∑_i ∈S_nb_i^(n))/τ _n], is concave in R_++^3. Also, a non-negative weighted sum of concave functions is always concave. Therefore, (<ref>)-(<ref>) is a convex optimization problem.
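To complement the proof above, the convexity of the STM problem can also be checked numerically in disciplined-convex-programming form. The following CVXPY sketch encodes the objective through the same perspective identity, τ_n log(1 + u_n/τ_n) = -rel_entr(τ_n, τ_n + u_n) with u_n = γ^(n)(τ_{n-1}∑_i a_i^(n) + ζ_n∑_i b_i^(n)); all coefficient values are hypothetical, and an exponential-cone-capable solver (e.g., ECOS, SCS, or Clarabel) is required.

```python
import cvxpy as cp
import numpy as np

N = 4
rng = np.random.default_rng(0)
gamma_ = rng.uniform(5.0, 10.0, N)   # gamma^(n), hypothetical
A_ = rng.uniform(0.5, 1.0, N)        # sum_i a_i^(n), hypothetical
B_ = rng.uniform(0.2, 0.5, N)        # sum_i b_i^(n), hypothetical
D = rng.uniform(20.0, 30.0, N)       # inter-group distances D_n
Vmax, T = 10.0, 1000.0

tau = cp.Variable(N + 1, nonneg=True)   # tau_0, ..., tau_N
zeta = cp.Variable(N, nonneg=True)      # zeta_1, ..., zeta_N

# tau_n * R_n = 0.5 * tau_n * log(1 + u_n / tau_n), written as -rel_entr(tau_n, tau_n + u_n)
u = cp.multiply(gamma_, cp.multiply(A_, tau[:-1]) + cp.multiply(B_, zeta))
throughput = cp.sum(-cp.rel_entr(tau[1:], tau[1:] + u)) / 2

constraints = [zeta >= D / Vmax, cp.sum(tau) + cp.sum(zeta) <= T]
prob = cp.Problem(cp.Maximize(throughput), constraints)
prob.solve()
print(prob.value, tau.value, zeta.value)
```

Similarly, the closed-form TTM times can be evaluated directly with the Lambert W function. The sketch below follows one reading of the expressions in the TTM section (in particular, exp(I_n/τ_n) - 1 inside ζ_n, and group-level coefficients a_n, b_n, γ_n); it does not enforce the maximum-speed constraint, and the example inputs are made up.

```python
import numpy as np
from scipy.special import lambertw

def ttm_times(I, gamma, a, b):
    """Closed-form hovering times tau_1..tau_N and flight times zeta_1..zeta_N
    (0-based arrays here); assumes the Lambert W arguments stay >= -1/e so the
    principal branch is real."""
    N = len(I)
    tau = np.empty(N)
    for n in range(N - 1):                       # n = 1, ..., N-1 in the text
        arg = ((1.0 - a[n + 1] / b[n + 1]) * gamma[n] * b[n] - 1.0) / np.e
        tau[n] = I[n] / (lambertw(arg).real + 1.0)
    tau[N - 1] = I[N - 1] / (lambertw((gamma[N - 1] * b[N - 1] - 1.0) / np.e).real + 1.0)

    zeta = np.empty(N)
    zeta[0] = tau[0] / gamma[0] * (np.exp(I[0] / tau[0]) - 1.0) / b[0]
    for n in range(1, N):
        zeta[n] = (tau[n] / gamma[n] * (np.exp(I[n] / tau[n]) - 1.0)
                   - a[n] * tau[n - 1]) / b[n]
    return tau, zeta

# Hypothetical inputs for three groups
print(ttm_times(I=[10.0, 10.0, 10.0], gamma=[8.0, 8.0, 8.0],
                a=[0.2, 0.2, 0.2], b=[0.5, 0.5, 0.5]))
```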
Universal kernel-type estimation of random fields
Yu. Yu. Linke, I. S. Borisov, P. S. Ruzankin
arXiv:2307.00264v1 [math.ST, stat.TH], 1 July 2023
Consistent weighted least square estimators are proposed for a wide class of nonparametric regression models with random regression function, where this real-valued random function of k arguments is assumed to be continuous with probability 1. We obtain explicit upper bounds for the rate of uniform convergence in probability of the new estimators to the unobservable random regression function for both fixed and random designs. In contrast to the predecessors' results, the bounds for the convergence are insensitive to the correlation structure of the k-variate design points. As an application, we study the problem of estimating the mean and covariance functions of random fields with additive noise under dense data conditions. The theoretical results of the study are illustrated by simulation examples which show that the new estimators are more accurate in some cases than the Nadaraya–Watson ones. An example of processing real data on earthquakes in Japan in 2012–2021 is included.

Key words and phrases: nonparametric regression, uniform consistency, kernel-type estimator.

AMS subject classification: 62G08.

1. Introduction

We study the following regression model: Y_i=f(X_i)+ξ_i, i=1,…,n, where f(t), t:=(t^(1),…,t^(k))∈Θ⊂ℝ_+^k, k≥ 1, is an unknown real-valued random function (a random field). We assume that Θ is a compact set, the random field f(t) is continuous on Θ with probability 1, and the design {X_i; i=1,…,n} consists of a collection of observed random vectors with unknown (generally speaking) distributions, not necessarily independent or identically distributed. The random design points X_i may depend on n, i.e., a triangular array scheme for the design can be considered within this model.
In particular, this scheme includes regression models with fixed design. Moreover, we do not require that the random field f(t) be independent of the design {X_i}. Next, we will assume that the unobservable random errors {ξ_i} (a noise) form a martingale difference sequence, with M_p:=sup_i E|ξ_i|^p<∞ p>k p≥ 2. We also assume that {ξ_i} are independent of the collection {X_i} and the random field f(·). The noise {ξ_i} may depend on n. Our goal is to construct consistent in C(Θ) estimators for the random regression field f(t) by the observations {(X_i,Y_i); i≤ n} under minimal restrictions on the design points {X_i}, where C(Θ) denotes the space of all continuous functions on Θ with the uniform norm. In the classical case of nonrandom f(·), the most popular estimating procedures are based on kernel-type estimators. We emphasize among them the Nadaraya–Watson, the Priestley-Chao, the Gasser-Müller estimators, the local polynomial estimators, and some of their modifications (see Härdle, 1990; Wand and Jones, 1995; Fan and Gijbels, 1996; Fan and Yao, 2003; Loader, 1999; Young, 2017; Müller, 1988). We do not aspire for providing a comprehensive review of this actively developing (especially in the last two decades) area of nonparametric estimation, and will focus only on publications representing certain methodological areas. We are primarily interested in conditions on the design elements. In this regard, a large number of publications in this area may be tentatively divided into the two groups: the studies dealing with fixed design {X_i; i≤ n} or with random one. In papers dealing with a random design, as a rule, the design consists of independent identically distributed random variables or stationary observations satisfying known forms of dependence, e.g., various types of mixing conditions, association, Markov or martingale properties, etc. Not attempting to present a comprehensive review, we may note the papers by Kulik and Lorek (2011), Kulik and Wichelhaus (2011), Roussas (1990, 1991), Györfi et. al. (2002), Masry (2005), Hansen (2008), Honda (2010), Laib and Louani (2010), Li et al. (2016), Hong and Linton (2016), Shen and Xie (2013), Jiang and Mack (2002), Linton and Jacho-Chavez (2010), Chu and Deng (2003) (see also the references in the papers). Besides, in the recent studies by Gao et al. (2015), Wang and Chan (2014), Chan and Wang (2014), Linton and Wang (2016), Wang and Phillips (2009a,b), Karlsen et al. (2007), the authors considered nonstationary design sequences under special forms of dependence (Markov chains, autoregressions, sums of moving averages, and so on). In the case of fixed design, the vast majority of papers make certain assumptions on the regularity of design (see Zhou and Zhu, 2020; Benelmadani et al., 2020; Tang et al., 2018; Gu et al., 2007; Benhenni et al., 2010; Müller and Prewitt, 1993; Ahmad and Lin, 1984; Georgiev, 1988, 1990). In univariate models, the nonrandom design points X_i are most often restricted by the formula X_i=g(i/n)+o(1/n) with a function g of bounded variation, where the error term o(1/n) is uniform in i=1,…,n. If g is linear, then the design is equidistant. Another regularity condition in the univariate case is the relation max_i≤ n(X_i-X_i-1)=O(1/n), where the design elements are arranged in increasing order. In a number of recent studies, a more general condition max_ i≤ n(X_i-X_i-1)→ 0 can be found (e.g., see Yang and Yang, 2016; He, 2019; Wu et al., 2020). 
In several works, including those dealing with the so-called weighted estimators, certain conditions are imposed on the behavior of functions of design elements, but meaningful corresponding examples are limited to cases of regular design (e.g., see Zhang et al., 2019 ; Zhang et al., 2018; Liang and Jing, 2005; Roussas et al., 1992; Georgiev, 1988). The problem of uniform approximation of the kernel-type estimators has been studied by many authors (e.g., see Einmahl and Mason, 2005; Hansen, 2008; Gu et al., 2007; Shen and Xie, 2013; Li et al., 2016; Liang and Jing, 2005; Wang and Chan, 2014; Chan and Wang, 2014; Gao et al., 2015 and the references therein). In this paper, we study a class of kernel-type estimators, asymptotic properties of which do not depend on the design correlation structure. The design may be fixed (and not necessarily regularly spaced) or random (with not necessarily weakly dependent components). We present weighted least square estimators where the weights are chosen as the Lebesgue measures of the elements of a finite random partition of the regression function domain Θ such that every partition element corresponds to one design point. As a result, the proposed kernel estimators for the regression function are transformation of sums of weighted observations in a certain way with the structure of multiple Riemann integral sums, so that conceptually our approach is close to the methods of Priestley and Chao (1972) and of Mack and Müller (1988), who considered the cases of univariate fixed design and i.i.d. random design, respectively. Explicit upper bounds are obtained for the rate of uniform convergence of these estimators to the random regression field. In contrast to the predecessors' results, we do not impose any restrictions on the design correlation structure. We will consider the maximum cell diameter of the above-mentioned partition of Θ generated by the design elements, as the main characteristic of the design. Sufficient conditions for the consistency of the new estimators, as well as the windows' widths will be derived in terms of that characteristic. The advantage of that characteristic over the classical weak dependence conditions is that the characteristic is insensitive to forms of correlation of the design elements. The main condition will be that the maximum cell diameter tends to zero in probability as the sample size grows. Note that such requirement is, in fact, necessary, since only when the design densely fills the regression function domain, it is possible to reconstruct the function more or less precisely. Univariate versions of this estimation problem were studied in Borisov et al. (2021) and Linke et al. (2022) where the asymptotic analysis and simulations showed that the proposed estimators perform better than the Nadaraya–Watson ones in several cases. Note that the univariate case in Borisov et al. (2021) does not allow direct generalization to a multivariate case, since the weights were defined there as the spacings of the variational series generated by the design elements. Note also that the estimator in Borisov et al. (2021) is a particular univariate case of the estimators proposed in this paper, but not the only one. One of the univariate estimators studied here may be more accurate than the estimator in Borisov et al. (2021) (see Remark <ref> below). Conditions on the design elements similar to those of this paper were used Linke and Borisov (2022), and in Linke (2023). 
The conditions provide uniform consistency of the estimators, but guarantee only pointwise consistency of the Nadarya–Watson ones. Besides, similar restrictions on the design elements were used before in Linke and Borisov (2017, 2018), and Linke (2019) in estimation of the parameters of several nonlinear regression models. In this paper, we will assume that the unknown random regression function f(t), t∈Θ, is continuous with probability 1. Considering the general case of random regression function allows us to obtain results on estimating the mean function of a random regression process. In regard to estimating random regression functions, we may note the papers by Li and Hsing (2010), Hall et al. (2006), Zhou et al. (2018), Zhang and Wang (2016, 2018), Yao et al. (2005), Zhang and Chen (2007), Yao (2007), Lin and Wang (2022). In those papers, the mean and covariance functions of the random regression process f(t) were estimated when, for independent noisy copies of the random process, each of the trajectories was observed in a certain subset of design elements (nonuniform random time grid). Estimation of mean and covariance of random processes is an actively developing area of nonparametric estimation, especially in the last couple of decades, is of independent interest, and plays an important role in subsequent analyses (e.g., see Hsing and Eubank, 2015; Li and Hsing, 2010; Zhang and Wang, 2016; Wang et al., 2016). Estimation of random regression functions usually deals with either random or deterministic design. In the case of random design, it is usually assumed that the design elements are independent identically distributed (e.g., see Hall et al., 2006; Li and Hsing, 2010; Zhou et al., 2018; Yao , 2007; Yao et al., 2005; Zhang and Chen, 2007; Zhang and Wang, 2016, 2018; Lin and Wang, 2022). Some authors emphasized that their results can be extended to weakly dependent design (e.g., see Hall et al., 2006). For deterministic time grids, regularity conditions are often required (e.g., see Song et al., 2014; Hall et al., 2006). In regard to denseness of filling the regression function domain, the two types of design are distinguished in the literature: either the design is “sparse”, e.g., the number of design elements in each series is uniformly limited (Hall et al., 2006; Zhou et al., 2018; Li and Hsing, 2010), or the design is “dense” and the number of elements in a series increases with the sequential number of the series (Zhou et al., 2018; Li and Hsing, 2010). Uniform consistency of several estimators of the mean of random regression function was considered, for example, by Yao et al. (2005), Zhou et al. (2018), Li and Hsing (2010), Hsing and Eubank (2015), Zhang and Wang (2016). In this paper, we consider one of the variants of estimation of the mean of a random regression function as an application of the main result. In the case of dense design, uniformly consistent estimators are constructed for the mean function, when the series-to-series-independent design is arbitrarily correlated inside each series. We require only that, in each series, the design elements form a refining partition of the domain of the random regression function. Our settings also include a general deterministic design situation, but we do not impose traditionally used regularity conditions. Thus, in the problem of estimating the mean function, as well as in the problem of estimating the function in model (<ref>), we weaken traditional conditions on the design elements. 
Note that methodologies used for estimating the mean function for dense and for sparse data usually differ (e.g., see Wang et al., 2016). In the case of growing number of observations in each series, it is natural to preliminarily evaluate the trajectory of the random regression function in each series and then average the estimates over all series (e.g., see Hall et al., 2006). That is what we will do in this paper following this generally accepted approach. Universal estimates both for the mean and covariance functions of a random process in the case of sparse data, insensitive to the nature of the dependence of design elements, are proposed in Linke and Borisov (2024). This paper has the following structure. Section 2 contains the main results on the rate of uniform convergence of the new estimators to the random regression function. In Section 3, we consider an application of the main results to the problem of estimating the mean and covariance function of a random regression field. In Section 4, the asymptotic normality of the new estimators is discussed. Section 5 contains several simulation examples. In Section 6, we discuss an example of assessing real data on earthquakes in Japan in 2012–2021. In Section 7, we summarize the main results of the paper. The proofs of the theorems and lemmas from Sections 2–4 are contained in Section 8. 2. Main assumptions and results Without loss of generality we will assume that d(Θ∪ 0)≤ 1, where d(A):=sup_x,y∈ Ax-y is the diameter of a set A and · is the supnorm in ℝ^k. In what follows, unless otherwise stated, all the limits will be taken as n→∞. Our approach recalls a construction of multivariate Riemann integrals. To this end, we need the following condition on the design {X_i}. ( D) For each n, there exists a random partition of the set Θ into n Borel-measurable subsets {Δ_i; i=1,…,n} such that δ_n:=max_i≤ nd(Δ_i∪ X_i)→ 0 in probability. Condition (D) means that, for every n, the set {X_i; i≤ n} forms a δ_n-net in the compact set Θ. In particular, Condition (D) is satisfied if the design points {X_i} are pairwise distinct, X_i∈Δ_i for all i≤ n, and max_i≤ nd(Δ_i)→ 0 in probability. In the case Θ=[0,1]^k, a regularly spaced design satisfies Condition (D). Moreover, if {X_i; i≥ 1} is a stationary sequence satisfying an α-mixing condition and [0,1]^k is the support of the distribution of X_1, then Condition (D) is fulfilled (see Remark 3 in Linke and Borisov, 2017). It is not hard to verify that, for i.i.d. design points with the probability density function of X_1 bounded away from zero on [0,1]^k, one can have δ_n=O(log n/n^1/k) with probability 1. Notice that the dependence of random variables {X_i} in Condition (D) may be much stronger than that in these examples (see Linke and Borisov, 2017, 2018 and the example below). Example. Let a sequence of bivariate random variables {X_i; i≥ 1} is defined by the relation X_i=ν_i U_1i+(1-ν_i) U_2i, where the random vectors { U_1i} and {U_2i} are independent and uniformly distributed on the rectangles [0,1/2]× [0,1] and [1/2,1]×[0,1], respectively, while the sequence {ν_i} does not depend on { U_1i} and {U_2i} and consists of Bernoulli random variables with success probability 1/2, i.e., the distribution of X_i is the equi-weighted mixture of the two above-mentioned uniform distributions. The dependence between the random variables {ν_i} is defined by the equalities ν_2i-1=ν_1 and ν_2i=1-ν_1. 
In this case, the random variables { X_i; i≥ 1} in (<ref>) form a stationary sequence of random variables uniformly distributed on the unit square [0,1]×[0 ,1], but, say, all known mixing conditions are not satisfied here because, for all natural m and n, ℙ( X_2m∈ [0,1/2]× [0,1], X_2n-1∈ [0,1/2]× [0,1])=0. On the other hand, it is easy to check that the stationary sequence {X_i} satisfies the Glivenko–Cantelli theorem. This means that, for any fixed h>0, #{i: t- X_i≤ h, 1≤ i≤ n}∼ 4h^2 n almost surely uniformly in t, where # denotes the standard counting measure. In other words, the sequence { X_i} satisfies Condition (D). It is clear that, according to the scheme of this example, one can construct various sequences of dependent random variables uniformly distributed on [0,1]× [0,1], based on the choice of different sequences of the Bernoulli switches with the conditions ν_j_k=1 and ν_l_k=0 for infinitely many indices {j_k} and {l_k}, respectively. In this case, Condition (D) will also be satisfied. But the corresponding sequence { X_i} (not necessarily stationary) may not satisfy the strong law of large numbers. For example, a similar situation occurs when ν_j=1-ν_1 for j=2^2k-1,…,2^2k-1 and ν_j=ν_1 for j=2^2k,…,2^2k+1-1, where k=1,2,… (i.e., we randomly choose one of the two rectangles [0,1/2]× [0,1] and [1/2,1] × [0,1], into which we randomly throw the first point, and then alternate the selection of one of the two rectangles by the following numbers of elements of the sequence: 1, 2, 2^2, 2^3, etc.). Indeed, we can introduce the notation n_k=2^2k-1, ñ_k=2^2k+1-1, S_m=∑_i=1^m X_i^(1), with X_i=(X_i^(1), X_i^(2)), and note that, for all outcomes consisting the event {ν_1=1}, one has S_n_k/n_k=1/n_k∑_i∈ N_1,kU_1i^(1)+1/n_k∑_i∈ N_2,kU_2i^(1), where U_ji=(U_ji^(1), U_ji^(2)), j=1,2; N_1,k and N_2,k are the collections of indices for which the observations {X_i, i≤ n_k} lie in the rectangles [0,1/2]× [0,1] or [1/2,1]× [0,1], respectively. It is easy to see that #(N_1,k)=n_k/3 and #(N_2,k)=2#(N_1,k). Hence, S_n_k/n_k→7/12 almost surely as k→∞ due to the strong law of large numbers for the sequences {U_1i^(1)} and {u_2i^(1)}. On the other hand, for all elementary outcomes in the event {ν_1=1}, as k→∞, we have with probability 1 S_ñ_k/ñ_k=1/ñ_k∑_i∈Ñ_1,kU_1i^(1)+1/ñ_k∑_i∈Ñ_2,kU_2i^(1)→5/12, where Ñ_1,k and Ñ_2,k are the collections of indices for which the observations { X_i, i≤ñ_k} lie the rectangles [0,1/2]× [0,1] or [1/2,1]× [0,1], respectively. In proving the convergence in (<ref>) we took into account that #(Ñ_1,k)=(2^2k+2-1)/3, #(Ñ_2,k)=2n_k/3, i.e., #(Ñ_1,k)=2#(Ñ_2,k)+1. Similar arguments are valid for elementary outcomes consisting the event {ν_1=0}. In what follows, by K(s), s∈ℝ^k, we will denote the kernel function. We assume that the kernel function is zero outside [-1,1]^k and is a centrally symmetric probability density function, i.e., K(s)≥ 0, K(s)=K(-s) for all s∈ [-1,1]^k, and ∫_[-1,1]^k K(s) ds =1. For example, we may consider product-kernels of the form K(s)=∏_j=1^kK_o(s^(j)), where K_o(·) is a univariate symmetric probability density function with support [-1,1]. We also assume that the function K(s) satisfies the Lipschitz condition with constant L≥ 1: |K(x)-K(y)|≤ L(|x^(1)-y^(1)|+⋯+|x^(k)-y^(k)|) for all x=(x^(1),…,x^(k)) and y=(y^(1),…,y^(k)), and put K(y)=0 for all y such that y> 1. Notice that, under these restrictions, sup_s K(s)≤ L. Put K_ε(s):=ε^-k K(ε^-1s). By θ_ε we denote a random vector with the density K_ε(t), which is independent of the random variables {Y_i}. 
Let Λ(·) denote the Lebesgue measure in ℝ^k. Introduce the following notation: f^*_n,ε(t):=∑_i=1^nY_iK_ε(t-X_i)Λ(Δ_i)/∑_i=1^nK_ε(t-X_i)Λ(Δ_i), where 0/0=0 by definition; J_ε(t):=∫_ΘK_ε(t-x) Λ(dx)≡ P(t-θ_ε∈Θ), t∈Θ; ω_f(ε):=sup_x,y∈Θ: x-y≤ε|f(x)-f(y)|. Now, notice that f^*_n,ε(t)= argmin_z∈ℝ∑^n_i=1(Y_i-z)^2K_ε(t-X_i)Λ(Δ_i), i.e., the estimators of the form (<ref>) belong to the class of weighted least square estimators. Estimators (<ref>) are also called local constant ones. Finally, we will assume that there exist constants ρ>0 and ε_0∈ (0,1] such that J_ε(t)≥ρ t∈Θ ε≤ε_0. So, some cases (for example, if Θ contains isolated points) are excluded from the scheme under consideration. Notice that if the set Θ can be represented as the union of hyperrectangles with the edges of lengths greater than ε_0 and kernel K is a product-kernel, then we have the lower bound ρ≥ 2^-k. The main result of this paper is as follows. Theorem 1. Let the conditions ( D) and (<ref>) hold. Then, for any fixed ε∈ (0,ε_0], sup_t∈Θ|f^*_n,ε(t)-f(t)|≤ω_f(ε) +ζ_n(ε) with probability 1, where ζ_n(ε) is a positive random variable such that P( ζ_n(ε)>y) ≤ G(k,p) ρ^-p M_p L^p/2 y^-p ε^-k(p/2+1) E(δ_n^kp/2) + P(δ_n>εmin{1, ρ(k2^k+1 L)^-1}), where 0<G(k,p)< (p-1)^p/2 2^p(k+(3/2)) ( 1 +k/2^(p-k)/(p+1) - 1)^p+1. In what follows, we will denote by O_p(η_n) some univariate random variables ζ_n such that, for all y>0, lim sup_n→∞ P(|ζ_n|/η_n>y)≤β(y), where {η_n} is a sequence of positive nonrandom numbers, lim_y→∞β(y)=0, and the function β(y) does not depend on n. For example, let the function f be nonrandom. In (<ref>), put y=(ε^-k(p/2+1) E(δ_n^kp/2))^1/p. Applying the power Markov's inequality with exponent kp/2 for the second summand in (<ref>), we obtain that, under the conditions of the theorem, ζ_n(ε)=O_p((ε^-k(p/2+1) E(δ_n^kp/2))^1/p) and there exists a solution ε≡ε(n) to the equation E(δ_n^kp/2)=ε^k(p/2+1)ω^p_f(ε). It is clear that this solution vanishes as n→∞. In fact, the value ε(n) minimizes in ε the order of smallness for the right-hand side of (<ref>). Notice that δ_n/ε(n)p→ 0 and (ε(n))^-k(p/2+1) E(δ_n^kp/2) in view of (<ref>). Taking Remark 1 into account one can obtain the following two assertions as consequences of Theorem 1. Corollary 1. Let C be a set of nonrandom equicontinuous functions from the function space C[0,1]^k (for example, a precompact subset of C[0,1]^k). Then, under Condition (D), γ_n( C):=sup_f∈ C sup_t∈ [0,1]^k| f^*_n,ε̃(n)(t)-f(t)|p→ 0, where ε̃(n) is a solution to equation (<ref>) in which the modulus of continuity ω_f(ε) is replaced with the universal modulus ω_ C(ε):=sup_f∈ Cω_f(ε). In this case, γ_n( C)=O_p(ω_ C(ε̃(n))). Corollary 2. If the modulus of continuity of the regression random field f(t), t∈[0,1]^k, in Model (<ref>) meets the condition ω_f(ε)≤ζτ(ε) a.s., where ζ>0 is a proper random variable and τ (ε) is a positive continuous nonrandom function, with τ (ε)→ 0 as ε→ 0, then, under Condition (D), sup_t∈ [0,1]^k| f^*_n,ε̂(n)(t)-f(t)|p→ 0, where ε̂(n) is a solution to equation (<ref>) in which the modulus of continuity ω_f(ε) is replaced with τ (ε). Example 2. Let Θ=[0,1]^k, δ_n≤ν n^-1/k, with Eν^kp/2<∞, and ω_f(ε)≤ζε^α, α∈ (0,1], where ζ is a proper random variable. Then ε(n)=O(n^-1/2k(1/p+1/2)+α) and sup_t∈ [0,1]^k|f^*_n,ε(t)-f(t)|=O_p(n^-α/2k(1/p+1/2)+α). In particular, in the one-dimensional case, if f(·)=W(·) is a Wiener process on [0,1], and the i.i.d. 
random variables ξ_i are centered Gaussian, then by Lévy's modulus of continuity theorem, for any arbitrarily small ν>0, we have sup_t∈ [0,1]|f^*_n,ε(t)-f(t)|=O_p(n^-1/3+ν). Here we put k=1, α=1/2-ν_1, and 1/p<ν_2, with arbitrarily small positive ν_1 and ν_2. Let k=1, Θ=[0,1]. Denote by X_n:1≤…≤ X_n:n the ordered sample {X_i; i=1,…, n}. Put X_n:0:=0, X_n:n+1:=1, Δ_ni:=(X_n:i-1, X_n:i], i=1,…,n. Denote by Y_ni the response variable Y corresponding to X_n:i in (<ref>). Then we can write down estimator (<ref>) for the function f as f^*_n,ε(t)=∑_i=1^nY_niK_ε(t-X_n:i) Δ X_ni/∑_i=1^nK_ε(t-X_n:i)Δ X_ni, where Δ X_ni:= Λ(Δ_ni) = X_n:i-X_n:i-1. This estimator was proposed and studied in detail in Borisov et al. (2021). But, instead of {Δ_ni}, we can consider Voronoi cells Δ_ni:= ( X_n:i-1+X_n:i/2, X_n:i+X_n:i+1/2] and write down the corresponding estimator: f^*_n,ε(t)=∑_i=1^nY_niK_ε(t-X_n:i) ΔX_ni/∑_i=1^nK_ε(t-X_n:i) ΔX_ni, where ΔX_ni:= Λ(Δ_ni) = X_n:i+1-X_n:i-1/2. Repeating, for the last estimator, the corresponding proofs in Borisov et al. (2021) originally applied to estimator (<ref>), we can easily see that all properties of estimator (<ref>) are retained for (<ref>), except the constant factor in the asymptotic variance. Namely, let the regression function f (t) be twice continuously differentiable and nonrandom, let the errors {ξ_i} be independent identically distributed, centered with finite second moment M_2, and independent of the design {X_i}, whose elements be independent identically distributed. In addition, let X_1 have a strictly positive density p(t) which is continuously differentiable. Then 𝕍ar f_n,ε^*(t)∼2M_2/h np(t)∫_-1^1K^2(u)du, 𝕍arf_n,ε^*(t)∼1.5M_2/h np(t)∫_-1^1K^2(u)du. The former asymptotic relation was established in Lemma 3 by Borisov et al. (2021). The latter relation can be proved by repeating the proof of that lemma with obvious changes. Hence, in the case of independent and identically distributed design points, the asymptotic variance of the estimator can be reduced by choosing an appropriate partition. Thus, for k=1, this paper deals with a more general class of estimators (<ref>) than that in Borisov et al. (2021) where estimator (<ref>) was studied, and representatives of the new class can have certain advantages over the estimator (<ref>). 3. Application to estimating the mean and covariance functions of a random regression function In this section, as an application of Theorem 1, we will construct a consistent estimator for the mean function of the random regression function in Model (<ref>). We consider the following multivariate statement of the problem of estimating the mean function of an a.s. continuous random regression stochastic process. Consider N independent copies of Model (<ref>): Y_i,j=f_j(X_i,j)+ξ_i,j, i=1,…,n, j=1,…,N, where f(t), f_1(t),…, f_N(t), t∈ [0,1]^k, are i.i.d. unknown a.s. continuous stochastic processes, and, for every j, the collection {ξ_i,j; i≤ n} satisfies condition (<ref>). Here and in what follows, the subscript j denotes the sequential number of such a copy. Introduce the notation f^*_N,n,ε(t):=1/N∑_j=1^Nf^*_n,ε,j(t). Theorem 2. Let Condition (D) for Model (<ref>) be fulfilled and Esup_t∈ [0,1]^k|f(t)|<∞. Besides, let a sequences ε≡ε_n→ 0 and a sequence of naturals N≡ N_n→∞ satisfy the conditions ε^-k(p/2+1) E(δ_n^kp/2)→ 0 N P(δ_n>εmin{1, ρ(k2^k+1 L)^-1})→ 0. Then sup_t∈ [0,1]^k|f^*_N,n,ε(t)- Ef(t)|p→ 0. 
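As an illustration of the estimators discussed above, the following NumPy sketch implements the weighted local-constant estimator f^*_{n,ε}(t) (with the cell measures Λ(Δ_i) supplied as weights and a product Epanechnikov kernel, one admissible Lipschitz choice) together with the averaged estimator f^*_{N,n,ε}(t) of the mean function built from N independent series; the function names and the kernel choice are illustrative, not part of the paper.

```python
import numpy as np

def K_prod(S):
    """Product Epanechnikov kernel on [-1, 1]^k (one admissible Lipschitz kernel)."""
    return np.prod(0.75 * np.clip(1.0 - S ** 2, 0.0, None), axis=-1)

def f_star(t, X, Y, w, eps):
    """Weighted local-constant estimator f*_{n,eps}(t); X is (n, k), Y is (n,),
    and w[i] = Lambda(Delta_i) are the cell measures."""
    X = np.asarray(X, dtype=float)
    ker = K_prod((np.asarray(t, dtype=float) - X) / eps) / eps ** X.shape[1]
    den = np.sum(ker * w)
    return np.sum(Y * ker * w) / den if den > 0 else 0.0   # convention 0/0 = 0

def f_star_mean(t, series, eps):
    """Averaged estimator f*_{N,n,eps}(t) of E f(t); series = [(X_j, Y_j, w_j), ...]."""
    return np.mean([f_star(t, X, Y, w, eps) for X, Y, w in series])
```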
If condition (<ref>) is replaced with a slightly stronger condition Esup_t∈ [0,1]^kf^2(t)<∞ then, under the restrictions (<ref>), one can prove the uniform consistency of the estimator M_N,n^*(t_1,t_2):=1/N∑_j=1^Nf^*_n,ε,j(t_1)f^*_n,ε,j(t_2), t_1,t_2∈ [0,1]^k, for the unknown mixed second-moment function Ef(t_1)f(t_2), where ε≡ε_n and N≡ N_n are defined in (<ref>). The proof is based on the same arguments as those in proving Theorem 2, and therefore is omitted. In other words, under the above-mentioned conditions, the estimator Cov^*_n(t_1,t_2):= M_N,n^*(t_1,t_2)-f^*_N,n,ε(t_1)f^*_N,n,ε(t_2) is uniformly consistent for the covariance function of the random regression field f(t). 4. Asymptotic normality In this section, we discuss sufficient conditions for asymptotic normality of the estimators f^*_n,ε(t). Denote by F_0 the trivial σ-field, and by F_j the σ-field generated by the collection {ξ_1,…,ξ_j}, by the design points, and by the regression random field. Theorem 3. Let the design {X_i} do not depend on n. Under Condition (D), assume that, for some t∈Θ and a sequence ε≡ε_n, h_n:=max_j≤ n(K_ε(t-X_j) Λ(Δ_j))^2/∑_j=1^n (K_ε(t-X_j) Λ(Δ_j))^2p→ 0, E(ξ_j^2 | F_j-1)=σ^2 j, max_j E(ξ_j^2 1 (ξ_j^2>a/h_n) | F_j-1) p→ 0 a>0. Then B^-1_n,ε(t)(f^*_n,ε(t)-f(t) -r_n,ε(t))d→ N(0,σ^2), where N(0,σ^2) is a centered Gaussian random variable with variance σ^2, B^2_n,ε(t):=J^-2_n,ε(t) ∑_i=1^n(K_ε(t-X_i)Λ(Δ_i))^2, r_n,ε(t):=J^-1_n,ε(t) ∑_i=1^n(f(X_i)-f(t))K_ε(t-X_i)Λ(Δ_i), J_n,ε(t):=∑_i=1^nK_ε(t-X_i)Λ(Δ_i). The theorem is a direct consequence of Corollary 3.1 in Hall and Heyde (1980). 5. Simulation examples In this section, we present simulations comparing the estimator f^*_n,ε(t) defined in (<ref>) with the Nadaraya–Watson estimator f̂_n,ε(t):= ∑_i=1^nY_iK_ε(t-X_i)/∑_i=1^nK_ε(t-X_i) in the 2-dimensional case. For this estimator, we will assume 0/0=0, like that was done for the estimator (<ref>). The elements of the design space Θ = [-1,1]× [-1,1] will be denoted by (x,y). The following two algorithms were used to partition the space Θ into the sets Δ_i. The first algorithm is the Voronoi partitioning. For each i, the set Δ_i is the Voronoi cell corresponding to X_i, i.e. the set of all points of Θ that lie closer to X_i than to any other design point. The deldir R package was employed for calculation of the squares of the cells. The second algorithm is recursive partitioning by coordinate-wise medians. First, we divide Θ into the two rectangles by the line t^(1) = median{X_1^(1),…,X_n^(1)}, where the median is the midpoint of the interval (X_⌊ n/2 ⌋^(1), X_⌊ n/2 ⌋+1^(1)) when all the points are sorted in increasing order with respect to the first coordinate. Then each of the two rectangles is divided recursively. If, at some step, a rectangle contains two or more design points then it is divided into the two parts: If the rectangle's width is greater than its height, then the rectangle is divided by the line t^(1)= median{X_l^(1): l∈ B}, where B is the set of indices of the design points falling into the rectangle; otherwise it is divided by the line t^(2)= median{X_l^(2): l∈ B}. As soon as there is only one design point X_i in a rectangle, the rectangle is put to be Δ_i. Results of partitioning Θ into cells for a collection of 100 points by the both algorithms are displayed in Fig. <ref>. In the simulation examples below, we used the tricubic kernel K(x,y)=440/162πmax{0, (1-√(x^2+y^2)^ 3)^3}. In each example, 1000 simulation runs were performed. 
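The recursive partitioning by coordinate-wise medians described above is straightforward to implement. The sketch below is a Python illustration of the ULCM-type partition only (the Voronoi variant in the simulations relied on the deldir R package and is not reproduced here); it returns the area Λ(Δ_i) of the rectangular cell attached to each design point, assuming all coordinates are distinct, as is the case for the continuous designs used in the examples.

```python
import numpy as np

def median_partition_areas(points, bounds=((-1.0, 1.0), (-1.0, 1.0))):
    """Recursive coordinate-wise median partition of a rectangular domain.

    Each rectangle containing two or more design points is split by the median of the first
    coordinate if the rectangle is at least as wide as it is tall, otherwise by the median of
    the second coordinate; a rectangle containing a single point becomes that point's cell.
    Distinct coordinates are assumed. Returns the cell area Lambda(Delta_i) for every point.
    """
    areas = np.zeros(len(points))

    def split(idx, x0, x1, y0, y1):
        if len(idx) == 0:
            return
        if len(idx) == 1:
            areas[idx[0]] = (x1 - x0) * (y1 - y0)
            return
        if (x1 - x0) >= (y1 - y0):                   # split by the median of the first coordinate
            m = np.median(points[idx, 0])
            split(idx[points[idx, 0] <= m], x0, m, y0, y1)
            split(idx[points[idx, 0] > m], m, x1, y0, y1)
        else:                                        # split by the median of the second coordinate
            m = np.median(points[idx, 1])
            split(idx[points[idx, 1] <= m], x0, x1, y0, m)
            split(idx[points[idx, 1] > m], x0, x1, m, y1)

    (x0, x1), (y0, y1) = bounds
    split(np.arange(len(points)), x0, x1, y0, y1)
    return areas

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.uniform(-1.0, 1.0, size=(100, 2))
    a = median_partition_areas(pts)
    print("total area of all cells:", a.sum())       # equals the area of Theta, here 4
```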
In each of the simulation runs, 5000 design points were generated and randomly divided into the training (80%) and validation (20%) sets. For the design points X_i, the observations Y_i were generated with i.i.d. Gaussian noise with standard deviation σ=0.5. For each of the tested algorithms, on the training set, the optimal ε was calculated by 10-fold cross-validation minimizing the average of mean-square errors. The ε was selected from 20 values located on the logarithmic grid from 0.01 to 0.5. The random partitioning for the cross-validation was the same for all the tested algorithms. Then, for each of the algorithms, the model, trained on the training set with the chosen ε, was used to compute the mean-square error (MSE) for the observations of the validation set: MSE = 1/m∑_j (f^*_ε(X_j)-Y_j)^2, where the sum is taken over the validation set, and m is the size of the set. Besides, that model was employed to compute the maximal absolute error (MaxE) for the true values of the target function f on the 100× 100 uniform lattice on Θ: MaxE = max_j |f^*_ε(γ_j)-f(γ_j)|, where the maximum is computed for the elements γ_j of the 100× 100 lattice covering Θ. The algorithms that were compared will be denoted by NW (Nadaraya-Watson), ULCV (Universal Local Constant estimator (<ref>) with Voronoi partitioning), and ULCM (Universal Local Constant estimator (<ref>) with coordinate-wise Medians partitioning). The results of the simulation runs are presented as median (1-st quartile, 3-rd quartile) and are compared between the estimators with the paired Wilcoxon test. In the examples below, we intentionally chose the densities of the design points with high nonuniformity in order to demonstrate possible advantages of the new estimator. 5.1. Example 1 In this example, we approximate the nonrandom regression function f(x,y)=5/1+e^-20x - 2y^3. The design points were generated in a way similar to that in Example in Sec. 1. First, we choose the left rectangle [-1,0)× [-1,1] or the right rectangle [0,1]× [-1,1] with equal probabilities and draw X_1 uniformly distributed in the chosen rectangle. Then we draw 10 design points uniformly distributed in the other rectangle. Then we draw 10^2 design points uniformly distributed in the rectangle where X_1 lies. Then we draw 10^3 design points uniformly distributed in the rectangle where X_2 lies, and so on. In other words, we alternate the rectangle after 1, 10, 10^2, 10^3, ... draws. One draw of the 5000 design points is depicted in Fig. <ref>. The estimated function f(x,y) and a computed ULCV estimate are depicted in Fig. <ref>. The results are presented in Fig. <ref>. The ULCV estimator appeared to perform best among the three considered ones both for MSE and MaxE accuracy measures. In particular, the ULCV estimator was better than the NW one: MSE 0.2661 (0.2584, 0.2742) vs. 0.2734 (0.2650, 0.2819), p<0.0001; MaxE 0.7878 (0.7013, 0.9230) vs. 1.1998 (1.0911, 1.3250), p<0.0001. In this example, the ULCM estimator was better than the NW one as well. 5.2. Example 2 In this example, we approximate the nonrandom regression function f(x,y)=sin(10 √(x^2+y^2))/√(x^2+y^2). The i.i.d. design points were generated with independent polar coordinates (ρ,φ), where ρ was drawn with the density proportional to r^2(2-r)^1/10, 0≤ r ≤ 2, and φ was uniformly distributed on [0,2π). 
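Reading the radial exponent as (2-r)^{1/10}, one simple way to draw such radii is rejection sampling against a constant envelope. The sketch below is an illustration of this idea (it is not the generator used for the reported runs); points falling outside Θ can then be discarded as described next.

```python
import numpy as np

def sample_radius(n, rng):
    # Rejection sampling from the unnormalised radial density g(r) = r^2 (2 - r)^(1/10) on [0, 2].
    g = lambda r: r**2 * (2.0 - r)**0.1
    g_max = 1.001 * g(np.linspace(0.0, 2.0, 10001)).max()   # slightly inflated numerical envelope
    out = np.empty(0)
    while out.size < n:
        r = rng.uniform(0.0, 2.0, size=2 * n)
        u = rng.uniform(0.0, g_max, size=2 * n)
        out = np.concatenate([out, r[u <= g(r)]])
    return out[:n]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 5000
    rho = sample_radius(n, rng)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    x, y = rho * np.cos(phi), rho * np.sin(phi)
    # Points outside the square design space can be filtered out afterwards if required.
    print("fraction inside [-1, 1]^2:", float(np.mean((np.abs(x) <= 1) & (np.abs(y) <= 1))))
```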
The distribution of the design points was restricted on Θ = [-1,1]× [-1,1], i.e., the design points that did not fall into Θ were excluded, keeping the total number of collected points equal to 5000, as in the other simulation examples. One draw of the design points is depicted in Fig. <ref>. The estimated function f(x,y) and a computed ULCV estimate are depicted in Fig. <ref>. The results are presented in Fig. <ref>. The ULCV estimator was the best among the three considered ones both for MSE and MaxE accuracy measures. In particular, the ULCV estimator was better than the NW one: MSE 0.2803 (0.2718, 0.2898) vs. 0.2870 (0.2774, 0.2974), p<0.0001; MaxE 2.505 (2.072, 3.140) vs. 2.695 (2.303, 3.361), p<0.0001. In this example, the ULCM estimator had lower MaxE and higher MSE than the NW estimator did. 5.3. Example 3 In this example, we approximate the same nonrandom regression function (<ref>) as in Example 2. The only difference of this example from Example 2 is that here the coordinates of the design points were generated as independent normal random variables with mean 0 and standard deviation 1/2. As above, the distribution of the design points was restricted on Θ = [-1,1]× [-1,1]. One draw of the design points is depicted in Fig. <ref>. The results are presented in Fig. <ref>. The ULCV estimator was the best one in terms of MSE, in particular, it was better than NW: MSE 0.2834 (0.2750, 0.2922) vs. 0.2895 (0.2808, 0.2977), p<0.0001. But ULCV was worse than NW in terms of MaxE: 1.507 (1.364, 1.653) vs. 1.488 (1.357, 1.643), p<0.0001. In this example, the ULCM estimator was the worst one both for MSE and MaxE. However, from a practical point of view, the three estimators demonstrated similar accuracy in terms of MaxE. 6. Real data application In this section, we compared the new ULCV and ULCM estimators with the NW one in the application to the data on earthquakes in Japan that happened in 2012–2021 (data retrieved from ANSS Comprehensive Earthquake Catalog, 2022). Each of the 10184 collected earthquake events was described by its coordinates (longitude and latitude) and its magnitude (ranging from 2.7 to 7.8). The collected events are presented in Fig. <ref>. The goal of the application of the estimators was to accurately estimate the mean magnitude depending on the coordinates. As in the simulation examples above, we did 1000 runs, in each of which the data were randomly divided into the training (80%) and validation (20%) sets. For each of the tested algorithms, on the training set, the optimal ε was calculated by 10-fold cross-validation minimizing the average of mean-square errors. The ε was selected from 20 values located on the logarithmic grid from 1 to 10. The random partitioning for the cross-validation was the same for all the tested algorithms. The difference of the computations of this section with those of Sec. 5 was that we did not know the true value of the estimated function, therefore, we had to estimate the maximal error (MaxE) on the validation set in each run, not on true values of the estimated function. Besides, since the domain of the coordinates of the events is nonrectangular while the epmloyed domain partitioning algorithms (Voronoi cells algorithm and coordinate-wise medians algorithm) calculated the squares of the cells for a rectangular domain, we bounded the squares of the cells from above by 1 in order to avoid overweighting of the corresponding observations. The resulting estimates of the NW and ULCV estimators are depicted in Fig. 
<ref>, where, for each estimator, the value of ε was chosen as the median of those chosen in the 1000 runs. The results are presented in Fig. <ref>. The ULCV estimator was the best among the three considered ones both for MSE and MaxE accuracy measures. In particular, the ULCV estimator was better than the NW one: MSE 0.1296 (0.1245, 0.1348) vs. 0.1297 (0.1239, 0.1364), p<0.0001; MaxE 2.573 (2.464, 2.785) vs. 2.736 (2.442, 3.346), p=0.0005. In this example, the ULCM estimator yielded lower MaxE and higher MSE than the NW estimator did. However, from a practical point of view, the three estimators displayed similar median MSE. 7. Conclusion In this paper, for a wide class of nonparametric regression models with a multivariate random design, universal uniformly consistent kernel estimators are proposed for the unknown random regression functions (random fields) of the corresponding multivariate argument. These estimators belong to the class of local constant kernel estimators. But in contrast to the vast majority of previously known results, traditional correlation conditions of design elements are not needed for the consistency of the new estimators. The design can be either fixed and not necessarily regular, or random and not necessarily consisting of independent or weakly dependent random variables. With regard to design elements, the only condition that is required is the dense filling of the regression function domain with the design points. Explicit upper bounds are found for the rate of uniform convergence in probability of the new estimators to an unknown random regression function. The only characteristic explicitly included in these estimators is the maximum diameter of the cells of partition generated by the design elements, and only convergence to zero in probability is required for the characteristic. The advantage of this condition over the classical ones is that it is insensitive to the forms of dependence of the design observations. Note that this condition is, in fact, necessary, since only when the design densely fills the regression function domain, it is possible to reconstruct the regression function with a certain accuracy. As a corollary of the main result, we obtain a consistent estimator for the mean function of a continuous random process. In the simulation examples of Section 5, the new estimators were compared with Nadaraya–Watson estimators. In some of the examples, the new estimators proved to be most accurate. In Section 6, as an application of the new estimators, we studied the real data on the magnitudes of earthquakes in Japan, and the accuracy of the new estimators was comparable to that of the Nadaraya-Watson ones. 8. Proofs Proof of Theorem 1. Taking the relation (<ref>) into account, one can obtain the following identity: f^*_n,ε(t)=R_n,ε(f,t)+ν_n,ε(t), where R_n,ε(f,t):=J^-1_n,ε(t)∑_i=1^nf(X_i)K_ε(t-X_i) Λ(Δ_i), ν_n,ε(t):=J^-1_n,ε(t)∑_i=1^nK_ε(t-X_i)ξ_iΛ(Δ_i), J_n,ε(t):=∑_i=1^nK_ε(t-X_i)Λ(Δ_i). Notice that, by virtue of the properties of K, the range of summation in the three sums above is equal to {i: t-X_i≤ε}. This is the principal argument in the calculations below. Lemma 1. The following estimate is valid: sup_t∈Θ|R_n,ε(f,t)-f(t)|≤ω_f(ε). The proof is immediate from the identity R_n,ε(f,t)=f(t)+J^-1_n,ε(t)∑_i: t-X_i≤ε(f(X_i)-f(t)) K_ε(t-X_i)Λ(Δ_i). Lemma 2. If δ_n≤ε≤ε_0 then, for any t∈Θ, the following relation is valid: J_n,ε(t)≥ρ - k 2^k L δ_n ε^-1. Proof. 
We have J_n,ε(t)=∫ g(y) dy, where g(y)=∑_i=1^nK_ε(t-X_i) I(y∈Δ_i); here I(·) is the indicator of an event. Since |K_ε(t-x)-K_ε(t-y)|≤ kLε^-k-1δ_n, x-y≤δ_n, we have g(y)≥ K_ε(t-y)-kLε^-k-1δ_n for all y∈Θ. Hence, by (<ref>), J_n,ε(t) ≥ ∫_y∈Θ: t-y≤ε(K_ε(t-y)-kLε^-k-1δ_n) dy ≥ ρ - k 2^kL δ_n ε^-1. The lemma is proved. Lemma 3. For every y>0 and ε∈(0, ε_0], on the subset of elementary events defined by the relation δ_n/ε≤min{1, ρ(k2^k+1L)^-1}, the following upper bound is valid: P_ F ((δ_n^k/2ε^-k((1/2)+(1/p)))^-1sup_t∈Θ|ν_n,ε(t)|>y )≤ G(k,p) ρ^-p M_p L^p/2 y^-p, where the symbol P_ F denotes the conditional probability given the σ-field F generated by the design {X_i} and the paths of the random field f(·); here G(k,p)< (p-1)^p/2 2^p(k+(3/2)) ( 1 +k/2^(p-k)/(p+1) - 1)^p+1. Proof. Under the condition δ_n/ε≤ρ(k2^k+1L)^-1, by virtue of Lemma 2 the simple inequality |ν_n,ε(t)|≤ 2ρ^-1|μ_n,ε(t)| is valid, where μ_n,ε(t) := ∑_i=1^nK_ε(t-X_i)Λ(Δ_i)ξ_i. The distribution tail of sup_t∈Θ|μ_n,ε(t)| will be estimated by Kolmogorov's dyadic chaining which has been used to estimate the tail probability of the sup-norm of a stochastic processes having continuous paths with probability 1. Without loss of generality we will assume that Θ⊂ [0,1]^k. We first note that the set Θ under the supremum sign above can be replaced with the subset of dyadic rational points R=∪_l≥ 1 R_l, where R_l={(j_1/2^l,…,j_k/2^l): j_1=1,…, 2^l-1;…; j_k=1,…, 2^l-1 }. Thus, sup_t∈Θ|μ_n,ε(t)| ≤sup_t∈ R|μ_n,ε(t)| ≤max_t∈ R_m|μ_n,ε(t)| + ∑_l=m+1^∞∑_r=1^k max_t∈ R_l|μ_n,ε(t+2^-le_r)-μ_n,ε(t)|, where m is some natural number that will be chosen later, and e_r is the k-dimensional vector with the r-th component 1 and other components 0. Hence, P_ F(sup_t∈Θ|μ_n,ε(t)| > y) ≤ P_ F(max_t∈ R_m|μ_n,ε(t)| >a_m y) + ∑_l=m+1^∞∑_r=1^k P_ F( max_t∈ R_l|μ_n,ε(t+2^-le_r)-μ_n,ε(t)| >a_l y/k) ≤ ∑_t∈ R_m P_ F(|μ_n,ε(t)| >a_m y) + ∑_l=m+1^∞∑_r=1^k ∑_t∈ R_l P_ F( |μ_n,ε(t+2^-le_r)-μ_n,ε(t)| >a_l y/k) , where a_m,a_m+1,… is a sequence of positive numbers such that a_m+a_m+1+⋯=1. In order to estimate the probability P_ F(|μ_n,ε(t)| >a_m y), we use Rio's martingale inequality (Rio 2009, Theorem 2.1) E| ∑^n_i=1η_i |^p≤( (p-1) ∑^n_i=1( E|η_i|^p)^2/p)^p/2, where {η_i} is a martingale-difference sequence with finite moments of order p≥ 2. Now, put η_i:=K_ε(t-X_i)Λ(Δ_i)ξ_i. From (<ref>) and the simple upper bounds K_ε(t-X_i)Λ(Δ_i)≤L/ε^kδ^k_n, ∑_i K_ε(t-X_i)Λ(Δ_i)≤L/ε^k(2ε+2δ_n)^k we then obtain that, with probability 1, ∑^n_i=1( E_ F|η_i|^p )^2/p≤ M_p^2/p 2^k L (1+δ_n/ε)^k (δ_n/ε)^k. Under the restriction δ_n≤ε≤ 1, the last inequality and (<ref>) imply that P_ F(|μ_n,ε(t)| >a_m y) ≤ E_ F (|μ_n,ε(t)|^p)/(a_m y)^p≤ G_1 (δ_n/ε)^kp/2/(a_m y)^p , where G_1 = (p-1)^p/2 2^kp L^p/2 M_p . Now let us estimate P_ F( |μ_n,ε(t+2^-le_r)-μ_n,ε(t)| >a_l y/k). In order to do it we use (<ref>) with η_i:= (K_ε(t-X_i+2^-le_r)- K_ε(t-X_i) ) Λ(Δ_i)ξ_i. We have |K_ε(t-X_i+2^-le_r)- K_ε(t-X_i) | Λ(Δ_i) ≤L/ε^k+1 2^-l δ_n^k, ∑_i |K_ε(t-X_i+2^-le_r)- K_ε(t-X_i) |Λ(Δ_i)≤L/ε^k+1 2^-l+1 (2ε+2δ_n)^k. Thus ∑^n_i=1( E_ F|η_i|^p )^2/p≤ M_p^2/p 2^k L 2^-2l+1 (1+δ_n/ε)^k (δ_n/ε)^k. Again, under the restriction δ_n≤ε≤ 1, the last inequality and (<ref>) imply P_ F( |μ_n,ε(t+2^-le_r)-μ_n,ε(t)| >a_l y/k) ≤G_2/k (δ_n/ε)^kp/2 ε^-p 2^-lp/(a_l y)^p, where G_2= 2^p/2 k^p+1 (p-1)^p/2 2^kp L^p/2 M_p =2^p/2 k^p+1 G_1. Combining (<ref>), (<ref>), and (<ref>), we obtain P_ F(sup_t∈Θ|μ_n,ε(t)|>y) < y^-p (δ_n/ε)^kp/2( G_1 2^km a_m^-p +G_2 ε^-p∑_l=m+1^∞ 2^-(p-k)l a_k^-p). 
The optimal sequence a_l minimizing the right-hand side of this inequality is as follows: a_m=c (G_1 2^km)^1/(p+1) and a_l=c (G_2 ε^-p 2^-(p-k)l)^1/(p+1) for l=m+1,m+2,…, where the coefficient c is defined by the relation a_m+a_m+1+⋯=1. For this sequence, we get P_ F(sup_t∈Θ|μ_n,ε(t)|>y) < y^-p (δ_n/ε)^kp/2( (G_1 2^km)^1/(p+1) +∑_l=m+1^∞(G_2 ε^-p 2^-(p-k)l)^1/(p+1))^p+1. Now, put m = ⌈- log_2 ε⌉, where ⌈ a ⌉ is the minimal integer greater than or equal to a. Then P_ F(sup_t∈Θ|μ_n,ε(t)|>y) < y^-p δ_n^kp/2 ε^-k(1+ (p/2))( (2 G_1 )^1/(p+1) +(G_2 2^-(p-k))^1/(p+1)∑_l=0^∞ 2^-(p-k)l/(p+1))^p+1 < y^-p δ_n^kp/2 ε^-k(1+ (p/2)) 2^p/2 G_1 ( 1 +k/2^(p-k)/(p+1) - 1)^p+1. This yields the statement of the lemma with G(k,p)=2^p 2^p/2 G_1/L^p/2 M_p( 1 +k/2^(p-k)/(p+1) - 1)^p+1. Lemma 3 is proved. The statement of Theorem 1 follows from Lemmas 1–3. Proof of Theorem 2. First of all, notice that condition (<ref>) and Lebesgue's dominated convergence theorem imply the relation lim_ν→ 0 Eω_f(ν)=0. It is clear that the relation (<ref>) implies the uniform law of large numbers for independent copies of the a.s. continuous random process f(t), i.e., sup_t∈ [0,1]^k|f_N(t)- Ef(t)|p→ 0 as N→∞, where f_N(t):=1/N∑_j=1^Nf_j(t). Put Δ_n,ε,j:=sup_t∈ [0,1]^k| f^*_n,ε,j(t)-f_j(t)|. So, to prove (<ref>) we need only to verify the following version of the law of large numbers for independent copies of the residuals defined in (<ref>): 1/N∑_j=1^NΔ_n,ε,jp→ 0, but only for the sequences ε and N chosen in (<ref>). Introduce the following events: A_n,ε,j:={δ_n,j≤εmin{1, ρ(k2^k+1 L)^-1}}, j=1,…,N, where the sequence ε≡ε_n→ 0 meets (<ref>). (It is evident that such a sequence exists.) For any positive ν we have P{1/N∑_j=1^NΔ_n,ε,j>ν}≤ P{1/N∑_j=1^NΔ_n,ε,jI(A_n,ε,j)>ν}+N P(A_n,ε,1). Next, from Theorem 1 we obtain EΔ_n,ε,jI(A_n,ε,j)≤ Eω_f(ε)+ ∫_0^∞ P( ζ_n(ε)>y, δ_n≤εmin{1, ρ(k2^k+1 L)^-1})dy ≤ Eω_f(ε)+γ_n + ∫_γ_n^∞ P( ζ_n(ε)>y, δ_n≤εmin{1, ρ(k2^k+1 L)^-1})dy ≤ Eω_f(ε)+C̃γ_n, where C̃:=1+G(k,p) ρ^-p M_p L^p/2 and γ_n:=(ε^-k(p/2+1) Eδ_n^kp/2)^1/p. It remains to apply Markov's inequality for the first probability on the right-hand side of (<ref>) and use the limit relations (<ref>) and the last estimate. Theorem 2 is proved. Acknowledgments The authors are deeply grateful to Professor I.A. Ibragimov for his useful remarks. In addition, the authors thank the anonymous referee whose comments contributed to a better presentation of this study. Data availability statement. The data required to reproduce the above findings are available to download from https://earthquake.usgs.gov/data/comcat/ (ANSS Comprehensive Earthquake Catalog, 2022). References Ahmad, I. A. and Lin, P.-E. (1984), `Fitting a multiple regression function', J. Statist. Plann. Infer. 9, 163–176. ANSS Comprehensive Earthquake Catalog, 2022. In: U.S. Geological Survey, Earthquake Hazards Program, 2017, Advanced National Seismic System (ANSS) Comprehensive Catalog of Earthquake Events and Products: Various, https://doi.org/10.5066/F7MS3QZH. Data retrieved September 4, 2022 from https://earthquake.usgs.gov/data/comcat/ Benhenni, K., Hedli-Griche, S., and Rachdi, M. (2010), `Estimation of the regression operator from functional fixed-design with correlated errors', J. Multivar. Anal. 101, 476–490. Benelmadani, D., Benhenni, K., and Louhichi, S. (2020), `Trapezoidal rule and sampling designs for the nonparametric estimation of the regression function in models with correlated errors', Statistics 54, 59–96. Borisov, I.S., Linke, Yu.Yu., and Ruzankin P.S. 
(2021), `Universal weighted kernel-type estimators for some class of regression models', Metrika 84, 141–166. Brown, L.D. and Levine, M. (2007), `Variance estimation in nonparametric regression via the difference sequence method', Ann. Statist. 35, 2219–2232. Chan, N. and Wang, Q. (2014), `Uniform convergence for Nadaraya-Watson estimators with nonstationary data', Econometric Theory 30, 1110–1133. Chu, C. K. and Deng, W.-S. (2003), `An interpolation method for adapting to sparse design in multivariate nonparametric regression', J. Statist. Plann. Inference 116, 91–111. Einmahl, U. and Mason, D.M. (2005), `Uniform in bandwidth consistency of kernel-type function estimators', Ann. Statist. 33, 1380–1403. Fan, J. and Gijbels, I. (1996), Local Polynomial Modelling and its Applications, London: Chapman and Hall. Fan, J. and Yao, Q. (2003), Nonlinear time series nonparametric and parametric methods, Springer. Gao, J., Kanaya, S., Li, D., and Tjostheim, D. (2015), `Uniform consistency for nonparametric estimators in null recurrent time series', Econometric Theory 31, 911–952. Gasser, T. and Engel, J. (1990), `The choice of weghts in kernel regression estimation', Biometrica 77, 277-381. Georgiev, A. A. (1988), `Consistent nonparametric multiple regression: The fixed design case', J. Multivariate Anal. 25, 100–110. Georgiev, A. A. (1990), `Nonparametric multiple function fitting', Stat. Probab. Lett. 10, 203–211. Georgiev, A. A. (1989), `Asymptotic properties of the multivariate Nadaraya-Watson regression function estimate: The fixed design case', Stat. Probab. Lett. 7, 35–40. Gu, W., Roussas, G. G., and Tran, L. T. (2007), `On the convergence rate of fixed design regression estimators for negatively associated random variables', Stat. Probab. Lett. 77, 1214–1224. Györfi, L., Kohler, M., Krzyzak, A., and Walk, H. (2002), A Distribution-Free Theory of Nonparametric Regression, New York: Springer. Hall, P. and Heyde, C. C., (1980), Martingale limit theory and its application. Academic Press. Hall, P., Müller, H.-G., and Wang, J.-L. (2006), `Properties of principal component methods for functional and longitudinal data analysis', Ann. Statist. 34, 1493–1517. Hansen, B.E. (2008), `Uniform convergence rates for kernel estimation with dependent data', Econometric Theory 24, 726–748. Härdle, W. (1990), Applied Nonparametric Regression, New York: Cambridge University Press. He, Q. (2019), `Consistency of the Priestley–Chao estimator in nonparametric regression model with widely orthant dependent errors', J. Inequal. Appl. 64, 2–13. Hsing, T. and Eubank, R. (2015), Theoretical foundations of functional data analysis, with an introduction to linear operators, Wiley. Honda, T. (2010), `Nonparametric regression for dependent data in the errors-in-variables problem`, Global COE Hi-Stat Discussion Paper Series, Institute of Economic Research, Hitotsubashi University. Hong, S. Y. and Linton, O. B. (2016), `Asymptotic properties of a Nadaraya-Watson type estimator for regression functions of infinite order', SSRN Electronic Journal. Jennen-Steinmetz, C. and Gasser, T. (1989), `A unifying approach for nonparametric regression estimation', J. Americ. Stat. Assoc. 83, 1084–1089. Jiang, J. and Mack, Y.P. (2001), `Robust local polynomial regression for dependent data', Statistica Sinica 11, 705–722. Jones, M.C., Davies, S.J., and Park, B.U. (1994), `Versions of kernel-type regression estimators', J. Americ. Stat. Assoc. 89, 825–832. Karlsen, H.A., Myklebust, T., and Tjostheim, D. 
(2007), `Nonparametric estimation in a nonlinear cointegration type model', Ann. Statist. 35, 252–299. Kulik, R. and Lorek, P. (2011), `Some results on random design regression with long memory errors and predictors', J. Statist. Plann. Infer. 141, 508–523. Kulik, R. and Wichelhaus C. (2011), `Nonparametric conditional variance and error density estimation in regrssion models with dependent errors and predictors', Electr. J. Statist. 5, 856–898. Laib, N. and Louani, D. (2010), `Nonparametric kernel regression estimation for stationary ergodic data: Asymptotic properties', J. Multivar. Anal. 101, 2266–2281. Liang, H.-Y. and Jing, B.-Y. (2005), `Asymptotic properties for estimates of nonparametric regression models based on negatively associated sequences', J. Multivariate Anal. 95, 227–245. Li, Y. and Hsing, T. (2010), `Uniform convergence rates for nonparametric regression and principal component analysis in functional/longitudinal data', Ann. Statist. 38, 3321–3351. Li, X., Yang, W., and Hu, S. (2016), `Uniform convergence of estimator for nonparametric regression with dependent data', J. Inequal. Appl., 142. Lin, Z. and Wang, J.-L. (2022), `Mean and covariance estimation for functional snippets', J. Amer. Statist. Assoc. 117, 348–360. Linton, O. and Wang, Q. (2016), `Nonparametric transformation regression with nonstationary data', Econometric Theory 32, 1–29. Linke, Yu.Yu. and Borisov, I.S. (2017), `Constructing initial estimators in one-step estimation procedures of nonlinear regression', Stat. Probab. Lett. 120, 87–94. Linke, Yu.Yu. and Borisov, I.S. (2018), `Constructing explicit estimators in nonlinear regression problems', Theory Probab. Appl. 63, 22–44. Linke, Yu.Yu. (2019), `Asymptotic properties of one-step M-estimators', Commun. Stat. Theory Methods 48, 4096–4118. Linke, Yu.Yu. (2023), `Towards insensitivity of Nadaraya–Watson estimators to design correlation', Theory Probab. Appl. 68 (to appear). Linke, Yu.Yu. and Borisov, I.S. (2022), `Insensitivity of Nadaraya–Watson estimators to design correlation', Commun. Stat. Theory Methods 51, 6909–6918. Linton, O. B. and Jacho-Chavez, D. T. (2010), `On internally corrected and symmetrized kernel estimators for nonparametric regression', TEST 19, 166–186. Loader, C. (1999), Local regression and likelihood, Springer. Mack, Y.P. and Müller, H.-G. (1988), `Convolution type estimators for nonparametric regression', Stat. Prob. Lett. 7, 229–239. Masry, E. (2005), `Nonparametric regression estimation for dependent functional data', Stoch. Proc. Their Appl. 115, 155–177. Müller, H.-G. (1988), Nonparametric Regression Analysis of Longitudinal Data, New York: Springer. Müller, H. G. and Prewitt, K. A. (1993), `Multiparameter bandwidth processes and adaptive surface smoothing', J. Multivariate Anal. 47, 1–21. Priestley, M.B. and Chao, M.T. (1972), `Non-Parametric Function Fitting', J. Royal Statist. Soc., Series B, 34, 385–392. Rio, E. (2009), `Moment Inequalities for Sums of Dependent Random Variables under Projective Conditions', J. Theor. Probab. 22: 146–163. Roussas, G.G. (1990), `Nonparametric regression estimation under mixing conditions', Stach. Proc. Appl., 36, 107-116. Roussas, G.G. (1991), `Kernel estimates under association: Strong uniform consistency', Stat. Probab. Lett. 12, 393-403. Roussas, G. G., Tran, L. T., and Ioannides, D. A. (1992), `Fixed design regression for time series: asymptotic normality', J. Multivariate Anal. 40, 262–291. Shen, J. and Xie, Y. 
(2013), `Strong consistency of the internal estimator of nonparametric regression with dependent data', Stat. Probab. Lett. 83, 1915–1925. Song, Q., Liu, R., Shao, Q., and Yang, L. (2014), `A simultaneous confidence band for dense longitudinal regression', Commun. Stat. Theory Methods 43, 5195–5210. Tang, X., Xi, M., Wu, Y., and Wang, X. (2018), `Asymptotic normality of a wavelet estimator for asymptotically negatively associated errors', Stat. Probab. Lett. 140, 191–201. Wand, M.P. and Jones, M.C. (1995), Kernel Smoothing, London: Chapman and Hall. Wang, J.-L., Chiou, J.-M., and Müller, H.-G. (2016), `Review of functional data analysis', Annu. Rev. Statist. 3, 257–295. Wang, Q. and Chan, N. (2014), `Uniform convergence rates for a class of martingales with application in non-linear cointegrating regression', Bernoulli 20, 207–230. Wang, Q.Y. and Phillips, P.C.B. (2009a), `Asymptotic theory for local time density estimation and nonparametric cointegrating regression', Econometric Theory 25, 710–738. Wang, Q. and Phillips, P.C.B. (2009b), `Structural nonparametric cointegrating regression', Econometrica 77, 1901–1948. Wu, J.S. and Chu, C.K. (1994), `Nonparametric estimation of a regression function with dependent observations', Stoch. Proc. Their Appl., 50, 149-160. Wu, Y., Wang, X., and Balakrishnan, N. (2020), `On the consistency of the P–C estimator in a nonparametric regression model', Stat. Papers 61, 899–915. Yang, X. and Yang, S. (2016), `Strong consistency of non parametric kernel regression estimator for strong mixing samples', Commun. Stat. Theory Methods 46, 10537–10548. Yao, F. (2007), `Asymptotic distributions of nonparametric regression estimators for longitudinal or functional data', J. Multivariate Anal. 98, 40–56. Yao, F., Müller, H.-G., and Wang, J.-L. (2005), `Functional data analysis for sparse longitudinal data', J. Amer. Statist. Assoc. 100, 577–590. Young, D.S. (2017), Handbook of regression methods, Chapman and Hall. Zhang, X. and Wang, J.-L. (2016), `From sparse to dense functional data and beyond', Ann. Statist. 44, 2281–2321. Zhang, J.-T. and Chen, J. (2007), `Statistical inferences for functional data', Ann. Statist. 35, 1052–1079. Zhang, X. and Wang, J.-L. (2018), `Optimal weighting schemes for longitudinal and functional data', Stat. Prob. Lett. 138, 165–170. Zhang, S., Miao, Y., Xu, X., and Gao, Q. (2018), `Limit behaviors of the estimator of nonparametric regression model based on martingale difference errors', J. Korean Stat. Soc. 47, 537–547. Zhang, S., Hou, T., and Qu, C. (2019), `Complete consistency for the estimator of nonparametric regression model based on martingale difference errors', Commun. Stat. Theory Methods 50, 358–370. Zhou, X. and Zhu, F. (2020), `Asymptotics for L1-wavelet method for nonparametric regression', J. Inequal. Appl. 216.
http://arxiv.org/abs/2307.02149v2
20230705094500
Use of Non-Maximal entangled state for free space BBM92 quantum key distribution protocol
[ "Ayan Biswas", "Sarika Mishra", "Satyajeet Patil", "Anindya Banerji", "Shashi Prabhakar", "Ravindra P. Singh" ]
quant-ph
[ "quant-ph", "physics.optics" ]
AIP/123-QED Quantum Science and Technology Laboratory, Physical Research Laboratory, Ahmedabad, 380009, Gujarat, India. York Centre for Quantum Technologies, School of Physics Engineering and Technology, University of York, Heslington, YO10 5DD, York, UK Quantum Science and Technology Laboratory, Physical Research Laboratory, Ahmedabad, 380009, Gujarat, India. Indian Institute of Technology, Gandhinagar, 382355, Gujarat, India Quantum Science and Technology Laboratory, Physical Research Laboratory, Ahmedabad, 380009, Gujarat, India. Indian Institute of Technology, Gandhinagar, 382355, Gujarat, India Centre for Quantum Technologies, National University of Singapore, 117543, Singapore Quantum Science and Technology Laboratory, Physical Research Laboratory, Ahmedabad, 380009, Gujarat, India. Quantum Science and Technology Laboratory, Physical Research Laboratory, Ahmedabad, 380009, Gujarat, India. Satellite-based quantum communication for secure key distribution is becoming a more demanding field of research due to its unbreakable security. Prepare and measure protocols such as BB84 consider the satellite as a trusted device, fraught with danger looking at the current trend for satellite-based optical communication. Therefore, entanglement-based protocols must be preferred since, along with overcoming the distance limitation, one can consider the satellite as an untrusted device too. E91 protocol is a good candidate for satellite-based quantum communication; but the key rate is low as most of the measured qubits are utilized to verify a Bell-CHSH inequality to ensure security against Eve. An entanglement-based protocol requires a maximally entangled state for more secure key distribution. The current work discusses the effect of non-maximality on secure key distribution. It establishes a lower bound on the non-maximality condition below which no secure key can be extracted. BBM92 protocol will be more beneficial for key distribution as we found a linear connection between the extent of violation for Bell-CHSH inequality and the quantum bit error rate for a given setup. Use of Non-Maximal entangled state for free space BBM92 quantum key distribution protocol Ravindra P. Singh August 1, 2023 ========================================================================================= § INTRODUCTION In classical communication, the security of encryption keys for parties communicating with each other depends upon the hardness of breaking the encryption algorithm <cit.>. This security is insufficient to protect encrypted messages sent through a public channel once a quantum computer intercepts them. Therefore, with advancements in the development of practical quantum computers, the demand for information-theoretic secure communication based on the principles of physics has increased. It has already been demonstrated that using Shor's quantum algorithm, one can break most of the encryption techniques applied in classical key distribution between communicating parties, say Alice & Bob <cit.>. Quantum key distribution (QKD) uses the principles of quantum mechanics to securely distribute keys between the two communicating parties <cit.>. Moreover, using QKD also ensures that Eavesdropper's presence can be detected in real-time by observing the disturbance in the channel, unlike conventional classical key distribution <cit.>. Based on the usage and type of encryption, several QKD protocols are available, e.g., BB84 <cit.>, SARG04 <cit.>, COW <cit.>, E91 <cit.>, etc. 
The BB84 protocol is widely applied due to its ease of implementation and the existence of composable security proofs for practical deployments <cit.>. However, it is prone to side-channel attacks <cit.>, and distance is limited as the disturbance in the channel increases with the propagation. Entanglement-based QKD (EBQKD) protocol can tackle the challenge of distance limitation for secure key transmission as posed by BB84 <cit.>. The security of EBQKD comes from the principles of no-cloning and monogamy of quantum entanglement <cit.>. The latter states that if two parties (Alice & Bob) share a maximally entangled state, the third party cannot have any correlation with the communicating parties <cit.>. EBQKD is also ideal for satellite-based quantum communication by sharing entangled photons between the two ground stations to communicate securely <cit.>. Security in EBQKD is ensured by checking violation of Bell's inequality, which makes the protocol robust against all strategic attacks. Even without checking Bell-CHSH inequality, one can still distribute secret keys if they share a maximally entangled state, like in BBM92 protocol <cit.>. For carrying out long-distance QKD, e.g., satellite-based quantum communication, EBQKD has an advantage as it connects two distantly situated ground stations with a single satellite <cit.>. Considering EBQKD for practical purposes, the BBM92 protocol is less resource intensive than the E91 protocol ensuring the same security using a maximally entangled state. The key rate is higher in the BBM92 protocol as a violation of the Bell-CHSH inequality is not always required in building real-time secret keys. In this article, we find the relation between the quantum bit-error rate (QBER) and Bell-CHSH parameter S, including experimental imperfections in field-based QKD experiments. This connection between QBER and S can indicate the purity of the source related to the QBER generated in real-time. A similar process was done only after sacrificing many key bits for checking S separately, then going for secret key extraction by looking to QBER <cit.>. Therefore the present correlation between QBER and S comes in handy in providing a longer secret key for the same raw key. Additionally, we also determined the mutual information (MI) shared between Alice, Bob and Eve, which further introduces a limit on the secret key rate per bit. If one uses a non-maximal entangled state, then the Bell-CHSH inequality can be violated with low detector efficiency, closing all loopholes, and moving closer to the realization of device-independent QKD (DIQKD) systems <cit.>. This work aims to experimentally verify the variation of S with QBER and set a minimum bound on S for the safe operation of the BBM92 protocol. This study contains section <ref> that describes the theoretical background of the present work. The experimental method to generate non-maximally entangled state is elaborated in section <ref>, and the results are presented in section <ref>. Section <ref> concludes our work with suggestions to implement it in real scenarios, and highlight the applicability of the BBM92 protocol with source imperfections for secure long-distance communication. § THEORETICAL BACKGROUND In standard EBQKD, as shown in Fig. <ref>, a common sender, Charlie, sends a pair of polarization-entangled photons to Alice and Bob through a quantum channel (fiber or free space). Alice and Bob independently make their measurements on chosen random bases. 
The measurement bases are different for the E91 and BBM92 protocols, also shown in Fig. <ref>. After the measurement, Alice and Bob declare their basis choice through the public channel and build the secure key for encryption. E91 protocol, in principle, is secure against any eavesdropping strategy <cit.>. Alice and Bob will only form the key when they choose the same basis for their measurements. The rest of the measurement results will go for calculating the Bell-CHSH parameter S for the protocol's security. Ideally, if a maximally entangled state is used, any value of the Bell-CHSH parameter below 2√(2) will render this protocol insecure. However, the quality of the quantum channel might adversely affect the value of S, and implementing this protocol can be challenging as the number of photon pairs may degrade. This poor correlation results in information leakage to Eve, which increases her chances of gaining access to the key. Also, the drawback of this protocol is that it has a low key rate as most of the generated raw bits from the measurements are used for security checks through violation of Bell's inequality. In BBM92 protocol, a secret key can be extracted without Bell state analysis if one has a maximally entangled photon pair source <cit.>. The protocol is similar to E91, and the difference lies in the measurement bases, which are {H/V, D/A} for both Alice and Bob. The key is generated when Alice and Bob measure in compatible bases. The primary advantage of BBM92 over E91 is that the key rate becomes considerably higher as a majority of the detection events are used to build the key, and very few are utilized to check for QBER. The QBER threshold for secure key distribution is the same as that of BB84 protocol <cit.>. So, if one has a maximally entangled state, one can perform EBQKD without Bell-CHSH measurement <cit.>. In the BB84 protocol, a QBER (δ) of 11% can be tolerated against collective attacks as the key rate r goes to zero above that according to the relation r=(1-2H(δ)) <cit.>. The same error rate is also true for the BBM92 protocol <cit.>. Therefore, by looking at the correlation between QBER and S, one can interpret the extent of non-maximal entangled photons that can be used for EBQKD. For a perfectly secure QKD protocol, one needs a maximally entangled source to attain the maximum value of S. The increased non-maximality of the entangled photon source may leak information to Eve <cit.>. This indicates that by entanglement monogamy, Eve can have some correlation either with Alice or Bob <cit.>. This can also be checked directly with the formula given by <cit.> I(A:E) = H(1+√(S^2/4-1)/2), where I(A:E) is the mutual information (MI) that can be shared between Alice and Eve, H is the binary entropy, and S is the Bell-CHSH parameter. The maximum amount of information shared by Alice and Bob between each other for the BBM92 protocol is I(A:B). This can be calculated using the relation I(A:B)=H(A)+∑_a ϵ A p(a) ∑_b ϵ B p(b| a) Log p(b | a), where p(a) is the probability of getting a polarization (say |H⟩) at Alice or Bob out of four polarization states. p(b| a) is the probability of getting a polarization (|V⟩) at Bob, given polarization (|H⟩) is measured by Alice or vise versa. In experiments, this quantity can also be calculated by measuring bit-error (e_b) and phase-error (e_p) in the system. For EBQKD, the mutual information between two parties is given by <cit.> I(A:B)=1-H(e_b)-H(e_p). 
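Both mutual-information expressions are easy to evaluate numerically. The short Python sketch below is an illustrative calculation (not the analysis code used for the measurements), with the simplifying choice e_b = e_p for the example values; it computes I(A:E) from the Bell-CHSH parameter S and I(A:B) from the bit- and phase-error rates.

```python
import numpy as np

def binary_entropy(p):
    # H(p) = -p log2(p) - (1 - p) log2(1 - p), with H(0) = H(1) = 0.
    p = float(p)
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def mutual_info_eve(S):
    # I(A:E) = H((1 + sqrt(S^2/4 - 1)) / 2); the square root is real only for S >= 2.
    root = np.sqrt(max(S**2 / 4.0 - 1.0, 0.0))
    return binary_entropy((1.0 + root) / 2.0)

def mutual_info_ab(e_bit, e_phase):
    # I(A:B) = 1 - H(e_b) - H(e_p).
    return 1.0 - binary_entropy(e_bit) - binary_entropy(e_phase)

if __name__ == "__main__":
    S, qber = 2.64, 0.02          # roughly the best reported channel: S = 2.64 at 2% QBER
    print("I(A:E) =", round(mutual_info_eve(S), 3))           # ~0.36 bits leaked to Eve
    print("I(A:B) =", round(mutual_info_ab(qber, qber), 3))   # ~0.72 bits shared by Alice and Bob
```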
Experimentally, MI can be calculated from the coincidences detected at both ends normalized by the individual detector counts. The final secure key rate of the protocol can be written as <cit.>, r=I(A:B)-I(A:E). where r is the secret key rate per bit. Secure key extraction is possible when r≥0; this implies I(A:B)>I(A:E). Since both these quantities vary with QBER and S, obtaining a range for both would enable secure key extraction, efficiently. § EXPERIMENTAL METHOD We have used the Hong-Ou-Mandel interferometer (HOM) technique to generate the desired non-maximally entangled photon state <cit.>. Figure <ref> shows the schematics for the experimental setup to generate all four Bell states, and the advantage is that their maximality is controlled by controlling the HOM visibility. A laser of wavelength 405 nm pumps a nonlinear crystal (Type-I BBO) to produce degenerate photon pairs by the nonlinear spontaneous parametric down-conversion (SPDC) process. A prism mirror (PM) separates the pathways of two generated photons. The HOM interference resulting in photon bunching will only occur when indistinguishable photon pairs overlap, indicating a coincidence dip at the detector. There can be multi-photon pairs (less probable) coming out of the SPDC process that can increase QBER; however these are filtered out by HOM interferometer. To obtain the desired entangled state, the polarization of one of the two photons is changed by placing an HWP1 in one of the arms after the prism mirror. The experimentally observed visibility of the HOM dip by controlling the motorized translation stage (MTS) is shown in Fig. <ref>. At the HOM dip region, if one of the incoming arms is changed to orthogonal polarization (HWP1), then we have two distinguishable photons falling at the BS1, resulting to four possibilities, and the output state can be written as, |Ψ⟩_out∝( α_1|H_1V_1⟩ + α_2|H_1V_2⟩ +α_3 |H_2V_1⟩ +α_4 |H_2V_2⟩), where α_i are the complex amplitudes of the corresponding state. After post-selecting the simultaneously detected photon pairs at the output port of the BS1, the above state will become an entangled state. Table <ref> summarizes the settings to obtain the desired non-maximally entangled states |Ψ⟩_int∝(α_2|H_1V_2⟩ +α_3 |H_2V_1⟩). The generation of |Ψ^-⟩ state is shown in Fig. <ref>, and the rest of the Bell states can be obtained by using the appropriate optics as shown in Table <ref>. All the states are then measured by projecting them to different polarization states using a combination of HWP and PBS, which is then detected through single-mode fiber-coupled single photon counting modules (SPCM). These can be thought of as the detection setup for Alice and Bob. The coincidences from both detectors are recorded for various polarization projections (by rotating the HWP4 and HWP5), typically used in the BBM92 protocol. Coincidences in the same basis for the state |Φ^±⟩ will give the key rate estimation for the BBM92 protocol. While for the state |Ψ^±⟩, coincidences on a complimentary basis will form a sifted key (anti-correlated photon polarizations will form key as the state is |HV⟩±|VH⟩). § RESULTS AND DISCUSSION The measurement of QBERs is performed from the coincidence counts by varying the HOM visibility. These QBER results can then be used in the Eq. <ref> to calculate the MI (I(A:B)) between Alice and Bob for corresponding Bell states. 
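For concreteness, the sifted-key error in one compatible basis can be tallied directly from the recorded coincidences. The schematic below uses invented toy counts (it is not the laboratory analysis code); anti-correlated outcomes are taken as the "correct" events for |Ψ^±⟩ measured in H/V, and correlated outcomes as the "correct" events for |Φ^±⟩.

```python
def qber_from_coincidences(counts, anti_correlated):
    # counts: dict mapping (alice_outcome, bob_outcome) -> coincidence counts recorded in one
    # compatible basis, e.g. {('H','H'): ..., ('H','V'): ..., ('V','H'): ..., ('V','V'): ...}.
    # anti_correlated: True if the prepared state gives opposite outcomes in this basis
    # (|Psi+/-> in H/V), False if it gives identical outcomes (|Phi+/-> in H/V).
    same = sum(n for (a, b), n in counts.items() if a == b)
    opposite = sum(n for (a, b), n in counts.items() if a != b)
    errors = same if anti_correlated else opposite
    total = same + opposite
    return errors / total if total else 0.0

if __name__ == "__main__":
    # Toy numbers: about 2% of the coincidences fall in the "wrong" (correlated) combinations.
    toy = {('H', 'H'): 49, ('V', 'V'): 51, ('H', 'V'): 2450, ('V', 'H'): 2450}
    print("QBER for an anti-correlated (Psi) state:",
          qber_from_coincidences(toy, anti_correlated=True))
```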
Coincidence counts for all the specific combinations of polarization are recorded by adjusting HWPs angle (HWP4 and HWP5) to calculate Bell-CHSH parameter (S) and key-rate estimation. We measured the Bell-CHSH parameter for each of the four Bell states with different visibility settings. This visibility in HOM will change the coefficients of the corresponding states generated for EBQKD. We record the coincidences for key rate estimation with the change in the amount of entanglement (i.e., change in α_i,S). This will indicate the variation of S with QBER. This study experimentally proves the relationship between two important parameters in QKD protocols, QBER and S. Our experimental results have shown a linear relationship between QBER and S, with a negative slope, which is valid for individual attacks and is given by <cit.>, S=2 √(2)(1-2 δ). where δ is the disturbance in the signal. The error limit for the QKD protocol can be determined from the value of S, as it indicates the strength of the correlation between Alice and Bob's measurements. If the value of S=2, then δ=14%, interpreting that if the QBER is higher than 14%, Eve could potentially have knowledge of the key. For collective attacks, the error limit is lower, at 11%. For a given channel, this relation is helpful as it directly connects S with QBER. Specific QBER received by Alice or Bob can directly indicate the value of S for that particular system. This can be a double check in the security if one is doing EBQKD without sacrificing extra bits for the Bell test. The figure <ref>(a) shows the variations of S with QBER for state |ϕ^+⟩=C_1|H_1H_2⟩+C_2|V_2V_1⟩. The maximum recorded value of Bell's inequality parameter is 2.64±0.12 for which the QBER is 2%. The graph matches well with the predicted value of the error bound of the BB84 protocol. The minimum value of the Bell parameter to run the protocol safely is 2.1. This indicates that the amount of non-maximality that can be achieved is 2.1 for secure key distribution in the BBM92 protocol. Similarly for another Bell states, the variation of S with QBER for |ϕ^-⟩=C_1|H_1H_2⟩-C_2 |V_2V_1⟩, |Ψ^+⟩=C_1|H_1V_2⟩+C_2|H_2V_1⟩ and |Ψ^-⟩=C_1|H_1V_2⟩-C_2|H_2V_1⟩ are shown in Fig. <ref>(b-d), respectively. Irrespective of any Bell state, the BBM92 protocol results in the same error bound as the BB84 protocol, including implementation discrepancies. This will not affect the variation of S with QBER for a given system in the protocol. The presented experimental results in the Fig. <ref> are in good agreement with the theory. The experiment assumes identical detector efficiency for Alice and Bob, whereas the overall transmission efficiency could vary due to different channel lengths. The Fig. <ref> illustrates the effect of changing QBER on the value of S, which is essential to determine the condition of the source in the transmitting end. The agreement of the relationship for all four types of Bell states confirms the robustness of the results under experimental discrepancies. Also, having entanglement non-necessarily gives a secure key. Eve might get the advantage in gaining the information from the weakness in the entanglement of the source. This will further reduce the bound in error to extract the secret key. For calculating the secure key rate, the difference between the mutual information of Alice-Bob (I(A:B)) and Alice-Eve (I(A:E)) is considered. The key rate can be calculated using Eq. <ref>. The plots for MI between Alice Bob and Alice Eve are shown in Fig. <ref>. 
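Combining the linear S-QBER relation above with the mutual-information expressions given earlier yields a quick numerical picture of where key extraction remains possible. The scan below is a sketch under the simplifying assumptions of individual attacks and e_b = e_p = δ (the measured bit- and phase-error rates are not, in general, equal); it reports S, I(A:B), I(A:E) and r as the disturbance grows.

```python
import numpy as np

def H2(p):
    # Binary entropy in bits, with H2(0) = H2(1) = 0.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def chsh_from_qber(delta):
    # Linear relation for individual attacks: S = 2*sqrt(2)*(1 - 2*delta).
    return 2.0 * np.sqrt(2.0) * (1.0 - 2.0 * delta)

def key_rate(delta):
    S = chsh_from_qber(delta)
    i_ae = H2((1.0 + np.sqrt(max(S**2 / 4.0 - 1.0, 0.0))) / 2.0)   # information leaked to Eve
    i_ab = 1.0 - 2.0 * H2(delta)                                    # assuming e_b = e_p = delta
    return S, i_ab, i_ae, i_ab - i_ae

if __name__ == "__main__":
    print(" delta     S    I(A:B)  I(A:E)     r")
    for delta in np.arange(0.0, 0.101, 0.01):
        S, i_ab, i_ae, r = key_rate(delta)
        print(f" {delta:5.2f}  {S:5.3f}  {i_ab:6.3f}  {i_ae:6.3f}  {r:+6.3f}")
    # Under these assumptions r changes sign between delta of about 0.04 and 0.05, well below
    # the 11% threshold that applies when only the QBER is monitored.
```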
The plots show that non-zero secure key rates are only possible for error bounds up to ∼4%, obtained for I(A:B)>I(A:E). Above this, even though one has entanglement but still the secure key rate extraction won't be possible. The attack strategy by Eve is taken to be general as she uses the weakness in entanglement to gain information about the key. In Eq. <ref> for key rate r, it is assumed that Eve can perform any kind of attack, and have advantage as Alice and Bob are not using non-maximal entanglement. The information leakage is because the states in the QKD are not perfectly entangled. Figure <ref> shows the secret key rate for the four Bell states in experimental conditions. By understanding this relationship, researchers can generate longer secret keys from satellite-based systems with shorter pass times without sacrificing too much of their raw key material to perform the necessary tests. This is important because the quality of the entangled photon source can degrade over time, and it may not be possible to maintain a maximally entangled state. One can use a non-maximally entangled state for QKD, provided they have already calibrated the source, and the leakage due to error is also taken into account. The discrepancies in the source will decide the intrinsic error, that needs to be added on the top of the QBER while distilling the keys. To make the protocol secure against Eve, one has to consider this error, apart from the QBER. This will make the key generation process less resilient against errors in channel than the expected one (because one has to consider the QBER due to non-maximal entangled source). Due to non-maximality of the source, Eve can extract some amount of information, and this leaked information can be removed while distilling the keys. This has to be done even if one is observing a Bell-CHSH violation. § CONCLUSION This study highlights connection between the violation of the Bell-CHSH inequality and QBER in QKD protocols. The relationship between S and QBER is independent of the Bell state used in the protocol, and can be used to extract secret keys safely for the BBM92 protocol, even if the source is not maximally entangled. The knowledge of this relationship enables the estimation of S from the QBER, which allows for error correction and privacy amplification accordingly. This connection directly indicates whether or not the quantum channel is being tampered. Importantly, the value of S can be calibrated with the corresponding QBER value before the QKD protocol is initiated. This calibration ensures that the error limit is set appropriately, and that the secret key rate generated by Alice and Bob is maximized. This calibration is particularly important in satellite payloads, as it ensures that the QKD protocol is robust and reliable even in the harsh space conditions. In the present work, we also studied additional bound on QBER arising from the mutual information shared between Alice and Eve. Using non-maximal entangled states in QKD can be more beneficial for long-distance communication as they are more robust against source and channel disturbances. Also, maintaining the entangled photon source becomes easier, as maximality is not always required. The present study can help to do long-term QKD without routine system characterizations. The current work finds application in satellite-based QKD or free space QKD over a long time without further characterizations at each run. 
§ ACKNOWLEDGMENTS The authors would like to acknowledge the funding support from the Department of Science and Technology (DST), India, through the QuEST program. § DISCLOSURES The authors declare no conflicts of interest related to this article.
http://arxiv.org/abs/2307.00612v1
20230702164143
Reynolds number scaling and inner-outer overlap of stream-wise Reynolds stress in wall turbulence
[ "Peter A. Monkewitz" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
[ McCulloch, Raymond August 1, 2023 ====================== Submitted to JFM Rapids on April 22, 2023The scaling of Reynolds stresses in turbulent wall-bounded flows is the subject of a long running debate. In the near-wall “inner” region, the large Reynolds number behavior of the peak stream-wise normal stress ⟨ uu⟩^+ at y^+≊ 15 has divided the turbulence community. A large group, inspired by the “attached eddy model”, advocates its unlimited growth with ln <cit.>, and a recent much smaller group has argued, on the basis of bounded dissipation, that near the wall ⟨ uu⟩^+ remains finite for →∞ and decreases from there as ^-1/4 <cit.>. Over the limited Reynolds number range, where good quality data are available, both asymptotic expansions provide reasonably close fits for the near-wall Reynolds stresses, in particular for the near-wall peak of ⟨ uu⟩^+. This scaling issue is resolved in favor of ⟨ uu⟩^+ remaining finite everywhere for all Reynolds numbers by analyzing the overlap, which links the inner region, where the variation of ⟨ uu⟩^+ with is significant, to the outer region, where the variation is weak, of order 𝒪(^-1) or less. § INTRODUCTION AND OUTLINE OF THE PROBLEM Dividing wall-normal profiles of turbulence statistics in wall bounded flows into inner and outer parts connected through an overlap, is intrinsically a concept of matched asymptotic expansions <cit.>. Its application to mean flow profiles can be traced back to the celebrated work of <cit.> and <cit.>, who introduced the logarithmic overlap law for the mean velocity profile. Here and in the following, the classical non-dimensionalization is adopted with the “inner” or viscous length scale ℓ≡ (ν/u_τ), and u_τ≡ (τ_w/ρ)^1/2, ρ and ν the friction velocity, density and dynamic viscosity, respectively, with hats identifying dimensional quantities. The resulting non-dimensional inner and outer wall-normal coordinates are y^+=y/ℓ and Y=y^+/, respectively, with ≡L/ℓ the friction Reynolds number and L the outer length scale, i.e. the channel half height, pipe radius or boundary layer thickness. Relative to the mean velocity, the situation for the Reynolds stresses is reversed, as the inner parts vary significantly with , while the outer parts quickly approach asymptotic profiles, with small finite Reynolds number corrections of order 𝒪(^-1) or less. In the present paper, the discussion focusses on the stream-wise Reynolds stress ⟨ uu⟩^+, as it is the component with the most data available. For this stream-wise component, the scaling of its inner part, and in particular of its inner peak height, is a subject of controversy. The two opposing views are summarized as follows: * The Reynolds stresses scale according to the “attached eddy” model, in the following abbreviated “AE” model. Its main characteristic is the unbounded increase, proportional to ln, of Reynolds stresses in the inner near-wall region. This model has been first proposed by <cit.>, has recently been reviewed by <cit.> and has been extensively covered in the literature. * The Reynolds stresses remain finite in the limit of →∞ everywhere in the flow. For the zero-pressure-gradient turbulent boundary layer, in the following abbreviated ZPG TBL, this view has been advanced by <cit.>. More recently, it has been further developed by <cit.>, who have argued, on the basis of the “law of bounded dissipation”, that in the inner, near-wall region, the finite Reynolds number corrections are of order 𝒪(^-1/4). 
This new scaling, in the following abbreviated as “BD” scaling for “bounded dissipation”, has been taken up by <cit.>, who developed a composite asymptotic expansion for ⟨ uu⟩^+, that compares well with a number of DNS and experimental data, but used an ad-hoc fit for the overlap and outer parts. The above alternative scalings correspond to the inner and outer asymptotic sequences BD scaling : {Φ^(in)_BD} = {1,^-1/4, ^-1, ...} ; {Φ^(out)_BD} = {1,^-1, ...} AE scaling : {Φ^(in)_AE} = {ln & 1, ^-1, ...} ; {Φ^(out)_AE} = {1, ^-1, ...} where for both BD and AE scaling, the dependence of outer Reynolds stresses on is weak and finite Reynolds number corrections are thought to be of order 𝒪(^-1) at most. It is also noted, that the terms of order 𝒪(ln) and of 𝒪(1) in equation (<ref>) must, for the matching to the outer expansion, be treated together as a “block” <cit.>, in the same way as for the matching of inner and outer mean velocity across the log law (see for instance <cit.>). Discriminating between the two inner scalings on the basis of the -dependence of the inner peak height ⟨ uu⟩^+_IP at y^+_IP≊ 15, has so far been inconclusive because of the limited Reynolds number range of reliable data. Both fits, [a_1 ln + a_2] and [b_1 + b_2 ^-1/4] are defendable, as seen in <cit.> and <cit.>, for instance. Determining the scaling of coefficients in the Taylor expansion of different stresses about the wall, as in <cit.>, is equally inconclusive for the same reasons. These authors have also challenged the use by of the Taylor expansion of ⟨ uu⟩^+ about the wall to infer the scaling of the inner peak. The argument is valid insofar as using the Taylor expansion of ⟨ uu⟩^+ across the region with the sharpest variation of the different terms in the transport equation for ⟨ uu⟩^+ to estimate the magnitude of its inner peak is questionable. However, this Taylor series argument is not relevant for the scaling of the inner ⟨ uu⟩^+ or any other quantity, as the scaling can only change across an overlap. Clearly, no such overlap exists between the wall and the inner peak of ⟨ uu⟩^+ at y^+_IP≊ 15 ! The above synopsis of this scaling problem suggests, that the overlap between the inner and outer asymptotic expansions of ⟨ uu⟩^+ has received insufficient attention. The overlap, also called common part, is a key element of MAE, which provides the smooth transition between inner and outer expansions based on different asymptotic sequences, in particular the sequences (<ref>) and (<ref>). This is the subject of the next section <ref>, where the overlap is analyzed by determining, from channel DNS and experiments, the indicator function for the new BD scaling and comparing it to the standard log-indicator function for AE scaling. In section <ref>, the two competing indicator functions are evaluated from experimental and DNS data for pipe flow, with results closely matching those for the channel. Section <ref> is then devoted to the ZPG TBL and reveals that the indicator functions are significantly different from channel and pipe, indicating a much faster drop of ⟨ uu⟩^+ towards zero in the outer part of the boundary layer, presumably because of entrained free stream fluid. Before this drop, most ZPG TBL data appear slightly better fitted by the BD overlap, but the question remains open. The conclusions in section <ref> are unequivocal in support of BD scaling for channel and pipe flow, while better data will be required to definitively settle the issue for the ZPG TBL. 
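To see concretely why the inner-peak data cannot discriminate between the two fits mentioned above, one can generate synthetic peak values from one of the models over the experimentally accessible range of Reynolds numbers and fit both functional forms. The sketch below is purely illustrative: the coefficients, the noise level and the "data" are invented for the demonstration and are not published measurements.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "inner peak" values generated from the bounded-dissipation form over roughly one
# decade of friction Reynolds number, with scatter of the order seen in experiments.
Re_tau = np.geomspace(1.0e3, 2.0e4, 12)
uu_peak = 11.3 - 17.0 * Re_tau**-0.25 + rng.normal(0.0, 0.1, Re_tau.size)

# Least-squares fit of the attached-eddy form a1*ln(Re_tau) + a2 ...
A_log = np.column_stack([np.log(Re_tau), np.ones_like(Re_tau)])
a1, a2 = np.linalg.lstsq(A_log, uu_peak, rcond=None)[0]

# ... and of the bounded-dissipation form b1 + b2*Re_tau**(-1/4).
A_bd = np.column_stack([np.ones_like(Re_tau), Re_tau**-0.25])
b1, b2 = np.linalg.lstsq(A_bd, uu_peak, rcond=None)[0]

rms_log = np.sqrt(np.mean((A_log @ np.array([a1, a2]) - uu_peak) ** 2))
rms_bd = np.sqrt(np.mean((A_bd @ np.array([b1, b2]) - uu_peak) ** 2))
print(f"ln(Re_tau) fit : a1 = {a1:.3f}, a2 = {a2:.3f}, rms = {rms_log:.3f}")
print(f"Re_tau^-1/4 fit: b1 = {b1:.3f}, b2 = {b2:.3f}, rms = {rms_bd:.3f}")
# Both rms residuals come out of the same order as the imposed scatter, so over this range the
# two functional forms are practically indistinguishable; they only separate when extrapolated
# to much larger Re_tau.
```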
§ THE INNER-OUTER OVERLAP OF ⟨ UU⟩^+ FOR CHANNEL FLOW The inner-outer overlap plays a key role in MAE, as it smoothly connects inner and outer expansions. The choice tool to detect overlaps are indicator functions, well established for logarithmic overlaps. Logarithmic or AE overlapAssuming that the inner part of ⟨ uu⟩^+ scales according to the AE model (equation <ref>), the functional form of the overlap is just the reverse of the mean flow overlap, where the ln term is part of the outer expansion. Hence, the AE overlap of ⟨ uu⟩^+ must contain a term proportional to ln( /y^+)≡ -lnY in order to avoid a ln term in the outer expansion. Indeed, in <cit.>, authored by the principal advocates of the attached eddy model, the overlap law for ⟨ uu⟩^+ is given as -C ln Y + D and the name of “Townsend-Perry” constant with a value of 1.26 was proposed for C. However, this author is not aware of any published strong evidence for such a logarithmic law, such as a region of constant log-indicator function Ξ_AE = Y(⟨ uu⟩^+/ Y) ≡ y^+(⟨ uu⟩^+/ y^+) . There may have been the expectation that a region of constant Ξ_AE would eventually develop at higher Reynolds numbers, but this is no longer tenable in view of the successful identification of the BD overlap. BD overlapA composite expansion of ⟨ uu⟩^+, based on the asymptotic sequence (<ref>) has been constructed in <cit.>, in the following referred to as “M22”, but the overlap and outer profiles were a bit of a “bricolage”, which may be loosely translated from French as “slapped together”. The construction consisted essentially of connecting the inner expansion at a fixed y^+_× = 470 (equ. 2.7 of M22) to a fit of ⟨ uu⟩^+ on the centerline. The resulting logarithmic slope of the overlap (equ. 2.9 and section 3 of M22) was Reynolds number dependent, unlike the universal slope proposed by <cit.> and others, but the choice of a logarithm in M22 may have been another manifestation of the long shadow of von Kármán. As it turns out, the overlap construction in M22 is just an awkward approximation of the new common part for BD scaling ⟨ uu⟩^+_BDcp(Y) = 10.74 - 10.2 Y^1/4≡ 10.74 - 10.2 (y^+)^1/4 ^-1/4 which is the simplest function which smoothly connects inner and outer expansions in terms of the BD asymptotic sequences (<ref>), i.e. “converts” inner terms of 𝒪(^-1/4) into outer terms of 𝒪(1). Here, the coefficients in (<ref>) have been determined from channel DNS, but will not be significantly different for the pipe. The complete leading order of the outer expansion is easily obtained by adding a “wake” to the overlap (<ref>) ⟨ uu⟩^+_BDout(Y) = ⟨ uu⟩^+_BDcp + 0.26 exp(-10.2 (1-Y)/4 × 0.26) Equations (<ref>) and (<ref>) are seen in figure <ref>(a) to provide an excellent description of the overlap profile and of the surprisingly small wake of ⟨ uu⟩^+ in channel DNS. The indicator function to detect the new overlap law (<ref>) is Ξ_BD = 4 Y^3/4 (d⟨ uu⟩^+ / dY ) ≡ 4 ^1/4 (y^+)^3/4 (d⟨ uu⟩^+ / dy^+ ) and is seen in figure <ref>b to develop a constant region for 's beyond 10^3, with a value of Ξ_BDcp = -10.2. Furthermore, the region of constant Ξ_BD is seen to expand with increasing towards smaller values of the outer variable Y, as expected from an overlap region, which “starts” at some y^+_startOL and “ends” at a Y_endOL, where it is understood, that the two “boundaries” depend on how much deviation of the full profile from the overlap is allowed. 
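In practice, both indicator functions are evaluated directly from discrete profiles. The short Python sketch below is a minimal illustration (the arrays y_plus and uu_plus are assumed to hold a ⟨ uu⟩^+ profile at a given friction Reynolds number Re_tau); it forms Ξ_AE and Ξ_BD by centred differences, so that a plateau in either quantity identifies the corresponding overlap law.

```python
import numpy as np

def indicator_functions(y_plus, uu_plus, Re_tau):
    """Return Y, Xi_AE and Xi_BD from a discrete <uu>+ profile.

    y_plus, uu_plus : 1-D arrays of wall distance and stream-wise stress
    Re_tau          : friction Reynolds number of the profile
    """
    Y = y_plus / Re_tau                                   # outer coordinate
    duu_dyp = np.gradient(uu_plus, y_plus)                # d<uu>+/dy+, centred differences
    xi_ae = y_plus * duu_dyp                              # Xi_AE = y+ d<uu>+/dy+
    xi_bd = 4.0 * Re_tau**0.25 * y_plus**0.75 * duu_dyp   # Xi_BD = 4 Y^(3/4) d<uu>+/dY
    return Y, xi_ae, xi_bd

# A plateau of Xi_BD near -10.2 identifies the BD overlap <uu>+ = 10.74 - 10.2 Y^(1/4),
# while a plateau of Xi_AE would identify a logarithmic (AE) overlap.
```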
This is highlighted in figure <ref>(b) by the four vertical arrows at Y = 0.35 + 10 ^-1/2, marking the approximate overlap centers for the four 's. In figure <ref>(c) and <ref>(d), the same data are tested for the presence of a log-law, as proposed in <cit.>, for instance. It is evident that the log-law indicator function (<ref>) in figure <ref>(d) shows no sign of a plateau for the channel DNS analyzed. Hence the logarithmic fit of <cit.> turns out to be an arbitrary tangent in figure <ref>(c). To reinforce the conclusion about the nature of the overlap deduced from channel DNS, the two indicator functions Ξ_BD and Ξ_log have been evaluated for the laser Doppler measurements of <cit.> and are shown in figure <ref>. While there is considerable scatter due to the differentiation of experimental data, there can be no doubt that the data follow the bounded dissipation scaling, i.e. approach the same constant Ξ_BD = -10.2 as the channel DNS in figure <ref>(b). The close correspondence between the experiment for =5900 and the DNS for =5186 is noted in particular. Further evidence for the BD-scaling in channels is the near perfect correspondence between the overlap (<ref>) and the two-term inner expansion of ⟨ uu⟩^+ educed from pairs of channel DNS in M22 and shown in figure <ref>, which is figure 1 of M22 replotted against (y^+)^1/4. This figure establishes, that the inner ⟨ uu⟩^+(y^+) reaches the BD overlap (<ref>) at (y^+)^1/4≊ 5, i.e. y^+ ≊ 600. § THE INNER-OUTER OVERLAP OF ⟨ UU⟩^+ FOR PIPE FLOW For pipe flow, Ξ_BD and Ξ_AE have been evaluated for the smooth Superpipe data of <cit.> and for selected DNS profiles of <cit.> and <cit.>, all shown in figure <ref>. As seen in panel (a), the data closely follow the BD overlap law of equation (<ref>) for the channel up to Y ≈ 0.4-0.5 with Ξ_BDcp slightly increased from -10.2 to -9.5, marked by the grey horizontal line in figure <ref>(a). Beyond Y ≈ 0.4-0.5, the slope of ⟨ uu⟩^+ in the pipe goes to zero faster than in the channel, due to the cylindrical geometry. Again, Ξ_AE in figure <ref>b does not show any plateau, just as in figures <ref>(d) and <ref>(b) for the channel. § THE OVERLAP OF ⟨ UU⟩^+ IN ZPG TBLS AND ITS SIGNIFICANT DIFFERENCE TO CHANNEL AND PIPE The analogous comparison of BD and AE scaling for the ZPG TBL is shown in figure <ref>. While the ZPG TBL is generally considered to be one of the “canonical” wall-bounded flows, both indicator functions are seen to be substantially different from the corresponding channel and pipe functions. Incidentally, the same is true for the ZPG TBL mean flow indicator function, which is also very different from channel and pipe, as shown in <cit.>. When looking at the “band” of data in figure <ref>, the major differences to the channel and pipe indicator functions are evident: * Both Ξ's show a large negative excursion relative to channel and pipe in the range 0.2 ⪅ Y ⪅ 0.8, indicating a much steeper decrease of ⟨ uu⟩^+ in this region. A likely reason for this negative bulge is intermittency, as discussed below. * In the near-wall interval 0.1⪅ Y⪅ 0.25 one may see a short region of constant Ξ_BD≈ -9.5 in figure <ref>(a), while in panel (b) the data “band” in this region appears to have a slightly negative slope, but could equally well be fitted by a constant Ξ_AE≈ -1.5. In short, the data scatter and the “negative bulge” beyond Y≈ 0.25 do not allow to discriminate between BD and AE scaling in the ZPG TBL, and the 's of DNS are too low to help. 
However, it would be rather surprising, if the inner asymptotic sequence for ⟨ uu⟩^+ in the ZPG TBL was different from channel and pipe! To test the hypothesis that the large negative bulge of Ξ beyond Y≊ 0.25 is due to intermittency, i.e. to the entrainment of free stream fluid, a rough model is developed, based on the location of the “turbulent non-turbulent interface” (TNTI). The PDF of its location has been studied by <cit.>, who has determined its mean location Y̅_TNTI=0.69 and its standard deviation σ=0.11. With the cumulative distribution function 𝒞(Y) of the TNTI location , the measured ⟨ uu⟩^+ may be expressed in terms of a hypothetical, fully turbulent stress as ⟨ uu⟩^+ = ⟨ uu⟩^+_turb [1-𝒞(Y)] , where the drastic simplification has been made, that ⟨ uu⟩^+ ≡ 0 during the incursions of free-stream fluid into the boundary layer. With this, Ξ_BD can be decomposed as Ξ_BD = 4 Y^3/4 (⟨ uu⟩^+_turb/ Y) - 4 Y^3/4 ( [𝒞(Y) ⟨ uu⟩^+_turb] / Y) Assuming that the first term in equation (<ref>) corresponds to the hypothetical, non-intermittent ZPG TBL, with an overlap value close to the one for channel and pipe, the second term represents the intermittency correction. Concentrating on the overlap, the intermittency correction is evaluated with the channel overlap of equation (<ref>) and the 𝒞(Y) of . As shown in figure <ref>(a), this model captures the essence of the deviation of Ξ_BD from the constant value ≊ -10. It is furthermore noted, that the largest deviation of Ξ_BD from the channel and pipe indicator functions occurs at Y≊ 0.65, essentially at the mean location of the TNTI. This strongly supports the notion, that the difference between the ZPG TBL and the channel and pipe overlaps is principally due to the entrainment of free stream fluid. § CONCLUSIONS The clear conclusion from the present overlap analysis of the stream-wise Reynolds stress ⟨ uu⟩^+ for channel and pipe flow is that ⟨ uu⟩^+ remains finite everywhere in the limit of infinite Reynolds number and, in the inner region, decreases from there as ^-1/4. This has been demonstrated by analyzing the inner-outer overlap with the indicator functions Ξ_BD for the “bounded dissipation” scaling of <cit.>, and comparing to Ξ_AE for the “attached eddy” or logarithmic scaling <cit.>. In other words, the unlimited growth of near-wall stream-wise Reynolds stress with ln in channel and pipe flow is a feature of the attached eddy model and not of physical reality. One possible explanation for this result is the essentially inviscid nature of the attached eddy model. As ln represents a weak divergence for →∞, it may well be that it could be eliminated by introducing some viscous “damping” in the model, without compromising its physical attractiveness. Indications have also been presented for BD scaling of ⟨ uu⟩^+ in the ZPG TBL, but no definitive conclusion can be drawn on the basis of the available data. The main reasons are the short extent of the ⟨ uu⟩^+ overlap, ending already at Y≊ 0.25, as opposed to Y≊ 0.7 in channels and pipes, combined with the relatively large scatter of the Ξ's. As argued in section <ref>, the major difference beyond Y≊ 0.25 between the Ξ's in the ZPG TBL and in channels and pipes is most likely due to the intermittent intrusion of free stream fluid, and sets the ZPG TBL clearly apart from channel and pipe. In all likelihood, the present conclusions also apply to the other components of the Reynolds stress tensor, which will be the subject of a future full-length paper. 
To conclude, a finite limit of Reynolds stresses for Re_τ→∞ is not only of theoretical interest, but has important technological implications, such as in ship hydrodynamics and hydraulic engineering. I am grateful to Katepalli Sreenivasan for his insightful comments and encouragement. Declaration of Interests. The author reports no conflict of interest.
http://arxiv.org/abs/2307.02531v1
20230705180001
Subaru High-$z$ Exploration of Low-Luminosity Quasars (SHELLQs). XVIII. The Dark Matter Halo Mass of Quasars at $z\sim6$
[ "Junya Arita", "Nobunari Kashikawa", "Yoshiki Matsuoka", "Wanqiu He", "Kei Ito", "Yongming Liang", "Rikako Ishimoto", "Takehiro Yoshioka", "Yoshihiro Takeda", "Kazushi Iwasawa", "Masafusa Onoue", "Yoshiki Toba", "Masatoshi Imanishi" ]
astro-ph.GA
[ "astro-ph.GA" ]
Junya Arita jarita@astron.s.u-tokyo.ac.jp 0009-0007-0864-7094]Junya Arita Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan 0000-0003-3954-4219]Nobunari Kashikawa Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan Research Center for the Early Universe, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan 0000-0001-5063-0340]Yoshiki Matsuoka Research Center for Space and Cosmic Evolution, Ehime University, 2-5 Bunkyo-cho, Matsuyama, Ehime 790-8577, Japan 0000-0001-7759-6410]Wanqiu He National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan 0000-0002-9453-0381]Kei Ito Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan 0000-0002-2725-302X]Yongming Liang Institute for Cosmic Ray Research, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8582, Japan 0000-0002-2134-2902]Rikako Ishimoto Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan 0000-0002-3800-0554]Takehiro Yoshioka Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan 0000-0001-7154-3756]Yoshihiro Takeda Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan 0000-0002-4923-3281]Kazushi Iwasawa Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona (IEEC-UB), Martí i Franquès, 1, E-08028 Barcelona, Spain ICREA, Pg. Lluís Companys 23, E-08010 Barcelona, Spain 0000-0003-2984-6803]Masafusa Onoue Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, China Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), The University of Tokyo, Chiba 277-8583, Japan 0000-0002-3531-7863]Yoshiki Toba Research Center for Space and Cosmic Evolution, Ehime University, 2-5 Bunkyo-cho, Matsuyama, Ehime 790-8577, Japan National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Academia Sinica Institute of Astronomy and Astrophysics, 11F of Astronomy-Mathematics Building, AS/NTU, No.1, Section 4, Roosevelt Road, Taipei 10617, Taiwan 0000-0001-6186-8792]Masatoshi Imanishi National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan We present, for the first time, dark matter halo (DMH) mass measurement of quasars at z∼6 based on a clustering analysis of 107 quasars. Spectroscopically identified quasars are homogeneously extracted from the HSC-SSP wide layer over 891 deg^2. We evaluate the clustering strength by three different auto-correlation functions: projected correlation function, angular correlation function, and redshift-space correlation function. The DMH mass of quasars at z∼6 is evaluated as 5.0_-4.0^+7.4×10^12 h^-1M_⊙ with the bias parameter b=20.8±8.7 by the projected correlation function. The other two estimators agree with these values, though each uncertainty is large. The DMH mass of quasars is found to be nearly constant ∼10^12.5 h^-1M_⊙ throughout cosmic time, suggesting that there is a characteristic DMH mass where quasars are always activated. As a result, quasars appear in the most massive halos at z ∼ 6, but in less extreme halos thereafter. 
The DMH mass does not appear to exceed the upper limit of 10^13 h^-1M_⊙, which suggests that most quasars reside in DMHs with M_halo<10^13 h^-1M_⊙ across most of the cosmic time. Our results supporting a significant increasing bias with redshift are consistent with the bias evolution model with inefficient AGN feedback at z∼6. The duty cycle (f_duty) is estimated as 0.019±0.008 by assuming that DMHs in some mass interval can host a quasar. The average stellar mass is evaluated from stellar-to-halo mass ratio as M_*=6.5_-5.2^+9.6×10^10 h^-1M_⊙, which is found to be consistent with [C II] observational results. § INTRODUCTION According to the current ΛCDM theory, the tiny density fluctuation of dark matter in the early universe grows and subsequently collapses into dark matter halos (DMHs). These halos continuously accrete and hierarchically merge to form high-mass DMHs. Galaxies are nurtured in the center of DMHs and almost all the galaxies harbor a supermassive black hole (SMBH) in their centers <cit.>. Quasars are believed to be powered by gas accretion onto SMBHs <cit.> and outshine in the multiple wavelengths. Since quasars are one of the most luminous objects in the universe, they are observable even at z≳7 (e.g., ). Quasars are important objects to study the open questions in the early universe; however, it remains unclear how high-z quasars are physically related to the underlying DMHs they inhabit. One of the important questions is when and how the co-evolution between galaxies and SMBHs manifested, i.e., the masses of which are correlated with those of their host galaxies. While this relationship in the local universe is well established <cit.>, it remains to be elucidated in the early universe. The parent DMH, which governs both the SMBH and the galaxy, holds the key to unveiling the underlying physical mechanism of their relationship. The gas accumulated by the gravitational potential of DMHs is consumed to form stars, thus, the relationship between stellar mass and DMH mass is quite natural <cit.>. It is believed that the gas further loses the angular momentum due to the radiation from active star formation in the DMH and flows into the central SMBH <cit.> to grow more massive. Otherwise, the steady high density cold gas flow directly from the halo could be responsible for sustaining critical accretion rates leading to rapid growth of ∼10^9 M_⊙ black holes as early as z∼7 (e.g., ). Therefore, the mass of a galaxy DMH hosting a SMBH is crucial for understanding their co-evolutionary growth. The DMH mass is also a key physical quantity to understand the AGN feedback, which is thought to play a significant role in regulating the star formation of the host galaxies <cit.>, because it can constrain the duty cycle, the fraction of DMHs that host active quasars (e.g., ). <cit.> showed that feedback efficiency will greatly change DMH mass evolution at high-z. According to their model, feedback prevents gas accretion against gravity by radiation pressure, and works to stop SMBH growth and eventually defuses the quasar phase. Thus, if feedback is inefficient to stop the SMBH growth at high-z, quasars will live in the highest-mass DMHs. Since the quasar activity has a huge impact on the host galaxy, unveiling the feedback efficiency helps to advance our understanding of the co-evolution. The clustering analysis is an effective method to estimate DMH mass. It quantifies the distribution of objects often through a two-point correlation function. 
The two-point correlation function ξ(r) is defined based on the probability dP that an object is observed in the volume element dV apart from the separation r from a given object <cit.>; dP = n̅[1+ξ(r)]dV, where n̅ is the mean number density of the objects. Quasar host galaxies are believed to reside in the peak of the underlying dark matter density distribution <cit.>. Using the bias (linear bias) parameter b, the relation between two-point correlation functions of quasars ξ_Q(r) and that of dark matter ξ_DM(r) can be expressed as ξ_Q(r) = b^2 ξ_DM(r). The bias parameter has been modeled theoretically (e.g., ), which gives insight into the DMH mass. The previous initiative works <cit.> have proved that the quasar bias increases with redshift from today to z=2­-3. However, clustering analyses of quasars at z>3, there are a few attempts. <cit.> utilized 4426 spectroscopically identified luminous quasars with 2.9≤ z≤ 5.4 from the Fifth Data Release of the Sloan Digital Sky Survey (SDSS DR5; ) and concluded that quasars typically reside in DMHs with (2­- 3)× 10^12 h^-1M_⊙ at 2.9≤ z≤ 3.5 and (4­- 6)× 10^12 h^-1M_⊙ at 3.5≤ z≤ 5.4. <cit.> measured the clustering signal of quasars from the Baryon Oscillation Spectroscopic Survey (BOSS; ). They estimated the DMH mass for quasars at 2.2≤ z≤ 3.4 as 0.6­-3× 10^12 h^-1M_⊙. <cit.> extracted photometrically selected 901 quasars with z̅_phot∼3.8 from the early data release of the Subaru Hyper Suprime-Cam Strategic Survey Program (HSC-SSP; ). They added 342 SDSS quasars <cit.> to their sample and evaluated cross-correlation functions (CCF) between the quasars and bright Lyman Break Galaxies (LBGs) from the HSC-SSP. The typical DMH mass derived from the CCF signal is 1­-2× 10^12 h^-1M_⊙. <cit.> measured the clustering signal of photometrically selected quasars with 2.9≤ z_phot≤ 5.1 from SDSS Stripe 82 field <cit.> and presumed that characteristic DMH mass is 1.70­-9.83×10^12 h^-1M_⊙. These studies detected significantly large clustering signals, implying that the quasar halo bias rapidly increases beyond z∼3. In addition, the quasar DMH mass remains approximately constant at M_halo∼ 10^12.5 h^-1M_⊙ from the present day to z∼4, which will be intriguing to see whether these trends continue to higher-z. Despite intense observational efforts, the clustering measurements have been challenging beyond z>4. This is because clustering analysis requires a quasar sample with sufficient number density, which remarkably decreases towards z∼6 <cit.>. The sample size and the number density of quasars at z∼6 have increased dramatically in the last two decades but the observable quasar population is limited to high-luminosity obtained by ultra wide-field surveys, hindering the increase in their number density. Increasing the number density of quasars at z∼6 has been a major challenge because of the need for wide and deep observations and expensive spectroscopic observations for fainter quasars. Hyper Suprime-Cam (HSC; ) on the Subaru Telescope, which has a large field of view and high sensitivity, has changed the situation. Utilizing the powerful instrument, wide-field imaging survey program, HSC-SSP, was performed. From the survey data, Subaru High-z Exploration of Low-Luminosity Quasars (SHELLQs; ) has discovered 162 quasars at 5.66<z<7.07 over 1200 deg^2, providing high number density to allow for clustering measurements. In this paper, we, for the first time, present the clustering analysis of quasars at z∼6 by using the SHELLQs sample. 
We show the samples for our analysis in Section <ref>. We explain the details of clustering analysis in Section <ref>. In Section <ref>, we derive the important physical quantities from the result in previous section. Finally, we summarize our results in Section <ref>. We adopt flat ΛCDM cosmology with cosmology parameters (h,Ω_m,Ω_Λ,σ_8)=(0.7,0.3,0.7,0.81), namely H_0=70 km s^-1 Mpc^-1. All magnitudes in this paper are presented in the AB system <cit.>. § DATA §.§ SHELLQs Our main quasar sample is from SHELLQs utilizing HSC-SSP data. The HSC data are reduced with HSC pipeline, <cit.>, which is based on the Large Synoptic Survey Telescope (LSST) pipeline <cit.>. The astrometry and photometric calibration are performed based on the data from Panoramic Survey Telescope and Rapid Response System Data Release 1 (Pan-STARRS1; ). The SHELLQs quasars are a flux-limited (m_z<24.5 for z∼ 6 and m_y<24 for z∼ 7) sample of quasars at z∼6­-7. These quasars are selected from point sources and by a Bayesian-based probabilistic algorithm, which is applied to the optical HSC-SSP source catalogs. More details of the sample construction are described in <cit.>. The spectroscopic observation is performed by utilizing the Faint Object Camera and Spectrograph (FOCAS; ) on the Subaru Telescope and Optical System for Imaging and low-intermediate-Resolution Integrated Spectroscopy (OSIRIS; ) on the Gran Telescopio Canarias. The advantages of the SHELLQs sample are faintness and high number density thanks to the depth of the HSC-SSP. Figure <ref> shows the comparison of the absolute magnitude of quasars detected in SHELLQs, SDSS <cit.>, and Pan-STARRS1 <cit.>, where it is clear that SHELLQs is exploring a unique regime fainter than other surveys. The SHELLQs have a number density (0.14 deg^-2) of quasars ∼30 times more than the SDSS <cit.>, where 52 quasars are detected in 11240 deg^2 at 5.7<z≤6.4. The original SHELLQs sample consists of 162 spectroscopically confirmed quasars. We impose the following four criteria to ensure homogeneity, which yields 93 quasars (see Table <ref>). lc Requirement Number Detail of the sample selection All quasars identified by SHELLQs 162 1. z≤ 6.5 132 2. Identified in HSC-SSP S20A 125 3. Far from bright star masks and edge regions 116 4. Broad line quasars 93 We add 14 known quasars into this sample. The additional quasars are listed in Table <ref>. * z≤ 6.5 The SHELLQs sample consists of z∼6 quasars selected by i-dropouts and z∼7 quasars selected by z-dropouts. Since the latter has a small sample size and the survey areas of the two do not perfectly match, only the former z∼6 quasars are used in this study to ensure uniformity of the sample. The z∼6 quasar sample selection criteria <cit.> is m_z<24.5 & σ_z<0.155 & m_i-m_z>1.5 & 0.7<μ/μ_PSF<1.2, where μ is the adaptive moment of the source averaged over the two image dimensions and μ_PSF is that of the point spread function (PSF) model. * Identified in HSC-SSP S20A region The SHELLQs sample is still growing. Optical spectroscopic follow-up observations have been completely executed in the S20A survey area. This study uses only SHELLQs quasars spectroscopically confirmed in the S20A region, and quasars added after S21A are removed to account for the uniformity of the sample. 
* Far from bright star masks and edge regions We remove quasars in areas with poor data quality, such as near the bright star masks and edges by random points covering HSC-SSP S20A region to preserve sample homogeneity [Specifically, we use the following flags to retrieve random points covering the survey field with a surface number density of 100 arcmin^-2: , , , , , and impose ≥2. We exclude quasars that have no random points within 0.12 arcmin from the clustering analysis.]. * Broad line quasars According to the unified AGN model <cit.>, type-I AGNs and type-II AGNs are the same population and the difference purely originates from the inclination angle to observers. However, another evolutionary scenario interprets the difference between the two populations in host galaxies. The DMH mass measurements by <cit.> found the differences in the DMH mass between obscured and unobscured quasars through clustering analysis. <cit.> reported that approximately 20% of SHELLQs quasars have narrow Lyα emission lines (FWHM_Lyα<500 km s^-1) and one of them can be a type-II quasar based on the spectroscopic follow-up. Therefore, as long as it is unclear which interpretation is correct, we decide to be conservative in the study to exclude quasars with narrow Lyα emission lines from the sample. §.§ Other quasars lcccclc Name R.A. Decl. Redshift M_1450 Survey Reference (J2000) (J2000) (mag) Additional quasar sample SDSS J160254.18+422822.9 16:02:54.18 +42:28:22.9 6.07 -26.82 SDSS (1) SDSS J000552.33-000655.6 00:05:52.33 -00:06:55.6 5.855 -26.46 SDSS (1) CFHQS J021013-045620a 02:10:13.19 -04:56:20.9 6.44 -24.28 CFHQSb (2) CFHQS J021627-045534a 02:16:27.81 -04:55:34.1 6.01 -22.21 CFHQS (3) CFHQS J022743-060530a 02:27:43.29 -06:05:30.2 6.20 -25.03 CFHQS (3) IMS J220417.92+011144.8a 22:04:17.92 +01:11:44.8 5.944 -23.59 IMSc (4) VIMOS2911001793a 22:19:17.22 +01:02:48.9 6.156 -23.10 Suprime Cam (5) SDSS J222843.5+011032.2a 22:28:43.54 +01:10:32.2 5.95 -24.53 SDSS Stripe82 (6) SDSS J230735.35+003149.4 a 23:07:35.35 +00:31:49.4 5.87 -24.93 SDSS (7) SDSS J231546.57-002358.1 a 23:15:46.57 -00:23:58.1 6.117 -25.38 SDSS (8) PSO J183.1124+05.0926 12:12:26.98 +05:05:33.4 6.439 -26.99 Pan-STARRS1 (9) VIK J121516.88+002324.7 a 12:15:16.88 +00:23:24.7 5.93 -24.67 VIKINGe (10) PSO J184.3389+01.5284 a 12:17:21.34 +01:31:42.2 6.20 -25.37 Pan-STARRS1 (11) PSO J187.3050+04.3243 12:29:13.21 +04:19:27.7 5.89 -25.4 Pan-STARRS1 (12) aThe quasar is recovered by SHELLQs project <cit.>. bCanada-France High-z Quasar Survey cInfrared Medium-deep Survey eVISTA Kilo-degree Infrared Galaxy Public Survey (1) <cit.>, (2) <cit.>, (3) <cit.>, (4) <cit.>, (5) <cit.>, (6) <cit.>, (7) <cit.>, (8) <cit.>, (9) <cit.>, (10) <cit.>, (11) <cit.>, (12) <cit.>. We also add to our sample 14 quasars at z∼6 that were discovered by other surveys (see Table <ref>). We select these quasars from the survey, whose area fully covers HSC-SSP S20A field. Most of them are identified by SDSS (e.g., ) and Pan-STARRS1 (e.g., ), which tend to be brighter than the SHELLQs quasars. These quasars also satisfy the requirements imposed on SHELLQs quasars. Ten of the 14 quasars are also detected in the SHELLQs observation but not included in the SHELLQs sample because they had already been found <cit.>. We visually inspect the spectra of all these quasars to confirm that they are actually z∼6 quasars. We assume that clustering strength is independent of quasar brightness, which is confirmed at low-z <cit.>. 
In fact, when we divide the sample into bright sub-sample (M_1450≤-24) and faint sub-sample (M_1450>-24), the results obtained in Section <ref> are consistent with each other within their errors. In summary, our final sample consists of 107 quasars. The distributions of absolute magnitude and redshift are shown in Figure <ref>. §.§ Homogeneity of the sample Our sample is distributed over 891 deg^2 of three fields: HECTOMAP, Autumn and Spring. The sky distribution of our sample quasars is shown in Figure <ref>. Sample homogeneity is of utmost importance for clustering analysis. Since the spectroscopy is completely executed for the candidates in S20A, the spatial homogeneity of the photometric data in selecting candidate objects should be examined. We verify the homogeneity by calculating detection completeness over the survey region. The detection completeness is defined as the ratio of the number of quasars recovered by ver. 8.4[<https://hsc.mtk.nao.ac.jp/pipedoc/pipedoc_8_e/index.html>] to the number of mock quasars scattered at random points on the HSC image in the same manner in <cit.>. The PSF of the input mock quasars is generated to be the same as that measured at each image position. The PSF is modeled by <cit.>, which can extract precise PSF model from images processed by <cit.>. A small region (patch) of 12'× 12' is randomly selected per tract, which consists of 81 patches, over the HSC-SSP region. We embed more than 3000 mock quasars in the survey field per patch with m_z=21­-28 which are randomly spread over the HSC-SSP coadded z-band images using <cit.>. We perform photometry on the coadded z-band images embedded mock quasars utilizing and detect the mock quasars. The detection completeness is estimated on 662 patches in total. The detection completeness is fitted with a function in <cit.>; f_det(m_z)=f_max-f_min/2{tanh [α(m_z^50-m_z)]+1}+f_min, where f_max,f_min,α, and m_z^50 represent the detection completeness at the brightest magnitude and the faintest magnitude, the sharpness of the function, and the magnitude at which the detection completeness is 50%, respectively. Our measurement of each tract is presented in Figure <ref> and the best-fit parameters with 1σ error for the median completeness are f_max=0.978±0.015, f_min=0.016±0.008,α=2.7±0.8,m_z^50=25.08±0.36, which are also denoted in the figure. Almost all the functions have similar parameters with a m_z^50 scatter as small as σ(m_z^50)=0.36. Figure <ref> shows the completeness map at m_z=24.5 of the survey region overplotted with the sample quasars. The completeness holds more than 70% (80%) over 85% (77%) of the entire survey region, and >50% over almost all the area at the z-band limiting magnitude, and there are few areas of singularly lowered completeness. It is noted that the following results hardly change when the area with f_det<0.5 is excluded. Therefore, we conclude that the whole survey area is homogeneous enough to conduct the clustering analysis. § CLUSTERING ANALYSIS §.§ Auto Correlation Function of the Quasars We first measure a projected correlation function ω_p(r_p) in Section <ref> that can be directly related to real-space clustering. At the same time, to check the robustness of the result, we also measure an angular correlation function ω(θ) without redshift information in Section <ref> and a redshift-space correlation function ξ(s), which includes redshift-space distortion, in Section <ref>. §.§.§ Projected correlation function We evaluate the projected correlation function ω_p of the sample. 
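As a brief aside on the completeness measurement above, fitting the tanh-shaped completeness function is a standard least-squares problem, sketched below in Python. The arrays mag_bins and det_frac are placeholders for the per-patch recovered fractions of injected mock quasars, not the measurements of this paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def f_det(m_z, f_max, f_min, alpha, m50):
    """Detection completeness: tanh step from f_max to f_min, 50% at m_z = m50."""
    return (f_max - f_min) / 2.0 * (np.tanh(alpha * (m50 - m_z)) + 1.0) + f_min

# Placeholder "measurement": recovered fraction of injected mock quasars per
# z-band magnitude bin (in practice, the output of the source-injection test).
mag_bins = np.linspace(21.0, 28.0, 15)
det_frac = f_det(mag_bins, 0.98, 0.02, 2.7, 25.1) \
           + np.random.normal(0.0, 0.02, mag_bins.size)

popt, pcov = curve_fit(f_det, mag_bins, det_frac, p0=[1.0, 0.0, 2.0, 25.0])
print("f_max, f_min, alpha, m_z^50 =", popt)
```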
In this analysis, the comoving distance is calculated from the spectroscopic redshift. We separate s, the three-dimensional distance between two objects, into r_p, perpendicular to the line of sight, and π, parallel with it (s=√(r_p^2+π^2)). We estimate the two-dimensional correlation function ξ(r_p,π) from <cit.>; ξ(r_p,π)=DD(r_p,π)-2DR(r_p,π)+RR(r_p,π)/RR(r_p,π) where DD(r_p,π), DR(r_p,π), RR(r_p,π) represent data-data, data-random, and random-random pair counts within perpendicular distance separation r_p and parallel distance separation π, respectively. The survey area is divided into three independent fields, therefore we count the pairs in each field to sum them up before being normalized by all pairs. The random points are retrieved from the random catalog in HSC-SSP DR3, which has random points scattered over the entire effective survey area, excluding mask areas, at a surface density of 1 arcmin^-2. The total number of random points is 3,209,416. The redshift of random points is assigned to follow the N(z), which is the redshift distribution of SHELLQs estimated by kernel density estimation (see Figure <ref>). To count pairs, we use [<https://github.com/manodeep/Corrfunc>], which is a Python package containing routines for clustering analysis. We also use the package in Section <ref>, Section <ref>, and Section <ref>. The projected correlation function ω_p(r_p) is derived by integrating ξ(r_p,π) with π direction. ω_p(r_p)=2∫_0^π_cutoffξ(r_p,π) dπ where π_cutoff, which is the optimum limit above which the signal is almost negligible, is fixed to π_cutoff=80 h^-1Mpc after sufficient trial and error. The redshift distortion is eliminated through the integration <cit.>, though the angular scale of the redshift distortion is much smaller (<20 h^-1Mpc) than the scales of our measurements. The uncertainty of ω_p(r_p) is evaluated by Jackknife resampling <cit.>. In the k-th resampling, we exclude k-th sub-region, and calculate the correlation function, ω_p,k(r_p). We divide the survey area into N=21 sub-regions using the k-means method. In this case, the covariance matrix is defined as C_ij=N-1/N∑_k=1^N(ω_p,k(r_p,i)-ω̅_p(r_p,i)) × (ω_p,k(r_p,j)-ω̅_p(r_p,j)) where ω_p,k(r_p,i) and ω̅_p represent the value of k-th projected correlation function for i-th r_p bin and the mean of the projected correlation function, respectively. The uncertainty of ω_p(r_p,i), σ_i, is evaluated as σ_i=√(C_ii), which is used only for plotting process. The projected function is related to the real-space correlation function ξ(r) as <cit.> ω_p(r_p)=2∫_r_p^∞rξ(r)/√(r^2-r_p^2)dr. Assuming that the real space correlation function is regarded as a power-law function, ξ(r)=(r/r_0)^-γ, the fitted function ω_p,fit(r_p) is represented as ω_p,fit(r_p)/r_p=B(γ-1/2,1/2)(r_p/r_0)^-γ where γ is a power-law index of the dark matter correlation function, B represents the beta function, and r_0 is the correlation length, which represents the scale of clustering. In this study, we fit γ to a fiducial value (γ=1.8; ). The black solid line in the top panel of Figure <ref> represents the power-law function fitted to the projected correlation function based on the χ^2 fit. Then, we obtain r_0=23.7± 11 h^-1Mpc as listed in Table <ref>. The goodness-of-fit is evaluated by χ^2=∑_i,j[ω_p(r_p,i)-ω_p,fit(r_p,i)]C_ij^-1[ω_p(r_p,j)-ω_p,fit(r_p,j)]. 
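Schematically, the measurement described above consists of pair counts on a (r_p, π) grid, the Landy-Szalay combination, an integration over π, and a power-law fit. The brute-force Python sketch below illustrates these steps for small catalogs (the production analysis uses Corrfunc for the pair counting and the Jackknife covariance for the fit); data and rand are assumed to be arrays of comoving Cartesian positions in h^-1Mpc, with the random catalog downsampled to a size manageable for brute-force counting.

```python
import numpy as np
from scipy.special import beta as beta_fn

def pair_counts_2d(a, b, rp_edges, pi_edges):
    """Brute-force pair counts on a grid of (r_p, pi) separations.

    a, b : (N, 3) arrays of comoving Cartesian positions [h^-1 Mpc].
    rp_edges[0] should be > 0 so that zero-separation self-pairs drop out.
    """
    diff = a[:, None, :] - b[None, :, :]
    mid = 0.5 * (a[:, None, :] + b[None, :, :])
    los = mid / np.linalg.norm(mid, axis=-1, keepdims=True)
    pi = np.abs(np.sum(diff * los, axis=-1))                         # parallel to LOS
    rp = np.sqrt(np.maximum(np.sum(diff**2, axis=-1) - pi**2, 0.0))  # perpendicular
    counts, _, _ = np.histogram2d(rp.ravel(), pi.ravel(),
                                  bins=[rp_edges, pi_edges])
    return counts

def wp_landy_szalay(data, rand, rp_edges, pi_edges):
    """w_p(r_p) from the Landy-Szalay estimator, integrated up to pi_edges[-1]
    (80 h^-1 Mpc in the text)."""
    nd, nr = len(data), len(rand)
    dd = pair_counts_2d(data, data, rp_edges, pi_edges) / (nd * (nd - 1))
    dr = pair_counts_2d(data, rand, rp_edges, pi_edges) / (nd * nr)
    rr = pair_counts_2d(rand, rand, rp_edges, pi_edges) / (nr * (nr - 1))
    xi_2d = (dd - 2.0 * dr + rr) / rr
    return 2.0 * np.sum(xi_2d * np.diff(pi_edges), axis=1)           # w_p(r_p)

def wp_powerlaw(rp, r0, gamma=1.8):
    """Projected power law: w_p = r_p B((gamma-1)/2, 1/2) (r_p/r0)^-gamma."""
    return rp * beta_fn((gamma - 1.0) / 2.0, 0.5) * (rp / r0)**(-gamma)

# r0 then follows from a chi^2 fit of wp_powerlaw to the measured w_p, using the
# Jackknife covariance matrix of the sub-region resamplings (fit not shown).
```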
To see the robustness of the clustering signal, we integrate the real-space correlation function ξ(r) within r_min≤ r≤ r_max (e.g., ); ξ_100 = 3/r_max^3∫_r_min^r_maxξ(r)r^2 dr, where r_min=10 h^-1Mpc and r_max=100 h^-1Mpc, over which the observed signal is detected in this study. Since we assume ξ(r)=(r/r_0)^-γ, Equation (<ref>) reduces to ξ_100=3r_0^γ/(3-γ)r_max^3(r_max^3-γ-r_min^3-γ). Adopting r_0=23.7±11 h^-1Mpc, we obtain ξ_100=0.175±0.147. Although the uncertainty of each individual data point is large, the overall clustering signal is found to be positive with a significance of more than 1σ. We also test the robustness in terms of whether the signal can be obtained by chance from a random sample. We extract the same number of random points as our quasar sample, treat them as data points and evaluate the projected correlation function. Based on 10000 iteration, we evaluate the probability of obtaining a clustering signal as shown in Figure <ref>. Counting the number of the projected correlation function that has positive signals in the same bins in Figure <ref>, we find that there is only a 4% probability of obtaining the clustering signal observed in this study. Hence, we conclude that the signal is not artificial. §.§.§ Angular correlation function We also evaluate the angular correlation function. Then, we use the estimator from <cit.>; ω(θ)=DD(θ)-2DR(θ)+RR(θ)/RR(θ) where DD(θ),DR(θ),RR(θ) represent the normalized data-data, data-random and random-random pair counts normalized by whole pair counts within an angular separation θ, respectively. The random points are retrieved from the random catalog in HSC-SSP DR3. The uncertainty of ω(θ) is evaluated by Jackknife resampling in the same manner as the previous section. We evaluate the uncertainty from the diagonal elements of the covariance matrix derived from Equation (<ref>) replacing ω_p for ω. The uncertainty of ω(θ_i), σ_i, is evaluated based on the diagonal element of the covariance matrix. The middle panel of Figure <ref> represents the result of the angular correlation function. The black solid line represents the best fit of a single power-law model, ω_true(θ)=A_ωθ^-β, considering the effect of the limited survey area to the correlation function. Then, we assume the following function; ω(θ)=A_ωθ^-β-IC, where A_ω is the amplitude, β is the power-law index, and IC is the integral constraint, which is a negative offset as the survey region is limited <cit.>. We fix β to 0.8 (=γ-1) for consistency with the projected correlation function. We evaluate the integral constraint based on <cit.>; IC=1/Ω^2∫∫ω_true(θ)dΩ_1dΩ_2, where Ω represents the solid angle of the survey field. The integral constraint becomes considerably smaller than the clustering signal in all fields. Therefore, the integral constraint is ignored in this study. We assume the error follows the Gaussian function, and evaluate the goodness-of-fit of the fitted function through Equation (<ref>) replacing ω_p for ω. We convert the amplitude A_ω into the correlation length r_0 based on <cit.>, which formulated r_0={A_ωc/H_0 H_γ[∫ N(z)dz]^2/∫ N^2(z)χ(z)^1-γE(z)dz}^1/γ, where H_γ=B(γ-1/2,1/2), E(z)= √(Ω_m(1+z)^3+Ω_λ), χ(z)=1/H_0∫_0^z1/E(z')dz'. Finally, we obtain r_0=27.0±8.4 h^-1Mpc. The result is listed in Table <ref> [We note that the obtained amplitude is consistent with Shinohara et al. 2023, in preparation, which also evaluate the angular correlation function from 92 quasars at 5.88<z<6.49 including 81 SHELLQs quasars, although samples are not an exact match. ]. 
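The conversion of the angular amplitude A_ω to the correlation length through Limber's equation involves only one-dimensional integrals. The numerical sketch below is a minimal illustration assuming the flat ΛCDM parameters adopted in this paper, θ expressed in radians in the power-law fit, and a redshift distribution N(z) supplied as a callable; the Gaussian used in the usage example is a placeholder, not the kernel-density estimate of the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn

H0_h, Om, OL = 100.0, 0.3, 0.7          # H0 in h km/s/Mpc, so distances are in h^-1 Mpc
c_km_s, gamma = 2.99792458e5, 1.8

def E(z):
    return np.sqrt(Om * (1.0 + z)**3 + OL)

def chi(z):                              # comoving distance [h^-1 Mpc]
    return (c_km_s / H0_h) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def r0_from_A_omega(A_omega, Nz, zmin=5.6, zmax=6.6):
    """Limber inversion for an auto-correlation; A_omega assumes theta in radians
    (rescale by the appropriate power of the unit conversion otherwise)."""
    H_gamma = beta_fn((gamma - 1.0) / 2.0, 0.5)
    norm = quad(Nz, zmin, zmax)[0]**2
    denom = quad(lambda z: Nz(z)**2 * chi(z)**(1.0 - gamma) * E(z), zmin, zmax)[0]
    return (A_omega * (c_km_s / H0_h) / H_gamma * norm / denom)**(1.0 / gamma)

# Usage with a Gaussian placeholder for N(z) (not the KDE of the text):
Nz = lambda z: np.exp(-0.5 * ((z - 6.1) / 0.2)**2)
print(r0_from_A_omega(A_omega=1.0e-3, Nz=Nz), "h^-1 Mpc")
```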
Adopting r_0=27.0±8.4 h^-1Mpc, the Equation (<ref>) gives ξ_100=0.222±0.124, which suggests that the clustering signal is actually detected. §.§.§ Redshift-space correlation function We also evaluate the redshift space correlation function of the quasars. All redshifts for SHELLQs quasars are measured in Lyα emission lines, which has an uncertainty up to Δ z∼0.1, in particular for those without clear Lyα emission <cit.>. This uncertainty and the redshift distortion due to peculiar velocity induce systematic bias in the redshift-space correlation. We derive the redshift-space correlation function ξ(s,μ), where s is the 3D distance and μ represents the cosine of the angle to the line of sight, utilizing the estimator from <cit.>; ξ(s,μ)=DD(s,μ)-2DR(s,μ)+RR(s,μ)/RR(s,μ), where DD(s,μ), DR(s,μ), RR(s,μ) represent data-data, random-random, and random-random pair counts within a separation s and an angular separation arccosμ, respectively. The correlation function of the entire survey are is evaluated by summing the whole pair counts. As mentioned in Section <ref>, the redshift of random points is assigned to follow N(z), the distribution function in Figure <ref>. The redshift-space correlation function decomposed into multipoles ξ_l(s) is derived by integrating ξ(s,μ) by μ <cit.>; ξ_l(s)=2l+1/2∫_-1^1ξ(s,μ)L_l(μ)dμ, where L_l is the Legendre polynomial of order l. We evaluate the mono-pole (l=0) of the redshift-space correlation function. The bottom panel of Figure <ref> represents the result of the redshift-space correlation function. Taking the redshift distortion into account, the redshift-space correlation function ξ_0(s) is related to the real-space correlation function ξ(r) suggested by <cit.>; ξ_0(s)=(b^2+2/3bf+1/5f^2)ξ(r), where b is the bias parameter which is defined in Equation (<ref>), f is the gravitational growth factor. However, the effect of the redshift distortion is negligible because our clustering signal is measured on a large scale, beyond the small scale where redshift distortion can be observed. We fit the power-law function ξ(s)=(s/s_0)^-γ, black solid line in the bottom panel of Figure <ref>, to the redshift space correlation function in place of the Kaiser's function by χ^2 fit. Based on Equation (<ref>) replacing ω for ξ, we obtain s_0 = 32.5± 19 h^-1Mpc as the correlation length in the redshift space, which is almost consistent with that in the real space. Adopting s_0 = 32.5± 19 h^-1Mpc as the correlation length in the real space, we obtain ξ_100=0.310±0.326, which suggests that the significance of the clustering signal of the redshift-space correlation function is marginal. As shown in Table <ref>, consistent correlation lengths are obtained using three different correlation functions. The ξ_100 shows that the clustering signal is barely detected. However, the errors for each correlation length are relatively large. This is probably due to the sample size not being large enough yet. §.§ Cross Correlation Function with Galaxies We evaluate cross-correlation function (CCF) with our quasars and their neighboring LBGs at the similar redshift. The LBGs at z∼6 are retrieved from Great Optically Luminous Dropout Research Using Subaru HSC (GOLDRUSH; ). The LBGs in the wide layer are not suitable for clustering analysis due to their low number density; therefore we use the LBG sample in the Deep and Ultra Deep layer (COSMOS and SXDS) over 8.7 deg^2 of HSC-SSP S18A <cit.> where the SHELLQs quasars reside. 
As a result, the number of quasars and LBGs to calculate CCF is limited to 3 and 200, respectively. The limiting magnitude of LBGs is m_UV=25.15. They are not spectroscopically confirmed; therefore only angular correlation function can be evaluated. We use the estimator of CCF from the following equation <cit.>; ω_QG(θ)=QG(θ)-QR(θ)-GR(θ)+RR(θ)/RR(θ), where QG(θ), QR(θ), GR(θ), RR(θ) represent quasar-galaxy, quasar-random, galaxy-random, and random-random pairs of the given separation normalized by total pairs, respectively. The random points are retrieved from the random catalog in HSC-SSP DR2 utilizing the same flags of Table 2 in <cit.> at the surface number density of 1 arcmin^-2. Figure <ref> represents the result of CCF (red circles) and auto-correlation function (ACF) of LBGs at z∼6 (blue squares; ω_GG), which is found to be consistent with <cit.>. We fit the power-law functions to the CCF and the ACF by χ^2 fit and the results are shown as the solid line and the dashed line, respectively. We confirme that the angular scale at which we see the CCF signal is large enough to exceed the small scale (≲ 20”), where the one-halo term is dominant. The errors are also evaluated by Jackknife resampling mentioned in Section <ref> with N=5. The goodness-of-fit is calculated based on Equation (<ref>). Our quasar sample has a cross-correlation strength similar to that of the auto-correlation of LBGs at the same redshift. Although the UV luminosity of the host galaxy, on which the clustering strength of galaxies strongly depends, in our quasar sample is not known, the result seems natural, given that quasars are stochastic processes that all galaxies experience at some period. The correlation length is derived from the amplitude A_ω of the power-law function fitted to the CCF. In CCFs, the Limber's equation is formalized as <cit.> r_0={A_ωc/H_0 H_γ∫ N_Q(z)dz∫ N_G(z)dz/∫ N_Q(z)N_G(z)χ(z)^1-γE(z)dz}^1/γ. The suffix of Q and G in Equation (<ref>) denote quasars and LBGs, respectively. The redshift distribution of LBGs is assumed to be the same as <cit.>. Finally, we obtain r_0=17.7±8.0 h^-1Mpc as the correlation length of quasars and galaxies. It should be noted that our LBG sample is photometrically selected and contamination of low-z interlopers, the fraction of which is unknown, reduces the amplitude of cross-correlation. <cit.> concluded that the contamination rate in the i-dropout galaxies may be small based on the fact that all 31 spectroscopic i-dropout galaxies have z>5.5, but it is difficult to know the exact contamination rate in the sample in this study down to the limiting magnitude. lccccc Estimator correlation length bias DMH mass reduced-χ^2 (h^-1Mpc) (10^12 h^-1M_⊙) DMH mass from the clustering analysis Projected, ω_p 23.7±11 20.8±8.7 5.0_-4.0^+7.4 0.87 Angular, ω 27.0±8.4 23.4±6.6 6.9_-4.1^+6.1 0.92 Redshift space, ξ 32.5±19 27.7±15 10.6_-9.3^+17.5 0.91 CCF, ω_QG ­- 19.5±16 4.0_-4.0^+14.8 0.52 §.§ The Bias Parameter We assume that quasars reside in the peak of DM distribution and trace the distribution of underlying DM <cit.>. The bias parameter is derived from the ratio of clustering strength between quasars and underlying DM at a scale of r=8 h^-1Mpc, b=√(ξ(8,z)/ξ_DM(8,z)), assuming that the real space correlation function ξ(r) is approximated by the power-law function. The correlation function of DM is generated by [<https://github.com/halomod/halomod>] <cit.>, assuming the bias model of <cit.>, the transfer function of , and the growth model of <cit.>. 
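The bias measurement itself reduces to a single ratio at r=8 h^-1Mpc. The sketch below is an illustration of that step, not the halomod-based calculation used in the text: it assumes a linear matter power spectrum at the sample redshift is available as tabulated arrays (k, P_lin), e.g. from a Boltzmann code, transforms it into ξ_DM(r), and forms b.

```python
import numpy as np

def xi_from_pk(k, P_lin, r):
    """xi(r) = 1/(2 pi^2) int P(k) k^2 j0(kr) dk, by quadrature on tabulated arrays.
    A fine, log-spaced k grid is needed for a stable result at r ~ 8 h^-1 Mpc."""
    kr = k * r
    j0 = np.sinc(kr / np.pi)             # np.sinc(x) = sin(pi x)/(pi x)
    return np.trapz(P_lin * k**2 * j0, k) / (2.0 * np.pi**2)

def quasar_bias(r0, gamma, k, P_lin, r_ref=8.0):
    """b = sqrt(xi_Q(8)/xi_DM(8)) with a power-law quasar xi and a tabulated
    linear matter P(k) at the same redshift (here z ~ 6.1)."""
    xi_q = (r_ref / r0)**(-gamma)
    xi_dm = xi_from_pk(k, P_lin, r_ref)
    return np.sqrt(xi_q / xi_dm)

# With (k, P_lin) tabulated at z = 6.1 (units h/Mpc and (h^-1 Mpc)^3), r0 = 23.7
# h^-1 Mpc and gamma = 1.8, this should return a bias of order the quoted b ~ 21.
```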
We evaluate the bias parameter as b=20.8±8.7, 23.4±6.6 and, 27.7±15 from the projected correlation function, the angular correlation function, and the redshift space correlation function, respectively. We also evaluate the bias parameters b_QG and b_GG from the CCF between quasars and LBGs and the ACF of LBGs, respectively. We derive the bias parameter of quasars b_QQ from <cit.>, b_QG∼ b_QQb_GG. We obtain b_QG=16.1±6.6 and b_GG=13.3±2.3 from the same analysis, yielding b_QQ=19.5±16 and they are summarized in Table <ref>. The bias parameters derived by four independent methods are consistent with each other within their errors. §.§ DMH Mass of z∼6 Quasars We derive typical DMH mass from bias parameters of correlation functions. Under the assumption that quasars are the tracer for the underlying DM distribution, we adopt the bias model in <cit.>, which is formalized as b(ν)=1-Aν^a/ν^a+δ^a_c+Bν^b+Cν^c, where ν is the peak height which is defined as ν=δ_c/σ(M), δ_c is the critical density for the collapse of DMHs (δ_c=1.686), and σ(M) is the linear matter variance at the radius of each DMH. We use the other parameters as they are in Table 2 of <cit.> for Δ=200, which represents the ratio between mean density and background density. The linear variance is defined as σ^2(M)=1/2π^2∫ P(k,z)Ŵ^2(k,R)k^2 dk, where P(k,z) is the matter power spectrum generated by [<https://github.com/cmbant/CAMB>] with our cosmology parameters and Ŵ is the spherical top-hat function defined as Ŵ(k,R)=3/(kR)^3[sin(kR)-kRcos(kR)]. This model is based on the clustering of DMHs in cosmological simulations of the flat ΛCDM cosmology. We obtain the radius of DMH R_halo by solving Equation (<ref>). Finally, we evaluate the DMH mass M_halo assuming the spherical DMH; M_halo=4/3π R_halo^3ρ̅_m. We adopt ρ̅_m=2.78× 10^11Ω_m h^2M_⊙. Our DMH mass from each estimator is summarized in Table <ref>. The bias and halo mass of the CCF are slightly smaller than the other three, but this may be due to the contamination of the low-z interlopers to the z∼6 LBG sample (see Section <ref>). The DMH mass derived by four independent methods are consistent with each other within their errors. However, we note that the DMH mass estimation is sensitive to σ_8. For simplicity, the following discussions will use the bias and halo mass obtained from the projected correlation function, but note that there is variation in these evaluations as shown in Table <ref>. § DISCUSSION §.§ Comparison of DMH mass with other studies This study is the first to obtain the typical DMH mass of quasars at z∼6 from clustering analysis, and not many previous studies have obtained DMH mass at z∼6 using other methods. <cit.> estimated DMH mass of 49 z∼ 6 quasars, assuming that the FWHM of [C II] corresponds to the circular velocity of the DMH. They estimated that the median DMH mass of the whole samples is 1.2_-0.6^+2.2× 10^12 M_⊙, which is slightly lower than our measurement, though it is consistent within the errors. <cit.> estimated the typical DMH mass of a bright quasar (M_1450<-26.5) at z∼6 is 2.2_-1.8^+3.4×10^12 h^-1M_⊙ by measuring the intergalactic medium density around these luminous quasars, which is also consistent with our result within the error. Furthermore, a cosmological N-body simulation <cit.> predicts that the virial mass of DMH of quasars at z=6.2 is 3.9× 10^12 h^-1M_⊙, which is consistent with our result. Figure <ref> shows a compilation of the previous DMH mass measurements based on clustering analysis at lower-z. 
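The bias-to-mass conversion described above amounts to inverting the b(ν) fitting formula for the peak height and then solving σ(M)=δ_c/ν with the top-hat variance. The Python sketch below illustrates the procedure; the Δ=200 fit parameters are transcribed from memory of the fitting formula's Table 2 and should be verified against the original, and (k, P_lin) again stand for a tabulated linear power spectrum at z≈6.1.

```python
import numpy as np
from scipy.optimize import brentq

delta_c, Om = 1.686, 0.3
rho_m = 2.78e11 * Om                # mean matter density, h-units as in the text

# Bias-fit parameters for Delta = 200 (transcribed from memory of the fitting
# formula's Table 2 -- verify against the original before quantitative use).
y = np.log10(200.0)
A_ = 1.0 + 0.24 * y * np.exp(-(4.0 / y)**4)
a_ = 0.44 * y - 0.88
B_, b_ = 0.183, 1.5
C_ = 0.019 + 0.107 * y + 0.19 * np.exp(-(4.0 / y)**4)
c_ = 2.4

def bias_of_nu(nu):
    """b(nu) = 1 - A nu^a / (nu^a + delta_c^a) + B nu^b + C nu^c."""
    return 1.0 - A_ * nu**a_ / (nu**a_ + delta_c**a_) + B_ * nu**b_ + C_ * nu**c_

def sigma_M(M, k, P_lin):
    """Top-hat linear variance sigma(M); (k, P_lin) tabulated at z ~ 6.1."""
    R = (3.0 * M / (4.0 * np.pi * rho_m))**(1.0 / 3.0)
    x = k * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
    return np.sqrt(np.trapz(P_lin * W**2 * k**2, k) / (2.0 * np.pi**2))

def halo_mass_from_bias(b_obs, k, P_lin):
    """Invert b(nu), then solve sigma(M) = delta_c / nu for the halo mass."""
    nu = brentq(lambda n: bias_of_nu(n) - b_obs, 0.05, 30.0)
    target = delta_c / nu
    lgM = brentq(lambda lg: sigma_M(10.0**lg, k, P_lin) - target, 8.0, 15.0)
    return 10.0**lgM                 # in h^-1 M_sun

# With b_obs = 20.8 and a z = 6.1 linear spectrum, the result should land near
# the quoted ~5e12 h^-1 M_sun, up to quadrature and fitting-formula accuracy.
```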
In the figure, we convert the bias parameter in each previous research into DMH mass adopting our cosmology to reduce the effect of different σ_8 among this work and the previous research. Some previous studies use different fitting formulae to infer a DMH mass from clustering, but we confirmed that the difference produces a few percent discrepancy in the DMH mass estimate. We conclude that the definition does not have a large impact on the DMH mass measurement. We also plot the mass evolution of DMH with M_halo=10^13,10^12,10^11 h^-1M_⊙ (dotted lines) from the sample mean redshift, z=6.1, to z=0 based on the extended Press-Schechter theory <cit.>. Our quasar sample with M_halo=5.0× 10^12 h^-1M_⊙ at z = 6.1 grows to 2.0_-1.0^+2.2× 10^14 h^-1M_⊙ (black solid line) at z=0, which is comparable to a rich galaxy cluster at present <cit.>, implying that quasars reside in the most massive DMHs in the early universe. Interestingly, the DMH mass of quasars has remained almost constant ∼10^12.5 h^-1M_⊙ across the cosmic time. Although the errors of each data point and variations even at the same epoch are large, and the DMH mass tends to decrease slightly from z=1 to 0, it appears to remain roughly M_halo∼10^12­-10^13 h^-1M_⊙. A quite constant halo mass of quasars as a function of redshift has been suggested up to z∼4 by the previous studies <cit.> and this study confirms that the trend continues up to z∼6 for the first time. This is in clear contrast to the standard growth of DMHs (the dashed lines in Figure <ref>). <cit.> also concluded from the quasar pair statistics that there is no strong evolution in clustering strength from z∼6 to z∼4. <cit.> also used pair statistics to constrain the correlation length at z∼5 as r_0≳20 h^-1Mpc, which is consistent with the trend. The observed trend is also consistent with the model (e.g., ) that the characteristic mass of quasar host halos should evolve only weekly with redshift to reproduce the quasar luminosity function, though their constraints are predicted only at 0<z<3. Even though quasars at z=0­-6 reside in similar host halos of 10^12.5 h^-1M_⊙, this means that, as seen in the next section, higher-z quasars are hosted in DMHs which are more massive (higher bias) for the mass at that time. In other words, quasars appear in the most massive halos at z∼6, but they appear in less extreme halos at a later time. Our result that quasars at z∼6 reside in a fairly massive-end halo implies that they could be in overdense regions. However, observational evidence is far from conclusive, with some studies (e.g., ) finding quasars in the overdense region and others (e.g., ) finding no sign of it. This may be due to differences in the depth and survey area of the overdense regions explored, or different selection criteria for surrounding galaxies, which may have led to a lack of consensus. Recent James Webb Space Telescope (JWST) observation <cit.>, which assessed the galaxy distribution around quasars at z∼6 on the scale of up to ∼10 Mpc in the comoving coordinate, showed a clear overdensity of [O iii] emitters around an ultra-luminous quasar at z=6.327. Another JWST observation by <cit.>, which performed an imaging and spectroscopic survey of quasars utilizing NIRCam/WFSS, discovered ten [O III] emitters around a quasar at z=6.6 and the galaxy overdensity corresponds to δ_gal=12.6_-5.0^+5.9 over a 637 Mpc^3 volume in the comoving space. 
A large number of such deep and wide observations will provide clearer insights into the large-scale environments of z∼6 quasars. In the low-mass regime below M_halo<10^12 h^-1M_⊙, quasars with small black hole mass, small stellar mass and extremely low luminosity may not be detected observationally. In this case, the apparent lower limit of the observed halo mass may be due to observational bias. In contrast, there may be an upper limit of halo mass rather than a typical halo mass at which quasar activity appears. In Figure <ref>, there appears to be an upper limit where the quasar DMH mass never exceeds 10^13 h^-1M_⊙, i.e., most quasars reside in DMHs with M_halo<10^13 h^-1M_⊙ across most of the cosmic history. <cit.> used GALFORM, a semi-analytic model, and concluded that quasars live in average mass halos and do not reside in the most massive DMHs at any redshift. In their model, the quasar activity, which is maintained by the cold gas accretion onto a central SMBH, will be suppressed by the radio-mode AGN feedback in a massive halo larger than 10^13 h^-1M_⊙. If the halo mass of quasars does not exceed 10^13 h^-1M_⊙ at any cosmic time, then such physics may ubiquitously operate. This is supported by the observation by <cit.>, which concluded that few of the most massive protocluster candidates were found around quasars at z∼4. In other words, at z∼ 4, quasars does not exist in overdense regions exceeding 10^13 h^-1M_⊙, but in medium-weight overdense regions below 10^13 h^-1M_⊙. However, it only appears that the halo mass does not exceed 10^13 h^-1M_⊙ in Figure <ref>, and what is measured from the clustering is the average DMH mass of quasars in each period, and it is therefore strictly inconclusive whether there are no quasars in the halo with a mass exceeding 10^13 h^-1M_⊙. §.§ Implication to AGN Feedback We compare our bias parameter with theoretical models in <cit.>, which predicted a bias parameter evolution at z≳3 for three models with simple assumptions: “efficient feedback," “inefficient feedback" and “maximal growth," as shown in Figure <ref>. In “efficient feedback" model, quasars only grow during their active phase and the growth thoroughly terminates after the phase. The bias parameter is predicted to become smaller at higher-z if feedback is efficient. In “inefficient feedback" model, quasars and their central SMBHs continue growing periodically even after their active phase until z∼2. Since their feedback is inefficient, the quasars do not stop growing and shine episodically. In contrast to the previous model, the quasars tend to reside in more massive DMHs, which makes the bias parameter larger at z≳3. In the last model, “maximal growth," quasars keep growing at the same rate with their host DMHs simultaneously until z∼2. The central SMBHs retain Eddington accretion all the time and their growth is rapid. The feedback of quasars is less efficient than the second model. Therefore, the DMHs are the most massive among these models, which is apparent in Figure <ref>. It should be noted that the model only predicts the evolution of the bias parameter, and no prediction of other observables (e.g., luminosity function, M­-σ relation) is given for each assumption. Our result is most consistent with a large bias parameter, favoring the “maximal model," which assumes Eddington accretion and the feedback is highly inefficient at z∼6. This result is consistent with the fact that the Eddington ratio of quasars at z∼6 tends to be higher than that in local <cit.>. 
However, it is noted that the Eddington ratio of quasars at z<4 is usually smaller than unity <cit.>, being inconsistent with “maximal growth" model. The measurement of bias parameters at z∼4 has not yet been settled, as the results are largely divided into large <cit.> and small <cit.> values. Therefore, since Hopkins' models simply attempt to explain the evolution from z=2 to z=6 with a single physical mechanism, it is not necessary only to support this “maximal growth" model at 4<z<6 as it is. For example, by assuming that the feedback is inefficient at z∼6 while it becomes more efficient until z∼4, an evolution model that the bias keeps low until z∼4 and increases rapidly to z∼6 does not conflict with our observational result. Alternatively, it could be explained by intermittent BH growth <cit.>. To further restrict the models, the measurement of the bias parameter at z∼5 is a key. §.§ Duty Cycle We also evaluate the duty cycle of quasars which represents the fraction of DMHs that host active quasars. At first, following the traditional approach <cit.>, we assume that a DMH with more than the threshold M_min can host a quasar which activates randomly for a certain period. Under this assumption, the duty cycle f_duty is defined as the ratio of the number of observed quasars to the number of the whole host halos above M_min. Therefore, f_duty is evaluated as f_duty =∫_L_min^∞Φ(L)dL/∫_M_min^∞n(M)dM, where Φ(L) is the quasar luminosity function at z∼6 derived by <cit.>, L_min is the minimum luminosity of the quasar sample, n(M) represents the DMH mass function at z∼6 derived by <cit.>, and M_min represents the DMH minimum mass to host a quasar. The quasar luminosity function is evaluated based on the sample almost equivalent to that in this study, by excluding type-II quasars. We adopt the DMH mass function from <cit.>; n(M)=-A√(2a/π)ρ_0/Mδ_c(z)/σ^2(M)dσ(M)/dM ×{1+[σ^2(M)/aδ_c(z)]^p}exp[-aδ_c(z)/2σ^2(M)], where A=0.3222,a=0.707,p=0.3, and δ_c(z)=δ_c/D(z). The D(z) represents the growth factor from <cit.>. The minimum mass is estimated from the effective bias which is expressed as b_eff = ∫_M_min^∞b(M,z)n(M)dM/∫_M_min^∞n(M)dM, where b(M,z) is the bias parameter of the given DMH mass at a given redshift from the model <cit.>. Based on the effective bias determined from the clustering analysis, M_min is evaluated to be 4.5× 10^12 h^-1M_⊙. In this case, we obtain f_duty=6.3±2.7, exceeding unity, which is unreasonable given its definition. We consider that it is too simple to assume that all halos above a certain minimum mass can host a quasar, as expressed in Equation (<ref>). Equation (<ref>) is correct when luminosity and mass are proportional and L_min corresponds to M_min, but this is not the case of quasars. In fact, as seen in Figure <ref>, the halo mass of quasars is within a certain narrow range over cosmic time, and there seems to exist an upper limit to the halo mass of quasars. Although it is difficult to determine the exact mass range, we here simply assume that the DMHs with 12 ≤log (M_halo/h^-1M_⊙)≤ 13 can host quasars based on Figure <ref>. In the case, Equation (<ref>) can be expressed as f_duty = ∫_L_min^∞Φ(L)dL/∫_M_1^M_2n(M)dM, where M_1 = 10^12 h^-1M_⊙ and M_2 = 10^13 h^-1M_⊙. This equation gives f_duty=0.019±0.008. The derived f_duty corresponds to 1.9% of the age of the universe, namely ∼1.7×10^7 yr, as the lifetime of quasars at z∼ 6. 
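The modified duty cycle above is the ratio of the observed quasar number density to the abundance of halos in the assumed mass interval. The Python sketch below implements the Sheth-Tormen mass function with the parameters quoted in the text; n_quasar, the integral of the luminosity function down to L_min, is left as an input in comoving h^3 Mpc^-3, and (k, P_lin) is again a tabulated linear power spectrum at the sample redshift.

```python
import numpy as np

A_ST, a_ST, p_ST = 0.3222, 0.707, 0.3        # Sheth-Tormen parameters from the text
delta_c, Om = 1.686, 0.3
rho_m = 2.78e11 * Om                          # mean matter density, h-units as in the text

def sigma_M(M, k, P_lin):
    """Top-hat linear variance sigma(M); P_lin tabulated at the sample redshift."""
    R = (3.0 * M / (4.0 * np.pi * rho_m))**(1.0 / 3.0)
    x = k * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
    return np.sqrt(np.trapz(P_lin * W**2 * k**2, k) / (2.0 * np.pi**2))

def st_multiplicity(sig):
    """Sheth-Tormen multiplicity f(sigma), growth factor already applied to P_lin."""
    nu = delta_c / sig
    return (A_ST * np.sqrt(2.0 * a_ST / np.pi) * nu
            * (1.0 + (a_ST * nu**2)**(-p_ST))
            * np.exp(-a_ST * nu**2 / 2.0))

def duty_cycle(n_quasar, k, P_lin, logM1=12.0, logM2=13.0, nbin=200):
    """f_duty = n_quasar / n_halo(10^12 - 10^13 h^-1 Msun); n_quasar in h^3 Mpc^-3."""
    lgM = np.linspace(logM1, logM2, nbin)
    M = 10.0**lgM
    sig = np.array([sigma_M(m, k, P_lin) for m in M])
    dlnsig_dlnM = np.gradient(np.log(sig), np.log(M))
    dn_dM = st_multiplicity(sig) * (rho_m / M**2) * np.abs(dlnsig_dlnM)
    n_halo = np.trapz(dn_dM, M)
    return n_quasar / n_halo
```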
While this is consistent with the lifetime obtained from the clustering analysis at low-z (e.g., <cit.>), it is about equal to the upper limit obtained from the proximity zone size measurements at z∼6 <cit.>. We derive the duty cycle based on this new definition, which cannot be directly compared with previous results at low-z. Based on Equation (<ref>), we recalculate f_duty at z∼4 from the luminosity function <cit.> and obtain f_duty=0.012±0.001, which is consistent with the conventional estimate, f=0.001–0.06 <cit.>, and with f_duty at z∼6 obtained in this study. On the other hand, in the case of <cit.> at z∼3, we obtain f_duty=0.0060±0.0008. Based on <cit.>, we obtain f_duty=0.0039±0.0005, 0.0043±0.0005, 0.0042±0.0006 at z=0.804, 1.579, 2.475, respectively. They are slightly smaller than those at z>4. However, note that there is no justification for the mass range used for the integration here. Unless we know the exact mass distribution of halos that can host quasars, we cannot precisely obtain the denominators in either Equation (<ref>) or (<ref>). Also, the numerator in these equations is the number of quasars observed, and f_duty will inevitably increase as the limiting magnitude deepens in the future, that is, as L_min decreases. Because of this limitation, f_duty should be considered to give only a very rough estimate. §.§ Stellar mass and dynamical mass We evaluate the stellar mass of host galaxies based on the empirical stellar (M_*)-to-halo (M_halo) mass ratio (SHMR) from <cit.>. The SHMR at z∼6 has only been evaluated up to M_halo^max=10^12 h^-1M_⊙ and needs to be extended beyond this point to reach the observed halo mass, M_halo=5×10^12 h^-1M_⊙. However, the pivot mass (M_halo^max) is just where the slope of this relationship changes, and the slope in the high-mass regime tends to become shallower toward higher-z <cit.>; therefore, this extension involves a large uncertainty. Assuming conservatively here that the ratio above M_halo^max does not change from SHMR∼0.013 at M_halo^max, the stellar mass is evaluated as M_*=6.5_-5.2^+9.6×10^10 h^-1M_⊙, where the error is estimated from the uncertainty of M_halo only and does not take into account the uncertainty of the SHMR extrapolation. On the other hand, the dynamical mass evaluated from [C II] 158 μm observations is often used as a surrogate for the stellar mass (e.g., <cit.>), though the dynamical mass is essentially different from the stellar mass. These studies estimated the dynamical mass under the assumption of a thin rotating disk with a diameter D=1–2 h^-1kpc <cit.>. The stellar mass was evaluated from the [C II] observations of seven SHELLQs quasars as M_*=(0.91–20)×10^10 h^-1M_⊙ <cit.>. <cit.> evaluated the mean dynamical mass of 27 brighter quasars as M_dyn=(3.5±2.5)×10^10 h^-1M_⊙, which is consistent with <cit.>. The stellar mass based on the clustering analysis with the SHMR is consistent with those independently measured from [C II] observations. It is a bit surprising that they both agree, albeit with large uncertainties: in addition to the SHMR being uncertain at the massive end, there is no guarantee that the quasar hosts have the same SHMR as normal galaxies. On the other hand, there is an implicit assumption that the [C II] dynamical mass is a good proxy for the stellar mass of the bulge. We compare the dynamical mass between the clustering analysis and the [C II] observations. We estimate the dynamical mass at D=1–2 h^-1kpc, the scale at which the [C II] dynamical mass is measured, from our M_halo measurement, as follows. 
The virial radius r_vir can be estimated by using the spherical collapse model <cit.>; r_vir=0.756(M_halo/10^8 h^-1M_⊙)^1/3[Ω_m/Ω_m(z)·Δ_c/200]^-1/3 [(1+z)/10]^-1 h^-1kpc, where Ω_m(z) = Ω_m(1+z)^3/E(z)^2 and Δ_c=18π^2+82[Ω_m(z)-1]-39[Ω_m(z)-1]^2 is the overdensity at halo collapse. We obtain r_vir=64 h^-1kpc, which is much larger than the scale on which the [C II] dynamical mass is estimated. Assuming a rotation-dominated disk with the flat rotation curve of the DMH, i.e., a rotation velocity that does not depend on radius, the dynamical mass is estimated as M_dyn=(0.83–1.7)×10^11 h^-1M_⊙ at the scale of 1–2 h^-1kpc, which is larger than the [C II] dynamical mass; in other words, the [C II] rotation velocity is much slower than the halo circular velocity. This suggests either that the region where [C II] is detected lies deep in the central part of the halo, where the rotation velocity has not yet reached the maximum halo circular velocity, or that the rotation of the [C II] gas is independent of the rotation of the halo. These considerations make it difficult to regard the dynamical mass obtained from [C II] as that of the entire system. Nevertheless, the stellar masses of both estimates agree, which could be a coincidence due to the large uncertainties in both. It should be noted that recent direct observations with JWST/NIRCam of the host galaxies of a couple of SHELLQs quasars <cit.> apply SED (spectral energy distribution) fitting to derive the stellar mass, which is comparable to that inferred from the halo mass measurement. <cit.> also used JWST/NIRSpec to detect the [O III] λ5008 emitting regions of the host galaxy, which are more extended than the [C II], giving a slightly higher dynamical mass. More observations should be made in the future to increase the number of direct measurements of the stellar mass of quasar host galaxies. Also, we should keep in mind that the halo mass obtained in this study is still accompanied by a large error. § SUMMARY We conduct a clustering analysis of 107 quasars at z∼6, mainly composed of SHELLQs quasars, which have increased the number density of known quasars at z∼6 by more than a factor of 30 compared to SDSS. This study is the first attempt to measure the DMH mass of quasars at z∼6. The main results are summarized below. * The quasars are spectroscopically identified in the HSC-SSP wide layer over 891 deg^2. The completeness exceeds 70% (80%) over 85% (77%) of the entire survey region. We evaluate three auto-correlation functions for our sample: the projected correlation function ω_p(r_p), the angular correlation function ω(θ), and the redshift-space correlation function ξ(s). We also evaluate the angular cross-correlation function between our quasar sample and the LBG sample at z∼6 in the HSC-SSP Deep layer. The DMH mass at z∼6 is evaluated as 5.0_-4.0^+7.4×10^12 h^-1M_⊙ with a bias parameter b=20.8±8.7 from the projected correlation function. The other three estimators agree with these values, though the uncertainties are large due to the small sample size. Using extended Press-Schechter theory, we find that a DMH of 5.0×10^12 h^-1M_⊙ at z∼6 will grow into 2.0_-1.0^+2.2×10^14 h^-1M_⊙ at z=0, which is comparable to rich clusters of galaxies today. * The DMH mass of quasars is found to be nearly constant at ∼10^12.5 h^-1M_⊙ throughout cosmic history. While there is broad agreement in previous studies that the quasar halo mass remains approximately constant up to z ∼ 4, this study confirms, for the first time, that this trend continues up to z ∼ 6. 
This means that there is a characteristic DMH mass at which quasars are always activated. As a result, quasars appear in the most massive halos at z ∼ 6, but in less extreme halos thereafter. The mass of the quasar DMH is unlikely to exceed its upper limit of 10^13 h^-1M_⊙. This suggests that most quasars reside in DMHs with M_halo<10^13 h^-1M_⊙ across most of the cosmic history. This is consistent with the model by <cit.>. In that model, quasar activity, which is maintained by cold gas accretion onto the central SMBH, is suppressed by radio-mode AGN feedback for massive halos larger than 10^13 h^-1M_⊙. If the quasar halo mass does not exceed 10^13 h^-1M_⊙ at any time, then such physics may be ubiquitously at work. * Our result that the bias parameter b is as large as b=20.8±8.7 at z ∼ 6 supports the “maximal growth" model proposed by <cit.>, which assumes that feedback is highly inefficient during z ∼ 4–6. Without an observational constraint at z∼5, our result along with the previous observations can also be explained by a bias evolution model in which feedback is inefficient at z∼6 but becomes progressively more efficient toward z∼4. * We estimate the quasar duty cycle f_duty at z∼6. We find that the conventional definition of f_duty yields an unphysical result, as f_duty becomes greater than unity. We propose a new method to estimate the duty cycle in line with the observational result that the DMH mass is nearly constant. We assume that DMHs with 12≤log(M_halo/h^-1M_⊙)≤13 can host quasars. Using the number density of DMHs in this mass interval and the quasar luminosity function at z∼6, we obtain f_duty=0.019±0.008, which is consistent with that at z∼4. * Assuming that the empirical SHMR at z∼6 is constant at M_halo>10^12 h^-1M_⊙, the average stellar mass of quasar host galaxies at z∼6 is evaluated from the observed DMH mass to be M_*=6.5_-5.2^+9.6×10^10 h^-1M_⊙, which is found to be consistent with those derived from [C II] observations. A clustering measurement using quasar candidates at z∼5 identified by HSC will soon be made, which will constrain the feedback models more tightly. More stellar mass measurements from [C II] observations with the Atacama Large Millimeter Array (ALMA) will lead to a rigorous comparison with the halo masses derived in this study and a constraint on the SHMR at the massive end at z∼6. The high sensitivity of JWST will allow us to directly measure the host stellar mass and the dynamical mass of quasars and to investigate the environment around quasars (e.g., overdensity). In the future, more powerful surveys (e.g., the Legacy Survey of Space and Time; <cit.>) will provide larger quasar samples at high-z, which will lead to the detection of clearer clustering signals. In addition, promising instruments such as the Nancy Grace Roman Space Telescope and the Euclid satellite are expected to identify quasars at z>7. These next-generation facilities will make the samples deeper and larger, which will have a huge impact on our understanding of the co-evolution of SMBHs and galaxies in the early universe. § ACKNOWLEDGEMENTS We appreciate the anonymous referee for constructive comments and suggestions. We thank Tomo Takahashi, Takumi Shinohara, and Teruaki Suyama for fruitful discussions and Kazuhiro Shimasaku and Yuichi Harikane for useful suggestions. J.A. is supported by the International Graduate Program for Excellence in Earth-Space Science (IGPEES). N.K. was supported by the Japan Society for the Promotion of Science through Grant-in-Aid for Scientific Research 21H04490. Y.M. 
was supported by the Japan Society for the Promotion of Science KAKENHI grant No. JP17H04830 and No. 21H04494. K.I. acknowledges support by grant PID2019-105510GB-C33 funded by MCIN/AEI/10.13039/501100011033 and “Unit of excellence María de Maeztu 2020-2023” awarded to ICCUB (CEX2019-000918-M). M.O. is supported by the National Natural Science Foundation of China (12150410307). The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. This paper is based [in part] on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center (ADC) at NAOJ. Data analysis was in part carried out with the cooperation of Center for Computational Astrophysics (CfCA) at NAOJ. We are honored and grateful for the opportunity of observing the Universe from Maunakea, which has the cultural, historical and natural significance in Hawaii. This paper makes use of software developed for Vera C. Rubin Observatory. We thank the Rubin Observatory for making their code available as free software at <http://pipelines.lsst.io/>. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg, and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen’s University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. Facilities: Subaru. Software: astropy <cit.>, CAMB <cit.>, Corrfunc <cit.>, halomod <cit.>, hscpipe <cit.>, Numpy <cit.>, Matplotlib <cit.>, Pandas <cit.>, Scipy <cit.>.
Joint distribution of currents in the symmetric exclusion process Aurélien Grabsch1⋆, Pierre Rizkallah2 and Olivier Bénichou1 1 Sorbonne Université, CNRS, Laboratoire de Physique Théorique de la Matière Condensée (LPTMC), 4 Place Jussieu, 75005 Paris, France 2 Sorbonne Université, CNRS, Physicochimie des Electrolytes et Nanosystèmes Interfaciaux (PHENIX), 4 Place Jussieu, 75005 Paris, France ^⋆ aurelien.grabsch@sorbonne-universite.fr August 1, 2023 § ABSTRACT The symmetric simple exclusion process (SEP) is a paradigmatic model of diffusion in a single-file geometry, in which the particles cannot cross. In this model, the study of currents has attracted a lot of attention. In particular, the distributions of the integrated current through the origin and, more recently, of the integrated current through a moving reference point have been obtained in the long time limit. This latter observable is particularly interesting, as it allows one to obtain the distribution of the position of a tracer particle. However, up to now, these different observables have been considered independently. Here, we characterise the joint statistical properties of these currents, and their correlations with the density of particles. We show that the correlations satisfy closed integral equations, which generalise the ones obtained recently for a single observable. We also obtain boundary conditions verified by these correlations, which take a simple physical form for any single-file system. As a consequence of our results, we quantify the correlations between the displacement of a tracer and the integrated current of particles through the origin. § INTRODUCTION The Symmetric Exclusion Process (SEP) is a paradigmatic model of single-file diffusion <cit.>, which has been the object of several recent and important developments <cit.>. In this model, particles perform symmetric random walks in continuous time on an infinite one-dimensional lattice, with the constraint that there can be at most one particle per site. In this context, several quantities have attracted attention: (i) the integrated current through the origin Q_t (defined as the number of particles which have crossed the origin from left to right, minus those from right to left, up to time t) <cit.>; (ii) the position X_t of a tracer <cit.>, initially placed at the origin; (iii) the generalised current J_t which counts the number of particles which cross a moving boundary[This observable is also called a height function since it is involved in a classical mapping between exclusion processes and interface models <cit.>.] at position x_t (counted positively from left to right, and negatively from right to left) <cit.>. This latter observable actually provides an alternative way to study the displacement of a tracer, since its position X_t corresponds to the value of x_t for which J_t = 0, because the order of the particles is conserved <cit.>. The statistical properties of the current Q_t have been fully characterised by the computation of its cumulant generating function using Bethe ansatz <cit.>. Concerning the position X_t of a tracer, its fluctuations were first quantified by the computation of the variance <cit.>. The full distribution was later computed, first in the high-density limit <cit.>, then in the low-density limit <cit.>, and finally at arbitrary density <cit.> by relying on tools from integrable probability. 
These latter works <cit.> actually provide the full cumulant generating function of the generalised current J_t, from which the statistical properties of the position of the tracer are deduced. Recently, it has been shown that, in the long time limit, all these results can be easily recovered from the solution of a simple integral equation <cit.>. This equation is satisfied by generalized density profiles <cit.>, which characterise the correlations between the observable under consideration (Q_t, J_t or X_t) and the density of particles in the SEP. On top of providing a more direct way to obtain the statistical properties of these observables, this equation constitutes a strikingly simple closure of the infinite hierarchy of equations satisfied by these generalized density profiles <cit.>. These results are part of an intense activity around exact solutions for one-dimensional interacting particle systems <cit.>. Although the individual properties of Q_t, J_t or X_t have been characterised, the determination of their correlations remains an open question. These observables are indeed expected to be strongly correlated since, for instance, if the tracer (initially at the origin) moves to a position X_t to the right, then the current Q_t through the origin can only be positive. The determination of these correlations is the main goal of this article. Here, we show that the integral equation of <cit.> can be generalised to describe the joint correlations between the currents Q_t, J_t and the density of particles of the SEP in the long time limit. As a consequence, we deduce the joint statistical properties of Q_t and J_t, and, as a byproduct, those of Q_t and X_t. Importantly, this equation is completed by boundary conditions, which were derived from microscopic considerations in <cit.>. Here, we provide a macroscopic derivation of these boundary relations, and extend them beyond the SEP to any single-file system. Furthermore, this generalisation has the advantage of providing a clear physical meaning to these relations. The article is organised as follows. We first present in Section <ref> a summary of our main results, followed by a discussion of these results and their consequences in Section <ref>. We then present in Section <ref> the Macroscopic Fluctuation Theory (MFT) <cit.>, which gives a large scale description of diffusive systems, and is our starting point. In Section <ref> we first illustrate, in the simpler case of low density, how this approach can be used to study joint properties of different observables. We give in Section <ref> the derivation of our main results, which relies both on the MFT and on the inverse scattering technique <cit.>, which has recently been applied to MFT and related problems <cit.>. § SUMMARY OF THE MAIN RESULTS We consider a SEP in which particles hop with rate 1. We describe the state of the system by the set of occupation numbers {η_i(t) }_i ∈ℤ, with η_i(t) = 1 if site i is occupied at time t, and 0 otherwise. Initially, we consider that each site is filled independently with probability ρ_+ for i > 0 and ρ_- for i ≤ 0. The integrated current Q_t counts the total number of particles that cross the origin from left to right, minus the number from right to left, up to time t. It can be written explicitly in terms of the occupation numbers by comparing the number of particles to the right of the origin at times t and 0, Q_t = ∑_r > 0( η_r(t) - η_r(0) ) . 
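This definition is easy to probe directly in a simulation. The following minimal Python sketch draws an annealed step initial condition, runs the SEP dynamics, and estimates the first two cumulants of Q_t; the lattice size, observation time, number of samples and the hop-rate convention (rate 1 to each neighbouring site, implemented as occupation exchanges across bonds) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

L, t_max = 200, 64.0              # lattice sites -L..L ; choose L >> sqrt(t_max)
rho_minus, rho_plus = 0.5, 0.5    # densities of the annealed step initial condition
n_samples = 100

sites = np.arange(-L, L + 1)
n_bonds = sites.size - 1          # bond b connects sites[b] and sites[b+1]
origin_bond = L                   # bond between site 0 and site 1
total_rate = 1.0 * n_bonds        # each bond exchanges its occupations at rate 1 (assumed convention)

Q_samples = []
for _ in range(n_samples):
    eta = (rng.random(sites.size) < np.where(sites > 0, rho_plus, rho_minus)).astype(int)
    t, Q = 0.0, 0
    while True:
        t += rng.exponential(1.0 / total_rate)
        if t > t_max:
            break
        b = rng.integers(n_bonds)
        if eta[b] != eta[b + 1]:          # a particle actually moves
            if b == origin_bond:          # it crosses the origin
                Q += 1 if eta[b] == 1 else -1
            eta[b], eta[b + 1] = eta[b + 1], eta[b]
    Q_samples.append(Q)

Q_samples = np.array(Q_samples, dtype=float)
print("<Q_t>/sqrt(t)    =", Q_samples.mean() / np.sqrt(t_max))
print("Var(Q_t)/sqrt(t) =", Q_samples.var() / np.sqrt(t_max))
```

The same final and initial configurations can also be reused to measure the generalised current defined below.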
Similarly, the generalised current J_t counts the number of particles that cross a fictitious moving boundary located at x_t at time t. It can be expressed in terms of the occupation numbers, by comparing the number of particles to the right of x_t at time t, and the number of particles to the right of x_0 = 0 at initial time. Since this number of particles is infinite, one has to be careful with the definition, and consider the limit of a finite size system, J_t = lim_L →∞( ∑_r=x_t+1^L η_r(t) - ∑_r=1^L η_r(0) ) = lim_L →∞( ∑_r=x_t+1^L (η_r(t) - ρ_+) - ∑_r=1^L (η_r(0)-ρ_+) ) - ρ_+ x_t , where now the sums are convergent. This gives the definition J_t = ∑_r > x_t( η_r(t) - ρ_+ ) - ∑_r > 0( η_r(0) - ρ_+ )- ρ_+ x_t , x_t = ⌊ξ√(t)⌋ . We have chosen a specific expression for x_t, such that x_t ∼√(t) for large t since the system is diffusive. We will consider only the case ξ > 0, but the case ξ < 0 can be obtained similarly. Our main results concern the joint cumulant generating function of the two currents Q_t (<ref>) and J_t (<ref>), in the long time limit ψ_ξ(λ,ν,t) = ln^λ Q_t + ν J_tt →∞≃√(t) ψ̂_ξ(λ,ν) , and their correlation with the density of surrounding particles, encoded in the generalised density profiles <cit.> w_r(λ,ν,t) = η_r(t) ^λ Q_t + ν J_t/^λ Q_t + ν J_tt →∞≃Φ( x = r/√(t)) . We also characterise the correlations between the currents and the initial density of particles, encoded in the initial profile _r(λ,ν,t) = η_r(0) ^λ Q_t + ν J_t/^λ Q_t + ν J_tt →∞≃Φ̅( x = r/√(t)) . We have obtained equations satisfied by the profiles Φ and Φ̅, from which the cumulant generating function ψ̂_ξ is deduced. We first present these main equations, before giving some of their consequences on the correlations between the different observables and finally discussing the status of these results and relations with the recent works <cit.>. §.§ Equations for the correlations and the cumulants Instead of the profiles at final time Φ and initial time Φ̅, we found that it is their derivatives which verify closed equations. More precisely, we define the functions Ω(x) ≡{[ a_- Φ'(x) for x < 0; a_0 Φ'(x) for 0<x<ξ; a_+ Φ'(x) for x > ξ ]. , Ω̅(x) ≡{[ b_- Φ̅'(x) for x < 0; b_+ Φ̅'(x) for x > 0 ]. , with multiplicative constants a_-, a_0, a_+, b_- and b_+ to be determined. We find that the functions Ω and Ω̅ satisfy the following integral equations Ω(x) + α∫_0^∞Ω(y) Ω(x-y) Θ(y-x) y + β∫_ξ^∞Ω(y) Ω(x+ξ-y) Θ(y-x) y + αβ∫_ξ^∞ y ∫_0^ξ z Ω(y) Ω(z) Ω(x+ξ - y - z) Θ(y+z-x-ξ) = K(x) , Ω̅(x) + ∫_0^∞Ω̅(y) Ω̅(x-y) Θ(y-x) y = K̅(x) , where Θ is the Heaviside step function, and the kernels K(x) = ^-x^2/4/√(4π) , K̅(x) = α^-x^2/4/√(4π) + β^-(x-ξ)^2/4/√(4π) + αβ∫_0^ξΩ(y) e^-(x+y-ξ)^2/4/√(4π) y , which involve additional parameters α and β. The multiplicative constants in the definitions (<ref>) are determined by the following boundary conditions μ(Φ(0^+)) - μ(Φ(0^-)) = λ , μ(Φ(ξ^+)) - μ(Φ(ξ^-)) = ν , [∂_x μ(Φ)]_0^-^0^+ = 0 , [∂_x μ(Φ)]_ξ^-^ξ^+ = 0 , μ(Φ̅(0^+)) - μ(Φ̅(0^-)) = -(λ+ν) + μ(ρ_+) - μ(ρ_-) , [∂_x μ(Φ̅)]_0^-^0^+ = 0 , which involve the chemical potential μ. For the SEP, it takes the simple form μ(ρ) = -ln( 1/ρ - 1) . We also have boundary conditions at infinity, lim_x →±∞Φ(x) = ρ_± , lim_x →±∞Φ̅(x) = ρ_± . The last constants α and β are determined by conservation equations ∫_-∞^∞ [Φ(x) - Φ̅(x)] x = 0 . ∫_-∞^∞K̅(x) x = α + β + αβ∫_0^ξΩ(y) y = ω , where ω = ρ_+ (^-λ-ν - 1) + ρ_- (^λ+ν - 1) + ρ_+ ρ_- (^λ+ν - 1)(^-λ-ν - 1) coincides with the single parameter identified in the SEP <cit.>, but with two parameters λ and ν. 
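In practice, these integral equations can be solved numerically by fixed-point (Picard) iteration on a grid. A minimal Python sketch for the single-current case β = 0, in which the equation for Ω reduces to a single partial convolution with the Gaussian kernel K, is given below; the grid, the cutoff and the value of α are illustrative choices, and the full case α, β ≠ 0 can be iterated in the same way using the double and triple convolutions above.

```python
import numpy as np

# Picard iteration for  Omega(x) + alpha * int_0^inf dy Omega(y) Omega(x-y) Theta(y-x) = K(x),
# i.e. the beta = 0 reduction of the equation for Omega; parameters are illustrative.
alpha = 0.3
X_cut, n = 15.0, 1501
x = np.linspace(-X_cut, X_cut, n)
h = x[1] - x[0]
K = np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)

def partial_conv(om):
    """C[om](x) = integral over y >= max(x, 0) of om(y) * om(x - y) dy, on the grid."""
    out = np.zeros_like(om)
    for i, xv in enumerate(x):
        mask = x >= max(xv, 0.0)          # Theta(y - x) together with the bound y >= 0
        y = x[mask]
        om_shift = np.interp(xv - y, x, om, left=0.0, right=0.0)
        out[i] = h * np.sum(om[mask] * om_shift)
    return out

om = K.copy()
for _ in range(200):
    om_new = K - alpha * partial_conv(om)
    if np.max(np.abs(om_new - om)) < 1e-10:
        break
    om = om_new
```

The converged Ω can then be integrated to obtain Φ, up to the multiplicative constants fixed by the boundary conditions (<ref>,<ref>).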
Together, Eqs.(<ref>-<ref>) fully determine the profiles Φ and Φ̅. Their knowledge allows one to deduce the joint cumulant generating function by using (see below) ∂_λψ̂_ξ = ∫_0^∞ [Φ(x) - Φ̅(x)] x , ∂_νψ̂_ξ = ∫_0^∞ [Φ(x+ξ) - Φ̅(x)] x - ρ_+ ξ . §.§ Consequences for the observables From the above equations, we recover the known results on Q_t <cit.> and J_t <cit.>, for instance lim_t →∞1/√(t)Q_t = 0 , lim_t →∞1/√(t)J_t = -ρξ , lim_t →∞1/√(t)Q_t^2 = 2 ρ(1-ρ)/√(π) , lim_t →∞1/√(t)J_t^2_c = ρ(1-ρ) ( 2 ^- ξ^2/4/√(π) + ξ( ξ/2) ) . In addition, we obtain the joint statistical properties of these two observables, such as their covariance lim_t →∞1/√(t)Q_t J_t_c = ρ(1-ρ) ( 1+ ^- ξ^2/4/√(π) - ξ/2( ξ/2) ) . This shows that Q_t and J_t are strongly correlated for ξ→ 0, and their correlation decays when ξ→∞, as lim_t →∞Q_t J_t/√(Q_t^2J_t^2)≃{[ 1 - ξ/2√(π/2) for ξ→ 0 ,; 1/2^3/4π^1/4√(ξ) for ξ→∞ . ]. The knowledge of the statistical properties of J_t allows one to deduce those of the position of a tracer X_t <cit.>. For instance, we recover the known variance <cit.>, lim_t →∞1/√(t)X_t^2 = 2(1-ρ)/ρ√(π) , and obtain the covariance lim_t →∞1/√(t)Q_t X_t = 2(1-ρ)/√(π) , from which we deduce that these quantities are fully correlated, since lim_t →∞Q_t X_t/√(Q_t^2X_t^2) = 1 . This is due to the fact that Q_t = ρ X_t at leading order in time, which results from lim_t →∞1/√(t) (Q_t - ρ X_t)^2 = 0 . However, the two observables Q_t and X_t are not simply proportional, even in the long time limit, since for instance lim_t →∞1/√(t)ln^λ (Q_t - ρ X_t)^4 = 1/4√(π)ρ (1-ρ)^3 λ^4 + 𝒪(λ^5) . This means that the variance of Q_t - ρ X_t is nonzero, since the higher order cumulants do not vanish. However, the expression of this variance is out of reach of our approach, since we can only describe the leading √(t) behaviour with the MFT. Note that (<ref>) is not symmetric in ρ and 1-ρ, since following a tracer (which is a particle) breaks the particle-hole symmetry of the SEP. In particular, Eq. (<ref>) vanishes faster when ρ→ 1 compared to ρ→ 0. In the dense limit, both ln^λ Q_t and ln^χ X_t are of order 1-ρ, while (<ref>) vanishes as (1-ρ)^3. This shows that Q_t = X_t in this limit, as can be seen from the fact that they have identical cumulants in that case <cit.>. On the other hand, in the dilute limit, X_t = 𝒪(1/ρ) and Q_t = 𝒪(1), with ln^λ Q_t = 𝒪(ρ). This time (<ref>) does not vanish faster than the cumulants of the current, and thus Q_t ≠ρ X_t, as illustrated by the fact that their cumulants differ <cit.> in that case. In addition to the joint statistical properties of Q_t and J_t, we also obtain the profiles Φ and Φ̅, which quantify the correlation between the density of particles and the observables, at final and initial times. At lowest orders in λ and ν, we recover the profiles obtained previously <cit.> η_r Q_t_c t →∞≃sign(x) ρ(1-ρ)/2( x/2) , η_r J_t_c t →∞≃ρ(1-ρ)/2{[ ( x/2) for x > ξ; -( -x/2) for x < ξ ]. , with x = r/√(t). We additionally obtain the joint profiles, such as η_r Q_t J_t_c t →∞≃ρ(1-ρ)(1-2ρ)/2{[ ( x/2) for x > ξ; 0 for 0 < x < ξ; ( -x/2) for x < 0 ]. . To understand the meaning of this expression, we can rewrite it as η_r Q_t J_t_c = Cov( η_r , cov(Q_t,J_t) ) , where cov(Q_t,J_t) = (Q_t - Q_t) (J_t - J_t) is the empirical covariance. This means that this profile measures the covariance between the density of particles on one hand, and the correlations between Q_t and J_t on the other hand. From (<ref>), we see that it is positive for ρ < 1/2 and negative for ρ > 1/2. 
This means that adding particles when ρ < 1/2 increases the correlation between Q_t and J_t, while it decreases it when ρ > 1/2. This behavior is expected, since the maximal currents are reached for ρ = 1/2. In addition, (<ref>) is extremal near x=0^- and x = ξ^+. In other words, a change of the number of particles in these sectors affects strongly the correlation between Q_t and J_t. This is due to the fact that the particles in these regions are more likely than distant particles to cross the two "walls" x=0 and x=ξ and thus affect both Q_t and J_t. Conversely, the profile (<ref>) vanishes between 0 and ξ, indicating that adding more particles in that region does not affect the correlation between Q_t and J_t. This is also expected, since these particles can only cross one "wall"[Particles can of course cross several times both walls, but an odd number of times only one of them, resulting in this net effect.] either at x=0 or x=ξ, and can thus affect only one of these observables. Beyond the perturbative expansion in λ and ν, we can also plot the profiles for finite values of the these parameters by solving numerically the integral equations (<ref>,<ref>). These profiles have the interpretation of mean density profiles under the condition that Q_t and J_t take given values <cit.>. They are represented in Fig. <ref>. For instance, the plot in Fig. <ref> (left) corresponds to having currents Q_t and J_t larger than their mean values. This is why there is an accumulation of particles to the right of the two "walls" x=0 and x=ξ, and a depletion to the left. Conversely, for the corresponding initial profile Φ̅ (Fig. <ref>, right), there is an increase of particles to the left of the origin, and a depletion to the right, so that with the diffusive time evolution, these particles will cross the origin, and ξ, to contribute to a larger value of Q_t and J_t. §.§ Discussion Integral equations. — The Wiener-Hopf integral equation obtained previously in the case of a single observable (Q_t, J_t or X_t) <cit.> can be recovered from (<ref>) and (<ref>) by setting either λ = 0 or ν = 0. This corresponds to α=0 or β = 0 respectively in Eqs (<ref>,<ref>), so that the two profiles at initial and final time satisfy the same equation, as noted previously <cit.>. Note that if we set ξ = 0, we recover the equations of <cit.>. This is expected since then, J_t = Q_t and thus Φ and Φ̅ involve a single observable. In the case α≠ 0 and β≠ 0, the equation for the profile at final time (<ref>) is more complicated than the one obtained in the case of a single observable <cit.>, as it now involves a double convolution. On the other hand, the equation for the initial profile (<ref>) keeps a simpler Wiener-Hopf structure, but with a more complicated kernel which involves the solution at final time (<ref>). These integral equations extend the ones discovered in <cit.> in the case of a single observable (current Q_t or J_t, tracer position X_t). This further emphasises the key role of such strikingly simple integral equations involving partial convolutions in interacting particle systems. Boundary conditions. — We stress that the boundary equations (<ref>,<ref>) hold for any single-file system, and not only for the SEP. Furthermore, these equations take a simple physical form in terms of the chemical potential μ, which can be written as μ(ρ) = ∫^ρ2 D(r)/σ(r) r , in terms of the diffusion coefficient D and the mobility σ, which describe the system at large scale <cit.>. 
Explicitly,  (<ref>) states that the chemical potential in the system is discontinuous at x=0 and x=ξ, with a discontinuity given by the parameters λ and ν of the joint generating function (<ref>). From a physical point of view, this can be understood as follows. The parameters λ and ν play the role of conjugate variables (in the sense of thermodynamics) to the integrated currents Q_t and J_t, which count particles. The conjugate quantity to the particle number being the chemical potential, it is expected that λ and ν are related to the chemical potential. Finally, λ and ν have an effect on the density of particles. For instance, when λ > 0, the exponentials in the definitions of the profiles (<ref>) give more weight to the realisations in which Q_t > 0, and thus we expect an increase of the number of particles to the right of the origin. This results from a higher chemical potential to the right of the origin compared to the left, as described by (<ref>). Remarkably, the equations (<ref>,<ref>) obtained for the case of the currents Q_t and J_t can be extended to other observables. For instance, it has been recently shown that the current Q_t in a single-file model can be mapped onto the position of a tracer in a dual single-file model <cit.>. Under this mapping, the relations (<ref>,<ref>) become, for the new system (see Appendix <ref>) P(Φ(0^+)) - P(Φ(0^-)) = λ , . ∂_x μ(Φ) |_0^+ = . ∂_x μ(Φ) |_0^- , where P is the pressure[The relation involving the pressure Eq.(<ref>), left, has been first guessed by Alexis Poncet during private exchanges prior to this work.] P(ρ) = ∫^ρ2 r D(r)/σ(r) r , and Φ is now the long time limit of the correlation between the position X_t of the tracer and the density in the reference frame of the tracer, Φ(x = r/√(t)) t →∞≃ρ(r+X_t,t) ^λ X_t/^λ X_t. Finally, in the case of the SEP, these relations (<ref>,<ref>) and (<ref>) reduce to the ones obtained from microscopic considerations in <cit.>. The precise relation with the corresponding microscopic equations is given in Appendix <ref>. § HYDRODYNAMIC DESCRIPTION USING MACROSCOPIC FLUCTUATION THEORY The Macroscopic Fluctuation Theory (MFT) gives an effective description at large scales of a diffusive system <cit.>. Introducing a scaling factor T, which corresponds to the large observation time, the density of particles is defined as ρ_T(x,t) = 1/√(T)∑_i η_i(t T) δ( x - i/√(T)) . In the limit T →∞, this density converges to a continuous stochastic function ρ(x,t). The probability of observing an initial profile ρ(x,0) evolves to another profile ρ(x,1) at time t=1 (corresponding to the large time T in the SEP) takes a large deviation form <cit.> ℙ[ρ(x,0) →ρ(x,t)] ≃∫[ρ(x,t)] [H(x,t)] ^- √(T)𝒮[ρ,H] , where H is a conjugate field, and S is the MFT action 𝒮[ρ,H] = ∫_-∞^∞ x ∫_0^1 t [ H ∂_t ρ + D(ρ) ∂_x ρ∂_x H - σ(ρ)/2(∂_x H)^2 ] . For the SEP, D(ρ) = 1 and σ(ρ) = 2ρ(1-ρ). This result was first proved for the SEP <cit.>, and was later extended to arbitrary single-file system, which can be described by other transport coefficients D(ρ) and σ(ρ) <cit.>. See for instance <cit.> for a list of models, and their corresponding coefficients. Initially, the system is described by the random density ρ(x,0), which fluctuates around a given density ρ_0(x), of distribution<cit.> ℙ[ρ(x,0)] ≃^- √(T)ℱ[ρ(x,0)] , ℱ[ρ(x,0)] = ∫ x ∫_ρ_0(x)^ρ(x,0) z 2 D(z)/σ(z)( ρ(x,0)-z) . Microscopically, it corresponds to picking independently, for each site i of the SEP, an occupation number η_i(0) = 1 with probability ρ_0(i/√(T)). 
In the continuous limit, this becomes (<ref>). The two currents Q_t (<ref>) and J_t (<ref>) can be expressed in terms of the density ρ(x,t) (<ref>) as Q_T/√(T) ≡𝒬[ρ] = ∫_0^∞ [ ρ(x,1) - ρ(x,0)] x , J_T/√(T) ≡𝒥[ρ] = - ρ_+ ξ + ∫_-∞^∞ [ (ρ(x,1)-ρ_+) Θ(x-ξ) - (ρ(x,0)-ρ_+) Θ(x)] x . Within this formalism, the joint moment generating function of Q_T and J_T reads ^λ Q_T + ν J_T = ∫[ρ(x,t)] [H(x,t)] ∫[ρ(x,0)] ^- √(T)( 𝒮[ρ,H] + ℱ[ρ(x,0)] - λ𝒬[ρ] - ν𝒥[ρ] ) . In the long time limit T →∞, these integrals can be evaluated by a saddle point method, which yields ψ̂_ξ(λ,ν) = lim_T →∞1/√(T)ln^λ Q_t + ν J_t = λ𝒬[q] + ν𝒥[q] - 𝒮[q,p] - ℱ[q(x,0)] , where we have denoted (q,p) the saddle point of (ρ,H). It can be determined by minimising the terms in the exponential, which yields the MFT equations ∂_t q = ∂_x[D(q) ∂_x q] - ∂_x[σ(q)∂_x p] , ∂_t p = - D(q) ∂_x^2 p - 1/2σ'(q) (∂_x p)^2 , completed by the boundary conditions p(x,1) = λΘ(x) + νΘ(x-ξ) , p(x,0) = (λ + ν) Θ(x) + ∫_ρ_0(x)^q(x,0) r 2 D(r)/σ(r) . The expression of the cumulant generating function can be simplified by taking a derivative with respect to λ or ν, and using that the saddle point solution (q,p) is the minimum of the action: ∂_λψ̂_ξ = 𝒬[q] , ∂_νψ̂_ξ = 𝒥[q] . These are standard relations in the context of large deviations <cit.>. The profiles w_r (<ref>) can be obtained from the same procedure, for instance, w_r(λ,ν,T) ≃∫[ρ(x,t)] [H(x,t)] ∫[ρ(x,0)] ρ(r/√(T),1 ) ^- √(T)( 𝒮[ρ,H] + ℱ[ρ(x,0)] - λ𝒬[ρ] - ν𝒥[ρ] )/∫[ρ(x,t)] [H(x,t)] ∫[ρ(x,0)] ^- √(T)( 𝒮[ρ,H] + ℱ[ρ(x,0)] - λ𝒬[ρ] - ν𝒥[ρ] ) . Performing again the saddle point estimate, we obtain, w_r(λ,ν,T) T →∞≃ q ( x = r/√(T),1 ) ≡Φ(x) . Similarly, the correlation with the initial occupations _r (<ref>) reads _r(λ,ν,T) T →∞≃ q ( x = r/√(T),0) ≡Φ̅(x) . The MFT profile q at initial and final time actually coincides with the correlations w_r and _r in the long time limit, as shown in <cit.>. Furthermore, the joint cumulant generating function ψ̂ is fully determined by the knowledge of the profile q, thanks to (<ref>). Our goal is thus to determine these profiles. § THE EXAMPLE OF THE LOW DENSITY LIMIT The MFT equations for the SEP (<ref>,<ref>) being rather complicated, we first focus on the simpler case of the low density limit. In this limit, the SEP becomes equivalent to a model of reflecting Brownian particles on the real line <cit.>. The MFT equations reduce to ∂_t q = ∂_x[∂_x q] - ∂_x[2q∂_x p] , ∂_t p = - ∂_x^2 p - (∂_x p)^2 . These equations can be reduced to diffusion equations by the Cole-Hopf transform P = ^p and Q = q ^-p <cit.>, so that ∂_t Q = ∂_x^2 Q , ∂_t P = - ∂_x^2 P . The initial and final conditions (<ref>) become P(x,1) = ^λΘ(x) + νΘ(x-ξ) , Q(x,0) = ρ_0(x) ^-(λ+ν) Θ(x) . We straightforwardly obtain the solution q = QP, and thus the profiles both at initial and final times, Φ(x) = q(x,1) = ^λΘ(x) + νΘ(x-ξ)∫_-∞^∞ z ρ_0(z) ^-(λ+ν) Θ(z)^-(x-z)^2/4/√(4 π) , Φ̅(x) = q(x,0) = ρ_0(x) ^-(λ+ν) Θ(x)[ ^λ+ν -^λ-1/2( x/2) - ^λ^ν-1/2( x-ξ/2) ] , where we have assumed that ξ > 0 for Φ̅. The expression is similar in the case ξ < 0. The cumulant generating function can be obtained from the relations (<ref>), which gives ∂_λψ̂_ξ = ∫_-∞^∞ x ρ_0(x) ^-(λ+ν) Θ(x)∫_-∞^∞ z ^λΘ(z) + νΘ(z-ξ)^-(x-z)^2/4/√(4π)[ Θ(z) - Θ(x) ] , ∂_νψ̂_ξ = ∫_-∞^∞ x ρ_0(x) ^-(λ+ν) Θ(x)∫_-∞^∞ z ^λΘ(z) + νΘ(z-ξ)^-(x-z)^2/4/√(4π)[ Θ(z-ξ) - Θ(x) ] . Integrating with the initial value ψ̂(0,0) = 0, we get ψ̂_ξ(λ,ν) = ∫_-∞^∞ x ρ_0(x) ∫_-∞^∞ z [ ^λΘ(z) + νΘ(z-ξ)-(λ+ν) Θ(x) - 1 ] ^-(x-z)^2/4/√(4π) . 
Note that this expression is compatible with the very recent study <cit.>. Finally, the density profiles of the SEP assume simple explicit forms in the low density limit. § DERIVATION OF THE MAIN EQUATIONS We now address the case of arbitrary density of the SEP. To obtain the equations satisfied by the initial and final profiles Φ̅ and Φ, we will rely on the inverse scattering approach which has recently been applied to solve systems of equations related to (<ref>,<ref>), in the context of the KPZ equation or MFT <cit.>. As we will see below, this formalism is powerful to obtain the bulk equations for Φ and Φ̅, but introduces unknown constants which can be tricky to determine. Here, we will obtain these constants by making use of boundary conditions which are deduced from the MFT equations (<ref>-<ref>). §.§ Boundary conditions We first derive the boundary conditions (<ref>-<ref>), which are direct consequence of the MFT equations (<ref>-<ref>). These equations will take a simple form, in terms of physical quantities. The equation satisfied by p (<ref>) is an antidiffusion, with no singularity in the r.h.s. for t<1, except a discontinuity for q at t=0. Therefore, the solution p(x,0) is a smooth function of x, and in particular at x=0. The boundary conditions are thus straightforwardly deduced from the initial condition (<ref>), which takes the from, p(x,0) = (λ+ν) Θ(x) + μ(q(x,0)) - μ(ρ_0(x)) , where we have introduced the chemical potential μ(ρ), defined by μ'(ρ) = 2 D(ρ)/σ(ρ) . Evaluating (<ref>) at x=0^+ and x=0^-, and taking the difference and using the continuity of p(x,0) at x=0, we get the first boundary condition for q(x,0) ≡Φ̅(x) μ(Φ̅(0^+)) - μ(Φ̅(0^-)) = -(λ+ν) + μ(ρ_0(0^+)) - μ(ρ_0(0^-)) . Similarly, writing the continuity of the first derivative of p(x,0) at x=0, we deduce from (<ref>) . ∂_x μ(Φ̅) |_0^+ - . ∂_x μ(Φ̅) |_0^- = . ∂_x μ(ρ_0) |_0^+ - . ∂_x μ(ρ_0) |_0^- . For ρ_0(x) = ρ_+ Θ(x) + ρ_- Θ(-x), these relations become (<ref>). To obtain the conditions at final time, we rely on a time-reversal mapping, which extends the time-reversal symmetry discussed in <cit.> for the case of the current Q_t. In that work, the MFT action was found to be invariant under time reversal symmetry ρ(x,t) →ρ(x,1-t) and j(x,t) → - j(x,1-t), with a density ρ and a current j satisfying the conservation relation ∂_t ρ + ∂_x j = 0. At the saddle point of the MFT action, the density becomes q and the current becomes j = -D(q) ∂_x q + σ(q) ∂_x p <cit.>. The time reversal symmetry then becomes q(x,t) → q(x,1-t), j(x,t) = . [-D(q) ∂_x q + σ(q) ∂_x p] |_(x,t)→. -[-D(q) ∂_x q + σ(q) ∂_x p] |_(x,1-t). Here, we do not have the time-reversal symmetry, because at final time the currents are measured at positions 0 and ξ, which are different from the initial position 0. Nevertheless, we define two new fields q̂ and p̂ by q(x,t) = q̂(x,1-t) , ∂_x p(x,t) = - ∂_x p̂(x,1-t) + . 2 D(q̂)/σ(q̂)∂_x q̂|_(x,1-t) . Integrating the second relation gives p(x,t) = - p̂(x,1-t) + μ(q̂(x,1-t)) + c , with c a constant. Inserting these relations into the MFT equations (<ref>,<ref>), we find that q̂ and p̂ obey the same equations: ∂_t q̂ = ∂_x[D(q̂) ∂_x q̂] - ∂_x[σ(q̂)∂_x p̂] , ∂_t p̂ = - D(q̂) ∂_x^2 p̂ - 1/2σ'(q̂) (∂_x p̂)^2 . This was already noticed in <cit.>. The initial and final conditions (<ref>) become p̂(x,0) = μ(q̂(x,0)) + c - λΘ(x) - νΘ(x-ξ) , p̂(x,1) = -(λ+ν) Θ(x) + μ(ρ_0(x)) - c . These conditions are different from the original ones (<ref>), and they are the source of the breaking of time-reversal symmetry. 
For the study of the current Q_t only, ν = 0 and so, by choosing c = μ(ρ_-), the new initial and final conditions are identical to the original ones, upon changing λ→μ(ρ_+) - μ(ρ_-) - λ. This is indeed the relation found in <cit.>. Here, we do not have such a relation, but we can still use a similar argument as we used above at t=0. The conjugate field p̂ obeys an antidiffusion equation, which is not singular for t<1. Therefore p̂(x,0) is smooth. From (<ref>) left, this straightforwardly yields the conditions for q̂(x,0) = q(x,1) ≡Φ(x), μ(Φ(0^+)) - μ(Φ(0^-)) = λ , μ(Φ(ξ^+)) - μ(Φ(ξ^-)) = ν , and from the derivative, . ∂_x μ(Φ) |_0^+ = . ∂_x μ(Φ̅) |_0^- , . ∂_x μ(Φ) |_ξ^+ = . ∂_x μ(Φ̅) |_ξ^- . These are the relations (<ref>,<ref>) announced above. Note that these equations for Φ hold for any initial density profile ρ_0. Important remark: We have derived the boundary conditions for Φ in the case of an annealed initial condition. One could also consider a quenched initial condition, which corresponds to q(x,0) = ρ_0(x). In this case, the mapping (<ref>) can still be performed. One obtains the same MFT equations (<ref>,<ref>), but with the initial and final conditions p̂(x,0) = μ(q̂(x,0)) + c - λΘ(x) - νΘ(x-ξ) , p̂(x,1) = -p(x,0) + μ(ρ_0(x)) - c . The second relation involves the unknown function p(x,0), which is smooth, but the first relation is identical to the annealed case. The same argument as above applies, and the boundary conditions (<ref>,<ref>) still hold in the quenched case. §.§ Bulk equations §.§.§ Mapping to the AKNS equations We adapt the inverse scattering approach that was applied to the case of the integrated current Q_t in the SEP in <cit.> to the case of the joint distribution of Q_t and J_t. The first step is to introduce the new functions <cit.> u = 1/(1-2q)∂_x [ q(1-q) ^-∫_-∞^x (1-2q) ∂_x p] , v = - 1/1-2q∂_x ^∫_-∞^x (1-2q) ∂_x p . Under this transformation, the MFT equations for the SEP (<ref>,<ref>) become the AKNS equations <cit.> ∂_t u = ∂_x^2 u - 2 u^2 v , ∂_t v = - ∂_x^2 v + 2 u v^2 . These equations are integrable and can be solved using the inverse scattering transform <cit.>. Before entering the resolution in more details, let us study the initial and final conditions for u and v. From the conditions on p and q (<ref>) and the transformation (<ref>), we obtain u(x,0) = . [ ∂_x q - q (1-q) ∂_x p ] ^-∫_-∞^x (1-2q) ∂_x p|_t=0 = . q(1-q) [ (λ+ν) δ(x) - ∂_x ρ_0/ρ_0(1-ρ_0)] ^-∫_-∞^x (1-2q) ∂_x p|_t=0 . In the case of a step initial density ρ_0(x) = ρ_+ Θ(x) + ρ_- Θ(-x), the term ∂_x ρ_0 gives another δ(x) term, but with an unknown prefactor, because ρ_0 is discontinuous at 0. Even in the case of a constant density ρ_+ = ρ_-, the prefactor of the remaining δ function is unknown, because q is discontinuous at x=0. Therefore, we can only write u(x,0) = c_0 δ(x) , with an unknown constant c_0. Similarly, for v(x,t=1) at final time, we get v(x,1) = . - ∂_x p ^∫_-∞^x (1-2q) ∂_x p|_t=1 = c_1 δ(x) + c_2 δ(x-ξ) , with two other unknown constants c_1 and c_2, coming from the fact that the term in the exponential is not well defined since p and q are discontinuous at x=0 and x = ξ. Actually, we can get rid of one of these unknown constants by using the invariance of the AKNS equations (<ref>) under the transformation u(x,t) → u(x,t)/K and v(x,t) → K v(x,t). Choosing K = c_0, we have the initial and final conditions u(x,0) = δ(x) , v(x,1) = α δ(x) + β δ(x - ξ) , with α = c_1 c_0 and β = c_2 c_0. We will see below how we can determine the constants α and β. 
The simplicity of these conditions will allow for an explicit solution of the AKNS equations at initial and final times. Furthermore, this solution will yield the desired equations for the profiles, since ∂_x p(x,1) is a sum of δ functions as seen from (<ref>), u(x,1) = . 1/K[ ∂_x q - q (1-q) ∂_x p ] ^-∫_-∞^x (1-2q) ∂_x p|_t=1∝∂_x q(x,1) , with a different proportionality constant for each domain x<0, 0 < x < ξ and x > ξ because p(x,1) is discontinuous at both x=0 and x=ξ, hence the value of the exponential differs in each interval (by an unknown factor, since q(x,1) is also discontinuous at these points). Similarly, v(x,0) = . - K ∂_x p ^∫_-∞^x (1-2q) ∂_x p|_t=0 . We can simplify this equation in the following way. We take the derivative of the boundary condition (<ref>) at t=0, which gives, ∂_x p(x,0) = ∂_x q(x,0)/q(1-q) + c_3 δ(x) ⇒ . ∫_-∞^x (1-2q) ∂_x p |_t=0 = lnq(1-q)/ρ_-(1-ρ_-) + c_4 Θ(x) , with new constants c_3 and c_4 (which we will not need to determine). Indeed, combining with (<ref>), we get v(x,0) ∝∂_x q(x,0) , with two different proportionality constants for x<0 and x>0. To summarize, the solutions u(x,t) and v(x,t) of the AKNS equations are directly related to the derivative of the profiles at initial (<ref>) and final times (<ref>), Ω(x) ≡ u(x,1) = {[ a_- Φ'(x) for x < 0; a_0 Φ'(x) for 0<x<ξ; a_+ Φ'(x) for x > ξ ]. , Ω̅(x) ≡ v(x,0) = {[ b_- Φ̅'(x) for x < 0; b_+ Φ̅'(x) for x > 0 ]. , with constants a_-, a_0, a_+, b_- and b_+ that will be determined by the boundary conditions derived in Section <ref>. The other relations are given by (<ref>), with the constants α and β that remain to be determined. §.§.§ Solution using the scattering technique Our goal is now to obtain integral equations verified by Ω and Ω̅. To solve the AKNS equations we rely on the standard approach <cit.>, recently used in <cit.>, and introduce the auxiliary linear problem for the two-component vector Ψ, ∂_x Ψ = U Ψ , ∂_t Ψ = V Ψ , U = [ - k v; u k ] , V = [ 2k^2 + u v 2 k v - ∂_x v; 2 k u + ∂_x u -2k^2 - u v ] . The compatibility condition between the first two equations, ∂_x ∂_t Ψ = ∂_t ∂_x Ψ is equivalent to the AKNS equations (<ref>). The idea is therefore to solve the simpler linear problem (<ref>), and deduce the solution for u(x,t) and v(x,t). Since ∂_x q → 0 and ∂_x p → 0 for x →±∞, u(x,t) and v(x,t) decay to 0 at ±∞. The matrix U then becomes diagonal at ±∞, and therefore Ψ is a superposition of plane waves in this limit. We introduce two independent solutions ϕ and ϕ̅, defined by their behaviour at -∞, ϕ(x,t) x → -∞≃^2k^2 t[ ^- k x; 0 ] , ϕ̅(x,t) x → -∞≃^-2k^2 t[ 0; -^ k x ] , where we have placed the factors ^± 2 k^2 t so that ϕ and ϕ̅ satisfy the time evolution equation (<ref>) at -∞. For x → +∞, we can write the solution as the superposition of the same two plane waves, ϕ(x,t) x → +∞≃[ a(k,t) ^- k x; b(k,t) ^ k x ] , ϕ̅(x,t) x → +∞≃[ b̅(k,t) ^- k x; - a̅(k,t)^ k x ] . This defines a scattering problem, in which plane waves at - ∞ are scattered by the potentials u(x,t) and v(x,t) into a superposition of plane waves at + ∞. The coefficients a, a̅, b, b̅ are called the scattering amplitudes. All the information on the functions u(x,t) and v(x,t) are encoded in the scattering amplitudes, so that u(x,t) and v(x,t) can be reconstructed from a, a̅, b, b̅. This is called the inverse scattering procedure, and is quite complicated to do in practice. 
Here, we will follow a different route, used in <cit.>: we will determine the scattering amplitudes at initial and final times in terms of the functions u(x,1) = Ω(x) and v(x,0) = Ω̅(x), and relate them using the time evolution (<ref>) to obtain integral equations satisfied by these functions. Indeed, one strength of the scattering approach is that it transforms the complicated time evolution of the AKNS equations (<ref>) into a very simple time dependence of the scattering amplitudes. Their time evolution can be computed using the matrix V(+∞) = Diag(2k^2,-2k^2) in (<ref>) at +∞, which directly gives ∂_t a(k,t) = 2 k^2 a(k,t) , ∂_t b(k,t) = -2k^2 b(k,t) , ∂_t a̅(k,t) = -2 k^2 a̅(k,t) , ∂_t b̅(k,t) = 2k^2 b̅(k,t) , and thus a(k,t) = ^2 k^2 t a(k,0) , b(k,t) = ^-2k^2 t b(k,0) , a̅(k,t) = ^-2 k^2 ta̅(k,0) , b̅(k,t) = ^2k^2 tb̅(k,0) . There only remains to determine the scattering amplitudes at t=0 and t=1. For this, we solve the spatial equation involving the matrix U (<ref>). Let us first write this equation at t=0. For the second component of ϕ = (ϕ_1 ϕ_2)^T, we get ∂_x[^- k xϕ_2(x,0)] = δ(x) ϕ_1(x,0) , and the same equation holds for ϕ̅_2. Integrating this equation with the boundary conditions at -∞ (<ref>), we obtain ^- k xϕ_2(x,0) = Θ(x) ϕ_1(0,0) , ^- k xϕ̅_2(x,0) = -1 + Θ(x) ϕ̅_1(0,0) . Using these expressions in the equations for the first components ϕ_1 and ϕ̅_1 (<ref>) yields ∂_x[^ k xϕ_1(x,0)] = Ω̅(x) ^2 k xΘ(x) ϕ_1(0,0) , ∂_x[^ k xϕ̅_1(x,0)] = Ω̅(x) ^2 k x [ -1 + Θ(x) ϕ̅_1(0,0) ] . Integrating with the boundary conditions (<ref>), we obtain ^ k xϕ_1(x,0) = 1 + Θ(x) ϕ_1(0,0) ∫_0^x Ω̅(x') ^2 k x' x' , ^ k xϕ̅_1(x,0) = - ∫_-∞^x Ω̅(x') ^2 k x' x' + Θ(x) ϕ̅_1(0,0) ∫_0^x Ω̅(x') ^2 k x' x' . From these expressions, we deduce the expressions at x=0, ϕ_1(0,0) = 1 , ϕ̅_1(0,0) = - ∫_-∞^0 Ω̅(x') ^2 k x' x' . Combining the results (<ref>,<ref>,<ref>) with the asymptotic behaviour (<ref>), we deduce the scattering amplitudes b(k,0) = 1 , b̅(k,0) = - ∫_-∞^∞Ω̅(x) ^2 k x x - ( ∫_-∞^0 Ω̅(x) ^2 k x x ) ( ∫_0^∞Ω̅(x') ^2 k x' x' ) . The amplitudes a(k,0) and a̅(k,0) can also be determined, but we will not need them in the following, so we do not write their expressions explicitly. We can proceed similarly at final time t=1, this time starting with the equations for the first components, as it is the one that involves the δ functions, ∂_x[ ^ k xϕ_1(x,1) ] = (αδ(x) + β^2 k ξδ(x-ξ)) ^- k xϕ_2(x,1) , with again the same equation for ϕ̅_2. Integrating with the asymptotic at -∞ (<ref>) yields ^ k xϕ_1(x,1) = ^2k^2 + αϕ_2(0,1) Θ(x) + β^ k ξϕ_2(ξ,1) Θ(x-ξ) , ^ k xϕ̅_1(x,1) = αϕ̅_2(0,1) Θ(x) + β^ k ξϕ̅_2(ξ,1) Θ(x-ξ) . Inserting these expressions into the equations for ϕ_2 and ϕ̅_2, we get ∂_x[ ^- k xϕ_2(x,1) ] = Ω(x) ^-2 k x[^2k^2 + αϕ_2(0,1) Θ(x) + β^ k ξϕ_2(ξ,1) Θ(x-ξ) ] , ∂_x[ ^- k xϕ̅_2(x,1) ] = Ω(x) ^-2 k x[ αϕ̅_2(0,1) Θ(x) + β^ k ξϕ̅_2(ξ,1) Θ(x-ξ) ] . Integrating with the asymptotic behaviour (<ref>), we obtain ^- k xϕ_2(x,1) = ^2k^2∫_-∞^x Ω(x') ^-2 k x' x' + αΘ(x) ϕ_2(0,1) ∫_0^x Ω(x') ^-2 k x' x' + βΘ(x-ξ) ϕ_2(ξ,1) ^ k ξ∫_ξ^x ^-2 k x'Ω(x') x' , ^- k xϕ̅_2(x,1) =-^-2k^2 + αΘ(x) ϕ̅_2(0,1) ∫_0^x Ω(x') ^-2 k x' x' + βΘ(x-ξ) ϕ̅_2(ξ,1) ^ k ξ∫_ξ^x ^-2 k x'Ω(x') x' . We therefore deduce ϕ_2(0,1) = ^2k^2∫_-∞^0 Ω(x') ^-2 k x' x' , ϕ_2(ξ,1) ^- k ξ = ^2k^2∫_-∞^ξΩ(x') ^-2 k x' x' + ^2k^2α( ∫_-∞^0 Ω(x') ^-2 k x' x' ) ( ∫_0^ξΩ(x') ^-2 k x' x' ) , ϕ̅_2(0,1) = -^-2k^2 , ϕ̅_2(ξ,1) ^- k ξ = -^-2k^2( 1 + α∫_0^ξΩ(x') ^-2 k x' x' ) . 
From the solutions (<ref>,<ref>), we can read the asymptotic behaviours (<ref>) and deduce the scattering amplitudes b(k,1) ^-2k^2 = ∫_-∞^+∞Ω(x') ^-2 k x' x' + α( ∫_-∞^0Ω(x') ^-2 k x' x' ) ( ∫_0^+∞Ω(x') ^-2 k x' x' ) + β^2 k ξ( ∫_-∞^ξΩ(x') ^-2 k x' x' ) ( ∫_ξ^+∞Ω(x') ^-2 k x' x' ) + αβ^2 k ξ( ∫_-∞^0Ω(x') ^-2 k x' x' ) ( ∫_0^ξΩ(x') ^-2 k x' x' ) ( ∫_ξ^+∞Ω(x') ^-2 k x' x' ) , b̅(k,1) ^2 k^2 = -α - β^2 k ξ( 1 + α∫_0^ξΩ(x') ^-2 k x' x' ) . Again, a and a̅ can be obtained similarly but we will not need them here. The last step is to relate the scattering amplitudes b and b̅ at initial time (<ref>) with the ones at final time t=1 (<ref>,<ref>) by using the time evolution (<ref>). This gives the following equations for Ω and Ω̅: ∫_-∞^+∞Ω(x') ^-2 k x' x' + α( ∫_-∞^0Ω(x') ^-2 k x' x' ) ( ∫_0^+∞Ω(x') ^-2 k x' x' ) + αβ^2 k ξ( ∫_-∞^0Ω(x') ^-2 k x' x' ) ( ∫_0^ξΩ(x') ^-2 k x' x' ) ( ∫_ξ^+∞Ω(x') ^-2 k x' x' ) + β^2 k ξ( ∫_-∞^ξΩ(x') ^-2 k x' x' ) ( ∫_ξ^+∞Ω(x') ^-2 k x' x' ) = ^-4k^2 , ∫_-∞^∞Ω̅(x) ^2 k x x + ( ∫_-∞^0 Ω̅(x) ^2 k x x ) ( ∫_0^∞Ω̅(x') ^2 k x' x' ) = ^-4k^2(α + β^2 k ξ + αβ^2 k ξ∫_0^ξΩ(x') ^-2 k x' x' ) . We can obtain equations in real space by taking the inverse Fourier transform. More precisely, multiplying (<ref>) by ^2 k x/π and integrating over k yields the equation for Ω (<ref>). Similarly, multiplying (<ref>) by ^-2 k x/π and integrating over k, we obtain the equation for Ω̅ (<ref>). This concludes our derivation of the integral equations (<ref>,<ref>). There only remains to determine the constants α and β. §.§ Determination of the remaining constants We now turn to the determination of the last constants α and β which appear in the integral equations (<ref>,<ref>). A first equation can be obtained from the conservation of the number of particles in the SEP between initial and final time, i.e., ∫_-∞^∞ [ Φ(x) - Φ̅(x)] x = 0 . The second equation can be determined by following the approach used in <cit.>, which relies on the scattering formalism. The scattering amplitudes defined in (<ref>) can be equivalently defined by regrouping the two vectors ϕ and ϕ̅ in a single matrix, so that [ a(k,t) b̅(k,t); b(k,t) - a̅(k,t) ] = lim_x →∞lim_y → -∞ M(x,t;k)M(y,t;k)^-1 [ ^2k^2 t 0; 0 - ^-2 k^2 t ] , where the matrix M(x;k,t) satisfies ∂_x M(x,t;k) = U M(x,t;k) , ∂_t M(x,t;k) = V M(x,t;k) , with the matrices U and V given in (<ref>). Remarkably, the spatial equation for M can be explicitly solved when k=0 by using the specific form of the functions u and v in terms of p and q (<ref>). The solution is given in the Supplemental Material of Ref. <cit.>, and reads M(x,t;k) = [ √(K) 0; 0 1/√(K) ][ ^∫_-∞^x (1-q)∂_x p ^-∫_-∞^x q∂_x p; -(1-ρ) ^∫_-∞^x q ∂_x p ρ ^-∫_-∞^x (1-q)∂_x p ][ 1/√(K) 0; 0 √(K) ] , where K is the constant we introduced above Eq. (<ref>) to get rid of the constant in front of the δ function in the initial condition for u. Using the asymptotic behaviours q(x,t) t →±∞⟶ρ_±, p(x,t) t → -∞⟶ 0 and p(x,t) t → +∞⟶λ+ν, we get M(+∞) = [ C ^λ+μ C; -(1-ρ_+) C^-1 ρ_+ C^-1 ^-λ-μ ] , M(-∞) = [ 1 1; -(1-ρ_-) ρ_+ ] , where we have denoted C = ^-∫_-∞^∞ q ∂_x p. Using these expressions in (<ref>), we obtain a simple expression for the product of the diagonal elements at k=0, - b(0,t) b̅(0,t) = ω≡ρ_+ (^-λ-ν - 1) + ρ_- (^λ+ν - 1) + ρ_+ ρ_- (^λ+ν - 1)(^-λ-ν - 1) , with ω the single parameter identified for the SEP in <cit.>, still with two parameters λ and ν. Using the expressions of b and b̅ at t=0 (<ref>), and the bulk equation for Ω̅ (<ref>), this last equation yields α + β + αβ∫_0^ξΩ(x') x' = ω . 
Note that the same equation can be obtained from the expressions at t=1 (<ref>,<ref>). Comparing with the definition of the kernel K̅ (<ref>), we notice that this equation can also be written in the compact form ∫_-∞^+∞K̅(x) x = ω . With this last derivation, we now have all the equations needed to determine the profiles Φ and Φ̅ and thus deduce the cumulants from (<ref>). § PERTURBATIVE SOLUTION FOR THE FIRST JOINT CUMULANTS We do not have an explicit solution of the equation for Ω (<ref>), so we will rely on a perturbative solution in λ and ν. §.§ For the currents The equations for Ω (<ref>) and Ω̅ (<ref>) only involve the parameters α and β. We thus first write the solutions of these equations perturbatively in α and β, and in a second step express them in terms of λ and ν by using relations (<ref>)-(<ref>). We denote the expansions of Ω and Ω̅ as Ω(x) = ∑_n,m = 0^∞α^n β^m Ω_n,m(x) , Ω̅(x) = ∑_n,m = 0^∞α^n β^m Ω̅_n,m(x) . Inserting these expansions into the integral equations (<ref>,<ref>), we obtain Ω_0,0(x) = K(x) = ^-x^2/4/√(4 π) , Ω_1,0(x) = - ^-x^2/8/4 √(2π)( x/2 √(2)) , Ω_0,1(x) = - ^-(x+ξ)^2/4/4 √(2π)( x-ξ/2 √(2)) , Ω̅_0,0(x) = 0 , Ω̅_1,0(x) = ^-x^2/4/√(4π) , Ω̅_0,1(x) = ^-(x-ξ)^2/4/√(4π) , Ω̅_2,0(x) = - ^-x^2/8/4 √(2π)( x/2√(2)) , Ω̅_0,2(x) = - ^-(x-2ξ)^2/8/4 √(π)( x/2 √(2)) , Ω̅_1,1(x) = - ^-(x-ξ)^2/8/2 √(π)( x+ξ/2 √(2)) . To deduce Φ and Φ̅, we integrate Ω and Ω̅ (<ref>) with respect to x, with the boundary conditions at infinity (<ref>), Φ(x) = {[ ρ_- + 1/a_-∫_-∞^x Ω for x < 0; d_0 + 1/a_0∫_0^x Ω for 0<x<ξ; ρ_+ - 1/a_+∫_x^∞Ω for x > ξ ]. , Φ̅(x) = {[ ρ_- + 1/b_-∫_-∞^x Ω̅ for x < 0; ρ_+ - 1/b_+∫_x^∞Ω̅ for x > 0 ]. . For the above expressions, these integrals can be computed using the tables in <cit.>. We will also need the integral of Φ and Φ̅ in Eqs. (<ref>,<ref>), which correspond to double integrals of Ω. This is not convenient to compute in practice, and it is more practical to use integration by parts ∫_x^∞ y ∫_y^∞ z Ω(z) = -x ∫_x^∞Ω(y) y + ∫_x^∞ y Ω(y) y , which can now be computed using the tables in <cit.>. Next, we expand all the parameters in powers of λ and ν, Z = ∑_n,m ≥ 0 Z_n,mλ^n ν^m , with Z ∈{α, β, a_-, a_0, a_+, b_-, b_+, d_0 }. Inserting these expansions into the boundary conditions at x=0 and x=ξ (<ref>,<ref>) and into the conservation equations (<ref>,<ref>), we obtain the coefficients of these expansions up to order 4 included for α and β and up to order 3 included for b_+, b_-, d_0, 1/a_-, 1/a_0 and 1/a_+ in the case ρ_+ = ρ_- = ρ. This difference of orders come from the fact that α and β begin at order 2 in λ and ν, α = λ (λ+ν) ρ(1-ρ) + ⋯ , β = ν (λ+ν) ρ(1-ρ) + ⋯ , therefore so does Ω̅, while Ω already has non zero terms at order 0. On the other hand, Φ and Φ̅ have terms at first order in λ and ν. This is why the expansions of b_+ and b_- begin at order 1, while those of a_+, a_0 and a_- have terms in 1/λ or 1/ν. Consequently, we obtain the lowest orders of the profiles Φ and Φ̅. For instance, Φ(x > ξ) = ρ +1/2 (λ +ν ) ρ (1-ρ) (x/2) +1/4 (λ +ν )^2 ρ (1-ρ) (1-2 ρ) (x/2) +1/24ρ (1-ρ) (λ +ν )^2 [ 2 (x/√(2)) ( (λ +ν )(1 - 3 ρ (1-ρ)) +3 νρ(1-ρ) (ξ/2) ) . -3 ρ (1-ρ) ( 8 νT( ξ/√(2) ,x/ξ) +8 νT(ξ+x/2,1-2 x/ξ +x) +2 ν(ξ/2) . . . -2 ν(ξ +x/2√(2)) +λ(x/2 √(2))^2) ]+ 𝒪(λ^4,μ^4) . 
From the expressions of Φ and Φ̅, we deduce ψ̂_ξ from (<ref>), which yields ψ̂_ξ(λ,ν) = -νξρ +ρ(1-ρ ) [ 1/2ν (λ+ν) (2 e^-ξ^2/4/√(π) - ξ(ξ/2) ) +λ(λ+ν)/√(π) +ν^2 ξ/2] + νρ (1-ρ ) (1-2 ρ ) [λ (λ +ν )/4(2 e^-ξ^2/4/√(π) - ξ(ξ/2) ) - λ(λ+ν)/2 √(π) - ν^2 ξ/6] + ρ(1-ρ)/24[ ν(λ+ν) (2λ^2 + λν + ν^2) (2 e^-ξ^2/4/√(π) - ξ(ξ/2) ) + 2/√(π)λ (λ+ν) (λ^2 + λν + 2 ν^2 + μ^4 ξ) ] + ρ^2(1-ρ^2)/4[ λν (ν^2 - λ^2) (2 e^-ξ^2/4/√(π) - ξ(ξ/2) ) - λ (λ+ν)(√(2)λ^2 + √(2)λν + 4 ν^2)/√(π). - λν (λ+ν)^2/2(8 e^-ξ^2/8/√(2π) - ξ(ξ/2√(2)) ) (ξ/2√(2)) - ξν^4 . - ν^2 (λ+ν)^2 (2 e^-ξ^2/2/√(2π) - ξ(ξ/√(2)) ) + 2 λν (λ + ν)^2/√(π)(ξ/2) ] + 𝒪(λ^5,ν^5) . One can check that, for ν=0, we recover the first orders[In fact, not only the first orders, but the full cumulant generating function of <cit.> is recovered from (<ref>) which can be solved in this case. Similar comments apply to the specific cases λ = 0 and ξ=0.] of the cumulant generating function of <cit.>, for λ=0 it gives the first orders of the one obtained in <cit.>, and for ξ = 0, since J_t = Q_t, we recover the first orders of the one for Q_t <cit.>, evaluated at λ+ν. Taking derivatives of this expression with respect to λ and ν, we obtain the different joint cumulants Q_t^n J_t^m_c of the two currents, ψ̂_ξ(λ,ν) = ∑_n,m ≥ 0λ^n/n!ν^m/m!Q_t^n J_t^m_c . The first cumulants are written in Section <ref>. §.§ For the current/tracer correlations The distribution of the position of a tracer can be obtained from the distribution of the current J_t <cit.>. The idea is that, since the particles remain in the same order, the number of particles to the right of the tracer is conserved, the tracer is located at the position X_t such that J_t(X_t) = 0. This relation is not quite exact, since there could be several values for which J_t(x) = 0. Actually, X_t corresponds to the smallest of these values. However, in the long time limit, this indeterminacy becomes a subdominant correction, and this relation becomes exact at leading order in t. This implies that ℙ(X_t = x) = ℙ(J_t(x) = 0). We can directly extend this relation to the joint distribution of Q_t and X_t, ℙ(Q_t = q & X_t = x) = ℙ(Q_t = q & J_t(x) = 0) . We have computed the joint cumulant generating function of the two currents, which implies ^λ Q_t + ν J_t(x_t) = ∑_q∑_j ^λ q + ν j ℙ(Q_t = q & J_t(x) = j) t →∞≃^√(t)ψ̂_ξ(λ,ν) . We can take the inverse Laplace transform in ν, which can be evaluated by a saddle point approximation for large t, which gives a mixed distribution/generating function ^λ Q_tδ(J_t(x_t) - j √(t)) t →∞≃^-√(t)φ_ξ(λ,j) , where φ_ξ is given by the Legendre transform φ_ξ(λ,j) = ν^⋆(λ,j) j - ψ̂_ξ(λ, ν^⋆(λ,j)) , . ∂_νψ̂_ξ(λ,ν) |_ν^⋆ = j . Using the relation between J_t and X_t (<ref>), we deduce ^λ Q_tδ(X_t - ξ√(t)) t →∞≃^-√(t)φ_ξ(λ,0) . We can obtain the joint cumulant generating function of the current Q_t and the position X_t by another Legendre transform, lim_t →∞1/√(t)ln^λ Q_t + χ X_t = χξ^⋆(λ,χ) - φ_ξ^⋆(λ,χ)(λ,0) , . ∂_ξφ_ξ(λ,0) |_ξ^⋆ = χ . This procedure extends the one of <cit.> to the joint distribution of Q_t and X_t. 
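In practice, this double Legendre transform can also be carried out numerically once ψ̂_ξ(λ,ν) is available as a function. The short Python sketch below illustrates the two steps by root finding (first solving ∂_νψ̂_ξ = j, then ∂_ξφ_ξ(λ,0) = χ); the function psi_demo, the brackets and the numerical values are placeholders of ours, chosen only to make the sketch runnable, and would be replaced by the perturbative expression of ψ̂_ξ obtained above.

import numpy as np
from scipy.optimize import brentq

def phi(lam, j, xi, psi, dnu=1e-5):
    # First Legendre transform: solve d(psi)/d(nu) = j for nu*, then phi = nu*·j - psi(lam, nu*, xi)
    dpsi = lambda nu: (psi(lam, nu + dnu, xi) - psi(lam, nu - dnu, xi)) / (2 * dnu) - j
    nustar = brentq(dpsi, -10.0, 10.0)
    return nustar * j - psi(lam, nustar, xi)

def joint_cgf(lam, chi, psi, dxi=1e-3):
    # Second Legendre transform: solve d(phi_xi(lam,0))/d(xi) = chi for xi*
    dphi = lambda xi: (phi(lam, 0.0, xi + dxi, psi) - phi(lam, 0.0, xi - dxi, psi)) / (2 * dxi) - chi
    xistar = brentq(dphi, -5.0, 5.0)
    return chi * xistar - phi(lam, 0.0, xistar, psi)

# Placeholder cumulant generating function (NOT the SEP result), used only to run the sketch
psi_demo = lambda lam, nu, xi: 0.1 * (lam + nu) ** 2 - 0.05 * nu * xi
print(joint_cgf(0.2, 0.01, psi_demo))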
It can be carried out explicitly starting from the expression of ψ̂_ξ at lowest orders (<ref>), and yields lim_t →∞1/√(t)ln^λ Q_t + χ X_t = 1-ρ/ρ√(π) (χ + λρ)^2 - (1-ρ)^2/ρ√(π)λχ (χ + λρ) + λ ^3 (1-ρ ) (λρ +χ )/12 √(π) -λ^3 χ (1-ρ)^3 /4 √(π) -λ ^2 χ (λρ +χ ) (1-ρ)^3 /4 √(π)ρ ^2 +λ ^2 χ (λρ +χ ) (1-ρ)^2/4 √(π) -χ ^3 (λρ +χ )(1-ρ ) /4 √(π)ρ ^3 +χ (λρ +χ )^3(1-ρ )^3 /π ^3/2ρ ^3 +2χ ^3 (λρ+χ ) (1-ρ)^2 /3 √(π)ρ ^2 +χ ^3 (λρ +χ)(1-ρ ) /3 √(π)ρ ^2 -(λρ +χ )^4(1-ρ )^2 /2 √(2 π)ρ ^2 -λχ ^2 (λρ +χ ) (1-ρ)^2 (1-2 ρ)/2 √(π)ρ^2 +λχ (λ +χ ) (λρ +χ ) (1-ρ )/4 √(π)ρ ^2 + 𝒪(λ^5,χ^5) . This directly gives the first joint cumulants of Q_t and X_t, which are given in Section <ref>. In particular, setting χ = -ρλ, this gives the generating function of Q_t - ρ X_t, lim_t →∞1/√(t)ln^λ (Q_t - ρ X_t) = ρ(1-ρ)^3/4 √(π)λ^4 + 𝒪(λ^5) . Remarkably, there is no term in λ^2, indicating that the variance of Q_t - ρ X_t is smaller than √(t) for large t, indicating strong correlations between the current Q_t and the positions X_t of the tracer. However, this does not indicate that Q_t and X_t are proportional, since the higher order cumulants grow as √(t). This indicates that Var(Q_t-ρ X_t) is nonzero, but grows slower than √(t). § CONCLUSION We have studied the joint distribution of the current Q_t through the origin and the current J_t through a moving boundary in the SEP, as well as their correlations with the density of particles. These correlations are described by generalised density profiles. We have obtained integral equations satisfied by these generalised density profiles. These integral equations extend the ones discovered in<cit.> in the case of a single observable (current Q_t or J_t, tracer position X_t). This further emphasises the key role of such strikingly simple integral equations involving partial convolutions in interacting particle systems. In the case of a single observable, the integral equations naturally obtained are bilinear, but surprisingly they are equivalent to linear equations at the expense of introducing analytic continuations<cit.>. An important open question is whether the equation obtained here, which is trilinear, can be reduced to such linear equations (for which an explicit solution can be obtained). We have also obtained simple boundary conditions for the generalised density profiles. These boundary conditions take a simple physical form, in terms of the chemical potential, and can be applied to any model of single-file diffusion. This extends the relations that have been obtained for the SEP from microscopic considerations <cit.>. As a consequence of these equations, we have characterised the joint statistics of the current through the origin Q_t and the position of a tracer X_t, initially at the origin. These variables are strongly correlated, and even become equal in the high density limit. This work opens the way to the study of more than two observables, such as multiple currents or tracers, in the SEP and other models of single-file systems. Acknowledgements We thank Alexis Poncet for illuminating discussions and in particular for sharing his guess of the Eq. (<ref>), left, involving the pressure. § MAPPING THE BOUNDARY CONDITIONS FOR OTHER OBSERVABLES In this Appendix, we show that the boundary conditions (<ref>,<ref>) obtained for the currents Q_t and J_t can be mapped onto other physical boundary conditions for other observables, such as the position X_t of a tracer. In Ref. 
<cit.>, it has been shown that the current Q_t in a single-file system described by the coefficients D(ρ) ans σ(ρ) corresponds to the opposite of the displacement of a tracer in a dual single-file system, with D̃(ρ) = 1/ρ^2 D ( 1/ρ) , σ̃(ρ) = ρ σ( 1/ρ) . The mapping is as follows. The density ρ̃ in the dual system, in the reference frame of the tracer at X̃_t = -Q_t, can be expressed in terms of the density ρ of the initial system as <cit.> ρ(x,t) = 1/ρ̃(k(x,t),t) , k(x,t) = ∫_0^x ρ(y,t) y . Since this mapping is valid for all realisations of ρ, it is also valid for the saddle point (q,p) solution of the MFT equations (<ref>,<ref>), and thus for the profile Φ (<ref>), Φ(x) = 1/Φ̃(z(x)) , z(x) = ∫_0^x Φ(y) y . The dual profile Φ̃ corresponds to <cit.> ρ̃(X̃_t + r,t) ^-λX̃_t/^-λX̃_tt →∞≃Φ̃( z = r/√(t)) , with an unusual minus sign in the exponential, due to the fact that X̃_t = - Q_t. The chemical potential (<ref>) becomes μ(ρ) = ∫^ρ2 D(r)/σ(r) r = -∫^1/ρ2 D(1/r)/σ(1/r) r/r^2 = -∫^1/ρ2 r D̃(r)/σ̃(r) r = - P̃( 1/ρ) , where P̃ is the pressure (<ref>) for the dual system. Combining these relations with (<ref>), we obtain P̃(Φ̃(0^+)) - P̃(Φ̃(0^-)) = -λ . Changing λ→ -λ to remove the minus sign in the definition (<ref>), we obtain (<ref>), left. For the relation involving the derivative, we need ∂_x μ(Φ) = 2 D(ϕ) ∂_x Φ/σ(Φ) = 2 Φ̃^3 D̃(Φ̃)/σ̃(Φ̃)zx∂_z ( 1/Φ̃) = - 2 D̃(Φ̃)/σ̃(Φ̃)∂_z Φ̃ = - ∂_z μ̃(Φ̃) . From (<ref>), we thus straightforwardly deduce (<ref>), right. § RELATION BETWEEN THE PHYSICAL BOUNDARY CONDITIONS AND THE MICROSCOPIC EQUATIONS FOR THE SEP In the case of the SEP, other boundary conditions have been obtained for the different observables considered here (individually), from microscopic considerations <cit.>. Note that they have been obtained with a different choice for the time scale, with D(ρ) = 1/2 and σ(ρ) = ρ(1-ρ). Here, we show that they are equivalent to the physical boundary conditions (<ref>,<ref>,<ref>) obtained in this article. For the case of the current Q_t, they take the form <cit.> Φ(0^+)(1-Φ(0^-))/Φ(0^-)(1-Φ(0^+)) = ^λ , Φ'(0^±) = ∓ 2 ψ̂( 1/1 - ^∓λ - Φ(0^±) ) , where Φ has been defined in <cit.> with a slightly different scaling η_r ^λ Q_t/^λ Q_tt →∞≃Φ( x = r/√(2t)) . Taking the logarithm of the first equation in (<ref>), we get λ = -ln( 1/Φ(0^+) - 1 ) + ln( 1/Φ(0^-) - 1 ) , which is exactly the relation (<ref>) with the expression of the chemical potential for the SEP (<ref>). Combining the relations (<ref>) right to eliminate ψ̂, and combining with the first relation to remove the ^±λ, we get -Φ'(0^+)(Φ(0^+) - Φ(0^-))/Φ(0^+)(1-Φ(0^+)) = Φ'(0^-)(Φ(0^-) - Φ(0^+))/Φ(0^-)(1-Φ(0^-)) . Rewriting it in terms of the chemical potential for the SEP (<ref>), we obtain, μ'(Φ(0^+)) Φ'(0^+) = μ'(Φ(0^-)) Φ'(0^-) , which is indeed Eq. (<ref>). In the case of a tracer at position X_t, different relations have been obtained <cit.> which read 1-Φ(0^-)/1-Φ(0^+) = ^λ , Φ'(0^±) = ∓ 2ψ̂/^±λ-1Φ(0^±) , with the profiles defined as η_X_t+r^λ X_t/^λ X_tt →∞≃Φ( x = r/√(2t)) . As for the current, taking the logarithm of the first equation in (<ref>) yields λ = - ln (1-Φ(0^+)) + ln(1-Φ(0^-)) = P(Φ(0^+)) - P(Φ(0^-)) , with P(ρ) = -ln (1-ρ) for the SEP. This is indeed (<ref>), left. Combining the relations in (<ref>) to eliminate ψ̂ and λ, we obtain -Φ'(0^+)/Φ(0^+)(1-Φ(0^+)) (Φ(0^+) - Φ(0^-)) = Φ'(0^-)/Φ(0^-)(1-Φ(0^-))(Φ(0^-) - Φ(0^+)) . This is identical to the case of the current (<ref>), hence it yields again (<ref>), right. 10 urlstyle Chou:2011 T. Chou, K. Mallick and R. K. 
Zia, Non-equilibrium statistical mechanics: from a paradigmatic model to biological transport, Rep. Prog. Phys. 74(11), 116601 (2011), 10.1088/0034-4885/74/11/116601. Mallick:2015 K. Mallick, The exclusion process: A paradigm for non-equilibrium behaviour, Physica A 418, 17 (2015), 10.1016/j.physa.2014.07.046, Proceedings of the 13th International Summer School on Fundamental Problems in Statistical Physics. Imamura:2017 T. Imamura, K. Mallick and T. Sasamoto, Large deviations of a tracer in the symmetric exclusion process, Phys. Rev. Lett. 118(16), 160601 (2017), 10.1103/PhysRevLett.118.160601. Imamura:2021 T. Imamura, K. Mallick and T. Sasamoto, Distribution of a tagged particle position in the one-dimensional symmetric simple exclusion process with two-sided Bernoulli initial condition, Commun. Math. Phys. 384(3), 1409 (2021), 10.1007/s00220-021-03954-x. Poncet:2021 A. Poncet, A. Grabsch, P. Illien and O. Bénichou, Generalized correlation profiles in single-file systems, Phys. Rev. Lett. 127, 220601 (2021), 10.1103/PhysRevLett.127.220601. Grabsch:2022 A. Grabsch, A. Poncet, P. Rizkallah, P. Illien and O. Bénichou, Exact closure and solution for spatial correlations in single-file diffusion, Sci. Adv. 8(12), eabm5043 (2022), 10.1126/sciadv.abm5043. Grabsch:2023 A. Grabsch, P. Rizkallah, A. Poncet, P. Illien and O. Bénichou, Exact spatial correlations in single-file diffusion, Phys. Rev. E 107, 044131 (2023), 10.1103/PhysRevE.107.044131. Mallick:2022 K. Mallick, H. Moriya and T. Sasamoto, Exact solution of the macroscopic fluctuation theory for the symmetric exclusion process, Phys. Rev. Lett. 129, 040601 (2022), 10.1103/PhysRevLett.129.040601. Derrida:2009 B. Derrida and A. Gerschenfeld, Current fluctuations of the one dimensional symmetric simple exclusion process with step initial condition, J. Stat. Phys. 136(1), 1 (2009), 10.1007/s10955-009-9772-7. Derrida:2009a B. Derrida and A. Gerschenfeld, Current fluctuations in one dimensional diffusive systems with a step initial density profile, J. Stat. Phys. 137(5), 978 (2009), 10.1007/s10955-009-9830-1. Krapivsky:2012 P. Krapivsky and B. Meerson, Fluctuations of current in nonstationary diffusive lattice gases, Phys. Rev. E 86(3), 031106 (2012), 10.1103/PhysRevE.86.031106. Arratia:1983 R. Arratia, The motion of a tagged particle in the simple symmetric exclusion system on Z, Ann. Probab. 11(2), 362 (1983). Burlatsky:1996 S. Burlatsky, G. Oshanin, M. Moreau and W. Reinhardt, Motion of a driven tracer particle in a one-dimensional symmetric lattice gas, Phys. Rev. E 54(4), 3165 (1996), 10.1103/PhysRevE.54.3165. Landim:1998 C. Landim, S. Olla and S. Volchan, Driven tracer particle in one dimensional symmetric simple exclusion, Comm. Math. Phys. 192(2), 287 (1998), 10.1007/s002200050300. Krapivsky:2014 P. L. Krapivsky, K. Mallick and T. Sadhu, Large deviations in single-file diffusion, Phys. Rev. Lett. 113, 078101 (2014), 10.1103/PhysRevLett.113.078101. Krapivsky:2015 P. L. Krapivsky, K. Mallick and T. Sadhu, Dynamical properties of single-file diffusion, J. Stat. Mech: Theory Exp. 2015(9), P09007 (2015), 10.1088/1742-5468/2015/09/P09007. Sadhu:2015 T. Sadhu and B. Derrida, Large deviation function of a tracer position in single file diffusion, J. Stat. Mech: Theory Exp. 2015(9), P09008 (2015), 10.1088/1742-5468/2015/09/p09008. Grabsch:2023b A. Grabsch, P. Rizkallah, P. Illien and O. Bénichou, Driven tracer in the symmetric exclusion process: Linear response and beyond, Phys. Rev. Lett. 130, 020402 (2023), 10.1103/PhysRevLett.130.020402. 
Illien:2013 P. Illien, O. Bénichou, C. Mejía-Monasterio, G. Oshanin and R. Voituriez, Active transport in dense diffusive single-file systems, Phys. Rev. Lett. 111, 038102 (2013), 10.1103/PhysRevLett.111.038102. Hegde:2014 C. Hegde, S. Sabhapandit and A. Dhar, Universal large deviations for the tagged particle in single-file motion, Phys. Rev. Lett. 113, 120601 (2014), 10.1103/PhysRevLett.113.120601. Krapivsky:2015a P. L. Krapivsky, K. Mallick and T. Sadhu, Tagged particle in single-file diffusion, J. Stat. Phys. 160(4), 885 (2015), 10.1007/s10955-015-1291-0. Krajenbrink:2021 A. Krajenbrink and P. Le Doussal, Inverse scattering of the Zakharov-Shabat system solves the weak noise theory of the Kardar-Parisi-Zhang equation, Phys. Rev. Lett. 127, 064101 (2021), 10.1103/PhysRevLett.127.064101. Krajenbrink:2022a A. Krajenbrink and P. Le Doussal, Inverse scattering solution of the weak noise theory of the Kardar-Parisi-Zhang equation with flat and brownian initial conditions, Phys. Rev. E 105, 054142 (2022), 10.1103/PhysRevE.105.054142. Krajenbrink:2022 A. Krajenbrink and P. Le Doussal, Crossover from the macroscopic fluctuation theory to the Kardar-Parisi-Zhang equation controls the large deviations beyond einstein's diffusion, Phys. Rev. E 107, 014137 (2023), 10.1103/PhysRevE.107.014137. Bettelheim:2022 E. Bettelheim, N. R. Smith and B. Meerson, Inverse scattering method solves the problem of full statistics of nonstationary heat transfer in the Kipnis-Marchioro-Presutti model, Phys. Rev. Lett. 128(13), 130602 (2022), 10.1103/PhysRevLett.128.130602. Bertini:2006 L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio and C. Landim, Large deviation approach to non equilibrium processes in stochastic lattice gases, Bull. Braz. Math. Soc. 37(4), 611 (2006), 10.1007/s00574-006-0031-0. Bertini:2007 L. Bertini, A. D. Sole, D. Gabrielli, G. Jona-Lasinio and C. Landim, Stochastic interacting particle systems out of equilibrium, J. Stat. Mech: Theory Exp. 2007(07), P07014 (2007), 10.1088/1742-5468/2007/07/p07014. Bertini:2015 L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio and C. Landim, Macroscopic fluctuation theory, Rev. Mod. Phys. 87, 593 (2015), 10.1103/RevModPhys.87.593. Ablowitz:1981 M. J. Ablowitz and H. Segur, Solitons and the inverse scattering transform, SIAM (1981). Derrida:2004 B. Derrida, B. Douçot and P.-E. Roche, Current fluctuations in the one-dimensional symmetric exclusion process with open boundaries, J. Stat. Phys. 115, 717 (2004), 10.1023/B:JOSS.0000022379.95508.b2. Rizkallah:2022 P. Rizkallah, A. Grabsch, P. Illien and O. Bénichou, Duality relations in single-file diffusion, J. Stat. Mech: Theory Exp. 2023(1), 013202 (2023), 10.1088/1742-5468/aca8fb. Kipnis:1989 C. Kipnis, S. Olla and S. R. S. Varadhan, Hydrodynamics and large deviation for simple exclusion processes, Comm. Pure Appl. Math. 42(2), 115 (1989), 10.1002/cpa.3160420202. GraTex15 A. Grabsch and C. Texier, Capacitance and charge relaxation resistance of chaotic cavities—Joint distribution of two linear statistics in the Laguerre ensemble of random matrices, Europhys. Lett. 109(5), 50004 (2015), 10.1209/0295-5075/109/50004. CunFacViv16 F. D. Cunden, P. Facchi and P. Vivo, A shortcut through the Coulomb gas method for spectral linear statistics on random matrices, J. Phys. A Math. Theor. 49(13), 135202 (2016), 10.1088/1751-8113/49/13/135202. Grabsch:2023a A. Grabsch, T. Berlioz, P. Rizkallah, P. Illien and O. 
Bénichou, Universal correlation profiles in single-file systems, arXiv:2306.13516 (2023), 10.48550/arXiv.2306.13516. Bettelheim:2022a E. Bettelheim, N. R. Smith and B. Meerson, Full statistics of nonstationary heat transfer in the Kipnis–Marchioro–Presutti model, J. Stat. Mech: Theory Exp. 2022(9), 093103 (2022), 10.1088/1742-5468/ac8a4d. Polychronakos:2020 A. P. Polychronakos, Solitons in fluctuating hydrodynamics of diffusive processes, Phys. Rev. E 101, 022209 (2020), 10.1103/PhysRevE.101.022209. Ablowitz:1974 M. J. Ablowitz, D. J. Kaup, A. C. Newell and H. Segur, The inverse scattering transform-Fourier analysis for nonlinear problems, Stud. Appl. Math. 53(4), 249 (1974), 10.1002/sapm1974534249. Krajenbrink:2023 A. Krajenbrink and P. L. Doussal, The weak noise theory of the O'Connell-Yor polymer as an integrable discretisation of the nonlinear Schrodinger equation, arXiv:2307.01172 (2023), 10.48550/arXiv.2307.01172. Owen:1980 D. B. Owen, A table of normal integrals, Commun. Stat. Simul. Comput. 9(4), 389 (1980), 10.1080/03610918008812164.
http://arxiv.org/abs/2307.03392v1
20230707053006
Microscopic analysis of relaxation behavior in nonlinear optical conductivity of graphene
[ "Bristi Ghosh", "Sushanta Dattagupta", "Malay Bandyopadhyay" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
1.School of Basic Sciences, Indian Institute of Technology Bhubaneswar, Argul, Jatni, Khurda, Odisha 752050, India. 2. National Institute of Technology, Mahatma Gandhi Road, Durgapur, West Bengal, 713209, India We present here a general formulation for the interband dynamical optical conductivity in the nonlinear regime of graphene in the presence of a quantum bath comprising phonons and electrons. Our main focus is the relaxation behavior of the quantum solid of graphene perturbed by an oscillatory electric field. Considering the optical range of the frequency and a considerable amount of the amplitude of the field, one can observe a nonlinear response by formulating a quantum master equation of the density operator associated with the Hamiltonian encapsulated in the form of a spin-Boson model of dissipative quantum statistical mechanics. Mapping the valence and conduction states as the eigenstates of the Pauli spin operators and utilizing the rotating wave approximation to omit off-resonant terms, one can solve the rate equation for the mean population of the conduction and valence states and the mixing matrix elements between them. Our results reveal the nonlinear steady-state regime's population inversion and interband coherence. It is characterized by a single dimensionless parameter that is directly proportional to the incident field strength and inversely proportional to the optical frequency. Our method is also capable of calculating the nonlinear interband optical conductivity of doped and gapped graphene at finite temperatures. The effects of different bath spectra for phonons and electrons are examined in detail. Although our general formulation can address a variety of nonequilibrium response of the two-band system, it also facilitates a connection with phenomenological modeling of nonlinear optical conductivity. Microscopic analysis of relaxation behavior in nonlinear optical conductivity of graphene Bristi Ghosh^1, Sushanta Dattagupta ^2, Malay Bandyopadhyay ^1 August 1, 2023 ========================================================================================= § INTRODUCTION Graphene, a two-dimensional sheet of graphite, is a wonder laboratory of modern solid state physics that is endowed with remarkable physical properties and potential device applications [1,2]. It is a nano material in which much of fundamental predictions of relativistic quantum mechanics, such as the Dirac equation, Weyl and Majorana electrons, geometric and topological phases, spintronics, etc., can be experimentally tested. Graphene, a true two dimensional electronic material is not only a gold mine for their myriad technological and device applications, but also a repository for testing theoretical concepts of great contemporary interest [1]. One can mention some of these exceptional ideas such as chemistry of hybridized carbon orbitals [3], ultra high mobility [4,5], spin-orbit interaction [6], Andreev reflection and Klein tunneling [7,8], magneto-resistance and weak localization [9,10], quantum Hall effect [11,12], spintronics [13], and so on. Much of these extraordinary properties of graphene emanate from the fact that the electrons of graphene behave as Dirac Fermions in the low energy physics sector which exhibit typical linear band structure (E_p=± v_F|p|, where v_F is the Fermi velocity) at K and K' points and follow Dirac physics. Although these electrons move with much smaller Fermi velocity compared to the speed of light, but their dynamics is governed by Dirac equation. 
Hence, this fascinating 2D-material becomes a testing-bed for the realization of relativistic quantum mechanics in a non-relativistic setup of solid state physics [3]. In the presence of a strong electromagnetic field, these massless carriers display fascinating linear and nonlinear optical properties such as constant absorption coefficient over a broad spectrum [14], higher-harmonic generation [15], four-wave mixing [16], and self-phase modulation [17], just to name a few. Given these extraordinary phenomena in graphene, we focus on the nonlinear response of frequency-dependent dynamic conductivity [18,19,20]. From the beginning, much attention has been devoted to the linear response of graphene to the applied electric field in the so-called Kubo regime. However, additional insights can be gained by transiting to the nonlinear response, especially when the applied electric field is dependent on a monochromatic frequency ω.The nonlinear response of graphene is an exceptional tool for investigating intrinsic material properties that are hidden in the Kubo regime, such as material symmetry, selection rules, electron spin, and spin-spin relaxations mechanism [21]. The optical conductivity in the Kubo regime is clearly defined by the universal value σ_0=e^2/4ħ, however, here we are more concerned with determining the frequency dependent conductivity in the nonlinear domain in the optical range (10^11-10^16 Hz). Furthermore, we take a close view into the relaxation behavior of the carriers associated with energy transfer between the applied field and elementary excitations characterizing the surrounding heat bath. Thus, we couch the problem in the contemporary field of nonequilibrium statistical mechanics of dissipative quantum systems. The nonlinear optical response in the background of dissipative features had been looked into earlier in terms of rate theories familiar in quantum optics [18,19]. The study of Rabi oscillations, rotating wave approximation naturally feature into such theories. Two distinct relaxation attributes also have merited attention: spin-lattice relaxation (γ_p) and spin-spin relaxation (γ_e), common to magnetic resonance phenomena [22]. It is pertinent to point out that the ‘spin’ here refers to a pseudo-spin that captures the valence and the conduction band near the Dirac point in the reciprocal space. Spin-lattice relaxation is accompanied by inter-band transitions while spin-spin relaxations arise from intra-band transitions. An important quantity which clearly captures the nonlinear and frequency-dependent features is the so-called Mischenko parameter defined by η=ev_FE_0/ħω√(γ_eγ_p), where E_0 and ω are the amplitude and frequency of the externally applied oscillatory field [18]. The latter demarcates the boundaries between linear and nonlinear domains. While the rate equation approach does provide significant insights into the phenomena at hand, such an approach has limitations in that the external bath is viewed as a ‘black-box’ and no attempt is made to give a microscopic assessment of the underlying relaxation rates. Our aim is to fill-in this gap and put forward a general master equation method for the underlying density operator of the system that goes beyond the rate theories. This was attempted by one of us [20] wherein a careful delineation was made between the non-Markov and Markov regions of relaxation and contact was established with the Markovian regime in which the rate theories are valid. 
We now go beyond [20] and analyze in detail the underlying spin-lattice and spin-spin relaxation rates characterized by the parameters of both the system and the bath, including the temperature (T). Needless to say such temperature variations of the rates, that can be accessed experimentally, are beyond the realm of rate equation methods. Our numerical calculations enable us to quantify the crossover between the transient, non-Markovian response to non-transient, Markovian response as a function of a timescale governed by the cutoff frequency of bath excitations. Additionally, we demonstrate the temperature variation of both the spin-lattice and the spin-spin relaxation rates. We also go into the case of gapped graphene that brings in a new energy parameter (Δ) that couples to the spin component transverse to the graphene layer. Results for the conductivity related to inter-band transitions are presented for both pristine (gapless) and gapped graphene. A novel switching behavior in the low temperature optical conductivity for gapped graphene as a function of the applied frequency ω is demonstrated. Given this background, the paper is organized as follows. In Sec. 2 we write down the generalized master equations for the average “dephasing” and “depopulation” operators in terms of explicitly time dependent spin-spin relaxation and spin-lattice rates. The latter quantities are expressed in terms of the underlying spectral functions that characterize the electronic and the phonon baths. Numerical plots of these rates are given in this section which demonstrate the transition from the non-Markovian to the Markovian regime and their T dependencies, which in turn determine the T dependent Mishchenko parameter. This section also presents experimentally accessible inter-band conductivity of the pristine graphene in the Markovian domain. In Sec. 3 we turn to the case of gapped graphene which brings-in the third component of the spin transverse to the graphene layer. The role played by the transverse coupling parameter Δ in the temperature dependence of the conductivity and an unexpected switching behavior of the latter as a function of ω are presented here. Section 4 concludes the paper with a summary of our main results. § MODEL AND METHOD In this section we introduce the relevant spin-Boson Hamiltonian for the electric field driven graphene in contact with dissipative Bosonic bath that is modelled as a collection of harmonic oscillators. Applying a unitary transformation in the interaction picture of our system-plus-bath Hamiltonian, we can rephrase the Hamiltonian in the so-called “rotating wave approximation” (RWA) [20]. Since all the rapidly oscillating terms eventually die down in the steady state, we ignore these terms utilizing RWA. Although RWA is a well known tool in quantum optics, its application in the present context of dissipative dynamics of graphene yields a modified spin-boson Hamiltonian which shapes the foundation of our further study of dissipative dynamics in terms of a master equation for the “reduced” density operator. We also introduce the current density which we will require for computing the nonlinear conductivity beyond the Drude/Kubo regime [23]. §.§ Model Hamiltonian and Method Here we consider graphene as a two band electronic system which is interacting with the surrounding environment. 
In the Dirac limit, the Hamiltonian of this open system can be written in the system-plus-bath approach of Caldeira-Leggett[24] : H = H_S + H_SB + H_B, where, H_s is the subsystem Hamiltonian of the graphene for a given k (≡ to momentum p), H_S= v_F(σ·k), where v_F is the fermi velocity. Further, H_SB takes into account two distinct physical interactions between the Dirac electron of the graphene and the surrounding phonons and other electrons which can be described by two types of interaction terms: a dissipationless decoherence term related with electron-electron interaction and another dissipative decoherence term caused by electron-phonon interaction. Thus, H_SB=Π_k X_e + Y_k X_p, with the depopulation operator Π_k=(|c_k⟩⟨ c_k|-|v_k⟩⟨ v_k|), dephasing operator Y_k=-i(|v_k⟩⟨ c_k|-|c_k⟩⟨ v_k|), X_e=∑_q G_q(b_q+b_q^†), and X_p=∑_q g_q(a_q+a_q^†). G_q and g_q parameterize the coupling of our system with the surrounding electrons and phonons respectively. b_q and b_q^† are the annihilation and creation operators for electrons, while a_q and a_q^† are used to denote annihilation and creation operators for phonons. Further, |c_k⟩ and |v_k⟩ are the conduction and valence band eigenfunctions of H_S. Finally, the surrounding environment is modelled through usual bosonic structure : H_B = ∑_qω_q b_q^†b_q+∑_qΩ_q a^†_q a_q, where the first term indicates the free electron interaction in terms of electron creation (b_q^†) and annihilation operators (b_q). As shown in Ref.[25,26], the fermionic bath can be `bosonized' as long as we are only interested in the electron-hole excitations near the fermi surface. The last term in Eq.(4) is the collection of harmonic oscillators to represent phononic bath. Now we perturb our entire system with an external alternating electric field E_0cos(ω t). So the new system hamiltonian is H_0 =H_S+H_ω(t)=H_S+v_FeE_0/ωσ_xsinω t, where σ_x is the Pauli spin matrix. §.§ Method of Quantum Dynamics To study the dynamics of our system we follow the method introduced in Ref.[20]. Our starting point is the Schrödinger picture von Neumann–Liouville equation for the density operator ρ(t) : idρ(t)/dt= [H_S+H_ω+H_SB+H_B,ρ(t)]. Now, going into the interaction picture, tracing out the bath degrees of freedom, and utilizing a cumulant expansion scheme one can obtain convolution-less master equation for the reduced density operator ρ'_S(t)=exp(iH_St)ρ_S exp(-iH_St) : ddtρ'_S(t)=-i[H^eff_ω(t),ρ'_S(t)]-R(t)ρ'_S(t), where R(t) is known as the 'relaxation matrix'. R(t)ρ'_S(t) =∫_0^t Tr_B[H_SB^I(τ)H_SB^I(0)ρ_Bρ'_S(t)+ ρ_Bρ'_S(t)H_SB^I(0)H_SB^I(τ)+H_SB^I(τ)ρ_Bρ'_S(t)H_SB^I(0) -H_SB^I(0)ρ_Bρ'_S(t)H_SB^I(τ)] dτ. Under rotating wave approximation, our `effective' ac term can be expressed as H^eff_ω (t) =Ω_k[Y^+_kexp[-i(ω-Δ_k)t]+ H.c.]/2, where Y^+_k = |c_k⟩⟨ v_k|, Y^-_k = |v_k⟩⟨ c_k|, Ω_k = (eEv_F/ω) sinχ_k, Δ_k = 2v_F|k|. Here Δ_k is the “tunneling frequency" between the valance and conduction bands and (ω-Δ_k) is called the “detuning frequency". On the other hand, the time evolution of H_SB in the interaction picture is given by H_SB^I(t) =U^+_S(t) H_SB U_S(t) =Π_kX_e(t)+[-iexp(iΔ_k t)Y^+_k+ iexp(-iΔ_k t)Y^-_k]X_p(t), where U_S(t)=exp-i(H_S+H_B)t. Since our focus is to analyze optical conductivity of graphene, we can introduce an average momentum-resolved current density along the applied electric field as follows (explained in detail in sec.IID) : j_kx(t)=ev_F[cos(χ_k)⟨Π_k(t)⟩+sin(χ_k)⟨ Y_k(t)⟩], where e is the electronic charge, and χ_k is the angle between the k vector and the x axis. 
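As a consistency check of this construction, the operators Π_k and Y_k can be built numerically in the eigenbasis of H_S. The short Python sketch below (in units with ħ = 1 and for an arbitrarily chosen momentum, both illustrative assumptions of ours) verifies the Dirac dispersion ±v_F|k|, the pseudo-spin form H_S = v_F|k| Π_k that underlies the mapping onto the spin-Boson model, and the Hermiticity of the dephasing operator.

import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

vF = 1.0                       # units with hbar = 1 (assumption of this sketch)
kx, ky = 0.3, 0.4              # arbitrary illustrative momentum
k = np.hypot(kx, ky)

H = vF * (kx * sx + ky * sy)   # Dirac Hamiltonian H_S = v_F (sigma . k)
E, V = np.linalg.eigh(H)       # eigenvalues in ascending order
v_state, c_state = V[:, 0], V[:, 1]      # valence (-v_F|k|) and conduction (+v_F|k|) states

Pi = np.outer(c_state, c_state.conj()) - np.outer(v_state, v_state.conj())   # depopulation operator
Y = -1j * (np.outer(v_state, c_state.conj()) - np.outer(c_state, v_state.conj()))  # dephasing operator

print(np.allclose(E, [-vF * k, vF * k]))   # dispersion +/- v_F |k|
print(np.allclose(H, vF * k * Pi))         # H_S = v_F|k| Pi_k in this basis
print(np.allclose(Y.conj().T, Y))          # Y_k is Hermitian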
Hence our task is to calculate ⟨Π_k⟩ and ⟨ Y_k⟩ (where ⟨...⟩ represents expectation values) to obtain nonlinear optical conductivity from the average current density expression. Now, invoking Markov approximation, one can extend the upper limit of the integral to infinity rendering the relaxation matrix R(t). After some algebra, one can write the equation which governs the dynamics of ⟨Π_k(t)⟩: d⟨Π_k(t)⟩/dt = iΩ_k[⟨ Y^-_k⟩exp(iω t)-⟨ Y^+_k⟩exp(-iω t)] -γ_p[⟨Π_k (t)⟩-⟨Π_k⟩_eq]], and the equation which governs the time evolution of ⟨ Y^+_k⟩ is d⟨ Y^+_k(t)⟩/dt =-iΩ_k2⟨Π_k⟩exp(iω t)- (γ_e-iΔ_k)⟨ Y^+_k⟩. One can obtain the value of Y^-_k by taking the complex conjugate of Y^+_k . Here the spin-lattice relaxation rate γ_p and the spin-spin relaxation rate γ_e are given as follows : γ_p = 2∫_-∞^∞dτcos(Δ_kτ)ξ_p(τ), with the phonon bath correlation function ξ_p(t) =∫_0^∞dω J_p(ω)[(βω/2)cos(ω t)-isin(ω t)], J_p(ω) being the so-called phonon spectral function. On the other hand, the spin-spin relaxation rate is given by : γ_e=2∫_-∞^∞dτ∫_0^∞dω J_e(ω)[(βω/2)cos(ω t)-isin(ω t)], where the spectral function for the electronic bath is given by J_e(ω). §.§ Detailed inspection of γ_e and γ_p In an earlier phenomenological treatment [19], the relaxation rates are considered coarse-grained, frequency independent and temperature independent phenomenological constants. Further, they analyse the steady state electrical response in different linear and nonlinear regime within the Markovian approximation of the phenomenological rate equation. However, as mentioned earlier, the present spin-boson model is a microscopic theory that adopt the machinery of nonequilibrium statistical mechanics. As a result, the genesis of the relaxation rates can be connected to the details of the spectral fluctuations of the underlying phonon and electron baths. It is the goal of this subsection to provide an extensive analysis of the interaction of the system with surrounding thermal and electronic baths. The variation of these rates with temperature is also investigated. The time dependency of these relaxation rates demonstrating the transition from non-Markovian domain to Markovian domain can be expressed by modifying the Eq.(17) and Eq.(15).Thus, γ_e(t)=4∫^t_0 dτ∫^∞_0J_e(ω)(βω/2)cosωτ dω, where J_e(ω) is the Ohomic spectral function for electron bath with exponential cutoff frequency ω_ce: J_e(ω)=α_eωexp(-ω/ω_ce), where α_e is a coupling parameter. Thus, γ_p(t)=4∫^t_0 dτcos(Δ_kτ)∫^∞_0J_p(ω)(βω/2)cos(ωτ) dω, where J_p(ω) has the usual Debye structure[21] with cutoff frequency ω_cp, J_p(ω)=α_pω^3ω^2_cpexp(-ω/ω_cp), with α_p is the coupling parameter. In the continuation of the above discussion, we can now derive the closed form expressions of γ_e(t) and γ_P(t) for the high-T as well as in the low-T regime. In the high-T regime (βω_ce<< 1, (βω/2)≈2/βω) one can obtain the relaxation rate γ_e(t) in the following form: γ_e(t)≃ 8α_e k_BT tan^-1(-ω_cet). Figure (1a) shows the comparison between the numerically simulated results (black solid line) and the analytical results (Eq. (22); red dashed line) of the variation of dimensionless spin-spin relaxation rate (γ^'_e ) as a function of dimensionless time (ω_cet) for 300 K. Both results fairly match with each other. On the other hand, low-T behaviour of γ_e(t) can be obtained by using the relation (βω/2)≈ [1+2exp(-βω)]. The low-T expression of γ_e(t) is given by γ_e(t)=4α_eω_ce[I_1+2I_2], where I_1= ω_ce t(ω_ce t)^2 +1 and I_2=ω_ce t(ω_ce t)^2 +(1+(βω_ce))^2. 
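The quality of these closed forms can be checked by direct numerical integration of the defining time integral for γ_e(t). The Python sketch below uses the Ohmic spectral function above together with the standard coth(βω/2) thermal factor and illustrative parameter values (units with ħ = k_B = 1, all our assumptions); it compares γ_e(t) with the high-temperature expression 8α_e k_B T tan⁻¹(ω_ce t) quoted in Eq. (22), written here with a positive argument of the arctangent.

import numpy as np
from scipy.integrate import quad

alpha_e, w_ce, T = 0.05, 1.0, 10.0     # illustrative coupling, cutoff and temperature (k_B T >> hbar w_ce)
beta = 1.0 / T

def J_e(w):                             # Ohmic spectral function with exponential cutoff
    return alpha_e * w * np.exp(-w / w_ce)

def xi_e(tau):                          # frequency integral with the coth(beta*w/2) thermal factor
    f = lambda w: J_e(w) / np.tanh(0.5 * beta * w) * np.cos(w * tau)
    return quad(f, 1e-12, np.inf, limit=200)[0]

ts = np.linspace(0.05, 10.0, 40) / w_ce
gamma_e = np.array([4.0 * quad(xi_e, 0.0, t, limit=200)[0] for t in ts])
high_T = 8.0 * alpha_e * T * np.arctan(w_ce * ts)   # high-temperature form, cf. Eq. (22)

print(np.max(np.abs(gamma_e - high_T) / high_T))    # relative deviation, small for k_B T >> hbar w_ce

The same direct integration, with the full coth factor retained, can be used to benchmark the low-temperature expression given above.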
We have observed close agreement of these analytical results (red dashed line) with that of numerically simulated outcomes (solid black line) in Figure (1b). A distinct transient region of spin-spin relaxation rate is observed for both plots corresponding to the two temperature values. The transient region is generally known as the ‘Non-Markovian regime, which occurs at a shorter time scale than the quantal time (ħ/k_BT). In Markovian approximation technique, all the quantum phenomena occurring within this particular quantal time scale can be neglected. The values of the quantal time scale are much shorter than the spin-spin relaxation time at the non-transient region for both temperatures. It is evident from the figure that the spin-spin relaxation time (τ_e=1/γ_e) for pristine graphene is 10 fs (in Markovian region) at 300 K, which directly supports the experimentally obtained values of spin-spin relaxation time reported earlier [27]. The τ_e is increased to 100 fs at 30 K, inferring the strong temperature-dependent nature of the τ_e. The variation of τ_e with temperature is demonstrated in Figure (1c). Let us move to the analysis of the spin-lattice relaxation that is considered as the key process for interband transition in our work, which can be described by Eq.(20).At high temperature, utilizing (βω/2)≈2/βω and considering Eqs.(20) and (21), one can show γ_p(t)=16α_pk_BT∫_0^ω_cptdx cos(bx)[1-3x^2]/[1+x^2]^3, where b=Δ_𝐤/ω_cp and x=ω_cpτ. The closed form expression of γ_p(t) is given in Appendix A (See Eq. (A.2)). The variation of dimensionless spin-lattice relaxation rate (γ^'_p) with dimensionless time (ω_cp t) for high-T i.e. at 300 K is plotted in Figure (2a). The close agreement between this analytical expression (Eq. (24), red dashed line) with the numerically simulated results (black solid line) is demonstrated in Figure (2a). On the other hand, at low temperature : γ_p(t)=24α_pω_cp∫_0^ω_cptdx[1-6x^2+x^4]/[1+x^2]^4cos(bx) + 2 ∫_0^ω_cptdx [(1+a_p)^4-6(1+a_p)^2x^2+x^4]/[(1+a_p)^2+x^2]^4cos(bx), with a_p=ω_cp/k_BT. We compare our analytical expression (Eq. (25)) with that of numerical results in Figure (2b). Figures (2a) and (2b) both exhibit a transition of the spin-lattice relaxation rate from non-Markovian region to Markovian regime, similar to the electron induced relaxation rate as mentioned earlier. In the Markonivan region, the phonon induced relaxation time (τ_p=1/γ_p) of graphene is found to be 1 ps at 300 K, which is further increased to 10 ps at 30 K. The calculated values of spin-lattice relaxation time fairly agree with previously reported experimental values [28]. It is evident from Eq.(25) that the value of spin-lattice relaxation time significantly depends on Δ_k (tunneling frequency), which is related to detuning frequency. Hence we also show the variation of dimensionless spin-lattice relaxation rate for different Δ_k values at 300 K temperature in Figure (2c). In the Markovian region, the spin-lattice relaxation rate is increased with increasing Δ_k value. The spin-lattice relaxation time are 3.6 ps,1 ps, and 320 fs for w=0.1, 0.2 and 0.4, respectively (Figure 2c). Higher value of Δ_k is associated with the carriers having high momentum ‘k’ value, which relax (interband) faster by interacting with the lattice and results faster spin-lattice relaxation time of the carriers. 
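The Δ_k dependence can be made quantitative by integrating Eq. (20) directly and comparing its long-time value with the plateau 2πJ_p(Δ_k)coth(βΔ_k/2) expected when the upper limit of the time integral is extended to infinity. A Python sketch with illustrative parameter values (ħ = k_B = 1, our choice) reads:

import numpy as np
from scipy.integrate import quad

alpha_p, w_cp, T = 0.05, 1.0, 10.0      # illustrative coupling, Debye cutoff and temperature
beta = 1.0 / T

def J_p(w):                              # Debye-type spectral function of Eq. (21)
    return alpha_p * w**3 / w_cp**2 * np.exp(-w / w_cp)

def gamma_p(t, Dk):                      # direct numerical evaluation of Eq. (20)
    inner = lambda tau: quad(lambda w: J_p(w) / np.tanh(0.5 * beta * w) * np.cos(w * tau),
                             1e-12, np.inf, limit=200)[0]
    return 4.0 * quad(lambda tau: np.cos(Dk * tau) * inner(tau), 0.0, t, limit=400)[0]

for Dk in (0.1 * w_cp, 0.2 * w_cp, 0.4 * w_cp):
    plateau = 2.0 * np.pi * J_p(Dk) / np.tanh(0.5 * beta * Dk)   # expected long-time (Markovian) value
    print(Dk, gamma_p(40.0 / w_cp, Dk), plateau)

The plateau grows with Δ_k in this parameter range, consistent with the faster interband relaxation of high-momentum carriers noted above.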
The temperature dependent spin-lattice relaxation time is plotted in Figure (1d) which confirms the increasing nature of the spin-lattice relaxation time at low temperatures compared to its high temperature values. The deficiency of phonon scatterers at low temperatures may enhance the spin-lattice relaxation time of the carriers at the conduction band. Inspired by the previous studies[19], the Mischenko parameter (η) can be utilized to explain the nonlinear optical response of graphene. Hence, our study predicts strong temperature dependency of the both electron induced and phonon induced relaxation processes, which makes the Mischenko parameter a function of temperature. In Figure (2e), we demonstrate the temperature dependency of the Mischenko parameter. §.§ Nonlinear optical conductivity : Pristine Graphene The linear and nonlinear response of the graphene system can be quantified by this single dimensionless parameter η, where η<<1 describes the linear regime and η>>1 denotes the nonlinear regime. Typically, one can divide the optical conductivity in four distinct regimes: (a) linear response in the clean regime, (b) linear response in the dirty regime, (c) nonlinear response in the clean regime, and finally, (d) nonlinear response in the dirty regime. Here, we are denoting the clean (dirty) regime as the collisionless or high-frequency limit (collisional or low frequency), and this can be quantified by the region γ^st_e/ω << 1 (γ^st_e/ω≥ 1) as the steady state value of γ_e(t). We consider γ_e=γ_e^st for further discussion. In order to study the conductivity of a system, we need to calculate the steady state current density operator in the direction of the applied electric field. For simplicity, we consider that the frequency dependent electric field is applied along the x axis and the response to the field is measured after the system attains the steady state. In general, the nonlinear response has a component in-phase with the applied field and another out-of-phase with it. To proceed further let us introduce the current density operator : j(t)=-g_sg_v/(2π)^2∫ dk j_k(t), where g_s (g_v) is the spin (valley) degeneracy factor (in our case, both are 2), the momentum dependent component of particle current density is j_k(t)=eTr[ρ_k(t)v⃗_k(t)]. Since the electric field is applied in the x direction, the x direction component of the momentum dependent current density in the steady state is given by : j_kx(t)_st= ev_F[cos(χ_k)⟨Π_k(t)⟩_st+sin(χ_k)⟨ Y_k(t)⟩_st], where the first term carries the contributions from the intraband transitions, while the second term includes the effect of interband contributions. It is observed that if one summed over all k⃗ vectors the intraband term does not contribute to the optical conductivity in graphene [20]. Following Ref.[20], one can obtain the steady state expressions of ⟨Π_k(t)⟩ and ⟨ Y_k^+(t)⟩ from Eqs. (13) and (14). Thus we can obtain the steady state form as: ⟨Π_k⟩_st=⟨Π_k⟩_eq[1+γ_eγ_pΩ^2_kγ^2_e+(Δ_k-ω)^2]^-1, and ⟨ Y_k⟩_st =-Ω_k⟨Π_k⟩_st[γ_ecosω t+(Δ_k-ω)sinω t](Δ_k-ω)^2+γ^2_e. It is well known that only the in-phase term of ⟨ Y_k(t)⟩_st contributes to the dissipative component of optical conductivity. Thus, the general expression for the nonlinear optical conductivity is given by : σ_xx=g_sg_ν(2π)^d∫e^2[v_Fsinχ_k]^2⟨Π_k⟩_stγ_eω((Δ_k-ω)^2+γ^2_e)dk. In light of the above discussion, the optical conductivity can be categorized into four regimes(lc,ld, nc, nd) by rewriting: γ_eγ_pΩ^2_k =[ηγ_esinχ_k]^2. 
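Before specialising to the four regimes, it is instructive to see how this parametrisation controls the angular integral in Eq. (30): on resonance (Δ_k = ω), the saturation of ⟨Π_k⟩_st reduces to a factor [1+η²sin²χ_k]⁻¹, so the angular average of sin²χ_k is progressively depleted as η grows. A short Python check of this average against the closed form that will appear in the nonlinear clean limit below:

import numpy as np
from scipy.integrate import quad

def angular_average(eta):
    # (1/pi) * int_0^{2pi} sin^2(chi) / (1 + eta^2 sin^2(chi)) dchi, normalised to 1 as eta -> 0
    val, _ = quad(lambda chi: np.sin(chi)**2 / (1.0 + eta**2 * np.sin(chi)**2), 0.0, 2.0 * np.pi)
    return val / np.pi

for eta in (0.1, 1.0, 3.0, 10.0):
    closed = (2.0 / eta**2) * (1.0 - 1.0 / np.sqrt(1.0 + eta**2))   # cf. the nonlinear clean result below
    print(eta, angular_average(eta), closed)

The average equals 1 in the linear regime η → 0 and falls off as 2/η² for η ≫ 1, which anticipates the saturation of the interband response discussed below.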
Let us now proceed to further discussion of the longitudinal optical conductivity in detail for all the four regimes. §.§.§ Linear clean regime : (η << 1, γ_e/ω << 1) In this regime we retain the zeroth order of η which enables us to consider ⟨Π_k⟩_st≈⟨Π_k⟩_eq in Eq.(28) and one may convert the Lorentzian part of (30) into a Dirac-delta function. Thus we obtain σ_xx=π g_sg_v e^2 v^2_Fω (2π)^2∫sin^2χ_kδ(Δ_k-ω) (f_ck-f_vk) dk, where, f_ck(f_vk) is the Fermi-Dirac distribution function for the conduction (valence) band. For graphene, the system has particle-hole symmetry and isotropic quasi-particle dispersions which enables one to write, σ_xx = π g_sg_v e^2 v^2_F g(ω,μ,T)ω (2π)^2∫sin^2χ_kδ(Δ_k-ω) dk = e^2 g(ω,μ,T)4, where g(ω,μ,T)=12[tanhω+2μ4k_BT+tanhω-2μ4k_BT], where μ is the chemical potential of the system. In the limt T→ 0, the function g(ω,α,T→ 0)=Θ(ω2-|α|) <cit.>,with Θ(x) as the Heaviside step function . As a result the conductivity at zero temperature becomes σ_xx =e^24Θ(ω2-|μ|). §.§.§ Nonlinear clean regime :(η≥ 1, γ_e/ω << 1) In the nonlinear clean limit, one may typically consider the Lorentzian by a Dirac delta function, and Eq. (30) reduces to σ_xx =π g_sg_ν e^2 v^2_Fω (2π)^2∫sin^2χ_kδ(Δ_k-ω)1+η^2sin^2χ_k (f_ck-f_vk) dk =e^2 g(ω,μ,T)42η^2[1-1√((1+η^2))]. As T→ 0 the conductivity reduces to σ_xx =e^242η^2[1-1√((1+η^2))]Θ(ω2-|μ|). §.§.§ Linear dirty regime:(η << 1, γ_e/ω≥ 1) In this limit we can again approximate ⟨Π_k⟩_st≈⟨Π_k⟩_eq in the lowest order of η. In contrary to the linear clean domain, we need to retain the Lorentzian part of the integrand in Eq.(30). Hence, the conductivity is given by, σ_xx =g_sg_ν e^2 v^2_Fω (2π)^2∫sin^2χ_k⟨Π_k⟩_eqγ_eγ^2_e+(Δ_k-ω)^2 dk. Although one needs to compute this integral numerically for any arbitrary finite temperature, the closed form of it can be obtained at zero temperature. At zero temperature the function g(ω,μ,T→ 0)=Θ(ħω2-|μ|) which helps to rewrite Eq.(38)at zero temperature as σ_xx =e^2γ_e4πω∫^2Λħ_2|μ|ħ[Δ_kγ^2_e+(Δ_k-ω)^2-(ω→ 0)] dΔ_k =e^2γ_e4πω[12ln(γ^2_e+(ω-Δ_k)^2)γ^2_e+Δ^2_k+ωγ_etan^-1Δ_k-ωγ_e]^2Λħ_2|μ|ħ. Here, Λ is nothing but the ultraviolet cut-off and it is usually considered as half of the bandwidth of graphene [ref]. §.§.§ Nonlinear dirty regime:(η > 1, γ_e/ω≥ 1) This regime can be considered as most general domain where we can not apply any kind of approximation, and one needs to apply the generalized expression of optical conductivity as given by Eq.(30). With the help of the full form of ⟨Π_k⟩_st one may obtain σ_xx =-g_sg_v e^2 v^2_Fω (2π)^2∫sin^2χ_k⟨Π_k⟩_stγ_eγ^2_e+(Δ_k-ω)^2 dk =-g_sg_v e^2 v^2_Fω (2π)^2∫sin^2χ_k⟨Π_k⟩_eqγ_e[(Δ_k-ω)^2+γ^2_e(1+η^2sin^2χ_k)] dk. This equation can be evaluated at zero temperature as follows: σ_xx = e^2γ_e4ω (π)^2∫^2π_0sin^2χ_k dχ_k × ∫^2Λħ_2|μ|ħ(Δ_k[(Δ_k-ω)^2+γ^2_e(1+η^2sin^2χ_k)]-(ω→ 0))dΔ_k = e^24(π)^2∫^2π_0sin^2χ_k dχ_k[f_1(ω,2Λħ)-f_1 (ω,2|μ|ħ)], where f_1(ω,x)=γ_eγ^2_1tan^-1Δ_k-ωγ^2_1+γ_e2ωln(γ^2_1+(ω-Δ_k)^2)γ^2_1+Δ^2_k, and γ^2_1=γ_e√(1+η^2sin^2χ_k). It is pertinent to mention here that the expression of conductivity (at zero temperature) in all regimes exactly matches with the previously obtained results [19]. Let us explain Figure 3. The colour plot of the optical conductivity of pristine graphene with various E_0 and different ω is shown in Figure (3a) and Figure (3b) at 300 K and 30 K temperatures, respectively. Here, the lines η=1 and ω=γ_e divide the entire colour plot into four regions : 1. linear clean (lc), 2. nonlinear clean (nc), 3. linear dirty (ld), and 4. 
nonlinear dirty (nd). It can be observed from the Figure (3) that the starting frequency of clean limit is red-shifted at low temperature (30 K) due to the temperature dependency of γ_e and γ_p. As a matter of fact one may observe that the position of the line corresponding to the Mischenko parameter η=1 changes with the change of temperatures. In the ‘nonlinear dirty’ region with high E_0, the optical conductivity does not increase with the increase of the field strength E_0 , rather it shows a saturation behavior. At high enough incident electric field the absorption coefficient decreases significantly due to the depletion of carriers in valence band and results in saturation effect of the optical conductivity. This typical behaviour of optical conductivity is also shown in Figure (3c) and Figure (3d) where the conductivity is almost zero in the dirty limit for E_0=10^6 V/m and E_0=10^5 V/m , respectively. In the clean limit, when ω>>γ_e, the optical conductivity of pristine graphene approaches its universal value (σ_0) for both high temperature (300 K) and low temperature (30 K) as shown in Figure (3c) and Figure (3d), respectively. § GAPPED GRAPHENE We now analyze the nonlinear optical conductivity for the case of a gap graphene that can be obtained by introducing a gap 2Δ in the band structure of graphene. For example, gapped can be introduced in graphene when it is epitaxially grown on any substrate [29]. Again, we follow the same kind of spin-Boson type model and quantum master equation method to analyze the nonlinear optical conductivity of gapped graphene. If a gap 2Δ is created in the graphene band structure, the Hamiltonian of the subsystem is modified as follows H_S =v_F(σ·k)+Δσ_z. The eigenvalues of H_S are given by ±√((Δ^2 +(v_F k)^2)). To simplify it further we may consider Δ=a_0cosθ and v_Fk=a_0sinθ, so that the eigenvalues become ± a_0 and the eigenfunctions of modified H_S are |c_k⟩ = [ exp(-iχ_k)cosθ2; sinθ2 ], |v_k⟩ = [ -(exp-iχ_k)sinθ2; cosθ2 ], where χ_k represents the angle formed by the k vector with x axis. Let us introduce the total Hamiltonian of the gapped graphene as follows : H(t)=H_S+H_ω(t)+H_SB+H_B, where, H_B and H_ω(t) have the same form as that of Eq.(4) and the second term of Eq. (5), respectively. As mentioned earlier our system Hamiltonian is simplified as : H_S=a_0Π_k, while, the interaction term becomes H_SB=Π_kX_e+(Y^+_k+Y^-_k)X_p. On the other hand, the `effective' ac term for gapped graphene can be written as : H^eff_ω (t) =[-exp(-i(ω - B_k)t)Ω_k Y^+_k -exp(i(ω - B_k)t)Ω^+_kY^-_k]/2, where Ω_k =(eEv_F/ω) (sinχ_k-icosθcosχ_k) Ω^+_k =(eEv_F/ω)(sinχ_k+icosθcosχ_k) B_k =2√(Δ^2+(v_Fk)^2), and, H_SB(t) =Π_kX_e(t)+[exp(iB_k t)Y^+_k+exp(-iB_k t)Y^-_k]X_p(t). With the help of similar approach as that of Sec.(IIB), we can write the equations which govern the dynamics of ⟨Π_k(t)⟩, and ⟨ Y_k^+(t)⟩ for gapped graphene as follows : ddt⟨Π_k (t)⟩ =[iΩ_k⟨ Y^+_k⟩exp(-iω t)-iΩ^+_k⟨ Y^-_k⟩exp(iω t)] -γ_p[⟨Π_k (t)⟩-⟨Π_k⟩_eq], and ddt⟨ Y^+_k (t)⟩=-iΩ^+_k2⟨Π_k⟩exp(iω t)-(γ_e-iB_k)⟨ Y^+_k⟩. Again we can introduce the momentum resolved current density of the gapped graphene along the applied electric field direction as j_kx = ev_F[sinθcosχ_k⟨Π_k⟩+(cosθcosχ_k+isinχ_k)⟨ Y^+_k⟩ + (cosθcosχ_k-isinχ_k)⟨ Y^-_k⟩]. In the steady state, the momentum dependent current density for gapped graphene is j_kx (t)_st = ev_F[sinθcosχ_k⟨Π_k(t)⟩_st+cosθcosχ_k × [⟨ Y^+_k(t)⟩_st+⟨ Y^-_k(t)⟩_st] +isinχ_k[⟨ Y^+_k(t)⟩_st-⟨ Y^-_k(t)⟩_st]]. 
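Before turning to the conductivity, the gapped eigenbasis used above can be checked numerically. The Python sketch below (with ħ = 1 and arbitrary illustrative values of Δ and k, both our assumptions) verifies that |c_k⟩ and |v_k⟩ are eigenstates of H_S with eigenvalues ±a_0 = ±√(Δ²+(v_F k)²), and that H_S = a_0Π_k in this basis.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

vF, Delta = 1.0, 0.3                    # illustrative values, hbar = 1
kx, ky = 0.25, 0.4
chi = np.arctan2(ky, kx)
a0 = np.hypot(Delta, vF * np.hypot(kx, ky))
theta = np.arccos(Delta / a0)           # Delta = a0 cos(theta), v_F k = a0 sin(theta)

H = vF * (kx * sx + ky * sy) + Delta * sz          # gapped Dirac Hamiltonian
c = np.array([np.exp(-1j * chi) * np.cos(theta / 2), np.sin(theta / 2)])      # conduction state
v = np.array([-np.exp(-1j * chi) * np.sin(theta / 2), np.cos(theta / 2)])     # valence state

print(np.allclose(H @ c, a0 * c), np.allclose(H @ v, -a0 * v))   # eigenvalues +/- a0
Pi = np.outer(c, c.conj()) - np.outer(v, v.conj())
print(np.allclose(H, a0 * Pi))                                    # H_S = a0 * Pi_k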
We can write the conductivity for gapped graphene as follows : σ_xx =-g_sg_ν e^2 v^2_Fω (2π)^2∫(sin^2(χ)+cos^2θcos^2(χ) ⟨Π_k⟩_st ×γ_e(B_k-ω)^2+γ^2_e dk, where ⟨Π_k⟩_st=⟨Π_k⟩_eq[1+η^2 γ^2_e (sin^2χ_k+cos^2χ_kcos^2θ)γ^2_e+(B_k-ω)^2]^-1. §.§ Nonlinear optical conductivity We want to study the optical conductivity of a gapped graphene for different regimes similar to the pristine graphene, as discussed earlier. Here, for calculating the conductivity, assumptions are made similar to the Sec.(IID). §.§.§ Linear clean limit: (η << 1, γ_e)ω << 1) In this limit, the conductivity can be written as σ_xx =e^2 g_sg_ν g(ω,α,T)16[4Δ^2ω^2+1], where α = max[μ,Δ] [16]. The function g(ω,α,T) is given by g(ω,α,T)=12[tanhω+2α4k_BT+tanhω-2α4k_BT]. In the limiting case of T→ 0, g(x)→Θ(x) and σ_x =e^24[4Δ^2ω^2+1]Θ(ω2-α). One may observe that the effect of chemical potential is unimportant if it lies inside the gap (μ < Δ). One can recover the normal graphene results in the limit Δ→ 0. §.§.§ Nonlinear, clean limit:(η≥ 1, γ_e/ω << 1) In this regime, the Lorentzian can be approximated by a delta function, and one obtains σ_xx =e^2g(ω,α,T)2η^2[1-1√(1+η^2)(1+4Δ^2 η^2ω^2)^-12]. §.§.§ Linear dirty limit:(η << 1,γ_e/ω≥ 1) In this limit the conductivity is given as, σ_xx = -g_sg_ν e^2 v^2_Fω (2π)^2∫(cos^2θcos^2χ+sin^2χ )⟨Π_k⟩_eq × γ_eγ^2_e+(B_k-ω)^2 dk, Now one can obtain closed form expression for zero temperature, and it is given by σ_xx = e^2 γ_e4πω∫^2Λħ_2|μ|ħ[4Δ^2 + B_k^2B_k(γ^2_e+(B_k -ω)^2)-(ω→ 0)] dB_k =e^2γ_e4πω[f_2(ω,2Λħ)-f_2 (ω,2|μ|ħ)], where f_2(w,x) =(1+y)tan^-1(x-ωγ_e)-γ^2_e-4ħ^-2Δ^22γ_eω × ln[x^2+γ^2_e] +γ_e(1-y)2ωln[(x-ω)^2+γ^2_e]-ω yγ_elnx, and y=4ħ^-2Δ^2ω^2 + γ^2_e. §.§.§ Nonlinear,dirty regime:(η > 1, γ_e/ω≥ 1) In this limit, we have to use the most generalized expression as given by Eq.(54). Thus, the conductivity has the form, σ_xx =-g_sg_ν e^2 v^2_Fω (2π)^2∫ A(θ,χ) ⟨Π_k⟩_stγ_eγ^2_e+(B_k-ω)^2 dk = -g_sg_ν e^2 v^2_Fω (2π)^2∫A(θ,χ) ⟨Π_k⟩_eqγ_e[(B_k-ω)^2+γ^2_e(1+η^2A(θ,χ))]dk, where A(θ,χ)=(sin^2χ+cos^2θcos^2χ). It can be noted that our results for the optical conductivity (lc, ld, nc, nd regimes) of gapped graphene are in good agreement with the previous study of A. Singh et. al [19]. Now we can do numerical evaluation of optical conductivity. As one may observe, the Figure (4a) and Figure (4b) demonstrate the plots of the optical conductivity of gapped graphene, which are also divided into the four regimes by the two lines η=1 and ω=γ_e as we previously mentioned for pristine graphene. These two lines differ for both graphs depending on temperature. On the other hand, Figure (4c) and Figure (4d) show the variation of optical conductivity with the frequency ω for different incident electric fields at 300 K and 30 K temperatures, respectively. Here we can see an intriguing phenomenon for 30 K, where throughout the dirty region (Figure (4b)) the optical conductivity assumes a saturation value near zero irrespective of applied electric field strength. Figure (4d) clearly shows the same kind of phenomenon where the conductivity is almost zero up to 10^14 Hz for all three different incident electric fields. Then there is a sudden jump in conductivity after a specific frequency inferring the major role of the band gap behind the process. Further to investigate the role of bandgap in the low-temperature optical conductivity process of the gapped graphene, optical conductivity is calculated for different values of Δ. 
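A quick way to anticipate the role of the gap is to evaluate the linear clean-limit expression numerically. The Python sketch below tabulates σ_xx/σ_0, with σ_0 = e²/4, g_s g_v = 4 and ħ = k_B = 1, for illustrative values of Δ, μ and T chosen by us: for a sizeable gap at low temperature the response is essentially zero below ω ≈ 2Δ and jumps above it, while for Δ → 0 the universal pristine value is recovered.

import numpy as np

kB = 1.0                                   # units with hbar = k_B = 1 (illustrative)

def g(w, alpha, T):                        # thermal factor g(omega, alpha, T)
    return 0.5 * (np.tanh((w + 2 * alpha) / (4 * kB * T)) + np.tanh((w - 2 * alpha) / (4 * kB * T)))

def sigma_lc(w, Delta, mu, T):             # linear clean limit, in units of sigma_0 = e^2/4
    alpha = max(mu, Delta)
    return g(w, alpha, T) * (4 * Delta**2 / w**2 + 1.0)

ws = np.linspace(0.05, 3.0, 7)
for Delta in (0.0, 0.5):                   # gapless vs gapped case
    print(Delta, [round(sigma_lc(w, Delta, mu=0.0, T=0.02), 3) for w in ws])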
Figures (5a) and (5b) illustrate the frequency dependency of the optical conductivity of the gapped graphene with different band gaps (2Δ) at 300 K and 30 K temperatures. For 300 K temperature, the optical conductivity varies continuously with frequency for all the band gap values (Figure (5a)). But the case becomes more interesting for low temperature (30 K), where, at a prominent band gap value, the conductivity changes abruptly after a particular frequency as shown in Figure 5b. This abrupt change in conductivity does not occur for small values of the bandgap rather the conductivity changes gradually. At lower temperatures (30 K), following the Fermi distribution, the conduction band (upper Dirac Cone) lacks carriers. For pristine graphene, the gap-less band structure helps to absorb a broad spectrum of the incident light in the linear dirty region, hence a non-zero absorption coefficient or non-zero optical conductivity can be observed even at a low temperature, as reflected in Figure (3b) and Figure (3d). But for gapped graphene, carriers only can absorb the optical pulse when a minimum 2Δ amount of energy is supplied by the external driving field of the optical pulse to the carriers for the required bandgap transition. At any incident frequency lower than the bandgap frequency (ħω= 2Δ), the optical conductivity becomes almost zero irrespective of electric field strength ( Figure (4b) and Figure (4d)). When the frequency of optical pulse reaches a specific frequency equivalent to bandgap, it is instantaneously absorbed by the carriers. Consequently, the optical conductivity of the gapped graphene takes a sudden jump at that particular frequency (Figure (4d) and Figure (5b)). § CONCLUSION Our starting point in this paper has been the microscopic spin-boson approach to the nonlinear optical conductivity problem in graphene [20] which is a generalization of the phenomenological rate theory calculation [18, 19]. Like Ref. [18] and [20], we work in the rotating wave approximation in which terms off-resonant with the applied oscillatory field are ignored. However, we have gone beyond in making elaborate analyses of the relaxation rates mediated by the surrounding phonons and electrons of the graphene system. For this, separate spectral densities for the phonon and the electron baths have been incorporated in the analysis. Of special renewed interest has been the transient non-Markov regimes wherein strong quantum effects are observed that can be probed by presently available ultrafast spectroscopy techniques. We have also carefully delineated the Markov and non-Markov domains and transitions between them. One other feature that we have investigated here in detail, which was not covered in our earlier work [20], is the case of gapped graphene that brings-in different attributes [19]. A detailed analysis reveals characteristic properties of the graphene system in different regions of the Mischenko parameter values. Although our interest in this paper has been restricted to the quantum solid of graphene, the methodology employed here is of relevance to general theoretical methods for dissipative behaviour of open quantum systems that belong to Non-equilibrium Statistical Mechanics. The resultant treatment sheds further light on the phenomenological approach adopted in Ref. [19] in terms of our microscopic method in which the bath parameters such as the cutoff frequency and the temperature appear explicitly. 
However, the mathematical formalism is brought to the domain of experiments on relaxation studies in graphene (such as Ref. [27] and [28]). § ACKNOWLEDGEMENTS B.G is supported by INSPIRE, DST, Government of India (IF200292). SD is grateful to the Indian National Science Academy for support through their Honorary Scientist scheme. M.B. is supported by the Department of Science and Technology (DST), Government of India under the Core grant (Project No. CRG/2020//001768) and MATRICS grant (Project no. MTR/2021/000566). § SPIN-LATTICE RELAXATION TIME AT HIGH-T γ_p =4∫^t_0 cos(Δ_kτ)dτ∫^∞_0J_p(ω)2k_B Tħωcos(ωτ) dω =16α_ek_B Tω_cpħ[∫^t_0 dτcos(Δ_kτ)(1-3ω^2_cpτ^2)(1+ω^2_cpτ^2)^2], where the integrand part becomes, ∫^t_0 dτcos(Δ_kτ)(1-3ω^2_cpτ^2)(1+ω^2_cpτ^2)^2 =Δ^2_kω^3_cp[i cosh (Δ_kω_cp)[ Ci(Δ_k t+iΔ_kω_cp) -Ci(Δ_k t-iΔ_kω_cp)-Ci(iΔ_kω_cp)+Ci(-iΔ_kω_cp)-iπ] +sinh (Δ_kω_cp)[ Si(-Δ_k t+iΔ_kω_cp)-Si(Δ_k t+iΔ_kω_cp)] +tcos(Δ_k t)(ω^2_cpt^2+1)^2-Δ_ksin(Δ_k t)2(ω^4_cpt^2+ω^2_cp). § SPIN-LATTICE RELAXATION TIME AT LOW-T γ_p(t) =24α_pω_cp[∫_0^ω_cptdx[1-6x^2+x^4]/[1+x^2]^4]cos(bx) +2 ∫_0^ω_cptdx [(1+a_p)^4-6(1+a_p)^2x^2+x^4]/[(1+a_p)^2+x^2]^4cos(bx) ]. The first integrand part is, ∫_0^ω_cptdx[1-6x^2+x^4]/[1+x^2]^4]cos(bx) =112[ b^3(isinh(b )Ci(-b(l-i))-isinh(b)Ci(b(l+i)) +cosh(b)Si(b(x+i))-Si(-b(l-i))) +2l(b^2(l^2+1)^2 - 2l^2 +6)cos(bl)(l^2+1)^3+2b(l^2-1)sin(bl)(l^2+1)^2], and the second integrand becomes, ∫_0^ω_cptdx [(1+a_p)^4-6(1+a_p)^2x^2+x^4]/[(1+a_p)^2+x^2]^4cos(bx) =16[b(-a^2_p-2a_p+l^2 _1)sin(bl)(a^2_p+2a_p+l^2+1)^2 +lcos(bl)(a^4_pb^2 +4a^3_pb^2+2a^2_p(b^2 (l^2+3)+3)(a^2_p+2a_p+l^2+1)^3 +(4a_p (b^2 (l^2+1)+3)+b^2(l^2 +1)-2l^2 +6))lcos(bl)(a^2_p+2a_p+l^2+1)^3 +b^32(isinh((a_p +1)b)(Ci(ib(a_p+il+1))-Ci(ib(a_p-il+1))) +cosh((a_p +1)b)(Si(b(ia_p +l+i))-iShi(b(a_p+il+1)))], where a_p=ω_cp/k_BT, l=ω_cpt and b=Δ_𝐤/ω_cp 99 ref1 M. I. Kastnelson, Carbon in Two Dimensions (Cambridge University Press, Cambridge, UK, 2012). ref2 A. K. Geim and K. S. Novoselov, The rise of graphene, Nat. Mater. 6, 183 (2007). ref3 S. Dattagupta, Carbon hybridization to tight binding to Dirac solid—The wonder laboratory of graphene, Resonance 25, 249 (2020). ref4 K. I. Bolotin, K. J. Sikes, Z. Jiang, M. Klima, G. Fudenberg, J. Hone, P. Kim, and H. L. Stormer, Ultrahigh elec- tron mobility in suspended graphene, Solid State Commun. 146, 351 (2008). ref5 Xu Du, Ivan Skachko, Anthony Barker, and Eva Y. Andrei, Suspended Graphene: a bridge to the Dirac point, Nature Nanotechnology 3, 491 (2008). ref6 M. Ezawa, Monolayer Topological Insulators: Silicene, Germanene and Stanene, J. Phys. Soc. Jpn. 84, 121003 (2015). ref7 C. W. J. Beenakker, Specular Andreev Reflection in Graphene, Phys. Rev. Lett. 97, 067007 (2006). ref8 C. W. J. Beenakker, Colloquium: Andreev reflection and Klein tunneling in graphene, Rev. Mod. Phys. 80, 1337 (2008). ref9 N. Stander, B. Huard, and D. Goldhaber-Gordon, Evidence for Klein Tunneling in Graphene p-n Junctions, Phys. Rev. Lett. 102, 026807 (2009). ref10 V. I. Falko, K. Kechedzhi, E. McCann, B. L. Altshuler, H. Suzuura, and T. Ando, Weak localization in graphene, Solid State Communications 143, 33 (2007). ref11 Y. Zhang, Y-W Tan, H. L. Stormer, and P. Kim, Experimental observation of the quantum Hall effect and Berry's phase in graphene, Nature 438, 201 (2005). ref12 K. S. Novoselov, Z. Jiang, Y. Zhang, S. V. Morozov, H. L. Stormer, U. Zeitler, J. C. Maan, G. S. Boebinger, P. Kim, and A. K. Geim, Room-temperature quantum Hall effect in graphene, Science 315, 1379 (2007). ref13 J. Inoue, A. 
Yamakage, and S. Honda, Graphene in Spintronics: Fundamentals and Applications, (CRC Press, 2016). ref14 E. Malic, T. Winzer, E. Bobkin, and A. Knorr, Microscopic theory of absorption and ultrafast many-particle kinetics in graphene, Phys. Rev. B 84, 205406 (2011). ref15 J. J. Dean and H. M. van Driel, Second harmonic generation from graphene and graphitic films, Appl. Phys. Lett. 95, 261910 (2009). ref16 T. Gu, N. Petrone, J. F. McMillan, A. van der Zande, M. Yu, G. Q. Lo, D. L. Kwong, J. Hone, and C.W.Wong, Regenerative oscillation and four-wave mixing in graphene optoelectronics, Nat. Photonics 6, 554 (2012). ref17 Nathalie Vermeulen, David Castelló-Lurbe, JinLuo Cheng, Iwona Pasternak, Aleksandra Krajewska, Tymoteusz Ciuk, Wlodek Strupinski, Hugo Thienpont, and Jürgen Van Erps, Negative Kerr Nonlinearity of Graphene as seen via Chirped-Pulse-Pumped Self-Phase Modulation, Phys. Rev. Applied 6, 044006 (2013). ref18 E. G. Mishchenko, Dynamic Conductivity in Graphene Beyond Linear Response, Phys. Rev. Lett. 103, 246802 (2009). ref19 A. Singh, K. I. Bolotin, S. Ghosh, and A. Agarwal, Nonlinear optical conductivity of a generic two-band system with application to doped and gapped graphene, Phys. Rev. B 95, 155421 (2017). ref20 S. Dattagupta, Spin-boson model of quantum dissipation in graphene: Nonlinear electrical response, Phys. Rev. B 104, 085411 (2021) ref21 S. Dattagupta and S. Puri, Dissipative Phenomena in Condensed Matter (Springer-Verlag, Berlin, 2004). ref22 Vladimir I. Chizhik , Yuri S. Chernyshev , Alexey V. Donets , Vyacheslav V. Frolov , Andrei V. Komolkin , Marina G. Shelyapina, Magnetic Resonance and Its Applications, (Springer, 2014). ref23 R. Kubo, Statistical-Mechanical Theory of Irreversible Processes. I. General Theory and Simple Applications to Magnetic and Conduction Problems, J. Phys. Soc. Jpn. 12 (6): 570–586 (1957). ref24 U. Weiss, Quantum Dissipative Systems (World-Scientific, Singapore, 1993). ref25 L.D. Chang and S. Chakravarty, Dissipative dynamics of a two-state system coupled to a heat bath, Phys. Rev. B 31, 154 (1985). ref26 J. M. Luttinger, Transport theory, in Mathematical Methods in Solid State and Superfluid Theory, edited by R. C. Clark and G. H. Derrick (Springer-Verlag, Boston, 1968). ref27 Z. Zhang and P. L. Voss, Full-band quantum-dynamical theory of saturation and four-wave mixing in graphene, Opt. Lett. 36, 4569 (2011). ref28 Z. Zhang and P. L. Voss, A quantum-dynamical theory for nonlinear optical interactions in graphene, arXiv:1106.4838. ref30 T. G. Pedersen, A.-P. Jauho, and K. Pedersen, Optical response and excitons in gapped graphene, Phys. Rev. B 79, 113406 (2009).
http://arxiv.org/abs/2307.01520v1
20230704070037
LEAT: Towards Robust Deepfake Disruption in Real-World Scenarios via Latent Ensemble Attack
[ "Joonkyo Shim", "Hyunsoo Yoon" ]
cs.CV
[ "cs.CV", "cs.AI" ]
LEAT: Towards Robust Deepfake Disruption in Real-World Scenarios via Latent Ensemble Attack Joonkyo Shim and Hyunsoo Yoon ============================================================================================= Deepfakes, malicious visual contents created by generative models, pose an increasingly harmful threat to society. To proactively mitigate deepfake damages, recent studies have employed adversarial perturbation to disrupt deepfake model outputs. However, previous approaches primarily focus on generating distorted outputs based only on predetermined target attributes, leading to a lack of robustness in real-world scenarios where target attributes are unknown. Additionally, the transferability of perturbations between two prominent generative models, Generative Adversarial Networks (GANs) and Diffusion Models, remains unexplored. In this paper, we emphasize the importance of target attribute-transferability and model-transferability for achieving robust deepfake disruption. To address this challenge, we propose a simple yet effective disruption method called Latent Ensemble ATtack (LEAT), which attacks the independent latent encoding process. By disrupting the latent encoding process, it generates distorted output images in subsequent generation processes, regardless of the given target attributes. This target attribute-agnostic attack ensures robust disruption even when the target attributes are unknown. Additionally, we introduce a Normalized Gradient Ensemble strategy that effectively aggregates gradients for iterative gradient attacks, enabling simultaneous attacks on various types of deepfake models, involving both GAN-based and Diffusion-based models. Moreover, we demonstrate the insufficiency of evaluating disruption quality solely based on pixel-level differences. As a result, we propose an alternative protocol for comprehensively evaluating the success of defense. Extensive experiments confirm the efficacy of our method in disrupting deepfakes in real-world scenarios, reporting a higher defense success rate compared to previous methods. § INTRODUCTION With the remarkable success of generative models, such as Generative Adversarial Networks (GANs) and Diffusion Models, it has become increasingly feasible for anyone to create realistic images and videos. Unfortunately, this advancement has given rise to a concerning issue in our society known as deepfakes, which involve the malicious use of fabricated visual content. Until now, the primary approach to combating deepfakes has been through passive defense mechanisms, which employ deepfake detection systems that assess whether an image or video is authentic or manipulated <cit.>. However, these systems cannot completely prevent deepfakes since they only work after fake images have already spread over social media. To address this issue, an alternative approach called active defense has been proposed <cit.>. This method introduces human-imperceptible perturbations into source images through adversarial attacks so that deepfake models generate distorted images. By incorporating perturbations in advance, it becomes possible to prevent an image from being used as a source for creating deepfakes. To achieve effective disruption of deepfakes in real-world scenarios, two types of transferability are crucial: (1) target attribute-transferability and (2) model-transferability.
Both are necessary to ensure that a single perturbation can cover all target attributes within a specific model and simultaneously affect different models. Previous methods <cit.> employ ensemble strategies, known as Image Attack, to achieve these transferabilities by averaging losses between generated images while considering all possible target attributes for each model. However, the emergence of text-driven manipulation models like StyleCLIP <cit.> presents a challenge in achieving target attribute-transferability due to unlimited manipulation possibilities. Consequently, handling all potential targets becomes infeasible. While generating perturbations assuming several known target attributes is possible, it does not guarantee their effectiveness in a gray-box scenario where the target attributes are unknown. In Figure <ref> (Image Attack), an ensemble strategy is used to attack five known target attributes, but it fails to effectively disrupt the output in gray-box scenarios, resulting in only minor changes to the background. Additionally, the development of Diffusion Models presents another challenge in terms of model-transferability. Previous studies have targeted either GAN-based models or Diffusion-based models, but the possibility of simultaneously targeting both types of models remains unexplored. In this work, our objective is to effectively achieve the two aforementioned transferabilities. Firstly, we propose a Latent Ensemble Attack (LEAT) to achieve robust target attribute-transferability. Given that malicious individuals manually determine the selection of target attributes, it is critical to ensure robust disruption in both white-box scenarios, where the deepfake model and the target attributes are explicitly known, and gray-box scenarios, where the target attributes are unknown. To achieve this, we divide deepfake models into two distinct processes: the latent encoding process, which encodes the semantic information of the input, and the generation process, which decodes the latent and target attributes to produce the desired output. Since the target attributes are only utilized in the generation process, we exploit the independence of the latent encoding process. Motivated by this, LEAT exclusively attacks the intermediate latent space, leaving the generation process unused. By disrupting the latent encoding process, subsequent disruption occurs in the generation process, irrespective of any specific target attribute. Consequently, LEAT achieves robust disruption in gray-box scenarios, even against deepfake models that present unlimited target guidance. Unlike previous methods focusing on averaging the losses from every possible pair of output images, LEAT effectively achieves target attribute-transferability in a target attribute-agnostic manner, without generating any output image. Furthermore, LEAT dramatically reduces perturbation generation time compared to the Image Attack approach by forwarding only the latent encoding process and attacking the latent space of each model once. Secondly, to achieve effective model-transferability, we propose a Normalized Gradient Ensemble, which is designed to obtain improved gradient directions for generating perturbations in the adversarial attack. Our ensemble strategy aggregates the gradient of each model while accounting for the scale differences among them. This ensures that all models contribute significantly during the ensemble process, enabling effective attacks on multiple models simultaneously.
Unlike previous approaches that mainly focus on GAN-based face attribute manipulation models, our method targets all three categories of deepfakes, including face attribute manipulation, face swapping, and face reenactment. This encompasses both GAN-based and Diffusion-based models, which have substantially different structures. Additionally, we demonstrate that the perturbation generated by our method can be effectively applied to a black-box scenario, where we may even have no prior knowledge about the specific deepfake models involved. Our contributions are summarized as follows: * We propose the Latent Ensemble Attack (LEAT), a method for achieving fully target attribute-agnostic deepfake disruption. LEAT focuses on attacking the latent encoding process without relying on specific target attributes, thus ensuring robust target attribute-transferability even in the gray-box scenarios, where the target attributes are unknown. * We introduce the Normalized Gradient Ensemble, an ensemble strategy designed to achieve effective model-transferability by aggregating the gradients of target models. Our strategy demonstrates high scalability to deepfake models and encompasses all three categories of deepfake. This is the first approach that simultaneously targets both GAN-based and Diffusion-based models. * Through comprehensive experiments, we demonstrate the effectiveness of our method in disrupting both the intermediate latent space and the output image in real-world scenarios. This includes white-box, gray-box, and even black-box scenarios, where the specific deepfake models are unidentified. § RELATED WORKS §.§ Deepfake Methods Generative Adversarial Networks (GANs) have gained popularity for their ability to generate highly realistic images and videos. More recently, Diffusion Models have emerged as another prominent approach for generating visually appealing contents. However, any output produced by these models can be considered deepfakes when exploited maliciously. Deepfakes are typically classified into three main categories: face attribute manipulation, face swapping, and face reenactment. Face attribute manipulation <cit.> involves modifying specific facial attributes to achieve desired characteristics, such as altering hairstyles or expressions. Most notably, StyleCLIP <cit.> combines StyleGAN <cit.> and CLIP <cit.> to enable text-driven manipulation, expanding the range of potential manipulations beyond predetermined guidance. Face swapping <cit.>, on the other hand, involves extracting the face from a source image and seamlessly injecting it into the facial part of a target individual. Face reenactment <cit.> focuses on transforming the source face to mimic the emotion and movements observed in the driving image or video. In our work, we simultaneously target four different models, covering all three categories mentioned above. By attacking these models collectively, we aim to address the challenges posed by a variety of deepfake generation techniques. §.§ Adversarial Attack Since the publication of <cit.>, which highlights the vulnerability of deep neural networks to imperceptible perturbations, various methods have been developed to generate adversarial examples specifically targeting classification models. <cit.> introduces a fast gradient sign method (FGSM) as one-step gradient attack to update each input pixel. <cit.> proposes the Iterative-FGSM, which performs gradient attack iteratively. 
<cit.> presents the projected gradient descent (PGD) methods, starting from randomly perturbed input to conduct a similar iterative attack. Furthermore, <cit.> applies adversarial attack to generative models and explores the possibility of attacking the latent vector of VAE-GAN, revealing the potential for latent attacks. Recently, <cit.> demonstrates the creation of adversarial examples for Diffusion Models. §.§ Deepfake Disruption Previous studies have introduced active defense techniques against deepfake models by employing adversarial perturbations <cit.>. <cit.> extends this approach to disrupt three different categories of deepfake models at the same time. <cit.> proposes a universal adversarial watermark to enable cross-model and cross-image attacks. <cit.> utilize neural networks to generate image-specific perturbations. These methods primarily focus on Image Attack, aiming to maximize or minimize the distance between the distorted output image and a specific target. There have also been attempts to disrupt the feature extraction module. <cit.> proposes a two-stage approach that first attacks the feature extractor of deepfake models and then performs end-to-end Image Attack. While they explore the impact of attacking the feature extraction module, they still rely on Image Attack to achieve better performance. Moreover, their approach assumes that the latent representation of each model is a feature map, which restricts their target models. In contrast, our method is the first approach that exclusively focuses on attacking the latent encoding process of each model, enabling a fully target attribute-agnostic attack. Moreover, our method is capable of disrupting multiple categories of deepfake models, regardless of the structural differences. § METHODS In this section, we introduce the mechanism of deepfake disruption. We then differentiate between the previous Image Attack approach and our proposed LEAT. Finally, we describe our Normalized Gradient Ensemble strategy. §.§ Disruption of Deepfake Models In general, the process of deepfake models can be formulated by: y=G(X,c), where X represents the source image, c denotes the target attribute, G is the generative model. To disrupt an image based on a specific model and target attribute, human-imperceptible perturbation η is added to the source image to maximize the difference between the original output and the perturbed output by: max_η L(G(X,c),G(X+η,c)), s.t. η_∞≤ϵ, where ϵ controls the magnitude of the perturbation. To obtain optimal η, FGSM <cit.> can be adopted as follows: η=ϵ sign[∇_X L(G(X,c),G(X+η,c))]. With I-FGSM <cit.> and PGD <cit.>, more powerful adversarial perturbation can be obtained through iterative gradient updates: X_t+1=clip(X_t+a sign[∇_X_t L(G(X_t,c),G(X_t+η,c))]), where a is the step size for each iteration and clip function keeps X_t in the range [X-ϵ, X+ϵ] at every t. The key difference is that the source X starts with random perturbation in PGD. To disrupt multiple models and target attributes simultaneously, <cit.> and <cit.> propose ensemble strategies to maximize the average distance between generated outputs across all possible models and target attributes. We consider their method Image Attack, formulated by: max_η∑_k^∑_c_k^L(G_k(X,c_k),G_k(X+η ,c_k)), s.t. η_∞≤ϵ, where G_k and c_k denote the generative models and their corresponding target attributes, respectively. 
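To make the Image Attack formulation above concrete, the following is a minimal PGD-style sketch. The callable interface G(x, c) for each generative model, the per-model lists of known target attributes, the [0, 1] image range, and the hyper-parameter values are illustrative assumptions for this sketch, not the authors' released code.

```python
import torch

def image_attack(x, models, targets, eps=0.05, step=0.01, iters=30):
    """PGD-based Image Attack: maximize the distortion of the generated outputs,
    averaged over every (model, known target attribute) pair, while keeping the
    perturbation inside the eps-ball in the L_inf norm."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)        # PGD random start
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = 0.0
        for G, attrs in zip(models, targets):                  # ensemble over models ...
            for c in attrs:                                    # ... and their known targets
                with torch.no_grad():
                    y_clean = G(x, c)                          # original (undisrupted) output
                loss = loss + torch.nn.functional.mse_loss(G(x_adv, c), y_clean)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + step * grad.sign()                     # ascend along the sign of the gradient
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back to the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # keep a valid image (assumes [0, 1] inputs)
    return (x_adv - x).detach()                                # the perturbation eta
```

The inner loop over target attributes is exactly what becomes infeasible when the target space is unbounded, which motivates the latent attack described next.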
As illustrated in Figure <ref>, the Image Attack first generates original and perturbed outputs for all known target attributes of each model and then aggregates the mean squared error (MSE) loss between them. Note that each deepfake model can have different types and numbers of target attributes. §.§ Latent Ensemble Attack We point out that most recent generative models can be defined as a two-stage process: (1) the latent encoding process and (2) the generation process, formulated as follows: y=G(E(X),c). In the latent encoding process, the input's semantic information is encoded by the latent encoder E. The generator G then decodes the latent representation and the target attribute to obtain the desired image. Given that E(X) contains rich semantics, we demonstrate that attacking the latent encoding process can significantly mislead the generator's starting point, which makes it impossible to generate the desired image regardless of the target attributes. Based on these insights, we propose Latent Ensemble Attack (LEAT), which solely focuses on attacking the latent encoding process: max_η∑_k^L(E_k(X),E_k(X+η)), s.t. η_∞≤ϵ. In LEAT, the latent encoding process is independent of the target attributes, allowing for fully target attribute-agnostic attack. This means that the optimal perturbation can be obtained without relying on any target attribute throughout the entire process. Since the attack is irrelevant of the target attributes, a successful disruption in the latent space guarantees robust disruption even when unknown target attributes are given. Consequently, LEAT achieves target attribute-transferability more effectively than the Image Attack approach. Moreover, LEAT is significantly faster than Image Attack because it does not generate any output image and only focuses on attacking the latent encoding process, as illustrated in Figure <ref>. Specifically, LEAT extracts the latents for target models and calculates the loss between the latent for each model separately. These losses are then ensembled across the target models. Since the loss is calculated separately, we eliminate the need for any assumptions or prior knowledge about the latent representation. This enables a model-agnostic disruption that can accommodate varying shapes and semantics in latent representations. In contrast, the disruption of feature extraction module in <cit.> is based on an assumption that the latent representation of each model is a feature map. They aggregate the feature maps by resizing them to a fixed shape and summing them before calculating the loss. Consequently, their approach is restricted to models where the latent is in the form of a feature map, limiting the scalability to models. §.§ Normalized Gradient Ensemble To aggregate the loss across the models and compute the gradient for iterative adversarial attack, the commonly used approach is Loss Ensemble, G_loss=∇_X∑_k^ω_k L(M_k(X),M_k(X+η)), where M_k can be either the latent encoder E_k for LEAT or the entire model G_k for Image Attack. For the Image Attack, the loss of each model is calculated as the average loss following Eq.(<ref>). However, if the weights ω_k for each model are not chosen appropriately, the Loss Ensemble approach exhibits biased attacks towards vulnerable models. To address this issue, <cit.> proposes Hard Model Mining (HMM), which attacks the hardest model at each iteration by updating the minimum loss among the models as follows: G_hmm=∇_X min L(M_k(X),M_k(X+η)). 
<cit.> points out that computing valid gradients becomes difficult when the gradients from the models are different. To obtain a better gradient direction, they propose Gradient Ensemble as follows: G_grad=∑_k^1/K∇_XL(M_k(X),M_k(X+η)), where the gradient is computed for K different models separately and then summed up. However, we have found that both HMM <cit.> and Gradient Ensemble <cit.> remain sensitive to individual model, because their methods do not consider the relative scale differences between the models. This sensitivity hampers the scalability to target models, leading to the failure of disruption even if the loss or gradient of one particular model has a significantly different scale. Consequently, these methods primarily disrupt the vulnerable model and fail to effectively attack others. To address the issue of the scale difference misleading the gradient direction, we propose Normalized Gradient Ensemble, formulated by: G_normgrad=∑_k^Norm(∇_X L(M_k(X),M_k(X+η))), where the gradient of each model is divided by its L_2 norm and then summed up. This ensures that the gradients are brought to the same scale, allowing all models to have an equal impact on the ensemble process. It leads to effective model-transferability, without exhibiting bias towards a particular model. We demonstrate that our simple normalization technique not only plays a key role in disrupting multiple deepfake models simultaneously, but also enhances the scalability to target models. Any model can be incorporated into the ensemble process without the concern of overwhelming its contribution. Once the gradients are aggregated, an iterative update is performed as described in Eq.(<ref>): X_t+1=clip(X_t+a sign[G_normgrad]). The whole process is described in Algorithm <ref>. § EXPERIMENTS In this section, we provide an overview of our implementation. We then outline the evaluation metrics we have defined. Next, we present the results of our LEAT in comparison to Image Attack. Additionally, we compare the performance of our Normalized Gradient Ensemble with previous ensemble methods. Lastly, we assess the transferability of our method in a black-box scenario. §.§ Implementation Details In our experiments, we use the CelebA-HQ <cit.> dataset consisting of 30,000 high-quality facial images. From this dataset, we select 500 images as the source for protection. We employ three types of deepfake models: StyleCLIP <cit.> and Diffusion Autoencoders <cit.> for face attribute manipulation, SimSwap <cit.> for face swapping, and ICface <cit.> for face reenactment. For StyleCLIP, we use five target attributes as known-targets and other five attributes as unknown-targets. Similarly, for Diffusion Autoencoders, we use two attributes respectively. For SimSwap, we select an image from CelebA-HQ as known-target face for each source, and another image as unknown-target face. In contrast to previous works <cit.> that aim to protect an image from being used as a target image in SimSwap, we protect it from being used as a source image since the source affects the facial part, which is more critical to the recognition of the identity. For ICface, we use a known-target driving video from the VoxCeleb dataset <cit.>, as well as an unknown-target video. We extract 100 frames from each video to obtain Action Units (AUs) that guide the reenactment process. We employ our Normalized Gradient Ensemble described in Eq.(<ref>) and Eq.(<ref>) as a default ensemble method in our experiments. 
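As a concrete illustration of this default setting (LEAT combined with the Normalized Gradient Ensemble), consider the sketch of one iteration below. The per-model latent encoders are assumed to be exposed as callables E(x), and the small constant added to the gradient norm is a numerical-safety detail chosen here for illustration rather than taken from the paper.

```python
import torch

def leat_pgd_step(x, x_adv, encoders, eps=0.05, step=0.01):
    """One LEAT iteration: attack only the latent encoding process of each model
    and aggregate the per-model gradients after L2 normalization, so that no
    single model dominates the ensemble direction."""
    x_adv = x_adv.detach().requires_grad_(True)
    g_ensemble = torch.zeros_like(x_adv)
    for E in encoders:                                    # one latent encoder per deepfake model
        with torch.no_grad():
            z_clean = E(x)                                # latent of the clean source image
        loss = torch.nn.functional.mse_loss(E(x_adv), z_clean)
        g, = torch.autograd.grad(loss, x_adv)
        g_ensemble = g_ensemble + g / (g.norm() + 1e-12)  # normalize each gradient, then sum
    x_next = x_adv + step * g_ensemble.sign()             # iterative sign-gradient update
    x_next = torch.min(torch.max(x_next, x - eps), x + eps).clamp(0.0, 1.0)
    return x_next.detach()
```

Starting x_adv from a randomly perturbed copy of x and repeating this step T times reproduces the PGD setting used above; no target attribute and no generated output image are involved at any point.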
In the Image Attack scenario, we generate perturbations exclusively using the known-targets, where M_k represents the entire model G_k. Subsequently, we calculate the distance between the output images before and after disruption, utilizing the known-targets (white-box) and unknown-targets (gray-box) respectively. In our LEAT, we generate a perturbation without utilizing the target attribute information, where M_k represents the latent encoder E_k. We then calculate the results using the same methodology as the Image Attack. For the adversarial attack method, we employ PGD <cit.>, which is commonly used in previous works. We set the number of iterations T to 30. During each iteration, the step size a and the maximum bound ϵ are set to 0.01 and 0.05, respectively. §.§ Evaluation Metrics Previous works <cit.> have commonly evaluated the effectiveness of disruption by calculating the average L_2 loss between output images before and after disruption. This metric, referred to as L_2 image, quantifies the pixel-level difference between the images. However, as illustrated in Figure <ref>, a high L_2 image value does not always indicate successful disruption. We demonstrate that capturing the semantic difference is another crucial factor for evaluating the disruption effectiveness. To assess the semantic difference, we utilize the identity loss <cit.> and LPIPS <cit.>. A higher identity loss implies a higher likelihood of being perceived as different individuals. LPIPS is a metric that evaluates perceptual similarity and has shown a strong correlation with human perception. Additionally, we redefine the defense success rate (DSR) to comprehensively evaluate the proportion of successful disruptions. While <cit.> regard an attack successful when L_2 image exceeds 0.05, we define success based on one of the following conditions: L_2 image higher than 0.05, ID loss higher than 0.6, or LPIPS higher than 0.4. We also report Avg-DSR, which represents the average DSR of the deepfake models, and E-DSR, which measures the proportion of the protected source images that successfully disrupt all models simultaneously, following <cit.>. §.§ The Results of Latent Ensemble Attack To explore the impact of our LEAT, we present the distribution of the latents before and after attack using t-SNE visualization in Figure <ref>. Compared to Image Attack, LEAT clearly distinguishes the latents for StyleCLIP, Diffusion Autoencoders, and SimSwap. This indicates that LEAT directs the encoded latent towards a significantly different direction. As the latent serves as a starting point for the generator, disrupted latent subsequently generates the undesired output. However, the latents of ICface are not distinguished since they are represented as neutral images, which are not embedded in a low-dimensional space. Still, they can be exploited as an attack point of LEAT due to their target attribute-independence. The properties of the latent in each model are described in Table <ref>. We present the quantitative results of our proposed LEAT in Table <ref>. For both LEAT and Image Attack, we employ our Normalized Gradient Ensemble as an ensemble method. In Image Attack, most of the models show a higher L_2 image as it is the direct target in the PGD attack. However, in the gray-box scenario where unseen target attributes are provided, L_2 image decreases significantly in Image Attack. In contrast, LEAT shows robust performance in the gray-box scenario. Additionally, LEAT consistently reports higher ID loss. 
In terms of LPIPS, LEAT achieves better performance in StyleCLIP and ICface. Remarkably, LEAT achieves higher Avg-DSR and E-DSR scores compared to Image Attack in the gray-box scenario, demonstrating its robust target attribute-transferability. The qualitative results of our method are shown in Figure <ref>. Furthermore, we compare the perturbation generation time in Table <ref> to highlight the efficiency of our LEAT. In Image Attack, the output image is generated by fully utilizing deepfake models, and the loss is averaged across all target attributes. Conversely, in LEAT, the latent is generated solely by forwarding the latent encoder, without any loss averaging from the target attributes. Consequently, perturbation generation is much faster in LEAT. With a single NVIDIA A100 GPU, Image Attack takes an average of 254.98 seconds for 500 images. In comparison, LEAT requires only 5.48 seconds under the same setting, making it approximately 46 times faster than Image Attack. §.§ Comparison with Previous Ensemble Methods To evaluate the effectiveness of our proposed Normalized Gradient Ensemble, we compare our method with Hard Model Mining <cit.> and Gradient Ensemble <cit.> proposed in previous studies. For their methods, we follow the process outlined in Eq.(<ref>) and Eq.(<ref>), respectively. For a fair comparison, we apply each method to both Image Attack and LEAT. The quantitative and qualitative results are reported in Table <ref> and Figure <ref>. In both Image Attack and LEAT, Normalized Gradient Ensemble demonstrates strong model-transferability, reporting significantly higher Avg-DSR and E-DSR scores. In contrast, both Gradient Ensemble and HMM show lower Avg-DSR and nearly zero E-DSR scores, indicating a lack of successful disruption across all models simultaneously. Specifically, Gradient Ensemble exhibits biased results towards ICface in Image Attack, leading to poor performance in the other models. In LEAT, it exclusively attacks Diffusion Autoencoders, resulting in clear disruption for Diffusion Autoencoders while leaving the others unchanged, as depicted in Figure <ref> (Gradient). Similarly, HMM demonstrates a strong bias to SimSwap in LEAT. §.§ Experiments in a Black-Box Scenario To demonstrate the robust model-transferability of our disruption method under challenging conditions, we conduct experiments in a black-box scenario where the deepfake model is unknown. We select StarGAN <cit.> as the unknown target model and use random Gaussian noise, Image Attack, and LEAT, all applied at the same scale of the perturbation to disrupt StarGAN. For each attack method, we generate a perturbation on four known-models and directly apply the perturbation to disrupt StarGAN. The quantitative results averaged over 500 images are in Table <ref>. Overall, LEAT outperforms random perturbation and Image Attack, demonstrating its strong model-transferability in the black-box scenario. The qualitative results are shown in Figure <ref>. § CONCLUSION In this paper, we propose a fully target attribute-agnostic approach called Latent Ensemble Attack, which aims to disrupt deepfake models by generating effective perturbations. Unlike previous methods that focus on maximizing the difference between generated images, our approach targets the latent encoding process to ensure disruption during subsequent generation process, leading to robust target attribute-transferability. 
Additionally, we introduce the Normalized Gradient Ensemble, a technique to aggregate losses from the multiple models including both GAN-based and Diffusion-based models. By uniformly scaling the gradients, we prevent the ensemble attack from exhibiting bias towards any specific model and achieve high model-transferability. Our proposed method demonstrates strong robustness in the gray-box scenario, where the target attributes are unknown. Moreover, it can be effectively applied even in a black-box scenario, where the specific deepfake model is unidentified. These results highlight the versatility and effectiveness of our approach in real-world scenarios.
http://arxiv.org/abs/2307.00222v1
20230701044443
Re-Think and Re-Design Graph Neural Networks in Spaces of Continuous Graph Diffusion Functionals
[ "Tingting Dan", "Jiaqi Ding", "Ziquan Wei", "Shahar Z Kovalsky", "Minjeong Kim", "Won Hwa Kim", "Guorong Wu" ]
cs.LG
[ "cs.LG", "cs.GR", "05C85", "I.2.6" ]
Re-Think and Re-Design Graph Neural Networks in Spaces of Continuous Graph Diffusion Functionals Tingting Dan, Jiaqi Ding, Ziquan Wei, Shahar Z Kovalsky, Minjeong Kim, Won Hwa Kim, Guorong Wu =================================================================================================== Graphs are ubiquitous in various domains, such as social networks and biological systems. Despite the great successes of graph neural networks (GNNs) in modeling and analyzing complex graph data, the inductive bias of the locality assumption, which involves exchanging information only within neighboring connected nodes, restricts GNNs from capturing long-range dependencies and global patterns in graphs. Inspired by the classic Brachistochrone problem, we seek to devise a new inductive bias for cutting-edge graph applications and present a general framework through the lens of variational analysis. The backbone of our framework is a two-way mapping between the discrete GNN model and the continuous diffusion functional, which allows us to design application-specific objective functions in the continuous domain and engineer discrete deep models with mathematical guarantees. First, we address over-smoothing in current GNNs. Specifically, our inference reveals that the existing layer-by-layer models of graph embedding learning are equivalent to an ℓ_2-norm integral functional of graph gradients, which is the underlying cause of the over-smoothing problem. Similar to edge-preserving filters in image denoising, we introduce the total variation (TV) to promote alignment of the graph diffusion pattern with the global information present in community topologies. On top of this, we devise a new selective mechanism for inductive bias that can be easily integrated into existing GNNs and effectively addresses the trade-off between model depth and over-smoothing. Second, we devise a novel generative adversarial network (GAN) to predict the spreading flows in the graph through a neural transport equation. To avoid the potential issue of vanishing flows, we tailor the objective function to minimize the transportation within each community while maximizing the inter-community flows. Our new GNN models achieve state-of-the-art (SOTA) performance on graph learning benchmarks such as Cora, Citeseer, and Pubmed. § INTRODUCTION Graphs are a fundamental data structure that arises in various domains, including social network analysis <cit.>, natural language processing <cit.>, computer vision <cit.>, recommender systems <cit.>, and knowledge graphs <cit.>, among others. Tremendous efforts have been made to operate machine learning on graph data (via so-called graph neural networks, or GNNs) at the node <cit.>, link <cit.>, and graph level <cit.>. The common inductive bias used in GNNs is the homophily assumption that nodes that are connected in a graph are more likely to have similar features or labels. In this context, most GNN models deploy a collection of fully-connected layers to progressively learn graph embeddings by aggregating the nodal feature representations from each node's topologically-connected neighbors throughout the graph <cit.>. Under the hood of GNNs, the graph representation learning process is achieved by various learnable operations, such as message passing <cit.> or graph convolution <cit.>.
Due to the nature of exchanging information within a local graph neighborhood, however, it is challenging to capture global graph representations, which go beyond node-to-node relationships, by leveraging the deep architecture of GNNs while avoiding overly smoothing the feature representations of closely-connected nodes. Fig. <ref> demonstrates the root cause of the over-smoothing issue in current GNNs, where node color denotes the group label (no color means unlabeled) and edge thickness indicates connection strength. It is clear that nodes #1 and #2 are located at the boundary of two communities. The inductive bias of GNNs (i.e., the locality assumption) forces the node embedding vectors of nodes #1 and #2 to become similar because they are strongly connected (highlighted in red), even though the global topology suggests that their node embeddings should be distinct. As additional layers are added to GNNs, the node embeddings become capable of capturing global feature representations that underlie the entire graph topology. However, this comes at the cost of over-smoothing node embeddings across graph nodes due to (1) an increased number of node-to-node information exchanges, and (2) a greater degree of common topology within larger graph neighborhoods. In this regard, current GNNs only deploy a few layers (typically two or three) <cit.>, which might be insufficient to characterize the complex feature representations on the graph. It is evident that mitigating the over-smoothing problem in GNNs will enable training deeper models. From a network architecture perspective, skip connections <cit.>, residual connections <cit.>, and graph attention mechanisms <cit.> have been proposed to alleviate the information loss in GNNs, by either preserving the local feature representation or making information exchange adaptive to the importance of nodes in the graph. Although these techniques are effective at patching the over-smoothing issue in some applications, the lack of an in-depth understanding of the root cause of the problem poses the challenge of finding a generalized solution that can be scaled up to current graph learning applications. Inspired by the success of neural ordinary differential equations in computer vision <cit.>, research focus has recently shifted to linking the discrete models in GNNs with partial differential equation (PDE) based numerical recipes <cit.>. For example, Graph Neural Diffusion (GRAND) formulates GNNs as a continuous diffusion process <cit.>. In their framework, the layer structure of GNNs corresponds to a specific discretization choice of temporal operators. Since the PDE-based model does not revolutionize the underlying inductive bias of current GNNs, it is still unable to prevent excessive information exchange between adjacent nodes such as nodes #1 and #2 in Fig. <ref>. In this regard, using more advanced PDE solvers can only provide marginal improvements in numerical stability over the corresponding discrete GNN models, while the additional computational cost, even in the feed-forward scenario, could limit the practical applicability of PDE-based methods for large-scale graph learning tasks. Nevertheless, this pioneering work on continuous approaches has prompted us to re-think GNNs as a graph diffusion process governed by the Euler-Lagrange (E-L) equation of the heat kernel.
This formulation is reminiscent of the Brachistochrone problem [The Brachistochrone problem is a classic physics problem that involves finding the curve down which a bead sliding under the influence of gravity will travel in the least amount of time between two points.], which emerged over 400 years ago and established the mathematical framework of classical mechanics. The powerful calculus of variations allows us to generate solutions for various mechanics questions (e.g., the slope that yields the fastest ball sliding down the curve is given by a cycloid) through the lens of E-L equation, as shown in Fig. <ref> (top). In a similar vein, the question that arises in the context of community detection is: What graph diffusion pattern is best suited for preserving community organizations? The question for graph classification would be: What graph diffusion pattern works best for capturing the system-level characteristics of graph topology? Following the spirit of Brachistochrone problem, we present a general research framework to customize application-specific GNNs in a continuous space of graph diffusion functionals. As shown in Fig. <ref> (bottom), we have established a fundamental structure for our framework that involves a two-way mapping between a discrete GNN model and a continuous graph diffusion functional. This allows us to develop application-specific objective functions (with an explainable regularization term) in the continuous domain and construct a discrete deep model with mathematical guarantee. We demonstrate two novel GNN models, one for addressing over-smoothing and one for predicting the flows from longitudinal nodal features, both achieving state-of-the-art performance (Cora: 85.6%, Citeseer: 73.9%, Pubmed: 80.10%, even in 128 network layers). We have made four major contributions. (1) We establish a connection between the discrete model of GNNs and the continuous functional of inductive bias in graph learning by using the E-L equation as a stepping stone to bridge the discrete and continuous domains. (2) We introduce a general framework to re-think and re-design new GNNs that is less “black-box”. (3) We devise a novel selective mechanism upon inductive bias to address the over-smoothing issue in current GNNs and achieve state-of-the-art performance on graph learning benchmarks. (4) We construct a novel GNN in the form of a generative adversarial network (GAN) to predict the flow dynamics in the graph by a neural transport equation. § METHODS In the following, we first elucidate the relationship between GNN, PDE, and calculus of variations (COV), which sets the stage for the GNN-PDE-COV framework for new GNN models in Section <ref>. §.§ Re-think GNNs: Connecting dots across graph neural networks, graph diffusion process, Euler-Lagrange equation, and Lagrangian mechanics Graph diffusion process. Given graph data 𝒢=(V, W) with N nodes V={v_i |i=1,…,N}, the adjacency matrix W=[w_i j]_i, j=1^N ∈ℝ^N × N describes connectivity strength between any two nodes. For each node v_i, we have a graph embedding vector x_i ∈ℛ^m. In the context of graph topology, the graph gradient (∇_𝒢 x)_i j=w_i j(x_i-x_j) indicates the feature difference between v_i and v_j weighted by the connectivity strength w_ij, where ∇_𝒢 is a ℝ^N→ℝ^N× N operator. Thus, the graph diffusion process can be formulated as ∂ x(t)/∂ t=div (∇_𝒢 x(t)), where the evolution of embedding vectors x=[x_i]_i=1^N is steered by the graph divergence operator. Connecting GNN to graph diffusion. 
In the regime of GNN, the regularization in the loss function often measures the smoothness of embeddings x over the graph by x^T Δ x, where Δ=div(∇_𝒢) is the graph Laplacian operator <cit.>. To that end, the graph smoothness penalty encourages two connected nodes to have similar embeddings by information exchange in each GNN layer. Specifically, the new graph embedding x^l in the l^th layer is essentially a linear combination of the graph embedding x^l-1 in the previous layer, i.e., x^l=A_W, Θ x^l-1, where the matrix A depends on graph adjacency matrix W and trainable GNN parameter Θ. After rewriting x^l=Ax^l-1 into x^l-x^l-1=(A-I)x^l-1, updating graph embeddings x in GNN falls into a discrete graph diffusion process, where the time parameter t acts as a continuous analog of the layers in the spirit of Neural ODEs <cit.>. It has been shown in <cit.> that running the graph diffusion process for multiple iterations is equivalent to applying a GNN layer multiple times. GNN is a discrete model of Lagrangian mechanics via E-L equation. The diffusion process ∂ x(t)/∂ t=div (∇_𝒢 x(t)) has been heavily studied in image processing in decades ago, which is the E-L equation of the functional min_x ∫_Ω | ∇ x|^2dx. By replacing the 1D gradient operator defined in the Euclidean space Ω with the graph gradient (∇_𝒢 x)_ij, it is straightforward to find that the equation governing the graph diffusion process ∂ x(t)/∂ t=div (∇_𝒢 x(t)) is the E-L equation of the functional min_x ∫_𝒢 | ∇_𝒢 x|^2dx over the graph topology. Since the heat kernel diffusion is essentially the mathematical description of the inductive bias in current GNNs, we have established a mapping between the mechanics of GNN models and the functional of graph diffusion patterns in a continuous domain. Tracing the smoking gun of over-smoothing in GNNs. In Fig. <ref>, we observed that the inductive bias of link-wise propagation is the major reason for excessive information exchange, which attributes to the over-smoothing problem in GNNs. An intuitive approach is to align the diffusion process with high-level properties associated with graph topology, such as network communities. After connecting the GNN inductive bias to the functional of graph diffusion process, we postulate that the root cause of over-smoothing is the isotropic regularization mechanism encoded by the ℓ _2-norm. More importantly, connecting GNN to the calculus of variations offers a more principled way to design new deep models with mathematics guarantees and model mechanistic explainability. §.§ Re-design GNNs: Revolutionize inductive bias, derive new E-L equation, and construct deeper GNN The general roadmap for re-designing GNNs typically involves three major steps: (1) formulating inductive bias into the functional of graph diffusion patterns; (2) deriving the corresponding E-L equation; and then (3) developing a new deep model of GNN based on the finite difference solution of E-L equation. Since the graph diffusion functional is application-specific, we demonstrate the construction of new GNN models in the following two graph learning applications. §.§.§ Develop VERY deep GNNs with a selective mechanism for link-adaptive inductive bias Problem formulation. Taking the feature learning component (learnable parameters Θ) out of GNNs, the graph embeddings x^L (output of an L-layer GNN) can be regarded as the output of an iterative smoothing process (L times) underlying the graph topology 𝒢, constrained by the data fidelity x^L - x^0_2^2 (w.r.t. 
the initial graph embeddings x^0) and graph smoothness term ∫_𝒢 | ∇_𝒢 x|^2dx. Inspired by the great success of total variation (TV) for preserving edges in image denoising <cit.>, reconstruction <cit.> and restoration <cit.>, we proposed to address the over-smoothing issue in current GNN by replacing the quadratic Laplacian regularizer with TV on graph gradients, i.e., 𝒥_TV(x) = ∫|∇_𝒢 x| dx. Thus, the TV-based objective function for graph diffusion becomes: min_x (x - x^0_2^2 + 𝒥_TV(x)). However, the ℓ _1-norm function, denoted by |·| in the definition of the total variation functional 𝒥_TV, is not differentiable at zero. Following the dual-optimization schema <cit.>, we introduce the latent auxiliary matrix z∈ℝ^N× N and reformulate the TV-based functional as 𝒥_TV(x, z) = max_z min_x ∫(z ⊗∇_𝒢 x) dx, subject to |z| ≤1^N × N, where ⊗ is Hadamard operation between two matrices. Furthermore, we use an engineering trick of element-wise operation z_ij(∇_𝒢 x)_ij to keep the degree always non-negative (same as we take the absolute value), which makes the problem solvable. In the end, we reformulate the minimization of 𝒥_TV(x) into a dual min-max functional as 𝒥_TV(x,z), where we maximize z (z→1^N× N) such that 𝒥_TV(x,z) is close enough to 𝒥_TV(x). Therefore, the new objective function is reformulated as: J(x,z) = max_z min_x x - x^0_2^2 + λ∫(z∇ _𝒢x )dx, which λ is a scalar balancing the data fidelity term and regularization term. Essentially, Eq. <ref> is the dual formulation with min-max property for the TV distillation problem <cit.>. Constructing E-L equations. To solve Eq. <ref>, we present the following two-step alternating optimization schema. First, the inner minimization problem (solving for x_i) in Eq. <ref> can be solved by letting ∂/∂ x_i𝒥(x_i,z_i)=0: [ ∂/∂ x_i J(x_i,z_i) = 2(x_i - x_i^0) + λ z_i ∇ _ Gx_i = 0 ⇒ x̂_i = x_i^0 - λ/2z_i∇ _ Gx_i ] Replacing (∇_𝒢 x)_i j with w_i j(x_i-x_j), the intuition of Eq. <ref> is that each element in x̂_i is essentially the combination between the corresponding initial value in x^0_i and the overall graph gradients z_i∇ _ Gx_i=∑_j∈𝒩_iw_ij(x_i-x_j)z_i within its graph neighborhood 𝒩_i. In this regard, Eq. <ref> characterizes the dynamic information exchange on the graph, which is not only steered by graph topology but also moderated by the attenuation factor z_i at each node. Second, by substituting Eq. <ref> back into Eq. <ref>, the objective function of z_i becomes 𝒥(z_i) = max_|z_i| ≤1λ/2z_i∇ _𝒢x_i_2^2 + λ z_i∇ _𝒢(x_i^0 - λ/2z_i∇ _𝒢x_i). With simplification (in Eq. <ref> to Eq. <ref> of Supplementary), the optimization of each z_i is achieved by min_|z_i| ≤1 z_i∇ _𝒢x_iz_i∇ _𝒢x_i - 4/λz_i∇ _𝒢x_i^0. Specifically, we employ the majorization-minimization (MM) method <cit.> to optimize z_i by solving this constrained minimization problem (the detailed derivation process is given in Eq. <ref> to Eq. <ref> of Section <ref> of Supplementary), where z_i can be iteratively refined by: z_i^l = clip(z_i^l - 1 + 2/βλ∇ _𝒢x_i_b,1) = {[ b; 1; - 1 ].[ |b| ≤ 1; b > 1; b < -1 ] β is a hyper-parameter that is required to be no less than the largest eigenvalue of (∇ _𝒢x_i) (∇ _𝒢x_i)^⊺. Develop new GNN network architecture with a selective inductive bias. The building block in vanilla GNN <cit.> is a FC (fully-connected) layer where the input is the embedding vectors after isotropic graph diffusion (in ℓ _2-norm). Since the estimation of graph embeddings x in Eq. <ref> depends on the latest estimation of z^(l), such recursive min-max solution for Eq. 
<ref> allows us to devise a new network architecture that disentangles the building block in vanilla GNN into the feature representation learning and graph diffusion underling TV. As shown in Fig. <ref>, we first deploy a FC layer to update the graph embeddings x^(l). After that, we concatenate a diffusion-clip (DC) layer for selective graph diffusion, which sequentially applies (1) node-adaptive graph diffusion (blue arrow in Fig. <ref>) on x^(l) by Eq. <ref> [Since the optimization schema has been switched to the layer-by-layer manner, the initialization x_0 becomes x^(l-1) from the previous layer.], and (2) clip operation (purple arrow in Fig. <ref>) to each x^(l)_i by Eq. <ref>. Remarks. Eq. <ref> indicates that larger connective degree results in larger value of z. Thus, the DC layer shifts the diffusion patterns by penalizing the inter-community information exchange (due to strong connections) while remaining the heat-kernel diffusion within the community. The preference of such link-adaptive diffusion can be adjusted by the hyper-parameter λ [λ can be either pre-defined or learned from the data.] in Eq. <ref>. Recall our intuitive solution for over-smoothing problem in Fig. <ref>, the DC layer offers the exact global insight of graph topology to keep the node embeddings distinct between nodes #1 and #2. We demonstrate the effect of DC layer on the real-world graph data in Fig. <ref> of Supplementary document. §.§.§ Predict flow dynamics through graph neural transport equation Problem formulation. We live in a world of complex systems, where everything is intricately connected in multiple ways. A holistic insight of how the system's components interact with each other and how changes in one part of the system can affect the behavior of the whole sheds new light on the dynamic behaviors of these complex systems over time. However, oftentimes it is an ill-posed problem. Taking the toy system in Fig. <ref>(a) as an example, while it is simple to calculate the future focal patterns based on the focal patterns at the current time point and the node-to-node flow information, determining flow dynamics based on longitudinal nodal observations is computationally hard since the solution is not unique. The naïve solution to predict the spreading flow is to (1) train a GNN to learn the intrinsic node embeddings and (2) predict the flow based on the difference of learned embeddings. However, this two-step approach might suffer from vanishing flow due to over-smoothing in GNNs. Following the spirit of Brachistochrone problem, we ask the question "What flow field f(t) =[f_ij(t)]_i,j=1^N underlines the system mechanics to the extent that it is able to predict the behaviors in the future?" In this paper, we focus on the conservative system of energy transportation <cit.>. The system mechanics is formulated as: dx/dt + div(q) = 0 where q=[q_ij]_i,j=1^N is the flux field which propagates the potential energy u(t)=[u_i(t)]_i=1^N (conserved quantity) over time. Similar to a gravity field driving water flow, the intuition of Eq. <ref> is that the change of energy density u (we assume there is a non-linear mapping ϕ from external force x to u, i.e., u_i=ϕ (x_i)) leads to energy transport throughout the entire graph. 
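Before developing the flux model further, it may help to make the FC + DC building block described above concrete. The sketch below is one simplified, node-wise reading of the diffusion and clip updates: it uses a dense adjacency matrix, treats z as an attenuation tensor with the same shape as the aggregated graph gradient, and bounds β by the exact largest eigenvalue of the rank-one per-node gradient outer product. These choices are assumptions made for illustration, not the authors' released implementation.

```python
import torch

def dc_layer(x, x_prev, z_prev, W, lam=1.0):
    """Diffusion-Clip (DC) layer: node-adaptive graph diffusion followed by the
    clipped majorization-minimization update of the attenuation variable z."""
    # aggregated graph gradient at node i: sum_j w_ij * (x_i - x_j)
    pairwise = x.unsqueeze(1) - x.unsqueeze(0)            # (N, N, d), dense for clarity
    grad_x = (W.unsqueeze(-1) * pairwise).sum(dim=1)      # (N, d)
    # diffusion step: blend the previous-layer embedding with the attenuated gradient
    x_new = x_prev - 0.5 * lam * z_prev * grad_x
    # beta must dominate the largest eigenvalue of (grad_x_i)(grad_x_i)^T,
    # which for a rank-one matrix is simply ||grad_x_i||^2
    beta = (grad_x * grad_x).sum(dim=1, keepdim=True) + 1e-12
    # clip step: move z toward the gradient direction, then clamp to [-1, 1]
    z_new = torch.clamp(z_prev + (2.0 / (beta * lam)) * grad_x, -1.0, 1.0)
    return x_new, z_new
```

Stacking a learnable FC layer followed by this DC update per GNN layer is the pattern behind the plug-in "+" variants evaluated in the experiments.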
As flux is closely related to the difference of energy ∇_𝒢u underlying the graph topology, we assume the energy flux q is regulated by the potential energy field ∇_𝒢u, i.e., q=α⊗∇_𝒢u, where α=[α_ij]_i,j=1^N is a learnable matrix characterizing the link-wise contribution of each energy potential ∇_𝒢u_ij to the potential energy flux q_ij. By plugging q=α⊗∇_𝒢u into Eq. <ref>, the energy transport process can be reformulated as: ∂ u/∂ t = - ϕ^-1 (α⊗ div(∇_𝒢u)) = - ϕ^-1 (α⊗Δ u), where Δ=div(∇_𝒢) is the graph Laplacian operator. Since the PDE in Eq. <ref> is equivalent to the E-L equation of the quadratic functional 𝒥(u)= min_u ∫_𝒢α⊗ | ∇_𝒢 u|^2du (after taking ϕ away), a major issue is the over-smoothness in u that might result in vanishing flows. In this context, we propose to replace the ℓ _2-norm integral functional 𝒥(u) with TV-based counterpart 𝒥_TV(u)=min_u ∫_𝒢α⊗ | ∇_𝒢 u|du. Renovate new E-L equation – graph neural transport equations. Following the min-max optimization schema in Eq. <ref>-<ref>, we introduce an auxiliary matrix f to lift the undifferentialable barrier. After that, the minimization of 𝒥_TV (u) boils down into a dual min-max functional 𝒥_TV(u,f)=min_umax_f∫_𝒢α⊗ f (∇_𝒢 u) du. Since u(t) is a time series, it is difficult to derive the deterministic solutions (as Eq. <ref>-<ref>) by letting ∂/∂ u𝒥_TV=0 and ∂/∂ f𝒥_TV=0. Instead, we use Gâ teaux variations to optimize 𝒥_TV(u,f) via the following two coupled time-dependent PDEs (please see Section <ref>, Eq. <ref> to Eq. <ref>, in Supplementary for details): {[ max_f df/dt = α⊗∇ _𝒢u; min_u du/dt = α⊗ div(f) ]. Remarks. The solution to Eq. <ref> is known as continuous max-flow and constitutes a continuous version of a graph-cut <cit.>. Since α is a latent variable and potential energy u is given, the maximization of f opts towards maximizing the spreading flow through the lens of α. In this regard, the mechanistic role of auxiliary matrix f is essentially the latent (maximized) spreading flows that satisfy u(t + 1)_i = u(t)_i + ∑_j = 1^N f_ij(t). The potential energy û can be solved via a wave equation (u_tt=div(f_t)=α^2 ⊗Δ u), where the system dynamics is predominated by the adjusted Lagrangian mechanics α^2 ⊗Δ u. By determining α at a granularity of graph links, we devise a novel GAN model to predict the spreading flows f which not only offers explainability underlying the min-max optimization mechanism in Eq. <ref> but also sets the stage to understand system dynamics through machine learning. Develop a GAN model of flow prediction with TV-based Lagrangian Mechanics. The overall network architecture is shown in Fig. <ref> (b), which consists of a generator (red solid box) and a discriminator module (blue solid box). Specifically, the generator (G) consists of (1) a GCN component <cit.> to optimize û through the wave equation u_tt =α^2 ⊗Δ u and (2) a FC layer to characterize the non-linear mapping function x̂(t+1)=ϕ ^-1 (û(t)). In contrast, the discriminator (D) is designed to (1) synthesize α and (2) construct the future ũ_t+1 based on the current u_t and current estimation of spreading flow f=α⊗∇_𝒢u (orange dash box). To make the network architecture consistent between generator and discriminator modules, we include another GCN to map the synthesized ũ(t+1) to the external force x̃(t+1). Note, since the working mechanism of this adversarial model underlines the min-max optimization in the energy transport equation, the nature of predicted spreading flows is carved by the characteristics of max-flow. 
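Before turning to the training objective, the following sketch shows how the coupled min-max system above can be stepped forward with a simple explicit Euler scheme. The dense link-wise gain matrix alpha, the placement of alpha in the divergence, the sign conventions, and the step size are assumptions made for illustration rather than the paper's actual solver.

```python
import torch

def transport_step(u, f, alpha, W, dt=0.1):
    """One explicit Euler step of the coupled graph transport system:
    the flow f ascends along the graph gradient of the potential u (max step),
    and u is then updated by the alpha-weighted divergence of f (min step)."""
    # graph gradient of the potential: (grad_G u)_ij = w_ij * (u_i - u_j)
    grad_u = W * (u.unsqueeze(1) - u.unsqueeze(0))               # (N, N)
    f_next = f + dt * alpha * grad_u                             # flow update (maximization)
    # node-wise divergence: net alpha-weighted flow leaving each node
    div_f = (alpha * f_next - (alpha * f_next).t()).sum(dim=1)   # (N,)
    u_next = u + dt * div_f                                      # potential update (minimization)
    return u_next, f_next
```

Iterating the two sub-steps mirrors the min-max structure of the functional: the flow step pushes transport along potential differences, while the potential step maintains the node-wise balance u(t+1)_i = u(t)_i + ∑_j f_ij(t) that the discriminator is trained to respect.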
The driving force of our network is to minimize (1) the MSE (mean square error) between the output of the generator x̂_t+1 and the observed regional features, (2) the distance between the synthesized regional features x̃_t + 1 (from the discriminator) and the output of generator x̂_t+1 (from the generator). In the spirit of probabilistic GAN <cit.>, we use one loss function ℒ_D to train the discriminator (D) and another one ℒ_G to train the generator (G): {[ ℒ_D=D(x_t+1)+[ξ-D(G(x_t))]^+; ℒ_G=D(G(x_t)) ]. where ξ denotes the positive margin and the operator [·]^ + = max (0,·). Minimizing ℒ_G is similar to maximizing the second term of ℒ_D except for the non-zero gradient when D(G(x_t))≥ξ. § EXPERIMENTS In this section, we evaluate the performance of the proposed GNN-PDE-COV framework with comparison to six graph learning benchmark methods on a wide variety of open graph datasets <cit.>, as well as a proof-of-concept application of uncovering the propagation pathway of pathological events in Alzheimer's disease (AD) from the longitudinal neuroimages. §.§ Datasets and experimental setup Dataset and benchmark methods. We evaluate the new GNN models derived from our proposed GNN framework in two different applications. First, we use three standard citation networks, namely Cora, Citeseer, and Pubmed <cit.> for node classification (the detailed data statistic is shown in Table <ref> of Supplementary). We adopt the public fixed split <cit.> to separate these datasets into training, validation, and test sets. We follow the experimental setup of <cit.> for a fair comparison with six benchmark GNN models (vanilla GCN <cit.>, GAT <cit.>, GCNII <cit.>, ResGCN <cit.>, DenseGCN <cit.>, GRAND <cit.>). Since our DC-layer can be seamlessly integrated into existing GNNs as a plug-in. The corresponding new GNN models (with DC-layer) are denoted GCN+, GAT+, GCNII+, ResGCN+, DenseGCN+, and GRAND+, respectively. Second, we apply the GAN model in Section <ref> to predict the concentration level of AD-related pathological burdens and their spreading pathways from longitudinal neuroimages. Currently, there is no in-vivo imaging technique that can directly measure the flow of information across brain regions. Here, our computational approach holds great clinical value to understand the pathophysiological mechanism involved in disease progression <cit.>. Specifically, we parcellate each brain into 148 cortical surface regions and 12 sub-cortical regions using Destrieux atlas <cit.>. The wiring topology of these 160 brain regions is measured by diffusion-weighted imaging <cit.> and tractography techniques <cit.>. The regional concentration levels AD pathology including amyloid, tau, and fluorodeoxyglucose (FDG) and cortical thickness (CoTh) are measured from PET (positron emission tomography) and MRI (magnetic resonance imaging) scans <cit.>. We use a total of M=1,291 subjects from ADNI <cit.>, each having longitudinal imaging data (2-5 time points). The details of image statistics and pre-processing are shown in Sec. <ref>. Since we apply the flow prediction model to each modality separately, we differentiate them with X-FlowNet (X stands for amyloid, tau, FGD, and CoTh). Experimental setup. In the node classification task, we verify the effectiveness and generality of DC-layer in various number of layers (L=2, 4, 8, 16, 32, 64, 128). All baselines use their own default parameter settings. Evaluation metrics include accuracy, precision and F1-score. 
To validate the performance of X-FlowNet, we examine (1) the prediction accuracy (MAE) of the follow-up concentration level, (2) the prediction of the risk of developing AD using the baseline scan, and (3) the understanding of the propagation mechanism in AD obtained by revealing the node-to-node spreading flows of neuropathologies. The main results of graph node classification and flow prediction are demonstrated in Sections <ref> and <ref>, respectively. Other supporting results such as the ablation study and hyper-parameter settings are shown in Section <ref> of the Supplementary document. §.§ Experimental results on graph node classification We postulate that by mitigating the over-smoothing issue, we can leverage the depth of GNN models to effectively capture complex feature representations in graph data. As shown in Table <ref>, we investigate the graph node classification accuracy as we increase the number of GNN layers for six benchmark GNN models and their corresponding plug-in models (indicated by '+' at the end of each GNN model name) with the DC-layer. The results demonstrate that: (1) the new GNN models generated from the GNN-PDE-COV framework have achieved SOTA on Cora (86.30% by GCNII+), Citeseer (75.65% by GRAND+), and Pubmed (80.10% by GCNII+); (2) all of the new GNN models outperform their original counterparts with significant improvements in accuracy; (3) the new GNN models exhibit less sensitivity to the increase of model depth compared to current GNN models; (4) the new GNN models are also effective in resolving the gradient explosion problem <cit.> (e.g., gradient explosion occurs when training GAT on all involved datasets with more than 16 layers, while our GAT+ can maintain reasonable learning performance even with 128 layers). It is important to note that due to the nature of the graph diffusion process, graph embeddings from all GNN models (including ours) will eventually become identical after a sufficiently large number of layers <cit.>. However, the selective diffusion mechanism (i.e., penalizing excessive diffusion across communities) provided by our GNN-PDE-COV framework allows us to control the diffusion patterns and optimize them for specific graph learning applications. §.§ Application for uncovering the propagation mechanism of pathological events in AD First, we evaluate the prediction accuracy between the ground truth and the concentration values estimated by our X-FlowNet and six benchmark GNN methods. The statistics of the MAE (mean absolute error) by X-FlowNet, GCN, GAT, GRAND, ResGCN, DenseGCN and GCNII, at different noise levels on the observed concentration levels, are shown in Fig. <ref> (a). It is clear that our X-FlowNet consistently outperforms the other GCN-based models in all imaging modalities. Second, we have evaluated the potential for disease risk prediction and presented the results in Table <ref> in the Supplementary document, where our GNN-PDE-COV model not only achieved the highest diagnostic accuracy but also demonstrated a significant improvement (paired t-test p<0.001) in disease risk prediction compared to other methods. These results suggest that our approach holds great clinical value for early disease diagnosis. Third, we examine the spreading flows of tau aggregates in CN (cognitively normal) and AD groups. As indicated by the inward and outward flows shown in Fig.
<ref>(b), it is evident that there are significantly larger amount of tau spreading between sub-cortical regions and entorhinal cortex in CN (early sign of AD onset) while the volume of subcortical-entorhinal tau spreading is greatly reduced in the late stage of AD. This is consistent with current clinical findings that tau pathology starts from sub-cortical regions and then switches to cortical-cortical propagation as disease progresses <cit.>. However, our Tau-FlowNet offers a fine-granularity brain mapping of region-to-region spreading flows over time, which provides a new window to understand the tau propagation mechanism in AD etiology <cit.>. § CONCLUSION In this work, we present the GNN-PDE-COV framework to re-think and re-design GNN models with great mathematical insight. On top of this, we devise the selective inductive bias to address the over-smoothing problem in GNN and develop new GNN model to predict the pathology flows in-vivo via longitudinal neuroimages. Future work may involve exploring innovative graph regularization techniques and conducting further validation on a broader range of graph-based learning tasks. Supplementary § SOLVING VARIATIONAL PROBLEMS: FROM OBJECTIVE FUNCTIONAL TO E-L EQUATIONS §.§ Step-by-step derivation of min-max optimization in Section 2.2.1 By substituting Eq. 2 into Eq. 1 in the main manuscript, we can obtain the objective function of subscript z (we temporarily drop i for clarity): J(z) = max_|z| ≤1λ/2z∇ _ Gx_2^2 + λz∇ _ G(x^0 - λ/2z∇ Gx) = [t] max_|z| ≤1- λ ^2/4z∇ _ Gxz∇ _ Gx + λz∇ _ Gx^0 Next, we convert Eq. <ref> into a minimization problem as follows: z = [ min_|z| ≤1 z∇ _ Gxz∇ _ Gx - 4/λz∇ _ Gx^0 ] By letting the derivative with respect to z_i to zero, we have the following equation ∇ _ Gxz∇ _ Gx = 4/λ∇ _ Gx^0 Since z might be in high dimensional space, solving such a large system of linear equations under the constraint |z| ≤ 1 is oftentimes computationally challenging. In order to find a practical solution for z that satisfies the constrained minimization problem in Eq. <ref>, we resort to the majorization-minimization (MM) method <cit.>. First, we define: M(z) =z ∇ _ Gxz∇ _ Gx - 4/λz∇ _ Gx^0 By setting z^l as point of coincidence, we can find a separable majorizer of M(z) by adding the non-negative function (z - z^l)^⊺(β I - ∇_ G x∇_ Gx^⊺)(z - z^l) to M(z), where β is greater than or equal to the maximum eigenvalue of ∇_ G x∇_ Gx^⊺. Note, to unify the format, we use the matrix transpose property in Eq. <ref>. Therefore, a majorizer of M(z) is given by: M(z) + (z - z^l)^⊺(β I - ∇_ G x∇_ Gx^⊺)(z - z^l) And, using the MM approach, we can obtain the update equation for z as follows: z^l + 1 =min_| z | ≤ 1 (M(z) + (z - z^l)^⊺(β I - ∇ _ Gx∇ Gx^⊺)(z - z^l)) = min_| z | ≤ 1 (βz^⊺z - 2(∇ _ G(2/λx^0 - ∇ Gxz^l) + βz^l)^⊺z) = min_| z | ≤ 1 (z^⊺z - 2(1/β∇ _ G(2/λx^0 - ∇ Gxz^l) + z^l)^⊺z) = min_| z | ≤ 1 (z^⊺z - 2b^⊺z) where b = z^l + 1/β∇ _ G(2/λx^0 - ∇ _ Gxz^l). Then, the next step is to find z ∈ R ^N that minimizes z^⊺ z-2bz subject to the constraint |z|≤ 1. Let's first consider the simplest case where z is a scalar: [ min_| z | ≤ 1 z^2 - 2bz ] The minimum of z^2 - 2bz is at z=b. If b ≤ 1, then the solution is z=b. If |b| ≥ 1, then the solution is z=sign(b). We can define the clipping function as: clip(b,1): = {[ [ b | b | ≤ 1 ]; [ sign(b) | b | ≥ 1 ] ]. as illustrated in the middle of Fig. 3 in the main text, then we can write the solution to Eq. <ref> as z=clip(b,1). Note that the vector case Eq. 
<ref> is separable - the elements of z are uncoupled so the constrained minimization can be performed element-wise. Therefore, an update equation for z is given by: z^l + 1 = clip(z^l + 1/β∇ _ G(2/λx^0 - ∇ _ Gxz^l),1) where l denotes the index of the network layer, the representation of (l+1)^th is given by Eq. (1) in the main manuscript. Because the optimization problem is convex, the iteration will converge from any initialization. We may choose, say z^0=0. We call this the iterative diffusion-clip (DC) algorithm. This algorithm can also be written as x^l+1=x^0-λ/2∇ _ G^ ⊺ z^l z^l+1=clip (z^l+2/βλ∇ _ G x^l+1, 1) . By scaling z with a factor of λ /2, we have the following equivalent formulations: x^l+1=x^0-∇ _ G^⊺ z^l z^l+1=clip(z^(i)+1/β∇ _ G x^l+1, λ/2) We summarize the process of the diffusion-clip (DC) layer in Algorithm <ref> (it is similar to the iterative shrinkage threshold algorithm <cit.>): §.§ The step-by-step derivation of min-max optimization schema in Section 2.2.2 According to the introduction of Secction 2.2.2 (Eq. 4 and Eq. 5) in the main manuscript, we summarize the following equations, {[ dx/dt + div(q) = 0; u_i = ϕ (x_i); q = α⊗∇ u; Δ u = div(∇ u) ]. {[ dx/dt = - div(q); du/dt = - ϕ ^ - 1div(q); du/dt = - ϕ ^ - 1div(α⊗ q); du/dt = - ϕ ^ - 1(α⊗Δ u) ]. Since the PDE in Eq. 5 in the main manuscript is equivalent to the E-L equation of the quadratic functional 𝒥(u)= min_u ∫_𝒢α⊗ | ∇_𝒢 u|^2du (after taking ϕ away), we propose to replace the ℓ _2-norm integral functional 𝒥(u) with TV-based counterpart 𝒥_TV(u)=min_u ∫_𝒢α⊗ | ∇_𝒢 u|du We then introduce an auxiliary matrix f to lift the undifferentiable barrier, and reformulate the TV-based functional as a dual min-max functional 𝒥_TV(u,f)=min_umax_f∫_𝒢α⊗ f (∇_𝒢 u) du where we maximize f such that 𝒥_TV(u,f) is close enough to 𝒥_TV(u). Using Gâteaux variations, we assume u → u + ε a, f → f + ε b, and the directional derivatives in the directions a and b defined as . d J/dε(u + ε a)|_ε→ 0 and . d J/dε(f + ε b)|_ε→ 0. Given a functional 𝒥_TV(u,f), its Gâteaux variations is formulated by: 𝒥_T V(u+ε a, f+ε b)=∫α⊗ [(f+ε b) ·(∇ u+ε∇ a)] d u ..⇒∂𝒥/∂ε|_ε→ 0=∫α⊗ [(f ·∇ a) +(∇ u b)]. d u .⇒∂𝒥/∂ε|_ε→ 0 = α⊗ f · a - ∫α⊗ (a ·∇ f)du + ∫α⊗ (b∇ u)du Since we assume either u is given at the boundary (Dirichlet boundary condition), the boundary term α⊗ f · a can be dropped. After that, the derivative of 𝒥_TV(u,f) becomes: .∂𝒥/∂ε|_ε→ 0=-∫α⊗ (∇ f · a+∇ u · b) Since the dummy functional a and b are related to u and f respectively, the E-L equation from the Gâteaux variations in Eq. <ref> leads to two coupled PDEs: {[ max_f df/dt = α⊗∇ _𝒢u; min_u du/dt = α⊗ div(f) ]. Note, we use the adjoint operator div(f) = - ∇ f to approximate the discretization of ∇ f <cit.>, which allows us to link the minimization of u to the classic graph diffusion process. § EXPERIMENTAL DETAILS §.§ Implementation details §.§.§ Hyperparameters & training details Table <ref> lists the detailed parameter setting for several GNN-based models, including X-FlowNet, PDENet, GCN, GAT, ResGCN, DenseGCN and GCNII. In the node classification experiments, we set the output dimension to be the number of classes. We adopt the public fixed split <cit.> to separate these datasets into training, validation, and test sets. We use the accuracy, precision and F1-score of node classification as the evaluation metrics. For the ADNI dataset prediction experiment, we set the input and output dimensions to be the same as the number of brain nodes cannot be altered. 
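As a side note to the implementation details, the diffusion-clip (DC) iteration summarized in Algorithm <ref> and the update equations above can be sketched compactly as follows; the toy path-graph gradient operator, the value of λ, and the iteration count are illustrative assumptions, not the settings used in our experiments.

```python
import numpy as np

def clip_op(v: np.ndarray, bound: float) -> np.ndarray:
    # element-wise clip(b, bound): identity inside [-bound, bound], saturates outside
    return np.clip(v, -bound, bound)

def diffusion_clip(x0: np.ndarray, grad_g: np.ndarray, lam: float = 0.5, n_iter: int = 50) -> np.ndarray:
    # DC updates: x^{l+1} = x^0 - grad_G^T z^l,  z^{l+1} = clip(z^l + (1/beta) grad_G x^{l+1}, lam/2)
    # beta is set to the largest eigenvalue of grad_G grad_G^T, as required by the majorizer.
    beta = np.linalg.eigvalsh(grad_g @ grad_g.T).max()
    z = np.zeros(grad_g.shape[0])
    x = x0.copy()
    for _ in range(n_iter):
        x = x0 - grad_g.T @ z
        z = clip_op(z + (1.0 / beta) * (grad_g @ x), lam / 2.0)
    return x

# toy example: edge-difference operator of a 3-node path graph stands in for grad_G
grad_g = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
x0 = np.array([1.0, 0.2, 0.0])
print(diffusion_clip(x0, grad_g))
```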
We use 5-fold cross-validation to evaluate the performance of different methods and measure their prediction accuracy using the mean absolute error (MAE). We also conduct an ablation study using a two-step approach. First, we train a model (MLP+GNN) shown in the left panel of Fig. 4 (b) in the main manuscript to predict the potential energy field (PEF) based on the transport equation, then compute the flows using Eq. <ref>, followed by a GCN-based model to predict the future concentration level of AD-related pathological burdens. Since the deep model in this two-step approach is also formalized from the PDE, we refer to this degraded version as PDENet. In addition, we conduct a prediction of the risk of developing AD using the baseline scan, which can be regarded as a graph classification experiment. This experiment only uses 2 GCN layers with a hidden dimension of 64 for all methods, while the remaining parameters follow the node classification experiment (Table <ref> top). In this work, all experiments are conducted on a server: Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz, NVIDIA RTX A5000. The source code is open on anonymous GitHub (https://anonymous.4open.science/r/GNN-PDE-COV-FBBD/) for the sake of reproducibility. §.§.§ Data pre-processing on the ADNI dataset. In total, 1,291 subjects are selected from the ADNI <cit.> dataset, each having diffusion-weighted imaging (DWI) scans and longitudinal amyloid, FDG, cortical thickness (CoTh) and tau PET scans (2-5 time points). The neuroimage processing consists of the following major steps: * We segment the T1-weighted image into white matter, gray matter, and cerebrospinal fluid using the FSL software <cit.>. On top of the tissue probability map, we parcellate the cortical surface into 148 cortical regions (frontal lobe, insula lobe, temporal lobe, occipital lobe, parietal lobe, and limbic lobe) and 12 sub-cortical regions (left and right hippocampus, caudate, thalamus, amygdala, globus pallidum, and putamen), using the Destrieux atlas <cit.> (yellow arrows in Fig. <ref>). We then convert each DWI scan to diffusion tensor images (DTI) <cit.>. * We apply surface seed-based probabilistic fiber tractography <cit.> using the DTI data, thus producing a 160× 160 anatomical connectivity matrix (white arrows in Fig. <ref>). Note that the weight of the anatomical connectivity is defined by the number of fibers linking two brain regions normalized by the total number of fibers in the whole brain (Δ for graph diffusion in X-FlowNet). * Following the region parcellations, we calculate the regional concentration level (with the Cerebellum as the reference) of the amyloid, FDG, CoTh and tau pathologies for each brain region (red arrows in Fig. <ref>), yielding the input x∈ℛ^160 for training X-FlowNet. Following the clinical outcomes, we partition the subjects into the cognitively normal (CN), early-stage mild cognitive impairment (EMCI), late-stage mild cognitive impairment (LMCI), and AD groups. To facilitate population counts, we regard CN and EMCI as the "CN-like" group, and LMCI and AD as the "AD-like" group. Table <ref> summarizes the statistics of the two datasets.
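For concreteness, a toy sketch of how a fiber-count matrix can be turned into the normalized anatomical connectivity (and the corresponding graph Laplacian) described in the second step above; the 4-region matrix and the helper name are illustrative stand-ins for the real 160×160 ADNI connectome.

```python
import numpy as np

def normalized_connectivity(fiber_counts: np.ndarray) -> np.ndarray:
    # Link weight = number of fibers between two regions divided by the total
    # number of fibers in the whole brain (sum of off-diagonal counts).
    W = fiber_counts.astype(float).copy()
    np.fill_diagonal(W, 0.0)
    total = W.sum()
    return W / total if total > 0 else W

# toy 4-region fiber-count matrix (assumption, for illustration only)
counts = np.array([[0, 120, 5, 0],
                   [120, 0, 30, 10],
                   [5, 30, 0, 60],
                   [0, 10, 60, 0]])
A = normalized_connectivity(counts)
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian used for diffusion
print(A.sum(), L.shape)
```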
§.§ Experiments on node classification Fig <ref> presents the performance of different evaluation criteria (accuracy, precision, and F1-score) across different network layers for node classification by benchmark GNN model (patterned in dash lines) and the counterpart novel GNN model from our GNN-PDE-COV framework (patterned by solid lines), where each row is associated with a specific instance of GNN model. It is evident that our proposed GNN-PDE-COV consistently outperforms other methods across different layers, with significantly enhanced degrees in accuracy, precision, and F1-score. Moreover, the GNN model yielded from our GNN-PDE-COV framework consistently achieves the highest accuracy on all three datasets. Overall, these results demonstrate the state-of-the-art performance by our GNN-PDE-COV framework in graph node classification. The effect of anti-smoothing by clip operation is shown in Fig. <ref>. To set up the stage, we put the spotlight on the links that connect two nodes with different categorical labels. In this context, 2,006 links from Cora, 2,408 links from Citeseer, and 17,518 links from Pubmed datasets are selected, called inter-class links. For each inter-class link, we calculate node-to-node similarity in terms of Pearson's correlation between two associated graph embedding vectors [the learned feature representations for node classification] by benchmark methods (in red) and the counterpart GNN models derived from GNN-PDE-COV framework (in green). We find that (1) more than 70% nodes are actually associated with inter-class links which confirms the hypothesis of over-smoothing in Fig. 1 of our manuscript; (2) Our novel GNN models have the ability to learn feature representations that better preserve the discriminative power for node classification (as indicated by the distribution of node-to-node similarity shifting towards the sign of anti-correlation). §.§ Application on uncovering the propagation mechanism of pathological events in AD Firstly, we examine the prediction accuracy for each modality of concentration (tau, amyloid, FDG, CoTh) level at different noise levels. Specifically, to evaluate the robustness of our X-FlowNet model to noise, we conducted an experiment by adding uncorrelated additive Gaussian noise levels with standard deviation ranging from 0.02 to 1 to the observed concentration levels of tau, amyloid, FDG, and CoTh. We then evaluated the prediction accuracy (MAE) using 5-fold cross-validation. The prediction results, as shown in Fig. <ref>, indicate that our X-FlowNet model is less sensitive to noise added to the imaging features than all other counterpart GNN methods. Secondly, we conduct an ablation study to compare our X-FlowNet model with PDENet (marked as #7 in Fig. <ref>). Our model, which is in a GAN architecture and incorporates a TV constraint to avoid over-smoothing, integrates the two steps of estimating the PEF and uncovering the spreading flows into a unified neural network, resulting in significantly improved prediction accuracy compared to PDENet. Thirdly, we perform a disease risk prediction experiment, which can be regarded as a graph classification problem. We assume that we have baseline amyloid, tau, FDG, and CoTh scans, and evaluate the prediction accuracy, precision and F1-score of various models in forecasting the risk of developing AD. We consider two dichotomous cases: one included only AD vs. CN groups and the other involved AD/LMCI vs. CN/EMCI. The results of the mean of 5-fold cross-validation are shown in Table <ref>. 
Our GNN-PDE-COV outperforms all other methods in terms of accuracy, precision and F1-score indicated by an asterisk (`*') at the significance level of 0.001. Fourthly, we examine the propagation pattern of tau spreading flows on an individual basis (Fig. <ref>). First, we visualize the top flows (ranked in terms of flow volume) uncovered in a CN subject (Fig. <ref>(a)). It is apparent that subcortex-cortex flows are the predominant patterns, where most of the tau aggregates spread from subcortical regions (globus pallidus, hippocampus, and putamen) to the temporal lobe, limbic lobe, parietal lobe, and insula lobe. Note, we find inferior temporal gyrus (t_6) and entorhinal cortex (t_8) are actively involved in the subcortex-cortex flows, which are the footprints of early stage tau propagation frequently reported in many pathology studies <cit.>. Second, we visualize the top flows uncovered in an AD subject (Fig. <ref>(b)). It is apparent that the propagation of tau is restricted on the brain cortex, mainly spreading from temporal lobe regions to other regions (such as frontal lobe, limbic lobe and occipital lobe), which is aligned with current clinical and pathology findings that predominant amount of tau aggregates propagate throughout brain cortex in the late stage of AD. §.§ Discussion and limitations Discussion. In our experiments, we found adding DC layer right after every FC layer usually does not yield best performance. Instead, we empirically set to add DC layer from the first several FC layers. For example, we add DC layer after the 3^rd FC layer in an 8-layer GNN model, after the 5^th FC layer in a 16-layer GNN model, and after 8^th FC layer in a GNN model with more than 16 layers. One possible explanation is that the clip operation in DC layer depends on a good estimation of cap b in Eq. 3 (in the main manuscript). Given that the estimation of b may lack stability during the initial stages of graph learning, it can be advantageous to postpone the clip operation from an engineering perspective. However, delaying the addition of the DC layer too much can result in missed opportunities to address the problem of over-smoothing. Regarding the computational time, we record the additional computational time of training our DC layer on different datasets. Specifically, the extra training time is 2.2 ms/epoch in Cora, 9.8 ms/epoch in Citeseer, 7.8 ms/epoch in Pubmed, and 0.3 ms/epoch in ADNI, respectively, where the data descriptions are listed in Table <ref>. It is apparent that the TV-based constraint effectively addresses the over-smoothing issue in GNN without imposing a significant computational burden. Limitations. Our current graph learning experiments are limited to citation networks. In the future, we will evaluate our GNN-PDE-COV framework on other graph datasets such as drug medicine and protein networks. Societal impact. Our major contribution to the machine learning field is a novel research framework which allows us to develop new GNN models with a system-level understanding. We have provided a new approach to address the common issue of over-smoothing in GNN with a mathematical guarantee. From the application perspective, the new deep model for uncovering the in-vivo propagation flows has great potential to establish new underpinning of disease progression and disentangle the heterogeneity of diverse neurodegeneration trajectories. splncs04
http://arxiv.org/abs/2307.02548v1
20230705180005
Piercing of a solitonic boson star by a black hole
[ "Zhen Zhong", "Vitor Cardoso", "Taishi Ikeda", "Miguel Zilhão" ]
gr-qc
[ "gr-qc", "astro-ph.HE", "hep-ph", "hep-th" ]
http://arxiv.org/abs/2307.01656v1
20230704112945
$^{174}\mathrm{Yb}^+$-$^{113}\mathrm{Cd}^+$ sympathetic-cooling bi-species Coulomb crystal applied to microwave frequency standard
[ "Y Zheng", "H. R. Qin", "S. N. Miao", "N. C. Xin", "Y. T. Chen", "J. Z. Han", "J. W. Zhang", "L. J. Wang" ]
physics.atom-ph
[ "physics.atom-ph", "quant-ph" ]
The first two authors contributed equally. State Key Laboratory of Precision Measurement Technology and Instruments, Key Laboratory of Photon Measurement and Control Technology of Ministry of Education, Department of Precision Instrument, Tsinghua University, Beijing 100084, China; Department of Physics, Tsinghua University, Beijing 100084, China. Electronic mail: zhangjw@tsinghua.edu.cn. We report the realization of a ^174Yb^+-^113Cd^+ bi-species Coulomb crystal comprising ^174Yb^+ ions as coolant and verify its potential for application as a ^113Cd^+ microwave frequency standard employing sympathetic cooling. The two species of massive ions stably trapped in a Paul trap make up this large two-component crystal. The ^113Cd^+ ions are trapped in the center, which considerably reduces the RF heating and excess micromotion to which the ^113Cd^+ ions are subjected. Under this scheme, the uncertainty due to the second-order Doppler effect is reduced to 5× 10^-16, which represents an order-of-magnitude improvement over the sympathetically cooled ^40Ca^+-^113Cd^+ crystal. The uncertainty from the second-order Zeeman effect, which contributes the largest uncertainty to the microwave-ion frequency standard, is reduced to 4× 10^-16. The relevant AC Stark shift uncertainty is estimated to be 4× 10^-19. These results indicate that using ^174Yb^+ as coolant ions for ^113Cd^+ is far superior and confirm the feasibility of a sympathetically cooled cadmium-ion microwave clock system employing a ^174Yb^+-^113Cd^+ two-component crystal. ^174Yb^+-^113Cd^+ sympathetic-cooling bi-species Coulomb crystal applied to microwave frequency standard
Atomic clocks have been playing an important role in both practical applications<cit.> and basic physics research<cit.>. Microwave clocks are widely used in satellite navigation<cit.>, deep space exploration<cit.>, time synchronization<cit.> and timekeeping<cit.> because of their simple structure and high transportability. At present, considerable progress has been made with microwave ion clocks that employ trapped ^199Hg^+ <cit.>, ^171Yb^+ <cit.>, ^113Cd^+ <cit.> ions. In past work, we have realized highly stable and accurate microwave ion clocks based on laser-cooled ^113Cd^+ ions<cit.>. However, laser-cooled ion microwave clocks require a separate cooling process, which results in dead time. The dead time restricts the Dick effect limit, thereby preventing improvements in short-term stability. In addition, the ions are not cooled during microwave interrogation. The temperature increase leads to a second-order Doppler shift (SODS) and limits the linewidth and signal-to-noise ratio (SNR) of the clock signal. To solve the above problems, we applied the sympathetic cooling technique to ^113Cd^+ ion microwave clocks<cit.>. The first ion microwave clock employing sympathetic cooling was built by the Bollinger group at NBS in 1991, which used ^24Mg^+ as coolant ions to sympathetically cool ^9Be^+ ions in a Penning trap. Their experiment showed the potential of sympathetic cooling applied to ion microwave clocks. Since 2019, our group has been devoted to researching the cadmium ion microwave frequency standard based on sympathetic cooling. We first used ^24Mg^+ as the coolant<cit.>. However, the cooling laser for ^24Mg^+ ions is not easy to obtain, and the reaction between Mg^+ and H_2 in the background gas reduces cooling efficiency. We further chose ^40Ca^+ to sympathetically cool ^113Cd^+ <cit.>. In 2022, we realized a high-performance ^113Cd^+ ion microwave frequency standard using this scheme, which was the first sympathetically-cooled ion microwave clock in a Paul trap. ^113Cd^+ ions were sympathetically cooled with laser-cooled ^40Ca^+. Its short-term frequency stability reached 3.48×10^-13/√(τ) with a frequency uncertainty of 1.5×10^-14, both of which are better than those of the directly laser-cooled ^113Cd^+ microwave frequency standard<cit.>. Although the ^113Cd^+ microwave clock sympathetically cooled with ^40Ca^+ performed well, several limitations remained. First, the mass difference between the two species of ions is large, making the ion separation ratio relatively large and thereby limiting the efficiency of sympathetic cooling. Second, the mass of ^40Ca^+ is smaller than that of ^113Cd^+; therefore, the latter ions are in the outer layer of the ion crystal surrounding the ^40Ca^+ ions. Being farther from the trap central axis also results in large RF heating. Moreover, the distance of the ^113Cd^+ ions from the trap center leads to excess micro-motion<cit.> and a second-order Doppler effect<cit.>, which again severely limit improvements in frequency accuracy and stability. A scheme that uses ^174Yb^+ to sympathetically cool ^113Cd^+ was incorporated into the microwave frequency standard<cit.> to improve performance. Although this scheme has seldom been studied, we demonstrated the viability of a ^174Yb^+ sympathetically-cooled ^113Cd^+ microwave frequency standard system and realized a large number of sympathetically cooled ^113Cd^+ ions with laser-cooled ^174Yb^+.
At present, the frequency accuracy of the ion microwave clock is mainly restricted by the second-order Doppler frequency shift(SODS) and the second-order Zeeman frequency shift(SOZS). Under this scheme, the ^113Cd^+ ion temperature is as low as 10^2mK and the excess micro-motion is greatly suppressed. The uncertainty associated with the SODS is reduced to 5× 10^-16, which is advanced by an order of magnitude compared with that of ^40Ca^+-^113Cd^+ sympathetic cooling. The uncertainty related to SOZS is reduced to 4× 10^-16. Our research shows that sympathetic-cooled mixed-species Coulomb crystal of ^174Yb^+-^113Cd^+ is an effective experimental method that improves the performance of the ^113Cd^+ ion microwave frequency standard. The ion trap we used has been described in more detail in our previous work<cit.>. The ground state hyperfine splitting frequency of ^113Cd^+ is 15.2 GHz, ranking second only to ^199Hg^+ among all working energy levels of atomic microwave clocks. Because the hyperfine splitting frequency of the ^2P_3/2 energy level is only 800 MHz, pumping and detection can be realized by a single laser with acousto-optic modulators. Therefore, the ^113Cd^+ frequency standard has great potential for high performance and miniaturization. The natural abundance of ^174Yb is 31.83%, which is the highest among the seven stable Yb isotopes. Moreover, the cooling laser ^174Yb^+ requires is easily available. Compared with ^40Ca^+, ^174Yb^+ is not only closer in mass but also heavier than ^113Cd^+. These characteristics indicate that ^174Yb^+ is well suited as a coolant ion for ^113Cd^+. The entire optical system we designed (Fig. <ref>), incorporates the ^113Cd^+ component of a previous design.<cit.>. The 399 nm laser beam is used to transition ^174Yb atom from the ground state 6s^2 ^1S_0 to the first excited state 6s6p ^1P_1. Because there is an isotopic shift of approximately 500 MHz between ^174Yb and other isotopes<cit.>, this photoionization selectively ionizes the ^174Yb atoms. For Doppler cooling and repumping of the ^174Yb^+, the 369 nm laser beam (6s ^2S_1/2→6p ^2P_1/2) is combined with the 935 nm laser beam (^2D_3/2→ ^3[3/2]_1/2) by the dichroic mirror. Photoionization is enabled by turning on the cooling laser with the wavelength of 369 nm while opening the laser with the wavelength of 399 nm to excite the ^174Yb atoms. In the ^174Yb^+-^113Cd^+ sympathetic cooling stage, we loaded and cooled the ^174Yb^+ ions to form the crystal first and subsequently loaded the ^113Cd^+ ions. Then ^174Yb^+ were heated. We repeated the adjustments to the amplitude of RF voltage and scanned the frequency of the cooling laser (369 nm). The detection by the photomultiplier tube (PMT) of a variation in the fluorescence signal of ^113Cd^+ indicates the two species of ions have formed a two-component Coulomb crystal. A typical image captured by the electron-multiplying charge-coupled device (EMCCD) of the two-component ion crystal (Fig. <ref>) depicts a hollow structure of ^174Yb^+ ions and an ellipsoid of ^113Cd^+ ions located in the center, which is in line with our expectation in using ^174Yb^+ as a coolant. To estimate the number of ions, we need to determine the separation ratio and density of the two species of ions. The separation ratio is calculated using <cit.> : r_Yb^+/r_Cd^+=√(M_Yb^+/M_Cd^+) , where r_Yb^+ denotes the inner radius of the ^174Yb^+ crystal shell, r_Cd^+ the outer radius of the ^113Cd^+ crystal. We experimentally measured the separation ratio by analyzing the EMCCD images. 
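For reference, the theoretical separation ratio in the relation above can be evaluated directly; the masses below are approximate values in atomic mass units.

```python
import math

M_Yb, M_Cd = 174.0, 113.0          # ion masses in atomic mass units (approximate)
ratio = math.sqrt(M_Yb / M_Cd)     # r_Yb+ / r_Cd+ = sqrt(M_Yb+ / M_Cd+)
print(f"predicted r_Yb+/r_Cd+ = {ratio:.2f}")   # ~1.24, to be compared with the measured value
```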
The result is r_Yb^+/r_Cd^+=1.25(3), which agrees with the theoretical value of 1.24. Using the zero-temperature charged-liquid model <cit.>, the ion density is estimated to be: n=ε_0 U_RF^2/(M Ω^2 r_0^4), where ε_0 denotes the permittivity of vacuum, U_RF the RF amplitude, M the trapped ion mass, r_0=6.2 mm the radial distance from the trap axis to the electrodes, and Ω the trap driving frequency. In our experimental setup, U_RF=241 V and Ω=2π×1.994 MHz. The ion densities of ^174Yb^+ and ^113Cd^+ were thus estimated to be 7.67×10^3mm^-3 and 1.18×10^4mm^-3, respectively. Typical population numbers of the ^174Yb^+ and ^113Cd^+ ions in the two-component crystal are N_Yb^+=3.3(4)×10^3 and N_Cd^+=6.9(3)×10^3, which are larger than those in a sympathetically cooled ^40Ca^+-^113Cd^+ crystal. The SNR of a microwave frequency standard depends mainly on the number of trapped ions. When the ^113Cd^+ ions were trapped in the outer shell of the ^40Ca^+-^113Cd^+ two-component Coulomb crystal, RF heating and the second-order Doppler effect prevented increasing the number of ions. Trapping the ^113Cd^+ ions in the center avoids this problem, and by increasing the number of ^113Cd^+ ions we can improve the SNR of the system. After trapping the mixed-species Coulomb crystal, the effect of the RF voltage and the endcap voltage (U_end) on the temperature of the ^113Cd^+ was explored. The temperature is calculated by measuring the Doppler broadening of the fluorescence spectrum<cit.>. The fluorescence line of the ions usually follows a Voigt profile, a convolution of the natural linewidth and the Gaussian width of the Doppler broadening. After obtaining the Gaussian linewidth, the ion temperature is calculated using T=M c^2/(8ln2 k_B)(ν_G/ν_c)^2, where c denotes the speed of light, k_B the Boltzmann constant, ν_G the fitted Gaussian linewidth and ν_c the resonance frequency of the D_2 line of ^113Cd^+. Fig. <ref> (a) reveals the dependence of the temperature of the sympathetically-cooled ^113Cd^+ on U_RF. As the RF voltage increases, the radial size of the ^113Cd^+ ion crystal is compressed, leading to less RF heating but resulting in an increase in micro-motion energy. Thus, there is an optimal value of the RF voltage of approximately U_RF=264 V. Similarly, increasing U_end increases the ion radial size. However, if U_end is too low, the ion trap becomes unstable. The optimal U_end is approximately 10 V, as shown in Fig. <ref> (b). In this situation, a preliminarily acquired Ramsey fringe of the clock transition (Fig. <ref>) is obtained with a free evolution time of 500 ms. We will enhance the SNR by optimizing the electric parameters and the population number ratio to realize a sympathetically-cooled ion microwave frequency standard exhibiting high performance. The expected short-term and long-term frequency stabilities are 2 ×10^-13/√(τ) and 5×10^-15 at 10,000 s. The main uncertainties of the frequency shifts in the ^174Yb^+-^113Cd^+ sympathetic cooling system were carefully evaluated. The SOZS, which is the main source of systematic uncertainty for an ion microwave frequency standard, is given by δν_SOZS/ν_0 =(g_J-g_I)^2μ_B^2 B^2/(2h^2ν_0^2), where μ_B denotes the Bohr magneton, B the magnetic field intensity, h the Planck constant, and ν_0 the transition frequency of ^113Cd^+ between the states |^2S_1/2, F=0, m_F=0⟩ and |^2S_1/2, F=1, m_F=0⟩ at zero magnetic field; the values of the electronic and nuclear Landé g-factors<cit.> are g_J=2.002 291(4) and g_I=0.622 300 9(9)×10^-3, respectively.
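A quick numerical check of the charged-liquid density estimate above, using the quoted trap parameters; the physical constants are approximate CODATA values and the helper name is illustrative.

```python
import math

eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
amu = 1.66053906660e-27          # atomic mass unit, kg
U_rf = 241.0                     # RF amplitude, V
Omega = 2 * math.pi * 1.994e6    # trap driving frequency, rad/s
r0 = 6.2e-3                      # trap-axis-to-electrode distance, m

def density_mm3(mass_amu: float) -> float:
    # zero-temperature charged-liquid model: n = eps0 * U_RF^2 / (M * Omega^2 * r0^4)
    n_m3 = eps0 * U_rf**2 / (mass_amu * amu * Omega**2 * r0**4)
    return n_m3 * 1e-9           # convert m^-3 to mm^-3

print(f"n(174Yb+) ~ {density_mm3(174):.2e} mm^-3")   # ~7.7e3, cf. the quoted 7.67e3
print(f"n(113Cd+) ~ {density_mm3(113):.2e} mm^-3")   # ~1.2e4, cf. the quoted 1.18e4
```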
Because the ^113Cd^+ ions were trapped in the center, where the magnetic gradient perpendicular to the quantization axis (e_z) is smaller, the magnetic field required to provide the quantization axis for the clock ions was reduced while measuring the clock transition signal of the ^113Cd^+ ions sympathetically cooled via the ^174Yb^+ ions, with the magnetic fields along e_x and e_y well compensated. While collecting the Ramsey signal of the microwave clock transition, the static magnetic field was measured to be 648.1 nT, an order of magnitude smaller than before<cit.>. The fluctuation of the magnetic field under our high-performance magnetic shielding can be reduced to 0.18 nT <cit.>. Thus, the SOZS is estimated to be 7.133(4)×10^-13. Compared with the previous generation of sympathetically-cooled microwave clock, the absolute value of the SOZS is reduced by more than two orders of magnitude and the uncertainty is improved by more than two orders of magnitude. One important reason why we introduced ^174Yb^+ ions as the coolant is to reduce the SODS. Secular motion, micro-motion and excess micro-motion through deviations from the central axis of the trap contribute to the SODS. The first two contributions correlate with the secular temperature of the ions, and the last is determined by the position of the ^113Cd^+ ion crystal<cit.>. In the experiment, a typical temperature measurement result of the sympathetically-cooled ^113Cd^+ is shown in Fig. <ref>. The corresponding ion temperature is 100(5) mK, much smaller than that of laser-cooled ^113Cd^+ (654 mK)<cit.>. The secular motion and micro-motion of the trapped ions are of the same magnitude in all directions <cit.>. Therefore, the reduction in the axial temperature we measured is crucial for the further development of microwave ion clocks. The SODS contributed by excess micro-motion is given by<cit.>: δν_SODS-exmm/ν_0 =-(1/16)q^2Ω^2u^2/c^2, with q =2Q U_RF/(MΩ^2r_0^2), where Q denotes the charge of ^113Cd^+ and u the distance of the ions from the central axis. For large ion crystals, it is inevitable that ions deviate from the central axis of the Paul trap. This problem is particularly prominent when the ^113Cd^+ ions were cooled by ^40Ca^+ ions and located in the outer shell<cit.>. Our method mitigates this effect, which is why we consider ^174Yb^+-^113Cd^+ sympathetic cooling suitable for a microwave ion frequency standard. Calculating the distances from the EMCCD images, the SODS associated with excess micro-motion is estimated to be -7.8(5)× 10^-15, which is three times smaller than that of ^40Ca^+-^113Cd^+ sympathetic cooling. Moreover, the uncertainty is six times smaller than before<cit.>. The SODS contributed by secular motion and micro-motion is calculated to be -3.7(2)× 10^-16. Finally, the total SODS is estimated to be -8.1(5)× 10^-15. The Stark shift generated by the additional static electric field can be described by the following formula<cit.>: δν_DC-S/ν_0=-2σ_S/ν_0(mΩ c/Q)^2δν_SODS-exmm/ν_0, where σ_S denotes the static Stark shift coefficient. This term is proportional to the SODS produced by excess micro-motion and is therefore reduced to 7.9(5)×10^-17. Additional AC Stark (light) shifts introduced by the cooling (369 nm) and repumping (935 nm) laser beams of ^174Yb^+ were also evaluated. The intensities of the 369 nm and 935 nm beams were 0.264(3) mW/mm^2 and 2.123(6) mW/mm^2, respectively, yielding light shifts of 1.77(2)×10^-17 and 6.21(2)×10^-17, respectively, which are nearly negligible for the microwave frequency standard.
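The quoted SOZS can likewise be reproduced from the formula in the previous paragraph and the measured static field; the constants below are approximate, and ν_0 is taken as the nominal 15.2 GHz hyperfine splitting.

```python
mu_B = 9.2740100783e-24   # Bohr magneton, J/T (approximate)
h = 6.62607015e-34        # Planck constant, J s

g_J, g_I = 2.002291, 0.6223009e-3
nu_0 = 15.2e9             # 113Cd+ ground-state hyperfine splitting, Hz (nominal)
B = 648.1e-9              # measured static magnetic field, T

sozs = (g_J - g_I) ** 2 * mu_B ** 2 * B ** 2 / (2.0 * h ** 2 * nu_0 ** 2)
print(f"fractional SOZS ~ {sozs:.3e}")   # ~7.1e-13, consistent with the reported 7.133(4)e-13
```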
In conclusion, we performed a study of ^174Yb^+-^113Cd^+ sympathetic cooling and proved its key advantages in a microwave ion frequency standard. Compared with laser cooling, sympathetic cooling overcomes the rising temperature of ions during interrogation, reduces the dead time, and greatly prolongs the free evolution time in closed-loop locking. Compared with using ^40Ca^+ to sympathetically cool ^113Cd^+, the mass difference between ^174Yb^+ and ^113Cd^+ is smaller, which is advantageous in promoting cooling efficiency and extending ion-loss time. Table <ref> lists the fundamental fractional systematic uncertainties for frequency shifts among the three schemes<cit.>. Under ^174Yb^+-^113Cd^+ sympathetic cooling, the SODS uncertainty was reduced to 5× 10^-16 because excess micro-motion was substantially suppressed. the SOZS and its uncertainty were also considerably reduced to 7.133(4)× 10^-13. In addition, the introduced Stark shift from the static electric field and its uncertainty were estimated to 7.9(5)×10^-17, which is superior by nearly an order of magnitude to our prior work<cit.>. The light frequency shift was evaluated to be 7.98(4)×10^-17. These results show that ^174Yb^+-^113Cd^+ sympathetic cooling applied to a microwave frequency standard promises to attain high accuracy of 10^-16 and high stability. In the future, we will study the thermodynamic properties of the ^174Yb^+-^113Cd^+ ion crystal at low temperatures based on sympathetic cooling in combination with molecular dynamics simulations and experiments to elicit further advantages for the microwave frequency standard. The ^174Yb^+-^113Cd^+ crystallization results enrich the research on large two-component ion crystals, which will be meaningful for research on structures of Coulomb crystals<cit.>, high-precision measurements of isotope shifts<cit.>, the dynamics of an ion or small ion crystal<cit.>, radioactive ions<cit.> and quantum simulations<cit.> based on the sympathetic cooling technique. This work is supported by National Key Research and Development Program of China ( 2021YFA1402100), National Natural Science Foundation of China (12073015) and the Science and Technology on Metrology and Calibration Laboratory (Grant No. JLKG2022001A002). § DATA AVAILABILITY STATEMENT Data supporting the findings of this study are available upon reasonable request from the corresponding author. *
http://arxiv.org/abs/2307.02345v1
20230705150029
LLQL: Logistic Likelihood Q-Learning for Reinforcement Learning
[ "Outongyi Lv", "Bingxin Zhou", "Yu Guang Wang" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Currently, research on reinforcement learning (RL) can be broadly classified into two categories: online RL and offline RL. In both online and offline RL, the primary focus of research on the Bellman error lies in optimization techniques and performance improvement, rather than exploring the inherent structural properties of the Bellman error, such as its distributional characteristics. In this study, we analyze the distribution of the Bellman approximation error in both online and offline settings. We find that in the online environment, the Bellman error follows a Logistic distribution, while in the offline environment, the Bellman error follows a constrained Logistic distribution, where the constrained distribution is dependent on the prior policy in the offline data set. Based on this finding, we improve upon MSELoss, which is based on the assumption that the Bellman errors follow a normal distribution, and we utilize the Logistic maximum likelihood function to construct LLoss as an alternative loss function. In addition, we observe that the rewards in the offline data set should follow a specific distribution, which would facilitate the achievement of the offline objectives. In our numerical experiments, we performed controlled-variable modifications of the loss functions of two variants of Soft Actor-Critic in both online and offline environments. The results confirmed our hypothesis regarding the online and offline settings; we also found that the variance of LLoss is smaller than that of MSELoss. Our research provides valuable insights for further investigations based on the distribution of Bellman errors. § INTRODUCTION Modern deep reinforcement learning (RL) has seen remarkable advancements in diverse domains, ranging from strategy games <cit.> to the Capacitated Vehicle Routing Problem (CVRP) <cit.>. RL operates by guiding an agent to actively interact with an environment through a series of actions with the objective of maximizing the expectation of the rewards received over time. The cumulative reward with respect to the current state is described by the Bellman equation <cit.>. Although the recursive estimation embedded in the Bellman equation provides a theoretical trajectory towards optimal or near-optimal solutions in conventional RL, it is computationally intensive and raises stability and accuracy concerns when navigating expansive state and action spaces <cit.>. In the realm of online RL, the Soft Actor-Critic (SAC) by <cit.> introduces the soft Bellman operator to enhance the overall reward in reinforcement learning, thereby improving model performance and stability. This development facilitated a crucial shift in the evolution of RL techniques <cit.>. Meanwhile, in offline RL, <cit.> investigated the high overestimation problems tied to the estimation of actions (Q-values), inspiring subsequent studies built upon their proposed Conservative Q-Learning (CQL) scheme <cit.>. The conventional practice of using Bellman equations for Q-iterations has become less prevalent in modern RL discourse. Instead, a shift of preference has been observed towards updating the iterative Q-function with a maximum-entropy policy to ensure robust modeling and to mitigate estimation errors with the Bellman operator <cit.>.
Later on, the soft Bellman operator introduced in Soft Actor-Critic <cit.>, or SAC in short, deploys an auxiliary policy network to circumvent intractable estimations over log-partitioned Q-values. More recently, Extreme Q-Learning (XQL) <cit.> defines a novel sample-free objective towards optimal soft-value functions in the maximum entropy RL setting, thereby eliminating the conventional need for network iterations. These frameworks mark a significant departure from established practices and offer exciting new prospects for advances in optimization techniques in RL <cit.>. Concurrent with these modifications in optimization techniques, the RL community has shown considerable interest in minimizing the Bellman error <cit.>, a measure of the disagreement between the estimate of the current state-action value and the value defined by the Bellman equation. This pursuit aims to accurately represent the value function of the state-action pairs for the current policy <cit.>. Following these efforts, researchers have strived to modify the objective function <cit.> or optimize the update rules <cit.> to achieve a minimal Bellman error. However, despite various attempts to achieve an adequate policy by indirectly shaping the distribution of the Bellman error, there is a lack of straightforward analysis of the main properties of the Bellman error, such as exploring more adequate choices of error distribution than the normal distribution. To the best of our knowledge, this work conducts the first comprehensive discussion based on the Logistic distribution of the Bellman error. Drawing inspiration from <cit.>, we proceed to define the Gumbel error to depict the gap between the estimated and true values of the Q function. This results in the Bellman error adhering to a Logistic distribution within the context of online RL. We subsequently verify that the expectation of the Bellman error aligns precisely with the soft Bellman regression target. This coherence underscores the superiority of a Logistic loss function employed in conjunction with maximum likelihood estimation, surpassing the traditional Mean Squared Error (MSE) loss predicated on the assumption of a normally distributed error term. This assertion aligns with the findings of <cit.>. Furthermore, in offline RL, we show that when the reward embedded in the expert data adheres to the minimum Gumbel distribution, the Bellman error approximates a constrained Logistic distribution, where the constrained distribution is dependent on the prior policy in the offline data set. We empirically validate our theoretical assertions across seven diverse environments by comparing the empirical distribution of the Bellman error as well as the performance of the trained networks. The results unveil a robust preference for the Logistic distribution of Bellman errors within online RL over both the Gumbel and Gaussian distributions. § PRELIMINARIES RL explores the expected cumulative reward through a Markov decision process defined by a tuple (𝒮,𝒜,𝒫,r,γ), where 𝒮 and 𝒜 respectively denote the state and action spaces, 𝒫(s'|s,a) is the state transition probability from state s toward the next state s', r(s,a) defines the reward of taking action a at the current state s, and γ∈(0,1) is the discount factor on future rewards. In online RL, an agent constantly interacts with the environment to enrich the state-action pairs (s,a,s',r) under a behavior policy π(a|s) with respect to its replay buffer, which is a container for the state-action pairs.
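As a minimal illustration of the replay buffer just mentioned, the sketch below stores (s, a, s', r) transitions and samples uniform mini-batches; the capacity, class name, and sampling scheme are illustrative assumptions rather than the implementation used in our experiments.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal container for (s, a, s', r) transitions collected by the behavior policy."""

    def __init__(self, capacity: int = 100_000):
        self.storage = deque(maxlen=capacity)   # oldest transitions are evicted first

    def add(self, s, a, s_next, r):
        self.storage.append((s, a, s_next, r))

    def sample(self, batch_size: int):
        # uniform mini-batch used for (soft) Bellman regression updates
        return random.sample(list(self.storage), min(batch_size, len(self.storage)))

buf = ReplayBuffer()
buf.add([0.0, 1.0], 0, [0.1, 0.9], 1.0)
print(buf.sample(1))
```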
In comparison, in offline RL, the trajectories are fixed for the agent in a pre-collected dataset (s,a,s',r). §.§ Objectives in Reinforcement Learning The target of RL, as defined by an Actor-Critic (AC) algorithm <cit.>, is to find the optimal policy π(a|s) that maximizes E_a_t∼π(a_t|s_t)[∑_t=0^Tγ^t r(s_t,a_t)], the cumulative discounted reward over timestamps t∈[0,T]. Alternatively, Soft AC (SAC) algorithms <cit.> incorporate soft entropy terms into the future rewards to circumvent the overestimation problem, i.e., E_a_t ∼π(a_t|s_t)[∑_t=0^Tγ^t r(s_t,a_t)-βlog(π(a_t|s_t))] with the regularization strength β. Later on, <cit.> introduces the KL divergence between the policy π and the prior distribution of a reference distribution μ to augment the reward function in the objective E_a_t ∼π(a_t|s_t)[∑_t=0^Tγ^t r(s_t,a_t)-βlogπ(a_t|s_t)/μ(a_t|s_t)]. These three different forms represent different Bellman iterative formats, which shall be introduced next in Section <ref>. §.§ (Soft) Bellman Equation in Reinforcement Learning In Section <ref>, we noticed that formula (<ref>) is the most general one. In fact, when the value of μ is 1, formula (<ref>) degenerates back to SAC, and when the value of β is 0, formula (<ref>) degenerates back to AC: E_a_t ∼π(a_t|s_t)[∑_t=0^Tγ^t r(s_t,a_t)-βlogπ(a_t|s_t)/μ(a_t|s_t)] So we only need to focus on studying the general form in formula (<ref>); our goal is to obtain the optimal Bellman iterative equation, which has been widely used in Q-learning <cit.>: Q^t+1(s,a)=B^*Q^t(s,a)=r(s,a)+γmax_a'(Q^t(s',a')) This process is actually realized through the Bellman iterative operator; specifically, the soft Bellman iteration corresponding to formula (<ref>) has the general version Q^k+1(s,a)←argmin_Q[r(s,a)+E_s',a'∼π[Q(s',a')-βlogπ(a'|s')/μ(a'|s')]-Q^k(s,a)]^2 The soft Bellman iterative operator can be solved from formula (<ref>): Q^k+1(s,a)=r(s,a)+E_s',a'∼π[Q(s',a')-βlogπ(a'|s')/μ(a'|s')] If we want to take the optimal strategy (corresponding to the max operator in formula (<ref>)), we need to find an optimal π^* which satisfies: π^*(a'|s')=argmax_π(E_s',a'∼π[Q(s',a')-βlogπ(a'|s')/μ(a'|s')]), ∑_a'π^*(a'|s')=1 Using the Lagrange multiplier method, we can easily get: π^*(a|s)=μ(a|s)e^Q(s,a)/β/∑_aμ(a|s)e^Q(s,a)/β The detailed derivation can be found in <ref>; it is a general conclusion. If we substitute formula (<ref>) into formula (<ref>), we have: E_s',a'∼π[Q(s',a')-βlogπ(a'|s')/μ(a'|s')] → E_s'[βlog∑_a'μ(a'|s')e^Q(s',a')/β] This means that if we want to take the best policy π^*, then our optimization goal is: max_a'Q(s',a')=βlog∑_a'μ(a'|s')e^Q(s',a')/β §.§ Gumbel distribution and Logistic distribution The probability density function (pdf) of the Gumbel distribution, denoted Gumbel(μ,β), is: Gumbel(x;μ,β)=1/βe^-[(x-μ)/β+e^-(x-μ)/β] The probability density function (pdf) of the Logistic distribution, denoted Logistic(μ,β), is: Logistic(x;μ,β)=1/βe^-(x-μ)/β/(1+e^-(x-μ)/β)^2
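A numerically stable sketch of the closed-form optimal policy π^*(a|s) and the soft value βlog∑_a μ(a|s)e^Q(s,a)/β derived above; the toy Q values, reference distribution μ, and temperature β are arbitrary examples.

```python
import numpy as np

def soft_value_and_policy(q: np.ndarray, mu: np.ndarray, beta: float):
    # pi*(a|s) = mu(a|s) exp(Q(s,a)/beta) / sum_a mu(a|s) exp(Q(s,a)/beta)
    # V_soft(s) = beta * log sum_a mu(a|s) exp(Q(s,a)/beta), computed via log-sum-exp
    z = q / beta + np.log(mu)
    z_max = z.max()
    log_partition = z_max + np.log(np.exp(z - z_max).sum())
    pi_star = np.exp(z - log_partition)
    return beta * log_partition, pi_star

q = np.array([1.0, 2.0, 0.5])        # Q(s, a) for three actions (illustrative)
mu = np.array([0.5, 0.25, 0.25])     # reference distribution mu(a|s) (illustrative)
v_soft, pi_star = soft_value_and_policy(q, mu, beta=0.5)
print(v_soft, pi_star, pi_star.sum())
```

As β shrinks toward zero, the soft value approaches max_a Q(s,a), consistent with the degeneration to the standard Bellman backup noted above.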
We use another way that we believe the real Q value will not appear any error during the max iterative process, which means: Q^t+1(,)=B^*Q^t(,)=r(,)+γmax_'(Q^t(',')) But the reality is, we don't know the real Q, but approximate the Q value using : Q̂^t+1(,)+ε^t(,)=B̂^*Q̂^t(,)=r(,)+γmax_'(Q̂^t(',')) This will generate some error during iteration, in order to analyze them, we derived these lemma for Theorem <ref>, some parts of it are similar to those mentioned by <cit.>. <cit.> For i.i.d. random variables X_1, ..., X_n ∼ f(X) with exponential tails, lim_n→∞ max_i(X_i) follows the Gumbel distribution. For i.i.d. random variables X_1,X_2,X_3, ..., X_n ∼ Gumbel(C,β), then max_i(X_i) ∼ Gumbel(βln∑_ne^1/β C, β), but if random variables X_i ∼ Gumbel(C_i,β), and X_i obeys the sampling distribution from μ, then max_i(X_i) ∼ Gumbel(βln∑_nμ_n e^1/β C_n, β) For any , ϵ^t, which defines in formula (<ref>), follows: lim_,t→∞ϵ^t(,) ∼ Gumbel(C,β) For i.i.d. random variables X ∼ Gumbel(C_X,β), Y ∼ Gumbel(C_Y,β), then (X-Y) ∼ Logistic(C_X-C_Y,β) (Logistic distribution for Online Bellman error): In offline target, ε^θ(,) will obey the logistic distribution, if we take Q_θ(',') as finite sampling without any probability, the expectation of the logistic distribution becomes the optimization goal of SAC, ε^θ(,) ∼ Logistic(max_'[Q_θ(',')]-βln∑_i=1^n e^(Q_θ(',_i))/β,β) (SAC). However, when we take Q_θ(',') as enough sampling with its probability μ(|), the expectation of the logistic distribution becomes the optimization goal of Maximum Entropy(ME), which means: ε^θ(,) ∼ Logistic(max_'[Q_θ(',')]-βln∑_i=1^n μ(_i|)e^(Q_θ(',_i))/β,β) (ME) The detailed proof of Theorem <ref> is in <ref>. According to Theorem <ref>, when we want to optimizing this Bellman error, we certainly hope that the Expectation of Bellman-error is as close to zero as possible: E[ε^θ(,)] → 0 which means: max_'[Q_θ(',')]→βln∑_i=1^n e^1/β(Q_θ(',_i))(SAC) max_'[Q_θ(',')]→βln∑_i=1^n μ(_i|)e^(Q_θ(',_i))/β(ME) This is what we have been optimizing for. So during network updating, we always use the MSELoss, this would be very unreasonable and theoretically unfounded. In fact, in most deep RL networks, such a MSELoss function is common used when updating a batch of data from the reply buffer. MSELoss is often defined as follows in the network MSELoss=1/N∑_i=1^N|Q̂_θ(_i,_i)-[r(_i,_i)+γmax_'_i[Q̂_θ('_i,'_i)]|^2=1/N∑_i=1^N[ε_i]^2 This is based on the assumption that ε_i followed the normally distribution N(0,σ), in fact, using the maximum likelihood estimation, we can get: log[∏_i=1^np(ε_i)]∝∑_i=1^n -1/2 (ε_i)^2 Based on Theorem <ref>, we propose a Logistic maximum likelihood loss function that can replace MSELoss, we call it Logistic Likelihood Q-Learning, or LLQL in short. §.§ Logistic Likehood Q-Learning Here, we pretend that the error service is assigned from the Logistics L(0,σ), not N(0,σ), so we replace the loss function. 
According to Section <ref>, we know that the probability density function (pdf) of the Logistic distribution is: p(ε_i)=1/σe^-ε_i/σ/(1+e^-ε_i/σ)^2 So, using the log-likelihood function, we can get: log[∏_i=1^np(ε_i)]=-nlog(σ)+∑_i=1^n [-ε_i/σ-2log(1+e^-ε_i/σ)] Finally, we can derive LLoss as: LLoss=1/N∑_i=1^N [ε_i/σ+2log(1+e^-ε_i/σ)] In Figures 1∼6, we plot the online Bellman error using formula (<ref>) for 6 environments (BipedalWalker, Lunar, HalfCheetah, Humanoid, Swimmer, Walker2d), demonstrating the credibility of the Logistic distribution in experiments, in the same way as XQL <cit.>. We use all (s, a, s', r) pairs after all iterations during online training to calculate the Bellman error; the calculation formula is a small modification of formula (<ref>): ε(s,a)=[r(s,a)+γ E_s',a' ∼π^* Q_θ^*(s',a')]-Q_θ^*(s,a) where θ^* denotes the network parameters obtained from the final training, and π^* the policy obtained from the final training. We also detail the online statistical tests in Table <ref>, where we can see that the experimental results are consistent with Theorem <ref>. We found that replacing MSELoss with LLoss not only makes the estimate more accurate but also reduces the bias of the estimate, and the reward updates can be more gradual. We have found that in most environments this replacement is more effective than MSELoss even though we only change the loss function from MSELoss to LLoss. In addition, we can get a more stable variance (rather than the overall variance) when the iteration converges, as shown in Table <ref>. The detailed reward information can be found in Table <ref>. This suggests that it is important to choose the appropriate loss function. §.§ Bellman Error Analysis in Offline RL The ineffectiveness of Theorem <ref> and Lemma <ref>: In Theorem <ref>, we proved that ε(s,a) becomes Logistic-distributed after online training, but in offline RL it is not exactly the same. We need to state this strictly, because it depends on the behavior policy π^*(a|s) of the offline dataset D. We denote the offline ε(s,a) by ε_o(s,a) and r(s,a) by r_o(s,a), which means: ε_o(s,a) ∼ p(ε(s,a∼π^*)) After SAC training (or training of other networks whose policy is not fixed), we obtain an optimal policy network π^*. Then, the negated rewards sampled under any state s will approximately obey the Gumbel distribution, which means: r_o(s,·) ∼ -Gumbel(A(π^*),B(π^*)), r_o(s,a) ∼ -Gumbel(A(π^*),B(π^*)), a∼π^*, where A(π^*) and B(π^*) are functions related to π^*. From Lemma <ref> we know that the objective being optimized will make r_o(s,·) follow the Gumbel minimum distribution. But different from online RL, offline RL cannot continuously collect rewards through environment exploration; it can only learn from the existing pairs (s, a, r, s'). This means we have to process the expert data instead of blindly using it directly, because expert data that does not meet the distribution requirements will have an impact on training. To confirm this conjecture, we experimented with offline RL (on two different reward distributions) and found that training on supplemented expert data significantly outperforms training on unadjusted expert data. Moreover, our hypothesis has been verified in Table <ref> as well. In offline RL, ϵ^t(s,a) will approximately obey the Gumbel distribution related to π^*, which means: lim_a,t→∞ϵ_o^t(s,a) ∼ Gumbel(C(π^*),β(π^*)) In offline RL, Lemma <ref> and Theorem <ref> will fail; at this point, we should use Lemma <ref> to estimate the offline Bellman error.
If the expert data is obtained by the sampling procedure of Lemma <ref> and satisfies the conditions of Lemma <ref>, then the Bellman error computed from any element of the dataset D (for a fixed sample size) obeys:

ε_o^θ(s,a) ∼ Logistic(max_a'[Q_θ(s',a')]-β(π^*)ln∑_i=1^n e^(Q_θ(s',a_i)/β(π^*)), β(π^*)) (SAC)
ε_o^θ(s,a) ∼ Logistic(max_a'[Q_θ(s',a')]-β(π^*)ln∑_i=1^n π^*(a_i|s)e^(Q_θ(s',a_i)/β(π^*)), β(π^*)) (ME)

We see that the optimization target is strongly constrained by the sampling policy π^*; the quality of offline training therefore depends on whether the behavior policy π^* is good enough. In Figures 7-12 we plot the final offline Bellman error computed from formula (<ref>) for the same six environments as in Section <ref>.

Difference between Lemma <ref> and Lemma <ref>: Lemma <ref> shows that ϵ^t(s,a) can be continuously reduced as the network is iterated, and that good training can drive ϵ^t(s,a) → 0. According to Lemmas <ref> and <ref>, we have

ϵ^t_o(s,a)=γ [max_a'(Q^t-1(s',a')+ϵ^t-1_o(s',a'))-max_a'(Q^t-1(s',a'))]

where the action must satisfy a ∼ π^*(a|s). Thus, if we want ϵ_o(s,a) → 0, it is not enough to have Q̂_o(s,a)-Q(s,a) → 0; we need it under the policy π^*(a|s), i.e. we need to minimize

E_π^*(a|s)[Q̂_o(s,a)-Q(s,a)] → 0

Hence, if π_o(a|s) denotes the policy network trained offline, we must have

Q̂_o(s,a)-Q(s,a)+λ KL[π_o(a|s) || π^*(a|s)] → 0

This is why the influence of the prior distribution must be taken into account in formula (<ref>). The optimization objective of Theorem <ref> is also the objective considered offline, in the same way as online: if we let E[ε^θ(s,a)] → 0, then we obtain the following offline condition, which is the same as in <cit.>:

max_a'[Q_θ(s',a')]=β(π^*)ln∑_i=1^n π^*(a_i|s)e^(Q_θ(s',a_i)/β(π^*))

This is also why we try to estimate the Q value as accurately as possible <cit.>. In summary, in the online setting the Bellman error approximately obeys a Logistic distribution, whereas in the offline setting it does not follow an unconstrained Logistic distribution but a Logistic distribution restricted by the prior policy π^*.

§ EXPERIMENT

All experiments follow a controlled protocol in both online and offline RL: the only change is the loss function, from MSELoss to LLoss, so as to isolate the effect of the loss. Figures 13-18 show the average reward as a function of the number of training steps during online RL training, with the variance shown as a shaded interval. Figures 19-23 show the corresponding curves for offline RL training.

§.§ Experiment Protocol

We validate the effectiveness of our approach using two versions of the representative model SAC, both of which employ MSELoss: SAC1 denotes <cit.> and SAC2 denotes <cit.>. For online training we performed training, testing, and finally iterative Bellman-error collection for epoch=160000 in seven gym environments (including five mujoco environments); for offline training we used epoch=100000, and epoch=5000 for the detailed offline runs. The experiments were implemented in Python 3.9 with mujoco 2.2.0 and gym 0.26.2.

§.§ Online Results

For the online part we take SAC as a representative of the class of algorithms that use MSELoss. As shown in Figures 13-18, the only difference between the red curve and the blue curve is the type of loss function (MSELoss versus LLoss); all other settings in the program are identical.
The difference in performance can therefore be attributed entirely to the change of loss function, and it is larger than what could be explained by random error. This indicates that the Logistic likelihood loss is indeed a simple and effective modification.

§.§ Offline Results

For the offline part we used the trained SAC model to sample expert data (note that this sampling satisfies the conditions of Theorem <ref>). Approximately 100,000 samples were collected and offline training was performed on SAC1. The training results are shown in Figures 19-23, which indeed support the superiority and greater stability of LLoss. More details can be seen in Figures 24-28. Table <ref> reports the offline statistical tests, in the same form as the online ones, and again supports our claim.

§ CONCLUSION

Preliminary results indicate that replacing MSELoss is effective and straightforward, both in the online and in the offline scenario. The results obtained in seven gym environments demonstrate that changing the loss function can improve model performance. The advantages of our approach are that the loss modification is easy to implement in code, and that it treats the Bellman error from a distributional perspective, separately in the online and offline scenarios. Combined with the statistical hypothesis tests, this suggests that studying the fundamental properties of loss functions from a distributional standpoint holds great potential for further improvements. In this study, however, we kept σ fixed to a constant value without further discussion; we believe that a more detailed treatment of the distribution would further enhance the performance of the model. We therefore recommend using the Logistic loss function instead of MSELoss; further analysis and comparisons with other models could lead to even better results.

§ APPENDIX

§.§ Proof for Lemma 2

P(max_i(X_i)<A)=P(X_1<A,X_2<A,X_3<A,..., X_n<A)
P(X_i<A)=e^-e^-(A-C)/β
P(X_1<A,X_2<A,X_3<A,..., X_n<A)=e^-e^-(A-C)/β·e^-e^-(A-C)/β·...·e^-e^-(A-C)/β=e^-∑_n e^-(A-C)/β
P(X_1<A,X_2<A,X_3<A,..., X_n<A)=e^-e^-A/β∑_n e^C/β=e^-e^-A/βe^ln(∑_n e^C/β)=e^-e^-A/β+ln(∑_n e^C/β)
P(max_i(X_i)<A)=e^-e^-1/β[A-βln(∑_n e^C/β)]

So: max_i(X_i) ∼ Gumbel(βln∑_n e^(C/β), β).

If we sample the X_i without any weighting, and X_1,X_2,...,X_n obey the Gumbel distributions Gumbel(C_i,β), then:

P(X_1<A,X_2<A,X_3<A,..., X_n<A)=e^-e^-A/β (e^C_1/β+e^C_2/β+...+e^C_n/β)

But if X_i takes the location C_i with probability μ_i, and we sample long enough while taking this probability into account, the actual summation becomes

∑_n e^C_n/β→∑_n μ_n e^C_n/β

Thus, a more accurate estimate is

P(X_1<A,X_2<A,X_3<A,..., X_n<A)=e^-e^-A/β∑_n μ_n e^C_n/β=e^-e^-A/βe^ln(∑_n μ_n e^C_n/β)=e^-e^-A/β+ln(∑_n μ_n e^C_n/β)

Therefore, strictly speaking, we have max_i(X_i) ∼ Gumbel(βln∑_n μ_n e^(C_n/β), β).

§.§ Proof for Lemma 3

If we use the Bellman operator during the update,

Q̂^t(s,a)=r(s,a)+γmax_a'(Q̂^t-1(s',a'))

then

Q̂^t(s,a)=r(s,a)+γmax_a'(Q^t-1(s',a')+ϵ^t-1(s',a'))

According to equation (<ref>),

Q^t(s,a)=r(s,a)+γmax_a'(Q^t-1(s',a'))

Combining the two relations, we easily get

ϵ^t(s,a)=γ [max_a'(Q^t-1(s',a')+ϵ^t-1(s',a'))-max_a'(Q^t-1(s',a'))]
ϵ^t+1(s,a)=γ [max_a'(Q^t(s',a')+ϵ^t(s',a'))-max_a'(Q^t(s',a'))]

According to Lemma <ref>, lim_a,t→∞ ϵ^t(s,a) ∼ Gumbel(C,β).
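As a quick numerical sanity check of Lemma 2 (not part of the derivation above), the following sketch compares the empirical mean of max_i X_i, with X_i ∼ Gumbel(C_i,β), against the mean of the predicted Gumbel(βln∑_i e^(C_i/β), β) distribution; the parameter values below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
beta = 0.5
C = np.array([0.0, 0.3, -0.2, 1.0])      # arbitrary location parameters C_i
n_rep = 200_000

# Draw X_i ~ Gumbel(C_i, beta) independently and take the maximum over i.
samples = rng.gumbel(loc=C, scale=beta, size=(n_rep, C.size))
max_samples = samples.max(axis=1)

# Lemma 2 predicts max_i X_i ~ Gumbel(m, beta) with m = beta * log(sum_i exp(C_i / beta)).
m_pred = beta * np.log(np.exp(C / beta).sum())
euler_gamma = 0.5772156649015329          # mean of Gumbel(m, beta) is m + beta * euler_gamma
print("empirical mean of the max:", max_samples.mean())
print("predicted mean of the max:", m_pred + beta * euler_gamma)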
§.§ Proof for Theorem 1

During the Q iteration, the Bellman error ε^t is

ε^t=Q̂^t(s,a)-r(s,a)-γmax_a'(Q̂^t(s',a'))

In a neural network this becomes

ε^θ=Q̂_θ(s,a)-r(s,a)-γmax_a'(Q̂_θ(s',a'))

so that

ε^θ=Q_θ(s,a)+ϵ^θ(s,a)-r(s,a)-γmax_a'(Q̂_θ(s',a'))
ε^θ=γ [max_a'[Q_θ(s',a')]+ϵ^θ(s,a)-max_a'[Q_θ(s',a')+ϵ^θ(s',a')]].

According to Lemma <ref>, ϵ^θ ∼ Gumbel(C,β), so

max_a'[Q_θ(s',a')]+ϵ^θ(s,a) ∼ Gumbel(C+max_a'[Q_θ(s',a')],β)

According to Lemma <ref>, we also have

[Q_θ(s',a')+ϵ^θ(s',a')] ∼ Gumbel(C+Q_θ(s',a'),β)
max_a'[Q_θ(s',a')+ϵ^θ(s',a')] ∼ Gumbel(βln∑_i=1^n e^((C+Q_θ(s',a_i))/β),β)= Gumbel(C+βln∑_i=1^n e^(Q_θ(s',a_i)/β),β)

If instead the actions a' are sampled with probability μ(a'|s'), then according to Lemma <ref>:

max_a'[Q_θ(s',a')+ϵ^θ(s',a')] ∼ Gumbel(C+βln∑_i=1^n μ(a_i|s')e^(Q_θ(s',a_i)/β),β)

Finally, according to Lemma <ref>,

ε^θ ∼ Logistic(max_a'[Q_θ(s',a')]-βln∑_i=1^n e^(Q_θ(s',a_i)/β),β) (SAC)
ε^θ ∼ Logistic(max_a'[Q_θ(s',a')]-βln∑_i=1^n μ(a_i|s)e^(Q_θ(s',a_i)/β),β) (ME)

§.§ Solving for equation <ref>

π^*=argmax_π[∑ T(s_t+1|s_t,a_t)π(a_t+1|s_t+1)[Q(s_t+1,a_t+1)-βlogπ(a_t+1|s_t+1)]]

This can be solved by the Lagrange multiplier method with the constraint ∑_a π^*(a|s)=1. We construct the Lagrange function

f(π,L)=∑_s_t+1,a_t+1 P(s_t+1|s_t,a_t)π(a_t+1|s_t+1)[Q(s_t+1,a_t+1)-βlogπ(a_t+1|s_t+1)]+L[∑_a π(a_t+1|s_t+1)-1]

∂ f/∂ L=0 → ∑_a π(a_t+1|s_t+1)=1
∂ f/∂π=0 → Q(s_t+1,a_t+1)+L-β=β log(π) → π=e^((Q(s_t+1,a_t+1)+L-β)/β)
∑ e^((Q(s_t+1,a_t+1)+L-β)/β)=1 → log(e^((L-β)/β)∑ e^(Q(s_t+1,a_t+1)/β))=0
L=β-β log ∑ e^(Q(s_t+1,a_t+1)/β) → π^*=e^(Q(s_t+1,a_t+1)/β)/∑_a_t+1 e^(Q(s_t+1,a_t+1)/β)

§.§ Proof for Lemma 5

From equation (<ref>) we know that

r_o(s,·)=Q^t(s,·)-max_a'Q^t-1(s'_(s,a),a')=-max_a'[Q^t-1(s'_(s,a),a')-Q^t(s,·)]

Here Q^t-1(s'_(s,a),a') indicates that s' depends on the action a and on the previous state s. Since the action a is drawn from the policy π^*(a|s), the quantity max_a'[Q^t-1(s'_(s,a),a')-Q^t(s,·)] is a random variable that depends on a, and therefore on the policy π^*. Thus, according to Lemma <ref>, we have

-r_o(s,·) ∼ Gumbel(A(π^*),B(π^*))

Because p(x|y)=p(x,y)/p(y), we obtain

r_o(s,a) ∼ -Gumbel(A(π^*),B(π^*)) · π^*(a|s)

§.§ Proof for Lemma 6

As in Appendix <ref>, if we use the Bellman operator during the offline update,

Q̂^t(s,·)=r_o(s,·)+γmax_a'(Q̂^t-1(s'_(s,a),a'))

while, according to equation (<ref>),

Q^t(s,·)=r_o(s,·)+γmax_a'(Q^t-1(s'_(s,a),a'))

we also have

ϵ^t_o(s,a)=γ [max_a'(Q^t-1(s',a')+ϵ^t-1_o(s',a'))-max_a'(Q^t-1(s',a'))]

But in contrast to Lemma <ref>, according to Lemma <ref> we now have

lim_a,t→∞ ϵ^t_o(s,a) ∼ Gumbel(C(π^*),β(π^*))

§.§ Proof for Theorem 2

For any (s,a) ∈ D, the offline Bellman error is

ε_o^θ(s,a)=r_o(s,a)+γ max_a'[Q̂_θ(s',a')]-Q̂_θ(s,a)

which means

ε_o^θ(s,a)=r_o(s,a)+γ max_a'[Q_θ(s',a')+ϵ^θ(s',a')]-[Q_θ(s,a)+ϵ^θ(s,a)]

According to Lemmas <ref> and <ref>, this gives

ε_o^θ(s,a) ∼ Logistic(max_a'[Q_θ(s',a')]-β(π^*)ln∑_i=1^n e^(Q_θ(s',a_i)/β(π^*)),β(π^*)) (SAC)

In general, however, the action a is constrained by the policy π^*(a|s): the actions must be drawn from π^*(a|s), which gives

ε_o^θ(s,a) ∼ Logistic(max_a'[Q_θ(s',a')]-β(π^*)ln∑_i=1^n π^*(a_i|s)e^(Q_θ(s',a_i)/β(π^*)),β(π^*)) (ME)
http://arxiv.org/abs/2307.02298v1
20230705135628
Effects of atom losses on a one-dimensional lattice gas of hardcore bosons
[ "François Riggio", "Lorenzo Rosso", "Dragi Karevski", "Jérôme Dubail" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas" ]
^†Université Paris-Saclay, CNRS, LPTMS, 91405 Orsay, France Université de Lorraine, CNRS, LPCT, F-54000 Nancy, France francois.riggio@univ-lorraine.fr Atom losses occur naturally during cold atoms experiments. Since this phenomenon is unavoidable, it is important to understand its effect on the remaining atoms. Here we study a gas of hard-core bosons on a lattice subject to K-body losses (where K=1,2,3,… is the number of atoms lost in each loss event), and in particular we investigate the effect of losses on the rapidity distribution ρ(k) of the atoms. Under the assumption that losses are weak enough so that the system relaxes between two loss events, we are able to determine the loss functional F[ρ](k) encoding the loss process for K-body losses. We derive closed expressions for the cases of one- and two-body losses, and show their effects on the evolution of the total number of particles. Then we add a harmonic trapping potential and study the evolution of the position-dependent rapidity distribution of this system by solving numerically the evolution equation for one-, two- and three-body losses. Effects of atom losses on a one-dimensional lattice gas of hardcore bosons François Riggio, Lorenzo Rosso^†, Dragi Karevski and Jérôme Dubail Received March 10, 2023; accepted May 12, 2023 ========================================================================== § INTRODUCTION During the past two decades, cold atom experiments have become a key simulation platform to investigate the physics of low-dimensional quantum many-body systems <cit.>. While these experiments are typically well isolated and their dynamics at short times is well approximated by unitary dynamics, cold atom setups are always subject to atom losses. The effects of the latter on the dynamics of the gas can become important at longer times, and they are typically hard to describe theoretially. Cold atom experiments involve various loss processes, for instance one-body losses resulting from scattering with background thermal atoms <cit.>, which can be significant. Inelastic two-body collisions can occur naturally or be intentionally engineered, leading to two-body losses <cit.>. Three-body losses, where a highly bound diatomic molecule is formed, are invariably present and typically dominate the overall loss process <cit.>. In principle, loss processes involving more than three atoms also exist. In particular, losses involving four atoms have been reported in Refs. <cit.> Loss processes can sometimes be controlled and engineered  <cit.> to bring out some physical phenomena such as cooling <cit.> and the quantum Zeno effect <cit.>. Despite the relevance of loss processes, a consistent general theory describing the loss events does not exist. One standard way to model atom losses is to use the Lindblad equation <cit.>. Several studies have inspected the interplay between the unitary dynamics and the lossy one in both bosonic <cit.> and fermionic gases <cit.>. The question of losses in integrable systems is particularly interesting. An integrable model admits a macroscopic number of conserved charges in comparison with non-integrable models where only the energy and/or the number of particle are conserved. Due to this large set of conserved quantities, an isolated integrable system has a singular property: the stationary states of the system are modeled by Generalized Gibbs Ensembles (GGE). A GGE is constructed by maximizing the entropy under the constraints imposed by all conservations laws <cit.>. 
However, under dissipative processes like atom losses, the conserved quantities of an integrable system are not conserved anymore, as the coupling between the system and its environnement typically causes integrability breaking <cit.>. In principle, one then expects that integrability breaking leads to thermalization at very long times, possibly with a pre-thermalization phenomenon at intermediate time scales <cit.>. Some studies seem to confirm the thermalization but other works suggest otherwise <cit.>. In this paper we propose a model of hard-core bosons on a lattice subject to weak K-body atom losses: whenever K atoms occupy K neighboring sites, they can escape the system with some rate. Our aim is to understand how integrability breaking caused by such loss events affects the system. Of course, under atom losses, the stationary state at very long times is always the vacuum. However, what we want to investigate is how the vacuum is approached, and in particular whether or not to the gas is in a quasi-stationary thermal state. We investigate for instance the mean density n(t)=⟨N(t)|/⟩L (where N is the number of atoms and L is the number of lattice sites) for different initial states and observe different non-trivial behaviors depending on the number K of atoms lost in each loss event. In particular, for two-body losses (K=2), we analytically show that n(t)∝ 1/t or ∝ 1/t^1/2, depending on a specific property of the initial state (see below). More generally, for other values of K ≥ 2 our numerical results are in agreement with a power-law decay n(t) ∝ t^α where the exponent α typically depends on the initial state, and generically differs from the mean-field result 1/(K-1). This observation holds also if we add a harmonic potential; however in that case the exponent α changes and is different from the one found in the homogeneous case. The paper is organized as follows. In Section <ref> we define the model and discuss our assumptions. Our main hypothesis of slow losses, and fundamental concepts such as the rapidity distribution and the loss functional are introduced. In Section <ref> we present our calculation of the loss functional for K-body losses, which crucially depends on the parity of K. In Section <ref> we investigate the effect of atom losses on the rapidity distribution and on the mean density in the homogeneous gas, by combining analytical and numerical calculations. In Section <ref>, we add a harmonic trapping potential. We use a hydrodynamic-like approach similar to Generalized Hydrodynamics <cit.> where we incorporate the loss functional (following similar proposals in Refs. <cit.>), and design a numerical method for solving the resulting evolution equation for the rapidity distribution in the gas. We conclude in Section <ref>. § THE MODEL §.§ Definition: lattice Tonks-Girardeau gas with K-body losses We consider a lattice Tonks-Girardeau gas subject to atom losses. Each site j ∈ℤ is occupied by either zero or one boson. We write σ^+_j/σ^-_j for the operator that creates/annihilates a boson on site j. Because of the hard-core constraint, these operators do not satisfy the usual bosonic canonical commutation relations. Instead, they satisfy the algebra of Pauli matrices, (σ^+_j)^2 = (σ^-_j)^2 = 0, and [ σ^+_i, σ^-_j ] = δ_i,jσ^z_j. We consider the hard-core boson (HCB) Hamiltonian with nearest-neighbor hopping, H_ HCB=- 1/2∑_j ∈ℤ(σ^+_jσ^-_j+1+ σ^+_j+1σ^-_j). This Hamiltonian generates the unitary part of the evolution of the gas. 
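For readers who wish to experiment numerically, a minimal Python sketch of H_HCB on a small periodic chain is given below; the chain length L=8, the use of sparse matrices and the final diagonalization call are illustrative choices, not required for any of the analytical results of this paper.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

L_sites = 8  # small periodic chain, illustrative choice

# Local basis ordering: index 0 = empty site, index 1 = occupied site.
sigma_p = sparse.csr_matrix(np.array([[0.0, 0.0], [1.0, 0.0]]))  # sigma^+ (creates a boson)
sigma_m = sigma_p.T                                              # sigma^- (annihilates a boson)
ident = sparse.identity(2, format="csr")

def site_op(op, j):
    """Embed the single-site operator `op`, acting on site j, into the full chain."""
    factors = [ident] * L_sites
    factors[j] = op
    out = factors[0]
    for f in factors[1:]:
        out = sparse.kron(out, f, format="csr")
    return out

# H_HCB = -1/2 sum_j (sigma^+_j sigma^-_{j+1} + h.c.), with periodic boundary conditions.
H = sparse.csr_matrix((2**L_sites, 2**L_sites))
for j in range(L_sites):
    hop = site_op(sigma_p, j) @ site_op(sigma_m, (j + 1) % L_sites)
    H = H - 0.5 * (hop + hop.T)   # matrix elements are real, so transpose = Hermitian conjugate

# A few low-lying many-body energies; via the Jordan-Wigner mapping these can be checked
# against sums of -cos(k), with the parity-dependent boundary conditions discussed below.
print(eigsh(H, k=4, which="SA", return_eigenvectors=False))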
In addition, we assume that the gas is subject to incoherent K-body loss processes, with K a positive integer. To describe the losses, we assume that the dynamics is Markovian, and we consider the following Lindblad equation for the density matrix ρ̂, ρ̇̂̇(t) = -i [H_ HCB, ρ̂(t)] +Γ∑_j ∈ℤ( L_j ρ̂(t) L^†_j-12{L^†_j L_j, ρ̂(t)}), Here Γ is a constant that sets the loss rate, while the Lindblad operators L_j=∏^K-1_l=0σ^-_j+l remove K bosons from the K consecutive sites j, j+1, …, j+K-1. We stress that, with these loss terms, the model is not exactly solvable, so it is necessary to develop some effective approaches to tackle it. §.§ Adiabatic losses, effective description by slow motion of the charges To simplify the description of the system, we follow the approach of Ref. <cit.>. There the approach was developed for losses in a continuous gas, and it is based on the assumption that the losses are slow, so that the gas remains in a Generalized Gibbs Ensemble (GGE) with parameters that slowly drift in time (see e.g. <cit.> for a review). A similar description of slowly-evolving nearly integrable systems with weak integrability breaking had previously appeared in Refs. <cit.>. Following these ideas, here we also assume that the loss processes occur on very long times compared to the relaxation time scale due to the unitary evolution of the gas. In that limit, the gas has time to reach a local stationary state after each loss event. To efficiently exploit that assumption, we look at the slow dynamics of the conserved charges. For convenience, from now on we focus on a finite system of L≫ 1 sites, with periodic boundary conditions. The Hamiltonian H_ HCB commutes with an infinite set of hermitian operators Q_a, a = 0,1, 2… that can be constructed using the Jordan-Wigner mapping to non-interacting fermions (see Subsec. <ref> for details). These operators also commute among themselves, [Q_a, Q_b] = [H_ HCB, Q_a] = 0. Moreover, they are local in the sense that Q_a = ∑_j =1^L q_a,j where q_a,j is a charge density operator that has compact support (i.e. it acts on a finite number of sites around j). The time evolution of the expectation value < Q_a >(t) = tr [ ρ̂(t) Q_a], is obtained from Eq. (<ref>), Q̇_̇ȧ(t)= Γ/2∑_j =1^L L^†_j [Q_a, L_j] + [ L^†_j , Q_a ] L_j(t). Moreover, the hermiticity of Q_a implies ⟨ [L_j^†, Q_a] L_j⟩ = ⟨ L^†_j [Q_a, L_j] ⟩^*. Thus, Q̇_̇ȧ(t)= Γ∑_j=1^L Re L^†_j [ Q_a , L_j ] (t). This equation is exact and it is a direct consequence of Eq. (<ref>). Notice that, in this form, it is not particularly useful, because to evaluate the r.h.s one needs to know the exact density matrix ρ̂(t). Now comes the crucial step. Importantly, the operator L^†_j [ Q_a , L_j ] is local, because both the operator L_j and the charge density q_a,j have compact support. This, together with the assumption of slow losses, allows us to use the idea of local relaxation in the system. Namely, we expect that, under unitary evolution, the density matrix of a small subsystem quickly relaxes to a Generalized Gibbs Ensemble (GGE). Expectation values of local observables can then be evaluated with respect to the GGE density matrix, ρ̂_ GGE, {⟨ Q_a ⟩} ∝ e^-∑_aβ_a Q_a, where the Lagrange multipliers β_a are fixed by the expectation values of the charges ⟨ Q_a ⟩, which must be equal to tr [ ρ̂_ GGE, {⟨ Q_b ⟩} Q_a ]. Evaluating the r.h.s of Eq. 
(<ref>) in the GGE density matrix leads to a closed evolution equation for the slow motion of the charges induced by the losses, d/dt⟨ Q_a ⟩ = Γ∑_j=1 ^L Re L^†_j [ Q_a , L_j ] _ GGE, {⟨ Q_b ⟩}. It is this evolution equation that we study in great detail in this paper. For lattice HCB, the description is further simplified by specifying the form of the conserved charges Q_a. This is what we do next, by introducing the distribution of rapididites. §.§ Slow evolution of the rapidity distribution Hard-core bosons can be mapped to free fermions by a Jordan-Wigner transformation, σ^+_j = ∏_i=1^j-1(-1)^c^†_ic_i c^†_j, σ^-_j = ∏_i=1^j-1(-1)^c^†_ic_i c_j . Here the operators c^†_j/c_j create/annihilate a fermion on site j. They satisfy the canonical anticommutation relations {c_i,c^†_j}=δ_ij. Under the Jordan-Wigner mapping, the Hamiltonian (<ref>) becomes H_ HCB = - 1/2∑_j =1^L ( c^†_j c_j+1 + c^†_j+1 c_j ) . Moreover, the fermions satisfy antiperiodic (resp. periodic) boundary conditions if the number of particles N in the system is even (resp. odd): c_L+1^† = (-1)^N-1 c_1^† . The Fourier modes are c^†(k) = 1/√(L)∑_j = 1^L e^i k j c_j^† with k ∈2π/L (ℤ+1/2) if N is even, and k ∈2π/Lℤ if N is odd. Either way the Hamiltonian reads H_ HCB = ∑_k ε(k) c^†(k) c(k) , with ε(k) = - cos k. It is clear from the form (<ref>) that any operator of the form Q[f] = ∑_k f(k) c^†(k) c(k) , for any function f(k), commutes with the Hamiltonian H_ HCB. Moreover these conserved charges also commute among themselves. Convenient choices for f(k) are cos ( n k ) or sin (n k) for n ∈ℕ, which leads to a hermitian basis set of charges, where each charge has a charge density that is compactly supported. However, for the purposes of this paper, rather than to work with a specific choice of basis for the space of conserved charges Q_a (or Q[f]), it is more convenient to work directly with the occupation number, or `rapidity distribution', ρ(k) L →∞= < c^†(k) c(k) > , ρ(k) ∈ [0,1] . It is clear that if we know the rapidity distribution ρ(k), then we also know the expectation values of any charge Q[f], because < Q[f] > = ∑_k f(k) ρ(k). Following Ref. <cit.>, we can turn the evolution equation for the slow motion of the charges (<ref>) into an equation for the slow evolution of the rapidity distribution itself, ρ̇(k)=-Γ F[ρ](k), where the loss functional F[ρ](k)=∑_j=1^L Re L^†_j[L_j,c^†(k)c(k)]_ GGE, ρ , and the GGE density matrix itself is parameterized by the rapidity distribution ρ(k). More precisely, the GGE density matrix is Gaussian for the fermions c_j^†, c_j, and it is characterized by its two-point function c^†(k) c(k') _ GGE, ρ = ρ(k) δ_k,k'. All higher-order correlations can be computed using Wick's theorem for fermionic operators. The functional (<ref>) is the central object of this paper. In the next section we compute it explicitly for K-body loss processes. For one- and two-boson loss processes we get simple closed expressions. For loss events involving larger numbers K of bosons, we will see that we can express the loss functional as a small determinant, which follows from applying Wick's theorem to Eq. (<ref>). § DERIVING THE LOSS FUNCTIONAL In this section we compute the functional F[ρ](k) explicitly. Importantly, in our calculation we uncover a different structure depending on the parity of the number K of bosons lost in each loss event, which can be traced back to the Jordan-Wigner string appearing in the mapping (<ref>) to non-interacting fermions. 
§.§ One-body losses For K=1 the Lindblad dissipators are L_j=σ^-_j. Using translational invariance, the loss functional (<ref>) that we need to compute is F[ρ](k) = L σ^+_1 [σ^-_1, c^†(k) c(k) ] _ GGE, ρ = L σ^+_1 σ^-_1 c^†(k) c(k) _ GGE, ρ - L σ^+_1 c^†(k) c(k) σ^-_1 _ GGE, ρ , where L is the length of the system and the loss operator acts on the site j=1. Both terms in the second line of (<ref>) can be computed using the fact that the GGE is a gaussian state for the fermions, which allows us to use Wick's theorem. For the first term we have (using c_1 = 1/√(L)∑_q e^iq c(q)): L σ^+_1 σ^-_1 c^†(k) c(k) _ GGE, ρ = L c^†_1 c_1 c^†(k) c(k) _ GGE, ρ = ∑_qq' e^i(q-q') c^†(q') c(q) c^†(k) c(k) _ GGE, ρ = ∑_qq' e^i(q-q')( ⟨ c^†(q') c(q) ⟩⟨ c^†(k) c(k) ⟩ + . . ⟨ c^†(q') c(k) ⟩⟨ c(q) c^†(k) ⟩) = ⟨ N ⟩ρ(k) + ρ(k) (1-ρ(k)) . The second term requires more care, because the operator c^†(k) c(k) is inserted between σ_1^+ and σ_1^-, and the latter change the parity of the number of particles in the system. The boundary conditions for the fermions are modified according to Eq. (<ref>). Thus we need to relate the Fourier modes of the fermions with periodic boundary conditions to the ones with anti-periodic boundary conditions. For conciseness, let us introduce the two corresponding sets of momenta, Q^ p = 2π/L×{1,2 , …, L } , Q^ ap = 2π/L×{1/2 , 3/2, …, L- 1/2} . Then we have the following identities, (k ∈ Q^ p ) c(k)=iL∑_q ∈ Q^ ape^i(q-k)/2sin((q-k)/2) c(q), (k ∈ Q^ ap ) c(k)=iL∑_q ∈ Q^ pe^i(q-k)/2sin((q-k)/2) c(q) . We can insert them into the second term of Eq. (<ref>), which leads to σ^+_1 c^†(k) c(k) σ^-_1=1L^2∑_q,q'e^i(q-q')/2 c^†_1 c^†(q') c(q) c_1 sin((q-k)/2)sin((q '-k)/2). This correctly implements the change of boundaries of the fermions. Next we can apply Wick's theorem to evaluate the four-fermion correlator c^†_1 c^†(q') c(q) c_1. This leads to L σ^+_1 c^†(k) c(k) σ^-_1 = N/L^2∑_qρ(q)sin^2(q-k/2)-(1/L∑_q(q-k/2) ρ(q))^2- N^2L^2. The first term in the above equation has a pole of order 2 at q=k+2 πℤ, and it is convenient to reduce its degree using the identity ∑_q 1/sin^2(q-k/2)=L^2. This leads to the equivalent expression L σ^+_1 c^†(k) c(k) σ^-_1 =N/L^2∑_qρ(q)-ρ(k)sin^2(q-k/2)+Nρ(k) -(1/L∑_p(p-k/2) ρ(p))^2- N^2L^2. Putting the two terms (<ref>)-(<ref>) together and taking the thermodynamic limit L →∞, we arrive at the following form of the one-body loss functional F[ρ](k) =ρ(k)-ρ^2(k)+(^π_-πdp/2π (k-p/2) ρ(p))^2 +n(n+^π_-πdq/2π ρ(k)-ρ(q)sin^2(k-q/2)), where n=N/L is the density of particle and means the Cauchy principal value of the integral. This is our main result for one-body losses. It is similar to (but different from) the formula given in Ref. <cit.> for the Tonks-Girardeau gas in the continuum. Notice that the functional is non-linear in ρ(k), and also non-local in rapidity space. §.§ Two-body losses For K=2, the dissipators are L_j= σ^-_j σ^-_j+1. Under the Jordan-Wigner mapping they become L_j= σ^-_jσ^-_j+1= c_j (-1)^c^†_jc_j c_j+1 = - c_j c_j+1 . Then, to compute the functional F, we simply need to insert the dissipator (<ref>) in the definition (<ref>), F[ρ](k)=∑_jσ^+_j+1σ^+_j[σ^-_jσ^-_j+1,c^†(k) c(k)] =∑_q,q',p,p'e^i(2p'+p-2q-q')/Mc^†(q)c^†(q')[c(p)c(p'),c^†(k)c(k)]. Expanding the commutator in the braket leads to two terms c^†(q)c^†(q')[c(p)c(p'),c^†(k)c(k)] =c^†(q)c^†(q')c(p)c(p')c^†(k)c(k) -c^†(q)c^†(q')c^†(k)c(k)c(p)c(p'). 
The first term can be expressed as

c^†(q)c^†(q')c(p)c(p')c^†(k)c(k) =c^†(q)c^†(q')c^†(k)c(k)c(p)c(p') +δ_p'k c^†(q) c^†(q') c(p) c(k)-δ_pk c^†(q) c^†(q') c(p') c(k).

The second term is the expectation value of c^†(k)c(k) in a state where two atoms have been removed, so the parity of the initial number of particles is unchanged. Hence, in contrast to the K=1 case treated in the previous subsection, here the parity of the number of atoms does not change. One can then apply Wick's theorem, and take the thermodynamic limit to obtain the loss functional,

F[ρ](k) = (2/π)∫^π_-π dq sin^2 ((k-q)/2) ρ(q) ρ(k).

This is our main result for two-body losses. That functional presents some similarities with the loss functional found in relation (5) of Ref. <cit.>. We now generalise this calculation to the case of K-body losses for K an arbitrary even integer.

§.§ K-body losses with K even

In this subsection we show that it is possible to find a closed formula for the loss functional defined in (<ref>) when the Lindblad operator is given by L_j=σ_jσ_j+1…σ_j+K-1. Taking the Fourier transform of L_j and L^†_j in (<ref>), the loss functional reads

F^ even[ρ](k)= 1/L^K-1∑_q_1,…,q_K q'_1,…,q'_K exp[ i∑^K_l=1(q_l-q'_l)l ] × c^†(q'_K) … c^†(q'_1)[c(q_1)… c(q_K),c^†(k)c(k)].

As mentioned in <ref>, in the case of even K-body losses the commutator in (<ref>) reduces to K terms

c^†(q'_K) … c^†(q'_1)[c(q_1)… c(q_K),c^†(k)c(k)] =δ_kq_K c^†(q'_K) … c^†(q'_1)c(q_1)… c(q_K-1)c(k) -δ_kq_(K-1) c^†(q'_K) … c^†(q'_1)c(q_1)… c(q_K-2) c(q_K)c(k) +…

and applying Wick's theorem to each term leads to a product of K factors of the form c^†(q')c(q). Using the property c^†(q')c(q)=δ_qq' ρ(q) and taking the thermodynamic limit, the loss functional in (<ref>) can be expressed as a sum of K terms, each consisting of a K × K matrix determinant. Let us introduce the K × K matrix A^(j)_[ρ] with matrix elements

[A^(j)_[ρ] ]_ab = {[ 1/2π∫^π_-πdq e^i(b-a)q ρ(q) if b ≠ j; e^i(b-a)k ρ(k) if b = j , ].

for indices a,b= 1, … , K. The superscript j indicates which column depends on the rapidity k. Apart from the j-th column, the matrix essentially contains Fourier transforms of the rapidity distribution ρ(k). The loss functional then reduces to

F^ even[ρ](k)= ∑^K_j=1 det(A^(j)_[ρ]) .

The relation (<ref>) is another fundamental result of this paper.

§.§ K-body losses with K odd

In this last subsection we investigate the case of loss processes with an odd number K of lost atoms. The reasoning is similar to the one we developed in the previous subsection; however, as in the K=1 case, we need to be careful about the change of boundary conditions for the fermions. We start by taking the Fourier transform of the Lindblad operators in the definition (<ref>), which leads to the relation (<ref>). The commutator in the loss functional gives two terms

c^†(q'_K) … c^†(q'_1)[c(q_1)… c(q_K),c^†(k)c(k)] =c^†(q'_K) … c^†(q'_1)c(q_1)… c(q_K)c^†(k)c(k) -c^†(q'_K) … c^†(q'_1)c^†(k)c(k)c(q_1)… c(q_K).

As already discussed in subsection <ref>, the first term is the expectation value of c^†(k)c(k) in a state where the initial number of particles is preserved. However, the second term corresponds to the expectation value of c^†(k)c(k) in a state where K atoms have been removed. Since K is an odd number, the parity of the number of particles is changed and one needs to use the relations (<ref>) to express c^†(k)c(k) in the appropriate parity sector.
Inserting the relations (<ref>) in the second term, one has c^†(q'_K) … c^†(q'_1)c^†(k)c(k)c(q_1)… c(q_K) = ∑_q,q'e^i(q-q')/2 c^†(q'_K) … c^†(q'_1)c^†(q')c(q)c(q_1)… c(q_K) L^2sin((q-k)/2)sin((q '-k)/2). Before using Wick's theorem on the above formula, one can notice that the first term in the right-hand side of (<ref>) can be written as c^†(q'_K) … c^†(q'_1)c(q_1)… c(q_K)c^†(k)c(k) =c^†(q'_K) … c^†(q'_1)c^†(k)c(k)c(q_1)… c(q_K) +δ_kq_Kc^†(q'_K) … c^†(q'_1)c(q_1)… c(q_K-1)c(k) -δ_kq_(K-1)c^†(q'_K) … c^†(q'_1)c(q_1)… c(q_K-2) c(q_K)c(k) +…, where we used the anti-commutation relation for the fermionic operators. As we proceed in the previous subsection, the Wick's contractions of (<ref>) and of the first term in the right-hand side of (<ref>) can be written as two determinants of two matrices B and C. The matrices B and C are (K+1) × (K+1) hermitian matrices and their matrix elements depend on the Fourier and Hilbert transforms <cit.> of ρ(k) [B_[ρ] ]_ab = {[ 1/2 π∫^π_-πdq e^i(b-a)q ρ(q) if a, b < K+1; e^-iak ρ(k) if b = K+1; 0 if a=b=K+1 , ]. [C_[ρ]]_ab = {[ 1/2 π∫^π_-πdq e^i(b-a)q ρ(q) if a, b < K+1; 1/2π^π_-πdq e^-i(a-1)qρ(q)((k-q2)+i ); if b = K+1; 1/2π^π_-πdq ρ(q)-ρ(k)sin^2(k-q2) if a=b=K+1 . ]. The loss functional for odd K takes the final form F^ odd_K[ρ](k) =( ∑^K_j=1(A^(j)_[ρ]) ) +[ (B_[ρ])-(C_[ρ])]. It is possible to write a general expression valid both for even and odd K by introducing the factor 1-(-1)^K2 which vanishes for K even, so that our final result, valid in all cases, reads F_K[ρ](k) =( ∑^K_j=1(A^(j)_[ρ] ) ) + 1-(-1)^K2[ (B_[ρ])-(C_[ρ])]. § EVOLUTION OF THE RAPIDITY DISTRIBUTION IN A HOMOGENEOUS GAS Having established the general form of the loss functional F[ρ] for K-body losses, Eqs. (<ref>)-(<ref>)-(<ref>)-(<ref>)-(<ref>), we now turn to the time evolution of the rapidity distribution. We solve the evolution equation ρ̇(k)=-Γ F[ρ](k) numerically (and analytically for the special case K=1), and we focus in particular on the time evolution of the atom density n = ∫_-π^πρ(k) dk/2π. §.§ Results for K=1 For one-body losses the atom density always decays exponentially, n(t)=n(0) e^-Γ t . This simply follows from Eq. (<ref>) applied to the total particle number Q_a = N and to L_j = σ^-_j: it gives ⟨Ṅ⟩ = - Γ⟨ N ⟩, which implies Eq. (<ref>) for the atom density n = N/L. It turns out the evolution equation (<ref>) for the rapidity distribution can be solved exactly for the loss functional for K=1 (see Eq. (<ref>)). The solution is derived in Appendix <ref>; it reads ρ(t,k)=n_0e^-Γ t(tanh(n_0(e^-Γ t-1))+i/ n_0 I(t,k)1+i/ n_0tanh(n_0(e^-Γ t-1)) I(t,k)) , where n_0=n(0) is the initial atom density, ρ_0(k)=ρ(t=0,k) is the initial rapidity distribution, and I(t,k) is the integral I(t,k)= ∫^π_-πdq/2πρ_0(q)tan(k-q/2+in_0(1-e^-Γ t)). A related expression for the continuous Tonks-Girardeau gas with one-body losses was obtained in Ref. <cit.>. In Fig. (<ref>) we show the evolution of the rapidity distribution from thermal initial states at different temperatures. We display the analytical result (<ref>), as well as the numerical solution of Eq. (<ref>) obtained from the Runge-Kutta method; they are in perfect agreement, as they should. We see that loss processes spread the distribution in rapidity space. In the limit of large temperature for the initial state, the rapidity distribution is flat, and remains flat at all times. 
For smaller initial temperatures, it evolves into a bell-shape distribution at late times, which is close to a Boltzmann distribution ρ(k) ∝exp [ cos (k)/T ] with a density going to zero according to Eq. (<ref>). This is further illustrated in Fig. <ref>.(a), where we plot the ratio ρ(k,t)/n(t) in the limit t→∞, and fit the result with a Boltzmann distribution. The agreement is very good, even though it is clear from the exact formulas (<ref>)-(<ref>) that the distribution ρ(k,t) never becomes exactly thermal, even at infinite time. To find a more striking signature of the fact that the system never goes to a low-density thermal distribution, we consider the case of an oscillating initial rapidity distribution ρ_0(k)=(1-cos(sk))/2, where s is an integer. In that case the long-time limit of the integral (<ref>) can be evaluated analytically, lim_t →∞ I(t,k) = (1-e^-2sn_0 e^iks)/2, and when injected in Eq. (<ref>) it leads to a late-time rapidity distribution of the form ρ(k,t)/n(t) t→∞= α+β cos(sk)/γ+δ cos(sk), where the coefficients α, β, γ, δ depend on the initial density n_0 and on the integer s. Thus, even at long time, the rescaled rapidity distribution is sensitive to the structure of the inital distribution (see also Fig.<ref>.(b)). We conclude that in general the rapidity distribution does not go to a low-density thermal distribution at long times. §.§ Results for K=2 We now consider the time evolution equation for the rapidity distribution for the K=2 case, which is characterized by the functional given by Eq. (<ref>). For simplicity, here we focus on initial rapidity distributions ρ_0(k) that are symmetric under reflection k→ -k. Since the master equation is also invariant under k →-k, this property is conserved throughout the entire evolution. Then Eq. (<ref>) simplifies to the following expression, F[ρ] = 2 ρ(k) n(t) - 1πcos(k)ρ(k) ∫^π_-π dq cos(q) ρ(q). Eq. (<ref>) highlights the two distinct contributions to the time evolution of ρ(k,t): the first term in the right-hand side represents a mean-field contribution, as it does not introduce any structure in rapidity space, while the second term is responsible for generating quantum correlations and, consequently, introducing structure in k-space. After some algebra presented in App. <ref>, one can derive an exact (implicit) expression for the rapidity distribution at all times: ρ(k,t) = ρ_0(k) ×exp -2 Γ∫_0^t ( 1 - σ_0 cos(k)√( 1 + ∂_τ n(τ)/2 Γ n(τ)^2)) n(τ ) dτ, where σ_0 = sgn (∫_-π^πcos(k)ρ_0(k) dk), with sgn (x) = ± 1 the sign function. Eq. (<ref>) shows that ρ(k,t) is entirely determined by ρ_0(k) and n(t). Notably, it reveals that rapidities are distributed according to a cosine law, with the k=0 mode that has the longest lifetime. In Fig. <ref> we show the evolution of the rapidity distribution for initial thermal states at different temperatures, and in Fig. <ref> we show the corresponding evolution of the mean atom density n(t). It appears that, except for an initial infinite temperature state, the mean density always decays as n(t) ∝ 1/√(t) at very long times, while the density decreases as 1/t for an infinite temperature. These two behaviors follow from Eqs. (<ref>)-(<ref>), as we now explain. Eq. (<ref>) reveals that initial rapidity distributions that have a vanishing first Fourier mode ∫_-π^π dq cos(q) ρ(q) always follow the exact same dynamics as the mean density, namely n(t) = 11 + 2 n(0) Γ t, characterized by a long-term decay as ∼ 1/t. 
This power law is therefore always found for initial rapidity distributions with vanishing first Fourier mode, including infinite temperature states. In contrast, initial rapidity distributions that have a non-vanishing first Fourier mode decay as ∼ 1/√(t) at long times. This can be understood by looking at the long time limit of Eq. (<ref>). Let us introduce the two time-dependent functions g(t)=∫_0^t n(τ) dτ and f(t)=∫_0^t√( 1 + ∂_τ n(τ)/2 Γ n(τ)^2) n(τ) dτ. Numerically we observe that | ∂_τ n(τ) | ≪ 2 Γ n(τ)^2 at long times, as soon as the initial rapidity distribution has a non-vanishing first Fourier mode. Then, expanding at first order in | ∂_τ n(τ) | / ( 2 Γ n(τ)^2), f(t) becomes

f(t) ≃ g(t)+ 1/4Γ ln( n(t)/n(0)),

implying that, at large t, the difference between f(t) and g(t) grows as ln(t). Integrating Eq. (<ref>) over k leads to the mean density

n(t)= e^-2 Γ g(t)/2 π ∫^π_-π dk ρ_0(k) e^2 Γσ_0 cos(k) f(t).

Since the function f(t) diverges at large t, the latter integral can be evaluated by the saddle-point approximation; we denote k^*_σ_0 the saddle point: k^*_σ_0 = 0 if σ_0 = +1 and k^*_σ_0 = π if σ_0 = -1. We are thus left with:

∫^π_-π dk ρ_0(k) e^2 Γσ_0 cos(k) f(t) t →∞≃ e^2Γ f(t)∫^∞_-∞ dk ρ_0(k) e^-Γ k^2 f(t) =√(π/Γ f(t)) ρ_0(k^*_σ_0) e^2Γ f(t).

Using Eq. (<ref>), we find

n(t) t→∞≃ ρ_0(k^*_σ_0)^2/(4π n(0) Γ f(t)) ≃ ρ_0(k^*_σ_0)^2/(4π n(0) Γ g(t)),

where, in the second identity, we have used the fact that the logarithmic term in Eq. (<ref>) is subleading. Since n(t) = ∂_t g(t), we arrive at an ordinary differential equation of the form ∂_t g(t) ∝ 1/g(t). Consequently, g(t)∝√(t), and then n(t) ∝ t^-1/2, as expected from our numerical results, see Fig. <ref>. We note that a similar result was found recently in a lattice gas with a similar but different two-body loss term <cit.>, as well as in its continuous analog <cit.>, although we stress that the loss functionals and rate equations for these models are different from the ones of this paper.
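These two decay laws are easy to reproduce numerically. The short Python sketch below integrates ρ̇(k)=-Γ F[ρ](k) with the two-body functional (<ref>) on a discretized rapidity grid and estimates the effective decay exponent of n(t) at late times, for one initial distribution with a non-vanishing first Fourier mode and one without; the grid size, time window and initial distributions are arbitrary illustrative choices.

import numpy as np
from scipy.integrate import solve_ivp

Gamma = 1.0
Nk = 256
k = np.linspace(-np.pi, np.pi, Nk, endpoint=False)
dk = 2 * np.pi / Nk

def F_two_body(rho):
    # F[rho](k) = (2/pi) int dq sin^2((k-q)/2) rho(q) rho(k)
    #           = rho(k) * [ 2 n - (cos(k) c1 + sin(k) s1) / pi ]
    n = rho.sum() * dk / (2 * np.pi)
    c1 = (np.cos(k) * rho).sum() * dk
    s1 = (np.sin(k) * rho).sum() * dk
    return rho * (2 * n - (np.cos(k) * c1 + np.sin(k) * s1) / np.pi)

def rhs(t, rho):
    return -Gamma * F_two_body(rho)

initial_distributions = {
    "nonzero first Fourier mode (thermal-like)": 1.0 / (1.0 + np.exp(-np.cos(k) / 0.5)),
    "vanishing first Fourier mode": 0.5 * (1.0 - np.cos(5 * k)),
}
for label, rho0 in initial_distributions.items():
    t_eval = np.geomspace(1.0, 2000.0, 40)
    sol = solve_ivp(rhs, (0.0, 2000.0), rho0, t_eval=t_eval, rtol=1e-6, atol=1e-9)
    n_t = sol.y.sum(axis=0) * dk / (2 * np.pi)
    # effective exponent alpha in n(t) ~ t^(-alpha), fitted on the latest times
    alpha = -np.polyfit(np.log(sol.t[-15:]), np.log(n_t[-15:]), 1)[0]
    print(f"{label}: effective alpha = {alpha:.2f}")

The two initial conditions illustrate the 1/√(t) and 1/t regimes discussed above.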
We conclude this subsection with an investigation of the long time behavior of the rapidity distribution ρ(k,t), which is determined by the mean density n(t) according to Eq. (<ref>). We have just established that the first Fourier mode of the initial rapidity distribution strongly influences the long time behavior. In the case of a vanishing first Fourier mode, the rapidity distribution at time t is simply given (see Eqs. (<ref>)-(<ref>)) by ρ(k,t) = ρ_0(k) n(t)/n(0). The ratio ρ(k,t)/n(t) is then time-independent, as illustrated in Fig. <ref>(c). In contrast, when the first Fourier mode of the initial distribution ρ_0(k) is non-zero, the ratio ρ(k,t)/n(t) loses its dependence on the initial rapidity distribution at very long times. Indeed, in that case the rapidity distribution goes to a low-density, low-temperature, Boltzmann distribution of the form ρ(k,t)/n(t) ≃ e^β(t) cos k with the effective inverse temperature β(t) = 2 σ_0 Γ f(t). This is illustrated for the case of an initial thermal rapidity distribution in Fig. <ref>(a), where we see that the ratio ρ(k,t)/n(t) gets concentrated around k=0 and is very close to a Boltzmann distribution. Notice also that the effective temperature β(t) is negative when the sign of the first Fourier mode of the initial rapidity distribution is negative. This is illustrated in Fig. <ref>(b), where we display the ratio ρ(k,t)/n(t) at late time for the far-from-thermal initial rapidity distribution ρ_0(k)=(1-cos(k))/2. We observe that, at late times, the distribution gets concentrated around k=π and corresponds to a Boltzmann distribution at negative temperature. Remarkably, these observations are in stark contrast with our findings for the K=1 case. While we found that, for K=1, the rapidity distribution never goes to a thermal distribution at late times, here for K=2 the distribution goes to a low-density, low-temperature (possibly negative-temperature), thermal distribution. This is always true, except in the special case where the first Fourier mode of the rapidity distribution vanishes; in that case the rapidity distribution is simply rescaled by a factor n(t)/n(0) under the lossy evolution.

§.§ Generic observations for arbitrary K

We now turn to the case of higher K, and draw some general conclusions. Numerically, we solve the time evolution equation of the rapidity distribution for three-body losses (K=3), see Fig. <ref> for the evolution of the rapidity distribution from an initial thermal state, and Fig. <ref> for the atom density n(t). In Fig. <ref> we see that the effect of three-body losses is to spread the rapidity distribution in rapidity space, as already observed for one-body and two-body losses. We expect that this is a generic effect caused by K-body losses for any K. In Fig. <ref>, we observe that the mean density decays as t^-1/2 for an initial infinite temperature state, while for any non-zero initial temperature it crosses over to a t^-α decay at long times with an exponent α≃ 0.21. This exponent seems to be independent of the initial temperature as long as it is non-zero, see Fig. <ref>. However, for an initial rapidity distribution that is far from thermal, such as for instance ρ(k,t=0) = (1-cos k)/2 or (1-cos (2k))/2, we find that the density also decays as a power law at late times, although with a different exponent α: the exponent is close to 0.21 for ρ(k,t=0) = (1-cos k)/2, and close to 0.38 for ρ(k,t=0) = (1-cos (2k))/2. We have not been able to analytically derive the observed generic power-law decay for K=3 or for higher K, beyond the special case of the initial infinite temperature state. The latter case is easily understood because, for an infinite temperature state, the rapidity distribution is constant, ρ(k)=n, and equation (<ref>) can be solved analytically. Then the determinant of the matrix B is equal to the determinant of C, and the matrices A reduce to identical diagonal matrices. Therefore the loss functional is simply given by F_K[ρ](k)=Kn^K, which is the result expected from the mean-field approach. The solution of the evolution equation (<ref>) then gives the mean density

n(t)=n(0)/(1+n(0)^K-1 K(K-1) Γ t)^1/(K-1).

Beyond that simple case, we have not been able to express the loss functional in a simple enough form to derive the long-time decay of the mean density. Similarly to the K=1 and K=2 cases, we have investigated the behavior of the rescaled rapidity distribution ρ(k,t)/n(t) at late times. Recall that this ratio reveals that the gas generically (i.e. unless the first Fourier mode of ρ(k) is tuned to zero) goes to a low-density, low-temperature thermal state for K=2, while for K=1 it never does. In Fig. <ref> we display this ratio at late time for K=3. Fig. <ref>.(a) corresponds to a thermal initial rapidity distribution, and Figs. <ref>.(b)-(c) to the non-thermal initial rapidity distributions ρ(k,t=0)=(1-cos k)/2 and (1-cos (2k))/2 respectively.
We observe that the rescaled rapidity distribution concentrates around the maxima of the initial rapidity distribution at long times. Even for an initial thermal distribution (Fig. <ref> (a)), the long time behavior of the density profile can not be described by a Boltzmann distribution, as it looks like a bell-shaped distribution that has a small dip at k=0. A similar conclusion holds for Fig. <ref>.(b). Finally Fig. <ref>.(c) shows the emergence of peaks localised at k=±π/2. We conclude that, in contrast with the K=2 case, the late-time rapidity distribution is generically non-thermal. § HARMONICALLY TRAPPED GAS In many cold atom experiments, the gas lies in a longitudinal trapping potential. This prompts us to study the influence of the trapping potential on the dynamics of our lossy lattice hard-core gas. For simplicity we restrict to a harmonic potential V(x)=ω^2x^22. We adopt a coarse-grained perspective of the gas: we assume that the gas can be divided into fluid cells which contain a large number of bosons, and that the state of the gas within each fluid cell [x, x+dx] is a certain macrostate represented by the local density of rapidities ρ(x,k). Such coarse-grained descriptions have been very successful lately in describing the out-of-equilibrium quantum many-body dynamics of nearly integrable gases <cit.>. Here we investigate the effect of losses on our lattice hard-core gas within that coarse-grained description. The equation satisfied by the position-dependent rapidity distribution is ∂_t ρ(x,k,t)+sin(k) ∂_x ρ(x,k,t) -ω^2 x ∂_k ρ(x,k,t) = -Γ F[ρ(x,.,t)](k). In the first line, the term ∂_x sin(k) ρ(x,k) corresponds to the gradient of the current of quasi-particles with rapidity k, j(x,k) = sin(k) ρ(x,k). Here sin(k) is the group velocity of quasi-particles with lattice dispersion relation ε(k) = - cos (k). The term -ω^2 x ∂_k ρ(x,k) in Eq. (<ref>) corresponds to Newton's second law, and encodes the fact that the quasi-particles feel the harmonic potential and are accelerated according to k̇ = - ∂_x V(x) = - ω^2 x. Finally, the r.h.s of Eq. (<ref>) is the loss term at position x, which follows from the assumption that the gas is locally homogeneous so that we can apply the formalism developed in previous sections, this time within each fluid cell [x,x+dx]. §.§ Numerical method Our main goal in this section is to solve numerically the evolution equation (<ref>). For this we use a split-step method. Assuming that we know the rapidity distribution ρ_t (x,k), from time t to t+Δ t we first compute the new rapidity distribution ρ'_t+Δ t (x,k) generated by the transport of quasi-particles, and then compute ρ_t+Δ t (x,k) from ρ'_t+Δ t (x,k) by implementing localized lossy evolution during a time step Δ t. This gives the following scheme, with first step ρ'_t + Δ t (x,k) = ρ_t (x,k) - Δ t sin(k) ∂_x ρ_t (x,k) + Δ t  ω^2 x ∂_k ρ_t (x,k), and second step ρ_t + Δ t(x,k) = y (Δ t, k) where y(τ,k) is the solution of the differential equation ∂_τ y(τ,k) = - Γ F[ y(τ,.) ](k) , y(0,k) = ρ'_t+Δ t, x . The rapidity distribution ρ (x,k) is discretized on a regular grid in phase space, and the two steps in Eq. (<ref>) and Eq. (<ref>) are implemented as follows. For the transport step in (<ref>), we use the method of characteristics, which here is very simple since the underlying dynamics is the one of non-interacting quasi-particles. Each quasi-particle at position (x,k) in phase space evolves according to: {[ dxdt=sin(k); ; dkdt=-ω^2 x . ]. 
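In practice, both steps can be sketched in a few lines of Python; the node shift and the linear interpolation back onto the regular grid are exactly the operations described in the next paragraph, while the loss update below uses an explicit Euler step with the two-body functional as a stand-in for the Runge-Kutta update that we actually employ. Grid sizes and parameter values are arbitrary illustrative choices, and the scattered-data interpolation routine used here is only one possible implementation.

import numpy as np
from scipy.interpolate import griddata

Gamma, omega, dt = 0.05, 0.2, 0.02        # illustrative parameters
x = np.linspace(-15.0, 15.0, 100)
k = np.linspace(-np.pi, np.pi, 64, endpoint=False)
dk = 2 * np.pi / k.size
X, K = np.meshgrid(x, k, indexing="ij")

# Initial thermal (Fermi-Dirac) phase-space distribution, with T = 0.5 and mu = 0.5.
rho = 1.0 / (1.0 + np.exp((-np.cos(K) + 0.5 * omega**2 * X**2 - 0.5) / 0.5))

def transport_step(rho, dt):
    # Move every node of the regular grid along the characteristics
    # dx/dt = sin(k), dk/dt = -omega^2 x over a time dt ...
    X_new = X + dt * np.sin(K)
    K_new = (K - dt * omega**2 * X + np.pi) % (2 * np.pi) - np.pi   # keep k in [-pi, pi)
    # ... then interpolate the transported values back onto the regular grid.
    pts = np.column_stack([X_new.ravel(), K_new.ravel()])
    return griddata(pts, rho.ravel(), (X, K), method="linear", fill_value=0.0)

def loss_step(rho, dt):
    # One explicit Euler step of d(rho)/dt = -Gamma F[rho] in each fluid cell,
    # here with the two-body (K=2) loss functional.
    n_x = rho.sum(axis=1) * dk / (2 * np.pi)            # local density n(x)
    c1 = (rho * np.cos(K)).sum(axis=1) * dk             # local first Fourier modes
    s1 = (rho * np.sin(K)).sum(axis=1) * dk
    F = rho * (2 * n_x[:, None] - (np.cos(K) * c1[:, None] + np.sin(K) * s1[:, None]) / np.pi)
    return rho - dt * Gamma * F

for _ in range(100):                                    # unoptimized: griddata is called at every step
    rho = loss_step(transport_step(rho, dt), dt)

N_atoms = rho.sum() * (x[1] - x[0]) * dk / (2 * np.pi)  # N = int dx int dk/(2 pi) rho(x, k)
print("remaining number of atoms:", N_atoms)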
We start from the values ρ_t(x,k) on the regular grid in phase space, and we move each node of that grid according to x → x + Δ t  sin k and k → k - Δ t  ω^2 x. This gives us the new rapidity distribution after transport over a time Δ t, which however is no longer defined on the initial regular grid. To get the new rapidity distribution ρ'_t+Δ t(x,k) on the initial regular grid, we use linear interpolation. As a benchmark, we have checked that this method gives excellent numerical precision for the simulated transport in the absence of losses. The second step consists in solving numerically the differential equation (<ref>) for each column of the grid in phase-space. The solution of each differential equation is obtained by the Runge-Kutta method with the initial condition y(0,k) = ρ'_t+Δ t, x. We do this for each column of the grid, and we thus get the new space-dependent rapidity distribution ρ_t+Δ t(x,k) according to Eq. (<ref>). The combination of the two steps allows us to go from a rapidity distribution ρ_t(x,k) to the new distribution ρ_t+Δ t(x,k), both defined on the same phase-space grid. We then repeat this procedure many times with a small time step Δ t to simulate the lossy evolution of the gas in the trap. §.§ Results We have performed numerical simulations of the evolution of the position-dependent rapidity distribution ρ(x,k) under K-body losses using the algorithm presented in the previous section, see Fig. <ref>. For the initial state, we use a thermal (Fermi-Dirac) rapidity distribution ρ(x,k)=1/(1+exp(-cos(k)+ ω^2 x^2/2-μ)/T), where μ is the chemical potential and T the temperature. Our numerical study allows us to make the following general observations, illustrated in Fig. <ref>. For all K (we have simulated K=1,2,3) the distribution typically spreads in phase-space, similarly to the homogeneous case. However, while the number of particles decays exponentially for K=1, the loss dynamics is much slower for higher K, and this has visible effects on the distribution of rapidities after a given percentage of lost atoms. For K=1 we observe that the edges of the phase-space distribution get depopulated very fast and quickly results in a halo around the origin. The spreading is also clearly visible in real space in the particle density profile n(x) = ∫ρ(x,k) dk/(2π), see the second line of Fig. <ref>. For K=2, the situation is a little bit different, see Fig. <ref>. Until ∼20% of the atoms have been lost, the dynamics is similar to the one for K=1, but after that we observe the formation of spirals in the bulk of the phase-space distribution. The spirals become more visible as the ratio Γ/ω is increased, see Appendix <ref>. Compared to K=1, the bulk of the distribution also gets depopulated, leading to an approximately uniform circular droplet in phase-space, with a density that decays with time. This is visible in Fig. <ref> after ∼50% of the atoms have been lost; we note that, compared to the case of one-body losses, the distribution spreads less significantly in phase-space. For K=3, we also observe a small spiral appearing at the center of the distribution after ∼30% of the atoms have been lost. This spiral remains localised at the center during the dynamics. Interestingly, as one can see from the fifth line in Fig. <ref>, this time it is the center of the phase-space distribution that decreases faster compared to the edges. After ∼60% of atoms lost, a hole starts developing at the center of the phase-space distribution, and the latter looks more and more like a ring. 
This is a clear signature of a strongly out-of-equilibrium gas in the trap, with a population inversion: the higher-energy single-particle orbitals get more populated than the low-energy ones. This effect is reflected in the corresponding real-space particle density n(x) then acquires a doubly-peaked shape, with a local minimum at x=0, similarly to what we observed also in the homogeneous case. Finally, having simulated the dynamics of the position-dependent rapidity distribution ρ(x,k), one can easily get the evolution of the total particle number N(t) by integrating ρ(x,k,t) over x and k at fixed time t. For one-body losses, one always finds an exponential decay N(t) = e^-Γ t N(0). For two-body and three-body losses the result is more interesting. In Fig. <ref>.(a), we show the evolution of the mean density under two-body losses for different trap frequencies ω. Starting with a thermal distribution at temperature T and chemical potential μ, we observe that the mean density decreases at long times as ∼ 1/t, a result that coincides with the mean-field (or infinite temperature) decay for the homogeneous gas. Moreover, we see that the higher the trap frequency the faster the decay of the total particle number. For three-body losses, we see in Fig. <ref>.(b) that, like in the two-body case, a stronger confinement speeds up the decrease of total particle number. However, this time we do not observe a clear convergence towards the expectation from the mean-field (or infinite temperature) result for the homogeneous case, which would be N(t) ∼ t^-1/2. We observe a number of particles that decreases approximately as a power-law ∼ t^-α with an exponent α≃ 0.6. Our numerics does not allow us to draw a clear conclusion as to whether or not this would go to the mean-field exponent 1/2 at longer times. Nevertheless, let us stress that, qualitatively, the effect of three-body losses is the same as in the two-body case: compared to the homogeneous case, the trap dramatically speeds up the losses. § CONCLUSION We have studied the effects of K-body losses on a gas of lattice hardcore bosons, in particular their effect on the thermalisation of the gas at late time. For this, we have relied on the hypothesis of adiabatic losses used previously in Refs. <cit.>. We derived analytical results for the loss functional for any integer K in the form of a small finite determinant, and closed expressions in the cases K=1,2. For K=1 and K=2, we solved analytically the time evolution equation of the rapidity distribution of the spatially homogeneous gas. In the case of one-body losses, our formula (<ref>) shows that the loss functional is in general non-linear and non-local in rapidity space, as already observed for the continuous Lieb-Liniger gas in Ref. <cit.>. After investigating the long time behavior of the rapidity distribution, we concluded that one-body losses do not drive the gas to a low-density thermal equilibrium state at long times. In the case of two-body losses, our formula (<ref>) gives an implicit expression for the rapidity distribution and using a similar method as in Ref. <cit.>, we were able to investigate the long time behavior of the rapidity distribution and the mean particle density. In particular, we found that it decays generically as ∼ 1/√(t), except when the first Fourier mode of the initial distribution vanishes; in that case the particle density decays as ∼ 1/t. A similar conclusion was drawn for a different loss process in Refs.<cit.>. 
Finally, we considered the inhomogeneous system consisting in a lattice hardcore bosons gas in harmonic potential. We provided a numerical method to solve the dynamics combining the effects of the losses and of the trapping potential. We observed that for K≥ 2 the trap generically speeds up the decay of the total particle number. We also found that the gas typically evolves towards a highly non-thermal state; in particular for K=3 we observe a striking ring-shaped distribution in phase space, which signals a inversion of population (i.e. higher energy single-particle orbitals get more populated than the lower energy ones), see Fig. <ref>. Further investigations are needed to draw more quantitative conclusions about the dynamics in the trap. Note: while we were finishing this paper, a preprint by Perfetto, Carollo, Garrahan and Lesanovsky appeared <cit.>, where a similar model of spinless fermions with K=2,3,4-body losses is studied within the context of quantum reaction-diffusion dynamics of annihilation processes. Our hard-core model coincides with their model for K even, but not for K odd because of the Jordan-Wigner mapping, as we explained in detail in Sec. <ref>. Perfetto et al. do not focus on the effect of losses on the rapidity distributions of the gas, but rather on the evolution of the number of particles in the homogeneous setting. For the case of K even, where our models coincide, their findings about the evolution of the number of particles in the homogeneous setting are in agreement with ours. We thank Alberto Biella, Isabelle Bouchoule, Mario Collura and Leonardo Mazza for very useful discussions and for joint work on closely related topics. The work of JD, FR and DK is supported by the Agence Nationale de la Recherche through ANR-20-CE30-0017-01 project ‘QUADY’ and ANR-22-CE30-0004-01 project `UNIOPEN'. L.R. acknowledges hospitality from LPCT during the completion of this work. § DERIVING SOLUTION (<REF>) For one-body losses, the time evolution of the rapidity distribution ρ(t,k) is given by ∂_t ρ = -Γ( ρ -(ρ^2-ℋ(ρ)^2-n^2(t))+2n(t) ℋ'(ρ)), where Γ is the loss rate. We introduced the Hilbert transform ℋ(f(x))=1/2π∫^π_-π dy f(y)/tan(x-y/2) with f(x) a periodic function. The mean density n(t) is known: n(t)=n_0e^-Γ t. Here the rapidity distribution is a 2π-periodic real-valued function. From the rapidity distribution, we can construct a complex-valued function whose the imaginary part is the Hilbert transform of the real part: Q=ρ(k)+iℋ(ρ(k)). Such a function is called an analytic signal and can be analytically continued to the upper half-plane: the function Q(z)=i/2π∫^π_-πdq ρ(q)/tan(z-q/2) is well-defined for (z)>0 and (z)∈ [-π,π] and reduces to ρ(k) on the real axis. Taking the Hilbert transform of (<ref>) ∂_t ℋ(ρ) = -Γ( ℋ(ρ) -ℋ(ρ^2-ℋ(ρ)^2)+2n(t) ℋ(ℋ'(ρ))) and adding (<ref>) and i (<ref>), one has ∂_τ Q(τ,z) = - (Q(τ,z)-i2n∂_z Q(τ,z)-Q^2(τ,z)+n^2(τ) ). We used some properties of the Hilbert transform: i) ℋ(ℋ(f))=-f, ii) ℋ'(f)=ℋ(f'). Moreover, since Q^2(z) is analytic for (z)>0, the function ρ^2-ℋ(ρ)^2+i2ρℋ(ρ) is an analytic signal if and only if ℋ(ρ^2-ℋ(ρ)^2)=2ρℋ(ρ). Introducing the function Y(τ,z)=Q(τ,z+i2n(τ)), one gets ∂_τ Y(τ,z) = Y^2(τ,z)-Y(τ,z)-n^2(τ) This equation can be solved if one assumes Y(τ,z)=α(τ,z) e^-τ. Indeed, thanks to this trick the above equation reduces to ∂_τα(τ,z) = (α^2(τ,z)-n^2_0) e^-τ Putting all terms depending on α in the left-hand side, one has ∫dαα^2-n^2_0 = -e^-τ+C_1, which leads to α(τ,z) = n_0tanh(n_0 e^-τ+C_2). 
The initial condition Y(0,z)=Y_0 sets the constant: C_2=tanh^-1(Y_0/n_0)-n_0. Thus, one can write Y(τ,z) = n(τ) tanh(n_0(e^-τ-1)+tanh^-1(Y_0/n_0)) = n(τ) [tanh(n_0(e^-τ-1))+Y_0/n_0] / [1+tanh(n_0(e^-τ-1)) Y_0/n_0]. Finally, the rapidity distribution reads ρ(t,k) = n_0 e^-Γ t [tanh(n_0(e^-Γ t-1)) + (i/2π n_0)∫^π_-π dq ρ_0(q)/tan((k-q)/2+in_0(1-e^-Γ t))] / [1 + (i/2π n_0) tanh(n_0(e^-Γ t-1)) ∫^π_-π dq ρ_0(q)/tan((k-q)/2+in_0(1-e^-Γ t))]. § DERIVATION OF EQ. (<REF>) OF THE MAIN TEXT In this section we present the main steps to derive the exact expression of the rapidity distribution in the homogeneous case for K=2. We consider the time evolution of n(t) written as: ∂_t n(t) = (1/2π)∫_-π^π ∂_t ρ(k,t) dk. We now insert the evolution equation (<ref>), obtaining: ∂_t n(t) = - (2Γ n(t)/2π)∫_-π^π ρ(k,t) dk + (Γ/2π^2)( ∫_-π^π cos(k)ρ(k,t) dk )^2 = - 2Γ n(t)^2 + (Γ/2π^2)( ∫_-π^π cos(k)ρ(k,t) dk )^2. By inverting the latter relation, one obtains: |∫_-π^π cos(k)ρ(k,t) dk | = π√((2/Γ)( ∂_t n(t) + 2Γ n(t)^2 )), where |·| denotes the absolute value. At this point of the derivation it is useful to introduce the following variable: σ(t) = sgn(∫_-π^π cos(k)ρ(k,t) dk), with sgn(x) being the sign function. We now claim that the function σ(t) is solely determined by its value at the initial time, i.e. σ(t) = σ(0) =: σ_0; the argument goes as follows. (i) If the first Fourier mode vanishes at some time t, then it must vanish also at any later time. This follows from Eq. (<ref>). (ii) This implies that the sign of the first Fourier mode is continuous in time. Since it can take only discrete values, it is in fact a constant. By inserting the latter equation in Eq. (<ref>), the time evolution of the rapidity distribution can then be recast into the following form: ρ̇(k,t) = -2Γ ρ(k) n(t) + Γσ_0 cos(k)ρ(k) √((2/Γ)( ∂_t n(t) + 2Γ n(t)^2 )). We now divide both sides by ρ(k,t) and then integrate: ln(ρ(k,t)/ρ_0(k)) = -2Γ ∫_0^t n(t') dt' + Γσ_0 cos(k)∫_0^t √((2/Γ)( ∂_t' n(t') + 2Γ n(t')^2 )) dt'. By exponentiating the latter equation we get: ρ(k,t) = ρ_0(k) exp[- 2Γ∫_0^t n(t') dt' + σ_0 cos(k)∫_0^t √(2Γ( ∂_t' n(t') + 2Γ n(t')^2 )) dt'], which concludes the derivation of Eq. (<ref>). § ADDITIONAL DATA FOR THE HOMOGENEOUS K=2 CASE In this section we present additional data concerning the homogeneous K=2 case. In particular, we show in Fig. <ref> (left panel) the dynamics of the mean density for two different initial rapidity distributions which are not thermal. Firstly, we consider an initial distribution given by ρ_0(k) = (1/2)(1 - cos(k)), which has a first Fourier mode different from zero. Secondly, we consider ρ_0(k) = (1/2)(1 - cos(5k)), whose first Fourier mode vanishes. We see that the dynamics induced by the former distribution has a long-time behaviour given by ∼ 1/√(t), whereas the latter, due to its vanishing first Fourier mode, is described by Eq. (<ref>). This corroborates our findings for initial thermal distributions presented in the main text. Moreover, we show in Fig. <ref> (right panel) the quantity f(t) - g(t), where g(t)=∫_0^t n(τ) dτ and f(t)=∫_0^t √(1 + ∂_τ n(τ)/(2Γ n(τ)^2)) n(τ) dτ for two different rapidity distributions. In the main text we took the first-order expansion of f(t), resulting in a logarithmic growth for the quantity f(t) - g(t), which is thus corroborated by the numerical data presented here. As such, given an initial rapidity distribution whose first Fourier mode is non-vanishing, one has a long-time decay of the mean density given by n(t) ∼ 1/√(t). § BRIEF DISCUSSION ON THE SPIRAL IN FIG. <REF>
During the evolution of the position-dependent rapidity distribution in phase space (see Fig. <ref>), the distribution exhibits a spiral, which is clearly visible after 40% of the atoms have been lost for two-body losses. In principle, in a regime where the trap frequency ω strongly dominates the loss rate Γ, one expects the distribution to remain rotation invariant at all times. This implies that the spiral vanishes for ω≫Γ. To check this statement we compare the quantity ρ(x,k) for two distinct values of ω (see the figure below). In Fig. <ref> we can see that the spiral appearing for ω=5Γ covers the distribution entirely, while for ω=20Γ the spiral is localised at the distribution's center. Moreover, in the case ω=20Γ we observe small oscillations between the edges and the center of the distribution. The frequency of these oscillations is higher at the edges than at the center of the distribution.
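As a self-contained numerical illustration of the one-body loss equation quoted in the appendix above (this is our sketch, not the authors' code), the following script integrates ∂_t ρ = -Γ(ρ - (ρ^2 - ℋ(ρ)^2 - n^2(t)) + 2n(t)ℋ'(ρ)) on a periodic k-grid, implementing the circular Hilbert transform with kernel 1/tan((x-y)/2) through its Fourier multiplier -i sgn(m), which reproduces the stated properties ℋ(ℋ(f)) = -f and ℋ'(f) = ℋ(f'). The grid size, time step and trial initial distribution are arbitrary choices; the final check verifies that the mean density follows n(t) = n_0 e^{-Γt}.

```python
import numpy as np

# Sketch (not the authors' code): integrate the one-body loss equation
#   d(rho)/dt = -Gamma * [ rho - (rho^2 - H(rho)^2 - n(t)^2) + 2 n(t) H'(rho) ]
# on a periodic k-grid, with the circular Hilbert transform acting on Fourier
# mode m as multiplication by -i*sign(m).

M, Gamma, dt, nsteps = 256, 1.0, 1e-3, 4000
k = -np.pi + 2.0 * np.pi * np.arange(M) / M
m = np.fft.fftfreq(M, d=1.0 / M)              # integer mode numbers

def hilbert(f):
    return np.real(np.fft.ifft(-1j * np.sign(m) * np.fft.fft(f)))

def hilbert_prime(f):                          # d/dk of H(f): multiplier |m|
    return np.real(np.fft.ifft(np.abs(m) * np.fft.fft(f)))

def rhs(rho):
    n = rho.mean()                             # n(t) = (1/2pi) * integral of rho dk
    H = hilbert(rho)
    return -Gamma * (rho - (rho**2 - H**2 - n**2) + 2.0 * n * hilbert_prime(rho))

assert np.allclose(hilbert(np.cos(k)), np.sin(k))   # sanity check of the convention

rho = 0.4 + 0.2 * np.cos(k) + 0.05 * np.cos(2 * k)  # arbitrary trial rho_0(k)
n0 = rho.mean()
for _ in range(nsteps):                             # midpoint (RK2) time stepping
    rho = rho + dt * rhs(rho + 0.5 * dt * rhs(rho))

t = nsteps * dt
print("mean density from the grid:", rho.mean())
print("n0 * exp(-Gamma * t)      :", n0 * np.exp(-Gamma * t))
```

Beyond this consistency check on n(t), the same routine can be used to inspect the late-time shape of ρ(t,k) and compare it with the closed-form expression derived above.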
http://arxiv.org/abs/2307.00406v1
20230701184128
Detecting Points in Integer Cones of Polytopes is Double-Exponentially Hard
[ "Łukasz Kowalik", "Alexandra Lassota", "Konrad Majewski", "Michał Pilipczuk", "Marek Sokołowski" ]
cs.DS
[ "cs.DS", "cs.CC" ]
Aggregation Consistency Errors in Semantic Layers and How to Avoid Them Eugene Wu ======================================================================= Let d be a positive integer. For a finite set X ⊆^d, we define its integer cone as the set 𝖨𝗇𝗍𝖢𝗈𝗇𝖾(X) {∑_x ∈ Xλ_x · x |λ_x ∈ℤ_≥ 0}⊆^d. Goemans and Rothvoss showed that, given two polytopes 𝒫, 𝒬⊆^d with 𝒫 being bounded, one can decide whether 𝖨𝗇𝗍𝖢𝗈𝗇𝖾(𝒫∩ℤ^d) intersects 𝒬 in time 𝖾𝗇𝖼(𝒫)^2^𝒪(d)·𝖾𝗇𝖼(𝒬)^𝒪(1) [J. ACM 2020], where 𝖾𝗇𝖼(·) denotes the number of bits required to encode a polytope through a system of linear inequalities. This result is the cornerstone of their 𝖷𝖯 algorithm for parameterized by the number of different item sizes. We complement their result by providing a conditional lower bound. In particular, we prove that, unless the ETH fails, there is no algorithm which, given a bounded polytope 𝒫⊆^d and a point q ∈^d, decides whether q ∈𝖨𝗇𝗍𝖢𝗈𝗇𝖾(𝒫∩^d) in time 𝖾𝗇𝖼(𝒫, q)^2^o(d). Note that this does not rule out the existence of a fixed-parameter tractable algorithm for the problem, but shows that dependence of the running time on the parameter d must be at least doubly-exponential. 20(-1.9, 4.2) < g r a p h i c s > 20(-2.15, 4.5) < g r a p h i c s > § INTRODUCTION Consider the following high-multiplicity variant of the problem: given a vector s=(s_1,…,s_d)∈ [0,1]^d of item sizes and a vector of multiplicities a=(a_1,…,a_d)∈_≥ 0^d, find the smallest integer B so that the collection of items containing a_i items of size s_i, for each i∈{1,…,d}, can be entirely packed into B unit-size bins. In their celebrated work <cit.>, Goemans and Rothvoss gave an algorithm for this problem with time complexity (s,a)^2^(d), where (s,a) denotes the total bitsize of the encoding of s and a in binary. In the terminology of parameterized complexity, this puts high-multiplicity parameterized by the number of different item sizes in the complexity class 𝖷𝖯. In fact, Goemans and Rothvoss studied the more general problem, defined as follows: given two polytopes P,Q⊆^d, where P is bounded, is there a point in Q that can be expressed as a nonnegative integer combination of integer points in P? Goemans and Rothvoss gave an algorithm for this problem with running time (P)^2^(d)·(Q)^(1), where (R) denotes the total bitsize of the encoding of a polytope R through a system of linear inequalities. They showed that high-multiplicity admits a simple reduction to , where in essence, integer points in P correspond to possible configurations of items that fit into a single bin and Q is the point corresponding to all items (more precisely, P={(x 1)∈^d+1_≥ 0| s^Tx≤ 1} and Q={(a B)}). In fact, is a much more versatile problem: in <cit.>, Goemans and Rothvoss present a number of applications of their result to other problems in the area of scheduling. Whether the result of Goemans and Rothvoss for high-multiplicity can be improved to fixed-parameter tractability is considered a major problem in the area. It was asked already by Goemans and Rothvoss in <cit.>, addressed again by Jansen and Klein in <cit.>, and also discussed in the survey of Mnich and van Bevern <cit.>. In this work, we take a step into solving the complexity of this problem. We prove the following result that shows that the doubly-exponential dependence on d in the running times of algorithms for is necessary, assuming the Exponential-Time Hypothesis (ETH). The lower bound holds even for the simpler problem, where the polytope Q consists of a single integer point q∈^d. 
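Before the formal statement, a tiny brute-force sketch (ours, purely illustrative and unrelated to the Goemans–Rothvoss algorithm) may help make the membership question q ∈ 𝖨𝗇𝗍𝖢𝗈𝗇𝖾(𝒫∩ℤ^d) concrete: it enumerates the integer points of a bounded polytope {x : Ax ≤ b} inside a box and then searches over nonnegative integer combinations bounded coordinatewise by q. The instance is an arbitrary toy example, not one from the paper, and the search is exponential.

```python
import itertools
import numpy as np

# Toy sketch of the decision problem: is q a nonnegative integer combination
# of the integer points of a bounded polytope P = {x : A x <= b}?

def integer_points(A, b, box):
    """All integer points x with 0 <= x <= box (componentwise) and A x <= b."""
    ranges = [range(int(u) + 1) for u in box]
    return [np.array(x) for x in itertools.product(*ranges)
            if np.all(A @ np.array(x) <= b)]

def in_int_cone(points, q):
    """Breadth-first search over the box [0, q] for a representation of q."""
    q = np.array(q)
    gens = [p for p in points if np.any(p > 0)]      # drop the zero vector
    seen = {tuple(np.zeros_like(q))}
    frontier = list(seen)
    while frontier:
        v = np.array(frontier.pop())
        for p in gens:
            w = v + p
            tw = tuple(w)
            if np.all(w <= q) and tw not in seen:
                seen.add(tw)
                frontier.append(tw)
    return tuple(q) in seen

# P = {x in R^2 : x >= 0, 2 <= x1 + x2 <= 3}; its integer points are the
# lattice points on the diagonals x1 + x2 = 2 and x1 + x2 = 3.
A = np.array([[1, 1], [-1, -1], [-1, 0], [0, -1]])
b = np.array([3, -2, 0, 0])
pts = integer_points(A, b, box=(3, 3))

print(in_int_cone(pts, (4, 2)))   # True:  (4,2) = (3,0) + (1,2)
print(in_int_cone(pts, (1, 0)))   # False: every generator has coordinate sum >= 2
```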
Unless the ETH fails, there is no algorithm solving in time (P, q)^2^o(d), where (P, q) is the total number of bits required to encode both P and q. Notice that <ref> does not rule out the possibility that there exists a fixed-parameter algorithm with running time f(d)·(P,q)^(1) for some function f. However, it shows that for this to hold, function f would need to be at least doubly exponential, assuming the ETH. Let us briefly elaborate on our proof of <ref> and its relation to previous work. The cornerstone of the result of Goemans and Rothvoss is a statement called Structure Theorem, which essentially says the following: if an instance of has a solution, then it has a solution whose support — the set of integer points in P participating in the nonnegative integer combination yielding a point in Q — has size at most 2^2d+1. Moreover, except for a few outliers, this support is contained within a carefully crafted set X consisting of roughly (P)^(d) integer points within P. In subsequent work <cit.>, Jansen and Klein showed a more refined variant of the Structure Theorem where X is just the set of vertices of the convex hull of the integer points lying in P; but the exponential-in-d bound on the size of the support persists. The appearance of this bound in both works <cit.> originates in the following elegant observation of Eisenbrand and Shmonin <cit.>: whenever some point v can be represented as a nonnegative integer combination of integer points in P, one can always choose such a representation of v with support of size bounded by 2^d (see <cit.> for a streamlined proof). In <cit.>, Goemans and Rothvoss gave an example showing that the 2^d bound is tight up to a multiplicative factor of 2, thereby arguing that within their framework, one cannot hope for any substantially better bound on the support size. The main conceptual contribution of this work can be expressed as follows: the construction showing the tightness of the observation of Eisenbrand and Shmonin not only exposes a bottleneck within the support-based approach of <cit.>, but in fact can be used as a gadget in a hardness reduction proving that the doubly-exponential dependence on d in the running time is necessary for the whole problem, assuming the ETH. Finally, we remark that tight doubly-exponential lower bounds under the ETH appear scarcely in the literature, as in reductions proving such lower bounds, the parameter of the output instance has to depend logarithmically on the size of the input instance of . A few examples of such lower bounds can be found here: <cit.>; our work adds to this rather exclusive list. § PRELIMINARIES For a positive integer n, we denote [n] {1, 2, …, n} and [n]_0 {0, 1, …, n-1}. Euclidean spaces. Fix a positive integer d. We call the elements of ^d vectors (or points). Given a vector x ∈^d, we denote its i-th coordinate (for i ∈ [d]) by x(i). By _d, we denote the d-dimensional vector of all ones, that is, _d = (1, …, 1)∈^d. When the dimension d is clear from the context, we omit it from the subscript and simply write instead. We allow vectors to be added to each other, and to be multiplied by a scalar λ∈. Both operations come from treating the space ^d as a linear space over . Given a finite set X ⊆^d, we define its integer cone as the set (X) {∑_x ∈ Xλ_x · x |λ_x ∈^d_≥ 0 for every x ∈ X }. Polytopes. 
In this work a d-dimensional polytope is a subset of points in ^d satisfying a system of linear inequalities with integer coefficients, that is, a set of the form P{ x ∈^d | Ax ≤ b }, where A ∈^d × m and b ∈^m for some positive integer m. Then, the encoding size of P, denoted (P), is the total number of bits required to encode the matrix A and the vector b. We say that the polytope P is bounded if there exists a number M ∈ such that for all x ∈P and i∈ [d], we have |x(i)| ≤ M. We can now define the main problem studied in this paper, namely . Input: A positive integer d, a bounded polytope P⊆^d (given by a matrix A ∈^m × d and a vector b ∈^m for some integer m), and a point q ∈^d. Question: Is q ∈(P∩^d)? As mentioned in <ref>, in <cit.> Goemans and Rothvoss gave an algorithm for that runs in time (P)^2^(d)·(q)^(1). In fact, they solved the more general , where instead of a single point q, we are given a polytope Q, and the question is whether (P∩^d)∩Q is nonempty. In this case, the running time is (P)^2^(d)·(Q)^(1). ETH. The Exponential-Time Hypothesis (ETH), proposed by Impagliazzo et al. <cit.>, plays a fundamental role in providing conditional lower bounds for parameterized problems. It postulates that there exists a constant c>0 such that the problem cannot be solved in time (2^cn), where n is the number of variables of the input formula. As proved in <cit.>, this entails that there is no algorithm for with running time 2^o(n+m), where m denotes the number of clauses of the input formula; see also <cit.>. We refer the reader to <cit.> for a thorough introduction to ETH-based lower bounds within parameterized complexity. Subset Sum. The classic problem asks, for a given set S of positive integers and a target integer t, whether there is a subset S' ⊆ S such that ∑_x ∈ S' = t. The standard 𝖭𝖯-hardness reduction from to takes an instance of with n variables and m clauses and outputs an equivalent instance of where |S|=(n+m) and t≤ 2^(n+m). By combining this with the 2^o(n+m)-hardness for following from ETH, we obtain the following. Unless the ETH fails, there is no algorithm solving in time 2^o(n), even under the assumption that t≤ 2^(n). Here, n denotes the cardinality of the set S given on input. In this work, we rely on a variant of the problem called . The difference between those two problems is that in the latter one, we allow the elements from the input set to be taken with any nonnegative multiplicities. Input: A set of positive integers {a_1, a_2, …, a_n} and a positive integer t. Question: Does there exist a sequence of n nonnegative integers (λ_1, λ_2, …, λ_n) such that ∑_i=1^n λ_i · a_i = t? The same lower bound as in Theorem <ref> holds for . This can be shown via a simple reduction from . As this is standard, we present the proof of the following Theorem <ref> in Appendix <ref>. theoremsubsetsumrepthm Unless the ETH fails, there is no algorithm solving in time 2^o(n), even under the assumption that t≤ 2^(n). Here, n denotes the cardinality of the set given on input. § REDUCTION The entirety of this section is devoted to the proof of our double-exponential hardness result: <ref>. The proof is by reduction from . Let = ({a_1, …, a_n}, t) be the given instance of . That is, we ask whether there are nonnegative integers λ_1, …, λ_n such that ∑_i=1^n λ_i · a_i = t, where a_1, …, a_n, t are given positive integers. We may assume that a_i≤ t for all i∈ [n] and, following on the hardness postulated by <ref>, that t≤ 2^(n). 
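Before turning to the construction, a concrete reference point for the source problem may be useful. The sketch below (ours, not part of the reduction) decides Subset Sum with Multiplicities by the textbook unbounded-knapsack dynamic program in time O(n·t); since t may be exponentially large in n, this pseudo-polynomial routine does not contradict the 2^o(n) lower bound just stated.

```python
def subset_sum_with_multiplicities(a, t):
    """Decide whether t = sum_i lambda_i * a_i for some nonnegative integers
    lambda_i, via the standard unbounded-knapsack DP in time O(len(a) * t)."""
    reachable = [True] + [False] * t
    for s in range(1, t + 1):
        reachable[s] = any(x <= s and reachable[s - x] for x in a)
    return reachable[t]

print(subset_sum_with_multiplicities([6, 10, 15], 61))  # True: 61 = 6 + 10 + 3*15
print(subset_sum_with_multiplicities([6, 10, 15], 7))   # False
```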
Let d ⌈log_2(n + 1) ⌉ + 1, hence d satisfies the inequality 2^d ≥ 2n + 2 and d = (log n). Let χ_0, χ_1, …, χ_2^d-1∈^d be all {0,1}-vectors in d-dimensional space, listed in lexicographic order. Equivalently, χ_i is the bit encoding of the number i, for i ∈ [2^d]_0. Observe that we have χ_i + χ_2^d-1-i = for every i ∈ [2^d]_0. We define the set P {p_0, p_1, …, p_2^d-1}⊆^d+1 of 2^d points as follows. p_i(j) = χ_i(j), for i ∈ [2^d]_0 and j ∈ [d]; a_i, for i ∈ [n] and j = d+1; 0 for i ∈[2^d]_0∖ [n] and j = d+1. We remark that the construction of the point set P is inspired by the example of Goemans and Rothvoss provided in <cit.>. First, we argue that P can be expressed as integer points in a polytope of small encoding size. There exists a bounded polytope P of encoding size (n log n ·log t) such that P∩^d+1 = P. Let P be the polytope defined by the following inequalities. 0 ≤ x(j) ≤ 1 for j ∈ [d], 0 ≤ x(d+1) ≤ t, x(d+1)+∑_jχ_i(j) = 0 t · x(j) + ∑_jχ_i(j) = 1 t · (1 - x(j)) ≥ p_i(d+1) for i ∈ [2^d]_0, t-x(d+1)+∑_jχ_i(j) = 0 t · x(j) + ∑_jχ_i(j) = 1 t · (1 - x(j)) ≥ t-p_i(d+1) for i ∈ [2^d]_0 By (<ref>) and (<ref>), P is bounded. Also, encoding the system of all linear inequalities defining P takes (2^d · d ·log t) = (n log n ·log t) bits, as desired. It remains to show that P∩^d+1 = P. In what follows, when i∈ [2^d]_0, (<ref>.i) denotes the single inequality of the form (<ref>) for this particular i, similarly for inequalities of the form (<ref>). First we show P∩^d+1⊆ P. Pick x∈P∩^d+1. Since x∈^d+1 and x satisfies (<ref>), the first d coordinates of x form a binary encoding of a number i^⋆∈ [2^d]_0. Then, x(j) = χ_i^⋆(j) for j ∈ [d], hence by (<ref>.i^⋆), x(d+1) ≥ p_i^⋆(d+1) and by (<ref>.i^⋆), x(d+1) ≤ p_i^⋆(d+1). It follows that x(d+1) = p_i^⋆(d+1) and hence x=p_i^⋆∈ P, as required. Finally we show P ⊆P∩^d+1. Pick x=p_i^⋆∈ P for some i^⋆∈ [2^d]_0. We need to show that (<ref>)–(<ref>) hold for x. This is clear for (<ref>) and (<ref>). The inequality (<ref>.i^⋆) for x is just x(d+1)≥ p_i^⋆(d+1), and this holds since x(d+1)= p_i^⋆(d+1). We get (<ref>.i^⋆) analogously. Now assume i i^⋆ and let L_i be the left hand side of (<ref>.i). Since x(j) ∈{0, 1} for j ∈ [d] and x(d+1) ≥ 0, all the summands of L_i are nonnegative. Moreover, since i i^⋆, we have x(j) ≠χ_i(j) for some j ∈ [d], and then L_i ≥ t ≥ p_i(d+1), so the inequality (<ref>.i) holds independently of the value of x(d+1). Analogously, when L_i is the left hand side of (<ref>.i), we get L_i ≥ 2t-x(d+1) ≥ t ≥ t- p_i(d+1), as required. Let P be the polytope provided by Claim <ref>. Furthermore, let q ∈^d+1 be the point defined as q t · = (t, t, …, t). We consider the instance ' = (d+1, P, q) of . Note that d = (log n) and (P, q) = (n log n ·log t), which in turn is bounded by (n^2log n) due to t≤ 2^(n). Also, one can easily verify that ' can be computed from in polynomial time. Now, we prove that the instance ' is equivalent to . is a of if and only if ' is a of . First, assume that is a of ; that is, there are nonnegative integers λ_1, λ_2, …, λ_n such that ∑_i=1^n λ_i · a_i = t. Our goal is to show that q ∈(P∩^d+1) = (P). That is, we need to construct a sequence of nonnegative integers (λ_0', λ_1', …, λ_2^d-1') such that ∑_i=0^2^d-1λ_i' · p_i = q. First, we set λ_i' λ_i, for i ∈ [n]. Then, we get the required value at the (d+1)-st coordinate, i.e., ( ∑_i=1^n λ_i' · p_i )(d+1) = t = q(d+1). It remains to set the values of λ_i' for i ∈[2^d]_0∖ [n]. 
Note that p_i(d+1) = 0 for i ∈ [2^d]_0∖ [n], therefore setting those λ_i' does not affect the (d+1)-st coordinate of the result. Consider an index i ∈ [n]. Recall that χ_i + χ_2^d-i-1 =, and since 2^d ≥ 2n + 2, we have 2^d-i-1 ≥ 2^d - n-1 ≥ n + 1. Hence, by setting λ_2^d-i-1' λ_i' = λ_i, we obtain that λ_i' · p_i + λ_2^d-i-1' · p_2^d-i-1 = (λ_i, λ_i, …, λ_i, λ_i a_i). By applying this procedure for every i ∈ [n], we get a point q' ∈^d+1 of the form (Λ, Λ, …, Λ, t), where Λ = ∑_i=1^n λ_i ≤∑_i=1^n λ_i · a_i = t. To obtain the number t on the first d coordinates of the result, it remains to observe that p_2^d-1 = (1, 1, …, 1, 0), therefore setting λ_2^d-1' t - ∑_i=1^n λ_i produces the desired point q. (We set λ_i' 0 for all i not considered in the described procedure.) For the other direction, suppose that ' is a of , that is, q ∈(P). Then, there exist nonnegative integers λ_i (for i ∈ [2^d]_0) such that ∑_i=0^2^d-1λ_i · p_i = q. Comparing the (d+1)-st coordinate of both sides yields the equality ∑_i=1^n λ_i · a_i = t. This means that is indeed a of . Finally, we are ready to prove Theorem <ref>. Suppose for contradiction that admits an algorithm with running time (P, q)^2^o(d). As argued, given an instance of with n integers and the target integer t bounded by 2^(n), one can in polynomial time compute an equivalent instance ' = (d, P, q) of with d≤(log n) and (P,q)≤(n^2log n). Now, running our hypothetical algorithm on ' yields an algorithm for with running time (P,q)^2^o(d) = (n^2 log n)^2^o(d) = (n^2 log n)^n^o(1)≤ 2^n^o(1)· 3log n≤ 2^o(n), which contradicts <ref>. This concludes the proof of <ref>. § SUBSET SUM WITH MULTIPLICITIES In this appendix we give a proof of <ref>, which we recall here for convenience. * We provide a reduction from to . Let = ({a_1, a_2, …, a_n}, t) be the input instance of . We may assume that a_i≤ t for all i∈ [n]. We construct an equivalent instance ' = ({a_1', …, a_n', b_1, …, b_n}, t') of as follows. The bit encodings of integers a_i', b_i and t' are partitioned into three blocks B_1, B_2, B_3, where B_3 contains the least significant bits, and B_1 the most significant ones. For an integer x and a block B, we denote by x |_B the integer of bit-length at most |B| consisting of the bits of x at the positions within the block B. The instance ' is defined by the following conditions. * Blocks B_1 and B_3 are of length n, while block B_2 is of length log t. * For i ∈ [n], a_i'|_B_j = 2^n-i for j = 1, a_i for j = 2, 2^i-1 for j = 3; and b_i|_B_j = 2^n-i for j = 1, 0 for j = 2, 2^i-1 for j = 3. * The target integer t' is given by t'|_B_j = 2^n-1 for j = 1, t for j = 2, 2^n-1 for j = 3. Note that the instance ' consists of a set of n' 2n positive integers and a target integer t' ≤ 2^(n)· t. In particular, if t≤ 2^(n) then also t'≤ 2^(n). Clearly, ' can be computed from in polynomial time. Next, we prove that ' is indeed an instance equivalent to . is a of if and only if ' is a of . (). Assume is a of . Let J ⊆ [n] be a set of indices such that ∑_j ∈ J a_j = t. We construct a sequence λ_1, λ_2, …, λ_2n of 2n nonnegative integers as follows. For i ∈ [n], we set λ_i = 1 if i ∈ J, 0 if i ∉J; and λ_n+i = 0 if i ∈ J, 1 if i ∉J. Then it is easy to verify that ∑_i=1^n λ_i · a_i' + ∑_i=1^n λ_n+i· b_i = t', and thus the sequence λ_1, λ_2, …, λ_2n witnesses that ' is a of . (). Assume that ' is a of . Let λ_1, λ_2, …, λ_2n be nonnegative integers such that ∑_i=1^n λ_i · a_i' + ∑_i=1^nλ_n+i· b_i = t'. Let L be the left-hand side of the equation above. 
Comparing the least significant bit of L and t' yields λ_1 + λ_n+1≡ 1 2. However, if λ_1 + λ_n+1≥ 2, then L|_B_1≥ 2 · 2^n-1 = 2^n > t'|_B_1, and consequently L > t, which is a contradiction. Therefore, λ_1 + λ_n+1 = 1. Repeating this argument inductively for i = 2, 3, …, n leads us to the conclusion that the equality λ_i + λ_n+i = 1 holds for every i ∈ [n]. Now, define a set of indices J ⊆ [n] as J {i ∈ [n] |λ_i = 1}. Then, by comparing L|_B_2 and t|_B_2, we must have that ∑_j ∈ J a_j = t, since other terms of L do not contribute to L|_B_2 according to the equation (<ref>). Hence is a of , as desired. We are ready to conclude the proof of <ref>. Suppose for contradiction there is an algorithm solving in time 2^o(n') on instances with n' numbers on input and the target integer t' bounded by 2^(n'). Then, as explained above, given an instance of with n numbers and the target integer t bounded by 2^(n), one can in polynomial time compute an equivalent instance ' of with n'=2n numbers and with target t'≤ 2^(n)· t≤ 2^(n)=2^(n'). Running the hypothetical algorithm on ' solves the initial instance of in time 2^o(n') = 2^o(n), which contradicts <ref>. This finishes the proof of <ref>.
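The block construction of this appendix is easy to implement and test. The sketch below (ours, not from the paper) builds the instance ' = ({a'_1,…,a'_n, b_1,…,b_n}, t') from a Subset Sum instance using three bit blocks of widths n, ⌈log_2(t+1)⌉ and n (the middle width is our concrete reading of "length log t"), and then checks on small random instances that the two problems give the same answer, using brute force for Subset Sum and a pseudo-polynomial DP for the multiplicity version.

```python
import itertools
import random

# Sketch of the reduction above: Subset Sum ({a_1,...,a_n}, t)  -->
# Subset Sum with Multiplicities ({a'_1,...,a'_n, b_1,...,b_n}, t'),
# encoded over three bit blocks B1 | B2 | B3. Illustration only.

def reduce_instance(a, t):
    n = len(a)
    w2 = max(t.bit_length(), 1)          # width of block B2 (at least log2 t)
    shift2, shift1 = n, n + w2           # bit offsets of blocks B2 and B1
    a_prime = [(1 << (n - i - 1 + shift1)) + (a[i] << shift2) + (1 << i)
               for i in range(n)]
    b = [(1 << (n - i - 1 + shift1)) + (1 << i) for i in range(n)]
    t_prime = (((1 << n) - 1) << shift1) + (t << shift2) + ((1 << n) - 1)
    return a_prime + b, t_prime

def subset_sum(a, t):                    # 0/1 version, brute force
    return any(sum(c) == t for r in range(len(a) + 1)
               for c in itertools.combinations(a, r))

def subset_sum_mult(a, t):               # multiplicities version, DP over [0, t]
    reach = [True] + [False] * t
    for s in range(1, t + 1):
        reach[s] = any(x <= s and reach[s - x] for x in a)
    return reach[t]

random.seed(0)
for _ in range(10):
    a = [random.randint(1, 15) for _ in range(4)]
    t = random.randint(1, sum(a))
    a2, t2 = reduce_instance(a, t)
    assert subset_sum(a, t) == subset_sum_mult(a2, t2)
print("reduction agrees with brute force on all sampled instances")
```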
http://arxiv.org/abs/2307.00779v1
20230703064610
Quantifying Distributional Model Risk in Marginal Problems via Optimal Transport
[ "Yanqin Fan", "Hyeonseok Park", "Gaoqian Xu" ]
math.OC
[ "math.OC", "econ.EM", "math.ST", "stat.TH" ]
Quantifying Distributional Model Risk in Marginal Problems via Optimal TransportWe acknowledge valuable feedback from participants of Optimization-Conscious Econometrics Conference II at the University of Chicago, KI+Scale MoDL Retreat at the University of Washington, and Econometrics and Optimal Transport Workshop at the University of Washington. Fan acknowledges support from NSF Infrastructure grant (PIHOT) DMS-2133244. Yanqin Fan,[Department of Economics, University of Washington. Email: fany88@uw.edu] Hyeonseok Park,[Institute for Advanced Economic Research, Dongbei University of Finance and Economics. Email: hynskpark21@dufe.edu.cn] and Gaoqian Xu[Department of Economics, University of Washington. Email: gx8@uw.edu] August 1, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================================================= This paper studies distributional model risk in marginal problems, where each marginal measure is assumed to lie in a Wasserstein ball centered at a fixed reference measure with a given radius. Theoretically, we establish several fundamental results including strong duality, finiteness of the proposed Wasserstein distributional model risk, and the existence of an optimizer at each radius. In addition, we show continuity of the Wasserstein distributional model risk as a function of the radius. Using strong duality, we extend the well-known Makarov bounds for the distribution function of the sum of two random variables with given marginals to Wasserstein distributionally robust Markarov bounds. Practically, we illustrate our results on four distinct applications when the sample information comes from multiple data sources and only some marginal reference measures are identified. They are: partial identification of treatment effects; externally valid treatment choice via robust welfare functions; Wasserstein distributionally robust estimation under data combination; and evaluation of the worst aggregate risk measures. § INTRODUCTION Distributionally robust optimization (DRO) has emerged as a powerful tool for hedging against model misspecification and distributional shifts. It minimizes distributional model risk (DMR) defined as the worst risk over a class of distributions lying in a distributional uncertainty set, see <cit.>. Among many different choices of uncertainty sets, Wasserstein DRO (W-DRO) with distributional uncertainty sets based on optimal transport costs has gained much popularity, see <cit.> and <cit.> for recent reviews. W-DRO has found successful applications in robust decision making in all disciplines including economics, finance, machine learning, and operations research. Its success is largely credited to the strong duality and other nice properties of the Wasserstein DMR (W-DMR). The objective of this paper is to propose and study W-DMR in marginal problems where only some marginal measures of a reference measure are given, see e.g., <cit.>, and <cit.>. In practice, marginal problems arise from either the lack of complete data or an incomplete model. In insurance and risk management, computing model-free measures of aggregate risks such as Value-at-Risk and Expected Short-Fall is of utmost importance and routinely done. 
When the exact dependence structure between individual risks is lacking, researchers and policy makers rely on the worst risk measures defined as the maximum value of aggregate risk measures over all joint measures of the individual risks with some fixed marginal measures, see <cit.> and <cit.>; In causal inference, distributional treatment effects such as the variance and the proportion of participants who benefit from the treatment depend on the joint distribution of the potential outcomes. Even with ideal randomized experiments such as double-blind clinical trials, the joint distribution of potential outcomes is not identified and as a result, only the lower and upper bounds on distributional treatment effects are identified from the sample information, see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>; In algorithmic fairness when the sensitive group variable is not observed in the main data set, assessment of unfairness measures must be done using multiple data sets, see <cit.>. Abstracting away from estimation, all these problems involve optimizing the expected value of a functional of multiple random variables with fixed marginals and thus belong to the class of marginal problems for which optimal transport related tools are important.[When the marginals are univariate, optimal transport problem can be conveniently expressed in terms of copulas. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.> explicitly use copula tools. ] The marginal measures in the afore-mentioned applications and general marginal problems are typically empirical measures computed from multiple data sets such as in the evaluation of worst aggregate risk measures or identified under specific assumptions such as randomization or strong ignorability in causal inference. Developing a unified framework for hedging against model misspecification and/or distributional shifts in marginal measures motivates the current paper. Theoretically, this paper makes several contributions to the literature on distributional robustness and the literature on marginal problems. First, it introduces Wasserstein distributional model risk in marginal problems (W-DMR-MP), where each marginal measure is assumed to lie in a Wasserstein ball centered at a fixed reference measure with a given radius. We focus on the important case with two marginals and consider both non-overlapping and overlapping marginals. For non-overlapping marginal measures, when the radius is zero, the W-DMR-MP reduces to the marginal problems or optimal transport problems studied in <cit.>. For overlapping marginals, when the radius is zero, the W-DMR-MP reduces to the overlapping marginals problem studied in <cit.>; Second, we establish strong duality for our W-DMR with both non-overlapping and overlapping marginals under similar conditions to those for W-DMR, see <cit.>, <cit.>, and <cit.>. As a first application of our strong duality result for non-overlapping marginals, we extend the well-known Marakov bounds for the distribution function of the sum of two random variables to Wasserstein distributionally robust Makarov bounds; Third, we prove finiteness of the W-DMR-MP and existence of an optimizer at each radius. Based on both results, we show that the identified set of the expected value of a smooth functional of random variables with fixed marginals is a closed interval; Fourth, we show continuity of the W-DMR in marginal problems as a function of the radius. 
Together these results extend those for W-DMR in <cit.>, <cit.>, and <cit.>; Lastly, we extend our formulations and theory to W-DMR with multi-marginals. On a technical note, our proofs build on existing work on W-DMR such as <cit.>, <cit.>, and <cit.>. However, an additional challenge due to the presence of multiple marginal measures in our Wasserstein uncertain sets is the verification of the existence of a joint measure with overlapping marginals. We make use of existing results for a given consistent product marginal system in <cit.>, <cit.>, and <cit.> to address this issue. Practically, we demonstrate the flexibility and broad applicability of our W-DMR-MP via four distinct applications when the sample information comes from multiple data sources. First, we consider partial identification of treatment effects when the marginal measures of the potential outcomes lie in their respective Wasserstein balls centered at the measures identified under strong ignorability. The validity of strong ignorability is often questionable when unobservable confounders may be present. We apply our W-DMR-MP to establishing the identified sets of treatment effects which can be used to conducting sensitivity analysis to the selection-on-observables assumption. For average treatment effects, we show that when the cost functions are separable, incorporating covariate information does not help shrink the identified set; on the other hand, for non-separable cost functions such as the Mahalanobis distance, incorporating covariate information may help shrink the identified set; Second, in causal inference when the optimal treatment choice is to be applied to a target population different from the training population, <cit.> introduces robust welfare functions defined by W-DMR to study externally valid treatment choice. The W-DMR-MP we propose allows us to dispense with the assumption of a known dependence structure for the reference measure in <cit.>. When shifts in the covariate distribution are allowed, we show that our robust welfare function is upper bounded by the worst robust welfare function of <cit.>; Third, one important application of W-DMR is in distributionally robust estimation and classification. However as <cit.> points out,[See <cit.> and <cit.> for general data combination problems.] some sensitive variables may not be observed in the same data set as the response variable rendering W-DRO inapplicable. We apply W-DMR-MP to distributionally robust estimation under data combination;[<Ref> provides a detailed comparison of our set up and <cit.>.] Fourth, applying our W-DMR-MP to the evaluation of the worst aggregate risk measures allows us to dispense with the known marginals assumption in <cit.> and <cit.>. The rest of this paper is organized as follows. <Ref> reviews the W-DMR and strong duality, introduces our W-DMR-MP, and then presents four motivating examples. <Ref> establishes strong duality and Wasserstein distributionally robust Marakov bounds. <Ref> studies finiteness of W-DMR-MP and existence of optimal solutions. Moreover, we show that the identified set of the expected value of a smooth functional of random variables with fixed marginals is a closed interval. <Ref> establishes continuity of W-DMR-MP as a function of the radius. <Ref> revisits the motivating examples in <Ref>. <Ref> extends our W-DMR-MP to more than two marginals. The last section offers some concluding remarks. Technical proofs are relegated to a series of appendices. 
We close this section by introducing the notation used in the rest of this paper. For two sets A and B, the relative complement is denoted by A ∖ B. Let ℝ = ℝ∪{- ∞, ∞}, [d]={1,2,...,d}, ℝ^d_+ = { x ∈ℝ^d : x_i ≥ 0, ∀ i ∈ [d] }, and ℝ^d_++ = { x ∈ℝ^d : x_i > 0, ∀ i ∈ [d] }. For any real numbers x, y ∈ℝ, we define x ∧ y := min{x, y} and x ∨ y : = max{x, y}. The Euclidean inner product of x and y in ℝ^d is denoted by ⟨ x, y ⟩. For any real matrix W ∈ℝ^m × n, let A^⊤ denote the transpose of W. For an extended real function f on 𝒳, the positive part f^+ and the negative part f^- are defined as f^+(x) = max{ f(x), 0 } and f^-(x) = max{ -f(x), 0 }, respectively. For any Polish space 𝒮, let ℬ_𝒮 be the associated Borel σ-algebra and 𝒫(𝒮) be the collection of probability measures on 𝒮. Given a Polish probability space (𝒮, ℬ_𝒮, ν), let ℬ_𝒮^ν denote the ν-completion of ℬ_𝒮. Given a probability space (Ω, ℱ, ℙ) and a map T: Ω→𝒮, let T#μ denote the push forward of ℙ by T, i.e., (T#ℙ)(A) = ℙ( T^-1 (A) ) for all A ∈ℬ_𝒮, where T^-1(A) = {ω∈Ω: T(ω) ∈ A }. The law of a random variable S: Ω→ℝ is denoted by Law(S) which is the same as S#ℙ. For any μ, ν∈𝒫(𝒮), let Π(μ, ν) denote the set of all couplings (or joint measures) with marginals μ and ν. For any ℬ_𝒮^ν-measurable function f, let ∫_𝒮 f d ν denote the integral of f in the completion of (𝒮, ℬ_𝒮, ν). For a random element S: Ω→𝒮 with Law(S) = ν, we write 𝔼_ν[f(S)] = ∫_𝒮 f d ν. Given p ∈ (0, ∞) and a Borel measure ν on 𝒮, let L^p(ν) := L^p(𝒮,ℬ_𝒮 , ν) denote the set of all the ℬ_𝒮^ν-measurable functions f: 𝒮→ℝ such that f_L^p(ν) := ( ∫_𝒮 |f|^p dν)^1/p< ∞. § W-DMR AND MOTIVATING EXAMPLES In this section, we first review W-DMR and then introduce W-DMR in marginal problems. Lastly, we present four motivating examples of marginal problems which will be used to illustrate our results in the rest of this paper. §.§ A Review of W-DMR and Strong Duality W-DMR is defined as the worst model risk over a class of distributions lying in a Wasserstein uncertainty set composed of all probability measures that are a fixed Wasserstein distance away from a given reference measure, see <cit.>. Before presenting W-DMR, we review some basic definitions. Let 𝒳 be a Polish (metric) space with a metric d. Let μ, ν∈𝒫(𝒳) be given probability measures. The optimal transport cost between μ and ν associated with a cost function c: 𝒳×𝒳→ℝ_+ ∪{∞} is defined as K_c(μ, ν) = inf_π∈Π(μ, ν)∫_𝒳×𝒳 c dπ. When the cost function c is lower-semicontinuous, there exists an optimal coupling corresponding to K_c(μ, ν). In other words, there exists π^* ∈Π(μ, ν) such that K_c(μ, ν)=∫_𝒳×𝒳 c dπ^* <cit.>. Let p ∈ [1, ∞). The Wasserstein distance of order p between any two measures μ and ν on Polish metric space (𝒳, d) is defined by W_ p(μ, ν) = [ inf_π∈Π(μ, ν)∫_𝒳×𝒳d^p d π]^1/p. Throughout this paper, we make the following assumption on the cost function c. Let (𝒳, ℬ_𝒳) be a Borel space associated to 𝒳. The cost function c: 𝒳×𝒳→ℝ_+∪{∞} is measurable and satisfies c(x, y) = 0 if and only if x = y. <Ref> implies that for μ, ν∈𝒫(𝒳), μ = ν if and only if K_c(μ, ν) = 0. When c is the metric d on 𝒳, K_c(μ, ν) coincides with the Wasserstein distance of order 1 (Kantorovich-Rubinstein distance) between μ and ν defined in <Ref>. For a given function f: 𝒳→ℝ, <cit.> define W-DMR as ℐ_DMR(δ) := sup_γ∈Σ_DMR(δ)∫_𝒳 f d γ, δ≥ 0, where Σ_DMR(δ) is the Wasserstein uncertainty set[By convention, we call all uncertainty sets based on optimal transport costs as Wasserstein uncertainty sets.] 
centered at a reference measure μ∈𝒫(𝒳) with radius δ≥ 0, i.e., Σ_DMR(δ):={γ∈𝒫(𝒳) : K_c(μ, γ) ≤δ}. <Ref> allows the cost function c to be asymmetric and take value ∞, where the latter corresponds to the case that there is no distributional shift in some marginal measure of μ. Under <Ref>, Σ_DMR(0)={μ} and ℐ_DMR(0) = ∫_𝒳 f d μ. It is well-known that under mild conditions, strong duality holds for ℐ_DMR(δ) when δ > 0 (c.f., <cit.>). To be self-contained, we restate the strong duality result in <cit.> for Polish space below.[The strong duality result in <cit.> allows for general space 𝒳.] Let (𝒳, ℬ_𝒳, μ) be a probability space. Let δ∈ (0, ∞) and f: 𝒳→ℝ be a measurable function such that ∫_𝒳 f d μ > -∞. Suppose the cost function satisfies <Ref>. Then, for any δ > 0, ℐ_DMR(δ) = inf_λ∈ℝ_+{λδ + ∫_𝒳sup_x' ∈𝒳 [ f(x') - λ c(x, x') ] d μ(x) }, where λ c(x, x') is defined to be ∞ when λ = 0 and c(x, x') = ∞. In the rest of this paper, we keep the convention that for any cost function c, λ c(x, y) = ∞ when λ = 0 and c(x, y) = ∞. §.§ W-DMR in Marginal Problems §.§.§ Non-overlapping Marginals Let 𝒱 := 𝒮_1 ×𝒮_2 be the product space of two Polish spaces 𝒮_1 and 𝒮_2. Let μ_1 and μ_2 be Borel probability measures on 𝒮_1 and 𝒮_2 respectively. Following <cit.> (see also <cit.>), we call the Fréchet class of all probability measures on 𝒱 having marginals μ_1 and μ_2 the Fréchet class with non-overlapping marginals denoted as ℱ(𝒱; μ_1, μ_2) := ℱ(μ_1, μ_2). Note that ℱ(μ_1, μ_2)=Π(μ_1, μ_2). Let g:𝒱→ℝ be a measurable function satisfying the following assumption. The function g:𝒱→ℝ is measurable such that ∫_𝒱 g dγ_0 > -∞ for some γ_0 ∈Π(μ_1, μ_2) ⊂𝒫(𝒱). The marginal problem associated with μ_1 and μ_2 is defined as ℐ_M(μ_1,μ_2):= sup_γ∈Π(μ_1, μ_2)∫_𝒱 g d γ. It is essentially an optimal transport problem, where the sup operation is replaced with the inf operation, see <cit.>) or <Ref> for a review of strong duality for ℐ_M(μ_1,μ_2). The W-DMR with non-overlapping marginals we propose extends the marginal problem by allowing each marginal measure of γ to lie in a fixed Wasserstein distance away from a reference measure. Specifically, for any γ∈𝒫(𝒱 ), let γ_1 and γ_2 denote the projection of γ on 𝒮_1 and 𝒮_2, respectively. The W-DMR with non-overlapping marginals is defined as ℐ_D(δ) := sup_γ∈Σ_D(δ)∫_𝒱 g dγ, δ∈ℝ_+^2, where Σ_D(δ) is the uncertainty set given by Σ_D(δ) := Σ_D(μ_1, μ_2, δ) = {γ∈𝒫(𝒱): K_1( μ_1, γ_1) ≤δ_1, K_2( μ_2, γ_2) ≤δ_2 }, in which K_1 and K_2 are optimal transport costs associated with cost functions c_1 and c_2, respectively, and δ := (δ_1, δ_2) ∈ℝ_+^2 is the radius of the uncertainty set. Obviously Σ_D(δ) is non-empty for all δ∈ℝ_+^2. (i) Under <Ref> and <Ref>, it holds that ℐ_D(δ) > - ∞ for all δ∈ℝ_+^2, see <Ref>; (ii) Under <Ref>, the uncertainty set Σ_D(0)=Π(μ_1, μ_2) and thus ℐ_D(0) =ℐ_M(μ_1,μ_2). §.§.§ Overlapping Marginals Let 𝒮 := 𝒴_1 ×𝒴_2 ×𝒳 be the product space of three Polish spaces 𝒴_1, 𝒴_2, and 𝒳. Let 𝒮_1 := 𝒴_1 ×𝒳 and 𝒮_2 := 𝒴_2 ×𝒳. Let μ_13∈𝒫(𝒮_1) and μ_23∈𝒫(𝒮_2) be such that the projection of μ_13 and the projection of μ_23 on 𝒳 are the same. Following <cit.> (see also <cit.>), we call the Fréchet class of all probability measures on 𝒮 having marginals μ_13 and μ_23 the Fréchet class with overlapping marginals and denote it as ℱ(𝒮; μ_13, μ_23) := ℱ(μ_13, μ_23). Unlike the non-overlapping case, ℱ(μ_13, μ_23) is different from the class of couplings Π(μ_13, μ_23). Let f:𝒮→ℝ be a measurable function satisfying the following assumption. 
The function f:𝒮→ℝ is measurable such that ∫_𝒮 f dν_0 > - ∞ for some ν_0 ∈ℱ(μ_13, μ_23) ⊂𝒫(𝒮). <cit.> studies the following marginal problem with overlapping marginals: ℐ_M(μ_13,μ_23):=sup_γ∈ℱ(μ_13, μ_23)∫_𝒮 f d γ. As shown in <cit.>, the marginal problem with overlapping marginals can be computed via the marginal problem with non-overlapping marginals through the following relation: ℐ(0)=∫_𝒳[sup_γ(·|x) ∈Π(μ_1|3, μ_2|3 ) ∫_𝒴_1×𝒴_2f(y_1,y_2,x) d γ(y_1,y_2|x)]dγ_X(x), where for each fixed x∈𝒳, μ_ℓ|3( ·| x) denote the conditional measure of Y_ℓ given X=x, and the inner optimization problem is a marginal problem with non-overlapping marginals. For any γ∈𝒫(𝒮 ), let γ_13 and γ_23 denote the projections of γ on 𝒴_1 ×𝒳 and 𝒴_2 ×𝒳, respectively. The W-DMR with overlapping marginals is defined as ℐ(δ) := sup_γ∈Σ(δ)∫_𝒮 f d γ, δ∈ℝ_+^2, where Σ(δ) is the uncertainty set given by Σ(δ):=Σ(μ_13, μ_23, δ) = {γ∈𝒫(𝒮 ): K_1( μ_13, γ_13) ≤δ_1, K_2( μ_23, γ_23) ≤δ_2 } in which δ := (δ_1, δ_2) ∈ℝ_+^2 is the radius of the uncertainty set, and K_1 and K_2 are optimal transport costs associated with c_1 and c_2. We note that Σ(δ) is non-empty for all δ∈ℝ_+^2. (i) <Ref> imply that ℐ(δ)>-∞ for all δ≥ 0, see <Ref>; (ii) When δ=0, the uncertainty set Σ(0)=ℱ(μ_13, μ_23) and ℐ(0)= ℐ_M(μ_13,μ_23). §.§ Motivating Examples In this section, we present four distinct examples to demonstrate the wide applicability of the W-DMR in marginal problems. The first example is concerned with partial identification of treatment effect parameters when commonly used assumptions in the literature for point identification fail; the second example is concerned with distributionally robust optimal treatment choice; the third one is an application of W-DMR-MP in distributionally robust estimation under data combination; and the last one concerns measures of aggregate risk. For the first two examples, we adopt the potential outcomes framework for a binary treatment. Let D ∈{0,1} represent an individual's treatment status, and Y_1∈𝒴_1⊂ℝ and Y_2∈𝒴_2⊂ℝ denote the potential outcomes under treatments D = 0 and D = 1, respectively. Let the observed outcome be Y = D Y_2 + (1-D) Y_1. To focus on introducing the main ideas, we adopt the selection-on-observables framework stated in <Ref> below. (i) Conditional Independence: The potential outcomes are independent of treatment assignment conditional on covariate X∈𝒳⊂ℝ^q for q ≥ 1, i.e., (Y_1, Y_2) D | X; (ii) Common Support: For all x∈𝒳, 0<p(x)<1, where p(x):=ℙ(D=1|X=x). Suppose a random sample on (Y,X,D) is available. Then under <Ref>, the marginal conditional distribution functions of Y_1, Y_2 given X=x are point identified: F_Y_1|X(y|x)=ℙ(Y_1≤ y|X=x)=ℙ(Y≤ y|X=x,D=0) F_Y_2|X(y|x)=ℙ(Y_2≤ y|X=x)=ℙ(Y≤ y|X=x,D=1). As a result, the probability measures μ_13 of (Y_1,X) and μ_23 of (Y_2,X) are identified as well. §.§.§ Partial Identification of Treatment Effects <Ref> is commonly used to identify treatment effect parameters and optimal treatment choice. However the validity of <Ref> may be questionable when there are unobserved confounders. W-DMR-MP presents a viable approach to studying sensitivity of causal inference to deviations from <Ref> by varying the marginal measures of a joint measure of (Y_1, Y_2, X) in Wasserstein uncertainty sets centered at reference measures consistent with <Ref>. Specifically, let f be a measurable function of Y_1, Y_2. Consider treatment effects of the form: θ_o:=𝔼_o[f(Y_1,Y_2)], where 𝔼_o denotes expectation with respect to the true measure. 
It includes the average treatment effect (ATE) for which f(Y_1, Y_2)=Y_2 - Y_1 and the distributional treatment effect such as ℙ_o(Y_2 - Y_1≥ 0), where ℙ_o denotes the probability computed under the true measure. Consider the identified set for θ_o defined as Θ(δ) := {∫_𝒮 f(y_1,y_2) d γ(y_1, y_2, x) : γ∈Σ( δ) }, where Σ(δ) = {γ∈𝒫(𝒮) : K_1(μ_13, γ_13) ≤δ_1, K_2(μ_23, γ_23) ≤δ_2 }, in which μ_13 and μ_23 are the identified measures of (Y_1, X) and (Y_2, X) under <Ref>. Under mild conditions, we show in <Ref> that the identified set Θ(δ) is a closed interval given by Θ(δ)= [ min_γ∈Σ(δ)∫_𝒮 f(y_1,y_2) d γ(s), max_γ∈Σ(δ)∫_𝒮 f(y_1,y_2) d γ(s) ], where the lower and upper limits of the interval are characterized by the W-DMR-MP.[Since inf_γ∈Σ(δ)∫_𝒮 f(y_1, y_2) d γ(s) can be rewritten as - sup_γ∈Σ(δ)∫_𝒮 [- f(y_1, y_2)] d γ(s), we also refer to the lower limit as W-DMR-MP. ] When δ=0, <cit.> establish a characterization of Θ(0) via marginal problems with overlapping marginals. The identified set Θ(δ) can be used to conduct sensitivity analysis to deviations from <Ref>. We note that sensitivity analysis to other commonly used assumptions such as the threshold-crossing model can be done by taking the reference measures as the measures identified under these alternative assumptions, see <cit.>. §.§.§ Robust Welfare Function In empirical welfare maximization (EWM), an optimal choice/policy is chosen to maximize the expected welfare estimated from a training data set and then applied to a target population, see <cit.>. EWM assumes that the target population and the training data set come from the same underlying probability measure. This may not be valid in important applications. Motivated by designing externally valid treatment policy, <cit.> introduces a robust welfare function which allows the target population to differ from the training population. In this paper, we revisit <cit.>'s robust welfare function and propose a new one based on W-DMR with overlapping marginals. <cit.> adopts the following definition of a robust welfare function: RW_0(d): =inf_γ∈Σ_0(δ_0)𝔼_γ [Y_1 (1 - d(X)) + Y_2 d(X)], where d : 𝒳→{0, 1} is a measurable policy function, i.e., d(X) is 0 or 1 depending on X and Σ_0(δ_0) is the Wasserstein uncertainty set centered at a joint measure μ for (Y_1, Y_2, X) consistent with <Ref>, i.e., Σ_0(δ_0) := {γ∈𝒫(𝒮): K_c(μ, γ) ≤δ_0 }, where K_c(μ, γ) is the optimal transport cost with cost function c: 𝒮×𝒮→ℝ_+∪{∞}. Noting that <Ref> only identifies the marginal measures μ_13,μ_23 of the reference measure μ in Σ_0(δ_0), we define a new robust welfare function as RW(d) := inf_γ∈Σ(δ)𝔼_γ [ Y_1 (1 - d(X)) + Y_2 d(X)], where Σ(δ) = Σ(μ_13, μ_23, δ) is the uncertainty set for W-DMR with overlapping marginals. §.§.§ W-DRO Under Data Combination An important application of W-DMR is W-DRO. Let f: 𝒴_1 ×𝒴_2 ×𝒳×Θ→ℝ be a loss function with an unknown parameter θ∈Θ⊂ℝ^q. W-DRO under data combination is defined as min_θ∈Θsup_γ∈Σ(δ)∫_𝒮 f(y_1, y_2, x;θ) d γ(y_1, y_2, x), where Σ(δ) is the uncertainty set for the overlapping case. For each θ∈Θ, the inner optimization is a W-DMR with overlapping marginals. In practice, we need to choose the reference measures μ_13 and μ_23 based on the sample information. 
Focusing on logit model, where 𝒴_1 = {+1, -1} is the space for the dependent variable, and 𝒴_2 and 𝒳 are feature spaces/covariate space, and f(y_1, y_2, x;θ) = log(1 + exp(- y_1 ⟨θ, (y_2, x) ⟩)), <cit.> proposes a method dubbed `Robust Data Join' in which the empirical measures constructed from the two data sets are used as reference measures. Specifically, let μ_13 and μ_23 denote empirical measures based on two separate data sets. The uncertainty set in <cit.> takes the following form: Σ_RDJ (δ):= {γ∈𝒫(𝒮 ): K_1( μ_13, γ_13) ≤δ_1, K_2(μ_23, γ_23) ≤δ_2 }, where c_1((y_1, x), (y_1', x')) = x - x' _p + κ_1 | y_1 - y_1'| and c_2((y_2, x), (y_2, x')) = x - x' _p + κ_2 y_2 - y_2' _p' with κ_1≥ 1, κ_2≥ 1, p≥ 1, and p'≥ 1. Note that <cit.>'s `Robust Data Join' is different from our W-DMR with non-overlapping marginals because the measure of interest γ∈𝒫(𝒮 ) has overlapping marginals. It is also different from our W-DMR with overlapping marginals because the reference measures μ_13 and μ_23 may not have overlapping marginals. Unlike the uncertainty set for W-DMR, Σ_RDJ(δ) is empty when δ=0. §.§.§ Risk aggregation Let S_1, S_2 be random variables representing individual risks defined on Polish spaces 𝒮_1, 𝒮_2, respectively. Let μ_1, μ_2 be probability measures of S_1, S_2. Let 𝒱 = 𝒮_1 ×𝒮_2 and g: 𝒱→ℝ be a risk aggregating function. Applying W-DMR with non-overlapping marginals to the risk aggregation function g, we can compute the worst aggregate risk when the joint measure of the individual risks varies in the uncertainty set Σ_D(δ). This is different from the set-up in <cit.>, where the following robust risk aggregation problem is studied: ℐ_Π(δ_0):=sup_γ∈Σ_Π(δ)∫_𝒱 g d γ, where Σ_Π(δ_0) := {γ∈Π(μ_1, μ_2) : K_c(γ, μ) ≤δ_0 }, in which K_c is the optimal transport cost associated with a cost function c: 𝒱×𝒱→ℝ_+. Since γ∈Σ_Π(δ_0) is a coupling of (μ_1, μ_2), we have that Σ_Π(δ_0) ⊂Σ_D(0) and thus ℐ_Π(δ_0) ≤ℐ_D(0). § STRONG DUALITY AND DISTRIBUTIONALLY ROBUST MAKAROV BOUNDS In this section, we establish strong duality for our W-DMR-MP and apply it to develop Wasserstein distributionally robust Makarov bounds. §.§ Non-overlapping Marginals For a measurable function g: 𝒱→ℝ and λ := (λ_1, λ_2) ∈ℝ^2_+, we define the function g_λ: 𝒱→ℝ∪{∞} as 2.1 g_λ(v) := sup_v^'∈𝒱φ_λ(v, v^'), where φ_λ: 𝒱×𝒱→ℝ∪{ -∞} is given by φ_λ(v, v^') = g(s_1^', s_2^') - λ_1 c_1 (s_1, s_1^') -λ_2 c_2 (s_2, s_2^'), with v:= (s_1, s_2) and v^' := (s_1^', s^'_2). Similarly, define g_λ_1, 1: 𝒱→ℝ∪{+∞} and g_λ_2, 2: 𝒱→ℝ∪{+∞} as g_λ_1, 1(s_1, s_2) = sup_s_1' ∈𝒮_1{g(s_1', s_2) - λ_1 c_1(s_1, s_1')} and g_λ_2, 2(s_1, s_2) = sup_s_2' ∈𝒮_2{g(s_1, s_2') - λ_2 c_2(s_2, s_2')}. The dual problem 𝒥_D(δ) corresponding to the primal problem ℐ_D(δ) is defined as follows: 𝒥_D(δ) = inf_λ∈ℝ_+^2{⟨λ, δ⟩ + sup_ϖ∈Π(μ_1, μ_2)∫_𝒱g_λ dϖ} if δ∈ℝ^2_++, inf_λ_1 ∈ℝ_+{λ_1 δ_1 + sup_ϖ∈Π(μ_1, μ_2)∫_𝒱 g_λ_1, 1 d ϖ} if δ_1 > 0 and δ_2 = 0, inf_λ_2 ∈ℝ_+{λ_2 δ_2 + sup_ϖ∈Π(μ_1, μ_2)∫_𝒱 g_λ_2, 2 d ϖ} if δ_1 = 0 and δ_2 > 0. Suppose that <Ref> hold. Then, ℐ_D(δ) = 𝒥_D(δ) for all δ∈ℝ_+^2 ∖{0}. Unlike the dual for W-DMR, the dual for W-DMR with non-overlapping marginals in <Ref> involves a marginal problem with non-overlapping marginals μ_1, μ_2 due to the lack of knowledge on the dependence of the joint measure μ. Computational algorithms developed for optimal transport can be used to solve the marginal problem, see <cit.>. For empirical measures μ_1, μ_2, the marginal problem is a discrete optimal transport problem and there are efficient algorithms to compute it, see <cit.>. 
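To make this concrete, when μ_1 and μ_2 are empirical measures on m and n atoms, the inner problem sup over Π(μ_1, μ_2) is a finite linear program over the m×n transport polytope. The sketch below (ours, not from the paper, and assuming SciPy is available) solves such a program with an off-the-shelf LP solver; the payoff matrix G stands in for the values of g_λ on pairs of atoms, so that evaluating the dual would only require an additional low-dimensional minimization over λ ∈ ℝ^2_+.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: the inner marginal problem for empirical marginals,
#   max_{pi in Pi(mu1, mu2)} sum_{ij} G_ij * pi_ij,
# written as a linear program over the transport polytope. The payoff G and
# the support points below are arbitrary placeholders.

def max_over_couplings(G, p, q):
    """Maximize <G, pi> over pi >= 0 with row sums p and column sums q."""
    m, n = G.shape
    A_eq = np.zeros((m + n, m * n))       # row-sum and column-sum constraints
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):
        A_eq[m + j, j::n] = 1.0           # one constraint is redundant; the solver copes
    b_eq = np.concatenate([p, q])
    res = linprog(-G.ravel(), A_eq=A_eq, b_eq=b_eq, method="highs")
    return -res.fun, res.x.reshape(m, n)

# Toy example: two 3-point empirical marginals and a payoff matrix.
rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
G = np.add.outer(x, y) ** 2               # e.g. a payoff g(s1, s2) = (s1 + s2)^2
p = q = np.full(3, 1.0 / 3.0)
val, plan = max_over_couplings(G, p, q)
print("optimal value:", val)
print("optimal coupling:\n", plan.round(3))
```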
For general measures μ_1, μ_2, strong duality may be employed in the numerical computation of the marginal problem. For instance, consider the case when δ > 0. When g_λ (v) is Borel measurable, several strong duality results are available, see e.g., <cit.>. For a general function g and cost functions c_1,c_2, g_λ (v) is not guaranteed to be Borel measurable. However, for Polish spaces, the set {v ∈𝒱:g_λ(v) ≥ u} is an analytic set for all u ∈ℝ (and g_λ is universally measurable), since g, c_1 and c_2 are Borel measurable (see <cit.> and <cit.>). This allows us to apply strong duality for the marginal problem in <cit.> restated in <Ref> to the marginal problem involving g_λ (v), see <ref> in <Ref>. Without additional assumptions on the function g and the cost functions, the dual 𝒥_D(δ) in <Ref> for interior points δ∈ℝ_++^2 and the dual for boundary points may not be the same. To illustrate, plugging in δ_2 = 0 in the dual form for interior points in <Ref>, we obtain inf_λ_1 ∈ℝ_+[ λ_1 δ_1 + inf_λ_2 ∈ℝ_+sup_ϖ∈Π(μ_1, μ_2)∫_𝒱g_λ dϖ]. It is different from the dual 𝒥_D(δ_1,0) for δ_1>0, since inf_λ_2 ∈ℝ_+sup_ϖ∈Π(μ_1, μ_2)∫_𝒱g_λ dϖ ≠sup_ϖ∈Π(μ_1, μ_2)∫_𝒱 g_λ_1, 1 d ϖ. When the function g and the cost functions satisfy assumptions in <Ref>, the dual 𝒥_D(δ) in <Ref> for interior points δ∈ℝ_++^2 and the dual for boundary points are the same so that ℐ_D(δ) = inf_λ∈ℝ_+^2[ ⟨λ, δ⟩ + sup_ϖ∈Π(μ_1, μ_2)∫_𝒱g_λ dϖ] for all δ∈ℝ_+^2. For Polish spaces, <Ref> generalizes the strong duality in <cit.> restated in Theorem 2.1. Our proof is based on that in <cit.>. However, due to the presence of two marginal measures in the uncertainty set Σ_D(δ), we need to verify the existence of a joint measure when some of its overlapping marginal measures are fixed, and we rely on existing results for a given consistent product marginal system studied in <cit.>, <cit.>, and <cit.>, see <Ref> for a detailed review. Similar to <cit.> for W-DMR in marginal problems, we can define an alternative W-DMR through linear penalty terms, i.e., sup_γ∈𝒫(𝒱){∫_𝒱 g d γ - λ_1 K_1(μ_1, γ_1) - λ_2 K_2(μ_2, γ_2): K_ℓ(μ_ℓ, γ_ℓ) < ∞ for ℓ = 1, 2} with λ_1, λ_2 ∈ℝ_++. The proof of <Ref> implies that the dual form of this problem is sup_ϖ∈Π(μ_1, μ_2)∫ g_λ d ϖ under the condition in <Ref>. §.§ Overlapping Marginals Let ϕ_λ : 𝒱×𝒮→ℝ∪{- ∞} be ϕ_λ(v, s^') := f(s^')-λ_1 c_1 (s_1, s_1^')-λ_2 c_2(s_2, s_2^'), where v = (s_1, s_2), s^' = (y^'_0,y^'_1,x^'), s^'_ℓ = (y^'_ℓ, x^') and s_ℓ=(y_ℓ, x_ℓ). Define the function f_λ : 𝒱→ℝ associated with f as f_λ(v) := sup_s^'∈𝒮ϕ_λ (v, s^'). Similarly, we define f_λ, 1: 𝒱→ℝ and f_λ, 2: 𝒱→ℝ as follows: f_λ_1, 1(s_1, s_2) = sup_y_1' ∈𝒴_1{f(y_1', y_2, x_2) - λ_1 c_1((y_1, x_1), (y_1', x_2))} and f_λ_2, 2(s_1, s_2) = sup_y_2' ∈𝒴_2{f(y_1, y_2', x_1) - λ_2 c_2((y_2, x_2), (y_2', x_1)}, in which s_1 = (y_1, x_1) and s_2 = (y_2, x_2). The dual problem 𝒥(δ) corresponding to the primal problem ℐ(δ) is defined as follows: 𝒥(δ) = inf_λ∈ℝ_+^2{⟨λ, δ⟩ + sup_ϖ∈Π(μ_13, μ_23)∫_𝒱f_λ d ϖ} if δ∈ℝ^2_++, inf_λ_1 ∈ℝ_+{λ_1 δ_1 + sup_ϖ∈Π(μ_13, μ_23)∫_𝒱 f_λ_1, 1 d ϖ} if δ_1 > 0 and δ_2 = 0, inf_λ_2 ∈ℝ_+{λ_2 δ_2 + sup_ϖ∈Π(μ_13, μ_23)∫_𝒱 f_λ_2, 2 d ϖ} if δ_1 = 0 and δ_2 > 0. Suppose that <Ref> hold. Then, ℐ(δ) = 𝒥(δ) for all δ∈ℝ_+^2 ∖{0}. An interesting feature of the dual for overlapping marginals is that it involves marginal problems with non-overlapping marginals, i.e., sup_ϖ∈Π(μ_13, μ_23)∫_𝒱f_λ (v) d ϖ (v), although the uncertainty set in the primal problem involves overlapping marginals. 
Compared with the non-overlapping marginals case, overlapping marginals in the uncertainty set make the relevant consistent product marginal system in the verification of the existence of a joint measure more complicated, see the proof of <Ref>. Nonetheless, the non-overlapping marginals in the dual allow us to apply <Ref> to the marginal problem involving f_λ, f_λ, 1 and f_λ, 2, see <ref> in <Ref>. Under the assumptions in <Ref>, we have ℐ(δ) = inf_λ∈ℝ_+^2[ ⟨λ, δ⟩ + sup_ϖ∈Π(μ_13, μ_23)∫_𝒱f_λ dϖ] for all δ∈ℝ_+^2. Similar to the non-overlapping case, we can define an alternative W-DMR with overlapping marginals through linear penalty terms, i.e., sup_γ∈𝒫(𝒮){∫_𝒮 g d γ - λ_1 K_1(μ_13, γ_13) - λ_2 K_2(μ_23, γ_23): K_ℓ(μ_ℓ3, γ_ℓ3) < ∞ for ℓ = 1, 2}, with λ_1, λ_2 ∈ℝ_++. The proof of <Ref> implies that the dual form of this problem is sup_ϖ∈Π(μ_13, μ_23)∫_𝒱 f_λ d ϖ under the conditions in <Ref>. §.§ Wasserstein Distributionally Robust Makarov Bounds Let 𝒮_1=ℝ, 𝒮_2=ℝ, μ_1∈𝒫(𝒮_1), and μ_2∈𝒫(𝒮_2). Further, let Z=S_1+S_2, where S_1,S_2 are random variables whose probability measures are μ_1, μ_2 respectively. For a given z∈ℝ, let F_Z(z)=𝔼_o[g(S_1, S_2)], where g(s_1, s_2)= 1{s_1 + s_2 ≤ z}. Sharp bounds on the quantile function F^-1_Z( ·) are established in <cit.>) and referred to as the Makarov bounds. Inverting the Makarov bounds lead to sharp bounds on the distribution function F _Z( z), see <cit.> and <cit.>. They are given by inf_γ∈Π(μ_1,μ_2)𝔼_γ[g(S_1, S_2)] = sup_x∈ℝmax{μ_1(x)+μ_2(z-x)-1,0 } and sup_γ∈Π(μ_1,μ_2)𝔼_γ[g(S_1, S_2)] = 1+inf_x∈ℝmin{μ_1(x)+μ_2(z-x)-1,0 }. Since the quantile bounds first established in <cit.>) and the above distribution bounds are equivalent, we also refer to the latter as Makarov bounds. Makarov bounds have been successfully applied in distinct areas. For example, the upper bound on the quantile of Z is known as the worst VaR of Z, see <cit.>, <cit.>; Makarov bounds are also used to study partial identification of distributional treatment effects when the treatment assignment mechanism identifies the marginal measures of the potential outcomes such as in <Ref>, see <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. Applying <Ref>, we extend Makarov bounds to allow for possible misspecification of the marginal measures and call the resulting bounds Wasserstein distributionally robust Makarov bounds. Suppose that g(s_1, s_2) = 1(s_1 + s_2 ≤ z) and c_ℓ(s_ℓ, s_ℓ') = |s_ℓ - s_ℓ'|^2 for ℓ = 1, 2. For all δ∈ℝ_+^2, sup_γ∈Σ_D(δ)𝔼_γ[g(S_1, S_2)] = inf_λ∈ℝ_+^2(⟨λ, δ⟩ +sup_ϖ∈Π(μ_1,μ_2) [∫_{s_1 + s_2 > z}[1 - λ_1λ_2 (s_1 + s_2 - z)^2/λ_1 + λ_2]^+ dϖ(s_1, s_2) + 𝔼_ϖ[1{S_1 + S_2 ≤ z}]]; inf _γ∈Σ_D(δ)𝔼_γ[g(S_1, S_2)] = sup_λ∈ℝ_+^2[- ⟨λ, δ⟩ +inf_ϖ∈Π(μ_1,μ_2) {- ∫_{s_1 + s_2 ≤ z}[1 - λ_1λ_2 (s_1 + s_2 - z)^2/λ_1 + λ_2]^+ dϖ(s_1, s_2) + 𝔼_ϖ[1{S_1 + S_2 ≤ z}] ]. We note that g_λ(v) is bounded and continuous in v, and convex in λ, and Π(μ_1, μ_2) is compact. Applying <cit.>'s minimax theorem, we can interchange the order of inf and sup in the dual in the above corollary and get sup_γ∈Σ_D(δ)𝔼_γ[g(S_1, S_2)] = sup_ϖ∈Π(μ_1,μ_2) [ inf_λ∈ℝ_+^2(⟨λ, δ⟩ +∫_{s_1 + s_2 > z}[1 - λ_1λ_2 (s_1 + s_2 - z)^2/λ_1 + λ_2]^+ dϖ(s_1, s_2)) + 𝔼_ϖ[1{S_1 + S_2 ≤ z}]]. This expression is very insightful, where the inner infimum term characterizes possible deviations of the true marginal measures from the reference measures. 
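As a quick numerical reference point for the corollary, the following sketch (ours, purely illustrative) evaluates the classical Makarov bounds, i.e., the δ = 0 formulas recalled above, on a grid when both reference marginals are standard normal; the robust δ > 0 bounds would additionally require the minimization over λ appearing in the dual. The choice of marginals and of the grid is arbitrary.

```python
import numpy as np
from math import erf, sqrt

# Sketch: classical Makarov bounds (the delta = 0 case) for F_Z(z), Z = S1 + S2,
# with both marginals standard normal, evaluated on a finite grid.

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def makarov_bounds(z, grid):
    vals = np.array([Phi(x) + Phi(z - x) - 1.0 for x in grid])
    lower = max(vals.max(), 0.0)          # sup_x max{F1(x) + F2(z - x) - 1, 0}
    upper = 1.0 + min(vals.min(), 0.0)    # 1 + inf_x min{F1(x) + F2(z - x) - 1, 0}
    return lower, upper

grid = np.linspace(-10.0, 10.0, 4001)
for z in (-2.0, 0.0, 2.0):
    lo, hi = makarov_bounds(z, grid)
    indep = Phi(z / sqrt(2.0))            # F_Z(z) under independence, for reference
    print(f"z = {z:+.1f}:  bounds [{lo:.3f}, {hi:.3f}]   (independent coupling {indep:.3f})")
```

The printed interval illustrates how wide the identified set for F_Z(z) can be when only the marginals are fixed; replacing the fixed marginals by Wasserstein balls can only enlarge it.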
§ FINITENESS OF THE W-DMR-MP AND EXISTENCE OF OPTIMIZERS In this section, we assume that all the reference measures belong to appropriate Wasserstein spaces and prove finiteness of the W-DMR-MP and existence of an optimizer. The Wasserstein space of order p ≥ 1 on a Polish space 𝒳 with metric d is defined as 𝒫_p( 𝒳 ) = {μ∈𝒫( 𝒳 ): ∫_𝒳d(x_0, x)^p d μ(x) < ∞}, where x_0 ∈𝒳 is arbitrary. * In the non-overlapping case, we assume that μ_1∈𝒫_p_1(𝒮_1) and μ_2∈𝒫_p_2(𝒮_2) for some p_1 ≥ 1 and p_2 ≥ 1; * In the overlapping case, we assume that μ_13∈𝒫_p_1(𝒮_1) and μ_23∈𝒫_p_2(𝒮_2) for some p_1 ≥ 1 and p_2 ≥ 1. The cost function c_ℓ: 𝒮_ℓ×𝒮_ℓ→ℝ∪{∞} is of the form c_ℓ(s_ℓ, s_ℓ^' ) = d_𝒮_ℓ (s_ℓ, s_ℓ^' )^p_ℓ, where (𝒮_ℓ, d_𝒮_ℓ) is a Polish space and p_ℓ≥ 1 for ℓ=1, 2. §.§ Finiteness of the W-DMR-MP For the non-overlapping case, we establish the following result. Suppose that <Ref> hold. Then for all δ∈ℝ_++^2, ℐ_D(δ) < ∞ if and only if there exist v^⋆ := (s_1^⋆, s_2^⋆) ∈𝒱 and a constant M > 0 such that for all (s_1, s_2)∈𝒱, g(s_1, s_2) ≤ M [ 1 + d_𝒮_1 (s_1^⋆,s_1 )^p_1+ d_𝒮_2 (s_2^⋆,s_2)^p_2], where p_1 and p_2 are defined in <Ref>. The inequality in <Ref> is a growth condition on the function g. It extends the growth condition in <cit.> for W-DMR to our W-DMR with non-overlapping marginals. For the overlapping case, the following result holds. Suppose that <Ref> hold. Then for all δ∈ℝ_++^2, ℐ(δ) < ∞ if and only if there exist (s_1^⋆, s_2^⋆) ∈𝒮_1 ×𝒮_2 and a constant M > 0 such that f(s) ≤ M [ 1 +d_𝒮_1 (s_1^⋆, s_1)^p_1+d_𝒮_2 (s_2^⋆, s_2)^p_2], for all s ∈𝒮, where s := (y_1, y_2, x), s_ℓ := (y_ℓ, x) and s_ℓ^⋆ := (y_ℓ^⋆, x^⋆) for ℓ = 1, 2, and p_1 and p_2 are defined in <Ref>. The growth condition (<ref>) on the function f extends the growth condition in <cit.> for W-DMR. When d_𝒮_ℓ ( (y_ℓ, x), (y^'_ℓ, x^') ) = d_𝒴_ℓ ( y_ℓ,y^'_ℓ ) + d_𝒳 ( x, x^' ), condition (<ref>) is satisfied if and only if there exist s^⋆ := (y_1^⋆, y_2^⋆, x^⋆) and a constant M > 0 such that f(s) ≤ M [ 1 + d_𝒴_1 (y_1, y_1^⋆ )^p_1+ d_𝒴_2 (y_2, y_2^⋆ )^p_2+ d_𝒳(x, x^⋆ )^p_1 ∧ p_2], for all s = (y_1,y_2,x) ∈𝒮. The conditions in <Ref> are sufficient conditions for ℐ_D(δ) and ℐ(δ) to be finite for all δ∈ℝ_+^2 including boundary points because ℐ_D(δ) and ℐ(δ) are non-decreasing. §.§ Existence of Optimizers A metric space (𝒳, d) is said to be proper if for any r>0 and x_0 ∈𝒳, the closed ball B(x_0, r) := { x ∈𝒳: d(x,x_0) ≤ r } is compact. Examples of proper metric spaces include finite-dimensional Banach spaces and complete Riemannian manifolds, see <cit.>. (𝒮_1, d_𝒮_1) and (𝒮_2, d_𝒮_2) are proper. <Ref> imply that Σ_D(δ) and Σ(δ) are weakly compact, see <Ref> in Appendix C. Given weak compactness of the uncertainty sets Σ_D(δ) and Σ(δ), it is sufficient to show that the mapping: γ→∫ g dγ is upper semi-continuous over γ∈Σ_D(δ) for the non-overlapping case, and the mapping: γ→∫ f dγ is upper semi-continuous over γ∈Σ(δ) for the overlapping case. In <Ref> below, we provide conditions for g and f ensuring upper semi-continuity of each map and thus the existence of optimal solutions for ℐ_D(δ) and ℐ(δ). Suppose that <Ref> hold. Further, assume that g is upper-semicontinuous, and there exist a constant M>0, v^⋆ := (s_1^⋆, s_2^⋆) ∈𝒱 and p_ℓ^'∈ (0, p_ℓ) for ℓ=1,2, such that g(v) ≤ M[1+ d_𝒮_1 (s^⋆_1, s_1)^p_1^' + d_𝒮_2 (s^⋆_2,s_2)^p_2^'], for all v := (s_1, s_2) ∈𝒱. Then an optimal solution of (<ref>) exists for all δ∈ℝ_+^2. Suppose that <Ref> hold.
Further, assume that f is upper-semicontinuous, and there exist (s_1^⋆, s_2^⋆) ∈𝒮_1 ×𝒮_2, a constant M>0, p_ℓ^'∈ (0, p_ℓ) for ℓ = 1, 2, such that f(s) ≤ M [ 1 +d_𝒮_1 (s_1^⋆, s_1)^p_1^'+d_𝒮_2 (s_2^⋆,s_2 )^p_2^'], for all s ∈𝒮 where s := (y_1, y_2, x), s_ℓ := (y_ℓ, x) and s_ℓ^⋆ := (y_ℓ^⋆, x_ℓ^⋆) for ℓ = 1, 2. Then an optimal solution of (<ref>) exists for all δ∈ℝ_+^2. §.§ Characterization of Identified Sets In some applications, such as the partial identification of treatment effects introduced in <Ref>, the identified sets of θ_Do := 𝔼_o[g(S_1, S_2)] and θ_o := 𝔼_o[f(S)] are of interest, where S is a random variable whose probability measure belongs to Σ(δ), and S_1 and S_2 are random variables whose joint probability measure belongs to Σ_D(δ). They are: Θ_D(δ) := {∫_𝒮_1 ×𝒮_2 g d γ : γ∈Σ_D(δ) } and Θ(δ) := {∫_𝒮 f d γ : γ∈Σ(δ) }. By applying finiteness and existence results, we show below that under mild conditions, the identified sets Θ_D(δ) and Θ(δ) are both closed intervals. * Suppose <Ref> hold. In addition, g is continuous, and |g| satisfies Condition (<ref>). Then, for δ∈ℝ_+^2, we have Θ_D(δ)=[min _γ∈Σ_D(δ)∫_𝒮_1 ×𝒮_2 g d γ, max _γ∈Σ_D(δ)∫_𝒮_1 ×𝒮_2 g d γ], where both the lower and upper bounds are finite. * Suppose <Ref> hold. In addition, f is continuous and |f| satisfies Condition (<ref>). Then for δ∈ℝ_+^2, we have Θ(δ)=[min _γ∈Σ(δ)∫_𝒮 f d γ, max_γ∈Σ(δ)∫_𝒮 f d γ], where both the lower and upper bounds are finite. The strong duality in Section 3 can be used to evaluate the lower and upper bounds. § CONTINUITY OF THE DMR-MP FUNCTIONS In this section, we establish continuity of the W-DMR-MP functions ℐ_D(δ) and ℐ(δ) for all δ∈ℝ_+^2 under similar conditions to those in <cit.>. Compared with <cit.>, our analysis is more involved, because the boundary in our case includes not only the origin (0,0) but also (δ_1,0) and (0,δ_2) for all δ_1 > 0 and δ_2 > 0. §.§ Non-overlapping Marginals <Ref> implies that under <Ref>, ℐ_D(δ) is a concave function for δ∈ℝ_+^2 and hence is continuous on ℝ_++^2. We provide the main assumption for the continuity of ℐ_D(δ) on ℝ_+^2 in this subsection. Let Ψ: ℝ_+^2 →ℝ_+ be a continuous, non-decreasing, and concave function with Ψ(0,0) =0. Suppose the function g: 𝒱→ℝ satisfies g(v) - g(v^') ≤Ψ( c_1(s_1, s_1^'), c_2(s_2, s_2^') ), for all v=(s_1, s_2) ∈𝒱 and v^'= (s^'_1, s^'_2) ∈𝒱. The function Ψ in <Ref> plays the role of the modulus of continuity of g. To illustrate, consider the following example. Suppose <ref> holds, i.e., c_ℓ(s_ℓ, s_ℓ^') = d_𝒮_ℓ(s_ℓ, s_ℓ^')^p_ℓ for some p_ℓ≥ 1, ℓ=1,2. * Define a product metric d_𝒱 on 𝒱 = 𝒮_1 ×𝒮_2 as d_𝒱 ((s_1,s_2), (s_1^', s_2^')) = d_𝒮_1 (s_1,s_1^') + d_𝒮_2 (s_2, s_2^'). Let Ψ (x,y) = x^1/p_1 + y^1/p_2. Then, d_𝒱 ((s_1,s_2), (s_1^', s_2^')) = Ψ( c_1 (s_1, s_1^') , c_2(s_2, s_2^') ). On the metric space (𝒱, d_𝒱), the function g is continuous and has ω: x↦ x as modulus of continuity. Moreover, <Ref> implies the growth condition in (<ref>). * Suppose p_1=p_2. Define a product metric d_𝒱 on 𝒱 = 𝒮_1 ×𝒮_2 as d_𝒱 ((s_1,s_2), (s_1^', s_2^')) = [ d_𝒮_1 (s_1,s_1^')^p + d_𝒮_2 (s_2, s_2^')^p ]^1/p. Let Ψ (x,y) = (x+y)^1/p. Then, d_𝒱 ( (s_1,s_2), (s_1^', s_2^') ) = Ψ( c_1 (s_1, s_1^') , c_2(s_2, s_2^') ). On the metric space (𝒱, d_𝒱), the function g is continuous and has ω: x↦ x as modulus of continuity. <Ref> also implies the growth condition in (<ref>). * Suppose p_1 ≠ p_2. Define a product metric d_𝒱 on 𝒱 = 𝒮_1 ×𝒮_2 as d_𝒱 ((s_1,s_2), (s_1^', s_2^')) = d_𝒮_1 (s_1,s_1^') ∨d_𝒮_2 (s_2, s_2^'). 
Then, <Ref> implies g(v)-g (v^') ≤Ψ( d_𝒱 ( v, v^') , d_𝒱 (v, v^') ) = ω( d_𝒱 (v, v^') ) . where ω: x ↦Ψ(x,x) is a concave function. On the metric space (𝒱,d_𝒱), the function g is continuous and has ω: x ↦Ψ(x,x) as modulus of continuity. Suppose <Ref> hold and ℐ_D(δ) < ∞ for some δ > 0. Then, the function ℐ_D(δ) is continuous on ℝ_+^2. Two implications follow. First, under <Ref> and <Ref>, ℐ_D(0) = sup_γ∈Π(μ_1, μ_2)∫_𝒱 g dγ. Continuity facilitates sensitivity analysis as δ approaches zero; Second, under the assumptions in <Ref>, we have ℐ_D(δ) = inf_λ∈ℝ_+^2[ ⟨λ, δ⟩ + sup_ϖ∈Π(μ_1, μ_2)∫_𝒱g_λ dϖ] for all δ∈ℝ_+^2. As a result, the dual 𝒥_D(δ) in (<ref>) is continuous for all δ∈ℝ_+^2. §.§ Overlapping Marginals <Ref> implies that under <Ref>, ℐ(δ) is a concave function for δ∈ℝ_+^2 and hence is continuous on ℝ_++^2. We provide the main assumption for the continuity of ℐ(δ) on ℝ_+^2 below. To simplify the technical analysis, we maintain <Ref> in this section. Since the metrics in 𝒴_1 and 𝒴_2 are not specified, we introduce an auxiliary function ρ_ℓ from 𝒴_ℓ×𝒴_ℓ to ℝ_+ induced by the cost function c_ℓ, ℓ = 1, 2. For ℓ=1,2, there exists a function ρ_ℓ from 𝒴_ℓ×𝒴_ℓ to ℝ_+ such that * ρ_ℓ is symmetric, i.e., ρ_ℓ(y_ℓ, y_ℓ^') =ρ_ℓ(y_ℓ^', y_ℓ) for all y_ℓ, y_ℓ^'∈𝒴_ℓ; * there is q_ℓ∈ [1, p_ℓ] such that ρ_ℓ(y_ℓ, y_ℓ^') ≤d_𝒮_ℓ (s_ℓ, s_ℓ^')^q_ℓ for all s_ℓ≡ (y_ℓ, x) ∈𝒮_ℓ and s^'_ℓ≡ (y^'_ℓ, x^') ∈𝒮_ℓ; * there is a constant N > 0 such that ρ_ℓ(y_ℓ, y_ℓ^') ≤ N [ ρ_ℓ(y_ℓ, y_ℓ^⋆ ) + ρ_ℓ( y_ℓ^⋆ , y_ℓ^') ] for all y_ℓ, y_ℓ^', y_ℓ^⋆∈𝒴_ℓ. We now introduce the main assumption on f. For ℓ = 1,2, let Ψ_ℓ : ℝ^2_+ →ℝ_+ be continuous, non-decreasing, and concave satisfying Ψ_ℓ(0,0) = 0. Suppose for all s = (y_1, y_2, x) and s^' = (y^'_1, y^'_2, x^'), it holds that f(y_1, y_2, x) - f(y_1^', y_2^', x^') ≤Ψ_1 ( c_1(s_1, s_1^'), ρ_2(y_2, y_2^') ), and f(y_1, y_2, x) - f(y_1^', y_2^', x^') ≤Ψ_2 ( ρ_1(y_1, y_1^'), c_2(s_2, s_2^') ). Like <Ref>, <Ref> depends on the cost functions c_1,c_2. It also depends on the auxiliary functions ρ_1,ρ_2. The functions Ψ_1, Ψ_2 play the role of the modulus of continuity. [p_j-product metric] Let (𝒴_1,d_𝒴_1 ), (𝒴_2, d_𝒴_2), and (𝒳, d_𝒳) be Polish (metric) spaces. For p_ℓ≥ 1, define the p_ℓ-product metric on 𝒮_ℓ as d_𝒮_ℓ (s_ℓ, s_ℓ^') = [ d_𝒴_ℓ(y_ℓ , y_ℓ^')^p_ℓ + d_𝒳(x, x^')^p_ℓ]^1/p_ℓ. Let ρ_ℓ( y_ℓ , y_ℓ^' ):= inf_ x_ℓ, x_ℓ^'∈𝒳d_𝒮_ℓ( (y_ℓ, x_ℓ) , (y_ℓ^', x_ℓ^') )^p_ℓ. It is easy to show that ρ_ℓ( y_ℓ , y_ℓ^' ) =d_𝒴_ℓ(y_ℓ, y_ℓ^')^p_ℓ and <Ref> is satisfied with N=2^p_ℓ. Moreover, <Ref> reduces to f(y_1, y_2, x) - f(y_1^', y_2^', x^') ≤Ψ_1 ( d_𝒮_1(s_1, s_1^')^p_1, d_𝒴_2(y_2, y_2^')^p_2) and f(y_1, y_2, x) - f(y_1^', y_2^', x^') ≤Ψ_2 (d_𝒴_1(y_1, y_1^')^p_1, d_𝒮_2(s_2, s_2^')^p_2). When p_1 = p_2 = p, <Ref> may be reduced to a simpler form. To see this, define two functions ψ_1 and ψ_2 from ℝ^3 to ℝ^2 as ψ_1: (z_1,z_2, z) ↦ ( z_1 + z , z_2) and ψ_2: (z_1,z_2, z) ↦ ( z_1, z_2 + z ). We can see that Ψ_1 ( d_𝒮_1(s_1, s_1^')^p, ρ_2(y_1, y_1^')^p) = Ψ_1 ∘ψ_1 ( d_𝒴_1(y_1, y_1^')^p , d_𝒴_2 (y_2, y_2^' )^p , d_𝒳(x, x^')^p ), Ψ_2 ( ρ_1(y_1, y_1^')^p , d_𝒮_2(s_2, s_2^')^p ) = Ψ_2 ∘ψ_2 ( d_𝒴_1(y_1, y_1^')^p , d_𝒴_2(y_2, y_2^' )^p , d_𝒳(x, x^')^p ). Since ψ_j is linear, Φ_j = Ψ_j ∘ψ_j is still continuous, non-decreasing and concave. <Ref> is reduced to the following condition: f(y_1, y_2, x) - f(y_1^', y_2^', x^') ≤Φ_j ( d_𝒴_1(y_1, y_1^')^p , d_𝒴_2 (y_2, y_2^' )^p , d_𝒳(x, x^')^p ) for all (y_1,y_2, x) ∈𝒮 and (y_1^', y_2^', x^') ∈𝒮. Suppose <Ref> hold, and ℐ(δ) < ∞ for some δ > 0. 
Then the function ℐ(δ) is continuous on ℝ_+^2. Like the non-overlapping case, two implications follow. First, under <Ref> and <Ref>, ℐ(0)= sup_γ∈ℱ(μ_13, μ_23) ∫_𝒮 f d γ. Continuity facilitates sensitivity analysis as δ approaches zero; Second, under the assumptions in <Ref>, we have ℐ(δ) = inf_λ∈ℝ_+^2[ ⟨λ, δ⟩ + sup_ϖ∈Π(μ_13, μ_23)∫_𝒱f_λ dϖ] for all δ∈ℝ_+^2. As a result, the dual 𝒥(δ) in (<ref>) is continuous for all δ∈ℝ_+^2. § MOTIVATING EXAMPLES REVISITED In this section, we apply the results in Sections 3-5 to the examples introduced in Section 2. §.§ Partial Identification of Treatment Effects In addition to characterizing Θ(δ) introduced in Section 2, we also study the identified set for θ_Do= 𝔼_o[f(Y_1,Y_2)] without using the covariate information: Θ_D(δ) := {∫_𝒴_1 ×𝒴_2 f(y_1, y_2) d γ(y_1 ,y_2) : γ∈Σ_D(δ) }, where Σ_D(δ) = {γ∈𝒫(𝒴_1 ×𝒴_2): K_Y_1(μ_Y_1, γ_1) ≤δ_1, K_Y_1( μ_Y_2,γ_2) ≤δ_2 } in which K_Y_1 and K_Y_2 are the optimal transport costs associated with cost functions c_Y_1 and c_Y_2, respectively. §.§.§ Characterization of the Identified Sets When f is continuous and conditions in <Ref> are satisfied, the identified sets Θ_D(δ) and Θ(δ) are both closed intervals with upper limits given by W-DMR for non-overlapping and overlapping marginals respectively. This allows us to apply our duality results in Section 3 to evaluate and compare Θ_D(δ) and Θ(δ). Let ℐ_D(δ) and ℐ(δ) denote the upper bounds of Θ_D(δ) and Θ(δ), respectively, where ℐ_D(δ) = sup_γ∈Σ_D(δ)∫_𝒴_1 ×𝒴_2 f(y_1, y_2) d γ(y_1, y_2) and ℐ(δ) = sup_γ∈Σ(δ)∫_𝒮 f(y_1, y_2) d γ(y_1, y_2, x). <Ref> establishes robust versions of existing results on the identified sets of treatment effects under <Ref>, see <cit.>. Sensitivity to deviations from <Ref> can be examined via Θ_D(δ) and Θ(δ) by varying δ. For example, when f satisfies assumptions in <Ref>, ℐ(δ) and ℐ_D(δ) are continuous on ℝ_+^2. As a result, lim_δ→ 0 ℐ(δ)=ℐ(0) and lim_δ→ 0 ℐ_D(δ)=ℐ_D(0). For a general function f, the lower and upper limits of the identified sets Θ_D(δ) and Θ(δ) need to be computed numerically. When f is additively separable, we show that duality results in Section 3 simplify the evaluation of Θ_D(δ) and Θ(δ). Since the lower bounds of Θ_D(δ) and Θ(δ) can be computed in a similar way by applying duality to -f(y_1, y_2), we omit details for the lower bounds. Let f: (y_1,y_2, x) ↦ f_1(y_1) + f_2(y_2) from 𝒮 to ℝ, where f_ℓ∈ L^1(μ_ℓ 3) for ℓ=1,2. To avoid tedious notation, we also treat f as a function from 𝒴_1 ×𝒴_2 to ℝ. Under <Ref>, it is easy to show that ℐ_D(δ) = sup_γ_1:K_Y_1(μ_Y_1, γ_1) ≤δ_1 ∫_𝒴_1f_1 dγ_1 + sup_γ_2:K_Y_2(μ_Y_2, γ_2) ≤δ_2∫_𝒴_2f_2 dγ_2 = inf_λ_1 ≥ 0 [ λ_1 δ_1 +∫_𝒴_1(f_1)_λ_1 d μ_1 ] + inf_λ_2 ≥ 0 [ λ_2 δ_2 +∫_𝒴_2 (f_2 )_λ_2 d μ_2 ], where (f_ℓ)_λ_ℓ: 𝒴_ℓ→ℝ is given by (f_ℓ)_λ_ℓ(y_ℓ) = sup_ y_ℓ^'∈𝒴_ℓ{ f_ℓ(y_ℓ^')-λ_ℓ c_Y_ℓ(y_ℓ, y_ℓ')}. That is, when f is an additively separable function, the W-DMR for non-overlapping marginals is the sum of two W-DMRs associated with the marginals regardless of the cost functions. Depending on the cost functions, the W-DMR for overlapping marginals may be different from the sum of two W-DMRs associated with the marginals. We say that a function f: 𝒳×𝒴→ℝ is separable if each x and y can be optimized regardless of the other variable. In other words, _x, y f(x, y) = (_x ∈𝒳 f(x, y'), _y∈𝒴 f(x', y) ) for any x' ∈𝒳 and y' ∈𝒴. For ℓ = 1, 2, the cost function c_ℓ((y_ℓ, x_ℓ), (y_ℓ', x_ℓ')) is separable with respect to (y_ℓ, y_ℓ') and (x_ℓ, x_ℓ'). Let a_ℓ: 𝒴_ℓ×𝒴_ℓ→ℝ_+∪{∞} and b_ℓ: 𝒳×𝒴→ℝ_+∪{∞} satisfy <Ref>. 
Let s = (y, x) and s' = (y', x'). Then c(s, s') = a(y, y') + b(x, x') is separable with respect to (x, x') and (y, y'). Also, both c(s, s')=(a(y, y') +1)(b(x, x')+1)-1 and c(s, s') = [a(y, y')^p + b(x, x')^p]^1/p for p ≥ 1 are separable with respect to (x, x') and (y, y') even though they are not additively separable. For ℓ=1,2, let c_ℓ: (𝒴_ℓ×𝒳) × (𝒴_ℓ×𝒳) →ℝ_+ denote the cost function for Θ(δ). Suppose that c_ℓ satisfies <Ref> and the marginal measure of μ_ℓ 3 on 𝒴_ℓ coincides with μ_ℓ, i.e., μ_ℓ,3 = Law(Y_ℓ, X) with μ_ℓ = Law(Y_ℓ). Under <Ref>, one has ℐ(δ) = ℐ_D(δ), where ℐ_D(δ) is based on the cost function c_Y_ℓ on 𝒴_ℓ×𝒴_ℓ given by c_Y_ℓ(y_ℓ, y_ℓ^')=inf _x_ℓ, x_ℓ^'∈𝒳 c_ℓ((y_ℓ, x_ℓ),(y_ℓ^', x_ℓ^')). It is easy to verify that c_Y_ℓ(y_ℓ, y_ℓ^') = 0 if and only if y_ℓ = y_ℓ^'. This proposition implies that for separable cost functions, the W-DMR for overlapping marginals equals the W-DMR for non-overlapping marginals with cost function c_Y_ℓ(y_ℓ, y_ℓ^'). As a result, the covariate information does not help shrink the identified set. §.§.§ Average Treatment Effect Suppose f(y_1, y_2) = y_2 - y_1 and c_ℓ((y, x), (y_ℓ, x_ℓ)) = |y - y'|^2 + x_ℓ - x_ℓ' ^2 for ℓ = 1, 2. Let τ_ATE=𝔼[Y_2 - Y_1]. Then <Ref> implies that the upper bound on τ_ATE is given by ℐ(δ) = ℐ_D(δ) = 𝔼[Y_2] - 𝔼[Y_1] + √(δ_1) + √(δ_2). In the rest of this section, we demonstrate that when <Ref> is violated, the W-DMR for overlapping marginals may be smaller than the W-DMR for non-overlapping marginals and, as a result, Θ(δ) is a proper subset of Θ_D(δ). Consider the squared Mahalanobis distance with respect to a positive definite matrix. That is, c_ℓ (s_ℓ, s_ℓ') = (s_ℓ - s_ℓ')^⊤ V_ℓ^-1 (s_ℓ - s_ℓ'), where V_ℓ = [ V_ℓ, YY V_ℓ, YX; V_ℓ, XY V_ℓ, XX ] is a positive definite matrix. It is easy to show that c_Y_ℓ(y_ℓ , y_ℓ') = min_x_ℓ, x_ℓ' ∈𝒳_ℓ' c_ℓ (s_ℓ, s_ℓ') = (y_ℓ - y_ℓ')^⊤ V_ℓ, YY^-1 (y_ℓ - y_ℓ'), where s_ℓ = (y_ℓ, x_ℓ) and s_ℓ' = (y_ℓ', x_ℓ'). Let ℐ be the primal of the overlapping W-DMR problem under c_ℓ (s_ℓ, s_ℓ') = (s_ℓ - s_ℓ')^⊤ V_ℓ^-1 (s_ℓ - s_ℓ'). Let ℐ_D be the primal of the non-overlapping W-DMR problem under c_Y_ℓ(y_ℓ ,y_ℓ'). Assume that 𝔼X_2^2 < ∞, 𝔼|Y_1|^2 < ∞, and 𝔼|Y_2|^2 < ∞. Then, ℐ(δ) ≤ℐ_D(δ) for all δ > 0. Suppose that all the conditions in <Ref> hold. Then, * for all δ∈ℝ^2_+, ℐ_D(δ) = 𝔼[Y_2] - 𝔼[Y_1] + V_1, YY^1/2 δ_1^1/2 + V_2, YY^1/2 δ_2^1/2, ℐ(δ) = 𝔼[Y_2] - 𝔼[Y_1] + inf_λ∈ℝ_++^2{λ_1 δ_1 + λ_2 δ_2 + 1/4λ_1(V_1 / V_1, XX) + 1/4λ_2(V_2 / V_2, XX) + 1/4 V_o^⊤(λ_1 V_1, XX^-1 + λ_2 V_2, XX^-1)^-1 V_o }, where V_ℓ / V_ℓ, XX : = V_ℓ, YY - V_ℓ, YX V_ℓ, XX^-1 V_ℓ, XY is the Schur complement of V_ℓ, XX in V_ℓ for ℓ = 1, 2, and V_o = V_2, XX^-1 V_2, XY - V_1, XX^-1 V_0, XY; * ℐ_D(δ) = ℐ(δ) for all δ∈ℝ_+^2 if and only if V_1, XY = V_2, XY = 0; * ℐ_D(δ) and ℐ(δ) are continuous on ℝ^2_+. <Ref> and <Ref> imply that for non-separable Mahalanobis cost functions, the information in covariates may help shrink the identified set since ℐ_D(δ) < ℐ(δ) for some δ under mild conditions. <Ref> also implies that (i) ℐ(0) = ℐ_D(0) = 𝔼[Y_2] - 𝔼[Y_1]; (ii) ℐ(δ_1, 0) = ℐ_D(δ_1, 0) and ℐ(0, δ_2) = ℐ_D(0, δ_2) for all δ_1 ≥ 0 and δ_2 ≥ 0. §.§ Comparison of Robust Welfare Functions Recall that RW_0(d) := inf_γ∈Σ_0(δ)𝔼[ Y_1 (1 - d(X)) + Y_2 d(X)] and RW(d) := inf_γ∈Σ(δ)𝔼[ Y_1 (1 - d(X)) + Y_2 d(X)], where Σ_0(δ_0) = {γ∈𝒫(𝒮): K(μ, γ) ≤δ_0 } and Σ(δ) = {γ∈𝒫(𝒮): K_ℓ(μ_ℓ,3, γ_ℓ,3) ≤δ_ℓ, ∀ℓ=1,2}. 
Consider the following cost function c_ℓ for ℓ = 1, 2: c_ℓ(s_ℓ, s_ℓ') = c_Y_ℓ(y_ℓ, y_ℓ') + b(x, x'), where s_ℓ = (y_ℓ, x_ℓ), s_ℓ' = (y_ℓ', x_ℓ'), and c_Y_1(y_1, y_1') and c_Y_2(y_2, y_2') are cost functions for Y_1 and Y_2, respectively, and b(x, x') is some function on the space 𝒳 satisfying <Ref>. When b(x, x') = ∞1{x x'}, ℙ(X = X') = 1 for any probability measure in uncertainty set. <cit.> establishes strong duality for RW_0(d) under several cost functions. For comparison purposes, we restate the following Proposition in <cit.> which allows distributional shifts in covariate X. (Proposition 4.1 in <cit.>) Suppose Y_1 and Y_2 are unbounded and 𝔼 X _2^2 is finite. Let the cost function c: 𝒮×𝒮→ℝ_+ be given by c(s, s') = |y_1 - y_1'| + |y_2 - y_2'| + x' - x _2, for s = (y_1, y_2, x) and s' = (y_1', y_2', x'). Then RW_0(d) = sup_η≥ 1{𝔼_μ[max{Y_2 + η h_1(X), Y_1 + η h_0(X) }] - ηδ_0 }, where h_0(x) = inf_u ∈𝒳: d(u) = 0 x - u _2 and h_1(x) = inf_u ∈𝒳: d(u) = 1 x - u _2. This proposition implies that RW_0(d) depends on the choice of the reference measure μ. Since only the marginals μ_13 and μ_23 are identified under <Ref>, <cit.> suggest three possible choices for μ by imposing specific dependence structures on μ: * Y_1 and Y_2 are perfectly positively dependent conditional on X = x; * Y_1 and Y_2 are conditionally independent given X = x; * Y_1 and Y_2 are perfectly negatively dependent conditional on X = x. Section 4.3.1 in <cit.> shows that their robust welfare function RW_0(d) is minimized when Y_1 and Y_2 are perfectly negatively dependent conditional on X = x. The following proposition evaluates RW(d) via the duality result in Section 3 and compares it with RW_0(d). Consider c_ℓ(s_ℓ, s_ℓ') = | y_ℓ - y_ℓ' | + x_ℓ - x_ℓ' _2. Assume that Y is unbounded and 𝔼|Y_1|, 𝔼|Y_2|, and 𝔼 X _2^2 are finite. Then, * the robust welfare function RW(d) based on Σ(δ) has the following dual reformulation: RW(d) = sup_λ≥ 1[ inf_π∈Π(μ_13, μ_23)∫_𝒱min{ y_2 + φ_λ, 1 (x_1, x_2), y_1 + φ_λ, 0 (x_1, x_2) } d π(v) - ⟨λ, δ⟩], where v = (y_1, x_1, y_2, x_2), and φ_λ, 0 (x_1, x_2) = min_x': d(x') = 0( λ_1 x_1 - x' _2 + λ_2 x_2 - x' _2 ), φ_λ, 1 (x_1, x_2) = min_x': d(x') = 1( λ_1 x_1 - x' _2 + λ_2 x_2 - x' _2 ); * When δ_0 = δ_1 = δ_2, RW(d)≤RW_0^*(d), where RW_0^*(d) is the robust welfare function RW_0(d) based on the reference measure π^*=∫max{μ_1|3+μ_2|3-1,0 }dμ_3. Part (ii) of the above proposition implies that RW(d) ≤RW_0(d) for any reference measure μ∈ℱ(μ_13, μ_23). §.§ W-DRO for Logit Model Under Data Combination We revisit the logit model in <Ref> and make the following assumption. (i) Let (Y_1, Y_2, X) follow some unknown measure μ. Let D denote a binary random variable independent of (Y_1, Y_2, X) such that we observe (Y_1, X) when D = 0, and (Y_2, X) when D = 1; (ii) Let {Y_1i, X_1i}_i=1^n_1 be the data set from (Y_1, X), and {Y_2i, X_2i}_i=1^n_2 be the data set from (Y_2, X). Under this assumption, X|D=1 has the same distribution as X|D=0 and the empirical distributions of the two data sets are consistent estimators of the population reference measures for (Y_1, X) and (Y_2, X). Suppose <Ref> hold. Then <Ref> implies that for all δ > 0, ℐ(δ) = inf_λ∈ℝ_+^2[ ⟨λ, δ⟩ + sup_ϖ∈Π(μ_13, μ_23)∫_𝒱f_θ, λ d ϖ], where f_θ, λ (v) = sup_y_1', y_2', x'[f(y_1', y_2', y; θ) - λ_1 c_1((y_1, x_1), (y_1', x')) - λ_2 c_2((y_2, x_2, y_2', x')) ] with v = (y_1, x_1, y_2, x_2). Let μ_13 and μ_23 denote the empirical measures based on the two data sets. 
The dual form of ℐ(δ) can be estimated by ℐ(δ) := inf_λ∈ℝ_+^2[ ⟨λ, δ⟩ + sup_ϖ∈Π(μ_13, μ_23)∫_𝒱f_θ, λ d ϖ] . A direct consequence of <cit.> is that ℐ(δ) = inf_λ∈ℝ_++^2, {φ_i}_i=1^n_1, {φ_j}_j=1^n_2[ ⟨λ, δ⟩ + 1/n_1∑_i=1^n_1φ_i + 1/n_2∑_j=1^n_2φ_j ] such that f_θ, λ (s_1i, s_2j) ≤φ_i + φ_j' for any i ∈ [n_1] and j ∈ [n_2], where the last expression reduces to the dual in <cit.> for the cost functions c_1((y_1, x), (y_1', x')) = x - x' _p + κ_1 | y_1 - y_1'| and c_2((y_2, x), (y_2, x')) = x - x' _p + κ_2 y_2 - y_2' _p'. § W-DMR WITH MULTI-MARGINALS Sections 2-6 present a detailed study of W-DMR with two marginals. In this section, we briefly introduce W-DMR with more than two marginals or multi-marginals and discuss strong duality for non-overlapping and overlapping marginals.[For multi-marginals, the collection of given marginals can be more complicated than the non-overlapping and overlapping marginals (see <cit.>, <cit.> and <cit.>), we leave a complete treatment of the W-DMR with multi-marginals in future work.] Applications include extension of risk aggregation in <Ref> to any finite number of individual risks and robust treatment choice in <Ref> to multi-valued treatment. §.§ Non-overlapping Marginals Let 𝒱:=∏_ℓ∈ [L]𝒮_ℓ for Polish spaces 𝒮_ℓ for ℓ∈ [L], and μ_ℓ be a probability measure on (𝒮_ℓ, ℬ_𝒮_ℓ). Let Π(μ_1, …, μ_L) be the set of all possible couplings of μ_1, …, μ_L. Further, let g:𝒱→ℝ be a measurable function satisfying the following assumption. The function g:𝒱→ℝ is a measurable function such that ∫_𝒱 g dγ_0> -∞ for some γ_0 ∈Π(μ_1, …, μ_L) ⊂𝒫(𝒱). For any γ∈𝒫(𝒱), let γ_ℓ denote the projection of γ on 𝒮_ℓ for ℓ∈ [L]. The W-DMR with non-overlapping multi-marginals is formulated as ℐ_D (δ) = sup_γ∈Σ_D(δ)∫_𝒱 g d γ, where Σ_D(δ) is the uncertainty set defined as Σ_D(δ) = {γ∈𝒫(𝒱): K_ℓ (μ_ℓ, γ_ℓ) ≤δ_ℓ, ∀ℓ∈ [L]} in which δ = (δ_1, …, δ_L) ∈ℝ_+^L is the radius of the uncertainty set. For a generic vector v ∈ℝ^L and A ⊂ [L], we write v_A = (v_A, 1, …, v_A, L) ∈ℝ^L as follows: v_A, ℓ = v_ℓ if ℓ∈ A, 0 if ℓ∉ A. We also define c̃_ℓ: 𝒮_ℓ×𝒮_ℓ→ℝ_+∪{∞} as: c̃_ℓ(s_ℓ, s_ℓ') = c_ℓ(s_ℓ, s_ℓ') if ℓ∈ A, ∞1{s_ℓ s_ℓ'} if ℓ∉ A. For a function g: 𝒱→ℝ and λ := (λ_1, …, λ_L) ∈ℝ^L_+, we define the function g_λ, A: 𝒱→ℝ∪{∞} as g_λ, A(v) = sup_v' ∈𝒱{g(v') - ∑_ℓ=1^Lλ_ℓc̃_ℓ{ s_ℓ, s_ℓ' }} with v:= (s_1, …, s_L) and v^' := (s_1^', …, s_L^'). Suppose that <Ref> hold. Then, for any δ∈ℝ_++^L and A ⊂ [L], we have ℐ_D(δ_A ) = inf_λ∈ℝ_+^L[ ⟨λ, δ_A⟩ + sup_π∈Π(μ_1, …, μ_L)∫_𝒱 g_λ, A d π]. In practice, the dual in <Ref> involves the computation of the multi-marginal problem, sup_π∈Π(μ_1, …, μ_L)∫_𝒱 g_λ d π, see <cit.> for detailed studies of properties and computation of multi-marginal problems for specific functions g_λ. For general possibly non Borel-measurable g_λ, the strong duality in <cit.> could be applied. The established result is stated in <Ref> in <Ref>. §.§ Overlapping Marginals Let 𝒮 := (∏_ℓ∈ [L]𝒴_ℓ) ×𝒳, where 𝒴_ℓ for ℓ∈ [L] and 𝒳 are Polish spaces. Let 𝒮_ℓ := 𝒴_ℓ×𝒳 for ℓ∈ [L]. Let μ_ℓ,L+1∈𝒫(𝒮_ℓ) for ℓ∈ [L] be such that the projections of μ_ℓ, L+1 on 𝒳 are the same for ℓ∈ [L]. We call the Fréchet class of all probability measures on 𝒮 having marginals (μ_1,L+1)_ℓ∈ [L] the Fréchet class with overlapping marginals and denote it as ℱ(𝒮; (μ_ℓ,L+1)_ℓ∈ [L]) := ℱ( (μ_ℓ,L+1)_ℓ∈ [L]). This class is the star-like system of marginals in <cit.> and <cit.>, see also <cit.>. Moreover, let f:𝒮→ℝ be a measurable function satisfying the following assumption. 
The function f:𝒮→ℝ is a measurable function such that ∫_𝒮 f dν_0 > - ∞ for some ν_0 ∈Π(μ_1,L+1,..., μ_L,L+1) ⊂𝒫(𝒮). For any γ∈𝒫(𝒮), let γ_ℓ, L+1 denote the projection of γ on 𝒴_ℓ×𝒳 for ℓ∈ [L]. Similar to the two marginals case, the W-DMR with overlapping multi-marginals is defined as ℐ (δ) = sup_γ∈Σ(δ)∫_𝒮 f d γ, where Σ(δ) is the uncertainty set defined as Σ(δ) = {γ∈𝒫(𝒮): K_ℓ(μ_ℓ, L+1, γ_ℓ, L+1) ≤δ_ℓ for ℓ∈ [L]}, in which δ = (δ_1, …, δ_L) ∈ℝ_+^L is the radius of the uncertainty set. For a function f: 𝒱→ℝ, λ := (λ_1, …, λ_L) ∈ℝ^L_+, and A ⊂ [L], we define the function f_λ, A: 𝒱→ℝ as follows: f_λ, A(v) = sup_s' ∈𝒮{f(s') - ∑_ℓ=1^Lλ_ℓc̃_ℓ(s_ℓ, s_ℓ')}, where v = (s_1, …, s_L), s^' = (y^'_1, …, y^'_L, x^'), s^'_ℓ = (y^'_ℓ, x^') and s_ℓ=(y_ℓ, x_ℓ), and c̃_ℓ(s_ℓ, s_ℓ^') = c_ℓ(s_ℓ, s_ℓ') if ℓ∈ A, ∞1{s_ℓ s_ℓ'} if ℓ∉ A. Suppose that <Ref> hold. Then, for any δ∈ℝ^L_++ and A ⊂ [L], we have ℐ(δ_A ) = inf_λ∈ℝ_+^L[ ⟨λ, δ_A⟩ + sup_π∈Π(μ_1, L+1,…, μ_L,L+1)∫_𝒱 f_λ, A d π]. Similar to the nonoverlapping case, strong duality holds for the inner multi-marginal problem under additional conditions. The result is stated in <ref> of <Ref>. §.§ Treatment Choice for Multi-valued Treatment We apply strong duality to multi-valued treatment in <cit.>. Let d: 𝒳→ [L] be a policy function or treatment rule on 𝒳 and Y_ℓ∈ℝ denote the potential outcome under the treatment ℓ for ℓ∈ [L]. Consider the policy function defined as Y(d) := ∑_ℓ=1^L Y_ℓ×1{d(X) = ℓ} . <cit.> introduces the following robust welfare function. RW_C(d) = sup_γ∈Σ_M(δ_0)𝔼_γ[ ∑_ℓ=1^L Y_ℓ1{d(X) = ℓ}], where the uncertainty set Σ_M(δ_0) is based on the conditional distribution of (Y_ℓ)_ℓ∈ [L] given X: Σ_M(δ_0) := {γ∈𝒫(𝒮): K(μ_(Y_1, …, Y_L)|X=x, γ_(Y_1, …, Y_L)|X=x) ≤δ_0 for all x, μ_X = γ_X}, in which the cost function c associated with K is c((y_1, …, y_L), (y_1', …, y_L') ) = ∑_ℓ=1^L |y_ℓ - y_ℓ'|. Note that the uncertainty set Σ_M(δ_0) does not allow any potential shift[<cit.> mentions the possibility of allowing for covariate shift by incorporating uncertainty sets in <cit.> for the distribution of the covariate in future work. ] in X. When Y_1, …, Y_L are unbounded, <cit.> shows that RW_C(d) = ∑_ℓ=1^L 𝔼_(Y_ℓ, X) ∼μ_ℓ, L+1[ (Y_ℓ - δ_0) I(D(X) = ℓ) ] = 𝔼_X[ ∑_ℓ=1^L (𝔼[Y_ℓ| X] - δ_0 ) I(D(X) = ℓ) ]. We apply W-DMR for overlapping marginals with the following cost function: c_ℓ(s_ℓ, s_ℓ') = | y_ℓ - y_ℓ'| + x_ℓ - x_ℓ' _2 , and define a robust welfare function as RW(d) = sup_γ∈Σ(δ)𝔼_γ[ ∑_ℓ=1^L Y_ℓ I(d(X) = ℓ) ] . For ℓ∈ [L], let c_ℓ(s_ℓ, s_ℓ') = | y_ℓ - y_ℓ'| + x_ℓ - x_ℓ' _2. Assume that Y_ℓ is unbounded, 𝔼[X_2^2] < ∞ and 𝔼[|Y_ℓ|] < ∞. Then RW(d) = sup_λ≥ 1{inf_π∈Π(μ_1,L+1, …, μ_L, L+1)∫_𝒱min_ℓ∈ [L]{y_ℓ + ϕ_λ, ℓ(x_1, …, x_L) } d π(s) - ⟨λ, δ⟩}, where φ_λ, ℓ(x_1, …, x_L) = min_x', d(x') = ℓ∑_ℓ=1^Lλ_ℓ x_ℓ - x' _2. <Ref> is an extension of <Ref>. § CONCLUDING REMARKS In this paper, we have introduced W-DMR in marginal problems for both non-overlapping and overlapping marginals and established fundamental results including strong duality, finiteness of the proposed W-DMR, and existence of an optimizer at each radius. We have also shown continuity of the W-DMR-MP as a function of the radius. Applicability of the proposed W-DMR in marginal problems and established properties is demonstrated via distinct applications when the sample information comes from multiple data sources and only some marginal reference measures are identified. To the best of the authors' knowledge, this paper is the first systematic study of W-DMR in marginal problems. 
Many open questions remain including the structure of optimizers of W-DMR for both non-overlapping and overlapping marginals, efficient numerical algorithms, and estimation and inference in each motivating example. Another useful extension is to consider objective functions that are nonlinear in the joint probability measure such as the Value-at-Risk of a linear portfolio of risks in <cit.> and robust spectral measures of risk in <cit.>. § APPENDIX A: PRELIMINARIES In this appendix, we provide a self-contained review of interchangeability principle, strong duality for marginal problems, and probability measures given marginals. Additional notations used in the appendices are collected here. For any set A, we denote by 2^A the power set of A. Suppose f: 𝒳→𝒴 and g: 𝒴→𝒵, let g∘ f denote the composite of f and g, i.e., a map x ↦ g (f(x)) that maps 𝒳 into 𝒵. Given a Polish measurable space (𝒳, ℬ_𝒳). For any Borel measure μ on 𝒳, let Supp(μ) denote the smallest closed set A ⊂𝒳 such that μ(A) = 1. For j ∈ [n], let 𝒳_j be a Polish space equipped with Borel σ-algebra ℬ_𝒳_j. Let 𝒮 := ∏_j ∈ [n]𝒳_j. For any subset K ⊂ [n], we write 𝒮_K := ∏_j∈ K𝒳_j and the projection map proj_K from 𝒮 to 𝒮_K as proj_K: (x_j)_j∈ [n]↦ (x_j)_j∈ K. Given μ_j∈𝒫(𝒳_j) for j ∈ [n], let Π(μ_1, ⋯, μ_n) denote the set of probability measures μ on 𝒮 such that proj_{j}#μ = μ_j. Finally, let 𝒩(μ, Σ) denote the multivariate normal distribution with mean μ and covariance matrix Σ. §.§ Interchangeability Principle Let (𝒯, ℬ_𝒯,μ) be a probability space, (𝒳, ℬ_𝒳 ) be a sample space and φ: 𝒯×𝒳→ℝ be a measurable function. We denote by ℬ_𝒯^μ the μ-completion of ℬ_𝒯. Let Γ(μ,φ ) denote the set of probability measures π on (𝒯×𝒳, ℬ_𝒯⊗ℬ_𝒳 ) such that π(A ×𝒳) = μ(A) for all A ∈ℬ_𝒯 and ∫_𝒯×𝒳φ dπ is well defined. If there is no such π, the natural convention is to take Γ(μ, φ) =∅. A measurable function φ:𝒯×𝒳→ℝ satisfies the interchangeability principle with respect to μ if the function t ↦sup_x ∈𝒳φ(t,x) is ℬ_𝒯^μ-measurable and satisfies ∫_𝒯[ sup_x∈𝒳φ( t,x) ] dμ(t) = sup_π∈Γ(μ, φ) ∫_𝒯×𝒳φ(t,x) d π (t,x). The interchangeability principle allows us to interchange the supremum and integral operators. It is a weak condition. As explained in Example 2 in <cit.>, this condition is satisfied when the space is Polish and φ is measurable. We extend the interchangeability principle with respect to a measure to a class of measures in the definition below. Let 𝒢 be a set of probability measures on (𝒳, ℬ_𝒳). A measurable function φ:𝒯×𝒳→ℝ satisfies the interchangeability principle with respect to 𝒢 if φ satisfies the interchangeability principle with respect to μ for all μ∈𝒢. Suppose that φ satisfies the interchangeability principle with respect to 𝒢. Let Γ(𝒢, φ) := ∪_μ∈𝒢Γ(μ, φ). Then, sup_π∈Γ(𝒢, φ ) ∫_𝒯×𝒳φ(t,x) dπ(t,x) = sup_μ∈𝒢{∫_𝒯[ sup_x ∈𝒳φ(t,x) ] d μ(t) } . With the convention, we write sup A = -∞ if A =∅. It is easy to see that sup_π∈Γ(𝒢, φ ) ∫_𝒯×𝒳φ dπ = sup_μ∈𝒢[ sup_π∈Γ(μ, φ) ∫_𝒯×𝒳φ(t, x) d π(t, x) ] = sup_μ∈𝒢[sup _π∈Γ(μ, φ)∫_𝒯×𝒳φ d π]= sup_μ∈𝒢{∫_𝒯[ sup_x ∈𝒳φ(t,x) ] d μ(t) }. §.§ Strong Duality for Marginal Problems The strong duality results for marginal problems are well-established in the literature, see <cit.>. Here, we present a strong duality result based on <cit.>. Given probability measures μ_ℓ on Polish space 𝒳_ℓ equipped with Borel algebra ℬ_𝒳_ℓ for ℓ∈ [L]. Let 𝒳 = ∏_ℓ∈ [L] 𝒳_ℓ and f: 𝒳→ℝ be an extended real-valued function. Consider the following marginal problem: sup_π∈Π(μ_1, …, μ_L)∫_𝒳 f(x) d π (x). 
Suppose {x ∈𝒳:f(x) ≥ u } is analytic for all u ∈ℝ and there exist f_ℓ < ∞, f_ℓ∈ L^1(μ_ℓ) for ℓ∈ [L] such that f(x) ≥∑_ℓ=1^L f_ℓ(x_ℓ) for all x := (x_1, …, x_L) ∈𝒳. Let Φ_f be the set of all measurable functions (ϕ_ℓ)_ℓ∈ [L], where ϕ_ℓ∈ L^1(μ_ℓ) and ϕ_ℓ > - ∞ for all ℓ∈ [L] such that ∑_ℓ=1^Lϕ_ℓ(x_ℓ) ≥ f(x), ∀ x = (x_1, …, x_L) ∈𝒳. Then, sup_π∈Π(μ_1, …, μ_L)∫_𝒳 f dπ = inf_ (ϕ_ℓ)_ℓ∈ [L]∈Φ_f {∑_ℓ=1^L∫_𝒳_ℓϕ_ℓ d μ_ℓ}. This theorem is a direct application of <cit.> to Polish spaces. Since {x ∈𝒳:f(x) ≥ u } is analytic for every u ∈ℝ and 𝒳 is a Polish space, it is a 𝔉_𝒳-Suslin set, where 𝔉_𝒳 is the collection of closed sets of 𝒳. Therefore, conditions in <cit.> are satisfied with the outer integral in the primal problem. Since {x ∈𝒳:f(x) ≥ u } is analytic for every u ∈ℝ and 𝒳 is Polish space, f is universally measurable, see <cit.>) and <cit.>. For each π∈Π(μ_1, …, μ_L), there exists a Borel measurable function f^* such that f^* = f, π-almost surely. As a result, we can replace the outer integral by the integral with respect to π-completion using Lemma 1.2.1 of <cit.>. For the function φ defined in <Ref>, <cit.> implies that the set {t ∈𝒯: sup_x ∈𝒳φ(t, x) ≥ u} is analytic for all u∈ℝ. In our context, the functions f_λ: 𝒱→ℝ and g_λ: 𝒱→ℝ may not be Borel measurable; however, { f_λ≥ u } and { g_λ≥ u } are both analytic for all u ∈ℝ. In the following corollaries, we apply <Ref> to the inner marginal problems in 𝒥_D(δ) and 𝒥(δ), where we use the convention that the infimum over an empty set is defined as ∞. In addition to conditions in <Ref>, assume that there exist some measurable functions a_1 ∈ L^1(μ_1) and a_2 ∈ L^1(μ_2) such that a_1 < ∞, a_2 < ∞, and g(s_1, s_2) ≥ a_1(s_1) + a_2(s_2), ∀ (s_1, s_2) ∈𝒮_1 ×𝒮_2. Then, for δ∈ℝ^2_++, we have 𝒥_D(δ) = inf_λ∈ℝ_+^2 (ψ, ϕ ) ∈ L^1(μ_1) × L^1(μ_2){⟨λ, δ⟩ + ∫_𝒮_1ψ d μ_1 + ∫_𝒮_2ϕ d μ_2 : ψ, ϕ > - ∞ ψ(s_1) + ϕ(s_2) ≥ g_λ(s_1, s_2), }. In addition to conditions in <Ref>, assume that for each λ, there exist some measurable functions a_λ, 1∈ L^1(μ_1) and a_λ, 2∈ L^1(μ_2) such that a_λ, 1 < ∞, a_λ, 2 < ∞, and f_λ(s_1, s_2) ≥ a_λ, 1(s_1) + a_λ, 2(s_2) , ∀ (s_1, s_2) ∈𝒮_1 ×𝒮_2. Then, for δ∈ℝ^2_++, we have 𝒥(δ) = inf_λ∈ℝ_+^2 (ψ, ϕ ) ∈ L^1(μ_13) × L^1(μ_23){⟨λ, δ⟩ + ∫_𝒮_1ψ d μ_13 + ∫_𝒮_2ϕ d μ_23: ψ, ϕ > - ∞ ψ(s_1) + ϕ(s_2) ≥ f_λ(s_1, s_2) ∀ (s_1, s_2) }. In addition to conditions in <Ref>, assume that there exist some measurable functions a_ℓ∈ L^1(μ_ℓ) for ℓ∈ [L] such that a_ℓ < ∞, and g(s) ≥∑_ℓ=1^L a_ℓ(s_ℓ), ∀ s = (s_1, …, s_L) ∈∏_ℓ∈ [L]𝒮_ℓ. Then, for δ∈ℝ^L_++, we have 𝒥_D(δ) = inf_λ∈ℝ_+^L, ψ_ℓ > - ∞ (ψ_ℓ)_ℓ∈ [L]∈∏_ℓ∈ [L] L^1(μ_ℓ) {⟨λ, δ⟩ + ∑_ℓ=1^L ∫ψ_ℓ d μ_ℓ : ∑_ℓ=1^L ψ_ℓ(s_ℓ) ≥ g_λ, [L](s), ∀ s }. In addition to conditions in <Ref>, assume that for each λ, there exist some measurable functions a_λ, ℓ∈ L^1(μ_ℓ) for ℓ∈ [L] such that a_λ, ℓ < ∞, and f_λ, [L](s) ≥∑_ℓ=1^L a_λ, ℓ(s_ℓ), ∀ s = (s_1, …, s_L) ∈∏_ℓ∈ [L]𝒮_ℓ. Then, for δ∈ℝ^L_++, we have 𝒥(δ) = inf_λ∈ℝ_+^L, ψ_ℓ > - ∞ (ψ_ℓ)_ℓ∈ [L]∈∏_ℓ∈ [L] L^1(μ_ℓ) {⟨λ, δ⟩ + ∑_ℓ=1^L ∫_𝒮_ℓψ_ℓ d μ_ℓ,L+1 : ∑_ℓ=1^L ψ_ℓ(s_ℓ) ≥ f_λ, [L](s), ∀ s }. §.§ Probability Measures with Given Marginals The existence of probability measures with given marginals was studied by <cit.>, <cit.>, and <cit.>. If the indices of the marginals are overlapping, then there may not be a probability measure compatible with the given marginals. In this section, we review a sufficient condition for the existence of such a measure. We first define a consistent product marginal system (CPMS) by following <cit.>. Let 𝒮 = ∏_j ∈ [n]𝒳_j. 
Given a finite index collection {K_1,…, K_N } with K_j ⊂ [n] and probability measure μ_j on 𝒮_j := 𝒮_K_j for j ∈ [N]. A product marginal system ℱ(𝒮; (μ_j)_j=1^N ) consists of a product space 𝒮 and probability measures (μ_j)_j=1^N. The product marginal system ℱ(𝒮; (μ_j)_j=1^N ) is said to be consistent if for any K_i, K_j ⊂ [n] with K_i ∩ K_j ≠∅, the projections of μ_i and μ_j on 𝒮_K_i ∩ K_j are the same, i.e., ( proj_K_i ∩ K_j∘proj_K_i ^-1)#μ_i = ( proj_K_i ∩ K_j∘proj_K_j ^-1) #μ_j . A CPMS is not necessarily nonempty. To illustrate this, we consider the following examples. Let 𝒮 = 𝒳_1 ×𝒳_2, K_j = {j} for j ∈ [2]. Given probability measures μ_j on 𝒮_j:= 𝒳_j for j ∈ [2], the CPMS ℱ (𝒮;μ_1, μ_2) is given by ℱ (𝒮;μ_1, μ_2) = {π∈𝒫(𝒳_1 ×𝒳_2): π∘proj_{j}^-1 = μ_j, ∀ j = 1, 2 }. Obviously, ℱ (𝒮;μ_1, μ_2) is identical to Π(μ_1, μ_2) and is nonempty. Let 𝒳_j= ℝ for j∈ [4]. Let K_j = {j,j+1 } and 𝒮_j = 𝒳_j×𝒳_j+1 for j ∈ [3]. To make the example more concrete, let μ_j = 𝒩(0, I_2) for all j ∈ [3]. We note that ( proj_K_j ∩ K_j+1∘proj_K_j^-1) #μ_j = 𝒩(0,1), ∀ j ∈ [3]. Moreover, it is easy to verify ℱ(𝒮; (μ_j)_j=1^3 ) is consistent and nonempty, since 𝒩(0, I_3) is an element of ℱ(𝒮; (μ_j)_j=1^3 ). Let 𝒳_j= ℝ for j∈ [3], K_1 = {1,2 }, K_2 = {2,3 }, K_3 = {1,3} and 𝒮_j := 𝒮_K_j for j ∈ [3]. We define μ_1 = 𝒩( 0, [[ 2 -1; -1 4 ]] ), μ_2 = 𝒩( 0, [[ 4 -2; -2 4 ]] ), μ_3 = 𝒩( 0, [[ 2 -2; -2 4 ]] ). It is easy to verify ℱ(𝒮; (μ_j)_j=1^3 ) is consistent but is an empty set. Suppose π∈ℱ(𝒮; (μ_j)_j=1^3 ), then the covariance matrix of π is Σ = [[ 2 -1 -2; -1 4 -2; -2 -2 4; ]]. However, Σ is not positive semi-definite so can not be a covariance matrix. A sufficient condition for a CPMS to be non-empty is the decomposability of its index set. We restate the definition of decomposibility from <cit.>, <cit.>, and <cit.>. A collection {K_1, …, K_N } of subsets of [n] is called decomposable if there is a permutation σ of [N] such that DC( ⋃_j< m K_σ(j)) ∩ K_σ(m)∈⋃_j< m 2^ K_σ(j) , ∀ m ∈ [N]. For Euclidean spaces, <cit.> proves that a CPMS is nonempty if its index set is decomposable, while <cit.> extends this result to separable spaces. Below, we present a statement of this result for Polish spaces and give a simple proof. Let 𝒮 = Π_j ∈ [n]𝒳_j where 𝒳_j are Polish spaces with the Borel algebras. Suppose that ℱ(𝒮 ;(μ_j)_j=1^N) is a CPMS and the associated index collection {K_1, …, K_N} with K_i ⊂ [n] is decomposable. Then ℱ(𝒮 ;(μ_j)_j=1^N) is nonempty. The proof of <ref> below is based on two results. The first is Theorem 1.1.10 in <cit.> restated in <ref> and the second is <ref>, a direct consequence of <Ref>. Let 𝒴_1, 𝒴_2, 𝒳 be Polish spaces with Borel algebras and let 𝒮:= 𝒴_1×𝒴_2×𝒳. Let μ_0 and μ_1 be Laws on 𝒮_1 := 𝒴_1 ×𝒳 and 𝒮_2 := 𝒴_2 ×𝒳 respectively. Suppose ℱ(𝒮; μ_1, μ_2 ) is a consistent product marginal system. Then ℱ(𝒮; μ_1, μ_2 ) is nonempty. Suppose that ℱ(𝒮; (μ_j)_j=1^N ) is a CPMS, K_i, K_j ⊂ [n] and K_i ∩ K_j ≠∅. If Q⊂ K_i ∩ K_j and Q≠∅, then the projections of μ_i and μ_j on 𝒮_Q are the same, i.e., ( proj_Q∘proj_K_i^-1) #μ_i = ( proj_Q∘proj_K_j^-1) #μ_j. Moreover, for all π∈ℱ(𝒮; (μ_j)_j=1^N ), proj_Q#π = ( proj_Q∘proj_K_j ^-1)#μ_j, ∀ j ∈ [N]. We give a proof by induction on N. Without loss of generality, assume that the permutation σ in <Ref> satisfies σ(j) = j for j ∈ [N]. <Ref> holds trivially when N=1. When N=2, it holds by <Ref>. Let ℋ_N-1 := ∏_j=1^N-1𝒮_j and assume that ℱ(ℋ_N-1 ; (μ_j)_j=1^N-1)≠∅. Then, there is a γ∈ℱ( ℋ_N-1 ;(μ_j)_j=1^N-1). Let us verify that ℱ( ℋ_N-1×𝒮_N; γ, μ_N ) is consistent. 
Let Q = ∪_j=1^N-1 K_j. Since {K_1,…, K_N} is decomposable, Q ∩ K_N ∈∪_j<N 2^K_j. As a result, we must have (Q ∩ K_N ) ⊂ K_ℓ for some ℓ∈ [N-1] and hence (Q ∩ K_N ) ⊂ (K_ℓ∩ K_N). If (Q ∩ K_N) = ∅, the proof is trivial. In the rest of the proof, we suppose (Q ∩ K_N) ≠∅. Since ℱ(𝒮; (μ_j)_j=1^N ) is consistent, by <Ref>, ( proj_K_N ∩ Q∘proj_K_N^-1) #μ_N = ( proj_K_N ∩ Q∘proj_K_ℓ^-1) #μ_ℓ. Since ℱ(ℋ_N-1 ;(μ_j)_j=1^N-1) is consistent, <Ref> also implies ( proj_K_N ∩ Q∘proj_Q^-1) #γ = ( proj_K_N ∩ Q∘proj_K_ℓ ^-1) #μ_ℓ. This shows (proj_K_N ∩ Q∘proj_K_N^-1) #μ_N = ( proj_K_N ∩ Q∘proj_Q^-1) #γ, and ℱ( ℋ_N-1×𝒮_N; γ, μ_N ) is consistent. The proof is complete by using <Ref> again. § APPENDIX B: TECHNICAL LEMMAS * Suppose <Ref> hold. Then, the function ℐ_D(δ) is concave, non-decreasing in δ∈ℝ_+^2, and ℐ_D(δ) > -∞ for all δ∈ℝ_+^2. * Suppose <Ref> hold. Then, the function ℐ(δ) is concave, non-decreasing in δ∈ℝ_+^2, and ℐ(δ) > -∞ for all δ∈ℝ_+^2. We show the claims on ℐ(δ) only since the proof for ℐ_D(δ) is almost identical to that for ℐ(δ). Note that ℐ(δ) is well-defined since <Ref> implies that Σ(δ) is non-empty. Note that under <Ref>, for any δ∈ℝ_+^2, ℐ(δ) ≥ℐ(0) ≥∫_𝒮 f(s) d ν(s) > - ∞ for some ν∈ℱ(μ_1, μ_2). The monotonicity of ℐ can be seen from the definition. We now show the concavity of ℐ. Fix δ = (δ_1, δ_2) ∈ℝ^2_+, δ^' = (δ_1^', δ_2^')∈ℝ_+^2 and λ∈ (0,1). For any γ∈Σ(δ),γ^'∈Σ(δ^'), consider the probability measure γ^'' = λγ +(1-λ) γ^'. Since K_ℓ is Optimal Transport cost, ν↦K_ℓ (μ_ℓ, ν) is convex. So, we have for ℓ =1,2, K_ℓ(μ_ℓ,γ^''_ℓ,3) ≤λK_ℓ( μ_ℓ, γ_ℓ) + (1-λ) K_ℓ( μ_ℓ,γ_ℓ^') ≤λδ_ℓ + (1- λ) δ_ℓ^'. This shows that γ^''∈Σ( λδ + (1-λ) δ^') and hence ℐ(λδ + (1-λ) δ^' ) = sup_ν∈Σ( λδ + (1-λ) δ^')∫_𝒮 f(s) dν(s) ≥∫_𝒮 f d γ^'' = λ∫_𝒮 f d γ + (1-λ) ∫_𝒮 f d γ^'. Taking the supremum over γ∈Σ(δ) and γ^'∈Σ(δ^') yields ℐ(λδ + (1-λ) δ^') ≥λsup_γ∈Σ(δ)∫_𝒮 f(s) d γ(s) + (1-λ) sup_γ^'∈Σ(δ^')∫_𝒮 f(s) d γ^'(s) ≥λℐ(δ) +(1-λ) ℐ(δ^'). Let φ: ℝ^n_+→ℝ∪{∞} be a concave and non-decreasing function. For all λ∈ℝ^n_+, define φ^⋆(λ) = sup_x ∈ℝ^n_+ {φ(x) - ⟨λ,x ⟩}. Then for all x ∈ℝ^n_++, one has φ(x) = inf_λ∈ℝ^n_+ {⟨λ, x ⟩ + φ^⋆(λ) }. If φ(x_0) = ∞ for some x_0 ∈ℝ^n_++, then φ(x) = ∞ for all x ∈ℝ^n_++. In fact, for any x ∈ℝ^n_++, there is x_1 ∈ B(x, δ) such that x = t x_0 + (1- t) x_1 for some t ∈ (0,1) and the concavity of φ implies φ(x) = φ(t x_0 + (1- t) x_1 ) ≥ t φ(x_0) + (1-t) φ(x_1) = ∞. Now we assume φ(x) < ∞ for all x ∈ℝ_++^n. Define a new function ψ: ℝ^n →ℝ∪{∞} as ψ(x):= - φ(x) x ∈ℝ^n_+ ∞ x ∉ℝ^n_+ . It is easy to see ψ is convex and the Legendre–Fenchel transform of ψ is given by ψ^⋆(λ) = sup _x ∈ℝ^n{⟨λ, x⟩ - ψ(x)} = sup _x ∈ℝ^n_+{⟨λ, x⟩ - ψ(x) } = sup_x ∈ℝ^n_+{φ(x) - ⟨ -λ, x⟩} = φ^⋆ (- λ) - λ∈ℝ^n_+ ∞ -λ∉ℝ^n_+ . The Legendre–Fenchel transform of ψ^⋆(λ) is given by ψ^⋆⋆(x) = sup_λ∈ℝ^n {⟨λ, x⟩-ψ^⋆(λ )} = sup_ -λ∈ℝ_+^n {⟨λ, x⟩-ψ^⋆(λ )} = sup_ -λ∈ℝ_+^n {⟨λ, x⟩- φ^⋆(-λ) } = - inf_λ∈ℝ_+^n {⟨λ, x⟩ + φ^⋆(λ) } Since ψ^⋆⋆ is the double Legendre–Fenchel transform of ψ, then ψ^⋆⋆ is the lower-semicontinuous convex envelope of ψ from below. The convexity of ψ implies ψ = ψ^⋆⋆ in the interior of {x : ψ(x) < ∞} which is ℝ^n_++. The desired result follows. Let K := {K_1, K_2, K_3}, where K_1 = {3,4}, K_2 = {1,3}, and K_3 = {2,4}. Then K is decomposable. When m = 1, The condition (<ref>) holds obviously. When m=2, (⋃_ℓ<2 K_ℓ) ∩ K_2 = K_1 ∩ K_2 = {3}∈⋃_ℓ<2 2^K_ℓ = 2^K_1. When m = 3, (⋃_ℓ<3 K_ℓ) ∩ S_3 = (K_1∪ K_2) ∩ K_3 = {4}∈⋃_ℓ<3 2^K_ℓ = 2^K_1∪ 2^K_2. 
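Since the decomposability condition is purely combinatorial, it can be checked mechanically for small index collections. The sketch below (illustrative code, not part of the paper) searches over permutations for a witnessing order; it confirms the collection in the preceding lemma and, for contrast, reports that the consistent-but-empty system with K_1 = {1,2}, K_2 = {2,3}, K_3 = {1,3} from the earlier example is not decomposable.

```python
# Illustrative sketch: brute-force check of the decomposability condition for a
# small collection of index sets by searching over permutations of the collection.
from itertools import permutations

def decomposable_order(collection):
    """Return a witnessing permutation of the sets if decomposable, else None."""
    sets = [frozenset(K) for K in collection]
    for order in permutations(range(len(sets))):
        ok = True
        for m in range(1, len(sets)):          # the case m = 1 holds trivially
            union_prev = frozenset().union(*(sets[order[j]] for j in range(m)))
            inter = union_prev & sets[order[m]]
            # the intersection must lie in 2^{K_sigma(j)} for some earlier j
            if not any(inter <= sets[order[j]] for j in range(m)):
                ok = False
                break
        if ok:
            return [tuple(sorted(sets[i])) for i in order]
    return None

print(decomposable_order([{3, 4}, {1, 3}, {2, 4}]))   # collection of the preceding lemma: decomposable
print(decomposable_order([{1, 2}, {2, 3}, {1, 3}]))   # consistent-but-empty example: None (not decomposable)
```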
Let K := {K_1, K_2, K_3} where K_1 = {3, 4, 5}, K_2 = {1, 3, 5}, and K_3 = {2, 4, 5}. Then K is decomposable. When m = 1, the condition (<ref>) holds trivially. When m=2, (⋃_ℓ<2 K_ℓ) ∩ K_2 = K_1 ∩ K_2 = {3, 5}∈⋃_ℓ<2 2^K_ℓ = 2^K_1. When m = 3, (⋃_ℓ<3 K_ℓ) ∩ K_3 = (K_1∪ K_2) ∩ K_3 = {4, 5}∈⋃_ℓ<3 2^K_ℓ = 2^K_1∪ 2^K_2. Let K := {K_1, …, K_L+1} where K_1 = {L+1, …, 2L} and K_ℓ = {ℓ -1 , L+ℓ-1 } for 2 ≤ℓ≤ L+1. Then K is decomposable. When m = 1, the condition (<ref>) holds trivially. When 1 < m ≤ L+1, (⋃_ℓ<m K_ℓ) ∩ K_m = ⋃_ℓ<m(K_ℓ∩ K_m)= K_1∩ K_m∈ 2^K_1⊂⋃_ℓ<m 2^K_ℓ. This shows that the condition (<ref>) holds. Let K := {K_1, …, K_L+1}, where K_1 = {L+1, …, 2L+1} and K_ℓ+1 = {ℓ , L + ℓ , 2L+1} for 1 ≤ℓ≤ L. Then K is decomposable. When m = 1, the condition (<ref>) holds trivially. When 1 < m ≤ L+1, (⋃_ℓ<m K_ℓ) ∩ K_m = ⋃_ℓ<m (K_ℓ∩ K_m ) = (K_1∩ K_m) ∈ 2^K_1⊂⋃_ℓ<m 2^K_ℓ. This shows that the condition (<ref>) holds. § APPENDIX C: PROOFS OF MAIN RESULTS §.§ Proofs in §.§.§ Proof of The expressions of ℐ_D(δ_1, 0) and ℐ_D(0,δ_2) can be derived from ℐ_D(δ_1, δ_2) for δ_1, δ_2 >0 with appropriate modifications of the cost function. In particular, consider another cost function c_2(s_2, s_2^' ) = ∞1{ s_2 ≠ s_2^'} and the optimal transport distance K_2 associated with c_2. Define an uncertainty set Σ_D(δ_1, δ_2) depending on K_1 and K_2 as Σ_D(δ_1, δ_2) = {γ∈𝒫(𝒮_1 ×𝒮_2): K_1( γ_1, μ_1) ≤δ_1, K_2( γ_2, μ_2) ≤δ_2 }. Moreover, we define ℐ_D: ℝ^2_+ →ℝ as ℐ_D (δ_1, δ_2) = sup_γ∈Σ_D (δ_1, δ_2)∫_𝒱 g(s_1,s_2) dγ(s_1,s_2). We note K_2(μ, ν) = 0 if and only if μ = ν. For all δ_2 > 0, Σ_D(δ_1, δ_2) = Σ_D(δ_1, 0) and ℐ_D (δ_1, δ_2) = ℐ_D(δ_1, 0). Using the dual reformulation of ℐ_D on ℝ^2_++, we have ℐ_D(δ_1, 0) = ℐ_D (δ_1, δ_2) = inf _λ∈ℝ_+^2 [ ⟨λ, δ⟩+sup _ϖ∈Π(μ_1, μ_2)∫_𝒱 g_λ(s_1, s_2) d ϖ(s_1,s_2)], where g_λ(s_1, s_2) = sup _s_1^'∈𝒮_1, s_2^'∈𝒮_2 {g(s_1^', s^'_2)-λ_1 c_1 (s_1, s_1^') - λ_2 c_2 (s_2, s_2^') } = sup _s_1^'∈𝒮_1 {g(s_1^', s_2)-λ_1 c_1 (s_1, s_1^') } = g_λ, 1(s_1, s_2). Since g_λ, 1(s_1, s_2) is independent of λ_2, letting λ_2 = 0 yields ℐ_D(δ_1, 0) = inf _λ_1 ∈ℝ_+[λ_1 δ_1+sup _ϖ∈Π(μ_1, μ_2)∫_𝒱 g_λ, 1(v) d ϖ(v)]. Using the same reasoning, we can get the expression of ℐ_D(0,δ_2). In the rest of the proof, we show the dual reformulation of ℐ_D on ℝ^2_++. Let 𝒫_D denote the set of γ∈𝒫(𝒱) that satisfies K_1(μ_1, γ_1)< ∞, K_2(μ_2, γ_2) < ∞, and ∫_𝒱 g dγ > -∞. Taking the Legendre transform on ℐ yields that any λ∈ℝ_++^2, ℐ_D^⋆ (λ) := sup_δ∈ℝ_+^2 {ℐ_D(δ) - ⟨λ, δ⟩} =sup_δ∈ℝ_+^2 sup_γ∈Σ(δ) {∫_𝒱 g dγ - ⟨λ, δ⟩} = sup_δ∈ℝ_+^2 sup_γ∈𝒫(𝒱 ) {∫_𝒱 g dγ - ⟨λ, δ⟩: K_ℓ( μ_ℓ,γ_ℓ) ≤δ_ℓ, ∀ℓ∈ [2] } = sup_γ∈𝒫(𝒱 ) sup_δ∈ℝ_+^2 {∫_𝒱 g dγ - ⟨λ, δ⟩: K_ℓ( μ_ℓ,γ_ℓ) ≤δ_ℓ, ∀ℓ∈ [2] } = sup_γ∈𝒫_D{∫_𝒱 g dγ - λ_1 K_1(μ_1,γ_1)- λ_2 K_2(μ_2,γ_2 ) }_:= I_D, λ[γ] = sup_γ∈𝒫_D I_D, λ[γ]. We note that the expression above also holds for λ∈ℝ^2_+ ∖ℝ^2_++. Let 𝒢_D, λ be the set of all probability measures π on 𝒱×𝒱 such that ∫_𝒱×𝒱φ_λ d π is well-defined and the first and second marginals are μ_1 and μ_2.[To be more precise, π( (A_1 ×𝒮_2) ×𝒱) = μ_1(A_1) and π( (𝒮_1 × A_2) ×𝒱 ) = μ_2(A_2) for all sets A_1 ∈ℬ_𝒮_1 and A_2 ∈ℬ_𝒮_2. ] <Ref> implies ℐ_D^⋆ (λ) = sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ d π. By <Ref>, we have for all λ∈ℝ^2_+, ℐ_D^⋆(λ)= sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ d π = sup_π∈Γ̅∫_𝒱×𝒱φ_λ d π. where we write Γ̅= Γ( Π(μ_1, μ_2) , φ_λ) for simplicity. From <Ref>, ℐ_D is bounded from below, non-decreasing, and concave. As a result, ℐ_D< ∞ or ℐ_D = ∞ on δ∈ℝ_+^2. In the first case, by <Ref>, for all δ∈ℝ^2_+, ℐ_D(δ) = inf_λ∈ℝ^2_+{⟨λ, δ⟩ + sup _π∈Γ̅∫_𝒱×𝒱φ_λ d π} . 
In the second case, by definition ℐ_D^⋆(λ) = ∞ for all λ∈ℝ_+^2 and the above is also true. Moreover, Example 2 of <cit.> implies that φ_λ satisfies the interchangeability principle with respect to Π(μ_1, μ_2). So <Ref> implies that for all λ∈ℝ_++^2, sup_π∈Γ̅∫_𝒱×𝒱φ_λ d π = sup _γ∈Π(μ_1, μ_2 )∫_𝒱 g_λ(v) d γ(v), where g_λ(v) = sup _v^'∈𝒱φ_λ(v, v^'). This shows for all δ∈ℝ^2_++, ℐ_D(δ) = inf_λ∈ℝ^2_+{⟨λ, δ⟩ + sup _γ∈Π(μ_1, μ_2 )∫_𝒱 g_λ d γ} . If λ_1>0 and λ_2 >0, then sup_γ∈𝒫_D I_D, λ[γ] = sup_γ∈𝒫_Dsup_π∈Π(μ_1, μ_2, γ)∫_𝒱×𝒱φ_λ d π. Fix any ϵ >0 and γ∈𝒫_D. By the definition of 𝒫_D, we have K_ℓ (μ_ℓ, γ_ℓ)< ∞ and hence there is ν_ℓ∈Π(μ_ℓ, γ_ℓ) such that K_ℓ (μ_ℓ, γ_ℓ) ≥∫_𝒮_ℓ×𝒮_ℓ c_ℓ dν_ℓ - ϵ/(λ_1 + λ_2). Let K = {K_1, K_2, K_3 } with K_1 ={ 1,3 }, K_2={2, 4} and K_3={3, 4}. Since K is decomposable, then by <Ref> there is a measure π on 𝒮_1 ×𝒮_2 ×𝒮_1 ×𝒮_2 with marginals given by π_1,3 = ν_1, π_2,4 = ν_2 and π_3,4 = γ. Moreover, we note ∫_𝒱×𝒱 c_ℓ(s_ℓ, s_ℓ^') d π = ∫_𝒮_ℓ×𝒮_ℓ c_ℓ d ν_ℓ≤K_ℓ(μ_ℓ, γ_ℓ) + ϵ/(λ_1 + λ_2) < ∞. Now, we show the LHS is not bigger than the RHS. When I_D, λ[γ] =∞, provided K_ℓ(μ_ℓ, γ_ℓ) ∈ (0, ∞) for ℓ=1,2, we must have ∫_𝒱 g dγ = ∞. Then, it is apparent that ∫φ_λ d π = ∞ and hence I_D,λ[γ] ≤∫φ_λ d π + ϵ. When I_D, λ[γ] < ∞, then ∫_𝒱 g d γ < ∞. Therefore, the integral given by ∫_𝒱×𝒱φ_λ d π = ∫_𝒱 g d γ - ∫_𝒮_1×𝒮_1λ_1 c_1 d ν_1-∫_𝒮_2×𝒮_2λ_2 c_2 d ν_2 < ∞, is well-defined. The desired result follows from the estimate below ∫_𝒱×𝒱φ_λ d π≥∫_𝒱 g d γ - λ_1 K_1 (μ_1, γ_1) - λ_2 K_2 (μ_2, γ_2) - ϵ = I_D, λ[γ] - ϵ. Therefore, we have I_D, λ[γ] ≤∫_𝒱×𝒱φ_λ d π+ϵ. Since ϵ >0 and γ∈𝒫_D are arbitrary, we have sup _γ∈𝒫_D I_D, λ[γ] ≤sup _γ∈𝒫_Dsup _π∈Π(μ_1, μ_2, γ)∫_𝒱×𝒱φ_λ d π. Next, we prove that the reversed direction holds by showing that if γ∈𝒫_D, then I_D, λ[γ] ≥sup_π∈Π(μ_1, μ_2, γ)∫_𝒱×𝒱φ_λ d π. Fix γ∈𝒫_D. When ∫_𝒱 g d γ = ∞, I_D, λ[γ] = ∞ and then the proof is done. Next, when ∫_𝒱 g d γ < ∞, for any π∈Π(μ_1, μ_2, γ) such that ∫φ_λ d π is well-defined, I_D, λ[γ] = ∫_𝒱 g d γ - λ_1 K_1 (μ_1, γ_1 ) - λ_2 K_2 (μ_2, γ_2 ) ≥∫_𝒱 g(s_1^', s_2^') d π_3,4 - λ_1 ∫_𝒮_1 ×𝒮_1 c_1(s_1, s_1^') d π_1, 3 - λ_2 ∫_𝒮_2 ×𝒮_2 c_2(s_2, s_2^') d π_2, 4 = ∫_𝒱×𝒱φ_λ d π. With the convention that sup = - ∞, if the integral ∫φ_λ d π is not well-defined for all π∈Π(μ_1, μ_2, γ), then I_D, λ[γ] ≥sup_π∈Π(μ_1, μ_2, γ)∫_𝒱×𝒱φ_λ d π holds trivially. Otherwise, taking the supremum over π∈Π(μ_1, μ_2, γ) on the RHS of the inequality above yields I_D, λ[γ] ≥sup_π∈Π(μ_1, μ_2, γ)∫_𝒱×𝒱φ_λ d π. The desired result follows. If λ_1 > 0 and λ_2 >0, then sup _γ∈𝒫_Dsup _π∈Π(μ_1, μ_2, γ)∫_𝒱×𝒱φ_λ d π = sup_π∈𝒢_D, λ∫_𝒱×𝒱φ_λ d π. We divide the proof into the following two steps. The first step is to show that the LHS is less than or equal to the RHS. Fix any γ∈𝒫_D. If ∫_𝒱 g d γ=∞, from the proof of <Ref>, we can see that ∫_𝒱×𝒱φ_λ dπ = ∞ for some π∈Π(μ_1, μ_2, γ) and the LHS is ∞. So, the integral ∫_𝒱×𝒱φ_λ dπ is well-defined and π∈𝒢_D, λ. We must have sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ d π = ∞ and the statement of the lemma is true. Now suppose ∫_𝒱 g d γ < ∞ holds. For any π∈Π(μ_1, μ_2, γ), since ∫_𝒱×𝒱(λ_1 c_1+λ_2 c_2) d π≥ 0, the integral ∫_𝒱×𝒱φ_λ d π = ∫_𝒱 g dγ - ∫_𝒱×𝒱(λ_1 c_1+λ_2 c_2) d π < ∞, is well-defined. This shows π∈𝒢_D, λ, and we have ∫_𝒱×𝒱φ_λ d π≤sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ d π. Taking the supremum over π∈Π(μ_1, μ_2, γ) yields sup_π∈Π(μ_1,μ_2, γ)∫_𝒱×𝒱φ_λ d π≤sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ d π. Thus, we showed that the inequality above holds for all γ∈𝒫_D and this ends the first step. The second step is to show that the LHS is greater than or equal to the RHS. 
Fix any π∈𝒢_D, λ. It suffices to show sup _γ∈𝒫_Dsup _π∈Π(μ_1, μ_2, γ)∫_𝒱×𝒱φ_λ d π≥∫_𝒱×𝒱φ_λ d π. When ∫_𝒱×𝒱φ_λ d π > - ∞, we have ∫ (λ_1 c_1 + λ_2 c_2) d π > -∞ and hence ∫_𝒱 g d π_3,4>-∞. It follows that π∈Π(μ_1, μ_2, π_3,4) and ∫_𝒱×𝒱φ_λ d π≤sup _π∈Π(μ_1, μ_2, γ)∫_𝒱×𝒱φ_λ d π≤sup _γ∈𝒫_Dsup _π∈Π(μ_1, μ_2, γ)∫_𝒱×𝒱φ_λ d π. When ∫_𝒱×𝒱φ_λ d π=-∞, the inequality (<ref>) holds trivially. For all λ∈ℝ_+^2, one has ℐ_D^⋆ (λ) = sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ d π. We divide the proof into the following four cases. When λ_1, λ_2 >0, the equality (1) follows from <Ref>. When λ_1 = λ_2 =0, we show that equality (1) holds. Let A_ℓ = { (v, v^') ∈𝒱×𝒱: c_ℓ(s_ℓ, s_ℓ^') < ∞}), and for simplicity, we write g: (v, v^') ↦ g(v^') and c_ℓ: (v, v^') ↦ c_ℓ(s_ℓ, s_ℓ^') for ℓ=1,2. By the convention, 0 c_ℓ = 0,π-a.s. if and only if c_ℓ < ∞,π-a.s., it follows that sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ dπ = sup{∫_𝒱×𝒱 g(v^') d π(v, v^') : π∈𝒢_D, λ, π(A_1 ∩ A_2) = 1, } ≥sup{∫_𝒱×𝒱 g d π : π∈𝒢_D, λ, ∫ c_ℓ dπ < ∞ for ℓ =1,2 } ≥sup{∫_𝒱 g d γ : γ∈𝒫_D}, where the last inequality holds since for all π∈𝒢_D, λ with ∫ c_ℓ d π<∞ for ℓ=1,2, the marginal π_3,4∈𝒫_D, i.e. π(𝒱×·) ∈𝒫_D. On the other hand, for any π∈𝒢_D, λ with π(A_1 ∩ A_2) =1, define a measure π_n on 𝒱×𝒱 as π_n(·) = π(·∩ (A_1n∩ A_2n ) ) /π(A_1n∩ A_2n), where A_ℓ n = { (v, v^') ∈𝒱×𝒱: c_ℓ (s_ℓ, s_ℓ^')<n } for ℓ = 1,2. Since c_ℓ < n, π_n-a.s. for ℓ=1,2, then the second marginal of π_n is in 𝒫_D.[To be more precise, the measure π_n(𝒱×·) is in 𝒫_D.] By the monotone convergence theorem, lim_n →∞∫_𝒱×𝒱 g^+ 1_A_1n∩ A_2n d π = ∫_𝒱×𝒱 g^+ d π, and lim_n →∞∫_𝒱×𝒱 g^- 1_A_1n∩ A_2n d π = ∫_𝒱×𝒱 g^- d π. Moreover, since π(A_1n∩ A_2n) → 1, lim_n →∞∫_𝒱×𝒱 g^+ d π_n = lim_n →∞∫_𝒱×𝒱 g^+1_A_1n∩ A_2n d π/π(A_1n∩ A_2n) = ∫_𝒱×𝒱 g^+ d π. Similarly, lim_n →∞∫_𝒱×𝒱 g^+ d π_n = ∫_𝒱×𝒱 g^- d π. Since ∫ g d π is well-defined, we can exclude the case ∫ g^+ d π = ∫ g^- d π = ∞. Therefore, ∫_𝒱×𝒱 g d π = lim_n →∞∫_𝒱×𝒱 g d π_n ≤sup_γ∈𝒫_D∫_𝒱 g d γ. This shows sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ d π = sup_π∈𝒫_D∫_𝒱×𝒱 g d π and hence equality (1) holds for λ_1 = λ_2 = 0. Next, we show that equality (<ref>) when λ_1 > 0, λ_2 =0. By definition, the integral ∫φ_λ d π is well-defined for all π∈𝒢_D, λ. If ∫φ_λ dπ = ∞ for some π∈𝒢_D, λ, then sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ d π≥sup_γ∈𝒫_D∫ g d γ. Without loss of generality, assume ∫φ_λ dπ < ∞ for all π∈𝒢_D, λ. It follows that λ_1 ∫ c_1 dπ≤∫ (g^- + λ_1 c_1 + λ_2 c_2) d π < ∞, and ∫ c_1 dπ < ∞ and π(A_1) = 1. By convention, 0 × c_2 = 0, π-a.s. if and only if 0 × c_2 < ∞, π-a.s. We find that sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ dπ = sup{∫_𝒱×𝒱 g(v^') d π(v, v^') : π∈𝒢_D, λ, π( A_2) = 1 } = sup{∫_𝒱×𝒱 g(v^') d π(v, v^') : π∈𝒢_D, λ, π( A_1 ∩ A_2) = 1 } ≥sup_γ∈𝒫_D∫_𝒱 g d γ . On the other hand, for any π∈𝒢_D, λ with π(A_2)=1, define a measure π_n^' on 𝒱×𝒱 as π_n(·)=π(·∩(A_1 n))/π(A_1 n). Using a similar argument as shown above, we can show ∫_𝒱×𝒮 g d π≤sup _γ∈𝒫_D∫_𝒱 g d γ and hence equality (<ref>) holds when λ_1 >0 and λ_2=0. In the same way, we can show that equality (<ref>) when λ_1 = 0, λ_2 > 0. Let λ∈ℝ^2_+. If φ_λ is interchangeable with respect to Π(μ_1, μ_2), then sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ d π = sup_π∈Γ( Π(μ_1, μ_2) , φ_λ) ∫_𝒱×𝒱φ_λ d π. For any π∈𝒢_D, λ, it is obvious that π_1,2∈Π(μ_1, μ_2) and hence π∈Γ(Π(μ_1, μ_2), φ_λ). This shows 𝒢_D, λ⊂Γ (Π (μ_1, μ_2), φ_λ ) and the LHS is less than or equal to the RHS. Next, we show the LHS is not less than the RHS. We adopt the convention that the supremum of an empty set is -∞. If ∫φ_λ dπ is not well-defined for all π∈Γ(Π(μ_1, μ_2), φ_λ), then the proof is done trivially. 
Now let π be any measure in Γ (Π (μ_1, μ_2), φ_λ ) for which integral ∫_𝒱×𝒱φ_λ d π is well-defined. To finish the proof, it suffices to show sup _π∈𝒢_D, λ∫_𝒱×𝒱φ_λ(v, v^' ) d π (v, v^' ) ≥∫_𝒱×𝒱φ_λ (v , v^') d π (v, v^' ). When ∫_𝒱×𝒱φ_λ , d π = - ∞, inequality (<ref>) holds trivially. Now suppose ∫_𝒱×𝒱φ_λ d π = ∞. Because c_1, c_2 ≥ 0, we have ∫_𝒱×𝒱 g(v^') d π(v,v^') = ∞ and is well-defined. We note φ_λ = g^+ - g^- - (λ_1 c_1 + λ_2 c_2) and hence φ_λ^+ = g^+ and φ_λ^- = g^- + (λ_1 c_1 + λ_2 c_2). Since ∫_𝒱×𝒱φ_λ d π is well-defined, then ∫_𝒱×𝒱(λ_1 c_1+λ_2 c_2) dπ≤∫_𝒱×𝒱φ^-_λ d π < ∞. This shows that π∈𝒢_D, λ and inequality (<ref>) holds. Next, suppose ∫_𝒱×𝒱φ_λ d π < ∞. Given that the integral is well-defined, using the same reasoning as demonstrated above, we have ∫_𝒱×𝒱 g(v^') d π(v, v^') < ∞ and ∫_𝒱×𝒱 (λ_1 c_1+λ_2 c_2) d π < ∞. So π∈𝒢_D, λ and the proof is done. §.§.§ Proof of We provide only the derivation of the upper bound ℐ_D(δ) = sup_γ∈Σ_D(δ)∫1(s_1 + s_2 ≤ z) d γ(s_1, s_2). We can derive the expression of the lower bound inf_γ∈Σ_D∫1(s_1 + s_2 ≤ z) d γ(s_1, s_2) by the similar reasoning and the following identity. inf_γ∈Σ_D(δ)∫1(s_1 + s_2 ≤ z) d γ(s_1, s_2) = 1 - sup_γ∈Σ_D(δ)∫1({s_1 + s_2 > z}) d γ(s_1, s_2). When λ_1 = 0 or λ_2 = 0, g_λ(s_1 , s_2) = 0 for all (s_1,s_2) ∈𝒮_1 ×𝒮_2. When λ_1 ≠ 0 and λ_2 ≠ 0, we have g_λ(s_1, s_2) = sup_s_1', s_2'[ 1(s_1' + s_2' ≤ z) - λ_1 |s_1 - s_1'|^2 -λ_2 |s_2 - s_2'|^2 ] = ( 1- inf_s_1' + s_2' ≤ z[ λ_1 |s_1 - s_1'|^2 + λ_2 |s_2 - s_2'|^2 ] )^+ = 1 if s_1 + s_2 ≤ z [1 - λ_1λ_2 (s_1 + s_2 - z)^2/λ_1 + λ_2 ]^+ if {s_1 + s_2 > z}. By some simple algebra, we have. g_λ, 1(s_1, s_2) = sup_s_1'[1(s_1' + s_2 ≤ z) - λ_1 |s_1 - s_1'|^2 ] = 1 if s_1 + s_2 ≤ z, (1 - λ_1 | s_1 + s_2 - z |^2 )^+ if {s_1 + s_2 > z}, and g_λ, 2(s_1, s_2) = sup_s_2'[1(s_1 + s_2' ≤ z) - λ_2 | s_2 - s_2'|^2 ] = 1 if s_1 + s_2 ≤ z, (1 - λ_2 | s_1 + s_2 - z |^2 )^+ if {s_1 + s_2 > z}. By applying <Ref>, we have that for each δ = (δ_1, δ_2) ∈ℝ_++^2, ℐ_D(δ) = inf_λ∈ℝ_+^2[⟨λ, δ⟩ + sup_π∈Π(μ_1, μ_2)∫ g_λ(s_1, s_2) d π(s_1, s_2) ]. However, in the rest of proof, we show for all δ = (δ_1, δ_2) ∈ℝ^2_+, ℐ_D (δ) = inf _λ∈ℝ_+^2sup _π∈Π(μ_1, μ_2)[⟨λ, δ⟩+∫_𝒱 g_λ d π] = sup _π∈Π(μ_1, μ_2)inf _λ∈ℝ_+^2[⟨λ, δ⟩+∫_𝒱 g_λ d π] . Define a function F: Π(μ_1, μ_2) ×ℝ^2_+ →ℝ as F: ( π, λ ) ↦ - ⟨λ, δ⟩ - ∫_𝒮_1 ×𝒮_2 g_λ d π. We note that for any (s_1, s_2), the function λ↦ g_λ(s_1, s_2) is convex since it is the supremum of a set of affine functions in λ. As a result, λ↦ - ∫ g_λ d π is concave for each fixed π. For any λ∈ℝ^2_+, the function π↦ F(π, λ) is continuous due to continuous and bounded g_λ and Portmanteau's theorem. Moreover, it is easy to verify that π↦ F(π, λ) is convex. By <cit.>' minimax theorem, we have inf_π∈Π(μ_1, μ_2)sup_λ∈ℝ^2_+ F(π, λ) =sup_λ∈ℝ^2_+inf_π∈Π(μ_1, μ_2) F(π, λ). As a result, we have for all δ = (δ_1, δ_2) ∈ℝ^2_++, ℐ_D(δ) = inf_λ∈ℝ^2_+sup_π∈Π(μ_1, μ_2) - F(π, λ) = - sup_λ∈ℝ^2_+inf_π∈Π(μ_1, μ_2) F(π, λ) = - inf_π∈Π(μ_1, μ_2)sup_λ∈ℝ^2_+ F(π, λ) = sup_π∈Π(μ_1, μ_2)inf_λ∈ℝ^2_+ - F(π, λ) = sup _π∈Π(μ_1, μ_2)inf _λ∈ℝ_+^2[⟨λ, δ⟩+ ∫_𝒱 g_λ d π]. Using the same reasoning as above, the application of <cit.> to ℐ_D(δ_1, 0) yields ℐ_D(δ_1, 0) = sup _π∈Π(μ_1, μ_2)inf _λ_1 ∈ℝ_+[ λ_1 δ_1 + ∫_𝒱 g_λ, 1 d π]. Since g_λ↓ g_λ, 1 as λ_2 ↑∞, the monotone convergence theorem implies inf _λ∈ℝ^2_+[ ⟨λ , (δ_1, 0 ) ⟩ + ∫_𝒱 g_λ d π] = inf _λ_1 ∈ℝ_+[ λ_1 δ_1 + inf_λ_2 ∈ℝ_+∫_𝒱 g_λ d π] = inf _λ_1 ∈ℝ_+[ λ_1 δ_1 + lim_λ_2 →∞∫_𝒱 g_λ d π] = inf _λ_1 ∈ℝ_+[ λ_1 δ_1 + ∫ g_λ, 1 d π]. 
Taking the supremum over π∈Π(μ_1, μ_2) on both sides yields that for δ_1 > 0, ℐ_D(δ_1, 0) = sup_π∈Π(μ_1, μ_2) inf _λ_1 ∈ℝ_+[ λ_1 δ_1 + ∫ g_λ, 1 d π] = sup_π∈Π(μ_1, μ_2) inf _λ∈ℝ^2_+[ ⟨λ , (δ_1, 0 ) ⟩ + ∫_𝒱 g_λ d π]. Similarly, we can show that for δ_2 > 0, ℐ_D(0, δ_2)=sup _π∈Π(μ_1, μ_2)inf _λ_2 ∈ℝ_+[λ_2 δ_2+∫_𝒱 g_λ, 2 d π]= sup_π∈Π(μ_1, μ_2) inf _λ∈ℝ^2_+[ ⟨λ , ( 0, δ_2 ) ⟩ + ∫_𝒱 g_λ d π] . In addition, when δ_1 = δ_2 = 0, we note g_λ↓ g as λ_1, λ_2 ↑∞ and the monotone convergence theorem implies inf_λ∈ℝ^2_+∫ g_λ d π = ∫ g d π and ℐ_D(0) = sup_π∈Π(μ_1, μ_2)inf_λ∈ℝ^2_+ ∫ g_λ d π =inf_λ∈ℝ^2_+sup_π∈Π(μ_1, μ_2)∫ g_λ d π = sup_π∈Π(μ_1, μ_2)∫ g d π. This completes the proof that for all δ = (δ_1, δ_2) ∈ℝ^2_+ ℐ_D (δ) = sup _π∈Π(μ_1, μ_2)inf _λ∈ℝ_+^2[⟨λ, δ⟩+∫_𝒱 g_λ d π] = inf _λ∈ℝ_+^2sup _π∈Π(μ_1, μ_2)[⟨λ, δ⟩+∫_𝒱 g_λ d π]. §.§.§ Proof of The expressions of ℐ(δ_1, 0) and ℐ(0,δ_2) can be derived from ℐ(δ_1, δ_2) for δ_1, δ_2 >0 with appropriate modifications of the cost function. In particular, consider another cost function c_2(s_2, s_2^' ) = ∞1{ s_2 ≠ s_2^'} and the optimal transport distance K_2 associated with c_2. Define an uncertainty set Σ(δ_1, δ_2) depending on K_1 and K_2 as Σ(δ_1, δ_2) = {γ∈𝒫(𝒮): K_1( γ_13, μ_13) ≤δ_1, K_2( γ_23, μ_23) ≤δ_2 }. Moreover, we define ℐ: ℝ^2_+ →ℝ as ℐ (δ_1, δ_2) = sup_γ∈Σ(δ_1, δ_2)∫_𝒱 f(v) dγ(v). We note K_2(μ, ν) = 0 if and only if μ = ν. So, for all δ_2 > 0, Σ(δ_1, δ_2) = Σ(δ_1, 0) and ℐ (δ_1, δ_2) = ℐ(δ_1, 0). Using the dual reformulation of ℐ on ℝ^2_++, we have ℐ(δ_1, 0) = ℐ (δ_1, δ_2) = inf _λ_1 ∈ℝ_+[⟨λ, δ⟩ +sup _ϖ∈Π(μ_13, μ_23)∫_𝒱 f_λ(s_1, s_2) d ϖ(s_1,s_2) ] , where f_λ(s_1, s_2) = sup _ (y_1^',y_2^', x^') ∈𝒮{ f(y_1^', y_2^', x^')-λ_1 c_1 (s_1, (y_1^', x^') ) - λ_2 c_2 (s_2, (y_2^', x^') ) } = sup _s_1^'∈𝒮_1 {f(y_1^', y_2, x_2)-λ_1 c_1 (s_1, ( y_1^', x_2) ) } = f_λ, 1(s_1, s_2). Since f_λ, 1(s_1, s_2) is independent of λ_2, letting λ_2 = 0 yields ℐ(δ_1, 0) = inf _λ_1 ∈ℝ_+[λ_1 δ_1+sup _ϖ∈Π(μ_13, μ_23)∫_𝒱 f_λ, 1(v) d ϖ(v)]. Using the same reasoning, we can get the expression of ℐ(0,δ_2). In the rest of the proof, we show that the dual reformulation of ℐ on ℝ^2_++ holds. Let 𝒫̅ denote the set of γ∈𝒫(𝒮) such that K_ℓ(μ_ℓ 3, γ_ℓ 3) < ∞ for ℓ =1,2 and ∫_𝒮 f dγ > - ∞. Taking the Legendre transform on ℐ gives ℐ^⋆(λ) := sup _δ∈ℝ_+^2{ℐ(δ)-⟨λ, δ⟩} = sup _δ∈ℝ_+^2sup _γ∈Σ(δ){∫_𝒮 f d γ-⟨λ, δ⟩} = sup_δ∈ℝ_+^2 sup_γ∈𝒫̅{∫_𝒮 f d γ-⟨λ, δ⟩: K_ℓ (μ_ℓ 3, γ_ℓ3) ≤δ_ℓ, ∀ℓ∈ [2] } = sup_γ∈𝒫̅sup_δ∈ℝ_+^2 {∫_𝒮 f d γ-⟨λ, δ⟩: K_ℓ (μ_ℓ 3, γ_ℓ3) ≤δ_ℓ, ∀ℓ∈ [2] } = sup_γ∈𝒫̅{∫_𝒮 f d γ-λ_1 K_1 (μ_13, γ_23)-λ_2 K_2(μ_23, γ_23) }_: = I_λ[γ] = sup _γ∈𝒫̅ I_λ[γ]. We note that the expression above still holds when λ∈ℝ_+^2 ∖ℝ^2_++. Recall the definition of the function ϕ_λ: 𝒱×𝒮→ℝ. Let 𝒢_λ denote the set of π∈𝒫(𝒱×𝒮) such that ∫_𝒱×𝒮ϕ_λ d π is well-defined and the first and second marginals coincides with μ_13 and μ_23 respectively.[To be more precise, π( (A_1 ×𝒮_2) ×𝒮 ) = μ_13(A_1) and π( ( 𝒮_1× A_2 ) ×𝒮 ) = μ_23(A_2) for all Borel sets A_1 ∈ℬ_𝒮_1 and A_2 ∈ℬ_𝒮_2.] <Ref> implies ℐ^⋆(λ) = sup_π∈𝒢_λ∫_𝒱×𝒮ϕ_λ dπ. By <Ref>, we have for all λ∈ℝ^2_+, ℐ^⋆(λ) =sup _π∈Γ(Π(μ_13, μ_23), ϕ_λ)∫_𝒱×𝒱ϕ_λ d π. Example 2 of <cit.> implies that ϕ_λ: 𝒱×𝒮→ℝ satisfies the interchangeability principle with respect to Π(μ_13, μ_23 ). As a result, <Ref> implies that for all λ∈ℝ_+^2, ℐ^⋆(λ) = sup _γ∈Π(μ_1, μ_2)∫_𝒱 f_λ(v) d γ(v), where f_λ(v) = sup_s ∈𝒮ϕ_λ (v, s). From <Ref>, ℐ is bounded from below, non-decreasing, and concave. As a result, ℐ (δ) = ∞ for all δ∈ℝ^2_+ or ℐ(δ) < ∞ for all δ∈ℝ^2_+. 
In the first case, ℐ^⋆ = ∞ on ℝ^2_+ by definition and hence we have ℐ(δ) = inf _λ∈ℝ_+^2{⟨λ, δ⟩+ℐ^⋆(λ)} = ∞. For the second case, by <Ref>, for all δ∈ℝ^2_++, ℐ(δ) = inf_λ∈ℝ^2_+{⟨λ, δ⟩ + ℐ^⋆(λ) } = inf_λ∈ℝ^2_+{⟨λ, δ⟩ + sup _γ∈Π(μ_1, μ_2)∫_𝒱 f_λ(v) d γ(v) }, and the proof is complete. If λ_1 >0 and λ_2 >0, then sup _γ∈𝒫̅ I_λ[γ]=sup _γ∈𝒫̅sup _π∈Π(μ_13, μ_23, γ) ∫_𝒱×𝒮ϕ_λ (v, s^' ) d π (v, s^') . The proof is almost identical to that of <Ref>, so we only give the sketch. For notational convenience, we write c_ℓ: (s_1, s_2, y_1, y_2, x) ↦ c_ℓ( s_ℓ, (y_ℓ, x) ) for ℓ =1,2 and f: (s_1, s_2, s^') ↦ f(s^'). Fix any ϵ >0 and γ∈𝒫̅. Let K = { K_1, K_2, K_3} with K_1 = {3,4,5}, K_2 = {1,3,5 } and K_3 = { 2,4,5 } and we note that K is decomposable. By <Ref>, there is a π∈Π(μ_13, μ_23, γ ) satisfying I_λ[γ] ≤∫_𝒱×𝒮ϕ_λ d π+ϵ. Since ϵ >0 and γ∈𝒫̅ are arbitrary, this shows LHS ≤ RHS. The proof of LHS ≥ RHS is identical to the proof of <Ref>. If λ_1 > 0 and λ_2 >0, then sup _γ∈𝒫̅sup _π∈Π(μ_1, μ_2, γ)∫_𝒱×𝒮ϕ_λ d π = sup_π∈𝒢_λ∫_𝒱×𝒮ϕ_λ d π. The proof is the same as that of <Ref>. For all λ∈ℝ_+^2, one has ℐ^⋆(λ)=sup _π∈𝒢_λ∫_𝒱×𝒮ϕ_λ d π. The proof is almost the same as <Ref> as long as we replace g with f, φ_λ with ϕ_λ, A_ℓ with B_ℓ, A_ℓ n with B_ℓ n and 𝒫_D with 𝒫̅, where B_ℓ = { ( (s_1,s_2), (y_1,y_2,x) ) ∈𝒱×𝒮: c_ℓ(s_ℓ, (y_ℓ,x) ) < ∞}, and B_ℓ n = { ( (s_1,s_2), (y_1,y_2,x) ) ∈𝒱×𝒮: c_ℓ(s_ℓ, (y_ℓ,x) ) < n }, for ℓ = 1,2. Let λ∈ℝ^2_+. If ϕ_λ: 𝒱×𝒮→ℝ is interchangeable with respect to Π(μ_1, μ_2), then sup _π∈𝒢_λ∫_𝒱×𝒮ϕ_λ d π = sup_π∈Γ( Π(μ_1, μ_2) , ϕ_λ) ∫_𝒱×𝒮ϕ_λ d π The proof is the same as <Ref> . §.§ Proofs in §.§.§ Proof of First, assuming that condition (<ref>) does not hold, we show ℐ_D(δ) = ∞. Fix any λ = (λ_1, λ_2) ∈ℝ^2_+ and v = (s_1, s_2) ∈𝒱. For any B ≥λ_1 ∨λ_2, there is v^' = (s_1^', s_2^') ∈𝒱 such that g(s_1^', s_2^' ) > B [ 1 + d_𝒮_1 (s_1, s_1^' )^p_1+ d_𝒮_2 (s_2, s_2^' )^p_2], and hence φ_λ (v, v^') =g (s_1^', s_2^' ) - λ_1 d_𝒮_1 (s_1, s_1^')^p_1 -λ_2 d_𝒮_2 (s_2, s_2^' )^p_2 > B [ 1 + d_𝒮_1 (s_1, s_1^' )^p_1+ d_𝒮_2 (s_2, s_2^' )^p_1] - λ_1 d_𝒮_1 (s_1, s_1^')^p_1 -λ_2 d_𝒮_2 (s_2, s_2^' )^p_2 ≥ B + (B - λ_1) d_𝒮_1 (s_1, s_1^')^p_1 + (B- λ_2) d_𝒮_2 (s_2, s_2^' )^p_2≥ B. This shows that for all λ∈ℝ^2_+ and B large enough, we have g_λ(v) = sup _v^'∈𝒱φ_λ(v, v^') ≥ B for all v ∈𝒱. Therefore, by <Ref>, we have ℐ_D(δ) ≥sup _π∈Π(μ_1, μ_2 )∫_𝒱 g_λ(v) d π (v) ≥ B, for all B large enough. As a result, ℐ_D(δ) = ∞. Conversely, assuming that the growth condition (<ref>) holds, we show ℐ_D(δ)< ∞. For all π∈Σ_D(δ), ∫_𝒱 f(v) dπ(v) ≤∫_𝒮_1 ×𝒮_2 M [ 1 + d_𝒮_1 (s_1^⋆, s_1)^p_1+ d_𝒮_2 (s_2^⋆, s_2 )^p_2] d π(s_1, s_2) = M + M W_p_1 ( π_1 , δ_s_1^⋆ )^p_1 + M W_p_2 ( π_2 , δ_s_2^⋆ )^p_2 ≤ M + ∑_j=1^2 M [ W_p_j ( π_j , μ_j ) + W_p_j ( μ_j, δ_s_j^⋆ ) ]^p_j < ∞, where π_j denotes the marginal measure of π on 𝒮_j and δ_s_j^⋆ denotes the Dirac measure at s_j^⋆∈𝒮_j. The last step follows from μ_j ∈𝒫_p_j(𝒮_j) for j=1,2 and π∈Σ_D(δ), i.e., W_p_j ( π_j , μ_j )^p_j≤δ_j for j=1, 2. §.§.§ Proof of First, we assume condition (<ref>) does not hold and aim to show ℐ(δ) = ∞. Fix any λ = (λ_1, λ_2) ∈ℝ_+^2. For any v= (s_1, s_2) ∈𝒱 and B ≥λ_1 ∨λ_2, there exists s^' = (y_1^', y_2^', x^') such that f(s^') ≥ B [ 1 + d_𝒮_1 (s_1, s_1^' )^p_1+ d_𝒮_2 (s_2, s_2^' )^p_2]. Therefore, ϕ_λ (v, s^') =f (s^')-λ_1 d_𝒮_1 (s_1, s_1^')^p_1 -λ_2 d_𝒮_2 (s_2, s_2^')^p_2 ≥ B + (B - λ_1) d_𝒮_1 (s_1, s_1^' )^p_1 + (B - λ_2) d_𝒮_2 (s_2, s_2^' )^p_2≥ B. As a result, f_λ(v) = sup_s^'∈𝒮ϕ_λ (v, s^') ≥ B for all v∈𝒱 and all B large enough. 
Since B >0 is arbitrary, we must have sup _ϖ∈Π(μ_13, μ_23)∫_𝒱 f_λ(v) d ϖ(v) =∞. By <Ref>, we have ℐ(δ) = ∞. Conversely, we show that the condition (<ref>) implies ℐ(δ)< ∞. For any γ∈Σ(δ), ∫_𝒮 f(s) d γ (s) ≤∫_𝒮 M [ 1 +d_𝒮_1 (s_1^⋆, s_1)^p_1+d_𝒮_2 (s_2^⋆, s_2)^p_2] d γ (s) ≤ M + M W_p_1( δ_s_1^⋆, γ_13 )^p_1 + M W_p_2( δ_s_2^⋆, γ_23 )^p_2 ≤ M + ∑_j=1^2 M [ W_p_j( δ_s_j^⋆, μ_j3 ) + W_p_j( μ_j3, γ_j3 ) ]^p_j < ∞, where γ_j3 is the marginal measure of γ on 𝒮_j = 𝒴_j ×𝒳 and δ_s_j^⋆ is the Dirac measure concentrated at {s_j^⋆}.The last step follow γ∈Σ(δ) and μ_j3∈𝒫_p_j(𝒮_j) for j=1, 2. §.§.§ Proof of In this section, we first prove the weak compactness of Σ_D(δ) for all δ∈ℝ_+^2 when 𝒮_1 and 𝒮_2 are both proper and c_j = d^p_j_𝒮_j for some p_j ≥ 1. As a result, K_j =W_p_j^p_j and the set Σ_D(δ ) can be written as Σ_D(δ) := {γ∈𝒫(𝒮_1 ×𝒮_2): W_p_1(γ_1, μ_1) ≤δ_1^1/p_1, W_p_2(γ_2, μ_2) ≤δ_2^1/p_2}. For any Polish metric space 𝒳, let B_𝒫_p(𝒳 )( μ, δ) := {γ∈𝒫(𝒳) : W_p(μ, γ) ≤δ} denote the ball centered at μ in Wasserstein space 𝒫_p(𝒳). When there is no ambiguity we will abbreviate this notation by referring to B_p( μ, δ). Suppose <Ref> hold. Then, Σ_D(δ) is weakly compact. Theorem 1 of <cit.> implies B_p( μ, δ) weakly compact whenever μ has a finite p-th moment. As a result, the set Σ_D(δ) can be written as Σ_D(δ) = Π(ℬ_1,ℬ_2 ), where ℬ_1 = B_p_1( μ_1, δ_1^1/p_1) and ℬ_2 = B_p_2 ( μ_2, δ_2^1/p_2). Since ℬ_1 and ℬ_2 are weakly compact in 𝒫(𝒮_1) and 𝒫(𝒮_1), respectively, then they are uniformly tight by Prokhorov’s theorem. By Lemma 4.4 of <cit.>. Σ_D(δ) is tight in 𝒫(𝒮_1 ×𝒮_2). By Prokhorov’s theorem again, Σ_D(δ) has a compact closure under the topology of weak convergence. To show the weakly compactness of Σ_D(δ), it suffices to show it is closed. Let π^n ∈Σ_D(δ) ≡Π(ℬ_1,ℬ_2) be a sequence converging weakly to π^∞∈𝒫(𝒮_1 ×𝒮_2). We have W_p_1(π^n_1, μ_1) ≤δ_1^1 / p_1 and W_p_2(π^n_2, μ_2) ≤δ_1^1 / p_2. Let π^n_j denote the marginal distribution of π^n on 𝒮_j. For any open U_1 in 𝒮_1, the Portmanteau theorem implies lim inf_n →∞π^n_1(U_1) = lim inf_n →∞π^n(U_1 ×𝒮_2) ≥π^∞(U_1 ×𝒮_2) = π^∞_1(U_1). This shows π^n_1 weakly converges to π^∞_1. Moreover, W_p_1(π_1^∞, μ_1 ) ≤δ_1^1 / p_1 can be seen from weakly closedness of ℬ_1. Using the identical argument, we can show π^n_2 weakly converges to π^∞_2 and W_p_2 (π_2^∞, μ_2 ) ≤δ_1^1 / p_2. This shows π^∞∈W_p_2(π_2^n, μ_2) ≤δ_1^1 / p_2 and hence Σ_D(δ) is weakly closed. The weak compactness of Σ_D(δ) does not depend on the functional forms of metrics d_𝒮_1 and d_𝒮_2. Essentially, the topological properties of 𝒮_1 and 𝒮_2, mainly properness, determines the weak compactness of Σ_D(δ). Since <Ref> implies that Σ_D(δ) is weakly compact, by Weierstrass’ theorem, it suffices to show π↦∫_𝒱 g dπ is weakly upper semi-continuous. Let {π^k }_k=1^∞ be any sequence in Σ_D(δ) that weakly converges to π^∞∈Σ_D(δ), we show lim sup_n →∞∫_𝒱 g dπ^k ≤∫_𝒱 g dπ^∞. For any ρ >0, define an auxiliary function f_ρ: 𝒱→ℝ as g_ρ (v) = f(v) ∧[ M (1+ρ^p_0^'+ρ^p_1^' ) ]. Let A_1 = { (s_1, s_2) ∈𝒱 :d_𝒮_1(s_1^⋆, s_1) ≥ρ} and A_2 = { (s_1, s_2) ∈𝒱 : d_𝒮_2(s_2^⋆,s_2) ≥ρ}. It is easy to verify that for all v ∈𝒱, | g(v)-g_ρ(v)| ≤ M [d_𝒮_1 ( s_1^⋆, s_1)^p_1^'+d_𝒮_2 (s_2^⋆, s_2)^p_2^'] if v ∈ A_1 ∩ A_2 , M d_𝒮_1 (s_1^⋆, s_1)^p_1^' if v ∈ A_1 ∩ A_2^c, M d_𝒮_2 ( s^⋆_2,s_2)^p_2^' if v ∈ A_1^c ∩ A_2 , 0 otherwise. For any π∈Σ_D(δ), we have | ∫_𝒱 g dπ - ∫_𝒱 g_ρ dπ| ≤∫_𝒱 |g - g_ρ| d π ≤∫_A_1 ∩ A_2 |g - g_ρ| d π + ∫_A_1 ∩ A_2^c |g- g_ρ| d π + ∫_A_1^c ∩ A_2 |g - g_ρ| d π . 
By Lemma 1 in <cit.>, there exists B > 0 such that W_p_j (π_j, δ_s_j^⋆)^p_j≤ B for j = 1, 2 and all π∈Σ_D(δ), where π_j is the marginal of π on 𝒮_j and δ_s_j^⋆ is a Dirac measure at {s_j^⋆}. Therefore, we have ∫_A_1 ∩ A_2^c |g - g_ρ| d π ≤ M ∫_A_1 ∩ A_2^c d_𝒮_1(s_1^⋆, s_1)^p_1^' d π≤ M ρ^p_1 - p_1^'∫_A_1 ∩ A_2^c d_𝒮_1 (s_1, s^⋆_1)^p_1 dπ ≤ M ρ^p_1 - p_1^' W_p_1( π_1, δ_s_1^⋆ )^p_1≤ B ρ^p_1^' - p_1 . Similarly, we can show ∫_A_1^c ∩ A_2 |g - g_ρ| d π≤ B ρ^p_2^' - p_2 and ∫_A_1 ∩ A_2 |g - g_ρ| d π ≤∫_A_1 ∩ A_2 M [d_𝒮_1 (s^⋆_1, s_1)^p_1^'+d_𝒮_2 (s_2, s^⋆_1)^p_2^'] dπ(s_1, s_2) ≤ B (ρ^p_1^' - p_1 + ρ^ p_2^' - p_2 ). Therefore, we have for all π∈Σ_D(δ), | ∫_𝒱 g dπ - ∫_𝒱 g_ρ dπ| ≤∫_𝒱|g-g_ρ| d π≤ 2 B (ρ^p_1^' - p_1 + ρ^ p_2^' - p_2 ). For any ϵ > 0, there is a ρ>0 large enough such that 4B (ρ^p_1^' - p_1 + ρ^ p_2^' - p_2 ) < ϵ/2. By Lemma 3 in <cit.>, we have lim sup_k →∞∫_𝒱 g_ρ dπ^k ≤∫_𝒱 g_ρ dπ^∞ and hence there is a k(ϵ) large enough such ∫_𝒱 g_ρ dπ^k - ∫_𝒱 g_ρ dπ^∞ < ϵ/2, for all k > k(ϵ). Consequently, for all k > k(ϵ), the following holds: ∫_𝒱 g dπ^k - ∫_𝒱 g dπ^∞ ≤∫_𝒱| g - g_ρ| dπ^k + ∫_𝒱 g_ρ dπ^k - ∫_𝒱 g_ρ dπ^∞ + ∫_𝒱| g_ρ - g | dπ^∞ ≤ 4B (ρ^p_1^' - p_1 + ρ^ p_2^' - p_2 ) + ∫_𝒱 g_ρ dπ^k - ∫_𝒱 g_ρ dπ^∞ < ϵ. Since ϵ is arbitrary, we must have lim sup _k →∞∫_𝒱 g d π^k ≤∫_𝒱 g d π^∞. This completes the proof. §.§.§ Proof of Here, we will only show that Σ(δ) is weakly compact. This is because the upper semi-continuity of γ→∫ f d γ over γ∈Σ(δ) can be shown using the same argument for the proof of <Ref>. We write Σ(δ) = {γ∈𝒫(𝒮): W_p_1(γ_1, μ_1 ) ≤δ_1^1/p_1, W_p_2(γ_2, μ_2 )≤δ_2^1/p_2}. For j =1,2, let 𝒢_j be an uniformly tight subset of 𝒫(𝒮_j). Then the following set Γ (𝒢_1, 𝒢_2) := {γ∈𝒫(𝒮): γ_13∈𝒢_1, γ_23∈𝒢_2 }, is tight in 𝒫(𝒮). First, we assume there exist μ∈𝒢_1 and ν∈𝒢_2 such that μ(𝒴_1 × A) = ν(𝒴_2 × A) for all A ∈ℬ_𝒳, i.e. μ and ν have same marginal distribution on 𝒳. Otherwise, Γ (𝒢_1, 𝒢_2) will be empty and hence the statement holds trivially. Since 𝒢_1 is uniformly tight, then for any ϵ > 0, there is a compact set K_ϵ⊂𝒮_1 ≡𝒴_1 ×𝒳 such that μ( K^c_ϵ) ≤ϵ for all μ∈𝒢_1. Similarly, there is a compact set L_ϵ⊂𝒮_2 ≡𝒴_2×𝒳 such that ν( L^c_ϵ) ≤ϵ for all ν∈𝒢_2. Moreover, define a mapping σ: 𝒮→𝒮 as σ: (y_1, y_2, x) ↦ (y_1, x, y_2). Trivially, σ is a homeomorphism (a continuous mapping whose inverse is also continuous) from 𝒮 to 𝒮. Let E_ϵ = σ^-1 ( K_ϵ×𝒴_2) and G_ϵ = 𝒴_1 × L_ϵ. Explicitly, (y_1,y_2, x) ∈ E_ϵ⇔ (y_1,x) ∈ K_ϵ. Fix any γ∈Γ (𝒢_1, 𝒢_2), let S = (Y_1,Y_2, X) be a random variable with γ as its law, i.e. Law(S) = γ. We must have γ_j3∈𝒢_j for j=1,2. Then, ℙ[ S ∉ E_ϵ∩ G_ϵ] ≤ℙ[ S ∉ E_ϵ] + ℙ[ S ∉ G_ϵ] = ℙ[ (Y_1, Y_2, X) ∉ E_ϵ] + ℙ[ (Y_1, Y_2, X) ∉ G_ϵ] = ℙ[ (Y_1, X) ∉ K_ϵ] + ℙ[ (Y_2, X) ∉ L_ϵ] ≤γ_1 3 (K_ϵ^c) + γ_23 (L_ϵ^c) ≤ 2ϵ. The desired result follows from the compactness of E_ϵ∩ G_ϵ in 𝒮. To see this, we note proj_𝒴_1: (y_1, x) ↦ y_1 is continuous from 𝒮_1 to 𝒴_1 and hence proj_𝒴_1(K_ϵ) is compact. As a result, proj_𝒴_1(K_ϵ) × L_ϵ is compact. Since E_ϵ∩ G_ϵ is a subset of a compact set and its compactness follows from the closedness of E_ϵ and G_ϵ. Suppose <Ref> hold. Then, Σ(δ) is weakly compact. By abuse of notations, let ℬ_1= B_p_1(μ_13, δ_1^1 / p_1) and ℬ_2= B_p_2(μ_23, δ_2^1 / p_2). We can rewrite Σ(δ)=Γ (ℬ_1, ℬ_2). By <Ref>, Σ(δ) is tight and hence has a compact closure under weak topology. Using a similar argument in the proof of <Ref>, we can show Σ(δ) is weakly closed. Therefore, Σ(δ) is weakly compact in 𝒫(𝒮). 
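As a purely illustrative aside, and not part of the compactness arguments above, recall that the ambiguity sets treated in these proofs are built from Wasserstein balls. For equal-size empirical measures on the real line with uniform weights, W_p reduces to the sorted (quantile) coupling, so membership of a candidate marginal in such a ball can be checked directly. The sketch below uses synthetic samples; all names and numerical values are ours.

```python
import numpy as np

def wasserstein_p(x, y, p=2):
    """p-Wasserstein distance between the empirical measures of two
    equal-length one-dimensional samples with uniform weights."""
    x, y = np.sort(np.asarray(x, dtype=float)), np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape
    return float(np.mean(np.abs(x - y)**p)**(1.0 / p))

rng = np.random.default_rng(0)
mu_samples = rng.normal(0.0, 1.0, size=10_000)   # reference marginal mu_j
nu_samples = rng.normal(0.5, 1.0, size=10_000)   # candidate marginal gamma_j

W2 = wasserstein_p(mu_samples, nu_samples, p=2)
delta_j = 0.3
# with K_j = W_{p_j}^{p_j}, the ball constraint K_j(gamma_j, mu_j) <= delta_j reads:
print(W2**2, W2**2 <= delta_j)
```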
§.§.§ Proof of We focus on Θ(δ) since the proof of Θ_D(δ) is identical to that of Θ(δ). The proof of <Ref> for Θ(δ) follows form the following two lemmas. Suppose that the Assumptions in <Ref> hold. Then, the linear functional T: Σ(δ) →ℝ given by π↦∫_𝒮 f d π is continuous. Since μ_ℓ 3 has finite p_ℓ-th moment, then for all π∈Σ(δ), π_ℓ 3, i.e., the projection onto 𝒴_ℓ×𝒳 also has finite p_ℓ-th moment. Define a function h: 𝒮→ℝ as h(s) = M [ 1 + d_𝒮_1 (s_1^⋆,s_1 )^p_1'+ d_𝒮_2 (s_2^⋆,s_2)^p_2'], where s = (y_1, y_2, x), s_1 = (y_1, x) and s_2 = (y_2, x). We note h ∈ L^1 (π) for all π∈Σ(δ). Using the identical argument in the proof of <Ref>, we can show that π↦∫ f dπ is upper semicontinuous on Σ(δ). By replacing f by -f, we can see that π↦∫ (-f) dπ is upper semicontinuous and hence π↦∫ f dπ is lower semicontinuous on Σ(δ). As a result, π↦∫ f dπ is continuous on Σ(δ). Suppose that <Ref> hold. Then Σ(δ) is connected under weak topology. Fix any π and π^' in Σ(δ). It suffices to show ν : t ↦ t π + (1- t)π^' is continuous from [0,1] into Σ(δ). We note Σ(δ) ⊂𝒫_p(𝒮) is metrizable under W_p for p = p_1 ∧ p_2. Fix any t_0 ∈ [0,1]. Let t_1 ≠ t_0 be any point in [0,1] such that Δ =| t_1 - t_0| > 0 is sufficiently small. Without loss of generality, we assume t_0 < t_1. For simplicity, we write γ = t_0 π + (1- t_1) π^'≥ 0. By the triangle inequality, W_p( ν (t_0), ν (t_1) ) = W_p( ν (t_0), γ + Δπ^') ≤ (1-Δ) W_p( ν (t_0), (1-Δ)^-1γ) + ΔW_p( ν (t_0) , π^')_= O(Δ). Consider the following derivation: W_p( ν (t_0), (1-Δ)^-1γ) = W_p( ν(t_0) , ν(t_0) - Δπ^'/1- Δ_= ρ_Δ ) = W_p( (1-Δ) ρ _Δ+ Δπ^', ρ_Δ) ≤ΔW_p( π^' , ρ_Δ) = ΔW_p( π^' , ν(t_0) - Δπ^'/1- Δ). Since lim_Δ→ 0ν(t_0) - Δπ^'/1- Δ = ν(t_0) in weak topology induced by W_p, then lim_Δ→ 0W_p( π^' , ν(t_0) - Δπ^'/1- Δ) = W_p( π^' , ν(t_0) ) < ∞. As a result, W_p(ν(t_0),(1-Δ)^-1γ) ≤ΔW_p( π^' , ν(t_0) - Δπ^'/1- Δ) → 0, as Δ→ 0, and hence W_p(ν(t_0), ν(t_1)) → 0, as Δ→ 0. Interchange the role of t_0 and t_1, we can show the case when W_p(ν(t_0), ν(t_1)) → 0 as Δ = |t_1 -t_0| → 0. This shows ν: t ↦ t π+(1-t) π^' is continuous on [0,1]. So Σ(δ) is path-connected and hence connected under weak topology. §.§ Proofs in §.§.§ Proof of Note that the proof of <Ref> implies that If ℐ_D(δ) is finite for some δ > 0, then ℐ_D(δ) is finite for all δ > 0 because ℐ(δ) is concave. Suppose that <Ref> hold. Then for any δ =(δ_1, δ_2) ∈ℝ_+^2, we have 0 ≤ℐ_D(δ_1, δ_2)-ℐ_D(0,0) ≤Ψ(δ_1, δ_2). Moreover, ℐ_D is continuous on (0,0). Fix any γ∈Σ_D(δ) and any ϵ >0. We can construct random variables V = ( S_1, S_2) ∈𝒱 with γ = Law (V ) and write γ_j = Law(S_j) for j ∈ [2]. Let K = {K_1, K_2,K_3} with K_1 = { 1,3 }, K_2 = { 2,4 } and K_3 = {3,4}. It is easy to see K is decomposable, and <Ref> implies that there are random variables (V, V) = (S_1, S_2, S_1, S_2 ) ∈𝒱×𝒱 such that μ_1 = Law (S_1), μ_2 = Law (S_2) and 𝔼[ c_j(S_j, S_j) ] ≤K_j (μ_j, γ_j) + ϵ≤δ_j +ϵ for j ∈ [2]. Let π denote the law of (V, V ). Therefore, with γ = Law(S_1, S_2) ∈Σ_D(0), we have ∫_𝒱 g d γ - ℐ_D(0,0) ≤∫_𝒱 g d γ - ∫_𝒱 g d γ = ∫_𝒱×𝒱[ g(v) - g(ṽ ) ] d π(v, ṽ) = 𝔼[ g(V) - g(V ) ] ≤𝔼[ Ψ(c_1 (S_1, S_1), c_2 ( S_2, S_2)) ] ≤Ψ( 𝔼[c_1 (S_1, S_1)], 𝔼[c_2 ( S_2, S_2) ] ) ≤Ψ( δ_1 + ϵ , δ_2 + ϵ). Since the measure γ∈Σ_D(δ) is arbitrary, we must have ℐ_D(δ_1, δ_2)- ℐ_D(0,0) = sup_γ∈Σ_D(δ)∫_𝒱 g d γ - ℐ_D(0,0) ≤Ψ( δ_1 + ϵ , δ_2 + ϵ). Since Ψ is continuous and ϵ >0 is arbitrary, then ℐ_D(δ_1, δ_2) - ℐ_D(0,0) ≤Ψ( δ_1 , δ_2). The monotonicity of ℐ_D implies ℐ_D(δ_1, δ_2) ≥ℐ_D(0,0). 
In addition, the continuity of ℐ_D at (0,0) follows from the continuity of Ψ at (0,0) and letting (δ_1, δ_2) → (0,0). In fact, <Ref> and Proof of <Ref> implies the effective domain of ℐ_D is either ℝ_+^2 or ∅ because ℐ_D is non-decreasing and concave. Suppose that <Ref> hold, and ℐ_D(δ) is finite for some δ∈ℝ_++^2. If η_0 > η≥ 0 and δ≥ 0, one has 0 ≤ℐ_D(η_0,δ) - ℐ_D(η,δ) ≤Ψ( η_0 - η, 0 ). and 0 ≤ℐ_D(δ, η_0) - ℐ_D(δ, η) ≤Ψ( 0, η_0 - η ). We assume that for all η, δ≥ 0, there exists γ^η, δ∈Σ_D(η, δ) such that ℐ_D(η, δ) = ∫ g d γ^η, δ. Otherwise, due to the continuity of Ψ on ℝ^2_+, we can repeat the proof with ϵ-approximation optimizer and let ϵ↓ 0. In addition, since ℐ_D(δ) < ∞ for some δ∈ℝ^2_+, the ℐ_D (δ) < ∞ for all δ∈ℝ^2_+. Let γ^η, δ_ℓ denote the marginal of γ^η_0, δ on 𝒮_ℓ. Fix γ^η_0, δ∈𝒫( 𝒮_1 ×𝒮_2). Define a probability measure γ_1^⋆ on 𝒮_1 as γ_1^⋆ = ( η/η_0) γ_1^η_0, δ + ( η_0 - η/η_0 ) μ_1. By definition, K_1 ( γ_1^η_0, δ, μ_1 ) ≤η_0 and K_2 ( γ_2^η_0, δ, μ_2 ) ≤δ. By convexity of ν↦K_1( ν, μ_1), we have K_1(γ_1^⋆ , μ_1) ≤η and K_1(γ_1^⋆ , γ_1^η_0, δ ) ≤η_0 - η. Without loss of generality, suppose there is an optimal coupling ν∈Π( γ^η, δ_1, γ_1^⋆ ) such that K_1( γ_1^η_0, δ, γ_1^⋆ ) = ∫_𝒮_1 ×𝒮_1 c_1 d ν. By gluing lemma, we can construct random variables (S_1, S_2, S_1) ∈𝒱×𝒮_1 with a probability measure π≡Law (S_1, S_2, S_1) such that π_1,2 = Law(S_1, S_2) = γ^η_0, δ, π_1,3 = Law( S_1, S_1 ) =ν∈Π (γ_1^η, δ, γ_1^⋆) , and K_1 (γ_1, γ_1^η_0, δ) = 𝔼[c_1(S_1, S_1) ] ≤η_0 - η. Let γ = Law ( S_1, S_2) ∈𝒫(𝒱 ) and it is obvious that γ_1 ∈Σ_D(η, δ). Next, consider the following derivation: ℐ_D(η_0, δ) - ℐ_D(η, δ) ≤∫ g (v) d γ^η_0, δ(v) - ∫ g(v) d γ(v) = ∫_𝒱×𝒱[g(s_1, s_2) - g( s̃_1, s_2 ) ] d π ( s_1, s_2 , s̃_1) = 𝔼[ g(S_1, S_2) - g( S_1, S_2) ] ≤𝔼[ Ψ(c_1( S_1, S_1 ), 0 ) ] ≤Ψ( 𝔼[ c_1( S_1, S_1 ) ] , 0 ) ≤Ψ( η_0 - η , 0 ). Using the same argument, we can show ℐ_D (δ, η_0)-ℐ_D(δ, η) ≤Ψ(0, η_0-η). Now we present the proof of <Ref>. Since ℐ_D is concave on ℝ^2_+, then ℐ_D is continuous on ℝ^2_++. By <Ref> , ℐ_D is continuous at (0,0). Let E_0 = { (x,0)∈ℝ^2_+: x>0 } and E_1 = { (0,y)∈ℝ^2_+: y>0 }. To complete the proof, it suffices to show ℐ_D is continuous at all δ∈ E_0 ∪ E_1. Fix any (η,0) ∈ E_0. For any η_0 ≥η and any δ >0, we have ℐ_D(η_0, δ ) - ℐ_D(η, 0 ) = ℐ_D(η_0, δ ) - ℐ_D(η, δ ) + ℐ_D(η, δ ) - ℐ_D(η, 0 ) ≤Ψ (η_0-η, 0 ) + Ψ (0, δ ) = Ψ ( |η_0-η|, 0 ) + Ψ (0, δ ). Similarly, for any η_0 < η and δ > 0, ℐ_D(η, δ ) - ℐ_D(η_0 ,0 ) ≤Ψ (|η_0-η|, 0 ) + Ψ (0, δ ). This shows for all η, η_0 and δ in (0, ∞), one has |ℐ_D(η_0, δ ) - ℐ_D(η, 0 ) | ≤Ψ (|η_0-η|, 0 ) + Ψ (0, δ ). The continuity of ℐ_D at (η, 0) follows from the continuity of Ψ at (0,0) and letting (η_0, δ) → (η, 0). Since (η, 0) ∈ E_0 is arbitrary, ℐ_D is continuous at all x ∈ E_0. Using the same argument, we can show ℐ_D is continuous at all x∈ E_1. The desired result follows. §.§.§ Proof of Note that the proof of <Ref> implies that If ℐ(δ) is finite for some δ∈ℝ_++^2, then ℐ(δ) is finite for all δ∈ℝ_++^2 because ℐ(δ) is concave. Based on this, we give the following lemma that is used to show the continuity of ℐ. Let δ≥ 0, η_0 > η≥ 0. Suppose that ℐ(δ) < ∞ for some δ∈ℝ_++^2. Under <Ref>, there is a constant M >0 such that ℐ(η_0, δ) - ℐ(η, δ) ≤Ψ_1( η_0 - η , M ( 1- η/ η_0) ) , and ℐ(δ, η_0) - ℐ(δ, η) ≤Ψ_2 ( M ( 1- η/ η_0) , η_0-η). For simplicity, assume that for any η, δ≥ 0, one has γ^η, δ = max_γ∈Σ(η, δ)∫_𝒮 f dγ, equivalently, ℐ(η, δ) = ∫_𝒮 f d γ^η, δ. 
Otherwise, due to the global continuity of Ψ_j, we can repeat the proof with an ϵ-approximation argument and let ϵ↓ 0. For fixed η_0 >0 and δ >0, we have K_1(γ^η_0, δ_1,3, μ_1) ≤η_0 and K_2(γ^η_0, δ_2,3, μ_2) ≤δ by the definition of γ^η_0, δ. Let K_1 = { 1,2,3}, K_2 = { 1,3,4,6 } and K_3 = { 5,6} and it is easy to verify the collection {K_1, K_2, K_3 } is decomposable. As a result, by <Ref>, we can construct random variables (S, S ) ≡(Y_1, Y_2, X, Y_1, Y_2, X) ∈𝒮×𝒮, such that Law(Y_1, Y_2, X) = γ^η_0, δ , Law( Y_1 , X) = μ_1, Law( Ỹ_2 , X) = μ_2, and K_1(γ_1,3^η_1, δ, μ_1) = 𝔼[ c_1( S_1, S_1 ) ] ≤η_0, where S_1 = (Y_1, X) and S_1 = (Y_1, X). Let ε be a Bernoulli random variable that is independent of (S, S ) with ℙ(ε = 1) = η/ η_0. Define new random variables: S≡ ( Y_1,Y_2, X ) = ε (Y_1, Y_2,X) + (1-ε) (Y_1,Y_2, X ), and let γ = Law ( Y_1, Y_2, X). For any measurable set A ∈ℬ_𝒮, we have γ(A) = ℙ ( S∈ A ) = 𝔼[ ℙ ( S∈ A | ε ) ] = ( η/ η_0 ) ℙ ( S ∈ A) + ( 1- η/ η_0 ) ℙ ( S∈ A). This shows γ = ( η/ η_0 ) γ^η_0, δ + ( 1- η/ η_0 ) γ, where γ = Law(Y_1, Y_2, X). Next, we verify γ∈Σ(η, δ). Since ν↦K_1( ν , μ_1) is convex and γ_1,3 = Law(Y_1, X) = μ_1, we have K_1( γ_1,3, μ_1) ≤( η/η_0) K_1(γ^η_1, δ_1,3, μ_1) + ( 1- η/η_0) K_1(γ_1,3, μ_1) ≤η. Similarly, we have K_2( γ_2,3, μ_2) ≤δ. As a result, we verify γ∈Σ( η, δ). Next, it is easy to see 𝔼[ c_1 ( ( Y_1, X ), (Y_1,X ) ) ] ≤(1-η/η_0) 𝔼[c_1 ( (Y_1, X ),(Y_1, X) )] ≤ (η - η_0) Since Law(Y_2, X) = γ^η_0, δ_2, Law(Y_2, X ) = μ_2 and K_2(γ_2,3^η_0, δ, μ_2) ≤δ, i.e. W_p_2(γ_2,3^η_0, δ, μ_2) ≤δ^1/p_2, by triangle inequality, we have W_p_2(γ_2,3^η_1, δ, δ_s_2 ) ≤W_p_2(γ_2,3^η_1, δ, μ_2) + W_p_2(μ_2 , δ_s_2 ) ≤δ^1/p_2 + W_p_2(μ_2 , δ_s_2 ), where δ_s_2 denotes the dirac measure at {s_2} and s_2 ∈𝒮_2 is arbitrary. Further, <Ref> implies ρ_2(y^'_2, y_2) ≤ 1 + d_𝒮_2 (s^'_2, s_2)^p_2 for all s_2 = (y_2,x) and s^'_2 = (y^'_2,x^'), 𝔼[ ρ_2 (Y_2, y_2) ] - 1 ≤𝔼[ d_𝒮_2 (S_2, s_2)^p_2] = W_p_2(γ_2,3^η_1, δ, δ_s_2 )^p_2≤[ δ^1/p_2 + W_p_2(μ_2 , δ_s_2) ]^p_2 , and 𝔼[ ρ_2 ( Y_2, y_2) ] - 1 ≤𝔼[ d_𝒮_2 (S_2, s_2)^p_2] = W_p_2(μ_2 , δ_s_2)^p_2. As a result, by <Ref>, 𝔼[ ρ_2 (Y_2 , Y_2) ] = (η/ η_0) 𝔼[ ρ_2 (Y_2 , Y_2) | ε = 0 ] _= 0 + ( 1- η/ η_0) 𝔼[ ρ_2 (Y_2 , Y_2) | ε = 1 ] ≤ (1- η/ η_0) 𝔼[ ρ_2 (Y_2 , Y_2) ] ≤ (1- η/ η_0) N ( 𝔼[ ρ_2 (Y_2 ,y_2) ] + 𝔼[ ρ_2 (y_2 , Y_2) ] ) ≤ M (1- η/ η_0), where M = N W_p_2(μ_2 , δ_s_2)^p_2 + N [ δ^1/p_2 + W_p_2(μ_2 , δ_s_2) ]^p_2 < ∞. Therefore, by <Ref>, we have ℐ(η_0, δ) - ℐ(η, δ) ≤𝔼[ f(Y_1, Y_2, X) ] - 𝔼[ f(Y_1, Y_2, X ) ] ≤𝔼[ Ψ( c_1 ( (Y_1, X), (Y_1, X ) ), ρ_2 (Y_2 , Y_2) ) ] ≤Ψ( 𝔼[ c_1 ( S_1 , S_1 ) ], 𝔼[ ρ_2 (Y_2 , Y_2) ] ) ≤Ψ( η_0- η , M ( 1- η/ η_0) ). The rest of the proof can be completed using the same reasoning. Now, we give the proof of <Ref>. If η_0 > η≥ 0, <Ref> implies 0 ≤ℐ(η_0, δ) - ℐ(η, 0) = ℐ(η_0, δ) - ℐ(η, δ) + ℐ(η, δ) - ℐ(η, 0) ≤Ψ_1( η_0- η , M ( 1- η/ η_0) ) + Ψ_2 (Mδ, δ). If η≥η_0, by monotonicity of η↦ℐ(η, 0) and <Ref>, we have ℐ(η_0, δ) - ℐ(η, 0) ≤ℐ(η_0, δ) - ℐ(η_0 , 0) ≤Ψ_2( M δ , δ), and ℐ(η_0, δ) - ℐ(η, 0) ≥ℐ(η, δ) - ℐ(η, 0) ≥ 0 . As a result, we must have for all η_0, η and δ in [0, ∞), . 0 ≤ℐ(η_0, δ) - ℐ(η, 0) ≤Ψ_1( |η_0- η| , M | 1- η/ η_0| ) + Ψ_2( M δ , δ). The continuity of ℐ at (η, 0) follows from the continuity of Ψ_1 and Ψ_2, and letting (η_0, δ) → (η, 0). Using a similar argument, we can show ℐ is continuous at (0,η). 
§.§ Proofs in §.§.§ Proof of By some simple algebra and <Ref>, we have ℐ_D(δ) = inf_λ∈ℝ^2_+{⟨λ, δ⟩ + sup_γ∈Π(μ_1, μ_2 )∫_𝒮[ (f_1)_λ_1(y_1) + (f_2)_λ_2(y_2) ] dγ(y_1, y_2)} = inf_λ_1 ≥ 0 [ λ_1 δ_1 +∫_𝒴_1(f_1)_λ_1 d μ_1 ] + inf_λ_2 ≥ 0 [ λ_2 δ_2 +∫_𝒴_2 (f_2 )_λ_2 d μ_2 ], where the last step holds because (f_ℓ)_λ≥ f_ℓ and the right-hand side is well-defined since f_ℓ∈ L^1(μ_ℓ). Next, we show ℐ(δ) = ℐ_D(δ). <Ref> implies ℐ(δ) = inf_λ∈ℝ^2_+{⟨λ, δ⟩ + sup_π∈Π(μ_13, μ_23 )∫_𝒮_1 ×𝒮_2 (f_𝒮 )_λ dπ}, where (f_𝒮)_λ: 𝒮_1 ×𝒮_2 →ℝ is given by (f_𝒮)_λ(s_1, s_2) = sup_ (y_1^', y_2^', x^' ) ∈𝒮{ f_1(y_1^') + f_2(y_2^') - ∑_1 ≤ℓ≤ 2λ_ℓ c_ℓ( (y_ℓ, x_ℓ), (y^'_ℓ, x^') ) }. In fact, <Ref> implies for all s_ℓ = (y_ℓ, x_ℓ) ∈𝒮_ℓ and s_ℓ^' = (y^'_ℓ, x^'_ℓ) ∈𝒮_ℓ, one has c_Y_ℓ(y_ℓ, y_ℓ^') = inf_x_ℓ, x_ℓ^'∈𝒳 c_ℓ( (y_ℓ, x_ℓ), (y^'_ℓ, x^'_ℓ) ) ≤ c_ℓ( (y_ℓ, x_ℓ), (y^'_ℓ, x^'_ℓ) ) . Recall (f_𝒮)_λ: (s_1, s_2) ↦ (f_𝒮)_λ (s_1, s_2) is a function from 𝒮_1 ×𝒮_2 →ℝ with s_ℓ = (y_ℓ, x_ℓ) ∈𝒮_ℓ. As a result, for all s_1 ∈𝒮_1 and s_2 ∈𝒮_2 (f_𝒮)_λ(s_1, s_2) ≤sup_ (y_1^', y_2^', x^' ) ∈𝒮{ f_1(y_1^') + f_2(y_2^') - ∑_1 ≤ℓ≤ 2λ_ℓ c_Y_ℓ( y_ℓ, y_ℓ^') } = (f_1 )_λ_1(y_1) + (f_2 )_λ_2(y_2). This shows for all λ = (λ_1, λ_2) ∈ℝ^2_+, one has sup _π∈Π(μ_13, μ_23)∫_𝒮_1 ×𝒮_2(f_𝒮)_λ d π≤sup_γ∈Π(μ_1, μ_2 )∫_𝒴_1 ×𝒴_2 [ (f_1)_λ_1(y_1) + (f_2)_λ_2(y_2) ] dγ(y_1, y_2), and hence ℐ(δ) ≤ℐ_D(δ). We end the proof by showing sup _π∈Π(μ_13, μ_23)∫_𝒮_1 ×𝒮_2 (f_𝒮 )_λ d π≥∫_𝒴_1(f_1)_λ_1 d μ_1 +∫_𝒴_2 (f_2 )_λ_2 d μ_2. It suffices to show that there is π∈Π(μ_13, μ_23) such that (f_𝒮 )_λ ( s_1, s_2) ≥ (f_1)_λ_1(y_1) + (f_2 )_λ_2(y_2), π-a.e. In fact, we note that if x_1 = x_2, then (f_𝒮 )_λ ( (y_1, x_1), (y_2, x_2) ) = (f_1)_λ_1(y_1) + (f_2 )_λ_2(y_2) under <Ref>. Consider a probability measure π^⋆ = Law(Y_1, X, Y_2, X) where μ_ℓ, 3 = Law(Y_ℓ, X) for ℓ = 1,2. As a result, sup _π∈Π(μ_13, μ_23)∫_𝒮_1 ×𝒮_1(f_𝒮)_λ d π ≥∫_𝒮_1 ×𝒮_2(f_𝒮)_λ d π^⋆ = ∫_𝒮_1 ×𝒮_2[ (f_1)_λ_1 + (f_2 )_λ_2] dπ^⋆ = ∫_𝒴_1(f_1)_λ_1 d μ_1 +∫_𝒴_2 (f_2 )_λ_2 d μ_2. §.§.§ Proof of Since c_Y_ℓ(y_ℓ, y_ℓ') = inf_x_ℓ, x_ℓ' ∈𝒳_ℓ c_ℓ (s_ℓ, s_ℓ'), the proof of <Ref> implies ℐ(δ) ≤ℐ_D(δ). §.§.§ Proof of The proof consists of two steps. In Step 1, we derive the dual form of ℐ_D(δ) and ℐ(δ) for δ∈ℝ^2_++. In Step 2, we derive the dual reformulations of ℐ_D(δ) and ℐ(δ) for δ∈ℝ^2_+∖ℝ^2_++. Step 1. We derive the expressions of ℐ_D(δ) and ℐ(δ) for δ∈ℝ^2_++. First, recall c_Y_ℓ (y_ℓ, y_ℓ^') = V_ℓ, Y Y^-1 (y_ℓ-y_ℓ^')^2. <Ref> implies ℐ_D(δ)=inf _λ∈ℝ_+^2[⟨λ, δ⟩+sup _ϖ∈Π(μ_Y_1, μ_Y_2)∫_ℝ^2 (f_𝒴)_λ(y_1, y_2) d ϖ(y_1, y_2)], where (f_𝒴)_λ: (y_1, y_2) ↦ (f_𝒴)_λ(y_1, y_2) from ℝ^2 to ℝ is given by (f_𝒴)_λ(y_1, y_2) = y_2 - y_1 + V_1, YY/4 λ_1 + V_2, YY/4 λ_2. Since V_ℓ, YY > 0 for ℓ∈ [2], by some simple algebra, we have for all δ∈ℝ_++^2 ℐ_D(δ) = 𝔼[Y_2] - 𝔼[Y_1] + V_1, YY^1/2 δ_1^1/2 + V_2, YY^1/2 δ_2^1/2 Next, we derive the expression of ℐ(δ) for δ∈ℝ^2_++. Let Q_ℓ∈ℝ^(d+1) × (d+1) be the inverse of V_ℓ, i.e., Q_ℓ = [ Q_ℓ, YY Q_ℓ, YX; Q_ℓ, XY Q_ℓ, XX ] = [ (V_ℓ / V_ℓ, XX)^-1 - (V_ℓ / V_ℓ, XX)^-1 V_ℓ, YX V_ℓ, XX^-1; - V_ℓ, XX^-1 V_ℓ, XY (V_ℓ / V_ℓ, XX)^-1 (V_ℓ / V_ℓ, YY)^-1 ], where V_ℓ / V_ℓ, XX = V_ℓ, YY - V_ℓ, YX V_ℓ, XX^-1 V_ℓ, XY and V_ℓ / V_ℓ, YY = V_ℓ, XX - V_ℓ, XY V_ℓ, YY^-1 V_ℓ, YX. Conversely, [ V_ℓ, YY V_ℓ, YX; V_ℓ, XY V_ℓ, XX ] = [ (Q_ℓ / Q_ℓ, XX)^-1 - Q_ℓ, YY^-1 Q_ℓ, YX (Q_ℓ / Q_ℓ, YY)^-1; - (Q_ℓ / Q_ℓ, YY)^-1 Q_ℓ, XY Q_ℓ, YY^-1 (Q_ℓ / Q_ℓ, YY)^-1 ] , where Q_ℓ / Q_ℓ, XX = Q_ℓ, YY - Q_ℓ, YX Q_ℓ, XX^-1 Q_ℓ, XY and Q_ℓ / Q_ℓ, YY = Q_ℓ, XX - Q_ℓ, XY Q_ℓ, YY^-1 Q_ℓ, YX. 
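Before evaluating (f_𝒮)_λ, we record, as an expository remark of ours, the elementary minimization behind the closed form of ℐ_D(δ) displayed above. For each ℓ, since V_ℓ, YY > 0 and δ_ℓ > 0, the AM-GM inequality gives λ_ℓδ_ℓ + V_ℓ, YY/(4λ_ℓ) ≥ 2 √(λ_ℓδ_ℓ· V_ℓ, YY/(4λ_ℓ)) = V_ℓ, YY^1/2δ_ℓ^1/2 for all λ_ℓ > 0, with equality when λ_ℓ^2 = V_ℓ, YY/(4δ_ℓ). Hence inf_λ_ℓ > 0{λ_ℓδ_ℓ + V_ℓ, YY/(4λ_ℓ)} = V_ℓ, YY^1/2δ_ℓ^1/2, which, summed over ℓ = 1, 2 and added to 𝔼[Y_2] - 𝔼[Y_1], yields the stated expression for ℐ_D(δ).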
Next, we evaluate the function (f_𝒮)_λ(s_1, s_2) that appears in the dual reformulation. For simplicity, we write a_1 = -1 and a_2 = 1. Consider the following derivation: (f_𝒮)_λ (s_1, s_2) := sup_y_1', y_2', x'{y_2' - y_1' - ∑_ℓ = 1, 2λ_ℓ c_ℓ((y_ℓ', x'), (y_ℓ, x_ℓ))} = sup_y_1', y_2', x'{∑_1 ≤ℓ≤ 2( a_ℓ y_ℓ - λ_ℓ[ y_ℓ' - y_ℓ; x' - x_ℓ ]^⊤ Q_ℓ[ y_ℓ' - y_ℓ; x' - x_ℓ ]) } =_(1) y_2 - y_1 + sup_z_1', z_2', x'{∑_1 ≤ℓ≤ 2( a_ℓ z_ℓ' - λ_ℓ[ z_ℓ'; x' - x_ℓ ]^⊤ Q_ℓ[ z_ℓ'; x' - x_ℓ ]) } = y_2 - y_1 + sup_x'∈ℝ^d {∑_1 ≤ℓ≤ 2sup_z_ℓ' ∈ℝ( a_ℓ z_ℓ' - λ_ℓ[ z_ℓ'; x' - x_ℓ ]^⊤ Q_ℓ[ z_ℓ'; x' - x_ℓ ]) }, where equation (1) follows from the change of variables z^'_ℓ = y_ℓ^' - y_ℓ. So, to evaluate (f_𝒮)_λ (s_1, s_2), it suffices to maximize (z_1^', z_2^', x^') ↦ϕ_1(z_1', x'; x_1) + ϕ_2(z_2', x'; x_2) where ϕ_ℓ(z_ℓ', x'; x_ℓ) = a_ℓ z_ℓ' - λ_ℓ[ z_ℓ'; x' - x_ℓ ]^⊤ Q_ℓ[ z_ℓ'; x' - x_ℓ ]. We first consider sup_z_ℓ' ∈ℝϕ_ℓ(z_ℓ', x'; x_ℓ). The first-order conditions imply that the optimal solution is z_ℓ' = (λ_ℓ Q_ℓ, YY)^-1[ a_ℓ/2 - λ_ℓ Q_ℓ, YX (x' - x_ℓ) ]. By some simple algebra, sup_z_ℓ' ∈ℝϕ_ℓ(z_ℓ', x', x_ℓ) = φ_ℓ (x^'- x_ℓ, λ) where φ_ℓ: ℝ^d ×ℝ→ℝ is given by φ_ℓ (x, λ_ℓ) = Q_ℓ, YY^-1/4λ_ℓ + a_ℓ x^⊤ V_ℓ, XX^-1 V_ℓ, XY - λ_ℓ x^⊤ V_ℓ, XX^-1 x. As result, (f_𝒮)_λ (s_1, s_2 ) = sup_x^'∈ℝ^d[ φ_1 (x^'- x_1, λ_1) + φ_2 (x^'- x_2, λ_2) ]. Now, we consider the optimization above. The first-order conditions imply the optimal solution x^' takes the form of x^' - x_ℓ = B_ℓ (x_2 - x_1) + b_ℓ for some B_ℓ∈ℝ^d × d and b_ℓ∈ℝ^d that depend on λ_ℓ. So, we have sup_x^'∈ℝ^d[ φ_1 (x^', x_1) + φ_2 (x^', x_2) ] = b + B(x_1- x_2) - (x_1-x_2)^⊤ W (x_1-x_2). for some positive definite matrix W ∈ℝ^d × d and b ∈ℝ that depend on λ_1,λ_2, x_1 and x_2. Here, the constant b will be determined below. For any π∈Π(μ_13, μ_23), we have ∫_ℝ^d+1×ℝ^d+1 (f_𝒮)_λ dπ = 1/4 λ_1 Q_1, Y Y^-1+1/4 λ_2 Q_2, Y Y^-1 + ∫_ℝ^d+1×ℝ^d+1 B (x_1-x_2) dπ_= 0 + ∫_ℝ^d+1×ℝ^d+1 (x_1-x_2)^⊤ W(x_1-x_2) d π(s_1, s_2) + b = 1/4 λ_1 Q_1, Y Y^-1+1/4 λ_2 Q_2, Y Y^-1 - ∫ (x_1-x_2)^⊤ W(x_1-x_2) d π + b. Now, let us consider sup_π∈Π(μ_13, μ_23)∫ (f_𝒮)_λ d π. To maximize ∫ (f_𝒮)_λ d π, it suffices to consider inf_π∈Π(μ_13, μ_23) ∫_ℝ^d+1×ℝ^d+1 (x_1-x_2)^⊤ W (x_1-x_2) d π(s_1, s_2). Since (x_1-x_2)^⊤ W (x_1-x_2) for all x_1, x_2 ∈ℝ^d, the probability measure π = Law(Y_1,X, Y_2, X) with Law(Y_ℓ,X) = μ_ℓ,3 for ℓ=1,2 is a solution and the optimal value is 0. We denote by Π the set of all probability measures on 𝒮_1 ×𝒮_2 that takes forms of π = Law(Y_1,X, Y_2, X). As a consequence, sup_π∈Π(μ_13, μ_23)∫_ℝ^2d +2 (f_𝒮 )_λ d π = 1/4 λ_1 Q_1, Y Y^-1+1/4 λ_2 Q_2, Y Y^-1 + b where b = 1/4 V_o^⊤(λ_1 V_1, X X^-1+λ_2 V_2, X X^-1)^-1 V_o with V_o=V_2, X X^-1 V_2, X Y-V_1, X X^-1 V_1, X Y. As a result, the dual reformulation of ℐ_D(δ) is given by ℐ(δ) = 𝔼[Y_2]-𝔼[Y_1] + inf_λ∈ℝ_+^2{λ_1 δ_1 + λ_2 δ_2 + 1/4λ_1(V_1 / V_1, XX) + 1/4λ_2(V_2 / V_2, XX) + 1/4 V_o^⊤(λ_1 V_1, XX^-1 + λ_2 V_2, XX^-1)^-1 V_o }. Step 2. We derive the dual reformulation of ℐ_D(δ) and ℐ(δ) for δ∈ℝ^2_+ ∖ℝ^2_++. First, we note that ℐ_D(0) = ℐ(0) = 𝔼[Y_2] - 𝔼[Y_1]. <Ref> implies that ℐ_D(δ_1, 0) = inf _λ∈ℝ_+^2[λ_1 δ_1 + sup _ϖ∈Π(μ_Y_1, μ_Y_2)∫_ℝ^2 (f_𝒴)_λ, 1(y_1, y_2) d ϖ(y_1, y_2)], ℐ_D(0, δ_2) = inf_λ_2 ∈ℝ_+^2[λ_2 δ_2 + sup _ϖ∈Π(μ_Y_1, μ_Y_2)∫_ℝ^2 (f_𝒴)_λ, 2(y_1, y_2) d ϖ(y_1, y_2)], where (f_𝒴)_λ, ℓ, for ℓ = 1, 2, is given by (f_𝒴)_λ, ℓ = y_2 - y_1 + (4 λ_ℓ)^-1 V_ℓ, YY. Since V_ℓ, YY > 0, by simple algebra, we have for all δ∈ℝ_++^2, ℐ_D(δ_1, 0) = 𝔼[Y_2] - 𝔼[Y_1] + V_1, YY^1/2δ_1^2 and ℐ_D(0, δ_2) = 𝔼[Y_2] - 𝔼[Y_1] + V_2, YY^1/2δ_2^2. 
<Ref> implies that ℐ(δ_1, 0) = inf _λ∈ℝ_+^2[⟨λ, δ⟩+sup _ϖ∈Π(μ_13, μ_23)∫_ℝ^2 (f_𝒮)_λ, 1(y_1, y_2) d ϖ(y_1, y_2)], ℐ(0, δ_2) = inf_λ_1 ∈ℝ_+^2[⟨λ, δ⟩+sup _ϖ∈Π(μ_13, μ_23)∫_ℝ^2 (f_𝒮)_λ, 2(y_1, y_2) d ϖ(y_1, y_2)], where (f_𝒴)_λ, ℓ, for ℓ = 1, 2, is given by (f_𝒴)_λ, 1 = sup_y_1'{y_2 - y_1' - λ_1 [ y_1' - y_1; x_2 - x_1 ]^⊤ Q_1[ y_1' - y_1; x_2 - x_1 ]}, (f_𝒴)_λ, 2 = sup_y_2'{y_2' - y_1 - λ_2 [ y_2' - y_2; x_1 - x_2 ]^⊤ Q_2[ y_2' - y_2; x_1 - x_2 ]}. With similar calculation as in Step 1, the functions (f_𝒴)_λ, 1 and (f_𝒴)_λ, 2 can be written as (f_𝒴)_λ, 1 = y_2 - y_1 + V_1/V_1, XX/4 λ_1 - (x_2 - x_1)^⊤ V_1, XX^-1 V_1, XY - λ_1 (x_2 - x_1)^⊤ V_1, XX^-1 (x_2 - x_1), (f_𝒴)_λ, 2 = y_2 - y_1 + V_2/V_2, XX/4 λ_2 + (x_1 - x_2)^⊤ V_2 XX^-1 V_2, XY - λ_2 (x_1 - x_2)^⊤ V_2, XX^-1 (x_1 - x_2). With the same reasoning as in Step 1, we have sup_ϖ∈Π(μ_13, μ_23)∫ (f_𝒮)_λ, ℓ d ϖ = 𝔼[Y_2] - 𝔼[Y_1] + V_ℓ/V_ℓ, XX/4 λ_ℓ, for ℓ∈ [2]. Therefore, ℐ(δ_1, 0) = 𝔼[Y_2] - 𝔼[Y_1] + (V_1/V_1, XX)^1/2δ_1^1/2 = ℐ_D (δ_1, 0 ), ℐ(0, δ_2) = 𝔼[Y_2] - 𝔼[Y_1] + (V_2/V_2, XX)^1/2δ_2^1/2 = ℐ_D (0, δ_2 ). §.§.§ Proof of Recall the proof of , we have ℐ(δ) = inf_λ∈ℝ^2_+{⟨λ, δ⟩ + sup_π∈Π∫_ℝ^2d+2(f_𝒮)_λ d π} where Π is the set of all probability measures such that their supports Supp(π) are in { (y_1, x_1, y_2, x_2)∈ℝ^2d+2 : x_1 = x_2 }. By the definition of Π, to evaluate ℐ(δ), it suffices to restrict the domain of (f_𝒮)_λ on Supp(π). For any (s_1, s_2) ∈Supp(π), we have x_1 = x_2 (f_𝒮)_λ(s_1, s_2) = (y_2-y_1) + sup _x^'∈ℝ^d[φ_1 (x^'-x_1, λ_1 )+φ_2 (x^'-x_2, λ_2)] = (y_2-y_1) + sup_x^'∈ℝ^d {∑_1≤ℓ≤ 2 Q_ℓ, Y Y^-1/4 λ_ℓ +x^'^⊤ V_ℓ, X X^-1 V_ℓ, X Y a_ℓ-λ_ℓx^'^⊤ V_ℓ, X X^-1 x^'}_= ℋ(λ, δ) As a consequence, (f_𝒮)_λ(s_1, s_2) is independent of x_1 and x_2 for all (s_1, s_2)∈Supp(π), and hence for all π∈Π, we have ∫_ℝ^2 d+2(f_𝒮)_λ d π = 𝔼[Y_2] - 𝔼[Y_1] + R(λ, δ), where R(λ, δ) = ℋ(λ, δ) +⟨λ, δ⟩ and Law(Y_ℓ, X) = μ_ℓ 3 for ℓ =1, 2. So, ℐ(δ) = 𝔼[Y_2] - 𝔼[Y_1] + inf _λ∈ℝ_+^2R(λ, δ). Moreover, ℐ_D(δ)=𝔼[Y_2]-𝔼[Y_1]+inf _λ∈ℝ_+^2R_D(λ, δ), where R_D(λ, δ) = ⟨λ, δ⟩ +V_1, Y Y/4 λ_1+V_2, Y Y/4 λ_2. The rest of the proof is divided into the following two steps. Step 1. We show that ℐ_D(δ)=ℐ(δ) implies δ_1^1 / 2 V_1, Y Y^-1 / 2 V_1, X Y + δ_2^1 / 2 V_2, Y Y^-1 / 2 V_2, X Y = 0. Since Q_ℓ, Y Y≥ V_ℓ, Y Y^-1 by definition, then Q^-1_ℓ, Y Y≤ V_ℓ, Y Y and R(λ, δ) ≤R_D(λ, δ). Let λ_D^⋆ = (δ_1^-1 / 2 V_1, Y Y^1 / 2, δ_2^-1 / 2 V_2, Y Y^1 / 2 ). It is easy to see inf _λ∈ℝ_+^2R_D(λ, δ) = R_D(λ_D^⋆, δ) ≥R (λ_D^⋆, δ) and hence ℐ(δ) ≤𝔼[Y_2] - 𝔼[Y_1] + R(λ_D^⋆, δ) ≤𝔼[Y_2] - 𝔼[Y_1] + R_D (λ_D^⋆, δ) = ℐ_D(δ). Thus, ℐ(δ) = ℐ_D(δ) implies R_D(λ_D^⋆, δ) = R(λ_D^⋆, δ). In fact, we note that R_D(λ, δ) = ⟨λ, δ⟩+ sup _x^'∈ℝ^d[ ∑_1≤ℓ≤ 2φ_ℓ(x^', λ_ℓ)] and R_D(λ, δ) = ⟨λ, δ⟩+ ∑_1≤ℓ≤ 2sup _x^'∈ℝ^dφ_ℓ(x^', λ_ℓ). Since x^'↦φ_ℓ(x^', λ_ℓ) is strictly concave, it admits a unique maximizer and hence R_D(λ_D^⋆, δ) = R(λ_D^⋆, δ) implies for ℓ=1,2, x^'∈ℝ^dmax[ ∑_1≤ℓ≤ 2φ_ℓ(x^', λ_D, ℓ^⋆ )] = x^'∈ℝ^dmax φ_ℓ(x^',λ_D, ℓ^⋆). The first-order conditions imply x^'∈ℝ^dmax[ ∑_1≤ℓ≤ 2φ_ℓ(x^', λ_ℓ)] = (∑_1 ≤ℓ≤ 2λ_ℓ V_ℓ, X X^-1)^-1(∑_1 ≤ℓ≤ 2 a_ℓ V_ℓ, X X^-1 V_ℓ, X Y), and x^'∈ℝ^dmax φ_ℓ(x^', λ_ℓ) = 1/2 a_ℓλ_D, ℓ^⋆^-1 V_2, X Y, for ℓ = 1,2. So, recall λ_D, ℓ^⋆ = δ_ℓ^-1 / 2 V_ℓ, Y Y^1 / 2, a_1 = -1 and a_2 = 1, we have δ_1^1 / 2 V_1, Y Y^-1 / 2 V_1, X Y+δ_2^1 / 2 V_2, Y Y^-1 / 2 V_2, X Y=0. Step 2. We show δ_1^1 / 2 V_1, Y Y^-1 / 2 V_1, X Y+δ_2^1 / 2 V_2, Y Y^-1 / 2 V_2, X Y=0 implies ℐ_D(δ)=ℐ(δ). We note λ↦R_D (λ, δ ) is convex since it is supremum of a set of affine functions. 
It can be written as R_D (λ, δ ) = ⟨λ, δ⟩ + ∑_1≤ℓ≤ 2 V_ℓ / V_ℓ, X X/4 λ_ℓ + 1/4 V_o^⊤(λ_1 V_1, X X^-1+λ_2 V_2, X X^-1) _= Λ_λ^-1 V_o Taking derivatives with respect to λ_ℓ yields ∂R_D (λ, δ ) /∂λ_ℓ= δ_ℓ- V_ℓ / V_ℓ, X X/4 λ_ℓ^2 - 1/4 V_o^⊤Λ_λ^-1 V_ℓ, X X^-1Λ_λ^-1 V_o. By some algebra and under δ_1^1 / 2 V_1, Y Y^-1 / 2 V_1, X Y+δ_2^1 / 2 V_2, Y Y^-1 / 2 V_2, X Y=0, we can show ∂R_D (λ_D^⋆, δ ) /∂λ_ℓ = 0. As a result, R_D (λ_D^⋆, δ) = inf_λ∈ℝ^2_+R_D(λ, δ) = R(λ_D^⋆, δ) = inf_λ∈ℝ^2_+R(λ, δ) and ℐ(δ) = 𝔼[Y_2 ]-𝔼 [Y_1]+ inf _λ∈ℝ_+^2R_D(λ, δ) = ℐ_D(δ) . Step 3. We show that <Ref> incorporates the case when δ_1 = 0 or δ_2 = 0. From <Ref> (ii), we know the following statements hold. * When δ_1 > 0 and δ_2 = 0, ℐ_D(δ) = ℐ(δ) if and only if V_1, XY = 0. * When δ_1 = 0 and δ_2 > 0, ℐ_D(δ) = ℐ(δ) if and only if V_2, XY = 0. * When δ_1 = δ_2 = 0, ℐ_D(δ) = ℐ(δ) = ℐ_D, 0. We will see that <Ref> incorporates all these cases. * When δ_1 > 0 and δ_2 = 0, <Ref> is equivalent to V_1, XY = 0. * When δ_1 = 0 and δ_2 > 0, <Ref> is equivalent to V_2, XY = 0. * When δ_1 = δ_2 = 0, <Ref> is satisfied always. This completes the proof. §.§.§ Proof of The continuity of ℐ_D can be seen from the <Ref> or <Ref>. Next, we show ℐ is continuous on ℝ_+^2 by verifying the conditions of <Ref>. Obviously, d_𝒮_ℓ(s_ℓ, s_ℓ^') = √( c_ℓ (s_ℓ, s_ℓ^' ) ) defines a norm on 𝒮_ℓ = ℝ^q+1. Define a function ρ_ℓ:𝒴_ℓ×𝒴_ℓ→ℝ_+ as ρ_ℓ (y_ℓ, y_ℓ^') = (y_ℓ-y_ℓ^')^⊤ V_ℓ, Y Y^-1(y_ℓ-y_ℓ^'). In fact, it is not difficult to see ρ_ℓ (y_ℓ, y_ℓ^') = min_(x_ℓ, x^'_ℓ) ∈𝒳_ℓ×𝒳_ℓ(s_ℓ-s_ℓ^')^⊤ V_ℓ^-1(s_ℓ-s_ℓ^') ≤ c_ℓ(s_ℓ, s_ℓ^'), ∀ s_ℓ, s_ℓ^'∈𝒮_ℓ. Moreover, ρ_ℓ^1/2 is a norm on 𝒴_ℓ and the triangle inequality implies ρ_ℓ^1/2(y_ℓ, y_ℓ^') ≤ρ_ℓ^1/2(y_ℓ, y_ℓ^⋆)+ ρ_ℓ^1/2(y_ℓ^⋆, y_ℓ^'), ∀ y_ℓ, y_ℓ^', y_ℓ^⋆∈𝒴_ℓ. As a result, we must have ρ(y_ℓ, y_ℓ^') ≤ 2 [ ρ_ℓ(y_ℓ, y_ℓ^⋆)+ ρ_ℓ(y_ℓ^⋆, y_ℓ^') ], ∀ y_ℓ, y_ℓ^', y_ℓ^⋆∈𝒴_ℓ. We verified the functions ρ_1 and ρ_2 satisfy <Ref> with respect to Mahalanobis distances. Recall f(y_1,y_2, x) = y_1 - y_2 and define a concave function Ψ: ℝ^2 →ℝ_+ as Ψ: (a_1, a_2) ↦ V_1, YY^1/2 a^1/2 + V_2, YY^1/2 a_2^1/2. Since |y_ℓ- y_ℓ^'|^2 = V_ℓ, YYρ_ℓ(y_ℓ, y_ℓ^') and ρ_ℓ≤ c_ℓ, then f(y_1, y_2, x)-f(y_1^', y_2^', x^') ≤ |y_1- y_1^'| + |y_2- y_2^'| ≤∑_ℓ=1^2 V_ℓ, YY^1/2ρ_ℓ^1/2 (y_ℓ, y_ℓ^') = Ψ(ρ_1(y_1, y_1^') , ρ_2(y_2, y_2^' ) ) ≤Ψ( c_1(s_1, s_1^') , ρ_2(y_2, y_2^' ) ). Similarly, we can show f(y_1, y_2, x) - f(y_1^', y_2^', x^') ≤Ψ ( ρ_1(y_1, y_1^'), c_2(s_2, s_2^') ). <Ref> implies the continuity of ℐ on ℝ^2_+. §.§ Proofs in §.§.§ Proof of We prove <Ref> using a technique similar to <cit.>. For any s_ℓ = (y_ℓ, x_ℓ) ∈𝒮_ℓ, we have (f_𝒮)_λ(s_1, s_2) = sup_ x^'∈𝒳sup_ (y_1^' , y_2^') ∈𝒴_1 ×𝒴_2{ - y_2' d(x') - y_1' [1 - d(x')] - ∑_1≤ℓ≤2λ_ℓ[ | y_ℓ - y_ℓ' | + x_ℓ - x' _2 ] } = sup_ x^'∈𝒳{[ sup_y_2' ∈𝒴_2{ - y_2' d(x') - λ_2 |y_2 - y_2'| } + sup_y_1' ∈𝒴_1{ - y_1' (1 - d(x')) - λ_1 |y_1 - y_1'| }] - ∑_1≤ℓ≤2λ_ℓ x_ℓ - x' }. We note that sup_y_2' ∈𝒴_2{ - y_2' d(x') - λ_2 |y_2 - y_2'| } = ∞ if 0 ≤λ_2 < 1 - y_2 d(x') if λ_2 ≥ 1 , and sup_y_1' ∈𝒴_1{ - y_1' (1 - d(x')) - λ_1 |y_1 - y_1'| } = ∞ if 0 ≤λ_1 < 1 - y_1 (1 - d(x')) if λ_1 ≥ 1 . Therefore, we have for λ_1 ≥ 1 and λ_2 ≥ 1 (f_𝒮)_λ(s_1, s_2) = sup_ x^'∈𝒳{ - y_2 d(x') - y_1 (1 - d(x')) - ∑_1 ≤ℓ≤ 2λ_ℓ x_ℓ - x' } = - min{ y_2 + φ_λ, 1 (x_1, x_2), y_1 + φ_λ, 0 (x_1, x_2) }, where φ_λ, d (x_1, x_2) = min_u ∈𝒳 : d(u) = d∑_1≤ℓ≤ 2λ_ℓ x_ℓ - u _2, for d ∈{0,1}. If λ_1 < 1 or λ_2 < 1, then (f_S)_λ(s_1, s_2) = ∞. 
As a result, we have RW(d) = inf_γ∈Σ(δ)𝔼[Y_2 d(X) + Y_1 (1 - d(X))] = - inf_λ∈ℝ_+^2[ ⟨λ, δ⟩ + sup_π∈Π(μ_13, μ_23)∫_𝒱 (f_S)_λ d π] = - inf_λ∈ [1, ∞)^2[ ⟨λ, δ⟩ + sup_π∈Π(μ_13, μ_23)∫_𝒱 - min{ y_2 + φ_λ, 1 (x_1, x_2), y_1 + φ_λ, 0 (x_1, x_2) } d π(v) ] = sup_λ∈ [1, ∞)^2 [ inf_π∈Π(μ_13, μ_23)∫_𝒱min{ y_2 + φ_λ, 1 (x_1, x_2), y_1 + φ_λ, 0 (x_1, x_2) } d π(v) - ⟨λ, δ⟩]. Next, we show <Ref>. Recall the set Π defined in the proof of <Ref>. Here, Π is the set of all the probability measures concentrate on { (y_1, x_1, y_2, x_2) ∈ℝ^2d +2 : x_1 = x_2 }. Consider the following derivation: RW(d) = sup_λ_0 ≥ 1, λ_2 ≥ 1[ inf_π∈Π(μ_13, μ_23)∫_𝒱min{ y_2 + φ_λ, 1 (x_1, x_2), y_1 + φ_λ, 0 (x_1, x_2) } d π(v) - (λ_1 + λ_2) δ_0 ] ≤sup_λ_1 ≥ 1, λ_2 ≥ 1[ inf_π∈Π∫_𝒱min{ y_2 + φ_λ, 1 (x_1, x_2), y_1 + φ_λ, 0 (x_1, x_2) } d π(v) - (λ_1 + λ_2) δ_0 ]. Recall the functions h_0 and h_1 defined in <Ref>, we notice that for all (y_1,x_1,y_2,x_2) ∈Π, φ_λ, ℓ (x_1, x_2) = (λ_1 + λ_2) h_ℓ(x_1), ∀ℓ = 1,2. As a result, we have RW(d) ≤sup_λ_1 ≥ 1, λ_2 ≥ 1[ inf_π∈ℱ(μ_13, μ_23)∫_𝒮min{ y_2 + φ_λ, 1 (x), y_1 + φ_λ, 0 (x) } d π(s) - (λ_1 + λ_2) δ_0 ] = sup_η≥ 2[ inf_π∈ℱ(μ_13, μ_23)∫_𝒮min{ y_2 + η h_1 (x), y_1 + η h_0 (x) } d π(s) - ηδ_0 ] ≤sup_η≥ 1[ inf_π∈ℱ(μ_13, μ_23)∫_𝒮min{ y_2 + η h_1 (x), y_1 + η h_0 (x) } d π(s) - ηδ_0 ] =_(1)sup_η≥ 1[ inf_π∈ℱ(μ_13, μ_23)𝔼_X[𝔼(min{ Y_2 - Y_1 + η h_1 (X), η h_0 (X) } |X)] + 𝔼(Y_1)- ηδ_0 ] = sup_η≥ 1[ ∫_𝒮min{ y_2 + η h_1 (x), y_1 + η h_0 (x) } d π^*(s) - ηδ_0 ] = RW_0(d) where equation (1) follows from Proposition 2.17 in <cit.> and the concavity of y ↦min{y + η h_1 (x), η h_0 (x)} (see also Section 4.3.1 in <cit.>). §.§ Proofs in We provide a brief sketch of proofs in <Ref>. §.§.§ Proof of Similarly to the proof of <Ref>, it is sufficient to derive the dual reformulation of ℐ_D(δ) for δ∈ℝ^L_++. Let 𝒫_D denote the set of γ∈𝒫(𝒱) that satisfies K_ℓ(μ_ℓ, γ_ℓ) < ∞ for all ℓ∈ [L] and ∫_𝒱 g d γ > -∞. Taking the Legendre transform on ℐ_D yields that any λ∈ℝ^2_+, ℐ_D^⋆ (λ) := sup_δ∈ℝ_+^L {ℐ_D(δ) - ⟨λ, δ⟩} =sup_δ∈ℝ_+^L sup_γ∈Σ_D(δ) {∫_𝒱 g dγ - ⟨λ, δ⟩} = sup_γ∈𝒫_D{∫_𝒱 g dγ - ∑_ℓ∈ [L]λ_ℓK_ℓ(μ_ℓ,γ_ℓ) }_:= I_D, λ[γ] = sup_γ∈𝒫_D I_D, λ[γ]. Using <Ref> and the similar seasoning as the proof of <Ref>, we can show ℐ_D^⋆ (λ) = sup_γ∈𝒫_D I_D, λ[γ] = sup_π∈Γ(Π, φ_λ ) ∫_𝒱×𝒱φ_λ dπ = sup_π∈Π(μ_1, …, μ_L)∫_𝒱 g_λ dπ. The desired result follows from <Ref>. §.§.§ Proof of Similarly to the proof of <Ref>, it is sufficient to derive the dual reformulation of ℐ(δ) for δ∈ℝ^L_++. Let 𝒫̅ denote the set of γ∈𝒫(𝒮) that satisfies K_ℓ(μ_ℓ,L, γ_ℓ,L) < ∞ for all ℓ∈ [L] and ∫_𝒮 f d γ > -∞. Taking the Legendre transform on ℐ yields that any λ∈ℝ^2_+, ℐ^⋆ (λ) := sup_δ∈ℝ_+^L {ℐ(δ) - ⟨λ, δ⟩} =sup_δ∈ℝ_+^L sup_γ∈Σ(δ) {∫_𝒱 f dγ - ⟨λ, δ⟩} = sup_γ∈𝒫̅{∫_𝒱 g dγ - ∑_ℓ∈ [L]λ_ℓK_ℓ(μ_ℓ,γ_ℓ) }_:= I_λ[γ] = sup_γ∈𝒫̅ I_λ[γ]. For notational simplicity, we write Π := Π(μ_1,L+1, …, μ_L, L+1). Using <Ref> and the similar seasoning as in the proof of <Ref>, we can show ℐ^⋆ (λ) = sup_γ∈𝒫̅ I_λ[γ] = sup_π∈Γ(Π, ϕ_λ ) ∫_𝒱×𝒱φ_λ dπ = sup_π∈Π∫_𝒱 f_λ dπ. The desired result follows from <Ref>. §.§.§ Proof of The proof is identical to that of <Ref>.
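To complement the closed-form expressions established above, the following rough numerical sketch (ours; the covariances, means, and radii are synthetic placeholders) evaluates the two dual formulas quoted in the preceding subsections, namely the covariate-free value ℐ_D(δ) and the covariate-assisted value ℐ(δ), and checks that the latter never exceeds the former, in line with the comparison they are used to prove.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
d = 3                                        # dimension of the covariate X

def random_cov(dim):
    A = rng.normal(size=(dim, dim))
    return A @ A.T + dim*np.eye(dim)         # well-conditioned covariance matrix

# joint covariance of (Y_l, X), with the Y block placed first
V1, V2 = random_cov(d + 1), random_cov(d + 1)
EY1, EY2 = 0.0, 1.0                          # placeholder means of Y_1 and Y_2
delta = np.array([0.4, 0.7])                 # placeholder radii (delta_1, delta_2)

def blocks(V):
    return V[0, 0], V[0, 1:], V[1:, 1:]      # V_YY, V_YX, V_XX

V1YY, V1YX, V1XX = blocks(V1)
V2YY, V2YX, V2XX = blocks(V2)
S1 = V1YY - V1YX @ np.linalg.solve(V1XX, V1YX)   # Schur complement V_1 / V_1,XX
S2 = V2YY - V2YX @ np.linalg.solve(V2XX, V2YX)   # Schur complement V_2 / V_2,XX
Vo = np.linalg.solve(V2XX, V2YX) - np.linalg.solve(V1XX, V1YX)

# covariate-free bound: closed form
I_D = EY2 - EY1 + np.sqrt(V1YY*delta[0]) + np.sqrt(V2YY*delta[1])

# covariate-assisted bound: minimize the dual objective over lambda > 0
def dual(u):
    lam1, lam2 = np.exp(u)                   # positivity via reparametrisation
    W = lam1*np.linalg.inv(V1XX) + lam2*np.linalg.inv(V2XX)
    return (lam1*delta[0] + lam2*delta[1] + S1/(4*lam1) + S2/(4*lam2)
            + 0.25*Vo @ np.linalg.solve(W, Vo))

I_cov = EY2 - EY1 + minimize(dual, x0=np.zeros(2)).fun
print(I_cov, I_D, I_cov <= I_D + 1e-6)       # covariate information can only tighten the bound
```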
http://arxiv.org/abs/2307.02153v1
20230705095007
A renormalization group improvement for thermally resummed effective potential
[ "Koichi Funakubo", "Eibun Senaha" ]
hep-ph
[ "hep-ph" ]
funakubo@cc.saga-u.ac.jp eibunsenaha@vlu.edu.vn ^1Department of Physics, Saga University, Saga 840-8502 Japan ^2Subatomic Physics Research Group, Science and Technology Advanced Institute, Van Lang University, Ho Chi Minh City, Vietnam ^3Faculty of Applied Technology, School of Technology, Van Lang University, Ho Chi Minh City, Vietnam We propose a novel method for renormalization group improvement of thermally resummed effective potential. In our method, β-functions are temperature dependent as a consequence of the divergence structure in resummed perturbation theory. In contrast to the ordinary MS scheme, the renormalization group invariance of the resummed finite-temperature effective potential holds order by order, which significantly mitigates a notorious renormalization scale dependence of phase transition quantities such as a critical temperature even at the one-loop order. We also devise a tractable method that enables one to incorporate temperature-dependent higher-order corrections by fully exploiting the renormalization group invariance. A renormalization group improvement for thermally resummed effective potential Eibun Senaha^2,3 August 1, 2023 ============================================================================== § INTRODUCTION Thermal effective potential has been widely used to analyze phase transitions such as electroweak phase transition (EWPT). As is well know, the perturbative method to evaluate the effective potential suffers from bad high-temperature behavior even in a theory with small coupling constants <cit.> . One of the remedies is to incorporate the most dominant part of the higher order terms at high temperatures, that is, the mass corrections which are proportional to T^2, into the lower order contributions in a systematic manner. This is the so-called resummation of the thermal mass, or, simply, the thermal resummation. The resummation also cures the infrared divergence originating from the zero Matsubara frequency mode in bosonic-loop contributions.[Since the smallest frequency of a fermion is π T, the effect of the fermionic thermal resummation is much weaker than the bosonic one, so one usually considers only the bosonic thermal resummation.] The renormalization-group (RG) improvement of the effective potential is another method to re-arrange the perturbation series, in which a part of higher-oder contributions are taken into the lower-order terms in perturbation theory <cit.>. It is based on the fact that the bare Lagrangian, hence, the all-order results including the counterterms (CTs), be independent of the renormalization scale. Although the perturbative effective potential has an explicit scale dependence at some fixed order, the scale invariance is improved by introducing the running parameters. Once the effective potential is made scale invariant at some order, the scale can be fixed in such a way that some part of the higher order terms vanish. The running parameters defined by use of β-functions have been often determined by renormalizing the theory with the MS-scheme. At finite temperatures, a new scale dependent term arises, which cannot be taken care of by the running parameters defined by the MS-scheme. This situation is made more serious, when one executes the thermal resummation, leading to violation of the order-by-order RG invariance of the effective potential (for recent studies, see, e.g., Refs. <cit.>). 
In this letter, we propose a novel RG improvement method for the resummed effective potentials in which the RG invariance holds order by order. In our method, β-functions are properly defined in resummed perturbation theory instead of using those in the MS scheme. As a consequence, our β-functions of the dimensionful parameters are temperature dependent. For illustrative purpose, we first work in the ϕ^4 theory and explicitly show the RG invariance of the resummed one- and two-loop effective potentials in our scheme. Moreover, we further refine the effective potential by incorporating a series of dominant temperature-dependent higher-order terms by taking advantage of the RG invariance. To apply our scheme to a case of first-order phase transition as needed for electroweak baryogenesis <cit.> (for reviews see, e.g., Refs. <cit.>), we extend the ϕ^4 theory by adding another real scalar field. We make numerical comparisons between the MS and our schemes and show that the latter yields much less renormalization scale dependence on a critical temperature even at the one-loop level. At the two-loop level, however, not much numerical differences are observed between the two schemes unless hard thermal loops are significantly sizable. Our numerical study also shows that our refined RG-improved one-loop effective potential can capture the two-loop order effects properly. This would be particularly useful when the two-loop effective potential is not available. § Β-FUNCTIONS IN THE RESUMMED THEORY We first clarify differences between β-functions in the MS and those in our scheme. To make our discussion simpler, we focus on scalar theories. The derivation of β-functions in more general theories is given in Ref. <cit.> (see also Ref. <cit.>). Let us collectively denote scalar fields and couplings as ϕ_i(x) and g_k and scalar masses as m_a^2. We also define a vacuum energy as Ω. As we see in the next section, Ω is also needed to show the RG invariance of the effective potential. In this work, we adopt the dimensional regularization in which the spacetime dimension is analytically continued to d=4-ϵ dimension. Since the β-functions of the dimensionless parameters are not affected within the scope of our discussion here, we derive only those of dimensionful parameters. In the MS scheme, the bare parameters are expressed in terms of the renormalized ones and ϵ poles as m_Ba^2 = (δ_ab+∑_n=1^∞b_ab^(n)(g)/ϵ^n)m_b^2, Ω_Bμ^ϵ = Ω+∑_n=1^∞ω_n(g)/ϵ^n, where μ is an arbitrary scale. From the μ independence of the bare parameters, one can define the β-functions of each parameter as m_a^2β_m_a^2 = lim_ϵ→0μd m_a^2/dμ = ∑_k,bb_ab,k^(1)g_km_b^2, β_Ω =lim_ϵ→0μd Ω/dμ = ω_1, where b_ab,k^(1)=db_ab^(1)/dg_k. It is important to note that the β-functions are given by the coefficients of the single ϵ pole, which implies that if those coefficients are modified by thermal resummations, the β-functions would no longer remain the same for the theoretical consistency. This is exactly the case we consider in the following. In resummed perturbation theories, the Lagrangian is reorganized as <cit.> ℒ_B = ℒ_R+ℒ_CT = [ℒ_R-1/2Σ_a(T)ϕ_a^2] +[ℒ_CT +1/2Σ_a(T)ϕ_a^2], where Σ_a(T) denotes the thermal mass of the scalar ϕ_a. At the leading order, Σ_a(T)=𝒪(g_i T^2) with g_i representing scalar quartic couplings. 
Even though nothing has changed in the bare Lagrangian, Σ_a(T) in the first square brackets is regarded as zeroth order in the resummed perturbation theory while that in the second ones is the part of the counterterm (CT) which is one order higher in this perturbative expansion (referred to as thermal counterterm hereafter). Because of this reorganization, the propagators of the scalars are temperature dependent, and one encounters temperature dependent divergences when computing effective potentials at loop levels. Although such divergences must be cancelled in the all-order calculation, they inevitably appear at a fixed order in the resummed perturbation theory. With this consideration, we modify Eq. (<ref>) as m_Ba^2 = (δ_ab+∑_n=1^∞b_ab^(n)(g)/ϵ^n)m_b^2+∑_n=1^∞b̃_ab^(n)(g)/ϵ^nΣ_b(T). As explicitly shown in concrete models up to the two-loop level in the next two sections, Σ(T) must be treated as if it were the μ-independent objects though it contains g_i. This condition, called consistency condition, is necessary to prove the order-by-order RG invariance of the resummed effective potential. Following the same procedure as in the MS scheme with the consistency condition, one obtains m_a^2β_m_a^2 =∑_k,b(b_ab,k^(1)m_b^2+b̃_ab,k^(1)Σ_b)σ_kg_k. We note that although the vacuum energy is also modified by the thermal resummation, the relation β_Ω=ω_1 still holds under the aforementioned consistency condition. § Φ^4 THEORY We demonstrate how our RG scheme works using the ϕ^4 theory. The bare Lagrangian is given by ℒ_B = 1/2∂_μΦ_B ∂^μΦ_B-V_B(Φ_B), V_B(Φ_B) = Ω_B-ν_B^2/2Φ^2+λ_B/4!Φ_B^4. As mentioned in Sec. <ref>, after decomposing ℒ_B into ℒ_R and ℒ_CT, we subtract and add Σ(T) in each part. The explicit forms of CTs are summarized in Ref. <cit.>. With the resummed Lagrangian, we evaluate the effective potential up to the two-loop level. Denoting the classical background field as φ, the tree-level effective potential has the form V_0(φ) =Ω+1/2(-ν^2+Σ(T))φ^2+λ/4!φ^4, The field-dependent mass is defined as M^2 = ∂^2 V_0/∂φ^2=m^2 + Σ(T), with m^2 = -ν^2+λφ^2/2. Using a propagator with M^2, one can obtain the one-loop correction to the effective potential <cit.> μ^ϵ V_1(φ) = M^4/4(16π^2)(-2/ϵ+lnM^2/μ̅^2-3/2+𝒪(ϵ)), where μ̅ = √(4π e^-γ_E^)μ≃ 2.66 μ with γ_E^ being the Euler constant. In our renormalization scheme, we remove the divergences including the temperature dependent pieces by the one-loop CTs, resulting in δ^(1)Ω = 1/ϵ(ν^2-Σ)^2/32π^2, δ^(1)ν^2 = 1/ϵλ/16π^2(ν^2-Σ), δ^(1)λ = 1/ϵ3λ^2/16π^2. The bare ν_B is expressed as ν_B^2 = Z_Φ^-2(ν^2+δ^(1)ν^2), where Z_Φ denotes the wavefunction renormalization constant for Φ, and Z_Φ=1 at the one-loop level. From Eq. (<ref>), the coefficient of the single ϵ pole is found to be b_1(λ)=-b̃_1(λ)=λ/16π^2. Plugging them into the formula (<ref>), one obtains ν^2β_ν^2^(1) = λ(ν^2-Σ)/16π^2. By doing the same step, one can find the β-functions of the remaining parameters and γ-function as β_Ω^(1) = (ν^2-Σ)^2/32π^2, β_λ^(1) = 3λ^2/16π^2, γ_Φ^(1) = 0. Note that the β-functions in our scheme are reduced to those in the MS scheme by taking Σ=0, which implies that differences between our scheme and MS scheme could be sizable when Σ dominates over ν^2. We also note that in the MS scheme, the T-dependent divergences appearing in Eq. (<ref>) remain at this order, and higher-loop contributions are needed to cancel them <cit.> (See also Ref. <cit.>).[If ℒ_R and ℒ_CT are defined as in Ref. <cit.> instead of the way they are defined in Eq. 
(<ref>), the order-by-order renormalization with the MS scheme also holds by regarding the thermal mass term as one-order higher. ] After the renormalization, the resummed one-loop effective potential at the one-loop level is given by V_eff(φ) = V_0(φ)+V_1(φ), where V_0(φ) =Ω+1/2(-ν^2+Σ(T))φ^2+λ/4!φ^4, V_1(φ) = M^4/4(16π^2)( lnM^2/μ̅^2-3/2) +T^4/2π^2I_B(A^2)-1/2Σ(T)φ^2, with A^2=M^2/T^2 and the thermal function of the boson (I_B) is defined as I_B(A^2) = ∫_0^∞ dx x^2ln(1-e^-√(x^2+A^2)). The last term in V_1(φ; T) is nothing but the thermal CT. In the high-T expansion, the +Σ(T)φ^2/2 term arises from T^4I_B/(2π^2), which is cancelled by the thermal CT, avoiding the double counting of Σ(T)φ^2/2. As is the one-loop level, we regularize the two-loop effective potential by requiring that all the divergences be absorbed by the CTs, As a result, the two-loop contributions to the β-functions of the model parameters in our scheme are, respectively, given by γ_Φ^(2) = λ^2/12(16π^2)^2, β_Ω^(2) = (ν^2-Σ)Σ/16π^2, ν^2β^(2)_ν^2 = λ^2(-ν^2+Σ)/(16π^2)^2+λΣ/16π^2+2ν^2γ_Φ^(2), β_λ^(2) = -6λ^3/(16π^2)^2+4λγ_Φ^(2). We should note that β_ν^2^(2) contains the λΣ/16π^2 term that is only one-loop suppressed. This is exactly the same form as the thermal correction term in Eq. (<ref>) with an opposite sign. Therefore, they are seemingly cancelled with each other in the sum of the one- and two-loop β-functions β_ν^2 = β_ν^2^(1)+β_ν^2^(2). However, when we evaluate β_ν^2 perturbatively, λΣ/16π^2 in β_ν^2^(2) should be treated as the one-order higher term than that in β_ν^2^(1). In contrast to the MS-scheme, β_Ω^(2) is nonzero in our scheme. After the renormalization, the the two-loop correction to the resummed effective potential is cast into the form V_2(φ) = λ/8I̅^2(M)-λ^2φ^2/12H̃(M)-1/2Σ(T)I̅(M), where the thermal functions H̃(M) and I̅(M) are defined as H̃(M) = 3 [ -I̅^2(M)/2M^2+I̅(M)/16π^2-M^2/(16π^2)^2(1+2/3f_2) -1/2M^2T^2/π^2(I_B'(A^2))^2-T^2/16√(3)π^3I_B'(A^2) +4T^2/(16π^2)^2K(A) ], I̅(M) = M^2/16π^2(lnM^2/μ̅^2-1)+T^2/π^2I_B'(A^2), with I'_B(A^2)=∂ I_B(A^2)/∂ A^2 and f_2≃ -1.76. K(A) is a genuine thermal function arising from a sunset-type diagram. Its explicit form is given in Ref. <cit.>. The last term comes from the thermal CT which plays a role in eliminating the double counting and linear-like terms in φ such as 𝒪((M^2)^1/2T^3) <cit.>. Now we scrutinize the RG invariances of the resummed effective potentials obtained above. The effective potential satisfies RGE 0 = μ dV_eff/dμ≡𝒟V_eff = [ μ∂/∂μ+ν^2β_ν^2∂/∂ν^2+β_λ∂/∂λ-γ_Φ^φ∂/∂φ+β_Ω∂/∂Ω]V_eff. Let us check the RG invariance of the resummed effective potential at the one-loop level. Applying (<ref>) to V_0 and V_1 respectively, one finds 𝒟V_0|_one-loop = β^(1)_Ω-ν^2/2β_ν^2^(1)φ^2+β_λ^(1)/4!φ^4 =M^4/32π^2, 𝒟V_1|_one-loop = μ∂ V_1/∂μ=-M^4/32π^2+𝒪(1/(16π^2)^2), where the consistency condition 𝒟Σ=0 is used. Therefore, one gets 𝒟(V_0+V_1)=0+𝒪(1/(16π^2)^2), and the error is the two-loop order. On the other hand, if one uses the MS scheme, the error is estimated as 𝒟(V_0+V_1)_MS=(-2m^2+Σ)Σ/(32π^2)+𝒪(1/(16π^2)^2) → -λφ^2Σ/(32π^2)+𝒪(1/(16π^2)^2), where the φ-independent terms are suppressed after the right arrow. Note that despite the lack of the RG invariance in the MS scheme, the scale dependence could be unexpectedly smaller than that in our scheme due to an accidental cancellation between the two different errors. An example is given in Ref. <cit.>. However, such a less scale dependence has no robust footing. 
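The one-loop check above can also be reproduced mechanically. The short symbolic sketch below (ours; the symbol names are arbitrary) verifies that the temperature-dependent one-loop β-functions quoted earlier indeed give 𝒟V_0|_one-loop = M^4/(32π^2) once Σ is treated as μ-independent, i.e. under the consistency condition.

```python
import sympy as sp

Omega, nu2, lam, Sigma, phi, pi = sp.symbols('Omega nu2 lambda Sigma varphi pi', positive=True)

M2 = -nu2 + Sigma + lam*phi**2/2                       # field-dependent mass M^2
V0 = Omega + sp.Rational(1, 2)*(-nu2 + Sigma)*phi**2 + lam*phi**4/24

# one-loop beta-functions in the resummed scheme (gamma_Phi = 0 at this order)
beta_Omega   = (nu2 - Sigma)**2/(32*pi**2)
nu2_beta_nu2 = lam*(nu2 - Sigma)/(16*pi**2)            # this is nu^2 * beta_{nu^2}
beta_lam     = 3*lam**2/(16*pi**2)

# D V_0 = beta_Omega dV0/dOmega + nu^2 beta_{nu^2} dV0/dnu^2 + beta_lambda dV0/dlambda
DV0 = (beta_Omega*sp.diff(V0, Omega)
       + nu2_beta_nu2*sp.diff(V0, nu2)
       + beta_lam*sp.diff(V0, lam))

print(sp.simplify(DV0 - M2**2/(32*pi**2)))             # prints 0
```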
The proof of the RG invariance at the two-loop level is also straightforward. Applying the derivative operator 𝒟 to the resummed effective potentials (<ref>), (<ref>), and (<ref>), respectively, we can verify that 𝒟(V_0+V_1+V_2)|_two-loop =0+𝒪(1/(16π^2)^3). We here emphasize again that the order-by-order RG invariance holds by virtue of the modified β-functions in our scheme. Now we consider a further refinement that fully exploits the RG invariance to incorporate a series of temperature-dependent higher-order terms. The explicit form of the resummed one-loop effective potential that satisfies RGE (<ref>) is V̅_eff(φ̅;t)=V̅_0(φ̅;t)+V̅_1(φ̅;t) = Ω̅+1/2(-ν̅^2+Σ)φ̅^2+λ̅/4!φ̅^4 +M̅^4/4(16π^2)(lnM̅^2/e^2tμ̅_0^2-3/2)+T^4/2π^2I_B(A̅^2) -1/2Σφ̅^2, with A̅=M̅/T, M̅^2 = -ν̅^2+Σ(T)+λ̅φ̅^2/2. The barred parameters Ω̅, ν̅^2, λ̅, and φ̅ are the running parameters as functions of t=ln(μ̅/μ̅_0) with μ̅_0 being an initial scale. Hereafter, the unbarred parameters are defined at t=0. Because t is arbitrary, it would be preferable to determine it in such a way that dominant higher-order terms are incorporated into the potential (<ref>). At zero temperature, we could choose t(φ)=ln(m̅^2/μ̅_0^2)/2 to absorb logarithmic terms that could ruin the validity of perturbativity in some domain <cit.>. At finite temperature, however, this choice is not able to tame dominant temperature-dependent terms arising from I̅(M̅) ≃_T≫M̅T^2/12, For this reason and because the truncation error of RGE at this order is given by d V̅_eff(φ̅;t)/d t=∂V̅_eff(φ̅;t)/∂ t = 0+1/2∂M̅^2/∂ tI̅(M̅), we choose t to eliminate this error at each φ, yielding t(φ) = 8π^2/M̅^2I̅(M̅)_t=0. In this scheme, the higher-order terms in I̅(M̅) appearing beyond the one-loop order can be taken into (<ref>) through the t-φ relation in Eq. (<ref>). In the zero temperature limit, Eq. (<ref>) is reduced to t(φ)=ln(m̅^2/eμ̅_0^2)/2. Therefore, our scheme in this limit is related to the aforementioned scheme t(φ)=ln(m̅^2/μ̅_0^2)/2 by changing the input scale μ̅_0 to μ̅_0/√(e). Let us denote the RG-improved potential (<ref>) with ℓ-loop order β-functions as V̅_eff^(ℓ)(φ; t(φ)), which contains a part of the higher order terms beyond the ℓ-loop, arising from the running parameters including the vacuum energy Ω̅(t). It is easy to check that V̅_eff^(1)(φ; t(φ)) include I̅(M̅) terms in the two-loop effective potential (<ref>) using the t expansion of (<ref>) V̅_eff(φ; t) = V̅_eff(φ; 0) +∂V̅_eff(φ; t)/∂ t|_t=0t +1/2∂^2 V̅_eff(φ; t)/∂ t^2|_t=0t^2+⋯ and the t-φ relation (<ref>). From those expression, for example, it follows that V̅_eff^(1)(φ; t(φ)) =V̅_eff^(1)(φ; 0) +λ(M^2+λφ^2)/8M^2I̅^2(M)_t=0. The second term is exactly the same as 𝒪(I̅^2(M)) terms in V_2(φ) given in Eq. (<ref>). On the other hand, V̅_eff^(2)(φ; t(φ)) contains even 𝒪(I̅(M)) terms including the thermal CT in V_2(φ). This appears analogous to the leading and next-to-leading logarithmic resummations at zero temperature <cit.>. An important difference is that t-expanded V̅_eff^(2)(φ; t(φ)) includes more terms that are not present in the fixed-order (t=0) V_2(φ) in Eq. (<ref>) <cit.>. Since the ϕ^4 theory does not accommodate the first-order phase transition, we will consider a multi-scalar theory in the next section. § Φ^4 THEORY WITH ADDITIONAL SCALAR As a simplest extension, another real scalar field is added to the ϕ^4 theory in order to compare quantities related to first-order phase transition in both MS and our schemes. For illustration, we consider a simplified potential by imposing two ℤ_2 symmetries. 
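Before moving to the multi-scalar case, we note that the construction above is straightforward to implement numerically. The sketch below (ours) evaluates the one-loop RG-improved resummed potential of the ϕ^4 theory at the scale choice t(φ): it computes Ī(M) at t = 0 from the thermal functions I_B and I_B', sets t(φ) = 8π^2 Ī(M)/M^2, integrates the temperature-dependent one-loop β-functions up to t(φ) with Σ held fixed (the consistency condition), and assembles V̄_eff(φ̄; t(φ)). The leading-order thermal mass Σ = λT^2/24 used here is the standard ϕ^4 expression (consistent with the two-scalar formulas quoted in the next section with λ_3 = 0) and is an input of this sketch rather than a formula displayed above; all numerical values are placeholders, and the sketch assumes M^2 > 0.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

PI2 = np.pi**2

def I_B(A2):
    # I_B(A^2) = int_0^infty dx x^2 ln(1 - exp(-sqrt(x^2 + A^2)))
    f = lambda x: x**2*np.log1p(-np.exp(-np.sqrt(x**2 + A2)))
    return quad(f, 0.0, 50.0)[0]

def I_B_prime(A2):
    # dI_B/dA^2 = int_0^infty dx x^2 / (2 sqrt(x^2 + A^2) (exp(sqrt(x^2 + A^2)) - 1))
    f = lambda x: x**2/(2.0*np.sqrt(x**2 + A2)*np.expm1(np.sqrt(x**2 + A2)))
    return quad(f, 0.0, 50.0)[0]

def V_eff_improved(phi, T, nu2, lam, Omega, mu0):
    Sigma = lam*T**2/24.0                         # leading-order thermal mass (standard phi^4 result)
    M2 = -nu2 + Sigma + 0.5*lam*phi**2            # resummed field-dependent mass at t = 0
    # scale choice t(phi) = 8 pi^2 Ibar(M)/M^2, evaluated at t = 0
    Ibar0 = M2/(16*PI2)*(np.log(M2/mu0**2) - 1.0) + T**2/PI2*I_B_prime(M2/T**2)
    t = 8*PI2*Ibar0/M2
    # run (Omega, nu^2, lambda) with the one-loop betas, Sigma held fixed; gamma_Phi = 0, so phi does not run
    def betas(_, y):
        Om, n2, la = y
        return [(n2 - Sigma)**2/(32*PI2),
                la*(n2 - Sigma)/(16*PI2),
                3*la**2/(16*PI2)]
    Om_t, nu2_t, lam_t = solve_ivp(betas, (0.0, t), [Omega, nu2, lam], rtol=1e-8).y[:, -1]
    M2_t = -nu2_t + Sigma + 0.5*lam_t*phi**2
    mubar2 = np.exp(2.0*t)*mu0**2                 # running scale mubar^2 = exp(2t) mubar_0^2
    return (Om_t + 0.5*(-nu2_t + Sigma)*phi**2 + lam_t*phi**4/24.0
            + M2_t**2/(64*PI2)*(np.log(M2_t/mubar2) - 1.5)
            + T**4/(2*PI2)*I_B(M2_t/T**2)
            - 0.5*Sigma*phi**2)                   # thermal counterterm

# example call with placeholder inputs (dimensionful quantities in a common arbitrary unit):
# V_eff_improved(phi=250.0, T=50.0, nu2=2500.0, lam=0.1, Omega=0.0, mu0=100.0)
```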
The bare potential of the extended model has the form V_0(Φ_B1,Φ_B2) =Ω_B+ν_B1^2/2Φ_1^2+ν_B2^2/2Φ_B2^2 +λ_B1/4!Φ_B1^4+λ_B2/4!Φ_B2^4+λ_B3/4Φ_B1^2Φ_B2^2, which is invariant under ℤ_2 symmetries Φ_B1→ -Φ_B1 and Φ_B2→-Φ_B2. As in the ϕ^4 theory, we subtract and add the thermal masses of Φ_1 and Φ_2 (denoted as Σ_1 and Σ_2) in the renormalized Lagrangian and CTs, respectively. In this study, we assume that only Φ_1 develops the vacuum expectation value while Φ_2 does not. For later use, the classical background field of Φ_1 is denoted as φ. It is straightforward to show that the finite temperature effective potentials up to the two-loop level satisfy the RGE by virtue of the temperature-dependent β-functions in our scheme <cit.>. To improve the potentials further, we choose t in order to incorporate a series of temperature-dependent higher-order terms. For instance, at the one-loop order, we impose ∂V̅_eff(φ̅; t)/∂ t = 0+1/2∑_i∂M̅_i^2/∂ tI̅(M̅_i) =0, where M̅_1^2 = ν̅_1^2+Σ_1(T)+λ̅_1φ̅^2/2 and M̅_2^2 = ν̅_2^2+Σ_2(T)+λ̅_3φ̅^2/2 with Σ_1(T) = (λ_1+λ_3)T^2/24 and Σ_2(T) = (λ_2+λ_3)T^2/24. With this condition, the RG-improved effective potential is given by V̅_eff(φ̅;t(φ)) = V̅_0(φ̅;t(φ))+V̅_1(φ̅;t(φ)) = Ω̅+1/2(ν̅_1^2+Σ_1(T))φ̅^2+λ̅_1/4!φ̅^4 +∑_i=1,2[M̅_i^4/4(16π^2)(lnM̅_i^2/e^2tμ̅_0^2-3/2)+T^4/2π^2I_B(A̅_i^2)] -1/2Σ_1(T)φ̅^2, where A̅_i=M̅_i/T, and the explicit form of t(φ) is t(φ) = 8π^2∑_i∂M̅_i^2/∂ tI̅(M̅_i)_t=0/∑_iM̅_i^2∂M̅_i^2/∂ t. Expanding (<ref>) in powers of t, V̅_eff^(1)(φ; t) is cast into the form V̅_eff^(1)(φ; t(φ)) =V̅_eff^(1)(φ;0) +( ∑_iα_iI̅(M_i)_t=0)^2 /8∑_iα_iM_i^2, where α_i = 16π^2∂M̅_i^2/∂ t|_t=0. Unlike the ϕ^4 theory, the form of the second term does not coincides with that in the fixed-order two-loop effective potential V_2. Such a mismatch between the RG-improved and fixed-order effective potentials is peculiar to the multi-field case, which is attributed to the fact that the single parameter t alone cannot incorporate two different I̅(M_i) terms correctly in principle. We investigate to what extent our scheme can capture the higher-order effects by comparing with the two-loop order result. In this model, there are 5 parameter in the scalar potential, i.e., (ν_1^2, ν_2^2, λ_1, λ_2, λ_3). Using vacuum and mass conditions, we convert them into (v, ν_2^2, m_1, λ_2, m_2). As an example of the first-order phase transition, we take v(μ̅_0)=200, m_1(μ̅_0)=5.0, m_2(μ̅_0)=125, ν_2^2(μ̅_0)=85.0^2, λ_2(μ̅_0)=5.0, where ν_1^2(μ̅_0) and λ_1(μ̅_0) are determined by the first and second derivatives of the effective potentials at a given order while λ_3(μ̅_0) at the tree-level. μ̅_0 is fixed by the condition t(φ=v)=0. At the both one- and two-loop levels, μ̅_0≃ 75.81. The dimensionful parameters are given in units of any mass scale. Because of the smallness of m_1, the appearance of the imaginary parts of the effective potentials is only limited to low temperature, and the effective potentials are all real and well-defined near critical temperatures T_C, where the potentials have two degenerate minima. In Fig. <ref>, v(T)/T are shown as a function of the temperature T in the MS (left) and our (right) schemes, respectively. The dotted and dashed curves in blue represent the results obtained by using the one-loop effective potential (<ref>) in the cases of t=0 and 5, respectively. As clearly seen, the renormalization scale dependence on T_C in the MS case is much larger than that in our scheme. This is due to large violation of RG invariance in the former. 
In Fig. <ref>, v(T)/T is shown as a function of the temperature T in the MS (left) and our (right) schemes, respectively. The dotted and dashed curves in blue represent the results obtained by using the one-loop effective potential (<ref>) in the cases of t=0 and 5, respectively. As clearly seen, the renormalization scale dependence of T_C in the MS case is much larger than that in our scheme. This is due to the large violation of RG invariance in the former. On the other hand, the dot-dashed and two-dot-dashed lines in red correspond to the results using (<ref>) and the two-loop effective potential V̅_2(φ̅,t) [the RG-improved version of Eq. (<ref>) but with the two scalars] with t=0 and 5, respectively. In those cases, the renormalization scale dependence is even milder than that in the one-loop result with our scheme. Note that the improvement in the MS scheme is due to the partial restoration of the RG invariance. One can explicitly check that the effective potential respects the RG invariance up to the 𝒪(λ_i^2T^2) order in the high-temperature limit <cit.>. In this parameter set, the residual RG-noninvariant terms are numerically small and the truncation errors become dominant, which explains the two-loop results. We also overlay the results obtained by using the effective potentials V̅_eff^(1)(φ̅, t(φ)) and V̅_eff^(2)(φ̅, t(φ)) with the t-φ relation (<ref>). The former is denoted by the solid line in grey, while the latter by the thick solid line in black. One can see that v(T_C)/T_C obtained using V̅_eff^(2)(φ̅, t(φ)) lies within the two-loop-level scale uncertainties in both schemes, which is not the case for V̅_eff^(1)(φ̅, t(φ)). This demonstration suggests that V̅_eff^(2)(φ̅, t(φ)) can give results closer to those at the two-loop order. § CONCLUSION We have proposed a novel method for the renormalization group improvement of thermally resummed effective potentials. In our method, the RG invariance of the resummed finite-temperature effective potential holds order by order, since the β-functions are correctly defined in resummed perturbation theory. Taking the extended ϕ^4 theory as an example, we showed that the renormalization scale dependence of the first-order phase transition quantities, especially T_C, in our scheme is much smaller than that in the MS scheme even at the one-loop level. At the two-loop level, however, no significant differences are observed in the two schemes. This is because the RG invariance in the MS scheme is restored up to the 𝒪(λ_i^2T^2) order in the high-temperature limit and the residual RG-noninvariant terms are numerically unimportant. We also devised a tractable method that enables one to incorporate a series of temperature-dependent higher-order corrections by utilizing the RG invariance in our scheme. Applying this method to the RG-improved one-loop effective potential, v(T_C)/T_C in the case of V̅_eff^(2)(φ̅, t(φ)) falls within the uncertainties of the two-loop order renormalization scale dependence, suggesting that our refined method could be a practical choice when the two-loop effective potential is not available.
http://arxiv.org/abs/2307.02908v1
20230706104030
Modelling a Hot Horizon in Global 21 cm Experimental Foregrounds
[ "Joe H. N. Pattison", "Dominic J. Anstey", "Eloy de Lera Acedo" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.IM" ]
The 21 cm signal from cosmic hydrogen is one of the most propitious probes of the early Universe. The detection of this signal would reveal key information about the first stars, the nature of dark matter, and early structure formation. We explore the impact of an emissive and reflective, or `hot', horizon on the recovery of this signal for global 21 cm experiments. It is demonstrated that using physically motivated foreground models to recover the sky-averaged 21 cm signal one must accurately describe the horizon around the radiometer. We show that not accounting for the horizon will lead to a signal recovery with residuals an order of magnitude larger than the injected signal, with a log Bayesian evidence of almost 1600 lower than when one does account for the horizon. It is shown that signal recovery is sensitive to incorrect values of soil temperature and reflection coefficient in describing the horizon, with even a 10% error in reflectance causing twofold increases in the RMSE of a given fit. We also show these parameters may be fitted using Bayesian inference to mitigate for these issues without overfitting and mischaracterising a non-detection. We further demonstrate that signal recovery is sensitive to errors in measurements of the horizon projection onto the sky, but fitting for soil temperature and reflection coefficients with priors that extend beyond physical expectation can resolve these problems.
We show that using an expanded prior range can reliably recover the signal even when the height of the horizon is mismeasured by up to 20%, decreasing the RMSE from the model that does not perform this fitting by a factor of 9. methods: data analysis – cosmology: dark ages, reionization, first stars – cosmology: early Universe § INTRODUCTION Creating a timeline of the universe between the period of recombination and the end of reionisation is a necessary step for cosmologists to describe the makeup of the early universe. Direct observation of cosmic neutral hydrogen remains the most promising tool to understand the universe between z ≈ 1100 and z ≈ 10. The hyperfine spin flip of neutral hydrogen produces an emission line at a rest frame of 21 cm <cit.>. The power of this feature with respect to the radio background as it redshifts through cosmic epochs will provide crucial information about structure formation up to the Epoch of Reionisation. The depth, position and width of the 21 cm absorption feature allows us to probe things like early star formation rate <cit.>, the initial mass functions of population III stars <cit.>, the nature of X-ray binaries <cit.>, and more exotic physics like dark matter distribution and properties <cit.>. We measure the 21 cm signal using the brightness temperature; that is, the temperature at which a blackbody in thermal equilibrium with an object would have to be to produce the same level of thermal excitation. The brightness temperature of this signal is several orders of magnitude lower than the radio foreground, making direct observation of the redshifted 21 cm signal extremely difficult <cit.>. Despite this, a speculative detection of the globally averaged 21 cm absorption trough was made by the `Experiment to Detect the Global Epoch of Reionization Signal' (EDGES) <cit.>. This detection describes the global signal as a flattened Gaussian centred at 76 MHz with a depth of 0.5 K. The flattened nature of the signal, as well as its depth, did not appear to match any existing theory at the time <cit.>. The depth of the signal demanded either an enhanced radio background <cit.>, or a way of cooling the Universe more rapidly than expected due to interactions with dark matter <cit.>; the flatness possibly being explained by two competing heating mechanisms, such as Lyman-α photons and cosmic rays becoming dominant at different times <cit.>. Questions, however, have been raised as to the robustness of the data analysis performed in the experiment <cit.>. Issues may have arisen from nonphysical electron temperatures, a damped sinusoidal systematic <cit.>, and other residuals <cit.> which may come from beam effects or other distortions such as the ionosphere <cit.>. The `Radio Experiment for the Analysis of Cosmic Hydrogen' (REACH) <cit.>, aims to perform an independent measurement of the sky-averaged 21 cm signal to either confirm or disprove the EDGES detection <cit.>. Utilizing a fully Bayesian data analysis pipeline it aims to model systematics and foregrounds in a more physically motivated manner than has been done previously. The global 21 cm absorption trough is predicted to sit between 70 and 200 MHz <cit.>, which lies in the ranges used by FM radio stations and Digital TV. This introduces a large amount of possible radio frequency interference (RFI) at a much higher intensity than the 21 cm signal, which poses a large problem for signal recovery. 
This RFI may be mitigated using data analysis tools <cit.>, but to further minimise this risk, the radiometer is set up in the Karoo Radio Astronomy Reserve in South Africa, surrounded by mountains on all sides. While the mountains accomplish the goal of greatly decreasing RFI around the antenna, they create a new issue that must be overcome. It was shown in <cit.> that a horizon will have a significant effect on the detection of the 21 cm signal. For an experiment like EDGES a horizon may be unnecessary to describe, as the polynomials used to fit their data may have been able to encompass a horizon without describing it specifically. Any experiment using physically motivated foreground models, for example REACH, however, will require a description of the horizon itself. This paper focuses on the REACH radiometer <cit.> and pipeline <cit.>, as this is the collaboration to which the authors belong, but this analysis is applicable to all global 21 cm experiments using physically motivated foreground models for signal recovery. Section <ref> deals with the expansion of the REACH pipeline to accommodate a horizon in data generation and foreground modelling. Section <ref> details the impact of this horizon on signal recovery, the reliance of signal recovery on correct soil parameter estimation, how fitting for soil parameters may liberate one from this reliance, and how this parameter fitting may increase the tolerated error in horizon profile mapping. In Section <ref> we outline our key conclusions and discuss future work to expand the models discussed in this paper. § METHODS In this section we describe Bayesian inference (<ref>), how we simulate a mock data set describing the power given off by a horizon surrounding the REACH antenna (<ref>), and the methods used to account for this power in the physically motivated foreground models needed to recover the redshifted 21 cm signal (<ref>). §.§ Bayesian Fitting The REACH pipeline <cit.> relies on Bayesian inference for parameter estimation and model comparison in the recovery of the redshifted 21 cm signal. Bayesian inference relies on Bayes' theorem, which states: P(θ_ℳ|𝒟,ℳ) = P(𝒟|θ_ℳ,ℳ)P(θ_ℳ|ℳ)/P(𝒟|ℳ); where one can infer the probability of having a set of parameters, θ_ℳ, given a set of data, 𝒟, and a proposed model, ℳ. This equation may be written more compactly as: 𝒫 = ℒπ/𝒵, where π is the prior, describing our assumptions about the initial probability distribution of the parameters we are estimating, which is updated to the posterior 𝒫. ℒ, the likelihood, may be read as the probability of observing the data given a model and a set of parameters for that model. 𝒵 is the evidence, representing the probability of observing the data given a model, integrated over all possible parameters that the model could use. Provided the input data is the same, we can compare the Bayesian evidences of two models to determine which model fits the data with the highest probability, following: P(ℳ|𝒟) = 𝒵P(ℳ)/P(𝒟), where the ratio of the evidences, weighted by the prior probabilities of the models (which we treat as a uniform distribution), will give the ratio of the probability of each model fitting the data. The REACH pipeline uses the Nested Sampling <cit.> algorithm Polychord <cit.> to perform this parameter estimation and model comparison. This algorithm randomly draws a set of parameter samples from the given prior, for each of which a likelihood is calculated.
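The nested-sampling procedure continues below. As a toy numerical illustration of the evidence comparison just described (this is not the REACH pipeline or Polychord; all data, noise levels, and priors are invented), one can compute 𝒵 for a one-parameter "signal" model and a zero-parameter "no signal" model by brute-force integration over the prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: flat baseline plus an injected Gaussian trough, with 25 mK white noise
nu = np.linspace(50.0, 200.0, 301)                                  # MHz
trough = lambda depth: -depth*np.exp(-0.5*((nu - 85.0)/15.0)**2)    # K
data = 1.0 + trough(0.155) + rng.normal(0.0, 0.025, nu.size)

def log_like(depth):
    return -0.5*np.sum(((data - 1.0 - trough(depth))/0.025)**2)

# model with a signal: uniform prior on the depth from 0 to 0.5 K; Z = integral of L(d) * prior(d)
d_grid = np.linspace(0.0, 0.5, 5001)
logL = np.array([log_like(d) for d in d_grid])
L0 = logL.max()                                      # factor out the peak for numerical stability
logZ_signal = L0 + np.log(np.sum(np.exp(logL - L0))*(d_grid[1] - d_grid[0])/0.5)

# model with no signal: no free parameters, so Z is just the likelihood at zero depth
logZ_nosignal = log_like(0.0)

print(f"delta ln(Z) = {logZ_signal - logZ_nosignal:.1f}  (positive favours a detection)")
```

The natural-log difference printed here can be converted to the base-ten convention used later in the text by dividing by ln 10. In realistic, many-parameter settings this brute-force integral is intractable, which is exactly why a nested sampler such as Polychord is used instead.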
The lowest likelihood points are discarded and the volume of the parameter space shrinks accordingly, updating the priors with a new sample being drawn from the newly constrained prior. This iterative shrinking of the prior space is done until a termination criterion has been met and parameters have been determined. The Bayesian evidences are generated as a byproduct of this and may be further used for model comparison. For more information on Nested Sampling see <cit.>. §.§ Data Simulation Using the Shapes algorithm developed by <cit.> we generate a profile of the horizon around the REACH antenna at -30.8387^∘N, 21.37492^∘E with data from Google Maps based on the Landsat/Copernicus surveys. We define a horizon mask such that all pixels with an altitude angle (θ) above that of the horizon for a given azimuthal angle (ϕ) are assigned a value of 0, and any pixel on or under the horizon is assigned a value of 1 (Horizon mask is shown in Figure <ref>). Maps of the variation of the spectral indicies across the sky are generated by calculating the spectral index required to map each pixel of the 2008 Global Sky Model (GSM) <cit.> at 408 MHz onto a corresponding map at 230 MHz following Equation <ref>, β(Ω) = log(T_230(Ω) - T_CMB/T_408(Ω)-T_CMB)/log(230/408). Here we choose a GSM at 230 MHz to avoid contamination of the cosmological redshifted 21 cm signal, which at the extremes of theoretical predictions should be extinguished between 200 and 250 MHz. We take then the 230 MHz base map and subtract a flat value of 2.725 K from each pixel to account for the temperature of the CMB. This map is then scaled to each frequency according to the previously calculated spectral indicies and rotated according to time and date of a given observation. Further detail of sky map generation in the REACH pipeline is found in <cit.>. Once these maps have been rotated we mask them by the physical horizon surrounding the radiometer. In this process we make the assumption that the sky and the horizon sit on the edge of an infinite, flat plane and we may ignore all near-field effects below an altitude angle of 0^∘. This assumption is nonphysical, and may cause some issues <cit.>, the impacts of near-field soil effects on REACH will be explored in a later work. Our horizon mask may then be multiplied by some estimate for soil temperature in Kelvin, T_soil, which will describe simple emission from the soil on the horizon, but will fail to replicate any additional power arising from radio waves reflected from the surface of the soil. This mask makes the assumption that all light behind the horizon is fully attenuated by the mountains surrounding REACH, and that the temperature of the horizon is constant. The former is justified as the vegetation surrounding REACH is minimal. Around REACH, while there is vegetation, there are no large trees in the close foreground that must be relied on to block parts of the sky; thus the attenuation from the mountains can be assumed to be complete. The latter is a more troublesome approximation, as T_soil will decrease or increase at various rates depending on the composition of the horizon and location of that part of the horizon with respect to the sun. However, for the purposes of this model we believe this is an unnecessary complexity in demonstrating the importance of horizon modelling. This model also assumes a lack of diffraction. Light is treated as only moving in straight lines, so any emission from behind the horizon is ignored entirely. 
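Before continuing with the justification of these assumptions, here is a minimal sketch of two of the data-simulation ingredients defined above: the per-pixel spectral index of Equation (<ref>) and an altitude-threshold horizon mask. A real pipeline would operate on HEALPix sky maps; the flat arrays, toy sky values, and toy horizon profile below are invented purely for a round-trip check of the formulae.

```python
import numpy as np

T_CMB = 2.725  # K

def spectral_index(T230, T408):
    """beta = log[(T_230 - T_CMB)/(T_408 - T_CMB)] / log(230/408), per pixel."""
    return np.log((T230 - T_CMB)/(T408 - T_CMB)) / np.log(230.0/408.0)

def horizon_mask(alt_pix, az_pix, horizon_alt):
    """1 for pixels on or below the horizon altitude at their azimuth, 0 above it."""
    return (alt_pix <= horizon_alt(az_pix)).astype(float)

rng = np.random.default_rng(1)
npix = 10000

# round-trip check of the spectral-index formula on an invented sky
beta_true = 2.5 + 0.1*rng.random(npix)
T408 = 20.0 + 5.0*rng.random(npix)
T230 = T_CMB + (T408 - T_CMB)*(230.0/408.0)**beta_true   # built to be consistent with the definition
print("max |beta - beta_true| =", np.max(np.abs(spectral_index(T230, T408) - beta_true)))

# toy horizon: altitude of the horizon as a function of azimuth (degrees)
alt = rng.uniform(-5.0, 90.0, npix)
az = rng.uniform(0.0, 360.0, npix)
H = horizon_mask(alt, az, lambda a: 10.0 + 5.0*np.sin(np.radians(a)))
print("fraction of pixels behind the toy horizon:", H.mean())
```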
We make this assumption as the ratio of light that is diffracted around the horizon to the amount of light that will be blocked by the horizon is of order λ/h, where λ is of order 1m, and h, the height of the mountains around REACH, is of order 1000m. For much smaller horizons than REACH, diffraction may be a greater challenge, but we address a way of approaching this in Section <ref>. In describing reflection we must make a number of assumptions about the soil, namely its makeup and moisture content, both of which will heavily impact the dielectric permittivity of the soil itself. As a proxy for the soil in the Karoo reserve we look to examples of better studied soil in other non-sandy desert environments. The relative dielectric permitivities of soil in the Avra Valley in Arizona were analysed and discussed in <cit.>. Using this as an approximation of what we expect to find around the REACH antenna we use a relative dielectric permittivity for Very High Frequency radio waves of order 10. The reflection coefficient (Γ) of an electromagnetic wave passing from a vacuum into a different medium follows Equation <ref> where we approximate the air to have a relative permittivity (ϵ_r) of 1. We model the reflection as diffuse as it will come from all areas of the sky and the ground, so we can average the angle of incidence to zero, allowing us to model the reflection coefficient as: Γ = 1- √(ϵ_r)/1 + √(ϵ_r). Here ϵ_r is the relative permittivity of the soil. Approximating the soil as having an ϵ_r of between roughly 5 and 15 depending on soil moisture levels we find a value of Γ to be between -0.4 and -0.6. Therefore, to account for reflection we take an average of the power per pixel across the entire sky (including blackbody emission from the horizon), add this power onto each pixel of the horizon and multiply it by the magnitude of the reflection coefficient. This would account for all reflection from the sky and from thermal emission only if the soil were able to see every part of the sky. This is unphysical. Thus we also account for the individual parts of the horizon being unable to view the entire sky, i.e. a rock lying on one of the mountains will be unable to see any of the sky that the mountain it is lying on obscures. We assume the mountains around the telescope to have an incline of 45^∘, an approximation we make from topographical maps of the area. We can then multiply the power we have mapped onto the horizon by a factor of 135/180 to account for the regions that remain unseen by that part of the horizon. This number is specific to the topography of REACH, and the slope of the basin surrounding it. The value of the incline used has a tolerance of up to ±15^∘ before accurate signal recovery becomes challenging and the depth of the recovered signal is poorly estimated. While accurate incline estimates are important for accurate signal recovery in our base horizon model the tolerance for precise estimation is loosened greatly once we begin fitting for the reflection coefficient, as in Section <ref>. We combine this term, which we deem T_reflection, with our map describing emission to create a snapshot of the sky, which we may integrate over time and solid angle to give a full time-averaged sky map. This is then convolved with the gain of the beam to give a simulated antenna temperature (T_data) for an all-sky observation at the REACH site. It is important to note we assign a value of zero to any part of the antenna beams that have an azimuthal angle below zero. 
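The beam-related caveat is picked up in the next paragraph. As a minimal numerical sketch of the reflection treatment just described, the snippet below evaluates Γ = (1-√ε_r)/(1+√ε_r) over the assumed permittivity range and forms the reflected temperature by weighting the sky-averaged power by |Γ| and the 135/180 visibility factor; the 1000 K sky average is an arbitrary placeholder.

```python
import numpy as np

def reflection_coefficient(eps_r):
    """Gamma for a wave passing from vacuum (eps_r = 1) into soil of relative permittivity eps_r."""
    return (1.0 - np.sqrt(eps_r))/(1.0 + np.sqrt(eps_r))

for eps in (5.0, 10.0, 15.0):
    print(f"eps_r = {eps:4.1f}  ->  Gamma = {reflection_coefficient(eps):+.3f}")

def reflected_temperature(mean_sky_power, eps_r, visible_fraction=135.0/180.0):
    """Power mapped back onto a horizon pixel: |Gamma| x (sky fraction the soil can see) x mean sky power."""
    return np.abs(reflection_coefficient(eps_r))*visible_fraction*mean_sky_power

print(f"T_reflection for a 1000 K sky average and eps_r = 10: {reflected_temperature(1000.0, 10.0):.0f} K")
```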
Reflections of the beam below this angle will be dealt with in a separate paper. Our data model thus follows: T_data(ν) = 1/4π∫_0^4πD(Ω, ν) ×∫_t_start^t_end[T_emission(Ω, ν, t)+T_reflection(Ω, ν, t)]dΩ dt + σ̂, where: T_reflection(Ω, ν) = H(Ω)×|Γ|×(135/180) ×∫_0^4πT_emission(Ω, ν) dΩ/∫_0^4πdΩ, T_emission(Ω, ν) = (T_230(Ω)-T_CMB(Ω)) ×(ν/230)^-β(Ω)(1 - H(Ω)) + T_soilH(Ω). D(Ω, ν) is beam directivity at a given frequency, H(Ω) is the horizon mask, Ω describes solid angle, ν is frequency, T_soil is the soil temperature in Kelvin, T_CMB is the CMB temperature, set to 2.725 K, and σ̂ is experimental noise, which in this case we assume to be a Gaussian white noise set at 25 mK. §.§ Physically Motivated Foreground Model We follow the framework of <cit.> to generate our foreground model. The sky power is modelled by dividing the sky into N regions in which we approximate β to be equal across a given region, based on the spectral index map β(Ω) described in Section <ref>. This is shown in Figure <ref>. An N of 1 assumes the entire sky has a constant spectral index, and as N tends towards number of pixels in the map each pixel will have its own unique spectral index. This coarse-graining approach allows for greater control of the complexity of the model, with each additional sky region demanding another parameter be fit for in the model. Maximising the efficiency of the fitting process demands that we calculate the N sky `chromaticity functions' (K_i(ν)) outside of the likelihood. This means we must approach modelling horizon in the foreground differently to the data generation process. The coarse-grained spectral index map is rotated according to date and time of observation, and much the same as in the data generation process we mask out the horizon. Once the horizon has been masked out we map the average power per frequency of each spectral region onto the horizon mask, scaled by the fraction of the sky that it takes up. Multiplying by |Γ| and amount of the sky that a given part of the horizon can `see' we are given the reflected power per frequency per sky region. This is a power that we will then scale according to the frequency scaling of the spectral region it arises from. The sky `chromaticity functions', when multiplied by this sky term, which we denote P_sky, will account for all power which arises originally from the sky. K_i(ν) = 1/4π∫_0^4π D(Ω, ν) × M_i(Ω) × ∫_t_start^t_end (T_230(Ω) - T_CMB) dt dΩ, P_sky = (ν/230)^-β_i(1 + 1/4π∫_0^4πH(Ω)dΩ× |Γ| ×(135/180)/1/4π∫_0^4π(∑_i=1^NM_i(Ω) + H(Ω))dΩ), where we follow convention from <cit.>. β_i is the spectral index of a given sky region, and M_i(Ω) refers to a mask over a given sky region. We deal with the blackbody emission and self-reflection of the soil separately, as it does not scale with frequency. We take the horizon mask and multiply it by a given T_soil to give the blackbody emission term. To model reflection of the blackbody emission from the soil, we map the emission back onto the horizon mask, accounting for the total fraction of the sky it takes up, and multiplying by |Γ| and the amount of sky that the soil can `see'. These will account for our blackbody terms, multiplying our blackbody term, P_BB by the horizon `chromaticity function' J(ν) to give: J(ν) = 1/4π∫_0^4π D(Ω, ν) × H(Ω) × ∫_t_start^t_end (T_230(Ω) - T_CMB) dt dΩ, P_BB = T_soil(1 + |Γ| ×(135/180)/1/4π∫_0^4π(∑_i=1^NM_i(Ω) + H(Ω))dΩ). 
These two terms are added together with a flat CMB temperature to give our total model, described by: T_model (ν) = ∑_i=1^N K_i(ν)P_sky + J(ν)P_BB + T_CMB. § RESULTS In this section we discuss the results and implications of a `hot' horizon on the recovery of the 21 cm signal. In subsection <ref> we discuss the issues that arise when we do not account for, or incorrectly account for the horizon in our foreground models. In subsection <ref> we discuss the implications of incorrectly assuming the temperatures and reflection coefficients of the soil on the horizon in correcting for the mismeasurement of the height of a horizon. subsection <ref> deals with the utility of allowing the temperature and reflection coefficient of the soil. §.§ Recovery of the 21 cm Signal In previous works recovering theoretical 21 cm signals using physically motivated foreground models the horizon is either ignored, or simply treated as something that blocks out sky power without emitting anything itself <cit.>. Here we investigate the impact of including a horizon in our mock data set, but failing to properly account for it in our foreground models. We inject a Gaussian signal with a central frequency of 85 MHz, bandwidth (here treated as the standard deviation of the Gaussian) of 15 MHz, and a depth of 0.155 K into our mock data set and use our pipeline with 5 distinct models describing the horizon to attempt to recover it. We detail the prior ranges for a `realistic' redshifted Gaussian 21 cm absorption feature in Table <ref>, derived from <cit.>. For each of these models we fit for a Gaussian signal, detailing the central frequency, bandwidth and depth parameters recovered by the model, allowing us to compare these to the true values of the injected signal. This comparison will give some idea of how accurate the model is. We use the root mean square error (RMSE) between the injected signal and a Gaussian signal made from the recovered posterior parameters to show the how accurately the recovered signal mirrors the injected one, a lower RMSE indicating a better approximation of the `True' value. For each of the models we also perform a fit for a non detection, in which the foregrounds are assumed to be the only source of power. We take the difference of the Log(𝒵) of the fitting our foreground model with an injected Gaussian signal with respect to fitting for just the foregrounds with no signal to give us the δ_Log(𝒵). This is a measurement of how probable the model believes a detection of a Gaussian signal is when compared to a non-detection. A δ_Log(𝒵) of 1 would indicate that our model favours a detection with a probability ten times higher than that for a non-detection, with a δ_Log(𝒵) of -1 indicating that the non-detection is favoured by the same amount. Every increase by 1 in δ_Log(𝒵) corresponds to another order of magnitude by which the detection would be favoured over a non-detection. Thus, for us to claim that a detection has been made this number cannot be below zero. We show these results in Table <ref> in which we compare the ability of these 5 different horizon models to recover the 21 cm absorption trough from Cosmic Dawn. * The `No Horizon' model. This model fails to account for the horizon entirely, letting H(Ω) be a null matrix in Equations <ref>, <ref>, and <ref>. As shown in Figure <ref> this saturates our prior for the depth of the signal, and recovers a very biased estimate of the centre frequency with a signal model that has unacceptably large residuals. * The `Cold Horizon' model. 
Here we do include a horizon in our foreground model, but set T_soil and |Γ| to zero. This is an approximation of the approach that previous works have used to describe the horizon <cit.>, treating the horizon as something that attenuates radio waves, instead of being an emitter of any kind. We seem to achieve no improvement on the model that ignores the horizon entirely, as shown in Figure <ref>. Looking at Table <ref> we see the `Cold Horizon' model appears to struggle to recover the redshifted 21 cm signal even more than the model that ignores it entirely, with a Log(𝒵) 0.5 lower than the `No Horizon' model and an RMSE 0.0002 higher. This is not a surprising result. The horizon radiates a large amount of power, be it through thermal emission or reflection. The signal is buried in the radio foregrounds, so failing to account for the horizon in any way means that the foreground model used will change to account for some emission from the sky in the horizon region. This in essence will mimic some of the T_reflection term in our data model. When one just masks out the horizon and gives it no power we find ourselves even further away from a true description of the foregrounds, and will make signal recovery much more difficult. * The `No Emission' model. Allowing the horizon to reflect the sky, but not providing it with a description of its own thermal emission (T_soil = 0, |Γ| = 0.6) does help signal recovery, with a Log(𝒵) increase of almost 600. This however, still saturates the depth parameter to the prior limit, seen in Figure <ref>, due to a large amount of power being unaccounted for in the foreground. * The `No Reflection' model. In Figure <ref> we account for the thermal emission of the horizon in our foreground models, letting T_soil be equal to 300 K, but not accounting for reflections, keeping |Γ| at 0. Here we come much closer to recovering the signal. Once again, the signal is biased, and the depth parameter is saturated. However, the residuals are greatly reduced. Including thermal emission and ignoring the reflected power gives a much closer approximation to the correct foreground models than when we only consider sky reflection, which, while a large improvement on our `cold' horizon model also entirely saturates the depth priors during signal recovery. * The `All' model. Shown in Figure <ref>, when we account for both emission and reflection we find the signal with a Log(𝒵) of 288.3. This is a Log(𝒵) of approximately 1600 more when compared to the model that does not account for the horizon. The model that accounts for reflection and emission also has an RMSE of less than half of the no horizon case. These values indicate that not only is this model the most favoured in a probabilistic sense, but it is also the model that most accurately recovers our `True' signal. This implies that recovery of the 21 cm absorption trough using physically motivated foreground models demands realistic horizon modelling. §.§ Investigation into Effects of Temperature and Reflection Coefficient Until this point we have assumed we are able to perfectly predict both the temperature and reflection coefficient of the soil on the REACH horizon. Practically this is difficult. The temperature of the soil will vary with time, and while it is possible to set up temperature probes around the horizon of the antenna, this is difficult both fiscally and in terms of the human resources it would require. 
The Karoo is a radio quiet reserve, so temperature probes cannot be remote, and must be collected from around the reserve manually upon each observation. |Γ| is even more difficult to determine. This is heavily dependent on moisture levels of the soil, how deep the moisture penetrates the soil, and specific soil makeup across the mountains. This will all vary with weather and location around the mountain, which makes precise estimates of |Γ| year round unfeasible. Using the same model throughout for temperature and reflection coefficient in our model of data generation (300 K and 0.6 respectively), we examine the impact of incorrectly predicting the soil temperature and reflection coefficient in our foreground models. It may be observed from Figure <ref> (for full details see Table <ref>) that our misjudging the values of T_soil and |Γ| in the foreground models will cause problems for signal recovery. It does, however, demonstrate the expected link between values chosen for T_soil and |Γ|. Both will describe the amount of power emitted from the horizon; increasing T_soil, leading to overestimation of emission power from the horizon may be compensated for by decreasing the value of |Γ| and vice-versa. This is not a perfect fix. |Γ| is contained in Equation <ref>, while T_soil is not. This is an issue as this sky term in our model is the only one that gets scaled by frequency, so neither one term can entirely correct for the other. As a result we will need very accurate estimates of both T_soil and |Γ| if we are correctly recover the 21 cm signal. §.§ Soil Temperature and Reflection Coefficient Fitting As shown in Section <ref>, by fixing values of T_soil and |Γ| as reasonable estimates of the correct parameters[With T_soil being within 20 K, and |Γ| being within 0.2 of the values we use in our data model] which are slightly mismatched from the mock data we may yield a non-detection, or detection of a signal with very different parameters to the true one. We can potentially mitigate for these issues by fitting for both T_soil and |Γ| as additional parameters in our model. Setting the priors of our model to be uniform for T_soil and |Γ| between 275 and 325 K, and for between 0.4 and 0.7 respectively, we explore the ability of this model to recover a range of signals. The model is able to accurately recover Gaussian and flattened Gaussian signals, as in Figures <ref> and <ref>, with a low RMSE, but much higher residuals at lower frequencies than we saw for models in which we fixed T_soil and |Γ| to specific, correct, values. When we fit for the reflection coefficient, however, we see very large residuals at low frequencies. These residuals arise in Equation <ref> where |Γ| is multiplied by ν^-β_i, preferentially increasing the residuals for lower frequencies. While these residuals are large they are non-degenerate with the signal we recover, so do not create any issues for the model itself. In Table <ref> we compare the ability of our new model, fitting for parameters, to a `perfect' model, where T_soil and |Γ| are fixed to match the input values in our mock data[It is important to note for this comparison we can only make comparisons between models of a given signal, the evidences relating from one signal to another will be incomparable as the mock data we are fitting to will be different.]. To test this we generate 6 Gaussian signals with pseudo-random parameters designed to stretch the model beyond the simple 85 MHz, 15 MHz, 0.155 K signal we have been using up to this point. 
This model in which we fit for T_soil and |Γ| performs consistently as well as the model in which we have fixed parameters. The RMSE values of the two models yield similar results, and the δ_Log(𝒵) of the two models indicate the same ability of the models to recover a signal. In the cases where the fixed parameter version of our model finds a signal, the fitted one will too. However, when one fails the other does the same. A signal that cannot be found with the fixed parameter model will not be found with the fitted model. This is most notable when the we attempt to recover signals centred at 53 and 72 MHz, falling at the very low end of the REACH observation band where the Gaussian signal does not fit entirely within the band. The 53 MHz signal is not consistently detected, the pipeline giving a higher evidence for a non detection. The 72 MHz signal is found, but the pipeline does not recover the parameters to within reasonable error in either the fitted or hard coded cases. This detection has a very high RMSE, meaning we must be very careful in assuming the validity of any signals recovered around this range. The pipeline being unable to properly recover signals that do not entirely fall within the observing band of REACH is not unexpected. A signal that does not sit fully within the band will be more degenerate with the foregrounds and will be fitted poorly. Once the signal sits more comfortably in the observation band both the traditional fixed model, and the model were we fit for T_soil and |Γ| accurately recover the signal. Crucially, when we fit for a model that assumes no signal we still see a much greater evidence for the model that contains a signal, which is to say that T_soil and |Γ| being fitted will not arbitrarily increase the evidence of a fit to where it becomes impossible to determine detection from non-detection. To confirm this we also perform a test in which we inject no signal, and fit this to a Gaussian. It can be seen that even when we are fitting the additional parameters of T_soil and |Γ| we do not artificially recover a Gaussian detection, with the Bayesian evidence correctly favouring a non-detection. §.§ Soil Temperature and Reflection Coefficient Fitting as a Way of Correcting for Horizon Measurement Errors We have allowed ourselves to deal with any error in the measurement of T_soil and |Γ| using the fitting process defined in Section <ref>. However, all models to this point assume that the physical height of the horizon and its projection onto the sky is measured without error. This is not a reasonable assumption. The model being able to fit for both T_soil and |Γ| may be able to compensate for errors in the measurement of the horizon. One might assume that artificially increasing T_soil would cause the model to perceive the horizon to be higher than it actually is, or by increasing |Γ| the model will see more of the sky than it really does. We examine this naïve assumption in this section. To explore the functionality of T_soil and |Γ| fitting as a way to mitigate error in horizon height we expand the prior range beyond physical expectation. We allow T_soil to sit between 200 and 400 K. We allow |Γ| to have any value between 0.3 and 0.9. We will then systematically create an artificial error in the horizon we use for our foreground modelling. We multiply the altitude angle, θ, in the horizon mask of the foreground model, H(Ω), by some scaling factor, S, to increase or decrease the height of the horizon in the foreground model with respect to the mock data. 
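Before turning to the results, a short sketch of the horizon-scaling test just described: the altitude profile entering the foreground-model mask is multiplied by a scaling factor S, and the priors on T_soil and |Γ| are widened beyond physical expectation. The profile, pixel grid, and prior bounds below are illustrative stand-ins, not the REACH horizon measurements.

```python
import numpy as np

def horizon_mask(alt_pix, az_pix, horizon_alt, S=1.0):
    """Foreground-model mask with the horizon altitude scaled by S (S = 1 is the 'true' horizon)."""
    return (alt_pix <= S*horizon_alt(az_pix)).astype(float)

rng = np.random.default_rng(4)
alt = rng.uniform(-5.0, 90.0, 20000)
az = rng.uniform(0.0, 360.0, 20000)
profile = lambda a: 10.0 + 5.0*np.sin(np.radians(a))    # toy altitude profile (degrees)

for S in (0.8, 0.9, 1.0, 1.1, 1.2):
    frac = horizon_mask(alt, az, profile, S).mean()
    print(f"S = {S:.1f}: fraction of sky treated as horizon = {frac:.3f}")

# widened uniform priors used when fitting the soil parameters alongside the mis-scaled mask
priors = {"T_soil": (200.0, 400.0), "abs_Gamma": (0.3, 0.9)}
print("expanded priors:", priors)
```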
We examine the results of this in Figure <ref> (for full details see Table <ref>), where we compare how the pipeline deals with incorrect horizon height estimates when we input the same values of T_soil and |Γ| as was used to create our mock data set versus when we allow for those parameters to be fitted with an increased prior range. Here we try to recover the 85 MHz, 15 MHz, 0.155 K signal at 300 K with a |Γ| of 0.6. We set S to be 0.8, 0.9, 1, 1.1 and 1.2 to give a deviation in the horizon height measurement in the foreground models of up to 20% from the mock data. We show that by fitting for T_soil and |Γ| with this unphysical prior range we are able to very consistently find the signal where the model that fixes values of T_soil and |Γ| is unable to. This is exemplified in the case where we make S equal to 1.2, simulating an overestimation of the horizon in the foreground models of 20%. As detailed in Figure <ref> we move from being entirely unable to recover the `True' signal with an RMSE of 0.0786 when we fix the parameters to those used in the mock data to a very accurate signal recovery. Our fitted version has an RMSE approximately 9 times lower and a Log(𝒵) 25 units higher. By analysing the values of T_soil and |Γ| in Table <ref> we see how these parameters correct for horizon height error. If the projected model of the horizon in our foreground correction is higher than the actual horizon the fitting process will compensate for this by dragging down the soil temperature and increasing the magnitude of the reflection coefficient. This will by proxy increase the amount of sky that the telescope is `seeing' in comparison to the blackbody emission from the horizon. This correction is useful, but not perfect, as a projected horizon height that is larger than the true height will mask out specific information on the sky power, obscuring parts of the spectral index map in our foreground maps, this is especially problematic when the galaxy is directly on the horizon. If the model is lower than the actual height of the horizon the fitting method will do the opposite. Here the model wants to maximise the amount of blackbody radiation coming from the horizon in order to compensate for the poor foreground modelling, and decrease the amount of sky reflection as much as possible to deal with the overestimation of the amount of sky we see. While the Log(𝒵) of our fits for an underestimation of the horizon are higher than when we systematically overestimate its height, we must be wary as the RMSE is also higher. This would indicate that an underestimation of the horizon, even with fitting, will yield a worse recovery of the true signal. § CONCLUSIONS AND FUTURE WORK This work aims to demonstrate the fact that a physically motivated foreground model demands an accurate description of the horizon for the recovery of the global 21 cm signal. We show that failing to account for the emissive and reflective properties of the soil on the horizon will lead to a non-detection of this signal. This analysis was focused on REACH but is applicable to all global 21 cm experiments that wish to use physically motivated foreground models in signal recovery. This paper describes an easy-to-implement model that should greatly increase the ability of the REACH radiometer to recover the redshifted 21 cm absorption trough in spite of the large mountainous horizon surrounding the antenna. 
This model, while it is a great improvement in describing the horizon has a number of shortcomings that may be addressed in future work, these are as follows: * The treatment of the horizon as composing entirely of one material that is entirely opaque to radio waves of all frequencies coming from behind is a bold one that must be addressed. In a general context this is difficult, but a specific investigation of the REACH horizon may allow for discussion and mapping of vegetation, rocks and different kinds of soil, which will all have different dielectric permittivities, attenuating and reflecting radio waves with different strength. * This model also only discusses light rays, assuming that diffraction is negligible. Further studies may need to analyse this issue in more depth. A proposed workaround to deal with diffraction would involve treating each spectral region as having a separate reflection coefficient, |Γ_i|, when reflected by the horizon which may be fit as an additional parameter. This would allow for any region of the sky obscured by the model to be artificially scaled up again to compensate for the lack of explicit diffraction in the model. * This model treats the soil as having a constant temperature around the horizon, an improvement to this model would involve dividing the horizon into a number of regions based on cardinal direction. Splitting these into a number of regions, each with the own temperature allows one to account for impact of the movement of the sun allowing for the eastern side of the valley to remain hotter than the western after the sun sets. * This model assumes an infinite, flat, ground describing no reflection or emission from the soil that falls below an altitude angle of 0^∘. These near-field effects may have a very strong impact on signal recovery and will be explored in a later work This work represents a large step forward in horizon modelling, demonstrating twofold decrease in RMSE from previous approaches to horizon modelling, with an increase in Log(𝒵) ∼ 1600. We show that including a `hot' horizon is a necessity when trying to recover the 21 cm signal using physically motivated foreground models. We show that there is a dependency of signal recovery on the accurate estimation of soil parameters. While there is some tolerance in the estimation of T_soil, in which T_soil requires a precision of ≈ 10 K, |Γ| must be accurate to within 0.1 for accurate signal recovery. To mitigate for error in soil parameter estimation we show that these parameters may be fitted for without compromising the integrity of signal recovery. This fitting process will consistently perform as well as a horizon model in which the soil is described using free parameters that perfectly match those used in data generation. We also successfully demonstrate that allowing for these parameters to have priors that reach values beyond what is strictly expected physically will allow for a tolerance in horizon height measurement of up to ∼ 20%. § ACKNOWLEDGEMENTS We would like to thank Quentin Gueuning for providing the electromagnetic simulations of the log spiral antenna. We would also like to thank Will Handley for his integral contributions to the REACH pipeline. Joe Pattison, Dominic Anstey, and Eloy de Lera Acedo were supported by the Science and Technology Facilities Council. We would also like to thank the Kavli Foundation for their support of REACH. § DATA AVAILABILITY The data that support the findings of this study are available from the first author upon reasonable request. 
§ ADDITIONAL TABLES
http://arxiv.org/abs/2307.02587v2
20230705183355
Lightcone Modular Bootstrap and Tauberian Theory: A Cardy-like Formula for Near-extremal Black Holes
[ "Sridip Pal", "Jiaxin Qiao" ]
hep-th
[ "hep-th", "math-ph", "math.MP" ]
§ INTRODUCTION Understanding universal properties is of fundamental importance in the study of physical phenomena. In the realm of two-dimensional conformal field theories (2D CFTs), a famous example is the Cardy formula <cit.>. The Cardy formula establishes an asymptotic relation between the microcanonical entropy S_δ(Δ), which counts the number of states with scaling dimensions Δ' in a window (Δ-δ, Δ+δ), and the central charge c: S_δ(Δ) := log(∑_|Δ'-Δ|<δn_Δ') = 2π√(cΔ/3)+O(logΔ) (Δ→∞), where n_Δ denotes the number of states with scaling dimension Δ. This formula, along with its generalizations <cit.>, provides a universal connection between the high energy spectrum of a CFT and its central charge. It has played a significant role in black hole physics, such as black hole microstate counting, the study of the Hawking-Page phase transition, and checks of the AdS/CFT correspondence <cit.>. It has been recently realized that the Cardy formula holds only on average <cit.> (see also appendix C of <cit.>), and its precise validity requires a more rigorous treatment using Tauberian theory <cit.> (as explained in <cit.>). In this paper, we aim to prove a Cardy-like formula S^Vir_κ(J)=2π√(c-1/6J)+O(log(J)) (J→∞), for Virasoro primaries near a “twist accumulation point". Here, the microcanonical entropy S^Vir_κ(J) is the logarithm of the number of spin-J Virasoro primaries 𝒪_Δ,J in a shrinking window of twist τ: |τ-c-1/12|<2κ J^-1/2log J, τ:=Δ-J , κ>0 . Here the lower bound of the allowed κ is proportional to τ_ gap^-1, where τ_ gap is the twist gap in the spectrum of Virasoro primaries, and the factor of 2 is just a convention. Our primary focus is on unitary modular invariant 2D CFTs with central charge c>1 and a twist gap τ_ gap>0 in the spectrum of Virasoro primaries. We will study the CFT torus partition function in the so-called double lightcone limit. In our recent work <cit.>, we conducted a rigorous analysis of the torus partition function for these 2D CFTs under this limit. The outcome of this analysis led to the establishment of a theorem that confirms certain well-known claims previously discussed in the modular bootstrap literature <cit.>. In particular, it was shown in <cit.> that the theory must include a family of Virasoro primary operators 𝒪_Δ,J with Δ,J→∞, τ≡Δ-J→2A(≡c-1/12), where Δ=h+h̅ is the scaling dimension and J=|h-h̅| is the spin. This rigorous framework gives us a powerful tool to investigate more detailed questions about the universality of the fixed-twist, large-spin spectrum of operators in such CFTs. One natural question that arises is: how many spin-J Virasoro primary operators have a twist near c-1/12? The study of this question leads us to eq. (<ref>). The CFTs of the aforementioned kind are also known as irrational CFTs with Virasoro symmetry only.[A recent proposal suggests constructing examples of irrational CFTs by weakly relevant deformations from multiple copies of minimal models. See <cit.> for details.] They are expected to exhibit chaos and/or some form of random-matrix-like statistics in their spectrum of primary operators. In particular, the spectrum of primary operators is expected to have dense spacing in an appropriate asymptotic sense.
Some of these expectations are byproduct of holography, where we know high energy spectrum of dual CFT capture the black hole microstates. While we expect the quantum systems dual to black hole to be chaotic, for example SYK <cit.>, from a CFT perspective, it is far from obvious to see the imprint of chaos (the recent explorations in this direction[There have been works probing the signature of chaos in CFT correlation functions with various degrees of rigor and implicit assumptions, for example butterfly effect <cit.>, ETH like statements as in <cit.>, however, to best of our knowledge, ETH has not been proven in 2D CFT. See also EFT approach to chaos <cit.>.] include <cit.>, <cit.> built upon <cit.>) in the spectrum of CFT operators.[Also see <cit.> for a recent discussion on the random matrix behavior of 2d CFTs and AdS_3 quantum gravity.] For instance, in the regime of fixed spin and large Δ, it is expected that the asymptotic spacing in Δ becomes exponentially small with respect to the entropy, scaling as √(Δ), and ultimately approaches zero. However, the current best bound in this direction, without assuming a twist gap, is 1 as established in <cit.>, which represents an improvement upon the results of <cit.>. It should be noted that this bound is optimal in the absence of a twist gap but is expected to be sub-optimal when a twist gap is imposed. In this paper, with the assumption of a twist gap τ_ gap, we prove the existence of a “dense" spectrum, characterized by a large number of Virasoro primary states and powerlaw decreasing spacing of twist, in the vicinity of a specific fixed twist value τ=c-1/12 and for very large spin J.[It is important to note that the term “dense" used here should not be confused with its typical usage in chaos-related CFT literature. In the chaos context, “dense" often refers to the property where the spacing between adjacent energy spectra is generally given by ρ^-1, where ρ∼ e^#√(E) represents the coarse-grained spectral density. However, in this paper, when we refer to “dense", we simply mean that as we consider specific powerlaw decreasing windows, the number of spectra within those windows increases as e^#√(E). It is worth emphasizing that the specific distribution of the spectra within the window remains unknown.] This result holds for irrational CFTs that possess Virasoro symmetry exclusively. * We prove a refined version of twist accumulation result: there always exist a Virasoro primary operator with sufficiently large spin J within a narrow window of twist around the twist accumulation point i.e (c-1)/12; the width of such window of twist goes to 0 as J^-1/2log J. * We rigorously establish two-sided bounds for 𝒩_J(ε), the number of spin-J Virasoro primary operators 𝒪_Δ,J in a window |Δ-J-c-1/12|⩽ 2ε, and let ε scale as ε=κ J^-1/2log J with κ being a fixed positive number of order O(τ_ gap^-1). In the limit J→∞, 𝒩_J grows as 𝒩_J(ε≡κ J^-1/2log J)=e^4π√(AJ)+O(log J) , which is equivalent to (<ref>) via the relation S^Vir_κ(J)≡log(𝒩_J(ε≡κ J^-1/2log J) ). A more precise form of eq. (<ref>) is stated in <ref>, a corollary of the main theorem <ref>, that we prove in this paper. * We make further conjectures on potential generalization of the result for a CFT with conserved currents in section <ref>; also see appendix <ref> for statement about W_N CFTs. 
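As a quick numerical illustration of the count (<ref>) and of how the twist window shrinks with spin, the sketch below evaluates the leading entropy 4π√(AJ) = 2π√((c-1)J/6) and the window half-width 2κ J^{-1/2} log J for an illustrative central charge and κ; both values are placeholders and are not tied to any particular CFT (the theorem requires κ of order τ_gap^{-1}).

```python
import numpy as np

c = 12.0                      # illustrative central charge (any c > 1 with a twist gap)
A = (c - 1.0)/24.0
kappa = 1.0                   # placeholder value of order 1/tau_gap

for J in (10**2, 10**4, 10**6, 10**8):
    S = 4.0*np.pi*np.sqrt(A*J)                 # leading Cardy-like entropy, 2*pi*sqrt((c-1)*J/6)
    half_width = 2.0*kappa*np.log(J)/np.sqrt(J)  # twist window around (c-1)/12
    print(f"J = {J:>10d}:  S = {S:9.1f}  (count ~ e^S),  twist-window half-width = {half_width:.2e}")
```

The table makes the statement of the theorem concrete: the number of primaries in the window grows as e^{4π√(AJ)} even though the window itself closes as J grows.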
In this paper we further consider a family of unitary modular invariant 2D CFTs, including the large central charge limit A≡c-1/24→∞, such that (a) the lower bound of the twist gap 2T grows at least linearly in central charge, i.e. T/A⩾α >0, and (b) their partition functions satisfy a uniform boundedness condition, inspired by the HKS sparseness condition <cit.>. Holographically, this family of CFTs probes the near extremal rotating BTZ black holes, having a nearly AdS_2× S^1 throat. One expects a Schwarzian theory <cit.> to describe such limit. We have the following main result: * For a such a class of CFTs with sufficiently high central charge, we rigorously estimate the number of operators 𝒪_Δ,J with sufficiently large spin J, scaling dimension Δ such that Δ-J-2A ∈(-ε_1,ε_2 ) , (ε_1≡1/πα√(A/J)log(AJ), ε_2≡3/κ√(A/J)(2π A+log J)). Here κ is a positive constant, and its precise value will be clear later. In the limit A→∞ and J/A^3→∞, an analogue of (<ref>) reads 𝒩_J(ε_1,ε_2)=e^4π√(AJ)+O(A)+O(log AJ) . See the theorem. <ref> and its corollary <ref> for the precise version. This result has a gravitational interpretation in terms of the near-extremal rotating BTZ black holes with angular momentum J. The entropy of the near-extremal rotating BTZ black hole is given by the formula S_ BH≈2π√(c/6J)≈4π√(AJ), c=3ℓ_3/2G_N≫1, where ℓ_3 is the radius of AdS_3, G_N is Newton's constant and c=3ℓ_3/2G_N is the Brown-Henneaux relation <cit.>. This formula is known in the standard black hole thermodynamics. Our result supports the thermodynamic description of the near-extremal black holes when the Hawking temperature T_ H, given by T_ H=β^-1, falls within a certain regime: const×√(c/J)/α⩽ T_ H≪ 1/c. In particular, the Hawking temperature is much lower than the “gap temperature" c^-1. In this paper, we leverage existing techniques to analyze the partition function in the lightcone limit with complex β_L or β_R. While the estimates related to the lightcone bootstrap were already given in <cit.>, they were applicable for partition function evalauted at real β_L and β_R. The main technical challenge for us is to uplift the aformentioned rigorous estimate as done in <cit.> so that it applies to the partition function for complex β_L or β_R and we are able to learn about the large spin, small twist spectra. We achieve this by using the Tauberian theory techniques developed in <cit.> to analyze the partition function for complex β albeit in the high temperature limit (not the light cone limit). Our main contribution in this paper lies in combining these techniques to analyze the partition function in the lightcone limit with complex β_L or β_R; this leads to the main results of this paper. We view our results as a stepping stone towards a rigorous understanding of chaotic irrational CFTs, although it has not yet been established in a general c>1 irrational CFT with a twist gap. We anticipate that with further effort, a similar analysis can be applied to CFT four-point functions based on <cit.> and <cit.>.[The analysis of CFT four-point functions using the lightcone bootstrap would be more complicated than the modular bootstrap approach because the conformal blocks of CFT four-point functions do not naturally factorize into left- and right-movers. However, in the double lightcone limit, the conformal blocks exhibit approximate factorization (see <cit.>, appendix A). 
Based on this observation, we anticipate that a similar Tauberian theorem to theorem <ref> can be established using the techniques explained in <cit.>.] The paper is organized as follows. In section <ref>, we present the proof of the Cardy-like formula (<ref>), and the main results of this section are summarized in theorem <ref> and its corollary <ref>. Appendices <ref> and <ref> provide additional technical details related to this section. In section <ref>, we explore potential generalizations for CFTs with conserved currents, and we include specific examples and leave technical details to appendix <ref>. Moving on to section <ref>, we focus on holographic CFTs and investigate the limit of large central charge, c→∞. The main results of this section are summarized in theorem <ref> and its corollary <ref>, with the proofs presented in appendix <ref>. Then we discuss the connection between our results and the thermodynamics of near-extremal rotating BTZ black holes. In section <ref>, we make conclusions and discuss some potential future directions. § MODULAR BOOTSTRAP §.§ Setup We consider a unitary, modular invariant 2D CFT with central charge c>1, a (unique) normalizable vacuum and a positive twist gap τ_ gap>0 in the spectrum of Virasoro primaries. The torus partition function Z(β_L,β_R) of such a CFT is defined by Z (β_L,β_R) ≡Tr_ℋ_CFT( e^- β_L ( L_0 - c/24) e^- β_R ( L̅_0 - c/24)) . where β_L and β_R are the inverse temperatures of the left and right movers, L_0 and L̅_0 are the standard Virasoro algebra generators and ℋ_ CFT is the CFT Hilbert space which is assumed to be the direct sum of Virasoro representations characterized by conformal weights h and h̅ ℋ_ CFT=⊕_h,h̅ V_h⊗ V_h̅. The twist gap assumption means that h,h̅⩾τ_ gap/2 for all representations except the vacuum representation (h=h̅=0). Using eqs. (<ref>) and (<ref>), the torus partition function can be written as a sum of Virasoro characters χ_h(β_L)χ(β_R) over primaries Z(β_L,β_R)=∑_h,h̅n_h,h̅ χ_h(β_L)χ_h̅(β_R), where n_h,h̅ counts the degeneracy of the Virasoro primaries with conformal weights h and h̅. For c>1, the characters of Virasoro unitary representations are given by χ_h (β) ≡Tr_V_h( e^- β( L_0 - c/24)) = e^c - 1/24β/η (β)× 1 - e^- β if h = 0, e^- β h if h > 0, where the Dedekind eta function η(β)≡ e^-β/24∏_n=1^∞(1-e^-n β) accounts for the contribution of descendants. Then we have Z (β_L, β_R) = Z̃ (β_L, β_R)/η (β_L) η (β_R) , where the reduced partition function Z̃ is given by Z̃ (β_L, β_R) = e^A(β_L + β_R)[(1 - e^- β_L) (1 - e^- β_R) + ∑_h, h̅⩾ T n_h,h̅ e^- β_L h - β_R h̅] . Here we have denoted A ≡c - 1/24 and T≡τ_ gap/2 for convenience. n_h,h̅ is the degeneracy of the Virasoro primaries with conformal weights h and h̅. The first term in the square bracket corresponds to the contribution from the vacuum state, while the second term represents the total contribution from Virasoro primaries with twists above the twist gap. The above formulations assumed a discrete spectrum. The argument below also works for the continuum spectrum, where eq. (<ref>) is replaced by Z̃ (β_L, β_R) = e^A(β_L + β_R)[(1 - e^- β_L) (1 - e^- β_R) + ∫_T^∞d h∫_T^∞dh̅ ρ(h,h̅) e^- β_L h - β_R h̅]. Here ρ is a non-negative spectral density of Virasoro primaries, which is related to n_h,h̅ by ρ(h,h̅)=∑_h',h̅'⩾ Tn_h',h̅'δ(h-h')δ(h̅-h̅'). We assume that (a) the partition function Z (or equivalently Z̃) for a given CFT is finite when β_L,β_R∈(0,∞); (b) Z is modular invariant, i.e. 
Z(β_L,β_R) is invariant under the transformations generated by (β_L,β_R)→ (β_L+2π i,β_R-2π i), (β_L,β_R)→ (4π^2/β_L,4π^2/β_R). The invariance under the first transformation implies that the spin J:=h-h̅ of any Virasoro primary state must be an integer. The invariance condition under the second transformation (which is called the S modular transformation), Z (β_L, β_R) = Z( 4 π^2/β_L, 4 π^2/β_R), can be formulated in terms of the reduced partition function Z̃ as follows. By (a) and the positivity of the spectral density, the convergence domain of Z(β_L,β_R) (or equivalently Z̃(β_L,β_R)) can be extended to the complex domain of (β_L,β_R) with[This justifies why the first modular transformation is well-defined on the partition function.] Re(β_L),Re(β_R)∈(0,∞). Since under the S modular transformation, η behaves as η(β)=√(2π/β)η(4π^2/β), eqs. (<ref>) and (<ref>) imply that Z̃ transforms as Z̃ (β_L, β_R) = √(4 π^2/β_L β_R)Z̃( 4 π^2/β_L, 4 π^2/β_R). Notice that the complex domain (<ref>) is preserved by the S modular transformation. Therefore we have two convergent expansions of Z̃(β_L,β_R) for (β_L,β_R) in the domain (<ref>): * Direct channel: expanding the l.h.s. of (<ref>) in terms of (<ref>). * Dual channel: expanding the r.h.s. of (<ref>) in terms of (<ref>) (with β_L,β_R replaced by 4π^2/β_L,4π^2/β_R). §.§ Review of the twist accumulation point Under the above setup, one can show that in the theory, there is at least one family of Virasoro primaries 𝒪_i with h_i→ A and h̅_i→∞ <cit.>. In other words, (h=A,h̅=∞) is an accumulation point in the spectrum of Virasoro primaries. The same is true with h and h̅ interchanged. Here let us briefly explain why this is true. For more technical details, see <cit.>, section 3. We consider the reduced partition function Z̃(β_L,β_R) for real and positive (β_L,β_R). We take the double lightcone (DLC) limit, defined by[The “DLC" limit defined in this context is referred to as the “M_*" limit in <cit.>. In that work, two distinct lightcone bootstrap problems were discussed, and the “M_*" limit, specifically the modular double lightcone limit, was employed to differentiate it from the DLC limit in the other problem.] DLC limit: β_L →∞, β_R → 0, 𝔟(β_L,β_R):=4π^2T/Aβ_R -β_L-3/Alog(β_L)→∞. The important feature of this limit is that β_R approaches 0 much faster than β_L approaches ∞. The introduction of the logarithmic term in 𝔟(β_L,β_R) is just for technical reasons. One can show that in the DLC limit, the partition function Z̃(β_L,β_R) is dominated by the vacuum term (the first term in eq. (<ref>)) in the dual channel, i.e. DLClimZ̃ (β_L,β_R)/8π^3/β_L^3/2β_R^1/2 e^4π^2A/β_R = 1. Here the denominator is the asymptotic behavior of the vacuum term in the dual channel: √(4π^2/β_Lβ_R)Z̃_vac(4π^2/β_L,4π^2/β_R)≡ √(4π^2/β_Lβ_R)e^A(4π^2/β_L+4π^2/β_R)(1-e^-4π^2/β_L)(1-e^-4π^2/β_R) ∼ 8π^3/β_L^3/2β_R^1/2 e^4π^2A/β_R (β_L→∞, β_R→0). Now we consider the direct channel (i.e. the l.h.s. of eq. (<ref>)). Let Ω be a set of (h,h̅) pairs (assuming that Ω does not contain the vacuum (0,0)). We define Z̃_Ω to be the partial sum of eq. (<ref>) with (h,h̅)∈Ω: Z̃_Ω(β_L,β_R):=e^A(β_L+β_R)∑_(h,h̅)∈Ω n_h,h̅ e^- β_L h - β_R h̅. In what follows we will only state the conditions of Ω, e.g. Z̃_h⩾ A+ε is the same as Z̃_Ω with Ω=[A+ε,∞)×(0,∞). The claim is that in the DLC limit, the direct channel is dominated by the sum over h∈(A-ε,A+ε) and h̅⩾h̅_∗: DLClimZ̃_h∈(A-ε,A+ε),h̅⩾h̅_∗( β_L, β_R )/8π^3/β_L^3/2β_R^1/2 e^4π^2A/β_R = 1. 
Here ε>0 can be arbitrarily small and h̅_∗ can be arbitrarily large (but they are fixed when we take the DLC limit). To prove this claim, <cit.> demonstrated that in the direct channel, the total contribution from other (h,h̅) pairs is suppressed compared to the dual-channel vacuum term. Therefore, (h=A,h̅=∞) must be an accumulation point in the spectrum. Otherwise, one could find sufficiently small ε and sufficiently large h̅_∗ such that Z̃_h∈(A-ε,A+ε),h̅⩾h̅_∗=0, contradicting eq. (<ref>). By interchanging the roles of β_L and β_R in the above argument, we can show that (h=∞,h̅=A) is also an accumulation point in the spectrum. In terms of the scaling dimension Δ=h+h̅ and the spin J=|h-h̅|, the above argument implies that the theory must include a family of Virasoro primary operators 𝒪_Δ,J with Δ,J→∞, Δ-J→2A(≡c-1/12). For general CFTs, (<ref>) is slightly weaker than the existence of both (h→∞,h̅→ A) and (h→ A,h̅→∞) families. In a parity-invariant CFT, these two statements are equivalent. §.§ Main theorem In the previous subsection, we reviewed that in the double lightcone limit, the dominant contribution to the reduced partition function Z̃(β_L,β_R) comes from the spectrum with high spin and twist near 2A in the direct channel, while the vacuum state (h=h̅=0) dominates in the dual channel. This observation implies a connection between the spectral density ρ(h,h̅) (as given by eq. (<ref>)) near the accumulation point (h=A,h̅=∞) and the vacuum term of the partition function in the dual channel. In fact, by conducting a more thorough analysis of the arguments presented in <cit.>, we can not only establish the existence of an infinite number of operators near the accumulation point (h=A, h̅=∞) but also estimate how many such operators there are. To achieve this quantitative understanding, we will employ Tauberian theory <cit.>, building upon similar reasoning presented in <cit.>, and integrate it with the arguments put forth in <cit.>. This combined approach will be the focus of the remaining sections of this paper. The object we are going to study is the total number of Virasoro primaries with h in the range (A-ε, A+ε) and h̅=h+J, where the spin J is fixed. This quantity is denoted by 𝒩_J(ε) and can be expressed as the sum of the degeneracies n_h,h+J of Virasoro primaries over the specified range of h: 𝒩_J(ε) := ∑_h∈(A-ε, A+ε) n_h,h+J. Our goal is to derive non-trivial asymptotic two-sided bounds on 𝒩_J(ε) in the limit J→∞ and ε→0, under specific constraints between ε and J. However, due to technical limitations, a direct estimate of 𝒩_J(ε) is not feasible.[While it is possible to compute 𝒩_J(ε) directly on a case-by-case basis, our primary objective in this paper is to derive universal behaviors of 𝒩_J(ε).] To overcome this, we introduce another quantity 𝒜_J(β_L,ε) by assigning a β_L-dependent weight to each degeneracy n_h,h̅ of Virasoro primaries: 𝒜_J(β_L,ε):=∑_h∈(A-ε,A+ε)n_h,h+Je^-(h-A)β_L. Importantly, 𝒩_J(ε) and 𝒜_J(β_L,ε) are related by the following inequality: e^-εβ_L𝒜_J(β_L,ε)⩽𝒩_J(ε)⩽ e^εβ_L𝒜_J(β_L,ε). This inequality provides an upper and a lower bound for 𝒩_J(ε) in terms of 𝒜_J(β_L,ε), with a dependence on the parameter ε and the inverse temperature β_L. So our approach involves two main steps. First, we will derive asymptotic two-sided bounds for 𝒜_J(β_L,ε). Then, we will use eq. (<ref>) to obtain corresponding bounds for 𝒩_J(ε). 
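As a quick illustration of the definitions of 𝒩_J(ε) and 𝒜_J(β_L,ε) and of the elementary sandwich inequality above, one can evaluate both quantities on a toy spectrum. The following Python sketch is purely illustrative: the list of primaries, the values of A, J, ε and β_L are arbitrary assumptions rather than data from any actual CFT.

```python
import math, random

A = 1.0          # A = (c-1)/24, illustrative value only
J = 100          # fixed spin
eps = 0.05       # half-width of the twist window around h = A
beta_L = 12.0    # left inverse temperature

# Toy degeneracies n_{h, h+J}: a few fake primaries with h near A (not a real CFT spectrum).
random.seed(0)
primaries = [(A + random.uniform(-eps, eps), random.randint(1, 5)) for _ in range(50)]

# N_J(eps) = sum of degeneracies with |h - A| < eps
N_J = sum(n for h, n in primaries if abs(h - A) < eps)

# A_J(beta_L, eps) = sum of n * exp(-(h - A) * beta_L) over the same window
A_J = sum(n * math.exp(-(h - A) * beta_L) for h, n in primaries if abs(h - A) < eps)

# Sandwich inequality: e^{-eps beta_L} A_J <= N_J <= e^{eps beta_L} A_J
lower = math.exp(-eps * beta_L) * A_J
upper = math.exp(eps * beta_L) * A_J
print(N_J, A_J, lower <= N_J <= upper)   # the last entry is always True
```

The check passes term by term, since every weight e^-(h-A)β_L with |h-A|<ε lies between e^-εβ_L and e^εβ_L.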
To estimate 𝒜_J(β_L,ε), we introduce the DLC_w (double lightcone) limit defined as follows: DLC_w limit: β_L, J→∞, 2π T(1-w^2)/A√(J/A)-β_L→∞, β_L^-1logJ→ 0. The reason we still refer to it as the “DLC" limit, similar to (<ref>), will become clearer later. For now, a brief explanation is that by introducing the additional identification β_R=2π√(A/J), (<ref>) becomes a slightly stronger form of (<ref>). We will revisit this point later around (<ref>). With the aforementioned setup, we present our main theorem as follows: Take any unitary, modular invariant 2D CFT with central charge c>1 (i.e. A≡c-1/24>0), a unique normalizable vacuum and a twist gap τ_ gap≡2T>0 in the spectrum of nontrivial Virasoro primaries. Then for any w∈(1/2,1) fixed, and ε within the range ε_ min(β_L,J)⩽ε⩽1-1/2w, ε_ min(β_L,J) :=max{A^3/2/π w^2TlogJ/√(J), 3logJ/4β_L+2logβ_L/β_L}, the quantity 𝒜_J, defined in (<ref>), satisfies the following asymptotic two-sided bounds in the DLC_w limit (<ref>): 1/w1/1-tan(π w(1-ε))/π w(1-ε)≲𝒜_J(β_L,ε )/4π^5/2β_L^-3/2J^-1/2e^4π√(AJ)≲1/w2/1+sin(2π wε)/2π wε, which is uniform in ε. Here by a≲ b we mean lima/b⩽1 in the considered limit. Let us make some remarks on theorem <ref>: (a) In eq. (<ref>), the upper bound is always greater than the lower bound for the assumed range of w. This is because the upper bound is consistently larger than 1/w, while the lower bound monotonically decreases with ε in the interval ε∈(0,1-1/2w). Notably, when ε=0, the lower bound is less than or equal to 1/w. This observation provides a consistency check for the validity of the two-sided bounds. (b) The gap between the upper and lower bounds in eq. (<ref>) decreases as we increase w (i.e. when a stronger DLC_w limit is imposed) and decrease ε. In the limit ε→0 and w→1, both the upper and lower bounds converge to 1. (c) In the DLC_w limit, the lower bound ε_ min(β_L,J) for ε approaches zero. We note that our choice of ε_ min(β_L,J) is not optimal with respect to the method that we use, in the sense that the coefficients of the logarithms in (<ref>) can be further improved. But we expect that the current form of ε_ min already captures its essential behavior in the double lightcone limit, namely ε_ min=O(J^-1/2log J). Using theorem <ref>, we can obtain an estimate for 𝒩_J(ε). Let us consider the following constraints, which are compatible with the DLC_w limit (<ref>) (when J→∞): β_L=3κ^-1 J^1/2, ε=κ J^-1/2logJ (κ^-1<2π T(1-w^2)/3A^3/2 fixed). Substituting these values into eq. (<ref>) and theorem <ref>, and choosing e.g. w=3/4, we obtain the following result: Given any fixed κ∈(4A^3/2/π T,∞), we have 𝒩_J(ε≡κ J^-1/2log J)=J^-5/4e^4π√(AJ)+f_κ(J), where the error term f_κ(J) satisfies the bound f_κ(J)⩽ 3log(J+1)+C(κ), with C(κ) being a finite constant. Before going to the proof, we have three remarks. (1) Recall that the twist accumulation point is at τ=2A≡c-1/12. Corollary <ref> tells us that at large spin J, the number of states that are very close to the twist accumulation point grows exponentially as e^2π√((c-1)/6J), with additional slow-growth factors that are bounded by powers of J. This implies that the average spacing between adjacent states in this regime is approximately given by e^-2π√((c-1)/6J). However, we cannot at present rule out the possibility of having all the states piling up near the endpoints of the interval. Therefore, the rigorous upper bound on the spacing is given by the size of the window, i.e. J^-1/2log J. 
(2) In corollary <ref>, it is crucial to note that the lower bound of κ is proportional to T^-1. This dependence clearly indicates that our analysis will not be valid if the theory does not have a twist gap. (3) If we further assume that the theory has some critical spin J_*, above which there are no Virasoro primaries with twist strictly below 2A≡c-1/12, then all the Virasoro primaries have h greater than or equal to A when h̅⩾ h+J_*. Consequently, considering the exponential term e^(A-h)β_L⩽1, we find that the number of Virasoro primaries with h in the window [A,A+κ J^-1/2log J) cannot be smaller than 𝒜_J. This leads to a more precise lower bound on 𝒩_J, given by: 𝒩_J(ε≡κ J^-1/2log J)⩾ const (J+1)^-5/4e^4π√(AJ), where the constant prefactor is strictly positive. Here the power index -5/4 is obtained by choosing β_L∼ J^1/2 in (<ref>). This index can be understood, and naively reproduced, by considering only the contribution from the vacuum character in the dual channel. To see this, we rewrite the dual vacuum character in terms of the Laplace transform of the modular crossing kernel: √(2π/β)e^4π^2A/β(1-e^-4π^2/β) = ∫_A^∞ d h √(2/h-A)[cosh(4π√(A(h-A)))-cosh(4π√((A-1)(h-A)))]e^-(h-A)β. Therefore, a naive computation of the “vacuum character" contribution to 𝒩_J(ε) is as follows: [𝒩_J(ε)]_ naive= ∫_A^A+εd h∫_h+J-1^h+J+1dh̅ √(4/(h-A)(h̅-A)) ×[cosh(4π√(A(h-A)))-cosh(4π√((A-1)(h-A)))] ×[cosh(4π√(A(h̅-A)))-cosh(4π√((A-1)(h̅-A)))] ∼ const ε^3/2J^-1/2e^4π√(AJ) (ε≪1, J≫1). By choosing ε=κ J^-1/2log J, we obtain the correct index of -5/4 in (<ref>). We expect that this is the optimal power index of J for the lower bound of 𝒩_J(ε≡κ J^-1/2log J), in the sense that the index cannot be larger. One possible approach to verify the optimality is to examine explicit examples of torus partition functions, e.g. the one presented in <cit.>. §.§ Sketch of the proof To derive the two-sided asymptotic bounds (<ref>) for 𝒜_J(β_L,ε) in the DLC_w limit, we introduce several tricks as follows. The first trick relies on the fact that only integer spins are allowed (here we only consider bosonic CFTs). This implies that the spectrum is empty for values of h-h̅ that are non-integers. Using this property, we can express 𝒜_J in a different form as follows (see figure <ref> for a clearer visual representation): 𝒜_J(β_L,ε)=𝒜(β_L,H̅,ε,δ) ∀δ∈(ε,1-ε), where H̅≡ A+J and 𝒜(β_L,H̅,ε,δ) is defined as 𝒜(β_L,H̅,ε,δ):= ∫_A-ε^A+εd h∫_H̅-δ^H̅+δ dh̅ ρ(h,h̅)e^-(h-A)β_L, where ρ(h,h̅) represents the spectral density of Virasoro primaries in the continuum-spectrum version of Z̃ given by the integral in eq. (<ref>). Now the problem is reduced to obtaining upper and lower bounds for 𝒜(β_L,H̅,ε,δ) in the DLC_w limit. To achieve this, we express the DLC_w limit (<ref>) in terms of β_L and H̅, taking into account that H̅=A+J: DLC_w limit: β_L→∞, H̅→∞, 2π T(1-w^2)/A√(H̅-A/A)-β_L→∞, β_L^-1logH̅→0, where w is the same parameter introduced in (<ref>). To proceed, we introduce the next trick, which was used in <cit.>. Let us consider two functions ϕ_±(x) satisfying the inequality ϕ_-(x)⩽θ_δ(x)⩽ϕ_+(x), θ_δ(x):= θ(x∈[-δ,δ]). In addition, for technical reasons, we require that ϕ_± are band-limited functions, meaning that their Fourier transforms ϕ̂_± have compact support: ϕ_±(x)= ∫ d t ϕ̂_±(t)e^-i x t, supp(ϕ̂_±)⊂ [-Λ,Λ] for some Λ<2π w. Functions satisfying these conditions exist <cit.>. 
Later, for the specific range of w we are interested, we will give explicit expression for ϕ_±, see (<ref>) for Λ=2π and (<ref>) for any Λ. Here again, w corresponds to the parameter in eq. (<ref>). The choice of Λ<2π w will be clarified at the end of section <ref>. By substituting eqs. (<ref>) and (<ref>) into the definition of 𝒜, we obtain an upper bound for 𝒜 given by: 𝒜(β_L,H̅,ε,δ)⩽ ∫_A-ε^A+εd h∫_0^∞ dh̅ ρ(h,h̅)e^-(h-A)β_L+(H̅+δ-h̅)β_Rθ_δ(h̅-H̅) ⩽ ∫_A-ε^A+εd h∫_0^∞ dh̅ ρ(h,h̅)e^-(h-A)β_L+(H̅+δ-h̅)β_Rϕ_+(h̅-H̅) = e^(H̅+δ)β_R∫_A-ε^A+εd h∫_0^∞ dh̅ ρ(h,h̅)e^-(h-A)β_L-h̅β_R∫ d t ϕ̂_+(t)e^-i(h̅-H̅)t = e^(H̅+δ-A)β_R∫ d t Z̃_h∈(A-ε,A+ε)(β_L,β_R+i t)ϕ̂_+(t)e^i(H̅-A)t. In the first line, we used e^(H̅+δ-h̅)β_R⩾1 in the support of θ_δ(h̅-H̅). In the second line, we bounded θ_δ by ϕ_+. In the third line, we rewrote ϕ_+ as the Fourier transform of ϕ̂_+. Finally, in the last line we used the definition of Z̃_h∈(A-ε,A+ε). Similarly, we have the following lower bound for 𝒜: 𝒜(β_L,H̅,ε,δ)⩾ e^(H̅-δ-A)β_R∫ d t Z̃_h∈(A-ε,A+ε)(β_L,β_R+i t)ϕ̂_-(t)e^i(H̅-A)t, where ϕ̂_- is the Fourier transform of ϕ_-. It is worth noting that although the bounds depend on β_R, the quantity 𝒜 itself does not. The final result, given by eq. (<ref>), will be obtained by selecting an appropriate value for β_R. Here we choose β_R to be[We will provide a technical explanation for this choice in appendix <ref>. Additionally, an intuitive argument will be presented in section <ref>.] β_R=2π√(A/H̅-A). With this choice, the limit (<ref>) can be expressed as: DLC_w limit: β_L →∞, β_R → 0, 𝔟_w(β_L,β_R):=4π^2T(1-w^2)/Aβ_R -β_L→∞, β_L^-1logβ_R→0. We observe that (<ref>) is slightly stronger than (<ref>): the inclusion of the w^2 term in the third equation of (<ref>) is sufficient to eliminate the logarithmic term log(β_L) present in the third equation of (<ref>). The last equation in (<ref>) is introduced for technical reasons. From now on, we will always assume (<ref>) and (<ref>) by default. Consequently, the three formulations of the DLC_w limit, namely (<ref>), (<ref>) and (<ref>), are equivalent. In the DLC_w limit, the exponential prefactor e^(H̅-A±δ)β_R in the upper bound (<ref>) and the lower bound (<ref>) coincide because β_Rδ→0. So we have I_-, h∈(A-ε,A+ε)(A,H̅;β_L,β_R) DLC_w≲𝒜(β_L,H̅,ε,δ)/e^2π√(A(H̅-A)) DLC_w≲I_+, h∈(A-ε,A+ε)(A,H̅;β_L,β_R) where I_± are defined in the following way: I_±, h∈(A-ε,A+ε)(A,H̅;β_L,β_R):=∫ d t Z̃_h∈(A-ε,A+ε)(β_L,β_R+i t)ϕ̂_±(t)e^i(H̅-A)t. So our goal reduces to deriving asymptotic behavior of the integral (<ref>) in the DLC_w limit. To do this, the main idea is to demonstrate that the asymptotic behavior of the integral (<ref>) remains unchanged in the DLC_w limit if we replace Z̃_h∈(A-ε,A+ε) with the vacuum term in the dual channel of the full reduced partition function Z̃: lim_DLC_w∫ d t Z̃_h∈(A-ε,A+ε)(β_L,β_R+i t)ϕ̂_±(t)e^i(H̅-A)t/∫ d t √(4π^2/β_L(β_R+i t))Z̃_ vac(4π^2/β_L,4π^2/β_R+i t)ϕ̂_±(t)e^i(H̅-A)t=1. To see this, let us consider the integral I_±(A,H̅;β_L,β_R):=∫ d t Z̃(β_L,β_R+i t)ϕ̂_±(t)e^i(H̅-A)t. Using modular invariance, we evaluate this integral in the dual channel: I_±(A,H̅;β_L,β_R)=∫ d t √(4π^2/β_L(β_R+it))Z̃(4π^2/β_L,4π^2/β_R+i t)ϕ̂_±(t)e^i(H̅-A)t. Now we split I_± in different ways in the two channels. 
Using equations (<ref>) and (<ref>), we have: I_±, vac+I_±,T⩽ h⩽ A-ε +I_±,A-ε<h<A+ϵ+I_±,h⩾ A+ϵ=I^ dual_±, vac+I^ dual_±, nonvac, ( direct channel) ( dual channel) where the integrals are defined as follows: I_±, vac= ∫ d t Z̃_ vac(β_L,β_R+i t)ϕ̂_±(t)e^i(H̅-A)t, I_±,T⩽ h⩽ A-ε= ∫ d t Z̃_T⩽ h⩽ A-ε(β_L,β_R+i t)ϕ̂_±(t)e^i(H̅-A)t, I_±,h∈(A-ε,A+ε)= ∫ d t Z̃_h∈(A-ε,A+ε)(β_L,β_R+i t)ϕ̂_±(t)e^i(H̅-A)t, I_±,h⩾ A+ϵ= ∫ d t Z̃_h⩾ A+ε(β_L,β_R+i t)ϕ̂_±(t)e^i(H̅-A)t, I^ dual_±, vac= ∫ d t √(4π^2/β_L(β_R+i t))Z̃_ vac(4π^2/β_L,4π^2/β_R+i t)ϕ̂_±(t)e^i(H̅-A)t, I^ dual_±, nonvac= ∫ d t √(4π^2/β_L(β_R+i t))Z̃_h,h̅⩾ T(4π^2/β_L,4π^2/β_R+i t)ϕ̂_±(t)e^i(H̅-A)t. Here and below, I^…_… always refers to I^…_…(A,H̅;β_L,β_R) with the identification β_R=2π√(A/H̅-A). In section <ref>, we will demonstrate that in the DLC_w limit, the dual channel is dominated by I^ dual_±, vac. This is equivalent to say that lim_DLC_wI^ dual_±, nonvac/I^ dual_±, vac=0. Moving on to section <ref>, we will establish that in the DLC_w limit, the direct channel is dominated by I_±,h∈(A-ε,A+ε). Since the vacuum term dominates the dual channel in the DLC_w limit, this is equivalent to say that lim_DLC_w(I_±, vac+I_±,T⩽ h⩽ A-ε+I_±,h⩾ A+ε)/I^ dual_±, vac=0. We will establish that each term in the numerator of (<ref>) is suppressed by the denominator I^ dual_±, vac. Subsequently, eq. (<ref>) follows from eqs. (<ref>), (<ref>), (<ref>) and (<ref>). We can then evaluate the denominator of eq. (<ref>), which corresponds to I^ dual_±, vac, using its precise expression. The result of I^ dual_±, vac will be given in section <ref>, and the technical details will be given in appendix <ref>. It provides us with the desired estimate of the asymptotic upper and lower bounds on 𝒜(β_L,H̅,ε,δ) in the DLC_w limit (see section <ref>). In section <ref>, we will demonstrate that, while satisfying the aforementioned estimates, ε can effectively approach zero in the DLC_w limit, as long as it remains bounded from below by ε_ min(β_L,J) (defined in eq. (<ref>)). This justification will support the final part of theorem <ref>. Returning to 𝒜_J(β_L,ε) using eq. (<ref>), we note that the spectral density ρ is positive, implying that 𝒜(β_L,H̅,ε,δ) is monotonically increasing in δ. To obtain optimal bounds for 𝒜_J, we choose the smallest δ for the upper bound on 𝒜 and the largest δ for the lower bound on 𝒜, yielding lim_δ→(1-ε)^-𝒜(β_L,H̅,ε,δ)⩽𝒜_J(β_L,ε)⩽lim_δ→ε^+𝒜(β_L,H̅,ε,δ). These inequalities provide the two-sided bounds stated in eq. (<ref>). §.§ Dual channel §.§.§ Dual channel: vacuum Consider the vacuum part of I_± in the dual channel I^ dual_±, vac≡ ∫ d t √(4π^2/β_L(β_R+i t))Z̃_ vac(4π^2/β_L,4π^2/β_R+i t)ϕ̂_±(t)e^i(H̅-A)t, where Z̃_ vac is the vacuum part of the partition function, given by Z̃_ vac(β,β̅)=e^A(β+β̅)(1-e^-β)(1-e^-β̅). By definition I^ dual_±, vac depends on A, H̅, β_L and β_R. We will choose a proper β_R to optimize the asymptotic behavior of I^ dual_±, vac in the limit (<ref>). As mentioned in section <ref>, here we choose β_R=2π√(A/H̅-A). We leave the technical reason of this choice to appendix <ref>. Here we would like to give the following intuitive explanation why this is a good choice. For simplicity let us fix β_L and take the limit β_R→0. Using modular invariance, one can show that Z̃(β_L,β_R) is dominated by the vacuum term in the dual channel: Z̃(β_L,β_R)= √(4π^2/β_Lβ_R)Z̃_vac(4π^2/β_L,4π^2/β_R)[1+O(e^-4π^2 T/β_R)] ∼ 8π^3/β_L^3/2β_R^1/2 e^4π^2A/β_R, where T is the twist gap. 
To reproduce this asymptotic behavior, we make a naive guess for the large-h̅ behavior of the spectral density ρ(h,h̅) by performing an inverse Laplace transform of β_R^-1/2e^4π^2A/β_R: ρ(h,h̅)guess∼ F(h)e^4π√(A(h̅-A))/√(h̅-A) (h̅→∞). This statement can actually be proven in a rigorous way using the argument in <cit.>. Then we have Z̃(β_L,β_R)≡ ∫ d h dh̅ ρ(h,h̅) e^(A-h)β_L+(A-h̅)β_R ∼ ℒ(F)(β_L)e^Aβ_L∫ dh̅e^4π√(A(h̅-A))/√(h̅-A)e^(A-h̅)β_R (β_R→0), where ℒ(F) is the Laplace transform of F. Now let us focus on the β_R-related part: ∫ dh̅e^4π√(A(h̅-A))/√(h̅-A)e^(A-h̅)β_R=2e^4π^2A/β_R∫ dx e^-β_R(x-2π√(A)/β_R)^2. Here, we introduce the variable change x=√(h̅-A). Observing the integrand, we notice that it reaches its maximum value at x=2π√(A)/β_R, which implies β_R=2π√(A/h̅-A) (see figure <ref>). As we aim to extract information about the spectrum within the window h̅∈(H̅-δ,H̅+δ), it seems natural to choose eq. (<ref>) as the relation between β_R and H̅. This completes the intuitive explanation. After identifying β_R and H̅ using constraint (<ref>), we have[By A∼ B we mean that A/B→1 in the considered limit.] I^ dual_±, vac∼(4π^2/β_L)^3/2√(π/H̅)e^2π√(AH̅)ϕ̂_±(0) in the DLC_w limit. We leave the detailed derivation of eq. (<ref>) to appendix <ref>. In order to compare with the contribution from other parts in (<ref>), it is convenient to rewrite the asymptotic behavior of I^ dual_±, vac as I^ dual_±, vac DLC_w∼4π^5/2β_R/β_L^3/2A^1/2e^4π^2A/β_Rϕ̂_±(0). §.§.§ Dual channel: non-vacuum Consider the non-vacuum part of I_± in the dual channel, defined as follows: I^ dual_±, nonvac≡ ∫ d t √(4π^2/β_L(β_R+i t))Z̃_h,h̅⩾ T(4π^2/β_L,4π^2/β_R+i t)ϕ̂_±(t)e^i(H̅-A)t, where Z̃_h,h̅⩾ T is defined by Z̃_h,h̅⩾ T(β,β̅)=∫_T^∞d h∫_T^∞ dh̅ ρ(h,h̅)e^-(h-A)β-(h̅-A)β̅. For technical reasons, we split I^ dual_±, nonvac into two parts I^ dual_±, nonvac=I^ dual_±,T⩽h̅<A+I^ dual_±, h̅⩾ A, where the subscripts denote the regimes of (h,h̅) that contribute. We begin with the following inequality: I^ dual_±, nonvac⩽ √(4π^2/β_Lβ_R)max_xϕ̂_±(x)∫_-Λ^Λ d t Z̃_h,h̅⩾ T(4π^2/β_L,4π^2β_R/β_R^2+t^2) = √(4π^2/β_Lβ_R)max_xϕ̂_±(x) ×∫_-Λ^Λ d t [Z̃_T⩽h̅<A(4π^2/β_L,4π^2β_R/β_R^2+t^2)+Z̃_h̅⩾ A(4π^2/β_L,4π^2β_R/β_R^2+t^2)] ⩽ √(4π^2/β_Lβ_R)max_xϕ̂_±(x) ×2Λ[Z̃_T⩽h̅<A(4π^2/β_L,4π^2/β_R)+Z̃_h̅⩾ A(4π^2/β_L,4π^2β_R/β_R^2+Λ^2)]. Here in the first line, we use the inequality √(1/β_R+i t)⩽√(1/β_R), the identity e^z=e^Re(z), and the fact that supp(ϕ̂_±)⊂[-Λ,Λ]. In the second line, we split Z̃_h,h̅⩾ T into two parts. In the last line, we make use of the inequalities e^-(h̅-A)4π^2β_R/β_R^2+t^2⩽ e^-(h̅-A)4π^2/β_R for h̅<A and e^-(h̅-A)4π^2β_R/β_R^2+t^2⩽ e^-(h̅-A)4π^2β_R/β_R^2+Λ^2 for h̅⩾ A and t⩽Λ. Here, the notation max_x denotes the maximum of ϕ̂_±(x) over all x. The estimate above is for I^ dual_±, nonvac, but it is straightforward to obtain the following individual inequalities for I^ dual_±,T⩽h̅<A and I^ dual_±, h̅⩾ A: I^ dual_±,T⩽h̅<A⩽ 2Λ√(4π^2/β_Lβ_R)max_xϕ̂_±(x)Z̃_T⩽h̅<A(4π^2/β_L,4π^2/β_R), I^ dual_±,h̅⩾ A⩽ 2Λ√(4π^2/β_Lβ_R)max_xϕ̂_±(x)Z̃_h̅⩾ A(4π^2/β_L,4π^2β_R/β_R^2+Λ^2). To demonstrate that I^ dual_±,T⩽h̅<A and I^ dual_±, h̅⩾ A are suppressed by I^ dual_±, vac in the DLC_w limit, we can establish some upper bounds on Z̃_T⩽h̅<A and Z̃_h̅⩾ A. We present a useful lemma below: Let β_0∈(0,∞) be a fixed number. The partition function satisfies the following upper bound: Z̃(β_L,β_R)⩽ κ(β_0)e^A(β_L+β_R) (β_L,β_R⩾β_0), where κ(β_0)≡1+Z̃(β_0,β_0)/Z̃_ vac(β_0,β_0). 
For β_L,β_R⩾β_0 we have Z̃(β_L,β_R)⩽ Z̃_ vac(β_L,β_R)+Z̃_ nonvac(β_L,β_R) ⩽ Z̃_ vac(β_L,β_R)+e^(A-T)(β_L+β_R-2β_0)Z̃_ nonvac(β_0,β_0), where we split the partition function into the vacuum part and the non-vacuum part and used the fact that h,h̅⩾ T for each term in the non-vacuum part. The vacuum part is bounded as follows: Z̃_ vac(β_L,β_R)≡ e^A(β_L+β_R)(1-e^-β_L)(1-e^-β_R)⩽ e^A(β_L+β_R). We also have Z̃_ nonvac(β_0,β_0)⩽Z̃(β_0,β_0)/Z̃_ vac(β_0,β_0)Z̃_ vac(β_0,β_0)⩽Z̃(β_0,β_0)/Z̃_ vac(β_0,β_0) e^2Aβ_0, where we bounded Z̃_ nonvac by the full partition function Z̃ and used (<ref>). Putting everything together we get Z̃(β_L,β_R)⩽ e^A(β_L+β_R)[1+e^-T(β_L+β_R-2β_0)Z̃(β_0,β_0)/Z̃_ vac(β_0,β_0)]⩽κ(β_0)e^A(β_L+β_R) for β_L,β_R⩾β_0, where κ(β_0) is given by (<ref>). This completes the proof. Consider the expression Z̃_T⩽h̅<A(4π^2/β_L,4π^2/β_R) in the regime where β_R⩽β_0⩽β_L. We have the following inequalities: Z̃_T⩽h̅<A(4π^2/β_L,4π^2/β_R)⩽ e^(A-T)(4π^2/β_R-4π^2/β_0)Z̃_T⩽h̅<A(4π^2/β_L,4π^2/β_0) ⩽ e^(A-T)(4π^2/β_R-4π^2/β_0)√(β_Lβ_0/4π^2)Z̃(β_L,β_0) ⩽ κ(β_0)√(β_Lβ_0/4π^2)e^(A-T)(4π^2/β_R-4π^2/β_0)+A(β_L+β_0). In the first line, we use the fact that e^(A-h̅)4π^2/β_R⩽ e^(A-T)(4π^2/β_R-4π^2/β_0)e^(A-h̅)4π^2/β_0 for h̅⩾ T and β_R⩽β_0. In the second line, we bound Z̃_T⩽h̅<A by the full partition function and use modular invariance. Finally, in the last line, we use Lemma <ref>. Using (<ref>), (<ref>) and the first inequality of (<ref>), we obtain the following asymptotic inequality in the DLC_w limit: I^ dual_±,T⩽h̅<A/I^ dual_±, vac DLC_w≲ C^(1)_±(β_0,A,T)(β_L/β_R)^3/2e^-4π^2T/β_R+Aβ_L, where C^(1)_±(β_0,A,T) is a finite constant for fixed β_0, A, and T, given by C^(1)_±(β_0,A,T)≡ κ(β_0) A^1/2β_0^1/2Λ/2π^5/2max_xϕ̂_±(x)/ϕ̂_±(0)e^-A((1-T/A)4π^2/β_0-β_0). We can apply the DLC_w limit (defined in (<ref>)) to the remaining factor in the r.h.s. of (<ref>), obtaining: (β_L/β_R)^3/2e^-4π^2T/β_R+Aβ_L = e^-4π^2T/β_R(1-w^2)+Aβ_L×(β_L/β_R)^3/2e^-4π^2w^2T/β_R ⩽[e^-4π^2T/β_R(1-w^2)+Aβ_L][(4π^2)^3/2β_R^-3e^-4π^2w^2T/β_R] (β_Lβ_R ⩽ 4π^2) DLC_w⟶0. Here we used the fact that β_Lβ_R⩽4π^2 eventually holds in the DLC_w limit. As a result, we conclude that I^ dual_±,T⩽h̅<A is suppressed by I^ dual_±, vac in the DLC_w limit. Then let us consider Z̃_h̅⩾ A(4π^2/β_L,4π^2β_R/β_R^2+Λ^2) in the regime β_L,Λ^2/β_R⩾β_0 and β_R⩽Λ. We have the following bound: Z̃_h̅⩾ A(4π^2/β_L,4π^2β_R/β_R^2+Λ^2)⩽ √(β_L(β_R^2+Λ^2)/4π^2β_R)Z̃(β_L,β_R+Λ^2/β_R) ⩽ κ(β_0)√(β_LΛ^2/2π^2β_R)e^A(β_L+β_R+Λ^2/β_R). Here in the first line we bounded Z̃_h̅⩾ A by the full partition function and used modular invariance, and in the second line we applied lemma <ref> in the specific regime of (β_L,β_R). By using the above bound and (<ref>), we obtain the following estimate: I^ dual_±,h̅⩾ A/I^ dual_±, vac DLC_w≲ C^(2)_±(β_0,A,T)β_L^3/2/β^2_R e^-A(4π^2-Λ^2/β_R-β_L-β_R), C^(2)_±(β_0,A,T)≡ √(AΛ^3/2π^5)κ(β_0)max_xϕ̂_±(x)/ϕ̂_±(0). We note that C^(2)_±(β_0,A,T) is a finite constant for fixed β_0, A and T. Since we have chosen ϕ_± with Λ<2π w (see (<ref>)), in the regime β_Lβ_R ⩽ 4π^2 (which eventually holds in the DLC_w limit), we can write β_L^3/2/β^2_Re^-A(4π^2-Λ^2/β_R-β_L)⩽ (4π^2)^3/2 e^-A(4π^2/β_R(1-w^2)-A/Tβ_L)×[ β_R^-7/2e^-A4π^2/β_R(w^2-Λ^2/4π^2)]. In the DLC_w limit, the first exponential factor goes to zero by (<ref>) and the second factor [...] also goes to zero because Λ<2π w (this is the reason why we made such a choice of Λ in (<ref>)). Therefore, we conclude that I^ dual_±,h̅⩾ A is suppressed by I^ dual_±, vac in the DLC_w limit. 
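To get a feel for how strong this suppression is, one can evaluate the right-hand sides of the two ratio estimates above (up to the constant prefactors C^(1)_± and C^(2)_±) along an explicit trajectory into the DLC_w limit. The Python sketch below is only an illustration: the parameter values A=1, T=1, w=3/4, the choice Λ=0.9·2πw and the trajectory with β_L set to half of the ceiling 4π^2T(1-w^2)/(Aβ_R) are arbitrary assumptions.

```python
import math

# Illustrative parameters (assumptions, not taken from any specific CFT)
A, T, w = 1.0, 1.0, 0.75
Lam = 0.9 * 2 * math.pi * w       # some Lambda < 2*pi*w

for beta_R in [0.5, 0.2, 0.1, 0.05, 0.02]:
    ceiling = 4 * math.pi**2 * T * (1 - w**2) / (A * beta_R)
    beta_L = 0.5 * ceiling        # stays well below the DLC_w ceiling

    # log of (beta_L/beta_R)^{3/2} * exp(-4 pi^2 T / beta_R + A beta_L)
    log_low_twist = 1.5 * math.log(beta_L / beta_R) - 4 * math.pi**2 * T / beta_R + A * beta_L

    # log of beta_L^{3/2}/beta_R^2 * exp(-A((4 pi^2 - Lam^2)/beta_R - beta_L - beta_R))
    log_high_hbar = (1.5 * math.log(beta_L) - 2 * math.log(beta_R)
                     - A * ((4 * math.pi**2 - Lam**2) / beta_R - beta_L - beta_R))

    print(f"beta_R={beta_R:5.2f}  log(factor1)={log_low_twist:10.1f}  log(factor2)={log_high_hbar:10.1f}")
```

Both logarithms are large and negative and decrease further as β_R→0, reflecting the exponential suppression of I^ dual_±,T⩽h̅<A and I^ dual_±,h̅⩾ A relative to I^ dual_±, vac.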
§.§ Direct channel Now we consider I_± in the direct channel. According to the dual-channel results in the previous subsection, we know that I_± has the asymptotic behavior I_±∼ I^ dual_±, vac∼4π^5/2β_R/β_L^3/2A^1/2e^4π^2A/β_Rϕ̂_±(0) in the DLC_w limit with the identification β_R=2π√(A/H̅-A). In this section, we would like to show that I_± is dominated by I_±,h∈(A-ε,A+ε) in the direct channel (in the same limit), i.e. I_±,h∈(A-ε,A+ε) DLC_w∼4π^5/2β_R/β_L^3/2A^1/2e^4π^2A/β_Rϕ̂_±(0). We will argue this by showing that lim_DLC_wI_±, vac/I^ dual_±, vac=lim_DLC_wI_±,T⩽ h⩽ A-ε/I^ dual_±, vac=lim_DLC_wI_±,h⩾ A+ε/I^ dual_±, vac=0. §.§.§ Direct channel: vacuum Let us consider the vacuum term I_±, vac in the direct channel of I_±: I_±, vac≡ ∫ d t e^A(β_L+β_R+i t)(1-e^-β_L)(1-e^-β_R-i t) ϕ̂_±(t)e^i(H̅-A)t = e^A(β_L+β_R)(1-e^-β_L)[ϕ_±(-H̅)-e^-β_Rϕ_±(1-H̅)]. So I_±, vac has the following upper bound I_±, vac⩽ 2max_xϕ_±(x) e^A(β_L+β_R). Comparing (<ref>) with (<ref>), we see that the ratio I_±, vac/I_±, vac^ dual is asymptotically bounded as follows in the DLC_w limit: I_±, vac/I^ dual_±, vacDLC_w≲ C^(3)_±(A)β_L^3/2/β_R e^-A(4π^2/β_R-β_L-β_R), C^(3)_±(A)≡ A^1/2/2π^5/2max_xϕ_±(x)/ϕ̂_±(0). C^(3)_±(A) is a finite constant. The remaining part of (<ref>) is bounded as follows β_L^3/2/β_R e^-A(4π^2/β_R-β_L-β_R)= β_L^3/2/β_Re^-4π^2 Aw^2/β_R e^-A(4π^2(1-w^2)/β_R-β_L-β_R) DLC_w≲ 8π^3/β_R^5/2e^-4π^2 Aw^2/β_R× e^-A(4π^2T(1-w^2)/Aβ_R-β_L-β_R) → 0. Here in the second line we used the fact that β_Lβ_R⩽ 4π^2 eventually in the DLC_w limit, and the third line follows from the definition of the DLC_w limit (recall eq. (<ref>)). Therefore, I_±, vac is suppressed by I^ dual_±, vac in the DLC_w limit. §.§.§ Direct channel: high twist and low twist Then let us consider the non-vacuum terms in the direct channel of I_±: I_±,T⩽ h⩽ A-ε and I_±,h⩾ A+ε, given by (<ref>). Integrating over t in (<ref>) for I_±,T⩽ h⩽ A-ε and I_±,h⩾ A+ε, we get I_±,T⩽ h⩽ A-ε= e^A(β_L+β_R)∫_T^A-εd h∫_T^∞dh̅ ρ(h,h̅)e^-hβ_L-h̅β_R ϕ_±(h̅-H̅), I_±,h⩾ A+ε= e^A(β_L+β_R)∫_A+ε^∞d h∫_T^∞dh̅ ρ(h,h̅)e^-hβ_L-h̅β_R ϕ_±(h̅-H̅). Bounding ϕ_±(h̅-H̅) by its maximal value, we get I_±,T⩽ h⩽ A-ε⩽ max_xϕ_±(x)Z̃_T⩽ h⩽ A-ε(β_L,β_R), I_±,h⩾ A+ε⩽ max_xϕ_±(x)Z̃_h⩾ A+ε(β_L,β_R). Now it suffices to show that Z̃_T⩽ h⩽ A-ε(β_L,β_R) and Z̃_h⩾ A+ε(β_L,β_R) are suppressed by I^ dual_±, vac in the DLC_w limit. This follows from the same analysis as in <cit.>, section 3.[Here a quick way to see the suppression is to rewrite the asymptotic behavior of I^ dual_±, vac as I^ dual_±, vac DLC_w∼√(β_R^3/4π A)ϕ̂_±(0)×√(4π^2/β_Lβ_R)Z̃_ vac(4π^2/β_L,4π^2/β_R). The second factor is exactly the dual-channel vacuum term of the partition function Z̃. It is known from <cit.> that Z̃_T⩽ h⩽ A-ε(β_L,β_R)/√(4π^2/β_Lβ_R)Z̃_ vac(4π^2/β_L,4π^2/β_R) and Z̃_h⩾ A+ε(β_L,β_R)/√(4π^2/β_Lβ_R)Z̃_ vac(4π^2/β_L,4π^2/β_R) decay exponentially fast in the M^* limit. The DLC_w limit in this paper is a stronger version of the M^* limit, and it is sufficient to kill the extra slow-growing factor β_R^-3/2. ] Let us derive an upper bound on Z̃_h⩾ A+ε(β_L,β_R) first. We choose some fixed β_0∈(0,∞) and consider the regime β_R⩽4π^2/β_0⩽β_L. We have Z̃_h⩾ A+ε(β_L,β_R)⩽ e^-ε(β_L-4π^2/β_0)Z̃_h⩾ A+ε(4π^2/β_0,β_R) ⩽ e^-ε(β_L-4π^2/β_0)√(β_0/β_R)Z̃(β_0,4π^2/β_R) ⩽ e^-ε(β_L-4π^2/β_0)√(β_0/β_R)κ(β_0)e^A(β_0+4π^2/β_R). 
Here in the first line we used e^(A-h)β_L⩽ e^-ε(β_L-4π^2/β_0)e^(A-h)4π^2/β_0 for β_L⩾4π^2/β_0 and h⩾ A+ε, in the second line we bounded Z̃_h⩾ A+ε by the full partition function Z̃ and used modular invariance (<ref>), and in the last line we used lemma <ref> in the regime 4π^2/β_R⩾β_0. By (<ref>), (<ref>) and (<ref>) we get I_±,h⩾ A+ε/I^ dual_±, vac DLC_w≲ C^(4)_±(A,β_0)(β_L/β_R)^3/2 e^-ε(β_L-4π^2/β_0)+Aβ_0, C^(4)_±(A,β_0)= κ(β_0)√(Aβ_0/16π^5)max_xϕ_±(x)/ϕ̂_±(0). C^(4)_±(A,β_0) is a finite constant. The remaining part of (<ref>) is bounded as follows (β_L/β_R)^3/2e^-ε(β_L-4π^2/β_0)+Aβ_0=β_L^3/2e^-εβ_L/2+Aβ_0+4π^2ε/β_0×β_R^-3/2e^-εβ_L/2 DLC_w⟶0. Here the first factor obviously vanishes as β_L→∞, and the second factor vanishes because of the last condition of the DLC_w limit. Then let us derive an upper bound on Z̃_T⩽ h⩽ A-ε(β_L,β_R). We introduce an auxiliary variable β_L'. Then we have the following upper bound on Z̃_T⩽ h⩽ A-ε(β_L,β_R): Z̃_T⩽ h⩽ A-ε(β_L,β_R)⩽ e^-ε(β_L'-β_L)Z̃_T⩽ h⩽ A-ε(β_L',β_R) (0<β_L⩽β_L'). This bound follows from the fact that e^(A-h)β_L⩽ e^-ε(β_L'-β_L)e^(A-h)β_L' for h⩽ A-ε and β_L⩽β_L'. We choose β_L'=4π^2 T(1-w^2/2)/Aβ_R(=2π T(1-w^2/2)/A√(H̅-A/A)). Then in the DLC_w limit, we have β_L'-β_L⩾4π^2T(1-w^2)/Aβ_R-β_L→∞. We see that β_L'⩾β_L eventually in the DLC_w limit so eq. (<ref>) holds, and Z̃_T⩽ h⩽ A-ε(β_L',β_R) ⩽ √(4π^2/β_L'β_R)Z̃(4π^2/β_L',4π^2/β_R) = √(4π^2/β_L'β_R)[Z̃_ vac(4π^2/β_L',4π^2/β_R)+Z̃_h,h̅⩾ T(4π^2/β_L',4π^2/β_R)] ⩽ √(4π^2/β_L'β_R)[4π^2/β_L'e^A(4π^2/β_L'+4π^2/β_R)+κ(β_0)√(β_L'β_0/4π^2)e^(A-T)(4π^2/β_R-4π^2/β_0)+A(β_L'+β_0)] = (4π^2/β_L')^3/2β_R^-1/2e^A(4π^2/β_L'+4π^2/β_R) ×[1+κ(β_0)β_0^1/2(β_L'/4π^2)^3/2e^A(β_L'-4π^2/β_L'+β_0-4π^2/β_0)-T(4π^2/β_R-4π^2/β_0)]. Here in the first line we bounded Z̃_T⩽ h⩽ A-ε by the full partition function Z̃ and used modular invariance (<ref>), in the second line we rewrote Z̃ as Z̃_ vac+Z̃_h,h̅⩾ T, in the third line we used Z̃_ vac(β,β̅)⩽β e^A(β+β̅) and lemma <ref>, and the last line is just a rewriting of the third line. The second term in […] vanishes in the DLC_w limit because (β_L'/4π^2)^3/2e^A(β_L'-4π^2/β_L'+β_0-4π^2/β_0)-T(4π^2/β_R-4π^2/β_0)= ((1-w^2/2)T/Aβ_R)^3/2e^-2π^2w^2T/β_R-A^2β_R/(1-w^2/2)T × e^A(β_0-4π^2/β_0+4π^2 T/Aβ_0) DLC_w⟶ 0. By (<ref>), (<ref>), (<ref>) and (<ref>) we get I_±,T⩽ h⩽ A-ε/I^ dual_±, vac DLC_w≲ C^(5)_±(A)(4π^2β_L/β_L'β_R)^3/2e^-ε(β_L'-β_L)+4π^2A/β_L', C^(5)_±(A)= √(A/16π^5)max_xϕ_±(x)/ϕ̂_±(0). C^(5)_±(A) is a finite constant. The remaining part of (<ref>) is bounded as follows (4π^2β_L/β_L'β_R)^3/2e^-ε(β_L'-β_L)+4π^2A/β_L'= (Aβ_L/(1-w^2/2)T)^3/2e^-2π^2Tw^2ε/Aβ_R× e^-ε(4π^2T(1-w^2)/Aβ_R-β_L)+A^2β_R/(1-w^2/2)T ⩽ (4π^2(1-w^2)/(1-w^2/2)β_R)^3/2e^-2π^2Tw^2ε/Aβ_R (eventually in the DLC_w limit) × e^-ε(4π^2T(1-w^2)/Aβ_R-β_L)+A^2β_R/(1-w^2/2)T DLC_w⟶ 0. Here the first line is just a rewriting, in the second line we used the fact that β_L⩽4π^2T(1-w^2)/Aβ_R eventually in the DLC_w limit, and in the last line we used the fact that both factors vanish in the DLC_w limit. §.§ Summary of the estimates, two-sided bounds for fixed ε We summarize our estimates on the various terms in (<ref>) in table <ref>. We conclude that in the DLC_w limit, I_±,h∈(A-ε,A+ε) dominates the direct channel and I^ dual_±, vac dominates the dual channel. So we get I_±,h∈(A-ε,A+ε) DLC_w∼I^ dual_±, vac, which justifies eq. (<ref>). On the other hand, eq. (<ref>) gives the asymptotic behavior of I^ dual_±, vac in the DLC_w limit. Together with (<ref>), (<ref>) and (<ref>), we conclude that ϕ̂_-(0) ≲𝒜(β_L,H̅,ε,δ)/(4π^2/β_L)^3/2√(π/H̅)e^4π√(AH̅)≲ϕ̂_+(0) in the DLC_w limit. 
Here we would like to emphasize that the above equation is valid only when ϕ̂_+(0)≠0 (the lower bound is trivial when ϕ̂_-(0)=0). Recall that we would like to derive the bounds on 𝒜_J(β_L,ε). We use (<ref>), i.e. 𝒜_J(β_L,ε) is exactly the same as 𝒜(β_L,H̅,ε,δ) for δ∈(ε,1-ε). This fact allows us to choose different δ for the upper and lower bounds, say δ_+ and δ_- respectively. Therefore, we conclude from eqs. (<ref>) and (<ref>) that ϕ̂_-,δ_-(0)≲𝒜_J(β_L,ε )/8π^7/2β_L^-3/2J^-1/2e^4π√(AJ)≲ϕ̂_+,δ_+(0) in the DLC_w limit, with δ_±∈(ε,1-ε). Here we added the extra subscript δ_± to ϕ_±, which simply means ϕ_± in (<ref>) with δ=δ_±. Now the question is: given fixed Λ and δ_±, what are the optimal values of ϕ̂_±,δ_±(0)? This problem was studied in <cit.> for Λ=2π. In that case, when δ_+ is very close to 0 and δ_- is very close to 1, the optimal functions ϕ_± are given by ϕ_+,δ_+(x) =16 δ_+ ^2 [x cos(πδ_+) sin(π x)-δ_+ sin(πδ_+) cos(π x)]^2/(x^2-δ_+ ^2)^2 (2πδ_+ +sin (2πδ_+))^2, ϕ_-,δ_-(x) =δ_- ^2 (x cos(π x)-δ_- cot(πδ_- ) sin(π x))^2/x^2 (δ_-^2 -x^2) (πδ_- cot(πδ_-)-1)^2. This choice of ϕ_± gives[These values are obtained from the first equation of eq. (86) and eq. (88) in <cit.>. These functions appeared in <cit.> in the mathematics literature.] ϕ̂_+,δ_+(0)=1/2π2/1+sin(2πδ_+)/2πδ_+, ϕ̂_-,δ_-(0)=1/2π1/1-tan(πδ_-)/πδ_-. The case with arbitrary Λ can easily be obtained by rescaling: ϕ_±,δ_±(x) (Λ=2π) ⟶ ϕ^Λ_±,δ_±(x):=ϕ_±,Λ/2πδ_±(Λ x/2π). Under this rescaling, ϕ̂_±,δ_±(0) is given by ϕ̂^Λ_±,δ_±(0)=2π/Λϕ̂_±,Λ/2πδ_±(0). This gives us ϕ̂_+,δ_+(0)=1/Λ2/1+sin(Λδ_+)/Λδ_+, ϕ̂_-,δ_-(0)=1/Λ1/1-2tan(Λδ_-/2)/Λδ_- . Here we omit the superscript Λ in the expression. Now we insert the optimal values in (<ref>) into (<ref>). But we need to be careful because the following constraints on δ_±, w and Λ should be satisfied: * According to (<ref>) we need δ_±∈(ε,1-ε). * According to (<ref>) we must have Λ<2π w. * The choice of ϕ_+,δ_+ requires that 0<Λδ_+<π (see eq. (86) of <cit.>). * The choice of ϕ_-,δ_- requires that π<Λδ_-<2π (see eq. (88) of <cit.>). To make the above constraints consistent we also need that w>Λ/2π>1/2δ_->1/2(1-ε) ⇒ ε<1-1/2w. Then the condition ε>0 implies w>1/2. Under the above constraints, we choose δ_- to be arbitrarily close to 1-ε, δ_+ to be arbitrarily close to ε and Λ to be arbitrarily close to 2π w. So 𝒜_J has the following asymptotic two-sided bounds 1/w1/1-tan(π w(1-ε))/π w(1-ε)-α_-≲𝒜_J(β_L,ε )/4π^5/2β_L^-3/2J^-1/2e^4π√(AJ)≲1/w2/1+sin(2π wε)/2π wε+α_+. Here α_± are defined by α_+:=2π(ϕ̂^Λ_+,δ_+(0)-ϕ̂^2π w_+,ε(0)), α_-:=2π(ϕ̂^2π w_-,1-ε(0)-ϕ̂^Λ_-,δ_-(0)). They are positive but can be arbitrarily small in the limit δ_+→ε and δ_-→1-ε. However 𝒜_J(β_L,ε) does not depend on δ_± and Λ because ϕ_± are just auxiliary functions for our analysis. So we arrive at (<ref>) for w∈(1/2,1) and ε∈(0,1-1/2w) fixed. There are also other choices of ϕ̂_±,δ_±, which give non-optimal but simpler expressions for the upper and lower bounds than (<ref>). For example, one can choose ϕ_±,δ at Λ=2π to be the Beurling-Selberg functions <cit.>, denoted by ϕ^ BS_±,δ.[For the explicit constructions and technical details, see <cit.>, eqs. (38) - (42) and appendix C.] The B-S functions give ϕ̂^ BS_±,δ(0)=1/2π(2δ±1). Then after the same rescaling procedure as (<ref>), one gets ϕ̂_±,δ_±(0)=1/2π(2δ_±±2π/Λ). By taking extremal values of δ_±, the upper and lower bounds in (<ref>) become 1/w+2ε and 2-1/w-2ε. 
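For orientation, the closed-form values above are easy to tabulate. The following Python sketch is a numerical illustration only (the sample values of w and ε are arbitrary); it evaluates the optimal upper and lower bounds appearing in (<ref>) and compares them with the simpler Beurling-Selberg values 1/w+2ε and 2-1/w-2ε.

```python
import math

def upper_bound(w, eps):
    # 1/w * 2 / (1 + sin(2 pi w eps)/(2 pi w eps)), from the optimal majorant
    x = 2 * math.pi * w * eps
    return (1 / w) * 2 / (1 + math.sin(x) / x)

def lower_bound(w, eps):
    # 1/w * 1 / (1 - tan(pi w (1-eps))/(pi w (1-eps))), from the optimal minorant
    y = math.pi * w * (1 - eps)
    return (1 / w) * 1 / (1 - math.tan(y) / y)

for w in [0.6, 0.75, 0.9, 0.99]:
    for eps in [0.2, 0.05, 0.01]:
        if eps < 1 - 1 / (2 * w):           # allowed range of eps for this w
            bs_up, bs_low = 1 / w + 2 * eps, 2 - 1 / w - 2 * eps
            print(f"w={w:4.2f} eps={eps:4.2f}  "
                  f"lower={lower_bound(w, eps):5.3f}  upper={upper_bound(w, eps):5.3f}  "
                  f"BS: {bs_low:5.3f}/{bs_up:5.3f}")
```

As expected from remark (b) of the theorem, both optimal bounds approach 1 as w→1 and ε→0, while the Beurling-Selberg values remain slightly weaker.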
We note that in the limit ε→0, the B-S functions already give the optimal upper bound, while they give the optimal lower bound only when we also take the limit w→1. So far, the statement in theorem <ref> has been established for fixed ε. As a final step, we would like to let ε also go to zero in the DLC_w limit. This will be the subject of the next subsection. §.§ Shrinking the (A-ε,A+ε) window In this subsection, we would like to establish the final part of theorem <ref>, which allows for the vanishing of ε in the DLC_w limit, provided it remains larger than ε_ min(β_L,J) as defined in (<ref>). In our analysis, the ε-dependence arises in three key aspects: * In appendix <ref>, we used the dominated convergence theorem, which requires the condition that |ϕ_±|, which depends on δ, must be bounded by an integrable function. This condition is automatically satisfied when ε is fixed since, in that case, we work with fixed functions ϕ_± that are integrable by themselves. However, as ε→ 0, the functions ϕ_± are no longer fixed, and it becomes crucial to ensure that the family of ϕ_± functions we consider remains uniformly bounded by certain integrable functions that are independent of δ. This will ensure the applicability of the dominated convergence theorem in the limit as ε→ 0. * The ratios max_xϕ̂_±(x)/ϕ̂_±(0) (as seen in (<ref>) and (<ref>)) and max_xϕ_±(x)/ϕ̂_±(0) (as seen in (<ref>), (<ref>), and (<ref>)) depend on the choice of ϕ_±. Ultimately, we selected δ_+=ε→0 for ϕ_+ and δ_-=1-ε→1 for ϕ_-. Therefore, similarly to point 1, we need to derive some uniform bounds on the ratios. * The bounds on the high- and low-twist contributions in the direct channel of I_± incorporate exponential factors that rely on ε, as presented in Table <ref>. To address the concerns raised in the first and second points, we establish upper bounds on the quantities ϕ_±,δ_±(x), max_xϕ_±,δ_±(x)/ϕ̂_±,δ_±(0) and max_xϕ̂_±,δ_±(x)/ϕ̂_±,δ_±(0) for the chosen ϕ_±,δ_±. These bounds remain uniform in δ_± within the regimes that permit the limits δ_+→0 and δ_-→1. The precise statements of these bounds are presented in lemma <ref> and lemma <ref>, while the detailed proofs can be found in appendix <ref>. Consequently, the first and second concerns are effectively resolved through the establishment of these uniform bounds. To address the third point, we examine the conditions required for the DLC_w limit as ε approaches 0, as shown in table <ref>. These conditions can be expressed as follows: β_L^3/2e^-ε(4π^2T/Aβ_R(1-w^2/2)-β_L),(β_L/β_R)^3/2e^-εβ_L→0. That is, the exponential factors must decay rapidly enough to render the power-law factors negligible. For the first term in (<ref>), we use the DLC_w condition (<ref>) and get the following inequality: β_L^3/2e^-ε(4π^2T/Aβ_R(1-w^2/2)-β_L)⩽(4π^2T(1-w^2)/Aβ_R)^3/2e^-ε2π^2 w^2T/Aβ_R. In the DLC_w limit, where β_R→ 0, we need the r.h.s. of the above inequality to vanish. This can be achieved if we impose the following sufficient condition: ε2π^2 w^2T/Aβ_R⩾(3/2+α)log(1/β_R), where α is an arbitrary fixed positive constant. For the second term in (<ref>), we rewrite it as follows: (β_L/β_R)^3/2e^-εβ_L=e^-(ε-3log(1/β_R)/2β_L)β_L+3/2logβ_L. In order for this term to vanish in the DLC_w limit, we require the following sufficient condition: (ε-3log(1/β_R)/2β_L)β_L⩾(3/2+α)logβ_L, where α is an arbitrary fixed positive constant. Now we select the same α for both (<ref>) and (<ref>) for simplicity, and determine the smallest value of ε that satisfies these conditions. 
Specifically, we have: ε⩾max{(3/2+α)A/2π^2w^2Tβ_Rlog(1/β_R), 3log(1/β_R)/2β_L+(3/2+α)logβ_L/β_L} It can be verified that, in the DLC_w limit, the choice of ε given by (<ref>) tends to zero. The last part of theorem <ref> follows by choosing α=1/2 in (<ref>) and using the identification (<ref>) (which implies β_R∼ J^-1/2 in the DLC_w limit). This finishes the whole proof of theorem <ref>. § ADDING CONSERVED CURRENTS In our previous analysis, we assumed that the 2D CFT has a nonzero twist gap τ_ gap>0, and that the vacuum state is the only twist-0 primary state. However, we believe that the argument can be straightforwardly generalized to include 2D CFTs with nontrivial twist-0 primaries, which correspond to conserved chiral currents.[We thank Nathan Benjamin for a relevant discussion on this point.] To account for the presence of these twist-0 primaries, we modify our general ansatz for the partition function. While maintaining the assumption of a nonzero twist gap τ_ gap above the twist-0 primaries, the modified partition function ansatz becomes: Z(β_L,β_R)= χ_0(β_L)χ_0(β_R)+∑_h,h̅⩾τ_ gap/2n_h,h̅ χ_h(β_L)χ_h̅(β_R) +∑_j=1^∞D(j)[χ_j(β_L)χ_0(β_R)+χ_0(β_L)χ_j(β_R)], where j represents the spin of the current, and D(j) denotes the number of left/right currents with spin j . We assume parity symmetry for simplicity, which implies that for each j, there are an equal number of left and right spin-j currents. We aim to provide separate comments on the cases of finitely many currents and infinitely many currents, and propose conjectures that generalize, or partially generalize, the results presented in section <ref>. Although the following arguments lack rigor, we anticipate that they can be established using a similar analysis to section <ref>. So far we are not aware of any known examples of CFTs that can be used to verify the consistency of (a) the case of finitely many currents (section <ref>) or (b) the case of infinitely many currents with a "slow growth" of the current degeneracy D(j) (section <ref>). However, many examples exist where we can examine the last case: infinitely many currents with a "critical growth" of the current degeneracy D(j) (section <ref>). We provide three examples for the last case, and the technical details can be found in appendix <ref>. Nevertheless, a comprehensive investigation is still required in order to fully understand and validate these conjectures. We leave it for future work. §.§ Finite number of currents Consider the case where the 2D CFT has finitely many twist-0 primaries (chiral currents). In this scenario, the modified partition function, given by eq. (<ref>), reduces to: Z(β_L,β_R)= χ_0(β_L)χ_0(β_R)+∑_h,h̅⩾τ_ gap/2n_h,h̅ χ_h(β_L)χ_h̅(β_R) +∑_n=1^N[χ_j_n(β_L)χ_0(β_R)+χ_0(β_L)χ_j_n(β_R)], where j_n∈ℤ_+ represent the spins of the currents, and N denotes the total number of currents. In comparison to the asymptotic behavior of the partition function without currents, the essential difference here is that the current characters, instead of the vacuum character, dominate the dual channel in the DLC_w limit (<ref>). This change arises because in the dual channel, the β_L-dependent part of the current character approaches 1, while that of the vacuum character approaches zero in the double lightcone limit. Specifically, we have: vacuum: e^4π^2A/β_L(1-e^-4π^2/β_L)∼4π^2/β_L vs current: e^(A-j)4π^2/β_L∼1. 
Consequently, in the dual channel, we find that: Z̃(β_L,β_R) DLC_w∼ √(4π^2/β_Lβ_R)e^A(4π^2/β_L+4π^2/β_R)∑_n=1^Ne^-4π^2j_n/β_L(1-e^-4π^2/β_R) ∼ 2π N/(β_Lβ_R)^1/2e^4π^2A/β_R. This indicates that only the slow-growth factor (the power-growth factor) changes when finitely many currents are added to the partition function. Therefore, the argument presented in section <ref> remains valid. The point (h=A,h̅=∞) continues to be an accumulation point in the spectrum of Virasoro primaries, and the same holds true with h and h̅ interchanged. Moreover, the analysis conducted in sections <ref> - <ref> still applies, but results in table <ref> are slightly modified: the exponential factors remain the same, but the power-law indices in the estimates will change due to the presence of the currents. Therefore, in this case, we expect similar results to theorem <ref>, with two modifications: * The arguments in the two-sided bounds (<ref>) are modified as follows: 𝒜_J(β_L,ε )/4π^5/2β_L^-3/2J^-1/2e^4π√(AJ) ⟶ 𝒜_J(β_L,ε )/N√(π/β_L J)e^4π√(AJ). * The allowed lower bound on ε has a similar structure, but the coefficients in front of the logarithmic terms in (<ref>) will change. Then for 𝒩_J(ε) in the large spin limit, we still have 𝒩_J(ε≡κ J^-1/2log J)J→∞∼e^4π√(AJ)+f_κ(J), f_κ(J)⩽ C_1log( J+1)+C_2(κ), where the allowed κ and the constants C_1 and C_2 will be different from corollary <ref>. §.§ Infinitely many currents Now let us consider a CFT with infinitely many currents. In this case, the partition function is given by (<ref>) with an infinite sum over the current spin j. For convenience, we define ℱ_ current(β):=∑_j=1^∞D(j)e^-j β≡∑_j=1^∞D(j)q^j (q=e^-β). We can rewrite Z̃ using (<ref>) as: Z̃(β_L,β_R)= e^A(β_L+β_R)[(1-e^-β_L)(1-e^-β_R)+∑_h,h̅⩾τ_ gap/2e^-hβ_L-h̅β_R +ℱ_ current(β_L)(1-e^-β_R)+(1-e^-β_L)ℱ_ current(β_R)]. It is important to note that the growth of D(j), the number of left/right currents with spin j, cannot be arbitrarily fast. Here we make the following ansatz: D(j)=f(j)e^4π a j^b (a>0,b⩾0), where f(j) represents the component of slow growth compared to the exponential factor.[Here, by “slow growth" of f(j) we mean that for any δ>0, f(j)e^-δ j^b→0 in the limit j→∞.] Additionally, for simplicity, we assume that f(j) is bounded from below by some positive constant. The range of the allowed a and b is constrained by the Cardy growth <cit.>, which states that: D(j)≲ g(j) e^4π√(Aj) (j→∞), where g(j) is some factor of slow growth. This bound can be derived from modular invariance by considering reduced partition function Z̃(β_L,β_R) in the limit β_L→∞ with β_R fixed. It follows that b should be in the range: b⩽1/2, In addition, at criticality b=1/2, the parameter a should be in the range: a⩽√(A). Using the ansatz (<ref>), neglecting the slow-growing factor f(j) and performing a saddle-point approximation in j in (<ref>), we find that in the limit β_L→∞, ℱ_current(4π^2/β_L) exhibits the following growth behavior: ℱ_ current(4π^2/β_L)∼∑_j=1^∞e^-j 4π^2/β_L+4π aj^b∼ e^#β_L^b/1-b (β_L→∞). Using (<ref>) together with the derived asymptotic behavior (<ref>), we can also argue why b>1/2 is not allowed. Firstly, for b⩾1, the sum in (<ref>) diverges when β<4π a. However, such a divergence contradicts the requirement of a well-defined torus partition function. Secondly, for b ∈(1/2,1), we have b/1-b > 1 in (<ref>). 
Consequently, if we consider the limit β_L→∞ with β_R fixed for Z̃(β_L,β_R), the contribution solely from the left currents in the dual channel grows faster in β_L than the total contribution in the direct channel, which is at most e^Aβ_L. This contradiction rules out the possibility of b∈(1/2,1).[This argument holds irrespective of the twist-gap condition τ_ gap>0 and is also valid for the case τ_ gap=0.] In the subsequent subsections, we will analyze the following three cases separately: * 0 ⩽ b < 1/2; * b=1/2 and 0<a<√(A); * b = 1/2 and a=√(A). We will see that they exhibit different behaviors in the double lightcone limit. §.§.§ Case 0⩽ b<1/2 In the case when 0⩽ b<1/2, in the double lightcone limit, we expect the partition function Z̃(β_L,β_R) to be dominated by the left-current characters in the dual channel: Z̃(β_L,β_R)∼√(4π^2/β_Lβ_R)ℱ_ current(4π^2/β_L)e^4π^2A/β_R, where the asymptotic behavior of the current factor ℱ_ current(4π^2/β_L) is given by (<ref>) (up to a slow-growing factor). By comparing (<ref>) with (<ref>), we note that only the growth in β_L changes, while the exponential factor e^4π^2A/β_R, which dominates the growth in the double lightcone limit, remains the same. In the direct channel of Z̃(β_L,β_R), our expectation is that the contributions from the vacuum, currents, low twists, and high twists are still subleading compared to the r.h.s. of (<ref>). Thus, the analysis presented in section <ref> remains applicable, indicating that (h,h̅)=(A,∞) continues to be an accumulation point in the spectrum of Virasoro primaries. To estimate the quantity 𝒜_J(β_L,ε), we can follow a similar analysis as described in sections <ref> - <ref>. However, due to the presence of infinitely many currents, we expect additional factors to appear in the estimates: e^#β_L^b/1-b, e^#β_R^-b/1-b, e^#β_L^-b/1-b, e^#β_R^b/1-b. While the first two terms may exhibit unbounded growth, it is important to note that b/1-b<1 for the range b<1/2. Hence, we anticipate that these subexponentially growing factors will have a negligible impact on the estimates, except when considering the shrinking of the (A-ε,A+ε) range towards zero. Moreover, it is necessary to establish that the contributions from (a) right-current characters in the dual channel and (b) left- and right-current characters in the direct channel also become subleading in the double lightcone limit. Based on the aforementioned argument, we expect the following key differences compared to table <ref>: * The dominant term is now I^ dual_±, left, which corresponds to the contribution from left currents in the dual channel. * The power indices in the prefactors will change. * There are additional estimates on I^ dual_±, right, I_±, left and I_±, right. These contributions are expected to be subleading. * In the estimate of the low-twist contribution, an additional factor e^#β_R^-b/1-b will appear in the final result. This arises from the contribution of the left currents ℱ_ current(4π^2/β_L') in the analysis of (<ref>). The first and second differences are already present in the case of finitely many currents. However, the third difference, which accounts for the growth of the current degeneracy D(j), significantly impacts the lower bound of ε allowed by theorem <ref>. Specifically, the decay behavior of ε_ min changes from J^-1/2log J to J^-1-2b/2(1-b), which decays more slowly as J increases. 
Therefore, we propose the following conjecture that generalizes theorem <ref> and corollary <ref>: For a torus partition function of the form (<ref>), assuming the presence of at least one chiral current, if the current density D(j) satisfies the upper bound D(j)⩽ N_0e^a j^b where a>0, 0⩽ b<1/2, and N_0<+∞, then the following statements hold: * There exists a family of Virasoro primaries {(h_n,h̅_n)}_n∈ℕ with h_n→ A and h̅_n→∞. * Theorem <ref> still holds with the following modifications: * The argument in the two-sided bounds (<ref>) is modified to 𝒜_J(β_L,ε )/4π^5/2β_L^-3/2J^-1/2e^4π√(AJ) ⟶ 𝒜_J(β_L,ε)/ℱ_ current(4π^2/β_L)√(π/β_LJ)e^4π√(AJ). * The lower bound for the allowed ε is modified to ε⩾max{α_1 J^-1-2b/2(1-b)+α_2 J^-1/2logJ, α_3β_L^-1logJ+α_4β_L^-1logβ_L}. Here the constants α_i are finite. They depend on A, T, w, and the parameters in (<ref>) (i.e. N_0, a, and b). * Corollary <ref> still holds with the following modifications: * For 𝒩_J(ε), the choice of ε is modified to ε=κ J^-1-2b/2(1-b), where κ has a finite, positive lower bound that depends on A, T, N_0, a and b. * We have 𝒩_J(ε≡κ J^-1-2b/2(1-b))=J^-3/4e^4π√(AJ)+f_κ(J), where the error term fκ(J) satisfies the bound f_κ(J)⩽ C(κ,A,T,a,b)(J+1)^b/2(1-b), and the constant C is finite. * When b=0, an additional log J factor should be included in the choice of ε and the bound on the error term f_κ(J). This conjecture also encompasses the case of finite currents (b=0), where we have ℱ_ currents(4π^2/β_L)β_L→∞⟶ℱ_ currents(0)= number of left currents. The proof of conjecture <ref> is left for future work. §.§.§ Case b=1/2 preview: the shifted twist accumulation point Now we turn our attention to the critical case: b=1/2. In this case, the ansatz (<ref>) of D(j) becomes: D(j)=f(j)e^4π a√(j), where a is a strictly positive parameter. We still assume that f(j) grows slowly compared to any exponential growth of the form e^#√(j). It is worth noting that there exist many known examples of CFTs that fall into this category (see sections <ref> and <ref> below, and appendix <ref> for more details). Before proceeding with the discussion, it is important to note that the ansatz (<ref>) does not encompass all possible cases that remain from conjecture <ref>. Recall that the conjecture covers situations where the growth of the spin degeneracy D(j) is bounded by e^a j^b for some b<1/2. However, there may be other scenarios that fall outside the scope of (<ref>). For instance, there could exist CFTs with D(j)∼ e^√(j)/log j. Such cases cannot be accommodated within the framework of (<ref>) with a positive a. At present, we do not have a systematic approach to address the most general situation, and we leave it for future work. For CFTs with a spin degeneracy D(j) described by the form (<ref>), an intriguing phenomenon occurs: the twist accumulation point is shifted by a^2: h=A-a^2, h̅=∞. This shift arises because in the double lightcone limit, there is an additional factor e^a^2β_L in the dual channel of Z̃(β_L,β_R), which corresponds to the cumulative contribution of current operators: ℱ_ current(4π^2/β_L)= ∑_j=1^∞D(j)e^-4π^2 j/β_L β_L→∞∼ ∫_0^∞d j e^4π a√(j)-4π^2j/β_L ∼ e^a^2β_L. The estimate above is not precise as we neglected the slow-growing factor f(j). Taking into account f(j), we expect ℱ_ current to have the following asymptotic behavior: ℱ_ current(β)β→0∼e^4π^2a^2/β+o(1/β). 
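The asymptotic behavior of ℱ_ current quoted above is easy to check numerically for a toy current degeneracy. The Python sketch below assumes f(j)≡1 and an arbitrary illustrative value of a; it sums the series defining ℱ_ current(β) directly and compares logℱ_ current(β) with the predicted leading term 4π^2a^2/β as β decreases.

```python
import math

a = 0.3   # illustrative value of the growth parameter (assumption; f(j) = 1)

def log_F_current(beta, j_max=200000):
    # log of sum_j exp(4 pi a sqrt(j) - beta j), computed stably via log-sum-exp
    exps = [4 * math.pi * a * math.sqrt(j) - beta * j for j in range(1, j_max)]
    m = max(exps)
    return m + math.log(sum(math.exp(e - m) for e in exps))

for beta in [0.5, 0.2, 0.1, 0.05]:
    prediction = 4 * math.pi**2 * a**2 / beta
    print(f"beta={beta:5.2f}  log F={log_F_current(beta):8.2f}  4*pi^2*a^2/beta={prediction:8.2f}")
```

The ratio of the two columns tends to 1 as β→0, with the residual difference coming from the slowly growing prefactor neglected in the saddle-point estimate.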
In the double lightcone limit, the contribution from left currents dominates the dual channel of Z̃(β_L,β_R), leading to: Z̃(β_L,β_R) DLC_w∼√(4π^2/β_Lβ_R)e^a^2β_L+4π^2A/β_R+o(β_L). Because of the presence of the extra factor e^a^2β_L, to make the contribution from the left currents dominant, the double lightcone limit is not necessarily as strong as (<ref>). Here we modify the DLC_w limit (<ref>) to: DLC_w limit: β_L→∞, β_R→0, 4π^2T(1-w^2)/(A-a^2)β_R-β_L→∞, β_L^-1logβ_R→0. Then to match the correct exponential growth in β_L in the direct channel of Z̃(β_L,β_R), we need contribution from operators with e^(A-h)β_L∼ e^a^2β_L (β_L→∞). This leads to a rough but quick guess of the position of the twist accumulation point, given by (<ref>). We anticipate that by employing a similar technique as in <cit.> and assuming that ℱ_ current(β) exhibits asymptotic behavior (<ref>), it may be possible to rigorously prove the existence of the twist accumulation point at (h=A-a^2,h̅=∞). Based on (<ref>) and the unitarity constraint (h⩾0), we find that a cannot exceed √(A). This is consistent with the "Cardy growth" argument presented around eq. (<ref>) and gives rise to two possibilities: (1) 0<a<√(A) and (2) a=√(A) (as mentioned at the beginning of section <ref>), which will be discussed separately in the next two subsections. §.§.§ Case b=1/2 and 0<a<√(A) In the case of 0<a<√(A), we can choose a sufficiently small (but nonzero) ε such that the interval (A-a^2-ε,A-a^2+ε) does not contain 0. Consequently, for Virasoro primaries with h∈(A-a^2-ε,A-a^2+ε) and large h̅ (i.e. large spin), their Virasoro characters correspond to the second case of (<ref>) without any subtraction. Therefore, in this scenario, we can essentially apply the same arguments as in section <ref>, but with slight modifications in the definitions of 𝒩_J, 𝒜_J: 𝒩_J(ε):= ∑_h∈(A-a^2-ε,A-a^2+ε)n_h,h+J, 𝒜_J(β_L,ε):= ∑_h∈(A-a^2-ε,A-a^2+ε)n_h,h+Je^-(h-A+a^2)β_L, and the double lightcone limit which is compatible with the current version (<ref>): DLC_w limit: β_L,J→∞, 2π T(1-w^2)/A-a^2√(J/A)-β_L→∞ , β_L^-1log J→ 0 . Here we propose the following conjecture: For a partition function of the form (<ref>), where the current degeneracy D(j) is given by (<ref>) with a∈(0,√(A)), the following statements hold: * There exists a family of Virasoro primaries {(h_n,h̅_n)}_n∈ℕ with h_n→ A-a^2 and h̅_n→∞. * For any w∈(1/2,1) and ε∈(0,A-a^2) fixed, the quantity 𝒜_J(β_L,ε) redefined in (<ref>) satisfies the following asymptotic two-sided bounds in the redefined DLC_w limit (<ref>):[The denominator in the r.h.s. is actually ℱ_ current(4π^2/β_L)√(π/β_L(J-a^2))e^4π√(A(J-a^2)), but it makes no difference in the limit J→∞.] 1/w1/1-tan(π w(1-ε))/π w(1-ε)≲𝒜_J(β_L,ε)/f_ current(β_L)√(π/β_LJ)e^4π√(AJ)≲1/w2/1+sin(2π wε)/2π wε, where f_ current(β) is the slow-growing factor in ℱ_ current: ℱ_ current(4π^2/β)=f_ current(β)e^a^2β. We have not made a conjecture regarding the lower bound of the allowed ε since it depends on the slow-growing factor f(j) in (<ref>). Additionally, f(j) will affect the ε-window of 𝒩_J(ε). We expect that by choosing proper ε→0 limit according to the growth of f(j), 𝒩_J(ε) will behave as 𝒩_J(ε)=e^4π√(AJ)+…, where “…" denotes the subleading part in the exponent. To examine that our conjecture is reasonable, we would like to provide two examples. Example 1: decoupled irrational CFTs There are very few known examples for the case of theorem <ref> (e.g. the construction of partition function in <cit.>). 
However, given at least one such example, we can construct infinitely many examples for the case of conjecture <ref>. The construction is simply several decoupled copies of unitary CFTs: CFT= CFT^(1)⊗ CFT^(2)⊗…⊗ CFT^(N). We assume that each CFT^(i) has a central charge c^(i)>1, a unique vacuum and a nonzero twist gap τ^(i)_ gap. Then the big CFT has the following features: * The central charge is given by c=∑_i=1^Nc^(i) (i.e. A=∑_i=1^NA^(i)+N-1/24 where A^(i)=c^(i)-1/24). * The theory has a unique vacuum and infinitely many conserved chiral currents. * The theory has a twist gap for the Virasoro primaries with nonzero twists, given by τ_ gap=min{4,τ_ gap^(1),…,τ_ gap^(N)}. * When j is large, the growth of the current degeneracy is given by D(j)∼ e^4π√(N-1/24j) up to a slow-growing factor. That is, this example corresponds to b=1/2 and a=√(N-1/24). * It has a twist accumulation point of Virasoro primaries, given by h=∑_i=1^NA^(i) (A^(i)≡c^(i)-1/24), h̅=∞. * The quantity 𝒜_J(ε,β_L), defined in (<ref>), is expected to have the following growth in the redefined DLC_w limit (<ref>): 𝒜_J(ε,β_L) DLC_w≈e^4π√(AJ), up to some factor of slow growth. The first five points can be justified rigorously, while the last point is argued with some additional hypotheses that we consider natural. We leave the technical details to appendix <ref>. According to the first four points, this example falls within the scope of the cases discussed in this subsection, making it a suitable test-bed for examining the consistency of conjecture <ref>. The first, fourth, and fifth points indicate that the CFT exhibits a shifted twist accumulation point precisely located at h=A-a^2, aligning with the first part of conjecture <ref>. The final point demonstrates the correct exponential growth of 𝒜_J(ε,β_L) in the DLC_w limit, supporting the validity of the second part of conjecture <ref>. Example 2: W_N CFT The second example is the unitary W_N CFT with central charge c>N-1 and a twist gap τ_ gap^W_N>0 in the spectrum of W_N-primaries <cit.>. The W_N algebra is an extension of the Virasoro algebra, which implies that the W_N CFT possesses a more fine-tuned spectrum and dynamics to satisfy the constraints imposed by the W_N algebra. However, for the purpose of our discussion, let us momentarily set aside the W_N algebra and concentrate solely on the Virasoro algebra. In this context, the theory exhibits the following features from the Virasoro algebra perspective: * The theory has a unique vacuum and infinitely many conserved chiral currents. * The growth of the current degeneracy D(j) (see (<ref>)) is given by D(j)∼ e^4π√(N-2/24j) (j→∞) up to a factor of slow growth in j. * The theory has a twist gap for the Virasoro primaries with nonzero twists, given by τ_ gap=min{6,τ_ gap^W_N}. * The theory has a twist accumulation point of Virasoro primaries, given by h=A-N-2/24, h̅=∞. * The quantities 𝒩_J(ε) and 𝒜_J(ε,β_L), defined in (<ref>), are expected to have the following growth: 𝒜_J(ε,β_L) DLC_w∼e^4π√(AJ), 𝒩_J(ε≡κ J^-1/2log J)J→∞∼e^4π√(AJ), up to some factor of slow growth. We provide the technical details of the above claims in appendix <ref>. The first three points can be rigorously proven. Consequently, this example falls under the case described in conjecture <ref>. We believe that the fourth point, consistent with the first part of conjecture <ref>, can be demonstrated using a technique similar to that of <cit.>.
The setup in the W_N case exhibits significant similarities to the standard CFT case, leading us to propose conjecture <ref> in appendix <ref>, which extends theorem <ref> to W_N CFTs. Finally, the last point is consistent with the second part of conjecture <ref>. Its validity can be argued by considering the incorporation of some additional natural hypotheses, such as assuming the correctness of conjecture <ref> for W_N CFTs. §.§.§ Case b=1/2 and a=√(A) For the second possibility (a=√(A)), it is clear that there exists a twist accumulation point at h=0 due to parity symmetry and the existence of infinitely many left currents, which implies the existence of infinitely many right currents. This case can be realized by considering multiple copies of c<1 unitary minimal models (as demonstrated in the example of 3 copies of Ising CFTs in section <ref>). Moreover, when a=√(A), the counting of Virasoro primaries around the accumulation point (h=0,h̅=∞) becomes straightforward. Since we assumed a nonzero twist gap τ_ gap for Virasoro primaries with twists different from zero, choosing ε<τ_ gap/2 allows us to count the total number of spin-J Virasoro primaries within the window h∈[0,ε). In this case, these Virasoro primaries can only be right currents (i.e. h=0). Therefore, the counting simplifies to: 𝒩_J(ε)=𝒜_J(β_L,ε)=# of spin-J right currents=D(J) (ε<τ_ gap/2). Example 3: three copies of Ising CFTs The case discussed in this subsection can be realized by taking several copies of c<1 unitary minimal models. Here we would like to present the simplest of them, which is the three copies of Ising CFTs: CFT= Ising^(1)⊗ Ising^(2)⊗ Ising^(3). This CFT has the following features: * The theory has a central charge c=3/2, i.e. A=1/48. * It has a unique vacuum and infinitely many conserved currents. * It has a twist gap τ_ gap=1/8 for Virasoro primaries with nonzero twists, which is equal to the scaling dimension of the Ising spin field σ. * For large values of j, the growth of the current degeneracy is approximately given by D(j)∼ e^4π√(1/48j), up to a slow-growing factor. So in this example, we have b=1/2 and a=√(1/48). All of the above points can be rigorously justified by using the explicit character formulas of the Ising CFT, which are well-known. The technical details are provided in appendix <ref>. Based on these points, this example falls within the case discussed in this subsection. It is important to note that this CFT, along with the previous two examples, exhibits infinitely many twist accumulation points for Virasoro primaries. This is due to the fact that these theories are fine-tuned to have a larger symmetry algebra. However, our analysis here, which only concerns the Virasoro algebra, only allows us to conclude that the accumulation point corresponding to h=A-a^2 and h̅=∞ (or with h and h̅ interchanged) is universal. § HOLOGRAPHIC CFTS AND LARGE C LIMIT In this section, our focus shifts to holographic CFTs in the large central charge limit c→∞. Specifically, we consider a family of 2D irrational CFTs labeled by the central charge c, denoted as {𝒜_c}. Our goal here is similar to that in section <ref>: counting the spectrum of Virasoro primaries of these CFTs near the twist accumulation point. We consider 𝒩_J(ε_1,ε_2,A):=∑_A-ε_1<h<A+ε_2n_h,h+J, A≡c-1/24, and take the limit J,A→∞ and ε_1,ε_2→0 with appropriate constraints between ε_1, ε_2, J and A. In contrast to section <ref>, here we allow ε_1 and ε_2 to differ. 
The reason is that in the large central charge limit, the allowed lower bounds on ε_1 and ε_2 will behave differently in A. We adopt the same basic assumptions about CFTs as in section <ref>. Additionally, we assume the existence of two theory-independent quantities, namely α and C(β_L,β_R), satisfying the following conditions: * For any CFT in the family {𝒜_c}, the twist gap is bounded from below by T⩾α A, where α is strictly positive. * For any CFT in the family {𝒜_c}, given that β_L,β_R>2π, the ratio of the full partition function and its vacuum part is bounded from above by Z̃(β_L,β_R)/Z̃_ vac(β_L,β_R)⩽ C(β_L,β_R). where C(β_L,β_R) is finite as long as β_L,β_R>2π. It is worth noting that the first assumption is not strictly necessary, as we expect our results to hold even if we include a sparse spectrum of operators below the twist gap (τ_ gap≡ 2T) in the partition function. The second assumption takes inspiration from the HKS sparseness condition <cit.> and its implications.[We anticipate that (<ref>) can be derived by imposing a similar, yet stronger, sparseness condition compared to the HKS sparseness condition introduced in <cit.>. While the HKS condition focuses on the sparseness of the low-energy spectrum, (<ref>) specifically requires sparseness in the low-twist spectrum.] Our analysis in this section closely follows that in section <ref>. However, there are additional subtleties that arise due to the following reason. In section <ref>, when we decomposed the partition function into several parts in different channels: Z̃=Z̃_ vac+Z̃_T⩽ h⩽ A-ε_1+Z̃_A-ε_1<h< A+ε_2+Z̃_h⩾ A+ε_2=Z̃^ dual_ vac+Z̃^ dual_ nonvac, and argued that several terms are subleading, we actually bounded them by the partition function itself at some fixed inverse temperature, e.g. Z̃(β_0,β_0). However, in the case of a family of theories, these terms may no longer be subleading as they could all grow exponentially fast with c in the large central charge limit. This is the reason why we introduce the two additional conditions mentioned above: we want the dominant terms in the fixed CFT case to remain dominant in the holographic case. Exploring the generality of our additional assumptions, particularly the second one, in holographic CFTs would be an intriguing avenue for future research. §.§ Holographic double lightcone limit, main results To estimate 𝒩_J(ε_1,ε_2,A) defined in (<ref>), we introduce the quantity 𝒜_J(β_L,ε_1,ε_2,A) defined as follows: 𝒜_J(β_L,ε_1,ε_2,A):=∑_h∈(A-ε_1,A+ε_2)n_h,h+Je^-(h-A)β_L. This definition is similar to the previously defined 𝒜_J(β_L,ε) (as defined in eq. (<ref>)), but now it depends on A since the theory is no longer fixed. By definition, 𝒩_J(ε_1,ε_2) and 𝒜_J(β_L,ε) satisfy the following inequality: e^-ε_1β_L𝒜_J(β_L,ε_1,ε_2,A)⩽𝒩_J(ε_1,ε_2,A)⩽ e^ε_2β_L𝒜_J(β_L,ε_1,ε_2,A). We consider 𝒜_J(β_L,ε) in the holographic double lightcone limit (HDLC), which is defined by the following limit procedure: HDLC_w limit: β_L/A, J, A→∞, 2πα(1-w^2)√(J/A)-β_L→∞, β_L^-1log(J/A)→0 , where w∈(1/2,1) is fixed. We note that in the HDLC_w limit, the first, second, and fourth conditions imply that J/A^3→∞. When A and T are fixed, the HDLC_w limit reduces to the DLC_w limit defined in eq (<ref>). Thus, we consider the HDLC_w limit as a natural generalization of the DLC_w limit to the case of the large central charge limit. With the above setup, we have the following result: Let {A_c} be a family of CFTs satisfying the above mentioned conditions (see section <ref> and the beginning of section <ref>). 
Consider any w ∈(1/2,1) fixed and ε_i within the range ε_i, min(β_L,J,A)⩽ ε_i⩽1-1/2w, where ε_i, min are defined by ε_1, min(β_L,J,A) :=3log(Aβ_L)2πα w^2√(AJ) . ε_2, min(β_L,J,A) :=(β_L-4π/3)^-1[3π A+3/2log(β_L√(AJ)/2π)] . Then the quantity 𝒜_J(β_L,ε_1,ε_2,A) defined in (<ref>) satisfies the following asymptotic two-sided bounds in the HDLC_w limit (<ref>): 1/w(1/1-tan(π w(1-ε))/π w(1-ε))≲𝒜_J(β_L,ε_1,ε_2,A)/4π^5/2β_L^-3/2J^-1/2e^4π√(AJ)≲1/w(2/1+sin(2π wε)/2π wε), where ε≡max{ε_1,ε_2}. The above bounds are uniform in ε_1 and ε_2. Theorem <ref> serves as the large-c counterpart to theorem <ref>. It establishes the universal behavior of the spectrum near the twist accumulation point in the regime where J≫ c^3. Due to the similarity in the overall proof structure between theorem <ref> and theorem <ref>, we leave the detailed proof of theorem <ref> to appendix <ref>. In this section, we focus on providing key observations and remarks regarding the main technical distinctions specific to the large central charge limit case. 1) The basic idea in deriving the above bounds is to estimate the partition function by itself but evaluated at some fixed inverse temperature, e.g. Z̃(β_0,β_0). However, in A→∞ limit, Z̃(β_0,β_0) grows like e^2Aβ_0. This leads to subtle differences between theorem. <ref> and theorem. <ref>. For example, while the eq. (<ref>) is similar to the eq. (<ref>), there is an extra 3π A factor in the expression for ε_2, min(β_L,J,A), which comes about and is important because A is very large. 2) In the above discussions we assumed that the non-vacuum spectrum of Virasoro primaries starts from an O(c) twist gap, i.e. h,h̅⩾α A with some fixed α>0. In fact, our conclusion does not change if we have finitely many Virasoro primaries with h,h̅ being O(1) numbers. Using the same analysis as in appendices <ref> and <ref>, one can show that the contributions I^ dual_±,(h,h̅), from each of these extra operators are suppressed: I^ dual_±,(h,h̅)/I^ dual_±, vac HDLC_w⟶0 , and hence can be neglected in our analysis. Let us now estimate 𝒩_J(ε_1,ε_2) using theorem <ref>. For this purpose, we choose the following values of β_L, ε_1 and ε_2: β_L=κ√(J/A), ε_1=1/πα w^2√(A/J)log(AJ), ε_2=3/κ√(A/J)(2π A+log J), where κ∈(0,2πα(1-w^2)). It can be verified that in the limit J/A^3→∞ (which is necessary for the HDLC_w limit as mentioned after (<ref>)), both the conditions of the HDLC_w limit (<ref>) and the ε_i bounds (<ref>) are satisfied. Additionally, it is worth noting that for fixed A, both ε_1 and ε_2 decay as O(J^-1/2log J) in the limit J→∞, which is consistent with the behavior observed in the case of fixed CFT. By applying eq. (<ref>), theorem <ref>, and eq. (<ref>), we obtain two-sided bounds for the quantity 𝒩_J(ε_1,ε_2). In order to simplify the statement of the result, we sacrifice optimality by choosing w=3/4. Thus, we arrive at the following estimate for 𝒩_J: For any fixed κ∈(0,7πα/8), we have 𝒩_J(ε_1,ε_2)=e^4π√(AJ)+f_κ(A,J) (J/A^3→∞) (ε_1≡1/πα√(A/J)log(AJ), ε_2≡3/κ√(A/J)(2π A+log J)), where the error term f_κ(A,J) is bounded by f_κ(A,J)⩽6π A+5log(AJ)+C(κ), with C(κ) being a finite constant. §.§ Near-extremal rotating BTZ black holes In this subsection, we will discuss the implications of the results we have derived in the previous subsection within the context of holography. It is worth noting that the Cardy-like formulas are commonly used to compute the entropy of black holes in AdS_3 <cit.>. However, our current investigation focuses on the rotating BTZ black holes. 
The near-extremal rotating BTZ black hole has an approximate AdS_2× S^1 throat, as discussed in section 4.1 of <cit.>. Since the Schwarzian action describes gravity in nearly AdS_2 spacetime <cit.>, it is reasonable to expect that the nearly AdS_2× S^1 throat of the near-extremal rotating BTZ black hole is described by the Schwarzian action. From a holographic perspective, this suggests the existence of a Schwarzian sector in 2D holographic CFT. In the work <cit.>, the authors made significant progress in identifying this Schwarzian sector. They specifically highlighted that the near-extremal limit in 3D gravity is dual to the double lightcone limit in 2D CFT. Their analysis led to the proposal of a universal sector described by Schwarzian theory in irrational 2D CFTs. The methodology closely follows the intuitive aspects of the lightcone bootstrap in the large central charge regime, effectively capturing the qualitative features of this limit. Furthermore, the authors extended their analysis to correlators, providing additional evidence for the proposed universality. Here we would like to focus on the torus partition function. In section <ref>, we will briefly review the argument in <cit.>. Then in section <ref>, we will compare our results to the ones in <cit.>, and clarify what we can justify and what we cannot. §.§.§ Review: rotating BTZ black hole thermodynamics in the near-extremal limit In this section, we provide a concise review of the key points from <cit.> regarding the analysis of the torus partition function in the lightcone limit and its connection to the near-extremal limit of the rotating BTZ black hole in AdS_3. The metric of the rotating BTZ black hole (without electric charge) is given by <cit.> ds^2= -f(r)dt^2+dr^2/f(r)+r^2(dϕ-r_+r_-/ℓ_3r^2dt)^2 , f(r):=(r^2-r_+^2)(r^2-r_-^2)/ℓ_3^2r^2 . Here, r_± denote the radii of the outer and inner horizons, respectively, satisfying 0<r_-⩽ r_+, and ℓ_3 denotes the radius of AdS_3 (which is related to the cosmological constant Λ through Λ=-ℓ_3^-2). To facilitate our discussion, we will use dimensionless parameters for the physical quantities (such as temperature, mass, etc.) of the BTZ black hole. The corresponding dimensionful parameters can be obtained by multiplying dimensionless quantities by (ℓ_3)^a with the appropriate power indices a. The mass M and spin J of the black hole are given by M=r_+^2+r_-^2/8G_Nℓ_3 , J= r_+r_-/4G_Nℓ_3 , where G_N is Newton's constant. The requirement for r_± to be real implies the bound J⩽ M. The thermodynamic quantities of the BTZ black hole, including the Hawking temperature T_ H, the angular momentum chemical potential Ω, and the black hole entropy S, were derived using various semiclassical methods <cit.>. The expressions for these quantities are as follows: T_ H=r_+^2-r_-^2/2πℓ_3 r_+ , Ω=r_-/r_+ , S=π r_+/2G_N . The black hole thermodynamics can be explicitly verified: dM=T_ H dS+Ω dJ. In the classical regime where G_N ≪ℓ_3 and in the near-extremal regime where r_+ ≈ r_-, the entropy of the near-extremal black hole is related to its angular momentum using (<ref>), (<ref>), and the Brown-Henneaux relation c = 3ℓ_3/2G_N <cit.>. We find that the entropy is given by S ≈ 2π√(c/6J)≈4π√(AJ) (c≫1). This result is consistent with corollary <ref> which provides the operator counting formula in the large c limit. It is remarkable that from the CFT side, we obtain the correct entropy formula for near-extremal black holes, providing a gravitational interpretation of our results. 
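These relations can be checked with a short numerical sketch (illustrative only: we set ℓ_3=1 and pick G_N=10^-3, r_+=5, r_-=4.99 as assumed sample values). It verifies the first law dM=T_ H dS+Ω dJ by finite differences and compares the Bekenstein-Hawking entropy with 4π√(AJ) for a near-extremal configuration:

import math

G = 1e-3                                   # illustrative Newton constant (l_3 = 1)
A = (3.0 / (2.0 * G) - 1.0) / 24.0         # A = (c-1)/24 with c = 3 l_3 / (2 G_N)

def M(rp, rm):  return (rp**2 + rm**2) / (8 * G)
def J(rp, rm):  return rp * rm / (4 * G)
def T_H(rp, rm): return (rp**2 - rm**2) / (2 * math.pi * rp)
def Omega(rp, rm): return rm / rp
def S(rp, rm):  return math.pi * rp / (2 * G)

rp, rm, eps = 5.0, 4.99, 1e-6              # near-extremal: r_- close to r_+
dM = M(rp + eps, rm) - M(rp, rm)
dS = S(rp + eps, rm) - S(rp, rm)
dJ = J(rp + eps, rm) - J(rp, rm)
print(dM, T_H(rp, rm) * dS + Omega(rp, rm) * dJ)          # first law: the two numbers agree
print(S(rp, rm), 4 * math.pi * math.sqrt(A * J(rp, rm)))  # entropy vs 4*pi*sqrt(A*J): they agree
                                                          # at the percent level, and better as
                                                          # r_- -> r_+ and c grows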
This matching between the CFT and gravitational descriptions not only reinforces the validity of the thermodynamic description of black hole physics but also provides support for the holographic principle. The inverse temperatures for left and right movers, denoted as β_L and β_R respectively, are related to T_ H and Ω as follows: β_L=(1+Ω)β, β_R=(1-Ω)β (β=T_ H^-1). Expressing β_L and β_R in terms of r_±, we have: β_L=2πℓ_3/r_+-r_-, β_R=2πℓ_3/r_++r_-. In <cit.>, it was emphasized that the self-consistency of semiclassical methods requires the back reaction of black hole radiation to be negligible. Specifically, the fluctuation in Hawking temperature, denoted as Δ T_ H, should be much smaller than T_ H itself: ⟨(Δ T_ H)^2⟩/T_ H^2≪1. This condition can be expressed using a standard thermodynamic argument <cit.> as: ⟨(Δ T_ H)^2⟩/T_ H^2=1/T_ H(∂ T_ H/∂ S)_J≪1. By substituting (<ref>) into (<ref>), we find: T_ H≫G_N/ℓ_3. Using (<ref>) and the Brown-Henneaux relation c = 3ℓ_3/2G_N in the semiclassical regime G_N ≪ℓ_3, the above constraint can be written as: β_L,β_R≪ c (c≫1). For this reason, in ref. <cit.>, c^-1 is referred to as the “gap temperature" of the BTZ black hole. It was believed that the thermodynamic description of the black hole breaks down when T_ H = O(c^-1). The above analysis raises a puzzle regarding the validity of black hole thermodynamics in the near-extremal regime (r_+≈ r_- or Ω≈1), where T_ H could be of O(c^-1) or even smaller. A resolution to this puzzle was recently proposed in <cit.>. See also <cit.> for a recent review. Ref. <cit.> investigated the near-extremal regime of black holes characterized by the conditions: β_L=O(c), β_R=O(c^-1) (c≫1). Based on the previous argument, it appears that black hole thermodynamics breaks down in this regime due to the violation of the “gap temperature condition" (<ref>). However, <cit.> proposed that the thermodynamics remains valid, but it is no longer described by semiclassical methods. Instead, a quantum mode governed by Schwarzian theory <cit.> becomes dominant in the near-extremal regime. Furthermore, <cit.> argued for the universal appearance of the Schwarzian sector in a broad class of irrational CFTs with a twist gap and a significantly large central charge c≫1. Here we aim to revisit their argument while presenting it from a slightly different perspective. Instead of focusing on the full partition function Z(β_L,β_R) as examined in <cit.>, our analysis focuses on the reduced partition function Z̃(β_L,β_R), which exclusively accounts for the Virasoro primaries. The computation below follows the same logic as in <cit.>.[For a review of the same argument in terms of the full partition function Z(β_L,β_R), see appendix B of <cit.>.] This choice enables us to conveniently compare our results with those of <cit.>. The intuitive argument goes as follows. In the grand canonical ensemble, focusing on Z̃(β_L,β_R), the limit as β_R→ 0 favors the dominance of the vacuum state in the dual channel, while the limit as β_L→∞ favors non-vacuum states. The presence of a twist gap in the spectrum of Virasoro primaries ensures that each non-vacuum term is suppressed, leading to the vacuum state's dominance in the dual channel. The vacuum contribution in the dual channel can then be identified with the contribution from the rotating BTZ black hole. Hence, we have the approximation: Z̃(β_L,β_R)(<ref>)≈√(4π^2/β_Lβ_R)e^4π^2 A/β_L+4π^2A/β_R(1-e^-4π^2/β_L)(1-e^-4π^2/β_R)≡Z̃_ BTZ(β_L,β_R), where A≡c-1/24.
In the large central charge limit, the left-moving part of Z̃_ BTZ can be identified to the Schwarzian partition function. To see this, we introduce the Schwarzian variable β̃ by rescaling β(≡(β_L+β_R)/2) β̃:=β/2A(≡12β/(c-1)). In the regime (<ref>) with large central charge, we have β≈β_L/2, then the grand canonical partition function can be further approximated as Z̃(β_L,β_R)(<ref>)≈(π/β̃)^3/2e^π^2/β̃×(π^2/A)^3/2β_R^-1/2e^4π^2A/β_R, and Z_Schw(β̃)=(π/β̃)^3/2e^π^2/β̃ is the Schwarzian partition function with a circle length β̃. So we see that the grand canonical partition function is dominated by the Schwarzian modes in the regime (<ref>). The grand canonical entropy can be determined using the standard thermodynamic formula: S_ grand(β_L,β_R)≡ (1-β_L∂/∂β_L-β_R∂/∂β_R)logZ̃(β_L,β_R) = 8π^2 A/β_R+O(logβ_L)+O(logβ_R)+O(log A) Here, the entropy from Z_ Schw is included in the error term. The Schwarzian sector can also be seen in the canonical ensemble of primary states with h̅=h+J (spin-J): Z̃_J(β)≡∫_0^2πdθ/2π e^iθ J Z̃(β-iθ,β+iθ). In <cit.>, it was argued that in an equivalent near-extremal regime:[We modify the condition stated in ref. <cit.>, eq. (2.14), to J=O(c^3). We believe this to be the correct equivalent condition to β_R=O(c^-1). The reason will become clear shortly: we will see that β_R=2π√(A/J) under the saddle point approximation. This identification, along with the condition β_R=O(c^-1), implies J=O(c^3). This has also been noted in the appendix B of <cit.> to justify the validity of saddle.] β=O(c), J=O(c^3) (c≫1), Z̃_J(β) can be evaluated by replacing Z̃ with Z̃_ BTZ and computing the contribution around the complex saddle point θ= iβ-2π i√(A/J)+O(J^-1). This yields: Z̃_J(β)(<ref>)≈√(2)π^5/2β^-3/2J^-1/2e^4π√(AJ)-β J+2π^2A/β. Then the canonical partition function Z̃_J(β) can be expressed as Z̃_J(β)=Z_ Schw(β̃)× e^4π√(AJ)-β J+O(log A)+O(logβ), The canonical entropy is obtained using the standard thermodynamic formula: S_ canon(J,β)≡(1-β∂/∂β)Z̃_J(β)=4π√(AJ)+O(log A)+O(logβ). Here, the entropy from Z_ Schw is included in the O(log A)+O(logβ) terms. To match the grand canonical entropy (<ref>) with the canonical entropy (<ref>), we recall β_R=β+iθ=2π√(A/J)+O(J^-1) in the saddle point approximation. This identification ensures the agreement of their leading terms: S=4π√(AJ)+(errors). Let us consider the microcanonical ensemble now. In the limit (<ref>), where β_L≈2β=O(c), we expect the dominant contribution to the grand canonical partition function Z̃(β_L,β_R) to arise from states with h-A=O(c^-1). This expectation is based on the duality between β_L and h-A in the definition of Z̃(β_L,β_R) (as seen in (<ref>)). Additionally, in the canonical ensemble, we identify β_R with the saddle point at large spin, which leads to β_R=2π√(A/J). Using this relation together with (<ref>), we find that J≡h̅-h=O(c^3). In the microcanonical ensemble, we therefore anticipate that the relevant spectrum for the limit (<ref>) consists of states characterized by J(≡h̅-h)=O(c^3), h-A=O(c^-1). To determine the microcanonical entropy, we need to determine the number of Virasoro primaries within the range (<ref>). Based on the BTZ dominance (<ref>) in the limit (<ref>), a plausible approach is to perform an inverse Laplace transform of Z̃_ BTZ to obtain the coarse-grained spectral density of Virasoro primaries within the specified range. 
The inverse Laplace transform of Z̃_ BTZ yields the following expressions for the left- and right-moving vacuum characters: 1/√(β)e^4π^2A/β(1-e^-4π^2/β)=∫_A^∞d h̅ ρ_0(A;h̅) e^-(h̅-A)β, where the modular crossing kernel ρ_0 is given by ρ_0(A;h̅)=cosh(4π√(A(h̅-A)))√((h̅-A)π)-cosh(4π√((A-1)(h̅-A)))√((h̅-A)π). Therefore, 2πρ_0(A;h)ρ_0(A;h̅) serves as the naive coarse-grained spectral density for Z̃. Let us use it to estimate the total number of Virasoro primaries within the range: h-A⩽λ c^-1, h-A-J⩽1/2, λ=O(1). By integrating 2πρ_0(A;h)ρ_0(A;h̅) over the specified range of h and h̅, we obtain 2π∫_A^A+λ c^-1 dh ρ_0(A,h) × ∫_A+J-1/2^A+J+1/2dh̅ ρ_0(A;h̅) ≈ π^5/2/A^3/2∫_0^λ/6d(k^2)sinh(2π k) × 1/2√(π J) e^4π√(AJ) (k=2√(A(h-A))). In this expression, the integral over the left-moving part is identified as the integral over the Schwarzian density of states (which is sinh(2π k)). By taking the logarithm of the result, we obtain the microcanonical entropy of the states within the range (<ref>): S_ micro(A,λ,J)=4π√(AJ)+O(log A)+O(log J), where the contribution from the left-moving part, which includes the Schwarzian sector and is of O(log A), is absorbed into the error term. The agreement between the entropy computations in different ensembles for near-extremal BTZ black holes, as seen from eqs. (<ref>), (<ref>), and (<ref>), is significant. Furthermore, the entropy formula S=4π√(AJ) can be reproduced using eqs. (<ref>), (<ref>), and c=3ℓ_3/2G_N in the near-extremal regime. This provides strong evidence supporting the validity of the thermodynamic description of AdS_3 pure gravity in the near-extremal regime, where the Hawking temperature is of the same order as the “gap temperature". However, it is important to note that the above arguments have certain caveats, and it is necessary to consider these limitations when concluding the existence of a universal Schwarzian sector in a general class of irrational CFTs, even without a gravitational dual. We will discuss these caveats in the next subsection. §.§.§ Fine-prints and resolution a la Tauberian In this subsection, we would like to compare our results from section <ref> to the ones in <cit.>, and put some of the intuitive arguments above on rigorous footing and clarify what can be proven rigorously. Let us first consider the partition function in the grand canonical ensemble: Z̃(β_L,β_R). By assuming (<ref>) and (<ref>), we can establish the BTZ dominance, i.e. the vacuum dominance in the dual channel, within the regime specified by (<ref>). Similar to the estimate performed in <cit.>, we find that Z̃_ BTZ(β_L,β_R)/Z̃(β_L,β_R)=1+O[β_L^3/2e^-A(4π^2α/β_R-β_L-3π)], where Z̃_ BTZ was defined in (<ref>). From this result, the BTZ dominance can be reached under condition (<ref>) and an extra technical assumption 4π^2α/β_R-β_L⩾κ c, where κ>0 is a fixed constant. The above extra assumption is weaker than the one imposed in the HDLC_w condition (<ref>) (equivalent to (<ref>) when we identify β_R=2π√(A/J)). In this limit, β_L is allowed to be of O(c), which gives an O(1) Schwarzian variable β̃≈β_L/4A using (<ref>). Consequently, the grand canonical partition function Z̃(β_L,β_R) exhibits the following asymptotic behavior: Z̃(β_L,β_R)≈Z̃_ BTZ(β_L,β_R)≈ Z_ Schw(β̃)×(π^2/A)^3/2β_R^-1/2e^4π^2A/β_R. Therefore, in our framework, we have justified that the Schwarzian partition function appears in the left-moving sector of the grand canonical partition function. 
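The modular crossing kernel ρ_0 used in the microcanonical counting above satisfies the stated inverse Laplace transform identity exactly, and this is easy to confirm numerically. The following sketch (illustrative values A=2 and β=2; numpy and scipy are assumed to be available, and a finite upper limit replaces the infinite one since the neglected tail is negligible here) compares the two sides:

import numpy as np
from scipy.integrate import quad

A, beta = 2.0, 2.0      # illustrative values (assumptions)

def rho0(E):            # crossing kernel as a function of E = hbar - A
    return (np.cosh(4 * np.pi * np.sqrt(A * E))
            - np.cosh(4 * np.pi * np.sqrt((A - 1.0) * E))) / np.sqrt(np.pi * E)

lhs = np.exp(4 * np.pi**2 * A / beta) * (1.0 - np.exp(-4 * np.pi**2 / beta)) / np.sqrt(beta)
# small lower cutoff avoids the removable 0/0 of rho0 at E = 0
rhs, err = quad(lambda E: rho0(E) * np.exp(-beta * E), 1e-12, 400.0)
print(lhs, rhs)         # the two values agree, as expected from the exact identity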
However, it should be noted that the presence of the Schwarzian sector in the theory is not guaranteed, as the direct channel spectrum capable of reproducing the above asymptotic behavior is not necessarily unique. Let us next consider the canonical ensemble. We would like to start by establishing a relationship between the quantity 𝒜_J(β_L,ε_1,ε_2,A) (defined in (<ref>)) which we used in theorem <ref> and the canonical partition function Z̃_J(β) (defined in (<ref>)). By explicitly evaluating the integral in (<ref>), we find Z̃_J(β)=e^-β J∑_hn_h,h+Je^-2β(h-A). Comparing this expression with (<ref>), we obtain the exact relation Z̃_J(β)=𝒜_J(2β,∞,∞,A)e^-β J. Then it is not surprising that setting β≈β_L/2≫ A in (<ref>) yields Z̃_J(β)≈ 4π^5/2β_L^-3/2J^-1/2e^4π√(AJ)× e^-β J, which, by (<ref>), implies 𝒜_J(β_L,∞,∞,A)≈4π^5/2β_L^-3/2J^-1/2e^4π√(AJ). This result agrees well with theorem <ref> (see (<ref>), the denominator below 𝒜_J). The main distinction is that (<ref>) takes into account contributions from all spin-J states (i.e. with ε_1=ε_2=∞), while theorem <ref> only counts the contributions from spin-J states with twists near c-1/12 (i.e. with finite ε_1 and ε_2). This difference is actually one of the main points of theorem <ref>: in the HDLC_w limit, both 𝒜_J(β_L,ε_1,ε_2,A) and 𝒜_J(β_L,∞,∞,A) yield the same leading behavior. Now, let us clarify the conditions required for our analysis in the canonical ensemble. For theorem <ref> to hold true, we need the HDLC_w conditions (<ref>), which imply: β_L≫ c, J≫ c^3. Thus, theorem <ref> pertains to a different regime than the one considered in <cit.>, where they require the condition (<ref>). In <cit.>, having β_L=O(c) is crucial as it enables the identification of the Schwarzian sector through the rescaling (<ref>). The condition (<ref>) indicates that our results are valid when the Hawking temperature is much lower than the “gap temperature": T_ H≪ c^-1. In addition, according to theorem <ref>, the HDLC_w limit conditions (<ref>) impose a lower bound on the temperature. Specifically, we require T_ H⩾ const×α^-1√(c/J) in order to ensure the validity of our analysis. Figure <ref> provides a schematic illustration of the corresponding regimes for our arguments, as well as for the arguments presented in <cit.> and <cit.>. We would like to highlight that the arguments concerning the universal Schwarzian sector presented in the previous subsection <cit.> are meaningful only if it can be shown that the dominant contribution to Z̃_J(β) arises from the BTZ partition function, subject to the condition (<ref>). However, to our knowledge, this has not been rigorously established yet. A clean treatment of this issue would involve proving the following equation: lim_c→∞Z̃_J, BTZ(β)/Z̃_J(β)→1 under the condition (<ref>), where Z̃_J, BTZ(β) represents the contribution from the BTZ partition function. Eq. (<ref>) is similar to (<ref>), where the compact support of ϕ̂_± played a crucial role in the proof. In the case of (<ref>), the analog of ϕ̂_± is e^iθ J, which is supported over the entire real axis of θ. Due to this technical complication, we are unable to rigorously justify (<ref>). It is possible that (<ref>) is not universally true for all classes of irrational CFTs, but may hold with certain additional assumptions that arise from the gravitational perspective. We leave this question for future study. Lastly, let us consider the microcanonical ensemble. 
In this paper, we have shown that in the HDLC_w limit, the dominant contribution to 𝒜_J(β_L,ε_1=∞,ε_2=∞,A) arises from the spectrum satisfying the conditions J≫ c^3, Δ-J-c-1/12∈(-ε_1,ε_2), where ε_1=O(√(c/J)log(J)), ε_2=O(√(c^3/J))+O(√(c/J)log(J)). We have established that within this range, the leading term of the microcanonical entropy is given by S_ micro=4π√(AJ), which coincides with (<ref>) as well as the standard black hole thermodynamic prediction (<ref>). It is important to note that our argument is purely based on CFT considerations and applies to a general class of irrational CFTs, without necessarily relying on a gravitational interpretation. While the microcanonical entropy formulas are the same, the regime of validity of our result differs from that of <cit.>, as mentioned earlier. In our case, the first condition (<ref>), which also appeared in the canonical ensemble (see (<ref>)), is more restrictive than the corresponding condition in (<ref>). In <cit.>, it is crucial for the spectrum with 0⩽Δ-J-c-1/12⩽ O(c^-1) to dominate Z̃(β_L,β_R) in the limit (<ref>) (i.e. the second condition of (<ref>)) in order to identify Δ-J-c-1/12 with the positive Schwarzian energy of O(1): k^2∝ c(Δ-J-c-1/12)=O(1). In contrast, in the second condition of our case (<ref>), we do not exclude the spectrum with twists lower than c-1/12 (i.e. Δ-J-c-1/12∈(-ε_1,0)). Additionally, the width of the window depends on J and is not necessarily of O(c^-1). In our case, it is unclear which modes within the range Δ-J-c-1/12∈(-ε_1,ε_2) are more important for the partition function Z̃(β_L,β_R) in the HDLC_w limit. It would be interesting to investigate the general conditions under which we can access the “Schwarzian regime" (<ref>) and rigorously perform the microstate counting. We leave this for future study. This finishes our discussion on holographic CFTs. We end this section with the remark that it is conceivable to generalize the rigorous discussion to the supersymmetric case along the lines of <cit.>. § CONCLUSION AND BRIEF DISCUSSION In this paper, we present a refined twist accumulation result for two-dimensional unitary conformal field theories with central charge c>1 and a twist gap in the spectrum of Virasoro primaries. Using the lightcone bootstrap argument and Tauberian theory, we rigorously estimate the number of Virasoro primary operators with twist near c-1/12 and large spin, leading to the derivation of a Cardy-like formula (<ref>) which counts the states around the twist accumulation point with twist spacing going to 0 in the large spin limit. We also explore potential generalizations of our result for CFTs with conserved currents. Depending on the growth of the number of currents D(j) with respect to the spin j, the generalization to (<ref>) can vary significantly. We propose conjectures in this regard and anticipate that converting these conjectures into theorems should be a feasible endeavor. For future research, it would be interesting to consider irrational CFTs symmetric under a larger chiral algebra and with a twist gap in the spectrum of primaries of the larger chiral algebra. The twist accumulation point in the spectrum of such primaries is expected to shift, and it is conceivable to establish analogous rigorous findings for irrational CFTs with the larger chiral algebra. Additionally, we study a family of CFTs with a twist gap growing linearly in the central charge and a uniform boundedness condition on the torus partition function. 
We establish a similar Cardy-like formula for microcanonical entropy in the limit of large central charge. From a holographic perspective, our result can be interpreted as the entropy formula for near-extremal rotating BTZ black holes in the regime where the Hawking temperature is much lower than the “gap temperature". It would be interesting to investigate a general CFT condition, inspired by the gravity side, under which we can rigorously perform the microstate counting in the “Schwarzian regime". In that regime, the Hawking temperature is comparable to the “gap temperature". Another avenue to consider is the investigation of CFTs with a global symmetry G and the study of the symmetry-resolved version of asymptotic CFT data. For instance, it is possible to derive the density of states restricted to an irreducible representation of G in the in large Δ limit <cit.> (see also <cit.>). This analysis has been extended to higher-dimensional CFTs and holographic CFTs in subsequent works such as <cit.> followed by <cit.>. Universal results for CFTs with non-invertible symmetry are discussed in <cit.> and in <cit.>. By building upon the techniques elucidated in this paper, one can aspire to derive the universality of the CFT spectrum when restricted to an irreducible representation of G in the regime of fixed twist and large spin. Furthermore, Tauberian theory can be potentially useful for extracting detailed structure of asymptotic CFT data, unveiled in a beautiful recent paper <cit.>. One might hope to use the techniques in this paper in the context of generic irrational CFTs with Virasoro symmetry only to shed light on 1) the claimed twist gap <cit.> of c-1/16 and/or 2) shifting of BTZ threshold c-1/24 by a spin dependent quantity <cit.>. Finally, it is crucial to highlight the significance of the identity block approximation in elucidating the origin of universal results in CFTs under appropriate limits. The Tauberian formalism serves as a valuable tool for rigorously understanding such approximations and their regime of validity. Notably, recent investigations have shed light on subtle effects related to identity block dominance in <cit.> and <cit.>. The Identity block approximation appears in the context of Virasoro mean field theory <cit.> as well. We believe that techniques developed in this paper will be useful to investigate such effects and more. § ACKNOWLEDGMENTS We thank Nathan Benjamin, Gabriele Di Ubaldo, Tom Hartman, Yikun Jiang, João Penedones, Eric Perlmutter, Biswajit Sahoo and Joaquin Turiaci for useful and enlightening discussions. We also thank Alexandre Belin, João Penedones, Slava Rychkov and Joaquin Turiaci for useful comments on the draft. SP acknowledges the support by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632 and by the Walter Burke Institute for Theoretical Physics. JQ is supported by the Swiss National Science Foundation through the National Centre of Competence in Research SwissMAP and by the Simons Collaboration on Confinement and QCD Strings. § ESTIMATING THE DUAL-VACUUM TERM In this section we compute the asymptotic behavior of the integral I^ dual_±, vac≡ ∫_-∞^+∞ d t √(4π^2/β_L(β_R+i t))Z̃_ vac(4π^2/β_L,4π^2/β_R+i t)ϕ̂_±(t)e^i(H̅-A)t in the modular double lightcone limit, where Z̃_ vac(β,β̅) ≡ e^A(β+β̅)(1-e^-β)(1-e^-β̅). 
By definition, I^ dual_±, vac is factorized into two parts I^ dual_±, vac(A,H̅;β_L,β_R)= I^ dual,L_±, vac(A;β_L)I^ dual,R_±, vac(A,H̅;β_R), I^ dual,L_±, vac(A;β_L)= √(4π^2/β_L)e^4π^2A/β_L(1-e^-4π^2/β_L), I^ dual,R_±, vac(A,H̅;β_R)= ∫_-∞^+∞ d t 1/√(β_R+i t)e^4π^2A/β_R+i t(1-e^-4π^2/β_R+i t)ϕ̂_±(t)e^i(H̅-A)t. When β_L→∞, the asymptotic behavior of I^ dual,L_±, vac is given by I^ dual,L_±, vac(A;β_L)∼(4π^2/β_L)^3/2e^4π^2A/β_L. For I^ dual,R_±, vac we introduce the identity 1/√(β)e^4π^2A/β(1-e^-4π^2/β)=∫_A^∞d h̅ ρ_0(A;h̅) e^-(h̅-A)β, where the kernel ρ_0 is given by ρ_0(A;h̅)=cosh(4π√(A(h̅-A)))√((h̅-A)π)-cosh(4π√((A-1)(h̅-A)))√((h̅-A)π). Then by eqs. (<ref>) and (<ref>) we get I^ dual,R_±, vac(A,H̅;β_R)=∫_A-H̅^∞dx ρ_0(A;x+H̅) e^-β_R(x+H̅-A)ϕ_±(x). We choose ϕ_± which satisfy the following properties (a) ϕ_±(x)⩽C_±/1+x^2, C_±<∞ (b) ϕ̂_±(0)≠0 Below we will study I^ dual,R_±, vac(A,H̅;β_R) with β_R=2π√(A/H̅-A) in two special limits I. A⩾ A_0>0, H̅,H̅/A→∞; II. A,H̅→∞, H̅/A=s>1 fixed. For the purposes of this paper, our focus is primarily on considering the case of limit I. However, interestingly, in our analysis, it requires minimal effort to include the case of limit II as well. Consequently, we provide an argument that encompasses both cases simultaneously, with the hope that the inclusion of limit II will be beneficial for future studies. We split the kernel ρ_0 in (<ref>) into four parts ρ_0(A;h̅)= ρ_0^(1)(A;h̅)-ρ_0^(2)(A;h̅)+ρ_0^(3)(A;h̅)-ρ_0^(4)(A;h̅), ρ_0^(1)(A;h̅)= e^4π√(A(h̅-A))2√((h̅-A)π), ρ_0^(2)(A;h̅)= e^4π√((A-1)(h̅-A))2√((h̅-A)π), ρ_0^(3)(A;h̅)= e^-4π√(A(h̅-A))2√((h̅-A)π), ρ_0^(4)(A;h̅)= e^-4π√((A-1)(h̅-A))2√((h̅-A)π). Correspondingly, I^ dual,R_±, vac is split into four parts I^ dual,R_±, vac=I^R(1)_±, vac+I^R(2)_±, vac+I^R(3)_±, vac+I^R(4)_±, vac. We would like to show that for ϕ̂_±(0)≠0:[By definition, we have ϕ̂_+(0)>0, whereas it is not always the case that ϕ̂_-(0)≠0 (see the defining properties of ϕ_± in (<ref>)) and (<ref>). In this paper, we will explicitly choose a specific form of ϕ_± that ensures ϕ̂_±(0)≠0.] * In the limit I, the dominant contribution to I^ dual,R_±, vac comes from I^R(1)_±, vac. Consequently, I^ dual_±, vac has the following asymptotic behavior: I^ dual_±, vac I∼(4π^2β_L)^3/2√(π/H̅)e^2π√(AH̅)ϕ̂_±(0). * In the limit II, the dominant contribution to I^ dual,R_±, vac comes from I^R(1)_±, vac and I^R(2)_±, vac. Consequently, I^ dual_±, vac has the following asymptotic behavior: I^ dual_±, vac II∼(1+e^-2π√(s-1))(4π^2β_L)^3/2√(π/H̅)e^2π√(AH̅)ϕ̂_±(0). where the first and second terms correspond to the contributions from I^R(1)_±, vac and I^R(2)_±, vac, respectively. §.§ Estimating I^R(1)_±, vac (the dominant term) By definition, I^R(1)_±, vac is given by I^R(1)_±, vac:= ∫_A-H̅^∞dx ρ_0^(1)(A;x̅+H̅) e^-β_R(x+H̅-A)ϕ_±(x), ρ_0^(1)(A;h̅)= e^4π√(A(h̅-A))2√((h̅-A)π), We rewrite I^R(1)_±, vac as I^R(1)_±, vac(A,H̅,β_R)= e^4π^2 A/β_R2√(π(H̅-A))∫_A-H̅^∞dx √(H̅-A/x+H̅-A)e^-β_R (√(x+H̅-A)-2π√(A)/β_R)^2ϕ_±(x). We split the domain of integration into two parts ∫_A-H̅^∞=∫_A-H̅^-H̅^τ+∫_-H̅^τ^∞, where τ∈(1/2,1) is some fixed constant. The first integral is bounded by |∫_A-H̅^-H̅^τdx √(H̅-A/x+H̅-A)e^-β_R (√(x+H̅-A)-2π√(A)/β_R)^2ϕ_±(x)| ⩽ (max_A-H̅⩽ x⩽-H̅^τ{|ϕ_±(x)|})(∫_A-H̅^-H̅^τdx √(H̅/x+H̅-A)) = (max_A-H̅⩽ x⩽-H̅^τ{|ϕ_±(x)|})(2√(H̅(H̅-H̅^τ-A))) Here we bounded the exponential factor by 1 and bounded ϕ_± by its maximal value. Since ϕ_± has the upper bound (<ref>), we get |∫_A-H̅^-H̅^τdx √(H̅-A/x+H̅-A)e^-β_R (√(x+H̅-A)-2π√(A)/β_R)^2ϕ_±(x)|⩽2C_±H̅/1+H̅^2τ→0 in the limit (<ref>). 
The second integral is controlled as follows. We rewrite the integral as ∫_-∞^∞dx Y_H̅(x), Y_H̅(x):=√(H̅-A/x+H̅-A)e^-β_R (√(x+H̅-A)-2π√(A)/β_R)^2ϕ_±(x)θ(x+H̅^τ). We have √(H̅-A/x+H̅-A)θ(x+H̅^τ)⩽√(H̅/H̅-H̅^τ-A)⩽√(2) for H̅⩾max{4A,4^1/(1-τ)}. Let us explain the above bound. Since H̅⩾max{4A,4^1/(1-τ)}, we have H̅⩾ 4A and H̅⩾ 4 H̅^τ, leading to H̅-A-H̅^τ⩾H̅/2. Plugging this in, we obtain √(H̅/H̅-H̅^τ-A)⩽√(2). Now the exponential factor in Y_H̅(x) is obviously bounded by 1. So we get |Y_H̅(x)|⩽√(2)|ϕ_±(x)| for sufficiently large H̅ and the r.h.s. is an integrable function on ℝ. Now we consider the point-wise limit of Y_H̅(x) as H̅ goes to ∞. First, using τ>0 we have √(H̅-A/x+H̅-A)θ(x+H̅^τ)(<ref>)⟶1 for fixed x. So far we do not have good control on the exponential factor e^-β_R (√(x+H̅-A)-2π√(A)/β_R)^2 (other than knowing that it is bounded from above by 1). We would like to make this factor reach its maximal value (i.e. 1) in the limit. For this purpose we choose β_R=2π√(A/H̅-A), which gives β_R (√(x+H̅-A)-2π√(A)/β_R)^2= 2π√(A/H̅-A)x^2/(√(x+H̅-A)+√(H̅-A))^2 ⩽ 2π/√(s-1)x^2/(√(x+H̅-A)+√(H̅-A))^2 → 0 for fixed x in the limit (<ref>). Thus we get Y_H̅(x)(<ref>)⟶ϕ_±(x) for fixed x. Then using the dominated convergence theorem we conclude that ∫_-∞^∞dx Y_H̅(x)(<ref>)⟶∫_-∞^∞d x ϕ_±(x)=2πϕ̂_±(0). Putting everything together we get I^R(1)_±, vac(A,H̅,β_R≡2π√(A/H̅-A))(<ref>)∼√(π/H̅-A)e^2π√(A(H̅-A))ϕ̂_±(0). This result holds for both limit I and II in (<ref>). §.§ Estimating I^R(2)_±, vac By definition, I^R(2)_±, vac is given by I^R(2)_±, vac:= ∫_A-H̅^∞dx ρ_0^(2)(A;x̅+H̅) e^-β_R(x+H̅-A)ϕ_±(x), ρ_0^(2)(A;h̅)= e^4π√((A-1)(h̅-A))2√((h̅-A)π), Similarly to the analysis for I^R(1)_±, vac, we rewrite I^R(2)_±, vac as I^R(2)_±, vac(A,H̅,β_R)= e^4π^2 (A-1)/β_R2√(π(H̅-A))∫_A-H̅^∞dx √(H̅-A/x+H̅-A)e^-β_R (√(x+H̅-A)-2π√(A-1)/β_R)^2ϕ_±(x). We see that its structure is very similar to (<ref>), except that A is replaced by A-1 in two places. Then we do the same analysis as I^R(1)_±, vac. We split the domain of integration into two parts as we have done in (<ref>). In the limit (<ref>), the first part goes to zero for the same reason. For the second part, the only different analysis is in the exponential factor. For β=2π√(A/H̅-A) we have β_R (<ref>)⟶ 0 ( limit I) 2π/√(s-1) ( limit II) . Then for limit I we have I^R(2)_±, vac(A,H̅,β_R)/I^R(1)_±, vac(A,H̅,β_R)_β_R=2π√(A/H̅-A)⩽ const(ϕ_±)e^-4π^2/β_R→0 because of the exponential suppression. For limit II, the exponential factor in the integral tends to 1 point-wise: β_R (√(x+H̅-A)-2π√(A-1)/β_R)^2= 2π/√(s-1)(x+s-1)^2/(√(x+H̅-A)+√((A-1)(H̅-A)/A))^2 → 0. Here we used (<ref>). Now using the same dominated-convergence-theorem argument as the analysis for I^R(1)_±, vac we conclude that in the limit II, has the similar asymptotic behavior to (<ref>): I^R(2)_±, vac(A,H̅,β_R≡2π√(A/H̅-A)) II∼√(π/(s-1)A)e^2π√(s-1)(A-1)ϕ̂_±(0). In this case I^R(2)_±, vac is comparable to I^R(1)_±, vac: I^R(2)_±, vac(A,H̅,β_R)/I^R(1)_±, vac(A,H̅,β_R)_β_R=2π√(A/H̅-A) II⟶e^-2π√(s-1)=O(1). §.§ Estimating I^R(3)_±, vac and I^R(4)_±, vac By definition, I^R(3)_±, vac is given by I^R(3)_±, vac:= ∫_A-H̅^∞dx ρ_0^(3)(A;x̅+H̅) e^-β_R(x+H̅-A)ϕ_±(x), ρ_0^(3)(A;h̅)= e^-4π√(A(h̅-A))2√((h̅-A)π), Similarly to the analysis for I^R(1)_±, vac, we rewrite I^R(3)_±, vac as I^R(3)_±, vac(A,H̅,β_R)= e^4π^2 A/β_R2√(π(H̅-A))∫_A-H̅^∞dx √(H̅-A/x+H̅-A)e^-β_R (√(x+H̅-A)+2π√(A)/β_R)^2ϕ_±(x). We see that I^R(3)_±, vac only differs from I^R(1)_±, vac by a change of the “±" sign in the exponential factor. 
Then the analysis is similar to I^R(1)_±, vac. We split the integral into two parts as we have done in (<ref>). In the limit (<ref>), the first part goes to zero for the same reason, and the second part is controlled by the dominated convergence theorem. But for I^R(3)_±, vac we have β_R(√(x+H̅-A)+2π√(A)/β_R)^2= 2π√(A/H̅-A)(√(x+H̅-A)+√(H̅-A))^2 ⩾ 2π√(A(H̅-A)) (<ref>)⟶ ∞. Here in the first line we used β_R=2π√(A/H̅-A), in the second line we dropped the first term in the bracket, and in the last line we used (<ref>). This estimate implies that the second part of the integral also goes to zero. Therefore we get I^R(3)_±, vac(A,H̅,β_R)/I^R(1)_±, vac(A,H̅,β_R)|_β_R=2π√(A/H̅-A)(<ref>)⟶0 (ϕ̂_±(0)≠0). By the same analysis, we can also get I^R(4)_±, vac(A,H̅,β_R)/I^R(1)_±, vac(A,H̅,β_R)|_β_R=2π√(A/H̅-A)(<ref>)⟶0 (ϕ̂_±(0)≠0). So we conclude that I^R(3)_±, vac and I^R(4)_±, vac are subleading in the limit (<ref>). § SOME UNIFORM BOUNDS ON ϕ_±,δ In this appendix, we address the subtleties that arise when we take the limit δ→0 or δ→1 for the selected functions ϕ_±,δ (as discussed in section <ref>, specifically points 1 and 2). Throughout our analysis, we consistently choose the following expressions for ϕ_±,δ, which are (<ref>) rescaled under (<ref>): ϕ_+,δ(x) =16 δ ^2 (x cos(δΛ/2) sin(Λ x/2)-δsin(δΛ/2) cos(Λ x/2))^2/(x^2-δ ^2)^2 (δΛ +sin (δΛ ))^2, ϕ_-,δ(x) =4 δ ^2 (x cos(Λ x/2)-δcot(δΛ/2) sin(Λ x/2))^2/x^2 (δ^2 -x^2) (δΛcot(δΛ/2)-2)^2. Recall that the range of allowed values for δ is given by ε<δ<1-ε. As we approach the limit ε→0, it eventually leads us to consider δ→0 for ϕ_+,δ and δ→1 for ϕ_-,δ. In these limits, the values of the corresponding functions ϕ_±,δ are obtained point-wise as follows: ϕ_+,0(x)=4sin^2(Λ x/2)/Λ^2x^2, ϕ_-,1(x)=4[xcos(Λ x/2)-cot(Λ/2)sin(Λ x/2)]^2/x^2(1-x^2)[Λcot(Λ/2)-2]^2. These limits are well-defined functions and are L^1-integrable over the real axis. This observation provides strong evidence that ϕ_±,δ satisfy certain uniform bounds that are essential for our analysis. We aim to establish the following properties of ϕ_±,δ: For a fixed Λ∈(0,2π], the function ϕ_+,δ satisfies the following bounds for δ∈[0,1/2]: ϕ_+,δ(x)⩽ 4 |x|⩽ 1, 64/Λ^2x^2 |x|⩾1, ϕ_+,δ(x)/ϕ̂_+,δ(0)⩽ max{64/Λ, 4Λ}, ϕ̂_+,δ(x)/ϕ̂_+,δ(0)⩽ 1. Let w∈(1/2,1) and Λ∈[(2w+1)π/2,2π w] be fixed. For δ∈[(2w+1)π/2Λ,1], the function ϕ_-,δ satisfies the following bounds: ϕ_-,δ(x)⩽ π^2 |x|⩽ 2, 4π^2/Λ^2(x-1)^2 |x|⩾2. ϕ_-,δ(x)/ϕ̂_-,δ(0)⩽ Λ(1+4cot((2w-1)π/4)/(2w+1)π)max{π^2, 4π^2/Λ^2}, ϕ̂_-,δ(x)/ϕ̂_-,δ(0)⩽ 2πΛ(1+4cot((2w-1)π/4)/(2w+1)π)(1+2/Λ^2). §.§ Proof of lemma <ref> When x⩾1, we have ϕ_+,δ(x)≡ 16[x/Λcos(δΛ/2)sin(Λ x/2)-δ/Λsin(δΛ/2)cos(Λ x/2)/(x^2-δ^2)(1+sin(δΛ)/δΛ)]^2 ⩽ 16[x+δ/Λ/x^2-δ^2]^2 = 16/Λ^2[1/x-δ]^2 ⩽ 64/Λ^2x^2. In the first line, we rewrite ϕ_+,δ(x). In the second line, we bound sin and cos by 1 and use the fact that sin(δΛ)/δΛ⩾0 for δ∈[0,1/2] and Λ∈(0,2π]. The third line is a rewrite of the second line. In the last line, we use the fact that |x|-δ⩾|x|/2 for |x|⩾1 and δ∈[0,1/2]. When 0⩽ x⩽1 we have ϕ_+,δ(x)≡ 4[2δsin(Λ(x-δ)/2)/Λ(x-δ)+2cos(δΛ/2)sin(Λ x/2)/Λ/(δ+x)(1+sin(δΛ)/δΛ)]^2 ⩽ 4[δ+x/(δ+x)(1+sin(δΛ)/δΛ)] = 4. In the first line, we rewrite ϕ_+,δ(x). In the second line, we use the inequality |sin x/x|⩽1 for any x∈ℝ and sin x⩽ x for x⩾0. In the last line, we use the fact that sin(δΛ)/δΛ⩾0 for δ∈[0,1/2] and Λ∈(0,2π]. The same bound on ϕ_+,δ(x) holds for -1⩽ x⩽0 because it is an even function of x. This completes the proof of the first inequality in (<ref>). Next, we derive a uniform upper bound on the ratio ϕ_+,δ(x)/ϕ̂_+,δ(0).
Using (<ref>), we have ϕ̂_+,δ(0)⩾1/Λ for Λ∈(0,2π] and δ∈[0,1/2]. Together with the first inequality in (<ref>), we obtain ϕ_+,δ(x)/ϕ̂_+,δ(0)⩽max{64/Λ, 4Λ} (0⩽δ⩽1/2, 0<Λ⩽2π). The bound on |ϕ̂_+,δ(x)/ϕ̂_+,δ(0)| follows trivially from the fact that ϕ_+,δ(x)⩾0 for all x. Hence, we have ϕ̂_+,δ(x)/ϕ̂_+,δ(0)⩽1. This completes the proof of lemma <ref>. §.§ Proof of lemma <ref> We note that ϕ_-,δ(x) is an even function of x, so it is sufficient to prove the first inequality in (<ref>) for x⩾0. We rewrite ϕ_-,δ(x) as follows: ϕ_-,δ(x)=δ-x/δ+x[2δ/δΛcos(δΛ/2)-2sin(δΛ/2)]^2[sin(Λ(δ-x)/2)/δ-x-sin(Λ x/2)cos(δΛ/2)/x]^2. When x⩾0, the first factor in (<ref>) is bounded by 1. For the second factor, we use the condition on the range of δ and Λ, which implies that δΛ/2∈[π/2,π]. Thus, the second factor is bounded as follows: [2δ/δΛcos(δΛ/2)-2sin(δΛ/2)]^2⩽π^2/Λ^2, where we used the fact that the function f(x)≡1/[cos x-sin x/x]^2 is maximized at x=π/2 with the maximum value of π^2/4 for π/2⩽ x⩽π. This can be seen from the plot of f(x) shown in figure <ref>. The last factor in (<ref>) is bounded as follows: [sin(Λ(δ-x)/2)/δ-x-sin(Λ x/2)cos(δΛ/2)/x]^2⩽[Λ/2+Λ/2]^2=Λ^2. This bound is obtained by using the inequalities sin x/x⩽1, sin x⩽x, and cos x⩽1. Combining the inequalities derived above, we obtain the following bounds for ϕ_-,δ(x): ϕ_-,δ(x)⩽π^2 (∀ x∈ℝ). In particular, this bound holds for x⩽2. When x⩾2, we use the condition δ⩽1 and further refine the bound. The last factor in (<ref>) is bounded by [sin(Λ(δ-x)/2)/δ-x-sin(Λ x/2)cos(δΛ/2)/x]^2⩽[1/x-1+1/x]^2⩽4/(x-1)^2. Thus, we obtain the improved bound ϕ_-,δ(x)⩽4π^2/Λ^2(x-1)^2 (x⩾2). This completes the proof of the first inequality in (<ref>). For the second and third inequalities in (<ref>), we make use of the explicit expression of ϕ̂_-,δ given in (<ref>): ϕ̂_-,δ(0)=1/Λ1/1-2tan(Λδ/2)/Λδ⩾1/Λ1/1+4cot((2w-1)π/4)/(2w+1)π. Here, we used the condition (2w+1)π/4⩽δΛ/2⩽π, where 1/2< w<1, and the fact that 1/1-tan x/x is monotonically increasing for π/2⩽ x⩽π. Consequently, ϕ̂_-,δ(0) possesses a strictly positive lower bound that remains uniform in δ within the specified range. The second inequality of (<ref>) follows from (<ref>) and the first inequality of (<ref>). Similarly, the third inequality of (<ref>) is obtained by combining (<ref>) and the first inequality of (<ref>) with the additional estimate: ϕ̂_-,δ(x)⩽∫dy/2πϕ_-,δ(y)⩽ 2π+4π/Λ^2. Therefore, we have established the validity of the second and third inequalities in (<ref>), completing the proof of lemma <ref>. § EXAMPLES OF CFTS WITH A SHIFTED TWIST ACCUMULATION POINT The purpose of this appendix is to provide technical details for the examples discussed in sections <ref> and <ref>. §.§ Decoupled irrational CFTs The first example considers a CFT composed of several decoupled copies of CFTs: CFT= CFT^(1)⊗ CFT^(2)⊗…⊗ CFT^(N). By construction, the symmetry algebra 𝒢 of the combined CFT includes, at least, the direct sum of Virasoro algebras: 𝒢⊇ Vir_L^(1)⊕ Vir_R^(1)⊕…⊕ Vir_L^(N)⊕ Vir_R^(N). However, for the purpose of our analysis, we focus solely on the "naive Virasoro algebra" generated by L_n=∑_i=1^NL_n^(i), L̅_n=∑_i=1^NL̅_n^(i). We assume that each individual CFT^(i) has a central charge c^(i)>1, a unique twist-0 primary state (the vacuum), and a nonzero twist gap τ^(i)_ gap>0.
§.§.§ Central charge, vacuum, currents and the twist gap The central charge and the twist gap of the combined CFT are then given by c=∑_i=1^Nc^(i), τ_ gap=min{4,τ_ gap^(1),…,τ_ gap^(N)}, where the “4" in the twist gap corresponds to the states (∑_i=1^Nu_i L^(i)_-2)(∑_i=1^Nv_i L̅^(i)_-2)| vac⟩, subject to the constraints ∑_i=1^Nu_i c^(i)=∑_i=1^Nv_i c^(i)=0. These states are twist-4 scalar primaries of the "naive Virasoro algebra" (<ref>), and there are (N-1)^2 of them (by constraints (<ref>)). From the perspective of the “naive Virasoro algebra," the theory contains infinitely many twist-0 primary states, which correspond to chiral currents. This can be argued in the following way. Let us focus on the left currents since the argument for the right currents is analogous. We consider the representation of the direct sum of Virasoro algebras: 𝒰[Vir_L^(1)⊕ Vir_R^(1)⊕…⊕ Vir_L^(N)⊕ Vir_R^(N)]|h_1,h̅_1,h_2,…,h̅_N⟩, where 𝒰[𝒢] denotes the universal enveloping algebra of the Lie algebra 𝒢, and {h_i,h̅_i} is the set of the highest weights of the representation. The primaries of the “naive Virasoro algebra" are obtained by taking linear combinations of the elements with the same conformal weights in the above representation. By considering all possible |h_1,h̅_1,h_2,…,h̅_N⟩, we obtain all the primaries of the “naive Virasoro algebra." It is important to note that these primaries can only have conformal weights satisfying: h⩾ h_1+…+h_N, h̅⩾h̅_1+…+h̅_N. Since we assumed a twist gap τ_ gap^(i) for each CFT^(i), the left currents, which have h̅=0, can only be descendants of the vacuum state. Moreover, they can only be obtained by acting with L^(i)_-n's (not L̅^(i)_-n's!) on the vacuum state. Consequently, the counting of the left currents reduces to considering the product of the left vacuum characters: ∏_i=1^Nχ_ vac^(i)(q)= ∏_i=1^Nq^-c^(i)-1/24(1-q)/η(q)≡q^-∑_i=1^Nc^(i)-1/24/η(q)f(q), f(q)= q^N-1/24/η(q)^N-1(1-q)^N. The function f(q) is related to ℱ_ current(q) (see (<ref>)) through the equation f(q)=1-q+ℱ_ current(q) (q=e^-β). The coefficients of q^j in ℱ current(q), which correspond to the total number of spin-j left currents, are the same as those in f(q) for j⩾2. Taking the limit q→1, we have f(1)=ℱ_ current(1)= total number of left currents. By (<ref>), f(q) diverges as q→1. This demonstrates that there are infinitely many left currents. The uniqueness of the vacuum state and the candidate twist gap “4" can also be observed from the character formula. By considering (<ref>), we find that the vacuum state of the “naive Virasoro algebra" and any potential twist gap other than τ_ gap^(i) can only arise from the descendants of the vacuum state. Let us decompose the product of vacuum characters into characters of the “naive Virasoro algebra": ∏_i=1^Nχ_ vac^(i)(q)χ_ vac^(i)(q̅)=(qq̅)^-∑_i=1^Nc^(i)-1/24/η(q)η(q̅)f(q)f(q̅), where f(q) is the same function as in (<ref>). In (<ref>), the prefactor accounts for the contribution from descendants of the "naive Virasoro algebra," and the coefficients of q^hq̅^h̅ in the power series expansion of f(q)f(q̅) correspond to the number of Virasoro primaries with conformal weights (h,h̅) (with the exception that we need to use 1-q instead of 1 for h=0, and similarly for h̅). The expression for f(q)f(q̅) is given by f(q)f(q̅)= (1-q)(1-q̅)+(1-q)ℱ_ current(q̅)+ℱ_ current(q)(1-q̅)+(N-1)^2q^2q̅^2 +O(q^3q̅^2)+O(q^2q̅^3). 
In this expression, the first term corresponds to the vacuum state, the second and third terms correspond to the chiral currents, the fourth term corresponds to the twist-4 scalar state described earlier, and the error terms correspond to operators with twists ⩾4. This decomposition justifies the uniqueness of the vacuum state and the possibility of a candidate twist gap of “4". Now let us analyze the growth of the current degeneracy D(j) by examining the asymptotic behavior of f(q) as q→ 1 (or equivalently, β→ 0) using the relation (<ref>). We use of the property of the modular form η(q) under S modular transformation, given by η(q)=√(2π/β)η(q') (q=e^-β, q'=e^-4π^2/β). Applying this to f(q), we find that f(q)=(β/2π)^N-1/2q^N-1/24/η(q')^N-1(1-q)^Nq→1∼(β/2π)^N-1/2β^N e^4π^2/βN-1/24 (q'=e^-4π^2/β). Then by (<ref>), we get b=1/2 and a=√(N-1/24) in the notation of (<ref>). Since we assumed that c^(i)>1, our case falls under the condition b=1/2 and 0<a<√(A), which corresponds to conjecture <ref>. Therefore, let's examine the predictions made by this conjecture. §.§.§ Testing conjecture <ref> The first part of conjecture <ref> says that the big CFT should contain a twist accumulation point given by h=A-a^2=∑_i=1^Nc^(i)-1/24-N-1/24=∑_i=1^NA^(i), A^(i)=c^(i)-1/24. This is expected because each individual CFT^(i) contains a twist accumulation point at h=A^(i), h̅=∞. Let {𝒪_k^(i)} represent the family of Virasoro primary operators in CFT^(i) that approach the above twist accumulation point. In the big CFT, we have the primaries given by 𝒪_k(x)=∏_i=1^N𝒪_k^(i)(x). This family of primaries will approach the twist accumulation point with h given by (<ref>). Consequently, the first part of conjecture <ref> is confirmed. The second part of conjecture <ref> is challenging to verify due to the limited information available on each individual CFT^(i). One naive intuition is that, in the double lightcone limit, the dominant contribution to 𝒜_J(β_L,ε) arises from operators of the form (<ref>). However, this intuition overlooks a significant portion of Virasoro primaries that can be generated by applying L̅^(i)_-k operators to (<ref>). To count the number of Virasoro primaries with the same h (=∑_i=1^N h_k^(i)) generated from each operator of the form (<ref>), we examine the product of the characters of the right movers: ∏_i=1^Nχ_h̅_k^(i)(q̅)= ∏_i=1^Nq̅^∑_i(h̅^(i)_k-A^(i))/η(q̅)=q̅^-A+∑_ih̅^(i)_k/η(q̅)f(q̅), where f(q̅) is the same function as in (<ref>). The coefficient of the q̅^n term in f(q̅) represents the number of generated Virasoro primaries with h_k=∑_i=1^N h_k^(i), h̅_k=∑_i=1^Nh̅_k^(i)+n. From our earlier analysis of the vacuum character, we already know that for large values of n, the coefficient of the q̅^n term exhibits growth behavior on the order of e^4π√(N-1/24n), up to a slow-growing factor. This provides insight into the growth of the coefficient and confirms the conjecture's prediction. Taking into account all these operators, we can now make the following hypothesis: * In the double lightcone limit, the leading term of log𝒜_J(β_L,ε) arises from operators of the form (<ref>), including the Virasoro primaries generated from them. Based on this assumption, we approximate 𝒜_J(β_L,ε) as follows: 𝒜_J(β_L,ε) DLC_w≈∑_n+∑_i=1^NJ_i=J(∏_ i=1^N𝒜_J_i(β_L,ε))f_n, Here, the factor f_n represents the coefficient of q̅^n in f(q̅) and counts the number of generated Virasoro primaries, as mentioned earlier. 
It is important to note that this approximation captures only the leading exponential growth in J, as we have neglected the slow-growing factor. Next, we apply the second approximation as follows. We expect that for large J, the leading exponential growth of A_J(β_L,ε) in J is dominated by the regime where n and all the J_i's are large. Under this assumption, we use f_n∼ e^4π√(N-1/24n) and theorem <ref>, which allows us to replace f_n and all 𝒜_J_i's with their asymptotic growth, up to slow-growing factors. Consequently, we have: 𝒜_J(β_L,ε)≈ ∑_n+∑_i=1^NJ_i=J(∏_ i=1^Ne^4π√(A^(i) J_i))e^4π√(N-1/24n) = ∑_n+∑_i=1^NJ_i=Je^4π∑_i√(A^(i) J_i)+4π√(N-1/24n). In the limit of J→∞, the leading exponential growth of the expression above is determined by maximizing √(N-1/24n)+∑_i√(A^(i) J_i) subject to the constraint n+∑_iJ_i=J. This is achieved by the condition: N-1/24n=A^(1)/J_1=A^(2)/J_2=…=A^(N)/J_N(=A/J). It is worth noting that this condition implies that both the big CFT and all the small CFTs employ the same right inverse temperature in the analysis: β_R=2π√(A/J), β_R^(i)=2π√(A^(i)/J_i)=2π√(A/J). This provides a consistency check on our approximation. Therefore, we obtain: 𝒜_J(β_L,ε) DLC_w≈e^4π√((∑_iA^(i)+N-1/24)J)=e^4π√(AJ). In this expression, we have neglected the slow-growing factor, which is currently beyond our computational capabilities. However, this result is consistent with the prediction in the second part of conjecture <ref>, as evidenced by the presence of the e^4π√(AJ) term in (<ref>). §.§ W_N CFT The second example is the W_N CFT with central charge c>N-1 and a twist gap τ_ gap^W_N in the spectrum of W_N-primaries. This example was studied in <cit.>, where the authors predicted the existence of a twist accumulation point of W_N primaries, given by h=c-N+1/24, h̅=∞. Notably, since every W_N primary state is also a Virasoro primary state, this twist accumulation point holds true for Virasoro primaries as well. It is interesting to observe that when N=2, the W_N algebra reduces to the Virasoro algebra, and the expression (<ref>) coincides with the twist accumulation point in the Virasoro case, without any need for shifting. In the case of W_N CFT, we aim to verify the following aspects (from perspective of Virasoro algebra): * The theory contains an infinite number of chiral currents. The growth of the current degeneracy D(j) (as indicated in (<ref>)) is approximately given by D(j)∼ e^4π√(N-2/24j) (j→∞), up to a slow-growing factor in j. * It has a unique vacuum and a nonzero twist gap. * It has a twist accumulation point for Virasoro primaries, characterized by (<ref>). The counting of spin-J Virasoro primaries near the twist accumulation point follows the pattern e^4π√(AJ), up to a potential slow growth factor (see equation (<ref>) for precise formulation). The first two points establish that the theory falls within the framework of conjecture <ref> with a=√(N-2/24). The existence of the twist accumulation point (<ref>) directly corresponds to the statement made in the first part of conjecture <ref>. While verifying the complete consistency with the second part of conjecture <ref> is a complex task, in this context, we solely focus on confirming the exponential growth factor e^4π√(AJ) associated with the last point, disregarding the slow-growing factor. Our main tool for analysis is the W_N character. 
In the case of a chiral W_N primary with conformal weight h, its corresponding W_N representation character is given by <cit.>: χ^W_N_h(q)=q^-A+N-2/24/η(q)^N-1∏_n=1^N-1(1-q^n)^N-n if h = 0, q^h-A+N-2/24/η(q)^N-1 if h > 0. In particular, when N=2, these characters reduce to the expected Virasoro characters. §.§.§ Virasoro vacuum, twist gap and chiral currents Assuming the existence of a positive W_N twist gap τ^W_N_ gap and a unique twist-0 W_N primary (which corresponds to the W_N vacuum), the torus partition function is given by: Z(q,q̅)=χ^W_N_ vac(q)χ^W_N_ vac(q̅)+∑_h,h̅⩾τ^W_N_ gap/2χ^W_N_h(q)χ^W_N_h̅(q̅). The Virasoro chiral currents comes from the W_N vacuum character: χ^W_N_0(q)=χ_0(q)+∑_j=1^∞D(j)χ_j(q). Similar to appendix <ref>, we introduce the function f(q) to count the number of Virasoro chiral currents: χ^W_N_0(q)≡ q^-A/η(q)f(q), f(q)= 1-q+∑_j=1^∞D(j)q^j. Using the h=0 case of (<ref>), we find: f(q)=q^N-2/24/η(q)^N-2∏_n=1^N-1(1-q^n)^N-n. To analyze the asymptotic behavior of D(j), we use the fact that η(q) is a modular form, which leads to: f(q)=(2π/β)^-N-2/2q^N-2/24/η(q')^N-2∏_n=1^N-1(1-q^n)^N-n (q'=e^-4π^2/β). Taking the limit q→1 (or equivalently, β→0), we obtain the asymptotic behavior of f(q): f(q)q→1∼(2π/β)^-N-2/2(∏_n=1^N-1(nβ)^N-n)e^N-2/244π^2/β. Then by (<ref>), we determine that a=√(N-2/24), which aligns with our expectations. This completes the verification of the first point. By using the W_N character formula (<ref>), we observe that a W_N descendant cannot possess a smaller twist than its corresponding W_N primary. Hence, in a W_N CFT, the twist gap of the Virasoro primaries is at most τ_ gap^W_N. Furthermore, if τ_ gap^W_N is sufficiently large, there can be Virasoro primaries with smaller nonzero twists originating from the W_N vacuum sector. To determine the lowest nontrivial twist from the W_N vacuum sector, let us compute several leading terms in f(q). Using (<ref>), we find f(q)=1-q+q^3+O(q^4). Taking into account both the left and right movers, we have f(q)f(q̅)= (1-q)(1-q̅)+(1-q)ℱ_ current(q̅)+ℱ_ current(q)(1-q̅)+q^3q̅^3 +O(q^3q̅^4)+O(q^4q̅^3). Therefore, we observe that the W_N vacuum sector includes a unique Virasoro vacuum, chiral currents, a twist-6 Virasoro scalar primary, and other Virasoro primaries with twists ⩾6. Taking into account the other candidate twist gap τ^W_N_ gap, we conclude that the twist gap in the spectrum of Virasoro primaries is given by τ_ gap=min{6,τ_ gap^W_N}. This completes the verification of the second point. §.§.§ Generalizing theorem <ref> to W_N CFT Before going to the last point, we would like to propose a W_N analogue of theorem <ref>. The setup is as follows. We define the reduced partition function in the W_N CFT as Z̃^W_N(q,q̅):=[η(q)η(q̅)]^N-1Z(q,q̅). Recall that Z̃ counts the Virasoro primaries, here similarly, Z̃^W_N counts the W_N primaries: Z̃^W_N(q,q̅)= (qq̅)^-A+N-2/24[∏_n=1^N-1[(1-q^n)(1-q̅^n)]^N-n+∑_h, h̅⩾ T q^hq̅^h̅], where A=c-1/24 and T=2τ_ gap^W_N. By using the modular invariance of Z(q,q̅) and the modular transformation property of η(q), we obtain an analogue of (<ref>) for the reduced partition function Z̃^W_N: Z̃^W_N (β_L, β_R) = (4 π^2/β_L β_R)^N-1/2Z̃^W_N( 4 π^2/β_L, 4 π^2/β_R) (q=e^-β_L,q̅=e^-β_R). Comparing this setup with the usual CFT (as described in section <ref>), we observe the following modifications specific to W_N CFTs: * The power index of the overall qq̅ factor changes from A to A-N-2/24. * Here the vacuum term is given by ∏_n=1^N-1[(1-q^n)(1-q̅^n)]^N-n instead of (1-q)(1-q̅). 
* The crossing factor in the modular invariance equation of the reduced partition function is modified to (4 π^2/β_L β_R)^N-1/2 instead of (4 π^2/β_L β_R)^1/2. The first modification is crucial as it shifts the position of the twist accumulation point. The second and third modifications are minor, affecting only the slow-growing factors in the analysis. We define the W_N analogues of 𝒩_J, 𝒜_J, and the double lightcone limit as follows: 𝒩_J^W_N(ε) := ∑_h∈(A-N-2/24-ε, A-N-2/24+ε) n_h,h+J, 𝒜_J^W_N(ε,β_L):= ∑_h∈(A-N-2/24-ε, A-N-2/24+ε) n_h,h+J e^-(h-A+N-1/24)β_L, W_N-DLC_w limit: β_L, J→∞, 2π T(1-w^2)/A-N-2/24√(J/A-N-2/24)-β_L→∞ , β_L^-1logJ→ 0 . It is worth noting that the double lightcone limit here is weaker than the one defined in (<ref>), with the identification a=√(N-2/24). Therefore, when we apply (<ref>) to check conjecture <ref> in the W_N CFT, the conditions of the W_N DLC_w limit defined here will always be satisfied. Now we are ready to propose the following W_N analogue of theorem <ref>: For any w∈(1/2,1), the quantity 𝒜_J^W_N(ε,β_L) satisfies asymptotic two-sided bounds in the W_N-DLC_w limit, given by: 1/w1/1-tan(π w(1-ε))/π w(1-ε)≲𝒜_J^W_N(β_L,ε )/S_N(β_L,J)e^4π√((A-N-2/24)J)≲1/w2/1+sin(2π wε)/2π wε, where S_N(β_L,J) is a slow-growing factor of the form S_N(β_L,J)=C_Nβ_L^μ_NJ^ν_N with finite coefficients C_N, μ_N, and ν_N determined by N of the W_N algebra. Furthermore, the parameter ε belongs to the interval ε_ min(β_L,J)⩽ ε⩽1-1/2w, where ε_ min(β_L,J) is defined as: ε_ min(β_L,J):= max{P_N(A,T,w)logJ/√(J), Q_N(A,T,w)logJ/β_L+R_N(A,T,w)logβ_L/β_L}, with finite coefficients P_N, Q_N, and R_N. We believe that conjecture <ref> can be proven using the same argument as presented in section <ref>. A direct consequence of conjecture <ref> is the estimate of 𝒩_J^W_N(ε), which represents the number of W_N primaries near the twist accumulation point. This estimate serves as an analogue of corollary <ref>. According to the conjecture, we have: 𝒩_J^W_N(ε≡κ J^-1/2log J)=e^4π√((A-N-2/24)J)+O(log J), where κ is a fixed positive constant. We note that κ has a positive lower bound, similar to what we observed in corollary <ref>. This estimate demonstrates the exponential growth of 𝒩_J^W_N(ε) with respect to J, with the leading term determined by A and the W_N algebra. The subleading term contributes a factor of power-law growth. §.§.§ Testing conjecture <ref> We have no doubt that the first part of conjecture <ref> can be checked rigorously in the case of the W_N CFT. Now, let us discuss the exponential growth of 𝒜_J in the second part of the conjecture. Specifically, we aim to demonstrate that both 𝒜_J and 𝒩_J exhibit the expected exponential growth as stated in conjecture <ref> and eq. (<ref>): 𝒜_J(β_L,ε) DLC_w∼e^4π√(AJ), 𝒩_J(ε≡κ J^-1/2log J)∼ e^4π√(AJ), with the understanding that there may be additional factors of slow growth. Analogous to the first example, where we were unable to provide a rigorous verification, we would like to propose the following hypotheses for the W_N CFT: * We assume that conjecture <ref> is correct. * In the DLC_w limit, we expect the dominant contribution to log𝒜_J(β_L,ε) to come from two sources: (a) W_N primaries near the twist accumulation point, and (b) Virasoro primaries originating from the W_N descendants of these W_N primaries. According to the second hypothesis, in the DLC_w limit, we can approximate 𝒜_J(β_L,ε) as a sum over contributions from W_N primaries near the twist accumulation point and their W_N descendants which are Virasoro primaries.
Specifically, we have the approximation: 𝒜_J(β_L,ε)≈∑_n=0^J𝒜_J-n^W_N(β_L,ε)× B_n. Here, J-n represents the spin of the W_N primary near the twist accumulation point, and B_n corresponds to the number of independent W_N descendants. These descendants are Virasoro primaries with the same twist as the W_N primary and an additional spin of n (i.e. the total spin of the Virasoro primary is J). The coefficients B_n can be determined using the second case of the W_N character formula (<ref>). Similar to our analysis for the W_N vacuum character, we have the expression: (q^1/24/η(q))^N-2=∑_n=0^∞B_nq^n. By the same analysis as the W_N vacuum character, we get the asymptotic growth of B_n when n is very large: B_n=b_ne^4π√(N-2/24n), where b_n is a factor of slow growth in n. By considering the first hypothesis, namely conjecture <ref>, we can derive the exponential factor that governs the growth of 𝒜_J-n^W_N: 𝒜_J-n^W_N(β_L,ε)≈ e^4π√((A-N-2/24)(J-n)). According to conjecture <ref>, the aforementioned asymptotic behavior remains valid when the growth rates of J-n and β_L, instead of J and β_L, satisfy the condition of the W_N-DLC_w limit. We will address this subtlety in further detail later. Substituting (<ref>) and (<ref>) into (<ref>), we obtain 𝒜_J(β_L,ε) DLC_w≈∑_n=0^J e^4π√((A-N-2/24)(J-n))e^4π√(N-2/24n). Here, we neglect the slow-growing factor. In the limit J→∞, the leading exponential growth of the sum is determined by maximizing √((A-N-2/24)(J-n))+√(N-2/24n), which occurs when A-N-2/24/J-n=N-2/24n ⟺ J-n=A-N-2/24/AJ, n=N-2/24AJ. Now we come back to the subtlety mentioned after eq. (<ref>). In the framework of the W_N CFT, the right inverse temperature in the analysis of 𝒜_J-n^W_N(β_L,ε) is identified with spin J by β_R=2π√(A-N-2/24/J-n). Here the numerator in the square-root is A-N-2/24 instead of A because in the setup of the W_N reduced partition function Z̃^W_N, everything depending on A is modified to A-N-2/24, and so is the identification between β_R and J: CFT: β_R=2π√(A/J) vs W_N CFT: β_R=2π√(A-N-2/24/J). But now, because of condition (<ref>), we have β_R=2π√(A-N-2/24/J-n)=2π√(A/J). This result is of utmost importance in our examination of conjecture <ref> as it indicates that the right inverse temperature we are considering corresponds precisely to the one in the standard CFT! One can also check that by condition (<ref>), the W_N-DLC_w condition (<ref>), with J replaced by J-n, is compatible with the DLC_w condition (<ref>) for conjecture <ref>. This resolves the previously mentioned subtlety and serves as a consistency check on our approximation. By condition (<ref>), we have max_n⩽ J{√((A-N-2/24)(J-n))+√(N-2/24n)}≈√(AJ). Here we use “≈" instead of “=" because the maximal value is generally not attained at integer values of J and n. However, this approximation does not affect the leading exponential growth. Consequently, we obtain 𝒜_J(β_L,ε) DLC_w≈e^4π√(AJ). We expect that a more careful analysis will lead to the precise statement: lim_ DLC_wlog𝒜_J(β_L,ε)/4π√(AJ)=1. This completes the verification for the first equation of (<ref>). The same reasoning applies to the second equation of (<ref>). Thus, we have checked the last point, which is consistent with the second part of conjecture <ref>. §.§ Three copies of Ising CFTs The third example involves three copies of Ising CFTs: CFT= Ising^(1)⊗ Ising^(2)⊗ Ising^(3). This CFT has the central charge c=3/2, which corresponds to A=1/48, and a nonzero twist gap of Virasoro primaries.
Therefore, based on the aforementioned argument, there exists an accumulation point for twists with h⩽1/48. In a single Ising CFT, there are only three primary states: I (h_I=h̅_I=0), σ (h_σ=h̅_σ=1/16), and ϵ (h_ϵ=h̅_ϵ=1/2). Consequently, in the big CFT, the only possible states with h not exceeding 1/48 are given by ∏_i=1,2,3∏_n⩾2(L̅^(i)_-n)^α(i,n)| vac⟩^(1)⊗| vac⟩^(2)⊗| vac⟩^(3), where i corresponds to the label in Ising^(i), and α(i,n) are non-negative integers. By taking linear combinations of these states, we can obtain the right-moving currents of the Virasoro algebra, which have h=0. Therefore, even without performing any computations, we can deduce that the twist accumulation point must be shifted by a^2=1/48, and the corresponding Virasoro primaries, which are the right currents, are located precisely at h=0. Now, let's perform a consistency check on the growth of D(j), the number of spin-j currents. Similar to the previous examples, the chiral currents are contained in the product of vacuum characters of the Ising CFTs: χ^ Ising_ vac(q)^3= q^-1/48/η(q)f(q), where f(q) is related to ℱ_ current(q) through (<ref>), and the vacuum character of the Ising CFT is given by χ^ Ising_ vac(q)=1/2√(η(q))(√(θ_3(q))+√(θ_4(q))), where the θ_i(q) are Jacobi's theta functions. Combining the above two equations, we find that f(q)=q^1/48/8√(η(q))(√(θ_3(q))+√(θ_4(q)))^3. Using the properties of η(q) and θ_i(q) under S modular transformation, we determine the asymptotic behavior of f(q) as q→ 1 (or equivalently, β→ 0): f(q)=1/8√(2π/β)q^1/48/√(η(q'))(√(θ_3(q'))+√(θ_2(q')))^3β→0∼1/8√(2π/β)e^4π^2/β1/48 (q'=e^-4π^2/β). Then by (<ref>) and (<ref>), we obtain a=√(1/48). In this case, there is no need to look for the candidate twist gap from the product of Ising vacuum characters. This is because the Ising spin field σ, which has twist τ_σ=1/8, already gives the twist gap: τ_ gap=1/8. Any Virasoro primary from the product of Ising vacuum characters can only have zero twist or twist ⩾1. § PROOF OF THEOREM <REF> In this appendix, we present a proof of theorem <ref>. Similar to section <ref>, to derive bounds on 𝒜_J(β_L,ε_1,ε_2,A) it is convenient to introduce the quantity 𝒜(β_L,H̅,ε_1,ε_2,δ,A):= ∫_A-ε_1^A+ε_2d h∫_H̅-δ^H̅+δ dh̅ ρ(h,h̅)e^-(h-A)β_L, which is the analogue of 𝒜(β_L,H̅,ε,δ) defined in (<ref>) (recall that H̅≡ A+J). Recall that we always use the identification β_R=2π√(A/J); the HDLC_w limit (<ref>) is then equivalent to β_L/A→∞ , β_R→ 0 , A→∞, 4π^2 α(1-w^2)/β_R-β_L→∞, β_L^-1log(β_R)→0 . Our approach here is similar to the one explained in section <ref>. We aim to show that, under the additional assumptions stated at the beginning of the section, results similar to the ones in section <ref> hold. Specifically, we want to prove lim_HDLC_wI^ dual_±, nonvac/I^ dual_±, vac=0 in the dual channel and lim_HDLC_wI_±, vac/I^ dual_±, vac=0 , lim_HDLC_wI_±,T⩽ h⩽ A-ε_1/I^ dual_±, vac=0 , lim_HDLC_wI_±,h⩾ A+ε_2/I^ dual_±, vac=0 in the direct channel. Here these I_±'s are defined in a similar way to (<ref>). §.§ Dual channel: vacuum We reconsider the integral (<ref>), the vacuum term in the dual channel of I_±, keeping in mind that A is no longer fixed. We claim that a similar asymptotic behavior holds: I^ dual_±, vac HDLC_w∼4π^5/2β_R/β_L^3/2A^1/2e^A(4π^2/β_L+4π^2/β_R)ϕ̂_±(0). Note that (<ref>) has an extra factor e^4π^2A/β_L compared to (<ref>). In fact, we also have this factor for fixed A, but we ignored it in (<ref>) because it is asymptotically equal to 1 in the double lightcone limit when A is fixed.
This asymptotic behavior holds for any limit β_L→∞, β_R→0 (keeping in mind that β_R≡2π√(A/J)) without any constraints between A, β_L and β_R. In the HDLC_w limit, we have the extra condition A/β_L→0, so (<ref>) reduces to (<ref>), the one in a fixed CFT. To see the asymptotic behavior (<ref>), we just need to repeat the estimates in section <ref> more carefully, taking into account that the quantities dependent on A are no longer constants. The technical details are given in appendix <ref>. §.§ Other terms: some technical preparations Our aim is to show that, as we approach the double lightcone limit, the nonvacuum contribution in the dual channel as well as the vacuum/low-twist/high-twist contributions in the direct channel are subleading compared to the vacuum term in the dual channel. This was illustrated in eqs. (<ref>) and (<ref>). Our analysis builds upon the arguments presented in sections <ref> and <ref>, but to handle the limit A→∞ appropriately, we would like to introduce a technical lemma and a remark (see below) that facilitate the generalization of the previous arguments. The following lemma will be convenient for us to eliminate the unimportant power-law factors of β_L, β_R and A in the analysis: Given any fixed u>0 and r,s,t∈ℝ, we have β_L^rβ_R^sA^te^-u/β_R HDLC_w⟶0, where the HDLC_w limit was defined in (<ref>). Since β_L→∞, β_R→0 and A→∞ in the HDLC_w limit, it suffices to prove the case of r>0, s<0 and t>0. When we approach the HDLC_w limit, eventually we will have β_L⩽4π^2α(1-w^2)/β_R, A⩽M/β_R, for some fixed positive M. In the above regime we have β_L^rβ_R^sA^te^-u/β_R⩽[4π^2α(1-w^2)]^rM^tβ_R^-r+s-te^-u/β_R→0. In the last step we used β_R→0 in the HDLC_w limit. We would also like to make the following remark: Lemma <ref> assumes only β_0>0. However, according to assumption 2 at the beginning of section <ref>, lemma <ref> still holds in the current context, given that β_0>2π. §.§ Dual channel: non-vacuum Recall from section <ref> that in the dual channel of I_±, we separated the non-vacuum contribution I^ dual_±, nonvac into I^ dual_±,T⩽h̅⩽ A and I^ dual_±,h̅⩾ A and found that in the double lightcone limit, they were asymptotically bounded by (<ref>) and (<ref>). Now, we aim to generalize (<ref>) and (<ref>) to the current context, where we need to carefully examine the changes that arise due to the theory no longer being fixed. Let us first consider I^ dual_±,T⩽h̅⩽ A. The first step of the estimate can be found in (<ref>). According to remark <ref>, the inequalities in (<ref>) remain valid if we choose β_0>2π. Let us now look at (<ref>), the next step of the estimate. The difference between our current situation and the one in (<ref>) is that I^ dual_±, vac now has an extra factor of e^4π^2A/β_L due to A no longer being fixed, as shown in eq. (<ref>). Therefore, estimate (<ref>) is modified to: I^ dual_±,T⩽h̅<A/I^ dual_±, vac HDLC_w≲ Λβ_0^1/2κ(β_0)max_xϕ̂_±(x)/2π^5/2ϕ̂_±(0)(β_L/β_R)^3/2A^1/2e^-4π^2 T/β_R-4π^2A/β_L+Aβ_L+Aβ_0-4π^2(A-T)/β_0. The factors on the r.h.s. of the above equation can be expressed in terms of β_L, β_R, and A as follows: [(β_L/β_R)^3/2A^1/2e^-4π^2α w^2 A/β_R]× e^-A[4π^2α(1-w^2)/β_R-β_L+(T/A-α)4π^2/β_R+4π^2/β_L+4π^2(A-T)/Aβ_0-β_0]. In the HDLC_w limit, the first factor vanishes, as shown by Lemma <ref>, while the second factor decays exponentially to 0 due to (<ref>), (<ref>), and the fact that T⩽ A. Consequently, we obtain: I^ dual_±,T⩽h̅<A/I^ dual_±, vac HDLC_w⟶0.
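To make the role of lemma <ref> concrete (it is what removes the (β_L/β_R)^3/2A^1/2-type prefactor in the estimate above), here is a small numerical sketch of ours. The exponents r, s, t, the constant u and the values of α, w, M below are arbitrary choices; the point is only that along a path into the HDLC_w regime, with β_L and A saturating the bounds used in the proof of the lemma, the essential singularity e^-u/β_R eventually overwhelms any fixed powers.

```python
import math

# toy parameters (ours): fixed exponents r, s, t, a fixed u > 0, and constants alpha, w, M
r, s, t, u = 3.0, -2.0, 1.5, 1.0
alpha, w, M = 1.0, 0.8, 5.0

for beta_R in (1.0, 0.3, 0.1, 0.03, 0.01, 0.003):
    beta_L = 4 * math.pi**2 * alpha * (1 - w**2) / beta_R   # saturating beta_L <= 4*pi^2*alpha*(1-w^2)/beta_R
    A = M / beta_R                                          # saturating A <= M/beta_R
    combo = beta_L**r * beta_R**s * A**t * math.exp(-u / beta_R)
    print(f"beta_R = {beta_R:6.3f}   beta_L^r beta_R^s A^t exp(-u/beta_R) = {combo:.3e}")
# the printed values first grow and then collapse to zero once exp(-u/beta_R) dominates
```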
Let us now consider I^ dual_±,h̅⩾ A and perform a similar analysis as we did for I^ dual_±,T⩽h̅<A. The inequalities in (<ref>) remain valid for β_0>2π. Additionally, we have an extra factor e^4π^2A/β_L in the asymptotic behavior of I^ dual_±, vac. Thus, the estimate in (<ref>) is modified to: I^ dual_±,h̅⩾ A/I^ dual_±, vac HDLC_w≲ Λκ(β_0)max_xϕ̂_±(x)/√(2)π^5/2ϕ̂_±(0)β_L^3/2A^1/2/β_R^2e^-A[4π^2-Λ^2/β_R-β_L-β_R+4π^2/β_L]. We can express the factors that depend on β_L, β_R and A in a similar manner as in (<ref>): [β_L^3/2A^1/2/β_R^2e^-A4π^2w^2-Λ^2/β_R]× e^-A[4π^2(1-w^2)/β_R-β_L-β_R+4π^2/β_L]. Both factors decay to 0 in the HDLC_w limit. Therefore, we obtain I^ dual_±,h̅⩾ A/I^ dual_±, vac HDLC_w⟶0. Eqs. (<ref>) and (<ref>) show that in the dual channel, the non-vacuum contribution is suppressed by the vacuum contribution in the HDLC_w limit. This finishes the proof of (<ref>). §.§ Direct channel: vacuum Let us consider I_±, vac, the vacuum term in the direct channel of I_±, with the goal of showing the first equation of (<ref>). In section <ref>, we analyzed I_±, vac for a fixed CFT and derived the upper bound (<ref>), which still holds. As in the previous subsection, we need to modify eq. (<ref>) to account for the large A dependence of I^ dual_±, vac. Here we obtain the following modified estimate: I_±, vac/I^ dual_±, vacHDLC_w≲ max_xϕ_±(x)/2π^5/2ϕ̂_±(0)β_L^3/2A^1/2/β_R e^-A(4π^2/β_R+4π^2/β_L-β_L-β_R). As in the previous subsection, we express the factors that depend on β_L, β_R and A as [β_L^3/2A^1/2/β_R e^-4π^2w^2A/β_R]× e^-A(4π^2(1-w^2)/β_R+4π^2/β_L-β_L-β_R). Both factors decay to 0 in the HDLC_w limit for the same reason as in the previous subsection. So we get I_±, vac/I^ dual_±, vac HDLC_w⟶0. §.§ Direct channel: low twist and high twist Let us consider the non-vacuum terms in the direct channel of I_±: the low-twist (T⩽ h⩽ A-ε_1) and high-twist (h⩾ A+ε_2) terms, with the goal of showing the second and third equations of (<ref>). Before proceeding with the analysis, we want to emphasize that our estimates on I^ dual_±, vac, I^ dual_±, nonvac, and I_±, vac are based on a weaker version of the double lightcone limit. Specifically, the constraints we need for these estimates are as follows:[It is worth noting that under these weaker conditions, lemma <ref> still holds.] β_L→∞, β_R→0, 4π^2 α(1-w^2)/β_R-β_L→∞, A_ min⩽ A⩽M/β_R, where A_ min and M are arbitrary fixed positive constants.[The last condition in (<ref>) is fulfilled in the HDLC_w limit.] The additional constraints in the definition of the HDLC_w limit (see eq. (<ref>)) are only required for the estimates on I_±,T⩽ h⩽ A-ε_1 and I_±,h⩾ A+ε_2. Here we only consider the case of fixed ε_1 and ε_2, and we will discuss the case of ε_1,ε_2→0 later in section <ref>. For the high-twist term I_±,h⩾ A+ε_2, we reconsider the estimates from (<ref>) to (<ref>). The subtleties here are again β_0>2π and the extra factor e^4π^2A/β_L of I^ dual_±, vac. We have a modified version of (<ref>): I_±,h⩾ A+ε_2/I^ dual_±, vac HDLC_w≲ κ(β_0)β_0^1/2max_xϕ_±(x)/4π^5/2ϕ̂_±(0)(β_L/β_R)^3/2A^1/2 e^-ε_2(β_L-4π^2/β_0)+Aβ_0-4π^2A/β_L. From (<ref>) we notice two issues: * There is a factor of β_R^-3/2, while the exponential factor does not depend on β_R. This issue has already appeared in the case of a fixed CFT. * There is an extra factor e^Aβ_0 which blows up in the limit A→∞. The first issue was already resolved by introducing the condition log(β_R)/β_L→0. For the second issue, the only way to resolve it is to let β_L go to ∞ much faster than A.
This is where we use the condition β_L/A→∞. We express the factors that depend on β_L, β_R and A as follows: [(β_L/β_R)^3/2A^1/2 e^-ε_2β_L/2]× e^-ε_2(β_L/2-4π^2/β_0)+Aβ_0-4π^2A/β_L It is not hard to see that for fixed ε_2, both factors vanishes in the HDLC_w limit. Therefore, we get I_±,h⩾ A+ε_2/I^ dual_±, vac HDLC_w⟶ 0 (ε_2 fixed). For the low-twist term I_±,T⩽ h⩽ A-ε_1, we reconsider the estimates from (<ref>) to (<ref>). Here we modify the definition of β_L' to β_L'=4π^2α(1-w^2/2)/β_R. Then the estimates from (<ref>) to (<ref>) remain valid if we choose β_0>2π. So we still have I_±,T⩽ h⩽ A-ε_1⩽ max_xϕ_±(x)e^-ε_1(β_L'-β_L)(4π^2/β_L')^3/2β_R^-1/2e^A(4π^2/β_L'+4π^2/β_R) ×[1+κ(β_0)β_0^1/2(β_L'/4π^2)^3/2e^A(β_L'-4π^2/β_L'+β_0-4π^2/β_0)-T(4π^2/β_R-4π^2/β_0)] in the regime β_L⩽β_L', β_R⩽β_0. Here we also used (<ref>). The second term in the […] of the second line vanishes in the HDLC_w limit in the following way: κ(β_0)β_0^1/2(β_L'/4π^2)^3/2e^A(β_L'-4π^2/β_L'+β_0-4π^2/β_0)-T(4π^2/β_R-4π^2/β_0) ⩽ [κ(β_0)β_0^1/2(α(1-w^2/2)/β_R)^3/2e^-π^2α w^2A/β_R]e^-A[π^2α w^2/β_R+β_R/α(1-w^2)-β_0+(1-T/A)4π^2/β_0]. Here we used T/A⩾α (see (<ref>)) and (<ref>). We see that both factors vanish in the HDLC_w limit. Therefore we have I_±,T⩽ h⩽ A-ε_1 HDLC_w≲max_xϕ_±(x)e^-ε_1(β_L'-β_L)(4π^2/β_L')^3/2β_R^-1/2e^A(4π^2/β_L'+4π^2/β_R). Dividing above by I^ dual_±, vac (see (<ref>)), we get I_±,T⩽ h⩽ A-ε_1/I^ dual_±, vac HDLC_w≲max_xϕ_±(x)/8π^5/2(α(1-w^2/2))^3/2ϕ̂_±(0)(Aβ_L^3)^1/2e^-ε_1(β_L'-β_L)-A(4π^2/β_L-4π^2/β_L^'). The A(4π^2/β_L-4π^2/β_L^') term in the exponential factor is irrelevant because it goes to 0 in the HDLC_w limit. For the remaining factors that depend on β_L, β_R and A, we have √(Aβ_L^3/β_Rβ_L^'3)e^-ε_1(β_L'-β_L)⩽ √(A/β_R)e^-ε_1(β_L'-β_L) = [√(A/β_R)e^-2π^2α w^2ε_1/β_R]× e^-ε_1(4π^2α(1-w^2)/β_R-β_L). Here in the first line we used β_L⩽β_L', and in the second line we used (<ref>). In the HDLC_w limit, the first factor vanishes by lemma <ref>, and the second factor vanishes by (<ref>). Therefore, we get I_±,T⩽ h⩽ A-ε_1/I^ dual_±, vac HDLC_w⟶0 (ε_1 fixed). So we have finished the proof of the second and third equations of (<ref>) in the case of fixed ε_1 and ε_2. Based on our estimates on various terms in the direct and dual channels of I_± (recall their definitions in (<ref>), (<ref>) and (<ref>)) we establish the statement in theorem <ref> for fixed ε_1 and ε_2. The bound ε_i<1-1/2w comes from similar consideration as in section <ref>. As a final step, we would like to let ε_i also go to zero in the HDLC_w limit. This will be the subject of the next subsection. §.§ Shrinking the (A-ε_1,A+ε_2) window In section <ref>, we observed that the ε-window around h=A can approach zero in the DLC_w limit. Now, we will revisit that analysis and study the rate at which ε_1 and ε_2 can tend to zero in the HDLC_w limit. In our previous analysis, ε_1 and ε_2 only affected two aspects: (a) the range of ϕ_±,δ_± that we could choose, and (b) the estimate of high- and low-twist contributions (as seen in (<ref>) and (<ref>)). The issue regarding (a) is identical to points 1 and 2 described in section <ref>, and we have already resolved it. Therefore, let us now reconsider the issue regarding (b). According to (<ref>) and (<ref>), for I_±,h⩾ A+ε_2 and I_±,T⩽ h⩽ A-ε_1 to be subleading in the HDLC_w limit as ε_1 and ε_2 tend to 0, we require the following conditions: (β_L/β_R)^3/2A^1/2 e^-ε_2(β_L-4π^2/β_0)+Aβ_0, (Aβ_L^3)^1/2e^-ε_1(4π^2α(1-w^2/2)/β_R-β_L)→0. 
Here we used the definition of β_L' (see (<ref>)) and neglected some irrelevant exponential factors that tend to 1 in the HDLC_w limit. Comparing (<ref>) to (<ref>), we can see that their structures are quite similar. However, in (<ref>), there is additional dependence on A. This distinction is the primary motivation for considering separate parameters ε_1 and ε_2. Let us consider the ε_1-term first. We express it as follows: (Aβ_L^3)^1/2e^-ε_1(4π^2α(1-w^2/2)/β_R-β_L) HDLC_w≲ (Aβ_L^3)^1/2 e^-ε_12π^2α w^2/β_R. Here we used the fact that eventually β_L⩽4π^2α(1-w^2)/β_R in the HDLC_w limit. For the r.h.s. of the above equation to vanish in the HDLC_w limit, we can choose any ε_1 satisfying ε_1⩾β_R/2π^2α w^2[3/2logβ_L+(1/2+ν)log A], where ν is an arbitrary fixed positive constant. Then let us consider the ε_2-term. Similarly to the case of the ε_1-term, here we can choose any ε_2 satisfying ε_2⩾(β_L-4π^2/β_0)^-1[ Aβ_0+3/2log(β_L/β_R)+(1/2+ν)log A], where ν is again an arbitrary fixed positive constant. One can check explicitly that the choices of ε_1 and ε_2 given by the r.h.s. of (<ref>) and (<ref>) vanish in the HDLC_w limit.[Here we added the small number ν to the log A term because it is the term with slowest growth compared to A, logβ_L and logβ_R. It is of course not yet the optimal choice. For example, one can replace the νlog A term with νloglog A.] Now we choose β_0=3π and ν=1 in (<ref>) and (<ref>), and recall that β_R=2π√(A/J). This gives ε_1⩾ 3/2πα w^2√(A/J)log(β_L A), ε_2⩾ (β_L-4π/3)^-1[3π A+3/2log(β_L√(AJ)/2π)]. Then the last part of theorem <ref> follows. This finishes the whole proof of theorem <ref>.
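As an independent sanity check of this last specialization (a check of ours, not part of the proof), one can verify that substituting β_0=3π, ν=1 and β_R=2π√(A/J) into the two general bounds on ε_1 and ε_2 reproduces the expressions just displayed. A minimal sympy sketch, with ad hoc symbol names:

```python
import sympy as sp

A, J, beta_L, alpha, w = sp.symbols('A J beta_L alpha w', positive=True)
beta_0, nu = 3 * sp.pi, 1
beta_R = 2 * sp.pi * sp.sqrt(A / J)

# general bounds on eps_1 and eps_2 before specializing
eps1_general = beta_R / (2 * sp.pi**2 * alpha * w**2) * (
    sp.Rational(3, 2) * sp.log(beta_L) + (sp.Rational(1, 2) + nu) * sp.log(A))
eps2_general = (beta_L - 4 * sp.pi**2 / beta_0)**-1 * (
    A * beta_0 + sp.Rational(3, 2) * sp.log(beta_L / beta_R)
    + (sp.Rational(1, 2) + nu) * sp.log(A))

# the specialized expressions quoted above
eps1_quoted = 3 / (2 * sp.pi * alpha * w**2) * sp.sqrt(A / J) * sp.log(beta_L * A)
eps2_quoted = (beta_L - 4 * sp.pi / 3)**-1 * (
    3 * sp.pi * A + sp.Rational(3, 2) * sp.log(beta_L * sp.sqrt(A * J) / (2 * sp.pi)))

# spot check at arbitrary positive parameter values: both differences vanish up to rounding
vals = {A: 3.7, J: 11.2, beta_L: 50.0, alpha: 0.9, w: 0.8}
print(float((eps1_general - eps1_quoted).subs(vals)))
print(float((eps2_general - eps2_quoted).subs(vals)))
```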
http://arxiv.org/abs/2307.01274v1
20230703180402
Supersymmetric localization of (higher-spin) JT gravity: a bulk perspective
[ "Luca Griguolo", "Luigi Guerrini", "Rodolfo Panerai", "Jacopo Papalini", "Domenico Seminara" ]
hep-th
[ "hep-th" ]
Fly-by galaxy encounters with multiple black holes produce star-forming linear wakes Simeon Bird August 1, 2023 ==================================================================================== § INTRODUCTION The AdS/CFT correspondence <cit.> represents a promising framework to understand and, hopefully, to solve some subtle problems related to the quantization of gravity. Through the correspondence, the boundary theory can serve as a guide for understanding properties of the bulk physics. This is especially useful given the notorious difficulties in making sense of the functional integral of quantum gravity. A powerful non-perturbative method to perform exact computations in certain quantum field theories is the localization technique <cit.>, where the functional integral can be shown to “localize” over some solutions in field space, parameterizing a moduli space of suitable classical configurations. In simple cases, this finite-dimensional integral can be evaluated analytically, leading to a complete solution of the problem. When a system (or a particular set of observables) having a dual gravitational description in a bulk space, can be studied exactly through localization, we would expect to learn something about the structure of the related quantum gravity path integral. More ambitiously, we also hope that the bulk theory inherits some localization properties, opening to the possibility of obtaining exact results for integrations on fluctuating backgrounds. The program of studying gravitational systems from localization techniques applied to the boundary theory has been successfully exploited to derive the Bekenstein-Hawking entropy of supersymmetric black holes in AdS_4 <cit.> and AdS_5 <cit.>. There have also been attempts to extend these localization methods directly to supergravity to evaluate the bulk quantum gravity partition function, mainly in the context of AdS_4/CFT_3 holography <cit.>. On general grounds, one expects that, in low dimensions, our understanding of quantum gravity should improve. In particular, 2d/1d holography provides probably the best frameworks to study fluctuating geometries beyond perturbation theory. Gravitons or gauge bosons in two dimensions have no dynamical degrees of freedom; therefore, the quantum path integral in general simplifies dramatically, even in non-supersymmetric settings. On the other hand, many of the open questions from higher dimensional holography, such as bulk reconstruction or the physics of black holes and wormholes, persist in the lowest dimensional case. Two-dimensional Jackiw–Teitelboim (JT) theory <cit.> involving, in the second-order formalism, a dilaton field Φ and the metric tensor g_μν, is a simple tractable example of holographic correspondence and has attracted much attention in the last few years (see <cit.> for a recent review). The dual holographic theory is a one-dimensional theory, the Schwarzian quantum mechanics <cit.>, that effectively describes the bulk quantum gravity: a large class of correlation functions is precisely mapped to the boundary theory. Quite interestingly, the Schwarzian path integral can be exactly evaluated thanks to equivariant localization <cit.>, leading to a precise expression for the thermal-JT partition function at the basis of many recent developments <cit.>. 
In particular, higher-genus contributions to the path integral play a fundamental role in deriving a non-perturbative extension of the holographic duality in terms of an ensemble of theories described by a double-scaled matrix model <cit.>. This paper proposes that the same results can be obtained starting from the bulk theory and using a supersymmetric localization procedure <cit.>. The main idea[In three-dimensional gravity there have been similar attempts, both for the AdS <cit.> and the dS <cit.> cases] behind our computation is to use the well-known formulation of JT gravity as a BF gauge theory based on the algebra of SL(2,ℝ) <cit.>. We map the theory into a supersymmetric 𝒩=(2,2) gauge theory on the hemisphere and apply supersymmetric localization to reproduce the known Schwarzian partition function. A subtle point concerns the correct identification of the physical scales present in the gravitational theory with the geometrical scales appearing in the supersymmetric BF theory. We argue that our gauge model is actually obtained by reducing the three-dimensional Chern–Simons theory on a solid torus conformally equivalent to thermal AdS_3 and this provides us with the correct supersymmetric boundary terms and scales identification. Interestingly our approach is easily extended for the higher-spin generalization of JT gravity <cit.>. In this case, the relevant gauge theory is a SL(N,ℝ) BF theory, and its partition function has been derived using equivariant localization on the boundary SL(N,ℝ) Schwarzian quantum mechanics <cit.>. Our procedure reproduces precisely the result of <cit.>, bypassing the technical complications related to the derivation of the boundary quantum theory. Obtaining the JT partition function through localization is fascinating as it suggests that we can use the same framework to compute general observables. In the gauge theory formulation, correlation functions of boundary-anchored Wilson lines <cit.> are the most natural candidates to be studied. Physically, they represent correlators of bi-local operators in the Schwarzian theory and contain essential information about the quantum structure of the bulk gravity. While bi-local correlators have been thoroughly studied in standard JT gravity <cit.> obtaining exact expressions from different methods, their higher-spin cousins have never been considered. Supersymmetric localization could provide a convenient framework for their calculation. The structure of the paper is the following. We start Section <ref> by reviewing the gauge formulation of JT gravity, and then proceed to construct an equivalent supersymmetric BF theory. We pay particular attention to imposing supersymmetric boundary conditions on the hemisphere and to elucidate the identification between the physical and the geometrical scales. The actual localization of the path-integral is performed in Section <ref>: we present the localizing term and compute the relevant functional determinants obtaining the well-known final result for the JT partition function. In Section <ref>, we extend the computation to the higher-spin generalization of JT gravity recovering the disk partition function obtained in <cit.>. Section <ref> contains our conclusions and speculations about further uses of our procedure. The paper is completed with a couple of technical appendices. 
§ JT GRAVITY AS A SUPERSYMMETRIC BF THEORY §.§ JT gravity as a BF theory Let us start by briefly reviewing how JT gravity can be formulated as a two-dimensional BF theory with SL(2,ℝ) gauge group <cit.>. In particular, our focus is on the theory defined on a two-dimensional manifold Σ with the topology of the disk. We start from the BF action S_BF = -i ∫_Σ(χ F) , where F = dA-A ∧ A is the field strength associated with a gauge connection one form A, and χ is an auxiliary scalar field in the adjoint representation of the gauge group. We consider a basis {𝖯_0,𝖯_1,𝖯_2} for the generators of 𝔰𝔩(2,ℝ) obeying the commutation relations [𝖯_0,𝖯_1] = 𝖯_2 , [𝖯_0,𝖯_2] = -𝖯_1 , [𝖯_1,𝖯_2] = -𝖯_0 . This algebra can be explicitly realized by choosing, for instance, a real two-dimensional representation in terms of Pauli matrices with 𝖯_0 = iσ_2/2 , 𝖯_1 = σ_1/2 , 𝖯_2 = σ_3/2 . The corresponding Killing form reads (𝖯_i 𝖯_j) = 1/2 diag(-1,+1,+1). We then expand the fields on such a basis as A = √(Λ/2) e^a 𝖯_a + ω 𝖯_0 , χ = χ^a 𝖯_a + χ^0 𝖯_0 . Here, we regard the index a∈1,2 as an SO(2) frame index. In fact, the matching of gravitational and gauge degrees of freedom is obtained by identifying the one-forms e^a and ω with the zweibein and the spin connection, respectively. In the spirit of the first-order formulation of gravity, these are regarded as independent degrees of freedom. Exploiting the expansion (<ref>) and the commutation relations (<ref>), we compute the nonabelian field strength F = F^a 𝖯_a + F^0 𝖯_0 = √(Λ/2) (de^a + ϵ^a_b ω∧ e^b) 𝖯_a + (dω+Λ/4 ϵ_ab e^a∧ e^b) 𝖯_0 . Upon plugging this expression into the BF action (<ref>) one finds that the variation of S_BF with respect to the SO(2) vector χ^a yields the equations of motion de^a + ϵ^a_b ω∧ e^b = 0 . This is precisely the zero-torsion condition, which, once solved, gives the spin connection ω in terms of the zweibein e^a. The action (<ref>) evaluated on the solutions of (<ref>) reduces to the second-order action S_BF = i/2∫_Σχ_0 (dω(e) +Λ/2 e^1∧ e^2) . In two dimensions, dω = R/2 e^1 ∧ e^2, and we recognize in e^1 ∧ e^2 the two-dimensional volume form. We can then rewrite (<ref>) in terms of the metric g = δ_ab e^a ⊗ e^b as S_BF = i/4∫_Σd^2x √(g) χ_0 (R(g) +Λ) . In (<ref>), S_BF reproduces the bulk contribution of the JT action if we identify the dilaton field with Φ = -i χ_0 /4. At this point, within the gauge formulation of JT gravity, it is common practice to introduce a boundary term that, when combined with appropriate boundary conditions, replicates the dynamics of the Schwarzian theory on the boundary of the disk <cit.>. In the metric formulation, this consists in implementing the Gibbons–Hawking boundary term. However, we will take a different approach here, utilizing supersymmetry. The process of supersymmetrization will naturally guide us towards incorporating a suitable boundary term that, upon identifying the correct physical scales, will allow us to obtain the Schwarzian partition function. In the subsequent sections, we will provide a more detailed explanation of these steps. §.§ Supersymmetrizing JT gravity As a next step, we will introduce new auxiliary degrees of freedom in the BF action (<ref>) with the aim of making it supersymmetric. In doing so, we first introduce a Riemannian structure on Σ.[ The reader should not confuse the dynamical geometry associated with the degrees of freedom of the gauge theory with the background geometry introduced to construct the supersymmetry algebra. 
] We identify Σ with the hemisphere HS^2 endowed with the metric ds^2 = ℓ^2(dθ^2 + sin^2θ dφ^2) , written in terms of conventional spherical coordinates θ∈[0,π/2] and φ∈[0,2π). The boundary circle is located at θ=π/2. A simple scheme for localizing gauge theories on HS^2 was developed in <cit.> in the presence of 𝒩=(2,2) supersymmetry. In order to leverage such results, we need to embed the degrees of freedom of the BF theory into an 𝒩=(2,2) vector multiplet. This off-shell multiplet contains a two-dimensional gauge connection A, two scalars η and σ of dimension one, two Dirac fermions λ and λ̅, and an auxiliary field D. The associated supersymmetry variations, parametrized by conformal Killing spinors ϵ and ϵ̅, are given in Appendix <ref>, where the geometry and the supersymmetry of the hemisphere are spelled out in detail. While it is straightforward to identify the gauge field in the vector multiplet with the one appearing in the BF action, we have two options for the scalar field χ, namely σ and η. In choosing between them, we recall that the BF action can be constructed by dimensionally-reducing the Chern–Simons action. In this framework, the scalar χ can be identified with the third component of the gauge field in three dimensions. Similarly, the 𝒩=(2,2) vector multiplet can be obtained by performing a dimensional reduction of the 𝒩=2 vector multiplet in three dimensions, σ originates from the third component of the vector field. To address the disparity in dimensions between the dimensionless field χ and the dimensionful σ we set χ = L σ where L is a generic length scale. We will temporarily withhold any assumption about L, which will be determined in Section <ref>. With the identification (<ref>), the BF action (<ref>) reads S_ = -iL∫_HS^2d^2x √(g) (σ f) , where the scalar f = ⋆ F is the Hodge dual of the field strength two-form. The above action has its supersymmetric completion in the bulk action S_ = -iL∫_HS^2d^2x √(g) (σ f-1/2λ̅λ+ D η) , which is still equivalent to (<ref>) since the additional degrees of freedom are non-dynamical. For the hemisphere, however, the supersymmetric variation of (<ref>) produces a boundary term originating from the integration of a total divergence, namely δ_ϵ,ϵ̅ S_ = L/2∫_HS^2d^2x √(g) (D_μ[η(λ̅γ^μϵ-ϵ̅γ^μλ)-σϵ^μν(ϵ̅γ_νλ+λ̅γ_νϵ)]) = -iL ∮_∂HS^2dφ (i/2ℓ^2η(λ̅γ^θϵ-ϵ̅γ^θλ)+σδ_ϵ,ϵ̅ A_φ) . In order to obtain a supersymmetric action we should then complement S_bulk with the boundary term S_ = L ℓ∮_∂HS^2dφ (σ^2) . In fact, for half of the supersymmetry[Specifically, the preserved supercharges are those generated by (<ref>).] on ∂HS^2, the second term in (<ref>) is exactly canceled by the supersymmetric variation of (<ref>), since they nicely combine into δ_ϵ,ϵ̅ (S_+S_) = -iL∮_∂HS^2dφ (σ δ_ϵ,ϵ̅(A_φ+iℓσ)) + … . The combination A_φ+iℓσ can be regarded as the putative connection of a 1/2-BPS Wilson loop running along the boundary. The dots stand for a remaining term proportional to η coming from (<ref>) that can be eliminated by imposing the boundary condition[ The term proportional to η in (<ref>) could also be canceled by the variation of an additional boundary contribution proportional to -i∮η^2. ] η|_∂HS^2 = 0 . In summary, we managed to build a supersymmetric version of the BF action, S_ = L[-i∫_HS^2d^2x √(g) (σ f-1/2λ̅λ+ D η) + ℓ/2∮_∂HS^2dφ (σ^2)] , which preserves half of the off-shell supersymmetry enjoyed by the bulk model on the sphere. We emphasize that, up to this point, both L and ℓ are general length scales. 
We will fix them in the following subsection, by comparison with the relevant scales in JT gravity. Finally, let us briefly turn our attention to the variational principle associated with the action (<ref>). Upon variation of the fields, we obtain the boundary term δ S_ = -iL∮_∂HS^2dφ (σ δ( A_φ+i ℓσ)) . For a well-defined variational principle, this term must vanish. Therefore, we impose the condition: δ(A_φ + i ℓσ) = 0 , which implies setting A_φ + iℓσ equal to a constant. The specific value of this constant will be determined at the end of next section. §.§ Gauge and gravitational scales At this point, we need to identify the relevant parameters in the BF theory with their gravitational counterparts in order to ensure a precise match between the partition function of JT gravity on the disk topology with that of our supersymmetric theory. We start by reviewing some well-known facts about three-dimensional gravity with negative cosmological constant. The spectrum of 3d gravity includes global thermal AdS_3 and a collection of Euclidean BTZ solutions <cit.>, separated from the AdS_3 vacuum by a mass gap. All these Euclidean saddles are characterized by the topology of a solid torus. Notably, the modular invariance of 3d gravity naturally acts on the boundary torus of complex structure τ, allowing for their mapping to one another through modular transformations. In particular, a Euclidean geometry with torus boundary is specified once one chooses which cycle of the boundary torus is contractible in the bulk. In the case where the time cycle is contractible, we obtain an Euclidean BTZ solution. On the other hand, when the spatial cycle is contractible, we have thermal AdS_3 as the solution. Specifically, the non-rotating BTZ solution is related to thermal AdS_3 by a modular S-transformation τ→ -1/τ, which acts by swapping the two cycles. Considering 3d coordinates (t_E,r,ϕ) playing the role of time, radial coordinate, and angular coordinate respectively, it is known that the spherically symmetric (t_E,r) sector of 3d gravity is directly governed by JT gravity <cit.>. For instance, the JT black hole can be obtained as the dimensional reduction of the BTZ three-dimensional one, by reducing along the circle parametrized by ϕ. In particular, the inverse temperature β_2d of the JT black hole is given by β_2d=4 G_3 C/ℓ_AdS^2 β_3d , in terms of the inverse temperature β_3d of the BTZ. In (<ref>) ℓ_AdS represents the AdS radius, G_3d is the 3d Newton's constant, while C is the usual coupling of JT gravity[ In the JT literature, C=ϕ_r/8 π G_2, where ϕ_r is the renormalized value of the dilaton on the boundary and G_2 the 2d Newton's constant.] <cit.>. This relation, which will turn useful later, can be proven by equating the corresponding entropies of the BTZ and JT black holes.[ The entropy of the BTZ black hole is given by the Hawking formula S_BTZ=2 π r_h/4G_3, where the shorthand r_h=2πℓ_AdS^2/β_3d is used to denote the radius of the event horizon <cit.>. The entropy of the JT black hole is instead given by S_JT=4 π^2 C/β_2d <cit.>. We expect S_BTZ=S_JT in the strict s-wave reduction, connecting the three dimensional and the two-dimensional theory.] However, as noticed in <cit.>, in most of the literature the JT gravity action is found to be supported in the (r,ϕ) section <cit.>. For instance, in <cit.> the Schwarzian model, which describes the boundary degree of freedom of JT gravity, emerges on the spatial angular direction, after compactifying the time circle. 
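Before moving on to the localization computation, note that the entropy-matching argument invoked above for the identification of β_2d can be checked in one line. The sympy snippet below is a sketch of ours (symbol names are arbitrary): it equates S_BTZ=2π r_h/4G_3, with r_h=2πℓ_AdS^2/β_3d, to S_JT=4π^2C/β_2d and solves for β_2d.

```python
import sympy as sp

G3, C, l_AdS, beta_3d, beta_2d = sp.symbols('G_3 C l_AdS beta_3d beta_2d', positive=True)

r_h = 2 * sp.pi * l_AdS**2 / beta_3d          # horizon radius of the non-rotating BTZ solution
S_BTZ = 2 * sp.pi * r_h / (4 * G3)            # Hawking entropy of the BTZ black hole
S_JT = 4 * sp.pi**2 * C / beta_2d             # entropy of the JT black hole

sol = sp.solve(sp.Eq(S_BTZ, S_JT), beta_2d)[0]
print(sol)                                    # 4*C*G_3*beta_3d/l_AdS**2
assert sp.simplify(sol - 4 * G3 * C * beta_3d / l_AdS**2) == 0
```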
It turns out the correct identification for our purposes is this second one, where the coordinate φ, which we introduced in (<ref>) to parametrize the hemisphere, is to be interpreted as a spatial direction. Based on the digression above, we now present a brief argument to identify the correct values of the unspecified scales ℓ and L appearing in (<ref>). Our starting point is the gauge formulation of 3d gravity with negative cosmological constant, which can be rephrased as a double Chern–Simons theory <cit.>: S = i (S_CS[A]-S_CS[A̅]) , with action S_[A]=k/4π∫(A∧dA+2/3A∧ A∧ A) . Here, A and A̅ are independent 𝔰𝔩(2) gauge fields, and k is the Chern–Simons level, related to the gravitational parameters by k=ℓ_AdS/4G_3. For our derivation, we focus on the holomorphic sector of the theory, which is described by the connection A. By virtue of what was argued before, we consider the Chern–Simons theory to be supported on a solid torus D × S^1 which has the same topology of thermal AdS_3, with the Euclidean time coordinate running along the non-contractible cycle S^1. We equip it with the following metric: ds^2_EAdS_3 = dt_E^2 + C^2 (dθ^2 +sin^2θ dφ^2) , where the time variable is identified as t_E∼ t_E+β_2dk.[One can easily prove that the metric (<ref>), by using the identification β_2d = C/k ℓ_AdSβ_3d given by (<ref>) and by performing the change of variables θ=arctan (r/ℓ_AdS), is Weyl equivalent to the metric ds^2_EAdS_3 = (r^2/ℓ_AdS^2+1 )dt_E^2+(r^2/ℓ_AdS^2+1)^-1dr^2+r^2 dϕ^2 of thermal AdS_3.] We then dimensionally reduce Chern–Simons theory (<ref>) along t_E and by setting σ=k^2 A_t_E we obtain: S = -iβ_2d/2 π(∫(σ F)-1/2∮dφ (σ A_φ)) . By enforcing the boundary condition A_φ=-i Cσ , we finally arrive at the action S = β_2d/2 π(-i∫(σ F)+C/2∮dφ (σ^2)) . After integrating in the auxiliary fields of the 𝒩=(2,2) vector multiplet, we note that the action (<ref>) precisely corresponds to our supersymmetric theory (<ref>). Through the comparison between them, we arrive at the identification of the physical scales in the following manner: L ≡β_2d/2π , ℓ ≡ C . We also note that setting ℓ = C is consistent with the dimensional reduction of the metric (<ref>) to the metric (<ref>) of an hemisphere with a radius of C. Finally, the boundary condition (<ref>) is consistent with a well-defined variational principle for the action (<ref>), established by the condition (<ref>). § LOCALIZATION OF THE SUPERSYMMETRIC BF MODEL The partition function of certain supersymmetric gauge theories can be evaluated using the supersymmetric localization technique. The method relies on the fact that if a theory possesses a fermionic symmetry δ_Q, we can deform the action by adding a δ_Q-exact term tδ_Q 𝒱 without altering the result of the path integral. The proof of this property is trivial and goes as follows. One introduces an auxiliary quantity Z(t) Z(t) = ∫ [DΦ] e^-S[Φ]-tδ_Q𝒱 By construction, Z(0) gives the initial partition function. If we assume that δ_Q^2 𝒱=0,[In general, δ^2_Q does not vanish, but it yields a bosonic symmetry of the theory. Thus, requiring δ^2_Q 𝒱=0 is equivalent to the invariance of 𝒱 under this bosonic symmetry.] we can easily show that[The validity of (<ref>) also assumes that the convergence of the path integral does not depend on t and the measure of integration is invariant under δ_Q.] d/dt Z(t) = ∫ [DΦ] δ_Q(𝒱e^-S[Φ]-tδ_Q𝒱), which, in turn, vanishes since it is the integral of δ_Q-exact expression. Therefore, Z(t) is independent of t. 
We can evaluate the partition function computing (<ref>) for any value of t. In particular, we can take the limit t→∞. If the bosonic part of δ_Q 𝒱 is positive-definite, the path integral is exactly captured by the semiclassical expansion of Z(t) around the saddle points of δ_Q 𝒱, rather than around those of the classical action. In the following, we will apply this technique to the supersymmetric BF theory (<ref>). By virtue of the considerations of Section <ref>, this procedure gives an alternative way to calculate the partition function of JT gravity. §.§ Localizing term Following <cit.>, we choose the localizing supercharge δ_Q = δ_ϵ,ϵ̅, where the specific form of the Killing spinors ϵ and ϵ̅ is given in (<ref>). The localizing term reads 𝒱≡ t δ_0,ϵ̅∫_HS^2d^2x √(g) (1/2λ̅γ_3λ-2i Dσ+iη^2) . The variation in (<ref>) will yield a true bulk term containing the Yang-Mills action, δ_Q 𝒱_ = ∫_HS^2d^2x √(g) ϵ̅ϵ/2 [ (f+η/ℓ)^2 + D^μηD_μη + D^μσD_μσ - [η,σ]^2 + D^2 -i/2(D_μλ̅γ^μλ - λ̅γ^μD_μλ) + iλ̅[η,λ]+ λ̅γ^3[σ,λ] ] , and a total divergence δ_Q 𝒱_ =∫_HS^2d^2x √(g) D_μ([-iϵ̅γ^μγ^3ϵ(f + i[η,σ] + η/ℓ)σ - iϵ̅γ^μγ^νϵσD_νη -iϵ_λνϵ̅γ^μγ^λϵσD^νσ + ϵ^μνηD_νσ + ϵ̅γ^μϵ Dσ + i/2(λ̅γ^3ϵ)(ϵ̅γ^μλ) - iϵ̅ϵ/4λ̅γ^μλ]) . With the help of Stokes' theorem, the latter translates into a family of boundary contributions. We split the boundary terms into bosonic and fermionic ones. For the bosonic part, after algebraic manipulations, we find δ_Q 𝒱_ = 2πℓ∮_∂HS^2dφ (-ℓσD^θσ + iσ f + i/ℓησ) , where the integral is taken over the boundary circle of the hemisphere. In (<ref>), we neglected all the terms proportional to ϵ̅ϵ=cosθ, since their coefficient vanishes on the boundary. Moreover, in moving from the first to the second equality, we have dropped the first and third terms since they combine into a total derivative in φ once we use that ϵ̅γ^3ϵ=1. To establish the appropriate boundary conditions for the t-deformed theory, we consider the variational principle of the total action, which consists of the classical action S_new and the localizing term δ_Q 𝒱. Since S_new does not generate any boundary term for the fermions, we focus on the bosonic part of the variational principle, resulting in: δ_Q S_bos = δ_Q S_tot + t δ_Q𝒱_ + t δ_Q𝒱_ = L∮_∂HS^2dφ [-i σ δ_Q(A_φ+i ℓσ-t/L(f+η/ℓ+i ℓD^θσ)) 10em + D^θη δ_Qη + t/L(f+η/ℓ) δ_Q(A_φ+iℓσ)] . Since η already vanishes at the boundary due to (<ref>), in order to ensure a well-defined variational principle for any t>0, we need the following set of boundary conditions: F|_∂HS^2 = 0 , (A_φ+iℓσ-i t ℓ/L D^θσ)|_∂HS^2 = c , where c is a constant. It is worth noting that as t→ 0, the first condition in (<ref>) could be relaxed (as it was in our previous discussion) while the second condition precisely reproduces the boundary condition introduced in (<ref>). Furthermore, consistency with the gravitational condition (<ref>) requires us to set c=0 in this case as well. In the opposite limit as t→∞, we observe that the second condition in (<ref>) transforms into D^θσ=0. Hence, the boundary conditions (<ref>) effectively interpolate between the classical picture at t=0 and the limit t→∞, which will be the regime of interest in the localization process. The fermionic part, on the other hand, can be massaged into δ_Q 𝒱_ = 2πℓ∮_∂HS^2dφ (-i/4λ̅λ) . A set of sufficient conditions to make the fermionic boundary term (<ref>) vanish in our convention is given by λ_1 = λ_2 and λ̅_1 = λ̅_2 on ∂HS^2. This requirement nicely complements the boundary conditions (<ref>) and (<ref>) imposed on the bosonic sector. 
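A zero-dimensional caricature of the argument above may be helpful (it is our own toy example, not taken from the paper). For Z(t)=√(t/2π)∫dx h''(x)e^-t h'(x)^2/2, the bosonic remnant of a simple supersymmetric toy integral once its fermionic partner is integrated out, Z(t) is exactly independent of the deformation parameter t and equals the signed count of critical points of h, i.e. the contribution of the "localization locus" h'(x)=0. The numerical sketch below checks this for h(x)=x^4/4-x^2/2.

```python
import numpy as np
from scipy.integrate import quad

hp  = lambda x: x**3 - x        # h'(x) for the toy potential h(x) = x^4/4 - x^2/2
hpp = lambda x: 3 * x**2 - 1    # h''(x)

def Z(t):
    """Z(t) = sqrt(t/2pi) * int dx h''(x) exp(-t h'(x)^2 / 2)."""
    integrand = lambda x: hpp(x) * np.exp(-t * hp(x)**2 / 2)
    # the integrand is concentrated near the critical points x = -1, 0, +1
    val, _ = quad(integrand, -10, 10, points=[-1.0, 0.0, 1.0], limit=200)
    return np.sqrt(t / (2 * np.pi)) * val

for t in (0.5, 5.0, 50.0, 500.0):
    print(f"t = {t:6.1f}   Z(t) = {Z(t):.6f}")   # stays at 1 for every t

signed_count = int(sum(np.sign(hpp(x)) for x in (-1.0, 0.0, 1.0)))
print("signed count of critical points:", signed_count)   # also 1
```

The same mechanism is at work below: the t→∞ limit concentrates the path integral on the saddle points of δ_Q 𝒱 rather than on those of the classical action.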
In the following, we will be interested in the limit t→∞, where they are given by: η|_∂HS^2 = 0 , F|_∂HS^2 = 0 , D^θσ|_∂HS^2 = 0 . This set of conditions has the advantage of being (manifestly) gauge invariant. However, we find using a gauge-fixed form of them more convenient. In particular, we use our gauge freedom to set A_θ=0 on the boundary. This choice makes our localization computation easier since we can exploit some results already present in the literature <cit.>. Then, the bosonic boundary conditions reduce to the following standard form: ∂_θA_φ|_∂HS^2 = 0 , ∂_θσ|_∂HS^2 = 0 , η|_∂HS^2 = 0 , A_θ|_∂HS^2 = 0 , which amounts to considering Dirichlet boundary conditions for η and A_θ, while instead using Neumann boundary conditions for A_φ and σ. A few comments are now in order. Performing a path integral over bosonic fields involves a choice of a half-dimensional integration contour in the space of complex fields. In particular, the integration contour for bosonic fields in Euclidean gauge theories must be chosen in such a way that the resulting gauge group is some compact subgroup of the complexification G_ℂ of the original gauge group G, and the bosonic action is positive definite. In supersymmetric localization this applies both to the original and the localizing actions separately. In order to give meaning to the path integral of the supersymmetric BF theory at hand, defined at the classical level over the gauge group SL(2,ℝ), we we find that the correct choice to reproduce the Schwarzian result is to pick a contour in such way that all fields are real and the resulting gauge group is SU(2), a compact subgroup of SL(2,ℂ). For a detailed discussion on the choice of integration contour for supersymmetric 𝒩=(2,2) gauge theories on the hemisphere we refer the reader to <cit.>. §.§ Localization locus The minimum of the bosonic gauge sector in (<ref>) is realized when the following set of conditions holds f = -η , D_μσ = D_μη = 0 , D = 0 , [σ,η] = 0 . The localization locus defined by (<ref>) is easy to characterize. Consider first the scalar field η. Since it vanishes at the boundary because of (<ref>) and is covariantly constant (D_μη=0), it must vanish everywhere. This implies F_12=0. In other words, the gauge field A must be a flat connection. However, since the hemisphere is contractible, every flat connection is gauge equivalent to A_μ=0. The only non-vanishing field is the scalar field σ, fixed to be an arbitrary constant σ_0. In summary, the set of field configurations that satisfies (<ref>), compatible with the boundary conditions (<ref>), is σ = σ_0/ℓ , A_μ = 0 , η = 0 , D = 0 . Similarly to the case of the Chern–Simons reformulation of three-dimensional gravity, the BF theory is (classically) equivalent to JT gravity only when the zweibein is invertible. At the perturbative level, this requirement is implemented by expanding the path integral around the geometrical (semiclassical) saddle point e^a_μ=δ^a_μ and ω=0, which is quite far from the non-geometrical saddle obtained in (<ref>). When we turn on the localizing parameter t, we are instead allowing for the emergence of new saddle points that may compete and eventually replace the semiclassical one. We implicitly assume that complexification permits to deform the original semiclassical contour into a new one, picking the dominant contribution from the non-geometrical saddle A_μ=0 (see (<ref>)). We now evaluate the total classical action (<ref>) on the locus (<ref>). 
The bulk term vanishes and we are left only with the boundary term .S_tot|_locus=L/2ℓ∮_∂HS^2 dφ Tr(σ_0^2). Therefore the infinite-dimensional path integral, which evaluates the partition function of JT gravity, localizes to a matrix model with the following structure: Z_JT = ∫_𝔤dσ_0 exp(-L/2ℓ∮_∂HS^2dφ (σ_0^2)) 𝒵_1-loop[σ_0] . In (<ref>) 𝒵_1-loop[σ_0] encodes the contributions of the one-loop determinants arising from the Gaussian integrals originating from the localizing term δ𝒱 when we expand around the locus (<ref>). The subscript 𝔤 on the integral means we are integrating over the Lie algebra of the gauge group. Since the initial action is gauge invariant, the integrand in (<ref>) turns out to be invariant under the adjoint action of the gauge group. We can use this freedom to diagonalize the matrix σ_0 through a gauge transformation and reduce the integral over the entire Lie algebra 𝔤 to an integral over a chosen realization of the Cartan subalgebra 𝔱. The Jacobian of this transformation will produce the usual Vandermonde determinant at the level of the integration measure. These steps are summarized by the following general identity that holds for any integral of an adjoint invariant function 𝔣(σ) 1/vol(𝔤)∫_𝔤d^d_gσ 𝔣(σ) = 1/|W|∫_𝔱d^l_𝔤σ 𝔣(σ) ∏_α∈Δ_+α(σ)^2 . Above d_𝔤 and l_𝔤 are the dimension and the rank of 𝔤, while Δ_+ is the set of its positive roots, denoted with α. In (<ref>), we normalize the l.h.s. by the order of the Weyl group, |W|, to account for the residual gauge symmetry. Then we are left with Z_JT = 1/2!∫_𝔱dσ_0 α(σ_0)^2 exp(-L/2ℓ∮_∂HS^2dφ (σ_0^2)) 𝒵_1-loop[σ_0] where σ_0 is now assumed to be in the Cartan, i.e., the component along the diagonal generator γ_3. §.§ One-loop determinants We now turn to the detailed evaluation of the one-loop determinants producing 𝒵_1-loop. The analysis of the possible contributions for the case of 𝒩=(2,2) theories on the hemisphere was done in detail in <cit.>. Below, we shall simply review the essential steps of the calculation and collect the relevant results. To begin with, we expand each field of our supersymmetric model around the background value given by the localization locus (<ref>). Schematically, Φ↦Φ_0 + Φ̂/√(t). Plugging this into the localizing (bulk) term (<ref>) and subsequently expanding in t, we can easily single out quadratic part of δ_Q𝒱. Since this quantity vanishes on the locus (<ref>), we do not have any “classical contribution", and we can write δ_Q 𝒱^(2) = lim_t→∞ t∫_HS^2δ_Q 𝒱 = ∫_HS^2d^2 x √(g) Tr[-Â_μ∇_ν∇^νÂ^μ+ Â_μ∇_ν∇^μÂ^ν+2/ℓη̂ ϵ^μν∇_μÂ_ν+η^2/ℓ^2. -1/ℓ^2[σ_0,Â^μ][σ_0,Â_μ]- σ̂∇_μ∇^μσ̂-2i/ℓ[σ_0,Â^μ]∇_μσ̂-η̂∇_μ∇^μη̂ +D̂^2-1/ℓ^2[σ_0,η̂]^2 .+i/2λ̂̅̂γ^μ∇_μλ̂-i/2∇_μλ̂̅̂γ^μλ̂+1/ℓλ̂̅̂γ_3[σ_0,λ̂]] , where we have integrated by parts some terms taking advantage of the boundary conditions for the fluctuation fields. The quadratic integral over D is trivial; thus, we can neglect it in the localizing term. From now on, we will omit the hat used to denote the fluctuation fields since this does not cause any ambiguity and allows for simpler notation. Gauge fixing. Before moving on, we must gauge-fix the theory <cit.> to remove the gauge redundancy. We choose to impose the Lorentz gauge and set: ∇_μA^μ=0. To do so, we exploit the standard BRST construction by introducing two ghost fields c, c̅, and a Lagrange multiplier b, all living in the adjoint representation of the gauge algebra. Next, we add the following term[The total susy transformation would now become δ_Q ↦δ_Q + δ_BRST.] 
to the localizing action (<ref>) t∫_HS^2δ_Q 𝒱_brst=t∫_HS^2d^2 x √(g) Tr(c̅ ∇_μ𝒟^μc+b ∇_μA^μ). Expanding it as before around the combined locus (<ref>) and c=c̅=b=0, we get δ_Q 𝒱_brst^(2)=∫_HS^2d^2 x √(g) Tr(1/2c̅ ∇_μ∇^μc+b ∇_μA^μ). The integration over the bosonic Lagrange multiplier b gives δ(∇_μA^μ), which enforces the gauge-fixing condition in the path integral. Following <cit.>, we then separate the gauge field into a divergenceless and pure divergence part A_μ = ∂_μu + A_μ' , where A_μ' is the divergenceless part of A_μ, i.e. ∇_μA'^μ = 0. Exploiting this decomposition, the delta function imposing the gauge-fixing becomes δ(-∇^2 u), with ∇^2≡∇^μ∇_μ. Thus, the integration measure for the gauge field can be rewritten as follows [DA_μ] δ(∇^μA_μ) = [DA'_μ] [Du] δ(-∇^2 u) = [DA'_μ][Du] δ(u)(-∇^2)^-1/2 . The scalar u can then be integrated out, leaving only the Jacobian factor (-∇^2)^-1/2. Subsequently, we can perform the functional integrations over σ and the ghosts. The former gives an additional factor (-∇^2)^-1/2, while the latter provides a factor of (-∇^2), so that the above three contributions exactly cancel. The gauge fixed quadratic localizing term now reads δ_Q 𝒱^(2)_g.f. = ∫_HS^2d^2 x √(g) Tr[- A_μ' ∇_ν∇^ν A_μ'+1/ℓ^2A_μ'A^'μ+2/ℓη ϵ^μν∇_μA_ν'- 1/ℓ^2[σ_0,A'^μ][σ_0,A_μ'] - η∇_μ∇^μη+η^2-1/ℓ^2[σ_0,η]^2 +i/2λγ^μ∇_μλ̅+i/2λ̅γ^μ∇_μλ+1/ℓλ̅γ_3[σ_0,λ]] . The next step consists in using the Cartan decomposition, that is, to expand the adjoint field X as X = ∑_i X^i𝖧_i + ∑_α∈Δ_+ (X^α𝖤_α+X^-α𝖤_-α) , where 𝖧_i are the Cartan generators, 𝖤^α is the generator corresponding to the root α and Δ_+ is the set of positive roots. They satisfy the following relations [𝖧_i,𝖤_α] = α(𝖧_i)𝖤_α , 𝖤_α^† = 𝖤_-α , (𝖤_α𝖤_β) = δ_α+β , (𝖤_α𝖧_i) = 0 . For 𝔰𝔩(2,ℝ), we only have one Cartan generator γ_3 and one positive root α. Therefore, we shall drop the sum over the positive roots in the following. Bosonic determinants. Using the commutation and trace relations (<ref>) one can find the bosonic part of (<ref>) is proportional to δ_Q 𝒱^(2)_bos= ∫_HS^2d^2 x √(g) [- A_μ^-α∇_ν∇^ν A_μ^α+ 1/ℓ^2A_μ^-αA^α,μ+ 1/ℓη^-α ϵ^μν∇_μA_ν^α+ 1/ℓη^α ϵ^μν∇_μA_ν^-α. +1/ℓ^2α (σ_0)^2 A_μ^-αA^αμ.- η^-α∇_μ∇^μη^α+1/ℓ^2η^-αη^α+1/ℓ^2α (σ_0)^2 η^-αη^α], where we omitted the prime and implicitly considered only the divergenceless part of A_μ. We find it convenient to expand the gauge field in terms of the vector spherical harmonics 𝒞^λ_jm,μ and to write: A_μ^α = ∑_λ=1,2∑_j=1^∞∑_m=-j^j A^α,λ_jm 𝒞^λ_jm,μ(ϑ,φ) These special functions enjoy the following two properties ∇^μ𝒞_jm,μ^1 = -√(j(j+1))/ℓ^2 𝒴_jm , ∇^μ𝒞_jm,μ^2 = 0 . We indicated the usual scalar spherical harmonics with 𝒴_jm. Since A_μ is divergenceless, only the component with helicity λ=2 can appear in the above expansion. Therefore, we can drop the sum over λ in (<ref>) and write A_μ^α = ∑_j=1^∞∑_m=-j^j A^α,2_jm 𝒞^2_jm,μ(ϑ,φ) , A_μ^-α = (A_μ^α)^* . The boundary conditions satisfied by A_μ further restrict this sum, and the coefficients A^α,2_jm are different from zero only when j-m is an odd integer (see <cit.>). Similarly, we can expand the scalar field η in terms of the usual spherical harmonics: η^α = ∑_j=0^∞∑_m=-j^jη^α_jm 𝒴^_jm(ϑ,φ) , η^-α = (η^α)^* . The vanishing of this field at the boundary again imposes that the expansion coefficients differ from zero only when j-m=odd <cit.>. Both scalar and vector spherical harmonics are eigenvectors of the corresponding Laplacian, though with different eigenvalues, i.e. -∇^μ∇_μ 𝒴_jm = j(j+1)/ℓ^2 𝒴_jm , -∇^μ∇_μ 𝒞_jm^λ = j(j+1)-1/ℓ^2 𝒞_jm^λ . 
Moreover, the vector harmonics satisfy these further set of relations ϵ^μν∇_μ(𝒞^λ_jm)_ν=-δ^λ_2√(j(j+1))/ℓ^2𝒴_jm. The above properties will allow us to deal with the mixed terms present in the bosonic sector. Then, by taking advantage of (<ref>) and (<ref>) as well as the orthogonality relations on the hemisphere ∫_0^2πdφ∫_0^π/2dϑ sinϑ 𝒴_jm(ϑ,φ)^* 𝒴_j'm'(ϑ,φ) = 1/2δ_jj'δ_mm' , ∫_0^2πdφ∫_0^π/2dϑ sinϑ 𝒞_jm,μ^λ(ϑ,φ)^* 𝒞_j'm',ν^λ'(ϑ,φ) g^μν = 1/2δ_jj'δ_mm'δ^λλ' , we can easily show that the bosonic sector of δ𝒱^(2) (<ref>) reduces to δ_Q 𝒱^(2)_bos=∑_j,m(^α_jm)^†Δ^bos_j(^α_jm) where ^α_jm=(A^α, 2 η^α)^T_jm. The explicit form of the matrix Δ^bos_j is Δ^bos_j=1/ℓ^2( j(j+1)+α(σ_0)^2 √(j(j+1)) √(j(j+1)) j(j+1)+α(σ_0)^2+1 ). Recall now that ^α_jm vanishes when j-m is even because of the boundary conditions. This fact reduces the usual degeneracy in m of a spherical symmetric problem from 2j+1 to j. Thus, the total bosonic contribution to 𝒵_1-loop will read <cit.> (taking into account that integration variable ^α_jm is complex) 𝒵_1-loop^bos = ∏_j(Δ^bos_j)^-j = ∏_j=1^∞1/ℓ^4j[j^2+α(σ_0)^2]^j [(j+1)^2+α(σ_0)^2]^j . Fermionic determinant. The fermionic part of (<ref>) after Cartan decomposition and some integration by parts becomes δ_Q𝒱^(2)_fer = ∫d^2x √(g)(λ^α λ̅^α)^†( 0 iγ_3γ^μ∇_μ-1/ℓα (σ_0) iγ_3γ^μ∇_μ+1/ℓα (σ_0) 0 ) (λ^α λ̅^α). The symbol † denotes a Dirac-like conjugation containing also a factor γ_3, that is (λ^α λ̅^α)^†=(λ^-αγ_3 , λ̅^-αγ_3). The gauginos λ, λ̅ are fields of spin 1/2 and we denote the two different helicities with s=± (±1/2). We can expand both of them into spin spherical harmonics 𝒴^s_jm: λ^α = ∑_s=±∑_j=1/2^∞^'∑_m=-j^jλ^α,s_jm 𝒴^s_jm(ϑ,φ) , λ̅^α = ∑_s=±∑_j=1/2^∞^'∑_m=-j^jλ̅^α,s_jm 𝒴^s_jm(ϑ,φ) . The spin spherical harmonics are eigenvectors of the Dirac operator, i.e. i γ_3γ^μ∇_μ𝒴^±_jm = ±i/ℓ(j+1/2) 𝒴^±_jm with j=1/2,3/2,⋯, m=-j,-j+1,⋯,j, and they are normalized on the hemisphere with ∫_0^2πdφ∫_0^π/2dϑ sin (ϑ) 𝒴_jm^s(ϑ,φ)^* 𝒴_j'm'^s'(ϑ,φ) = 1/2 δ_jj'δ_mm'δ^ss' . In (<ref>), the prime in the internal sum over m means that we must restrict the values of m to even j-m if s=+ and to even j-m if s=-. This constraint stems from the boundary conditions imposed on the fermions <cit.>. Plugging the expansion (<ref>) into (<ref>) and exploiting the orthogonality relations to perform the angular integrations, we find the fermionic term (<ref>) can be reorganized into the sum of two series δ_Q𝒱^(2)_fer = δ_Q𝒱^(2)_fer,+ + δ_Q𝒱^(2)_fer,- = ∑_j=1/2^∞^'∑_m=-j^j (Λ^α,+_jm)^† Δ_j^fer,+ (Λ^α,+_jm) + ∑_j=1/2^∞^'∑_m=-j^j(Λ^α,-_jm)^† Δ_j^fer,- (Λ^α,-_jm) , where Λ^α,±_jm=(λ^α,± λ̅^α,±)^T_jm with Δ_j^fer,±=1/ℓ( 0 ± j±1/2+iα(σ_0) ± j±1/2-iα(σ_0) 0 ) . Since this matrix depends only on j, we have the usual degeneracy in m for each eigenvalue. This degeneracy is reduced from 2j+1 to j+1/2 by the constraint on the sum over m. Since the matrices Δ_j^fer,+ and Δ_j^fer,- possess the same determinant, the total fermionic contribution to 𝒵_1-loop^fer can be written, up to a phase, as follows <cit.>: 𝒵_1-loop^fer= ∏_j=1/2^∞(Δ_j^fer,+)^j+1/2(Δ_j^fer,-)^j+1/2 = ∏_j=1/2^∞ℓ^4j+2(j+1/2 +iα (σ_0))^2j+1(j+1/2 -iα (σ_0))^2j+1 Total one-loop contribution. Collecting together the bosonic and fermionic contributions (<ref>) and (<ref>) and simplifying the common factors between the numerator and denominator, we arrive at 𝒵_1-loop = 𝒵_1-loop^bos ∼∏_j=1^∞(j^2+α(σ_0)^2) ∼(∏_j=1^∞j^2) ∏_j=1^∞(1+α(σ_0)^2/j^2) . 
The first of the two infinite products can be regularized by using the zeta function regularization, and we get ∏_j=1^∞j^2 = e^-2ζ'(0) = 2π , where we used ζ'(0)=-1/2ln 2π. In the second one, we recognize the representation of the hyperbolic sine as an infinite product. Then 𝒵_1-loop = 2sinh(πα (σ_0))/α(σ_0) . §.§ Result By substituting the one-loop determinant (<ref>) into the localization formula (<ref>), we find that the partition function is given by Z_JT = 1/2∫_𝔱dσ_0 α(σ_0)^2exp(-L/2ℓ∮_∂HS^2dφ (σ_0^2)) 2sinhπα (σ_0)/α(σ_0) . We observe that the denominator of 𝒵_1-loop cancels exactly with one factor of α(σ_0) arising from the Vandermonde determinant, resulting in Z_JT = ∫_𝔱dσ_0 α(σ_0) sinh(πα (σ_0)) exp(-π L/ℓ(σ_0^2)) . Since σ_0 lies in the Cartan subalgebra 𝔱, we can parameterize it as σ_0 = sγ_3 with s ∈ℝ. For 𝔰𝔲(2), the only positive root is 1, so we have α(s γ_3)=2s and (γ_3^2) = 2. Consequently, (<ref>) becomes Z_JT= 4∫_0^∞ds s sinh(2π s) exp(-β/C s^2) , where we have reinstated the gravitational scales ℓ=C and β_2d≡β = 2π L, and utilized the integrand's parity to limit the integral to the range [0,+∞). This result (<ref>) coincides with the one obtained in <cit.> through SL(2,ℝ) Hamiltonian quantization. We can now evaluate the integral over s and obtain Z_JT∝(Cπ/β)^3/2 e^Cπ^2/β, which reproduces the result obtained through equivariant localization of the Schwarzian theory <cit.> and via conformal bootstrap from Liouville CFT <cit.>. § HIGHER-SPIN JT GRAVITY Our results can be easily generalized to compute the partition function of the higher-spin version of JT gravity on the disk. Here, we are interested in higher-spin theories living on an AdS background. The first remarkable constructions were developed in four dimensions <cit.>, and later extended to a generic number of spacetime dimensions <cit.>. higher-spin theories also display an essential role in holography <cit.>. One may hope that in lower dimensions, some simplifications happen. That is the case for three-dimensional higher-spin gravity, as there are no propagating local degrees of freedom. Moreover, higher-spin theories admit a Chern–Simons formulation <cit.> that generalizes the pure gravitational construction. Unlike the higher dimensional cases, there is no need to consider an infinite number of higher-spin fields for having consistent interactions <cit.>. A natural and simple example is the higher-spin theory corresponding to the Chern–Simons theory SL(N, ℝ)×SL(N,ℝ), containing fields with spin up to N. A somewhat analogous situation also occurs in higher-spin extensions of JT gravity. They can be constructed from the gauge theory formulation and allowing the gauge group to be SL(N, ℝ) <cit.>.[Here we are not precise on the global structure of the gauge group, as it will not be relevant.] While all these formulations require some relevant modifications of the considerations worked out in the standard JT gravity, see for instance <cit.>, the technology developed in the previous chapter can be readily extended to the higher-spin case SL(N, ℝ). §.§ The supersymmetric higher-spin theory To begin with, we shall briefly review how an SL(N, ℝ) version of the BF theory realizes a higher-spin theory <cit.>. As done for standard JT gravity in Section <ref>, we work with the first-order formalism and organize fields into a connection and dilaton field. Let us now be slightly more general and work with a gauge group G with the property that it contains an SL(2,ℝ) sector generated by the 𝖯_i of Section <ref>. 
This factor corresponds to the AdS_2 isometry group. Then we demand that all the other generators in the adjoint of G can be decomposed into a totally symmetric irreducible representation of the SL(2,ℝ) factor. Let us denote these generators as 𝖳_a_1… a_s for some integer s. They are totally symmetric and traceless, namely η^a_1a_2𝖳_a_1 a_2… a_s = 0. Any Lie-algebra valued field Φ in our BF theory will have an expansion Φ = Φ^a 𝖯_a + ∑_s Φ^a_1… a_s 𝖳_a_1… a_s . Because of the specific properties of Φ^a_1… a_s, it is natural to interpret it as a higher-spin s field <cit.>. It turns out that SL(N,ℝ) satisfies all the above conditions <cit.>. Therefore, from now on, we will focus on this specific case and construct the corresponding generalization of JT gravity. To do so, we mimic the BF construction of Sec. <ref>. That is, we introduce an SL(N,ℝ) dilaton field χ expanded as in (<ref>) and a SL(N,ℝ) connection A = ∑_s A_μ^a_1… a_s 𝖳_a_1 a_2… a_s dx^μ . The action is just the BF one S_BF = -i ∫_Σ(χ F) The equation of motions are F_μν^a_1… a_s = 0 , D_μχ^a_1… a_s = 0 , where D_μ=∇_μ+A_μ and F_μν^a_1… a_s is the field strength related to A_μ^a_1… a_s. One can study them in the metric formulation around the AdS_2 background and show that indeed reproduce a 2d higher-spin gravity theory, identifying the spectrum <cit.>. The next step would be studying the asymptotic boundary conditions that reproduce a consistent generalization of the Schwarzian dynamics <cit.>. This requires the addition of the familiar boundary term S_∝∮_∂Σdφ(χ^2) . Not so surprisingly, one must also consider asymptotic boundary conditions preserving the W_N-algebra (a nonlinear extension of Virasoro). However, we do not follow this approaches here. We rather adopt an extension of the method of the previous section to define and quantize the higher-spin version of JT gravity. Indeed, one advantage of our method is that there are no further technical difficulties in moving from SU(2) to an arbitrary gauge group. The quantization scheme is always the same. We start with an SL(N,ℝ) BF theory with the given boundary term. We are making the implicit assumption, based on <cit.>, that analogous steps to those outlined in Sec. <ref> can be performed. We can make the theory supersymmetric on the hemisphere (topologically equivalent to the disk) as explained in Sec <ref>. At the end of the day, our action reads again S_ = β/2π[-i∫_HS^2 d^2x√(g)(σ f - 1/2λ̅λ+ D η)+C/2∮_∂HS^2dφ (σ^2)] , Once the supersymmetric formulation is given, we complexify the fields and choose the contour which makes the supersymmetric path integral convergent. This amounts to choosing the gauge group to be SU(N), rather than SL(N, ℝ). We conjecture that this procedure reproduces the higher-spin version of JT gravity. In the following, we shall recover from a supersymmetric localization perspective all the results in the literature for the partition function on the disk <cit.> and explicitly show the relations among the different expressions. §.§ The exact partition function One main advantage of localization is that it does not depend on the specific gauge group. That choice enters only when one has to make explicit the roots or weights of the gauge group appearing in the classical contribution and in the 1-loop determinant. Therefore, we can start directly from the extension of (<ref>) to a generic group, with L and ℓ replaced by their physical values β and C respectively, i.e. 
Z_JT = 1/N!∫_𝔱dσ_0 exp(-β/2 C∮dφ (σ_0^2)) ∏_α>0 [2 α(σ_0) sinh(πα (σ_0))] , where now the roots α are those of SU(N). If we use their explicit expression, we find Z_JT = 1/N!∫∏_idσ_i δ(∑_iσ_i) ∏_i<j[2(σ_i-σ_j) sinh(π(σ_i-σ_j))] exp(-β/2 C∑_iσ_i^2) . where σ_i, i=1,…,N are the eigenvalues of the constant matrix σ_0, and the delta function set to zero the trace of σ_0. We aim to evaluate the integral, which computes the partition function of the SL(N, ℝ) higher-spin JT gravity on the disk. Using the integral representation of the Dirac and the Weyl denominator formula[ We use the two following variants of the formula ∏_1<i<j<N2sinh(π(σ_i-σ_j))=∑_η∈ S_N(-1)^η∏_i e^2π(N+1/2-η(i))σ_i , ∏_1<i<j<Nσ_j-σ_i=∑_λ∈ S_N(-1)^λσ_1^λ(1)-1…σ_N^λ(N)-1=∑_λ∈ S_N(-1)^λ∏_iσ_i^λ(i)-1 , where S_N denotes the set of permutations of N elements. ] we arrive at the expression Z_JT = (-1)^N(N-1)/2/N!∫dk/2π ∑_η,λ∈ S_N (-1)^λ+η ∏_i 1/(2π)^λ(i)-1 ×∂/∂ u^λ(i)-1_i∫dσ_i e^2π(N+1/2-η(i)+ik/2π+u_i)σ_i-β/2Cσ_i^2 |_u_i=0 , where we introduced sources u_i to perform the Gaussian integrals. We can now perform both integrals and obtain Z_JT = (-1)^N(N-1)/2/N!√(N) (2π C/β)^N-1/2 e^π^2CN(N^2-1)/6β∑_λ,η∈ S_N(-1)^η+λ ∏_i1/(2π)^λ(i)-1 ×∂/∂ u^λ(i)-1_iexp(2π^2C/β(∑_ju_j^2-1/N∑_j,ku_ju_k+(N+1)∑_ju_j-2∑_jη(j)u_j))|_u_i=0 . We argue that the quadratic part in u_i does not contribute to the final result. To show this, we use the Weyl denominator formula in the opposite direction for the sum over η and restore the product of hyperbolic sines: ∑_η∈ S_N(-1)^ηexp(2π^2C/β(∑_ju_j^2-1/N∑_j,ku_ju_k+(N+1)∑_ju_j-2∑_jη(j)u_j)) = exp(2π^2C/β(∑_ju_j^2-1/N∑_j,ku_ju_k)) ∏_i<j2sinh(2π^2 C/β(u_i-u_j)) . To obtain a non-vanishing result, the action of the derivatives on the second factor must act an all the hyperbolic sines. But this can only occur when no derivative acts on the exponential term. As a consequence, we can safely drop the latter from (<ref>). The β-dependence follows from a straightforward scaling argument. If we rename x_i = 2π^2 C/β u_i we can easily extract the remaining β-dependence, that is Z_JT = (-1)^N(N-1)/2/N!√(N)(2π C/β)^N^2-1/2 e^π^2CN(N^2-1)/6β 𝒦 where 𝒦 is an overall normalization given by 𝒦 = 2^-N(N-1)/2∑_η,λ∈ S_N (-1)^η+λ ∏_i ∂/∂ x^λ(i)-1_i e^(N+1)∑_jx_j-2∑_jη(j)x_j |_x_i=0 = 2^-N(N-1)/2∑_η,λ∈ S_N(-1)^η+λ ∏_i [N+1-2η(i)]^λ(i)-1 . We get rid of one sum over the permutations by renaming i→η^-1(i) and π=λ∘η^-1. This gives 𝒦 = 2^-N(N-1)/2∑_π∈ S_N(-1)^π∏_i [N+1-2η(i)]^π(i)-1 = N! ∏_i<j (i-j) = (-1)^N^2-N/2 G(N+2) , where G(x) is the Barnes function. In summary, we find Z_JT = G(N+2)/N!√(N)(2π C/β)^N^2-1/2exp(π^2CN(N^2-1)/6β) . The β-dependent part agrees nicely with the results obtained by a different localization scheme in <cit.>, up to the identification C = 2γ. This computation provides additional and non-trivial evidence that our quantization procedure is not only self-consistent but also suggests an alternative and novel computational method in the context of JT gravity and its generalizations. § CONCLUSIONS AND OUTLOOKS This paper proposes a localization procedure for JT gravity and its higher-spin generalization on the disk topology. We have used a supersymmetric completion of the related gauge theory, involving only auxiliary fields, and complexified the path integration to reduce the computation to a “standard” BF theory on the hemisphere. 
The correct s sinh (2π s) measure for JT gravity, which in the previous formulations was obtained as the Plancherel measure associated either with the positive semigroup SL^+(2,ℝ) <cit.> or with the analytic continuation of the universal cover of SL(2,ℝ) <cit.>, in our framework is directly provided by gaussian integral over quadratic fluctuations around the dominant saddle point as t→ +∞. The supersymmetric boundary conditions play a crucial role in annihilating any discrete sum within the moduli space of the localized theory, with the constant configurations of the field σ being the only locus to integrate over. Furthermore, supersymmetry provided us with a crucial boundary potential, quadratic in σ, which carries the information of the gravitational Gibbons-Hawking term: by carefully establishing the identification between physical and geometrical scales, we have recovered the well-known partition function of JT gravity and confirmed the results <cit.> for the higher-spin theory. The natural follow-up of this work is to generalize the supersymmetric localization of boundary-anchored Wilson lines correlation functions: they admit a simple representation at the level of gauge theory and correspond to correlators of bi-local operators in the boundary Schwarzian quantum mechanics <cit.>. The explicit expression for the two-point and the four-point functions has appeared in <cit.> and was checked to be consistent with direct Schwarzian calculations <cit.>. It would be nice to reproduce the result of <cit.> from the localization perspective: we expect that representing the Wilson lines would need an extension of the field content of the original BF, probably involving the presence of a chiral multiplet. If working in the JT gravity case, the procedure could be easily extended to the higher-spin generalization. In this case, exact results are more difficult to extract compared to the purely gravitational case. For instance, if one insists on computing them from the generalized BF approach of <cit.>, the complication comes from the lack of explicit expressions for the representation matrices of SL(N;ℝ).[See nonetheless <cit.> for some progress with a focus on the spin-3 case where one can rely on the results of <cit.>.] Perhaps our localization approach can be used to sidestep such difficulties. Another issue would be to apply our machinery to super-JT gravity, where supersymmetric localization should work along similar lines. Moreover, JT gravity is known to emerge in the near-horizon limit of four-dimensional extremal black-holes <cit.> and recently supersymmetric localization has been applied to the computation of their entropy <cit.>. In light of these advances, it would be interesting to perform the localization of JT gravity in the metric variables in the spirit of analogous higher-dimensional cases <cit.>. As a final comment, although our computation has been performed on a disk topology, JT gravity is known to admit a celebrated non-perturbative completion as a sum over different topologies <cit.>. It would be tempting to extend our BF gauge-theoretic approach to higher genus/multi-boundary surfaces. Usually, however, one is faced with the issue that the mapping class group is not taken into account in the BF formulation.[We thank T. G. Mertens for pointing out this aspect to us.] An easier case where to perform a bulk localization should be given by the singular disk geometry (i.e. the “trumpet”), realized at the gauge-theory level by the insertion of vortex configuration <cit.>. 
We thank Marisa Bonini for participating to the early stages of this work, Itamar Yaakov for interesting discussions and useful insights, and Thomas Mertens for reading the manuscript and providing useful suggestions. This work has been supported in part by the Italian Ministero dell’ Università e Ricerca (MIUR), and by Istituto Nazionale di Fisica Nucleare (INFN) through the “Gauge and String Theory” (GAST) research project. tocsectionAppendices Appendices § CONVENTIONS We recall the conventions for spinors. A spinor ψ_α is a two components column vector. Indices are raised and lowered according to ψ^α=ϵ^αβψ_β, ψ_α=ϵ_αβψ^β, where ϵ^αβ = [ 0 1; -1 0 ] , ϵ_αβ = [ 0 -1; 1 0 ] . Standard bilinears are ψχ ≡ψ^αχ_α , ψγ_μχ ≡ψ^α(γ_μ)_α^βχ_β . Notice that ψχ = (-1)^h+1 χψ , ψγ_μχ = (-1)^h χγ_μψ , where h=1 if both ψ and χ are odd, otherwise h=0. The (flat) gamma matrices satisfy the relations γ_aγ_b=δ_ab+iϵ_abγ_3 , γ_3γ_a=iϵ_abγ^b Usefull Fierz identities can be derived just by reducing those for 3d spinors. The basic Fierz identity is χ_αψ^β=(-1)^h/2[δ_α^β(ψχ)+(ψγ_aχ)(γ^a)_α^β+(ψγ_3χ)(γ_3)_α^β] . For instance, for any spinors χ, ψ, and λ χ_α(ψλ)+ψ_α(λχ)+ λ_α(χψ)=0 . § SUSY ON THE HEMISPHERE The metric of the hemisphere reads ds^2 = ℓ^2(dθ^2 + sin^2θ dφ^2) , where θ∈ [0,π/2] and φ∈ [0,2π), while as vielbein we choose e^1 = ℓ dθ , e^2 = ℓ sinθ dφ . We also choose γ_1=σ_1, γ_2=σ_2 and γ_3=σ_3. The spin connection is ω_12=-cosθdφ. We need to describe only the 𝒩=(2,2) vector multiplet. Its components are the gauge field A_μ, two dimension 1 scalars η and σ, two Dirac fermions λ, λ̅, and the auxiliary field D. The corresponding supersymmetry variations are[To match the notation of <cit.>, one needs to identify σ_1 = η and σ_2 = σ.] δ_ϵ,ϵ̅ A_μ = -i/2(ϵ̅γ_μλ+λ̅γ_μϵ) , δ_ϵ,ϵ̅ η = 1/2(ϵ̅λ+λ̅ϵ) , δ_ϵ,ϵ̅ σ = -i/2(ϵ̅γ_3λ+λ̅γ_3ϵ) , δ_ϵ,ϵ̅ λ = (+iDη + i f γ_3 - [η,σ]γ_3 + i/ℓ ηγ_3 - D + iϵ_μνγ^μD^νσ)ϵ , δ_ϵ,ϵ̅ λ̅ = (-iDη + i f γ_3 + [η,σ]γ_3 + i/ℓ ηγ_3 + D + iϵ_μνγ^μD^νσ)ϵ̅ , δ_ϵ,ϵ̅ D = -i/2 ϵ̅Dλ - i/2 [η,ϵ̅λ] - 1/2 [σ,ϵ̅γ_3λ] + i/2 ϵDλ̅+ i/2 [η,λ̅ϵ] + 1/2 [σ,λ̅γ_3ϵ] . ϵ and ϵ̅ are bosonic spinors satisfying the conformal Killing spinor equation ∇_μϵ=γ_μϵ̃ , for some spinor ϵ̃. It is solved by ϵ = e^-si/2θγ^2[ e^i/2φ; 0 ] and ϵ = e^-si/2θγ^2[ 0; e^-i/2φ ] , where s=±1. The same solutions hold for ϵ̅. Together, ϵ and ϵ̅ generate the entire 2d superconformal algebra. To apply localization on S^2, we can restrict to the 𝔰𝔲(2|1) Poincaré subalgebra, generated by the only four Killing spinors ϵ = e^-i/2θγ^2[ e^i/2φ; 0 ] , ϵ = e^+i/2θγ^2[ 0; e^-i/2φ ] , ϵ̅ = e^+i/2θγ^2[ e^i/2φ; 0 ] , ϵ̅ = e^-i/2θγ^2[ 0; e^-i/2φ ] . They satisfy the equations ∇_μϵ = 1/2ℓγ_μγ_3ϵ , ∇_μϵ̅ = -1/2ℓγ_μγ_3ϵ̅ . We have four solutions. In restricting to the hemisphere, the SU(2) isometry group breaks down to the U(1) group of azimuthal rotations. Therefore, we expect that supersymmetry will close on a 𝔰𝔲(1|1) algebra, with spacetime symmetry reduced to rotations along φ. We choose the first couple in (<ref>) ϵ = e^i φ/2[ cosθ/2; sinθ/2 ] , ϵ̅ = e^-i φ/2[ sinθ/2; cosθ/2 ] . Some useful Killing spinor bilinears are ϵ̅γ^3ϵ = 1 , ϵ̅ϵ = cosθ , ϵ̅γ^μϵ = (0,-i/ℓ) .
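As a quick numerical sanity check of the disk partition function derived in the main text, Z_JT = 4∫_0^∞ ds s sinh(2π s) exp(-β s^2/C), one can evaluate the localized integral directly and compare it with the closed-form answer. The short Python sketch below is our own illustration: the values of C and β are arbitrary, and the overall factor of 2 in the closed form follows from carrying out the Gaussian integral explicitly, whereas the text quotes the result only up to proportionality.

```python
# Numerical sanity check (not part of the original derivation) of
# Z = 4 * Int_0^inf ds  s sinh(2 pi s) exp(-beta s^2 / C),
# compared against the closed form ~ (C pi / beta)^(3/2) exp(C pi^2 / beta).
import numpy as np
from scipy.integrate import quad

C, beta = 1.0, 2.5            # coupling and inverse temperature (arbitrary illustration values)
a = beta / C

def integrand(s):
    # s sinh(2 pi s) e^{-a s^2}: the localized matrix-model measure
    return s * np.sinh(2.0 * np.pi * s) * np.exp(-a * s * s)

Z_numeric, _ = quad(integrand, 0.0, np.inf)
Z_numeric *= 4.0

# Closed form; the prefactor 2 comes from doing the Gaussian integral explicitly
# and is consistent with Z proportional to (C pi / beta)^(3/2) exp(C pi^2 / beta).
Z_closed = 2.0 * (np.pi * C / beta) ** 1.5 * np.exp(np.pi ** 2 * C / beta)

print(Z_numeric, Z_closed)    # the two numbers agree to numerical precision
```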
http://arxiv.org/abs/2307.02435v1
20230705165839
Exploring Continual Learning for Code Generation Models
[ "Prateek Yadav", "Qing Sun", "Hantian Ding", "Xiaopeng Li", "Dejiao Zhang", "Ming Tan", "Xiaofei Ma", "Parminder Bhatia", "Ramesh Nallapati", "Murali Krishna Ramanathan", "Mohit Bansal", "Bing Xiang" ]
cs.LG
[ "cs.LG", "cs.CL", "cs.SE" ]
Large-scale code generation models such as Codex and CodeT5 have achieved impressive performance. However, libraries are upgraded or deprecated very frequently and re-training large-scale language models is computationally expensive. Therefore, Continual Learning (CL) is an important aspect that remains under-explored in the code domain. In this paper, we introduce a benchmark called that covers a wide range of tasks, including code generation, translation, summarization, and refinement, with different input and output programming languages. Next, on our benchmark, we compare popular CL techniques from NLP and Vision domains. We find that effective methods like Prompt Pooling (PP) suffer from catastrophic forgetting due to the unstable training of the prompt selection mechanism caused by stark distribution shifts in coding tasks. We address this issue with our proposed method, Prompt Pooling with Teacher Forcing (PP-TF), that stabilizes training by enforcing constraints on the prompt selection mechanism and leads to a 21.54% improvement over Prompt Pooling. Along with the benchmark, we establish a training pipeline that can be used for CL on code models, which we believe can motivate further development of CL methods for code models. Our code is available at https://github.com/amazon-science/codetask-cl-pptf. § INTRODUCTION Code generation models <cit.> can increase the productivity of programmers by reducing their cognitive load. These models require significant computation to train as they have billions of parameters trained on terabytes of data. Hence, they are trained once and are then used repeatedly for several downstream applications. However, as software development constantly evolves with new packages, languages, and techniques <cit.>, it is expensive to retrain these models. Therefore, it is essential to continually improve these models to avoid errors, generate optimized code, and adapt to new domains and applications. We explore continual learning (CL) <cit.> abilities of code-generation models and aim to improve them. Specifically, we present a benchmark for code-based CL and aim to train a model on sequentially presented tasks with different data distributions without suffering from catastrophic forgetting (CF) <cit.>. This occurs when the model overfits the current task, resulting in a decline in performance on previously learned tasks. Given the lack of CL benchmarks for the code domain, we create a benchmark called using existing datasets. It consists of tasks like code completion <cit.>, code translation <cit.>, code summarization <cit.>, and code refinement <cit.>. This benchmark presents a new and challenging scenario as it necessitates the adaptation of the model to varying input and output programming languages. Along with this benchmark, we also present a training framework to easily apply CL methods to code generation models. Next, we evaluate the effectiveness of popular CL methods from NLP and Vision domains in the context of code generation models. We consider prompting methods <cit.> and experience-replay <cit.> due to their good performance for pre-trained models <cit.>. 
We also experiment with Prompt Pooling (PP) <cit.>, an effective prompting-based method for CL in the vision domain. Our results show that Prompt Pooling suffers from catastrophic forgetting on our proposed benchmark because of the complex distribution shift from varying input and output programming languages across tasks. With further investigation, we find that the unconstrained prompt selection mechanism leads to an unstable training problem. To address this, we propose our method Prompt Pooling with Teacher Forcing (PP-TF), which imposes constraints on prompt selection during training by assigning certain prompts to fixed tasks during training (see Figure <ref>). This results in stable training and better performance. Interestingly, we find when a replay buffer is available, the simple experience-replay <cit.> method outperforms other CL methods and achieves performance similar to a multitask baseline <cit.> where all tasks are provided at once. In summary, our contributions include: (1) being the first study on CL for code generation tasks, (2) establishing a benchmark and a novel pipeline that supports CL for code generation to motivate future work, (3) identifying and addressing the unstable training issue of Prompt Pooling through our proposed method PP-TF, and (4) discussion on the best CL methods to use in different use cases. § RELATED WORK Code Generation Models. Code generation and language modeling for source code is an emerging research field experiencing active growth. Several model architectures have been examined recently, including encoder-only models <cit.>, encoder-decoder models <cit.>, and decoder-only models <cit.>. However, none of these models have been studied in the context of continual learning. Continual Learning. There are various methods for Continual Learning (CL) and they fall into three categories: Regularization, Replay, and parameter isolation methods. Regularization methods <cit.> assign importance to model components and add regularization terms to the loss function. Replay methods <cit.> retain a small memory buffer of data samples and retrain them later to avoid catastrophic forgetting (CF). Parameter isolation methods, such as prompting-based methods <cit.>, introduce or isolate network parameters for different tasks. For a more comprehensive overview of all CL methods, we refer the reader to <cit.>. To the best of our knowledge, there are currently no studies or benchmarks for CL on code generation models. Therefore, we evaluate the effectiveness of prompting <cit.> and experience replay <cit.> based methods, which have demonstrated strong performance in CL on large pretrained models <cit.>. We do not consider regularization methods as they are not effective in continually learning large-scale pretrained models <cit.>. Next, we discuss our proposed benchmark and methods. § BENCHMARK We present the benchmark to assess the CL abilities of code generation models. We also provide a novel training pipeline that can be used to continually train and evaluate code generation models. All of the datasets used to create the benchmark are available under the MIT license and more details on the dataset splits and input-output domains are in Table <ref>. §.§ Coding Tasks Code Generation aims to generate a code snippet from a natural language description. We use the CONCODE dataset <cit.> which is a collection of tuples that consist of natural language descriptions, code environments, and code snippets, obtained from approximately 33,000 Java projects on GitHub. 
The objective of the study is to generate class member functions utilizing the natural language descriptions and class environment. Code Summarization aims to generate a summary for a piece of code. We use the CodeSearchNet dataset <cit.>, which consists of six programming languages (Python, Java, JavaScript, PHP, Ruby, and Go). The data for this task consists of the first paragraph of each documentation. Code translation refers to the transformation of a program written in a particular programming language into another language while maintaining its functionality. We use the Java → C# dataset compiled by <cit.> that provides pairs of code that perform the same tasks. Code Refinement aims to improve the code by fixing bugs within the code automatically. We use the dataset provided by <cit.> consisting of pairs of faulty and corrected Java functions. §.§ Evaluation Next, we define the metrics used to evaluate a model continually on these datasets. We follow <cit.> and evaluate each task using BLEU <cit.>. We follow <cit.> to continually evaluate model's performance. We measure the average BLEU after learning all the tasks as, <BLEU> = 1/N∑_k=1^N b_N,k, where N is the total number of tasks and b_i,j represents the BLEU score on task j after learning task i. Additionally, we report the average forgetting metric, denoted by <Forget>, to assess the model's ability to retain performance on previously learned tasks. This metric is calculated as the average difference between the maximum accuracy obtained for each task t and its final accuracy, given by <Forget> = 1/N-1∑_t=1^N-1 (max_k ∈1, , N-1 b_k,t - b_N,t). § PROMPT POOLING WITH TEACHER FORCING Prompt Pooling <cit.> is a highly effective technique that possesses two key benefits. Firstly, the number of prompts required does not increase linearly with the number of tasks. Secondly, the prompts within the pool can be utilized across multiple tasks, thereby enabling the reuse of previously acquired knowledge. These abilities are advantageous in real-world scenarios, particularly when a model needs to be continually adjusted to accommodate a large number of users/tasks. In Prompt Pooling (PP), a set of learnable prompts P = {P_i}_i=1^M are defined and shared by multiple tasks. We follow wang2022l2p and utilize a query and key-matching process to select the prompts for each task. This process has four steps: (1) a learnable key, represented as k_i ∈ℝ^d, is defined for each prompt, resulting in a prompt pool of the form {(k_i, P_i)}_i=1^M; (2) a query function q() is defined, which takes an input from a given task and produces a query vector q_∈ℝ^d; (3) the top-k keys are selected based on the cosine similarity between the query q_ and all the key vectors {k_i}_i=1^M; (4) we obtain the final input vector _p by pre-pending the example with the prompts corresponding to the selected keys. Then _p is fed into the pre-trained model f and we minimize the following loss function to only optimize the selected prompts and the corresponding keys while keeping the pre-trained model fixed. ℒ = ℒ_LM(x_p, y) + λ∑_k_s_i∈ K_s sim(q(x), k_s_i) where ℒ_LM is the language modeling loss, is the target sequence given the input , K_s is the set of selected keys from Step (3) above. The query-key mechanism described above is an Expectation-Maximization (EM) <cit.> procedure. Given an example, we first select the top-k keys based on the cosine similarity (E-Step) and then train these selected keys to pull them closer to the query (M-Step). 
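To make the query/key selection above concrete, the following Python sketch mirrors steps (1)-(4): a frozen query function encodes the example, the top-k keys are picked by cosine similarity, and only the selected prompts and keys receive gradients. The tensor shapes, names, and the exact form of the key-matching term are our own illustrative choices, not the authors' implementation.

```python
# Minimal sketch of prompt-pool query/key selection (steps 1-4 above).
# Shapes and names are assumptions for illustration, not the paper's code.
import torch
import torch.nn.functional as F

M, d, k = 500, 512, 100              # pool size, key dimension, prompts selected per example
prompt_len, d_model = 10, 512        # assumed length of each soft prompt

keys = torch.nn.Parameter(torch.randn(M, d))                        # learnable keys k_i
prompts = torch.nn.Parameter(torch.randn(M, prompt_len, d_model))   # learnable prompts P_i

def select_prompts(x_embed, query_fn):
    """x_embed: token embeddings of one example, shape (seq_len, d_model).
    query_fn: a frozen encoder playing the role of q(.), returning a (d,) vector."""
    q = query_fn(x_embed)                                            # query vector q_x
    sim = F.cosine_similarity(q.unsqueeze(0), keys, dim=-1)          # (M,) scores (E-step)
    topk_sim, idx = sim.topk(k)                                      # indices of selected keys
    selected = prompts[idx].reshape(-1, d_model)                     # concatenate selected prompts
    x_p = torch.cat([selected, x_embed], dim=0)                      # pre-pend prompts to the input
    # Total loss = LM loss on (x_p, y) plus a key-matching term that pulls the
    # selected keys toward the query (M-step); here we use 1 - cosine similarity.
    # Only the selected prompts and keys receive gradients; the backbone stays frozen.
    key_loss = (1.0 - topk_sim).sum()
    return x_p, key_loss
```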
The training is stable when all tasks are jointly learned. However, in the CL context, tasks are sequentially trained which makes training unstable. Hence, we propose Prompt Pooling with Teacher Forcing (PP-TF) that removes the E-Step by assigning each {(k_i, P_i)} pair to fixed tasks and only performs the M-Step of optimizing the keys. To encourage knowledge sharing, we allow a few {(k_i, P_i)} pairs to be shared across tasks (see Figure <ref>). With these assignments/constraints in place, when training on task t, we use teacher forcing to select top-k prompts that are assigned to the task. Thus, for learning task t, our loss function becomes, ℒ = ℒ_LM(x_p, y) + λ∑_k_s_i∈ K_s ∩ K_t sim(q(x), k_s_i) where, K_t denotes the prompts assigned to task t for teacher forcing. As training progresses, the queries and keys learn to align in a stable manner, while also allowing for information sharing among tasks through the shared prompts. During inference, we discard the assignment for (key, prompt) pair and use cosine similarity to select the top-k pairs across the whole pool. § EXPERIMENTS We focus on the scenario of known task identities for continual learning. This is commonly the case in code-related domains and task identities can also be determined through input and output analysis in certain situations. In the field of NLP and Vision, methods utilizing experience replay and prompting have been highly effective for CL on large pre-trained models <cit.>. Moreover, regularization methods are shown to not work well in conjunction with pre-trained models <cit.>, and hence, we skip them from our study. Next, we present these methods along with some baseline methods. §.§ Baselines Sequential Finetuning <cit.> updates all model parameters for every incoming task in a sequential manner. This approach has been shown to suffer from catastrophic forgetting and serves as a lower bound for CL methods. Individual Models <cit.> finetune a separate models for each new task. This is considered an upper bound for CL methods. Multitask Learning <cit.> simultaneously learns multiple tasks at once, without experiencing distribution shift, resulting in a strong performance. For multitask learning, we prepend the task descriptors to the input and follow <cit.> to ensure balanced sampling across tasks with varying dataset sizes. Shared Prompt Tuning (SP) defines M soft continuous prompts <cit.> which are added and fine-tuned for each example from all tasks. They are trained via gradient descent while keeping the pretrained model's parameters fixed. Task Specific Prompt Tuning (TSPT) defines a total of M soft continuous prompts <cit.> that are divided across N tasks, resulting in ⌊M/N⌋ task-specific prompts. Experience Replay (ER) <cit.> involves maintaining a memory buffer B of examples from the previous task. The buffer randomly stores an equal number of samples from each past task and is used to retrain the model at later stages. Moreover, as several of the other methods outlined in this study can benefit from ER, we also include results with and without the utilization of ER. §.§ Main Results §.§.§ Task-CL Experiments We use CodeT5 model <cit.> as our pre-trained model when learning the benchmark. In Table <ref>, we report results for a single run on the methods described above and their ER variants. For more implementation details and hyperparameters used please refer to Appendix <ref>. First, we find that the popular prompt pooling demonstrates catastrophic forgetting with a test BLEU score of 22.79%. 
Even when using ER with PP the performance is 39.78% which is still much worse than other methods. In contrast, PP + TF even without ER outperforms PP and PP + ER by 21.54% and 4.55% respectively. Moreover, our results show that the CodeT5 + ER method which finetunes the full CodeT5 model with ER performs the best with an average test BLEU score of 49.21%. Please refer to Appendix <ref> for experiments on the effect of buffer size on performance. Discussion: We find that task-specific prompts are more effective than other prompting-based CL methods. However, due to their high storage requirements that scales linearly with the number of tasks, this approach is not feasible for large-scale applications where the model needs to be adapted for a large number of users or tasks. In contrast, a memory buffer might be available due to privacy concerns <cit.> in many situations. In such cases, the PP-TF is the recommended method. Given these findings, we believe that the current Prompt Pooling based methods can be further improved in order to reuse knowledge across tasks. §.§.§ Training Instability of Prompt Pooling To show the root of catastrophic forgetting in prompt pooling, we evaluate how queries and keys align in the representation space after learning each task. To do so, we first select a subset of 5k training samples from four tasks resulting in 20k examples. We utilize a fixed codeT5 encoder as our query function that encodes provided examples to obtain queries. These queries remain unchanged during training and the keys are initialized using the data. We then use principal component analysis (PCA) <cit.> on the queries and keys to obtain the first three principal components and plot them. After learning each task, we repeat the PCA step on the fixed queries and the updated prompt keys. From Figure <ref>, we observe before the training starts, the keys (represented by red crosses) are evenly distributed among the queries of different tasks. However, after completing the training on the first task (CodeGen), most of the keys move toward the queries associated with that CodeGen (denoted by orange stars). This indicates that the prompts corresponding to these keys were primarily used for the CodeGen task and were trained by it. As a large portion of the prompts from the pool are utilized during the training of the CodeGen task, there are no key vectors available for allocation to the second task (CodeTrans). As a result, when learning the CodeTrans, some keys used for the previous task are pulled toward CodeTrans's queries and the corresponding prompts are updated. As each subsequent task is introduced, the key vectors are dynamically adjusted to align with the current task's queries, leading to a unstable process of matching in which updates to the key-prompt pairs are frequently in conflict with the previous tasks. Hence leading to catastrophic forgetting on the previous tasks. § CONCLUSION In conclusion, we have introduced a novel benchmark, , tailored to cover a broad spectrum of tasks in the code domain, aiming to fuel advancements in Continual Learning (CL) for large-scale code generation models. Our study underscores the shortfalls of popular CL methods like Prompt Pooling when applied to coding tasks, predominantly due to catastrophic forgetting. However, we demonstrate that our proposed method, Prompt Pooling with Teacher Forcing (PP-TF), can effectively mitigate this issue, leading to a significant improvement of 21.54% over the baseline. 
Furthermore, we establish a comprehensive training pipeline catering to CL on code models. We believe that our contributions, both in the form of the benchmark and the PP-TF method, will ignite further exploration and innovation in CL techniques specifically designed for the dynamic and evolving realm of code generation. § LIMITATIONS This work primarily focuses on evaluating the efficacy of existing continual learning (CL) methods for code generation models. It is important to note that many of these methods were specifically designed for natural language processing or computer vision domains and may not directly transfer to the code generation domain. Nevertheless, we have made efforts to identify and address any issues encountered during our analysis. It should be acknowledged, however, that the scope of our work is limited by the selection of methods and the benchmark used. While we have utilized the most popular CL methods from various categories, there may be methods that have not been included in this study due to their inefficacy in natural language processing or computer vision tasks but may be effective in code generation. As such, we encourage further research within the community to explore the potential of CL methods for code-generation models. § ACKNOWLEDGMENT We thank Amazon for the Amazon Post-Internship Fellowship award that supported Prateek during this work. We also thank all the reviewers for their feedback on the paper. acl_natbib § APPENDIX §.§ Implementation Details In our experiments, we report the results of a single run. We used the CodeT5-small model <cit.> with 60M parameters from Huggingface <cit.>, which is an encoder-decoder model pre-trained on CodeSearchNet <cit.>. We use a separate and fixed codeT5 encoder model as the query function to encode the input examples for prompt pooling. For all prompting-related experiments, the CodeT5 model remains frozen and only the prompts are finetuned. In cases where we have ER with prompting methods, the ER is also applied while finetuning the prompts. Our prompt pool consisted of 500 prompts, with 100 prompts being selected to prepend to examples for each task. For the Shared Prompts method, we utilized 100 prompts that are used for all the tasks. For the Task-Specific Prompt method, we utilized different 100 prompts for each task. Unless otherwise specified, we used a buffer size of 5000 examples for all methods employing ER. The Adam <cit.> optimizer was utilized, along with early stopping. The hyperparameters for our experiments were taken from <cit.>, and the tasks from benchmark were learned in random order specified in Table <ref>. The results of our experiments included the Average validation and test BLEU scores, as well as the forgetting metric on the validation set. The implemntation of BLEU was taken from the CodeT5 paper <cit.>. We ran experiments on a single A6000 GPU with 48 GB of memory with total computation of 14 GPU days. §.§ Data Statistics for Benchmark Table <ref> shows the train, validation, and test data sizes for all the tasks used in the benchmark. We also present the input and output domains for each of the individual tasks. Given the input and output domains for these tasks are starkly different this makes this benchmark challenging as the distribution shift is large. Please refer to Section <ref> in the main paper for more details about the benchmark. All of the datasets used to create the benchmark are available under the MIT license. §.§ Impact of Buffer Size on ER Performance. 
If experience replay is possible, we find that CodeT5 + ER is the most performant method. We go on to further assess the impact of buffer size on its performance. In Table <ref>, we present the aggregated results for total buffer sizes of 100, 500, 1000, 2000, and 5000. Our findings suggest that there is an increase in performance as the buffer size increases. We observe that CodeT5 + ER with a small buffer of only 100 examples already outperforms PP + ER (5k examples) by 3.85%. Moreover, CodeT5 + ER with a buffer size of 1000 outperforms the best method without ER. Our findings are in line with those of <cit.> and demonstrate that, whenever possible, we should use ER with pretrained models. However, in cases with no buffer and a large number of tasks, PP + TF is the best method to use.
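For reference, the balanced replay buffer used by the ER variants above can be sketched in a few lines of Python. The class below is a minimal illustration of the "equal number of samples per past task" policy described in the baselines; the names and details are our own assumptions and may differ from the released implementation.

```python
# Minimal sketch of a balanced experience-replay buffer (names are illustrative;
# see the released repository for the actual training pipeline).
import random

class ReplayBuffer:
    def __init__(self, total_size):
        self.total_size = total_size      # e.g. 100 ... 5000 examples, as in the table above
        self.per_task = {}                # task_name -> list of stored examples

    def add_task(self, task_name, examples):
        """Keep an equal share of examples for each task seen so far."""
        self.per_task[task_name] = list(examples)
        quota = self.total_size // len(self.per_task)
        for name in self.per_task:
            random.shuffle(self.per_task[name])
            self.per_task[name] = self.per_task[name][:quota]

    def sample(self, batch_size):
        """Draw a replay batch mixing all past tasks."""
        pool = [ex for examples in self.per_task.values() for ex in examples]
        return random.sample(pool, min(batch_size, len(pool)))

# During CL training on task t, each optimization step would mix a batch of
# task-t data with a batch drawn from buffer.sample(...) to mitigate forgetting.
```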
http://arxiv.org/abs/2307.01009v1
20230703134113
APEIRON: composing smart TDAQ systems for high energy physics experiments
[ "Roberto Ammendola", "Andrea Biagioni", "Carlotta Chiarini", "Andrea Ciardiello", "Paolo Cretaro", "Ottorino Frezza", "Francesca Lo Cicero", "Alessandro Lonardo", "Michele Martinelli", "Pier Stanislao Paolucci", "Cristian Rossi", "Francesco Simula", "Matteo Turisini", "Piero Vicini" ]
cs.DC
[ "cs.DC", "physics.ins-det" ]
^1 Istituto Nazionale di Fisica Nucleare (INFN), sezione di Roma, Rome, Italy ^2 Istituto Nazionale di Fisica Nucleare (INFN), sezione di Roma Tor Vergata, Rome, Italy ^3 Dipartimento di Fisica, Sapienza Università di Roma, Rome, Italy alessandro.lonardo@roma1.infn.it APEIRON is a framework encompassing the general architecture of a distributed heterogeneous processing platform and the corresponding software stack, from the low level device drivers up to the high level programming model. The framework is designed to be efficiently used for studying, prototyping and deploying smart trigger and data acquisition (TDAQ) systems for high energy physics experiments. § INTRODUCTION The general architecture of the APEIRON distributed processing platform includes m data sources, corresponding to the detectors or sub-detectors, feeding a sequence of n stream processing layers, making up the whole data path from readout to trigger processor (or storage server). The processing platform features a modular and scalable low-latency network infrastructure with configurable topology. This network system represents the key element of the architecture, enabling the low-latency recombination of the data streams arriving from the different input channels through the various processing layers, as shown in Figure <ref>. Developers can define scalable applications using a dataflow programming model (inspired by Kahn Process Networks <cit.>) that can be efficiently deployed on a multi-FPGAs system: the APEIRON communication IPs allow low-latency communication between processing tasks deployed on FPGAs, even if hosted on different computing nodes. Thanks to the use of High Level Synthesis tools in the workflow, tasks are described in high level language (C/C++) while communication between tasks is expressed through a lightweight API based on non-blocking send() and blocking receive() operations. The mapping between the computational data flow graph and the underlying network of FPGAs, such as that shown in Figure <ref>, is defined by the designer with a configuration tool, by which the framework will produce all project files required for the FPGAs bitstream generation. The interconnection logic is therefore automatically built according to the application needs (in terms of input/output data channels) as shown in Figure <ref>, allowing the designer to focus on the processing tasks expressed in C/C++ . The aim of the APEIRON project is to develop a flexible framework that could be adopted in the design and implementation of both "traditional" low level trigger systems and of data reduction stages in trigger-less or streaming readout experimental setups characterized by high event rates. For this purpose we studied and implemented algorithms capable of boosting the efficiency of these classes of online systems based on Neural Networks (NN), trained offline on Tensorflow/Keras and leveraging the QKeras and HLS4ML <cit.> software packages for deployment on FPGA. We have validated the framework on the physics use case represented by the partial particle identification system for the low-level trigger of the NA62 experiment <cit.>, working on data from its Ring Imaging Cherenkov detector to pick out electrons and number of charged particles. § MOTIVATION As for the requirements imposed by applications in the class of real-time dataflow processing, FPGA devices are a good fit inasmuch as they can provide not only adequate computing, memory and I/O resources but also a smooth programming experience. 
High-Level Synthesis tools, after several years since their appearance, are quickly reaching a technological readiness that paves the way to the adoption of these reconfigurable accelerators by a class of users much broader to that composed by skilled developers used to employ Hardware Description Language-based workflows. The main motivation for the design and development of the APEIRON framework is that the currently available HLS tools do not natively support the deployment of applications over multiple FPGA devices, which severely chokes the scalability of problems that this approach could tackle. To overcome this limitation, we envisioned APEIRON as an extension of the Xilinx Vitis HLS framework able to support a network of FPGA devices interconnected with a low-latency direct network as the reference execution platform. § THE APEIRON FRAMEWORK The Communication IP is the evolution of the APEnet <cit.> and exanet <cit.> designs for HPC systems and represents the main enabling component for the APEIRON framework, defined as the general architecture of an FPGA-based distributed stream processing platform and the corresponding software stack. The Communication IP allows data transfers between processing tasks hosted in the same node (intra-node communications) or in different nodes (inter-node communications), see Figure <ref>. In the context of the APEIRON framework, processing tasks are implemented by HLS kernels with Xilinx Vitis. The details of the interface between HLS kernels – the endpoints of the communication – and the Communication IP are described at the end of this section. The Routing IP defines the switching technique and routing algorithm; its main components are the Switch component, the Configuration/Status Registers and the InterNode and IntraNode interfaces. The Switch component dynamically interconnects all ports of the IP, implementing a channel between source and destination ports. Dynamic links are managed by routing logic together with arbitration logic: the Router configures the proper path across the switch while the Arbiter is in charge of solving contentions between packets requiring the same port. For inter-node communications, the routing policy applied is the dimension-order one: it consists in reducing the offset along one dimension to zero before considering the offset in the next dimension. The employed switching technique — i.e., when and how messages are transferred — is Virtual Cut-Through (VCT) <cit.>: the router starts forwarding the packet as soon as the algorithm has picked a direction and the buffer used to store the packet has enough space. The deadlock-avoidance of DOR routing is guaranteed by the implementation of two virtual channels for each physical channel (with no fault-tolerance guaranteed) <cit.>. The transmission is packet-based, meaning that the Communication IP sends, receives and routes packets with a header, a variable size payload and a footer. The Communication IP was co-designed with the APEIRON software stack in order to achieve very low-latency and scalable bandwidth (via IP design reconfiguration) between processing tasks defined as High-Level Synthesis Kernels. Starting from a YAML configuration file describing the attributes of each HLS kernel, namely its number of input and output channels and the IntraNode port of the Communication IP to which it is connected, the APEIRON framework links the Communication IP and the HLS kernels that are connected to it and generates the bitstream for the overall design. 
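Before turning to the kernel-side requirements, the dimension-order routing policy adopted by the Routing IP can be illustrated with a short sketch: the offset along one dimension is reduced to zero before the next dimension is considered. The Python function below is only a toy illustration of the algorithm on an abstract coordinate grid; it is not the register-level interface of the actual IP.

```python
# Toy illustration of dimension-order routing (DOR): reduce the offset along one
# dimension to zero before considering the next one. Coordinates and topology are
# arbitrary examples, not the APEIRON Routing IP's actual interface.
def dor_next_hop(current, destination):
    """Return the next-hop direction as (dimension, step), or None if arrived."""
    for dim, (c, d) in enumerate(zip(current, destination)):
        if c != d:
            step = 1 if d > c else -1    # move toward the destination along this dimension
            return dim, step
    return None                           # packet has reached its destination

# Example: routing from node (0, 2) to node (3, 1) on a 2D grid
hop = dor_next_hop((0, 2), (3, 1))        # -> (0, 1): the first dimension is fixed first
```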
The only requisite that HLS kernels must satisfy concerns the format of their prototype. In this way, the HLS kernel implements a generic stream interface for each communication channel, based on the AXI4-Stream protocol. The communication between kernels is expressed through a lightweight C++ API based on non-blocking send() and blocking receive() operations. This simple API allows the HLS developer to perform communications between kernels, either deployed on the same FPGA (intra-node communication) or on different FPGAs (inter-node communication), without knowing the details of the underlying packet communication protocol. The Communication Library leverages AXI4-Stream Side-Channels to encode all the information needed to forge the packet header. Adaptation toward/from the IntraNode ports of the Routing IP is done by two APEIRON IPs, the Aggregator and the Dispatcher, shown in Figure <ref>. The Dispatcher receives incoming packets from the Routing IP and forwards them to the right input channel, according to the relevant fields of the header. The Aggregator receives outgoing packets from the task and forges the packet header, then filling the header/data FIFOs of the Routing IP. § PHYSICS USE CASE NA62 is a fixed-target experiment at the CERN SPS North Area, dedicated to the measurement of rare kaon decays. We have designed FPGA-RICH, a Particle Identification (PID) system based on the APEIRON framework and implemented on a single FPGA device, capable of providing results to the online trigger. This system represents the evolution of the GPURICH one, which provided the same capabilities but on a more complex architecture, with a GPU performing a geometry-based PID algorithm and an FPGA hosting the NaNet design <cit.> implementing the low-latency direct data transfer between the detector and the GPU memory. FPGA-RICH receives RICH detector events in a streaming fashion and performs the PID task using a neural network (NN), supporting a throughput greater than 10 MHz as per experiment specifications. According to the APEIRON workflow, the NN is implemented as an HLS kernel and receives input data from the RICH detector only (seedless model). The resulting model, depicted in Fig. <ref>, is a three-layer Dense network (64x16x4) having as input up to 64 normalized IDs of the PMTs hit by the Cherenkov photons in a single event. To limit the FPGA resource footprint, we performed a quantization step on the model using QKeras, resulting in two different fixed-point representations: <8, 1> for weights and biases and <16, 6> for activations. Two different features can be inferred for each event: the number of charged particles (N_r) and the number of e^± (N_e). To prepare the training and validation data for the NN, we built different data sets composed of events extracted from NA62 physics runs using the experiment analysis framework. The ground truth for training was provided by the seedless RichReco offline reconstruction method. Since the NN result would be used to enforce a trigger decision, the inference performance of the NN is of utmost importance: to get a training set as similar as possible to online data, we trained the network with 3 Mevents extracted from run 8011. Validation has been done on 3.5 Mevents from run 8893 with satisfactory results, as shown by the ROC curves for N_r in Fig. <ref>.
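To make the network description above concrete, the following is a minimal QKeras sketch of a 64x16x4 quantised dense model. It is our illustration and not the experiment's actual code: the layer widths, input size and fixed-point formats come from the text, while the activation choice, the meaning of the 4-unit output, the loss, and the mapping of the <bits, integer-bits> notation onto the quantized_bits/quantized_relu arguments are assumptions.

```python
# Illustrative sketch only (not the FPGA-RICH training code).
# Assumptions: <8,1> -> quantized_bits(8, 1), <16,6> -> quantized_relu(16, 6),
# ReLU-like activations, and a generic regression-style loss.
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

w_q = quantized_bits(8, 1)    # assumed mapping of the <8, 1> weight/bias format
a_q = quantized_relu(16, 6)   # assumed mapping of the <16, 6> activation format

inp = Input(shape=(64,), name="pmt_ids")          # up to 64 normalised hit-PMT IDs
x = QDense(64, kernel_quantizer=w_q, bias_quantizer=w_q)(inp)
x = QActivation(a_q)(x)
x = QDense(16, kernel_quantizer=w_q, bias_quantizer=w_q)(x)
x = QActivation(a_q)(x)
out = QDense(4, kernel_quantizer=w_q, bias_quantizer=w_q, name="nr_ne_out")(x)

model = Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.summary()
# A quantised Keras model of this kind can then be converted to an HLS project
# (e.g. via hls4ml.converters.convert_from_keras_model) for the FPGA deployment
# path mentioned in the text.
```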
Since the NA62 RICH detector is able to discriminate the kind of charged particles only in the 15-35 GeV/c momentum range, the results for N_e are not equally satisfying. The model was synthesized on a Xilinx VCU118 FPGA platform at a 150 MHz clock frequency and used a very limited amount of resources (14% LUT, 2% DSP), sustaining an 18.75 MHz throughput with a latency of 146.66 ns. § CONCLUSIONS AND FUTURE WORK We are continuing the development of the APEIRON framework in order to improve its performance and usability. We are finalizing the development of the FPGA-RICH system, integrating the NN kernel in the framework, encouraged by the good performance on the identification of charged particles. We have also envisioned a solution to improve the identification of e^±, using the LKr calorimeter online primitives, which provide information related to the energy of the event.
http://arxiv.org/abs/2307.00321v1
20230701121418
Algorithms for euclidean regularised Optimal Transport
[ "Dmitry A. Pasechnyuk", "Michael Persiianov", "Pavel Dvurechensky", "Alexander Gasnikov" ]
math.OC
[ "math.OC", "65K10", "G.1.6" ]
D.A. Pasechnyuk, M. Persiianov et al. Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE dmitry.vilensky@mbzuai.ac.ae Moscow Institute of Physics and Technology, Dolgoprudny, Russia persiianov.mi@phystech.edu,gasnikov.av@mipt.ru ISP RAS Research Center for Trusted Artificial Intelligence, Moscow, Russia Institute for Information Transmission Problems RAS, Moscow, Russia Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany pavel.dvurechensky@wias-berlin.deCaucasus Mathematic Center of Adygh State University, Maikop, Russia Algorithms for euclidean regularised Optimal Transport The research was supported by Russian Science Foundation (project No. 23-11-00229), <https://rscf.ru/en/project/23-11-00229/>. Dmitry A. PasechnyukEqual contribution1,2,3,40000-0002-1208-1659 Michael Persiianov2, 40009-0008-7059-1244Pavel Dvurechensky50000-0003-1201-2343 Alexander Gasnikov2,4,60000-0002-7386-039X =============================================================================================================================================================================================== This paper considers Optimal Transport problem regularised by square of euclidean ℓ_2-norm, provides theoretical guarantees on Sinkhorn–Knopp algorithm, Accelerated Gradient Descent, Accelerated Alternating Minimisation, and Coordinate Linear Variance Reduction algorithms' iteration complexities, and compares practical efficiency of these methods and their analogues applied to entropy regularised Optimal Transport problem by numerical experiments on MNIST dataset. § INTRODUCTION Optimal Transport (OT) problem has a long history <cit.>, is well-studied <cit.> and piques interest in modern statistical learning community <cit.>. This paper focuses on discrete OT problem statement and numerical optimisation methods applied to it. Formally, original problem to solve is: min_X1_m = a X^⊤1_n = b x_ij≥ 0⟨ C, X ⟩, where a ∈𝒮_n and b ∈𝒮_m are source and destination distributions (measures), and unit simplex 𝒮_d ≡{x ∈ℝ^d_+ | ∑_i=1^d x_i = 1}, X ∈ℝ^n× m_+ is a transportation plan such that x_ij is a mass to transport from i-th source to j-th destination, and C ∈ℝ^n × m_+ is a cost of transportation matrix. An algorithm applied to OT problem must obtain ε-optimal transportation plan X_ε which is defined as one that satisfies the following condition: ⟨ C, X_ε⟩ - ε≤⟨ C, X^*⟩≡min_X1_m = a X^⊤1_n = b x_ij≥ 0⟨ C, X ⟩, and strictly satisfies constraints X_ε1_m = a, X^⊤_ε1_n = b, and X_ε∈ℝ^n× m_+. To obtain such a solution, we consider the euclidean regularised OT problem min_X1_m = a X^⊤1_n = b x_ij≥ 0{f(X) ≡⟨ C, X ⟩ + γ/2X_2^2 }, where X_2^2 ≡∑_i=1,j=1^n,m x_ij^2, and apply convex optimisation methods to solve it. One can see that if γ∝ε, then ε-optimum of this optimisation problem is (∝ε)-optimal transportation plan for original problem (<ref>). But unlike (<ref>), problem statement (<ref>) allows one to use convex optimisation tools like duality and acceleration. Contribution. We provide the first arithmetic complexity bounds for euclidean regularised OT. The results of this paper are summarised in Table <ref> below. Each cell contains estimate of arithmetic operations number needed for an Algorithm in the leftmost column to achieve target accuracy ε for problem (<ref>) with given n, m (we assume w.l.o.g. that n > m), and C in the worst case (constant factors are omitted and ε is assumed to be small enough). 
Arithmetic complexities for original algorithms applied to entropy regularised OT <cit.> are known and presented in right column, while left column contains estimates obtained in this paper. The paper is organised as follows. In Section <ref>, we provide short literature review, pointing on papers which formed the basis of proofs presented in this paper and unfolding the history of application of quadratic regularisation in OT. Section <ref> contains all the theoretical result of the paper, Subsections <ref>, <ref>, <ref>, and <ref> consider Sinkhorn, Accelerated Gradient, Alternating Minimisation, and Coordinate Linear Variance Reduction algorithms in detail, respectively. Finally, Section <ref> contains results of numerical experiments which compare practical performance of the proposed algorithms and their origins applied to entropy regularised OT. § BACKGROUND The most widely-known method to solve OT problem is Sinkhorn–Knopp algorithm <cit.>. Its actual worst-case arithmetic complexity in asymptotic w.r.t. ε and n was theoretically justified in <cit.>. Our analysis of arithmetic complexity of Sinkhorn–Knopp algorithm applied to euclidean regularised OT is based on the framework described in <cit.>. In <cit.>, it was shown that accelerated gradient descent applied to OT problem via the entropy regularisation allows to improve iteration complexity w.r.t ε. Acceleration can be also made directly on the basis of Sinkhorn–Knopp algorithm, if we consider the latter as alternating minimisation procedure. This approach was proposed in <cit.>. Both of two latter approaches lead to similar iteration complexities, and their theoretical guarantees are general, so they do not require significant changes in proofs. Standard approach allowing one to apply convex optimisation methods to OT efficiently is entropy regularisation <cit.>. Recently, the interest to quadratic or euclidean regularisation started to grow <cit.>. One practically valuable property of euclidean regularised OT is that optimal plan is sparse <cit.>, which is important for some applications, e.g., image colour transfer. Also, algorithms applied to euclidean regularised OT are expected to be more computationally stable and more robust for small regularisation parameter. For example, Sinkhorn–Knopp algorithm for entropy regularised OT requires to compute exponent with γ in denominator. On the other hand, none of the above papers, where euclidean regularisation is considered, provide arithmetic complexity estimates for particular algorithms applied to euclidean regularised OT. § THEORETICAL GUARANTEES FOR VARIOUS APPROACHES §.§ Common reasoning We have two discrete probability measures, i.e. vectors a ∈𝒮_n, b ∈𝒮_m from unit simplex, such that a^⊤1_n = 1, b^⊤1_m = 1, and cost matrix C ∈ℝ_+^n × m. Our goal is to find transportation plan X ∈ℝ_+^n × m determined by optimisation problem (<ref>), which is euclidean regularised version of classical problem (<ref>). Problems under consideration are of the generalised linear form and allow the use of convex duality to get rid of linear constraints. Let us consider Lagrange saddle-point problem max_λ∈ℝ^n, μ∈ℝ^mmin_X ∈ℝ_+^n × mℒ(X, λ, μ) with Lagrangian function defined by ℒ(X, λ, μ) ≡⟨ C, X ⟩ + γ/2X_2^2 + λ^⊤ (X 1_m - a) + μ^⊤ (X^⊤1_n - b). 
First-order optimality condition for this problem implies that ∂ℒ(X, λ, μ)/∂ x_ij = 0 = c_ij + γ x_ij + λ_i + μ_j, which gives the closed-form expression for optimal transport plan X(λ, μ) = [-C - λ1_m^⊤ - 1_n μ^⊤]_+ / γ for given dual multipliers λ and μ, where [x]_+ ≡max{0, x}. After substitution of X(λ, μ) into formula for ℒ, we obtain the dual problem: max_λ∈ℝ^n, μ∈ℝ^m{φ(λ, μ) ≡ -1/2γ∑_j=1^m [-C_j - λ - μ_j 1_n]_+_2^2 - λ^⊤ a - μ^⊤ b}, where C_j is j-th row of C. §.§ Sinkhorn–Knopp Algorithm Following the reasoning of <cit.> on justification of Sinkhorn–Knopp algorithm for entropy regularised OT problem, below we come to analogous Sinkhorn–Knopp method for euclidean regularised OT problem. First-order optimality conditions for the dual problem (<ref>) w.r.t. λ and μ are, correspondingly, f_i(λ_i) - γ a_i = 0, i=1,...,n g_j(μ_j) - γ b_j = 0, j=1,...,m, f_i(λ) = ∑_j=1^m [-c_ij - λ - μ_j]_+, g_j(μ) = ∑_i=1^n [-c_ij - λ_i - μ]_+. Let us denote the i-th order statistic of elements of the vector x by x_(i) and choose l as the largest index j such that f_i(-(C^⊤_i + μ)_(j)) ≤γ a_i, and k as the largest index i such that g_j(-(C_j + λ)_(i)) ≤γ b_j), correspondingly <cit.>. Then, by freezing μ and λ correspondingly, explicit solutions of (<ref>) are λ_i = -(γ a_i + ∑_j=1^l (C^⊤_i + μ)_(j))/ l, i=1,...,n, μ_j = -(γ b_j + ∑_i=1^k (C_j + λ)_(i))/ k, j=1,...,m. Alternating updates of λ and μ according to the formulas above gives the Sinkhorn–Knopp algorithm applied to euclidean regularised OT, its pseudocode is listed in Algorithm <ref>. The following proposition estimates the algorithmic complexity of each iteration of Algorithm <ref>. One iteration of Algorithm <ref> requires 𝒪((n + m)^2) amortised a.o. per iteration (only +, -, * and ≤; 𝒪(n + m) /; no built-in functions calculations). Following Lemmas <ref>, <ref> and Theorem <ref> correspond to Lemmas 1, 2 and Theorem 1 from <cit.>, but the proofs are significantly different from that of their analogues due to the use of specific properties of euclidean regularisation. For R = C_∞ + γ/min{n, m} (1 - max_i=1,...,n j=1,...,m{a_i, b_j}), it holds that max_j=1,...,mμ_j - min_j=1,...,mμ_j ≤ R, max_i=1,...,nλ_i - min_i=1,...,nλ_i ≤ R, max_j=1,...,mμ^*_j - min_j=1,...,mμ^*_j ≤ R, max_i=1,...,nλ^*_i - min_i=1,...,nλ^*_i ≤ R. Firstly, thanks to the form of updates (<ref>), we can guarantee the non-positivity of dual variables. Indeed, initial values of μ and λ are zero, so non-positive. Then, for all j=1,...,m, n-1/γμ_j + b_j = 1/γ∑_i=1^n (-c_ij - λ_i - μ_j) ≤1/γ∑_i=1^n [-c_ij - λ_i - μ_j]_+ = X^⊤1_n = b_j, that implies μ_j ≤ 0. Similarly, one can prove λ_i ≤ 0 for all i=1,...,n. Further, let's relate dual variables with corresponding marginal distributions of X. Here we consider only μ, assuming that we just updated it. Similar reasoning can be applied to just updated λ as well, that gives the right column of statements from Lemma. -μ_j - C_∞ - 1/n1_n^⊤λ ≤γ/n [X^⊤1_n]_i = γ/n b_j ≤γ/n -μ_j - 1/n1_n^⊤λ ≥γ/n [X^⊤1_n]_i = γ/n b_j, ∀ j=1,...,m. This implies μ_j ≥ -C_∞ - 1/n (1_n^⊤λ + γ), μ_j ≤ -1/n (1_n^⊤λ + γ b_j), ∀ j=1,...,m. Finally, max_j=1,...,mμ_j - min_j=1,...,mμ_j ≤ -1/n(1_n^⊤λ + γmax_j=1,...,m b_j) + C_∞ + 1/n(1_n^⊤λ + γ) = C_∞ + γ/n(1 - max_j=1,...,m b_j). Reasoning for μ^* and λ^* is similar, since the gradient of objective in (<ref>) vanishes, so X^⊤1_n = b and X 1_m = a, correspondingly. For λ, μ, and X taken from each iteration of Algorithm <ref> it holds that φ(λ^*, μ^*) - φ(λ, μ) ≤ 4R √(n + m)(X 1_m - a_2 + X^⊤1_n - b_2). 
Due to concavity of φ, we have φ(λ^*, μ^*) ≤φ(λ, μ) + ⟨∇φ(λ, μ), (λ^*, μ^*) - (λ, μ) ⟩. Then, by Hölder inequality and Lemma <ref>, φ(λ^*, μ^*) - φ(λ, μ) ≤√(n + m)∇φ(λ, μ)_2 (λ^*, μ^*) - (λ, μ)_∞ ≤ 4R√(n + m)∇φ(λ, μ)_2 ≤ 4R √(n + m) (X 1_m - a_2 + X^⊤1_n - b_2). To obtain ε solution of problem (<ref>), its sufficient to perform 2 + 8 max{n, m}^3/2 R/γε iterations of Algorithm <ref>. Below, λ_+ and μ_+ will denote values of λ and μ after the current iteration, and λ_+k and μ_+k denote values of λ and μ after k iterations. Let current update relate to λ. Denoting S = -C - 1_n μ^⊤ - λ1_m^⊤ and δ = λ - λ_+, we have φ(λ_+, μ_+) - φ(λ, μ) = 1/2 γ∑_i,j=0,0^n,m (max{0, S_ij + δ_i }^2 - max{0, S_ij}^2) + δ^⊤ a ≥1/2 γ∑_S_ij > 0, δ_i < 0(max{0, S_ij + δ_i }^2 - S_ij^2) + δ^⊤ a ≥δ^⊤ (a + [δ]_- - 2 γ X 1_m) ≥δ_2^2 + δ^⊤ (a - 2γ X 1_m) ≥δ^⊤ (a - X 1_m) ≥γ/na - X 1_m_2^2, due to λ_i - [λ_+]_i = γ/l a_i - 1/l∑_j=1^l (-C^⊤_i - μ - λ_i)_(j)≥γ/l a - γ/l X 1_m and for small enough γ. Then, by Lemma <ref>, we have φ(λ_+, μ_+) - φ(λ, μ) ≥max{γ/16 n^2[φ(λ^*, μ^*) - φ(λ, μ)]^2/R^2, γ/nε^2 }, which implies, similarly to 2.1.5 from <cit.>, that γ/16 n^2 R^2 [φ(λ^*, μ^*) - φ(λ, μ)] - γ/16 n^2 R^2 [φ(λ^*, μ^*) - φ(λ_+, μ_+)] ≥(γ/16 n^2 R^2 [φ(λ^*, μ^*) - φ(λ, μ)])^2 γ/16 n^2 R^2 [φ(λ^*, μ^*) - φ(λ_+k, μ_+k)] ≤1/γ/16 n^2 R^2 [φ(λ^*, μ^*) - φ(λ, μ)] + k, k ≤ 1 + 16 n^2 R^2/γ1/[φ(λ^*, μ^*) - φ(λ_+, μ_+)] - 16 n^2 R^2/γ1/[φ(λ^*, μ^*) - φ(λ, μ)]. In the other case of (<ref>), we have [φ(λ^*, μ^*) - φ(λ_+k, μ_+k)] ≤ [φ(λ^*, μ^*) - φ(λ, μ)] - k γε^2/n. To combine bounds on k from (<ref>) and (<ref>), we take minimum of their sum over all options for current objective function value k ≤min_0 ≤ s ≤ [φ(λ^*, μ^*) - φ(λ, μ)]{2 + 16 n^2 R^2/γ s - 16 n^2 R^2/γ1/[φ(λ^*, μ^*) - φ(λ, μ)] + s n/γε^2} = 2 + n/γ (8 √(n) R/ε - 16 n R^2/[φ(λ^*, μ^*) - φ(λ, μ)]) [φ(λ^*, μ^*) - φ(λ, μ)] ≥ 4 ε√(n) R^2, 2 + n/γ[φ(λ^*, μ^*) - φ(λ, μ)]/ε^2 [φ(λ^*, μ^*) - φ(λ, μ)] < 4 ε√(n) R^2, which implies the statement of Theorem. We have not set R and γ in the bound above. By Lemma <ref>, R ≤C_∞ + γ/n, so k ≤ 2 + 8 n^3/2C_∞/γε + 8 n^1/2/ε, and one can take γ = ε / 2, such that solving regularised problem with accuracy ε / 4 will give (ε / 2)-solution of original problem. Besides, by Lemma 7 from <cit.> we have ⟨ C, X ⟩≤⟨ C, X^*⟩ + γ/2X_2^2 + 2 (a - X 1_m_1 + b - X^⊤1_n_1) C_∞, so one should set target accuracy to ε / (4 C_∞). This proves the following result. Number of iterations of Algorithm <ref>, sufficient for Algorithm <ref> to return ε-optimal transport plan X such that X 1_m = a, X^⊤1_n = b, is 𝒪((n + m)^3/2C_∞^2/ε^2). Note that correction a' = (1-ε/8)(a + 1_n ε/(n(8-ε))) of target marginal distributions a and b, which is required for original Sinkhorn–Knopp algorithm <cit.>, is not necessary in Algorithms <ref> and <ref>, since formula for R from Lemma <ref> makes sense even if a_i = 0 and b_j = 0 for some i and j. §.§ Adaptive Accelerated Gradient Descent To apply accelerated gradient method to the problem (<ref>), let us consider it as problem of convex optimisation with linear constrains: min_A[X] = B x_ij≥ 0 f(X), where operator A: ℝ^n × m→ℝ^n + m is defined by A[X] = (X1_m, X^⊤1_n), B = (a, b) ∈ℝ^n + m_+, f is defined in (<ref>), and corresponding dual problem is equivalent to (<ref>). The following theorem gives iteration complexity for primal-dual Algorithm <ref>, which will be further applied to obtain the solution for problem (<ref>). Note, that for given operator A it holds that A_2,2≡sup_X_2 = 1A [X]_2 = √(n + m). 
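Before the convergence statement below, it may help to collect in a short NumPy sketch the basic computational ingredients shared by the Sinkhorn–Knopp-type algorithm above and the accelerated methods of this section. This is our illustration rather than the authors' implementation: it evaluates the primal plan X(λ, μ), the dual objective φ and its gradient (the marginal residuals), and the exact coordinate update written as the solution of ∑_j [v_j - t]_+ = γ a_i; the function names are ours.

```python
# Minimal NumPy sketch (not the authors' code) of the dual machinery used above.
import numpy as np

def plan(C, lam, mu, gamma):
    """Primal plan X(lambda, mu) = [-C - lam 1^T - 1 mu^T]_+ / gamma."""
    return np.maximum(-(C + lam[:, None] + mu[None, :]), 0.0) / gamma

def dual_phi(C, lam, mu, gamma, a, b):
    """Dual objective phi(lambda, mu)."""
    P = np.maximum(-(C + lam[:, None] + mu[None, :]), 0.0)
    return -(P ** 2).sum() / (2 * gamma) - lam @ a - mu @ b

def dual_grad(C, lam, mu, gamma, a, b):
    """Gradient of phi: the marginal residuals (X 1_m - a, X^T 1_n - b)."""
    X = plan(C, lam, mu, gamma)
    return X.sum(axis=1) - a, X.sum(axis=0) - b

def exact_coordinate(v, mass):
    """Solve sum_j max(v_j - t, 0) = mass for t (mass > 0 assumed)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, u.size + 1)
    t = (css - mass) / ks                    # candidate for each active-set size
    k = np.nonzero(u - t > 0)[0].max() + 1   # largest consistent active set
    return (css[k - 1] - mass) / k

def alternating_sweep(C, a, b, gamma, lam, mu):
    """One alternating sweep of the exact coordinate updates (Sinkhorn-type)."""
    for i in range(C.shape[0]):
        lam[i] = exact_coordinate(-(C[i, :] + mu), gamma * a[i])
    for j in range(C.shape[1]):
        mu[j] = exact_coordinate(-(C[:, j] + lam), gamma * b[j])
    return lam, mu
```

The fact that the gradient of the dual objective is exactly the pair of marginal residuals is what the primal–dual accelerated methods below exploit.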
Assume that optimal dual multipliers satisfy (λ^*, μ^*)_2 ≤ R_2. Then, Algorithm <ref> generates sequence of approximate solutions for primal and dual problems (<ref>) and (<ref>), which satisfy f(X_k) - f(X^*) ≤ f(X_k) - φ(λ_k, μ_k) ≤16 A_2,2^2 R^2/γ k^2, A [X_k] - B_2 ≤16 A_2,2^2 R/γ k^2, X_k - X^*_2 ≤8 A_2,2 R/γ k. Following the proof scheme chosen in <cit.>, we estimate the error of solution X for the original problem (<ref>): ⟨ C, X⟩ = ⟨ C, X^*⟩ + ⟨ C, X_reg.^* - X^* ⟩ + ⟨ C, X_k - X_reg.^* ⟩ + ⟨ C, X - X_k ⟩ ≤⟨ C, X^*⟩ + ⟨ C, X_reg.^* - X^* ⟩ + ⟨ C, X - X_k⟩ + f(X_k) + φ(λ_k, μ_k) + γ, where X^*_reg. is the exact solution of problem (<ref>). By choosing γ≤ε/3, obtaining X_k such that f(X_k) - φ(λ_k, μ_k) ≤ε/3 by Algorithm <ref> and making ⟨ C, X - X_k⟩≤ε/3, we guarantee arbitrarily good approximate solution X. Let us consider the latter condition in more details. By Lemma 7 <cit.> and Theorem <ref> one has ⟨ C, X - X_k⟩ ≤C_∞X - X_k_1 ≤ 2 C_∞ (X_k 1_m - a_1 + X_k^⊤1_n - b_1) ≤_1 2 √(n + m)C_∞A [X_k] - B_2 ≤32 (n + m)^3/2C_∞ R/γ k^2 ≤_2 2 √(n + m)C_∞X_k - X_reg.^*_2 ≤16 (n + m) C_∞ R/γ k. To ensure the latter, it is sufficient to choose k such that max{k^2/32 (n + m)^3/2 R, k/16 (n + m) R}≥3 C_∞/ε^2, k = 𝒪(min{n C_∞ R/ε^2, n^3/4√(C_∞ R)/ε}). On the other hand, f(X_k) - φ(λ_k, μ_k) ≤ε/3 together with Theorem <ref> imply k = 𝒪(√(n + m) R/ε), which is majorated by (<ref>) and does not contribute to iteration complexity. This proves, taking into account (<ref>), Lemma <ref>, and that R_2 ≤ R √(n + m), the following result. Number of iterations of Algorithm <ref>, sufficient for Algorithm <ref> to return ε-optimal transport plan X such that X 1_m = a, X^⊤1_n = b, is 𝒪(min{(n + m)^3/2C_∞^2/ε^2, (n + m) C_∞/ε}). §.§ Accelerated Alternating Minimisation Note that Sinkhorn–Knopp algorithm is based on the simplest alternating optimisation scheme: dual function φ is explicitly optimised w.r.t. λ and μ alternately. Thus, if there is a way to accelerate some alternating optimisation algorithm, similar technique can be applied to Sinkhorn–Knopp algorithm. Moreover, iteration complexity will correspond to that of taken accelerated alternating optimisation method, while the arithmetic complexity of optimisation w.r.t. one variable will be the same as for Sinkhorn algorithm. The following theorem gives iteration complexity for general primal-dual alternating minimisation Algorithm <ref>, which can be used similarly to Algorithm <ref> to obtain the solution for problem (<ref>). Note that b, which denotes the number of independent variables blocks in <cit.>, can be set to b = 2 in our case, because ∇_λφ(λ, μ)_2 > ∇_μφ(λ, μ)_2 implies ∇_λφ(λ, μ)_2^2 > 1/2∇φ(λ, μ)_2^2. But since dimensionalities of λ and μ are different, one of the variables which has bigger dimensionality will be updated more often a priori. Assume that optimal dual multipliers satisfy (λ^*, μ^*)_2 ≤ R_2. Then, Algorithm <ref> generates sequence of approximate solutions for primal and dual problems (<ref>) and (<ref>), which satisfy f(X_k) - f(X^*) ≤ f(X_k) - φ(λ_k, μ_k) ≤16 A_2,2^2 R^2/γ k^2, A [X_k] - B_2 ≤16 A_2,2^2 R/γ k^2, X_k - X^*_2 ≤8 A_2,2 R/γ k, Instead of max operator taking place in the listing of general Algorithm <ref> one should use formulas (<ref>). The advantage of this approach consists in simplicity of obtaining the solution for these auxiliary problems. It is expected that while accelerated gradient descent considered before was making one gradient step at each iteration, this algorithm makes optimal step w.r.t. 
half of dual variables, so expected progress per iteration is bigger, while the number of iterations is the same up to small 𝒪(1) factor. Using the proof scheme similar to which is provided in Section <ref> and the same problem pre- and post-processing Algorithm <ref>, one can guarantee, taking into account (<ref>) and Lemma <ref>, that the following result holds. Number of iterations of Algorithm <ref>, sufficient for Algorithm <ref> to return ε-optimal transport plan X such that X 1_m = a, X^⊤1_n = b, is 𝒪(min{(n + m)^3/2C_∞^2/ε^2, (n + m) C_∞/ε}). §.§ Coordinate Linear Variance Reduction One can also consider problem (<ref>) as generalised linear problem with strongly-convex regulariser and sparse constraints. By using the property that dual variables or problem (<ref>) are separable into two groups (λ and μ), one can apply primal-dual incremental coordinate methods. One of the modern algorithms which is based on dual averaging and has implicit variance reduction effect was proposed in <cit.>. The following theorem presents simplified form of iteration complexity estimate for Algorithm <ref> adopted to our particular problem. Assume that optimal dual multipliers satisfy (λ^*, μ^*)_2 ≤ R_2. Then, Algorithm <ref> generates sequence of approximate solutions for primal and dual problems (<ref>) and (<ref>), which satisfy 𝔼[f(X_k) - f(X^*)] = 𝒪(A_2,2^2 R^2/γ k^2), 𝔼[A[X_k] - B_2] = 𝒪(A_2,2^2 R/γ k^2). Taking into account (<ref>) and Lemma <ref>, using the same reasoning as for Theorem <ref>, one has Number of iterations of Algorithm <ref>, sufficient for Algorithm <ref> to return expected ε-optimal transport plan X such that X 1_m = a, X^⊤1_n = b, is 𝒪(min{(n + m)^3/2C_∞^2/ε^2, (n + m) C_∞/ε}), where “expected ε-optimal” means that 𝔼[⟨ C, X⟩] - ε≤⟨ C, X^*⟩. One can see that asymptotic of iteration complexity is the same as that of Algorithms <ref> and <ref>. This allows to use the same pre- and post-processing Algorithm <ref> to apply this algorithm to the OT problem. The advantage of this algorithm is the simplicity of iterations. It is expect that despite the same 𝒪(nm) arithmetic complexity of one iteration, constant of it in practice is significantly smaller than for accelerated methods considered before. § NUMERICAL EXPERIMENTS All the optimisation algorithms described in previous section are implemented in Python 3 programming language. Reproduction package including source code of algorithms and experiments settings is hosted on GitHub[Repository is available at <https://github.com/MuXauJl11110/Euclidean-Regularised-Optimal-Transport>]. We consider OT problem for the pair of images from MNIST dataset <cit.>, where distributions are represented by vectorised pixel intensities and cost matrix contains pairwise euclidean distances between pixels. Firstly, experiment on comparison of algorithms applied to entropy regularised OT was carried out. Following algorithms were compared: Sinkhorn–Knopp algorithm (Sinkhorn) <cit.>, Adaptive Primal-dual Accelerated Gradient Descent (APDAGD) <cit.>, Primal-dual Accelerated Alternating Minimisation (PDAAM) <cit.> and its modification which uses one-dimensional optimisation to choose step size (PDAAM-LS). Results of the experiment are shown in Figure <ref>. There are presented convergence curves of methods for two progress measures: function value for original problem (<ref>) and dual gap for problem (<ref>). 
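For concreteness, here is a small sketch of how an OT instance of this kind can be assembled from two MNIST images; it is our illustration of the setup described above, and the function name and the small smoothing constant used to keep the marginals strictly positive are assumptions.

```python
# Hypothetical setup sketch: two MNIST images as measures on the 28x28 pixel
# grid, with C the pairwise Euclidean distances between pixel coordinates.
import numpy as np

def ot_instance(img_a, img_b, eps=1e-6):
    """img_a, img_b: 28x28 arrays of pixel intensities."""
    h, w = img_a.shape
    a = img_a.ravel().astype(float) + eps   # vectorised intensities, slightly
    b = img_b.ravel().astype(float) + eps   # smoothed so a, b are strictly positive
    a /= a.sum()
    b /= b.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    C = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return a, b, C
```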
The range of target accuracy values is ε∈{2 · 10^-2, 1.85 · 10^-3, 5 · 10^-4} (each target accuracy requires a separate experiment, because ε is a parameter of Algorithms <ref> and <ref> and affects the convergence from the beginning). All the plots show that PDAAM is the leading algorithm, and the performance of APDAGD is competitive with it. On the other hand, the Sinkhorn–Knopp algorithm converges slowly, especially for small ε. PDAAM-LS demonstrates unstable behaviour in our experiment. Secondly, the same algorithms were compared when applied to the euclidean regularised OT problem. Figure <ref> shows the convergence curves of the methods; the organisation of the plots is the same as above. One can see that the ordering of the methods' performance remains the same as in the case of entropy regularised OT, i.e. the PDAAM algorithm converges faster than APDAGD and Sinkhorn. On the other hand, the difference between PDAAM and APDAGD performance is less significant in the case of euclidean regularised OT (we conclude that the progress of a step which is optimal w.r.t. one of the dual variables is not much bigger than the progress of a gradient step), and the Sinkhorn algorithm performs significantly worse than in entropy regularised OT and is not efficient in practice. CLVR did not prove to be an efficient method in our experiment. Generally, the convergence of all of the algorithms in the case of euclidean regularisation is more prone to slowing down in the later iterations. The expected property of euclidean regularised OT that the optimal transport plan obtained with it is sparse is confirmed in our experiments. One can see examples of transport plans in Figure <ref>; the fraction of zero elements (which are < 10^-21) in them is around 99.5%. § DISCUSSION Euclidean regularisation for OT problems was recently considered in several papers as having practically important properties such as stability with respect to a small regularisation parameter and a sparse optimal transport plan. In this paper, we provided a theoretical analysis of several algorithms which can be efficiently applied to euclidean regularised OT, and demonstrated and compared their practical performance. Our results show that these properties come at a price, which consists in the slower convergence of all the algorithms and a faster growth of the arithmetic complexity with increasing dimensionality. Our future plans are to consider different convex optimisation algorithms applied to euclidean regularised OT, especially splitting algorithms, hopefully more computationally stable with respect to a small regularisation parameter <cit.>, and to consider euclidean regularisation in the context of the Wasserstein barycentre problem.
http://arxiv.org/abs/2307.02977v2
20230706132511
Ordering dynamics and aging in the Symmetrical Threshold model
[ "David Abella", "Juan Carlos González-Avella", "Maxi San Miguel", "José J. Ramasco" ]
physics.soc-ph
[ "physics.soc-ph" ]
APS/123-QED david@ifisc.uib-csic.es Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC - UIB), Campus UIB, 07122 Palma de Mallorca, Spain Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC - UIB), Campus UIB, 07122 Palma de Mallorca, Spain Advanced Programming Solutions SL, 07121 Palma de Mallorca, Spain Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC - UIB), Campus UIB, 07122 Palma de Mallorca, Spain Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC - UIB), Campus UIB, 07122 Palma de Mallorca, Spain The so-called Granovetter-Watts model was introduced to capture a situation in which the adoption of new ideas or technologies requires a certain redundancy in the social environment of each agent to take effect. This model has become a paradigm for complex contagion. Here we investigate a symmetric version of the model: agents may be in two states that can spread equally through the system via complex contagion. We find three possible phases: a mixed one (dynamically active disordered state), an ordered one, and a heterogeneous frozen phase. These phases exist for several configurations of the contact network. Then we consider the effect of introducing aging as a non-Markovian mechanism in the model, where agents become increasingly resistant to change their state the longer they remain in it. We show that when aging is present, the mixed phase is replaced, for sparse networks, by a new phase with different dynamical properties. This new phase is characterized by an initial disordering stage followed by a slow ordering process towards a fully ordered absorbing state. In the ordered phase, aging modifies the dynamical properties. For random contact networks, we develop a theoretical description based on an Approximate Master Equation that describes with good accuracy the results of numerical simulations for the model with and without aging. Ordering dynamics and aging in the Symmetrical Threshold model José J. Ramasco August 1, 2023 ============================================================== § INTRODUCTION A variety of collective phenomena can be well understood through stochastic binary-state models for interacting agents. In these models, each agent is assumed to be in one of two possible states, such as susceptible/infected, adopters/non-adopters, etc., depending on the context of the model. The interaction among agents is determined by the underlying contact network and the dynamical rules of the model. There are various examples of binary-state models, including processes of opinion formation <cit.> and disease or social contagion <cit.>, among others. The consensus problem consists in determining under which circumstances the agents end up sharing the same state or when the coexistence of both states prevails. This is characterized by a phase diagram that provides the boundaries separating domains of different behaviors in the control parameter space. Macroscopic descriptions of these models in terms of mean-field, pair, and higher-order approximations are well established <cit.>. An important category of binary-state models are threshold models <cit.>, which were originally introduced by M. Granovetter <cit.> to address problems of social contagion such as rumor propagation, innovation adoption, riot participation, etc. Multiple exposures, or group interaction, are necessary in these models to update the current state, a characteristic of complex contagion models <cit.>. 
The threshold model presents a discontinuous phase transition from a “global cascade” phase to a “no cascade” phase, which was analyzed in detail in Ref. <cit.>. This model has been extensively studied on various network topologies, such as regular lattices, small-world <cit.>, random <cit.>, clustered <cit.>, modular <cit.>, hypergraphs <cit.>, homophilic <cit.> and coevolving <cit.> networks. A main difference between the threshold model and other binary-state models, such as the Voter <cit.>, majority vote (MV) <cit.>, and nonlinear Voter model <cit.>, is the lack of symmetry between the two states. In the threshold model, changing state is only possible in one direction, representing the adoption forever of a new state that initially starts in a small minority of agents. A symmetric version of the threshold model, with possible changes of states in both directions, was introduced in Refs. <cit.> to investigate the impact of noise and anticonformity. However, a complete characterization of the Symmetrical Threshold model and its ordering dynamics have not been addressed so far. Aging is an important non-Markovian effect in binary-state models that has significant implications. It describes how the persistence time of an agent in a particular state influences the transition rate to a different state <cit.>. As such, the longer an agent remains in the current state, the smaller the probability of changing. Aging has been shown to cause coarsening dynamics towards a consensus state in the Voter model <cit.>, to induce bona fide continuous phase transitions in the noisy Voter model <cit.>, modify the phase diagram and non-equilibrium dynamics of the Schelling segregation model <cit.>, and to modify non-trivially the cascade dynamics of the threshold model <cit.>. The introduction of aging is motivated by strong empirical evidence that human interactions do not occur at a constant rate and cannot be described using a Markovian assumption. Empirical studies have reported heavy-tail inter-event time distributions that reflect heterogeneous temporal activity patterns in social interactions <cit.>. In this work, we present a comprehensive analysis of the Symmetrical Threshold model, including its full phase diagram, and we investigate the effects of aging in the model. The model is examined in various network topologies, such as the complete graph, Erdős-Rényi (ER) <cit.>, random regular (RR) <cit.>, and a two-dimensional Moore lattice. The possible phases of the system are defined by the final stationary state as well as by the ordering/disordering dynamics characterized by the time-dependent magnetization and interface density, the persistence, and the mean internal time. For both the original model and the aging variant, the results of Monte Carlo numerical simulations are compared with results from the theoretical framework provided by an Approximate Master Equation (AME)<cit.>, which is general for any random network. We also derive a mean-field analysis to describe the outcomes in a complete graph. The article is organized as follows: In Section <ref>, we describe the Symmetrical Threshold model and provide the numerical and theoretical analysis of the phase diagram. Each subsection reports the results for the different networks chosen. Section <ref> presents the Symmetrical Threshold model with aging, the corresponding numerical and theoretical analysis, and the comparison with the model without aging. The results for the Moore lattice are shown in Section <ref>. 
Finally, we conclude with a summary and conclusions in Section <ref>. § SYMMETRICAL THRESHOLD MODEL The system consists of a set of N agents located at the nodes of a network. The variable describing the state of each agent i takes one of the two possible values: s_i = ± 1. Every agent has assigned a fixed threshold 0 ≤ T ≤ 1, which determines the fraction of different neighbors required to change state. Even though this value might be agent dependent, we will consider here only the case with a homogeneous T value for all the agents of the system. In each update attempt, an agent i (called active agent) is randomly selected, and if the fraction of neighbors with a different state is larger than the threshold T, the active agent changes state s_i → -s_i. Simulation time is measured in Monte Carlo (MC) steps, i.e., N update attempts. Numerical simulations run until the system reaches a frozen configuration (absorbing state) or until the average magnetization, m = (1/N) ∑_i s_i, fluctuates around a constant value. §.§ Mean-field We first consider the mean-field case of the complete graph (all-to-all connections). We take an initial random configuration with magnetization m_0 and perform numerical simulations for various values of T to construct the phase diagram (shown in Fig. <ref>a). We find three different phases based on the final state: * Phase I or Mixed: The system reaches an active disordered state (final magnetization m_f = 0) where the agents change their state continuously; * Phase II or Ordered: The system reaches the ordered absorbing states (m_f = ± 1) according to the initial magnetization m_0; * Phase III or Frozen: The system freezes at the initial random state m_f = m_0. For a given initial magnetization m_0 ≠ 0 and increasing T, the system undergoes a mixed-ordered transition at a critical threshold T_c = (1-|m_0|)/2, and an ordered-frozen transition at a critical threshold T_c^* = (1 + |m_0|)/2 > T_c (indicated by dotted and dashed black lines in Fig. <ref>a). In this mean-field scheme, if the fraction of nodes in state +1 is denoted by x, the condition for a node in state -1 to change its state is given by θ(x - T), where θ is the Heaviside step function. Thus, in the thermodynamic limit (N→∞), the variable x evolves over time according to the following mean-field equation: dx/dt = (1 - x) θ(x - T) - x θ(1 - x - T) = - ∂ V(x)/∂ x. Here, V(x) is the potential function. The stationary value of x, x_ st, is the solution of the implicit equation resulting from setting the time derivative equal to 0. No analytical expression of x_ st has been found in terms of T, but the solutions can be understood in terms of the potential V(x): V(x) = - ∫ (1 - x) θ(x - T) - x θ(1 - x - T) dx = x^2/2 + 1/2( T^2 - 2T - x^2 + 1) θ(T+x-1) - 1/2( T^2 - 2T - x(x-2)) θ(x - T) . The minimum and maximum values of V(x) correspond to stable and unstable solutions, respectively. Figure <ref>b shows the potential's dependence on the magnetization, obtained after a variable change m = 2x-1 in Eq. (<ref>). For T < 0.5, m = 0 is a stable solution, but increasing the threshold reduces the range of values of the initial magnetization from which this solution is reached, enclosing Phase I between the unstable solutions m = 1-2 T and 2 T-1. In fact, if m_0 > 1-2 T, the system reaches the absorbing solution m=+1, while if m_0 < -1+2 T, it reaches m=-1 (Phase II). For T = 0.5, there is just one unstable solution at m=0, and all the initial magnetization values reach the absorbing states m=± 1. 
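As a quick numerical illustration of the mean-field dynamics above, the following sketch (not the paper's code) integrates dx/dt by forward Euler and returns the stationary magnetization reached from a given m_0; the convention θ(0) = 1 is an assumption that only matters exactly at the phase boundaries. The discussion of the T > 0.5 case continues below.

```python
# Minimal sketch: forward-Euler integration of the mean-field equation
# dx/dt = (1 - x) * theta(x - T) - x * theta(1 - x - T).
import numpy as np

def theta(z):
    return 1.0 if z >= 0.0 else 0.0   # assumed Heaviside convention theta(0) = 1

def mean_field_final_m(m0, T, dt=1e-3, t_max=200.0):
    x = (1.0 + m0) / 2.0              # fraction of agents in state +1
    for _ in range(int(t_max / dt)):
        dx = (1.0 - x) * theta(x - T) - x * theta(1.0 - x - T)
        x = min(max(x + dt * dx, 0.0), 1.0)
    return 2.0 * x - 1.0              # stationary magnetization m_f

# e.g. with m0 = 0.5: T = 0.2 -> m_f ~ 0 (mixed), T = 0.4 -> m_f ~ +1 (ordered),
# T = 0.9 -> m_f = 0.5 (frozen), consistent with T_c and T_c^* above.
```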
For T > 0.5, the potential is equal to a constant value for a range of m_0, which means that an initial condition will remain in this state forever (Phase III). The range of values of the initial condition from which this phase is reached grows linearly with T until T=1, where all initial conditions fulfill dm/dt=0. Note that the mean-field Symmetrical Threshold model for T=1 shows the same potential profile as the mean-field Voter model <cit.>. The important difference is that for the Voter model, any initial magnetization is marginally stable, while in our model any initial magnetization is an absorbing state in Phase III. In the Voter model finite size fluctuations will take the system to the absorbing states m=± 1. §.§ Random networks We analyze the phase diagram of the Symmetrical Threshold model in two random networks: Erdős-Rényi (ER) <cit.> and random regular (RR) <cit.> graphs with mean degree ⟨ k ⟩ = 8. Figures <ref>a and <ref>b show the phase diagram for both networks, where it is shown that the existence of the three phases previously described is robust to changes in network structure. The main difference from the all-to-all scenario is that Phase III does not freeze exactly at the same initial magnetization. Instead, the system reaches an absorbing state with a higher magnetization m_f > m_0. In this phase, the value of m_f depends on the threshold such that increasing T, increases the disorder in the system, until T = 1, where m_f = m_0 (see Fig. <ref>c). On the other hand, phases I and II reach the same stationary state as in the mean-field case. Furthermore, the critical thresholds T_c and T_c^* show a different dependence on m_0 depending on the network structure. To explain the transitions exhibited by the model, we use a theoretical framework for binary-state dynamics in complex networks <cit.>: the Approximate Master Equation (AME), which considers agents in both states ± 1 with degree k, m neighbors in state -1 that have been j time steps in the current state (called “internal time” or “age”) as different sets in a compartmental model (see details of the AME derivation in our previous work <cit.>). In general, the AME is: d/d t x^±_k, m, 0= - x^±_k, m, 0 + ∑_l T^∓_k, m,l x^∓_k, m, l - β^± (k-m) x^±_k, m, 0 -γ^± m x^∓_k, m, 0, d/d t x^±_k, m, j= - x^±_k, m, j+ A^±_k, m,j x^±_k, m, j-1 - β^± (k-m) x^±_k, m, j + β^± (k-m+1) x^±_k, m-1, j-1 +γ^± (m+1) x^±_k,m+1,j-1 -γ^± m x^±_k, m, j, where variables x^+_k,m,j and x^-_k,m,j are the fraction of nodes in state +1 or -1, respectively, with degree k, m neighbors in state -1 that have been j time steps in the current state. The rates β^± account for the change of state of neighbors (±) of a node in state +1. The rates γ^± are equivalent but for nodes in state -1. They can be written as β^+ = ∑_j ∑_k p_k ∑_m = 0^k (k - m) T^+_k,m,j x^+_k,m,j/∑_j ∑_k p_k ∑_m = 0^k (k - m) x^+_k,m,j, β^- = ∑_j ∑_k p_k ∑_m = 0^k m T^+_k,m,j x^+_k,m,j/∑_j ∑_k p_k ∑_m = 0^k m x^+_k,m,j, γ^+ = ∑_j ∑_k p_k ∑_m = 0^k (k - m) T^-_k,m,j x^-_k,m,j/∑_j ∑_k p_k ∑_m = 0^k (k - m) x^-_k,m,j, γ^- = ∑_j ∑_k p_k ∑_m = 0^k m T^-_k,m,j x^-_k,m,j/∑_j ∑_k p_k ∑_m = 0^k m x^-_k,m,j, where the degree distribution of the chosen network is p_k. The transition rate T^±_k,m,j is for the probability of changing state (±→∓) for an agent of degree k, m neighbors in state -1 and age j, while the aging rate A^±_k,m,j is for the probability of staying in the same state and increasing the internal time (j → j + 1). 
For the Symmetrical Threshold model, these probabilities do not depend on internal time j (Markovian dynamics): T^+_k,m,j = θ(m/k - T), T^-_k,m,j = θ(k-m/k - T) , A^±_k,m,j = 1 - T^±_k,m,j. If we were not concerned with the internal time dynamics, we can simplify our AME to the one proposed by J. P. Gleeson in Ref. <cit.> for general binary-state models. Here we keep the internal times for a dynamical characterization of the different phases and as a reference frame for the aging studies in the next section. The primary approximations of this framework are to assume the thermodynamic limit (N →∞) and uncorrelated network with negligible levels of clustering. For the complex networks considered, these conditions are satisfied for large N, and the differential equations can be solved numerically using standard methods. The mixed order and ordered frozen transitions predicted (solid black lines in Figs. <ref>a and <ref>b, respectively) are in agreement with the numerical simulations. The predicted lines represent the initial and final values of T at which the AME reaches the ordered absorbing states m_f = ± 1. In Fig. <ref>c, we also observe a good agreement between numerically integrated solutions (solid colored lines) and numerical simulations (markers). An alternative simpler approximation is to consider a heterogeneous mean-field approximation (HMF) (refer to Appendix <ref> for further details). Although this approximation captures the qualitative behavior, the numerically integrated solutions do not agree with numerical simulations (see red dashed lines in Figs. <ref>a and <ref>b, and the colored dotted lines in Fig. <ref>c), and the frozen phase is not predicted by this framework. These findings demonstrate that threshold models need approximations beyond mean-field to achieve accuracy, in agreement with the findings in Refs. <cit.>. Beyond the stationary states, the previous phases can be characterized by their ordering dynamical properties. To describe the coarsening process, we use the time-dependent average interface density ρ(t) (fraction of links between nodes in different states), the average magnetization m(t), the mean internal time τ̅(t) (mean time spent in the current state over all the nodes) and the persistence p(t) (fraction of nodes that remain in their initial state at time t) <cit.>. Fig. <ref> shows the average results obtained from the numerical simulations, starting from an initial magnetization m_0 = 0.5. There are 3 regimes with different dynamical properties: * Mixed regime (Phase I): It corresponds to Phase I in the static phase diagram and it is characterized by fast disordering dynamics, which is reflected by an exponential decay of the persistence. The interface density, the magnetization, and the mean internal time exhibit fast dynamics towards their asymptotic values in the dynamically active stationary state (see T = 0.12, 0.24 in Fig. <ref>); * Ordered regime (Phase II): It coincides with Phase II in the static diagram and it is characterized by an exponential decay of the interface density. The magnetization tends to the ordered absorbing state based on the initial majority, and the mean internal time tends to scale as τ̅(t) ∼ t. Persistence in this phase decays until a plateau that corresponds to the initial majority that reaches consensus (since this fraction of nodes does not change state from the initial condition). When consensus is reached, the surviving trajectory is stopped (see T = 0.36, 0.49 in Fig. 
<ref>); * Frozen regime (Phase III): This regime corresponds to Phase III and it is characterized by an initial ordering process followed by the stop of the dynamics, with constant values of the metrics. The only exceptions are the mean internal time that grows as τ̅(t) ∼ t (see T = 0.86 in Fig. <ref>) and the persistence. Using the numerically integrated solutions of AME (x^±_k,m,j(t)), we can compute the magnetization m(t), the interface density ρ(t), and the mean internal time τ̅: ρ(t) = 2 ∑_j ∑_k p_k ∑_m m x^+_k,m,j/∑_j ∑_k p_k ∑_m k (x^+_k,m,j + x^-_k,m,j), m(t) = 2 ∑_j ∑_k p_k ∑_m x^+_k,m,j - 1 τ̅ (t) = ∑_j ∑_k p_k ∑_m j (x^+_k,m,j + x^-_k,m,j). All metrics exhibit a strong agreement between the numerical simulations and the integrated solutions (see solid lines in Fig. <ref>). However, the persistence cannot be directly calculated from the integrated solutions. This is because the fraction of persistent nodes at time t corresponds to the fraction of nodes with internal time j = t, which is at an extreme of the age distribution at each time step, since x^±_k,m,j(t) = 0 for j > t. Therefore, the computation of this measure requires a more sophisticated analysis using extreme value theory <cit.>. We note that the dynamical characterization discussed above holds for all possible m_0 except for the symmetric initial condition m_0 = 0. In this case, an order-disorder transition arises at a critical mean degree k_c, whose value depends on the size of the system N <cit.>. § SYMMETRICAL THRESHOLD MODEL WITH AGING Aging refers to the property of agents becoming less likely to change their state the longer they have remained in that state <cit.>. In contrast to the original model, which assumes that agents update their state at a constant rate, this model introduces an activation probability p_A (j) that is inversely proportional to the agent's internal time j. At each time step, the following two steps are performed: * A node i with age j is selected at random and activated with probability p_A(j); * If the fraction of neighbors in a different state is greater than the threshold T, the activated node changes its state from s_i to -s_i and resets its internal time to j=0. We set the activation probability to p_A(j) = 1/(j+2) with the aim of recovering a fat-tailed inter-event time distribution, as observed in simple contagion models <cit.>. §.§ Mean-field Figure <ref> compares the evolution of the average magnetization and mean internal times on a complete graph of the original Symmetrical Threshold model and the version with aging in phases I, II and III. We observe that, for all considered threshold values, aging introduces a delay. However, the final stationary state coincides with the one observed for the original model. To explain these dynamics, we use a heterogeneous mean-field approach that considers the effects of aging (HMFA), as in Ref. <cit.> for other binary-state models (we use a general HMF description to be applied for a complete graph and to random networks in next section). In this case, the AME does not work well due to the high density of the network. For a general network with degree distribution p_k, we define the fraction of agents in state ± 1 with k neighbors and age j at time t as x^±_k,j (t). The probability of finding a neighbor in state ± 1 is x̃^±, which can be written as x̃^± = ∑_k p_k k/⟨ k ⟩ ∑_j=0^∞ x^±_k,j, where ⟨ k ⟩ is the mean degree of the network. 
The transition rate ω_k,j^± for a node with state ± 1, degree k and age j to change state is given by ω_k,j^± = p_A (j) ∑_m=0^kθ(m/k - T) B_k,m[x̃^∓], where B_k,m[x] is the binomial distribution with k attempts, m successes, and with the probability of success x. In our model, there are two possible events for a node with degree k and age j: * It changes state and the age is reset to j = 0; * It remains at its state and the age increases by one time step j = j + 1. According to these possible events, we can write the rate equations for the variables x^±_k,j and x^±_k,0 as dx^±_k,0/dt = ∑_j=0^∞ x^∓_k,j ω_k,j^∓ - x^±_k,0 , dx^±_k,j/dt = x^±_k,j-1 ( 1 - ω_k,j-1^±) - x^±_k,j j > 0. It can be shown from Eq. (<ref>) that the stationary solution for the fraction of agents in state +1, x_f, obeys the following implicit equation for a complete graph (see Appendix <ref> for a detailed explanation): x_f = F(1 - x_f)/F(x_f) + F(1-x_f), where, F(x) = 1 + ∑_j=1^∞∏_a=0^j-1( 1 - p_A(a) ∑_m = (N-1)T^N-1 B_N-1,m[x] ). A solution of Eq. (<ref>) can be obtained numerically using standard methods, as in Ref. <cit.>. The final magnetization is calculated as m_f = 2 x_f - 1. With this method, we obtain that the phase diagram for the model with aging is the same as for the original model (refer to Fig. <ref>a). As a technical point, we note that a truncation of the summation of the variable j in Eq. (<ref>) is required for the numerical resolution of the implicit equation. The higher the maximum age considered j_ max, the higher the accuracy. With j_ max = 5 · 10^4, the transition lines predicted by this mean-field approach show great accuracy. Moreover, by numerically integrating Eqs. (<ref>), the dynamical evolution of the magnetization and mean internal time can be obtained. Fig. <ref> shows the agreement between integrated solutions and Monte Carlo simulations of the system both for the aging and non-aging versions. It should be noted that, while aging introduces only a dynamical delay for the magnetization m(t), the mean internal time τ̅(t) in Phase I shows a different dynamical behavior with aging than in the original model. In this phase, due to the low value of T, the agents selected randomly will change their state (as they fulfill the threshold condition) and reset their internal time. Consequently, while the internal time fluctuates around a stationary value for the original model, when aging is incorporated, due to the activation probability p_A(j) chosen, the mean internal time increases following a recursive relation (Eq. (<ref>)). We refer to Appendix <ref> for a derivation of this result. §.§ Random networks In contrast to the results obtained in a complete graph, aging effects have a significant impact on the phase diagram of the model on random networks. In Fig. <ref>, we show the borders of Phase II (first and last value of T where the system reaches the absorbing ordered state for each m_0) obtained from Monte Carlo simulations running up to a maximum time t_ max (dotted colored lines). Reaching the stationary state in this model requires a large number of steps and it has a high computational cost. The two borders of Phase II exhibit different behavior as we increase the time cutoff t_ max: while the ordered-frozen border does not change with different t_ max, the mixed-ordered border is shifted to lower values of T as we increase the time cutoff t_ max. 
Our results suggest that Phase I is actually replaced in a good part of the phase diagram by an ordered phase in which the absorbing state m_f = ± 1 is reached after a large number of time steps. The ordered-frozen border is now slightly shifted to lower values of the threshold T due to aging. Similar results are found for a RR graph (see Appendix <ref>). The dependence of the results with t_ max calls for a characterization of different phases in terms of dynamical properties rather than by the asymptotic value of the magnetization. Figure <ref> shows the time evolution of our ordering metrics. The dynamical properties are largely affected by the aging mechanism. In terms of the evolution, we find the following regimes: * Initial mixing regime (Phase I^*): It is characterized by two dynamical transient regimes: a fast initial disordering dynamics followed by a slow ordering process. After the initial fast disordering stage, the average interface density exhibits a very slow (logarithmic-like) decay. Later, due to the finite size of the system the average interface density follows a power law decay with time, where ρ(t) scales as t^-1. This phase exists for the same domain of parameters (m_0, T) as Phase I (orange region in Fig. <ref>) in the model without aging (see T = 0.12, 0.24 in Fig. <ref>); * Ordered regime (Phase II): It is characterized by a power-law interface decay, where ρ(t) scales as t^-1. The magnetization tends to the ordered absorbing state according to the initial majority (see T = 0.36, 0.49 in Fig. <ref>); * Frozen regime (Phase III): It is characterized by an initial tendency towards the majority consensus, but very fast reaches an absorbing frozen configuration (see T = 0.86 in Fig. <ref>). The main effect of aging is that the mixed states of Phase I are no longer present, at least not for the type of networks that we are analyzing here. We will show later that Phase I reemerges in denser graphs. Instead, for sparse graphs, we observe a new Phase I^* in which the system initially disorders and later orders until reaching the absorbing states m_f = ± 1. This behavior is shown in Fig. <ref> for T = 0.12 and 0.26. For T = 0.12, the system initially disorders, and then the interface density follows a logarithmic-like decay (see inset in Fig. <ref>a). Due to the slow decay, the system stays in this transient regime even after 10^6 time steps, and the fall to the absorbing states is not seen. Similarly, for T = 0.26 the disordering process stops and then the system gradually evolves towards a fully ordered state. For this value of T, the logarithmic-like decay is not appreciated and we just observe the power-law decay due to the finite size of the system. The difference between T = 0.12 and T = 0.26 comes from the fact that in this Phase I^*, the decay of ρ becomes faster as we increase the threshold T (see inset in Fig. <ref>). Notice the different interface decay in Fig. <ref> between values of T < 0.3 (Phase I^*), where all trajectories show a logarithmic-like decay of ρ(t) in a transient regime, and T ≥ 0.3 (Phase II), where trajectories from the initial condition exhibit ordering dynamics towards the consensus of the majority. Moreover, we observe that in Phase I^*, the initial magnetization m_0 introduces a bias to the stochastic process, implying that the larger m_0 in absolute value, the larger the number of realizations that reach the absorbing state with the same sign of m_0. 
However, the system can still reach the absorbing state of the opposite sign of m_0 (initial minority), as shown in the trajectories with T = 0.25 in Fig. <ref>. In Phase II, the system asymptotically orders for any initial condition as in the original model, but the dynamical properties are modified due to the presence of aging: the exponential decay of the interface density is replaced by a slow power-law decay, where the exponents of the exponential and the power-law are found to be the same. Contrary, the dynamical properties of Phase III are not affected by the presence of aging. The temporal magnitudes analysis (mean internal time and persistence) can be found in Appendix <ref>. To account for the results of our Monte Carlo simulations, we use the same mathematical framework as described in Equation (<ref>). According to the update rules of the Symmetrical Threshold Model with aging, the transition probabilities now depend on the age j, as given by the activation probability p_A (j): T^+_k,m,j = p_A(j) θ(m/k - T) , T^-_k,m,j = p_A(j) θ((k-m)/k - T) , A^±_k,m,j = 1 - T^±_k,m,j. We show in Figure <ref> the mixed-ordered and ordered-frozen transition lines predicted by the integration of the AME equations until a time cutoff t_ max. We find good agreement between the theoretical predictions and the simulations both for ER and RR networks (see RR results in Appendix <ref>). Regarding dynamical properties, the AME integrated solutions exhibit a remarkable concordance with the evolution of all the metrics as shown in Figure <ref>. Minor discrepancies between the numerical simulations and the integrated solutions can be attributed to the assumption of an infinitely sized system in the AME. The numerical results discussed so far are for random networks with average degree ⟨ k ⟩ = 8. According to them and to the analytical insights, one can conclude that aging significantly changes the phase diagram for sparse networks. However, we know that the mean-field (fully connected) model with aging shows the same phase diagram as the model without aging. This implies that, for ER graphs, as the mean degree ⟨ k ⟩ approaches N, Phase I^* must disappear. Therefore, the combined effects of increasing the mean degree and introducing aging need to be investigated in more detail. Phase II is distinguishable from phases I and I^* because the system initially orders, i.e., |ρ_0 - ρ_ max| = 0, where ρ_ max is the maximum value attained by the interface density during the dynamical evolution. In contrast, Phase I is distinguished from Phases I^* and II because the system remains disordered, i.e., |ρ_ max - ρ(t_ max)| ≈ 0. Thus, Phase I^* is the only phase among these three where |ρ_0 - ρ_ max| > 0 and |ρ_ max - ρ(t_ max)| > 0. Using this criterion, we studied the dependence of the critical threshold T_c on the mean network degree defining the transition lines between phases I, I^*, and II (see Fig. <ref>). In the absence of aging, the red line in Fig. <ref> gives the value of the mixed-ordered transition line T_c. When aging is included, at low degree values, Phase I is replaced by I^*, as expected. However, as the mean degree increases, Phase I emerges despite the presence of aging, leading to the coexistence of phases I and I^* in the same phase diagram over a range of mean degree values. As the mean degree is further increased, a critical value is reached where Phase I^* is no longer present, and the discontinuous transition I-II occurs at the same value than in the model without aging. 
Importantly, this critical mean degree at which Phase I^* disappears, depends significantly on the initial magnetization m_0. § SYMMETRICAL THRESHOLD MODEL IN A MOORE LATTICE We consider next the Symmetrical Threshold model in a Moore lattice, which is a regular 2-dimensional lattice with interactions among nearest and next-nearest neighbors (k=8). From numerical simulations, we obtain a phase diagram (Fig. <ref>a) that is consistent with our previous results in random networks. The system undergoes a mixed-ordered transition at a threshold value T_c = 2/8 which is independent of the value of the initial magnetization m_0. When T > 4/8, the system undergoes an ordered-frozen transition at a critical threshold T_c^*, which depends on m_0 (similarly to what happens in random networks). The final magnetization m_f(m_0) (Fig. <ref>b) also shows a dependence on m_0 similar to the one found in RR networks (Fig. <ref>c). §.§ Original model without aging Fig. <ref> shows the results from numerical simulations (for m_0 = 0 and 0.5) for the average interface density, the magnetization, and the persistence (the internal time shows the same results as in random graphs). Dynamical properties change significantly for different values of the threshold and initial magnetization m_0. Similarly to the case of random networks, we find three different regimes corresponding to the three phases, but with some properties different from the results on random networks: * Mixed regime (Phase I): It is characterized by fast disordering dynamics with a persistence decay p(t) ∼exp(- ln(t)^2). The interface density and the magnetization exhibit fast dynamics towards their asymptotic values in the dynamically active stationary state (see T = 1/8,2/8 in Fig. <ref>); * Ordered regime (Phase II): It is characterized by an exponential or power-law decay of the interface density, depending on the initial condition. The magnetization tends to the absorbing ordered state (see T = 3/8,4/8 in Fig. <ref>); * Frozen regime (Phase III): It is characterized by an initial ordering process, but the system freezes fast (see T = 5/8 in Fig. <ref>). In particular, in Phase II the persistence and interface density decay as a power law, p(t) ∼ t^-0.22 and ρ(t) ∼ t^-1/2 for m_0 = 0 (as in Refs. <cit.>). For a biased initial condition (m_0 = 0.5), p(t) decays to the initial majority fraction (which corresponds to the state reaching consensus) and ρ(t) follows an exponential decay. Note that, for m_0 = 0, not all trajectories reach the ordered absorbing states (m_f=± 1). There exist other absorbing configurations as, for example, a flat interface configuration for T = 4/8, no agent will be able to change, and the system remains trapped in this state. This result is not observed for m_0 > 0. Contrary, phases I and III show similar dynamics for a symmetrical (m_0 = 0) and biased (m_0 = 0.5) initial conditions. In Phase I, the system shows disordering dynamics with a persistence decay similar to the one exhibited for the Voter model in a lattice <cit.> while in Phase III, the system exhibited freezing dynamics with an initial tendency towards the majority consensus. Due to the lattice structure and high clustering, the mathematical tools used in previous sections for random networks cannot be applied in the case of a regular lattice. In this case, we restrict ourselves to the results of numerical simulations. 
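As an illustration of the type of Monte Carlo simulation used here, the following is a minimal sketch of the Symmetrical Threshold Model (without aging) on a Moore lattice with periodic boundaries. The lattice size, threshold, update scheme (random asynchronous updates), and measurement interval are illustrative assumptions rather than the exact settings used for the figures, and the implementation is unoptimized.

```python
import numpy as np

rng = np.random.default_rng(0)

L, T_thr, m0, t_max = 64, 3/8, 0.5, 200   # illustrative parameters
N = L * L

# initial condition with magnetization close to m0
spins = np.where(rng.random((L, L)) < (1 + m0) / 2, 1, -1)

# offsets of the 8 Moore neighbors (periodic boundaries)
moore = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def local_disagreement(s, i, j):
    """Fraction of the 8 Moore neighbors holding the opposite state."""
    opp = sum(s[(i + di) % L, (j + dj) % L] != s[i, j] for di, dj in moore)
    return opp / 8

def interface_density(s):
    """Fraction of Moore-lattice links joining opposite states (each link counted once)."""
    active, links = 0, 0
    for di, dj in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        shifted = np.roll(np.roll(s, di, axis=0), dj, axis=1)
        active += np.sum(s != shifted)
        links += N
    return active / links

for t in range(t_max):
    for _ in range(N):                       # one Monte Carlo step = N update attempts
        i, j = rng.integers(L), rng.integers(L)
        if local_disagreement(spins, i, j) > T_thr:
            spins[i, j] *= -1                # threshold exceeded: adopt the opposite state
    if t % 20 == 0:
        print(t, spins.mean(), interface_density(spins))
```

The interface density is computed over the 4N distinct Moore links, and the time series of spins.mean() and interface_density(spins) correspond to m(t) and ρ(t) discussed in the text.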
§.§ The role of aging We show in Figure <ref>a the borders of Phase II obtained from numerical simulations running up to a time t_ max (dotted colored lines). Similarly to the behavior observed in random networks, the mixed-ordered border is shifted to lower values of T as we increase the simulation time cutoff t_ max. Thus, Phase I is replaced by an ordered phase due to the aging mechanism. Examining the dependence of the final value of the magnetization on its initial condition m_f(m_0) (Figure <ref>b), one can conclude that the mixed phase is still present, at least transiently, as in the initially disordering phase described in the previous section (Phase I^*). Phase II is again characterized by an asymptotically ordered state where the initial majority reaches consensus. However, for this specific structure, near m_0 = 0 and T = 1/2, the ordered state is not reached for any threshold value. Furthermore, comparing Fig. <ref>b with the results from the model without aging (Fig. <ref>b), the discontinuous jump at m_0 = 0 for T = 3/8, 4/8 is replaced by a continuous transition, where a range of states with 0 < |m_f| < 1 is present around m_0 = 0. To determine whether these states belong to Phase I^*, II or III, we again need a characterization of phases in terms of dynamical properties. According to the results in Figure <ref>, we find here the same regimes identified for random networks: * Initial mixing regime (Phase I^*): After the initial disordering stage, the average interface density shows a very slow decay reflecting the slow growth of spatial domains in each binary state. The persistence in this phase shows a power-law decay p(t) ∼ t^-1 (see T = 1/8,2/8 in Fig. <ref>); * Ordered regime (Phase II): It is characterized by coarsening dynamics that end in the absorbing states m_f = ± 1. The form of the decay of the interface density depends on the value of m_0 (see T = 3/8,4/8 in Fig. <ref>); * Frozen regime (Phase III): It is characterized by an initial tendency to order, but the system quickly reaches an absorbing frozen configuration (see T = 5/8,7/8 in Fig. <ref>). The implications of aging become explicit by comparing the dynamical properties of the cases with aging (Figure <ref>) and without aging (Figure <ref>). When the threshold is T<3/8, Phase I is replaced by Phase I^*, in which a very fast initial disordering process is followed by a slow coarsening process that accelerates as we increase the threshold. Although the implications of aging in this phase are similar to those observed in the ER graph, the coarsening process is slower (see insets in Fig. <ref>a). In Phase II (T=3/8, 4/8) and for m_0=0.5, the system exhibits coarsening towards the ordered state m_f=± 1. In this case, the exponential decay ρ∼exp(-α t) observed in the absence of aging is replaced, due to aging, by a power law ρ∼ t^-α as noted in Ref. <cit.>. We find α=0.5 and 0.8 for T=3/8 and 4/8, respectively. For m_0=0, the power-law decay of the interface density vanishes with aging, and the system exhibits coarsening dynamics much slower than for an unbalanced initial condition. In this region of the phase diagram, spatial clusters start to grow from the initial condition, but once formed, it takes a long time for the system to reach the absorbing state m_f = ± 1. We note that for these parameter values, the system is not able to reach |m| above 0.1 even after 10^6 time steps, but since there is coarsening from the initial condition, the expected stationary state as t →∞ is m_f=±1.
Since there is neither initial disordering nor freezing, these values correspond to the defined Phase II, even though the system exhibits “long-lived segregation” during a long transient (see the difference with the dynamics of the model without aging in Fig. <ref>). In Fig. <ref>a, we differentiate Phase II from Phase III by analyzing the activity in the system: if agents keep changing state, even though the interface decay is slow, the system is in Phase II, whereas if agents are frozen, it lies in Phase III. When comparing the ordered-frozen critical line to the one from the original model (purple line), we notice that aging causes certain values (m_0, T) that were previously in Phase II near the critical line to enter the frozen phase. Finally, it should be noted that in Phase I^*, the initial disordering dynamics drive the system towards m=0. Therefore, the subsequent coarsening dynamics follow the slow interface decay observed in Phase II for m_0 ∼ 0. Thus, the presence of aging implies that the system asymptotically orders for any initial condition, but due to the initial disordering, the coarsening dynamics fall into the “long-lived segregation” regime independently of the initial condition. § SUMMARY AND CONCLUSIONS In this work, we have studied the Symmetrical Threshold Model with Monte Carlo numerical simulations and analytical calculations. In this model, the agents, nodes of a contact network, can be in one of the two symmetric states ± 1. The system dynamics follows a complex contagion process in which a node changes state when the fraction of neighboring nodes in the opposite state is above a given threshold T. For T=1/2, the model reduces to a majority rule or the zero temperature Spin Flip Kinetic Ising Model. When the change of state is only possible in one direction, say from 1 to -1, it reduces to the Granovetter-Watts Threshold model <cit.>. We have considered the cases of a fully connected network, Erdős-Rényi, and random regular networks, as well as a regular two-dimensional Moore lattice. We have found that, in the parameter space of threshold T and initial magnetization m_0, the model exhibits three distinct phases, namely Phase I or mixed, Phase II or ordered, and Phase III or frozen. The existence of these three phases is robust for different network structures. These phases are well characterized by the final state (m_f), and by dynamical properties such as the interface density ρ(t), the time-dependent average magnetization m(t), the persistence p(t), and the mean internal time τ̅(t). These phases can be obtained analytically in the mean-field case of a fully connected network. For the random networks considered, we derive an approximate master equation (AME) <cit.> that classifies agents in each state according to their degree k, their number m of neighbors in state -1, and their age j. From this AME, we have also derived a heterogeneous mean-field (HMF) approximation. While the AME reproduces with great accuracy the results of Monte Carlo numerical simulations of the model (both static and dynamic), the HMF shows a significant lack of agreement, highlighting the need for high-accuracy methods for threshold models. Aging is incorporated in the model as a probability of changing state that decreases with the time the agent has already spent in that state.
The key finding is that the mixed phase (Phase I), characterized by an asymptotically disordered dynamically active state, does not always exist: the aging mechanism can drive the system to an asymptotic absorbing ordered state, regardless of how low the threshold T is set. A similar effect of aging was already described for the Schelling model in Ref. <cit.>. When the dynamics are examined in detail, a new Phase I^*, defined in terms of dynamical properties, emerges in the domain of parameters where the model without aging displays Phase I. This phase is characterized by an initial disordering regime (m → 0) followed by a slow ordering dynamics, driving the system toward the ordered absorbing states (including the one with spins opposite to the majoritarian initial option). This result is counter-intuitive since aging incorporates memory into the system, yet in this phase, the system “forgets” its initial state. The network structure plays an important role in the emergence of Phase I^* since it does not exist for complete graphs. A detailed analysis reveals that Phase I^* replaces Phase I only for sparse networks, including the case of the Moore lattice. For ER networks we find that, as the mean degree increases, Phase I reappears and there is a range of values of the mean degree for which phases I and I^* coexist. Beyond a critical value of the mean degree, Phase I extends over the entire domain of parameters where Phase I^* was observed. While aging favors reaching an asymptotic absorbing ordered state for low values of T (Phase I), in Phase II the ordering dynamics are slowed down by aging, changing, both in random networks and in the Moore lattice, the exponential decay of the interface density by a power law decay with the same exponent. The aging mechanism is found not to be important in the frozen Phase III. All these effects of aging in the three phases are well reproduced for random networks by the AME derived in this work, which is general for any chosen activation probability p_A (j). For the Moore lattice, we have also considered in detail the special case of the initial condition m_0=0. In this case also Phase I^* emerges and Phase III is robust against aging effects. However, in Phase II aging destroys the characteristic power law decay of the interface density ρ(t) ∼ at^-1/2 associated with curvature reduction of domain walls. This would be a main effect of aging in the dynamics of the phase transition for the zero temperature spin flip Kinetic Ising model <cit.>. As a final remark on the general effects of aging in different models of collective behavior, we note that the replacement of a dynamically active disordered stationary phase by a dynamically ordering phase is generic. In this paper, we find the replacement of Phase I by Phase I^*. Likewise in the Voter model, aging destroys long-lived dynamically active states characterized by a constant value of the average interface density, and it give rise to coarsening dynamics with a power law decay of the average interface density <cit.>. In the same way, in the Schelling segregation model, a dynamically active mixed phase is replaced, due to the aging effect, by an ordering phase with segregation in two main clusters. Another aging effect that seems generic, in phases in which the system orders when there is no aging, is the replacement of dynamical exponential laws by power laws. 
This is what happens here in Phase II for the decay of the average interface density but, likewise, exponential cascades in the Granovetter-Watts model are replaced due to aging by a power-law growth with the same exponent <cit.>. Further work with the general AME used in this work would include a new approach considering the master equation, as in Ref. <cit.>, in order to incorporate finite size effects (relevant close to m_0 = 0) and to give a mathematical framework to describe the results in Ref. <cit.>. Financial support has been received from the Agencia Estatal de Investigación (AEI, MCI, Spain) MCIN/AEI/10.13039/501100011033 and Fondo Europeo de Desarrollo Regional (FEDER, UE) under Project APASOS (PID2021-122256NB-C21 and PID2021-122256NB-C22) and the María de Maeztu Program for units of Excellence, grant CEX2021-001164-M.) § HETEROGENEOUS MEAN-FIELD (HMF) When the transition and aging probabilities do not depend on j, T^±_k,m,j = T^±_k,m and A^±_k,m,j = A^±_k,m, if we are not interested in the solutions x^±_k,m,j (t) and we just want the final magnetization, Eq. <ref> is reduced to Gleeson's AME <cit.> by summing variable j. This is a system of (k_ max+1)(k_ max+1) differential equations without loss of accuracy. Moreover, following the steps in Ref. <cit.>, we perform a heterogeneous mean-field approximation (HMF) to reduce our system to k_ max+1 differential equations: d/d t x^-_k= - x^-_k∑_m=0^k T^-_k, m B_k, m[ω] +(1-x^-_k) ∑_m=0^k T^+_k, m B_k, m[ω], where x^-_k = ∑_j∑_m^k x^-_k,m,j and ω= ∑_k p_k k/z x^-_k. This system of differential equations, coupled via ω, cannot be solved analytically. Solving numerically with standard methods, HMF predicts a mixed-ordered transition line that qualitatively captures the critical line dependence but quantitatively differs from the numerical simulations (see the red dashed line in Figs. <ref>a and <ref>b and the dotted colored lines in Fig. <ref>c). Moreover, this approximation does not predict a frozen phase in any of the networks considered. Instead, for high values of T, the integrated stationary solutions are always m_f = ± 1, regardless of m_0. From this analysis, we conclude that we need sophisticated methods beyond an HMF description to describe the Symmetrical Threshold model's phase diagram, as occurs for the asymmetrical Granovetter-Watts' Threshold model (see Ref. <cit.>). § DERIVATION OF THE STATIONARY SOLUTION VIA THE HETEROGENEOUS MEAN-FIELD CONSIDERING THE AGE (HMFA) Setting the time derivatives to 0 in Eqs. (<ref>), we obtain the relations for the stationary state: x^±_k,0 = ∑_j=0^∞ x^∓_k,j ω_k,j^∓ x^±_k,j = x^±_k,j-1 ( 1 - ω_k,j-1^±) j > 0, from where we extract the necessary condition x^-_k,0 = x^+_k,0, as in Ref. <cit.>. Notice that by setting p_A(j) = 1 and summing all ages j, we recover the HMF approximation (Eq. <ref>) for the model without aging. Defining x^±_j(t) as the fraction of agents in state ± 1 with age j: x^±_j = ∑_k p_k x^±_k,j, and placing the degree distribution of a complete graph p_k = δ(k-N+1) (where δ(·) is the Dirac delta), we sum variable k and rewrite Eq. (<ref>) in terms of x^±_j: x^±_0 = ∑_j=0^∞ x^∓_j ω_j^∓, x^±_j = x^±_j-1 ( 1 - ω_j-1^±) j > 0, where ω_j^±≡ω_N-1,j^±. We compute the solution x^±_j recursively as a function of x^±_0: x^±_j = x^±_0 F_j^± where F_j^± = ∏_a = 0^j-1 (1 - ω_a^±), and summing all j, x^± = x^±_0 F^± where F^± = 1 + ∑_j=1^∞ F_j^±. Using the stationary condition x^-_0 = x^+_0, we reach: x^+/x^- = F^+/F^-. Notice that, for the complete graph, x̃^+ = x, x̃^- = 1 - x. 
Therefore, F^± is a function of the variable x^∓ (F^+ = F(1 - x)). Thus, we rewrite the previous expression just in terms of the variable x: x/(1 - x) = F(1 - x)/F(x). § INTERNAL TIME RECURSIVE RELATION IN PHASE I/I^* In Phases I and I^*, the threshold condition (m/k > T) is fulfilled for almost all agents in the system. Thus, agents change their state and reset the internal time once activated. For the original model, all agents are activated once per time step on average, but for the model with aging, the activation probability plays an important role. We consider here a set of N agents that are activated randomly with an activation probability p_A(j) and, once activated, reset their internal time. Let n_i(t) be the fraction of agents with internal time i at time step t. The dynamics described above obey the recursive relation n_1(t) = ∑_i=1^t-1 p_A(i) n_i(t-1), n_i(t) = (1 - p_A(i-1) ) n_i-1(t-1) for i > 1. This recursion can be solved numerically from the initial condition (n_1(0) = 1, n_i(0) = 0 for i > 1). To obtain the mean internal time at time t, we just need to compute τ̅(t) = ∑_i=1^t i n_i(t). The solution of this recursive relation describes the mean internal time dynamics in excellent agreement with the numerical simulations performed in Phase I (for the complete graph) and Phase I^* (for the Erdős-Rényi and Moore lattices). § SYMMETRICAL THRESHOLD MODEL WITH AGING IN RANDOM-REGULAR GRAPHS Fig. <ref> shows the borders of Phase II (the first and last values of T for which the system reaches the absorbing ordered state for each m_0) obtained from Monte Carlo simulations running up to a maximum time t_ max (dotted colored lines) for a RR graph. Reaching the stationary state in this model requires a large number of steps and has a high computational cost. The two borders of Phase II exhibit different behavior as we increase the maximum number of time steps t_ max: while the ordered-frozen border does not change with different t_ max, the mixed-ordered border is shifted to lower values of T as we increase the simulation time cutoff t_ max. As for the ER graphs (Fig. <ref>), our results suggest that Phase I is actually replaced in a good part of the phase diagram by an ordered phase in which the absorbing state m_f = ± 1 is reached after a large number of time steps. The ordered-frozen border is now slightly shifted to lower values of the threshold T due to aging. Figure <ref>b shows the average magnetization on RR graphs with simulations running up to a time t_ max = 10^4. Upon comparison with Figure <ref>c, the dependence on m_0 is quite similar, indicating the persistence of a transient mixed phase. This calls for a characterization of the different phases in terms of dynamical properties and not only by the asymptotic value of the magnetization. Regarding the AME integrated solutions, Figure <ref> shows the mixed-ordered and ordered-frozen transition lines predicted by the integration of the AME equations until a time cutoff t_ max, which show good agreement with the numerical simulations. Figure <ref>b also shows the predicted dependence of m_f(m_0) for the RR graph. For comparison purposes, the numerical integration is computed until the highest t_ max used in the Monte Carlo simulations. In addition, we apply the previously introduced HMFA to these random networks by numerically integrating Eqs. (<ref>).
The results, displayed as dotted colored lines in Figure <ref>b, show similarity to the AME solution for T < 0.5. Nevertheless, as it occurred for the HMF in the original model, this mathematical framework is not able to describe the frozen phase. § TEMPORAL DYNAMICS IN THE SYMMETRICAL THRESHOLD MODEL WITH AGING Fig. <ref> shows the evolution of the temporal dynamics via the mean internal time and the persistence. The persistence in Phase I^* shows a power-law decay, where p(t) scales as t^-1, and the internal time shows an increase following the recursive relation given in Equation (<ref>), as it occurred for the mean-field scenario (Fig. <ref>). On the other hand, in Phase II, the persistence decays from 1 to the fraction of nodes of the initial majority (the one that does not change state and reaches consensus) and the mean internal time scales linearly with time, τ̅(t) ∼ t. For the internal time, the AME integrated solutions exhibit a remarkable concordance with the numerical simulations. Minor discrepancies between the numerical simulations and the integrated solutions can be attributed to the assumption of an infinitely sized system in the AME. As it occurred for the model without aging, the persistence cannot be predicted by this framework.
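As a concrete illustration, the recursive relation for the internal-time distribution n_i(t) introduced above can be iterated directly. The sketch below does so for an illustrative activation probability p_A(j) = 1/(1 + j); this kernel is an assumption for demonstration purposes and is not necessarily the one used in this work.

```python
def p_A(j):
    # illustrative aging kernel: activation probability decays with internal time j
    return 1.0 / (1.0 + j)

def mean_internal_time(t_max, p_A=p_A):
    """Iterate n_1(t) = sum_i p_A(i) n_i(t-1), n_i(t) = (1 - p_A(i-1)) n_{i-1}(t-1),
    starting from n_1(0) = 1, and return the mean internal time at each step."""
    n = {1: 1.0}                      # initial condition: all agents have internal time 1
    tau = []
    for _ in range(t_max):
        activated = sum(p_A(i) * ni for i, ni in n.items())      # reset to internal time 1
        new_n = {i + 1: (1.0 - p_A(i)) * ni for i, ni in n.items()}  # age by one unit
        new_n[1] = new_n.get(1, 0.0) + activated
        n = new_n
        tau.append(sum(i * ni for i, ni in n.items()))           # mean internal time
    return tau

print(mean_internal_time(1000)[-1])   # mean internal time after 1000 steps
```

The total fraction is conserved at every step, and the returned series can be compared directly with the simulated mean internal time curves.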
http://arxiv.org/abs/2307.01891v1
20230704193235
Are machine learning technologies ready to be used for humanitarian work and development?
[ "Vedran Sekara", "Márton Karsai", "Esteban Moro", "Dohyung Kim", "Enrique Delamonica", "Manuel Cebrian", "Miguel Luengo-Oroz", "Rebeca Moreno Jiménez", "Manuel Garcia-Herranz" ]
physics.soc-ph
[ "physics.soc-ph", "cs.CY" ]
* UNICEF, New York, USA * IT University of Copenhagen, Denmark * Central European University, Vienna, Austria * Rényi Institute of Mathematics, Budapest, Hungary * Connection Science, Massachusetts Institute of Technology, Cambridge, MA, USA * Department of Mathematics & GISC, Universidad Carlos III de Madrid, Spain * Department of Statistics, Universidad Carlos III de Madrid, Spain * United Nations Global Pulse, New York, USA * UNHCR, Geneva, Switzerland * Correspondence should be addressed to these authors Novel digital data sources and tools like machine learning (ML) and artificial intelligence (AI) have the potential to revolutionize data about development and can contribute to monitoring and mitigating humanitarian problems. The potential of applying novel technologies to solving some of humanity's most pressing issues has garnered interest outside the traditional disciplines studying and working on international development. Today, scientific communities in fields like Computational Social Science, Network Science, Complex Systems, Human Computer Interaction, Machine Learning, and the broader AI field are increasingly starting to pay attention to these pressing issues. However, are sophisticated data-driven tools ready to be used for solving real-world problems with imperfect data and of staggering complexity? We outline the current state-of-the-art and identify barriers that need to be surmounted in order for data-driven technologies to become useful in humanitarian and development contexts. We argue that, without organized and purposeful efforts, these new technologies risk at best falling short of promised goals; at worst, they can increase inequality, amplify discrimination, and infringe upon human rights. § NEW TOOLS AND DATASETS Data is critical for humanitarian and development work. Accurate and updated estimates of population demographics are vital in order to understand and respond to social and economic inequalities<cit.>, and to move from reactive to proactive interventions that mitigate the impact of crises before they happen<cit.>. As such, whether using global estimates of poverty to advocate for efforts or design policies to eliminate it, or using malnutrition data in the midst of a conflict to allocate resources to where they are most needed, data is at the core of the organizations that work to meet the 2030 Agenda for Sustainable Development<cit.>. It can, however, be hard to obtain accurate and timely data. In many parts of the world, traditional household surveys are the main, and often only, method for demographic data collection. Surveys provide rich and irreplaceable data, but they can be expensive and time-consuming. As such, there is a growing focus on leveraging different big digital datasets and new tools like AI and ML to complement household surveys. This is particularly important in rapidly changing contexts (e.g. humanitarian crises or pandemics) as data and information can be retrieved and analyzed in fast and relatively inexpensive ways. Unfortunately, there are no clear definitions of AI. In general terms, these tools refer to systems that sift through data, recognize patterns, and possibly make decisions based on their discoveries. This covers the full spectrum of models, from relatively simple statistical models (e.g.
linear regression and decision trees) to more sophisticated, but still explainable mathematical models, to black-box like neural network and deep learning approaches. Technologies, like mobile phones, are starting to have significant global coverage. Today there are 107 mobile-cellular subscriptions per 100 inhabitants worldwide<cit.> (see Fig. 1A), 95% of the world's population is covered by at least a 2G connection<cit.>, and mobile broadband adoption has grown 14-fold from 5 in 2008 to almost 70 subscriptions per 100 inhabitants in 2018<cit.>[Nonetheless, this does not mean phones and access to broadband are spread evenly across, and within, countries. Often new technologies reproduce and perpetuate existing inequalities.]. As a consequence, mobile phones and the vast amount of data they produce (incl. social media data) can potentially be applied to tackle humanitarian problems in places, which previously were deemed hard to reach. In addition, high-resolution satellite images are becoming more readily available with commercial vendors delivering 30cm resolution imagery, while the European Space Agency and NASA (National Aeronautics and Space Administration) are open-sourcing a wide range of remote sensed datasets (see Fig. 1B). These images analyzed with the support of AI tools can aid humanitarian response efforts, from mapping refugee shelters<cit.> to quantifying the extent of flooded areas<cit.>. The private sector has also started to play a larger role as data providers, and looking to develop new methodologies and business models that can help tackle societal problems under the umbrella of data for public good<cit.>. Examples of data collaborations between the private sector and academia include the Data for Development (D4D)<cit.> and Data for Refugees (D4R) challenges<cit.>, where telephone companies shared anonymized and aggregated call detail records (CDR) with the goal of contributing to socio-economic development and well-being for the most marginalized populations. Similarly, GSMA, the industry organization that represents global mobile communications companies, through their Mobile for Humanitarian Innovation initiative has opened up for both funding and data access<cit.>. Nonetheless, these initiatives have also raised serious privacy concerns<cit.>. During the COVID-19 pandemic, many other companies opened up their data, examples include Apple, Cuebiq, Facebook, and Google[Ordered alphabetically, Apple's Mobility Trends Reports <https://covid19.apple.com/mobility>, Cuebiq's Data for Good initative <https://www.cuebiq.com/about/data-for-good/>, Facebook's Data for Good platform <https://dataforgood.fb.com/>, and Google's COVID-19 Community Mobility Reports <https://www.google.com/covid19/mobility/>]. In alignment with this, UN agencies have started to build capacity to collaborate more meaningfully with the private sector on a data level<cit.>. These, and similar efforts, have resulted in a plethora of scientific studies (see Fig. 1C) , which focus on combining novel digital data sources (including mobile phone and social media data) with powerful tools from computer science, mathematics, and physics to estimate developmental indicators ranging from: socio-economic status<cit.>, illiteracy<cit.>, unemployment<cit.>, gender inequality<cit.>, segregation<cit.>, and population statistics<cit.>. Similarly, these datasets and mathematical techniques can be used to achieve the Sustainable Development Goals <cit.>. 
For instance, data from search engines has been used to understand the determinants of suicides<cit.> and chronic health conditions<cit.>, and AI analyses of satellite images have been applied for similar endeavours, from estimating monetary poverty<cit.>, to measuring crop type<cit.>, and crop productivity<cit.>. Taken together, these novel data sources and tools have the potential to transform international development and humanitarian work. However, as we argue below they are not yet ready to be rolled out on a large scale. § BARRIERS TO USEFUL TOOLS Data does not translate easily into knowledge, it requires careful collection, curation, and aggregation to become informative. When it comes to AI and ML, there are currently many open issues and challenges. For instance, the carbon footprint of AI is large, in some cases training one model can emit as much CO_2 as 57 average humans emit in a year<cit.>. Access to the computational resources needed to power AI and ML technologies is not equally distributed, which can lead to power being concentrated in the hands of a few countries and companies<cit.>. The use of new AI technologies, such as generative AI threatens to undermine our societies and erode trust in democracies. Further, when it comes to the application of AI and ML technologies no globally accepted set of AI ethics exist<cit.>, and many have argued that ethical principles alone will not guarantee ethical applications <cit.>. The issues are many, we have identified three which we believe form the biggest obstacles for using AI and ML technologies in humanitarian and development contexts. We focus on these, not because the others are less important, but because we believe these are often overlooked, and need to be addressed in order for data-driven technologies to become useful in humanitarian and development contexts. 1. The ecosystem of data for development is not yet machine friendly. Building any kind of ML or AI model requires access to high-quality data. Yet getting access to such datasets can be a hard and laborious process. Some global survey data are accessible through platforms such as the MICS[<https://mics.unicef.org/surveys/>] (Multiple Indicator Cluster Survey) and DHS[<https://dhsprogram.com/data/>] (Demographic and Health Survey) sites. However, these datasets are collected, curated, organized, and maintained for the purpose of informing decision makers, not for training algorithms. In addition, the vast majority of these datasets are spread across a multitude of national, non-governmental, and international organizations, where data are often locked up in non-machine-readable or proprietary formats and subject to complex or opaque licensing and use regimes that make them difficult to use for ML or automated processes. Another obstacle arises from non-standardized metrics, where it is entirely possible that two surveys (even within the same country) use different definitions of the same indicator. For instance, poverty is in certain cases measured by the difference in income or consumption to the average level, regardless of whether this level is sufficient to maintain a decent standard of living, while in other cases poverty is based on being able to afford a minimum amount of various food types and other necessities like shelter. Different definitions exist because poverty is a complex multifaceted problem, but this breaks with the current thinking in the computational field and makes it hard to benchmark and compare AI models over time and across countries. 
Additional issues arise from insufficient or missing geographic information. Most surveys are georeferenced, but that does not necessarily mean that each individual data-point is labeled with a GPS coordinate. This is a reasonable choice if data is only used for decision making, but a severe barrier for using data to train statistical algorithms. Rather, data are stratified into clusters according to various administrative boundaries such as municipalities, health zones, or census tracts. For instance, developmental estimates are often reported on a regional level<cit.>. Yet, only a minority of surveys are accompanied by a so-called shapefile, which contains information about the geographical boundaries of those administrative regions. This is an issue as administrative boundaries are not fixed in time, and many times across data collection efforts, they are often redrawn in response to changing populations, conflicts, contested borders, etc. As such, researchers and practitioners are often left on their own to infer which shapefiles were originally used, or to re-create their own files from old maps. Lacking information on where development indicators were originally collected makes it challenging to link survey data to insights from new digital data sources. All things considered, by not making data for development ready to be utilized by new ML and AI tools, we risk impeding the application of these new methodologies towards addressing complex societal issues. For instance, a lack of machine friendly data might divert the focus of scientific studies to regions, groups, or issues for which machine readable data already exists. Coordinated efforts to develop standardized open data formats and repositories that contain machine readable development data could save practitioners and researchers countless hours, which instead could be spent on addressing the problems of marginalized communities. Unfortunately, the AI and ML communities do not yet have standardized processes in place for documenting datasets, but the recently proposed datasheets for datasets is a good place to start<cit.>, along with adapting already existing standards that have been developed by national statistics institutes. Platforms like the Humanitarian Data Exchange<cit.>, built by the United Nations Office for the Coordination of Humanitarian Affairs, are a good first effort for opening data about humanitarian problems, but more efforts are needed if these platforms are to become useful for AI practitioners. 2. Lack of validation, transferability, and generalization of machine models. Using digital data to produce population demographics is a relatively new endeavour. While the field has produced some exciting results, for instance, fine grained wealth distribution maps have be generated for a large number of countries<cit.>, little is known about the shortcomings of these new approaches. By contrast, household surveys have been used for decades and their limitations are well understood and documented. With digital data it is unclear whether a model trained on satellite imagery from one period will work on images captured during a different season or if a model based on mobile phone data (or social media data) will work on behavioral traces collected from a different month. Adding to this concern, it is unclear how these new methodologies transfer across a multitude of different countries, cultures, and contexts. Some research groups have tried to replicate and benchmark published methodologies with varying success. 
For example, Fernando et al.<cit.> found that statistical patterns associated with socio-economic characteristics fundamentally differ for western and northern parts of Sri Lanka, Blumenstock<cit.> demonstrated that models trained on mobile phone data from Rwanda cannot be applied to the context of Afghanistan, while Tingzon et al.<cit.> successfully replicated an AI methodology (originally piloted in five African countries) for the Philippines. Our own experiences in transferring algorithms across countries suggest that relationships between behavioral patterns extracted from digital data and development indicators can greatly differ, even between countries within same geographic regions (see Figure 2A). For instance, we find the correlation between the diversity of mobility data (shown to be highly indicative of economic development<cit.>) and the human development index<cit.> to be different between countries, even within the same region (e.g. Costa Rica and Colombia). The issue of whether models will generalize over longer timescales is also of concern<cit.>. As human behavior changes and evolves over time, it is uncertain how robust the inferred statistical relations between demographic variables and digital traces will be. For instance, a decade ago Eagle et al.<cit.> demonstrated that features extracted from landline communications (in combination with mobile phone interactions) were good indicators of economic development. But a decade ago smartphones and online social media were just starting to appear, and the subsequent widespread adoption of these technologies raises the question of whether the same model would work equally well today. All data-driven methods are strongly influenced by technological drift and these effects need to be understood and accounted for if we are to use these technologies for long-term policy making. However, model generalization is not only affected by long-term shifts in technology usage. Our experiences in using aggregate behaviors extracted from mobile phone traces show that ML models can be brittle even on relatively short timescales, with model performance dramatically degrading from one month to the next (see Figure 2B). Regrettably, research centered on this topic is rare in the academic literature, instead the focus has been one-off studies that achieve spectacular results, rather than monitoring and evaluating the robustness of algorithms. 3. We need to look beyond single metrics. While digital technologies might provide advantages to society, automated systems bring learned biases into decision making, leading to new kinds of vulnerabilities<cit.>. Well known examples include gender, age, racial, class, ability, and wealth biases<cit.>. Although researchers have identified some of these vulnerabilities, quantifying others can be difficult due to the complexity and opaqueness of automated algorithmic systems<cit.>. To address these issues it is important that we look beyond single metrics such as averages, R-squares, and correlation coefficients and start to unpack algorithmic impact across an intersection of inequalities<cit.>. For instance, reporting an 80% accuracy of an algorithm is not enough as aggregate metrics can hide a lot of nuance. This is also called the tyranny of averages<cit.>. Instead we need to disaggregate algorithmic performance, for instance, into how well algorithms works for rural vs urban areas, how well they perform for poor vs wealthy regions, or along other delineations, which might be mis- or underrepresented. 
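A minimal sketch of the kind of disaggregated evaluation advocated here is shown below; the group labels and toy data are hypothetical placeholders.

```python
import pandas as pd

# hypothetical evaluation table: one row per instance, with a group label,
# the ground-truth value, and the model prediction
df = pd.DataFrame({
    "group":  ["rural", "urban", "urban", "rural", "urban", "rural"],
    "y_true": [1, 0, 1, 1, 0, 0],
    "y_pred": [0, 0, 1, 0, 0, 1],
})

overall = (df.y_true == df.y_pred).mean()
by_group = (df.assign(correct=df.y_true == df.y_pred)
              .groupby("group")["correct"].mean())

print(f"overall accuracy: {overall:.2f}")
print(by_group)   # per-group accuracies reveal gaps that the single overall number hides
```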
Ultimately, we want to avoid situations where an algorithm may have an overall accuracy of 80%, but only work 10% of the time in poor regions. As such, we need to ensure algorithmic equity—the (un)equal distribution of algorithmic accuracy across different groups—is measured and reported in future studies. For humanitarian and development work the goal is not to build an AI tool that can achieve a perfect prediction, rather it is to gain a deeper understanding of gaps and structural inequalities and how to fix them. In these situations, outliers and incorrect predictions often turn out to be vital discoveries—critical to not leaving anyone behind—which traditional metrics might not identify. It is important to be critical of the data used to develop models and draw conclusions from, as it might suffer from strong observational and selection biases. The excitement of the first years of the Big Data Revolution, where focus was on volume and velocity of datasets, needs to change; instead of being obsessed by the number of rows a dataset contains or how many terabytes it takes up, we need to dedicate far more efforts to understand which demographics are left out. For instance, digital datasets are limited to groups which: own a mobile phone, use social media, have a specific smartphone app installed, or live in a region densely photographed by satellites. Marginalized communities lacking access to new technologies, or refusing to adopt them<cit.>, will be unobservable in these digital dataset. Even when they are present, they will not generate data of the same utility and often be considered outliers<cit.>. In these situations we need to pay attention to what is important, not just what is quantifiable<cit.>. Unless these data inequalities and biases are actively taken into account, large populations will be excluded from future analyses, independent of which metrics are applied to measure algorithmic performance. § THE WAY FORWARD Computational tools from machine learning to network science can bring tremendous potential to the humanitarian and development sectors, however, they also bring a lot of unknowns. For these new methods to be routinely applied in development programs, they need to be put through rigorous evaluation processes. In order to minimize any potential harms, testing and evaluation needs to be done prior to releasing new models. The old Silicon Valley mantra of "moving fast and breaking things" does not work for international development, where decisions directly affect the life and well-being of populations. As such, it is vital to ensure that new methodologies produce replicable, explainable, transparent, and generalizable results, and that potential limitations, biases, and shortcomings are uncovered and documented. This includes improving both scientific and ethical practices within AI and ML<cit.>. Digital data-sources offer the opportunity to discover insights at unprecedented scale and speed. This can lead to a lot of good. For instance, data-poor areas can now effortlessly be mapped. However, it is critical to recognize and scrutinize how digital data are imperfect and potentially biased. For example, in certain contexts mobile phones are predominantly owned and used by men<cit.>. Does this mean that insights derived from such data will mainly represent male demographics? How will deprivations suffered by women and children be represented by models based on this data? 
Furthermore, what will happen once gender equality is achieved, will the earlier trained sophisticated algorithms break down and return incorrect and biased estimates? Only rigorous scientific studies where the representativity of the data is inspected can answer these questions. Studying these systems of multiple interactive components through a complex systems lens could produce novel insights. Standardized data formats and open repositories that contain machine readable development data would empower practitioners and researchers working on addressing these issues. In addition, standardized ways of sharing AI models, that go beyond posting code on GitHub, would benefit future replication, transferability, and robustness studies. This is not an impossible task, but it requires careful planing, auditable standards<cit.>, long-term partnerships, transparent guidelines, in-house AI expertise, and an understanding of the risks and pitfalls of data-driven technologies. For instance, Eurostat, the statistical office of the European Union, is exploring new datasources for official statistics<cit.> by comparing results extracted from big-data to statistical gold-standards, while at the same time being mindful of sampling biases, the volatility of big data, and that a high volume of data does not necessary guarantee high data quality. Overall, the diffusion of data science and AI techniques into the realm of international development constitutes a unique opportunity to bring powerful new techniques to the fight against inequalities and vulnerabilities. However, improving living conditions and creating lasting change can only be accomplished through closer collaborations between academic, private, governmental, and non-governmental actors. Local stakeholders need to be engaged, trust needs to be built, and local data-science ecosystems need to be strengthened to ensure that proposed solutions are adapted to local contexts<cit.>. There are no one-size-fits-all AI solutions, so we need to know what to implement where, and which solutions should never be built<cit.>. The biggest impediment to this is the capacity gap. AI talent is far removed from humanitarian and development organizations, and limited funding is available to increase that capacity. Similarly, humanitarian and social science expertise is often limited, and in some cases entirely missing, from AI hubs. Many of the above problems are a consequence of the distance between these communities, and of the misalignment of their incentives. This gap needs to be bridged from both sides, more resources need to be put towards bringing AI expertise into the humanitarian domain, and vice versa in infusing humanitarian thinking into AI<cit.>. Today, it is clear that we need to devise data-driven methodologies, which are fair and capable of producing actionable insights, while protecting all fundamental human rights<cit.>, including the right to privacy and the right to non-discrimination. The price of innovating should not come at the cost of eroding these rights. 1 § REFERENCES naturemag * M.C. was supported by the Ministry of Universities of the Government of Spain under the program 'Convocatoria de Ayudas para la recualificacion del sistema universitario español para 2021-2023' from the Universidad Carlos III de Madrid, dated July 1, 2021. E.M. would like to thank Alex 'Sandy' Pentland for helpful discussions and comments. E.M. 
acknowledges support by Ministerio de Ciencia e Innovación/Agencia Española de Investigación (MCIN/AEI/10.13039/501100011033) through grant PID2019-106811GB-C32 the National Science Foundation under Grant No. 2218748. Correspondence Correspondence and requests for materials should be addressed to V.S. (email: vedransekara@gmail.com) and M.G-H. (email: mherranz@unicef.org).
http://arxiv.org/abs/2307.02179v1
20230705101507
Open-Source Large Language Models Outperform Crowd Workers and Approach ChatGPT in Text-Annotation Tasks
[ "Meysam Alizadeh", "Maël Kubli", "Zeynab Samei", "Shirin Dehghani", "Juan Diego Bermeo", "Maria Korobeynikova", "Fabrizio Gilardi" ]
cs.CL
[ "cs.CL" ]
This study examines the performance of open-source Large Language Models (LLMs) in text annotation tasks and compares it with proprietary models like ChatGPT and human-based services such as MTurk. While prior research demonstrated the high performance of ChatGPT across numerous NLP tasks, open-source LLMs like HuggingChat and FLAN are gaining attention for their cost-effectiveness, transparency, reproducibility, and superior data protection. We assess these models using both zero-shot and few-shot approaches and different temperature parameters across a range of text annotation tasks. Our findings show that while ChatGPT achieves the best performance in most tasks, open-source LLMs not only outperform MTurk but also demonstrate competitive potential against ChatGPT in specific tasks. § INTRODUCTION Generative Large Language Models (LLMs) such as GPT-3 and GPT-4 have demonstrated substantial potential for text-annotation tasks common to many Natural Language Processing (NLP) applications <cit.>. Recent research reports impressive performance metrics for these models. For instance, studies demonstrate that ChatGPT exceeds the performance of crowd-workers in tasks encompassing relevance, stance, sentiment, topic identification, and frame detection <cit.>, that it outperforms trained annotators in detecting the political party affiliations of Twitter users <cit.>, and that it achieves accuracy scores over 0.6 for tasks such as stance, sentiment, hate speech detection, and bot identification <cit.>. Notably, ChatGPT also demonstrates the ability to correctly classify more than 70% of news as either true or false <cit.>, which suggests that LLMs might potentially be used to assist content moderation processes. While the performance of LLMs for text annotation is promising, there are several aspects that remain unclear and require further research. Among these is the impact of different approaches such as zero-shot versus few-shot learning and settings such as varying temperature parameters. Zero-shot learning allows models to predict for unseen tasks, while few-shot learning uses a small number of examples to generalize to new tasks. The conditions under which one approach outperforms the other are not yet fully understood. Furthermore, the temperature parameter determines the randomness in a model's outputs. Identifying the optimal temperature for different tasks is still a topic of ongoing research. Moreover, the role of open-source LLMs deserves more attention. While models like ChatGPT have democratized the field by offering a more cost-effective alternative to traditionally more expensive annotation methods involving human annotators, open-source LLMs represent a further step towards greater accessibility. Beyond cost, the advantages of open-source LLMs include degrees of transparency and reproducibility that are typically not provided by commercial models. Open-source LLMs can be scrutinized, tailored, and enhanced by a wider user base, fostering a diverse group of contributors and improving the overall quality and fairness of the models. Furthermore, open-source LLMs offer significant data protection benefits.
They are designed not to share data with third parties, enhancing security and confidentiality. For these reasons, the academic community is increasingly advocating for the use of open-source LLMs <cit.>. This transition would not only broaden access to these tools for researchers, but also promote a more open and reproducible research culture. To address these questions, we extend our previous research <cit.> to compare the performance of two widely-used open-source LLMs, HugginChat and FLAN, with that of ChatGPT as well as MTurk, using eleven text annotation tasks distributed across four datasets. Each model is tested using different settings: varied model sizes for FLAN, and distinct temperature parameters in both zero-shot and few-shot approaches for ChatGPT and HuggingChat. We then compare their accuracy, using agreement with trained annotators as a metric, against that of MTurk as well as amongst themselves. While our previous research <cit.> showed that ChatGPT outperforms MTurk in almost all tasks, our new results reveal that open-source LLMs surpass MTurk in the majority of tasks. When considering the top-performing models, open-source LLMs outperform ChatGPT in certain tasks and approach its performance in others, demonstrating their potential. Furthermore, the comparison of models using different temperature settings and zero vs. few-shot prompts shows that, for both ChatGPT and open-source LLMs, there is no particular approach that uniformly maximizes performance. Given these findings, further research is warranted to optimize the use of diverse settings and prompts under different circumstances. Our conclusion is that, even though the performance of open-source LLMs generally remains below that of ChatGPT, they already represent a competitive alternative for many text annotation tasks. § RESULTS The analysis in this paper extends our previous study, which compared ChatGPT's zero-shot annotation performance with that of MTurk <cit.>. We rely on the same datasets (n = 6,183), which include tweets and news articles that we collected and annotated manually for another study on the discourse around content moderation <cit.>, as well as a new sample of tweets posted in 2023 to address the concern that LLMs might be merely reproducing texts that could have been part of their training data. While our previous study focused on ChatGPT, our analysis conducts the same classifications using two open-source LLMs (HugginChat and FLAN), using the same codebook that we originally constructed for our research assistants and which we previously used for ChatGPT and MTurk (see Appendix <ref>). Moreover, in this paper we extend our analysis to include few-shot learning for all models, including ChatGPT. The corresponding prompts are shown in Appendix <ref>. Specifically, for ChatGPT and HuggingChat, we conducted sixteen sets of annotations for each text, specifically two runs for each combination of two temperature levels, zero-shot, and few-shot. For FLAN, we conducted twelve sets of annotations, namely, two runs for three different model sizes, both zero-shot and few-shot (L, XL, XXL). More particularly, to explore the effect of ChatGPT's and HugginChat's temperature parameters, which controls the degree of randomness of the output, we conducted the annotations with default values (1 for ChatGPT and 0.9 for HuggingChat) as well as with a value of 0.2, which implies less randomness. We conducted two sets of annotations for each temperature value to compute LLM's intercoder agreement. 
Finally, for each combination of LLM and parameter setting, we conduct chain of thought (CoT) prompting <cit.>. This few-shot approach involves providing LLMs with question and step-by-step reasoning answer examples. Figure <ref> compares the accuracy of ChatGPT, open-source LLMs, and MTurk, evaluated in terms of agreement with trained annotators. The depicted average accuracies for both ChatGPT and open-source LLMs are accompanied by the minimum and maximum accuracies observed across models employing different settings. ChatGPT parameters entail zero-shot vs. few-shot and temperature values of 0.2 and 1. HuggingChat's settings correspond to those of ChatGPT, while FLAN includes different model sizes ranging from L to XXL. Detailed results for each model, encompassing both accuracy and intercoder agreement, are documented in Appendix <ref>. Figure <ref> shows that ChatGPT outperforms MTurk in ten out of eleven tasks on average, while open-source LLMs exceed MTurk in six out of eleven tasks. However, when we isolate the top-performing models, open-source LLMs outpace MTurk in nine out of eleven tasks. Comparing ChatGPT directly with open-source LLMs, we find that ChatGPT consistently exceeds the performance of LLMs on average. However, when we observe only the top-performing models, open-source LLMs surpass ChatGPT in three out of eleven tasks and fall within a ten percentage point difference in five additional tasks. These findings underscore that while open-source LLMs are not consistently the superior choice, they generally outperform crowd-sourced annotations and are approaching the performance levels of ChatGPT. The relationship between model settings and performance lacks a straightforward pattern, as indicated in Table <ref>. Depending on the dataset and task, the best-performing model within each group can vary. With ChatGPT, any combination of temperature and zero/few shot can lead to top performance. For HuggingChat, lower temperature settings typically result in better performance, though few-shot models do not always outperform zero-shot ones. Lastly, for FLAN, larger models do not consistently outperform smaller ones. (Note that only zero-shot classifications were tested with FLAN.) Therefore, more research is required to understand which particular settings and prompts are more effective under different circumstances. § DISCUSSION This study demonstrates that open-source LLMs such as HuggingChat and FLAN represent a competitive alternative for text annotation tasks, exhibiting performance metrics that generally exceed those of MTurk and rival those of ChatGPT. For certain tasks, these open-source LLMs are found to be an adequate substitute for crowd-annotations, and in some instances, their top-performing models approach or even exceed the performance of ChatGPT. An important appeal of open-source LLMs is that they offer considerable cost advantages. While ChatGPT provides substantial cost-efficiency, being about thirty times more affordable per annotation compared to MTurk <cit.>, open-source LLMs surpass this by being freely available. This constitutes a significant improvement in the accessibility of such models, extending their reach to a broader range of researchers irrespective of financial constraints. Open-source LLMs present benefits that go beyond cost-efficiency. One key advantage is that they help reduce reliance on proprietary models operated by for-profit companies, which may conflict with research ethics and the reproducibility standards <cit.>. 
Furthermore, open-source LLMs provide distinct benefits for data protection, as they are designed in such a way that data do not need to be shared with any third-party entities <cit.>. This feature ensures that sensitive information remains secure and confidential, because it is not sent to or stored by an external party. The elimination of data sharing in open-source LLMs provides an extra layer of protection against potential data breaches or unauthorized access. This feature becomes especially beneficial in scenarios where sensitive data are involved, such as in the legal or medical fields, where confidentiality is of utmost importance <cit.>, but also in social science research involving data protected under the European Union's General Data Protection Regulation (GDPR), or covered by non-disclosure agreements (NDAs). Several avenues for future research emerge from these findings. First, an in-depth error analysis is needed to identify areas of underperformance and potential biases across these models. A better understanding of these shortcomings will help refine these tools and address their limitations. Second, the relationship between model settings and task-specific performance needs to be further explored. The findings indicate that optimal performance may depend on the specific interplay of parameters such as temperature and model size, as well as the choice between zero-shot and few-shot approaches. Given the variable performance of these models under different settings, it is important to identify which combinations yield the best results for specific tasks. To conclude, this study presents evidence of the potential of open-source LLMs as a practical alternative for text annotation tasks. The models' performance, coupled with their cost, accessibility, and data-protection advantages, positions them as valuable tools in the domain of natural language processing. However, additional research is needed to optimize their performance and ensure their effective application across various use cases. § MATERIALS AND METHODS §.§ Datasets The analysis relies on four distinct datasets. The first dataset consists of 2,382 randomly selected tweets from a more extensive collection of 2.6 million tweets related to content moderation, spanning from January 2020 to April 2021. The second dataset comprises 1,856 tweets posted by members of the US Congress between 2017 and 2022, sampled from a dataset of 20 million tweets. The third dataset consists of 1,606 newspaper articles on content moderation published from January 2020 to April 2021, drawn from a dataset of 980k articles obtained via LexisNexis. Sample sizes were determined based on the number of texts required to construct training sets for machine-learning classifiers. Finally, the fourth dataset replicates the data collection process of the first, focusing on January 2023 and comprising a random sample of 500 tweets (339 of them in English) drawn from a dataset of 1.3 million tweets. §.§ Data Annotation Tasks We implemented several annotation tasks: (1) relevance: whether a tweet is about content moderation or, in a separate task, about politics; (2) topic detection: whether a tweet is about a set of six pre-defined topics (i.e.
Section 230, Trump Ban, Complaint, Platform Policies, Twitter Support, and others); (3) stance detection: whether a tweet is in favor of, against, or neutral about repealing Section 230 (a piece of US legislation central to content moderation); (4) general frame detection: whether a tweet contains a set of two opposing frames (“problem” and “solution”). The solution frame describes tweets framing content moderation as a solution to other issues (e.g., hate speech). The problem frame describes tweets framing content moderation as a problem in its own right as well as a problem for other issues (e.g., free speech); (5) policy frame detection: whether a tweet contains a set of fourteen policy frames proposed in <cit.>. The full text of instructions for the five annotation tasks is presented in Appendix S1. We used exactly the same wording for the LLMs and MTurk. §.§ Trained Annotators We trained three political science students to conduct the annotation tasks. For each task, they were given the same set of instructions described above and detailed in Appendix <ref>. The coders annotated the tweets independently task by task. §.§ Crowd-workers We employed MTurk workers to perform the same set of tasks as the trained annotators and LLMs, using the same set of instructions (Appendix S1). To ensure annotation quality, we restricted access to the tasks to workers who are classified as “MTurk Masters” by Amazon, who have a HIT (Human Intelligence Task) approval rate greater than 90% with at least 50 approved HITs and are located in the US. Moreover, we ensured that no worker could annotate more than 20% of the tweets for a given task. As with the trained human annotators, each tweet was annotated by two different crowd-workers. §.§ LLM Selection We selected three LLMs to compare their annotation performance and costs. First, we used the ChatGPT API (`gpt-3.5-turbo' version), which is a proprietary, closed-source LLM. We set the temperature parameter at 1 (default value) and 0.2 (which makes the output more deterministic; higher values make the output more random). Second, we used HuggingChat (`oasst-sft-6-llama-30b' version), which is an open-source model similar to ChatGPT. We set the temperature parameter at 0.9 (default value) and 0.2. Third, following promising results obtained in previous research <cit.>, we selected FLAN-T5 <cit.> as our second open-source LLM. FLAN is available in six different sizes from small (80M parameters) to UL2 (20B parameters). For this study, we employed three different sizes: L, XL, and XXL. For each model setting, we collected two responses from each LLM to compute the intercoder agreement. We created a new chat session for every tweet to ensure that the history of annotations did not influence the LLM results. §.§ Prompt Engineering For zero-shot tests, we intentionally avoided adding any prompt engineering to ensure comparability between LLMs and MTurk crowd-workers. After testing several variations, we decided to feed tweets one by one to ChatGPT using the following prompt: “Here's the tweet I picked, please label it as [Task Specific Instruction (e.g. `one of the topics in the instruction')].” The corresponding prompts for each task are reported in Appendix <ref>. For few-shot tests, we employed Chain-of-Thought (CoT) prompting <cit.>, where large language models (LLMs) are provided with both the question and a step-by-step reasoning answer as examples. Specifically, following previous research <cit.>, we used ChatGPT to generate two CoT-prompted examples per class per annotation task.
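For illustration, the zero-shot querying setup described above could be sketched roughly as follows, assuming the legacy openai Python client (pre-1.0 ChatCompletion interface); the function name annotate_zero_shot, the codebook argument, and the API-key placeholder are illustrative rather than part of our pipeline. In the few-shot condition, the CoT examples described in this section would simply be prepended to the prompt.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; replace with a valid key before use

def annotate_zero_shot(codebook: str, tweet: str, temperature: float = 0.2) -> str:
    """Label one tweet with gpt-3.5-turbo, using the task codebook as the prompt."""
    prompt = (
        f"{codebook}\n\n"
        "Here's the tweet I picked, please label it as one of the topics in the instruction.\n\n"
        f"{tweet}"
    )
    # A fresh messages list for every call mimics opening a new chat session per
    # tweet, so earlier annotations cannot influence the output.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response["choices"][0]["message"]["content"].strip()

Calling such a function twice per tweet for each temperature value yields the paired annotation runs used to compute intercoder agreement.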
More specifically, we fed ChatGPT our human-annotated examples and asked it to annotate each example and provide an explanation for its annotation. If ChatGPT's annotation was correct (which we know thanks to our human annotations), we included the example along with ChatGPT's explanation in our prompt for the few-shot experiment. §.§ Evaluation Metrics First, we computed average accuracy, that is, the number of correctly classified instances over the total number of cases to be classified, using trained human annotations as our gold standard and considering only texts on which both trained annotators agreed. Second, we computed intercoder agreement, measured as the percentage of instances for which both annotators in a given group report the same class. § ACKNOWLEDGMENTS This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement nr. 883121). We thank Fabio Melliger, Paula Moser, and Sophie van IJzendoorn for excellent research assistance. § FULL RESULTS § ZERO-SHOT ANNOTATION CODEBOOK §.§ Dataset 1: Content Moderation Tweets (2020-2021) §.§.§ Task 1: Relevance “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as relevant or irrelevant to the content moderation: A: Text is RELEVANT if it includes: social media platforms’ content moderation rules and practices, censorship, governments’ regulation of online content moderation, and/or mild forms of content moderation like flagging, shadowbanning, or account suspension. B: Text is IRRELEVANT if they do not refer to content moderation, as defined above. This would include, for example, a tweet by Trump that Twitter has labeled his tweet as “disputed”, or a tweet claiming that something is false. Now, is the following text relevant or irrelevant to content moderation? [Paste a tweet here and remove the brackets] §.§.§ Problem/Solution Frames “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as describing content moderation as a problem, as a solution, or neither: A: Text describes content moderation as a PROBLEM if they emphasize negative effects of it, such as restrictions to free speech, censorship, or the biases that can emerge from decisions regarding what users are allowed to post. B: Text describes content moderation as a SOLUTION if they emphasize positive effects of it, such as protecting users from harmful content such as hate speech, misinformation, illegal adult content, or spam. C: Text describes content moderation as NEUTRAL if they do not emphasize negative or positive effects of content moderation. For example if they simply report on the content moderation activity of social media platforms without linking them to potential advantages or disadvantages for users or stakeholders. Now, is the following text describing content moderation as a problem, as a solution, or neither?
[Paste a tweet here and remove the brackets] §.§.§ Task 3: Policy Frames “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as one of the frames defined below: * ECONOMY: The costs, benefits, or monetary/financial implications of the issue (to an individual, family, community, or to the economy as a whole). * Capacity and resources: The lack of or availability of physical, geographical, spatial, human, and financial resources, or the capacity of existing systems and resources to implement or carry out policy goals. * MORALITY: Any perspective—or policy objective or action (including proposed action)that is compelled by religious doctrine or interpretation, duty, honor, righteousness or any other sense of ethics or social responsibility. * FAIRNESS AND EQUALITY: Equality or inequality with which laws, punishment, rewards, and resources are applied or distributed among individuals or groups. Also the balance between the rights or interests of one individual or group compared to another individual or group. * POLICY PRESCRIPTION AND EVALUATION: Particular policies proposed for addressing an identified problem, and figuring out if certain policies will work, or if existing policies are effective. * LAW AND ORDER, CRIME AND JUSTICE: Specific policies in practice and their enforcement, incentives, and implications. Includes stories about enforcement and interpretation of laws by individuals and law enforcement, breaking laws, loopholes, fines, sentencing and punishment. Increases or reductions in crime. * SECURITY AND DEFENSE: Security, threats to security, and protection of one’s person, family, in-group, nation, etc. Generally an action or a call to action that can be taken to protect the welfare of a person, group, nation sometimes from a not yet manifested threat. * HEALTH AND SAFETY: Health care access and effectiveness, illness, disease, sanitation, obesity, mental health effects, prevention of or perpetuation of gun violence, infrastructure and building safety. * QUALITY OF LIFE: The effects of a policy on individuals’ wealth, mobility, access to resources, happiness, social structures, ease of day-to-day routines, quality of community life, etc. * POLITICAL: Any political considerations surrounding an issue. Issue actions or efforts or stances that are political, such as partisan filibusters, lobbyist involvement, bipartisan efforts, deal-making and vote trading, appealing to one's base, mentions of political maneuvering. Explicit statements that a policy issue is good or bad for a particular political party. * EXTERNAL REGULATION AND REPUTATION: The United States’ external relations with another nation; the external relations of one state with another; or relations between groups. This includes trade agreements and outcomes, comparisons of policy outcomes or desired policy outcomes. * OTHER: Any topic that does not fit into the above categories. Now, which of the above frames best fit the following text? Answer with only the option above that is most accurate and nothing else. [Paste a tweet here and remove the brackets] §.§.§ Task 4: Stance Detection “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. 
In the context of content moderation, Section 230 is a law in the United States that protects websites and other online platforms from being held legally responsible for the content posted by their users. This means that if someone posts something illegal or harmful on a website, the website itself cannot be sued for allowing it to be posted. However, websites can still choose to moderate content and remove anything that violates their own policies. I will ask you to classify a text as in favor of, against, or neutral about Section 230: A. “In favor of” expresses approval for Section 230 and/or advocates keeping Section 230 B. “Against” expresses disapproval towards Section 230 and/or advocates repealing Section 230 C. “Neutral” discusses Section 230 without expressing approval or disapproval towards it Now, is the following text in favor of, against, or neutral about Section 230? [Paste a tweet here and remove the brackets] §.§.§ Task 5: Topic Detection “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as of the topics described below: * Section 230, which is a law in the United States that protects websites and other online platforms from being held legally responsible for the content posted by their users (SECTION 230). * The decision by many social media platforms, such as Twitter and Facebook, to suspend Donald Trump’s account (TRUMP BAN). * Requests directed to Twitter’s support account or help center (TWITTER SUPPORT). * Social media platforms’ policies and practices, such as community guidelines or terms of service (PLATFORM POLICIES). * Complaints about platform’s policy and practices in deplatforming and content moderation or suggestions to suspend particular accounts, or complaints about accounts being suspended or reported (COMPLAINTS). * If a text is not about the SECTION 230, COMPLAINTS, TRUMP BAN, TWITTER SUPPORT, and PLATFORM POLICIES, then it should be classified in OTHER class (OTHER). Now, is the following text about SECTION 230, TRUMP BAN, COMPLAINTS, TWITTER SUPPORT, PLATFORM POLICIES, or OTHER? [Paste a tweet here and remove the brackets] §.§ Dataset 2: Content Moderation Tweets (2023) §.§.§ Task 1: Relevance “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as relevant or irrelevant to the content moderation: A: Text is RELEVANT if it includes: social media platforms’ content moderation rules and practices, censorship, governments’ regulation of online content moderation, and/or mild forms of content moderation like flagging, shadowbanning, or account suspension. B: Text is IRRELEVANT if they do not refer to content moderation, as defined above. This would include, for example, a tweet by Trump that Twitter has labeled his tweet as “disputed”, or a tweet claiming that something is false. Now, is the following text relevant or irrelevant to content moderation? [Paste a tweet here and remove the brackets] §.§.§ Task 2: Problem/Solution Frames “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. 
I will ask you to classify a text as describing content moderation as a problem, as a solution, or neither: A: Text describes content moderation as a PROBLEM if they emphasize negative effects of it, such as restrictions to free speech, censorship, or the biases that can emerge from decisions regarding what users are allowed to post. B: Text describes content moderation as a SOLUTION if they emphasize positive effects of it, such as protecting users from harmful content such as hate speech, misinformation, illegal adult content, or spam. C: Text describes content moderation as NEUTRAL if they do not emphasize negative or positive effects of content moderation. For example if they simply report on the content moderation activity of social media platforms without linking them to potential advantages or disadvantages for users or stakeholders. Now, is the following text describing content moderation as a problem, as a solution, or neither? [Paste a tweet here and remove the brackets] §.§ Dataset 3: US Congress Members Tweets (2017-2022) §.§.§ Task 1: Relevance “Political content” refers to a text that pertains to politics or government policies at the local, national, or international level. This can include political figures, events, or issues, as well as text that uses political language or hashtags. I will ask you to classify a text as relevant or irrelevant to the political content: Text is relevant if it uses political keywords or hashtags, mentions political figures or events, discusses policy issues such as immigration, abortion, foreign policy, health care, tax, or police shootings, or includes a link to news outlets or other political sources such as think tanks, political pundits or journalists, the White House, or the US Congress. Text is irrelevant if it does not fit the criteria above Now, is the following text relevant or irrelevant to political content? [Paste a tweet here and remove the brackets] §.§.§ Task 2: Policy Frames “Political content” refers to a text that pertains to politics or government policies at the local, national, or international level. This can include political figures, events, or issues, as well as text that uses political language or hashtags. I will ask you to classify a text as one of the frames defined below: * ECONOMY: The costs, benefits, or monetary/financial implications of the issue (to an individual, family, community, or to the economy as a whole). * Capacity and resources: The lack of or availability of physical, geographical, spatial, human, and financial resources, or the capacity of existing systems and resources to implement or carry out policy goals. * MORALITY: Any perspective—or policy objective or action (including proposed action)that is compelled by religious doctrine or interpretation, duty, honor, righteousness or any other sense of ethics or social responsibility. * FAIRNESS AND EQUALITY: Equality or inequality with which laws, punishment, rewards, and resources are applied or distributed among individuals or groups. Also the balance between the rights or interests of one individual or group compared to another individual or group. * POLICY PRESCRIPTION AND EVALUATION: Particular policies proposed for addressing an identified problem, and figuring out if certain policies will work, or if existing policies are effective. * LAW AND ORDER, CRIME AND JUSTICE: Specific policies in practice and their enforcement, incentives, and implications. 
Includes stories about enforcement and interpretation of laws by individuals and law enforcement, breaking laws, loopholes, fines, sentencing and punishment. Increases or reductions in crime. * SECURITY AND DEFENSE: Security, threats to security, and protection of one’s person, family, in-group, nation, etc. Generally an action or a call to action that can be taken to protect the welfare of a person, group, nation sometimes from a not yet manifested threat. * HEALTH AND SAFETY: Health care access and effectiveness, illness, disease, sanitation, obesity, mental health effects, prevention of or perpetuation of gun violence, infrastructure and building safety. * QUALITY OF LIFE: The effects of a policy on individuals’ wealth, mobility, access to resources, happiness, social structures, ease of day-to-day routines, quality of community life, etc. * POLITICAL: Any political considerations surrounding an issue. Issue actions or efforts or stances that are political, such as partisan filibusters, lobbyist involvement, bipartisan efforts, deal-making and vote trading, appealing to one's base, mentions of political maneuvering. Explicit statements that a policy issue is good or bad for a particular political party. * EXTERNAL REGULATION AND REPUTATION: The United States’ external relations with another nation; the external relations of one state with another; or relations between groups. This includes trade agreements and outcomes, comparisons of policy outcomes or desired policy outcomes. * OTHER: Any topic that does not fit into the above categories. Now, which of the above frames best fit the following text? Answer with only the option above that is most accurate and nothing else. [Paste a tweet here and remove the brackets] §.§ Dataset 4: Content Moderation News Articles (2020-2021) §.§.§ Task 1: Relevance “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as relevant or irrelevant to the content moderation: A: Text is RELEVANT if it includes: social media platforms’ content moderation rules and practices, censorship, governments’ regulation of online content moderation, and/or mild forms of content moderation like flagging, shadowbanning, or account suspension. B: Text is IRRELEVANT if they do not refer to content moderation, as defined above. This would include, for example, a tweet by Trump that Twitter has labeled his tweet as “disputed”, or a tweet claiming that something is false. Now, is the following text relevant or irrelevant to content moderation? [Paste a news article text here and remove the brackets] §.§.§ Task 2: Problem/Solution Frames “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as describing content moderation as a problem, as a solution, or neither: A: Text describes content moderation as a PROBLEM if they emphasize negative effects of it, such as restrictions to free speech, censorship, or the biases that can emerge from decisions regarding what users are allowed to post. B: Text describes content moderation as a SOLUTION if they emphasize positive effects of it, such as protecting users from harmful content such as hate speech, misinformation, illegal adult content, or spam. 
C: Text describes content moderation as NEUTRAL if they do not emphasize negative or positive effects of content moderation. For example if they simply report on the content moderation activity of social media platforms without linking them to potential advantages or disadvantages for users or stakeholders. Now, is the following text describing content moderation as a problem, as a solution, or neither? [Paste a news article text here and remove the brackets] § FEW-SHOT ANNOTATION CODEBOOK (CHAIN-OF-THOUGHT) §.§ Dataset 1: Content Moderation Tweets (2020-2021) §.§.§ Task 1: Relevance “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as relevant or irrelevant to the content moderation: A: Text is RELEVANT if it includes: social media platforms’ content moderation rules and practices, censorship, governments’ regulation of online content moderation, and/or mild forms of content moderation like flagging, shadowbanning, or account suspension. B: Text is IRRELEVANT if they do not refer to content moderation, as defined above. This would include, for example, a tweet by Trump that Twitter has labeled his tweet as “disputed”, or a tweet claiming that something is false. The tweets below are some examples of 'RELEVANT' or 'IRRELEVANT' texts to content moderation in the form of question and answer: Question: Please stop lying.cWe all know you can’t stand Sanders and the platform he stands for and people that support progressive policies.cBut can you at least not LIE? It’s a really bad look. Thanks. Answer: Based on the given criteria, the text provided is IRRELEVANT to content moderation. It doesn't discuss social media platforms' content moderation rules and practices, censorship, governments' regulation of online content moderation, or any forms of content moderation like flagging, shadowbanning, or account suspension. Instead, it appears to be a message directed at two individuals and their opinions about Sanders and progressive policies. Question: Hey guys, my TL is hella cluttered from all the follow backs I did this season. I’m going to reinstate my “if you only rt content” I’m not going to follow you policy. No hard feelings and I hope you stick around, but if I unfollow you that’s likely why. Answer: Based on the given criteria, the text provided is IRRELEVANT to content moderation. It doesn't discuss social media platforms' content moderation rules and practices, censorship, governments' regulation of online content moderation, or any mild forms of content moderation like flagging, shadowbanning, or account suspension. Instead, it appears to be a personal statement about cluttered timelines and a follow-back policy on Twitter. Question: New from Pediatrics: social media influencers generate millions of impressions for unhealthy food and drink brands through product placement. The authors call on the FTC to strengthen regulations on YouTube product placement featuring young children. https://t.co/Yicupispun Answer: Based on the given criteria, the text provided is RELEVANT to content moderation. It discusses the impact of social media influencers and product placement on unhealthy food and drink brands. It also mentions calling on the FTC (Federal Trade Commission) to strengthen regulations on YouTube product placement featuring young children. 
This relates to content moderation as it involves discussing regulations and practices related to content promotion and advertising on social media platforms. Question: This is a tweet from today of the sitting President of the United States accusing a respected journalist and outspoken critic of his of MURDER with NO EVIDENCE whatsoever. Zero. Twitter should immediately remove this tweet and suspend this account. This is INSANE. https://t.co/chlp7iKBXL Answer: Based on the given criteria, the text provided is RELEVANT to content moderation. It discusses the actions and potential response of Twitter in relation to a tweet made by the sitting President of the United States. It addresses the need for Twitter to remove the tweet and suspend the account due to the serious accusation made without evidence. This involves discussing content moderation rules and practices, specifically regarding the removal of content and potential account suspension. Now, is the following text relevant or irrelevant to content moderation? [Paste a tweet here and remove the brackets] §.§.§ Problem/Solution Frames “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as describing content moderation as a problem, as a solution, or neither: A: Text describes content moderation as a PROBLEM if they emphasize negative effects of it, such as restrictions to free speech, censorship, or the biases that can emerge from decisions regarding what users are allowed to post. B: Text describes content moderation as a SOLUTION if they emphasize positive effects of it, such as protecting users from harmful content such as hate speech, misinformation, illegal adult content, or spam. C: Text describes content moderation as NEUTRAL if they do not emphasize negative or positive effects of content moderation. For example if they simply report on the content moderation activity of social media platforms without linking them to potential advantages or disadvantages for users or stakeholders. The following tweets are some examples of 'PROBLEM', 'SOLUTION' or 'NEUTRAL' texts about content moderation in the form of question and answer: Question: amerxmirza I have reported him to make sure his account gets suspended. Disgusting and derogation remarks shouldn't be tolerated. Please report. Thanks Answer: The given text can be classified as describing content moderation as a SOLUTION. It emphasizes the positive effect of content moderation by advocating for reporting and suspending the account of a user who made "disgusting and derogatory remarks." The text acknowledges the need to not tolerate such behavior and encourages others to report it, highlighting the protective role of content moderation in addressing harmful content. Question: Channel link https://t.co/9ms7ukCI84 please block this channel this content very harmful https://t.co/zeJ512JPPx Answer: The given text can be classified as describing content moderation as a SOLUTION. It emphasizes the positive effect of content moderation by requesting the blocking of a specific channel due to its harmful content. The text recognizes the need to take action and protect users from potentially harmful content, aligning with the protective role of content moderation in addressing harmful material. Question: Finally could restore my account. Why twitter even require phone number in the first place? 
Aren't we supposed to have a choice not to show it thus not being banned after, if twitter bots decide you're one of them? #twitter #ElonMusk #FreedomInternet Answer: The given text can be classified as describing content moderation as a PROBLEM. It criticizes the requirement of providing a phone number by Twitter and questions the lack of choice in not showing it. The text implies frustration with the potential consequences of content moderation decisions made by Twitter bots, suggesting concerns about restrictions to freedom and potential biases in the platform's moderation practices. Question: Of all the issues we’re facing now, why’s social media regulation the goal for this people? Freedom of speech is a basic human right and any attempt to take it from Nigerians should be rejected #RejectNITDAbill #Notosocialmediaregulation Answer: The given text can be classified as describing content moderation as a PROBLEM. It emphasizes the negative effects of social media regulation, specifically mentioning the potential infringement on freedom of speech as a basic human right. The text opposes any attempt to take away this right from Nigerians, suggesting concerns about restrictions and censorship associated with content moderation. Question: BotGerad R6Jena zxdd account suspended xd Answer: Based on the given text, it does not explicitly emphasize the negative or positive effects of content moderation. Instead, it simply reports the suspension of an account without further elaboration. Therefore, the text can be classified as describing content moderation as NEUTRAL, as it does not emphasize either the problems or the solutions associated with content moderation. Question: elonmusk ellagirwin esthercrawford there seems to have been some mistake, renowned internet love-monger nickjfuentes was suspended from Twitter. Please do the right thing and re-instate his account for the sake of humanity Answer: Based on the given text, it is possible to interpret it differently. While the text does request the reinstatement of a suspended account, it does not explicitly mention any negative effects or problems related to content moderation. Therefore, an alternative classification could be that the text describes content moderation as NEUTRAL since it does not emphasize negative or positive effects. It simply requests the reinstatement of a specific account without further elaboration on the broader implications of content moderation. Now, is the following text describing content moderation as a problem, as a solution, or neither? [Paste a tweet here and remove the brackets] §.§.§ Task 3: Policy Frames “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as one of the frames defined below: * ECONOMY: The costs, benefits, or monetary/financial implications of the issue (to an individual, family, community, or to the economy as a whole). * Capacity and resources: The lack of or availability of physical, geographical, spatial, human, and financial resources, or the capacity of existing systems and resources to implement or carry out policy goals. * MORALITY: Any perspective—or policy objective or action (including proposed action)that is compelled by religious doctrine or interpretation, duty, honor, righteousness or any other sense of ethics or social responsibility. 
* FAIRNESS AND EQUALITY: Equality or inequality with which laws, punishment, rewards, and resources are applied or distributed among individuals or groups. Also the balance between the rights or interests of one individual or group compared to another individual or group. * POLICY PRESCRIPTION AND EVALUATION: Particular policies proposed for addressing an identified problem, and figuring out if certain policies will work, or if existing policies are effective. * LAW AND ORDER, CRIME AND JUSTICE: Specific policies in practice and their enforcement, incentives, and implications. Includes stories about enforcement and interpretation of laws by individuals and law enforcement, breaking laws, loopholes, fines, sentencing and punishment. Increases or reductions in crime. * SECURITY AND DEFENSE: Security, threats to security, and protection of one’s person, family, in-group, nation, etc. Generally an action or a call to action that can be taken to protect the welfare of a person, group, nation sometimes from a not yet manifested threat. * HEALTH AND SAFETY: Health care access and effectiveness, illness, disease, sanitation, obesity, mental health effects, prevention of or perpetuation of gun violence, infrastructure and building safety. * QUALITY OF LIFE: The effects of a policy on individuals’ wealth, mobility, access to resources, happiness, social structures, ease of day-to-day routines, quality of community life, etc. * POLITICAL: Any political considerations surrounding an issue. Issue actions or efforts or stances that are political, such as partisan filibusters, lobbyist involvement, bipartisan efforts, deal-making and vote trading, appealing to one's base, mentions of political maneuvering. Explicit statements that a policy issue is good or bad for a particular political party. * EXTERNAL REGULATION AND REPUTATION: The United States’ external relations with another nation; the external relations of one state with another; or relations between groups. This includes trade agreements and outcomes, comparisons of policy outcomes or desired policy outcomes. * OTHER: Any topic that does not fit into the above categories. The following tweets are some examples of these frames in the form of question and answer: Question: TY AGSchneiderman for your investigation into red light camera violations by school bus companies. Our children’s safety is paramount. https://t.co/CVLj7Lk6r4 Answer: HEALTH AND SAFETY Question: Had a huge turnout for the #GrowWithGoogle digital skills event, which I hosted with google today in Charleston! Programs like this one will help strengthen our workforce & create a brighter economic future for West Virginians. https://t.co/3ma7Mv1EBR Answer: ECONOMY Question: On the floor about to offer and speak on my amendment to eliminate subsistence fees at BOP halfway houses, which house DC Code felons. Answer: Based on the provided text, the most accurate frame would be: POLICY PRESCRIPTION AND EVALUATION. The text mentions the speaker's intention to offer an amendment to eliminate subsistence fees at BOP halfway houses that house DC Code felons. This indicates a specific policy proposal aimed at addressing a problem related to subsistence fees for felons in halfway houses. Therefore, it falls under the frame of policy prescription and evaluation. Question: It is refreshing that the #JointAddress tonight was a call to unite the country around a strategy for a stronger, brighter future. Answer: Based on the provided text, the most accurate frame would be: POLITICAL. 
The text refers to the #JointAddress, which typically refers to a political event where the President of the United States addresses the nation. It mentions the call to unite the country around a strategy for a stronger, brighter future, which indicates a political consideration aimed at rallying support and fostering unity. Therefore, the text falls under the frame of political considerations. Question: Today we remember and honor the men and women of our Armed Forces who remain missing in action or prisoners of war. #POWMIARecognitionDay https://t.co/D9z1akkjKW Answer: The text you provided can be classified under the frame of SECURITY AND DEFENSE. It is a statement that acknowledges and honors the men and women of the Armed Forces who are missing in action or prisoners of war. It emphasizes the importance of remembering and honoring their sacrifice, which relates to the protection and welfare of individuals and the nation's security. Question: No longer can POTUS pretend that Putin’s Russia was not responsible for cyberattacks targeting the 2016 election. Today’s indictments are another example of why the Mueller investigation must continue unimpeded. The American people need to learn the truth. https://t.co/mYwE4p4jR4 Answer: The following frame is the best fit: EXTERNAL REGULATION AND REPUTATION: The text highlights the responsibility of Putin's Russia for cyberattacks targeting the 2016 election and emphasizes the need for the Mueller investigation to continue unimpeded. It mentions the truth that the American people need to learn. This relates to the external relations and reputation of the United States and its interactions with another nation (Russia) in the context of cyberattacks and election interference. Question: Women have a valuable place in STEAM fields—let’s make sure they have a seat at the table. https://t.co/LhOawvSszP Answer: Based on the provided text, the most accurate classification would be: FAIRNESS AND EQUALITY. The text emphasizes the importance of ensuring women have a seat at the table in STEAM (Science, Technology, Engineering, Arts, and Mathematics) fields. It addresses the equality or inequality of opportunities and representation for women in these fields, highlighting the need for fairness and equal participation. Question: BREAKING: HouseIntelComm Chairman Conaway just announced that the Committee is closing the "Russian collusion" investigation, having found zero evidence of any collusion between the Trump campaign and Russians. Case closed. It's time we return focus to the people's agenda. Answer: The text can be classified under the following frame: LAW AND ORDER, CRIME AND JUSTICE. The text mentions the investigation into "Russian collusion" and the finding of zero evidence of collusion between the Trump campaign and Russians. It highlights the closure of the investigation and suggests a shift in focus to the people's agenda. This classification relates to specific policies in practice (investigation), enforcement (finding of evidence), and implications (closure) within the realm of law and order, crime, and justice. Question: The Trump-Sessions "zero tolerance" family separation border policies are not required, right or moral. https://t.co/aAFX8Q6eKT Answer: Based on the provided text, the best fit among the defined frames would be: MORALITY. The text expresses a perspective on the Trump-Sessions "zero tolerance" family separation border policies, stating that they are not required, right, or moral. 
This indicates a moral evaluation of the policies based on ethics and social responsibility. Question: Glad to work with TomRooney RepSeanMaloney RepStefanik on this bipartisan bill to help our homeless veterans. #HousingOurHeroes Answer: The text highlights a bipartisan bill aimed at helping homeless veterans, which directly relates to improving the quality of life for these individuals. The focus is on the effects of the policy on individuals' well-being, access to resources, and housing stability. Therefore, the frame "QUALITY OF LIFE" is the most accurate classification for the given text. Question: Prayers to my dear friend SteveScalise &all injured after this morning's horrific shooting.Thinking of their families in this shocking time Answer: The text expresses sympathy and concern for the injured individuals and their families after a shooting incident, but it doesn't explicitly address the broader implications or aspects related to economy, morality, fairness and equality, policy prescription and evaluation, law and order, crime and justice, health and safety, quality of life, political considerations, external regulation and reputation, or any other specific frame. Therefore, the most accurate classification for this text would be "OTHER," as it doesn't fit into the predefined frames provided. Now, which of the above frames best fit the following text? Answer with only the option above that is most accurate and nothing else. [Paste a tweet here and remove the brackets] §.§.§ Task 4: Stance Detection “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. In the context of content moderation, Section 230 is a law in the United States that protects websites and other online platforms from being held legally responsible for the content posted by their users. This means that if someone posts something illegal or harmful on a website, the website itself cannot be sued for allowing it to be posted. However, websites can still choose to moderate content and remove anything that violates their own policies. I will ask you to classify a text as in favor of, against, or neutral about Section 230: A. “In favor of” expresses approval for Section 230 and/or advocates keeping Section 230 B. “Against” expresses disapproval towards Section 230 and/or advocates repealing Section 230 C. “Neutral” discusses Section 230 without expressing approval or disapproval towards it The following tweets are some examples of texts “IN FAVOR OF” ,“AGAINST” or “NEUTRAL” about section230 in the form of question and answer: Question: #Section230 is poorly understood by lawmakers on both sides of the aisle, and it is dangerous for them to use it as a political football. To save online free speech, we must #Protect230 Contact lawmakers: https://t.co/ldSL75knH4 Answer: The provided text is in favor of Section 230 and advocates keeping it. The use of the hashtag #Protect230 and the statement "To save online free speech, we must #Protect230" indicate support for the law and its role in preserving online free speech. Therefore, the text can be classified as "IN FAVOR OF" Section 230. Question: Facebook and Twitter CEOs warn against demolishing Section 230, the law that shields tech giants https://t.co/CItuLmTTxE by alexiskweed https://t.co/7Y6eG19YoZ Answer: The provided text is in favor of Section 230. 
It mentions that the CEOs of Facebook and Twitter are warning against demolishing Section 230, indicating their support for the law. Therefore, the text can be classified as "IN FAVOR OF" Section 230. Question: How do we get big tech companies like Twitter to abide by the spirit of the 1st Amendment, or moderate content by a set of clear and definable standards? Canyon Brimhall joins in the third episode of our series on big tech, free speech, and Section 230. https://t.co/RfoJVuQPEh https://t.co/FheCcceTMr Answer: The provided text discusses the need for big tech companies to abide by the spirit of the First Amendment and moderate content according to clear and definable standards. It mentions Section 230 in the context of a discussion about big tech, free speech, and content moderation. The text does not express explicit approval or disapproval towards Section 230 itself but raises the question of how to ensure big tech companies adhere to the principles of the First Amendment. Therefore, the text can be classified as "NEUTRAL" about Section 230. Question: I sat down with the Meme King himself bennyjohnson in Washington, DC to discuss Big Tech, Social Media Censorship and its impact on the election, Section 230, PLEASE DO NOT MISS THIS! https://t.co/ETWKY16J8d Answer: The provided text does not clearly express whether it is in favor of or against Section 230. It discusses various topics such as Big Tech, social media censorship, Section 230, and its impact on the election without explicitly stating approval or disapproval towards Section 230. Therefore, the text can be classified as "NEUTRAL" about Section 230. Question: JoeBiden Trump MUST be re-elected. We MUST vote in GOP in the House, keep the Senate and we MUST eliminate Section 230 protections for big tech social media! Social media & Corporate America will kill our free speech! BLM is a Marxist group looking to DESTROY USA! https://t.co/R37Qi2rlNy Answer: The provided text is against Section 230. It advocates for eliminating Section 230 protections for big tech social media and expresses concerns about the impact of social media and corporate America on free speech. Therefore, the text can be classified as "AGAINST" Section 230. Question: BREAKING: Supreme Court Justice Clarence Thomas finds social media companies do not have First Amendment right to ban protected speech and that Section 230 is unconstitutional Answer: The provided text expresses a critical stance towards Section 230 by stating that Supreme Court Justice Clarence Thomas finds it unconstitutional and that social media companies do not have a First Amendment right to ban protected speech. Therefore, the text can be classified as "AGAINST" Section 230. Now, is the following text in favor of, against, or neutral about Section 230? [Paste a tweet here and remove the brackets] §.§.§ Task 5: Topic Detection “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as of the topics described below: * Section 230, which is a law in the United States that protects websites and other online platforms from being held legally responsible for the content posted by their users (SECTION 230). * The decision by many social media platforms, such as Twitter and Facebook, to suspend Donald Trump’s account (TRUMP BAN). 
* Requests directed to Twitter’s support account or help center (TWITTER SUPPORT). * Social media platforms’ policies and practices, such as community guidelines or terms of service (PLATFORM POLICIES). * Complaints about platform’s policy and practices in deplatforming and content moderation or suggestions to suspend particular accounts, or complaints about accounts being suspended or reported (COMPLAINTS). * If a text is not about the SECTION 230, COMPLAINTS, TRUMP BAN, TWITTER SUPPORT, and PLATFORM POLICIES, then it should be classified in OTHER class (OTHER). The following tweets are some examples of the topics described above in the form of question and answer: Question: TangledUpInDead soupmaned jkosseff jenniferm_q The referenced case has absolutely nothing to do with Section 230. Answer: Based on the provided text, it falls under the "SECTION 230" category. The text explicitly mentions Section 230 and states that the referenced case is unrelated to it. Question: LindseyGrahamSC The fact that you call it a “demand” is abhorrent! The American people deserve the support of their government and $2,000 pp is well overdue. The fact it will be held hostage unless Section 230 is addressed is horseshit! #humanroulette #americansaredyingtrumpkeepslying Answer: Based on the provided text, it falls under the "SECTION 230" category. The text mentions Section 230 and expresses frustration about it being used as leverage to address other issues, specifically relating to the demand for $2,000 per person. Question: YourAnonCentral Hack Twitter and Ban Trump Answer: Based on the given text, it would be classified as a statement related to "TRUMP BAN," specifically mentioning hacking Twitter and advocating for the banning of Trump. Question: Russian-made #Telegram messenger shoots to top of US app charts, amid fears of wider social media crackdown following Trump ban — RT Russia & Former Soviet Union https://t.co/3HGLCNNJ2T Answer: Based on the given text, it would be classified as a statement related to "TRUMP BAN." The text discusses fears of a wider social media crackdown following Trump's ban and mentions the rise of the Russian-made Telegram messenger. Question: Twitter decided to ban Mike Lindell, who founded bedding company My Pillow, due to “repeated violations” of its civic integrity policy, a spokesperson said in a statement. The policy was implemented last September and is targeted at fighting disinformation https://t.co/lM2FeHuv3f Answer: This tweet appears to be more closely related to "PLATFORM POLICIES". The text discusses Twitter's decision to ban Mike Lindell based on violations of its civic integrity policy, which falls under the category of social media platforms' policies and practices. Question: Twitter To Introduce This New Feature To Curb The Spread Of Fake, Harmful Content. https://t.co/TqHeINqeQzhttps://t.co/SrwaCC5fdL So many #cherries to choose from this season! Choose your favourite or get them all on https://t.co/gAhHib40nQ. CherrySeason #FreshFruits #superplum #FarmFresh #VocalForLocal #StellaCherries #OrderOnline #HomeDelivery #BlackCherries #MerchantCherries https://t.co/WNpIDh72p3 Answer: This tweet should be classified as "PLATFORM POLICIES". The text mentions Twitter's introduction of a new feature to curb the spread of fake and harmful content. This aspect relates to social media platform policies aimed at content moderation. Question: This post has been up 24 minutes and has 12 impressions Twitter TwitterSupport. I have 3800 followers. 
This number is about 95 Answer: The text is about TWITTER SUPPORT. The author is addressing Twitter and TwitterSupport and complaining about a decrease in his/her number of followers, suggesting that he/she is being shadowbanned by Twitter. The use of the words "impressions" and "followers" suggests that the author is concerned with the platform's policies and practices regarding content moderation and deplatforming. Question: Using a criminal's picture insulting our history and culture this is where I should report this user to Twitter TwitterSupport for hateful content and threatening #CyberSecurity https://t.co/KdIinpgMXf Answer: The tweet is about TWITTER SUPPORT because it mentions reporting a user to Twitter and TwitterSupport for hateful content and threatening #CyberSecurity. This indicates a request for assistance from Twitter's support team to moderate and remove potentially harmful content shared on the platform. Question: You may have agreed or disagreed with JD from NY, but I do not think anyone should celebrate deplatforming Answer: This text falls under the "COMPLAINTS" class as it expresses dissatisfaction with deplatforming and suggests that it should not be celebrated, but it does not refer specifically to any social media platform, policy, or practice. Question: dbongino I found out Twitter is shadowbanning me and I'm essentially a nobody. How many other people with viewpoints they do not like and being placed behind a wall? Answer: The given text is about COMPLAINTS. The text talks about an individual named "dbongino" who has found out that Twitter has shadowbanned them, resulting in their tweets being placed behind a wall. The user expresses concern about how many other people with differing viewpoints might also be affected by Twitter's policies. This text is a complaint about Twitter's content moderation and deplatforming practices, and hence it falls under the COMPLAINTS category. Question: propjoesays Twitter needs to issue a blanket ban on this kind of fearmongering. Anyone engaging in voter suppression or questioning the integrity of elections needs to have their account suspended. Answer: The text is not about the topics mentioned in the instructions, Instead, it is focused on a specific suggestion that Twitter should issue a blanket ban on certain types of content related to voter suppression and questioning the integrity of elections. As it does not fit into any of the given topic categories, it should be classified under OTHER. Question: Lots of people have been noticing that recently. They are big time shadow banning. It’s so obvious. There are people I follow who tweet a couple times and hour and thought they were off twitter because I haven’t seen a tweet in months https://t.co/IQkZ2346VB Answer: The tweet is more appropriately classified as "OTHER." It does not specifically mention Section 230, the Trump ban, Twitter support, or platform policies. Instead, it discusses shadow banning and the author's observation of decreased visibility of tweets from certain accounts. Since it does not align with any of the specified topics, "OTHER" is the most suitable classification. Now, is the following text about SECTION 230, TRUMP BAN, COMPLAINTS, TWITTER SUPPORT, PLATFORM POLICIES, or OTHER? 
[Paste a tweet here and remove the brackets] §.§ Dataset 2: Content Moderation Tweets (2023) §.§.§ Task 1: Relevance “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as relevant or irrelevant to the content moderation: A: Text is RELEVANT if it includes: social media platforms’ content moderation rules and practices, censorship, governments’ regulation of online content moderation, and/or mild forms of content moderation like flagging, shadowbanning, or account suspension. B: Text is IRRELEVANT if they do not refer to content moderation, as defined above. This would include, for example, a tweet by Trump that Twitter has labeled his tweet as “disputed”, or a tweet claiming that something is false. The tweets below are some examples of 'RELEVANT' and 'IRRELEVANT' texts to content moderation in the form of question and answer: Question: GillianBateman5 ankertilbrewer BreesAnna She blocked me on my old account but added me on this one for some reason Answer: Based on the criteria provided, the text provided is IRRELEVANT to content moderation. It does not discuss social media platforms' content moderation rules and practices, censorship, governments' regulation of online content moderation, or any forms of content moderation such as flagging, shadowbanning, or account suspension. Instead, it appears to be a casual conversation or observation about someone blocking and adding people on social media. Question: This is how will #SriLanka debt restructuring look like? accordingly to VERITÉ RESEARCH SRI LANKA POLICY GROUP VeriteResearch CBSL StanChart https://t.co/adjf5cKzxnhttps://t.co/Er1nP9a4jh Answer: Based on the given criteria, the text provided is IRRELEVANT to content moderation. It does not discuss social media platforms' content moderation rules and practices, censorship, governments' regulation of online content moderation, or any mild forms of content moderation. Instead, the text appears to be about the debt restructuring in Sri Lanka, which is unrelated to content moderation on social media sites. Question Dear Twitter TwitterMENA TwitterSupport SalmaMMMT account has been suspended 1 day ago And without any reason Answer: Based on the given criteria, the text provided is RELEVANT to content moderation. It mentions the suspension of an account on Twitter (SalmaMMMT) without any given reason. This relates to content moderation as it touches upon the action of account suspension, which is a mild form of content moderation employed by social media platforms. Question: Finally could restore my account. Why twitter even require phone number in the first place? Aren't we supposed to have a choice not to show it thus not being banned after, if twitter bots decide you're one of them? #twitter #ElonMusk #FreedomInternet Answer: The text provided can be considered RELEVANT to content moderation. Although it primarily discusses personal account issues and frustrations with Twitter's phone number requirement, it also raises concerns about being banned based on the decisions of Twitter bots. This alludes to the practice of content moderation, where automated systems are often involved in flagging or suspending accounts based on certain criteria. Now, is the following text relevant or irrelevant to content moderation? 
[Paste a tweet here and remove the brackets] §.§.§ Task 2: Problem/Solution Frames “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as describing content moderation as a problem, as a solution, or neither: A: Text describes content moderation as a PROBLEM if they emphasize negative effects of it, such as restrictions to free speech, censorship, or the biases that can emerge from decisions regarding what users are allowed to post. B: Text describes content moderation as a SOLUTION if they emphasize positive effects of it, such as protecting users from harmful content such as hate speech, misinformation, illegal adult content, or spam. C: Text describes content moderation as NEUTRAL if they do not emphasize negative or positive effects of content moderation. For example if they simply report on the content moderation activity of social media platforms without linking them to potential advantages or disadvantages for users or stakeholders. The following texts are some examples of 'PROBLEM', 'SOLUTION' or 'NEUTRAL' texts about content moderation in the form of question and answer: Question: amerxmirza I have reported him to make sure his account gets suspended. Disgusting and derogation remarks shouldn't be tolerated. Please report. Thanks Answer: The given text can be classified as describing content moderation as a SOLUTION. It emphasizes the positive effect of content moderation by advocating for reporting and suspending the account of a user who made "disgusting and derogatory remarks." The text acknowledges the need to not tolerate such behavior and encourages others to report it, highlighting the protective role of content moderation in addressing harmful content. Question: Channel link https://t.co/9ms7ukCI84 please block this channel this content very harmful https://t.co/zeJ512JPPx Answer: The given text can be classified as describing content moderation as a SOLUTION. It emphasizes the positive effect of content moderation by requesting the blocking of a specific channel due to its harmful content. The text recognizes the need to take action and protect users from potentially harmful content, aligning with the protective role of content moderation in addressing harmful material. Question: Finally could restore my account. Why twitter even require phone number in the first place? Aren't we supposed to have a choice not to show it thus not being banned after, if twitter bots decide you're one of them? #twitter #ElonMusk #FreedomInternet Answer: The given text can be classified as describing content moderation as a PROBLEM. It criticizes the requirement of providing a phone number by Twitter and questions the lack of choice in not showing it. The text implies frustration with the potential consequences of content moderation decisions made by Twitter bots, suggesting concerns about restrictions to freedom and potential biases in the platform's moderation practices. Question: Of all the issues we’re facing now, why’s social media regulation the goal for this people? Freedom of speech is a basic human right and any attempt to take it from Nigerians should be rejected #RejectNITDAbill #Notosocialmediaregulation Answer: The given text can be classified as describing content moderation as a PROBLEM. 
It emphasizes the negative effects of social media regulation, specifically mentioning the potential infringement on freedom of speech as a basic human right. The text opposes any attempt to take away this right from Nigerians, suggesting concerns about restrictions and censorship associated with content moderation. Question: BotGerad R6Jena zxdd account suspended xd Answer: Based on the given text, it does not explicitly emphasize the negative or positive effects of content moderation. Instead, it simply reports the suspension of an account without further elaboration. Therefore, the text can be classified as describing content moderation as NEUTRAL, as it does not emphasize either the problems or the solutions associated with content moderation. Question: elonmusk ellagirwin esthercrawford there seems to have been some mistake, renowned internet love-monger nickjfuentes was suspended from Twitter. Please do the right thing and re-instate his account for the sake of humanity Answer: Based on the given text, it is possible to interpret it differently. While the text does request the reinstatement of a suspended account, it does not explicitly mention any negative effects or problems related to content moderation. Therefore, an alternative classification could be that the text describes content moderation as NEUTRAL since it does not emphasize negative or positive effects. It simply requests the reinstatement of a specific account without further elaboration on the broader implications of content moderation. Now, is the following text describing content moderation as a problem, as a solution, or neither? [Paste a tweet here and remove the brackets] §.§ Dataset 3: US Congress Members Tweets (2017-2022) §.§.§ Task 1: Relevance “Political content” refers to a text that pertains to politics or government policies at the local, national, or international level. This can include political figures, events, or issues, as well as text that uses political language or hashtags. I will ask you to classify a text as relevant or irrelevant to the political content: Text is relevant if it uses political keywords or hashtags, mentions political figures or events, discusses policy issues such as immigration, abortion, foreign policy, health care, tax, or police shootings, or includes a link to news outlets or other political sources such as think tanks, political pundits or journalists, the White House, or the US Congress. Text is irrelevant if it does not fit the criteria above The following tweets are some examples of RELEVANT or IRRELEVANT texts to political contents in the form of question and answer: Question: What inspires me: being a warrior for human dignity, human potential. https://t.co/k6NXxcThaD Answer: Based on the given criteria, the text you provided is IRRELEVANT to political content. It does not include any political keywords or hashtags, mention political figures or events, discuss policy issues, or provide links to political sources. Instead, it focuses on concepts such as human dignity and human potential. Question: I hope this time brings you the opportunity to reflect, to #hope, and to know that you're loved by God. #MerryChristmas, #GA09! https://t.co/cOfwb9TxWa Answer: Based on the provided criteria, the text you provided is IRRELEVANT to political content. While it includes hashtags such as #hope, #MerryChristmas, and #GA09, it does not use any political keywords or discuss political figures, events, or policy issues. 
The focus of the text is on reflection, hope, and sending Christmas wishes. Question: Pres Trump wants another 1 percent for the richest 1 percent. 100 percent irresponsible. https://t.co/CawU4SpUTo Answer: Based on the provided criteria, the text you provided is relevant to political content. It mentions "Pres Trump" as a political figure, discusses a policy issue related to wealth distribution and taxation. Therefore, it meets the criteria for RELEVANT to political content. Question: I spoke on the floor of the House this morning to honor the life of longtime Syracuse Police Capt Richard Walsh https://t.co/6CdNF5jcLc Answer: Based on the provided criteria, the text you provided is RELEVANT to political content. It mentions speaking on the floor of the House, honoring the life of a police captain, and includes a link to a source (https://t.co/6CdNF5jcLc). This text pertains to government activities, a political event (speech on the House floor), and mentions a political figure (Syracuse Police Capt Richard Walsh). Now, is the following text relevant or irrelevant to political content? [Paste a tweet here and remove the brackets] §.§.§ Task 2: Policy Frames “Political content” refers to a text that pertains to politics or government policies at the local, national, or international level. This can include political figures, events, or issues, as well as text that uses political language or hashtags. I will ask you to classify a text as one of the frames defined below: * ECONOMY: The costs, benefits, or monetary/financial implications of the issue (to an individual, family, community, or to the economy as a whole). * Capacity and resources: The lack of or availability of physical, geographical, spatial, human, and financial resources, or the capacity of existing systems and resources to implement or carry out policy goals. * MORALITY: Any perspective—or policy objective or action (including proposed action)that is compelled by religious doctrine or interpretation, duty, honor, righteousness or any other sense of ethics or social responsibility. * FAIRNESS AND EQUALITY: Equality or inequality with which laws, punishment, rewards, and resources are applied or distributed among individuals or groups. Also the balance between the rights or interests of one individual or group compared to another individual or group. * POLICY PRESCRIPTION AND EVALUATION: Particular policies proposed for addressing an identified problem, and figuring out if certain policies will work, or if existing policies are effective. * LAW AND ORDER, CRIME AND JUSTICE: Specific policies in practice and their enforcement, incentives, and implications. Includes stories about enforcement and interpretation of laws by individuals and law enforcement, breaking laws, loopholes, fines, sentencing and punishment. Increases or reductions in crime. * SECURITY AND DEFENSE: Security, threats to security, and protection of one’s person, family, in-group, nation, etc. Generally an action or a call to action that can be taken to protect the welfare of a person, group, nation sometimes from a not yet manifested threat. * HEALTH AND SAFETY: Health care access and effectiveness, illness, disease, sanitation, obesity, mental health effects, prevention of or perpetuation of gun violence, infrastructure and building safety. * QUALITY OF LIFE: The effects of a policy on individuals’ wealth, mobility, access to resources, happiness, social structures, ease of day-to-day routines, quality of community life, etc. 
* POLITICAL: Any political considerations surrounding an issue. Issue actions or efforts or stances that are political, such as partisan filibusters, lobbyist involvement, bipartisan efforts, deal-making and vote trading, appealing to one's base, mentions of political maneuvering. Explicit statements that a policy issue is good or bad for a particular political party. * EXTERNAL REGULATION AND REPUTATION: The United States’ external relations with another nation; the external relations of one state with another; or relations between groups. This includes trade agreements and outcomes, comparisons of policy outcomes or desired policy outcomes. * OTHER: Any topic that does not fit into the above categories. The following tweets are some examples of these frames in the form of question and answer: Question: HURRY!!! Today is the last day to sign up for health insurance that begins on Jan 1. Visit https://t.co/rrKeGJOFBA to #GetCoveredNow. #ACA https://t.co/LCMQNHjCMN Answer: HEALTH AND SAFETY Question: The #CHOICEAct provides regulatory relief for community banks & credit unions promoting more economic opportunity → https://t.co/uOBmHKhrxkhttps://t.co/64WGHA1D2R Answer: ECONOMY Question: The #REINSAct signals our commitment to modeling reform that puts smart policy above tired politicking. https://t.co/GpOcD1NZO7 Answer: The text you provided falls under the frame of POLICY PRESCRIPTION AND EVALUATION, as it refers to the REINS Act and emphasizes the commitment to modeling reform and smart policy. Question: Tonight it was my distinct privilege to speak on the Senate floor in support of my friend & our Attorney General Jeff Sessions. https://t.co/UoIYp1R3ES Answer: POLITICAL Question: Thanks Astro_Kate7 for speaking w/students at her Alma mater VHS_Crusheds about her groundbreaking work on the International Space Station https://t.co/UXnh8STwaN Answer: OTHER Question: I always thought the best soldiers end up at Joint Base Lewis-McChord, but here's proof. Congrats to the 1st Special Forces Group (Airborne) sniper team! JBLM_PAO TaskandPurpose https://t.co/x8nX6HyYOQ Answer: SECURITY AND DEFENSE Question: As I told #SouthKorea leaders during my visit in Dec, US is committed to a strong alliance despite political turmoil https://t.co/8orrFs8atv Answer: EXTERNAL REGULATION AND REPUTATION Question: Proud to #StandWithGavin and all transgender students. Every child deserves to go to school & live as who they are free from discrimination. https://t.co/4uqpuHzbCd Answer: Based on the provided text, the best frame that fits the text is "FAIRNESS AND EQUALITY." The text expresses support for transgender students and emphasizes the importance of ensuring that every child can go to school and live free from discrimination. This relates to the equality or inequality with which laws and resources are applied or distributed among individuals or groups. In this case, it pertains to the rights and interests of transgender students compared to others, highlighting the need for fairness and equal treatment. Question: The prosecution of corruption by high ranking government officials, even years after the crimes were committed, is critical to... Answer: Based on the provided text, the best frame that fits the text is "LAW AND ORDER, CRIME AND JUSTICE." The text discusses the prosecution of corruption by high-ranking government officials, which falls under specific policies in practice, their enforcement, incentives, and implications. 
It involves the enforcement and interpretation of laws, the accountability of government officials, and the pursuit of justice in addressing corruption. Question: The Trump-Sessions "zero tolerance" family separation border policies are not required, right or moral. https://t.co/aAFX8Q6eKT Answer: Based on the provided text, the frame that best fits is: MORALIT. The text expresses a moral perspective on the Trump-Sessions "zero tolerance" family separation border policies, stating that they are not required, right, or moral. It addresses the ethical dimension and social responsibility associated with these policies. Question: Wisconsin is full of great role models and leaders. Congratulations to all of the outstanding women honored by the La Crosse YWCA, and thank you for making the coulee region a better place to live! https://t.co/mj1HK4PwzI Answer: Based on the provided text, the frame that best fits is: QUALITY OF LIFE. The text highlights the positive impact of the outstanding women honored by the La Crosse YWCA in making the coulee region a better place to live. It emphasizes the effects of their contributions on the quality of life in the community. Now, which of the above frames best fit the following text? Answer with only the option above that is most accurate and nothing else. [Paste a tweet here and remove the brackets] §.§ Dataset 4: Content Moderation News Articles (2020-2021) §.§.§ Task 1: Relevance “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. I will ask you to classify a text as relevant or irrelevant to the content moderation: A: Text is RELEVANT if it includes: social media platforms’ content moderation rules and practices, censorship, governments’ regulation of online content moderation, and/or mild forms of content moderation like flagging, shadowbanning, or account suspension. B: Text is IRRELEVANT if they do not refer to content moderation, as defined above. This would include, for example, a tweet by Trump that Twitter has labeled his tweet as “disputed”, or a tweet claiming that something is false. The following texts are some examples of 'RELEVANT' or 'IRRELEVANT' texts to content moderation in the form of question and answer: Question: TORONTO - Ontario Premier Doug Ford on Monday said the United States had blocked the delivery of nearly three million face masks at the American border over the weekend. Ford said restrictions on shipments at the U.S. border have left the province with just one more week's worth of personal protective equipment for health-care workers fighting the coronavirus outbreak in Ontario. In a statement today, he says Ontario is ramping up its own production of personal protective equipment, but most of those supplies are weeks away from being in the hands of front-line health workers. At least 451 health-care workers in Ontario have tested positive for COVID-19, representing about 10 per cent of all cases in the province. In all, Ontario reported 309 new COVID-19 cases today, including 13 new deaths. There have now been a total of 4,347 cases in the province, including 1,624 patients who have recovered and 132 deaths. Allies of the United States are complaining about its "Wild West" tactics in outbidding or blocking shipments to buyers who have already signed deals for medical equipment. 
Prime Minister Justin Trudeau sidestepped reporters' questions about the incident on Monday, saying his government was in productive talks with the United States and adding: "We expect Answer: The provided text is 'IRRELEVANT' to content moderation. It discusses the United States blocking the delivery of face masks at the border, Ontario's shortage of personal protective equipment, COVID-19 cases and deaths, and complaints about the United States' tactics in outbidding or blocking shipments. It does not refer to social media platforms, content moderation rules, censorship, or any forms of online content moderation. Question: A look at the first quarter of the year that was 2019. January 3: China's "Chang'e 4" is the first space probe to land on the far side of the moon. January 11: Macedonia is now North Macedonia, ending a row with Greece and paving the way for NATO membership and EU accession talks. The Greek parliament ratifies the historic name agreement on January 25.January 13: The mayor of the Polish city of Gdansk, Pawel Adamowicz, 53, is stabbed to death by a previously convicted bank robber during a fundraiser. January 15: Attackers claimed by the Somalia-based group al-Shabaab storm the upmarket Dusit hotel in the Kenyan capital Nairobi, killing more than 20. January 18: After four months of political deadlock in Sweden, Social Democratic leader Stefan Lofven wins a vote in parliament to form a government. January 18: At least 109 people are killed when a fuel pipeline explodes in the Mexican city of Tlahuelilpan. January 22: The EU Commission imposes a fine of 570 million euros on the credit card company Mastercard for artificially pushing up the cost of card payments. January 23: Juan Guaido, the head of Venezuela's opposition-dominated National Assembly, declares himself the country's interim president. January 24: Felix Tshisekedi is Answer: The provided text is 'IRRELEVANT' to content moderation. It discusses various events that took place in the first quarter of 2019, including space exploration, political developments, criminal incidents, and international affairs. It does not refer to social media platforms, content moderation rules, censorship, or any forms of online content moderation. Question: Bhubaneswar, Oct. 29 – New Delhi: The Supreme Court has severely criticised the growing trend of police in different States summoning individuals from far corners of the country over social media postings. Hearing a petition concerning a Delhi resident Roshni Biswas who was reportedly summoned by Bengal Police for posting objectionable content on Facebook, the Bench of Justices DY Chandrachud and Indira Banerjee noted that police's power to issue summons under Section 41A of the Code of Criminal Procedure (CrPC) cannot be used to intimidate, threaten and harass. As per reports, the apex court's comment was prompted by Bengal Police issuing summons to the 29-year-old woman who, in a Facebook post, had criticised the Mamata government for non-enforcement of lockdown norms. The FIR which relies on FB links contains a statement that the posts implied the State administration was going soft on the violation of the lockdown at Rajabazar as the area is predominantly inhabited by a particular community and that the administration is complacent while dealing with lockdown violations caused by a certain segment of the community. 
Mahesh Jethmalani, learned senior counsel appearing on behalf of the petitioner submitted that the petitioner has stated on oath that she disclaims any association with Answer: The provided text is 'RELEVANT' to content moderation. It discusses the Supreme Court's criticism of the trend of police summoning individuals from different states over social media postings. The text mentions the power of police to issue summons and emphasizes that it should not be used to intimidate or harass individuals. It also refers to a specific case involving objectionable content posted on Facebook and the implications of the posts regarding the state administration and lockdown violations. These aspects are related to content moderation rules, the actions of law enforcement, and the impact of social media postings. Therefore, the text falls within the scope of relevance to content moderation as defined in our previous classification. Question: Facebook and Instagram took down video tributes to George Floyd posted by the Trump campaign over copyright complaints on Friday, following a similar decision by Twitter - broadening the latest skirmish over the policing of online messages from President Donald Trump and his allies. Facebook and its subsidiary Instagram removed posts by official Trump campaign accounts that included videos narrated by Trump discussing Floyd's death in Minneapolis. The narration is played over a series of photographs and videos that appear to have been taken during recent protests around the country over Floyd's killing, Politico reported. We received a copyright complaint from the creator under the Digital Millennium Copyright Act and have removed the post," Facebook Spokesperson Andy Stone told POLITICO in an email, adding, "Organizations that use original art shared on Instagram are expected to have the right to do so. The move by the companies follows a parallel action by Twitter, which on Thursday morning disabled the same video included in a pair of tweets by TeamTrump and TrumpWarRoom 2020 campaign accounts, also citing an unspecified complaint under the Digital Millennium Copyright Act. The company left up the tweets but placed an overlay on the videos that reads, This media has been Answer: The provided text is 'RELEVANT' to content moderation. It discusses the actions taken by Facebook and Instagram to take down video tributes to George Floyd posted by the Trump campaign due to copyright complaints. The text mentions the removal of posts by official Trump campaign accounts on Facebook and Instagram, as well as the similar decision made by Twitter to disable the same video on its platform. These actions relate to content moderation rules and practices, specifically in terms of copyright infringement. Therefore, the text falls within the scope of relevance to content moderation as defined in our previous classification. Now, is the following text relevant or irrelevant to content moderation? [Paste a news article text here and remove the brackets] §.§.§ Task 2: Problem/Solution Frames “Content moderation” refers to the practice of screening and monitoring content posted by users on social media sites to determine if the content should be published or not, based on specific rules and guidelines. 
I will ask you to classify a text as describing content moderation as a problem, as a solution, or neither: A: Text describes content moderation as a PROBLEM if they emphasize negative effects of it, such as restrictions to free speech, censorship, or the biases that can emerge from decisions regarding what users are allowed to post. B: Text describes content moderation as a SOLUTION if they emphasize positive effects of it, such as protecting users from harmful content such as hate speech, misinformation, illegal adult content, or spam. C: Text describes content moderation as NEUTRAL if they do not emphasize negative or positive effects of content moderation. For example if they simply report on the content moderation activity of social media platforms without linking them to potential advantages or disadvantages for users or stakeholders. The following texts are some examples of 'PROBLEM', 'SOLUTION' or 'NEUTRAL' texts about content moderation in the form of question and answer: Question: Twitter removed a "misleading" tweet downplaying the efficacy of masks posted by a top coronavirus adviser to President Donald Trump, while U.S. cases surged before the Nov. 3 election, Trend reports citing Reuters. As the Trump administration fends off accusations that its mixed messaging on wearing masks hampered the fight against the coronavirus, Dr. Scott Atlas continued to minimize the importance of masks with a Twitter post on Saturday, saying, "Masks work? NO." Twitter Inc removed the tweet on Sunday, saying it violated its misleading information policy on COVID-19, which targets statements that have been confirmed to be false or misleading by subject-matter experts. The White House had no immediate comment on the decision. New infections have been rising fast in the United States, according to a Reuters analysis, with more than 69,400 reported on Friday, up from 46,000 a month ago. Total U.S. cases have surpassed 8 million. Trump, who was hospitalized with the disease for three nights in early October, has been criss-crossing the country in a surge of 11th-hour campaigning as he lags in many public opinion polls. His rallies draw thousands of supporters in close quarters, with many not wearing masks despite federal coronavirus guidelines. Despite data showing otherwise, Trump has said Answer: The provided text can be classified as describing content moderation as a SOLUTION. It highlights how Twitter removed a tweet that downplayed the efficacy of masks, stating that it violated their policy on misleading information related to COVID-19. This demonstrates content moderation as a means to address and mitigate the spread of false or misleading information. By removing the tweet, Twitter aims to protect users from harmful content and ensure accurate information is shared regarding public health during the pandemic. Question: OAKLAND, Calif. - Facebook has banned an extremist anti-government network loosely associated with the broader "boogaloo" movement, a slang term supporters use to refer to a second Civil War or a collapse of civilization. But the platform didn't try to name the group, underscoring the difficulty of grappling with an amorphous network linked to a string of domestic terror plots that appears to obfuscate its existence. Among other complications, its internet-savvy members tend to keep their distance from one another, frequently change their symbols and catch phrases and mask their intentions with sarcasm. 
The move by Facebook designates this group as a dangerous organization similar to the Islamic State group and white supremacists, both of which are already banned from its service. The social network is not banning all references to "boogaloo" and said it is only removing groups, accounts and pages when they have a "clear connection to violence or a credible threat to public safety." The loose movement is named after "Breakin' 2: Electric Boogaloo," a 1984 sequel to a movie about breakdancing. "Boogaloo" supporters have shown up at protests over COVID-19 lockdown orders, carrying rifles and wearing tactical gear over Hawaiian shirts - themselves a reference to "big luau," a Answer: Based on the provided text, it can be classified as describing content moderation as a SOLUTION. The text highlights Facebook's action of banning an extremist anti-government network associated with the "boogaloo" movement, which is linked to domestic terror plots. Facebook's move is presented as designating the group as a dangerous organization, similar to the Islamic State group and white supremacists, and removing groups, accounts, and pages with a clear connection to violence or a credible threat to public safety. This portrays content moderation as a measure to protect public safety and prevent the spread of extremist content. Question: Florida Governor Ron DeSantis announced this week that he would fine social media companies that ban political candidates. Every outlet from Fox News to MSNBC fired off missives about the bill. What got lost in the news coverage is that Silicon Valley deplatforms very few politicians, save shock-jocks like Donald Trump and Laura Loomer (if you want to call her a politician). The same cannot be said for sex workers. This month, Centro University released a study estimating that 46 percent of adult influencers reported losing access to Twitter or Instagram in the last year. The bans put a permanent dent in the stars’ income, with Centro estimating sex workers lose $260 million a year due to social media bans. You won’t hear DeSantis, Fox News, Glenn Greenwald, or any other so-called free speech warriors decrying porn stars’ lost incomes, so let me break down how social media companies are screwing over porn stars (and not screwing them in a good way!). Silicon Valley titans have revoked my social media access multiple times. Take my recent Snapchat ban. The Santa Monica-based app barred me from posting on my public account, so I lost the means to communicate with fans who would Answer: Based on the provided text, it can be classified as describing content moderation as a PROBLEM. The text criticizes social media companies for their content moderation practices, particularly regarding the ban on sex workers and adult influencers. It suggests that these bans have negative consequences, such as the loss of income for sex workers, and highlights the perceived lack of concern or attention from politicians and free speech advocates. The text emphasizes the negative effects of content moderation, including restrictions on speech and the impact on individuals' livelihoods, indicating that it views content moderation as a problem. Question: TALLAHASSEE – Gov. Ron DeSantis' call for punishing social media sites that deplatformed former President Donald Trump narrowly cleared a Senate committee Monday and soon will be ready for a full vote in the Legislature. Sen. Jeff Brandes, R-St. Petersburg, was the lone Republican who argued against the proposal by fellow Republican Sen. 
Ray Rodrigues of Naples. Brandes labeled it a "big government bill." "This Senate is currently filled with small government Republicans who do believe that government shouldn't be in the lives of businesses," Brandes said. He added: "This is the exact opposite of the things that we stand for." But Rodrigues argued back that the measure doesn't defy free market principles. The bill (SB 7072) orders social media companies to publish standards with detailed definitions of when someone would be censored or blocked, and makes companies subject to as much as $100,000 fines for deplatforming a Florida candidate. "I'm bringing you good policy supported by your constituents," Rodrigues said. The measure was approved 10-9 by the Appropriations Committee, its last stop before going to the Senate floor. A similar measure is ready for a full House vote. State and federal courts have generally taken a hands-off view involving regulating online platforms. Congress also has not Answer: Based on the provided text, it can be classified as describing content moderation as a PROBLEM. The text highlights the debate and disagreement surrounding a proposed bill that aims to punish social media sites for deplatforming former President Donald Trump. Senator Jeff Brandes argues against the proposal, labeling it a "big government bill" and stating that it goes against the principles of small government Republicans. This indicates a negative view of content moderation, particularly in terms of government intervention and potential restrictions on businesses. While Senator Ray Rodrigues argues in favor of the bill, the emphasis in the text is on the opposition and criticism, suggesting a portrayal of content moderation as a problem. Question: A scathing new report released by hedge fund Hindenburg Research claims that start-up Nikola is an 'intricate fraud' based on years of lies and fake products•Hindenburg claimed to have 'extensive evidence' that the company's proprietary technology was purchased from another company•The fund also accused Nikola's founder Trevor Milton of making countless false statements over the last decade and faking a product promotional video•When filming the Nikola One ad, the truck reportedly didn't have an engine•So the company reportedly rolled the prototype along a downhill stretch of a highway and filmed it as if it was being driven•In a tweet, Milton called the report a 'hit job' and asked the public to 'give me a few hours to put together responses to their lies'•Nikola's stock prices plummeted as much as 13 percent Thursday A scathing new report released by hedge fund Hindenburg Research claims that start-up Nikola is an 'intricate fraud' based on years of lies and fake products Hindenburg claimed to have 'extensive evidence' that the company's proprietary technology was purchased from another company The fund also accused Nikola's founder Trevor Milton of making countless false statements over the last decade and faking a product promotional video When filming the Nikola One ad, the Answer: The provided text does not explicitly discuss content moderation or its effects. It focuses on a scathing report released by Hindenburg Research that accuses the startup company Nikola of fraud and making false statements. The text does not emphasize the positive or negative effects of content moderation, nor does it link the topic to potential advantages or disadvantages. Therefore, the text can be classified as NEUTRAL regarding the description of content moderation. 
Question: Some of the toughest ads against Donald Trump are driven by lifelong Republicans unable to stomach the direction their party has taken. Washington: Rick Wilson apologises for running late for our phone interview: it's been a frantic morning for the veteran Republican ad-maker and his colleagues at the Lincoln Project. The anti-Trump group has just released its latest advertisement, slamming the US President for suggesting that the November 3 election may need to be delayed. In the half hour since the ad ??? titled We Will Vote ??? went live, it has already racked up more than 250,000 views online. That's nothing unusual for the operatives at the Lincoln Project, who have been pumping out attack ads at a prolific rate over recent months. "We push really fast all the time," Wilson says. "We drive ourselves and our team very hard because we think we are pursuing a worthwhile endeavour and we know it works." The group's co-founders include Steve Schmidt, who ran Republican nominee John McCain's 2008 campaign, and conservative lawyer George Conway, the husband of top Trump aide Kellyanne Conway. Having spent most of their adult lives working to get Republicans elected, they are now producing some of the toughest anti-Trump ads on Answer: The provided text describes the Lincoln Project, an anti-Trump group that releases attack ads against the US President. While the text does not explicitly discuss content moderation, it focuses on the activities and efforts of the Lincoln Project in creating and disseminating ads. It does not emphasize the positive or negative effects of content moderation or link it to potential advantages or disadvantages. Therefore, the text can be classified as NEUTRAL regarding the description of content moderation. Now, is the following text describing content moderation as a problem, as a solution, or neither? [Paste a news article text here and remove the brackets]
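The templates in this appendix are written to be pasted into a chat interface, but the same classification can be scripted. The sketch below shows how the relevance prompt for the content-moderation tweets might be filled with a single text and submitted through the OpenAI Python client; the client library, model name, temperature, and answer-parsing rule are illustrative assumptions and not necessarily the settings used for the annotations reported here.

```python
# Minimal sketch: filling the relevance prompt above with a single tweet and
# querying a chat-based LLM. Assumes the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY environment variable; the model name, temperature, and
# answer parsing are illustrative choices, not the exact settings of this appendix.
from openai import OpenAI

RELEVANCE_PROMPT = (
    '"Content moderation" refers to the practice of screening and monitoring '
    "content posted by users on social media sites to determine if the content "
    "should be published or not, based on specific rules and guidelines.\n"
    "I will ask you to classify a text as relevant or irrelevant to content moderation:\n"
    "A: Text is RELEVANT if it includes: social media platforms' content moderation "
    "rules and practices, censorship, governments' regulation of online content "
    "moderation, and/or mild forms of content moderation like flagging, "
    "shadowbanning, or account suspension.\n"
    "B: Text is IRRELEVANT if it does not refer to content moderation, as defined above.\n\n"
    "Now, is the following text relevant or irrelevant to content moderation?\n{tweet}"
)

def classify_relevance(tweet: str, model: str = "gpt-3.5-turbo") -> str:
    """Return 'RELEVANT' or 'IRRELEVANT' for one tweet (hypothetical helper)."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        temperature=0.2,  # low temperature for more deterministic labels
        messages=[{"role": "user", "content": RELEVANCE_PROMPT.format(tweet=tweet)}],
    )
    answer = response.choices[0].message.content.upper()
    # Crude parsing: check IRRELEVANT first, since it contains RELEVANT as a substring.
    if "IRRELEVANT" in answer:
        return "IRRELEVANT"
    return "RELEVANT" if "RELEVANT" in answer else "UNPARSED"

if __name__ == "__main__":
    print(classify_relevance("My account was suspended without any explanation."))
```

Looping such a helper over a table of tweets and storing the returned labels reproduces the zero-shot workflow; the few-shot variants above can be handled by prepending the example question-answer pairs to the prompt string.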
http://arxiv.org/abs/2307.02950v1
20230706123716
Electronic correlations and energy gap in the bilayer nickelate La$_{3}$Ni$_{2}$O$_{7}$
[ "Zhe Liu", "Mengwu Huo", "Jie Li", "Qing Li", "Yuecong Liu", "Yaomin Dai", "Xiaoxiang Zhou", "Jiahao Hao", "Yi Lu", "Meng Wang", "Hai-Hu Wen" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.str-el" ]
Zhe Liu^1,∗, Mengwu Huo^2,∗, Jie Li^1,∗, Qing Li^1, Yuecong Liu^1, Yaomin Dai^1,†, Xiaoxiang Zhou^1, Jiahao Hao^1, Yi Lu^1, Meng Wang^2,†, & Hai-Hu Wen^1,† ^1National Laboratory of Solid State Microstructures and Department of Physics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China ^2Center for Neutron Science and Technology, Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, School of Physics, Sun Yat-Sen University, Guangzhou, Guangdong 510275, China ^∗These authors contributed equally to this work. ^†email: ymdai@nju.edu.cn; wangmeng5@mail.sysu.edu.cn; hhwen@nju.edu.cn The discovery of superconductivity with a critical temperature of 80 K in La_3Ni_2O_7 under pressure has received enormous attention. La_3Ni_2O_7 is not superconducting under ambient pressure but exhibits a density-wave-like transition at T^∗≃ 115 K. Understanding the electronic correlations, charge dynamics and dominant orbitals are important steps towards the mechanism of superconductivity and other instabilities. Here, our optical study shows that La_3Ni_2O_7 features strong electronic correlations which significantly reduce the electron's kinetic energy and place it in the proximity of the Mott phase. The low-frequency optical conductivity reveals two Drude components arising from multiple bands dominated by the Ni-d_x^2 - y^2 and Ni-d_3z^2 - r^2 orbitals at the Fermi level. Above T^∗, the scattering rates for both Drude components vary linearly with temperature, indicating non-Fermi-liquid behavior which may be associated with spin-fluctuation scattering. Below T^∗, a gap opens in the Ni-d_3z^2 - r^2 orbital, suggesting the importance of the Ni-d_3z^2 - r^2 orbital in the density-wave-like instability. Our experimental results provide key insights into the mechanism of the density-wave-like order and superconductivity in La_3Ni_2O_7. Since the discovery of superconductivity with a transition temperature T_c≃ 9–15 K in the thin films of hole-doped infinite-layer nickelates Nd_1-xSr_xNiO_2 <cit.>, tremendous efforts have been made to find more superconducting nickelates and raise their T_c. To date, several other doped rare-earth nickelates such as (La/Pr)_1-xSr_xNiO_2 <cit.> and La_1-xCa_xNiO_2 <cit.>, and the stoichiometric quintuple-layer Nd_6Ni_5O_12 <cit.> have been found to exhibit superconductivity with a T_c up to 17 K. Applying pressure can enhance the onset superconducting temperature of Pr_0.82Sr_0.18NiO_2 monotonically from 17 K at ambient pressure to 31 K at 12.1 GPa without showing any trend towards saturation <cit.>. Despite the above progress achieved in thin films, the search for evidence of superconductivity in bulk materials seems extraordinarily challenging <cit.>. Recently, traces of superconductivity with a T_c = 80 K were observed in bulk single crystals of the bilayer Ruddlesden-Popper (R-P) phase La_3Ni_2O_7 under high pressure <cit.>, arousing a flurry of excitement in the community of high-T_c superconductivity <cit.>. Under ambient pressure, La_3Ni_2O_7 crystallizes in the orthorhombic Amam structure. It is a paramagnetic metal with a phase transition at about 120 K <cit.> which has been suggested to be a charge density wave (CDW) <cit.>. The application of pressure induces a structural transition from the Amam to Fmmm structure at about 10 GPa, and superconductivity with a maximum T_c of 80 K emerges between 14 and 43.5 GPa.
Theoretical work has underlined the crucial role of Ni-3d orbitals and electronic correlations in the high-T_c superconductivity in La_3Ni_2O_7 <cit.>. In this context, understanding the electronic correlations, charge dynamics and dominant orbitals in La_3Ni_2O_7, as well as the analogies and differences between this compound and cuprates from an experimental viewpoint are important steps towards the mechanism of the high-T_c superconductivity and other instabilities in La_3Ni_2O_7. In this work, we investigate the optical properties of La_3Ni_2O_7. We find a substantial reduction of the electron's kinetic energy from the optical spectrum compared to that from band theory due to strong electronic correlations, which places La_3Ni_2O_7 near the Mott phase. Two Drude components are resolved in the low-frequency optical conductivity and ascribed to multiple bands predominantly contributed by the Ni-d_3z^2 - r^2 and Ni-d_x^2 - y^2 orbitals at the Fermi level. In the pristine phase, non-Fermi-liquid behavior, i.e. linear temperature dependence of the quasiparticle scattering rate, is observed for both Drude components, which may indicate spin-fluctuation scattering. The Drude component associated with the Ni-d_3z^2 - r^2 orbital is partially gapped below the density-wave-like transition, suggesting that the Ni-d_3z^2 - r^2 orbital is crucial to the formation of the density-wave-like order. Our experimental results provide pivotal information for understanding the mechanism of the density-wave-like order and superconductivity in La_3Ni_2O_7. §.§ Results §.§.§ Reflectivity and optical conductivity Figure 1a displays the temperature-dependent resistivity ρ(T)/ρ(300 K) of La_3Ni_2O_7 (red solid curve). While typical metallic behavior is realized from 300 down to 2 K, a kink occurs at about T^∗≃115 K, which has been observed in previous studies <cit.> and ascribed to a charge-density-wave (CDW) transition <cit.>. Figure 1b shows the far-infrared reflectivity R(ω) of La_3Ni_2O_7 at different temperatures; the spectra below 125 K are shifted down by 0.5 to better resolve the temperature dependence. R(ω) of La_3Ni_2O_7 approaches unity in the zero-frequency limit and increases with decreasing temperature in the far-infrared range, corroborating the metallic nature of the material. Below 125 K, the low-frequency R(ω) continues rising, whereas a suppression of R(ω) occurs between 400 and 1200 cm^-1, resembling the optical response of a density-wave gap <cit.>. A comparison between the temperature dependence of R(ω) at 800 cm^-1 (Fig. 1d) and ρ(T)/ρ(300 K) (Fig. 1a) links the suppression in R(ω) to the density-wave-like transition at T^∗≃115 K. Figure 1c displays the real part of the optical conductivity σ_1(ω) for La_3Ni_2O_7 at different temperatures; the data below 125 K are shifted down by 1250 Ω^-1cm^-1 to show the temperature dependence more clearly. The temperature dependence of 1/σ_1(ω→ 0) (blue open circles in Fig. 1a) is compared with ρ(T)/ρ(300 K) (red solid curve in Fig. 1a) to verify the agreement between optical and transport measurements. A Drude peak is observed in the low-frequency σ_1(ω), which is the optical fingerprint of metals. As the temperature is lowered from 300 K to just above T^∗, a characteristic narrowing of the Drude response is observed. The narrowing of the Drude peak leads to a suppression of the high-frequency σ_1(ω) and an enhancement of the low-frequency σ_1(ω).
Below T^∗, the Drude peak is suppressed and the spectral weight [the area under σ_1(ω)] is transferred to high frequency, resulting in a suppression of the low-frequency σ_1(ω) and an enhancement of the high-frequency σ_1(ω), which is opposite to the effect of the Drude peak narrowing. Figure 1e plots the value of σ_1(ω) at 1000 cm^-1 as a function of temperature. The increase of σ_1(1000 cm^-1) occurs at T^∗, indicating that the spectral weight transfer from low to high frequency is intimately related to the density-wave-like transition at T^∗≃ 115 K. Such a spectral weight transfer from low to high frequency is consistent with the behavior of a density-wave gap in σ_1(ω) <cit.>. §.§.§ Drude-Lorentz analysis and theoretical calculations To further understand the optical spectra of La_3Ni_2O_7, we fit the measured σ_1(ω) to the Drude-Lorentz model, σ_1(ω) = (2π/Z_0){∑_k ω_p,k^2/[τ_k(ω^2+τ_k^-2)] + ∑_i γ_iω^2ω_p,i^2/[(ω_0,i^2 - ω^2)^2 + γ_i^2ω^2]}, where Z_0≃ 377 Ω is the impedance of free space. The first term refers to a sum of Drude components which describe the optical response of free carriers or intraband transitions; each is characterized by a plasma frequency ω_p and a quasiparticle scattering rate 1/τ. The square of the plasma frequency (Drude weight) is ω_p^2 = Z_0ne^2/(2π m^∗), where n and m^∗ are the carrier concentration and effective mass, respectively. The second term represents a sum of Lorentzian oscillators that are used to model localized carriers or interband transitions. In the Lorentz term, ω_0,i, γ_i, and ω_p,i are the resonance frequency (position), damping (line width), and plasma frequency (strength) of the ith excitation. The cyan solid curve in Fig. 2a denotes the measured σ_1(ω) at 150 K, and the black dashed line through the data represents the Drude-Lorentz fitting result, which is decomposed into a narrow Drude component (D1, red shaded area), a broad Drude component (D2, blue hatched area) and a series of Lorentz components (L1, orange hatched area; L2, green hatched area; L3, purple hatched area; LH, grey hatched area). The inset of Fig. 2a shows the fitting result below 3000 cm^-1, highlighting the two Drude components. The presence of two Drude components, which has been widely observed in iron-based superconductors <cit.>, indicates the existence of multiple bands crossing the Fermi level (E_F). In order to elucidate the origin of the two Drude components in σ_1(ω), we calculated the electronic band structure of La_3Ni_2O_7 using first-principles density functional theory (DFT). In the calculated band structure (Fig. 2b), there are multiple bands crossing E_F: a flat hole-like band near the Γ point and several broad electron-like bands near the Γ and S (R) points. While the flat hole-like band and the broad electron-like band near the Γ point arise from the Ni-d_3z^2 - r^2 and Ni-d_x^2 - y^2 orbitals, respectively, the electron-like band near the S (R) point originates from mixed Ni-d_3z^2 - r^2 and Ni-d_x^2 - y^2 orbitals <cit.>. Clearly, the two Drude components in σ_1(ω) are associated with these bands. Further insights may be obtained from the width of the Drude components. For instance, as the Drude response stems from intraband transitions, its width is restricted to the band width. Hence, a flat (narrow) band at E_F is expected to produce a narrow Drude component in σ_1(ω), as observed in MnBi_2Te_4 <cit.>. Moreover, a narrow band of heavy charge carriers in heavy fermion compounds has also been shown to produce a very narrow Drude peak in σ_1(ω) <cit.>.
In this context, it is reasonable to attribute D1 (the narrow Drude component) to the flat hole-like band arising from the Ni-d_3z^2 - r^2 orbital near the Γ point, and ascribe D2 (the broad Drude component) to the broad electron-like bands contributed by the Ni-d_x^2 - y^2 or mixed Ni-d_x^2 - y^2 and Ni-d_3z^2 - r^2 orbitals near the Γ and S (R) points. The Lorentz components (L1, L2, L3 and LH) in σ_1(ω) originate from interband electronic transitions. Figure 2c depicts the calculated σ_1(ω) for La_3Ni_2O_7, which reproduces the features associated with interband transitions reasonably well. Nevertheless, the peak positions of the interband transitions in the experimental σ_1(ω) are all shifted to slightly lower energies than those in the calculated σ_1(ω). This discrepancy between the measured and calculated σ_1(ω) is most likely related to electronic correlations <cit.> which are not taken into account in DFT calculations. §.§.§ Electron's kinetic energy and electronic correlations It is noteworthy that the Drude profile in the measured σ_1(ω) (Fig. 2a) has significantly smaller weight than that in the calculated σ_1(ω) (Fig. 2c), indicating strong electronic correlations in La_3Ni_2O_7. The electronic correlations in a material can be obtained from the ratio K_exp/K_band <cit.>, where K_exp and K_band refer to the experimental kinetic energy and the kinetic energy from band theory (DFT calculations), respectively. The kinetic energy of electrons is given by <cit.> K = (2ħ^2 c_0/π e^2)∫_0^ω_cσ_1(ω)dω, where c_0 is the c-axis lattice parameter, and ω_c is a cutoff frequency covering the entire Drude component in σ_1(ω). Usually, the frequency at which σ_1(ω) reaches a minimum is taken as ω_c, because this point separates the intraband and interband excitations. Using ω_c = 2300 cm^-1 for both K_exp and K_band, the value of K_exp/K_band = 0.072 is obtained for La_3Ni_2O_7. Here, we would like to mention that changing ω_c around the minimum in σ_1(ω) does not lead to a significant change in the value of K_exp/K_band. In Fig. 2d, we summarize K_exp/K_band for La_3Ni_2O_7 (solid star) and some other representative materials (open symbols). For conventional metals such as Ag and Cu, K_exp/K_band is close to unity, indicating negligible electronic correlations. In sharp contrast, the Mott insulator, e.g. the parent compound of the high-T_c cuprate superconductor La_2CuO_4, has a vanishingly small K_exp/K_band, because the motion of electrons is impeded by strong on-site Coulomb repulsion, resulting in a substantial reduction of K_exp compared to K_band. K_exp/K_band in iron-based superconductors, for example LaOFeP and BaFe_2As_2, lies between conventional metals and Mott insulators, thus being categorized as moderately correlated materials <cit.>. The value of K_exp/K_band places La_3Ni_2O_7 in the proximity of the Mott insulator phase, closely resembling the doped cuprates <cit.>. This result suggests that in La_3Ni_2O_7, electronic correlations play an important role in the charge dynamics. §.§.§ Temperature dependence of the charge dynamics By applying the Drude-Lorentz analysis to the measured σ_1(ω) at all temperatures, we extracted the temperature dependence of the Drude parameters. Figure 3a, b show the temperature dependence of the weight for D1 (ω^2_p, D1) and D2 (ω^2_p, D2), respectively. Above T^∗≃115 K, neither ω^2_p, D1 nor ω^2_p, D2 varies with temperature, suggesting no change of the Fermi surface or electronic band structure.
Below T^∗, an abrupt drop by 30% occurs in ω^2_p, D1, implying that a partial gap opens on the Fermi surface associated with D1, and about 30% of the Fermi surface is removed by the gap. No change is observed for ω^2_p, D2 across T^∗, suggesting that the Fermi surface associated with D2 is not affected by the density-wave-like transition at T^∗. Since D1 arises from the flat hole-like band with Ni-d_3z^2 - r^2 orbital character near the Γ point, the gap opening in D1 indicates that the Ni-d_3z^2 - r^2 orbital plays an important role in driving the density-wave-like transition at T^∗. Panels c and d in Fig. 3 plot the scattering rates of D1 (1/τ_D1) and D2 (1/τ_D2) as a function of temperature, respectively. At high temperatures, both 1/τ_D1 and 1/τ_D2 vary linearly with temperature, which is the well-known non-Fermi-liquid behavior. Interestingly, while 1/τ_D2 exhibits the non-Fermi-liquid behavior all the way down to the lowest temperature, 1/τ_D1 deviates from the T-linear response (the black dashed line) below T^∗. The inset of Fig. 3c shows that below T^∗, 1/τ_D1 follows a quadratic temperature dependence 1/τ_D1∝ T^2, indicating Fermi-liquid behavior. This implies that the density-wave-like transition at T^∗ strongly influences the quasiparticle scattering process on the hole-like Fermi surface formed by the Ni-d_3z^2 - r^2 orbital near the Γ point. §.§.§ Energy scale of the gap and spectral weight redistribution The density-wave-like transition at T^∗ coincides with the opening of a partial gap, which leads to a suppression of ω^2_p, D1 accompanied by a spectral weight transfer from low to high frequency in σ_1(ω). The energy scale of the gap and the related spectral weight redistribution contain important information. The gap value Δ can be determined from the difference optical conductivity <cit.> Δσ_1(ω) = σ_1^T < T^∗(ω) - σ_1^N(ω), where σ_1^T < T^∗(ω) and σ_1^N(ω) denote σ_1(ω) at T < T^∗ and σ_1(ω) in the normal state, respectively. Here, for La_3Ni_2O_7, σ_1(ω) at 125 K is used as σ_1^N(ω). Figure 4a shows Δσ_1(ω) at 5 K, in which the zero-crossing point, as indicated by the black arrow, corresponds to 2Δ = 100.5 meV, resulting in a ratio of 2Δ/k_BT^∗ = 10.14 that is much larger than the weak-coupling BCS value 3.52. Furthermore, the temperature dependence of Δ (orange solid circles in Fig. 4c) deviates from the BCS mean-field behavior (blue solid line in Fig. 4c) near the transition temperature T^∗. To find out the energy scale of the gap-induced spectral weight redistribution, we examine the frequency and temperature dependence of the spectral weight defined as S(ω) = ∫_0^ωσ_1(ω')dω'. As shown in Fig. 4b, the spectral weight ratio S(5 K)/S(125 K) as a function of frequency gives the direction and energy scale of the spectral weight transfer associated with the density-wave-like transition at T^∗. In the low-frequency limit, the large value of S(5 K)/S(125 K) results from the narrowing of the Drude peak at low temperatures. With increasing frequency, S(5 K)/S(125 K) decreases steeply and reaches a minimum at about 600 cm^-1. This indicates that the spectral weight below 600 cm^-1 is significantly suppressed at 5 K. Note that the spectral weight below 600 cm^-1 is mainly contributed by D1 (see the inset of Fig. 2a), so the sharp decrease of S(5 K)/S(125 K) is caused by the suppression of ω^2_p, D1 due to the gap opening. As the frequency further increases, S(5 K)/S(125 K) rises monotonically and reaches unity (black dashed line) at about 6000 cm^-1.
This behavior suggests that the low-frequency spectral weight lost due to the opening of the gap is retrieved in a very broad frequency range up to 6000 cm^-1. Figures 4d-f plot S(T)/S(300 K) for different cutoff frequencies as a function of temperature. For low cutoff frequencies, such as ω_c = 600 cm^-1 (Fig. 4d) and ω_c = 1000 cm^-1 (Fig. 4e), S(T)/S(300 K) increases upon cooling from 300 K. This effect is caused by the narrowing of the Drude response in σ_1(ω), which leads to an accumulation of spectral weight in the low-frequency range. Below T^∗, S(T)/S(300 K) decreases, attesting to the opening of a gap which removes the low-frequency spectral weight and transfers it to high frequency. S(T)/S(300 K) for ω_c = 6000 cm^-1 (Fig. 4f) is essentially temperature independent, indicating that the spectral weight removed by the opening of the gap in the low-frequency range is fully recovered at 6000 cm^-1 (0.744 eV). §.§ Discussion Having examined the charge dynamics of La_3Ni_2O_7 in both the pristine and ordered states, we now discuss the similarities and differences between La_3Ni_2O_7 and cuprates from an optical perspective, with the aim of revealing the important ingredients for high-T_c superconductivity. Firstly, our optical results show that La_3Ni_2O_7 features strong electronic correlations which substantially reduce the electron's kinetic energy and place the material in the proximity of a Mott phase. The electronic correlation strength in La_3Ni_2O_7 is comparable to that in doped cuprates <cit.>, but much stronger than that in iron-based superconductors <cit.>. Interestingly, the maximum T_c in La_3Ni_2O_7 is also comparable to that in cuprates but higher than that in iron-based superconductors. This coincidence suggests that the high-T_c superconductivity in La_3Ni_2O_7 is intimately related to electronic correlations. Recent theoretical calculations have found strong electronic correlations particularly for the Ni-d_3z^2 - r^2 orbital <cit.> and stressed their important role in promoting a superconducting instability <cit.>. Secondly, the optical response of the density-wave-like transition at T^∗ in La_3Ni_2O_7 is highly unusual. For a density-wave gap, the spectral weight removed from the low-frequency σ_1(ω) is usually retrieved just above the gap, giving rise to a conspicuous peak <cit.>. In La_3Ni_2O_7, the spectral weight removed from the Drude profile is redistributed to a very broad frequency range, and no peak is formed just above the gap. This behavior resembles the optical response of the intertwined charge and spin order in La_2-xBa_xCuO_4 with x = 1/8 <cit.>. Evidence for incommensurate CDW and spin-density-wave (SDW) orders, which are interlocked to form static stripes, was initially observed in La_1.6-xNd_0.4Sr_xCuO_4 with x ≃ 1/8 <cit.>. Further studies have demonstrated that the CDW order strongly competes with superconductivity, leading to a suppression of T_c in the 1/8-hole doped La-based cuprates <cit.>. In La_3Ni_2O_7, the density-wave-like transition at T^∗ may also represent a competing phase, the suppression of which may lead to a recovery of high-T_c superconductivity. Finally, La_3Ni_2O_7 has multiple bands dominated by the Ni-d_x^2 - y^2 and Ni-d_3z^2 - r^2 orbitals crossing E_F, resulting in the presence of two Drude components in the low-frequency σ_1(ω). In the pristine phase, both Drude components exhibit a T-linear scattering rate, i.e. non-Fermi-liquid behavior. Such non-Fermi-liquid behavior, widely observed in high-T_c cuprates <cit.> and iron-based superconductors <cit.>, is believed to originate from spin-fluctuation scattering <cit.>.
Recent theoretical work <cit.> has unveiled non-Fermi-liquid behavior and spin fluctuations in La_3Ni_2O_7, and the spin fluctuations give rise to strong tendencies towards a superconducting instability with sign-reversal order parameters. Moreover, the density-wave-like transition opens a partial gap and strongly influences the quasiparticle scattering in the Drude component associated with the Ni-d_3z^2 - r^2 orbital, suggesting the Ni-d_3z^2 - r^2 orbital as an important ingredient in the formation of the density-wave-like order in La_3Ni_2O_7. This result is in accord with recent theoretical studies which have also highlighted the critical role of the Ni-d_3z^2 - r^2 orbital in generating a superconducting instability <cit.> or density waves <cit.> in La_3Ni_2O_7. Therefore, La_3Ni_2O_7 is likely to reside in the vicinity of density-wave and superconducting instabilities competing for the Ni-d_3z^2 - r^2 orbital. To summarize, our optical study reveals strong electronic correlations in La_3Ni_2O_7 which give rise to a substantial reduction of the electron's kinetic energy and place this compound near the Mott insulator phase. Multiple bands dominated by Ni-d_3z^2 - r^2 and Ni-d_x^2 - y^2 orbitals cross the Fermi level, accounting for the presence of two Drude components in the low-frequency optical conductivity. In the pristine phase, the scattering rates of both Drude components exhibit a linear temperature dependence, i.e. non-Fermi-liquid behavior, which may stem from spin fluctuations. The density-wave-like transition opens a partial gap and strongly affects the quasiparticle scattering in the Drude component associated with the Ni-d_3z^2 - r^2 orbital, suggesting that the Ni-d_3z^2 - r^2 orbital plays a critical role in triggering the density-wave-like instability. Our experimental results provide key insight into the mechanism of the density-wave-like instability and superconductivity. §.§ Methods Single crystal growth. High-quality single crystals of La_3Ni_2O_7 were grown in a vertical optical-image floating-zone furnace with an oxygen pressure of 15 bar and a 5 kW Xenon arc lamp (100-bar Model HKZ, SciDre GmbH, Dresden). Optical measurements and Kramers-Kronig analysis. The near-normal-incidence reflectivity R(ω) at ambient pressure was measured in the frequency range of 30–50 000 cm^-1 using a Bruker Vertex 80v Fourier transform infrared spectrometer (FTIR). An in situ gold/silver evaporation technique <cit.> was adopted. The real part of the optical conductivity σ_1(ω) was determined via a Kramers-Kronig analysis of the measured R(ω) for La_3Ni_2O_7 <cit.>. Below the lowest measured frequency (30 cm^-1), a Hagen-Rubens (R = 1 - A√(ω)) form was used for the low-frequency extrapolation. Above the highest measured frequency, we assumed a constant reflectivity up to 12.5 eV, followed by a free-electron (ω^-4) response. DFT calculations. The density functional theory (DFT) calculations were performed using the all-electron, full-potential WIEN2K code with the augmented plane-wave plus local orbital (APW+lo) basis set <cit.> and the Perdew-Burke-Ernzerhof (PBE) exchange functional <cit.>. A total of 252 k points in the reduced first Brillouin zone was used for the self-consistency cycle. The optical properties were calculated with 60 000 k points in the first Brillouin zone to ensure convergence. All calculations were performed using the experimental structure under ambient pressure <cit.>. §.§ Data availability All data that support the findings of this study are available from the corresponding authors upon request.
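As an illustration of the reflectivity extrapolations entering the Kramers-Kronig analysis described in the Methods (Hagen-Rubens below the lowest measured frequency, constant reflectivity up to 12.5 eV, then a free-electron ω^-4 roll-off), a minimal sketch is given below; the grids, point counts, and matching conditions are illustrative assumptions, and the Kramers-Kronig transform itself is omitted.

```python
import numpy as np

EV_TO_CM = 8065.54  # 1 eV expressed in cm^-1

def extend_reflectivity(omega_meas, R_meas, const_up_to_eV=12.5, tail_up_to_eV=40.0):
    """Assemble R(omega) on an extended grid for a Kramers-Kronig analysis."""
    w_lo, w_hi = omega_meas[0], omega_meas[-1]
    # Hagen-Rubens form R = 1 - A*sqrt(omega), with A fixed by the lowest measured point
    A = (1.0 - R_meas[0]) / np.sqrt(w_lo)
    w_below = np.linspace(1.0, w_lo, 200, endpoint=False)
    R_below = 1.0 - A * np.sqrt(w_below)
    # constant reflectivity from the highest measured frequency up to 12.5 eV
    w_const = np.linspace(w_hi, const_up_to_eV * EV_TO_CM, 200, endpoint=False)[1:]
    R_const = np.full_like(w_const, R_meas[-1])
    # free-electron omega^-4 roll-off beyond 12.5 eV
    w_tail = np.linspace(const_up_to_eV * EV_TO_CM, tail_up_to_eV * EV_TO_CM, 200)
    R_tail = R_meas[-1] * (w_tail[0] / w_tail) ** 4
    omega = np.concatenate([w_below, omega_meas, w_const, w_tail])
    R = np.concatenate([R_below, R_meas, R_const, R_tail])
    return omega, R
```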
§.§ References
Li, D. et al. Superconductivity in an infinite-layer nickelate. Nature 572, 624–627 (2019).
Li, D. et al. Superconducting Dome in Nd_1-xSr_xNiO_2 Infinite Layer Films. Phys. Rev. Lett. 125, 027001 (2020).
Zeng, S. et al. Phase Diagram and Superconducting Dome of Infinite-Layer Nd_1-xSr_xNiO_2 Thin Films. Phys. Rev. Lett. 125, 147003 (2020).
Osada, M. et al. A Superconducting Praseodymium Nickelate with Infinite Layer Structure. Nano Lett. 20, 5735–5740 (2020).
Osada, M. et al. Nickelate Superconductivity without Rare-Earth Magnetism: (La,Sr)NiO_2. Adv. Mater. 33, 2104083 (2021).
Wang, N. N. et al. Pressure-induced monotonic enhancement of T_c to over 30 K in superconducting Pr_0.82Sr_0.18NiO_2 thin films. Nat. Commun. 13, 4367 (2022).
Zeng, S. et al. Superconductivity in infinite-layer nickelate La_1-xCa_xNiO_2 thin films. Sci. Adv. 8, eabl9927 (2022).
Pan, G. A. et al. Superconductivity in a quintuple-layer square-planar nickelate. Nat. Mater. 21, 160–164 (2022).
Li, Q. et al. Absence of superconductivity in bulk Nd_1-xSr_xNiO_2. Commun. Mater. 1, 16 (2020).
Wang, B.-X. et al. Synthesis and characterization of bulk Nd_1-xSr_xNiO_2 and Nd_1-xSr_xNiO_3. Phys. Rev. Materials 4, 084409 (2020).
Sun, H. et al. Superconductivity near 80 Kelvin in single crystals of La_3Ni_2O_7 under pressure. arXiv:2305.09586 (2023).
Luo, Z., Hu, X., Wang, M., Wu, W. & Yao, D.-X. Bilayer two-orbital model of La_3Ni_2O_7 under pressure. arXiv:2305.15564 (2023).
Zhang, Y., Lin, L.-F., Moreo, A. & Dagotto, E. Electronic structure, orbital-selective behavior, and magnetic tendencies in the bilayer nickelate superconductor La_3Ni_2O_7 under pressure. arXiv:2306.03231 (2023).
Yang, Q.-G., Liu, H.-Y., Wang, D. & Wang, Q.-H. Possible S±-wave superconductivity in La_3Ni_2O_7. arXiv:2306.03706 (2023).
Lechermann, F., Gondolf, J., Bötzel, S. & Eremin, I. M. Electronic correlations and superconducting instability in La_3Ni_2O_7 under high pressure. arXiv:2306.05121 (2023).
Gu, Y., Le, C., Yang, Z., Wu, X. & Hu, J. Effective model and pairing tendency in bilayer Ni-based superconductor La_3Ni_2O_7. arXiv:2306.07275 (2023).
Shen, Y., Qin, M. & Zhang, G.-M. Effective bi-layer model Hamiltonian and density-matrix renormalization group study for the high-T_c superconductivity in La_3Ni_2O_7 under high pressure. arXiv:2306.07837 (2023).
Sakakibara, H., Kitamine, N., Ochi, M. & Kuroki, K. Possible high T_c superconductivity in La_3Ni_2O_7 under high pressure through manifestation of a nearly-half-filled bilayer Hubbard model. arXiv:2306.06039 (2023).
Shilenko, D. A. & Leonov, I. V. Correlated electronic structure, orbital-selective behavior, and magnetic correlations in double-layer La_3Ni_2O_7 under pressure. arXiv:2306.14841 (2023).
Taniguchi, S. et al. Transport, Magnetic and Thermal Properties of La_3Ni_2O_7-δ. J. Phys. Soc. Jpn. 64, 1644–1650 (1995).
Kobayashi, Y. et al. Transport and Magnetic Properties of La_3Ni_2O_7-δ and La_4Ni_3O_10-δ. J. Phys. Soc. Jpn. 65, 3978–3982 (1996).
Wu, G., Neumeier, J. J. & Hundley, M. F. Magnetic susceptibility, heat capacity, and pressure dependence of the electrical resistivity of La_3Ni_2O_7 and La_4Ni_3O_10. Phys. Rev. B 63, 245120 (2001).
Liu, Z. et al. Evidence for charge and spin density waves in single crystals of La_3Ni_2O_7 and La_3Ni_2O_6. Sci. China Phys. Mech. Astron. 66, 217411 (2022).
Seo, D.-K., Liang, W., Whangbo, M.-H., Zhang, Z. & Greenblatt, M. Electronic Band Structure and Madelung Potential Study of the Nickelates La_2NiO_4, La_3Ni_2O_7, and La_4Ni_3O_10. Inorg. Chem. 35, 6396–6400 (1996).
Zhu, Z.-T., Musfeldt, J. L., Teweldemedhin, Z. S. & Greenblatt, M. Anisotropic ab-plane optical response of the charge-density-wave superconductor P_4W_14O_50. Phys. Rev. B 65, 214519 (2002).
Hu, W. Z. et al. Origin of the Spin Density Wave Instability in AFe_2As_2 (A = Ba, Sr) as Revealed by Optical Spectroscopy. Phys. Rev. Lett. 101, 257005 (2008).
Zhou, X. et al. Electronic correlations and evolution of the charge density wave in the kagome metals AV_3Sb_5 (A = K, Rb, Cs). Phys. Rev. B 107, 165123 (2023).
Zhou, X. et al. Effects of niobium doping on the charge density wave and electronic correlations in the kagome metal Cs(V_1-xNb_x)_3Sb_5. Phys. Rev. B 107, 125124 (2023).
Degiorgi, L., Dressel, M., Schwartz, A., Alavi, B. & Grüner, G. Direct Observation of the Spin-Density-Wave Gap in (TMTSF)_2PF_6. Phys. Rev. Lett. 76, 3838–3841 (1996).
Dai, Y. M., Akrap, A., Bud'ko, S. L., Canfield, P. C. & Homes, C. C. Optical properties of AFe_2As_2 (A = Ca, Sr, and Ba) single crystals. Phys. Rev. B 94, 195142 (2016).
Wu, D. et al. Optical investigations of the normal and superconducting states reveal two electronic subsystems in iron pnictides. Phys. Rev. B 81, 100512 (2010).
Dai, Y. M. et al. Hidden T-Linear Scattering Rate in Ba_0.6K_0.4Fe_2As_2 Revealed by Optical Spectroscopy. Phys. Rev. Lett. 111, 117001 (2013).
Dai, Y. M. et al. Spin-Fluctuation-Induced Non-Fermi-Liquid Behavior with Suppressed Superconductivity in LiFe_1-xCo_xAs. Phys. Rev. X 5, 031035 (2015).
Xu, B. et al. Infrared study of the multiband low-energy excitations of the topological antiferromagnet MnBi_2Te_4. Phys. Rev. B 103, L121103 (2021).
Guritanu, V. et al. Optical spectra of the heavy fermion uniaxial ferromagnet UGe_2. Phys. Rev. B 78, 172406 (2008).
Si, Q. Electrons on the verge. Nat. Phys. 5, 629–630 (2009).
Qazilbash, M. M. et al. Electronic correlations in the iron pnictides. Nat. Phys. 5, 647–650 (2009).
Millis, A. J., Zimmers, A., Lobo, R. P. S. M., Bontemps, N. & Homes, C. C. Mott physics and the optical conductivity of electron-doped cuprates. Phys. Rev. B 72, 224517 (2005).
Xu, Y. et al. Electronic correlations and flattened band in magnetic Weyl semimetal candidate Co_3Sn_2S_2. Nat. Commun. 11, 3985 (2020).
Degiorgi, L. Electronic correlations in iron-pnictide superconductors and beyond: lessons learned from optics. New J. Phys. 13, 023011 (2011).
Shao, Y. et al. Electronic correlations in nodal-line semimetals. Nat. Phys. 16, 636–641 (2020).
Zhou, X. et al. Origin of charge density wave in the kagome metal CsV_3Sb_5 as revealed by optical spectroscopy. Phys. Rev. B 104, L041101 (2021).
Uykur, E., Ortiz, B. R., Wilson, S. D., Dressel, M. & Tsirlin, A. A. Optical detection of the density-wave instability in the kagome metal KV_3Sb_5. npj Quantum Mater. 7, 16 (2022).
Homes, C. C. et al. Determination of the optical properties of La_2-xBa_xCuO_4 for several dopings, including the anomalous x=1/8 phase. Phys. Rev. B 85, 134510 (2012).
Tranquada, J. M., Sternlieb, B. J., Axe, J. D., Nakamura, Y. & Uchida, S. Evidence for stripe correlations of spins and holes in copper oxide superconductors. Nature 375, 561–563 (1995).
Fujita, M., Goka, H., Yamada, K. & Matsuda, M. Competition between Charge- and Spin-Density-Wave Order and Superconductivity in La_1.875Ba_0.125-xSr_xCuO_4. Phys. Rev. Lett. 88, 167008 (2002).
Leroux, M. et al. Disorder raises the critical temperature of a cuprate superconductor. Proc. Natl. Acad. Sci. 116, 10691–10697 (2019).
Cooper, R. A. et al. Anomalous Criticality in the Electrical Resistivity of La_2-xSr_xCuO_4. Science 323, 603–607 (2009).
Jin, K., Butch, N. P., Kirshenbaum, K., Paglione, J. & Greene, R. L. Link between spin fluctuations and electron pairing in copper oxide superconductors. Nature 476, 73–75 (2011).
Yuan, J. et al. Scaling of the strange-metal scattering in unconventional superconductors. Nature 602, 431–436 (2022).
Analytis, J. G. et al. Transport near a quantum critical point in BaFe_2(As_1-xP_x)_2. Nat. Phys. 10, 194–197 (2014).
Shibauchi, T., Carrington, A. & Matsuda, Y. A Quantum Critical Point Lying Beneath the Superconducting Dome in Iron Pnictides. Annu. Rev. Cond. Matter Phys. 5, 113–135 (2014).
Moriya, T. & Ueda, K. Spin fluctuations and high temperature superconductivity. Adv. Phys. 49, 555–606 (2000).
Taillefer, L. Scattering and Pairing in Cuprate Superconductors. Annu. Rev. Cond. Matter Phys. 1, 51–70 (2010).
Homes, C. C., Reedyk, M., Crandles, D. A. & Timusk, T. Technique for measuring the reflectance of irregular, submillimeter-sized samples. Appl. Opt. 32, 2976–2983 (1993).
Dressel, M. & Grüner, G. Electrodynamics of Solids (Cambridge University Press, 2002).
Tanner, D. B. Optical Effects in Solids (Cambridge University Press, 2019).
Blaha, P., Schwarz, K., Madsen, G. K. H., Kvasnicka, D. & Luitz, J. WIEN2K, An Augmented Plane Wave + Local Orbitals Program for Calculating Crystal Properties (Technische Universität Wien, Vienna, Austria, 2001).
Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized Gradient Approximation Made Simple. Phys. Rev. Lett. 77, 3865–3868 (1996).
Ling, C. D., Argyriou, D. N., Wu, G. & Neumeier, J. Neutron Diffraction Study of La_3Ni_2O_7: Structural Relationships Among n = 1, 2, and 3 Phases La_n+1Ni_nO_3n+1. J. Solid State Chem. 152, 517–525 (2000).
§.§ Acknowledgements We thank C. C. Homes, J. Schmalian, Huan Yang and Shunli Yu for helpful discussions. Work at Nanjing University was supported by the National Key R&D Program of China (Grants No. 2022YFA1403201 and No. 2022YFA1403000), the National Natural Science Foundation of China (Grants No. 12174180, No. 12274207, and No. 12061131001), and the Jiangsu shuangchuang program. Work at Sun Yat-Sen University was supported by the National Natural Science Foundation of China (Grant No. 12174454) and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2021B1515120015). §.§ Author contributions Z.L. performed the optical measurements with the assistance of J.H., X.Z. and Y.D.; M.H. and M.W. synthesized the crystals; Q.L. and Y.L. characterized the samples; J.L. and Y.L. performed DFT calculations; Z.L., Y.D. and H.H.W. analyzed the data and wrote the manuscript; all authors made comments on the manuscript. §.§ Competing interests The authors declare no competing interests. §.§ Additional information Extended data accompanies this paper at ... Supplementary information accompanies this paper at ... Correspondence and requests for materials should be addressed to Yaomin Dai, Meng Wang or Hai-Hu Wen. Reprints and permissions information is available online.
http://arxiv.org/abs/2307.00636v1
20230702190626
Signature of (anti)cooperativity in the stochastic fluctuations of small systems: application to the bacterial flagellar motor
[ "María-José Franco-Oñate", "Andrea Parmeggiani", "Jérôme Dorignac", "Frédéric Geniet", "Jean-Charles Walter", "Francesco Pedaci", "Ashley L Nord", "John Palmeri", "Nils-Ole Walliser" ]
physics.bio-ph
[ "physics.bio-ph", "cond-mat.stat-mech", "q-bio.SC" ]
http://arxiv.org/abs/2307.02809v1
20230706065803
$J/ψ$ Pair Hadroproduction at Next-to-Leading Order in Nonrelativistic-QCD at CMS
[ "Liping Sun" ]
hep-ph
[ "hep-ph" ]
sunliping@bucea.edu.cn (a) School of Science, Beijing University of Civil Engineering and Architecture, Beijing, China We perform a complete study of J/ψ pair hadroproduction at next-to-leading order (NLO) in the nonrelativistic-QCD (NRQCD) framework, with each cc̅ pair in either the ^3S_1^[1] or the ^1S_0^[8] Fock state. It is found that the ^1S_0^[8] channel contribution at NLO is essential. Our results indicate that the NRQCD predictions cannot describe the CMS experimental data at all, and the total cross section predicted by NRQCD is smaller than the experimental data by an order of magnitude. New mechanisms are therefore needed to understand the CMS data for J/ψ pair production. 12.38.Bx, 13.60.Le, 14.40.Pq J/ψ Pair Hadroproduction at Next-to-Leading Order in Nonrelativistic-QCD at CMS Li-Ping Sun^a August 1, 2023 =============================================================================== Introduction.—Nonrelativistic QCD (NRQCD) <cit.> is widely used in the study of heavy quarkonium physics. In this framework, a quarkonium production process can be factorized as the multiplication of short-distance coefficients (SDCs) and long-distance NRQCD matrix elements (LDMEs). The SDCs can be calculated perturbatively, and the LDMEs are strongly ordered in the relative velocity v between the quark and anti-quark inside the quarkonium. This factorization has been applied to single-quarkonium production and tested by various experiments <cit.>. Besides single-quarkonium production, multi-quarkonium production provides complementary information for understanding the quarkonium production mechanism. At the LHC, the LHCb Collaboration in 2011 measured J/ψ pair production for the first time at the center-of-mass energy √(s)=7 TeV with an integrated luminosity of 35.2 pb^-1 <cit.>. In 2013, the CMS Collaboration further released data on J/ψ pair production <cit.> with a much larger transverse momentum range, providing a good platform for testing the validity of NRQCD in quarkonium pair production. Besides, the ATLAS Collaboration also reported a measurement of J/ψ pair production <cit.>, with a large transverse momentum cut imposed on both J/ψ's. In Refs. <cit.>, the leading-order (LO) calculation in α_s of J/ψ pair production in the color singlet model (CSM) is performed. Relativistic corrections to J/ψ pair production are carried out in Ref. <cit.>, which significantly reduce the discrepancy between the LO results and the experimental data. Furthermore, a partial next-to-leading order (NLO^⋆) correction for J/ψ pair production was calculated by Lansberg and Shao <cit.>. They argued that the NLO^⋆ yield can approach the full NLO result at large p_T, which is the transverse momentum of one of the two J/ψ's, and thus the NLO^⋆ results give a more precise theoretical prediction than the LO results in this region. The full NLO predictions for the color-singlet (CS) channel were obtained in our previous work <cit.>. Besides, the complete LO predictions within NRQCD were obtained by Kniehl and He <cit.>. All the above works are performed within the single parton scattering (SPS) mechanism. The contribution of double parton scattering (DPS), which is expected to be important, is assessed in Refs. <cit.>. Besides, the color evaporation model has also been used to interpret the production of J/ψ pairs <cit.>. As predictions for DPS and the color evaporation model are highly model-dependent, an accurate calculation of the SPS contribution is needed before the DPS contribution can be extracted.
In order to further study multi-quarkonium production, it is necessary to evaluate J/ψ pair production at NLO for more channels, including ^1S_0^[8], ^3S_1^[8] and ^3P_J^[8]. Because the ^1S_0^[8] channel is found to give the most important contribution to single J/ψ production <cit.>, in this letter we focus on the ^1S_0^[8] channel and evaluate each J/ψ in the ^3S_1^[1] and ^1S_0^[8] Fock states to NLO. The calculations of the ^3S_1^[8] and ^3P_J^[8] channels will be studied in the future. Compared to the LO result, the NLO result not only decreases the theoretical uncertainties, but also opens new kinematically enhanced topologies, which dominate at large p_T. More precisely, we will find that the differential cross section dσ/dp_T^2 at large p_T behaves as p_T^-8 at LO, while it behaves as p_T^-6 at NLO due to double parton fragmentation contributions <cit.>. Formalism.—In NRQCD factorization, the cross section for J/ψ pair production at the LHC can be expressed as <cit.> dσ_p+p → J/ψ+J/ψ=∑_i,j,n_1,n_2∫dx_1dx_2f_i/p(x_1)f_j/p(x_2) × dσ̂^n_1,n_2_i,j⟨𝒪_n_1⟩^J/ψ⟨𝒪_n_2⟩^J/ψ, where f_i/p(x_1,2) are the parton distribution functions (PDFs), x_1 and x_2 represent the momentum fractions of the initial-state partons from the protons, ⟨𝒪_n⟩^J/ψ are LDMEs of J/ψ with n = ^2S+1L_J^[c] in the standard spectroscopic notation for the quantum numbers of the produced intermediate heavy quark pairs, and dσ̂ are the partonic short-distance coefficients. In this letter we set either n_1=n_2=^3S_1^[1] or n_1=n_2=^1S_0^[8] in Eq. (<ref>). In the LO calculation, there are two subprocesses: g+g→J/ψ+J/ψ and q+q̅→J/ψ+J/ψ; only the former is taken into account, since the contribution of the latter process is highly suppressed by the quark PDFs. In the NLO case, besides the gluon fusion process, the quark-gluon process q+g→ 2J/ψ+q should also be considered because it can give a non-negligible contribution. Typical Feynman diagrams at LO and NLO are shown in Fig. <ref>. To tackle the infrared (IR) divergences in the real corrections, the two-cutoff phase space slicing method <cit.> is employed. After isolating the soft and collinear divergences, the cross section for J/ψ pair production at NLO can be expressed as σ_NLO=σ_Born+σ_Virtual+σ_Real^soft+σ_Real^HC+σ_Real^\overline{HC}, where HC and \overline{HC} represent the hard collinear and hard non-collinear contributions, respectively. The soft and collinear divergences from the real corrections cancel the divergences from the virtual corrections, and thus the final NLO contributions are IR safe. Because there are two J/ψ states in the final state, the LO contributions behave as p_T^-8 when p_T is large. However, at NLO level, there are contributions which give a p_T^-6 behavior <cit.> [Fig. <ref> (c) and (d)]. We thus expect the NLO contribution to dominate at large p_T, especially for the CMS and ATLAS data, where a relatively large lower p_T cutoff is taken <cit.>. This expectation is confirmed by our numerical results shown below. Numerical Inputs.—Because of the complexity of J/ψ pair production, in our calculation the package FEYNARTS <cit.> is used to generate the Feynman diagrams and amplitudes. The phase space integration is evaluated by employing the package Vegas <cit.>. In the numerical calculation, the CTEQ6L1 and CTEQ6M parton distribution functions <cit.> are used. The renormalization scale μ_r and factorization scale μ_f are chosen as μ_r=μ_f=m_T, with m_T=√(p_T^2+16m_c^2) and charm quark mass m_c=M_J/ψ/2=1.55 GeV.
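To make the structure of Eq. (<ref>) concrete, the sketch below evaluates the double convolution numerically for the dominant gluon-gluon channel. The gluon density f_g and the partonic short-distance coefficient dσ̂ are toy placeholders (not the CTEQ6 PDFs or the actual NLO coefficients), so the output is in arbitrary units and only illustrates how the PDFs, the SDC, and the LDMEs enter the factorized cross section.

```python
import numpy as np

SQRT_S = 7000.0          # pp centre-of-mass energy in GeV
M_C = 1.55               # charm-quark mass, M_J/psi / 2
M_PAIR = 4.0 * M_C       # threshold invariant mass of the J/psi pair

def f_g(x, mu_f):        # toy gluon density (placeholder shape)
    return (1.0 - x) ** 5 / x

def dsigma_hat(s_hat, mu_r):   # toy partonic short-distance coefficient
    return 1.0 / s_hat ** 2

def sigma_gg(ldme_1, ldme_2, mu, n=300):
    """Schematic evaluation of dsigma = sum int dx1 dx2 f f dsigma_hat <O><O>."""
    x = np.linspace(1e-3, 1.0, n, endpoint=False)
    dx = x[1] - x[0]
    total = 0.0
    for x1 in x:
        for x2 in x:
            s_hat = x1 * x2 * SQRT_S ** 2
            if s_hat > M_PAIR ** 2:              # above the pair-production threshold
                total += f_g(x1, mu) * f_g(x2, mu) * dsigma_hat(s_hat, mu)
    return total * dx * dx * ldme_1 * ldme_2

# colour-singlet channel, <O(3S1[1])> = 1.16 GeV^3 for both J/psi,
# scale mu = m_T evaluated at p_T = 10 GeV (arbitrary-unit result)
print(sigma_gg(1.16, 1.16, mu=np.sqrt(10.0 ** 2 + 16 * M_C ** 2)))
```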
In the two-cutoff method, there are soft and collinear cutoffs, δ_s and δ_c, which we set to δ_s=10^-2 and δ_c=10^-4. Theoretical uncertainties are estimated by varying μ_r=μ_f from m_T/2 to 2m_T. The CS LDME ⟨𝒪(^3S_1^[1])⟩^J/ψ=1.16 GeV^3 is estimated by using the B-T potential model <cit.>, while the color-octet (CO) LDME ⟨𝒪(^1S_0^[8])⟩^J/ψ=0.089 GeV^3 is taken from <cit.>, where it is determined by fitting experimental data. Results.—In the following, we give our results for J/ψ pair production. For the CMS conditions <cit.>: |y(J/ψ)|<1.2 for p_T>6.5 GeV, or 1.2<|y(J/ψ)|<1.43 for p_T>6.5→ 4.5 GeV, or 1.43<|y(J/ψ)|<2.2 for p_T>4.5 GeV, with √(s)=7 TeV, the total cross section is measured to be σ_Exp.=1.49±0.07±0.14 nb, while our LO and NLO calculations for the total cross section give σ_LO=(0.048+0.014) ±0.02 nb, σ_NLO=(0.18+0.03)±0.10 nb. The first value in each bracket represents the CS contribution, the second one the CO contribution, and the uncertainties come from varying μ_r=μ_f from m_T/2 to 2m_T. As expected, we find that the NLO calculation gives the dominant contribution. In (<ref>) the contributions of the feed-down processes p+p→ J/ψ+ψ(2S)+X→ 2J/ψ+X and p+p→ J/ψ+χ_cJ+X→ 2J/ψ+X are also included, which are estimated to be 30% of the direct production <cit.>. Comparing (<ref>) with (<ref>), we can see that the cross section measured by CMS cannot be described by the NRQCD calculation at NLO. We then compare our prediction for the transverse momentum p_T J/ψ J/ψ distribution of the J/ψ pair with the CMS data and the NLO^⋆ yields <cit.>. The result is shown in Fig. <ref>. At LO, p_T J/ψ J/ψ is always zero, because it is a two-body final-state process. At NLO, we first find that the contribution of the ^1S_0^[8] channel is small even at large p_T J/ψ J/ψ; this is expected, because we believe the dominant contribution at large p_T J/ψ J/ψ may come from the ^3S_1^[8] channel, which we leave to future work. We also find that the shape of the NRQCD result is similar to the experimental data, but smaller than the data by an order of magnitude. The NLO^⋆ result is consistent with our NLO prediction at large p_T J/ψ J/ψ. The data obviously overshoot our NLO prediction over the whole p_T J/ψ J/ψ region. Because both the CS contribution and the dominant CO contribution have been considered, we conclude that NRQCD factorization cannot describe the CMS data even after the NLO correction. Therefore, other mechanisms must be included, besides the SPS contribution in the NRQCD framework, to explain the experimental data. The invariant mass distribution (denoted as M_J/ψ J/ψ) for CMS is shown in Fig. <ref>. We can see that the ^1S_0^[8] channel has a large contribution in the medium and large M_J/ψ J/ψ region, where it is more important than the ^3S_1^[1] channel. The sum of the ^3S_1^[1] and ^1S_0^[8] channels again indicates that the NLO result cannot describe the CMS data. As for the p_T J/ψ J/ψ distribution, the NLO prediction for the M_J/ψ J/ψ distribution is smaller than the experimental data by at least one order of magnitude in each bin, which also reflects the fact that the NRQCD prediction contributes little to J/ψ pair production. The J/ψ pair rapidity difference |Δ y| distribution for CMS is shown in Fig. <ref>. We see that the ^1S_0^[8] channel also has a large contribution in the medium and large |Δ y| region, and at large |Δ y| the ^1S_0^[8] channel is dominant. Even so, the sum of the ^3S_1^[1] and ^1S_0^[8] channels cannot describe the CMS data, similar to the above two distributions.
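The size of the discrepancy quoted above can be checked in two lines; combining the CMS statistical and systematic uncertainties in quadrature is an assumption made here purely for illustration.

```python
sigma_cs, sigma_co, scale_unc = 0.18, 0.03, 0.10     # NLO CS and CO central values, scale uncertainty (nb)
sigma_nlo = sigma_cs + sigma_co                      # 0.21 nb
sigma_exp, exp_unc = 1.49, (0.07**2 + 0.14**2)**0.5  # CMS value and combined uncertainty (nb)
print(sigma_exp / sigma_nlo)                         # ~7, i.e. close to an order of magnitude
```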
Summary.—In the framework of NRQCD factorization, we evaluate full NLO J/ψ pair production via the ^3S_1^[1] and ^1S_0^[8] channels. We find that the NLO corrections are essential for J/ψ pair production compared to the LO results. For CMS, the NLO predictions for the total cross section, the p_T J/ψ J/ψ distribution, the invariant mass distribution of the J/ψ pair, and the rapidity difference distribution of the J/ψ pair are smaller than the CMS data by about an order of magnitude. This indicates that the NLO NRQCD result is not the dominant contribution to J/ψ pair production and that, if the CMS data are confirmed, new mechanisms must dominate the process. We thank Y. Q. Ma and C. Meng for valuable discussions and suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) under grant 11905006.
G. T. Bodwin, E. Braaten, and G. P. Lepage, Phys. Rev. D 51, 1125 (1995).
Y. Fan, Y. Q. Ma and K. T. Chao, Phys. Rev. D 79, 114009 (2009); Y. J. Zhang, Y. Q. Ma, K. Wang and K. T. Chao, Phys. Rev. D 81, 034015 (2010); Y. Q. Ma, K. Wang and K. T. Chao, Phys. Rev. D 83, 111503 (2011).
Z. G. He, Y. Fan and K. T. Chao, Phys. Rev. D 75, 074011 (2007); Y. Q. Ma, K. Wang and K. T. Chao, Phys. Rev. Lett. 106, 042002 (2011); Y. Q. Ma, K. Wang and K. T. Chao, Phys. Rev. D 84, 114001 (2011).
B. Gong and J. X. Wang, Phys. Rev. Lett. 100, 232001 (2008); B. Gong and J. X. Wang, Phys. Rev. D 78, 074011 (2008); B. Gong, X. Q. Li and J. X. Wang, Phys. Lett. B 673, 197 (2009).
R. Li and J. X. Wang, Phys. Lett. B 672, 51 (2009); B. Gong and J. X. Wang, Phys. Rev. D 83, 114021 (2011); B. Gong, L. P. Wan, J. X. Wang and H. F. Zhang, Phys. Rev. Lett. 112, 032001 (2014).
LHCb Collaboration, R. Aaij et al., Phys. Lett. B 707, 52 (2012).
CMS Physics Analysis Summary, CMS PAS BPH-11-021, 2013.
The ATLAS Collaboration, Eur. Phys. J. C 77, 76 (2017).
R. Li, Y. J. Zhang and K. T. Chao, Phys. Rev. D 80, 014020 (2009).
C. F. Qiao, L. P. Sun and P. Sun, J. Phys. G 37, 075019 (2010).
A. V. Berezhnoy, A. K. Likhoded, A. V. Luchinsky and A. A. Novoselov, Phys. Rev. D 84, 094023 (2011).
Y. J. Li, G. Z. Xu, K. Y. Liu and Y. J. Zhang, J. High Energy Phys. 1307, 051 (2013).
J. P. Lansberg and H. S. Shao, Phys. Rev. Lett. 111, 122001 (2013).
J. P. Lansberg and H. S. Shao, arXiv:1410.8822.
L. P. Sun, H. Han and K. T. Chao, Phys. Rev. D 65, 094032 (2002).
Z. G. He and B. A. Kniehl, Phys. Rev. Lett. 94, 074033 (2016).
C. H. Kom, A. Kulesza and W. J. Stirling, Phys. Rev. Lett. 107, 082002 (2011).
D. d'Enterria and A. M. Snigirev, Phys. Lett. B 727, 157 (2013).
S. Baranov, A. Snigirev, and N. Zotov, Phys. Lett. B 705, 116 (2011).
J. P. Lansberg, H. S. Shao, N. Yamanaka, Y. J. Zhang and C. Noûs, arXiv:2004.14345.
A. A. Chernyshev and V. A. Saleev, Phys. Rev. D 106, 114006 (2022).
K. T. Chao, Y. Q. Ma, H. S. Shao, K. Wang and Y. J. Zhang, Phys. Rev. Lett. 108, 242004 (2012).
G. T. Bodwin, H. S. Chung, U. Kim and J. Lee, Phys. Rev. Lett. 113, 022001 (2014).
Z. B. Kang, Y. Q. Ma, J. W. Qiu and G. Sterman, Phys. Rev. D 90, 034006 (2014).
B. W. Harris and J. F. Owens, Phys. Rev. D 65, 094032 (2002).
T. Hahn, Comput. Phys. Commun. 140, 418 (2001).
T. Hahn, Comput. Phys. Commun. 168, 2 (2005).
CTEQ Collaboration, H. L. Lai et al., Eur. Phys. J. C 12, 375 (2000).
J. Pumplin et al., J. High Energy Phys. 07, 012 (2002).
G. T. Bodwin, H. S. Chung, D. Kang, J. Lee and C. Yu, Phys. Rev. D 77, 094017 (2008).
K. T. Chao, Y. Q. Ma, H. S. Shao, K. Wang, and Y. J. Zhang, Phys. Rev. Lett. 108, 242004 (2012).
http://arxiv.org/abs/2307.04642v1
20230707024206
TRAC: Trustworthy Retrieval Augmented Chatbot
[ "Shuo Li", "Sangdon Park", "Insup Lee", "Osbert Bastani" ]
cs.CL
[ "cs.CL", "cs.AI" ]
TRAC: Trustworthy Retrieval Augmented Chatbot
Shuo Li (University of Pennsylvania), Sangdon Park (Georgia Institute of Technology), Insup Lee (University of Pennsylvania), Osbert Bastani (University of Pennsylvania; obastani@seas.upenn.edu)
Although conversational AIs have demonstrated fantastic performance, they often generate incorrect information, or hallucinations. Retrieval augmented generation has emerged as a promising solution to reduce these hallucinations. However, these techniques still cannot guarantee correctness. Focusing on question answering, we propose a framework that can provide statistical guarantees for retrieval augmented question answering systems by combining conformal prediction and global testing. In addition, we use Bayesian optimization to choose hyperparameters of the global test to maximize the performance of the system. Our empirical results on the Natural Questions dataset demonstrate that our method can provide the desired coverage guarantee while minimizing the average prediction set size. § INTRODUCTION Neural conversational AIs have recently demonstrated fantastic performance. These chatbots are empowered by large language models (LLMs), and interact with users to perform a number of tasks; we focus on question answering. Although their answers are highly accurate, a major limitation is that these chatbots often confidently generate incorrect responses, called hallucinations. Retrieval augmented generation (RAG) has emerged as a promising solution <cit.>. Given a prompt, these techniques retrieve related contexts that can provide chatbots with helpful information to generate more accurate answers. Also, these techniques can provide timely information by using an up-to-date knowledge base. In this paper, we explore whether we can provide statistical guarantees for retrieval-augmented question answering systems, to ensure the system is trustworthy. In particular, we build on conformal prediction <cit.>, a set of tools that modify models to predict sets of labels rather than individual labels. They typically guarantee that the set covers the ground truth label with high probability. By predicting a set of labels and providing a coverage guarantee, the user can conservatively account for uncertainty in the predicted answer. We propose a novel framework for using conformal prediction to build retrieval augmented question answering systems with high-probability coverage guarantees. There are several challenges to applying conformal prediction to question answering. First, conformal prediction is usually applied to classification and regression tasks, which are simpler than question answering. Second, retrieval augmented systems have multiple components, which need to be composed together to form the final prediction. Thus, we need to compose the coverage guarantees of each component to obtain a guarantee for the overall system. Third, conformal prediction typically optimizes a performance metric such as the expected size of the predicted label sets, subject to the coverage guarantee. We need to devise reasonable metrics for quantifying the set size for question answering. To the best of our knowledge, our work is the first to apply conformal prediction to retrieval augmented question answering. Our framework first constructs conformal predictors for the retrieval model and the question answering model, and then combines these predictors by using a multiple hypothesis test (specifically, a global test).
Given a prompt, the retriever retrieves a set of contexts guaranteed to include the most relevant context with high probability. Then, given the prompt and the most relevant context, the LLM predicts a set of answers guaranteed to include the correct answer with high probability. We consider several metrics for evaluating the performance of the final prediction set over answers, including the number of generated answers, the number of unique answers deduplicated by exact match, and the number of generated answers deduplicated by semantic match. In addition, a key challenge with global tests is that they have hyperparameters that need to be tuned to maximize performance. We propose to use Bayesian optimization to optimize these hyperparameters based on a separate held-out optimization set; then, we construct the conformal predictors on a held-out calibration set as usual. We evaluate our approach on the Natural Questions dataset <cit.>. Our empirical results demonstrate that our approach can provide the desired coverage guarantee, while minimizing prediction set size. § RELATED WORK Retrieval Augmentation. Augmenting chatbots with knowledge from a corpus has shown great effectiveness in reducing hallucinations. Some work focuses on retrieving relevant contexts, such as <cit.>. This line of work usually trains a neural retriever to identify relevant contexts for a given question from a knowledge base such as Wikipedia. Other approaches combine training the retriever and the question answerer, including RAG <cit.> and Atlas <cit.>. Furthermore, instead of retrieving context from an external knowledge base, <cit.> propose to retrieve contexts from another LLM, which is referred to as parametric memory. <cit.> focus on designing better in-context prompts so chatbots learn when and what knowledge to retrieve. While these approaches can reduce hallucinations, they do not provide theoretical guarantees. Conformal Prediction. Conformal prediction (CP) <cit.> is an effective distribution-free uncertainty quantification technique for providing performance guarantees on machine learning models. These techniques construct prediction sets that are guaranteed to contain the true label with high probability. Split conformal prediction (SCP) (or inductive conformal prediction) reduces the computational complexity of CP by introducing a held-out calibration set, but maintains the same performance guarantee. CP has been widely applied to image classification <cit.>, regression <cit.>, and object detection <cit.>. Global testing. Global testing is a multiple hypothesis testing technique that tests a global null hypothesis consisting of all individual hypotheses. Typical tests include the Bonferroni Correction <cit.>, Fisher's Test <cit.>, and the Harmonic Mean p-value <cit.>. Some work has proposed to combine conformal prediction and global testing <cit.>. However, these approaches have not been applied to the question-answering task; furthermore, they do not use an optimization process to improve performance. § METHODS §.§ Individual Prediction Sets First, we describe how we construct prediction sets for the retrieval and question answering models separately using split conformal prediction (SCP). In general, SCP takes as given a nonconformity measure s: 𝒳×𝒴→ℝ mapping example-label pairs to scores (typically derived from a pretrained model for the task), a held-out calibration set B = {(x_i, y_i)}_i=1^N sampled i.i.d.
from the data distribution 𝒟, and a user-specified error level α, and constructs the prediction set C:𝒳→2^𝒴 for a new test example x_N+1 as C(x_N+1) = {y ∈𝒴| s(x_N+1, y) ≤τ}, where τ is the ⌈(N+1)(1-α)⌉/N empirical quantile of the calibration scores {s(x_i, y_i)}_i=1^N. It guarantees coverage as follows: We have ℙ_B∼𝒟^N, (x,y)∼𝒟(y ∈ C(x)) ≥ 1-α. Here, the constructed prediction set C implicitly depends on the random calibration set B. In other words, the prediction set C(x) contains the ground truth label for x with probability at least 1-α. To apply conformal prediction to the retrieval and question answering models, the main challenge is to design appropriate nonconformity measures (NCMs) for each task. NCMs are functions measuring how unlikely it is that a given label y is the true label of the observation x. For example, in a multi-class classification task, letting ξ_k be the estimated probability of label k, the NCM could be 1-ξ_k for class k. For the retrieval model, we use the negative inner product or negative cosine similarity between the prompt and context embeddings as the NCM; in both cases, a lower score indicates a higher similarity. For the question answering model, the NCM is more challenging to design. One option would be the log probability of the generated answer; however, semantically similar answers may induce different log probabilities. To address this limitation, we build on an idea proposed in <cit.>, and propose to use negative semantic confidence as the NCM, which we can estimate via Monte Carlo sampling and clustering. In particular, we first request K answers {y_k}_k=1^K; then, we semantically cluster them using an entailment model or their ROUGE scores; finally, we regard each cluster z as a semantic meaning and estimate its NCM by s_QA(x,y_m;c^*)= -1/K∑_k=1^K 1(y_k ∈ z(x,y_m)), where c^* is the most relevant context for example x. A lower score s_QA(x,y_m;c^*) indicates that the model is more confident in the semantic meaning of cluster z(x,y_m). Next, we need to define what the "ground truth" label is, so we can compute the NCM of the true label, which we call the true label NCM. For retrieval, we consider the true label to be the most relevant context, which is given in the Natural Questions dataset. For question answering, given questions and their corresponding top-1 most relevant context, we use ROUGE F1 scores <cit.> along with a standard threshold to determine whether two answers are semantically equivalent. Then, answers that are semantically equivalent to the answer in the dataset are considered ground truth labels. Now, to construct prediction sets, we split all collected questions into calibration and testing sets of equal size. First, we compute the threshold τ_ret as the ⌈(N+1)(1-α)⌉/N quantile of the NCMs s_ret(x,c^*) of the true context c^* for question x, and τ_QA as the ⌈(N+1)(1-α)⌉/N quantile of the NCMs s_QA(x, y^*; c^*) of the correct answer y^* for question x and true context c^*. Then, given a new question x, we construct the retrieval set C_ret by including contexts c whose NCMs s_ret(x,c) are no greater than τ_ret, i.e., C_ret(x) = {c | s_ret(x,c) ≤τ_ret}. For the question answering model, given a new question x and its most relevant context c^*, we construct prediction sets by including answers y whose corresponding semantic confidence s_QA(x, y; c^*) is no greater than τ_QA, i.e., C_QA(x; c^*) = {y | s_QA(x,y;c^*)≤τ_QA}. Finally, we have the following standard guarantees: For retrieval, given a question x and its most relevant context c^*, we have ℙ(c^* ∈ C_ret(x)) ≥ 1-α.
For question answering, given a question x, its most relevant context c^*, and the true answer y, we have ℙ(∃ y' ∈ C_QA(x; c^*) : y'∼ y) ≥ 1-α, where ∼ denotes semantic similarity. Note that the randomness is over both the calibration set and the newly observed example. §.§ End-to-End Prediction Sets Next, we describe how we integrate prediction sets for retrieval and question answering to obtain an overall guarantee. The overall prediction set can be obtained by a straightforward composition of the individual prediction sets: C(x)=⋃_c∈ C_ret(x)C_QA(x;c), i.e., run the question answering prediction set on every retrieved context. Intuitively, the most relevant context c^* is contained in C_ret(x) with high probability, and some answer y' semantically equivalent to the true answer y is contained in C_QA(x;c^*) with high probability; in that case, y'∈ C_QA(x;c^*)⊆ C(x), which is the desired guarantee. The main issue is that the individual coverage guarantees only hold with high probability. A naïve strategy is to take a union bound to get ℙ(∃ y' ∈ C(x) : y'∼ y) ≥ 1-2α. More generally, we can apply global hypothesis tests, which are techniques for efficiently combining multiple statistical tests, to construct prediction sets ((<ref>) corresponds to the Bonferroni correction). Different global tests provide tradeoffs in terms of the resulting assumptions and guarantees. We focus on the Bonferroni Correction (Bonf) and the Harmonic Mean p-value (HMP), which are global tests that allow for dependencies between individual tests. In particular, we treat the individual conformal predictors constructed using split conformal prediction as individual statistical tests, and then combine them with a global test. For Bonf, given error level α, we choose hyperparameters α_ret and α_QA such that α_ret+α_QA=α. Then, we compute the threshold τ_ret using α_ret and τ_QA using α_QA as before. We describe HMP in Appendix <ref>. Using Bonf, we have the following end-to-end guarantee: We have ℙ(∃ y' ∈ C(x) : y'∼ y) ≥ 1-α. We give a proof in Appendix <ref>. HMP gives a weaker guarantee since it is asymptotic rather than finite sample. §.§ Hyperparameter Optimization Many global tests, including Bonf, have hyperparameters that need to be tuned. In the Bonferroni Correction, these hyperparameters are α_ret and α_QA; in HMP, these hyperparameters are the weights assigned to the individual tests. Although these hyperparameters do not affect the correctness guarantee, they can significantly affect the resulting prediction set sizes. Thus, to maximize performance, we propose to use Bayesian optimization to choose hyperparameters. We describe our approach in detail in Appendix <ref>. We call our method Combining Conformal Prediction Sets via Optimized Multiple Hypothesis Testing (CCPS). Finally, we measure prediction set size in several ways, namely: (i) the expected number of answers, (ii) the expected number of unique answers deduplicated by exact match, and (iii) the expected number of unique answers deduplicated by semantic equivalence based on ROUGE score. § EXPERIMENTS §.§ Experiment Setup We use Dense Passage Retriever (DPR) as our retriever, and gpt-3.5-turbo (ChatGPT) as our question answerer. Our method is agnostic to the retriever and question answerer, and can be straightforwardly adapted to other models. We evaluate our approach on the Natural Questions dataset <cit.>. For each question, we retrieve contexts using DPR, and then query ChatGPT on each question-context pair, asking it to return 40 potential answers.
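For concreteness, the end-to-end construction described above can be sketched as follows; retrieve, answer_candidates, s_ret, and s_qa are hypothetical callables standing in for the retriever, the sampled chatbot answers, and the two nonconformity measures, and the calibration score arrays are assumed to have been computed beforehand.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha):
    """ceil((N+1)(1-alpha))/N empirical quantile of the calibration nonconformity scores."""
    n = len(cal_scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return np.sort(cal_scores)[k - 1]

def end_to_end_set(question, alpha_ret, alpha_qa, s_ret_cal, s_qa_cal,
                   retrieve, answer_candidates, s_ret, s_qa):
    """Bonferroni-style composition with alpha_ret + alpha_qa = alpha."""
    tau_ret = conformal_threshold(s_ret_cal, alpha_ret)
    tau_qa = conformal_threshold(s_qa_cal, alpha_qa)
    C = set()
    for c in retrieve(question):                      # candidate contexts
        if s_ret(question, c) <= tau_ret:             # context kept in C_ret(x)
            for y in answer_candidates(question, c):  # sampled answers for (x, c)
                if s_qa(question, y, c) <= tau_qa:    # answer kept in C_QA(x; c)
                    C.add(y)
    return C
```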
One challenge is that querying ChatGPT is costly; thus, we restrict to querying it on the top 20 retrieved contexts per question. We filter out questions for which the most relevant context does not occur in the top 20; this restriction can easily be relaxed by performing additional queries to ChatGPT. While doing so may increase the overall sizes of the prediction sets, we expect the relative performance of the different approaches to be preserved. We collect 516 examples as the calibration set, 811 examples as the optimization set, and 812 examples as the test set. We run each experiment with ten random seeds. We denote CCPS with the Bonferroni correction as CCPS-B and with the Harmonic mean p-value as CCPS-H. We compare our methods (CCPS-B and CCPS-H) to their counterparts (Bonf and HMP) with α_ret=α_QA=α/2. §.§ Prompt design To reduce the API cost, we used a prompt that includes both the question and the context, but no in-context few-shot demonstrations. To encourage ChatGPT to answer questions based on the retrieved context, we used the following prompt, where question is substituted with the question, and context with the context: Answer the following question based on the given context; Answer "I don't know" if you don't know the answer; Answer the question using only one keyword. Question: question Context: context Answer:
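A minimal sketch of how such a prompt can be assembled is shown below; the actual API call is deployment-specific and is represented by a hypothetical query_chat_model wrapper.

```python
PROMPT_TEMPLATE = (
    "Answer the following question based on the given context; "
    "Answer \"I don't know\" if you don't know the answer; "
    "Answer the question using only one keyword.\n"
    "Question: {question}\nContext: {context}\nAnswer:"
)

def build_prompt(question: str, context: str) -> str:
    return PROMPT_TEMPLATE.format(question=question, context=context)

# hypothetical usage: sample K = 40 candidate answers for one question-context pair
# answers = [query_chat_model(build_prompt(q, c)) for _ in range(40)]
```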
Given a new input x_N+1, conformal prediction constructs a prediction set C(x_N+1)⊆𝒴 using Algorithm <cit.>. Intuitively, for every label y ∈𝒴, this algorithm checks whether (x_N+1,y) is similar to examples in the B according to the nonconformity measure s(B, x_N+1, y). If they are similar, then y is included in the prediction set C(x_N+1); otherwise, y is excluded from C(x_N+1). To connect these ideas with multiple hypothesis testing, we note that conformal prediction can be framed as an application of the Neyman-Pearson theory for hypothesis testing <cit.>. A variant of conformal prediction is inductive conformal prediction (ICP), which holds out a fixed calibration set and compares the nonconformity score of new inputs to the calibration set. Since the calibration set is fixed, we omit B in the nonconformity score function. Since we do not need to compute nonconformity scores for the calibration set repeatedly, ICP is more computationally efficient. <cit.> gives a detailed introduction to ICP. § GLOBAL TESTING Global testing is a technique for combining multiple statistical tests. Individually, each test potentially rejects the null hypothesis, which is referred to as producing a “discovery”. The goal of global testing is to minimize false discoveries (i.e., incorrectly rejecting the null hypothesis) by controlling some error rate while maximizing the efficiency of each test (i.e., correctly rejecting the null hypothesis). Suppose we have a number M of null hypotheses H^1, …, H^M. In a single statistical test, we accept the null hypothesis if the test is significant (p-value is sufficiently large) and reject otherwise. After taking these M tests, the possible outcomes are shown in the following table: H^m not rejected H^m rejected Total H^m True N_0 | 0 N_1 | 0 M_0 H^m False N_0 | 1 N_1 | 1 M_1 Total M-R R M Here, R is the number of rejections, N_0 | 1 and N_1 | 0 are the exact (unknown) number of errors made after testing; N_1 | 1 and N_0 | 0 are the number of correctly rejected and correctly retained null hypotheses. Then, global testing typically controls the Family-wise error rate (FWER), which is defined as the probability of falsely rejecting at least one null hypothesis: FWER = (N_1 | 0≥ 1). We say an global testing satisfying this bound is valid. Common global testing techniques include Bonferroni correction <cit.>, Fisher's method <cit.>, Brown's method <cit.>, and Harmonic mean p-value <cit.>. § GLOBAL TESTING VIA THE HARMONIC MEAN P-VALUE HMP is motivated by Bayesian model averaging, and can control of the weak and strong Famility-wise Error Rate (FWER). The control is achieved by combining dependent tests using the generalized central limit theorem. Specifically, given valid p-values[A p-value p for a null hypothesis H^m is valid if it satisfies [p ≤α| H^m] ≤α for all α∈ [0,1], which implies that valid p-values should be subject to a uniform distribution.] from M statistical tests, denoted as (p^1, …, p^M), and weights for each hypothesis (w^1, …, w^M) satisfying ∑_m=1^M w^m =1, HMP combines the p-values to form p̅ =∑_m=1^M w^m/∑_m=1^M w^m / p^m. Next, to control the weak FWER at level α, HMP uses the following policy: given the combined p-value p̅, If p̅ < α_M: Reject {H^1, …, H^M} Otherwise: Accept {H^1, …, H^M}, where α_M is an adjusted significance level based on α and the number of test M. In our case (α=0.1, M=2), α_M=0.079. Using this policy, HMP can control of the weak FWER to be under α <cit.>. 
Note that the weak FWER equals the Type-I error of the global test when all individual null hypotheses are true <cit.>. In the retrieval augmented question answering task, given a question X, a retrieved context c, and parameters w_ret and w_QA, we first compute the p-value for the retrieval task as p_ret = ∑_n=1^N𝕀(s_ret,n≤ s_ret(X, c))/N. We then compare this value to λ = w_ret/(1/α_M - w_QA) = w_ret/(1/0.079 - w_QA), which is the minimum retrieval p-value for which HMP can possibly accept context c. If p_ret < λ, we reject c; otherwise, we submit the question X together with the context c to ChatGPT and request answers. Given a generated answer y from ChatGPT, the chatbot p-value is computed as p_QA = ∑_n=1^N𝕀(s_QA,n≤ s_QA(X, y; c))/N. Then, the Harmonic Mean p-value (HMP) combines these p-values as p̅ = 1/(w_ret/p_ret + w_QA/p_QA). To decide whether to include the answer y, HMP uses the following policy: given the combined p-value p̅, if p̅ < α_M, exclude y from C(x); otherwise, include y in C(x). § END-TO-END PERFORMANCE GUARANTEE First, we define the individual null hypotheses for the retrieval and chatbot tasks. Given a question X, the null hypothesis for a context c is defined as H_c^ret: c is the most relevant context for X; given a question X and its top-1 relevant context c^*, the null hypothesis for a generated answer y from the chatbot is defined as H_y^QA: y is semantically correct for X and c^*. Then, we define the global null hypothesis for c and y as the conjunction of the two individual hypotheses, i.e., H_c, y = H_c^ret⋀ H_y^QA, which means that the global null hypothesis is true if both individual hypotheses are true. Using global testing, given a user-specified error level α, we can guarantee that the Type-I error, which is the rate at which true global null hypotheses are rejected, is at most α. In other words, the rate at which true global null hypotheses are accepted is at least 1-α. By our definition of the null hypotheses, a global null hypothesis H_c, y is true only if c is the top-1 relevant context and y is a semantically correct meaning. By our algorithm, if the global null hypothesis is accepted, we include y in the prediction set. Therefore, the rate at which semantically correct meanings y are included in the prediction set is no less than the rate at which the global null hypothesis is accepted, which is at least 1-α. Therefore, for the end-to-end prediction set, semantically correct meanings are included in the set with probability at least 1-α, i.e., given a question X and its semantically correct meaning y, we have ℙ(y ∈ C(x)) ≥ 1-α. Note that the true meaning coverage rate could be higher than the global null hypothesis acceptance rate because semantic meanings based on other relevant contexts could also be correct. § BAYESIAN OPTIMIZATION Many global tests have hyperparameters w∈𝒲; e.g., HMP assigns weights w_ret and w_QA to the two null hypotheses, and the Bonferroni Correction assigns significance levels α_ret and α_QA to the two hypotheses. While these parameters do not affect the Type-I error rate of the global test, they can affect the Type-II error rate and therefore the resulting cost of C_B,w. Our method uses Bayesian Optimization (BO) to optimize these hyperparameters w∈𝒲 to minimize the given cost g. In particular, BO first initializes a Gaussian Process (GP) model of the cost function. Then, based on the GP, BO selects parameters potentially minimizing the cost function and evaluates the prediction set cost at the selected parameters. Finally, BO refines the GP model based on the evaluated cost.
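A rough sketch of this hyperparameter search is given below. For simplicity it uses a plain grid search over the Bonferroni split (α_ret, α_QA) as a stand-in for the GP-based Bayesian optimization used here; build_sets and set_size are hypothetical callables that construct end-to-end prediction sets on the held-out optimization split and measure their size.

```python
import numpy as np

def tune_alpha_split(alpha, opt_questions, build_sets, set_size, n_grid=19):
    """Pick (alpha_ret, alpha_qa) with alpha_ret + alpha_qa = alpha that minimizes the
    average prediction-set size on the optimization split (grid-search stand-in for BO)."""
    best, best_cost = None, np.inf
    for alpha_ret in np.linspace(0.05 * alpha, 0.95 * alpha, n_grid):
        alpha_qa = alpha - alpha_ret
        sets = [build_sets(q, alpha_ret, alpha_qa) for q in opt_questions]
        cost = float(np.mean([set_size(S) for S in sets]))
        if cost < best_cost:
            best, best_cost = (alpha_ret, alpha_qa), cost
    return best
```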
BO iteratively optimizes the objective function across T iterations. To preserve the validity of the global test, we separate global testing from BO. In particular, we split the available data into a calibration set and an optimization set (we also use a separate training set to train the nonconformity scores s^m, but this step occurs prior to applying CCPS). The parameters w are first optimized by running a global test on the optimization set and evaluating the resulting cost. Once we have chosen hyperparameters w, CCPS runs the global test one final time, but now in conjunction with the held-out calibration set B, to obtain C_B,w. The pseudo-code can be found in Algorithm <ref>. § RESULTS WITH α=0.2 § EXAMPLES OF PREDICTION SETS Question: 'what is the second movie of the pirates of the caribbean' Reference Answer: "Dead Man 's Chest" Answer Set: "Dead Man's Chest.", 'Dead Men Tell No Tales', "Don't know.", 'Pitch Black', "Dead Men Tell No Tales or Salazar's Revenge", "I”Don't know”", "don't know", "Unknown/ I don't know.", "Dead Men Tell No Tales/Salazar's Revenge", "Dead Men Tell No Tales (or Salazar's Revenge)", '"On Stranger Tides".', "I Don't Know", 'Unknown.', '"I don't know"', 'fourth.', 'Pirates', 'Pirates.', "Don't know", '"Unknown"', "Don't remember/I don't know", 'fourth', "I don't know", "dead man's chest", 'Dead Men Tell No Tales.', 'On Stranger Tides.', '"fourth"', "I Don't know.", '"Don't know"', 'unknown', '"Dead Man's Chest"', "Dead Man's Chest (keyword: Chest)", "Dead Man's Chest", "Dead Men's Chest", "Dead Man's Chest", '"On Stranger Tides"', '"Pirates"', "I don't know.", 'Unknown', 'Dead Men Tell No Tales (or fifth film)', 'Fourth', '"Fourth"', 'unknown.', 'Pitch Black.', '"Dead Men Tell No Tales"', '"Pirates."', 'On Stranger Tides' Question: "when did spanish town become jamaica 's capital" Reference Answer: "1534" Answer Set: '1680', "Don't know.", '1534.', '1873.', '1655', '1845', "don't know", '1872.', '1534', 'Eighteenth Century', 'Eighteenth century.', '1670.', '1845.', 'Eighteenth century', "Don't know", 'eighteenth century', "I don't know", '1873', '1847', "Not mentioned/ I don't know", "I don't know.", '1670', '1680.', 'eighteenth century.', '1962', '1962.', '1847.', '1872', '1655.' Question: 'who presented in parliament the separate rail budget in india' Reference Answer: "'the Minister of Railways'" Answer Set: 'Lalu Yadav.', 'Minister of Railways.', 'D. V. Sadananda Gowda', "Don't know.", 'Ms. Mamata Banerjee', 'parliament', 'D. V. Sadananda Gowda.', 'Sir William Acworth', 'Lalu Prasad Yadav.', 'John Mathai.', 'John Mathai', '"I don't know"', 'Minister.', 'Minister of Railways', 'Minister', 'Suresh Prabhu', '"I don't know".', "I don't know", 'RLDA', 'Parliament.', 'Sir William Acworth.', 'RLDA.', 'Mamata Banerjee', "I Don't Know.", 'Lalu Yadav', "I don't Know.", 'Lalu Prasad Yadav', 'D.V. Sadananda Gowda.', "I don't know.", 'Suresh Prabhu.', 'Mamata Banerjee.', 'D.V. Sadananda Gowda', 'Parliament'
http://arxiv.org/abs/2307.01556v1
20230704081413
Spatio-Temporal Perception-Distortion Trade-off in Learned Video SR
[ "Nasrin Rahimi", "A. Murat Tekalp" ]
eess.IV
[ "eess.IV" ]
Nasrin Rahimi and A. Murat Tekalp This work is supported in part by TUBITAK 2247-A Award No. 120C156 and KUIS AI Center funded by Turkish Is Bank. A. M. Tekalp also acknowledges support from Turkish Academy of Sciences (TUBA). Department of Electrical & Electronics Engineering and KUIS AI Center Koç University, 34450 Istanbul, Turkey Perception-distortion trade-off is well-understood for single-image super-resolution. However, its extension to video super-resolution (VSR) is not straightforward, since popular perceptual measures only evaluate naturalness of spatial textures and do not take naturalness of flow (temporal coherence) into account. To this effect, we propose a new measure of spatio-temporal perceptual video quality emphasizing naturalness of optical flow via the perceptual straightness hypothesis (PSH) for meaningful spatio-temporal perception-distortion trade-off. We also propose a new architecture for perceptual VSR (PSVR) to explicitly enforce naturalness of flow to achieve realistic spatio-temporal perception-distortion trade-off according to the proposed measures. Experimental results with PVSR support the hypothesis that a meaningful perception-distortion tradeoff for video should account for the naturalness of motion in addition to naturalness of texture. Perceptual video super-resolution, natural texture, natural motion, perceptual straightness hypothesis, spatio-temporal perception-distortion trade-off. § INTRODUCTION Early works on learned video super-resolution (VSR) employed supervised training to minimize the l2/l1 loss <cit.>. However, it is well-known that models that are trained to minimize the mean-squared-error (MSE) result in blurry unnatural looking textures because the minimum MSE estimate is a probability weighted average of all feasible solutions. Later, generative perceptual VSR methods that optimize a weighted combination of l2/l1 loss, a no-reference adversarial loss, and full-reference perceptually motivated losses have been proposed. Perceptual VSR methods provide sharper texture in each frame of video at the expense of a decrease in PSNR as predicted by perception-distortion trade-off theory <cit.>. Typical VSR methods, whether based on fidelity only or perceptual criteria, calculate losses per frame, and therefore, do not take temporal coherence explicitly into account. Since humans can detect unnatural motion and jitter easily, temporal inconsistencies result in a video with low perceptual quality, even if the texture in each frame looks natural. Few works explicitly model or enforce the temporal consistency of frames in VSR. However, enforcing naturalness of motion remains as a non-trivial problem that should be considered within the context of spatio-temporal perception-distortion trade-off using a new well-defined measure of naturalness of motion.
To this effect, we propose a new measure to evaluate spatio-temporal naturalness of super-resolved videos for better spatio-temporal perception-distortion trade-off and present a new perceptual VSR (PSVR) model with two discriminators, where one evaluates naturalness of texture and the other naturalness of motion. We discuss related works in Section <ref>. Section <ref> proposes a new measure for spatio-temporal naturalness of video based on the perceptual straightness hypothesis <cit.>. The proposed temporally coherent PVSR model is introduced in Section <ref>. Evaluation methodology and comparative experimental results are presented in Section <ref>. Finally, Section <ref> concludes the paper. § RELATED WORK §.§ Perception-Distortion Trade-off for Images According to the perception-distortion trade-off theory <cit.>, distortion refers to dissimilarity between an original image X and its reconstruction X̂, which is measured using a full-reference (FR) measure, while perceptual quality is the degree to which X̂ appears as a valid natural image, measured by a no-reference (NR) measure, regardless of how similar it is to X. It was shown that an algorithm cannot be both very high fidelity and perceptually natural regardless of the distortion measure used <cit.>. However, extension of this result to video (temporal dimension) is nontrivial as discussed in Section <ref>. §.§ Perceptual VSR Methods and Their Evaluation Perceptual VSR models <cit.> aim to achieve a trade-off between distortion and perceptual quality using conditional GANs <cit.> and perceptual losses. <cit.> employs VSRResNet as generator along with a discriminator, but do not address temporal consistency either in training or in evaluation. <cit.> proposed a recurrent architecture and a video discriminator to reinforce temporal consistency. Addressing the SR problem for fluid flow, <cit.> employs a temporal discriminator in addition to a spatial discriminator. Besides a recurrent architecture, <cit.> introduced a spatio-temporal discriminator, called TecoGAN, together with a set of training objectives for realistic and temporally coherent VSR. Motivated by the perceptual straightening hypothesis (PSH) <cit.> for human vision, <cit.> proposes a quality-aware discriminator model to enforce the straightness of trajectory of the perceptual representations of predicted video frames for the video frame prediction task. In this paper, we apply PSH to the VSR task; furthermore, we do not impose perceptual straightness as a constraint, but use it as a measure for perceptual evaluation. VSR models are typically evaluated by averages (over frames) of PSNR and SSIM as distortion measures, and of LPIPS (FR measure) <cit.> and NIQE (NR measure) <cit.> as perceptual quality measures <cit.>. However, these measures are designed for single images; hence, they do not directly evaluate motion artifacts or temporal coherence of frames. As measures of temporal coherency, <cit.> introduces tOF, which is pixel-wise difference of estimated flow vectors, and tLP, which is the difference between LPIPS of successive frames of predicted and pristine videos. Alternatively, MOVIE <cit.> integrates both spatial and temporal aspects of distortion assessment based on a spatio-spectrally localized multiscale framework. STEM <cit.> combines the NIQE metric with a blind temporal algorithm which is based on perceptual straightening hypothesis <cit.>. 
Nonetheless, there is no study on the effectiveness and role of these measures in evaluating spatio-temporal perception-distortion trade-off in VSR. § NEW MEASURES FOR SPATIO-TEMPORAL PERCEPTION-DISTORTION TRADE-OFF IN VSR The role of naturalness of motion in perceptual evaluation of VSR models has not been well-studied. To fill this gap, we propose new spatio-temporal perception and distortion measures to evaluate the perception-distortion trade-off in VSR. §.§ Spatio-Temporal Perceptual Measure The spatio-temporal perceptual quality can be evaluated as a combination of spatial and temporal naturalness measures. Spatial Naturalness: For still frames, it is typical to use LPIPS <cit.> or NIQE <cit.> measures, which estimate the deviation of image statistics from that of natural images. For video, spatial perceptual quality PQ_Spatial can be evaluated by averaging LPIPS or NIQE over all frames. For example, PQ_Spatial=1/N∑_n=1^NLPIPS_Alex(X_n , X̂_n) Temporal Naturalness: We are inspired by the perceptual straightening hypothesis <cit.>, which states that the human visual system transforms visual stimuli to a perceptual domain, where natural sequences follow a straighter temporal trajectory. A two-stage computational model that imitates the nonlinear properties of the early visual system, namely, the retina, lateral geniculate nucleus (LGN) and V1, to transform sequences into perceptual domain was proposed in <cit.>. Let's consider an N-frame video {X^n}_n=1,...,N. The perceptual representations {P^n}_n=1,...,N are high-dimensional vectors in the perceptual space. The curvature Cur(n) at node (frame) n is defined as the angle between successive displacement vectors, V^n = P^n - P^n-1, n=2,3,...,N, which is computed by the dot product of vectors V^n and V^n+1 Cur(n)=arccos( V^n· V^n+1/∥ V^n∥∥ V^n+1∥) The straightness of the trajectory at frame n in the perceptual space is given by ST(n) = π - Cur(n). The straightness of the trajectory in the intensity space can be computed similarly. The average difference between the straightness of a sequence in the intensity domain ST^I and the perceptual domain ST^P is defined as a measure of temporal naturalness PQ_Temporal=1/N∑_n=2^N(ST^p(n)-ST^I(n)) Spatio-temporal Naturalness: A natural sequence will have higher PQ_Temporal and lower PQ_Spatial. Consequently, a spatio-temporal perceptual quality measure can be defined as P_ST= PQ_Spatial/ PQ_Temporal where lower scores indicate better quality. §.§ Spatio-Temporal Distortion Measure The classical FR measure of fidelity is pixel-wise MSE, which is applied to video by averaging over all N-frames, as MSE_Pix = 1/N∑_n=1^NX_n-X̂_n_2^2 where X_n denotes frame n. To take temporal distortions into account explicitly, we also define optical flow MSE by MSE_OF=1/N∑_n=2^NOF(X_n,X_n-1)-OF(X̂_n,X̂_n-1)_2^2 where OF(X_n,X_n-1) is the flow between frames X_n and X_n-1. This is similar to t_OF measure defined in terms of l1 distance <cit.>. We define a spatio-temporal distortion measure D_ST for video as a weighted sum of MSE_Pix and MSE_OF D_ST= MSE_Pix+ α MSE_OF The parameter α is chosen empirically considering that the effect of optical flow distortion should neither be neglected nor dominate the overall distortion measure. We set α = 1000. § A NEW PERCEPTUAL VSR ARCHITECTURE FOR NATURAL TEXTURE AND MOTION We propose a new GAN-based perceptual VSR (PVSR) architecture motivated by the spatial and temporal naturalness measures discussed in Section <ref>. 
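The proposed measures can be computed directly from the definitions above. The sketch below assumes trajectories are given as (N, D) arrays (flattened frames for the intensity trajectory, outputs of the two-stage perceptual model for the perceptual one), frames as (N, H, W, C) arrays, and flows as (N-1, H, W, 2) arrays; the small epsilon that guards against zero-length displacement vectors is ours and not part of the definitions.

```python
import numpy as np

def straightness(traj):
    """ST(n) = pi - Cur(n) for a trajectory of N frames given as an (N, D) array."""
    v = np.diff(traj, axis=0)                                   # displacement vectors V^n
    cos = np.sum(v[:-1] * v[1:], axis=1) / (
        np.linalg.norm(v[:-1], axis=1) * np.linalg.norm(v[1:], axis=1) + 1e-12)
    return np.pi - np.arccos(np.clip(cos, -1.0, 1.0))

def pq_temporal(perceptual_traj, intensity_traj):
    """Average straightness gain of the perceptual trajectory over the intensity one."""
    return float(np.mean(straightness(perceptual_traj) - straightness(intensity_traj)))

def p_st(pq_spatial, pq_temp):
    """Spatio-temporal perceptual score; lower is better."""
    return pq_spatial / pq_temp

def d_st(frames, frames_hat, flows, flows_hat, alpha=1000.0):
    """D_ST = MSE_pix + alpha * MSE_OF, with per-frame squared-error sums averaged over frames."""
    mse_pix = np.mean(np.sum((frames - frames_hat) ** 2, axis=tuple(range(1, frames.ndim))))
    mse_of = np.mean(np.sum((flows - flows_hat) ** 2, axis=tuple(range(1, flows.ndim))))
    return float(mse_pix + alpha * mse_of)
```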
The proposed PVSR architecture, depicted in Figure <ref>, employs a generator model, a flow estimation model, and two discriminators. Generator models that minimize a distortion-only loss yield superior fidelity measures, such as PSNR, at the expense of lower perceptual scores. Inclusion of an adversarial texture loss encourages perceptually more realistic textures; however, this causes motion artifacts due to inconsistent hallucinations in successive frames. We show that an additional adversarial motion loss encourages perceptually more natural videos. §.§ PVSR Generator Model The PVSR model can use any generator network G. In this paper, we experimented with two VSR models: EDVR <cit.>, which processes each picture independent of the previous outputs, hence, is more prone to motion artifacts, and BasicVSR++ <cit.>, which is a recurrent model, hence, inherently generates temporally more coherent frames. The resulting two PVSR models are called PEDVR and PBasicVSR++, respectively. Table 2 shows that we can get significant improvements in the perceptual scores even with PBasicVSR++. §.§ PVSR Texture and Motion Discriminators To achieve both natural texture and motion, the PVSR model employs two discriminators: a spatial (texture) discriminator D_S to distinguish ground-truth (GT) frames from reconstructed ones, and a temporal (motion) discriminator D_T to discriminate between optical flows estimated from the GT frames and those estimated from reconstructed SR frames. The spatial and temporal discriminators have the same architecture, containing five convolutional layers. The first layer has 3×3 kernel and stride 1, while the others have 4×4 kernels and stride 2. The number of filters in each layer is 64, 64, 64, 128, and 256 respectively. The final feature map is input to a fully connected layer to compute the fake/real scores. The input to the flow discriminator is the optical flow between the current and previous frames estimated by the pre-trained PWC-Net <cit.>, depicted with a blue box in Figure <ref>. §.§ Loss Functions In order to ensure naturalness of both texture and motion, we propose the generator loss function to include both texture loss ℒ_G,Spatial and motion loss ℒ_G,Temporal: ℒ_G=ℒ_G,Spatial+ℒ_G,Temporal Naturalness of Texture: Inspired by SRGAN <cit.>, we use the following texture loss ℒ_G,Spatial=λ_1ℒ_Pix+λ_2ℒ_vgg+λ_3ℒ_Pix,adv where ℒ_Pix is the pixel-wise l_2 loss to ensure texture fidelity, ℒ_vgg is l_2 loss of the GT and SR feature maps extracted from a pre-trained VGG-19 network, and the adversarial loss ℒ_Pix,adv aims to maximize the probability that the spatial discriminator will be fooled by the generator given by: ℒ_Pix,adv=-1/N∑_n=1^Nlog(D_S(X̂)) where N denotes the number of samples in a mini-batch. Naturalness of Motion: To ensure that the reconstructed video has coherent motion, the temporal loss is defined by ℒ_G,Temporal=λ_4ℒ_Flow +λ_5ℒ_Flow,adv where ℒ_Flow is the l_2 loss between optical flows OF^X and OF^X̂ estimated from GT frames and SR frames, respectively. ℒ_Flow,adv denotes the adversarial flow loss similar to <ref>, which is calculated based on the output of flow discriminator D_T and encourages the generator to produce sequences with a natural flow. Furthermore, to improve the long-term temporal consistency and avoid temporal accumulation of artifacts, we exploit Ping-Pong loss introduced in <cit.>. § EVALUATION §.§ Training Details We trained all models on REDS dataset <cit.>, which contains videos with large and complex motions. 
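Before continuing with the training details, the two discriminators and the combined generator loss described above can be sketched in PyTorch as follows. The layer widths, kernel sizes, and strides follow the text; the LeakyReLU activations, padding, sigmoid on the discriminator scores, LazyLinear head, and the loss weights are assumptions, and vgg_feat stands for a pre-trained VGG-19 feature extractor.

```python
import torch
import torch.nn as nn

class PatchScoreDiscriminator(nn.Module):
    """Five-conv discriminator used for both the texture (D_S) and the flow (D_T)
    branch; only the number of input channels differs (3 for frames, 2 for flow)."""
    def __init__(self, in_ch):
        super().__init__()
        chs = [64, 64, 64, 128, 256]
        layers, prev = [], in_ch
        for i, ch in enumerate(chs):
            k, s = (3, 1) if i == 0 else (4, 2)
            layers += [nn.Conv2d(prev, ch, k, stride=s, padding=k // 2),
                       nn.LeakyReLU(0.2, inplace=True)]
            prev = ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))  # real/fake score

    def forward(self, x):
        return self.head(self.features(x))

def generator_loss(sr, hr, flow_sr, flow_hr, d_s, d_t, vgg_feat,
                   lams=(1.0, 1.0, 5e-3, 1.0, 5e-3)):
    """L_G = l1*L_pix + l2*L_vgg + l3*L_pix,adv + l4*L_flow + l5*L_flow,adv."""
    l1, l2, l3, l4, l5 = lams
    mse = nn.functional.mse_loss
    l_pix = mse(sr, hr)
    l_vgg = mse(vgg_feat(sr), vgg_feat(hr))
    l_pix_adv = -torch.log(torch.sigmoid(d_s(sr)) + 1e-8).mean()
    l_flow = mse(flow_sr, flow_hr)
    l_flow_adv = -torch.log(torch.sigmoid(d_t(flow_sr)) + 1e-8).mean()
    return l1 * l_pix + l2 * l_vgg + l3 * l_pix_adv + l4 * l_flow + l5 * l_flow_adv
```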
Similar to other methods trained on REDS, four clips (000, 011, 015 and 020 - called REDS4) are utilized as the test set, and the remaining 266 clips are used for training. For training, 64×64 patches from the LR frames are randomly cropped and super-resolved with a factor of 4. The Adam optimizer with β_1 = 0.9, β_2=0.99 is utilized for training. All experiments are performed on a single NVIDIA Tesla V100. §.§ Ablation Studies on PVSR In order to show the benefit of using the second discriminator and different loss terms on the performance of PEDVR and PBasicVSR++, we conducted two ablations: Ablation 1 model employs only a texture discriminator, trained by the loss function (<ref>). Ablation 2 model uses the flow loss in addition to the loss function in Ablation 1, and the complete PVSR models employ both discriminators trained by the loss function (<ref>). The pretrained moderate EDVR and BasicVSR++ models are used as the starting point for the generators in all experiments. The weight of different loss terms, learning rates, and number of iterations for all experiments are summarized in Table <ref>. The results are discussed in Section <ref>. §.§ Comparative Results We compare the PEDVR model vs. original EDVR and PBasicVSR++ vs. original BasicVSR++. We also compare both PVSR models vs. TecoGAN (as the state-of-the-art temporally coherent perceptual VSR model). For fair comparison, we also trained TecoGAN on the REDS dataset from scratch[The models and video results can be found at <https://github.com/KUIS-AI-Tekalp-Research-Group/Perceptual-VSR>.] First let's discuss which metrics correlate better with video quality. Clearly, results of BasicVSR++ (evaluated as still frames) look more natural than those of EDVR; yet, EDVR results have better NIQE scores than BasicVSR++. This is because the NR measure NIQE improves as the amount of hallucinated texture increases regardless of fidelity, and the recurrent framework of BasicVSR++ limits unrestricted hallucination in successive frames. Hence, we decided to use LPIPS in P_ST rather than NIQE because we believe LPIPS is better correlated with the quality of VSR frames. In terms of naturalness of motion, we observe that the Straightness measure correlates well with visual quality and OF MSE. Inspection of Table <ref> shows that both EDVR and BasicVSR++ (optimized solely on pixel-wise Charbonnier loss) achieve better PSNR and OF MSE, while NIQE and LPIPS scores are worse compared to their PVSR counterparts. Ablation 1 models in both cases (with only texture discriminator) achieve better NIQE and LPIPS at the expense of worse PSNR as expected <cit.>. Ablation 1 models also have worse OF MSE, since hallucinated textures in successive frames lead to inconsistent optical flow compared to the ground-truth. Introducing ℒ_Flow term in the loss function leads to lower OF MSE in Ablation 2 models, while introducing the second (motion) discriminator leads to the best LPIPS and straightness scores achieved by the proposed PVSR models. We also note that PBasicVSR++ model is clearly superior to TecoGAN (state-of-the-art temporally coherent perceptual VSR model) and PEDVR in all mentioned measures due to two reasons: First, we employ two separate discriminators to allow learning spatial or temporal naturalness distributions separately as opposed to a single spatio-temporal discriminator. Second, BasicVSR++ is a stronger generator model compared to EDVR and the FRVSR network used in TecoGAN. 
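The training configuration described above (Adam with beta_1 = 0.9 and beta_2 = 0.99, 64x64 LR patches, x4 upscaling) might be set up as in the sketch below; the learning rates are placeholders, since the paper reports them in a table not reproduced here, and the (T, C, H, W) tensor layout is an assumption.

```python
import torch

def make_optimizers(generator, d_s, d_t, lr_g=1e-4, lr_d=1e-4):
    """Adam optimizers with the betas reported in the paper; learning rates are placeholders."""
    betas = (0.9, 0.99)
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr_g, betas=betas)
    opt_d = torch.optim.Adam(list(d_s.parameters()) + list(d_t.parameters()),
                             lr=lr_d, betas=betas)
    return opt_g, opt_d

def random_lr_crop(lr_frames, hr_frames, patch=64, scale=4):
    """Crop a random 64x64 LR patch and the aligned (patch*scale)^2 HR patch."""
    _, _, h, w = lr_frames.shape            # assumed layout (T, C, H, W)
    y = torch.randint(0, h - patch + 1, (1,)).item()
    x = torch.randint(0, w - patch + 1, (1,)).item()
    return (lr_frames[..., y:y + patch, x:x + patch],
            hr_frames[..., y * scale:(y + patch) * scale, x * scale:(x + patch) * scale])
```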
In order to demonstrate the naturalness of motion, Figure <ref> depicts temporal profiles extracted from 100 sequential frames. Comparison of the zoom-ins in (c)-(f) shows the superiority of the PBasicVSR++ model over all other models, and also that PEDVR has fewer temporal artifacts than the Ablation 1 model, which does not use temporal losses. Finally, the spatio-temporal perception-distortion trade-off can be evaluated in terms of the proposed D_ST and P_ST measures, which capture the spatio-temporal distortion and perceptual quality of a video. Table <ref> shows that the original EDVR and BasicVSR++ models have the best D_ST and the worst P_ST compared to their PVSR counterparts and their ablations. In contrast, the combination of losses in <ref> enables the proposed PVSR models to achieve the best P_ST and strike the desired spatio-temporal perception-distortion trade-off relative to the original models as well as their ablation models. § CONCLUSION It is well-accepted in the community that there is no single measure of image/video quality that correlates well with human preferences. Furthermore, commonly used image perception measures such as LPIPS and NIQE do not reflect naturalness of motion in videos. To this effect, we propose perceptual straightness as a measure of motion naturalness and also propose a new PVSR model with two discriminators, where a flow discriminator encourages naturalness of motion. As a result, this paper advances the state of the art in the spatio-temporal perception-distortion trade-off in VSR. Some lessons learned from this study include: i) a strong generator model is the most important factor to obtain the best perceptual results, ii) the perceptual quality/scores for the best model (BasicVSR++) can still be significantly improved by our PVSR architecture, iii) NIQE scores do not correlate well with visual VSR quality, iv) the perceptual straightness measure correlates well with motion naturalness, and v) the second (motion) discriminator improves the straightness scores.
http://arxiv.org/abs/2307.00231v1
20230701053928
Forward-Forward Algorithm for Hyperspectral Image Classification: A Preliminary Study
[ "Sidike Paheding", "Abel A. Reyes-Angulo" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
The back-propagation algorithm has long been the de-facto standard in optimizing weights and biases in neural networks, particularly in cutting-edge deep learning models. Its widespread adoption in fields like natural language processing, computer vision, and remote sensing has revolutionized automation in various tasks. The popularity of back-propagation stems from its ability to achieve outstanding performance in tasks such as classification, detection, and segmentation. Nevertheless, back-propagation is not without its limitations, encompassing sensitivity to initial conditions, vanishing gradients, overfitting, and computational complexity. The recent introduction of a forward-forward algorithm (FFA), which computes local goodness functions to optimize network parameters, alleviates the dependence on substantial computational resources and the constant need for architectural scaling. This study investigates the application of FFA for hyperspectral image classification. Experimental results and comparative analysis are provided with the use of the traditional back-propagation algorithm. Preliminary results show the potential behind FFA and its promises. § INTRODUCTION Deep Learning (DL) <cit.> has been revolutionizing many different fields due to its ability to achieve unprecedented performance when applied to real-world problems, including applications in agriculture <cit.>, medicine <cit.>, cyber-security <cit.>, and many others <cit.>. Hyperspectral image (HSI) contains an extensive array of continuous spectral information across numerous narrow bands. The inherent high-dimensionality of hyperspectral data poses substantial obstacles to accurate classification, owing to intricate spectral variations and a scarcity of labeled samples. Deep learning models, specifically convolutional neural networks (CNNs) <cit.>, have exhibited remarkable accomplishments in numerous computer vision tasks, notably including the classification of hyperspectral images <cit.>.
Nevertheless, the conventional backpropagation algorithm, which is commonly employed for training deep learning models, may confront certain limitations within this domain, such as computational or energy cost, and sensitivity to initial conditions <cit.>. By propagating errors in a backward manner through the network, the backpropagation algorithm calculates gradients that guide the adjustment of the model's parameters to minimize the objective function <cit.>. While backpropagation has proven effective in deep learning applications, it encounters difficulties when dealing with hyperspectral data such as limited availability of labeled samples. As a result, there is a need for alternative training approaches that can enhance the performance of deep learning models specifically in the context of hyperspectral image classification. In this study, we investigate the performance of the forward-forward algorithm (FFA) <cit.> for the task of hyperspectral image classification. The FFA explores the relationships among input data samples by feeding forward both the original data (positive data) and an alternative version of this data (negative data), encouraging the model to learn robust features that capture the underlying characteristics of the HSI. Our initial experiments show that solely using the FFA does not yield better results compared to the backpropagation algorithm. However, considering the advantages of both methods, we propose to combine the forward-forward pass algorithm with traditional backpropagation for hyperspectral image classification. The idea is to incorporate the forward-forward algorithm as an initial learning stage, allowing the model to learn more discriminative features. Subsequently, the model is fine-tuned using the backpropagation algorithm, which refines the learned representations and optimizes the classification performance. To the best of our knowledge, this work is the first attempt to utilize FFA for HSI classification task. The remaining sections of this paper are structured as follows: Section <ref> presents an overview of the FFA explored in this work. Section <ref> details the proposed hybrid approach. Section <ref> presents the benchmark dataset utilized for the experiments. Section <ref> provides the experimental setup and analyzes the obtained experimental results. Finally, Section <ref> provides a summary of the findings from our study. § METHODS §.§ The backpropagation Introduced by Rumelhart et al. <cit.>, in the backpropagation algorithm for training artificial neural networks, the process involves computing the gradient of the cost function with respect to the network's parameters and using this information to update the weights via gradient descent. Backpropagation has demonstrated strong generalization capabilities and effectiveness in handling non-linearities, making it applicable to a wide range of neural network types including feedforward networks, recurrent networks, and convolutional networks. §.§ The Forward-Forward algorithm Forward-forward algorithm (FFA) <cit.> involves substituting the conventional forward and backward passes from the backpropagation algorithm, with two forward passes that function in a parallel manner but on distinct data with opposing objectives. The affirmative pass involves real data and modifies the weights to improve the goodness in each hidden layer, while the negative pass operates on “negative data” and modifies the weights to diminish the goodness within every hidden layer. 
In <cit.>, two distinct criteria were investigated for measuring quality: the sum of the squared neural activities and the negative sum of the squared activities, although numerous other criteria can also be utilized. In the original FFA, the sum of the squares of the activities in the layer is expressed as G=∑_j z^2_j, where z_i represents the activity of the j^th hidden unit. Furthermore, the positive and negative passes adjust the weights locally, and the probability of the outputs are expressed as follows: prob(positive)=σ(G-θ) where σ despite a logistic distribution function, and and θ a given threshold. To facilitate the contrast of positive and negative data during the supervised training process of FFA, we need to develop a method for merging the data with their corresponding labels. In <cit.>, Hinton proposed to overlay the label information onto the data itself or embed the label within the input data. However, in this work, we take a different approach by appending the label at one end of the spectral signature of each sample. Moreover, we explore various methods for encoding the label information. For instance, we experiment with one-hot encoding representation, binary representation, and decimal representation. Through the analysis of experimental results, it is found that using the one-hot encoding representation to append the label information with the hyperspectral signature data yields better outcomes. As a result, all the experiments reported in this work utilizing FFA are trained using this approach. Figure <ref> visually depicts the imputation method employed, where the label-encoded information is appended at one end of the pixel's hyperspectral signatures. § FFA FOR HYPERSPECTRAL IMAGE CLASSIFICATION In this preliminary study, we contemplate the implementation of FFA with the use of fully connected layers and 1D convolutional layers for HSI classification. The fully connected layer performs a linear transformation on the input data, usually in a vector shape, through weights matrix multiplication operations, in addition to a bias term. The fully connected layers comprise the contributions of all inputs for a final prediction. In contrast, convolutional layers in CNN can be used to capture local dependencies in sequential data, such as time series or text. Unlike fully connected layers that operate on the entire input, CNN considers the local receptive field of the input at a time. This allows them to extract features that are sensitive to local patterns and variations. §.§ Fully Connected FFA network for pixel-wise HSI classification The classification of hyperspectral images poses a substantial challenge attributed to the data's high dimensionality and spectral complexity. Deep learning architectures have shown promise in extracting discriminative features from hyperspectral images. Nonetheless, the efficacy of these architectures is heavily contingent upon the quality of the learned representations. In this study, we explore the use of the FFA with Fully Connected layers to enhance for HSI classification task. FFA comprises the use of a few hidden layers to extract features from the HSI data, scale it to a latent space, and produce the final pixel-wise classification. A total of 3 hidden layers were used in our FFA, with the following number of units: 784, 500, and 500, respectively. §.§ Convolutional FFA network for pixel-wise HSI classification In this work, we explore the implementation of the aforementioned types of neural network layers, limited to 1D data. 
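A minimal sketch of one forward-forward layer and of the label-imputation step described above, following Hinton's goodness formulation; the threshold, learning rate, ReLU activation, and input length-normalization are assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def embed_label(spectra, labels, num_classes):
    """Append a one-hot label encoding to the end of each spectral signature."""
    return torch.cat([spectra, F.one_hot(labels, num_classes).float()], dim=1)

class FFLayer(nn.Module):
    """One fully connected forward-forward layer trained with a local goodness objective."""
    def __init__(self, in_dim, out_dim, theta=2.0, lr=1e-3):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.theta = theta
        self.opt = torch.optim.Adam(self.fc.parameters(), lr=lr)

    def forward(self, x):
        # Pass only the direction of the input to the next layer (length normalization).
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.fc(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)   # goodness G = sum_j z_j^2 (positive)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)   # goodness of negative data
        # Increase sigmoid(G - theta) on positives and decrease it on negatives.
        loss = F.softplus(-(g_pos - self.theta)).mean() + F.softplus(g_neg - self.theta).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detached outputs keep the next layer's update local to that layer.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach(), loss.item()

# Positive data: spectrum + correct one-hot label; negative data: spectrum + a wrong label.
```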
Convolutional layers for 1D data are usually implemented as neural networks that convolve the input of the hyperspectral image with a set of learnable filters. These layers detect local relationships and patterns within the interaction of the hyperspectral bands per pixel. The capture of this spectral signature enables the architecture to learn discriminative representations of the data. By using the FFA technique, the network aims to update the weights layer by layer in a forward pass only, relieving the computational load of computing the gradient during the backpropagation of the error to update the learned parameters. The implementation of the 1D CNN layer allows us to implement an FFA architecture similar to the type used for HSI classification <cit.>. The proposed FFA comprises the use of 1D CNN layers in the early stages to capture feature representations within a different latent dimensional space, while the fully connected layers are used at the end to learn how to properly discriminate among the classes. The network is configured as follows: an initial 1D convolutional layer with 64 kernels of size 64, followed by a set of two hidden layers with 128 and 256 feature maps, respectively, both with a kernel size of 36. Then, a max-pooling operation is applied to downscale the dimensions of the tensor by a factor of two. Another 1D CNN layer is applied with 256 kernels of size 36. This last 1D CNN layer is followed by another max-pooling operation with similar characteristics as the previous one, and a flattening operation. Finally, two fully connected layers are added with 100 and N units, where N represents the number of classes in the HSI dataset. §.§ Combination of FFA with backpropagation Given the similarities between the nature of FFA and the training procedure in contrasting learning, we propose the utilization of FFA during the initial stage of training. In this stage, each sample is contrasted with different output choices, enabling the model to learn how to effectively discriminate between the correct prediction and other alternatives. Subsequently, the model proceeds to refine its learning through traditional backpropagation using the same deep learning architecture. This process facilitates the model in adjusting the extraction of meaningful characteristics from the high-level spectral information and fine-tuning the latent representation for accurate final predictions. After this initial phase, the model transitions to the standard backpropagation technique, which leverages the deep learning model's architecture for further refinement. During this stage, the model fine-tunes its representation, optimizing its capacity to capture significant features from the high-dimensional spectral data. § DATASETS For simplicity of this proof-of-concept, we perform experiments over two publicly available dataset[www.ehu.eus/ccwintco/index.php/Hyperspectral Remote Sensing Scenes]: The Salinas valley and the Indian Pines. §.§ The Salinas Valley The Salinas dataset is a popular hyperspectral image dataset that is commonly used in the realms of remote sensing and image processing. It is named after the Salinas Valley in California, USA, where the data was collected. The dataset consists of a hyperspectral image of size 512 × 217 pixels, with 224 spectral bands covering the range from 0.2 to 2.4 micrometers. Each pixel in the image represents a small area on the ground, and the spectral bands capture the reflectance of the surface at different wavelengths. 
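The 1D-CNN layout described above can be written down as in the following sketch; padding, activations, and the use of LazyLinear to infer the flattened size are our assumptions, and whether each convolutional layer is trained with the FFA goodness objective or with backpropagation is left to the surrounding training procedure.

```python
import torch
import torch.nn as nn

def build_conv1d_backbone(num_classes):
    """Layer layout from the text; kernel sizes, filter counts, and pooling as described."""
    return nn.Sequential(
        nn.Conv1d(1, 64, kernel_size=64, padding=32), nn.ReLU(),
        nn.Conv1d(64, 128, kernel_size=36, padding=18), nn.ReLU(),
        nn.Conv1d(128, 256, kernel_size=36, padding=18), nn.ReLU(),
        nn.MaxPool1d(2),
        nn.Conv1d(256, 256, kernel_size=36, padding=18), nn.ReLU(),
        nn.MaxPool1d(2),
        nn.Flatten(),
        nn.LazyLinear(100), nn.ReLU(),
        nn.Linear(100, num_classes),
    )

model = build_conv1d_backbone(num_classes=16)   # e.g., 16 classes in Salinas
logits = model(torch.randn(8, 1, 224))          # 224 spectral bands per pixel
```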
The Salinas dataset was collected using an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor, which was flown over an agricultural area in the Salinas Valley. The image contains 16 different crop types, including lettuce, broccoli, and bare soil, among others. §.§ The Indian Pines Collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over an agricultural area in Indiana, USA, the Indian Pines dataset consists of a hyperspectral image of size 145 × 145 pixels, with 224 spectral bands covering the range from 0.4 to 2.5 micrometers. Each pixel in the image represents a small area on the ground, and the spectral bands capture the reflectance of the surface at different wavelengths. The dataset contains 16 different land cover classes, including crops, trees, roads, and buildings, among others. § RESULTS §.§ Experimental setup To ensure a fair comparison, we evaluate the different techniques reported in this work using the same training and test datasets. The Salinas and Indian Pines HSI datasets were split into training, validation, and testing sets in an 8:1:1 ratio, respectively. To obtain robust performance estimates, we repeat the entire experimental process three times and average the results. For each experiment, the respective models are trained for 250 epochs using the Adam optimizer with a fixed learning rate of 1×10^-3. When employing backpropagation, categorical cross-entropy was utilized as the loss function. However, when FFA is used, a custom loss function is implemented to measure the distance between the goodness of each positive and negative sample with respect to the provided threshold. All the experiments are run on an NVIDIA RTX 3070 graphic card with 8GB of dedicated GPU. §.§ Performance comparison To assess and contrast the performance of the different techniques presented in this work, we employed the following evaluation metrics: * Overall Accuracy (OA): This metric provides the percentage of correctly classified pixels from the respective HSI dataset. * Average Accuracy (AA): This metric is computed by averaging each-class accuracy score, thus providing a class-specific evaluation of the technique's performance. * Kappa Coefficient (κ): This metric measures the agreement between the predicted and true class labels, in which the accuracy that could be achieved by chance is taken into account. The use of these evaluation metrics collectively enables a comprehensive assessment of the various techniques presented in this work for HSI classification. Table <ref> summarizes the experimental results obtained by evaluating the discussed techniques using the aforementioned performance metrics on the Salinas and Indian Pines HSI datasets. As shown in Table <ref>, when considering the Salinas HSI dataset, the combination of FFA and backpropagation (BP) achieved the best performance in terms of OA (0.9221) and κ (0.9130). However, BP alone achieved the highest AA (0.9605). On the other hand, for the Indian Pines HSI dataset, BP exhibits the best performance across all evaluation metrics, with OA (0.8109), AA (0.7759), and κ (0.7842). Nevertheless, the utilization of FFA in combination with BP demonstrated a significant improvement compared to using any of the FFN variants individually. Figure <ref> and Figure <ref> demonstrate classification maps of different models using the Salinas and Indian Pines datasets, respectively. 
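The three evaluation metrics used later in this section can be computed from a confusion matrix as sketched below; this is a generic implementation, not the authors' evaluation code.

```python
import numpy as np

def classification_metrics(y_true, y_pred, num_classes):
    """Overall accuracy, average (per-class) accuracy, and Cohen's kappa from label vectors."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    aa = per_class.mean()
    expected = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / (n * n)   # chance agreement
    kappa = (oa - expected) / (1 - expected)
    return oa, aa, kappa

# toy check
print(classification_metrics([0, 1, 1, 2], [0, 1, 2, 2], num_classes=3))
```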
§.§ Discussion In our experiments, we conduct a performance comparison of three different architectures: (1) a deep learning model trained solely using backpropagation, (2) a model trained exclusively with FFA, and (3) our proposed approach that combines FFA pre-training with subsequent fine-tuning using backpropagation. The results clearly illustrate the effectiveness of the combined approach in enhancing feature representation and improving classification accuracy, compared to FFA-only methods. By combining FFA pre-training with subsequent fine-tuning using backpropagation, we leverage the respective strengths of both approaches. The FFA initializes the network with meaningful representations, which is then fine-tuned through backpropagation to adapt the network to the specific classification task at hand. This combination allows the network to capture both useful spectral characteristics obtained from the FFA and task-specific discriminative features during the backpropagation process. The synergy between these two approaches yields promising results in our experiments. § CONCLUSION In summary, the integration of the Forward-Forward algorithm and traditional backpropagation during the early stages of training proved to be highly effective in enhancing feature representation and improving classification performance in hyperspectral image analysis. By incorporating the FFA algorithm, the network was able to capture useful feature representation by adjusting network parameters in every hidden layer. Subsequent fine-tuning through backpropagation facilitated the extraction of discriminative task-specific features. This combined approach exemplifies the potential for leveraging the strengths of different learning algorithms to achieve superior results in hyperspectral image classification tasks.
http://arxiv.org/abs/2307.00309v1
20230701114636
Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey
[ "Hanieh Naderi", "Ivan V. Bajić" ]
cs.CV
[ "cs.CV", "cs.LG", "eess.IV" ]
[1]Department of Computer Engineering, Sharif University of Technology Tehran (e-mail: hanieh.naderii@gmail.com) [2]School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada (e-mail: ibajic@ensc.sfu.ca) Deep learning has successfully solved a wide range of tasks in 2D vision as a dominant AI technique. Recently, deep learning on 3D point clouds is becoming increasingly popular for addressing various tasks in this field. Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks. These attacks are imperceptible to the human eye but can easily fool deep neural networks in the testing and deployment stage. To encourage future research, this survey summarizes the current progress on adversarial attack and defense techniques on point cloud classification. This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes the adversarial example generation methods in recent years. Besides, it classifies defense strategies as input transformation, data optimization, and deep model modification. Finally, it presents several challenging issues and future research directions in this domain. 3D deep learning, deep neural network, adversarial examples, adversarial defense, machine learning security, 3D point clouds. =-15pt Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey Hanieh Naderi1 Ivan V. Bajić2 =========================================================================== § INTRODUCTION Deep learning (DL) <cit.> is a subset of machine learning (ML) and artificial intelligence (AI) that analyzes large amounts of data using a structure roughly similar to the human brain. Deep learning is characterized by the use of multiple layers of neural networks, which process and analyze large amounts of data. These neural networks are trained on large datasets, which allows them to learn patterns and make decisions on their own. DL has achieved impressive results in the fields of image recognition <cit.>, semantic analysis <cit.>, speech recognition <cit.> and natural language processing <cit.> in recent years. Despite the tremendous success of DL, in 2013 Szegedy  <cit.> found that deep models are vulnerable to adversarial examples in image classification tasks. Adversarial examples are inputs to a deep learning model that have been modified in a way that is intended to mislead the model. In the context of image classification, for example, an adversarial example might be a picture of a panda that has been slightly modified in a way that is imperceptible to the human eye but that causes a deep learning model to classify the image as a gibbon. Adversarial examples can be created in two or three dimensions. In the case of 2D adversarial examples, the input is an image, and the modification is applied to the pixels of the image. These modifications can be small perturbations added to the image pixels <cit.> or they can be more significant changes to the structure of the image <cit.>. Thanks to the rapid development of 3D acquisition technologies, various types of 3D scanners, LiDARs, and RGB-D cameras have become increasingly affordable. 3D data is often used as an input for Deep Neural Networks (DNNs) in healthcare <cit.>, self-driving cars <cit.>, drones <cit.>, robotics <cit.>, and many other applications. These 3D data, compared to 2D counterparts, capture more information from the environment, thereby allowing more sophisticated analysis. 
There are different representations of 3D data, like voxels <cit.>, meshes <cit.>, and point clouds <cit.>. Since point clouds can be received directly from scanners, they can precisely capture shape details. Therefore, it is the preferred representation for many safety-critical applications. Due to this, in the case of 3D adversarial examples, the input is a point cloud, and the modification is applied to the points in the cloud. These examples can be created by adding, dropping, and shifting some points in the input point clouds, or by generating entirely new point clouds with predefined target labels using methods such as Generative Adversarial Networks (GANs) or other transformation techniques. It is typically easier to create adversarial examples in 2D space than in 3D space because the input space is smaller and there are fewer dimensions to perturb. In general, adversarial examples exploit the vulnerabilities or weaknesses in the model's prediction process, and they can be very difficult to detect because they are often indistinguishable from normal examples to the human eye. As a result, adversarial examples can pose a serious threat to the security and reliability of DL models. Therefore, it is important to have effective methods for defending against adversarial examples in order to ensure the robustness and reliability of DL models. Adversarial defense in the 2D image and the 3D point clouds both seek to protect DL models from being fooled by adversarial examples. However, there are some key differences between the approaches used to defend against adversarial images and adversarial point clouds. Some of the main differences include the following: * Input data: Adversarial images are 2D data representations, while adversarial point clouds are 3D data representations. This means that the approaches used to defend against adversarial images and point clouds may need to take into account the different dimensions and characteristics of the input data. * Adversarial perturbations: Adversarial images may be modified using small perturbations added to the image pixels, while adversarial point clouds may be modified using perturbations applied to individual points or groups of points in the point cloud. This means that the approaches used to defend against adversarial images and point clouds may need to be tailored to the specific types of adversarial perturbations that are being used. * Complexity: Adversarial point clouds may be more complex to defend against than adversarial images, as the perturbations applied to point clouds may be more difficult to identify and remove. This may require the use of more sophisticated defenses, such as methods that are able to detect and remove adversarial perturbations from the input point cloud. On the whole, adversarial point clouds can be challenging to identify and defend against, as they may not be easily recognizable in the 3D point cloud data. Adversarial point clouds may be more harmful and harder to defend against, because their changes may be less obvious to humans due to the lack of familiarity compared to images. As a result, it is important to conduct a thorough survey of adversarial attacks and defenses on 3D point clouds in order to identify the challenges and limitations of current approaches and to identify opportunities for future research in this area. 
There are a number of published surveys that review adversarial attacks and defenses in general, including in the context of computer vision, machine learning, and deep learning systems. These surveys provide an overview of the various types of attacks and defenses that have been proposed, as well as their strengths and limitations. However, there is a lack of surveys specifically focused on 3D point cloud attacks and defenses. Some published surveys do mention 3D attacks and defenses briefly <cit.>, but there is a need for more comprehensive surveys that delve deeper into this topic. Table <ref> refers to a summary or overview of published surveys of adversarial attacks and defenses. Some of these surveys focus on specific domains, such as computer vision <cit.>, text <cit.>, and images <cit.> while others provide a more general overview of adversarial attacks and defenses in the field of artificial intelligence <cit.>. Our key contributions are as follows: * A review of the different types of adversarial point clouds that have been proposed and the methods that have been used to generate them, and proposing a taxonomy of these methods. * A review of the various methods that have been proposed for defending against adversarial point clouds, including data optimization, input transformation methods, and deep model modification. * Categorization of the most important datasets and models used by researchers in this field. * An assessment of the challenges and limitations of current approaches to adversarial attacks and defenses on 3D point clouds, and identification of opportunities for future research in this area. An overview of the categorization of adversarial attack and defense approaches on 3D point clouds is shown in Fig. <ref>. The rest of this paper is organized as follows. Section <ref> introduces a list of notations, terms and measurements used in the paper. We discuss adversarial attacks on deep models for 3D point cloud classification in Section <ref>. Section <ref> provides a detailed review of the existing adversarial defense methods. In Section <ref>, we summarize commonly used 3D datasets and present a taxonomy of datasets and victim models used in recent studies. We discuss current challenges and potential solutions related to adversarial attacks in Section <ref>. Finally, Section <ref> concludes the survey. § BACKGROUND In this section, we provide the necessary background in terms of notation, terminology, and point cloud distance measures used in the field of 3D adversarial attacks. By establishing clear definitions, researchers can more accurately compare the effectiveness of different approaches and identify trends or patterns in the methods. A list of symbols used in the paper is given in Table <ref>, along with their explanations. These symbols are used to represent various quantities related to point cloud adversarial attacks. The table provides a brief description of each symbol to help readers understand and follow the discussions and equations in the paper. Next, we briefly introduce the terminology and distance measures used in the field of adversarial attacks and defenses on 3D point clouds. §.§ Definition of terms It is crucial to define the technical terms used in the literature in order to provide a consistent discussion of the various methods and approaches. The definitions of these terms appear below. The rest of the paper follows the same definitions throughout. * 3D point cloud is a set of points in 3D space, typically representing a 3D shape or scene. 
* Adversarial point cloud is a 3D point cloud that has been intentionally modified in order to mislead a DL model that analyzes 3D point clouds. We focus on geometric modifications, rather than attribute (e.g., color) modifications, since these are predominant in the literature on adversarial point clouds. * Adversarial attack is a technique that intentionally introduces perturbations or noise to an input point cloud in order to fool a DL model, causing it to make incorrect predictions or decisions. * Black-box attacks are a type of adversarial attack in which the attacker only has access to the model's input and output, and has no access to the structure of the DL model being attacked. * White-box attacks are a type of adversarial attack in which the attacker knows all the details about the DL model’s architecture and parameters. * Targeted attacks involve manipulating the input point cloud in a way that causes the model to output a specific target label when presented with the modified input. * Non-targeted attacks involve manipulating the input point cloud in a way that causes the model to output a wrong label, regardless of what that label is. * Point addition attacks involve adding points to the point cloud to fool the DL model. * Point shift attacks involve shifting points of the point cloud to fool the DL model, while the number of points remains the same as in the original point cloud. * Point drop attacks involve dropping points from the point cloud to fool the DL model. * Optimization-based attacks are a type of attack in which the creation of an adversarial point cloud is formulated and solved as an optimization problem. * Gradient-based attacks are a type of attack in which the gradients of the cost function corresponding to each input point are used to generate an adversarial point cloud with higher tendency toward being misclassified. * On-surface perturbation attacks are a type of attack that involves modifying points along the object's surface in the point cloud. * Out-of-surface perturbation attacks are a type of attack that involves modifying points outside the object surface in the point cloud. * Transferability refers to the ability of adversarial examples generated for one DL model to be successful in causing misclassification for another DL model. * Adversarial defense is a set of techniques that aim to mitigate the impact of adversarial attacks and improve the robustness of the DL model against them. * Attack success rate refers to the percentage of times that an adversarial attack on a DL model is successful. §.§ Distance measures The objective of adversarial attacks is to modify points of 𝒫, creating an adversarial point cloud 𝒫^adv, which could fool a DL model to output wrong results. Geometric 3D adversarial attacks can be achieved by adding, dropping, or shifting points in 𝒫. If the adversarial point cloud is generated by shifting points, ℓ_P-norms can be used to measure the distance between 𝒫 and 𝒫^adv, as the two point clouds have the same number of points. In this case, we can talk about the vector difference (perturbation) η = 𝒫-𝒫^adv, and consider η_P as the distance between 𝒫 and 𝒫^adv. The typical choices for P are P ∈{0, 2, ∞}, and the equation is: D_ℓ_P (𝒫 , 𝒫^adv) = η_P = (∑_i=1^np_i - p^adv_i_P^P)^1/P where 𝒫∈ℝ^n×3 is the original point cloud consisting of n points in 3D space, 𝒫={p_i | i=1,2, ..., n} and the i^th point, p_i = (x_i,y_i,z_i), is a 3D vector of coordinates. 
𝒫^adv is the adversarial point cloud formed by adding the adversarial perturbation η = (η_1,η_2, ..., η_n), η_i∈ℝ^3, to 𝒫. The three common ℓ_P norms have the following interpretations: * ℓ_0-norm or η_0 counts the number of non-zero elements in η, so it indicates how many points in 𝒫^adv have changed compared to 𝒫. * ℓ_2-norm or η_2 is the Euclidean distance between 𝒫^adv and 𝒫. * ℓ_∞-norm or η_∞ is the maximum difference between the points in 𝒫^adv and 𝒫. As mentioned above, ℓ_P-norm distance criteria require that 𝒫^adv and 𝒫 have the same number of points. Hence, these distance measures cannot be used for attacks that involve adding or dropping points. To quantify the dis-similarity between two point clouds that don't have the same number of points, Hausdorff distance D_H and Chamfer distance D_C are commonly used. Hausdorff distance is defined as follows: D_H (𝒫 , 𝒫^adv) = max_p ∈𝒫min_p^adv∈𝒫^advp - p^adv_2^2 It locates the nearest original point p for each adversarial point p^adv and then finds the maximum squared Euclidean distance between all such nearest point pairs. Chamfer distance is similar to Hausdorff distance, except that it sums the distances among all pairs of closest points, instead of taking the maximum: D_C (𝒫 , 𝒫^adv) = ∑_p^adv∈𝒫^advmin_p ∈𝒫p - p^adv_2^2 + ∑_p ∈𝒫min_p^adv∈𝒫^advp - p^adv_2^2 Optionally, Chamfer distance can be averaged with respect to the number of points in the two point clouds. Besides the distance measures mentioned above, there are other distance measures for point clouds, such as point-to-plane distance <cit.>, that are used in point cloud compression. However, these are not commonly encountered in the literature on 3D adversarial attacks, so we don't review them here. § ADVERSARIAL ATTACKS This section describes the seven most common approaches for generating adversarial point clouds. Our discussion encompasses the technicalities of these seven widely used methods and also briefly touches upon similar approaches related to these seven attacks. Some of the approaches <cit.> described in this section are extended versions of adversarial examples for 2D data, adapted for use with 3D point clouds. These approaches may face new challenges due to the additional dimension of the data. Other approaches <cit.> are specifically designed for 3D data and may be more effective at generating adversarial point clouds than methods that are simply adapted from 2D data. These approaches may consider the unique characteristics of 3D point clouds and the deep models that process them. Overall, the goal of these approaches is to understand better how adversarial point clouds could affect current deep 3D models. The most popular approaches are also summarized in Table <ref> and we explain how adversarial attacks and attack categories relate in the context of adversarial examples for point cloud classification tasks. §.§ 3D fast gradient sign method (3D FGSM) The fast gradient sign method (FGSM) presented by Goodfellow  <cit.>. In accordance with standard FGSM, the method adds an adversarial perturbation η to each point of given point cloud 𝒫 in order to create an adversarial point cloud as 𝒫^adv = 𝒫+η. Perturbations are generated according to the direction of the sign of gradient at each point. The perturbation can be expressed as η = ϵ sign(∇_𝒫J(f(𝒫:θ),Y) where f is deep model that is parameterized by θ and takes an input point cloud 𝒫 and Y denotes the label associated with 𝒫. Δ_xJ(.,.) is gradient of loss function of model w.r.t to 𝒫 and sign(.) 
denotes the sign function. The ϵ value is an adjusting hyperparameter that determines the ℓ_∞-norm of the difference between the original and adversarial inputs. The FGSM was extended by Liu  <cit.> to 3D data. There are three different ways were introduced <cit.> to define ϵ value as a constraint for η as follows * Constraining the ℓ_2-norm between each dimension of points 𝒫 and 𝒫^adv.. * Constraining the ℓ_2-norm between each point 𝒫 and 𝒫^adv. * Constraining the ℓ_2-norm between all points 𝒫 and 𝒫^adv. Due to the first method severely limiting the movement of points, the authors suggest the second and third methods. However, all three methods have shown little difference in the attack success rates. Yang  <cit.> used the Chamfer distance (instead of ℓ_2-norm) between the original point cloud and the adversarial counterpart to extend FGSM to a 3D domain. Using this approach, each point in the adversarial point clouds is perturbed slightly. There is a trade-off between the chamfer distance and the attack success rate because, as the chamfer distance decreases, it may become more difficult for an adversarial attack to achieve a high attack success rate. However, if the chamfer distance is set too high, the model may be more vulnerable to adversarial attacks. Finding the right balance between these two factors can be challenging, and it may depend on the specific characteristics of the point cloud model and the type of adversarial attack being used. Figure <ref> illustrates an example of an FGSM adversarial point cloud with Chamfer distances varying from 0.01 to 0.05 between the two point clouds. The author in <cit.> sets it to 0.02 as an "appreciate distance". Apart from the FGSM attack, Yang  <cit.> introduced another attack called "Momentum-Enhanced Pointwise Gradient (MPG)." The MPG attack, similar to <cit.>, integrates momentum into iterative FGSM. The MPG attack produces more transferable adversarial examples. §.§ 3D Carlini and Wagner attack (3D C&W) The C&W attack is presented by Carlini and Wagner <cit.>. They provided three kinds of attacks with three different distance measures, ℓ_0-norm, ℓ_2-norm, and ℓ_∞-norm. As a general rule, generating the C&W attack can describe as an optimization problem to find minimum perturbation η such that the label of the adversarial input 𝒫^adv is changed to the target label T by the objective function g. min_η D (𝒫 , 𝒫^adv) + c . g(𝒫 + η) s.t. f(𝒫^adv)=T where D(.) refers to distance measure (it can be defined using different distance measures like ℓ_P-norm, Chamfer or Hausdorff distance), c is a suitably chosen constant and g(𝒫^adv)≥ 0 if and only if f(𝒫^adv)=T. By doing so, the distance and penalty term can be optimized more effectively. There were seven objective functions g listed by the authors <cit.>. An effective function evaluated by their experiments, which was also used in other papers, is as follows g(𝒫^adv) = max(max_i=t(Z(𝒫^adv)_i)-Z(𝒫^adv)_t , -κ) where Z denotes the Softmax function, and κ represents a constant that controls confidence. In comparison with the FGSM attack, these attacks do not set a constraint for perturbation. In fact, the attacks search for minimal perturbation (without imposing any constraints) to change the label to the target label. As the first instance, a 3D version of the C&W attack was developed by Xiang  <cit.>. According to the paper, <cit.>, four types of attacks were proposed as follows. 
The FGSM was extended to 3D data by Liu <cit.>. Three different ways of defining the ϵ value as a constraint on η were introduced <cit.>:
* constraining the ℓ_2-norm between each dimension of the points of 𝒫 and 𝒫^adv;
* constraining the ℓ_2-norm between each point of 𝒫 and 𝒫^adv;
* constraining the ℓ_2-norm between all points of 𝒫 and 𝒫^adv.
Because the first method severely limits the movement of points, the authors recommend the second and third methods; however, all three methods show little difference in attack success rates. Yang <cit.> used the Chamfer distance (instead of the ℓ_2-norm) between the original point cloud and its adversarial counterpart to extend FGSM to the 3D domain. With this approach, each point in the adversarial point cloud is perturbed slightly. There is a trade-off between the Chamfer distance and the attack success rate: as the Chamfer distance decreases, it may become more difficult for the attack to achieve a high success rate, whereas if the Chamfer distance is set too high, the model becomes more vulnerable to the attack. Finding the right balance between these two factors can be challenging and may depend on the specific characteristics of the point cloud model and the type of adversarial attack being used. Figure <ref> illustrates an example of an FGSM adversarial point cloud with Chamfer distances varying from 0.01 to 0.05 between the two point clouds; the authors in <cit.> set it to 0.02 as an "appropriate distance". Apart from the FGSM attack, Yang <cit.> introduced another attack called the "Momentum-Enhanced Pointwise Gradient (MPG)" attack. The MPG attack, similar to <cit.>, integrates momentum into iterative FGSM and produces more transferable adversarial examples.
§.§ 3D Carlini and Wagner attack (3D C&W)
The C&W attack was presented by Carlini and Wagner <cit.>. They provided three kinds of attacks with three different distance measures: the ℓ_0-norm, the ℓ_2-norm, and the ℓ_∞-norm. In general, generating the C&W attack can be described as an optimization problem that finds the minimum perturbation η such that the label of the adversarial input 𝒫^adv is changed to the target label T through the objective function g:
min_η D(𝒫, 𝒫^adv) + c · g(𝒫 + η)
where D(·) refers to a distance measure (it can be defined using different distance measures such as an ℓ_P-norm, the Chamfer distance, or the Hausdorff distance), c is a suitably chosen constant, and the target constraint f(𝒫^adv) = T is enforced through g, since g is chosen so that g(𝒫^adv) ≤ 0 if and only if f(𝒫^adv) = T. By doing so, the distance and penalty terms can be optimized jointly and effectively. Seven objective functions g were listed by the authors <cit.>. An effective function according to their experiments, which was also used in other papers, is
g(𝒫^adv) = max(max_i ≠ t Z(𝒫^adv)_i - Z(𝒫^adv)_t, -κ)
where Z denotes the Softmax function and κ is a constant that controls the confidence. In comparison with the FGSM attack, these attacks do not impose an explicit constraint on the perturbation; instead, they search for the minimal perturbation that changes the label to the target label. As the first instance, a 3D version of the C&W attack was developed by Xiang <cit.>. According to <cit.>, four types of attacks were proposed, as follows; Figure <ref> shows the four types of C&W attacks, where a bottle is misclassified as a result of these attacks.
* Perturbing points negligibly, using the ℓ_2-norm (between all points of 𝒫 and 𝒫^adv) as the distance measure to keep the shifted points close to the point cloud's surface.
* Adding adversarial independent points, using two different distance measures: 1. the Chamfer distance between the original point cloud and the adversarial point cloud; 2. the Hausdorff distance between the original point cloud and the adversarial point cloud. These measures are used to push the independent points toward the point cloud's surface.
* Adding adversarial clusters, using a combination of three different distance measures: 1. the Chamfer distance between the original point cloud and the adversarial cluster, used to push clusters toward the point cloud's surface; 2. the number of clusters added (only 1 to 3 clusters are added, so the number of added clusters stays small); 3. minimizing the farthest distance, i.e., the distance between the two most distant points in each cluster, to constrain the added points to lie within small regions.
* Adding adversarial objects, using a combination of three different distance measures: 1. the Chamfer distance between the original point cloud and the adversarial object, used to push adversarial objects toward the point cloud's surface; 2. the number of objects added (only 1 to 3 objects are added, so the number of added objects stays small); 3. the ℓ_2-norm between a real-world object and the adversarial object, used to generate shapes similar to real-world ones.
The first attack is based on shifting points, and the other three attacks are based on adding points. Since directly adding points to the unbounded 3D space is not feasible due to the vast search space, the last three attacks use the positions of critical points as the initial positions of the adversarial points (or clusters, or objects). Critical points are key points that strongly influence the classification result; in PointNet, for example, they are the points that remain after max pooling. Tsai <cit.> developed a point-shifting attack called the K-Nearest Neighbor (KNN) attack, which limits the distances between adjacent points by adding to <ref> an extra distance loss that computes the k-nearest-neighbor distance for each point. By doing so, the adversarial point clouds are restricted to shapes that can be realized as physical objects. They use the Chamfer distance to measure the distance between the two point clouds. Wen <cit.> considered a new distance measure, named consistency of local curvatures, to guide perturbed points to lean towards object surfaces. Adopting the C&W attack framework, the authors use a combination of the Chamfer distance, the Hausdorff distance, and the local curvature consistency distance as the distance measure to create a geometry-aware adversarial attack (GeoA^3). The generated GeoA^3 attack yields smooth and fair surfaces, so the difference between it and the original point cloud is imperceptible to the human eye.
§.§ 3D Projected Gradient Descent method (3D PGD)
One of the most potent attacks in the 2D literature is Projected Gradient Descent (PGD), which has its roots in the pioneering paper of Madry <cit.>. The iterative FGSM is considered a PGD method.
Taking the iterative FGSM method, the adversarial point cloud is generated as
𝒫^adv_0 = 𝒫, 𝒫^adv_t+1 = Clip_𝒫,ϵ[𝒫^adv_t + α · sign(∇_𝒫^adv_t J(f(𝒫^adv_t;θ), Y))]
where Clip_𝒫,ϵ limits the change of the generated adversarial input in each iteration and t refers to the iteration. The PGD attack tries to increase the cost of the correct class Y without specifying which of the incorrect classes the model should select. It finds the perturbation that maximizes the cost function under the constraint that η stays within the ϵ-ball:
max_η J(f(𝒫 + η;θ), Y) s.t. D(𝒫, 𝒫^adv) ≤ ϵ
The 3D PGD attack is similar to the 2D version, but it usually uses different distance measures to calculate the perturbations. In particular, Liu <cit.> proposed a PGD attack named the Distributional attack, which uses the Hausdorff distance between a triangular mesh (the original point cloud's surface approximated by a triangular mesh) and the adversarial point cloud as the distance measure to push adversarial points toward the triangular mesh. This method is less sensitive to the density of points in 𝒫 because it uses a mesh instead of a point cloud to measure the perturbation. Figure <ref> shows two examples of adversarial point clouds generated by the Distributional attack. Ma <cit.> proposed the Joint Gradient Based Attack (JGBA). They added an extra term to the optimization function of the PGD attack <ref> to defeat SOR (the Statistical Outlier Remover), which removes outlier points. The extra term computes the gradient of the model's loss function w.r.t. the points in 𝒫 after removing outliers, while the first term (the term in <ref>) computes the gradient of the loss function w.r.t. all points in 𝒫. These two terms are combined to solve the optimization problem. The JGBA attack takes the ℓ_2-norm as the distance measure to constrain the shifting of points.
§.§ Shape attack
This type of attack attempts to morph the point cloud's shape. The concept of shape attacks is comparable to what are called unrestricted attacks on 2D images <cit.>. In such attacks, the input data might change significantly while its semantics remain unchanged, so the classifier is fooled without confusing humans. In this regard, Liu <cit.> proposed three shape attacks, listed below; Figure <ref> demonstrates these three shape attacks.
* Perturbation resampling. This attack resamples a certain number of points with the lowest gradients via farthest point sampling (sketched after this list) to ensure that all points are distributed approximately uniformly. The algorithm is iterated to generate an adversarial point cloud that deceives the model. The distance measure used to maintain the similarity between 𝒫 and 𝒫^adv is the Hausdorff distance.
* Adding adversarial sticks. In this attack, the algorithm adds four sticks to the point cloud such that one end of each stick is attached to the point cloud and the other end is a very small distance away from the first end. The algorithm optimizes the two ends of the sticks so that the label of the point cloud is changed. Finally, it adds a few points between the two ends to make them look like sticks.
* Adding adversarial sinks. In this case, critical points (the points remaining after max pooling in PointNet) are selected as sink points, and the points of the point cloud are pulled toward them. The goal of this attack is to minimize global changes to points that are not selected by the max pooling operation. The distance measure used to maintain the similarity between 𝒫 and 𝒫^adv is the ℓ_2-norm.
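As referenced in the perturbation-resampling attack above, farthest point sampling can be implemented in a few lines; the NumPy sketch below is our own illustration rather than the code used in <cit.>:

import numpy as np

def farthest_point_sampling(P, k):
    # P: (n, 3) array; returns indices of k points spread approximately uniformly over P
    n = P.shape[0]
    chosen = [np.random.randint(n)]                      # start from a random seed point
    dist = np.full(n, np.inf)
    for _ in range(k - 1):
        dist = np.minimum(dist, ((P - P[chosen[-1]]) ** 2).sum(-1))
        chosen.append(int(dist.argmax()))                # add the point farthest from the chosen set
    return np.array(chosen)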
Lee  <cit.> also proposed Shape-aware adversarial attacks called ShapeAdv that are based on injecting an adversarial perturbation η in the latent space z of a point cloud autoencoder. To be precise, the original point cloud is processed using an autoencoder to generate an adversarial point cloud, then the adversarial point cloud is fed to the classifier. Accordingly, Lee  <cit.> generated three attacks with varying distance measures. These measures are used as a term for C&W loss to maintain similarity between the original and the adversarial point clouds. All three attacks calculate gradient C&W loss w.r.t adversarial perturbation in the latent space z. The distance measures are defined as such for three types of attacks: * Shape-aware attack in the latent space. To make a more meaningful attack, the author minimizes the ℓ_2-norm between the latent space z and the adversarial latent space z+η. Using this approach, the generated adversarial point cloud is highly dissimilar from the original counterpart in terms of appearance. * Shape-aware attack in the point space. In this case, an attempt is being made to resolve the previous attack's problem. In order to maintain similarity between the original point cloud and the adversarial one, the distance measure is replaced by minimizing the Chamfer distance between the two. * Shape-aware attack with auxiliary point clouds. The attack minimizes the Chamfer distance between the adversarial point cloud and the average of k nearest neighbor, sampled from the original point cloud category. This attack aims to avoid adversarial perturbation in any direction in the latent space. To guide the direction in the latent space, it employs auxiliary point clouds sampled from the category of the original input. §.§.§ Shape attacks via autoencoders and generative models Hamdi  <cit.> proposed an attack called Advpc by using an autoencoder that could be transferred between networks effectively. This was achieved by introducing a new loss function and pipeline. Minimizing two losses was the goal of the Loss function. The first loss is C&W loss when adversarial point clouds are fed into deep models, and the second loss is C&W loss when adversarial point clouds are fed into deep models after reconstruction with a point cloud autoencoder. Using an autoencoder to generate an adversarial point cloud makes perturbations more meaningful. Consequently, their transferability from one network to another will be more promising. Lee  <cit.> also proposed Shape-aware attacks by injecting adversarial perturbation η in the latent space z of a point cloud autoencoder. In section <ref>, this attack was described in detail. LG-GAN attack <cit.> is proposed to generate an adversarial point cloud based on GAN (Generative Adversarial Network). The GAN is fed with the original point clouds and target labels to learn how to generate adversarial point clouds to fool deep models. In detail, it extracts hierarchical features from original point clouds using one multi-branch adversarial network, then integrates the specified label information into multiple intermediate features using the label encoder. The encoded features will be fed into a reconstruction decoder to generate the adversarial point cloud. This attack is so fast because it only takes one forward pass to generate an adversarial point cloud. Figure <ref> shows an instance of the LG-GAN attack. Dai <cit.> proposed a new type of attack based on GAN, which is created from noise rather than the original point cloud. 
Specifically, the noise vector and the target label are fed as input into a graph convolutional generator, which outputs the generated adversarial point cloud. The generator uses a loss function containing four parts (the objective loss, the discriminative loss, the outlier loss, and the uniform loss) to achieve a realistic adversarial attack that fools the victim network. The objective loss encourages the victim network to assign the target (incorrect) label to the adversarial point cloud, while the discriminative loss encourages the auxiliary network to classify the adversarial point cloud correctly. The outlier loss and the uniform loss force the generator to preserve the point cloud's shape by removing outliers and generating a more uniform point cloud. Lang <cit.> proposed a new type of adversarial attack that alters the reconstructed geometry of a 3D point cloud rather than just the predicted label, using an autoencoder trained on semantic shape classes. Mariani <cit.> proposed a method for creating adversarial attacks on surfaces embedded in 3D space, under weak smoothness assumptions on the perceptibility of the attack.
§.§ Frequency attack (Attack on other domains)
Liu <cit.> suggested an adversarial attack based on the frequency domain, which aims to enhance the transferability of the generated adversarial examples. The points are transformed into the frequency domain via the graph Fourier transform (GFT), divided into low-frequency and high-frequency components, and perturbations are applied to the low-frequency components to create an adversarial point cloud. In contrast, Liu <cit.> investigated the geometric structure of 3D point clouds by perturbing each of the three frequency components (low, mid, and high frequency). They found that perturbing the low-frequency components of a point cloud significantly damages its rough shape. To preserve the shape of the point cloud, they created adversarial point clouds by constraining the perturbations applied to the low-frequency components and guiding perturbations toward the high-frequency components. Huang <cit.> proposed a new attack that applies reversible coordinate transformations to the points of the original point cloud, which removes one degree of freedom and limits their movement to the tangent plane. The best direction is calculated from the gradients of the transformed point clouds; all points are then assigned a score to construct a sensitivity map, and finally the top-scoring points are selected to fool the deep models. The authors in <cit.> suggest that by analyzing the eigenvalues and eigenvectors of the graph Laplacian matrix of a point cloud, one can determine which regions are particularly sensitive to perturbations; by focusing on these regions, the attack can be crafted more effectively.
§.§ Minimal level of point manipulations for attacking
A special type of adversarial attack in the 2D domain focuses on perturbing a minimal number of pixels <cit.>. For instance, the one-pixel attack <cit.>, which can fool deep models by changing only one pixel, is a famous attack of this type. Taking inspiration from these 2D attacks, Kim <cit.> proposed an adversarial attack, namely the minimal attack, that manipulates only a minimal number of points. To find an adversarial point cloud, they modified the optimization function of the PGD attack <ref> by adding a term that keeps the number of changed points to a minimum.
Furthermore, they used two different distance measures, the Hausdorff and the Chamfer distance, to preserve the similarity between 𝒫 and 𝒫^adv. Figure <ref> illustrates examples of the minimal adversarial attack. In another attack, called the Variable Step-size Attack (VSA) <cit.>, a hard boundary constraint on the number of modified points is incorporated into the optimization function of a PGD attack <ref> to preserve the point cloud's appearance. More concretely, the points with the highest gradient norms (which have the most impact on the classification task) are initialized as the modified points. By controlling the step size (a large step size α at the beginning and a smaller one at the end), this method escapes local optima and finds the most appropriate locations for the modified (adversarial) points. Kim <cit.> proposed a class of point cloud perturbation attacks called Nudge attacks that minimize the point perturbation needed to flip the result of a 3D DNN. The researchers generated adversarial point clouds using gradient-based and genetic algorithms with perturbations of up to 150 points in order to deceive DNNs. The attack can fool a DNN even with a single point when that point is placed far from the surface of the 3D object. Yang <cit.> provided a point-attachment attack that attaches a few points to the point cloud. The Chamfer distance is used to keep the newly added points close to the original point cloud, and a hard boundary constraint limits the number of added points, making the attack more difficult to detect. Tan <cit.> proposed a new type of attack called the One point attack, in which only a single point in the point cloud needs to be perturbed in order to fool the deep model. The authors also present an explainability method to identify the points in the point cloud that are most important for the attack. The Shape Prior Guided Attack <cit.> uses a shape prior, i.e., prior knowledge of the structure of the object, to guide the generation of the perturbations applied to the point cloud. The goal of this method is to create adversarial point clouds with minimal perturbations that are still able to fool the target object detection model.
§.§ Attacks with drop points
The attacks described in the previous sections mostly revolve around shifting, adding, or transforming points (transforming points into another space and making changes there). This section reviews attacks that drop some points to generate adversarial point clouds. These attacks differ in how the dropped points are chosen, and various algorithms have been proposed for removing critical points effectively. As an example, Zheng <cit.> developed a method that uses a saliency map <cit.> to find critical points that are important for the model's decision and drops them. The points dropped according to the saliency map are shown in red in Figure <ref>. In this method, every point is assigned a saliency score that reflects its contribution to the deep model's prediction. Because shifting high-saliency points towards the point cloud center barely affects the surfaces, such shifts operate in practice like point drops. Consequently, the model can be deceived by shifting high-scoring points in a point cloud, resulting in adversarial point clouds. This method underlies two popular drop attacks, Drop100 and Drop200, which drop 100 and 200 points, respectively.
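A minimal sketch of such saliency-guided point dropping is given below. The per-point score used here (the gradient magnitude of the loss with respect to each point) is a simplification of the saliency map of <cit.>, and the classifier f is a placeholder:

import torch
import torch.nn.functional as F

def drop_by_saliency(f, P, y, n_drop=100):
    # P: (n, 3) point cloud, y: true label; drops the n_drop highest-saliency points
    P = P.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(f(P.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    saliency = P.grad.norm(dim=1)            # per-point score from the gradient magnitude
    keep = saliency.argsort()[:-n_drop]      # keep everything except the n_drop highest scores
    return P[keep].detach()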
An attack described in <cit.> identifies "adversarial drop points" in a 3D point cloud that, when dropped, significantly reduce the model's accuracy. These points are specified independently of the model by analyzing and combining fourteen point cloud features and determining which features play key roles in the model's decision-making. In <cit.>, candidate critical points are determined randomly and checked for dropping one by one: if dropping a point increases the probability of changing the ground-truth prediction f(𝒫) = Y, it is considered a critical point and is dropped; otherwise, it is kept. This procedure continues iteratively until a minimal set of critical points has been dropped, according to the following optimization problem
min_𝒫^adv ⊆ 𝒫 (|𝒫| - |𝒫^adv|) s.t. f(𝒫^adv) ≠ f(𝒫)
where |𝒫| and |𝒫^adv| are the numbers of points in the original point cloud and the adversarial one, respectively. The adversarial examples are generated by dropping the critical points that optimize formula <ref>. In order to determine how effective a given point is in the decision-making of the PointNet model, Yang <cit.> introduced a Point-detachment attack that assigns a class-dependent importance to each point. A greedy strategy is employed to generate an adversarial point cloud, in which the points most important for the true class are dropped iteratively. The class-dependent importance associated with a given point is determined by multiplying two terms. The first term uses the PointNet feature matrix before max-pooling aggregation (in this matrix, each row represents a point in the point cloud and each column represents a particular feature). The second term uses the gradient of the true-class output w.r.t. the feature matrix, which is a sparse matrix that is non-zero only at the critical points. If a given point has the largest value in some columns, the first term sums the differences between the first and second largest values in these columns; a bigger difference means that the largest value is more significant, i.e., the point corresponding to it is more influential in the model's decision. The second term sums all values of the given point's row in the sparse matrix.
§ DEFENSES AGAINST ADVERSARIAL ATTACKS
Adversarial defense methods for 3D point clouds can generally be divided into three categories: input transformation, data optimization, and deep model modification. The following sections discuss defense methods under each of these categories.
§.§ Input transformation
An input transformation is a preprocessing approach that applies some transformation to the input point cloud before it is fed into the deep model.
This transformation can be designed to reduce the sensitivity of the model to adversarial attacks or to make it more difficult for an attacker to craft an adversarial point cloud. Input transformation methods are listed below.
§.§.§ Simple Random Sampling (SRS)
Simple random sampling <cit.>, commonly known as SRS, is a statistical technique that randomly drops a certain number of points (usually 500) from an input point cloud, each point being dropped with equal probability.
§.§.§ Statistical Outlier Removal (SOR)
Since most adversarial attacks introduce outliers, Zhou <cit.> proposed a statistical outlier removal (SOR) method that trims a point of an adversarial point cloud if the average distance from that point to its k nearest neighbors exceeds μ + α · σ, where μ is the mean and σ the standard deviation of the k-nearest-neighbor distances of all points in the point cloud. The value of α depends on the size of the analyzed neighborhood (in <cit.>, α = 1.1 and k = 2 are used). A similar defense method is used in <cit.>: the Euclidean distance between each point and its k nearest neighbors is used to detect outliers, and points with high mean distances are discarded as outliers.
§.§.§ Salient points removal
This defense method <cit.> assumes that adversarial points have fairly large gradient values. Under this assumption, the method calculates the saliency of each point based on the gradient of the model's output class w.r.t. that point, and points with high saliency are discarded.
§.§.§ Denoiser and Upsampler Network (DUP-Net)
The DUP-Net defense method consists of two steps. In the first step, it uses SOR as a denoiser to remove outliers. In the second step, the output of the first step is given to an upsampler network <cit.> to produce a denser point cloud. Since adversarial perturbations often cause critical points of the original point cloud to be missing, this defense uses a denser point cloud that tracks the underlying surface of the point cloud with a uniform distribution to recover these critical points.
§.§.§ IF-Defense
IF-Defense <cit.> is a preprocessing technique applied to the input point cloud. It first employs SOR to remove outliers from the input point cloud. In the next step, two losses are used to optimize the coordinates of the input points under geometry-aware and distribution-aware constraints. The geometry-aware loss pushes points towards the surface in order to minimize outliers. To estimate the surfaces of objects, the authors train an implicit function network <cit.> on original point clouds. Because the output of the implicit function is continuous, the predicted surface is locally smooth, which reduces outlier effects. The distribution-aware loss encourages points to have a uniform distribution by maximizing the distance between each point and its k nearest neighbors. Accordingly, IF-Defense recovers a clean shape for the input point clouds. Figure <ref> shows the results of three different defense methods against a Drop100 attack: SOR, DUP-Net, and IF-Defense.
§.§.§ Miscellaneous Defenses
Dong <cit.> proposed the Gather-Vector Guidance (GvG) method, which is sensitive to changes in local features. If an adversarial perturbation changes the local features, the gather-vector also changes, and the method learns to ignore noisy local features.
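To make the outlier-removal step shared by SOR and DUP-Net concrete, a minimal NumPy sketch is given below; this is our own illustration of the rule described above (with the k and α values reported in <cit.> as defaults), not the reference implementation:

import numpy as np

def statistical_outlier_removal(P, k=2, alpha=1.1):
    # P: (n, 3); keep points whose mean k-NN distance is within mu + alpha * sigma
    d = np.sqrt(((P[:, None, :] - P[None, :, :]) ** 2).sum(-1))   # (n, n) pairwise distances
    knn = np.sort(d, axis=1)[:, 1:k + 1]                          # k nearest neighbours, excluding the point itself
    mean_knn = knn.mean(axis=1)
    mu, sigma = mean_knn.mean(), mean_knn.std()
    return P[mean_knn <= mu + alpha * sigma]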
Liu  <cit.> developed PointGuard, a method that creates a number of random subsets of points in the original point cloud, then predicts the label of the original point cloud based on the majority vote among the labels of these random subsets. Sun <cit.> proposed a framework for evaluating the robustness of 3D point cloud classification models to adaptive attack. Ada3Diff <cit.> is a method for defending against adversarial attacks on 3D point cloud models. It uses an adaptive diffusion process to smooth out perturbations in the point cloud, effectively reducing the impact of the adversarial attack. §.§ Data optimization Another category is data optimization for training, which involves optimizing the training data to improve the robustness of the deep model to adversarial attacks. This could involve techniques such as data augmentation, which involves generating additional training examples by applying transformations to the existing training data, or adversarial training, which involves intentionally introducing adversarial examples into the training data in order to improve the model's robustness to such attacks. The following methods can be used to optimize data. §.§.§ Adversarial Training In terms of modified training sets, adversarial training <cit.> is an effective defense method, which augments the training set with adversarial examples to increase the model’s robustness against attacks. To be precise, in standard training, the model is trained using only the original point clouds, while adversarial training uses both original and adversarial point clouds. The adversarial training for point clouds is described in <cit.> for the first time. The authors of <cit.> and  <cit.> trained a deep model by augmenting the FGSM and ITA attacks. As a way to find a stronger adversarial training method, the authors in <cit.> used adaptive attacks. Using this new adversarial training, different types of attacks are added to the deep model by embedding a perturbation-injection module. This module is utilized to generate the perturbed features for adversarial training. Sun  <cit.> applied self-supervised learning to adversarial training with 3D point clouds. In different tries, the authors in <cit.> add Gaussian noise to each point by randomly sampling values from a Gaussian distribution. By doing so, the attacked models can escape from the narrow adversarial subspace. Also, they developed a Quantification Method for converting point cloud coordinates into low numerical precision with multiple quantification levels, which mitigates small variations in coordinates. These noisy point clouds are then used to augment training sets. §.§.§ PointCutMix Zhang  <cit.> proposed PointCutMix technique that generated a new training set by swapping points between two optimally aligned original point clouds and training a model with this new training set. §.§.§ Low Pass Frequency-Defense (LPF-Defense) In LPF-Defense <cit.>, deep models are trained with the low-frequency version of the original point cloud. More specifically, with the Spherical Harmonic Transform (SHT) <cit.>, original point clouds were transformed from the spatial to the frequency domain. The low-frequency version of the original point cloud is then retrieved back into the spatial domain by filtering the high-frequency input data components. This method is based on the assumption that 3D deep models are overly dependent on features with unnecessary information in the training sets, making them vulnerable to adversarial point clouds. 
Therefore it discards the unnecessary information from the training data by suppressing the high-frequency contents in the training phase. §.§ Deep model modification Another category is deep model modifications, which refer to modifying the architecture of the deep model itself in order to improve its robustness to adversarial attacks. This could be achieved by making changes to the original deep neural network architecture during training. Examples of this category are given below. §.§.§ Defense-PointNet The authors in <cit.> have provided a defense method by splitting the PointNet deep model into two parts. The first part is the feature extractor, with a discriminator attached to its last layer enabling it to learn more powerful features. The feature extractor feeds a mini-batch of the original point cloud and the adversarial counterpart (generated by the FGSM attack) as input to extract features and also fool the discriminator. The second part is the PointNet classifier which is trained to classify each input correctly. The model parameters are optimized using three different loss functions: a classifier, a discriminator, and a feature extractor. While discriminator loss attempts to distinguish the original point cloud from the adversarial one, feature extractor loss misleads the discriminator to label every original/adversarial vector as the original and classifier loss encourages the classifier to give correct predictions for each input. §.§.§ Context-Consistency dynamic graph Network (CCN) Li <cit.> proposed two methodologies to improve the adversarial robustness of 3D point cloud classification models. The first methodology is the introduction of a novel point cloud architecture called Context-Consistency dynamic graph Network (CCN), which is designed to be more robust to adversarial attacks. The second methodology involves an in-depth analysis of the factors that affect the robustness of point cloud models, and the development of techniques to mitigate these factors. In order to provide a more robust model against adversarial point clouds, the authors integrate the two techniques §.§.§ Lattice Point Classifier (LPC) Li  <cit.> proposed embedding a declarative node into the networks to transform adversarial examples to the clean manifold. The authors proposed an effective instantiation, the Lattice Point Classifier (LPC), which projects each point cloud onto the lattice and generates a 2D image for classification using 2D CNNs. (Structured sparse coding in the permutohedral lattice is defined as the declarative node in LPC.). The declarative nodes defend the adversarial attacks through implicit gradients by leading them to wrong updating directions for inputs. § TAXONOMY OF DATASETS AND VICTIM MODELS A variety of 3D point cloud datasets have been collected to evaluate shape classification on DNNs, including ModelNet <cit.>, ShapeNet <cit.>, ScanObjectNN <cit.>, McGill Benchmark <cit.>, ScanNet <cit.>, Sydney Urban Objects <cit.>. A summary of the characteristics of these datasets is also provided in Table <ref>. Among all, 4 datasets namely ModelNet10 <cit.>, ModelNet40 <cit.>, ShapeNet <cit.> and ScanObjectNN <cit.> have mostly been used to evaluate attack and defense techniques. Also, there is a taxonomy of datasets and victim models used in recent studies in Table <ref>. § CHALLENGES AND DISCUSSIONS This section discusses the current challenges that adversarial point clouds face, as well as the potential solutions that can be found. 
Adversarial point clouds are an interesting problem for both adversaries and researchers: they expose the vulnerabilities of deep models and, at the same time, help defenders protect models against adversarial point clouds. Our discussion focuses on the following questions.
§.§ What factors affect the success of adversarial attacks on 3D point clouds?
Some general factors are particularly important for adversarial attacks on 3D point clouds, including the following. The complexity and robustness of the model being attacked: when a deep model is less complex and less robust, it is less immune to adversarial attacks, and a less sophisticated or weaker attack may suffice to fool it. The structure of the 3D point cloud: the distribution of points in the point cloud and the presence of outliers can potentially affect the success of most types of adversarial point clouds.
§.§ Comparison of Different Defense Methods
A 3D point cloud's point distribution and outliers can significantly impact the effectiveness of defense methods against adversarial point clouds. For example, input transformation techniques are designed to make it more difficult for an attacker to craft adversarial point clouds. These techniques may rely on modifying the distribution of points in the point cloud or on dropping outliers; doing so disrupts the structure of the original point cloud and makes it harder for the attacker to make successful modifications. Other defense methods, such as adversarial training, do not rely as heavily on these factors and may not be as effective. Adversarial training is one of the most powerful defenses among 2D defense techniques, but it does not perform well on 3D data. The paper <cit.> proves that adversarial training maximizes the classifier loss by finding a worst-case example inside a constrained search space. This procedure can change decision boundaries so that the model becomes more robust to different types of attacks. The proof, however, relies on the regular structure of 2D data: 2D attacks are created by changing pixel values, and the data lies on a regular grid. A point cloud, in contrast, consists of a set of 3D points placed irregularly in space. Furthermore, the point clouds used in the literature are constructed by randomly sampling 1024 points from each 3D object. Therefore, points are not uniformly distributed across the object's surface, and two point clouds from the same class (e.g., airplane) do not share the same regular structure, as opposed to the 2D case. These structural differences result in different behaviors during the adversarial training phase, so training the model with the worst-case example inside a constrained search space cannot guarantee robustness against other attacks. In other words, due to the irregular structure of point clouds, it is very challenging to model adversarial points so as to eliminate their impact through this defense.
§.§ Comparison of 3D point clouds and image data in terms of attacks and defenses
There are several differences between 3D point clouds and images in terms of adversarial attacks and defenses. An adversarial attack on 3D point clouds can be more complex: an adversarial attack on image data typically involves adding small perturbations to the pixel values, whereas adversarial attacks on 3D point clouds can involve more complex modifications, such as adding or dropping points, or changing the connectivity of the points in the point cloud.
In fact, the structure of 3D point clouds differs from that of images: images are typically represented as 2D arrays of pixel values, while 3D point clouds are represented as sets of 3D points. This difference in structure can make it more challenging to apply defense methods that were developed for image data to 3D point clouds. On the other hand, 3D point clouds can be more sensitive to perturbations. Because 3D point clouds represent physical objects in the real world, even small perturbations to the point cloud can result in significant changes to the shape or appearance of the represented object. This sensitivity can make it more difficult to develop robust defense methods for 3D point clouds.
§ CONCLUSION
Adversarial attacks on 3D point cloud classification have become a significant concern in recent years. These attacks can successfully manipulate the classification of 3D point clouds, leading to incorrect decisions with potentially harmful consequences. Adversarial attacks on 3D point clouds can be categorized into several types, including drop attacks, add attacks, shift attacks, and transform attacks. To defend against these attacks, researchers have proposed three main categories of approaches: input transformation, data optimization (including adversarial training), and deep model modification. Input transformation methods preprocess the input data to make it more robust to adversarial perturbations, while adversarial training augments the training data with adversarial examples in order to improve the model's robustness. For more robust protection against adversarial attacks, input transformation techniques can be combined with adversarial training. Some potential future directions for research on adversarial attacks on 3D point clouds include optimizing attack methods by targeting only a subset of points in the point cloud and focusing on the local rather than the global structure of the point cloud, as well as exploring the robustness of 3D point cloud classifiers to attacks that are specifically designed for 3D data rather than adapted from methods developed for 2D images.
http://arxiv.org/abs/2307.00453v1
20230702022129
Don't Stop Self-Supervision: Accent Adaptation of Speech Representations via Residual Adapters
[ "Anshu Bhatia", "Sanchit Sinha", "Saket Dingliwal", "Karthik Gopalakrishnan", "Sravan Bodapati", "Katrin Kirchhoff" ]
cs.CL
[ "cs.CL", "cs.SD", "eess.AS" ]
Structural, vibrational and electronic properties of Nb substituted orthovanadates LaV_1-xNb_xO_4 Rajendra S. Dhaka August 1, 2023 ================================================================================================= Speech representations learned in a self-supervised fashion from massive unlabeled speech corpora have been adapted successfully toward several downstream tasks. However, such representations may be skewed toward canonical data characteristics of such corpora and perform poorly on atypical, non-native accented speaker populations. With the state-of-the-art HuBERT model as a baseline, we propose and investigate self-supervised adaptation of speech representations to such populations in a parameter-efficient way via training accent-specific residual adapters. We experiment with 4 accents and choose automatic speech recognition (ASR) as the downstream task of interest. We obtain strong word error rate reductions (WERR) over HuBERT-large for all 4 accents, with a mean WERR of 22.7% with accent-specific adapters and a mean WERR of 25.1% if the entire encoder is accent-adapted. While our experiments utilize HuBERT and ASR as the downstream task, our proposed approach is both model and task-agnostic. Index Terms: speech recognition, residual adapters, accents, self-supervision, fairness * Equal contribution † Corresponding author  Work done as an intern at AWS AI Labs § INTRODUCTION Self-supervised learning has been a dominant paradigm in natural language processing (NLP) <cit.> and in recent years, it has also been adopted by the speech community to learn high-fidelity representations <cit.> that capture various non-lexical aspects of speech and audio such as lip-smacking, laughter, hesitation, etc. In this paradigm, the targets to learn are derived from the input signal itself, making the learned representations more powerful in principle compared to those learned using textual labels and annotations of any kind. These powerful base representations have been successfully adopted for several downstream tasks <cit.>, some of which include: ASR, speaker identification and speech translation. Pre-training models with a very large number of parameters on proportionally large datasets has been a central theme in self-supervised learning. However, these datasets may understandably fall short in terms of sufficiently capturing non-canonical and diverse speech and audio characteristics such as rare non-native accents, stammering, etc. This leads to great disparity in downstream task performance across well-represented and underrepresented speaker populations. This data problem has also existed with supervised models for specific tasks such as ASR and in such scenarios, the typical path has been to patch task performance by collecting task-specific labeled datasets with non-canonical characteristics and fine-tuning for the task <cit.>. This unfortunately entangles speech and audio characteristics with the task itself, which can limit effective learning of such characteristics in task-specific representations as well as limiting their re-usability across tasks. In this paper, we consequently posit that continued self-supervised learning of speech and audio representations on task-agnostic unlabeled datasets is an effective strategy to adapt to non-canonical speech characteristics. The specific characteristic we choose to study is accents but the methodology holds for any characteristic. 
We propose learning different high-dimensional spaces for different accents via independently adding residual adapters for each target accent to the model and continuing pre-training on accent-specific datasets. Since residual adapters are parameter-wise much smaller than the base model, this serves as a parameter-efficient way for personalized adaptation without over-fitting and saves on storage costs for inference since only a single copy of the base model needs to be stored. We conduct our experiments with HuBERT-large <cit.> as the base model and ASR as the downstream task but posit that our proposed approach is both model and task agnostic. Our chosen base model is a state-of-the-art model with low word error rates on canonical datasets such as LibriSpeech. By design, we pick 4 non-native English accents where the HuBERT-large model has high word error rates (WER), in the range 24-50% and show strong results on all 4 accents with over 22% WERR over the baseline. Previous work has shown improvements in WER on such accents by supervised training using labeled datasets <cit.>. In contrast, we achieve our WER improvements by continuing to self-supervise models using unlabeled data alone. We show that the gains from adapting to an accent using a particular dataset translate to other evaluation sets with the same accent as well, indicating that the effectiveness of our approach is due to adaptation to the accents' acoustic characteristics and not other confounding factors. Finally, we also explore the degree of parameter-efficiency possible when adapting to target accents, finding that we can achieve strong WERR over the baseline while updating only 16% of the total model parameters. § METHODOLOGY In this work, we propose a model-agnostic as well as a task-agnostic method to adapt audio representations for speakers of a particular group. It can be leveraged to improve the performance of any Transformer-based speech model in the literature <cit.> on any downstream speech task. We showcase the efficacy of our approach on the widely used HuBERT model <cit.> with state-of-the-art performance on different speech tasks <cit.> at the time of our experiments. We evaluate our accent-specific audio representations on one of the important tasks, i.e., ASR. §.§ Background The HuBERT model consists of a convolutional waveform encoder <cit.>, a BERT encoder <cit.> and a projection layer. The convolutional waveform encoder (parameterized by θ_f), takes audio as an input to generate a feature sequence at a 20ms duration. The BERT encoder consists of N identical Transformer blocks (with parameters θ_T), stacked one after the other. It takes the input from the waveform encoder and passes the output feature sequence (768 dimensional vector sequence) to the projection layer. The projection layer with parameters θ_A, maps the feature sequence to the target sequence. These frame-level targets are provided by an independent clustering model like k-means and is called acoustic unit discovery module. Let X = [x_1 ··· x_T] denote a speech utterance of T frames. The discovered hidden units are denoted with h(X) = Z = [z_1 ··· z_T ], where z_t ∈ [C] is a C-class categorical variable and h is a clustering model. As defined in <cit.>, the probability of predicting c^th cluster center at time step t by the HuBERT model (Θ = {θ_f, θ_T, θ_A}) is denoted by p_Θ(c | X, t). 
The HuBERT model is trained in two stages: the first stage is self-supervision with unlabeled audio sequences, while the second stage involves fine-tuning on a downstream task using labeled data. During pre-training, a subset of indices (M ⊂ [T]) is masked to create X̃ = r(X, M), a corrupted version of X in which x_t is replaced with a mask embedding x̃ if t ∈ M. The self-supervision loss (ℒ_SSL) is defined as the cross-entropy loss in predicting the targets for the masked time-steps of an audio sequence: ℒ_SSL(X, r, Θ, h, M) = -∑_t ∈ M log p_Θ(z_t | X̃, t) After pre-training, the projection layer is removed and a light-weight decoder is used to map the audio representations from the BERT encoder to the output of the downstream task at hand. For ASR, it is used to predict targets from a pre-defined vocabulary (26 English characters, a space token, an apostrophe, and a special CTC blank symbol). Our decoder architecture is similar to <cit.>, where the output vector sequences from each of the N Transformer blocks in the BERT encoder are multiplied with scalar weights (W = [w_1 ·· w_N]), added, and then passed through a vanilla 2-layer 1024-unit bidirectional LSTM <cit.> (with parameters θ_LSTM), which is used to predict the output sequence. Let Y = [y_1 ·· y_T^'] denote the ground-truth labels. The parameters of the decoder (θ_d = {W, θ_LSTM}) are learned by minimizing the connectionist temporal classification (CTC) <cit.> loss ℒ_CTC(X, Y, Θ, θ_d) = -log p_Θ, θ_d(Y|X) between the predicted sequence and the ground truth. During this stage, all the parameters (Θ) of the HuBERT model are frozen. As shown by <cit.>, this helps us save compute and storage costs with little to no degradation in downstream task performance, as it allows a common encoder model to be used for different tasks.
§.§ Accent-Adaptive Continual Self-Supervision
Our approach for generating accent-specific audio representations is simple and effective, and it can be used to improve the performance of any downstream speech task. We simply introduce an additional training stage in which we use unlabeled audio from our atypical target accent to continue self-supervised pre-training. As shown in Fig. <ref>, we train our model for a task in three stages. Let X_src, X_tar represent the unlabeled audio sequences from the generic data and the target accent, respectively. We denote the generic labeled data for a particular task like ASR with {X^'_src, Y^'_src}, which may or may not overlap with X_src. The first stage is the same as defined in the previous subsection, where we use the generic unlabeled data X_src to minimize the SSL loss defined in Eq. <ref>. In the second stage, we continue to minimize the same loss but with X_tar. Finally, in the third stage, we learn the task-specific decoder parameters (θ_d) by minimizing ℒ_CTC({X'_src, Y'_src}, Θ, θ_d). This additional self-supervision helps to modify the generic audio representations to capture the acoustic features relevant to the target accent. These accent-specific representations can improve the performance of the model on any downstream task for the speaker group with the particular target accent. Although continued self-supervision improves performance, it comes at the computational and memory costs of training, storing, and deploying separate BERT encoder models for different accents. To overcome these additional costs, we introduce parameter-efficiency using residual adapters <cit.>.
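As a concrete illustration (a minimal PyTorch sketch consistent with the description that follows, not the exact implementation used in this work), a residual adapter block with bottleneck dimension B_ada can be written as:

import torch.nn as nn

class ResidualAdapter(nn.Module):
    # LayerNorm -> down-projection to the bottleneck B_ada -> ReLU -> up-projection, plus a residual connection
    def __init__(self, d_model, b_ada):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, b_ada)
        self.up = nn.Linear(b_ada, d_model)
        self.act = nn.ReLU()

    def forward(self, x):
        # x: (batch, time, d_model) output of a Transformer block; during accent-adaptive
        # pre-training only these adapter parameters (theta_ada) are updated
        return x + self.up(self.act(self.down(self.norm(x))))

One such module is sandwiched between consecutive Transformer blocks of the BERT encoder, with the HuBERT parameters Θ kept frozen, as described next.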
Adapters were first introduced for Transformer-based language models to adapt these large models to different tasks. With a handful of additional parameters per task, adapters have been shown to influence the output from the Transformer and hence make them task-specific <cit.>. In our work, we extend the application of adapters to speech where they are used for adapting audio representations for different accents. We introduce an adapter sandwiched between every Transformer block of the BERT encoder of the HuBERT model. Each adapter module consists of a layer normalization, a feed-forward network to project the vector sequence from the Transformer block to a new bottle-neck dimension B_ada, ReLU activation and finally another feed-forward network to project back the vector sequence to the original dimension. The output from the adapter is added back to the original vector sequence and fed to the next Transformer block. We collectively denote parameters of all the adapters in our model by θ_ada. In our accent-adaptive self-supervision stage, rather than updating all the parameters of the HuBERT model (Θ), we only update θ_ada keeping Θ constant. This ensures that we can still obtain accent-specific audio representations while storing a much smaller set of accent-specific parameters relative to HuBERT for each target accent. Prior work <cit.> introduced accent-specific adapters in speech models by learning accent information from labeled data for downstream tasks. However, in sharp contrast, we hypothesize that the accent information can be efficiently captured without using any labels from the downstream task. Accent of a speaker is a speech characteristic and the self-supervised objective of predicting the masked audio sequence targets is suitable to capture accent-specific information. Our learned audio representations for a target accent are more general and efficient than the prior work as they can used for any downstream speech task and they do not require any additional labels per accent. § EXPERIMENTAL SETUP §.§ Datasets For our experiments, we use the publicly available version of 60K hours of LibriLight <cit.> as the generic unlabeled data (X_src) in the pre-training stage. Similarly, we use 960 hours of paired speech-text data from LibriSpeech <cit.> for the fine-tuning stage. This is representative of the standard data setting used by many SSL models <cit.>. We verify the claims of our methodology by adapting our models on four different target accents. The unlabeled data (X_tar) for two of these four accents, i.e., Indian () and Scottish (), is taken from Mozilla Common Voice Corpus v6.1 (MCV) <cit.> dataset, while we collect the audio sequences for the other two accents, German () and Chinese () in-house. These audio sequences are conversational in nature and are collected by making diverse set of speakers of a particular accent read dialogues. The details of the number of utterances and hours of recordings used for training, validation and evaluation are shown in Table <ref>. For three of the four accents, we use 30 hours of unlabeled audio while we only use 6.6 hours of accent. This is significantly smaller than 60K hours of audio used for pre-training in the first stage. Note that the ground truth labels of any of the accent-specific training and validation datasets are never used in our experiments. To test generalization of our accent-adapted models in different settings, we use additional evaluation datasets. 
We separately collect 10.1 hours of paired Conversational (Conv.) audio and text from speakers with Indian accents. We also use 2 hours of Indian speaker-specific audio and text from publicly available VoxForge <cit.> dataset for evaluation. §.§ Model settings As defined in Section <ref>, we use HuBERT-large model with N=24 Transformer blocks. For the first stage of generic pre-training, the HuBERT model parameters (Θ), the acoustic unit discovery module (h), the mask indices (M) and the masking function (r) are obtained from their open-sourced versions by fairseq <cit.>. This model is referred to as baseline in our tables. They use 60K hours of unlabeled LibriLight data <cit.> and minimize to obtain the model parameters. Further, their clustering model (h) is a k-means model where cluster centers are identified using the output feature sequence from the 9^th BERT encoder layer of the pre-trained HuBERT-base model. For the next stage of accent-adaptive self-supervision, we use the same clustering model h to obtain targets for accented-speech i.e. Z_tar = h(X_tar). In this stage, we train the model with and without the adapters i.e., Accent-Adapters and Accent-HuBERT respectively. For the model without the adapters, we update all the parameters of the HuBERT model (Θ) with a learning rate of 2e-5 and linear warmup phase of 20k updates. The maximum number of tokens in each batch is set to 300k and the model is trained for 150k steps and finally the best model is chosen using the self-supervision loss value on the unlabeled accent-specific validation dataset. When using adapters, all settings are the same except that we freeze Θ and only update θ_ada using a learning rate scheduler with peak learning rate of 1e-3, a linear warmup phase of 75k steps, followed by polynomial decay till 0. For our final stage of task-specific fine-tuning, we use the same experimental settings as s3prl <cit.> for ASR. For all three models, we train the decoder parameters (θ_d) with 16 batch size and 5.0e-5 learning rate till the decrease in the training loss between subsequent epochs is less than a certain threshold. Similar to <cit.>, we use the LibriSpeech official 4-gram language model powered by KenLM <cit.> and flashlight toolkit <cit.> fused together with our models during decoding. § RESULTS For our experiments, the baseline is the state-of-the-art HuBERT model <cit.> that achieves a WER of 2.3 and 4.6 on test-clean and test-other subsets of LibriSpeech <cit.> in a similar setting as used in <cit.>. As highlighted previously in Section <ref>, we specifically aim to improve the performance of the baseline model for the speakers of the accents that see high WERs even though the model performs well on the standard benchmarks. For example, the WER of the baseline model on and accent from the publicly available MCV dataset are 24.8 and 52.0 respectively. All the numbers reported in our tables are WER Reduction % (WERR) over the baseline model. Important findings from our experiments are summarized below: Continued self-supervision enables learning rich task-agnostic representations for different accents: We showcase that models with continued self-supervision perform significantly better than the baseline on the ASR task. In Table <ref>, our models reduce WERs on all four accents without using any accent-specific labeled data during training. Accent-Adapters and Accent-HuBERT achieve 22.7% and 25.1% WERR on average respectively. 
Since we use the same task-specific fine-tuning setting for both the baseline and our methods, we attribute the improvements to richer audio representations learned by the base model that can adapt to speech characteristics related to the target accent. This is an important finding as it enables performance gains on any downstream task without spending resources on collection of task-specific labeled data. To validate our hypothesis that the improvement in ASR performance is indeed the result of richer acoustic representations of the accented speech, we evaluate our models on unseen datasets. We use the model trained on the audio from the accent in the MCV dataset and evaluate it on two independently collected datasets by speakers of the same accent. The WERR %ages from the baseline are summarized in Table <ref>. We see a significant 12.6% and 6.7% reduction in WER of our Accent-HuBERT model on the accent subset of VoxForge and Conversational data respectively. We attribute these improvements specifically to accent-related acoustic features learned by the HuBERT model as it is the only common factor between the training and the evaluation dataset. All other confounding factors related to the unlabeled audio like content of the audio, individual speaker related features, signal-noise ratio etc., are factored out in these evaluations. This showcases the robustness of the improvements stemming from our methodology. Adapters are a cost-effective way to capture accent-specific features in large self-supervised speech models: Our baseline model HuBERT-large (Θ) has 317M parameters. Fine-tuning, storing and deploying such models individually for each speaker group can be limited by computational and memory constraints, although that would give the best performance in principle. Adapters, on the other hand, can achieve similar performance using ∼85% less parameters per speaker group. Our findings are in line with many prior works in natural language processing (NLP) <cit.> and speech <cit.>, where adapter modules have been showcased to influence the output of the Transformer model using bottle-neck layers. The dimension of this bottle-neck layer (B_ada) is used to trade-off between the performance and cost of the model. In Table <ref>, we provide an ablation for the choice of B_ada and the WERR % on one of the accents used for evaluation i.e., from the MCV test set. With just 16% of the base model parameters, we see a strong 23.9% WERR over the baseline. We see diminishing returns of performance improvement as we increase the size of the bottle-neck dimension beyond 1024. Therefore, B_ada = 1024 was the choice for all the other experiments in this work. § CONCLUSIONS In this paper, we propose adapting self-supervised speech representations to atypical accents by continuing to perform self-supervision using such data. To the best of our knowledge, we are the first to show strong improvements over state-of-the-art baselines by adapting models using self-supervision on unlabeled accented data. We experiment with modifying the base encoder by adding adapters to each Transformer block and updating the adapters alone during accent-adaptive pre-training, as well as with updating the entire encoder during accent-adaptive pre-training. Our method achieves strong WERR over the state-of-the-art on 4 different non-native accents. We achieve an average 22.7% WERR when using adapters and an average of 25.1% WERR when updating the entire encoder. 
We also show that our models adapted to an accent using a given dataset perform well on other evaluation sets with similar speaker characteristics, thus validating our hypothesis that our models adapt by learning accent-specific acoustic representations from the target speech. Our approach is parameter-efficient and we show strong WERR by updating just 16% of the model parameters. Although, we conduct our experiments with ASR as the downstream task in this work, we posit that our approach is task agnostic, since we perform adaptation during the pre-training stage. Our proposed approach has great practical viability due to 2 reasons: (a) we can adapt using unlabeled data alone, which is far easier and cheaper to obtain compared to high-quality labeled data, and (b) we can adapt models to different accents in a parameter-efficient way with only a small number of accent-specific parameters, without needing to incur the memory and compute costs of maintaining large models for each accent. While our current work focuses on adapting to unlabeled accented data, effectively utilizing a small amount of labeled accented data alongside accent-adaptive self-supervision is a promising future direction to explore. IEEEtran 10 url@samestyle devlin2018bert J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” arXiv preprint arXiv:1810.04805, 2018. hsu2021hubert W.-N. Hsu, B. Bolte, Y.-H. H. Tsai, K. Lakhotia, R. Salakhutdinov, and A. Mohamed, “HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 3451–3460, 2021. baevski2020wav2vec A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, “wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations,” Advances in neural information processing systems, vol. 33, pp. 12 449–12 460, 2020. chen2022wavlm S. Chen, C. Wang, Z. Chen, Y. Wu, S. Liu, Z. Chen, J. Li, N. Kanda, T. Yoshioka, X. Xiao et al., “WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing,” IEEE Journal of Selected Topics in Signal Processing, vol. 16, no. 6, pp. 1505–1518, 2022. ling2020deep S. Ling, Y. Liu, J. Salazar, and K. Kirchhoff, “Deep contextualized acoustic representations for semi-supervised speech recognition,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).1em plus 0.5em minus 0.4emIEEE, 2020, pp. 6429–6433. yang21c_interspeech S. wen Yang, P.-H. Chi, Y.-S. Chuang, C.-I. J. Lai, K. Lakhotia, Y. Y. Lin, A. T. Liu, J. Shi, X. Chang, G.-T. Lin, T.-H. Huang, W.-C. Tseng, K. tik Lee, D.-R. Liu, Z. Huang, S. Dong, S.-W. Li, S. Watanabe, A. Mohamed, and H. yi Lee, “SUPERB: Speech Processing Universal PERformance Benchmark,” in Proc. Interspeech 2021, 2021, pp. 1194–1198. tomanek-etal-2021-residual K. Tomanek, V. Zayats, D. Padfield, K. Vaillancourt, and F. Biadsy, “Residual adapters for parameter-efficient asr adaptation to atypical and accented speech,” in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.1em plus 0.5em minus 0.4emOnline and Punta Cana, Dominican Republic: Association for Computational Linguistics, Nov. 2021, pp. 6751–6760. [Online]. Available: <https://aclanthology.org/2021.emnlp-main.541> radford2022robust A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. 
Sutskever, “Robust Speech Recognition via Large-Scale Weak Supervision,” arXiv preprint arXiv:2212.04356, 2022. hochreiter1997long S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997. graves2006connectionist A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, “Connectionist Temporal Classification: labelling unsegmented sequence data with recurrent neural networks,” in Proceedings of the 23rd international conference on Machine learning, 2006, pp. 369–376. houlsby2019parameter N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly, “Parameter-Efficient Transfer Learning for NLP,” in International Conference on Machine Learning.1em plus 0.5em minus 0.4emPMLR, 2019, pp. 2790–2799. pfeiffer2020adapterhub J. Pfeiffer, A. Rücklé, C. Poth, A. Kamath, I. Vulić, S. Ruder, K. Cho, and I. Gurevych, “AdapterHub: A Framework for Adapting Transformers,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2020, pp. 46–54. dingliwal2023personalization S. Dingliwal, M. Sunkara, S. Ronanki, J. Farris, K. Kirchhoff, and S. Bodapati, “Personalization of CTC speech recognition models,” in 2022 IEEE Spoken Language Technology Workshop (SLT).1em plus 0.5em minus 0.4emIEEE, 2023, pp. 302–309. fan2022draft R. Fan and A. Alwan, “DRAFT: A Novel Framework to Reduce Domain Shifting in Self-supervised Learning and Its Application to Children's ASR,” arXiv preprint arXiv:2206.07931, 2022. kahn2020libri J. Kahn, M. Riviere, W. Zheng, E. Kharitonov, Q. Xu, P.-E. Mazaré, J. Karadayi, V. Liptchinsky, R. Collobert, C. Fuegen et al., “Libri-Light: A Benchmark for ASR with Limited or No Supervision,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).1em plus 0.5em minus 0.4emIEEE, 2020, pp. 7669–7673. 7178964 V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, “Librispeech: An ASR corpus based on public domain audio books,” in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 5206–5210. ardila2019common R. Ardila, M. Branson, K. Davis, M. Henretty, M. Kohler, J. Meyer, R. Morais, L. Saunders, F. M. Tyers, and G. Weber, “Common Voice: A Massively-Multilingual Speech Corpus,” arXiv preprint arXiv:1912.06670, 2019. Voxforge.org Voxforge.org, “Free speech... recognition (linux, windows and mac) - voxforge.org,” <http://www.voxforge.org/>, accessed 07/25/2022. ott2019fairseq M. Ott, S. Edunov, A. Baevski, A. Fan, S. Gross, N. Ng, D. Grangier, and M. Auli, “fairseq: A Fast, Extensible Toolkit for Sequence Modeling,” in Proceedings of NAACL-HLT 2019: Demonstrations, 2019. heafield2011kenlm K. Heafield, “KenLM: Faster and Smaller Language Model Queries,” in Proceedings of the sixth workshop on statistical machine translation, 2011, pp. 187–197. pratap2019wav2letter++ V. Pratap, A. Hannun, Q. Xu, J. Cai, J. Kahn, G. Synnaeve, V. Liptchinsky, and R. Collobert, “Wav2letter++: A fast open-source speech recognition system,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).1em plus 0.5em minus 0.4emIEEE, 2019, pp. 6460–6464.
http://arxiv.org/abs/2307.01226v1 (submitted 3 July 2023; primary category cs.LG; cross-listed in cs.AI, cs.CL, cs.IT, math.IT)
vONTSS: vMF based semi-supervised neural topic modeling with optimal transport
Weijie Xu, Xiaoyu Jiang, Srinivasan H. Sengamedu, Francis Iannacci, Jinjin Zhao
Recently, Neural Topic Models (NTM), inspired by variational autoencoders, have attracted a lot of research interest; however, these methods have limited applications in the real world due to the challenge of incorporating human knowledge. This work presents a semi-supervised neural topic modeling method, vONTSS, which uses von Mises-Fisher (vMF) based variational autoencoders and optimal transport. When a few keywords per topic are provided, vONTSS in the semi-supervised setting generates potential topics and optimizes topic-keyword quality and topic classification. Experiments show that vONTSS outperforms existing semi-supervised topic modeling methods in classification accuracy and diversity. vONTSS also supports unsupervised topic modeling. Quantitative and qualitative experiments show that vONTSS in the unsupervised setting outperforms recent NTMs on multiple aspects: vONTSS discovers highly clustered and coherent topics on benchmark datasets. It is also much faster than the state-of-the-art weakly supervised text classification method while achieving similar classification performance. We further prove the equivalence of the optimal transport loss and the cross-entropy loss at the global minimum.
§ INTRODUCTION Topic modeling methods such as <cit.> are unsupervised approaches for discovering latent structure in documents and have achieved strong performance <cit.>. A topic model takes a collection of documents as input, generates a specified number of topics, and can further produce keywords and related documents for each topic. In recent years, topic modeling methods have been widely used in many fields such as finance <cit.>, healthcare <cit.>, education <cit.>, marketing <cit.> and social science <cit.>. With the development of the Variational Autoencoder (VAE) <cit.>, the Neural Topic Model <cit.> has attracted attention because it enjoys better flexibility and scalability. However, recent research <cit.> shows that the topics generated by these methods are not aligned with human perceptions. To incorporate users' domain knowledge into the model, semi-supervised topic modeling methods have become an active area of research <cit.> and applications <cit.>. Semi-supervised topic modeling methods take a few keywords as input and generate topics based on these keywords. People use semi-supervised topic modeling methods because they want each topic to include certain keywords and to incorporate their domain expertise into the generated topics. Traditional semi-supervised topic modeling methods fail to utilize the semantic information of the corpus, causing low classification accuracy and high variance <cit.>. To solve these problems, we propose a von Mises-Fisher (vMF) based semi-supervised neural topic modeling method using optimal transport (vONTSS). We use the encoder-decoder framework for our model. The encoder uses modified vMF priors for latent distributions. The decoder uses a word-topic similarity matrix based on spherical embeddings. We use optimal transport to extend it to a semi-supervised version.
vONTSS has the following enhancements: 1. We introduce the notion of temperature and make the spread of vMF distribution (κ) learnable, which leads to strong coherence and cluster-inducing properties. 2. vONT (In the rest of the paper, we use vONT to refer to the unsupervised topic model and vONTSS to semi-supervised version.) achieves the best coherence and clusterability compared to the state-of-the-art approaches on benchmark datasets. 3. We perform the human evaluation of the results for intrusion and rating tasks, and vONT outperforms other techniques. 4. Use of optimal transport to extend the stability of the model in the semi-supervised setting. The semi-supervised version is fast to train and achieves good alignment between keywords sets and topics. We also prove its theoretical properties. 5. In the semi-supervised scenario, we demonstrate the vONTSS achieves the best classification accuracy and lowest variance compared to other semi-supervised topic modeling methods. 6. We also show that vONTSS achieves similar performance as the state-of-the-art weakly text classification method while being much more efficient. § RELATED METHODS AND CHALLENGES NTM Variational Autoencoders (VAE) <cit.> enable efficient variational inference. NTM <cit.> uses Z ∈ R^M as topic proportions over M topics and X ∈ R^V to represent word count for the dataset with V unique words. NTM assumes that for any document, Z is generated from a document satisfying the prior distribution p(Z) and X is generated by the conditional distribution p_θ(X|Z) where θ denotes a decoder. Ideally, we want to optimize the marginal likelihood p_θ(X) = ∫p(Z)p_θ(X|Z)dZ. Due to the intractability of integration, NTM introduces q_ϕ(Z|X), a variational approximation to the posterior p(Z|X). The loss function of NTM is: L_θ, ϕ = (-E_q_ϕ(Z|X)[log p_θ (X|Z)] + KL[q_ϕ(Z|X) || p(Z)] ) NTM usually utilizes a neural network with softmax to approximate p_θ(X|Z) := softmax(Wz) <cit.>. NTM selects Gaussian <cit.>, Gamma <cit.> and Dirichlet distribution <cit.> to approximate p(Z). The second term Kullback-Leibler (KL) divergence regularizes q_ϕ(Z|X) to be close to p(Z). NTM has several problems in practice. Firstly, it does not capture the semantic relationship between words. Secondly, the generated topics are not aligned with human interpretations. <cit.>. Thirdly, using Gaussian prior may risk gravitating latent space toward the center and produce tangled representations among classes of documents. This is due to the fact that gaussian density presents a concentrated mass around the origin in low dimensional settings <cit.> and resembles a uniform distribution in high dimensional settings. Extending NTM to semi-supervised version is also challenging. L_θ, ϕ is not always aligned with classification-related loss such as cross-entropy loss as identified by existing research <cit.>. To be specific, cross-entropy makes keywords sets align with assigned topics, while reconstruction loss(-E_q_ϕ(Z|X)[log p_θ (X|Z)]) makes latent space as representative as possible. Thus, existing semi-supervised NTM methods either are not stable <cit.> or need certain adaptions <cit.>. Embedding Topic Model (ETM) Pre-trained word embeddings such as Glove <cit.> and word2vec <cit.> have the ability to capture semantic information, which is missing from basic bag-of-word (BoW) representations. They can serve as additional information to guide topic discovery. 
Dieng <cit.> proposes ETM to use a vocabulary embedding matrix e_V∈ R^V × D where D represents the dimension of word embeddings. The decoder ϕ learns a topic embedding matrix e_T∈ R^M × D. We denote topic to word distribution softmax(e_T e_V^T) as E p_θ(X|Z) := Z × E However since there exists some common words that are related to many other words, these common words' embeddings may be highly correlated with few topics' embeddings. Thus, ETM does not produce diverse topics <cit.>. Besides, using pre-trained embeddings cannot help the model identify domain-specific topics. For example, topics related to COVID-19 are more likely to be expressed by a few topics instead of one single topic using pre-trained Glove embeddings <cit.> since COVID-19 is not in the embeddings. von Mises-Fisher In low dimensions, the Gaussian density presents a concentrated probability mass around the origin. This is problematic when the data is partitioned into multiple clusters. An ideal prior should be non-informative and uniform over the parameter space. Thus, the von Mises-Fisher(vMF) is used in VAE. vMF is a distribution on the (M-1)-dimensional sphere in R^M, parameterized by μ∈ R^M where ||μ|| = 1 and a concentration parameter κ∈ R_≥ 0. The probability density function of the vMF distribution for z ∈ R^D is defined as: q(Z|μ, κ) = C_M(κ) exp(κμ^TZ) C_M(κ) = κ^M/2 - 1/(2π)^M/2 I_M/2 - 1(κ) + log 2 where I_v denotes the modified Bessel function of the first kind at order v. The KL divergence with vMF(., 0) <cit.> is KL(vMF(μ, κ)|vMF(.,0)) = κI_M/2(κ)/I_M/2-1(κ) + (M/2 - 1) logκ - M/2log (2π) - log I_M/2-1(κ) + M/2logπ + log 2 + logΓ(M/2) vMF based VAE has better clusterability of data points especially in low dimensions <cit.>. However, vMF distribution has limited expressibility when its sample is translated into a probability vector. Due to the unit constraint, softmax of any sample of vMF will not result in high probability on any topic even under strong direction μ. For example, when topic dimension M equals to 10, the highest topic proportion of a certain topic is 0.23. Most of vMF-based topic modeling methods are not VAE based and very slow to train as summarized in Appendix <ref>. § PROPOSED METHODS The architecture of vONTSS is shown in Figure <ref>. At a high level, our encoder network ϕ transforms the BoW representation of the document X_d into a latent vector generated by vmf distribution and generates a sample η_d. We then apply a temperature function τ and softmax on this sample to get a probabilistic topic distribution z_d. Lastly, our decoder uses a modified topic-word matrix E to reconstruct X_d's BoW representation. To extend into semi-supervised setting, we leverage optimal transport to match keywords' set with topics. The encoder network ϕ and generative model parameter θ are learned jointly during the training process. To overcome entangled topic latent space introduced by Gaussian distribution and limited expressibility of vMF distribution, we make two improvements: 1. Introduce a temperature function τ(η_i) prior to softmax() to modify the radius of vMF distribution. 2. Set κ to a learnable parameter to flexibly infer the confidence of particular topics during training. Encoder Network Temperature Function To alleviate concerns regarding expressibility while inducing separability among topics, we modify the radius of vMF distribution. We use a temperature function to represent the radius. As shown in Figure <ref>, unmodified vMF distribution has limited expressiveness. 
For instance, Gaussian posteriors can express a topic probability vector of [0.98, 0.01, 0.005, 0.0003, 0.0002], while vMF can't due to the unity constraint. In practice, if we change the radius to 10, the network can learn more polarized topics distribution as shown in the right plot in Figure <ref>. The influence of different radii is analyzed in Appendix <ref>. Given equally powerful learning networks of distributions' parameters, vMF with different radii learns richer and more nuanced structures in their latent representations than a Gaussian counterpart (Appendix <ref>). Learnable κ To further improve the clusterability, we convert κ from a fixed value to a learnable parameter. The KL divergence of vMF distribution makes the distribution more concentrated while not influencing the direction of latent distribution. This makes the result more clustered. For Gaussian distribution, KL divergence penalizes the polarization of latent distribution (Appendix <ref>). This makes the Gaussian distribution less clustered. To illustrate this, we randomly sampled encoded documents' latent distributions from AgNews Dataset <cit.> after training with both latent distributions, as shown in Figure <ref>. For the Gaussian distribution, we see that documents belonging to different topics are entangled around the center, causing the inseparability of topics during both the training and inference stage. vMF distribution, on the hand, repels four document classes into different quadrants, presents more structures when compared to Gaussian distribution, and creates better separable clusters. Detailed ablation study can be found in Appendix <ref> Decoder Network Our decoder follows ETM's construction and uses the embedding e_V and e_T to generate a topic-word matrix E. One distinction between our decoder and ETM's decoder is that we generate the word embeddings by training a spherical embedding on the dataset. Spherical embeddings perform well in word similarity evaluation and document clustering <cit.>, which further improves the clusterability of the topic modeling methods. We also keep word embeddings fixed during the topic modeling training process for two reasons. Firstly, keeping word embeddings fixed can alleviate sparsity issues <cit.>. Additionally, vMF based VAE tends to be less expressive in high dimensions due to limited variance freedom <cit.>. Keeping the embedding fixed can make topics more separable in higher-dimension settings and improve topic diversity. Loss Function for vONTSS In semi-supervised settings, the user specifies sets of keywords S associated with topics T. Let (s, t) represent a keyword set and a topic pair, where each keyword x ∈ s is labeled by topic t. Instead of training a separate neural network for the semi-supervised extension of NTM, we use the topic-word matrix (decoder θ) to represent the probability of a word x given topic t. M1 + M2 is a semi-supervised model used in VAE. We adapt the M1 + M2 model framework <cit.>. Under the assumption that p_θ(x, t, z) = p_θ(x|z) p_θ(t|x) p(z), our loss function can be approximated as L(X, T) = L_θ, ϕ(X) - α H[q_ϕ(X|T)] + δ L_ce L_ce = - ∑_(s, t) ∈ (S, T)E_x ∈ slog q_θ(x|t) For topic i and word j, we let q_θ(x_j|t_i) = E_i, j where E is the topic-word matrix. H[q_ϕ(X|T)] is entropy of q_θ(X|T). We can consider it as a regularization term. Optimizing the current model is hard because we have 3 objectives to minimize(cross-entropy, KL Divergence, and reconstruction loss) and they are not aligned with each other. 
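To tie the pieces defined so far together, the sketch below traces the unsupervised part of the forward pass: encode a bag-of-words into a vMF posterior, sample, apply the temperature and softmax to obtain topic proportions, and reconstruct through the topic-word matrix E = softmax(e_T e_V^T). It is a schematic illustration only: `sample_vmf` stands in for a reparameterized vMF sampler and is not implemented here, the KL term of L_θ,φ and the semi-supervised terms of L(X, T) are omitted, and all names are our own (the [256, 64] encoder and radius of 10 mirror the experimental settings reported later in the paper).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VONTSketch(nn.Module):
    """Schematic vONT forward pass: BoW -> vMF(mu, kappa) -> temperature + softmax -> z -> z @ E."""

    def __init__(self, vocab_size, n_topics, word_emb, radius=10.0):
        super().__init__()
        emb_dim = word_emb.size(1)
        self.encoder = nn.Sequential(nn.Linear(vocab_size, 256), nn.ReLU(),
                                     nn.Linear(256, 64), nn.ReLU())
        self.mu_head = nn.Linear(64, n_topics)        # direction of the vMF posterior
        self.kappa_head = nn.Linear(64, 1)            # learnable concentration kappa
        self.topic_emb = nn.Parameter(torch.randn(n_topics, emb_dim))  # e_T
        self.register_buffer("word_emb", word_emb)    # e_V, fixed spherical embeddings
        self.radius = radius                          # temperature tau

    def forward(self, bow, sample_vmf):
        h = self.encoder(bow)
        mu = F.normalize(self.mu_head(h), dim=-1)     # ||mu|| = 1
        kappa = F.softplus(self.kappa_head(h))        # kappa >= 0
        eta = sample_vmf(mu, kappa)                   # reparameterized vMF sample (external helper)
        z = F.softmax(self.radius * eta, dim=-1)      # temperature widens the reachable topic mixtures
        E = F.softmax(self.topic_emb @ self.word_emb.t(), dim=-1)  # topic-word matrix
        recon = z @ E                                 # p_theta(X | Z) = Z x E
        recon_loss = -(bow * torch.log(recon + 1e-10)).sum(-1).mean()
        return recon_loss, E
```

In the full objective, the KL divergence of the vMF posterior and, in the second training stage, the keyword-related terms would be added on top of `recon_loss`.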
To validate our point, we find out that if we make radius parameters learnable, the classification metric performs worse even if it decreases the reconstruction loss(Appendix <ref>). If we apply cross-entropy at the beginning, topic embeddings get stuck into the center of selected keywords' embeddings, which makes the model overfitting. If we first train an unsupervised vONT, we need to find a way to match keywords and trained topics. If we match them based on their cosine similarity, different keywords may match to the same topics. This makes performance unstable. To deal with these challenges, we decide to use a two-stage training process and do not specify labeled keywords to topics at the beginning. vONTSS first optimizes L_θ, ϕ(X) - α H[q_ϕ(X|T)] till convergence, then jointly optimizes L(X, T) for few epochs. This makes our method easier to optimize, less time-consuming, and suitable for interactive topic modeling <cit.>. To optimize L_ce after stage 1, we need to pair topics and keyword sets. Existing methods such as Gumbel softmax prior <cit.> often lead to instability, while naive matching by q_ϕ(x|t) may give us redundant topics. Optimal Transport for vONTSS Optimal Transport (OT) distances <cit.> have been widely used for comparing the distribution of probabilities. Specifically, let U(r,c) be the set of positive m × n matrices for which the rows sum to r and the sum of the column to c: U(r, c) = {P ∈ R_>0^m × n|P 1_t = r, P^T 1_s =c }. For each position t, s in the matrix, it comes with a cost C_t,s. Our goal is to solve d_C(r, c) = min_P ∈ U(r, c)∑_t, s P_t,s C_t,s. To make distribution homogeneous <cit.>, we let d_C^λ(r, c) = min_P ∈ U(r, c)∑_t,s P_t,s C_t, s - 1/λ h(P) h(P) = - ∑_t,s P_t,slog P_t,s OT has achieved good robustness and semantic invariance in NLP related tasks <cit.>. Optimal transport has been used in topic modeling to replace KL divergence <cit.> or create topic embeddings <cit.> as discussed in Appendix <ref>. It has not been used for extending topic modeling to semi-supervised cases. To better match topic and keywords set, we approximate L_ce using optimal transport. We choose sinkhorn distance since it has an entropy term, which makes our trained topics more coherent and stable. Our goal is to design the loss function that is aligned with derived cross-entropy loss at the global minimum. To be specific, the raw dimension of our cost matrix is equal to the dimension of topics and the column dimension of the cost matrix equals to the dimension of keywords group. We denote each entry in the M matrix in optimal transport as, C_t, s = - E_x ∈ slog(q_θ(x|t)) where t is the topic and x is the word in a keywords group s. The model uses sinkhorn distance and restricts the sum of each column and row of P to 1. We give the model an entropy penalty term to make sure each topic is only related to one group of keywords. Thus, L_OT = min_P ∈ U(|T|, |S|)∑_t,s P_t,s C_t, s - 1/λ h(P) where λ controls the entropy penalty. The first term is similar to L_ce approximation, and the second term makes the result homogeneous. To lower the second term, each keyword should be highly correlated to one topic while not/negatively correlated with others. This further separates the topics and improves the topic diversity. We further show that L_OT = L_ce when L(X, T) is minimized. When L(X, T) reaches the global minimal. 
For any (s, t), (s', t') ∈ (S, T): E_x ∈ slog q_ϕ(x|t) + E_x ∈ s'log q_ϕ(x|t') - (E_x ∈ s'log q_ϕ(x|t)) + E_x ∈ slog q_ϕ(x|t')) >= 0 When L(X, T) reaches the global minimal, L_OT = L_ce Appendix <ref> contains the proof. =0.09cm § EXPERIMENT Dataset Our experiments are conducted on four widely-used benchmark datasets for topic modeling and semi-supervised text classification with varied length: DBLP <cit.>, AgNews <cit.> and 20News <cit.>. All these datasets have ground truth labels. Average document length varies from 5.4 to 155. We preprocess all the datasets by cleaning and tokenizing texts. We remove stop words, words that appear more than 15 percent of all documents and words that appear less than 20 time. For semi-supervised experiments, we use the same labels in DBLP and AgNews. We sample 4 similar classes from 20News to see how our method performs in datasets with similar labels. For unsupervised settings, we keep the number of topics equal to the number of classes plus one. I keep the unit of the length to 10 for all experiments. For semi-supervised settings, we set the number of topics equal to the number of classes in semi-supervised cases, and we provide 3 keywords for each class. We use 20% as the training set to get our keywords with the top tfidf score for each class. We use 80% data as the test set. Additional details and provided keywords on the dataset are available in Appendix <ref> Settings In our experiment setting, we do not utilize any external information beyond the dataset itself. The embedding is trained on the test set. We do not compare methods that rely on transfer learning or language models such as <cit.> because of reasons mentioned in appendix <ref>. The hyperparameter setting used for all baseline models and vONT is similar to <cit.>. We use a fully-connected neural network with two hidden layers of [256, 64] unit and ReLU as the activation function followed by a dropout layer (rate = 0.5). We use Adam <cit.> as the optimizer with learning rate 0.002 and use batch size 256. We use <cit.> as scheduler and use learning rate 0.01 for maximally iterations equal to 50. We use spherical embeddings <cit.> trained on the dataset for NVTM, ETM, GSM and NSTM. For vONT, we set the radius of vMF distribution equal to 10. We fix α = δ = 1 in L(X,T) . We keep λ = 0.01 in L_OT. Our code is written in PyTorch and all the models are trained on AWS using ml.p2.8xlarge (NVIDIA K80).[Details on codebases used for baselines and fine-tuning are provided in Appendix <ref>] §.§ Unsupervised vONT experiments Evaluation Metrics We measure the topic coherence and diversity of the model. Most of unsupervised topic coherence metrics are inconsistent with human judgment, based on a recent study <cit.>. Thus, we have done a qualitative study where we ask crowdsource to perform rating and intrusion task on 4 models trained on AgNews. In rating task<cit.>, raters see a topic and then give the topic a quality score on a three-point scale. The rating score is between 1 and 3. A rating score close to 3 means that users can see a topic from provided words. Chang<cit.> devise the intrusion task, where each topic is represented as its top words plus one intruder word which has a low probability belonging to that topic. Topic coherence is then judged by how well human annotators detect the intruder word. The intrusion score is between 0 and 1. An intrusion score close to 1 means that users can easily identify the intruder word. 
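Returning to the optimal transport objective, the coupling P in L_OT (with λ = 0.01 as in the settings above) can be obtained with a few Sinkhorn-Knopp iterations over the topic-keyword cost matrix. The sketch below is a minimal illustration under our own naming; the Gibbs kernel exp(-λC) follows the -(1/λ)h(P) convention of the d_C^λ equation (many libraries instead parameterize the regularizer as ε = 1/λ), and detaching the plan so that gradients reach the topic-word matrix only through the cost is our choice, not necessarily the released implementation.

```python
import torch

def sinkhorn_topic_matching(cost: torch.Tensor, lam: float = 0.01, n_iters: int = 200):
    """Entropic OT plan between topics (rows) and keyword sets (columns).

    cost[t, s] follows C_{t,s} = -E_{x in s} log q_theta(x | t); the marginal
    constraints force every row and every column of P to sum to 1."""
    n_topics, n_sets = cost.shape
    r = torch.ones(n_topics)                    # row marginal
    c = torch.ones(n_sets)                      # column marginal
    K = torch.exp(-lam * cost)                  # Gibbs kernel
    u = torch.ones(n_topics)
    for _ in range(n_iters):                    # Sinkhorn-Knopp scaling
        v = c / (K.t() @ u)
        u = r / (K @ v)
    P = u.unsqueeze(1) * K * v.unsqueeze(0)     # transport plan
    transport_cost = (P.detach() * cost).sum()  # gradients reach the topic-word matrix via cost
    return P, transport_cost
```

In the two-stage procedure described earlier, this matching is only needed during the short second stage, after the unsupervised objective has converged.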
We use mechanical turk and sagemaker groundtruth to do the labeling work. To measure clusterability, we assign every document the topic with the highest probability as the clustering label and compute Top-Purity and Normalized Mutual Information(Top-NMI) as metrics<cit.> to evaluate alignment. Both of them range from 0 to 1. A higher score reflects better clustering performance. We further apply the KMeans algorithm to topic proportions z and use the clustered documents to report purity(Km-Purity) and NMI Km-NMI <cit.>. We varied the number of topics from 10 to 50. We set the number of clusters to be the number of topics for KMeans algorithm. Models with higher clusterability are more likely to perform well in semi-supervised extension. Furthermore, we run all these metrics 10 times. We report mean and standard deviation. Detailed metric implementations are in Appendix <ref>. We also analyze topic diversity in <ref> and unsupervised topic coherence in <ref>. For Diversity, Baseline Methods We compare with the state-of-the-art NTM methods that do not rely on a large neural networks to train. These methods include: GSM <cit.>, an NTM replaces the Dirichlet-Multinomial parameterization in LDA with Gaussian Softmax; ProdLDA <cit.>, an NTM model which keeps the Dirichlet Multinomial parameterization with a Laplace approximation; ETM <cit.>, an NTM model which incorporates word embedding to model topics; vNVDM <cit.>, a vMF based NTM as mentioned in section 2. NSTM <cit.>, optimal transport based NTM, as mentioned in section 3. All baselines are implemented carefully with the guidance of their official code.[Some methods we tested had lower TC scores compared to other benchmarks. This may be because we have less complicated layers, small epochs to train, and we keep fewer words. The ranking of these metrics is mostly in alignment with the paper that has a benchmark. We exclude methods that need to rely on large neural networks and a lot of finetune such as <cit.>. We also exclude methods similar to existing methods such as <cit.>. We exclude methods that do not perform well in previous papers' experiments <cit.> such as <cit.>. We also exclude methods that are relevant but work on different use cases, such as short text.<cit.>] For qualitative study, we choose ProdLDA, ETM and LDA as a comparison to align with previous study <cit.>. Results i) In Table <ref>, vONT performs significantly better than other methods in all datasets for cluster quality metrics. This means vMF distribution induces good clusterability. ii) vONT has the lowest variance in clusterability-related metrics. (iii) In Appendix <ref>, vONT outperforms other models in TC metrics C_v and NPMI. This means that our model is coherent. We believe the introduction of the temperature function helps our method perform better than the existed method in coherence. iv) In Appendix <ref>, vONT performs well on diversity and has the lowest variance. Human Evaluation To evaluate human interpretability, we use intrusion test and ratings test. Details of the experiment are provided in Appendix <ref>. We select AgNews as our dataset, we generate 10 topics each from 4 models. In the word intrusion task, we sample five of the ten topic words plus one intruder randomly sampled from the dataset; for the rating task, we present the top ten words in order. Figure <ref> summarizes the results. vONT performs significantly better than ProdLDA, ETM, and LDA qualitatively. In intrusion test, vONT has the highest score 0.4. 
The second-best method is LDA, which has score 0.29. The two sample test between the two methods has the p-value equal to 0.014. In rating test, vONT has the highest score 2.51 while ProdLDA has the second-highest score 2.42. The two sample test between the two methods has a p-value equal to 0.036. Based on this study, we conclude that humans find it easier to interpret topics produced by vONT. §.§ Semi-Supervised vONTSS experiments Evaluation Metric diversity aims to measure how diverse the discovered topic is. diversity is defined as the percentage of unique words in the top 25 words from all topics.<cit.> diversity close to 0 means redundant and TD close to 1 means varied topics. We measure the classification accuracy of the model. Thus, we measure accuracy. Similar to other semi-supervised paper<cit.>, we also measure micro f1 score, since this metric gives more information in semi-supervised cases with unbalanced data. We do not include any coherence metric since we already have ground truth. Baseline methods CatE <cit.> retrieves category representative terms according to both embedding similarity and distributional specificity. It uses WeSTClass<cit.> for all other steps in weakly-supervised classification. If we do not consider methods with transfer learning or external knowledge, it achieves the best classification performance. GuidedLDA <cit.>: incorporates keywords by combining the topics as a mixture of a seed topic and associating each group of keywords with a multinomial distribution over the regular topics. Correlation Explanation CorEx <cit.> is an information theoretic approach to learning latent topics over documents by searching for topics that are ”maximally informative” about a set of documents. We fine-tune on the training set and choose the best anchor strength parameters for our reporting. We also created semi-supervised ETM by using gaussian distribution and adding the same optimal transport loss as vONTSS. We call it gONTSS. We also train all objectives instead of using two-stage training and call it vONTSS with all loss. Instead of applying optimal transport, we apply cross entropy directly after stage 1 and match topics by keywords set with the highest similarity. We call this method vONTSS with CE. To get Best Unsupervised method, we train the unsupervised models(ETM, vNVDM, vONT, ProdLDA) and consider all potential matching between topics and seed words. We report the method with the highest accuracy for each dataset across all different matching. Guided BERTopic We evaluate the guided version of BERTopic <cit.> method. They create seeded embeddings to find the most similar document. It then takes seed words and assigns them a multiplier larger than 1 to increase the IDF value. [We do not find code for other neural-based semi-supervised topic modeling methods <cit.>, but based on their experiments, the best one is <cit.> which is almost the same as vONTSS with CE which means it has similar variance and lower performance compare to vONTSS with CE ] Results Table <ref> shows that i) vONTSS outperforms all other semi-supervised topic modeling methods in classification accuracy and micro F1 score, especially for large datasets with lengthy texts such as AgNews. ii) vONTSS has a lower standard deviation compared to other models. This advantage makes our model more stable and practical in real-world applications. 
iii) To compare methods with/without optimal transport, methods with optimal transport vONTSS achieve much better accuracy, diversity, and lower variance compared to vONTSS with CE and vONTSS with all loss. This means optimal transport does increase the classification accuracy, stability, and diversity of generated topics. iv) In benchmark datasets, vONTSS is comparable to CatE in quality metrics. As can be seen in Table <ref> in the appendix, vONTSS is 15 times faster than CatE. v) Unsupervised methods cannot produce comparable results even if we use the best topic seed word matching. This shows that semi-supervised topic modeling methods are necessary. vi) Guided Bertopic does not produce good results. It is also not very stable. In Guided Bertopic, the assigned multiplier is increased across all topics, which makes their probability less representative. vi) If we change vONTSS to gONTSS, § CONCLUSIONS In this paper, we propose a new semi-supervised neural topic modeling method vONTSS, which leverages vMF, the temperature function, optimal transport, and VAEs. Its unsupervised version exceeds state-of-the-art in topic coherence through both unsupervised and human evaluations while inducing high clusterability among topics. We show that optimal transport loss is equivalent to cross-entropy loss under the optimal condition and induces one-to-one mapping between keywords sets and topics. vONTSS achieves competitive classification performance, maintains top topic diversity, trains fast, and possesses the least variance among diverse datasets. langley00 acl_natbib Appendix § ADDITIONAL EXPERIMENTAL RESULTS Figure <ref> shows the variation of cluster purity as the number of topics changes. This expands the information provided in Figure <ref>. Figure <ref> provides box plots for the metrics in Table <ref>. § PROOF OF LEMMA 3.1 When L(X, T) reaches the global minimum. For any (s, t), (s', t') ∈ (S, T): E_x ∈ slog q_ϕ(x|t) + E_x ∈ s'log q_ϕ(x|t') - (E_x ∈ s'log q_ϕ(x|t)) + E_x ∈ slog q_ϕ(x|t')) >= 0 If the reverse is true, then, we can just switch position of topic t and t' in the topic-word matrix and also switch the position on latent space z using temperature function. This will not change reconstruction process, since for every input, get the same reconstruction. Thus, reconstruction loss does not change. Assume this new neural network structure has loss L^'(X, T) and cross entropy loss is L^'_ce L^'(X, T) - L(X, T) = L^'_ce - L_ce = - (E_x ∈ s'log q_ϕ(x|t)) + E_x ∈ slog q_ϕ(x|t')) + E_x ∈ slog q_ϕ(x|t) + E_x ∈ s'log q_ϕ(x|t') < 0 The last step is based on (9). This contradicts that L(X, T) is global minimal. Thus, lemma holds. § PROOF OF THEOREM 3.2 When L(X, T) reaches the global minimal, L_OT = L_ce Step 1 show that p_t, s = 1 when (t, s) ∈ (T, S) and equal to 0 in all other cases. ∃ p_t, s = γ < 1 when (t, s) ∈ (T, S). Without loss of generality, we assume p_t, s' = 1 - γ, p_t', s' = γ and p_t', s = 1 - γ. Consider related term in L_OT, for the first term: γ (C_t,s + C_t', s') + (1 - γ) (C_t,s' + C_t', s) = (C_t,s + C_t', s') - (1 - γ) (C_t,s + C_t', s' - (C_t,s' + C_t', s)) ≥ C_t,s + C_t', s' using Lemma 3.1 and Equation (<ref>) For the second term in L_OT, -p_t, slog p_t, s= 0 when p_t, s = 1 or 0. Otherwise, it is larger than 0. This means that p_t, s = p_t', s' = 1 achieve smaller L_OT compare to current settings. This contradicts the definition of L_OT which is the min in the space. Thus, p_t, s = 1 when (t, s) ∈ (T, S). Since the raw sum and column sum equal to |T|. 
This means p_t, s = 0 when (t, s) ∉ (T, S) Step 2: h(P) = - ∑_t,s P_t,slog P_t,s = - (∑_(t, s) ∈ (T,S) 1 * log 1 + ∑_(t, s) ∉ (T,S) 0 * log 0) = 0 ∑_t,s P_t,s C_t, s = ∑_(t, s) ∈ (T, S) C_t, s =- ∑_(t, s) ∈ (T, S) E_x∈ slog q_ϕ(x|t(x)) Combine (10) and (11), we have L_OT = ∑_(t,s)∈(S, T) C_t, s - h(P)= - ∑_(t,s) ∈ (T, S) E_x ∈ slog q_ϕ(x|t_x) = L_ce § EFFECT OF LEARN-ABLE DISTRIBUTION TEMPERATURE In this study, we make it a learnable parameter and implement it in two ways. The first way is setting temperature variable as one parameter that can be learned (1-p model). All topics share the same parameter. The second way is setting the temperature variable as a vector with dimension equal to the number of topics (n-p model). This means each topic has its own temperature. The initialization value for both the vectors is 10. After training, the 1-p model has value 4.99 and n-p model has values [-0.45,4.88,5.91,3.47,4.19] (values are rounded to 2 decimals). The accuracy for 1-p model is 78.9 and n-p model is 80.5. This means that vONTSS cannot further improve with learnable temperature. This means that our loss function is not fully aligned with accuracy metric. This is due to the fact that we optimize reconstruction loss as well as KL divergence during the training procedure. This makes our objective less aligned with cross entropy loss. § CODE Code we used to implement GSM is <https://github.com/YongfeiYan/Neural-Document-Modeling> Code we used to implement ETM is <https://github.com/adjidieng/ETM> Code we used to implement vNVDM is <https://github.com/jiacheng-xu/vmf_vae_nlp> with kl weight = 1 and default scaling item for auxiliary objective term equal to 0.0001 Code we used to implement NSTM is <https://github.com/ethanhezhao/NeuralSinkhornTopicModel> We use same parameters suggested by paper for optimal transport reclossweight = 0.07 and epsilon = 0.001. Code we used to implement ProdLDA is <https://github.com/vlukiyanov/pt-avitm> Code we used to implement GSM is <https://github.com/YongfeiYan/Neural-Document-Modeling> with topic covariance penalty equals to 1. Code we used to implement GuidedLDA is <https://github.com/vi3k6i5/GuidedLDA> We fine tune best seed confidence from 0 to 1 with step equal to 0.05. We simply report the best performance on average of 10 results. Code we used to implement CorEx is <https://github.com/gregversteeg/corex_topic> CorEx are fine-tuned by anchor strength from 1 to 7 with step equal to 1. We simply report the best performance on average of 10 results. Code we used to implement Spherical Embeddings is <https://github.com/yumeng5/Spherical-Text-Embedding>. We set word dimension equals 100, window size equals 10, minimum word count equals 20 and number of threads to be run in parallel equals to 20.The pretrained embedding of all datasets is at the attached data file. Code we used to implement LDA is <https://scikit-learn.org/stable/modules/generated/sklearn.discriminant_analysis.LinearDiscriminantAnalysis.html> with solver = SVD and tol = 0.00001 Code we used to implement CatE is <https://github.com/yumeng5/WeSTClass> and <https://github.com/yumeng5/CatE> with number of terms per topic = 10 and text embeddings dimension = 50. § COHERENCE Topic coherence TC metric <cit.> is used to check if topic will include words that tend to co-occur in the same documents. TC <cit.> is the average point wise mutual information (NPMI) of two words drawn randomly from the same documents. 
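For reference, a minimal sketch of how the NPMI of a topic's top words can be computed from document co-occurrence counts is given below. It is an illustration under our own naming and the usual -1 convention for never co-occurring pairs, not the gensim CoherenceModel routine actually used in the experiments.

```python
import math
from itertools import combinations

def topic_npmi(top_words, doc_sets, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words.

    doc_sets: one set of unique tokens per reference document; probabilities
    are estimated as document frequencies."""
    n_docs = len(doc_sets)

    def p(words):
        return sum(1 for d in doc_sets if all(w in d for w in words)) / n_docs

    scores = []
    for w_i, w_j in combinations(top_words, 2):
        p_i, p_j, p_ij = p([w_i]), p([w_j]), p([w_i, w_j])
        if p_ij == 0.0:
            scores.append(-1.0)                 # never co-occurring pair
            continue
        pmi = math.log(p_ij / (p_i * p_j + eps))
        scores.append(pmi / (-math.log(p_ij) + eps))
    return sum(scores) / len(scores)
```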
We use both NPMI and C_v<cit.> by using top 10 words from each topic as suggested in <cit.>. § DIVERSITY METRIC diversity is implemented using scripts: <https://github.com/adjidieng/ETM/blob/master/utils.py> line 4. C_v is implemented using gensim.models.coherencemodel where coherence = 'C_v', NPMI is implemented using gensim.models.coherencemodel where coherence = 'c_npmi'. Top-NMI is implemented using metrics.normalized_mutual_info_score from sklearn. Top-Purity is implemented by definitions. km based is implemented by sklearn package kmeans. § DATASETS We store the datasets and related embeddings in the attached data file. Overall, we use 4 datasets from different domain to evaluate the performance of our 2 methods. (1) AgNews We use the same AG’s News dataset from <cit.>.Overall it has 4 classes and, 30000 documents per class. Classes categories include World, Sports, Business, and Sci/Tech. for evaluation; Keywords we use: group1: government,military,war; group2:basketball,football,athletes; group3:stocks,markets,industries; group4:computer,telescope,software (2) R8 is a subset of the Reuters 21578 dataset, which consists of 7674 documents from 8 different reviews groups. We use class acq, earn, and we group all other data in one class. Keywords we use: group1:['acquir', 'acquisit', 'stake'], group2:['avg', 'mth', 'earn'], group3:['japan', 'offici', 'export']] (3) 20News <cit.> is a collection of newsgroup posts. We only select 4 categories here. Compare to previous 2 datasets, 4 categories newsgroup is small so that we can check the performance of our methods on small datasets. Keywords we use: group1: faith,accept,world; group2:evidence,religion,belief; group3:algorithm,information,problem; group4:earth,solar,satellite (4) DBLP <cit.> dataset consists of bibliography data in computer science. DBLP selects a list of conferences from 4 research areas, database (SIGMOD, ICDE, VLDB, EDBT, PODS, ICDT, DASFAA, SSDBM, CIKM), data mining (KDD, ICDM, SDM, PKDD, PAKDD), artificial intelligent (IJCAI, AAAI, NIPS, ICML, ECML, ACML, IJCNN, UAI, ECAI,COLT, ACL, KR), and computer vision (CVPR, ICCV, ECCV, ACCV, MM, ICPR, ICIP, ICME). With a total 60,744 papers averaging 5.4 words in each title, DBLP tests the performance on small text corpus. keywords we have: group1: 'system', 'database','query'; group2: 'density', 'nonparametric', 'kernel'; group3: 'image', 'neural', 'recognition'; group4: 'partition', 'group', 'cluster' § ANALYSIS ON VMF AND GAUSSIAN In this section, we show empirically, vMF encourages topic separation naturally when comparing to Gaussian priors, especially in low dimensions. In the VAE training setting, we have the encoder network θ learning to transform document inputs x into distribution parameters. Without loss of generality, we denote learned parameters ϑ_i which is updated in the training process and corresponds to latent space η_i∼ q(ϑ_i). Theoretically, the best q should be able to approximate the posterior distribution p(η_i|x); however, our choice of parametric distribution family in practice will always associate with our intentions, whether to reduce training time or increase expressability. The choice of prior and posterior distribution can be viewed as a form of regularization on our decoder network, which is arbitrarily powerful. Intuitively, distributions with fewer parameters will introduce more regularization at the cost of less flexibility, analog to bias variance trade off. 
For p dimensional latent space, vMF is parameterized by p+1 variables while Gaussian is parameterized by 2*p variables assuming conditional independence or up to p(p+1)/2 + p variables assuming interdependence. In the extreme setting when labelled documents are less than O(p^2), our encoder and decoder may overfit, learning identity mapping. In the topic modelling space, a softmax transformation σ is applied to η to extract a probabilistic mixture of topics. In the independent Gaussian posterior case, we view affinity and confidence of the document to topic 1 is encoded in the first entry of μ and, σ^2 respectively. Ideally, we would want the encoder to offer variability in the sampling process to regularize, defined as difference in topic probability with initial training epochs; however, we will show through an example <ref>, that Gaussian may learn identity mapping by predicting variance to be near 0. In the figure below, we define misaligned document as those documents such argmax(ς) != argmax(η). This can be viewed as a measure of regularization. In the Gaussian case, our encoder network learns identity mapping within the first epoch. Out of 120000 documents, only 200 or so documents were able to explore different spaces. vMF allows 1/6th of documents to vary and stabilizes after KL divergence kicks in. In trained latent spaces representation, we clearly see vMF learning more nuanced and structured data when comparing to Gaussian as you can see in <ref> § HUMAN EVALUATION We use the ratings and word intrusion tasks as human evaluations of topic quality. We recruit crowdworkers using Amazon Mechanical Turk inside Amazon Sagemaker. We pay workers 0.024 per ratings task and 0.048 per intrusion tasks. We select enough crowdworkers per task so that p value for two sample t test between the best method and the second-best method is less than 0.05, resulting in a minimal of 18 crowd workers per topic for both tasks. Overall, we ask crowdsources to perform 1641 tasks and create 223 objects. It costs 77.89 for the whole labeling job(Internal price). The user interfaces are shown in Figure <ref> and Figure <ref>. We select AgNews as our dataset, we generate 10 topics each from 4 models. In the word intrusion task, we sample five of the ten topic words plus one intruder randomly sampled from the dataset; for the ratings task, we present the top ten words in order. We also document the confidence per task generated by Amazon Mechanical Turk tool and average time per task for each task as can be seen below. For time spent, crowdsources spend 100   115 seconds per intrusion task and 70   80 seconds per rating task. Crowdsources spent 102.7 seconds on intrusion task generated by vONT which is lower than all other tasks. This means that it is easier for users to find intrusion word for topics generated by vONT. The confidence per rating task is between 0.88 to 0.94, where vONT has highest confidence 0.938 while LDA has lowest confidence 0.886. The confidence per intrusion task is between 0.74 to 0.86, where vONT has highest confidence 0.858 while ETM has lowest confidence 0.747. This means the crowdsources are in general more confident in their answer to questions that is generated by vONT. § THEORETICAL ANALYSIS OF VMF CLUSTERABILITY In this section, we present theoretical intuition behind cluster inducing property of vMF distribution comparing to the normal distribution. In the normal VAE set up, the encoder network learns mean parameter μ_i and variance parameter σ_i for each document i. 
During the training process, we sample one data point, η_i from the learned distribution and pass into the softmax function to represent a probability distribution of topics. To introduce high clusterability, we need sampled η to have the ability to induce high confidence assignment to a topic under some form of regularization. In other words, with p number of topics, model can increase argmax(softmax(η)) ∈ (1/p, 1) without additional penalty. We prove that under normal distribution and in the two dimensional case, it is impossible to increase argmax(softmax(η)) without increase KL divergence loss with respect to the prior N(0,I). The KL divergence with p = 2 is KL_normal = -1/2[2 + logσ^2_1 + logσ^2_2 - μ^2_1 - μ^2_2 - σ^2_1 - σ^2_2] If we denote p_1 and p_2 to be expected distribution of topics, then p_1 = e^μ_1/e^μ_1 + e^μ_2 and p_2 = e^μ_2/e^μ_1 + e^μ_2. Without loss of generality, we assume that the document i is more aligned with the first topic, the model will learn and output μ_1 > μ_2. To minimize KL defined above, μ_1 and μ_2 will be centered be around 0 with μ_1 = -μ_2; however, in order to increase propensity of argmax(softmax(η)) or p_1, μ_1 and μ_2 have to increase and decrease respectively, forcing the KL divergence penalty to increase. For vMF distribution, the KL divergence is KL_vMF = κI_M/2(κ)/I_M/2-1(κ) + (M/2 - 1) logκ - M/2log (2π) - log I_M/2-1(κ) + M/2logπ + log 2 + logΓ(M/2) We note that the KL penalty under vMF case is not associated with μ, thus the model can increase the propensity without increasing regularization penalties. The KL divergence of vMF distribution also makes κ small, inducing the generated topic distribution to be localized. If a data point is far different from any direction parameter μ, the reconstruction loss will be high as κ is small. Thus, μ should be as representative as possible which makes it more clustered. § SPEED We run each model 10 times with different seeds to evaluate how long it takes to finetune the model by modifying 20 percent of keywords set. § RELATED WORKS Most of vMF based topic modeling methods does not incorporate variational autoencoders. Spherical Admixture Model (SAM) <cit.> is the first topic modeling method that uses vMF distribution to model corpus μ, topics and reconstructed documents. Kayhan <cit.> combines vMF distribution with word embeddings and uses vMF to regenerate the center of topics. It is based on Dirichlet Process to get the proportion of topics for a certain document. Hafsa <cit.> combines knowledge graph and word embeddings for spherical topic modeling. They use vMF distribution to model corpus μ, word embeddings and entity embeddings. To compare, we use modified vMF to generate topic distributions over documents and adapt spherical word embeddings instead of modeling it using vMF. Our method scales well, optimizes fast and offers highly stable performance. The choice of spherical word embeddings also alleviates the sparsity issue among words. vNVDM <cit.> is the only other method that combines vMF with variational autoencoders. <cit.> proposes using vMF(.,0) in place of Gaussian as p(Z), avoiding entanglement in the center. They also approximate the posterior q_ϕ(Z|X) = vMF(Z;μ,κ) where κ is fixed to avoid posterior collapse. The above approach does not work well for two reasons. Firstly, fixing κ causes KL divergence to be constant, which reduces the regularization effect and increases the variance of the encoder. 
Another concern with vMF distribution is its limited expressability when its sample is translated into a probability vector. Due to the unit constraint, softmax of any sample of vMF will not result in high probability on any topic even under strong direction μ. For example, when topic dimension M equals to 10, the highest topic proportion of a certain topic is 0.23. We also have a different decoder. NSTM <cit.> uses optimal transport to replace KL divergence. Row and column represent topics and words. Instead, our method represents row and column as topics and keywords with M matrix also defined differently. <cit.> uses optimal transport for topic embeddings, but with wasserstein distances as metric and jointly learns word embeddings. Instead, our algorithm keeps word embedding fixed during the training process to maintain stability. § ABLATION STUDY ON RADIUS Ablation study for radius parameter on AG-News where we set topics equal to 10: as we sweep temperature from 1 to 20, nmi increases and diversity decreases. Radius=10 has the best average rank over coherence based metrics in this temperature range. It has good diversity while has good coherence based metric. Temperature = 10 also has the best pruity score which make it useful for semi-supervised learning § ABLATION STUDY ON Κ Ablation study for Kappa on AG-News: we check kappa = 10, 50, 100, 500, 1000. Kappa=100 has highest purity and nmi, kappa = 50 has highest NPMI and C_v. Kappa = 500 has highest diversity. Our version of kappa has highest diversity, purity and NPMI compare to all fixed kappa. § DIVERSITY EVALUATION ON VONT vONTSS has high diversity by design. As you can see in the table, vONT achieves the best diversity on R8 and AgNews. vONT is the second best on 20News dataset. It also has the lowest standard deviation compare to other methods. § WHY NOT USE LANGUAGE MODELING BASED METHODS? Most language modeling methods are time-consuming to train and need a lot of transfer learning. They also need finetune in most of our use cases. Without fine-tuning, <cit.> makes it harder to be used in domain-specific datasets. We have tried <cit.> to compare, but both takes too much time to run. On AG-News, <cit.> takes 108 minutes to run, while <cit.> takes more than 2.5 hours. It also occurs in other models in footnote 2. vONTSS takes 8 minutes to run and 50 seconds to fine-tune. We also tried some methods which only leverage embeddings of language modeling such as On AgNews and we set topics equal to 20, For <cit.>, diversity 0.71, C_v 0.396, NPMI:-0.1089. For <cit.>, diversity 1, C_v 0.435, NPMI:-0.1073. Except diversity in <cit.>, all other metric perform worse than vONT. For semi-spervised cases, we take keywords as input. It is really different from other weakly supervised learning formulations, and how to incorporate keywords into a language model is not straight forward. We have tried few methods, but it does take a lot of time to run and change their code is not easy since their effectiveness do rely on the specific version of language model. Thus, we exclude language modeling methods in our paper. Also, in our use case, each topic model is designed for a specific user or use case. It will be very hard to be interactive or store the model on user's side when the number of parameters is too large for every single model. § LIMITATIONS AND RISKS vMF distribution has a unit constraint. This limits the variability of latent space, which in turn reduces the gains as the number of topics increase. 
We could try other distributions with richer variability, such as the bivariate von Mises distribution or the Kent distribution. Also, in weakly supervised settings, vONTSS may not classify as well as methods that leverage pretrained language models. In the future, we could combine the structure of this model with existing language models to further improve its classification performance. Lastly, in the semi-supervised setting, our formulation of vONTSS requires each topic to have at least one keyword, which limits its practical usage to some extent. To address this, we could preselect topics before performing the topic-keyword matching, or modify the optimal transport loss using Gumbel distributions.
http://arxiv.org/abs/2307.02507v1 (submitted 5 July 2023; primary category cs.LG; cross-listed in cs.AI)
STS-CCL: Spatial-Temporal Synchronous Contextual Contrastive Learning for Urban Traffic Forecasting
Lincan Li, Kaixiang Yang, Fengji Luo, Jichao Bi
Lincan Li (School of Control Science and Engineering, Zhejiang University, Hangzhou, China; lilincan@zju.edu.cn), Kaixiang Yang (School of Computer Science and Engineering, South China University of Technology, Guangzhou, China; yangkx@scut.edu.cn), Fengji Luo (The University of Sydney, Sydney, Australia; fengji.luo@sydney.edu.au), Jichao Bi (State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, China; jonny.bijichao@zju.edu.cn)
Efficiently capturing complex spatiotemporal representations from large-scale unlabeled traffic data remains a challenging task. Considering this dilemma, this work employs advanced contrastive learning and proposes a novel Spatial-Temporal Synchronous Contextual Contrastive Learning (STS-CCL) model. First, we elaborate basic and strong augmentation methods for spatiotemporal graph data, which not only perturb the data in terms of graph structure and temporal characteristics, but also employ a learning-based dynamic graph view generator for adaptive augmentation. Second, we introduce a Spatial-Temporal Synchronous Contrastive Module (STS-CM) to simultaneously capture spatial-temporal dependencies and realize graph-level contrasting. To further discriminate individual nodes during negative filtering, a Semantic Contextual Contrastive method is designed based on semantic features and spatial heterogeneity, achieving node-level contrastive learning along with negative filtering. Finally, we present a hard mutual-view contrastive training scheme and extend the classic contrastive loss to an integrated objective function, yielding better performance. Extensive experiments and evaluations demonstrate that building a predictor upon the STS-CCL contrastive learning model achieves superior performance over existing traffic forecasting benchmarks. The proposed STS-CCL is highly suitable for large datasets with only a small amount of labeled data, as well as for other spatiotemporal tasks with data scarcity issues.
CCS Concepts: Computing methodologies; Networks (Network properties); Information systems (Data management systems)
STS-CCL: Spatial-Temporal Synchronous Contextual Contrastive Learning for Urban Traffic Forecasting
§ INTRODUCTION To date, tremendous amounts of spatiotemporal traffic data are acquired on a daily basis from vehicle GPS trajectory records, smartphone location-based services and multimodal urban sensors (e.g. speed sensors and traffic cameras). However, these massive data are usually disordered, with many unidentified spatiotemporal patterns that require careful manual labeling/annotation by experts. Thus, in real-world application scenarios, there is always a lack of labeled traffic data <cit.>. Among the uses of the collected traffic big data, one significant research direction is spatiotemporal traffic forecasting. To achieve accurate and efficient prediction, researchers have proposed a number of advanced models.
From the perspective of spatial dependency modeling, existing works commonly adopt graph convolutional networks (GCN) <cit.>, convolutional neural networks (CNN) <cit.> and their variants to capture the spatial correlations of a traffic network. From the perspective of temporal dependency modeling, current studies usually use recurrent neural networks (such as GRU and LSTM) <cit.>, Seq2Seq model architectures <cit.> and attention mechanisms <cit.> to excavate the temporal correlations. Specifically, Li et al. <cit.> presented the DCRNN model, which employs a novel diffusion convolution module and a Seq2Seq architecture to model the spatial-temporal traffic correlations. Guo et al. <cit.> proposed ASTGCN, which adopts the attention mechanism as its kernel and integrates GCN with the designed spatial attention (SA). STSGCN <cit.> introduced a spatial-temporal simultaneous mechanism and multiple stacked GCN layers to capture the local spatiotemporal dependencies. SCINet <cit.> proposed a sample convolution method as an optimized version of vanilla CNN and integrated the interactive learning idea into the model construction. Choi et al. presented STG-NCDE <cit.>, which utilizes controlled differential equations to simulate the dynamic evolutions of spatiotemporal traffic patterns. Although tremendous efforts have been made to design sophisticated model architectures in order to fully capture the complex spatial-temporal correlations, we identify that almost all of the existing models are based on supervised learning, which usually requires exhaustive manual labeling as a prerequisite yet offers limited representation capability. The current supervised benchmark models thus ignore the realities of practical applications, which significantly narrows their application scope. Meanwhile, we notice that many renowned scholars have emphasized that the current limitations of deep learning are actually the limitations of supervised learning, and that self-supervised learning is the future direction of Artificial Intelligence <cit.>. Recently, self-supervised methods have shown great capability in representation learning tasks including natural language processing (NLP) <cit.>, image/video processing <cit.>, and recommendation systems <cit.>. The core idea of self-supervised learning is to derive auxiliary supervised signals from the input dataset itself, which makes it possible to explore the hidden distribution and patterns of the dataset. Among the various self-supervised methods, contrastive learning-based methods have consistently demonstrated superior performance, such as MoCo <cit.> in object detection/image segmentation, BERT-CL <cit.> for retrieval-based dialogues, and GraphCL <cit.> for graph data classification. While some studies have extended contrastive learning to graph-structured data <cit.>, there are still three major problems worthy of further investigation. First, existing spatiotemporal contrastive learning models only adopt pre-defined static Graph Data Augmentation (GDA) approaches, which heavily limits the performance of the subsequent contrastive representation learning. Second, previous research works <cit.> have proved the necessity and effectiveness of spatial-temporal synchronous dependency modeling. Nevertheless, none of the present graph contrastive models consider building a simultaneous spatial-temporal dependency modeling structure for urban traffic forecasting. 
Third, we identify that none of the existing contrastive learning models realize both Graph-level and Node-level contrasting for a given spatiotemporal prediction task. To address the aforementioned problems, we propose a self-supervised model named Spatial-Temporal Synchronous Contextual Contrastive Learning (STS-CCL). The main contributions of this work are summarized as follows: * We present a novel spatiotemporal contrastive learning model named STS-CCL for spatiotemporal graph representation learning and downstream prediction tasks. To begin with, we elaborately design two different yet correlated graph data augmentation methods to enhance contrastive learning performance. Furthermore, we present a hard mutual-view contrastive training scheme and formulate an integrated objective function to assist contrastive model training. * A spatial-temporal synchronous contrastive module (STS-CM) is proposed to realize spatial-temporal synchronous traffic dependency modeling and Graph-level contrasting, which addresses the inconsistency caused by separate spatial and temporal modeling in current contrastive models. * Aside from Graph-level contrasting, a comprehensive semantic contextual contrastive module (SC-CM) is proposed to achieve Node-level contrasting. SC-CM further captures hard-to-distinguish patterns along with negative filtering by using a non-linear projection head and a semantic contrastive loss. * We conduct extensive experiments and evaluations on two real-world urban traffic datasets from the Hangzhou Metro System crowd-flow and the Seattle freeway network traffic speed. Empirical study results demonstrate that STS-CCL can effectively capture the essential and dynamic spatiotemporal representations and consistently surpass other state-of-the-art methods. § BASIC DEFINITIONS FOR SPATIAL-TEMPORAL TRAFFIC FORECASTING Definition 1: Traffic Network g⋆. Most urban traffic scenarios can be represented as weighted undirected graphs g⋆=(V⋆,E⋆,A⋆). In the graph representation, V⋆ denotes the collection of vertices containing all the nodes in a given traffic network, with |V⋆|=N. E⋆ denotes the collection of edges, representing the correlations between nodes. A⋆ is the graph adjacency matrix of g⋆ and A⋆={a_ij}^N × N. Each matrix element a_ij represents the computed closeness between nodes v_i and v_j. Definition 2: Graph Adjacency Matrix A⋆. As the crucial component of a traffic graph representation, the graph adjacency matrix has received much research attention. In earlier studies, researchers usually adopt a pre-defined static A⋆, such as calculating the Euclidean distance or POI similarity between nodes v_i and v_j as the matrix element a_ij. However, such a time-invariant graph adjacency matrix fails to represent the dynamic changes of a transportation network in real-world scenarios. Some later studies propose to use dynamic graph adjacency matrices and achieve superior performance <cit.>. Definition 3: Spatiotemporal Traffic Forecasting based on Graph Structure Data. With the historical traffic data sequence X_τ-p+1:τ=[X_τ-p+1,X_τ-p+2,...,X_τ] and the graph representation g⋆=(V⋆,E⋆,A⋆), the spatiotemporal traffic forecasting task can be formulated as follows: [X_τ-p+1:τ; g⋆] f→X̂_τ+1:τ+k where p denotes the total length of the historical traffic series, k denotes the prediction horizon, and f is the learned mapping function of a neural network model. § METHODOLOGY This section introduces the technical details of STS-CCL. As shown in Fig. 
<ref>, STS-CCL adopts a joint-learning scheme, which consists of two branches (i.e., (1) a traffic prediction branch and (2) a contrastive learning branch), and the two branches are jointly learned during the model training procedure. To begin with, we carry out data augmentation to generate the basic/strong augmentation views for the raw input traffic data. Next, the two views of the data are separately fed into the STS-CM (denoted as "Transformer Encoder" in Fig. <ref>) to achieve spatial-temporal synchronous contrastive learning with a designed hard mutual-view prediction task. Afterwards, the learned spatiotemporal traffic representations are used for the two branches of tasks. The C_τ^b from the basic augmentation view is sent to the Transformer Decoder to generate future traffic predictions, while the C_τ^s from the strong augmentation view is sent to SC-CM to realize semantic contextual contrasting. Because of limited space, the temporal positional embedding and spatial positional embedding will be discussed in Appendix <ref>. Finally, we introduce the model training scheme and formulate the overall objective function. §.§ Spatiotemporal Graph Data Augmentation Data augmentation is an indispensable component and also the first step of contrastive learning models. Some recent works have investigated time-series data augmentation <cit.> and graph data augmentation <cit.>, respectively, but a dedicated spatiotemporal graph data augmentation methodology is still lacking. In STS-CCL, we first adopt two recently developed graph data augmentations, namely Edge Perturbation (an optimized version for spatiotemporal tasks) and Attribute Masking, to augment the data in terms of graph structure. As shown in Fig. <ref>, we then propose a new Temporal Scale Fusion method to enhance the data in terms of temporal characteristics. Additionally, we employ a learning-based graph view generator to adaptively create more underlying data views. By utilizing the above augmentation methods, we can create Basic and Strong Augmentation views considering both spatiotemporal characteristics and dynamic graph-data features. §.§.§ Basic Augmentation. For Basic Augmentation, we consider using (1) Edge masking, (2) Attribute masking, and (3) Temporal scale fusion as the augmentation techniques. Edge masking is an optimized version of edge perturbation for spatiotemporal tasks, which suggests discarding a specific ratio of edges to modify graph structures and is implemented by masking a specific ratio of adjacency matrix elements as 0 <cit.>. Attribute masking follows <cit.> and is directly implemented by randomly masking a specific ratio of the input spatiotemporal data as 0. Next, we introduce the proposed Temporal Scale Fusion method. The multi-scale temporal nature (i.e. historical recent/historical period/historical trend) of spatiotemporal traffic data has been demonstrated in previous research. However, it has been less explored in existing contrastive learning models. Effectively integrating traffic data from the three-granularity temporal scales can excavate more diverse spatiotemporal patterns. With this in mind, we obtain the correlated data from the historical recent/period/trend scales and fuse them together. Fig. <ref> illustrates how the temporal scale fusion method operates. Let X_τ-S:τ be the traffic data sequence from the recent history, and let T_d and T_w denote the time gaps of the historical period and trend, respectively. 
X^(τ-S-T_d):(τ-T_d) denotes the traffic data from the same time window of the previous day, and X^(τ-S-T_w):(τ-T_w) denotes the traffic data from the same time window of the previous week. The Temporal Scale Fusion method is formulated as in Eq. <ref>: X̃^(τ-S):τ =(1-α-β)X^(τ-S):τ+α X^(τ-S-T_d):(τ-T_d) +β X^(τ-S-T_w):(τ-T_w) where X̃^(τ-S):τ denotes the multi-scale temporally fused traffic data, and α and β are hyper-parameters generated from the distribution U(δ_ts,1) and then divided by two. δ_ts is a tunable factor, and this procedure ensures that the training data of every epoch have their own unique α and β (i.e., the method is input-specific). §.§.§ Strong Augmentation. We propose to use a learning-based graph view generator for dynamic and effective augmentation, as illustrated in Fig. <ref>. For each node, the graph view generator first obtains the embedded node feature using GNN layers; the GNN follows GraphSAGE <cit.>. The node embeddings are then transformed into probability distributions over the candidate graph data augmentation techniques. There are three augmentation choices for each node: (1) Edge masking, (2) Attribute masking, and (3) Remain unchanged. Next, we apply GumbelSoftMax to the probability distributions to generate the final choice for each node. Finally, the selected augmentation technique is applied to each node. Let h_i^(l) and e_i^(l) be the hidden state and node embedding of the i-th node at the l-th GNN layer, respectively. X_i denotes the node features and F_i denotes the graph augmentation choice of node i. The learnable graph view generation procedure can be formulated as: e_i^(l) =AGGREGATE^(l)([h_j^(l-1): j ∈ N(i)]) h_i^(l) =COMBINE^(l)(h_i^(l-1),e_i^(l)) F_i =GumbelSoftMax(e_i^(l)) X̃_i =Multiply(X_i,F_i) where the first two equations describe the GNN node embedding process, and e_i^(l) denotes the probability distribution over the candidate augmentation techniques. F_i is the one-hot vector generated by GumbelSoftMax, which indicates the final choice of augmentation. Multiply(X_i,F_i) denotes the node-level augmentation operation, and X̃_i is the final augmented node features. After the augmentation by the learnable graph view generator, the next step is to apply the temporal scale fusion method for further processing, which finally results in the strong augmentation view data. §.§ Spatial-Temporal Synchronous Contrastive Learning This subsection develops the proposed spatial-temporal synchronous contrastive module (STS-CM). STS-CM is constructed upon the Transformer architecture, and we stack N identical STS-CM modules in the spatial-temporal synchronous contrasting stage. STS-CM is denoted as Transformer Encoder in Fig. <ref>, and it basically consists of two components: the ProbSparse self-attention mechanism <cit.> and the Dynamic Interaction GCN (DI-GCN). In each STS-CM module, the ProbSparse self-attention captures the dynamic traffic dependencies along the temporal dimension, while our designed DI-GCN is responsible for capturing the dynamic interactive dependencies along the spatial dimension. Therefore, spatial-temporal synchronous traffic dependency modeling is achieved. Furthermore, we let the model carry out a hard mutual-view forecasting objective in the synchronous contrastive learning stage after the N × STS-CM feature modeling. 
The hard mutual-view forecasting task uses the traffic features extracted from the Basic Augmentation view to predict the future of the Strong Augmentation view, and vice versa. This training scheme boosts the contrastive model performance on top of the spatial-temporal synchronous modeling. §.§.§ ProbSparse self-attention. The ProbSparse self-attention used in STS-CM follows Informer <cit.>; it decreases the time and space complexity from O(L^2) (vanilla self-attention) to O(L ln L), yielding improved efficiency and model performance. §.§.§ Dynamic Interaction GCN The idea behind DI-GCN stems from the observation that most existing graph adjacency matrix construction methods are pre-defined and time-invariant. We propose the Dynamic Interaction Graph Convolutional Network (DI-GCN) to adaptively adjust the graph adjacency matrix by dynamically interacting with traffic data from different time intervals. The right part of Fig. <ref> illustrates how DI-GCN adaptively generates the graph adjacency matrix A_τ^' (i.e. the "Final Fused A_τ^'") at time step τ. It can be seen that the generation of the final fused A_τ^' includes two major stages: (1) Dynamic Graph Generator, and (2) Element Fusion. As shown in Fig. <ref>, the Dynamic Graph Generator is composed of a diffusion graph convolution (DGC) layer, a multi-layer perceptron (MLP), and GumbelSoftmax. In the l-th Transformer encoder, the learned traffic representations after ProbSparse attention, Z^(l)=(Z_τ-p+1^(l),Z_τ-p+2^(l),...,Z_τ^(l)), and the pre-defined graph adjacency matrix are fed into the diffusion graph convolution to extract features, and then into the MLP to obtain an intermediate graph adjacency matrix Ã_τ^dyn, which can be formulated as: Ã_τ^dyn=Softmax(MLP(DGC(Z^(l),A))) where MLP is the multi-layer perceptron and DGC is the diffusion graph convolution. Since A_τ^dyn needs to be sampled during model training and direct sampling of discrete choices is not differentiable, we adopt GumbelSoftmax to re-parameterize the intermediate graph adjacency matrix and obtain the final dynamic graph adjacency matrix A_τ^dyn: A_τ^dyn =GumbelSoftmax(Ã_τ^dyn) =Softmax(log(Ã_τ^dyn)-log(-log(μ))/Ω) where μ∼ Gumbel(0,1) is a random variable and Ω is a temperature parameter of the Softmax function; we set Ω=0.5 in this work. The generated A_τ^dyn can effectively represent the dynamically changing correlations between nodes in a traffic network. Next, we integrate the generated dynamic A_τ^dyn with the pre-defined A using an element-wise dot product to obtain the final fused graph adjacency matrix A_τ^': A_τ^'=A ⊙ A_τ^dyn. Specifically, A is the static connectivity adjacency matrix (i.e. A={a_ij}^N × N, a_ij∈{0,1}). A_τ^' replaces the original A in the DI-GCN operation, as formulated below: X_t^(l) =DI-GCN(X_t^(l-1))=σ(A^'X_t^(l-1)W^(l)) =σ((A ⊙ A_τ^dyn)X_t^(l-1)W^(l)) where X_t^(l-1) denotes the input traffic representations of the DI-GCN module, W^(l) denotes its weight parameters, and X_t^(l) denotes its output in the l-th Transformer encoder. §.§.§ Hard mutual-view training scheme. The spatial-temporal synchronous contrastive learning stage adopts a hard mutual-view prediction task, which employs the learned representations from one augmentation view to predict the future data of the other augmentation view. Let Z be the input of the Transformer encoder after the data augmentation and positional embedding procedures. 
The Transformer encoder summarizes all the historical representations Z_τ-p+1:τ to obtain a spatiotemporal synchronous contrastive vector C_τ, C_τ=f_ENC(Z_τ-p+1:τ). The spatiotemporal synchronous contrastive vector C_τ is then used to predict the future input representations Z_τ+1:τ+k. In the STS-CCL model, the basic augmentation view generates C_τ^b and the strong augmentation view generates C_τ^s. Then, we design a hard mutual-view prediction task by employing the learned contrastive vector C_τ^b to predict the future input representations of the strong augmentation view, Z_τ+1:τ+k, and vice versa. The spatial-temporal synchronous contrastive loss L_sts^*, * ∈{b,s} here is defined by maximizing the dot product between a predicted representation and the corresponding ground-truth representation, while minimizing the dot product with the negative samples N_τ,k. Thus, the spatial-temporal synchronous contrastive losses for the two views are formulated as follows: L_sts^s =-1/K∑_k=1^Klogexp ((W_k (C_τ^s))^TZ_τ+k^b)/∑_n ∈ N_τ,kexp ((W_k (C_τ^s))^TZ_n^b) L_sts^b =-1/K∑_k=1^K logexp ((W_k (C_τ^b))^TZ_τ+k^s)/∑_n ∈ N_τ,kexp((W_k(C_τ^b))^TZ_n^s) where W_k is a linear transformation that reshapes C_τ^* to the same dimension as Z, and a log-bilinear form is adopted in the spatial-temporal synchronous contrastive loss formulation. The final basic/strong-view contrastive vectors from the N-th Transformer encoder are then sent to the traffic prediction task and the stage-two contrastive learning (semantic contextual contrasting), respectively. §.§ Semantic Contextual Contrastive Learning To the best of our knowledge, existing contrastive models simply treat all samples other than node i itself as negative samples. However, the nodes in a traffic network may have similar spatial heterogeneity or similar semantic contexts. In this part, we treat samples with similar spatial heterogeneity or similar semantic contexts as positives and filter them out of the denominator of the semantic contextual contrastive loss. Given a batch of N nodes in model training, the representation H_i^b learned by the non-linear projection head and its corresponding representation H_i^s from the other view, a general form of the semantic contextual contrastive loss can be defined as in Eq. <ref>: L_sc=-1/N∑_i=1^N logexp(sim(H_i^b,H_i^s)/Δ)/∑_j ∈ℕ_iexp(sim(H_i^b,H_j^s)/Δ) where sim(a,b)=a^T b/||a|| ||b|| denotes the dot product between l_2 normalized a and b (i.e. cosine similarity), and Δ is a temperature parameter. ℕ_i denotes the determined acceptable negative sample collection, which is introduced in detail in the following paragraphs. Specifically, for spatial heterogeneity similarity filtering, we use the well-known node connectivity adjacency matrix A_con <cit.> to measure the spatial similarity. A_con=(a_ij)^N × N, a_ij∈{0,1}, is defined as follows: if nodes i and j (e.g. sensors, traffic stations, road segments, etc.) are geographical neighbors, the corresponding element a_ij is set to 1, and 0 otherwise. Therefore, we obtain the spatially similar samples for each node within a batch, and exclude these samples from the negative set. 
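To make the negative-filtering step concrete, the following is a minimal PyTorch-style sketch of the semantic contextual loss with a precomputed filter mask. The tensor names (h_basic, h_strong, neg_mask) and the mask-construction comment are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def semantic_contextual_loss(h_basic, h_strong, neg_mask, delta=0.1):
    """h_basic, h_strong: (N, d) projected node representations from the two views.
    neg_mask: (N, N) boolean, True where column j is an acceptable negative for row i.
    delta: temperature parameter."""
    h_b = F.normalize(h_basic, dim=-1)
    h_s = F.normalize(h_strong, dim=-1)
    sim = h_b @ h_s.T / delta                    # (N, N) cosine similarities / temperature
    pos = torch.diag(sim)                        # sim(H_i^b, H_i^s) / delta
    neg_exp = torch.exp(sim) * neg_mask.float()  # keep only the filtered negatives
    denom = neg_exp.sum(dim=1)
    return -(pos - torch.log(denom)).mean()

# The filter mask could be assembled from the connectivity matrix A_con and a
# Top-u semantic-similarity index (both assumed precomputed), e.g.:
# neg_mask = ~(a_con.bool() | topu_semantic.bool())
# neg_mask.fill_diagonal_(False)  # the positive pair itself is not a negative
```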
In terms of semantic context similarity filtering, we first assign a semantic vector M_i,τ to each node, which consists of POI distribution information (including seven categories of POIs: school, hospital, commercial center, shopping mall, stadium, transportation centre, scenic spot) and refined timestamp information (including (a) day-of-week, (b) is-weekend, (c) is-holiday). All these semantic features are encoded using one-hot encoding. Next, we introduce how to calculate the semantic context similarity between nodes. In the first step, we calculate the similarity score between two nodes using the Jensen-Shannon (JS) divergence <cit.>, which can be formulated as: [ Sim(M_i,τ,M_j,τ)=1-JS(M_i,τ,M_j,τ), ; JS(M_i,τ,M_j,τ)=1/2∑_1≤ q≤ Q( M_i,τ(q)log2M_i,τ(q)/(M_i,τ(q)+M_j,τ(q))+M_j,τ(q) log2M_j,τ(q)/(M_i,τ(q)+M_j,τ(q)) ) ] where M_i,τ, M_j,τ∈ℝ^Q represent the semantic vectors of nodes i and j, whose entries sum to 1, and M_i,τ(q) denotes the q-th dimension of the semantic vector M_i,τ. In the second step, we select the Top-u most semantically similar nodes and exclude them from the negative samples. From the above spatial heterogeneity filtering and semantic context similarity filtering, we finally obtain the acceptable negative sample collection ℕ_i, which is applied to our semantic contextual contrastive loss as in Eq. <ref>. §.§ Loss Function and Model Training Method We employ a joint-learning scheme for the STS-CCL model, which means the model conducts the prediction task and the contrastive learning task simultaneously. The basic augmentation view is not only used for stage-one contrastive learning (i.e. spatial-temporal synchronous contrasting), but also for the subsequent traffic prediction task. The strong augmentation view goes through stage-one contrastive learning together with the basic augmentation view, and then the two generated representations are fed into stage-two contrastive learning, which finalizes the contrastive learning process. In our case, the Transformer encoder E_θ(·) is jointly trained with the Transformer decoder D_ω(·) and the non-linear projection head g_ϕ(·) used in semantic contextual contrasting. The contrastive loss serves as an additional self-supervised signal in the overall objective function and improves spatiotemporal traffic forecasting performance. Since STS-CCL adopts the joint-learning scheme, its overall objective function consists of both the traffic prediction loss L_pred and the contrastive learning loss L_cl. We determine the overall objective function as: L_STS-CCL=L_pred+ϵ L_cl, where ϵ is a weight parameter that balances the importance of the two parts. As shown in Fig. <ref>, L_pred is calculated between the future traffic prediction generated by the Transformer Decoder and the ground-truth future traffic data. We adopt the Mean Squared Error (MSE) as the traffic prediction loss, which can be formulated as: L_pred=1/M∑_i=1^M (X_i-X̂_i)^2, where M is the total number of data samples. The contrastive learning loss L_cl includes the spatial-temporal synchronous contrasting losses L_sts^b, L_sts^s and the semantic contextual contrasting loss L_sc, and can thus be formulated as: L_cl=L_sts^b+L_sts^s+L_sc. The STS-CCL model training algorithm is provided in Appendix <ref>. § EXPERIMENTAL EVALUATIONS Datasets. We adopt two popular open-source datasets in spatial-temporal traffic data mining, namely Hangzhou-Metro[<https://tianchi.aliyun.com/competition/entrance/231708/information>] and Seattle-speed[<https://github.com/zhiyongc/Seattle-Loop-Data>]. 
Hangzhou-Metro is a crowd-flow mobility dataset collected from the metro system of Hangzhou city over one month in 2019. The raw dataset contains over 70 million passenger records at the second level; we process the raw data into the inflow/outflow of the 81 subway stations at 10-minute intervals. Seattle-speed is a traffic network speed dataset collected by distributed loop detectors on the freeways in the Seattle area in 2015. Table <ref> summarizes the statistics of the two datasets. Baseline Methods. The proposed STS-CCL is compared with: i) Recently proposed deep learning-based models that demonstrated top performance: DCRNN <cit.>, STSGCN <cit.>, ASTGNN <cit.>, AGCRN <cit.>, RGSL <cit.>, STG-NCDE <cit.>. ii) State-of-the-art contrastive learning models in traffic forecasting: STGCL <cit.> and SPGCL <cit.>. iii) ST-GSP <cit.>: a self-supervised learning model for urban flow prediction. We adopt root mean squared error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) as the evaluation metrics. The implementation details are provided in Appendix <ref> because of limited space. §.§ Results Comparison Table <ref> shows the crowd-flow/traffic speed forecasting results. We compare STS-CCL with the aforementioned three kinds of baselines. Not surprisingly, the deep learning-based models show inferior performance compared to the contrastive learning-based models (i.e. STGCL, SPGCL, and STS-CCL), demonstrating the powerful representation learning ability of contrastive models. The self-supervised model ST-GSP outperforms several advanced deep learning models (i.e. DCRNN, STSGCN, AGCRN, RGSL). STG-NCDE and ASTGNN achieve excellent performance among the deep learning models. Among the contrastive learning models, STGCL offers either node-level or graph-level contrastive learning, whereas STS-CCL employs two-stage contrasting to achieve both node-level and graph-level contrastive learning. As for dynamic graph construction, SPGCL directly uses contrastive learning to build a dynamic adjacency matrix for each time step, rather than using contrastive methods to capture invariant traffic representations, which may lose sight of the original intention of contrastive learning. Furthermore, the model architecture of SPGCL is less elaborately designed, which leads to compromised performance. In addition, STS-CCL consistently shows superior performance over all the baseline methods. §.§ Ablation Study To evaluate the effectiveness of each model component as well as the data augmentation, we proceed to compare STS-CCL with the following designed variants. Specifically, "STS-CM only" is trained without the hard mutual-view prediction task and the SC-CM module. "STS-CM+MVP" is trained with the hard mutual-view prediction task added but without the SC-CM module. The variant "STS-CM+MVP+SC-CM" denotes the proposed full STS-CCL model. "STS-CCL w/o negative filtering" is trained without the proposed negative filtering method in the SC-CM module, simply treating all other data samples as negatives. In "STS-CCL w/o DI-GCN", we replace the DI-GCN layer with a static GCN layer, which uses the pre-defined connectivity adjacency matrix A=(a_ij)^N × N, a_ij∈{0,1}. Furthermore, we study the effectiveness of the proposed basic/strong data augmentation methods. In "STS-CCL BA-only", the two views of data are both generated using the basic augmentation method, whereas in "STS-CCL SA-only", we generate the two views of data using only the strong augmentation method. 
Finally, we present the ablation study results in Table <ref>. It shows that removing any key component of STS-CCL increases the RMSE/MAE/MAPE results, which indicates degraded model performance. Interestingly, we find that a specific model component may have a different degree of impact on different tasks, as can also be seen from Table <ref>. §.§ Robustness Study and Transferability Sensitivity to ϵ in Objective Function. We tune ϵ in the overall objective function of STS-CCL (see Section 3.4) to evaluate the model robustness. Fig. <ref>(a) and <ref>(b) visualize the MAE results under different ϵ settings. The optimal ϵ values are 0.70 and 0.40 for Hangzhou-Metro and Seattle-speed, respectively. Furthermore, Fig. <ref>(c) shows the model training curves under two randomly selected ϵ values and the optimal value ϵ=0.40 on the Seattle-speed dataset. Regardless of the ϵ value, STS-CCL always converges within 40 epochs and then remains stable in the following epochs. Also, selecting the best ϵ clearly assists the model training process. Investigating u and A_con in Negative Filtering. In this part, we first study how to set the Top-u in semantic context similarity filtering. As shown in Fig. <ref>(a)-Fig. <ref>(b), we set u=4,5,6,8,10,12,15 and visualize the MAE performance under different Top-u. Second, the adopted A_con in spatial heterogeneity similarity filtering is compared with another widely used distance-based adjacency matrix A_dist <cit.>. We employ A_con and A_dist to carry out spatial heterogeneity similarity filtering, respectively, and present the model performance comparison in Table <ref>. Effects of Edge/Attribute Masking Rate in Graph Data Augmentation. Here, we carry out spatiotemporal traffic prediction under different Edge/Attribute masking rates. The performance results are illustrated in Fig. <ref>(a)-Fig. <ref>(d), with the best Edge/Attribute masking rates highlighted. We make the following observations: First, regardless of the Edge/Attribute masking rates, our data augmentation methods always yield better performance than the SOTA contrastive learning traffic prediction methods. Second, although different datasets have their unique semantics and traffic features, the best-performing Edge/Attribute masking rates are quite stable, which verifies the transferability of the STS-CCL model across different spatiotemporal datasets. Third, we find that Attribute masking has a more significant influence on model performance; the reason is that masking a certain proportion of the input data as 0 resembles a traffic-data missing scenario, which directly impacts model training and degrades model performance. § CONCLUSIONS This paper presents a general contrastive learning-based spatiotemporal forecasting model called STS-CCL, which for the first time realizes both node-level and graph-level contrasting, demonstrating strong representation learning ability over existing state-of-the-art methods. We first propose two general and efficient data augmentation methods for spatiotemporal contrastive learning. Then the designed STS-CM module simultaneously captures the essential spatial-temporal dependencies while achieving robust contrastive learning. We further propose a novel semantic contextual contrastive method for node-level contrasting, which overcomes the shortcoming of standard negative filtering in existing contrastive models. 
Extensive evaluations on two large-scale urban traffic datasets demonstrate the superiority and effectiveness of STS-CCL over spatiotemporal benchmarks. We also provide comprehensive analysis and discussion of the experimental results. This work is supported in part by National Natural Science Foundation of China under Grant No. xxxxxxxx. § SPATIAL AND TEMPORAL POSITIONAL EMBEDDING METHOD FOR TRANSFORMER ENCODER In our spatial-temporal synchronous contrastive learning stage, the module is built upon the Transformer architecture, which employs the self-attention mechanism as its key component. Previous works <cit.> have suggested that the attention mechanism is agnostic to the sequential order; thus it is necessary to add proper positional embeddings as additional position information for the Transformer. In our case, spatiotemporal traffic data has both temporal positional information (the sequential traffic data of each node comes from different historical time steps) and spatial positional information (at a given time step, different nodes in a traffic network have a specific spatial distribution), and both kinds of positional information are significant for accurate traffic forecasting. Thus, we explicitly introduce a temporal positional embedding PE_T and a spatial positional embedding PE_S into the STS-CCL model, as described in the following. Temporal Positional Embedding Method We adopt the widely used fixed positional embedding method here. For input traffic data at time step τ and the j-th feature dimension, the corresponding temporal positional embedding can be formulated as: [ PE_T(τ,2j)=sin[τ/(2L_x)^2j/d_model], ; PE_T(τ,2j+1)=cos[τ/(2L_x)^2j/d_model] ] where 1≤ j ≤ d_model, d_model is the representation feature dimension, and L_x is the total temporal length of the historical traffic data. Spatial Positional Embedding Method The spatial positional embedding needs not only to represent each node's spatial position but also to reflect the traffic graph structure. With this in mind, we first generate a geographic location embedding Emb_g using a sinusoidal transformation and a fully-connected network, and then a GCN layer is used to obtain the final spatial positional embedding PE_S. Given a traffic network consisting of N nodes, a geographic location matrix G is created to represent the spatial coordinate information. Next, the sinusoidal transform ST(G,γ_min,γ_max) is applied, where γ_min,γ_max denote the minimum and maximum grid scale, respectively. Following that, a fully-connected layer FC_θ(·) is employed to reshape it into the desired size, yielding Emb_g. The above procedure can be formulated as follows: Emb_g=FC_θ(ST(G,γ_min,γ_max)) As suggested by <cit.>, GCN can be viewed as a form of Laplacian smoothing, which enhances the correlation representation between a node and its neighbors. Thus, we use a GCN layer as the last step to generate the final spatial positional embedding PE_S=GCN(Emb_g). § IMPLEMENTATION DETAILS We split all the datasets into 60%, 20%, 20% for training, validation and testing. All of our experiments are repeated five times with different seeds, and the mean and standard deviation values are reported. For model training, the batch size is set to 64 and the number of epochs to 100. The Adam optimizer is adopted, with a learning rate of 1e^-4 and weight decay of 1e^-4. For the data augmentation techniques, we evaluate the best ratio of Edge/Attribute masking in Fig. <ref> and use the optimal settings for STS-CCL. 
In experiments, Attribute masking is directly implemented by masking the input traffic data, as suggested by <cit.>. The proposed temporal scale fusion augmentation is input-specific, where the hyper-parameters α,β are regenerated in every training epoch. In the overall objective function L_STS-CCL, the parameter ϵ is tuned within the range [0.1,1.0] with a step of 0.1. We find the best ϵ is 0.70 for Hangzhou-Metro and 0.40 for Seattle-speed. In the Transformer, the numbers of Encoder and Decoder layers are set to N=4 and N'=4. For semantic contextual contrastive learning, the temperature parameter Δ is set to 0.1. The proposed STS-CCL and all other baseline methods are implemented with the PyTorch 1.9 framework on one NVIDIA RTX 3090 GPU with an Intel Xeon(R) Platinum 8255C 12-core CPU for neural computing acceleration. § STS-CCL MODEL TRAINING ALGORITHM § CONTRASTIVE LEARNING FOR GRAPH STRUCTURE DATA The underlying idea of contrastive learning is to achieve mutual recognition among different representations of the same data obtained through appropriate transformations. More recently, the intense research interest in contrastive learning has extended from CV/NLP to spatial-temporal graph data. The purpose of spatial-temporal graph contrastive learning can be categorized into Node-level and Graph-level contrasting based on the specific downstream task. For instance, graph classification is a typical Graph-level contrasting task, which is also the major research object of recently proposed contrastive models. Nevertheless, spatiotemporal contrastive learning for the traffic forecasting task has not yet been deeply explored. The contrastive training procedure is briefly described as follows. In training batch l, a total of N node/graph individuals are processed by the contrastive neural network, resulting in 2N samples, where N samples come from the Basic Augmentation view and another N samples from the Strong Augmentation view. Let s_i^' and s_i^” represent the embedded features of node/graph i from the basic and strong augmentation view, respectively. The basic graph contrastive loss equation is shown in Eq. <ref>: L_gc=1/N∑_i=1^N -logexp(sim(s_i^',s_i^”)/σ)/∑_j=1,i ≠ j^Nexp(sim(s_i^',s_j^”)/σ) where sim(·,·) is the cosine similarity function and σ is the temperature parameter. In the graph contrastive loss L_gc, we can observe that there are (N-1) negative pairs computed in the denominator. After pre-training, the learned spatiotemporal representations can be used to carry out specific tasks.
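For reference, a minimal PyTorch-style sketch of the basic graph contrastive loss above is given below; the function and variable names are illustrative assumptions, and the denominator keeps only the (N-1) cross-view negatives, following the equation as written.

```python
import torch
import torch.nn.functional as F

def graph_contrastive_loss(s_basic, s_strong, sigma=0.5):
    """s_basic, s_strong: (N, d) embeddings of the same N node/graph individuals
    from the basic and strong augmentation views; sigma: temperature."""
    z1 = F.normalize(s_basic, dim=-1)
    z2 = F.normalize(s_strong, dim=-1)
    sim = z1 @ z2.T / sigma                              # (N, N) cosine similarities
    pos = torch.diag(sim)                                # sim(s_i', s_i'') / sigma
    neg_mask = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    denom = (torch.exp(sim) * neg_mask.float()).sum(1)   # the N-1 negative pairs
    return -(pos - torch.log(denom)).mean()
```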
http://arxiv.org/abs/2307.03155v1
20230706173043
On the approximate relation between black-hole perturbation theory and numerical relativity
[ "Tousif Islam", "Gaurav Khanna" ]
gr-qc
[ "gr-qc", "astro-ph.IM" ]
tislam@umassd.edu We investigate the interplay between numerical relativity (NR) and point-particle black hole perturbation theory (ppBHPT) in the comparable mass regime. Specifically, we reassess the α-β scaling technique, previously introduced by Islam et al. <cit.>, as a means to effectively match ppBHPT waveforms to NR waveforms within this regime. Utilizing publicly available long NR data ( <cit.>) for a mass ratio of q=3 (where q:=m_1/m_2 represents the mass ratio of the binary, with m_1 and m_2 denoting the masses of the primary and secondary black holes, respectively), encompassing the final ∼ 65 orbital cycles of the binary evolution, we examine the range of applicability of such scalings. We observe that the scaling technique remains effective even during the earlier stages of the inspiral. Additionally, we provide commentary on the temporal evolution of the α and β parameters and discuss whether they can be approximated as constant values. Consequently, we derive the α-β scaling as a function of orbital frequencies and demonstrate that it is equivalent to a frequency-dependent correction. We further provide a brief comparison between Post-Newtonian waveform and the rescaled ppBHPT waveform at q=3 and comment on their regime of validity. Finally, we explore the possibility of using PN to obtain the α-β calibration parameters and still provide a rescaled ppBHPT waveform that matches NR. On the approximate relation between black-hole perturbation theory and numerical relativity Gaurav Khanna August 1, 2023 ============================================================================================ § INTRODUCTION The accurate simulation of binary-black-hole (BBH) mergers and the computation of gravitational wave (GW) radiation play a crucial role in GW research, as they enable the development of computationally efficient yet precise waveform models <cit.>. This relies heavily on accurate numerical simulations of BBH mergers. In the regime of comparable mass ratios (1 ≤ q ≤ 10), the most accurate approach to simulate a BBH merger is by solving the Einstein equations using numerical relativity (NR) <cit.> (Fig. <ref>). However, accurately simulating BBH mergers using NR in the intermediate to large mass ratio regime (10 ≤ q ≤ 100) remains a challenging task. In contrast, point particle black hole perturbation theory (ppBHPT) <cit.> offers a reliable modeling approach for extreme mass ratio binaries (q →∞) (Fig. <ref>). In ppBHPT, the smaller black hole is treated as a point particle orbiting the larger black hole in a curved space-time background. However, as the binary system becomes less asymmetric and approaches the regime of comparable mass ratios, the assumptions of the ppBHPT framework begin to break down. Consequently, the ppBHPT framework fails to generate accurate gravitational waveforms within this regime. On the other hand, post-Newtonian theories provide a dependable approximate method to generate gravitational waveforms for BBH mergers during the inspiral stage of the binary evolution when the two black holes are considerably distant from each other and their velocities are significantly smaller than the speed of light (Fig. <ref>). In recent times, there have been significant advancements in expanding the scope of both NR and ppBHPT frameworks. These advancements include the development of the surrogate model <cit.>, a fully relativistic second-order self-force model <cit.>, and the extension of NR techniques to simulate BBH mergers with higher mass ratios <cit.>. 
The surrogate model, which relies on the ppBHPT framework, has exhibited reasonable accuracy in predicting waveforms for BBH mergers in the comparable to large mass ratios regime. By employing a straightforward calibration procedure known as the α-β scaling, the ppBHPT waveforms are appropriately rescaled to achieve excellent match with NR data, particularly in the comparable mass regime. The scaling reads <cit.>: h^ℓ,m_ NR(t_ NR ; q) ∼α_ℓ h^ℓ,m_ ppBHPT( t_β ppBHPT;q ) , where, h^ℓ,m NR and h^ℓ,m ppBHPT represent the NR and ppBHPT waveforms, respectively, as functions of the NR time t_ NR and ppBHPT time t_β, ppBHPT. The calibration parameters, α_ℓ and β, are typically determined through matching ppBHPT waveforms to NR. Following the α-β calibration procedure, the quadrupolar mode of the rescaled ppBHPT waveform exhibits excellent agreement with NR, with errors of approximately 10^-3 or less, in the comparable mass regime <cit.>. Additionally, the rescaled ppBHPT waveforms demonstrate remarkable match to recently obtained NR data in the high mass ratio regime (q=15 to q=128) <cit.>. It has been shown that these waveforms can be used to accurately estimate the properties of the final black holes <cit.>. Further analysis provides evidence that the calibration parameters can be attributed to the absence of finite size effects within the ppBHPT framework <cit.>. In this paper, we investigate the interplay between NR and ppBHPT framework in the comparable mass regime through the lens of the α-β scaling. In particular, we use publicly available long NR data (), for mass ratio of q=3, that covers the final ∼ 65 orbital cycles of the binary evolution to understand applicability of the α-β scaling. In Section <ref>, we present our main findings and results. To begin, Section <ref> explores different methods to obtain the calibration parameters α and β. Next, in Section <ref>, we provide a detailed comparison of the α and β values obtained from these different approaches. Section <ref> then comments on the regime of validity of the α-β scaling. Finally, in Section <ref>, we discuss the implication of our results in current and future efforts in modeling gravitational waveforms from BBH mergers. § SCALING BETWEEN NR AND PERTURBATION THEORY In this section, we present a detailed analysis of the α-β scaling between ppBHPT and NR waveforms in the comparable mass regime. To do this, we utilize publicly available long NR data ( <cit.>), for mass ratio of q=3. The NR data covers the final ∼ 65 orbital cycles of the binary evolution and are ∼ 30000M long in duration (where M is the total mass of the binary). We then generate the ppBHPT waveform for this mass ratio using the framework developed in Refs. <cit.>. In particular, we first compute the full inspiral-merger-ringdown trajectory taken by the point-particle and then we use that trajectory to compute the gravitational wave emission by solving the inhomogeneous Teukolsky equation in the time-domain <cit.>. Our ppBHPT waveform data covers the final ∼ 56 orbital cycles of the binary evolution and are ∼ 35000m_1 long in duration. §.§ Methods to obtain α-β values Once we have both the ppBHPT and NR data for q=3, we investigate various methods to determine the appropriate α and β values necessary for accurately rescaling the ppBHPT waveform to achieve a strong agreement with NR. To simplify the analysis, we focus on the (2,2) mode of the waveform. 
§.§.§ Using full waveform data Typically, the values of α and β are determined by minimizing the L_2-norm difference between the NR data and the rescaled ppBHPT waveforms, covering the full inspiral-merger-ringdown stage of the binary evolution, after aligning them on the same time grid <cit.>. The optimization problem can be formulated as follows: min_α,β∫| α h_ ppBHPT( β t_ ppBHPT) - h_ NR(t_ NR) |^2 dt/∫| h_ NR(t_ NR) |^2 dt. This optimization problem yields the global best-fit values of α and β that minimize the error computed over the entire length of the waveform data or the calibration regime (e.g. t ∈ [-5000,100]M as used in  <cit.>). §.§.§ Using only inspiral data We can modify the procedure described in Section <ref> by limiting the global fit to only include inspiral data, such as data up to t=-100M. This approach eliminates the influence of the merger-ringdown portion of the waveform, which may have different mass scale and spin values. §.§.§ Using the peaks Alternatively, it is possible to estimate the optimal values of α and β at different points during the binary evolution. However, special care should be taken as this approach requires simultaneous rescaling of both the time and amplitude. We note that, in order to achieve a successful rescaling, it is necessary for the peaks of the waveform to align between the NR and ppBHPT data. Therefore, we can estimate the optimal values of α and β at each peak by matching the peak time and value between NR and ppBHPT. For instance, we can focus on the 50th peak before the merger in both NR and ppBHPT waveforms. By employing cubic splines, we can accurately determine the precise location and value of the peak from the discrete waveform data in both cases. Let us denote the peak times as t_ peak, ppBHPT and t_ peak, NR, while the peak values are denoted as h_ peak, ppBHPT and h_ peak, NR. In this analysis, the point estimates of α and β at the peaks are given by: α_ peak = h_ peak, NR/h_ peak, ppBHPT, and β_ peak = t_ peak, NR/t_ peak, ppBHPT. By repeating this analysis for all the peaks, we can obtain a temporal variation of the optimal local values of α and β throughout the binary evolution. §.§.§ Using a certain number of cycles Finally, we can modify the method to estimate the local values of α and β throughout the binary evolution by considering a broader time window instead of just focusing on individual peaks. For example, we can choose to match the ppBHPT and NR waveforms between the 50th and 41st peak before the merger. The shorter duration NR and ppBHPT data, which are restricted to the selected time window, can be denoted as (t^ NR win,h^ NR win) and (t^ ppBHPT win,h^ ppBHPT win), respectively. We then perform the α-β scaling as described in Eq. (<ref>) on these datasets: min_α,β∫| α h^ ppBHPT_ win(β t^ ppBHPT_ win) - h^ NR_ win(t^ NR_ win) |^2 dt/∫| h^ NR_ win(t^ NR_ win) |^2 dt. This approach allows us to obtain an averaged local estimate of the α and β values around the time corresponding to the mean of the time window between the 50th and 41st peak before the merger. In this modified approach, we utilize 10 consecutive peaks (i.e., 5 cycles) of waveform data to estimate the α and β values, which we denote as α_ 5cycles and β_ 5cycles. 
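As an illustration of the global fit described above, the following is a minimal Python/SciPy sketch of the L_2-norm minimization over (α, β). The array names, initial guess, and optimizer choice are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize

def rescale_ppbhpt(t_nr, t_pp, h_pp, alpha, beta):
    """Evaluate alpha * h_ppBHPT(beta * t_ppBHPT) on the NR time grid,
    i.e. stretch the ppBHPT time axis by beta and interpolate onto t_NR."""
    interp = interp1d(beta * t_pp, alpha * h_pp, bounds_error=False, fill_value=0.0)
    return interp(t_nr)

def l2_mismatch(params, t_nr, h_nr, t_pp, h_pp):
    """Relative L2-norm error between the rescaled ppBHPT waveform and NR."""
    alpha, beta = params
    h_rescaled = rescale_ppbhpt(t_nr, t_pp, h_pp, alpha, beta)
    num = np.trapz(np.abs(h_rescaled - h_nr) ** 2, t_nr)
    den = np.trapz(np.abs(h_nr) ** 2, t_nr)
    return num / den

# Global best-fit values over the chosen calibration window:
# res = minimize(l2_mismatch, x0=[0.75, 0.7], args=(t_nr, h_nr, t_pp, h_pp),
#                method="Nelder-Mead")
# alpha_fit, beta_fit = res.x
```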
§.§ Comparison of the α-β values from different methods To infer both global and local estimates of the α and β values for q=3, we first employ three different techniques: * We use the final ∼ 5000M of the NR data to find the global best-fit values of α and β. This is done by minimizing the L_2-norm difference between the rescaled ppBHPT waveform and the NR waveform, as described in Section <ref>. The obtained calibration values are denoted as α_5000M and β_5000M. * We match all 112 peaks in the NR data to their corresponding peaks in the ppBHPT waveform using the procedure outlined in Section <ref>. This gives us the point estimates of α and β at each peak, denoted as α_ peak and β_ peak. * The NR data is divided into smaller windows consisting of 10 consecutive peaks (i.e. 5 cycles), resulting in 10 smaller time windows. We then apply the procedure described in Section <ref> to match each of these smaller windows to the corresponding ppBHPT waveforms. This provides us with the averaged local estimations of the calibration parameters, denoted as α_ 5cycles and β_ 5cycles. By employing these three techniques, we can obtain a comprehensive understanding of the α and β values for the considered mass ratio of q=3. Figure <ref> illustrates the (2,2) mode of the NR (first row) and ppBHPT waveforms (second row), both aligned such that the maximum amplitude occurs at t=0 and the orbital phase is zero at the beginning. This alignment facilitates a direct comparison between the two waveforms. Additionally, we highlight the 102^ th peak of both waveforms (third row), along with their corresponding peak times. It is evident that the peak times and values differ between the NR and ppBHPT waveforms. The peaks in the ppBHPT waveform occur earlier in time and have larger amplitudes compared to the NR waveform. This emphasizes the need to establish a scaling relationship between the ppBHPT and NR waveforms. Finally, we show the waveform segment between the 102^ th and 103^ th peaks for both NR and ppBHPT as a demonstration of the procedure mentioned in Section <ref>. In Figure <ref>, we compare the obtained values of α and β from different approaches. We observe that α_ peak remains relatively constant throughout the binary evolution, while β_ peak shows stability in the earlier stages and deviates slightly during the late-inspiral-merger phase. It is important to note that α_ peak and β_ peak represent local optimal values and may differ slightly from the global fit values, e.g. α_5000M and β_5000M. We also examine α_ 5cycles and β_ 5cycles, which provide averaged local estimations of the calibration parameters. Interestingly, α_ 5cycles closely follows α_ peak, while β_ 5cycles aligns well with β_ peak, except for the late-inspiral and merger region where some deviations occur for β. We further note that the obtained values of α and β from the different approaches are not simply consistent with the naive mass-scale transformation of 1/1+1/q. This suggests that the calibration parameters α and β encompass additional effects beyond a simple mass-scale transformation. Next, we plot the α and β from different approaches as a function of the NR orbital frequencies (Figure <ref>). This further demonstrates that the α and β values are mostly constant for a significant portion of the frequency window. 
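The peak-by-peak point estimates used in the comparison above can be sketched as follows. The use of |Re h| as the oscillatory quantity, the grid-refinement strategy, and the assumption that the two peak lists align one-to-one are illustrative choices, not a statement of the authors' implementation.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def refine_peaks(t, y):
    """Locate the local maxima of a discretely sampled oscillatory waveform y(t)
    (e.g. |Re h^{2,2}|) and refine their times/values with a cubic spline."""
    idx, _ = find_peaks(y)
    spline = CubicSpline(t, y)
    t_peaks, y_peaks = [], []
    for i in idx:
        # refine each peak by maximising the spline on a fine grid around the sample
        t_fine = np.linspace(t[max(i - 1, 0)], t[min(i + 1, len(t) - 1)], 200)
        y_fine = spline(t_fine)
        j = np.argmax(y_fine)
        t_peaks.append(t_fine[j])
        y_peaks.append(y_fine[j])
    return np.array(t_peaks), np.array(y_peaks)

# Point estimates at each matched pair of peaks (assuming aligned peak lists):
# t_pk_nr, h_pk_nr = refine_peaks(t_nr, np.abs(np.real(h_nr)))
# t_pk_pp, h_pk_pp = refine_peaks(t_pp, np.abs(np.real(h_pp)))
# alpha_peak = h_pk_nr / h_pk_pp
# beta_peak = t_pk_nr / t_pk_pp
```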
In Figure <ref>, we therefore investigate the applicability of the α-β scaling to the entire length of the available NR data by utilizing the full 30000M of NR waveform data, covering 56 cycles, along with the corresponding ∼35000m_1 ppBHPT waveform data. By employing Eq.(<ref>) and following the procedure outlined in Sec.<ref>, we successfully obtain a set of α and β values that allow us to rescale the full ppBHPT waveform to match the NR data throughout the binary evolution. Note that these values are denoted as α_ full and β_ full and are also shown in Figure. <ref> and  <ref> for comparison. In the top row of Figure <ref>, we show the NR data and the ppBHPT waveforms after applying the scaling factor of 1/1+1/q. Additionally, we present the rescaled ppBHPT waveform after the α-β calibration in the second row. In the third row of Figure <ref>, we show Δ A/A_ NR, relative error in amplitude, and Δϕ_ NR, absolute error in the phase, of both the ppBHPT (after multiplying the factor of 1/1+1/q) and rescaled ppBHPT waveform when compared to the NR data. These errors indicate that the rescaled ppBHPT waveform exhibits excellent agreement with the NR data, with amplitude errors on the order of ∼ 0.1% and phase errors of approximately ∼ 0.1 radians. These errors are significantly smaller compared to the errors between the original ppBHPT waveform (after multiplying the factor of 1/1+1/q) and the NR data, demonstrating the effectiveness of the α-β scaling in improving the agreement between the two waveforms. Finally, to understand and mitigate the effect of the merger-ringdown waveform in the α-β calibration, we follow the procedure outlined in Section <ref> and use only the waveform up to t=-100M. The resulting calibration parameters are denoted as α_ ins and β_ ins. We find that α_ ins and β_ ins are very close to α_ full and β_ full, respectively. Specifically, we have [α_ full,β_ full]=[0.737122,0.706900] and [α_ ins,β_ ins]=[0.731040,0.707100] . This suggests that the inspiral-only waveform has a slightly larger effect on the α value compared to the β value. However, since the values are very close, it implies that we can use any segment of the waveform and still obtain meaningful estimates for the α and β parameters. We show the values in Figure. <ref> and  <ref> for comparison. §.§ Validity of the α-β scaling The results presented in Section <ref> provide valuable insights into the validity and behavior of the α-β scaling between ppBHPT waveforms and NR data in the comparable mass regime. The key findings are as follows: * The scaling procedure is effective even for longer NR simulations with a duration of approximately 30000M. This demonstrates that the α-β scaling can be successfully applied to a wide range of waveform data, including those with a significant number of orbital cycles. * Throughout most of the binary evolution, the optimal values of α and β remain approximately constant. This indicates that a global set of calibration parameters can reasonably capture the local behaviour. * In the late-inspiral and merger stage, slight deviations from constant values are observed for both α and β. As a result, the scaling remains extremely effective until very close to merger (up to ∼40M before the merger) beyond which slight differences between rescaled ppBHPT and NR is observed. We can attribute these deviations to the changes in mass and spin of the final black hole during this phase. Ref. 
<cit.> has shown that the α and β values, obtained in the inspiral part of the waveform, can be self-consistently rescaled for the merger-ringdown part using the energy and angular momentum changes up to the plunge. In particular, α_ MR and β_ MR, the calibration parameters for the merger-ringdown part, obey the following scaling with α_ full and β_ full <cit.>: α_ MR = ξ×α_ full , and β_ MR = β_ full/ξ , where the scaling factor can be approximated as ξ=[1-(Δ J^z/M^2)^1.5] (1-Δ E/M). Here, Δ E and Δ J^z are the changes in energy and angular momentum up to the plunge. Overall, these findings support the applicability and robustness of the α-β scaling approach in relating ppBHPT waveforms to NR data in the comparable mass regime. §.§ Understanding α-β scaling as frequency-dependent corrections The α-β scaling between ppBHPT and NR waveforms is designed to address the missing finite size effects and higher order self-force corrections in ppBHPT waveforms. However, recent findings by Wardell et al. (Ref. <cit.>) have shown that the second order self-force correction is frequency dependent. This raises the question of how the α-β scaling can handle frequency-dependent corrections. In order to investigate this further, we derive the α-β scaling as a function of the orbital frequencies. The α-β scaling for the (2,2) mode can be expressed as follows: h^2,2_ NR(t_ NR ; q) ∼α h^2,2_ ppBHPT( β× t_ ppBHPT;q ) . This scaling relationship extends to the amplitude and phase of the waveforms: A_ NR (t_ NR) ≈α× A_ ppBHPT(β× t_ ppBHPT), and ϕ_ NR (t_ NR) ≈ϕ_ ppBHPT(β× t_ ppBHPT). One can compute the orbital phase as: ϕ_ orb, NR = ϕ_ NR / 2, ϕ_ orb, ppBHPT = ϕ_ ppBHPT / 2. This leads to: d ϕ_ orb, NR/dt_ NR≈dϕ_ orb, ppBHPT/d(t_ ppBHPT)dt_ ppBHPT/dt_ NR. Simplifying further, we find: ω_ orb, NR≈ω_ orb, ppBHPTdt_ ppBHPT/dt_ NR, where ω_ orb, NR and ω_ orb, ppBHPT are the orbital frequencies of the NR and ppBHPT waveforms, respectively. Since t_ NR = β t_ ppBHPT, we can further simplify this as: ω_ orb, NR≈ω_ orb, ppBHPT×1/β. Thus, the α-β scaling relationship between ppBHPT and NR waveforms (where both are expressed as a function of time), given in Eq.(<ref>), can be equivalently expressed as a scaling between the waveforms as a function of the orbital frequencies. These scalings are: A_ NR (ω_ orb, NR) ≈α× A_ ppBHPT(ω_ orb, ppBHPT/β), and h^2,2_ NR (ω_ orb, NR) ≈α× h^2,2_ ppBHPT(ω_ orb, ppBHPT/β). In Figure <ref>, we present the (2,2) mode of the NR and ppBHPT waveforms up to merger as a function of the orbital frequencies in the upper panel, accompanied by the amplitudes in the lower panel. It is evident that any rescaling aiming to match the ppBHPT amplitude (plotted against orbital frequencies; red solid line) to NR (also against orbital frequencies; blue solid line) must be frequency-dependent. Remarkably, the α-β scaling described by Eq.(<ref>) corresponds to a frequency-dependent correction, as it not only modifies the amplitudes but also alters the frequency evolution according to Eq.(<ref>). For comparison, we include the amplitudes as a function of rescaled frequencies (black dashed line) after the application of the α-β scaling. We find excellent agreement up to t_ NR=-18M, very close to the merger, between the rescaled ppBHPT amplitudes as a function of rescaled orbital frequencies and the NR amplitudes as a function of NR orbital frequencies. Finally, we generalize the scaling for all modes as: h^ℓ,m_ NR (ω_ orb, NR) ≈α_ℓ× h^ℓ,m_ ppBHPT(ω_ orb, ppBHPT/β). 
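The frequency-domain form of the scaling can be evaluated numerically along the lines of the following sketch. The convention ω_orb = (dφ/dt)/2 for the (2,2) mode follows the derivation above; the array names and numerical details are illustrative assumptions.

```python
import numpy as np

def amplitude_vs_frequency(t, h):
    """Amplitude and orbital frequency of a complex (2,2)-mode time series:
    A = |h|, phi = unwrapped phase of h, omega_orb = (dphi/dt) / 2."""
    amp = np.abs(h)
    phase = np.unwrap(np.angle(h))
    omega_orb = np.abs(np.gradient(phase, t)) / 2.0
    return omega_orb, amp

# Frequency-domain form of the scaling: the rescaled ppBHPT amplitude is
# alpha * A_ppBHPT plotted against omega_ppBHPT / beta, which can then be
# compared directly with A_NR plotted against omega_NR.
# w_pp, a_pp = amplitude_vs_frequency(t_pp, h_pp)
# w_nr, a_nr = amplitude_vs_frequency(t_nr, h_nr)
# w_rescaled, a_rescaled = w_pp / beta, alpha * a_pp
```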
To further support our observations, we extend our analysis to three additional mass ratio values: q=[4,6,10], using publicly available SXS NR data  <cit.>,  <cit.>, and  <cit.>, respectively. However, these NR datasets only cover the final ∼ 6000M evolution of the binary, corresponding to approximately 25 orbital cycles. For each mass ratio, we perform the α-β scaling using Eq.(<ref>), obtaining the best-fit values for α and β. We then use Eq.(<ref>) to approximate the rescaled amplitude as a function of the orbital frequency (Fig. <ref>). In Figure <ref>, we compare the amplitudes of both ppBHPT and NR waveforms as a function of the respective orbital frequencies, as well as a function of time. During the inspiral phase, the rescaled waveform's amplitude closely matches NR, but deviations become apparent as it approaches the merger. However, the approximate α-β scaling effectively captures the frequency-dependent correction needed to align ppBHPT with NR until very close to the merger, where the approximation breaks down. Specifically, we find that the scaling remains effective up to t_ NR=-32M for q=4, t_ NR=-36M for q=6, and t_ NR=-45M for q=10. This suggests that the α-β scaling successfully matches NR data very well up to the plunge phase. It is worth mentioning that the reason for the global α-β fit to be less effective around merger is that the global fit values deviate from the local optimal α-β estimates in this regime (Fig. <ref>). These deviations can also be attributed to the changes in mass and spin of the final black hole during this phase. Incorporating the updated final mass and spin values in the ppBHPT framework is expected to reduce these deviations and improve the accuracy of the rescaling. § COMPARISON AGAINST POST-NEWTONIAN THEORY We now provide a detailed comparison of the post-Newtonian theory waveforms with ppBHPT, rescaled ppBHPT (obtained through the α-β procedure) and NR in the comparable mass regime. A detailed review of post-Newtonian methods are given in Ref. <cit.>. The post-Newtonian approximation is a slow-motion, weak-field approximation to general relativity with an expansion parameter ξ = v/c where v is the magnitude of the relative velocity and c is the speed of light. While many previous analysis have focused on understanding match between NR and PN in the comparable mass regime <cit.>, our focus remains in comparing ppBHPT to PN. §.§ Comparing waveforms at q=3 We show the full (2,2) mode inspiral-merger-ringdown waveforms from NR (blue solid lines), rescaled ppBHPT (black dashed lines), and PN (green dash-dotted lines) in Fig. <ref>. In particular, we use PN approximation, generated using software package. This particular approximation includes phase terms up to 3.5PN order and amplitude terms up to 2.5PN order <cit.>. We zoom into the earlier and later times of the waveform to examine the match between rescaled ppBHPT, PN, and NR in more detail. Additionally, we compute the relative error in the amplitude and the absolute phase error for both rescaled ppBHPT and PN compared to NR. The results indicate that both rescaled ppBHPT and PN exhibit similar errors in the amplitude when compared to NR. However, in the late inspiral phase, the rescaled ppBHPT waveform yields a much smaller error in the phase compared to the PN waveform. This suggests that the rescaled ppBHPT waveform provides improved accuracy in capturing the phase evolution of the system during the late inspiral regime, as compared to the PN approximation. Finally, in Figure. 
<ref>, we present the amplitude of the ppBHPT, rescaled ppBHPT, NR, and PN waveforms as a function of the respective orbital frequencies. The amplitude evolution extracted from NR is compared to both the PN and rescaled ppBHPT waveforms. We observe that in the inspiral region, both the PN and rescaled ppBHPT amplitudes agree well with the amplitude evolution obtained from NR. However, as we progress towards later times, the PN approximation starts deviating from NR around t_ NR=-708M, while the rescaled ppBHPT approximation breaks down around t_ NR=-18M. This suggests that the rescaled ppBHPT waveform better captures the dynamics of NR compared to the PN waveform. §.§ Estimating α-β using PN Our analysis in Section <ref>, Section <ref>, and Section <ref> highlights interesting possibilities for using PN waveforms to accurately estimate global values for α and β. We are motivated by the following observations: * The values of α and β remain nearly constant for a significant duration of the binary evolution, with only slight deviations around the merger. These local estimates of α and β closely align with the values obtained using global error minimization techniques. Furthermore, the values of α and β obtained using only the inspiral part of the waveform exhibit remarkable agreement with the values obtained using the full waveform data. * While the PN approximation breaks down towards the merger, it provides an excellent match to NR waveforms in the inspiral phase, which is far away from the merger. These observations suggest that PN waveforms can be used to infer the α and β values required to match a ppBHPT waveform to PN. These α and β values will be very close to the values obtained using NR data. This procedure has significant implications. Firstly, it means that one can use ppBHPT and PN waveforms in the early inspiral to obtain the α-β values and generate a rescaled ppBHPT waveform that matches NR waveforms throughout the entire binary evolution, from inspiral to ringdown. In this section, we investigate the possibility of using PN waveforms for estimating α and β in great details using different PN approximations. §.§.§ α-β PN scaling at q=3 First, we perform a calibration between the ppBHPT waveform at q=3 and a PN waveform generated using the approximation. We obtain α_ PN and β_ PN as the calibration parameters. Interestingly, we find that these values are very close to α_ ins and β_ ins obtained from the inspiral portion of the NR waveform, as well as α_ NR, full and β_ NR, full obtained from the full NR waveform. Specifically, we have: [α_ PN,β_ PN] =[0.738862,0.705607], [α_ NR, full,β_ NR, full] =[0.737122,0.706900], [α_ NR, ins,β_ iNR, ns] =[0.731040,0.707100]. Furthermore, we utilize α_ PN and β_ PN to rescale the ppBHPT waveform, and we observe an excellent match with the NR data not only in the inspiral phase (Figure. <ref>, third row). We however notice some differences in the merger-ringdown part. Nonetheless, this analysis suggests that PN waveforms, which mostly capture the inspiral phase, can provide meaningful estimates of α and β for rescaling ppBHPT waveforms to match NR waveforms throughout the entire binary evolution, including the merger-ringdown phase. §.§.§ α-β PN scaling at q=[4,6,10] To investigate the validity of our observations for different mass ratios, we repeat the analysis for mass ratios q=4, q=6, and q=10. In Figure <ref>, we present the values of α and β obtained by rescaling the ppBHPT waveforms to both NR and PN data. 
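As a rough illustration of how such a calibration could be set up numerically (a schematic reconstruction, not the pipeline actually used here), α and β can be obtained from a least-squares fit of the rescaled ppBHPT waveform against a reference NR or PN waveform over a chosen window, for instance the inspiral-only cut at t=-100M used earlier.

```python
import numpy as np
from scipy.optimize import minimize

def calibration_residual(params, t_ref, h_ref, t_pp, h_pp):
    """L2 residual between a reference (NR or PN) waveform and the
    alpha-beta rescaled ppBHPT waveform on the reference time grid."""
    alpha, beta = params
    t_pp = np.asarray(t_pp)
    h_pp = np.asarray(h_pp, dtype=complex)
    h_resc = alpha * (np.interp(t_ref, beta * t_pp, h_pp.real)
                      + 1j * np.interp(t_ref, beta * t_pp, h_pp.imag))
    return np.sum(np.abs(h_resc - h_ref) ** 2)

def fit_alpha_beta(t_ref, h_ref, t_pp, h_pp, t_max=None):
    """Fit (alpha, beta); restricting the fit to t_ref <= t_max (e.g. -100.0,
    in units of M) yields inspiral-only values such as alpha_ins, beta_ins."""
    t_ref = np.asarray(t_ref)
    h_ref = np.asarray(h_ref, dtype=complex)
    mask = np.ones_like(t_ref, dtype=bool) if t_max is None else (t_ref <= t_max)
    result = minimize(calibration_residual, x0=[1.0, 1.0],
                      args=(t_ref[mask], h_ref[mask], t_pp, h_pp),
                      method="Nelder-Mead")
    return result.x  # [alpha, beta]
```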
We find that β_ PN, obtained from the PN waveform, closely matches β_ NR for all mass ratios. This suggests that the β parameter is relatively insensitive to the choice of waveform and is consistent between NR and PN. However, we observe that α_ PN, also obtained from the PN waveform, is systematically larger than the values obtained from NR. This difference appears to increase as the mass ratio increases. Nevertheless, it is noteworthy that the values of α obtained from PN are still quite close to those obtained from NR, indicating a reasonable agreement between the two. §.§.§ Understanding the effect of the choice of PN model It is important to note that each PN model includes corrections up to a certain PN order, and these higher-order corrections can affect the accuracy of the rescaling. To investigate the effect of different PN models on the α-β calibration, we repeat the calibration process for q=3 using different PN approximations: , , and . While all of these approximations include phase terms up to 3.5PN order and amplitude terms up to 2.5PN order, they employ different techniques and expansions to obtain these terms <cit.>. This allows us to assess whether the choice of PN model affects the resulting values of α and β. By performing the α-β calibration with different PN approximations, we obtain slightly different values for α and β. In particular, we find: [α_ PN^ TaylorT4,β_ PN^ TaylorT4] =[0.738862,0.705607], [α_ PN^ TaylorT1,β_ PN^ TaylorT1] =[0.745278,0.709454], [α_ PN^ TaylorT2,β_ PN^ TaylorT2] =[0.753860,0.709265]. It is interesting to note that the value of β changes only marginally when we use a different PN model, whereas the changes in α are more prominent. This indicates that the choice of PN model does have a slight impact on the rescaling parameters. § DISCUSSION & CONCLUSION In this paper, we investigated the validity and effectiveness of the α-β scaling approach, previously introduced by Islam et al. <cit.>, which aims to match the ppBHPT waveforms to the NR waveforms. Utilizing publicly available long NR data () for mass ratio q=3, we demonstrated that the scaling can be achieved even for longer NR simulations, spanning up to ∼ 30000M in duration. Throughout most of the binary evolution, the scaling factors α and β can be considered approximately constant, although they show slight deviations close to the merger. These deviations are expected due to the loss of energy and change in mass and spin of the final black hole during the merger process. By including the final mass and spin values in the ppBHPT framework, these deviations can be reduced. Furthermore, we investigated the frequency-dependent nature of the scaling. We derived the α-β scaling as a function of orbital frequencies and demonstrated its equivalence to a frequency-dependent correction. The rescaled ppBHPT waveform, when matched to NR amplitudes as a function of orbital frequencies, showed excellent agreement, providing further support for the frequency-dependent nature of the scaling. We then compared the accuracy of the rescaled ppBHPT waveform obtained through the α-β scaling against the post-Newtonian (PN) approximation. The rescaled ppBHPT waveform showed comparable accuracy to the PN waveform in terms of amplitude, but exhibited significantly smaller phase errors during the late inspiral phase. Our analysis confirms the feasibility of using PN waveforms to derive precise α-β calibration parameters.
The calibration process involves matching the ppBHPT waveform to a PN waveform, focusing on the inspiral phase. The resulting α and β values obtained from this calibration closely align with those obtained from NR waveforms. Overall, our results demonstrate that the α-β scaling provides an effective method for matching ppBHPT waveforms to NR waveforms in the comparable mass regime, accounting for missing finite-size effects and possibly higher-order self-force corrections <cit.>. The scaling is frequency-dependent, capturing the correct amplitude and frequency evolution of the NR waveforms. While the scaling has limitations close to the merger, it remains highly effective in reproducing NR dynamics up to the plunge phase. These findings have implications for gravitational wave observations and waveform modeling in extreme-mass-ratio inspirals. We thank Scott Field, Scott Hughes, Adam Pound, Niels Warburton, Barry Wardell and Chandra Kant Mishra for helpful discussions and thoughtful comments on the manuscript. The authors acknowledge support of NSF Grants PHY-2106755, PHY-2307236 (G.K) and DMS-1912716, DMS-2309609 (T.I and G.K). Simulations were performed on CARNiE at the Center for Scientific Computing and Visualization Research (CSCVR) of UMassD, which is supported by the ONR/DURIP Grant No. N00014181255 and the UMass-URI UNITY supercomputer supported by the Massachusetts Green High Performance Computing Center (MGHPCC).
http://arxiv.org/abs/2307.01456v1
20230704032257
Spatiotemporal coupled-mode equations for arbitrary pulse transformation
[ "Zhaohui Dong", "Xianfeng Chen", "Luqi Yuan" ]
physics.optics
[ "physics.optics" ]
ALPGEN EVTGEN PYTHIA ⟨⟩[2] ^1State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China ^2Shanghai Research Center for Quantum Sciences, Shanghai 201315, China ^3Collaborative Innovation Center of Light Manipulation and Applications, Shandong Normal University, Jinan 250358, China ^∗yuanluqi@sjtu.edu.cn Spatiotemporal modulation offers a variety of opportunities for light manipulations. In this paper, we propose a way towards arbitrary transformation for pulses sequentially propagating within one waveguide in space via temporal waveguide coupling. The temporal waveguide coupling operation is achieved by spatiotemporally modulating the refractive index of the spatial waveguide with a traveling wave through segmented electrodes. We derive the temporal coupled-mode equations and discuss how systematic parameters affect the temporal coupling coefficients. We further demonstrated a temporal Mach-Zehnder interferometer and universal multiport interferometer, which enables arbitrary unitary transformation for pulses. We showcase a universal approach for transforming pulses among coupled temporal waveguides, which requires only one spatial waveguide under spatiotemporal modulation, and hence provide a flexible, compact, and highly compatible method for optical signal processing in time domain. Spatiotemporal coupled-mode equations for arbitrary pulse transformation Zhaohui Dong^1, Xianfeng Chen^1,2,3, and Luqi Yuan^1,* August 1, 2023 ======================================================================== § INTRODUCTION Time-varying media brings intriguing opportunities for wave manipulation in photonics <cit.> and hence attracts growing interest in both physics community and optical engineering. In particular, by combining both temporal and spatial degrees of freedom, the photonic systems undergoing spatiotemporal modulations recently emerge as new platforms for controlling light simultaneously in space and time <cit.>. Utilizing this powerful approach, researchers explore many exotic phenomena which cannot be realized in a static medium, such as luminal amplification <cit.>, Fresnel drag <cit.>, magnet-free nonreciprocal systems <cit.>, and temporal double-slit interference <cit.>. As an outstanding example of spatiotemporally modulated systems, a temporal waveguide which harnesses the total internal reflection of light at spatiotemporal boundaries and therefore confines pulses in between <cit.>, provides a novel concept for guiding light. Up to date, previous researches on temporal waveguides focus on fundamental properties for realizing a single temporal waveguide <cit.>, while interactions between multiple temporal waveguides remain unexplored. In this paper, we derive the fundamental formula for modeling interactions between two temporal waveguides, i.e., the spatiotemporal coupled-mode theory. Systematic parameters which determine temporal coupling coefficients are given, and hence our theory introduces a basic framework for studying the problem of coupled temporal waveguides. To showcase the capability of our formalism, we explore a temporal Mach-Zehnder interferometer (MZI) and further propose a design of a universal multiport interferometer <cit.> in the time domain for optical pulses. 
Such a universal multiport interferometer enables an arbitrary temporal transformation for sequential-propagating pulses within one spatial waveguide under the spatiotemporal modulation, which could find potential applications in optical signal processing. Our work hence provides a useful theoretical tool in the arising field of spatiotemporal metamaterials <cit.> to develop new-generation active photonic devices. § MODEL We now start to show how to model interactions between two temporal waveguides and derive the coupled-mode formula in the time domain. Before getting into details, we first review the model of a temporal waveguide which is achieved with pulse propagating in a spatiotemporally modulated waveguide as shown in FIG. <ref>(a). The modulation of the refractive index of the waveguide is chosen as n(z,t)=n_0+ξ (z-v_B t), where n_0 is the background refractive index of the waveguide and ξ (z-v_B t) denotes the spatiotemporal change of the refractive index <cit.> with z being the propagating direction, and v_B being the moving speed of the modulation. We transfer the formalism in a retarded time frame by using the transformation τ =t-z/v_B, where t is the time in the laboratory frame. In the retarded time frame (z, τ ), the change of the refractive index ξ(z-v_B t) is transformed to a z independent function ξ(τ)=ξ_0+Δξ(τ). To achieve a temporal waveguide, ξ(τ) can be chosen as ξ(τ) = { Δ n |τ-τ_c|⩽ T_w/2, 0 |τ-τ_c|>T_w/2,. where Δ n is the modulation amplitude, and T_w is the modulation time width centered at τ_c in the time retarded frame. As an analog to the conventional spatial waveguide, one can consider |τ-τ_c|⩽ T_w/2 as the core region of the temporal waveguide, and |τ-τ_c|>T_w/2 as the cladding region. We then consider a pulse having a central frequency ω_0 propagates along such modulated waveguide. One can treat such a problem by Taylor-expanding the dispersion relation of the waveguide as <cit.>: β (ω )=β_0+Δβ _1 (ω -ω _0)+β _2/2 (ω -ω _0)^2+β _m (τ), where β_0=n_0ω_0 /c, Δβ _1=β _1-1/v_B with β_1=∂β /∂ω |_ω_0 being the reciprocal of the group velocity at ω_0, β _2=∂^2 β /∂ω^2 |_ω_0 being the corresponding group velocity dispersion, and β _m (τ)=β _0 ξ (τ )/n_0 represents the change of the propagation constant due to spatiotemporal modulation. By using Maxwell's equation and the dispersion relation in Eq. (<ref>), one obtains the resulting wave equation for describing the amplitude of propagating pulse A(z,τ) in the retarded frame <cit.>: ∂ A(z,τ)/∂ z+Δβ _1 ∂ A(z,τ)/∂τ +iβ _2/2∂^2 A(z,τ)/∂τ^2 = i β _m (τ)A(z,τ). Following the treatment in Ref. <cit.>, one can take the modal solution A(z,τ) as A(z,τ)=M(τ )exp[i(Kz-Ωτ )], where M(τ) describes temporal shape of the mode, K denotes the rate of the mode accumulating phase during propagation, and Ω=-Δβ _1/β _2 is the frequency shift. So far, we briefly outline the formalism of the optical pulse propagating in a waveguide under the spatiotemporal modulation of the refractive index, i.e., the field traveling inside a temporal waveguide in the retarded frame, which has been utilized in many follow-up studies <cit.>. Next we give the important formula of the temporal coupled-mode theory. We construct two temporal waveguides (labeled as a and b), which is achieved by spatiotemporally modulating the refractive index of a spatial waveguide through the traveling-wave signal by segmented electrodes as shown in FIG. <ref>(b). 
In particular, the spatiotemporal change of the refractive index in the retarded time frame is taken as ξ' (τ)=ξ_0+Δξ_a (τ)+ Δξ_b (τ), where Δξ_a(τ) and Δξ_b (τ) follow Eq. (<ref>) centered at τ_c^a and τ_c^b(|τ_c^b-τ_c^a|>T_w) and therefore form two temporal waveguides a and b, respectively. We consider the field at the central frequency ω _0 propagating in such spatiotemporally modulated waveguide and assume a solution as A(z,τ)=G_a(z)M_a(τ)exp[i(K_az-Ωτ )]+G_b(z)M_b(τ)exp[i(K_bz-Ωτ )], where G_a (G_b) represents the envelope amplitude of the pulse in the temporal waveguide a (b), and the corresponding M_i(τ) and K_i(τ)(i=a, b) satisfy <cit.>, ∂^2 M_i(τ)/∂τ^2+2/β _2[K_i+(Δβ _1)^2/2β _2-β _m^i(τ) ]M_i(τ)=0, where β _m^i(τ)=β _0 ξ_i (τ)/n_0. Substituting Eq. (<ref>) into Eq. (<ref>), multiplying by M_a^*(τ) or M_b^*(τ), and then integrating over τ, we obtain the temporal coupled-mode equations: ∂ G_a(z)/∂ z =iκ _ab-C_abγ _b/1-C_abC_baexp[i(K_b-K_a)z]G_b(z)+iγ _a-C_abκ_ba/1-C_abC_ba G_a(z), 7a ∂ G_b(z)/∂ z =iκ _ba-C_baγ _a/1-C_baC_abexp[-i(K_b-K_a)z]G_a(z)+iγ _b-C_baκ _ab/1-C_baC_ab G_b(z), 7b where C_ij, κ _ij, and γ _i(i≠ j) are expressed as, C_ij =∫ M_i^*(τ)M_j(τ)d τ /∫ M_i^* (τ)M_i(τ)d τ, 8a κ _ij =∫ M_i^*(τ) Δβ _m^i(τ) M_j(τ)d τ/∫ M_i^* (τ)M_i(τ)d τ, 8b γ _i =∫ M_i^*(τ) Δβ _m^j(τ) M_i(τ)d τ/∫ M_i^* (τ)M_i(τ)d τ8c, and Δβ _m^i(τ)=β _0 Δξ_i (τ)/n_0. If we further assume the two temporal waveguides are identical in temporal shapes and ∫ M_i^*(τ)M_j(τ)d τ≪∫ M_i^*(τ)M_i(τ)d τ, we get K_a=K_b and C_ab≈ C_ba≈ 0. Eqs. (<ref>)-(<ref>) can then be simplified as, ∂ G_a(z)/∂ z≈ i κ _ab G_b(z)+i γ _a G_a(z), 9a ∂ G_b(z)/∂ z≈ i κ _ba G_a(z)+i γ _b G_b(z), 9b Here, κ _ij is the temporal coupling coefficient, and γ _i is the shift due to the presence of the other temporal waveguide. Note that we consider a symmetry case here, so we can take κ_i, j=κ_j, i^* and γ_a=γ_b. To give an illustrative picture on how the systematic parameters of this spatiotemporally modulated waveguide determine the coefficients in the temporal coupled-mode equations (<ref>), we give an example with experimentally-feasible parameters. We choose β _0Δ n/n_0=-1200 m^-1, Δβ_1 =0, β _2=5000 ps^2·m^-1, which are standard parameters for an optical waveguide with the modulation strength Δ n/n_0 ∼ 10^-4<cit.>. The spatiotemporal modulation shape can take T_w=10 ps, and |τ_c^b- τ_c^a|=20 ps. We now tune one of these parameters and fix others to investigate how γ _a and κ _ab are affected. Only the coupling between fundamental modes is considered (for the definition of the fundamental mode in a temporal waveguide, one can refer to <cit.>). We first change |β _0 Δ n/n_0| as shown in FIG. <ref>(a). The role of |β _0 Δ n/n_0| in temporal waveguides is similar to the index contrast between the core region and cladding region in spatial waveguides. When |β _0 Δ n/n_0| becomes larger, both γ _a and κ _ab becomes weaker. Next we vary the dispersion parameter β _2, which describes the ability of light to spread out of the core region in the temporal waveguide. As a result, a larger β _2 results in larger γ _a and κ _ab as shown in FIG. <ref>(b). Fig. <ref>(c) shows changes of γ_a and κ_ab versus the time spacing between two temporal waveguides |τ_c^b-τ_c^a|, and one can see both coefficients decreases when the time spacing increases as two temporal waveguides fall apart in the time domain. In all calculations above, we consider the ideal modulations, i.e., the change of Δβ_m^i(τ) is abrupt. 
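Under this ideal-modulation assumption, the simplified coupled-mode equations (9a)-(9b) are linear in z with constant coefficients, so the pulse exchange between the two temporal waveguides can be sketched with a short matrix-exponential calculation; the numerical values below are purely illustrative and are not those used for the figures.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative coefficients only (same order of magnitude as discussed above);
# kappa_ba = kappa_ab^* and gamma_a = gamma_b in the symmetric case considered here.
kappa_ab = 0.5                # temporal coupling coefficient (1/m)
gamma_a = gamma_b = 0.1       # shift induced by the other temporal waveguide (1/m)
kappa_ba = np.conj(kappa_ab)

# dG/dz = i * M @ G with G = (G_a, G_b)^T, cf. Eqs. (9a)-(9b)
M = np.array([[gamma_a, kappa_ab],
              [kappa_ba, gamma_b]], dtype=complex)

G0 = np.array([1.0, 0.0], dtype=complex)   # pulse launched in temporal waveguide a
for z in np.linspace(0.0, np.pi / (2 * abs(kappa_ab)), 5):
    G = expm(1j * M * z) @ G0
    print(f"z = {z:5.2f} m   |G_a|^2 = {abs(G[0])**2:.3f}   |G_b|^2 = {abs(G[1])**2:.3f}")
```

With these values the pulse is fully transferred from waveguide a to waveguide b at z = π/(2|κ_ab|), the temporal analogue of the coupling length of a spatial directional coupler.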
In reality, the turn-on/off of modulations is not instantaneous. To reflect this feature, we consider the form of Δβ_m^i (τ) as Δβ_m^i (τ) = { β_0 Δ n /n_0 |τ-τ_c^i| <1/2 (T_w-T_t), β _0 Δ n/n_0 cos[|τ-τ_c^i |-1/2 (T_w- T_t)]1/2 (T_w-T_t)⩽ |τ-τ_c^i|<1/2 (T_w+T_t), 0 others,. where T_t denotes the turn-of/off time width between the temporal core and the cladding regions [see the subfigure in FIG. <ref>(d)]. One can find that in FIG. <ref>(d), when T_t/T_w becomes larger, γ _a and κ _ab are increasing, indicating that the confinement of the pulse is weaker. Nevertheless, when T_t ∼ 10%· T_w, both coefficients do not change much compared to those when T_t=0, i.e, the ideal modulation case. Therefore, in the following, we still simulate models under the ideal modulation case. § RESULTS So far, we derive the spatiotemporal coupled-mode equations for modeling interactions between two temporal waveguides and investigate how systematic parameters of the system affect temporal coupling coefficients. In the following, we use these equations and demonstrate a temporal MZI with parameters β _0Δ n/n_0=-1200 m^-1, Δβ_1 =0, β _2=5000 ps^2· m^-1, and T_w=10 ps. The scheme of such temporal MZI is depicted in FIG. <ref>(a), with values of β_m (z, τ )=1200 m^-1 in the cyan regime and =0 m^-1 in the orange regime of the retarded frame (τ,z). We aim to design the system with the functionality composed of two 50:50 couplers and a phase shifter at one of the waveguides in the time domain. The parameters are selected to guarantee that κ_ij≈ 0 in the straight temporal waveguide region in FIG. <ref>(a) while κ_ij≫ 0 in the curved temporal waveguide region. The additional phase shift φ is realized by an additional change of the refractive index δ n for the length 0.4 m corresponding to the red region in FIG. <ref>(a), resulting in a small change of β_m(z,τ), i.e., the choice of δ n/Δ n ∈ [0,1.375× 10^-2] gives effective phase shift ϕ∈ [0,2π]. We assume the input at Port A (Port B) as A (B), and the output at Port C (D) as C (D). The relation for a temporal MZI is described by [ C; D ] = [ sinδ/2 cosδ/2; cosδ/2 -sinδ/2; ][ A; B ], which can be verified by results in our simulation given in FIG. <ref>(b). In particular, the output at Port C increases as φ increases and reaches its maximum at φ =π (δ n/Δ n=6.875× 10^-3), while it further decreases and reaches its minimum at φ =2π (δ n/Δ n=1.375× 10^-2). The output at Port D just behaves exactly in the opposite way. In addition, three specific cases of φ are taken and the intensity distributions of the field are plotted in FIGs. <ref>(c)-(e). In FIG. <ref>(c), we inject the pulse at Port A, and the pulse gradually switches to the other temporal waveguide during propagation. Such a phenomenon corresponds to a pulse traveling in a spatial waveguide and gradually converting to the other pulse in front of it spatially in the laboratory frame. In Fig. <ref>(d), the pulse gradually splits into two pulses which correspond to different spatial locations in one spatial waveguide, while in Fig. <ref>(e), the pulse temporally splits into two pulses then they converge to the original one eventually. The proposed temporal MZI could have potential applications in optical signal processing and optical communication. Here, we showcase it being a component of a temporal universal multiport interferometer. The parameters are the same as those used in the above example of MZI. 
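As a quick sanity check of the transfer relation quoted above, the following sketch evaluates the temporal-MZI matrix for a pulse injected at Port A, identifying the splitting angle δ with the phase shift φ for this illustration and dropping global phases.

```python
import numpy as np

def temporal_mzi(a_in, b_in, phi):
    """Transfer relation of the temporal MZI quoted above,
    (C, D)^T = [[sin(phi/2), cos(phi/2)], [cos(phi/2), -sin(phi/2)]] (A, B)^T,
    with global phases dropped so that only the power splitting is tracked."""
    T = np.array([[np.sin(phi / 2), np.cos(phi / 2)],
                  [np.cos(phi / 2), -np.sin(phi / 2)]])
    return T @ np.array([a_in, b_in], dtype=complex)

# Pulse injected at Port A only, as in the three cases discussed above
for phi in (0.0, np.pi / 2, np.pi, 2 * np.pi):
    c_out, d_out = temporal_mzi(1.0, 0.0, phi)
    print(f"phi = {phi:4.2f} rad   |C|^2 = {abs(c_out)**2:.2f}   |D|^2 = {abs(d_out)**2:.2f}")
```

The printed powers reproduce the described behaviour: the output at Port C is maximal at φ=π, vanishes at φ=0 and φ=2π, and splits 50:50 at φ=π/2.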
An arbitrary unitary transformation U performed by a temporal universal multiport interferometer with N channels shown in FIG. <ref>(a) can be decomposed in the following form: U=P (∏_(m,n) ∈ S T_m,n,l). Here the production follows an ordered sequence (S) of two-channel transformations <cit.>, and T_m,n,l(θ_l ,ϕ_l )= [ 1 0 ⋯ ⋯ 0; 0 1 ; ⋮ ⋱ ⋮; e^iϕ_l sinθ_l /2 cosθ_l /2 ; e^iϕ_l cosθ_l /2 - sinθ_l /2 ; ⋮ ⋱ ⋮; 1 0; 0 ⋯ ⋯ 0 1 ], is the l-th transformation in such sequence between two channels m and n (m=n-1), which is realized by a modified MZI with an additional phase shift ϕ_l and splitting parameter θ_l between channels m and n in the time domain as shown in FIG. <ref>(a). P is a diagonal matrix with complex elements whose modulus are equal to one, corresponding to a phase shift η _m for channel m. We perform a three-channel temporal transformation for the demonstration in principle. U(n,m) is designed as a circulation operator on the three temporal channels as shown in FIG. <ref>(b). Three pulses with different amplitudes (namely in normalized intensities as 1, 4/9, and 1/9, respectively shown in Fig. 4(c)) are injected into the input ports. In FIG. <ref>(c) the pulses take the desired circulation from one temporal waveguide to the other. This result corresponds to sequential-propagating pulses switching their position during propagation in the spatial waveguide in the laboratory frame. In addition, we reconstruct U(n,m) based on the simulation result as shown in FIG. <ref>(d), which is closely matched with the desired one in FIG. <ref>(b). § DISCUSSION We finally make a discussion on the possibility of realizing the proposal in experiments. The parameters in the simulation are achievable with the state-of-art technology in photonics <cit.>. For a pulse with the central wavelength ∼ 1000  nm, the corresponding coefficients give Δ n 10^-4 and β_2 ∼ 10^3 ps^2· m^-1, which have been demonstrated in experiments <cit.>. By properly engineering the waveguide structure, one can further enlarge the group velocity dispersion β _2<cit.>. In summary, we build a formalism of the temporal coupled-mode equations to study interactions between temporal waveguides in a system where pulses propagate in a spatiotemporally modulated waveguide, and show how systematic parameters of the modulated system determine the temporal coupling coefficients in the theory. A temporal MZI is studied and further a temporal universal multiport interferometer in the time domain is proposed, which enables an arbitrary unitary transformation for sequential-propagating pulses. Our work provides a fundamental method, which is useful for optical signal processing in time domain. In particular, compared with the conventional methods with coupled waveguides in the spatial dimension <cit.>, the generalization of the coupled temporal waveguides does not require the addition of devices in the space, and the temporal transformation can be performed in only one spatiotemporally modulated waveguide, which greatly reduces the spatial complexity and insertion loss form the connection between multiple devices. Moreover, such transformation is realized by the spatiotemporal modulation in an active way, which provides more flexibility in manipulating pulses. The temporal coupled-mode theory in Eqs. (<ref>)-(<ref>) can be further utilized to model two temporal waveguides structure with different systematic parameters and/or modulations that hold complex Δ n. 
In addition, the proposed scheme is compatible with previous works for controlling a single pulse in one temporal waveguide to achieve pulse compression <cit.>, fast and slow light <cit.>, and so on <cit.>. It can therefore trigger further studies not only on pulse transformation but also on multifunctional control of pulse propagation with multiple coupled pulse channels in the time domain, which offers a wealth of opportunities in optical signal processing. Acknowledgements The research was supported by the National Natural Science Foundation of China (12122407, 11974245, and 12192252) and the National Key Research and Development Program of China (No. 2021YFA1400900). L.Y. thanks the sponsorship from the Yangyang Development Fund and the support from the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning.
http://arxiv.org/abs/2307.02570v1
20230705181231
Named Entity Inclusion in Abstractive Text Summarization
[ "Sergey Berezin", "Tatiana Batura" ]
cs.CL
[ "cs.CL", "cs.SI" ]
Multimodal Temporal Fusion Transformers Are Good Product Demand Forecasters Marcel Worring =========================================================================== We address the named entity omission - the drawback of many current abstractive text summarizers. We suggest a custom pretraining objective to enhance the model's attention on the named entities in a text. At first, the named entity recognition model RoBERTa is trained to determine named entities in the text. After that, this model is used to mask named entities in the text and the BART model is trained to reconstruct them. Next, the BART model is fine-tuned on the summarization task. Our experiments showed that this pretraining approach improves named entity inclusion precision and recall metrics. § INTRODUCTION Current state-of-the-art abstractive summarization methods achieved significant progress, yet they are still prone to hallucinations and substitution of the named entities with vague synonyms or omitting mention of some of them at all <cit.>, <cit.>, <cit.>. Such inconsistencies in the summary limit the practicability of abstractive models in real-world applications and carry a danger of misinformation. Example in Table <ref> demonstrates the difference that named entity inclusion could make in the generated summary. Scientific texts are especially vulnerable to this issue. Omitting or substituting the name of the metric used or the method applied can make a summary useless or, in the worst case scenario, totally misleading for a reader. We make the following contributions: * present a new method for pretraining a summarization model to include domain-specific named entities in the generated summary; * show that the BART model with the Masked Named Entity Language Model (MNELM) pretraining procedure is able to achieve higher precision and recall metrics of named entity inclusion. § RELATED WORK For automatic summarization, one of the important issues is extrinsic entity hallucinations, when some entities appear in summary, but do not occur in the source text <cit.>. A number of studies have been devoted to this problem, such as fixing entity-related errors <cit.>, ensuring the factual consistency of generated summaries <cit.>, and task-adaptive continued pertaining <cit.>. In our paper, we address the problem of named entity awareness of the summarization model by first training it on the NER task before final finetuning to make the model entity aware. The idea of utilizing named entities during the pretraining phase first was described back in <cit.>, where the authors proposed the usage of knowledge graphs by randomly masking some of the named entity alignments in the input text and asking the model to select the appropriate entities from the graphs to complete the alignments. One of the disadvantages of that approach is the need for a knowledge base, which is extremely difficult to build. Only a limited number of domain-specific knowledge bases exist, and none of them can be considered complete. The study <cit.> addresses the problem of the factual consistency of a generated summary by a weakly-supervised, model-based approach for verifying factual consistency and identifying conflicts between source documents and a generated summary. Training data is generated by applying a series of rule-based transformations to the sentences of the source documents. 
A similar approach is suggested by the authors of the paper <cit.> who try to preserve the factual consistency of abstractive summarization by specifying tokens as constraints that must be present in the summary. They use a BERT-based keyphrase extractor model to determine the most important spans in the text (akin to the extractive summarization) and then use these spans to constrain a generative algorithm. The big drawback of this approach is the vagueness of the keyphrases and the limited amount of training data. Also, the use of the BERT model leaves room for improvement. The analogous solution uses <cit.>, where the authors suggest entity-level content planning, i.e. prepending target summaries with entity chains – ordered sequences of entities that should be mentioned in the summary. But, as the entity chains are extracted from the reference summaries during the training, this approach cannot be used in an unsupervised manner, like MNELM, proposed in this work. § METHOD We propose a three-step approach that aims to avoid all the aforementioned drawbacks: 1) at the first step the NER model is trained on a domain-specific dataset; 2) then the trained NER model is used for the MLM-like unsupervised pretraining of a language model; 3) the pretrained model is finetuned for the summarization task. By following these steps, we can use a large amount of unlabeled data for the pretraining model to select domain-specific named entities and therefore to include them in the generated summary. In comparison with a regular MLM pretraining, the suggested approach helps the model converge faster, shows an increased number of entities included in the generated summary, and drastically improves the avoiding of hallucinations, i.e. eliminates named entities that did not appear in the original text. § DATASETS AND EVALUATION METRICS In this work, we use two datasets: SCIERC <cit.> for training named entity extraction model and ArXiv <cit.> dataset for pretraining and training of the summarization model. The SCIERC dataset includes annotations for scientific entities for 500 scientific abstracts. These abstracts are taken from 12 AI conference/workshop proceedings in four AI communities from the Semantic Scholar Corpus. These conferences include general AI (AAAI, IJCAI), NLP (ACL, EMNLP, IJCNLP), speech (ICASSP, Interspeech), machine learning (NIPS, ICML), and computer vision (CVPR, ICCV, ECCV) conferences. The dataset contains 8.089 named entities and defines six types for annotating scientific entities: Task, Method, Metric, Material, Other-Scientific-Term and Generic. SCIERC utilizes a greedy annotation approach for spans and always prefers the longer span whenever ambiguity occurs. Nested spans are allowed when a subspan has a relation/coreference link with another term outside the span. The second dataset is the Arxiv dataset which takes scientific papers as an example of long documents and their abstracts are used as ground-truth summaries. Authors of the dataset removed the documents that are excessively long or too short, or do not have an abstract or some discourse structure. Figures and tables were removed using regular expressions to only preserve the textual information. Also, math formulae and citation markers were normalized with special tokens. Only the sections up to the conclusion section of the document were kept for every paper. This dataset contains 215,912 scientific papers with the average length of 4,938 words and the average summary length of 220 words. 
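Before turning to evaluation, the entity-masking step of the three-step approach above can be sketched as follows. This is a minimal illustration rather than the exact training code: the entity spans are assumed to come from the trained NER model, and the 0.5 masking probability described later in the paper is used as the default.

```python
import random

MASK_TOKEN = "<mask>"   # BART's mask token

def build_mnelm_example(text, entity_spans, mask_prob=0.5, seed=None):
    """Build one (corrupted input, reconstruction target) pair for MNELM pretraining.

    entity_spans: list of (start, end) character offsets of named entities,
    e.g. produced by the trained NER model. Each entity is replaced by a single
    mask token with probability mask_prob; the uncorrupted text is the target
    that BART learns to reconstruct."""
    rng = random.Random(seed)
    pieces, cursor = [], 0
    for start, end in sorted(entity_spans):
        pieces.append(text[cursor:start])
        entity = text[start:end]
        pieces.append(MASK_TOKEN if rng.random() < mask_prob else entity)
        cursor = end
    pieces.append(text[cursor:])
    return "".join(pieces), text

# Toy usage with hand-written spans marking "BART" and "ROUGE-1" as entities
src = "We evaluate BART with ROUGE-1 on scientific abstracts."
corrupted, target = build_mnelm_example(src, [(12, 16), (22, 29)], seed=1)
print(corrupted)   # We evaluate <mask> with ROUGE-1 on scientific abstracts.
```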
To evaluate the performance of the model we used ROUGE-1, ROUGE-2, and ROUGE-L metrics. For scoring the occurrence of named entities and their soundness and completeness we use named-entity-wise precision and recall: NE precision = correct NE in summary/number of NE in summary NE recall = correct NE in summary/number of NE in source § EXPERIMENTS The training procedure of our model consists of the three main stages, illustrated in Figure <ref>. §.§ NER preparation To start our pipeline, we trained the Named Entity Recognition model. For this purpose, we used the RoBERTa <cit.> language model. After the training for 7 epochs, we obtained an F1 macro score of 0.51 on the test dataset. §.§ Custom LM pretraining BART <cit.> uses the standard sequence-to-sequence Transformer architecture <cit.> and it is pretrained by corrupting documents and then optimizing a reconstruction loss – the cross-entropy between the decoder’s output and the content of the original document. Unlike most of the existing denoising autoencoders, which are tailored to specific noising schemes, BART allows us to apply any type of document corruption. In the extreme case, where all information about the source is lost, BART is equivalent to a regular language model. This unique ability opens the road to usage of our previously trained NER model. We use it to find named entities in scientific texts from the ArXiv dataset and substitute them with [mask] tokens. This way, we bring the model's attention to the named entities instead of just random words, most of which might be from a general domain. In our experiments, we used a 0.5 probability of masking. This approach was inspired by the original BART paper, in the conclusion of which authors encourage further experiments with noising functions: “Future work should explore new methods for corrupting documents for pre-training, perhaps tailoring them to specific end tasks” <cit.>. We pretrained on 215,912 scientific articles on a single epoch starting with a learning rate of 5 * 10^-5 and a linear scheduler with gamma = 0.5 every 10,000 steps. §.§ Summarization training After pretraining the BART model, we finetuned it on a summarization task. Because BART has an autoregressive decoder, it can be directly fine-tuned for sequence generation tasks such as abstractive question answering and summarization. In both of these tasks, information is copied from the input, but manipulated, which is closely related to the denoising pre-training objective. Here, we trained BART with a batch size of 1 for a single epoch. We figured out that the model easily overfits, so we had to use a learning rate scheduled every 5,000 steps with gamma = 0.5. The initial learning rate was set to be 2 * 10^-5. For training we used NVIDIA Tesla K80 GPU, the training took around 30 hours. § RESULTS Our model shows higher precision and recall in named entity inclusion in comparison to the same architecture, which was pretrained using regular masked language model objective - results of both models can be found in Table <ref>. Examples of generated summaries are shown in Appendix <ref>. § DISCUSSION During the training of our model, we noticed that increase in common metrics for text summarization causes a decrease in named entity inclusion. We believe the reason for this is the limited length of the generated summary - one can have only so many named entities, before they will displace other words from the original text, causing the model to reformulate sentences and miss more words from the source. 
Therefore, during training, we tried to find the optimum point, at which the model will have high ROUGE scores and will still have high NE inclusion. At this point the MNELM-pretrained model, while keeping higher NE inclusion, converges faster than a regular MLM (in terms of ROUGE metrics). The comparison can be found in Table <ref>. Obtained summarization scores are inferior to the recently published state of the art models like PRIMER <cit.> (ROUGE-1 = 47.6; ROUGE-2 = 20.8) or DeepPyramidon <cit.> (ROUGE-1 = 47.2; ROUGE-2 = 20), but their ability to preserve named entities in text is yet to be determined. § CONCLUSION In this work, we described the task of preserving named entities in an automatically generated summary and presented the Masked Named Entity Language Model (MNELM) pretraining task. We show that with the MNELM pretraining procedure the BART model can achieve higher precision and recall of named entity inclusion. Pretraining with the MNELM task helps the model concentrate on domain-specific words, whereas MLM learns to reconstruct mostly common words. This leads to stronger attention on named entities, more likely preserving them in a generated text. The suggested model shows solid results in summarization metrics in comparison to the regular approach and converges faster. In further research, we plan to improve the quality of the pretraining by masking a sequence of named entities with a single mask – the step that could help the model, according to the original BART paper <cit.>. Also, we plan to conduct more experiments with different hyperparameters (such as masking probability), on more datasets, including PubMed <cit.> and to train an even better NER model. In addition, we plan to improve the proposed model by overcoming the internal limitation on the number of input tokens (currently, it only has access to 1024 tokens). acl_natbib § APPENDIX Below is the comparison of the generated summaries. Named entities are in bold. First text is generated by the MNELM-pretrained model, second text is produced by the MLM-pretrained model: 1. "the problem of admission control for web - based applications is typically considered as a problem of system sizing : enough resources are to be provisioned to meet quality of service requirements under a wide range of operating conditions. while this approach is beneficial in making the site performance satisfactory in the most common working situations, it still leaves the site incapable to face sudden and unexpected surges of traffic. in this context , it is impossible to predict the intensity of the overload. this work is motivated by the need to formulate a fast reactive and autonomous approach to admission control. in particular, we propose an original self- * overload control policy ( soc ) which enables some fundamental self - * properties such as self - configuration, self - optimization, self - protection." 2. "we propose an autonomous approach to admission control in distributed web systems. the proposed policy is based on self - configuration, self - optimization, and self - protection. in particular, the proposed system is capable of self - configuring its component level parameters according to performance requirements, while at the same time it optimizes its own responsiveness to overload. at session granularity , it does not require any prior knowledge on the incoming traffic and can be applied to non - session based traffic as well." MNELM model scores: NE precision = 0.91; NE recall = 0.49. 
MLM model scores: NE precision = 0.71; NE recall = 0.20.
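For completeness, the entity-level scores reported above can be computed from entity lists as in the short sketch below; the matching criterion (lower-cased exact match between summary and source entities) is one simple choice and is our assumption, since the exact matching rule is not spelled out here.

```python
def ne_precision_recall(source_entities, summary_entities):
    """Entity-level precision and recall as defined above:
    precision = correct NE in summary / number of NE in summary,
    recall    = correct NE in summary / number of NE in source.
    A summary entity counts as correct if it also occurs in the source."""
    source = {e.lower() for e in source_entities}
    summary = [e.lower() for e in summary_entities]
    correct = sum(1 for e in summary if e in source)
    precision = correct / len(summary) if summary else 0.0
    recall = correct / len(source) if source else 0.0
    return precision, recall

# Toy check with entities from the first appendix example
p, r = ne_precision_recall(["admission control", "soc", "self-optimization"],
                           ["admission control", "soc"])
print(round(p, 2), round(r, 2))   # 1.0 0.67
```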
http://arxiv.org/abs/2307.01634v2
20230704103752
Mechanisms of chiral plasmonics -- scattering, absorption and photoluminescence
[ "Yuqing Cheng", "Mengtao Sun" ]
physics.optics
[ "physics.optics" ]
School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China Corresponding author: mengtaosun@ustb.edu.cn School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China ABSTRACT: Chirality is a concept that one object is not superimposable on its mirror image by translation and rotation. In particular, chiral plasmonics have been widely investigated due to their excellent optical chiral properties, and have led to numerous applications such as optical polarizing element etc. In this study, we develop a model based on the concept of the interaction between harmonic oscillators to investigate and explain the optical chiral mechanisms of strongly coupled metal nanoparticles (MNPs). The chirality of the scattering, absorption, and photoluminescence spectra are carefully discussed in detail. The results show that the chirality of the system originates not only from the orientations of the MNPs, but also from the different eigen parameters between them. Specifically, the derived three factors contribute to the chirality: the symmetry, the coupling strength, and the coherent superposition of the emitted electric field. This work provides a deeper understanding on the chiral plasmonics and may guide relevant applications in theory. Mechanisms of chiral plasmonics - scattering, absorption and photoluminescence Mengtao Sun July 4, 2023 ============================================================================== § INTRODUCTION An object has chirality if it is not superimposable on its mirror image by translation and rotation. Usually, a chiral object is lack of symmetry such as mirror planes, inversion centers or improper rotational axes <cit.>. In particular, chirality in the field of plasmonics has attracted numerous attentions due to its promising applications, such as light controlling <cit.>, polarization-sensitive optical devices <cit.>, molecular sensing <cit.>, chiral catalysis <cit.>. Recently, Li Z. et al. <cit.> apply the chiral quadrupole field introduced by permanent magnets to dispersed magnetic nanoparticles, which results in long-range chiral superstructures. By tuning the magnetic field, they are able to control over the superstructures' handedness and chiroptical properties. Moreover, the chirality could be transferred to organic molecules and inorganic compounds. Singh G. et al. <cit.> use the self-assembly technology to synthesize different types of superstructures composed of magnetic nanocubes, such as belts, as well as single, double, and triple helices. They reveal a novel mechanism of symmetry breaking and chirality amplification, i.e., the neighboring helices tend to arrange in the same handedness thus maximizing packing. Jeong K. et al. <cit.> employ magnetoplasmonic nanoparticles to guide plasmonic Ag nanoparticles onto a helical magnetic flux. They are successful in tuning the chirality of the structures in real time, thus controlling the polarization state of light. All the above works are excellent in controlling light by introducing the chirality of the structures in different ways. However, a deeper understanding in the physical mechanisms of the chirality of these complex structures is in demand. In this study, we develop a model to investigate and explain the optical chiral mechanisms of strongly coupled metal nanoparticles (MNPs). This model is based on the concept of the interaction between harmonic oscillators. 
The chirality of the scattering, absorption, and photoluminescence (PL) spectra of the coupled MNPs are investigated in detail. We only consider two MNPs so that the mechanisms can be revealed as simple as possible. Further investigation on the chirality of multiple MNPs will be proceeded in the future. § MODEL As we have presented in our previous work <cit.>, we treat each individual MNP as an oscillator with their own eigen angular frequency ω_j and damping β_j. The oscillator is composed of numerous free electrons which are oscillating collectively when excited by external light. Here, the oscillating modes of the two coupled MNPs are in arbitrary directions. The schematic is shown in Fig. <ref>. We use x_j (j=1,2) to present the displacements of the jth oscillator, the directions of which are along their mode directions, respectively. In general, ẋ_j and ẍ_j represent the corresponding velocity and acceleration. Therefore, the equations of the two oscillators are written as: ẍ_1+β_1 ẋ_1 +ω^2_1 x_1 + e E_21/m_e=C_1 exp(-iω_ext), ẍ_2+β_2 ẋ_2 +ω^2_2 x_2 + e E_12/m_e =C_2 exp(-iω_ext). Here, e is the elementary charge, m_e is the mass of electron, C_1=-e/m_e E_x, C_2=-e/m_e (E_x cosθ_2 +E_y sinθ_2), and E_x and E_y are the electric field of the incident light in x and y directions, respectively. The electric field introduced by the two oscillators can be evaluated by <cit.>: E_12≅ -eN_1/4 πε_0 R^2( Θ_1 x_1/R +Θ_1 ẋ_1/c + Θ_2 Rẍ_1/c^2), E_21≅ -eN_2/4 πε_0 R^2( Θ_1 x_2/R +Θ_1 ẋ_2/c + Θ_2 Rẍ_2/c^2). Here, N_1 and N_2 are the free electron numbers of oscillator 1 and 2, respectively; ε_0 is the permittivity of free space, R is the distance between the two MNPs, and c is the speed of light in vacuum; the parameters Θ_1 and Θ_2 are functions of (θ_1,θ_2), and they are written as: Θ_1(θ_1,θ_2)=3cosθ_1 cos (θ_2-θ_1) -cosθ_2, Θ_2(θ_1,θ_2)=sinθ_1 sin (θ_2-θ_1). Therefore, Eq. <ref> can be written as: ẍ_1+β_1 ẋ_1 +ω^2_1 x_1 + η_2 ẍ_2 + γ_2 ẋ_2 + g_2^2 x_2 =C_1 exp(-iω_ext), ẍ_2+β_2 ẋ_2 +ω^2_2 x_2 + η_1 ẍ_1 + γ_1 ẋ_1 + g_1^2 x_1 =C_2 exp(-iω_ext). Here, the coupling coefficients are defined as: g_j^2= - N_j e^2 Θ_1/4 πε_0 m_e R^3,     γ_j= - N_j e^2 Θ_1/4 πε_0 m_e R^2 c, η_j= -N_j e^2 Θ_2/4 πε_0 m_e R c^2,     j=1,2. The form of Eq. <ref> is the same as Eq. 6 of Ref. <cit.>. The difference is that the coupling coefficients in Eq. <ref> are additionally dependent on θ_1 and θ_2, but the ones of the latter are corresponding to the particular case of (θ_1,θ_2)=(±π/2,0). Furthermore, Ref. <cit.> is a specific case of (θ_1,θ_2)=(0,0). First, we derive the formulas of the scattering, absorption, and PL spectra of the system. Next, we investigate the chirality of the strong coupled MNPs by analysing these spectra when excited by the right-handed (RCP) and left-handed circularly polarized (LCP) light, respectively. Last, we show how are the spectra are controlled by varying the orientation of the MNPs. For simplicity, we define Ω_j(α)=ω_j^2+β_j^2 α+α^2 and G_j(α)=g_j^2+γ_j α +η_j α^2 for j=1, 2, which would be used in the following derivations. §.§ Scattering and absorption For white light scattering spectra, α=-iω_ex. Assume x_j(t)=A_j exp(-iω_ext) for j=1, 2, and substitute them into Eq. <ref>, we obtain the equation: [ Ω_1 G_2; G_1 Ω_2 ]( [ A_1; A_2; ]) =( [ C_1; C_2; ]), with α=-iω_ex. The solutions are: A_1(ω_ex)=Ω_2 C_1-G_2 C_2/Ω_1 Ω_2 - G_1 G_2, A_2(ω_ex)=Ω_1 C_2-G_1 C_1/Ω_1 Ω_2 - G_1 G_2. 
The scattering spectrum is usually related to the far field, therefore the electric field is approximately proportional ẍ_j ∝ω_2. After substituting ω_ex with ω we obtain the white light scattering spectrum: I_sca(ω)= ω^4 |N_1 A_1(ω)+N_2 A_2(ω) cosθ_2 |^2 + ω^4 |N_2 A_2(ω) sinθ_2 |^2 . The absorption spectrum is introduced by the term β_j ẋ_j which refers to the friction force on the electrons <cit.>. Hence, the absorption spectrum can be written as: I_abs(ω)=N_1|ωβ_1 A_1(ω)|^2+N_2|ωβ_2 A_2(ω) |^2. §.§ Photoluminescence For the PL spectrum, the derivation is similar to that of Ref. <cit.>, therefore, we would rewrite some of the significance conclusions. The eigen solutions of PL should satisfy the following equations: ẍ_1+ β_1ẋ_1+ω_1^2 x_1+η_2ẍ_2+γ_2ẋ_2+g_2^2 x_2=0, ẍ_2+ β_2ẋ_2+ω_2^2 x_2+η_1ẍ_1+γ_1ẋ_1+g_1^2 x_1=0. Assume the solutions of PL are x_j(t)=B_jexp(α t) for j=1, 2, the solutions of α satisfy: Ω_1(α) G_2(α) G_1(α) Ω_2(α) =0. There are generally 4 solutions for α. Although the expressions of α are complicated, we could write it in a simple form: α_1=-β_1^'/2-iω_1^',  α_2=-β_1^'/2+iω_1^', α_3=-β_2^'/2-iω_2^',  α_4=-β_2^'/2+iω_2^'. Obviously, it appears four modes, but two of them are independent, with two new eigenfrequencies (ω_1^' and ω_2^') and two new damping coefficients (β_1^' and β_2^'). The analysis of these modes has been discussed in detail in Ref. <cit.>. The total solutions of x_j(t) can be written as: x_j(t)=A_j exp(α_0 t)+ ∑_k=1^4 B_jkexp(α_k t),  for j=1,2. Here, B_jk refers to the amplitude of the jth oscillator in the kth mode, and we define α_0=-iω_ex, then B_jk is expressed as: B_jk=-A_j ∏_n≠ k ^4 (α_0- α_n) -ẍ_j(0)∑_n≠ k^4 α_n +⃛x_j(0)/∏_n k^4(α_k-α_n). Here, the initial conditions are x_j(0)=0 and ẋ_j(0)=0, and ẍ_j(0) and ⃛x_j(0) can be derived from Eq. <ref>. After considering the angular θ_2, the PL spectrum can be written as <cit.>: I_PL(ω)=∑_k=1^4 |Re(α_k)| B_k^'/( ω+Im(α_k) )^2+(Re(α_k))^2. Here, Re(α_k) and Im(α_k) are the real and imaginary parts of α_k, respectively, and B_k^'=|α_k|^4(|N_1 B_1k+N_2 B_2kcosθ_2 |^2 + | N_2 B_2ksinθ_2 |^2) for k=1, 2, 3, 4. §.§ Chirality To analyze the chirality of the system, we should compare the spectra between the right- and left-handed circularly polarized incident light. Assume E_x=E_0, hence, for the RCP one, E_y=E_0exp(-iπ/2), the spectrum is denoted by I_R; for the LCP one E_y=E_0exp(+iπ/2), the spectrum is denoted by I_L. The intensity and line shape varies with the rotation of the incident light. We define the chirality of the system as: Chirality=I_R-I_L/I_R+I_L. Therefore, the chirality of the scattering, absorption, and PL spectra can be evaluated. § RESULTS AND DISCUSSIONS From Eq. <ref>, we notice that the coupling coefficients are proportional to Θ_1 or Θ_2, which are significant in the dynamics of the oscillators. Fig. <ref> shows the values of Θ_1(θ_1,θ_2) and Θ_2(θ_1,θ_2), in the range of -π/2≤θ_1,θ_2 ≤π/2. Obviously, Θ_1 and Θ_2 are centrosymmetric with (θ_1,θ_2), and the value ranges are -1≤Θ_1≤ 2 and -1≤Θ_2≤ 1/2. Θ_1 reaches its maximum at (θ_1,θ_2)=(0,0), and minimum at (θ_1,θ_2)=(π/4,-π/2) and (-π/4,π/2); Θ_2 reaches its maximum at (θ_1,θ_2)=(π/4,π/2) and (-π/4,-π/2), and minimum at (θ_1,θ_2)=(±π/2,0). Generally, in a practical case, |η_j| ≪ 1 so that g_j and γ_j play dominant roles in the coupling, i.e., Θ_1 is much more significant than Θ_2. Hence, the contour lines of Θ_1=0 indicate that there is (almost) no coupling at these angles, no matter how close the two MNPs are. 
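To make these expressions concrete, the short sketch below evaluates the scattering and absorption chirality for one illustrative parameter set (not the values used for the figures). We read Ω_j(α) as the damped-oscillator polynomial ω_j^2+β_j α+α^2, i.e. we take the squared damping in the definition above to be a typographical slip, and keep G_j(α)=g_j^2+γ_j α+η_j α^2 as written.

```python
import numpy as np

# SI constants
e, m_e = 1.602176634e-19, 9.1093837015e-31
eps0, c = 8.8541878128e-12, 2.99792458e8

def spectra(omega, Ey, omega1, omega2, beta1, beta2, N1, N2, R, th1, th2):
    """Scattering and absorption intensities of the two coupled oscillators at
    driving frequency omega (rad/s), for E_x = 1 and a given complex E_y."""
    Th1 = 3 * np.cos(th1) * np.cos(th2 - th1) - np.cos(th2)
    Th2 = np.sin(th1) * np.sin(th2 - th1)

    def couplings(N):
        pref = -N * e**2 / (4 * np.pi * eps0 * m_e)
        return pref * Th1 / R**3, pref * Th1 / (R**2 * c), pref * Th2 / (R * c**2)

    g1s, gam1, eta1 = couplings(N1)   # g_1^2, gamma_1, eta_1
    g2s, gam2, eta2 = couplings(N2)

    a = -1j * omega                   # alpha = -i * omega_ex
    Om1 = omega1**2 + beta1 * a + a**2
    Om2 = omega2**2 + beta2 * a + a**2
    G1 = g1s + gam1 * a + eta1 * a**2
    G2 = g2s + gam2 * a + eta2 * a**2

    C1 = -(e / m_e) * 1.0
    C2 = -(e / m_e) * (np.cos(th2) + Ey * np.sin(th2))
    det = Om1 * Om2 - G1 * G2
    A1 = (Om2 * C1 - G2 * C2) / det
    A2 = (Om1 * C2 - G1 * C1) / det

    I_sca = omega**4 * (np.abs(N1 * A1 + N2 * A2 * np.cos(th2))**2
                        + np.abs(N2 * A2 * np.sin(th2))**2)
    I_abs = N1 * np.abs(omega * beta1 * A1)**2 + N2 * np.abs(omega * beta2 * A2)**2
    return I_sca, I_abs

# Illustrative parameters: resonances near visible frequencies, 20 nm spacing
w = np.linspace(2.0e15, 4.5e15, 400)             # rad/s
p = dict(omega1=3.0e15, omega2=3.2e15, beta1=2e14, beta2=2e14,
         N1=1e6, N2=1e6, R=20e-9, th1=np.pi / 4, th2=np.pi / 3)
sca_R, abs_R = spectra(w, Ey=-1j, **p)           # RCP: E_y = E_0 exp(-i pi/2)
sca_L, abs_L = spectra(w, Ey=+1j, **p)           # LCP: E_y = E_0 exp(+i pi/2)
chirality_sca = (sca_R - sca_L) / (sca_R + sca_L)
print("max |chirality| of the scattering spectrum:", np.abs(chirality_sca).max())
```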
§.§ Scattering and absorption First, we discuss the chirality of the scattering and absorption spectra. Fig. <ref> shows the maximum of the chirality as a function of N_2, ω_2, and β_2, respectively, varying with R. Here, the maximum is defined as follows: for a certain N_2, ω_2, or β_2, we calculate the chirality as a function of (θ_1,θ_2) and find its maximum value. In Fig. <ref>a, the chirality of the absorption spectra is 0 for all R and N_2. However, the chirality of the scattering spectra is not 0. When N_1=N_2, the system is symmetric in the parameters of the MNPs and shows no chirality. The chirality appears when N_1 ≠ N_2, which breaks the symmetry. Evidently, two local maxima appear, for N_2<N_1 and N_2>N_1, respectively. When N_2<N_1, as N_2 decreases, the breaking of symmetry gets greater; on the other hand, the differences between the two groups of coupling coefficients, i.e., Δ g=|g_21-g_12|, Δγ=|γ_21-γ_12|, and Δη=|η_21-η_12|, increase. The former increases the chirality, but the latter decreases it. Hence, the first local maximum (at about N_2=0.35N_1 with R=20 nm) results from both factors. The reason for the second one (at about N_2=5N_1 with R=20 nm) is similar, hence we do not describe it here. As R increases, i.e., the coupling strength decreases, the chirality decreases. This indicates that in this case (only varying N_2) the chirality of the scattering is greatly influenced by the coupling strength, which means this chirality originates from the coupling. In Fig. <ref>b, both the scattering and absorption spectra have chirality, and the former is much larger than the latter. Similar to Fig. <ref>a, there are two local maxima, for ω_2<ω_1 and ω_2>ω_1, respectively. The reason is also similar to Fig. <ref>a. The chirality is sensitive to Δω=|ω_2-ω_1|, the maximum values of which are at about Δω=0.06ω_1. As the coupling strength decreases, the chirality of the absorption spectra decreases, but the scattering one changes little. This indicates that the chirality of the scattering spectra does not originate from the coupling, the origin of which will be discussed later in Fig. <ref>. In Fig. <ref>c, both the scattering and absorption spectra have chirality with little difference between them, and both values are so small that we can ignore the chirality in this case. As the coupling strength decreases, both chirality values decrease. We do not find a local maximum in this case (only varying β_2), because the difference Δβ=|β_2-β_1| is not large enough to exert its influence on the chirality, i.e., the breaking of symmetry plays a more important role. To analyze the chirality and its behavior, some details are included. Fig. <ref>a and <ref>b show the chirality of the scattering spectra as a function of (θ_1,θ_2) with N_2/N_1=0.35 and N_2/N_1=5, respectively, which correspond to the two local maxima in Fig. <ref>a. The contour lines of value 0 (except for the “θ_2=0” lines) are similar to those of Θ_1 in Fig. <ref>a. This indicates that the chirality in this case is related to the coupling strength, which agrees with the conclusion drawn from Fig. <ref>a that it originates from the coupling. Furthermore, the chirality also originates from the breaking of symmetry, because the chirality gets higher at more nonsymmetric points of (θ_1,θ_2). Fig. <ref>c and <ref>d show the scattering spectra excited with RCP and LCP light, corresponding to Fig. <ref>a and <ref>b, respectively.
Here, we choose the configurations (θ_1,θ_2) at which the chirality reaches its maximum. The results show that the scattering spectra are evidently different between the RCP and LCP cases. Due to the strong coupling, the spectra split into two modes, which has been discussed in detail in our previous work <cit.>. Since we focus on the chirality properties in this work, we do not discuss the splitting further. Fig. <ref>a and <ref>b show the chirality of the scattering and absorption spectra as a function of (θ_1,θ_2), respectively, corresponding to the two local maxima in Fig. <ref>b. In Fig. <ref>b, the contour lines of value 0 (except for the “θ_2=0” lines) are similar to those of Θ_1 in Fig. <ref>a, which indicates that the chirality of the absorption spectra is related to the coupling strength, in agreement with the conclusion drawn from Fig. <ref>b. However, in Fig. <ref>a, the “0” contour lines differ from those of Θ_1 in Fig. <ref>a. This indicates that in this case (only varying ω_2) the coupling plays a minor role in the chirality of the scattering spectra, in agreement with the behaviour of the chirality in Fig. <ref>b (solid lines). Hence, we can summarize that in this case the breaking of symmetry plays the more important role in the chirality of the scattering spectra. Fig. <ref>c and <ref>d show the scattering and absorption spectra excited with RCP and LCP light, corresponding to Fig. <ref>a and <ref>b, respectively. Here, we choose the configurations (θ_1,θ_2) at which the chirality reaches its maximum. The results show that the scattering spectra are evidently different between the RCP and LCP cases, whereas the absorption spectra are not. The reason can be found in Eq. <ref> and Eq. <ref>. In the scattering, the spectrum arises from the coherent superposition of the emitted electric fields, i.e., the formula contains cross (coherent) terms that depend on (θ_1,θ_2), are relevant to the symmetry, and contribute to the chirality; in the absorption, the spectrum is the sum of the individual intensities, with no cross terms and no dependence on (θ_1,θ_2). The only contribution to the chirality of the absorption spectra is the coupling strength, which is related to (θ_1,θ_2). Therefore, the chirality of the scattering is larger than that of the absorption. §.§ Photoluminescence Second, we discuss the chirality of the PL spectra. Fig. <ref> shows the maximum of the chirality as a function of ω_2 and β_2, respectively, varying with R. In Fig. <ref>a, when |Δω| is very small, the chirality reaches a maximum of about 0.6; when |Δω| gets larger, the chirality decreases to about 0.1-0.2. In Fig. <ref>b, when |Δβ| increases from 0, the chirality increases rapidly from 0 to about 0.8. In both cases, in the strong-coupling regime the chirality varies little with R, while it vanishes when there is no coupling. This indicates that Δβ affects the chirality of the PL spectra more than Δω does, and that the chirality can be quite sensitive to small values of Δω and Δβ. We also calculated the chirality as a function of N_2 with ω_1=ω_2 and β_1=β_2, but the value is almost 0. This indicates that in this case (varying N_2) there is no chirality in the system, hence we do not show it in the figure. To analyze the chirality and its behavior, some details are given below.
Fig. <ref>a and <ref>b show the chirality of the PL spectra as a function of (θ_1,θ_2) with ω_2/ω_1=1.0003 and β_2=β_1, respectively. The former corresponds to the maximum in Fig. <ref>a (R=20 nm), and the latter corresponds to a general case in Fig. <ref>b. The contour lines of value 0 (except for the “θ_2=0” lines) are similar to those of Θ_1 in Fig. <ref>a. The pattern is clearly different from Fig. <ref> and <ref>. For the scattering, the maximum usually appears at points far from the 0 contour lines; for the PL, instead, the maximum appears near the 0 contour lines. This indicates that the chirality of the PL spectra is quite sensitive to (θ_1,θ_2) near the contour lines, a sensitivity similar to that observed for Δω and Δβ. Fig. <ref>c and <ref>d show the PL spectra excited with RCP and LCP light, corresponding to Fig. <ref>a and <ref>b, respectively. Here, we choose the configurations (θ_1,θ_2) at which the chirality reaches its maximum. The results show that the PL spectra are different between the RCP and LCP cases. Although R=20 nm corresponds to the strong-coupling case, there is no splitting for the maximum configurations. As mentioned above, the chirality reaches its maximum near the 0 contour lines, where the coupling strength is negligible for these (θ_1,θ_2); hence the spectra show no splitting. §.§ Controlling spectra Last, we show a way to control the spectra of the system, i.e., changing the configuration of the two MNPs. Fig. <ref> shows an example of a general case of the scattering, absorption, and PL spectra excited with RCP and LCP light, at θ_1=π/4 and varying θ_2. As θ_2 decreases from π/2 to -π/2, the resonance frequencies of the red mode (λ_c>600 nm) increase and then decrease, accompanied by changes in the intensities; those of the blue mode (λ_c<600 nm) behave oppositely. If we connect all the peak points of the red (blue) mode in sequence, we obtain a left-handed (right-handed) circle (not shown), which is quite interesting. Therefore, we can conclude that the resonance frequencies, damping coefficients, and intensities of all the spectra can be controlled by θ_2. The difference between the RCP and LCP cases (the chirality) depends on the configuration (θ_1,θ_2). Therefore, the chirality is also controlled by θ_2. § CONCLUSIONS In conclusion, there are three important factors which directly influence the chirality of the coupled MNPs: the symmetry (factor 1), the coupling strength (factor 2), and the coherent superposition of the electric field (factor 3). Strictly speaking, factors 2 and 3 can be influenced by factor 1, but we separate them to illustrate the mechanisms behind the chirality of the different spectra. Factors 1-3 are influenced by the orientations of the MNPs, i.e., (θ_1,θ_2), and factor 1 is additionally influenced by the difference between the MNPs' eigen parameters (parameter difference), i.e., Δ N, Δω, and Δβ. These three factors play roles of different weight in the chirality of the scattering, absorption, and PL spectra. In particular, for the scattering, when Δω=Δβ=0, factors 1-3 are all important; when Δ N=0, Δβ=0, factor 2 is not important compared with factors 1 and 3. The chirality is not that large (about 0.2) even in strong coupling. For the absorption, factors 1 and 2 play important roles, but the chirality is small (less than 0.1).
For the PL, when Δω=Δβ=0, no chirality is obtained; when Δ N=0, Δβ=0, the chirality is extremely sensitive to ω_2; when Δ N=0, Δω=0, the chirality becomes large (about 0.8) even for a small Δβ. Moreover, all the spectra can be controlled by varying the orientations (θ_1,θ_2). This work provides a deeper understanding of chiral plasmonics and may guide relevant applications from a theoretical perspective. § ACKNOWLEDGMENT This work was supported by the Fundamental Research Funds for the Central Universities (Grant No. FRF-TP-20-075A1). § DISCLOSURES The authors declare no conflicts of interest. § DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon reasonable request. § REFERENCES
http://arxiv.org/abs/2307.01121v1
20230703155139
Artifacts Mapping: Multi-Modal Semantic Mapping for Object Detection and 3D Localization
[ "Federico Rollo", "Gennaro Raiola", "Andrea Zunino", "Nikolaos Tsagarakis", "Arash Ajoudani" ]
cs.RO
[ "cs.RO", "cs.CV" ]
Design, fabrication, and characterization of electrostatic comb-drive actuators for nanoelectromechanical silicon photonics Søren Stobbe August 1, 2023 =========================================================================================================================== Geometric navigation is nowadays a well-established field of robotics and the research focus is shifting towards higher-level scene understanding, such as Semantic Mapping. When a robot needs to interact with its environment, it must be able to comprehend the contextual information of its surroundings. This work focuses on classifying and localising objects within a map, which is under construction (SLAM) or already built. To further explore this direction, we propose a framework that can autonomously detect and localize predefined objects in a known environment using a multi-modal sensor fusion approach (combining RGB and depth data from an RGB-D camera and a lidar). The framework consists of three key elements: understanding the environment through RGB data, estimating depth through multi-modal sensor fusion, and managing artifacts (i.e., filtering and stabilizing measurements). The experiments show that the proposed framework can accurately detect 98% of the objects in the real sample environment, without post-processing, while 85% and 80% of the objects were mapped using the single RGBD camera or RGB + lidar setup respectively. The comparison with single-sensor (camera or lidar) experiments is performed to show that sensor fusion allows the robot to accurately detect near and far obstacles, which would have been noisy or imprecise in a purely visual or laser-based approach. § INTRODUCTION To boost navigation autonomy and contextual awareness of mobile robots in unstructured environments, geometric information collected from the surroundings and the associated semantic data play key roles. The latter, in particular, includes qualitative environment information that can contribute to improving the robot's autonomy for navigation, task planning and manipulation, and simplifying human-robot interaction (HRI). This problem is tackled in the Semantic Mapping field, which aims to organize objects into classes and compute their pose and shape in a specific fixed reference frame. In this way, the environmental geometric information is supported by high-level features which increase the robot's awareness of the environment. In our specific case, we deal with the object detection and localization problem, which nowadays is widely investigated. For instance, in the last Darpa Subterranean Challenge[Darpa Subterranean Challenge: <https://www.subtchallenge.com/>], the main objectives were multi-robot exploration and object mapping in unknown environments, and the overall score was calculated based on the number of correctly detected and localized objects on the map. Different works were proposed to cope with the semantic mapping problem. Most recent results in robotics are facing the problem of using only RGB data and some interactive structures to be compliant with dynamic environments <cit.> while others rely on RGB-D data exploiting older algorithmic strategies (e.g. PnP algorithm) <cit.>. In autonomous driving, the RGB camera and lidar sensor fusion for semantic understanding is a currently tackled problem <cit.>. For a broader evaluation of the literature review see Sect. <ref>. Independently of the approaches used in robotics literature, the first thing which stands out is that most of them rely only on camera sensors. 
Cameras can give lots of dense information to the user especially if paired with depth data. However, their accurate depth range is within a few meters, leading to heavy depth measurement errors as the distances increase, especially if the robot is moving. This is particularly true for outdoor and vast indoor environments (e.g., warehouses), where depth cameras are limiting and object semantic mapping remains a major challenge for far distances. In these cases, lidar sensors are an essential camera partner, allowing to have precise depth measurements for a wider distance range. Rather, in autonomous driving, the lidar and the RGB camera are nowadays commonly used but depth cameras are not considered due to their low resolution in the wide outdoor areas commonly faced in driving scenarios. Another aspect not considered in most of the robotics examined works is that they do not account for limited resources applications which should run on embedded devices (e.g. Nvidia Jetson Nano[Nvidia Jetson Nano: <https://developer.nvidia.com/embedded/jetson-nano-developer-kit>]). Furthermore, Semantic Mapping is often used in the context of grasping or augmented reality scenarios while this work proposes an application for detecting and localizing objects (a.k.a artifacts) for high-level navigation tasks. In our work, we aim to merge robotics and autonomous driving applications' strengths and present a modular architecture for semantic mapping[Artifacts Mapping Youtube videos: <https://www.youtube.com/playlist?list=PLdibjJfM06zugiWd-yUcdGH-SRWKTA3nQ>]. We provide a multi-modal (camera-lidar) online semantic mapping framework which can fuse sensor information in real-time depending on the object distance and sensor's accuracies. We use image semantic information to enrich objects' filtered and stabilised positions to have precise object localization. The artifacts' shapes are simplified as spheres but they will be improved in future development. Our work relies on external geometric-based navigation frameworks such as SLAM algorithms or other localization algorithms (e.g., AMCL <cit.>). The proposed application demonstrates good accuracy for both near and far objects thanks to the camera-lidar depth fusion which, as far as the authors know, was not examined in other robotic or autonomous driving semantic mapping works. The application operates online also on low resources embedded systems (see Sect. <ref>) which strengthens the contributions of this paper. Moreover, we developed a Rviz[rviz: <http://wiki.ros.org/rviz>] application which improves the user experience (UX) for visualization and interaction with the objects and the robot (see Fig. <ref>). The authors will provide on-demand a Docker application[The authors will grant access to a Docker image with the compiled application upon acceptance of the paper, based on the Freeware license.] as an added contribution, for running the artifacts mapping applications in simulation or on a robot (see Sect <ref>-<ref> and Sect. <ref>). The paper is divided as follows. In Sect. <ref> a literature review of some recent works in semantic mapping is presented. The framework developed for this work is explained in Sect. <ref>. We can distinguish the framework pipeline as two perception modules and a manager one. The first perception module performs 2D object detection while the second aims to estimate 3D artifacts position by fusing camera and lidar depth information (see Sect. <ref>). 
The last module is needed to stabilize the perception estimations and to filter out noisy outliers (see Sect. <ref>). An application of the presented framework (see Sect. <ref>) is proposed based on two steps: (i) the robot can autonomously classify and localize objects on a map and save them in a specified format, (ii) the robot can load the artifacts as way-points on the map and the user can interactively select them to command the robot moving in that place to successively accomplish various kind of tasks such as manipulation, grasping, inspections or others. In Sect. <ref> the experiments to validate the framework are evaluated and discussed, and in Sect. <ref> the conclusion and some future improvements are provided. § RELATED WORKS In literature, the semantic mapping problem was addressed using several approaches both in robotics and autonomous driving fields. Different surveys were presented that analysed this topic from various points of view. In <cit.> the authors explored the semantic mapping application in a human-robot collaboration scenario in an indoor environment while in <cit.> the semantic SLAM problem is presented in a general fashion analysing the works also in terms of perception, robustness, and accuracy. In <cit.>, the less recent semantic mapping works are reviewed (i.e., before 2014). This survey is a good reference to analyse the first development for the semantic mapping problem which yielded the more recent applications. Among the modern semantic mapping approaches presented in robotics literature in the last decade, some first successful examples are <cit.> and <cit.>. In <cit.> the authors presented a monocular SLAM system that uses a SURF <cit.> feature extractor to check correspondencies and reconstruct the object's geometry. Instead, the authors in <cit.> showed an object-oriented 3D SLAM based on an ICP <cit.> object pose refinement and demonstrated that the introduction of semantic objects in the SLAM loop improves performances. the authors in <cit.> developed a monocular SLAM-aware object recognition system based on multi-view object proposals and efficient feature encoding methods giving as output a semi-dense semantic map. In <cit.> the authors proposed a framework which directly manages 3D objects. They use a Kinect[Microsoft Kinect camera <https://en.wikipedia.org/wiki/Kinect>] camera to reconstruct the 3D environment from different points of view and classify them while estimating their pose. In <cit.> the Data Associated Recurrent Neural Networks (DA-RNN) is introduced, which is an RNN for semantic labelling of RGB-D videos. The network output is fused with the KinectFusion algorithm <cit.> to merge semantic and geometric data. In <cit.> a Convolutional Neural Network (CNN) is used along with the ElasticFusion SLAM algorithm <cit.> to provide long-term dense correspondences between RGB-D video frames even in loopy trajectories. The authors in <cit.> leveraged ORB-SLAM2 <cit.> to reconstruct the geometric environment while using Single-Shot multi-box Detector (SSD) <cit.> along with an unsupervised 3D segmentation algorithm to place objects in the environment. Moving towards more recent works, in <cit.> is presented the Contextual Temporal Mapping (CT-Map). They modelled the semantic inference as a Conditional Random Field (CRF) to account for contextual relations between objects and the temporal consistency of their pose. MaskFusion <cit.> is a real-time object-aware semantic and dynamic RGB-D SLAM algorithm. 
The greatest difference with respect to its predecessors is that it can cope with dynamic objects by continuously labelling them. Fusion++ <cit.> performs an object-level SLAM based on a 3D graph map of arbitrary reconstructed objects. They used RGB-D cameras, Mask-RCNN <cit.> instance segmentation and the Truncated Signed Distance Function (TSDF) to perform the semantic reconstruction. In <cit.> is presented an approach that incrementally builds a volumetric object-centric map with an RGB-D camera. They used an unsupervised geometric approach with instance-aware semantic predictions to detect previously unseen objects. They then associated the 3D shape locations with their classes if available and integrate them into the map. This approach has limited time performances to be used on a mobile robot because it runs at 1 HZ so it could be impractical in real-time. Conversely, in <cit.> the authors obtained a real-time dense reconstruction and semantic segmentation of 3D indoor scenes. They used an efficient super-voxel clustering method and conditional random fields (CRF) with higher order constraints from structural and object cues, enabling progressive dense semantic segmentation without any precomputation. The CRF infer optimal segmentation labels from the prediction of a deep neural network and runs in parallel with a real-time 3D reconstructor which utilizes RGB-D images as input. In <cit.> an open-source C++ library for metric-semantic visual-inertial SLAM in real-time is presented. They provide a modular code composed of a visual-inertial odometry (VIO) module, a pose graph optimizer, a 3D mesh-building module, and a dense 3D metric-semantic reconstruction module. The authors in <cit.>, used a UAV equipped with a lidar, an RGB camera and a thermal camera to augment 3D point clouds and image segmentation masks while also generating an allocentric map. One of the last available works which focus on this topic is <cit.> which presented a semantic mapping framework which uses only RGB data. They did not accomplish only object mapping but they provided a framework that can also distinguish different rooms and buildings. They exploited the 3D dynamic scene graphs <cit.> to abstract the different layers of inference (i.e. object, room and building), to solve problems such as loop closure detection and to cope with the mapping problem. Instead, the authors of <cit.> used RGB-D cameras to reconstruct an allocentric semantic map. They used a keypoint-based approach for pose estimation using a CNN keypoint extractor trained on synthetic data. Object poses were recovered from keypoint detections in each camera viewpoint with a variant of the PnP algorithm. The outputs obtained from the multi-camera system were then fused using weighted interpolation. In autonomous driving, the multi-sensor fusion problem for 3D object detection is faced in <cit.> which uses lidar and RGB camera sensors to estimate the objects positions in the environment through ground estimation and depth completion. They use an end-to-end approach to train their multi-task network. The authors in <cit.> build a semantic map with a laser-based semantic segmentation of the point cloud not requiring any camera data. In <cit.>, the authors provided a lidar-based SLAM for the geometric mapping and then use a CRF to fuse and optimize the camera semantic labels to obtain the semantic map. 
Instead, in <cit.>, the camera and lidar data are used to build a probabilistic semantic octree map considering all the uncertainties of the sensors involved in the process. The authors in <cit.> presented one of the latest works in autonomous driving semantic mapping. They use an RGB camera and a lidar to perform semantic segmentation, direct sparse visual odometry and global optimization to include GNSS data in the mapping process. Our review of the state-of-the-art indicated that most of the works on robotics platforms rely only on camera measurements and the experiments are limited to small indoor environments. Instead, in the autonomous driving scenario the camera-lidar fusion is already used for semantic tasks but they rarely use depth cameras, their lidars are generally more powerful (i.e., they have 128-row lidars compared to the 16 ones commonly used in robotics) and they test the application in driving outdoor scenarios which offer different challenges with respect to robotic indoor once. Hence, with our work, we aim to stress the fact that RGB-D cameras and lidars are complementary sensors also in robotic semantic applications. For the semantic mapping application, we stated that with both sensors we can correctly localize objects at different distance ranges, improving detection accuracy. § ARTIFACT MAPPING FRAMEWORK In this section, the whole framework is presented as a conjunction of two blocks: Sect. <ref> for object perception and Sect. <ref> for object managing. In Sect. <ref> the provided UI application is illustrated. §.§ Artifacts detection and position estimation The perception part can be conceptually divided into two components: (i) 2D object segmentation, (ii) 3D object position estimation using camera-lidar filtering. §.§.§ 2D object segmentation In this phase, a deep neural network <cit.> is used to infer from RGB images (see Fig. <ref>a) some predefined objects' classes and their masks. During the navigation, the robot takes pictures of the environment using the camera mounted on it. The pictures are passed into an instance segmentation deep neural network which outputs the classification labels and masks (i.e., a binary image having 1 where the object is found and 0 elsewhere) for each object recognized on the image (see Fig. <ref>d). The outputs are grouped and passed to the next module which will convert 2D data into 3D ones. An optional feature provided in this module is the possibility to filter out classes in real-time upon request. In this way, the robot can map different objects online depending on the requirements proposed. Other implementation aspects will be further explained in Sect. <ref>. §.§.§ 3D object position estimation using camera-lidar filtering This module fuses RGBD camera and lidar measurements to have a precise estimate of the objects' positions in the environment. The input is composed of the classification labels and masks found in the previous module, and depth information extracted from the camera (see Fig. <ref>b) and the lidar (see Fig. <ref>c). Sensors depth measurements are first analyzed separately in the following. The depth image obtained from the camera (see Fig. <ref>b) is filtered using the recognized objects masks through element-wise matrix multiplication. 
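As a concrete illustration of the camera branch just described, the following sketch (a simplified Python/NumPy version with placeholder names, not the authors' implementation) applies the element-wise mask multiplication and back-projects the surviving depth pixels into a 3D point cloud with the standard pinhole convention, as in the equation reported next; in the actual pipeline the cloud is then voxel-downsampled and outlier-filtered before the centroid is taken.

import numpy as np

def masked_depth_to_pointcloud(depth, mask, fx, fy, px, py):
    # Keep only the depth pixels of one detected object (element-wise
    # multiplication with its binary mask) and back-project them to 3D
    # camera coordinates using the camera intrinsics fx, fy, px, py.
    obj_depth = depth * mask              # element-wise masking of the depth image
    v, u = np.nonzero(obj_depth)          # pixel coordinates belonging to the object
    z = obj_depth[v, u]
    x = (u - px) / fx * z
    y = (v - py) / fy * z
    return np.stack([x, y, z], axis=1)    # N x 3 point cloud of the object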
The output, containing only the depth data of the object plus some sensor noise and environment outliers, is used to build a 3D point cloud projecting the 2D image points in the 3D space using the formula in the equation: [ x_C; y_C; z_C ] = [ 1/f_x 0 -p_x/f_x; 0 1/f_y p_y/f_y; 0 0 1 ][ u; v; 1 ] z_C , where x_C, y_C, z_C are the 3D point coordinates with respect to the camera, u, v are the pixels on the image plane and f_x, f_y, p_x and p_y are the camera intrinsic parameters (focal distances and sensor's centre). Note that z_C is the depth measured by the camera depth sensor. The obtained point cloud is filtered using a voxel grid downsampling filter[voxel grid downsampling filter: <https://pointclouds.org/documentation/tutorials/voxel_grid.html>] to reduce the number of points and, consequently, a radius outlier filter[radius outlier removal: <https://pointclouds.org/documentation/tutorials/remove_outliers.html>] is applied to remove the outliers induced by sensors noises and inference imperfections. The final point cloud is then used to compute the camera artifact centroid X_C as the mean of its points. The 3D lidar centroid estimation is computed as follows. Projecting the 3D lidar points (see Fig. <ref>c) in the 2D detected masks images using Eq. <ref>, we are able to extract the object points of interest from the point cloud (i.e., the points which have the 2D projection inside the mask). z_L [ u; v; 1 ] = [ f_x 0 p_x; 0 f_y p_y; 0 0 1 ][ R_L^C T_L^C ][ x_L; y_L; z_L; 1 ] , where R_L^C∈ℝ_3x3 and T_L^C∈ℝ_3x1 are the rotation matrix and the translation vector between the lidar and the camera, x_L, y_L, z_L are the 3D centroid position with respect to the lidar and the other parameters are the same of Eq. <ref>. The extracted point cloud, representing the noisy artifact, will be then filtered using a radius outlier filter similar to the one used for the camera. Both radius filter parameters are directly dependent on the number of point cloud points because different distances and sizes of objects affect the point-cloud density and consequently the filtering. Finally, the mean of the point cloud is computed to obtain the lidar artifact centroid X_L. Once both centroid measurements are available, they are fused in the artifact centroid X following the rules in the equation: X= 0 If dist_C < min_C X_C If min_C≤ dist_C≤ acc_C ξ X_C + (1 - ξ) X_L If acc_C≤ dist_C≤ max_C X_L If dist_C > max_C , where dist_C is the euclidean distance between the 3D point estimates and the camera, min_C and max_C are the minimum and maximum distances the depth camera can perceive, acc_C is the distance within which the camera can have accurate enough measurements to be used alone for the object localization (the camera information are generally provided by the sensors vendors), X_L∈ℝ^3 and X_C∈ℝ^3 are the lidar and camera 3D centroid estimates and ξ∈ [0, 1] ∈ℝ is the fusion weight represented by the blue slope of the segments between acc_C and max_C in Fig. <ref> and it is computed as follows: ξ = -1/max_C - acc_C(dist_C-acc_C)+1 Using the filtered camera and lidar point clouds, a rough 3D radius estimation ρ of the objects is performed. The camera radius ρ_C and the lidar radius ρ_L are computed as the mean of the two bigger dimensions along the X, Y and Z point cloud axis. the final radius ρ is computed following the same centroid fusion rules of Eq. <ref> substituting X with ρ, X_C with ρ_C and X_L with ρ_L. Also, the view angle ϕ of the artifact with respect to the robot is computed. 
Such an angle is rotated with respect to the map reference frame for implementation reasons with equation: ϕ = atan2(r_21, r_11) + atan2(y_r, x_r) , where the r_ij is the entry at row i and column j of the rotation matrix R_r^m∈ℝ_3x3 between the map m and the robot r and x_r, y_r are the x, y positions of the artifact centroid with respect to the robot base. The two addends of Eq. <ref> represent respectively the heading angle between the robot and the map and the angle between the robot and the 3D centroid. §.§ Artifacts manager for data association The manager (see Fig. <ref>f) is needed to filter out outliers and to stabilize artifact position estimations provided by the sensor fusion module. This process is generally known as data association<cit.><cit.>. The manager is composed of two modules: (i) object position filtering and (ii) object position stabilization which runs asynchronously in parallel. §.§.§ Position filtering Using a temporary data structure, the temporary buffer, we store and filter the perceived artifacts. Once the manager receives the 3D artifacts position estimations from the perception module (see Sect. <ref>), it checks if the artifacts were already seen before (i.e., the distance between one of the already seen artifacts and the current one is less than its 3D radius). If this is the case then the artifact in the temporary buffer is updated. Otherwise, for each not previously seen artifact received, the manager creates a new artifact instance in the temporary buffer. These instances have their own moving average filter which estimates the average of the artifact centroid position and its radius with Eq. <ref> and computes a variance based on the distances between the position and the moving average in the filter horizon with Eq. <ref>. μ = 1/N∑_χ∈Ω_Nχ σ = 1/N∑_χ∈Ω_N ||χ - μ||^2 , where N∈ℕ is the number of measurement in the moving average set Ω_N of 3D points, χ∈ℝ^3 represent the current 3D position measurement, μ∈ℝ^3 is the 3D mean position and σ∈ℝ represent the variance of the filter. §.§.§ Position stabilization This module checks the stability of the artifacts in the temporary buffer and stores stable artifacts in another similar structure, the stable buffer. If an artifact in the temporary buffer is stable, the stabilizer moves the artifact from the temporary buffer to the stable one. An artifact is considered stable when its moving average filter variance σ is less than half its 3D artifact radius ρ and at least half the average filter set Ω_N is filled. This means that we have enough stable object position estimations and the object position average can be used for fixing the object position on the map. At the end of the Artifacts Mapping application, an additional data association step is performed. The artifacts belonging to the same class which overlay each other on the XY plane are merged into a single artifact. This step reduces the duplicated object which sometimes appears on the map due to different point-of-view measurements and occlusions. After that, the stable artifacts buffer is saved in a yaml file which could be loaded into the user interface application presented in the next section. §.§ User Interface for goal sending A User Interface (UI) application based on a Rviz plugin (see Fig. <ref>) was developed to provide an intuitive visualization of the artifacts on the map, to send commands to the robot for moving near an artifact of interest and to delete artifacts which the user do not need or are wrong. 
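Returning to the manager, the per-artifact filtering and stability test (the moving average μ and the variance σ defined above) can be sketched as follows; this is an illustrative Python class with hypothetical names, not the actual implementation. Artifacts passing the test are the ones moved to the stable buffer and eventually written to the yaml file.

import numpy as np
from collections import deque

class ArtifactTrack:
    # Moving-average filter of one artifact: mu is the mean of the stored
    # centroid measurements, sigma the mean squared distance to mu; the
    # artifact is declared stable when sigma < radius/2 and at least half
    # of the filter horizon has been filled.
    def __init__(self, radius, horizon=10):
        self.radius = radius
        self.horizon = horizon
        self.measurements = deque(maxlen=horizon)

    def update(self, centroid):
        self.measurements.append(np.asarray(centroid, dtype=float))

    def mu(self):
        return np.mean(np.asarray(self.measurements), axis=0)

    def sigma(self):
        pts = np.asarray(self.measurements)
        return float(np.mean(np.sum((pts - self.mu()) ** 2, axis=1)))

    def is_stable(self):
        return (len(self.measurements) >= self.horizon / 2
                and self.sigma() < 0.5 * self.radius)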
Such artifacts can be loaded from the yaml file obtained with the artifacts mapping application. Through the UI application, the user can send nav_msgs/goal ROS messages which can be used by the robot to move towards the object (e.g., using the ROS navigation stack as we do, see Sect. <ref>). The user can interact with the artifacts by simply right-clicking on them on Rviz and selecting the action Go To or Delete. Being the artifacts centroid position inside the artifacts shapes, the goal is moved in front of the artifact so that the robot stops before colliding with the object. The other available option is artifact deletion. If the user notices that an artifact is wrongly identified (classification or position) then the user can delete it and, once the UI application is closed, the loaded yaml file is updated with the remaining artifacts. § EXPERIMENTS The experiments are performed both in simulation and using a real robot in a laboratory environment. The experimental setup is the same: some chosen objects are randomly positioned in the experiment area and the robot, following a predefined path, maps the predefined objects it encounters. This strategy is chosen because the objective is the validation of the artifacts mapping accuracy during an application, for example during a patrol. In other application scenarios, e.g. search and rescue, our framework could run in parallel with an exploration algorithm and the robot could trigger the exploration module every time an object of interest is encountered to obtain a precise localization. In the experiments, we compare the data fusion with the mono-sensors application (i.e. using only an RGB-D camera or only the lidar) to demonstrate that the data fusion highly improves the detection accuracy and decreases the errors. For each environment setup, the experiments are repeated three times, one for each sensors configuration: only camera, only lidar, and both. This work focuses only on semantic mapping and does not account for the robot localization which is assumed to be given. Additional errors in mapping resulting from localization are not considered in the final evaluation even if they negatively affect our application. Moreover, is important to notice that quadrupedal robots' movements are jerky and the sensors can suffer from that. We set the parameters min_C, acc_C and max_C of Eq. <ref> as 0.3, 4, 6 respectively based on the camera hardware information provided by the camera vendors (Intel Realsense). The final validation performance is based on the number of objects which the robot can correctly find over the number of total objects. Also, the number of correctly-detected objects over the total number of detections is evaluated. The object is considered found if the difference between the estimated position and the real one is less than the real object radius and the associated class label is correct. The errors are categorized as duplicated objects, wrong localization and wrong classification. The duplications occur when there are more artifacts on a single object. they could be caused by the wrong artifacts radius computation due to occlusions or distinct point of view detection (i.e., viewed from different perspectives: front and behind). The localization is considered wrong if the artifact's estimated position is outside the real object shape while the classification is erroneous if the artifact's class label is not correct. 
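With these values fixed, the distance-dependent camera-lidar fusion rule described earlier can be made concrete. The following sketch is an illustrative Python function with assumed argument names, not the code run in the experiments; it returns the fused centroid given the two single-sensor estimates and the camera distance.

import numpy as np

def fuse_centroids(X_C, X_L, dist_C, min_C=0.3, acc_C=4.0, max_C=6.0):
    # Distance-dependent fusion of the camera (X_C) and lidar (X_L) centroid
    # estimates, with the parameter values adopted in the experiments.
    if dist_C < min_C:
        return None                       # closer than the camera minimum range
    if dist_C <= acc_C:
        return np.asarray(X_C)            # camera alone is accurate enough
    if dist_C <= max_C:
        xi = -(dist_C - acc_C) / (max_C - acc_C) + 1.0   # fusion weight in [0, 1]
        return xi * np.asarray(X_C) + (1.0 - xi) * np.asarray(X_L)
    return np.asarray(X_L)                # beyond the camera range: lidar only

The same rule, with the centroids replaced by the radii, gives the fused 3D radius estimate.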
For the simulation, the Whole-body Locomotion Framework (WoLF)<cit.> is used on a notebook with an Intel® Core™ i9-11950H processor and an NVIDIA Geforce RTX 3080 Laptop GPU. In the real scenario, a Unitree Go1[Unitree Go1: <https://www.unitree.com/en/go1/>] quadrupedal robot equipped with a RoboSense RS-Helios16 lidar[RoboSense RS-Helios16: <https://www.robosense.ai/en/rslidar/RS-Helios>], an Intel RealSense D455[Intel RealSense D455: <https://www.intelrealsense.com/depth-camera-d455/>] and three Nvidia Jetson[Nvidia Jetson: <https://www.nvidia.com/it-it/autonomous-machines/embedded-systems/>] (two Jetson Nano 4GB and one Nvidia Xavier NX) are used for the evaluation. The experiments are performed with the instance segmentation algorithms Yolact++ <cit.> and YolactEdge <cit.> trained on COCO <cit.> data set. §.§ Simulation Experiments Gazebo[Gazebo simulator: <https://gazebosim.org/home>] simulator is used to simulate the robot in two different environments: the office[Clearpath robotics worlds: <https://github.com/clearpathrobotics/cpr_gazebo/tree/noetic-devel/cpr_office_gazebo>] and Maze worlds where a predefined number of objects are positioned randomly at each iteration. The chosen objects for the simulation evaluation are vase, couch, plant and person. Specifically, in the office world, there are 5 vases, 12 couches, 6 plants and 11 persons while in the Maze world, there are 15 vases, 13 couches, 12 plants and 12 persons. The robot path is chosen randomly in advance using some waypoints on the map. In total, for each sensors configuration, 10 experiments were conducted, 5 for each environment, using different setups, for a total of 30 experiments. The results of the simulation experiments are shown in the left part of Fig. <ref> in terms of the number of correct detected objects. Specifically, considering the three ordered sensors configurations (i.e. only camera, only lidar, and both), we obtain the 92%, 93% and 99% of correctly localized and classified objects. Moreover, analysing the total number of detections produced, we obtain the distribution of the detections represented in the left column of Table <ref> and the top part of Fig. <ref> for the simulation experiment. Among all the detection produced, considering again in order the three sensors configurations, the 92%, 91% and 95% were correct while the remaining 8%, 9% and 5% of them were wrong. The farthest object correctly detected in simulation during the camera-lidar sensor fusion experiments was at 15.47m from the robot, while the nearest was at 1.23m. §.§ Laboratory Experiments The real experiments were carried out in a laboratory setting considering two scenarios, a one-room laboratory environment and a complete floor environment where the robot can move through corridors. In these environments were positioned umbrellas, chairs, cabinets, backpacks and TVs in variable amounts. For each sensors configuration, A total of 6 experiments were conducted, 3 for each environment, for a total of 18 experiments. For each trial, the objects were randomly moved and the illumination changed, i.e., switching off lights or closing shutters. The results of the laboratory experiments are shown in the right part of Fig. <ref> in terms of the number of correct detected objects. Specifically, considering the three sensors configurations in order (i.e. only RGBD camera, only RGB + lidar, and both), we obtain respectively the 85%, 80% and 98% of correctly localized and classified objects. 
Moreover, analysing the total number of detections produced, we obtain the distribution of the detections represented in the right column of Table <ref> and the bottom part of Fig. <ref> for the real experiment. Among all the detection produced, the 76%, 68% and 88% were correct while the remaining 24%, 32% and 12% of them were wrong. The farthest object correctly detected during the camera-lidar sensor fusion experiments was at a distance of 10.37m from the robot, while the nearest was at 0.98m. §.§ Discussion The first thing to point out is that the farthest distances of the detected object were greater than 10m both in simulation and in real experiments. We take into account this distance to show a qualitative comparison between the lidar and RGB-D measurement in Fig. <ref>. The figure qualitatively upholds the thesis that a lidar sensor along with the camera is necessary to improve semantic mapping and, in general, other detection algorithms in wide areas. Moreover, from the results obtained from the experiments, it is clear that in our framework the use of both sensors improves the robustness of the application and decreases the detection errors. These improvements are less evident in a simulation environment where we used almost ideal sensors, i.e. the noise representation is not realistic as in Fig. <ref>. Still, it impacts real scenarios where there is more sensor noise. The lidar can map far obstacles precisely while the camera introduces lots of errors at high distances. If we adopt only the camera, one solution to avoid erroneous measurements could be to not consider the depth measurement out of the accurate range guaranteed by the device specifications. By the way, by doing this the robot could miss some artifacts if it does not get close enough to them. The camera, by providing more information at near distances with respect to the lidar, yields more precise centroid computations because it has fewer outliers than the lidar. Lidar outliers can be caused by wrong camera-lidar pose calibration and time synchronization which are essential for these applications especially when the robot moves fast. Instead, with RGBD cameras, the depth and the RGB images are synchronized in time and can be spatially superimposed almost exactly. It is important to notice that wrong classification errors result from erroneous classifications in the pre-trained instance segmentation neural network which can be caused by illumination, reflections or other environmental conditions. They are here considered because the image inference is a module of the proposed pipeline but such errors can be decreased using more powerful neural networks. § CONCLUSION We presented a framework which uses multi-modal sensors fusion to tackle the semantic mapping problem which is a rare setup in robotics applications. We fuse the lidar and RGB-D camera sensor readings to achieve better accuracy both for near and far objects as opposed to camera-only systems which lose accuracy for distant objects or lidar-only which lack high-level texture understanding of the environment. We proposed a UI application to interact with the artifacts map obtained during the mapping application. This application is useful to perform autonomous high-level decision-making tasks because it exposes the object's class and location to the robot and the user. 
The experiments showed that our application can correctly detect, localize and map 98% of the objects present in the scene at different distances, with a small number of detection errors and good localization accuracy. The comparisons with the single-sensor scenarios (only camera or only lidar) proved that sensor fusion is essential for wide areas and high-accuracy applications. We plan several future improvements for this framework: (i) evolve the algorithm into an independent graph-based SLAM system, (ii) use 3D semantic point clouds with oriented bounding boxes and dimension information for better visualization and object understanding, (iii) deal with dynamic obstacles.
http://arxiv.org/abs/2307.02980v1
20230706132722
Constraint Programming models for the parallel drone scheduling vehicle routing problem
[ "Roberto Montemanni", "Mauro Dell'Amico" ]
math.OC
[ "math.OC", "90C27" ]
roberto.montemanni@unimore.it mauro.dellamico@unimore.it [cor1]Corresponding author Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, Via Amendola 2, 42122 Reggio Emilia, Italy Drones are currently seen as a viable way to improve the distribution of parcels in urban and rural environments, working in coordination with traditional vehicles such as trucks. In this paper we consider the parallel drone scheduling vehicle routing problem, where the service of a set of customers requiring a delivery is split between a fleet of trucks and a fleet of drones. We consider two variations of the problem. The first is more theoretical, and the target is the minimization of the time required to complete the service and have all the vehicles back at the depot. In the second variant, more realistic constraints involving operating costs, capacity limitations and workload balance are considered, and the target is to minimize the total operational cost. We propose several constraint programming models to deal with the two problems. An experimental campaign on the instances previously adopted in the literature is presented to validate the new solving methods. The results show that, on top of being a viable way to solve problems to optimality, the models can also be used to derive effective heuristic solutions and high-quality lower bounds for the optimal cost if the execution is interrupted before its natural end. Parallel Drone Scheduling Vehicle Routing Problems Constraint Programming Drones Optimization § INTRODUCTION In the last few years, drone technology has attracted investments worth billions of dollars, due to its potential. Forbes <cit.> called this phenomenon the “Drone Explosion”. Drones can be applied to many sectors, among which logistics, surveillance and disaster relief <cit.>. The most prominent application is probably logistics related to e-commerce, which has experienced exponential growth in the last decades (Statista, <cit.>). In <cit.>, the authors forecast that autonomous vehicles will deliver about 80% of all parcels within the next ten years. Several advantages associated with the use of aerial drones can be identified: they do not have to stick to the road network but can fly along approximately straight lines, and they are not affected by road traffic congestion. The technology might lead to innovative solutions of interest for companies (reduced operational costs), for customers (faster deliveries) and for society as a whole (sustainability). In this work we focus on operational delivery strategies with a mixed fleet using both trucks and drones. The seminal work of Murray and Chu <cit.> pioneered a new routing problem in which a truck and a drone collaborate to make deliveries. From an operations research perspective, the authors present two new prototypical variants expanding the traditional Traveling Salesman Problem (TSP), called the Flying Sidekick TSP (FSTSP) and the Parallel Drone Scheduling TSP (PDSTSP). In both cases a truck and drones collaborate to deliver parcels, the difference being that in the former model drones can be launched and collected from the truck during its tour, while in the latter drones are operated directly from the central depot, while the truck executes a traditional delivery tour.
In the remainder of the paper we will focus on the latter problem, addressing the interested reader, for example, to <cit.> for full details and some solution strategies for the FSTSP. More formally, in the PDSTSP there is a truck that can leave the depot, serve a set of customers, and return to the depot, and a set of drones, each of which in the meantime can leave the depot, serve a customer, and return to the depot before serving other customers. Not all the customer can be served by the drones, either due to their location or the characteristic of their parcel. The objective of the problem is to minimize the completion time of the last vehicle returning to the depot (or a cost function related to this), while serving all the customers. A first Mixed Integer Linear Programming (MILP) model for the PDSTSP and some simple heuristic methods are proposed in <cit.>. A more refined mixed integer programming model and the first metaheuristic method, based on a two steps strategy, embedding a dynamic programming-based component, are discussed in <cit.>. A similar two steps approach, but based on matheuristics concepts is presented in <cit.>. A hybrid ant colony optimization metaheuristic is discussed in <cit.>. In <cit.> a constraint programming approach is discussed, which is able to solve to optimality all the benchmark instances previously adopted in the literature for both exact and heuristic methods. An improved variable neighbour search metaheuristic is discussed in <cit.>. More recently, in <cit.> another exact approach based on branch-and-cut was proposed, together with some new benchmark problems. Several PDSTSP variants are also introduced and studied in the literature. We refer the interested reader to <cit.> and <cit.> for a complete survey. In the following we review only the extensions of the original problem relevant to the present study, where multiple trucks are employed out of a same depot, and more realistic constraints involving load balancing, capacity and costs are eventually considered. The recent work <cit.> discusses the Parallel Drone Scheduling Multiple Traveling Salesman Problem what we will here refer to as the MT-PDSVRP ( Min-Time Parallel Drone Scheduling Vehicle Routing Problem), which is a straightforward extension of the PDSTSP where multiple trucks are employed and the target is to minimize the time required to complete the delivery to the last customer serviced and go back to the depot. The authors propose a hybrid metaheuristic algorithm along the line of the method previously introduced in <cit.>, a mixed integer linear model and a branch-and-cut approach working on such a model. What is basically the same problem is also introduced at the same time in <cit.>, where the authors propose three mixed integer linear programming models, one of which is arc-based and the other two are set covering-based, together with a branch-and-price approach based on one of the set covering-based models. A heuristic version of the branch-and-cut method is also discussed, targeting larger instances. A more realistic variation of the PDSTSP, which we will refer to as the MC-PDSVRP (Min-Cost Parallel Drone Scheduling Vehicle Routing Problem) is introduced in <cit.>. In this version of the problem concepts such as capacity, load balancing and decoupling of costs and times are taken into account. A formal definition of the problem will be provided in Section <ref>. The authors propose a mixed integer linear programming model and a ruin&recreate metaheuristic for the problem. 
In this paper we aim to explore the potential of constraint programming on PDSVRPs, in the hope of exploiting the recent advances of solvers in dealing with TSP and VRP problems and we present tailored models for the different PDSVRP considered. Experimental results show that several best-known upper and lower bounds can be obtained by the new models. The rest of the paper is organized as follows. Section <ref> is devoted to the the MT-PDSTSP. The problem is formally defined and two Constraint Programming models are presented. Symmetrically, in Section <ref> the MC-PDSVRP is discussed, again presenting Constraint Programming models. Experimental results for both the models are discussed in Section <ref>, while Section <ref> contains some conclusions and ideas for future work. § THE MIN-TIME PARALLEL DRONE SCHEDULING VEHICLE ROUTING PROBLEM The MT-PDSVRP can be represented on a complete directed graph G = (V, A), where the node set V = {0, 1, ..., n} represents the depot (node 0) and a set of customers C = {1, ..., n} to be serviced. A set T of homogeneous trucks and a set D of homogeneous drones are available to deliver parcels to the customers. Each truck starts from the depot 0, visits a subset of the customers, and returns back to the depot, operating a single route. The drones operate back and forth trips from the depot to customers, delivering one parcel per trip and operating multiple routes if necessary. Not all the customers can be served by a drone, due to the weight of the parcel, an excessive distance of the customer location from the depot, or eventual terrain obstacles such as hills or areas with high-rise buildings. Let C^D ⊆ C denotes the set of customers that can be served by drones. These customers are referred to as drone-eligible in the remainder of the paper. The travel time incurred by a truck to go from node i to node j is denoted as t_ij^T, while the time required by a drone to serve a customer i (back and forth) is denoted as t_i^U. The trucks and the drones start from the depot at time 0, and the objective of the MT-PDSVRP is to minimize the time required to complete all the deliveries and to have all the trucks and all the drones back at the depot. Note that since truck and drones work in parallel, the objective function translates into minimizing the time required by the vehicle of the fleet with the longest total operational time. An example of a solution of the MT-PDSVRP for a small instance is provided in Figure <ref>. §.§ Constraint programming models We propose two alternative models that exploit different functions made available by modern Constraint Programming solvers to model the problem. In the first model the truck tours are represented as separated non-overlapping entities, while in the second they are modelled similarly to the classic giant tour representation <cit.>. §.§.§ MT-3IDX: A model based on separated truck tours The variables of the model that will be referred to as MT-3IDX are as follows. The binary variable z_kij, with k ∈ T and i,j ∈ V takes value 1 (or equivalently True) if node i is visited right before node j in the tour of truck k, and value 0 (or equivalently False) otherwise. The binary variable x_di, with d ∈ D and i ∈ C^D, takes value 1 is the customer i is visited by the drone d, and value 0 otherwise. Finally, the variable α is a continuous variable introduced to implement the minmax objective function. The model is as follows. MT- 3IDX: minα s.t. 
Circuit(z_kij; i, j ∈ V; i ≠ 0 ∨ j ≠ 0) k ∈ T ∑_d∈ D x_di + ∑_k∈ T ¬z_kii = 1 i ∈ C^D ∑_k ∈ T∑_i ∈ V, i ≠ j z_kij = 1 j ∈ C ∖ C^D α≥∑_i ∈ V∑_j ∈ V, i ≠ j t^T_ij z_kij k ∈ T α≥∑_i ∈ C^D t^D_i x_di d ∈ D z_kij∈{0;1} k ∈ T; i, j ∈ V x_di∈{0; 1} d∈ D, i ∈ C^D The objective function (<ref>) minimizes α, which will be assigned the time taken by the latest vehicle to complete its tasks. Constraints (<ref>) ensure that the z variables associated with each truck k take values that form a valid tour, possibly with self-loops for the variables associated with customers visited by the drones. This is imposed through the use of the “Circuit” statement. In the logic of such a command, if a customer i is not visited by truck k then z_kii is set to 1. Constraints (<ref>) impose that if a customer i ∈ C^D is not visited by any truck, then one of the drones in D must visit it. Note that the negation operator “¬” is used, and that if z_kii=1 ∀ k ∈ T, then customer i is not visited by any truck. Constraints (<ref>) impose that each customer i ∈ C ∖ C^D must be visited by exactly one truck in T. Inequalities (<ref>) constrain α to be at least as large as the length of the tour of each truck. Constraints (<ref>) set α to be at least as large as the time spent by each drone d to execute the tasks assigned to it. Finally, constraints (<ref>) and (<ref>) define the variable domains. This model has the drawback of involving many variables, which moreover have to be constrained to take mutually consistent values. This might affect the quality of the lower bounds provided by the relaxations of the model. §.§.§ MT-2IDX: A model based on a giant truck tour In this section we present a model with fewer variables than the one discussed in Section <ref>. This is achieved by exploiting the “MultipleCircuit” statement available in modern Constraint Programming tools <cit.>. The meaning of the variables x remains the same as in Section <ref>, while the z variables are replaced by a new set of variables y_ij, with i,j ∈ V, which for i ≠ j take value 1 if node i is visited right before node j in one of the truck tours, and value 0 otherwise. When i=j, y_ii takes value 1 if node i is visited by a drone, 0 otherwise. A new set of continuous variables γ_i, i ∈ V, is introduced to track the operating time along the truck tours. The resulting model is as follows. MT-2IDX: min α s.t. MultipleCircuit(y_ij, i,j ∈ V; i ≠ 0 ∨ j ≠ 0) ∑_j ∈ V ∖{0} y_0j≤ |T| ∑_d∈ D x_di + y_ii = 1 i ∈ C γ_0=0 y_ij ⟹ γ_j = γ_i + t^T_ij i ∈ V; j ∈ V ∖{0} y_i0 ⟹ α≥γ_i + t^T_i0 i ∈ V ∖{0} α≥∑_j ∈ C^D t^D_j x_dj d ∈ D y_ij∈{0;1} i, j ∈ V x_di∈{0; 1} d∈ D, i ∈ C^D γ_i ≥ 0 i ∈ V The objective function (<ref>) minimizes α, which will be assigned the time taken by the latest vehicle to complete its tasks. Constraint (<ref>) ensures that the y variables take values that form a set of valid truck tours, possibly with self-loops for the variables associated with customers visited by the drones. This is imposed through the use of the “MultipleCircuit” statement. Constraint (<ref>) states that the number of truck tours (outgoing arcs from the depot 0) is limited to |T|. Constraints (<ref>) impose that each customer i ∈ C must be visited either by a truck (y_ii=0) or by one of the drones in D. Constraint (<ref>) initializes to 0 the time counter at the depot. Constraints (<ref>) propagate the operational time along each truck tour (by setting the time at which each customer i is visited). Inequalities (<ref>) force α to be at least as large as the last return time of a truck to the depot.
§ THE MIN-COST PARALLEL DRONE SCHEDULING VEHICLE ROUTING PROBLEM The MC-PDSVRP is a modification of the MT-PDSVRP described in Section <ref>, where more realistic components are considered, leading to a model with a more complex objective function and additional constraints. In particular, the following elements are added to the concepts already analyzed for the MT-PDSVRP and have to be taken into account. A transportation cost c_ij^T is incurred by a truck to travel on the arc (i, j) ∈ A, while c^D_i is the transportation cost incurred by a drone to complete a mission to customer i ∈ C^D. These costs are used to define a new objective function to be minimized. The delivery request of each customer i ∈ C is associated with a parcel weight w_i, and each truck has to respect a capacity constraint, being limited to transporting at most Q^T units of weight on its route. Finally, there are upper bounds τ^T and τ^D on the maximum working time of each truck and drone, respectively. Note that these latter constraints are normally used to introduce load-balancing concepts into the optimization. §.§ Constraint programming models Analogously to what was done in Section <ref> for the MT-PDSVRP, two models are presented. §.§.§ MC-3IDX: A model based on separate truck tours The variables used in this model are the same as those of model MT-3IDX in Section <ref>.
MC-3IDX: min ∑_k ∈ T ∑_i ∈ V ∑_j ∈ V, j ≠ i c^T_ij z_kij + ∑_d ∈ D ∑_i ∈ C^D c^D_i x_di s.t.
Circuit(z_kij : i, j ∈ V)   k ∈ T
∑_d ∈ D x_dj + ∑_k ∈ T ∑_i ∈ V, i ≠ j z_kij = 1   j ∈ C^D
∑_k ∈ T ∑_i ∈ V, i ≠ j z_kij = 1   j ∈ C ∖ C^D
∑_i ∈ V ∑_j ∈ V, i ≠ j t^T_ij z_kij ≤ τ^T   k ∈ T
∑_j ∈ C^D t^D_j x_dj ≤ τ^D   d ∈ D
∑_i ∈ V ∑_j ∈ C, i ≠ j w_j z_kij ≤ Q^T   k ∈ T
z_kij ∈ {0, 1}   k ∈ T; i, j ∈ V
x_di ∈ {0, 1}   d ∈ D; i ∈ C^D
The objective function (<ref>) minimizes the total cost incurred to service all the customers, given by the sum of the costs of each truck and drone. Constraints (<ref>) ensure that the z variables associated with each truck k form a valid tour, possibly with self-loops on the variables associated with customers visited by the drones. Constraints (<ref>) impose that if a customer j ∈ C^D is not visited by any truck, then one of the drones in D must visit it. Constraints (<ref>) impose that each customer j ∈ C ∖ C^D is visited by exactly one truck. Inequalities (<ref>) make sure that each truck does not exceed its maximum working time, while inequalities (<ref>) are the analogous constraints on the maximum working time of the drones. Constraints (<ref>) model the capacity constraints of the trucks. Finally, constraints (<ref>) and (<ref>) define the variable domains. §.§.§ MC-2IDX: A model based on a giant truck tour The variables used in this model are the same as those of model MT-2IDX in Section <ref>, with the addition of a new set of continuous variables β_i, i ∈ V, introduced to track the weight carried by the trucks along their tours.
MC-2IDX: min ∑_i ∈ V ∑_j ∈ V, j ≠ i c^T_ij y_ij + ∑_d ∈ D ∑_i ∈ C^D c^D_i x_di s.t.
MultipleCircuit(y_ij : i, j ∈ V, i ≠ 0 ∨ j ≠ 0)
∑_d ∈ D x_di + y_ii = 1   i ∈ C
∑_j ∈ V ∖ {0} y_0j ≤ |T|
∑_j ∈ C^D t^D_j x_dj ≤ τ^D   d ∈ D
β_0 = 0
y_ij ⟹ β_j = β_i + w_j   i ∈ V; j ∈ V ∖ {0}
β_i ≤ Q^T   i ∈ V ∖ {0}
γ_0 = 0
y_ij ⟹ γ_j = γ_i + t^T_ij   i ∈ V; j ∈ V ∖ {0}
y_i0 ⟹ γ_i + t^T_i0 ≤ τ^T   i ∈ V ∖ {0}
y_ij ∈ {0, 1}   i, j ∈ V
x_di ∈ {0, 1}   d ∈ D; i ∈ C^D
β_i, γ_i ≥ 0   i ∈ V
The objective function (<ref>) minimizes the total cost incurred to service all the customers, given by the sum of the costs of each truck and drone. Constraints (<ref>) ensure that the y variables form a set of valid truck tours, possibly with self-loops on the variables associated with customers visited by the drones. Constraints (<ref>) impose that each customer i ∈ C is visited either by a truck (y_ii = 0) or by one of the drones in D. Constraint (<ref>) limits the number of truck tours (outgoing arcs from the depot 0) to |T|. Constraints (<ref>) limit the total operating time of each drone d to the maximum allowed value τ^D. Constraint (<ref>) initializes the weight counter at the depot to 0, and constraints (<ref>) propagate the weight transported by each truck along its tour. Inequalities (<ref>) impose that the incremental weight β_i at each node i ∈ V can never exceed the maximum capacity Q^T of the trucks. Constraint (<ref>) initializes the time counter at the depot to 0, and constraints (<ref>) propagate the operating time of each truck (by setting the time at which each customer i is visited). Inequalities (<ref>) impose that the incremental time γ_i at each node i ∈ V, plus the time required by the truck to return to the depot, can never exceed the maximum operating time τ^T of the trucks. Finally, constraints (<ref>), (<ref>) and (<ref>) define the variable domains. § COMPUTATIONAL EXPERIMENTS The constraint programming models described in Sections <ref> and <ref> have been implemented in Python 3.9 and solved via the CP-SAT solver of Google OR-Tools 9.5.2237 <cit.>. The experiments have been run on a laptop computer equipped with 32 GB of RAM and an Intel Core i7 12700F CPU with 12 cores (8 with a maximum frequency of 4.9 GHz and 4 with a maximum frequency of 3.6 GHz). The outcome of the experimental campaign is discussed in the remainder of this section, organized according to the different problems addressed. All the tables report the instance information and, for each method considered, the maximum computation time allowed (in the column labels); note that the hardware differs across the papers we compare with. For each instance/method combination, we report the cost of the best heuristic solution retrieved and, when available, the lower bound found. A dash means that no result was retrieved in the given time (or, for some of the methods we compare with, that the experiment was not attempted). Every time one of the new models improves a best-known bound, the corresponding entry is in bold. Analogously, every time one of the new models does not match or improve a best-known bound, the corresponding entry is in italic. Finally, proven optimal solutions, retrieved by any method, are marked with an asterisk in the tables. All the new best-known solutions retrieved are available upon request to the authors. §.§ MT-PDSVRP The results are subdivided based on the source of the instances in the following subsections. §.§.§ Instances from Mbiadou Saleu et al. <cit.> A first set of benchmarks for the MT-PDSVRP was created in <cit.>, starting from classic instances for the Capacitated Vehicle Routing Problem.
A total of 20 instances, with a number of customers ranging between 50 and 199, has been obtained. We refer the interested reader to <cit.> for full details about the elements of the new instances and their complete sources. The results are summarized in Table <ref>, which also reports those obtained by a branch-and-cut (BC) approach running on a MILP model and by the best of nine variations of a hybrid metaheuristic approach (HM). The BC solver is run on a computer with an Intel Xeon E5-2670 CPU with 2x8 cores running at 2.6 GHz and 62.5 GB of RAM, with a maximum computation time of 10800 seconds, while the HM variations have been run on a computer with an Intel Core i5-6200U CPU running at 2.4 GHz and 8 GB of RAM, with a maximum computation time of 1000 seconds for each variation. The results presented in Table <ref> indicate that the new CP-based approaches are competitive with state-of-the-art results. In particular, improved lower bounds were provided for all the instances considered, and 2 new best-known heuristic solutions were also retrieved. The comparison between the CP models indicates that the model MT-3IDX discussed in Section <ref>, characterized by the set of variables z with 3 indices, performs better. In particular, it provides substantially tighter lower bounds. However, this model is not capable of handling the last instance, which by its nature allows a large number of possible drone missions, which in turn is reflected in a large number of z variables. In this case the model MT-2IDX, with its smaller memory footprint, is the only viable option to obtain a good lower bound. It is also interesting to observe that the quality of the heuristic solutions produced by the CP methods is always high, although it does not always match the state of the art provided by the nine purely heuristic methods summarized in column HM. §.§.§ Instances from Raj et al. <cit.> A second set of instances for the MT-PDSVRP was derived from TSPLIB <cit.> in <cit.>, by defining the missing elements as previously done in <cit.> for the PDSTSP. Only the instances with more than one truck are reported here, since those with one truck reduce the problem to a PDSTSP, and all the optimal solutions can be found in <cit.>. The number of customers ranges between 48 and 229. Full details about the instances can be found in <cit.>. Three methods are considered for comparison purposes: an arc-based MILP model, a branch-and-price (BP) method based on a set covering MILP model, and a matheuristic method (MH) based on the same latter model. All the experiments for these methods were run on a computer equipped with an Intel i7-6700 CPU (with 8 cores running at 4 GHz) and 16 GB of RAM, with 3600 seconds for MILP and MH, and 2400 seconds for BP. The CP-based solvers we propose are run for a maximum time of 3600 seconds. The results are summarized in Tables <ref> and <ref>. The results of Table <ref> indicate that the CP model MT-3IDX is very well suited to these instances, being able to provide all the state-of-the-art lower bounds and all the best-known heuristic solutions except one, in both cases with many substantial improvements over previously known best results. It is also remarkable that five instances are closed here for the first time. The performance of the model MT-2IDX appears less impressive, in particular on the larger instances considered, on which no solution or meaningful bound was produced in the given time.
The results of Table <ref> confirm the previous impressions, although in this case the CP model MT-3IDX outperforms the other methods less strongly. This indicates that its performance degrades when the number of trucks considered increases: on a few instances it found sub-optimal heuristic solutions, although it remarkably provided an optimality proof for the first instance. Note that the model MT-2IDX is now able to produce two new state-of-the-art heuristic solutions, indicating that the latter model might be better suited to problems with more trucks available. §.§ MC-PDSVRP The instances considered in this section are those originally proposed in Nguyen et al. <cit.>. They are based on statistical data shared by logistics service providers and on common practices, so although randomly generated, they can be considered realistic. The numbers of customers considered are 30, 50, 100, 200 and 400 (the first element of the name of each instance reflects this information). We refer the reader to <cit.> for the full details of the instances, ranging from the number of trucks and drones to loading capacities, speeds, battery endurance of the drones, etc. The results of our experiments are summarized in Tables <ref>, <ref>, <ref>, <ref> and <ref>, divided according to the size of the instances. In addition to the CP models discussed in Section <ref>, which are run for a maximum of 1800 seconds, the methods considered in the comparison are those presented in <cit.>, namely a MILP model and a Ruin-and-Recreate heuristic algorithm (RR). The experiments for the latter methods were run on a computer equipped with an AMD Ryzen 3700X CPU running at 4.0 GHz and 16 GB of RAM, with a time limit of 3600 seconds. Note that the RR method was run 30 times, and the best result is reported here. The results reported in Tables <ref> and <ref> suggest that the MILP model is the most viable method to solve these small problems with 30 and 50 customers: several instances are solved to optimality within the given time. However, on some of the instances with 50 customers, MILP is not able to provide any solution. The RR heuristic also obtains remarkably good results, always matching or improving the best-known figures. We believe that taking the best of 30 runs was an important factor in these results, because it allows a good random exploration of the search space. The CP models perform reasonably well on these instances in terms of the heuristic solutions found, and are able to provide good lower bounds (no lower bound was provided in previous publications). It is worth noticing that the model MC-2IDX seems to perform better than MC-3IDX on these Minimum Cost instances, especially on those with 50 customers. This reverses what was observed in Section <ref> for the Minimum Time instances, and can be explained by the completely different objective functions of the two problems and by their general characteristics. The results summarized in Tables <ref>, <ref> and <ref> do not include the MILP model, since these larger instances with 100, 200 and 400 customers were out of its reach. The RR heuristic produces good-quality solutions, as we are able to certify here thanks to the first lower bounds ever produced on these instances. At the same time, the relatively small gaps between upper and lower bounds on some of the instances suggest that the bounds provided by the CP models are of good quality.
In detail, MC-2IDX scales up very well, while MC-3IDX is not able to produce significant bounds, due to the very large number of z variables necessary to model the problem. Unfortunately, this good performance of the MC-2IDX model in terms of lower bounds is not reflected in the quality of the heuristic solutions: very often the method is not capable of producing any feasible solution at all within the given computation time. § CONCLUSIONS In this paper we have discussed several Constraint Programming models for two versions of the Parallel Drone Scheduling Vehicle Routing Problem recently proposed in the literature. Experimental results suggest that solving these models can lead to many improved state-of-the-art results. In particular, the new models seem to provide the new reference for producing high-quality lower bounds on the optimal solution costs, but they are also able to produce several new best-known heuristic solutions, and even to close several of the instances considered for the first time. In our opinion, the flourishing literature on specializations of the Parallel Drone Scheduling Vehicle Routing Problem, aiming at introducing more and more realistic aspects, could greatly benefit from our contributions, given the high flexibility provided by Constraint Programming and under the assumption that the results obtained here can be replicated. This is material for future work. § ACKNOWLEDGEMENTS The authors are grateful to Prof. Hoàng Ha Minh for the useful discussions and suggestions. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § REFERENCES DMN M. Dell'Amico, R. Montemanni, and S. Novellani. Matheuristic algorithms for the parallel drone scheduling traveling salesman problem. Annals of Operations Research, 289:211–226, 2020. amicobb M. Dell'Amico, R. Montemanni, and S. Novellani. Algorithms based on branch and bound for the flying sidekick traveling salesman problem. Omega, 104:102493, 2021. dinh2022 Q. T. Dinh, D. D. Do, and M. H. Há. Ants can solve the parallel drone scheduling traveling salesman problem. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pages 14–21, 2021. for Forbes. Drone explosion: $5B investment in 2 years, 129 startups, 170 new craft. <https://www.forbes.com/sites/johnkoetsier/2022/02/07/drone-innovation-check-up-5b-investment-129-companies-170-craft/>, 2022. [Accessed: 2023-02-05]. lei2022 D. Lei and X. Chen. An improved variable neighborhood search for parallel drone scheduling traveling salesman problem. Applied Soft Computing, 127:109416, 2022. mbiadou2018iterative R. G. Mbiadou Saleu, L. Deroussi, D. Feillet, N. Grangeon, and A. Quilliot. An iterative two-step heuristic for the parallel drone scheduling traveling salesman problem. Networks, 72(4):459–474, 2018. mbiadou2022 R. G. Mbiadou Saleu, L. Deroussi, D. Feillet, N. Grangeon, and A. Quilliot. The parallel drone scheduling problem with multiple drones and vehicles. European Journal of Operational Research, 300:571–589, 2022. md23 R. Montemanni and M. Dell'Amico. Solving the parallel drone scheduling traveling salesman problem via constraint programming. Algorithms, 16(1):40, 2023. old R. Montemanni and L.M. Gambardella. An ant colony system for team orienteering problems with time windows. Foundations of Computing and Decision Sciences, 34(4):287–306, 2009.
murray2015flying C. C. Murray and A. G. Chu. The flying sidekick traveling salesman problem: Optimization of drone-assisted parcel delivery. Transportation Research Part C: Emerging Technologies, 54:86–109, 2015. nguyen2022 M. A. Nguyen, G. T.-H. Dang, M. H. Há, and M.-T. Pham. The min-cost parallel drone scheduling vehicle routing problem. European Journal of Operational Research, 299:910–930, 2022. HA M. A. Nguyen, H. L. Luong, M. H. Hà, and H. B. Ban. An efficient branch-and-cut algorithm for the parallel drone scheduling traveling salesman problem. 4OR, 2022. ottooptimization A. Otto, N. Agatz, J. Campbell, B. Golden, and E. Pesch. Optimization approaches for civil applications of unmanned aerial vehicles (uavs) or aerial drones: A survey. Networks, 72(4):411–458, 2018. pasha2022 J. Pasha, Z. Elmi, S. Purkayastha, A. M. Fathollahi-Fard, Y.-E. Ge, Y.-Y. Lau, and M. A. Dulebenets. The drone scheduling problem: A systematic state-of-the-art review. IEEE Transactions on Intelligent Transportation Systems, 23(9):14224–14247, 2022. ortools L. Perron and V. Furnon. Google OR-Tools, 2023. https://developers.google.com/optimization/ [Accessed: 2023-03-03]. raj2021 R. Raj, D. Lee, S. Lee, J. Walteros, and C. Murray. A Branch-and-Price Approach for the Parallel Drone Scheduling Vehicle Routing Problem. SSRN Electronic Journal, 2021. rei91 G. Reinelt. TSPLIB – A Traveling Salesman Problem Library. ORSA Journal on Computing, 3(4):376–384, 1991. sta Statista. E-commerce. <https://www.statista.com/markets/413/e-commerce/>, 2022. [Accessed: 2023-02-05]. BCG R. Wolleswinkel, V. Lukic, W. Jap, R. Chan, J. Govers, and S. Banerjee. An onslaught of new rivals in parcel and express, volume Travel, Transport and Logistics. Boston Consulting Group, 2018.
http://arxiv.org/abs/2307.00488v1
20230702062636
POV-SLAM: Probabilistic Object-Aware Variational SLAM in Semi-Static Environments
[ "Jingxing Qian", "Veronica Chatrath", "James Servos", "Aaron Mavrinac", "Wolfram Burgard", "Steven L. Waslander", "Angela P. Schoellig" ]
cs.RO
[ "cs.RO" ]
POV-SLAM: Probabilistic Object-Aware Variational SLAM in Semi-Static Environments Jingxing Qian1, Veronica Chatrath1,2, James Servos3, Aaron Mavrinac3, Wolfram Burgard4, Steven L. Waslander1, Angela P. Schoellig1,2 =========================================================================================================================================== 1The University of Toronto Institute for Aerospace Studies and the University of Toronto Robotics Institute. Emails: 2The Technical University of Munich. Emails: 3Clearpath Robotics, Waterloo, Canada. Emails: 4The Technical University of Nuremberg. Email: Simultaneous localization and mapping (SLAM) in slowly varying scenes is important for long-term robot task completion. Failing to detect scene changes may lead to inaccurate maps and, ultimately, lost robots. Classical SLAM algorithms assume static scenes, and recent works take dynamics into account, but require scene changes to be observed in consecutive frames. Semi-static scenes, wherein objects appear, disappear, or move slowly over time, are often overlooked, yet are critical for long-term operation. We propose an object-aware, factor-graph SLAM framework that tracks and reconstructs semi-static object-level changes. Our novel variational expectation-maximization strategy is used to optimize factor graphs involving a Gaussian-Uniform bimodal measurement likelihood for potentially-changing objects. We evaluate our approach alongside the state-of-the-art SLAM solutions in simulation and on our novel real-world SLAM dataset captured in a warehouse over four months. Our method improves the robustness of localization in the presence of semi-static changes, providing object-level reasoning about the scene.5Dataset download and Supplementary Material are available at https://github.com/Viky397/TorWICDatasethttps://github.com/Viky397/TorWICDataset This work was supported by the Vector Institute for Artificial Intelligence in Toronto and the NSERC Canadian Robotics Network (NCRN). § INTRODUCTION Simultaneous Localization and Mapping (SLAM) estimates a robot's pose within its environment, while at the same time creating a map of its surroundings. SLAM allows for autonomous navigation in GPS-denied situations, such as underground mines, office spaces, and warehouses. Many such tasks require robots to reliably repeat their trajectories over an extended period. However, most existing SLAM methods adopt the static world assumption <cit.> which typically does not hold in the real world, as scenes are subject to change from human or robot activity. For example, a scene may contain dynamic objects (e.g., forklift driving within a factory) and semi-static objects that change position over time (e.g., pallets, boxes). Lacking the ability to properly handle such changes might result in catastrophic failures such as corrupted maps, divergent pose estimations, and obstacle collisions. Such potential failures emphasize the importance of robust SLAM solutions in the presence of scene dynamics in order to achieve efficient and robust long-term robotic operation. Recent works have attempted to handle dynamic environments in one of two ways. The first strategy leverages semantic and geometric information to mask out all potentially dynamic objects, treating them as outliers <cit.>. Hence, the system only tracks against the static background, though it often covers a small portion of the sensor's field-of-view (FOV) in cluttered environments. The second strategy builds a model for each detected object. 
The system then either tracks the camera against the static background and refines the object models in a two-step pipeline, or performs camera and object tracking in a joint optimization problem <cit.>. However, the second strategy requires motion to be detected over consecutive frames, and long-term, semi-static changes where objects shift, disappear, or appear in the scene, have not been thoroughly studied in SLAM. Recent attempts to handle semi-static changes during map maintenance extend object-centric mapping methods to explicitly consider semi-static changes by estimating a consistency score for each object from a known robot pose <cit.>. Critically, when the robot pose is unknown, object consistency is difficult to calculate. The aforementioned consistency estimation methods can lead to multiple ambiguous and sub-optimal solutions. This limitation highlights the need for a statistically consistent method to infer both the robot pose and object consistency. We tackle the challenge of simultaneous localization and object-level change detection in large semi-static scenes. We follow an object-aware strategy, as most mobile robots operate in environments consisting of rigid objects that move continuously or change location between visits. In addition to pose estimation, an up-to-date, object-level dense reconstruction is desired to provide rich geometric and semantic information for downstream tasks (e.g., perception-aware planning and control <cit.>). We introduce a novel framework, , which leverages recent works on object-level Bayesian consistency estimation for semi-static scenes <cit.>, to tackle the challenge in a joint optimization problem. We derive a variational formulation to approximate the Gaussian-Uniform measurement model of potentially-changing objects, and use expectation-maximization (EM) to guarantee improvement of the evidence lower bound (ELBO) of a factor graph SLAM problem. At every EM iteration, the object consistencies and robot poses are refined using geometric and semantic measurements. Additionally, there is a lack of SLAM datasets for long-term localization and mapping in large, semi-static environments. In collaboration with Clearpath Robotics, we extend the Toronto Warehouse Incremental Change Mapping Dataset (TorWIC), a long-term mapping dataset introduced in <cit.>. Here, we present a real-world semi-static SLAM dataset in a warehouse with dynamic and semi-static changes that occur over four monthsfn_dataset. To facilitate easier performance evaluation, we provide high-quality 3D scans of the entire warehouse and the ground truth robot trajectories, obtained from a Leica MultiStation and an onboard Ouster 128-beam LiDAR. Our proposed method is evaluated on: a 2D simulation to demonstrate the probabilistic framework in action and justify design choices, a synthetic semi-static dataset, and our real-world warehouse dataset. We analyze the reconstruction quality relative to a state-of-the-art (SOTA) dense semi-static mapping method <cit.> and compare the localization accuracy against a SOTA feature-based SLAM method <cit.> as well as a semi-static object-level SLAM <cit.> approach. We show that our framework is robust to semi-static changes in the scene. The main contributions of our paper are: * We derive a variational formulation for the Gaussian-Uniform bimodal measurement likelihood of potentially-changing objects. 
It exploits the Bayesian object consistency update rule introduced in <cit.> and provides an evidence lower bound (ELBO) for efficient inference. * We introduce an expectation-maximization (EM) algorithm to optimize factor graphs involving the variational measurement model for potentially-changing objects. * We design , an object-aware, factor graph SLAM pipeline that tracks and reconstructs semi-static object-level changes.  builds on top of the SOTA SLAM <cit.> and semi-static mapping <cit.> methods, and uses our variational EM (VEM) strategy. The system is demonstrated both in simulation and in the real world. * We release a new SLAM dataset captured in a warehouse over four monthsfn_dataset. The environment contains static, semi-static, and dynamic objects as seen by RGB-D cameras and a 3D LiDAR. We also release a high-quality 3D scan of the warehouse and ground truth robot trajectories. In Section <ref>, we review the SLAM methods for changing scenes. In Section <ref>, we present the key modules of the  pipeline. In Section <ref>, we derive the variational measurement model and discuss the details of our VEM algorithm. Finally, we evaluate  in both simulated and real-world experiments in Section <ref>. To the best of our knowledge, our method is the first to achieve joint localization and object-level change detection for large, semi-static environments. § RELATED WORKS §.§ Visual SLAM Visual SLAM is a well-established type of SLAM, mainly achieved via either feature-based methods <cit.> or dense methods <cit.>. Sparse methods match feature points of images, having lighter computational requirements, focusing on localization, whereas dense methods seek to construct accurate and more complete representations of the environment, useful for navigation and collision avoidance. In recent years, feature-based SLAM methods have gained traction for use with mobile robots in large environments, as they exhibit a high level of accuracy and efficiency. The seminal works of Mur-Artal et al. in ORB-SLAM <cit.> introduce a monocular, feature-based SLAM system with real-time camera relocalization. ORB-SLAM2 <cit.> and ORB-SLAM3 <cit.> extend <cit.> with stereo and RGB-D information. ORB-SLAM3 remains a SOTA feature-based method <cit.> and is extended to aid with our localization and map update strategy. However, most current visual SLAM methods focus on static scenes, simply rejecting inconsistent landmarks from dynamic objects as outliers. As well, object-level scene information is ignored, resulting in inconsistent map updates when items move between robot passes. Our framework aims to use object-level understanding to track scene changes and aid with accurate localization in evolving scenes. §.§ Dynamic SLAM Dynamics and object-level reasoning in SLAM have been recently studied, and there exist two common strategies to handle changes. The first is to identify dynamics from input data, which can be extracted with a semantic segmentation network such as Mask R-CNN <cit.>, discarding it completely <cit.>. Though this method is effective in the presence of a few dynamic objects, in cluttered environments, the static background is often only a small part of the sensor’s FOV and ignoring all dynamic objects could lead to an insufficient number of visual features for localization. The second strategy is to track the dynamic objects explicitly, which can be achieved using multi-object tracking (MOT) <cit.>. 
DetectFusion <cit.> uses semantic segmentation and motion consistency to extract both known and unknown objects. The work of Barsan et al. <cit.> uses instance-aware semantic segmentation and sparse scene flow to classify objects based on their activity. MID-Fusion <cit.> and EM-Fusion <cit.> obtain object masks and construct a signed distance function (SDF) model for objects from depth information. Object poses are obtained by directly aligning depth measurements to their corresponding SDF models. VDO-SLAM <cit.> and ClusterSLAM <cit.> group landmarks to form objects and exploit rigid body motion to construct a factor graph, jointly solving for robot and object poses. The aforementioned methods require scene changes to be observed in consecutive frames, rendering this strategy ineffective under changes that occur over a long time horizon. §.§ Semi-Static SLAM SLAM in semi-static scenes is a difficult yet overlooked problem, that is crucial for long-term operation. One challenge in the presence of semi-static objects is ambiguity in the system state, caused by potential symmetry in the scene changes and the lack of continuously observed motion. Recent works on map maintenance involving semi-static objects all aim to estimate a consistency score, based on given robot poses, to determine which part of the map needs to be updated. Fehr et al. <cit.> update an SDF map by calculating voxel-level differences between signed distance functions of the stored map and incoming depth measurements. Schmid et al. <cit.> maintain a set of object-level SDF sub-maps, propagating a stationarity score for each sub-map by calculating the overlap between their depth measurements and the existing map. Though intuitive, these overlap-based estimation methods are prone to localization errors. Gomez et al. <cit.> model objects as cuboid bounding volumes and construct an object factor graph to estimate the object poses and their moveability scores in offline batch optimization. To obtain a more accurate and consistent object-level consistency score at runtime, Qian et al. propose <cit.>, a Bayesian update rule to iteratively propagate a probabilistic object state model using both geometric and semantic measurements, which was shown to be more robust against localization error. However, the aforementioned incremental mapping solutions all assume reliable robot poses are given. Walcott et al. propose a 2D LiDAR SLAM solution <cit.> that maintains a set of sub-maps for each region. The active sub-map is replaced with new measurements if there are inconsistencies, and then stacked to form the final map. Rosen et al. incorporate a recursive Bayesian persistence filter <cit.> into classic feature-based SLAM systems to estimate the consistency of each point feature. In a more recent work, Ren et al. <cit.> attempt to integrate object-level consistency estimation and 3D visual SLAM in the presence of semi-static and dynamic objects. The authors first perform dense visual SLAM using the static background to estimate the camera pose. They calculate the image-plane overlap between new object measurements and their previously mapped objects, reconstructing the object if the inconsistency is large. The visual features of unobserved mapped objects and new object observations are compared to perform association and relocalization. 
However, a known static background is required to track the camera motion, making this method a two-step process and rendering it unstable in the presence of a large number of potentially changing objects. Rogers et al. <cit.> and Xiang et al. <cit.> use an EM approach to handle semi-static point landmarks. The authors integrate the traditional landmark measurement model with a latent confidence score to weight its contribution in the cost function. The EM scheme is used to iteratively update the robot pose, landmark positions, and confidence scores. However, since the optimization process runs over the entire trajectory and rejection decisions are made based on a predefined threshold at the end of the process, these two methods are limited to offline settings. In our sliding window setup, landmark rejection decisions are revised probabilistically at every EM iteration during run-time, leading to more robust, fast, and accurate convergence. § SYSTEM DESCRIPTION §.§ Overview and Assumptions This work focuses on long-term SLAM in the presence of semi-static objects. We aim to simultaneously localize the robot, and propagate a consistency estimate for each object. The robot localizes itself against objects with high measurement likelihoods, with changed objects being reconstructed once sufficient observations have been made. Finally, a truncated signed distance function (TSDF) map is produced to reflect the current scene configuration. The  system builds upon a recent semi-static map maintenance framework, POCD <cit.>, and the SOTA feature-based RGB-D SLAM system, ORB-SLAM3 <cit.>. A flow diagram of our novel  system is shown in Figure <ref>, which consists of five main stages. The following subsections provide an overview of each of the major components in the  pipeline. We make the following assumptions in this work: * The robot operates in a bounded indoor environment (e.g., warehouse or mall) where rigid objects are present. * High-level prior knowledge of the objects is available, such as their semantic class, dimension, and likelihood of change. * Objects can be added, removed, or shifted between robot traversals, though part of the environment should remain unchanged and observed by the robot. * The robot starts its trajectory from a known pose. §.§ SLAM Pipeline and Object Representation The  pipeline takes in a sequence of color and depth frames, ℱ={𝐅_t}_t=1 … T, from a RGB-D camera, 𝒞, as inputs at timestamps t ∈{1 … T}. The pipeline outputs the 6-DoF world-to-camera transformations, 𝒯^CW={𝐓^CW_t={𝐩^CW_t,𝐪^CW_t}}_t=1 … T, with 3D position, 𝐩^CW_t, and orientation, 𝐪^CW_t, at each timestep t, along with a library of mapped objects, 𝒪={𝐎_i}_i=1 … I. Each object, 𝐎_i, consists of: * a 4-DoF global pose 𝐓^OW_i with 3D position 𝐩^OW_i and heading ϕ^OW_i, * a point cloud from accumulated depth data, 𝐏_i, and the resulting TSDF reconstruction, 𝐌_i, * a bounding box, 𝐁_i, aligned with the major and minor axes of the object reconstruction, * a semantic class, c_i, * a state probability distribution, p(l_i,v_i), to model the object-level geometric change, l_i∈ℝ, and the consistency, v_i ∈ [0,1]. * a set of associated 3D landmark points in the world frame, ℒ^W_i = {𝐥^W_i,l∈ℝ^3 }_l=1 … L, * the relative positions of the landmark points with respect to the object pose, 𝐓^OW_i, ℒ^O_i = {𝐥^O_i,l∈ℝ^3 }_l=1 … L, As we consider indoor mobile robot applications, objects are restricted to only rotate around the z-axis, resulting in a 4-DoF pose, although extending to 6-DoF is trivial. 
The SLAM system is initialized with an empty object library, 𝒪=∅. Along with the camera pose and object models, the system also maintains a dense TSDF map which can be used for downstream tasks such as perception-based planning and control <cit.>. §.§ 3D Observation Extraction and Data Association When a new RGB-D frame, 𝐅_t, is received by the system, a set of 3D observations, 𝒴_t = {𝐘_t,j}_j=1 … J, is extracted and associated to the mapped objects by following the POCD semantic-geometric clustering and association strategy <cit.>. Additionally, each observation, 𝐘_t,j, contains the unprojected 3D keypoints, 𝒟^C_t,j = {𝐝^C_t,j,d∈ℝ^3 }_d=1 … D, detected from the masked color image. For each associated object-observation pair, {𝐎_i,𝐘_t,j}, we also match the unprojected keypoints, 𝒟^C_t,j, to the object landmark points, ℒ^C_i. §.§ Object Consistency-Augmented Factor Graph In POCD <cit.> the authors introduced a Bayesian update rule to propagate an object-level state model, p(l,v). This model consists of a Gaussian distribution which captures the magnitude of the object-level geometric change, l, and a Beta distribution which estimates the consistency between the incoming measurement and the previously mapped object, v. In this work, we exploit the Beta parametrization of consistency p(v)Beta(v |α, β), to estimate the reliability of the object observations in a factor graph optimization framework. We first consider a simple sparse SLAM problem in a semi-static scene, where previously mapped objects are either moved or unchanged when the robot revisits the region. Our goal is to estimate the robot trajectory, 𝒯^CW, and determine which of the objects have changed. Existing methods such as ORB-SLAM3 <cit.> wrap landmark measurement residuals with a robust kernel (e.g., Cauchy loss function) and run optimization multiple times to reject outlier measurements. However, such approaches are not robust to large changes in the scene. Instead, similar to <cit.>, we augment the joint likelihood of our sliding window estimation problem with the object-level Beta-parametrized consistencies, {p(v_i)}_i=1 … I, to explicitly model the reliability of each observed landmark: log p(𝒪, {𝐓^CW_t}_t=T-m … T, {𝒴_t}_t=T-m … T) ∝∑_tlog p(𝐞^pose_t) + ∑_i∑_llog p(𝐞^rigid_i,l) + ∑_i∑_llog p(𝐞^prior_i,l) + ∑_t∑_j∑_dlog p(𝐞^key-pt_t,j,d, α, β) The factor in Equation (<ref>) is the transition model. We use the ORB-SLAM3 RGB-D front-end to obtain a visual odometry (VO) measurement in the body frame, 𝐓^C_t-1,t, which is used as a prior to initialize the augmented factor graph: p(𝐞^pose_t) = 𝒩(𝐞^pose_t|0, σ^2_pose𝐈) where 𝐞^pose is the stacked translation and rotation of the deviation 𝐓^offset_t between the estimated relative pose in the body frame and the VO measurement: 𝐓^offset_t = (𝐓^CW _t-1𝐓^CW -1_t)^-1𝐓^C_t-1,t In practice, we find that this factor improves the stability of the nonlinear optimization. The factor in Equation (<ref>) constrains the relative positions of associated object landmarks with respect to the object frame to penalize the deformation of the object geometry: p(𝐞^rigid_i,l) = 𝒩(𝐞^rigid_i,l|0, σ^2_rigid𝐈) 𝐞^rigid_i,l = 𝐓^OW_i𝐥^W_i,l - 𝐥^O_i,l The factor in Equation (<ref>) encourages landmark points to remain at their original positions during optimization. 
This is important, as objects that have changed but not been rejected can lead to localization errors and a corrupted map, especially at early stages of the optimization process: p(𝐞^prior_i,l) = 𝒩(𝐞^prior_i,l|0, σ^2_prior𝐈) 𝐞^prior_i,l = 𝐥^W_i,l - 𝐥^W_i,l,prev Note that, for simplicity, we use Gaussian measurement likelihoods and isotropic covariance with magnitude σ^2_pose, σ^2_rigid, σ^2_prior for these three factors. The factor in Equation (<ref>), the landmark measurement model between an object landmark point, 𝐥^W_i,l and its observation, 𝐝^C_t,j,d, is more complicated, as a Gaussian likelihood is not sufficient to model possible changes in a semi-static scene. An intuitive approximation is to adopt the same Gaussian-Uniform mixture, weighted by the expectation of the Beta consistency model, 𝔼[v], as in <cit.>: p(𝐞^key-pt_t,j,d) = 𝔼[v] 𝒩(𝐞^key-pt_t,j,d|0, σ^2_key-pt𝐈) + (1-𝔼[v])𝒰(‖𝐞^key-pt_t,j,d‖_2 | 0,e_max) 𝐞^key-pt_t,j,d = 𝐓^CW -1_t𝐝^C_t,j,d - 𝐥^W_i,l This mixture model consists of two parts: 1) a zero-mean Gaussian component with an isotropic measurement covariance, σ^2_key-pt, for the unchanged scenario, and 2) a uniform component with a predefined maximum association distance, e_max, for the changed scenario in which the object could be anywhere. However, using the single point estimator, 𝔼[v], could lead to an inaccurate estimation as it does not capture the full Beta consistency distribution. We present a variational formulation to derive an ELBO for the landmark measurement model in Section <ref>, which is efficient to implement, and shown to provide better convergence behavior than the single point approximation in Equation (<ref>). Figure <ref> illustrates the complete factor graph. Optionally, our framework can be extended to handle dynamic objects in the scene. We follow the strategy in <cit.>, where the poses and associated landmarks of moving objects are modeled at each timestamp in the window, temporally constrained by the estimated velocity. This strategy is tested in simulation, as discussed in the Supplementary Material. §.§ Iterative Object Consistency Update and Pose Estimation The augmented optimization problem in Equation (<ref>) consists of both unknown parameters, which are the robot trajectory and the object poses with their associated landmark positions, and a set of unobserved latent variables, which are the object consistencies. Although the problem is complex to solve directly, a favoured approach to solving such estimation problems involving latent variables is iteratively via EM. We introduce an EM-based method in Section <ref>, which leverages our variational landmark measurement model to solve the factor graph iteratively at every frame, 𝐅_t. §.§ Object and TSDF Map Update Once optimization is complete, we extract the new robot and object pose information, and update the map and object library. All new object observations not associated to the previous map are integrated into the TSDF map and added to the object library, 𝒪. A large pseudo-change is used to penalize the consistency of objects currently in the camera frustum, but not associated with any observations. Objects accepted by the VEM optimization, as discussed in Section <ref>, are considered consistent with the map, and their observations are integrated into the object's TSDF model, M_i. Their object state models are then propagated by one step based on the new robot pose estimate. 
The states of objects not accepted by the VEM optimization are also propagated by one step, though their observations are not integrated, as they are no longer consistent with their previous models. If an object's consistency expectation, 𝔼[v_i], falls below a pre-defined threshold, θ_consist, the object is removed from the library, and all associated voxels in the TSDF model are reinitialized. Note that the rejected objects are not discarded immediately after optimization to ensure robustness against potential measurement noise and pose estimation error in the current frame. When dynamic objects are considered, we can update their motion models with their new pose estimates using a Kalman filter. § METHODOLOGY In this section, we discuss the details of our VEM method: 1) In the E-step we compute the ELBO for the expectation of the landmark measurement likelihood for potentially changing objects, and 2) in the M-step we optimize the approximated factor graph to update the robot and object states. Algorithm 1 in the Supplementary Material outlines how our pipeline processes one frame to update the robot and object states. §.§ E-Step: ELBO of Measurement Likelihood for Potentially Semi-Static Objects In Section <ref>, the cost function for the augmented factor graph SLAM problem is introduced, where each object and its associated landmark points share a Beta-parametrized consistency estimate. Such a problem is challenging to optimize, even with the EM algorithm. Moreover, as discussed earlier, a single point approximation using the consistency expectation, 𝔼[v] (Equation (<ref>)), does not capture the full Beta consistency model. In this section, we focus on the E-Step of the VEM algorithm and derive the ELBO for the expectation of each object landmark's measurement likelihood in Equation (<ref>), based on the robot trajectory, object landmark position, and object consistency estimated in the previous EM iteration. Consider a single landmark from an object. At frame T and EM iteration n, we obtain a Beta consistency posterior, Beta(α, β), for the object by following the Bayesian method introduced in <cit.>, with respect to the current frame measurement, 𝐝^C_T, the previous iteration's landmark position estimate, 𝐥^W, and robot pose estimate, 𝐓^CW. Note that these variables are treated as constants in the E-Step. The timestamps and indices in the notation are dropped for clarity. The object's true consistency, π∈{0,1}, can be considered as a sample from a Bernoulli distribution parametrized by v, with π=1 indicating the object has not changed. We can then write a generative process, p(π, v) = p(π| v) p(v |α, β), where: v ∼Beta(α, β) π ∼Bernoulli(v) Unchanged objects will follow a zero-mean, isotropic Gaussian measurement model and moved objects can be anywhere in the scene. The measurement residual, 𝐞_T, is defined to be the 3D point-wise distance: 𝐞_T = 𝐓^CW -1𝐝^C_T- 𝐥^W We can then rewrite the Gaussian-Uniform measurement model weighted by the sampled object consistency, π, as: p(𝐞_T) p(𝐞_T |𝐓^CW ,𝐥^W,π) = 𝒩(𝐞_T |0,σ^2𝐈)^π𝒰(‖𝐞_T ‖_2 | 0, e_max)^1-π Since π is sampled from the generative process shown in Equation (<ref>), Equation (<ref>) involving dependent latent variables, ω = {π, v }, is challenging to maximize. Fortunately, we can apply the mean field approximation <cit.> by assuming the two latent variables are fully independent, p(π,v) ≃ q(π)q(v). 
This would allow us to write a variational lower bound, ℒ, for the evidence, log p(𝐞_T |𝐓^CW ,𝐥^W,α, β): ℒ(ω,𝐓^CW ,𝐥^W) = 𝔼_q(ω)[ logp(𝐞_T,ω|𝐓^CW ,𝐥^W,α, β)/q(ω)] where the joint likelihood is log p(𝐞_T, ω|𝐓^CW , 𝐥^W, α, β) = log p(𝐞_T |𝐓^CW , 𝐥^W, π) + log p(π| v) + log p(v |α, β) = π[log v + log𝒩(𝐞_T |0,σ^2𝐈)] +(1-π) [ log(1-v)+ log𝒰(‖𝐞_T ‖_2 | 0, e_max) ] + logBeta(v |α, β) Following the mean field approximation, the optimal q(π) and q(v) that maximize the lower bound (<ref>) are: log q(π)= π[ 𝔼[log v]+ log𝒩(𝐞_T|0,σ^2𝐈)] +(1-π) [ 𝔼[log (1-v)] + log𝒰(‖𝐞_T ‖_2 | 0, e_max)] + const log q(v) = 𝔼[π] log v + 𝔼[1-π] log (1-v) +logBeta(v |α, β)+const Now, the expectation of the probability that the object did not change, 𝔼[π], can be computed based on the current measurement and estimates: 𝔼[π] = q(π=1) = ηexp{𝔼[log v] +log𝒩(𝐞_T |0,σ^2𝐈)} 𝔼[1-π] = q(π=0) = ηexp{𝔼[log (1-v)] +log𝒰(‖𝐞_T ‖_2 | 0, e_max)} Here, η is a normalizing factor, and 𝔼[log v] and 𝔼[log (1-v)] can be computed from the property of the Beta distribution: 𝔼[log v]=ψ(α)-ψ(α+β) 𝔼[log (1-v)]=ψ(β)-ψ(α+β) where ψ(·) is the digamma function. Finally, we can compute the lower bound: ℒ(v,π,𝐓^CW ,𝐥^W) =𝔼[π] log𝒩(𝐞_T |0,σ^2𝐈) +𝔼[1-π] log𝒰(‖𝐞_T ‖_2 | 0, e_max) + const Comparing to the naive approximation in Equation (<ref>), the ELBO is a mixture between a log-Gaussian mode and a log-Uniform mode. However, the new weights, 𝔼[π] and 𝔼[1-π], incorporate the full Beta consistency model as well as the likelihood of the two modes. This provides a more statistically consistent measurement model for potentially changing objects. We refer the reader to the Supplementary Material for a more detailed derivation, as well as a performance comparison against the single point approximation in Equation (<ref>). §.§ ELBO Tightness and Assumptions We repeat the ELBO estimation (Equation (<ref>)) presented in Section <ref> for all observed objects and their landmarks in the scene, which we substitute into the joint likelihood, discussed in Section <ref>, to construct a lower bound to the original optimization cost (Equation (<ref>)) for our sliding window SLAM problem. The new factor graph can be solved efficiently using an available SLAM solver, such as g2o <cit.>. Unfortunately, sub-optimal or diverged solutions are likely to occur. The mean field approximation used in the measurement ELBO tends to be overconfident <cit.>, especially when the Beta consistency estimate is uncertain. On the other hand, the ELBO tightens when the Beta distribution approaches a unit impulse, i.e., when α≫β or α≪β. This implies that when object consistency estimates are uncertain, the lower bound can be improved but there is no guarantee to improve the true joint likelihood, as some moved objects can be misclassified as unchanged. Nonetheless, with additional iterations the ELBO tightens, improving the true likelihood. This convergence behavior requires that: 1) a good prior robot pose is available, and 2) some distinguishable, unchanged objects are observed by the robot. We believe these are reasonable assumptions to make in the semi-static SLAM problem. Most robots deployed in industrial settings depart from and return to pre-determined charging stations. Visual place recognition techniques can also be used to initialize the system. Moreover, if the robot only observes changed objects, then it is not possible to determine the global pose of the robot just using vision data. 
Without inertia or off-board anchor sensors (e.g., IMUs and UWBs), the system will converge to a minimum-cost state, but there is no guarantee of its correctness. We provide simulation results in the Supplementary Material to illustrate the system's behavior under adversarial scenarios. §.§ M-Step: Factor Graph Optimization In order to exploit the aforementioned assumptions and encourage the optimizer to rely on static objects with higher certainty when performing system updates, a max-mixture <cit.> approach is adopted to guide the optimization process. At every gradient descent step, for every object landmark, a weighted log measurement likelihood is computed for the unchanged and moved scenarios, and a decision is made on whether the measurement should be accepted when computing the gradient:
m = argmax{ log 𝔼[v] 𝒩(𝐞_T | 0, σ^2 𝐈),  log (1 − 𝔼[v]) 𝒰(‖𝐞_T‖_2 | 0, e_max) }
ℒ(v, π, 𝐓^CW, 𝐥^W) ≈
  𝔼[π] log 𝒩(𝐞_T | 0, σ^2 𝐈),   if m = 0
  𝔼[1 − π] log 𝒰(‖𝐞_T‖_2 | 0, e_max),   if m = 1
where m = 0 selects the Gaussian (unchanged) mode and m = 1 the uniform (moved) mode. This approximation excludes objects with lower measurement likelihood from contributing to the overall cost, achieving faster and more accurate convergence when the ELBOs are not tight. Rejected objects are not deleted immediately, but revised at every gradient step. Note that we choose 𝔼[v] instead of 𝔼[π] to weight the measurement likelihoods when making the rejection decisions. Empirical results show that 𝔼[π], despite being a more accurate estimate, can be highly noisy due to the overconfidence of the mean field approximation. On the other hand, 𝔼[v] comes from the Bayesian update rule and thus provides smoother gradients, leading to more stable convergence. More details and ablation studies are provided in the Supplementary Material. Substituting the approximated ELBO (Equation (<ref>)) into Equation (<ref>), and maximizing the new factor graph, we obtain the updated robot and object states for the next EM iteration:
𝒪, 𝒯^CW = argmax_𝒪, 𝒯^CW log p(𝒪, 𝒯^CW, {𝒴_t}_t) ∝ ∑_t log p(𝐞^pose_t) + ∑_i ∑_l log p(𝐞^rigid_i,l) + ∑_i ∑_l log p(𝐞^prior_i,l) + ∑_t ∑_i ∑_l ℒ(v_i, π_i, 𝐓^CW_t, 𝐥^W_i,l)
Our VEM formulation ensures a monotonically increasing ELBO up to a zero-gradient solution, but does not guarantee convergence to an optimum. Global optimality is inherently challenging, but our descent method mostly provides high-quality solutions when the assumptions in Section <ref> are met. § EXPERIMENTAL RESULTS §.§ Experimental Setup We verify the performance of our framework qualitatively and quantitatively by comparing both the map reconstruction and the robot trajectory error of POV-SLAM to: * ORB-SLAM3 <cit.>: A SOTA sparse visual SLAM solution, which assumes the world is static. * VI-MID <cit.>: A recent object-level SLAM method for small (5m×5m) semi-static scenes. The method performs dense RGB-D tracking on certainly-static regions, identified from semantics, for camera localization, before updating the object states. As the code was unavailable, we modify ORB-SLAM3 to exclude features from potentially changing objects during pose estimation, and use POCD <cit.> for object change detection and mapping. Our custom implementation is referred to as Ours-MID. In essence, ORB-SLAM3 uses all features, Ours-MID employs certainly-static features, and POV-SLAM probabilistically selects features from likely-unchanged objects. In a static scene, POV-SLAM should revert to regular batch SLAM where only the Gaussian mode of the measurement model is active.
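The per-landmark computation behind this probabilistic feature selection is compact. The sketch below implements the E-step responsibility (the q(π) update) and the max-mixture mode test with numpy and scipy; it is illustrative only, and assumes an isotropic Gaussian mode with standard deviation sigma and, as one possible convention, a uniform mode spread over a ball of radius e_max (function and variable names are ours).

```python
import numpy as np
from scipy.special import digamma


def e_step_responsibility(residual, alpha, beta, sigma, e_max):
    """E[pi]: probability that the object is unchanged, for one landmark residual.

    residual    : 3-vector e_T = T_CW^-1 d - l^W
    alpha, beta : parameters of the Beta consistency posterior
    sigma       : std. dev. of the isotropic Gaussian (unchanged) mode
    e_max       : maximum association distance of the uniform (changed) mode
    """
    e_sq = float(residual @ residual)
    log_gauss = -0.5 * e_sq / sigma ** 2 - 1.5 * np.log(2.0 * np.pi * sigma ** 2)
    log_unif = -np.log(4.0 / 3.0 * np.pi * e_max ** 3)   # uniform over a ball
    # E[log v] and E[log(1 - v)] follow from the Beta distribution.
    e_log_v = digamma(alpha) - digamma(alpha + beta)
    e_log_1mv = digamma(beta) - digamma(alpha + beta)
    # Unnormalized log-responsibilities, then a numerically stable softmax.
    w = np.array([e_log_v + log_gauss, e_log_1mv + log_unif])
    w = np.exp(w - w.max())
    return float(w[0] / w.sum())


def max_mixture_accept(residual, alpha, beta, sigma, e_max):
    """Max-mixture test weighted by E[v]: True if the Gaussian (unchanged)
    mode is the more likely one and the measurement is kept in the gradient."""
    e_v = alpha / (alpha + beta)
    e_sq = float(residual @ residual)
    log_gauss = -0.5 * e_sq / sigma ** 2 - 1.5 * np.log(2.0 * np.pi * sigma ** 2)
    log_unif = -np.log(4.0 / 3.0 * np.pi * e_max ** 3)
    return np.log(e_v) + log_gauss >= np.log(1.0 - e_v) + log_unif
```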
To demonstrate the capabilities of , we evaluate in three scenarios: 1) a 2D simulation (Section <ref>), 2) a 3D synthetic semi-static dataset (Section <ref>), and 3) our real-world, semi-static warehouse dataset (Section <ref>). The lack of large, real-world SLAM datasets with multiple passes through environments that include both dynamic and semi-static objects prompted us to create one (Section <ref>). We implement our method on top of ORB-SLAM3 <cit.> and POCD <cit.>. The parameters used to evaluate against all methods can be found in the Supplementary Material. To benchmark 3D reconstruction accuracy under scene changes, we generate the ground truth meshes by using POCD with the ground truth robot trajectory. As POCD has shown to outperform several mapping methods (Kimera <cit.>, Fehr et al. <cit.>, and Panoptic Multi-TSDFs <cit.>), the mesh obtained is representative of the best possible reconstruction. Note that in this work, we use RGB-D information to address scene changes directly, thus inertial and odometry data are excluded. While IMUs can supplement all methods, our method yields RGB-D pose estimates that align better with IMU data, removing errors at their source. §.§ Real-World Semi-Static Warehouse Dataset We release an extension to the TorWIC change detection dataset <cit.>. The original TorWIC dataset features a small 10m×10m hallway setup using boxes and fences with limited real-world objects and changes. Its ground-truth trajectory, acquired via 2D LiDAR SLAM, suffers from jumps and drifts and thus not suitable for evaluating SLAM algorithms. Conversely, the new extension, as the first long-term real-world warehouse dataset, originates from an active 100m×80m Clearpath Robotics plant showcasing various objects and changes (e.g., forklifts, robots, people). The dataset is collected on a mobile base equipped with two Microsoft Azure RGB-D cameras, an Ouster 128-beam LiDAR, and two IMUs. We repeat a few scenarios over the course of four months, presenting changed object locations over time. The robot setup, sensor specifications and the scenario breakdown can be found in the Supplementary Material. Figure <ref> shows the scenario changes for a sample route. To facilitate SLAM and reconstruction evaluation, we also release the ground truth scan of the warehouse and ground truth trajectories. A Leica MS60 multistation was used to obtain a centimetre-level accurate point cloud of the warehouse. Iterative closest point (ICP) was performed between the 128-beam LiDAR scan and the ground truth scan to obtain highly accurate ground truth trajectories for the robot. The robot starts and ends at the pre-defined map origin, so users can easily stitch trajectories to create long routes with change. §.§ 2D Semi-Static Simulation In this section, we introduce the first of three experiments performed. A 2D simulation was constructed to demonstrate our probabilistic framework in action, and to justify the design choices made. The setup can be seen in Figure <ref>, consisting of four unchanged objects and six moved objects. The robot is spawned around its ground truth pose, with noise in both position and orientation, and drives in the scene. The robot measures the four vertices of the rectangular objects, all corrupted by Gaussian noise. As seen in Figure <ref>, the system is able to correctly identify the six moved boxes, recovering their true poses. 
In the Supplementary Material, the evolution of the state estimates of the system over the first four frames are shown, as the robot navigates the scene. Figure <ref> shows the evolution of the object consistency expectation, 𝔼[v], and the robot pose error over the EM iterations at the first frame. The consistency expectations converge to their true values at the end of the optimization and the robot pose converges to its ground truth after six EM iterations. There is a drop in the consistency of all objects during the first iterations due to the initial error in the robot pose estimate. However, as the robot pose becomes more accurate, the true states are recovered. This experiment shows the robustness of our method, as the system is able to recover the true state even when the number of moved objects in the scene exceeds the number of unchanged objects. We shall note that the iterative optimization process finds the most likely underlying scene configuration based on the measurements. Therefore, if there exists a different hypothesis that exhibits a higher measurement likelihood, the optimizer would converge to that solution. For example, if the six moved objects had all shifted in the same direction by the same magnitude, our system would mark them as stationary and relocalize the unchanged objects instead. However, since such scenarios cannot be distinguished from a probabilistic point of view, they are not of concern. This adversarial scenario is shown in the Supplementary Material. Further ablation studies showcasing the advantage of using the ELBO instead of the single point estimate (Section <ref>), the use of max-mixture approximation (Section <ref>), the choice of weights (𝔼[v] vs 𝔼[π]) to use when choosing the mode of max-mixture (Section <ref>), and an adversarial fully dynamic scenario, are available in the Supplementary Material. §.§ 3D Semi-Static Simulation In this section, we introduce a 3D simulated semi-static scene, henceforth referred to as BoxSim. The setup can be seen in Figure <ref>. The robot moves among 17 boxes, six of which shift between robot traversals. The scenario is very challenging as there is no static background available, requiring all methods to localize against the 17 boxes. Figure <ref> visually compares the robot trajectories estimated by ORB-SLAM3 and  against the ground truth. As discussed, ORB-SLAM3 assumes a static environment. Although it utilizes robust kernels and iterative pruning to reject outlier landmarks, it is still sensitive to large scene change. As seen in the figure, its estimated trajectory diverges from the ground truth when changed objects are encountered. On the other hand,  optimizes a lower bound to a Gaussian-Uniform likelihood to explicitly infer if any of the mapped objects have changed, resulting in much smoother and accurate pose estimates. As discussed in the literature review, a common method for dynamic object handling is to ignore all potentially moving objects. In VI-MID <cit.> the authors mask out all potentially-changing objects based on semantic information, performing dense tracking on the static background alone. However, this relies on the assumption that the changing parts of the environment are known, which is not feasible in the real world. We evaluate our adaptation, Ours-MID, on two scenarios: 1) the optimal case, where the system knows which objects will shift and 2) the random case, where objects are randomly chosen to represent the static background. 
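The comparisons that follow are reported in terms of average trajectory error (ATE) and maximum position error (MPE). The paper does not spell out the exact conventions, so the sketch below assumes the common definitions (RMSE of per-frame position error for ATE, and the maximum per-frame position error for MPE) on time-synchronized trajectories expressed in the same frame:

```python
import numpy as np

def trajectory_errors(est_xyz, gt_xyz):
    """Per-frame position errors between an estimated and a ground-truth
    trajectory, assumed time-synchronized and expressed in the same frame."""
    err = np.linalg.norm(np.asarray(est_xyz) - np.asarray(gt_xyz), axis=1)
    ate = np.sqrt(np.mean(err ** 2))   # average (RMSE) trajectory error
    mpe = np.max(err)                  # maximum position error
    return ate, mpe

# Example with a synthetic drifting estimate.
t = np.linspace(0.0, 10.0, 200)
gt = np.stack([t, np.sin(t), np.zeros_like(t)], axis=1)
est = gt + np.stack([0.01 * t, 0.02 * np.ones_like(t), np.zeros_like(t)], axis=1)
print(trajectory_errors(est, gt))
```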
The average trajectory error (ATE) and maximum position error (MPE) can be seen in Table <ref>.  significantly outperforms ORB-SLAM3 and Ours-MID. For Ours-MID, in the optimal case, by ignoring all potentially changing objects the system might not observe enough features when the robot visits locations where moving objects dominate, causing poor estimates. In the random case, its performance further degrades when objects are incorrectly classified. We further compare the dense reconstructions of  against ORB-SLAM3 and Ours-MID. The top row of Figure <ref> shows the qualitative comparisons, where we overlay and voxelize both the reconstructions and the ground truth mesh and colorize the overlapping (inlier) and inconsistent (outlier) voxels. We then compute the precision, recall, and false positive rate (FPR) by counting the voxels for a quantitative evaluation, and Table <ref> lists the quantitative results. ORB-SLAM3 and Ours-MID both generated distorted maps with failed object updates due to localization drift, which led to incorrect data association in object consistency update. On the other hand,  generates the most visually correct map where all moved boxes, except the one at the top left, are relocalized to the new locations. Quantitatively,  exhibits the highest precision (coverage of true objects), and the lowest FPR (map update quality after scene change) due to its superior localization performance. As all methods use the same POCD <cit.> framework and parameters to perform map update at every frame, this experiment highlights that 1) explicit reasoning of object consistency is required for localization in semi-static environments, and 2) joint estimation of object consistency and robot localization brings significant advantage in cluttered scenes. §.§ Real-World Experiment in a Semi-Static Scene In this section, we evaluate 's effectiveness in a warehouse scenario, available through our novel real-world semi-static dataset. We stitch two trajectories captured along the same route four months apart to introduce scene changes as the robot traverses the warehouse. Figure <ref> shows the routes overlaid on the factory's schematic floor plan and Figure <ref> shows a sample pair of frames with scene changes. Due to the limited effective range of the Azure RGB-D cameras, we rely on the Ouster Lidar to provide feature depth information when traversing in open areas. We qualitatively and quantitatively compare the trajectory estimation and scene reconstruction results against ORB-SLAM3. Ours-MID is not included in the comparison as the route is cluttered with pallets and boxes, leaving very limited static background information for Ours-MID to localize against. The output trajectories along with the ground truth are visualized in Figure <ref>. ORB-SLAM3 successfully completes the first traversal with high accuracy. However, changes along the aisle in the second traversal cause incorrect data association and lead to a shortened trajectory.  performs slightly worse than ORB-SLAM3 in the first traversal. However, in the second traversal,  is able to reject the false positive matches and track with higher accuracy. The ATEs and MPEs from the two traversals can be seen in Table <ref> and Table <ref>. The bottom row of Figure <ref> visualizes the 3D reconstruction results. Again, we voxelize the reconstructed meshes and count for overlapping and inconsistent voxels to obtain the quantitative evaluations, which are listed in Table <ref>.  
outperforms ORB-SLAM3 in both localization accuracy and scene reconstruction on this route when scene changes are encountered, as it does not suffer from incorrect loop closures.

§.§ Run-time Performance

With a maximum of 4,000 ORB features in each frame, a window size of 8, and 30 EM iterations per frame,  runs at approximately 1Hz on a Linux desktop with an AMD Ryzen R9-5900X CPU at 3.7GHz. To achieve a more realistic run-time, we only execute the VEM optimization every seven frames on the real-world dataset, while relying on ORB-SLAM3 in between. We use a large number of ORB features because the dataset is challenging due to varying lighting conditions, causing even the original ORB-SLAM3 to fail at times with the default 1250 features. As well, our object-aware method requires each object to have sufficient features for tracking and association. We currently use a uniform feature detection approach, so small yet key objects may not get enough features under a lower quota. In practice,  is amenable to online operation in large environments, as change detection and localization correction are not required at every frame. A semantic-aware feature extraction approach could further improve the performance in the future.

§ CONCLUSION

In this paper we present , a novel online, probabilistic, object-aware framework to simultaneously estimate the robot pose and track and update object-level scene changes in a single joint optimization. The  pipeline uses our derived variational expectation maximization strategy to optimize factor graphs accounting for potentially-changing objects. We experimentally verify the robustness of  against state-of-the-art SLAM methods on two datasets, including our novel, real-world, semi-static warehouse dataset that we release with this work. Our system explicitly reasons about object-level stationarity to improve the robustness of localization in slowly varying scenes. Our method outperforms ORB-SLAM3 on average trajectory error by 48% on the real-world dataset and 29% on the 3D synthetic semi-static dataset. As well,  shows a 4.6% improvement on dense reconstruction precision in the large real-world scene and 31% in the smaller synthetic scene.
http://arxiv.org/abs/2307.03129v1
20230706165604
What You Don't Know Can Hurt You: Use and Abuse of Astrophysical Models in Gravitational-wave Population Analyses
[ "April Qiu Cheng", "Michael Zevin", "Salvatore Vitale" ]
astro-ph.HE
[ "astro-ph.HE", "gr-qc" ]
April Qiu Cheng (ORCID 0009-0007-8996-0735), LIGO Laboratory, Massachusetts Institute of Technology, 185 Albany St, Cambridge, MA 02139, USA; Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA. Corresponding author: April Qiu Cheng, aqc@mit.edu

Michael Zevin (ORCID 0000-0002-0147-0835), NASA Hubble Fellow; Kavli Institute for Cosmological Physics, The University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA; Enrico Fermi Institute, The University of Chicago, 933 East 56th Street, Chicago, IL 60637, USA

Salvatore Vitale (ORCID 0000-0003-2700-0767), LIGO Laboratory, Massachusetts Institute of Technology, 185 Albany St, Cambridge, MA 02139, USA; Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA

One of the goals of gravitational-wave astrophysics is to infer the number and properties of the formation channels of BBH; to do so, one must be able to connect various models with the data. We explore benefits and potential issues with analyses using models informed by population synthesis. We consider 5 possible formation channels of BBH, as in <cit.>. First, we confirm with the GWTC-3 catalog what <cit.> found in the GWTC-2 catalog, i.e. that the data are not consistent with the totality of observed BBH forming in any single channel. Next, using simulated detections, we show that the uncertainties in the estimation of the branching ratios can shrink by up to a factor of ∼ 1.7 as the catalog size increases from 50 to 250, within the expected number of BBH detections in LIGO-Virgo-KAGRA’s fourth observing run. Finally, we show that this type of analysis is prone to significant biases. By simulating universes where all sources originate from a single channel, we show that the influence of the Bayesian prior can make it challenging to conclude that one channel produces all signals. Furthermore, by simulating universes where all 5 channels contribute but only a subset of channels are used in the analysis, we show that biases in the branching ratios can be as large as ∼ 50% with 250 detections. This suggests that caution should be used when interpreting the results of analyses based on strongly modeled astrophysical sub-populations.

§ INTRODUCTION

GW emitted by the mergers of compact objects, neutron stars and black holes, encode the properties of their sources, including masses, spins, and distances. When one has a large enough dataset, information can be combined from the detected sources to infer properties of the underlying astrophysical process — or processes — that created them. Nearly 100 compact binary coalescences[The exact number depends on how conservative of a detection threshold one uses.] (the large majority of which are BBH) have been revealed in the data of ground-based GW detectors, LIGO <cit.> and Virgo <cit.>, up to their third observing run (O3) <cit.>, allowing this type of analysis to be performed. The formation scenarios for compact binaries can be broadly separated into two categories: isolated evolution in the galactic field and dynamical assembly in dense environments such as clusters and AGN disks <cit.>. The overall dataset might contain BBH formed from a combination of these and other evolutionary pathways. Ideally, one would like to fully characterize any astrophysical formation channels that contribute sources, as well as their relative abundances (branching ratios).
In practice, several approaches have been proposed and followed in the literature. They all have merits and shortcomings, and we quickly review them here, focusing on BBH, which are the topic of our work. * Heuristic models — The most straightforward analyses rely on heuristic parametric distributions to describe the astrophysical distributions of black hole parameters. For example, the primary (i.e. most massive) black hole mass distribution can be modeled as a mixture model of a power law and a gaussian (the model of  , originally introduced by  ); the spin orientation distribution as a mixture model of an isotropic component and a component nearly aligned with the orbital angular momentum <cit.>; etc. The functional forms might be chosen based on computational expediency, or be inspired by reasonable astrophysical expectations (e.g. a power-law component in the black hole mass function because the masses of progenitor stars are distributed that way <cit.>). Meanwhile, mixture models might allow for different sub-populations to be accounted for. The main potential shortfall of this approach is that if the models are very strong, the resulting posteriors might actually be model-driven, especially for hard-to-measure parameters. This has been shown, for example, in the context of the spin magnitude measurement <cit.> and the spin orientation <cit.>. On the positive side, if a parameter can be reliably measured, it often has a clear connection with a meaningful astrophysical quantity (e.g., the slope of a mass power law). Heuristic models also allow correlations between parameters to be probed in a straightforward manner <cit.>. * Flexible models — Flexible model approaches have been proposed as a way of avoiding the risk of forcing features into the data. In such models, 1D posterior distributions (usually fully marginalized, e.g. p(m_1 | d)), are modeled as splines, Gaussian processes, or using autoregression <cit.>. While these approaches are less likely to impose stringent features into the posteriors (though see e.g. ) the number of unknown parameters is typically larger than for heuristic models and the model parameters are not usually associated to any specific astrophysical quantity. Furthermore, these methods are not well suited to disentangle sub-populations, and are instead better suited to measuring the overall distribution of parameters. * Astrophysically informed models — Finally, one may use models obtained directly from the output of population synthesis. Given a set of initial conditions and choices for uncertain stellar, binary, and environmental physical parameters, such models make predictions for the anticipated underlying and detectable distribution of compact binaries. These models can be parametrized directly in terms of physically-meaningful quantities (e.g. the onset and evolution of binary mass transfer phases, the strength of supernova natal kicks, the efficiency of angular momentum transport), and correlations between parameters are automatically built in the models, both of which are very strong positive factors. In practice, the range of variations in physical uncertainties, as well as the number and complexity of the differing channels one considers, are often limited by the availability of numerical simulations that thoroughly explore the variation of the output (e.g. distribution of spin magnitude in the population) when one of the physical input parameters (e.g. efficiency of common envelope evolution) is varied. 
This is due to the fact that most population synthesis algorithms require significant computational resources to run, which implies that they cannot be evaluated “on the fly” for any value of their input parameters but must instead, for example, be evaluated on a sparse grid. Recent efforts have begun to more thoroughly explore compact binary population predictions for individual formation channels over an expansive array of physical and environmental uncertainties <cit.>. Consideration of multiple formation channels simultaneously and self-consistently proves more difficult given the diversity of codebases needed to model different channels and the unique physics that affects compact binary populations from each channel. The most expansive multi-channel analysis to date was performed in <cit.>, who considered 5 possible formation channels (see Section <ref> below), parameterized by the spin of quasi-isolated black holes at birth (a proxy for the efficiency of angular momentum transport in massive stars) and the efficiency of the common envelope ejection. By analyzing the 45 confident BBH sources of the penultimate (GWTC-2) LVK catalog,  <cit.> found that the data required more than a single formation channel in order to explain the diversity of GW events and the distribution of parameters of the detected binaries. They also found that the data preferred small natal black hole spins, consistent with the fact that most of the LVK BBH have small spin magnitudes. In this paper, we analyze both merits and shortcoming of approaches based on models informed by the output of population synthesis codes. First, we repeat the analysis of  <cit.> on the latest LVK catalog (GWTC-3), which comprises 69 BBH with a false alarm rate of less than 1 yr^-1. We find that the model for common envelope evolution can explain up to of the BBH in the underlying population (while contributing up to of the detectable BBH). Then, we create catalogs of simulated BBH signals with parameters drawn from our models, for some assumed values of the common envelope efficiency, quasi-isolated natal black hole spins, and branching fractions across channels. We analyze the performance of the analysis as the number of BBH sources in the catalog increases from 50 to 250. We find that for channels that produce higher number of detectable sources (and hence are proportionally more represented in the catalog, even if the true underlying branching fractions are not higher) the uncertainty on the underlying branching fraction can improve by a factor of up to ∼ 1.7 as the number of sources increases to 250. Next, we study the biases that can be introduced in this type of inference if the analysis is not using a suite of models fully representative of what is actually realized in nature. This problem was first indirectly shown by <cit.>, who added a channel for primordial black hole formation to those used by  <cit.>, and obtained that the inference on the fraction of primordial back holes was significantly affected by which of the other channels were included in the analysis. To do so, we generate mock universes where each of the 5 channels contributes some known fraction of the underlying population, and run the inference excluding in turn one of the 5 models. We show how this introduces biases in the inference of the remaining 4 channels' branching fractions. 
The channels that are most heavily biased are the ones that can most easily produce sources similar to the one channel excluded from the analysis, as well as those with the lowest detection efficiencies. Finally, we generate universes where the totality of the BBH sources are produced by one of the 5 channels, and run the analysis with the same 5 models. We show that while the natal spin can be inferred correctly, it is usually not possible to exclude that more than one channel contributes to the population after 100 events, a caveat to our result from our inference on GWTC-3 data that multiple channels contribute to both the underlying and detected BBH population. The rest of the paper is organized as follows: in Section <ref> we review the basics of hierarchical inference and the models used in this work, and apply these tools to GWTC-3's BBH. In Section <ref> we apply the method to different simulated catalogs in the ideal scenario where models and true populations match. In Section <ref> we focus on biases in the inference. We conclude in Section <ref>. § HIERARCHICAL INFERENCE ON GWTC-3 DATA §.§ Hyper-Inference Method We use hierarchical Bayesian inference on the branching fractions between different astrophysical formation channels of BBH. Our methods mostly follow the analysis developed by <cit.>, adapting their codebase, Astrophysical Model Analysis and Evidence Evaluation (AMA𝒵E), for our work. Here we outline the essentials of this method, as well as the key differences. We consider five different formation channels: three isolated evolution (field) channels, and two dynamical formation channels. The CE <cit.> and SMT <cit.> scenarios are field channels which involve unstable and stable mass transfer, respectively, following the formation of the first black hole. In the CHE channel, stars in a close, tidally-locked binary rotate rapidly, causing temperature gradients that lead to efficient mixing of the stars' interiors. The stars do not undergo significant expansion of the envelope, preventing significant post-main sequence wind mass loss and premature merging, resulting in higher mass BBHs <cit.>. Finally, the two dynamical formation channels lead to merging black holes via strong gravitational encounters that harden the binary in cluster cores <cit.>, where heavy black holes migrate towards due to dynamical friction <cit.>; we consider dynamical formation of BBH in GC and NSC. See <cit.> for a more detailed description of the astrophysical models considered in this work. Each of these channels are modelled to predict the 4-dimensional distribution of the BBH it forms with parameters θ⃗ = [, , , ], where is the source-frame chirp mass, is the mass ratio (defined to be 0 < ≤ 1), is the effective dimensionless spin parameter, and is the redshift; these are constructed into a probability distribution using a 4-dimensional KDE bounded by the physical constraints of each parameter. The models also depend on two additional parameters encoding uncertainties in the physical prescription: , the dimensionless spin of a black hole formed in quasi-isolation directly following core collapse, and , which parameterizes the efficiency of common envelope ejection <cit.>. We note that the choice of natal black hole spin does not set all black holes in a given population to merge with this exact spin because tidal spin-up processes <cit.> and hierarchical mergers <cit.> can increase the spin of black holes that participate in BBH mergers. 
We assume these two parameters take on a grid of discrete values (∈ [0.0, 0.1, 0.2, 0.5], ∈ [0.2, 0.5, 1.0, 2.0, 5.0]) over which we compute the models, although only affects the CE channel in our models. A plot of the detection-weighted marginalized model KDE for =1.0, =0.2 is shown in <ref> as an example. Further details, including formation models, detection weighting, and mathematical framework, can be found in <cit.>. Overall, we perform hierarchical inference on 7 hyperparameters[Due to the restriction ∑_iβ_i=1 (the branching fractions must add up to 1), we technically only perform inference on 6 hyperparameters.] Λ⃗ = [β⃗, , ], where β⃗ = [, , , , ] are the 5 astrophysical formation channel branching fractions. The steps involved in perfoming hierarchical inference on GW populations given a set of posterior samples (θ⃗ = [, , , ] in our case) for sources in the presence of selection effects have been thoroughly discussed in the literature <cit.>, and we therefore do not review them here. As in <cit.>, we use an uninformative prior for the hyperparameters: a flat symmetric Dirichlet prior for β⃗ and a uniform prior for and over the allowed discrete values. The outputs of the inference are samples of the hyper-posterior p(β⃗ | , ) over the grid of hypermodels (i.e. allowed values of and ), from which we can compute the marginalized hyperposterior p(β⃗) as well as the Bayes factors ab of hypermodel a compared to hypermodel b. We also recover the detectable branching fractions , which represent the fraction of detectable BBH originating from each channel. These are computed by re-scaling each underlying branching fraction by its detection efficiency, defined as ξ^, _j = ∫ P_det(θ⃗)p(θ⃗|μ_j^, ) dθ⃗, where P_det(θ⃗) is the probability of detecting a BBH with parameters θ⃗ and p(θ⃗|μ_j^χ, α) is the probability of formation channel j producing a BBH with parameters θ⃗, dependent on the model μ_j and choice of physical prescription and <cit.>. We apply this method to both real (Section <ref>) and simulated (Sections <ref>, <ref>) BBH data. §.§ Application to GWTC-3 We extend the work of <cit.> by applying the inference to confident BBH detections up to the O3b. We use the publicly-released priors and posterior samples from the GWTC-2.1 <cit.> and GWTC-3 <cit.> analyses; detection probabilities are calculated for the LIGO-Hanford, LIGO-Livingston, and VIRGO network operating at sensitivities <cit.>. There are two key differences between our analysis and that of <cit.>. First, we apply a more stringent detection threshold of FAR≤ 1 yr^-1 <cit.>; therefore, events GW190424_180648, GW190514_065416, and GW190909_114149, which were used in the previous analysis, are now excluded. Second, we evaluate the prior at the posterior points analytically rather than by constructing a KDE from prior samples; for further discussion on this point see Appendix <ref>. Finally, as in <cit.>, we opt to exclude GW190521 from the analysis, as its posterior extends significantly to regions of our model KDE with little to no support, although we find that the inclusion of GW190521 does not significantly affect our results; see <cit.> for an analysis that includes GW190521. Overall, we do hierarchical inference on 68 BBH, compared to the 45 BBH considered in <cit.>. Unless otherwise specified, we report results as median and 90% symmetric credible intervals. 
<ref> shows the posterior distributions on the underlying branching fraction β⃗ including detections from O3b; the same plot but for the detectable branching fractions can be found in <ref> in Appendix <ref>. We find =, =, =, = and =, indicating strong support for the CE channel dominating the underlying astrophysical population in our set of models. However, there is comparable contribution from all five channels to the detectable BBH population: =, =, =, =, and =. We attribute this difference to the fact that compared to other channels, the CE channel produces less massive black holes distributed at higher redshifts, which therefore are harder to detect; see <ref>. With 90% (99%) credibility, no single formation channel contributes to more than 49% (61%) of the detectable BBH population. Additionally, over 98% of posterior samples have significant (>10%) contributions from three or more different formation channels. Overall, we find that the CE channel contributes the most to the underlying BBH population; however, as we discuss in Section <ref>, the posterior for is also the most uncertain and prior-dominated. We additionally find that a mix of formation channels contributes to the detectable BBH population These results, as well as the overall shapes of the branching fraction posteriors, are consistent with the analysis with GWTC-2 data <cit.>, which found =71^+19_-60%, and that no single channel contributed to more than 70% of the detectable BBH population with 99% confidence. No strong correlations are apparent upon examining a corner plot of the branching fractions. Turning to the selection of physical prescription hyperparameters and , we find no posterior support for models with > 0.1; there is no significant preference between the =0.0 and =0.1 models, with =0.1=0.0 =. We favor high common envelope efficiencies, with =5.0=1.0=, and strongly disfavor low common envelope efficiencies, with =0.2=1.0=. These results are also consistent with <cit.>. The main difference is that we obtain stronger constraints, both in singling out preferred hypermodels and in the uncertainties (widths) of the branching fraction hyperposteriors. We attribute this effect to the increase in sample size from GWTC-2 to GWTC-3, as well as the difference in method in evaluating the prior at the posterior points during the inference; see Appendix <ref> for further discussion. § PROJECTIONS FOR FUTURE CATALOGS §.§ Method Next, we perform the same inference using simulated BBH observations in a universe where the BBH population exactly follows our models. The motivation for this analysis is to to quantify how uncertainties in the hyperposterior scale with the number of observed events. We first create a mock “universe”, i.e. a set of true values for our hyperparameters Λ⃗ = [β⃗, , ]. From each formation channel j, we draw n_j = β_j n BBH from the population model p(θ | μ_j^, ), for a total of n=5 × 10^4 BBH that form our underlying population. Next, we draw from this underlying population, assigning extrinsic parameters (sky location and inclination) from an isotropic distribution. For each BBH system, we calculate its optimal signal-to-noise ratio ρ_opt assuming a network consisting of LIGO-Hanford, LIGO-Livingston, and Virgo operating at O4 low (LIGO) and high (Virgo) sensitivities <cit.>, and keep only the mock signals with ρ_opt≥ 11; we repeat this process until we have a mock catalog of detections. Then, we perform parameter estimation on the BBH. 
For both the SNR calculation and parameter estimation, we use the Bayesian inference software <cit.> and the waveform approximant <cit.>. Finally, we use these posterior samples for the hierarchical inference analysis outlined in Section <ref>. For consistency, we use the same O4 sensitivities for the detection weighting during the inference as the above computation of mock signal SNRs. The various mock universes that we use in this paper, along with the sections in which they are discussed, are summarized in <ref>. For this analysis, we choose a fiducial unequal mixture of formation channels with underlying branching fractions =, =, =, =, =, quasi-isolated natal spins of = 0.0, and a CE efficiency of = 1.0. To investigate the scaling of the hyperposterior with , for each universe we repeat the inference with =50, =150, and =250. =250 represents an estimate on the total number of BBH detections anticipated by the end of O4 <cit.>. §.§ Results In <ref>, we plot for our chosen fiducial universe the overall posterior distribution on the underlying branching fractions for different values of (left column), as well as contributions (to scale) from the most favored values of and (columns 2 to 5). First, we do recover the true values of β⃗ and . For =250, we find =, =, =, = and =; the true value of the branching fraction falls within the 90% symmetric credible interval of the posterior for all 5 channels. Furthermore, =0, the chosen true value for , is favored over the next best model, =0.1, by a Bayes factor of 3.5 × 10^5. Unlike , the model with equal to the true value is not favored, with a marginal preference for higher values =5.0 and =2.0 over the true value =1.0 with Bayes factors =5.0=1.0=8.9 and =2.0=1.0=2.5. Since only affects the CE channel, it is not too surprising that the inference did not decisively favor any one value, in a universe where most detected BBH do not come from the CE channel. We note two possible contributing factors to favoring higher values of over the true value. If BBH formed in the CE channel disperse their envelopes more efficiently (i.e. have higher values of ), the resulting BBH are * Less massive, leading to lower detection efficiencies and therefore larger measurement uncertainties for this channel. This is because models with larger have less low-mass binaries merging within the CE itself, leading to more low-mass BBH being able to form and merge <cit.>. * Lower spinning, due to the post-CE separations being wider for higher and therefore less susceptible to tidal spin-up <cit.>; this leads to more detections with closer to =0.0, the chosen fiducial value for this universe. In space, this is a feature degenerate with dynamical channels such as the GC channel, which also produces BBH with ≈0 due to the BBH produced having isotropic spin orientations. This result suggests to view our result in Section <ref>, that higher common envelope efficiencies are favored, with some caution. More notably, we see convergence of the hyperposteriors towards their true values as we increase the number of detections. First, we highlight the narrowing of the branching fraction posteriors from the first row to the third row of <ref>. We quantify uncertainties in the branching fraction posteriors by the widths of the 90% symmetric credible intervals, and will hereafter use the two phrases interchangeably. These uncertainties decrease by up to 69% as we go from 50 to 250 mock events (see <ref>). 
The Bayes Factor in favor of the correct increases by nearly 5 orders of magnitude, strongly selecting the true value =0.0. This is illustrated by the empty plot corresponding to the contribution from =0.1 for =250 (third row, middle column), indicating negligible support for competing values of . On the other hand, the inference has increasing support for =5.0, but even at 250 mock detections, we only weakly prefer it over the true model =1.0. It is worth noting that the scaling of the uncertainty with varies significantly between formation channels. Some channels have more distinctive features in parameter space (see <ref>) that make them easier to distinguish during the inference. Additionally, differences in the detection efficiencies of different formation channels likely play a role as well. <ref> shows the percent decrease in the uncertainty of branching fraction posteriors from =50 to =250 against the detection efficiency of its channel ξ^, for our fiducial values =0.0 and =1.0. Channels with lower detection efficiencies, most notably the CE channel, appear to scale much more poorly with than channels with higher detection efficiencies. Indeed, the CE channel tends to produce lower-mass black holes at higher redshifts (due to typically shorter delay), leading to fewer of them being detectable (see <ref>). We emphasize the effect of the low detection efficiency of the CE channel: despite the fact that 40% of the mock underlying population originates from this channel, only 2, 5, and 17 CE BBH end up in the mock observations, of the 50, 150, and 250 total detections, respectively. The posterior not only has the largest uncertainty of all formation channels, but also barely decreases in uncertainty as we increase . If we instead examine how the detectable branching fractions scale with in <ref>, we can see that all detectable branching fractions narrow at similar rates as we increase from 50 (first row) to 250 (third row). Indeed, we can see in <ref> that the detectable branching fractions (dotted line) do not exhibit the dependence of the convergence rate on detection efficiency that the underlying branching fractions (solid line) have. Already, we can see some of the difficulties that arise from hierarchical Bayesian inference in the face of large measurement uncertainties, selection effects, and degenerate features in parameter space. A common theme, we will explore these problems in further detail in Section <ref>. Finally, we repeat this analysis for a different set of hyperparameters consisting of equal branching fractions between all formation channels (β_j=0.2 for all j), =0.2, and =1.0 (see the second row of <ref>). We find similar results in the convergence with . Uncertainties in the underlying branching fraction decrease by up to 47% from =50 to =250, and, consistent with our previous results, the uncertainty in does not decrease. We find also that the shrinking of the detectable branching fraction () posterior uncertainty is more consistent across different channels than the underlying branching fraction. Increasing causes a strong preference for the true value of =0.2; while we have =0.2=0.1=1.0 for =50, there is no posterior support for ≠ 0.2 at =150 and =250. Similar to the previous example, there is no strong preference for the true value of , although it is slightly favored with =1.0=0.5=1.7. Figures showing the marginalized posteriors on β and for this set of hyperparameters can be found in Appendix <ref>. 
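For reference, the posterior uncertainty quoted throughout this section is the width of the 90% symmetric credible interval, i.e. the difference between the 95th and 5th percentiles of the marginal hyperposterior samples. A minimal sketch, with a stand-in posterior whose width shrinks with catalog size purely for illustration:

```python
import numpy as np

def ci90_width(samples):
    """Width of the 90% symmetric credible interval of 1-D posterior samples."""
    lo, hi = np.percentile(samples, [5.0, 95.0])
    return hi - lo

# Toy check of the narrowing with catalog size (illustrative only; the text
# shows that the actual scaling is channel-dependent).
rng = np.random.default_rng(4)
for n_obs in (50, 150, 250):
    fake_posterior = rng.beta(2 + 0.1 * n_obs, 8 + 0.4 * n_obs, size=10_000)
    print(n_obs, round(ci90_width(fake_posterior), 3))
```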
§ BIASES OF POPULATION INFERENCE Finally, we investigate the biases that may arise when performing hierarchical inference with the methods described above in Section <ref>. We again refer to <ref> for the chosen true values of the hyperparameters that we present in subsequent sections; we use the same method for hierarchical inference and simulated BBH detections as outlined in Sections <ref> and <ref>, respectively. In Section <ref>, we perform hierarchical inference on sources originating exclusively from each individual formation channel and examine differences in the recovered posteriors. In Section <ref>, we isolate the effect of the natal black hole spin hyperparameter by comparing the posteriors of universes with different choices of . In Section <ref>, we explore the consequences of doing hierarchical inference with incomplete information (i.e. excluding a channel from the inference) with detections from a mixture of formation channels. Finally, we summarize and discuss our results in Section <ref>. §.§ Inference in Single-Channel Dominated Universes For each channel j, we perform hierarchical inference on =100 mock detections in a universe where the entire underlying BBH population originates from channel j (i.e. β_j=1). We choose for our true values of the physical prescription =0.0 and =1.0, although the latter choice only affects the CE-dominated universe. Figure <ref> shows the branching fraction posteriors and support for different values of for each single-channel dominated universe. In general, none of the branching fraction posteriors have significant support for β=1, even for the channel that is actually producing the entirety of the BBHs. This happens because we use a flat, symmetric Dirichlet distribution for our prior, which results in a prior preference for a mixture of channels rather than a single dominating channel. The 5th percentiles of the dominating-channel branching fraction posteriors for the CE, CHE, GC, NSC, and SMT-dominated universes are =, =, =, =, and =, respectively. The degree to which we underestimate the contribution from the dominating channel varies significantly depending on the channel. We again highlight the effect of detection efficiency. As we saw in Figure <ref>, the CE, GC, and SMT channels have the lowest detection efficiencies, especially the CE channel. Across the different universes under consideration, , , and (first, middle, and last columns, respectively) have larger uncertainties that compete with and take away from the branching fraction posterior of the dominating channel; because of the lower detection efficiencies of these channels, it is difficult to discern whether this channel is nonexistent or if its contribution to the full set of detected observations is minor. The case of is particularly severe: as it does not strongly deviate from the prior, the 95th percentile for is greater than 32% in all universes for which there is no CE contribution to the underlying population. The CE-dominated universe (first row) does not suffer from this effect; as a result, only in that universe do we recover with narrow precision that the dominating channel is indeed dominating. In the opposite case, the NSC channel has the highest detection efficiency. In the NSC-dominated universe (fourth row), the median value of the posterior is less than 0.5, with large contributions to the BBH population from the channels with lower detection efficiencies (, , and ). There are also several interesting features of the spin model selection. 
First, in the CHE-dominated universe (second row), the inference is not able to select the true natal black hole spin of =0.0, and instead gives approximately equal support to =0.0, 0.1 and 0.2, as illustrated by the similar heights of the different colored curves. Recall that while the aligned-spin CE and SMT field channels mostly produce BBH with ≈ (although CE has a tail for higher spins from tidal spin-up) and the isotropic-spin GC and NSC dynamical channels produce BBH scattered around ≈ 0, the CHE channel uniquely produces BBH with ≳ 0.2 irrespective of due to strong tidal spin-up effects (see <ref>). As a result, the inference is not able to discern between values of between 0 and the true value 0.2. In the universes dominated by dynamical formation channels (GC, third row, and NSC, fourth row), the selection of the true value of =0.0 is less strong, with non-negligible support for =0.1 as shown by the green curves. In these universes, most detections have ≈ 0, and only affects the width of this distribution, a weaker feature to detect. Additionally, the sub-population of hierarchical mergers in these channels help to drive the mild support for > 0; at lower values of , hierarchical mergers occur more readily due to weaker gravitational recoil kicks. Hierarchical mergers can have || significantly greater than zero <cit.>, as illustrated by the wings of the marginalized distribution for the GC and NSC channels in <ref>. Because hierarchical mergers form a larger sub-population in NSC due to their deeper potential wells and ability to retain post-merger black holes <cit.>, this effect is greater for the NSC-dominated universe. Additionally, in the NSC-dominated universe, the posterior for =0.1 (green curve) peaks at a higher value than =0.0 (blue curve). This is because when =0.1, field channels produce fewer of the ≈ 0 BBH that make up the bulk of the population. Finally, although not shown in Figure <ref>, we comment on the inference on . First, we do favor the true value of =1.0 for the CE-dominated channel, preferring it over the next most favored model with =1.0=2.0=72, as expected from a universe where all detections are from the CE channel. We also note consistency with the results of Section <ref> in the recovery of for universes where there is no contribution from the CE channel; there is a small bias towards higher values of . We find >1.0<1.0=1.52, 1.54, 1.68, and 1.58 for the CHE, GC, NSC, and SMT-dominated universes, respectively[To be precise, the quantity we calculate is (=2.0=1.0 + =5.0=1.0) / (=0.2=1.0 + =0.5=1.0)]. §.§ Biases in Spin Inference To examine the effect of on the hyperposterior, we perform hierarchical inference on different universes with the same underlying branching fractions, but different true values of , using 250 mock detections. To isolate the effect of , we choose an equal mixture of formation channels in the underlying population (β_j=0.2 for all channels j). We choose =1.0. <ref> shows the marginalized branching fraction posteriors for universes with =0.0 and =0.2. Consistent with our results in Section <ref>, we recover the true values of the branching fractions as well as the true value of for both universes. Here, we can see that the selection of is non-linear: it is harder to distinguish between lower black hole spins (i.e. =0.0 versus =0.1) than higher spins. While there is no support for other values of in the posterior of the =0.2 universe, there is still non-negligible support for =0.1 in the =0.0 universe, with =0.0=0.1=30. 
This is expected; it is difficult to discern between slowly-spinning and non-spinning populations due to the inherent measurement uncertainty of GW observations <cit.>. Figure <ref> shows a corner plot of the branching fraction posteriors for both universes. Here, we can see the same effect: despite having the same number of mock detections, the data is more informative (yielding a posterior more different from the prior) in the universe with non-zero . §.§ Inference with an Incomplete Set of Populations Finally, we perform hierarchical inference with one formation channel excluded, such that while all five channels are contributing sources, the inference is only performed with four channels. This analysis is motivated by the fact that any population analysis done on real BBH data likely does its inference with an incomplete set of formation channels; we most likely do not know the totality of all possible BBH formation channels, nor can we model them all accurately and self-consistently. We again consider an equal-mixture branching fraction universe (β=0.2 between all formation channels), and true values for our physical prescription of =1.0 and =0.2. <ref> shows the marginalized branching fraction posteriors and support for different values of for the full inference as well as for inferences with one channel excluded. We defer discussion of biases in selection to Appendix <ref>, and focus on and the formation channel branching fractions in this section. By examining which channels receive more or less posterior support as a result of the inference's incomplete knowledge of formation channels, we can infer the correlations between the branching fractions of different formation channels. For example, when the CE channel is excluded (second row), the increase in branching fraction is spread approximately equally over the other channels, suggesting a negative correlation of with the other branching fractions, as is the prior. On the other hand, when the SMT channel is excluded from the inference (last row), the posterior shifts towards higher values, while support for the other channels decreases. Such correlations are consistent with the branching fraction corner plot from the full inference (<ref>); the pink contours are relevant to the set of mock detections discussed in this section. The uncertainties in are the greatest of all channels, and the posterior the least constrained; as such, the marginalized posterior closely follows the prior. As a result, is negatively correlated with all channels, which leads to systematic overestimation when doing inference with a channel excluded. In general, we can see that some branching fraction posteriors (i.e. and ) are much better constrained than others and are affected the least by the choice of prior. As seen in Section <ref>, the prior can cause a bias towards higher values of β for channels with lower detection efficiencies and greater uncertainties. Next, we highlight the effect of incomplete formation channel knowledge on the selection of the natal black hole spin . We see two cases in which the wrong value of is selected. When the CHE channel is excluded (third row), we infer a high natal black hole spin of =0.5, as indicated by the yellow curve, with a Bayes factor over the true value =0.2 of =0.5=0.2=373. This is due to the high-BBH that the CHE channel produces. When the inference does not account for the CHE channel, it tries to explain these highly-spinning CHE black holes with other field-channel BBH spinning at a higher . 
Then, in order to still account for the lower non-CHE BBH, the branching fraction for the GC and SMT channels are increased and decreased, respectively. We remark that no such adjustment is seen for the NSC and CE channels, despite them having similar features in space as the GC and SMT channels, respectively. An opposite effect is seen when excluding the NSC channel (fifth row), which, as noted above, produces BBH scattered around =0 with tails that extend to more positive and negative values due to the presence of hierarchical mergers. The inference has constructed two competing explanations in order to explain these lower-spinning BBH: low-CE BBH (represented by the green and blue curves with posterior support for high and low ) and higher-GC BBH (represented by the pink curve with support for low and high ). This highlights the degeneracy between low-spinning field channels and high-spinning dynamical channels in producing similar features in the population distribution. This case is especially remarkable because the green and blue curves (=0.0 and =0.1) for the posterior bears striking resemblance to the posterior inferred from GWTC-3 data (<ref>), even though these features are purely an artifact of the inference neglecting a single BBH formation channel. We again remark on the difference between the and posteriors: despite the fact that both CE and SMT are field channels with the similar features in space, support for does not increase (and rather decreases) in the low-model as a result of the exclusion of the NSC channel. One reason why this may occur is that SMT BBH cannot go through tidal spin-up, unlike CE BBH, and hence accounts for a narrower range of concentrated around . Therefore, can be strongly affected by the exclusion of a channel with strong features in space, especially when the wrong model of is inferred. As mentioned in the previous paragraph, we also find notable that the inference favors two separate competing explanations when NSC is excluded. The GC and NSC marginalized KDE are similar due to the isotropy of spin orientations in these two channels; upon examining the marginalized KDE of the other three BBH parameters (, , ), the GC channel appears still to have the closest resemblance to the NSC channel. Despite this, the inference does not simply overcompensate the exclusion of the NSC channel by correspondingly increasing , suggesting the influence of higher-dimensional features in parameter space. On the other hand, no such effect is seen when the GC channel is excluded, whose posterior has a simple overcompensation in and . Indeed, although we have been able to broadly interpret these posteriors by focusing on different channels' features in space, there must be other subtle effects at play. We have shown in multiple ways the differences between the and posteriors and the and posteriors, despite having similar features in space. Although the exclusion of each channel has its own unique and interesting consequences, there appears to be a bias for the CE and GC channels, systematically underestimating the CHE and NSC channels. We point again to detection efficiency: of the field and dynamical channels, respectively, the CE and GC channels have by far the lowest detection efficiencies. With the uniform branching fraction spread in our current set of hyperparameters, only 2% of detections are expected to be from the CE channel (versus 30% and 16% from the CHE and SMT channels), and 16% from the GC channel (versus 36% from the NSC channel). 
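The expected channel composition of a detected catalog quoted above follows directly from the underlying fractions and the per-channel detection efficiencies, with additional scatter from the finite catalog size. A small sketch with placeholder efficiencies (not the values used in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

beta = np.full(5, 0.2)                          # equal underlying mixture
xi = np.array([0.005, 0.06, 0.03, 0.07, 0.03])  # placeholder efficiencies (CE, CHE, GC, NSC, SMT)

p_detected = beta * xi / np.sum(beta * xi)      # expected detected fractions
n_obs = 100
expected_counts = n_obs * p_detected
one_realization = rng.multinomial(n_obs, p_detected)  # a single mock catalog

print(np.round(p_detected, 3), expected_counts.round(1), one_realization)
```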
Finally, we have repeated this analysis for the set of hyperparameters used in the convergence analysis in Section <ref>, with an unequal mixture of channels and =0.0. In this universe, the bias towards overestimating is more severe due to the lack of spin information from our choice of . Consistent with our previous discussion, there again seems to be a bias towards the CE and GC channels when one channel is excluded. Due to the CHE and NSC channels' high detection efficiencies, the and posteriors support their low true values (5% for both channels) with small uncertainties. The corresponding plot of the branching fraction posteriors for this universe can be found in Appendix <ref>. §.§ Discussion In this section, we conducted several investigations of different biases that arise in hierarchical Bayesian inference based on astrophysical formation models of BBH. We summarize the key takeaways as follows: * Single-channel dominated universes (Section <ref>) * Even when our models of the underlying formation channels are perfectly accurate, at =100 the data are still relatively uninformative due to information loss from parameter estimation and low detection rates. Thus, results of inference are still influenced by the choice of a flat prior and exhibit bias towards a mixture of formation channels. This caveats our result from Section <ref> that no single channel dominates the underlying BBH population, from the inference on GWTC-3 data. * It is difficult to precisely infer the true value of , as it only affects the CE channel, which produces very few detectable BBH due to its low detection efficiency relative to the other channels considered. Only with mock catalogs where the CE channel dominates the detections can we recover the true value of , as expected. * Biases in spin inference (Section <ref>) * It is easier to infer higher values of than lower values. When is low (0.0 or 0.1), one has less spin information, and uncertainties in both the branching fraction posteriors and the selection of the true value of are greater. * Inference with incomplete populations (Section <ref>) * Both the branching fraction posteriors and the inferred value of can be heavily affected if the inference is performed without knowledge of all contributing formation channels. Some branching fractions are overestimated or underestimated by a factor of ∼ 3 or more from the exclusion of a formation channel, and ignoring channels that produce particularly high or low BBH can cause the inference to strongly select an incorrect value of . * Degeneracies exist between different sets of hyperparameters that can make it difficult for the inference to discriminate between them. For example, it can be difficult to distinguish between field BBH with low and dynamical BBH with higher , due to being the only spin information used in the inference (which in turn is due to the fact that is arguably the only spin parameter that can be measured for all BBH with advanced detectors). There are likely more subtle degeneracies and correlations in higher-dimensional parameter space that are difficult to explain from the marginalized distributions, but nonetheless play a role in the inference. * Inference on the underlying branching fractions can be biased due to the varying detection efficiencies of different channels. In particular, the CE channel has a detection efficiency nearly an order of magnitude below the other channels, causing large measurement uncertainties in and a relatively uninformed (i.e. 
close to the prior) posterior for . As a result, the exclusion of a channel from the inference usually results in the posterior support extending to higher values; similar effects can be seen in other low detection efficiency channels, such as the GC and SMT channels. § CONCLUSIONS Understanding the physical processes and formation environments of compact binary mergers is one of the most pressing questions in GW astrophysics. In this paper, we pair the most recent catalog of BBH mergers provided by the LVK with an expansive, self-consistent suite of astrophysical models to investigate the origins of BBH mergers. Consistent with <cit.>, we find that given our set of astrophysical models, multiple formation channels are likely contributing to the observed population (though see Section <ref> for a caveat). We demonstrate both the predictive power of our inference methodology and its scaling with future detections by generated mock observations with realistic measurement uncertainties from synthetic universes with known branching fractions and physical prescriptions. Perhaps most important, we also demonstrate the pitfalls of this type of inference, particularly how an incomplete census of formation models or incorrect physical assumptions can lead to significant biases in inference. This work should be treated as a cautionary tale for those attempting to understand relevant physical processes leading to compact binary mergers and formation environments of compact binary progenitors, as inference can be severely compromised if models suffer from inaccuracies of incompleteness. Though the suite of BBH formation channel models used in this work are state-of-the-art and apply self-consistent physical treatments where possible, they in many ways can be treated as exemplary. Given the numerous uncertainties in massive-star evolution, binary physics, compact object formation, and environmental effects, it is currently impossible to construct models with complete physical accuracy or to fully explore all the uncertainties that impact the source property predictions of population synthesis. Regardless, the biases demonstrated in this analysis are a generic concern when performing inference based on an incomplete or inaccurate set of astrophysical models. We do not suggest that such studies have no utility; compared to population inference that rely on heuristic or flexible models, studies such as these have the benefit of translating directly to physical constraints, albeit requiring proper caveats. Despite the potential issues with such analyses, we anticipate that given the diversity of BBH properties observed to date, the key result of multiple formation channels contributing to the detected population of BBH remains robust. A potential concern one might have when considering multiple formation channels for the production of BBH mergers is how the universe could conspire to have multiple distinct formation pathways, governed by unique physics, to contribute to the population of merging BBH at a similar rate. Occam's razor would suggest that this is an unlikely scenario. However, astrophysical transients have been shown in many instances not to obey this principle <cit.>. Many channels of BBH formation have predicted rates within the same order-of-magnitude (, see for a review) and the selection effects inherent to GW detection are certainly capable of causing sources from intrinsically rare channels to be heavily represented in the detected population. 
Future observations and improved population synthesis routines may help to more robustly disentangle the relative rates of various compact binary formation channels, and thereby have the capability of placing constraints on underlying physical processes. Nonetheless, for the time being we show that it is important to consider the potential biases that can accumulate when accounting for an incomplete picture of compact binary mergers in the universe. Multiple avenues can be used in tandem with the analyses presented in this work to help expedite the ability of placing robust constraints on compact binary formation channels. In addition to analyses of the full population of BBH mergers, observational signatures from single events that are unique to one or a subset of formation channels will help to place constraints on the relative contribution of various formation channels <cit.>. Observational constraints from other probes of compact binary formation outside of GW astronomy, such as electromagnetic surveys of BBH stellar progenitors, astrometric observations of compact object binaries, identification and host association of gamma-ray bursts and kilonovae, and characterization of pulsar binaries in the Milky Way can all help complement and improve constraints that rely solely on GW observations. Incorporating such information into astrophysical inference will help population analyses using astrophysical simulations remain pertinent and scale with the rapidly-growing catalog of compact binary merger observations. The posterior samples in the analyses presented in this work, the code for calculating the prior at the posterior points (see Appendix <ref>), and all figures, along with additional figures and the accompanying Jupyter notebook, are available on Zenodo <cit.>. The authors thank Sylvia Biscoveanu, Tom Callister, Storm Colloms, Amanada Farah, Jack Heinzel, Colm Talbot, and Noah Wolfe for their valuable comments and suggestions. A.Q.C. is partially supported by the MIT UROP program. Support for this work and for M.Z. was provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51474.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. S.V. is partially supported by NSF through the award PHY-2045740. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. § CALCULATING THE PRIOR AT THE POSTERIOR POINTS During hierarchical inference, the goal is to calculate the hyperposterior for our hyperparameters Λ⃗ = [β⃗, , ] p(Λ⃗ | 𝐱) = π(Λ⃗) p(𝐱 | Λ⃗)/p(𝐱), as given by Bayes' theorem, where 𝐱 = {x⃗_i}_i^ is the set of BBH detections and p(𝐱 | Λ⃗) = ∏_i=1^N_obsp(x⃗_i)/∫ p(θ⃗ | Λ⃗)P_det(θ⃗) dθ⃗∫p(θ⃗_i | x⃗_i) p(θ⃗_i | Λ⃗)/π(θ⃗_i) dθ⃗ is the hyperlikelihood (see Appendix D of , and for reviews). Here, we divide out the parameter estimation prior π(θ) evaluated at each point θ⃗_⃗i⃗ as we integrate over the space of BBH parameters θ⃗=[, , , ]. We approximate this integral via a Monte-Carlo discrete sum over the posterior samples. Therefore, it is necessary to calculate the prior at each posterior sample point. <cit.> did this by constructing a 4-dimensional Gaussian-kernel KDE with the prior samples in the GWTC-1 and GWTC-2 data releases. 
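Before discussing how π(θ) is evaluated, it may help to spell out the Monte-Carlo structure of the hyperlikelihood above in code. The following Python sketch is schematic only: the function names and signatures, and the assumption that per-event posterior samples, a population model p(θ|Λ), and the selection integral are available as callables, are illustrative and do not describe the actual analysis code.

import numpy as np

def log_hyperlikelihood(event_samples, pe_prior, pop_model, selection_integral, Lam):
    # event_samples:            list of per-event arrays of posterior samples theta_i^k
    # pe_prior(theta):          parameter-estimation prior pi(theta) at each sample
    # pop_model(theta, Lam):    population density p(theta | Lambda) at each sample
    # selection_integral(Lam):  int p(theta | Lambda) P_det(theta) dtheta
    log_xi = np.log(selection_integral(Lam))   # one selection integral per Lambda
    logL = 0.0
    for theta in event_samples:
        # Monte-Carlo estimate of int p(theta_i | x_i) p(theta_i | Lambda) / pi(theta_i) dtheta_i
        weights = pop_model(theta, Lam) / pe_prior(theta)
        logL += np.log(np.mean(weights)) - log_xi
    return logL                                 # up to Lambda-independent constants

Written this way, it is clear that the parameter-estimation prior must be evaluated at every posterior sample, which is the practical issue addressed next.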
Due to the potentially prohibitive behavior of high-dimensional KDE with insufficient training samples, we choose instead to evaluate the prior at each posterior sample analytically by using the analytical priors from the GWTC-2.1 and GWTC-3 data releases and applying the appropriate Jacobians <cit.>. <ref> shows the marginalized branching fraction posteriors inferred from GWTC-2.1 and GWTC-3 data, but with the prior π(θ_i^k) for each event i evaluated at each posterior sample k calculated via a 4-dimensional Gaussian KDE constructed from the prior samples provided from LVK data releases, as in <cit.>. Comparing with <ref>, determining π(θ_i^k) in this way gives rise to some noisy features in the posterior, such as non-trivial support for =0.2 and =0.5 (pink and yellow curves, respectively). While there is no support for > 0.1 with analytical evaluation of the prior, with the KDE method we have ≤ 0.1 > 0.1 = 4.00. Similarly, preference between different values of is also weaker, with =5.0=1.0=4.24, as opposed to 249. The primary notable features in the posterior as discussed in Section <ref>, however, still remain robust. § ADDITIONAL FIGURES In this appendix, we show additional plots (starting from <ref>). Details are given in each figure's caption. aasjournal
http://arxiv.org/abs/2307.02038v1
20230705054335
Non-commutative resolutions as mirrors of singular Calabi--Yau varieties
[ "Tsung-Ju Lee", "Bong H. Lian", "Mauricio Romo" ]
hep-th
[ "hep-th", "math.AG" ]
empty preprint Non-commutative resolutions as mirrors of singular Calabi–Yau varieties ^♯[], ^†[], and ^∗[mromoj@tsinghua.edu.cn] ^♯CMSA, Harvard University, Cambridge, MA 02138 ^†Department of Mathematics, Brandeis University, Waltham, MA 02453 ^†BIMSA, Huairou District, Beijing 101408 ^∗Yau Mathematical Sciences Center, Tsinghua University, Beijing, 100084, China It has been conjectured that the hemisphere partition function Honda:2013uca,Hori:2013ika in a gauged linear sigma model (GLSM) computes the central charge <cit.> of an object in the bounded derived category of coherent sheaves for Calabi–Yau (CY) manifolds. There is also evidence in Hosono:1995bm,Hosono:2000eb. On the other hand, non-commutative resolutions of singular CY varieties have been studied in the context of abelian GLSMs <cit.>. In this paper, we study an analogous construction of abelian GLSMs for non-commutative resolutions and propose they can be used to study a class of recently discovered mirror pairs of singular CY varieties. Our main result shows that the hemisphere partition functions (a.k.a. A-periods) in the new GLSM are in fact period integrals (a.k.a. B-periods) of the singular CY varieties. We conjecture that the two are completely equivalent: B-periods are the same as A-periods. We give some examples to support this conjecture and formulate some expected homological mirror symmetry (HMS) relation between the GLSM theory and the CY. As shown in <cit.>, the B-periods in this case are precisely given by a certain fractional version of the B-series of <cit.>. Since a hemisphere partition function is defined as a contour integral in a cone in the complexified secondary fan (or FI-theta parameter space) <cit.>, it can be reduced to a sum of residues zhdanov1998computation,passare1994multidimensional. Our conjecture shows that this residue sum may now be amenable to computations in terms of the B-series. § INTRODUCTION Homological mirror symmetry (HMS) <cit.> for Calabi–Yau (CY) manifolds can be generally formulated in terms of bounded derived categories of coherent sheaves, and Fukaya categories. Thus much focus in recent decades have been to understand their relationships, and their consequences. However, not until recently, have people begun to ask similar questions about the case of singular varieties. By general considerations of marginal deformations of superconformal field theories <cit.> one can argue that supersymmetric quantities (for example chiral/anti-chiral rings or supersymmetric boundary conditions) on a singular CY variety are equivalent to their counterparts on a crepant resolution (if one exists) of the CY variety. A mathematical counterpart of this is the so-called crepant resolution conjecture <cit.>. Therefore, to formulate HMS for such a singular variety, it is natural to look for a categorical version of the resolution for the bounded derived category of coherent sheaves, and also a substitute for the Fukaya category in the singular case. One possibility for the former, is to consider some kind of non-commutative (NC) resolution associated with a singular variety <cit.>. This paper is an attempt to test the idea of using NC resolutions on a large class of recently discovered mirror pairs of singular CY varieties 2020-Hosono-Lian-Takagi-Yau-k3-surfaces-from-configurations-of-six-lines-in-p2-and-mirror-symmetry-i,2020-Hosono-Lee-Lian-Yau-mirror-symmetry-for-double-cover-calabi-yau-varieties. 
Let us recall briefly the results of 2020-Hosono-Lian-Takagi-Yau-k3-surfaces-from-configurations-of-six-lines-in-p2-and-mirror-symmetry-i,2020-Hosono-Lee-Lian-Yau-mirror-symmetry-for-double-cover-calabi-yau-varieties. It is shown that for each toric variety admitting a maximal projective crepant partial (MPCP) resolution, equipped with a given nef-partition, one can construct a family of equisingular CY varieties as double covers on the toric variety. The branching locus of the double cover is given by an union of hyperplanes (in general positions) specified by the nef-partition. The period integrals of the double cover are formally certain fractional counterparts of period integrals of ordinary complete intersections (hence the term `fractional complete intersection’). Most importantly, if two toric varieties equipped with nef-partitions are dual to each other, in the sense of Batyrev–Borisov <cit.>, then it has been shown that their corresponding double cover CY varieties are mirror to each other. In fact, we can apply many `mirror tests’ to see that this is in fact the case. For example, the Yukawa couplings of one family can be shown to compute the genus zero orbifold Gromov–Witten invariants of the mirror family. These new singular mirror pairs of CY therefore provide a very interesting testing ground for the idea non-commutative resolutions. In this paper, we shall experiment with this idea from the point of view of abelian GLSMs. The paper is organized as follows. In Section <ref>, we give an overview of the construction of mirror pairs of singular CY varieties, given by double covers of toric varieties. This general construction was introduced in <cit.>. We recall some basics in toric geometry, and outline the construction of double covers on a general toric variety admitting an MPCP resolution, equipped with a nef-partition. We then summarize some of the main results in <cit.> on singular CY mirror pairs, and illustrate them in examples. In Section <ref>, we introduce the construction of the abelian GLSMs, which will later on play the role of NC resolutions for singular CYs given by double covers. We start by reviewing the setup for two dimensional 𝒩=(2,2) gauge theories with boundaries, and give an overview of GLSMs. We spell out the assumptions we impose on the GLSMs to be considered in this paper. In particular, we consider nonanomalous GLSMs, whose IR theory is completely characterized by the classical Higgs branch. The GLSMs of interest will be hybrid models: their underlying target space is an orbifold vector bundle over a base given by the critical locus of a superpotential function. We consider a particular form of such a `curved’ superpotential, and propose that the resulting GLSM is a quantum field theory realization of a NC resolution for our double cover singular CY. Key ingredients introduced here include the matrix factorization category for a given gauge group and a superpotential, and the notion of the hemisphere partition function of an object in this category. Most of the expository discussion here follows Honda:2013uca,Hori:2013ika. It has been conjectured that for smooth CYs, the hemisphere partition function coincides with the so-called the A-period or the central charge of an object. The latter was introduced and studied mathematically by MR2373143,Hosono:2000eb,Iritani:2009ab from various viewpoints. 
Finally, we also review some basics on toric GITs, and the relation between the secondary fan and the (stringy Kähler) moduli space of CYs in mirror symmetry (or FI-theta parameter space in GLSM theory). In Section <ref>, we formulate and prove our main result in Theorem <ref>. For each toric variety X equipped with a choice of nef-partition, we consider the A-periods of the GLSM realizing the (singular) CY double covers Y of X. In <cit.> it is shown that the sheaf of B-periods of Y^∨ (the mirror of Y) can be completely characterized by a GKZ system. We then show that the A-periods are solutions to the same GKZ system, hence proving the A-periods of Y are in fact B-periods Y^∨. A crucial step here is the explicit determination of the B-brane factor in the hemisphere partition function in this case (Proposition <ref>). We end with a discussion of our conjecture (Conjecture <ref>) on the equivalence of the two kinds of periods, and provide some numerical evidence. Acknowledgements: The authors thank W. Gu for collaboration at an early stage of this work. We would like to thank L. Borisov, D. Pomerleano, and E. Scheidegger for discussions and comments. We would like to thank S. Hosono for his collaboration, which helped inspire this project. We would like to thank T. Pantev for his interest on this project. MR thanks Harvard CMSA, Rutgers University, Heidelberg University and Uppsala University for hospitality while part of this work has been performed. BHL would like to thank YMSC and BIMSA, where part of this collaboration was done. MR acknowledges support from the National Key Research and Development Program of China, grant No. 2020YFA0713000, the Research Fund for International Young Scientists, NSFC grant No. 1195041050. TJL is partially supported by AMS–Simons travel grant. BHL is partially supported by the Simons Collaboration Grant on Homological Mirror Symmetry and Applications 2015-2023. § CALABI–YAU DOUBLE COVERS AND THEIR MIRRORS In this section, we recall the construction of pairs of singular Calabi–Yau double covers (Y,Y^∨) in <cit.> and review their properties. To this end, let us fix the notation that will be used throughout this note. (1) Let N=ℤ^n be a rank n lattice and M=Hom_ℤ(N,ℤ) be its dual lattice. Let N_ℝ:=N⊗_ℤℝ and M_ℝ:=M⊗_ℤℝ. We denote by ⟨-,-⟩ the canonical dual pairing between M and N. (2) Let Σ be a fan in N_ℝ. We denote by Σ(k) the set of k-dimensional cones in Σ. In particular, Σ(1) is the set of 1-cones in Σ. Similarly, for a cone σ∈Σ, we denote by σ(1) the set of 1-cones belonging to σ. By abuse of the notation, we also denote by ρ the primitive generator of the corresponding 1-cone. (3) Denote by X_Σ the toric variety determined by the fan Σ. Each ρ∈Σ(1) determines a torus-invariant Weil divisor D_ρ on X_Σ. Any torus-invariant Weil divisor D is linearly equivalent to ∑_ρ∈Σ(1) a_ρD_ρ. We define the polyhedron of D Δ_D:={m∈ M_ℝ | ⟨ m,ρ⟩≥ -a_ρ  ρ}. Note that Δ_D is a polytope if Σ is a complete fan. In which case, Δ_D is called the polytope of D. The integral points M∩Δ_D gives rise to a canonical basis of H^0(X_Σ,D). (4) A polytope in M_ℝ is called a lattice polytope if its vertices belong to M. For a lattice polytope Δ in M_ℝ, we denote by Σ_Δ the normal fan of Δ. The toric variety determined by Δ is denoted by 𝐏_Δ, i.e., 𝐏_Δ=X_Σ_Δ. (5) A reflexive polytope Δ⊂ M_ℝ is a lattice polytope which contains the origin 0∈ M_ℝ in its interior and such that the polar dual Δ^∨:={n∈ N_ℝ | ⟨ m,n⟩≥ -1  m∈Δ} is again a lattice polytope. 
If Δ is a reflexive polytope, then Δ^∨ is also a lattice polytope and satisfies (Δ^∨)^∨=Δ. The normal fan of Δ (resp. face fan of Δ) is the face fan of Δ^∨ (resp. the normal fan of Δ^∨). §.§ The Batyrev–Borisov's duality construction Let us begin with the notion of nef-partitions. Let Δ⊂ M_ℝ be a reflexive polytope. Recall that a nef-partition on 𝐏_Δ is a decomposition of Σ_Δ(1)=⊔_k=1^r I_k such that each E_k:=∑_ρ∈ I_k D_ρ is numerically effective, i.e., D.C≥ 0 for any irreducible complete curve C⊂𝐏_Δ. Note that E_1+⋯+E_r=-K_𝐏_Δ. This also gives rise to a Minkowski sum decomposition Δ = Δ_1+⋯+Δ_r  Δ_i:=Δ_E_i. By abuse of terminology, both E_1+⋯+E_r=-K_𝐏_Δ and Δ = Δ_1+⋯+Δ_r are call nef-partitions. Let I_1,…,I_r be a nef-partition on 𝐏_Δ. Denote ∇_k=Conv(I_k∪0)  ∇ = ∇_1+⋯+∇_r. Borisov proved that ∇ is a reflexive polytope in N_ℝ whose polar dual is ∇^∨=Conv(Δ_1,…,Δ_r) and ∇_1+⋯+∇_r corresponds to a nef-partition on 𝐏_∇ <cit.>. This is called the dual nef-partition in <cit.>. The corresponding nef toric divisors are denoted by F_1,…,F_r. Then the polytope of F_j is ∇_j. Let X→𝐏_Δ and X^∨→𝐏_∇ be maximal projective crepant partial (MPCP for short hereafter) resolutions for 𝐏_Δ and 𝐏_∇. Recall that the polytopes Δ_i and ∇_j correspond to E_i on 𝐏_Δ and F_j on 𝐏_∇. The nef-partitions on 𝐏_Δ and 𝐏_∇ pullback to nef-partitions on X and X^∨. To save the notation, the corresponding nef-partitions and toric divisors on X and X^∨ will be still denoted by Δ_i, ∇_j and E_i, F_j respectively. §.§ Calabi–Yau double covers Suppose we are given the data in <ref> and let notation be the same as there. Throughout this note, unless otherwise stated, we assume that Both 𝐏_Δ and 𝐏_∇ admit a smooth MPCP desingularization, i.e., we assume that both Δ and ∇ admit uni-modular triangulations. From the duality, we have H^0(X^∨,F_i)≅⊕_ρ∈∇_i∩ Nℂ· t^ρ  H^0(X,E_i)≅⊕_m∈Δ_i∩ Mℂ· t^m. Here we use the same notation t=(t_1,…,t_n) to denote the coordinates on the maximal torus of X^∨ and X. A double cover Y^∨→ X^∨ has trivial canonical bundle if and only if the branch locus is linearly equivalent to -2K_X^∨. Let Y^∨→ X^∨ be the double cover constructed from the section s=s_1⋯ s_r with (s_1,…,s_r)∈H^0(X^∨,2F_1)×⋯×H^0(X^∨,2F_r). We assume that s_i∈H^0(X^∨,2F_i) is of the form s_i=s_i,1s_i,2 with s_i,1,s_i,2∈H^0(X^∨,F_i). We assume that s_i,1 is the section corresponding to the lattice point 0∈∇_i∩ N and that div(s) is a divisor with strictly normal crossings. This procedure, which is inspired by the work 2020-Hosono-Lian-Takagi-Yau-k3-surfaces-from-configurations-of-six-lines-in-p2-a nd-mirror-symmetry-i, 2019-Hosono-Lian-Yau-k3-surfaces-from-configurations-of-six-lines-in-p2-and-mirr or-symmetry-ii-lambda-k3-functions, is called the partial gauge fixing. We obtain a subfamily of double covers of X^∨ by deforming s_i,2 The family is parameterized by an open subset V⊂H^0(X^∨,F_1) ×⋯×H^0(X^∨,F_r). Given a decomposition ∇=∇_1+⋯+∇_r representing a nef-partition F_1+⋯+F_r on X^∨ as above, the subfamily 𝒴^∨→ V constructed above is called the gauge fixed double cover family branched along the nef-partition over X^∨ or simply the gauge fixed double cover family if no confusion occurs. Likewise, applying the construction to the decomposition Δ=Δ_1+⋯+Δ_r representing the dual nef-partition E_1+⋯+E_r on X yields another family 𝒴→ U, where U is an open subset in H^0(X,E_1)×⋯×H^0(X,E_r). Denote by Y (resp. Y^∨) the fiber of 𝒴→ U (resp. 𝒴^∨→ V). In <cit.>, it is conjectured that Y and Y^∨ are mirror. 
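For orientation, it may help to record the affine equation satisfied by the gauge fixed double cover; this is only a rewriting of the construction above, with w denoting the double-cover coordinate (not introduced explicitly in the text):
w^2 = s(t) = ∏_i=1^r s_i,1(t) s_i,2(t),   s_i,1, s_i,2∈H^0(X^∨,F_i),
with each s_i,1 frozen by the partial gauge fixing to the section labeled by 0∈∇_i∩ N, so that the subfamily 𝒴^∨→ V is swept out by varying the sections s_1,2,…,s_r,2 alone; the same description, with Δ_i and E_i in place of ∇_i and F_i, applies to 𝒴→ U.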
The following proposition is also proven in <cit.> and provides some numerical evidence of the conjecture. We have χ_top(Y)=(-1)^nχ_top(Y^∨). Here χ_top(-) is the topological Euler characteristic. Moreover, we have for p+q n, h^p,q(X)=h^p,q(Y)  h^p,q(X^∨)=h^p,q(Y^∨). Consequently, when n≤ 4, we have h^p,q(Y)=h^n-p,q(Y^∨), i.e., Y and Y^∨ form a topological mirror pair. Indeed, in <cit.>, it is shown that χ_top(Y) = χ_top(X) + (-1)^nχ_top(X^∨). It follows that H^n(Y,ℂ)=H^n(X,ℂ)+ χ_top(X^∨). [Calabi–Yau double cover of 𝐏^3 branch over eight hyperplanes] Consider the reflexive polytope Δ = Conv({(3,-1,-1),(-1,3,-1),(-1,-1,3),(-1,-1,-1)}). We then have 𝐏_Δ≅𝐏^3. In the present case, we have X=𝐏_Δ. (Recall that X is a MPCP desingularization.) Let us consider the nef-partition E_1+E_2+E_3+E_4=H+H+H+H=-K_X on X. Here H is the hyperplane class of X and the partition corresponds to the partition on the set of 1-cones {(1,0,0)}∪{(0,1,0)}∪{(0,0,1)}∪{(-1,-1,-1)}. Then the associated gauge fixed double cover family 𝒴→ U is the family of double covers of 𝐏^3 branched along eight hyperplanes in general position. Let Y denote a fiber in the family. The Hodge diamond of Y is given by [column sep=0.1em, row sep=0.1em] 1 0 0 0 1 0 1 9 9 1 0 1 0 0 0 1 Let us investigate the mirror. From Batyrev–Borisov's duality construction, the dual polytope associated with the partition H+H+H+H=-K_Xis ∇ = ∇_1+∇_2+∇_3+∇_4 where ∇_i=Conv({0}∪ (δ_i1,δ_i2,δ_i3)) for i=1,2,3 and ∇_4=Conv({0}∪ (-1,-1,-1)). (∇ is a zonotope.) In the present case, 𝐏_∇ is singular and admits a MPCP desingularization X^∨→𝐏_∇. Denote by 𝒴^∨→ V the gauge fixed double cover family over X^∨ branched along the dual nef-partition F_1+F_2+F_3+F_4=-K_X^∨ and let Y^∨ be a fiber. One can check that the Hodge diamond of Y^∨ is given by [column sep=0.1em, row sep=0.1em] 1 0 0 0 9 0 1 1 1 1 0 9 0 0 0 1 It is easy to see that (<ref>) and (<ref>) are mirror Hodge diamonds. We expect that the Hodge diamond of Y and Y^∨ for general smooth toric bases are related in a simple way; they are isomorphic after a π 2-rotation (cf. Figure <ref>). § GLSMS AS NC RESOLUTIONS OF Y §.§ 2d Gauge Theories with Boundaries In this section we will review certain aspects of gauged linear sigma models (GLSM) <cit.> with boundaries <cit.>. We will define a set that we will denote GLSM data by the following elements: * Gauge group: a compact Lie group 𝖦. * Chiral matter fields: a faithful unitary representation ρ_m G→GL(V) of 𝖦 on some complex vector space V≅ℂ^N. * Superpotential: a holomorphic, G-invariant polynomial W V→ℂ, namely W∈Sym(V^∨)^𝖦. * Fayet–Illiopolous (FI)-theta parameters: a set of complex parameters t such that exp(t)∈Hom(π_1( 𝖦 ),ℂ^*)^π_0(𝖦) i.e., exp(t) is a group homomorphism from π_1( 𝖦 ) to ℂ^* that is invariant under the adjoint action of 𝖦 [Recall that π_0( 𝖦 )≅𝖦 / 𝖦 _0, where 𝖦 _0 is the identity component of 𝖦, is the only subset of 𝖦 that acts nontrivially on π_1( 𝖦 ).]. It is customary to write t=ζ-iθ, therefore <cit.> t∈(𝔱^∨_ℂ/2π i P)^W_𝖦≅𝔷^∨_ℂ/2π i P^W_𝖦, where P is the weight lattice, W_𝖦 is the Weyl subgroup of 𝖦, 𝔱=Lie(T) is the Cartan subalgebra of 𝔤=Lie( 𝖦 ) and 𝔷=Lie(Z( 𝖦 )). * R-symmetry: a vector U(1)_V (and also an axial U(1)_A, but we do not use it in this work) R-symmetry. That is a U(1) action on V that commute with the action of 𝖦 on V. This action is determined by a representation R:U(1)_V→GL(V), which is not required to be faithful (so, it weights can be real). 
The superpotential W is required to have weight 2 under the U(1)_V action: W(R(λ)·ϕ)=λ^2W(ϕ), where ϕ denotes the coordinates in V. We call a tuple ( 𝖦 ,W,ρ_m,t,R) satisfying the conditions above a GLSM data. We call ( 𝖦 ,W,ρ_m,t,R) a nonanomalous GLSM data if furthermore the representation ρ_m factors through SL(V). In the following, since we are interested on properties of CY varieties as complex geometries, we will find convenient to use the complexified gauge group. We use the short notation G:=𝖦_ℂ In the following we will only work with nonanomalous GLSM data which is the relevant case for CY mirrors. We will focus our attention on B-type boundary conditions, that is, boundary conditions preserving the combination of supercharges 𝐐_B:=𝐐_++𝐐_- (and its charge conjugate 𝐐^†_B:=𝐐_++𝐐_-) Herbst:2008jq,Hori:2013ika. These boundary conditions are termed B-branes and they will play a central role in the present work. We define them for a fixed value of the FI-theta parameter t and superpotential W. Moreover they form a triangulated category. We denote such category by MF_𝖦(W). Its objects are denoted (ℬ,L_t) which we will describe in the following. Let us start defining ℬ, the algebraic data: we call algebraic data of the element (ℬ,L_t)∈ MF_G(W) to the quadruple ℬ=(M,ρ_M,R_M,𝐓) where the elements are defined as: = -3pt * Chan–Paton vector space: a ℤ_2-graded, finite rank free Sym(V^∨)-module denoted by M=M_0⊕ M_1. * Boundary gauge and (vector) R-charge representation: ρ_M G→GL(M), and R_M U(1)_V→GL(M) commuting and even representations, where the weights of R_M are allowed to be rational. * Matrix factorization of W: Also known as the tachyon profile, a ℤ_2-odd endomorphism 𝐓∈End^1_Sym(V^∨)(M) satisfying 𝐓^2=W·id_M. The group actions ρ_M and R_M must be compatible with ρ_m and R, i.e., for all λ∈ U(1)_V and g∈𝖦, we demand R_M(λ)𝐓(R(λ)ϕ)R_M(λ)^-1 = λ𝐓(ϕ) , ρ_M(g)^-1𝐓(ρ_m(g)·ϕ)ρ_M(g) = 𝐓(ϕ) . For later use, we denote the weights of ρ_m as Q_j𝔱→ℝ and the weights of R as R_j∈ℝ for j=1,…,N=dim_ℂV. We denote by ℋ⊂𝔱_ℂ the collection of hyperplanes ℋ=⋃_j=1^N⋃_n∈ℤ_≥ 0{σ∈𝔱_ℂ | Q_j(σ)-iR_j/2-in=0} The other piece of data that we need, termed L_t, is a profile for the vector multiplet scalar. Consider a gauge-invariant (i.e. invariant under the G-action) middle-dimensional subvariety L_t⊂𝔤_ℂ∖ℋ of the complexified Lie algebra of 𝖦 or equivalently its intersection L_t⊂𝔱_ℂ∖ℋ invariant under the action of the Weyl group 𝒲_𝖦⊂𝖦. We define an admissible contour as a contour L_t satisfying the following two conditions. = -3pt (a) L_t is a continuous deformation of the real contour L_ℝ:={τ=0 | τ∈𝔱_ℂ}; (b) the imaginary part of the boundary effective twisted superpotential W_eff,q𝔱_ℂ→ℂ W_eff,q(σ):= (∑_α>0± iπ α·σ)-(∑_j (Q_j(σ))(log(iQ_j(σ)/Λ)-1))-t(σ)+2π i q(σ) approaches +∞ in all asymptotic directions of L_t and for all the weights q∈𝔱^∨ of ρ_M. Signs in the sum over positive roots α of 𝖦 depend on the Weyl chamber in which σ lies. The existence of an admissible contour L_t remains widely open in higher dimensions, i.e., when Z( 𝖦 )≥ 2. In the later section, we will give a construction of L_t when the gauge group 𝖦 is abelian and connected. The objects (ℬ,L_t)∈MF_𝖦(W) are composed from algebraic data ℬ and an admissible contour L_t. the morphisms between two objects (ℬ_1,L_t), (ℬ_2,L'_t)∈MF_𝖦(W), at the same t, is defined by the class of the maps Ψ∈Hom(M_1,M_2) in the cohomology defined by the differential D: DΨ:=𝐓_2Ψ-(-1)^|Ψ|Ψ𝐓_1 where |Ψ|∈{ 0,1} is the ℤ_2-degree of Ψ. 
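As a sanity check on the defining relation 𝐓^2=W·id_M, the following minimal SymPy sketch (purely illustrative, and not one of the branes constructed later in the paper) verifies it for the rank-(1|1) factorization of a single quadratic term z^2ϕ, the building block of the superpotentials used below:

import sympy as sp

z, phi = sp.symbols('z phi')
W = z**2 * phi                      # one term of a double-cover type superpotential

# Chan-Paton module M = M_0 + M_1 of rank 1|1; T is the odd endomorphism whose
# off-diagonal blocks act by multiplication by z and by z*phi respectively
T = sp.Matrix([[0,     z],
               [z*phi, 0]])

assert sp.expand(T**2 - W*sp.eye(2)) == sp.zeros(2, 2)    # T^2 = W * id_M
print(sp.expand(T**2))              # Matrix([[z**2*phi, 0], [0, z**2*phi]])

The equivariance conditions above then fix the gauge and R-weights of M_0 relative to M_1 in terms of the weights of z and ϕ, which is how the brane factors f_ℬ are computed later on.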
It is important to point out that the algebraic data ℬ is defined in <cit.> in a more general context. However, the category MF_𝖦(W) with objects given by pairs (ℬ,L_t) has its origins in the dynamics of B-branes on GLSMs <cit.>, and we are not aware of an analogous definition in the mathematics literature. The parameter e^t corresponds to local coordinates on the stringy Kähler moduli space ℳ_. The space ℳ_ is locally isomorphic to (ℂ^∗)^(𝔷). Globally it corresponds to a partial compactification of (ℂ^∗)^(𝔷)∖Δ, where Δ is a complex codimension 1 closed subset. The space ℳ_ can be determined from the nonanomalous GLSM data using physical considerations <cit.>; however, we are not aware of a general and purely mathematical definition of ℳ_ that does not rely on indirect methods, such as mirror symmetry. When 𝖦≅ U(1)^s, ℳ_ can be determined from toric geometry considerations plus a study of Coulomb and mixed Coulomb–Higgs branches of the GLSM. This gives, in the abelian case, ℳ_ a chamber structure, and the different chambers can be classified by cones in the secondary fan, as we will see in section <ref>. Each of these chambers is termed a phase in the physics jargon, and whenever ζ=Re(t) belongs to the interior of a chamber (and e^t∉Δ) it is expected that there always exists an admissible L_t, unique up to homotopy, for any quadruple ℬ. We will show this explicitly by constructing L_t for the examples (and the chamber) we are interested in, in section... Hence, in a fixed phase, we can ignore this piece of data and work with an entirely algebraic category whose objects are the algebraic data ℬ. It is important to remark that this is not generically true if we consider paths in t-space. For a given ℬ, an admissible L_t can stop being admissible as t crosses certain regions (phase walls). This problem can be solved by taking the cone between ℬ and some ℬ_nh which is nullhomotopic. This operation results in equivalent algebraic data ℬ', but now L_t remains admissible as we cross the phase wall. This phenomenon of `B-brane transport' along phase walls is known as the grade restriction rule; it was originally studied and solved for 𝖦 abelian in <cit.> and has been rigorously formulated in the mathematics literature in segal2011equivalences,halpern2015derived,ballard2019variation for general G. A physics perspective on the case of general group 𝖦, including also anomalous GLSMs, can be found in Clingempeel:2018iub,Hori:2013ika,hori2019notes,eager2017beijing. In the present work we are only concerned with a specific phase for a family of abelian GLSMs that we will define in section <ref>; therefore we will not be concerned with the grade restriction rule. We are ready to define our main function, which will give rise to the A-periods. We define the central charge of an object (ℬ,L_t)∈ MF_G(W) to be the function Z_D^2(ℬ):=∫_L_t⊂𝔱_ℂ d^l_𝖦σ ∏_α>0α(σ)sinh(πα(σ)) ∏_j=1^NΓ(iQ_j(σ)+R_j/2) e^it(σ) f_ℬ(σ), where f_ℬ(σ):=tr_M(R_M(e^iπ)ρ_M(e^2πσ)), the symbol ∏_α>0 denotes the product over the positive roots of 𝖦, and l_𝖦:=dim(𝔱). The function (<ref>) was computed by direct supersymmetric localization methods in quantum field theory Hori:2013ika,Honda:2013uca,Sugishita:2013jca.
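As a quick illustration of the brane factor entering this formula, take the smallest possible Chan–Paton space, a single line M=ℂ on which ρ_M acts with gauge weight q∈𝔱^∨ and R_M with R-weight r (this is not yet one of the branes of this paper, only the simplest case consistent with the definitions). Then
f_ℬ(σ) = tr_M(R_M(e^iπ)ρ_M(e^2πσ)) = e^iπ r e^2π q(σ),
and for a direct sum of such lines the brane factor is the corresponding sum of exponentials; this is exactly the shape the brane factors will take in the abelian examples below.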
§.§ GLSMs and A-periods In order to study the IR theory of a nonanomalous GLSM, characterized by the GLSM data (𝖦,W,ρ_m,t,R), we need to define the classical Higgs branch for a given value of ζ=Re(t). We define the classical Higgs branch at ζ, associated to the GLSM data (𝖦,W,ρ_m,t,R), as the pair (Y_ζ,W_ζ) where Y_ζ:=μ^-1(ζ)/𝖦, where μ: V→𝔤^∨ denotes the moment map associated to ρ_m, and W_ζ:=W|_Y_ζ. For ζ in a phase, there exists a projection functor π_ζ Herbst:2008jq,ballard2019variation π_ζ: MF_𝖦(W)→𝒟_ζ, where 𝒟_ζ is a triangulated category, known as the IR B-brane category, whose specific description depends on the particular model we are considering. The functor π_ζ and the category 𝒟_ζ only depend on the chamber to which ζ belongs. For an object ℰ∈𝒟_ζ we can define the A-period Z(ℰ), where Z: 𝒟_ζ→ℂ is a function known as the central charge of ℰ. This function is related to Bridgeland's central charge <cit.>; however, its intrinsic definition, relevant for mirror symmetry, has been studied in the physics literature for general superconformal field theories Knapp:2020oba,hori2019notes. In <cit.>, a mathematical proof of the relation between (<ref>) and appropriately defined invariants over a space of quasimaps associated to the GLSM has been presented (such spaces of quasimaps have been studied for instance in Ciocan-Fontanine:2018oaw,Favero:2020cke,Fan:2015vca). In the case that W_ζ is Morse–Smale, X_ζ:=dW^-1_ζ(0)∩ Y_ζ is a smooth Calabi–Yau manifold and 𝒟_ζ=D^bCoh(X_ζ); in that case, a definition of A-periods has been proposed in the physics <cit.> and mathematics <cit.> literature. It is conjectured in <cit.> that Z_D^2(ℬ)=Z(π_ζ(ℬ)). We will then make use of this conjecture to define the A-periods in the following. For the cases when the stabilizer of the points of μ^-1(ζ) is a finite subgroup of 𝖦, we are in the physics situation known as a weakly coupled phase and the IR B-brane category can be characterized as <cit.> 𝒟_ζ=MF(Y_ζ,W_ζ), where MF(Y_ζ,W_ζ) denotes the category of coherent matrix factorizations buchweitz1987maximal,orlov2004triangulated,efimovcoherent, ballard2012resolutions. It is worth noting that this category is independent of the choice of ζ inside the chamber. §.§ GLSMs for noncommutative resolutions We will study a particular class of GLSMs whose IR theory in a particular phase, i.e., the classical Higgs branch (Y_ζ,W_ζ) for ζ in the interior of some cone, satisfies the following conditions: * The variety Y_ζ takes the form of a vector bundle Y_ζ=𝒱→ B, where B is compact Kähler and we allow the possibility of B having orbifold singularities and 𝒱 being an orbibundle. More precisely, one should consider B an algebraic stack <cit.>. * We require W_ζ∈H^0(𝒪_Y_ζ) and dW_ζ^-1(0)=B as a set. * There exists a holomorphic vertical Killing vector field ξ on Y_ζ implementing a ℂ^∗ action on Y_ζ and satisfying ℒ_ξW_ζ=W_ζ. These conditions characterize what is called a good hybrid Aspinwall:2009qy,Bertolini:2013xga (originally studied in <cit.>). We will be particularly interested in the case when the function W_ζ is quadratic in the fiber coordinates of 𝒱. Let us outline the construction suitable for our singular double covers. A toric description of the variety X can be given in terms of homogeneous coordinates ϕ_I, i.e., the Cox ring of X. These coordinates are weighted by the action of the gauge group. Denote their 𝖦-weights by θ_I. Then, rather than considering X, we will consider the GLSM describing a ℤ_2 gerbe over X.
This amounts to consider the toric variety X described by homogeneous coordinates ϕ̃_I whose weights are 2θ_I. Then there is a trivially acting ℤ_2 subgroup of G. Consider a collection of line bundles E_a over X, then any section s_a(ϕ)∈ H^0(X,E_a) can be written as a section of X by considering f_a(ϕ̃):=s_a(ϕ̃). The G-weights of f_a(ϕ̃) will be a multiple of 2 therefore it makes sense to consider the line bundles E^-1_a→X̃ so that z_a^2f_a(ϕ̃)∈ H^0(𝒪_X), where z_a is a coordinate on the fiber of E^-1_a. The hybrid model we want to consider has target space Y≅𝒱→X where 𝒱= ⊕_aE^-1_a→X with E_a's line bundles over X as above. Therefore locally, if we denote the fiber coordinates as z_a, a=1,…, rk(𝒱) and the base coordinates in the ℤ_2 gerbe as ϕ̃ then W_ζ can be written as W_ζ=∑_a=1^rk(𝒱)z_a^2f_a(ϕ̃) where f_a(ϕ̃)∈H^0(X,2E_a). The resulting quantum field theory is interpreted as a noncommutative (NC) resolution of the (possibly singular) variety given by a double cover of X branched along ∏_af_a(ϕ) <cit.>. Equivalently we can write Y≅⊕_aE_a^-1/2→[X/ℤ_2] where ℤ_2 acts trivially on X and the transition functions of E_a^-1/2 are given by g_ij^-1/2 with g_ij the transition functions of E_a. This description of Y as a bundle over a gerbe appeared originally in <cit.> in the context of GLSMs and in <cit.> it was identified with a NC resolution for double covers of 𝐏^3. More recently, in Katz:2022lyl,Katz:2023zan the authors have presented a detailed study of the enumerative geometry of CY (singular) double covers over 𝐏^3, and their noncommutative resolutions. It is important to remark that even though we can consider X̃ as an orbifold (with trivial action), it is crucial to consider X̃ as an algebraic stack, in order to identify the correct category of B-branes Addington:2012zv,Guo:2021aqj. We will give more details on the GLSM construction, its B-branes and their A-periods in section <ref>. §.§ Abelian GLSMs Since we will be working on abelian GLSMs, in this subsection, we will apply the theory in <ref> to abelian (and nonanomalous) GLSMs. For our purpose, we also assume that 𝖦 is connected in what follows, i.e., 𝖦 =U(1)^s. In which case, we have = -3pt * 𝖦=T and W_𝖦=1; * π_1(G)≅ℤ^s; * 𝔱≅ℝ^s, 𝔱_ℂ=𝔱⊗_ℝℂ≅ℂ^s, and the weight lattice P≅ℤ^s. The set of the Fayet–Illiopolous (FI)-theta parameter is given by 𝔱^∨_ℂ 2πiP≅ℝ^s⊕ (iℝ 2πiℤ)^s. The hemisphere partition function (<ref>) is simplified to Z_D^2(ℬ) = ∫_L_td^sσ∏_j=1^NΓ(iQ_j(σ)+R_j/2) e^it(σ) f_ℬ(σ). Here σ=(σ_1,…,σ_s) is the coordinate on 𝔱_ℂ and t=(t_1,…,t_s)∈𝔱_ℂ^∨ 2πiP. We will also use the notation t=ζ-iθ with ζ=(ζ_1,…,ζ_s)∈ℝ^s and θ=(θ_1,…,θ_s)∈ (iℝ 2πiℤ)^s. From the construction, the central charge Z_D^2(ℬ) is apparently a multi-valued function in t. However, it would be more convenient to regard θ as the coordinate on the universal cover ℝ^s and Z_D^2(ℬ) as a function on 𝔱_ℂ rather than a multi-valued function on the quotient 𝔱_ℂ^∨ 2πiP≅ (iℝ 2πiℤ)^s. Accordingly, we will write t(σ) = ⟨ t,σ⟩ = ∑_k=1^s t_kσ_k the usual canonical dual pairing between 𝔱_ℂ and 𝔱_ℂ^∨. Consequently, e^it(σ) = e^i⟨ t,σ⟩ = ∏_k=1^s (e^-t_k)^-iσ_k= ∏_k=1^s q_k^-iσ_k and each factor is a multi-valued function on ℂ^∗ (with coordinate q_k). §.§ Toric GIT and the secondary fan For the sake of completeness, we recall somes basics of toric GIT. We recall the definition of the secondary fan and we relate it with the concept of phases of a GLSM. Consider an algebraic subgroup G⊂ (ℂ^∗)^r. It is known that the induced map on the character groups ℤ^r[r] G is surjective. 
Note that we do not assume that G is an algebraic torus; G might have non-trivial torsion elements. Denote by M its kernel. We obtain a short exact sequence 0[r] M[r] ℤ^r[r] G[r] 0. Let G act on ℂ^r via the inclusion G⊂ (ℂ^∗)^r. Each θ∈G determines a GIT quotient [ℂ^r_θ G]. Applying Hom_ℤ(-,ℤ) to (<ref>), we get [column sep=1em] 0[r] Hom_ℤ(G,ℤ)[r] ℤ^r[r] N:=Hom_ℤ(M,ℤ)[r] Ext^1_ℤ(G,ℤ)[r] 0. The morphism ℤ^r→ N has a finite cokernel. For θ∈G, it is known that the underlying space of the GIT quotient [ℂ^r_θ G] is a toric variety. To describe the underlying space, let ν_i be the image of e_i=(δ_1i,…,δ_ri)∈ℤ^r under the morphism ℤ^r→ N in (<ref>). For each θ∈G, we choose a lifting 𝐚:=(a_1,…,a_r)∈ℤ^r from (<ref>); this is possible since (<ref>) is surjective. Define the polyhedron P_𝐚:={m∈ M_ℝ | ⟨ m,ν_i⟩≥ -a_i, i=1,…,r}. It is known that the toric variety associated to P_𝐚 is the underlying toric variety of the GIT quotient [ℂ^r_θ G]. Let G ={(t,s) | s^2=t^2}⊂ (ℂ^∗)^2 acting on ℂ^2. Then ℂ^∗×μ_2≅ G (via (t,ζ_2)↦ (t,ζ_2t)). Here μ_2 is the abelian of order two, written additively. Under this identification, the induced map between their character group is ℤ^2→G=ℤ×μ_2,  (a,b)↦ (a+b, b2) whose kernel is given by M={(a,b) | a+b=0, b∈2ℤ}≅ℤ. Applying Hom_ℤ(-,ℤ) to 0 [r] M[r] ℤ^2[r] G[r] 0, we obtain an exact sequence 0→ℤ=Hom_ℤ(G,ℤ) ℤ^2 N≅ℤ→μ_2→ 0. Let ν_i∈ N denote the image of e_i∈ℤ^2. Consider the following various situations. (a) θ=(1,1)∈G=ℤ×μ_2. We can choose a lifting of (1,1) from (<ref>). For example, 𝐚=(0,1)∈ℤ^2 will do. Then consider the polytope in M_ℝ≅ℝ P_𝐚={m∈ M_ℝ | ⟨ m,ν_1⟩≥ 0, ⟨ m,ν_2⟩≥ -1}={m∈ M_ℝ | 0≤ m≤ 1/2} whose normal fan Σ is the fan defining 𝐏^1. The triple (N,Σ,ℤ^2→ N) is the stacky fan in the sense of Borisov–Chen–Smith <cit.>. The quotient stack we obtain is [ℂ^2_θG]≅[𝐏^1μ_2] whose coarse moduli space is equal to 𝐏^1. (b) θ=(1,0)∈ℤ×μ_2. In this case, we can use 𝐚=(1,0)∈ℤ^2 as our lifting. It turns out that we obtain the same toric stack in (a). (c) θ = (0,1)∈ℤ×μ_2. Let 𝐚=(-1,1) be a lifting. In this case, we have P_𝐚={m∈ M_ℝ | ⟨ m,ν_1⟩≥ -1, ⟨ m,ν_2⟩≥ 1}={1/2} and the underlying space of the GIT quotient [ℂ^2_θ G] is indeed a point. (d) θ=(-1,0)∈ℤ×μ_2. We can pick a lifting 𝐚=(-1,1)∈ℤ^2 and the polyhedron in the present situation is P_𝐚={m∈ M_ℝ | ⟨ m,ν_1⟩≥ 1, ⟨ m,ν_2⟩≥ 0}=∅ so [ℂ^2_θ G]=∅ in this case, i.e., there is no semistable point under the stability condition θ=(-1,0). We can see the wall-crossing phenomenon clearly in this example. Note that given G⊂ (ℂ^∗)^r there are only finitely many distinct GIT quotients [ℂ^r_θ G] up to isomorphism. Denote by χ_i the image of e_i=(δ_1i,…,δ_ri)∈ℤ^r under the morphism ℤ^r→G→G_ℝ and let C_χ:=Cone(χ_1,…,χ_r) be the cone generated by χ_i in the vector space G_ℝ. It is known that there is a fan, called the secondary fan, whose support is C_χ and satisfying the following property: * the collection of the relative interior of cones in the secondary fan give rise to a decomposition of C_χ such that the GIT quotient [ℂ^r_θ G] is constant on each subset. We can think of the secondary fan as the space parameterizing all the possibly non-empty GIT quotients. [Example <ref> continued] We have G_ℝ=ℝ in this case. The secondary fan (under the identification ℂ^∗×μ_2≅ G) consists of two cones ℝ_≥ 0  {0} and has the “chamber decomposition” ℝ_≥ 0 = {0}∪ℝ_>0. 
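The arithmetic in the example above is simple enough to check by brute force. The following short Python script (illustrative only) confirms that the kernel M is generated by μ=(2,-2), hence ν_1=2 and ν_2=-2 under the identification N≅ℤ given by ψ↦ψ(μ), and that the lifting 𝐚=(0,1) of θ=(1,1) produces the polytope P_𝐚=[0,1/2]:

from fractions import Fraction

# G = {(t,s) : s^2 = t^2} in (C*)^2; induced map on characters Z^2 -> Z x Z/2,
# (a,b) |-> (a+b, b mod 2).  M is its kernel.
def in_M(a, b):
    return a + b == 0 and b % 2 == 0

# every kernel element in a test box is an integer multiple of mu = (2,-2)
mu = (2, -2)
for a in range(-20, 21):
    for b in range(-20, 21):
        if in_M(a, b):
            k = a // 2
            assert (a, b) == (k * mu[0], k * mu[1])

# identify N = Hom(M,Z) with Z via psi |-> psi(mu); then nu_i = i-th coordinate of mu,
# and the image <2> of Z^2 in N = Z has cokernel Z/2, matching mu_2 in the sequence above
nu1, nu2 = mu
print("nu_1, nu_2 =", nu1, nu2)

# stability theta = (1,1) with lifting a = (0,1):
#   P_a = {m : <m, nu_1> >= 0, <m, nu_2> >= -1} = {m : 2m >= 0 and -2m >= -1}
lower = Fraction(0)                 # from 2m >= 0
upper = Fraction(-1, nu2)           # from nu_2 * m >= -1 with nu_2 < 0
print("P_a = [%s, %s]" % (lower, upper))   # [0, 1/2]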
§ A-PERIODS OF GLSM AND B-PERIODS OF YVEE §.§ GKZ systems for periods of Calabi–Yau double covers Given an integral matrix A=(a_ij)∈Mat_d× m(ℤ) and a parameter β=(β_i)∈ℂ^d, the GKZ system ℳ_A^β is a set of partial differential equations on ℂ^m consisting of the following two types of operators * the box operators _ℓ:=∂^ℓ^+-∂^ℓ^- where ℓ^±∈ℤ^m_≥ 0 such that Aℓ^+=Aℓ^-; * the Euler operator E_i-β_i:=∑_j=1^m a_ijx_j∂_j- β_i for i=1,…,d. In the bullets, x_1,…,x_m are coordinates on ℂ^m corresponding to the columns of A, ∂_j≡∂∂ x_j is the partial derivative, and the multi-index convention is used. Suppose we are given the data in <ref> and let us retain the notation there. It is known that the period integrals of 𝒴^∨→ V satisfy a certain type of GKZ systems with a fractional exponent. (6) Let Σ be the fan defining X and J_1⊔⋯⊔ J_r=Σ(1) be the corresponding nef-partition on X. Put J_k={ρ_k,1,…,ρ_k,m_k}. Also we put additionally ρ_k,0=0∈ N for each k=1,…,r. From the duality construction, we have ∇_k∩ N = J_k∪{ρ_k,0}. We also use the same notation ρ_i,j to denote the primitive generator of the corresponding 1-cone in Σ. (7) Denote by {e_1,…,e_r} the standard basis of ℝ^r. For 1≤ i≤ r and 0≤ j≤ m_i, we put μ_i,j:=(e_i,ρ_i,j)∈ℤ^r+n. Regarding μ_i,j as column vectors, we define for each i=1,…,r a matrix A_i = [ height 1ex height 1ex; μ_i,0 ⋯ μ_i,m_i; height 1ex height 1ex ]∈Mat_(r+n)× (m_i+1)(ℤ) and put p=m_1+⋯+m_r and A = [ A_1 ⋯ A_r ]∈Mat_(r+n)× (r+p)(ℤ). We also set β = [ -1/2; ⋮; -1/2; height 1ex; 0; height 1ex ]∈ℚ^r+n, where the first r entries are -1/2. It will be easier to label the columns of A by (i,j) where 1≤ i≤ r and 0≤ j≤ m_i; the (i,j)th column of A is precisely the vector μ_i,j. (8) Given the matrix A and β as in <ref> (7), we denote by ℳ_A^β the associated GKZ system. The variables in the GKZ systems ℳ_A^β are called x_i,j where 1≤ i≤ r and 0≤ j≤ m_i; it is the variable corresponding to the (i,j)th column of A. The following proposition can be checked easily. The period integrals of 𝒴^∨→ V satisfy the GKZ system ℳ_A^β. [Example <ref> continued] Let us explicitly write down the GKZ system for the periods of 𝒴^∨→ V. From the construction, ∇_i is the divisor polytope of F_i and H^0(X^∨,F_i) = 2, i=1,2,3,4. Moreover, ∇_1∩ N = {0,(1,0,0)}, ∇_2∩ N = {0,(0,1,0)}, ∇_3∩ N = {0,(0,0,1)}, ∇_4∩ N = {0,(-1,-1,-1)}. The GKZ system ℳ_A^β describing the double cover family 𝒴^∨→ V is given by A = [ 1 1 0 0 0 0 0 0; 0 0 1 1 0 0 0 0; 0 0 0 0 1 1 0 0; 0 0 0 0 0 0 1 1; 0 1 0 0 0 0 0 -1; 0 0 0 1 0 0 0 -1; 0 0 0 0 0 1 0 -1; ]  β = [ -1/2; -1/2; -1/2; -1/2; 0; 0; 0 ]. §.§ The A-periods of the non-commutative resolutions Let us review some basics of toric varieties. We will follow the notation introduced in (1)(5) in the beginning of <ref> and in (6)(8) in <ref>. Let X be a smooth projective toric variety. We have the short exact sequence 0[r] M[r] ℤ^p[r] Cl(X)[r] 0. Here Cl(X) is the Weil divisor class group of X which equals the Cartier divisor class group owing to our hypothesis on X. Taking dual Hom_ℤ(-,ℤ) yields 0[r] Hom_ℤ(Cl(X),ℤ)[r] ℤ^p[r,"B"] N[r] 0, where the matrix B is given by B = [ height 1ex height 1ex; ρ_1,1 ⋯ ρ_r,m_r; height 1ex height 1ex ]∈Mat_n× p(ℤ). Applying Hom_ℤ(-,ℂ^∗) to (<ref>), we obtain the short exact sequence 1 [r] Hom_ℤ(Cl(X),ℂ^∗)[d,equal] [r] (ℂ^∗)^p[r] T_N[r] 1. G In order to describe the A-periods, we now introduce the following notation. (9) We fix once for all an integral basis of Cl(X) consisting of ample divisors and hence an isomorphism Cl(X)≅ℤ^s. 
Under this basis, the third morphism in (<ref>) is represented by an integral matrix [ [.5ex]2.5ex0.5pt θ^1 [.5ex]2.5ex0.5pt; ⋮ ; [.5ex]2.5ex0.5pt θ^s [.5ex]2.5ex0.5pt ]:= [ θ^1_1,1 ⋯ θ^1_r,m_r; ⋮ ⋱ ⋮; θ^s_1,1 ⋯ θ^s_r,m_r ]∈Mat_s× p(ℤ). Consequently, the character matrix of the second morphism in (<ref>) is given by its transpose; in other words, G=(ℂ^∗)^s→ (ℂ^∗)^p is given by (g_1,…,g_s)↦(∏_i=1^sg_i^θ^i_1,1,…, ∏_i=1^sg_i^θ^i_r,m_r) and we have for all 1≤ k≤ s ∑_i=1^r∑_j=1^m_iθ^k_i,jρ_i,j=0. (10) Let ϕ_i,j, 1≤ i≤ r and 1≤ j≤ m_i, be the homogeneous coordinates of X associated with the divisor ρ_i,j. Under the basis chosen in (9), ϕ_i,j has weight (θ^1_i,j,…,θ^s_i,j). §.§.§ The abelian GLSM associated to CY double covers We propose a `curved' Landau–Ginzburg potential W to define the GLSM for singular Calabi–Yau double covers; the potential is given by W = ∑_i,j z_i,j^2ϕ_i,j + ∑_k=1^r z_k,0^2 f_k(ϕ), where f_k(ϕ)∈H^0(X,E_k) is a section, regarding as a homogeneous polynomial in ϕ's. As we shall see, the notation z_k,0 will be much more convenient for us and will give a more concise formula at the end. Recall that E_k=∑_j=1^m_k D_k,j and sections of E_k corresponds to a (G,θ_k,0)-equivariant algebraic function f_k(ϕ) on ℂ^p with θ_k,0≡ (θ^1_k,0,…,θ^s_k,0) := ∑_j=1^m_k (θ^1_k,j,…,θ^s_k,j) We choose the G-weight of ϕ_i,j to be θ^s_i,j rescaled by 2; namely ϕ_i,j is equipped with G-weight 2(θ^1_i,j,…,θ_i,j^s). Then, to make (<ref>) G-invariant, the G-weight of z_i,j and z_k,0 are chosen as -(θ^1_i,j,…,θ^s_i,j),   z_i,j, -(θ^1_k,0,…,θ^s_k,0),   z_k,0. Under the weight assignments (<ref>) and (<ref>), W becomes G-invariant. By the discussion in section <ref>, the category of B-branes of this GLSM in the nc resolution phase is given by MF(Y_ζ,W_ζ) where Y_ζ≅𝒱⊕_k=1^rℒ_(k,0)→ [X/ℤ_2] 𝒱:=⊕_i=1^r⊕_j=1^m_iℒ_(i,j) and the line bundles are orbibundles given by ℒ_(i,j)=𝒪_X(-θ^1_i,j/2,…,-θ^s_i,j /2 ) ℒ_(k,0)=𝒪_X(-θ^1_k,0/2,…,-θ^s_k,0 /2 ) As argued in <cit.>, the category MF(Y_ζ,W_ζ) is equivalent to the derived category of sheaves of 𝒜_0♯ℤ_2-modules over X: MF(Y_ζ,W_ζ)≅ D(X,𝒜_0♯ℤ_2) where 𝒜_0 is the endomorphism algebra of the matrix factorization 𝐓_0 (see equation (<ref>)): 𝒜_0≅End(𝐓_0) taken in the category of coherent matrix factorizations, ignoring the global orbifold structure (for a precise description, see <cit.>) and ♯ denotes the smash product. In the case at hand where the superpotential is quadratic in the fiber coordinates, 𝒜_0 becomes a Clifford algebras generated by the symbols ψ_(i,j), ψ_(k,0) satisfying {ψ_A,ψ_B}=∂^2W/∂ z_A∂ z_B, A,B ∈{(i,j),(k,0)} therefore <cit.> D(X,𝒜_0♯ℤ_2)≅ D(X,Cl_0) where Cl_0 denotes the even part of the sheaf of Clifford algebras 𝒜_0. Then, we can conjecture the chain of equivalences D(X,Cl_0)≅ D(Y) where D(Y):=D^bCoh(Y) that can be defined following <cit.>. By homological mirror symmetry we expect in addition that an appropriate definition of the Fukaya category exists for Y^∨ and it is related to D(Y). However we are not aware of such definitions of the Fukaya category for a singular cyclic cover such as Y^∨. In <cit.> a construction for so-called double mirrors involving noncommutative resolutions is given. From the point of view of the GLSM, this implies that most, if not all, the GLSMs we propose in this section, must have a geometric phase. This is not immediately clear from our construction (and not required for our results). We expect to return to this in a sequel. 
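Before turning to the A-periods, note that since W is quadratic in the fiber coordinates z, the Clifford relations displayed above can be made completely explicit; the following is just the evaluation of {ψ_A,ψ_B}=∂^2W/∂z_A∂z_B for the superpotential (<ref>):
{ψ_(i,j),ψ_(i',j')} = 2ϕ_i,jδ_(i,j),(i',j'),   {ψ_(k,0),ψ_(k',0)} = 2f_k(ϕ)δ_k,k',   {ψ_(i,j),ψ_(k,0)} = 0.
In other words, 𝒜_0 is the sheaf of Clifford algebras of the diagonal quadratic form with entries 2ϕ_i,j and 2f_k(ϕ), which degenerates precisely along ∏_i,jϕ_i,j∏_k f_k(ϕ)=0, the branch locus of the double cover.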
§.§.§ The A-periods of abelian GLSMs We can apply the discussion in <ref> to the present situation. To write down the A-periods, the only missing piece of information is an admissible contour L_t⊂𝔱_ℂ. To this end, we will need the results in <ref>. Given t∈𝔱_ℂ, let Re(t)=ζ∈ℝ^s as before. For ζ regular, it determines the geometry of the GLSM via the symplectic quotient (<ref>) Y_ζ = μ^-1(ζ)𝖦. We may assume that ζ belongs to a cone of maximal dimension in the secondary fan SΣ. In general, the fan SΣ may be singular (even non-simplicial). Denote by SΣ' a simplicialization[Is important to remark that, when we fix the GLSM data, we are automatically choosing a subdivision SΣ' i.e. the secondary fan describing GLSM phases has implicitly chosen a simplicialization.] of SΣ. We may as well assume that ζ belongs to the interior of a cone τ of maximal dimension in SΣ'. According to our construction, SΣ and SΣ' are fans in the Euclidean space 𝔱. The product of Gamma functions in (<ref>) is then given by F(σ) := ∏_i=1^r∏_j=1^m_iΓ(2i∑_m=1^sσ_mθ_i,j^m) ∏_i=1^r∏_j=1^m_iΓ(-i∑_m=1^sσ_mθ_i,j^m+1/2) ∏_i=1^rΓ(-i∑_m=1^sσ_mθ_i,0^m+1/2). The A-periods of a non-commutative resolution are defined to be Z_𝔅(q_1,…,q_s)=∫_L F(σ_1,…,σ_s) f_𝔅(σ_1,…,σ_s) q_1^-iσ_1⋯ q_s^-iσ_sdσ, where F(σ_1,…,σ_s) = ∏_i=1^r∏_j=1^m_iΓ(2i∑_m=1^sσ_mθ_i,j^m) ∏_i=1^r∏_j=1^m_iΓ(-i∑_m=1^sσ_mθ_i,j^m+1/2) ∏_i=1^rΓ(-i∑_m=1^sσ_mθ_i,0^m+1/2), f_𝔅(σ) is a brane factor, and we give an explicit construction for L in <ref>. The function F(σ_1,…,σ_s) has a pole at (σ_1,…,σ_s) whenever ∑_m=1^sσ_mθ_i,j^m∈iℤ_⩾ 0/2, ∑_m=1^sσ_mθ_i,j^m∈ -i/2+iℤ_⩽ 0,   ∑_m=1^sσ_mθ_i,0^m∈ -i/2+iℤ_⩽ 0. For simplicity, we introduce Q_i,j(σ):=∑_m=1^sσ_mθ_i,j^m,   P_i(σ):=∑_m=1^sσ_mθ_i,0^m=∑_j=1^m_i Q_i,j(σ). The function F(σ) is then transformed into F(σ) = ∏_i=1^r∏_j=1^m_iΓ(2i Q_i,j(σ))∏_i=1^r∏_j=1^m_iΓ(-iQ_i,j(σ)+1/2) ∏_i=1^rΓ(-iP_i(σ)+1/2). Using the identities Γ(z)Γ(1-z)=π/sinπ z,  Γ(z)Γ(z+1/2) = 2^1-2z√(π)Γ(2z), the formula for F(σ) can be simplified into F(σ)=√(π)^p+rπ^r/2^p+r∏_i=1^r∏_j=1^m_i2^iQ_i,j(σ)Γ(iQ_i,j(σ))/cos(iπ Q_i,j(σ))∏_i=1^r1/cos(iπ P_i(σ))Γ(iP_i(σ)+1/2)^-1. Now let us focus on the brane factors. Let us begin with a baby example. [Calabi–Yau double cover of 𝐏^1] Consider the GLSM data associated to a Calabi–Yau double cover of 𝐏^1. =-3pt * V=ℂ^6=ℂ^2×ℂ^4 with coordinates (ϕ_1,1,ϕ_2,1,z_1,1,z_2,1,z_1,0,z_2,0); * G=ℂ^∗ such that ϕ_i,j has weight 2 and z_i,j has weight -1. * The R-weight of ϕ_i,j is 0 whereas the R-weight of z_i,j is 1; * W is the superpotential for singular double cover W(ϕ,z) = ∑_i=1^2 z_i,1^2ϕ_i,1 + ∑_k=1^2 z_k,0^2f_k(ϕ), where f_1(ϕ),f_2(ϕ) are the defining (linear) equations for the branch locus. It is also clear that W has R-weight 2. Consider square matrices η_1,0,η_1,1,η_2,0,η_2,1 and η̅_1,0,η̅_1,1,η̅_2,0,η̅_2,1 satisfying the Clifford relations {η_i,j,η_k,l}={η̅_i,j,η̅_k,l}=0  {η_i,j,η̅_k,l}=δ_ikδ_jl. One can construct the matrices as follows. Consider the exterior algebra ∧^∙ℂ^4; this is a complex vector space of dimension 16. Denote by {e_1,0,e_1,1,e_2,0,e_2,1} the standard basis of ℂ^4. (This label is more convenient.) Let η_i,j:=ι_e_i,j∧^∙ℂ^4→∧^∙ℂ^4  η̅_k,l:=e_k,l∧ -∧^∙ℂ^4→∧^∙ℂ^4. It is easy to check that η_i,j and η̅_k,l obey the commutator relations. For convenience, let us denote by ℐ the index set {(i,j) | 1≤ i≤ 2, 0≤ j≤ 1}. Let ℂv be a one dimensional trivial G and R representation. Set M=Span_ℂ{∏_(i,j)∈ Iη̅_i,jv | } and 𝐓_0:=∑_i=1^2z_i,1η_i,1+ ∑_i=1^2z_i,1ϕ_i,1η̅_i,1+ ∑_k=1^2z_k,0η_k,0+ ∑_k=1^2z_k,0f_k(ϕ)η̅_k,0. 
From the commutator relations among η_i,j and η̅_k,l, we have 𝐓_0^2 = W·id_M. We also require the factorization 𝐓_0 to be G and R-equivariant; namely ρ_M(g)^-1𝐓_0(ρ(g)· (ϕ,z)) ρ_M(g) = 𝐓_0(ϕ,z) R_M(λ) 𝐓_0(R(λ)· (ϕ,z)) R_M(λ)^-1 = λ𝐓_0(ϕ,z). The equation (<ref>) is transformed into ∑_i=1^2 g^-1z_i,1ρ_M(g)^-1η_i,1ρ_M(g) + ∑_i=1^2 g z_i,1ϕ_i,1ρ_M(g)^-1η̅_i,1ρ_M(g) +∑_k=1^2 g^-1z_k,0ρ_M(g)^-1η_k,0ρ_M(g) + ∑_k=1^2 g z_k,0 f_k(ϕ)ρ_M(g)^-1η̅_k,0ρ_M(g) = 𝐓_0(ϕ,z) which yields ρ_M(g)^-1η_i,jρ_M(g) = gη_i,j  ρ_M(g)^-1η̅_i,jρ_M(g) = g^-1η̅_i,j. Similarly, the equation (<ref>) is transformed into ∑_i=1^2λ z_i,1R_M(λ)η_i,1R_M(λ)^-1 + ∑_i=1^2λ z_i,1ϕ_iR_M(λ)η̅_i,1R_M(λ) +∑_k=1^2λ z_k,0R_M(λ)η_k,0R_M(λ)^-1+ ∑_k=1^2λ z_k,0 f_k(ϕ)R_M(λ)η̅_k,0R_M(λ)^-1 = λ𝐓_0(ϕ,z) which yields R_M(λ)η_i,jR_M(λ)^-1 = η_i,j   R_M(λ)η̅_i,jR_M(λ)^-1 = η̅_i,j. From the relations above, we have ρ_M(g) η̅_I v = ρ_M(g) η̅_Iρ_M(g)^-1ρ_M(g) v = g^|I|η̅_Iv, R_M(λ) η̅_I v = R_M(λ) η̅_I R_M(λ)^-1 R_M(λ) v = η̅_I v. Let us now compute the trace Tr(R_M(e^iπ) ρ_M(e^2πσ)). Note that R_M acts trivially on M and therefore it suffices to compute Tr(ρ_M(e^2πσ)). It follows that Tr(ρ_M(e^2πσ)) = ∑_I()0pt04|I| e^2π |I|σ = (1+e^2πσ)^4. We also remark that if ℂv is chosen to have G-weight q and R-weight r, i.e., ρ_M(g) v = g^q v,  R_M(λ)v = λ^r v, then the trace becomes e^2π qσ e^iπ r (1+e^2πσ)^4; this is another way to generate linearly independent A-periods but we will use a different choice of generators hence we can keep the weight of ℂv fixed to be zero. Consider another matrix factorization 𝐓_1=z_1,1^2η_1,1 +ϕ_1,1η̅_1,1+ z_2,1η_2,1+ z_2,1ϕ_2,1η̅_2,1 + ∑_k=1^2z_k,0η_k,0+ ∑_k=1^2z_k,0f_k(ϕ)η̅_k,0. It turns out that 𝐓_1 is also a matrix factorization. The G- and R-equivariant conditions for 𝐓_1 now imply for (i,j) (1,1) ρ_M(g)^-1η_i,jρ_M(g) = gη_i,j   ρ_M(g)^-1η̅_i,jρ_M(g) = g^-1η̅_i,j R_M(λ)η_i,jR_M(λ)^-1 = η_i,j    R_M(λ)η̅_i,jR_M(λ)^-1 = η̅_i,j. and ρ_M(g)^-1η_1,1ρ_M(g) = g^2η_1,1   ρ_M(g)^-1η̅_1,1ρ_M(g) = g^-2η̅_1,1 R_M(λ)η_1,1R_M(λ)^-1 = λη_1,1    R_M(λ)η̅_1,1R_M(λ)^-1 = λ^-1η̅_1,1. Let us compute the trace. Again take a trivial representation ℂv. We have for a subset I with (1,1)∉ I ρ_M(g) η̅_I v = g^|I|η̅_Iv R_M(λ) η̅_Iv = η̅_Iv and for a subset I with (1,1)∈ I ρ_M(g) η̅_I v = g^|I|+1η̅_Iv R_M(λ) η̅_I v = λη̅_Iv Then it follows that Tr(R_M(e^iπ) ρ_M(e^2πσ))=(1+e^2πσ)^3 ( 1 - e^4πσ) =(1+e^2πσ)^4(1-e^2πσ). We can carry out the period integrals. Recall that the A-periods are of the form ∫_LΓ(2σi)^2Γ(-σi+1/2)^4 q^-iσf_ℬ(σ)dσ. In the present case, f_ℬ(σ) = (1+e^2πσ)^4(1-e^2πσ)^m, m=0,1. Now we return to the general setup. Consider a GLSM data (V,ρ G→GL(V), Rℂ^∗→GL(V),W) associated to a Calabi–Yau double cover, i.e., =-3pt * V=ℂ^2p+r=ℂ^p×ℂ^p+r is a vector space with coordinates ϕ_k,l (1≤ k≤ r and 1≤ l≤ m_k) and z_i,j (1≤ i≤ r and 0≤ j≤ m_i). The coordinates on V will be abbreviated as (ϕ,z); * G=(ℂ^∗)^s is an algebraic torus defined in (<ref>) acting on V via (g_1,…,g_s)· (ϕ,z) =((∏_m=1^sg_m^2θ^m_k,l)ϕ_k,l, (∏_m=1^sg_m^-θ^m_i,j)z_i,j); * Rℂ^∗→GL(V) acting on ϕ_k,l with weight zero and on z_i,j with weight one; * W is the superpotential W = ∑_i=1^r∑_j=1^m_i z_i,j^2ϕ_i,j + ∑_k=1^r z_k,0^2 f_k(ϕ). We construct a matrix factorization 𝔅_0:=(M,ρ_M,R_M,𝐓_0) of W in the following way. For simplicity, we set ℐ :={(i,j) | 1≤ i≤ r, 0≤ j≤ m_i}, 𝒥 :={(i,j) | 1≤ i≤ r, 1≤ j≤ m_i}. Note that |𝒥|=p and |ℐ|=p+r. Let η_i,j and η̅_i,j with (i,j)∈ℐ be matrices satisfying {η_i,j,η_p,q}= {η̅_i,j,η̅_p,q}=0,  {η_i,j,η̅_p,q}=δ_ipδ_jq. 
Again one can produce these matrices via the exterior algebra. Consider 𝐓_0:= ∑_i=1^r∑_j=1^m_i z_i,jη_i,j + ∑_i=1^r∑_j=1^m_i z_i,jϕ_i,jη̅_i,j+ ∑_i=1^r z_i,0η_i,0 + ∑_i=1^r z_i,0f_i(ϕ)η̅_i,0 = ∑_(i,j)∈𝒥 z_i,jη_i,j + ∑_(i,j)∈𝒥 z_i,jϕ_i,jη̅_i,j+ ∑_i=1^r z_i,0η_i,0 + ∑_i=1^r z_i,0f_i(ϕ)η̅_i,0. From the commutator relation, we see that 𝐓_0^2=W·id_M. We also require that 𝐓_0 to be both G- and ℂ^∗-equivariant under ρ and R. More concretely, we demand that for g∈ G and λ∈ℂ^∗ ρ_M(g)^-1𝐓_0(ρ(g)· (ϕ,z)) ρ_M(g) = 𝐓_0(ϕ,z) R_M(λ) 𝐓_0(R(λ)· (ϕ,z)) R_M(λ)^-1 = λ𝐓_0(ϕ,z). The first equation (<ref>) implies ρ_M(g)^-1η_i,jρ_M(g) = (∏_m=1^s g_m^θ^m_i,j)η_i,j ρ_M(g)^-1η̅_i,jρ_M(g) = (∏_m=1^s g_m^-θ^m_i,j)η̅_i,j Similarly, the equation (<ref>) implies R_M(λ)^-1η_i,jR_M(λ) = η_i,j R_M(λ)^-1η̅_i,jR_M(λ) = η̅_i,j It then follows that f_𝔅_0(σ) =Tr(R_M(e^iπ) ρ_M(e^2πσ_1⋯ e^2πσ_s)) =Tr(ρ_M(e^2πσ_1⋯ e^2πσ_s)) =∏_(i,j)∈𝒥(1+e^2π∑_m=1^sσ_mθ^m_i,j) ∏_k=1^r(1+e^2π∑_m=1^sσ_mθ^m_k,0) =∏_(i,j)∈ℐ (1+e^2π∑_m=1^sσ_mθ^m_i,j). In particular, the brane factor f_𝔅_0(σ) provides zeroes at σ when ∑_m=1^sσ_mθ^m_i,j∈i/2+iℤ for some (i,j)∈ℐ. We can construct various matrix factorizations from subsets of 𝒥. Let J⊂𝒥 be a subset. Consider 𝐓_J:= ∑_(i,j)∈ J z_i,j^2η_i,j + ∑_(i,j)∈ Jϕ_i,jη̅_i,j+ ∑_(i,j)∉ J z_i,jη_i,j + ∑_(i,j)∉ J z_i,jϕ_i,jη̅_i,j +∑_i=1^r z_i,0η_i,0 + ∑_i=1^r z_i,0f_i(ϕ)η̅_i,0. When J=∅, we recover 𝐓_0. Now we can compute the brane factor for 𝐓_J. The equivariance conditions ρ_M(g)^-1𝐓_J(ρ(g)· (ϕ,z)) ρ_M(g) = 𝐓_J(ϕ,z) R_M(λ) 𝐓_J(R(λ)· (ϕ,z)) R_M(λ)^-1 = λ𝐓_J(ϕ,z) imply that ρ_M(g)^-1η_i,jρ_M(g) = (∏_m=1^s g_m^θ^m_i,j)η_i,j ρ_M(g)^-1η̅_i,jρ_M(g) = (∏_m=1^s g_m^-θ^m_i,j)η̅_i,j R_M(λ)^-1η_i,jR_M(λ) = η_i,j R_M(λ)^-1η̅_i,jR_M(λ) = η̅_i,j  (i,j)∉ J and that ρ_M(g)^-1η_i,jρ_M(g) = (∏_m=1^s g_m^2θ^m_i,j)η_i,j ρ_M(g)^-1η̅_i,jρ_M(g) = (∏_m=1^s g_m^-2θ^m_i,j)η̅_i,j R_M(λ)^-1η_i,jR_M(λ) = λη_i,j R_M(λ)^-1η̅_i,jR_M(λ) = λ^-1η̅_i,j  (i,j)∈ J. Let ℂv be the trivial G- and ℂ^∗-representation. In the present case, M is spanned by (∏_J⊂𝒥η̅_J) v where the notation η̅_J means ∏_(i,j)∈ Jη̅_i,j and we use the lexicographic ordering to define this product. We have shown For a subset J⊂𝒥, the brane factor f_𝔅_J(σ) associated to the matrix factorization 𝔅_J=(M,ρ_M,R_M, 𝐓_J) is given by f_𝔅_J(σ)=∏_(i,j)∈ℐ (1+e^2π∑_m=1^sσ_mθ^m_i,j) ∏_(i,j)∈ J (1-e^2π∑_m=1^sσ_mθ^m_i,j). where J can include the empty set. Combined this with (<ref>), we have the following proposition. For any subset J⊂𝒥, we have F(σ) f_𝔅_J(σ) =√(π)^p+3r∏_i=1^r∏_j=1^m_i(2^iQ_i,j(σ)Γ(iQ_i,j(σ))) ∏_i=1^rΓ(iP_i(σ)+1/2)^-1∏_(i,j)∈ J (1-e^2π Q_i,j(σ)) where Q_i,j(σ) and P_i(σ) are defined in (<ref>). Recall that cos z = 1/2(e^iz-e^-iz). We have cos (iπ Q_i,j(σ)) = 1/2(e^-π Q_i,j(σ)+e^π Q_i,j(σ)) =e^-π Q_i,j(σ)/2(1+e^2π Q_i,j(σ)) cos (iπ P_i(σ)) = 1/2(e^-π P_i(σ)+e^π P_i(σ)) =e^-π P_i(σ)/2(1+e^2π P_i(σ)). By Proposition <ref>, we have f_𝔅_J(σ)=∏_i=1^r∏_j=1^m_i (1+e^2π Q_i,j(σ))∏_i=1^r(1+e^2π P_i(σ)) ∏_(i,j)∈ J (1-e^2π Q_i,j(σ)). Using the fact that P_i(σ)=∑_j=1^m_iQ_i,j(σ), we have F(σ) f_𝔅_J(σ) =√(π)^p+3r∏_i=1^r∏_j=1^m_i(2^iQ_i,j(σ)Γ(iQ_i,j(σ))) ∏_i=1^rΓ(iP_i(σ)+1/2)^-1∏_(i,j)∈ J (1-e^2π Q_i,j(σ)) as desired. §.§ A-periods and GKZ systems Now we prove that A-periods are governed by the GKZ A-hypergeometric system associated with our singular Calabi–Yau family 𝒴^∨→ V. To relate our A-periods with the GKZ systems, we introduce the change of variables q_k :=∏_i=1^r∏_j=1^m_i(-4x_i,j)^θ_i,j^k∏_i=1^rx_i,0^-θ^k_i,0. Let 𝐱=(x_i,j)_(i,j)∈ℐ and set Ẑ_𝔅(𝐱):= 1/(∏_i=1^r x_i,0)^1/2Z_𝔅(q_1,…,q_s). 
Under the change of variables (<ref>), the functions Ẑ_𝔅_J(𝐱) satisfy the GKZ system ℳ_A^β with A and β defined in <ref>(7). We begin with an observation. There is an isomorphism ker(A)≅ker(B) by forgetting all the (i,0)th component. The integral vector θ^k=(θ^k_i,j)∈ker(B) defined in <ref> admits a unique lifting to ker(A). By abuse of notation, such a lifting is denoted by θ^k. Notice that the (i,0)th component of θ^k is given by -θ_i,0^k. From the change of variables, it is clear that Ẑ_𝔅_J(𝐱) is killed by the Euler operators in ℳ_A^β. All we have to check is that the box operators annihilate Ẑ_𝔅_J(𝐱). Pick ℓ∈ker(A) and write ℓ = ∑_k=1^s n_kθ^k=ℓ^+-ℓ^-. Let ℓ^+=(α_i,j) and ℓ^-=(β_i,j). We have ℓ = (α_i,j-β_i,j) and ∏_i=1^r∏_j=1^m_i(-4x_i,j)^ℓ_i,j∏_i=1^rx_i,0^ℓ_i,0 =(∏_i=1^r∏_j=1^m_i(-4x_i,j)^α_i,j∏_i=1^rx_i,0^α_i,0) (∏_i=1^r∏_j=1^m_i(-4x_i,j)^-β_i,j∏_i=1^rx_i,0^-β_i,0). By a direct computation, we see that (Ẑ≡Ẑ_𝔅_J(𝐱)) (∏_i=1^r∏_j=1^m_i(-4x_i,j)^α_i,j∏_i=1^rx_i,0^α_i,0) ∂^ℓ^+Ẑ =∏_i=1^r∏_j=1^m_i∏_m=1^α_i,j (-4)(-i∑_k=1^sθ_i,j^kσ_k-m+1) ·∏_i=1^r∏_m=1^α_i,0(i∑_k=1^sθ_i,0^kσ_k-m+1/2)Ẑ. Likewise, we have (∏_i=1^r∏_j=1^m_i(-4x_i,j)^β_i,j∏_i=1^rx_i,0^β_i,0) ∂^ℓ^-Ẑ =∏_i=1^r∏_j=1^m_i∏_m=1^β_i,j (-4)(-i∑_k=1^sθ_i,j^kσ_k-m+1) ·∏_i=1^r∏_m=1^β_i,0(i∑_k=1^sθ_i,0^kσ_k-m+1/2)Ẑ. It follows that (∏_i=1^r∏_j=1^m_i(-4x_i,j)^α_i,j∏_i=1^rx_i,0^α_i,0)∂^ℓ^-Ẑ =(∏_i=1^r∏_j=1^m_i(-4x_i,j)^ℓ_i,j∏_i=1^rx_i,0^ℓ_i,0) ·(∏_i=1^r∏_j=1^m_i(-4x_i,j)^β_i,j∏_i=1^rx_i,0^β_i,0)∂^ℓ^-Ẑ =1/(∏_i=1^r x_i,0)^1/2∫_L F(σ⃗)f_𝔅_J(σ⃗) ∏_i=1^r∏_j=1^m_i∏_m=1^β_i,j (-4)(-i∑_k=1^sθ_i,j^kσ_k-m+1) ×∏_i=1^r∏_m=1^β_i,0(i∑_k=1^sθ_i,0^kσ_k-m+1/2) ∏_k=1^s(∏_i=1^r∏_j=1^m_i(-4x_i,j)^θ_i,j^k∏_i=1^rx_i,0^-θ_i,0^k)^-iσ_k+n_kdσ. Here σ⃗=(σ_1,…,σ_s). Under the change of variables τ_k = σ_k + in_k, the last quantity in the above equation is transformed into 1/(∏_i=1^r x_i,0)^1/2∫_L+in⃗ F(τ⃗-in⃗) f_𝔅_J(τ⃗-in⃗) ∏_i=1^r∏_j=1^m_i∏_m=1^β_i,j (-4)(-i∑_k=1^sθ_i,j^k(τ_k-in_k)-m+1) ×∏_i=1^r∏_m=1^β_i,0(i∑_k=1^sθ_i,0^k(τ_k-in_k)-m+1/2) ∏_k=1^s(∏_i=1^r∏_j=1^m_i(-4x_i,j)^θ_i,j^k∏_i=1^rx_i,0^-θ_i,0^k)^-iτ_kdτ. We claim that ∏_i=1^r∏_j=1^m_i∏_m=1^α_i,j (-4)(-i∑_k=1^sθ_i,j^kσ_k-m+1) ·∏_i=1^r∏_m=1^α_i,0(i∑_k=1^sθ_i,0^kσ_k-m+1/2) F(σ⃗) =∏_i=1^r∏_j=1^m_i∏_m=1^β_i,j (-4)(-i∑_k=1^sθ_i,j^k(σ_k-in_k)-m+1) ·∏_i=1^r∏_m=1^β_i,0(i∑_k=1^sθ_i,0^k(σ_k-in_k)-m+1/2) × F(σ⃗-in⃗). Here the notation τ⃗ and n⃗ are the obvious ones. Grating this equality, let us explain how the claim implies the theorem. From the claim, we have 1/(∏_i=1^r x_i,0)^1/2∫_L+in⃗ F(τ⃗-in⃗) f_𝔅_J(τ⃗-in⃗) ∏_i=1^r∏_j=1^m_i∏_m=1^β_i,j (-4)(-i∑_k=1^sθ_i,j^k(τ_k-in_k)-m+1) ×∏_i=1^r∏_m=1^β_i,0(i∑_k=1^sθ_i,0^k(τ_k-in_k)-m+1/2) ∏_k=1^s(∏_i=1^r∏_j=1^m_i(-4x_i,j)^θ_i,j^k∏_i=1^rx_i,0^-θ_i,0^k)^-iτ_kdτ =1/(∏_i=1^r x_i,0)^1/2∫_L+in⃗ F(σ⃗) f_𝔅_J(σ⃗) ∏_i=1^r∏_j=1^m_i∏_m=1^α_i,j (-4)(-i∑_k=1^sθ_i,j^kσ_k-m+1) ×∏_i=1^r∏_m=1^α_i,0(i∑_k=1^sθ_i,0^kσ_k-m+1/2) ∏_k=1^s(∏_i=1^r∏_j=1^m_i(-4x_i,j)^θ_i,j^k∏_i=1^rx_i,0^-θ_i,0^k)^-iσ_kdσ. We examine the poles in the integrand F(σ⃗) f_𝔅_J(σ⃗) ∏_i=1^r∏_j=1^m_i∏_m=1^α_i,j (-4)(-i∑_k=1^sθ_i,j^kσ_k-m+1) ∏_i=1^r∏_m=1^α_i,0(i∑_k=1^sθ_i,0^kσ_k-m+1/2). According to Proposition <ref>, the function F(σ⃗) f_𝔅_J(σ⃗) has a pole at σ⃗ whenever iQ_i,j(σ⃗) ∈i·ℤ_≥ 0  (i,j)∈𝒥. It suffices to show that for every 0≤ε≤ 1, the cycle L+εin⃗ does not meet the poles in (<ref>). To this end, let δ⃗ = σ⃗ + εin⃗∈ L+εin⃗. If iQ_i,j(δ⃗) = iQ_i,j(σ⃗) - ε Q_i,j(n⃗) ∈ℤ_≤ 0 (particularly iQ_i,j(σ⃗) is real), then we must have Q_i,j(n⃗) ≥ 0. 
Otherwise, iQ_i,j(σ⃗) ∈ℤ_≤ 0 + ε Q_i,j(n⃗) ⊂ℝ_≤ 0 and L can not be a continuous deformation of the real locus L_ℝ (cf. the condition (a) in Definition <ref>). Notice that Q_i,j(n⃗) = ℓ_i,j = α_i,j. Therefore, -iQ_i,j(σ⃗) + α_i,j≥ -iQ_i,j(δ⃗) = -iQ_i,j(σ⃗) + ε Q_i,j(n⃗)≥ -iQ_i,j(σ⃗) and there exists 1≤ m≤α_i,j such that -iQ_i,j(δ⃗) = m-1. In other words, the pole will be annihilated by a zero in the product in (<ref>). Now let us prove the claim. Fix (i,j)∈𝒥. We will prove that Γ(2i∑_k=1^sθ_i,j^kσ_k) Γ(-i∑_k=1^sθ_i,j^kσ_k+1/2) (-1)^α_i,j∏_m=1^α_i,j (-4)(-i∑_k=1^sθ_i,j^kσ_k-m+1) =  Γ(2i∑_k=1^sθ_i,j^k(σ_k-in_k)) Γ(-i∑_k=1^sθ_i,j^k(σ_k-in_k)+1/2) ·(-1)^β_i,j∏_m=1^β_i,j (-4)(-i∑_k=1^sθ_i,j^k(σ_k-in_k)-m+1) . Recall that Q_i,j(n⃗)=∑_k=1^sθ_i,j^kn_k=ℓ_i,j=α_i,j-β_i,j. We then have (i) [t] Γ(2i∑_k=1^sθ_i,j^k(σ_k-in_k))/Γ(2i∑_k=1^sθ_i,j^kσ_k) = Γ(2i∑_k=1^sθ_i,j^kσ_k+ 2(α_i,j-β_i,j))/Γ(2i∑_k=1^sθ_i,j^kσ_k) =∏_m=-∞^2(α_i,j-β_i,j)(2i∑_k=1^sθ_i,j^kσ_k+m-1)/∏_m=-∞^0( 2i∑_k=1^sθ_i,j^kσ_k+m-1), (ii) [t] Γ(-i∑_k=1^sθ_i,j^kσ_k-(α_i,j-β_i,j)+1/2)/Γ(-i∑_k=1^sθ_i,j^kσ_k+1/2) =∏_m=-∞^-(α_i,j-β_i,j)(-i∑_k=1^sθ_i,j^kσ_k+1/2+m-1)/∏_m=-∞^0(-i∑_k=1^sθ_i,j^kσ_k+1/2+m-1) =(-2)^α_i,j-β_i,j ∏_m=-∞^-(α_i,j-β_i,j)(2i∑_k=1^sθ_i,j^kσ_k-1-2m+2)/∏_m=-∞^0(2i∑_k=1^sθ_i,j^kσ_k-1-2m+2) =(-2)^α_i,j-β_i,j ∏_m=-∞^0(2i∑_k=1^sθ_i,j^kσ_k-1+2m)/∏_m=-∞^α_i,j-β_i,j(2i∑_k=1^sθ_i,j^kσ_k-1+2m), (iii) [t] (-4)^β_i,j∏_m=1^β_i,j(-i∑_k=1^sθ_i,j^k(σ_k-in_k)-m+1)/(-4)^α_i,j∏_m=1^α_i,j(-i∑_k=1^sθ_i,j^kσ_k-m+1) = 4^β_i,j∏_m=1^β_i,j(i∑_k=1^sθ_i,j^k(σ_k-in_k)+m-1)/ 4^α_i,j∏_m=1^α_i,j(i∑_k=1^sθ_i,j^kσ_k+m-1) =4^β_i,j-α_i,j ∏_m=1^β_i,j(i∑_k=1^sθ_i,j^kσ_k+ (α_i,j-β_i,j)+m-1)/∏_m=1^α_i,j(i∑_k=1^sθ_i,j^kσ_k+m-1) =4^β_i,j-α_i,j ∏_m=α_i,j-β_i,j+1^α_i,j( i∑_k=1^sθ_i,j^kσ_k+m-1)/∏_m=1^α_i,j(i∑_k=1^sθ_i,j^kσ_k+m-1) =4^β_i,j-α_i,j ∏_m=-∞^0(i∑_k=1^sθ_i,j^kσ_k+m-1)/∏_m=-∞^α_i,j-β_i,j(i∑_k=1^sθ_i,j^kσ_k+m-1) =2^β_i,j-α_i,j ∏_m=-∞^0(2i∑_k=1^sθ_i,j^kσ_k+2m-2)/∏_m=-∞^α_i,j-β_i,j(2i∑_k=1^sθ_i,j^kσ_k+2m-2). Multiplying them together, we get (<ref>). It is also clear that (-1)^α_i,0Γ (-i∑_k=1^sθ_i,0^kσ_k+1/2) ∏_m=1^α_i,0(i∑_k=1^sθ_i,0^kσ_k-m+1/2) =(-1)^β_i,0Γ(-i∑_k=1^sθ_i,0^k (σ_k-in_k)+1/2) ∏_m=1^β_i,0(i∑_k=1^sθ_i,0^k(σ_k-in_k)-m+1/2). Combined with (<ref>), the validity of the claim is reduced to the fact that (-1)^∑_i,jβ_i,j = (-1)^∑_i,jα_i,j. This holds since (α_i,j-β_i,j)=(ℓ_i,j)∈ker(A) and therefore ∑_i=1^r∑_j=0^m_iℓ_i,j = 0. When J runs through all subsets in 𝒥, the A-periods Ẑ_𝔅_J(𝐱) generate the full solution set of ℳ_A^β. To illustrate Conjecture <ref>, we provide a proof for the case G=ℂ^* and θ_(i,j)=1 for all (i,j), i.e. when the base is X=𝐏^n. In the present case, the subsets J⊂𝒥 can be label just by a positive integer m=0,…,n. It is straightforward to show that the hemisphere partition function Z_𝔅_J(q) can be written as a sum of residues of the poles at σ∈iℤ_≥ 0 when (t)≫ 1: Z_𝔅_J(q)=∑_l=0^∞Z_J^(l) at (t)≫ 1 where Z_J^(l)=√(π)^3(n+1+r)(-2)^-l(n+1)q^l∫_γ_0dσ 2^i(n+1)σq^-iσ∏_k=1^rΓ(iθ_k,0σ-l+1/2)^-1(1-e^2πσ)^m/Γ(1-iσ+l)^n+1sin(iπσ)^n+1 where γ_0 is a small counterclockwise contour surrounding the origin σ=0. In order to show completeness is enough to compute the residue at l=0. 
Write the residue Z_J^(0) as Z_J^(0) = √(π)^3(n+1+r)∫_γ_0dσ/σ^n+1q̃^-iσf(σ)(1-e^2πσ)^mσ^n+1/sin(iπσ)^n+1 = 2π i√(π)^3(n+1+r)/n!d^n/dσ^n.(f(σ)q̃ ^ -iσ(1-e^2πσ)^mσ^n+1/sin(iπσ)^n+1)|_σ=0 where we defined f(σ):=∏_k=1^rΓ(iθ_k,0σ+1/2)^-1/Γ(1-iσ)^n+1, q̃=q2^-i(n+1) since f(σ) is a regular function at σ=0, Z_J^(0) becomes Z_J^(0)=m!(2π)^m2π i√(π)^3(n+1+r)/n!d^n-m/dσ^n-m.( f(σ)q̃ ^ -iσσ^n+1/sin(iπσ)^n+1)|_σ=0. The expression (<ref>) has a single term proportional to (-ilnq̃)^n-m (all the rest are lower order in -ilnq̃) given by Z_J^(0)=i^-n-1m!(2π)^m2π i√(π)^3(n+1+r)/n!(-ilnq̃)^n-m 1/Γ(1/2)^rπ^n+1+𝒪((lnq̃)^n-m-1) this shows that all the n+1 functions, labeled by J, are algebraically independent functions of lnq̃, hence forming a complete set. § CONTOURS In this section, we give an explicit construction for the contour L. Let us define y^m:=Im(σ^m) x^m:=Re(σ^m) m=1,…, s In order to define L we fix the value of ζ in the interior of a chamber C given by a maximal cone of the secondary fan. Then we can write C=Span_ℝ_≥ 0{θ_1,…,θ_s} where θ_m, m=1,…, s is any subset of the weights whose positive real span corresponds to C. We done the rest of the weights as θ̂_I, I=1,…,2p+r-s. Is convenient to define the matrices B∈Mat_2p+r-s,s(ℚ) by θ̂_I=∑_m=1^sB_I^ mθ_m B_I^ m∈ℚ and the variables x', y' defined by the (invertible) linear transformation y'_m:=θ_m(y) x'_m:=θ_m(x) then the contour L is defined as a graph L:={(x,y(x))| x∈ℝ^s}⊂𝔱_ℂ this automatically satisfies the condition of <cit.> of L being a deformation of L_ℝ. Less trivially, L must satisfy the 'pole avoiding' conditions θ(x)=0⇒θ(y(x))≤ 0 for all weights θ∈{θ_1,…,θ_s,θ̂_1,…,θ̂_2p+r-s} we define the a graph by specifying the functions f_m(x) in y'_m(x)=f_m(x)|x'_m|^p where p∈ℤ_>0 can be chosen arbitrarily. We require that the functions f_m(x) satisfy lim_|x|→∞f_m(x)=1 for all m where the notation lim_|x|→∞ stands for ∪_m|x_m|→∞. An explicit expression for f_m(x) satisfying (<ref>) and (<ref>) is given by f_m(x):=∏_i,B_i^ m> 0(1-e^-|B_i(x')|), B_i(x')≡∑_m=1^sB_i^ mx'_m where the product is taken over all i∈{1,…,2p+r-s} such that the coefficient B_i^ m is strictly positive. amsxport
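The ansatz y'_m(x) = f_m(x)|x'_m|^p with the explicit f_m above is easy to evaluate numerically, which is convenient for checking the limit condition lim_{|x|→∞} f_m(x) = 1 and the pole-avoiding inequalities on examples. The sketch below is a hypothetical helper; it assumes the matrix B and the transformed coordinates x' have already been computed from a chosen basis of weights.

```python
import numpy as np

def contour_graph(x_prime, B, p=1):
    """
    Sketch of the graph L = {(x, y(x))}: given the transformed coordinates
    x'_m = theta_m(x), return y'_m(x) = f_m(x) * |x'_m|**p with
      f_m(x) = prod_{I : B_I^m > 0} (1 - exp(-|B_I(x')|)),
    where B_I(x') = sum_m B_I^m x'_m and B has one row per remaining weight
    theta-hat_I expanded in the chosen basis theta_1, ..., theta_s.
    """
    x_prime = np.asarray(x_prime, dtype=float)
    Bx = B @ x_prime                       # the values B_I(x')
    y_prime = np.empty_like(x_prime)
    for m in range(x_prime.size):
        f_m = np.prod(1.0 - np.exp(-np.abs(Bx[B[:, m] > 0])))
        y_prime[m] = f_m * np.abs(x_prime[m]) ** p
    return y_prime

# toy data: s = 2 basis weights and three remaining weights
B = np.array([[1.0, 0.0], [0.5, 0.5], [-1.0, 2.0]])
print(contour_graph([2.0, -3.0], B))
print(contour_graph([200.0, -300.0], B))   # f_m -> 1 for large |x|
```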
Coordinated motion of epithelial layers on curved surfaces
Lea Happel, Axel Voigt
arXiv:2307.00989v1 [cond-mat.soft, math-ph, math.MP], submitted 3 July 2023, http://arxiv.org/abs/2307.00989v1
[plainnat] Institute of Scientific Computing, TU Dresden, 01062 Dresden, Germany Institute of Scientific Computing, TU Dresden, 01062 Dresden, Germany Center for Systems Biology Dresden, Pfotenhauerstr. 108, 01307 Dresden, Germany Cluster of Excellence, Physics of Life, TU Dresden, Arnoldstr. 18, 01307 Dresden, Germany Coordinated cellular movements are key processes in tissue morphogenesis. Using a cell-based modeling approach we study the dynamics of epithelial layers lining surfaces with constant and varying curvature. We demonstrate that extrinsic curvature effects can explain the alignment of cell elongation with the principal directions of curvature. Together with specific self-propulsion mechanisms and cell-cell interactions this effect gets enhanced and can explain observed large-scale, persistent and circumferential rotation on cylindrical surfaces. On toroidal surfaces, extrinsic curvature only plays a minor role and the curvature coupling results primarily from intrinsic curvature. These findings unveil the role of curvature and postulate its importance for tissue morphogenesis. Coordinated motion of epithelial layers on curved surfaces A. Voigt August 1, 2023 ========================================================== Geometry, and in particular local curvature, influences biological systems at various length scales <cit.>. One example which is associated with curved epithelial layers is collective rotation. Persistent and synchronous rotation around a given axis of a sphere has been observed in vivo <cit.>, in vitro <cit.> and in silico <cit.>. These phenomena differ significantly from collective behavior in flat space and are attributed to the geometric and topological properties of the sphere. Nevertheless, the underlying principles and mechanisms that trigger such collective rotation remain unclear even for surfaces as simple as a sphere, not to mention the curved environments that epithelial tissues encounter during morphogenesis. To better understand how curvature influences the mechanics of epithelial layers we consider cylindrical and toroidal surfaces. They provide ideal prototypical geometries to test the impact of curvature and allow for validation for specific cell types <cit.>. At the single cell level it has been shown that cells sense and respond to curvature <cit.>, essentially by regulating the transcellular network architecture <cit.> and aligning the filaments with the principal directions of curvature <cit.>. Experimental realizations furthermore show a dependence on the cell type, while, e.g., fibroblasts align with the minimal curvature direction <cit.>, MDCK cells align with the maximal curvature direction <cit.>. In addition to regulating the network architecture the nucleus also plays a role and cell migration on curved surfaces is shown to follow the path of least nuclear mechanical stress <cit.>. These phenomena, which describe the response to cell-scale curvature, is termed curvotaxis <cit.> and can be extended to collective cell behavior on curved surfaces. Coordinated rotation has been associated with the alignment of filaments with principal directions of curvature, cell-cell adhesion as well as apical-basal polarity <cit.>. In <cit.> cylindrical epithelia of MDCK cells are considered. The results indicate that proper cell-cell adhesion is essential, as well as aligned cellular polar order. This alignment is again in the principal directions of curvature. 
In contrast to these factors, the orientation of the actin network does not seem to be essential for collective rotation. Also geometries with varying curvature, e.g. toroidal surfaces have been considered <cit.>. However, in this study only the cell elongation is addressed but not their coordinated motion. In this Letter we propose a minimal cell-based surface model that reproduces these effects for MDCK cells. Two-dimensional vertex models, e.g., <cit.> and multi-phase field models <cit.> have been successfully used to simulate epithelial tissue in flat space. Extending these approaches to curved surfaces is still rare, see <cit.> for vertex models and <cit.> for multi-phase field models considered on a sphere. None of these approaches account for extrinsic curvature contributions. These terms, which somehow translate the three-dimensional nature of a thin layer, for an epithelial layer, e.g. the difference between the apical and basal side, into an effectively two-dimensional framework on the curved surface, will be shown to be essential to model the effects of curvature discussed above. Extrinsic curvature effects are well established in the theory of surface liquid crystals <cit.>. These theories force the director field to be tangential to the surface and the corresponding free energies contain coupling terms between the director field and the principal curvature directions of the shape operator <cit.>. These terms follow naturally if the energies are derived as thin film limits from three-dimensional theories <cit.> and have shown various implications on phase transitions <cit.>, active nematodynamic flows <cit.> and shape deformations <cit.>. However, in the context of cell alignment, these implications are unexplored. We consider a multi-phase field model that allows for cell deformations and detailed cell-cell interactions, as well as extrinsic curvature coupling. To allow for large-scale tissue deformations local cellular rearrangements are required. These topological changes, e.g., T1 transitions or formation of rosettes, follow naturally in this framework <cit.>. We consider two-dimensional phase field variables _i(𝐱,t) one for each cell, with 𝐱 defined on the surface S. Values of _i=1 and _i=-1 denote the interior and exterior of a cell, respectively. The cell boundary is implicitly defined as the zero-level set of _i. We consider various surfaces S, see Figure <ref>. They are topologically equivalent to a flat torus but differ by their geometric properties. The dynamics for each _i are considered as _i + v_0(𝐯_i ·_i) = δ/δ_i, for i=1,...,N, where N denotes the number of cells. is a free energy and 𝐯_i is a vector field used to incorporate active components, with a self-propulsion strength v_0. The operators and denote the covariant derivative and Laplace-Beltrami operator on S, respectively. All quantities are non-dimensional quantities. As in previous studies <cit.>, we consider conserved dynamics. The free energy contains several contributions and reads = + + with a de Gennes-Cahn-Hilliard energy <cit.> = ∑_i=1^N1/G(_i)(/2_i^2 + 1/W(_i)), which stabilizes the interface, with W(ϕ_i) = 14 (1 - _i^2)^2 a double-well potential, a small parameter determining the width of the diffuse interface and 1/G(ϕ_i) called the de Gennes coefficient in polymer science. This term does not influence the asymptotic limit but helps to keep -1 ≤ϕ_i ≤ 1, which becomes important as the numerical solution is more sensitive to variations as in flat space <cit.>. We consider G(ϕ_i) = 3/2(1 - ϕ_i^2). 
is the capillary number. This covariant formulation only accounts for intrinsic curvature effects. Minimizing this energy by solving eq. (<ref>) with v_0 = 0 on a cylindrical surface leads to a geodesic circle with no preferred orientation, see Figure <ref> (purple cell). This does not resample the observed properties of single cells on cylindrical shapes <cit.>. Lets associate a director field with the shape of the cell. In flat space this has been considered in <cit.>. Adapting the definition of the shape operator to the surface we obtain the surface Q-tensor fields 𝐪_i=[ (∂_𝐭_2_i)^2-(∂_𝐭_1_i)^2/2 -∂_𝐭_1_i∂_𝐭_2_i; -∂_𝐭_1_i∂_𝐭_2_i (∂_𝐭_1_i)^2-(∂_𝐭_2_i)^2/2 ] where 𝐭_1 and 𝐭_2 denote the directions of principal curvatures of the center of mass of cell i. Together with the outward-pointing normal to the surface S they define the Darboux frame, see SI for details. The eigenvectors of the tensor field 𝐪_i correspond to the direction of largest elongation and contraction and the corresponding eigenvalues measure the degree of elongation and contraction in these directions. Using these directions to define director fields 𝐝_i allows to associate nematic order to the epithelial tissue. This has been considered before in flat space <cit.> and on a sphere <cit.>. In our case 𝐪_i and 𝐝_i are tangential tensor and vector fields, respectively. Coarse-grained quantities of the surface Q-tensor fields 𝐪 and the director fields 𝐝 are considered in surface liquid crystal models. In these models both quantities are related by 𝐪 = S (𝐝⊗𝐝 - 1/2𝐠) <cit.> where S is a nematic order parameter and 𝐠 is the metric of the surface S. Already in typical one-constant approximations of the corresponding surface energies, if derived as a thin film limit from the corresponding 3D models, additional geometric coupling terms occur <cit.>. In case of the surface Frank-Oseen model the term of interest reads ||𝐝||^2=||𝐝||^2 + ⟨ν⊗𝐝,ν⊗𝐝⟩ where =-ν denotes the shape operator and ⟨·, ·⟩ the scalar product on <cit.>. In differential geometry denotes the so-called Guenther derivative and denotes the surface tangential gradient, see SI for definitions. There are various physical implications resulting from the choice of derivative, see <cit.> for an overview. Of relevance to our case is only the alignment of 𝐝 with principal directions of curvature resulting from the second summand in eq. (<ref>). This coupling term has been added in an ad hoc manner in <cit.> to account for linear curvature contributions in surface active nematodynamics. Our goal is to account for this contribution in the cellular description. We therefore consider it in the phase field context and define =∑_i=1^N ⟨⊗_i,⊗_i ⟩, where is a parameter that determines the preferred direction and strength of this geometric coupling. We furthermore use that the integral mean of _i is orthogonal to the elongation of the cell and is thus related to 𝐝_i. While can become negative, the area conservation of _i and guarantees a well-posed problem within reasonable parameter settings. Figure <ref> shows the effect of and on a single cell on different geometries if v_0 = 0. While the shape is independent of the position in flat space and on the cylinders, with constant principle curvatures and zero Gaussian curvature, the shape depends on the position on the torus. Here can be reduced by moving the cell towards the region of maximal Gaussian curvature. 
The varying intrinsic curvature deforms the cell in an energetically favorable manner from elongation in toroidal direction in regions of lowest Gaussian curvature (inside) to elongation in poloidal direction in the region of highest Gaussian curvature (outside). The influence of is almost negligible in this setting. Further details on the evolution are provided in SI. The missing energy contribution accounts for the interaction between cells. It is convenient to define ψ_i = 1/2 (_i + 1). A common way to model repulsive and attractive forces is to consider = ∑_i=1^N ∑_j≠ ia_repψ_i^2 ψ_j^2 - a_attψ_i^2 ψ_j^2_:= f_INT with coefficients a_rep and a_att, see <cit.> for the corresponding form in flat space. We modify this formulation and consider the equilibrium condition /2ϕ_i^2 ≈1/ W(ϕ_i) resulting from the tanh-profile of _i and approximate a_attψ_i^2 ψ_j^2 ≈ã_att W(ϕ_i) W(ϕ_j), with the rescaled coefficient ã_att. This leads to the numerically more appropriate form without derivatives, where f_INT = ã_rep (ϕ_i +1)^2(ϕ_j+1)^2 - ã̃_att (ϕ_i^2 -1)^2 (ϕ_j^2 -1)^2 with rescaled coefficients ã_rep and ã̃_att, as considered in <cit.>. Activity is incorporated by self-propulsion defining 𝐯_i. There are various possibilities, which differ by complexity, ranging from random motion <cit.> to considering mechanochemical subcellular processes <cit.> and physical implications, e.g. polarity and velocity alignment and contact inhibition <cit.>, see <cit.> for a comparison. Here we define 𝐯_i= v_0(cos(θ_i)𝐞_1^i+sin(θ_i)𝐞_2^i) with v_0 a constant self-propulsion strength, the angle θ_i which is controlled by rotational noise dθ_i(t)=√(2D_r)dW_i(t) with diffusivity D_r and a Wiener process W_i and the local orthonormal coordinate system (𝐞_1^i,𝐞_2^i) in the tangent plane of the center of mass of cell i. We consider an elongation model with 𝐞_1^i pointing in the direction of largest elongation, as considered in flat space in <cit.>. The problem can be solved numerically using surface finite elements <cit.> and the parallelization concept introduced in <cit.>, which considers each cell on a different core and accounts for the short-range interaction between cells to reduce the communication. This essentially allows scaling with the number of cells, see SI for details. We consider three different cylindrical surfaces with equal surface area || but different curvature and 60 equally sized cells with a packing fraction of 90% placed on them with random initial direction of movement. For geometric quantities and detailed parameters see SI. Figure <ref> shows data for one cylinder and > 0, clearly indicating collective rotation, consistent with the experiments for MDCK cells in <cit.>. All simulations on cylindrical surfaces are summarized in Figure <ref>. We consider each cell within a time frame after an initial phase and plot the distribution of their orientation and direction of movement with respect to the angle with the longitudinal direction for three different simulations, see SI for details on data analysis. The color coding corresponds to the magnitude of the averaged velocity. Without any extrinsic curvature contribution (= 0 (purple cell), see Figure <ref> (middle row)) there is no clear trend visible for any preferred direction of elongation or movement. For < 0 (yellow cell) and > 0 (green cell) the cells collectively elongate and move in the longitudinal and azimuthal direction, respectively. These effects are enhanced with increasing curvature. 
This is associated with stronger elongation, more pronounced movement in longitudinal or azimuthal direction and increased velocity, see Figure <ref> (top and bottom row). The detailed data shown in Figure <ref> corresponds to h). Corresponding data for a) - i) are provided in SI. While the effect of extrinsic curvature is rather small for single cells, it is enhanced in coordinated motion leading qualitatively different behaviour. However, the enhancement of the elongation with principal curvature directions also strongly depends on the self-propulsion mechanism. Corresponding results for a random model, where 𝐞_1^i is chosen as the direction of the velocity vector from the last time step, which can be considered as a generalization of active Brownian particles on surfaces to deformable objects <cit.>, are shown in SI. This mechanism leads to a preferred elongation direction only for the cylindrical surfaces with the strongest curvature (r_Cyl, h_Cyl) = (0.41, 9.49), but no tendency for collective motion in azimuthal or longitudinal direction. On toroidal surfaces curvature varies along the poloidal direction, see SI. As already seen for a single cell, this has consequences for the shape and position of the cell. We consider the same setting on two different toroidal surfaces of equal area but now with 144 cells. Figure <ref> shows a summary of the results. As in Figure <ref> we plot the distribution of the direction of movement and the elongation direction. The angle is with respect to the poloidal direction. We see a strong elongation of the cells in the poloidal direction which is only slightly increased (decreased) by the extrinsic curvature contribution if >0 ( < 0), respectively, see Figure <ref> (top and bottom row). Considering the elongation as a function of the Gaussian curvature shows a clear tendency to change from elongation in toroidal direction towards poloidal direction going from negative to positive values. This agrees, at least qualitatively, with measurements for MDCK cells on toroidal surfaces within the region of negative Gaussian curvature <cit.>. Quantitative differences can be associated with significantly different numbers of cells and considered geometries, different measurement techniques and possible influences of the considered geometry in <cit.>. The direction of movement is less pronounced compared to cylindrical surfaces. This can be explained by geometric constraints, as movement in the poloidal direction is restricted by the geometry of the toroidal surface. However, as for single cells extrinsic curvature effects only play a minor role on toroidal surfaces. Incorporating extrinsic curvature contributions into a cell-based surface multi-phase field model allows to effectively resolve the three-dimensional nature of epithelial layers, e.g., the difference between the apical and basal side. This reveals essential effects of curvature on single cells and their collective motion. The alignment of cells with the principal directions of curvature leads under appropriate propulsion mechanisms and cell-cell interactions to collective motion on specific geometries. On cylindrical surfaces this can lead to long-term changes from a quiescent state to spontaneous collective rotation, as observed in vitro for MDCK cells <cit.>. Cylindrical surfaces are not only special mathematical objects, they are representative of many epithelial tissues, such as tubular vessels, ranging from small capillaries to large arteries, tubular glands, and ducts <cit.>. 
On more general surfaces with varying Gaussian curvature these extrinsic curvature contributions only play a minor role. The curvature coupling is dominated by intrinsic curvature with a strong impact on the elongation of the cells. Both couplings vastly increase the range of tissue parameters to control the flow of the epithelial layer. Combining this with shape changes induced by these tangential flows, as considered in coarse-grained models for fluid deformable surfaces <cit.>, has the potential to transform our understanding of morphogenesis. Acknowledgments: This work was funded by DFG within FOR3013. We further acknowledge computing resources at FZ Jülich under grant PFAMDIS and at ZIH under grant WIR. [library_curvature] SUPPLEMENTAL MATERIAL Coordinated motion of epithelial layers on curved surfaces L. Happel,^1A. Voigt,^1,2,3 ^1Institute of Scientific Computing, TU Dresden, 01062 Dresden, Germany ^2Center for Systems Biology Dresden, Pfotenhauerstr. 108, 01307 Dresden, Germany ^3Cluster of Excellence, Physics of Life, TU Dresden, Arnoldstr. 18, 01307 Dresden, Germany [plainnat] § GEOMETRY We consider two prototypical geometries, cylindrical and toroidal surfaces. With periodic boundary conditions these geometries are topologically equivalent to a flat torus but differ in their geometric properties. Figure <ref> shows the principal directions of curvature for these geometries. Cylindrical surfaces are characterized by their radius r_Cyl and height h_Cyl, see Figure <ref>. They are labeled as (r_Cyl,h_Cyl). The principal curvatures are k_1 = 1/r_Cyl and k_2 = 0, corresponding to the azimuthal and longitudinal direction, respectively. The three cylindrical shapes have the same area || = 2 π r_Cyl h_Cyl. Toroidal surfaces are characterized by two radii R_T and r_T, see Figure <ref>. They are labeled as (R_T,r_T). The principal curvatures are k_1 = (√(x_1^2 + x_2^2) - R_T)/r_T in toroidal direction and k_2 = 1/r_T in poloidal direction. The two toroidal shapes have the same area || = 4 π^2R_Tr_T. However, they strongly differ with respect to the Gaussian curvature K = k_1 k_2. § DIFFERENTIAL GEOMETRY Related to the surface we denote the outward pointing surface normal , the shape operator (negative of the extended Weingarten map) with =-ν and the surface projection 𝐏=𝐈-⊗. Let be the covariant derivative. This operator is well defined for vector fields in the tangent bundle of . For the tangential director field 𝐝 we use the Guenther derivative, which is a component-wise tangential derivative defined as 𝐝 = (∇𝐝^e)|_S 𝐏 where 𝐝^e is an extension of 𝐝 constant in normal direction and ∇ is the gradient of the embedding space ^3. For tangential director fields relates to the covariant derivative by 𝐝 = 𝐝 +⊗𝐝, see <cit.>. For sufficiently smooth ℝ^3-vector fields 𝐰 the tangential derivative is defined as 𝐰=𝐏∇𝐰^e𝐏. Again, 𝐰^e is an extension of 𝐰 constant in normal direction. For scalar fields these derivatives are identical, e.g. ϕ = ϕ = ϕ. § NUMERICAL ISSUES The resulting system of surface partial differential equations is solved by surface finite elements <cit.> within the toolbox AMDiS <cit.> which was recently integrated into the DUNE framework <cit.>. For the surface discretization an analytic grid function from DUNE-CurvedGrid <cit.> is used, which gives access to analytic formulas for the projection 𝐏, surface normal ν and the shape operator . An accurate representation of the surface is crucial as has been shown to be sensitive to surface discretization errors. 
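For the cylindrical case the analytic quantities mentioned here can be written out in a few lines. The sketch below is an illustrative stand-in (not the DUNE-CurvedGrid implementation): it evaluates the outward normal ν, the projection 𝐏 = 𝐈 − ν⊗ν, the Darboux frame directions t_1 = ν×e_z and t_2 = e_z, and assembles the shape operator from the principal curvatures k_1 = 1/r_Cyl, k_2 = 0, following the convention used in the SI that the principal curvatures are its eigenvalues.

```python
import numpy as np

def cylinder_darboux(x, r_cyl):
    """
    Analytic surface quantities for a cylinder of radius r_cyl with axis e_z,
    evaluated at a surface point x = (x1, x2, x3): outward normal nu, tangential
    projection P, the Darboux frame directions t1 (azimuthal) and t2
    (longitudinal), and the shape operator built from k1 = 1/r_cyl and k2 = 0.
    """
    x = np.asarray(x, dtype=float)
    nu = np.array([x[0], x[1], 0.0]) / r_cyl
    P = np.eye(3) - np.outer(nu, nu)
    e_z = np.array([0.0, 0.0, 1.0])
    t1 = np.cross(nu, e_z)          # azimuthal direction, t1 = nu x e_z
    t2 = e_z                        # longitudinal direction
    k1, k2 = 1.0 / r_cyl, 0.0
    S = k1 * np.outer(t1, t1) + k2 * np.outer(t2, t2)
    return nu, P, t1, t2, S

# check orthonormality of the Darboux frame at a sample point
r = 0.41
nu, P, t1, t2, S = cylinder_darboux([r, 0.0, 1.3], r)
frame = np.stack([t1, t2, nu])
assert np.allclose(frame @ frame.T, np.eye(3))
assert np.allclose(P @ nu, 0.0)
print("principal curvatures (eigenvalues of S):", np.linalg.eigvalsh(S))
```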
Each cell, represented by the phase field variable ϕ_i, is considered on its own core and has its own mesh, which is adaptively refined within the diffuse interface to ensure approximately 7 grid points across the interface. Dealing with leads to a non-local problem and in principle requires communication between all cells and thus all cores. Due to the short-range interaction this communication can be reduced to the neighboring cells. This approach allows parallel scaling with the number of cells <cit.>, which has been demonstrated for up to 1,000 cells in flat space and carries over to the curved surface. We split the higher order partial differential equations for each ϕ_i into a system of second order partial differential equations by introducing μ_i=δ/δ_i and consider P^2-Lagrange elements for the unknowns ϕ_i and μ_i. Discretization in time is done by finite differences using _i ≈_i^n+1-_i^n/τ_n, where τ_n denotes the time step size for the n-th time step. It is chosen to fulfill the CFL condition. In general a linear implicit-explicit scheme is used, where all linear terms treated implicitly and all nonlinear terms explicitly. However, the double-well potential W(ϕ_i) and the non-linear terms in are linearized with one Newton-step and the de Gennes factor G(ϕ_i) is regularized by G_η(ϕ_i) = √(9/4 (1 - ϕ_i^2)^2 + η^2 ϵ^2), with η > 0. The resulting linear system in each time step is solved by the direct solver UMFPACK. To extract the elongation of the cells the eigenvalues and eigenvectors of the surface Q-tensors 𝐪_i have to be computed. It turns out to be essential that these surface Q-tensors are defined with respect to the local coordinate system of the tangent plane at the center of mass. The Darboux frame with 𝐭_1 and 𝐭_2 the directions of the principal curvatures and the outward-pointing normal to the surface S at this point is needed to resolve the sensitive dependence on curvature. We calculate the eigenvalues of 𝐪_i as λ_i^1 = √((1/2((∂_𝐭_2_i)^2-(∂_𝐭_1_i)^2))^2+(-∂_𝐭_1_i∂_𝐭_2_i)^2) λ_i^2 = - λ_i^1 and the corresponding eigenvectors by 𝐮_i^1 =1/2((∂_𝐭_2_i)^2-(∂_𝐭_1_i)^2) + λ_i^1/-∂_𝐭_1_i∂_𝐭_2_i𝐭_1+𝐭_2 𝐮_i^2 =1/2((∂_𝐭_2_i)^2-(∂_𝐭_1_i)^2) + λ_i^2/-∂_𝐭_1_i∂_𝐭_2_i𝐭_1+𝐭_2. 𝐮_i^1 is the eigenvector pointing in the direction of largest elongation and 𝐮_i^2 is the one pointing in the direction of largest contraction of cell i. The director field is thus defined as 𝐝_i = 𝐮_i^1 / 𝐮_i^1. To take the periodicity of the domain into account when calculating the center of mass we follow the approach suggested in <cit.>. Using that has an orthonormal basis out of eigenvectors and that the eigenvectors of are the principal curvature directions 𝐭_1, 𝐭_2 (with the values of principal curvature _1,_2 as eigenvalues) and the surface normal with eigenvalue 0.0 we can rewrite the extrinsic curvature terms as ⟨ν⊗𝐝_i,ν⊗𝐝_i⟩ =_1^2⟨𝐭_1,𝐝_i⟩^2+_2^2⟨𝐭_2,𝐝_i⟩^2 = 1/^2⟨𝐭_1,𝐝_i⟩^2, where the last simplification holds only for a cylinder with radius since _2 = 0 and _1 = 1/. In our setup for a cylinder the direction of the zero curvature is always aligned with the z-axis. Therefore the direction of the non-zero curvature is 𝐭_1=×𝐞_z. In this setting, simplifies to =∑_i=1^N ⟨⊗_i,⊗_i ⟩=∑_i=1^N 1/^2⟨_i,𝐭_1 ⟩^2. On a torus the directions and values of the principal curvature depend on the position 𝐱= (x_1,x_2,x_3)^T. Therefore eq. 
(<ref>) on the torus reads ⟨ν⊗𝐝_i,ν⊗𝐝_i⟩ =_1(𝐱)^2⟨𝐭_1(𝐱),𝐝_i⟩^2+_2(𝐱)^2⟨𝐭_2(𝐱),𝐝_i⟩^2 = ( √(x_1^2+x_2^2)-R_T/r_T)^2 ⟨[ -x_2; x_1; 0.0 ],𝐝_i ⟩^2+1/r_T^2⟨(𝐱)×𝐭_1(𝐱),𝐝_i⟩^2, and therefore we obtain =∑_i=1^N ⟨⊗_i,⊗_i ⟩ =∑_i=1^N ( √(x_1^2+x_2^2)-R_T/r_T)^2 ⟨[ -x_2; x_1; 0.0 ],_i ⟩^2+1/r_T^2⟨(𝐱)×𝐭_1(𝐱),_i⟩^2. The sign of determines whether the direction of largest elongation or the direction of largest contraction of the cell wants to align with the direction of largest absolute curvature. On a cylinder > 0 leads to an elongation of the cells in azimuthal direction (green cell in Figure 1b) in the main article) and < 0 leads to an elongation of the cell in longitudinal direction (yellow cell in Figure 1b) in the main article). On the torus the effect of is less pronounced because the intrinsic curvature effects are much stronger. They determine the evolution and shape of the cell. The energy depends on the position of the cell and the geometry of the torus (as sketched in Figure 1c) in the main article). Besides active driving in the direction of cell elongation we also consider a random model. This was sufficient to obtain collective rotation on the sphere <cit.>. However, it turns out that this mechanism does not lead to coordinated movement on a cylindrical shape, see Figure <ref>. Instead of the elongation model with 𝐞_1^i pointing in the direction of largest elongation, we specify 𝐞_1^i to be the direction of movement from the previous time step. Such an approach was introduced in <cit.> and can be considered as an extension of active Brownian particles to deformable objects. However, as already seen in <cit.>, where different propulsion mechanisms are compared, such an approach is not sufficient to resample basic mechanical properties. The numerical approach can be easily adapted to consider this propulsion mechanism. § CONSIDERED PARAMETERS Table <ref> summarizes the parameters varied in the simulations. The remaining parameters are kept constant during all simulations and are denoted in Table <ref>. All cells are of equal size. All simulations correspond to an area fraction of 90%. § DATA ANALYSIS The data is evaluated after an initial time period so that our measurements are independent of the random initialization. The kymographs and averaged cell velocities over time in Figure 2 in the main article and in Figures <ref> and <ref> show one simulation in the time interval [50,150]. The statistical data on the distribution of the direction of motion and elongation direction in Figures 3 and 4 in the main article and Figure <ref> takes three different simulations into account. For the polar plots of the cylinder showing the direction of movement (see Figure 3a) - 3i) in the main article and Figure <ref>a) - <ref>i)) the velocity vectors 𝐯_𝐜𝐞𝐥𝐥 are calculated from the difference in ℝ^3 between the center of mass of the corresponding cell at time t and time t+6.0. If the magnitude of the velocity vector is smaller than 10^-6 it is not regarded, because we only want to consider velocity vectors where the movement direction surely dominates approximation errors, e.g. resulting from the approximate calculation of the center of mass. With this we have roughly 1500 data points for each plot. We then calculate the angle between 𝐯_𝐜𝐞𝐥𝐥 and the longitudinal direction and compute the distribution from this. For the distribution we use 16 bins, so we divide the interval from 0^∘ to 90^∘ into 16 equal-sized bins. For each bin we calculate the mean velocity 𝐯_𝐜𝐞𝐥𝐥 which is color-coded. 
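The movement-direction analysis just described translates directly into a few lines of code. The sketch below (an illustration, not the authors' analysis script) computes the velocity vectors from the centers of mass at times t and t+6, discards near-zero vectors, folds the angle with the longitudinal direction into [0°, 90°], and returns the 16-bin distribution together with the mean speed per bin that is color-coded in the polar plots.

```python
import numpy as np

def movement_direction_histogram(com_t, com_t6, n_bins=16, v_min=1e-6):
    """
    Velocity vectors are the R^3 differences of the cell centers of mass between
    time t and t + 6; vectors with magnitude below v_min are discarded, the angle
    with the longitudinal (e_z) direction is binned into n_bins equal bins over
    [0, 90] degrees, and the mean speed per bin is returned.
    """
    v = np.asarray(com_t6) - np.asarray(com_t)            # shape (n_cells, 3)
    speed = np.linalg.norm(v, axis=1)
    keep = speed > v_min
    v, speed = v[keep], speed[keep]
    cos_long = np.abs(v[:, 2]) / speed                    # |cos| of angle with e_z
    angles = np.degrees(np.arccos(np.clip(cos_long, 0.0, 1.0)))
    edges = np.linspace(0.0, 90.0, n_bins + 1)
    idx = np.clip(np.digitize(angles, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    mean_speed = np.bincount(idx, weights=speed, minlength=n_bins) / np.maximum(counts, 1)
    return counts / counts.sum(), mean_speed, edges

# toy usage with synthetic centers of mass for 60 cells
rng = np.random.default_rng(1)
com0 = rng.normal(size=(60, 3))
com1 = com0 + 0.1 * rng.normal(size=(60, 3))
dist, mean_speed, edges = movement_direction_histogram(com0, com1)
print(dist.round(3))
```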
For the polar plots of the cylinder which show the distribution of the elongation direction (see Figure 3j) - 3r) in the main article and Figure <ref>j) - <ref>r)) the elongation directions 𝐮^1_𝐢 have been calculated during run time according to eq. (2). We only consider values every 6 time units to be consistent with the evaluations for the direction of movement. Again we calculate the angle between 𝐮^1_𝐢 and the longitudinal direction and compute the distribution from this. For the distribution we use 16 bins, so we divide the interval from 0^∘ to 90^∘ into 16 equal-sized bins. For the kymographs (Figure 2b) in the main article and Figures <ref> and <ref>) the velocity in longitudinal (respectively azimuthal) direction of each cell is calculated. We calculate this from the center of mass of the corresponding cell at time t and time t+1.5. The velocity in the azimuthal direction is calculated from the signed angle between (x_1,x_2) at the two time points and . For the azimuthal velocity the data points at each time point are sorted according to the height of the cell on the cylinder, i.e. according to x_3 of the center of mass of the cell. This makes also batch-wise rotation visible, e.g. all cells in the upper half of the cylinder rotating in one direction and all cell in the lower part of the cylinder rotating in the other direction. The longitudinal velocity is calculated from the difference of the x_3-coordinates at the two time points. For the longitudinal velocity the data points at each time point are sorted according to the angle between (x_1,x_2) of the center of mass of the cell and the direction (1.0,0.0). This makes also common movements in longitudinal direction of only a part of the cells visible. The calculation of the polar plots (see Figure 4a)-4f) in the main article) for the tori, which show the direction of movement, is done similarly to the calculation of the polar plots for the cylinders except for two things: First, we calculate the velocity vectors 𝐯_𝐜𝐞𝐥𝐥 from the difference in ℝ^3 between the center of mass of the corresponding cell at time t and time t+1.5, since the difference between the velocity vector in ℝ^3 and the velocity vector on the surface is larger for the tori than for the cylinder. Second, we calculate the angle between 𝐯_𝐜𝐞𝐥𝐥 and the poloidal direction instead of the angle with the longitudinal direction. For this we take the poloidal direction at the center of mass of the cell at t+0.75. To be consistent with the plots for the direction of movement also for the polar plots showing the direction of elongation (see Figure 4g)-4l) in the main article) the values of 𝐮^1_𝐢 every 1.5 time units are used. There the angle between 𝐮^1_𝐢 and the poloidal direction is calculated with the poloidal direction at the center of mass at the same point in time. The mean Gaussian curvature experienced by a cell, K_cell, is calculated at runtime using the following formulae: K_cell=K ψ_i/ψ_i with ψ_i = 1/2 (ϕ_i +1). To evaluate the direction of movement with respect to K_cell (see Figure 4m) in the main text) we compute 𝐯_𝐜𝐞𝐥𝐥 and the angle with the poloidal direction exactly as for the polar plots. But this time the bins for the distribution are computed with respect to K_cell, where we took K_cell at t+0.75. We used 10 bins, which are equally distributed between K_cell=-3.26 and K_cell=0.47 as this where the minimal and maximal value for K_cell encountered in the simulations. In these bins the mean angle was calculated. 
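The per-cell curvature average and the curvature-binned statistics described above and in the following paragraph can be sketched as follows (illustrative code; the nodal quadrature weights are an assumption of the sketch, standing in for the surface finite element mass lumping).

```python
import numpy as np

def mean_gaussian_curvature_per_cell(K_nodes, phi_nodes, node_weights):
    """
    K_cell = int(K * psi_i) / int(psi_i) with psi_i = (phi_i + 1) / 2, approximated
    here by a weighted nodal sum; `node_weights` stands in for the quadrature
    (mass-lumping) weights of the surface mesh and is an assumption of this sketch.
    """
    K = np.asarray(K_nodes, float)
    psi = 0.5 * (np.asarray(phi_nodes, float) + 1.0)
    w = np.asarray(node_weights, float)
    return np.sum(w * K * psi) / np.sum(w * psi)

def binned_mean_angle(K_cell, angles, n_bins=10, K_min=-3.26, K_max=0.47):
    """Mean angle per K_cell bin; the standard deviations give the error-bar lengths."""
    K_cell, angles = np.asarray(K_cell, float), np.asarray(angles, float)
    edges = np.linspace(K_min, K_max, n_bins + 1)
    idx = np.clip(np.digitize(K_cell, edges) - 1, 0, n_bins - 1)
    means = np.array([angles[idx == b].mean() if np.any(idx == b) else np.nan
                      for b in range(n_bins)])
    stds = np.array([angles[idx == b].std() if np.any(idx == b) else np.nan
                     for b in range(n_bins)])
    return edges, means, stds
```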
For the evaluation of the elongation direction in terms of K_cell (see Figure 4n) in the main text) the angles between the elongation direction and the poloidal direction are computed exactly as in the polar plots and the values of K_cell are taken from the same time step. As for Figure 4m) from the main text, the bins for the distribution are computed with respect to K_cell and 10 equally sized bins from K_cell=-3.26 to K_cell=0.47 are used. In these bins the mean angle was calculated. The error bars in Figure 4m) and 4n) in the main article have the length of the standard deviation of the values in that particular bin. § RESULTS FOR A SINGLE CELL The evolution of the energy for a single cell corresponding to Figure 1 in the main article is shown in Figure <ref> for the cylindrical surfaces and in Figure <ref> for the toroidal surfaces. We consider the energy contributions and and F=+. We solve the evolution equation eq. (1) in the main articles with v_0 = 0. For the evaluation of one cell on the cylinder (see figure <ref>) we start with a geodesic circle on the cylinder, so the optimal solution for a system with =0.0. This is also illustrated by the Figures <ref> d)-f), where no change in energy occurs because the solution is already optimal. If ≠ 0.0 a small deformation of the cell can lead to a further decrease of the total energy, since the geodesic circle is no longer the optimal solution. This effect is most visible for the cylinder with the highest curvature, see Figures <ref> c) and <ref> i). Decreasing is associated with an increase of . The sum of both energy contributions decreases. However, the absolute values of the energy contributions differ by three orders of magnitude. For the toroidal surfaces the cell is placed on the inside of the torus. This leads to a movement of the cell towards the outside of the torus. It is clearly visible that this is driven by and not , see Figure <ref>. The decrease of is by several orders of magnitude larger than changes in . § RESULTS FOR COORDINATED MOVEMENT We provide the corresponding detailed data as shown in Figure 2 of the main article for all configurations considered in Figure 3 of the main article, see Figure <ref>. The results confirm the argumentation in the main article. In addition we provide the corresponding results for a random propulsion mechanism, see figures <ref> and <ref>. While a preferred elongation direction depending on is visible for the cylinder with the largest curvature, there is no preferred direction of movement. § MOVIES We provide movies for the considered geometries with > 0, modeling the behaviour of MDCK cells, corresponding to Figure 3 g), h), i) or p), q, r) in the main article for the cylindrical surfaces and Figure 4 e), f) or k), i) in the main article for the toroidal surfaces. The movies show one simulation within the time frame [50,150]. The cells are visualized by their zero level sets ϕ_i = 0. [library_curvature]
Modeling Tag Prediction based on Question Tagging Behavior Analysis of CommunityQA Platform Users
Kuntal Kumar Pal, Michael Gamon, Nirupama Chandrasekaran, Silviu Cucerzan
arXiv:2307.01420v1 [cs.CL], submitted 4 July 2023, http://arxiv.org/abs/2307.01420v1
Work done during the summer internship 2022 at Microsoft Research kkpal@asu.edu 1234-5678-9012 Arizona State University Tempe Arizona United States Microsoft Research Redmond Washington United States In community question-answering platforms, tags play essential roles in effective information organization and retrieval, better question routing, faster response to questions, and assessment of topic popularity. Hence, automatic assistance for predicting and suggesting tags for posts is of high utility to users of such platforms. To develop better tag prediction across diverse communities and domains, we performed a thorough analysis of users' tagging behavior in 17 communities. We found various common inherent properties of this behavior on those diverse domains. We used the findings to develop a flexible neural tag prediction architecture, which predicts both popular tags and more granular tags for each question. Our extensive experiments and obtained performance show the effectiveness of our model. <ccs2012> <concept> <concept_id>10002951.10003317.10003347.10003350</concept_id> <concept_desc>Information systems Recommender systems</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10002951.10003317.10003347.10003348</concept_id> <concept_desc>Information systems Question answering</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10002951.10003317.10003318.10003320</concept_id> <concept_desc>Information systems Document topic models</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178.10010179.10010182</concept_id> <concept_desc>Computing methodologies Natural language generation</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178.10010179.10003352</concept_id> <concept_desc>Computing methodologies Information extraction</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Information systems Recommender systems [500]Information systems Question answering [500]Information systems Document topic models [500]Computing methodologies Natural language generation [500]Computing methodologies Information extraction Modeling Tag Prediction based on Question Tagging Behavior Analysis of CommunityQA Platform Users Michael Gamon, Nirupama Chandrasekaran, Silviu Cucerzan August 1, 2023 ================================================================================================= § INTRODUCTION Community Question Answering (CQA) platforms have become a very important online source of information for Web users. On these platforms, information seeking takes the form of questions and answers in communities formed around common domains of interest. , Quora, AnswerBag, Question2Answer, Reddit[stackexchange.com, quora.com, answerbag.com, question2answer.org, reddit.com] and Biostars <cit.> are some of the most popular public CQA platforms. Many enterprise entities offer similar private platforms for their employees. These communities have amassed over time large online information repositories, with high numbers of daily active users. Thus, there is a need to organize and retrieve information efficiently, as well as to facilitate question routing to interested and qualified experts in order to provide a seamless user experience and interaction. Semantic tagging of questions plays an important role in this context. 
Most CQA platforms require users to assign tags to their questions. Tags are keywords representative of the topics covered by those questions. They help communities to (1) categorize and organize information (2) retrieve existing answers for users looking for information, which in turn reduces duplicate question creation (3) route questions to topic experts which improves query response time and answer quality (4) provide tag-based notifications, which allow knowledgeable community members to answer questions in their areas of expertise and gain reputation (5) assess the popularity of various areas and topics in the targeted domain. Asking users to annotate their questions with tags without providing adequate support poses several challenges, in particular with respect to novice users and to the lack of knowledge about tag usage in a community, which may lead to the creation of various tags with the same meaning, as well as different orthographic forms of those tags. This makes question routing difficult (for tag-based subscription platforms), delays response time, and leads to poor information organization. In turn, addressing these issues would require community administrators to constantly work on identifying and merging near-duplicate tags. Additionally, lack of support in suggesting adequate tags may inhibit novice users from asking questions and/or lead to questions being mistagged and not answered. These challenges may become more severe in enterprise CQA platforms due to community size and topic sparsity. Against this background, tag-prediction becomes an extremely important while challenging task for both public and private CQA platforms. In this investigation, our first goal was to understand the commonalities of the tagging behaviors of users through a large scale analysis of 17 diverse domains in (Section <ref>). Our analysis revealed that while these domains are quite diverse in terms of volume of questions, users and tags, they share common distributional properties for tag and tag pair usage. Also, there is a large lexical overlap between the tags and user texts in every domain. Post coverage of tags is high in all domains. Tags also show positional stability and tag pairs show particular ordering preferences forming a soft hierarchy among tags. We incorporate the findings to develop a neural model with two tag-prediction heads - one trained to predict existing popular tags such as the name of important topics in a domain (e.g. "harry-potter", doctor-who", and "star-wars" in the scifi domain) and frequently-used meta-tags (e.g. "video-games", "books", and "short-stories" in scifi) and another one generate finer-grained tags, which may have been used rarely on previous questions or are new. Typically, the former category of tags represent the main topic area of a question while the latter help in further scoping down and clarifying it. Both types of tags are equally important in identifying the question and hence it is necessary for the tag prediction systems to not only predict the main generic tags but also the refined ones. Our experiments show that the proposed approach significantly outperforms baseline methods in prediction of both generic tags and finer-grained tags. We also investigate and show the effect of reducing the pre-defined vocabulary size, as well as the contributions of each prediction head. Our main contributions in this work are: * We present an in-depth analysis of the tagging behaviors of the users of a CQA platform () on 17 diverse domains. 
We present our findings of question tag analysis across four dimensions: tag space, tag co-occurrence, tag pair ordering, and tag positional stability. * We propose a tag prediction architecture for both predicting popular tags from a pre-defined vocabulary and generating refined tags not present in the vocabulary. * We perform comprehensive experiments on the 17 domains and show effects of each model component under various experimental settings. § DATASET PREPARATION We collected data from 17 communities of that correspond to a diverse set of domains. We use the data dumps[https://archive.org/details/stackexchange_20210301] (2021-03-01) for our analysis and model. We find that the Post.xml file is sufficient for our tag analysis and predictions. We only consider the posts from the dataset which are either questions or answers (PostTypeId) for our analysis. We reject posts with no owners (OwnerUserId, OwnerDisplayName). As imposed by , the minimum and maximum number of tags assigned to each posts are one and five respectively and all the posts in this data set are dated prior to March, 2021. We chose several domains from each of the following categories[https://stackexchange.com/sites#]: Technology, Culture & recreation, Life & arts, Science and Professional. Each selected domain has at least a decade of posts. We do not include the stackoverflow domain because of its enormous volume and also a random sample set might not be representative of the full data of this domain. Hence we consider askUbuntu which is also a representative community of the Technology domain. § TAGGING BEHAVIOR ANALYSIS To understand the user behavior of question tagging and to identify the inherent commonalities, we analyze ten years of data from these 17 domains. Mathematical Notation: Without loss of generality, let D denote one of the domains (out of 17) being investigated, P the set of posts in the data for this domain, and T={t_1, t_2,…,t_|T|} the set of all tags used in domain D. Each post p_j∈ P has associated a sequence of tags S(p_j)=(t_(1),t_(2),…,t_(l)), 1≤ l≤ 5, where t_(i) denotes the tag at position i in that sequence. We employ parentheses to distinguish between the positional information of a tag in a sequence and the indexes that identify elements t_i of the tag set T observed for domain D. §.§ Community Diversity We observed a high degree of variability for the selected domains in terms of Question Volume, Tag Space and Asker Volume. Figure <ref> shows a visual comparison of this variability, while Table <ref> shows general statistics for each domain. In terms of the amount of information created over a decade, only four domains have over 100K posted questions while the domains politics and history have merely 12K. If we consider the number of unique tags (#T) created, the domain movies ranks highest, as new movie titles are added to the tag set on weekly basis. To quantify tag re-use in each domain, we define post-per-Tag (PPT) as the number of posts available for one tag. We also observe that physics, askubuntu, and chemistry are domains with the most tag-reuse (PPT > 100) while movies domain (PPT < 5) shows frequent new tags. The number of posts having views over 100 (V>100) can be used to infer the popularity of posts in each domain. From the average number of tags (AvgT) per post, we can infer the need for detailed tagging in each domain. In travel, physics, and money, AvgT > 3 indicates users feel the need to assign more than 3 tags to clarify their questions. 
Also, the movie domain has the least AvgT (2.09), showing that only two tags on average are sufficient. Some domains like aviation, philosophy, history, movies, politics are not popular (#A < 10K in a decade). More statistics are in Appendix Table <ref>. §.§ Tag-Space Analysis We analyzed each domain's tag spaces into (1) General Tag Statistics (2) Tag Distributions (3) Tag-Post Coverage (4) Tag-Post Overlap. General Tag Statistics: The shortest tag in every domain is merely 1-3 characters long (c, air, 3g) while the longest tag is 22-35 characters long (valerian-city-of-a-thousand-planets, neurodegenerative-disorders). askubuntu has the lowest average tag length (8.17) while movies has the highest (13.66). We believe that the tags in askubuntu are short technical terms of a subtopic but movie names tend to be quite long in comparison and are often used as a part of a tag in the movie domain. Table <ref> shows the distribution based on the number of words of the tags. With the exception of movies, rpg, and scifi the majority of tags in all the domains consist of three or fewer words. The shortest and longest tags for each domain are presented in Appendix Table <ref>. Tag Distributions: There is a long tail in the distribution of tags in every domain (Figure <ref>). We observe that (1) most larger domains where the tag re-use is high, have smoother tag distributions like askubuntu, electronics, biology and (2) for some smaller domains like scifi, movies, rpg, the most frequent tag dominates the distribution. The rest of the distributions are shown in the Appendix Figure <ref>. Also, Table <ref> shows that the 100 most frequent tags (100Tag%) constitute a very small portion of the tag space for large domains. Post Coverage by Tags: We consider a tag to cover a post if it is present in the tag sequence of the post. Table <ref> shows the percentage of total posts that can be covered by the top n most frequent tags in each domain. We observe that the most frequent tag covers (Top1) at most 10% of posts in electronics, askubuntu, cooking, and biology domains but more than 40% in politics and rpg domains. More than 81% of all posts in each domain are covered by the 100 most frequent tags. Tag-Post Overlap: Figure <ref> shows whether the tags appear in user contents (question-title / question-body / answers) using two metrics: (1) single worded tag exact-match (EMS) and both single and multiple worded tag exact-match (EMM). We observe that in 8/17 domains, tags appear in more than 50% of post titles. The movie domain has more multi-worded tags than single worded tags (9.49% compared to 34.51%). Two science domains - biology and chemistry - have the lowest tag overlap (<30%) with the question title (T-EMS). When we include the question body, we observe, in 9/17 domains, question tags appear in more than 70% of posts. Finally, if we include every answer for each question, all the domains (except chemistry and biology) have their tags appear in more than 70% of the posts. The three larger domains (askubuntu, serverfault, and electronics) have more than 90% overlap. The overlap is lowest (56%) for the chemistry and biology domains. §.§ Tag Co-Occurrence Analysis For a post p_k, we define tag co-occurrence C_ij = {{t_i,t_j}:t_i,t_j∈ S(p_k), t_i≠ t_j} as a pair of tags {t_i, t_j} appearing in a post together irrespective of their positions. Soft Tag Hierarchy: From the tag co-occurrence analysis in the 17 domains, we find that there exists a soft hierarchy among the tag pairs. 
One of the tags indicates the main topic or area of the question and the other tag is often fine-grained which makes the question more specific. For these examples, the second tag is a sub-category of the first: (baking, bread) in cooking, (dnd-5e, spells) in rpg and (aircraft-design, wing) in aviation. In the science domain, similar examples of topic-subtopic relationships are (organic-chemistry, carbonyl-compounds) in chemistry and (hilbert-space, quantum-mechanics) in physics. The most frequently occuring tag-pair for each domain is shown in Table <ref>, in Appendix Table <ref> a more comprehensive set of the top-5 most frequent pairs per domain are shown. Tag Pair Post Coverage: We consider a tag-pair ({t_i,t_j}) to cover a post if the tag-pair occurs in the sequence of tags for that post in any position. Table <ref> shows the tag pair post coverage across the domains. We see around 10-20% of posts have only a single tag. Considering the most frequent 100 pairs we can cover 18-53% posts. Also, the most frequent tag pair can cover more than 10% of posts in money and rpg domains which shows that this tag-pair is extremely essential for these two domains. Tag Pair Distribution: On analyzing the distribution of top-50 frequently occurring tag pairs in each domain, we observe three patterns: (1) Smooth Distribution (2) Spike in Top-1 and (3) Spikes in top few pairs. Larger domains (askubuntu, serverfault, electronics) have smooth distributions. In smaller domains (movies, scifi, travel) few tag pairs dominate the distributions, indicating their popularity. More Details are available in Appendix Section <ref> and Figures <ref> and <ref>. §.§ Tag Pair Ordering We analyze the top-10 most frequent tag pairs in each domain to identify users' ordering preferences for tags. For a post p_k, O_ij=(t_(m),t_(n)) (and O_ji) are the tag ordering for the tag pairs t_i and t_j, where m and n are the positions of t_i and t_j respectively in the tag sequence S(p_k). We find that community users have a tendency to assign the more generic tags prior to the specific ones, for each domain by analyzing the occurrence of O_ij and O_ji. For example, aircraft-design always appears before wings out of 221 times they appear together in aviation, united-states appears before income-tax, 99.95% of times out of 3393 times they appear in the money domain and dnd-5e always appears before magic-items out of 1367 times in rpg. More examples are in the Appendix <ref>. §.§ Tag Position Stability We study the positional stability of tags i.e., whether some tags frequently appear in any particular position among the five allowed by . We consider ϕ_x (t) as the percentage of occurrence of a tag (t) in any position x, given by, ϕ_x(t) = c(t_(x))/∑_k=1^5c(t_(k))% where c(t_(x)) denotes the count of tag t in position x. We consider three stability thresholds (δ) - 80%, 90%, 99% (Figure <ref> and <ref>). For a tag t and position x, ϕ_x(t) > δ indicates that the tag is stable at that position. Q_X = { t ∈ T: ∑_x∈ Xϕ_x(t)≥δ} ST_X = |Q_X|/|T|% where Q_X is the set of tags that occurs more than δ in sets of positions defined by X and ST_X is the percentage of tags in a domain that are stable at positions X. In Figure <ref>, (rpg domain) for δ = 99%, we find ST_1,2=13.81 i.e. 13.81% of all tags in rpg are stable in positions 1 and 2 combined, and ST_3,4,5=15.06 are stable in positions 3, 4 and 5 combined. The rest of the tags are unstable. 
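A minimal sketch of how ϕ_x(t) and ST_X defined above can be computed, assuming each post is represented simply by its ordered tag sequence (the helper name and data layout are ours):

from collections import defaultdict

def positional_stability(tag_sequences, positions, delta=0.99):
    # tag_sequences: list of ordered tag lists (1 to 5 tags each).
    # positions: set of 1-based positions X, e.g. {1, 2} or {3, 4, 5}.
    # delta: stability threshold (e.g. 0.80, 0.90, 0.99).
    pos_counts = defaultdict(lambda: defaultdict(int))  # tag -> position -> count
    for seq in tag_sequences:
        for x, tag in enumerate(seq, start=1):
            pos_counts[tag][x] += 1

    stable = set()
    for tag, counts in pos_counts.items():
        total = sum(counts.values())
        # Sum of phi_x(t) over the positions in X.
        phi_X = sum(counts.get(x, 0) for x in positions) / total
        if phi_X >= delta:
            stable.add(tag)
    # ST_X as a fraction of the tag space; multiply by 100 for a percentage.
    return len(stable) / len(pos_counts)

# Example: share of tags appearing in positions 1-2 at least 99% of the time.
# st_12 = positional_stability(sequences, positions={1, 2}, delta=0.99)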
Also, the stable tags (Q_3,4,5) appearing in positions 3, 4, and 5 are finer-grained (or refined) tags that support the stable tags present in positions 1 and 2 (Q_1,2). The travel domain, has the highest number of stable tags appearing in positions 3,4, and 5 (Q_3,4,5) with δ=90%, 99%, 80% threshold showing that to make a question specific more than one refined tags is needed in this domain. We neither find any conclusive evidence of this stability within positions 1 and 2 (i.e. Q_1 and Q_2), nor within positions 3, 4 and 5 (i.e. Q_3, Q_4 and Q_5) individually. Table <ref> shows five randomly selected examples of position-stable tags in 17 domains. These positions account for more than 99% of the occurrences of these tags in their respective domains. § MODELING TAG PREDICTION Based on the observations from our tagging behavior analysis (Section <ref>), we develop an automated generic tag prediction approach for CQA platforms that predicts both generic and refined tags. The inherent commonalities in community diversity influence our decision to develop a common tag generation framework. The long tail in tag-space analysis guided us to develop a predictive-generative hybrid model. Tag co-occurrence analysis, tag-pair ordering, and tag-positional analysis on these domains led us to generate n tags from a common vocabulary of popular tags at certain positions and m related granular tags at the remaining positions. §.§ Majority Baseline Five most frequent tags per domain from training, data are considered as Top1-Top5 predictions for the test data in order (Hit@1 to Hit@5). We introduce this baseline as the top few tags cover a large number of posts in each domain (Table <ref>). §.§ Feature-Based Models We use linear multi-label classifiers using the one-vs-all strategy with tf-idf and bag-of-word features as two baselines since most of the feature-based tag prediction models use either of them as features. We hypothesize that these models can leverage the high amount of tag-post overlap (Figure <ref>). Here we train the models for each domain with classes corresponding to all the unique tags. §.§ MetaTag Predictor Model (MP) In this model (Figure <ref>), we first select a vocabulary (MetaTag) of tags based on a frequency analysis of the tag's post coverage per domain. Here, we consider popular tags as meta tags. We formulate this multi-label classification task as a language model mask-filling task using pre-trained roberta-base <cit.> as the base of this model. We train separately for each domain. Training: We tokenize the question title (Q_T) and body (Q_B) and hide the tags from the MetaTag vocabulary with a mask token, . These are concatenated and provided as input to the model. Q_T + Q_B +... This model is trained to predict those masks optimizing the prediction loss (ℒ_P) over all masked tokens (ℒ = ℒ_P). Here the number of mask tokens may vary based on the post (shown as above). ℒ is the total loss. Inference: We tokenize Q_T and Q_B, and append five tokens at the end, enforcing the model to predict exactly five tags for the post (the most probable tag for each position). This is because StackExchange allows a maximum of five tags to be associated with a question. This ensures that the model predicts the tags from the MetaTag vocabulary. Q_T + Q_B + §.§ Meta Refined Tag Predictor Generator Model (MRPG) This model (Figure <ref>) is similar to the MP model, with the additional ability to generate tags not present in the MetaTag vocabulary (OOV). 
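Since MRPG shares the MP model's mask-filling formulation, a rough sketch of that shared inference-time input construction is shown below, built on the Hugging Face transformers API. We assume each MetaTag has been added to the tokenizer as a single token (meta_tag_ids holds those token ids); the exact preprocessing and truncation handling in the full implementation may differ.

import torch
from transformers import RobertaTokenizerFast, RobertaForMaskedLM

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")  # fine-tuned weights assumed

def predict_meta_tags(title, body, meta_tag_ids, n_tags=5):
    # Append n_tags <mask> tokens to the post text and fill each one with the
    # most probable entry from the MetaTag vocabulary. In practice the body
    # should be shortened before appending the masks so truncation does not
    # remove them (our assumption).
    text = title + " " + body + " " + " ".join([tokenizer.mask_token] * n_tags)
    enc = tokenizer(text, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits                      # (1, seq_len, vocab)
    mask_pos = (enc.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    preds = []
    for pos in mask_pos:
        scores = logits[0, pos, meta_tag_ids]             # restrict to MetaTag vocab
        preds.append(meta_tag_ids[int(scores.argmax())])
    return tokenizer.convert_ids_to_tokens(preds)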
In a more general sense, here the motivation is to develop a model capable of predicting tags from a predefined set and generating novel tags as well. Training: Similar to MP model, we tokenize Q_T and Q_B, and replace the tags present in the MetaTag vocabulary with token. The rest of the tags (out-of-vocab or OOV) are tokenized and each token is replaced with a separate mask token, . A token is added to mark the boundaries (start and end) of these OOV tag tokens. The model is trained on joint loss (ℒ) of meta tag prediction head loss (ℒ_P) and refined tag generation head loss (ℒ_G) given by ℒ = ℒ_P + ℒ_G. Inference: Our goal is to encourage the model to generate a combination of meta and refined tags. Based on our tag-stability analysis (Section <ref>), tag pair ordering analysis (Section <ref>) and soft tag-hierarchy findings (Section <ref>), we train the MRPG model to predict the first two tags from the MetaTag vocabulary and to generate the remaining three tags based on the user texts. We append two tokens and a parameterized number of tokens with tokenized Q_T and Q_B. Q_T + Q_B +… Tag Generation: For each tokens, MRPG generates one token from the tokenizer vocabulary following a greedy approach by selecting the most probable token. We concatenate the generated tokens between two tokens and form a tag. We choose the most probable three generated refined tags based on our earlier data analysis and stack exchange tag limitations. However, for implementing this model to any other CQA platform, this number can be incremented or decremented based on the above-mentioned parameter. Also, there is no restriction in the model that will limit it to generating tags with more than 3 words. But they are rare for most of the domains, as can be seen from Table <ref>. More details are in Appendix Section <ref>. § EXPERIMENTS §.§ Settings We split our dataset into train-dev-test in the ratio 70:10:20 based on a random seed value. In our experiments we build our model on top of the base version (125M parameters) of pre-trained roberta language model. We remove html tags (since these tags are irrelevant to tags) from the user contents (question title and body) before tag prediction. We ran all experiments on 4 NVIDIA RTX A6000 GPUs (48GB GPU memory) with a batch size of 60 and an input length of 256. We use AdamW <cit.> optimizer, linear warmup scheduler, and a learning rate of 5e-5. §.§ Metrics: We define Hit@k (where k=1,…,5) as the percentage of posts where at least one predicted tags match with the actual tags for k predictions. We generate at most 5 tag predictions in line with 's upper limit of tags. This metric aligns with our motivation of maximizing the probability that a user will be able to find at least one tag among the recommended fixed number of tags. Hence we do not consider other metrics like precision and recall. §.§ Performance Analysis §.§.§ Baseline vs MP vs MRPG In Table <ref>, we compare our models with the baselines (mean of five different runs). The feature-based models, bag-of-word, and tf-idf models are able to achieve good performance for those domains where we found a high overlap between user texts and tags. We find that our MP model shows improvements over the majority baseline and the feature-based models by a substantial margin (p-values < 0.05 on Wilcoxon test) in Hit@5 performance. The MRPG model outperforms other methods in almost all the domains (significant improvements in 12 out of 17 domains). 
This is because it was able to generate tags outside the MetaTag vocabulary. In the biology domain, the MP model performs better than MRPG. This might be because of the high tag reuse in this domain. All the model performance numbers (Hit@k for k=1…5) are present in Appendix Table <ref>. In this table, we observe that for Hit@1 the MRPG model is always better than the MP model. §.§.§ Effects of Vocabulary Size Reduction We build the MetaTag vocabulary with 85% post coverage by tags (↓5%) and show the impact in Figure <ref>. We observe that the performance gap between MP and MRPG at 90% (Table <ref>) reduces as the vocabulary size decreases by 5% (Figure <ref>) across all domains. This is because the MP model suffers the most (2-5%) from this reduction. This is expected, since MP's performance (via the P-Head) depends on how large the MetaTag vocabulary is. The MRPG model, however, is robust to this vocabulary reduction, i.e., its performance (Hit@5) only changes in the range 0-1.13%, with the exception of the askubuntu domain (2.26%). Details are in Appendix Table <ref>. With the reduced vocabulary, the maximum performance difference is 9.12% (travel), since this domain has more refined tags (Section <ref>). The minimum difference is 1.06% (biology), where the MRPG model could not gain much advantage over MP because of high tag reusability and fewer refined tags. §.§.§ Head Contribution of MRPG Figure <ref> shows the contribution of the P-Head and the G-Head to the prediction performance (Hit@5 for the 90%-coverage vocabulary). We measure the percentage of posts for which (1) only the P-Head correctly predicted at least one tag and (2) only the G-Head correctly predicted at least one tag. The P-Head's contribution was highest (45-74%), since the MetaTag vocabulary is created from the popular tags in each domain. The G-Head was able to predict at least one tag correctly for an additional 4-13% of the posts. The effect of decreasing and increasing the MetaTag vocabulary size by a 5% change in tag-post coverage is shown in Appendix Table <ref>. We observe that the G-Head's contribution increases by up to 4% (on vocabulary size decrease) and decreases by up to 5% (on vocabulary size increase). We also find that the two heads combined were able to suggest non-overlapping tags in up to 33% of the posts. §.§.§ Out-of-Vocabulary Tag Generation Table <ref> shows MRPG's performance in predicting tags outside the MetaTag vocabulary for 90% tag-post coverage. The % Posts column shows the percentage of posts where MRPG correctly predicted at least one OOV tag; the contribution is lowest in two domains, movies (13.88%) and scifi (17.01%). The % All Tags and % OOV Tags columns show that MRPG was able to correctly predict a considerable number of OOV tags because of the generative head. §.§ Case Studies We compare tag predictions of our methods in Figure <ref>. MRPG was able to generate two more refined tags than MP in the askubuntu domain and was able to predict four out of five tags in the physics domain. Included below are examples for five other domains.

Domain: Physics
Title: Does matter become energy at the speed of light?
Gold: special-relativity, speed-of-light, mass-energy, matter
MP: special-relativity, energy, speed-of-light, mass
MRPG: special-relativity, speed-of-light, mass-energy, matter

Domain: Travel
Title: Nigerian citizen (university student) was refused a UK visit visa due to lack of funds and connection to school - how to resolve?
Gold: UK, visa-refusals, nigerian-citizens
MP: visas, customs-and-immigration, visa-refusals, paperwork, standard-visitor-visas
MRPG: uk, visa-refusals, nigerian-citizens

Domain: Music
Title: Piano tuning just under the absolute pitch
Gold: piano, tuning
MP: piano, tuning, maintenance
MRPG: piano, tuning, alternative-tunings, pitch, relative-pitch

Domain: Biology
Title: Why aren't all infections immune-system resistant?
Gold: evolution, microbiology, immunology, bacteriology
MP: evolution, microbiology, bacteriology, bacteriology, immune-system
MRPG: evolution, bacteriology, immunity, antibiotic-resistance

Domain: History
Title: Where to find a list of participants in The Crusades?
Gold: middle-ages, crusades
MP: middle-ages, middle-ages, europe, historiography
MRPG: middle-ages, sources, crusades

§.§ Adaptability of the MP & MRPG Architectures Both the MP and MRPG models can be adapted for use in other domains or in different public and private CQA platforms with specific tag-space restrictions. This can help in efficient question routing to area experts for faster response times, especially in private CQA platforms where the motivation of the community authority is to get queries resolved faster. Such adaptations can be done by customizing the MetaTag vocabulary based on prior behavioral analysis. Additionally, the number of meta and refined tags can be controlled based on the domain and platform requirements without changes in architecture (through a parameter). The MRPG model can also be used in platforms where a soft hierarchy of tags is known and routing requires the prediction of both top-level tags and leaf tags. In such a scenario, the MetaTag vocabulary could be populated with only top-level tags, allowing the model to generate lower-level tags (from the tail of the tag distribution) based on the user texts. With the combination of both types of tags, a query can be routed to a specific sub-area expert without overwhelming all the experts on a specific topic. § RELATED WORK Community QA platform analysis: There have been several studies on Folksonomy <cit.>, the practice of associating custom tags to questions in a social environment. Prior works include a large-scale analysis of tags and their correlation with other tags <cit.>, the tag distribution and tag occurrence of 168 SE communities <cit.>, and a quality analysis of SO <cit.>. User behavior analysis was done on Quora <cit.>, Yahoo Answers <cit.>, Google Answers <cit.> and StackOverflow <cit.>. However, here we perform a large-scale study of tags, tag occurrences, and tag relations for 17 domains to understand how they share common properties in spite of being quite diverse, an observation similar to prior work <cit.>. Community QA NLP Tasks: As the use of community QA platforms increased, and with it the volume of community-created data, various NLP approaches were used to address some of the issues of each platform and also to understand the behaviors of users. There have been various insights gathered through analysis of such communities. Similar question identification <cit.>, similar tag identification <cit.>, tag popularity prediction <cit.>, popular question prediction <cit.>, tag prediction <cit.>, detecting anomalous tag combinations <cit.>, CQA entity linking <cit.>, expert recommendation <cit.>, question routing <cit.>, identifying unclear questions <cit.>, automatic identification of best answers <cit.> and tag-hierarchy prediction <cit.> are some of the interesting tasks.
We perform a large-scale analysis with data spanning 10 years and across 17 diverse communities, and we focus only on the tag-prediction NLP task for CQA platforms. Text Tagging: There are some feature-based machine learning approaches <cit.> and some deep learning approaches <cit.> for tag prediction. Tagcombine <cit.> uses software object similarity, while TagStack <cit.> uses tf-idf features with a Naive Bayes classifier on StackOverflow texts. QUINTA <cit.> works on 6 domains using KNN, <cit.> on microblogging sites (Twitter) based on tweet similarity, Tag2word <cit.> on the math and StackOverflow domains using an LDA variant, and <cit.> on the BibSonomy and StackOverflow datasets based on tag co-occurrence and user preference. Among the deep learning methods, F2Tag <cit.> works on math domains based on visual and textual formula representations, ITAG <cit.> works on the math domain using an RNN, and TagDC <cit.> is based on software object similarity using an LSTM. Unlike the above-mentioned methods, we predict a soft hierarchy of tags (both meta and fine-grained tags). § CONCLUSION We perform an in-depth analysis of 17 domains in a popular CQA platform, Stack Exchange, focusing on various aspects of question tagging such as domain diversity analysis, tag-space analysis, tag co-occurrence analysis, tag order, and tag positional stability. We present multiple insights into user behavior in assigning tags to the questions they post. Based on these findings, we develop a tag prediction architecture that generates rarer and finer-grained tags in addition to popular tags from a pre-selected vocabulary. Our approach significantly outperforms feature-based baselines and also shows significant improvements in 12 domains when compared with the vocabulary-based approach. § LIMITATIONS The analysis and its findings presented here are limited to the 17 selected domains, chosen for their diversity. The findings may vary for the remaining 150 domains. Some of the findings (e.g. tag positional stability) may also vary for other CQA platforms that do not impose bounds on the number of tags. We use roberta-base and a smaller input size (256 tokens) for our experiments. With larger models and more context, the performance is expected to increase, since more context usually leads to better learning by larger parameterized models. We have ignored the answers in Stack Exchange for model training. We believe that indiscriminately selecting all answers as context for a question could be too noisy, and selecting one or more appropriate answers would add complexity in choosing between the fastest answer, the best answer, accepted answers, etc. We consider this a separate area of research and future work. We randomly sampled the data for each domain to create the train and test splits to show that our MRPG model is capable of both predicting and generating tags. Splitting with respect to timestamp would require tag temporal analysis and tag evolution, which we consider future work. § ETHICAL STATEMENT This work analyzes various aspects of the aggregate tagging behavior of users on a popular community question-answering platform, Stack Exchange. The data is publicly provided by Stack Exchange as an anonymized dump of all user-contributed content on the Stack Exchange network. The data is cc-by-sa 4.0 licensed, and intended to be shared and remixed. No specific user has been identified and no user-level information (user names, etc.) has been used for this work. We only used the Post.xml extracted from the dumps and do not use any user profile statistics.
The aggregate user behavior has been analyzed with respect to tagging and user-generated questions. Based on these findings a tag predictor model has been developed. The data has not been modified or redistributed as part of this research. ACM-Reference-Format § DOMAIN STATISTICS Table <ref> shows more details about domain diversity apart from those mentioned in the main section <ref>. We can see cooking and rpg are the domains with the least number of questions with no answers (<5%) which indicates the experts in these domains are very active. The science domains have more than 15% questions with no answers which shows that special knowledge is required to answer such questions. maxview and maxans show the maximum limit of users who viewed the questions and the maximum number of answers that a question has. no accept ans shows the percentage of posts that have not been accepted by the askers as answers. This gives an indication of whether askers are active and also whether the answers are satisfactory. § TAG LENGTH ANALYSIS Table <ref> shows the maximum and minimum length tags in each domain. We also see that the average tag length of the movies and physics domain are the highest. We find that often the movie names or physics topics are longer than three words leading to an increase in average tag length. § TAG CO-OCCURRENCE DISTRIBUTION ANALYSIS We analyzed the distribution of the top-50 frequently occurring tag pairs in each domain (Figure <ref>, <ref>). We observe three main patterns: (1) Smooth Distribution (2) Spike in Top-1 (3) Spikes in top few pairs. Larger domains like askubuntu, serverfault, electronics, and physics, have smooth distributions. Some of the smaller domains like politics, philosophy, and music also show this behavior, which we believe is because, in these domains, the questions have fine-grained topics. In domains like rpg, money, history, aviation, biology, chemistry, the tags of the most frequent tag pair that appears in abundance are generic in nature. Finally, in domains like movies, scifi, cooking and travel, few tag pairs dominate the distributions, indicating their popularity in such smaller domains. § TAG CO-OCCURRENCE EXAMPLES Table <ref> shows the most frequent tag pairs that appear in each domain. § TAG DISTRIBUTIONS Figure <ref> shows the distribution of top-100 most frequent tags in each domain. § TAG ORDERING EXAMPLE: Tables <ref>, <ref>, and <ref> show top-10 most frequently occurring tag pairs in each domain. On analyzing manually, we found that in most of the cases meta-tag appears before the refined tags. § TAG-POST OVERLAP: FULL TABLE Table <ref> shows the tag-post overlap in tabular form similar to Figure <ref> in Section <ref>. § DECODING PHASE OF THE MRPG MODEL We allow the model to generate the tags based on the input parameter maximum output length and then use few heuristics to filter out appropriate tag-tokens and choose the top-k tags. Our heuristics are based on prior knowledge about how a tag token should be like (1) a tag cannot start or end with a '-' (2) skip the punctuation tokens (3) ignoring adjacent repeated tags. We then combine the tag tokens between two tokens to form the final tag. We also calculate the top-k (k=1…5) most probable tags based on the combined probability scores of the tag-tokens. § FEATURE-BASED MODEL CONFIGURATIONS: For building both the tf-idf and bag of words features we consider unigram and bigram features with a minimum document frequency of 0.00009. We generate 200,000 maximum features. 
We consider log loss and search hyper-parameter space using alpha = [0.0001,0.001,0.00001] and penalty=[l_1, l_2] for the Stochastic Gradient Descent One versus rest classifier. For both the models, we find that l_2 penalty with 0.00001 alpha yields the best performance. § P-VALUES FOR HIT@5 Table <ref> shows the p-values when MRPG model's Hit@5 is compared with MP model. The significance test has been done by one-sided Wilcoxon Test<cit.>. For k=1,2,3,4 MRPG model's Hit@k shows significant improvements over MP model. MRPG model outperforms all other baselines significantly in Hit@k metrics for each value of k. § DETAILED TAG-POST COVERAGE % Table <ref> shows detailed tag-post coverage. § EFFECT OF USING ANSWERS We can use answers in those domains or organizations where we already have some answers posted and the tag-prediction approach is being deployed later. The motivation for using answers directly comes from our Tag-Post Overlap analysis in Table <ref>, where we can find a minimum overlap of tags in 70% of posts in 16/17 domains with the exception of chemistry and biology domains. In these two domains, the overlap increases by around 9-10%. In some domains, the overlap also increases to 91%.
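For reference, the feature-based baseline configuration described above corresponds roughly to the following scikit-learn pipeline. This is a sketch: the scoring metric, cross-validation folds and label encoding are our assumptions, and the original implementation may differ in detail.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# texts: list of "title + body" strings; tag_lists: list of tag lists per post.
mlb = MultiLabelBinarizer()
# Y = mlb.fit_transform(tag_lists)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=0.00009,
                              max_features=200_000)),
    ("clf", OneVsRestClassifier(SGDClassifier(loss="log_loss"))),  # logistic loss, one-vs-all
])

param_grid = {
    "clf__estimator__alpha": [1e-4, 1e-3, 1e-5],
    "clf__estimator__penalty": ["l1", "l2"],  # best found: l2 with alpha = 1e-5
}
# Scoring choice is an assumption; the paper reports Hit@k rather than F1.
search = GridSearchCV(pipeline, param_grid, scoring="f1_micro", cv=3)
# search.fit(texts, Y)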
http://arxiv.org/abs/2307.02618v1
20230705194209
SimSpin v2.5.1 -- Constructing synthetic spectral IFU cubes for comparison with observational surveys
[ "K. E. Harborne", "A. Serene", "E. J. A. Davies", "C. Derkenne", "S. Vaughan", "A. I. Burdon", "C. del P. Lagos", "R. McDermid", "S. O'Toole", "C. Power", "A. S. G. Robotham", "G. Santucci", "R. Tobar" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.IM" ]
In this work, we present a methodology and a corresponding code-base for constructing mock integral field spectrograph (IFS) observations of simulated galaxies in a consistent and reproducible way. Such methods are necessary to improve the collaboration and comparison of observation and theory results, and accelerate our understanding of how the kinematics of galaxies evolve over time. This code, , is an open-source package written in R, but also with an API interface such that the code can be interacted with in any coding language. Documentation and individual examples can be found at the open-source website connected to the online repository. is already being utilised by international IFS collaborations, including SAMI and MAGPI, for generating comparable data sets from a diverse suite of cosmological hydrodynamical simulations. § INTRODUCTION Astronomy is divided. Observers are collecting increasingly exquisite data using telescopes focused on the Universe around us. Theorists, meanwhile, are attempting to explain and predict the observable Universe from first principles using fundamental physics and progressively more complex computational models. The discussion between these parties is most commonly separated by paper preparation and publication cadence, while further data is collected and new simulations features are implemented and tested. To accelerate the conversation between these parties, and our understanding of galaxy evolution as a result, it is imperative that like-for-like comparisons between observational data and theory results are easy to produce in a consistent and reproducible manner. This is particularly important given ongoing advances in both observational and theoretical astrophysics. We have seen a revolution in spatially resolved kinematic studies of stars and gas with the development of the integral field spectrograph (IFS). Based on the principles developed for the TIGER and OASIS instruments <cit.>, which used lens-let arrays to collect spectra in a grid across the surface of galactic nuclei, further instruments such as SAURON <cit.> paved the way for studying the stellar motions of entire galaxy structures. Following the final data releases of SAMI <cit.> and MaNGA <cit.>, instruments with multi-object apertures that allow the collection of many galaxies during a single observation, astronomers now have access to spatially-resolved, kinematic observations of over 10,000 galaxies. These products give us the required statistics to examine the kinematic variety within the nearby Universe at a scale only imagined at the turn of the century. Availability of such data is due only to increase in resolution and scale with the commissioning of the Hector instrument in July 2022 <cit.>. Alongside these developments, only the most recent of the large-scale cosmological hydrodynamical simulations have sufficient resolution to explore individual galaxies on a case-by-case basis within a representative cosmological volume. Cosmological simulations such as EAGLE <cit.>, Magneticum Pathfinder <cit.>, HorizonAGN <cit.> and IllustrisTNG <cit.> have baryonic particles that represent of order 10^6 - 10^7 solar masses such that an individual resolved galaxy can be composed of 10^3 - 10^5 individual stellar particles. 
In comparison to the early cosmological models of Metzler1994Agalaxies and Katz1996CosmologicalTreeSPH, in which galaxies were represented by single particles or tens of stellar particles respectively, the structural parameters of individual galaxies can now be examined in a statistical manner. The numerical convergence, and hence the kinematics, of these galaxies will be affected by the smoothness of the underlying potential, specifically the number of dark matter particles within the simulation in question. In modern simulations, this number is generally minimised to reduce the computational cost of large volume codes which results in numerical disk heating (e.g. Ludlow2019NumericalHaloes, Ludlow2021SpuriousParticles, Wilkinson et al. 2022). Never-the-less, these models are an important test-bed for experimental models of galaxy evolution. They enable us to uncover the key ingredients necessary for recovering observed distributions. It remains important that our comparisons between observation and simulation are made consistently such that the impact of any changes to sub-grid physics and numerical methods can be properly contextualised. In recent years, we have seen a number of direct comparisons made between cosmological models and integral field spectroscopic observations - * Bendo2000Theremnants demonstrated the first example of post-processing idealised galaxy merger simulations into projected line-of-sight (LOS) velocity and dispersion maps. These were used for direct comparison with observations made around this time using long-slit spectra, in an effort to explore the possible formation paths of different kinematic morphologies. * The concept of utilising theoretical simulations to explore formation scenarios was further utilised by the results of the SAURON survey <cit.>. () produced 2D kinematic maps with the aim of exploring the formation mechanisms driving the range of kinematic morphologies discovered by the survey, e.g. counter-rotating cores and slow rotating ellipticals. Subsequently, as part of the Atlas3D survey <cit.>, Naab2014TheRotators demonstrated the first example of comparison with cosmological simulations from Oser2010TheFormation to explore the cosmological origin of variety in kinematic morphology. * A thorough study systematically comparing results from modern cosmological simulations and observational surveys was presented in vandeSande2019TheSimulations. The key purpose of this study was to demonstrate key areas of success and tension between various hydrodynamical simulations and IFS observational surveys. Although every attempt was made to ensure consistency, each simulation's data was compiled by the respective team and methodological differences exist between the samples as a result. For example, (1) the method of determining the projected ellipticity of a galaxy is done iteratively using the observational method of Cappellari2007TheKinematics at 1.5 times the effective radius (R_e) for the Magneticum simulation, while EAGLE and HorizonAGN were measured using the eigenvalues of the moment-of-inertia tensor within 1 R_e. (2) Various particle-per-pixel choices are made per simulation; HorizonAGN has a lower particle limit of 10 per pixel, while Magenticum uses Voronoi bins to increase this resolution to at least 100 particles per `pixel' <cit.>. Then in Foster2021MAGPIOverview, we saw the first example of a survey incorporating comparisons with theoretical simulations from the project conception. 
Since this time, the number of examples have increased exponentially, with Bottrell2022RealisticIFS, Nanni2022iMaNGAcubes and Sarmiento2023MaNGIAanalysis the most recent examples of mock observations produced for either simulation suites, or individual surveys. Other works, such as Poci2021Fornax3Danalysis and Zhu2022Massmass, have used such mocks as independent tests to explore the success of Schwarzschild models in reconstructing the full orbital distributions of galaxies. As the popularity of these comparisons increases, it is important that concrete methods of constructing our comparative data sets are established. Differences in constructing these data may introduce errors that carry through to later inference. It is important that methods are: (1) applicable to different simulations and telescopes, (2) that their operation is well-documented and tested, and (3) that this operation is open to extension and modification by the wider community, i.e. that the code is open source. In this paper, we present an updated version of the software . This code is open-source and fully documented with function descriptions and examples. is designed to be agnostic to the input simulation, with various cosmological hydrodynamical simulations supported including , Pathfinder, and . It is worth noting that, especially for open-source code, it is difficult to provide a static reference for the current capabilities of a given code-base. For that reason, this paper is just one form of reference for . When using this code, we advise you visit the website <www.github.io/kateharborne/SimSpin> for the most recent updates and code examples. If you use this code for your research, we ask that you cite this paper, as described in the citation file contained in the repository. Aim of this paper The code presented in this paper is a substantial body of work, extending the capabilities of the original code presented in Harborne2020SimSpinCubes. A new publication is warranted to record the new methodologies involved. In summary, new features of the code include: * the addition of spectral data-cube generation, such that mock data-products can be run through analysis pipelines in the same way as real IFS observations; * the analysis and incorporation of gas particles within mock data-products; * the addition of higher-order kinematic measurements in both gas and stellar mock-kinematic data cubes; * and the incorporation of multi-threading capabilities to aid speed-up of processing large numbers of galaxies from a cosmological simulation. In this paper, we present the new methodology behind each of these added features. For further documentation details, go to <https://kateharborne.github.io/SimSpin/>. This website contains a series of walk-throughs and examples, as well as the full documentation for each function. The information at these locations will continue to evolve with development time (the date at the end of each page will reflect the last time that document was modified). You can also check out the NEWS on GitHub[<https://github.com/kateharborne/SimSpin/blob/master/NEWS.md>] to see the latest updates to the code since the publication of this paper. As this code is continuously improving and extending to tackle new science questions, we have chosen to use traditional semantic versioning standards. This paper presents the methodology behind the code at the time of writing, with . For further information about the current version of the code, please visit the website for the live documentation. 
§ METHODOLOGY The key function performed by is the construction of a mock IFS data cube from a galaxy simulation input, as shown in Figure <ref>. In this section, we describe the methodology used for constructing such an observation. The process is broken into three steps: (1) preparing the input simulation; (2) preparing the mock observation settings (i.e. telescope and object projection); (3) building the mock data cube. This section does not aim to act as documentation for each function, rather to highlight the key methodological principles incorporated at each step. For specific documentation and examples, we refer the reader to the live and continuously-updated documentation website <https://kateharborne.github.io/SimSpin/>. The aim is for this code to be agnostic to the type of simulation supplied: smoothed-particle hydrodynamics, adaptive mesh refinement, or N-body. In all cases, you should receive a consistent and comparable output cube with metadata such that the whole product can be reconstructed with the information contained within the file itself and the input simulation. §.§ Creating an input file We begin with a function, , whose purpose is to prepare the simulation into a consistent format. This first step allows all other processes to occur in the same way for any type of input simulation or requested telescope. The function accepts an output simulation file (in either HDF5 or Binary format) and returns a binary () file in a universal format that can process. [basicstyle=68] make_simspin_file(filename, cores = 1, disk_age = 5, bulge_age = 10, disk_Z = 0.024, bulge_Z = 0.001, write_to_file = TRUE, output, overwrite = F, template = "BC03lr", # spectral template choice centre = NA, half_mass = NA, # alignment choice sph_spawn_n = 1) # smoothing choice Currently, directly supports simulation inputs cut-out from a range of cosmological models including , Pathfinder, and . However, the expected format is fully described within the documentation such that any HDF5 file with the required parameters and units can be read and processed by the code. Further details about how to format your simulation data for ingestion into can be found through the documentation website[<https://kateharborne.github.io/SimSpin/examples/generating_hdf5.html>]. We summarise the main important features here. Each particle within a simulation will have a number of existing tagged properties used to track their progress throughout the simulation. In order to make a observation, the key elements we require include positions (x, y, z), velocities (v_z, v_y, v_z), and masses for the stellar and/or gas components. In order to assign spectra to a given “stellar” particle, we also require ages, metallicities [M/H] and the initial mass of that star. In the case of hydro-dynamical simulations, these properties will be tracked throughout the evolution of the system and can be used directly from the output. For N-body models, in which we are just tracing gravitational effects, we specify these age and metallicity parameters for the bulge and disk components within the function to assign these values arbitrarily. A summary of these necessary particle properties will be tabulated and stored as list elements within the file for later data cube processing. The stellar and gas particle properties will be split into two separate data tables, in the case that gas is present in the input model. This formatting of the input allows the code specific metadata to be summarised in an efficient way. 
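As a purely illustrative example of preparing such an input, the sketch below writes a toy HDF5 file containing the kinds of stellar-particle arrays listed above. The group and dataset names used here are assumptions for illustration only; the authoritative field names and units are those given in the online documentation.

import h5py
import numpy as np

n_star = 1000
rng = np.random.default_rng(42)

# Toy arrays only: positions, velocities, masses, ages, metallicities and
# initial masses for each stellar particle.
with h5py.File("toy_galaxy.hdf5", "w") as f:
    stars = f.create_group("PartType4")            # group name is an assumption
    stars.create_dataset("Coordinates", data=rng.normal(0, 5, (n_star, 3)))
    stars.create_dataset("Velocity",    data=rng.normal(0, 100, (n_star, 3)))
    stars.create_dataset("Mass",        data=np.full(n_star, 1e6))
    stars.create_dataset("InitialMass", data=np.full(n_star, 1.2e6))
    stars.create_dataset("StellarAge",  data=rng.uniform(0, 13, n_star))
    stars.create_dataset("Metallicity", data=rng.uniform(0.001, 0.03, n_star))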
In the output of this file, we summarise the properties of the input simulation (e.g. the simulation type and location of the input file from which this product has been made); the parameter choices (e.g. the name and properties of the chosen spectral templates); as well as a record of the code version used to build the file and the date on which it was constructed. This aids the user in inspecting the status of a given file in a human-readable way. It also enables the user to re-create the same file with the same methodology in the future without needing to retain the code used to generate the file. Besides the universal formatting procedure and metadata addition, the main justification for creating a `'-formatted input file is to pre-compute the computationally expensive steps - (1) associating a spectrum with each particle, (2) to align the object within the field of view such that our observations are clearly defined and (3) smoothing gas particle or cell properties across their kernel. The galaxy within the output file can be observed multiple times once a single file has been constructed. However, there are some choices made at this stage that may depend on the type of observations you wish to make, as highlighted in the code snippet above. Further information about these choices will be discussed in this section. §.§.§ Spectral template choice There are currently three options to choose from for spectral templates used to associate spectra with individual particles, which are listed in Table <ref>. These prepared templates have been taken from ProSpect, a generative spectral energy distribution code <cit.>, for which these templates have been prepared using a Chabrier initial mass function <cit.>. We give this selection of options as a user may wish to focus on different science question with one set of templates better suited than the other, e.g. for observations using higher spectral resolution instruments, the high resolution template options will be necessary, but these may be avoided in other cases due to the increased memory requirements and computation. This suite of templates is also a reflection of those commonly used within the literature for exploring galaxy kinematics. When selected, the spectral templates within the chosen library are used to tag each stellar particle with an associated spectrum. Here, the requirement to select the correct template for the science in question is made clear. While the E-MILES templates are obviously higher spectral resolution (Δλ = 0.9Å with σ_LSF = 2.51Å in comparison to Δλ = 1-50Å with σ_LSF = 3Å for the BC03 templates), the grid of possible age and metallicity combinations is more sparse (with 636 combinations in comparison to the 1326 available for the BC03 templates). To aid memory load, we further choose to bin particles according to their age and metallicity in order to remove the requirement to save a single spectrum for every stellar particle in a given simulation. We justify this by considering the match up between the possible age and metallicity grid for each set of templates and the intrinsic resolution limitations within the simulations themselves. Taking the range of ages and Z for each simulation, we grid these parameters into bins equally distributed between the minimum and maximum in logarithmic space with widths of in age and in metallicity. We demonstrate the effect of this binning using an example galaxy from the simulation as shown in Figure <ref>. 
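A hedged sketch of the binning step described above is shown below. The bin widths are left as placeholder arguments, since the fixed values used inside the code are not restated here, and the helper is illustrative only.

import numpy as np

def bin_age_metallicity(ages, Z, dlog_age, dlog_Z):
    # Assign each stellar particle to a logarithmic age-metallicity bin.
    # dlog_age, dlog_Z: bin widths in log10 space (placeholders).
    log_age, log_Z = np.log10(ages), np.log10(Z)
    age_edges = np.arange(log_age.min(), log_age.max() + dlog_age, dlog_age)
    Z_edges   = np.arange(log_Z.min(),  log_Z.max() + dlog_Z,  dlog_Z)
    age_bin = np.clip(np.digitize(log_age, age_edges) - 1, 0, len(age_edges) - 2)
    Z_bin   = np.clip(np.digitize(log_Z, Z_edges) - 1, 0, len(Z_edges) - 2)
    # Particles sharing an (age_bin, Z_bin) pair share one interpolated spectrum.
    return age_bin, Z_bin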
Currently, the size of these bins are fixed within the code, but feasibly could be made adjustable to account for ranges in precision between simulations and additional template libraries. Individual age-metallicity bins for each group of particles do not necessarily line up with the grid of available templates within the chosen library. The assigned spectrum for a given age-metallicity bin are computed as a weighted interpolation of the four nearest template spectra. These are then stored in a list element of the file, with references to each “spectrum” row stored in the stellar particle data table. §.§.§ Alignment choice By default, the galaxy is aligned such that the semi-major axis of an ellipsoid is oriented with the x-axis of the reference frame, and the minor axis of the ellipsoid is oriented with the z-axis. This provides consistency for multiple observations made at a variety of inclination angles. However, in the case that a full cluster, galaxy group, or a particularly `clumpy' galaxy with lots of substructure is requested for observation, this alignment will be arbitrary. Hence, gives the user the option to define a single location around which to centre the system () and define a half-mass value at which the shape of the galaxy will be measured (). If unspecified, the code will evaluate the alignment about the median stellar particle position with an iteratively fit ellipsoid that contains half the total stellar mass in the input file. This is relevant when the input simulation contains more than a single galaxy, but the user would like to centre and align the observation on one of the specific systems in the file. This alignment is done using the method described in the work of Bassett2019GalaxyShapes, <cit.>. We first assume that the initial distribution of stellar particles is an ellipsoid with axis ratios p = q (i.e. a sphere, where p = b/a and q = c/a, with a, b and c representing the axes lengths in decreasing size such that a > b > c and p > q, by necessity). This ellipsoid is grown from the median position of all stellar particles within the file (or from the position specified by ) until it contains half the total stellar mass within the file (or the threshold mass described by the specified parameter input). Once this limit is reached, we use the stellar particles within the region to measure the reduced inertia tensor. The reduced inertia tensor, I, is computed: I_i,j = ∑_nM_n x_i,n x_j,n/r^2_n, where we perform this sum for n stellar particles within the ellipsoid with given positions, x_n, weighted by individual stellar particle masses, M_n, which may vary within the simulation and r_n, the 3D radius of that particle from the centre as described by, r_n = √(x_n^2 + y_n^2 / p^2 + z_n^2 / q^2). The eigenvalues and eigenvectors of this tensor, I_ij give the orientation and distribution of matter within the ellipsoid. Specifically, p and q are given by the square-root of the ratios between the intermediate and largest eigenvalues (b and a) and the smallest and largest eigenvalues (c and a) respectively. The ellipsoid is then deformed to match the distribution of stellar particles. The whole system is reoriented such that the major axis of the distribution identified is now aligned with the major axis of the ellipsoid. We then begin the procedure again, this time growing an ellipsoid with new a, b, and c reflecting the matter distribution of the stellar particles contained. This is repeated until the axis ratios p and q stabilise over ten iterations. 
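For illustration, the iterative shape measurement described above can be sketched as follows. This is written in Python purely for clarity (SimSpin itself implements this in R), it assumes the particle arrays have already been restricted to the half-mass ellipsoid region, and it simplifies the convergence test to a tolerance on p and q.

import numpy as np

def iterative_shape(pos, mass, n_iter=50, tol=1e-3):
    # pos, mass: stellar particles inside the current half-mass ellipsoid.
    # Returns axis ratios p = b/a and q = c/a, plus the rotation that aligns
    # the major axis with x and the minor axis with z.
    p = q = 1.0
    rot = np.eye(3)
    x = pos - np.median(pos, axis=0)
    for _ in range(n_iter):
        xr = x @ rot
        r2 = xr[:, 0]**2 + (xr[:, 1] / p)**2 + (xr[:, 2] / q)**2
        r2[r2 == 0] = np.inf
        # Reduced inertia tensor, weighted by particle mass.
        I = (mass[:, None, None] * xr[:, :, None] * xr[:, None, :]
             / r2[:, None, None]).sum(axis=0)
        evals, evecs = np.linalg.eigh(I)          # eigenvalues in ascending order
        a2, b2, c2 = evals[2], evals[1], evals[0]
        p_new, q_new = np.sqrt(b2 / a2), np.sqrt(c2 / a2)
        rot = rot @ evecs[:, [2, 1, 0]]           # a -> x, b -> y, c -> z
        if abs(p_new - p) < tol and abs(q_new - q) < tol:
            p, q = p_new, q_new
            break
        p, q = p_new, q_new
    return p, q, rot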
All particles within the input simulation file are aligned with the major axis, a, along the x-axis of the volume and the minor axis, c, aligned with the z-axis using this method. In the majority of cases, we find that this is suitable for finding the underlying shape of the galaxy in question and aligning the object within the frame. In a data set of 1835 galaxies taken from the IllustrisTNG50-1 simulation, a comparison between the alignment found by the function <cit.> and revealed that 92.3% (1693/1835) of the alignments agreed within ten degrees. Caution is advised when making mock observations of ellipticals or galaxies undergoing merger interactions, as a visual analysis of the farthest outliers found most fell under these categories. This allows us to correctly re-orient the galaxy to the user specified inclination and twist projection at the stage of building the mock data cubes. In cases where the semi-major axis is not well defined, this can be adjusted for purpose with some experimentation of the alignment parameters, and . §.§.§ Smoothing choice In the case of galaxies extracted from hydrodynamical simulations, a population of particles or cells trace the underlying distribution of a fluid (such as the gas in a galaxy). Properties of the fluid are computed across a volume, described by the smoothing `kernel' or cell size, centred at the given location. In order to ensure that we reproduce this smoothing in our data cubes and recover the underlying fluid properties appropriately within our images, we use an over-sampling method to visualise this kernel volume. This means that, when we generate a mock observation of the gas component, we must project particles with adaptive sizes onto a fixed grid of pixels. As discussed in Borrow2021ProjectingEnvironments, there are many methods of doing this. However, many of the simple methods result in inaccuracies and artifacts due to the projection of spherical kernels onto a rectangular grid. The smoothed particle hydrodynamic (SPH) kernel projection method is outlined in Borrow2021ProjectingEnvironments (a flavour of which is used in Dolag2005ThePlanck). We have taken the sub-sampling regime described in these papers and redesigned them for use in . Particles are treated as Monte Carlo tracers of the field. The basic features of this algorithm are stated below: * Each SPH particle read in contains information about its “smoothing length”, h, across which hydrodynamical equations have been computed for the fluid represented at that particle position. In the case of AMR codes, the equivalent information about the “cell size” is used. * We randomly sample tracer particles within a sphere centred on the true SPH particle position. * Each tracer particle is associated with a numerical weight as described by the relevant SPH kernel. All weights for an individual SPH particle will sum to one in order to conserve mass within the system. * These new tracer particles replace the original SPH particle. They gain all the properties of the original particle, but a weighted fraction of the total mass according to the weight assigned using the kernel. This results in a new table of particle properties. The new table will contain times as many rows as the original component of SPH particles. However, once this table has been computed, the processing of these observations with will be very quick due to the 𝒪(n) computation used to grid particles into pixels. 
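The oversampling scheme amounts to replacing each gas particle with weighted tracers drawn inside its smoothing sphere; a rough Python illustration is given below. SimSpin's actual implementation is in R and selects the simulation-matched kernels described next; the normalisation constant of the kernel is omitted here because the tracer weights are renormalised anyway.

import numpy as np

def wendland_c2(u):
    # Spherically symmetric Wendland C^2 kernel shape, u = r/h, support u < 1.
    w = (1.0 - u)**4 * (4.0 * u + 1.0)
    return np.where(u < 1.0, w, 0.0)

def oversample_particle(centre, h, mass, n_spawn, rng):
    # Replace one SPH particle by n_spawn tracers whose weights sum to 1.
    vec = rng.normal(size=(n_spawn, 3))
    vec /= np.linalg.norm(vec, axis=1, keepdims=True)
    r = h * rng.random(n_spawn)**(1.0 / 3.0)      # uniform sampling in the sphere
    tracers = centre + vec * r[:, None]
    w = wendland_c2(r / h)
    w /= w.sum()                                  # conserve mass across tracers
    return tracers, mass * w                      # each tracer carries a mass fraction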
For this reason, we perform the smoothing at the point of making the input file, rather than at the step. When using this option, we attempt to ensure that the projection kernel corresponds to the kernel used for the SPH calculations within the simulation. For supported hydrodynamic simulations, we provide a smoothing kernel to best match the one used in the original model. These are selected automatically based on the metadata contained within the input file. Most SPH simulations use a flavour of the Wendland kernel outlined in Wendland1995PiecewiseDegree. The C^2 Wendland kernel, used in <cit.>, is a spherically symmetric kernel, W(r,h), which has the form: W(r,h) = 21/2 π(1 - r/h)^4 (4r/h + 1), if 0 ≤ r/h < 1 0, if r/h ≥ 1 Here, r denotes the distance from the particle to another position at which the weight is calculated and h denotes the smoothing length of a particle. For each simulation, this smoothing length, h, is a value given by requiring that the weighted number of nearest neighbouring particles, N_neigh, is a pre-defined constant: N_neigh = 4 π h_i^3/3∑_j W(|x_i - x_j|, h_i ). For the simulation, N_neigh = 48, but this will vary for each SPH simulation. This smoothing length is computed for each particle throughout the simulation, as this value will obviously be dependent on the the local number density of particles. The smoothing length, h, is commonly stored as a parameter within the output files. We can use this parameter to then determine the radius across which each individual gas particle should be over-sampled. The C^6 Wendland kernel used in <cit.> has the form: W(r,h) = 1365/64 π(1 - r/h)^8 × (1 + 8 r/h + 25 (r/h)^2 + 32 (r/h)^3), if 0 ≤ r/h < 1 0, if r/h ≥ 1 In , the smoothing lengths have been computed with N_neigh=64 <cit.>, but again, the raw h for each particle is given in the output for this simulation. Finally, in the case of SPH simulations, and for visualisation of AMR/cell model implementations, we use the M4 cubic spline kernel to smooth gas distributions across our image grid. In particular, for mesh-based codes, we do not have a smoothing length for a given cell. As an approximation, we use the quoted cell density and mass to compute an “effective” smoothing length at a position at the centre of the cell (at the position where cell properties are given). h_i = 2 3/4 π( M_i / ρ_i)^1/3 where the effective smoothing length, h, for a given cell, i, is the mass within that cell, M_i, divided by the density of the cell, ρ_i. A spherical distribution is assumed so that the system can be observed fairly from any angle without observing discontinuities at low density locations. We then use a simplest appropriate kernel, the M4 cubic spline kernel, as an approximation of the behaviour of the gas within a given cell: W(r,h) = 1/4 π((2 - r/h)^3 - (1 - r/h)^3) , if 0 ≤ r/h < 1 0, if r/h ≥ 1 This approximation is used for visualisation of and simulations. Once this file is created for one simulated object, it can be used many times for observations. This file contains all of the multi-dimensional information from the simulation file, with an additional set of tagged properties for to construct each cube. §.§ Initialising the telescope and observing strategy acts as a virtual telescope wrapper. You can choose to observe your galaxy model in a variety of different ways with any integral field unit (IFU) instrument. This requires you to set two distinct groups of properties - the properties of the instrument used to take the observation i.e. 
the , and the properties of the object under scrutiny i.e. the . The properties are split in this way to enable a suite of observations to be generated in a straightforward manner. It is common that an observer will wish to observe a suite of galaxies using the same telescope, but may want to iterate over a number of projected viewing angles, distances or seeing conditions. Hence, we have split the description classes for the observing telescope and observed object properties into two. We describe the mathematics behind the functions in the sections below, but direct the reader to the specific documentation pages[<https://kateharborne.github.io/SimSpin/docs/documentation>] for up-to-date, detailed examples of running each function. §.§.§ Telescope choice [basicstyle=68] telescope(type="IFU", fov=15, aperture_shape="circular", wave_range=c(3700,5700), wave_centre, wave_res=1.04, spatial_res=0.5, filter="g", # luminosities output in this band lsf_fwhm=2.65, signal_to_noise = NA) # target signal-to-noise ratio has a number of predefined IFU telescopes, for which the required field-of-view, spectral and spatial resolutions have been taken from the available literature. In Table <ref>, we describe the values associated with these defaults and their appropriate references. For a number of these choices, there are further selections that can be made. For example, the “MaNGA” telescope has a variable field-of-view size that the user can select. If a specific telescope “type” is not covered by the available options, the parameters can be fully specified by using the . This requires the user to describe the remaining parameter options of the telescope, including the field-of-view size in arcseconds, the shape of the field-of-view, the wavelength range and central wavelength in Å, the wavelength resolution in Å, the spatial resolution in arcseconds, and the associated line-spread function (LSF) of the instrument in Å. Two parameters can be further altered by the user when using the predefine telescope types: the filter, and the minimum level of signal-to-noise. The available filters in include the SDSS u, g, r, i and z filters <cit.>. Each of these data tables are stored as an file optimally compressed using compression such that they are lazy loaded with the package. The associated documentation gives the location from which these data have been collected. As with the predefined telescope types, the list of available filters may grow in time. Any updates will be listed on the live documentation website. These filters are then ready to be used in the function. The signal-to-noise specified will be implemented in spectral and kinematic data cubes when a minimum signal-to-noise value is specified. Following the mathematical implementation of noise to cubes given in Nanni2022iMaNGAcubes we similarly scale the level of Gaussian perturbation added to each spectrum based on the total flux measured within an integrated spectrum: dF_i/F_i = √(F̃)/S/N ×√(F_i), where dF/F is the fractional perturbation of flux within a given spaxel i, S/N is the requested parameter given in the function, and F̃ is the median pixel flux from the observation. At each spaxel, we draw a random number from a Gaussian distribution, scaled by this dF/F, and add this perturbation as a function of wavelength to each spectrum. For kinematic cubes, this perturbation is applied to the observed fluxes alone. With the elements defined, parameters can be precomputed. 
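As an illustration of the signal-to-noise scaling above, the sketch below applies the Gaussian perturbation spaxel by spaxel. Whether the median flux and the random draws are taken per wavelength channel or per integrated spectrum is our interpretation of the description, so treat this as a sketch rather than the package's exact procedure.

import numpy as np

def add_noise(cube, target_sn, rng):
    # cube: array of shape (n_x, n_y, n_wave). F_i is the integrated flux of
    # spaxel i and F_med the median of those integrated fluxes (assumption).
    flux = cube.sum(axis=2)
    f_med = np.median(flux[flux > 0])
    with np.errstate(divide="ignore", invalid="ignore"):
        frac = np.sqrt(f_med) / (target_sn * np.sqrt(flux))   # dF/F per spaxel
    frac[~np.isfinite(frac)] = 0.0
    noisy = cube.copy()
    for i, j in np.ndindex(flux.shape):
        sigma = frac[i, j] * cube[i, j, :]        # scatter as a function of wavelength
        noisy[i, j, :] += rng.normal(0.0, 1.0, cube.shape[2]) * sigma
    return noisy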
The number of spatial pixels required to fill the diameter of the field-of-view (FOV) is computed and stored for gridding purposes. When combined with the coordinate information for a simulation at the cube-building stage, we can use a simple equation to label each particle with a corresponding pixel in the FOV of the telescope. We bin the particle data along the x- and y-axes respectively (x_bin and y_bin), labelling each bin with an integer value from 1 to the number of pixels per side, N_pix, and then combine these using pixel ID = x_bin + (N_pix × y_bin) − N_pix, such that every pixel within the FOV has a unique identifier which can be associated with each particle within the model at the cube-building stage. Checks are also performed at this stage such that a user does not waste time loading a large simulation file only to have the code fail due to a filter mismatch. We ensure that the requested filter overlaps with the telescope wavelength coverage, and the centre of this wavelength range, if not provided, is computed as the centre of the given range. A further check is made for the variable parameters, such as the MaNGA field-of-view, to confirm that the requested value is one of the available bundle sizes (i.e. 12, 17, 22, 27 or 32). If not, the closest value larger than the requested parameter will be taken by default and a warning will be issued. Similarly, if a user asks for a MUSE cube with a field-of-view greater than 60", the value will be reduced to 60". For MUSE, users are also able to specify the wide-field mode (WFM), in which spaxels are 0.2", or the narrow-field mode (NFM), in which spaxels are 0.025". If another value is suggested, the function will default to WFM (as this is the most computationally efficient option due to the smaller number of spaxels per arcsecond) and issue a warning to the user that this has occurred. Further default telescope types will be added in the future to keep up with ongoing developments. The live documentation will reflect any changes made. §.§.§ Observation strategy choice

observing_strategy(dist_z = 0.05,        # projected distance
                   inc_deg = 70,         # projected angle
                   twist_deg = 0,
                   pointing_kpc = c(0,0),# telescope centre
                   blur = T,             # seeing conditions
                   fwhm = 1, psf = "Gaussian")

Another necessary ingredient for specifying a mock observation is the description of the conditions in which the model galaxy is observed. How far away is the object? How is it projected on the sky? How severe are the seeing conditions? These properties are specified using the function. It is expected that a user may wish to observe the same galaxy at a range of distances, inclinations, and seeing conditions, while the overall properties of the telescope are more likely to remain fixed.[It is also possible to iterate over a range of these parameters to produce a series of observations within R, an example of which can be found at <https://kateharborne.github.io/SimSpin/docs/observing_strategy.html>.] To describe the distance to the observed galaxy model, the user may specify a redshift distance, a physical luminosity distance in Mpc, or an angular scale distance in kpc per arcsecond. When any one of these parameters is specified, the other two are calculated through the S4 class using the methods <cit.> implemented in the R package [<https://github.com/asgr/celestial>].
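As a rough illustration of how these three distance specifications relate to one another, the short R sketch below converts a luminosity distance and redshift into a projected angular scale. It is a simplified stand-in for the cosmology routines used by the package (which relies on the celestial package), and the function name is hypothetical; the only assumptions are the standard relations D_A = D_L/(1+z)^2 and 1 arcsec = π/648000 rad.

kpc_per_arcsec_from_lumdist = function(d_lum_mpc, z) {
  d_ang_mpc  = d_lum_mpc / (1 + z)^2   # angular diameter distance [Mpc]
  arcsec_rad = pi / (180 * 3600)       # one arcsecond in radians
  d_ang_mpc * 1e3 * arcsec_rad         # projected scale in kpc per arcsecond
}
kpc_per_arcsec_from_lumdist(d_lum_mpc = 230, z = 0.05)  # roughly 1 kpc/arcsec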
The inclination and twist parameters define how the model is projected onto the sky. Following the function, the system is aligned such that the major axis of the ellipsoid (a) is aligned with the x-axis, while the minor axis (c) is aligned with the z-axis. With this knowledge, we can then use basic trigonometry to incline the ellipsoid to a requested inclination and twist. The inclination of the object describes the level of rotation about the x-axis, defined in degrees. We use the definition that an inclination of 0 degrees is a face-on system, while 90 degrees is edge-on. The following mathematics then gives us the coordinates at which the particles would be observed in the y- and z-axis frames: y^{obs}_i = -y_i \sin(\theta_{\rm inc}\,\pi/180) + z_i \cos(\theta_{\rm inc}\,\pi/180), z^{obs}_i = y_i \cos(\theta_{\rm inc}\,\pi/180) + z_i \sin(\theta_{\rm inc}\,\pi/180), where θ_inc is the requested inclination in degrees, y^obs_i and z^obs_i denote the observed y and z coordinates of particle i in the rotated frame, and y_i and z_i are the y and z coordinates in the original, fixed ellipsoid frame. The same projections are used for the velocities observed along the rotated y- and z-axes. Similarly, the "twist" of the object is described as the rotation about the z-axis of the ellipsoid, i.e. the azimuthal projected rotation on the sky, also defined in degrees. Here, a twist of 0 degrees is an object viewed with the major axis, a, parallel to the x-axis of the projection, while 90 degrees would be the ellipsoid viewed from the side, such that a is now aligned with the y-axis instead. This is computed using similar trigonometric projections to those above, x^{obs}_i = x_i \cos(\theta_{\rm twist}\,\pi/180) - y_i \sin(\theta_{\rm twist}\,\pi/180), y^{obs}_i = x_i \sin(\theta_{\rm twist}\,\pi/180) + y_i \cos(\theta_{\rm twist}\,\pi/180), where θ_twist is the requested twist in degrees and, as above, x^obs_i denotes the observed x coordinate of particle i in the rotated frame, and x_i is the x coordinate in the original, fixed ellipsoid frame. The same equations are used to project the particle velocities. These projections are performed in the order discussed, i.e. the galaxy ellipsoid is inclined on the sky using Equations <ref>-<ref> and then twisted using Equations <ref>-<ref>, such that the object can be observed from any angle across the surface of a sphere. This is important for exploring the effects of inclination and projection on the recovery of galaxy kinematics. A sketch of these rotations is given at the end of this subsection. The final specification describes the level of atmospheric seeing via two parameters, describing the shape and the full-width at half-maximum (FWHM) size of the point-spread function (PSF) smoothing kernel respectively. We compute and store the kernel shape here, to be convolved with each image plane of the observed cube at a later stage. Two options are currently available to the user, where the PSF may be described by a "Gaussian" kernel, or a "Moffat" kernel <cit.> which has a Normal-like distribution at the centre with more extended wings. These are taken from the parameterisation in the R package <cit.>. A Gaussian kernel is parameterised as: I(R) = I_0 \exp\left(-\frac{R^2}{2\sigma^2}\right), where σ = FWHM / (2\sqrt{2\ln 2}), I_0 is the peak intensity at the centre and FWHM is the value specified in the function. A Moffat kernel is parameterised as: I(R) = I_0 \left[1 + \left(\frac{R}{R_d}\right)^2\right]^{-c}, where R_d = \frac{\rm FWHM}{2\sqrt{2^{1/c} - 1}} and c = 5, in line with the common defaults. We ensure that the kernel is normalised to 1 such that convolution with the kernel results in suitable flux conservation. These kernels are then stored for use in the blurring step later on. Having specified the nature of the observation, these two descriptions are combined to summarise the properties of the resulting observation. This is stored as metadata in the final data cube produced. Storing the data in this way ensures that the same file can be produced at a later time using the information stored in the output cube alone. With these parameters specified, we can now go about building our mock observation.
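To make the order of these operations concrete, the following minimal R sketch applies the inclination and then the twist rotation to a set of particle coordinates. It illustrates the equations above rather than reproducing the package implementation, and the function and variable names (incline_and_twist, part) are hypothetical.

incline_and_twist = function(part, inc_deg, twist_deg) {
  # part is a data.frame with columns x, y, z (velocities would be rotated identically)
  inc = inc_deg * pi / 180
  twi = twist_deg * pi / 180
  # Step 1: incline about the x-axis (0 deg = face-on, 90 deg = edge-on)
  y_obs = -part$y * sin(inc) + part$z * cos(inc)
  z_obs =  part$y * cos(inc) + part$z * sin(inc)
  # Step 2: twist about the z-axis (azimuthal rotation on the sky)
  x_new = part$x * cos(twi) - y_obs * sin(twi)
  y_new = part$x * sin(twi) + y_obs * cos(twi)
  data.frame(x = x_new, y = y_new, z = z_obs)
}
incline_and_twist(data.frame(x = 1, y = 0, z = 0), inc_deg = 60, twist_deg = 90)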
§.§ Building a data cube Once the observing telescope and the properties of the underlying galaxy have been specified, we can go about building a mock observation. Within the package, we present the user with an option at this stage. Either a series of kinematic maps can be generated from the line-of-sight velocity distributions at each spaxel, using the 3D velocity information present for stars, gas or just star-forming, cold gas in the simulation; or a spectral cube can be created using the stellar spectra themselves, shifted in wavelength space to reflect those underlying velocities and the projected redshift distance. The resulting spectral cube needs to be run through observational software to generate kinematic maps, and as such is useful for exploring the reliability of reduction pipelines. This choice is specified in the input parameters by the keyword:

build_datacube(simspin_file, telescope, observing_strategy,
               method = "spectral",  # "velocity", "gas", "sf_gas"
               verbose = F, write_fits = F)

The behaviour of the code will be different depending on the method chosen, though the outputs of the spectral and velocity methods are equivalent once run through an observational fitting code such as pPXF <cit.>. We demonstrate this equivalence in the results section. Despite the differences in the type of output, the structure and format followed for the two methods is also consistent. Whether we are building a kinematic data cube or a spectral one, the process of re-projecting the model galaxy to a given orientation (using the information provided by the observing strategy) and gridding particles into the necessary pixel locations is done in the same way (using the telescope-specific information) before splitting off into method-specific functions. The output will always include five list elements containing (1) the observed data cube, (2) the metadata table recording the details of the observation, (3) the raw particle property images for reference, (4) the observed kinematic property images, and (5) the observed inverse variance cube (1/noise^2). The two image elements ("raw" and "observed") will vary in length depending on the type of observation requested. These are summarised in each description below. §.§.§ Spectral data cubes If method = "spectral", the function will return a data cube containing spatial information along the x- and y-axes of the cube, and wavelength information along the z-axis. As particles have been allocated to individual pixels within the FOV, we can parallelise over each pixel and perform the mathematics at each pixel in turn, as demonstrated in Figure <ref>. Each stellar particle has been assigned a spectrum using the template described within the function in <ref>. These spectra are at the resolution of the templates from which they have been drawn; e.g. with E-MILES templates, these spectra will have a wavelength resolution of Δλ = 0.9 Å and a spectral resolution of 2.51 Å. The template spectrum is weighted by the particle's stellar initial mass to give the luminosity as a function of wavelength. We shift the wavelength labels to λ_obs-z = λ (1 + z) to account for the input redshift of the system. Within each pixel, we then further shift the wavelength labels according to the LOS velocity of each individual particle, λ_obs = λ_obs-z exp(v_LOS/c). At this stage we are still just modifying the raw spectral templates. Once each template is both z-shifted and v_LOS-shifted, we then interpolate these spectra onto the wavelengths observed by the requested telescope.
This is done using a spline function in which an exact cubic is fitted using the method described by Forsythe et al. (1977). Next, the individual particle spectra are summed column-wise to produce the observed spectrum at that pixel. This procedure is repeated for every pixel within the FOV and then the spaxels are combined into a volume to construct a 3D data cube, with spatial dimensions along the x- and y-axes and wavelength information along the z-axis. If a point-spread function (i.e. atmospheric blurring) has been specified, we then perform a spatial 2D convolution across each x-y plane in the cube. The convolution kernel will have a shape and width as described in <ref>: F_obs(λ) = F(λ) ⊛ PSF. Following the spatial convolution, we also need to convolve the summed spectrum with a Gaussian kernel of width Δλ_LSF, mimicking the effect of the spectral resolution of the instrument, where Δλ_LSF is the root-square difference between the LSF of the telescope and that of the redshifted templates. The template spectra associated with a single particle have an intrinsic spectral resolution, λ_LSF^template. Of the templates included within this package, these resolutions range from 2.51 Å to 3 Å in the rest-frame. This spectral resolution represents a "minimum dispersion" due to the instrument with which the template was observed or modelled. When the template spectrum is moved to greater redshift, the spectrum is stretched in wavelength space. When we wish to model a galaxy at redshift z, the intrinsic spectral resolution of the templates must also be adjusted to this new redshifted spectrum. At higher redshift, the minimum dispersion we can detect with these templates becomes larger, as the wavelength space is broadened. Hence, we must account for this when mimicking the effect of using our "mock" telescope with its spectral resolution, λ_LSF^telescope. The value of this resolution is fixed by the telescope and is assumed constant with redshift. However, the templates which we have redshifted to some distance, z, will now have some intrinsic spectral resolution, λ_LSF@z^template = λ_LSF^template (1 + z). To match the spectral resolution of the observing telescope, we then need only convolve our templates with a Gaussian whose width is the root-square difference between the telescope and the redshifted templates, i.e. Δλ_LSF = √((λ_LSF^telescope)^2 - (λ_LSF@z^template)^2). This is computed using the metadata information contained in the input SimSpin file; the user simply needs to specify the resolution of the observing telescope. Finally, we add the level of noise requested to each spectrum, as described in the function. The inverse variance of this noise (1/noise^2) is also returned to the user under the corresponding list element. If no noise is requested, this list element will be returned to the user as NULL. The resulting "observed" spectral cube is returned under its own list element. A summary of the observation details is tabulated and returned under the metadata list element. At each pixel, we also measure a number of particle properties, including the total number of particles in each location, the total particle flux, the mean and standard deviation of the LOS velocity, the mean stellar age and the mean stellar metallicity. This information is stored as an image returned to the user under the raw images list element. All of these details can optionally be saved to a FITS file that contains each of these elements in subsequent HDU extensions for later processing with observational pipelines.
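The LSF matching described above amounts to a single root-square difference; the minimal R sketch below makes this explicit. It is an illustration rather than the package code, and the function and argument names are hypothetical; all quantities are assumed to be FWHM values in Å.

# Width of the Gaussian needed to degrade a redshifted template to the
# telescope's spectral resolution.
lsf_matching_width = function(lsf_telescope, lsf_template, z) {
  lsf_template_at_z = lsf_template * (1 + z)          # template LSF stretched by redshift
  diff2 = lsf_telescope^2 - lsf_template_at_z^2
  if (diff2 < 0) {
    warning("Template LSF exceeds the telescope LSF at this redshift; no convolution applied.")
    return(0)
  }
  sqrt(diff2)
}
lsf_matching_width(lsf_telescope = 2.65, lsf_template = 2.51, z = 0.05)  # ~0.28 Angstrom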
Worked examples of building these cubes and saving them to FITS can be found at the documentation website.[<https://kateharborne.github.io/SimSpin/examples/examples>] §.§.§ Kinematic data cubes If method = "velocity", the function will return a data cube containing spatial information along the x- and y-axes of the cube, and velocity information along the z-axis. A visual representation of this process is outlined in Figure <ref>. Given the wavelength and spectral resolution of the underlying telescope, we can compute the effective velocity sampling of a given instrument as: Δv = c Δlog(λ)_min, where Δlog(λ)_min represents the smallest wavelength channel of the given instrument in log space and c is the speed of light. As in the previous methodology, we can use the gridded FOV to perform the required mathematics on a pixel-by-pixel basis. For each pixel, we take each contained stellar particle. Each stellar particle has been assigned a spectrum using the template described within the function in <ref>. This spectrum is multiplied by the initial mass of the stellar particle and re-gridded onto the wavelength scale of a given telescope to give the luminosity at all wavelengths measured. From this spectrum, the luminosity of that particle can be computed. Each particle also has a mass, which can be used to weight the kinematics in place of the particle luminosity, if requested. Each particle's velocity is binned along the velocity axis dependent on the wavelength (and associated velocity) resolution as specified in Equation <ref>. This distribution is weighted by either the particle's luminosity in a given band (given by passing the observed spectrum through the specified band-pass filter) or the mass of the particle. This leaves us with a line-of-sight velocity distribution (LOSVD), weighted by luminosity or mass, for each spatial pixel at the resolution of the respective telescope. This process is repeated for every spatial pixel. At each pixel, as in the spectral mode case, we also measure a number of the raw particle properties, including the total number of particles, the mean and standard deviation of the population of particle velocities, the mean stellar age and the mean stellar metallicity. These are returned to the user as 2D named arrays embedded within the raw images list element. If atmospheric blurring is specified, convolution with the kernel selected and described in <ref> is performed across each spatial plane of the kinematic data cube following its construction, this time as a function of the velocity channels rather than the wavelength channels: F_obs(v) = F(v) ⊛ PSF. If requested, noise is added per spaxel as described in the function, applying dF/F as a function of velocity, rather than wavelength. We save a volume of the added noise as an inverse variance velocity cube (1/noise^2) and return this to the user under its own list element. The final, "observed" 3D array structure, containing spatial planes of the data in the x-y plane at subsequent velocity channels along the z-axis, is returned to the user under its own list element. From this kinematic data cube, we also compute a number of observed kinematic images. At each pixel in the cube, we now have a LOSVD sampled at the same resolution as the wavelength resolution of the telescope.
This distribution is fit with a Gauss-Hermite function of the form: L(ω, h_3, h_4) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{\omega^2}{2}\right)\left[1 + h_3 H_3 + h_4 H_4\right], where ω = (v_i - V)/σ, H_3 = \frac{1}{\sqrt{6}}\left(2\sqrt{2}\,\omega^3 - 3\sqrt{2}\,\omega\right), H_4 = \frac{1}{\sqrt{24}}\left(4\omega^4 - 12\omega^2 + 3\right), where v_i are the observed velocity channels, V and σ are the first and second order moments of the LOSVD, and h_3 and h_4 represent the third and fourth order terms of the Hermite expansion <cit.>. This fit is performed using the quasi-Newton method published simultaneously by Broyden, Fletcher, Goldfarb and Shanno in 1970 (known as BFGS) <cit.>, using the minimisation function provided in base R. We compute the observed LOS velocity, dispersion and higher-order kinematics h_3 and h_4 on a pixel-by-pixel basis through this fit. If a PSF has been specified, this fit is performed on the spatially blurred cubes and the resulting images will have this blurring effect incorporated (unlike the raw particle properties, which will be returned as a summary of the underlying simulation). Each parameter is stored as a 2D array and returned to the user under the observed images list element. The residual of this fit to the LOSVD is also returned to give an understanding of how well these returned parameters describe the true underlying distribution. This is also output as a 2D array under the same list element as the observed images. As in the spectral mode case, the returned observation can be written to a FITS file for later processing. Each of the arrays in the list elements is saved to subsequent HDU extensions with explanatory names so that the raw and observed images can be distinguished (e.g. distinct extension names for the observed LOSVD and the raw particle mean velocity images respectively). These will be presented in a format consistent with the spectral FITS files, but with the velocity cube output under its own extension, with the necessary axis labels given in the header information. §.§.§ Gas data cubes If method = "gas" or method = "sf_gas", the function will follow the kinematic data cube methodology, but only for the gas component (or the gas classed as star-forming, in the latter case) of the input model. As in <ref>, this results in a data cube containing the spatial information about the gas distribution along the x-y axes, with velocity information along the z-axis. To distinguish between all gas and gas particles that are classed as star-forming, we filter by the instantaneous star-formation rate. These properties are commonly reported for each gas particle within the model and allow us to filter gas that has met the threshold for star formation. Beyond the focus on the gas component, rather than the stellar component, the process by which this cube is constructed is almost identical to the above. The gas kinematics are weighted by the observed gas mass per pixel, rather than by a luminosity; this is equivalent to forcing mass weighting in the stellar kinematic cube construction. Each gas particle also has some intrinsic dispersion related to its thermal motions. We compute the thermal contribution to the dispersion of each particle as: σ_thermal^2 = P / ρ = (γ - 1)\,u, where P is the gas pressure, ρ is the gas density, u is the specific internal energy of the gas and γ = 5/3 is the adiabatic index. Of course, due to the effective equation of state employed by many cosmological simulations, this approximation for thermal motions is no longer valid once gas cools below the star-forming threshold. At this stage, the temperature and internal energies become effective measures. In this regime, we assume an isotropic thermal contribution of σ_thermal = 11 km s^-1.
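A minimal sketch of this thermal term is given below. It assumes the particle's specific internal energy u is supplied in (km/s)^2 units and that the effective equation-of-state regime can be identified by a non-zero instantaneous star-formation rate; the function and argument names are hypothetical rather than those of the package.

thermal_dispersion = function(u, sfr, gamma = 5/3, sf_floor = 11) {
  # u   : specific internal energy per particle, in (km/s)^2
  # sfr : instantaneous star-formation rate per particle
  sigma = sqrt((gamma - 1) * u)   # sigma_thermal = sqrt(P/rho) = sqrt((gamma - 1) u)
  sigma[sfr > 0] = sf_floor       # star-forming gas: assumed isotropic 11 km/s contribution
  sigma
}
thermal_dispersion(u = c(100, 400), sfr = c(0, 0.1))  # -> ~8.2 km/s and 11 km/s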
The mock observed kinematic images are constructed as above and we return the observed mass, velocity, dispersion, h_3 and h_4 images to the user. However, a number of additional raw particle properties are also included in the gas output. In addition to the raw gas mass per pixel, we record the mean mass-weighted instantaneous star formation rate, the mean gas metallicity, and the mean oxygen over hydrogen abundance ratio. The raw mass-weighted mean velocity and standard deviation are also returned in this list as clearly named 2D arrays. A number of these images are shown for our example galaxy from the EAGLE simulation in Figure <ref>, in the gas property maps on the right hand side. In the future, we aim to incorporate the gas information at each pixel position within the spectral cube, through the addition of emission lines of appropriate ratios and kinematics. This is currently beyond the scope of the code, due to the necessity of incorporating other features of realism, such as the attenuation and re-emission due to dust, which are currently beyond the resolution limits of the majority of simulations. We direct the user to codes such as SKIRT <cit.> and the work of Barrientos et al. (2023) for the proper radiative transfer treatment through an assumed dust distribution. § RESULTS §.§ Comparison of spectral and kinematic cubes A kinematic data cube should mimic the kinematic information included within a full spectral cube. Here, we present a series of tests to ensure the similarity of these products using two high-resolution galaxy models. One model represents a disk galaxy with highly coherent rotation. The other represents an elliptical galaxy with strong dispersion support. At these extremes, we hope to identify any systematic offsets between the kinematic cubes and spectral cubes as a function of the underlying model. These high-resolution N-body models have been constructed using the initial conditions code <cit.> and evolved in a smooth analytic potential using a modified version of <cit.>. Each galaxy contains 6.5 × 10^6 particles, each of mass 1 × 10^4 M_⊙. The elliptical system is modelled as a spherically symmetric distribution with a density profile described by: ρ(r) = \frac{M}{2\pi}\,\frac{a}{r(r+a)^3}, where a is the scale factor given by a = \frac{r_{200}}{c}\sqrt{2\left[\ln(1 + c) - \frac{c}{1 + c}\right]}, where r_200 is the virial radius and c is the concentration of the distribution. The disk model has been initialised with an exponential radial profile and a sech^2 profile in the vertical direction, described by: ρ(R,z) = \frac{M_*}{4\pi z_0 h^2}\,\mathrm{sech}^2\left(\frac{z}{z_0}\right)\exp\left(-\frac{R}{h}\right). The velocity profiles of these structures are initialised using the optimisation procedure outlined in Yurin & Springel (2014). We allow these systems to evolve in an analytic potential for 10 Gyr. We outline the procedure for each test below. We begin by generating three input files. As these are N-body models, we must assign each particle a stellar age and metallicity (such that an appropriate stellar template can be assigned). We produce two files with identical particle ages and metallicities in order to examine the variation of the underlying kinematic model (i.e. bulge vs.
disk) with consistent underlying spectra:

make_simspin_file(filename = "disk_model.hdf5", disk_age = 5, disk_Z = 0.024,
                  template = "E-MILES", output = "disk_age05_Z024.Rdata")
make_simspin_file(filename = "bulge_model.hdf5", bulge_age = 5, bulge_Z = 0.024,
                  template = "E-MILES", output = "bulge_age05_Z024.Rdata")

The final file contains more realistic stellar ages and metallicities for its component, with older, more metal-poor stars present in the elliptical system and younger, more metal-rich stars in the disk galaxy. This allows us to examine the effect of varied stellar templates on the comparison between spectral and kinematic data cubes.

make_simspin_file(filename = "bulge_model.hdf5", bulge_age = 10, bulge_Z = 0.001,
                  template = "E-MILES", output = "bulge_age10_Z001.Rdata")

We build two versions of each of these files. One is prepared using the E-MILES templates <cit.>, which have both higher wavelength and spectral resolution than the alternative BC03 models <cit.>, from which we prepare the second file. These files are used for the following tests in an effort to evaluate the consistency between our two mock observing methods. In each case, we generate a kinematic data cube and a spectral data cube. These cubes have identical observing conditions, i.e. projected distance, observed projection angle on the sky, field of view, etc. The spectral cube is then fit using pPXF, with the E-MILES spectra used as fitting templates. We then compare the kinematic maps produced through the penalised pixel fitting method with our kinematic cubes. With the two versions of each SimSpin file (E-MILES and BC03), we can examine consistency across a variety of spectral qualities. Within each test, we can further turn the dials of the telescope and observing strategy functions to explore the reliability of the results with respect to the LSF, the spatial and spectral resolution, and the seeing. Selected properties are described in each of the case studies below. We provide comparison figures for each test in the supplementary material at the end of the paper, though a summary of these is also presented at the end of the results section. A walk-through of the code used to generate these examples can also be found at the SimSpin website. §.§.§ Test 1: Observations of intrinsic template spectral resolution at low redshift. We would like to ensure that, in the simplest regime where there is no line-spread-function convolution and the object is projected to a small redshift, a kinematic data cube and a spectral data cube fit with pPXF return consistent answers. In essence, this tests that the velocity shift added to each particle's spectrum is working appropriately. This is done for all three examples (the young disk, young bulge and old bulge). We use different configurations for the E-MILES and BC03 cubes to suit the different resolution constraints of these spectra, and to explore the robustness of the comparison to different configurations of the telescope.
This is done using the following telescope parameters for the E-MILES and BC03hr cubes respectively:

telescope(type = "IFU", signal_to_noise = 30, lsf_fwhm = 0, wave_res = 1.04,
          aperture_shape = "circular", fov = 15, spatial_res = 0.5)
telescope(type = "IFU", signal_to_noise = 30, lsf_fwhm = 0, wave_res = 3,
          aperture_shape = "hexagonal", fov = 17, spatial_res = 0.7)

The same observing strategy is used for all observations:

observing_strategy(dist_kpc_per_arcsec = 0.3, inc_deg = 60, blur = F)

In this test, we force SimSpin to generate a spectral cube at the intrinsic template spectral resolution by requesting a telescope with λ_LSF^telescope = 0 Å. This will cause the code to issue a warning that the templates used have insufficient resolution to construct such an observation, but it will produce the output spectral cube nevertheless. It is important to remember that the underlying templates used to construct the observed galaxy do have some intrinsic line-spread function, as shown in Table <ref>. Hence, when using spectral templates to fit the SimSpin spectral cubes with pPXF, it is important that we match the fitting templates to the true underlying LSF, which is dependent on the templates from which the cube has been made (λ_LSF^templates = 2.51 Å in the case of E-MILES SimSpin cubes and λ_LSF^templates = 3 Å in the case of BC03 SimSpin cubes). When performing the pPXF fit using the E-MILES templates to fit the model spectra, we convolve the fitting templates with the root-square difference between the BC03 and E-MILES line-spread functions (i.e. √(3^2 - 2.51^2) = 1.64 Å for the BC03 cubes and √(2.51^2 - 2.51^2) = 0 Å for the E-MILES cubes), because the templates from which the mock observation has been built have an LSF greater than or equal to that of the templates used to perform the fit. We compare the output of the pPXF run in this case to a kinematic data cube run using the same parameters, but this time with method = "velocity". We expect that the observed kinematics will be consistent within the noise. The resulting comparison can be seen visually for our disk model in Figures <ref> and <ref>. Similar plots for each of the models built for these tests can be found in <ref>. Visually, it is clear that the E-MILES spectral cube comparison in Figure <ref> is much more consistent than the BC03 spectral cube comparison in Figure <ref>. However, in both cases the residual distributions are centred around zero. In the recovery of the kinematics in the BC03 example, we struggle to find a sufficiently good fit, with the χ^2/DOF averaging ≃ 4, as opposed to the E-MILES comparison value of ≃ 1. We believe this is due to template mismatch within pPXF. In Figure <ref>, we show the residual differences between the kinematic and spectral cubes as a histogram for each model and spectral template set. This allows us to directly compare the differences between spectral cubes built with the E-MILES and BC03 templates. At low redshift and with no additional LSF effects, we see that the two methods (spectral and velocity) compare quite nicely, with all resulting residuals centred around zero. As noted visually from the kinematic maps, there is a broader difference between the returned kinematics for the BC03 SimSpin cubes fit with the E-MILES templates through pPXF. §.§.§ Test 2: Observations of intrinsic template spectral resolution at high redshift. Following the success at low redshift, where we tested that spectra are shifted in wavelength space effectively, we next consider the effect of projecting the galaxies to larger distances.
In this study, we use the same telescope definitions as in Test 1, keeping the templates from which the cubes are built at their intrinsic resolution using λ_LSF^telescope = 0 Å, but modifying the observing strategy as follows:

observing_strategy(dist_z = 0.3, inc_deg = 60, blur = F)

We note here that the median signal-to-noise is set to 30, as in the previous test. It is important to remember that, with objects projected to further distances, we do not perform an exposure time calculation, and as such this may not be representative of the noise expected from such an observation. Here, we examine whether the red-shifting module is working effectively in both methods and still produces equivalent results between the spectral and velocity cubes. We build both a spectral and a velocity cube of each of the simulations with these specifications. The resulting spectral cubes are fit using pPXF to find the observed spectral kinematics, and the maps are compared with their kinematic-cube counterparts. As in the previous test, we produce a visual analysis by comparing the kinematic maps for each model, as shown in Figures <ref> and <ref>. This time, we demonstrate using the bulge model, but provide the images for every model tested in <ref>. We successfully recover kinematic details in the E-MILES built images in Figure <ref>. However, we find that it is much more difficult to get a successful fit for the BC03 spectral cubes through pPXF. The direct comparison between the two is clearly demonstrated in the histograms in Figure <ref>. At the wavelength resolution of 3 Å, as is used for the BC03 SimSpin cubes, we find that it is especially difficult to recover the higher-order kinematics, as would be expected for higher redshift observations. §.§.§ Test 3: Observations with spectral resolution applied at low & high redshift. The next test is designed to evaluate the module of the code that varies the spectral resolution. In this case study, we take the disk simulation built with each of the E-MILES and BC03 templates and observe them using telescopes with line-spread functions greater than those of the underlying templates. We do this with the disc projected at both low and high redshift, as the convolution kernel used for the LSF will change as a function of z, as demonstrated by equation <ref>. We broaden each set of templates by different amounts, as shown in the specifications below for the E-MILES and BC03 SimSpin files respectively:

telescope(type = "IFU", signal_to_noise = 30, lsf_fwhm = 3.61, wave_res = 1.04,
          aperture_shape = "circular", fov = 15, spatial_res = 0.5)
telescope(type = "IFU", signal_to_noise = 30, lsf_fwhm = 4.56, wave_res = 3,
          aperture_shape = "hexagonal", fov = 17, spatial_res = 0.7)

We then run each model twice, once at low and once at high z, using the following functions:

observing_strategy(dist_kpc_per_arcsec = 0.3, inc_deg = 60, blur = F)
observing_strategy(dist_z = 0.3, inc_deg = 60, blur = F)

As before, we produce a spectral and a kinematic cube for each iteration and run the spectral cubes through pPXF to recover the observable kinematics. For this set of pPXF fits, when using the E-MILES templates to fit the model spectra, we only need to convolve the fitting templates with the root-square difference between the line-spread function of each observation and that of the fitting templates (i.e. √(3.61^2 - 2.51^2) = 2.59 Å and √(4.56^2 - 2.51^2) = 3.81 Å for the E-MILES and BC03 examples respectively). The results of these fits are demonstrated visually in Figures <ref> and <ref>.
These figures show the high-redshift examples, with the low-z fits shown in <ref>. In Figure <ref>, we can see that the structure of the LOS dispersion has been well captured in the resulting spectral fit. However, we see that the higher-order kinematics, h_3 and h_4, become quite difficult to recover at larger radii, where noise begins to dominate. We provide a direct comparison between the BC03 and E-MILES residuals in Figure <ref>, built at both high and low redshift. It is quite clear from this comparison that there is no significant difference between the high and low redshift behaviour, except in the case of the cube built with BC03 templates. In this example, we see that the lower redshift model appears to under-estimate the true dispersion, as shown by the positive dispersion residuals. Given the difficulty we have had fitting the BC03 spectral models for kinematics, it is unclear whether these discrepancies are the fault of the code or of the fitting methodology. As the fits are quite consistent for the E-MILES spectral cubes, we proceed with the final test to check for consistency when atmospheric blurring conditions are incorporated. §.§.§ Test 4: Observations with spectral resolution applied with atmospheric seeing conditions included. The final test involves taking the previous mock observations and introducing seeing conditions. As described in section <ref>, we convolve each spatial plane of our spectral or velocity data cube with a kernel to imitate the blurring effects of the atmosphere. This is done by specifying the blurring parameters below, indicating that we would like the image to be blurred, as well as the size and shape of the convolution kernel. This is all done in the function. For the following study, we use the following specifications for the E-MILES and BC03 models, projecting each to both near and far distances with varied seeing conditions:

observing_strategy(dist_kpc_per_arcsec = 0.3, inc_deg = 60, blur = T, fwhm = 1, psf = "Gaussian")
observing_strategy(dist_z = 0.3, inc_deg = 60, blur = T, fwhm = 2.8, psf = "Moffat")

The rest of the parameters remain consistent with the previous case study. As before, we test these observations at both high and low redshift distances for the young disc model with the two flavours of spectral templates. The results of these fits are demonstrated in Figures <ref> and <ref>, where we show the fitting results for the low redshift E-MILES galaxy and the high redshift BC03 galaxy. The remaining images are included in the final <ref>. Even in the blurred images, as shown in Figure <ref>, we can see that the kinematics of the spectral and velocity cubes are closely comparable, with residuals nicely balanced around the zero point. With the BC03 system, we see a poorer recovery. Comparing the two directly using the histograms in Figure <ref>, a hollow yellow bump is visible towards the positive residuals, showing that the kinematic cubes provide an overestimate of the dispersion in comparison to the spectral cube fit with pPXF. These are important tests to run in order to evaluate the success and flexibility of the code. Here we have taken each feature in turn and assessed how its addition affects the resulting kinematic image.
We note that, within the extra-galactic community, the use of E-MILES templates for kinematic fitting is commonplace, and as such it is good to see the consistency between input and output for simulations that have been built, and kinematics fit, using the same set of stellar population synthesis models. Some concern is raised with regard to the poorer fits found for the BC03 models fit using E-MILES templates. The BC03 templates are evolutionary stellar population synthesis models that are commonly used within the theory community for semi-analytic models and for stellar population fitting. §.§ Web application SimSpin is a flexible and modular code, as demonstrated in this article and the numerous examples available online. As the number of applications for mock simulation data grows with ever more resolved models of galaxy formation and evolution, it is important that the code is accessible and usable by a wide range of users, theorists and observers alike. In order to remove some of the barriers we perceive preventing users from working with this code (including working with R, handling simulation data, or running large memory jobs locally), we have built a web application of SimSpin[<https://simspin.datacentral.org.au/app/>]. The web application has the same range of functionality as the R package, without the necessity to download and install the package yourself. It is a performant React Single Page App communicating asynchronously with a RESTful API, hosted by Data Central. The application allows for instant data exploration via a dedicated viewer, where authenticated users can re-visit previous queries and share results with others. Generated FITS files can be directly downloaded for further exploration and quantification. All services are containerised and managed by docker compose, such that the project is easily re-deployable. The API is fully documented, and comes with an API Schema (adhering to the OpenAPI Specification) to aid users in calling the API from other services. The SimSpin app removes the barrier of entry for novice astronomers, providing an accessible and time-saving tool for simulated galaxy visualisations. The API further removes a code language barrier, as individuals can generate queries using whichever language they choose. An example of this can be found within the documentation.[<https://kateharborne.github.io/SimSpin/examples/query_the_API.html>] § CONCLUSION In conclusion, we have presented a significant update to the mock observation code SimSpin. We have demonstrated a number of new features available in the code, including the measurement of higher-order kinematics, the construction of spectral data cubes and the inclusion of gas component analysis. The code now supports a number of different cosmological, hydrodynamical simulations. We have further containerised the code into a web application such that anyone can work with mock data, regardless of their coding language or computer specifications. All of these features have been tested using unit testing, as well as the longer case study explorations that are presented in the results of this paper. In line with standard continuous integration procedures, we run all unit tests and require them to pass before any changes can be merged into the main branch of the code. We also require the code coverage (as measured by the number of lines within the code hit by the unit tests) to remain at approximately 90% for tests to pass.
In the future, as more developers aim to expand the capabilities of the code, we may implement a further set of checks by core developers using the review system in place through GitHub. The range of applications for this code is already beginning to be demonstrated within the literature, ranging from designing corrections for the effects of seeing conditions <cit.>, to exploring the observational signatures of slow rotating systems formed in different ways <cit.>, to building machine learning models to explore the connection between intrinsic 3D shape and observable kinematics (Yong et al., in prep). Of particular interest, with the ready incorporation of theory data alongside observational surveys, we hope to see similar data releases of simulated galaxies for comparison alongside observations <cit.>. With tools like SimSpin, we are enabling these comparisons to be made consistently, both between simulations and observations, and between the different simulations themselves. Simulations provide us with the ability to explore the far reaches of space and time, while SimSpin now enables us to compare these simulations to our exquisite observations. The benefit of this is that, with our models, we know the ground truth: projection effects can be modified by simply moving our observer, the atmosphere can be turned "on" or "off", and we can fast-forward through time to examine how a given system may change over the course of its life. Such information is undoubtedly useful for contextualising the results we find in observations, as well as for improving existing sub-grid recipes within simulations. The future of mock observables is bright. § ACKNOWLEDGEMENTS KH acknowledges funding from CL's Discovery Project (DP210101945) funded by the Australian Government. AS acknowledges support through the summer internship program from the International Centre for Radio Astronomy Research (ICRAR) and the Pawsey Supercomputing Centre. This work has been made possible through the Astronomy Data and Computing Services (ADACS). This research was conducted under the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. Parts of this research, including the construction of N-body models for the case study analysis, were undertaken on Magnus at the Pawsey Supercomputing Centre in Perth, Australia. § ADDITIONAL CASE STUDY FIGURES §.§ Observations of intrinsic template spectral resolution at low redshift Here, in Figures <ref>-<ref>, we present the young bulge and old bulge observations from case study 1, where we have used the intrinsic spectral resolution of the underlying templates at a negligible redshift of z = 0.0144. The hexagonal maps are those models that have been built with the BC03 templates, while the circular maps have been built with the E-MILES templates. We can see that a proportion of the pixels fit in the bulge E-MILES maps return an extremely low value of the observed dispersion (with equally extreme h_4 values), which may be reduced by increasing the signal-to-noise of the image as shown in the following Figures <ref> and <ref>, either at the SimSpin construction stage or through binning techniques not explored here.
§.§ Observations of intrinsic template spectral resolution at high redshift Here, in Figures <ref>-<ref>, we present the young disc and old bulge observations from case study 2, where we have used the intrinsic spectral resolution of the underlying templates shifted up to a redshift of z = 0.3. The hexagonal maps are those models that have been built with the BC03 templates, while the circular maps have been built with the E-MILES templates. §.§ Observations with spectral resolution applied at low & high redshift Here, in Figures <ref>-<ref>, we present the young disc low-z observations from case study 3, where we have used spectral resolutions of 3.61 Å and 4.56 Å for the E-MILES and BC03 models respectively. The hexagonal maps are those models that have been built with the BC03 templates, while the circular maps have been built with the E-MILES templates. §.§ Observations with spectral resolution applied and atmospheric seeing conditions included Here, in Figures <ref>-<ref>, we present the young disc high-z observations for the E-MILES model and low-z observations for the BC03 model from case study 4, where we have used spectral resolutions of 3.61 Å and 4.56 Å for the E-MILES and BC03 models respectively, and added different levels of seeing conditions by convolving each spatial plane with a convolution kernel. The hexagonal maps are those models that have been built with the BC03 templates, while the circular maps have been built with the E-MILES templates.
http://arxiv.org/abs/2307.01151v1
20230703164648
BayesDose: Comprehensive proton dose prediction with model uncertainty using Bayesian LSTMs
[ "Luke Voss", "Ahmad Neishabouri", "Tim Ortkamp", "Andrea Mairani", "Niklas Wahl" ]
physics.med-ph
[ "physics.med-ph" ]
Purpose: Fast dose calculation techniques are needed in proton therapy, particularly in light of time restrictions in adaptive workflows. Neural network models show the potential to substitute conventional dose calculation algorithms with fast and accurate dose predictions, while lacking measures to quantify an individual prediction's quality. We propose to use a Bayesian approach to learn the uncertainty of AI-based dose prediction. Methods: Our resulting BayesDose framework is based on a previously published deterministic LSTM. Similarly, it is trained and evaluated on Monte Carlo beamlet doses simulated on (1) 2500 water phantoms with slab inserts and (2) 1000 geometries extracted from a lung patient for a single initial energy. The network's weights are parameterized with 2D Gaussian mixture models, and 100 ensemble predictions are used to quantify mean dose predictions and their standard deviation. Generalizability as well as re-training of the model is evaluated on smaller datasets with two different initial energies as well as five additional patients. Results: The averaged predictions of the BayesDose model performed similarly to its deterministic variant and at least as well as the originally published LSTM model. Predictions of the uncertainty (measured through the sampled predictions' standard deviation) seemed conservative, particularly for the phantom dataset; however, regions of high uncertainty correlated spatially with the largest dose differences between the prediction and the Monte Carlo calculated reference. A large average uncertainty within a prediction correlates strongly with dosimetric differences (up to ρ = -0.74). This correlation is reduced when applying the model to patient data with unseen HU value ranges. The runtime overhead could be decreased to 9x that of a deterministic prediction for an ensemble size of 100 by parallelizing predictions and presampling network weights. Conclusion: Bayesian models for dose prediction can produce fast predictions with quality equal to analogous deterministic models. The obtained prediction standard deviation correlates well, both globally and locally, with dosimetric inaccuracy. Models like BayesDose could thus support decision making and quality assurance when translating dose prediction models into the clinic, while the Bayesian approach in general could translate to other AI models in medical physics. § INTRODUCTION Dose calculation for particle therapy is a computational task sensitive to runtime and accuracy. Algorithms for dose calculation usually present a trade-off between these, with Monte Carlo (MC) dose calculation on one side as the gold standard for accuracy but with deficits in runtime, and pencil-beam dose calculation on the other side as one of the fastest, but also less accurate, methods. The latter are still regularly used in the context of treatment plan optimization, which requires the computation of each potential beamlet's dose. While one may argue that MC dose calculation is reaching runtimes sufficient for inverse planning, treatment planning is at the same time evolving towards even more demanding time restrictions, e.g. within real-time/online adaptation, tightening the need for speed.
To combine accuracy with short runtimes, recent research proposed the use of AI either to correct the result of fast, inaccurate algorithms <cit.> or to learn the dose calculation entirely <cit.>. These works usually train on a reference dataset computed with MC algorithms for high accuracy. This initial training step is not particularly time sensitive, allowing specialized models to be built (e.g., per commissioned energy, patient class, etc.). The inference step during the actual dose calculation task can then be implemented in a highly performant manner, ideally surpassing the fastest classical numerical methods while retaining near-MC dose calculation accuracy. One concern in translating the use of such models into clinical application is explainability, including the quantification of the model's prediction accuracy.<cit.> While the systematic error of, for example, a pencil-beam algorithm is explainable through the assumptions and approximations made, this is not straightforward for a neural network. To quantify the uncertainty of a model's prediction, Bayesian approaches can be used: in a Bayesian neural network (BNN), weights and biases (i.e., the model's free parameters) are stochasticized by parametric probability distributions whose shape is learned in the training process.<cit.> After training, samples can be taken from those distributions to obtain a set of neural networks with different parameterizations. These then generate multiple dose predictions on a single input, consequently allowing the calculation of statistical information on their predictive performance. For dose prediction with BNNs, this would result in multiple dose predictions on a given patient's CT image, from which statistical moments like the expected dose prediction and the respective standard deviation can be estimated. Based on previous work by <cit.>, which utilized LSTM networks for the calculation of the dose of individual proton beamlets, we demonstrate the feasibility of a Bayesian LSTM (BLSTM) to generate such statistical dose predictions. Our model, BayesDose, provides a meaningful way to mitigate black-box concerns for clinical translation and potentially helps quality assurance and decision making when using AI-based dose calculation. § MATERIALS AND METHODS Since the overall viability of using LSTM networks in proton dose predictions has already been shown in <cit.>, our focus lies primarily on analyzing the feasibility of the Bayesian variant of the architecture for dose calculation tasks. §.§ Datasets Our BayesDose model builds upon the collection of datasets already procured for <cit.>. Choosing the same datasets enables verification of the BayesDose model against its deterministic predecessor under the same boundary conditions. The datasets are briefly explained in this section; a more detailed explanation of the data procurement can be found in <cit.>. The input data is always given as a three-dimensional, voxelized image containing RSP values. The ground truth contains corresponding proton beamlet dose distributions as the desired output, simulated with MC using the TOPAS (Tool for particle simulations) wrapper for Geant4 <cit.>. Both input and ground truth are clipped laterally and in depth to contain only the area of interest, i.e., the respective beamlet's dose, and are selected and rotated such that the central beamlet axis coincides with the longest dimension of the respective rectangular cuboid (compare to Figure <ref>). This also means that dose is always deposited from left to right in all figures within this work.
Overall, a supervised regression problem is defined, mapping the input RSP values between 0 and 2.5 (0 for vacuum, 2.5 for denser bone structure) to real-valued dose output data. §.§.§ Water Phantom The first dataset within the collection of <cit.> is based on dose simulations for protons with an initial energy of 104.25 within an artificial cubic water box phantom. Inside this phantom, cuboid inhomogeneities of varying dimensions (2 to 14 along the z' and x' axes) and densities (0.1 RSP to 2.5 RSP) were placed. This dataset enables intuitive and explainable feasibility checks for developed models, and is inspired by previous studies investigating dose calculation accuracy <cit.>. For this first dataset, a total of 10000 samples (2500 phantom geometries, each in four augmentations, plus the corresponding Monte Carlo dose) were generated with a clipping area of 80 × 15 × 15 voxels at an isotropic resolution of (2)^3. §.§.§ Lung Patient The second dataset from <cit.> is generated on a lung patient case (with the similar initial energy of 104.25), naturally exhibiting strong anatomical heterogeneity between normal tissue, lung tissue, and bony anatomy. Dose calculation in lung is known to be among the cases where it is most difficult to achieve high accuracy with approximate methods <cit.>. All data points inside this dataset are from the same patient. Different geometric problems were created by altering the beam orientation in 5 steps from 0 to 355, as well as shifting the isocenter position in 10 shifts spanning the lung along the z' axis. This way of generating the training data resulted in cases where the gantry angle is oblique in relation to the CT axes. Consequently, these cubes experience strong wavering behavior due to the occurring interpolation errors (an example can be seen in Figure <ref>). However, we decided to include these samples with high interpolation artifacts to preserve the validity of our comparison to the deterministic LSTM and to analyze how these interpolation artifacts interfere with the model uncertainty predicted by the BayesDose model. In total, 4000 different samples were generated with a longitudinal clipping of l = 150 voxels, thus creating input and output dimensions of 150 × 15 × 15 at a voxel spacing of 2. §.§.§ Data for generalization tests Choosing a distinct energy for training is justified by the ability to train multiple models on different initial energies (i.e., the commissioned energy spectra for the respective accelerator). To underline this transferability to other energies, the dataset collection includes smaller datasets for a low-range energy of 67.85 and a high-range energy of 134.68. Each energy is represented by a total of 1000 pencil beam samples that were split and preprocessed with an approach identical to that of the primary lung patient dataset. The high-range dataset has an extended longitudinal clipping, and therefore an increased sequence length for the BLSTM, of l = 200. To assess the generalizability of the model to other lung patients, the final dataset of the collection incorporates dose simulations on four additional lung patients. Each patient entails a random selection of 200 proton beamlets. The input data exhibits individual RSP value ranges different to those of the original patient, as visualized in Figure <ref>. Patients 2, 3, and 4 feature RSP values close to the training set, patient 5 has a slightly wider RSP range, and patient 1 has the largest range of RSP values.
§.§ Model Implementation The BayesDose model is implemented in Python 3.10.5 using the PyTorch framework 1.12.0, based on the original deterministic model from <cit.>. Bayesian network layers were implemented using the Blitz framework 0.2.8 <cit.>. Minimal improvements to the original reference were implemented, streamlining data processing and enabling training with larger batch sizes, providing significantly faster training as well as improved convergence. Before outlining the full network architecture, we briefly introduce the inner workings of a BLSTM layer (as implemented within the Blitz framework) compared to a conventional LSTM layer: within a Bayesian LSTM layer, each weight and bias of the conventional layer is stochasticized by a corresponding parameterized probability distribution. The resulting Bayesian LSTM cell structure is displayed in <ref>. As a result, one LSTM cell, depending on the input and hidden state dimensions, will contain multiple stochastic weights and biases, constituting the stochastic component of the cell. The weight update uses Bayesian inference, where a prior distribution p(w) is translated into a posterior distribution p(w|D) conditioned on the training data D as per p(w|D) = \frac{p(D|w)\,p(w)}{p(D)}. Here, p(D|w) is the likelihood of the data D given the weights w, and p(D) describes the evidence. Since calculating the evidence usually represents an intractable problem, the Blitz framework approximates the posterior distribution by a variational posterior distribution parameterized with the learnable parameters μ and ρ. These parameters are optimized during training by minimizing the negative ELBO, defined as -\mathrm{ELBO} = \underbrace{-\mathbb{E}_{q_\psi(w)}\{\log p(D|w)\}}_{\text{model fit}} + \underbrace{\mathrm{KL}\{q_\psi(w)\,\|\,p(w)\}}_{\text{regularization}}, which is equivalent to minimizing the KL divergence between the variational posterior and the exact posterior distribution. The first term, \mathbb{E}_{q_\psi(w)}\{\log p(D|w)\}, measures how well the model fits the data D and is determined in the implementation by the MSE. The second term, \mathrm{KL}\{q_\psi(w)\,\|\,p(w)\}, measures the distance between the variational posterior q_ψ(w) and the prior distribution p(w), acting as a regularizer and thus preventing overfitting of the model. The parameters describing the probability distributions of the weights and biases of the network are updated during training following the Bayes by Backprop method.<cit.> Afterwards, to obtain an ensemble of predictions, the weights and biases (indexed by i) of the BLSTM network are sampled according to w_i = \mathcal{N}(0,1) \times \log(1 + e^{\rho_i}) + \mu_i and b_i = \mathcal{N}(0,1) \times \log(1 + e^{\rho_i}) + \mu_i, whereby 𝒩(0,1) represents a value sampled from a normal Gaussian distribution with zero mean and unit variance. The Blitz framework uses a Gaussian scale mixture prior <cit.> for p(w). The prior distribution as well as the initial values of the trainable parameters ρ and μ can be obtained by a hyperparameter search. To do so, we used the Optuna framework <cit.> with 200 trials, yielding the dispersion parameters σ_1 = 3.8, σ_2 = 0.2, π = 0.25, μ = 0, and ρ = -5.6. §.§ Model Architecture The network architecture is predominantly consistent with the architecture of the deterministic model explained by <cit.>. For processing the input data, each 15 × 15 slice out of the given sequence of lateral 2D input slices is flattened into a vector of size 225 and then introduced into the BLSTM cell. Cell state and hidden state each comprise 1000 neurons, which, after the current slice's state update, pass on their state as input for the respective next slice.
This way, each slice within the input sequence is processed inside the BLSTM based on previously gained information. The final output of the BLSTM layer is a sequence of vectors of size 1000, each representing a processed slice of the input sequence. Finally, back-end fully connected layers generate the results by converting the hidden-state vectors of size 1000 back to their original size of 225, which are then reshaped into a 15×15 slice for each step t. For BayesDose, the back-end network differs from the network used by <cit.>. Each of the three linear layers is replaced by a Bayesian linear layer, and we also substituted the ReLU activation functions with SiLU <cit.> to ensure smoothness of the posterior probability distributions of the weights and biases. The resulting back-end network has a Bayesian linear input layer of 1000 neurons, a Bayesian hidden layer of 100 neurons, and a Bayesian output layer of 225 neurons, with SiLU activation functions between the layers. To generate the network's final prediction and estimate its uncertainty, BayesDose draws a number of weight and bias samples according to <ref> for the same input sequence. The resulting series of predictions is then used to calculate the mean and the standard deviation, representing the aggregate prediction of the network as well as its uncertainty estimate. §.§ Model Training The loss function used for training BayesDose is the negative ELBO loss function composed of the MSE and the KL divergence (see <ref>). To account for the Bayesian nature, the average ELBO loss over three weight samples is used for backpropagation. The MSE is about a factor of 1 · 10^5 smaller than the KL divergence loss. This leads to the KL divergence dominating the loss function; the optimizer then disregards minimization of the MSE loss and thus converges slowly. To guarantee an equal impact of both terms on the loss function during optimization, we scaled the MSE during training by the above factor, and denote this scaled MSE as SMSE. Performing the learning rate range test proposed by <cit.> on the SMSE and KL divergence loss separately also showed that the two loss components require different ranges of optimal learning rates. To schedule learning rates appropriately, and to benefit from its fast convergence to low loss values, PyTorch's one-cycle learning rate scheduler <cit.> is used. For the phantom dataset, the starting value of the scheduler was set to 1.3 · 10^-3, allowing the SMSE loss to converge fast and the KL divergence loss to start converging slowly. From that point, the LR increases until a maximum LR is reached. The maximum value of 6.5 · 10^-3 was found to be the largest LR at which the SMSE loss does not diverge. Subsequently, the LR drops to a minimum value of 4.13 · 10^-5 during the second half of training, causing the total loss to converge. The patient case was found to allow for slightly smaller LRs with an initial value of 3.3· 10^-4, a maximum LR of 1· 10^-3 and a minimum LR of 3.3· 10^-6. Although <cit.> suggests training with large batch sizes, no improvement could be observed when applied to the BayesDose architecture. The learning process of the network is driven by the Adam optimizer <cit.> with the scheduler and a batch size of 32, which was found to yield the smallest difference in optimal learning rates between the two loss components and ultimately offered the most effective training.
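To make the training objective concrete, the following minimal PyTorch sketch combines the reparameterized weight sampling introduced above with the scaled negative-ELBO loss just described. The scaling constant and the averaging over three weight samples follow the text; the `kl_divergence()` call is a placeholder for the KL term that the Blitz framework computes internally, so the snippet is illustrative rather than the actual BayesDose training code.

```python
# Sketch of the scaled negative-ELBO training loss (illustrative only).
import torch

MSE_SCALE = 1e5  # brings the MSE to the same order of magnitude as the KL term

def sample_parameter(mu, rho):
    """Reparameterized sampling: theta = mu + log(1 + exp(rho)) * eps, eps ~ N(0, 1)."""
    sigma = torch.log1p(torch.exp(rho))
    return mu + sigma * torch.randn_like(mu)

def negative_elbo(model, x, y_true, n_samples=3):
    """Average SMSE + KL over several stochastic forward passes, as done in training."""
    total = 0.0
    for _ in range(n_samples):
        y_pred = model(x)              # each forward pass draws new weight samples
        smse = MSE_SCALE * torch.mean((y_pred - y_true) ** 2)
        kl = model.kl_divergence()     # placeholder for the framework-provided KL term
        total = total + smse + kl
    return total / n_samples
```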
The Adam optimizer has been shown to generalize well across many different application areas and is one of the suggested optimizers for Bayesian learning <cit.>. The network is trained for 600 epochs for the phantom dataset and 1000 epochs for the patient dataset; beyond these numbers of epochs, no significant improvements in test performance could be observed. The evolution of the total loss and its constituents is shown in <ref>: after an initial reduction of the SMSE, the KL divergence is gradually reduced. A full training cycle takes about 10 for the phantom data and about 11 for the patient data on an NVIDIA RTX A6000 GPU. §.§ Model Evaluation For evaluating the performance of BayesDose, 100 prediction samples were drawn to enable a precise estimation of the mean and standard deviation while maintaining an acceptable prediction time. The metrics for quantifying the performance of the model were divided into accuracy metrics using the ensemble mean and uncertainty metrics using the ensemble standard deviation. For the evaluation of the predictive accuracy of the network, a global γ analysis <cit.> with a 1%/3 mm criterion was used. To avoid clustering of γ pass-rates at 100%, voxels with doses under 0.1% of the maximum dose were excluded from the γ analysis, resulting in a stricter metric than the one used in <cit.>, since values close to zero caused by numerical inaccuracies of the neural network may artificially increase the passing rate. For the patient case, 10 interpolation points were used for γ computation. For a fair comparison, the results for the original model by <cit.> were then recalculated using this stricter criterion. In addition to the γ pass-rate, the MSE as well as the MAE of the dose difference (DD) are reported. To assess the quality of the uncertainty prediction, the relative number of voxels not correctly predicted within their respective nσ confidence interval, where σ is the standard deviation of the predicted dose ensemble within a voxel, was extracted for n = 1,…,5. While the (uncertain) predicted dose does not necessarily follow a Gaussian probability distribution, comparing the relative number of voxels deviating by more than nσ with the Gaussian expectation may help indicate whether the model heavily over- or underestimates its confidence. Following the same data preparation technique as for the γ analysis, dose differences of less than 1% as well as differences below 0.1% of the maximum dose were neglected for the analysis. As evaluation references, the Bayesian LSTM network was compared to the LSTM network introduced by <cit.>, as well as to its equivalent deterministic structure, to exclude any effects that might originate from the differences in overall architecture (like using SiLU) and training beyond the switch to Bayesian network layers. §.§ Experimental Design Based on the available datasets and the implemented model, we devised five experiments to test the behavior of BayesDose. Experiment 1: The model is trained, validated, and tested on the phantom dataset described in section <ref>. As in <cit.>, a 60-20-20 split (training-validation-testing) is used. Experiment 2: The model is trained, validated, and tested on the individual lung patient dataset described in section <ref>, again using a 60-20-20 split. Experiment 3: Validation of the performance for different initial energies by training, validating, and testing separately on the low-range and high-range energy datasets described in <ref>.
Experiment 4: Investigation of the generalizability by applying the model trained in Experiment 2 to the other four patients. Experiment 5: Testing the network's capability to be re-trained on new data by first training the network on data from patient 0 and subsequently fine-tuning it for 10 epochs on patient 5, which features a wider range of RSP values than the training dataset but still a smaller range than the outlier patient 1. Afterwards, the network was tested once again on the remaining patients (similar to Experiment 4) and the results were compared. In Experiments 4 and 5, we also correlate the predicted uncertainty with various γ criteria to investigate the potential of developing decision criteria for accepting or discarding the model's output, given the output uncertainty. Experiment 5 extends this by testing the potential impact of re-training on previously unseen data on this correlation. Unless explicitly stated otherwise, the results of this study are always evaluated on the unseen test subset of the data. § RESULTS §.§ Phantom Data (Experiment 1) For each individual pencil beam in the test set, the BayesDose predictions (i. e., the average over 100 samples) are first compared to the MC ground truth dose distribution using a [1%, 3 mm] γ analysis to confirm the literature results and the suitability of the architecture for dose prediction. In <ref>, the average, standard deviation, minimum and maximum of the γ pass-rates across the test dataset are reported, together with the corresponding MAE and MSE between the generated dose cubes and the ground truth MC simulation. Both BayesDose and its deterministic variant seem well suited for dose calculation with pass-rates >97.81%, even slightly outperforming the original LSTM model by <cit.> with higher average and minimum γ pass-rates and lower MAE and MSE. Notably, comparing the average prediction from BayesDose to the ground truth showed a small predictive performance improvement over using the deterministic variant. Nearly perfect predictions occurred when cuboid heterogeneities only had a minor effect on the dose distribution, resulting in only small distortions of the Bragg peak. These samples were also largely estimated by the network as cases with low output uncertainty. In the case of <ref>, which is one of the best predictions of BayesDose, the maximum standard deviation is 1.1% (relative to the maximum dose) and no voxel differs by more than 3σ. The prediction with the lowest γ pass-rate of 88.93% is shown in <ref>, illustrating that large dose differences between ground truth and prediction spatially coincide with high dose regions and high uncertainty. For example, at the end of the range, the standard deviation from BayesDose reached up to 12.9% relative to the maximum dose in <ref>, which is among the highest observed standard deviations in Experiment 1 and indicates a less robust prediction than <ref>. 0.88% of voxels differ by more than 3σ and are located mainly near high dose gradients in depth. The obtained percentages of voxels outside of nσ over the entire test set are reported for each n from one to five in <ref>. These values allow a comparison to the Gaussian assumption of probability mass within a confidence interval. For the phantom case, these values are consistently too low, indicating an overestimation of the uncertainty. <Ref> shows the case with the largest number of voxels deviating by more than 3σ.
The high percentage of significantly deviating voxels in <ref> originates from a systematic underestimation of the proton path length; the Bragg peak, and therefore high dose values, appear distally shifted. This indicates a failure of the model to correctly maintain the unusually long temporal, i. e., downstream, dependency on the entrance cavity in this particular case. Still, the failing area is also estimated to have high uncertainty; however, its magnitude is too small in scale to capture the full DD occurring in that area. §.§ Patient Data (Experiment 2) <Ref> summarizes the performance of BayesDose, its deterministic variant and the original model for the first lung patient. All three algorithms show comparable performance on the test dataset with average γ pass-rates over 99.3% and only minor differences in their MSE and MAE values. Again, we relate dosimetric accuracy to predicted uncertainty through best and worst examples. One of the most accurate predictions is illustrated in <ref> with a 100% γ pass-rate. In the patient dataset, BayesDose generally predicts zero dose in regions of low RSP values, as the dose in air before the patient was zero in the training data as well, which reproduces the original deterministic model's behavior. BayesDose seemingly learns the interpolation artifacts occurring in the pre-processing pipeline as model uncertainty, forming stripes of standard deviation around the beam line axis, where the interpolation effects were usually most pronounced in the training dataset. Also, <ref> shows a higher maximum standard deviation relative to the maximum dose (8.3%) than the best phantom prediction (1.1%). In the test sample with the lowest dosimetric accuracy, shown in <ref>, BayesDose underestimates the length of the dose wash-out beyond the Bragg peak, but associates this region with very high uncertainty (reaching up to 13.1% of the maximum dose). Further, the interpolation artifacts seem to be a large driver of failing voxel dose predictions, visible in the large number of voxels deviating by more than 5σ. In general, over the whole patient dataset, the average percentages of voxels deviating by more than nσ (shown in <ref>) are consistently higher than in the first experiment on the phantom data. Thus, according to <ref>, BayesDose seems to produce less conservative uncertainty predictions on the patient data. A large share of the voxels deviating beyond 5σ, as illustrated in the prediction example with the maximum number of voxels showing a dose difference >nσ (<ref>), is seemingly rooted in the interpolation artifacts in the training data. §.§ Multiple Energies (Experiment 3) BayesDose was trained and tested on two additional proton energies. Similar to the original model by <cit.>, the accuracy is comparable across all three energies, as listed in <Ref>, where the mean pass-rate is above 99% for all three energies. §.§ Inter-patient Generalization (Experiment 4) For Experiment 4, that is, the evaluation of the network performance on 5 additional lung patients, we again first analyze the dosimetric performance of the average prediction in <ref>. BayesDose shows behavior similar to the original deterministic model, with Patient 1 and Patient 5 as worse-performing outliers compared to the rest. Correlating the prediction uncertainty on patients unseen by the model during training with the dosimetric accuracy allows us to simulate the case of dose prediction on a new patient to be irradiated, with only the predicted uncertainty available as a decision-making criterion.
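The decision-criterion analysis referenced in the following can be reduced to a simple correlation between the per-beamlet mean predicted standard deviation and the corresponding γ pass-rate, as sketched below. The arrays are synthetic placeholders for the actual evaluation results, and Pearson's coefficient is used for illustration; the text does not specify which correlation measure was applied.

```python
# Correlate per-beamlet mean predicted uncertainty with gamma pass-rates (illustrative).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Synthetic stand-ins for the real per-beamlet evaluation results:
mean_sigma = rng.uniform(0.5, 5.0, size=200)                    # mean ensemble std. dev. (% of max dose)
gamma_pass = 100.0 - 4.0 * mean_sigma + rng.normal(0, 2, 200)   # gamma pass-rate per beamlet (%)

r, p_value = pearsonr(mean_sigma, gamma_pass)
print(f"Pearson r = {r:.2f} (p = {p_value:.3g})")
# A strongly negative r supports using the predicted uncertainty as a criterion
# for accepting a predicted dose distribution or falling back to Monte Carlo.
```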
Thus, <ref> depicts how the average standard deviation of the dose predicted by the model correlates with the γ pass-rate results for all patients. To avoid clustering of the predictions around 100%, a stricter criterion of [1%, 2 mm] was chosen. In all patients apart from Patient 1, low dosimetric accuracy correlates well with high average prediction uncertainty. Patient 1 shows severe outliers, which are investigated further below in <ref> (worst accuracy) and <ref> (highest average uncertainty). To quantify the correlations, we calculate correlation coefficients with and without Patient 1 in <ref>. A strong negative correlation is visible, supporting that high predicted uncertainty correlates with dosimetric inaccuracies. The trend also holds across multiple γ criteria, with the strongest correlation achieved using a [1%, 2 mm] γ criterion (without Patient 1). Further analysis of the outliers for Patient 1 shows the following: In the case of the worst dosimetric prediction on Patient 1, i. e., the far-left outlier in <ref>, failing voxels seem to mainly originate from a low-dose flare behind the Bragg peak, where the model fails to shape the distortion of the peak. The outlier with the highest mean standard deviation, shown in <ref>, also shows difficulties in predicting the dose flare distal to the Bragg peak. Further, the range seems to be incorrectly predicted. However, the model associates a large standard deviation with the low-dose region distal to the peak, leading to high relative uncertainty. Both examples also show substantial interpolation artifacts and highlight a common issue with using γ tests for dosimetric analysis, particularly in this correlation study: the low-dose region is not captured, as it is discarded in the analysis by the dose threshold. §.§ Retraining (Experiment 5) To test whether the substantially worse performance on Patient 1, both dosimetrically and in the quality of the uncertainty prediction, can be mitigated, the model was retrained for 10 epochs on the second-worst-performing dataset – Patient 5 – which also showed the second-widest HU range after the largest HU range of Patient 1. <Ref> shows a substantial increase in dosimetric accuracy for Patient 1, bringing it close to the other patients. Also, recalculating the correlation coefficients with the newly acquired outcomes after retraining (see <ref>) shows a large improvement compared to the first evaluation in <ref>, with the correlation coefficient for the illustrated [1%, 2 mm] γ pass-rate improving from -0.5 to -0.74. §.§ Runtimes For a stable (average) prediction and acceptable accuracy of the standard deviation, multiple samples of the BayesDose parameters need to be taken, creating multiple predictions with the same computational operation and thus naturally increasing runtime compared to a deterministic variant of the network. Runtimes for BayesDose computing 100 samples (predictions) as well as for its deterministic variant and the original network are listed in <ref>. For the naive analysis in <ref>, the individual samples were requested fully sequentially. In this naive process, a full BayesDose ensemble prediction took about 400 times as long as its deterministic variant – a single prediction took about 43 ms, and the ensemble of predictions about 4.3 s. However, moving from sequential to parallel predictions reduced the runtime of the full ensemble prediction to 67 ms.
The deterministic variant structure and the LSTM network from <cit.> were about equally fast, with a slightly faster prediction time of 8 ms for the deterministic variant of the BLSTM. The reported measures include the time required to send the input CT cube for each pencil beam from CPU to GPU and, vice versa, to retrieve the yielded dose cube. § DISCUSSION In this paper, we demonstrated the feasibility of using BNNs for proton dose calculation tasks. The implemented BayesDose model features a fully Bayesian network structure built from a BLSTM and a Bayesian back-end network. It was capable of accurately predicting dose distributions inside a patient while quantifying the uncertainty in each prediction. Thus, BayesDose provides supporting evidence for the viability of applying BNNs to dose calculation tasks in real-world applications. §.§ Nominal Predictive Accuracy For both the phantom and lung patient cases, BayesDose reproduces at least the accuracy of the original LSTM-based deterministic network, both trained on an energy of 104.25 MeV. In the phantom case study, BayesDose and its deterministic variant both showed improved metrics over the original LSTM. BayesDose showed a slightly better γ analysis performance, whereas its deterministic variant had a minor improvement in MSE and MAE values. Note that the values presented in the results differ from the original publication on the deterministic LSTM <cit.>, due to switching the algorithm for γ calculation. Notably, the γ pass-rates measured for BayesDose over the whole dataset seemed more robust than those of the original LSTM. In the patient case, the measured test results of all three compared networks were similar, with average γ pass-rates above 99.59%. Training BayesDose on different initial proton energies also confirms accuracy comparable to the deterministic LSTM-based model from <cit.>, with a slight reduction in accuracy for the other energy datasets. This might originate from the higher variance in pencil beam ranges and hence a larger required clipping area to compensate for that change. However, even with a larger clipping area, the number of pronounced air cavities, which led to cases where the Bragg peak was located outside the patient or reached the second lung after completely passing through the first lung, increases with higher energy ranges. When testing the network's generalization by applying it to other unseen patients, we also observe behavior similar to the deterministic original, with lower test results for Patient 1 and Patient 5 in comparison to the other patients. We explain these – similarly to <cit.> – with the different ranges of input RSP values in these patients (as shown in <ref>). Therefore, BayesDose has to predict on previously unseen RSP values, potentially failing to recognize important features such as air cavities that have a drastic influence on the Bragg peak location. This hypothesis is further underlined by the results from the retraining experiment, where an increase in the seen RSP range yielded substantially better results for the worst-performing Patient 1. These findings suggest, for both phantom and patient, that the Bayesian nature of the network structure does not cause any obvious deterioration of prediction accuracy on the test results when compared to an equivalent deterministic network structure. Analyzing the contributions to the loss functions during training (<ref>), BayesDose quickly reaches performance comparable to a deterministic model in the early epochs of model fitting.
As the training progresses, BayesDose continues to explore a range of Pareto-like optimal solutions while maintaining and stabilizing test performance in terms of the SMSE. Simultaneously, it aims to optimize the probability distributions of the posterior, minimizing the regularization term without compromising performance. Thus, BNNs could potentially be applied to a variety of deep-learning proton dose engines without major deterioration in prediction accuracy. §.§ Quality of the uncertainty prediction The defining benefit of BayesDose over its deterministic counterpart is the quantification of uncertainty in its predictions. Visual inspection of the results from the worst dose predictions on the phantom dataset (<ref>) and the patient dataset (<ref>) shows comparably large standard deviations – and thus high uncertainty – in areas where many voxels failed the γ test as well. This spatial correlation of high uncertainty with high dosimetric differences is, while expected, a promising and also necessary feature for sensible use of the model uncertainty as a potential decision-making metric. A more quantitative analysis was performed by measuring how dose differences between prediction and Monte Carlo relate to multiples of the standard deviation nσ. The analysis on the phantom dataset indicates an apparently overcautious uncertainty estimate compared to the expected probability mass when assuming a Gaussian uncertainty. A reason for this could be that wrong predictions usually occur in regions with high dose gradients, which are especially pronounced in the phantom case and whose bimodal behavior does not closely follow a Gaussian distribution, leading to large uncertainty estimates. In the patient dataset, the numbers of failing voxels move closer to what would be expected from the Gaussian assumption, but seem too low for 1σ and too large for confidence intervals greater than 3σ. We attribute this to the wider spectrum of anatomies as well as the interpolation artifacts, which exhibit a more noise-like behavior. <Ref> also suggests an attempt to learn the occurring interpolation artifacts as part of the model's uncertainty prediction. This is visible in the form of slightly increased uncertainty around the beam line axis in the center of the dose cubes, where the interpolation effects were usually most pronounced in the training dataset. It may thus be sensible to move beyond the analysis of the standard deviation in the future and, for example, examine higher moments like skewness or empirical confidence-interval quantiles. The potential use for model quality assurance or decision making, already indicated above, is quantified by the (negative) correlation between the average σ and the γ pass-rate (reported in <ref>). These results strongly correlate large standard deviations with inaccurate predictions. Thus, the standard deviation can potentially serve as an indicator to accept a predicted dose distribution or to fall back to a deterministic numerical algorithm. The strongest correlations (before re-training) could be observed at a γ criterion of [1%, 2 mm] when excluding Patient 1. The worst-performing Patient 1 features a substantially wider RSP value range, which might cause the outliers from the correlation in the test results (see <ref>). Most of these observed outliers, however, exhibit extreme cases of a high average standard deviation. Thus, they are negligible for dose prediction, as cases with high uncertainty and high γ pass-rates would merely be classified as false negatives.
Considering this, Patient 1 exhibits only a single prediction outlier with a low γ pass-rate and a correspondingly low average σ, which, when used as a decision criterion, would lead to a false positive (i.e., using the BayesDose model despite limited accuracy). Further, Patient 1 could also be easily identified as not suited for model prediction during dose calculation due to its differing HU range. Hence, the model could alert the operating physician to high uncertainty in a given patient and discourage or forbid usage in this scenario. Consequently, the BayesDose model was able to substantially increase both the γ pass-rate results and the correlation coefficient by re-training on Patient 5, incorporating a larger seen RSP value range into training. The potential clinical use of such a model would thus involve a "preferred range of RSP" specification, outside of which patient cases with RSP values similar to Patient 1 could be readily identified as unsuitable for model prediction during dose calculation. The accurate estimation of uncertainty in each prediction may solve the major problems of nontransparent and overconfident predictions of DNNs in areas of critical decision-making and could potentially open the door for these algorithms in everyday clinical practice. §.§ Runtime The main limitation of BNNs is that a large ensemble of single predictions is required to obtain a precise prediction and estimation of the uncertainty. Additionally, the network has more free parameters (owing to the probability distributions parameterizing the weights and biases), resulting in more time-consuming single predictions. This becomes apparent in <ref>, where the parallelized BayesDose network still takes about seven times as long as its deterministic variant for a single prediction. However, due to the parallel nature of a neural network's feed-forward operation, there is much room for additional speed improvements. By parallelizing the calculation of a large number of proton beamlets, the runtime could be substantially reduced. Further improvements can be made by developing a more efficient parallel sampling of multiple weights internally or, alternatively, by sampling the required weights prior to the use case. Another speed-up could potentially be achieved by sending the CT cubes to the GPU in advance of the dose calculation, as proposed by <cit.>. This way, only the effective feed-forward time of the network would be the limiting factor in speed. Efforts regarding runtime optimization are further supported by constant advances in dedicated deep-learning hardware and, more prominently, by leveraging the parallel nature of the problem. In future studies, the required ensemble size could also be analyzed, which may lead to the conclusion that a smaller ensemble size still offers satisfactory precision. Thus, the computational complexity and the corresponding runtime could be greatly reduced. Overall, the parallel structure of the BNN's predictions leaves multiple options for dedicated optimization and thus considerable runtime speedup during execution. §.§ Limitations and Application We demonstrated that BNNs can be used successfully for dose calculation in heterogeneous tissue.
Nevertheless, the BayesDose model implemented in this study does not constitute a complete dose calculation engine that is ready to be used clinically. The focus was exclusively on individual beamlets with a single initial energy (mainly 104.25 MeV) and a specific beam focus applied to a lung patient case, in order to be able to compare the results with those of the deterministic variant proposed by <cit.>. To compose a full dose calculation engine, multiple models would need to be trained for each commissioned energy and potentially for other geometric parameters. Such a dose engine for the deterministic use case is currently being developed <cit.> and may afterwards be extended to use the BayesDose model presented here. While other works generalize their models by including energetic <cit.> or geometric <cit.> parameters in training, we argue that individual models exhibit no substantial disadvantage and might even be advantageous in handling extremely low and high energies in dose calculation and quality assurance. Further, model training was limited to a single patient geometry. To generalize well over all given input images, a larger training dataset with different patient geometries as well as a variety of CT scanners should be considered for practical implementations. Further improvements may be achieved by using a different loss function for the model fit that has a magnitude similar to the KL divergence loss and, more importantly, a similar range of optimal LRs. Hence, the efficiency of the model and its training process can be further enhanced by fine-tuning the network's parameters. For incorporating the measured uncertainty into the dose calculation process, a conceivable way would be to specify an uncertainty threshold (or some form of global model confidence indicator) prior to the calculation of the dose distribution. If this threshold is crossed, the algorithm could either warn about an uncertain prediction for a specific pencil beam and/or recalculate that beam's dose distribution with MC simulations. Thereby, the calculation time could be substantially reduced without compromising accuracy, suggesting particular benefit for online adaptive proton therapy. Besides the application in proton dose calculation, potential applications in photon as well as heavier-ion (carbon, oxygen, helium) dose calculation would be feasible. §.§ Conclusion Our BayesDose model can achieve state-of-the-art prediction accuracy with the added benefit of comprehensively providing a spatial distribution of the model's uncertainty. Even with ensembles of 100 predictions, the runtime overhead stayed below a factor of 10 and could be further reduced by pruning and computational optimization. Expanding the patient training dataset to other patients and/or regular transfer learning can fine-tune prediction accuracy as well as uncertainty estimation. The Bayesian approach may similarly be applied to other recent and upcoming approaches in comprehensible deep-learning dose prediction. The strong correlation between prediction uncertainty and deviation from the ground truth could play a vital role in the quality-assured clinical translation of dose calculation algorithms based on neural networks.
http://arxiv.org/abs/2307.00267v1
20230701081723
Self-Supervised Query Reformulation for Code Search
[ "Yuetian Mao", "Chengcheng Wan", "Yuze Jiang", "Xiaodong Gu" ]
cs.SE
[ "cs.SE" ]
Both authors contributed equally to this research. Shanghai Jiao Tong University, Shanghai, China, mytkeroro@sjtu.edu.cn. East China Normal University, Shanghai, China, wancc1995@gmail.com. Shanghai Jiao Tong University, Shanghai, China, jyz-1201@sjtu.edu.cn. Xiaodong Gu is the corresponding author. Shanghai Jiao Tong University, Shanghai, China, xiaodong.gu@sjtu.edu.cn. Automatic query reformulation is a widely utilized technology for enriching user requirements and enhancing the outcomes of code search. It can be conceptualized as a machine translation task, wherein the objective is to rephrase a given query into a more comprehensive alternative. While showing promising results, training such a model typically requires a large parallel corpus of query pairs (i.e., the original query and a reformulated query) that are confidential and unpublished by online code search engines. This restricts its practicality in software development processes. In this paper, we propose a self-supervised query reformulation method that does not rely on any parallel query corpus. Inspired by pre-trained models, our method treats query reformulation as a masked language modeling task conducted on an extensive unannotated corpus of queries. It extends T5 (a sequence-to-sequence model based on Transformer) with a new pre-training objective named corrupted query completion (CQC), which randomly masks words within a complete query and trains T5 to predict the masked content. Subsequently, for a given query to be reformulated, our method identifies potential locations for expansion and leverages the pre-trained T5 model to generate appropriate content to fill these gaps. The selection of expansions is then based on the information gain associated with each candidate. Evaluation results demonstrate that our method outperforms unsupervised baselines significantly and achieves competitive performance compared to supervised methods. Self-Supervised Query Reformulation for Code Search Xiaodong Gu August 1, 2023 =================================================== § INTRODUCTION Searching through a vast repository of source code has been an indispensable activity for developers throughout the software development process <cit.>. The objective of code search is to retrieve and reuse code snippets from existing projects that align with a developer's intent expressed as a natural language query <cit.>. However, it has been observed that developers often struggle to articulate their information needs optimally when submitting queries <cit.>. This difficulty may arise from factors such as inconsistent terminology used in the query or a limited understanding of the specific domain in which information is sought. Developers may constantly reformulate their queries until the queries reflect their real query intention and retrieve the most relevant code snippets. Studies <cit.> have shown that approximately 24.62% of queries on Stack Overflow have undergone reformulation. Moreover, developers, on average, reformulate their queries 1.46 times before selecting a particular result to view. One common solution to this problem is automatic query reformulation, namely, rephrasing a given query into a more comprehensive alternative <cit.>.
A natural first way to accomplish this objective is to replace words in a query with synonyms based on external knowledge such as WordNet and thesauri <cit.>. However, this methodology restricts the expansion to the word level. Besides, gathering and maintaining domain knowledge is usually costly. The knowledge base might always lag behind the fast-growing code corpora. There have been other attempts that consider pseudo-relevance feedback, i.e., emerging keywords in the initial search results <cit.>. They search for an initial set of results using the original query, select new keywords from the top k results using TF-IDF weighting, and finally expand the original query with the emerging keywords. Nevertheless, besides being limited to word-level expansion, this approach also risks expanding queries with noisy words. Hence, the expanded query can be semantically irrelevant to the original one. In recent years, driven by the prevalence of deep learning, researchers have pursued the idea of casting query reformulation as a machine translation task: the original query is taken as input to a neural sequence-to-sequence model and is translated into a more comprehensive alternative <cit.>. Despite showing substantial gains, such models need to be trained on a large-scale parallel corpus of query pairs (i.e., the original query and a reformulated query). Unfortunately, acquiring large sets of query pairs is infeasible given that real-world search engines (e.g., Google and Stack Overflow) do not publicly release the evolution of queries. For example, the state-of-the-art method SEQUER <cit.> relies on a confidential parallel dataset that can hardly, if at all, be accessed by external researchers. Replicating the performance of SEQUER becomes challenging or even impossible for those who lack access to such privileged datasets. This lack of replicability hampers the wider adoption and evaluation of the method by the research community. In this paper, we present a self-supervised query reformulation method that achieves performance competitive with state-of-the-art supervised approaches while not relying on the availability of parallel query data for supervision. Inspired by pre-trained models, our approach automatically acquires the supervision for query expansion through self-supervised training on a large-scale corpus of code comments. Specifically, we design a new pre-training objective called corrupted query completion (CQC) to simulate the query expansion process. CQC masks keywords in long, comprehensive queries and asks the model to predict the missing content. In such a way, the trained model is encouraged to expand incomplete queries with keywords. Our approach leverages T5 <cit.>, the state-of-the-art language model for code. The methodology involves a two-step process. Firstly, T5 is pre-trained using the CQC objective on a vast unannotated corpus of queries. This pre-training phase aims to equip T5 with the ability to predict masked content within queries. When presented with a query to be reformulated, our method enumerates potential positions within the query that can be expanded. It then utilizes the pre-trained T5 model to generate appropriate content to fill these identified positions. Subsequently, it employs an information gain criterion to select the expansion positions that contribute the most valuable information to the original query, resulting in the reformulated query.
We evaluate our approach on two search engines through both automatic and human evaluations, and compare it with state-of-the-art approaches, including SEQUER <cit.>, NLP2API <cit.>, LuSearch <cit.>, and GooglePS <cit.>. Experimental results show that our method improves the MRR score by over 50% compared with the unsupervised baselines and achieves performance competitive with the fully supervised approach. Human evaluation reveals that our approach can generate more natural and informative queries, with improvements of 19.31% and 26.35% over the original queries, respectively. Our contributions are summarized as follows: * To the best of our knowledge, ours is the first self-supervised query reformulation approach, which does not rely on a parallel corpus of reformulations. * We propose a novel information gain criterion to select the pertinent expansion positions that contribute the most valuable information to the original query. * We perform automatic and human evaluations on the proposed method. Quantitative and qualitative results show significant improvements over the state-of-the-art approaches. § BACKGROUND §.§ Code Search Code search is a technology to retrieve and reuse code from pre-existing projects <cit.>. As with general-purpose search engines, developers often encounter challenges when attempting to implement specific tasks. In such scenarios, they can leverage a code search engine by submitting a natural language query. The search engine then traverses an extensive repository of code snippets collected from various projects, identifying code that is semantically relevant to the given query. Code search can be broadly classified into two categories: search within the context of a specific project or open search across multiple projects. The search results may include individual code snippets, functions, or entire projects. §.§ Query Reformulation Query reformulation provides an effective way to enhance the performance of search engines <cit.>. The quality of queries is often a bottleneck of the search experience in web search <cit.>. This is because the initial query entered by the user is often short, generic, and ambiguous. Therefore, the search results can hardly meet the specific intent of the user. This requires the user to revise their query through multiple rounds. Query reformulation is a technology that reformulates users' queries into more concrete and comprehensive alternatives <cit.>. Figure <ref> shows an example of query reformulation in the Google search engine. When a user enters the query “convert string” in the search box, there may exist multiple possible intents, such as “convert something to a string” or “convert a string to something”. Additionally, the specific programming language for implementing the conversion function is not specified. In such cases, conventional search engines like Google face challenges in accurately determining the user's true intent. To address this issue, search engines often employ tools like the Google Prediction Service (GooglePS). GooglePS automatically suggests multiple reformulations of the original query. These reformulations provide alternative options that the user can consider to refine their search. By presenting a range of reformulations, users can narrow down their search target by selecting the most relevant reformulation that aligns with their intended query. This process helps users find more precise and tailored search results.
Query reformulation broadly encompasses various techniques, including query expansion, reduction, and replacement <cit.>. While query expansion involves augmenting the original query with additional information, such as synonyms and related entities, to enhance its content, query reduction focuses on eliminating ambiguous or inaccurate expressions. Query replacement, on the other hand, involves substituting incorrect or uncommon keywords in the original query with more commonly used and precise terms. Among these types, query expansion constitutes the predominant approach, accounting for approximately 80% of real-world search scenarios <cit.>. §.§ Self-Supervised Learning and Pre-trained Models Supervised learning is a class of machine learning methods that train algorithms to classify data or predict outcomes by leveraging labeled datasets. It is known to require expensive manual labeling, and the bottleneck of data annotation further causes generalization errors <cit.>, spurious correlations <cit.>, and adversarial attacks <cit.>. Self-supervised learning alleviates these limitations by automatically mining supervision signals from large-scale unsupervised data using auxiliary tasks <cit.>. This enables a neural network model to learn rich representations without the need for manual labeling <cit.>. For example, the cloze test masks words in an input sentence and asks the model to predict the original words. In this way, the model can learn the semantic representations of sentences from large unlabeled text corpora. Pre-trained language models (PLMs) such as BERT <cit.>, GPT <cit.>, and T5 <cit.> are the most typical self-supervised learning techniques. A PLM aims to learn generic language representations on a large unlabeled corpus and then transfer them to specific tasks through fine-tuning on labeled task-specific datasets. This requires the model to create self-supervised learning objectives from the unlabeled corpora. Take the Text-to-Text Transfer Transformer (T5) <cit.> in Figure <ref> as an example. T5 employs the Transformer <cit.> architecture, where an encoder accepts a text as input and outputs encoded vectors, and a decoder generates the target sequence based on the encodings. To efficiently learn text representations, T5 introduces three self-supervised pre-training tasks, namely masked span prediction, masked language modeling, and corrupted sentence reconstruction. By pre-training on large-scale text corpora, T5 achieves state-of-the-art performance in a variety of NLP tasks, such as sentence acceptability judgment <cit.>, sentiment analysis <cit.>, paraphrase similarity calculation <cit.>, and question answering <cit.>. § METHOD The primary focus of this paper is on query expansion, the most typical (accounting for 80%) technique for query reformulation. Query expansion aims to insert key phrases into a query, thereby making it more specific and comprehensive. Essentially, query expansion addresses a pinpoint-then-expand problem, wherein the goal is to identify potential information gaps within a given query and generate a set of keywords to fill those gaps. Inspired by the masked language modeling (MLM) task introduced by pre-trained models like BERT <cit.>, our proposed method adopts a self-supervised idea. Specifically, we mask keywords within complete code search queries and train a model to accurately predict and recover the masked information.
This allows the model to learn the underlying patterns and relationships within the queries, enabling it to generate meaningful expansions for query reformulation. §.§ Overview Figure <ref> shows the main framework as well as the usage scenario of our method. The pipeline involves two main phases: an offline pre-training phase and an online expansion phase. During the pre-training phase, our method continually pre-trains a PLM, namely T5 <cit.>, with the newly designed corrupted query completion task on an unlabelled corpus of long queries (<ref>). This enables T5 to learn how to expand incomplete queries into longer ones. At runtime, when a user presents a query for code search, our method employs a two-step process for query expansion. Firstly, it enumerates candidate positions within the query that can be expanded and utilizes the pre-trained T5 model to generate content that fills these positions (as discussed in <ref>). Following the expansion step, it proceeds to select the positions that offer the highest information gain after the expansion (introduced in <ref>). This selection process ensures that the most valuable and informative expansions are chosen, thereby enhancing the reformulated query in terms of its relevance and comprehensiveness. Finally, once the query has been expanded, users conduct code search by selecting the most relevant reformulation that aligns with their intended query. Our approach specifically focuses on the function-level code search scenario, which involves the retrieval of relevant functions from a vast collection of code snippets spanning multiple projects. The following sections elaborate on each step of our approach. §.§ Pre-training T5 with Corrupted Query Completion We start by pre-training a PLM which can predict the missing span in a query. We take the state-of-the-art T5 <cit.> as the backbone model since it has a sequence-to-sequence architecture and is more compatible with generative tasks. Besides, T5 is specialized in predicting masked spans (i.e., sequences of words). To enable T5 to learn how to express a query more comprehensively, we design a new pre-training objective called corrupted query completion (CQC) using a large-scale corpus of unlabelled queries. Similar to the MLM objective, CQC randomly masks a span of words in the query and asks the model to predict the masked span. More specifically, given an original query q=(w_1, …, w_n) that consists of a sequence of n words, CQC masks out a span of 15%×n consecutive words starting from a randomly selected position i, namely, s_i: j=(w_i, …, w_j), and replaces it with a mask token. Then, the corrupted query is taken as input to T5, which predicts the words in the masked span. We use the teacher-forcing strategy for pre-training. When predicting a word in the corrupted span, the context visible to the model consists of two parts: 1) the uncorrupted words in the original query, denoted as q_\ s_i: j=(w_1, …, w_i-1, w_j+1, ⋯, w_n); and 2) the ground-truth words appearing before the current position w_t, denoted as w_i: t-1. We pre-train the model using the cross-entropy loss, namely, minimizing ℒ_cqc=-∑_t=i^jlog p(w_t| q_\ s_i: j, w_i: t-1). Figure <ref>(a) shows an example of the CQC task. For a query “how to reverse an array in Java” taken from the training corpus, the algorithm corrupts the query by replacing the modifier “in Java” with a mask token. The corrupted query is taken as input to T5, which predicts the original masked tokens “in Java”.
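The CQC corruption step can be prototyped directly with the HuggingFace T5 implementation, as sketched below. The sentinel-token format, the `t5-base` checkpoint, and the helper function are illustrative assumptions rather than the authors' exact preprocessing code; passing `labels` to the model lets it apply teacher forcing and the cross-entropy loss ℒ_cqc defined above internally.

```python
# Minimal sketch of one CQC pre-training step with HuggingFace T5 (illustrative).
import random
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def corrupt_query(query: str, mask_ratio: float = 0.15):
    """Mask a random span of ~15% of the words and build the T5-style target."""
    words = query.split()
    span_len = max(1, int(mask_ratio * len(words)))
    start = random.randrange(0, len(words) - span_len + 1)
    masked_span = words[start:start + span_len]
    # T5 marks corrupted spans with sentinel tokens such as <extra_id_0>.
    corrupted = words[:start] + ["<extra_id_0>"] + words[start + span_len:]
    target = ["<extra_id_0>"] + masked_span + ["<extra_id_1>"]
    return " ".join(corrupted), " ".join(target)

source, target = corrupt_query("how to reverse an array in Java")
inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # cross-entropy over the masked span
loss.backward()
```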
§.§ Expanding Candidate Spans The pre-trained T5 model is then leveraged to expand queries. We consider a query that needs to be expanded as an incomplete query in which a span of words is missing at some position (marked by a mask token); we want the model to generate a sequence of words to fill in the span. This is exactly the masked span prediction problem that T5 aims to solve. Therefore, we leverage the pre-trained T5 to expand the incomplete queries. However, a query with n words has n+1 positions for expansion. Therefore, we design a best-first strategy: we enumerate all the n+1 positions as masked spans, perform the CQC task, and select the top-k positions whose predictions have the highest information gain. Specifically, given an original query q={w_1,w_2,...,w_n}, our method enumerates the n+1 positions between words. For each position, it inserts a mask token. This results in n+1 candidate masked queries. Each masked query q̃ (the original query with a mask token inserted) is taken as input to the pre-trained T5. The decoder of T5 generates a span of words s=[v_1,…,v_m] for the mask token by sampling words according to the predicted probabilities. Finally, our method replaces the mask token with the generated span s, yielding the reformulated query. Figure <ref>(b) shows an example. Given the first two masked queries, the T5 model generates “in Java” and “integer” for the masked tokens, respectively. The former refers to the language used to implement the function, while the latter refers to the data type of the target data structure. The reformulated queries supplement the original queries with additional information, revealing potential user intents from different aspects. §.§ Selecting Expansion Positions A query with n words has n+1 candidate positions for expansion, but not all of them need to be expanded. Hence, we must determine which positions are the most appropriate to expand. Our method selects the top-k candidate queries that have the most missing information in the masked span. The resulting expanded queries are more likely to gain information after span filling. The key issue here is how to measure the information gain after filling each span. In our approach, we define the information gain of a span expansion as the negative entropy over the predicted probability distribution of the generated words <cit.>. In information theory, entropy <cit.> characterizes the uncertainty of an event in a system. Suppose the outcome of an event follows a distribution (p_1,…,p_n); the entropy of the event can be computed as -∑_i=1^np_ilogp_i. The lower the entropy, the more certain the event, and thus the more information its observation brings. This can be analogized to the span prediction problem: when the probability distribution of the generated words over the vocabulary is uniform, the entropy (uncertainty) becomes high because every word is equally likely to be generated. By contrast, a smaller entropy means that the likelihoods of generating individual words differ greatly and thus the certainty of the generation is high. The lower the entropy, the higher the certainty about which words the span should contain, and the more information the expansion brings to the query. If the span contains multiple words, we can measure the information gain of the span prediction using their average negative entropy. For each candidate query q̃, we predict a span s=[v_1,...,v_m] using the pre-trained T5 model: p(v_i)=T5(q̃, v_<i), i=1,…,m, where each v_i denotes a sub-token in the predicted span.
Each prediction p(v_i) is a probability distribution (p_1,…,p_|V|) over the vocabulary of the entire set of queries in the training corpus, indicating the likelihood of each token in the vocabulary appearing in the span. Our next step is to compute the information gain of each expansion using the negative entropy: for each sub-token v_i, the information gain can be calculated by IG(v_i) = -H(v_i)=∑_v=1^|V|p(v_i=v)log p(v_i=v). The higher the IG, the more certain the prediction is. For a predicted span s = [v_1,…,v_m] with m sub-tokens, we compute the average IG over all its tokens, namely, IG(s) = 1/m∑_i=1^mIG(v_i). Finally, we select the top-k expansions with the highest information gain and then replace the mask token with the predicted span. The top-k expansions are provided to users for choosing the most relevant one that aligns with their intention. The specific details of the method are summarized in Algorithm <ref>. Figure <ref>(c) shows an example query expansion. For a given query “convert string to list” to be expanded, our method first enumerates the five expansion positions of the original query, inserting a mask token into each one. Next, the pre-trained T5 model takes these candidate queries as input, and the information gain of the prediction is calculated for each candidate query. Finally, our method recommends the top-2 candidate queries (here k=2) with the highest information gain (i.e., minimum entropy values) for users to choose from. § EXPERIMENTAL SETUP §.§ Research Questions We evaluate the performance of our method in query reformulation through both automatic and human studies. We further explore the impact of different configurations on performance. Specifically, we address the following research questions: * RQ1: How effective is our method in query reformulation for code search? We apply query reformulation to Lucene- and CodeBERT-based code search engines and compare the search accuracy before and after query reformulation by various approaches. * RQ2: Are the queries reformulated by our method more informative and easier to understand? In addition to the automatic evaluation of code search performance, we also want to assess the intrinsic quality of the reformulated queries. To this end, we perform a human study to assess whether the reformulated queries contain more information than the original ones and meanwhile conform to human reading habits. * RQ3: How do different configurations impact the performance of our method? To obtain better insight into our method, we investigate its performance under different configurations. We first investigate the effect of different positioning strategies, i.e., what the best criterion is for selecting the expansion position in the original query. We are also interested in the number of expansions for each query. §.§ Datasets We pre-train, fine-tune, and test all models using two large code search corpora: CODEnn <cit.> and CodeXGLUE <cit.>. They are non-overlapping and thus alleviate the duplicated code issue between pre-training and downstream tasks <cit.>. Dataset for Pre-training. We pre-train T5 using code comments from the large-scale CODEnn dataset. CODEnn has been specifically processed for code search. Compared to CodeSearchNet <cit.>, this dataset has a much larger volume (i.e., more than 10 million queries). We take the first 1 million for pre-training. Dataset for Code Search. We use the code search dataset of CodeXGLUE, which provides queries and the corresponding code segments from multiple projects.
In this dataset, each record has five attributes, including the code segment (in Python), the repository URL, the code tokens, the doc string (i.e., the NL description of the function), and the index of the code segment. We split the original dataset into training and test sets, with 251,820 and 19,210 samples respectively. The training set is used to fine-tune the CodeBERT search engine, while the test set is used as the search pool from which a search engine retrieves code. All queries in the test phase are evaluated against the same pool. The statistics of our datasets are summarized in Table <ref>. §.§ Implementation Details Implementation of the pre-trained model. Our model is implemented based on the T5 model from the open-source collection of HuggingFace <cit.>. We use the default tokenizer and input-output lengths. Since HuggingFace does not provide an official PyTorch script for T5 pre-training, we implement the pre-training script based on the PyTorch Lightning framework <cit.>. We initialize T5 with the default checkpoint provided by HuggingFace and continually pre-train it with our proposed CQC task. The pre-training takes 3 epochs with a learning rate of 1e-3. The batch size is set to 32 in all experiments. We set m and k in Algorithm 1 to 10 and 3 respectively. Since T5 is non-deterministic, our method can generate different queries for an input query at each run. To guarantee the same output at each run, we fix the random seeds to 101 and reload the same state-dict of T5. Implementation of the search engines. We experiment with two search engines based on CodeBERT <cit.> and Lucene <cit.>. 1) A CodeBERT-based search engine: As our approach is built on pre-trained models, we first verify the effectiveness on a pre-training based search engine. Specifically, we test our approach on the default search engine by CodeBERT. We reuse the implementation of the code search (i.e., Text-Code) task in CodeXGLUE <cit.>. Then, we fine-tune the CodeBERT-base checkpoint on the training set from CodeXGLUE for 2 epochs with a constant learning rate of 5e-5. 2) The Lucene search engine: Besides the pre-training based search engine, we also test our approach on a classic search engine named Lucene <cit.>. Lucene is a keyword-based search library that is widely adopted in a variety of code search engines and platforms <cit.>. We implement the Lucene search engine based on the Lucene core in Java. We extract the code segments from the test dataset of CodeXGLUE, parse them with Lucene's StandardAnalyzer, and build their indexes. We train all models on a Linux server with Ubuntu 18.04.1 and an Nvidia GeForce RTX 2080 Ti GPU. §.§ Baselines We compare our method with the state-of-the-art query reformulation approaches, including a supervised method called SEQUER, and unsupervised methods such as NLP2API, LuSearch, and GooglePS. 1) Supervised <cit.>: a supervised learning approach for query reformulation named SEQUER. SEQUER leverages the Transformer <cit.> to learn the sequence-to-sequence mapping between the original and the reformulated queries. The method relies on a confidential parallel dataset of query evolution logs provided by Stack Overflow. The dataset contains internal HTTP requests processed by Stack Overflow’s web servers within one year. 2) NLP2API <cit.>: a feedback-based approach that expands queries with recommended APIs. NLP2API automatically identifies relevant API classes collected from Stack Overflow using keywords in the initial search results and then expands the query with these API classes.
3) LuSearch <cit.>: a knowledge-based approach that expands a query with synonyms from WordNet <cit.>. The queries reformulated by LuSearch follow Lucene's structural syntax; they are overly long and contain many Lucene-specific keywords. Due to the constraint on the input length of the T5 model (i.e., 512 tokens), we keep the synonyms and remove keywords about attribute names in Lucene such as “methbody” and “methname”. 4) GooglePS <cit.>: the Google query prediction service that gives real-time suggestions on query reformulation. We directly enter test queries into the Google search box and manually collect the reformulated queries in RQ2 and the case study. We do not compare our method with GooglePS in RQ1 because its search API is unavailable to us for processing a large number of queries. Besides, our baseline model has demonstrated a great improvement over it in terms of MRR <cit.>. §.§ Evaluation Metrics The ultimate goal of query reformulation is to enhance search accuracy by using the reformulated queries. In our experiments, we first evaluate the search accuracy measured by the widely used mean reciprocal rank (MRR). MRR is defined as the average of the reciprocal ranks (i.e., the multiplicative inverse of the target post’s rank) of the search results for all the queries, namely, MRR=1/|Q|∑_i=1^|Q|1/rank_i where Q refers to a set of queries and rank_i stands for the position of the first relevant document for the i-th query. A higher MRR indicates better search performance. Besides this indirect criterion of search performance, query reformulation also aims to help users write more precise and high-quality queries. Therefore, we further define two metrics to measure the intrinsic quality of the reformulated queries: * Informativeness measures how much information a query contains that contributes to code search. We use this metric to evaluate how much information gain the reformulation brings to the original query. * Naturalness measures whether a query is grammatically correct and follows human reading habits. With this metric, we want the reformulation to be semantically coherent with the original query. Both metrics range from 1 to 5. Higher scores indicate better performance. § RESULTS §.§ RQ1: Performance on Code Search As the ultimate goal of query reformulation, we first evaluate whether the reformulated queries by lead to better code search performance. We experiment with both search engines and compare the improvement of MRR scores before and after query reformulation by various methods. For each query, we calculate its similarity to code instances in the test set. The top 100 instances with the highest similarity are selected as the search results. Each query has one ground-truth code instance in the test set. We calculate the MRR scores by comparing the results with the ground-truth code. Then, for each method, we select the first three reformulations and report the highest MRR score among them. Since the purpose of query reformulation is to hit the potential search intent of the user, we believe that the results with the maximum MRR among the top-k reformulations are the most likely to satisfy this goal and are therefore considered meaningful. The experimental results are presented in Table <ref>. enhances the MRR by 9.90% and 12.23% on the two search engines, respectively. Compared to the two unsupervised baselines, LuSearch and NLP2API, it yields an improvement of over 50% in search accuracy.
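For reference, the MRR metric used throughout this comparison can be computed with a few lines of code; this is a minimal sketch with our own variable names, not the evaluation script used in the experiments.

def mean_reciprocal_rank(first_relevant_ranks):
    # first_relevant_ranks[i] is the 1-based rank of the ground-truth code for query i.
    return sum(1.0 / r for r in first_relevant_ranks) / len(first_relevant_ranks)

# Example: the ground-truth snippets of three queries are ranked 1st, 4th, and 10th.
print(mean_reciprocal_rank([1, 4, 10]))   # 0.45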
More surprisingly, achieves results competitive with the supervised counterpart even though it is not given any annotations. This indicates that our self-supervised approach can help developers write high-quality queries, which ultimately leads to better code search results. We notice that the performance of is slightly worse than that of the supervised counterpart with the Lucene search engine. This is probably because the supervised approach applies fixed expansion patterns to queries and therefore tends to expand queries with common, fixed keywords. These keywords can be easily hit by search engines based on keyword matching (e.g., Lucene). In contrast, does not use fixed expansion patterns and thus produces a greater variety of keywords, on which Lucene cannot perform effective keyword matching. Instead, CodeBERT, which models the semantic relationships between keywords, can understand queries expanded by . Another interesting point is that LuSearch and NLP2API do not contribute to the CodeBERT-based search engine. This is probably because both approaches append words to the tail of the original query, hence perturbing the semantics of the original query when deep-learning-based search engines such as CodeBERT are used. §.§ RQ2: Qualitative Evaluation To evaluate the intrinsic quality of the reformulated queries, we perform a human study with programmers. Four participants from the authors' institution, but from different labs, were recruited through invitations. All participants are postgraduates in the area of software engineering or natural language processing, each with over four years of programming experience. We took the first 100 queries from the test set in RQ1 and reformulated them using various methods, including SEQUER, LuSearch, NLP2API, , and GooglePS. We assigned 100 search tasks based on these 100 queries to the annotators and presented the reformulated queries produced by the various approaches. The annotators were asked to search code using Google and rate the reformulations (on a scale of 1 to 5) in terms of informativeness and naturalness, without knowing which tool produced each reformulation. Table <ref> summarizes the quality ratings by the annotators. Overall, achieves the largest improvement in terms of naturalness (19%) and informativeness (26%), showing that its reformulated queries are more human-like. In comparison, GooglePS and SEQUER yield much smaller improvements. LuSearch and NLP2API even decrease the naturalness and informativeness, as they directly append relevant APIs or synonyms to the tail of the original query and thus break the coherence of the query. In particular, compared with the strong baseline SEQUER, obtains a 17.14% greater improvement in terms of informativeness, while slightly outperforming it in terms of naturalness. The main reason could be that SEQUER applies three reformulation patterns, i.e., deleting unimportant words, rewriting typos, and adding keywords, of which only the last increases the informativeness of the query. Besides, SEQUER often adds keywords in a monotonous pattern, such as appending “in Java” at the tail of the queries; meanwhile, our method can generate diverse spans at the proper positions of the original queries. Consequently, the reformulated queries by our approach are more informative. §.§ RQ3: Performance under Different Configs In this experiment, we evaluate the performance of in code search under different configurations with the CodeBERT search engine.
We vary the positioning strategy and the number of candidate positions in order to search for the optimal configuration. Positioning Strategies. Selecting expansion positions is critical to the performance. We compare three strategies, including the entropy-based criterion: * RAND randomly selects k positions in the original query for expansion. * PROB selects the top-k positions that have the maximum probability while predicting their missing content. * ENTR selects the top-k positions that have the minimum entropy while predicting their missing content. The results are shown in Table <ref>. The PROB and ENTR strategies bring a large improvement (around 10%) to the code search performance. This indicates that both criteria correctly quantify the missing information at various positions. Between these two strategies, ENTR performs slightly better than PROB, probably because ENTR considers the entire distribution of the prediction while PROB just considers the maximum one. As expected, the RAND strategy causes a degradation of 6.68% in code search performance because it selects expansion positions without any guidance, which results in incorrect or redundant expansions. Number of Candidate Positions. We also investigate how many expansions lead to the best performance. We vary the number of candidate positions from 1 to 3 and verify their effects on performance. Table <ref> shows the results. We observe that increasing the number of candidate positions has a positive effect on performance. The best performance is achieved when 3 candidate positions are expanded. Meanwhile, only one candidate position can have a negative effect on query reformulation. The reason can be that our method reformulates the original query with a variety of query intents. A larger number of candidate positions can hit more user intents and hence leads to better search accuracy. §.§ Qualitative Analysis To further understand the capability of , we qualitatively examine the reformulation samples by various methods. Four examples are provided in Table <ref>. Example 1 compares the reformulation for the query “The total CPU load for the Synology DSM” by various methods. The original query aims to find the code that monitors the CPU load of a DSM. The reformulated query by is more precise to the real scenario since CPU load and memory usage are often important indicators that need to be monitored simultaneously. In contrast, SEQUER only removes the “total” at the beginning of the query during reformulation. LuSearch appends the query with the synonyms of the keyword “total” such as “sum” and “aggreg”. Meanwhile, NLP2API appends the query with APIs that are relevant to CPU and operating system, which are more useful compared to those of SEQUER and LuSearch. GooglePS appends the word “7” after “DSM” to indicate the version of DSM, which helps to narrow the range of possible solutions. In Example 2, the original query “Fetch the events” is incomplete and ambiguous because the user does not specify what events to fetch and where to fetch them from. The reformulated query by is more informative than that by SEQUER: specifies the source of events, i.e., from the server, which makes the query more concrete and understandable; meanwhile, SEQUER only restricts the programming language of the target code, without alleviating the ambiguity of the original query. LuSearch expands the synonyms of “Fetch” such as “get” and “convei” to the tail of the query. NLP2API adds APIs relevant to events to the original query. 
But these synonyms and APIs have limited effect on improving search accuracy. GooglePS specifies the requirement of the query to be a service by adding “Service worker” at the beginning of the original query. But such a specification has a limited effect on narrowing the search space. Example 3 shows the results for the query “Get method that raises MissingSetting if the value was unset.” The reformulated query by recognizes that MissingSetting is an exception and prepends it with the exception keyword. This facilitates the search engine to find code with similar functionality. In contrast, SEQUER just specifies the programming language of the target code. Compared to and SEQUER, LuSearch and NLP2API only append irrelevant APIs and synonyms to the original query. Hence, the semantics of the query are broken. GooglePS fails to reformulate such a long query. Instead, it returns a search query from other users that contains the keyword “unset”. The returned results by GooglePS discard much information from the original query, making deviate from the user intent. Finally, the last example shows a worse case. Although achieves the new state-of-the-art, it might occasionally produce error reformulations. prepends a modifier “a list of” in front of the word “values”, which conflicts with “dictionary” in the given query and thus hampers the code search performance. This is probably because the word “values” occurs frequently in the training corpus and often refers to elements in arrays and lists. Therefore, tends to expand it with modifiers such as “all the” and “a list of”. Comparably, SEQUER does nothing to the original query. LuSearch concerns “Load” as the keyword and expands it. NLP2API adds APIs relevant to the key and value of the dictionary data structure, which results in better search performance. GooglePS cannot handle such a long query and just gives an irrelevant reformulation. These examples demonstrate the superiority of in query reformulation for code search, affirming the strong ability of both position prediction and span generation. In future work, we will conduct empirical research on the error types, and improve our model for the challenging reformulations. § DISCUSSION §.§ Strength of over fully supervised approaches? One debatable question is what are the benefits of since it does not beat the SOTA fully-supervised approach in terms of the code search metrics. Fully-supervised methods such as SEQUER achieve the state-of-the-art performance by sequence-to-sequence learning on a parallel query set. However, acquiring such parallel queries is infeasible since the query evolution log by search engines such as Google and Stack Overflow is not publicly available. Besides, the sequence-to-sequence approach tends to learn generic reformulation patterns, , specifying the programming language or deleting a few irrelevant words. Compared to SEQUER, does not rely on the supervision of parallel queries, instead, it is trained on a nonparallel dataset (queries only) that does not need to collect the ground-truth reformulations. This significantly scales up the size of training data, and therefore allows the model to learn diverse reformulation patterns from a large number of code search queries. provides an alternative feasible and cheap way of achieving the same performance. §.§ Limitations and Threats We have identified the following limitations and threats to our method: Patterns of query reformulation. 
In this work, we mainly explore query expansion, the most typical class of query reformulation. While query expansion is only designed to supplement queries with more information, redundant or misspelled words in the query can also hamper the code search performance, which cannot be handled by our method. Thus, in future work, we will extend our approach to support more reformulating patterns, including query simplification and modification. For example, in addition to only inserting a token in the CQC task, we can also replace the original words with a token or simply delete a token and ask the pre-trained model to predict the deletion position. A classification model can also be employed to decide whether to add, delete or modify keywords in the original query. Code comments as queries. As obtaining real code queries from search websites is difficult, we use code comments from code search datasets to approximate code queries in building and evaluating our model. Although code comments are widely used for training machine learning models on NL-PL matching <cit.>, they may not represent the performance of queries in real-world code search engines. § RELATED WORK §.§ Query Reformulation for Code Search Query reformulation for code search has gained much attention in recent years <cit.>. There are approximately three categories of technologies, namely, knowledge-based, feedback-based, and deep learning based approaches. The knowledge-based approaches aim to expand or revise the initial query based on external knowledge such as WordNet <cit.> and thesauri. For example, Howard  <cit.> reformulated queries using semantically similar words mined from method signatures and corresponding comments in the source code. Satter and Sakib <cit.> proposed to expand queries with co-occurring words in past queries mined from code search logs. Yang and Tan <cit.> constructed a software-specific thesaurus named SWordNet by mining code- comment mappings. They expanded queries with similar words in the thesaurus. Lu  <cit.> proposed LuSearch which extends queries with synonyms generated from WordNet. Unlike knowledge-based approaches, feedback-based approaches identify the possible intentions of the user from the initial search results and use them to update the original query. For example, Rahman and Roy <cit.> proposed to search Stack Overflow posts using pseudo-relevance feedback. Their approach identifies important API classes from code snippets in the posts using TF-IDF, and then uses the top-ranked API classes to expand the original queries. Hill  <cit.> presented a novel approach to extract natural language phrases from source code identifiers and hierarchically classify phrases and search results, which helps developers quickly identify relevant program elements for investigation or identify alternative words for query reformulation. Recently, deep learning has advanced query reformulation significantly <cit.>. Researchers regard query reformulation as a machine translation task and employ neural sequence-to-sequence models. For example, Cao  <cit.> trained a sequence-to-sequence model with an attention mechanism on a parallel corpus of original and reformulated queries. The trained model can be used to reformulate a new query from Stack Overflow. While deep learning based approaches show more promising results than previous approaches, they rely on the availability of large, high-quality query pairs. 
For example, Cao 's work requires the availability of query pairs within the same session in the search logs of Stack Overflow. But such logs are confidential and unavailable to researchers. This restricts their practicality in real-world code search. Unlike these works, is a data-driven approach based on self-supervised learning. expands queries by pre-training a Transformer model with corrupt query completion on large unlabeled data. Results demonstrate that achieves competitive results to that of fully-supervised models without requiring data labeling. §.§ Code Intelligence with Pre-trained Language Models In recent years, there is an emerging trend in applying pre-trained language models to code intelligence <cit.>. For example, Feng <cit.> pre-trained the CodeBERT model based on the Transformer architecture using programming and natural languages. CodeBERT can learn the generic representations of both natural and programming languages that can broadly support NL-PL comprehension tasks (, code defect detection, and natural language code search) and generation tasks (, code comment generation, and code translation). Wang  <cit.> proposed CodeT5, which extends the T5 with an identifier-aware pre-training task. Unlike encoder-only CodeBERT, CodeT5 is built upon a Transformer encoder-decoder model. It achieves state-of-the-art performance on both code comprehension and generation tasks in all directions, including PL-NL, NL-PL, and PL-PL. To the best of our knowledge, is the first attempt to apply PLM in query reformulation, which aims to leverage the knowledge learned by PLM to expand queries. § CONCLUSION In this paper, we propose , a novel self-supervised approach for query reformulation. formulates query expansion as a masked query completion task and pre-trains T5 to learn general knowledge from large unlabeled query corpora. For a search query, guides T5 through enumerating multiple positions for expansion and selecting positions that have the best information gain for expansion. We perform both automatic and human evaluations to verify the effectiveness of . The results show that generates useful and natural-sounding reformulated queries, outperforming baselines by a remarkable margin. In the future, we will explore other reformulation patterns such as query simplification and modification besides query expansion. We also plan to compare the performance of our approach with large language models such as GPT-4. § DATA AVAILABILITY Our source code and experimental data are publicly available at https://github.com/RedSmallPanda/SSQRhttps://github.com/RedSmallPanda/SSQR. § ACKNOWLEDGMENTS This research is supported by National Natural Science Foundation of China (Grant No. 62232003, 62102244, 62032004) and CCF-Tencent Open Research Fund (RAGR20220129). ACM-Reference-Format
http://arxiv.org/abs/2307.02152v2
20230705094958
Suboptimal subspace construction for log-determinant approximation
[ "Zongyuan Han", "Wenhao Li", "Yixuan Huang", "Shengxin Zhu" ]
math.NA
[ "math.NA", "cs.NA", "65C05, 65D32, 65F15, 65F60, 65G99, 65Y20, 68Q10, 68Q87" ]
Zongyuan Han School of Mathematical Sciences, Beijing Normal University and Laboratory of Mathematics and Complex Systems, Ministry of Education, Beijing 100875, P.R. China Wenhao Li Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Department of Applied Mathematics, BNU-HKBU United International College, Zhuhai 519087, P.R. China Yixuan Huang School of Mathematical Sciences, Beijing Normal University and Laboratory of Mathematics and Complex Systems, Ministry of Education, Beijing 100875, P.R. China 41Shengxin Zhu Shengxin.Zhu@bnu.edu.cn Research Center for Mathematics, Beijing Normal University, Zhuhai 519087, P.R. China; Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Department of Applied Mathematics, BNU-HKBU United International College, Zhuhai 519087, P.R. China Suboptimal subspace construction for log-determinant approximation Zongyuan Han Wenhao Li Yixuan Huang Shengxin Zhu Received: date / Accepted: date ================================================================== Variance reduction is a crucial idea for Monte Carlo simulation and the stochastic Lanczos quadrature method is a dedicated method to approximate the trace of a matrix function. Inspired by their advantages, we combine these two techniques to approximate the log-determinant of large-scale symmetric positive definite matrices. Key questions to be answered for such a method are how to construct or choose an appropriate projection subspace and derive guaranteed theoretical analysis. This paper applies some probabilistic approaches including the projection-cost-preserving sketch and matrix concentration inequalities to construct a suboptimal subspace. Furthermore, we provide some insights on choosing design parameters in the underlying algorithm by deriving corresponding approximation error and probabilistic error estimations. Numerical experiments demonstrate our method's effectiveness and illustrate the quality of the derived error bounds. Mathematics Subject Classifications (2020) 65C05· 65D32· 65F15· 65F60 · 65G99 · 65Y20 · 68Q10 · 68Q87 § INTRODUCTION The computation of log determinants of symmetric positive definite matrices is a fundamental problem in high-dimensional inference. It has widespread applications in fields such as Gaussian process kernel learning <cit.>, linear mixed models <cit.>, Markov random fields <cit.>, Bayesian inference <cit.>, information geometry <cit.>, and others. One straightforward approach to computing the logarithm of determinants (or log determinants) is through factorization. A sophisticated multi-frontal Cholesky decomposition approach can be used to calculate the log determinant and its derivatives for very large-scale sparse matrices <cit.>. Such a method, together with other matrix analyses, works efficiently for linear mixed models and has been well implemented in stable software. However, when the matrix is dense, the Cholesky decomposition requires a time complexity of O(n^3) and storage requirements of O(n^2), making it computationally prohibitive for large-scale problems <cit.>. Alternatively, iterative methods can be designed according to the well-known identity <cit.>, log(A)=∑_i=1^nlog(λ_i)=tr(log(A)), for symmetric positive definite matrices. After this transformation, the calculation of the log determinants can be reformulated as the problem of estimating the trace of matrix logarithm function log(A). 
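Before turning to existing estimators, the identity log det(A) = ∑_i log(λ_i) = tr(log(A)) is easy to check numerically on a small example. The sketch below (an illustration only, not part of the algorithm developed in this paper) forms log(A) explicitly via dense factorizations, which is precisely the O(n^3) computation that the stochastic estimators discussed next are designed to avoid.

import numpy as np
from scipy.linalg import logm, cholesky

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)                     # a small SPD test matrix

logdet_chol = 2 * np.sum(np.log(np.diag(cholesky(A))))   # via Cholesky factorization
logdet_eig  = np.sum(np.log(np.linalg.eigvalsh(A)))      # sum of log eigenvalues
logdet_tr   = np.trace(logm(A)).real                     # tr(log(A))

print(logdet_chol, logdet_eig, logdet_tr)         # the three values agree to machine precision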
Several approaches have been proposed for estimating the trace of matrix logarithms. These approaches can be broadly categorized based on the techniques employed, including Monte Carlo-based methods, polynomial approximation-based methods, subspace iteration-based methods, and methods that utilize Gaussian quadrature and Lanczos iteration. First, the matrix trace estimation method based on Monte Carlo can be traced back to Girard’s literature <cit.>, in which he proposed a fast Monte Carlo algorithm for approximating the calculation of matrix traces in minimizing Generalized Cross-Validation (GCV) problems. It can be simply described as 1/ntr(A)=𝔼(z^TAz/z^Tz)≈1/m∑_i=1^m(z^(i))^TAz^(i)/(z^(i))^Tz^(i), where A∈ℝ^n× n, z and z^(i) are stochastic vectors (also referred to as query vectors) with entries independently sampled from the standard normal distribution, m is the number of query vectors. Hutchinson trace estimator <cit.> simplified the above estimation implementation and satisfies a minimum variance criterion, which can be expressed as tr (A) = 𝔼(z^T A z) ≈1/m∑_i=1^m (z^(i))^T A z^(i), where z and z^(i) are stochastic query vectors with entries independently sampled from the Rademacher distribution. Subsequently, other estimators have been proposed for estimating the trace using the same form, but with different distributions for random vectors. These include random Gaussian vectors <cit.>, random phase vectors <cit.>, and columns derived from a Hadamard matrix <cit.>, among others. In contrast to previous research that focused solely on analyzing the variance of estimators, the seminal work of <cit.> provides the first comprehensive analysis of the bound of query complexity for estimators, i.e., the minimum number of matrix-vector multiplications (MVM) required to achieve a desired accuracy and success rate. This bound was further improved and extended by Roosta-Khorasani and Ascher <cit.>. Recently, multi-level Monte Carlo methods <cit.> have been introduced in <cit.> to accelerate the convergence rate of the stochastic trace estimator. Second, some papers study the trace of matrix logarithms using a polynomial approximation of the logarithmic function. Boutsidis et al. applied Taylor series expansions to the logarithmic function and used the Monte Carlo method to approximate the traces of a small number of matrix powers <cit.>. However, the Taylor series is hard to be optimal from the viewpoint of polynomial approximation. Therefore, Han et al. <cit.> proposed a stochastic Chebyshev expansion method to accelerate the Taylor approximation. Based on the eigenvalue distribution prior, Wang and Peng designed a weighted orthogonal polynomial approximation algorithm for log-determinant computation. The three-term-recursion relation of orthogonal polynomials makes the algorithm computationally efficient <cit.>. Third, some subspace iteration methods are also used to estimate the trace and log determinants. The essence of such an approach is to retain the larger eigenvalues while dropping the smaller ones. To calculate log((A)) for A∈ℝ^n× n, Saibaba et al. <cit.> obtained a surrogate matrix T∈ℝ^l× l (l≪ n) of matrix A using power iterations and QR factorization. Compared to Monte Carlo-based trace estimators, this estimator can be computationally efficient when A is a low-rank matrix. Similarly, Li et al. <cit.> obtained the surrogate matrix T using the block Krylov subspace method. 
Theoretical analysis and numerical experiments show that they achieve better error bounds than those in <cit.>. Fourth, Bai and Golub <cit.> presented deterministic upper and lower bounds for tr(A^-1) and log(A) applying Gaussian quadrature and related theory. They also proposed <cit.> probabilistic upper and lower bounds for large sparse matrices using integral approximation and Hoeffding’s inequality, the estimation process features the Hutchinson trace estimator and the Lanczos method to construct a Gaussian quadrature, which is currently known as the stochastic Lanczos quadrature (SLQ). In <cit.>, Ubaru et al. named the SLQ method for estimating tr(f(A)) and provides the rigorous theoretical analysis that is missing in <cit.>. <cit.> further analyzes SLQ in the context of asymmetric quadrature nodes. In the following text, we will discuss the relevant details of the SLQ method in-depth and in relation to our problems. Recently, Meyer et.al <cit.> proposed a stochastic trace estimator named Hutch++, which combined the Hutchinson trace estimator with subspace projection. This new estimator can be considered as a variance reduction version (also used e.g. in <cit.>) of the Hutchinson trace estimator <cit.>, it reduced the query complexity from O(1/ϵ^2) (as in <cit.>) to O(1/ϵ). Later on, <cit.> developed an adaptive version of Hutch++, which near-optimally divides the MVM between the two phases of stochastic trace estimation and subspace projection. Similarly, <cit.> proposed a variance reduction trace estimation scheme that adaptively allocates the numbers of Block Krylov subspace iterations and query vectors. Different from Hutch++, <cit.> designed an exchangeable trace estimator named XTrace, which exploited both variance reduction and the exchangeability principle, in short, the test vectors for low-rank approximation and for estimating the trace of the residual are the same. Unlike <cit.>, we combine the variance reduction technique with the SLQ method to estimate the trace of matrix logarithm. Our method contains four key ingredients. The first is the Hutchinson trace estimator, used to estimate the trace. The second is the stochastic Lanczos quadrature method, used to efficiently approximate quadratic forms. The third are concentration inequalities, including Markov and Hanson-Wright inequalities. The fourth is the projection-cost-preserving sketch, used to bound the size of the Gaussian random matrix and the number of Hutchinson query vectors. This is one of the significant differences from <cit.>. Furthermore, we explicitly present the bounds of all design parameters involved in our method, such as the dimension of the projection subspace, the number of Hutchinson query vectors, and the iterations of the Lanczos procedure. This provides practitioners with more guidance than asymptotic bounds in the form of O(·) or Ω(·). The paper is organized as follows. Section <ref> presents the preliminaries. In Section <ref>, we provide the main idea for approximating tr(log(A)) and state the main theorem. Section <ref> presents an error analysis of the relative probability error bound and explicit bounds for the relevant parameters. The performance evaluation of our method is given in Section <ref>. Finally, we present our concluding remarks. § PRELIMINARIES The main result of this paper is given by Theorem <ref> and the following figure shows the roadmap for proving Theorem <ref>. Let ·_2 denote the Euclidean norm for a vector and the spectral norm for a matrix. 
Let ·_F denote the Frobenius norm (F-norm) for a matrix. Let A∈ℝ^n× n be a symmetric positive definite (SPD) matrix with eigendecomposition A=UΛ U^T, where U∈ℝ^n× n is orthogonal and Λ = diag(λ_1,…, λ_n)∈ℝ^n× n is a diagonal matrix of eigenvalues with non-increasing ordering. Let A_k=min _rank(B)= kA-B_F be the best rank-k approximation to A. Matrix function f(A)≜log(A), and its best rank-k approximation is denoted by A_k(f)=min_rank(B)=kf(A)-B_F. By the Eckart-Young theorem <cit.>, A_k(f)=U_kU_k^Tf(A) where U_k contains the first k columns of U. To prepare for the subsequent analysis, we will introduce some related definitions and lemmas. If matrix S can project a high-dimensional point cloud E onto a lower-dimensional space while approximately preserving vector norms, then S is called a Johnson-Lindenstrauss Embedding (JLE). A more detailed description is as follows. (JLE <cit.>) Let S∈ℝ^k× n be a random matrix where k<n, p∈ℕ and ϵ, δ∈ (0,1). We say that S is a (p,ϵ, δ)-JLE if for any subset E⊆ℝ^n with |E|=p and (1-ϵ)x_2^2≤Sx_2^2≤ (1+ϵ)x_2^2, holds simulatneously for all x∈ E with a probability of at least 1-δ. Furthermore, if S is a JLE of the range of a matrix M, then S is called the subspace embedding of M, the specific definition is as follows. (Subspace embedding <cit.>) S∈ℝ^k× m is an ϵ-subspace embedding for M∈ℝ^m× n, if ∀x∈ℝ^n, (1-ϵ)Mx_2^2≤SMx_2^2≤ (1+ϵ)Mx_2^2. Similar to JLE, Kane and Nelson <cit.> defined a high-order moment form of norm preservation for a projected vector, known as JL moments. (JL moments <cit.>) Matrix S∈ℝ^n× q satisfies the (ϵ,δ,ℓ)-JL moments if for any x∈ℝ^n with x_2=1, 𝔼_S|x^TS_2^2-1|^ℓ≤ϵ^ℓ·δ. For any matrix A∈ℝ^n× d, if there is a matrix Ã∈ℝ^n× d' (d'≪ d) that preserves the distance between A's columns and any k-dimensional subspace, then à can be used as a surrogate for A to solve certain low-rank optimization problems. à is called a projection-cost-preserving sketch (PCPS) of A. A more detailed description is as follows. (PCPS <cit.>) Ã∈ℝ^n× d' is a rank-k PCPS of A∈ℝ^n× d (d≫ d') with error 0≤ϵ < 1, if for all rank-k orthogonal projection matrices P∈ℝ^n× n, (1-ϵ)A-PA_F^2≤Ã-PÃ_F^2+c≤ (1+ϵ)A-PA_F^2, for some fixed non-negative constant c that may depend on A and à but independent of P. à is also called an (ϵ, c, k)-PCPS of A. The constant c in Definition <ref> is used to control the tightness of the bound or the similarity between A and Ã. In this paper, we set c to 0 to get a tight bound. Next, we present two important inequalities that will be used later in this paper, along with some remarks. (<cit.>) Let A be a symmetric positive semidefinite matrix and A_k be the best rank-k approximation of A. Then, A-A_k_F≤tr(A)/√(k). (Hanson-Wright inequality <cit.>) Let z=(z_1,…,z_n)^T represent a vector with entries that are independently sampled from a Rademacher distribution and A∈ℝ^n× n be a real and symmetric matrix, for all p≥ 1 z^TAz-𝔼z^TAz_p≲√(p)·A_F+p·A_2, where ·_p denotes (𝔼|·|^p)^1/p. In this paper, we suppose matrix A∈ℝ^n× n is an SPD matrix with λ_min(A)=λ_n≥ 1. If λ_min(A) < 1, one may let Â≜ A/λ_min(A), whose minimum eigenvalue is greater than or equal to 1, indicating that log(A)=log(λ_min(A)Â)=nlog(λ_min(A))+tr(log(Â)), then tr(log(Â)) is the crucial problems as we considered. 
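As a quick numerical sanity check of this rescaling step (illustration only, not part of the paper's algorithm), the sketch below verifies that log det(A) = n log(λ_min(A)) + tr(log(A/λ_min(A))) on a small SPD matrix whose smallest eigenvalue is below 1.

import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(1)
B = rng.standard_normal((40, 40))
A = B @ B.T + 0.05 * np.eye(40)          # SPD, but its smallest eigenvalue is typically < 1

n = A.shape[0]
lam_min = np.linalg.eigvalsh(A)[0]       # eigvalsh returns eigenvalues in ascending order
A_hat = A / lam_min                      # rescaled so that lambda_min(A_hat) = 1

lhs = np.sum(np.log(np.linalg.eigvalsh(A)))              # log det(A)
rhs = n * np.log(lam_min) + np.trace(logm(A_hat)).real   # n log(lambda_min) + tr(log(A_hat))
print(lhs, rhs)                                          # agree up to floating-point error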
Moreover, if A is asymmetric and non-singular, then its logarithmic determinant (absolute value) can be obtained through the log-determinant problem of SPD matrix A^TA since log (|(A)|) = 1/2log((A)^2) = 1/2log ((A^TA)), which is transformed into the problem described in (<ref>). § APPROXIMATION OF TR(F(A)) Let S∈ℝ^n× q be a random matrix whose entries are independent normal random variables, and Q∈ℝ^n× k consist of k principal orthonormal bases of the column space spanned by f(A)S. Enlighted by the variance reduction technique introduced in <cit.>, we separate f(A) into its projection onto the subspace spanned by Q and its orthogonal complement, that is, [ tr(f(A)) =tr(QQ^Tf(A))+tr((I-QQ^T)f(A)); =tr(Q^Tf(A)Q)+tr((I-QQ^T)f(A)(I-QQ^T)); =tr(Q^Tf(A)Q)_first part+tr(Δ)_second part.; ] Let Δ≜ (I-QQ^T)f(A)(I-QQ^T). The second equality holds due to the cyclic property of the trace and the idempotency of I-QQ^T, that is, I-QQ^T=(I-QQ^T)^2. These two parts in (<ref>) will be estimated separately. For the first part, we will use the (m+1)-step Lanczos quadrature approximation method to estimate the trace. For the second part, we will combine the (m+1)-step Lanczos quadrature approximation method with the N-query Hutchinson trace estimator to estimate the trace of Δ. The final sum of these two estimated results will be represented by the symbol Γ. The following theorem is the main result of this paper, which is similar in form to Theorem 4.1 in <cit.>. As our trace estimation method adopts the variance reduction technique, new design parameters k and q are included, and the explicit lower bounds for all these parameters are presented in the theorem. Given ϵ,δ∈ (0,1), an SPD matrix A∈ℝ^n× n with its minimum eigenvalue λ_min≥ 1. Let S∈ R^n× q be a random matrix and Q∈ℝ^n× k be composed of the k principal orthonormal bases of the column space spanned by f(A)S. Thus, if the following inequality is satisfied * the column number of Q satisfies k≥ 16(1+ϵ)/(1-ϵ), * the column number of S satisfies q≥ 288k/(ϵ^2δ), * the number of query vectors N ≥(1+√(1+4ϵ√(δ)))^2/(2ϵ^2δ), * the Lanczos iteration parameter m'≥1/2log(2kC_ρ/ε)/log(ρ) for the first part and m≥1/2log(4nC_ρ/ε)/log(ρ) for the second part, where C_ρ=4M_ρ/(C(ρ^2-ρ)) and C=(n-1)f(λ_min)+f(λ_max), M_ρ=|log(λ_min/2)|+π and ρ=(λ_max+√(2λ_maxλ_min-λ_min^2))/(λ_maxλ_min), then ℙ{|tr(f(A))-Γ|≤ϵ |tr(f(A))|}≥ 1-δ, where Γ is the final estimation of tr(f(A)). The proof of this theorem is deferred to Section <ref>. To prove the theorem, we first derive the approximation error bound of the second part in (<ref>), followed by the error bound of the first part. § THE APPROXIMATE ERROR ANALYSIS To approximate tr(Δ) in (<ref>), we employ the SLQ method proposed by <cit.>. The approximation process is divided into two stages. In the first stage, tr(Δ) is approximated by the Hutchinson trace estimator with N-query random Rademacher vectors Z=[z^(1),…,z^(N)]. We use the notation H^N(Δ) to represent this approximation result, H^N(Δ) ≜1/N∑_i=1^N(z^(i))^TΔz^(i)≈tr(Δ). Let v^(i)=(I-QQ^T)z^(i)/(I-QQ^T)z^(i)_2 and μ^(i)=U^Tv^(i)=[μ_1^(i),…,μ_n^(i)]^T. Recall that A=UΛ U^T, then H^N(Δ) =1/N∑_i=1^N(I-QQ^T)z^(i)_2^2(v^(i))^Tf(A)v^(i) =1/N∑_i=1^N(I-QQ^T)z^(i)_2^2(v^(i))^T U f(Λ) U^T v^(i) =1/N∑_i=1^N(I-QQ^T)z^(i)_2^2∑_j=1^nf(λ_j)(μ_j^(i))^2 = 1/N∑_i=1^N(I-QQ^T)z^(i)_2^2∫_λ_n^λ_1f(t)dμ^(i)(t). 
In the last equality, we view ∑_j=1^nf(λ_j)(μ_j^(i))^2 as a Riemann-Stieltjes integral with the form of ∫_λ_n^λ_1f(t)dμ^(i)(t), where the measure μ^(i)(t) is a piecewise constant function defined as <cit.> μ^(i)(t)={[ 0, t<λ_n,; ∑_j=i^n (μ_j^(i))^2, λ_i≤ t<λ_i-1, i=2, …, n,; ∑_j=1^n (μ_j^(i))^2, λ_1≤ t. ]. As a Riemann-Stieltjes integral ∫_λ_n^λ_1f(t)dμ(t) can be estimated by applying the Gaussian quadrature rule, that is v^Tf(A)v=∫_λ_n^λ_1f(t)dμ(t)≈∑_k=0^mτ_kf(θ_k), where {τ_k} are the weights and {θ_k} are the nodes of the (m+1)-point Gaussian quadrature <cit.>. Moreover, let T_m+1 be the Jacobi matrices derived by (m+1)-steps Lanczos algorithm <cit.> for matrix A and any given vector v, the quadrature weights τ_k=[e_1^Ty_k]^2 and nodes θ_k are extracted from the eigenpairs {(θ_k,y_k),k=0,1,…,m} of T_m+1. The approximate method described by the formula (<ref>) is called (m+1)-steps Lanczos quadrature method. Thus, in the second stage, we approximate H^N(Δ) with (m+1)-steps Lanczos quadrature method, and use the notation L_m+1^N(Δ) to represent this approximation result, L_m+1^N(Δ) ≜1N∑_i=1^N(I-QQ^T)z^(i)_2^2∑_k=0^mτ_k^(i)f(θ_k^(i)) ≈ H^N(Δ). There are two approximation errors associated with the calculation of tr(Δ), corresponding to the two stages of the approximation process. The first stage error, known as the Lanczos quadrature approximation error, arises from the difference between H^N(Δ) and L^N_m+1(Δ) and is given by |H^N(Δ)-L_m+1^N(Δ)|. The second stage error, known as the Hutchinson estimation error, arises from the difference between tr(Δ) and H^N(Δ) and is given by |H^N(Δ)-tr(Δ)|. We will analyze each of these errors in the following subsections. §.§ Lanczos quadrature approximation error As (I-QQ^T)z^(i)_2^2≤I-QQ^T_2^2z^(i)_2^2=n, the Lanczos quadrature approximation error is bounded by |H^N(Δ)-L_m+1^N(Δ)| =|1N∑_i=1^N(I-QQ^T)z^(i)_2^2((v^(i))^Tf(A)v^(i)-∑_k=0^mτ_k^(i)f(θ_k^(i)))| ≤nN∑_i=1^N|(v^(i))^Tf(A)v^(i)-∑_k=0^mτ_k^(i)f(θ_k^(i))|. Before delving into the analysis of the above error bound, we first consider a standard case where function g is defined in [-1,1], then generalize the result to the function f defined in [λ_n,λ_1] with an affine linear transform. <cit.> and <cit.> Let function g be analytic in [-1,1] and analytically continuable in the open Bernstein ellipse E_ρ with foci ± 1 and the sum of major and minor axis equals ρ>1, where it satisfies |g(z)|≤ M_ρ. Then the (m+1)-step Lanczos quadrature approximation error E_m+1(g) satisfies E_m+1(g)≜|∫_-1^1g(t)dμ(t)-∑_k=0^mτ_kg(θ_k)|≤4M_ρ/(ρ-1)ρ^2m+1, where the μ(t) is the measure of integration, {τ_k} and {θ_k} are the weights and nodes of Gaussian quadrature rule and derived by (m+1)-steps Lanczos algorithm. For the proof of this lemma, readers are referred to <cit.>, and for a minor correction to the result, readers may consult <cit.>. Let A∈ℝ^n× n be an SPD matrix with minimum eigenvalue λ_min≥ 1, f(t) is analytic on [λ_min,λ_max]. If the Lanczos iteration parameter m satisfies m≥1/2log16nM_ρ/Cε(ρ^2-ρ) /log(ρ), where C=(n-1)f(λ_min)+f(λ_max), M_ρ=|log(λ_min/2)|+π and ρ=λ_max+√(2λ_maxλ_min-λ_min^2)/λ_maxλ_min, then |H^N(Δ)-L_m+1^N(Δ)|≤ε/4|tr(f(A))|. Define a new function g(t)≜ f[(λ_max-λ_min/2)t+(λ_max+λ_min/2)], which is analytic on [-1,1] and has singularity at t=-(λ_max+λ_min)/(λ_max-λ_min). 
To ensure that g(t) can be analytically continued to an appropriate Bernstein ellipses E_ρ with foci ± 1, one can choose the length of semi-major axis as α=λ_max/(λ_max-λ_min), then the length of semi-minor axis is β=√(α^2-1), so the elliptical radius ρ can be derived by ρ = α+β = λ_max+√(2λ_maxλ_min-λ_min^2)/λ_max-λ_min>1. As g(t) is analytic on the open ellipse E_ρ defined above, max_t∈ E_ρ |g(t)| =max_t∈ E_ρ|log(λ_max-λ_min/2t+λ_max+λ_min/2)| ≤max_t∈ E_ρ√((log|λ_max-λ_min/2t+λ_max+λ_min/2|)^2+π^2) =√((log|λ_max-λ_min/2(-λ_max/λ_max-λ_min)+λ_max+λ_min/2|)^2+π^2) ≤|log(λ_min/2)|+π ≜ M_ρ, where the first inequality results from |log(z)|=|log |z|+i(z)|≤√((log|z|)^2+π^2), and the second equality holds since the maximum absolute value of the logarithm on the ellipse is attained at the endpoint t=-λ_max/(λ_max-λ_min) on the real axis <cit.> and <cit.>. Next, based on Lemma <ref> and Corollary 3 in <cit.>, we obtain |H^N(Δ)-L_m+1^N(Δ)| ≤nN∑_i=1^N|(v^(i))^Tf(A)v^(i)-∑_k=0^mτ_k^(i)f(θ_k^(i))| = nN∑_i=1^N|∫_λ_min^λ_maxf(t)dμ^(i)(t)-∑_k=0^mτ_k^(i)f(θ_k^(i))| ≤4nM_ρ/(ρ -1)ρ^2m+1, where the last inequality utilizes E_m+1(f)=E_m+1(g) <cit.>. As f(t) increases monotonically on the interval [λ_min,λ_max], one may obtain C≜ [(n-1)f(λ_min)+f(λ_max)]≤∑_i=1^nf(λ_i)=tr(f(A)). Let C_ρ≜4M_ρ/C(ρ^2-ρ), when the Lanczos iteration parameter m satisfies m≥1/2log(4nC_ρ/ε)/log(ρ), the Lanczos quadrature approximation error has the following upper bound |H^N(Δ)-L_m+1 ^N(Δ)|≤ε/4C ≤ε/4tr(f(A)). §.§ Hutchinson estimation error with bound of F-norm In this subsection, we analyze the error bound of the Hutchinson trace estimator for any symmetric matrix A. <cit.> showed that when the number of query vectors for the Hutchinson trace estimator is N=O(log(1/δ)/ϵ^2), then |H^N(A)-tr(A)|≤ϵA_F with probability at least 1-δ. However, this bound for N lacks explicit constants, providing little guidance for practitioners. To address this issue, we derive an explicit bound for the number of query vectors in the following discussion. Let A∈ℝ^n× n be a symmetric matrix, z=(z_1,…,z_n)^T be a Rademacher random vector. Then, for all ϵ >0 ℙ{ |z^TAz-tr(A)|≥ϵA_F}≤(√(2)+2sr(A)^-1/2)^2/ϵ^2, where sr(A)≜A_F^2/A_2^2 denotes the stable rank of A. From lemma <ref>, for all p≥ 1, z^T A z-𝔼z^T A z_p≤√(p)A_F+pA_2, where ·_p denotes (𝔼|·|^p)^1/p. In particular, for p=2, we have 𝔼|z^T A z-𝔼z^T A z|^2≤ (√(2)A_F+2A_2)^2, then based on the Chebyshev inequality <cit.>, ℙ{ |z^TAz-tr(A)|≥ϵA_F}≤(√(2)A_F+2A_2)^2/ϵ^2A_F^2. The proof ends by replacing A_F^2/A_2^2 with sr(A). Let A∈ℝ^n× n be a symmetric matrix. Let H^N(A) denote an N-query Hutchinson trace estimator implemented with Rademacher random vectors. For any given ϵ and δ, if query number N satisfies N≥(1+√(1+4ϵ√(δ)))^22ϵ^2δ, we have ℙ{|H^N(A)-tr(A)|≤ϵA_F}≥ 1-δ. Define the following block diagonal matrix 𝒜=diag(N^-1A,…,N^-1A)∈ℝ^N n× N n, that is, matrix 𝒜 consists of N diagonal blocks containing rescaled copies of A. The N-query Hutchinson trace estimator of A equals z̃^T𝒜z̃ for a Rademacher vector z̃ of length N× n, that is z̃^T𝒜z̃=H^N(A). Note that tr(𝒜)=tr(A), 𝒜_F=N^-1/2A_F and 𝒜_2=N^-1A_2. From lemma <ref>, we have ℙ{|H^N(A)-tr(A)|≥ϵ N^-1/2A_F} =ℙ{|z̃^T𝒜z̃-𝔼z̃^T𝒜z̃|≥ϵ𝒜_F} ≤(√(2)𝒜_F+2𝒜_2)^2ϵ^2𝒜_F^2 = (√(2)+2(sr(A) N)^-1/2)^2ϵ^2, that is ℙ{|H^N(A)-tr(A)|≥ϵA_F}≤(√(2)N^1/2+2sr(A)^-1/2/ϵ N)^2. Let the right-hand side of the above inequality be less than δ, and as sr(A)>1, for a given tolerance factor ϵ>0, if the query number N satisfies N ≥(1+√(1+4ϵ√(δ)))^2/2ϵ^2δ, we have ℙ{|H^N(A)-tr(A)|≤ϵA_F}≥ 1-δ. 
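To connect the query bound above with practice, the sketch below implements the N-query Hutchinson estimator with Rademacher vectors for a generic symmetric matrix. It is a minimal illustration with our own names: it accepts a matrix-vector product routine, since for large n one would never form A explicitly, and here we simply pass a small dense test matrix.

import numpy as np

def hutchinson_trace(matvec, n, num_queries, rng):
    # N-query Hutchinson estimator: average of z^T A z over Rademacher query vectors z.
    estimates = []
    for _ in range(num_queries):
        z = rng.choice([-1.0, 1.0], size=n)       # Rademacher query vector
        estimates.append(z @ matvec(z))
    return np.mean(estimates)

rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = B @ B.T / 200 + np.eye(200)                   # a symmetric test matrix

est = hutchinson_trace(lambda v: A @ v, n=200, num_queries=50, rng=rng)
print(est, np.trace(A))                           # the estimate concentrates around tr(A)

In the method analyzed in this paper, the quadratic forms z^T f(A) z are not computed exactly but are themselves approximated by the Lanczos quadrature rule of the previous subsection.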
Recall that Δ is an SPD matrix defined in (<ref>). Based on Theorem <ref>, if N satisfies the condition formulated in (<ref>), we have ℙ{|H^N(Δ)-tr(Δ)|≤ϵΔ_F}≥ 1-δ. §.§ Hutchinson estimation error with bound of tr(f(A)) In this subsection, we will transform the error bound ϵΔ_F that appears in (<ref>) into ϵ·tr(f(A))/2 for the purpose of total error analysis. Recall that H^N(Δ) is the Hutchinson trace estimator, as defined in (<ref>). If the number of query vectors N satisfies the bound given in (<ref>), then the following theorem holds. Given ϵ,δ∈ (0,1), A∈ℝ^n× n is an SPD matrix with minimum eigenvalue λ_min≥ 1. Let S∈ℝ^n× q be a random matrix with independent normal random variables entries S_ij∼𝒩(0,1) /√(q), and Q∈ℝ^n× k consists of k-principal orthonormal bases of the column space spanned by f(A)S. If k and q satisfy k≥16(1+ϵ)1-ϵ, q≥288kϵ^2δ, then ℙ{|H^N(Δ)-tr(Δ)|≤ϵ/4tr(f(A))}≥ 1-δ. To prove this theorem, we first establish some properties of JL moments and PCPS, which were defined in Section 2. If matrix S∈ℝ^n× q satisfies the (ϵ/6√(k),δ,ℓ)-JL moment property for any ℓ≥2, then with probability ≥ 1-4δ, Ã=AS is an (ϵ,0,k)-PCPS of A. The proof of this lemma closely follows <cit.>. The main difference is that the condition on S has been relaxed, resulting in a refined lower bound for the number of columns of the random matrix in <cit.>. In this context, we mainly focus on the proof that if S satisfies the (ϵ/6√(k),δ, ℓ)-JL moment property for any ℓ≥ 2, then for any x∈ℝ^n, ℙ{|x^TA_k_2^2-x^TA_kS_2^2|>ϵ/3x^TA_k_2^2}<δ, where A_k is the optimal rank-k approximation of A. That is, S is the ϵ/3-subspace embedding for A_k with probability ≥ 1-δ. Let y^T=x^TA_k∈ℝ^n and substitute it into of (<ref>), ℙ{|y^T_2^2-y^TS_2^2|>ϵ/3y^T_2^2}< δ, ∀y∈range(A_k^T), If y=0, the probability inequality (<ref>) is obvious. If y≠0, we can normalize y by dividing its norm, that is, ℙ{|y^TS_2^2-1|>ϵ/3}< δ, ∀y∈range(A_k^T) and y_2=1. Since S satisfies the (ϵ/6√(k),δ,ℓ)-JL moment property, we have 𝔼_S|y^TS_2^2-1|^ℓ≤(ϵ/6√(k))^ℓδ=(ϵ/3)^ℓ(1/2√(k))^ℓδ< (ϵ/3)^ℓδ. The second inequality comes from (1/2√(k))^ℓ≤ 1 for all k≥ 1, ℓ≥ 2. Based on the Markov inequality, we have ℙ{|y^TS_2^2-1|>ϵ/3}≤(ϵ/3)^-ℓ𝔼_S|y^TS_2^2-1|^ℓ. Thus, the goal result in (<ref>) can be derived by substituting (<ref>) into (<ref>). For the remainder of the proof of this lemma, please refer to <cit.>. For any matrix A∈ℝ^n× n, let S∈ℝ^n× q be a random matrix with independent normal random variables entries S_ij∼𝒩(0,1)/√(q). If q≥ 288k/(ϵ^2δ), then Ã=AS is an (ϵ,0,k)-PCPS of A with probability ≥ 1-δ. For any given x∈ℝ^n and x_2=1, let S_*j denote the j-th column of matrix S, then qx^TS_2^2=q∑_j=1^q(x^TS_*j)^2=∑_j=1^q(∑_i=1^nx_i√(q)S_ij)^2. Let y_j≜∑_i=1^nx_i√(q)S_ij,j=1,2,…,q, as √(q)S_ij∼ N(0,1) we have 𝔼y_j=0, 𝔻y_j=1, that is, y_j∼ N(0,1). As qx^TS_2^2=∑_j=1^qy_j^2, then random variable Y≜ qx^TS_2^2∼𝒳^2(q). As 𝔼Y = q, 𝔻Y=2q, thus 𝔼|x^TS_2^2-1|^2=2/q. When q≥288kϵ^2δ, 𝔼|x^TS_2^2-1|^2≤(ϵ/6√(k))^2δ/4, that is, S satisfies (ϵ/6√(k),δ/4,2)-JL moment property. From Lemma <ref>, matrix AS is an (ϵ,0,k)-PCPS of A with probability at least 1-δ. §.§ Proof of Theorem <ref> Based on Lemma <ref>, if q≥ 288k/(ϵ^2δ), then with probability not less than 1-δ, f(A)S is an (ϵ,0,k)-PCPS of A. Let 𝒫 be the set of rank-k orthogonal projections. Let P̃^*≜min_P∈𝒫f(A)S-Pf(A)S_F= QQ^T, and P^*≜min_P∈𝒫f(A)-Pf(A)_F=U_kU_k^T. Based on the Definition <ref>, the following two inequalities hold: (1-ϵ)f(A)-P̃^*f(A)_F^2≤f(A)S-P̃^*f(A)S_F^2, f(A)S-P^*f(A)S_F^2≤ (1+ϵ)f(A)-P^*f(A)_F^2. 
As f(A)S-P̃^*f(A)S_F^2≤f(A)S-P^*f(A)S_F^2, combine (<ref>) and (<ref>) f(A)-QQ^Tf(A)_F^2≤1+ϵ/1-ϵf(A)-P^*f(A)_F^2. Furthermore, we have Δ_F =(I-QQ^T)f(A)(I-QQ^T)_F ≤I-QQ^T_2f(A)-QQ^Tf(A)_F =f(A)-QQ^Tf(A)_F ≤√(1+ϵ)/√(k(1-ϵ))tr(f(A)), the first inequality is based on the sub-multiplicativity property of Frobenius norm, the second inequality results from Lemma <ref>. Combined the error bound formulated in (<ref>), we have ℙ{|H^N(Δ)-tr(Δ)|≤ϵ√(1+ϵ)/√(k(1-ϵ))tr(f(A))}≥ 1-δ. Thus, when k≥ 16(1+ϵ)/(1-ϵ), ℙ{|H^N(Δ)-tr(Δ)|≤ϵ/4tr(f(A))}≥ 1-δ. The result in <cit.> is of significant theoretical importance. However, the bound given (q≥ c· (k+log(1/δ))/ϵ^2, where c is a sufficiently large universal constant) lacks explicit constants, providing little guidance for practitioners. In the Appendix, Lemma <ref> complements the proof of <cit.> and provides an explicit bound. However, this bound is much looser than the one given in Theorem <ref>. One reason is that the conditions on S specified in <cit.> are more stringent than those stated in Lemma <ref>. In this subsection, we use PCPS tricks to analyze the error bound. An analysis without using PCPS tricks is also presented in the Appendix. A comparison of the performance of these two methods will be conducted in Section <ref>. §.§ The error bound of the first part Computing f(A) explicitly requires a full eigendecomposition with Ω(n^3) time complexity, which can be expensive for large values of n. As a result, directly computing tr(Q^Tf(A)Q), the first part of the equation (<ref>), is usually not feasible. Instead, we can use the (m+1)-step Lanczos quadrature approximation method to approximate the computation of tr(Q^Tf(A)Q). tr(Q^Tf(A)Q) = ∑_i=1^k(Q_*i)^Tf(A)Q_*i= ∑_i=1^k(U^TQ_*i)^Tf(Λ)U^TQ_*i, where Q_*i denotes the i-th column of Q, U^TQ_*i_2=1. Given ϵ∈ (0,1), A∈ℝ^n× n is an SPD matrix with minimum eigenvalue λ_min≥ 1. Let S∈ℝ^n× q be a random matrix with independent normal random variables entries S_ij∼𝒩(0,1)/√(q), and Q∈ℝ^n× k consists of k-principal orthonormal bases of the column space spanned by f(A)S. Let L_m'+1(Q^Tf(A)Q) denote the (m'+1)-step Lanczos quadrature approximation of tr(Q^Tf(A)Q). If m'≥1/2log(2kC_ρ/ε)/log(ρ), where M_ρ=|log(λ_min/2)|+π, ρ=(λ_max+√(2λ_maxλ_min-λ_min^2))/(λ_maxλ_min) and C=(n-1)f(λ_min)+f(λ_max), C_ρ=4M_ρ/(C(ρ^2-ρ)), then |tr(Q^Tf(A)Q)-L_m'+1(Q^Tf(A)Q)| ≤ϵ/2tr(f(A)). Let μ̃^(i)=U^TQ_*i=[μ̃_1^(i),…,μ̃_n^(i)]^T and substitute into (<ref>), we have tr(Q^Tf(A)Q) =∑_i=1^k∑_j=1^nf(λ_j)(μ̃_j^(i))^2 =∑_i=1^k∫_λ_n^λ_1f(t)dμ̃^(i)(t) ≈∑_i=1^k∑_l=0^m'τ̃_l^(i)f(θ̃_l^(i))≜ L_m'+1(Q^Tf(A)Q), where {μ̃^(i)(t), i=1,2,…, k} are measures of integration similar to (<ref>), and the (m'+1) pairs of Gaussian nodes and weights are denoted by {(τ̃_l^(i),θ̃_l^(i))}_k=0^m', which is consistent with the description for (<ref>). Then the approximation error can be described in the following form, |tr(Q^Tf(A)Q)-L_m'+1(Q^Tf(A)Q)| = |∑_i=1^k(∫_λ_n^λ_1f(t)dμ̃^(i)(t)- ∑_l=0^m'τ̃_l^(i)f(θ̃_l^(i)))| ≤ ∑_i=1^k|∫_λ_n^λ_1f(t)dμ̃^(i)(t)- ∑_l=0^m'τ̃_l^(i)f(θ̃_l^(i))| ≤ 4kM_ρ/(ρ-1)ρ^2m'+1≤ϵ/2tr(f(A)), where the second inequality results from Lemma <ref> and the last inequality is obtained by referring to the bound of m as stated in (<ref>) and satisfying m'≥1/2log(2kC_ρ/ε)/log(ρ). §.§ Proof of Theorem <ref> Based on Theorem <ref> and Theorem <ref>, we have 1-δ ≤ℙ{|H^N(Δ)-tr(Δ)|≤ε/4tr(f(A))} ≤ℙ{|H^N(Δ)-tr(Δ)|+|H^N(Δ)-L_m+1^N(Δ)|≤ε/2|tr(f(A))|}. By applying the triangle inequality, we have ℙ{|tr(Δ)-L_m+1^N(Δ)|≤ε/2 |tr(f(A))|}≥ 1-δ. 
From equation (<ref>), we can derive that tr(f(A))=tr(Q^Tf(A)Q)+tr(Δ). Let Γ≜ L_m'+1(Q^Tf(A)Q)+L_m+1^N(Δ). By combining equations (<ref>) and (<ref>) and applying the triangle inequality again, we obtain the following equation, ℙ{. |tr(f(A))-Γ| ≤|tr(Q^Tf(A)Q)-L_m'+1(Q^Tf(A)Q)|+|tr(Δ)-L_m+1^N(Δ)| ≤ϵ|tr(f(A))|.}≥ 1-δ. This completes the proof of Theorem 3.1. § NUMERICAL RESULTS In this section, we introduce and evaluate our proposed algorithm for estimating the log-determinant of a positive definite matrix A∈ℝ^n× n. The pseudocode description of the algorithm is as follows, where Algorithm <ref> provides an estimation of tr(Q^Tf(A)Q), while Algorithm <ref> returns the final estimation of log(A). Next, we will evaluate the methods proposed in this text from three perspectives: 1) A performance comparison between the method with PCPS proposed in the main text and the method without PCPS provided in the appendix; 2) A performance comparison with methods proposed in related literature; and 3) A comparison between the experimental and theoretical parameter values when the algorithm reaches a specified probability error bound for a given matrix. In the following numerical experiments, we set δ=0.1 and uniformly sample the value of parameter ϵ in the interval [0.01,0.2]. The symmetric positive definite matrix A used for algorithm evaluation is set in the following form: A=I+∑_j=1^4010/j^2x_jx_j^T+∑_j=41^3001/j^2x_jx_j^T, where x_1,…,x_300∈ℝ^5000 are generated in Matlab using sprandn(5000,1,0.025). This example comes from <cit.> and satisfies the condition that the minimum eigenvalue is not less than 1. In order to facilitate the reproduction of the experimental results in this paper, we set the seed of rng(seed) to 50. §.§ Performance comparison of methods with and without PCPS The bounds of the algorithm-related parameters were provided in the previous text. The derivation and analysis of these bounds were combined with PCPS tricks. As mentioned earlier, another method that does not use PCPS is also provided in the appendix. Table <ref> lists all the designed parameters. Since the use of PCPS tricks does not affect the estimation of the trace in the second part, the relevant parameters (m and N) are consistent in both methods. Therefore, it is sufficient to compare only the relevant parameters of the first part and the number of MVM required for the first part. Figure <ref> (a) shows the relationship between the number of columns in the random matrix S and the relative error tolerance parameter ϵ. In the method without PCPS, the number of columns in the random matrix is only related to probability parameter δ and does not change with the changes in ϵ. Figure <ref> (b) shows that the number of Lanczos iteration steps required by the method with PCPS in the first part is significantly less than that required by the method without PCPS. The number of MVM required by the method with PCPS for the first part is q+km', while for the method without PCPS, the number of MVM is (k+p)(1+m'). And the comparison of the numerical values is described in Figure <ref> (c). §.§ Comparison with other methods For the same matrix A setting, we use the symbol Γ_(N,m) to represent the estimation result of the estimator proposed in <cit.> and <cit.>, with N being the number of query vectors and m being the number of Lanczos steps. 
To satisfy ℙ{|tr(f(A))-Γ_(N,m)|≤ϵ|tr(f(A))|}≥ 1-δ, it is shown in <cit.> that N and m should satisfy N≥24n^2/(ϵtr(f(A)))^2log^2(1+κ)log(2/δ), m≥√(3κ)/4log(5κ nlog(2κ+2)/ϵtr(f(A))√(2κ+1)), where κ denotes the condition number of A. While, <cit.> shows that the probability inequality in (<ref>) holds when N satisfies N≥8/(ϵtr(f(A)))^2(nlog^2κ +2ϵtr(f(A))logκ)log(2/δ), m≥√(κ+1)/4log(4n(√(κ+1)+1)log(2κ)/ϵtr(f(A))). The new bound reduces the number of required query vectors by a factor n compared to that in (<ref>). As shown in Tables <ref> and <ref>, compared to our lower bound presented in equation (<ref>), the other two bounds require more matrix information and depend on the matrix dimension n, which will become very large as n increases. The bound of N presented in <cit.> is a simplified version of the result in <cit.>. We compare both in Figure <ref>. As shown in Figure <ref> (a), the conclusion in <cit.> has the smallest sampling bound, followed by the conclusion in this paper. Since we divide the log-determinant problem into two parts, the Lanczos steps required for both parts are shown in Figure <ref> (b). The comparison of the total MVM required for algorithm implementation is shown in Figure <ref> (c), with <cit.> having the least MVM calculation, followed by this article. However, it should be noted that the result in <cit.> involves calculating the spectral norm and F-norm of matrix (log(A)-D_log(A)), where D_log(A) denotes the diagonal matrix containing the diagonal entries of log(A), which can be difficult to obtain for large-scale matrices. §.§ Theoretical and experimental values In this section, we use an example to verify the theoretical bounds given in the previous text. As shown in Table <ref>, the bound on the number of columns in Q is relatively tight. However, the bound on the number of columns in the random matrix S is not as tight. In our experiments, we observed that q≈ 3k is sufficient for achieving high accuracy. For the matrix presented in equation (<ref>), Figure <ref> shows that the experimental results perform much better than that predicted by our theoretical bounds. The number of Lanczos steps required for the first and second parts are depicted in Figure <ref> (a) and (b) respectively. Figure <ref> (c) shows the average number of MVM required by our algorithm over 100 runs, with the shaded green area representing the 10th to 90th percentiles of these results. § CONCLUSIONS In this paper, we analyze the error for approximating the log-determinant of large-scale positive definite matrices, where subspace projection and the stochastic Lanczos quadrature method are used. We provide a deterministic bound for the trace approximation of the projection part and a probabilistic bound for the remaining part. Unlike most literature that gives asymptotic upper or lower bounds, this paper presents explicit bounds for all of the algorithm-related design parameters. Although the explicit bound for the number of columns in the random matrix S may appear large, it is sufficient to guarantee the (ϵ,δ) probability error bound for any large-scale matrix. Further research will also be conducted to explore the optimal bounds, filling the gap between the theoretical bounds and the actual requirements. Besides, this paper adopts a fixed-ratio error-bound allocation scheme, while we also present an optimized error allocation technique in <cit.> to reduce the overall MVM required by SLQ. Note that this paper focuses on the computation of tr(f(A)), where f(·)=log(·). 
However, the techniques and results presented in this paper have significant theoretical and computational implications for the trace estimation of other matrix functions, which would help solve those corresponding practical applications efficiently. This work was funded by the natural science foundation of China (12271047); Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, BNU-HKBU United International College (2022B1212010006); UIC research grant(R0400001-22; UICR0400008-21; R72021114); Guangdong College Enhancement and Innovation Program(2021ZDZX1046). § APPENDICES §.§ A looser bound of column number q For any matrix A∈ℝ^n× n, let S∈ℝ^n× q be a random matrix with entries S_ij∼𝒩(0,1)/√(q) are independent standard normal random variables. If q ≥ 1152e 9^k/ϵ^2δ, Ã=AS is an (ϵ,0,k)-PCPS of A with probability ≥ 1-δ. For any given x∈ℝ^n and x_2=1, then random variable Y≜ qx^TS_2^2∼𝒳^2(q) (as explained in lemma <ref>). Let Ỹ=Y-q, then 𝔼Ỹ=0 and the moment generating function (MGF) of Ỹ is 𝔼exp(λỸ) = 𝔼exp(λ(Y-q)) =𝔼exp(λ Y)exp(-λ q) =(1-2λ)^-q/2exp(-λ q) ≤exp(4qλ^2), for |λ|≤1/√(5) the third equality is derived by the MGF of 𝒳^2(q). Let K=2√(q), then for all λ if |λ|≤ 1/K,(q≥ 2), 𝔼exp(λỸ)≤exp(K^2λ^2). For all x∈ℝ and r>0, the following inequality is valid <cit.> |x|^r≤ r^r(exp(x)+exp(-x)). Substituting x=Ỹ/K and taking expectation, we have 𝔼| Ỹ/K|^r ≤ r^r(𝔼exp( Ỹ/K)+𝔼exp(-Ỹ/K)) ≤ 2er^r the second inequality holds by sustituting λ=1/K into equation (<ref>). Then 𝔼|x^TS_2^2-1|^r≤ 2e(Kr/q)^r. Specially, take r=2, if q ≥ 1152e 9^k/ϵ^2δ, it is easy verify the following inequality holds. 2e(4/√(q))^2≤ (ϵ/3)^2·δ/4·min{ (1/2√(k))^2,1/9^k} That is, S satisfies both the (ϵ/6√(k),δ/4,2)-JL moment property and the (ϵ/3,δ/4/9^k,2)-JL moment property. From Lemma 6 in <cit.>, with probability at least 1-δ, AS is an (ϵ,0,k)-projection-cost-perserving sketch of A. §.§ Without using PCPS In this section, we will provide a new analytical route for the probabilistic error bounds ℙ{|H^N(Δ)-tr(Δ)|≤ϵ/4tr(f(A))}≥ 1-δ. Slightly different from (<ref>) with the projection matrix Q∈ℝ^n× k, we have the following new decomposition form of tr(f(A)), tr(f(A)) =tr(Q^Tf(A)Q)+tr((I-QQ^T)f(A)(I-QQ^T)) = tr(Q^Tf(A)Q)+tr(Δ̃). Let Δ̃≜ (I-QQ^T)f(A)(I-QQ^T), and Q∈ℝ^n× (k+p) is the k+p orthogonal basis of the span of f(A)S, S∈ℝ^n× (k+p) is an standard Gaussian random matrix. Similar to Theorem <ref>, we have the following lemma. Given ϵ,δ∈ (0,1), an SPD matrix A∈ℝ^n× n with its minimum eigenvalue λ_min≥ 1. Let S∈ℝ^n× (k+p) be a standard Gaussian matrix, and Q∈ℝ^n× (k+p) whose columns form an orthonormal basis of the range of f(A)S. If the target rank k and oversampling parameter p satisfy k≥64/δ^2, p≥ 1+64k/kδ^2-64, then ℙ{|H^N(Δ̃)-tr(Δ̃)|≤ϵ/4tr(f(A))}≥ 1-δ. Based on Theorem 10.5 in <cit.>, we have 𝔼f(A)-QQ^Tf(A)_F≤(1+k/p-1)^1/2f(A)-A_k(f)_F. Using Markov inequality, we have ℙ{f(A)-QQ^Tf(A)_F≤√(k)/4f(A)-A_k(f)_F}≥ 1-(1+k/p-1)^1/2/√(k)/4, where k≥ 16, p≥ (17k-16)/(k-16). From lemma <ref>, we have f(A)-A_k(f)_F≤tr(f(A))/√(k). Then ℙ{f(A)-QQ^Tf(A)_F≤1/4tr(f(A))}≥ 1-(1+k/p-1)^1/2/√(k)/4. And when the oversampling number p satisfies p ≥ 1+64k/kδ^2-64, where k≥ 64/δ^2, we have ℙ{f(A)-QQ^Tf(A)_F≤1/4tr(f(A))}≥ 1-δ/2. As Δ̃_F≤f(A)-QQ^Tf(A)_F. Based on Theorem <ref>, the following formulation is hold ℙ{|H^N(Δ̃)-tr(Δ̃)|≤ϵ/4tr(log(A))}≥ 1-δ. with the number of columns of the Gaussian random matrix S satisfies (k+p) ≥(k^2+k)δ^2-64/kδ^2-64. spmpsci
http://arxiv.org/abs/2307.03208v1
20230706094303
On a formula for all sets of constant width in 3d
[ "Bernd Kawohl", "Guido Sweers" ]
math.MG
[ "math.MG", "52A15" ]
On a formula for all sets of constant width in 3d Bernd Kawohl, Guido Sweers Dept. Mathematik & Informatik, Universität zu Köln, Albertus-Magnus-Platz, 50923 Köln, Germany Orcid-Id: Kawohl 0000-0003-2918-7318; Sweers 0000-0003-0180-5890 In the recent paper On a formula for sets of constant width in 2D, Comm. Pure Appl. Anal. 18 (2019), 2117–2131, we gave a constructive formula for all 2d sets of constant width. Based on this result we derive here a formula for the parametrization of the boundary of bodies of constant width in 3 dimensions, depending on one function defined on 𝕊^2 and a large enough constant. Moreover, we show that all bodies of constant width in 3d have such a parametrization. The last result needs a tool that we describe as shadow domain and that is explained in an appendix. Our formula is more explicit than the result by T. Bayen, T. Lachand-Robert and É. Oudet, Analytic parametrization of three-dimensional bodies of constant width in Arch. Ration. Mech. Anal., 186 (2007), 225–249. AMS Mathematics Subject Classification: 52A15 Keywords: Constant width, convex geometry, 3-dimensional Acknowledgement: The authors thank Prof. Hansjörg Geiges for pointing out reference <cit.> and Ameziane Oumohand M.Sc. for <cit.>. Thanks also go to a referee, since the final version benefitted from the careful and detailed report. It should be mentioned that a first version of the manuscript was essentially completed while the first author participated in the program Geometric Aspects of Nonlinear Partial Differential Equations, which was supported by the Swedish Research Council, at Institut Mittag-Leffler in Djursholm, Sweden during October 2022. § INTRODUCTION For a compact set G⊂ℝ^n one defines its directional width in direction ω∈𝕊^n-1:={ x∈ℝ ^n;| x| =1} by 𝖽_G( ω) =max{⟨ω ,x⟩ ;x∈ G} -min{⟨ω ,x⟩ ;x∈ G} , with ⟨· ,·⟩ denoting the standard inner product. If G is convex and 𝖽_G( ω) =𝖽_G is constant, then G is called a set of constant width. In 3 dimensions a set of constant width is also called a body of constant width. The interest in the subject started with Leonhard Euler, who around 1774 considered 2d curves of constant width, which he called `curva orbiformis'. He not only studied such sets for 2 dimensions but also gave a formula describing such curves. See 10 of <cit.>. In 3 dimensions a ball is obviously the classical example of a body of constant width but the famous Meissner bodies also have this property. See <cit.> or <cit.>. Quite simple examples can also be constructed by taking a reflection symmetric 2d set of constant width and rotating it around its line of symmetry. Famous mathematicians such as Minkowski <cit.> and Hilbert <cit.> were intrigued by the subject. The first interest of most scholars focused on deriving properties of such domains. A wonderful survey on sets of constant width (up to 1983) was provided by Chakerian and Groemer in <cit.>, and a more recent updated and thorough treatment can be found in the book by Martini, Montejano and Oliveros <cit.>. Let us recall that the 3d question, motivated by Blaschke's 2d result <cit.>, as to which body of constant fixed width has the smallest volume or, equivalently, the smallest surface area, is still open.
We will not solve that problem, but will give an alternative formula for constructing bodies of constant width that might help. Let us recall some known facts about sets of constant width. Sets of constant width G in ℝ^n are strictly convex and hence any tangential plane touches G in at most one point. Although the Gauss-map ∂Ω→𝕊^n-1 (the outside normal on smooth parts of ∂ G) will not be uniquely defined on edges or corners, the `inverse' γ _G:𝕊^n-1→∂ G is well-defined for strictly convex G and parametrizes ∂ G. See <cit.>. In <cit.> one finds, when G is a set of constant width and γ _G∈ C^1( 𝕊^n-1), that γ _G(ω )=P_G(ω ) ω +∇ _P_G(ω ), where P_G(ω ):=max{⟨ω ,x⟩ ;x∈ G} is the support function and ∇ _P_G its gradient along 𝕊^n-1, i.e. ⟨∇ _P_G(ω ),v⟩ =( dP_G(ω )) (v) for all ω∈𝕊^n-1 and v∈ T_ω𝕊^n-1. Necessary for a set of constant width 𝖽_G is that γ _G(ω )-γ _G(-ω )=𝖽_G ω , from which it follows that P_G(ω )+P_G(-ω )=𝖽_G and ∇ _P_G(ω )=∇ _P_G(-ω ) for all ω∈𝕊^n-1. In <cit.> a parametrization of sets of constant width is given by using the so-called median surface, which is parametrised by M_G(ω ):=γ _G(ω )-12𝖽_G ω for ω∈𝕊^n-1. Writing x· Y:=∑_ix_iY_i, which may coincide with but will not be restricted just to the inner product ⟨·,·⟩, the convexity of G leads to ( M_G(ω̂)-M_G(ω )) ·ω≤1 4𝖽_G^2 |ω̂-ω| ^2 for all ω ,ω̂∈𝕊^n-1, while (<ref>) implies M_G(ω )=M_G(-ω ) for all ω∈𝕊^n-1. The reverse question would be: can one give criteria on a continuous function γ :𝕊^n-1→ℝ^n such that γ( 𝕊^n-1) parametrizes the boundary of a set of constant width? An answer is given by Theorem 2 of <cit.>, where it is stated that for any continuous map M:𝕊^n-1→ ℝ^n and α >0, which satisfy [ M(ω )=M(-ω ) for all ω∈𝕊^n-1,; ( M(ω̂)-M(ω )) ·ω≤1/4α ^2 |ω̂-ω| ^2 for all ω ,ω̂∈𝕊^n-1, ] the set G:={ M(ω )+tω ; ω∈𝕊^n-1,0≤ t≤12α} is of constant width 𝖽_G:=α and M_G(ω ):=M(ω ). The γ _G and M_G are as in (<ref>). One finds by continuity of M that ∂ G⊂{ M(ω )+12α ω ; ω∈𝕊^n-1} and even that the identity holds in (<ref>). Although (<ref>) are hence necessary and sufficient conditions, there is not yet an obvious class of functions leading to bodies of constant width. Continuity or even differentiability by itself is not enough for an α to exist for which (<ref>) holds. The second condition in (<ref>) implies the convexity of G from (<ref>) and as such it gives a monotonicity for directional derivatives, hence a necessary one-sided estimate for second derivatives of M, whenever these exist. In two dimensions, see <cit.>, a few simple conditions on a function in L^∞( 0,π) are necessary and sufficient in order to have a curve of constant width. The construction in 2 dimensions is also helpful in 3 dimensions. It will allow us to give a more explicit formula for all bodies of constant width, which is what we want to show here. § TWO DIMENSIONS In the last century Hammer and Sobczyk described a construction for 2 dimensions in <cit.>, based on a characterization of what they called `outwardly simple line families'. More recently a direct concise formula was given in <cit.> to describe all those sets in two dimensions starting from any L^∞( 0,π)-function satisfying 2 equations, namely the ones in (<ref>). Let us recall the 2d formula from <cit.>: Let x_0∈ℝ^2, r∈ℝ and a∈ L^∞( ℝ) satisfy r≥‖ a‖ _∞, a( φ +π) =-a( φ) for all φ , ∫_0^π a( s) -sin scos s ds=00. Define the closed curve x:[ 0,2π] →ℝ^2 by x( φ) =x_0+∫_0^φ( r-a( s) ) -sin scos sds. Then x describes the boundary of a set of constant width 2r. 
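To make the formula concrete, the curve can be traced by accumulating the integral numerically. The short Python sketch below does this with the trapezoidal rule and, as an illustration only, uses a(s) = -cos(3s) with r = 1 (this choice satisfies the assumptions above and reappears in the example section below); it also checks that opposite points x(φ) and x(φ+π) are at distance 2r, as they must be for a set of constant width 2r.

```python
import numpy as np

def constant_width_curve(a, r, x0=(0.0, 0.0), num=2001):
    """Trace x(phi) = x0 + int_0^phi (r - a(s)) (-sin s, cos s)^T ds on [0, 2*pi]
    with the cumulative trapezoidal rule; num should be odd so that phi = pi
    falls exactly on the grid."""
    phi = np.linspace(0.0, 2.0 * np.pi, num)
    integrand = (r - a(phi))[:, None] * np.column_stack((-np.sin(phi), np.cos(phi)))
    steps = 0.5 * (phi[1] - phi[0]) * (integrand[1:] + integrand[:-1])
    return np.asarray(x0) + np.vstack((np.zeros(2), np.cumsum(steps, axis=0)))

x = constant_width_curve(lambda s: -np.cos(3.0 * s), r=1.0)   # triangular-looking curve
gap = np.linalg.norm(x[1000:] - x[:1001], axis=1)             # |x(phi + pi) - x(phi)|
print(gap.min(), gap.max())                                   # both approximately 2r = 2
```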
For a simple statement in Theorem <ref> the function a∈ L^∞( 0,π) is extended to ℝ and such that ( <ref>) and (<ref>) are satisfied. The formula in (<ref>) shows that x∈ C^0,1( ℝ), which is optimal for r=‖ a‖ _∞. For r>‖ a‖ _∞ when considering the set x[ 0,2π] as a curve one finds that x[ 0,2π ] ∈ C^1,1. The formula in (<ref>) describes the boundary of all 2d domains of constant width: If G⊂ℝ^2 is a closed convex set of constant width 2r, then there exists x_0 and a as in Theorem <ref>, such that ∂ G=x( [ 0,2π] ) with x as in (<ref>). The geometric interpretation of the formula in (<ref>) is that x(φ) and x(φ+π) describe the ends of a rotating stick of length 2r with the varying point of rotation lying on the stick by (<ref>) and determined by a(φ). For these ends to coincide for φ∈[0,π] with those for φ∈[π,2π] one needs condition (<ref>). The two equalities in condition (<ref>) make it a closed curve. § A FORMULA IN THREE DIMENSIONS There have been previous attempts to provide an explicit construction of all 3d bodies of constant width. In <cit.> Lachand-Robert and Oudet present a geometric construction that generates 3d bodies of constant width from 2d sets of constant width. This construction, however, does not capture all 3d bodies of constant width because a counterexample is provided in the paper <cit.> by Danzer. In <cit.> Montejano and Roldan-Pensado generalize the construction of Meissner bodies to generate so-called Meissner polyhedra. This construction does not generate all 3d bodies either, because the rotated Reuleaux triangle is a counterexample. As already mentioned Bayen, Lachand-Robert and Oudet give a description of (all) n-d sets of constant width in <cit.> but the function M has to satisfy a condition at each point of 𝕊^n-1. We provide an alternative construction, based on the method from <cit.> which gives a simpler condition although still at each point of 𝕊^2. A simple condition as in 2d does not seem possible, but our conditions will be close to the optimal one. We use the 2d-approach to get a curve of constant width 2r for each θ based on the function a from Theorem <ref>, which may now depend also on θ: φ↦ a(φ,θ) for each θ, Whenever ∂^2_θ a_∞ is bounded and when r is large enough, there exists a unique perturbation in the θ -direction for the collection of rotating 2d-curves, hence perpendicular to such an initial curve, such that the combined result will yield a 3d-body of constant width. Moreover, each body of constant width can be written this way. Aside from our results from <cit.> for two dimensions we will use a result by Hadwiger in <cit.>, which can be roughly described as follows: convex bodies in ℝ^n are uniquely determined by the projections in ℝ^n-1 perpendicular to one fixed direction. The result holds for n≥ 4 and, whenever the one fixed direction is regular, also for n=3. This last addendum is due to <cit.>. Regular means here, that the planes perpendicular to that fixed direction which touch the convex domain, do that in precisely one point. Since sets of constant width are necessarily strictly convex, this is obviously the case for those sets and any choice of the fixed direction. Let us define for ω∈𝕊^2 the orthogonal projection P_ω on the plane E_ω:={x∈ℝ^3;⟨ x,ω⟩=0 }. To exploit the result of Hadwiger we will use for a fixed u∈𝕊^2 all projections in the directions ω∈𝕊^2 with ⟨ω ,u⟩ =0. See Figure <ref>. For those ω we have P_ωx=⟨ u,x⟩ u+⟨ u×ω ,x⟩( u×ω) . 
For later use we need to identify the projections on E_ω with coordinates in ℝ^2 through P_ωx=( [ ⟨ u,x⟩; ⟨ u×ω ,x⟩ ] ) . We may now explain the result by Hadwiger in <cit.> in more detail. He proved that for two convex bodies G_1 and G_2 in ℝ^3 the following holds. * If P_uG_1≃ P_uG_2 and P_ωG_1≃ P_ωG_2 for all ω∈𝕊^2 with ⟨ω ,u⟩ =0, then G_1≃ G_2. Here A≃ B means that A equals B after a translation. In other words, there is a fixed v∈ℝ^3 such that A=v+B. Groemer showed in <cit.> that one could drop the condition P_uG_1≃ P_uG_2, whenever u is a regular direction for G_1. Here regular means that max{⟨ u,x⟩ ;x∈ G_1} is attained for a unique x∈ G_1. Since domains G of constant width are precisely those domains for which G^∗:=12G+12( -G) :={12 x-12y;x,y∈ G} is a ball, which has only regular directions, one finds that (P _ωG)^∗ is a disc for all ω∈𝕊^2 with ⟨ω ,u⟩ =0, if and only if G^∗ is a ball. Necessarily those discs and the ball have the same radius. This implies that a convex closed set G⊂ℝ^3 is a body of constant width if and only if there is a direction u∈𝕊^2, such that for some fixed ρ >0 one finds (P_ωG)^∗≃ D_ρ:={ y∈ℝ^2;| y|≤ρ} for all ω∈𝕊^2 with ⟨ω ,u⟩ =0. This means that all those P_ωG should be two-dimensional convex sets of constant width ρ. So by taking u=( 1,0,0) we find that the boundary of P_ωG is described by (<ref>) with some a depending on ω. This leads us to the result in Theorem <ref> that will be formulated using an admittedly unusual parametrization of 𝕊^2, which we introduce next: We parametrize 𝕊^2=U(ℝ^2) by ω = U( φ ,θ) :=( [ sinφcosθ; sinφsinθ; cosφ ] ) . This is the standard parametrization with φ the angle between ω and the positive z-axis and θ the counterclockwise angle of the projection on the xy-plane with the x-axis, viewed from the positive z-axis. Obviously this parametrization is not unique as we may restrict (φ,θ) to some subset of ℝ^2. We may define a convenient φ ,θ-dependent orthonormal basis, first for sinφ 0, {U( φ ,θ) ,U _φ( φ ,θ) ,U_θ( φ ,θ) /sinφ} = {( [ sinφcosθ; sinφsinθ; cosφ ] ) ,( [ cosφcosθ; cosφsinθ; -sinφ ] ) ,( [ -sinθ; cosθ; 0 ] ) } =: {U( φ ,θ) ,V ( φ ,θ) ,W( θ) }, with the expression in the middle showing the obvious extension when sinφ =0. Any function (φ,θ)↦ v(φ,θ):ℝ^2→ ℝ that is used to define a quantity on 𝕊^2 necessarily has to possess the obvious periodicity properties as well as some compatibility conditions. The relations, which the function a from (<ref>) has to satisfy, are more subtle. For φ∉{0,π } the value r-a(φ,θ) coincides with the inverse curvature in the φ-direction. There is however a pecularity at the north- and southpole, where the curvature in any(!) direction is given by (r-a(0,θ))^-1, respectively (r-a(π,θ) )^-1, through varying θ. This leads to the following definition with a distinction between pure periodicity and what we call compatibility: For a function f:ℝ^2→ℝ we say that: * f satisfies the periodicity conditions for 𝕊^2, if f(φ̂,θ̂) =f(φ ,θ ) for all φ̂-φ ,θ̂-θ∈ 2πℤ, f(φ̂,θ̂) =f(φ ,θ ) for all φ̂+φ ,θ̂-θ +π∈ 2πℤ; * f satisfies the compatibility conditions for the poles of 𝕊 ^2, if f(0,θ )=f(0,0) and f(π ,θ )=f(π ,0) for all θ∈ℝ. Suppose that B(ℝ^2) is some function space. We write: * f∈ B_p(ℝ^2), whenever f∈ B(ℝ^2) and satisfies (<ref>) and (<ref>); * f∈ B_p,c(ℝ^2), whenever f∈ B(ℝ^2) and satisfies (<ref>), (<ref>) and (<ref>). 
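The periodicity and compatibility conditions above can be spot-checked numerically on a grid. The small Python sketch below is only an illustration of how the definition operates on concrete functions (the grid and the function name are our own choices); it is of course no substitute for verifying the conditions analytically.

```python
import numpy as np

def sphere_conditions(f, n=60, tol=1e-10):
    """Spot-check the periodicity conditions for S^2 and the compatibility
    conditions at the poles for f = f(phi, theta) on an n-by-n grid."""
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    theta = np.linspace(-0.5 * np.pi, 0.5 * np.pi, n)
    P, T = np.meshgrid(phi, theta, indexing="ij")
    periodic = (np.allclose(f(P + 2.0 * np.pi, T), f(P, T), atol=tol)
                and np.allclose(f(P, T + 2.0 * np.pi), f(P, T), atol=tol)
                and np.allclose(f(-P, T + np.pi), f(P, T), atol=tol))
    compatible = (np.allclose(f(0.0, theta), f(0.0, 0.0), atol=tol)
                  and np.allclose(f(np.pi, theta), f(np.pi, 0.0), atol=tol))
    return periodic, compatible

print(sphere_conditions(lambda p, t: np.cos(p)))   # (True, True)
print(sphere_conditions(lambda p, t: np.sin(t)))   # (False, False)
```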
One usually restricts ℝ^2 to [ 0,π] ×[ 0,2π] to have a unique parametrization at least for the interior points and with some compatibility assumptions at its boundary, but here it will be more convenient to take 𝖲=[ 0,2π] ×[ -12π,12π ]. As in the 2d-case the function a from (<ref>) that we use is such that at opposite points of 𝕊^2 the value is opposite: a( φ ,θ) =-a( φ +π ,θ) =-a( π -φ ,θ +π). Hence a is completely defined by its values on [ 0,π) ×[ -1/2π,1/2π). Suppose that a∈ C_p^2(ℝ^2) satisfies a( φ ,θ) =-a( φ +π ,θ) for all ( φ ,θ) ∈ℝ^2, ∫_0^πa( s,θ) cos ssin s ds=0 0 for all θ∈ℝ, let V and W be as in (<ref>) and suppose that h:(0,π)×ℝ→ℝ is defined by: h( φ ,θ) :=-∫_0^φsin( φ -s)  ∂ _θa(s,θ ) ds/sinφ. * Then the definition in (<ref>) can be continuously extended to ℝ^2. The extended h is such that (φ ,θ )↦ h(φ ,θ )W(θ )∈ C_ p,c^1(ℝ^2), and satisfies h(φ ,θ )=0 for all (φ ,θ )∈πℤ ×ℝ. * There exist r_0(a)∈[ ‖ a‖ _∞,‖ a‖ _∞+‖∂ _θa‖ _∞+‖∂ _θ^2a‖ _∞0cm0.4cm ] , such that for all r≥ r_0(a)and X_0∈ℝ^3 , the surface X(𝖲), defined by X(φ ,θ )=X_0+∫_0^φ( r-a( s,θ) ) V(s,θ ) ds+h( φ ,θ) W(θ ), describes the boundary of a body of constant width. * Moreover, with a(· ,· ) as above, the function h in (<ref>) is the unique possibility in order that X in (<ref>) describes the boundary of a body of constant width. Our construction will be illustrated by an example in Section 3. Although a∈ C_p^2(ℝ^2) will imply that ( φ ,θ) ↦ h( φ ,θ) W(θ) ∈ C_p,c^1(ℝ^2), one finds at most X∈ C^0,1_p,c(ℝ^2). Hence the induced parametrization 𝕊^2→∂ G is a diffeomorphism. This will only be the case for r>r_0(a) and in general not for r=r_0(a). By taking r>r_0(a) one obtains a C^1,1-surface with a distance ε=r-r_0(a) from the body of constant width for r=-r_0(a) where Lipschitz is optimal. In the next section the example is such that r=r_0(a)=1 and X will not be a diffeomorphism near ( 0,0). Indeed, the surface near the north pole is not C^1. We have assumed that a∈ C_p^2(ℝ^2), which is sufficient for describing a 3d set of constant width for r large, but certainly more than necessary for h and X to be well-defined. Necessary for h to be well-defined will be L^∞ bounds for a,a_θ and a_θθ. For the 2d case a necessary and sufficient restriction appears, namely r≥ r_0(a):=‖ a‖ _∞. In 3d this condition is still necessary but not sufficient. To have a differentiable parametrization in 3d a bound appears that contains ∂ _θh. We are however not able to quantify such a bound more precisely like in 2d. We first proof some results for h that we gather in the next lemma. Suppose that a∈ C_p^2(ℝ^2) satisfies (<ref>) and (<ref>). Then (<ref>) can be continuously extended to ℝ^2, and the extended h is such that for all (φ ,θ )∈ℝ^2: h( φ ,θ) = h( φ +π ,θ) , h( φ ,θ) = -h( -φ ,θ +π) . and satisfies (<ref>) and (<ref>). Moreover, for all (φ ,θ )∈ℝ^2 it holds that | h( φ ,θ) | ≤‖ a_θ‖ _∞|sinφ| , | h_φ( φ ,θ) | ≤‖ a_θ‖ _∞, | h_θ( φ ,θ) | ≤‖ a_θθ‖ _∞|sinφ| . A priori h is defined for φ∈( 0,π). From (<ref>) we find that ∫_0^πsin( φ -s)  a_θ(s,θ ) ds=∂ _θ( ∫_0^πsin( φ -s)  a(s,θ ) ds) =0. With a∈ C_p^2(ℝ^2) and (<ref>) we hence find that h is uniformly bounded on ( 0,π) ×ℝ. Moreover, for φ∈( 0,1/2π] we find |∫_0^φsin( φ -s)  ∂ _θa(s,θ ) ds|≤‖∂ _θa‖ _∞∫_0^φsin( φ -s) ds =‖∂ _θa‖ _∞( 1-cosφ) ≤‖∂ _θa‖ _∞( 1-cosφ) ( 1+cosφ) =‖∂ _θa‖ _∞|sinφ| ^2, while for φ∈( 1/2π ,π) we obtain with ( <ref>) that |∫_0^φsin( φ -s)  ∂ _θa(s,θ ) ds|≤‖∂ _θa‖ _∞|∫_φ^πsin( φ -s) ds| =‖∂ _θa‖ _∞( 1+cosφ) ≤‖∂ _θa‖ _∞( 1-cosφ) ( 1+cosφ) =‖∂ _θa‖ _∞|sinφ| ^2. 
Hence we find | h( φ ,θ) |≤‖∂ _θa‖ _∞|sinφ| , which is (<ref>) at least on ( 0,π) ×ℝ , and we may extend h by 0 to φ∈{ 0,π}. By (<ref>), a substitution and (<ref>) h( φ +π ,θ) =-∫_0^φ +πsin( φ +π -s) ∂ _θa( s,θ) ds /sin( φ +π) =∫_π^φ +πsin( φ +π -s) ∂ _θa( s,θ) ds /sinφ =∫_0^φsin( φ -s) ∂ _θa( s-π ,θ) ds/sinφ=-∫_0^φsin( φ -s) ∂ _θa( s,θ) ds /sinφ=h( φ ,θ) and we find that the definition is well-defined on ( 0,2π) ×ℝ and at least there (<ref>) holds. Then h can be extended continuously by 0 to φ =2π and (<ref>) holds for φ∈{ 0,π ,2π}. This allows us to use the definition in (<ref>) for h for all φ with sinφ≠ 0 and to set h=0 whenever sinφ =0. Moreover, since a_θ∈ C_p^1(ℝ^2) we find that h and also h W satisfies (<ref>). One also finds that (<ref>) holds true. For (<ref>) first note that h(-φ ,θ +π ) =-∫_0^-φsin( -φ -s)  ∂ _θa(s,θ +π ) ds/sin( -φ) =∫_0^-φsin( -φ -s)  ∂ _θa(-s,θ ) ds/sinφ =-∫_0^φsin( -φ +s)  ∂ _θa(s,θ ) ds/sinφ=∫_0^φsin( φ -s)  ∂ _θa(s,θ ) ds/sinφ =-h(φ ,θ ), which shows (<ref>). Moreover, with h(-φ ,θ +π )W(θ +π )=-h(φ ,θ ) W(θ +π )=h(φ ,θ )W(θ ) and a_θ∈ C_p^1(ℝ^2) implying h∈ C^1( ℝ^2), one finds (<ref>). Since the θ-dependence only comes through a the estimate in (<ref>) is proven similarly as for (<ref>). For ( <ref>) we use a straightforward computation from (<ref>) and using (<ref>) to find h_φ( φ ,θ) =-∫_0^φsin s ∂ _θa(s,θ ) ds/( sinφ) ^2=∫_φ^πsin s ∂ _θa(s,θ ) ds/( sinφ) ^2. Next we may estimate by | h_φ( φ ,θ) |≤min( ∫_0^φsin s ds,∫_φ^πsin s ds)‖ a_θ‖ _∞/( sinφ) ^2≤‖ a_θ‖ _∞, which concludes the proof of Lemma <ref>. Proofs of Theorem <ref> and of the converse result in the next theorem are given in Section <ref>. * Each body of constant width is described by (<ref>) for some a∈ L_p^∞(ℝ^2) with θ↦ a(φ,θ) uniformly Lipschitz on 𝖲, a(·,·) satisfying (<ref>) and (<ref>), with some r≥‖ a‖ _L^∞(𝖲) and with h satisfying (<ref> ), (<ref>), (<ref>) and h( φ ,θ) =lim_ε→ 0∫_0^φa(s,θ )-a(s,θ +ε )/ εsin( φ -s) ds/sinφ. * Concerning regularity we have h(φ,θ)=0 for φ∈{0,π,2π}, θ∈[-1/2π,1/2π] and h( φ ,θ) ( [ -sinθ; cosθ ] ) ∈ C^0,1( 𝖲) and moreover, if a(·,·) can be extended such that i. a,∂ _θa∈ C_p^0(ℝ^2), then also (<ref>) holds true; ii. a,∂ _θa∈ C_p^1(ℝ^2), then h can be extended such that (<ref>) holds true. § AN EXAMPLE The formulas are rather technical and in order to illustrate that (<ref>) does deliver a body of constant width, we give an actual construction in a case that is computable. The example yields a body of constant width connecting two triangular 2d-domains of constant width based on the 2d-formula. In addition to x_0=(0,0) and r=1 we use in Fig. <ref>: * for the figure on the left: a(s)=a_1(s):=-cos (3s); * for the figure in the middle: a(s)=a_2(s):=sin (3s). The figure on the right of Fig. <ref> combines these two curves in a 3d-setting in orthogonal planes with the red line as common intersection. In order to find a smooth perturbation from the horizontal to the vertical curve by curves whose projections will be 2d-curves of constant width 1, we use the following: a( φ ,θ) :=( cosθ) ^2a_1(φ )+( sinθ) ^2a_2(φ ). The a in (<ref>) is used to produce the sketch on the left in Fig.  <ref> using the formula in (<ref>) without the h-term. Each intersection with a plane containing the horizontal (red) line {λ (1,0,0); λ∈ℝ} will produce a 2d set of constant width although the body itself will not be a 3d set of constant width. Only after the modification which includes the additional h-term in (<ref>) does one indeed find a 3d set of constant width. 
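For readers who want to reproduce this example numerically, the Python sketch below evaluates a, the corresponding correction term h and the resulting parametrization X by straightforward quadrature (grid sizes, helper names and the base point X_0 = 0 are our own choices), and then checks the antipodal identity X(φ,θ) - X(φ+π,θ) = 2r U(φ,θ) at a sample point; the printed vector is zero up to quadrature error.

```python
import numpy as np

a1 = lambda s: -np.cos(3.0 * s)
a2 = lambda s: np.sin(3.0 * s)
a       = lambda p, t: np.cos(t) ** 2 * a1(p) + np.sin(t) ** 2 * a2(p)
a_theta = lambda p, t: np.sin(2.0 * t) * (a2(p) - a1(p))      # d a / d theta

def X(phi, theta, r=1.0, num=800):
    """Parametrization with X_0 = 0: planar part int_0^phi (r - a) V ds plus the
    correction h(phi, theta) in the direction W = (-sin theta, cos theta, 0)."""
    s = np.linspace(0.0, phi, num)
    V = np.vstack((np.cos(s) * np.cos(theta),
                   np.cos(s) * np.sin(theta),
                   -np.sin(s)))
    planar = np.trapz((r - a(s, theta)) * V, s, axis=1)
    if abs(np.sin(phi)) > 1e-12:
        h = -np.trapz(np.sin(phi - s) * a_theta(s, theta), s) / np.sin(phi)
    else:
        h = 0.0                                               # h vanishes at the poles
    return planar + h * np.array([-np.sin(theta), np.cos(theta), 0.0])

U = lambda p, t: np.array([np.sin(p) * np.cos(t), np.sin(p) * np.sin(t), np.cos(p)])
p, t, r = 0.7, 0.3, 1.0
print(X(p, t, r) - X(p + np.pi, t, r) - 2.0 * r * U(p, t))    # ~ (0, 0, 0)
```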
The 2d-curves from Figure <ref> appear in the horizontal and a vertical plane in blue. The special choice of ( cosθ) ^2 and ( sinθ) ^2 in (<ref>) implies that ∂ _θa( φ ,0) =0=∂ _θa( φ ,±π /2) for all φ. Therefore h( φ ,0) =0=h( φ ,±π /2) for all φ or, in other words, those two curves remain unmodified. So in the left part of Fig. <ref> one finds rotated 2d-domains of constant width `rotating' from the the 2d figure with a_1 to the 2d figure with a_2. Since that rotated form is not even convex it cannot be a 3d-body of constant width. On the right one finds the same projections as on the left but the curves are now moved in a perpendicular direction with factor h. The function h in (<ref>) is the only one such that the figure on the right is a 3d-body of constant width. § PROOFS OF THE TWO THEOREMS For the standard inner product of u,v∈ℝ^n we use ⟨ u,v⟩. The notation u· v is used for componentwise multiplication, which includes but can be more general than the inner product. Let us start by introducing three vectors for a more concise notation: Θ=( [ cosθ; sinθ; 0 ] ) , Ψ=( [ -sinθ; cosθ; 0 ] ) and Ξ=( [ 0; 0; 1 ] ) . These three directions constitute a θ-dependent orthonormal basis in ℝ^3 that turns out to be convenient for our parametrization. The following identities hold true: ∂ _θΘ=Ψ and ∂ _θΨ=-Θ . Also note that our initial basis (<ref>) can be expressed in term of (<ref>) U( φ ,θ) =( [ sinφcosθ; sinφsinθ; cosφ ] ) =cosφ Ξ+sinφ Θ =( [ sinφ; cosφ ] ) ·( [ Θ; Ξ ] ) , V( φ ,θ) =( [ cosφ; -sinφ ] ) ·( [ Θ; Ξ ] ) and W( θ) = Ψ. The dot product · in (<ref>), (<ref>) might seem artificial. However, it coincides with the usual definition of inner product and is most convenient for a concise notation in the following proofs. We will have to show that X in (<ref>) is a regular parametrization and secondly, that the resulting surface will yield a body of constant width. For both aspects we need to consider ∂ _φ X( φ ,θ) and ∂ _θ X( φ ,θ). ▸ Computation of ∂ _φ X and ∂ _φX. We will check first that X in (<ref>) is a regular parametrization of the boundary ∂ G of a body of constant width for r large enough, that is X̃:𝕊^2→∂ G defined by X̃( ω) :=X( φ ,θ) for ω =U(φ ,θ ) is C^1, one-to-one and onto, and even a diffeomorphism. With the notation from (<ref>) we can rewrite (<ref>) as X( φ ,θ) =X _0+∫_0^φ( r-a(s,θ )) ( [ -sin s; cos s ] ) ds·( [ Ξ; Θ ] ) +h(φ ,θ ) Ψ . One computes that ∂ _φX( φ ,θ) =( r-a(φ ,θ )) ( [ -sinφ; cosφ ] ) ·( [ Ξ; Θ ] ) +∂ _φh(φ ,θ ) Ψ and ∂ _θX( φ ,θ) =-∫_0^φ∂ _θa(s,θ )( [ -sin s; cos s ] ) ds·( [ Ξ; Θ ] ) -h(φ ,θ ) Θ + ( ∫_0^φ( r-a(s,θ )) cos s ds+∂ _θh(φ ,θ )) Ψ =∫_0^φ∂ _θa(s,θ )sin s ds Ξ-cosφ/sinφ∫_0^φ∂ _θa(s,θ )cos s ds Θ + ( ∫_0^φ( r-a(s,θ )) cos s ds+∂ _θh(φ ,θ )) Ψ . ▸ Invariant normal direction. Before continuing to show that the parametrization is appropriate, notice that one directly finds that then the outward normal at X ( φ ,θ) satisfies: ν_X̃( ω)=ω for all ω∈𝕊^2. Indeed, with ω =cosφ Ξ+sinφ Θ we find ω·∂ _φX( φ ,θ) =0 and by the definition of h: ω·∂ _θX( φ ,θ) =∫_0^φ∂ _θa(s,θ )sin( s-φ) ds-h(φ ,θ )sinφ =0. Note that for the inverse of ω↦X( φ ,θ) to be the Gauss map of the body of constant width we need ( <ref>) to be zero and hence it leads us to the only possible definition of h in (<ref>). ▸ Well defined parametrization. For a∈ C_ p^2( ℝ^2) Lemma <ref> implies that h is well-defined and hW lies in C_p,c^1(ℝ^2) . So with (<ref>) and (<ref>) also the expression in (<ref>) lies in C^1_p,c(ℝ^2). 
In order to have a regular parametrization it is sufficient that: * ∂ _φX×∂ _θ X is nontrivial on {( φ ,θ) ∈ℝ^2;φ∉πℤ}, and * ∂ _φX( φ ,0) ×∂ _φX( φ ,1/2π) is nontrivial for φ∈{ 0,π}. Let us start with the second case for φ =0, with φ =π similarly: ∂ _φX(φ ,0)×∂ _φ X(φ ,12π )=( [ r-a(0,0); ∂ _φh( 0,0); 0 ] ) ×( [ -∂ _φh( 0,1/2π); r-a(0,1/2π ); 0 ] ) =:( [ 0; 0; T ] ) , where T=( r-a(0,0)) ( r-a(0,12π )) +∂ _φh( 0,0) ∂ _φh( 0,12π) . Since ∂ _φh(φ ,θ )=-∫_0^φsin s∂ _θa(s,θ )ds/( sinφ) ^2 one obtains as in the derivation of (<ref>), that |∂ _φh(φ ,θ )|≤‖∂ _θa‖ _∞. A sufficient condition for T>0 hence is r>‖ a‖ _∞+‖∂ _θa‖ _∞. For φ∉πℤ, using (<ref>) and (<ref>), which state that ω is perpendicular to ∂ _φ X and ∂ _θX, a simple way of checking that ∂ _φX×∂ _θ X is nontrivial, is to show that ω·( ∂ _φX×∂ _θX) ≠ 0. With the orthonormal basis {Θ, Ψ ,Ξ} we obtain ω·( ∂ _φX(φ ,θ )×∂ _θX(φ ,θ ) 0mm3mm) = ( [ sinφ ( r-a(φ ,θ )) cosφ - cosφ/sinφ∫_0^φ∂ _θa(s,θ )cos s ds; 0 ∂ _φh(φ ,θ ) ∫_0^φ( r-a(s,θ )) cos s ds+∂ _θh(φ ,θ ); cosφ -( r-a(φ ,θ )) sinφ ∫_0^φ∂ _θa(s,θ )sin s ds ] ) =( r-a(φ ,θ )) ( ∂ _θh(φ ,θ )+∫_0^φ( r-a(s,θ )) cos s ds) +∂ _φh(φ ,θ )∫_0^φsin s ∂ _θa(s,θ ) ds/sinφ = ( r-a(φ ,θ )) ( ∫_0^φ( r-a(s,θ )) cos s ds+∂ _θh(φ ,θ )) -( ∂ _φh(φ ,θ )) ^2sinφ ≥ ( ( r-‖ a‖ _∞) ^2-( r-‖ a‖ _∞) ‖∂ _θ^2a‖ _∞-‖∂ _θa‖ _∞^2) |sinφ| ≥ ( r-‖ a‖ _∞-‖∂ _θa‖ -‖∂ _θ^2a‖ _∞0mm4mm) ( r-‖ a‖ _∞+‖∂ _θa‖ _∞0mm4mm ) |sinφ| , where the last inequalities hold whenever r≥‖ a‖ _∞+‖∂ _θa‖ _∞+‖∂ _θ^2a‖ _∞, with the strict version of this inequality also being sufficient for a regular parametrization. So there exists a minimal r_0(a)∈[ ‖ a‖ _∞,‖ a‖ _∞+‖∂ _θa‖ _∞+‖∂ _θ^2a‖ _∞] such that the parametrization is well-defined for all r>r_0(a). For r=r_0(a) the parametrization is no longer of class C^1 or one-to-one. However, since for all r>r_0(a) one finds a body of constant width and all functions involved are continuous, also the limit by taking t↓ r_0(a) gives a body of constant width. ▸ Homotopy to the sphere. The parametrization is well-defined for all r>r_0(a) and to be able to focus on the dependence on r we use an explicit r in the following expression (<ref>) in this paragraph: X_e( r,φ ,θ) :=X( φ ,θ) and X̃_e( r,ω) :=X̃( ω), with X,X̃ from (<ref>) and (<ref>). We define ( 0,1] ×𝕊^2∋( ρ ,ω) ↦Ỹ( ρ ,ω) :=1/r X̃_e( ρ ^-1r,ω) ∈ℝ ^3. One finds that Ỹ( 1,ω) =1/rX̃_e( r,ω) and Ỹ( 0,ω) :=lim_ρ↓ 0Ỹ( ρ ,ω) =ω with all Ỹ( ρ ,·) for ρ∈[0,1 ] being regular parametrizations and their outside normals ν satisfying ν_Ỹ( ρ ,ω ) =ω for all ω∈𝕊^2. So ℝ^3∖X̃_e( 1,𝕊 ^2) has precisely two connected components. The bounded one we call A. ▸ Convexity of A. Since the extreme value of X̃(𝕊^2) in the direction ω has normal ω, and since by (<ref>) ν _X̃ ( ω) =ω, that extreme point is indeed X̃( ω). So for each ω it holds that X̃(𝕊^2), except for X̃ ( ω) itself, is on one side of that tangent plane. Hence A̅ lies on one side of all the tangent planes for ∂ A= X̃(𝕊^2), which implies that A̅ is convex. See also the proof of Hadamard's Theorem <cit.>. ▸ Body of constant width. According to the results proved above it is sufficient to show that X̃( ω) -X̃( -ω) =2rω for all ω∈𝕊^2. For the parametrization X with U as in (<ref>) this coincides with X(φ ,θ )-X(φ +π ,θ )=2rU( φ ,θ) for all (φ ,θ )∈ S. Indeed, using (<ref>) we find with (<ref>), (<ref>) and (<ref>) that X(φ +π ,θ )-X(φ ,θ ) =∫_φ^φ +π( r-a(s,θ )) ( [ -sin s; cos s ] ) ds·( [ Ξ; Θ ] ) +( h(φ +π ,θ )-h(φ ,θ )) Ψ =r( [ cos( φ +π) -cosφ; sin( φ +π) -sinφ ] ) ·( [ Ξ; Θ ] ) =-2r U( φ ,θ) , as desired. 
Suppose that G is a body of constant width. Define X_0∈ℝ^3 as the point on ∂ G with the largest x_3 -coordinate. Taking u=( 0,0,1) ^T and ω =( 0,-sinθ ,cosθ) ^T the result of Hadwiger, extended by the remark of Groemer that bodies of constant width have only regular boundary points, states that is is sufficient that the projections P_ωG of G on the planes E_ω:=c_1( [ 0; 0; 1 ] ) +c_2( [ cosθ; sinθ; 0 ] ) with θ∈[ -1/2π ,1/2π] are curves of constant width d_P_ωG=2r. Thus all those sets can be described by (<ref>) from Theorem <ref> with for each θ some function a(·,θ) depending on θ as a parameter. The value of r is the same for all projections and does not depend on θ. In other words, a fixed r exists and for each θ a mapping φ↦ a(φ ,θ )∈ L^∞(0,2π ) such that for the corresponding x as in Theorem <ref> we have ∂P_ωG=x( [ 0,2π] ,θ) with some x( 0,θ) ∈ℝ^2 and sup{| a( φ ;θ) | ;0≤φ≤π}≤ r for all θ∈[ -π /2,π /2 ]. Moreover, the mapping φ↦ a(φ ,θ ) satisfies (<ref>) and (<ref>). Hence (<ref>), (<ref>) and r_0(a)≥‖ a‖ _L^∞(𝖲) are necessary conditions. Since for each ω̃∈𝕊^2 the set G lies in the cylinder perpendicular to its projection, in other words, we have G⊂ P_ω̃G+[ ω̃] with [ ω̃] ={λω̃;λ∈ℝ}. It follows that for each X_∗∈∂ G∩( ∂ P_ω̃G+[ ω̃] ) there is ( φ ,θ) ∈𝖲 with U(φ ,θ )·ω̃=0 and a value h( φ ,θ) ∈ℝ such that X_∗=P_ω̃(X_∗)+h(φ ,θ )ω̃. If X_∗∗∈∂ G is such that ‖ X_∗-X_∗∗‖ =2r, the width, then also ‖ P_ω̃ (X_∗)-P_ω̃(X_∗∗)‖ =2r and X_∗∗=P_ω̃(X_∗∗)+h(φ ,θ )ω̃, which implies that h( φ ,θ) =h( φ +π ,θ) for all ( φ ,θ) ∈𝖲. Indeed X_∗=x(φ ,θ )·( [ Ξ; Θ ] ) +h( φ ,θ) Ψ. Here (<ref>) follows from the fact that the line through the points of farthest distance is perpendicular to the plane through (0,0,0), (0,0,1) and (cosθ ,sinθ ,0). Since for φ∈{ 0,π ,2π} the X_∗ in (<ref>) does not depend on θ , one finds for all θ∈ -1/2π ,1/2π ], that h( 0,θ) =h( π ,θ) =h( 2π ,θ) =0 and x(0,0)=x(0,θ )=x(2π ,θ )= x(π ,θ )+( [ 2r; 0 ] ) . The first vector on the right in (<ref>) inherits the conditions of the two-dimensional formula and so φ↦x(φ ,θ ) is for each θ∈ -1/2π ,1/2π ] as in (<ref>), and the formula in (<ref>) gives a parametrization X:𝖲→ℝ^3 of ∂ G given by X(φ ,θ )=X_0+∫_0^φ( r-a(s,θ )) ( [ -sin s; cos s ] ) ds·( [ Ξ; Θ ] ) +h( φ ,θ) Ψ. Since the two vector functions in X( φ ,θ) have independent directions, each one should be continuous on 𝖲 and with Θ and Ψ reversing sign in θ =±π /2, hence, allowing the notation C^0_ p(𝖲) as restriction of C^0_p(ℝ^2), ( φ ,θ) ↦ ∫_0^φ( r-a(s,θ )) sin s ds∈ C_p^0( 𝖲 ) , ( φ ,θ) ↦ ∫_0^φ( r-a(s,θ )) cos s ds( [ cosθ; sinθ ] ) ∈ C_p^0( 𝖲) , ( φ ,θ) ↦ h( φ ,θ) ( [ cosθ; sinθ ] ) ∈ C_p^0( 𝖲) . For later use we will also define X_oh(φ ,θ ):=X_0+∫_0^φ( r-a(s,θ )) ( [ -sin s; cos s ] ) ds·( [ Ξ; Θ(θ ) ] ) . We now have shown the formula in (<ref>) but without a more specific formula for h. In order to find (<ref>) we need some additional tools that we explain next. Note that the sketch on the left of Figure <ref> shows a domain with boundary X̃_oh(𝕊^2). The first step to address formula (<ref>) for h of Theorem <ref> is to find a formulation of how Lipschitz-continuity on 𝕊^2 translates to our parametrization that uses S. This translation into our (or any) spherical coordinates (φ ,θ )∈𝖲 is however not so obvious. The formulation is found in Lemma <ref> of Appendix <ref> . It is not clear how to show directly that the parametrization ω↦X̃(ω ) is Lipschitz. 
Instead of a direct approach we make a detour through ω↦X̃ _oh(ω ) from (<ref>) that turns out to be something that parametrizes what we will call a shadow domain. See Appendix <ref>. Without loss of generality we may assume that the domain satisfies the assumptions in Definition <ref>. There one also finds the definition of the 3d-shadow 𝑆ℎ_Ξ(Ω ). Suppose that X from (<ref>) parametrizes the boundary of the body of constant width G and let X_oh be as in ( <ref>). Then the function ω↦X̃_oh(ω ):𝕊 ^2→ℝ^3, defined by X̃_oh(U( φ ,θ) ):=X_oh(φ ,θ ) for (φ ,θ )∈𝖲, is Lipschitz-continuous and satisfies: * P_Ψ(θ )(X̃(ω ) )=X̃_oh(ω ) for ω∈𝕊^2, and * ∂(𝑆ℎ_Ξ(G ))=X̃ _oh(𝕊^2). Here P is as in (<ref>), Ψ,Ξ as in (<ref>), U as in (<ref>) and 𝑆ℎ_Ξ(Ω ) is the 3d-shadow as in Definition <ref> with respect to the axis Ξ. Witout loss of generality we may assume that the width equals 2. It will save us from using estimates involving r. From our construction one finds that the function X̃_oh:𝕊^2→ℝ^3 parametrizes the collection of boundaries of `2d-shadows' in the directions Ψ(θ ) for θ∈[ -1 /2π ,1/2π] and gives a bounded two-dimensional manifold in ℝ^3. Each 2d-shadow P_Ψ(θ )(G) for θ∈[ -1 /2π ,1/2π] is a two-dimensional set of constant width in the plane spanned by Ξ and Θ(θ). The 3d-domain Ω bounded by these curves, that is ∂Ω =⋃_θ∈[ -1 /2π ,1/2π] P_Ψ (θ )(X([ 0,2π] ,θ ))=X̃_oh(𝕊^2), is in general not a body of constant width and not even convex. In fact, if we rotate a body G of constant width around the axis through e_3 and - e_3, assuming G has width 2 and lies between these points, we may use that each projection is a curve of constant width. Moreover, its boundary lies between the extreme cases of two-dimensional curves of constant width. These extreme cases are the Reuleaux triangle pointing left and the one pointing right. See Fig. <ref> . For each fixed φ the function θ↦X_oh(φ,θ) is Lipschitz-continuous according to Lemma <ref> and with φ-dependent Lipschitz-constant L_φ = sup{X_oh(φ,θ);θ∈ [0,2π]} ≤min(√(4-(1+z)^2),√(4-(1-z)^2)) ≤ 2 |sinφ|. Here L_φ is the horizontal distance of the maximal outer boundary to the axis in terms of the heigth z∈[-1,1] as sketched in Fig. <ref> on the left. We obtain |X_oh(φ,θ)-X_oh(φ,θ_0)| ≤ 2|sinφ||θ-θ_0| for all θ,θ_0 and φ. Lipschitz-continuity of φ↦X_oh(φ,θ), with constant L=2, follows from our 2d-construction: |X_oh(φ,θ)-X_oh(φ_0,θ)| ≤ 2 |φ-φ_0| for all φ,φ_0 and θ. For |θ-θ_0|≤1/2π we use a triangle inequality with either (φ,θ_0) or (φ_0,θ) as an intermediate point and both (<ref>) and (<ref>) to get (<ref>), while for 1/2π<|θ-θ_0|≤π, and φ,φ_0 both either near 0 or π, one uses a triangle inequality with (0,θ) or (π,θ) (corresponding with the poles ± e_3) as an intermediate point and twice (<ref>). Note that one may always choose θ,θ_0 such that one of these two cases holds, possibly by extending periodically as in Definition <ref>. So, according to Lemma <ref>, the function X̃_oh is Lipschitz-continuous on 𝕊^2. If 𝕊^2∋ω↦ R(ω )ω for a positive R describes the boundary of a convex domain, then this function is Lipschitz continuous. Such a result does not hold if one just has that ω↦X̃(ω ) describes the boundary of a convex domain. To show the Lipschitz-continuity of this function, we use the 3d-shadow domain from Definition <ref> in Appendix <ref>. From Lemma <ref> we know that the function X̃_oh, defined from X_oh as in (<ref>) with X_oh given by 𝖲∋( φ ,θ) ↦X _oh(φ ,θ )=P_Ψ(θ )(X (φ ,θ )), is Lipschitz-continuous on 𝕊^2. 
It remains to show that this transfers to X̃. Note that for Θ and Ψ as functions of θ: Θ(θ +ε )=cosε Θ(θ )+sinε Ψ(θ ) and Ψ(θ +ε )=cosε Ψ(θ )-sinε Θ(θ ). When there is no misunderstanding we skip the θ-dependence of Θ and Ψ and use only Θ=Θ(θ ) and Ψ= Ψ(θ ) . Thus one computes X_oh(φ ,θ +t)-X_oh(φ ,θ )= ∫_0^φ( r-a(s,θ +t)) ( [ cos s; -sin s ] ) ds·( [ cos t Θ+sin t Ψ; Ξ ] ) -∫_0^φ( r-a(s,θ )) ( [ cos s; -sin s ] ) ds·( [ Θ; Ξ ] ) . Hence for t≠ 0 and small enough such that θ +t∈[ -π /2,π /2], it holds by the Lipschitz-continuity that X_oh(φ ,θ +t)-X_oh(φ ,θ )/tsinφ·Ψ=sin t/tsinφ∫_0^φ( r-a(s,θ +t)) cos s ds, is bounded and moreover positive for φ∉{ 0,π ,2π}. The contribution by h in (<ref>) is in the ±Ψ-direction but, contrary to X_oh, has no a-priori fixed direction although the directions in φ and φ +π are the same since h(φ ,θ )=h(φ +π ,θ ). Since the paramerization X as a function of θ has to be pointing in the direction of sinφ in order to be well-defined, we obtain X_oh(φ +π ,θ +t)-X _oh(φ +π ,θ )/t·Ψ≤ h(φ ,θ +t)-h( φ ,θ) /t≤ X_oh(φ ,θ +t)-X_oh(φ ,θ ) /t·Ψ. This estimate implies, since |sin t|≤| t|, that |h(φ ,θ +t)-h( φ ,θ) /t |≤ 2r|sinφ| . So X=X_oh+hΨ is Lipschitz-continuous in θ and we find X(φ ,θ +ε )-X(φ ,θ )=∫_0^φ( r-a(s,θ +ε )) ( [ -sin s; cos s ] ) ds·( [ Ξ; cosε Θ+sinε Ψ ] ) + +h( φ ,θ +ε) ( cosεΨ-sinε Θ) -∫_0^φ( r-a(s,θ )) ( [ -sin s; cos s ] ) ds·( [ Ξ; Θ ] ) -h( φ ,θ) Ψ =∫_0^φ( a(s,θ )-a(s,θ +ε )) ( [ -sin s; cos s ] ) ds·( [ Ξ; Θ ] ) +( h( φ ,θ +ε) -h( φ ,θ) ) Ψ + -2sin (ε /2)( h( φ ,θ +ε) ( [ cos (ε /2); sin (ε /2) ] ) +∫_0^φ( r-a(s,θ +ε )) cos sds( [ sin (ε /2); -cos (ε /2) ] ) ) ·( [ Θ; Ψ ] ) . As X(φ ,θ ) describes the surface of a body of constant width 2r and X(φ ,θ )-X(φ +π ,θ )=2r U(φ ,θ ) we find that for all ε |X(φ ,θ +ε )-X (φ +π ,θ )|≤ 2r = |X(φ ,θ )-X(φ +π ,θ )| . Note that ( X(φ ,θ +ε )-X(φ +π ,θ )) ·U(φ ,θ )=( X(φ ,θ +ε )-X(φ ,θ )) ·U(φ ,θ )+2r and thus we need ( X(φ ,θ +ε )-X(φ ,θ )) ·U(φ ,θ )≤ 0. Since U(φ ,θ )=cosφ Ξ +sinφ Θ we find, using the Lipschitz-continuity of θ↦X(φ ,θ ), that ( X(φ ,θ +ε )-X(φ ,θ )) ·U(φ ,θ )=∫_0^φ( a(s,θ )-a(s,θ +ε )) sin( φ -s) ds -sinφ( sinε h( φ ,θ +ε) +2( sin (ε /2)) ^2∫_0^φ( r-a(s,θ +ε )) cos s ds) =ε( ∫_0^φa(s,θ )-a(s,θ +ε )/εsin( φ -s) ds-sinφh( φ ,θ) ) +𝒪( ε ^2) . For (<ref>) to hold it follows that for all ε small: ∫_0^φa(s,θ )-a(s,θ +ε )/εsin( φ -s) ds-sinφ h( φ ,θ) =𝒪( ε) . And with h being Lipschitz-continuous itself, we find h( φ ,θ) =lim_ε→ 0∫_0^φa(s,θ )-a(s,θ +ε )/ εsin( φ -s) ds/sinφ. It remains to show the regularity properties stated in the second item of the theorem. These follow rather immediately. Whenever a,∂ _θa∈ C_p^0(ℝ^2) one finds from (<ref>) that h( φ ,θ) =-∫_0^φa_θ( s,θ) sin( φ -s) ds/sinφ as in (<ref>). With h satisfying (<ref>) one finds for a,∂ _θa∈ C_p^1(ℝ^2)) that also (<ref>) is satisfied. § ON THE DISTANCE IN 𝕊^2 Let f̃ :𝕊^2→ℝ be some function. The standard definition for such a function f̃ to be Lipschitz-continuous, is, that there exists L>0 such that |f̃(ω )-f̃(ω _0)|≤ L|ω -ω _0| for all ω ,ω _0∈𝕊^2. Since the functions we use, are defined in terms of (φ,θ)∈𝖲 instead of ω∈𝕊^2, with 𝖲 from (<ref>), we need to reformulate the Lipschitz-condition in (<ref>) to a condition for f:𝖲→ℝ for ω =U( φ ,θ) and f(φ ,θ )=f̃(ω ) with U from (<ref>). In other words, we have to replace |ω-ω_0| by an equivalent expression using (φ,θ) and (φ_0,θ_0). The corresponding estimates follow next. 
Setting ω =U( φ ,θ) and ω _0=U( φ _0,θ _0), one finds that for all (φ,θ) and (φ_0,θ_0) in 𝖲: * if φ ,φ _0∈[ 0,π] or φ ,φ _0∈[ π ,2π]: |ω -ω _0|≤|φ -φ _0| +|θ -θ _0|min( |sinφ| ,|sinφ _0|) ≤π|ω -ω _0| ; * if φ∈[ 0,π] and φ _0∈[ π ,2π], or vice versa: |ω -ω _0|≤| 2π -φ -φ _0| +( π -|θ -θ _0|) min( |sinφ| ,|sinφ _0|) ≤π|ω -ω _0| . Assuming φ ,φ _0∈[ 0,π] or φ ,φ _0∈[ π ,2π] one considers as an intermediate point ω _∗=U(φ _0,θ ) and uses the following estimates: * The triangle inequality in ℝ^3: |ω -ω _0|≤|ω -ω _∗| +|ω _∗-ω _0|. * Comparing the length via the circle with fixed φ _0 on the sphere through the points ω and ω _∗ with the straight line in ℝ^3 through those points gives: |ω -ω _∗|≤|φ -φ _0|≤π2|ω -ω _∗| . * A direct computation shows that |ω _∗-ω _0| =2|sinφ _0||sin( 12( θ -θ _0) ) | and since θ -θ _0∈[ -π ,π] one finds 2π|θ -θ _0|≤ 2|sin( 12( θ -θ _0) ) |≤|θ -θ _0| , implying |ω _∗-ω _0|≤|θ -θ _0||sinφ _0|≤ π2|ω _∗-ω _0| . * Both ω _∗ and ω _0 lie on the circle on the unit sphere with fixed φ _0. Since ω _∗ is the point on that circle that is closest to ω, one obtains |ω -ω _∗|≤|ω -ω _0| . A similar argument now for the circle on the unit sphere with fixed θ _0 shows |ω _∗-ω _0|≤|ω -ω _0| . Combining these inequalities gives the estimates in (<ref>). By symmetry we may replace |sinφ _0| by min( |sinφ| ,|sinφ _0|). For the second case we assume φ∈[ 0,π], φ _0∈[ π ,2π] as in Fig. <ref> on the right. We consider the shortest path from ω to ω _0 through ω _∗=U(2π -φ _0,θ ) and the top or bottom boundary of 𝖲. Obviously |ω -ω _0|≤|ω -ω _∗| +|ω _∗-ω _0| still holds. As before one finds |ω -ω _∗|≤| 2π -φ _0-φ|≤π2|ω -ω _∗| and since |ω _∗-ω _0| =2|sinφ _0|sin( π -|θ -θ _0|/2) with 0≤π -|θ -θ _0|≤π one obtains |ω _∗-ω _0|≤( π -|θ -θ _0|) |sinφ _0|≤π2|ω _∗-ω _0| . Also as before we have |ω -ω _∗|≤|ω -ω _0| but the last inequality (<ref>) holds if |sinφ _0|≤|sinφ|. So in (<ref> ) the minimum term is necessary if one wants to keep the same constant. See Fig. <ref>. § SHADOW DOMAINS In order to show that a body of constant width has some minimal regularity property, namely a kind of Lipschitz-continuity under rotation, we need a geometrical argument. Such an argument follows from observing the shadows during rotation. We did not find such a tool in the literature and supply it here. Suppose that Ω⊂ℝ^2 is a bounded, simply connected domain with 0∈Ω. We define R_Ω: ℝ→ ℝ^+ by R_Ω(ψ ):=sup{ xcosψ +ysinψ ; xy∈Ω} and define the rotational shadow domain of Ω by 𝑆ℎ(Ω ):={rcosψrsinψ;0≤ r<R_Ω(ψ ) and ψ∈[ 0,2π] } . The intersection of 𝑆ℎ(Ω ) with the line ℓ (ψ ):={ tcosψsinψ;t∈ℝ} gives precisely the shadow of Ω with the light at infinity in the direction -sinψcosψ. See Fig. <ref> in the case of a triangle. Let Ω be as in Definition <ref>. The function R_Ω in (<ref>) is Lipschitz-continuous with Lipschitz-constant at most L=sup{‖ x‖ ; x∈Ω} . Let co(Ω ) denote the convex hull of Ω. It holds that R_co( Ω) (ψ )=R_Ω(ψ ). Note that taking the convex hull also does not change L. Hence we may assume without loss of generality that Ω is convex. The boundary of a bounded convex domain in ℝ^2 with 0∈Ω can be parametrized in polar coordinates with r(t)>0 as follows: ∂Ω ={ r(t)cos tsin t;t∈[ 0,2π ] } . For such a parametrization one finds R_Ω(ψ ) =sup{ r(t)cos tcosψ +r(t)sin tsinψ ;t ∈[ 0,2π] } =sup{ r(t)cos( ψ -t) ;t ∈[ 0,2π] } . The function ψ↦ r(t)cos( ψ -t) is Lipschitz-continuous with constant ‖ r‖ _∞=L as in (<ref>). 
A function defined as the supremum of Lipschitz-functions with a uniform constant is Lipschitz-continuous with that same constant. Notice that (<ref>) leads to R_Ω(ψ )=sup{ r(ψ -s)cos( s) ;| s| <12π} , which again explains, why we call 𝑆ℎ(Ω ) the rotational shadow domain. Notice that, since cos s<0 for s∈ [-π,-1/2 )∪ (1/2π,π] and 0∈Ω, only the subinterval (1/2π,–1/2π) contributes to this positive supremum. Next we extend this shadow in 2 dimensions to 3d-shadows of a bounded convex domain Ω⊂ℝ^3. With the basis {Ξ,Θ(θ ),Ψ(θ )} as in (<ref>) we define P_Ψ(θ ): ℝ^3→ℝ^3, consistent with (<ref>), by P_Ψ(θ )( [ x_1; x_2; x_3 ] ) :=⟨Ξ,x0mm3mm⟩Ξ+⟨Θ(θ ),x0mm3mm ⟩Θ(θ )=( [ ( cosθ x_1+sinθ x_2) cosθ; ( cosθ x_1+sinθ x_2) sinθ; x_3 ] ) . After the Ξ-axis is fixed the 3d-shadow is constructed as in the 2d-case for each Ξ-coordinate being constant, that is, by a rotating shadow through rotating around that axis. Suppose that Ω⊂ℝ^3 is convex, bounded and such that * the domain lies in the Ξ-direction between -1 and 1: -1=inf{⟨Ξ,x⟩ ;x∈Ω} and sup{⟨Ξ ,x⟩ ;x∈Ω} =1, * with Ξ, -Ξ∈∂Ω, then we define the 3d-shadow domain in the directions perpendicular to the Ξ-axis by 𝑆ℎ_Ξ(Ω ):=⋃{ P_ Ψ(θ )(Ω );|θ|≤12π} . One may notice that this 3d-shadow domain is related to the 2d-shadows for fixed x_3 through the formula 𝑆ℎ_Ξ(Ω )=⋃_| x_3| <1( [ 𝑆ℎ( P_Ξ( Ω∩ x_1Ξ) 0mm3mm); x_3 ] ) , with P_Ξ as in (<ref>). 99 BLO T. Bayen, T. Lachand-Robert and É. Oudet, Analytic parametrization of three-dimensional bodies of constant width, Arch. Ration. Mech. Anal., 186 (2007), 225–249. Bl W. Blaschke, Einige Bemerkungen über Kurven und Flächen von konstanter Breite, Ber. Verh. Sächs. Akad. Leipzig, 67 (1915), 290–297. CG G.D. Chakerian, H. Groemer, Convex bodies of constant width. In: Convexity and its Applications, ed. P. M. Gruber and J. M. Wills, Birkhäuser, Basel 1983, 49–96. Da L. Danzer, Über die maximale Dicke der ebenen Schnitte eines konvexen Körpers, Archiv der Mathematik, 8 (1957), 314–316. Eu L. Euler, De curvis triangularibus. Acta Academiae Scientarum Imperialis Petropolitinae 1778, 1781, 3–30 (Opera Omnia: Series 1, Volume 28, 298–321) Ha H. Hadwiger, Seitenrisse konvexer Körper und Homothetie, Elem. Math. 18 (1963), 97–98. HS1 P.C. Hammer, A. Sobczyk, Planar line families I, Proc. Amer. Math. Soc. 4 (1953), 226–233. HS2 P.C. Hammer, A. Sobczyk, Planar line families II, Proc. Amer. Math. Soc. 4 (1953), 341–349. Ha1 P.C. Hammer, Constant breadth curves in the plane, Proc. Amer. Math. Soc. 6 (1955), 333–334. HC D. Hilbert and St. Cohn-Vossen, Geometry and the imagination, Chelsea Publ. Co., New York, 1952 (transl. from the German: Anschauliche Geometrie, Springer, Berlin, 1932). Gr H. Groemer, On the determination of convex bodies by translates of their projections, Geom. Dedicata 66 (1997), 265–279. KS B. Kawohl, G. Sweers, On a formula for sets of constant width in 2D, Commun. Pure Appl. Anal. 18 (2019), 2117–2131. KW B. Kawohl, Ch. Weber, Meissner's Mysterious Bodies, The Mathematical Intelligencer 33 (2011), 94–101. LO T. Lachand-Robert and É. Oudet, Bodies of constant width in arbitrary dimension, Mathe­matische Nachrichten, 280 (2007), 740–750. MMO H. Martini, L. Montejano, D. Oliveros, Bodies of constant width. An introduction to convex geometry with applications. Birkhä user/Springer, Cham, 2019. Me1 E. Meissner, Über die Anwendung der Fourier-Reihen auf einige Aufgaben der Geometrie und Kinematik, Vierteljahrsschr. Nat.forsch. Ges. Zür., 54 (1909), 309–329. Me2 E. 
Meissner, Über Punktmengen konstanter Breite, Vierteljahrsschr. Nat.forsch. Ges. Zür., 56 (1911), 42–50.. MP R.S. Millman, G.D. Parker, Elements of Differential Geometry, Prentice-Hall, Englewood Cliffs, 1977. Mi H. Minkowski, On the bodies of constant width, Mat. Sbornik 25 (1905), 505–508. (in Russian) MR L. Montejano, E. Roldan-Pensado, Meissner Polyhedra, Acta Math. Hungar., 151 (2017), 482–494.
http://arxiv.org/abs/2307.00611v1
20230702163713
Some applications of the Tsirelson spectral measures to noise filtering problems
[ "Rémi Lassalle" ]
math.PR
[ "math.PR", "math.ST", "stat.TH" ]
http://arxiv.org/abs/2307.00780v1
20230703064800
Variational theory and algorithms for a class of asymptotically approachable nonconvex problems
[ "Hanyang Li", "Ying Cui" ]
math.OC
[ "math.OC" ]
We investigate a class of composite nonconvex functions, where the outer function is the sum of univariate extended-real-valued convex functions and the inner function is the limit of difference-of-convex functions. A notable feature of this class is that the inner function can be merely lower semicontinuous instead of continuous. It covers a range of important yet challenging applications, including the composite value functions of nonlinear programs, the weighted value-at-risk for continuously distributed random variables, and composite rank functions. We propose an asymptotic decomposition of the composite function that guarantees epi-convergence to the original function, leading to necessary optimality conditions for the corresponding minimization problems. The proposed decomposition also enables us to design a numerical algorithm that is provably convergent to a point satisfying the newly introduced optimality conditions.
These results expand on the study of so-called amenable functions introduced by Poliquin and Rockafellar in 1992, which are compositions of convex functions with smooth maps, and the prox-linear methods for their minimization. Keywords: epi-convergence; optimality conditions; nonsmooth analysis; difference-of-convex functions § INTRODUCTION. The class of amenable functions has been a subject of interest in optimization theory since its introduction by Poliquin and Rockafellar in <cit.>. An amenable function is locally representable as the composition of a proper, lower semicontinuous, convex function and a continuously differentiable mapping, linked by a proper constraint qualification. When the outer function is further assumed to be piecewise linear quadratic, the resulting composite function is called fully amenable. In fact, fully amenable functions are studied by Rockafellar in <cit.> prior to the two aforementioned papers. For a thorough exploration of the variational theory of amenable functions, readers are referred to the monograph <cit.>. In particular, Chapter 10(F) and Chapter 13(C) of the monograph provide comprehensive results on the first-order and second-order subdifferential calculus of (fully) amenable functions. The structural properties of amenable functions have also led to the development of prox-linear algorithms, where convex subproblems are constructed through the linearization of the inner smooth mappings <cit.>. Despite being generally nonconvex and nonsmooth, amenable functions exhibit the desirable property of being Clarke regular. In the case where the outer convex function is Lipschitz continuous and the inner mapping has a Lipschitz continuous gradient, the amenable functions even become weakly convex <cit.>. However, these appealing properties also reveal the practical limitations of amenable functions. The primary aim of the present paper is to extend the composite structure of amenable functions to encompass a broad range of non-Clarke-regular functions, including composite value functions of nonlinear programs, the weighted value-at-risk of continuously distributed random variables, and composite rank functions. With these examples in mind, we aspire to consider nonsmooth or even discontinuous inner functions in the composition that still possess favorable variational properties and computational tractability. In addition to expanding the theory of amenable functions, this paper also seeks to address a computational challenge of difference-of-convex (DC) optimization. The class of DC functions boasts a rich history, with a brief historical account and an overview of numerous properties detailed in <cit.>. DC functions are prevalent across various applications. It is established that any real-valued piecewise twice continuously differentiable function and any lower-𝒞^2 function can be classified as DC <cit.>. However, while many functions can be categorized as DC, their DC decompositions may not be readily accessible for practical computation. Notable results emphasizing this point stem from the mixing and composition principles <cit.>. When the domains of the involved functions are suitably restricted, these principles reveal that (a) continuous selections of DC functions are DC, and (b) compositions of DC functions are DC. Yet, in general, the DC decompositions of such functions have not been proven to be computationally valuable. 
Recently, Royset <cit.> has shown that every multivariate lower semicontinuous function is the epigraphical limit of a sequence of piecewise affine functions of the difference-of-max type, a special case of DC functions. Despite this finding, the full potential of this result for algorithmic design in optimization problems likewise remains to be explored. In this paper, drawing inspiration from <cit.> and with a focus on tractable computation, we introduce a novel class of functions derived from DC functions through a limiting process, referred to as approachable difference-of-convex (ADC) functions. The formal definition of ADC functions will be presented in the subsequent section. In the following, we provide two specific examples to further illustrate the motivation behind our work. Example 1.1: Composite value functions. For p=1, ⋯,P, consider the optimal value function f_p(x) ≜ [ minimum_y∈^m (c^ p + C^ p x)^⊤ y + 1/2 y^⊤ Q^ p y subject to A^ p x + B^ p y ≤ b^ p ], x∈^n, which is defined using appropriately dimensioned vectors b^ p and c^ p, and matrices A^ p, B^ p, C^ p and Q^ p. The inverse (multi) optimal value problem <cit.> finds a vector x ∈ X⊂^n that minimizes the discrepancy between observed optimal values {v_p}_p=1^P and true values {f_p}_p=1^P based on a prescribed metric, such as the ℓ_1-error: minimize_x ∈ X ⊂^n ∑_p=1^P |v_p - f_p(x)|. The optimal value function in (<ref>) also features in the two-stage quadratic stochastic program <cit.>: minimize_x ∈ X ⊂^n f_0(x) + 1/P∑_p=1^P f_p(x), where f_0 is a convex first-stage objective function. As proven in <cit.>, each f_p is DC on its domain when Q^ p is positive semidefinite. However, the cited work's DC decomposition incurs an exponential computational cost in m, as it necessitates the calculation of all extreme points of the solution set in terms of y for the problem in (<ref>). This is not ideal in practice due to the computational inefficiency and scalability issues involved. Moreover, each f_p as a function of x is nonconvex and nonsmooth, posing a challenge in minimizing the composite function ∑_p=1^Pφ_p ∘ f_p for some outer function φ_p, including models (<ref>) and (<ref>). In fact, even if φ_p is convex, φ_p ∘ f_p is not an amenable function. Example 1.2: The weighted value-at-risk for continuously distributed random variables. The value-at-risk (VaR) of a random variable Y at the confidence level α∈ (0,1) is defined as VaR_α(Y) ≜inf{γ∈ | ℙ(Y≤γ ) ≥α}. Given nonnegative weights {w_i}_i=1^s with ∑_i=1^s w_i = 1 and confidence levels 0 ≤α_1 < ⋯ < α_s ≤ 1, we consider the risk measure ρ_w(Y) ≜∑_i=1^s w_i VaR_α_i(Y), which is a special case of the mixed superquantile if w_1 ≤⋯≤ w_s <cit.> and the weighted VaR <cit.>. Consider a function c: ^n ×^m → and a random vector Z: Ω→^m. Let c(x,Z) represent the profit of investments parameterized by a decision vector x∈^n. Suppose an agent's goal is to maximize the expected utility of c(x,Z), denoted as u(c(x,Z)), while also controlling risk via a constraint on ρ_w[c(x,Z)] under a prescribed level r. Adapted from <cit.>, the model can be written as maximize_x ∈^n 𝔼[ u( c(x,Z) ) ] subject to ρ_w[ c(x,Z)] ≤ r. Similar to Example 1.1, it can be difficult to solve problem (<ref>), which has a nonsmooth and even discontinuous function ρ_w[ c(x,Z)] in the constraint. In fact, for a discrete random variable Y, VaR_α(Y) is DC in Y with an explicit decomposition <cit.>, implying that ρ_w[c(∙,Z)] is also DC if c(∙,z) is DC for any z ∈^m and c(x,Z) follows a discrete distribution for any x ∈^n <cit.>. 
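To make the quantities in Example 1.2 concrete, the following is a minimal numerical sketch (ours, not part of the formal development) that evaluates the empirical VaR and the weighted value-at-risk ρ_w on a finite sample, i.e., the discrete setting just discussed; the function names value_at_risk and weighted_var are illustrative choices rather than an established API.

```python
import numpy as np

def value_at_risk(samples, alpha):
    """Empirical VaR_alpha: the smallest threshold gamma with P(Y <= gamma) >= alpha."""
    y = np.sort(np.asarray(samples, dtype=float))
    n = y.size
    k = max(int(np.ceil(alpha * n)) - 1, 0)  # smallest order statistic reaching level alpha
    return y[k]

def weighted_var(samples, alphas, weights):
    """rho_w(Y) = sum_i w_i * VaR_{alpha_i}(Y), with nonnegative weights summing to one."""
    return sum(w * value_at_risk(samples, a) for a, w in zip(alphas, weights))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.standard_normal(10_000)  # stand-in for the profit c(x, Z) at a fixed decision x
    print(weighted_var(y, alphas=[0.90, 0.95, 0.99], weights=[0.5, 0.3, 0.2]))
```

For a fixed decision x, replacing the sample y by simulated values of c(x, Z) gives a plug-in estimate of the constraint function ρ_w[c(x,Z)] appearing in the model above.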
Yet, it remains unclear if ρ_w[ Y ] is DC when Y is continuously distributed, and whether an explicit decomposition exists. It turns out that both functions f_p and ρ_ω (associated with a continuously distributed random variable) from the preceding examples belong to the class of ADC functions, where each approximating function admits explicit and computationally tractable DC decompositions. More examples of ADC functions are given in Section <ref>. With this new class of functions in hand, we have made a first step in this paper to understand the variational properties of composite ADC functions and the algorithmic approach for their minimization. Specifically, we have conducted an in-depth analysis of the necessary optimality conditions for the composite problem, attained through the limit of the optimality conditions derived from their composite DC approximations. We further present an associated algorithm capable of computing a point that meets these derived conditions. As a by-product, we have also developed a chain rule of the limiting subgradients when the outer function is not strictly increasing and the inner function is not locally Lipschitz continuous. These contributions expand upon the existing theory of amenable functions and enhance our understanding of composite nonconvex functions. Our strategy to handle the nonsmooth and possibly discontinuous inner function through a sequence of DC functions shares certain similarity with smoothing approximations in the existing literature. For instance, Ermoliev et al. <cit.> have designed smoothing approximations for lower semicontinuous functions utilizing convolutions with bounded mollifier sequences, a technique akin to local “averaging". Research has sought to identify conditions that ensure gradient consistency for the smoothing approximation of composite nonconvex functions <cit.>. Notably, Burke and Hoheisel <cit.> have emphasized the importance of epi-convergence for the approximating sequence, a less stringent requirement than the continuous convergence assumed in earlier works <cit.>. In a recent work, Royset <cit.> has studied the consistent approximation of the composite optimization in terms of the global minimizers and stationary solutions, where the inner function is assumed to be locally Lipschitz continuous. Our notion of subgradients and optimality conditions for composite ADC problems takes inspiration from these works, but adapts to accommodate nonsmooth approximating sequences that exhibit the advantageous geometric property of being DC. The rest of the paper is organized as follows. Section <ref> presents a class of ADC functions and introduces a new associated notion of subgradients. In Section <ref>, we investigate the necessary optimality conditions for minimizing the composition of univariate convex functions and ADC functions. A chain rule of the composite nonsmooth functions is also studied. Section <ref> is devoted to an algorithmic framework for solving the composite model and its convergence analysis to the newly introduced optimality conditions. The paper ends with a concluding section. 0.1in Notation and Terminology. We write ^n as the n-dimensional Euclidean space equipped with the inner product ⟨ x,y⟩ = x^⊤ y and the induced norm x≜√(x^⊤ x). The symbol 𝔹(x̅, δ) is used to denote the Euclidean ball {x ∈^n |x - x̅≤δ}. The set of nonpositive, nonnegative and positive real numbers are denoted as _-, _+ and _++, respectively, and the set of nonnegative integers is denoted as ℕ. 
The notation {t^k} is employed to abbreviate any sequence {t^k}_k ≥ 0, wherein the elements may take the form of points, sets, or functions. The notation {t^k}_k ∈ N is used to represent a subsequence that is indexed by N ⊂ℕ. We further write ℕ_∞^♯≜{N ⊂ℕ | N }. Given two sets A and B in ^n, the Minkowski sum and the scalar multiple are defined as A + B ≜{a + b | a ∈ A, b ∈ B} and λ A ≜{λ a | a ∈ A}. If either A or B is empty, we set A+B = ∅. We also define 0 ·∅ = {0} and λ·∅ = ∅ whenever λ≠ 0. When A and B are nonempty and closed, we define the one-sided deviation of A from B as (A, B) ≜ sup_x ∈ A (x, B), where (x, A) ≜inf_y ∈ A y - x. The Hausdorff distance between A and B is given by (A, B) ≜max{(A, B), (B, A)}. The boundary and interior of A are denoted by bdry(A) and (A). The topological closure and the convex hull of A are indicated by cl(A) and A. We let δ_A(x) be the indicator function of A, i.e., δ_A(x)=0 for x ∈ A and δ_A(x)=+∞ for x ∉ A. For a sequence of sets {C^k}, we define its outer limit as _k → +∞ C^k ≜{ u |∃ N ∈ℕ_∞^♯, {u^k}_k ∈ N→ u with u^k ∈ C^k }, and the horizon outer limit as _k → +∞^∞ C^k ≜{0}∪{u |∃ N ∈ℕ_∞^♯, {λ_k}↓ 0, {λ_k u^k}_k ∈ N→ u with u^k ∈ C^k}. The outer limit of a set-valued mapping S: ^n ⇉^m is defined as _x →x̅ S(x) ≜⋃_{x^k}→x̅_k → +∞ S(x^k) = { u |∃{x^k}→x̅, {u^k}→ u with u^k ∈ S(x^k)}, x̅∈^n. We say S is outer semicontinuous (osc) at x̅∈^n if Limsup_x →x̅ S(x) ⊂ S(x̅). The regular normal cone and the limiting normal cone of a set C ⊂^n at x̅∈ C are given by 𝒩_C(x̅) ≜{v | v^⊤(x - x̅) ≤ o(x - x̅), ∀ x ∈ C} 𝒩_C(x̅) ≜_x(∈ C)→x̅𝒩_C(x). The proximal normal cone of a set C at x̅∈ C is defined as 𝒩^p_C(x̅) ≜{λ(x - x̅) |x̅∈ P_C(x), λ≥ 0}, where P_C is the projection onto C that maps any x to the set of points in C that are closest to x. For f:^n →≜∪{±∞}, we write its effective domain as f ≜{x ∈^n | f(x) < +∞}, and the epigraph as epi f ≜{(x, α)∈^n+1|α≥ f(x)}. We say f is proper if f is nonempty and f(x) > -∞ for all x ∈^n. The function f is lower semicontinuous (lsc) if f(x̅)≤lim inf_x →x̅ f(x) at any x̅∈^n, and upper semicontinuous (usc) if (-f) is lsc. We set +∞ - (+∞) = +∞. Let f: ^n → be a proper function. We write x →_f x̅, if x →x̅ and f(x) → f(x̅). The regular subgradient and the limiting subgradient of f at x̅∈ f are respectively defined as ∂f(x̅) ≜{v | f(x) ≥ f(x̅) + v^⊤(x - x̅) + o(x - x̅), ∀ x ∈^n} ∂ f(x̅) ≜_x →_f x̅∂f(x). For any x̅∉ f, we set ∂f(x̅) = ∂ f(x̅) = ∅. When f is locally Lipschitz continuous at x̅, ∂ f(x̅) equals to the Clarke subgradient ∂_C f(x̅). We further say f is subdifferentially regular at x̅∈ f if f is lsc at x̅ and ∂f(x̅) = ∂ f(x̅). When f is proper and convex, ∂f, ∂ f, and ∂_C f coincide with the concept of subgradients in convex analysis. Finally, we introduce the notion of function convergence. A sequence of functions {f^k: ^n →} is said to converge pointwise to f: ^n →, written f^k f, if f^k(x) → f(x) for any x ∈^n. The sequence {f^k} is said to epi-converge to f, written f^k f, if for any x ∈^n, it holds {[ liminf_k → +∞ f^k(x^k) ≥ f(x) for every sequence {x^k}→ x,; limsup_k → +∞ f^k(x^k) ≤ f(x) for some sequence {x^k}→ x. ]. The sequence {f^k} is said to continuously converge to f, written f^k f, if lim_k → +∞ f^k(x^k) = f(x) for any x ∈^n and any sequence {x^k}→ x. § APPROACHABLE DIFFERENCE-OF-CONVEX FUNCTIONS. In this section, we formally introduce a class of functions that can be asymptotically approximated by DC functions. A new concept of subgradients that is defined through the approximating functions is proposed. 
At the end of this section, we provide several examples that demonstrate the introduced concepts. §.§ Definition and properties. An extended-real-valued function can be approximated by a sequence of functions in various convergent notions, as comprehensively investigated in <cit.>. Among these approaches, epi-convergence has a notable advantage in its ability to preserve the minimizers <cit.>. Our focus lies on a particular class of approximating functions, wherein each function exhibits a DC structure. Let f: ^n → be a proper function, and each f^k: ^n → be a proper function that is DC on its domain, i.e., there exist proper convex functions g^k, h^k: ^n → such that f^k(x) = g^k(x) - h^k(x) for any x ∈ f^k and f^k = [ g^k ∩ h^k ]. Then (a) f is said to be pointwise approachable DC (p-ADC) associated with {f^k} if f^k f. (b) f is said to be epigraphically approachable DC (e-ADC) associated with {f^k} if f^k f. (c) f is said to be continuously approachable DC (c-ADC) associated with {f^k} if f^k f. A function f is referred to as an ADC function if the sequence {f^k} converges to f following one of the convergent notions in (a)-(c). By a slight abuse of notation, we use {f^k=g^k-h^k} to represent the sequence of DC decompositions of {f^k}, although the equality may only hold for x ∈ f^k. The strategy of utilizing a sequence of functions with favorable structures to (epi-)approximate an ill-behaved function has also been adopted in the design of smoothing functions <cit.>. The primary difference between ADC and epi-smoothing functions <cit.> lies in the required properties of the approximating functions f^k. In the ADC framework, f^k is required to possess a DC structure, while smoothing functions necessitate f^k to exhibit continuous differentiability. The DC structure plays a crucial role in the subsequent design of numerical algorithms. A p-ADC function may be neither lsc nor usc. An example is given by the function f(x) = 1_{0}(x) + 2 ·1_(0,+∞)(x), where for a set C ⊂^n, we write 1_C(x) = 1 if x ∈ C and 1_C(x) = 0 if x ∉ C. In this case, f is neither lsc nor usc at x=0. However, f is p-ADC associated with f^k(x) = max( 0, 2kx+ 1 ) - max( 0, 2kx-1 ). In contrast, any e-ADC function must be lsc <cit.>, and any c-ADC function is continuous <cit.>. The relationships among different notions of function convergence, including the unaddressed uniform convergence in this paper, have been thoroughly examined in the monograph <cit.>. Generally, pointwise convergence and epi-convergence do not imply one another. These two convergence notions coincide when the sequence of interest, {f^k}, is asymptotically equi-lsc everywhere <cit.> (see <cit.> for the definition of equi-lsc). In addition, f^k continuously converges to f if and only if both f^k f and (-f^k) (-f) are satisfied <cit.>. In general, verifying epi-convergence and continuous convergence can be a challenging task. However, the following lemma suggests that continuous convergence (and therefore epi-convergence) of {f^k} to f can be obtained when {f^k} is a monotonic sequence that converges pointwise to f. Let f:ℝ^n → and each f^k:^n → be proper lsc functions. If f^k f and {f^k} is monotonic, i.e., f^k ≥ f^k+1 or f^k≤ f^k+1, then f^k f. Furthermore, if f and each f^k are also continuous, then f^k f. The first statement is a consequence of <cit.>. If f and f^k are further continuous, we can apply the former result to (-f) and the sequence {-f^k} to obtain (-f^k) (-f). Hence, we have f^k f. §.§ Subgradients of ADC functions. 
Characterizing limiting and Clarke subgradients can be challenging when dealing with functions that exhibit complex composite structures. Our focus is on a notion of subgradients that can be both analytically and numerically computed, while effectively capturing the variational geometry of ADC functions. Consider a proper ADC function f: ^n → associated with {f^k= g^k - h^k}, where each g^k and h^k satisfy the conditions in Definition 1. The approachable subgradient of f (associated with {f^k=g^k-h^k}) at x̅∈^n is defined as [ ∂_A f(x̅) ≜ ⋃_{x^k}→x̅_k → +∞ [∂ g^k(x^k) - ∂ h^k(x^k)]. ] The approachable horizon subgradient of f (associated with {f^k=g^k-h^k}) at x̅∈^n is ∂^∞_A f(x̅) ≜ ⋃_{x^k}→x̅_ k → +∞^∞ [∂ g^k(x^k) - ∂ h^k(x^k)]. While ∂_A f(x̅) is the set of all limits of {v^k} with v^k ∈[∂ g^k(x^k) - ∂ h^k(x^k)] as x^k approaches x̅, the set ∂^∞_A f(x̅) is defined by horizon outer limits, and consists of all possible directions along which v^k converges as v^k→ +∞. In contrast to the definition of the limiting subgradient, which exclusively considers sequences {x^k}→x̅ accompanied by {f(x^k)}→ f(x̅), we define ∂_A f(x) using sequences {x^k}→x̅ without necessitating the convergence of function values. In fact, ∂_A f: ^n ⇉^n is the graphical outer limit (see <cit.>) of the sequence of mappings {∂ g^k - ∂ h^k}. The following proposition establishes useful properties of the approachable (horizon) subgradient mappings. The following statements hold. (a) The mappings x ↦∂_A f(x) and x ↦∂_A^∞ f(x) are osc. (b) Let x̅∉ f. Then ∂_A f(x̅) = ∅ if for any sequence {x^k}→x̅, we have x^k ∉ f^k for all k sufficiently large. The latter condition is particularly satisfied whenever f is closed and f^k ⊂ f for all k sufficiently large. The results in (a) follow directly from the definition of the approachable (horizon) subgradient mappings. To show (b), note that for any {x^k}→x̅∉ f, we have [∂ g^k(x^k) - ∂ h^k(x^k)] = ∅ for all k sufficiently large due to x^k ∉ f^k = [ g^k ∩ h^k]. Thus, ∂_A f(x̅) = ∅ for any x̅∉ f. The proof is then completed. Proposition <ref>(b) presents a sufficient condition for ∂_A f(x̅) = ∂ f(x̅) = ∅ at any x̅∉ f. In the subsequent analysis, we restrict our attention to x̅∈ f. Admittedly, the set ∂_A f(x̅) depends on the DC decomposition of each f^k, which may contain some irrelevant information concerning the local variational geometry of epi f. In fact, for a given ADC function f, the set ∂_A f(x̅) can be expanded arbitrarily large by adding the same extra nonsmooth functions to both g^k and h^k. By Attouch's theorem (see for example <cit.>), for proper, lsc, convex functions f and {f^k}, if f^k f, we immediately have ∂_A f = ∂ f when taking g^k=f^k and h^k=0. In what follows, we further explore the relationships among ∂_A f and other commonly employed subgradients in the literature beyond the convex setting. As it turns out, with respect to an arbitrary DC decomposition of f^k that is lsc, ∂_A f(x̅) contains the limiting subgradient of f at any x̅∈ f whenever f^k f. Consider an ADC function f: ^n → and let x̅∈ f. The following statements hold. (a) If f is e-ADC associated with {f^k} and f^k is lsc, then ∂ f(x̅) ⊂∂_A f(x̅) and ∂^∞ f(x̅) ⊂∂^∞_A f(x̅). (b) If f is locally Lipschitz continuous and bounded from below, then there exists a sequence of DC functions {f^k = g^k - h^k} such that f^k f, ∂ f(x̅) ⊂∂_A f(x̅) ⊂∂_C f(x̅), and ∂^∞_A f(x̅) = {0}. 
Consequently, ∂_A f(x̅) = ∂_C f(x̅), the set ∂_A f(x̅) is nonempty and bounded, and ∂ f(x̅) = ∂_A f(x̅) when f is subdifferentially regular at x̅. (a) Since f is e-ADC, it must be lsc. By f^k f and <cit.>, any element of ∂ f(x̅) can be generated as the limit of proximal subgradients at x^k with {x^k}_k ∈ N→x̅ and {f^k(x^k)}_k ∈ N→ f(x̅) for some N ∈ℕ^♯_∞. Indeed, we can further restrict x^k ∈ f^k since {f^k(x^k)}_k ∈ N→ f(x̅) and x̅∈ f. Using the fact that the proximal subgradients form a convex subset of the regular (limiting) subgradients <cit.>, we know that ∂ f(x̅) ⊂⋃_{x^k ∈ f^k}→x̅_k → +∞∂ f^k(x^k) ⊂⋃_{x^k ∈ f^k}→x̅_k → +∞[∂ g^k(x^k) - ∂ h^k(x^k)] ⊂∂_A f(x̅). Similarly, by <cit.>, we also have ∂^∞ f(x̅) ⊂⋃_{x^k ∈ f^k}→x̅_k → +∞^∞ ∂ f^k(x^k) ⊂∂^∞_A f(x̅). (b) For a locally Lipschitz continuous function f, consider its Moreau envelope e_γ f(x) ≜inf_z{f(z) + z - x^2/(2γ)} and the set-valued mapping P_γ f(x) ≜argmin_z{f(z) + z - x^2/(2γ)}. For any sequence {γ_k}↓ 0, we demonstrate in the following that {f^k ≜ e_γ_k f} is the desired sequence of approximating functions. Firstly, since f is bounded from below, it must be prox-bounded and, thus, each f^k is continuous (c.f. <cit.>). By the continuity of f and f^k, and {f^k(x̅)}↑ f(x̅) for any x̅∈^n, we have f^k f by Lemma <ref>. It then follows from part (a) that ∂ f(x̅) ⊂∂_A f(x̅). Consider the following DC decomposition for each f^k: f^k(x) = x^2/2γ_k_≜ g^k(x) - sup_z ∈^n{-f(z) - z^2/2 γ_k + z^⊤ x/γ_k}_≜ h^k(x), ∀ x ∈^n. By the subgradient formula of the parametric minimization <cit.>, we have ∂(-h^k)(x) ⊂⋃_z ∈ P_γ_k f(x){y | (0,y) ∈∂_(z,x)(f(z)+ z^2/2 γ_k - z^⊤ x/γ_k)} = ⋃_z ∈ P_γ_k f(x){∂ f(z) - x/γ_k}. Since h^k is convex, we have -∂ h^k(x) =∂(-h^k)(x), which further yields that [∂ g^k(x) - ∂ h^k(x)] ⊂{ ∂ f(z) | z ∈ P_γ_k f(x) }, ∀ x ∈^n, k ≥ 0. For any {x^k}→x̅ and {z^k ∈ P_γ_k f(x^k)}, we have 1/2γ_kz^k - x^k^2 + inf_x f(x) ≤1/2γ_kz^k - x^k^2 + f(z^k) ≤1/2γ_kx̅ - x^k^2 + f(x̅). Then, z^k - x^k≤√(x̅ - x^k^2 + 2γ_k [f(x̅) - inf_x f(x)])→ 0 due to the assumption that f is bounded from below and, therefore, {z^k}→x̅. By the locally Lipschitz continuity of f and <cit.>, there is a bounded set S such that {∂ f(z^k) | z^k ∈ P_γ_k f(x^k)}⊂ S for all k sufficiently large. It follows directly from <cit.> and the definition of the approachable horizon subgradient that ∂^∞_A f(x̅) = {0}. By Carathéodory's Theorem (see, e.g. <cit.>), any point in the convex hull of a bounded set in ^n can be expressed as a convex combination of (n+1) points in this set. Thus, for any x̅∈^n and any u ∈∂_A f(x̅), there exist sequences of vectors {x^k}→x̅ and {u^k}→ u such that for each k, we have u^k = ∑_i=1^n+1λ_k,i v^k,i for some nonnegative scalars {λ_k,i}^n+1_i=1 with ∑_i=1^n+1λ_k,i = 1 and sequence {v^k,i∈∂ f(z^k,i)}^n+1_i=1 with {z^k,i∈ P_γ_k f(x^k)}^n+1_i=1. For any given ϵ > 0, take an integer K_0 such that {z^k,i}_k ≥ K_0⊂𝔹(x̅, ϵ) for each i. Since for any fixed i, the sequences {λ_k,i}_k ≥ 0 and {v^k,i}_k ≥ 0 are bounded, and ∂ f is osc, we can obtain an infinite subset N ∈ℕ_∞^♯ such that for each i, {λ_k,i}_k ∈ N converges to some nonnegative scalar λ̅_i with ∑_i=1^n+1λ̅_i = 1 and {v^k,i}_k ∈ N converges to some vector v̅^ i∈∂ f(x̅). Thus, {u^k}_k ∈ N→ u = ∑_i=1^n+1λ̅_i v̅^ i∈∂ f(x̅) = ∂_C f(x̅). Hence, we have ∂_A f(x̅) ⊂∂_C f(x̅). The rest statements in (b) follow from the fact that ∂_C f(x̅) is nonempty and bounded whenever f is locally Lipschitz continuous (c.f. <cit.>). 
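The Moreau-envelope construction used in the proof of part (b) is easy to examine numerically in one dimension. The sketch below (ours; the brute-force grid evaluation of h^k is a simplification that only makes sense in this toy setting, in line with the caveat that follows) forms the DC pair g^k(x) = x^2/(2γ_k) and h^k(x) = sup_z { -f(z) - z^2/(2γ_k) + z x/γ_k } for f = |·| and verifies that f^k = g^k - h^k approaches f from below as γ_k ↓ 0.

```python
import numpy as np

def moreau_envelope_dc(f, gamma, grid):
    """Evaluate g(x) = x^2 / (2*gamma) and h(x) = sup_z { -f(z) - z^2/(2*gamma) + z*x/gamma }
    on a 1-D grid, so that g - h equals the Moreau envelope e_gamma f up to grid error."""
    z = grid.reshape(-1, 1)   # inner maximization variable (column)
    x = grid.reshape(1, -1)   # evaluation points (row)
    g = grid ** 2 / (2.0 * gamma)
    h = np.max(-f(z) - z ** 2 / (2.0 * gamma) + z * x / gamma, axis=0)
    return g, h

if __name__ == "__main__":
    f = np.abs                                 # a Lipschitz function bounded from below
    grid = np.linspace(-3.0, 3.0, 2001)
    for gamma in [1.0, 0.1, 0.01]:
        g, h = moreau_envelope_dc(f, gamma, grid)
        gap = np.max(np.abs((g - h) - f(grid)))
        print(f"gamma = {gamma:5.2f}   max deviation of f^k from f on the grid = {gap:.4f}")
```

For f = |·| the envelope is the Huber function, so the reported deviation shrinks at the rate γ_k/2, illustrating the pointwise monotone convergence used in the proof.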
Under suitable assumptions, Proposition <ref>(b) guarantees the existence of an ADC decomposition that has its approachable subgradients contained in the Clarke subgradients of the original function. Notably, this decomposition may not always be practically useful due to the necessity of computing h^k, the value function for a generally nonconvex optimization problem. §.§ Examples of ADC functions. In this part, we demonstrate that ADC functions possess a broad range of applications, including functions that are discontinuous relative to their domains. Moreover, we undertake an investigation into the approachable subgradients that are associated with the DC decomposition of the approximating functions. Example 2.1: implicitly convex-concave functions. In the monograph <cit.>, a class of functions called implicitly convex-concave (icc) functions is introduced. The concept is further generalized to extended-real-valued functions in <cit.>. A proper function f: ^n → is said to be icc associated with a lifted function f: ^n ×^n → if the following three conditions hold: (i) f(z,x) = +∞ if z ∉ f, x ∈^n, and f(z,x) = -∞ if z ∈ f, x ∉ f; (ii) f(∙, x) is convex for any fixed x ∈ f, and f(z, ∙) is concave for any fixed z ∈ f; (iii) f(x) = f(x, x) for any x ∈ f. The optimal value function of the bi-parametric quadratic programs in (<ref>) is one instance of icc functions, which is associated with the following lifted counterpart when x, z ∈ f (the subscripts/superscripts p are omitted for brevity): [ f(z,x) ≜ [ minimum_y∈^m (c+Cx)^⊤ y + 1/2 y^⊤ Q y Az + By≤ b], ] The icc property can in fact be established for a broad class of value functions of nonlinear programs; see <cit.> for more examples and details. For any γ > 0, the partial Moreau envelope of an icc function f:^n → associated with the lifted function f is given by e_γ f(x) ≜ inf_z ∈^n{f(z,x) + 1/2 γz - x^2 } = 1/2 γx^2- sup_z ∈^n{ - f(z,x) - 1/2 γz^2 + 1/γ z^⊤ x }. Compared to the approachable decomposition based on the Moreau envelope derived in Proposition <ref>(b), the above decomposition is more computationally friendly. This is because the objective function of the supremum problem for z is concave for any fixed x, regardless of the convexity of f. However, as evident from the subsequent discussion, the drawback of this newly derived decomposition is that ∂_A f can potentially be larger than ∂_C f. To proceed, we denote ∂_1 f(∙,x) as the subgradient of the convex function f(∙,x) for any x ∈ f, and ∂_2(-f)(z,∙) as the subgradient of the convex function (-f)(z,∙) for any z ∈ f. Let f: ^n → be proper, lsc and icc associated with f: ^n ×^n →. Suppose that f is closed, f is lsc on ^n × f, and f is bounded from below on f × f. For each k, define g^k(x) ≜ x^2/2γ_k h^k(x) ≜ sup_z ∈^n{ -f(z,x) - z^2/2 γ_k + z^⊤ x/γ_k}, ∀ x ∈^n. Let {γ_k}↓ 0 be a given scalar sequence. The following statements hold. (a) f is e-ADC associated with {f^k}, where each f^k(x) ≜ g^k(x) - h^k(x) + δ_ f(x). Moreover, if f = ^n, then f is c-ADC associated with {f^k = g^k - h^k}. (b) For any x̅∈( f), one has ∂_A f(x̅) ⊂∂_1 f(x̅,x̅) - ∂_2 (-f)(x̅,x̅) and ∂^∞_A f(x̅) = {0}. (a) Observe that f^k(x) = inf_z{f(z,x) + δ_ f(x) +z-x^2/(2γ_k) }. It is easy to verify that f(z,x) + δ_ f(x) is proper and lsc in (z,x). One can then generalize the convergent results of the classical Moreau envelopes when γ↓ 0 (see, e.g., <cit.>) to the partial Moreau envelopes under the assumption that f is bounded from below on f × f. Therefore, each f^k is lsc and {f^k(x)}↑ f(x) for all x. 
Hence, f^k f is a direct consequence of Lemma <ref> and the monotonicity of e_γ f(x) with respect to γ. If f = ^n, one has that f is continuous, and thus f^k f by Lemma <ref>. The proof is then completed due to the convexity of g^k and h^k. (b) For any x̅∈( f), [ ∂_A f(x̅) = ⋃_{x^k}→x̅_k → +∞ {∂ g^k(x^k) - ∂ h^k(x^k)}; (1)= ⋃_{x^k}→x̅_k → +∞ {x^k/γ_k - ∂_2 (-f)(z^k, x^k) - z^k/γ_k | z^k = _z ∈^n[ f(z,x^k) + z - x^k^2/2γ_k] }; (2)⊆ ⋃_{(x^k, z^k)}→ (x̅, x̅)_k → +∞ [∂_1 f(z^k, x^k) - ∂_2 (-f)(z^k, x^k) ]; (3)= ∂_1 f(x̅,x̅) - ∂_2 (-f)(x̅,x̅), ] where (1) follows from the convexity of (-f)(z,∙) for any z ∈ f and Danskin's Theorem <cit.>; (2) is due to the optimality condition for z^k, and {z^k}→x̅ is obtained by similar arguments as Proposition <ref>(b) and the assumption that f is bounded from below on f × f; and (3) uses the local boundedness and the outer semicontinuity of ∂_1 f and ∂_2 (-f) at (x̅, x̅) (see <cit.>). Therefore, for any x̅∈ X, ∂ f(x̅) ⊂∂_A f(x̅) ⊂∂_1 f(x̅,x̅) - ∂_2 (-f)(x̅,x̅) and ∂_A f(x̅) is bounded. Moreover, the local boundedness of the mappings ∂_1 f and ∂_2 (-f) at (x̅, x̅) implies ∂^∞_A f(x̅) = {0}. When the original function f possesses an explicit DC decomposition g-h, it must be icc since one can take f(z,x) = g(z) - h(x) for x,z∈ f as a corresponding lifted function. Consequently, we obtain ∂_1 f(x̅,x̅) - ∂_2 (-f)(x̅,x̅) = ∂ g(x̅) - ∂ h(x̅). In the DC literature, a point x̅ is called a critical point of the problem minimize_x∈^n f(x) if 0∈∂ f(x̅) - ∂ g(x̅), to which the DC algorithm is proven to converge to <cit.>. In this instance, the latter condition can be implied by 0∈∂_A f(x̅). 0.1in Example 2.2: the VaR for continuously distributed random variables. Let Ω denote a sample space and Y: Ω→ be a random variable under the same setting as in Example 1.2, where we have defined its VaR. A related concept, called the upper conditional VaR for Y at the confidence level α∈ (0,1), is defined as ^+_α(Y) ≜𝔼[ Y | Y > _α(Y)]. Now let c(x,Z) be the loss associated with the decision vector x ∈^n and the random vector Z: Ω→^m. Even though the VaR of a continuously distributed random function may be discontinuous, the following result indicates that _α[ c(∙, Z) ] is e-ADC under mild conditions. Let α∈ (0,1) be a given constant and c:^n ×^m → be a lsc function with c(∙, z) being convex for any z ∈^m. Suppose that for any x ∈^n, c(x,Z) is a random variable following a continuous distribution induced by a random vector Z:Ω→^m with 𝔼[ |c(x,Z)| ] < +∞. The following properties hold. (a) Define g^k(x) ≜ [k(1-α)+1] ^+_α-1/k[c(x,Z)] and h^k(x) ≜ k(1-α) ^+_α[ c(x,Z) ] for any k > 1/α. Then f(x) ≜_α [ c(x,Z) ] is lsc and e-ADC associated with {f^k=g^k-h^k}. Additionally, if c(∙,∙) is continuous, then f is continuous and c-ADC associated with {f^k}. (b) Suppose that there is a measurable function κ: ^m →_+ such that 𝔼[ κ(Z)] < +∞ and |c(x,z) - c(x^',z)| ≤κ(z) x-x^' for all x,x^'∈^n and z ∈^m, then one has, for any x̅∈^n, ∂_A f(x̅) = ⋃_{x^k}→x̅_k → +∞𝔼[ ∂_x c(x^k,Z) | _α-1/k[ c(x^k,Z)] < c(x^k,Z) < _α[ c(x^k,Z)] ], where the expectation of a random set-valued mapping 𝔼[𝒜(x,Z)] is defined as the set of 𝔼[a(x,Z)] for all measurable selections a(x,Z) ∈𝒜(x,Z). (a) Note that for any x ∈^n, ^+_α[ c(x,Z) ] is well-defined and takes finite value since 𝔼[ |c(x,Z)| ] < +∞. 
Since c(x,Z) follows a continuous distribution for any x ∈^n, it is known that ^+_α[ c(x,Z) ] = inf_t ∈{ t + 1/1-α𝔼[max{c(x,Z)-t,0}] } = 1/1-α∫_α^1_t[ c(x,Z) ] t and ^+_α[ c(∙,Z) ] is convex by the convexity of c(∙,z) for any fixed z ∈^m (c.f. <cit.>). Therefore, both g^k and h^k are convex. For any x ∈^n, we have f^k(x) = k ∫_α-1/k^1_t[ c(x,Z) ] t - k ∫_α^1_t[ c(x,Z) ] t = k ∫_α - 1/k^α_t[ c(x,Z) ] t. Thus, _α-1/k[ c(x,Z) ] ≤ f^k(x) ≤_α[ c(x,Z) ] for any x ∈^n and k > 1/α. Since _t(Z) as a function of t on (0,1) is left-continuous, it follows that {f^k(x)}↑_α[ c(x,Z) ] for any fixed x ∈^n. Observe that {x ∈^n |_α[ c(x,Z) ] ≤ r} = { x ∈^n |ℙ(c(x,Z) ≤ r) ≥α}. Based on our assumptions and <cit.>, we know that, for any r ∈, the probability function x ↦ℙ(c(x,Z) ≤ r) is usc, which implies that the level set {x |ℙ(c(x,Z) ≤ r) ≥α} is closed for any (r, α) ∈× (0,1). Hence, _α [ c(x,Z) ] is lsc for any given α∈ (0,1) and, moreover, is continuous whenever c(∙, ∙) is continuous. Statements in part (a) then hold due to Lemma <ref>. (b) Next we characterize ∂_A_α[ c(x̅,Z)]. Let ℙ_Z be the probability measure associated with Z. By using <cit.>), we have ∂(^+_α[ c(x,Z) ]) = cl(⋃_ϕ∈∂ (CVaR_α) [c(x,Z)]∫∂_x c(x,Z(ω)) ϕ(ω) ℙ_Z(ω)). Since c(x, Z) is continuous distributed for any x ∈^n, we can derive from <cit.> that ∂ (^+_α) [ c(x,Z) ] = {ϕ : Ω→_+ | [ ϕ(ω) = (1-α)^-1 if c(x,Z(ω)) > _α[ c(x,Z)]; ϕ(ω) = 0 if c(x,Z(ω)) < _α[ c(x,Z)] ]}. By the convexity of c(∙,z) for any fixed z ∈^m and the existence of a measurable function κ, we can get from <cit.> that the set ∫∂_x c(x,Z(ω)) ϕ(ω) ℙ_Z(ω) = ∂∫ c(x,Z(ω)) ϕ(ω) ℙ_Z(ω) is closed for any x∈^n. Then, for any k > 1/α, {∂ g^k(x^k) - ∂ h^k(x^k)} can be written as {∫∂_x c(x^k,Z(ω)) ϕ(ω) ℙ_Z(ω) | ϕ(ω) = k if _α-1/k[ c(x^k,Z)] < c(x^k,Z(ω)) < _α[ c(x^k,Z)] }. The proof is thus completed by the definition of ∂_Af. The sequence {f^k} in Proposition <ref> can be viewed as a nonsmooth approximation of f(x) = _α[c(x,Z)], which is constructed through an averaging process with respect to the variable α, rather than x. This is clear from the equation f^k(x) = ∫__t-s[c(x,Z)] ψ^k(s) s with ψ^k(s)≜ k ·1_[0, 1/k](s). Alternatively, a different sequence of averaged functions {f^ k} could be derived through the convolution with mollifiers with respect to x, as studied in <cit.>. Under mild conditions, this approach also ensures f^ k f and offers the added advantage of smoothness for each f^ k. It is worth noting that while the sequence in Proposition <ref> might lack smoothness, its DC decomposition presents potential benefits when it comes to the design of a numerical algorithm. 0.1in Example 2.3: the ℓ_0-norm and rank functions. Given x∈^n, we denote x_0 as its ℓ_0-norm, which is the number of nonzero entries of x. For each entry x_i, one has that x_i_0 is e-ADC associated with {g^k - h^k}, where g^k(x_i) ≜ k |x_i| and h^k(x_i) ≜max(k |x_i| - 1, 0 ). Since x_0 = ∑_i=1^n x_i_0, we know that x_0 is also e-ADC and ∂_Ax_0 = ∂_Ax_1_0 ×⋯×∂_Ax_n_0. For any t ∈, observe that [ ∂_At_0 = ⋃_{t_k}→ t_k → +∞[∂(k |t_k|) - ∂( max(k |t_k| - 1, 0))] ={[ 0 if t ≠ 0; if t = 0 ]. = { v | v t = 0 }. ] Therefore, ∂_Ax_0 = ∂x_0 = ∂^∞_Ax_0 = ∂^∞x_0 = { v ∈^n | v_i x_i = 0, i=1,⋯,n }. The rank function of a given matrix X∈^m× n is the cardinality of its singular values, which is also discontinuous. Since the rank can be viewed as the ℓ_0-norm of the vector of singular values, one may expect that it is e-ADC as well. In the following proposition, we confirm this guess. 
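Before turning to the matrix case, the scalar construction in Example 2.3 admits a quick numerical check. The sketch below (ours, for illustration only) evaluates the k-th approximation g^k - h^k, which simplifies entrywise to min(k|x_i|, 1), and shows its monotone increase toward the ℓ_0-norm as k grows.

```python
import numpy as np

def l0_capped_l1(x, k):
    """k-th DC approximation of ||x||_0 from Example 2.3:
    sum_i [ k|x_i| - max(k|x_i| - 1, 0) ] = sum_i min(k|x_i|, 1)."""
    g = k * np.abs(x)                        # convex part g^k
    h = np.maximum(k * np.abs(x) - 1.0, 0.0)  # convex part h^k
    return float(np.sum(g - h))

if __name__ == "__main__":
    x = np.array([0.0, 1e-3, -0.5, 2.0])  # the true l0-norm is 3
    for k in [1, 10, 100, 10_000]:
        print(k, l0_capped_l1(x, k))       # increases toward 3.0 as k grows
```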
To proceed, let the singular values of any matrix X∈^m× n (m≤ n) be σ_1(X), ⋯, σ_m(X). Let g^k and h^k:ℝ→ℝ be convex functions and symmetric around 0, i.e., g^k(t) = g^k(-t) and h^k(t) = h^k(-t) for any t∈ℝ. Assume that {g^k - h^k} epi-converges to the univariate ℓ_0-norm. Then the rank of X∈^m × n, denoted as (X), is an e-ADC function associated with f^k(X) ≜∑_i=1^m g^k(σ_i(X)) - ∑_i=1^m h^k(σ_i(X)). For any k, the functions ∑_i=1^m g^k ∘σ_i and ∑_i=1^m h^k ∘σ_i are convex, because they are spectral functions of X associated with the absolutely symmetric functions ∑_i=1^m g^k(x_i) and ∑_i=1^m h^k(x_i) for x ∈^m <cit.>. Epi-convergence of f^k to (∙) can be seen from Lemma <ref>, the lower semicontinuity of the rank function and {g^k(x_i) - h^k(x_i)}_k ≥ 0↑x_i_0 for i=1,⋯,m. § THE CONVEX COMPOSITE ADC FUNCTIONS AND MINIMIZATION. As outlined in Section <ref>, the primary aim of this paper is to expand the family of amenable functions, initially introduced in <cit.>, to encompass a wider range of composite structures. Rather than requiring the inner mapping to be smooth, we consider ADC functions explored in the preceding section. Although we strive to preserve the properties of the outer function— being proper, lsc, and convex— as in the original definition of amenable functions, developing a numerical algorithm to solve such composite problems seems challenging due to the lack of a convenient convex approximation for the overall composite function. To mitigate this difficulty, we propose a stronger assumption regarding the outer function: separability across each coordinate. Specifically, we assume that the outer function φ:ℝ^P → can be expressed as φ(z) = ∑_p=1^P φ_p(z_p) for any z=(z_1,⋯,z_p)^⊤∈ℝ^P, where each φ_p signifies a univariate proper, lsc, and convex function. As will be illustrated in Section <ref>, this separability is essential for the development of a numerical algorithm to solve the newly formulated composite optimization problems. The composite optimization problem that we investigate takes the following form: _x∈^n ∑_p=1^P [F_p(x) ≜φ_p (f_p(x))], where, for each p = 1, ⋯, P, φ_p:→ is a proper, lsc, and convex function with (φ_p) ≠∅, and f_p:^n → is lsc. We further assume that ⋂_p=1^P F_p ≠∅ throughout the rest of the paper. Additional assumptions regarding φ_p and f_p will be introduced later. Our goal of this section is to derive necessary optimality conditions for (<ref>), with a particular focus on cases where each inner function f_p lacks local Lipschitz continuity. To accomplish this, we consider to perturb each f_p to a sequence of locally Lipschitz continuous functions and introduce a new concept of asymptotic stationarity. Conditions under which the newly proposed concept can serve as a necessary optimality condition for problem (<ref>) are examined. Our investigation reveals that the key factor lies in epi-convergence of the approximating sequence. As an intermediate step, we develop a new chain rule for the limiting subgradients in Subsection <ref>, which may be of independent interest as this analysis does not depend on the ADC structure of the inner function. §.§ An exploration of perturbation-based optimality conditions. We begin our study of optimality conditions for problem (<ref>) by investigating a simplified case where p = 1 and f:^n → is locally Lipschitz continuous: _x∈^n φ(f(x)). The outer function φ:→ retains its properties of being proper, lsc, and convex. 
Taking inspiration from the parametric approach in <cit.>, we consider a perturbed problem: _x∈^n F(x,u,v) ≜φ(f(x, u) + v), where f: ^n ×→ is a parametric approximation of f with f(x, u) = f(x) for any u≤ 0. We derive the following optimality condition for (<ref>) through the above perturbation. The result serves as an impetus for the forthcoming introduction of asymptotic optimality conditions when f is lsc. Consider problem (<ref>) at u̅=v̅=0 and let x̅ be its local minimizer. Suppose that the following conditions hold. (i) f is locally Lipschitz continuous on ^n×, and F is proper and lsc on ^n ××. (ii) There does not exist (y_1, y_2) ≠ (0, 0) satisfying (0,y_2) ∈∂_(x,u)( y_1 f )(x̅, u̅) and y_1 ∈∂^∞φ(f(x̅)). Then, there exists (y̅_1, y̅_2) ∈^2 such that (0, y̅_2) ∈∂_(x,u)(y̅_1 f )(x̅, u̅) with y̅_1 ∈∂φ(f(x̅, u̅)+v̅). It is easy to verify that y=0 is the only scalar such that y ∈∂^∞φ(f(x̅)) with 0 ∈∂_(x,u,v)[y (f(x,u) + v)] (x̅, u̅, v̅) = (∂_(x,u)(y f )(x̅, u̅), y). The chain rule <cit.> yields ∂_(x,u,v)F(x̅, u̅, v̅) ⊂{(∂_(x,u)(y_1 f )(x̅, u̅), y_1) | y_1 ∈∂φ(f(x̅, u̅) + v̅) }, ∂^ ∞_(x,u,v)F(x̅, u̅, v̅) ⊂{(∂_(x,u)(y_1 f )(x̅, u̅), y_1) | y_1 ∈∂^∞φ(f(x̅, u̅) + v̅) }. From the second inclusion and condition (2), we know that there does not exist y ≠ (0,0) such that (0, y) ∈∂^∞_(x,u,v)F(x̅,u̅,v̅). As a consequence of the parametric version of Fermat's rule <cit.>, there exists y̅∈^2 with (0, y̅) ∈∂_(x,u,v)F(x̅, u̅, v̅). Combining this with the first inclusion gives rise to the result. The parameters y̅_1 and y̅_2 in (<ref>) can be interpreted as multipliers corresponding to the linear perturbation v ∈ and the nonlinear perturbation u ∈, respectively. The standard optimality condition for the problem minimize_x∈^n φ(f(x)) requires the function f to be locally Lipschitz continuous <cit.>, which may not hold under our general setting for (<ref>). Although Proposition <ref> imposes a similar condition (a), it suggests that, to derive necessary optimality conditions for (<ref>), we can leverage the optimality conditions for a sequence of perturbed problems (<ref>) with {u_k}↓ 0 and {v_k}↓ 0, where each f(x,u_k) is locally Lipschitz continuous. Since (<ref>) is a necessary optimality condition when u̅=v̅ = 0, one may anticipate that the limit of the following conditions becomes a necessary optimality condition for (<ref>) when f lacks the local Lipschitz continuity: [ ∃ {ε^k}→ 0, {u_k}↓ 0, {v_k}↓ 0, {x^k}→x̅, {(y_1,k, y_2,k)} such that; (ε^k, y_2,k) ∈∂_(x,u)(y_1,kf )(x^k, u_k) y_1,k∈∂φ(f(x^k, u_k) + v_k). ] This idea of asymptotically relaxing optimality conditions has also appeared in a recent paper <cit.>. The authors have proposed an asymptotic version of the Mordukhovich-stationarity (M-stationary) condition for nonsmooth constrained optimization problems, which is proven to be a necessary condition for a local minimizer and is equivalent to M-stationarity under a constraint qualification. If we further assume that the parametric approximation f(∙,u) is DC for any fixed u ∈_++, the following lemma provides an estimation for the partial subgradient of (y_1f ) with respect to x. Suppose that there exist g, h: ^n ×_++→ such that f(x,u) = g(x,u) - h(x,u) for any (x,u) ∈^n ×_++ and g(∙, u), h(∙, u) are convex for any fixed u ∈_++. Then, for any y_1 ∈, one has { w | ∃ y_2 with (w,y_2) ∈∂_(x,u)( y_1 f )(x, u) }⊂ y_1 [ ∂_x g(x,u) - ∂_x h(x,u) ], ∀ (x,u) ∈^n ×_++. 
By the definition of the limiting subgradient and the rule of partial subgradients <cit.>, for any (x,u) ∈^n ×_++, it holds that [ ∂_(x,u)(y_1 f )(x,u) = _(x^', u^') →_(y_1 f) (x,u)∂_(x,u)( y_1 f )(x^', u^'); ⊂_(x^', u^') →_(y_1 f) (x,u)∂_x ( y_1 f )(x^', u^') ×∂_u ( y_1 f )(x^', u^'); ⊂_(x^', u^') →_(y_1 f) (x,u)∂_x ( y_1 f )(x^', u^') ×_(x^', u^') →_(y_1 f) (x,u)∂_u ( y_1 f )(x^', u^'). ] Therefore, for any (x,u) ∈^n ×_++, [ { w | ∃ y_2 with (w,y_2) ∈∂_(x,u)( y_1 f )(x, u) } ⊂_(x^', u^') → (x,u)∂_x ( y_1 f )(x^', u^'); ⊂_(x^', u^') → (x,u) y_1 [∂_x g(x^', u^') - ∂_x h(x^', u^')]. ] Due to the convexity of g(∙, u) and h(∙, u) for any fixed u ∈_++, the mappings ∂_x g and ∂_x h are osc at (x,u) ∈^n ×_++ (see <cit.> for example). Hence, the last outer limit equals y_1 [∂_x g(x, u) - ∂_x h(x, u)], which completes the proof. Let {f^k(∙) ≜ f(∙, u_k)}, {g^k(∙) ≜ g(∙, u_k)}, and {h^k(∙) ≜ h(∙, u_k)}. According to Lemma <ref>, the sequence {(ε^k, u_k, v_k, x^k, y_1,k)} in (<ref>) satisfies ε^k ∈ y_1,k[∂ g^k(x^k) - ∂ h^k(x^k)] with y_1,k∈∂φ(f^k(x^k) + v_k). Importantly, this allows us to discard the multiplier y_2,k, which is associated with the nonlinear perturbation u_k. Given that f^k is locally Lipschitz continuous as per our assumption, we further deduce from the chain rule in <cit.> and ∂ f^k(x) ⊂[∂ g^k(x) - ∂ h^k(x)] that x^k is approximately a stationary point of the perturbed problem minimize_x ∈^n φ(f^k(x)). Two major distinctions exist between the standard optimality condition and its asymptotic counterpart: (i) The sequence of multipliers {y_1,k}, corresponding to the linear perturbation v_k, may be unbounded; (ii) The sequence {f^k(x^k)} may not converge to f(x̅), without assuming the continuous convergence of {f^k}. To confirm that the asymptotic stationarity remains a valid optimality condition, addressing these issues is crucial. In the following subsections, we will prove that the boundedness issue related to {y_1,k} can be tackled by imposing constraint qualifications (see (<ref>) as well as a unified asymptotic version (<ref>) in Assumption 5). Meanwhile, the full convergence of {f^k(x^k)} is not needed, provided epi-convergence of {f^k} is guaranteed. §.§ Chain rules for the limiting subgradients of lsc functions. In preparation for the introduction of the asymptotic necessary optimality conditions, we establish chain rules for the limiting subgradients when the inner function is only assumed to be lsc. These results will be utilized in Proposition <ref> to validate the constraint qualification (<ref>) that will be presented later. Let φ:→ be a proper, lsc and convex function, and f:^n→ be a real-valued function. When f is locally Lipschitz continuous and x̅∈(φ∘ f), the classical chain rule <cit.> states that if the only scalar y ∈𝒩_φ (f(x̅)) with 0 ∈∂(y f)(x̅) is y = 0, one has [ ∂ (φ∘ f)(x̅) ⊂{ ∂(y f)(x̅) | y ∈∂φ(f(x̅)) } , ∂^∞ (φ∘ f)(x̅) ⊂{ ∂(y f)(x̅) | y ∈𝒩_φ (f(x̅)) }. ] When the outer function φ is nondecreasing with supφ = +∞, and φ(α) > φ(f(x̅)) for all α > f(x̅), a chain rule for ∂(φ∘ f) can be derived using the nonlinear rescaling presented in <cit.>. However, the stated assumption on φ may even fail for a simple indicator function over _-. Specifically, consider the composite function δ_(-∞,0]∘ f with a lsc inner function f:→. Suppose x̅ is a point such that f(x̅) < 0. It is evident that the assumption of strictly monotonicity on φ fails at f(x̅). 
Furthermore, the classical chain rule may also fail in this case due to the discontinuity of f even though we have {∂(y f)(x̅) | y ∈𝒩_(-∞,0] (f(x̅))} = {0}. As our aim is to allow φ to encode constraints through the indicator function, we develop a chain rule for ∂(φ∘ f) that allows f to be merely lsc and φ to lack the strictly increasing property at a given point. Let φ: → be proper, lsc, convex, and nondecreasing with supφ = +∞, and f: ^n → be lsc. Consider x̅∈ (φ∘ f). If the only scalar y ∈Limsup_x →_(φ∘ f)x̅ 𝒩_φ (f(x)) with 0 ∈ y ·Limsup_x →x̅∂ f(x) is y = 0, then [ ∂ (φ∘ f)(x̅) ⊂{ y ·_x →x̅∂ f(x) | y ∈_x →_(φ∘ f) x̅∂φ(f(x)) }∪[ _x →x̅^∞∂ f(x) \{0} ] ,; ∂^∞ (φ∘ f)(x̅) ⊂{ y ·_x →x̅∂ f(x) | y ∈_x →_(φ∘ f) x̅𝒩_φ (f(x)) }∪[ _x →x̅^∞∂ f(x) \{0} ]. ] The basic idea is to rewrite φ∘ f as a parametric minimization problem and apply <cit.>. Note that φ(f(x)) = inf_α [g(x, α) ≜δ_epi f(x, α) + φ(α)] for any x ∈ (φ∘ f). Define the corresponding set of optimal solutions as Λ(x) for any x ∈ (φ∘ f). Then, we have f(x̅) ∈Λ(x̅) and φ(α) = φ(f(x̅)) for any α∈Λ(x̅). Based on the fact that supφ = +∞ and f is lsc, the conditions in <cit.> can be verified similarly as in the proof of <cit.>. Therefore, ∂ (φ∘ f)(x̅) ⊂{ v | (v, 0) ∈∂ g(x̅, α̅), α̅∈Λ(x̅) } , ∂^∞ (φ∘ f)(x̅) ⊂{ v | (v, 0) ∈∂^∞ g(x̅, α̅), α̅∈Λ(x̅) }. If we can show that for any α̅∈Λ(x̅), 𝒩_epif(x̅, α̅) ∩( {0}× [-𝒩_φ(α̅)] ) = {0}, then it follows from the sum rule in <cit.> that ∂ g(x̅, α̅) ⊂𝒩_epif(x̅, α̅) + {0}×∂φ(α̅) and ∂^∞ g(x̅, α̅) ⊂𝒩_epif(x̅, α̅) + {0}×𝒩_φ(α̅). We divide the proof of (<ref>) into two cases. Case 1. If Λ(x̅) is a singleton {f(x̅)}, we can characterize 𝒩_epif(x̅, f(x̅)) by using the result in <cit.>. Since ∂ f(x̅) ⊂Limsup_x →x̅∂ f(x) and 𝒩_φ(f(x̅)) ⊂Limsup_x →_(φ∘ f) x̅ 𝒩_φ(f(x)), it follows from our assumption that either 0 ∉∂ f(x̅) or 𝒩_φ (f(x̅)) = {0}. Hence, condition (<ref>) is satisfied, and we can derive the stated results for ∂(φ∘ f)(x̅) and ∂^∞ (φ∘ f)(x̅) based on the observations that ∂φ(f(x̅)) ⊂Limsup_x →_(φ∘ f)x̅φ(f(x)) and ∂^∞ f(x̅) ⊂Limsup_x →x̅^∞∂ f(x). Case 2. Otherwise, there exists α̅_max∈ (f(x̅), +∞) such that Λ(x̅) = [f(x̅), α̅_max] since φ is lsc, nondecreasing and supφ = +∞. Thus, ∂ (φ∘ f)(x̅) ⊂[{ v | (v, 0) ∈∂ g(x̅, f(x̅))}∪{ v | (v, 0) ∈∂ g(x̅, α̅), f(x̅) < α̅≤α̅_max}], ∂^∞ (φ∘ f)(x̅) ⊂[{ v | (v, 0) ∈∂^∞ g(x̅, f(x̅))}∪{ v | (v, 0) ∈∂^∞ g(x̅, α̅), f(x̅) < α̅≤α̅_max}]. Let Λ_1(x̅) ≜{α̅∈ (f(x̅), α̅_max] | ∃ {x^k}→x̅ with {f(x^k)}→α̅} and Λ_2(x̅) ≜Λ(x̅) \Λ_1(x̅). In the following, we characterize 𝒩_epif (x̅, α̅) and verify (<ref>) separately for α̅∈Λ_1(x̅) and α̅∈Λ_2(x̅). Case 2.1. For any α̅∈Λ_1(x̅), we first prove the inclusion: 𝒩_epif(x̅, α̅) ⊂[{λ (v, -1) | v ∈_x →x̅∂ f(x), λ > 0 }⋃{ (v, 0) | v ∈_x →x̅^∞∂ f(x) }]. Observe that for any α̅∈Λ_1(x̅), it holds that 𝒩_epif(x̅, α̅) ⊂_(x, α)(∈epif) → (x̅, α̅) 𝒩^p_epif (x, α) ⊂_x →x̅ 𝒩^p_epif (x,f(x)) ⊂_x →x̅ 𝒩_epif (x,f(x)), where the first inclusion is because any normal vector is a limit of proximal normals at nearby points <cit.>; the second one uses the fact that, for any fixed α > f(x), any proximal normal to epif at (x, α) is also a proximal normal to epif at (x, f(x)); the last inclusion follows directly from the definition of proximal normals. Based on the the result of <cit.> that 𝒩_epif (x,f(x)) = {λ (v, -1) | v ∈∂ f(x), λ > 0 }∪{ (v, 0) | v ∈∂^∞ f(x) }, we conclude that 𝒩_epif (x̅,α̅) ⊂^n ×_- for any α̅∈Λ_1(x̅). For any (v, -1) ∈𝒩_epif (x̅,α̅) with α̅∈Λ_1(x̅), there exist {x^k}→x̅, {v^k}→ v with v^k ∈∂ f(x^k). Then v ∈Limsup_x →x̅∂ f(x). 
To prove (<ref>), it remains to show that v ∈Limsup_x →x̅^∞∂ f(x) whenever (v, 0) ∈𝒩_epif (x̅, α̅). It follows from (<ref>) that (v, 0) is a limit of proximal normals and limiting normals of epif at (x^k, f(x^k)) for some sequence {x^k}→x̅. First consider the case {(v^k, 0)}→ (v, 0) with (v^k, 0) ∈𝒩^p_epif (x^k,f(x^k)). Following the argument in the proof of <cit.>, we can derive v^k ∈∂^∞ f(x^k). Therefore, v ∈_k → +∞ ∂^∞ f(x^k) ⊂ _k → +∞( ⋃_{x^k,i}→_f x^k_i → +∞^∞∂ f(x^k,i)) ⊂ ⋃_{x^j}→x̅_j → +∞^∞ ∂ f(x^j), where the last inclusion follows from a standard diagonal extraction procedure. In the other case, we have {λ^k (v^k, -1)}→ (v, 0) with {λ^k}↓ 0 and v^k ∈∂ f(x^k) for all k ≥ 0. It is easy to see v ∈Limsup_x →x̅^∞∂ f(x). So far, we obtain inclusion (<ref>). Since α̅∈Λ_1(x̅), we have 𝒩_φ (α̅) ⊂Limsup_x →_(φ∘ f)x̅ 𝒩_φ(f(x)), and our assumption implies that λ = 0 is the unique solution satisfying 0 ∈λ·Limsup_x →x̅∂ f(x) with λ∈𝒩_φ(α̅). Thus, (<ref>) is satisfied. Case 2.2. For any α̅∈Λ_2(x̅), consider any sequence {(x^k, α^k)}⊂epif converging to (x̅, α̅). Then α^k > f(x^k) for all k sufficiently large since α̅∉Λ_1(x̅). It is easy to see that 𝒩^p_epif (x^k, α^k) ⊂^n ×{0}, which gives us 𝒩_epif (x^k, α^k) ⊂^n ×{0}. By following a similar pattern as the final part of Case 2.1, it is not difficult to obtain 𝒩_epif(x̅, α̅) ⊂{ (v, 0) | v ∈Limsup_x →x̅^∞∂ f(x) } for any α̅∈Λ_2(x̅). In this case, (<ref>) holds trivially. Combining the results of Cases 2.1 and 2.2, we have [ { v | (v, 0) ∈∂ g(x̅, α̅), f(x̅) < α̅≤α̅_max}; ⊂{ y ·_x →x̅∂ f(x) | y ∈∂φ(α̅), α̅∈Λ_1(x̅) }∪{_x →x̅^∞∂ f(x) | 0 ∈∂φ(α̅), f(x̅) < α̅≤α̅_max}; ⊂{ y ·_x →x̅∂ f(x) | y ∈_x →_(φ∘ f) x̅∂φ(f(x)) }∪[ _x →x̅^∞∂ f(x) \{0} ], ] and [ { v | (v, 0) ∈∂ g^∞(x̅, α̅), f(x̅) < α̅≤α̅_max}; ⊂ { y ·_x →x̅∂ f(x) | y ∈_x →_(φ∘ f) x̅𝒩_φ(f(x)) }∪[ _x →x̅^∞∂ f(x) \{0} ]. ] The proof is then completed by the inclusions in (<ref>). A necessary optimality condition for problem (<ref>) can be derived if each φ_p∘ f_p either satisfies the conditions in Proposition <ref>, or the conditions in <cit.>. Depending on whether φ_p is nondecreasing or not, we partition {1, ⋯, P} into two categories: I_1 ≜{ 1 ≤ p ≤ P |φ_p is nondecreasing} I_2 ≜{1, ⋯, P}\ I_1. Observe that we do not specifically address the case where φ_p is nonincreasing, as one can always redefine φ_p(t) = φ_p(-t), enabling the treatment of these indices in the same manner as those in I_1. Let x̅∈⋂_p=1^P F_p be a local minimizer of problem (<ref>). Suppose that supφ_p = +∞ for each p ∈ I_1, and f_p is locally Lipschitz continuous for each p ∈ I_2. In addition, assume that the following constraint qualifications hold. (i) For each p ∈ I_1, the only scalar y ∈Limsup_x →_F_p x̅ 𝒩_φ_p(f_p(x)) with 0 ∈ y ·Limsup_x →x̅∂ f_p(x) is y = 0; (ii) For each p ∈ I_2, the only scalar y ∈𝒩_φ_p(f_p(x̅)) with 0 ∈∂(y f_p)(x̅) is y = 0; (iii) w_1 = ⋯ = w_p = 0 is the unique solution of ∑_p=1^P w_p = 0 with {[ w_p ∈{ y_p ·_x →x̅∂ f_p(x) | y_p ∈_x →_F_px̅𝒩_φ_p (f_p(x)) }∪[ _x →x̅^∞∂ f_p(x) \{0} ], ∀ p ∈ I_1,; w_p ∈{∂(y_p f_p)(x̅) | y_p ∈𝒩_φ_p(f_p(x̅))}, ∀ p ∈ I_2. ]. Then it holds that [ 0 ∈ ∑_p ∈ I_1{ y_p ·_x →x̅∂ f_p(x) | y_p ∈_x →_F_px̅∂φ_p(f_p(x)) }∪[ _x →x̅^∞∂ f_p(x) \{0} ]; +∑_p ∈ I_2{∂(y_p f_p)(x̅) | y_p ∈∂φ_p(f_p(x̅)) }. ] It follows directly from the assumed constraint qualifications (i)-(iii) and Proposition <ref> that w_1 = ⋯ = w_p = 0 is the only combination of vectors w_p ∈∂^∞ (φ_p ∘ f_p)(x̅) with ∑_p=1^P w_p = 0. 
Then, by the Fermat's rule <cit.> and the sum rule of limiting subgradients <cit.>, we have 0 ∈∂[ ∑_p=1^P (φ_p∘ f_p)(x̅) ] ⊂∑_p=1^P ∂ (φ_p ∘ f_p)(x̅). The proof is thus completed by the characterization of ∂ (φ_p ∘ f_p)(x̅) in Proposition <ref>. §.§ Epi-convergence of convex composite ADC functions. To further prepare for the forthcoming necessary optimality conditions for (<ref>) that will be presented in the next subsection, we investigate sufficient conditions for ensuring φ∘ f^k φ∘ f. A recent paper <cit.> has provided an in-depth study of the conditions that ensure both epi-convergence of composite approximating functions and graphical convergence of their subgradients. As our focus lies solely on the former, the proposed conditions are weaker than those in <cit.>. We begin by examine the case where φ is univariate. The conditions for this particular case can be classified into two categories: one for the situation where φ is nondecreasing, in which f^k f is typically imposed; on the other hand, when φ is not monotone, a more stringent convergence condition, f^k f, is needed. Given f:^n → and a proper function φ:→. One has φ∘ f^k φ∘ f, if one of the following conditions holds: (a) f^k f, and φ:→ is continuous and nondecreasing. (b) f^k f, f^k(x) ≤ f(x) for any x∈^n, and φ is lsc and nondecreasing. (c) f^k f, and φ is lsc on and continuous relative to φ, and for all x with f(x) ∈bdry(φ), there exists {x^k}→ x with f(x^k) ∈(φ). Moreover, if φ is real-valued, then (c) is also sufficient for φ∘ f^k φ∘ f to hold. (a) See <cit.>. (b) For any sequence {x^k}→ x, consider a subsequence {φ(f^k(x^k)) }_k ∈ N→lim inf_k → +∞φ(f^k(x^k)) for some N ∈ℕ^♯_∞. If lim inf_k (∈ N)→ +∞ f^k(x^k) = +∞, then the monotonicity of φ yields lim inf_k → +∞φ(f^k(x^k)) = lim_k (∈ N) → +∞φ(f^k(x^k)) = supφ≥φ(f(x̅)). Otherwise, there exists an index set N^'⊂ N with {f^k(x^k)}_k ∈ N^'→lim inf_k(∈ N) → +∞ f^k(x^k) ∈ since epi-convergence gives a finite lower bound. Using the lower semicontinuity of φ, we obtain lim inf_k → +∞φ(f^k(x^k)) = lim_k (∈ N^') → +∞φ(f^k(x^k)) ≥φ(lim_k (∈ N^') → +∞ f^k(x^k) ) = φ(lim inf_k (∈ N) → +∞ f^k(x^k) ). By the monotonicity of φ and f^k f, we have φ(lim inf_k (∈ N) → +∞ f^k(x^k) ) ≥φ(lim inf_k → +∞ f^k(x^k) ) ≥φ(f(x)). Thus, lim inf_k → +∞φ(f^k(x^k)) ≥φ(f(x)). Next, we show the existence of a sequence {x^k}→ x such that lim sup_k → +∞φ(f^k(x^k)) ≤φ(f(x)). Indeed, one has φ(f^k(x)) ≤φ(f(x)) due to f^k ≤ f and, hence, {x^k = x} is the desired sequence. This completes the proof. (c) The proof can be found in <cit.>. If φ is real-valued, φ∘ f^k φ∘ f is a direct consequence of the continuity of φ and f^k f. We further note that, in Lemma <ref>(c), the continuity of φ relative to its domain can be satisfied when φ is lsc and univariate convex <cit.>. Now we return to our composite model (<ref>). In addition to epi-convergence of the composite function φ_p ∘ f^k_p φ_p ∘ f_p for each p, two more conditions are needed for subsequent analysis. 0.97 Assumption 1 For each p, we have (a) f_p is an ADC function associated with {f^k_p = g^k_p-h^k_p}, and g^k_p = h^k_p = ^n; (b) -∞ < lim inf_x^'→ x, k → +∞ f^k_p(x^') ≤lim sup_x^'→ x, k → +∞ f^k_p(x^') < + ∞, ∀ x ∈^n; (c) [F^k_p ≜φ_p ∘ f^k_p] F_p. Obviously, f^k_p f_p is sufficient for Assumption 1(b) to hold. By using f^k_p f_p, we have lim inf_x^'→ x, k → +∞ f^k_p(x^') ≥ f_p(x) > -∞ for each p at any x ∈^n. However, the condition lim sup_x^'→ x, k → +∞ f^k_p(x^') <+∞ does not hold trivially. 
Indeed, for any continuous function f, by defining f^k(x) ={[ f(x) + k^2 x + k if x ∈ [- 1/k, 0]; f(x) - k^2 x + k if x ∈ (0, 1/k]; f(x) otherwise ]., one has f^k f but lim sup_x^'→ 0, k → +∞ f^k(x^') = + ∞. Another observation is that, at each point x and for every sequence {x^k}→ x, the sequence {f^k_p(x^k)} must be bounded due to Assumption 1(b). Assumption 1(c) guarantees that each F_p = φ_p ∘ f_p is lsc, yet it doesn't necessarily lead to ∑_p=1^P F^k_p ∑_p=1^P F_p. Consequently, Assumption 1(c) alone may not be sufficient to ensure that every limit point of x^k with x^k ∈_x ∑_p=1^P F^k_p(x) qualifies as a minimizer of ∑_p=1^P F_p. To maintain epi-convergence under addition of functions, one could take into consideration the sufficient conditions outlined in <cit.>. §.§ Asymptotic stationarity under epi-convergence. In this subsection, we introduce a novel stationarity concept for problem (<ref>), grounded in a monotonic decomposition of univariate convex functions. We demonstrate that under certain constraint qualifications, epi-convergence of approximating functions ensures this stationarity concept as a necessary optimality condition. Alongside the fact that epi-convergence results in the convergence of global optimal solutions <cit.>, this highlights the usefulness of epi-convergence as a tool for studying the approximation of the composite problem (<ref>). The following lemma is an extension of a monotonic decomposition of real-valued univariate convex functions in <cit.>. Let φ: → be a proper, lsc and univariate convex function. Then there exist a proper, lsc, convex and nondecreasing function φ^↑, as well as a proper, lsc, convex and nonincreasing function φ^↓, such that φ = φ^↑ + φ^↓. In addition, if (φ) ≠∅, the following properties hold: (a) For any z_0 ∈, there exists a positive scalar δ such that either 𝒩_φ^↑(z) = {0} for any z ∈𝔹(z_0, δ), or 𝒩_φ^↓(z) = {0} for any z ∈𝔹(z_0, δ). (b) ∂φ(z) = ∂φ^↑(z) + ∂φ^↓(z) and 𝒩_φ^↑(z) ⋂[-𝒩_∂φ^↓(z) ] = {0} for any z ∈φ. Consequently, 𝒩_φ (z) = 𝒩_φ^↑(z) + 𝒩_φ^↓(z) for any z ∈φ. From the convexity of φ, we conclude that φ is an interval on , possibly unbounded. In fact, we can explicitly construct φ^↑ and φ^↓ in following two cases. Case 1. If φ has no direction of recession, i.e., there does not exist d ≠ 0 such that for any z, φ(z+λ d) is a nonincreasing function of λ>0, it follows from <cit.> that φ attains its minimum at some z^∗∈φ. Define φ^↑(z) = {[ φ(z^∗) if z ≤ z^∗,; φ(z) if z > z^∗, ]. φ^↓(z) = {[ φ(z) - φ(z^∗) if z ≤ z^∗,; 0 if z > z^∗. ]. For any z ≠ z^∗, note that 𝒩_φ^↑(z) = {[ {0} if z < z^∗,; 𝒩_φ(z) if z > z^∗, ]. 𝒩_φ^↓(z) = {[ 𝒩_φ(z) if z < z^∗,; {0} if z > z^∗. ]. Thus, part (a) holds except at z_0 = z^∗. When z^∗∈(φ), there exists δ > 0 such that 𝒩_φ^↑ (z) = 𝒩_φ^↓ (z) = {0} for any z ∈𝔹(z^∗, δ). Next, consider the case of z^∗∈bdry(φ). If φ(z)=+∞ for any z < z^∗, then we have 𝒩_φ^↑ (z) = {0} for any z ∈𝔹(z^∗,δ) with some δ>0 since φ^↑ is finite-valued in some neighborhood of z^∗. Likewise, if φ(z)=+∞ for any z > z^∗, we have 𝒩_φ^↓ (z) = {0} for any z ∈𝔹(z^∗, δ) with some δ>0. Combining the arguments for z ≠ z^∗, we conclude that (a) is true. To show part (b), observe that ∅≠(φ) ⊂[(φ^↑) ∩(φ^↓)]. Consequently, from <cit.>, we have ∂φ(z) = ∂φ^↑(z) + ∂φ^↓(z) for any z ∈. The remaining results hold trivially if either φ^↑ = or φ^↓ =. Now we only need to consider the case where φ^↑ = (-∞, p ] and φ^↓ = [ q, +∞) for some p > q (p ≠ q due to (φ) ≠∅), since the cases involving open domains can be derived similarly. 
It is evident that 𝒩_(-∞, p ] (z) ∩[-𝒩_[ q, +∞)(z)] = {0} for any z. By <cit.> and the fact that the domain of φ is the intersection of the domains of φ^↑ and φ^↓, it holds that 𝒩_φ (z) = 𝒩_φ^↑(z) + 𝒩_φ^↓(z) for any z ∈φ. Case 2. Otherwise, there exists d ≠ 0 such that for any z, φ(z + λ d) is a nonincreasing function of λ > 0. Consequently, the domain of φ must be an unbounded interval of the real line. If d = 1 (or d = -1) is such a recession direction, then φ is nonincreasing (or nondecreasing) on the whole real line. We can set φ^↑ = 0 and φ^↓ = φ (or φ^↑ = φ and φ^↓ = 0). Since we have shown that φ is either nondecreasing or nonincreasing in Case 2, the conclusions of (a) and (b) follow directly. The proof is thus completed. In the subsequent analysis, we use φ^↑ and φ^↓ to denote the monotonic decomposition of any univariate convex function φ constructed in the proof of Lemma <ref> and, in particular, we take φ^↓ = 0 whenever φ is nondecreasing. We are now ready to present the definition of asymptotically stationary points. Let each f_p be an ADC function associated with {f^k_p=g^k_p-h^k_p}. For each p, define T_p(x) ≜{t_p | ∃ N ∈ℕ_∞^♯, {x^k}→ x with {f^k_p(x^k) }_k ∈ N→ t_p}, ∀ x ∈^n. We say that x̅ is an asymptotically stationary (A-stationary) point of problem (<ref>) if for each p, there exists y_p ∈{∂φ_p(t_p) | t_p ∈ T_p(x̅)} such that 0 ∈ ∑_p=1^P ( { y_p ∂_A f_p(x̅)}∪[ ±∂^∞_A f_p(x̅)\{0} ] ). We say that x̅ is a weak asymptotically stationary (weak A-stationary) point of problem (<ref>) if for each p, there exist y_p,1∈{∂φ^↑_p(t_p) | t_p ∈ T_p(x̅)} and y_p,2∈{∂φ^↓_p(t_p) | t_p ∈ T_p(x̅)} such that 0 ∈∑_p=1^P ( { y_p,1 ∂_A f_p(x̅) + y_p,2 ∂_A f_p(x̅) }∪[ ±∂^∞_A f_p(x̅)\{0}] ). Clearly, the definitions of weak A-stationarity and A-stationarity coincide when each φ_p is nondecreasing or nonincreasing. For any given point x̅, we can rewrite the definition of A-stationarity of problem (<ref>) as 0 ∈∑_p ∈ I[ ±∂^∞_A f_p(x̅)\{0}] + ∑_p ∈{1,⋯,P}\ I{y_p ∂_A f_p(x̅)} for some index set I ⊂{1,⋯,P}. While for any p ∈ I, the scalar y_p does not explicitly appear in this inclusion, the existence of y_p ∈{∂φ_p(t_p) | t_p ∈ T_p(x̅)} plays a crucial role in ensuring that x̅∈ (φ_p ∘ f_p); see Example 3.2 in the next subsection for a practical instance. In the following, we take a detour to compare the A-stationarity with the stationarity for the composite model defined in <cit.>, where the author has focused on a more general composite problem minimize_x∈^n φ∘ (f_1(x),⋯, f_P(x)) for some convex function φ:^P → and locally Lipschitz continuous f_p for each p. Consider the special case where φ(z) = ∑_p=1^P φ_p(z_p). Under this setting, a vector x̅ is called a stationary point in <cit.> if there exist y̅ and z̅ such that 0 ∈ S(x̅, y̅, z̅), where S(x̅, y̅, z̅) ≜{(f_1(x̅), ⋯, f_P(x̅)) - z̅}×{∂φ_1(z̅_1) ×⋯×∂φ_P(z̅_P) - y̅}×(∑_p=1^P y̅_p ∂_C f_p(x̅)). For any fixed k ≥ 0, the surrogate set-valued mapping S^k can be defined analogously to S by substituting f_p and φ_p with f^k_p and φ^k_p for each p. The cited paper provides sufficient conditions to ensure Limsup_k → +∞ (gph S^k) ⊂gph S, which asserts that any accumulation point (x̅, y̅, z̅) of a sequence {(x^k, y^k, z^k)} with 0 ∈ S^k(x^k, y^k, z^k) yields a stationary point x̅. Our study of asymptotic stationarity differs from <cit.> in the following aspects: * Our outer convex function φ is assumed to have the separable form ∑_p=1^P φ_p, while <cit.> allows a general proper, lsc, convex function. In addition, each φ_p is fixed in our approximating problem while <cit.> considers a sequence of convex functions {φ^k_p} that epi-converges to φ_p. 
* We do not require the inner function f_p to be locally Lipschitz continuous. * If for some p, the function f_p fails to be locally Lipschitz continuous, then (<ref>) is not a necessary condition for optimality. In addition, the conditions in <cit.>, particularly the requirement of f^k_p f_p, become challenging to satisfy due to the potential discontinuity of f_p. At the sacrifice of the consistency, we weaken the optimality condition (<ref>) by considering the (weak) A-stationarity. * If for each p, f_p is locally Lipschitz continuous and bounded from below, it then follows from Proposition <ref> that f_p is c-ADC associated with {f^k_p=g^k_p-h^k_p} such that ∂ f_p(x) ⊂∂_A f_p(x) ⊂∂_C f_p(x) and ∂^∞_A f_p(x) = {0} for any x. Moreover, by f^k_p f_p, one has T_p(x) = {f_p(x)}. Thus, for any A-stationary point x̅ induced by these ADC decompositions, for each p, there exists y_p ∈∂φ_p(f_p(x̅)) such that 0 ∈∑_p=1^P { y_p ∂_A f_p(x̅)} ⊂ ∑_p=1^P { y_p ∂_C f_p(x̅)}. So x̅ is also a stationary point defined in <cit.> satisfying 0 ∈ S(x̅, y̅, z̅). Indeed, A-stationarity here can be sharper than the latter one as the last inclusion in (<ref>) may not hold with equality. Next we demonstrate our main result of this section that both A-stationary and weak A-stationary conditions are first-order necessary conditions for local optimality under Assumption 1. While it may seem natural to combine Proposition <ref> and Proposition <ref>(a) to derive a necessary optimality condition under f^k_p f_p, we take an alternative approach below under Assumption 1(c) that φ_p ∘ f^k_p φ_p ∘ f_p. In fact, the former approach leads to a optimality condition that is less stringent than the (weak) A-stationarity. To proceed, for each p, we define S_p(x) ≜ { {x^k_p}→ x | {φ_p(f^k_p(x^k_p))}→φ_p(f_p(x)) }, ∀ x ∈(φ_p ∘ f_p). Let x̅∈⋂_p=1^P F_p be a local minimizer of problem (<ref>). Suppose that Assumption 1 and the following two conditions hold: (i) For each p and any sequence {x^k_p}∈ S_p(x̅), there is a positive integer K such that 0 ∉∂_C f^k_p(x^k_p) or 𝒩_φ_p(f^k_p(x^k_p)) = {0}, ∀ k ≥ K, and [0 ∈ y_p ∂_A f_p(x̅), y_p ∈{𝒩_φ_p(t_p) | t_p ∈ T_p(x̅) }] ⟹ y_p = 0, p=1,⋯,P. (ii) One has [∑_p=1^P w_p = 0, w_p ∈∂^∞ (φ_p ∘ f_p)(x̅)] ⟹ w_1 = ⋯ = w_P = 0. Then x̅ is a (weak) A-stationary point of problem (<ref>). By using Fermat's rule and the condition (<ref>), we have 0 ∈∑_p=1^P ∂ (φ_p ∘ f_p)(x̅) (1)⊆ ∑_p=1^P ⋃_{x^k_p}∈ S_p_k → +∞ ∂ (φ_p ∘ f^k_p)(x^k_p) (2)⊆∑_p=1^P ⋃_{x^k_p}∈ S_p_k → +∞ {∂ (y^k_p f^k_p)(x^k_p) | y^k_p ∈∂φ_p(f^k_p(x^k_p)) } (3)⊆∑_p=1^P ⋃_{x^k_p}∈ S_p_k → +∞ { y^k_p v^k_p | y^k_p ∈∂φ_p(f^k_p(x^k_p)), v^k_p ∈∂_C f^k_p(x^k_p) } (4)⊆∑_p=1^P ⋃_{x^k_p}∈ S_p_k → +∞ { y^k_p v^k_p | y^k_p ∈∂φ_p(f^k_p(x^k_p)), v^k_p ∈[∂ g^k_p(x^k_p) - ∂ h^k_p(x^k_p)] }. The inclusion (1) is due to approximation of subgradients under epi-convergence <cit.>, and the fact that the proximal subgradients form a convex subset of the regular subgradients <cit.>; (2) follows from the locally Lipschitz continuity of f^k_p <cit.>, condition (<ref>) and the nonsmooth Lagrange multiplier rule <cit.>; (3) and (4) use the calculus rules of Clarke subgradients. For each p, any sequence {x^k_p}∈ S_p and any element w^∞_p ∈_k → +∞ { y^k_p v^k_p | y^k_p ∈∂φ_p(f^k_p(x^k_p)), v^k_p ∈[∂ g^k_p(x^k_p) - ∂ h^k_p(x^k_p)] }, there is a subsequence {w^k_p = y^k_p v^k_p}_k ∈ N→ w^∞_p for some N ∈ℕ^♯_∞. Next, we show the existence of y_p ∈{∂φ_p(t_p) | t_p ∈ T_p(x̅)} for each p such that w^∞_p ∈{ y_p ∂_Af_p(x̅) } ∪ [ ±∂^∞_A f_p(x̅)\{0} ]. 
By Assumption 1(b), the subsequence {f^k_p(x^k_p)}_k ∈ N is bounded. Taking a subsequence if necessary, we can suppose that f^k_p(x^k_p) converges to some z̅_p ∈ T_p(x̅) as k(∈ N) → +∞. If {y^k_p}_k ∈ N is unbounded, then {v^k_p}_k ∈ N has a subsequence converging to 0 and, thus, 0 ∈∂_A f_p(x̅). Additionally, {y^k_p /|y^k_p|} converges to some y_p ≠ 0 with y_p ∈_k(∈ N) → +∞^∞ ∂φ_p (f^k_p(x^k_p)) (5)=_k(∈ N) → +∞^∞ ∂φ_p (f^k_p(x^k_p)) (6)⊂∂^∞φ_p(z̅_p) (7)=𝒩_φ_p(z̅_p). The equation (5) follows from <cit.>. From {x^k_p}∈ S_p and x̅∈ F_p, we must have f^k_p(x^k_p) ∈φ_p for sufficiently large k ∈ N. Since φ_p is lsc, it holds that φ_p(z̅_p) ≤lim inf_k(∈ N) → +∞φ_p(f^k_p(x^k_p)) = φ_p(f_p(x̅)) and, thus, z̅_p ∈φ_p. Hence, {φ_p(f^k_p(x^k_p))}_k ∈ N→φ_p(z̅_p) by the continuity of φ_p relative to its domain. The inclusion (6) is then a consequence of <cit.>. Lastly, (7) is due to the lower semicontinuity of φ and <cit.>. Therefore, we have (0 ≠)y_p ∈{𝒩_φ_p(t_p) | t_p ∈ T_p(x̅) } and 0 ∈∂_A f_p(x̅), contradicting (<ref>). So far, we conclude that {y^k_p}_k ∈ N is a bounded sequence. By passing to a subsequence if necessary, we can assume that {y^k_p}_k ∈ N→ y^∞_p ∈∂φ_p(z̅_p). Case 1. If y^∞_p = 0, inclusion (<ref>) holds trivially for w^∞_p = 0, and for w^∞_p ≠ 0 we can find a subsequence {|y^k_p|}_k ∈ N^'↓ 0 such that {|y^k_p| v^k_p}_k ∈ N^'→ w^∞_p or -w^∞_p(≠ 0) with v^k_p ∈[∂ g^k_p(x^k_p) - ∂ h^k_p(x^k_p)] for all k ∈ N^'. Therefore, (<ref>) follows from w^∞_p ∈[ (±_k → +∞^∞ [∂ g^k_p(x^k_p) - ∂ h^k_p(x^k_p)] ) \{0} ] ⊂[±∂^∞_A f_p(x̅) \{0}]. Case 2. Otherwise, {v^k_p}_k ∈ N→w^∞_p / |y^∞_p|. We can assume that {v^k_p}_k ∈ N converges to some v^∞_p ∈Limsup_k → +∞[∂ g^k_p(x^k_p) - ∂ h^k_p(x^k_p)] ⊂∂_A f_p(x̅). Then (<ref>) is apparent from w^∞_p = y^∞_p v^∞_p. In either case, we have proved (<ref>). Combining (<ref>) with (<ref>), for some y_p ∈{∂φ_p(t_p) | t_p ∈ T_p(x̅)}, we have 0 ∈∑_p=1^P { y_p ∂_A f_p(x̅) }∪[ ±∂^∞_A f_p(x̅)\{0} ] and x̅ is an A-stationary point. Consequently, x̅ is also a weak A-stationary point since ∂φ_p(z) = ∂φ^↑_p(z) + ∂φ^↓_p(z) for z ∈ (see Lemma <ref>(b)). The technical assumptions (<ref>), (<ref>), and (<ref>) will be discussed in detail later in Proposition <ref>. By imposing a new asymptotic constraint qualification (<ref>), we derive these three conditions to guarantee that any local minimizer of problem (<ref>) is an A-stationary point. §.§ Examples of A-stationarity. We present two examples to illustrate the concept of A-stationarity and to study its relationship with other known optimality conditions. Example 3.1: bi-parametrized two-stage stochastic programs. Let us revisit the two-stage stochastic program in (<ref>), where each optimal value function f_p is given by (<ref>). Consider the constraint set of the first-stage decision X ≜{x ∈^n |ϕ_i (x) ≤ 0, i=1,⋯,M} with each ϕ_i, f_0:^n → being convex and continuously differentiable. Furthermore, let us assume that ∩_p=1^P f_p = ^n, i.e., each f_p(x) is finite for all x ∈^n. Recall that in Example 2.1, we have examined the icc property of (<ref>) associated with a convex-concave function (<ref>)—here denoted as f_p(z,x), and have shown that f_p is an ADC function under mild conditions in Proposition <ref>. Problem (<ref>) is a specific instance of the composite model (<ref>) that includes (P + M + 1) terms, where we can regard the convex functions θ and ϕ_i themselves as ADC functions. 
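To make the setup of Example 3.1 concrete, the following is a minimal numerical sketch of evaluating a single second-stage value function f_p at a fixed first-stage point, written with the modeling package cvxpy. It is an illustration under made-up problem data (all shapes and matrices below are placeholders) and is not part of the analysis above; the dual variables returned by the solver are the second-stage multipliers that enter the first-order conditions discussed next.

```python
# Hedged sketch: evaluate one second-stage value function f_p(x) of the
# bi-parametrized two-stage program at a fixed first-stage point x_bar.
# All problem data are made-up placeholders; cvxpy is used only for illustration.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, n_con = 3, 4, 5                       # dims of x, y^p and number of coupling rows
x_bar = rng.standard_normal(n)              # fixed first-stage decision
c = rng.standard_normal(m)
C = rng.standard_normal((m, n))
A = rng.standard_normal((n_con, n))
B = rng.standard_normal((n_con, m))
b = A @ x_bar + B @ rng.standard_normal(m) + 1.0   # keeps the recourse problem strictly feasible at x_bar
Q = np.eye(m)                               # symmetric positive semidefinite quadratic term

y = cp.Variable(m)                          # second-stage (recourse) decision y^p
objective = cp.Minimize((c + C @ x_bar) @ y + 0.5 * cp.quad_form(y, Q))
constraints = [A @ x_bar + B @ y <= b]
problem = cp.Problem(objective, constraints)
problem.solve()

f_p_at_x_bar = problem.value                # value of the optimal value function f_p at x_bar
y_opt = y.value                             # optimal recourse decision
mu_opt = constraints[0].dual_value          # nonnegative multipliers of the coupling constraints
```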
In the following, we show that for a point x̅ to be an A-stationary point of (<ref>), there must exist {y̅^p}^P_p=1 along with multipliers {μ̅^i}^P+M_i=1 such that (x̅, y̅^1, ⋯, y̅^P, μ̅^1, ⋯, μ̅^P+M) satisfies the Karush-Kuhn-Tucker (KKT) conditions of the following problem [ minimize_x, y^1, ⋯, y^P f_0(x) + 1/P∑_p=1^P[ (c^ p + C^p x)^⊤ y^p + 1/2 (y^p)^⊤ Q y^p ]; A^p x + B^p y^p ≤ b^ p, p=1,⋯,P, ϕ_i(x) ≤ 0, i=1,⋯,M. ] Let x̅∈ X be an A-stationary point of problem (<ref>). Suppose that each f_p is bounded from below on ^2n, and the Slater condition holds for the constraint set of each second-stage problem (<ref>) at x̅, i.e., for each p, there exists u^p ∈^m such that B^p u^p < b^p - A^p x̅. Then there exist vectors {y̅^p}^P_p=1 together with multipliers {μ̅^i}^P+M_i=1 such that (x̅, y̅^1, ⋯, y̅^P, μ̅^1, ⋯, μ̅^P+M) satisfies the KKT conditions of (<ref>), i.e., {[ 0 = ∇ f_0(x̅) + 1/P∑_p=1^P[ (C^p)^⊤y̅^p + (A^p)^⊤μ̅^p ] + ∑_i=1^Mμ̅^P+i ∇ϕ_i(x̅),; c^ p + C^p x̅ + Q y̅^p + B^⊤μ̅^p = 0, 0 ≤ b^p - A^p x̅ - B y̅^p ⊥ μ̅^p ≥ 0, p = 1,⋯,P,; 0 ≤ϕ_i(x̅) ⊥ μ̅^ P+i≥ 0, i=1,⋯,M. ]. Since x̅ is an A-stationary point of (<ref>), we know from Proposition <ref> that -0.1in 0 ∈∇ f_0(x̅) + 1/P∑_p=1^P{∂_A f_p(x̅) }∪[±∂^∞_A f_p(x̅) \{0}] + ∑_i=1^Mμ̅^ P+i ∇ϕ_i(x̅) ⊂∇ f_0(x̅) + 1/P∑_p=1^P{∂_1 f_p (x̅, x̅) - ∂_2(-f_p) (x̅, x̅) } + ∑_i=1^Mμ̅^ P+i ∇ϕ_i(x̅), where μ̅^P+i∈𝒩_(-∞,0] (ϕ_i(x̅)) for i=1,⋯,M. For each p, let Λ_p(x̅) and Y_p(x̅) be the optimal solutions and multipliers of the second-stage convex minimization problem at x = x̅. By the Slater condition at x̅ and the compactness of X, both Λ_p(x̅) and Y_p(x̅) are nonempty and bounded, and Λ_p(x̅) × Y_p(x̅) = { (y^p, μ^p) | c^p + C^p x̅ + Q y^p + B^⊤μ^p = 0, 0 ≤ b^p - A^p x̅ - B y^p ⊥ μ^p ≥ 0 }. It then follows from Danskin's Theorem <cit.> that ∂_1 f_p (x̅, x̅) = {(A^p)^⊤μ^p | μ^p ∈Λ_i(x̅) } = {(A^p)^⊤μ^p | μ^p ∈Λ_p(x̅) }, ∂_2 (-f_p) (x̅, x̅) = {-(C^p)^⊤ y^p | y^p ∈ Y_p(x̅) } = {-(C^p)^⊤ y^p | y^p ∈ Y_p(x̅) }. Combining these expressions with (<ref>), we complete the proof. Example 3.2: cardinality-constrained problems. As a continuation of Example 2.3, we consider the following optimization problem with the cardinality and nonlinear constraints: _x ∈^n θ(x) x_0 ≤ r, g_i(x) ≤ 0, i=1,⋯,P, h_j(x) = 0, j=1,⋯,M. We assume that r (≤ n) is a positive integer, and θ, g_i, h_j: ^n → are convex and continuously differentiable for simplicity. Let e_i be the vector whose i-th component is one and zero elsewhere. Define the active index sets I_0(x) ≜{i = 1,⋯,n | x_i = 0} and I_g(x) ≜{i = 1,⋯,P | g_i(x) = 0}. The M-stationarity for cardinality constrained problems, which has been motivated by the stationarity concept in mathematical programs with complementarity constraints, was first introduced in <cit.>. A feasible point x̅ for problem (<ref>) is said to be its M-stationary point if there exist multipliers (γ, λ, μ) ∈^n ×^P_+ ×^M such that ∇θ(x̅) + ∑_i ∈ I_0(x̅)γ_i e_i + ∑_i ∈ I_g(x̅)λ_i ∇ g_i(x̅) + ∑_j=1^Mμ_j ∇ h_j(x̅) = 0. Using the approximation in Example 2.3, one has x̅ is an A-stationary point of problem (<ref>) if there exist λ_i ∈𝒩_(-∞, 0] (g_i(x̅)),i=1,⋯,P and μ_j ∈𝒩_{0}(h_j(x̅)), j=1,⋯,M, and u ∈⋃_x̅_0 ≤ t ≤ n𝒩_(-∞, r](t) = {[ [0, +∞) if x̅≤ r,; ∅ if x̅ > r. ]. such that 0 ∈∇θ(x̅) + {u ∂x̅_0}∪[±∂^∞x̅_0 \{0}] + ∑_i=1^P λ_i ∇ g_i(x̅) + ∑_j=1^Mμ_j ∇ h_j(x̅) Hence, x̅ is feasible for problem (<ref>). Since ∂x̅_0 = ∂^∞x̅_0 = {v ∈^n | v_i x̅_i = 0, i=1,⋯,n}, it is easy to see that the A-stationarity and the M-stationarity for problem (<ref>) coincide. 
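Since the M-stationarity system above is linear in the multipliers once the active sets are fixed, it can be checked numerically at a candidate point by solving a small bounded least-squares problem. The sketch below does this for a toy instance whose objective, single inequality constraint, and candidate point are all made up for illustration; a near-zero residual certifies the stationarity system up to numerical tolerance.

```python
# Hedged numerical check of the M-stationarity system at a candidate point x_bar
# for a toy instance: minimize 0.5*||x - x0||^2  s.t.  ||x||_0 <= r,  g_1(x) = ||x||^2 - 1 <= 0.
# All data below are illustrative; this is not a general-purpose routine.
import numpy as np
from scipy.optimize import lsq_linear

x0 = np.array([2.0, 0.3, 0.1])
r = 1
x_bar = np.array([1.0, 0.0, 0.0])              # feasible: ||x_bar||_0 = 1 <= r and g_1(x_bar) = 0

grad_theta = x_bar - x0                        # gradient of the smooth objective
I0 = np.where(np.isclose(x_bar, 0.0))[0]       # indices with x_bar_i = 0 (gamma_i free)
Ig = [0] if np.isclose(np.sum(x_bar**2) - 1.0, 0.0) else []   # active inequalities (lambda_i >= 0)
grad_g = 2.0 * x_bar                           # gradient of g_1

# Stack the columns e_i (i in I0) and grad g_i (i in Ig); there are no equality constraints here.
M = np.column_stack([np.eye(3)[:, i] for i in I0] + [grad_g for _ in Ig])
lower = [-np.inf] * len(I0) + [0.0] * len(Ig)
upper = [np.inf] * (len(I0) + len(Ig))

res = lsq_linear(M, -grad_theta, bounds=(lower, upper))
residual = np.linalg.norm(M @ res.x + grad_theta)
print("multipliers:", res.x, "residual:", residual)   # residual ~ 0 here, so x_bar is M-stationary
```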
§ A COMPUTATIONAL ALGORITHM. We present an algorithm for solving problem (<ref>), which is provably convergent to its (weak) A-stationary point. Since each f_p is an ADC function, we leverage its approximating DC functions f_p^k and consider a sequence of perturbed problems ∑_p=1^P F^k_p(x). Working with the latter problem allows us to construct a convex upper approximation of the objective by Lemma <ref> and the DC structure of f_p^k. Specifically, at y∈^n, we take any a^k_p ∈∂ h^k_p(y) and b^k_p ∈∂ g^k_p(y) and consider F^k_p(x;y) ≜φ_p^↑( g^k_p(x) - h^k_p(y) - (a^k_p)^⊤ (x - y)) + φ_p^↓(g^k_p(y) + (b^k_p)^⊤ (x - y) - h^k_p(x)). Owing to the monotonicity of φ_p^↑ and φ_p^↓, one has F^k_p(x) ≤F^k_p(x;y) for any x,y∈^n, as well as the convexity of F^k_p(∙;y). As a result, F^k_p(∙;y) is a convex majorization of F^k_p. Our overall algorithm has double loops, where the inner loop aims to find an approximate stationary point of minimize_x∈^n∑_p=1^P φ_p(f^k_p(x)), while the other loop drives k → +∞. §.§ Assumptions. Before delving into the complete algorithm, let us first discuss some important details to ensure that our proposed algorithm is well-defined and properly constructed. First, it is important to note that each function φ_p ∘ f^k_p in the perturbed problems may not necessarily be proper. When φ_p is extended-valued, epi-convergence of {φ_p∘ f^k_p} in Assumption 1(c) does not guarantee that (φ_p ∘ f^k_p) ≠∅ for all k ≥ 0, as epi-convergence only constraints the asymptotic behavior of the approximating functions and is insufficient for the property to hold for all k ≥ 0. To address this issue, we design a strategy to select an initial point x^0 ∈∩_p=1^P (φ_p ∘ f^0_p), ensuring that all subproblems have solutions with finite objective values. This is accomplished by pre-estimating the accumulated errors between each f_p and its approximating sequence {f^k_p}. Specifically, let X^k ≜∩_p=1^P F^k_p and the supreme of any nonnegative function over X^k to be 0 if X^k = ∅, and define α^i, ↑_p ≜∑_k=i^∞sup_x ∈ X^k[f^k+1_p(x) - f^k_p(x)]_+, α^i, ↓_p ≜∑_k=i^∞sup_x ∈ X^k[f^k_p(x) - f^k+1_p(x)]_+. We will assume (see Assumption 2) that max_1 ≤ p ≤ P{α^0,↑_p, α^0,↓_p} < +∞, which necessitates that the inner approximating sequence {f^k_p} has a bounded “positive/negative variation". It is also evident from this assumption and the construction of {(α^i,↑_p, α^i,↓_p) }_i ≥ 0 that {(α^i,↑_p, α^i,↓_p)}_i ≥ 0→ 0 for each p. The second concern pertains to determining the termination rule of the inner loop when solving each perturbed subproblem. Intuitively, the error tolerance should be associated with the tightness of the linear expansions of g^k_p and h^k_p, as we construct the convex majorization (<ref>) of φ_p ∘ f^k_p by respectively linearizing these two functions inside φ_p^↑ and φ_p^↓. In Assumption 3 below, we impose conditions on the Lipschitz continuity of the subgradients of either g_p^k or h_p^k. A straightforward sufficient condition for this assumption is that, for each p and k, either g^k_p or h^k_p is ℓ_k-smooth, i.e., ∇ g^k_p(x) - ∇ g^k_p(x^')≤ℓ_k x - x^' or ∇ h^k_p(x) - ∇ h^k_p(x^')≤ℓ_k x - x^' for any x, x^'∈^n. In fact, this assumption holds as long as g_p^k and h_p^k do not share common nonsmooth points. In addition, we need two more technical assumptions to ensure the boundedness of the generated sequences. Our assumptions are summarized below. 
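Before turning to the formal statements of the assumptions, the following toy sketch illustrates the two ingredients just described: the convex majorization (<ref>), obtained by linearizing h^k_p inside φ_p^↑ and g^k_p inside φ_p^↓, and the double-loop structure in which each inner step minimizes the proximal convex surrogate. Here φ^↑(t) = max(t,0) and φ^↓(t) = max(-t,0), so that the outer function is the absolute value, and f = g - h with g(x) = x^2 and h(x) = |x|. The proximal parameter, the stopping tolerances, and the fact that the pair (g^k, h^k) is kept fixed across outer iterations are simplifying assumptions of the illustration, not choices made in this paper.

```python
# Hedged 1-D illustration of the convex majorization and the double-loop scheme.
# phi_up(t) = max(t,0), phi_down(t) = max(-t,0), g(x) = x**2, h(x) = |x|,
# so the composite objective is F(x) = |x**2 - |x||.  Parameters are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

g = lambda x: x**2
h = lambda x: abs(x)
grad_g = lambda x: 2.0 * x            # an element b of the subdifferential of g
grad_h = lambda x: float(np.sign(x))  # an element a of the subdifferential of h
phi_up = lambda t: max(t, 0.0)
phi_down = lambda t: max(-t, 0.0)
F = lambda x: phi_up(g(x) - h(x)) + phi_down(g(x) - h(x))

def majorant(x, y):
    """Convex-in-x upper model: linearize h inside phi_up and g inside phi_down at y."""
    a, b = grad_h(y), grad_g(y)
    return (phi_up(g(x) - h(y) - a * (x - y))
            + phi_down(g(y) + b * (x - y) - h(x)))

# Sanity check of the majorization property: F(x) <= majorant(x, y), with equality at x = y.
y = 0.8
grid = np.linspace(-2.0, 2.0, 401)
assert all(majorant(x, y) >= F(x) - 1e-12 for x in grid)
assert abs(majorant(y, y) - F(y)) < 1e-12

# Double-loop skeleton: each inner step minimizes the proximal convex surrogate.
lam, x_cur = 1.0, 1.5
for k in range(5):                    # outer loop (here g^k, h^k are kept fixed for simplicity)
    for i in range(20):               # inner loop on the k-th surrogate problem
        step = minimize_scalar(lambda x: majorant(x, x_cur) + 0.5 * lam * (x - x_cur) ** 2,
                               bounds=(-3.0, 3.0), method="bounded")
        if abs(step.x - x_cur) < 1e-8:
            break
        x_cur = step.x
print("approximate stationary point:", x_cur, "objective value:", F(x_cur))
```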
-0.1in 0.98 Assumption 2 (strict feasibility) One has max_1 ≤ p ≤ P{α^0,↑_p, α^0,↓_p } < +∞, and there exists x^0 such that [ f^0_p(x^0) - α^0,↓_p , f^0_p(x^0) + α^0,↑_p ] ∈φ_p for each p. 0.05in Assumption 3 (smoothness of either g_p^k or h_p^k) For each k≥ 0, there exists ℓ_k>0 such that min{ (∂ g^k_p(x), ∂ g^k_p(x^')), (∂ h^k_p(x), ∂ h^k_p(x^')) }≤ℓ_k x^' - x, ∀ x, x^'∈^n, p = 1, ⋯, P. -0.1in Assumption 4 (level-boundedness) For each k ≥ 0, the function ∑_p=1^P F^k_p is level-bounded, i.e., for any r ∈, the level set { x ∈^n | ∑_p=1^P φ_p(f^k_p (x)) ≤ r } is bounded. Assumption 5 (an asymptotic constraint qualification) For any x̅, if there exists {y_p}_p=1^P satisfying 0 = ∑_p=1^P y_p v_p where, for each p, (with the definition of T_p(x̅) in (<ref>)), -0.1in (y_p, v_p) ∈( {𝒩_φ_p(t_p) | t_p ∈ T_p(x̅) }×∂_A f_p(x̅)) ∪(×[ ∂^∞_A f_p(x̅) \{0} ]), then we must have y_1 = ⋯ = y_P = 0. Let us discuss Assumption 5 in more detail. Firstly, it holds trivially if each φ_p is real-valued and ∂^∞_A f_p(x̅) = {0}. For Example 3.1, the assumption translates into [∑_i=1^Mλ_i ∇ϕ_i(x̅) = 0, λ_i ∈𝒩_(-∞,0] (ϕ_i(x̅)), i=1,⋯,M] ⟹ λ_1=⋯=λ_M=0. This is equivalent to the Mangasarian-Fromovitz constraint qualification (MFCQ) for problem (<ref>) by <cit.>; see also <cit.> (pages 197-198). For Example 3.2, the assumption reduces to .[ u v + ∑_i=1^Pλ_i ∇ g_i(x̅) + ∑_j=1^Mμ_j ∇ h_j(x̅) = 0,; (u, v) ∈(_+ ×∂x̅_0) ∪(× [ ∂^∞x̅_0 \{0} ]),; λ_i ∈𝒩_(-∞,0] (g_i(x̅)), i=1,⋯,P ]}⟹ u = λ_1 =⋯ = λ_P = μ_1 = ⋯ = μ_M = 0, which implies that there are no nonzero multipliers (γ, λ, μ) ∈^n ×^P_+ ×^M such that ∑_i ∈ I_0(x̅)γ_i e_i + ∑_i ∈ I_g(x̅)λ_i ∇ g_i(x̅) + ∑_j=1^Mμ_j ∇ h_j(x̅) = 0. This condition corresponds to the so-called cardinality constraints MFCQ proposed in <cit.>. Furthermore, if each f_p is c-ADC associated with {f^k_p = g^k_p - h^k_p} such that ∂_A f_p(x̅) = ∂_C f_p(x̅), and ∂^∞_A f_p(x̅) = {0}, Assumption 5 states that [ 0 ∈∑_p=1^P y_p ∂_C f_p(x̅), y_p ∈𝒩_φ_p(f_p(x̅)), p=1, ⋯, P ] ⟹ y_1 = ⋯ = y_P = 0. This condition aligns with the constraint qualification for the composite optimization problem in <cit.>, and is stronger than the condition in the nonsmooth Lagrange multiplier rule <cit.>. Finally, Assumption 5 implies the constraint qualifications in Theorem <ref>, which is formally presented below. Recall the definitions of I_1 and I_2 in (<ref>). Suppose that Assumptions 1 and 5 hold and f^k_p f_p for each p. If supφ_p=+∞ for each p∈ I_1, and f_p is locally Lipschitz continuous for each p ∈ I_2, then conditions (<ref>), (<ref>), and (<ref>) hold. To derive the constraint qualification (<ref>), we first apply the chain rules in Section 3.1 to characterize ∂^∞(φ_p ∘ f_p). Let x̅ be any point of ∩_p=1^P F_p. Case 1. For p ∈ I_2, it is clear that ∂(y f_p)(x̅) ⊂ y ∂_C f_p(x̅) ⊂ y ·∂_A f_p(x̅) for any y ∈ and 𝒩_φ_p (f_p(x̅)) ⊂{𝒩_φ_p(t_p) | t_p ∈ T_p(x̅)} due to f^k_p f_p. Together with Assumption 5, we deduce that the only scalar y ∈𝒩_φ_p(f_p(x̅)) with 0 ∈∂(y f_p)(x̅) is y = 0. By <cit.>, we have ∂^∞(φ_p ∘ f_p)(x̅) ⊂{ y ·∂_A f_p(x̅) | y ∈𝒩_φ_p(t_p), t_p ∈ T_p(x̅) }. Case 2. For p ∈ I_1, recall that φ_p=φ^↑_p. We claim that [0 ∈ y ·_x →x̅∂ f_p(x), y ∈_x →_F_px̅ 𝒩_φ_p (f_p(x))] ⟹ y = 0. It suffices to consider the case where φ^↑_p = (-∞,r_p) or (-∞,r_p] for some r_p ∈, because the statement holds trivially when φ^↑_p is real-valued. For any element y ∈Limsup_x →_F_p x̅ 𝒩_φ_p (f_p(x)), there exist {x^k}→x̅ and {y_k}→ y with y_k ∈𝒩_φ_p (f_p(x^k)) and {F_p(x^k)}→ F_p(x̅). 
Since x̅∈ F_p, we must have x^k ∈ F_p for k sufficiently large, i.e., f_p(x^k) ∈φ^↑_p, and {f_p(x^k)} is bounded from above. It follows immediately from the lower semicontinuity of f_p that {f_p(x^k)} is bounded. Assume that this sequence converges to some z̅_p. We have z̅_p ∈φ_p since F_p(x̅) = lim inf_k → +∞φ_p(f_p(x^k)) ≥φ_p(z̅_p). Therefore y = lim_k → +∞ y_k ∈𝒩_φ_p(z̅_p). By epi-convergence f^k_p f_p, each f_p(x^k) can be expressed as the limit of a sequence {f^i_p(x^k,i)}_i ≥ 0 with {x^k,i}_i ≥ 0→ x^k for any fixed k ≥ 0. Using the diagonal extraction procedure, one can extract a subsequence {f^i_k_p(x^k,i_k)}_k ≥ 0→z̅_p with {x^k,i_k}_k ≥ 0→x̅. Hence, z̅_p ∈ T_p(x̅) and Limsup_x →_F_p x̅ 𝒩_φ_p (f_p(x)) ⊂{𝒩_φ_p(t_p) | t_p ∈ T_p(x̅)}. Using the subgradient relationship in Proposition <ref> and the outer semicontinuity of ∂_A f_p and ∂^∞_A f_p in Proposition <ref>(a), we have _x →x̅ ∂ f_p(x) ⊂_x →x̅ ∂_A f_p(x) = ∂_A f_p(x̅) _x →x̅^∞ ∂ f_p(x) ⊂∂^∞_A f_p(x̅). By these inclusions and Assumption 5, we immediately get (<ref>). Proposition <ref> further implies that ∂^∞ (φ_p ∘ f_p)(x̅) ⊂{ y ∂_A f_p(x) | y ∈𝒩_φ_p (t_p), t_p ∈ T_p(x̅) }∪[ ∂^∞_A f_p(x̅) \{0} ]. Combining (<ref>), (<ref>) with Assumption 5, we derive (<ref>). To prove (<ref>), for any fixed p = 1,⋯,P, let y_p^' = 0 for any p^'∈{1,⋯,P}\{p} in Assumption 5. Then the only scalar y_p ∈{𝒩_φ_p (t_p) | t_p ∈ T_p(x̅) } with 0 ∈ y_p ∂_A f_p(x̅) is y_p = 0. The proof of (<ref>) is completed. Now suppose for contradiction that (<ref>) does not hold. Thus, there exist p_1 ∈{1,⋯,P}, an index set N ∈ℕ_∞^♯ and {x^k}∈ S_p_1(x̅) such that 0 ∈∂_C f^k_p_1(x^k) and 𝒩_φ_p_1(f^k_p_1(x^k)) ≠{0} for all k ∈ N. Take an arbitrary nonzero scalar y^k ∈𝒩_φ_p_1(f^k_p_1(x^k)) for all k ∈ N. Let y be any accumulation point of the unit scalars {y^k/|y^k|}_k ∈ N. Then we have 0 ∈∂_A f_p_1 (x̅) by Proposition <ref>(a) and (0 ≠)y∈{𝒩_φ_p_1(t_p_1) | t_p_1∈ T_p_1(x̅)}, contradicting Assumption 5. The proof is completed. §.§ The algorithmic framework and convergence analysis. We now formalize the algorithm for solving problem (<ref>). At the k-th iteration of the outer loop, we modify the convex majorant from F_p^k in (<ref>) by incorporating (α_p^k, ↑,α_p^k, ↓) and construct the following approximation: F^k_p(x;y) ≜φ_p^↑(g^k_p(x) - h^k_p(y) - (a^k_p)^⊤ (x - y) + α^k,↑_p )_≜ f^k,upper_p(x; y) + φ_p^↓( g^k_p(y) + (b^k_p)^⊤ (x - y) - h^k_p(x) - α^k,↓_p)_≜ f^k,lower_p(x; y), where a^k_p ∈∂ h^k_p(y) and b^k_p ∈∂ g^k_p(y). Similar to the properties of F_p^k(∙;y), one still has F^k_p(x) ≤F^k_p(x;x) ≤F^k_p(x;y) for any x,y∈^n, along with the convexity of F^k_p(∙;y). In contrast to the prox-linear algorithm, which is designed to minimize amenable functions and adopts complete linearization of the inner maps, our proposed algorithm retains more curvature information inherent in these maps; as illustrated in Figure <ref>. The proposed algorithm for solving the composite optimization in (<ref>) is given below. -0.1in -0.1in In the following, we show that the prox-ADC method is well-defined. Namely, for each k, the conditions (<ref>) can be achieved in finite steps. Suppose that Assumptions 1-4 hold. For any nonnegative integer k, let {x^k,i}_i ≥ 0 be the sequence generated by the inner loop of the prox-ADC method. Then the following statements hold. (a) ( ∑_p=1^P F^k_p(∙; x^k, i) ) ≠∅ and problem (<ref>) has a unique solution for any k, i ≥ 0. 
(b) The stopping rule of the inner loop is achievable in finite steps, i.e., the smallest integer i satisfying condition (<ref>), denoted by i_k, is finite for any k ≥ 0. (c) One has H^k+1(x^k+2) ≤ H^k(x^k+1) - λ/2∑_j = 0^i_k+1x^k+1, j+1 - x^k+1, j^2, ∀ k ≥ 0. We prove (a) and (b) by induction. For k=0, we have x^0 ∈( ∑_p=1^P F^0_p(∙; x^0) ) ≠∅ by Assumption 2. Consequently, problem (<ref>) has a unique solution for k=0 and any i ≥ 0, and ( ∑_p=1^P F^0_p(∙; x^0, i) ) ≠∅ for all i ≥ 1. To proceed, define H^k(x) ≜∑_p=1^P F^k_p(x;x) that is level-bounded due to Assumption 4 and F^k_p(x) ≤F^k_p(x;x). Using the update of x^0,i+1, we have H^0 (x^0,i+1) ≤ ∑_p=1^P F^k_p(x^0, i+1; x^0, i) ≤ H^0(x^0,i) - λ/2x^0,i+1 - x^0,i^2, ∀ i ≥ 0. Observe that H^0 is continuous relative to its domain since φ^↑_p and φ^↓_p are continuous relative to their domains <cit.> and f^0_p is continuous. Using the continuity and level-boundedness of H^0, we immediately conclude that inf_x H^0(x) > -∞. Thus, the sequence {H^0(x^0,i)}_i ≥ 0 converges, and ∑_i=0^∞x^0,i+1 - x^0,i^2 < +∞. The latter further yields x^0,i+1 - x^0,i→ 0 as i → +∞, which implies that the last condition in (<ref>) is achievable in finite iterations. Next, we show that the first two conditions in (<ref>) can also be achieved in finite number of steps. By the level-boundedness of H^0, there exists a compact set S^0 containing {x^0,i}_i ≥ 0. For each p, we have [ 0 ≤ f^0, upper_p (x^0, i+1; x^0, i) - f^0_p(x^0,i+1) - α^0,↑_p; = h^0_p(x^0, i+1) - h^0_p(x^0, i) - (a^0,i_p)^⊤ (x^0, i+1 - x^0, i) ⟶ 0 as i → +∞, ] where the terms in the second line converges to 0 because h^0_p is uniformly continuous on S^0 and {a^0,i_p}_i ≥ 0⊆{∂ h^0_p(x) | x ∈ S^0} is bounded by <cit.>. Therefore, for a fixed ϵ_0 > 0, there exists i_0 < +∞ such that f^0,upper_p (x^0,i_0+1; x^0,i_0) ≤ f^0_p(x^0,i_0+1) + α^0,↑_p + ϵ_0 holds for each p. The remaining condition in (<ref>) can be proved with similar arguments. Thus, (a) and (b) hold for k=0. Now suppose that (a)-(b) are true for k = j  (≥ 0) and, hence i_j is finite. To show that ( ∑_p=1^P F^j+1_p(∙;x^j+1,i) ) ≠∅ and problem (<ref>) has a unique solution for k=j+1 and any i ≥ 0, it suffices to prove that ∑_p=1^P F^j+1_p(x^j+1,0; x^j+1,0) =H^j+1(x^j+1,0) is finite, i.e., (f^j+1_p(x^j+1,0) + α^j+1,↑_p ) ∈φ^↑_p (f^j+1_p(x^j+1,0) - α^j+1,↓_p) ∈φ^↓_p, ∀ p=1, ⋯, P. From x^j+1, 0 = x^j, i_j+1∈ X^j, we have (f^j_p(x^j+1, 0) - α^j,↓_p ) ∈φ^↓_p for each p. Notice that for each p, [ f^j+1_p(x^j+1, 0) - α^j+1,↓_p = f^j_p(x^j+1, 0) + [ f^j+1_p(x^j+1,0) - f^j_p(x^j+1, 0) ] - α^j+1,↓_p; ≥ f^j_p(x^j+1, 0) - sup_x ∈ X^j[ f^j_p(x) - f^j+1_p(x) ]_+ - α^j+1,↓_p = f^j_p(x^j+1, 0) - α^j,↓_p. ] Since each φ^↓_p is nonincreasing, we deduce that (f^j+1_p(x^j+1, 0) - α^j+1,↓_p) ∈φ^↓_p for each p. Similarly, one can prove (f^j+1_p(x^j+1, 0) + α^j+1,↑_p ) ∈φ^↑_p for each p. Therefore, statement (a) holds for k=j+1. Building upon this, we can now clearly observe the validity of (b) for k=j+1, as we have shown similar results earlier in the case of k=0. By induction, we complete the proof of (a)-(b). 
To verify (c), observe that [ H^k+1(x^k+1) - H^k(x^k+1) = ∑_p=1^P [ φ^↑_p(f^k+1_p(x^k+1) + α^k+1,↑_p) - φ^↑_p(f^k_p(x^k+1) + α^k,↑_p)]; + ∑_p=1^P [ φ^↓_p(f^k+1_p(x^k+1) - α^k+1,↓_p) - φ^↓_p(f^k_p(x^k+1) - α^k,↓_p)], ∀ k ≥ 0, ] and for each p and any k ≥ 0, { f^k+1_p(x^k+1) + α^k+1,↑_p - f^k_p(x^k+1) - α^k,↑_p = f^k+1_p(x^k+1) - f^k_p(x^k+1) - sup_x ∈ X^k[f^k+1(x) - f^k(x)]_+ ≤ 0, f^k+1_p(x^k+1) - α^k+1,↓_p - f^k_p(x^k+1) + α^k,↓_p = f^k+1_p(x^k+1) - f^k_p(x^k+1) + sup_x ∈ X^k[f^k(x) - f^k+1(x)]_+ ≥ 0. . Therefore, H^k+1(x^k+1) ≤ H^k(x^k+1) for any k ≥ 0. Combining this inequality with (<ref>), we obtain the descent inequality (<ref>) and complete the proof. As we will see in the following lemma, the asymptotic constraint qualification (Assumption 5) implies the existence of the multipliers for the surrogate subproblem (<ref>). Suppose that Assumptions 1-5 hold. Let { x^k } be the sequence generated by the prox-ADC method and {x^k}_k ∈ N be a subsequence converging to some x̅. Then, for all (k+1) ∈ N sufficiently large and all p=1,⋯,P, there exist y^k_p,1∈∂φ^↑(f_p^k,upper(x^k, i_k + 1; x^k, i_k)) and y^k_p,2∈∂φ^↓(f_p^k,lower(x^k, i_k + 1; x^k, i_k)) satisfying 0 ∈∑_p=1^P [y^k_p,1 ∂ f^k,upper_p(x^k, i_k + 1;x^k, i_k) + y^k_p,2 ∂ f^k,lower_p(x^k, i_k + 1; x^k, i_k) ] + λ(x^k, i_k + 1 - x^k, i_k). Observe that the sequences {x^k,i_k}_(k + 1) ∈ N and {x^k,i_k+1}_(k+1) ∈ N converge to x̅ by the stopping conditions (<ref>). The conclusion of this lemma can be derived from the nonsmooth Lagrange multiplier rule <cit.> if we can show that, for all p and any (k+1) ∈ N sufficiently large, y̅^k_p,1 = y̅^k_p,2 = 0 is the unique pair of (y^k_p,1, y^k_p,2) that satisfies the following conditions {[ 0 ∈∑_p=1^P [ y^k_p,1 ∂ f_p^k,upper(x^k, i_k + 1;x^k, i_k) + y^k_p,2 ∂ f_p^k,lower(x^k, i_k + 1; x^k, i_k) ] + λ(x^k, i_k + 1 - x^k, i_k),; y^k_p,1∈𝒩_φ_p^↑ (f_p^k,upper(x^k, i_k + 1; x^k, i_k)),; y^k_p,2∈𝒩_φ_p^↓ (f_p^k,lower(x^k, i_k + 1; x^k, i_k)). ]. Suppose that the above claim does not hold. Without loss of generality, take {(y^k_p,1, y^k_p,2)}_(k+1) ∈ N for each p satisfying (<ref>) and ∑_p=1^P (|y^k_p,1| + |y^k_p,2|) = 1. For each p and (k+1)∈ N, define A^k_p ≜{y^k_p,1 v^k_p,1 + y^k_p,2 v^k_p,2 | [[ v^k_p,1∈∂ g^k_p(x^k, i_k) - ∂ h^k_p(x^k, i_k); v^k_p,2∈∂ g^k_p(x^k, i_k + 1) - ∂ h^k_p(x^k, i_k + 1) ]] or [[ v^k_p,1∈∂ g^k_p(x^k, i_k + 1) - ∂ h^k_p(x^k, i_k + 1); v^k_p,2∈∂ g^k_p(x^k, i_k) - ∂ h^k_p(x^k, i_k) ]]}. Then, for all (k+1) ∈ N, we have [ (0, ∑_p=1^P A^k_p) (1)≤( 0, ∑_p=1^P (y^k_p,1[∂ g^k_p(x^k,i_k+1) - ∂ h^k_p(x^i,i_k)] + y^k_p,2[∂ g^k_p(x^k,i_k) - ∂ h^k_p(x^i,i_k+1)] ) ); + ∑_p=1^P (y^k_p,1[ ∂ g^k_p(x^k,i_k+1) - ∂ h^k_p(x^i,i_k)] + y^k_p,2[ ∂ g^k_p(x^k,i_k) - ∂ h^k_p(x^i,i_k+1)], A^k_p ); (2)≤ λx^k, i_k + 1 - x^k, i_k + ∑_p=1^P ( |y^k_p,1| + |y^k_p,2| ) min{(∂ g^k_p(x^k,i_k + 1), ∂ g^k_p(x^k,i_k)), (∂ h^k_p(x^k,i_k + 1), ∂ h^k_p(x^k,i_k)) }; (3)≤ λx^k, i_k + 1 - x^k, i_k + ∑_p=1^P ( |y^k_p,1| + |y^k_p,2| ) ℓ_k x^k,i_k + 1 - x^k,i_k; (4)≤ λ δ_k/ℓ_k + ∑_p=1^P ( |y^k_p,1| + |y^k_p,2| ) δ_k = λ δ_k/ℓ_k + δ_k, ] where (1) uses the inequalities (A,C) ≤(A,B) + (B, C) and (A+B, A^' + B^') ≤(A,A^') + (B, B^'); (2) is due to (<ref>) and the definition of A^k_p; (3) is by Assumption 3; and (4) is implied by condition (<ref>). 
Equivalently, for all (k+1) ∈ N, it holds that {[ ∑_p=1^P ( y^k_p,1 v^k_p,1 + y^k_p,2 v^k_p,2 ) ≤λ δ_k/ℓ_k + δ_k,; y^k_p,1∈𝒩_φ^↑_p( f^k,upper_p(x^k, i_k + 1; x^i, i_k) ),; y^k_p,2∈𝒩_φ^↓_p( f^k,lower_p(x^k, i_k + 1; x^k, i_k) ),; [ [ v^k_p,1 ∈∂ g^k_p(x^k, i_k) - ∂ h^k_p(x^k, i_k); v^k_p,2 ∈∂ g^k_p(x^k, i_k + 1) - ∂ h^k_p(x^k, i_k + 1) ]] or [ [ v^k_p,1 ∈∂ g^k_p(x^k, i_k + 1) - ∂ h^k_p(x^k, i_k + 1); v^k_p,2 ∈∂ g^k_p(x^k, i_k) - ∂ h^k_p(x^k, i_k) ]]. ]. Note that the sequence { f^k_p(x^k,i_k + 1) }_(k+1) ∈ N is bounded for each p due to Assumption 1(b). It then follows from Lemma <ref>(a) that for each p and all (k+1) ∈ N sufficiently large, one has y^k_p,1∈𝒩_φ^↓_p( f^k,upper_p(x^k,i_k+1; x^k,i_k) ) = {0} or y^k_p,2∈𝒩_φ^↑_p( f^k,lower_p(x^k,i_k+1; x^k,i_k) ) = {0}. Consequently, the conditions in (<ref>) can be written as {[ ∑_p=1^P y^k_p v^k_p≤λ δ_k/ℓ_k + δ_k,; y^k_p ∈ {𝒩_φ^↑_p(f^k,upper_p(x^k, i_k + 1; x^i, i_k)) ∪ 𝒩_φ^↓_p(f^k,lower_p(x^k, i_k + 1; x^k, i_k))},; v^k_p ∈ {[ ∂ g^k_p(x^k, i_k + 1) - ∂ h^k_p(x^k, i_k + 1) ] ∪ [ ∂ g^k_p(x^k, i_k) - ∂ h^k_p(x^k, i_k) ]}, ]. where {y^k_p}^P_p=1 satisfies ∑_p=1^P |y^k_p| = 1 for all (k+1) ∈ N. Taking a subsequence if necessary and using conditions (<ref>), we can assume that f^k,upper_p(x^k,i_k+1; x^k,i_k), f^k,lower_p(x^k,i_k+1; x^k,i_k) and f^k_p (x^k,i_k+1) converge to the same limit point z̅_p ∈ T_p(x̅) as (k+1)(∈ N) → +∞ for each p. By using the lower semicontinuity of each φ_p, ∑_p=1^P F^k_p(x^k+1) ≤ H^k(x^k+1) and the descent inequality (<ref>), we have ∑_p=1^P φ_p(z̅_p) ≤∑_p=1^P lim inf_(k+1)(∈ N) → +∞φ_p(f^k_p(x^k+1)) ≤lim inf_(k+1)(∈ N) → +∞∑_p=1^P φ_p(f^k_p(x^k+1)) ≤ H^0(x^1) < +∞, and therefore z̅_p ∈φ_p for each p. Recall from Theorem <ref>(a) that f^k,upper_p(x^k, i_k + 1; x^i, i_k) ∈φ^↑_p and f^k,lower_p(x^k, i_k + 1; x^i, i_k) ∈φ^↑_p. Using the outer semicontinuity of the normal cone, we assume that {y^k_p}_(k+1) ∈ N converges to some y_p ∈{𝒩_φ^↑_p(z̅_p) ∪𝒩_φ^↓_p(z̅_p)}. By Lemma <ref>, we have 𝒩_φ^↑_p(z̅_p) ∪𝒩_φ^↓_p(z̅_p) = 𝒩_φ^↑_p(z̅_p) + 𝒩_φ^↓_p(z̅_p) = 𝒩_φ_p(z̅_p), ∀ p=1,⋯,P. Therefore, y_p ∈𝒩_φ_p(z̅_p) ⊂{𝒩_φ_p(t_p) | t_p ∈ T_p(x̅)} for each p. Obviously, ∑_p=1^P |y_p| = 1, and {y_p}^P_p=1 has at least one nonzero element. We consider two cases. Case 1. If {v^k_p}_(k+1) ∈ N is bounded for each p, there exist vectors {v_p}^P_p=1 such that {v^k_p}_(k+1) ∈ N→ v_p for each p and 0 = ∑_p=1^P y_p v_p ∈∑_p=1^P y_p ∂_A f_p(x̅), contradicting Assumption 5 since y_1, ⋯, y_P are not all zeros. Case 2. Otherwise if there exists some p such that {v^k_p}_(k+1) ∈ N is unbounded, define the index sets I_ub≜{ 1 ≤ p ≤ P | {v^k_p}_(k+1) ∈ N is unbounded } (≠∅) I_b≜{1,⋯,P}\ I_ub. Notice that {∑_p ∈ I_b y^k_p v^k_p}_(k+1) ∈ N is bounded. Without loss of generality, we can assume that this sequence converges to some w and, thus, {∑_p ∈ I_ub y^k_p v^k_p}_(k+1) ∈ N→ -w. Step 1: Next we prove by contradiction that, for each p ∈ I_ub, the sequence {y^k_p v^k_p}_(k+1) ∈ N is bounded. Suppose that I_∗ ≜ { p ∈ I_ub | {y^k_p v^k_p}_(k+1) ∈ N is unbounded } ≠ ∅ and {∑_p ∈ I_∗y^k_p v^k_p}_(k+1) ∈ N→ +∞. Consider w^k_p ≜ y^k_p v^k_p/∑_p ∈ I_∗y^k_p v^k_p for each p ∈ I_ub. Then {∑_p ∈ I_ubw^k_p}_(k+1) ∈ N converges to 0. Observe that {w^k_p}_(k+1) ∈ N is bounded for each p ∈ I_∗ and ∑_p ∈ I_∗w^k_p = 1 for all (k+1) ∈ N. Hence, by taking a subsequence if necessary, we can assume that there exist p_1 ∈ I_∗ and w_p_1≠ 0 such that {w^k_p_1}_(k+1) ∈ N→w_p_1. 
It then follows from the construction of w^k_p that {w^k_p}_(k+1) ∈ N has a subsequence converging to some element of ±∂^∞_A f_p(x̅) for each p ∈ I_∗ and, in particular, w_p_1∈[ ±∂^∞_A f_p_1 (x̅) \{0}]. From {∑_p ∈ I_ubw^k_p}_(k+1) ∈ N→ 0, we obtain 0 ∈[ ±∂^∞_A f_p_1(x̅) \{0} ] + ∑_p ∈ I_ub\{p_1}[ ±∂^∞_A f_p(x̅) ], contradicting Assumption 5 since the coefficient of the term [ ±∂^∞_A f_p_1(x̅) \{0} ] is nonzero. So far, we have shown that the sequence {y^k_p v^k_p}_(k+1) ∈ N is bounded for each p ∈ I_ub. Step 2: Now suppose that {y^k_p v^k_p}_(k+1) ∈ N→ w_p for each p ∈ I_ub with ∑_p ∈ I_ub w_p = -w. Thus {y^k_p}_(k+1) ∈ N→ 0 and w_p ∈[ ±∂^∞_A f_p(x̅) ] for each p ∈ I_ub. Since ∑_p=1^P |y_p|=1, we can find an index p_2 ∈ I_b such that y_p_2≠ 0. Then {∑_p=1^P y^k_p v^k_p}_(k+1) ∈ N→ 0 implies 0 ∈ y_p_2 ∂_A f_p_2(x̅) + ∑_p ∈ I_b\{p_2} y_p ∂_A f_p(x̅) + ∑_p ∈ I_ub[ ±∂^∞_A f_p(x̅) ], which leads to a contradiction to Assumption 5 and therefore completes the proof. The main convergence result of the prox-ADC method follows. We make an additional assumption that the approachable subgradient ∂_A f_p(x̅) is a bounded set for each p ∈ I_2. This assumption in particular holds for the ADC function f_p associated with {f^k_p = g^k_p-h^k_p} in Proposition <ref>(b) if f_p is locally Lipschitz continuous and bounded from below. Suppose that Assumptions 1-5 hold, and the sequence {x^k} generated by the prox-ADC method has an accumulation point x̅. Suppose in addition that ∂^∞_A f_p(x̅) = {0} for each p ∈ I_2. Then x̅ is a weak A-stationary point of (<ref>). Moreover, if either each φ_p is nondecreasing, or for each k ≥ 0 and each p, g^k_p and h^k_p are ℓ_k-smooth, i.e., there exists a sequence {ℓ_k} such that for each k ≥ 0, max{ ∇ g^k_p(x) - ∇ g^k_p(x^'), ∇ h^k_p(x) - ∇ h^k_p(x^') }≤ℓ_k x^' - x, ∀ x, x^'∈^n, p = 1, ⋯, P, then x̅ is also an A-stationary point. Let {x^k}_k ∈ N be a subsequence converging to x̅. It is easy to see that {x^k, i_k}_(k+1) ∈ N→x̅ and {x^k, i_k+1}_(k+1) ∈ N→x̅. By Lemma <ref>, for all (k+1) ∈ N sufficiently large, -0.08in 0 ∈∑_p=1^P [ y^k_p,1(∂ g^k_p(x^k,i_k + 1) - ∂ h^k_p(x^k,i_k)) + y^k_p,2(∂ g^k_p(x^k,i_k) - ∂ h^k_p(x^k,i_k+1))] + λ(x^k,i_k + 1 - x^k, i_k), where y^k_p,1∈∂φ^↑_p( f^k,upper_p(x^k, i_k+1; x^k, i_k)) and y^k_p,2∈∂φ^↓_p( f^k,lower_p(x^k, i_k+1; x^k, i_k)). Due to Assumption 3 and similar arguments in Lemma <ref>, the optimality condition (<ref>) implies {[ ∑_p=1^P (y^k_p,1 v^k_p,1 + y^k_p,2 v^k_p,2)≤λ δ_k/ℓ_k + ∑_p=1^P (|y^k_p,1| + |y^k_p,2|) ·δ_k,; [ [ v^k_p,1∈∂ g^k_p(x^k, i_k) - ∂ h^k_p(x^k, i_k); v^k_p,2∈∂ g^k_p(x^k, i_k + 1) - ∂ h^k_p(x^k, i_k + 1) ]] or [ [ v^k_p,1∈∂ g^k_p(x^k, i_k + 1) - ∂ h^k_p(x^k, i_k + 1); v^k_p,2∈∂ g^k_p(x^k, i_k) - ∂ h^k_p(x^k, i_k) ]]. ]. Step 1: To start with, we prove the boundedness of {∑_p=1^P (|y^k_p,1| + |y^k_p,2|)}_(k+1) ∈ N. If the boundedness of {∑_p=1^P (|y^k_p,1| + |y^k_p,2|)}_(k+1) ∈ N fails, then by taking a subsequence if necessary, one has {∑_p=1^P (|y^k_p,1| + |y^k_p,2|)}_(k+1) ∈ N→ +∞ and {[ y^k_p,1≜y^k_p,1/∑_p=1^P (|y^k_p,1| + |y^k_p,2|) → y_p,1∈_(k+1)(∈ N) → +∞^∞ ∂φ^↑_p (f^k,upper_p(x^k, i_k+1; x^k, i_k)),; y^k_p,2≜y^k_p,2/∑_p=1^P (|y^k_p,1| + |y^k_p,2|) → y_p,2∈_(k+1)(∈ N) → +∞^∞ ∂φ_p^↓(f^k,lower_p(x^k, i_k+1; x^k, i_k)), ]. as (k+1)(∈ N) → +∞ with ∑_p=1^P (|y_p,1| + |y_p,2|) = 1. Similarly as in the proof of Lemma <ref>, suppose that f^k,upper_p(x^k, i_k+1; x^k, i_k), f^k,lower_p(x^k, i_k+1; x^k, i_k) and f^k_p(x^k,i_k+1) converge to some z̅_p ∈ T_p(x̅) as (k+1)(∈ N) → +∞ for each p. 
We can also show that z̅_p ∈φ_p for each p. Because each φ^↑_p is continuous relative to its domain and f^k,upper_p (x^k,i_k+1; x^k,i_k) ∈φ^↑_p, we have φ^↑_p(f^k,upper_p (x^k,i_k+1; x^k,i_k)) →φ^↑_p(z̅_p) as (k+1)(∈ N) → +∞. Then, _(k+1)(∈ N) → +∞^∞ ∂φ^↑_p (f^k,upper_p(x^k, i_k+1; x^k, i_k)) ⊂∂^∞φ^↑_p(z̅_p) = 𝒩_φ^↑_p (z̅_p). Thus, y_p,1∈𝒩_φ^↑_p (z̅_p). Likewise, we also have y_p,2∈𝒩_φ^↓_p (z̅_p). Note that 𝒩_φ^↑_p(z̅_p)={0} or 𝒩_φ^↓_p(z̅_p)={0} due to Lemma <ref>(a) and, thus, either y^k_p,1 or y^k_p,2 equals to 0 for all (k+1)∈ N sufficiently large. Then (<ref>) can be expressed as {[ ∑_p=1^P y^k_p v^k_p≤λδ_k / ℓ_k/∑_p=1^P (|y^k_p,1| + |y^k_p,2|) + δ_k ⟶ 0 as (k+1) (∈ N) → +∞,; y^k_p ⟶ y_p ∈ {𝒩_φ^↑_p(z̅_p) ∪ 𝒩_φ^↓_p(z̅_p)} as (k+1) (∈ N) → +∞,; v^k_p ∈ {[ ∂ g^k_p(x^k, i_k + 1) - ∂ h^k_p(x^k, i_k + 1) ] ∪[ ∂ g^k_p(x^k, i_k) - ∂ h^k_p(x^k, i_k) ]}, ]. where {y^k_p}^P_p=1 satisfies ∑_p=1^P |y^k_p| = 1 for all (k+1) ∈ N. As in the proof of Lemma <ref>, this system also leads to a contradiction to Assumption 5. Therefore, we establish the boundedness of {∑_p=1^P (|y^k_p,1| + |y^k_p,2|)}_(k+1) ∈ N. By taking a subsequence and using the outer semicontinuity of each φ^↑_p and φ^↓_p, we can assume that {y^k_p,1}_(k+1) ∈ N and {y^k_p,2}_(k+1) ∈ N converge to some y_p,1∈∂φ^↑_p(z̅_p) and y_p,2∈∂φ^↓_p(z̅_p). Step 2: From the result of Step 1 and the notations of I_1 and I_2 in (<ref>), there exist y^k_p,1∈∂φ^↑_p( f^k,upper_p(x^k, i_k+1; x^k, i_k)) for each p and y^k_p,2∈∂φ^↓_p( f^k,lower_p(x^k, i_k+1; x^k, i_k)) for each p ∈ I_2 satisfying {[ ∑_p ∈ I_1 y^k_p,1 v^k_p,1 + ∑_p ∈ I_2(y^k_p,1 v^k_p,1 + y^k_p,2 v^k_p,2) ⟶ 0 as (k+1) (∈ N) → +∞,; [ [ v^k_p,1∈∂ g^k_p(x^k, i_k) - ∂ h^k_p(x^k, i_k); v^k_p,2∈∂ g^k_p(x^k, i_k + 1) - ∂ h^k_p(x^k, i_k + 1) ]] or [ [ v^k_p,1∈∂ g^k_p(x^k, i_k + 1) - ∂ h^k_p(x^k, i_k + 1); v^k_p,2∈∂ g^k_p(x^k, i_k) - ∂ h^k_p(x^k, i_k) ]]. ]. For any fixed p ∈ I_2, the sequences {v^k_p,1}_(k+1) ∈ N and {v^k_p,2}_(k+1)∈ N must be bounded, otherwise we could assume {v^k_p,1}_(k+1) ∈ N→ +∞ and then every accumulation point of unit vectors {v^k_p,1/v^k_p,1}_(k+1) ∈ N would be in the set ∂^∞_A f_p(x̅), contradicting our assumption that ∂^∞_A f_p(x̅) = {0} for each p ∈ I_2. Without loss of generality, suppose that {∑_p ∈ I_2(y^k_p,1 v^k_p,1 + y^k_p,2 v^k_p,2)}_(k+1) ∈ N→ w, where w ∈∑_p ∈ I_2 (y_p,1 ∂_A f_p(x̅) + y_p,2 ∂_A f_p(x̅)). Thus, {∑_p ∈ I_1 y^k_p,1 v^k_p,1}_(k+1) ∈ N→ -w. We next prove by contradiction that {y^k_p,1 v^k_p,1}_(k+1) ∈ N is bounded for each p ∈ I_1. Suppose that {∑_p ∈ I_1y^k_p,1 v^k_p,1}_(k+1) ∈ N→ +∞. Let w^k_p ≜ y^k_p,1 v^k_p,1/∑_p ∈ I_1y^k_p,1 v^k_p,1 for each p ∈ I_1. Then {∑_p ∈ I_1w^k_p}_(k+1 ∈ N)→ 0. Observe that {w^k_p}_(k+1) ∈ N is bounded for each p ∈ I_1 and ∑_p ∈ I_1w^k_p = 1 for all (k+1) ∈ N. Hence, by taking a subsequence if necessary, we can assume that there exist p_1 ∈ I_1 and w_p_1≠ 0 such that {w^k_p_1}_(k+1) ∈ N→w_p_1. Note that {y^k_p,1/∑_p ∈ I_1y^k_p,1 v^k_p,1}⊂_+ converges to zero. So {w^k_p}_(k+1) ∈ N has a subsequence converging to some element in ∂^∞_A f_p(x̅) for each p ∈ I_1. In particular, w_p_1∈∂^∞_A f_p_1 (x̅) \{0}. Since {∑_p ∈ I_1w^k_p}_(k+1) ∈ N→ 0, we obtain 0 ∈[ ∂^∞_A f_p_1(x̅) \{0} ] + ∑_p ∈ I_1\{p_1}∂^∞_A f_p(x̅), which contradicts Assumption 5 because the coefficient of the term [ ∂^∞_A f_p_1(x̅) \{0} ] is nonzero. Now suppose that {y^k_p,1 v^k_p,1}_(k+1) ∈ N→ w_p for each p ∈ I_1 with ∑_p ∈ I_1 w_p = -w. 
It remains to show that for any p ∈ I_1, there exists y_p,1∈{∂φ^↑_p(t_p) | t_p ∈ T_p(x̅)} such that w_p ∈{ y_p,1 ∂_A f_p(x̅) }∪[ ∂^∞_A f_p(x̅)\{0} ], which can be derived similarly as the proof of Theorem <ref>. Summarizing these arguments, we conclude that x̅ is a weak A-stationary point of (<ref>). Under the additional assumption of the theorem, for some y^k_p,1∈∂φ^↑_p( f^k,upper_p(x^k, i_k+1; x^k, i_k)) and y^k_p,2∈∂φ^↓_p( f^k,lower_p(x^k, i_k+1; x^k, i_k)), we have ∑_p=1^P (y^k_p,1 + y^k_p,2) [∇ g^k_p(x^k,i_k) - ∇ h^k_p(x^k,i_k)] (1^')≤ λx^k,i_k + 1 - x^k,i_k + ∑_p=1^P (|y^k_p,1| ·∇ g^k_p(x^k, i_k) - ∇ g^k_p(x^k, i_k + 1) + |y^k_p,2| ·∇ h^k_p(x^k, i_k + 1) - ∇ h^k_p(x^k, i_k)) (2^')≤ λx^k,i_k + 1 - x^k,i_k + ∑_p=1^P (|y^k_p,1| + |y^k_p,2|) ·ℓ_k x^k,i_k + 1 - x^k,i_k (3^')≤ λ δ_k/ℓ_k + ∑_p=1^P (|y^k_p,1| + |y^k_p,2|) ·δ_k, ∀ k ≥ 0, where (1^') is implied by the optimality condition (<ref>), (2^') uses the inequality (<ref>), and (3^') follows from conditions (<ref>). Observe that the above inequality can be viewed as a tighter version of (<ref>) in the sense that, for any k ≥ 0 and each p, v^k_p,1 and v^k_p,2 are elements taken from the single-valued mapping ∇ g^k_p(∙) - ∇ h^k_p(∙) evaluated at the same point x^k,i_k. By a simple modification of the proof above and using Lemma <ref>, one can show that x̅ is an A-stationary point. §.§ Reduction to a single-loop algorithm In the prox-ADC method, we iteratively minimize each perturbed subproblem ∑_p=1^P F^k_p until conditions (<ref>) are met. In what follows, we strengthen Assumption 3 so that the prox-ADC method can be simplified to a single-loop algorithm, that is, only a single step of the inner iteration needs to be executed before moving to the next outer iterate. The new condition aim to ensure that the cumulative optimality errors generated by one inner iterate do not excessively increase over the outer iterations. 0.97 Assumption 3^' (uniform smoothness) There exists a constant ℓ_f such that for any k ≥ 0, max{∇ g^k_p(x) - ∇ g^k_p(x^'), ∇ h^k_p(x) - ∇ h^k_p(x^')}≤ℓ_f x^' - x, ∀ x, x^'∈ X, p = 1, ⋯, P. Suppose that Assumptions 1, 2, 3^' and 4 hold. Let { x^k } be the sequence generated by the prox-ADC method and {x^k}_k ∈ N be a subsequence converging to some x̅. Then, there exists a sequence {(ϵ_k, δ_k)}→ 0 in the prox-ADC method such that each inner loop terminates after only one iteration, i.e., i_k = 0 for any k ≥ 0. For any positive sequence {(ε_k, δ_k)}→ 0, Theorem <ref> shows that {x^k,i}_i ≥ 0 is well-defined for any k ≥ 0 and each inner loop terminates after a finite number of iterations, i.e., i_k is finite for any k ≥ 0. Next, we will find a sequence {(ε_k, δ_k)}→ 0 such that i_k = 0. Epi-convergence F^k_p F_p for each p (Assumption 1) implies lim inf_(k+1)(∈ N) → +∞∑_p=1^P F^k_p(x^k+1) ≥∑_p=1^P lim inf_(k+1)(∈ N) → +∞ F^k_p(x^k+1) ≥∑_p=1^P F_p (x̅) > -∞, and therefore the sequence {H^k(x^k+1)}_(k+1) ∈ N is bounded from below since H^k(x) ≥∑_p=1^P F^k_p(x) for any x ∈^n and any k ≥ 0. Using the inequality (<ref>), we know that lim_k → +∞ H^k(x^k+1) exists and ∑_k=1^∞∑_j = 0^i_kx^k, j+1 - x^k, j^2 < +∞. Thus, lim_k → +∞x^k, 1 - x^k, 0 = 0, and we can therefore take a positive sequence {δ_k}→ 0 satisfying x^k, 1 - x^k, 0≤δ_k/ ℓ_f. For each p and any fixed k ≥ 0, using Assumption 3^', we have [ 0 ≤ f^k, upper_p(x^k, 1; x^k, 0) - f^k_p(x^k, 1) - α^k,↑_p = h^k_p(x^k, 1) - h^k_p(x^k, 0) - ∇ h^k_p(x^k, 0)^⊤ (x^k, 1 - x^k, 0); ≤ℓ_f x^k, 1 - x^k, 0^2/2 ⟶ 0 as k → +∞. 
] Similarly, 0 ≤ f^k_p(x^k, 1) - α^k,↓_p - f^k, lower_p(x^k, 1; x^k, 0) ≤ℓ_f x^k, 1 - x^k, 0^2 / 2 → 0 as k → +∞. Hence, there exists a positive sequence {ϵ_k}→ 0 such that the first two conditions in (<ref>) hold for i=0 in each inner loop. Summarizing these arguments, we find a desired sequence {(ε_k, δ_k)}→ 0 such that the prox-ADC method reduces to a single-loop algorithm. The main convergence theorem follows immediately. No more proof is needed. Suppose that Assumptions 1, 2, 3^', 4 and 5 hold, and the sequence {x^k} generated by the prox-ADC method has an accumulation point x̅. If ∂^∞_A f_p(x̅) = {0} for each p ∈ I_2, then x̅ is an A-stationary point of (<ref>). § CONCLUSIONS. In this paper, we have introduced a new class of composite functions that broadens the scope of the well-established class of amenable functions. Our principal objective has been to demonstrate that when the outer convex function is separable across each coordinate, and the inner function is ADC, the resulting composite function retains computational amenability. Despite the theoretical advances we have achieved, the practical implementation of this framework to address real-world applications is yet to be explored. Future work should aim to bridge this gap, translating the theoretical aspects of our findings into tangible computational solutions. § ACKNOWLEDGMENTS. The authors are partially supported by the National Science Foundation under grants CCF-2153352 and DMS-2309729. plain
2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CLEF 2023: Conference and Labs of the Evaluation Forum, September 18–21, 2023, Thessaloniki, Greece 1,2]Triet M. Thai[ email=19522397@gm.uit.edu.vn, ] [1]Faculty of Information Science and Engineering, Unviversity of Information Technology, Ho Chi Minh City, Vietnam [2]Vietnam National University, Ho Chi Minh City, Vietnam 1,2]Anh T. Vo[ email=19521226@gm.uit.edu.vn, ] 1,2]Hao K. Tieu[ email=19521480@gm.uit.edu.vn, ] 1,2]Linh N.P. Bui[ email=20521527@gm.uit.edu.vn,] 1,2]Thien T.B. Nguyen[ email=thienntb@uit.edu.vn, ] [1] [1]Corresponding author. In recent years, artificial intelligence has played an important role in medicine and disease diagnosis, with many applications to be mentioned, one of which is Medical Visual Question Answering (MedVQA). By combining computer vision and natural language processing, MedVQA systems can assist experts in extracting relevant information from medical image based on a given question and providing precise diagnostic answers. The ImageCLEFmed-MEDVQA-GI-2023 challenge carried out a visual question answering (VQA) task in the gastrointestinal domain, which includes gastroscopy and colonoscopy images. Our team approached Task 1 - Visual Question Answering of the challenge by proposing a multimodal learning method with image enhancement to improve the VQA performance on gastrointestinal images. The multimodal architecture is set up with a BERT encoder and different pre-trained vision models based on convolutional neural network (CNN) and Transformer architecture for features extraction from question and endoscopy image. The result of this study highlights the dominance of Transformer-based vision models over the CNNs and demonstrates the effectiveness of the image enhancement process, with six out of the eight vision models achieving better F1-Score. Our best method, which takes advantages of BERT+BEiT fusion and image enhancement, achieves up to 87.25% accuracy and 91.85% F1-Score on the development test set, while also producing good result on the private test set with accuracy of 82.01%. visual question answering multimodal learning BERT pre-trained models gastrointestinal imagingcolonoscopy analysis medical image processing UIT-Saviors at MEDVQA-GI 2023: Improving Multimodal Learning with Image Enhancement for Gastrointestinal Visual Question Answering [ ================================================================================================================================== § INTRODUCTION The digestive system is one of the most complex and essential systems in the human body, consisting of various organs such as the mouth, stomach, intestines, and rectum. From the process of digestion in the stomach to the absorption of nutrients in the small and large intestines, and finally the elimination of waste through the rectum, the entire process involves the interaction and coordination of each organ to ensure the supply of nutrients and energy to the body. Any issues that occur in any part of the digestive system can directly impact the entire gastrointestinal tract, such as inflammation of the intestines, digestive cancers, and diseases of the stomach and colon, especially colorectal diseases, which remain a significant concern for the healthcare community. 
According to estimates from the American Cancer Society[https://www.cancer.org/cancer/types/colon-rectal-cancer/about/new-research.htmlhttps://www.cancer.org/cancer/types/colon-rectal-cancer/about/key-statistics.html. ], colorectal cancer ranks as the third leading cause of cancer-related deaths for both men and women in the United States. The projected numbers for colorectal cancer cases in the year 2023 are 106,970 new cases of colon cancer and 46,050 new cases of rectal cancer, with an estimated 52,550 deaths. However, it is important to note that the mortality rate from colorectal cancer has decreased over the past decade due to advancements in scientific and technological research. Screening techniques allow for the detection of abnormalities in the colon and rectum to be removed before they develop into cancer. Clinical imaging techniques such as X-rays, computed tomography (CT), or ultrasound are often not highly effective in diagnosing pathological conditions in the colon. Therefore, colonoscopy remains the primary technique used for detection, screening, and treatment of gastrointestinal diseases. This method involves using a flexible endoscope, which is inserted through the anus and advanced into the colon. The real-time images of the colon obtained from the endoscopic device are displayed on a monitor, allowing the physician to observe and evaluate any abnormalities in the intestinal tract, the condition of the mucosal lining, and other structures within the colon. Colonoscopy is considered the gold-standard screening procedure for examining and treating colorectal diseases. The endoscopic images contain a wealth of important information about the patient's condition. However, the effectiveness of the colonoscopy process can vary depending on the skills of the performer and the complexity of the endoscopic image analysis, which requires specialized knowledge and manual interpretation <cit.>. To improve the performance of colonoscopy in accurately detecting and classifying lesions, decision support systems aided by artificial intelligence (AI) are being rapidly developed. Among them, Visual Question Answering (VQA) is one of the most prominent techniques. Combining computer vision and natural language processing, VQA assists in extracting information from images, identifying abnormalities, and providing accurate answers to specific diagnostic questions. By integrating information from images and questions, VQA enhances the accuracy of lesion detection and classification, improves communication between users and images, and helps guide appropriate treatment strategies. To successfully deploy VQA in the healthcare domain, in addition to algorithmic integration, a sufficiently large and diverse training dataset is required. Our research team participated in the VQA task of the ImageCLEFmed Medical Visual Question Answering on Gastrointestinal Image (MEDVQA-GI) <cit.> competition at ImageCLEF2023<cit.>. The contribution of the paper focused on performing the VQA task with a new dataset from ImageCLEFmed MEDVQA-GI. Specifically, we employed a multimodal approach for the VQA task (Task 1), combining information from two primary data sources: endoscopic images and textual questions. To achieve a good performance on the VQA task with the provided dataset, we first performed an efficient image preprocessing steps, which involved specular highlights inpainting, noise, and black mask removal to enhance the image quality. 
Subsequently, we conducted experiments and compared the performance of various image feature extraction models based on CNN and Transformer using both raw and enhanced image data. The final results, with accuracy up to 87.25% on the development test set and 82.01% on the private test set, demonstrate the potential of the proposed method in improving the performance of VQA systems in the field of gastrointestinal endoscopy imaging in general and colonoscopy in particular. § BACKGROUND AND RELATED WORKS §.§ Colonoscopy Image Analysis With the advancement of modern advanced technology, AI has made significant contributions to the field of healthcare, specifically in the progress of the colonoscopy examination process. Currently, two potential approaches with AI being utilized for colonoscopy image analysis, including Computer-Aided Detection (CAD) and Deep Learning (DL) systems. In the CAD approach, the system utilizes image processing algorithms to improve the performance of endoscopic procedures, enabling physicians to easily detect lesions in hard-to-identify locations and reduce the chances of misdiagnosis <cit.>. On the other hand, the DL-based system employs a deep learning model trained on specific datasets, which enhances the accuracy of lesion detection compared to the CAD-based system <cit.>. However, developing algorithms for automatic analysis and anomaly detection in endoscopic images requires preliminary image preprocessing to address various factors, such as specular highlights, interlacing or artefacts that impact the system's performance <cit.>. §.§ Preprocessing Methods for Colonoscopy Images In reality, the quality of endoscopy images depends on various factors such as the skill of the performing physician, limitations of the equipment, and certain environmental conditions. Some common difficulties in processing endoscopy images include black masks, ghost colors, interlacing, specular highlights, and uneven lighting <cit.>. Black masks are the occurrence of a black border around the edges of the image due to the use of lenses in the endoscopy system that have a black frame surrounding the edges. This frame can hinder the development of algorithms. To address this issue, techniques such as restoration, thresholding, cropping, or inpainting are necessary. Specular highlights, which are bright spots reflected from tumors or polyps captured by the camera, can disrupt the algorithms. Therefore, to remove them, we can employ detection or inpainting methods. Additionally, for issues like interlacing, ghost colors, and uneven lighting, segmentation methods can be applied to achieve optimal results <cit.> <cit.> <cit.>. Overall, preprocessing steps play a crucial role in mitigating the challenges commonly encountered with colonoscopy images. The mentioned techniques will help improve the overall quality of the images, thereby enhancing the performance of analysis and diagnosis. §.§ Medical Visual Question Answering Medical visual question answering (MedVQA) is an important field in medical AI that combines VQA challenges with healthcare applications. By integrating medical images and clinically relevant questions, MedVQA systems aim to provide plausible and convincing answers. While VQA has been extensively studied in general domains, MedVQA presents unique opportunities for exploration. 
Currently, there are 8 publicly available MedVQA datasets, including VQA-MED-2018 <cit.>, VQA-RAD <cit.>, VQA-MED-2019 <cit.>, RadVisDial <cit.>, PathVQA <cit.>, VQA-MED-2020 <cit.>, SLAKE <cit.>, and VQA-MED-2021 <cit.>. These datasets serve as valuable resources for advancing MedVQA research. The basic framework of MedVQA systems typically contains an image encoder, a question encoder, a fusion algorithm, and an answering component. Other frameworks may exclude the question encoder when the question is simple. Common choices for image encoder are ResNet <cit.> and VGGNet <cit.> that are pre-trained on ImageNet dataset <cit.>. For language encoders, Transformer-based architectures such as BERT <cit.> or BioBERT <cit.> are commonly applied because of their proven advantages, besides the Recurrent Neural Networks (LSTM <cit.>, Bi-LSTM <cit.>, GRU <cit.>). The fusion stage, the core component of VQA methods, has typical fusion algorithms, including the attention mechanism and the pooling module. Common attention mechanisms are the Stacked Attention Networks (SAN) <cit.>, the Bilinear Attention Networks (BAN) <cit.>, or the Hierarchical Question-Image Co-Attention (HieCoAtt) <cit.>. Most multimodal pooling practices are concatenation, sum, and element-wise product. The attention mechanism can aggregate with the pooling module. The answering component has two modes of output depending on the properties of the answer. The classification mode is used if the answer is brief and limited to one or two words. Otherwise, if the response is in free-form format, the generation modules such as LSTM or GRU are taken into account. There are additional techniques to the basic concept, for instance, Sub-task strategy, Global Average Pooling <cit.>, Embedding-based Topic Model, Question-Conditioned Reasoning, and Image Size Encoder. § TASK AND DATASET DESCRIPTIONS §.§ Task Descriptions Identifying lesions in endoscopy images is currently one of the most popular applications of artificial intelligence in the medical field. For the task at ImageCLEFmed-MEDVQA-GI-2023 <cit.>, the main focus will be on VQA and visual question generation (VQG). The main goal is to provide support to healthcare experts in diagnosis by combining image and text data for analysis. The task consists of three sub-tasks: * VQA (Visual Question Answering): For the visual question answering part, participants are required to generate a textual answer to a given textual question-image pair. This task involves combining endoscopy images from the dataset with textual answers to respond to questions. * VQG (Visual Question Generation): This is the reverse task of VQA, where participants need to generate textual questions based on given textual answers and image pairs. * VLQA (Visual Location Question Answering): Participants are provided with an image and a question, and they are required to provide an answer by providing a segmentation mask for the image. In this study, our team only focuses on the VQA task (Task 1) for the provided endoscopy image dataset. In general, we receive a textual question along with the corresponding image, and the main task is to generate accurate and appropriate answers based on information from both sources. For example, for an image containing a colon polyp with the following question, "Where in the image is the polyp located?", the proposed VQA system should return answer giving a textual description of where in the image the polyp is located, like upper-left or in the center of the image. 
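For the dataset considered here the answer space is finite, so the answering component can be cast as multi-label classification over a fixed answer vocabulary. As a hedged, generic illustration of the encoder, fusion, and classifier pipeline summarized in the background section, and not the exact architecture or hyper-parameters used in this work, a minimal PyTorch sketch could look as follows; the backbone choices, hidden size, and answer-vocabulary size are placeholders.

```python
# Hedged, generic sketch of an encoder-fusion-classifier MedVQA model.
# Backbones, dimensions and the number of answer classes are illustrative only.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import BertModel, BertTokenizer

class SimpleMedVQA(nn.Module):
    def __init__(self, num_answers: int = 28, hidden: int = 512):
        super().__init__()
        self.vision = resnet50(weights=None)            # image encoder (CNN backbone)
        self.vision.fc = nn.Identity()                  # keep the 2048-d pooled features
        self.text = BertModel.from_pretrained("bert-base-uncased")  # question encoder
        self.fusion = nn.Sequential(                    # simple concatenation fusion
            nn.Linear(2048 + 768, hidden), nn.ReLU(), nn.Dropout(0.3))
        self.classifier = nn.Linear(hidden, num_answers)  # multi-label answer head

    def forward(self, images, input_ids, attention_mask):
        img_feat = self.vision(images)                                    # (B, 2048)
        txt_feat = self.text(input_ids=input_ids,
                             attention_mask=attention_mask).pooler_output  # (B, 768)
        joint = self.fusion(torch.cat([img_feat, txt_feat], dim=1))
        return self.classifier(joint)                   # logits; train with BCEWithLogitsLoss

# Example forward pass with a dummy image batch and one tokenized question.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer(["Where in the image is the polyp located?"], return_tensors="pt",
                padding=True, truncation=True)
model = SimpleMedVQA()
logits = model(torch.randn(1, 3, 224, 224), enc["input_ids"], enc["attention_mask"])
```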
§.§ Dataset Information The new dataset released for the ImageCLEFmed-MEDVQA-GI-2023 challenge is based on the HyperKvasir dataset <cit.>, the largest gastrointestinal collections with more than 100,000 images, with the additional question-and-answer ground truth developed by medical collaborators. The development set and test set include a total of 3949 images from different procedures such as gastroscopy and colonoscopy, spanning the entire gastrointestinal tract, from mouth to anus. Each image has a total of 18 questions about abnormalities, surgical instruments, normal findings and other artefacts, with multiple answers possible for each, as shown in Table <ref>. Not all questions will be relevant to the provided image, and the VQA system should be able to handle cases where there is no correct answer. Figure <ref> depicts several examples of question-answer pairs on common abnormalities in gastrointestinal tract, such as Colon Polyps, Oesophagitis, and Ulcerative Colitis. As shown in Figure <ref>, there are three possible answers to the question "What color is the abnormality?": "Pink," "Red," and "White", and a typical VQA system should be able to identify all three colors. In general, the image may contains a variety of noise and components that locates across abnormalities, such as highlight spots or instruments, which pose a significant challenge in developing efficient VQA systems for gastrointestinal domain. § THE PROPOSED APPROACH The method used in this study is based on a standard framework that is commonly used to tackle general VQA problems. Figure <ref> depicts an overview of the proposed method for ImageCLEFmed-MEDVQA-GI-2023 dataset. In general, the VQA architecture employs powerful pre-trained models to extract visual and textual features from image-question pairs, which are then combined into a joint embedding using a fusion algorithm and passed to a classifier module to generate the appropriate answer. To improve the quality of the region of interest and achieve better VQA performance, the original image is passed through a series of enhancement procedures before being fed into the image encoder for features extraction. §.§ Image Enhancement The purpose of the image pre-processing and enhancement steps is to remove noise and artifacts, which are frequently caused by the equipment used in diagnostic or environmental difficulties. Some of the major problems to be mentioned are black mask, specular highlights, interlacing or uneven lighting. The impact of these elements, such as black mask and specular highlights, is significant since they, like the polyp, create valley information and affect the performance of polyp localization, causing the VQA system to generate incorrect answers. This study employs pre-processing and enhancing methods to cope with specular highlights and black mask in colonoscopy image, which are prevalent artifacts in the dataset provided. The desired outcome is an enhanced image with no specular reflection or black frame while retaining the visual features of the region of interest. §.§.§ Specular Highlights Removal The removal of specular highlights from colonoscopy image includes two sequential processes: detection of specular highlights and highlights inpainting. Figure <ref> depicts the overall procedure of the method, the outcome of which is generally based on the combination of Telea inpainting algorithm with initial image restoration after several modification steps. 
Specular highlights detection First, it is necessary to convert the image from the original RGB channel to grey scale to process the subsequent procedure. Rather than adaptive thresholding, the proposed approach employs standard thresholding method with a fixed threshold value to identify specular highlights in all images. This is due to the gastrointestinal image's varied textures and components, and if not done properly, may result in information loss. Some samples of the dataset contain text, high exposure regions and brightly colored instrument, as described in Figure <ref>. Aside from text in white color, high exposure regions are parts of specular highlights that received excessively high intensity compared to regular highlight spots, while the instruments are sometimes in white or blue color. After thresholding, these factors may emerge in the mask, as shown in Figure <ref>, and affect the inpainting outcome. Thus, the following step is to remove these undesired elements from the mask in order to assure consistency. To cope with these problems, two directions are considered, either to perform segmentation for text, polyp and instrument, separately, or remove the parts that meet certain size threshold. For simplicity, the second approach is used in this study. The preprocessing step consists of several morphology transformations interspersed by contour detection and removal. More specifically, a dilation operation with kernel size 3×3 is performed initially to connect the pixels related to undesirable parts. Among the obtained contours, those whose scaled area following the Modified Z-scores formula <cit.>, as shown in Formula <ref>, exceeds 17.0 are removed from the mask. The mask is then passed into another erosion module with the same settings to restore the initial highlights intensity. Finally, Gaussian filter of size 19×19 is applied to reduce the intensity of highlights area and improve the inpainting performance. S_i = |s_i-s̃|/MAD where: * S_i: is the scaled area of contour i based on modified Z-score. * s_i: is the area of contour i * s̃: is the median area of all contours * MAD = median(|s_i-s̃|),∀ i=1..n: is the Median Absolute Deviation of contour areas Highlights inpainting Once the mask of specular highlights has been achieved, the image regions indicated by the mask are then reconstructed through an efficient inpainting operation. First, a filter of size 3×3 slides across every pixels of the original image and calculate the average value. The process is repeated N times to ensure a desirable outcome. We then perform an initial restoration on the image by directly replacing its pixels under the specular highlights mask with pixels from the blur image. Despite the drastically reduced intensity, specular highlight spots still remains in the reconstructed image, as shown in Figure <ref>. To obtained the final result, Telea algorithm <cit.>, a powerful image inpainting strategy, is applied to eliminate the remaining noisy and dim highlights. The inpainted image is noticeably higher in quality, with specular highlights removed without negatively impacting other areas of the image. §.§.§ Black Mask Removal Previous research has shown that black masks do generate valley information, which can reduce polyp localization performance. Based on this, we propose a black mask removal strategy for the VQA task that still retains black box information in order to answer the question "Is there a green/black box artefact?". 
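For concreteness, the specular-highlight detection and inpainting steps described above can be summarized in the following OpenCV sketch (given here before turning to the black-mask procedure). It is a minimal illustration rather than the exact implementation: the fixed threshold value, the helper name remove_specular_highlights, the number of averaging passes, and the decision to apply the Gaussian filter to the mask are assumptions; the modified Z-score cutoff of 17.0, the 3×3 morphology kernel, the 19×19 Gaussian filter, and the Telea inpainting follow the description in the text.

```python
import cv2
import numpy as np

def remove_specular_highlights(bgr, thresh_val=200, z_cutoff=17.0, n_blur=3):
    """Sketch of specular-highlight detection and inpainting.
    thresh_val and n_blur are assumed values; z_cutoff follows the
    modified Z-score cutoff of 17.0 quoted in the text."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh_val, 255, cv2.THRESH_BINARY)

    # Dilate to connect pixels belonging to text, instruments, or overexposed regions
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.dilate(mask, kernel, iterations=1)

    # Drop contours whose modified Z-score of the area exceeds the cutoff
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    areas = np.array([cv2.contourArea(c) for c in contours])
    if len(areas) > 0:
        mad = np.median(np.abs(areas - np.median(areas))) + 1e-8
        scores = np.abs(areas - np.median(areas)) / mad
        for c, s in zip(contours, scores):
            if s > z_cutoff:
                cv2.drawContours(mask, [c], -1, 0, thickness=cv2.FILLED)

    # Restore the original highlight extent, then feather the mask edges
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.GaussianBlur(mask, (19, 19), 0)

    # Initial restoration: replace masked pixels with a repeatedly averaged image
    blur = bgr.copy()
    for _ in range(n_blur):
        blur = cv2.blur(blur, (3, 3))
    restored = np.where(mask[..., None] > 0, blur, bgr)

    # Final pass with the Telea inpainting algorithm
    return cv2.inpaint(restored, mask, 3, cv2.INPAINT_TELEA)
```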
In general, an artificial mask of black frame is initially created based on its border width, and then the inpainting operation is performed to remove the black frame from the image. The overall procedure is described in Figure <ref>. Our method does not use cropping or thresholding directly to detect and remove the black mask because it may contain the black box artifact, shadow regions, or black instrument, the removal of which causes information loss and decreases VQA performance. To detect the border width, we first perform a grey scale conversion and inverse thresholding with erosion operation to remove noise, and then measure the distance from each edge of the image to the nearest pixel that does not belong to the mask. After determining the width of the border, the crucial step of the method is to create an artificial mask with internal octagon shape. This can be done by creating two sub-masks, one rectangle and one circle, followed by a bitwise OR operation to combine them into the final mask, as show in Figure <ref>. The circle mask is created with a center point based on the information of border width and a radius calculated by multiplying the ordinate of the center point by a value σ (σ>1). In some cases, the final mask is not always octagonal, as shown in the last example, but it still covers the main region of interest. Finally, the inpainting of black mask is completed using the same procedure as described in the previous section for specular highlights, giving the final enhanced image with black mask removed. If a black box artefact exists in the bottom-left corner, as shown in the second example, it will not be significantly affected as long as its size is greater than the area of the mask at the respective position. For images containing an expanded black mask labeled as black box artefact, we process by creating a simulated green box that contains the text and placing it in bottom-left corner. By doing so, the text and box artefact information still remain after the inpainting procedure. Though the obtained results are quite satisfactory, there are still some cases where the mask is not completely removed and need further processing steps. §.§ Multimodal Fusion Architecture Since this study focus mainly on the VQA task, the architecture should be capable of extracting meaningful features from the question and corresponding image, and incorporating them to give the correct answer. Our multimodal fusion architecture is set up with important components such as an image encoder for feature extraction from images, a text encoder for features extraction from questions, a fusion algorithm for unifying modalities and a classifier for producing the appropriate answer. The proposed approach uses pre-trained Bidirectional Encoder Representations based on Transformers (BERT) <cit.> to extract textual features from questions. As a bidirectional model, it can learn the meaning of words in a sentence by considering both the words that come before and after them. With massive pre-training data, BERT can be fine-tuned and achieved state-of-the-art results on a number of natural language processing (NLP) benchmarks. For features extraction from the images, this study set up and experiment with eight different pre-trained models that are belong to two main concepts: * CNN-based architectures including Resnet152 <cit.>, Inception-v4 <cit.>, MobileNetV2 <cit.> and EfficientNet <cit.>. 
This group of models takes advantage of traditional CNN components such as convolutional layers, pooling layers, residual blocks, and fully connected layers to achieve significant results in the computer vision field. Training CNN-based models is also more efficient and requires fewer computational resources than the newer Transformer-based approaches. * Transformer-based architectures including ViT <cit.>, DeiT <cit.>, Swin Transformer <cit.> and BEiT <cit.>. This family of models leverages massive amounts of training data and the Transformer's multi-head self-attention for a game-changing breakthrough in the computer vision field. ViT (Vision Transformer) and the models inspired by it first encode the image as patch embeddings and pass them into a regular Transformer encoder for feature extraction, analogous to the treatment of text data. They are currently considered the prominent architectures for achieving state-of-the-art performance on a variety of computer vision tasks such as image classification, object detection, and semantic image segmentation. After obtaining the text and image embeddings, a multimodal fusion method based on concatenation is used to combine these features along the embedding dimension. The unified embedding matrix is then passed through an intermediate fully connected layer with dropout of 0.5 and ReLU activation, followed by a classification layer to produce the final output. Because there can be more than one appropriate answer for each question, we approach the VQA task as a multi-label classification problem. To train the proposed architecture, multi-label binarization is used to encode the list of all possible answers into a binary vector. Furthermore, the final layer is configured with a sigmoid activation function to return an output vector of the same size containing the corresponding probability for each class. § EXPERIMENTAL SETUP §.§ Data Preparation The development set released for the VQA challenge contains 2000 images of gastroscopy and colonoscopy procedures. In order to experiment with and evaluate our method, we randomly divided the provided development set into three parts: train, validation, and test, with 1600 images for training and 200 images each for validation and test. The data preparation process is designed to ensure that each abnormality has the same proportion in the training, validation, and test sets, and that each image is paired with all 18 questions. This produces 28,800 question-answer pairs for training, 3600 pairs for validation, and 3600 pairs for testing. All images from the development set and the private test set are first passed through an image enhancement block, where the preprocessing methods described above are applied to remove specular highlights and the black mask from the images. The enhanced results are then used as input for training and testing the proposed VQA model. §.§ Experiment Configurations Many experiments are carried out to evaluate the performance of the proposed methods on the ImageCLEFmed-MEDVQA-GI-2023 challenge. Specifically, each pre-trained vision model is initialized as an image encoder and unified with the BERT encoder through concatenation fusion for multimodal learning. Table <ref> gives general information about the pre-trained models used in this study, including the vision model name, version, and number of parameters for each fusion model.
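For reference, the concatenation-based fusion described above can be sketched in PyTorch as follows. This is a simplified illustration: the Hugging Face bert-base-uncased checkpoint, the generic vision_backbone returning a pooled (B, img_dim) feature vector, the hidden size of 512, and the class and argument names are placeholders rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class VQAFusionClassifier(nn.Module):
    """Concatenation fusion of BERT question features and image features,
    followed by an FC layer (ReLU, dropout 0.5) and a multi-label answer head."""
    def __init__(self, vision_backbone, img_dim, num_answers, txt_dim=768, hidden=512):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.vision_backbone = vision_backbone        # maps images to (B, img_dim)
        self.fusion = nn.Sequential(
            nn.Linear(txt_dim + img_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, num_answers),           # one logit per candidate answer
        )

    def forward(self, input_ids, attention_mask, images):
        txt = self.text_encoder(input_ids=input_ids,
                                attention_mask=attention_mask).pooler_output
        img = self.vision_backbone(images)
        logits = self.fusion(torch.cat([txt, img], dim=-1))
        # BCEWithLogitsLoss consumes the raw logits during training;
        # torch.sigmoid(logits) gives per-answer probabilities at inference.
        return logits
```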
Through these experiments, we can discover the potential and limitations of each model for the VQA task and thus choose the best method for producing the final prediction on the private test set of the competition. To obtain comparable results, we use the same hyperparameters for all experiments. The models are trained for 15 epochs with a batch size of 64. We use the Adam optimizer <cit.> with weight decay, an initial learning rate of 5e-5, and a linear scheduler that decreases the learning rate by 6.67% after each epoch. Since we approach the VQA task as multi-label classification, the output layer is configured to return a tensor containing the probability of each answer, and the final predicted answers for each question are obtained using a threshold value of 0.5. Accordingly, the BCEWithLogitsLoss function, which combines a Sigmoid layer and the BCELoss, is applied during training. After each epoch, the training loss and validation loss are calculated, and the performance is then evaluated with classification metrics such as accuracy, precision, recall, and F1-Score. To ensure a meaningful result for multi-label classification, the metrics are calculated from the ground-truth and predicted binary vectors, with recall, precision, and F1-Score computed on each sample and then averaged. The model checkpoint that obtains the best F1-Score is used for prediction in the testing phase. The proposed architecture is implemented in PyTorch and trained on the Kaggle platform with the following hardware specifications: Intel(R) Xeon(R) CPU @ 2.00GHz; GPU Tesla P100 16 GB with CUDA 11.4. § EXPERIMENTAL RESULTS The comparative results of the different pre-trained image models on the test set are shown in Table <ref>. With no image enhancement, Swin-B achieves the best result with 86.64% accuracy and 90.90% F1-Score, while BEiT-B gives slightly lower performance with an accuracy of 86.47% and 90.74% F1-Score. CNN-based vision models obtain acceptable results but do not match the Transformer-based models. With image enhancement, six out of eight vision models from both the CNN and Transformer families achieve a better F1-Score. BEiT-B stands out with an accuracy and F1-Score of 87.2% and 91.85%, respectively. Overall, the enhancement process improves the F1-Score by at least 0.4% and up to 1.11%. The results of the convolutional models still fall below those of the Transformer-based models. We found that the BERT and BEiT fusion (BERT+BEiT) with image enhancement is the best method in our approach and use it for prediction in the final private test phase. Our method obtains a good result on the private test set with an accuracy of 82.01%. Table <ref> compares the performance of the BERT+BEiT fusion on each question between the development test set and the private test set. In general, 14/18 questions achieve greater than 80% accuracy on the development test set, while 11/18 questions on the private test set reach the same level. Our method still struggles to produce complete and precise answers for questions with multiple answers, such as "What color is the abnormality?", or questions that refer to the location of the abnormality, anatomical landmark, or instrument.
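For clarity, the sample-wise multi-label evaluation protocol used above (probabilities binarized at 0.5; precision, recall, and F1-Score computed per sample and then averaged) can be written as the following short NumPy sketch; it illustrates the metric definition and is not the exact evaluation script used in our experiments.

```python
import numpy as np

def sample_averaged_prf1(y_true, y_prob, threshold=0.5, eps=1e-8):
    """y_true, y_prob: arrays of shape (n_samples, n_answers) holding binary
    ground-truth vectors and predicted probabilities, respectively."""
    y_pred = (y_prob >= threshold).astype(int)
    tp = (y_pred * y_true).sum(axis=1)              # true positives per sample
    precision = tp / (y_pred.sum(axis=1) + eps)
    recall = tp / (y_true.sum(axis=1) + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return precision.mean(), recall.mean(), f1.mean()
```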
§ CONCLUSION AND FUTURE WORKS Along with performing image enhancement, we set up and experimented with various powerful pre-trained image models together with the BERT encoder in our proposed multimodal architecture for the VQA task at ImageCLEFmed-MEDVQA-GI-2023 <cit.>. The visual enhancement steps, which include specular highlight and black mask removal, help improve multimodal learning performance on the dataset by up to 1.11% F1-Score. Our best method, the BERT+BEiT fusion with image enhancement, achieved an accuracy of 87.25% on the development test set and 82.01% on the private test set. Performance analysis shows that questions requiring multiple locations or colors in the answer remain a limitation of this study. In summary, factors such as answer imbalance, noise, and artifacts have a significant impact on our solution for the VQA task. Our future research for this task is to improve the accuracy of the model by enriching the features from images and questions through instrument segmentation and polyp localization with methods such as U-net <cit.> and ResUnet++ <cit.>, developed on object-specific datasets such as Kvasir-Instrument <cit.> and Kvasir-seg <cit.>. Other advanced colonoscopy image preprocessing techniques, such as interlacing removal or uneven lighting removal, can also be examined to improve image quality. Building on the proposed system, an intelligent chatbot application can be implemented for question answering from medical images to help improve colonoscopy analysis. This research was supported by The VNUHCM University of Information Technology’s Scientific Research Support Fund.
http://arxiv.org/abs/2307.02379v1
20230705154607
Machine learning at the mesoscale: a computation-dissipation bottleneck
[ "Alessandro Ingrosso", "Emanuele Panizon" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.dis-nn", "cs.LG" ]
Quantitative Life Sciences, Abdus Salam International Centre for Theoretical Physics, 34151 Trieste, Italy The cost of information processing in physical systems calls for a trade-off between performance and energetic expenditure. Here we formulate and study a computation-dissipation bottleneck in mesoscopic systems used as input-output devices. Using both real datasets and synthetic tasks, we show how non-equilibrium leads to enhanced performance. Our framework sheds light on a crucial compromise between information compression, input-output computation and dynamic irreversibility induced by non-reciprocal interactions. Machine learning at the mesoscale: a computation-dissipation bottleneck Emanuele Panizon August 1, 2023 ======================================================================= What does a theory of computation at the mesoscopic scale look like? To begin to answer this question, we need to bridge the formalism of computation with a physical theory of systems whose energy scales are close to thermal fluctuations. Stochastic Thermodynamics (ST), by associating single stochastic trajectories with meaningful thermodynamic quantities <cit.>, exposes the deep relation between information and dissipation. One of the fundamental results of ST is that information and time irreversibility, as measured by the rate of Entropy Production (EP) <cit.>, are inherently related <cit.>. Thermodynamic Uncertainty Relations <cit.> have been derived that describe fundamental precision-dissipation trade-offs, leading to a framework successfully applied to a variety of bio-chemical processes, such as chemo-sensing <cit.>, copying <cit.>, reaction networks <cit.>, cascade models of synapses <cit.>, among others. To set the following discussion, we will refer to computation at the mesoscopic scale as the ability of a system to react to the environment - via physical interactions between its parts and external heat baths - in such a way that the modification of its state depends on some function of the environmental conditions. This transformation possibly leads the system far from equilibrium. Encoding external signals in their entirety is one of such computations. Borrowing terminology from Machine Learning (ML), a mesoscopic system can be considered as an “autoencoder”, thus focusing on its ability to sense, compress information and perform error correction capabilities <cit.>. Full encoding, however, may be energetically wasteful when a computation regards a limited aspect of the environment: discarding non-relevant information allows to strike a balance between performance and energy expenditure, in a manner crucially dependent on the task at hand. We recognize this task-dependence of the performance/cost trade-offs as the critical ingredient of any physical theory of computation. On one side of such trade-off lies dissipation, the study of which is starting to be addressed in many body systems <cit.>. Irreversibility of macroscopic neural dynamics is also attracting attention <cit.>. The system's computational performance, on the opposite side of the trade-off, can be formulated both in information theoretic terms and in the more practical lens of standard error metrics employed in ML. One recently emerging approach attempts to define a framework for irreversibility in formal models of computational systems <cit.>, in a way that is agnostic to physical implementations. 
Here, we consider generic parametrizations of mesoscopic systems whose stochastic transitions are induced by an environment, possibly out-of-equilibrium, so that resulting interactions may show non-reciprocity <cit.>. In particular, we focus on asymmetric spin models, which have been subject of intense study in the field of disordered systems <cit.> and provide a bridge to classical models of neural computation <cit.>. In line with conventional formalism of neural networks, we consider the dynamics of these systems as producing internal representations of their inputs, the geometry and intrinsic dimensionality of which impact the ability to learn input-output relations. We show how entropy-producing non-reciprocal interactions  <cit.> are crucial to generate effective representations, in such a way that a fundamental trade-off emerges between expressivity and performance. § A COMPUTATION-DISSIPATION BOTTLENECK The stochastic dynamics of mesoscopic systems, usually described using continuous-time Markov processes, results from their interactions with thermal baths and external driving mechanisms. Let us consider a system 𝒮 with discrete states s, driven by a time homogeneous input protocol x. The evolution of the probability of state p(s,t) is given by a master equation with jump rates k_s's from state s to states s'. To facilitate the connection to ML, we take a set of parameters θ that determine – rather abstractly – the jump rates. We assume computation is performed on a timescale much longer than any initial transient. For each independent input x, the system reaches a steady-state (SS) probability p(s|x), serving as internal representation of x. At the (possibly non-equilibrium) SS, each input x is associated to an average EP rate, Σ(x), a measure of irreversibility at the steady state and corresponding to the housekeeping heat. In Markovian systems with discrete states, the EP rate can be computed via the Schnakenberg formula <cit.>: σ = 1/2∑_s,s' J_ss'logk_ss' p(s' )/k_s's p(s ) where J_ss' = [ k_ss' p(s') - k_s's p(s)] are the steady state fluxes and we work in units where the Boltzmann constant κ_B=1. Note that in our case σ = σ(x, θ) through k_ss', J_ss' and p(s). A supervised learning task is specified by a finite set 𝒟=(x,y) of input-output pairs, so the EP rate averaged over the whole dataset is simply Σ (θ) = 1/|𝒟|∑_x σ( x , θ). Alternatively, the learning task could be defined by a distribution over the input space p(x) and an conditional output distribution p (y|x). The (average) EP rate is similarly Σ(θ) = ∑_x p(x) σ( x , θ). The EP rate is a function of the dynamic process alone. How the resulting p(s|x) is able to disentangle and predict the output is a separate, task-specific factor. There exist a number of ways to define a good measure of computational performance. One possible choice is the mutual information between the internal representations s and the output y, i.e. I(s,y): such choice makes no assumption on the additional computational burden needed to extract the information about y, possibly encapsulated in arbitrarily complex high-order statistics of the steady state distribution. A different path is to use a small subset of moments of p(s|x) as internal representations of the inputs, to be then fed to a simpler linear readout. This approach, closer to standard ML practice, allows us to use the Mean Square Error (MSE) or Cross Entropy (CE) loss functions. Both approaches and their limitations will be explored in the following. 
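For a small discrete state space, both the steady state p(s|x) and the Schnakenberg entropy-production rate above can be evaluated directly from the jump rates. The NumPy sketch below follows the same procedure used in the Appendix (kernel of the rate matrix obtained via SVD); the function names and the dense-matrix representation are illustrative choices.

```python
import numpy as np

def steady_state(K):
    """K[s_to, s_from]: jump rates (zero diagonal assumed).
    Returns p solving R p = 0, with R the master-equation generator."""
    R = K - np.diag(K.sum(axis=0))           # subtract escape rates on the diagonal
    _, _, vh = np.linalg.svd(R)
    p = np.abs(vh[-1])                        # singular vector of the ~zero singular value
    return p / p.sum()

def ep_rate(K, p):
    """Schnakenberg entropy-production rate at the steady state p."""
    sigma = 0.0
    n = len(p)
    for s in range(n):
        for sp in range(n):
            if s == sp or K[s, sp] == 0 or K[sp, s] == 0:
                continue
            J = K[s, sp] * p[sp] - K[sp, s] * p[s]          # steady-state flux
            sigma += 0.5 * J * np.log((K[s, sp] * p[sp]) / (K[sp, s] * p[s]))
    return sigma
```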
Given a performance measure 𝒢(θ), the trade-off can be encapsulated in a quantity: ℒ( θ) = 𝒢(θ) - α Σ(θ), where α is a positive parameter that has units of time. We study the trade-off by optimizing ℒ over the interaction parameters θ for different values of α: increasing α, the cost of dissipation with respect to performance is enhanced, with the α→∞ limit effectively constraining the system to be at equilibrium. In this letter, we first use numerical methods to build a multi-spin system that performs two different classification tasks. We then employ an analytically solvable 2-spin model to investigate the enhanced expressivity of non-equilibrium systems with respect to equilibrium ones, and relate it to the structure of their computational tasks. § MULTI-SPIN SYSTEMS AS STOCHASTIC RECURRENT NETWORKS To exemplify the computation-dissipation trade-off, we use a spin-based model to perform an input-output computation. Specifically, a classification task where inputs x – schematically represented by the tape in Fig <ref> – must be correctly associated with given output labels y. The system at hand is composed of two chains of size N with possibly asymmetric couplings. Spins of the two chains are driven by the same inputs x_i, serving as constant external fields. Each spin s_i is subject to random flips with rates k^(i)_s∝ e^-β s_i (W s + x)_i. Interactions, encoded in the matrix W, connect spins both along the same chain and across the two lines of spins, similarly to an implicit, stochastic version of a convolutional layer (see Appendix). When W is symmetric, the system relaxes to the equilibrium of a Hamiltonian ℋ=-1/2s^T W s - x at inverse temperature β. Non-reciprocal interactions (W ≠ W^T) lead to non-equilibrium and a non-zero EP rate. After a transient, the system reaches a steady state p(s|x), with an average magnetization m_x = < s |x > and an entropy production rate σ(x, θ = W). For any input-output dataset, each W will thus be associated with both a different task performance and an average EP rate Σ. In close analogy with standard ML methods, we implement a final linear readout of the average magnetization W_out m_x, with a learnable matrix W_out. This allows us to separately consider the system's computation as a two-step process: (i) a highly non-linear deformation of the input space x into m_x induced by the dynamics of the process, akin to what occurs in a hidden layer of an artificial neural network; (ii) a separation in the m_x space to produce the output y. Note that our formalism is a stochastic, mesoscopic generalization of the recently introduced implicit layers, which serve as building blocks of deep equilibrium odels <cit.>. To minimize ℒ, we coupled a standard Gillespie algorithm <cit.> for the simulation of the system's evolution with each input field x to a gradient-based optimization method. Due to the stochastic nature of the Gillespie trajectory and the high dimension of the W parameter space, we adopted a finite-difference method called Simultaneous Perturbation Stochastic Approximation (SPSA) <cit.> to compute an estimate of the gradient (see Appendix for details). The solutions at each value of α allow us to construct an optimal front between G^*(Σ) and Σ^*(α), where ^* denotes that optimal values of Eq. <ref>, as shown in Fig. <ref>A,C. We showcase our approach with two different tasks. The first is MNIST-1D <cit.>, a one-dimensional version of the classic digit-classification dataset MNIST. 
Each element has an input dimension N=40 and belongs to one of 10 different classes, i.e. the digits. See an example of the input configurations in Fig. <ref>B. To enable multi-class classification, we apply a normalized exponential function SM (softmax) to the output to get a 10-dimensional probability vector ŷ = SM(W_out m_x), and use the negative cross-entropy between actual labels and ŷ, 𝒢 = -CE( ŷ, y ), as a measure of task performance. Our results show an inverse relationship between task performance and entropy production at steady state, Fig. <ref>A. Enforcing the system to be at equilibrium (α→∞) reduces performance by ≈ 5% and accuracy – defined as the percentage of output labels identified as most probable – by 7%. This highlights how non-reciprocal interactions enhance the complexity of internal representations needed for learning, at the cost of higher dissipation. The second task is a classic random input-output association <cit.>, where input components x_i^μ of each pattern μ=1,...,M are drawn i.i.d. from a normal distribution, and labels are random y∈{ -1,+1} with probability 1/2 (Fig. <ref>D). We measure the performance in this task by the mean squared error (MSE): 𝒢 = -MSE(ŷ, y ), where ŷ = W_out m_x. For all random instances of this second task, we reproduce the front between entropy production and performance, Fig. <ref>C. While quantitative details differ slightly for different instances, the performance consistently increases with the amount of non-reciprocity in the optimal coupling matrix W and therefore with dissipation in the system. § A TRACTABLE 2-SPIN SYSTEM To exemplify a general formulation of the computation-EP bottleneck, let us study a specific case of the system introduced in the previous section, which can be solved analytically. We consider a 2-spin system with asymmetric couplings θ = (J_s + J_a, J_s - J_a), driven by constant two-dimensional inputs x that act as external fields, Fig. <ref>A. When J_a=0, the system respects detailed balance and reaches an equilibrium state. Non-reciprocity in the coupling between the spins leads to a non-zero Σ. The information-coding capabilities at steady state of such a system have been recently analyzed <cit.>. In turn, we treat such a mesoscopic network as an input-output device. In full generality, we prescribe a stochastic rule by a known conditional distribution p(y | x ), with y∈{0,1} a binary output variable. This formulation encompasses the classic Teacher-Student setup <cit.> and the mixture models <cit.> employed in the theoretical study of feed-forward neural networks. At variance with the previous examples, we relax the assumption of a linear readout and ask how much information I(s,y) about the output y is contained in the steady state probabilities p(s | x). Let us consider a stochastic and continuous generalization of a parity gate, where the output is prescribed by p(y=1|x)=sigmoid(η x^ϕ_1 x^ϕ_2), with x^ϕ=R^ϕ x, R^ϕ a rotation operator of angle ϕ. This defines a family of tasks with a controllable degree of asymmetry in input space. Examples of such tasks are shown in Fig. <ref>C. The additional parameter η affects the sharpness of the change of the output probability as a function of x. The mutual information I(s,y) = H(s) - H(s|y) can be computed easily using the conditional independence of y and s given x. For ϕ=0, the optimal structure is an equilibrium system (J_a^⋆=0).
As ϕ increases, the optimal 2-spin network has asymmetric weights (J_a^⋆>0), implying a non-zero entropy production at steady state, Fig. <ref>B. Limiting the system to be at equilibrium thus results in performance degradation, down to a minimum of zero information when the rotation reaches ϕ = π / 4. For a given value of ϕ and the free parameter α, one can define the computation-dissipation trade-off in the form of maximizing Eq. <ref>, now with 𝒢 = I(s,y). Note the analogy with the formulation of task-relevance of internal representations provided by the classic Information Bottleneck <cit.>. Here, instead of a compromise between input compression and retention of output information, we trade off the latter with dissipation. We can compare the performance of an auto-encoding system, whose couplings θ^sx={J^sx_s,J^sx_a} are chosen using 𝒢 = I(s,x | θ), with that of a system with parameters θ^sy optimizing 𝒢 = I(s,y | θ), the information about y. The optima corresponding to α = 0 have finite non-reciprocal terms J_a – see Fig <ref>A,B – and therefore positive, but finite, EPs. For all values of ϕ there exists a maximum dissipation rate above which performance degrades <cit.>. Fig <ref>C shows the computation-dissipation front for a task with ϕ=0.5, each point representing a different optimal compromise between input-output performance, measured by the mutual information I(s,y), and rate of entropy production at steady state. We chose a parameter regime where a non-equilibrium solution is optimal also for I(s,x) <cit.>. Crucially, a system that maximizes the information on the entire input I(s,x) performs worse than a system tailored to maximize the output information. This is a hallmark of optimization of task-relevant information. This simple system allows us to explore the relation between the non-equilibrium steady state probability p(s|x) and the task. Fixing J_s=0, the effect of increasing J_a resembles a rotation by an angle of π/4 of p(s|x) in the region where |x| < J_a, see Fig. <ref>D. An increasing amount of non-reciprocity in the system will thus align the steady-state probabilities p(s|x) with the rotation induced on the conditional output p(y | x ) by the angle parameter ϕ. § DISCUSSION We introduced a framework to characterize a fundamental trade-off between computational capabilities and energetic expenditure in mesoscopic systems. We showcase how such systems can be used in supervised learning tasks and how limiting entropy production can degrade their performance, as measured either using standard loss functions in ML or with information theoretical methods. Our results point to the general necessity to gauge encoding and task-relevance while considering energetic trade-offs. In a simple 2-spin system, we show how non-reciprocal interactions affect the capability of the system to solve different tasks optimally, independently from the encoding of the input signals: a simple modulation of the input-output task switches the optimal system configuration from an equilibrium to a highly non-equilibrium one. Linear stochastic systems (Ornstein-Uhlenbeck processes) are another case for which one can derive an analytical expression for the computation-dissipation trade-off (see Appendix). The emerging trade-off between entropy production and output information is again controlled by the degree of asymmetry of the task in input space. In this study, we concentrated on one-time statistics of the steady state distribution, leaving aside interesting properties of time-correlations. 
The study of both transient behavior and non-stationary protocols – where more general tasks can be formulated for instance by prescribing time-dependent average responses y(t) to multi-dimensional time-dependent signals x(t) – opens an interesting avenue to investigate general speed-dissipation-computation trade-offs within this framework. Special care must be used in such cases to distinguish between housekeeping and excess entropy production <cit.>. Studying the impact of hidden units is an important avenue for future work. In generative models, marginalization over hidden states is the crucial ingredient to induce higher-order interactions. This forms the basis for the attention mechanism in transformers <cit.> – arguably the most powerful ML models to date <cit.> – as the recent works on modern Hopfield networks <cit.> have shown. Drawing a bridge between ML, theoretical neuroscience and ST can prove fruitful in systematically studying how internal representations depend on the cost. Rate-distortion approaches have been used to study the impact of information compression on classification accuracy and maximal attainable rewards <cit.>, but a general theory is currently lacking. Our perspective is complementary: energetic costs are expected to have a strong impact on the complexity of internal representations, leading to different mechanisms for information processing. § ACKNOWLEDGEMENTS We wish to thank Antonio Celani, Roman Belousov and Edgar Roldan for fruitful discussions and for reading a preliminary version of this manuscript. § APPENDIX § STEADY STATE AND MUTUAL INFORMATION IN A CONTINUOUS TIME MARKOV CHAIN The evolution of the probability p(s,t) of state s is described by a master equation: d/dtp(s,t)=∑_s'[k_ss'(t)p(s',t)-k_s's(t)p(s,t)], with k_ss'(t) the jump rate from state s' to state s, whose time dependence is due to a generic external protocol x(t). In our case with a constant-in-time protocol, the steady state p(s|x ) can be obtained extracting the kernel of the matrix R_ss'=k_ss'-δ_s,s'∑_s” k_s”s. For systems of small size, this is viable numerically using Singular Value Decomposition (SVD). The mutual information between the input x and the system state s at steady state can be easily computed using I (s,x) = H(x) - H(s|x), with H the Shannon entropy. As for I (s,y) = H(s) - H(s|y), the entropy term H(s|y) can be easily obtained by exploiting the conditional independence between y and s, which implies that the joint distribution p(s,y) can be written as: p(s,y)=∑_x,s,y p(s,y|x)p(x)=∑_x,s,y p(s|x)p(y|x)p(x). Using Eq. <ref>, the posterior distribution p(s|y) is directly calculated using the Bayes theorem. § TRAINING OF A MULTI-SPIN SYSTEMS §.§ Details on the system We consider a system composed of two chains of size N. Interactions connect spins up to the k^th neighbours, where we use k=2. If we identify a spin by (m,n) where 1 ≤ m ≤ N is the position in the chain and n=1,2 the chain index, two spins (m_i,n_i) and (m_j,n_j) are connected if |m_i-m_j| ≤ k. The interaction parameter W_ij depends only on m_i-m_j and n_i-n_j, so that the number of non-zero, fully independent parameters of W is 8k-2. The external input x is repeated such that it is the same for both chains. Such spin system at steady state implements a stochastic version of an implicit convolutional layer with two channels <cit.>. §.§ Datasets MNIST-1D is a 1-dimensional version of size N=40 of the classic MNIST handwritten digits dataset <cit.>. 
We used 4000 training samples, organized in 10 different classes, each containing roughly 400 samples. Data is available at https://github.com/greydanus/mnist1d, where a description of its generation from the original MNSIT dataset is given. We generated instances of the Random Task by drawing M=100 patterns x^μ in dimension N=10, with components x_i^μ independently from a Normal distribution. The corresponding labels y^μ, drawn from { -1,+1} with probability 1/2, were randomly associated with each pattern. §.§ Details on Gillespie simulations The Gillespie algorithm <cit.> offers a remarkably simple method to generate stochastic trajectories by randomly selecting sequences of jumps between states. Let us consider a system with a discrete number of states s and transition rates k_ss', which are constant in time. Given a current state s_start, the Gillespie algorithm works by identifying both the time τ and the final state s_end of the following jump. As a first step, the total rate k_out = ∑_s k_s s_start of leaving state s_start is computed. The time τ until the following jump is then drawn from an exponential distribution with mean 1/k_out. The landing state is selected with probability p(s) = k_s s_start / k_out. The trajectory is thus constructed concatenating jumps. First, the initial state s_0 is chosen (in our case, at random) at time t=0. A first jump (τ_1, s_1) is selected starting from s_0, and then a second (τ_2, s_2) starting from s_1. The process is repeated until one of two criteria is met, either a total time or a maximum number of steps. Average occupations can be computed considering that the system occupies state s_i exactly for a time τ_i between jumps i and i+1. In our system, s is a vector of 2N individual spins s_i taking values in {-1,+1}. We will restrict the jumps to single spin flips. Given a state s, an input x (external field) and a interaction matrix W, the transition where the ith spin flips has a rate k^(i)_s∝ e^-β s_i h_i, with h_i=(W s + x)_i. The actual proportionality term (identical for all spins), which determines the time scale of the jumps, is not relevant since we are only interested in steady state properties and average occupancy. To measure the average magnetization m_x for each input x, we first select a random state s_0 and proceed to construct a trajectory up to a final time T_max=5000 or, alternatively, a maximum number of jumps N_max = 10000. The average magnetization of individual spins m_x for that input is calculated after an initial transient time of T_transient = 200 is removed. Since we only consider the steady-state, we can evaluate the entropy production rate by summing the quantity Δσ_n≡logk_s_n+1^(i)/k_s_n^(i)=-2β s_n,ih_n,i for each jump s_n→ s_n+1, consisting of a single spin flip, and dividing by the total time <cit.>. §.§ Task performance and parameter optimization. Given an input-output pair (x^μ, y^μ) from the set 𝒟=(x, y), we measure task performance by first computing m_x^μ and then the error between the prediction ŷ^μ of the final readout and the target y^μ. We use two different readouts, with their respective loss functions: * Cross Entropy loss: the "logit" vector h^μ=W_out m_x^μ is passed through a Softmax function, thus getting the normalized estimated output probabilities p^μ_k=e^h^μ_k/∑_l=1^K e^h^μ_l, with K the number of output labels. The loss function then amounts at computing the cross-entropy with the targets y^μ: L=-1/M∑_μ=1^M log p^μ_y^μ; * MSE loss: we compute the loss as L=1/2M∑_μ=1^M ( y^μ - W_out m_x^μ)^2. 
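As a concrete reference for the simulation procedure described above, a minimal NumPy sketch of the single-spin-flip Gillespie dynamics and the steady-state entropy-production estimate is given below, before turning to the optimization of W_out and W. The unit rate prefactor, the default trajectory length, and the transient cutoff are illustrative choices, and W is assumed to have zero diagonal.

```python
import numpy as np

def gillespie_magnetization(W, x, beta=1.0, t_max=5000.0, t_transient=200.0, rng=None):
    """Single-spin-flip Gillespie simulation with flip rates k_i ∝ exp(-beta*s_i*(W s + x)_i).
    Returns the time-averaged magnetization after the transient and the EP-rate estimate."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    s = rng.choice([-1.0, 1.0], size=n)        # random initial configuration
    t, t_acc = 0.0, 0.0
    m_acc, sigma_acc = np.zeros(n), 0.0
    while t < t_max:
        h = W @ s + x
        rates = np.exp(-beta * s * h)          # flip rate of each spin
        k_out = rates.sum()
        dt = rng.exponential(1.0 / k_out)      # waiting time until the next jump
        if t > t_transient:                    # discard the initial transient
            m_acc += s * dt
            t_acc += dt
        i = rng.choice(n, p=rates / k_out)     # which spin flips
        sigma_acc += -2.0 * beta * s[i] * h[i] # Δσ = log(k_forward / k_backward)
        s[i] = -s[i]
        t += dt
    return m_acc / t_acc, sigma_acc / t
```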
The minimization of a loss L with respect to W_out was performed either via a linear solver (for MSE) or a multinomial classifier solver (for CE), using standard libraries in julia, which retrieve optimal W_out^* at fixed W, for the full input set. We used MSE loss for the binary classification in the Random Task, whereas we employed the CE loss for multi-label classification in the MNIST-1D task. §.§.§ Optimization of W: SPSA Due to the stochastic nature of the dynamics, the optimization of the interaction parameters W cannot be performed with standard gradient-based methods. Additionally, typical gradient evaluation through finite difference quickly becomes prohibitive as the number of independent parameters in W grows. To overcome this issue, we employ Simultaneous Perturbation Stochastic Approximation (SPSA) <cit.>, where the gradient is approximated via a single finite difference in a random direction of the parameter space. To evaluate the gradient ∇ℒ|_W, a random vector δ W is constructed at every update step. Two symmetrical parameters configurations are constructed: W^± = W ±δ W. Independent dynamics are simulated to produce the average spin magnetizations m^± and measure entropy production rates Σ^±. The average magnetizations m^± are thus used to compute the performance losses 𝒢^±. Finally the gradient approximation reads ∇ℒ|_W ≈[Σ^+ - Σ^- + α (𝒢^- - 𝒢^+) ] δ W/2|δ W|. To avoid being trapped into local minima, we performed several initializations for each value of α. § DETAILS ON 2-SPIN SYSTEM §.§ Steady state The stationary state can be computed by imposing the stationary condition in Eq. <ref> and the normalization of p, thus getting <cit.>: p(s|x)=e^-β(F + δ F) / Z, where F=x_1 s_1 + x_2 s_2 + J_s s_1 s_2 and δ F=-β^-1log[e^β J_a s_1s_2cosh(β(x_1-2 J_a s_2))/coshβ x_1+coshβ x_2+e^-β J_a s_1s_2cosh(β(x_2- 2 J_a s_1))/coshβ x_1+coshβ x_2]. § COMPUTATION-DISSIPATION BOTTLENECK IN LINEAR SYSTEMS Let us consider a system whose dynamics, in the presence of a constant input x, is described by a multi-dimensional Ornstein-Uhlenbeck process: ṡ=W s + x + σ_sξ with ⟨ξξ^T⟩ =δ(t-t')ℐ, where ℐ is the identity matrix. The (generally non-equilibrium) steady state distribution p(s|x) is a Gaussian with mean m_x=W^-1 x and whose covariance C solves the Lyapunov equation: W C +C W^T + σ_s^2ℐ = 0. Let us consider a noisy linear function y=w_0^T x+ξ_y, with ⟨ξ_y⟩ = 0 and σ^2_y=⟨ξ^2_y⟩. Assuming x is a Gaussian with mean zero and covariance C_x, one has ⟨ y^2⟩ = w^T_0C_xw_0+σ^2_y and C_sy=⟨ sy⟩ =-W^-1⟨ xy⟩. To compute the mutual information, we use I(s,y)=H(y)-H(y|s) and the relation for the entropy of a zero-mean, d dimensional Gaussian variable z with covariance C_z, H(z)=1/2log((2π e)^d C_z), to get: I(s,y)=1/2log( W^-1C_xW^-T+C)-1/2log(W^-1C_x|yW^-T+C) where we used that the covariance matrix C_s=⟨ ss^T⟩, averaged over the entire input distribution, equals C_s=W^-1C_xW^-T+C and that C_s|y=C_s-C_syC_y^-1C_ys, with C_s|y the conditional covariance matrix of s given y. As shown in <cit.>, the entropy production in the presence of a given input x can be computed in terms of an integral σ=∫_-∞^+∞dω/2πℰ(ω) where the density ℰ(ω) is given by: ℰ(ω)=1/2[C(ω)(C^-1(-ω)-C^-1(ω))], with C(ω) the Fourier Transform of the steady state auto-correlation C(t-t')=<s(t) s^T(t')>. The expressions derived thus far can be used to obtain the computation-dissipation bottleneck over any parametrization of the coupling matrix W by numerical optimization, for different values of the tradeoff parameter α. 
To exemplify the approach, the next section treats a 2-dimensional case where simple analytical expressions can be derived and a full enumeration of the parameter space is viable. §.§ An example of a computation-dissipation bottleneck in a 2-dimensional case Let us then consider the case of a 2-particle system with an interaction matrix of the form: W=([ -1 J_s+J_a; J_s-J_a -1 ]). Stability is guaranteed for Δ=1+J_a^2-J_s^2>0. The solution of the Lyapunov Eq. <ref> for an input noise with covariance σ_s^2ℐ is: C=σ_s^2/2Δ([ 1+J_s J_a+J_a^2 J_s; J_s 1-J_s J_a+J_a^2 ]). The entropy production can be evaluated using Eq. <ref> and the Fourier transform of the system's Green function: G(ω)=(iω-W)^-1=1/Δ-ω^2+2iω([ 1+iω J_s+J_a; J_s-J_a 1+iω ]). From the Fourier Transform of the steady-state auto-correlation C(ω)=G(ω)G^†(ω) we get for the entropy production density: ℰ(ω)=8ω^2J_a^2/|(1+iω)^2+J_a^2-J_s^2|^2. After integration in Eq. <ref>, and noting that C doesn't depend on x, we get for a stable system: Σ=2J_a^2. We show in Fig. <ref> the results for a system with σ_s=0.1 tasked to compute a linear function y=w_0^T x+ξ_y with w_0=[cos(π/6),sin(π/6)], and ξ_y a zero-mean Gaussian variable with standard deviation σ_y=0.1. The trade-off between entropy production and output information is controlled by the degree of asymmetry in the entries of the vector w_0. In a similar vein, each particle s_i can be used as a direct readout for the output y. In such a case, the average squared deviation MSE_i = <(y-s_i)^2> at steady state again shows a characteristic front with respect to entropy production.
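The closed-form expressions above are easily verified numerically: solving the Lyapunov equation reproduces C, and integrating the spectral density ℰ(ω) recovers Σ=2J_a^2. A minimal SciPy check is given below; the particular values of J_s, J_a, and σ_s are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.integrate import quad

J_s, J_a, sigma_s = 0.3, 0.5, 0.1
W = np.array([[-1.0, J_s + J_a],
              [J_s - J_a, -1.0]])
assert 1 + J_a**2 - J_s**2 > 0                    # stability condition Δ > 0

# Steady-state covariance: W C + C W^T + sigma_s^2 I = 0
C = solve_continuous_lyapunov(W, -sigma_s**2 * np.eye(2))

# Entropy production from the spectral density E(omega)
def E(w):
    return 8 * w**2 * J_a**2 / abs((1 + 1j * w)**2 + J_a**2 - J_s**2)**2

Sigma, _ = quad(E, -np.inf, np.inf)
Sigma /= 2 * np.pi
print(Sigma, 2 * J_a**2)                           # the two values should agree
```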
http://arxiv.org/abs/2307.01519v1
20230704070019
Deep Attention Q-Network for Personalized Treatment Recommendation
[ "Simin Ma", "Junghwan Lee", "Nicoleta Serban", "Shihao Yang" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Deep Attention Q-Network for Personalized Treatment Recommendation Simin Ma, Junghwan Lee, Nicoleta Serban, and Shihao Yang ================================================================== Tailoring treatment for individual patients is crucial yet challenging in order to achieve optimal healthcare outcomes. Recent advances in reinforcement learning offer promising personalized treatment recommendations; however, they rely solely on current patient observations (vital signs, demographics) as the patient's state, which may not accurately represent the true health status of the patient. This limitation hampers policy learning and evaluation, ultimately limiting treatment effectiveness. In this study, we propose the Deep Attention Q-Network for personalized treatment recommendations, utilizing the Transformer architecture within a deep reinforcement learning framework to efficiently incorporate all past patient observations. We evaluated the model on real-world sepsis and acute hypotension cohorts, demonstrating its superiority to state-of-the-art models. The source code for our model is available at <https://github.com/stevenmsm/RL-ICU-DAQN>. § INTRODUCTION Intensive Care Unit (ICU) treatment recommendation is a critical task, as it plays a vital role in the management and care of critically ill patients. Current state-of-the-art methods for providing ICU treatment recommendations primarily involve rule-based protocols and evidence-based clinical guidelines, which are informed by randomized controlled trials (RCTs), systematic reviews, and meta-analyses. However, RCTs may not be available or definitive for many ICU conditions <cit.>, and individual patients may respond differently to the same treatment strategy <cit.>. Therefore, more personalized and effective treatment plans that take into account the dynamic nature of patients' conditions and the potential presence of multiple comorbidities are needed in the ICU setting to benefit critically ill patients. Recent developments in artificial intelligence (AI) have demonstrated various successful applications in the healthcare domain, such as diagnosis <cit.>, treatment <cit.>, and resource management <cit.>. Reinforcement learning (RL), in particular, is well-suited for learning optimal individual treatment interventions. RL involves sequential decision-making in an environment with evaluative feedback, with the goal of maximizing an expected reward <cit.>. RL shares the same goal as clinicians: making therapeutic decisions to maximize a patient’s probability of a good outcome. Therefore, RL has many desirable properties and has already shown its success in providing sequential treatment suggestions in various ICU settings, such as optimal dosing of medication <cit.>, optimal timing of intervention <cit.>, optimal choice of medication <cit.>, and optimal individual target lab value <cit.>, among others. The findings from these studies all suggest that if physicians followed the RL policy, the estimated hospital mortality could be improved. However, the patient-clinician interactions in the aforementioned studies are all modeled as Markov decision processes (MDPs), while in practice, the pathology is often complex, and the "true" underlying states of the patients are latent and can only be observed through emitted signals (observations) with some uncertainty.
The challenge is that the ICU setting might not be a fully observable environment for RL agents; this could be due to a variety of factors such as noisy measurements, omission of relevant factors, and the incongruity of the frequencies and time-lags among the considered measurements <cit.>. To alleviate the issue of a partially observable environment, RL agents may need to remember (some or possibly all) previous observations. As a result, RL methods typically add some sort of memory component, allowing them to store or refer back to recent observations to make more informed decisions. For example, recurrent neural networks (RNNs) have been used to encode histories <cit.> or belief transitions <cit.>. However, this creates further issues: RNNs can be subject to gradient exploding/vanishing and can be difficult to train. Recent advancements in natural language processing have led to the development of RL studies that employ the powerful Transformer architecture <cit.>. For instance, <cit.> takes a step further from <cit.>'s deep RL approach by replacing the RNNs with Transformer. Similarly, <cit.> abstracts the RL problem as a sequential modeling problem, and solve it by a variant of Transformer architecture. While these methods have demonstrated improved performance, they lack interpretability between the states and actions, which is a crucial factor in the healthcare settings. This interpretability issue arises due to the black-box nature of the models, which makes it challenging to understand how the decisions are made. Thus, it is important to develop interpretable models that can provide insights into the reasoning behind the decision-making process in healthcare settings. In this study, we propose a novel data-driven reinforcement learning approach capable of dynamically suggesting optimal personalized clinical treatments for ICU patients. By efficiently memorizing past patient states and actions, our proposed deep reinforcement learning method can identify suitable upcoming actions and interpret the importance of relationships between actions and past observations. Compared to generic approaches for RL <cit.>, the proposed method's Transformer structure is tailor-made for healthcare data, offering improved interpretability. We demonstrate the robustness and efficiency of our proposed method on two ICU patient disease cohorts, sepsis and acute hypotension, and compare the performances against simpler and alternative benchmark approaches. The evaluation results illustrate that our proposed algorithm's learned optimal policy is able to outperform competing policies, with the help of the attention mechanism. Additionally, we observe that the optimal policy can focus on different past observations by visualizing the attention mechanism, further providing interpretability in our proposed approach. §.§ Generalizable Insights about Machine Learning in the Context of Healthcare * Enhancing the resolution of the patient's state space results in more comprehensive and valuable recommendations, ultimately leading to improved treatment effectiveness. In ICU settings, patient's “true” underlying health status might depend more than the current patient's observations, due to various factors. Therefore, it is important to improve the granularity of patient's state representation, when making treatment recommendations that depends on “true” patient underlying health status. 
Our proposed RL approach “enriches” the patient state space by letting the RL agent efficiently memorize the patient's current and past observations and actions in order to provide more effective and robust recommendations. * Interpretability in machine learning solutions to healthcare is important and challenging. Machine learning solutions often lack explainability and interpretability in a clinical sense, which hinders clinical insight and subsequent deployment in real-world settings. Our proposed approach is able to focus on the past observations (and actions) that indicate worse patient health when making treatment decisions, which aligns with the clinician's diagnostic process and brings RL-based treatment search closer to real-world deployment. § RELATED WORK §.§ Reinforcement learning for personalized treatment recommendation Reinforcement learning (RL) has garnered considerable attention in determining optimal dosages and treatments for patients in the intensive care unit (ICU). Various studies have investigated its application to different medical scenarios, including propofol dosing for surgical patients <cit.>, heparin dosing for patients with cardiovascular diseases <cit.>, intravenous (IV) fluid and vasopressor dosing for sepsis patients <cit.>, and morphine dosing for patients with at least one pain intensity score <cit.>. Sepsis, a leading cause of hospital deaths, is a disease that is costly to treat <cit.>. In addition to antibiotics and source control, the use of IV fluids to correct hypovolemia and vasopressors to counteract sepsis-induced vasodilation presents significant challenges. <cit.> initially formulated the personalized optimal dosing of IV fluids and vasopressors as an RL problem with the goal of improving patients' outcomes, solving it with value iteration over discretized states and actions. <cit.> extended the model to a continuous state space with discrete actions and proposed solving the problem using Deep Q-learning <cit.>. Subsequent studies proposed various re-formulations of the problem or explored different RL algorithms, including model-based RL algorithms <cit.>, a combination of a model-free deep RL approach and a model-based kernel RL approach <cit.>, and an extension to a continuous action space solved via policy-gradient algorithms <cit.>. To the best of our knowledge, our study is the first to consider modeling patient-clinician interactions as a partially observable environment. Inspired by the Transformer architecture <cit.>, we represent all prior patient observations and actions as the patient's state in our proposed deep RL approach. §.§ Background: Deep Q-Learning Reinforcement Learning is concerned with learning control policies for agents interacting with unknown environments. Such environments are often formalized as Markov Decision Processes (MDPs), described by a 4-tuple (𝒮, 𝒜, 𝒫, ℛ). At each timestep t, an agent interacting with the MDP observes a state s_t∈𝒮 and chooses an action a_t∈𝒜, which determines the reward r_t ∼ℛ(s_t, a_t) (reward distribution) and next state s_t+1∼𝒫(s_t, a_t) (state transition probability distribution). The goal of the agent is to maximize the expected discounted cumulative reward, 𝔼[∑_tγ^tr_t], for some discount factor γ∈ [0,1). Q-Learning <cit.> is a model-free off-policy algorithm for estimating the long-term expected return of executing an action from a given state in an MDP. These estimated returns are known as Q-values.
A higher Q-value indicates an action a is judged to yield better long-term results in a state s. Q-values are learned iteratively by updating the current Q-value estimate towards the observed reward plus the max Q-value over all actions a' in the resulting state s': Q(s,a) Q(s,a)+α(r+γmax_a'∈𝒜Q(s',a') - Q(s,a)) In more challenging domains, however, the state-action space of the environment is often too large to be able to learn an exact Q-value for each state-action pair. Instead of learning a tabular Q-function, Deep Q-Networks (DQN) <cit.> learns an approximate Q-function featuring strong generalization capabilities over similar states and actions, with the help of neural networks. DQN is trained to minimize the Mean Squared Bellman Error: L(θ) = 𝔼_s,a,r,s'[(r+γmax_a'∈𝒜Q(s',a';θ') - Q(s,a;θ))^2] where transition tuples of states, actions, rewards, and future states (s, a, r, s') are sampled uniformly from a replay buffer, D, of past experiences while training. The target r+γmax_a'Q(s',a';θ') invokes DQN’s target network (parameterized by θ'), which lags behind the main network (parameterized by θ) to produce more stable updates. § METHODS §.§ Problem formulation When an environment does not emit its full state to the agent, the problem can be modeled as a Partially Observable Markov Decision Process (POMDP), described by 6-tuple (𝒮, 𝒜, 𝒫, ℛ, Ω, 𝒪). The two additional sets, Ω, 𝒪, represents the observations set and state-observation distributions, respectively. In particular, at each time step t, after the agent (in state s_t∈𝒮) interacts with the environment by taking action a_t∈𝒜 and obtain the reward r_t ∼ℛ(s_t, a_t), the agent moves into the next state s_t+1∼𝒫(s_t, a_t), but no longer observes true system state and instead receives an observation o_t+1∈Ω, generated from the underlying system state s_t+1 according to the probability distribution o_t+1∼𝒪(s_t+1). In ICU setting, s_t can be interpreted as the “true” underlying patient's health status, while o_t is the current observable measurements (vital signs, demographics, etc.). Because agents in POMDPs do not have access to the environment’s full state information, they must rely on the observations o_t∈Ω. In this case, DQN may not learn a good policy by simply estimating the Q-values from the current patient's observation, o_t, since it may not be representative enough for the “true” underlying patient's health status, s_t. Instead, one often needs to consider some form of all patient's historical observations and actions, for instance {(o_0, a_0), (o_1, a_1), …, (o_t-1, a_t-1)}, to approximate the true current state, s_t. Because the history grows indefinitely as the agent proceeds in a trajectory, efficient ways of encoding the history is needed, for example using an agent's belief <cit.>, using recurrent neural network and its variants <cit.>, etc. Here, we incorporate the recently developed Transformer's attention mechanisms <cit.> into the Deep Q-Networks, which is able to incorporate histories into the Q-function and reflect the relative importance between the upcoming actions and the histories. §.§ Proposed Method: Deep Attention Q-Network The transformer architecture <cit.>, originally introduced for sequence to sequence translation in natural language applications, utilizes attention mechanism <cit.>, which is able to “focus” on different portions of the input when translating to outputs. 
With its strong interpretability and computational efficiency, the transformer architecture, originally formed as an encoder-decoder structure, is now broadly used in various applications, using either the encoder <cit.>, the decoder <cit.>, or reconstructed architectures <cit.>. The Transformer and its attention structures seem like a natural fit to represent the histories in POMDPs, as they encapsulate several inductive biases nicely. Therefore, we propose the Deep Attention Q-Network (DAQN), obtained by “inserting” the attention mechanisms into the traditional Deep Q-Network to learn the approximate Q-function. A high-level overview of the proposed DAQN is shown in Figure <ref>, and the detailed workflow is presented in the caption. Similar to the original Transformer's decoder structure <cit.>, each attention-like block in DAQN features two main submodules: encoder-decoder attention and a position-wise feedforward network. First, in the encoder-decoder attention, the fixed start token (serving as a dummy variable) is projected to queries Q, and the positionally encoded and embedded observation histories are projected to keys K and values V, through the learned weight matrices W^Q, W^K, W^V, respectively. Then, via “Scaled Dot-Product Attention” <cit.>, a softmax function is applied to the dot products between queries and keys, which yields the attention weights used to obtain a weighted sum over the values. At a high level, the attention weights can be interpreted as the “importance” weight of each observation history relative to the Q-values of state-action pairs. The output from the encoder-decoder attention then passes through a fully connected feed-forward network, which is applied to each position separately and identically. After each submodule, that submodule's input and output are combined and followed by layer normalization <cit.>. We additionally incorporate the Dueling Q-network architecture <cit.> and the Double-Deep Q-network architecture <cit.> into our proposed architecture, and also use Prioritized Experience Replay <cit.> to accelerate learning, similar to prior RL sepsis studies <cit.>. More implementation details are presented in Appendix <ref>. § COHORT We obtain the data from the “Medical Information Mart for Intensive Care database” (MIMIC-III) version 1.4 <cit.>, a publicly available database consisting of deidentified health-related data associated with patients who stayed in critical care units of the Beth Israel Deaconess Medical Center between 2001 and 2012. The database includes information such as demographics, vital sign measurements made at the bedside, laboratory test results, procedures, medications, etc. In this study, we focus on two ICU patient cohorts: sepsis patients and acutely hypotensive patients. §.§ Sepsis Patients §.§.§ Data Extraction We extract all adult sepsis patients in the MIMIC-III database that fulfill the following criteria: (1) patients who were older than 18 years old; (2) patients with a length of stay over 24 hours (to ensure sufficient data for analysis); (3) patients diagnosed with sepsis according to the Sepsis-3 criteria <cit.>. Also, if a patient had multiple admissions with sepsis, only the first admission was analyzed. After excluding patients with relevant variables missing (see Table <ref>), we have a total of 6,164 patients. For each patient, we extract the relevant physiological parameters including demographics, lab values, vital signs, and intake/output events (see Table <ref>).
Then, each patient's record is aggregated into windows of 4 hours, with the mean or sum being recorded when several data points were present in one window (same as prior studies <cit.>). This yielded a feature vector for each patient at each timestep. For each patient i∈{1,…,N}, at each timestep t, the current feature vector and the previous feature vectors (and previous actions) form the observation o_t^(i) in the underlying POMDP. §.§.§ Action and Reward We focus on the action space defined by two treatment interventions: intravenous fluid (IV fluid) and vasopressor, given the uncertainty surrounding them in the clinical literature <cit.> and their crucial impact on a patient's eventual outcome. We define a 5× 5 action space for the medical interventions covering the space of intravenous (IV) fluid and maximum dose of vasopressor in each 4-hour period (4 per-drug quartiles, and a special case of no drug), similar to prior studies' action space discretization <cit.>. Thus, we end up with a total of 25 actions, each representing the intervention as a tuple of Input 4H and Max Vaso in each 4-hour period. To train an RL agent for sepsis management, we adopted a similar reward function as in <cit.>, which uses the Sequential Organ Failure Assessment (SOFA) score <cit.> and the lactate level of the patients. At a high level, higher SOFA scores indicate greater organ dysfunction and are predictive of ICU mortality, while lactate levels measure cell hypoxia, which is higher in septic patients. The reward function penalizes high SOFA scores and lactate levels at time t, as well as positive changes in these scores. Conversely, positive rewards are given for decreased SOFA scores and lactate levels, indicating improved patient states. Further details on the reward function are in Appendix <ref>. §.§ Acutely Hypotensive Patients §.§.§ Data Extraction Following a prior study <cit.>, we extract the acutely hypotensive patients in the MIMIC-III database that fulfill the following criteria: (1) patients who were older than 18 years old; (2) patients with a length of stay over 24 hours (selecting the initial ICU admission only); (3) patients with seven or more mean arterial pressure (MAP) values of 65 mmHg or less, which indicated probable acute hypotension. For each patient, we extract the relevant physiological parameters (Table <ref>), and we limit ourselves to using only information captured during the initial 48 hours after admission. We have a final cohort consisting of 3910 distinct ICU admissions. §.§.§ Action and Reward Following prior studies <cit.>, we choose to focus on the action space defined by two treatment interventions: fluid boluses and vasopressor, defining a 4× 4 action space (3 per-drug quartiles, and a special case of no drug). We adopted a similar reward function as <cit.> for training an RL agent for the management of acute hypotension. The reward at time step t of a patient is dependent on the Mean Arterial Pressure (MAP) and urine output at time t. The detailed formula is provided in Appendix <ref>. § RESULTS In this section, we show how the proposed DAQN dynamically suggests optimal personalized healthcare treatments in intensive care units (ICUs). In this study, we focus on off-policy learning, which means that our RL agent aims to learn an optimal policy (i.e., optimal medication dosages) from data that were already generated by following the clinician policy (see Section <ref>). For each cohort, we learn the optimal DAQN policy and conduct evaluation comparisons against other benchmark policies.
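Before turning to the evaluation, the per-drug dose discretization described above can be sketched roughly as follows; the dose arrays, units, and bin edges are hypothetical placeholders (in the study the edges are quartiles computed over the cohort's observed non-zero doses), so this is an illustration rather than the authors' preprocessing code.

import numpy as np

def make_dose_bins(doses):
    # Per-drug binning: action 0 = no drug, actions 1-4 = dose quartiles
    # computed over the non-zero doses observed in the cohort.
    nonzero = doses[doses > 0]
    return np.quantile(nonzero, [0.25, 0.5, 0.75])          # three inner edges

def discretize(dose, edges):
    return 0 if dose == 0 else 1 + int(np.searchsorted(edges, dose))  # 0..4

def joint_action(iv_dose, vaso_dose, iv_edges, vaso_edges, n_vaso_bins=5):
    # Map (IV fluid, max vasopressor) in a 4-hour window to one of 5 x 5 = 25 actions.
    return discretize(iv_dose, iv_edges) * n_vaso_bins + discretize(vaso_dose, vaso_edges)

# Hypothetical cohort doses (IV fluid in mL, vasopressor in mcg/kg/min).
iv_all = np.array([0.0, 30.0, 120.0, 500.0, 1000.0, 80.0, 0.0, 250.0])
vaso_all = np.array([0.0, 0.05, 0.2, 0.6, 0.0, 0.1, 0.3, 0.0])
iv_edges, vaso_edges = make_dose_bins(iv_all), make_dose_bins(vaso_all)
a = joint_action(500.0, 0.2, iv_edges, vaso_edges)           # integer in {0, ..., 24}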
Proper quantitative evaluation of a learned policy is crucial before deployment, especially in healthcare. Off-policy evaluation (OPE) in the reinforcement learning context is typically used as the performance metric for comparison <cit.>. Here, we employ the weighted doubly-robust (WDR) method to quantify the performance of RL policies <cit.>. We include the proposed DAQN policy, the DRQN (Deep Recurrent Q-Network) policy <cit.>, the DQN (Deep Q-Network) policy <cit.>, the clinician policy, and a random policy for comparison. The DRQN and DQN policies use a vanilla LSTM (long short-term memory) network <cit.> and feedforward neural networks <cit.>, respectively, for learning the approximate Q-function (see Equation <ref>). The clinician policy comprises the actions that clinicians took in the historical data. For the random policy, actions are uniformly sampled between 0 and a safety upper bound. More details on the benchmark policies are presented in Appendix <ref>. To account for randomness, we performed 50 experiments with a unique train/test set split in each experiment. §.§ Sepsis Patients The results are presented in Table <ref> and the box plot in Figure <ref>. The quantitative results demonstrate that our proposed DAQN policy is able to outperform the benchmark DRQN policy, which also incorporates historical patient observations for the state representation, in both mean and standard deviation, while also outperforming the traditional DQN policy, the clinician policy, and the random policy. The value of the clinician's policy is estimated with high confidence, as expected. The DAQN- and DQN-based policies all have relatively tight bounds on the estimated value, larger than the value of the clinician's policy (the 1st-to-3rd-quartile box is small and lies above the mean of the clinician's policy), in contrast to the random policy's large box extending to well below the clinician's policy. The random policy's values are distributed evenly around zero, which is expected, as the reward distribution is also approximately centered around zero. The DRQN policy is also able to outperform the clinician's policy and the baseline random policy, and it exhibits a larger mean than the DQN policy, but it exhibits the largest variance. In addition, we examine the interpretability of our proposed DAQN policy by focusing on the attention weights over the input historical patient observations. We observe that the attention weights, produced in the “Encoder-Decoder Attention” block (in Figure <ref>), are positively associated with the patient's SOFA score, change in SOFA score, and lactate level, which are all important indicators in sepsis patients, highly associated with mortality and morbidity <cit.>. Numerous studies <cit.> show that the SOFA score is highly sensitive and predictive in the diagnosis of sepsis. For instance, initial and highest scores of more than 11 or mean scores of more than 5 correspond to mortality of more than 80%, while the change in SOFA score (delta SOFA score) is also significantly associated with mortality and patient discharge <cit.>. Indeed, with the designed reward function that penalizes high SOFA scores, positive changes in SOFA scores, and high lactate levels, the attention weights learn to “focus” on the prior observations and actions that exhibit a high SOFA score, change in SOFA score, or lactate level, respectively. Table <ref> shows the correlation coefficients between the average attention weights (across attention heads) in each layer and the SOFA score, change in SOFA score, and lactate level, respectively.
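A minimal sketch of how such layer-wise correlations between attention weights and a clinical marker could be computed is given below; the array shapes and toy values are hypothetical, and the snippet is only meant to make the reported quantity concrete rather than to reproduce the study's pipeline.

import numpy as np

def attention_marker_correlation(attn_weights, marker):
    # attn_weights, marker: arrays of shape (n_patients, n_history_steps).
    # Attention is assumed to have been averaged over heads upstream.
    # Returns the mean Pearson correlation across patients.
    corrs = []
    for w, m in zip(attn_weights, marker):
        if np.std(w) > 0 and np.std(m) > 0:      # skip degenerate sequences
            corrs.append(np.corrcoef(w, m)[0, 1])
    return float(np.mean(corrs))

# Toy example: 2 patients, 9 history steps each (values are made up).
attn = np.array([[0.05, 0.05, 0.10, 0.10, 0.10, 0.15, 0.15, 0.15, 0.15],
                 [0.02, 0.03, 0.05, 0.10, 0.10, 0.10, 0.20, 0.20, 0.20]])
sofa = np.array([[4, 4, 5, 6, 6, 7, 8, 9, 9],
                 [2, 2, 3, 4, 5, 5, 7, 8, 8]])
print(attention_marker_correlation(attn, sofa))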
Then, we select three example patients and visualize the average attention weights in each layer together with the SOFA score, delta SOFA score, and lactate level (the elements that have the highest correlation coefficient with each layer's average attention weight, see Table <ref>), in Figures <ref>, <ref>, <ref>. For example, the attention weights of example patient 1 in <ref> exhibit strong trend-matching behavior with the SOFA score, delta SOFA score, and lactate level, and learn to “focus” (high attention weights) on past observations that indicate a worse patient health status. This further confirms DAQN's ability to focus on more “important” and severe historical observations when learning the optimal policy. More details are presented in Appendix <ref>. §.§ Acutely Hypotensive Patients Managing hypotensive patients in the ICU is a challenging task that lacks standardization due to the high heterogeneity of patients, which often leads to high morbidity and mortality rates <cit.>. In light of the limited evidence available to guide treatment, RL offers a promising approach to improve strategies for managing these patients <cit.>. To evaluate the efficacy of RL policies for acutely hypotensive patients, we conducted experiments on the MIMIC-III dataset, and the results are presented in Table <ref> and the box plot in Figure <ref>. We used the same hyperparameters for DAQN and the benchmark policies as in the previous section. Our results show that, similar to the sepsis patient cohort, the DAQN policy outperforms the DRQN and DQN policies, with its 3rd-quartile box located above the means of the DRQN and DQN policies. This performance improvement is attributed to the Transformer architecture and attention mechanism employed by the DAQN, which enable it to focus on and efficiently memorize past patient observations and actions as the current patient health status representation more robustly than the DRQN, whose recurrent networks can suffer from vanishing/exploding gradient problems. However, the DRQN policy shows a higher mean than the DQN policy, which underscores the importance of modeling the ICU setting as a POMDP, treating the current patient vitals and static information as observations rather than as the “true” underlying patient health status. The estimated values of the DAQN, DRQN, and DQN policies are relatively tight, and larger than those of the clinician policy and the randomized policy. The expected reward of the clinician policy is estimated with high confidence, while the randomized policy's reward distribution is similar to that of the entire cohort. Overall, our results demonstrate the potential of RL policies, especially the DAQN, to improve the management of acutely hypotensive patients in the ICU. § DISCUSSION Various pioneering studies have explored applying reinforcement learning algorithms to the search for optimal clinical treatment, such as for sepsis patients <cit.> and for acutely hypotensive patients <cit.>, demonstrating the potential of using RL to improve ICU patient outcomes. However, these studies are limited to a coarse-grained state space that depends only on the patient's current observations. Although this representation of the patient's state space is intuitive and straightforward for determining current actions, patients' prior observations and actions are also important factors in treatment intervention decisions.
Thus, simply modeling the patient-clinician interactions as MDPs could be prone to mis-specification of the “true” underlying patient states, potentially leading to sub-optimal actions and, in turn, sub-optimal results (rewards). In order to make personalized treatment design more clinically meaningful, we proposed to “enrich” the state space by modeling the ICU setting as a POMDP and letting the RL agent efficiently memorize patients' current and prior observations and actions. By incorporating the attention mechanism, our proposed RL algorithm is able to outperform baseline benchmark policies and to provide interpretability that is similar to the clinician's diagnostic process. By extending the state space with prior observations, we learn the optimal policy using our proposed RL algorithm, which can provide more meaningful and higher-resolution decision support to patients. For quantitative policy evaluation, we compare our optimal policy with other policies learned by alternative benchmark RL algorithms, as well as baseline policies, through off-policy evaluation with the WDR estimator <cit.>. The results from off-policy evaluation show that the proposed RL policy performs competitively against an alternative benchmark policy that uses a simpler memorization technique (LSTM), and outperforms other RL policies that do not incorporate prior observations (and actions). The proposed RL policy also provides a better expected reward compared to the clinician policy and the random policy baseline. Moreover, similar to the clinician's decision process and thinking, the proposed RL policy focuses on the prior observations (and actions) that indicate worse patient health, in combination with current observations, when making treatment dosage decisions (see Figures <ref>, <ref>, <ref>). With a clinician-guided reward function and the attention mechanism, our proposed RL policy is able to shift focus among a patient's previous observations for subsequent treatment decisions. This improvement also brings reinforcement learning-based treatment search closer to real-world deployment. Potential avenues for future work include a more thorough discussion with clinicians to potentially make the observation and action histories even more representative, and architectural improvements that could provide a more detailed interpretation of the patient-intervention relation. Limitations As mentioned by prior studies <cit.>, evaluating RL policies with off-policy evaluation is challenging, as all the available data are sampled offline (i.e., by following the clinician's policy). Since the evaluation policies are deterministic, the importance weights used in off-policy evaluation will only be non-zero if the evaluation policy recommends the same treatment as the clinician's policy. This results in high-variance estimates of the quality of the evaluation policy. Future examination from both the policy learning and policy evaluation aspects should be considered. Another limitation of this study is that the dosages are discretized into per-drug quartiles, with each action representing dosages in a particular range. However, each quartile includes a wide range of dosages, which can make it complicated in practice for clinicians to decide on the exact dosages of IV fluids and vasopressors to use. <cit.> proposed an RL-based solution via Deep Deterministic Policy Gradient <cit.>, which is a model-free off-policy algorithm for learning continuous actions.
Therefore, it is also important to further investigate policy-gradient algorithms that efficiently memorize patients' prior observations and actions while supporting continuous actions. § APPENDIX §.§ Implementation Details §.§.§ Proposed DAQN modifications Similar to prior RL studies on sepsis <cit.>, we incorporate the Dueling Q-network architecture <cit.> and the Double-Deep Q-network architecture <cit.> into our proposed DAQN, to combat a few shortcomings of traditional Q-networks. The Dueling Q-network architecture <cit.> produces separate state-value (V_t) and action-advantage (A_t) streams through two different linear layers, instead of a single linear layer (see Figure <ref>). This separates the effect on the Q-values of a patient being in a good underlying state from that of a good action being taken, improving learning <cit.>. Lastly, the state value and action advantages are combined into a set of Q-values, Q_t, with the dimensionality of the action space. The Double-Deep Q-network architecture <cit.> helps correct the overestimation of Q-values by using a second target network to compute the Q-values, i.e., using two different networks for the maximum calculation and the Q-value estimation in the mean squared Bellman error (see Equation <ref>). Thus, we denote our proposed method as the Dueling-Double-Deep-Attention-Q-Network (Dueling-DDAQN), shown in the flow chart in Figure <ref>. Table <ref> lists the hyperparameter combinations used for the Dueling-DDAQN architecture throughout the experiments. §.§.§ Benchmark methods details The Deep Recurrent Q-Network (DRQN) structure used in the evaluation comparisons uses a vanilla LSTM structure, with hidden and cell units of dimension 128 <cit.>. The Deep Q-Network (DQN) structure used in the evaluation comparison uses two feedforward neural network layers, with a dimensionality of 128. Both DRQN and DQN also incorporate the Dueling Q-network architecture <cit.> and the Double-Deep Q-network architecture <cit.> for fair comparison, i.e., they are denoted as Dueling-DDRQN and Dueling-DDQN. Also, for both DAQN and DRQN, we use a look-back window of 9, i.e., we incorporate up to k=9 historical patient observations as the current state representation (see <ref>). The final mean-squared Bellman error loss function used for learning the DAQN, DRQN and DQN policies is as follows: L(θ) = 𝔼_s,a,r,s'[(r+γ Q(s', argmax_a'∈𝒜Q(s',a';θ);θ') - Q(s,a;θ))^2] where θ are the weights used to parameterize the main network, and θ' are the weights used to parameterize the target network. We use a train/test split of 80/20 and ensure that a proportionate number of patient outcomes are present in both sets. During training, we sample transitions of the form {s, a, r, s'} from our training set using the Prioritized Experience Replay scheme <cit.>, perform feed-forward passes on the main and target networks to evaluate the output and loss, and update the weights in the main network via backpropagation. Training was conducted for 10000 batches, with batch size 128. The reward discount factor for the DAQN, DRQN and DQN policies is set to γ=0.99 (see Equation <ref>). §.§ More details on problem setup and data pre-processing functions §.§.§ Selected Features §.§.§ Sepsis For all sepsis patients, we include 42 dynamic variables and 5 static variables as observations to define a patient's state in each encounter (4-hour period). Among the variables, IV fluids and vasopressors define the action space and the rest define the state space in the RL setup. The dynamic variables are shown in Table <ref>.
The static variables are: gender, mechanical ventilation, readmission, age, and weight. §.§.§ Acute Hypotension We include 22 dynamic variables and 5 static variables for acutely hypotensive patients; see Table <ref>. The static variables are: gender, mechanical ventilation, readmission, age, and weight. §.§.§ Reward Function §.§.§ The Sepsis Reward Function We adopted the reward function from <cit.> for training an RL agent for the management of sepsis. The reward function is computed in three parts such that: reward_t = reward^(1)_t + reward^(2)_t + reward^(3)_t where reward^(1)_t = -0.025 if SOFA_t=SOFA_t-1 and SOFA_t>0, reward^(2)_t = -0.125(SOFA_t-SOFA_t-1), and reward^(3)_t = -2tanh(Lactate_t-Lactate_t-1). Note that the SOFA score can be easily derived from the patient variables: PaO2, FiO2, Platelets, Total Bili, Mean BP, Max Vaso, GCS, Creatinine, and Output 4H (see Table <ref> for details of the above variables). §.§.§ The Acute Hypotension Reward Function We adopted the reward function from <cit.> for training an RL agent for the management of acute hypotension. The reward at time step t is dependent on the Mean Arterial Pressure MAP_t and is given as: reward_t = 0 if MAP_t > 65; -0.05(65-MAP_t)/5 if 60<MAP_t ≤ 65; -0.1(60-MAP_t)/5 - 0.05 if 55<MAP_t ≤ 60; -0.85(55-MAP_t)/15 - 0.15 if MAP_t ≤ 55; but this reward value is also overridden by the urine output urine_t of the patient, with reward_t = 0 if urine_t > 30 and MAP_t > 55. §.§ More Results §.§.§ Evaluation Results In this section, we present more evaluation results. Figures <ref> and <ref> show the evaluated policies' value estimates via the WDR estimator <cit.>, for sepsis patients and acutely hypotensive patients, respectively. §.§.§ Interpretability In addition to off-policy evaluations, our proposed DAQN policy is able to “focus” on a patient's different historical observations when estimating the Q-values. To further investigate and visualize this, we first compute the correlation coefficient of the averaged attention weights in each layer (across attention heads) with the SOFA score, delta SOFA score, and lactate level, respectively. For example, for layer 1, we obtain the averaged attention weights across two attention heads (see the hyperparameter choices in Table <ref>), and compute their correlation coefficients with the SOFA score, delta SOFA score, and lactate level of all the historical observations. Additionally, we select three example patients and visualize the average attention weights in each layer alongside the SOFA score, delta SOFA score, and lactate level, respectively (Figures <ref>, <ref>, <ref>). Here, all three patients' observation sequences are selected towards the end of their records, i.e., the last time step in the visualization is also the end of their ICU records.
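For concreteness, the two reward definitions above can be transcribed directly into code as follows; this is a sketch that follows the formulas as written (including the urine-output override), with the boundary handling of the piecewise MAP cases taken literally from the stated inequalities.

import numpy as np

def sepsis_reward(sofa_t, sofa_prev, lactate_t, lactate_prev):
    # Penalize persistent non-zero SOFA, increases in SOFA, and increases in lactate.
    r1 = -0.025 if (sofa_t == sofa_prev and sofa_t > 0) else 0.0
    r2 = -0.125 * (sofa_t - sofa_prev)
    r3 = -2.0 * np.tanh(lactate_t - lactate_prev)
    return r1 + r2 + r3

def hypotension_reward(map_t, urine_t):
    # Urine-output override stated above: zero reward when output is adequate
    # and MAP is above 55 mmHg.
    if urine_t > 30 and map_t > 55:
        return 0.0
    if map_t > 65:
        return 0.0
    if map_t > 60:
        return -0.05 * (65 - map_t) / 5
    if map_t > 55:
        return -0.1 * (60 - map_t) / 5 - 0.05
    return -0.85 * (55 - map_t) / 15 - 0.15

print(sepsis_reward(7, 6, 2.4, 2.0), hypotension_reward(58.0, 10.0))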
http://arxiv.org/abs/2307.03391v1
20230707053005
On Adaptive Portfolio Management with Dynamic Black-Litterman Approach
[ "Chi-Lin Li", "Chung-Han Hsieh" ]
q-fin.PM
[ "q-fin.PM", "math.OC", "q-fin.CP", "q-fin.RM" ]
^**Chi-Lin Li was with the Double Specialty Program of Management and Technology, at National Tsing Hua University. He is currently with the Program of Mathematical Finance and Financial Technology, Questrom School of Business, Boston University, U.S.A. Rafik B. Hariri Building, 595 Commonwealth Avenue, Boston, MA 02215, U.S.A. Cell: +886 974268156 ^†Chung-Han Hsieh is with the Department of Quantitative Finance, National Tsing Hua University, Taiwan. No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan 300044, R.O.C. Cell: +886 933211956 ^†Correspondence: Chung-Han Hsieh On Adaptive Portfolio Management with Dynamic Black-Litterman Approach (This paper is partially supported by the Ministry of Science and Technology (MOST), Taiwan, under Grant MOST 111–2813–C–007–021–H.) Chi-Lin Li^** and Chung-Han Hsieh^† This paper presents a novel framework for adaptive portfolio management that combines a dynamic Black-Litterman optimization with the general factor model and Elastic Net regression. This integrated approach allows us to systematically generate investors' views and mitigate potential estimation errors. Our empirical results demonstrate that this combined approach can lead to computational advantages as well as promising trading performances. §.§ INTRODUCTION The Black-Litterman (BL) approach, first introduced by <cit.>, incorporates investors' views into market equilibrium to predict the expected return of underlying assets. Since then, the BL approach has been widely applied and has undergone various developments. For instance, <cit.> applied the BL model to global portfolio optimization, <cit.> incorporated the BL approach into trading strategies, and <cit.> studied an extension of BL beyond the mean-variance framework to use all available information, including the equilibrium model, the investor's view, and the data. Typically, the investors' views used in the BL model are formed subjectively using the information provided by some financial analysts, e.g., <cit.>. While some studies, such as <cit.>, have attempted to use sentiment analysis techniques to generate the views objectively, it often requires a large amount of linguistic data. This data may not be available when timely investment decisions are needed. To address this, in this paper, we propose the use of factor models, as seen in works by <cit.>, to assign the views in the BL model objectively and systematically. Additionally, it is known that obtaining expected returns with small estimation errors is challenging. The difficulty is compounded when using the optimization technique to determine the portfolio weights since the resulting “optimal” portfolio may allocate significant capital to assets with a high estimation error in expected return, as noted by <cit.>. To this end, recent studies, e.g., <cit.>, have shown that these errors can be reduced by using machine learning techniques such as ridge regression; see <cit.>. This paper extends the analysis to involve the Elastic Net, a regularization technique that combines the strengths of both ridge regression and LASSO regression, in the estimation. By incorporating this, we have a more flexible and robust approach to error reduction.
§.§ Generating Time-Varying Views The traditional BL approach uses investors' views once and does not update them dynamically, which may limit its ability to reflect the latest market information. In contrast, several approaches are proposed to update an investor's view dynamically and have been shown to have better short-term trading performance in previous studies; see <cit.>. As a result, we propose a novel three-phase sliding window approach to update investors' views dynamically. Our empirical studies show that mean-variance portfolios with time-varying views have the potential to outperform portfolios without the BL approach without time-varying views. §.§ PRELIMINARIES Consider a portfolio consisting of n ≥ 1 assets. The general factor model states the relationship between return on the Asset i and factors f_j for j=1,2,…, J as follows: For i=1,2,…,n, r_i = α_i + F^T β_i + ε_i where α_i is the intercept constant, F:=[f_1 ⋯ f_J]^T is the factor vector with J < n, and β_i := [β_i,1 ⋯ β_i,J]^T are the factor loadings, representing the change on the return of Asset i per unit change in factor, and ε_i is the specific error factor for Asset i, which is assumed to be a white noise series and uncorrelated with the factors f_j and other specific factors. We assume that 𝔼[ε_i] = 0 for all i, cov(f_j, ε_i) = 0 for all j,i, and lastly, cov(ε_i, ε_j) = σ_i^2 if i=j, and zero otherwise.[ The joint model for n assets is r = α + F^Tβ + ε, k=1,2,…, where r:=[r_1 … r_n]^T,α := [α_1, …, α_k]^T, β:= (β_ij) is a n × J factor-loading matrix, and ε:=[ε_1 … ε_n]^T is the error vector with cov(ε) :=D:= diag(σ_1^2,…,σ_k^2}.] Typical factor models used in finance include Fama-French's three-factor or five-factor model, see <cit.>, Cahart's four-factor model, see <cit.>. As seen later in this paper, we will specifically adopt the latter for illustrative purposes. In general, to obtain the parameters α and β, one solves an ordinary least squares (OLS) problem; see <cit.>. However, the approach is known to be sensitive to outliers and may suffer significantly from overfitting issues. To this end, following <cit.>, we consider Elastic Net in the estimation to assure the flexibility and robustness of our estimates; see the next section to follow. §.§ PROBLEM FORMULATION This section considers two main problems that are central to our subsequent development. The first involves estimating the expected return using the BL approach with Elastic Net. The second pertains to determining optimal portfolio weights using the mean-variance criterion. §.§ Extended BL Approach with Elastic Net The classical BL approach is driven by two key factors: market equilibrium and investor views, based on the Capital Asset Pricing Model (CAPM). Let Π be the implied returns satisfying Π := μ +ε_Π, ε_Π∼𝒩(0, Q), where μ is the true expected return vector to be determined, 𝒩(0, Q) is a normal distribution with zero mean and covariance matrix Q:=τΣ, representing our confidence in estimating expected returns. The small scale parameter τ≪ 1,[A typical choice of parameter τ is between 0.01 and 0.05; see <cit.>.] and Σ is the covariance matrix of returns. The investor views, denoted by a vector q∈ℝ^K with K views, are incorporated with the mean return μ and can be expressed with the linear equation; i.e., q := Pμ+ε_q,ε_q∼𝒩(0,Ω), where P ∈ℝ^K× n represents K views of n assets with K < n, and Ω∈ℝ^K× K expresses the confidence (variance) of K views. 
To incorporate the investors' views with the market equilibrium, we consider y := Bμ+ε_y, ε_y ∼𝒩(0,V), where y:= [ Π; q ], B := [ I_N× N; P ] and V := [ Q 0; 0 Ω ] where I_N× N is the N× N identity matrix. Then, we seek an optimal estimator for the true expected returns, call it μ, that solves the Elastic Net-based weighted least-squares (WLS) problem min_μ (y - Bμ)^T V^-1(y - Bμ)+λ_2 μ^2_2+λ_1 μ_1, where λ_1,λ_2 ≥ 0 and 0≤λ_1+λ_2 ≤ 1 are fixed coefficients for the regularization terms, and μ_p is the ℓ_p-norm which satisfies μ_p= (∑_i=1^n |μ_i|^p)^1/p for p ∈{1,2}. The key idea for incorporating the Elastic Net into the ordinary WLS regression is to address both heteroscedastic errors and potential high dimensionality on the factors, which may lead to a more robust and accurate model. If q = Ω = 0, i.e., the investor has no views or zero confidence in the views and λ_i =0 for i ∈{1,2}, then the solution to Problem (<ref>), call it μ, becomes μ = Π. In addition, if λ_i = 0, one obtains the Black-Litterman estimates for expected return μ = Π + QP^T (PQP^T + Ω)^-1 (q - P Π) and for covariance matrix Σ = Σ + (Q^-1 + P^TΩ^-1 P)^-1. A more detailed discussion can be found in <cit.>. §.§ Mean-Variance Portfolio Optimization Let w:=[w_1 ⋯ w_n]^T ∈ℝ^n be the portfolio weights. We consider Markowitz's mean-variance (MV) model, see <cit.>. That is, max_w∈𝒲μ^T w -ρw^T Σw where 𝒲:={w: w_1 =1, |w_i| ≤ W ∈ [0,1] } for some W ∈ [0,1], w_1:=∑_i=1^n |w_i| is the ℓ^1-norm, μ and Σ are obtained via the BL approach described previously, and ρ >0 is a risk aversion coefficient, which is typically selected within an interval [1, 10], see <cit.>. In the next section, we shall discuss how to dynamically estimate μ and Σ. §.§ TIME-VARYING VIEWS AND WEIGHTS This section provides a novel three-phase sliding window algorithm for generating time-varying views and optimal weights: * Estimating α and β. * Use α and β to generate time-varying views q and estimate μ and Σ. * Calculate the optimal portfolio weight w^*. Specifically, fix M ≥ 1 and set starting time stamp t ≥ 0. For t-1,t-2,…, t-M, we first solve the Elastic Net regression problem to obtain intercept term α and factor loadings β. Applying these (α,β) in the factor models, we generate the views q: q = α + F^T β + ε_q, where ε_q ∼𝒩(0, Ω) is the specific error factor of views q defined in Equation (<ref>). Then we solve the Elast Net-based WLS Problem (<ref>) to obtain μ and Σ. Additionally, to incorporate the fact that the covariance matrix is possibly time-varying, we follow <cit.> to use the Exponentially Weighted Moving Average (EWMA) model. Specifically, let η∈ [0,1] be the decay factor. The EWMA model for estimating the covariance matrix Σ is as follows: Σ = ηΣ+ (1-η) rr^T, where r:=[r_1 ⋯ r_n]^T is the return vector. Having obtained μ and Σ, we solve the mean-variance portfolio optimization problems (<ref>) to obtain the corresponding optimal portfolio weight w^*. Subsequently, using optimal weights w^* to trade in the following time stamps [t,t+1,…, t+M-1]. Then, we reinitialize by setting t:=t+M and repeat this procedure until the terminal stage has arrived. The details of the algorithm can be found in Algorithm <ref>; see also Exhibit <ref> for an illustration of the main idea of our approach. §.§ EMPIRICAL STUDIES: THE DOW 30 This section provides a series of empirical studies using the BL approach with time-varying views. 
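For concreteness, the core computation performed inside each window of the procedure above — the closed-form BL estimates for the case without regularization, followed by the mean-variance solve — can be sketched as follows. All numbers are toy placeholders, the simplex and weight-cap constraints of Problem (<ref>) are only approximated by a rescaling, and the Elastic Net and EWMA steps are omitted, so this is an illustration rather than the full Algorithm <ref>.

import numpy as np

def bl_posterior(Pi, Sigma, P, q, Omega, tau=0.02):
    # Closed-form Black-Litterman estimates quoted above (lambda_1 = lambda_2 = 0):
    # mu_hat = Pi + Q P' (P Q P' + Omega)^{-1} (q - P Pi), with Q = tau * Sigma,
    # Sigma_hat = Sigma + (Q^{-1} + P' Omega^{-1} P)^{-1}.
    Q = tau * Sigma
    A = P @ Q @ P.T + Omega
    mu_hat = Pi + Q @ P.T @ np.linalg.solve(A, q - P @ Pi)
    Sigma_hat = Sigma + np.linalg.inv(np.linalg.inv(Q) + P.T @ np.linalg.inv(Omega) @ P)
    return mu_hat, Sigma_hat

def mv_weights(mu, Sigma, rho=3.0):
    # Unconstrained maximizer of mu'w - rho w'Sigma w, rescaled to unit l1-norm;
    # enforcing |w_i| <= W exactly would require a quadratic-programming solver.
    w = np.linalg.solve(2.0 * rho * Sigma, mu)
    return w / np.sum(np.abs(w))

# Toy example: 3 assets, 1 view saying asset 1 outperforms asset 2 by 1%.
Pi = np.array([0.03, 0.05, 0.02])
Sigma = np.diag([0.04, 0.09, 0.01])
P = np.array([[1.0, -1.0, 0.0]])
q = np.array([0.01])
Omega = np.array([[0.0004]])
mu_hat, Sigma_hat = bl_posterior(Pi, Sigma, P, q, Omega)
w_star = mv_weights(mu_hat, Sigma_hat)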
We first use daily closing prices for 30 assets comprising the Dow Jones Industrial Average (DJIA) over a one-year period from January 1, 2020 to January 1, 2021. It is worth noting that during this period, the prices of DJIA constituents experienced fluctuations and a significant drawdown in the first half of 2020. The index price trend, as a representative of the DOW30, is shown in Exhibit <ref>. In addition to the 30 constituents in the DJIA index, we also add a U.S. Treasury bill to our portfolio, resulting in a mid-sized portfolio of a total of 31 assets. While shorting is allowed in our approach, for the sake of simplicity, we demonstrate the result exclusively based on long-only positions. In the sequel, W=0.3 is imposed in the constraint of Problem (<ref>) with the aim of diversification purposes. We adopt the Carhart Four-Factor Model, as described in <cit.>. This model can be viewed as an extension of the celebrated Fama-French three-factor model, incorporating an additional factor–momentum. The model is defined as follows: 𝔼[r_i] = r_f+β_i(𝔼[r_m]-r_f) +β_i,SMBSMB + β_i,HMLHML+ β_i,UMDUMD where r_f ≥ 0 is the risk-free rate, β_i, · are the factor loadings, SMB stands for the size factor (small minus big), HML represents the value factor (high book-to-market ratio minus low), and UMD represents momentum factor (high daily momentums minus the low). To evaluate the trading performance, we use the following metrics. The first one is the excess return of the portfolio given by r^p:= w^Tr-r_f, We use r^p to denote the mean excess return, σ to denote the volatility, and SR to denote the realized Sharpe ratio. Moreover, to study the downside risks, we take d^* to be the maximum drawdown. In the following sections, we compare the trading performance of an equal-weighted market-based portfolio with the mean-variance portfolios obtained by Algorithm <ref>. §.§ Trading Performance Using an initial account with $1, Exhibit <ref> shows the account value trajectories of the market-based portfolio, mean-variance (MV) portfolio without BL model, and the MV portfolio with dynamic BL, which is generated by Algorithm <ref>. Remarkably, on a 1.30 GHz laptop with 8 GB RAM, the algorithm showcases computational efficiency in the sense that it takes about a total of 14.28 seconds to compute the views, estimate the expected returns, calculate the covariance matrix, and determine the optimal MV weights. Some key performance metrics are summarized in Exhibit <ref>, covering different window sizes M that range from 15 to 50. For M=30 and 40, the proposed MV portfolio with the dynamic BL approach significantly outperforms both the portfolio without the BL approach and the market-based portfolios. Notably, with M=30, the MV portfolio with the BL approach reaches a Sharpe ratio SR ≈ 1.0 and maximum drawdown d^*=30.57%. §.§ Impact of Regularization Terms To compare the impact of regularization terms, we fix M=40 and then examine the trading performance under different values of λ_1 and λ_2. Without imposing Elastic net, which corresponds to λ_1 = λ_2 =0, we see that the MV portfolio with the BL model outperforms both the market-based portfolio and the MV portfolio without the BL model. However, with the Elastic Net, specifically when λ_1 exceeds 0.5, we see that the portfolio performance achieves a superior Sharpe ratio of approximately SR ≈ 1.290 and a smaller maximum drawdown of d^*=19.50% compared to the portfolio without the Elastic Net. 
In particular, Exhibit <ref> shows the account value of the MV portfolios with specific regularization parameters λ_1 =0.7 and λ_2 =0.3 . As a bonus, the Elastic Net also offers mitigation for estimation errors, further enhancing the overall performance and stability of the portfolio. §.§ Robustness Test: A Hypothetical Scenario To evaluate the robustness of our approach, we conducted a hypothetical trading scenario by flipping the market prices horizontally; see Exhibit <ref> for the hypothetical prices of DJIA index over a one-year duration from January 1, 2020 to January 1, 2021. Remarkably, as shown in Exhibit <ref>, our approach continues to outperform the portfolio without the BL approach even with the opposite price trend. By reversing the price trends, our approach demonstrates its efficacy in adapting to changing market conditions and consistently achieving favorable performance. §.§ Monte-Carlo Based Robustness Test To validate the effectiveness of our approach, we conduct extensive Monte-Carlo simulations, generating a total of 10,000 paths for each asset. During the simulation, we consider a stock whose prices follow the geometric Brownian motion (GBM) with an estimated drift rate and volatility derived from the same data used in the previous empirical studies. From Exhibit <ref>, we see that our approach, incorporating dynamic Black-Litterman with time-varying views, consistently outperforms portfolios without the BL approach.
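A minimal sketch of the path generation behind this Monte-Carlo robustness test is given below; the drift and volatility values are placeholders standing in for the estimates obtained from the empirical data, and running the full trading procedure on each path is left schematic.

import numpy as np

def simulate_gbm_paths(S0, mu, sigma, n_steps, n_paths, dt=1.0 / 252, seed=0):
    # Geometric Brownian motion: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z).
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return S0 * np.exp(np.cumsum(log_increments, axis=1))

# 10,000 hypothetical one-year price paths for a single asset; the sliding-window
# procedure of Algorithm <ref> would then be run on each path and the resulting
# performance metrics aggregated across paths.
paths = simulate_gbm_paths(S0=100.0, mu=0.08, sigma=0.25, n_steps=252, n_paths=10_000)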
http://arxiv.org/abs/2307.01603v1
20230704094159
Asymptotic direction of a ballistic random walk in a two-dimensional random environment with nonuniform mixing
[ "Julien Allasia" ]
math.PR
[ "math.PR" ]
In this paper, we study random walks evolving with a directional drift in a two-dimensional random environment with correlations that vanish polynomially. Using renormalization methods first employed for one-dimensional dynamic environments along with additional ideas specific to this new framework, we show that there exists an asymptotic direction for such a random walk. We also provide examples of classical models for which our results apply. § INTRODUCTION Research on random walks in random environments (RWREs) has been active since the 1970s and has found its motivation in various applied fields. Typically, in a static framework, we allocate to each point in ^d (d⩾ 1) a transition probability measure on the set of its neighbors, used to determine the law of the jump of a particle located at this site (we often refer to the random walker as a particle). Contrary to classical random walks, results as simple as laws of large numbers (LLNs) are often hard to obtain, and strong assumptions describing the dependencies in the environment and the ballisticity of the random walk usually have to be made. The one-dimensional i.i.d. case is well understood, and a LLN was shown in <cit.> using ergodicity arguments. In larger dimensions, it is possible to derive a LLN under ballisticity conditions. For instance, <cit.> used a drift assumption that implied large deviation results. In <cit.> and <cit.>, the authors introduced a seminal regeneration argument that gives a LLN under Kalikow's condition; see <cit.>. In <cit.>, a weaker ballisticity condition known as Sznitman's condition (T) was introduced, which gives a LLN in the uniformly elliptic setting. In any case, not even the i.i.d. framework is well understood without ballisticity assumptions, and our paper is no exception: we will make a drift assumption that is a strong version of ballisticity. One can wonder if conditions on the dependencies of the environment weaker than the i.i.d. assumption would be sufficient to derive a LLN. In <cit.>, the authors managed to adapt the regeneration argument from <cit.> when the environment is assumed to satisfy some uniform mixing conditions. Recent progress has also been made for one-dimensional dynamic random environments, in which ballisticity is automatic in the time direction. Mixing conditions similar to those of <cit.> were used to derive a LLN; see for instance <cit.>. Asymptotic results were also shown for some particular environments using their specific properties, like the contact process in <cit.> and <cit.>, or the environment given by independent simple random walks in <cit.>. In <cit.> however, a LLN was shown for general environments satisfying a non-uniform polynomial mixing condition, using multi-scale renormalization methods inspired by percolation theory. The latter article fundamentally relies on a monotonicity property of the model (see (2.9) in <cit.>), which is ensured by the dynamic framework and a nearest-neighbor assumption. Generalizing the methods of <cit.> when this essential property is missing was already explored in <cit.> by lifting the nearest-neighbor assumption. In the present paper, we keep this assumption but we move from the dynamic one-dimensional framework to a static two-dimensional one.
More precisely, we assume that we are given a polynomially correlated random environment μ on ^2, where for each site x in ^2, μ(x) gives the transition probabilities for the jump of a particle located at x to one of the four nearest-neighbors of x. We consider X_n, the random walk starting at the origin in environment μ. In order to use the ideas of <cit.>, we give X_n a drift upwards by asking that μ_x,x+e_2⩾ 1/2+ε a.s. for every site x. This allows us to think of the vertical coordinate as roughly equivalent to time. Thus we will be able to show the existence of an asymptotic direction for our random walk, which is an almost sure limit of X_n/| X_n| when n goes to infinity, where |·| is the Euclidean norm on ^2; see Theorem <ref>. This could well be the first step towards showing a LLN in this framework, i.e. the almost sure convergence of X_n/n. The question of the existence of an asymptotic direction for RWREs has already been discussed in the i.i.d. setup in <cit.> and <cit.>. One important result is that if the random walk is transient in the neighborhood of a given direction, then an asymptotic direction can be found using renewal structures. But again these methods fail when we have weaker decorrelation assumptions for the environments. Extending the ideas of <cit.> does not merely consist in rewriting its arguments in a different framework. On top of the additional technical considerations about ballisticity (which is a given in the dynamic framework), it requires finding a way to generalize the lost monotonicity property. More precisely, we need to guarantee that if a particle starts on the left of another particle, it will remain on its left forever. This is made possible by choosing the right coupling for our random walks and proving a weaker "barrier" property: see Proposition <ref> and Figure <ref> for an illustration of what can happen in this new framework. Furthermore, the fact that our random walks can revisit their pasts calls for an argument to somehow split sample paths into different sections that do not meet. Since the classical argument of renewal times does not work with our weak decorrelation assumptions, we use a weaker notion of cut lines, presented in Section <ref>. Outline of the paper. In Section <ref>, we define precisely the framework of this paper by defining static environments and random walks on them, before stating our main result, Theorem <ref>. Its proof is divided into two parts, which correspond respectively to Sections <ref> and <ref>. In the first part, we show the existence of two limiting directions that bound the spatial behavior of our random walks in some sense. In the second part, we show that these two directions coincide, which will give the asymptotic direction that we are after. In Section <ref>, we introduce essential tools that will be instrumental in both parts of the proof. In Section <ref>, we give some ideas and problems that we are facing to show a complete LLN. In Section <ref>, we present some models for which our results apply. Conventions. * , and respectively denote the set of natural integers (starting from 0), relative integers and real numbers. ^* denotes ∖{0}, _+ denotes {x∈, x⩾ 0} and _+^* is _+∖{0}. If a<b, [a,b) is the interval {x∈, a⩽ x<b}. If n⩽ m are two integers, n,m is the set of integers [n,m]∩. For a couple x=(a,b)∈^2, we write a=π_1(x) and b=π_2(x), and we refer to them as the horizontal and vertical coordinates of x. The letter o denotes the origin (0,0)∈^2 and 0 the everywhere zero function of ^^2. 
If S is a finite set, |S| and # S denote the cardinality of S. When we say that a∈ is "less than" or "at most" (resp. "greater than" or "at least") b∈, we mean a⩽ b (resp. a⩾ b). * c denotes a positive constant that can change throughout the paper and even from line to line. Constants that are used again later in the paper will be denoted with an index when they appear for the first time (for instance c_0, c_1...). * The following letters will usually be used to denote the same kind of object: n∈ for an integer time quantity, H∈ for an integer space distance, x,y,z∈^2 for a space location. Capital letters are usually used for events (A, F, E, ℱ...) or random variables (X, Y, Z, N, U...). Γ will denote a fixed history (see Section <ref>) while Λ will be a random history. * Drawings across the paper are not to scale and they do not necessarily represent the random walks in an accurate way: they are only meant to make the reading easier. For instance, sample paths are depicted as smooth curves, although our random walks evolve on ^2. § FRAMEWORK §.§ Environment Let e_1=(1,0) and e_2=(0,1). Let S={(p_i)_i=1^4∈_+^4, ∑_i=1^4 p_i=1} and Ω_1=S^^2. An element μ∈Ω_1 is called an environment. For x∈^2, we will use the following notation: μ(x)=(μ_x,x+e_1,μ_x,x-e_1,μ_x,x-e_2,μ_x,x+e_2), where, for example, μ_x,x+e_2 will denote the probability for a particle located at x to jump to x+e_2. We consider the topology on S induced by the canonical topology of ^4, and the product topology on Ω_1. We denote by 𝒯_1 the associated Borel σ-algebra. If μ∈Ω_1 and y∈^2, we define the translated environment θ^yμ:x∈^2↦μ(x+y). We also define, for F∈𝒯_1, the translated event θ^yF={θ^yμ, μ∈ F}. Take a probability measure on (Ω_1,𝒯_1). On (Ω_1,𝒯_1,), the random variable id_Ω_1 is called a static two-dimensional random environment with law . We denote it using the same letter μ by abuse of notation. For the rest of the paper, we make the following assumptions on the random environment. [Translation invariance] For every y∈^2 and F∈𝒯_1, we assume that (θ^y F)=(F). [Drift] There exists ε>0 and 𝒜⊆Ω_1 satisfying (𝒜)=1 such that for every μ∈𝒜, ∀ x∈^2, μ_x,x+e_2⩾1/2+ε. From now on, ε is fixed. In anticipation for Definition <ref>, we also fix an integer constant β satisfying β>1/2-ε/2ε. All constants introduced from now on are allowed to depend on ε and β. [Vertical decoupling of the environment] c:decoupling Let h>0. If B_1 and B_2 are 2-dimensional boxes (i.e. sets of ^2 of the form [a,b)× [c,d) where a<b and c<d), we say that they are h-separated if the vertical distance between B_1 and B_2 is at least h. We assume that there exist c:decoupling>0 and α>12 such that for every h>0, for every pair of h-separated boxes B_1 and B_2 with maximal side lengths 2(2β+1)h, and for every pair of {0,1}-valued functions f_1 and f_2 on Ω_1 such that f_1(μ) is σ(μ|_ B_1)-measurable and f_2(μ) is σ(μ|_ B_2)-measurable, Cov_(f_1(μ),f_2(μ))⩽c:decoupling h^-α. See Figure <ref> for an illustration of this assumption: the environment inside box B_1 can be decoupled from that inside box B_2. We will come back to this property and this figure later, see Fact <ref>. §.§ Random walker We will work with random walks jumping at discrete times, but our results also hold in continuous time (in the Poissonian framework); see Remark <ref>. Mind that in <cit.>, using continuous time was crucial in the proof, because we needed that particles located at neighboring sites almost surely cannot jump simultaneously. 
However with this new model, time will not play such an important role in the coupling of particles. See Section <ref> for more details. We now define the random walk we are interested in and state our main results. For the sake of clarity, we define it in a simplified intuitive way before introducing a complete construction and a coupling in Section <ref>. In a certain probability space with measure , we define the random walk (X_n)_n∈^* as follows. The random walk starts at the origin of ^2: X_0=o. Then, at each integer time n, the random walk jumps to one of the sites in {X_n+e_1,X_n-e_1,X_n-e_2,X_n+e_2} with a probability given by μ(X_n), and this jump is independent of {X_k, k⩽ n} knowing μ(X_n). The goal of this paper is to show the existence of an asymptotic direction for X=(X_n)_n∈. This is stated in the following theorem, where |·| denotes the Euclidean norm on ^2 and 𝕊^1 the unit Euclidean sphere centered at o. c:concentration There exists χ∈𝕊^1 with π_2(χ)>0 such that -almost surely, X_n/| X_n |χ, where X_n/| X_n| is almost surely well-defined for n large. Moreover we have a polynomial rate of convergence: ∀ξ>0, ∃ c:concentration=c:concentration(ξ)>0, ∀ n∈^*, (| X_n- | X_n |χ|⩾ξ| X_n |)⩽c:concentration n^-α/4. It is straightforward to check that this result is a consequence of the following result. The latter is less appealing but its formulation is closer to the methods used in <cit.>, which is why we will focus on it from now on. c:LLN There exists v∈ such that -almost surely, π_1(X_n)/π_2(X_n) v, where π_1(X_n)/π_2(X_n) is almost surely well-defined for n large. Moreover we have a polynomial rate of convergence: ∀ξ>0, ∃ c:LLN=c:LLN(ξ)>0, ∀ n∈^*, (|π_1(X_n)-v π_2(X_n)|⩾ξ |π_2(X_n)| )⩽c:LLN n^-α/4. Mind that in the rest of the paper, what we (abusively) call a direction is simply the relation between the two coordinates of a point in ^2. For instance, π_1(X_n)/π_2(X_n) is the direction of X at time n. We will refer to v as the limiting direction of X. The link between v and χ from Theorems <ref> and <ref> is given by χ=(v,1)/√(v^2+1) and v=π_1(χ)/π_2(χ). Theorem <ref> also holds for the random walk (Y_t)_t⩾ 0 in ^2, started at o, in the following continuous time framework. Instead of jumping at integer times, we set a Poisson process (T_n)_n∈^* of parameter 1 in _+^* (independent of μ) and we allow Y_t to jump at each time given by this Poisson process; everything else is the same as in the discrete time framework. Then, (X_n=Y_T_n)_n∈ (where T_0=0) satisfies Theorem <ref>. From there we can check that Y_t/|Y_t| converges to the same asymptotic direction as X_n/|X_n| when t goes to infinity. §.§ Complete construction and coupling Inspired by <cit.>, we want to define random walks starting from all possible starting points in ^2 and couple them in the following way: no matter its starting point, a random walk visiting a fixed site for the first time should jump to the same neighboring site. To define this properly, we first define a jump function g:S× [0,1]→{e_1,-e_1,-e_2,e_2} by setting, for p=(p_1,p_2,p_3,p_4)∈ S and u∈ [0,1], g(p,u)={[ +e_1 if u∈[0,p_1);; -e_1 if u∈ p_1+[0,p_2);; -e_2 if u∈ p_1+p_2+[0,p_3);; +e_2 if u∈[1-p_4,1]. ]. Then, let (U(x,i))_x∈^2, i∈^* be a family of independent uniform random variables in [0,1], defined on a probability space (Ω_2,𝒯_2,). The idea is that U(x,i) will be the source of randomness used for the jump of a random walk visiting x for the i^th time. Let Ω=Ω_1×Ω_2, 𝒯=𝒯_1⊗𝒯_2, =⊗. We usually call the annealed law. 
When μ∈Ω_1 is a fixed environment, ^μ=δ_{μ}⊗ is usually called the quenched law. We have (·)=∫^μ(·) (μ). In order to couple random walks, we have to count the number of times that each particle has visited each site. Therefore, for every starting point y∈^2, we define simultaneously a random walk X^y and a counting process N^y, both as random variables on (Ω,𝒯), by the following: {[ X_0^y=y;; ∀ x∈^2, N_0^y(x)=0;; ∀ n∈, ∀ x∈^2, N_n+1^y(x)=N_n^y(x)+δ_x,X_n^y;; ∀ n∈, X_n+1^y=X_n^y+g(μ(X_n^y), U(X_n^y,N_n+1^y(X_n^y))),; ]. where δ is the Kronecker symbol. Let us rephrase what these formulas mean. If a particle started at y reaches x for the first time at time n, then its jump at time n (namely X^y_n+1-X^y_n) is determined by U(x,1). If it comes back to x later in time, it will use U(x,2) to choose where to jump, and so on. Note that when y=o, we do recover the law of random walk X introduced in Section <ref>, because our coupling ensures that the sequence of uniform variables used for the jumps is i.i.d. (for a detailed proof, see Proposition <ref>). Therefore, from now on, when working with y=o, the superscript y will be omitted, and X will denote the random walk (X_n^o)_n∈ defined in (<ref>). We will use a more practical notation for the uniform variables that are read by the random walker. For n∈^*, we set U_n^y=U(X_n-1^y, N_n^y(X_n-1^y)). With this notation, the induction formula that defines our random walks in <ref> can be written in a more straightforward manner: ∀ n∈, X_n+1^y=X_n^y+g(μ(X_n^y), U_n+1^y). Let y∈^2 and P be a subset of _+. We define X_P^y={X_s^y, s∈ P∩} to be the sample path of X^y restricted to the times in P∩. In practice, we will use decoupling for events that involve our random walks, which are elements of the sigma-algebra that we denoted by 𝒯 (recall (<ref>)). This is actually not stronger than Assumption <ref>, because the uniform variables used for the jumps of our random walks are i.i.d., so two sets of uniform variables supported by disjoint boxes are independent. In practice, we will always use decoupling to upper bound the probability of the intersection of two events of 𝒯. We will say that an event A∈𝒯 is measurable with respect to a set B if it is a measurable function of μ|_B and {U(x,i), i∈^*, x∈ B}. [Decoupling] Assume Assumption <ref> is satisfied. Let h>0. Let B_1 and B_2 be h-separated boxes with maximal side lengths 2(2β+1)h. Let A_1 resp. A_2 be events of 𝒯 that are measurable with respect to B_1 resp. B_2. We have (A_1∩ A_2)⩽(A_1)(A_2)+ c:decoupling h^-α. See Figure <ref> for an illustration of this fact: events describing respectively the two sample paths drawn here can be decoupled using the decoupling property. §.§ History Let n_0∈^*. Because of our coupling, the random walks given by X^y_n_0+· and X^X_n_0^y do not necessarily have the same sample paths. Indeed, the first one has a non-empty history, in the sense that between time 0 and n_0, it has visited a certain number of sites and it has looked at n_0 random variables among the {U(x,i), x∈^2, i∈^*}, which it will not look at again in the future. In order to address this issue, it will be convenient to define our random walks by adding an initial condition alongside the starting point, which we will call the initial history of the random walk. * For Γ:^2→, we define its support as Supp Γ={x∈^2, Γ(x)>0}. We let ℋ={Γ: ^2→ such that Supp Γ is finite}. * Let y∈^2 and n∈. The random variable N_n^y defined in (<ref>), taking values in ℋ, is called the history of random walk X^y at time n. 
* Let y∈^2 and Γ∈ℋ. The random walk X^y,Γ starting at y with history Γ is defined in the same way as before, except that in (<ref>), we replace U(x,i) by U(x,i+Γ(x)). We also define a process N^y,Γ in the same way as before, and we use U_n^y,Γ as in Notation <ref>. We extend the definitions of ^μ and to all the random walks {X^y,Γ,y∈^2,Γ∈ℋ}. Note that we could have restricted ourselves to an even smaller subset of ^^2 for our set of histories. For instance, the support of a random walk's history has to be connected. Here we simply chose to define ℋ as a simple countable subset of ^^2, in order to sum over possible outcomes Γ∈ℋ without worrying about uncountability. Definition <ref> addresses the issue mentioned just before, for it ensures that for every n_0∈, we have ∀ n∈, X_n_0+n^y=X_n^X_n_0^y,N_n_0^y. Using Definition <ref>, we recover (<ref>) by noticing that X_n^y=X_n^y,0. From now on, an omission of Γ in any notation that is defined using a history superscript Γ will always mean that we are considering Γ=0. Also, as mentioned before, the omission of the starting point superscript y will mean that y=o. The rest of the paper is dedicated to showing Theorem <ref>. Its final proof using lemmas that will be shown later can be found at the end of Section <ref>. § KEY PROPERTIES AND TOOLS §.§ Lower-bound random walk It will often be very handy to lower-bound the vertical position of our random walkers using Assumption <ref>. Recall Notation <ref>. Let y∈^2 and Γ∈ℋ. We define the lower-bound random walk X̂^y,Γ as the random walk on defined by {[ X̂_0^y,Γ=π_2(y);; ∀ n∈, X̂_n+1^y,Γ=X̂_n^y,Γ+ĝ( U_n+1^y,Γ), ]. where ĝ(u)=1_u⩾ 1/2-ε-1_u< 1/2-ε. The definition of X̂^y,Γ is made for the following properties to hold. First, X̂^y,Γ is simply a biased standard random walk, with transition coefficients given for every n∈ by [ (X̂_n+1^y,Γ=x+1 | X̂_n^y,Γ=x)=1/2+ε;; (X̂_n+1^y,Γ=x-1 | X̂_n^y,Γ=x)=1/2-ε. ] Second, it is coupled with X^y,Γ in such a way that, for μ∈𝒜, we have the following inclusion of events: {X̂_n+1^y,Γ=X̂_n^y,Γ+1}⊆{X_n+1^y,Γ=X_n^y,Γ+e_2}. Indeed, assume X̂_n+1^y,Γ=X̂_n^y,Γ+1. By definition of ĝ, this means that U_n+1^y,Γ⩾1/2-ε. Now, Assumption <ref> ensures that for μ∈𝒜, μ_X_n^y,Γ,X_n^y,Γ+e_2⩾ 1/2+ε, so U_n+1^y,Γ⩾ 1-μ_X_n^y,Γ,X_n^y,Γ+e_2, hence the result using (<ref>) and (<ref>). This implies the following essential inequality between increments of X^y,Γ and increments of X̂^y,Γ. [Increment inequality] For every y∈^2, Γ∈ℋ, n_0∈, n∈^* and μ∈𝒜, X̂_n_0+n^y,Γ-X̂_n_0^y,Γ⩽π_2(X_n_0+n^y,Γ-X_n_0^y,Γ). This inequality is a mere consequence of (<ref>). It justifies the name "lower-bound random walk" that was given to X̂: its role is to lower-bound the vertical behavior of X. §.§ Markov-type properties Our coupling makes the definition of our random walks more complex than they usually are. Yet, as we already said, a single particle will behave just as in the usual framework, meaning that our random walks are Markov chains under a fixed environment. We make this more precise in the following proposition, whose proof can be found in the appendix. Let y∈^2 and Γ∈ℋ. Under either or ^μ, the (U_n^y,Γ)_n∈^* are independent uniform random variables in [0,1]. The law under of X^y,Γ-y and the law under or ^μ of X̂^y,Γ-π_2(y) do not depend on y and Γ. This is a consequence of Assumption <ref>, induction formulas in (<ref>) and (<ref>), and Proposition <ref>. 
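Since the whole construction is algorithmic, it can also be illustrated by a short simulation. The sketch below is ours and purely illustrative (in particular the toy environment drawn in `env`, with an arbitrary value of ε, is only meant to satisfy a bias condition of the type μ_{x,x+e_2}⩾1/2+ε, as in the assumption recalled above): every walk lazily reads the same field (U(x,i)), a walk started with history Γ skips the first Γ(x) variables at each site x, and the vertical coordinate dominates the coupled lower-bound walk, in accordance with the increment inequality.

```python
import random

STEPS = [(1, 0), (-1, 0), (0, -1), (0, 1)]   # +e1, -e1, -e2, +e2
EPS = 0.1                                     # illustrative value of epsilon

def g(p, u):
    """Jump function of the construction: partition [0, 1] according to p."""
    threshold = 0.0
    for step, prob in zip(STEPS[:3], p[:3]):
        threshold += prob
        if u < threshold:
            return step
    return STEPS[3]                           # u in [1 - p4, 1]

def g_hat(u):
    """Jump of the lower-bound walk: +1 iff u >= 1/2 - EPS, else -1."""
    return 1 if u >= 0.5 - EPS else -1

class CoupledWalks:
    """Every walk reads the same lazily drawn field U(x, i) and environment mu."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.U = {}    # (site, visit index) -> uniform variable
        self.mu = {}   # site -> probability vector (p1, p2, p3, p4)

    def uniform(self, x, i):
        if (x, i) not in self.U:
            self.U[(x, i)] = self.rng.random()
        return self.U[(x, i)]

    def env(self, x):
        # Toy environment: weight of +e2 at least 1/2 + EPS at every site.
        if x not in self.mu:
            up = 0.5 + EPS + self.rng.uniform(0.0, 0.5 - EPS)
            rest = [self.rng.random() for _ in range(3)]
            scale = (1.0 - up) / sum(rest)
            self.mu[x] = (rest[0] * scale, rest[1] * scale, rest[2] * scale, up)
        return self.mu[x]

    def run(self, y, gamma=None, n_steps=200):
        """Walk X^{y, Gamma}: its i-th visit to x uses U(x, i + Gamma(x))."""
        gamma = dict(gamma or {})
        x, visits = y, {}
        path, hat = [y], [y[1]]               # hat = lower-bound walk
        for _ in range(n_steps):
            visits[x] = visits.get(x, 0) + 1
            u = self.uniform(x, visits[x] + gamma.get(x, 0))
            hat.append(hat[-1] + g_hat(u))    # coupled through the same u
            step = g(self.env(x), u)
            x = (x[0] + step[0], x[1] + step[1])
            path.append(x)
        return path, hat

if __name__ == "__main__":
    world = CoupledWalks(seed=1)
    path, hat = world.run((0, 0))
    # Increment inequality: pi_2(X_n) - pi_2(y) >= X^hat_n - pi_2(y) for all n.
    assert all(site[1] >= level for site, level in zip(path, hat))
    print("final position:", path[-1], "   lower-bound walk:", hat[-1])
```

Running two walks from different starting points on the same `CoupledWalks` instance reproduces the coupling used throughout: upon their respective first visits to a given site they read the same uniform variable and therefore jump to the same neighbour.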
Oftentimes, we will have to bound the probability of an event describing a random walk whose initial conditions (starting point and history) are random variables. To do this, we will need Markov-type properties. In addition to the invariance property given by Corollary <ref>, Proposition <ref> ensures that for any y∈^2 and Γ∈ℋ, X^y,Γ is a Markov chain under the quenched law. Nonetheless in general this Markov chain is obviously not time-homogeneous, since its transition matrices depend on the location of the random walker at each step. However, X̂^y,Γ is indeed a time-homogeneous Markov chain (under either or ^μ), and so we have the strong Markov property given by Corollary <ref>. Let y∈^2 and Γ∈ℋ. We say that a random variable τ is a stopping time for X^y,Γ if for every t∈, {τ=t} is measurable with respect to μ and {U_n^y,Γ, n⩽ t}. Let y∈^2, Γ∈ℋ and let τ be a stopping time for X^y,Γ. Then, conditioned on τ<∞, {X̂_n, n⩽τ} is independent from {X̂_n-X̂_τ, n>τ} (under either or ^μ). However, mind that even if we are working with a deterministic time τ=t∈ and under the quenched law ^μ, one cannot generalize Corollary <ref> by substituting X̂ with X, because of inhomogeneity. Indeed, the jumps of the process given by {X_n-X_τ, n>τ} do not involve uniform variables only, but also the past of the random walk. For instance, the jump of the random walk between time t and t+1 is given by considering U_t+1 and μ(X_t): even if μ is fixed, we still need to know X_t, which is clearly not independent of {X_n, n⩽ t}. This is an obstacle to studying the probability of an event describing a random walk whose initial conditions are given by its past. Nonetheless, we do have the following proposition, which will be very useful in the future. Recall the definition of 𝒜 from Assumption <ref>, as well as Definition <ref>. Because of Corollary <ref>, we can assume that y=o. Let f_1 and f_2 be two non-negative measurable functions. We write [f_1({X_n, n⩽τ}) f_2({X̂_τ+n, n>0})] =∑_t∈[f_1({X_n, n⩽ t}) 1_τ=t f_2({X̂_t+n, n>0})] Now, for every t∈, f_1({X_n, n⩽ t}) 1_τ=t is measurable with respect to μ and {U_1,…,U_t}, while f_2({X̂_t+n, n>0}) is measurable with respect to {U_n, n>t}, so applying Proposition <ref>, their are independent. Moreover, by the same proposition, for every t∈, {U_n, n>t} has the same law as {U_n, n∈}. Therefore, [f_1({X_n, n⩽τ}) f_2({X̂_τ+n, n>0})] =∑_t∈[f_1({X_n, n⩽ t}) 1_τ=t] [f_2({X̂_t+n, n> 0})] =∑_t∈[f_1({X_n, n⩽ t}) 1_τ=t] [f_2({X̂_n, n> 0})] =[f_1({X_n, n⩽τ})] [f_2({X̂_τ+n, n> 0})]. This yields the -independence. If μ is a fixed environment, the same line of reasoning gives the ^μ-independence. Let y_0∈^2, Γ_0∈ℋ and let τ be a stopping time for X^y_0,Γ_0. For y∈^2 and Γ∈ℋ, let A^y,Γ be an event that is measurable with respect to μ and (U_n^y,Γ)_n∈^*. Then (A^X_τ^y_0,Γ_0,N_τ^y_0,Γ_0)⩽μ∈𝒜sup y,Γsup ^μ(A^y,Γ). Therefore, if we have an upper bound of ^μ(A^y,Γ) that is uniform in μ∈𝒜, y∈^2 and Γ∈ℋ, it is also an upper bound of (A^X_τ^y_0,Γ_0,N_τ^y_0,Γ_0). Mind that a priori we may not replace sup_y,Γ^μ(A^y,Γ) by ^μ(A^o,0). Indeed, although in the quenched setting A^y,Γ is a measurable function of (U_n^y,Γ)_n, whose law does not depend on (y,Γ), the function itself may depend on y and Γ. However we will usually not use Proposition <ref> in that case and so we will usually simply use a supremum over μ: see for instance the proof of (<ref>). For the sake of simplicity, we write the proof for y_0=o and Γ_0=0. Let us fix an environment μ, y∈^2 and Γ∈ℋ. 
First, note that the (U_n^y,Γ)_n∈^* are measurable with respect to {U(x,i), x∈^2, i>Γ(x)}. Furthermore, we claim that for every t∈, {X_t=y, N_t=Γ, τ=t} is measurable with respect to {U(x,i), x∈^2, i⩽Γ(x)}. Let us prove this claim. Since τ is a stopping time for X, for every t∈, there exists some {0,1}-valued measurable function f_t on [0,1]^t such that {τ=t}={f_t(U_1,…,U_t)=1}. Therefore, we can write {X_t=y, N_t=Γ, τ=t} =⋃_ o=y_0,…,y_t=y 1=n_0,…,n_t-1=Γ(y_t-1)[ {X_1=y_1,…, X_t=y_t}∩{N_1(y_0)=n_0,… N_t(y_t-1)=n_t-1}; ∩ {N_t=Γ}∩{f_t(U(y_0,n_0),…,U(y_t-1,n_t-1))=1}. ] In the union above, all the choices of y_j and n_j (where 0⩽ j<t) such that n_j>Γ(y_j) give an empty contribution. Indeed, each choice of y_j and n_j corresponds to an event that is included in {N_j+1(y_j)=n_j, N_t=Γ}; therefore, if n_j>Γ(y_j), then N_j+1(y_j)>N_t(y_j), which is impossible since s↦ N_s(x) is non-decreasing for any x∈^2. Considering that each event in the union above is measurable with respect to {U(y_0,n_0),…,U(y_t-1,n_t-1)}, the claim is proven. Consequently, by Proposition <ref>, {X_t=y, N_t=Γ, τ=t} is ^μ-independent from the σ-algebra generated by (U_n^y,Γ)_n∈^*, so it is ^μ-independent of A^y,Γ. As a result, we have (A^X_τ,N_τ) =∑_y∈^2, Γ∈ℋ∫_Ω_1∑_t∈ ^μ(A^y,Γ, X_t=y, N_t=Γ, τ=t) (μ) =∑_y∈^2, Γ∈ℋ∫_Ω_1 ^μ(A^y,Γ) ^μ(X_τ=y, N_τ=Γ) (μ) =∑_y∈^2, Γ∈ℋ∫_𝒜 ^μ(A^y,Γ) ^μ(X_τ=y, N_τ=Γ) (μ) ⩽μ∈𝒜sup y,Γsup ^μ(A^y,Γ), concluding the proof of the proposition. §.§ 2D simplification As explained in the introduction, the idea of our proof is to adapt arguments from the framework of one-dimensional dynamic environments from <cit.>. The idea is therefore to treat the vertical coordinate as a time coordinate somehow. We will forget about the actual time variable and "hide" the time information by only considering hitting times of horizontal lines. In other words, we work in two dimensions instead of three (2 space + 1 time dimension), as was the case in <cit.> (1 space + 1 time dimension). Let H∈, y∈^2, w∈× and Γ∈ℋ. The reaching time of height π_2(w)+H by X^y,Γ is defined by τ_H,w^y,Γ={[ inf{n∈, π_2(X_n^y,Γ)=π_2(w)+H} if π_2(y)⩽π_2(w)+H;; 0 otherwise, ]. where the infimum is in ∪{+∞}. In τ_H,w^y,Γ, w is a reference point (whose horizontal coordinate does not play any role). It will be very useful in the future, because we will want to stop our random walks on a lattice centered at w, and we will have π_2(y) slightly larger that π_2(w) (see Definition <ref> and the proof of Lemma <ref>). Note that when y=w, τ_H,y^y,Γ is simply the time that X^y,Γ needs to go up H times. Notations can get very heavy and so we introduce several conventions: * Consistently with previous conventions, τ_H^y,Γ will mean τ_H,o^y,Γ, not τ_H,y^y,Γ. τ_H will simply be τ^o,0_H,o. * We will write X^y,Γ_τ_H,w^y,Γ without specifying what X^y,Γ_∞ means - any arbitrary value would work, since τ_H,w^y,Γ<∞ almost surely (see Section <ref>). * We will write X^y,Γ_τ_H,w instead of X^y,Γ_τ_H,w^y,Γ, and we will write X^y,Γ_[0,τ_H,w] instead of X^y,Γ_[0,τ_H,w^y,Γ]. Mind that in these special cases, the omission of y and Γ does not mean y=o and Γ=0, contrary to the general rule we gave. Anyway things should be clear with the context. We also define a stopping time for X̂^y,Γ as follows. Let y∈^2, Γ∈ℋ and H∈. We let τ̂_H^y,Γ=inf{n∈, X̂_n^y,Γ=π_2(y)+H}∈∪{+∞}. Mind that τ̂^y,Γ_H is the equivalent for X̂^y,Γ of τ_H,y^y,Γ, not τ_H^y,Γ. 
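To illustrate this reduction to two dimensions, the following sketch (ours and purely illustrative; the toy path and function names are not from the text) computes the reaching time τ_{H,w} of a horizontal line along a finite nearest-neighbour path and records only the positions of the walk at those times; these positions are the only data used in the sequel, since the asymptotic estimates below are stated along the subsequence of reaching times.

```python
def reaching_time(path, w, H):
    """tau_{H,w} for a finite nearest-neighbour path on Z^2.

    Returns the first index n with pi_2(path[n]) == pi_2(w) + H,
    0 if the starting point is already strictly above that line (the
    convention of the text), and None if the line is not reached along
    this finite path (in the text it is reached almost surely).
    """
    target = w[1] + H
    if path[0][1] > target:
        return 0
    for n, x in enumerate(path):
        if x[1] == target:
            return n
    return None

def skeleton(path, w, heights):
    """Positions of the walk at the reaching times of the lines pi_2 = pi_2(w) + H."""
    out = {}
    for H in heights:
        n = reaching_time(path, w, H)
        if n is not None:
            out[H] = path[n]
    return out

if __name__ == "__main__":
    # A small hand-made path, just to exercise the functions.
    toy_path = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2), (1, 2), (1, 3)]
    marks = skeleton(toy_path, w=(0, 0), heights=[1, 2, 3])
    for H, x in sorted(marks.items()):
        direction = (x[0] - toy_path[0][0]) / H
        print("height", H, "-> first hit at", x, "  empirical direction", direction)
```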
In order to show Theorem <ref>, it will actually be sufficient to show an almost sure asymptotic estimate for X along the subsequence given by (τ_H)_H∈. This is what the following lemma is about. There exists v∈ such that -almost surely, π_1(X_τ_H)/H v. The proof of Lemma <ref> is the purpose of Sections <ref> and <ref>. The fact that Lemma <ref> implies Theorem <ref> is shown at the end of Section <ref>. §.§ Localization properties §.§.§ Ballisticity Recall Definition <ref>. Classically, we have, for any y∈^2 and Γ∈ℋ, the almost sure divergence X̂_n^y,Γ +∞. Therefore, because of (<ref>), we also have π_2(X_n^y,Γ) +∞ -almost surely. In other words, we have directional transience for X^y,Γ along the e_2 direction. Actually we have a much stronger ballisticity property that gives a minimum speed along the vertical coordinate, which is one of the key properties usually required to get a LLN for a RWRE. c:ballisticity For any ξ>0, there exists a constant c:ballisticity=c:ballisticity(ξ)>0 such that for every n∈, y∈^2 and Γ∈ℋ, we have (|X̂_n^y,Γ-π_2(y)-2ε n|⩾ξ n)⩽c:ballisticity^-1 e^-c:ballisticityn, and the inequality is also true when replacing by ^μ for μ∈𝒜. Because of Remark <ref>, we can assume that y=o and Γ=0. Let μ∈𝒜 and t⩾ 0. Let a be a positive parameter, to be fixed later in the proof. First, we have ^μ(|X̂_t-X̂_0-2ε t|⩾ξ t)=^μ(X̂_t-X̂_0⩽ (2ε-ξ) t)+^μ(X̂_t-X̂_0⩾ (2ε+ξ) t). The two terms are estimated with the same method, so we only write the proof for the first one. First let n=⌊ t⌋∈. We set ζ=2ε-ξ and write ^μ (X̂_n-X̂_0⩽ (2ε-ξ) n) = ^μ(∑_k=0^n-1ĝ(U_k+1) ⩽ζ n) ⩽^μ(exp(-a ∑_k=0^n-1ĝ(U_k+1))⩾ e^-a ζ n) ⩽ e^a ζ n ∏_k=0^n-1^μ[exp(-a ĝ(U_k+1))] =e^aζ n(e^-a(1/2+ε)+e^a(1/2-ε))^n a→ 0=exp(-na (ξ+o(1))) As ξ>0, we just need to choose a small enough a to get a constant c:proof_ballisticity=c:proof_ballisticity(ξ)>0 such that c:proof_ballisticity (X̂_n-X̂_0⩽ (2ε-ξ) n)⩽c:proof_ballisticity^-1 e^-c:proof_ballisticityn. Let us get back to t⩾ 0. We show the result for ξ<2ε, which is sufficient because the function ξ>0⟼^μ(X̂_t-X̂_0⩽ (2ε-ξ) t) is non-increasing. We have ^μ (X̂_t-X̂_0⩽ (2ε-ξ) t) ⩽^μ([ X̂_n-X̂_0⩽ (2ε-ξ)(n+1),; X̂_n+1=X̂_n+1 ])+^μ([ X̂_n+1-X̂_0⩽ (2ε-ξ)(n+1),; X̂_n+1=X̂_n-1 ]) ⩽^μ(X̂_n-X̂_0⩽ (2ε-ξ/2)n)+^μ(X̂_n+1-X̂_0⩽ (2ε-ξ)(n+1)) for n⩾ 2 ⩽c:proof_ballisticity(ξ/2)^-1e^-c:proof_ballisticity(ξ/2)n+c:proof_ballisticity(ξ)^-1e^-c:proof_ballisticity(ξ)(n+1) ⩽1/2c:ballisticity^-1e^-c:ballisticityt, for a well chosen constant c:ballisticity=c:ballisticity(ξ)>0, suited also for the cases n=0 and n=1. Applying the same line of reasoning for the second term in (<ref>), we get the result for ^μ. Then, integrate to get the same result for . Naturally, the limiting speed 2ε in Proposition <ref> is simply the minimal possible expected value for the vertical jump of X, that is (1/2+ε)-(1/2-ε)=2ε, according to Assumption <ref>. The proof of Proposition <ref> is based on a very classical Chernoff bound, so we choose to leave it out. From Proposition <ref> and inequality (<ref>), we can easily deduce the following ballisticity property for X^y,Γ. For any ζ∈ (0,2ε), y∈^2, Γ∈ℋ and n∈, (π_2(X^y,Γ_n)-π_2(y)⩽ζ n)⩽c:ballisticity^-1 e^-c:ballisticity n, where c:ballisticity=c:ballisticity(2ε-ζ) is the constant defined in Proposition <ref>. Moreover, the inequality is also true when replacing by ^μ, for μ∈𝒜. Let y∈^2, Γ∈ℋ, n∈ and μ∈𝒜. Using increment inequality (<ref>) and Proposition <ref>, we have ^μ(π_2(X_n^y,Γ)-π_2(y)⩽ζ n)⩽^μ(X̂_n^y,Γ-X̂_0^y,Γ⩽ζ n)⩽c:ballisticity^-1 e^-c:ballisticity n. 
Now, let Y∈^2 and Λ∈ℋ be two random variables under such that for all y∈^2 and Γ∈ℋ, 1_Y=y,Λ=Γ is ^μ-independent from (U_n^y,Γ)_n∈^*. Then ^μ(π_2(X_n^Y,Λ)-π_2(Y)⩽ζ n) =∑_y∈^2, Γ∈ℋ^μ(π_2(X_n^y,Γ)-π_2(y)⩽ζ n, Y=y, Λ=Γ) =∑_y∈^2, Γ∈ℋ^μ(π_2(X_n^y,Γ)-π_2(y)⩽ζ n) ^μ(Y=y,Λ=Γ) ⩽c:ballisticity^-1 e^-c:ballisticity n where we used independence in the last equality and (<ref>) in the last inequality. Integrating over μ, we also get the inequality for . §.§.§ Vertical lower bound Another key property of biased random walks is the gambler's ruin estimate (see for instance <cit.>, Section 3.9), that gives a formula for the probability of exiting a section of by either of the two sides. This will allow us to have a global lower bound for the second coordinate of X^y,Γ. We define an event guaranteeing that X^y,Γ stays above a certain horizontal line by setting, for H∈, E_H^y,Γ={∀ n∈, π_2(X_n^y,Γ)⩾π_2(y)-H}. c:gambler_ruin There exists c:gambler_ruin>0 such that for every H∈, ((E_H^y,Γ)^c)⩽ e^-c:gambler_ruinH. Moreover when H=0, we even have ((E_0^y,Γ)^c)⩽ 1-2ε. Both inequalities are also true when replacing by ^μ for μ∈𝒜. The proof of Proposition <ref> can be found in the appendix. The localization properties given by Propositions <ref> and <ref> allow us to prove that Lemma <ref> is sufficient to prove Theorem <ref>, using an argument of interpolation. Let v be as in Lemma <ref>. Let n∈^* be such that π_2(X_n)>0 (which, by (<ref>), happens for n large enough -almost surely). Let H_n∈ be such that τ_H_n⩽ n<τ_H_n+1. Note that, using (<ref>) again, H_n+∞. Also, note that since π_2(X_n)<H_n+1, we have, for n large enough, (H_n<ε n)⩽(π_2(X_n)<ε n+1)⩽(π_2(X_n)⩽3ε/2 n)⩽c:ballisticity(ε/2)^-1 e^-c:ballisticity(ε/2) n. Now, note that π_1(X_n)/π_2(X_n)=π_1(X_τ_H_n)/H_n+π_1(X_n)-π_1(X_τ_H_n)/H_n+π_1(X_n)(1/π_2(X_n)-1/H_n). First, using (<ref>) and Lemma <ref>, we have π_1(X_τ_H_n)/H_n v. As for the second term on the right-hand side of (<ref>), let us fix a>0 and note that (|π_1(X_n)-π_1(X_τ_H_n)|⩾ a H_n) ⩽(τ_H_n+1-τ_H_n⩾ aH_n) ⩽(τ_1,X_τ_H_n^X_τ_H_n,N_τ_H_n⩾ aε n)+(H_n<ε n). Now, if we fix μ∈𝒜, we have ^μ(τ_1⩾ a ε n) ⩽^μ(X_⌊ a ε n⌋⩽ε⌊ aε n ⌋) ⩽c:ballisticity(ε)^-1 e^-c:ballisticity(ε) ⌊ aε n⌋ ⩽ c^-1e^-c n. Therefore, using Proposition <ref> and (<ref>), (|π_1(X_n)-π_1(X_τ_H_n)|⩾ a H_n)⩽ c^-1e^-c n, which is summable in n. Using Borel-Cantelli, we obtain that π_1(X_n)-π_1(X_τ_H_n)/H_n0. In order to estimate the third term on the right-hand side of (<ref>), first note that if π_2(X_n)⩾ H_n-H_n^1/2 and H_n⩾ε n, then we have, for n large enough, |π_1(X_n)(1/π_2(X_n)-1/H_n)| =|π_1(X_n)/π_2(X_n)|H_n-π_2(X_n)/H_n⩽n/H_n/2H_n^1/2/H_n=2n H_n^-3/2. In the first equality, we used that H_n-π_2(X_n)⩾ 0, since n<τ_H_n+1. In the inequality, we used that |π_1(X_n)|⩽ n, and that for n large enough, H_n⩾ε n⩾ 4, so H_n-H_n^1/2⩾ H_n/2. Therefore, if we fix a>0, we have, for n large enough, (|π_1(X_n)(1/π_2(X_n)-1/H_n)|⩾ a) ⩽(π_2(X_n)<H_n-H_n^1/2)+(H_n⩽ (2n/a)^2/3)+(H_n<ε n) ⩽((E^X_τ_H_n,N_τ_H_n_⌊ (ε n)^1/2⌋)^c) +3(H_n<ε n) ⩽ e^-c n^1/2+3c^-1e^-cn, using Propositions <ref> and <ref>, as well as (<ref>). Applying the Borel-Cantelli lemma once more, we obtain that π_1(X_n)(1/π_2(X_n)-1/H_n)0. Putting together (<ref>), (<ref>), (<ref>) and (<ref>), we obtain that π_1(X_n)/π_2(X_n)v, concluding the proof of Theorem <ref>. §.§.§ Horizontal bounds It will also be essential to control the horizontal behavior of the random walk. 
The lack of intrinsic information on the horizontal jumps of the random walks does not allow us to get a global horizontal bound as in Section <ref>. However, what we can do using Assumption <ref> is bound the horizontal displacement of the random walk by the time it reaches a certain height. To that end, we define the following event, for y∈^2, Γ∈ℋ and H∈^*: D^y,Γ_H={∀ n∈ 0,τ_H,y^y,Γ, |π_1(X_n^y,Γ)-π_1(y)|⩽β H}. c:horizontal There exists c:horizontal>0 such that for every y∈^2, Γ∈ℋ and H∈^*, ((D^y,Γ_H)^c)⩽c:horizontal^-1e^-c:horizontal H, and the same estimate holds with ^μ for any μ∈𝒜. We refer to the appendix for a proof of Proposition <ref>. §.§.§ Localization in boxes In order to apply Fact <ref>, we will have to localize events in boxes. In practice, this will be done by working on large probability events that ensure that our random walks stay in certain boxes before reaching a certain height, or, in other words, that they exit those boxes through the top side. This simply requires to put together the results of Sections <ref> and <ref>. However, we actually want something stronger: we want to control the behavior of a lot of particles simultaneously. This will be instrumental for Section <ref>. Recall the definition of β in (<ref>). We will also often use the following notation, for H∈_+^*, H'=⌈ H^1/2⌉. We will also use this notation with specific values of H: for instance in the future we will write H_k' for ⌈ H_k^1/2⌉ or (hL_k)' for ⌈ (hL_k)^1/2⌉. Let H∈_+^* and w∈×. We define {[ I_H(w)=(w+[0,H)× [0,H'))∩^2;; ℐ_H(w)=(w+[0,H)×{0})∩^2;; B_H(w)=w+[-β H, (β+1) H)× [-H',H]⊆^2. ]. We also define the following events, for H∈^* and w∈×: F_H(w)=⋂_y∈ I_H(w){X^y_[0,τ_H,w]⊆ B_H(w)}. As usual, I_H=I_H(o), B_H=B_H(o) and F_H=F_H(o). See Figure <ref> for an illustration of those definitions. Note that in Definition <ref>, we used a real parameter H>0, while H is usually an integer. This is because we will use the objects defined above with non-integer parameters as of Section <ref>. Naturally, the choice of H^1/2 and β are made in order for our random walks to exit the boxes that we defined through the top side with large probability. In fact, we have the following estimates. c:box There exists c:box>0 such that for every w∈× and H∈^*, we have (F_H(w)^c)⩽c:box^-1e^-c:box H^1/2. This is also true when replacing by ^μ for μ∈𝒜. Mind that in spite of Corollary <ref>, we may not take w=o (although we could restrict ourselves to w∈ [0,1)×{0}). Let H∈^* and μ∈𝒜. Using a union bound along with Propositions <ref> and <ref>, we have ^μ(F_H(w)^c) ⩽ H H'(^μ(E_H'^c)+^μ(D_H^c)) ⩽ HH'(e^-c:gambler_ruin H^1/2+c:horizontal^-1 e^-c:horizontalH) ⩽ c^-1 e^-cH^1/2. This is a direct consequence of Propositions <ref> and <ref>, along with a union bound. H:size_box For the rest of the paper, we fix H:size_box an integer constant satisfying ∀ H⩾H:size_box, H'⩽min(H/2,2β H). Why is that? We will often use Fact <ref> with B_H(w) and h=H/2. The horizontal size of boxes B_H(w) is precisely (2β+1)H=2(2β+1)h. As for the vertical size, it is equal to H+H', and we want it to be at most (2β+1)H, so we want H'⩽ 2β H. As for the H'⩽ H/2 condition, it is because we will encounter boxes that are (H-H')-separated; in order to apply the decoupling property with vertical separation h=H/2, we therefore need H'⩽ H/2. For the rest of the paper, we will work with H⩾H:size_box. §.§ Cut lines When trying to adapt the ideas of <cit.>, the history that our random walk accumulates will raise issues (see for instance Section <ref>). 
Therefore, it will be very useful to find a time after which our random walk does not revisit the sites visited in the past. In this sense, everything will be as if, considering the random walk after this time, its initial history is everywhere zero. Let z∈. * Let Z=(Z_n)_n∈ be a random walk in and let T_z denote the first hitting time of {z} by Z. We say that z is a cut point for Z if T_z<∞ and for every n⩾ T_z, Z_n⩾ z. In other words, the sample path of the random walk Z can be split into two parts with each part contained in a half-line delimited by z. We set Θ(Z)=inf{a∈, Z_0+a is a cut point for Z}; T_c(Z)=T_Z_0+Θ(Z). * Let now Z=(Z_n)_n∈ be a random walk in ^2. We say that ×{z} is a cut line for Z if z is a cut point for π_2(Z). We extend the previous definitions by setting Θ(Z)=Θ(π_2(Z)) and T_c(Z)=T_c(π_2(Z)). As before, we start by showing estimates on the lower-bound random walk (recall Section <ref>). We refer to the appendix for a proof of the next Lemma. c:cut_line_hat There exists c:cut_line_hat>0 such that for every y∈^2, Γ∈ℋ and a∈, (Θ(X̂^y,Γ)>a)⩽c:cut_line_hat^-1 e^-c:cut_line_hata^1/2. The inequality is also true when replacing by ^μ for μ∈𝒜. c:cut_line There exists c:cut_line>0 such that for every y∈^2, Γ∈ℋ and a∈, (Θ(X^y,Γ)>a)⩽c:cut_line^-1e^-c:cut_linea^1/2. This estimate is also true when replacing by ^μ for μ∈𝒜. We write the proof for y=o and Γ=0 for simplicity. Let μ∈𝒜 and a∈. The crucial idea here is that X_T_c(X̂)+×{0} is a cut line for X. Indeed, using increment inequality (<ref>), we have: * For n∈, π_2(X_T_c(X̂)+n)⩾π_2(X_T_c(X̂))+X̂_T_c(X̂)+n-X̂_T_c(X̂)⩾π_2(X_T_c(X̂)); * For 0<n⩽ T_c(X̂), π_2(X_T_c(X̂)-n)⩽π_2(X_T_c(X̂))+X̂_T_c(X̂)-n-X̂_T_c(X̂)< π_2(X_T_c(X̂)). Using this observation, if we let b=⌊ε a⌋, we get ^μ(Θ(X)>a) ⩽^μ(π_2(X_T_c(X̂))>a) ⩽^μ(π_2(X_T_c(X̂))>a, Θ(X̂)⩽⌊ε a⌋)+^μ(Θ(X̂)>⌊ε a⌋). Now, using Lemma <ref>, ^μ(Θ(X̂)>⌊ε a⌋)⩽ c:cut_line_hat^-1e^-c:cut_line_hat⌊ε a⌋^1/2, and ^μ(π_2(X_T_c(X̂))>a, Θ(X̂)⩽⌊ε a⌋) ⩽^μ(π_2(X_τ̂_⌊ε a⌋)>a) ⩽^μ(τ̂_⌊ε a⌋>a) ⩽c:ballisticity(ε)^-1 e^-c:ballisticity(ε) ⌈ a ⌉ hence the result by adjusting c:cut_line. c:cut_times There exists c:cut_times>0 such that for every y∈^2, Γ∈ℋ and n∈, we have (T_c(X^y,Γ)>n)⩽c:cut_times^-1e^-c:cut_timesn^1/2. This estimate is also true when replacing by ^μ, for μ∈𝒜. Let y=o, Γ=0, n∈ and μ∈𝒜. Using Propositions <ref> and <ref>, ^μ(T_c(X)>n) ⩽^μ(Θ(X)>⌊ε n⌋)+^μ(T_c(X)>n, Θ(X)⩽⌊ε n⌋) ⩽c:cut_line^-1 e^-c:cut_line⌊ε n⌋^1/2+^μ(τ_⌊ε n ⌋>n) ⩽ c^-1 e^-cn^1/2+c:ballisticity(ε)^-1e^-c:ballisticity(ε) n, hence the result by choosing c:cut_times properly. §.§ The multi-scale renormalization method The proofs of several major propositions in the rest of the paper are based on the fundamental idea of multi-scale renormalization, which gives a practical method for using decoupling property (<ref>). We now give a general idea of how such a proof works, and we will often refer to it in the future. Suppose we want to show an estimate for the probability of a certain family of "bad" events (A_H)_H∈. * We start by focusing on a certain subsequence (A_H_k)_k, (H_k)_k∈ being a sequence of scales. We set p_k=(A_H_k). We show that A_H_k+1 is included in two events of probability p_k supported by R_k-separated boxes of maximal side lengths 2(2β+1)R_k. * We deduce the desired estimate for (p_k)_k∈. * Using Fact <ref> and a union bound, we get an inequality p_k+1⩽ C_k (p_k^2+c:decoupling R_k^-α), where C_k is a certain integer. * From this inequality we deduce the desired estimate of p_k by induction on k. 
For this to work, the scales and the bound to show have to be chosen properly. The base case of the induction (often referred to as "triggering") requires arguments that are specific to each case. * We conclude by interpolating the estimate from the (H_k)_k∈ to any parameter H. In order to accommodate to the polynomial decoupling, it will be useful to use the following scales. Recall the definition of H:size_box from Remark <ref>. We set L_0=max(10^10,H:size_box) and, for k⩾ 0, L_k+1=l_k L_k, where l_k=⌊ L_k^1/4⌋. The choice 10^10 will become clearer in the proof of Proposition <ref>. The rest of this paper will be dedicated to showing Lemma <ref>. To do this, we strongly rely on methods developed in <cit.>. First, in Section <ref>, we will show that there exist limiting directions v_- and v_+ that bound the asymptotic behavior of our random walk with high probability. This requires to adapt the methods in <cit.> by addressing two technical issues: the deterministic drift in the time direction is lost in the static framework, and the random walks can revisit their paths. Then, in Section <ref>, we will show that these two directions actually coincide, which will give us the limiting direction v in Lemma <ref>. It is in this part of the proof that introducing a weaker "barrier" property as a replacement of the monotonicity property of <cit.> will be instrumental. § LIMITING DIRECTIONS §.§ Definitions and main results Recall that for us a direction is simply the inverse slope of a line in ^2; for instance, all points x∈^2 satisfying π_1(x)=aπ_2(x) with π_2(x)>0 have direction a. The goal of this section is to show that there exist two directions v_- and v_+ that somehow bound the spatial behavior of our random walker in the long run. This property is made clearer in Lemma <ref>. It will consist of the first part of the proof of Lemma <ref>, and we will show that in fact v_-=v_+ in Section <ref>, thus concluding the proof. As a matter of fact, we aim at showing a stronger version of Lemma <ref> by considering not only one fixed particle but all the particles starting simultaneously in I_H(w) from Definition <ref>. This will be instrumental in Section <ref>, where we will need to control the directions of lots of particles at once. Recall also notation τ^y,Γ_H,w from Definition <ref>. Let w∈× and H∈^*. Let y∈ I_H(w) and Γ∈ℋ. We define the empirical direction of X^y,Γ at height H with reference point w to be [ V_H,w^y,Γ=1/H (π_1(X^y,Γ_τ_H,w)-π_1(y)). ] As usual, when w or y are not mentioned, it means that we are considering the origin, and an omission of Γ means Γ=0. Now let v∈. We consider the following events: [ A_H,w(v)={∃ y∈ I_H(w), V_H,w^y⩾ v};; Ã_H,w(v)={∃ y∈ I_H(w), V_H,w^y⩽ v}. ] As usual, A_H(v)=A_H,o(v) and A_H(v)=A_H,o(v). We set [ p_H(v)=(A_H(v));; p̃_H(v)=(Ã_H(v)). ] We define the limiting directions by setting [ v_+=inf{v∈, lim inf_H→∞ p_H(v)=0};; v_-=sup{v∈, lim inf_H→∞p̃_H(v)=0}. ] Note that when π_2(y)=π_2(w), V_H,w^y,Γ=V_H,y^y,Γ is simply the inverse slope of the line connecting y and X^y,Γ_τ_H,w: it is the direction of X^y,Γ at height H. Mind that when π_2(y)>π_2(w) however, this is not exactly true anymore. Note that because of translation invariance, (A_H,w(v)) and (Ã_H,w(v)) actually do not depend on w, which is why we only considered the origin for the definitions of v_- and v_+. Indeed, we can first restrict ourselves to w∈ (-1,0]×{0}, using Corollary <ref>. Then, H being an integer here, I_H(w)=I_H(o) for every w∈(-1,0]×{0}, so that A_H,w(v)=A_H(v). 
This would be wrong if H was any positive real number, and that is why we will have to be more careful later, in Lemma <ref>. It may sound unclear why we use liminfs in the definitions of v_- and v_+, instead of limsups. In fact, this will be required in order to get a much needed uniform lower bound on the probability for the random walk to attain average direction greater but close to v_- over long time intervals (see Lemma <ref>). Note that we never stated that v_-⩽ v_+, although it would be very tempting to say that it is obvious. In fact, it is not an obvious consequence of their definitions, but it will be a consequence of Lemma <ref>. We have the following bounds on v_- and v_+: {[ -β⩽ v_+⩽β;; -β⩽ v_-⩽β. ]. The proof being symmetric, let us just focus on the bounds for v_+. * If v<-β, then using Proposition <ref>, p_H(v)=(∃ y∈ I_H, V_H^y⩾ v) ⩾(V_H⩾ -β) ⩾(D_H) ⩾ 1-c:horizontal^-1 e^-c:horizontalH 1. * If v>β, using Proposition <ref> again, p_H(v) =(∃ y∈ I_H, V_H^y⩾ v)⩽(∃ y∈ I_H, V_H^y>β) ⩽ HH' sup_y∈ I_H(w)((D_H^y)^c)⩽c:horizontal^-1 HH' e^-c:horizontalH^1/2 0. Note that v∈↦ p_H(v) is a non-increasing function. Therefore, for v>v_+, we must have lim inf_H→∞ p_H(v)=0. Similarly, for v<v_-, lim inf_H→∞p̃_H(v)=0. In spite of Remark <ref>, the definitions that we gave for v_- and v_+ are quite weak at first glance, because we only have information on the liminfs. Our goal now is to show that for v>v_+ and v<v_-, the liminfs given in Remark <ref> are actual limits, and we will even prove a precise estimate for p_H(v) and p̃_H(v) when H goes to infinity. c:deviation For every ξ>0, there exists c:deviation=c:deviation(ξ)>0 such that for every H∈^*, {[ p_H(v_++ξ)⩽c:deviation H^-α/4;; p̃_H(v_–ξ)⩽c:deviation H^-α/4. ]. The proof of Lemma <ref> is the goal of Section <ref>. The two limiting directions satisfy v_-⩽ v_+. We will then show the following result, using Corollary <ref>. We have v_-=v_+. We call this quantity v. The proof of Lemma <ref> is the goal of Section <ref> and can be found more precisely in Section <ref>. For now, let us now prove Lemma <ref> as a consequence of Lemmas <ref> and <ref>. Let ξ>0. Combining Lemma <ref> with Lemma <ref>, we have, for every H∈^*, (|X_τ_H/H-v|⩾ξ)⩽ 2 c:deviation(ξ) H^-α/4. Therefore, since α>4, ∑_n∈(|X_n/n-v|⩾ξ)<∞. As a consequence, Borel-Cantelli's lemma ensures that -almost surely, X_τ_H/H v, concluding the proof of Lemma <ref>. §.§ Deviation bounds: proof of Lemma <ref> §.§.§ Ideas of the proof k:first_scale Let us first give some heuristic insight on how the proof is going to unfold. * We will only show the estimate for p_H(v), where v>v_+. The estimate for p_H(v) with v<v_- is shown in the same way, the proof being symmetric. * The road map for the proof is given by the renormalization method explained in Section <ref>, with a sequence of scales given by (h_0L_k)_k⩾k:first_scale, where h_0 and k:first_scale will have to be chosen properly. In the induction that will give an estimate on this sequence of scales, the choice of k:first_scale and the definition of (L_k)_k∈ will be instrumental in the induction step, while h_0 is chosen for the base case to work. * We are going to work with the sequence of events (A_h_0L_k(v_k))_k⩾k:first_scale with an appropriate choice of (v_k)_k⩾k:first_scale. The goal is to show that with good probability, on A_h_0L_k+1(v_k+1), we can find events A_h_0L_k,w_1(v_k) and A_h_0L_k,w_2(v_k) with certain base points w_1,w_2 located on a grid whose cardinality does not depend on h_0. 
This would be wrong if H were any positive real number, and that is why we will have to be more careful later, in Lemma <ref>. It may sound unclear why we use liminfs in the definitions of v_- and v_+, instead of limsups. In fact, this is required in order to obtain a much-needed uniform lower bound on the probability that the random walk attains an average direction greater than, but close to, v_- over long time intervals (see Lemma <ref>). Note that we never stated that v_-⩽ v_+, although it would be very tempting to say that it is obvious. In fact, it is not an obvious consequence of their definitions, but it will be a consequence of Lemma <ref>. We have the following bounds on v_- and v_+: {[ -β⩽ v_+⩽β;; -β⩽ v_-⩽β. ]. The proof being symmetric, let us just focus on the bounds for v_+. * If v<-β, then using Proposition <ref>, p_H(v)=(∃ y∈ I_H, V_H^y⩾ v) ⩾(V_H⩾ -β) ⩾(D_H) ⩾ 1-c:horizontal^-1 e^-c:horizontalH, which tends to 1 as H→∞. * If v>β, using Proposition <ref> again, p_H(v) =(∃ y∈ I_H, V_H^y⩾ v)⩽(∃ y∈ I_H, V_H^y>β) ⩽ HH' sup_y∈ I_H((D_H^y)^c)⩽c:horizontal^-1 HH' e^-c:horizontalH^1/2, which tends to 0 as H→∞. Note that v↦ p_H(v) is a non-increasing function. Therefore, for v>v_+, we must have lim inf_H→∞ p_H(v)=0. Similarly, for v<v_-, lim inf_H→∞p̃_H(v)=0. In spite of Remark <ref>, the definitions that we gave for v_- and v_+ are quite weak at first glance, because we only have information on the liminfs. Our goal now is to show that for v>v_+ and v<v_-, the liminfs given in Remark <ref> are actual limits, and we will even prove a precise estimate for p_H(v) and p̃_H(v) when H goes to infinity. c:deviation For every ξ>0, there exists c:deviation=c:deviation(ξ)>0 such that for every H∈^*, {[ p_H(v_++ξ)⩽c:deviation H^-α/4;; p̃_H(v_- -ξ)⩽c:deviation H^-α/4. ]. The proof of Lemma <ref> is the goal of Section <ref>. The two limiting directions satisfy v_-⩽ v_+. We will then show the following result, using Corollary <ref>. We have v_-=v_+. We call this quantity v. The proof of Lemma <ref> is the goal of Section <ref> and can be found more precisely in Section <ref>. For now, let us prove Lemma <ref> as a consequence of Lemmas <ref> and <ref>. Let ξ>0. Combining Lemma <ref> with Lemma <ref>, we have, for every H∈^*, (|π_1(X_τ_H)/H-v|⩾ξ)⩽ 2 c:deviation(ξ) H^-α/4. Therefore, since α>4, ∑_H⩾ 1(|π_1(X_τ_H)/H-v|⩾ξ)<∞. As a consequence, the Borel-Cantelli lemma ensures that -almost surely, π_1(X_τ_H)/H→ v, concluding the proof of Lemma <ref>. §.§ Deviation bounds: proof of Lemma <ref> §.§.§ Ideas of the proof k:first_scale Let us first give some heuristic insight on how the proof is going to unfold. * We will only show the estimate for p_H(v), where v>v_+. The estimate for p̃_H(v) with v<v_- is shown in the same way, the proof being symmetric. * The road map for the proof is given by the renormalization method explained in Section <ref>, with a sequence of scales given by (h_0L_k)_k⩾k:first_scale, where h_0 and k:first_scale will have to be chosen properly. In the induction that will give an estimate on this sequence of scales, the choice of k:first_scale and the definition of the scales (L_k) will be instrumental in the induction step, while h_0 is chosen for the base case to work. * We are going to work with the sequence of events (A_h_0L_k(v_k))_k⩾k:first_scale with an appropriate choice of (v_k)_k⩾k:first_scale. The goal is to show that with good probability, on A_h_0L_k+1(v_k+1), we can find events A_h_0L_k,w_1(v_k) and A_h_0L_k,w_2(v_k) with certain base points w_1,w_2 located on a grid whose cardinality does not depend on h_0.
The challenge is that we asked those two events to have everywhere-zero histories. One way to find them is to look for the two starting points y_1 and y_2 (from the definitions of A_h_0L_k,w_1(v_k) and A_h_0L_k,w_2(v_k)) on cut lines that we ask to be at vertical distance less than (h_0L_k)^1/2 of two points w_1 and w_2 on our grid. This is the whole reason why in our paper, I_H(w) is a flattened rectangle, instead of being a true horizontal interval as in <cit.>. Recall that we fixed v>v_+. Obviously, p_H(v)⩽_H(v), therefore we only need to show that there exists c:deviation>0 such that for every H∈^*, _H(v)⩽c:deviation H^-α/4. §.§.§ Control of the liminf In this section, we show that lim inf_H→∞_H(v)=0. Let v'=v_++v/2. Because v'>v_+, by Remark <ref> we have lim inf_H→∞ p_H(v')=0. Therefore, there exists a sequence of integers (n_k)_k∈ such that p_n_k(v')0. For H∈^*, let H”=⌈ H^1/4⌉. For k∈, let m_k=n_k-n_k”. Our goal is to show that _m_k(v)0, which would conclude the proof of (<ref>). Let y∈_m_k. Let y'=y+(m_k”,-π_2(y)). Note that y'∈ I_n_k, since m_k”⩽ n_k”. Let us define a monotonicity event Mon_k={X^y_τ_m_k-π_2(y)⩽ X^y'_τ_m_k}. Then, on Mon_k∩{_m_k(y)⩾ v}∩ F_n_k”^N_τ_m_k^y'(X_τ_m_k^y'), we have X^y'_τ_n_k ⩾π_1(y)+v (m_k-π_2(y))-(β+1)n_k” ⩾π_1(y')-n_k”+v (m_k-n_k”-n_k')-(β+1)n_k” =π_1(y')+(v-ϵ_k) n_k. where ϵ_k⩾ 0 and ϵ_k0. Therefore, for k large enough (depending on v_+ and v), X^y'_τ_n_k⩾π_1(y')+v'n_k, which means that we are on A_n_k(v'). Therefore, _m_k(v)⩽ p_n_k(v')+(Mon_k^c)+(( F_n_k”^N_τ_m_k^y'(X_τ_m_k^y'))^c). §.§.§ Choice of h_0 and k:first_scale Let us fix v>v_+. Recall Definition <ref>. We let k:first_scale=k:first_scale(v)∈^* be such that ∑_k⩾k:first_scale(2β/H_k'+6β/l_k)<v-v_+/2. We also set v_k:first_scale=v+v_+/2. Using Remark <ref>, note that since v_k:first_scale>v_+, lim inf_H→∞ p_H(v_k_0)=0. Therefore there exists H⩾ L_k:first_scale such that p_H(v_k:first_scale)⩽ L_k:first_scale^-α/2. Let h_0=H/L_k:first_scale∈ [1,∞). By definition, we have h_0L_k∈ for all k⩾k:first_scale, and p_h_0 L_k:first_scale(v_k_0)⩽ L_k:first_scale^-α/2. This will be the base case for our estimate on p_H(v). Now that h_0 is fixed, we let H_k=h_0L_k for all k⩾ k_0. Recall notation H_k' from (<ref>). We now define a sequence (v_k)_k⩾k:first_scale by setting, for k⩾k:first_scale, {[ v_k'=v_k+2β/H_k';; v_k+1=v_k'+6β/l_k. ]. This definition combined with (<ref>) and the fact that h_0⩾ 1 ensures that v_k v_∞<v. Therefore, if we show that ∀ k⩾k:first_scale, p_H_k(v_k)⩽ L_k^-α/2, then we get, using Remark <ref>, ∀ k⩾k:first_scale, p_H_k(v)⩽ L_k^-α/2. So the second step of the proof will be devoted to showing estimate (<ref>) by induction on k. §.§.§ Proof of (<ref>) c:cardinality Definition of the grid. Let us fix k⩾k:first_scale. Recall (<ref>). In order to link scales h_0 L_k and h_0 L_k+1, we define the grid _k⊆× H_k to be such that ⋃_w∈_kℐ_H_k(w)=B_H_k+1∩(× H_k), where the union above is disjoint (note that boxes B_H_k(w) with w∈_k are not disjoint though). The cardinality of _k can be bounded from above by c:cardinality l_k^2, where c:cardinality>0. Localization at scale k. We first need to define an event ℱ_k that guarantees that the random walks starting in I_H_k+1 will stay in B_H_k+1 and that their horizontal behaviours at scale H_k are properly bounded. To define this precisely, we set, for y∈ I_H_k+1 and j∈ 0,l_k, {[ 𝒳_j^y=X^y_τ_jH_k;; 𝒩_j^y=N^y_τ_jH_k. ]. Note that 𝒳_0^y=y and 𝒩_0^y=0. 
Note also that for j⩾ 1, τ_jH_k^y>0, since π_2(I_H_k+1) is included in [0,H_k) (indeed, H_k+1'=⌈ (h_0 L_k+1)^1/2⌉⩽ h_0 L_k^5/8⩽ H_k). Therefore, indices satisfying j⩾ 1 will not be a problem even when π_2(y)>0, while j=0 will be set aside in the next steps of the proof. Recall Definitions (<ref>) and (<ref>). Let ℱ_k=F_H_k+1∩⋂_y∈ I_H_k+1⋂_j=0^l_k-1 D_H_k^𝒳_j^y,𝒩_j^y. Note that in order to bound the horizontal displacement of X^y between times 0 and τ_H_k^y, it would have been sufficient to consider D_H_k-π_2(y)^y instead of D_H_k^y, but the stronger event given by (<ref>) is more pleasant to write and work with. For each y and j, in order to bound ((D_H_k^𝒳_j^y,𝒩_j^y)^c), we use Proposition <ref> with stopping time τ_jH_k^y. Using Proposition <ref>, for every μ∈𝒜, z∈^2 and Γ∈ℋ, we have ^μ((D_H_k^z,Γ)^c)⩽c:horizontal^-1 e^-c:horizontal H_k, which is uniform in μ, z and Γ. So, by Proposition <ref>, for every y and j, we have ((D_H_k^𝒳_j^y,𝒩_j^y)^c)⩽c:horizontal^-1 e^-c:horizontal H_k^1/2. In the end, using union bounds and Proposition <ref>, (ℱ_k^c)⩽c:box^-1 e^-c:box H_k+1^1/2+H_k+1 H_k+1' l_k c:horizontal^-1 e^-c:horizontal H_k^1/2⩽ c^-1 e^-cH_k^1/2, where c>0 does not depend on h_0 and k:first_scale. Link between scales k and k+1. We are now moving on to the crucial idea of the proof: on event A_H_k+1(v_k+1), which is observable at scale k+1, several similar events occur at scale k. Let us fix y∈ I_H_k+1. We claim to have the following inclusion of events: {V^y_H_k+1⩾ v_k+1} ∩ ℱ_k ⊆{[ there exist three j∈ 1,l_k-1; such that π_1(𝒳_j+1^y)⩾π_1(𝒳_j^y)+v_k' H_k ]}. Indeed, let us argue by contraposition and assume that π_1(𝒳_j+1^y)⩾π_1(𝒳_j^y)+v_k'H_k for at most two j∈ 1,l_k-1. The horizontal displacement of X^y between times 0 and τ^y_H_k+1 is the sum of l_k horizontal displacements, l_k-3 of which we can now bound by v_k'H_k, and the three remaining ones can be bounded using D_H_k^𝒳_j^y,𝒩_j^y. More precisely, on ℱ_k, π_1(X^y_τ_H_k+1) =π_1(𝒳_l_k^y) =π_1(𝒳_1^y)+∑_j=1^l_k-1(π_1(𝒳^y_j+1)-π_1(𝒳^y_j)) <π_1(y)+(l_k-3)v_k'H_k+3β H_k =π_1(y)+(v_k'+3(β-v_k')/l_k)H_k+1 <π_1(y)+(v_k'+6β/l_k)H_k+1 =π_1(y)+v_k+1H_k+1, where in the last inequality, we used bounds (<ref>) to get that v_k'>v_k>v_+⩾ -β. This concludes the proof of (<ref>). Removal of histories. The issue now is that in (<ref>), events {π_1(𝒳_j+1^y)⩾π_1(𝒳_j^y)+v_k'H_k} implicitly feature a non-zero history 𝒩_j^y, while our goal is to get zero-history events A_H_k,w(v_k) for two w∈_k. In order to get those, we use cut lines as defined in Section <ref>, which requires defining a new event of large probability that will fulfill the technical requirements for the rest of the argument, namely that we find cut lines quickly enough, that before then the random walks do not go too far horizontally, and that they all stay in boxes allowing us to use decoupling. Recall notation H_k' defined in (<ref>). We let 𝒢_k=⋂_y∈ I_H_k+1 ⋂_j=1^l_k-1 (D_H_k'^𝒳_j^y,𝒩_j^y∩{Θ(X^𝒳_j^y,𝒩_j^y)<H_k'})∩⋂_w∈_k F_H_k(w). In order to control the probability of 𝒢_k, we use Proposition <ref> again, as well as Propositions <ref> and <ref>. Using a union bound, we have (𝒢_k^c) ⩽ H_k+1H_k+1' l_k sup_y,j[((D_H_k'^𝒳_j^y,𝒩_j^y)^c)+(Θ(X^𝒳_j^y,𝒩_j^y)> H_k'/2)]+c:cardinalityl_k^2 c:box^-1e^-c:box H_k^1/2 ⩽ H_k+1H_k+1' l_k (c:horizontal^-1 e^-c:horizontal H_k^1/2 + c:cut_line^-1 e^-c:cut_line 2^-1/2 H_k^1/4)+ c^-1e^-c H_k^1/2 ⩽ c^-1 e^-c H_k^1/4, where c>0 does not depend on h_0 and k:first_scale. 
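Before the zero-history events are extracted below via cut lines, here is a quick numerical check of the algebra used in the contraposition step above (illustrative only; the values of β, v_k', l_k and H_k are arbitrary and ours): with the j=0 displacement and at most two intermediate displacements bounded by βH_k, and the remaining l_k-3 of them below v_k'H_k, the total horizontal displacement indeed stays below v_{k+1}H_{k+1}.

```python
# Sanity check of the scale-linking algebra: if at most two of the l_k - 1
# displacements between consecutive levels j*H_k exceed v'_k * H_k (each of
# them being at most beta * H_k on the event F_k), and the initial
# displacement up to height H_k is at most beta * H_k, then the total
# horizontal displacement is below v_{k+1} * H_{k+1}.

beta = 3.0        # illustrative value of the range constant beta
l_k = 50          # illustrative value of l_k = floor(L_k^{1/4})
H_k = 10_000      # illustrative value of H_k = h_0 * L_k
v_k_prime = -1.0  # any value > -beta works, as used in the text

H_k1 = l_k * H_k                      # H_{k+1} = l_k * H_k
v_k1 = v_k_prime + 6 * beta / l_k     # v_{k+1} = v'_k + 6 beta / l_k

# Worst case allowed by the contraposition: the j = 0 displacement and two
# intermediate displacements are as large as F_k permits, the other l_k - 3
# are at most v'_k * H_k.
worst_total = beta * H_k + 2 * beta * H_k + (l_k - 3) * v_k_prime * H_k

# Closed form from the text: (v'_k + 3 (beta - v'_k) / l_k) * H_{k+1}.
closed_form = (v_k_prime + 3 * (beta - v_k_prime) / l_k) * H_k1

assert abs(worst_total - closed_form) < 1e-6 * H_k1
assert worst_total < v_k1 * H_k1      # strict, because beta - v'_k < 2 beta
print(worst_total, "<", v_k1 * H_k1)
```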
Let j∈ 1,l_k-1, and let θ_j^y be the location of X^y on the first cut line reached after height jH_k, that is θ_j^y=X^𝒳_j^y,𝒩_j^y_T_c(X^𝒳_j^y,𝒩_j^y) (recall notations from Definition <ref>). On 𝒢_k ∩ {π_1(𝒳_j+1^y)⩾π_1(𝒳_j^y)+v_k'H_k}, we have π_1(X^θ_j^y_τ_H_k,𝒳^y_j) =π_1(𝒳_j+1^y) ⩾π_1(𝒳_j^y)+v_k'H_k ⩾π_1(θ_j^y)+v_k'H_k-β H_k' using D^𝒳_j^y,𝒩_j^y_H_k' ⩾π_1(θ_j^y)+v_kH_k so, in other words, V^θ_j^y_H_k,𝒳_j^y⩾ v_k. Using (<ref>), this means that we have found three points (given by θ_j^y for three values of j∈ 1,l_k-1) with the right lower bound on their directions and with everywhere-zero initial histories. Furthermore, on ℱ_k∩𝒢_k, we have π_2(θ_j^y)<π_2(𝒳_j^y)+H_k'=jH_k+H_k' (since for j⩾ 1, τ^y_jH_k>0). Therefore the θ_j^y are located in three rectangles I_H_k(w_i) for w_i∈_k satisfying |π_2(w_i)-π_2(w_j)|⩾ H_k for i<j. As a result, ℱ_k∩𝒢_k∩ A_H_k+1(v_k+1) ⊆⋃_|π_2(w_1)-π_2(w_2)|⩾ 2H_kw_1,w_2∈𝒞_k[ (A_H_k,w_1(v_k) ∩ F_H_k(w_1)); ∩ (A_H_k,w_2(v_k) ∩ F_H_k(w_2)). ] Now, events A_H_k,w_1(v_k) ∩ F_H_k(w_1) and A_H_k,w_2(v_k) ∩ F_H_k(w_2) above are respectively measurable with respect to boxes B_H_k(w_1) and B_H_k(w_2), which have maximum side lengths (2β+1)H_k and are H_k/2-separated under condition |π_2(w'_1)-π_2(w'_2)|⩾ 2H_k (recall Remark <ref>). Therefore, we can use Fact <ref> to get c:end_proof (ℱ_k∩𝒢_k∩ A_H_k+1(v_k+1)) ⩽ |_k|^2 (p_H_k(v_k)^2+c:decoupling(H_k/2)^-α) ⩽c:cardinality^2 l_k^4 (p_H_k(v_k)^2+c:decoupling(H_k/2)^-α). In the end, using bounds (<ref>) and (<ref>), we get (A_H_k+1(v_k+1)) ⩽c:cardinality^2 l_k^4 (p_H_k(v_k)^2+c:decoupling(H_k/2)^-α)+(ℱ_k^c)+(𝒢_k^c) ⩽c:end_proof l_k^4(p_H_k(v_k)^2+c:end_proof L_k^-α), for a certain constant c:end_proof>0 that does not depend on h_0 and k:first_scale (to which we gave a name because we will need it again for the proof of Proposition <ref> at the end of our paper). By induction, we can conclude that if p_H_k(v_k)⩽ L_k^-α/2, then p_H_k+1(v_k+1)/L_k+1^-α/2⩽ c L_k+1^α/2 l_k^4 L_k^-α⩽ c L_k^-3α+8/8, for a well-chosen constant c>0 that does not depend on h_0 and k_0. Since α>3, up to taking an even larger k:first_scale (independently on h_0), we can assume that this is less than 1, which concludes the induction and the proof of estimate (<ref>). §.§.§ Interpolation k:first_scale_largek:scale_interpolation In Section <ref>, we proved (<ref>), which, as we explained in Section <ref>, implies estimate (<ref>). To sum up, so far we have shown that ∀ v>v_+, ∃ k:first_scale(v)∈^*, ∃ h_0=h_0(v)⩾ 1, ∀ k⩾k:first_scale, p_h_0L_k(v)⩽ L_k^-α/2. We want to interpolate this estimate to show that ∀ v>v_+, ∃ c:deviation=c:deviation(v)>0, ∀ H∈^*, p_H(v)⩽c:deviation H^-α/4. Let v>v_+. Set v'=v_++v/2, v”=v'+v/2, h_0=h_0(v') and k:first_scale_large⩾k:first_scale(v') be such that L_k:first_scale_large^1/10>2β/v-v'; 2β/L_k:first_scale_large^1/2+2|v'|/L_k:first_scale_large^1/10⩽ v”-v'. Let H⩾ (h_0L_k:first_scale_large)^11/10, and let k:scale_interpolation⩾k:first_scale_large be such that (h_0L_k:scale_interpolation)^11/10⩽ H<(h_0L_k:scale_interpolation+1)^11/10. Since k:scale_interpolation⩾k:first_scale(v') and v'>v_+, using (<ref>), p_h_0 L_k:scale_interpolation(v')⩽ L_k:scale_interpolation^-α/2. In the same fashion as in (<ref>), let us define ⊆× h_0L_k:scale_interpolation to be a minimal set satisfying ⋃_w∈ℐ_h_0L_k:scale_interpolation(w)⊇ B_H∩(× h_0L_k:scale_interpolation), where the union is disjoint (mind that here, H has no reason to be a multiple of h_0L_k:scale_interpolation, which is why we ask for an inclusion instead of an equality). 
The cardinality of satisfies ||⩽ c(H/h_0L_k:scale_interpolation)^2⩽ c L_k:scale_interpolation^3/4, using (<ref>). Recall notation H_k:scale_interpolation from (<ref>). Note that (<ref>) implies that H'<H_k:scale_interpolation, therefore every y∈ I_H satisfies τ^y_H_k:scale_interpolation>0. Now let H̅=⌊ H/H_k:scale_interpolation⌋ H_k:scale_interpolation be the last multiple of H_k:scale_interpolation before H. For j∈ 0,⌊ H/H_k:scale_interpolation⌋-1 and y∈ I_H, we let 𝒳_j^y=X^y_τ_jH_k:scale_interpolation and 𝒩_j^y=X^y_τ_jH_k:scale_interpolation. Let us define to be a minimal set satisfying ⋃_w∈ℐ_H_k:scale_interpolation(w)=B_H∩(× H_k:scale_interpolation). We will work with the following events: 𝒜_1=⋂_w∈𝒞̂ A_H_k:scale_interpolation,w(v')^c; 𝒜_2=⋂_y∈ I_H{π_1(X^y_τ_H)-π_1(X^y_τ_H̅)<(v-v”)H-β H_k:scale_interpolation}; ℱ= F_H∩⋂_y∈ I_H D^y_ H_k:scale_interpolation 𝒢=⋂_y∈ I_H⋂_j=1^⌊ H/H_k:scale_interpolation⌋-1( D_H_k:scale_interpolation'^𝒳_j^y,𝒩_j^y ∩ {Θ(X^𝒳_j^y,𝒩_j^y)<H_k:scale_interpolation'}). For any y∈ I_H, events 𝒜_1 and 𝒢 (as well as F_H) allow us to bound the displacement of X^y between times τ_jH_k:scale_interpolation^y and τ_(j+1)H_k:scale_interpolation^y for j∈ 1,⌊ H/H_k:scale_interpolation-1⌋. Indeed, the conditions on cut lines given by 𝒢 allow us to find a point inside I_H_k:scale_interpolation(w), for a certain w∈, for which we can use A_H_k:scale_interpolation,w(v')^c given by 𝒜_1. More precisely, on 𝒜_1∩ℱ∩𝒢 and for every y∈ I_H, we have π_1(X^y_τ_H̅) =π_1(X^y_τ_H_k:scale_interpolation)+∑_j=1^⌊ H/H_k:scale_interpolation⌋-1( π_1(X^y_τ_(j+1)H_k:scale_interpolation)-π_1(X^y_τ_jH_k:scale_interpolation) ) ⩽π_1(X^y_τ_H_k:scale_interpolation)+(⌊ H/H_k:scale_interpolation⌋-1) (β H_k:scale_interpolation'+v'H_k:scale_interpolation) ⩽π_1(X^y_τ_H_k:scale_interpolation)+H/H_k:scale_interpolation(β H_k:scale_interpolation'+v'H_k:scale_interpolation)+2|v'| H_k:scale_interpolation ⩽π_1(X^y_τ_H_k:scale_interpolation)+v”H, where in the last line we used (<ref>) and (<ref>). Therefore, on 𝒜_1∩𝒜_2∩ℱ∩𝒢, we have, for every y∈ I_H, π_1(X^y_τ_H)-π_1(y) =(π_1(X^y_τ_H_k:scale_interpolation)-π_1(y) )+(π_1(X^y_τ_H̅)-π_1(X^y_τ_H_k:scale_interpolation) )+(π_1(X^y_τ_H)-π_1(X^y_τ_H̅)) < β H_k:scale_interpolation+v”H + (v-v”)H-β H_k:scale_interpolation=vH. Note that H'<h_0L_k:scale_interpolation, using (<ref>). Therefore, τ^y_jh_0L_k:scale_interpolation>0 for j⩾ 1. So what we are going to do is bound three terms: π_1(X^y_τ_h_0L_k:scale_interpolation)-π_1(y) using D^y_h_0L_k:scale_interpolation, π_1(X^y_τ_H̅)-π_1(X^y_τ_h_0L_k:scale_interpolation) using 𝒜_1^y and π_1(X^y_τ_H)-π_1(X^y_τ_H̅) using 𝒜_2^y. On D^y_h_0L_k:scale_interpolation∩𝒜_1^y∩𝒜_2^y, we have π_1(X^y_τ_H)-π_1(y)<β h_0L_k:scale_interpolation+ v'(⌊H/h_0L_k:scale_interpolation⌋-1)h_0L_k:scale_interpolation+(v-v')H-β h_0 L_k:scale_interpolation⩽ vH. 𝒜^y_2={π_1(X^y_τ_H)-π_1(X^y_τ_H̅)<(v-v')H-β h_0L_k:scale_interpolation} As a result, A_H(v)⊆𝒜_1^c∪𝒜_2^c ∪ℱ^c∪𝒢^c. Now, note that (𝒜_1^c)⩽ c (H/H_k:scale_interpolation)^2 L_k:scale_interpolation^-α/2, using (<ref>). Also, by (<ref>) and the fact that k:scale_interpolation⩾k:first_scale_large, as well as (<ref>), we have (v-v')H>2β H_k:scale_interpolation, therefore (𝒜_2^c) ⩽ HH' (π_1(X^X^y_τ_H̅,N^y_τ_H̅_τ_H-H̅,X^y_τ_H̅)-π_1(X^y_τ_H̅)>β H_k:scale_interpolation) ⩽ HH' sup_μ∈𝒜 sup_z,Γ ^μ((D_H_k:scale_interpolation^z,Γ)^c) ⩽ HH' c:horizontal^-1e^-c:horizontal H_k:scale_interpolation, using Propositions <ref> and <ref>. 
We also have (ℱ^c)⩽c:box^-1 e^-c:box H^1/2+HH' c:horizontal^-1 e^-c:horizontal H_k:scale_interpolation; (𝒢^c)⩽ HH' H/H_k:scale_interpolation( c:horizontal^-1 e^-c:horizontal H_k:scale_interpolation^1/2+c:cut_line^-1 e^-c:cut_line 2^-1/2 H_k:scale_interpolation^1/4). Using (<ref>), we can see that the upper bounds given by (<ref>), (<ref>) and (<ref>) are all negligible with respect to that given by (<ref>), so we get (A_H(v)) ⩽ c (H/H_k:scale_interpolation)^2 L_k:scale_interpolation^-α/2 ⩽ c H^2 L_k:scale_interpolation^-α/2 ⩽ c H^2 H^-5α/7 ⩽ c H^-α/4 By adjusting c to accommodate small values of H, this concludes the proof of (<ref>) and therefore the proof of Lemma <ref>. § EQUALITY OF THE LIMITING DIRECTIONS: PROOF OF LEMMA <REF> The goal of this section is to show Lemma <ref>. The heuristic idea behind the proof is the following. The definitions of v_+ and v_- ensure that the random walk often has directions close to v_- and v_+. On the other hand, the probability that the random walk has a direction larger than v_++ξ (where ξ>0 is fixed) decreases quickly, as was shown in Lemma <ref>. Therefore, assuming by contradiction that v_+>v_-, the moments when its direction stays close to v_- may prevent it from reaching a direction close to v_+ in the future, which would be a contradiction. However, the random walk might be able to compensate by going faster than v_++ ξ(H) for some well-chosen ξ(H). This is why we need precise estimates, and these will be given by a notion of trap that we will introduce further on. We start by presenting a major property of our model, which comes from the coupling of random walks that we chose. The choice of the coupling is actually made in order to get this property, which is inspired by the arguments from <cit.>. Roughly, it says that particles block each other in some weak sense: a random walk can always bypass another random walk, but this happens with low probability. §.§ Barrier property Let ∈̋ and x_0,x_0' be two distinct points of ^2. Set T(x_0)={[ sup {t⩽τ^x_0_a+H, π_2(X^x_0_t)=π_2(x_0')} if this set is non-empty;; ∞ otherwise. ]. T(x_0')={[ sup {t⩽τ^x_0'_H, π_2(X^x_0'_t)=π_2(x_0)} if this set is non-empty;; ∞ otherwise. ]. We also define ℰ^right_x_0',H(x_0)={T<∞ and π_1(X_T^x_0)> π_1(x_0')}; ℰ^left_x_0,H(x_0')={T'<∞ and π_1(X_T'^x_0')< π_1(x_0)}; ℰ^right_x_0,H(x_0')={T'<∞ and π_1(X_T'^x_0')> π_1(x_0)}; ℰ^left_x_0,H(x_0')={T'<∞ and π_1(X_T'^x_0')< π_1(x_0)}. We say that X^x_0 bypasses x_0' to the right at scale H if ℰ_x_0',H^right(x_0) occurs, and that X^x_0' bypasses x_0 to the left at scale H if ℰ_x_0,H^left(x_0') occurs. Let x_0,x_0'∈^2 such that π_2(x_0)=π_2(x_0')=H_0∈ and π_1(x_0)<π_1(x_0'). Let ∈̋. Suppose that τ=τ_H^x_0<∞ and τ'=τ_H^x_0'<∞ (which happens -almost surely). Then at least one of the following scenarios occurs: * There exists t<τ such that π_2(X_t^x_0)<H_0; * There exists t'<τ' such that π_2(X_t'^x_0')<H_0; * π_1(X_τ^x_0)⩽π_1(X_τ'^x_0'). In other words, if both random walks do not come back under their initial vertical coordinate, then they will keep the same horizontal order. This is a kind of monotonicity property. Actually, we will deduce from this lemma a stronger property (see Proposition <ref>), which can be summed up as follows: the only way for two random walks starting on the same horizontal line to end up on a higher horizontal line having swapped horizontal orders, is for one of the random walks to bypass the other one from below. 
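The bypassing events can be made concrete by a short helper. The sketch below is ours and only illustrative (the time T above is defined through the last visit to the level of the other point before a reaching time; we use one natural reading of that definition, and the function names are not from the text): it checks whether a finite path passes a given reference point strictly on the right or strictly on the left at that last visit.

```python
def last_visit_to_level(path, level, stop_index):
    """Index of the last visit to the horizontal line pi_2 = level
    among path[0 .. stop_index], or None if that line is never touched."""
    last = None
    for t in range(min(stop_index, len(path) - 1) + 1):
        if path[t][1] == level:
            last = t
    return last

def bypasses_right(path, x_other, stop_index):
    """One reading of the event E^right: at its last visit to the level of
    x_other before time stop_index, the path is strictly to its right."""
    t = last_visit_to_level(path, x_other[1], stop_index)
    return t is not None and path[t][0] > x_other[0]

def bypasses_left(path, x_other, stop_index):
    """Symmetric event: the path passes x_other strictly on the left."""
    t = last_visit_to_level(path, x_other[1], stop_index)
    return t is not None and path[t][0] < x_other[0]

if __name__ == "__main__":
    # Toy path that dips below, passes to the right of x_other, and goes up.
    path = [(0, 0), (0, -1), (1, -1), (2, -1), (2, 0), (2, 1), (2, 2)]
    x_other = (1, 0)
    stop = len(path) - 1          # e.g. the reaching time of height 2
    print("bypasses to the right:", bypasses_right(path, x_other, stop))
    print("bypasses to the left:", bypasses_left(path, x_other, stop))
```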
Bear in mind that this property is incorrect if we add two different initial histories to our random walks. However, note that if their initial histories have supports located under the initial horizontal line, then the proposition still holds. Assume τ<∞, τ'<∞ and that none of the three scenarios occurs. In other words, both sample paths X^x_0_[0,τ] and X^x_0'_[0,τ'] stay over the initial horizontal line, and still they swap horizontal orders. * First, we justify that in this case, the two sample paths X^x_0,_[0,τ] and X^x_0'_[0,τ'] meet at some point of ^2. Let us define, for 0⩽ n⩽τ', d_n=π_1(X_n^x_0')-π_1(X_ν(n)^x_0), where ν:→ [0,τ] is chosen such that π_2(X_ν(n)^x_0)=π_2(X_n^x_0') and |d_n| is minimal. Note that ν(n) can be found because τ<∞ and X^x_0 jumps at range 1 on ^2. First, d_0⩾ 0, because scenarios 1 and 2 do not occur and the random walks jump at range 1. Also, d_τ'=π_1(X_τ'^x_0')-π_1(X_τ^x_0)<0, for we assumed that scenario 3 does not occur. * If there exists n_0⩽τ' such that d_n_0=0, then π_1(X_n_0^x_0')=π_1(X_ν(n_0)^x_0), but also by definition of ν, π_2(X_n_0^x_0')=π_2(X_ν(n_0)^x_0). Therefore X_n_0^x_0'=X_ν(n_0)^x_0, so the two trajectories meet. * Otherwise, there exists n_0⩽τ' such that d_n_0>0 and d_n_0+1<0. Now, by definition of (d_n) and the fact that both random walks jump at range 1, we must have d_n_0=1 and d_n_0+1=-1. The only possible configuration for this to happen is X_ν(n_0)^x_0=x, X_n_0^x_0'=x+e_1, X_ν(n_0)+1^x_0=x+e_1, X_n_0^x_0'=x. Therefore both trajectories meet. Set ∈̋ and x_0 and x_0' two distinct points of ^2. Suppose that τ_x_0',H(x_0)<∞ and τ_x_0,H(x_0')<∞. Then at least one of the following scenarios occurs: * ℰ^right_x_0',H(x_0), i.e. X^x_0 bypasses x_0' to the right at scale H; * ℰ^left_x_0,H(x_0'), i.e. X^x_0' bypasses x_0 to the left at scale H; * π_1(X_τ_x_0',H(x_0)^x_0)⩽π_1(X_τ_x_0,H(x_0')^x_0'). Let x_0,x_0'∈^2 with π_2(x_0)⩾π_2(x'_0) and π_1(x_0)<π_1(x_0'). Let H∈^*. Let Γ∈ℋ such that Supp Γ∩ X_^x_0'=∅. Assume that τ^x_0,Γ_H,x_0<∞ and τ^x_0'_H,x_0<∞ (which happens almost surely). Then at least one of the following scenarios occurs: * X^x_0,Γ visits the half-line x_0'+{0}× (-∞,0); * X^x_0' visits the half-line x_0+{0}× (-∞,0); * We have π_1(X^x_0,Γ_τ_H,x_0)⩽π_1(X^x_0'_τ_H,x_0). The statement is also true when we replace x_0, x_0' and Γ by values of random variables satisfying the same assumptions. Suppose neither of the three scenarios occurs. In particular, there exists t_0,t_0'⩾ 0 such that π_2(X_t^x_0)=π_2(X_t'^x_0') and π_1(X_t^x_0)>π_1(X_t'^x_0'). * First, we justify that in this case, the two sample paths X^x_0_[0,t_0] and X^x_0'_[0,t_0'] meet at some point of ^2. This is an argument similar to Jordan's theorem. Therefore, we form a closed curve of ^2 in the following way. First, let a_1,a_2∈ be such that X^x_0_[0,t_0]∪ X^x_0'_[0,t_0'] (which is a compact set), is contained in the set (a_1,∞)× (a_2,∞). Let S be the closed curve formed by X^x_0'_[0,t_0'] and the segments {π_1(x_0')}× [a_2,π_2(x_0')], [a_1,π_1(x_0')]×{a_2}, {a_1}× [a_2,π_2(X_t_0'^x_0')] and [a_1,π_1(X_t_0'^x_0')]×{π_2(X_t_0'^x_0')}. Up to removing loops in X^x_0'_[0,t_0'] (which is straightforward because it is a subset of ×∪×), we can assume that S is a simple closed curve. According to Jordan's theorem, S separates ^2 into two connected sets C_1 and C_2, such that C_1 is bounded, C_2 is unbounded, and their boundary is S. 
* Now, consider n_0⩽τ to be the maximal integer such that X^x_0_n_0∈ X^x_0'_[0,τ'], and n_0'⩽τ' to be the maximal integer such that X^x_0'_n_0'=X^x_0_n_0. If n_0=τ, then we are done for in that case, X_τ^x_0=X_τ'^x_0', which contradicts the assumption that scenario 3 does not occur. Assume now that n_0<τ. Let z_0=X^x_0_n_0=X^x_0'_n_0'∈^2. By definition of n_0, the uniform random variables U(z_0,i_0) and U(z_0,i_0') used for the jumps of X^x_0 at time n_0 and X^x_0' at time n_0' are different, so i_0≠ i_0'. * If i_0<i_0', then X^x_0' had jumped using U(z_0,i_0) before time n_0', so X^x_0_n_0+1∈ X^x_0'_[0,τ'], which contradicts n_0 being maximal. * Therefore we must have i_0>i_0'. Consequently, X^x_0 had jumped using U(z_0,i_0') before time n_0. Mind that a priori this is not a contradiction, for the definitions of n_0 and n_0' are not symmetric. Consider now x_1=x_1'=z_0, and define τ_1, τ_1', n_1, n_1', z_1, i_1 and i_1' in the same way as τ, τ', n_0, n_0', z_0, i_0 and i_0' after replacing X^x_0 by X^z_0,𝒩_0 and X^x_0' by X^z_0,𝒩_0', where 𝒩_0 and 𝒩_0' denote the histories of X^x_0 and X^x_0' at the first time they hit z_0. Because X^x_0 jumps using U(z_0,i_0') at some point, we have n_1>0. Furthermore, with the same line of reasoning as before, we can justify that if n_1<τ_1, then i_1>i_1' and we carry on the argument. This defines recursively a sequence of points (z_k) of X^x_0_[0,τ]∩ X^x_0'_[0,τ']∩^2, which are all different, because the n_k are set to be maximal. Because the latter set is a finite set, at some point we must have n_k=τ_k and so z_k=X^x_0_τ=X^x_0'_τ', which contradicts the assumption that scenario 3 does not occur. Replacing x_0, x_0' and Γ by values of random variables does not change the proof, since the statement is deterministic. Even so, the proof is subtle. A lot of different things can happen, as is illustrated by Figure <ref>. We start by discussing the issues and ideas of the proof, in order to give some motivations for the next steps. Heuristics of the proof. The main problem of the proof is that even if two particles meet, they may not coalesce (i.e. they may not stay together forever from then on), since the two random walks do not necessarily look at the same uniform variables every step of the way afterwards. However, we will show when two particles meet, either they coalesce (as in scenario 3.b)) or they end up splitting up without having swapped their initial horizontal order (as in scenario 3.c)). In the latter case, what actually happens is that both particles visit the same loop, and removing this loop is tantamount to adding the same history to both random walks. This prompts us to show a stronger version of the proposition by adding a common history Γ_0 to both random walks, which will allow us to apply our line of reasoning inductively by removing loops one by one. Stronger claim. We now add more history. We fix x_0,x_0', H and Γ as in the statement of the proposition, and we let Γ_0∈ℋ. From now on, we use simpler notations: τ for τ_H,x_0^x_0,Γ+Γ_0, τ' for τ_H,x_0^x_0',Γ_0, X for X^x_0,Γ+Γ_0 and X' for X^x_0',Γ_0. Assume that both τ and τ' are finite. We argue by contradiction and assume that {[ X does not visit x_0'+{0}× (-∞,0) (1); X' does not visit x_0+{0}× (-∞,0) (2); π_1(X_τ)> π_1(X'_τ') (3) ]. We want to get to a contradiction from this, and then choosing Γ_0=0 will give the desired result. Loop removal algorithm. 
We now define an algorithm allowing us to remove all loops from the paths of our random walks, so that we can focus on the zero-loop case later on. We consider a path defined by a parametrization f:P→^2, where P is a bounded subset of , satisfying for every s,t∈ P, |t-s|=1⇒‖ f(t)-f(s)‖=1, where ‖·‖ is the Euclidean norm on ^2. If f is not injective, we define [ T^out_1(f)=min{t∈ P, f(t)∈{f(s),s∈ P,s<t}};; T^in_1(f)=min{t∈ P, f(t)=f(T_1^out(f))};; P_1(f)= T_1^in(f),T_1^out(f)-1∩ P;; L_1(f)=f(P_1(f)). ] We call L_1(f) the first loop of f. The times T^in_1(f) and T^out_1(f) are called the first entry and exit times of L_1(f). We define by induction the other loops of f, if they exist, by defining, for i⩾ 2, [ T^out_i(f)=T^out_1(f|_P∖∪_j<i P_j(f));; T^in_i(f)=T^in_1(f|_P∖∪_j<i P_j(f));; P_i(f)= T_i^in(f),T_i^out(f)-1∩(P∖∪_j<i P_j(f)); L_i(f)=f(P_i(f)). ] Mind that here we consider functions defined on subsets of P that are not necessarily connected in P, which is why we did not assume P to be connected in in the first place. If there are no more loops, we just set P_i(f)=∅, L_i(f)=∅ and T_i^in(f)=T_i^out(f)=∞. We also define, for such a function f, its interpolated sample path as the curve in × ∪ × obtained by joining each pair of points {f(t),f(t+1)} (for t and t+1∈ P) by a segment. We denote it by int(f). The two sample paths meet. Recall our assumption (<ref>), from which we want to get to a contradiction. Let C=int(X|_ 0,τ) and C'=int(X'|_ 0,τ'). For now, we want to show that C and C' meet at some point of ^2, which implies, by construction, that the two sample paths X_[0,τ] and X'_[0,τ'] meet at some point of ^2. To show that, we first form a closed simple curve C_0 of ^2 as shown in Figure <ref>. First we consider C'_*=int(X'|_ 0,τ'∖∪_i∈^* P_i(X')), which is C' from which we removed all the loops and which we interpolated. Then we join the two extreme points x_0' and x_1' of C' (note that they both have to be on C'_*) using horizontal and vertical segments that go low enough and left enough so that they do not meet C∪ C' except at x_0' and x_1' (if C'_* intersects x_0'+{0}×(-∞,0), we remove the initial part of C'_* so that x_0' is replaced by the lowest point on C'_*∩ (x_0'+{0}×(-∞,0))). This is possible because C∪ C' is a compact set and because of (1) and (3) in (<ref>). By construction, C_0 is a closed simple curve, so we can apply Jordan's theorem to C_0. Point x_1=X_τ_H,x_0^x_0,Γ+Γ_0 is in the unbounded component, because the half-line x_1+(0,∞)×{0} cannot meet C_0. On the contrary, point x_0 has to be in the bounded component, because the vertical segment joining x_0 to a lower point x_2 in the unbounded component meets C_0 only once, because of (2) in (<ref>) (here we use the sometimes called even-odd rule that can be found in <cit.>). Therefore, curve C has to meet C_0, and by construction of C_0, it has to meet it on C'_*. Therefore C and C' intersect. Zero-loop case. First consider the simpler case where C' has no loops intersecting C. By the previous point, C' meets C, so we can consider t̂=max{t∈ 0,τ', X'_t∈ C}, and x̂=X'_t̂. Because of (3) in (<ref>), t̂<τ'. The uniform variable that X' uses to jump at time t̂ is U(x̂,Γ_0(x̂)+1) (for there is no loop on C' intersecting C, so x̂ cannot be on a loop of C'). The same uniform variable is used by X when it gets to x̂ for the first time, since by assumption Supp Γ∩ C'=∅. Therefore, X'_t̂+1∈ C, which contradicts the definition of t̂. At the end of the day, we have a contradiction, so our assumption (<ref>) was false. General case. 
Consider now the case where C' has a loop intersecting C. For i⩾ 1, let P_i=P_i(X'), L_i=L_i(X'), T_i^in=T_i^in(X') and T_i^out=T_i^out(X'). We let n_1=min{i∈^*, L_i ∩ C≠∅}. Let T be the first hitting time of L_n_1 by X and T'∈ T^in_n_1,T^out_n_1-1 be the first time such that X'_T'=X_T. Let us show by induction on t⩽ T_n_1^out-T' that (X_T,X_T+1,…,X_T+t)=(X'_T',X'_T'+1,…, X'_T'+t). The case t=0 follows from the fact that X'_T'=X_T. Suppose (<ref>) is true for t<T_n_1^out-T'. We need to show that X_T+t+1=X'_T'+t+1. * First, note that X' cannot have visited X'_T'+t before time T'+t. Indeed, suppose that it has; then there exists i<n_1 such that X'_T'+t∈ L_i, now X'_T'+t=X_T+t so L_i∩ C≠∅, which contradicts the definition of n_1. Therefore, the uniform variable X' uses to jump at time T'+t is U(X'_T'+t,Γ_0(X'_T'+t)+1). * This also means that X'_T'+t=X_T+t is not among {X'_T',…, X'_T'+t-1}, which is the same set as {X_T,…, X_T+t-1} by the induction assumption. Therefore, using also the definition of T, X has not visited site X_T+t before time T+t, so the uniform variable it uses to jump at time T+t is U(X_T+t,Γ_0(X_T+t)+1)=U(X'_T'+t,Γ_0(X'_T'+t)+1) (we also use the fact that Supp Γ∩ C'=∅). Both random walks use the same uniform variable, therefore X_T+t+1=X'_T'+t+1, which shows (<ref>) for t+1 and ends the induction. Applying equality (<ref>) with t=T_n_1^out-T' yields (X_T,X_T+1,…,X_T+T_n_1^out-T')=(X'_T',X'_T'+1,…, X'_T_n_1^out). With the same arguments, we can show that we also have (X_T+T_n_1^out-T',…, X_T+T_n_1^out-T_n_1^in)=(X'_T_n_1^in,X'_T_n_1^in+1,…,X'_T'). Also, remark that in (X'_T^in_n_1,…, X'_T^out_n_1-1), we have T^out_n_1-T^in_n_1+1 distinct points of L_n_1 (by definition of T_n_1^out), so we have all the points in L_n_1 exactly once. Therefore, putting together (<ref>) and (<ref>), and considering that X'_T_n_1^in=X'_T_n_1^out, we see that between times T and T+T_n_1^out-T_n_1^in, X visits all the sites in L_n_1 exactly once too. Set Γ_1=Γ_0+∑_x∈ L_n_1δ_{x} and P̃_1= T, T+T_n_1^out-T_n_1^in-1. Separating what happens before time T resp. T_n_1^in and what happens after time T+T_n_1^out-T_n_1^in resp. T_n_1^out, we get [ X^x_0,Γ+Γ_1_[0,τ_H,x_0]=X^x_0,Γ+Γ_0_[0,τ_H,x_0]∖P̃_1;; X^x_0',Γ_1_[0,τ_H,x_0]=X^x_0',Γ_0_[0,τ_H,x_0]∖ P_n_1. ] Let K be the number of loops of C' that intersect C (K is finite). By applying the same line of reasoning inductively on the next loops of C' that intersect C (which we denote by L_n_2,…,L_n_K), we can construct a history Γ_K such that [ X^x_0,Γ+Γ_K_[0,τ_H,x_0]=X^x_0,Γ+Γ_0_[0,τ_H,x_0]∖ (P̃_1∪…∪P̃_K);; X^x_0',Γ_K_[0,τ_H,x_0]=X^x_0',Γ_0_[0,τ_H,x_0]∖ (P_n_1∪…∪ P_n_K). ] Therefore, by construction, the sample path of X^x_0',Γ_K has no loops intersecting that of X^x_0,Γ+Γ_K, so we can apply the previous zero-loop case by replacing Γ_0 by Γ_K. Assumptions from (<ref>) are still satisfied, and Supp Γ_K ∩ X_^x_0',Γ_K=∅ by construction, so we do recover a contradiction. §.§ Trapped points Let us move on to the proof of Lemma <ref>. Recall that v_-⩽ v_+, on account of Corollary <ref>, so we now argue by contradiction and assume that v_-<v_+. In the rest of this section, we set δ=v_+-v_-/4(β+1). Note that δ∈ (0,1/2], using the bounds in (<ref>). The crucial idea of our proof is given by Proposition <ref>, which implies that a particle can be "trapped" by another particle. We want to ensure that trapped particles will experience a delay with respect to v_+, which motivates the first definition below. Let H∈^* and w∈×. Recall notation H' from (<ref>). 
We define z_w=w+(δ H+4β H',-2H')∈×; R_H(w)=w+((-∞,δ H)× (-∞, H') ∪ [δ H,+∞)× (-∞, -3H')) ⊆^2. See Figure <ref> for an illustration of these notations. Let H∈^* and w∈×. w is said to be H-trapped if there exists y∈ I_δ H/2(z_w) such that: * V_H+2H',z_w^y⩽ v_-+δ/2; * X^y does not visit R_H(w). Let us explain heuristically the idea behind this definition. Condition 2 ensures that the random walk started at y passes the point w+(δ H,H') on the right only. This will guarantee, using the barrier property (Proposition <ref>), that the sample path started at y is a barrier for any random walk starting in w+(-∞,δ H)× [0,H'). Condition 1 gives quantitative information about this barrier at height π_2(w)+H. See Figure <ref> for an illustration of Definition <ref>. Note that event {w is H-trapped} is measurable with respect to the horizontal strips between heights π_2(w)-3H' and π_2(w)+H. Indeed, the definition of a trap implies that we can define an algorithm to decide if w is H-trapped or not, only looking at the environment and the uniform variables outside R_H(w) and below height π_2(w)+H. §.§.§ Probability of being trapped Of course, we will not be able to show that a point is trapped with a high probability, for point 1 in Definition <ref> is very demanding. However, the definition of v_- will allow us to show that we can reach any distance close to but greater than v_- with a positive probability, so we will be able to get a uniform lower bound on the probability of being trapped. This is what the following lemma expresses. Recall the definition of 𝐇_0 from Remark <ref>. H:traps There exists an integer constant H:traps⩾H:size_box, depending on δ, such that inf_H⩾H:trapsinf_w∈×(w is H-trapped)>0. c:prob2c:numbc:prob1 c:prob c:inf Let us first study condition 1 in Definition <ref>. Recall notation p̃ from Definition <ref>. We claim that there exist two positive constants c:numb and c:prob1 such that for H large enough, inf_w∈×(∃ y∈ I_δ H/2(z_w), V_H+2H',z_w^y⩽ v_-+δ/2)⩾c:numb^-1 p̃_H(v_-+δ/4) -c:prob2^-1e^-c:prob2 H^1/2. Let us prove this claim. Let us fix H⩾ 4/δ. We have inf_w∈×(∃ y∈ I_δ H/2(z_w), V_H+2H',z_w^y⩽ v_-+δ/2) =inf_w∈×(∃ y∈ I_δ H/2(w), V_H+2H',w^y⩽ v_-+δ/2) =inf_w∈[-1,0)×{0}(∃ y∈ I_δ H/2(w), V_H+2H',w^y⩽ v_-+δ/2) ⩾sup_w∈[0,1)×{0}(∃ y∈ I_δ H/4(w), V_H+2H',w^y⩽ v_-+δ/2) = sup_w∈×(∃ y∈ I_δ H/4(w), V_H+2H',w^y⩽ v_-+δ/2). In the first equality, we used that w↦ z_w is a bijection of ×. In the second and last equalities, we used Corollary <ref>. In the inequality, we used that since H⩾ 4/δ, for any w∈[-1,0)×{0} and w'∈ [0,1)×{0}, I_δ H/4(w') is included in I_δ H/2(w). Now, we want to replace H+2H' by H in the parameter of the direction. Indeed, the information we have on v_- is a liminf when H goes to infinity, and it could be that we are unlucky and this liminf is reached on a subsequence that is not eventually in the image of H↦ H+2H'. In order to do this, we work on ⋂_y∈ I_δ H/4(w) D^X^y_τ_H,w,N^y_τ_H,w_2H', which, using Propositions <ref> and <ref>, has probability at least 1-c:prob1^-1e^-c:prob1H^1/2, where c:prob1 is a positive constant that does not depend on H. On this event, provided that 2β H'⩽δ (H+2H')/6, we have that if V^y_H,w⩽ v_-+δ/3, then V^y_H+2H',w⩽ v_-+δ/2. Therefore, for H large enough, sup_w∈×(∃ y∈ I_δ H/4(w), V_H+2H',w^y⩽ v_-+δ/2) ⩾sup_w∈×(∃ y∈ I_δ H/4(w), V_H,w^y⩽ v_-+δ/3)-c:prob1^-1e^-c:prob1H^1/2. 
Now, in order to recover parameter H in the size of the rectangle too, we consider I_H and split it into rectangles I_δ H/4(w) for a certain number c:numb (which does not depend on H) of values of w∈× satisfying 0⩽π_2(w)<H'. Let us fix such a w and y∈ I_H. In order to link V^y_H with V^y_H,w, we work on ⋂_y∈ I_H D_H'^X^y_τ_H,N^y_τ_H, which, using Propositions <ref> and <ref>, has probability at least 1-c:prob^-1e^-c:probH^1/2 for a certain constant c:prob>0 that does not depend on H. On this event, the displacement of X^y between times τ_H^y and τ_H,w^y is less than β H', which is less than δ H/12 for H large enough. In the end, using a union bound, we have sup_w∈× (∃ y∈ I_δ H/4(w), V_H,w^y⩽ v_-+δ/3) ⩾c:numb^-1 ((∃ y∈ I_H, V_H^y⩽ v_-+δ/4)-c:prob^-1e^-c:prob H^1/2) = c:numb^-1 (p̃_H(v_-+δ/4) -c:prob^-1e^-c:prob H^1/2). Putting all inequalities together, we derive our claim (<ref>) with c:prob2 depending on c:numb, c:prob and c:prob1. Now, lim inf_H→∞p̃_H(v_-+δ/4)>0, since v_-+δ/4>v_- (recall Definition <ref>). Therefore, using (<ref>), lim inf_H→∞inf_w∈×(∃ y∈ I_δ H/2(z_w), V_H+2H',z_w^y⩽ v_-+δ/2)>0. This implies that there exists H:traps>max(4,H:size_box) such that c:inf:=inf_H⩾H:trapsinf_w∈×(∃ y∈ I_δ H/2(z_w), V_H+2H',z_w^y⩽ v_-+δ/2)>0. As for condition 2 in Definition <ref>, we can notice that a scenario on which it is satisfied is when events E_H'^y, D_4H'^y and E_H'^X^y_τ_4H',z_w,N^y_τ_4H',z_w occur for every y∈ I_δ H/2(z_w) (recall (<ref>) and (<ref>)). Indeed: * Being on E_H'^y ensures that X^y stays outside w+[δ H,+∞)× (-∞,-3H'); * X^y also stays outside w+(-∞,δ H)× (-∞,H'). Indeed, the horizontal distance between y and w+(δ H,H') is at least 4β H' (by definition of z_w), so on D_4H'^y, X^y passes w+(δ H,H') on the right, and E_H'^𝒳,𝒩 ensures that it never comes back to height π_2(w)+H' afterwards. That being said, for every H⩾H:traps and w∈×, we get (w is H-trapped) ⩾c:inf -c H^3/2sup_y∈ I_δ H/2(z_w)(((E_H'^y)^c)+((D^y_3H')^c)+((E_H'^X^y_τ_4H',z_w,N^y_τ_4H',z_w)^c)) ⩾c:inf-c H^3/2(2e^-c:gambler_ruinH^1/4+c:horizontal^-1 e^-c:horizontal√(3)H^1/2) ⩾c:inf/2 if H:traps is large enough. where in the second-to-last inequality, we used Propositions <ref> and <ref> as well as Proposition <ref>. Since c:inf/2>0, this yields the result. We stated the above result as a lemma because it will later appear as a mere first step towards a stronger result, Proposition <ref>. The same holds for the next lemma, which is the first step towards Proposition <ref>. §.§.§ Delay near a trapped point The following lemma explains why the name "trap" was chosen: heuristically speaking, when we start a random walk X^x_0 near an H-trapped point, with high probability it is delayed by the time it reaches height H. H:delay_traps Recall the definition of H:traps in Lemma <ref>. For the next lemma, we need another technical requirement on H. Note that, since βδ>0, there exists H:delay_traps>H:traps, which depends on v_- and v_+, such that ∀ H⩾H:delay_traps, 4βδ H -(4β-(8β+7)δ+2v_+)H'⩾ 0. Let H⩾H:delay_traps, w∈×, x_0∈ w+(-∞,δ H)× [0,H') and Γ∈ℋ whose support satisfies Supp Γ⊆ R_H(w). Suppose that w is H-trapped and that E^x_0,Γ_H' occurs. Then, we have π_1(X^x_0,Γ_τ_H,w)⩽π_1(w)+(v_+-2δ)H. Again, the statement is also true when we replace x_0, w and Γ by values of random variables satisfying the same assumptions. Let H, w, x_0 and Γ be as in the statement of the lemma. Suppose w is H-trapped and E_H'^x_0,Γ occurs. 
By definition, there exists y∈ I_δ H/2(z_w) such that V_H+2H',z_w^y⩽ v_-+δ/2 and X^y does not visit R_H(w). Let us apply the barrier property (Proposition <ref>) with x_0, Γ and y (replacing x_0' by y and H by H+2H'). * Since X^y does not visit R_H(w) and Supp Γ⊆ R_H(w), we have Supp Γ ∩ X_^y=∅. * Since X^y does not visit R_H(w) and the half-line x_0+{0}× (-∞,0) is included in R_H(w), X^y cannot visit that half-line. * Since E_H'^x_0,Γ occurs and π_2(y)<π_2(x_0)-H', X^x_0 cannot visit the half-line y+{0}× (-∞,0) either. Therefore we must have π_1(X_τ_H,w^x_0,Γ) ⩽π_1(X_τ_H+2H',z_w^y) ⩽π_1(y)+(v_-+δ/2)(H+2H') ⩽π_1(z_w)+δ H/2+(v_-+δ/2)(H+2H') ⩽π_1(w)+δ H+4β H'+δ H/2+(v_-+δ/2)(H+2H') = π_1(w)+(v_+-2δ)H-4βδ H +(4β-(8β+7)δ+2v_+)H' ⩽π_1(w)+(v_+-2δ)H The interest of traps becomes clear with Lemma <ref>. The issue however is that the probability of being trapped cannot be made arbitrarily close to 1 when H goes to infinity; we only know, thanks to Lemma <ref>, that it is uniformly positive. Therefore, we need to introduce another notion in which we will allow some entropy on where to find a trap. §.§ Threatened points The problem with traps is that the probability of being trapped may be very small; however we will see that it is sufficient to have a trapped point along a line segment of slope v_+ in order to experience the delay, which motivates the new definition below. Let H∈^*, r∈^* and w∈×. w is said to be (H,r)-threatened if one of the points w_j=w+jH(v_+,1), where j∈ 0,r-1, is H-trapped. §.§.§ Probability of being threatened When r increases (keep in mind that r is the vertical length of the line segment along which we look for trapped points), it is clear that the probability that w is threatened increases. We now show that it goes to 1 when r→∞, and quantify the convergence using α. This is the major interest of the notion of threats. Recall constant H:traps from Lemma <ref>. c:threats There exists c:threats=c:threats(δ)>0 such that for every H⩾H:traps and r∈^*, sup_w∈×(w is not (H,r)-threatened)⩽c:threats r^-α. c:proof_threats k:proof_threats k:proof_threats_2 We follow again the structure of proof given in Section <ref> (only here the scale parameter is r and not H). Mind that here we will need to apply the renormalization method twice to get the desired estimate. First estimate. We start by considering only r=3^k for k∈. We set q_k=q_k(H)=sup_w∈×(w is not (H,3^k)-threatened). Let us start by showing that q_k converges to 0 when k→∞, uniformly in H large enough. More precisely, we show that there exists c:proof_threats∈ [1/3,1) and k:proof_threats∈ such that ∀ k⩾ 2, ∀ H⩾H:traps, q_k:proof_threats+k⩽c:proof_threats^k. Note that the problem with this bound is that it does not involve α, which is why we will need to show a second estimate after this one. To prove (<ref>), we use induction on k⩾ 2. Let us fix k:proof_threats∈ (we will choose it later in the proof). Base case. If a point is not (H,r)-threatened, in particular it is not H-trapped, so, by Lemma <ref>, sup_H⩾H:traps q_k:proof_threats+2⩽sup_H⩾H:trapssup_w∈×(w is not H-trapped)<1. Therefore there exists c:proof_threats∈[1/3,1) such that the case k=2 in (<ref>) is satisfied, namely q_k:proof_threats+2⩽c:proof_threats^2 for all H⩾H:traps, and the choice of c:proof_threats can be made independently of k:proof_threats. Induction step. Fix k⩾ 2 and suppose that sup_H⩾H:traps q_k:proof_threats+k⩽c:proof_threats^k. Fix an integer H⩾H:traps and w∈×. 
Note that event {w is not (H,3^k:proof_threats+k+1)-threatened} is included in the events given by 𝒜_k=⋂_j=0^3^k:proof_threats+k-1{w_j is not H-trapped} and 𝒜_k'=⋂_j=2· 3^k:proof_threats+k^3^k:proof_threats+k+1-1{w_j is not H-trapped}. Using Remark <ref>, those events are measurable with respect to horizontal strips separated in time by 3^k:proof_threats+kH-3H', which is larger than 3^k:proof_threats+kH/2. In order to replace those strips by boxes of side lengths at most (2β+1)· 3^k:proof_threats+k H (anticipating the use of Fact <ref>), first note that by definition, {w_j is H-trapped} is measurable with respect to the sigma-algebra generated by {X^y_[0,τ_H+2H',z_w_j], y∈ I_δ H/2(z_w_j)}, which motivates the introduction of the following events: 𝒪_k=⋂_j=0^3^k:proof_threats+k-1 G_j(w) and 𝒪_k'=⋂_j=2· 3^k:proof_threats+k^3^k:proof_threats+k+1-1 G_j(w_2· 3^k:proof_threats+k) where, for w̅∈×, G_j(w̅)=⋂_y∈ I_δ H/2(z_w_j){X^y_[0,τ_H+2H',z_w_j]⊆ B_3^k:proof_threats+kH(w̅-(3^k:proof_threats+k H/2,0))}. Now, 𝒜_k∩𝒪_k is measurable with respect to box B_3^k:proof_threats+kH(w-(3^k:proof_threats+k H/2,0)), and 𝒜_k'∩𝒪_k' is measurable with respect to box B_3^k:proof_threats+kH(w_2· 3^k:proof_threats+k-(3^k:proof_threats+k H/2,0)). Those two boxes are (3^k:proof_threats+kH/2)-separated and have maximum side lengths (2β+1)· 3^k:proof_threats+kH. Let us now bound the probability of (G_j(w))^c. For j∈ 0, 3^k:proof_threats+k-1, we have ⋂_y∈ I_δ H/2(z_w_j) D^y_3^k:proof_threats+k-1 H/β⊆ G_j(w). Indeed, let us assume that k:proof_threats is large enough so that for every H we have 3^k:proof_threats+k-1H/β⩾ H+2H'. Let j∈ 0, 3^k:proof_threats+k-1, y∈ I_δ H/2(z_w_j) and n∈ 0,τ^y_H+2H',z_w_j. On the event on the left-hand side of (<ref>), we have |π_1(X^y_n)-π_1(w)| ⩽|π_1(X^y_n)-π_1(y)|+|π_1(y)-π_1(w)| ⩽ 3^k:proof_threats+k-1 H +jH| v_+| +δ H + 4β H'+δ H/2 ⩽ 3^k:proof_threats+k-1 H + 3^k:proof_threats+kβ H+ δ H + 4β H'+δ H/2 <3^k:proof_threats+kβ H+ 3^k:proof_threats+k H/2, provided that k:proof_threats is large enough (independently of H), which gives a first condition to choose k:proof_threats. From these horizontal bounds, noting that the vertical bounds are always satisfied by construction, we obtain X^y_[0,τ_H+2H',z_w_j]⊆ B_3^k:proof_threats+kH(w-(3^k:proof_threats+k H/2,0)), which ends the proof of (<ref>). Similarly, with the exact same arguments, for j∈ 2· 3^k:proof_threats+k,3^k:proof_threats+k+1-1, we have ⋂_y∈ I_δ H/2(z_w_j) D^y_3^k:proof_threats+k-1 H/β⊆ G_j(w_2· 3^k:proof_threats+k). Using (<ref>) and (<ref>) along with union bounds and Proposition <ref>, we have (𝒪_k^c)⩽c:horizontal^-1 3^k:proof_threats+k(δ H/2)^2 e^-c:horizontal 3^k:proof_threats+k-1H/β⩽ c^-1e^-c 3^k:proof_threats+kH, where c>0 does not depend on H, and the same holds for 𝒪_k'. So, using Fact <ref>, q_k:proof_threats+k+1 ⩽((𝒜_k∩𝒪_k) ∩ (𝒜_k'∩𝒪_k'))+(𝒪_k^c)+((𝒪_k')^c) ⩽ q_k:proof_threats+k^2+c_1 (3^k:proof_threats+kH/2)^-α+2 c^-1e^-c 3^k:proof_threats+kH ⩽ q_k:proof_threats+k^2+c 3^-(k:proof_threats+k)α In the end, using induction assumption (<ref>) as well as the fact that 1/3⩽c:proof_threats<1, α⩾ 1 and k⩾ 2, q_k:proof_threats+k+1/c:proof_threats^k+1 ⩽c:proof_threats^k-1+c c:proof_threats^k:proof_threats-1⩽c:proof_threats+c c:proof_threats^k:proof_threats-1⩽ 1, provided that k:proof_threats is chosen large enough (recall that the choice of c:proof_threats was independent of k:proof_threats). Estimate on the subsequence. 
We now prove the desired estimate on the subsequence; more precisely, we prove that there exists k:proof_threats_2∈^* such that ∀ k∈^*, ∀ H⩾H:traps, q_k:proof_threats_2+k⩽1/2 3^-α k. We use exactly the same method as in the proof of the first estimate (<ref>). Since q_k goes to 0 uniformly in H⩾H:traps (on account of (<ref>)), we have, for any k:proof_threats_2∈ large enough and H⩾H:traps, q_k:proof_threats_2+1⩽1/2 3^-α. We now show by induction on k⩾ 1 that q_k:proof_threats_2+k⩽1/2 3^-α k. For the induction step, we obtain with the same arguments as before, q_k:proof_threats_2+k+1/1/23^-α(k+1)⩽ 2· 3^α(k+1) (1/43^-2α k+c 3^-α(k:proof_threats_2+k)), which is less than 1 provided that k:proof_threats_2 is large enough. This gives a second condition to choose k:proof_threats_2. This constant being properly chosen, we get (<ref>). Interpolation. Let H⩾H:traps, r⩾ 3^k:proof_threats_2+1 and k∈^* such that 3^k:proof_threats_2+k⩽ r<3^k:proof_threats_2+k+1. Then (w is not (H,r)-threatened) ⩽(w is not (H,3^k:proof_threats_2+k)-threatened) ⩽1/2 3^-α k ⩽3^α(k:proof_threats_2+1)/2 r^-α. It remains to tailor constant c:threats in order for (<ref>) to hold for every r∈^*. §.§.§ Delay near a threatened point Now that we have shown that every point is threatened with a high probability, we need to quantify the delay caused by threats for the random walk, just as we did for traps with Lemma <ref>. First a technical definition is required, because we do not want to look for threats among too many points for entropy reasons (see the proof of Lemma <ref>). Let y∈^2 and H∈ such that H ⩾ 4/δ and H'⩾ 1 (recall (<ref>)). We denote by ⌊ y⌋_H the point of ⌊δ H/4⌋× H' given by ⌊ y ⌋_H=(⌊π_1(y)/H̃⌋H̃, ⌊π_2(y)/H'⌋ H'), where H̃=⌊δ H/4⌋. H:delay_threats Let H:delay_threats⩾H:delay_traps be an integer (depending on v_- and v_+) satisfying for every H⩾H:delay_threats, {[ 4H'<H;; H⩾ 4/δ;; H'⩾ 1;; 4β H'⩽δ H/5;; 4v_+H'> -δ H/20. ]. The second and third conditions ensure that Definition <ref> can be used, and the others are technical requirements that will appear later on. Let H⩾H:delay_threats, r∈^* and y∈^2. We set w=⌊ y⌋_H. Let Γ∈ℋ be such that Supp Γ⊆ R_H(w). For every j∈ 0,r, set [ 𝒳_j=𝒳_j^y,Γ=X^y,Γ_τ_jH,y and 𝒩_j=𝒩_j^y,Γ=N^y,Γ_τ_jH,y;; 𝒳̃_j=𝒳̃_j^y,Γ=X^y,Γ_τ_jH,w and 𝒩̃_j=𝒩̃_j^y,Γ=N^y,Γ_τ_jH,w;; Z_j=Z_j^y,Γ=X^y,Γ_τ_(j+1)H-4H',y and Λ_j=Λ_j^y,Γ=N^y,Γ_τ_(j+1)H-4H',y. ] Assume the following conditions are met: * w is (H,r)-threatened; * For every j∈ 0,r-1, V_H,𝒳_j^𝒳_j,𝒩_j⩽ v_++δ/2r; * For every j∈ 0,r-1, V_H-4H',𝒳_j^𝒳_j,𝒩_j⩽ v_++δ/2r and D_4H'^Z_j,Λ_j occurs; * For every j∈ 0,r-1, E_H'^𝒳_j,𝒩_j occurs; * For every j∈ 1,r, D_H'^𝒳̃_j,𝒩̃_j occurs. Then we have π_1(𝒳_r)⩽π_1(y)+( v_+-δ/2r) rH. Again, y and Γ can be replaced by random variables satisfying the same assumptions. Let us explain the proposition heuristically. Suppose that a point w close to y is threatened (condition 1). Divide the strip π_2^-1([π_2(y),π_2(y)+rH]) into r strips of height H, and assume that the random walk started at y does not go too fast on each of these r strips (condition 2). The fact that a point w near y is threatened and that X^y does not go too fast will imply that X^y meets a potential barrier on its right (which is given by a certain trapped point w_j_0=w+j_0 H(v_+,1)), and it cannot get around this barrier because of condition 4. Condition 3 ensures that X^y stays inside R_H(w_j_0), which is required to apply Lemma <ref> (see Figures <ref> and <ref> for an illustration). 
So by combining the upper bound we have on each of the r strips, and the new upper bound that the barrier gives us on this particular strip, we get a global upper bound. Mind that in this proposition two grids coexist, one with lines at heights π_2(y)+jH (j∈ 0,r), where the 𝒳_j are, and one with lines at heights π_2(w)+jH (j∈ 1,r), where the 𝒳̃_j are. Condition 5 allows us to control the error of displacement between 𝒳̃_j and 𝒳_j. Let H⩾H:delay_threats, r∈^*, y∈^2, w=⌊ y⌋_H, Γ∈ℋ such that Supp Γ⊆ R_H(w). Assume all five assumptions from the statement are satisfied. Because of condition 1, there exists j_0∈ 0,r-1 such that w_j_0 is H-trapped. We want to apply Lemma <ref> replacing w in the statement by w_j_0, x_0 by 𝒳_j_0 and Γ by 𝒩_j_0 (recall that the lemma was also true for a random choice of x_0, w and Γ), which requires justifying that 𝒳_j_0∈ w_j_0+(-∞,δ H)× [0,H'); Supp 𝒩_j_0⊆ R_H(w_j_0). Note that the fact that E_H'^𝒳_j_0,𝒩_j_0 occurs is a direct consequence of condition 4. Proof of (<ref>). We compute π_1(𝒳_j_0) =π_1(y)+∑_j=0^j_0-1(π_1(𝒳_j+1)-π_1(𝒳_j)) ⩽π_1(y)+(v_++δ/2r)j_0H ⩽π_1(w_j_0)+δ H/4+δ H/2 <π_1(w_j_0)+δ H. As for the second coordinate, by definition of w we have π_2(𝒳_j_0) =π_2(y)+j_0H∈π_2(w)+j_0H+[0,H')=π_2(w_j_0)+[0,H') This proves (<ref>). Proof of (<ref>). When j_0=0, 𝒩_j_0=Γ⊆ R_H(w) by assumption, so (<ref>) is satisfied. Suppose now that j_0⩾ 1. See Figure <ref> for an illustration of the following arguments. Mind that without condition 3, in spite of (<ref>), there is a possibility that between time τ^y,Γ_(j_0-1)H and time τ^y,Γ_j_0H, X^y,Γ exits R_H(w_j_0). Note that since π_2(Z_j_0-1)=π_2(𝒳_j_0)-4H'⩽π_2(w_j_0)-3H', we only need to check that for every n∈τ^y,Γ_j_0H-4H',y,τ^y,Γ_j_0H,y, we have π_1(X_n^y,Γ)<π_1(w_j_0)+δ H. Now, condition 3 ensures that V_H-4H',𝒳_j_0-1^𝒳_j_0-1,𝒩_j_0-1⩽ v_++δ/2r, therefore, π_1(Z_j_0-1) ⩽π_1(𝒳_j_0-1)+(v_++δ/2r)(H-4H') ⩽π_1(y)+(v_++δ/2r)(j_0-1)H+(v_++δ/2r)(H-4H') =π_1(y)+(v_++δ/2r)j_0H-4(v_++δ/2r)H' ⩽π_1(w_j_0)+3 δ H/4-4(v_++δ/2r)H' < π_1(w_j_0)+4δ H/5 Now, condition 3 also ensures that D_4H'^Z_j_0-1,Λ_j_0-1 occurs. Now, by (<ref>), we have 4β H'⩽δ H/5. Combining that with (<ref>), we get that for every n∈τ^y,Γ_j_0H-4H',y,τ^y,Γ_j_0H,y, π_1(X^y,Γ_n) ⩽π_1(Z_j_0-1)+δ H/5 < π_1(w_j_0)+4δ H/5+δ H/5 =π_1(w_j_0)+δ H. Therefore, X^y,Γ_[0,τ_j_0H,y)⊆ R_H(w_j_0), and therefore Supp 𝒩_j_0⊆ R_H(w_j_0). Step 3. At the end of the day, we have ([ V_rH^y,Γ>v_+-δ/2r;; ⌊ y⌋_H is (H,r)-threatened;; ∀ j∈ 0,r-1, V_H^𝒳_j,𝒩_j⩽ v_++δ/2r ]) ⩽(∃ j_0∈ 0,r-1, |[ w_j_0 is H-trapped,; π_1(𝒳_j_0+1)> π_1(w_j_0)+(v_+-δ)H; 𝒳_j_0∈ w_j_0+(-∞,δ H)×{0} ].) ⩽ r sup_w∈× sup_w'∈ w+(-∞,δ H)×{0} (π_1(X^w',Γ_τ_H^w',Γ)> π_1(w)+(v_+-δ)H, wH-trapped) ⩽ r e^-c:gambler_ruin H Conclusion. By applying Lemma <ref>, we get that π_1(𝒳̃_j_0+1)=π_1(X^𝒳_j_0,𝒩_j_0_τ_H,w_j_0)⩽π_1(w_j_0)+(v_+-2δ)H. Therefore, using that we are on D_H'^𝒳̃_j_0+1,𝒩̃_j_0+1 (condition 5), we get π_1(𝒳_j_0+1) ⩽π_1(𝒳̃_j_0+1)+β H' ⩽π_1(w_j_0)+(v_+-2δ)H+β H' ⩽π_1(w_j_0)+(v_+-δ)H, using (<ref>). Consequently, we have π_1(𝒳_r) =π_1(𝒳_j_0+1)+∑_j=j_0+1^r-1(π_1(𝒳_j+1)-π_1(𝒳_j)) ⩽π_1(w_j_0)+(v_+-δ)H+(r-j_0-1)(v_++δ/2r)H ⩽π_1(y)+j_0v_+H+(v_+-δ)H+(r-j_0-1)(v_++δ/2r)H ⩽π_1(y)+rv_+H-δ/2H = π_1(y)+(v_+-δ/2r)rH. §.§ Threatened paths k:paths c:paths We now know that when a particle passes near a threatened point, it will be delayed to the left with a high probability (Proposition <ref>), and that each point has a high probability of being threatened (Proposition <ref>). 
The goal of this section is to improve the latter result, by showing that with a high probability, every particle meets a lot of threatened points along its way. Mind that this is not a direct consequence of Proposition <ref>, because the random walk could unfortunately go precisely to areas where there are few threats. From now on, we will focus on specific values of the parameters introduced before : H=hL_k with k>k:paths for a wise choice of h∈^* and k:paths∈, and r=l_k:paths. There exists k:paths∈ and c:paths=c:paths(δ)>0 such that the following conditions are met: * L_k:paths⩾H:delay_threats; * For every h∈^*, (∃ y∈ I_hL_k:paths+1, ⌊ y ⌋_hL_k:paths is not (hL_k:paths,l_k:paths)-threatened)⩽c:paths L_k:paths+1^-(2α-3)/10; * The following two technical requirements are satisfied 49β l_k:paths⩽δ l_k:paths+1; c:end_proof (c:end_proof+c:paths^2) L_k:paths^-(6α-49)/40⩽c:paths, where c:end_proof was defined in (<ref>). Let h∈^* and k:paths∈ satisfying L_k:paths⩾H:delay_threats and (<ref>). Then, using Proposition <ref> and the fact that H:delay_threats⩾H:traps, we have (∃ y∈ I_hL_k:paths+1, ⌊ y ⌋_hL_k:paths is not (hL_k:paths,l_k:paths)-threatened) ⩽⌈hL_k:paths+1/⌊δ hL_k:paths/4⌋⌉⌈(hL_k:paths+1)'/(hL_k:paths)'⌉ c:threats l_k:paths^-α ⩽ c L_k:paths+1^-(2α-3)/10. Therefore, we do get inequality (<ref>) with a certain constant c:paths=c:paths(δ)>0. Now that c:paths is fixed, it suffices to take a larger k:paths so that inequality (<ref>) holds as well, which is possible because α⩾ 9. Conditions (<ref>) and (<ref>) will appear naturally later in the proof. Also, note that considering only rounded points ⌊ y⌋_hL_k:paths was crucial here to obtain a bound that is uniform in h. Let k:paths be defined as in Lemma <ref>. Let k>k:paths, w∈×, h∈^* and y∈ I_hL_k(w). We set the threatened density of random walk X^y to be D_h,k^y(w)=L_k:paths+1/L_k #{0⩽ j<L_k/L_k:paths+1, ⌊ X^y_τ_jhL_k:paths+1,w⌋_hL_k:paths is (hL_k:paths,l_k:paths)-threatened}. As usual, we also set D_h,k^y=D_h,k^y(o). Mind that contrary to what is depicted in Figure <ref>, we could have π_1(y)-π_1(w)>hL_k:paths+1. In that case, whenever jhL_k:paths+1⩽π_1(y)-π_1(w), τ^y_jhL_k:paths+1,w=0, so X^y_τ_jhL_k:paths+1,w=y. That is why, in (<ref>), we are not interested in the j=0 term. Let us now state our final proposition before ending the proof of Lemma <ref>: with a high probability, our random walks encounter threats more than half of the time along the way. For every k>k:paths and h∈^*, (∃ y∈ I_hL_k, D_h,k^y<1/2)⩽c:paths L_k^-(2α-3)/10. The proof uses again the renormalization method presented in Section <ref> and is very similar to that of Lemma <ref>. Let us fix k>k:paths and h∈^*. We define a sequence of densities (ρ_k)_k⩾k:paths by setting {[ ρ_k:paths=1;; ∀ k⩾k:paths, ρ_k+1=ρ_k-5/l_k. ]. One can check, using a computational knowledge engine, that since L_0⩾ 10^10, we have ∑_k⩾ 15/l_k⩽1/2, therefore we have ρ_k⩾ 1/2 for every k⩾k:paths. We define, for w∈×, S_h,k(w)={∃ y∈ I_hL_k(w), D_h,k^y(w)⩽ρ_k}. Since ρ_k⩾ 1/2, it suffices to show that s_h,k=(S_h,k(o)) satisfies s_h,k⩽c:paths L_k^-(2α-3)/10. To do this, we use induction on k>k:paths. Base case. When k=k:paths+1, the result follows directly from Definition <ref> and Lemma <ref>. Induction step. Assume that (<ref>) has been shown for a fixed k>k:paths, and fix y∈ I_hL_k+1. Recall the definitions of _k, ℱ_k and 𝒢_k, from (<ref>), (<ref>) and (<ref>), where H_k is replaced by hL_k. Recall also notations 𝒳_j^y and 𝒩_j^y used in (<ref>), and θ_j^y used in (<ref>). 
Note that in the definitions of ℱ_k and 𝒢_k, we will not use the part with events D (because contrary to Lemma <ref>, here we are not looking at horizontal displacements). We claim that 𝒢_k∩{D_h,k+1^y⩽ρ_k+1}⊆{there exist three j∈ 1,l_k-1 such that D^θ_j^y_h,k(𝒳^y_j)⩽ρ_k}. Indeed, suppose that 𝒢_k occurs but it is not the case that there exist three j∈ 1,l_k-1 such that D^θ_j^y_h,k(𝒳^y_j)⩽ρ_k. This means that for l_k-3 values of j∈ 1,l_k-1, we have D^θ_j^y_h,k(𝒳^y_j)>ρ_k, which means #{0⩽ i<L_k/L_k:paths+1, ⌊ X^θ^y_j_τ_ihL_k:paths+1,𝒳_j^y⌋_hL_k:paths is (hL_k:paths,l_k:paths)-threatened}>ρ_kL_k/L_k:paths+1. Now, on 𝒢_k, there are at most (hL_k)'/hL_k:paths+1⩽ 2L_k^1/2/L_k:paths+1 indices i∈ 0,L_k/L_k:paths+1 such that X^θ_j^y_τ_ihL_k:paths+1,𝒳_j^y=θ_j^y (indeed, this occurs only when ihL_k:paths+1⩽π_2(θ^y_j)-π_2(𝒳^y_j)⩽ (hL_k)'). Therefore, #{0⩽ i<L_k/L_k:paths+1, ⌊ X^𝒳_j^y,𝒩_j^y_τ_ihL_k:paths+1,𝒳_j^y⌋_hL_k:paths is (hL_k:paths,l_k:paths)-threatened}>ρ_kL_k/L_k:paths+1-2L_k^1/2/L_k:paths+1. In the end, D_h,k+1^y =L_k:paths+1/L_k+1 #{0⩽ j<L_k+1/L_k:paths+1, ⌊ X^y_τ_jhL_k:paths+1,𝒳_j^y⌋_hL_k:paths is (hL_k:paths,l_k:paths)-threatened} ⩾L_k:paths+1/L_k+1 (l_k-3)(ρ_kL_k/L_k:paths+1-2L_k^1/2/L_k:paths+1) =(1-3/l_k) (ρ_k-2/L_k^1/2) >ρ_k-5/l_k=ρ_k+1, which proves (<ref>). Now, note that on ℱ_k∩𝒢_k, for every j∈ 0,l_k-1, θ_j^y is in a I_hL_k(w) with w∈_k. Therefore, in a similar way as in (<ref>), we get ℱ_k∩𝒢_k∩ S_h,k+1⊆⋃_|π_2(w_1)-π_2(w_2)|⩾ 2 hL_kw_1,w_2∈𝒞_k[ (S_h,k(w_1) ∩ F_hL_k(w_1)); ∩ (S_h,k(w_2) ∩ F_hL_k(w_2)). ] Recall constant c:end_proof from (<ref>). Here again we get s_h,k+1⩽c:end_proof l_k^4 (s_h,k^2+c:end_proof L_k^-α), and so, using the induction assumption and (<ref>), we get s_h,k+1/L_k+1^-(2α-3)/10 ⩽c:end_proof (c:end_proof+c:paths^2) L_k^(2α-3)/8 l_k^4 L_k^-(2α-3)/5 ⩽c:end_proof (c:end_proof+c:paths^2) L_k^-(6α-49)/40⩽c:paths. This concludes the induction and thus the proof of (<ref>). §.§ Final proof of Lemma <ref>. Recall that we argued by contradiction and assumed that v_-<v_+, therefore δ=v_+-v_-/4(β+1)>0. Let η=δ/4 l_k:paths>0 where k:paths is defined as in Lemma <ref>. We are going to show that p_L_k^2(v_+-η/6) 0, which contradicts the definition of v_+. From now on, we fix k>k:paths+1, and we work with h=h_k=L_k, which is why it was important for our previous estimates to hold uniformly on h⩾ 1. We let H_k=h_k L_k:paths=L_k L_k:paths and r=l_k:paths. In order to prove (<ref>), we consider the large box B_L_k^2, which we pave using small sub-boxes B_H_k(y) for y∈_k, where _k is the minimal set satisfying w∈_k⋃ℐ_H_k(w)=B_L_k^2∩ (× H_k ). Recall notations 𝒳_j^z,Γ, 𝒩_j^z,Γ, 𝒳̃_j^z,Γ, 𝒩̃_j^z,Γ, Z_j^z,Γ and Λ_j^z,Γ from the statement of Proposition <ref>, where z∈^2 and Γ∈ℋ. For y∈ I_L_k^2 and 1⩽ i⩽ L_k/L_k:paths+1, we set 𝒳^y_i=X^y_τ_irH_k and 𝒩^y_i=N^y_τ_irH_k, and for 0⩽ j⩽ r, we set 𝒳_i,j^y=𝒳_j^𝒳^y_i,𝒩^y_i. In the same way, we define 𝒩_i,j^y, 𝒳̃_i,j^y, 𝒩̃_i,j^y, Z_i,j^y and Λ_i,j^y. We define the following events: 𝒜̂_k= ⋂_w∈_k(A_H_k,w(v_++η)^c∩ A_H_k-4H_k',w(v_++η)^c); ℱ̂_k= F_L_k^2∩⋂_y∈ I_L_k^2(D_rH_k^y∩⋂_i=1^L_k/L_k:paths-1 ⋂_j=0^r D_4H_k'^Z^y_i,j,Λ^y_i,j∩ E_H_k'^𝒳^y_i,j,𝒩^y_i,j∩ D_H_k'^𝒳̃^y_i,j,𝒩̃^y_i,j); 𝒢̂_k= ⋂_y∈ I_L_k^2⋂_i=1^L_k/L_k:paths-1 ⋂_j=0^r-1(D_H_k'^𝒳^y_i,j,𝒩^y_i,j∩{Θ(X^𝒳^y_i,j,𝒩^y_i,j)<H_k'}); ℋ̂_k= ⋂_y∈ I_L_k^2{D^y_L_k,k⩾ 1/2}. Using Lemma <ref> and the fact that α>8, we have (𝒜̂_k^c)⩽ c (L_k/L_k:paths)^2 (c:deviation(η) H_k^-α/4+c:deviation(δ/2r) (H_k-4H_k')^-α/4)⩽ c L_k^-(α-8)/4 0. 
Using Propositions <ref>, <ref> and <ref>, we have (ℱ̂_k^c) ⩽c:box^-1 e^-c:box L_k+ c L_k^4 (2c:horizontal^-1 e^-c:horizontal H_k^1/2 + e^-c:gambler_ruin H_k^1/2)0. Using Propositions <ref>, <ref> and <ref>, we have (𝒢̂_k^c)⩽ cL_k^4 (c:horizontal^-1e^-2c:horizontal H_k^1/2+ c:cut_line^-1 e^-c:cut_line H_k^1/4) 0. By Proposition <ref>, we have, using that α⩾ 2, (ℋ̂_k^c)⩽c:paths L_k^-(2α-3)/10 0. The goal now is to show that on the four events defined above, we have, for every y∈ I_L_k^2, V^y_L_k^2=1/L_k^2(π_1(𝒳^y_L_k/L_k:paths+1)-π_1(y))<v_+-η/3. First note that since ⌈ (L_k^2)^1/2⌉=L_k<H_k<rH_k, we have τ_irH_k>0 for every i⩾ 1. Therefore, we will only isolate i=0 and simply use D_rH_k^y to bound the displacement π_1(𝒳^y_1)-π_1(y). Let us now focus on bounding π_1(𝒳^y_i+1)-π_1(𝒳^y_i) where 1⩽ i<L_k/L_k:paths+1. First note that 𝒜̂_k along with F_L_k^2 and 𝒢̂_k allow us to bound the displacements of the random walk, similarly to what we did in Section <ref>. We have, for 1⩽ i<L_k/L_k:paths+1 and 0⩽ j<r, π_1(𝒳^y_i,j+1)-π_1(𝒳^y_i,j) ⩽β H_k'+ (v_++η) H_k ⩽(v_++3η/2) H_k ⩽(v_++δ/2r) H_k We can use (<ref>) for indices i such that ⌊𝒳^y_i⌋ is not (H_k,r)-threatened. As for the remaining indices i, we will use Proposition <ref>, replacing in the statement of the proposition H by H_k, y by 𝒳^y_i and Γ by 𝒩^y_i. Assumption 2 in Proposition <ref> is satisfied using (<ref>), and we can show in a similar way, using events A_H_k-4H_k',w(v_++η)^c in 𝒜̂_k, that we have π_1(Z^y_i,j)-π_1(𝒳^y_i,j)⩽(v_++δ/2r)(H-4H'), so assumption 3 is satisfied too. Assumption 4 and 5 are satisfied using event ℱ̂_k. Finally, the fact that Supp 𝒩^y_i⊆ R_H(⌊𝒳^y_i⌋) can be shown in the same way as (<ref>) in the proof of Proposition <ref>, using (<ref>), (<ref>) and ℱ̂_k. At the end of the day, Proposition <ref> ensures that for indices i such that ⌊𝒳^y_i⌋ is (H_k,r)-threatened, we have π_1(𝒳^y_i+1)-π_1(𝒳^y_i)⩽(v_+-δ/2r)rH_k. Denote by J^y_k the set of indices i∈ 1,L_k/L_k:paths+1-1 such that ⌊𝒳^y_i⌋_H_k is (H_k,r)-threatened. By Definition <ref>, we have the inclusion of events ℋ̂_k⊆⋂_y∈ I_L_k^2{|J^y_k|⩾L_k/2 L_k:paths+1-1}. Therefore, π_1(𝒳^y_L_k/L_k:paths+1)-π_1(y) =π_1(𝒳^y_1)-π_1(y)+∑_i=1^L_k/L_k:paths+1-1(π_1( 𝒳^y_i+1)- π_1(𝒳^y_i)) ⩽β r H_k +∑_i∈ J^y_k(π_1( 𝒳^y_i+1)- π_1(𝒳^y_i))+∑_i∉ J_k^y(π_1( 𝒳^y_i+1)- π_1(𝒳^y_i)) ⩽β r H_k + |J^y_k| (v_+-δ/2r) r H_k +(L_k/L_k:paths+1-1-|J_k^y|)(v_++3η/2)r H_k ⩽β r H_k+ v_+ L_k^2 + 3η/2 L_k^2-| J_k^y|(δ/2r+3η/2)r H_k =(v_+-η/4+(7η/2+β) L_k:paths+1/L_k) L_k^2 <(v_+-η/6) L_k^2 . See Figure <ref> for an illustration of the above bounds. In the end, we do have (<ref>), which is true for any y∈ I_L_k^2, so p_L_k^2(v_+-η/6)=(∃ y∈ I_L_k^2, V_L_k^2^y⩾ v_+-η/6)⩽(𝒜̂_k^c)+(ℱ̂_k^c)+(𝒢̂_k^c)+(ℋ̂_k^c)0, using (<ref>), (<ref>), (<ref>) and (<ref>). Therefore lim inf_H→∞ p_H(v_+-η/6)=0, where v_+-η/6<v_+. This contradicts the definition of v_+, therefore, v_-=v_+. § TOWARDS A COMPLETE LLN The next step of our work would be to prove a LLN for the random walk defined in Section <ref>, as is expressed in the following conjecture. There exists ξ∈^2 such that -almost surely, X_n/nξ. With Theorem <ref>, we have a result that is weaker - albeit very interesting both in itself and in the methods used to prove it. Indeed, when assuming conjecture <ref>, we immediately get Theorem <ref> with χ=ξ/‖ξ‖. However, going from Theorem <ref> to an actual LLN is not trivial at all. It is actually sufficient to show that τ_n/n converges almost surely to derive the LLN from Theorem <ref>. 
In other words, what we are missing at this point is the understanding of the temporal behavior of X. This is a priori a hard question, because the environment from the point of view of the particle may not behave very nicely under our assumptions. We used renormalization methods to get around this issue, but it is unclear to which events describing the temporal behaviour of X we could apply a renormalization method. § APPLICATIONS In this section, we give examples of environments that satisfy the assumptions introduced in Section <ref>. These are taken from classical 1D dynamic or 2D static models for which we can control the vertical dependencies (provided that for a 1D dynamic model, the vertical coordinate is time). Oftentimes, a static environment μ∈Ω_1 is constructed using a background environment, namely a random partition 𝒫 of ^2 into sets (O_i)_i, and allocating to all the points x∈ O_i a common fixed value μ(x)=(p_1^(i),…, p_4^(i))∈ S. Typically the number of sets in partition 𝒫 is finite, often with simply two sets. This construction ensures that μ is a deterministic function of 𝒫, so that translation invariance and decoupling for the background environment implies the same for μ. As for the drift assumption, it suffices to demand that there exist ε>0 such that p^(i)_4⩾ 1/2+ε for every i. All the examples in this section fall under this framework. In the rest of this section, the subscript b will be used to indicate that we are working with the background environment. §.§ One-dimensional dynamic environments In <cit.> are presented several models of 1D dynamic environments that have at most polynomial time correlations. More precisely, let I⊆. A one-dimensional dynamic environment is a random variable on a certain probability space (Ω_b,𝒯_b,_b) given by η:(y,t)∈×_+↦η_t(y)∈ I and taking values in 𝒟(_+,I^), the space of càdlàg functions from _+ to I^. The state of environment η at time t and site y is described by η_t(y). We assume that η is translation-invariant, that is [ for every (z,s)∈×_+, (η_t(y))_(y,t)∈×_+ and; (η_s+t(z+y))_(y,t)∈×_+ have the same law under _b. ] c:decoupling_dynamic We also assume the following time-decoupling condition. There exists α>0 such that for every A>0, there exists c:decoupling_dynamic=c:decoupling_dynamic(A)>0 such that for every h>0, for every pair of boxes B_1 and B_2 with maximal side lengths Ah that are h-separated, for all pairs of [0,1]-valued functions f_1 and f_2 on 𝒟(_+,I^) such that f_1(η) is σ(η|_ B_1)-measurable and f_2(η) is σ(η|_ B_2)-measurable, Cov_b(f_1(η),f_2(η))⩽c:decoupling_dynamic h^-α, where Cov_b denotes the covariance with respect to _b. This model consists of our background environment in the sense that it partitions ^2 into sets given by 𝒪_i={x=(y,t)∈^2,η_t(y)=i} for every i∈ I. Assumption (<ref>) implies the decoupling property we are after using the right choice of A and provided that α is large enough. Examples of environments satisfying (<ref>) and (<ref>) are given in <cit.>: the contact process, Markov processes with a positive spectral gap, the East model and independent renewal chains. Mind that our contribution for random walks in those environments is quite different from what is done in <cit.>, even if we consider the random walks from the 1D dynamic setup as evolving in ^2 by seeing time as a second spatial coordinate. For instance, these never go downwards. 
Another difference is that in <cit.>, in order to know where to jump, random walks are allowed to look at the environment not only where they are but in a horizontal interval of ^2. §.§ Boolean percolation In <cit.>, the authors show a decoupling property for the Boolean percolation process in ^2, a model first introduced in <cit.>. Here is a brief account of what this model consists of and how it can be used in the framework of this paper. Heuristically, Boolean percolation can be defined using a Poisson point process of intensity λ>0 in ^2, and allocating independently to each point in this point process a ball of random radius, sampled from a common distribution ν in _+. One way to make this more rigorous is that chosen in <cit.>. For a subset η∈^2×_+, let 𝒪(η)=⋃_(x,z)∈η B(x,z), where B(x,z) is the Euclidean open ball of center x and radius z. Let λ>0 and ν be a probability measure on (_+,ℬ(_+)). We assume that ν satisfies the following moment condition: there exists α>0 such that c:Boolean2 ∫_0^∞ z^2+α ν(z)=c:Boolean2<∞. This common assumption implies, using Markov's inequality, that the radii of our Boolean percolation have tails that decrease with a polynomial rate of exponent α+2. Let η be a Poisson point process in ^2×_+ with intensity λ x ⊗ν(z), where x is the Lebesgue measure on ^2. Let _b denote the law of this random variable. _b and Cov_b denote the associated expectation and covariance. Random variable 𝒪=𝒪(η) is called the Poisson-Boolean percolation of intensity λ and radius law ν. For every site x∈^2, we say that x is occupied if x∈𝒪. Otherwise, we say that x is vacant. This model consists of our background environment in the sense that it partitions ^2 into two sets: the occupied sites and the vacant sites. Recall the definition of S from Section <ref>. Let p^∙=(p_1^∙,…, p_4^∙) and p^∘=(p_1^∘,…, p_4^∘) be two elements of S satisfying, for a certain ε>0, {[ p_4^∙⩾ 1/2+ε;; p_4^∘⩾ 1/2+ε. ]. We define an environment μ by setting, for every x∈^2, μ(x)={[ p^∙ if x is occupied; p^∘ if x is vacant. ]. Recall also the definitions of Ω_1 and 𝒯_1. We let be the probability measure on (Ω_1,𝒯_1) such that, for every 𝐩=(p_x)∈{p^∙,p^∘}^^2, (𝐩)=_b([ x is occupied for all x s.t. p_x=p^∙; x is vacant for all x s.t. p_x=p^∘ ]). Drift assumption (<ref>) is clearly a consequence of (<ref>). Let us now state a decoupling property for this environment. Proposition <ref> gives a stronger property than the decoupling property we want to get, using translation invariance, the right choice of κ and provided that α is large enough. For r>0, let B^∞(r)=[-r,r]^2. c:Boolean Recall (<ref>). For every κ>0, there exists c:Boolean=c:Boolean(λ,ν,κ)>0 such that for all r⩾ 1 and for all pairs of functions f_1,f_2:𝒫(^2)→ [-1,1] such that f_1(𝒪) is σ(𝒪∩ B^∞(r))-measurable and f_2(𝒪) is σ(𝒪∩ B^∞(r(1+κ))^c)-measurable, we have Cov_b(f_1(𝒪),f_2(𝒪))⩽c:Boolean r^-α. Let κ>0, r⩾ 1 and f_1, f_2 as in the statement of the proposition. For a subset K⊆^2, we let 𝒪_K=⋃_(x,z)∈η, x∈ K B(x,z) be the union of all the balls from 𝒪 whose centers are in K. Mind that in general 𝒪_K≠𝒪∩ K. Let K=B^∞(r(1+κ/2)). Because of the properties of a Poisson point process, random variables f_1(𝒪_K) and f_2(𝒪_K^c) are _b-independent. Therefore Cov_b(f_1(𝒪),f_2(𝒪)) ⩽_b(f_1(𝒪)≠ f_1(𝒪_K))+_b(f_2(𝒪)≠ f_2(𝒪_K^c)) ⩽_b(𝒪_K^c∩ B^∞(r)≠∅)+_b(𝒪_K∩ B^∞(r(1+κ))^c≠∅). 
In the last line, in order to bound the second term in the sum, we can notice that _b(𝒪_K∩ B^∞(r(1+κ))^c≠∅) ⩽_b(∃ (x,z)∈η, x∈ K, z⩾κ r/2) ⩽_b[#{(x,z)∈η, x∈ K, z⩾κ r/2}] ⩽ 4r^2(1+κ/2)^2λ∫_κ r/2^∞ν(z) ⩽2^α+4λ(1+κ/2)^2/κ^2+α r^α∫_κ r/2^∞ z^2+α ν(z) ⩽2^α+4λ(1+κ/2)^2 c:Boolean2/κ^2+α r^-α. As for the first term, we use the partition of K^c given by the sets A_i=B^∞(r(1+κ/2)+i+1)∖ B^∞(r(1+κ/2)+i) for i∈, and write _b(𝒪_K^c∩ B^∞(r)≠∅) ⩽∑_i∈_b(𝒪_A_i∩ B^∞(r)≠∅) ⩽ 4λ∑_i∈ (r(1+κ/2)+i+1)∫_κ r/2+i^∞ν(z) =4λ∫_κ r/2^∞∑_i=0^⌊ z-κ r/2⌋ (r(1+κ/2)+i+1) ν(z) ⩽ 4λ∫_κ r/2^∞ (r+z+1)^2 ν(z) ⩽4λ/(r(1+κ/2)+1)^α∫_r(1+κ/2)+1^∞ z^2+α ν(z) ⩽ 4λ c:Boolean2 r^-α. Combining the two estimates yields the result. §.§ Gaussian fields c:dec_Gaussian c:dec_Gaussian2 In <cit.> (Section 6.1), the authors introduce a background environment on ^2 using Gaussian fields. This environment satisfies a decoupling assumption that is stronger than ours. Here is a brief account of what we need from <cit.> in our framework. Let q:^2→_+ a non-zero function such that ∀ (x_1,x_2)∈^2, q(x_1,x_2)=q(-x_1,x_2). We also assume that there exists λ>2 and c:dec_Gaussian>0 such that ∀ x∈^2∖{0}, q(x)⩽c:dec_Gaussian |x|^-λ. We also consider a family (W_x)_x∈^2 of i.i.d. standard normal random variables and we define the Gaussian field (g_x)_x∈^2 by setting g_x=∑_y∈^2 q(x-y) W_y. The background environment we are interested in is given by (η_x)_x∈^2 where η_x is the sign of g_x (that is, η_x∈{± 1}). By construction (η_x)_x∈^2 is translation-invariant. It remains to check that it satisfies our decoupling assumption. The authors of <cit.> show the stronger property that follows. Recall (<ref>) and (<ref>). There exists c:dec_Gaussian2>0 such that for every integer r⩾ 2 and every box C=[a,a+w]× [b,b+h]⊆^2 with w,h⩾ 1, there exists a coupling between η and a field η^C,r such that _b(η≠η^C,r)⩽c:dec_Gaussian2 (wh+(w+h)r+r^2) r^-λ+3/2 and, if A⊆ C and B⊆^2 satisfy d(A,B)>r, then η^C,r|_A and η^C,r|_B are independent. This decoupling property implies that if B_1 and B_2 are two boxes with maximum side lengths 2(2β+1)h that are h-separated, and if f_1 and f_2 are two measurable functions on {± 1}^^2 such that f_1(η) is σ(η|_ B_1)-measurable and f_2(η) is σ(η|_ B_2)-measurable, Cov_b(f_1(η),f_2(η))⩽ c h^-α for a certain constant c>0, where α=-(2-λ+3/2). Therefore we have α>12 provided that λ>31/2. §.§ Factors of i.i.d. with light-tail finite radii Let Y=(Y_x)_x∈^2 be a family of i.i.d. random variables in [0,1] with law _b (as usual, _b and Cov_b will denote the associated expectation and covariance). Let η:^2→{0,1} be a random variable. We say that η is a factor of Y with finite radius if there exist two measurable functions ϕ:[0,1]^^2→{0,1} and ρ:[0,1]^^2→_+ such that: * For all x∈^2, η(x)=ϕ(θ^x Y), where θ^x 𝐲=(y_x+v)_v∈^2 for every 𝐲=(y_v)∈ [0,1]^^2; * For _b-almost all 𝐲,𝐲'∈ [0,1]^^2 that coincide outside of B(o,ρ(𝐲)), ϕ(𝐲) and ϕ(𝐲') are equal at o. This implies that we only need to look at Y in a ball of radius ρ(Y) around a site x∈^2 to determine η(x). Random variable R=ρ(Y) is called the radius of η. c:factors Such a process η can be seen as a background environment. It is translation-invariant by construction. In order to show a decoupling property for η, we need to make an additional assumption on the radius: we assume that there exist α>0 and c:factors>0 such that for all r>0, _b(R>r)⩽c:factors r^-α. 
c:decoupling_factors There exists c:decoupling_factors>0 such that for every h>0, for every pair of h-separated boxes B_1 and B_2 of ^2, for all pairs of [0,1]-valued functions f_1 and f_2 on {0,1}^^2 such that f_1(η) is σ(η|_ B_1)-measurable and f_2(η) is σ(η|_ B_2)-measurable, Cov_b (f_1(η),f_2(η))⩽c:decoupling_factors h^-α. Let us define, for i∈{1,2}, a box B̃_i=(B_i+[-h/3,h/3]^2)∩^2 and a function ψ_i:[0,1]^B̃_i→ [0,1]^^2 by setting, for 𝐲=(y_v)_v∈B̃_i∈ [0,1]^B̃_i and x∈^2, ψ_B̃_i(y)_x=y_x 1_B̃_i(x). Also, we define g_i=f_i∘ϕ∘ψ_B̃_i, which takes arguments in [0,1]^B̃_i. The crucial idea is that if R⩽ h/3, we have f_i(η)=g_i(Y_B̃_i), where Y_B̃_i=(Y_v)_v∈B̃_i. Now, remark that Y_B̃_1 and Y_B̃_2 are independent, since B̃_1 and B̃_2 are disjoint. Therefore, _b[f_1(η) f_2(η)] ⩽_b[g_1(Y_B̃_1) g_2(Y_B̃_2) 1_R⩽ h/3]+ _b(R>h/3) ⩽_b[g_1(Y_B̃_1)] _b[g_2(Y_B̃_2)]+c:factors h^-α ⩽ (_b[f_1(η)]+c:factors h^-α) (_b[f_2(η)]+c:factors h^-α)+c:factors h^-α ⩽_b[f_1(η)] _b[f_2(η)] +c h^-α. Taking a closer look at this proof, it is clear that in fact the decoupling property holds not only for boxes that are vertically separated, but for any two sets of ^2 between which the distance is at least h. In the end we do recover the decoupling property that we want, provided that α>12. To make things simple, we used a factor of i.i.d. that takes values in {0,1}, but this does not affect the proof of the decoupling property. We could work with a much bigger set I and use the background environment given by 𝒪_i={x∈^2, η(x)=i} for i∈ I. Furthermore, one way to construct a random environment directly (that is, without using an intermediary background environment as we have been doing so far), would be to take for I the set S defined in Section <ref>. In that case, if we add the drift condition, μ=ϕ(Y) is a random environment satisfying our assumptions. § ACKNOWLEDGMENTS This work could not have been possible without the extensive help of my PhD supervisors Oriane Blondel (ICJ, Villeurbanne, France) and Augusto Teixeira (IMPA, Rio de Janeiro, Brazil), and I would like to take this opportunity to thank them wholeheartedly for their involvement, kindness and patience. This work was supported by a doctoral contract provided by CNRS. Finally, my working in person with Augusto Teixeira in IST (Lisbon, Portugal) was enabled by grants from ICJ and Labex Milyon, and I would like to thank IST and especially Patricia Gonçalves and Beatriz Salvador for welcoming me away from ICJ. § APPENDIX Proof of Proposition <ref>. Let y∈^2 and Γ∈ℋ. We want to show that for every k∈^* and f_1,…,f_k measurable non-negative functions on [0,1], we have [f_1(U_1^y,Γ)⋯ f_k(U_k^y,Γ)]=∫_0^1 f_1(u) u ⋯ ∫_0^1 f_k(u) u. We show this by induction on k. The case k=1 simply follows from the fact that U_1^y,Γ=U(y,Γ(y)+1). Assume (<ref>) is true for a fixed k∈^*. Let f_1,…,f_k+1 be measurable non-negative functions on [0,1]. Set n_0=1 and x_0=y. We have [f_1(U_1^y,Γ)⋯ f_k+1(U_k+1^y,Γ)] =∑_x_1,…,x_k∈^2 n_1,…,n_k∈^*[f_1(U(x_0,Γ(x_0)+n_0))⋯ f_k+1(U(x_k,Γ(x_k)+n_k)) ∏_j=1^k1_X^y,Γ_j=x_j 1_N_j+1^y,Γ(x_j)=n_j]. Now in each term of this sum, the variable f_k+1(U(x_k,Γ(x_k)+n_k)) is independent from all the other variables that appear, for those are all measurable with respect to μ and {U(x_0,Γ(x_0)+n_0),…,U(x_k-1,Γ(x_k-1)+n_k-1)}, where, for every j∈ 0,k-1, either x_k≠ x_j or Γ(x_k)+n_k≠Γ(x_j)+n_j. Now for any x_k∈^2 and n_k∈^*, [f_k+1(U(x_k,Γ(x_k)+n_k))]=∫_0^1 f_k+1(u) u. 
Therefore [f_1(U_1^y,Γ)⋯ f_k+1(U_k+1^y,Γ)] =[f_1(U_1^y,Γ)⋯ f_k(U_k^y,Γ)] ∫_0^1 f_k+1(u) u, and then using the induction assumption allows us to conclude. The exact same arguments work when replacing by ^μ. Proof of Proposition <ref>. We write the proof for y=o and Γ=0 for the sake of simplicity (initial conditions play no part in our reasoning). Let μ∈𝒜. Recall Definition <ref> for the lower-bound random walk, as well as (<ref>). Using increment inequality (<ref>), we have ^μ(E_H^c) =^μ(∃ n∈^*, π_2(X_n)<-H) ⩽^μ(∃ n∈^*, X̂_n<-H) ⩽^μ(τ̂_-H<∞). Now, under ^μ, X̂ is a standard biased 1D random walk with probability 1/2+ε of going up and 1/2-ε of going down. Applying the gambler's ruin estimate, we get ^μ(τ̂_-H<∞)=(1/2-ε/1/2+ε)^H, hence the result for ^μ, and integrate to get the result for . For the case H=0, we can write ^μ(E_0) ⩾^μ({X_1=e_2} ∩ E_1^e_2,1_{o}) =^μ(X_1=e_2) ^μ(E_1^e_2,1_{o}) ⩾(1/2+ε) ^μ(E_1^e_2,1_{o}). Now we can use the gambler's ruin estimate again: ^μ(E_1^e_2,1_{o})⩾^μ(τ̂_-1^e_2,1_{o}=+∞)=2ε/1/2+ε. This yields the result for ^μ, and we integrate to get the result for . c:ballisticity_tilde Proof of Proposition <ref>. Let μ∈𝒜. Again, we only show the case y=o and Γ=0 for the sake of conciseness. Now, in order to study the horizontal behavior of X, let us define, in the same fashion as in Definition <ref>, a lazy biased 1D random walk X̃ coupled to X in the following way: {[ X̃_0=0;; ∀ n∈, X̃_n+1=X̃_n+1_U_n+1⩽ 1/2-ε. ]. We can check that this random walk satisfies, for every n∈, [ ^μ(X̃_n+1=x+1 | X̃_n=x)=1/2-ε;; ^μ(X̃_n+1=x | X̃_n=x)=1/2+ε;; {X_n+1=X_n+1}⊇{X_n+1=X_n+e_1}. ] We also associate a stopping time τ̃_H for every H∈^*, in the same way as τ̂_H was associated to X̂. The mean speed of this new random walk is 1/2-ε, so we can obtain a ballisticity property similar to that of Proposition <ref> where we replace 2ε by 1/2-ε. More precisely, for any ξ>0, there exists a constant c:ballisticity_tilde=c:ballisticity_tilde(ξ)>0 such that for every n∈, we have ^μ(|X̃_n-X̃_0-(1/2-ε)n|⩾ξ n)⩽c:ballisticity_tilde^-1 e^-c:ballisticity_tilden. We could define another random walk for when X goes left, but using only X̃ is sufficient by symmetry of the problem. Actually, if we fix a parameter ζ>0 to be chosen later, we have ^μ(D_H^c)⩽ 2^μ(τ̃_β H⩽τ̂_H)⩽ 2^μ(τ̂_H⩾⌈ H/ζ⌉)+2^μ(τ_β H⩽⌈ H/ζ⌉). For the first term above, we use Proposition <ref> and write, for ζ<2ε, ^μ(τ̂_H⩾⌈ H/ζ⌉) ⩽^μ(X̂_⌈ H/ζ⌉⩽ H) ⩽c:ballisticity^-1 e^-c:ballisticity⌈ H/ζ⌉ ⩽ c^-1 e^-c H, for a certain constant c=c(ζ)>0. In the same way, using (<ref>), ^μ(τ_β H⩽⌈ H/ζ⌉)⩽ c^-1 e^-cH when β>1/2-ε/ζ, that is ζ>1/2-ε/β. This means that our proof works whenever ζ∈(1/2-ε/β,2ε), which is non-empty because of (<ref>). Therefore, estimate (<ref>) is shown. Proof of Lemma <ref>. Let μ∈𝒜. By Corollary <ref>, we can assume that y=o and Γ=0. We fix a∈ and k=k(a) an integer to be chosen later. Let us consider hitting times τ̂_jk for j∈ (recall (<ref>)). Let us consider the following events (with the convention that if τ̂_jk=∞, we just take the empty set): [ A_j,k={(X̂_τ̂_jk+n)_n∈ does not return below jk}=⋂_n∈{X̂_τ̂_jk+n⩾ jk};; Ã_j,k={(X̂_τ̂_jk+n)_n∈ does not return below jk within k steps}=⋂_n=0^k {X̂_τ̂_jk+n⩾ jk}. ] Remark that, since X̂ jumps at range 1, for every j∈ we have τ̂_(j+1)k⩾τ̂_jk+k. Therefore, using Corollary <ref> along with an induction argument, the (Ã_j,k)_j∈ are independent events. 
Moreover they all have the same probability p_k=^μ(Ã_j,k)⩾^μ(A_j,k)=^μ(A_0,k)⩾ 2ε, using Proposition <ref> and the same line of reasoning as in the end of the proof of Proposition <ref>. Therefore the random variable given by G_k=inf{j∈, Ã_j,k occurs} is a geometric variable of parameter p_k. Now, we have ^μ(Θ(X̂)>a) ⩽^μ(∀ j⩽ a/k, jk is not a cut point of X̂) ⩽^μ(∩_j⩽ a/k A_j,k^c) ⩽^μ(∩_j⩽ a/k Ã_j,k^c)+^μ(∪_j⩽ a/k A_j,k^c∩Ã_j,k). The first term in the last line above can be bounded from above by ^μ(G_k>⌊ a/k⌋)=(1-p_k)^⌊ a/k⌋ +1⩽ (1-2ε)^a/k. As for the second term, we use a union bound and remark that, for any j∈, we have ^μ(A_j,k^c∩Ã_j,k) = ^μ((X̂_τ̂_jk+n)_n∈ returns below jk but not within k steps) =∑_t∈ ^μ((X̂_t+n)_n∈ returns below X̂_t but not within k steps, τ̂_jk=t). In each term of the sum above, the two events between parentheses are independent, using Corollary <ref>. Now the probability of the first event actually does not depend on t, using Proposition <ref>, so ^μ(A_j,k^c∩Ã_j,k) =^μ(X̂ returns below 0 but not within k steps) ⩽^μ(X̂ returns below 0 but not within k steps, X̂_k⩾ε k)+^μ(X̂_k<ε k). Using Proposition <ref>, ^μ(X̂_k<ε k)⩽c:ballisticity(ε)^-1 e^-c:ballisticity(ε) k. To study the other term, we remark that it is less than ^μ(τ̂^X_k,N_k_-⌊ε k⌋<∞). Now, we can apply Proposition <ref> to estimate this, considering that for any μ∈𝒜, by the gambler's ruin estimate, we have ^μ(τ̂_-⌊ε k⌋<∞)=(1/2-ε/1/2+ε)^⌊ε k⌋⩽ e^-c k, where c>0 is uniform in μ. Therefore this bound also holds for ^μ(τ̂^X_k,N_k_-⌊ε k⌋<∞). At the end of the day, combining this with (<ref>) and (<ref>), we get the desired result by choosing k=k(a)=⌊ a^1/2⌋ and adjusting c:cut_line_hat properly. To get the same estimate with , we integrate over μ. alpha
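As a purely illustrative aside, not part of the proofs above, the following short Python sketch simulates the biased lower-bound walk X̂, which steps up with probability 1/2+ε and down with probability 1/2-ε, and compares the empirical frequency of hitting level -H with the gambler's-ruin value ((1/2-ε)/(1/2+ε))^H used repeatedly in the appendix. The function names, the truncation horizon max_steps and the sample size trials are our own choices; truncation can only slightly underestimate the hitting probability.

```python
import random

def hits_level_below(eps, H, max_steps):
    """Run the biased walk that steps up with probability 1/2 + eps and down
    with probability 1/2 - eps, and report whether it reaches level -H
    within max_steps steps."""
    x = 0
    for _ in range(max_steps):
        x += 1 if random.random() < 0.5 + eps else -1
        if x <= -H:
            return True
    return False

def estimated_ruin_probability(eps, H, trials=10_000, max_steps=1_000):
    # Truncating each path at max_steps can only underestimate the hitting
    # probability; the error is negligible for moderate eps and H.
    return sum(hits_level_below(eps, H, max_steps) for _ in range(trials)) / trials

if __name__ == "__main__":
    eps, H = 0.1, 5
    exact = ((0.5 - eps) / (0.5 + eps)) ** H  # gambler's ruin value
    print("simulation:", estimated_ruin_probability(eps, H), "formula:", exact)
```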
http://arxiv.org/abs/2307.01298v1
20230703190828
Extremely Persistent Dense Active Fluids
[ "Grzegorz Szamel", "Elijah Flenner" ]
cond-mat.soft
[ "cond-mat.soft" ]
Department of Chemistry, Colorado State University, Fort Collins, Colorado 80523, USA Department of Chemistry, Colorado State University, Fort Collins, Colorado 80523, USA We examine the dependence of the dynamics of three-dimensional active fluids on persistence time τ_p and average self-propulsion force f. In the large persistence time limit many properties of these fluids become τ_p-independent. These properties include the mean squared velocity, the self-intermediate scattering function, the shear-stress correlation function and the low-shear-rate viscosity. We find that for a given f in the large τ_p limit the mean squared displacement is independent of the persistence time for times shorter than τ_p and the long-time self-diffusion coefficient is proportional to the persistence time. For a large range of self-propulsion forces the large persistence time limits of many properties depend on f as power laws. Extremely Persistent Dense Active Fluids Elijah Flenner August 1, 2023 ======================================== Particles that use energy from their environment to perform persistent motion, i.e. self-propelled or active particles, behave in surprising and interesting ways <cit.>. Recently, novel intermittent dynamics was identified in extremely persistent dense homogeneous two-dimensional systems <cit.>. It was shown that these systems evolve through sequences of mechanical equilibria in which self-propulsion forces balance interparticle interactions. Here we examine extremely persistent dense homogeneous three-dimensional active fluids in which the interparticle interactions never manage to balance the self-propulsion forces. Many properties of these fluids become τ_p independent and scale with the root-mean-squared self-propulsion force f as power laws. We recall that the phase space of active systems is much larger than that of passive ones. At a minimum, one has to specify the average strength of active forces and their persistence time in addition to the set of parameters characterizing the corresponding passive system. If one considers athermal active systems, this results in a three-dimensional control parameter space. Thus, when comparing results of diverse studies, one needs to specify the path in the parameter space that one is following. Early studies of dense homogeneous active systems focused on the glassy dynamics and the active glass transition <cit.>. These studies considered a limited range of persistence times and often examined the behavior of the systems at constant active temperature T_a that characterizes the long-time motion of an isolated active particle. At constant T_a, with increasing τ_p the strength of active forces decreases and dense active systems typically glassify, see Fig. 2c of Ref. <cit.> for a recent example. Thus, to investigate the effects of extremely persistent active forces it is common to fix their strength while increasing their persistence time <cit.>. Recent simulational studies of dense two-dimensional active systems demonstrated that, for large persistence times, there is a new phase between fluid and glass phases with intermittent dynamics <cit.>. Importantly, in the large τ_p limit the relaxation happens on the time scale of the persistence time, and the mean-square displacement and the two-point overlap function exhibit well-defined limits when plotted versus time rescaled by the persistence time <cit.>. 
This observation suggests that the dynamics at large persistence times may be studied by assuming that the active and interparticle forces converge to a force-balanced state for times much less than the persistence time and that all the rearrangements happen on the time-scale of the persistence time. This approach is termed activity-driven dynamics <cit.>. A recent study that used the activity-driven dynamics algorithm <cit.> discovered very interesting dynamics of two-dimensional active systems in the extreme persistence time limit. Displacement distributions were found to be non-Gaussian and to exhibit fat exponential tails. An intermediate-time plateau in the mean-square displacement was absent. Instead, a region scaling with time as t^β with β≈ 1.6 for times less than the persistence time was identified. The complex intermittent dynamics resembled that found in zero-temperature driven amorphous solids, but with some important differences. Here we focus on extremely persistent three-dimensional dense homogeneous active fluids. Like Keta et al. <cit.>, we use τ_p and f as our control parameters. In the systems we investigated, for large enough τ_p the mean square velocity saturates at a non-zero value determined by f. The interparticle forces never manage to completely balance the active forces and the systems relax on a time scale shorter than the persistence time. Therefore, the infinite τ_p limits of our systems lie in the un-jammed phase of the active yielding phase diagram studied (in two dimensions) by Liao and Xu <cit.>. Several properties of these systems become τ_p-independent and scale as non-trivial power laws with f. In the following we describe the systems we studied and then present and discuss our observations. We study a three-dimensional 50:50 binary mixture of spherically symmetric active particles interacting via the Weeks-Chandler-Andersen potential, V_αβ = 4 ϵ[ (σ_αβ/r)^12 - (σ_αβ/r)^6] for r < ς_αβ = 2^1/6σ_αβ and 0 otherwise. Here, α, β denote the particle species A or B and ϵ is the unit of energy. The distance unit is set by σ_BB = 1.0, σ_AA = 1.4, and σ_AB = 1.2. We study the number density N/V=0.451, which corresponds to the volume fraction ϕ=π N [ς_AA^3 + ς_BB^3]/(12 V)=0.625. We use the athermal active Ornstein-Uhlenbeck particle model <cit.>. The equation of motion for the position 𝐫_n of particle n is ξ_0 𝐫̇_n = 𝐅_n + 𝐟_n, where 𝐅_n = - ∑_m≠ n∇_n V(r_nm) and 𝐟_n is the active force. ξ_0=1 is the friction coefficient of an isolated particle and ξ_0σ_BB/ϵ sets the unit of time. The equation of motion for 𝐟_n reads τ_p 𝐟̇_n = -𝐟_n + ζ_n, where τ_p is the persistence time of the self-propulsion and ζ_n is a Gaussian white noise with zero mean and variance < ζ_n(t) ζ_m(t^')>_noise = 2 ξ_0 T_a 𝐈δ_nmδ(t-t^'), where < …>_noise denotes averaging over the noise distribution, T_a is a single-particle effective temperature, 𝐈 is the unit tensor and we set the Boltzmann constant k_B = 1. The root-mean-square strength of the active forces is f = √(3 T_a/τ_p). We start by examining the persistence time dependence of the mean square velocity, v^2 ≡< ∑_n 𝐫̇_n^2 > = < ∑_n 𝐅_n^2 + ∑_n 2𝐅_n ·𝐟_n > + N f^2, which is shown in Fig. <ref>. We observe that with increasing persistence time v^2 decreases and then saturates. For each active force strength f we define a characteristic persistence time τ_p(f) at which v^2 stops changing <cit.>. The cancellation of the interparticle and active forces is never complete, unlike in the systems investigated in Ref. <cit.>. 
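The model just described is simple enough to integrate directly. The following is a minimal Euler–Maruyama sketch of the athermal AOUP dynamics with WCA interactions; it is not the code used for the paper's simulations. The particle number, box size, time step, equal diameters and the lattice initialisation are illustrative assumptions, and no neighbor lists are used.

```python
import numpy as np

# Illustrative parameters (not the paper's production settings)
N, L = 64, 5.2                 # particles, cubic box edge (density ~ 0.45)
dt, xi0, Ta, tau_p = 1e-4, 1.0, 1.0, 10.0
sigma = np.ones(N)             # single diameter here; the paper uses a 1.0/1.4 binary mixture

rng = np.random.default_rng(0)
g = int(np.ceil(N ** (1 / 3)))
pts = np.array([(i, j, k) for i in range(g) for j in range(g) for k in range(g)])
r = (pts[:N] + 0.5) * (L / g)                          # simple cubic lattice, no overlaps
f = rng.normal(0.0, np.sqrt(Ta / tau_p), size=(N, 3))  # stationary OU std per component (xi0 = 1)

def wca_forces(r):
    """Pairwise WCA forces with the minimum-image convention (epsilon = 1)."""
    d = r[:, None, :] - r[None, :, :]
    d -= L * np.round(d / L)
    r2 = (d ** 2).sum(-1) + np.eye(N)                  # avoid division by zero on the diagonal
    sij = 0.5 * (sigma[:, None] + sigma[None, :])      # additive mixing rule (assumption)
    cut2 = (2 ** (1 / 6) * sij) ** 2
    s6 = (sij ** 2 / r2) ** 3
    w = np.where(r2 < cut2, 24.0 * (2.0 * s6 ** 2 - s6) / r2, 0.0)
    np.fill_diagonal(w, 0.0)
    return (w[:, :, None] * d).sum(axis=1)

def step(r, f):
    """One Euler-Maruyama update of positions and OU self-propulsion forces."""
    F = wca_forces(r)
    r = (r + dt * (F + f) / xi0) % L                   # wrapped; keep unwrapped copies for the MSD
    f = f - dt * f / tau_p + np.sqrt(2.0 * xi0 * Ta * dt) / tau_p * rng.normal(size=(N, 3))
    return r, f

for _ in range(100):
    r, f = step(r, f)
print("f_rms =", np.sqrt((f ** 2).sum(axis=1).mean()), " target =", np.sqrt(3 * Ta / tau_p))
```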
For the range of f that we studied, our systems are never at the bottom of an effective potential consisting of the potential energy tilted by the terms originating from the active forces <cit.> and do not become arrested on the time scale of the persistence time. Therefore, in the infinite τ_p limit our systems fall into the fluid phase of the three-dimensional version of the phase diagram of Liao and Xu <cit.>. Next, we examine the persistence time dependence of the mean squared displacement (MSD) < δ r^2(t) > = N^-1< ∑_n [𝐫_n(t) - 𝐫_n(0)]^2 >, shown in Fig. <ref>. At short times the motion is ballistic and it is determined by the mean square velocity, < δ r^2(t) > = v^2 t^2 <cit.>. Thus, the short-time MSD decreases with increasing τ_p and becomes constant beyond τ_p(f). At long times, the MSD exhibits diffusive behavior. The self-diffusion coefficient, D=lim_t→∞< δ r^2(t) >/(6t), is shown in Fig. <ref>. For a given f it monotonically increases with increasing τ_p. At large τ_p we find that D ∼τ_p, as indicated by the dashed lines. We found a surprising time-dependence of the MSD between the initial ballistic and the long-time diffusive regimes. In Fig. <ref> we show the MSD divided by v^2t^2 to show this time-dependence more clearly. The MSD exhibits a superdiffusive behavior that seems to approach a large-τ_p master curve. The superdiffusive behavior does not follow a single power law. Instead, a second, intermediate-time ballistic regime appears, with velocity v_1. This is in contrast to the finding of Keta et al. <cit.>, who observed intermediate power-law behavior < δ r^2(t) > ∼ t^β with β≈ 1.6. In the τ_p→∞ limit the system stays in the second ballistic regime. Results shown in Fig. <ref> suggest that in the large τ_p limit the diffusion can be thought of as a random walk consisting of steps of length v_1τ_p taken every τ_p. This picture rationalizes the observed scaling D∼τ_p. In Fig. <ref> we show the velocity distributions. As found by Keta et al. <cit.>, the distributions are strongly non-Gaussian. Their broad tails become more prominent with increasing τ_p until τ_p(f). For τ_p > τ_p(f) the distributions overlap. The evolution of the mean square displacement with the persistence time is reflected in the τ_p dependence of the self-intermediate scattering function F_s(k;t) = 1/N< ∑_n e^i 𝐤· (𝐫_n(t) - 𝐫_n(0))>. We chose k = 5.3, which is approximately equal to the position of the first peak of the total static structure factor. In Fig. <ref> we show F_s(k;t) for f = 0.0548. With increasing τ_p the intermediate-time glassy plateau disappears and the decay changes from stretched exponential, to exponential, and then to compressed exponential. Shown in the inset to Fig. <ref> is the parameter Γ obtained from fits to F_s(k;t) = ae^-(t/τ_s)^Γ, where we restrict a ≤ 1. Γ increases with increasing τ_p and reaches a plateau above τ_p ≈ 94. We find that the large persistence time limits of several properties discussed above depend on the strength of the active forces as power laws. In Fig. <ref> we show the large τ_p limits of v^2 (squares), D/τ_p (circles) and τ_s (triangles). We find that the former two quantities follow a power law in f with statistically the same exponent, 2.6± 0.1 for v^2 and 2.5± 0.1 for D/τ_p. The power law of the relaxation time, τ_s∼ f^-1.3, can be related to that of v^2; in the large τ_p limit F_s decays on the time scale over which a particle moves a distance comparable to its diameter, and this time scale is inversely proportional to v. The quantities discussed above describe the single-particle motion in our many-particle systems. 
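The single-particle observables defined above are straightforward to estimate from stored trajectories. Below is a minimal sketch assuming an unwrapped trajectory array of shape (frames, N, 3); for simplicity it averages F_s over random directions at fixed |k| rather than over box-commensurate wavevectors, so it illustrates the definitions rather than reproducing the paper's analysis.

```python
import numpy as np

def msd(traj, lags):
    """Mean squared displacement <dr^2(t)> from an unwrapped trajectory (frames, N, 3)."""
    out = []
    for lag in lags:                                # lags are positive frame offsets
        d = traj[lag:] - traj[:-lag]
        out.append((d ** 2).sum(axis=-1).mean())
    return np.array(out)

def self_isf(traj, lags, k=5.3, n_dir=32, seed=0):
    """Self-intermediate scattering function F_s(k;t), averaged over random k directions."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_dir, 3))
    kvecs = k * u / np.linalg.norm(u, axis=1, keepdims=True)
    out = []
    for lag in lags:
        d = traj[lag:] - traj[:-lag]
        phase = np.exp(1j * d @ kvecs.T)            # e^{i k.(r(t)-r(0))}
        out.append(phase.mean().real)
    return np.array(out)

# usage sketch: logarithmically spaced lags, D from the long-time MSD slope
# lags = np.unique(np.logspace(0, np.log10(n_frames - 1), 40).astype(int))
# D = msd(traj, lags)[-1] / (6 * lags[-1] * dt_between_frames)
```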
To access collective properties of these systems we investigated the τ_p dependence of the stress fluctuations and the rheological response. First, we examined the shear-stress correlation function Σ_αβ(t) = V^-1< σ_αβ(t) σ_αβ(0) >, where σ_αβ(t) = -1/2∑_n ∑_m≠ n r_nm^α r_nm^β/r_nm dV(r_nm)/dr_nm, and r_nm^α is the α component of the distance vector between particle n and particle m. In Fig. <ref> we report the normalized shear-stress correlation function, Σ_xy(t)/Σ_xy(0), for f = 0.0548 and a large range of persistence times. For small τ_p there is a rapid decay to an emerging plateau followed by a slow decay to zero. With increasing τ_p, the decay of Σ_xy(t) becomes more exponential and it is exponential above τ_p(f). In the inset we show the dependence of the initial value, Σ_xy(0), on τ_p. We see that the initial value first grows with τ_p and then plateaus. To probe the rheological response of our active systems we simulated shear flow by adding to Eq. (<ref>) a bulk non-conservative force 𝐅_n^γ̇ = ξ_0 γ̇ y_n 𝐞_x with Lees-Edwards boundary conditions <cit.>. In Fig. <ref> we show the average shear stress divided by the shear rate, < σ_xy>/γ̇, for f = 0.548 and a large range of τ_p. The flow curves for other strengths of the self-propulsion have the same main features. The limiting zero-shear-rate viscosity η can be obtained from the small-γ̇ plateaus. In Fig. <ref> we show the τ_p dependence of the zero-shear-rate viscosity. We find that η initially decreases and reaches a τ_p-independent plateau above τ_p(f). Again, we find that the large τ_p limits of collective properties depend on f as power laws. In Fig. <ref> we show the dependence of the large τ_p limits of the relaxation time of the normalized stress tensor autocorrelation function and of the viscosity on the strength of the self-propulsion. When analyzing the dynamics of passive systems, one usually tries to make a connection between the average distribution of the particles and their dynamics. To check how the average arrangement of the particles in our active systems changes with increasing persistence time we evaluated the steady-state structure factor S(k) = 1/N< ∑_n,m e^i 𝐤· (𝐫_n - 𝐫_m)>. In Fig. <ref> we show that the peak height of the structure factor initially decreases with increasing persistence time, which nicely correlates with the relaxation getting faster and the viscosity decreasing. The peak height then saturates at persistence times around τ_p(f). However, the structure factors for τ_p≥τ_p(f) still look liquid-like <cit.>. It is not at all obvious from these structure factors that the MSD exhibits two ballistic regimes and that F_s(k;t) is well fitted by a compressed exponential. We conclude that to describe the dynamics of extremely persistent dense active fluids one cannot rely upon static structure factors only. We presented here a new class of extremely persistent active matter systems. Whereas earlier investigations <cit.> revealed systems that relax on the time scale of the self-propulsion and exhibit intermittent dynamics with system-size-spanning elastic and plastic events, we uncovered systems that relax on a time scale that, in the large persistence time limit, depends only on the strength of the self-propulsion. Curiously, the single-particle motion exhibits two ballistic regimes separated by a superdiffusive regime. Classic signatures of two-step relaxation are absent both in the mean square displacement and in the intermediate scattering function. 
Many properties that quantify the large persistence time limit of the relaxation depend on the strength of the active forces as a power law. We expect that for higher volume fractions there is a transition between the regime in which the relaxation becomes independent of the persistence time of the self-propulsion, which is the regime we analyzed, and the regime in which the system flows only on the time scale of the self-propulsion, which is the regime investigated earlier <cit.>. At a fixed volume fraction the transition would be driven by the strength of the active forces while at a fixed strength of the active forces it would be driven by the density. We hope that future work will determine the corresponding phase diagram, which would be the three-dimensional analog of the diagram uncovered by Liao and Xu <cit.>. Finally, while for small and moderate persistence times there are approximate theories that can be used to describe the relaxation in active fluids <cit.>, these theories are not expected to work in the large persistence time limit. Thus, the discovery of a new different paradigm of extremely persistent active fluids with non-trivial power laws call for additional theoretical work. We thank L. Berthier and P. Sollich for discussions and comments on the manuscript. Part of this work was done when GS was on sabbatical at Georg-August Universität Göttingen. He thanks his colleagues there for their hospitality. We gratefully acknowledge the support of NSF Grant No. CHE 2154241. 99 Marchetti2013 M.C. Marchetti, J.F. Joanny, S. Ramaswamy, T.B. Liverpool, "Hydrodynamics of soft active matter", J. Prost, Rev. Mod. Phys. 85, 1143 (2013). Elgeti2015 J. Elgeti, R.G. Winkler, and G. Gompper, "Physics of microswimmers–single particle motion and collective behavior: a review", Rep. Prog. Phys. 78, 056601 (2015). Bechinger2016 C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, "Active particles in complex and crowded environments", Rev. Mod. Phys. 88, 045006 (2016). Mandal2020 R. Mandal, P. J. Bhuyan, P. Chaudhuri, C. Dasgupta, and M. Rao, "Extreme active matter at high densities", Nat. Commun. 11, 2581 (2020). Mandal2021 R. Mandal and P. Sollich, "How to study a persistent active glassy system", J. Phys.: Condens. Matt. 33, 184001 (2021). Keta2022 Y.-E. Keta, R.L. Jack, and L. Berthier, "Disordered Collective Motion in Dense Assemblies of Persistent Particles", Phys. Rev. Lett. 129, 048002 (2022). Keta2023 Y.-E. Keta, R. Mandal, P. Sollich, R. L. Jack, and L. Berthier, "Intermittent relaxation and avalanches in extremely persistent active matter", Soft Matter 19, 3871 (2023). Berthier2013 L. Berthier and J. Kurchan, "Non-equilibrium glass transitions in driven and active matter", Nat. Phys. 9, 310 (2013). Berthier2014 L. Berthier, "Nonequilibrium Glassy Dynamics of Self-Propelled Hard Disks", Phys. Rev. Lett. 112, 220602 (2014). Ni2013 R. Ni, M. A. Cohen Stuart, and M. Dijkstra, "Pushing the glass transition towards random close packing using self-propelled hard spheres", Nat. Commun. 4, 1 (2013). Szamel2015 G. Szamel, E. Flenner, and L. Berthier, "Glassy dynamics of athermal self-propelled particles: Computer simulations and a nonequilibrium microscopic theory", Phys. Rev. E 91, 062304 (2015). Mandal2016 R. Mandal, P. J. Bhuyan, M. Rao, and C. Dasgupta, "Active fluidization in dense glassy systems", Soft Matter 12, 6268 (2016). Berthier2019 L. Berthier, E. Flenner, and G. Szamel, "Perspective: Glassy dynamics in dense systems of active particles ", J. Chem. 
Phys. 150, 200901 (2019). Klongvessa2019 N. Klongvessa, F. Ginot, C. Ybert, C. Cottin-Bizonne, and M. Leocmach, "Active Glass: Ergodicity Breaking Dramatically Affects Response to Self-Propulsion", Phys. Rev. Lett. 123, 248004 (2019). Janssen2019 L. Janssen, "Active glasses", J. Phys.: Condens. Matter 31, 503002 (2019). Flenner2016 E. Flenner, G. Szamel, and L. Berthier, "The nonequilibrium glassy dynamics of self-propelled particles", Soft Matter 12, 7136 (2016). Berthier2017 L. Berthier, E. Flenner, and G. Szamel, "How active forces influence nonequilibrium glass transitions", New J. Phys. 19, 125006 (2017). MandalSollich2020 R. Mandal and P. Sollich, "Multiple Types of Aging in Active Glasses", Phys. Rev. Lett. 125, 218001 (2020). LiaoXu Q. Liao and N. Xu, “Criticality of the zero-temperature jamming transition probed by self-propelled particles”, Soft Matter 14, 853 (2018). Szamel2014 G. Szamel, "Self-propelled particle in an external potential: Existence of an effective temperature", Phys. Rev. E 90, 012111 (2014). Maggi2015 U.M.B. Marconi, N. Gnan, M. Paoluzzi, C. Maggi, and R. Di Leonardo, "Velocity distribution in active particles systems", Sci. Rep. 6, 23297 (2016). Fodor2016 E. Fodor, C. Nardini, M. E. Cates, J. Tailleur, P. Visco, and F. van Wijland, "How Far from Equilibrium Is Active Matter?", Phys. Rev. Lett. 117, 038103 (2016). Wiese2023 R. Wiese, K. Kroy, and D. Levis, "Fluid-Glass-Jamming Rheology of Soft Active Brownian Particles", arXiv:2303.11245 (2023). method To approximately quantify τ_p(f) we fit v^2 for small τ_p to a power law, which approximately describes the small τ_p behavior, and determine when this power law is equal to the average of the large τ_p value of b. We find that a reasonable value of τ_p(f) is approximately twice when the power law equals the average value. LeesE A. W. Lees and S. F. Edwards, "The computer study of transport processes under extreme conditions", J. Phys. C: Solid State Phys. 5, 1921 (1972). homogeneous The absence of small wavevector peaks implies that the systems are homogenenous. We confirmed this observation by evaluating local density histograms at several simulated state points. SzamelMCT G. Szamel, "Theory for the dynamics of dense systems of athermal self-propelled particles", Phys. Rev. E 93, 012603 (2016). LiluashviliMCT A. Liluashvili, J. Ónody, and T. Voigtmann, "Mode-coupling theory for active Brownian particles", Phys. Rev. E 96, 062608 (2017). FengMCT1 M. Feng and Z. Hou, "Mode coupling theory for nonequilibrium glassy dynamics of thermal self-propelled particles", Soft Matter 13, 4464 (2017). FengMCT2 M. Feng and Z. Hou, "Mode-coupling theory for the dynamics of dense underdamped active Brownian particle system", J. Chem. Phys. 158, 024102 (2023). DebetsMCT V.E. Debets and L.M.C. Janssen, “Mode-coupling theory for mixtures of athermal self-propelled particles”, arXiv:2304.08936.
http://arxiv.org/abs/2307.02397v1
20230705161341
Extended team orienteering problem: Algorithms and applications
[ "Wen Ji", "Ke Han", "Qian Ge" ]
math.OC
[ "math.OC" ]
Extended team orienteering problem: Algorithms and applications Wen Ji Ke HanCorresponding author, e-mail: kehan@swjtu.edu.cn; Qian Ge Institute of System Science and Engineering, School of Transportation and Logistics, Southwest Jiaotong University, Chengdu, China, 611756 August 1, 2023 ============================================================================================================================================================================================================================== The team orienteering problem (TOP) determines a set of routes, each within a time or distance budget, which collectively visit a set of points of interest (POIs) such that the total score collected at those visited points are maximized. This paper proposes an extension of the TOP (ETOP) by allowing the POIs to be visited multiple times to accumulate scores. Such an extension is necessary for application scenarios like urban sensing where each POI needs to be continuously monitored, or disaster relief where certain locations need to be repeatedly covered. We present two approaches to solve the ETOP, one based on the adaptive large neighborhood search (ALNS) algorithm and the other being a bi-level matheuristic method. Sensitivity analyses are performed to fine-tune the algorithm parameters. Test results on complete graphs with different problem sizes show that: (1) both algorithms significantly outperform a greedy heuristic, with improvements ranging from 9.43% to 27.68%; and (2) while the ALNS-based algorithm slightly outperform the matheuristic in terms of solution optimality, the latter is far more computationally efficient, by 11 to 385 times faster. Finally, a real-world case study of VOCs sensing is presented and formulated as ETOP on a road network (incomplete graph), where the ALNS is outperformed by matheuristic in terms of optimality as the destroy and repair operators yield limited perturbation of existing solutions when constrained by a road network. Keywords: The orienteering problem; route planning; large neighborhood search; matheuristic; mobile urban sensing § INTRODUCTION The orienteering problem (OP) is a routing problem, which determines a subset of nodes (or Points of Interests, POIs) to visit, and the order in which they are visited, so that the total collected score is maximized within a given time or distance budget <cit.>. The team orienteering problem (TOP) is a natural extension of the OP by generating multiple routes <cit.>. Typical application scenarios of the TOP include athlete recruiting <cit.>, home fuel delivery <cit.>, search and rescue operations <cit.> and tourist trip planning <cit.>. A number of variants of the TOP have been proposed and studied in the literature, including the (team) orienteering problem with time window, in which each POI can only be visited within a given time window <cit.>; the time dependent orienteering problem, in which the traveling time between two POIs is time-dependent <cit.>; the stochastic orienteering problem, in which the traveling time and the score at each POI have stochastic attributes <cit.>; and the (team) orienteering problem with variable profits, in which the profit depend on the arrival time or the service time at each POI <cit.>. For more comprehensive review of the OP, its variants, and their applications, we refer the reader to <cit.> and <cit.>. It is essential to note that the OP and its variants proposed in the literature require that each POI is visited at most once. 
In reality, however, there are a significant number of cases where certain POIs need to be visited multiple times, e.g. to collect sufficient information, such as urban mobile sensing (air quality, noise, heat island, etc.), or to perform tasks that require repetition, such as disaster relief (forest fire containment, emergency food supply). Such needs vary among different POIs, and it is important to allow overlap of the agents' routes in a way that meets application-specific requirements. We take, as an example, drive-by sensing of Volatile Organic Compounds (VOCs, a type of air pollutant) performed by a single vehicle, which needs to consistently monitor a number of factories and collect VOCs measurements. The factories are associated with different sensing importance (weights). The vehicle needs to execute two routes per day, with a distance budget of 20km per route, which amounts to a total of 10 routes per week (5 workdays). Clearly, these 10 routes are expected to have certain overlap since factories with higher sensing weights should be visited more frequently. Figure <ref> shows the 10 routes generated by our proposed algorithm (see Section <ref>). No existing variant of TOP is suited for such a situation. Motivated by this, we propose the Extended Team Orienteering Problem (ETOP) by assuming that (1) each POI may be visited multiple times to accumulate scores; (2) the POIs have varying degrees of importance (weights); and (3) the score collected at each POI increases with the number of visits, but the marginal gain decreases[The diminishing marginal gain is essential to avoid over-concentration of the agents in a few important POIs in a score-maximizing exercise.]. The aim of the ETOP is to determine a set of routes, each within a given time/distance budget and covering a subset of POIs, such that the total score is maximized. It is a nontrivial and necessary extension of the TOP. Furthermore, if the POIs are arcs instead of vertices, the ETOP can be applied to the route planning of anti-dust (road sprinkler) operation <cit.> and road surface condition monitoring <cit.>, which can be seen as an extension of the classical capacitated arc routing problem (CARP) <cit.>, with heterogeneous road link coverage. We present a mathematical formulation of the ETOP as a nonlinear integer program, and two solution approaches: one based on the adaptive large neighborhood search (ALNS) algorithm <cit.>, the other being a bi-level matheuristic. Test results on complete graphs with different problem sizes show that: (1) both algorithms significantly outperform a greedy heuristic, with improvements ranging from 9.43% to 27.68%; and (2) while the ALNS-based algorithm slightly outperform the matheuristic in terms of solution optimality, its computational times are much higher, by 11 to 385 times. Finally, a real-world case study of mobile VOCs sensing on a road network is presented and formulated as ETOP on an incomplete graph. In this case, the ALNS is outperformed by matheuristic in terms of optimality as the destroy and repair operators yield limited perturbation of existing solutions when constrained by a road network (incomplete graph). Such a case study highlights the unique applicability of ETOP to certain real-world scenarios, as well as the effectiveness of the proposed algorithms in achieving a sensible solution. The rest of this article is organized as follows. Section <ref> articulate the ETOP with a mathematical formulation. 
Sections <ref> and <ref> present the ALNS and matheuristic solution approaches, respectively. A few discussions on model extensions are presented in Section <ref>. Section <ref> presents extensive numerical and application studies. Finally, Section <ref> presents some concluding remarks. § PROBLEM STATEMENT AND MODEL FORMULATION §.§ Preliminaries We consider a geographic area with several points of interests (POIs), which are to be visited by a set of agents. The ETOP is articulated as follows. The Extended Team Orienteering Problem (ETOP) Given a set of POIs I={1, ..., n}. Each POI i ∈ I is associated with a scoring function that depends on the POI itself as well as the number of times it is visited. Then, the ETOP is to determine n_K routes, each to be executed by an agent within a fixed time or distance budget, such that the total score collected from visiting these POIs is maximized. For each POI i∈{1,…,n}, let q_i∈ℤ be the number of visits by all the agents. The scoring function, denoted f_i(q_i), needs to satisfy the following conditions: * f_i(0)=0, and f_i(·) is monotonically increasing; * The marginal gain of score is decreasing: f_i(q_i,1+1)-f_i(q_i,1)> f_i(q_i,2+1)-f_i(q_i,2) ∀ 0≤ q_i,1< q_i,2 The first condition is straightforward. The second condition implies that when i is visited many times, the extra score from one more visit is small. It is necessary to avoid over-concentration of visits at a few high-value POIs. The following function, which satisfies these conditions, will be used in this paper. f(q_i)= q_i^β β∈ (0, 1) Finally, the objective of the ETOP is to maximize the following weighted sum: f = ∑_i ∈ Iw_iq_i^β where the parameter w_i is the weight of POI i. Intuitively, when β→ 1, the agents tend to visit high-weighting POIs more frequently; as β→ 0, their visits are more evenly distributed among all POIs. The parameters w_i's and β should be jointly determined based on the underlying application. §.§ Mathematical Model Table <ref> lists some key notations used in the model. @rl Notations and symbols 2lSets I Set of POIs; K Set of agents; N Set of vertices in the undirected graph G; A Set of arcs on the undirected graph G; 2lParameters and constants n_K Number of agents; n Number of POIs; t_ij Travel time from vertex i to vertex j; w_i The weight of POI i; Δ The length of time or distance budget. β The parameter in the score function (<ref>) that ensures diminishing marginal gain. 2lAuxiliary variables y_i,k Binary variable that equals 1 if POI i is visited by agent k; a_i, k The visiting time of POI i by agent k; q_i The total number of times the POI i is covered by the vehicles; 2lDecision variables x_i,j,k Binary variable that equals 1 if agent k travels directly from vertex i to vertex j. We consider a network represented as an undirected graph G=(N,A), where N is the set of vertices, and A is the set of arcs. The arc set consists of two parts: those connecting any pair of POIs, as well as those connecting the starting/ending depots to the POIs: A = {(0, i)|i ∈ I}∪{(i, j)|i,j ∈ I; t_ij≠inf}∪{(j, n+1)|j ∈ I}. 
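Before writing down the full model, it may help to see the scoring function and its diminishing marginal gain on a toy instance. The snippet below is purely illustrative; the POI weights and the value of β are made up for the demonstration and are not data from the paper.

```python
# Score f(q) = q^beta and marginal gain eta_i = w_i * ((q+1)^beta - q^beta)
beta = 0.5
weights = {1: 3.0, 2: 1.0, 3: 2.0}       # hypothetical POI weights

def total_score(visits):
    """Objective sum_i w_i * q_i^beta for a dict {POI: number of visits}."""
    return sum(weights[i] * q ** beta for i, q in visits.items())

def marginal_gain(i, q):
    """Extra score from visiting POI i one more time, given q visits so far."""
    return weights[i] * ((q + 1) ** beta - q ** beta)

visits = {1: 0, 2: 0, 3: 0}
for _ in range(4):                        # four greedy "visits"
    i = max(visits, key=lambda j: marginal_gain(j, visits[j]))
    visits[i] += 1
print(visits, round(total_score(visits), 3))
# The first visits concentrate on the heaviest POI, later ones spread out:
# diminishing marginal gain prevents piling all visits on POI 1.
```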
The full ETOP model is formulated as: max_s={x_i,j,k: i,j∈ I, k∈ K} f(s)=∑_i ∈ Iw_iq_i^β ∑_i ∈ Ix_0,i,k=1 ∀ k ∈ K ∑_i ∈ Ix_i,n+1,k=1 ∀ k ∈ K ∑_(i,l) ∈ Ax_i,l,k = ∑_(l,j) ∈ Ax_l,j,k ∀ l ∈ I, ∀ k ∈ K ∑_(i,j) ∈ Ax_i,j,k = y_i, k ∀ i ∈ I, ∀ k ∈ K q_i=∑_k ∈ Ky_i,k ∀ i ∈ I a_i,k+t_ij-a_j,k≤ M(1 - x_i,j,k) ∀ i,j ∈ N, k ∈ K a_n+1≤Δ x_i,j,k∈{0,1} ∀ i,j ∈ N, ∀ k ∈ K y_i,k∈{0, 1} ∀ i ∈ I, ∀ k ∈ K a_i, k∈ℝ_+ ∀ i ∈ I, ∀ k ∈ K The objective function (<ref>) maximizes the weighted scores within time or distance budget Δ. Constraints (<ref>) and (<ref>) ensure that each agent departs from the starting depot and return to the ending depot. Constraint (<ref>) ensures flow conservation for each agent. Constraint (<ref>) couples variables x_i,j,k with indicator variable y_i,k. Constraint (<ref>) calculates the total number of times the POI i is visited by the agents. Constraint (<ref>) acts as subtour elimination constraint <cit.>. Constraint (<ref>) ensure that the time or distance budget is not exceeded. § ALNS APPLIED TO THE ETOP This section describes a solution framework based on adaptive large neighborhood search (ALNS) for the ETOP. As summarized in <cit.>, the ALNS has the following important parts: (1) initial solution s_0; (2) a set of destroy operators Ω^-={Ω_1^-, ..., Ω_m^-}; (3) a set of repair operators Ω^+={Ω_1^+, ..., Ω_n^+}; (4) adaptive mechanism; (5) acceptance criterion and (6) termination criterion. In each iteration of ALNS, a single destroy and a single repair operators are selected according to the adaptive mechanism. The destroy operator is first implemented to remove parts of a feasible solution s and the resulting solution is stored in s^'. The repair operator then re-inserts parts of elements to s^' so that s^' become a complete solution. Let f(s), f(s^'), f(s^*) be the objective of the current solution, the newly obtained solution, and the best-known solution, respectively. At each iteration, an acceptance criterion is used to determine whether the newly obtained solution s^' is accepted as the current solution s. When the termination criterion is met, the algorithm outputs the optimal solution s^*. The framework of ALNS is shown in Algorithm <ref>. The rest of this section instantiates each step of the ALNS for solving the ETOP. §.§ Initial Solution The generation of an initial solution is non-trivial in the ETOP. We begin with the following definition. (Marginal gain) Given a POI i ∈ I and the number of visits q_i already made by the agents, the marginal gain refers to the additional score obtained from one more visit, namely η_i=w_i((q_i+1)^β - (q_i)^β). We adopt a greedy constructive heuristic algorithm to generate the routes of each agent in sequence. We begin with the POI j ∈ I with the highest marginal gain as the starting location for the agent. The idea is to sequentially select the next POI i^* that maximizes the reward efficiency ψ_ji among all feasible POIs: i^* = max_iψ_ji= max_iη_i/t_ji In prose, the reward efficiency refers to the score collected per unit spending of time (or distance). The pseudo-code for the generation of initial solution is shown in Algorithm <ref>. §.§ Destroy Operators This section describes four removal heuristics: random removal, worst removal, related removal and route removal. All four heuristics take the current solution s as input. The output of the heuristic is a temporary solution s_temp^' after applying the destroy operator, which removes some points from the current solution s. 
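A compact sketch of this greedy construction is given below. It follows the marginal-gain and reward-efficiency definitions above, but it simplifies Algorithm 2 in two ways that should be kept in mind: each route is grown from the depot by reward efficiency rather than seeded at the highest-gain POI, and a POI is not revisited within the same route. It is an illustration of the logic, not a faithful reimplementation.

```python
import math

def greedy_routes(pois, w, t, n_routes, budget, beta=0.5, depot=0):
    """pois: POI ids; w[i]: weight; t[i][j]: travel time/distance; returns n_routes routes."""
    visits = {i: 0 for i in pois}
    gain = lambda i: w[i] * ((visits[i] + 1) ** beta - visits[i] ** beta)
    routes = []
    for _ in range(n_routes):
        route, used, cur, seen = [depot], 0.0, depot, set()
        while True:
            best, best_eff = None, -math.inf
            for i in pois:
                if i in seen:                        # no revisit within one route (simplification)
                    continue
                extra = t[cur][i]
                if used + extra + t[i][depot] > budget:
                    continue                         # must still be able to return to the depot
                eff = gain(i) / max(extra, 1e-9)     # reward efficiency psi = eta_i / t_{cur,i}
                if eff > best_eff:
                    best, best_eff = i, eff
            if best is None:
                break
            route.append(best)
            seen.add(best)
            used += t[cur][best]
            visits[best] += 1
            cur = best
        route.append(depot)
        routes.append(route)
    return routes

# usage sketch (hypothetical 3-POI instance, t[i][j] a symmetric travel-time matrix):
# routes = greedy_routes([1, 2, 3], {1: 3.0, 2: 1.0, 3: 2.0}, t, n_routes=2, budget=30.0)
```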
§.§.§ Random removal In the random removal heuristic, we randomly remove m% of the points (any decimals will be rounded to the nearest integer) from the route of each agent using a uniform probability distribution. §.§.§ Worst removal As proposed in <cit.>, the worst removal heuristic removes points with the aim of obtaining most savings. For the ETOP, we need to consider the trade-off between collecting scores and the time/distance required. Specifically, we aim to remove the POIs that have limited impact on the total score, while resulting in the most time/distance savings. Let 𝒱 be the set of visited POIs in the current solution s. Given a POI i∈𝒱, we use f(s) to represent the score of the current solution s, and T(s) to represent the total time/distance used by all agents in the current solution s. Next, we define the score lost by removing i from the current solution s as r(i,s)=f(s)-f_-i(s), and the time/distance saved as t(i,s)=T(s)-T_-i(s). We define the value of each POI in the current solution s as value(i,s): value(i,s) = r(i,s)/t(i,s) Building on such a notion, this operator iteratively removes POIs with relatively low values. Following <cit.>, we introduce random factors to avoid situations where the same points are removed over and over again. In addition, randomization is applied in a way that favors the selection of POIs with lower values. The pseudo-code for the worst removal heuristic is shown in Algorithm <ref>. §.§.§ Related removal The purpose of the related removal heuristic is to remove a set of points that, in some way, are closely related and hence easy to interchange among different routes during repairs <cit.>. For the ETOP, we remove some adjacent POIs, with the understanding that points closer to each other are more likely to be interchanged. Specifically, we first randomly select a POI o ∈ I as the center point, and then remove the m% points closest to o in the current solution s. §.§.§ Route removal This operator is commonly used in the vehicle routing problem <cit.>. For the ETOP, the operator randomly removes [n_K× m%] of the routes from the solution s, where n_K is the number of routes ([·] is the rounding operator). §.§ Repair Operators This section describes some insertion heuristics: greedy insertion and (k)-regret insertion. Insertion heuristics are typically divided into two categories: sequential and parallel. The difference between the two is that the former builds one route at a time while the latter constructs several routes simultaneously <cit.>. The insertion heuristics adopted in this paper are all parallel, which will be used to repair the temporary solution s_temp^' following the destroy operator. The output of the insertion heuristic is a new feasible solution s^'. §.§.§ Greedy insertion (Minimum-cost position) Given a POI i ∈ I and a route R_k, there are |R_k|+1 possible insertion positions for this POI. Let Δ t_ik(m) be the extra time/distance incurred by inserting i into the m-th position of R_k. We define the minimum-cost position to be m_ik^*=min_m={1,..., |R_k|+1}Δ t_ik(m) and the minimum cost as Δ t_ik(m_ik^*). Let Δ r_i, k be the increased score after inserting point i into the route of agent k at the minimum-cost position. 
If the POI i cannot insert to the route of agent k, we set Δ r_i, k = 0 Similar to the reward efficiency defined in (<ref>), the reward efficiency of inserting point i is defined as the extra score collected per unit time/distance required to accommodate such insertion: Ψ_i,k≐Δ r_i,kΔ t_ik(m_ik^*) Then, we define the maximum insertion-value of i to be v(i)≐max_k ∈ KΨ_i,k The greedy insertion operator iteratively selects and inserts a candidate POI i that has the maximum insertion-value v(i). This process continues until no more POIs can be inserted into any route. §.§.§ (k)-Regret insertion The regret heuristic tries to improve upon the greedy heuristic by incorporating look-ahead information when selecting the point for insertion <cit.>. In prose, the algorithm performs the insertion that will be most regretted if it is not done now. Recalling the notions from Section <ref>: Δ v_ik(m) = Δ r_i, k/Δ t_ik(m) ∀ i ∈ I, k ∈ K, m = {1, ..., |R_k|+1} We sort the list {Δ v_ik(m): i ∈ I, k ∈ K, 1≤ m ≤ |R_k|+1} in descending order to obtain {Δ v^(n)_ik(m): i ∈ I, k ∈ K, 1≤ m ≤ |R_k|+1}} where n indexes the ordered elements in the list. Then, the k-regret heuristic chooses to insert the POI that maximizes c_k^* in each step. c_k^*= ∑_j=1^k(Δ f_ik^j(m) - Δ f_ik^1(m)) where c_k^* is the k-regret value. This process continues until no POI can be inserted into any route. §.§ Adaptive Mechanism We defined, in Section <ref>, four destroy operators (random, worst, related, and route removal), and in Section <ref> a class of repair operators (greedy insertion, k-regret). This section explains a strategy to adaptively select the destroy and repair operators at each iteration of ALNS. We assign weights to different opertors and implement the classical roulette wheel mechanism <cit.>. If we have L operations with weights μ_l, l ∈{1,2,...,L}, we select operator l with probability p_l given as: p_l= μ_l/∑_l=1^Lμ_l Initially, we set the same weight for each operator. Then, we continuously update the weight by keeping track of a score for each operator, which measures how well the heuristic has been performing recently. Operators with higher scores have a higher probability of being selected during the iteration. The entire search is divided into a number of segments, indexed by h. A segment contains a few iterations of the ALNS, hereafter defined to be δ iterations. The score of all operators is set to zero at the start of each segment. The score of the operators are updated according to the following rules: (a) if the score of the new solution is better than the best-known solution, the operator score increases by σ_1; (b) if the score of the new solution is worse than the best-known solution, but better than the current solution, the operator score increases by σ_2; (c) if the score of the new solution is worse than the current solution, but is accepted, the operator score increases by σ_3; (d) if the new solution is not accepted, the operator score increases by σ_4. It is reasonable to set σ_1 > σ_2 > σ_3 > σ_4. At the end of each segment we calculate new weights using the recorded scores. Let μ_l,h be the weight of operator l used in segment h, and the probability p_l,h is calculated according to (<ref>). At the end of each segment h we update the weight for operator l to be used in segment h+1 as follows: μ_l,h+1= μ_l,h(1-ε) + επ_l,h/θ_l,h where π_l,h is the score of operator l obtained during the last segment and θ_l,h is the number of times operator l was used during the last segment. 
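The roulette-wheel selection and the segment-wise weight update can be written as a small bookkeeping class. The sketch below is schematic: the operator indices and the outcome scores σ_1, ..., σ_4 are assumed to be supplied by the surrounding ALNS loop, and it is not the authors' implementation.

```python
import random

class AdaptiveSelector:
    """Roulette-wheel operator selection with segment-wise weight updates."""

    def __init__(self, n_ops, eps=0.5):
        self.w = [1.0] * n_ops              # mu_l: start with equal weights
        self.eps = eps                      # reaction factor
        self.reset_segment()

    def reset_segment(self):
        self.score = [0.0] * len(self.w)    # pi_l accumulated during the current segment
        self.used = [0] * len(self.w)       # theta_l: times operator l was used in the segment

    def select(self):
        # p_l = mu_l / sum_l mu_l
        l = random.choices(range(len(self.w)), weights=self.w, k=1)[0]
        self.used[l] += 1
        return l

    def reward(self, l, sigma):
        self.score[l] += sigma              # sigma_1..sigma_4 depending on the outcome

    def end_segment(self):
        # mu_{l,h+1} = mu_{l,h} (1 - eps) + eps * pi_{l,h} / theta_{l,h}
        for l in range(len(self.w)):
            if self.used[l] > 0:
                self.w[l] = self.w[l] * (1 - self.eps) + self.eps * self.score[l] / self.used[l]
        self.reset_segment()
```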
The reaction factor ε controls how quickly the weight adjustment algorithm reacts to changes in the effectiveness of the operators. §.§ Acceptance Criterion A simulated annealing framework is used to decide whether to accept the newly obtained solution s^' given the current solution s. If the objective value f(s^') > f(s), then s^' is accepted as the current solution. Otherwise, the probability of s^' being accepted is P(s ← s^'). P(s ← s^')= exp{f(s^')-f(s)/T} where T >0 is the temperature, which begins with T_max and decreases at every iteration according to T = T · c, where c∈ (0,1) is the cooling rate. When T is below t_min, take reheating measures and set T to T_max. §.§ Termination Criterion The algorithm stops after: (1) a maximum number of iterations N_1^max; or (2) a certain number of non-improving iterations N_2^max. § A MATHEURISTIC METHOD FOR THE ETOP This section proposes a matheuristic method to solve the ETOP based on a bi-level structure. The master (upper) problem is a route selection model and the sub (lower) problem generates a new route. The matheuristic iteratively adds new routes that may improve the score to the candidate list of the master problem until the termination criterion is met. The pseudo-code of the matheuristic method is shown in Algorithm <ref>. §.§ The master problem The master problem is a route selection model. Let R be the set of candidate routes to be selected, and λ_kr be a binary variable taking the value 1 if agent k selects the route r. Let a_ir be a binary parameters taking the value 1 if the route r visits POI i. Then, q_i is an auxiliary variable representing the number of visits of POI i. max_{λ_kr: k∈ K, r∈ R} f=∑_i ∈ Iw_iq_i^β ∑_r ∈ R∑_k ∈ Kλ_kra_ir=q_i ∀ i ∈ I ∑_r ∈ Rλ_kr = 1 ∀ k ∈ K ∑_r ∈ R∑_k ∈ Kλ_kr≤ n_K λ_kr∈{0,1} ∀ k ∈ K, ∀ r ∈ R The objective function (<ref>) maximizes the weighted sum of scores. Constraints (<ref>) calculate the total number of visits of POI i by the agents. Constraints (<ref>) ensure that each agent must choose exactly one route. Constraint (<ref>) ensure that the number of selected routes cannot exceed the number of agents. The master problem is a knapsack problem, which can be solved directly by commercial solver. §.§ The sub-problem The sub problem aims to generate a new route r^* which may improve the score, and add it to the set R of candidate routes for the master problem. Let x_i,j be a binary variable taking the value 1 if the new route directly connects vertex i to vertex j, y_i be a binary variable taking the value 1 if POI i is visited by the new route. Let z_i be the order in which POI i is visited in the new route, and we set z_0=0 and z_n+1=n+1 (where 0 and n+1 are deemed the starting and ending depots). Finally, η_i denotes the marginal gain of POI i. max_x_i,j∑_i ∈ Iη_iy_i ∑_i ∈ Ix_0,i=1 ∑_i ∈ Ix_i,n+1=1 ∑_(i,l) ∈ Ax_i,l = ∑_(l,j) ∈ Ax_l,j ∀ l ∈ I ∑_(i,j) ∈ Ax_i,j = y_i ∀ i ∈ I ∑_(i,j) ∈ At_ijx_i,j≤Δ z_i-z_j+1 ≤ n(1-x_i, j) ∀ (i,j) ∈ A 1 ≤ z_i≤ n ∀ i ∈ I x_i,j∈{0,1} ∀ (i,j) ∈ A y_i∈{0, 1} ∀ i ∈ I The objective function (<ref>) maximizes the total marginal gain collected by the new route. Constraints (<ref>) and (<ref>) ensure that the route starts from node 0 and ends with node n+1 (i.e. the depots). Constraints (<ref>) expresses flow conservation. Constraints (<ref>) relates variables x_i,j to the indicator variable y_i. Constraint (<ref>) ensure that the time or distance budget is not exceed. Constraints (<ref>) and (<ref>) act as sub-tour elimination constraints <cit.>. 
We note that the sub problem is an orienteering problem, which is solved by the heuristic algorithm <cit.> in this paper. § DISCUSSION AND EXTENSION This section provides some discussion on the ETOP when modeling various real-world problems. §.§ Completeness of graphs The ETOP proposed in this paper takes as input the POI set I, their weights in the objective {w_i: i∈ I}, and the adjacency matrix. In practice, the adjacency matrix is replaced by the travel time (or distance) matrix {t_ij: i, j∈ I}, where t_ij denotes the travel time (or distance) from vertex i to j. * In application scenarios like UAV routing, the spatial domain of the problem is a subset of a Euclidean space, in which any two vertices are directly connected. In this case, any element of the adjacency matrix t_ij is a finite number. In other words, all the POIs can be seen as vertices in a complete graph. * In application scenarios like car routing, the spatial domain is a road network represented as a directed graph G(𝒱, 𝒜), where 𝒱 is the set of vertices and 𝒜 is the set of arcs, we set t_ij=∞ whenever the arc (i, j)∉𝒜. In other words, road networks can be seen as incomplete graphs. Not only is the completeness of graphs linked to different application domains (e.g. UAV routing or car routing), it also impacts the effectiveness of the ALNS as we later demonstrate in Sections <ref> and <ref>. §.§ Arcs as POIs In some applications based on road networks, the scores are collected on arcs instead of nodes. Examples include road surface sprinkling (aiming at reducing fugitive dust), or road roughness monitoring (for regular maintenance and repair). Such tasks need to be repeatedly performed, and the weights of these arcs are heterogeneous. These problems fall within the purview of ETOP because one can simply augment the original network with artificial nodes (with scores assigned) in the middle of relevant arcs. For example, if arc (i, j)∈𝒜 carries scores, we insert a node k to form two new arcs (i,k) and (k, j), and set t_ik=t_kj=1 2 t_ij. Then, the weight and score of arc (i,j) in the original network are transferred to node k in the augmented network. Such a simple technique converts arc-based ETOP to node-based ETOP, which can be solved with the proposed algorithms. § COMPUTATIONAL STUDIES All the computational performances reported below are based on a Microsoft Windows 10 platform with Intel Core i9 - 3.60GHz and 16 GB RAM, using Python 3.8 and Gurobi 9.1.2. §.§ Model and algorithm parameters In ALNS, the destroy operators are parameterized by m, which will be determined through sensitivity analysis in Section <ref>. The insertion heuristics in the repair operators are parameter free. The weight adjustment algorithm is parameterized by σ_1, σ_2, σ_3, σ_4 and learning rate ε. We set (σ_1, σ_2, σ_3, σ_4)=(20, 10, 3, 0); the parameter ε will be determined through the sensitivity analysis in Section <ref>. To manage the acceptance criterion we use three parameters, T_max, t_min and c. The start temperature T_max is determined based on the value of the initial solution f(s_0) and the formula is T_max = 0.05×f(s_0)/ln(2), which allows for a 50% probability that non-improving solutions generated at the initial solution temperature is accepted. In addition, we set t_min=0.1 and c=0.95. We also need to determine N_1^max and N_2^max that govern algorithm termination. We set N_1^max to a large number 2000, and mainly rely on N_2^max to terminate the algorithm. 
The value of parameter N_2^max will be determined through sensitivity analysis in Section <ref>. In the matheuristic method, only parameter N_0^max needs to be determined and the sensitivity analysis results are shown in Section <ref>. §.§ Sensitivity analysis of key parameters §.§.§ Test datasets (complete graphs) This section analyzes the impact of some key parameters or operators on the model performance, and subsequently chooses their appropriate values. For this purpose, we consider three test datasets with randomly generated locations and weights of POIs, as shown in Figure <ref>. Note that in this case, the travel times t_ij are directly calculated as the Euclidean distance between two points. In other words, those POIs depicted in Figure <ref> are vertices in complete graphs. §.§.§ ALNS parameters The sensitivity analysis of parameters m, ε and N_2^max is conducted with the following constants (n_K, Δ, β)=(4, 30, 0.5). We provide detailed explanation of these parameters and their test values in Table <ref>. We tune one parameter at a time by conducting 10 independent runs, while fixing the other parameters. The objective values and the CPU time are averaged and are shown in Figure <ref>. The results with regard to m are shown in the first column of <ref>. For all three test cases, it is evident that the CPU times increase with m. Regarding the objective value, Cases 1 & 2 show similar increasing trends while Case 3 shows a non-monotone profile due to the size and complexity of the problem. All cases considered, we set m=0.4 (40%) for subsequent calculations. The results with regard to ε are shown in the second column of Figure <ref>. In all three cases, the objective value displays concave trends with the optimal value of ε around 0.5. The trends of CPT times are less obvious. We set ε=0.5 in the following computational experiments. The results with regard to N_2^max are shown in the third column of Figure <ref>. It is seen that, in all three cases, the objective values and CPU times increase with N_2^max with only minor exceptions. We set N_2^max=300 in the following computational experiments. §.§.§ Matheuristic parameters The sensitivity analysis of the parameter N_0^max is conducted with the following constants (n_K, Δ, β)=(4, 30, 0.5). The values tested for the number of non-improving iterations N_0^max are {50, 100, 150, 200, 250}. The results averaged from 10 independent runs are shown in Figure <ref>. The CPU times all increase with the threshold value N_0^max, as expected. In addition, the effect on the objective value is limited past 100. Therefore, we set N_0^max=100 in the following computational experiments. §.§.§ Adaptive weights of the operators in ALNS The ALNS algorithm employed in this paper considers four removal operators (random removal, worst removal, related removal and route removal) and five repair operators (greedy insertion, 2-regret insertion, 3-regret insertion, 4-regret insertion, 5-regret insertion). Here, we consider Case 3 with the constants (n_K, Δ, β)=(4, 30, 0.5), and show in Figure <ref> the weights of these operators as they evolve over iterations of the ALNS. We find that: (1) All the plots show decreasing trends because of the simulated annealing acceptance criteria; (2) For the destroy operators, the random and worst removal operators are selected over related and route removal by the adaptive mechanism; (3) The greedy and 5-regret insertion operators show better performance than the others. 
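For reference, the temperature initialisation, acceptance test and cooling rule described above fit in a few lines. The helper below simply restates those formulas with the reported parameter values as defaults; it is a sketch, not the exact code used in the experiments.

```python
import math, random

def initial_temperature(f_s0, frac=0.05):
    # 50% chance of accepting a solution that is frac (5%) worse than the initial solution s_0
    return frac * f_s0 / math.log(2)

def accept(f_new, f_cur, T):
    """Simulated-annealing acceptance for a maximisation problem."""
    if f_new > f_cur:
        return True
    return random.random() < math.exp((f_new - f_cur) / T)

def cool(T, T_max, c=0.95, t_min=0.1):
    """Geometric cooling with reheating once T drops below t_min."""
    T *= c
    return T_max if T < t_min else T
```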
§.§ Algorithm performance In this section, we compare the solution optimality and CPU times of greedy algorithm (Algorithm <ref>), ALNS and matheuristic on the three datasets described in Section <ref>. The test results under different n_K (number of routes) are shown in Figure <ref> and Table <ref>. It is shown that: (1) The matheuristic and ALNS significantly outperform the greedy algorithm in solution optimality, with improvements ranging from 9.43% to 27.68%; (2) Such improvements are less pronounced for larger n_K (number of routes), because of (i) the diminishing marginal gain by design; (2) relatively high saturation of agent routes; (3) Although the ALNS slightly outperform the matheuristic, the latter is much more computationally efficient, by a factor of 10^2 for larger instances n_K≥ 12. cl | l | lll | lll Comparison of objective values and CPU times among greedy algorithm, matheuristic and ALNS. The `Increase' column represents improvements over the greedy algorithm; the numbers in the parentheses in the ALNS-CPU column are computed as ALNS-CPU/Matheuristic-CPU. 2-9 2*n_K 1c|Greedy 3c|Matheuristic 3cALNS 3-9 Obj. Obj. CPU(s) Increase Obj. CPU(s) Increase 4*Case 1 4 77.0 92.1 45 19.57% 93.5 777 (17) 21.40% 4*50 POIs 6 108.8 120.2 56 10.49% 120.2 848 (15) 10.49% 4*Δ=30 8 125.9 139.2 63 10.63% 139.4 2326 (37) 10.72% 10 142.5 156.8 55 10.01% 156.7 4867 (88) 9.92% 12 157.1 171.9 46 9.43% 172.0 6770 (147) 9.46% 14 170.0 185.9 43 9.39% 186.0 10925 (254) 9.47% 4*Case 2 4 112.0 135.0 308 20.54% 143.0 3504 (11) 27.68% 4*100 POIs 6 155.9 190.0 420 21.88% 194.1 7344 (17) 24.52% 4*Δ=30 8 199.1 226.5 363 13.78% 225.7 9809 (27) 13.40% 10 228.1 254.0 293 11.39% 255.7 18006 (61) 12.13% 12 251.0 281.4 259 12.12% 280.6 34408 (133) 11.81% 14 272.4 303.6 246 11.45% 303.8 40550 (165) 11.50% 4*Case 3 4 96.0 117.0 261 21.88% 121.0 11841 (45) 26.04% 4*200 POIs 6 136.5 168.2 290 23.26% 172.0 12510 (43) 26.02% 4*Δ=30 8 174.2 212.1 317 21.71% 220.2 31293 (99) 26.40% 10 216.5 251.1 497 15.98% 262.0 47984 (97) 21.00% 12 252.1 287.9 307 14.17% 304.8 118219 (385) 20.89% 14 292.6 323.0 477 10.39% 341.1 126867 (266) 16.57% Figure <ref> provides a visualization of the ETOP solution for Case 1, with four routes and different distance budget Δ. It can be seen that, even for a small-scale problem, the solution displays some complexity, especially for larger Δ where the routes have considerable overlap. Moreover, such overlap took place at high-value nodes, which reflects the effectiveness of our proposed algorithms. Such solutions cannot be obtained via conventional TOPs or their variants. §.§ Real-world case study (incomplete graphs) The ETOP is demonstrated in a real-world case of Volatile Organic Compounds (VOCs) monitoring in the Longquanyi District, Chengdu, China. VOCs include a wide variety of chemicals, some of which have adverse health effects and act as catalyst of processes that form PM_2.5 and O_3 <cit.>. VOCs are emitted from the manufacturing activities from 141 factories in Longquanyi, and a single VOCs sensing vehicle, operated by local environmental protection agency, needs to perform regular monitoring tasks of the area by moving along designated routes. The 141 factories are categorized into four class: A, B, C and D, where class A has the lowest sensing priority and D has the highest; see Figure <ref>(a). 
Based on the locations of these factories and their relative positions to the road network, their sensing priorities are transformed into the weights of the network nodes, which are treated as the POIs in our model; see Figure <ref>(b). We begin by comparing the performance of the matheuristic and the ALNS in Table <ref>. The computational efficiency of the former is significantly higher, which agrees with Table <ref>. Interestingly, unlike our previous finding in Figure <ref>, the ALNS is outperformed by the matheuristic in terms of solution optimality. The reason is that the destroy and repair operations used in the ALNS are constrained by the road network, leading to limited number of viable perturbations of a routing solution, which is different from the situation on a complete graph (Figure <ref>). Finally, to assess the validity of the solution in a mobile sensing context, we assume that the sensing vehicle needs to perform two routes per day, each within a distance budget of 20 km[These are in line with real-world operations in Longquanyi]. Therefore, 10 routes need to be generated for a week's (5 working days) monitoring task. This is formulated as an ETOP and solved by the ALNS and matheuristic, whose performance is compared in Figure <ref>(a). Figure <ref>(b) shows the 10 routes generated by the matheuristic. In this solution, 7 routes are located in the southern part of the area where the majority of the POIs are located. Moreover, many of the high-value POIs (with higher weights) are covered by multiple routes, which means those factories with higher sensing priority are more frequently visited, which is a desired feature of the routing plan. § CONCLUSION This paper proposes an extension of the team orienteering problem (TOP), named ETOP, by allowing a POI to be visited multiple times to accumulate scores. Such an extension is important because many real-world applications such as mobile sensing and disaster relief require that the POIs are continuously visited, and such needs are heterogeneous. To model such scenarios, we present the ETOP as a nonlinear integer program and propose two solution approaches: ALNS-based and matheuristic methods. The following findings are made from extensive numerical tests. * Both algorithms are effective in finding good-quality solutions, where high-value POIs are visited more frequently. This is not achievable through solution approaches for conventional TOPs. * On complete graphs (Figure <ref>), the ALNS outperforms matheuristic by a small margin, but the situation is reversed on incomplete graphs (e.g. real-world road networks), due to the limited effect of destroy and repair operators in generating new viable routes on the network. * The ALNS is much less computationally efficient than the matheuristic, because the destroy and repair operators are typically time-consuming. * The ETOP is transferrable to treat scenarios where the demands are concentrated on arcs instead of vertices/nodes. The resulting model can be applied to anti-dust (road sprinkler) operations or road surface condition monitoring. § ACKNOWLEDGEMENT This work is supported by the National Natural Science Foundation of China through grants 72071163 and 72101215, and the Natural Science Foundation of Sichuan Province through grants 2022NSFSC0474 and 2022NSFSC1906. 99 [Ali and Dyo, 2017]AD2017 Ali, J., Dyo, V., 2017. 
http://arxiv.org/abs/2307.01460v1
20230704034006
Graphs with girth 9 and without longer odd holes are 3-colorable
[ "Yan Wang", "Rong Wu" ]
math.CO
[ "math.CO" ]
Graphs with girth 9 and without longer odd holes are 3-colorable
Yan Wang, Rong Wu
========================================================================

For a number l≥ 2, let G_l denote the family of graphs which have girth 2l+1 and have no odd hole with length greater than 2l+1. Wu, Xu and Xu conjectured that every graph in ⋃_l≥ 2G_l is 3-colorable. Chudnovsky et al., Wu et al., and Chen showed that every graph in G_2, G_3 and ⋃_l≥ 5G_l is 3-colorable, respectively. In this paper, we prove that every graph in G_4 is 3-colorable. This confirms Wu, Xu and Xu's conjecture.

§ INTRODUCTION
All graphs considered in this paper are finite, simple, and undirected. Let G be a graph and let S be a subset of V(G). We use G[S] to denote the subgraph of G induced by S. For x∈ V(G) (we also write x∈ G if there is no confusion), we use N_S(x) to denote the neighbours of x in S. For subgraphs H and H' of G, we use H H' to denote the symmetric difference of H and H', that is, V(H H')=V(H)∪ V(H')\{V(H)∩ V(H')} and E(H H')=E(H)∪ E(H')\{E(H)∩ E(H')}. Let P be an (x, y)-path, that is, a path whose ends are x and y. We usually use xPy to denote P; if there is no confusion, we simply write it as P. Let P^* denote the set of internal vertices of P. Let C be a cycle and u, v be two vertices of C. We use C(u, v) to denote the subpath of C from u to v in clockwise order, and C^*(u, v) to denote the set of internal vertices of C(u, v). A graph G is k-colorable if there exists a mapping c: V(G)→{1, 2, ⋯, k} such that c(u)≠ c(v) whenever uv∈ E(G). The chromatic number χ(G) of G is the minimum integer k such that G is k-colorable. The clique number ω(G) of G is the maximum integer k such that G contains a complete graph of size k. For a graph G, if χ(G)=ω(G), then we call G a perfect graph. For a graph H, we say that G is H-free if G induces no H (i.e., G has no induced subgraph isomorphic to H). Let F be a family of graphs. We say that G is F-free if G induces no member of F. If there exists a function ϕ such that χ(G)≤ϕ(ω(G)) for each G∈ F, then we say that F is a χ-bounded class, and call ϕ a binding function of F. The concept of χ-boundedness was put forward by Gyárfás in 1975 <cit.>. Studying which families of graphs are χ-bounded, and finding the optimal binding function for a χ-bounded class, are important problems in this area. Since the clique number is a trivial lower bound on the chromatic number, if a family of χ-bounded graphs has a linear binding function, then the linear function must be an asymptotically optimal binding function of this family. For more recent information on χ-bounded problems, see <cit.>. A hole in a graph is an induced cycle of length at least 4. A hole is said to be odd (resp. even) if it has odd (resp. even) length. Addario-Berry, Chudnovsky, Havet, Reed and Seymour <cit.>, and Chudnovsky and Seymour <cit.>, proved that every even hole free graph has a vertex whose neighbours are the union of two cliques, which implies that χ(G)≤ 2ω(G)-1. However, the situation becomes much more complicated on odd hole free graphs. The Strong Perfect Graph Theorem <cit.> asserts that a graph is perfect if and only if it induces neither odd holes nor their complements.
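As a concrete illustration of the colorability notions defined above, consider the 9-cycle C_9: it has girth 9, its only hole is the odd hole C_9 itself, so it belongs to G_4, and χ(C_9)=3 while ω(C_9)=2. The short brute-force checker below is added here for exposition only (it is not part of the paper) and verifies k-colorability of small graphs by exhaustive enumeration.

from itertools import product

def is_k_colorable(n, edges, k):
    """Exhaustively test whether the graph on vertices 0..n-1 with the given
    edge list admits a proper k-coloring."""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

# C_9: girth 9 and no odd hole longer than 9, hence a member of G_4
c9_edges = [(i, (i + 1) % 9) for i in range(9)]
print(is_k_colorable(9, c9_edges, 2))   # False: an odd cycle is not 2-colorable
print(is_k_colorable(9, c9_edges, 3))   # True: chi(C_9) = 3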
Confirming a conjecture of Gyárfás <cit.>, Scott and Seymour <cit.> proved that odd hole free graphs are χ-bounded with binding function 2^2^ω(G)+2/48(ω(G)+2). Hoáng and McDiarmid <cit.> conjectured for an odd hole free graph G, χ(G)≤ 2^ω(G)-1. A graph is said to be short-holed if every hole of it has length 4. Sivaraman <cit.> conjectured that χ(G)≤ω^2(G) for all short-holed graphs whereas the best known upper-bound is χ(G)≤ 10^202^ω^2(G) due to Scott and Seymour <cit.>. The girth of a graph G, denoted by g(G), is the minimum length of a cycle in G. Let l≥ 2 be an integer. Let G_l denote the family of graphs that have girth 2l+1 and have no odd holes of length at least 2l+3. The graphs in G_2 are called pentagraphs, and the graphs in G_3 are called heptagraphs. Robertson <cit.> conjectured that the Petersen graph is the only non-bipartite pentagraph which is 3-connected and internally 4-connected. Plummer and Zha <cit.> presented some counterexamples to Robertson's conjecture, and conjectured that every pentagraph is 3-colorable. Xu, Yu and Zha <cit.> proved that every pentagraph is 4-colorable. Generalizing the result of <cit.>, Wu, Xu and Xu <cit.> proved that graphs in ⋃_l≥ 2 G_l are 4-colorable and proposed the following conjecture. <cit.> Graphs in ⋃_l≥ 2 G_l are 3-colorable. Recently, Chudnovsky and Seymour <cit.> confirmed that pentagraphs are 3-colorable. Wu, Xu and Xu <cit.> showed that heptagraphs are 3-colorable. More recently, Chen <cit.> proved that all graphs in ⋃_l≥ 5 G_l are 3-colorable. In this paper, we prove Conjecture <ref>. Graphs in G_4 are 3-colorable. § PRELIMINARY In this section, we collect some useful lemmas. The authors of <cit.> proved the following lemma for l=2, but their proof also works for l ≥ 2. <cit.> For any number l≥ 2, every 4-vertex-critical graph in G_l has neither K_2-cut or P_3-cut. <cit.> For any number k≥ 4, each k-vertex-critical graph has no 2-edge-cut. A theta graph is a graph that consists of a pair of distinct vertices joined by three internally disjoint paths. Let C be a hole of a graph G. A path P of G is a chordal path of C if C∪ P is an induced theta-subgraph of G. <cit.> Let l≥ 4 be an integer and C be an odd hole of a graph G∈ G_l. Let P be a chordal path of C, and P_1, P_2 be the internally disjoint paths of C that have the same ends as P. Assume that |P| and |P_1| have the same parity. Then In particular, when |P_1|≥ 2, all chordal paths of C with the same ends as P_1 have length |P_1|. <cit.> Let l ≥ 4 be an integer and x,y be the vertices of a graph G∈ G_l. Let X be a vertex cut of G with {x, y}⊆ X⊆ N[{x, y}], and G_1 be an induced subgraph of G whose vertex set consists of X and the vertex set of a component of G-X. If all induced (x, y)-paths in G_1 have length k with 4≤ k≤ l, then G has a degree-2 vertex or a K_2-cut. Let H=(u_1,u_2,u_3,u_4,P_1,P_2,Q_1,Q_2,L_1,L_2) be a K_4-subdivision such that u_1, u_2, u_3, u_4 are degree-3 vertices of H and P_1 is a (u_1, u_2)-path, P_2 is a (u_3, u_4)-path, Q_1 is a (u_2, u_3)-path, Q_2 is a (u_1, u_4)-path, L_1 is a (u_1, u_3)-path, and L_2 is a (u_2, u_4)-path (see Figure <ref>). We call P_1, P_2, Q_1, Q_2, L_1, L_2 arrises of H. Let C_1:=P_1∪ Q_1∪ L_1, C_2:=P_1∪ Q_2∪ L_2, C_3=P_2∪ Q_1∪ L_2 and C_4:=C_1 C_2 C_3 be four holes in H. We call that H is an odd K_4-subdivision if C_1, C_2, C_3 and C_4 are odd holes. If all arrises of an odd K_4-subdivision have the same length, then we call it a regular odd K_4-subdivision, otherwise an irregular odd K_4-subdivision. 
If C_1 and C_2 are odd holes, C_3 and C_4 are even holes, |Q_1|=1 and |L_2|≥ 2, then we call H a balanced K_4-subdivision of type (1, 2). <cit.> Let l ≥ 2 be an integer and H be a subgraph of a graph G∈ G_l. If H is isomorphic to an odd K_4-subdivision, then the following statements hold. (1) Each pair of vertex disjoint arrises have the same length and their lengths are at most l. (2) H is an induced subgraph of G. (3) When l≥ 3, no vertex in V(G)-V(H) has two neighbours in H. Chen in <cit.> proved the following lemmas holds for l≥ 5. In fact, they are also true for l=4. <cit.> Let l≥ 4 be an integer and G be a graph in G_l. If G is 4-vertex-critical, then G does not contain a balanced K_4-subdivision of type (1, 2). Let C be an odd hole of a graph G and s, t∈ V(C) nonadjacent. Let P be an induced (s, t)-path. If V(C)∩ V(P^*)=∅, we call P a jump or an (s, t)-jump over C. Let Q_1, Q_2 be the internally disjoint (s, t)-paths of C. If some vertex in V(Q^*_1) has a neighbour in V(P^*) and no vertex in V(Q^*_2) has a neighbour in V(P^*), we say that P is a local jump over C across Q^*_1. When there is no need to point out Q^*_1, we will also say that P is a local jump over C. In particular, when |V(Q^*_1)|=1, we say that P is a local jump over C across one vertex. When no vertex in V(Q^*_1∪ Q^*_2) has a neighbour in V(P^*), we say that P is a short jump over C. Hence, short jumps over C are chordal paths of C. But chordal paths over C maybe not short jumps over C as the ends of a chordal path maybe adjacent. When P is a short jump over C, if PQ_1 is an odd hole, we say that PQ_1 is a jump hole over C and P is a short jump over C across Q^*_1. <cit.> Let l≥ 4 be an integer and C be an odd hole of a graph G∈ G_l, s, t be two vertices of C. Let P be an (s, t)-jump, Q_1=C(s, t) and Q_2=C(t, s). If P is a local or short jump over C across Q^*_1, then |P|, |Q_2| have the same parity, and thus, PQ_2 is an even hole and PQ_1 is an odd cycle. <cit.> Let l≥ 4 be an integer and C be an odd hole of a graph G∈ G_l. If a jump P over C is not local, then G[V(C∪ P)] contains a short jump over C. <cit.> Let l≥ 4 be an integer and C be an odd hole of a graph G∈ G_l. If P is a local (v_1, v_2)-jump over C, then G[V(C∪ P)] contains a local jump over C across one vertex or a short jump over C. It is easy to derive the following from Lemmas <ref> and <ref>. Let l≥ 4 be an integer and C be an odd hole of a graph G∈ G_l. If there is a jump over C, then G contains either a short jump or a local jump over C across one vertex. Let C be an odd hole and P_i be an (u_i, v_i)-jump over C for each integer 1≤ i≤ 2. If u_1,v_1,u_2,v_2 are disjoint vertices and u_1,u_2,v_1,v_2 appear on C in this order, then we say that P_1, P_2 are crossing; otherwise, they are uncrossing. The following lemmas concern crossing jumps in G. <cit.> Let l≥ 4 be an integer and C be an odd hole of a graph G∈ G_l, and P_i be a short (u_i, v_i)-jump over C for each integer 1≤ i≤ 2. If P_1, P_2 are crossing, then G contains an odd K_4-subdivision or a balanced K_4-subdivision of type (1, 2). <cit.> Let l≥ 4 be an integer and C be an odd hole of a graph G∈ G_l. For any integer 1≤ i≤ 2, let P_i be a short or local (u_i, v_i)-jump over C such that {u_1, v_1}≠{u_2, v_2} and u_1, u_2, v_2, v_1 appear on C in clockwise order. Assume that P_1 is across C^*(v_1, u_1), P_2 is across C^*(u_2, v_2) and if P_i is local, then P_i is across one vertex for any integer 1≤ i≤ 2. Then the following hold. * When P_1, P_2 are short, G has an odd K_4-subdivision. 
* At least one of P_i (i∈ [2]) is local, then at most two vertices in V(C(v_1, u_1)∪ C(u_2, v_2)) are not in a jump hole over C. § PROOF OF THEOREM <REF> Let H_1, H_2 be vertex disjoint induced subgraphs of a graph G. An induced (v_1, v_2)-path P is a direct connection linking H_1 and H_2 if v_1 is the only vertex in V(P) having a neighbour in H_1 and v_2 is the only vertex in V(P) having a neighbour in H_2. Let l≥ 4 be an integer. For each graph G in G_l, suppose G is 4-vertex-critical, either G has no odd K_4-subdivision or G has an odd K_4-subdivision H = (u_1,u_2,u_3,u_4,P_1,P_2,Q_1,Q_2,L_1,L_2) such that every minimal direct connection (v_1, v_2)-path linking H\P^*_2 and P^*_2 must have N_H(v_1)=N_H\P^*_2(v_1)={u_3} or {u_4} and N_H(v_2)=N_P^*_2(v_2)=N_P_2^*(N_H(v_1)). Suppose G has an odd K_4-subdivision denoted by H. By Lemma <ref> (2), H is an induced subgraph of G. Let H=(u_1,u_2,u_3,u_4,P_1,P_2,Q_1,Q_2,L_1,L_2) (see Figure <ref>) and C_1:=P_1∪ Q_1∪ L_1, C_2:=P_1∪ Q_2∪ L_2, C_3=P_2∪ Q_1∪ L_2 and C_4:=C_1 C_2 C_3. Since H is an odd K_4-subdivision, C_1, C_2, C_3 and C_4 are odd holes. By Lemma <ref> (1), 3.1 |P_1|=|P_2|≤ l, |Q_1|=|Q_2|≤ l, |L_1|=|L_2|≤ l. Without loss of generality we may assume that P_1, P_2 are longest arrises in H. Let e, f be the edges of P_2 incident with u_3, u_4, respectively. Since G is 4-vertex-critical, {e, f} is not an edge-cut of G by Lemma <ref>. So there exists a minimal path P linking P^*_2 and H \ V(P^*_2) such that P is a (v_1, v_2)-path, N_H \P^*_2(v_1)≠∅ and N_P^*_2(v_2)≠∅. Let P be such a path. By Lemma <ref> (3), let N_H \P^*_2(v_1)={x} and N_P^*_2(v_2)={y}. Let P'=xv_1Pv_2y. Note that H∪ P' is induced by minimality of P. Also note that any minimal path P satisfies the following claims. Claim 3.1.1 x∉{u_1, u_2}. Suppose it is false. By symmetry we may assume that x=u_1. Set C'_4=u_3L_1u_1P'yP_2u_3. Since C_4 is an odd hole, by symmetry we may assume that C'_4 is an even hole and C_4 C'_4 is an odd hole. Since u_1P'yP_2u_3 is a chordal path of C_1, by (<ref>) and Lemma <ref>, we have |L_1|=1. So |P_1|=|Q_1|=l by (<ref>) again. Since C_4 C'_4 is an odd hole, |u_1P'yP_2 u_4|=l+1 which implies |P'|≤ l. Moreover, since |P_2|=l and |L_1|=1, we have |C'_4|≤ 2l, a contradiction as g(G) = 2l+1. So x≠ u_1. We say that |P_1|-min{|Q_1|, |L_1|} is the difference of H. Without loss of generality we may assume that among all odd K_4-subdivisions, H is chosen with smallest difference. Claim 3.1.2 x∉ V(P^*_1). Suppose it is false. Without loss of generality, we may assume that |L_1|≥ |Q_1|. Set C'_2=u_4Q_2u_1P_1xP'y P_2u_4. Since C_4 is an odd hole, either C'_2 or C_4 C'_2 is an odd hole. When C_4 C'_2 is an odd hole, since C_1∪ C_3∪ P' is an odd K_4-subdivision, by Lemma <ref> (1), |P'|=|Q_1|, |u_1P_1x|=|u_4P_2y|, and |u_2P_1x|=|u_3P_2y|. Since |P'|=|Q_1|=|Q_2|, we have C'_2 is an even hole of length 2(|Q_2|+|u_1P_1x|), implying |L_1|+|u_1P_1x|≥ |Q_2|+|u_1P_1x|≥ l+1. Then C_4 C'_2 C_1 is an even cycle of length at most 2l, which is a contradiction. So C'_2 is an odd hole. Since C_2∪ C'_2∪ C_3 is an odd K_4-subdivision, it follows from Lemma <ref> (1) that 3.2 |P'|=|L_2|, |u_1P_1x|=|u_3P_2y|, |u_2P_1x|=|u_4P_2y|. Since C_2 C'_2 C_1 is an odd cycle of length larger than 2l+1, it is not an odd hole, so 3.3 1∈{|Q_2|, |u_2P_1x|, |u_3P_2y|} When |u_2P_1x|=1, by (<ref>), (<ref>) and Lemma <ref>, we have |L_1|=|P'|=l. Moreover, as l ≥ |P_1|≥ |L_1| = l, we have |P_1| = l. So the difference of H is l-1. 
Since |u_2P_1x|=|u_4P_2y|=|Q_1|=1, the graph G[V(C_1∪ C'_2∪ P_2)] is an odd K_4-subdivision with difference l-2, which is a contradiction to the choice of H. So |u_2P_1x|≥ 2. Assume that |Q_2|=1. Then |L_1|=|P_1|=l by (<ref>). Since G[V(C'_2∪ C_2∪ C_3)] is an odd K_4-subdivision whose difference is at most l-2 as |u_2P_1x|≥ 2, which is contradiction to the choice of H. So |Q_2|≥ 2. Then yu_3∈ E(H) by (<ref>), implying xu_1∈ E(H) by (<ref>). Then |C_4 C'_2|=2+2|L_1| by (<ref>) and (<ref>), so |L_1|=l by (<ref>) again. Since |P_1|≥ |L_1|, we have |P_1|=l and |Q_1|=1 by (<ref>), which is a contradiction as |Q_2|≥ 2. Claim 3.1.3 If x∈{u_3, u_4}, then xy∈{e, f}. By symmetry we may assume that x=u_3. Assume to the contrary that x, y are non-adjacent. Set C'_3=u_3P'yP_2u_3. Since P' is a chordal path of C_3, we have that C'_3 is an odd hole by Lemma <ref> and (<ref>). Since C'_3 C_3 is an even hole, C'_3 C_3 C_2 is an odd cycle and |Q_1|=|L_2|=1 by (<ref>) and Lemma <ref> again. Then |P_1|=2l-1>l by (<ref>), which is a contradiction. So e=xy. Claim 3.1.4 If x∈ V(L^*_1), then |Q_1|=1, |P_1|=|L_1|=l, |P'|=2l-1 and xu_3, yu_3∈ E(H). Set C'_4=xL_1u_1Q_2u_4P_2yP'x. Assume that C_4 C'_4 is an even hole. Since x≠ u_3, xP'yP_2u_3 is a chordal path of C_1. Hence, xu_3∈ E(H) by (<ref>) and Lemma <ref>. Since x≠ u_1, the path u_3xP'y is a chordal path of C_3. If yu_3∉ E(H), then |u_3xP'y|=|u_3P_2y|≥ l+1 by (<ref>) and Lemma <ref>, which is a contradiction to the fact that |P_2|≤ l. If yu_3∈ E(H), then C'_4 is an odd hole of length at least 2l+3, which is not possible. So C_4 C'_4 is an odd hole, implying that C'_4 is an even hole. Since x≠ u_3, the graph u_1L_1xP'yP_2u_4 is a chordal path of C_2. Moreover, since C'_4 is an even hole, |Q_2|=1 by (<ref>) and Lemma <ref>. Hence, |P_1|=|L_1|=l by (<ref>) again. If y, u_3 are non-adjacent, xP'yP_2u_4Q_2 is a chordal path of C_1, so xu_1∈ E(H), implying that u_4Q_2u_1L_1xP'y is a chordal path of C_3. Then yu_4∈ E(H). Since C_4 and C_4 C'_4 are odd holes, |P'|=3, so |C'_4|=6, which is not possible. So yu_3∈ E(H). Since C_4 C'_4 is an odd hole, |P'|≥ l+1 by (<ref>). When x, u_3 are non-adjacent, since C'_4 is an even hole, C'_4 C_3 is an odd hole of length at least 2l+3, which is not possible. So xu_3∈ E(H), implying |P'|=2l-1 as C_4 C'_4 is an odd hole. Hence Claim 3.1.4 holds. Claim 3.1.5 Assume that P' has the structure as stated in Claim 3.1.4. Then no vertex in V(G) \ V(H∪ P') has two neighbours in H∪ P'. Assume that a vertex x”∈ V(G) \ V(H∪ P') has two neighbours x_1, x_2 in H∪ P'. Since no vertex has two neighbours in an odd hole, it follows from Lemma <ref> (3) that x” has exactly two neighbours in H∪ P' with x_1∈ V(H) \{x, y, u_3} and x_2∈ V(P). If x_1∈ V(H)\{P^*_2∪{x, u_3}}, then x_2=v_1 by Claims 3.1.1-3.1.4. Now, G[H\{P^*_2}∪{x_2, x”}] induces a hole with length ≤ 2l, a contradiction. If x_1∈P^*_2\{y}, then by Claims 3.1.1-3.1.4, x_2=v_2. Now, C_3∪{x_2, x”} induces a hole with length ≤ 2l, a contradiction. By Claims 3.1.1-3.1.3, it suffices to show that x∉ V(L^*_1∪ L^*_2∪ Q^*_1∪ Q^*_2). Suppose that it is false. By symmetry we may assume that x∈ V(L^*_1). By Claim 3.1.4, we have that xu_3∈ E(L_1), e=yu_3, |P'|=2l-1, |P_1|=|L_1|=l, |Q_1|=1. Since no 4-vertex-critical graph has a P_3-cut by Lemma <ref>, it suffices to show that {x, y, u_3} is a P_3-cut of G. Assume not. Let R be a shortest induced path in G-{x, y, u_3} linking P and H \{x, y, u_3}. By 3.1.5, |R|≥ 3 and no vertex in V(H∪ P') \{x, y, u_3} has a neighbour in R^*. 
Let s and t be the ends of R with s∈ V(P). We claim that t∉ V(L_1∪ P_2) \{x, y, u_3}. Assume to the contrary that t∈ V(L_1) \{x, u_3} by symmetry. Let R_1 be an induced (y, t)-path in G[V(P'∪ R) \{x}]. When u_3 has no neighbour in R^*_1, let R_2:=R_1 and C:=yR_2tL_1u_3y. When u_3 has a neighbour in R^*_1, let t'∈ V(R^*_1) be a neighbour of u_3 closest to t, R_2:=u_3t'R_1t and C:=u_3R_2tL_1u_3. Note that C_4 C is a hole, but C may not be a hole. Since C C_1 C_2 is an odd hole with length at least 2l+3 (since |tL_1u_3|≤ l-1) when C is an odd cycle, it suffices to show that |C| is odd. When x has a neighbour in R^*_2, since |L_1|=l, |R_2|≥ 2l. Now |C_4 C|≥ 3l, the subgraph C_4 C is an even hole, which implies that C is an odd cycle. So we may assume that x has no neighbour in R^*_2. When u_3 is an end of R_2, since R_2 is a chordal path of C_1, it follows from Lemma <ref> and (<ref>) that C is an odd hole. When y is an end of R_2, since neither x nor u_3 has a neighbour in R^*_2, we have s=v_1, by Claims 3.1.1-3.1.4, so |R_2|>2l since |yP'v_1|=2l-2 and |R|≥ 3. Then C_4 C is an even hole, so C is an odd cycle. Hence, the claim holds. Then t∈ V(P_1∪ L_2) \{u_1}, by symmetry we may therefore assume that t∈ V(P_1) \{u_1}. Let R_1 be the induced (y, t)-path in G[V(P'∪ R) \{x}]. By Claims 3.1.1-3.1.2, either s=v_1 and y has no neighbour in R or some vertex in {x, u_3} has a neighbour in R^*_1. No matter which case happens, we have |R_1|≥ 2l (if the first case happens, due to |yP'v_1|=2l-2 and |R|≥ 3, |R_1|≥ 2l; else some vertex in {x, u_3} has a neighbour in R^*_1, and then by g(G)=2l+1, |R_1|≥ 2l). That is, tR_1yP_2u_4 is a chordal path of C_2 with length at least 3l-1, which is a contradiction to Lemma <ref> as t, u_4 are non-adjacent. Hence, {x, y, u_3} is a P_3-cut of G. Let l≥ 4 be an integer. For each graph G in G_l, if G is 4-vertex-critical, then G has no odd K_4-subdivisions. Suppose not. Let H be a subgraph of G that is isomorphic to an odd K_4-subdivision. By Lemma <ref> (2), H is an induced subgraph of G. Let H=(u_1,u_2,u_3,u_4,P_1,P_2,Q_1,Q_2,L_1,L_2) (see Figure <ref>) and C_1:=P_1∪ Q_1∪ L_1, C_2:=P_1∪ Q_2∪ L_2, C_3=P_2∪ Q_1∪ L_2 and C_4:=C_1 C_2 C_3. Since H is an odd K_4-subdivision, C_1, C_2, C_3 and C_4 are odd holes. By Lemma <ref> (1), 3.4 |P_1|=|P_2|≤ l, |Q_1|=|Q_2|≤ l, |L_1|=|L_2|≤ l. Without loss of generality we may assume that P_1, P_2 are longest arrises in H. By Lemma <ref> and Lemma <ref> (3), any minimal path P:=(v_1, v_2)-path linking P^*_2 and H \ V(P^*_2) must have N_H \P^*_2(v_1)={x}, N_P^*_2(v_2)={y}, where x∈{u_3, u_4} and xy∈{e, f}. Let P'=xv_1Pv_2y, so H∪ P' is induced by minimality of P. Claim. 3.2.1 |P_1|=|P_2|=3. Suppose not. Since P_1, P_2 are longest arrises in H, |P_1|=|P_2|≥ 4. By Lemma <ref>, there is a minimal vertex cut X of G with {u_3, u_4}⊆ X⊆ N_G[{u_3, u_4}] and {u_3, u_4}=X∩ V(H). Let G_1 be the induced subgraph of G whose vertex set consists of X and the vertex set of the component of G-X containing P^*_2. If all induced (u_3, u_4)-paths in G_1 have length |P_2|, by Lemma <ref>, G has a degree-2 vertex or a K_2-cut, which is not possible. Hence, it suffices to show that all induced (u_3, u_4)-paths in G_1 have length |P_2|. Let Q be an arbitrary induced (u_3, u_4)-paths in G_1. When |L_1|≥ 2, since QQ_2 is a chordal path of C_1 by Lemma <ref> (3) and the definition of G_1, we have |QQ_2|=|Q_1P_1| by Lemma <ref>, so |Q|=|P_1| by (<ref>). Hence, by (<ref>) we may assume that |L_1|=1 and |Q_1|=|P_1|=l. 
Since Q_1L_2 is an induced (u_3, u_4)-path of length l+1, either |Q|=|P_2|=l or |Q|≥ l+1 and |Q| has the same parity as l+1. Assume that the latter case happens, then G[L_1∪ P_1∪ L_2∪ Q] is an odd hole of length at least 2l+3, which is not possible. By Claim 3.2.1, G∈ G_4, so g(G)=9. Thus H is a regular odd K_4-subdivision and each arris of H has length 3. And by Lemma <ref>, without loss of generality, we may assume x=u_3, then xy=e. Since 4-vertex-critical graph G has no P_3-cut (in particular {x,y,v_1} is not a P_3-cut), there exists a direct connection induced (w,t)-path R' linking P\{v_1} and H\{x, y} in G-{x, y, v_1} with N_H\{x, y}(w) ∅. It is clear that |N_H(w)|=|N_H\{x, y}(w)|=1. Let R be the path induced by R'∪ (P\{v_1})∪{y}∪ N_H(w), so one end of R is y and the other is in N_H(w). Let the neighbour of y in P^*_2 is y'. Note that u_3 has a neighbour in R by Lemma <ref> as u_3 ∉N_H(w). We claim that wy'∉ E(G). Otherwise, there exists a path Q induced by P'∪ R\{y} linking u_3 and y'. Set C'=u_3L_1u_1P_1u_2L_2u_4y'Qu_3, C”=u_3L_1u_1Q_2u_4y'Qu_3. Then |C'|=10+|Q| and |C”|=7+|Q| imply |Q|=2, which contradicts Lemma <ref> (3). We claim that wu_1, wu_2∉ E(G). Otherwise, we may assume wu_1∈ E(G). Set C'=u_1Q_2u_4y'yRu_1, C”=u_1P_1u_2L_2u_4yy'Ru_1. Then |C'|=5+|R| and |C”|=8+|R| imply |R|=4. Now, G[{u_3, y}∪ R] contains a cycle with length less than 9, a contradiction. We claim that N_ P^*_1(w)=∅. Otherwise, let u'_1 be the neighbour of u_1 in P_1. Without loss of generality, we may assume that wu'_1∈ E(G). Set C'=u'_1u_1Q_2u_4y'yRu'_1, C”=u'_1P_1u_2L_2u_4y'yRu'_1. Then |C'|=6+|R| and |C”|=7+|R|, implying that |R|=3. Now, G[{u_3, y}∪ R] contains a cycle with length less than 9, a contradiction. We claim that wu_4∉ E(G). Otherwise, let u'_3 be the neighbour of u_3 in R closest to w. Set C'=u'_3Ru_4Q_2u_1L_1u_3u'_3, C”=u'_3Ru_4Q_2u_1P_1u_2Q_1u_3u'_3. Then |C'|=7+|u'_3Ru_4| and |C”|=10+|u'_3Ru_4| imply that |u'_3Ru_4|=2. Since yu'_3∉ E(G), now, yu_3u'_3Ru_4y'y is a 6-hole, a contradiction. We claim that N_Q^*_1∪ L^*_1(w)=∅. Otherwise, Let N_H(w)={z} and without loss of generality we may assume that z∈ Q^*_1. Let h := |zQ_1u_2|. Note that h∈{1,2}. Set C'=zQ_1u_2L_2u_4y'yRz, C”=zQ_1u_2P_1u_1Q_2u_4y'yRz. Then |C'|=h+5+|R| and |C”|=h+8+|R|. So |R| = 4 - h ≤ 3. Now, G[{u_3, y}∪ R] contains a cycle with length less than 9, a contradiction. Hence, we have N_Q^*_2∪ L^*_2(w) ∅. Let N_H(w)={z} and without loss of generality, we may assume that z∈ Q^*_2. If zu_1∈ E(G), set C'=zu_1P_1u_2L_2u_4y'yRz, C”=zQ_2u_4y'yRz. Then |C'|=9+|R| and |C”|=4+|R|, implying |R|=5. Now, G[{u_3, y}∪ R] contains a cycle with length less than 9, a contradiction. So zu_4∈ E(G), let u'_3 be the neighbour of u_3 in R closest to w. Then u'_3y∉ E(G). Set C'=u'_3Rzu_4y'yu_3u'_3 and C”=u'_3RzQ_2u_1L_1u_3u'_3. Then |C'|=|u'_3Rz|+5 and |C”|=|u'_3Rz|+6, implying |u'_3Rz|=4. Now, u'_3RzQ_2u_1P_1u_2Q_1u_3u'_3 is a 13-hole, a contradiction. Let G∈ G_4. Assume G has no 2-edge-cut or K_2-cut. Then one of the following holds. * G has an odd K_4-subdivision. * G contains a balanced K_4-subdivision of type (1, 2). * G has a P_3-cut. * G has a degree-2 vertex. Assume that neither 1) nor 2) is true. Set C=v_1v_2⋯ v_9v_1. Since G has no 2-edge cut or K_2-cut, there exists a jump over C. By Corollary <ref>, C either has a short jump or a local jump over C across one vertex. If there exists a short jump over C, let P be a short jump over C with |P| as small as possible. 
By symmetry we may assume that the ends of P are v_2, v_k with k≤ 6 and P is across v_2v_3 … v_k (as g(G) = 9). By Lemma <ref>, it is clear that any two short jumps are not crossing. Then, if there exists a second short jump P^*, then either one of the ends of P^* is in {v_2, v_k} and the other end is in {v_k+1, ⋯, v_9}, or both ends of P^* are in {v_k+1, ⋯, v_9}, then either |P^*|<|P| or by Lemma <ref> 1), G has an odd K_4-subdivision, a contradiction. So all short jumps over C have ends in {v_2, v_3, ⋯, v_k} and are across a subpath of C(v_2,v_k). Since |P| is minimum, we have additionally that 3.5 By Lemma <ref> 2), as long as there exist two local jumps over C across one vertex that are uncrossing, then there exists a jump hole, and thus a short jump. Otherwise, all local jumps over C across one vertex are crossing or there exists only one local jump over C across one vertex. For each integer 1≤ i≤ 2, let P_i be a local (s_i, t_i)-jump over C across one vertex and Q_i be the (s_i, t_i)-path on C of length 2. When P_1 and P_2 are uncrossing, by (<ref>) and Lemma <ref> 2), |V(Q_1∪ Q_2)\{v_2, v_3, ⋯, v_k}|≤ 2, then at least one vertex in {s_i, t_i} for each i ∈{1,2} is in {v_2, v_3, ⋯, v_k}. When P_1 and P_2 are crossing, |V(Q_1∪ Q_2)|=4. If there is only one local jump over C across one vertex denoted by P_1, then V(Q_1)=3 where P_1 across Q_1 over C. We relabel the indices of v_1, …, v_9 for convenience. If there is no short jump, we may assume the ends of all local jump across one vertex are in {v_1, v_2, ⋯, v_k+1}. Now assume P exists. If there exists a local jump across v_1, then we relabel v_i to be v_i+1 for each i∈ [9] (we write v_9 instead of v_0 for convenience), see Figure <ref> (2) for illustration. And if there exists a local jump across v_k+1, then we relabel v_i to be v_i-1 for each i∈ [9] (we write v_9 instead of v_0 for convenience), see Figure <ref> (3) for illustration. For the rest of cases, we do not relabel. We would like to point out that if there exist two local jump with one across v_2 and the other across v_k, then we do not relabel, see Figure <ref> (1) for illustration. In all the cases, after possible relabeling, we may assume that 3.6 For any integer 1≤ i≤ k+1, let X_i be the set of vertices adjacent to v_i that are in a local jump over C across one vertex with one end v_i or a short jump over C with one end v_i. Set X=X_1∪ X_2∪⋯∪ X_k+1. Since no vertex in V(G)\ V(C) has two neighbours in V(C), v_8 has no neighbours in X. Assume that v_8 has degree at least 3, for otherwise 4) holds. There is a connected induced subgraph D such that v_8 has a neighbour in V(D) and V(D)∩ (V(C)∪ X)=∅, and we choose D to be maximal with these properties. Let N={w|w∈ V(C)∪ X, N_D(w)≠∅}. Evidently, v_8∈ N. It suffices to show that N⊆{v_7, v_8, v_9}. Because otherwise {v_7, v_8, v_9} is a P_3-cut, and thus 3) holds. This will conclude the proof. Suppose that N ⊈{v_7, v_8, v_9}. For 1≤ i≤ k+1≤ 7, let W_i=X_i∪{v_i}. First assume that N∩ W_i≠∅ for some i∈ [6]. Let Q be a shortest (v_8, v_i)-path with interior in V(D)∪ (W_i\{v_i}). Exactly one of {v_7, v_9} has a neighbour in Q^*, otherwise either there is a (v_7, v_9)-short jump or Q is a short jump, which contradicts to (<ref>). We claim that Q is not a (v_8, v_1)-path. Suppose not. If N_Q^*(v_7)=∅, then N_Q^*(v_9)≠∅ and Q is a local jump across v_9, which contradicts to (<ref>). Otherwise, there is a (v_1, v_7)-short jump, a contradiction to (<ref>). So it is clear that Q is a (v_8, v_i)-path for i∈{2,3,⋯,6}. 
If N_Q^*(v_7)=∅ and N_Q^*(v_9)≠∅, then there is a (v_i, v_9)-short jump, which contradicts (<ref>). Thus we have 3.7 We claim that Q is not a (v_8, v_2)-path. Otherwise, by (<ref>), there is a (v_2, v_7)-short jump, which contradicts (<ref>). Claim. Q is not a (v_8, v_i)-path for i∈{3, 4, 5}. By (<ref>), there is a (v_i, v_7)-short jump, which implies that C is relabelled. Then there is a local jump across only v_2, say R. So |R| is odd with length ≥ 7 by Lemma <ref>. Let Q':=(v_i,v_7)-short jump, where Q'\{v_7}⊂ Q, then |Q'|=9-(7-i)=2+i. Let N_R(v_3)={x}, N_Q'(v_i)={y}. If there exists a vertex in V(R^*)∩ V(Q'^*) ∖{x,y} or the end of the edge between R^* and Q'^* is not x or y, then there is a (v_7, v_1)-short jump or (v_7, v_2)-short jump, which is a contradiction as |P| is minimum. Then we have (i) either V(R^*)∩ V(Q'^*)=∅ and R^* is anticomplete to Q'^* (ii) or N_Q'^*(x)≠∅ (iii) or N_R^*(y)≠∅ (iv) or V(R^*)∩ V(Q'^*)∩{x, y}≠∅ The first case implies an odd hole v_1Rv_3Cv_iQ'v_7v_8v_9v_1 of length ≥ 11, a contradiction. The second case is impossible since |Q| is shortest and g(G)=9. If the third case happens, let z∈ N_R(y) be nearest to v_1. When i=3, since v_1Rzyv_3 is a jump across v_2, |v_1Rz| is odd and has size at least 7. Then v_7Q'yzRv_1v_9v_8v_7 is an odd hole with length 8+|v_1Rz| ≥ 15, a contradiction. When i={4, 5}, then v_1Rzyv_i is either a short jump (which is a contradiction) or a local jump has the same parity as Q'. Now, v_1v_9v_8v_7Q'yRzv_1 is an odd hole with length ≥ 11, a contradiction. So V(R^*)∩ V(Q'^*)∩{x, y}≠∅. Thus either x ∈ V(R^*)∩ V(Q'^*) or y ∈ V(R^*)∩ V(Q'^*). In both cases, one can see that x=y and i=3. Similarly, there is an odd hole with length 8+|v_1Rz| ≥ 15, a contradiction. First assume that V(R^*)∩ V(Q'^*)=∅. If R^* is anticomplete to Q'^*, then v_1Rv_3Cv_iQ'v_7v_8v_9v_1 is an odd hole and with length ≥ 11, a contradiction. So there exist an edge between R^* and Q'^*. If there exists an edge between R^* ∖{x} and Q'^*∖{y}, then there is a (v_1, v_7)-short jump or (v_2, v_7)-short jump, a contradiction. So the ends of every edge between R^* and Q'^* must intersect {x,y}. Since |Q| is shortest and g(G)=9, x has no neighbours in Q'^*. Thus y has an neighbour in R^*. Let z∈ N_R(y) be nearest to v_1. When i=3, since v_1Rzyv_3 is a jump across v_2, |v_1Rz| is odd and has size at least 7. Then v_7Q'yzRv_1v_9v_8v_7 is an odd hole with length 8+|v_1Rz| ≥ 15, a contradiction. When i={4, 5}, then v_1Rzyv_i is either a short jump (which is a contradiction) or a local jump has the same parity as Q'. Now, v_1v_9v_8v_7Q'yRzv_1 is an odd hole with length ≥ 11, a contradiction. So V(R^*)∩ V(Q'^*)≠∅. If there exists a vertex in V(R^*)∩ V(Q'^*) ∖{x,y} or the end of the edge between R^* and Q'^* is not x or y, then there is a (v_7, v_1)-short jump or (v_7, v_2)-short jump, which is a contradiction as |P| is minimum. Thus either x ∈ V(R^*)∩ V(Q'^*) or y ∈ V(R^*)∩ V(Q'^*). In both cases, one can see that x=y and i=3. Since v_1Rzyv_3 is a jump across v_2, |v_1Rz| is odd and has size at least 7. Then v_7Q'yzRv_1v_9v_8v_7 is an odd hole with length 8+|v_1Rz| ≥ 15, a contradiction. We claim that Q is not a (v_8, v_6)-path. Otherwise, by (<ref>), there is a (v_8, v_6)-local jump across v_7, which contradicts to (<ref>). Therefore, such Q does not exist, so there is no path from v_8 to {v_1, v_2,⋯, v_6} with interior in V(D)∪ (W_i\{v_i}). If X_7∩ N≠∅, then there exists x_7∈ X_7∩ N. 
Since x_7 is in a local jump or a short jump, there is a path from v_8 to {v_1, v_2,⋯, v_6}, a contradiction. So X_7∩ N=∅. This completes the proof. Now we are ready to prove Theorem <ref>. (Theorem <ref>) Suppose Theorem <ref> is not true. Let G∈ G_4 be a minimal counterexample to Theorem <ref>. Then G is 4-vertex-critical. So G has no degree-2 vertex or 2-edge cut or K_2-cut. By Theorem <ref> and Lemma <ref>, G contains neither odd K_4-subdivision nor balanced K_4-subdivision of type (1, 2). Hence, G has a P_3-cut by Theorem <ref>, which is a contradiction to Lemma <ref>. 9999 achrs L. Addario-Berry, M. Chudnovsky, F. Havet, B.Reed and P. Seymour, Bisimplicial vertices in even-hole-free graphs, Journal of Combinatorial Theory, Series B, 98 (2008) 1119–1164. cz22 R. Chen and Y. Zhou, On coloring of graphs with girth 2l+1 and without longer odd holes. odd K_4-subdivisions, arXiv preprint arXiv:2210.12376 (2022). c23 R. Chen, Graphs with girth 2l+1 and without longer odd holes are 3-colorable, arXiv preprint arXiv:2301.00112 (2023). cs02 M. Chudnovsky, N. Robertson, P. Seymour and R. Thomas, The strong perfect graph theorem, Annals of Mathematics, 164 (2006) 51-229. cs08 M. Chudnovsky and P. Seymour, Even-hole-free graphs still have bisimplicial vertex, Journal of Combinatorial Theory, Series B, 161 (2023) 331–381. cs22 M. Chudnovsky and P. Seymour, Proof of a conjecture of Plummer and Zha, Journal of Graph Theory, 103 (2023) 437–450. g2 A. Gyárfás, On Ramsey covering-numbers, Colloquia Mathematic Societatis János Bolyai 10, Infinite and Finite Sets. North-Holland/American Elsevier, New York (1975) 801–816. hm C. T. Hoáng and C. McDiarmid, On the divisibility of graphs, Discrete Mathematics, 242 (2002) 145–156. NPRZ11 D. Nelson, M. Plummer, N. Robertson and X. Zha, On a conjecture concerning the Petersen graph, The Electronic Journal of Combinatorics, 18 (2011) P20, 37pp. PZ14 M. Plummer and X. Zha, On a conjecture concerning the Petersen graph: Part II, The Electronic Journal of Combinatorics, 21 (2014) P1.34, 9pp. sivaraman V. Sivaraman, Some problems on induced subgraphs, Discrete Applied Mathematics, 236 (2018) 422–427. sso A. D. Scott and P. Seymour, Induced subgraphs of graphs with large chromatic number. I. odd holes, Journal of Combinatorial Theory, Series B, 121 (2016) 68–84. ss A. D. Scott and P. Seymour, A survey of χ-boundedness, Journal of Graph Theory, 95 (2020) 473–504. wxx22 D. Wu, B. Xu and Y. Xu, On coloring of graphs of girth 2l+1 without longer odd holes s (in Chinese), to appear in Science China: Mathematics http://doi.org/10.1360/SCM-2021-0373. See arXiv:2204.06284 wxx22+ D. Wu, B. Xu and Y. Xu, The chromatic number of heptagraphs, arXiv preprint arXiv:2206.01400 (2022). xyz17 B. Xu, G. Yu and X. Zha, A note on chromatic number and induced odd cycles, The Electronic Journal of Combinatorics, 24(4) (2017) P4.32.
http://arxiv.org/abs/2307.00973v1
20230703124419
Digital Twin-Empowered Communications: A New Frontier of Wireless Networks
[ "Lina Bariah", "Hikmet Sari", "Merouane Debbah" ]
cs.NI
[ "cs.NI" ]
Digital Twin-Empowered Communications: A New Frontier of Wireless Networks
Lina Bariah, Hikmet Sari, and Mérouane Debbah
===========================================================================

Future wireless network generations are evolving toward unlocking the opportunities offered by virtualization and digitization services, with the aim to realize improved quality-of-experience (QoE) and bring several advantages to network users. Given the rapid development in the field of network virtualization, we envision that future wireless networks will run over ubiquitously deployed virtualized components that are controlled by artificial intelligence (AI), i.e., the conceptualization of the Digital Twin (DT) paradigm. The key principle of the DT relies on creating a holistic representation of wireless network elements, in addition to decoupling the information pertaining to physical objects and dynamics, into a cyber twin. The cyber twin then leverages this information for AI model training, followed by reasoning and decision-making operations, whose outcomes are reflected back to the physical environment for improved sustainability. Motivated by this, in this article, we dig deep into the intertwined role of wireless technologies as both enablers of, and technologies enabled by, the DT. Furthermore, we put forward a vision of the integral role that future 6G networks are anticipated to play in order to realize an efficient DT. Finally, we sketch the roadmap toward identifying the limitations of the DT in 6G-enabled wireless networks, and open new horizons for further developments in different design aspects.

§ INTRODUCTION
Since the emergence of the smart city concept, everyday life has been evolving toward integrating intelligence into every aspect of our lives. While this paradigm has motivated the emergence of novel technological trends in order to meet its demanding requirements, several technologies have played a fundamental role in enabling the smart city concept, such as iot, sensing, ml and ai, and cloud and edge computing. Along the way, wireless network generations have fundamentally participated in maturing the smart city paradigm, which will continue to grow with the aim to realize pervasive user-centric applications supported by ubiquitous intelligence. This means that ai is anticipated to be the fuel for every aspect of smart cities. Hence, it is necessary to further explore what beyond-5G networks will bring, in terms of new technologies and services, in order to deliver the promised qoe to network users. While research continues to debate what future wireless generations will be, it has become apparent that 6G will be shaped towards provisioning network virtualization and softwarization, with the aim to support ubiquitous deployment of latency-sensitive applications <cit.>. While ultra-high reliability and data rates are essential for the successful implementation of these applications, they also require the development of online proactive mechanisms, in order to realize self-adaptive, self-optimizing, and self-sustaining networks with intelligent decision-making capabilities. In this regard, dt has recently been identified as a promising candidate for enabling zero-touch wireless networks, thanks to sdn and nfv, which have paved the way for the evolution of interactive physical-cyber platforms.
The key principle of the dt paradigm is to create a virtual representation for the physical elements and the dynamics and functions of the network. According to its definition, the dt is envisioned to enable end-to-end digitization of wireless networks, with the aim to perform cost-effective, adaptive, and fast network-wide optimization and design <cit.>. Furthermore, the dt allows the utilization of the digital realm with the aim to develop and test novel schemes and ai algorithms, that are capable of handling previously experienced or envisioned scenarios based on the collected data at the ct, and then to implement them at the pt once fully mature. Despite its promising advantages, to reap the full potential of the dt technology in 6G networks, the ct is envisaged to leverage ai algorithms, with novel data-driven paradigms, high performance computing, optimization theory, matching theory, and efficient cyber-physical interaction schemes, to realize the necessary adaptation/reconfiguration at the pt with an imperceptible time-lag. For the successful implementation of a high-fidelity dt paradigm in 6G, a new level of stringent requirements pertaining to connectivity, reliability, latency, and data rate are imposed on future wireless generations. In particular, it is foreseen that dt will require an outage time/year less than 52 minutes, with 99.99% reliability. Furthermore, sufficient bandwidth is necessary to handle the data transfer requirements between the pt and ct, especially for real-time applications that require low-latency communication. Also, collected data from the twin necessitates the development of security schemes for ensuring trusted data repositories, as well as hardware & software security of data collectors. Moreover, dt often involve the integration of various systems and devices, which may operate on different network protocols or communication standards. The network infrastructure should support interoperability and provide the necessary protocols and interfaces to enable seamless connectivity and data exchange between different components of the dt ecosystem. Despite being in its infancy, few standardization and industrial activities on dt have been released by different entities (Table <ref>). §.§ Related Work Although several research contributions have discussed the implementation of dt within the context of wireless networks, e.g. <cit.>-<cit.>, in this article, we aim to shed light on how to unlock the full potential of different wireless technologies, through leveraging the dt. In particular, in <cit.>, the authors defined the dt paradigm, discussed the key requirements of physical-virtual communications, and identified the various requirements of pt-pt, pt-ct, and ct-ct links. The article <cit.> focuses mainly on how ai will introduce a noticeable enhancement to the performance of dt in 5G networks. Particularly, the authors have focused on three main concepts, 5G automotive, 5G radio and channel emulation, and 5G optimization and validation, and how edge intelligence can support such concepts. In <cit.>, Tang et al. articulated the basic concepts of dt and mec, and provided a general overview on the communication and computation advantages reaped from dt edge network. They further investigated the technical deployments of dt at the cloud and at the edge. They also discussed blockchain and ml paradigms within the dt networks. The authors in <cit.> explored the dt structure, and discussed the edge and cloud deployment of dt. 
The key design requirements were discussed in detail in <cit.>, where the authors studied the decoupling, intelligent analytics, blockchain-based data management, and scalability and reliability. Also, they presented a general architecture of the dt when implemented in 6G, and highlighted the main operation steps. On the other hand, the authors in <cit.> articulated the key concepts of dt, and provided a taxonomy for the implementation of dt in wireless networks. Also, they discussed the role of wireless network in realizing the vision of dt. Finally, the authors in <cit.> have provided a comprehensive survey on the definition of the dt, and its applications (e.g., smart city, manufacturing, and healthcare). They also shed light on the utilization of dt in industry, where they highlighted the challenges pertaining to data analytics, iot, and dt design. Additionally, they articulated the history behind iot networks, ml, and the dt. The state-of-the-art is summarized in Table <ref>. Different than the existing literature, the objective of this article is three-folds. First, we overview the technological trends that have motivated the concept of dt-empowered 6G, and have defined the constraints and requirements that need to be taken into consideration when designing dt networks. Second, we identify the encountered challenges in different wireless technologies, and we put a vision forward on how to employ the dt paradigm in order to overcome such challenges and introduce several advantages to different wireless technologies. Third, we approach the interplay of dt and 6G from fundamental limitations point of view, where we explore the theoretical limitation of semantic communication, air-interface design, source and channel coding design, etc., on the implementation of dt. We further open the floor for exploring dt as an enabler for 6G through stressing on some challenges that are envisioned to be encountered when such a technology is deployed. Note that such a discussion has not been presented the current literature. In Figs. <ref> & <ref>, we summarize the intertwined role of dt and wireless networks in providing enhanced user's experience. § DIGITAL TWIN-EMPOWERED WIRELESS NETWORKS: DRIVING TRENDS §.§ Sustainable Smart Cities New smart cities are evolving toward conceptualizing urban sustainability principle, by fueling the initiatives of building green cities, that are envisioned to enjoy zero harmful emissions and reduced carbon footprint, and improved resources utilization and waste management efficiency. While wireless networks in smart cities aim at achieving an improved life quality, and to deliver the needed qoe for city users, it is essential to revisit current wireless technologies in order to develop environment-friendly system that support sustainability in smart cities. A promising candidate, the dt technology promises to enable sustainable cities, through performing a virtualized network planning, optimization, configuration, and adaptation, and hence, extremely reducing the consumption of the network resources. Route selection in autonomous driving constitutes a potential use-case for sustainable dt-empowered cities, where a self-optimized route selection can help, not only in delay reduction and congestion avoidance, but in an improved efficiency in the vehicles' resources as well. 
Nevertheless, due to the heterogeneous nature of smart city applications, from healthcare and intelligent manufacturing to urban planning and congestion control, large data sets arises from the continuous monitoring of the network that need to be processed, and then, heavy computing tasks need to be executed for performing multi-objective optimization of the network resources, for guaranteed reliability and ensured optimum solutions. §.§ Zero-touch Networks The fourth industrial revolution, Industry 4.0, has conceptualized the vision of automation in iiot, and has motivated the emergence of zero-touch network management, and hence, the unfolding of self-healing networks that are capable of autonomously identifying experienced faults and perform real-time self-maintenance, without human intervention. With the diverse fault scenarios in smart cities, including electricity supply interruption, unplanned massive traffic congestion, and unpredicted security breach, current fault management techniques might be insufficient to handle the increased operational complexity, and to realize closed-loop automation for network management. In this regard, the dt technology can provide the necessary flexibility feature to enable automated fault management and alleviate the management overhead from the network nodes. By leveraging ai algorithms, multiple agents can be trained in the ct over a very wide range of possible network failures and faults, in order to be ready to overcome potential incidents, and to autonomously perform qualitative faults detection, isolation, and restoration, when installed in the physical environment. Besides reduced physical overhead, other promising advantages can be revealed when dt is employed for enabling self-healing networks. Generally speaking, despite their severe effect on the network status, disasters and critical network faults rarely happen in real cities, and therefore, the available data for agents training is limited. Therefore, the ct can be exploited to generate synthetic data that represent a diverse range of fault scenarios, and therefore, enable improved agents/models training. §.§ URLLC With the strong believe that the role of virtual and augmented reality services is instrumental in achieving the qos requirements of emerging applications, urllc has been identified as one of the 6G verticals, and an enabler for immersive real-time metaverse. Specifically, in order to realize the envisioned seamless dt paradigm, sensing, communications, and computing tasks are anticipated to be performed in few milliseconds, and the ct is expected to be in a perfect synchronization with its physical counterpart. Equally important, extremely reliable communication should be guaranteed in dt-empowered wireless networks, in order to ensure accurate model training, and hence, precise decision-making process. Within the context of smart cities, urllc is an essential element in deploying mission-critical applications, such as autonomous driving, remote surgery, and public safety, to name a few <cit.>. dt and urllc technologies in smart city applications are intertwined, in the sense that each is an enabler and being enabled by the other. 
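As a back-of-the-envelope check of the reliability figures quoted earlier for dt-grade operation (99.99% availability, corresponding to roughly 52 minutes of outage per year), the short snippet below converts an availability target into an annual downtime budget. It is an illustrative calculation added for this discussion, not material from the original article.

def annual_downtime_minutes(availability, minutes_per_year=365.25 * 24 * 60):
    """Annual downtime budget implied by an availability target."""
    return (1.0 - availability) * minutes_per_year

for a in (0.999, 0.9999, 0.99999):
    print(f"{a} availability -> {annual_downtime_minutes(a):.1f} min/year")
# 99.99% availability allows about 52.6 minutes of outage per year, in line
# with the roughly 52-minute annual budget cited for dt-empowered networks.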
Taking the example of autonomous driving, on one hand, instead of imposing a huge pressure on the network energy and spectrum resources, and draining the limited on-board resources of smart vehicles and roadside units to perform intelligent prediction and fast inference, the parallel cyber world can be leveraged in order to perform lightweight control over autonomous vehicles. Furthermore, parallel driving can be an efficient approach to overcome the short-sight challenge in vehicular networks, where vehicles have limited knowledge of the network status, constrained by the limited coverage of vehicles, and hence, the dt can provide a holistic representation of the whole monitored area, over a long period of time. This enables the network controller to develop a sufficient level of understanding of the network dynamics and scenarios, and therefore, perform reliable and fast control over future actions. On the other hand, the successful implementation of the dt paradigm in autonomous driving scenarios is constrained by the efficiency of the cyber-physical interface, and its ability to deliver the needed on-demand sensing, modeling, and communications, within a constrained time-frame and with the required accuracy, in order to realize safe autonomous driving. §.§ Digital Representation of Reality Managing wireless networks is inherently complicated, with the extremely large number of homogeneous interactive nodes, and the existence of static and moving smart objects. Therefore, it is very challenging to enable network-wide control and to realize the envisioned automation in future smart cities. With the recent advancements in virtual and augmented reality services, the convergence of the physical and digital worlds has become feasible. While 5G has set the scene towards ultimately enabling cyber-physical interactions, 6G networks are envisioned to witness an evolution of network digitization paradigms, where digital representations of an entire networked city can be realized, incorporating each sophisticated aspect in highly dynamic cities. The utmost goal of such a digital platform is to enable a precise planning of the network assets and the execution of the necessary actions, with a distributed control capabilities <cit.>. While the digital representation should be in perfect synchronization with the physical realm, it gives the freedom to travel in time to the past or the future, with the aim to facilitate better understanding for the network dynamics, and predict future events for enhanced network planning. In more details, a wireless dt can enable the exploration of past and future scenarios to allow improved network planning, design, and configuration, as well as the development of tuned ai models that can accommodate these scenarios. From the one hand, dt can be used as a tool to acquire historical data, which can be leveraged to analyze past network performance, patterns, and events. Gaining insights into how the network has behaved in the past, helps identify trends, patterns, and potential issues that occurred, enabling better understanding of the network dynamics and behavior. On the other hand, an efficient dt can be used to create rare network scenarios that might be experienced in future, allowing minimized risk and network failures. 
In particular, the exploitation of historical data from the ct and real-time data from the pt, can be integrated with simulated scenarios at the ct to enable predictive analysis, which helps to identify potential bottlenecks, and predict events that may impact network operations. Such type of dt are further aimed for generating data pertinent to future envisioned events, and therefore, they allow the realization of more generalized ai models that are capable of handling a wide-range of network scenarios. §.§ Sensing as a Service Although sensing has already become an indispensable tool in current and future wireless networks, the dt technology is envisaged to substantially participate in advancing the sensing capabilities, particularly in smart cities applications. This is motivated by the fact that sensors in dt-enabled smart cities are required to perform a wide range of sensing functionalities in order to accurately and continuously capture all operational and environmental information that assist with understanding the dynamics of the network, and thereby allow the ct to realize efficient data analysis and decision making. Therefore, it is essential for the sensory data that constitutes the fuel for the dt paradigm to comprise diverse types of data, including vision-based data, voice-based data, control information, environmental status, activities and objects tracking, etc. As a result, current sensing tools, e.g., cameras, microphones, and inertial measurements tools, need to be redesigned as their probability of failure is expected to be high when integrated in dt-empowered smart cities applications. This is due to several limitations, including their constrained battery-lifetime and high battery recharging/replacement overhead, as well as the strict sensed data-type and sensing frequency. Such limitations prevent delivering on-demand sensing, yielding inaccurate representation of the pt. Accordingly, developing efficient, sustainable, and self-powered sensors that enjoy various operational modes, different material and compatibility features is essential for the successful deployment of dt. While wearables and ultra-small sensors are a perfect fit for current sensing services, the emergence of dt-empowered cities opens new horizons for more innovative solutions of efficient sensors that are embedded, implanted, or printed, in order to serve a diversity of applications with a high number of sensing modalities <cit.>. §.§ Distributed Intelligence While building a holistic digital representation of a city is the vision, it might be challenging to achieve the needed qoe through a single twin, due to the highly dynamic and heterogeneous nature of urban cities. This is particularly pronounced in highly-densed sprawling cities, where models training and inference experience long delays. By leveraging distributed learning algorithms, multiple distributed twins can be trained over multiple locations, and then a centralized twin can be developed for models aggregation and global decision making. Although such a distributed approach can help with overcoming the slow training over extremely-large data sets, concerns pertaining to multiple-twins coordination and accuracy of the trained model might arise, which might be intolerable by accuracy-sensitive life-threatening applications, e.g., autonomous driving. Furthermore, this requires the development of efficient twin-to-twin interfaces to enable successful interaction between the distributed multiple twin. 
Accordingly, a delay-accuracy trade-off arises. On the other hand, while distributed learning is an enabler for the dt, the latter can also be leveraged to achieve lightweight distributed learning in vehicular edge networks. In particular, the ct can be employed to train multiple agents using the data collected from the network, and the agents can then be installed in the pt for on-demand inference. In this approach, the agents are trained offline to perform the assigned tasks, ensuring reduced complexity and energy consumption at edge vehicles. Although the agents might still need to interact with the physical world a few times, the ultimate goal is to build a comprehensive dt framework capable of offering extensive model training that encompasses a wide range of network scenarios, so that interaction with the physical environment is no longer required. §.§ Smart Energy With the evolution of the ioe paradigm and the emergence of human-centric urban and rural applications, smart energy management has been deemed an instrumental key to allowing wireless networks to enjoy a long life cycle, contributing to the conceptualization of the sustainable city paradigm. Taking advantage of the surrounding environment, zero-energy devices unlock new opportunities in terms of energy efficiency and sustainability, as they are characterized by their battery-less architecture <cit.>. Instead, they exploit vibrations, light sources, temperature gradients, and rf signals to harvest the energy needed for signal communication, alleviating the overhead of battery replacement and recharging, particularly in remote or hard-to-reach areas. The development of the zero-energy device concept comes in line with the emergence of the dt paradigm, where the former paves the way for the latter. From the dt perspective, zero-energy devices represent the seed for on-demand, sustainable sensing services that ensure an up-to-date, synchronized ct. Owing to their low cost and low energy consumption, zero-energy devices can be deployed ubiquitously to allow continuous sensing of the pt status and, hence, contribute to the successful realization of an operational dt. §.§ Context Awareness Future wireless networks are envisioned to leverage context awareness for a better understanding of the environmental, temporal, and situational aspects of events and network behaviors. By offering intelligent storage and processing services, the dt introduces a new era of context awareness, in which multi-sensory data fusion can be realized and leveraged for recognizing node status and network traffic, thereby enabling the prediction of trends and deviations. This can be achieved by understanding and storing the current and historical contexts, allowing real-time detection and identification of temporal deviations in node status. In this regard, a wireless dt offers an augmented dimension of data modality by integrating various sources of information and providing real-time insights. In particular, the data collected by dt provide real-time insights into the behavior and performance of their physical counterparts, enabling proactive decision making and anomaly detection, and thereby improving overall context awareness. It is worth noting that these data can include various types of information, such as operational parameters, environmental conditions, performance metrics, and user interactions.
It should be further emphasized that the acquired data comprise images, videos, radio signals, and sound signals, offering an efficient platform for uncovering hidden patterns and correlations between the different data modalities and, therefore, contributing to a better understanding of network behaviors. Moreover, the dt constitutes a secure channel for context exchange, where sensitive data pertaining to node status and location can be securely stored and processed at the twin. § WIRELESS COMMUNICATION TECHNOLOGIES: HOW DIGITAL TWIN FITS §.§ Beyond mmWave Communication While high-frequency communication paradigms promise several advantages in terms of latency and data rates compared to their low-frequency counterparts, several factors limit their applicability to key scenarios in future wireless generations, including the limited communication distances due to high path loss, the strong channel sparsity that limits spatial multiplexing gains, and the dominance of the line-of-sight (LoS) component, which makes high-frequency communication prone to signal blockage. In the following, we identify the potential of the dt as an enabler for high-frequency communications, including thz, vlc, and owc. §.§.§ Terahertz Communications Despite the promising potential of thz communications in future wireless generations, in terms of spectral and energy efficiency, and the wide range of 6G applications anticipated to be supported by thz links, e.g., indoor localization, vehicle-to-vehicle communication, smart manufacturing, and sensing, several concerns pertaining to the design and implementation of thz communications need to be tackled. These include the vulnerability of thz links to deep fading and blockage, which results in unreliable thz links and degraded qos for end users. This challenge is exacerbated by the ultra-narrow beamwidth of thz links, rendering thz systems vulnerable to pointing errors and blockage, with limited coverage and multi-user support. It was recently demonstrated that the dt can be instrumental in enhancing the performance of thz communication in indoor environments, even in highly dynamic networks, by enabling improved context awareness and efficient, adaptive beam steering <cit.>. This is achieved by leveraging the dt to model, predict, and control the thz channel characteristics in order to find the optimum signal path and, therefore, to maximize the snr at the receiver end. It should be highlighted that accurate channel modeling at thz frequencies is challenging, and despite the several research activities initiated to derive accurate channel models that are experimentally tested as well as analytically derived, this challenge remains an open research topic. Accordingly, the dt offers a rich environment for testing and measuring thz channels under various scenarios through heuristic approaches, and for validating such models against theoretically derived models, for enhanced modeling accuracy. §.§.§ Visible Light Localization and Positioning Over the last couple of years, extensive research and industrial efforts have been devoted to exploiting vlc for localization and positioning purposes, due to its inherent low deployment cost, high throughput, and security features.
While current visible light localization has demonstrated acceptable 2D and 3D localization performance, it suffers from challenges that prevent it from delivering the qos imposed by future wireless generations. In addition to ambient light noise, these challenges include the performance-degrading interference experienced when an array of led is implemented. Furthermore, the limited flexibility of the led, i.e., the fixed configuration of led parameters such as the fov and dimming level, prevents the exploitation of the full potential of vlc systems. Motivated by its promising features, the dt paradigm constitutes an efficient planning, training, and operational tool for enhanced vlc-based localization systems. From one perspective, a planning dt can be utilized to perform close-to-real testing of optimum led placement, in order to realize enhanced coverage and controlled interference and, hence, improved visible light sensing and localization. From another perspective, a training twin can be used to train multiple network agents to perform efficient control of the led configuration, by manipulating the transmission power and the fov according to different critical cases, including different target mobility scenarios, blockage by objects of different sizes, the size of the target, interference to and from adjacent light diodes, and ambient light interference. Furthermore, the continuous reconfiguration of the led parameters should be imperceptible to the human eye, and hence agent training should also take the uniformity of the illumination into account. Such a complex and adaptive scenario requires dedicated computing resources and sophisticated ml algorithms, which are difficult to realize accurately at the physical level alone; the dt makes such a reconfigurable framework feasible for high-resolution localization in visible light environments. This is in addition to offering a simple platform for the acquisition of high-quality vlc data, given that the available vlc datasets are limited to particular scenarios. Summary: dt technology is anticipated to offer valuable capabilities for improved high-frequency communication. This can be achieved by utilizing the ct, with the multi-modal data acquired from it, to enable optimized system design, where the behavior of high-frequency antennas, transmitters, and receivers can be tested and, accordingly, system parameters pertinent to antenna placement, transmission power, and modulation schemes can be fine-tuned for maximized performance and coverage. Furthermore, given the high susceptibility of such links to blockage and interference, multi-modal data acquired from the dt can enable improved contextual awareness and allow the identification of active and idle users within the surrounding environment, thereby helping to identify the most effective beamforming design to maximize signal quality and minimize signal loss. §.§ Non-terrestrial Wireless Communication §.§.§ Satellite Communications 6G wireless networks are expected to be designed vertically, incorporating the ground, air, and space layers to provide ubiquitous coverage with improved reliability and reduced latency.
Therefore, leo satellite constellations have been deemed an efficient communication paradigm and have been identified as an enabler for different use cases and applications, particularly in smart cities, which require city-wide on-demand coverage to connect a massive number of users at separated geographical points while ensuring sustainable operation. A critical design aspect is the high mobility of leo satellites relative to a ground user, which makes frequent handover and service interruption limiting factors in satellite wireless communications. This is particularly pronounced in critical applications, e.g., autonomous driving, where service interruption might cause life-threatening incidents. Currently proposed handover schemes for satellite communications rely on optimizing a single metric to enable frequent handover. Such approaches are considered unscalable and do not yield the optimum handover strategy. Rather, multiple attributes need to be considered to ensure optimized handover and, hence, improved qos for ground users, opening the door for dt technology to enable enhanced handover strategy design that considers a wide range of system parameters without draining the limited resources of vehicles. These parameters include the maximum elevation angle, maximum service time, nearest location, handover times, satellite-to-satellite content delivery, and satellite speed, to name a few. Specifically, one can resort to the dt for performing data generation, computationally heavy operations, and on-demand operation for multiple leo satellites while considering the kpi of ground users. This is due to the capability of the dt to model an entire networked city from the ground to space and to ensure the availability of the synthetic data required for optimum model configuration and, hence, handover optimization. Recent results demonstrated that exploiting the dt in handover design for satellite communication can potentially reduce the transmission delay between satellite and terrestrial nodes, as well as improve the data delivery quality <cit.>. §.§.§ Internet-of-Drones uav have attracted significant attention over the last decade and have found applications in wireless networks such as surveillance, object tracking, and coverage extension. This is particularly promising when a swarm of uav performs particular tasks in a collaborative manner <cit.>. Hence, owing to the limited capabilities of an individual uav, an efficient uav-based network typically involves a swarm of uav. While a swarm of uav provides improved coverage and enhanced services, a large number of networked uav are needed to cover a relatively large geographical area. This increases management difficulty and exacerbates the complexity of inter- and intra-uav coordination. Furthermore, to satisfy the needed reliability and latency requirements, an optimized cross-layer scheduling scheme is required that takes into account several kpi in addition to the uav status, including the instantaneously available resources at each uav as well as the available network resources. Within this context, dt technology can play an important role in enabling live control over uav-based delivery systems.
Given that all information pertaining to the network and the uav for different scenarios will be mirrored at the ct, the dt can be leveraged to realize cross-layer, multi-objective, network-wide optimization in order to efficiently manage the network resources, in terms of spectrum and power allocation, and accordingly to select the optimum data routing and uav trajectory design. Furthermore, the dt can develop efficient schemes to overcome single points of failure in cooperative uav, due to energy outage or a uav failure. This can be achieved in two ways: 1) multiple agents can be trained over all possible failure scenarios to perform data re-routing and uav handover when a failure is expected, in order to avoid service interruption and delivery delays; 2) an operational interactive twin can be employed that continuously senses the network and uav status and performs online decision making according to the received information. Summary: Given the several challenging issues pertaining to satellite operations and the acquisition of data characterizing satellite behavior or space-based transceivers, the dt can assist in emulating various use cases representing several network scenarios and, therefore, generate the data sets needed to identify potential bottlenecks, optimize system parameters, and improve overall system efficiency. By leveraging this synthetic data, the dt can subsequently be used to study coverage gaps and link budgets, taking into consideration satellite orbits, antenna patterns, and atmospheric conditions, allowing improved beamforming and constellation design. This can be further exploited for maintenance purposes, where realistic scenarios related to satellite operations, mission planning, and emergency situations can be generated at the ct to develop response strategies, ultimately improving the operational readiness and efficiency of satellite networks. §.§ Beyond Massive MIMO While massive MIMO technology has demonstrated a successful approach to enabling improved wireless communication and increased network capacity through antenna densification at the base station, it is still constrained by the maximum number of antennas that can be deployed at each base station. This constitutes a bottleneck for network capacity expansion in 6G networks. The vision of the smart building concept in the beyond-5G era is not limited to implementing smart technologies inside buildings; rather, the vision is to have buildings that are smart from the inside and the outside, serving multiple indoor and outdoor applications. Recent technological advances in smart materials and ris have made this vision possible, where large intelligent surfaces can be mounted on building facades to perform different functionalities and support outdoor use cases. It is worth noting that the emergence of intelligent surfaces, which comprise a number of reflective elements with reconfigurable amplitude and phase, has revolutionized the way wireless communication is performed, where intelligent control over the wireless environment can be achieved through proper tuning of the ris elements. A planning dt can be employed to design the optimum ris placement and orientation scheme so as to serve multiple, randomly distributed users. In particular, different ris locations and orientations can be tested over the ct, in terms of coverage, interference control, and security, before the real implementation in the physical environment.
This is particularly helpful for large surfaces, which require substantial effort and cost to be mounted on buildings, and therefore the selected placement and orientation of the ris should be guaranteed to be optimum. From a different perspective, an operational twin can be leveraged to realize on-demand control over the ris configuration. It should be highlighted that large intelligent surfaces are capable of serving multiple users and enabling different applications simultaneously when their parameters are properly tuned. For example, multi-user support can be achieved simultaneously with smart control over the street lighting system and weather sensing, through proper configuration of the ris elements, i.e., which element serves which scenario and what the optimum amplitude and phase of each element are. In order to realize the full potential of ris in 6G networks, this configuration cannot be fixed over the network life cycle; it should be adaptive in order to fit the vision of smart networks. Accordingly, the operational twin can handle, in a live manner, the heavy computational overhead imposed by the complex optimization and continuous reconfiguration of the ris characteristics, and it can provide the required adaptability and flexibility that are key elements of intelligent systems. Summary: The main advantage of exploiting the dt in ris applications is the possibility of efficiently designing the ris network in terms of placement, orientation, and configuration to accommodate various network and user mobility scenarios. It further facilitates the understanding of multi-ris interference through multimodal data, which offer improved contextual awareness and, therefore, enhanced interference detection and mitigation. An operational dt can also be exploited for low-complexity and highly reliable real-time configuration of the ris elements, where new network configurations observed at the ct can be leveraged to optimize the ris design, which can then be reflected at the pt. §.§ Wireless Caching Wireless data caching, where popular data, according to user demands, are brought closer to edge devices, has shown tangible improvements in terms of latency, energy, and spectral efficiency. Data caching can follow two approaches, namely, infrastructure-based (where data are cached at the nearest base station) and infrastructure-less (where data are cached at end devices). While the latter enjoys reduced delay and allows device-to-device communications, the former has higher storage capabilities. In this regard, several mechanisms have been proposed to optimize the caching experience for edge users while taking into consideration different limitations in terms of cache resources, data popularity profiles, user demands and patterns, and the overall behaviour of network nodes. Although some mechanisms have demonstrated robust performance while taking into account one or a few of the aforementioned requirements, wireless data caching design and optimization is still an open challenge to be addressed in future wireless networks. This is due to the large amount of information that needs to be processed in order to enhance the prediction model for data caching. While ai algorithms can play an important role in such an optimization problem, the lack of sufficient data sets that comprise several wireless caching scenarios and take into consideration all data caching limitations limits the accuracy of the developed ai models <cit.>.
A high-fidelity dt can provide a holistic overview of the network caching requirements and offer a better understanding of popular data profiles and user needs, hence improving the accuracy of the caching prediction. In particular, a high-resolution ct can be considered a rich environment for generating the information needed to enhance the data caching experience, including geographical, social, and network data, in addition to offering a sophisticated framework for explaining the correlations among different data dimensions while considering users' historical patterns and demands, thereby allowing improved model training for more accurate data caching design. Summary: By leveraging real-time synthetic data, the dt can be exploited as a tool to enable improved content placement, adaptation, and resource management in wireless caching. On the one hand, by analyzing real-time data over the dt, including cache utilization, network conditions, content popularity, and user demands and behavior, efficient resource allocation strategies can be suggested at the ct. This can be further exploited to recommend cache updates and evictions and, hence, ensure a maximized cache hit rate and reduced latency. §.§ Network Slicing The recent progress in the network slicing paradigm has come as a consequence of the diverse applications that have emerged and are envisioned to be supported simultaneously by 6G. As its name indicates, network slicing relies on dividing the physical network into multiple separated virtualized segments that share the same physical infrastructure, where each segment (slice) is anticipated to support a particular application. Such a paradigm is key to realizing the adaptivity and flexibility required in future wireless generations. Owing to the dynamic nature of wireless networks, in addition to the inherent traffic management and resource allocation challenges pertinent to virtualization and softwarization in network slicing, the dt offers a prominent solution for optimized resource allocation in the network slicing paradigm <cit.>. In particular, multiple ct can be designed to represent multiple network slices, and to monitor and update the network states using graph theory. It is noted that gnn at the dt can be leveraged to capture the complex interdependence between multiple slices, which is difficult to quantify in conventional wireless systems due to the irregular topology of graphs, and hence to obtain the optimum slicing policy that maximizes the network performance <cit.>. Rather than relying completely on the physical environment, the dt offers a comprehensive replica of it, where more information can be acquired to enhance model accuracy and, therefore, strike an optimum end-to-end performance across the network metrics. Summary: dt technology offers a powerful approach for efficient network design, resource allocation, performance monitoring, dynamic adaptation, and testing of network slicing configurations and services. In particular, multi-modal data acquired at the ct can be leveraged together with real data from the physical environment to optimize the resources allocated among multiple slices, where the dt can be used to identify resource bottlenecks, forecast demand, and dynamically allocate resources to different slices based on their requirements.
Furthermore, the dt can be utilized as a tool to emulate various network conditions, traffic patterns, and user behaviors in order to verify the efficiency of network slice configurations before their deployment in the physical network. §.§ Integrated Sensing and Communication Being two of the main 6G verticals, high-performance communication and ultra-reliable sensing have been integrated into a unified system, opening up a new paradigm known as isac, which allows communication and sensing tasks to share the same hardware and spectrum resources. The latter is performed under the assumption that an acceptable level of performance degradation will be experienced by the two services. Despite its promising potential in achieving on-demand throughput enhancement, improved hardware utilization, and reduced energy consumption, the isac paradigm has several fundamental constraints that limit its application in current and future wireless networks with regard to communication reliability and localization accuracy. Driven by the several advantages that can be offered by the dt, it is anticipated that the dt can overcome some of the limitations experienced in isac systems. Motivated by the fact that 6G networks are envisioned to realize sub-centimeter positioning accuracy, multi-modal data generated by a high-fidelity dt can be exploited to acquire more information about objects of interest and, hence, lead to higher localization and positioning accuracy. Multi-modal localization allows end devices to select the optimum channel and communication methodology given the information obtained from the data generated at the ct. From a communication perspective, enhanced context awareness at the dt plays an important role in identifying the need for data communication by predicting the ideal time to communicate information signals. As a consequence, improved network resource utilization and on-demand throughput can be achieved. As a joint advantage for the co-existing systems, the dt paradigm can provide high-resolution csi acquisition via vision-aided estimation methods, which rely heavily on multi-modal data. High-precision csi estimation represents a key element toward realizing an efficient integration between communication and localization systems. Summary: The multimodal nature of the data acquired from the dt constitutes a promising factor in realizing 3D wireless sensing, in which wireless signals, combined with users' locations, can be used to reconstruct a 3D image of the wireless environment. Improved wireless sensing can readily contribute to enhancing the communication schemes; in particular, from the reconstructed 3D image, improved beamforming, localization, and resource management can be achieved. §.§ Massive Unsourced Random Access As recent wireless technologies are oriented towards enabling smart city applications, with a noticeable increase in the number of IoT devices (on the order of thousands), the mura paradigm has emerged, in which a subset of the available devices are assumed to be active simultaneously. While this has come as a result of the need for multi-access support in ultra-dense networks, it imposes several challenges in terms of csi acquisition and the availability of sufficient training sequences for identifying the active nodes and their number. As a potential solution, researchers are resorting to exploiting information pertaining to the statistical fluctuation of all nodes' channels to estimate the large-scale component of wireless channels.
Nevertheless, extracting such information requires collecting a large sample set of received signal power at each node in order to capture the statistical behaviour of the random channels, imposing a huge data exchange overhead on resource-limited devices. The unavailability of sufficient data hinders the development of accurate ml models, in addition to the numerical instabilities experienced in theoretical approaches as the number of devices increases <cit.>. Within this context, a high-fidelity dt can offer an approach for coherent activity detection in mura scenarios, where the historical channel behaviour of a massive number of uncoordinated nodes can be recorded at the ct and then leveraged as a seed for ml model training. It is worth mentioning that, in mura, the system can benefit from a statistically based ml model that is built on mathematical models and relies on the statistical behavior of the training environment. However, such models, e.g., Naive Bayes, require a large set of high-quality data to build their basis. Therefore, over a sufficiently long period of time and considering a wide range of scenarios and numbers of total/active nodes, the historical data collected at the ct is anticipated to play an important role in facilitating the design of accurate and robust statistical models that can precisely track the statistical behaviour of the unsourced nodes and their channels, and hence enable the corresponding trained models to operate accurately when implemented in the pt. Such an approach significantly reduces the overhead resulting from data exchange for csi acquisition in mura and, hence, offers low-cost, reliable, and energy-efficient multi-access support in future wireless networks <cit.>. Summary: The key element in employing dt technology in mura scenarios lies in the contextual and situational awareness offered by the dt, in which multimodality plays an essential role in capturing active and idle users. This not only makes it possible to identify the number of active users, but also to recognize which users are active, to quantify access delay, collision probability, and resource utilization, to enable accurate vision-aided channel estimation, and thereby to realize reliable signal detection in mura systems. According to the channel conditions of the random users, resources can be allocated in an optimized manner. §.§ Ultra-Dense HetNets Recent urban development in terms of intelligent city applications has corroborated the fact that future wireless networks will run over ultra-dense hetnets. This has motivated the need for significant network capacity expansion in order to meet the demands of future dense smart cities. Ultra-dense hetnets are characterized by the extreme densification of base stations, a massive number of heterogeneous nodes with diverse access technologies, and varied cell sizes. While such networks can offer a reasonable capacity enhancement, they raise several concerns pertaining to hardware miniaturization design, intra- and inter-cell coordination, resource allocation, and interference management. In this regard, dt technology can play a useful role in ultra-dense hetnets by leveraging the multimodal information that can be extracted from the ct in order to achieve their optimum performance.
Specifically, in such networks, small-sized base stations need to be equipped with various sensing, computing, and decision-making capabilities, which, with the massive increase in network size and complexity, can potentially drain the network resources. On the other hand, virtual ultra-dense hetnets can accurately mimic the behavior of real ones and hence, with extremely reduced resource consumption, provide a promising solution for optimizing the configuration of hetnets in the physical realm. Furthermore, given the highly dynamic and complex nature of such networks, it is envisioned that the dt constitutes a rich platform for multi-dimensional information acquisition, with the aim of realizing enhanced awareness and cognition in hetnets. This can be equivalent to the deployment of all-sense base stations that are capable of capturing multi-modal data, which can be leveraged to train ai models for improved inference with reduced network overhead. Efficiently trained ai models are anticipated to realize fully aware base stations and, hence, to perform complex network-wide optimization, taking into consideration beamforming design, cell zooming, resource allocation, cell sizes, interference, and base station transmit power, for various hetnet sizes. It is worth highlighting that such an optimization framework cannot be solved through conventional theoretical or ai-based approaches alone, as the former lack adaptivity, scalability, and tractability, while the latter fail to provide accurate models and inference when relying only on the limited data collected from the physical environment. Summary: Optimizing resource allocation strategies in ultra-dense hetnets is a main motivation behind considering dt technology. In particular, the dt can be utilized for improved real-time dynamic allocation of spectrum resources, adjustment of transmit power levels, and interference management techniques to maximize network capacity and improve overall spectral efficiency. The overhead of sensing, computing, and decision making at the small cells can be reduced by leveraging the virtual twin to perform resource optimization and management. § WIRELESS NETWORKS FOR DIGITAL TWIN: A VISION §.§ Semantic Communication The rise of the semantic communication paradigm is fueled by its promising advantages, which promote the development of several technologies with strict energy, spectrum, and delay requirements. Semantic communication can be efficiently utilized for communicating sensed information between the physical and cyber realms <cit.>. It should be highlighted that, in order to improve the qoe at the dt, the sensing granularity should be increased, which imposes a huge data processing, storage, and communication overhead, directly impacting the qos performance of the system. This is more pronounced when considering immersive multimedia data. In this regard, the role of semantic-aware communication within the context of the dt is two-fold. First, with the aim of reducing the communication overhead resulting from multimedia transfer, semantic communication can be utilized to extract the important meanings (semantics) encapsulated in the data to be transmitted and share them to and from the ct, thereby drastically reducing the amount of data that needs to be transferred.
Second, as several multimedia streams are anticipated to be exchanged between the physical and cyber environments, and given that a degree of correlation exists across the different data modalities exchanged, semantic communication can further boost spectral efficiency by extracting the cross-modal semantics and sharing them with the dt servers, further reducing the communication overhead. §.§ Joint Source & Channel Coding While the separation principle has been one of the basic tenets of information theory, the emergence of novel paradigms with delay constraints, including the dt, has motivated research in the field of finite block-length coding and communication. In this regard, jscc is considered a promising solution to support ultra-low-latency systems. jscc allows the channel decoder to exploit the residuals of the source decoder (stemming from the fact that, for finite-length blocks, the source is considered redundant) and hence not only reduces the delay but also maintains an acceptable reliability performance <cit.>. Within the dt paradigm, communicating data between the pt and ct in an ultra-reliable and low-latency fashion constitutes a cornerstone of an efficient dt implementation. Therefore, jscc can potentially contribute to the realization of ultra-reliable low-latency communication in dt-empowered wireless networks. Although the joint approach might experience a lossy signal recovery process at the dt, some techniques can be followed to reduce the recovery loss, including a Markov model-based source that is jointly decoded with the channel code. §.§ Emergent Communications m2m communication represents an instrumental paradigm in current and future wireless generations, and due to its machine-centric nature, it offers potential advantages to the dt, including enhanced spectral and energy efficiency and reduced delay. As a promising tool, emergent communication has been regarded as a key to enabling machines to communicate and interact meaningfully, without human intervention, in order to perform particular tasks <cit.>. These tasks are generally characterized by a high level of coordination requirements, and therefore emergent communication can offer intelligent interaction between machines at the pt and dt servers to perform joint tasks in a coordinated manner. While some delay might be experienced, once emergent agents reach an agreement on their communication protocol, several tasks can be accomplished with low latency between the two realms. If efficiently evolved, emergent agents can help relieve the communication overhead and save a considerable amount of network resources. §.§ Waveform Design As elaborated earlier, the dt paradigm will impose strict requirements in terms of data rate, reliability, spectral efficiency, power consumption, and latency. Therefore, it is anticipated that current waveform designs might fail to deliver the qos needed for the dt paradigm <cit.>. Being sensitive to synchronization errors, ofdm can experience limitations when implemented within the context of the dt, where a huge volume of data is generated from the dt operations and communicated, making perfect synchronization challenging to achieve. Therefore, a joint waveform design for sensing and communication is most likely to define the future of dt-centric waveforms, in order to strike a balance between these two essential functionalities in the dt.
The latter can be designed by considering various approaches, e.g., spatial modulation, joint time/frequency design, and waveform optimization. Such approaches offer enhanced degrees of freedom and mutual gains between the sensing and communication capabilities. Unlike frequency-based waveforms, spatial multiplexing approaches, e.g., oam, offer several advantages to the dt with respect to data rates, particularly when massive MIMO schemes are considered. This is due to the fact that, in order to reap the full potential of oam, a massive number of antennas needs to be employed. The latter approach to waveform design can incorporate index modulation in order to ensure reliable pt-ct communication with high data rates and low power consumption <cit.>. §.§ Non-coherent Communication Systems Over the last few decades, coherent systems, where accurate csi can be acquired at the transmitter/receiver for signal recovery purposes, have been a pillar of all wireless generations. Although non-coherent systems have been extensively discussed for various use cases, their applicability to earlier wireless generations was limited due to their degraded performance compared to their coherent counterparts. Nevertheless, with the emergence of the dt paradigm and related technologies that involve massive campaigns of live data transfer, coherent systems represent an obstacle to accommodating the latency and spectral efficiency requirements of dt-enabled networks. This stems from the fact that csi acquisition for all sensed information transmitted from and to the ct is close to impossible within the network resource constraints. Therefore, there are renewed efforts devoted to exploring the potential of blind methods in the dt paradigm. For short-length information, e.g., data pertaining to weather, status, etc., maximum likelihood sequence estimation <cit.>, which relies on extracting the correlation between consecutive signals for the recovery of upcoming signals, is a possible approach. However, as we move to larger, more immersive data related to multimedia, e.g., 3D videos and images, the performance of such an approach severely degrades. As an alternative, Grassmannian signaling (which was demonstrated to achieve close-to-optimal pilot-free modulation) has been deemed an efficient method for longer information sequences. In Grassmannian modulation, information is conveyed over tall unitary matrices, whose subspaces (on which each signal is carried) remain robust over a wide range of snr values. § OPEN RESEARCH DIRECTIONS §.§ Trust & Security Since the dt is the main controller that manages all network elements, it is very important to ask to what extent it can be trusted. Ensuring perfectly secure dt operation is one of the most critical design aspects that need to be investigated. Specifically, in order to maintain smooth, uninterrupted operation and trusted decisions, research should invest substantial effort in developing secure data communication and authentication schemes that guarantee a trusted twin. This open research issue is of particular interest in mission-critical applications, due to the severe consequences of security breaches in such scenarios. While blockchain has been proposed as a means to ensure security at the dt, it is yet to be tested whether the heavy operations and high latency of blockchain can be tolerated by the dt paradigm.
§.§ Data Communication With the successful deployment of the dt in wireless networks, a huge amount of data generated by distributed sensors is anticipated to be communicated instantly to the ct, imposing challenges on the communication resources between the physical and cyber domains. This necessitates the development of ultra-low-latency and high-data-rate communication paradigms. Although thz communications might be seen as a good candidate, it is still unclear to what degree thz systems can provide the necessary coverage for an on-demand interactive twin, particularly under a high probability of blockage. §.§ To What Extent Can We Go? As a holistic operational twin of a wireless network is still a visionary concept, it is important to lay down a theoretical foundation for exploring the limitations of dt-enabled wireless networks from different angles. Specifically, it is essential to revisit the current communication-theoretic models and to explore whether they can be leveraged to evaluate the performance of dt-enabled 6G. Hence, it is necessary to develop a solid, yet sufficiently scalable and versatile, mathematical foundation for investigating the capabilities of dt-empowered wireless networks, in addition to identifying the system bottlenecks and setting the performance limits at large scale. Such frameworks would also assist in further quantifying the benefits that can be reaped and open new opportunities for innovation, improvement, and development. §.§ Advanced AI Algorithms With ai being the engine that activates the dt and enables it to be an efficient orchestrator of the network, it is of paramount importance to ensure that the employed ai algorithms fit well within the needs of dt-enabled 6G in terms of latency, accuracy, and energy efficiency. It is yet to be revealed whether existing ai algorithms will be able to capture the inherent relationships of the physical dynamics and their impact on the wireless environment and, therefore, to develop an understanding of all network functionalities, variations, and interactions over different time periods. With the strong belief that the dt will constitute the brain of the network in the future, it is essential to ensure that the trained ai models will enable the dt to have cognitive capabilities, including reasoning and inference. §.§ Quantum Digital Twin The advent of quantum computing, characterized by high-dimensional quantum states (a.k.a. qubits), has revolutionized the way classical computers operate, introducing a new era of computing capabilities and functionalities. In the wireless communication domain, the emergence of the quantum domain offers a way to break the capacity limits set by conventional wireless systems, which rely on the basic 0/1 state space <cit.>. In addition to the improved channel capacity, quantum communication has a unique robustness to noise and high immunity to eavesdropping. While research in quantum communications has witnessed noticeable progress, challenges pertaining to quantum teleportation, quantum error-correcting codes, entanglement purification, and the quantum repeater are yet to be further explored in order to enable long-distance quantum communication.
The fusion of the dt and quantum computing paves the way for several advantages that can be reaped from the two schemes, where the interplay of the classical dt paradigm with quantum computing and processing capabilities (including quantum machine learning) promises to offer real-time simulation, monitoring, and control of highly complex, highly dynamic, and interconnected elements, with the aim of achieving the ultimate vision of the quantum internet. §.§ Immersive Communication The prevalence of virtual and augmented reality devices, in addition to the availability of computing resources and high-resolution equipment, has given rise to the immersive communication paradigm, which relies on exchanging natural haptic signals with remote devices. The key element in immersive communication is the ability of wireless nodes to interact with remote environments and to detect and quantify this interaction by exploiting all-sense features. In other words, the haptic perception of participating nodes in remote environments should enjoy high resolution to achieve the required QoE. In order to realize the ultimate immersion, several KPIs are imposed on future wireless networks in terms of communication latency (sub-millisecond) and throughput, where the latter is characterized by the resolution quality, color depth, and frame rate. A super-resolution dt is envisioned to be the key to realizing fully immersive communication, by offering a holistic framework for multi-sensory data acquisition with reduced latency and improved resource management. §.§ Synchronization vs. Delay vs. Accuracy The development of a high-fidelity dt is constrained by three main factors, namely, the granularity of the shared data needed to ensure perfect agreement between the physical and cyber twins, perfect synchronization, and the inference and update latency between the two realms. While the three factors are essential and highly affect the network performance at the physical twin, each contributes to the successful realization of particular twin modes. For example, a relaxed latency can be tolerated in scenarios that do not require live inference, as long as the two twins are in perfect sync, meaning that the changes experienced at the pt are updated at the ct in a synchronized fashion in order to perform long-term network configuration. On the other hand, ultra-low latency is necessary in operational twins, where decisions made at the ct need to be reflected to the pt in a live manner. Yet, a high-fidelity dt requires continuous data sharing between the two twins, demanding high throughput and yielding increased delay. § CONCLUSION In this article, we laid down the foundation for integrating wireless technologies with the dt paradigm and overviewed the technological trends that have manifested themselves as key enablers for dt-assisted 6G. We further outlined the opportunities offered by the dt in enhancing the performance of wireless networks, and we shed light on how existing communication systems need to be revisited in order to guarantee that future wireless networks will be capable of supporting the needs of the dt. Finally, we highlighted several potential research directions for further exploration of this disruptive innovation. § BIOGRAPHIES Lina Bariah (lina.bariah@ieee.org) is a Senior Researcher at the Technology Innovation Institute, Abu Dhabi. Hikmet Sari (hsari@ieee.org) is a Professor with the Nanjing University of Posts and Telecommunications (NJUPT).
Mérouane Debbah (Merouane.Debbah@tii.ae) is a Professor at Khalifa University, Abu Dhabi.
http://arxiv.org/abs/2307.01497v2
20230704060610
Accelerated stochastic approximation with state-dependent noise
[ "Sasila Ilandarideva", "Anatoli Juditsky", "Guanghui Lan", "Tianjiao Li" ]
math.OC
[ "math.OC", "cs.LG", "stat.CO", "stat.ML" ]
http://arxiv.org/abs/2307.02011v1
20230705035232
Precise WiFi Indoor Positioning using Deep Learning Algorithms
[ "Minxue Cai", "Zihuai Lin" ]
eess.SP
[ "eess.SP" ]
Precise WiFi Indoor Positioning using Deep Learning Algorithms Minxue Cai and Zihuai Lin School of Electrical and Information Engineering, The University of Sydney, Australia Emails: zihuai.lin@sydney.edu.au. This study demonstrates a WiFi indoor positioning system using Deep Learning algorithms. A new method based on the MATLAB fitting function is utilized to compute the path loss coefficient and the log-normal fading variance. To reduce the error, a new hybrid localization approach utilizing the Received Signal Strength Indicator (RSSI) and Angle of Arrival (AoA) has been created. Three Deep Learning algorithms are utilized to decrease the adverse influence of noise and interference. This paper compares the performance of two models in three different indoor environments. The average error of our hybrid positioning model trained by CNN in the big classroom is less than 250 mm. WiFi Indoor Positioning, Deep Learning Algorithms, Trilateration Approach, Artificial Intelligence. § INTRODUCTION The Internet of Things (IoT) and mobile networks are changing people's lives and are also contributing to the development of indoor positioning and indoor navigation technologies <cit.>. Compared with indoor positioning, outdoor positioning technology is now more mature. The Global Positioning System (GPS) requires open skies to receive satellite signals, and it needs to receive at least four satellite signals for accurate positioning. Therefore, the large number of indoor obstructions can prevent GPS systems from receiving sufficient satellite signals, resulting in significant positioning errors or the inability to locate accurately. In addition, multipath effects and signal fading can also contribute to increased positioning error in GPS technology <cit.>. In order to solve the problem of GPS inaccuracy in indoor environments, indoor positioning technology has developed rapidly and been applied in many fields. Among these technologies, WiFi indoor positioning is widely utilized. WiFi is a wireless network technology based on the IEEE 802.11 standard. WiFi signals generally occupy two frequency bands, 2.4 GHz and 5 GHz. Compared with signals in the 5 GHz band, signals in the 2.4 GHz band have better penetration ability and can travel longer distances. Therefore, signals in the 2.4 GHz band are more suitable for indoor positioning. In addition, devices such as WiFi routers, WiFi access points, and wireless network cards can generate WiFi signals, thereby increasing the flexibility of applying WiFi in the indoor positioning field <cit.>. To further reduce the negative influence of the NLoS problem and the multipath effect, Deep Learning algorithms are utilized in the indoor positioning models. Deep Learning algorithms can help identify the relationship between the WiFi signal information and the target points. They can also decrease the negative influence of noise and interference by adjusting the weights of the neurons in the layers <cit.>.
The remainder of the paper is organized as follows. Section II reviews the concepts of the indoor positioning field and the existing neural network-based indoor localization methods. Sections III and IV describe the system model and the positioning methods used in this paper. Section V analyzes the experimental results. The last section summarizes this work and outlines the future work plan. § BACKGROUND §.§ WiFi Ranging-based Technology According to the underlying technical principles, WiFi indoor positioning technology can be divided into Angle of Arrival (AoA), Received Signal Strength Indicator (RSSI), and Time of Flight (ToF) approaches <cit.>. The Received Signal Strength Indicator method utilizes the signal strength measured at the receiver point. The energy loss during propagation is calculated by comparing the signal strength at the receiving and transmitting points. It is then used as an input to the log-distance path loss model to compute the distance from the transmitting point to the receiving point. As a result, the location of the receiving point in the indoor environment can be obtained <cit.>. The Angle of Arrival method is a technique that utilizes the angle of arrival of the signal to calculate the target position. By deploying a large number of antenna arrays in an indoor environment, the phase differences between different antennas are used to calculate the angle of arrival of the signal <cit.>. Time of Flight is a technique that uses the propagation time of a wireless signal to calculate the location of a target. The transmitter sends a signal to the receiver, and the receiver receives the signal and immediately transmits back a response signal. The distance from the sender to the receiver is calculated by recording the round-trip time of the signal, and the location of the receiver is then calculated based on these distances <cit.>. §.§ Trilateration Method The RSSI, AoA, and ToF methods usually need at least three anchor points to complete the measurement of the target point location <cit.>. Therefore, the Trilateration method is utilized for ranging-based indoor positioning. There are three non-collinear signal transmitters and one unknown receiver in the plane, and the distances from the three transmitting points to the receiving point can be calculated. When the coordinates of the three transmitting points are known, three circles can be drawn with each transmitting point as the center and the corresponding calculated distance as the radius. Theoretically, the intersection point of the three circles represents the receiving point. However, the positioning accuracy decreases due to signal fading and environmental interference in real indoor environments. Therefore, there is not necessarily only one intersection point of the three circles in the real environment <cit.>. §.§ Deep Learning and Neural Network Deep Learning algorithms are developed from Machine Learning algorithms. Deep Learning algorithms are used to study and extract features from data to perform classification, regression, or other prediction tasks, so that patterns in the data can be identified and classified. A neural network is the core component of Deep Learning. It is a hierarchical structure consisting of multiple neurons, each receiving input from neurons in the previous layer and passing its output to the next layer of neurons. Neurons have many weights that are utilized to adjust the transmission intensity of signals from one neuron to another.
These weights are adjusted by training the neural network to minimize the prediction error <cit.>. The Back-Propagation Neural Network (BPNN) is a common multi-layer feed-forward neural network. It trains the network through the Back-Propagation algorithm to find the mapping relationship between the input data and the corresponding output targets. A BPNN has three types of layers: input layers, hidden layers, and output layers. Each layer has many neurons, and each neuron has a weight <cit.>. The Radial Basis Function (RBF) network is commonly used to solve classification and regression problems. It utilizes radial basis functions to find the relationship between input data and output targets, and a linear output layer is utilized to compute the results <cit.>. It has a similar structure to the BPNN. The Convolutional Neural Network (CNN) is a Deep Learning algorithm that is skilled at processing multidimensional data. The most important sections of a CNN are the convolutional layers and the pooling layers. In addition, it also includes fully connected layers, activation functions, batch normalization, and other components. A CNN has the ability to automatically extract and learn the characteristics of the input data through the convolutional kernels in the convolutional layers. The Back-Propagation algorithm and stochastic gradient descent are then used for CNN training <cit.>. §.§ Indoor Positioning based on Deep Learning In <cit.>, Xue et al. introduced a highly adaptive indoor localization (HAIL) approach which takes advantage of both relative and absolute RSSI values to raise the accuracy. A BPNN was used in the designed method to calculate the degree of matching between the absolute RSSI values in the database and the measured absolute RSSI values. The results illustrated that HAIL attained a mean absolute error of 0.87 m. In <cit.>, Meng et al. focused on RSSI-based WiFi indoor localization. First, the collected RSSI values were pre-processed using a weighted median Gaussian filtering method to enhance the reliability of the data. The pre-processed data were utilized to build a database. An improved fast clustering algorithm was utilized to adjust the number of neurons in the RBF hidden layer and the kernel function parameters of the neurons. The Levenberg-Marquardt algorithm optimized the input RSSI values. The improved RBF algorithm improves the matching degree between the measured RSSI values and the RSSI values in the database. The mean absolute error of the improved RBF algorithm was 1.421 m, while that of the standard RBF algorithm was 1.925 m. In <cit.>, Zhang et al. concentrated on the RSSI-based WiFi fingerprinting method. An algorithm combining a CNN with Gaussian Process Regression (GPR) was introduced. The CNN could automatically extract the features of the input RSSI values and decrease the negative effects of noise and interference in indoor environments. Meanwhile, GPR could reduce the overfitting problem of the CNN. The outcome demonstrated that the mean absolute error of the CNN was 1.442 m, while that of the hybrid algorithm was 1.06 m. § METHODS AND SYSTEM MODEL The system model consists of three fixed anchor points and a moving target point. Fig. 1 demonstrates the detailed layout of the system model. Three WiFi routers named T1, T2, and T3 are placed as signal transmitters at the fixed anchor points. A laptop and an AD9361 device are placed as a signal receiver at the target point.
The target point is chosen within the range covered by the three anchor points. All four devices use wireless communication technology based on WiFi signals. Additional functions in MATLAB are used with the AD9361 instrument to measure the experimental data. To better assess the performance of the designed method, the test environments are selected as a big classroom, a corridor and a small classroom. §.§ RSSI-based Trilateration Method The RSSI-based Trilateration method is a common wireless positioning method that calculates the location of a device from measured RSSI values. When a WiFi router sends out a signal and the laptop acting as the receiver receives it, the RSSI value is measured. The position of the receiver is then calculated from the RSSI values of the three transmitters. The log-distance path loss model, shown in equation (1), is used to convert an RSSI value into a distance from the transmitting point to the receiving point. PL(d)=P_T(d)-P_R(d)=PL(d_0)+10γlog_10(d/d_0)+X_g where PL(d) is the total path loss in dB, P_T(d) represents the transmitted power in dBm, P_R(d) is the received power in dBm, d_0 is the reference distance (usually 1 m to 10 m for indoor environments), PL(d_0) is the path loss in dB at the reference distance d_0, d represents the total length of the propagation path, γ is the path loss coefficient and X_g is a zero-mean Gaussian random variable modeling the attenuation caused by fading. Because of the complexity of indoor scenes, the NLoS problem and the multipath effect influence the RSSI values measured at the receiver. The random variable X_g therefore follows a Gaussian distribution with a standard deviation σ in decibels, also called the log-normal fading variance <cit.>. The path loss coefficient and log-normal fading variance vary across indoor environments, so a MATLAB fitting procedure is developed to estimate these two parameters. Equation (1) can be rewritten as equation (2), shown below. P_R(d)=P_R(d_0)-10γlog_10(d/d_0)-X_g In equation (2), P_R(d_0) represents the received power in dBm at the reference distance d_0. The signal strength at different distances from the signal transmitter therefore needs to be measured. In each test environment, the location of the WiFi router is defined as the origin and the reference distance d_0 is set to 1 m. The functional model relating RSSI and distance is the sum of a log_10 term and a zero-mean Gaussian term, and the fitting function in MATLAB is used to fit this model. The fitted path loss coefficient and log-normal fading variance are then used in equation (2), and the measured RSSI value is substituted into equation (2) to compute the distance from the WiFi router to the receiver point. To locate the signal reception point more precisely, three WiFi routers are arranged in each indoor environment. The distances from the three WiFi routers to the receiver are calculated as d_1, d_2 and d_3 respectively. The three WiFi routers have fixed locations denoted (x_1,y_1), (x_2,y_2) and (x_3,y_3). Three circles are drawn with the three WiFi routers as centers and d_1, d_2 and d_3 as radii. Their mathematical expressions are given below.
(x-x_1)^2+(y-y_1)^2=d_1^2 (x-x_2)^2+(y-y_2)^2=d_2^2 (x-x_3)^2+(y-y_3)^2=d_3^2 The least squares method is used to compute the coordinates of the receiver point. Subtracting equation (5) from equations (3) and (4) yields the linearized system AX=b, where X=[x, y]^T, A=[ 2(x_3-x_1) 2(y_3-y_1); 2(x_3-x_2) 2(y_3-y_2) ] and b=[ d_1^2-d_3^2+x_3^2-x_1^2+y_3^2-y_1^2; d_2^2-d_3^2+x_3^2-x_2^2+y_3^2-y_2^2 ]. The coordinates of the receiver point are then computed with the least squares approach given by the formula below. X=(A^TA)^-1 A^Tb Although there are three WiFi routers in the indoor environment, the multipath effect prevents the three circles from intersecting at a single point. In addition, obstacles such as tables and cupboards are present in the indoor environments. The NLoS problem causes additional energy loss during signal propagation, which affects the accuracy of the RSSI measurements and of the indoor localization. As a result, the computed coordinates of the receiver point lie in the common area of the three circles, as presented in Fig. 2. §.§ AoA Estimation The Multiple Signal Classification (MUSIC) algorithm is a high-resolution spectral analysis method for estimating the direction and number of multiple sources in a signal contaminated with noise <cit.>. The MUSIC algorithm estimates the direction of a signal source primarily through spatial spectral analysis. Suppose there is a uniform linear array with M receiving antennas that receives signals from K sources. The k-th source is located at angle θ_k, has frequency ω_k, and is sampled with period T. The uniform linear array receives the signal from the sources and converts it into a digital signal for processing, so the received signal has the following expression. x_i(t)=∑_k=1^K α_k(θ_k)e^jω_kt+n_i(t), i=1,2,…,M where α_k(θ_k) is the spatial filtering coefficient of signal source k in the direction θ_k and n_i(t) represents the noise term. The received signals of all antennas are stacked into the vector of equation (9). x(t)=[x_1(t),x_2(t),…,x_M(t)]^T The received signal vector is then used to calculate the spatial correlation matrix R, shown in equation (10). R=𝔼[x(t)x^H(t)] Here ^H denotes the Hermitian transpose and 𝔼 the expectation operator. Eigenvalue decomposition is performed on the spatial correlation matrix to obtain its eigenvalues and eigenvectors: R=U_RΛ_RU_R^H In equation (11), U_R=[u_1,u_2,…,u_M] is the eigenvector matrix and Λ_R the eigenvalue matrix. The eigenvectors and eigenvalues satisfy: Ru_i=λ_iu_i, i=1,2,…,M Since the source signals and the noise are independent of each other, the spatial correlation matrix R can be decomposed into a source part and a noise part, giving equation (13). R=U_RΛ_RU_R^H=U_sΛ_sU_s^H+U_NΛ_NU_N^H U_s is the subspace spanned by the eigenvectors associated with the K largest eigenvalues of R, called the source signal subspace. U_N is the subspace spanned by the eigenvectors associated with the remaining smaller eigenvalues, named the noise subspace. U_s is an M × K matrix while U_N is an M×(M-K) matrix. The noise-subspace eigenvectors are used to build the spatial spectral function shown in equation (14). P(θ)=1/α^H(θ)U_NU_N^Hα(θ) In equation (14), α(θ) represents the array steering (direction) vector for angle θ. The K largest peaks of equation (14) correspond to the AoAs of the signals from the K sources.
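To make the spectral search of equations (10)–(14) concrete, the sketch below estimates the MUSIC pseudo-spectrum from array snapshots. It is a minimal illustration rather than the code used in the experiments: it assumes a half-wavelength-spaced uniform linear array and a known source count, and the function name and parameters are our own.

```python
import numpy as np

def music_spectrum(snapshots, num_sources, angles_deg, d_over_lambda=0.5):
    """Pseudo-spectrum P(theta) of equation (14) for a uniform linear array.

    snapshots: complex array of shape (M, T) -- M antennas, T time samples.
    num_sources: assumed number of sources K.
    angles_deg: grid of candidate angles of arrival in degrees.
    d_over_lambda: element spacing over carrier wavelength (assumed 0.5 here).
    """
    M, T = snapshots.shape
    # Sample estimate of the spatial correlation matrix R = E[x x^H], eq. (10).
    R = snapshots @ snapshots.conj().T / T
    # Eigendecomposition of the Hermitian matrix R, eqs. (11)-(13).
    eigvals, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
    U_N = eigvecs[:, : M - num_sources]         # noise subspace (smallest eigenvalues)
    spectrum = np.empty(len(angles_deg))
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        # Steering vector of the uniform linear array for direction theta.
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.sin(theta))
        denom = a.conj() @ U_N @ U_N.conj().T @ a
        spectrum[i] = 1.0 / np.real(denom)      # eq. (14)
    return spectrum

# The AoA estimates are the angles of the num_sources largest peaks, e.g.
# angles_deg[np.argsort(spectrum)[-num_sources:]] after simple peak picking.
```

And the RSSI ranging and trilateration steps described earlier in this section (equations (2)–(7)) can be combined in a similarly compact way. The following sketch is illustrative only: the function names are placeholders, the numeric values in the example are hypothetical, and in the paper the path loss parameters are obtained with MATLAB's fitting functions rather than hard-coded.

```python
import numpy as np

def rssi_to_distance(p_r_dbm, p_r_d0_dbm, gamma, d0=1.0):
    """Invert the log-distance path loss model of eq. (2), dropping the
    zero-mean fading term X_g (its expectation is zero)."""
    return d0 * 10 ** ((p_r_d0_dbm - p_r_dbm) / (10.0 * gamma))

def trilaterate(anchors, distances):
    """Least-squares receiver position from eqs. (3)-(7).

    anchors: (3, 2) array with the router coordinates (x_i, y_i).
    distances: length-3 array with the ranges d_i estimated from RSSI.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    # Subtract the third circle equation from the first two -> A X = b.
    A = np.array([[2 * (x3 - x1), 2 * (y3 - y1)],
                  [2 * (x3 - x2), 2 * (y3 - y2)]])
    b = np.array([d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2,
                  d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2])
    # X = (A^T A)^{-1} A^T b, eq. (7); lstsq is the numerically safer form.
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical example: gamma and P_R(d0) would come from the per-room fit.
anchors = np.array([[0.0, 0.0], [12.0, 0.0], [0.0, 12.0]])
dists = [rssi_to_distance(p, p_r_d0_dbm=-40.0, gamma=2.5) for p in (-62.0, -58.0, -65.0)]
print(trilaterate(anchors, np.array(dists)))
```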
§.§ Hybrid Positioning Method based on RSSI and AoA To decrease the error and obtain more precise receiver location information, a hybrid positioning method utilizing both RSSI and AoA values is introduced. The measured AoA values are incorporated into the calculation of the coordinates of the receiver point. The RSSI values from the three WiFi routers are measured at the receiver point and substituted into the log-distance path loss model to calculate the distances from the WiFi routers to the receiver point, denoted d_1, d_2, and d_3 respectively. Meanwhile, the MUSIC algorithm is used to measure the AoA values of the signals from the three WiFi routers at the receiver point, denoted θ_1, θ_2 and θ_3 respectively. The locations of the three WiFi routers are fixed at (x_1,y_1), (x_2,y_2) and (x_3,y_3). Finally, the measured RSSI and AoA values are used to calculate the coordinates of the receiver point according to the equations below. x_R1=x_1+d_1×sin(θ_1) y_R1=y_1+d_1×cos(θ_1) x_R2=x_2+d_2×sin(θ_2) y_R2=y_2-d_2×cos(θ_2) x_R3=x_3-d_3×sin(θ_3) y_R3=y_3-d_3×cos(θ_3) To better determine the location of the receiver point, the average of (x_R1,y_R1), (x_R2,y_R2) and (x_R3,y_R3) is taken as the coordinates of the receiving point. Theoretically, (x_R1,y_R1), (x_R2,y_R2) and (x_R3,y_R3) are equal; in practice, noise and interference in real indoor environments degrade the performance of the MUSIC algorithm and thus affect the AoA values. Fig. 3 shows the outcome of the hybrid positioning method: the location of the receiving point lies within the blue triangle. Compared with the RSSI-based Trilateration method, the hybrid positioning method reduces the error because the AoA values provide more location information than the RSSI values alone. §.§ Deep Learning Methods Deep Learning algorithms have demonstrated good performance in regression, prediction, and classification tasks. Neural networks mitigate the NLoS problem and the impact of the multipath effect by adjusting the weights of their internal neurons, and they can learn and extract features from the data to carry out the required computation <cit.>. Therefore, two models are built, one based on the RSSI-based Trilateration method and one on the hybrid positioning method. The neural network increases the accuracy of indoor positioning by training the model to obtain the optimal solution. The RSSI values are used as the training data in the RSSI-based Trilateration model, while both the RSSI and AoA values are used as the training data in the hybrid positioning model. The training targets are the coordinates of the test points in the test environments. There are three main steps in applying the Deep Learning methods: (1) build the two models using the reference points; (2) train the models using Deep Learning algorithms; (3) test the performance and calculate the errors. In the first step, the fixed locations of the three WiFi routers are used as the reference points in each test environment. The models are then trained with different Deep Learning algorithms, yielding different results. Three types of Deep Learning algorithms are used in this paper: BPNN, RBF and CNN.
Finally, the test data sets are used to assess the performance of the algorithms, and the mean absolute error (MAE) is used to record the errors. § EXPERIMENT §.§ Experimental Environments To evaluate the accuracy of the Deep Learning algorithms and examine the errors in different indoor environments, the experiments are conducted in three indoor scenarios: a big classroom, a corridor and a small classroom, as shown in Fig. 4. The dimensions (L × W) of the big classroom are 13 m × 13 m, those of the corridor are 12 m × 4 m, and those of the small classroom are 9 m × 7 m. The three WiFi routers acting as transmitters are located at the corners of each test environment. The laptop and the AD-FMCOMMS3-EBZ with the Zedboard are used as the receiver. Ten test points are used as receiver locations. The coordinates of the WiFi routers and the test points are listed in Appendix A. §.§ Experimental Procedures The experimental procedure consists of the following steps. (1) The path loss coefficient and log-normal fading variance are calculated for each test environment. The values calculated in both directions of each test environment are averaged, as reported in Table I and Table II. (2) The WiFi analyzer software on the laptop is used to measure the RSSI values from the three WiFi routers. Theoretical RSSI values for the test points are calculated from the known distances, path loss coefficients and log-normal fading variances. A small percentage of the measured data deviates so strongly from the theoretical values that it is classified as outliers. Eliminating outliers not only improves the accuracy of the Deep Learning models, but also reduces the negative impact of unreasonable data on the experimental results. Therefore, when the absolute difference between the theoretical and measured RSSI value is too large, the RSSI values are remeasured. (3) The hardware, together with the additional functions in MATLAB, captures the raw I & Q signal waveform, which serves as the input of the MUSIC algorithm. The MUSIC algorithm analyzes the waveform and calculates the AoA values. Theoretical AoA values can be calculated from the coordinates of the WiFi routers and the receiver point. To reduce the negative influence of unreasonable experimental data, the AoA values are recalculated and remeasured when the absolute difference between the measured and theoretical value is large. (4) To capture the variation of the RSSI and AoA values at each test point, 500 groups of RSSI and AoA values are collected per test point as the training data set for the Deep Learning algorithms. (5) The Deep Learning algorithms are used to train the models and to calculate the coordinates and errors. The whole procedure is shown in Fig. 5. First, the coordinates of the three WiFi routers are used as reference points. Secondly, 80% of the input data sets are assigned to the training set, while the remaining 20% form the test set. To reduce the computational load and calculation time, the data sets are normalized; after normalization, data flattening is additionally employed for the CNN algorithm. Thirdly, the parameters of the neural networks need to be defined.
Then the neural networks are constructed. The neural networks reduce the impacts of the NLoS problem and the multipath effect by adjusting the weights of their neurons during training. Repeated training passes help the neural networks find the most suitable weights, and training stops when the number of training passes reaches the defined number of iterations. The data sets are then denormalized to return to their original ranges. Finally, the coordinates calculated by the Deep Learning algorithms are compared with the theoretical values. The distance between the calculated and theoretical coordinates is defined as the error, and the mean absolute error (MAE) over all test points is calculated for each of the three test environments. § RESULTS AND DISCUSSIONS §.§ The Results of the RSSI-based Trilateration Model In this model, the WiFi analyzer is used to measure the RSSI values from the three WiFi routers. The method of Section III is used within the neural networks to calculate the coordinates of the test points, while the neural networks reduce the negative impacts of the NLoS problem, the multipath effect and other disturbances by adjusting the weights of their neurons. The mean absolute error (MAE) is used to assess the performance. The errors for the three test environments are shown in Fig. 6 and Table III; to describe the errors precisely, they are reported in millimeters. From Table III, CNN performs best compared to BPNN and RBF. Additionally, from Fig. 6, the errors in the big classroom are the smallest, while the errors in the small classroom are the largest. The big classroom has fewer obstacles and large dimensions, so there are fewer signal reflections, whereas the small classroom contains more obstacles. Due to the obstacles, the reflected signal travels a longer distance than the direct-path signal to reach the receiving point. The NLoS problem therefore leads to increased energy loss during signal propagation, resulting in a lower measured RSSI value compared to line-of-sight (LoS) propagation <cit.>, which adversely affects positioning accuracy. In addition, WiFi analyzer is open-source software that obtains information about WiFi signals through a wireless network card. Wireless network cards are designed primarily for establishing wireless connections rather than for providing highly accurate signal measurements, so WiFi analyzer may measure RSSI values with relatively low precision. Compared with the traditional RSSI-based Trilateration approach of <cit.>, the precision of the RSSI-based Trilateration method combined with neural networks is improved, for three reasons. First, instead of relying on empirical values found on websites, the path loss coefficients and log-normal fading variances calculated for each test environment are substituted into equation (2) to compute the distances from the WiFi routers to the test points. In addition, the RSSI data correction procedure is used to clean up measured RSSI outliers and mitigate the negative effect of inaccurate experimental data on positioning accuracy. Most importantly, the neural networks adjust the weights of their neurons to reduce the negative effects of noise and interference, such as the NLoS problem, in indoor environments.
Neural networks also extract features from large amounts of data to identify and classify patterns, handling a large number of calculations. §.§ The Results of the Hybrid Positioning Model In this model, the MUSIC algorithm analyzes the raw I & Q signal waveform captured by the AD-FMCOMMS3-EBZ with the Zedboard and calculates the AoA values. The RSSI values and the AoA values are used as the input data of this model. The method of Section III, combined with the Deep Learning algorithms, is used to calculate the coordinates of the test points. The neural networks decrease the negative influence of noise and interference by adjusting the weights of the neurons in the output layers and in the layers before them. The errors for the three test environments are shown in Fig. 7 and Table IV. From Table IV, CNN performs best among the Deep Learning algorithms. In addition, the errors in the big classroom are the smallest, below 300 mm, while the errors in the small classroom remain large, at approximately 500 mm. There are two main reasons why the errors in the small classroom are larger than those in the big classroom for this model. First, the NLoS problem has a negative impact on the AoA measurements. In an indoor environment with many obstacles, such as the small classroom, the signal strength is attenuated due to the long propagation path and the presence of obstacles. The attenuated signal may increase the noise in the measurement, which affects the accuracy of the AoA values. Secondly, in NLoS environments the signal not only propagates along the direct path but also reaches the test point through reflection and scattering. As a result, the AoA of the reflected and scattered signal differs from the AoA of the signal traveling along the direct path. The measured AoA values in the small classroom may therefore deviate more from the theoretical values, which degrades the positioning accuracy <cit.>. Compared with the results of the RSSI-based Trilateration model, the errors in all three test environments are reduced, which indicates that the hybrid positioning method achieves high accuracy. The hybrid positioning method measures the AoA values with high accuracy, and the AoA values provide more accurate positioning information, resulting in higher positioning accuracy. In addition, the MUSIC algorithm is a high-resolution algorithm that efficiently calculates the AoA values from the signal waveform. However, when the three WiFi routers transmit at the same time, their signals are superimposed at the receiving end, which may cause phase difference variations and interference effects that reduce the precision of the calculated AoA values <cit.>. The movement of people and objects in indoor environments can also negatively affect the AoA measurements and thereby the positioning accuracy. §.§ Comparison and Discussions Comparing the results of the previous two sections, it is clear that the errors of the hybrid positioning model are smaller than those of the RSSI-based Trilateration model, regardless of the test environment and the Deep Learning algorithm used. Therefore, the improvement between the two models is calculated for the three Deep Learning algorithms and the three test environments, as reported in Table V, Table VI and Table VII.
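The text does not spell out how the improvement figures in Tables V–VII are computed; a natural reading is the relative reduction in MAE of the hybrid model with respect to the trilateration model, sketched below under that assumption with hypothetical example values.

```python
def relative_improvement(mae_trilateration_mm, mae_hybrid_mm):
    """Assumed definition: percentage reduction in MAE when moving from the
    RSSI-based Trilateration model to the hybrid RSSI+AoA model."""
    return 100.0 * (mae_trilateration_mm - mae_hybrid_mm) / mae_trilateration_mm

# Hypothetical values, not taken from the paper's tables:
print(relative_improvement(450.0, 300.0))  # ~33.3 % improvement
```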
Analyzing Table V to Table VII, the average improvement in the big classroom is greater than that in the small classroom. The reason is that the NLoS problem harms the accuracy of the measured AoA values less in a big classroom with few obstacles than in a small classroom with more obstacles. The measured AoA values are therefore more accurate in the big classroom, which also influences the precision of indoor positioning. Comparing the improvement achieved by the three Deep Learning algorithms, CNN yields the largest improvement, followed by BPNN, while RBF yields the smallest. Compared with RBF, CNN has the following advantages. First, CNN automatically learns and extracts features from the input data through a combination of multi-layer convolution and pooling layers, whereas RBF networks usually require manual design and selection of appropriate Radial Basis functions for feature extraction <cit.>. CNN can therefore be trained on large-scale data with the Back-Propagation algorithm and efficiently carry out a large number of computations. In addition, CNN has multiple fully connected layers while RBF has only one hidden layer, so CNN has more neurons than RBF and can better reduce the influence of noise and interference on localization precision by adjusting their weights. Although the improvement achieved by BPNN is good, that of CNN is better. First, CNN efficiently captures the local features of the input data through convolutional operations and a parameter-sharing mechanism, which makes it more advantageous when dealing with multidimensional data, whereas BPNN requires more parameters and computational resources to process such data. The dimension of the input data is three for the RSSI-based Trilateration model and six for the hybrid positioning model, so the errors of CNN are lower than those of BPNN. In addition, CNN uses a multi-layer structure to learn abstract characteristics of the input data layer by layer, whereas BPNN is a shallow, fully connected network that does not learn features layer by layer. The layered structure of CNN enables it to automatically learn hierarchical representations of the input data, enhancing the model's capacity to comprehend the data <cit.>. Finally, BPNN has two advantages over RBF. First, BPNN better captures complex nonlinear data relationships through nonlinear activation functions and multi-layer connections, whereas RBF has limited nonlinear fitting capability and is more suitable for relatively simple data relationships <cit.>. The relationship between the RSSI values and the coordinates of the test points is nonlinear, as is the relationship between the inputs and outputs of the hybrid positioning model; BPNN processes such nonlinear data better and thus attains a lower error. In addition, BPNN has multiple hidden layers, which increases the learning capability of the network, while RBF has only one hidden layer. BPNN therefore has more neurons than RBF, which more effectively reduces the negative effects of the NLoS problem and the multipath effect by adjusting the neuron weights.
Therefore, the learning ability of RBF is poorer and it yields a larger error. § CONCLUSION In this paper, WiFi indoor positioning has been investigated. In addition to the RSSI-based Trilateration method, a hybrid RSSI and AoA-based positioning method is proposed to improve the positioning accuracy. Furthermore, to minimize the negative effects of noise and interference in indoor environments, neural networks are used to train both the RSSI-based Trilateration model and the hybrid positioning model. The experimental results show that the errors of the hybrid positioning model are smaller than those of the RSSI-based Trilateration model. Among the three Deep Learning algorithms, CNN achieves the smallest errors. Among the three test environments, the errors in the big classroom are the smallest and those in the small classroom are the largest. The smallest error, below 250 mm, is obtained when CNN is used to train the hybrid positioning model in the big classroom. § APPENDIX.A
http://arxiv.org/abs/2307.00306v1
20230701112853
SyMFM6D: Symmetry-aware Multi-directional Fusion for Multi-View 6D Object Pose Estimation
[ "Fabian Duffhauss", "Sebastian Koch", "Hanna Ziesche", "Ngo Anh Vien", "Gerhard Neumann" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Detecting objects and estimating their 6D poses is essential for automated systems to interact safely with the environment. Most 6D pose estimators, however, rely on a single camera frame and suffer from occlusions and ambiguities due to object symmetries. We overcome this issue by presenting a novel symmetry-aware multi-view 6D pose estimator called SyMFM6D. Our approach efficiently fuses the RGB-D frames from multiple perspectives in a deep multi-directional fusion network and predicts predefined keypoints for all objects in the scene simultaneously.
Based on the keypoints and an instance semantic segmentation, we efficiently compute the 6D poses by least-squares fitting. To address the ambiguity issues for symmetric objects, we propose a novel training procedure for symmetry-aware keypoint detection including a new objective function. Our SyMFM6D network significantly outperforms the state-of-the-art in both single-view and multi-view 6D pose estimation. We furthermore show the effectiveness of our symmetry-aware training procedure and demonstrate that our approach is robust towards inaccurate camera calibration and dynamic camera setups. § INTRODUCTION Estimating the 6D poses of objects is an essential computer vision task which is widely used in robotics <cit.>, automated driving <cit.>, and augmented reality <cit.>. In recent years, 6D pose estimators have made significant progress based on deep neural network architectures which rely on a single RGB image <cit.>, on a single point cloud <cit.>, or fuse both <cit.>. Single-view methods, however, have problems detecting objects which are occluded by other objects. These problems can be overcome by considering data from multiple perspectives. Fusing multi-view data can significantly improve the accuracy and robustness of environmental understanding in complex scenarios, which can enable more flexible production and assembly processes, among other applications. There are already a few methods that consider multi-view data <cit.> which are, however, computationally expensive and not designed for scenes with strong occlusions. Moreover, most methods suffer from symmetric objects as they have multiple 6D poses with same visual and geometric appearance, causing most learning-based estimators to average over these multiple solutions. We present a novel Symmetry-aware Multi-directional Fusion approach for Multi-view 6D pose estimation called SyMFM6D which overcomes the previously mentioned issues. <ref> shows an overview of our system. SyMFM6D exploits the visual and geometric information from an arbitrary number of RGB-D images depicting a scene from multiple perspectives. We propose a deep multi-directional fusion network which fuses the multi-view RGB-D data efficiently and learns a compact representation of the entire scene. Our approach predicts the 6D poses of all objects in the scene simultaneously based on keypoint detection, instance semantic segmentation, and least-squares fitting. Furthermore, we present a novel symmetry-aware training procedure including a novel objective function which significantly improves the keypoint detection. Our experiments demonstrate a large benefit of the proposed symmetry-aware training procedure, improving the accuracy of both symmetric and non-symmetric objects due to synergy effects. Thus, our approach outperforms the state-of-the-art in single-view 6D pose estimation. SyMFM6D also outperforms the state-of-the-art multi-view approach while being computationally more efficient. We furthermore show that our approach works accurately in both fixed and dynamic camera settings. Moreover, our method is robust towards inaccurate camera calibration by compensating imprecise camera pose information when using multiple views. Our main contributions are: [label=] * We propose a novel multi-directional multi-view fusion network for efficient representation learning of multiple RGB-D frames and present a novel multi-view 6D pose estimation method based on it. 
* We present a novel symmetry-aware training procedure for 3D keypoint detection based on a symmetry-aware objective function. * We present a novel synthetic dataset with photorealistic multi-view data and labels for 6D pose estimation as well as instance semantic segmentation. * We demonstrate significant improvements and synergy effects due to our symmetry-aware training procedure on challenging datasets including symmetric and non-symmetric objects. * Our method outperforms the state-of-the-art in single-view and multi-view 6D object pose estimation. We further demonstrate the robustness of our approach towards inaccurate camera calibration and dynamic camera setups. § RELATED WORK Over the last few years, there has been significant progress in the area of 6D pose estimation. We now discuss the most important milestones subdivided into single-view methods, multi-view methods, and symmetry-aware methods. §.§ Single-View 6D Pose Estimation The methods in this family require only a single input modality, which can be RGB, point cloud, or RGB-D. Traditional pose estimators using a single RGB image are mostly feature-based <cit.> or based on template matching <cit.>. Especially the former group of methods are often multi-staged and first extract local features from the given RGB image before matching the 2D-3D-correspondences to estimate the object's pose using a Perspective-n-Point (PnP) algorithm <cit.>. End-to-end trainable neural networks directly predict object poses without requiring multiple stages <cit.>. These methods share similar ideas to exploit differentiable PnP or differentiable rendering techniques. The recent advance of LiDAR and depth sensors promoted the proposal of methods based on a single point cloud <cit.>. These methods apply either 3D convolutions <cit.>, or variations of PointNet <cit.> as backbone <cit.>. The authors of <cit.> and <cit.> introduce and further improve voting techniques for 3D object detection. However, since point cloud based methods cannot extract texture information, their application range is limited. In contrast, RGB-D based approaches can combine the advantages of both modalities. For instance, <cit.> and <cit.> fuse an RGB image with a LiDAR point cloud by applying networks for convolutional feature extraction and for generating 3D object proposals. The approaches proposed in <cit.>, <cit.>, and <cit.> separately process the RGB image by a CNN and the point cloud by a PointNet-based network before fusing the appearance features and the geometric features with a dense fusion network. In <cit.> and <cit.> the authors employ a deep Hough voting network for 3D keypoint detection before estimating 6D poses by least-squares fitting <cit.>. However, most previous methods do not consider object symmetries and suffer from strong occlusions. §.§ Multi-View 6D Pose Estimation Multi-view pose estimators consider multiple RGB(-D) frames showing the same scene from different perspectives in order to reduce the effect of occlusions and to improve the 6D pose estimation accuracy. The approach proposed in <cit.> first segments all frames with a CNN before aligning the known 3D models with the segmented object point cloud to estimate their poses. The authors of <cit.> present an end-to-end trainable CNN-based architecture based on a single RGB or RGB-D image. They perform the single-view pose estimation multiple times with images from different viewpoints before selecting the best hypothesis using a voting score that suppresses outliers. 
In <cit.> the authors propose a three-stage approach which first employs a CNN for generating object candidate proposals for each view independently. Secondly, they conduct a candidate matching considering the predictions of all views before finally performing a refinement procedure based on object-level bundle adjustment <cit.>. The approach of <cit.> directly fuses the features from multiple RGB-D views before predicting the poses based on keypoints and least-squares fitting <cit.>. However, the method uses a computationally expensive feature extraction and fusion network which does not consider object symmetries and it is evaluated on only synthetic datasets. §.§ Symmetry-aware 6D Pose Estimation Symmetric objects are known to be a challenge for 6D pose estimation approaches due to ambiguities <cit.>. Different techniques have been proposed to address this issue. The authors of <cit.> and <cit.> propose to utilize an additional output channel to classify the type of symmetry and its domain range. In <cit.>, a loss is introduced that is the smallest error among symmetric pose proposals in a finite pool of symmetric poses. In <cit.> the authors propose to use compact surface fragments as a compositional way to represent objects. As a result, this representation can easily allow handling of symmetries. The authors of <cit.> employ an additional symmetry prediction as output, and an extra refining step of predicted symmetry via an optimization function. A novel output space representation for CNNs is presented in <cit.> where symmetrical equivalent poses are mapped to the same values. In <cit.> the authors introduce a compact shape representation based on grouped primitives to handle symmetries. However, non of these methods outperforms the keypoint-based methods <cit.> and <cit.>, even though they do not consider object symmetries. In contrast, our method extends current keypoint based methods to consider object symmetries, and consequently outperforms all previous methods on single and multi-view scenes. § PROPOSED METHOD: SYMFM6D We propose a deep multi-directional fusion approach called SyMFM6D that estimates the 6D object poses of all objects in a cluttered scene based on multiple RGB-D images while considering object symmetries. In this section, we define the task of multi-view 6D object pose estimation and present our multi-view deep fusion architecture. 6D object pose estimation describes the task of predicting a rigid transformation p = [ R | t] ∈ SE(3) which transforms the coordinates of an observed object from the object coordinate system into the camera coordinate system. This transformation is called 6D object pose because it is composed of a 3D rotation R ∈ SO(3) and a 3D translation t ∈ℝ^3. The designated aim of our approach is to jointly estimate the 6D poses of all objects in a given cluttered scene using multiple RGB-D images which depict the scene from multiple perspectives. We assume the 3D models of the objects and the camera poses to be known as proposed by <cit.>. §.§ Network Overview Our symmetry-aware multi-view network consists of three stages which are visualized in <ref>. The first stage receives one or multiple RGB-D images and extracts visual features as well as geometric features which are fused to a joint representation of the scene. The second stage performs a detection of predefined 3D keypoints and an instance semantic segmentation. 
Based on the keypoints and the information to which object the keypoints belong, we compute the 6D object poses with a least-squares fitting algorithm <cit.> in the third stage. §.§ Multi-View Feature Extraction To efficiently predict keypoints and semantic labels, the first stage of our approach learns a compact representation of the given scene by extracting and merging features from all available RGB-D images in a deep multi-directional fusion manner. For that, we first separate the set of RGB images RGB_1, ..., RGB_N from their corresponding depth images Dpt_1, ..., Dpt_N. The N depth images are converted into point clouds, transformed into the coordinate system of the first camera, and merged to a single point cloud using the known camera poses as in <cit.>. Unlike <cit.>, we employ a point cloud network based on RandLA-Net <cit.> with an encoder-decoder architecture using skip connections. The point cloud network learns geometric features from the fused point cloud and considers visual features from the multi-directional point-to-pixel fusion modules as described in <ref>. The N RGB images are independently processed by a CNN with encoder-decoder architecture using the same weights for all N views. The CNN learns visual features while considering geometric features from the multi-directional pixel-to-point fusion modules. We followed <cit.> and build the encoder upon a ResNet-34 <cit.> pretrained on ImageNet <cit.> and the decoder upon a PSPNet <cit.>. After the encoding and decoding procedures including several multi-view feature fusions, we collect the visual features from each view corresponding to the final geometric feature map and concatenate them. The output is a compact feature tensor containing the relevant information about the entire scene which is used for keypoint detection and instance semantic segmentation as described in <ref>. §.§ Multi-View Feature Fusion In order to efficiently fuse the visual and geometric features from multiple views, we extend the fusion modules of FFB6D <cit.> from bi-directional fusion to multi-directional fusion. We present two types of multi-directional fusion modules which are illustrated in <ref>. Both types of fusion modules take the pixel-wise visual feature maps and the point-wise geometric feature maps from each view, combine them, and compute a new feature map. This process requires a correspondence between pixel-wise and point-wise features which we obtain by computing an XYZ map for each RGB feature map based on the depth data of each pixel using the camera intrinsic matrix as in <cit.>. To deal with the changing dimensions at different layers, we use the centers of the convolutional kernels as new coordinates of the feature maps and resize the XYZ map to the same size using nearest interpolation as proposed in <cit.>. The point-to-pixel fusion module in <ref> computes a fused feature map F_f based on the image features F_i(v) of all views v ∈{1, …, N}. We collect the K_p nearest point features F_p_k(v) with k ∈{1, …, K_p} from the point cloud for each pixel-wise feature and each view independently by computing the nearest neighbors according to the Euclidean distance in the XYZ map. Subsequently, we process them by a shared MLP before aggregating them by max-pooling, i.e., F_p(v) = max_k ∈{1, …, K_p}( MLP_p( F_p_k(v)) ). Finally, we apply a second shared MLP to fuse all features F_i and F_p as F_f = MLP_fp( F_p⊕ F_i) where ⊕ denotes the concatenate operation. 
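A rough PyTorch-style sketch of this point-to-pixel fusion step is given below. It is our own illustrative reconstruction, not the authors' released code: tensor shapes, module names, and the assumption that the k-nearest-neighbour indices from the XYZ map are precomputed and passed in are simplifications.

```python
import torch
import torch.nn as nn

class PointToPixelFusion(nn.Module):
    """Illustrative fusion of per-view image features with nearby point features."""
    def __init__(self, c_img, c_pts, c_out, k=16):
        super().__init__()
        self.k = k
        self.mlp_p = nn.Sequential(nn.Conv2d(c_pts, c_out, 1), nn.ReLU())
        self.mlp_fp = nn.Sequential(nn.Conv2d(c_img + c_out, c_out, 1), nn.ReLU())

    def forward(self, img_feat, pts_feat, knn_idx):
        # img_feat: (B, C_img, P) pixel-wise features of one view (P pixels)
        # pts_feat: (B, C_pts, N) point-wise features of the fused cloud
        # knn_idx:  (B, P, k) indices of the k nearest points for each pixel
        B, C, N = pts_feat.shape
        idx = knn_idx.reshape(B, -1)                                  # (B, P*k)
        gathered = torch.gather(
            pts_feat, 2, idx.unsqueeze(1).expand(-1, C, -1))          # (B, C, P*k)
        gathered = gathered.view(B, C, -1, self.k)                    # (B, C, P, k)
        # Shared MLP on neighbouring point features, max-pooled over the k neighbours.
        pooled = self.mlp_p(gathered).max(dim=-1).values              # (B, C_out, P)
        # Concatenate F_p with F_i and apply the second shared MLP (F_f = MLP_fp(F_p ⊕ F_i)).
        fused = torch.cat([pooled, img_feat], dim=1).unsqueeze(-1)    # (B, C_out+C_img, P, 1)
        return self.mlp_fp(fused).squeeze(-1)                         # (B, C_out, P)
```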
The pixel-to-point fusion module in <ref> collects the K_i nearest image features F_i_k(i2v(i_k)) with k∈{1, ..., K_i}. i2v(i_k) is a mapping that maps the index of an image feature to its corresponding view. This procedure is performed for each point feature vector F_p(n). We aggregate the collected image features by max-pooling and apply a shared MLP, i.e., F_i = MLP_i( max_k ∈{1, …, K_i}( F_i_k(i2v(i_k)) ) ). One more shared MLP fuses the resulting image features F_i with the point features F_p as F_f = MLP_fi( F_i⊕ F_p). §.§ Keypoint Detection and Segmentation The second stage of our SyMFM6D network contains modules for 3D keypoint detection and instance semantic segmentation following <cit.>. However, unlike <cit.>, we use the SIFT-FPS algorithm <cit.> as proposed by FFB6D <cit.> to define eight target keypoints for each object class. SIFT-FPS yields keypoints with salient features which are easier to detect. Based on the extracted features, we apply two shared MLPs to estimate the translation offsets from each point of the fused point cloud to each target keypoint and to each object center. We obtain the actual point proposals by adding the translation offsets to the respective points of the fused point cloud. Applying the mean shift clustering algorithm <cit.> results in predictions for the keypoints and the object centers. We employ one more shared MLP for estimating the object class of each point in the fused point cloud as in <cit.>. §.§ 6D Pose Computation via Least-Squares Fitting Following <cit.>, we use the least-squares fitting algorithm <cit.> to compute the 6D poses of all objects based on the estimated keypoints. As the M estimated keypoints k_1, ..., k_M are in the coordinate system of the first camera and the target keypoints k_1, ..., k_M are in the object coordinate system, least-squares fitting calculates the rotation matrix R and the translation vector t of the 6D pose by minimizing the squared loss L_Least-squares = ∑_i=1^M k_i - ( R k_i + t)_2^2. §.§ Symmetry-aware Keypoint Detection Most related work, including <cit.>, and <cit.> does not specifically consider object symmetries. However, symmetries lead to ambiguities in the predicted keypoints as multiple 6D poses can have the same visual and geometric appearance. Therefore, we introduce a novel symmetry-aware training procedure for the 3D keypoint detection including a novel symmetry-aware objective function to make the network predicting either the original set of target keypoints for an object or a rotated version of the set corresponding to one object symmetry. Either way, we can still apply the least-squares fitting which efficiently computes an estimate of the target 6D pose or a rotated version corresponding to an object symmetry. To do so, we precompute the set S_I of all rotational symmetric transformations for the given object instance I with a stochastic gradient descent algorithm <cit.>. Given the known mesh of an object and an initial estimate for the symmetry axis, we transform the object mesh along the symmetry axis estimate and optimize the symmetry axis iteratively by minimizing the ADD-S metric <cit.>. Reflectional symmetries which can be represented as rotational symmetries are handled as rotational symmetries. Other reflectional symmetries are ignored, since the reflection cannot be expressed as an Euclidean transformation. To consider continuous rotational symmetries, we discretize them into 16 discrete rotational symmetry transformations. 
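As an illustration of how such a discretized symmetry set and the closest-symmetry reduction described in this subsection might look in code, consider the sketch below. It is a simplified stand-in for the authors' procedure: the symmetry axis is assumed to be given rather than optimized with stochastic gradient descent against ADD-S, and all names and shapes are our own.

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues' formula: 3x3 rotation of `angle` radians about unit vector `axis`."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def discretized_symmetry_set(axis, n_steps=16):
    """Discretize a continuous rotational symmetry into n_steps rotations (the set S_I)."""
    return [rotation_about_axis(axis, 2 * np.pi * k / n_steps) for k in range(n_steps)]

def symmetry_aware_keypoint_error(pred_offsets, gt_offsets, sym_set):
    """Offset error against the closest symmetric copy of the target offsets.

    pred_offsets: (N, M, 3) predicted per-point offsets to the M keypoints.
    gt_offsets:   (N, M, 3) ground-truth offsets for the canonical pose.
    sym_set:      list of 3x3 rotations (including the identity).
    """
    errors = [np.linalg.norm(pred_offsets - gt_offsets @ S.T, axis=-1).sum()
              for S in sym_set]
    return min(errors) / pred_offsets.shape[0]  # min over symmetries, averaged over points
```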
We extend the keypoints loss function of <cit.> to become symmetry-aware such that it predicts the keypoints of the closest symmetric transformation, i.e. L_kp(ℐ) = 1/N_Imin_S∈S_I∑_i ∈ℐ∑_j=1^M x_ij - Sx_ij_2, where N_I is the number of points in the point cloud for object instance I, M is the number of target keypoints per object, and ℐ is the set of all point indices that belong to object instance I. The vector x_ij is the predicted keypoint offset for the i-th point and the j-th keypoint while x_ij is the corresponding ground truth. §.§ Objective Function We train our network by minimizing the multi-task loss function L_multi-task = λ_1 L_kp + λ_2 L_semantic + λ_3 L_cp, where L_kp is our symmetry-aware keypoint loss from <ref>. L_cp is an L1 loss for the center point prediction, L_semantic is a Focal loss <cit.> for the instance semantic segmentation, and λ_1=2, λ_2=1, and λ_3=1 are the weights for the individual loss functions as in <cit.>. § EXPERIMENTS To demonstrate the performance of our method in comparison to related approaches, we perform extensive experiments on four very challenging datasets. §.§ Datasets The YCB-Video dataset <cit.> contains a total of 133,827 RGB-D images showing 92 scenes composed of three to nine objects from the 21 Yale-CMU-Berkeley (YCB) objects <cit.>. Additionally, there are 80,000 synthetic non-sequential frames showing a random subset of the YCB objects placed at random positions. However, most frames from YCB-Video are very similar because they originate from videos with 30 frames per second recorded by a handheld camera that was moved slowly. The videos also do not show the scene from all sides but just from similar perspectives. Furthermore, the scenes do not include strong occlusions, and hence, most object poses are simple to estimate from a single perspective. Therefore, we additionally consider the recently proposed photorealistic synthetic datasets MV-YCB FixCam and MV-YCB WiggleCam <cit.> as they contain much more difficult scenes with strong occlusions and diverse camera perspectives. Both datasets depict 8,333 cluttered scenes composed of eleven non-symmetric YCB objects which are randomly arranged so that strong occlusions occur. Each scene is photorealistically rendered from three very different perspectives providing 24,999 RGB-D images with accurate ground truth annotations. Unlike FixCam which uses fixed camera positions while providing accurate camera poses, WiggleCam has varying camera poses which are inaccurately annotated on purpose. Since FixCam and WiggleCam contain only non-symmetric objects, we created an additional photorealistic synthetic dataset with symmetric and non-symmetric objects called MV-YCB SymMovCam using Blender with physically based rendering and domain randomization as in <cit.>. It also depicts 8,333 cluttered scenes, but they are composed of 8 – 16 objects randomly chosen from the 21 YCB objects which results in very strong occlusions. For each scene, we created four cameras at changing positions around the scene with the restriction that in each quadrant there is only one camera so that the perspectives are very distinct. This results in a total of 33,332 annotated RGB-D images. §.§ Training Procedure For training our model in single-view mode on YCB-Video, we randomly use the synthetic and real images of YCB-Video with a ratio of 4:1. Since consecutive real frames are very similar, we consider only every seventh real frame. 
For training a multi-view model, we start from the corresponding single-view checkpoint and continue training with batches of real YCB-Video frames. For training on FixCam and WiggleCam we follow <cit.> and use random permutations of the three available camera views. For SymMovCam, we take a random subset of three views from the available four views. §.§ Evaluation Metrics We evaluated our method using the area-under-curve (AUC) metrics for ADD-S and and the precision metrics ADD-S  2cm and  2cm as these metrics are most commonly used in related work <cit.>. §.§ Baseline Methods We compare our methods with many established and some very recent methods namely DenseFusion <cit.>, CosyPose <cit.>, PVN3D <cit.>, FFB6D <cit.>, ES6D <cit.>, and MV6D <cit.>. §.§ Results on YCB-Video <ref> compares the single-view performance of our SyMFM6D network with all baseline methods using the AUC of ADD-S and on YCB-Video. Please note that MV6D corresponds to PVN6D in the single-view scenario. The results show that our approach copes very well with the dynamic camera setup of YCB-Video while outperforming all methods significantly. On the symmetry-aware AUC metric, SyMFM6D outperforms the current state-of-the-art FFB6D by even 1.5%. Please note that unlike DenseFusion (iterative) and CosyPose, our approach does not perform computationally expensive post processing or iterative refinement procedures. To examine the effect of our symmetry-aware training procedure, we provide an object-wise evaluation of the three best single-view methods on YCB-Video in <ref>. Please note that in single-view mode, our model architecture is the same as FFB6D except for our novel symmetry-aware loss function. The results show that not only most symmetric objects (highlighted in bold) are estimated more accurate but also most non-symmetric objects. This indicates that there is a synergy effect which improves the keypoint detection for non-symmetric objects due to an improvement of the keypoint detection for symmetric objects. <ref> shows a visualization of three scenes of YCB-Video with 6D pose ground truth, predictions of FFB6D, and predictions of our SyMFM6D network using only the depicted view. It can be seen that both FFB6D and SyMFM6D estimate very accurate poses as the scenes of YCB-Video contain only a few objects and not many occlusions. However, SyMFM6D predicts even more accurate poses than FFB6D due to our proposed symmetry-aware training procedure. <ref> compares our multi-view results with all multi-view baseline methods on YCB-Video using three and five input views. We see that our approach with disabled symmetry training procedure already outperforms all previous multi-view methods significantly. Enabling the symmetry awareness further improves the results slightly. However, using more views does not improve the accuracy as most views of YCB-Video are very similar in which case additional views do not provide beneficial information while the learning problem of fusing different views becomes slightly harder. §.§ Results on MV-YCB FixCam, WiggleCam and SymMovCam We show the quantitative results on the datasets MV-YCB FixCam, MV-YCB WiggleCam, and MV-YCB SymMovCam in <ref>. It includes a comparison with two modified CosyPose (CP) versions with and without known camera poses as presented by <cit.>. Our SyMFM6D network yields the best results on all metrics on all three datasets. This shows that SyMFM6D copes very well with the strong occlusions in the datasets. 
The results on WiggleCam are just slightly worse than on FixCam which demonstrates that our approach is robust towards inaccurately known camera poses. On the novel SymMovCam dataset, our method outperforms the baselines by a much larger margin than on FixCam and WiggleCam. This is due to the symmetric objects in the datasets on which the keypoint estimation of the baseline methods is inaccurate. The results also prove that our approach is robust to very dynamic camera setups where the cameras are mounted at varying positions. §.§ Keypoint Visualization <ref> shows predicted keypoints of FFB6D and SyMFM6D in a YCB-Video scene. We additionally visualize the keypoint proposals of each object in individual colors. The resulting predicted keypoints are white, the target keypoints are black. You can see that both FFB6D and SyMFM6D predict very accurate keypoints on all non-symmetric objects. However, FFB6D fails to predict accurate keypoints on the large clamp which has one discrete rotational symmetry. This shortcoming of FFB6D is also apparent on other symmetric objects. We believe that this is caused by the ambiguities of the object poses resulting in ambiguous target keypoints which results in averaging over the multiple solutions given by the symmetry. Therefore, the training loss is minimized when predicting keypoints on the symmetric axis rather than predicting them on the desired target locations. SyMFM6D in contrast overcomes this problem by our novel symmetry-aware training procedure as it can be seen in <ref>. §.§ Implementation Details and Runtime We trained our network up to seven days on four NVIDIA Tesla V100 GPUs with 32GB of memory. The network architecture of our SyMFM6D approach has 3.5 million trainable parameters and requires about 46ms for processing a single RGB-D image on a single GPU. Mean shift clustering and least-squares fitting for computing a 6D pose require additional 14ms per object. Please visit our previously mentioned GitHub repository for code, datasets, and further details. § CONCLUSION In this work, we present SyMFM6D, a novel approach for symmetry-aware multi-view 6D object pose estimation based on a deep multi-directional fusion network for RGB-D data. We additionally propose a novel method for predicting predefined 3D keypoints of symmetric objects based on a symmetry-aware objective function. Using the 3D keypoint predictions and an instance semantic segmentation, we compute the 6D poses of all objects in the scene simultaneously with least-squares fitting. Our experiments show that our symmetry-aware training procedure significantly improves the 6D pose estimation accuracy of both symmetric and non-symmetric objects due to synergy effects. Our method outperforms the state-of-the-art in single-view and multi-view 6D pose estimation on four very challenging datasets. We furthermore demonstrate the robustness of our approach towards inaccurately known camera poses and dynamic camera setups. IEEEtran
http://arxiv.org/abs/2307.01866v1
20230704182515
Understanding User Behavior in Carousel Recommendation Systems for Click Modeling and Learning to Rank
[ "Santiago de Leon-Martinez" ]
cs.IR
[ "cs.IR", "cs.HC" ]
Understanding User Behavior in Carousel Recommendation Systems for Click Modeling and Learning to Rank Santiago de Leon-Martinez, Faculty of Information Technology, Brno University of Technology, Brno, Czechia; Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia. santiago.deleon@kinit.sk Carousels (also known as multilists) have become the standard user interface for e-commerce platforms, replacing the ranked list, the previous standard for recommender systems. While the research community has begun to focus on carousels, there are many unanswered questions and undeveloped areas when compared to the literature for ranked lists, which includes information retrieval research on the presentation of web search results. This work is an extended abstract for the RecSys 2023 Doctoral Symposium outlining a PhD project, with the main contribution of addressing the undeveloped areas in carousel recommenders: 1) the formulation of new click models and 2) learning to rank with click data. We present two significant barriers for this contribution and the field: lack of public datasets and lack of eye tracking user studies of browsing behavior. Clicks, the standard feedback collected by recommender systems, are insufficient to understand the whole interaction process of a user with a recommender, requiring system designers to make assumptions, especially on browsing behavior. Eye tracking provides a means to elucidate the process and test these assumptions. Thus, to address these barriers and encourage future work, we will conduct an eye tracking user study within a carousel movie recommendation setting and make the dataset publicly available. Moreover, the insights learned on browsing behavior will help motivate the formulation of new click models and learning to rank. [500]Information systems Recommender systems [300]Information systems Content ranking [300]Human-centered computing Human computer interaction (HCI) Received June 2023. § INTRODUCTION Over the years, work in recommender systems has extensively explored and developed the single ranked list. However, more recently, carousels (also known as multilists) have become the most prominent user interfaces for recommendation platforms, especially in e-commerce, to the point where it is difficult to find non-carousel interfaces for marketplaces or streaming services today. This is true across mobile and desktop interfaces, such as Netflix seen in Figure <ref>, and other popular platforms, such as Amazon, Ebay, HBO, and Spotify, that all use carousels on their homepages and during exploration of predefined categories or subcategories.
Carousels are predominant in this area of initial impressions and exploration, while specific searches still default to the single ranked list, whether in a 1D top-down or 2D grid format. The term carousels is used broadly in the literature (see e.g. <cit.>). For clarity, we define carousels as multiple distinct lists with the following properties: * each list has a different vertical position on the interface, * each list is governed by a topic, tag, or category, and * the lists generally present their items from left to right. The first and second properties distinguish carousels from a 2D ranked list by having a different topic for each row (in practice, the rows in the carousel are spatially separated and the topic is displayed as a header for the row) and by each row being a separate list. A 2D ranked list is a 1D ranked list that has been cut and stacked, generally following a ranking pattern, as seen in Figure <ref>. Most carousels also have an added element of complexity in that they allow the display of more items than those initially presented; these are called swipeable carousels <cit.>. This allows for even more diverse browsing behaviors and the possibility of generating within-topic online recommendations based on current page feedback. While there is already some research on carousels, it is currently hampered by two main barriers: 1) the lack of public datasets and 2) the lack of empirical studies of users' browsing behavior in carousel-based interfaces. In practice, the main problem in recommender systems is what items to present to a user interacting with the system, i.e., how to determine item relevance for a user (in a specific context). It was the availability of many public recommender datasets containing large amounts of user feedback <cit.> that propelled the research and discovery of a multitude of techniques for determining item relevance for users in a single ranked list, or of learning to rank, since all these methods use click or rating data for training and evaluation. For such advancements to also happen in carousel interfaces, multiple publicly available datasets of user feedback are needed. Another key piece of answering how to present items and determine relevance is how users interact with, or browse, the recommendation system interface. The assumption typically used for the single ranked list is that users browse from the top of the list to the bottom. This top-down browsing behavior and position bias can be seen in click data <cit.>, while eye tracking reinforces this with empirical evidence of how a user interacts directly with the interface. Single ranked lists have the advantage of being directly linked to the presentation of web search results in information retrieval, leveraging eye tracking studies performed within the web search domain <cit.> along with eye tracking studies within the recommender system domain <cit.>. Eye tracking studies provide a clear picture of how users interact with a system, what assumptions are reasonable to make, what biases are present, and how each should be accounted for when designing feedback models. For example, eye tracking studies at the time motivated the popular cascade click model <cit.>, which assumes that a user browses a list top-down, that items below the click are not observed, and that items before the click were examined (or skipped). However, determining true item examination before a click is not possible with only click data.
To build upon a cascade click model, the probability of termination, i.e., the user exiting, possibly due to unsatisfactory results seen so far, can also be included <cit.>. Both item examination and termination are elements that could feasibly be learned from eye tracking data (a possible direction for future work) rather than inferred probabilistically as is done in click models, but it is difficult to say how well eye tracking data generalize across users, groups, and tasks. It may not be necessary to address every possible sequence of interactions that may be learned from eye tracking data, which would lead to an overly complicated model, but it is imperative to confirm even the most basic assumptions and to discover whether there is a prominent interaction or sequence of interactions that would be beneficial to model. Eye tracking user studies were conducted for both search engine results pages and ranked list recommendation systems and paved the way for the extensive work that has improved how user feedback is modeled. Therefore, eye tracking studies are necessary stepping stones for the development of user feedback modeling in any interface, whether old or new, such as the carousel. While a general browsing behavior can be theorized in the case of a single 1D or 2D ranked list, the situation becomes much more complicated in the case of carousels, since a user can interact with topics in many ways. For example, Figure <ref> gives two possible browsing behaviors for a carousel interface: the first, which is similar to that of a 2D ranked list, and the second, in which a user finds Topic A undesirable and does not examine the first carousel, going directly to Topic B and its carousel. The latter is a topic-first browsing behavior, which was used in the first carousel click model <cit.>. However, in reality, browsing paths can be more complicated in the two-dimensional case. Kammerer and Gerjets <cit.> compared user browsing behavior between a 1D ranked list and a 3x3 2D ranked list across 80 subjects. For the 1D ranked list, they found that users tended to browse linearly from top to bottom, which was consistent with eye tracking studies of search engine results pages <cit.>. Their results for the 2D ranked list showed that users browsed in a non-linear fashion: rather than going row by row or column by column, the two were mixed. Zhao et al. <cit.> found similar results when eye tracking 17 subjects during task-based exploration of 2D ranked lists, where they confirmed the F-pattern gaze hypothesis, also known as the "golden triangle" <cit.>, shown in Figure <ref>. However, there are no eye tracking studies to date for carousel interfaces, only user studies <cit.>. For this reason, the research project's initial contribution will be the first eye tracking study within a carousel movie recommendation setting, not only to examine user behavior, but to use the insights learned for the two main contributions of designing better carousel click models and formulating learning to rank from click data, while also providing another dataset for evaluation. § RESEARCH QUESTIONS AND PROPOSED APPROACH This PhD research project seeks to address challenges in carousel recommendation and further advance the areas of carousel click models and learning to rank in carousels. Specifically, we aim to address the following identified open research questions: RQ 1: What are the common user browsing behaviors when presented with recommendations in a carousel, and how do genre preferences impact this browsing behavior?
RQ 2: How effective is a row-column position-based model for modeling user clicks, especially compared to the first carousel click model? RQ 3: How can we formulate and solve the problem of learning to rank directly from clicks in the carousel setting? The first step in this project is to collect more empirical evidence on how users interact with carousel interfaces. Therefore, we will conduct the first eye tracking user study of movie carousels, examining browsing behavior and the impact of genre preference on browsing (RQ 1). We plan to make this dataset publicly available, being – to the best of our knowledge – the first public eye tracking dataset within a (carousel) recommender system setting and the third public dataset of carousel recommenders <cit.>. We hope that making the dataset public will help advance the field, while also helping to inform the next steps of this research project and providing more data for model validation. After this first step of an eye tracking user study, the two main contributions of the project will involve addressing areas in which research on carousels is lagging behind that on ranked lists. The first is the advancement of carousel click models, in particular the formulation of a row-column position-based carousel click model (RQ 2); the second is learning to rank directly from clicks in a carousel interface (RQ 3). We reference (to the best of our knowledge) all prior and current works on carousel interfaces and recommendations as they relate to each research question and propose our approach for addressing them. §.§ RQ 1: Users' Browsing Behavior with Carousels With regard to research specific to recommender systems with carousel interfaces, there is much less work compared to non-carousel and web search settings. This is in part due to the presence of only a few public datasets <cit.>: 1) a dataset of n=974,960 anonymized user embeddings and n=862 playlist embeddings from Deezer, an online music streaming platform, provided in a simulation framework with a "ground truth" display-to-stream probability and no feedback data, and 2) a small user study of n=776 clicks in single ranked lists and carousels. Additionally, no eye tracking user study has been conducted with a carousel interface. As a consequence, most carousel studies have used synthetic data derived from the MovieLens and Netflix datasets <cit.> or created simulations of how a user may interact in a carousel setting. Two user studies (without eye tracking) have been conducted comparing ranked lists and carousel interfaces <cit.>. The small number of user studies and the lack of any eye tracking study leave a gap in the literature and in the understanding of how users interact with the carousel interface. Eye tracking in particular can reveal general browsing behaviors, item examination, and topic examination, which can help motivate click models and greatly aid learning to rank, especially in learning the propensity of observations for debiasing clicks <cit.>. To address the lack of datasets and of user studies in the carousel setting, the initial contribution of the doctoral research project will be an eye tracking user study to determine the browsing behavior in movie carousel interfaces and examine the impact of carousel topic preference on browsing behavior.
This will be done by designing a desktop interface, similar to Netflix, that initially presents carousels for 4 different movie genres (of 8 in total), each displaying 6 movie posters, with the ability to swipe each carousel to display more movies and to scroll the page downwards to reveal the 4 remaining genres. We plan to show 30 screens to at least 60 participants, whose task is to browse for a movie they would like to watch, ultimately generating at least n=1800 clicks of feedback. Participants will first be asked for their preferred genres, which will be used to generate 15 screens containing at least one preferred-genre carousel in the initial view (no page scrolling required); the other 15 screens will not contain any preferred-genre carousels in the initial view. The purpose of this experiment is to present a naturalistic movie selection setting similar to streaming services like Netflix in order to better understand general browsing behavior, and to compare browsing between the two halves of the screens to determine the impact of genre preference. We hope to examine consistent patterns in browsing behavior across the 30 screens as well as additional browsing behaviors related to genre preference. For example, the study may elucidate how users interact with the topics (genres) of the carousel, a key difference between the carousel and 2D ranked lists that may break the golden triangle browsing behavior and confirm the assumed topic browsing behavior of the carousel click model <cit.>. §.§ RQ 2: Carousel Click Models Click models are generative models that seek to explain how a user interacts with a recommender system, generally some form of a ranked list. Clicks originally gathered from search engine results pages were used in the field of information retrieval to improve search engines and were naturally extended to recommender systems for the equivalent task of improving the recommendation model. The click remains the most common form of feedback (implicit or not) for both information retrieval and recommender systems, but in the field of recommender systems explicit feedback may also be gathered, such as item ratings, like/dislike, item add-to-cart, and item purchase. Early researchers in information retrieval believed that item relevance led to clicks and applied this to click-through data gathered from search engine results pages to improve search engine results. However, in the early 2000s Joachims <cit.> showed that clicks were dependent on position, and further research argued that position may be more important than item relevance <cit.>. In order to better understand how and why users were clicking on search results, eye tracking studies were conducted showing the importance of position, confirming the top-down browsing behavior <cit.>, and showing the correspondence between clicks and explicit judgements <cit.>. This was naturally extended to ranked lists in recommender systems, and over time eye tracking studies were conducted to determine browsing behavior in 2D single ranked lists <cit.>, examine user traits <cit.>, and predict gaze or interest <cit.>. Eye tracking studies have allowed researchers to better understand the process leading to a click and to design better click models. The most popular click model is the previously mentioned cascade click model for 1D single ranked lists <cit.>, which uses this observed top-down browsing behavior to account for position bias.
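To make the cascade assumption and the termination extension discussed earlier concrete, the following minimal simulation sketch generates clicks under a cascade-style model; the function name, the attraction probabilities, and the termination probability are all illustrative placeholders rather than values taken from any cited study.

import random

def simulate_cascade_session(attraction, termination=0.0, rng=random):
    """Simulate one session under a cascade-style click model.

    The user scans the ranked list top-down. Each examined item is clicked
    with its attraction probability, and a click ends the session (the
    cascade assumption). Between items, the user may abandon the list with
    the given termination probability. Returns the 0-based clicked position,
    or None if the session ends without a click.
    """
    for position, p_attraction in enumerate(attraction):
        if rng.random() < p_attraction:   # examined and attractive -> click
            return position
        if rng.random() < termination:    # examined, skipped, then abandoned
            return None
    return None                           # list exhausted without a click

# Identical (hypothetical) attraction probabilities at every position.
attraction = [0.2, 0.2, 0.2, 0.2, 0.2]
sessions = [simulate_cascade_session(attraction, termination=0.1) for _ in range(10_000)]
for pos in range(len(attraction)):
    print(f"position {pos}: empirical click rate {sessions.count(pos) / len(sessions):.3f}")

Even with identical attraction probabilities at every position, the simulated click rate drops with rank, purely because lower positions are examined less often; this is the position bias that such click models are designed to account for.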
Click models allow researchers to test their assumptions on user browsing behavior, examine and discover biases, and improve ranking policies online or offline. This is why click modeling in new interfaces like the carousel is so important for the advancement of the field. For example, researchers at Deezer modeled carousel personalization as a multi-armed bandit problem with multiple plays, took inspiration from the cascade click model to deal with the problem of unobserved songs in the playlist, and also integrated semi-personalization through user clustering <cit.>. The two most recent works on carousels by Rahdari et al. simulated user browsing behavior in a carousel interface <cit.> and designed the first carousel click model <cit.>, comparing carousels to ranked lists to reach an analytical understanding of their prominence. There is a large gap between this single carousel click model and the many click models in the ranked list setting, including the popular cascade click model and more <cit.>. In terms of click models, we wish to expand on the work of Rahdari et al. <cit.> and design a position-based click model in which a user examines an item at row position i and column position j with a probability related to the probability of examining row i and the probability of examining column j. This is one of many click models that can be created for the carousel interface, especially with the countless models from ranked lists that may be transferred to carousels. The data gathered from the user study will be used to evaluate the designed click models and to inform the assumptions and biases taken into account when designing the models. It may also motivate novel formulations of click models based on the eye-tracked browsing behaviors. §.§ RQ 3: Learning to Rank in Carousel Setting While click models are concerned with generating click data that resembles that of real-life users, one of their primary goals is to help inform the process of ranking items in a list, also known as learning to rank. Starting from a web search engine or recommendation system, we would like to use the gathered clicks to improve the engine or system. The difficulty is that it is not clear why users click a certain item or link. Assumptions can be tested by user studies or click modeling, but even then it can still be difficult to decipher a click, especially in the context of the other items presented alongside it. One approach is to ignore the context of choosing one item among others and simply treat the feedback as positive, which is commonly done in collaborative filtering by transforming click data into explicit feedback <cit.>, usually binary consumption (1 signifies that the item was clicked or rated and 0 that it was unclicked and unrated). This especially makes sense in the domain of movie recommendation, where it is common to have explicit feedback and where users may have seen a movie outside of the streaming service or recommender system, in which case the choice context would be impossible to determine. Moreover, there are large databases of movie ratings that allow the recommendation of movies based on item-item, user-user, and item-user similarities. A ranked list in this case can simply be created by listing the top n most similar movies for the user in question. Gathering more click or rating data will help improve collaborative filtering recommendations, but recommendations may be improved further by taking into account the context of choice in ranked list presentation.
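As a small illustration of this binary-consumption view, the sketch below builds a toy item-item collaborative filtering recommender from a binary click matrix and returns a top-n ranked list for one user; the matrix values, the function name, and the cosine-similarity scoring are illustrative assumptions rather than the method of any specific cited system.

import numpy as np

# Toy binary consumption matrix: rows are users, columns are movies;
# 1 means the user clicked (or rated) the movie, 0 means no recorded interaction.
clicks = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
], dtype=float)

# Item-item cosine similarity computed directly from the binary feedback.
norms = np.linalg.norm(clicks, axis=0, keepdims=True)
norms[norms == 0.0] = 1.0                 # guard against items with no interactions
item_sim = (clicks.T @ clicks) / (norms.T @ norms)
np.fill_diagonal(item_sim, 0.0)           # ignore self-similarity

def recommend(user_idx, n=2):
    """Score unseen movies by their summed similarity to the user's clicked
    movies and return the indices of the top-n, i.e. a simple ranked list."""
    user_clicks = clicks[user_idx]
    scores = item_sim @ user_clicks
    scores[user_clicks > 0] = -np.inf     # do not re-recommend already-seen movies
    return [int(i) for i in np.argsort(-scores)[:n]]

print(recommend(user_idx=0))              # -> [3, 2]: the two unseen movies ranked by score

Note that such a ranking ignores how and where the clicked items were presented, which is exactly the limitation the counterfactual treatment below is meant to address.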
Moreover, collaborative filtering models have been shown to have strong popularity biases <cit.>, which can be made worse by not taking into account ranked list position bias and other presentation biases. This creates a feedback loop in which the most popular unseen items are placed at the top of the ranked list, which increases their chances of being seen and then clicked, and upon retraining the collaborative filtering model will recommend these popular items even more. While this provides a straightforward way of effectively recommending the most popular items, it makes the recommendation of novel items particularly difficult. Rather than focus on a collaborative filtering approach to ranking, we consider the context of choice and learn to rank directly from the click data. The conventional problem remains the same as in its earliest formulations in information retrieval <cit.>: defining a risk function that aggregates the loss of a ranking of documents given a query over the query distribution, with the objective of finding a ranking function that minimizes this risk. Learning this ranking function is commonly done through Empirical Risk Minimization <cit.>. However, a problem arises in that we do not know the relevances of documents given a query, which is known as partial-information learning to rank <cit.>. We only have access to user feedback, which represents a user's relevance judgment specific to the context in which the feedback was gathered. In the case of implicit feedback (clicks), we must take into account presentation bias. Joachims <cit.> addresses this by defining the propensity of the observation, the marginal probability of observing the relevance signal of documents given a query and the ranking presented to the user. Using a counterfactual model, an unbiased estimate of the loss of a ranking of documents given a query, a presented ranking, and the observed relevances can be calculated via inverse propensity scoring. In other words, it provides a general framework for learning to rank from biased user feedback based on the user's relevance signal, the observation/examination pattern, and the propensities of observations. When taking into account clicks and position bias, a position-based click propensity model is used to determine the propensities of the observed clicks, i.e., the examination probabilities of each result, which can be learned by a swap-intervention experiment. This leads to an unbiased model that can learn from clicked results without assuming that unclicked results are irrelevant and without knowing whether the unclicked results were examined. Learning to rank was also formulated in recommender systems <cit.> and extended to non-contextual and contextual bandit methods. Bandit methods are simply another approach, motivated by reinforcement learning, to the problem of learning to rank, balancing exploration and exploitation of recommendations <cit.>. In carousels, researchers experimented on synthetic and online datasets to optimize banner carousels (carousels showing one item at a time and switching to the next after a certain time period) with contextual bandit algorithms <cit.>. While this is the main work on learning to rank in carousels, there has been tangential work. Felicioni and Ferrari Dacrema et al. have proposed offline evaluation protocols for carousel interfaces taking into account complementary lists <cit.>. Lo et al. worked on optimizing the personalization of related-item carousels present on a specific product page <cit.>.
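To illustrate the inverse propensity scoring idea described above, the sketch below computes a propensity-weighted estimate of a rank-based loss from logged clicks; the position-based propensities, the logged clicks, and the function name are hypothetical, and the estimator is a simplified, single-query version of the counterfactual objective rather than the exact formulation of the cited work.

import numpy as np

def ips_rank_loss(clicked_positions, propensities, new_ranks):
    """Propensity-weighted loss of a candidate ranking for one logged query.

    clicked_positions: 0-based positions at which clicks were logged.
    propensities:      examination probability of each position in the logged
                       ranking under a position-based propensity model.
    new_ranks:         0-based rank each clicked document would receive under
                       the candidate ranking being evaluated.
    Each click is treated as a relevance signal and weighted by the inverse of
    its examination propensity, so documents clicked at rarely-examined
    positions are not drowned out by those clicked at the top.
    """
    loss = 0.0
    for logged_pos, new_rank in zip(clicked_positions, new_ranks):
        loss += new_rank / propensities[logged_pos]
    return loss  # in practice this is averaged over all logged queries

# Hypothetical position-based propensities, e.g. examination ~ 1 / (rank + 1).
propensities = np.array([1.0 / (k + 1) for k in range(10)])

# Two logged clicks, at positions 0 and 4; a candidate ranking would place
# those two documents at ranks 2 and 1 respectively.
print(ips_rank_loss(clicked_positions=[0, 4], propensities=propensities, new_ranks=[2, 1]))
# -> 2/1.0 + 1/0.2 = 7.0

In a carousel, a row-and-column examination model of the kind proposed under RQ 2 could plausibly play the role of this position-based propensity model.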
However, as Rahdari et al. <cit.> mention, there is a gap in the literature in terms of learning to rank directly from clicks. Our final contribution will be examining and working on the problem of non-bandit learning to rank with click data, as there are no approaches in the literature as of yet. This presents the challenges of having system-defined queries (topics) and a governing page/session query that can affect the relevance signals observed, along with a much more complex observation pattern across multiple lists. Moreover, a risk equation would need to be formulated over the whole space of this problem, with a well-defined loss, to allow for learning. § PRELIMINARY RESULTS Preliminary work was done on using eye-tracked area-of-interest (movie poster) dwell time as an additional source of implicit feedback to improve movie recommendations from a collaborative filtering model <cit.>. The PhD research project has transitioned from this particular focus of directly using the eye tracking signal as feedback to improve recommender systems to using eye tracking to learn general browsing behaviors and improve click models and learning to rank, due to the limited generalizability of user-, group-, and task-specific eye tracking signals. Moreover, generalizing models that learn to rank using eye tracking signals to the standard use case of recommender systems without eye tracking data poses a difficult problem that we believe needs to be addressed with large amounts of eye tracking data within a recommender system. While we seek to provide some of the data necessary, we believe that the community overall would need to focus on this problem to gather the required data. While this topic is no longer the focus of the project, it may be pursued secondarily, especially if new approaches are discovered. § CONCLUSION This paper presents a doctoral research project that seeks to understand user behavior in recommendation systems with carousel-based interfaces in order to implement new click models and develop the formulation of learning to rank from carousel clicks. The expected results of this project are to help address the lack of carousel feedback datasets, encourage the research and publication of recommender datasets with eye tracking data, and tackle the undeveloped areas of click models and learning to rank in carousel recommenders, which are lagging behind ranked list recommenders. As the first step towards achieving the expected contributions and results, we will conduct an eye tracking user study of a carousel-based interface, the methodology of which we introduced in more detail in this paper. Any feedback during the Doctoral Symposium on the proposed ideas, study design, and model designs would be greatly appreciated. This work was supported by Eyes4ICU, a project funded by the European Union's Horizon Europe research and innovation funding programme under grant agreement No. 101072410 (https://doi.org/10.3030/101072410). I would like to acknowledge the guidance and support of both of my supervisors, Maria Bielikova and Robert Moro, and in addition Branislav Kveton for motivating the direction of this work.
http://arxiv.org/abs/2307.01221v1
20230702134142
Filter Bubbles in Recommender Systems: Fact or Fallacy -- A Systematic Review
[ "Qazi Mohammad Areeb", "Mohammad Nadeem", "Shahab Saquib Sohail", "Raza Imam", "Faiyaz Doctor", "Yassine Himeur", "Amir Hussain", "Abbes Amira" ]
cs.IR
[ "cs.IR", "cs.AI" ]
Filter Bubbles in Recommender Systems: Fact or Fallacy - A Systematic Review
Qazi Mohammad Areeb(1), Mohammad Nadeem(2), Shahab Saquib Sohail(3), Raza Imam(1), Faiyaz Doctor(4,5), Yassine Himeur(5), Amir Hussain(6), and Abbes Amira(7,8)
(1) Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Computer Vision, Masdar City, Abu Dhabi; (2) Department of Computer Science, Aligarh Muslim University, Aligarh, 202002, India; (3) Department of Computer Science and Engineering, Jamia Hamdard University, New Delhi, 110062, India; (5) School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, United Kingdom; (4) Edinburgh Napier University, United Kingdom; (5) College of Engineering and Information Technology, University of Dubai, Dubai, UAE; (6) Edinburgh Napier University, United Kingdom; (7) Department of Computer Science, University of Sharjah, Sharjah, United Arab Emirates; (8) Institute of Artificial Intelligence, De Montfort University, Leicester, United Kingdom
August 1, 2023
A filter bubble refers to the phenomenon where Internet customization effectively isolates individuals from diverse opinions or materials, resulting in their exposure to only a select set of content. This can lead to the reinforcement of existing attitudes, beliefs, or conditions. In this study, our primary focus is to investigate the impact of filter bubbles in recommender systems. This pioneering research aims to uncover the reasons behind this problem, explore potential solutions, and propose an integrated tool to help users avoid filter bubbles in recommender systems. To achieve this objective, we conduct a systematic literature review on the topic of filter bubbles in recommender systems. The reviewed articles are carefully analyzed and classified, providing valuable insights that inform the development of an integrated approach. Notably, our review reveals evidence of filter bubbles in recommendation systems, highlighting several biases that contribute to their existence. Moreover, we propose mechanisms to mitigate the impact of filter bubbles and demonstrate that incorporating diversity into recommendations can potentially help alleviate this issue. The findings of this timely review will serve as a benchmark for researchers working in interdisciplinary fields such as privacy, artificial intelligence ethics, and recommendation systems. Furthermore, it will open new avenues for future research in related domains, prompting further exploration and advancement in this critical area.
Keywords: Recommender systems, filter bubble, echo chamber, social media.
§ INTRODUCTION The proliferation of the Internet has resulted in an overwhelming abundance of information, necessitating the development of systems that can curate and present tailored options from the vast array of available resources <cit.>. Recommender Systems (RSs) have emerged as a prominent research area, rapidly advancing in their ability to provide users with personalized recommendations for items of interest <cit.>. However, as the field of recommendation systems progresses, several critical issues have been identified in the literature <cit.>. Two widely discussed problems in Recommender System Research (RSR) are the "cold start" issue, which pertains to making recommendations for new or sparse users or items <cit.>, and the sparsity problem caused by the lack of available data for certain users or items <cit.>. Furthermore, scalability <cit.> and recency time <cit.> have been addressed as additional challenges in RSs. In recent years, privacy concerns have also garnered significant attention due to the susceptibility of RSs to security breaches and privacy threats <cit.>. The emergence of new tools and techniques has introduced novel privacy considerations for RSs, with biased and fair RSs becoming prominent topics in the privacy domain <cit.>. Recommender systems exhibit algorithmic biases that can significantly impact their recommendation outputs, potentially leading to issues such as preference manipulation, threat intelligence, and privacy breaches for users <cit.>. These biases can arise from various aspects and causes within RSs. For instance, favoring frequently purchased items over more relevant ones can lead to popularity bias <cit.>. Additionally, position bias, exposure bias, selection bias, demographic bias, and anchoring biases may exist in RSs <cit.>. However, the phenomenon of filter bubbles has not been extensively explored in the context of RSR <cit.>. Olshannikova et al. <cit.> propose a social diversification strategy for recommending relevant individuals on platforms like Twitter. Their approach leverages dormant ties, mentions of mentions, and community members within a user's network to offer diverse recommendations and facilitate new social connections. In a study by Alam et al. <cit.>, biases in news recommender systems are examined using stance and sentiment analysis. By conducting an experiment on a German news corpus focused on migration, the study reveals that these recommender systems tend to recommend articles with negative sentiments and stances against refugees and migration. This reinforces user biases and leads to a reduction in news diversity. Cai et al. <cit.> address issues like echo chambers and filter bubbles caused by recommender systems by concentrating on estimating the effects of recommending specific items on user preferences. They propose a method based on causal graphs that mitigates confounding bias without requiring costly randomized control trials. Experimental results on real-world datasets validate the effectiveness and efficiency of their approach. Hildebrandt <cit.> explores the implications of recommender systems prioritizing sales and ad revenue, which can result in feedback loops, filter bubbles, and echo chambers. The article discusses the economic incentives that influence design decisions and examines proposed EU regulations that aim to address these issues by imposing constraints on targeting and requiring responsible design and deployment of recommender systems. 
The investigation of filter bubbles in Recommender Systems (RSs) is a burgeoning area of research that has recently garnered considerable attention, especially in the context of social networks <cit.>. Initially, there was disagreement regarding the significance of filter bubbles as a problem worthy of attention. However, subsequent discussions in the referenced paper <cit.> indicate that the majority of practitioners now recognize the importance of addressing this issue. Consequently, there is a consensus that further research is needed to identify effective solutions. Figure <ref> provides a visual representation of a filter bubble. Given the increasing interest in studying filter bubbles and their impact on recommendation systems, it becomes crucial to conduct a comprehensive Systematic Literature Review (SLR) of recent academic publications. Such a review would offer insights into the historical, recent, and current advancements in recommendation systems. It would deepen our understanding of the influence of filter bubbles and pave the way for new research directions aimed at mitigating their effects on content recommendations. However, the existing literature falls short in terms of in-depth discussions and insightful studies specifically exploring the presence of filter bubbles in RSs <cit.>. This systematic literature review represents the first comprehensive study of its kind that investigates the presence of filter bubbles in RSs. The primary objective of this review is to synthesize and organize the latest research contributions in the field of filter bubbles, employing a well-defined methodology to enhance understanding in this area. The study focuses on classifying existing contributions, evaluating their strengths and weaknesses, and identifying dominant research areas and trends. Through an extensive review supported by relevant literature and related studies, this review identifies the causes of filter bubble occurrence and examines reported approaches to address this issue. It also proposes potential future research directions to effectively tackle filter bubbles in RSs. Furthermore, it offers a critical assessment of techniques employed to mitigate the negative consequences of filter bubbles, aiming to avoid or reduce their harmful effects. In addition, this paper explores alternative approaches and proposes theoretical models that aim to minimize the influence of filter bubbles on recommendation systems. The key contributions of this article can be summarized as follows: * This study presents the first Systematic Literature Review (SLR) dedicated to investigating the presence of filter bubbles in RSs. It fills a significant gap in the existing research by providing a comprehensive analysis of the literature on this topic. * The article examines existing frameworks and provides detailed insights into their features, advantages, disadvantages, and the techniques employed for detecting and mitigating filter bubbles. This analysis helps in understanding the current state of the field and identifying effective strategies for addressing this issue. * The article highlights open research issues that need to be addressed to effectively tackle the concerns raised by filter bubbles. These issues provide a roadmap for future investigations and prompt researchers to explore innovative solutions. * Additionally, the paper proposes potential research directions that could contribute significantly to the field in the near future.
These directions serve as a valuable resource for researchers looking to expand on the existing knowledge and make further advancements. § RELATED WORKS §.§ Recommendation Systems (RSs) Recommender systems (RSs) play a crucial role in providing personalized suggestions to users based on their past interactions. These systems encompass a wide range of recommendations, including movies, products, travel options, advertisements, and news. User preferences can be inferred from their behavior, which can be either implicit or explicit. Implicit preferences are deduced from activities such as online shopping, website visits, link clicks, and web browser cookies, without directly soliciting feedback from users. On the other hand, explicit feedback involves actively requesting users to provide ratings or comments on the recommendations they have received <cit.>. Content-based filtering, collaborative filtering, and hybrid approaches are the three most commonly employed recommendation techniques in RSs <cit.>. Commercial recommendation methods often adopt a combination of these approaches rather than relying solely on content or collaborative filtering. They frequently integrate knowledge-based and context-based strategies to enhance the accuracy and effectiveness of recommendations <cit.>. The distinction between a current experience and one that has already occurred can be described as novelty, while the internal variations within the components of an experience are referred to as diversity. Initially, recommender systems (RSs) were primarily designed to predict users' interests. However, as research on RSs progressed, the literature began to emphasize a broader perspective on recommendation utility, which includes not only prediction accuracy <cit.>, but also the importance of originality, variety, and other features in enhancing the value of recommendations <cit.>. This awareness has grown over time, leading to a surge of activity in this area over the past decade <cit.>. As a result, novelty and diversity have gained prominence and are increasingly recognized as important evaluation measures for new recommendation systems. Algorithmic advancements are consistently aimed at improving these aspects. §.§ Filter Bubble In recent decades, the rise of the Internet has sparked considerable scholarly interest in its potential negative effects on society and the public sphere <cit.>. The concept of the internet filter bubble has gained widespread recognition as a manifestation of this pessimistic perspective. The underlying premise of an echo chamber is that social media users deliberately interact with like-minded individuals and consume content that aligns with their ideologies. As a result, they rarely encounter diverse viewpoints that are crucial for fostering a more inclusive and vibrant public sphere <cit.>. This phenomenon is exacerbated by the algorithmic content selection employed by social media platforms, which tends to limit users' exposure to novel and diverse content. As a result, online communities become clustered and polarized, lacking the necessary viewpoint diversity. The concept of the "Filter Bubble" refers to the potential consequence of personalized internet customization, where individuals are isolated from diverse perspectives and information. Users often find themselves exposed to familiar content or consistent information on similar topics, reinforcing their existing knowledge. 
This concern initially arose in 2009 when platforms like Google began prioritizing customized search results, leading to variations in outcomes for users based on their previous interactions, expressed preferences, and other criteria <cit.>. Consumers now encounter a more personalized online environment that delivers content tailored to their perceived interests and the preferences of like-minded individuals within their network. While recommendation engines effectively identify users' preferred choices, they can also contribute to information polarization and restrict novelty and variety, exerting a significant influence on user preferences and satisfaction. Consequently, users are exposed to a narrower range of information and content, as recommended and selected options are reinforced, ultimately leading to the formation of information cocoons. In the field of media communication, this phenomenon is commonly referred to as "echo chambers" <cit.>, while information retrieval scholars label it as "filter bubbles." Filter bubbles represent self-reinforcing systems that isolate individuals from diverse ideas, beliefs, or content <cit.>. The filter bubble effect facilitates the solidification of existing beliefs and preferences, potentially leading to the adoption of more extreme views or behaviors over time, a phenomenon known as "group polarization" <cit.>. In the business context, the filter bubble effect gives rise to the "Matthew effect" among popular items, wherein products and information that deviate from the long tail hypothesis are not recommended, resulting in reduced sales diversity and potential limitations to corporate success <cit.>. Furthermore, the prevalence of the filter bubble effect in society can lead to the polarization of political ideas and undermine democratic fairness <cit.>. Additionally, filter bubbles indirectly contribute to the dissemination of undesirable content on online social media platforms, such as rumors and fake news <cit.>. Current recommendation algorithms primarily prioritize enhancing recommendation accuracy rather than promoting diverse outcomes, which is one of the factors contributing to the formation of filter bubbles <cit.>. While several surveys have been conducted in recent years to explore filter bubbles and recommendation algorithms, no single study comprehensively investigates all the necessary changes required in recommendation systems to address filter bubbles. Most of the research discussed in this section consists of unstructured surveys, and relevant literature pertaining to the review of filter bubbles is also included within this domain. In 2019, <cit.> presented a critical analysis of the "filter bubble" hypothesis, arguing that its continued emphasis has diverted scholarly attention from more pressing areas of investigation. The authors also highlight the tangible effects of the persistent use of these notions in mainstream media and political discussions, shaping societal institutions, media and communication platforms, and individual users. Traditional broadcast media's diminishing influence in determining information exposure has given way to contemporary information filters such as recommender systems, aggregators, search engines, feed ranking algorithms, bookmarked websites, and the individuals and organizations followed on social media platforms like Twitter. 
Critics express concerns that the combination of these filters may isolate individuals within their own information bubbles, making it challenging to correct any false beliefs they acquire. In <cit.>, the authors delve into the research surrounding exposure selectivity preferences and actual exposure to shed light on this topic. Furthermore, <cit.> presents an integrated solution model aimed at assisting users in avoiding filter bubbles within social networks. The author conducted a comprehensive literature review, identifying 571 publications from six highly regarded scientific databases. After removing irrelevant studies and conducting an in-depth analysis of the remaining publications, a recommended category of research papers was developed. This categorization serves as the basis for designing an integrated tool that incorporates previous research findings and introduces novel features to mitigate the impact of filter bubbles. In 2021, <cit.> conducted a comprehensive review of scientific literature on the subject of echo chambers in social media, aiming to provide a consolidated and critical perspective on the various techniques, similarities, differences, benefits, and limitations associated with echo chambers. This review serves as a foundation for future research in this field. The authors performed a systematic review of 55 studies that examined the presence of echo chambers on social media platforms, classifying the literature and identifying common themes in the focus, techniques, and conclusions of the studies. Similarly, in their paper, <cit.> provide an exploratory overview of the utilization of digital echo chambers and filter bubbles in the context of nature conservation practice. They gathered data from a literature review and a digital expert poll of German conservation actors to analyze the current understanding of these phenomena. The findings indicate that these concepts are already being investigated in relation to conservation issues, particularly climate protection, and to a lesser extent, natural conservation practice. However, there is a limited understanding of the specific mechanisms underlying digital echo chambers and filter bubbles. The study highlights the urgent need for research and strategic assessment in managing and addressing these challenges in the field of nature conservation. Furthermore, <cit.> conducted a semi-systematic literature review to examine the digital political economy. They identified and characterized four major threats: false news, filter bubbles/echo chambers, online hate speech, and surveillance. The authors also proposed a typology of "workable solutions" to address these risks, emphasizing the tendency to adopt technological, regulatory, and culturally ingrained approaches as part of the solution. In <cit.>, the authors conducted a survey of empirical research in the Netherlands to explore tailored information delivery, with a particular focus on echo chambers and filter bubbles in a global context. The study investigated the involvement of government agencies, tech businesses, and academics in addressing these issues. Currently, the Dutch journalism landscape seems to offer a diverse range of information to different citizen groups. However, the precise impact of news personalization is not fully understood, and the increasing influence of digital corporations underscores the need for further research and deeper insights. 
Without a comprehensive understanding of the situation, it is challenging to develop effective strategies to mitigate the potential concerns of news customization. Similarly, in <cit.>, a qualitative approach was employed to propose new research directions on the impact of filter bubbles on democracy. The study included a comprehensive literature review and secondary data analysis. The authors argued that the emerging financial models of digital media, heavily reliant on technology companies, marketers, and the public, contribute significantly to the creation of filter bubbles. Newsrooms increasingly gather and analyze customer data for personalized information in digital advertising and subscription models. The media industry enthusiastically embraces customization, with limited critique of its negative aspects. The authors suggested that journalism has a crucial role to play in combating information bubbles by reassessing its digital economic models and raising public awareness. The previous review of the literature reveals a significant gap in systematic and comprehensive research specifically dedicated to investigating the filter bubble phenomenon in recommendation systems. To fill this gap, we conducted an extensive and critical investigation into the presence and impact of filter bubbles in recommendation systems. This research aims to contribute to the advancement of knowledge and understanding in the fields of recommendation systems and computational social science, offering valuable insights for both researchers and practitioners. To provide a clear overview of the existing literature reviews in this area, Table <ref> presents a summary of relevant studies, highlighting their scope, survey methodologies employed, and the number of references considered in each study. This table serves as a reference point for understanding the scope and depth of previous research efforts in this field. § METHODOLOGY A well-executed survey entails a systematic review and comprehensive analysis of all relevant studies and research conducted on the topic of interest. As highlighted in <cit.>, there are several key motivations for conducting a systematic survey, including synthesizing existing literature and findings on a specific issue, identifying gaps or limitations in the research, and proposing potential avenues for further investigation. By employing a structured and systematic methodology, this type of survey enhances the overall rigor and reliability of the study, allowing for the categorization and analysis of relevant themes and parameters. In this section, we examine the methodological approach employed in our review, highlighting its robustness and its ability to address the objectives and expected outcomes of the study. §.§ Research questions This review article aims to address several key research questions related to the filter bubble phenomenon in recommendation systems. These research questions are as follows: * Does the filter bubble exist in recommendation systems? If so, what are the reasons for its existence? * What are the approaches used to identify the presence of a filter bubble in recommendation systems? * How can the impact of the filter bubble be mitigated or avoided in recommendation systems? A systematic literature review (SLR), as described by Kitchenham, is a methodological approach that involves thoroughly examining and synthesizing all relevant works pertaining to a specific research topic or subject area. 
Systematic reviews provide an objective and comprehensive analysis of the topic by following a rigorous and transparent process that can be audited and replicated. Despite the importance of the filter bubble phenomenon in recommendation systems, a comprehensive systematic literature review specifically focusing on this topic is currently missing in the existing literature. Therefore, conducting a thorough and meticulous analysis, guided by an SLR methodology, is crucial to examine and shed light on the various assertions and findings related to the research questions stated above in an unbiased and replicable manner <cit.>. §.§ Bibliographic databases selection criteria We conducted an extensive search to gather relevant literature for this systematic literature review (SLR). We focused on recent publications available in reputable scientific journals and top conference proceedings, utilizing leading academic databases. Our search covered the period from 2012 onwards, as our findings indicated that significant research on the filter bubble phenomenon emerged during this time. To ensure a comprehensive search, we employed a two-stage search strategy consisting of Phase 1 and Phase 2 (see Figure <ref>). In the first phase, we manually explored various search strings and their combinations using Boolean operators. Additionally, we leveraged research databases and academic search engines to access relevant literature from multiple disciplinary and publishing platforms. Figure <ref> provides an overview of the databases, libraries, and search engines that were included in our search strategy. During the initial phase, we refined our search strings by incorporating specific keywords, including terms from the titles, abstracts, and relevant keywords of publications. This iterative process allowed us to fine-tune our search results and address any potential limitations in matching our searches for thoroughness and consistency. The refined search strings were then applied in the selected databases to retrieve additional relevant literature. In addition to our systematic search strategy, we conducted manual searches in esteemed journals and established conferences that are highly relevant to our research discipline. These journals and conferences encompass a wide range of topics, including AI, Neural Networks, Recommendation Systems, and related advancements in the relevant disciplines. By including earlier published research from these sources, we aimed to ensure comprehensive coverage of the existing literature and gather valuable insights from the forefront of the field. §.§ Search strategy generation In this study, we implemented a systematic search strategy to identify pertinent literature addressing filter bubble approaches in recommendation systems. To ensure comprehensive coverage, we established inclusion and exclusion criteria based on expressive and descriptive terms associated with recommender systems and the filter bubble phenomenon. These terms encompassed concepts such as "Recommender System," "Recommendation System," "Filter Bubble," "Echo Chamber," "Self Loop," and other closely related terms. By employing this approach, we aimed to tailor our search to our specific research objectives and review scope, while mitigating the potential impact of nomenclature discrepancies. Subsequently, we employed the Boolean OR operator to consolidate synonyms and related terms as part of our search strategy.
This approach aimed to broaden the search scope and encompass the various areas where the concept of the filter bubble has been investigated. By using the Boolean OR operator, we aimed to attain comprehensive results while avoiding redundancy. To further refine and narrow down the search outcomes, we then utilized the Boolean AND operator. This step allowed us to focus specifically on studies that concurrently addressed both recommender systems and the filter bubble phenomenon, ensuring the inclusion of relevant literature. §.§ Inclusion and exclusion selection criteria During our systematic review, we initially collected 312 papers. To ensure the relevance of the literature, we applied basic criteria such as title, abstract, and topic alignment with our research question. We then established detailed inclusion and exclusion criteria to streamline the selection process. The inclusion criteria encompassed papers proposing solutions, addressing the existence of the filter bubble, implementing techniques, or proposing enhanced versions of recommender systems to mitigate its effects. Conversely, the exclusion criteria were applied to exclude publications that did not specifically address the filter bubble, focused on other applications or research sectors, or compared various recommendation techniques. One author took the lead in the selection strategy and conducted the initial screening, ensuring consistency with our research theme. Any disagreements regarding the suitability of specific works were resolved through discussions with other authors. After removing duplicates, we identified 185 unique articles. We then conducted a thorough assessment of the remaining articles by carefully reviewing titles, abstracts, and conclusions. Based on this assessment, we narrowed down the selection to 55 articles that exhibited relevance based on title and abstract. In the subsequent stage, we applied the specified inclusion and exclusion criteria to the remaining articles, leading to the exclusion of certain studies that did not meet our criteria. Typically, during the selection process, we applied the following exclusion criteria to refine our literature set: * Duplicate records * Papers that did not comment on the existence of a filter bubble. * Papers related to the implementation of applications utilizing previous RSs. * Papers related to research sectors other than RSs. * Papers that compared various recommendation techniques. * Papers written in languages other than English. Furthermore, the following inclusion criteria were used to select relevant literature: * Proposes a solution to the issue of filter bubbles in recommender systems. * Comments on the existence of a filter bubble. * Implements a technique or method to alleviate filter bubbles. * Proposes an enhanced version of recommender systems to address the problem of filter bubbles. By applying these exclusion and inclusion criteria, we ensured that the selected articles provided insights, solutions, or advancements specifically related to the filter bubble phenomenon in RSs. This process resulted in a final selection of 28 articles that met our inclusion criteria. In order to ensure a comprehensive review, we conducted a reference scan of the selected articles, which led us to identify an additional 6 relevant papers. Consequently, a total of 34 articles were included in our systematic review on the existence of the filter bubble.
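For concreteness, the snippet below shows one possible instantiation of the Boolean search-string pattern described in the search strategy above, built only from the terms listed there; the exact strings submitted to each database are not reproduced in this review, so this reconstruction is an illustrative assumption.

# Illustrative reconstruction of the search-string pattern described above:
# synonyms within each concept group are consolidated with OR, and the two
# concept groups are combined with AND to require both topics.
system_terms = ['"Recommender System"', '"Recommendation System"']
bubble_terms = ['"Filter Bubble"', '"Echo Chamber"', '"Self Loop"']

query = "({}) AND ({})".format(" OR ".join(system_terms), " OR ".join(bubble_terms))
print(query)
# ("Recommender System" OR "Recommendation System") AND
# ("Filter Bubble" OR "Echo Chamber" OR "Self Loop")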
Figure <ref> provides an overview of our research selection criteria and the distribution of publications obtained from each database. § DISCUSSION AND FINDINGS Recommender systems (RSs) have the power to either create or dismantle filter bubbles, playing a significant role in shaping the openness or closedness of the internet. However, when analyzing RSs, some methods focus on short-term user engagement and the number of clicks, rather than considering the user's long-term interest in diverse and relevant information. In recent years, researchers have proposed various theories and conducted studies to explore the presence of filter bubbles and echo chambers within RSs. By examining the issues addressed, the techniques employed, and the data used, we can gain insights into the findings and draw conclusions accordingly. Upon evaluating the data, a clear distinction emerged between studies that identified the presence of filter bubbles and those that did not. We can categorize these studies into three groups: (i) those that found evidence of a filter bubble, (ii) those that did not explicitly comment on its existence, and (iii) those that did not find evidence of a filter bubble but observed heterogeneity, cross-cutting interactions, and exposure. To further analyze the literature, we classified the research based on the methodologies or approaches employed to support their claims. Consequently, we divided the research into two categories: (i) studies that empirically established the presence or absence of the filter bubble, and (ii) studies that assumed its existence or non-existence and utilized it to propose or support another concept. By examining these categories and the corresponding research, we can gain a deeper understanding of the filter bubble phenomenon and its implications in the context of RSs. When comparing the methodologies, data, and research focuses with the corresponding findings, notable patterns and trends emerged (refer to Table <ref> and Figure <ref>). Among the collected research, a majority of studies (n = 29) acknowledged the presence of the filter bubble and proposed solutions or alternative theories to address it. Specifically, three out of the 25 experiments provided empirical evidence supporting the existence of the filter bubble. In contrast, only two studies concluded that filter bubbles do not occur. Additionally, five studies did not explicitly comment on the existence of the filter bubble. These findings highlight the consensus among researchers regarding the prevalence of the filter bubble phenomenon in recommendation systems. The empirical evidence from a subset of experiments further strengthens the argument for its existence. However, it is important to note that the research landscape also includes studies that explore alternative perspectives and propose differing viewpoints. The diversity of approaches and conclusions contributes to a comprehensive understanding of the filter bubble phenomenon and provides insights for future research directions. Other investigations <cit.> have also identified the presence of the filter bubble and proposed solutions to address this issue. These studies employed various experimental approaches to devise their solutions. For instance, <cit.> focused on building diversity-aware neighborhood-based session-based recommender systems. They proposed strategies to diversify the recommendation lists of these systems. The findings revealed that all tested scenarios led to increased diversity across all news databases. 
The selection of a diversification strategy can be considered as a hyperparameter based on the validation set. Diversification contributes to combating the filter bubble by increasing the number of distinct news topics in the recommendation lists. Similarly, <cit.> introduced techniques to enhance variety and accuracy in session-based recommender systems using sequential rule mining and session-based k-nearest neighbor algorithms. They developed a performance balancing technique to address the filter bubble, which improved the diversity and accuracy of these session-based recommender systems. Real-world datasets from the field of music recommendation were utilized to validate their approach. Other techniques explored in the literature relied on the MovieLens dataset, a widely used benchmark of movie ratings collected from real users of the MovieLens platform. To address the limitations associated with this dataset, several studies, including <cit.>, <cit.>, and <cit.>, employed experimental techniques. For instance, Polatidis et al. <cit.> conducted experiments using various recommendation algorithms, ranging from collaborative filtering to complex fuzzy recommendation systems, to tackle the filter bubble problem. They validated their approach using a real-world dataset, and the results indicated its practicality and effectiveness. Similarly, <cit.> and <cit.> proposed a filter-free recommendation system that promotes information neutrality from a user-defined perspective. They suggested methods to improve the neutrality of the recommendation process, allowing users to have more control over their exposure to diverse content. In another study, <cit.> utilized multiple MovieLens datasets to propose two models: popularity-based and distance-based Novelty-aware Matrix Factorization (NMF). These models aimed to strike a balance between matrix factorization performance and the need for novelty in recommendations, while only marginally sacrificing accuracy. Furthermore, <cit.> developed a recommendation model and evaluated it using two publicly available datasets. The results demonstrated that their approach outperformed existing diversification methods in terms of recommendation quality. In their study, <cit.> propose three scenarios to enhance the diversification of the session-based k-nearest neighbor strategy and address the filter bubble phenomenon. The findings, based on three different news data sources, demonstrate that these diversification scenarios increase the rank- and relevance-sensitive diversity metric within the session-based k-nearest neighbor approach. In order to decrease polarization, <cit.> present a framework that aims to mitigate the formation of echo chambers. Additionally, <cit.> propose a graphical agent-based model to diversify suggestions, promoting exposure to a wider range of information. Addressing the issue of filter bubbles, <cit.> investigate the construction of recommendations to encourage diverse information exposure and challenge the formation of potential filter bubbles. In the context of social media, <cit.> suggest an echo chamber-aware buddy recommendation algorithm based on Twitter data. This algorithm learns individual and echo chamber representations from shared content and previous interactions of users and communities. Examining the recommendation environment, <cit.> explore situations where consumers remain within their filter bubbles despite receiving diverse recommendations.
They find that while recommendations can mitigate the effects of filter bubbles, they may also lead to user boredom, resulting in a trade-off between diversifying across users and within-user consumption. In the domain of diet diversification, <cit.> develop a case-based reasoning (CBR) system called DiversityBite. This system promotes diet diversification by generating dynamic criticism that guides users through different search areas and encourages them to explore alternative examples. The authors evaluate the impact of DiversityBite on diversity through user research in the recipe domain. <cit.> addressed the filter bubble issue, specifically focusing on the role of recommender systems in causing it within the News domain. To tackle this challenge, they developed a point-of-view diversification technique. This technique stands out as the first functional and active News recommender system that incorporates point-of-view diversity, distinguishing it from previous studies. Similarly, <cit.> proposed an adaptive diversity regularization CDMF (Collaborative Deep Matrix Factorization) model. Their approach utilizes social tags as a means to connect the target and source domains, resulting in improved recommendation accuracy and enhanced recommendation diversity through adaptive diversity regularization. To evaluate the effectiveness of their proposed methodology, extensive experiments were conducted on a real social media website. The analysis of the data led to several important conclusions. Firstly, the use of social tags to overcome the low recommendation accuracy caused by the target domain's sparsity proved to be particularly beneficial. Secondly, the incorporation of adaptive regularization significantly increased the individual variety of recommendations. Lastly, their proposed methodology struck a fair balance between accuracy and diversity of recommendations, while also reducing user polarization. Only two studies included in this analysis reported no evidence of a filter bubble in recommendation systems. These studies found that recommendation systems actually help users broaden their interests and create commonalities with other users. Both studies employed different approaches to analyze personalization and focused on its positive aspects. For instance, <cit.> examined data from an online music service and found that personalization does not lead to fragmentation of the online population. Instead, they observed that as users follow recommendations, their purchasing behavior becomes more similar to that of other users, as indicated by purchase similarity. Similarly, <cit.> found that perceived suggestion serendipity has a significant positive impact on both perceived preference fit and user satisfaction. Their findings suggest that simply increasing the number of innovative recommendations is not enough. Instead, recommenders should make occasional random suggestions, which can lead to a higher perception of preference fit and enjoyment for users. §.§ Existence of filter bubble In this section, we present the overall results of our study, which are based on the persuasive research, observed trends, comparative analysis, and analytical assessment conducted by all authors through a thorough debate and deliberation. Based on our findings, we have observed that research in the field of the filter bubble is growing. 
While the number of studies on the filter bubble is still relatively small due to its emerging nature, there has been a significant increase in research activity in recent years. As depicted in Figure <ref>, which illustrates the annual distribution of filter bubble studies, there were only 8 publications from 2012 to 2018, whereas in 2021 alone, there were 9 publications on the topic. Through various methodologies and datasets, the presence of a filter bubble in recommendation systems has been convincingly demonstrated. The studies have examined contextual biases using diverse datasets and platforms. Furthermore, the majority of investigations successfully illustrated the personalization effect of recommendation systems. Therefore, based on the literature we reviewed, we can confidently conclude that the filter bubble exists in recommendation systems. The literature extensively examines various forms of bias that contribute to the problem of personalization in recommendation systems (RSs). Biases can arise at different stages, including during system design and implementation, evaluation, and user interaction. These biases can significantly impact the information gathered for system improvement and customization <cit.>. One prominent form of bias is algorithmic bias, which refers to biases introduced during the design and implementation of the RS. This bias can be a result of the underlying algorithms and data processing techniques used in the system. Additionally, biases can arise from the evaluation process, where researchers may unknowingly introduce their own biases into the assessment of the system's performance. The design of the user interaction is also critical, as it can introduce additional biases in the form of presentation or exposure bias <cit.>. Furthermore, cognitive biases, such as confirmation bias and other behavioral biases, can influence the user's interactions with the system and introduce biases into the data collected. These biases can affect the feedback loops used by RSs, as they are based on implicit user feedback, such as clicks or other trackable user activities. However, due to the limitations of these feedback mechanisms, the interactions are skewed towards the options presented by the system, leading to a form of bias known as presentation or exposure bias <cit.>. According to the research, the major causes of filter bubbles in recommendation systems can be attributed to algorithmic bias, data bias, and cognitive bias. These biases can have significant implications for the personalization and customization of RSs, and addressing them is crucial to mitigate the formation of filter bubbles.

§.§ Approaches to identifying the filter bubble

Several research studies in the literature have proposed strategies to understand, avoid, and mitigate the harmful effects of the filter bubble phenomenon (refer to Tables <ref> and <ref>). This category of research explores novel ideas and diverse perspectives on how to identify and counteract the negative impact of recommendation algorithms that contribute to the formation of filter bubbles. Different approaches have been employed to determine the existence of a filter bubble, with studies utilizing benchmark datasets such as MovieLens, Twitter, or self-generated datasets. For instance, <cit.> conducted their research using a user interaction dataset from a WebTV platform and demonstrated that contextual bias leads to biased program recommendations, resulting in users being trapped in a filter bubble.
To address this, they leveraged the Twitter social stream as an external context source, expanding the selection to include content related to social media events. They investigated the Twitter histories of key programs using two trend indicators: Trend Momentum and SigniScore. The analysis showed that Trend Momentum outperformed SigniScore, accurately predicting 96 percent of all peaks in the selected candidate program titles ahead of time. While many studies rely on datasets to support their research, some propose frameworks or models without utilizing specific datasets. For example, <cit.> proposed a generic framework to prevent polarization by ensuring that each user is presented with a balanced selection of content. They demonstrated how modifying a basic bandit algorithm can improve the regret bound above the state-of-the-art while satisfying the requirements for reducing polarization. These research studies offer valuable insights and methodologies for understanding and addressing the filter bubble phenomenon, providing a foundation for developing effective strategies to mitigate its negative effects in recommendation systems. Examining users' behavior is another important aspect of identifying the filter bubble phenomenon. For instance, <cit.> incorporated the Twitter social stream as an external context source to expand the selection of items to include those related to social media events. They recognized the significance of users' behavior in determining the composition of the filter bubble. Similarly, <cit.> investigated the biases of four algorithms based on five metrics (relevance, variety, novelty, unexpectedness, and serendipity) across user groups categorized by eight different characteristics. To gain insight into the identified biases, they analyzed users' behavioral patterns, such as their inclination to provide more favorable ratings. The study found that biases varied to a greater extent among user groups based on their age and curiosity levels. Despite the range of research projects conducted in this area, there is a common observation that real-time implementation of the proposed methodologies in recommendation systems has received limited attention. The practical application and integration of these research findings into real-world recommendation systems have been identified as an important area for future exploration and development. Graph/network-based analysis and visualization have been employed by researchers to investigate the presence of the filter bubble. For instance, <cit.> developed FRediECH, a system that combines echo chamber awareness with user representations to balance the relevance, diversity, and originality of friend suggestions. FRediECH utilizes a Deep Wide architecture and a graph convolutional network to enhance the diversity of recommendations by re-ranking the results based on the network's explicit community structure. However, this approach may have limitations as it requires defining the criteria for identifying such groups. FRediECH aims to adapt the community structure to changes in user interactions and content patterns, striking a balance between relevance and variety. In another study, <cit.> employed a CNN-based deep neural network technique to construct article embeddings for news articles using information such as article title, synopsis, full text, and tags from datasets. They utilized the Maximal Marginal Relevance (MMR) re-ranking technique, which compares the results of the suggested approaches with a diversified baseline. 
The MMR-based method evaluates multiple performance criteria, such as accuracy and variety, to re-rank items from the original recommendation list. While MMR-based methods help reduce the impact of the filter bubble, they are often criticized for being computationally expensive and sacrificing relevance for diversity, making them less feasible in real-world scenarios. Addressing these concerns, <cit.> proposed a novel approach called Targeted Diversification VAE-based Collaborative Filtering (TD-VAE-CF) to mitigate political polarization in media recommendations. This approach aims to strike a balance between relevance and diversity by leveraging the capabilities of Variational Autoencoders (VAE) in generating diverse and targeted recommendations. After identifying the presence of filter bubbles, many studies have proposed potential solutions. The first category of solutions focuses on bypassing or modifying algorithms. In our selected research, a significant number of solutions concentrated on enhancing content diversity. For instance, <cit.> and <cit.> presented scenarios to make session-based recommendation systems more diversity-aware by considering not only a user's current session interactions but also diverse content from other sessions. Additionally, <cit.> proposed a flexible framework that allows users to have control over the source from which recommendations are selected, thereby reducing polarization in personalized systems. Furthermore, some researchers have identified strategies to enable users to explore fresh information that was previously unknown to them (<cit.>). To achieve content diversity, the two most commonly used approaches in recommendation systems are re-ranking and diversity modeling. Re-ranking methods, such as those proposed by <cit.>, <cit.>, and <cit.>, involve post-processing techniques that reorder the ranked list provided by the baseline recommender. They assess the diversity of suggestions on the candidate list and perform a re-ranking based on this criterion. While these strategies can enhance diversity, they often require additional post-processing steps and can be computationally expensive. On the other hand, diversity modeling approaches, as suggested by <cit.>, <cit.>, and <cit.>, involve modifying the core algorithm itself to make it more diversity-aware. These approaches adapt the recommendation algorithm to incorporate diversity as a key consideration (see Table <ref>). Several researchers have explored the incorporation of diversity regularization into matrix factorization (MF) models to achieve multi-objective recommendations that maximize both accuracy and variety. In their study, <cit.> utilize a probabilistic matrix factorization approach (<cit.>) to predict ratings, which has shown significant success in terms of prediction accuracy and scalability. Similarly, <cit.> propose two models, namely popularity-based and distance-based novelty-aware MF, which allow for a trade-off between matrix factorization performance and the requirement for novelty while only moderately sacrificing accuracy. The results of their experiments suggest that it is possible to achieve high accuracy while also introducing unique and diverse recommendations. In summary, the majority of research in this area focuses on enhancing diversity in recommendations while still maintaining a level of personalization. 
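To make the re-ranking idea described above concrete, the sketch below shows a minimal MMR-style post-processing step. It is purely illustrative and not taken from any of the surveyed systems: the toy item embeddings, the cosine-similarity measure, and the trade-off weight lam are our own assumptions. Given relevance scores from a baseline recommender, it greedily re-orders the candidate list by trading off relevance against similarity to the items already selected, and then reports the average pairwise dissimilarity (intra-list diversity) of the final list.

```python
import numpy as np

def mmr_rerank(relevance, item_vectors, k, lam=0.7):
    """Greedy MMR re-ranking: at each step pick the candidate maximizing
    lam * relevance - (1 - lam) * (max similarity to already-selected items)."""
    # cosine similarity between all candidate items (illustrative choice)
    norms = np.linalg.norm(item_vectors, axis=1, keepdims=True)
    sim = (item_vectors @ item_vectors.T) / (norms * norms.T + 1e-12)

    candidates = list(range(len(relevance)))
    selected = []
    while candidates and len(selected) < k:
        def mmr_score(i):
            max_sim = max((sim[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * max_sim
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

def intra_list_diversity(items, item_vectors):
    """Average pairwise dissimilarity (1 - cosine similarity) of a recommendation list."""
    norms = np.linalg.norm(item_vectors, axis=1, keepdims=True)
    sim = (item_vectors @ item_vectors.T) / (norms * norms.T + 1e-12)
    pairs = [(i, j) for idx, i in enumerate(items) for j in items[idx + 1:]]
    return float(np.mean([1 - sim[i][j] for i, j in pairs])) if pairs else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    relevance = rng.random(20)          # baseline recommender scores (toy data)
    item_vectors = rng.random((20, 8))  # toy item embeddings (assumed, for illustration)
    top_k = mmr_rerank(relevance, item_vectors, k=5, lam=0.7)
    print("re-ranked list:", top_k)
    print("intra-list diversity:", intra_list_diversity(top_k, item_vectors))
```

Raising lam favours the baseline ranking, while lowering it pushes the re-ranked list towards greater intra-list diversity, which is precisely the accuracy-diversity trade-off (and its computational overhead) discussed above.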
Additionally, there is a strong emphasis on making the recommendation process more transparent and explainable, as well as involving users in the decision-making process. Many researchers have also highlighted the importance of developing frameworks or models that are efficient and feasible for real-world scenarios. Building upon these insights, the authors of this study propose generalized methods to mitigate the filter bubble phenomenon in recommender systems, which will be discussed in the next section.

§ PREVENTING FILTER BUBBLE

Despite being a relatively nascent area of research, this study has successfully identified commonalities and variations in the understanding of echo chambers in recommender systems. It provides a comprehensive and critical analysis of peer-reviewed literature, shedding light on this significant issue. The field itself is complex and fragmented, characterized by challenges in collecting, interpreting, and comprehending variables and data. Nevertheless, the importance and potential of studying echo chambers in recommender systems are evident. In the subsequent sections, we will present several viable approaches to addressing the filter bubble problem. We strongly believe that user awareness is a crucial initial step towards mitigating this issue. Informed users can question why certain recommendations are suggested and understand the user features influencing those recommendations. This awareness also empowers users to recognize bias in the presented information and encourages them to explore opposing opinions and recommendations. Additionally, we will propose strategies to tackle the creation of filter bubbles in recommender systems.

§.§ Modeling filter bubble as multi-objective optimization problem

The filter bubble is a result of highly personalized recommendations.
To avoid this situation, it is necessary to introduce diversity into the recommendations, which can be achieved through various means, including random recommendations. However, personalized recommendations based on previous user experiences cannot be completely disregarded. The solution lies in finding a balance between personalized and diversified recommendations, recognizing that both components are necessary but inherently compete with each other. This conflict can be formulated and modeled as a multi-objective optimization problem. In a multi-objective optimization problem, the solution space consists of a set of 'non-inferior' or 'non-dominated' solutions known as the Pareto-optimal front. Theoretically, this set comprises infinitely many points, with no solution being considered better than others. For instance, in the context of addressing the filter bubble, the Pareto set may include solutions such as (100% personalization, 0% diversification), (90% personalization, 10% diversification), …, (50% personalization, 50% diversification), …, (0% personalization, 100% diversification). The first solution in the set, (100% personalization, 0% diversification), focuses solely on personalized recommendations, while the last solution, (0% personalization, 100% diversification), prioritizes diverse recommendations. However, there exist many intermediate solutions that aim to strike a balance between both objectives. It is important to note that no single solution is superior to others since each solution offers better values for a specific objective. The concept of the Pareto-optimal set is illustrated in Figure <ref>. Here, the solutions A, B, and C are incomparable, but all of them are better than solutions D and E. If the filter bubble problem is posed as a bi-objective optimization problem, it may be represented as Eq. <ref>:

    Maximize   Diversity Score
    Maximize   Personalization Score

The Diversity Score measures the degree of diversified recommendations, while the Personalization Score represents the degree of personalized recommendations in the final outcome, both normalized to the range [0,1]. The Pareto set of Eq. <ref> is depicted in Figure <ref>. In this figure, point P (0,1) represents a solution that emphasizes full personalization, while point D (1,0) represents a completely random recommendation. Recommendations A, B, and C fall within the Desirable Area of the Pareto-optimal front, exhibiting non-zero values for both scores, but with varying degrees. Recommendation A contains more personalized information than B and C, while C has a higher level of diversity. Once we have developed such a theoretical model, the next step is to define the mathematical formulation of Eq. <ref>, which involves determining the formulas for calculating the Diversity Score and the Personalization Score. By solving Eq. <ref>, we can obtain a set of recommendations that have incomparable values of personalization and diversity scores. Recommendations falling within the Desirable Area are expected to generate bubble-free results.

§.§ Explainable Recommender Systems (XRSs)

Based on the insights gained from our research, we propose an architecture for integrated tools that can be employed in recommendation systems to mitigate the formation of filter bubbles. Drawing upon the findings of our literature analysis, we suggest that this integrated tool should serve two primary functions: (1) alerting users to the potential presence of a filter bubble, and (2) allowing users to customize the extent of personalization.
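To illustrate how such a tool could operationalize the two objectives of Eq. <ref> together with a user-controlled degree of personalization, the sketch below is a minimal, purely illustrative example: the candidate lists, their scores, and the weighting scheme are our own assumptions, not part of any surveyed system. It filters a set of candidate recommendation lists down to the non-dominated (Pareto-optimal) ones and then lets a user-chosen personalization degree select a single point on that front.

```python
from typing import Dict, Tuple

def pareto_front(scored: Dict[str, Tuple[float, float]]) -> Dict[str, Tuple[float, float]]:
    """Keep candidates for which no other candidate is at least as good on both
    objectives (personalization, diversity) and strictly better on at least one."""
    front = {}
    for name, (p, d) in scored.items():
        dominated = any(
            (p2 >= p and d2 >= d) and (p2 > p or d2 > d)
            for other, (p2, d2) in scored.items() if other != name
        )
        if not dominated:
            front[name] = (p, d)
    return front

def pick_by_personalization_degree(front: Dict[str, Tuple[float, float]], degree: float) -> str:
    """Select the Pareto point best matching a user-chosen personalization degree
    (degree = 1 -> fully personalized, degree = 0 -> fully diversified)."""
    return max(front, key=lambda n: degree * front[n][0] + (1 - degree) * front[n][1])

if __name__ == "__main__":
    # (personalization score, diversity score) of candidate lists, both in [0, 1] (toy values)
    candidates = {
        "A": (0.90, 0.20),
        "B": (0.60, 0.55),
        "C": (0.30, 0.85),
        "D": (0.40, 0.40),   # dominated by B
        "E": (0.25, 0.30),   # dominated by B and C
    }
    front = pareto_front(candidates)
    print("Pareto-optimal lists:", front)   # A, B, and C survive; D and E are dominated
    print("user prefers diversity:", pick_by_personalization_degree(front, degree=0.2))
    print("user prefers personalization:", pick_by_personalization_degree(front, degree=0.8))
```

In this toy example, lists A, B, and C form the Pareto front while D and E are dominated, mirroring the situation depicted in Figure <ref>; moving the degree from 0 towards 1 shifts the selected list from the diversity end to the personalization end of the front.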
[Figure <ref>: An illustration of the effect of XRSs on the filter bubble.]

In recent times, there has been a growing interest in explainable artificial intelligence (XAI) across various research domains, aiming to address the challenges posed by increasing complexity, scalability, and automation <cit.>. Consequently, the development of explainable recommendation systems (XRSs) has gained momentum. Notably, researchers such as Peake et al. <cit.> have proposed a novel approach for extracting explanations from latent factor recommendation systems by training association rules on the outcomes of a matrix factorization black-box model. Their method effectively balances interpretability and accuracy without compromising flexibility or relying on external data sources. Explanations play a crucial role in ensuring that users comprehend and trust recommendation systems that prioritize explainability. Without accompanying explanations, there is a risk that the recommendations generated by a system may be perceived as untrustworthy or lacking authenticity <cit.>. By understanding the rationale behind a recommendation, users can identify potential filter bubbles and take steps to burst them. For instance, if an item is accompanied by a rating indicating the level of personalization in the suggestion, whether it is based on previous searches or purely random <cit.>, users can gain insights into why the recommendation is being made. In line with designing a fair and explainable system, an XRS focused on food recipe recommendations has been proposed <cit.>. The notable contribution of this recommendation approach is its comprehensive inclusion of explainability features, which not only provide explanations for recommendations but also raise nutrition awareness. By incorporating additional aspects into the explanation process, this approach aims to enhance user satisfaction and understanding, making it a valuable component of an XRS. Balancing the trade-off between personalization and diversification is crucial when recommending items in order to address the filter bubble phenomenon. Customized recommendations are important as they facilitate the user's search for relevant items. However, it is equally important to provide diverse results to break the bubble effect. Therefore, we aim to incorporate this trade-off into our tools and give users the ability to choose the type of recommendations they desire. By providing users with control over this trade-off, the recommendation system can achieve its goal while also preventing users from being trapped in a filter bubble. For instance, if a user prefers items that are similar to their previous searches, the degree of personalization can be adjusted to provide more tailored recommendations. On the other hand, if a user wants to explore a wider range of items without being influenced by their past data, they can modify the degree of personalization to receive more diverse recommendations. The tool proposed in Figure <ref> aims to provide users with a better understanding of the recommendations they receive and empower them to customize their future searches to break free from the filter bubble. By offering transparency and explanation, users can gain insights into why a specific recommendation was made, allowing them to make informed decisions and challenge the bubble effect. Figure <ref> also illustrates a comparison between the proposed explainable recommendation system and a standard recommendation system.
The added layer of explainability in the proposed system enhances user understanding and trust, promoting a more satisfying user experience. Figure <ref> focuses on the tool's interface, using a movie suggestion example. In this scenario, the user's preferences primarily revolve around action, thriller, and drama movies, as depicted in the figure. When the system is personalized, the user is presented with recommendations that align with their preferred genres. On the other hand, when the degree of personalization is adjusted towards diversity, the system recommends a broader range of content, allowing the user to explore movies beyond their usual preferences. §.§ Approaches for diversification As discussed in previous sections, the primary solution to combat the filter bubble problem is to incorporate diverse content in recommendations. However, it is crucial to define diversity itself as it encompasses various types, each with its specific definition and implications. It is worth noting that current recommendation systems intentionally introduce some level of variety to ensure that the recommended items are not excessively similar <cit.>. Additionally, other types of diversity, such as personalized and temporal diversity, are also being utilized in recommendation systems <cit.>. While measures of diversity are already employed in recommendation systems, their objective has not always been to address the filter bubble issue but rather to provide users with a range of somewhat dissimilar options to choose from. Consequently, it becomes crucial to define diversity in the context of the filter bubble phenomenon. In selecting an appropriate diversity measure, several key considerations should be taken into account. * Opposite of similarity: In early recommendation systems, diversity was viewed as the opposite of similarity and defined as (1 - similarity), where similarity is a measure of the proximity between user interests and recommended items <cit.>. In a list of items, diversity is calculated as the average dissimilarity between all pairs of items. * Diversity through Rearrangement/Re-ranking: This approach involves rearranging the list of recommended items generated by the algorithm to improve the diversity metric <cit.>. It has been observed that this simple approach works well in certain scenarios. It can be seen as an optimization problem that aims to maximize the diversity metric. * Diversity in items and/or source: It is important to decide whether diversity should be introduced only in the recommended content or in the content provider as well <cit.>. For example, in online shopping, diversified items may include different garments, while diversified sources may involve different brands. * Personalized/User-specific Diversity: Diversity can be introduced irrespective of user profiles, which is referred to as non-personalized diversity. However, it is considered better to capture the diversity needs of individuals by modeling their characteristics and incorporating them into the diversity metric <cit.>. Such diversity measures are known as personalized matrices. * Temporal Diversity: In certain domains, recommendations need to consider the dimension of time, giving rise to the concept of temporal diversity <cit.>. News recommendation systems, for instance, must account for rapidly changing news topics, as well as the evolving preferences of users over different time periods (weekly, monthly, yearly, or seasonally). 
Thus, temporal diversity should be designed to address users' short- and long-term preferences. * Hybrid Diversity: A diversity metric may incorporate multiple aspects discussed above, resulting in a hybrid diversity measure <cit.>. A simple implementation could involve calculating a weighted sum of various diversity measures to capture different dimensions of diversity. Overall, the process of selecting the right diversity metric is a meticulous task that involves careful consideration of various factors. To ensure an effective selection, the following steps need to be followed: * Study the specific domain of the recommendation system under consideration. This involves understanding the characteristics of the items, the preferences of the users, and any temporal or contextual factors that may influence recommendations. * Define diversity in the context of the predetermined domain. This entails identifying the specific dimensions or aspects of diversity that are relevant and meaningful for the given domain. * Select appropriate diversity measure(s) that align with the defined notion of diversity. This may involve choosing from existing diversity metrics or developing new ones tailored to the specific requirements of the domain. * Combine the selected diversity measure(s) with an appropriate prevention approach to effectively address the filter bubble problem. This could involve incorporating diversity constraints into recommendation algorithms or utilizing post-processing techniques for re-ranking recommendations. * Gather feedback from users, either implicitly through user interactions or explicitly through surveys or interviews, to evaluate the effectiveness of the diversity measures and their impact on user satisfaction. * Adapt and modify the diversity measure(s) based on the received feedback. This iterative process ensures that the diversity metric continues to capture the evolving needs and preferences of the users. § OPEN ISSUES AND FUTURE RESEARCH DIRECTIONS Several open challenges related to overcoming filter bubble in RSs exist, including but not limited to: §.§ Open issues * Defining diversity in a domain-specific context: Diversity plays a critical role in addressing the filter bubble problem, but its definition may vary depending on the recommendation domain <cit.>. For instance, diversity in a movie recommendation system may differ from diversity in an online clothing portal. It is important to establish domain-specific definitions of diversity and develop mathematical frameworks accordingly. It is worth noting that similar concepts to diversity, such as novelty and serendipity, have been discussed in the literature <cit.>. While diversity refers to the presence of variety in a recommended item list, novelty captures the difference between past and present recommendations, and serendipity occurs when new and relevant but previously unknown items are included in the recommendations. The choice of which concept or combination to utilize should be based on the specific requirements of the application. * Exploring contrasting recommendations or opinions: When addressing the filter bubble issue, incorporating contrasting recommendations or opinions can promote a more balanced understanding, particularly in news recommendation systems. However, it is necessary to define the concept of "Opposite Recommendations" and establish domain-specific definitions to effectively incorporate this approach. 
It should be noted that defining "Opposite" is relatively straightforward in domains like news recommendation but may pose challenges in other domains, such as book recommendations <cit.>. * Identifying sources responsible for spreading fake news: Identifying sources responsible for spreading fake news is crucial in addressing the filter bubble problem. Fake news or misinformation greatly contributes to the issue. However, developing advanced natural language processing (NLP) techniques that can effectively detect fake news poses a challenge, especially when dealing with aspects such as sarcasm and deceptive language usage. Deep learning-based NLP models like Deep Bidirectional Transformers, along with techniques like transfer learning and fine-tuning, can be explored to enhance language understanding and mitigate the negative impact of the filter bubble <cit.>. * Establishing the relationship between domain-specific external factors and the filter bubble: Establishing the relationship between domain-specific external factors and the filter bubble is crucial in understanding and addressing this phenomenon. Various external factors, such as the presence of fake news in news recommendation systems, contribute to the filter bubble problem. It is important to investigate and comprehend the connection between these factors and the filter bubble. Tracing the origins of misinformation is a vital step in addressing the filter bubble, and advanced natural language processing (NLP) techniques can greatly assist in this process. Furthermore, the impact of the filter bubble can vary significantly across different applications. For example, in a food recommendation system, a filter bubble can have detrimental effects on users' well-being by excluding nutritious diets and promoting a particular genre of food. The findings of <cit.> also highlight how the filter bubble effect can introduce intentional biases when providing choices for restaurants and related domains. On the other hand, the influence of the filter bubble may be more pronounced in a video streaming platform like YouTube, while having only a marginal effect on users in a dress/outfit recommendation system for an online clothing portal. It is essential to recognize that generalized solutions may not be effective for every application, emphasizing the need for domain-specific analysis of the filter bubble. Each application requires a tailored approach and a deeper understanding of its specific dynamics to effectively mitigate the filter bubble's impact. * Enhancing data quality for visualization and integration: Enhancing the quality of data is of utmost importance for effective visualization and integration in the context of the filter bubble. As emphasized by <cit.>, researchers should dedicate efforts to explore methods that can enhance the quality of data used in this context. By improving data quality, we can ensure more reliable and accurate results in visualization and integration processes. Furthermore, it is crucial to address the issue of information cocooning that is prevalent in news recommender systems. These systems often filter out content that users may find uninteresting, resulting in a narrowing of their information exposure over a period of approximately seven days. This can have significant implications, particularly for individuals who are heavily reliant on social media platforms. It is imperative for the research community to tackle the challenge of designing evaluation mechanisms that incorporate social filtering. 
By doing so, we can mitigate the potential negative consequences of information cocooning and promote a more diverse and balanced information environment for users <cit.>. §.§ Future Research Directions Several promising research directions could be pursued to mitigate the filter bubble problem: * Diversity-aware recommendations: Designing algorithms that aim to increase the diversity of recommendations can help in mitigating the filter bubble. These algorithms need to balance the trade-off between relevance and diversity <cit.>. * Serendipity in recommendations: Developing recommendation techniques that emphasize serendipity (unexpected but useful recommendations) could help users discover new, out-of-bubble content. These methods would encourage exposure to diverse and novel items that the user might not have found otherwise <cit.>. * Explainability and transparency: Explainable AI can help users understand why a particular recommendation was made. Seeing the rationale behind the recommendations might make users more receptive to different content, reducing the filter bubble effect <cit.>. * User-controlled recommendations: Allowing users to have more control over their recommendations, such as adjusting the degree of novelty or diversity, could also help alleviate the filter bubble problem. * Cross-domain recommendations: Leveraging data from different domains can help in providing a broader range of recommendations. For example, if a user interacts with various content types (books, movies, music), these can be used to cross-pollinate recommendations across these domains <cit.>. * Fairness and bias mitigation: Actively researching and implementing algorithms that take into account and mitigate biases in recommender systems can help to ensure that the system does not favour certain types of content, hence reducing the risk of a filter bubble <cit.>. * Long-term user modeling: Traditionally, recommender systems have focused on immediate rewards (clicks, purchases, etc.), leading to a filter bubble. Research into long-term user modeling can help understand the evolving needs and tastes of users, potentially aiding in delivering a more diverse set of recommendations <cit.>. § CONCLUSION The term "Filter Bubble" refers to the phenomenon where internet personalization isolates individuals by presenting them with content and perspectives that align with their existing preferences. Consequently, users are exposed to a limited range of information or similar content on related topics. This issue gained attention in 2009 when platforms like Google started customizing search results based on users' previous interactions, expressed preferences, and various other factors <cit.>. Many individuals rely on recommendation systems (RSs) to assist them in finding products that align with their specific needs. While RSs offer numerous benefits, they also have the potential to trap users within a filter bubble due to their heavy reliance on similarity measurements. In this Systematic Literature Review, we investigate the existence, causes, and potential solutions to the filter bubble problem in recommendation systems. We addressed the research problems by conducting an extensive analysis of the studies reported in the literature. The findings confirm the presence of a filter bubble in recommendation systems. This raises the question: What are the underlying causes of excessive personalization in RSs? The literature points to algorithmic bias and cognitive bias as the primary culprits. 
Algorithmic bias arises when biases are introduced during the design and implementation of a system, while cognitive biases, such as confirmation bias, taint the interaction data. To address this issue, diversification techniques are commonly employed. In recommendation systems, re-ranking and diversity modeling are the two most prevalent methods of diversification. Re-ranking involves post-processing the ranked list provided by a baseline recommender, but this approach increases the computational complexity of the overall algorithm (<cit.>). On the other hand, diversity modeling techniques modify the core algorithm to incorporate diversity-awareness (<cit.>). Our work has made significant contributions in reviewing the existing literature across various domains within recommender systems. We have examined the causes of the filter bubble phenomenon, identified trends, and proposed strategies for its identification and prevention. Our key findings highlight the importance of diversity in recommendations while maintaining personalized experiences, as well as the need for transparency and explainability in the recommendation process. While recent studies have expanded our understanding of the filter bubble, it is important to note that the complexity of these models often hinders their practical adoption. Taking this into consideration, we have outlined generalized methods that can effectively mitigate the filter bubble issue in recommender systems. One promising approach involves employing multi-objective optimization techniques to strike a balance between personalization and diversification. In addition, we emphasize the significance of incorporating an explanatory framework that provides users with insights into why a particular item is recommended. To this end, we present the components of an integrated tool in the form of an architectural map, which can aid in the prevention of filter bubbles and enhance user understanding and control over recommendations. The present study sheds light on several promising research avenues that lie ahead. One important aspect is the establishment of criteria for selecting appropriate definitions of personalization and diversification, along with the development of corresponding mathematical metrics. It is evident that these definitions should take into account the specific characteristics of the domain or application under consideration. In fact, each component of the strategy aimed at mitigating the filter bubble issue should be tailored to the particular domain. Hence, there is a pressing need to devise domain-specific strategies for resolving the filter bubble problem. Such strategies should address various concerns, including assessing the degree of filter bubble present in the application, understanding its impact, determining the necessity for reduction, identifying suitable measures of personalization and diversification, and selecting appropriate prevention methodologies. By taking a domain-centric approach, we can develop effective solutions that are tailored to the unique challenges and requirements of each application. 100 url@samestyle sayed2021intelligent A. Sayed, Y. Himeur, A. Alsalemi, F. Bensaali, and A. Amira, “Intelligent edge-based recommender system for internet of energy applications,” IEEE Systems Journal, 2021. atalla2023intelligent S. Atalla, M. Daradkeh, A. Gawanmeh, H. Khalil, W. Mansoor, S. Miniaoui, and Y. 
Himeur, “An intelligent recommendation system for automating academic advising based on curriculum analysis and performance modeling,” Mathematics, vol. 11, no. 5, p. 1098, 2023. himeur2021survey Y. Himeur, A. Alsalemi, A. Al-Kababji, F. Bensaali, A. Amira, C. Sardianos, G. Dimitrakopoulos, and I. Varlamis, “A survey of recommender systems for energy efficiency in buildings: Principles, challenges and prospects,” Information Fusion, vol. 72, pp. 1–21, 2021. varlamis2022smart I. Varlamis, C. Sardianos, C. Chronis, G. Dimitrakopoulos, Y. Himeur, A. Alsalemi, F. Bensaali, and A. Amira, “Smart fusion of sensor data and human feedback for personalized energy-saving recommendations,” Applied Energy, vol. 305, p. 117775, 2022. dokoupil2022long P. Dokoupil, “Long-term fairness for group recommender systems with large groups,” in Proceedings of the 16th ACM Conference on Recommender Systems, 2022, pp. 724–726. tahmasebi2021hybrid F. Tahmasebi, M. Meghdadi, S. Ahmadian, and K. Valiallahi, “A hybrid recommendation system based on profile expansion technique to alleviate cold start problem,” Multimedia Tools and Applications, vol. 80, no. 2, pp. 2339–2354, 2021. ali2022citation Z. Ali, G. Qi, K. Muhammad, S. Bhattacharyya, I. Ullah, and W. Abro, “Citation recommendation employing heterogeneous bibliographic network embedding,” Neural Computing and Applications, vol. 34, no. 13, pp. 10 229–10 242, 2022. wu2022prediction H.-h. Wu, G. Ke, Y. Wang, and Y.-T. Chang, “Prediction on recommender system based on bi-clustering and moth flame optimization,” Applied Soft Computing, vol. 120, p. 108626, 2022. sardianos2020rehab C. Sardianos, I. Varlamis, G. Dimitrakopoulos, D. Anagnostopoulos, A. Alsalemi, F. Bensaali, Y. Himeur, and A. Amira, “Rehab-c: Recommendations for energy habits change,” Future Generation Computer Systems, vol. 112, pp. 394–407, 2020. wu2022news C. Wu, F. Wu, T. Qi, C. Li, and Y. Huang, “Is news recommendation a sequential recommendation task?” in Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022, pp. 2382–2386. arif2021towards M. Arif, S. S. Sohail, M. T. Alam, S. Ubaid, M. T. Nafis, G. Wang et al., “Towards a two-tier architecture for privacy-enabled recommender systems (pers),” in Inernational Conference on Ubiquitous Security.1em plus 0.5em minus 0.4emSpringer, 2021, pp. 268–278. himeur2022blockchain Y. Himeur, A. Sayed, A. Alsalemi, F. Bensaali, A. Amira, I. Varlamis, M. Eirinaki, C. Sardianos, and G. Dimitrakopoulos, “Blockchain-based recommender systems: Applications, challenges and future opportunities,” Computer Science Review, vol. 43, p. 100439, 2022. boratto2019effect L. Boratto, G. Fenu, and M. Marras, “The effect of algorithmic bias on recommender systems for massive open online courses,” in European Conference on Information Retrieval.1em plus 0.5em minus 0.4emSpringer, 2019, pp. 457–472. protasiewicz2016recommender J. Protasiewicz, W. Pedrycz, M. Kozłowski, S. Dadas, T. Stanisławek, A. Kopacz, and M. Gałężewska, “A recommender system of reviewers and experts in reviewing problems,” Knowledge-Based Systems, vol. 106, pp. 164–178, 2016. himeur2022latest Y. Himeur, S. S. Sohail, F. Bensaali, A. Amira, and M. Alazab, “Latest trends of security and privacy in recommender systems: A comprehensive review and future perspectives,” Computers & Security, p. 102746, 2022. ashraf2023private M. Ashraf, S. Fatima, M. Fatma, S. Muntaha, U. Rooman, S. S. Sohail, and A. 
Sarosh, “Private browsing does not affect google personalization: An experimental evaluation,” in Proceedings of 3rd International Conference on Artificial Intelligence: Advances and Applications: ICAIAA 2022.1em plus 0.5em minus 0.4emSpringer, 2023, pp. 457–465. chen2020bias J. Chen, H. Dong, X. Wang, F. Feng, M. Wang, and X. He, “Bias and debias in recommender system: A survey and future directions,” arXiv preprint arXiv:2010.03240, 2020. gao2022mitigating Z. Gao, T. Shen, Z. Mai, M. R. Bouadjenek, I. Waller, A. Anderson, R. Bodkin, and S. Sanner, “Mitigating the filter bubble while maintaining relevance: Targeted diversification with vae-based recommender systems,” in Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022, pp. 2524–2531. olshannikova2022utilizing E. Olshannikova, E. Skenderi, T. Olsson, S. Koivunen, and J. Huhtamäki, “Utilizing structural network positions to diversify people recommendations on twitter,” Advances in Human-Computer Interaction, vol. 2022, 2022. alam2022towards M. Alam, A. Iana, A. Grote, K. Ludwig, P. Müller, and H. Paulheim, “Towards analyzing the bias of news recommender systems using sentiment and stance detection,” in Companion Proceedings of the Web Conference 2022, 2022, pp. 448–457. cai2023causal W. Cai, F. Feng, Q. Wang, T. Yang, Z. Liu, and C. Xu, “A causal view for item-level effect of recommendation on user preference,” in Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 2023, pp. 240–248. hildebrandt2022issue M. Hildebrandt, “The issue of proxies and choice architectures. why eu law matters for recommender systems,” Frontiers in Artificial Intelligence, p. 73, 2022. spohr2017fake D. Spohr, “Fake news and ideological polarization: Filter bubbles and selective exposure on social media,” Business information review, vol. 34, no. 3, pp. 150–160, 2017. curkovic2020re M. Ćurković and A. Košec, “(re) search filter bubble effect—an issue still unfairly neglected,” Advances in Nutrition, vol. 11, no. 3, pp. 744–744, 2020. ref1 J. Bobadilla, F. Ortega, A. Hernando, and A. Gutiérrez, “Recommender systems survey,” Knowledge-based systems, vol. 46, pp. 109–132, 2013. ref2 D. Jannach, M. Zanker, A. Felfernig, and G. Friedrich, Recommender systems: an introduction.1em plus 0.5em minus 0.4emCambridge University Press, 2010. ref3 F. Ricci, L. Rokach, and B. Shapira, “Introduction to recommender systems handbook,” in Recommender systems handbook.1em plus 0.5em minus 0.4emSpringer, 2011, pp. 1–35. ref53 J. L. Herlocker, J. A. Konstan, L. G. Terveen, and J. T. Riedl, “Evaluating collaborative filtering recommender systems,” ACM Transactions on Information Systems (TOIS), vol. 22, no. 1, pp. 5–53, 2004. ref54 B. Smyth and P. McClave, “Similarity vs. diversity,” in International conference on case-based reasoning.1em plus 0.5em minus 0.4emSpringer, 2001, pp. 347–361. ref55 S. M. McNee, J. Riedl, and J. A. Konstan, “Being accurate is not enough: how accuracy metrics have hurt recommender systems,” in CHI'06 extended abstracts on Human factors in computing systems, 2006, pp. 1097–1101. ref56 C.-N. Ziegler, S. M. McNee, J. A. Konstan, and G. Lausen, “Improving recommendation lists through topic diversification,” in Proceedings of the 14th international conference on World Wide Web, 2005, pp. 22–32. ref57 P. Adamopoulos and A. 
§ ACKNOWLEDGMENTS We would like to acknowledge the support of the UK Engineering and Physical Sciences Research Council (EPSRC) Grants Ref. EP/M026981/1, EP/T021063/1, EP/T024917/. § AUTHOR CONTRIBUTIONS STATEMENT The corresponding author (S.S.S) initiated the idea of the review and discussed it with all co-authors, who all contributed to writing and structuring the article. S.S.S, Q.M.A and R.I worked on literature collection via searching over academic databases. The diagrams were suggested by S.S.S and created by R.I.; Y.H and A.A supervised the idea and structure of the paper. All authors reviewed and revised the manuscript. S.S.S, M.N and Q.M.A have contributed equally to the manuscript. § COMPETING INTERESTS The authors declare no competing interests.
http://arxiv.org/abs/2307.02995v1
20230706135614
Collective flow and the fluid behavior in p/d/$^3$He+Au collisions at $\sqrt{s_{NN}} = 200$ GeV
[ "Zeming Wu", "Baochi Fu", "Shujun Zhao", "Runsheng Liu", "Huichao Song" ]
nucl-th
[ "nucl-th", "hep-ph", "nucl-ex" ]
School of Physics, Peking University, Beijing 100871, China Collaborative Innovation Center of Quantum Matter, Beijing 100871, China Center for High Energy Physics, Peking University, Beijing 100871, China By varying the intrinsic initial geometry, the p/d/^3He+Au collisions at the Relativistic Heavy Ion Collider (RHIC) provide a unique opportunity to understand the collective behavior in the small systems. In this paper, we employ the hybrid model   with   initial conditions to study the collective flow and the fluid behavior in p/d/^3He+Au collisions. With fine-tuned parameters,   can describe the v_2(p_T) and v_3(p_T) data from the PHENIX and STAR collaborations. However, for these parameter sets tuned to fit the STAR data, the hydrodynamic simulations have already beyond their limits with the average Knudsen number ⟨ K_n ⟩ obviously larger than one. Our calculations demonstrate that, for a meaningful evaluation of the fluid behavior in the small systems, model simulations should also pay attention to the validity range of hydrodynamics. Collective flow and the fluid behavior in p/d/^3He+Au collisions at √(s_NN) = 200 GeV Zeming Wu1,2 Baochi Fu3,1,2email: fubaochi@gmail.com Shujun Zhao1,2 Runsheng Liu1,2 Huichao Song1,2,3email: huichaosong@pku.edu.cn Received: date / Revised version: date ========================================================================================================================================= § INTRODUCTION Over the last two decades, the properties of the extremely hot and dense QCD matter, the quark-gluon plasma (QGP), has been studied intensively by the relativistic heavy-ion programmes at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). It has been found that the QGP created in large collision systems, such as Au+Au and Pb+Pb collisions, behaves as an almost “perfect liquid" <cit.>. Its strong collective expansion and associated flow observables have been successfully described by hydrodynamic calculations with small specific shear viscosity, which is close to the lowest KSS bound <cit.>. The small collision systems, such as the p+Pb and p+p collisions at the LHC, were originally intended to provide the reference data for the large collision systems. However, similar collective behaviour has been observed in the high multiplicity events, including the “double ridge” structure in two particle correlations<cit.>, multi-particle culumants <cit.>, and the mass ordering of the anisotropic flow of identified particles <cit.>, see <cit.> for recent review. To understand these flow-like observables, a key question is whether the QGP droplets were formed in the small collision systems. The theoretical study can be classified into two scenarios, the final state effect associated with the QGP fluid expansion and the initial state effect without the QGP formation. In the final state scenario, the collective flow is related to the initial state geometry through the non-linear evolution, where hydrodynamic or kinetic calculations can roughly reproduce the data with tuned parameters <cit.>. In contrast, the initial state effect describes the observed anisotropies by momentum correlation of the initially produced particles in the color field domain. One typical model is the Color Glass Condensate(CGC), which can qualitatively describe the experimental measurements such as two-particle and multi-particle correlations, long-range rapidity correlations and mass ordering <cit.>. 
Compared to the final state scenario, a major difference is that the correlations are expected to be weaker for larger collision systems with more uncorrelated domains involved. It was believed that comparative runs of p/d/^3He + Au collisions with variation of the initial state geometry could provide useful information to identify the above two scenarios for flow-like signals in the small systems <cit.>. More specifically, with the initial state geometry dominated by the nucleon position fluctuations, models such as MC-Glauber show an eccentricity ordering as: _2^ p+Au < _2^ d+Au≃_2^^3He+Au, _3^ p+Au≃_3^ d+Au < _3^^3He+Au. Hydrodynamic evolution responses directly to such initial state eccentricities and generates the associated flow ordering as: v_2^ p+Au < v_2^ d+Au≈ v_2^^3He+Au, v_3^ p+Au≈ v_3^ d+Au < v_3^^3He+Au <cit.>. In contrast, CGC model calculations are not sensitive to such initial state geometry, predicting the flow anisotropies of much smaller magnitude for the three collision systems as: v_2^ p+Au≳ v_2^ d+Au≳ v_2^^3He+Au, v_3^ p+Au≳ v_3^ d+Au≳ v_3^^3He+Au <cit.>. With this motivation, the PHENIX Collaboration has measured the flow anisotropies of v_2(p_T) and v_3(p_T) in p/d/^3He+Au collisions using the event-plane method <cit.> and the “3 × 2PC" method <cit.>, respectively. Both methods observed similar flow ordering: v_2^p+Au < v_2^d+Au≈ v_2^^3He+Au and v_3^p+Au≈ v_3^d+Au < v_3^^3He+Au <cit.>, which consists with the hydrodynamic predictions with MC-Glauber initial conditions <cit.>. The PHENIX measurements also largely ruled out the CGC calculations with only initial state effects, which show different v_2 and v_3 orderings of smaller magnitude <cit.>. Recently, the STAR collaboration has also measured the flow coefficients of these three small systems using non-flow subtraction methods based on the template fit and the Fourier expansion fit, respectively <cit.>. Compared to the PHENIX results, the STAR measurement gave much larger v_3(p_T) in p+Au and d+Au systems with the TPC detector at mid-rapidity. It did not reproduce the v_3 ordering as claimed by PHENIX, but observed v_3^p+Au≈ v_3^d+Au≈ v_3^^3He+Au. Such a large discrepancy of v_3(p_T) measurements may originate from different non-flow subtraction or different detector pseudorapidity acceptance due to the longitudinal decorrelation <cit.>. On the theoretical side, the flow signals of these small collision systems have been studied by 2+1-d / 3+1-d hydrodynamics with different initial conditions <cit.>. The calculations with MC-Glauber initial conditions can well describe the v_2(p_T) and v_3(p_T) data from PHENIX, but cannot explain the flow discrepancy between STAR and PHENIX, even with the longitudinal decorrelations in the 3+1-d simulations <cit.>. Using the initial conditions with sub-nucleon fluctuations, hydrodynamic simulations produce similar v_3 values in p/d/^3He+Au collisions, which is obviously different from the PHENIX results and the calculations with nucleon fluctuations. However, these hydrodynamic calculations cannot simultaneously describe the STAR v_2 and v_3 data and the initial sub-nucleon structure is still not well constrained. In this paper, we study the flow observables in p/d/^3He + Au collisions at √(s_NN) = 200 GeV, using 2+1-d   model with   initial conditions including both nucleon and sub-nucleon flucuations. 
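To make the eccentricity orderings quoted above concrete, the short Python sketch below (not part of the original analysis; the two-hot-spot geometry, smearing width, and sample size are purely illustrative) evaluates the commonly used r^n-weighted participant eccentricities ε_2 and ε_3 for a toy transverse density built from Gaussian-smeared sources — the quantities that hydrodynamic evolution converts into v_2 and v_3.

import numpy as np

def eccentricity(x, y, n, w=None):
    # Participant eccentricity eps_n of a (weighted) 2-D point set,
    # eps_n = |sum w r^n exp(i n phi)| / sum(w r^n), evaluated about the centroid.
    w = np.ones_like(x) if w is None else w
    xc, yc = np.average(x, weights=w), np.average(y, weights=w)
    dx, dy = x - xc, y - yc
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)
    num = np.abs(np.sum(w * r**n * np.exp(1j * n * phi)))
    den = np.sum(w * r**n)
    return num / den

# Toy event: two Gaussian-smeared "participant" hot spots (a deuteron-like geometry)
rng = np.random.default_rng(1)
spots = np.array([[-1.0, 0.0], [1.0, 0.0]])                    # centers in fm (illustrative)
pts = np.vstack([rng.normal(c, 0.5, size=(200, 2)) for c in spots])
print("eps_2 =", eccentricity(pts[:, 0], pts[:, 1], 2))
print("eps_3 =", eccentricity(pts[:, 0], pts[:, 1], 3))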
We tune the model parameters to fit the elliptic and triangular flow data from PHENIX and STAR, respectively, and calculate the 4-particle cumulants c_2{4} in p+Au and d+Au systems. We also evaluate the validity of the hydrodynamics with Knudsen numbers and draw the hydrodynamically predicted v_3/v_2(p_T) band in p/d/^3He + Au collisions within the relatively reliable region of 2+1-d hydrodynamics. § MODEL SETUP In this paper, we implement iEBE-VISHNU to study the flow observables in p/d/^3He+Au collisions at √(s_NN)=200 GeV. iEBE-VISHNU <cit.> is an event-by-event hybrid model that combines 2+1-d viscous hydrodynamics VISH2+1 <cit.> for the QGP evolution, a particle sampler iSS <cit.> for the particlization at a switching temperature, and the hadron cascade model UrQMD <cit.> for the subsequent hadronic evolution. Following <cit.>, we use the HotQCD+HRG equation of state (EoS) <cit.> as input, and set temperature-dependent specific shear viscosity η/s and bulk viscosity ζ/s <cit.>. We implement a parameterized initial condition model to generate the initial entropy density for the hydrodynamic simulations starting at τ_0 <cit.>. In the case without sub-nucleon structure, the fluctuations come from the distribution of the nucleon center positions. For each nucleon, its density distribution is parameterized as a Gaussian function with nucleon Gaussian width ω: ρ_nucleon(x) = 1/(2πω^2)^3/2 exp(-x^2/2ω^2). On the other hand, when considering sub-nucleon fluctuations, the nucleon is assumed to be composed of independent constituents, and the nucleon density is written as: ρ_nucleon(x) = 1/n_c ∑_i=1^n_c ρ_constit(x - x_i), where n_c is the constituent number, x_i is the position of the i-th constituent, and the constituent density ρ_constit is defined as ρ_constit(x) = 1/(2π v^2)^3/2 exp(-x^2/2v^2). The constituent Gaussian width v relates to the nucleon width ω through the standard deviation r of the constituent positions: ω = √(r^2 + v^2), and in this case ω is defined as the root mean square radius of a nucleon. After obtaining the nucleon density distribution, the fluctuated thickness of the colliding nucleons is written as: T̃_A,B(x) ≡ ∫ dz 1/n_c ∑_i=1^n_c γ_i ρ_constit(x - x_i ± b/2). Here, besides the nucleon/constituent positions, the initial fluctuation is controlled mainly by the gamma random variables γ_i, which are parameterized by the shape factor k. The resulting standard deviation of the initial fluctuation is denoted as σ_fluct = 1/√(k n_c). With the fluctuating thicknesses T̃_A,B, the initial entropy density at mid-rapidity is calculated as a generalized mean with a dimensionless parameter p: dS/d^2x_⊥dη|_η=0 ∝ [(T̃_A^p + T̃_B^p)/2]^1/p, optionally followed by a free-streaming stage before the hydrodynamic evolution. Tab. <ref> lists the model parameters used in the calculations for p+Au, d+Au and ^3He+Au collisions at √(s_NN) = 200 GeV. Para-I is tuned to fit the v_2(p_T) and v_3(p_T) data from STAR; it includes the initial nucleon sub-structure and a small constituent width to enlarge the fluctuations. Para-II and Para-III fit the published PHENIX data and are tuned without and with sub-nucleon fluctuations, respectively. Para-III with sub-nucleon structure is similar to the Bayesian analyses in p+Pb and Pb+Pb collisions <cit.>, except that the parameters k and ν are tuned to fit the charged-particle multiplicity distribution in d+Au collisions at √(s_NN) = 200 GeV. Note that reproducing the STAR v_2 and v_3 at the same time requires a large shear viscosity in Para-I, which lies outside the usual parameter range of hydrodynamic approaches. This will be further discussed in section IV. Fig.
<ref> plots the averaged eccentricities ε_2,3, calculated from the initial conditions with the parameter sets in Tab. <ref>. Compared to the early results of the MC-Glauber model <cit.>, Fig. <ref> shows a weaker ordering of _n for p+Au, d+Au and ^3He+Au collisions. Such weaker ordering in Para-I and Para-III can be explained by sub-nucleonic fluctuations. Due to the imprinted multiplicity fluctuations, Para-II without subnucleonic fluctuations also shows a weaker ordering of _n for the initial conditions <cit.>. § RESULTS AND DISCUSSIONS In this section, we show the flow harmonic results calculated by iEBE-VISHNU with the parameters listed in Table  <ref>. All parameter sets in Table <ref> are well tuned to reproduce the multiplicity distribution in d+Au collisions. We also tune Para-I to fit v_2(p_T) and v_3(p_T) from STAR, and Para-II and Para-III to fit v_2(p_T) and v_3(p_T) from PHENIX. Figure <ref> shows the differential flow harmonics v_2(p_T) and v_3(p_T) of all charged hadrons in 0-5% p+Au, d+Au and ^3He+Au collisions [In our hydrodynamic simulation, we follow the PHENIX centrality definition, and calculate the flow harmonics in p/d/^3He+Au collisions with 0-5% centrality cut. Note that the STAR measurements focus on the Ultra-Central(UC) p+Au collisions with 0-2% centrality and the most central d/^3He+Au collisions with 0-10% centrality. As argued by the STAR paper <cit.>, the orderings of flow harmonics are insensitive to the centrality definition. This is also confirmed by our hydrodynamic simulations. ]. For the triangular flow v_3(p_T), the STAR and PHENIX measurements are not consistent, particularly for p+Au and d+Au collisions, where the STAR data are larger than the PHENIX data by a factor of 3. Such apparent discrepancies may be due to the different rapidity region and non-flow substraction methods used by these two collaborations <cit.>. Therefore, we fit the STAR and PHENIX data, respectively. Following <cit.>, we use the two-subevent cumulant method to calculate the two particle correlations with a kinematic cut 0.2<p_T<2.0 GeV/c and |η|<1.0 with a gap |Δη|>1.0. With the Para-I, iEBE-VISHNU nicely fits the v_2(p_T) and v_3(p_T) data measured by STAR. We find that sub-nucleon fluctuations are essential to produce larger v_3, which are insensitive to the collision systems variation. Meanwhile a large shear viscosity is also required to simultaneously fit the v_2 and v_3 data from STAR. The validity of hydrodynamic simulations with such large shear viscosities will be discussed in the next section. Note that the effects of sub-nucleon fluctuations on the flow of small systems also have been studied and discussed in the earlier paper <cit.>. For the PHENIX measurements, iEBE-VISHNU simulations with nucleon fluctuations in the initial state (Para-II) were able to roughly reproduce the v_2 and v_3 data within the statistical error bars. Meanwhile, one can achieve similar results with sub-nucleonic initial conditions with a free streaming (Para-III). Here, Para-III is the one obtained from the Bayesian analysis for Pb+Pb and p+Pb collisions at √(s_NN) = 5.02 TeV <cit.> except for the fluctuation parameter k and the constituent width ν, which are tuned to fit the multiplicity fluctuation in top RHIC energy. We conclude that, for the PHENIX measurements, iEBE-VISHNU simulations with both nucleonic and sub-nucleonic initial state fluctuations can fit the v_2 and v_3 hierarchies in p/d/^3He+Au collisions. Fig. 
<ref> plots the 4-particle cumulant c_2{4} as a function of dN_ ch/dη for p+Au and d+Au collisions at √(s_NN) = 200 GeV. Panel (a) shows the iEBE-VISHNU predictions with the Para-I, which is tuned to fit the STAR v_2(p_T) and v_3(p_T) data and generates positive c_2{4} for p+Au collisions and negative c_2{4} for d+Au collisions in the high multiplicity events, consistent with the experimental measurements qualitatively. In fact, large event-by-event fluctuation in Para-I leads to positive c_2{4} in p+Au collisions. While for the d+Au collisions, the intrinsic geometry of the deutron gives a dominant contribution to the initial eccentricities, leading to a negative c_2{4} in the high multiplicity events. Panels (b) and (c) show the iEBE-VISHNU results, calculated with the Para-II and Para-III fitting the PHENIX v_2(p_T) and v_3(p_T) data. For p+Au collisions, c_2{4} is always close to zero for both parameter sets over the whole range of multiplicities, due to small flow fluctuations. For d+Au collisions, c_2{4} are always negative due to the intrinsic geometry of the deutron. § APPLICABILITY OF HYDRODYNAMIC SIMULATION We have noticed that in the above calculation, the specific shear viscosity in some parameter sets tuned to fit the v_2(p_T) and v_3(p_T) data become quite large. In order to evaluate the validity of the hydrodynamic simulations in small systems, we calculate the Knudsen number K_n defined as <cit.>: K_n = τ_πθ=5ηθ/sT, where τ_π is the relaxation time associated with the microscopic time scale and θ = ∂_μ u^μ is the expansion rate associated with the macroscopic hydrodynamic time scale. K_n → 0 is the perfect fluid limit where the local equilibrium is maintained during the hydrodynamic evolution. K_n →∞ is the other limit, which corresponds to the case that the fluid system breaks up into free-streaming particles. It is generally suggested that the hydrodynamics is relatively reliable with K_n < 1 <cit.> [Besides the expansion rate θ defined in Eq. <ref>, the macroscopic scale can also be estimated from other macroscopic gradients <cit.>. For the propose of illustration, we use Eq. <ref> and set the criterion ⟨ K_n ⟩ > 1 for the failure of hydrodynamics, which is associated with the fact that the macroscopic expansion rate is larger than the microscopic relaxation rate.]. Fig. <ref> shows the time evolution of the averaged Knudsen number ⟨ K_n ⟩ in the event-by-event hydrodynamic simulations for p/d/^3He+Au collisions at 0-5% centrality. The average is taken within the freeze-out hypersurface with the local energy density as the weight for each time step. For the Para-I with sub-nucleon fluctuations tuned to fit the STAR v_2(p_T) and v_3(p_T) data, we observed that the averaged Knudsen number ⟨ K_n ⟩ is always larger than 1 throughout the whole evolution for different collision systems. Obviously, such a large Knudsen number indicates that the hydrodynamic simulations are beyond their applicable limit, which is mainly due to the large specific shear viscosity η/s ∼ 0.28 and the large initial gradients introduced by fluctuations to fit the v_3 data. In contrast, the average Knudsen number for Para-II is about or less than 1 with a smaller specific shear viscosity η/s ∼ 0.09. For Para-III, the Knudsen number lies between those of Para-I and Para-II, which is large in the early time due to the free streaming evolution before thermalization, but drops below 1 after certain time of hydrodynamic evolution. In short, Fig. 
<ref> suggests that hydrodynamic simulations with Para-I that tuned to fit the STAR data are beyond the limit due to the large Knudsen number. To further investigate whether iEBE-VISHNU could fit the STAR flow data within its hydrodynamic limit, we explore the model parameter space as far as possible but with the constraint of the Knudsen number ⟨ K_n ⟩<1 at the end of the evolution. Our test parameter sets correspond such initial conditions with/without nucleon substructure and with/without the free-streaming effect. The range of the free parameters is listed in Tab. <ref>. With n_c=1, the initial conditions include only nucleon fluctuations and with n_c=2-9, the initial conditions include sub-nucleon fluctuations. In our investigation, the effective shear viscosity η/s and the shape parameter k are fixed to reproduce the v_2(p_T) data in 0-5% ^3He+Au collisions and the multiplicity distribution of d+Au collisions with neglecting the bulk viscosity. Fig. <ref> shows the p_T dependent v_3(p_T)/v_2(p_T) ratio in 0-5% p/d/^3He+Au collisions, where the theoretical band is calculated by iEBE-VISHNU with the parameter range listed in Tab. <ref>, together with the constraint ⟨ K_n ⟩<1. The experimental data are taken from STAR with the statistical uncertainty of v_2(p_T) and v_3(p_T) using the error propagation formula. As shown in panels (b) and (c), the flow harmonic ratio v_3(p_T)/v_2(p_T) in d/^3He+Au collisions can be reproduced by iEBE-VISHNU within the allowed parameter range ⟨ K_n ⟩<1. While panel (a) shows that the upper limit of the v_3(p_T)/v_2(p_T) ratio in p+Au collisions, calculated from iEBE-VISHNU simulations, is clearly below the experimental data. These results indicate that the current hybrid model calculations are not able to simultaneously describe the STAR flow data in the three small collision systems with reasonable parameter range within the hydrodynamic limit. § SUMMARY In this paper, we implemented iEBE-VISHNU with  initial condition to study the collective flow in p/d/^3He+Au collisions at √(s_NN) = 200 GeV. For the PHENIX measurements, v_2(p_T) and v_3(p_T) data show obvious hierarchies for different collision systems, which can be reproduced by our hybrid model simulations with nucleon/sub-nucleon fluctuating initial conditions. The related simulations also reproduce a negative 4-particle cumulant c_2{4} for d+Au collisions, but give an almost zero c_2{4} for p+Au collisions, which can not describe the positive c_2{4} measured by PHENIX. For the STAR measurements, the magnitude of v_3 are insensitive to the collision systems, which is obviously different from the PHENIX ones. iEBE-VISHNU simulations with sub-nucleon fluctuating initial conditions can fit these v_2 and v_3 data, which can also roughly reproduce the positive and negative c_2{4} measured in the high multiplicity p+Au and d+Au collisions, respectively. However, due to the large shear viscosity tuned to fit the STAR data, the hydrodynamic simulations has already beyond its limits with the average Knudsen number ⟨ K_n ⟩ obviously larger than one for these three collision systems. We also explore the model parameter space as far as possible with the constraint of the Knudsen number ⟨ K_n ⟩<1, and found that   with the   initial condition with/without sub-nucleon fluctuations always underestimate the STAR v_3 / v_2 ratio for p+Au collisions. Our calculations demonstrate that for a meaningful evaluation of the collective flow in the small systems, one should also evaluate the validity of hydrodynamics. 
As the collision systems become smaller, the isotropization and thermalization conditions become harder and harder to reach. Besides applying full 3+1-d hydrodynamic simulations, improved hydrodynamic theories such as anisotropic hydrodynamics <cit.> should be applied to the small systems that may not reach equilibrium in the early stage. It was also found that fragmentation and mini-jet effects become more important in small collision systems <cit.>, so a comprehensive model that includes core-corona effects <cit.> is also required to further evaluate the flow signals in p/d/^3He+Au collisions at √(s_NN) = 200 GeV. § ACKNOWLEDGEMENTS We thank Wenbin Zhao for helpful discussions. This work was supported in part by the NSFC under grant No. 12247107, No. 12075007 and No. 12147173 (B.F.). We also acknowledge the extensive computing resources provided by the Supercomputing Center of the Chinese Academy of Sciences (SCCAS), Tianhe-1A from the National Supercomputing Center in Tianjin, China, and the High-performance Computing Platform of Peking University.
http://arxiv.org/abs/2307.00258v1
20230701073937
ASASSN-22ak: La Belle au bois dormant in a hydrogen-depleted dwarf nova?
[ "Taichi Kato", "Franz-Josef Hambsch", "Berto Monard", "Rod Stubbings" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.HE" ]
affil:Kyoto Department of Astronomy, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan tkato@kusastro.kyoto-u.ac.jp affil:GEOS Groupe Européen d'Observations Stellaires (GEOS), 23 Parc de Levesville, 28300 Bailleau l'Evêque, France hambsch@telenet.be affil:BAV Bundesdeutsche Arbeitsgemeinschaft für Veränderliche Sterne (BAV), Munsterdamm 90, 12169 Berlin, Germany affil:Hambsch Vereniging Voor Sterrenkunde (VVS), Oostmeers 122 C, 8000 Brugge, Belgium affil:Monard Bronberg Observatory, Center for Backyard Astrophysics Pretoria, PO Box 11426, Tiegerpoort 0056, South Africa astroberto13m@gmail.com affil:Monard2 Kleinkaroo Observatory, Center for Backyard Astrophysics Kleinkaroo, Sint Helena 1B, PO Box 281, Calitzdorp 6660, South Africa affil:Stubbings Tetoora Observatory, 2643 Warragul-Korumburra Road, Tetoora Road, Victoria 3821, Australia stubbo@dcsi.net.au abst.inc § INTRODUCTION In the famous fairy tale La belle au bois dormant (the Beauty in the Sleeping Forest or the Sleeping Beauty), a princess was cursed by an evil fairy to sleep for a hundred years before being awakened by a prince <cit.>. This tale produced one of the world most famous ballets composed by Pyotr Tchaikovsky <cit.>[ The reference refers to the earliest publication of this work in the form of a score of Aleksandr Ziloti's arrangement for solo piano according to Tchaikovsky's letter (<https://en.tchaikovsky-research.net/pages/The_Sleeping_Beauty>). The premiere at the Mariinsky Theatre was performed in 1890. ]. The similar things appear to have happened in the world of dwarf novae. The giant outburst and subsequent superoutbursts in V3101 Cyg = TCP J21040470+4631129 <cit.> could be a signature of long “dormant” phase before the initial outburst. MASTER OT J030227.28+191754.5 <cit.> might be another such example. Here, we report on an instance of ASASSN-22ak, which may be the first similar case in a cataclysmic variable (CV) with an evolved core in the secondary. § ASASSN-22AK ASASSN-22ak was discovered as a dwarf nova by the All-Sky Automated Survey for Supernovae (ASAS-SN: <cit.>) at g=15.0 on 2022 January 7.[ <https://www.astronomy.ohio-state.edu/asassn/transients.html>. ] The object further brightened and reached the peak of g=13.2 on 2022 January 8. The object apparently faded rapidly after this (there was a 6-d gap in observation in ASAS-SN). When the object was observed again on 2022 January 16 by Gaia (=Gaia22afw)[ <http://gsaweb.ast.cam.ac.uk/alerts/alert/Gaia22afw/>. ], the object faded to G=15.16. This outburst was announced in VSNET <cit.> by Denis Denisenko (vsnet-alert 26518)[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/26518>. ]. According to this, this outburst was also detected by MASTER-OAFA <cit.> at 13.8 mag on 2022 January 9. The object underwent another outburst at 15.4 mag on 2022 July 20 detected by one of the authors (RS) (vsnet-alert 26875)[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/26875>. ] and 16.2 mag on 2022 December 18 (by RS, vsnet-alert 27223)[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27223>. ]. After these two outbursts, the unusual light curve of this object received attention (vsnet-alert 27224).[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27224>. ] The ASAS-SN light curve suggested that all outbursts were superoutbursts. 
Although the similarity to V3101 Cyg and the possibility of an AM CVn star, as judged from the short recurrence time of long outbursts, were discussed, the nature of the object remained elusive. One of the authors (BM) obtained a single-night run during the 2022 January outburst and a possible period of 0.044 d was suggested (vsnet-alert 27225).[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27225>. ] This period, however, did not comfortably fit what is expected for a WZ Sge star and the reality of the period remained to be confirmed. During the 2022 December outburst, one of the authors (FJH) obtained time-resolved photometry, which also suggested a period of 0.0412 d (vsnet-alert 27243).[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27243>. ] This suggestion of a period, however, remained unconfirmed since the object faded soon after these observations and the amplitudes of the variations were small. The sudden fading of 1.8 mag (corresponding to more than 2.0 mag d^-1) on 2022 December 29 was sufficient to convince us that the 0.0412 d, but not its double, is the true period (vsnet-alert 27258).[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27258>. ] These outbursts, however, left us important lessons and we started observations following the detection of another outburst at 15.2 mag on 2023 April 29 by RS. FJH obtained time-resolved photometry. The sampling rate, however, was initially insufficient to detect a period. After increasing the sampling rate on 2023 May 13, the detected period was confirmed to be the same as in the previous outbursts. The log of observations is summarized in table <ref>. obslog.inc table-1 obslog2.inc § LONG-TERM BEHAVIOR The long-term light curve of ASASSN-22ak using the survey data and visual observations by RS is shown in figure <ref>. During Gaia observations between 2015 and 2021, the object very slowly faded. This trend was different from V3101 Cyg before the first outburst <cit.>. The four outbursts starting from 2022 January are seen in the right part of the figure. The quiescent brightness between these outbursts were brighter than Gaia observations before the first outburst. The enlarged light curves of these outbursts are given in figure <ref>. Near the termination of the third and fourth outbursts, there were a short (less than 1 d in the third and 2 d in the fourth) dip and rebrightening. The presence of such a short dip indicates that the long outbursts were indeed superoutbursts of a system with a short orbital period, not long outbursts seen in SS Cyg stars. We should note that the post-outburst observations after the 2022 December outburst (third panel in figure <ref>) were biased brighter since aperture photometry could measure the object only on limited number of frames. The true magnitudes should be fainter (see the fourth panel in figure <ref>, which were observed under more ideal conditions). § SUPERHUMPS We analyzed the best observed 2023 outburst. We used locally-weighted polynomial regression (LOWESS: <cit.>) to remove long-term trends. The periods were determined using the phase dispersion minimization (PDM: <cit.>) method, whose errors were estimated by the methods of <cit.>. The result before the dip (2023 June 8, BJD 2460103), after excluding the scattered data on 2023 May 17 (BJD 2460081–2460082) is shown in figure <ref>. The period obtained by this analysis is 0.042876(3) d. The variation of the profiles in 2023 is shown in figure <ref>. 
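For readers who wish to reproduce this type of period analysis, the following minimal Python sketch illustrates the two steps described above: removing the long-term trend and scanning for the period that minimizes the phase-dispersion statistic. The LOWESS call from statsmodels is used here only as a generic stand-in for the smoothing actually employed, the pdm_theta helper is a simplified implementation written for this illustration, and the synthetic time series, bin count, and trial-period grid are illustrative rather than the actual data; the period errors quoted in the text come from the cited methods and are not reproduced by this sketch.

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def pdm_theta(t, m, period, nbins=10):
    # PDM theta statistic: pooled within-phase-bin variance / total variance.
    phase = (t / period) % 1.0
    total_var = np.var(m, ddof=1)
    s2, ndof = 0.0, 0
    for k in range(nbins):
        sel = (phase >= k / nbins) & (phase < (k + 1) / nbins)
        if sel.sum() > 1:
            s2 += np.var(m[sel], ddof=1) * (sel.sum() - 1)
            ndof += sel.sum() - 1
    return (s2 / ndof) / total_var

# Placeholder time series: BJD-like times and magnitudes with a 0.0429-d signal
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 20.0, 2000))
mag = 0.03 * np.sin(2 * np.pi * t / 0.0429) + rng.normal(0.0, 0.02, t.size)

trend = lowess(mag, t, frac=0.1, return_sorted=False)   # long-term trend removal
resid = mag - trend

periods = np.linspace(0.038, 0.048, 2000)                # trial periods in days
theta = np.array([pdm_theta(t, resid, p) for p in periods])
print("best period:", periods[np.argmin(theta)], "d")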
The amplitude of the variations increased on BJD 2460088 (2023 June 23), which corresponded to temporary brightening from the fading trend (see figure <ref>). Based on the amplitude variation correlated with the overall trend similar to SU UMa stars <cit.> and the gradual shift in the phase of peaks, we identified these variations to be superhumps, not orbital variations. An analysis of less observed outburst in 2022 December during the plateau phase is shown in figure <ref>. Note that these 7-night observations recorded only the terminal portion of the outburst and the statistics were not ideal. The phase plot assumes a period of 0.042876 d, which is allowed as one of the aliases as seen in the PDM analysis. § DISCUSSION §.§ Comparison with hydrogen-rich WZ Sge stars As we have seen, there was no evidence of an outburst in ASASSN-22ak before 2022 (at least for seven years based on ASAS-SN and Gaia observations). The object suddenly became active and repeated superoutbursts with cycle lengths of 132–188 d. No very similar object has been known. V3101 Cyg is somewhat analogous in that it repeated four superoutbursts (up to the time of the writing) following the 2019 large outburst. The case of V3101 Cyg is different in that short rebrightenings were also observed <cit.>. The initial (2019) outburst of V3101 Cyg showed a relatively rapidly fading phase, which is the viscous decay phase characteristic to WZ Sge stars <cit.>. The initial (2022 January) outburst of ASASSN-22ak had a similar feature, reaching ∼2 mag brighter than subsequent outbursts and which apparently faded rapidly. The second and third outbursts of ASASSN-22ak had similar, but less distinct, features. The same feature was almost lacking in the fourth outburst (figure <ref>). These features suggest that the first outburst of ASASSN-22ak was a strong WZ Sge-type one and that the second and third ones were weaker WZ Sge-type ones, although early superhumps <cit.> were not directly observed during any of these outbursts. The superhump period of 0.042876 d should be close to the orbital period (see also discussions later). This period is rather too short for a hydrogen-rich CV. If ASASSN-22ak is a hydrogen-rich CV, the orbital period should break the record of 0.0462583 d in OV Boo <cit.>, which is considered to be a population II CV. We consider the possibility of ASASSN-22ak being a population II CV less likely since the transverse velocity of ASASSN-22ak is 20% of OV Boo <cit.> (but still with a 28% 1-σ error in the Gaia parallax) and because of the difference in the light curve (lack of short rebrightenings, long durations of superoutbursts compared to supercycles) from the hydrogen-rich V3101 Cyg. ASASSN-22ak would then be more likely a hydrogen-depleted CV. There are two possibilities. It could be either an EI Psc star (CV with an evolved core in the secondary but still with considerable surface hydrogen) or an AM CVn star in which the surface hydrogen of the secondary is almost lost. We consider these possibilities in more detail. §.§ Comparison with EI Psc stars in general EI Psc has an orbital period of 0.0445671(2) d <cit.> very similar to ASASSN-22ak. EI Psc, however, has a hot, luminous secondary <cit.>, whose quiescent color (Gaia GP-RP=+0.88) is much redder than in ASASSN-22ak (GP-RP=+0.16). Another EI Psc-type object V418 Ser [superhump period 0.04467(1) d] has GP-RP=+0.52 and this object shows outbursts similar to hydrogen-rich CVs <cit.>. The properties of V418 Ser look different from those of ASASSN-22ak. 
CRTS J174033.4+414756 (orbital period 0.045048 d) has GP-RP=+0.43 and the outburst behavior <cit.> appears moderately similar to ASASSN-22ak. CRTS J174033.4+414756 indeed showed a bright WZ Sge-type outburst in 2023 February after 5-yr quiescence (vsnet-alert 27373).[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27373>. ] Not sufficient time has passed since this outburst and it is unknown whether CRTS J174033.4+414756 behaves like ASASSN-22ak. The known differences between CRTS J174033.4+414756 and ASASSN-22ak are that the former shows superhumps with much larger amplitudes, which suggests a higher mass ratio [q=0.077(5) was obtained by <cit.>], and the redder color in quiescence. Although CRTS J174033.4+414756 would be a good candidate for an already known object having properties similar to ASASSN-22ak, particularly with a bright superoutburst after 5-yr quiescence, the secondary in ASASSN-22ak appears to be fainter and less massive. §.§ Comparison with CRTS J112253.3-111037 The object most similar to ASASSN-22ak appears to be CRTS J112253.3-111037 <cit.>. This object has an orbital period 0.04530 d and a very small fractional superhump excess ϵ≡ P_ SH/P_ orb-1, where P_ SH and P_ orb represent superhump and orbital periods, respectively. The secondary in CRTS J112253.3-111037 was undetected in contrast to other EI Psc stars. The Gaia color GP-RP=+0.10 is also very similar to that of ASASSN-22ak. Although P_ SH was reported in <cit.>, this value is vital to this discussion and we re-analyzed the data in <cit.>, in which the modern de-trending method was not yet employed. The resultant period was 0.045409(9) d (figure <ref>). This value corresponds to ϵ=0.0024(2). In the treatment by <cit.>, old ϵ-q calibrations, which did not consider the pressure effect, were used and they obtained an exceptionally small q. Using the modern calibration in table 4 of <cit.> considering the pressure effect (but calibrated using hydrogen-rich systems), this ϵ corresponds to q=0.043(1) assuming stage B superhumps [for superhump stages, see <cit.>]. There remains a possibility that the observed superhumps were stage C ones since observations only recorded the final part of the outburst. The periods of stage B superhumps are generally longer by 0.5% than those of stage C superhumps in hydrogen-rich systems <cit.>. If stage B superhumps were missed and we only observed stage C superhumps, this q value would be an underestimate. By artificially increasing the superhump period by 0.5%, the resultant q becomes 0.058(1), which should be regarded as the upper limit. In actual WZ Sge stars, stage C tends to be missing <cit.>, and we consider that the first value [q=0.043(1)] is expected to be closer to the real one. CRTS J112253.3-111037 is also similar to ASASSN-22ak in terms of the low frequency of outbursts <cit.>. There was no information how the 2010 outburst in CRTS J112253.3-111037 started due to an ∼50 d observational gap in the CRTS data <cit.> and it is unknown whether CRTS J112253.3-111037 showed a sharp peak or a viscous decay phase. No repeated superoutbursts like ASASSN-22ak, however, appear to have been present since then. It might be interesting to note that ATLAS and ASAS-SN data show that CRTS J112253.3-111037 showed brightening with a broad peak reaching g=17.8 around 2022 June 6 (BJD 2459737). The entire event lasted ∼15 d and this may be similar to the enhanced quiescent activity in the AM CVn star NSV 1440 <cit.>, possibly signifying the similarity to AM CVn stars. 
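The fractional superhump excess quoted above follows directly from the two periods; the short worked example below (simple arithmetic with standard error propagation, assuming the uncertainty is dominated by the superhump period) reproduces ϵ = 0.0024(2) and the 0.5%-longer-period case. The conversion from ϵ to q relies on the calibration cited in the text and is not re-derived here.

# Fractional superhump excess eps = P_SH / P_orb - 1 for CRTS J112253.3-111037.
P_sh, dP_sh = 0.045409, 0.000009        # superhump period [d] and its error
P_orb = 0.04530                          # orbital period [d]

eps = P_sh / P_orb - 1.0
deps = dP_sh / P_orb                     # error dominated by the superhump period
print(f"eps = {eps:.4f} +/- {deps:.4f}") # ~0.0024 +/- 0.0002

# Upper-limit case discussed in the text: superhump period ~0.5% longer
eps_up = 1.005 * P_sh / P_orb - 1.0
print(f"eps(upper) = {eps_up:.4f}")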
The small amplitude of superhumps (0.05 mag) in CRTS J112253.3-111037 is also similar to ASASSN-22ak (0.05 mag), implying a similarly low q in ASASSN-22ak. <cit.> suggested a possibility that CRTS J112253.3-111037 had already evolved past its period minimum based on <cit.> and that its secondary can be semidegenerate. Although this conclusion was apparently partly based on q smaller than the one obtained in the present paper, we agree that both ASASSN-22ak and CRTS J112253.3-111037 are evolving close to AM CVn stars since the properties of these objects are very different from other EI Psc objects with similar orbital periods (subsection <ref>). ASASSN-22ak may have already lost hydrogen and it may even be an AM CVn star. If this is the case, ASASSN-22ak breaks the longest record of orbital periods in AM CVn stars showing a genuine superoutburst [see also the discussion in <cit.>; superhump period of 0.0404–0.0415 d in ASASSN-21au = ZTF20acyxwzf (vsnet-alert 25369;[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/25369>. ] <cit.>)]. Another AM CVn star with a long orbital period [PNV J06245297+0208207 in 2023 <cit.>: superhump period 0.035185(8) d (vsnet-alert 27353[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27353>. ])] showed a superoutburst very similar to ASASSN-21au. This morphology of superoutbursts appears to be common to AM CVn stars with long orbital periods and the possibility of ASASSN-22ak as being an AM CVn star might be less likely. We leave this question open since the outburst properties were so unusual in ASASSN-22ak. In any case, spectroscopy of ASASSN-22ak to determine the hydrogen and helium content and to determine the orbital period is very much desirable. The addition of ASASSN-22ak seems to strengthen the idea that cataclysmic variables could be the dominant progenitors of AM CVn binaries <cit.>. The EI Psc-type objects treated in this paper for comparisons with ASASSN-22ak are summarized in table <ref>. The mean superhump amplitude for EI Psc was obtained from the data in <cit.>. The superhump amplitudes for V418 Ser and CRTS J174033.4+414756 were from <cit.> and <cit.>, respectively. Although CRTS J174033.4+414756 showed the initial phase of large superhump amplitudes <cit.>, no such a phase was recorded in ASASSN-22ak. The q values from ϵ assuming stage B were obtained by the method in <cit.>. §.§ Pre-outburst dormancy and repeated superoutbursts Repeated long superoutbursts with short recurrence times is the unique feature of ASASSN-22ak. In the case of (hydrogen-rich) V3101 Cyg, some of post-superoutburst rebrightenings may have been caused by the matter in the disk left after the main superoutburst <cit.>. Repeated superoutbursts appear to be more easily explained if the mass-transfer rate increased after the initial outburst <cit.>. This increase in the mass transfer may either have been caused by irradiation of the secondary by the initial outburst <cit.>, or it could have been that the quiescent viscosity of the disk before the initial outburst was simply extremely low to accumulate a large amount of mass in the disk and that the mass-transfer rate and the quiescent viscosity is simply returning to the normal value of this object after the initial outburst. In the case of ASASSN-22ak, the initial outburst was not as strong as in V3101 Cyg, although the peak was bright, and the mechanism may be different from the case of V3101 Cyg. 
In ASASSN-22ak, q would be smaller than in V3101 Cyg (as inferred from the smaller amplitude of superhumps and from the analogy with CRTS J112253.3-111037) and the weaker tidal effect would make it more difficult to maintain superoutbursts in contrast to V3101 Cyg. Although there have been a suggestion that smaller q can lead to premature quenching of superoutbursts <cit.>, there is no established theory when superoutbursts end. Although this premature quenching of superoutbursts might explain the repeated superoutbursts with relatively short intervals, the lack of post-superoutburst rebrightenings in ASASSN-22ak might be problematic. It may be that the hydrogen depletion in the disk of ASASSN-22ak is not as strong as AM CVn stars and long superoutbursts are easier to maintain than in almost pure helium disks. A combination of effects of all these circumstances, unusual for ordinary CVs, should be a challenging target for theorists working with the disk-instability model. The pre-outburst dormancy might be easier to explain in ASASSN-22ak. In contrast to V3101 Cyg, which is expected to have a fully convective secondary, ASASSN-22ak has an evolved core and a magnetic dynamo can still work <cit.> and is probably necessary to form the observed AM CVn stars within reasonable time. With such a dynamo, the instantaneous mass-transfer rate can be different from the secular average, as seen in the spread of absolute magnitudes in CVs above the period gap <cit.> and the presence of VY Scl stars. There is also a possibility that the quiescent viscosity of the disk before the initial outburst was simply very low and the viscosity increased after the outburst as proposed by <cit.> for hydrogen-rich WZ Sge stars. This explanation, however, might face a difficulty to realize a very quiet, low-viscosity disk when the secondary has a seed magnetic field, which may increase the quiescent viscosity of the disk via the magneto-rotational instability (cf. <cit.>; but see also <cit.>). High and low states in polars (AM Her stars: <cit.>) may provide additional insight. EF Eri has a brown-dwarf secondary <cit.> and a strong magnetic activity cycle as in CVs above the period gap is not expected. This object showed (and still showing) a long-lasting high state (just like “awakening”) starting from 2022 December (vsnet-alert 27205).[ <http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27205>. ] Since polars do not have an accretion disk, storage of mass in the disk before the active (high) state, as in WZ Sge stars, is impossible. There could be a reservoir of additional angular momentum other than the disk, and this might also explain the dormany/waking-up phenomena in dwarf novae. § ACKNOWLEDGEMENTS This work was supported by JSPS KAKENHI Grant Number 21K03616. The authors are grateful to the ASAS-SN, ATLAS and Gaia teams for making their data available to the public. We are also grateful to Naoto Kojiguchi for helping downloading the ZTF and Gaia data and Yusuke Tampo, Junpei Ito and Katsuki Muraoka for converting the data reported to the VSNET Collaboration. This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. The ATLAS project is primarily funded to search for near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. 
This work was partially funded by Kepler/K2 grant J1944/80NSSC19K0112 and HST GO-15889, and STFC grants ST/T000198/1 and ST/S006109/1. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen's University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile. We acknowledge ESA Gaia, DPAC and the Photometric Science Alerts Team (http://gsaweb.ast.cam.ac.uk/alerts). § LIST OF OBJECTS IN THIS PAPER § REFERENCES We provide two forms of the references section (for ADS and as published) so that the references can be easily incorporated into ADS.
http://arxiv.org/abs/2307.01684v1
20230704123001
Serving Graph Neural Networks With Distributed Fog Servers For Smart IoT Services
[ "Liekang Zeng", "Xu Chen", "Peng Huang", "Ke Luo", "Xiaoxi Zhang", "Zhi Zhou" ]
cs.DC
[ "cs.DC", "cs.AI", "cs.LG", "cs.NI" ]
Serving Graph Neural Networks With Distributed Fog Servers For Smart IoT Services Liekang Zeng, Xu Chen, Peng Huang, Ke Luo, Xiaoxi Zhang, and Zhi Zhou The authors are with the School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong, 510006 China (e-mail: zenglk3@mail2.sysu.edu.cn, chenxu35@mail.sysu.edu.cn, {huangp57, luok7}@mail2.sysu.edu.cn, {zhangxx89, zhouzhi9}@mail.sysu.edu.cn). August 1, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================= Graph Neural Networks (GNNs) have gained growing interest in miscellaneous applications owing to their outstanding ability in extracting latent representation on graph structures. To render GNN-based service for IoT-driven smart applications, traditional model serving paradigms usually resort to the cloud by fully uploading geo-distributed input data to remote datacenters. However, our empirical measurements reveal the significant communication overhead of such cloud-based serving and highlight the profound potential in applying the emerging fog computing. To maximize the architectural benefits brought by fog computing, in this paper, we present Fograph, a novel distributed real-time GNN inference framework that leverages diverse and dynamic resources of multiple fog nodes in proximity to IoT data sources. By introducing heterogeneity-aware execution planning and GNN-specific compression techniques, Fograph tailors its design to well accommodate the unique characteristics of GNN serving in fog environments. Prototype-based evaluation and case study demonstrate that Fograph significantly outperforms the state-of-the-art cloud serving and fog deployment by up to 5.39× execution speedup and 6.84× throughput improvement. Fog computing, Graph Neural Networks, model serving, distributed processing § INTRODUCTION Graphs are ubiquitous. Given the intuitionistic abstraction on relational structures, graphs drive the organization and computation of miscellaneous real-world data such as traffic sensory networks <cit.>, online social graphs <cit.>, and power grids <cit.>. To facilitate deep learning using such form of data, recent advances in neural networks have extrapolated to the graph domain, resulting in a new stream of models called Graph Neural Networks (GNNs). GNNs differ from traditional Deep Neural Networks (DNNs) by integrating graph embedding techniques with convolutions <cit.>. In essence, GNNs leverage an iterative aggregation to an input graph and, through neural network operators, to capture hierarchical patterns from subgraphs of variable sizes. This enables the model to learn the properties for specific vertices, edges, or the graph as a whole, and generalize to unobserved graphs. Benefited from such powerful expressiveness, GNNs achieve superior prediction performance in various graph-related tasks, and have emerged as a powerful data-driven tool for enabling a multitude of real-world IoT-driven smart applications, e.g., traffic flow forecasting <cit.>, location-based recommendation <cit.>, and vehicle trajectory prediction <cit.>. 
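As a concrete illustration of the iterative neighbor aggregation described above, the following minimal sketch implements a generic GCN-style propagation layer in plain numpy. It is not the specific model served by Fograph; the toy adjacency matrix, feature dimensions, and weights are placeholders.

import numpy as np

def gcn_layer(A, H, W):
    # One GCN-style propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)       # aggregate, transform, ReLU

# Toy graph: 4 vertices (e.g., IoT sensors), 3-dim features, two stacked layers
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))
W1 = np.random.default_rng(1).normal(size=(3, 8))
W2 = np.random.default_rng(2).normal(size=(8, 2))
Z = gcn_layer(A, gcn_layer(A, H, W1), W2)        # per-vertex embeddings
print(Z.shape)                                    # (4, 2)

Each layer mixes a vertex's features with those of its neighbors, so computing the embedding of a vertex placed on one fog node may require features held by another — the cross-node dependency that a distributed serving design has to manage.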
To render smooth services for these applications, the de facto standard methodology is to offload raw data and computation to central cloud servers <cit.>. For instance, in Fig. <ref>, the massive sensory data from IoT devices are fully uploaded (in physical domain) and their corresponding GNN input graph (in data domain) is computed at a remote cloud. While this paradigm may act well for many CNN-based image processing tasks <cit.>, however, it can reach suboptimal performance for GNN model serving due to its unique input characteristics. First, the input graph of GNN typically spans geographically with scattered data sources, e.g. the IoT sensory devices, as vertices. Unlike image or video from a single source, to obtain the complete input for one inference, GNN execution is obliged to wait until all correlated data points arrive, which considerably prolongs the total serving latency. Second, as the graph scales and the number of vertices increases, the input data size grows linearly and can become much larger than an ordinary CNN inference input, emphasizing the communication stress. Worse still, the transmission cost is further magnified due to the long transmission delay of Wide Area Network (WAN) and potential network congestion. Specifically, as we will show later in <ref>, the data uploading phase could dominate the whole procedure by consuming >95% latency in a typical cloud-based GNN serving. To tame such intractability, a promising solution is to exploit available computing resources in proximity to data sources with the emerging fog computing[In the terminology of some literature, fog computing also refers to edge computing. Since edge has represented the links in graphs, throughout this paper, we use the term fog to avoid ambiguity.] paradigm <cit.>. Concretely, as Fig. <ref> illustrates, we can sink GNN workload from remote cloud into vicinal fog nodes[We exclusively use node to denote a fog server and leave vertex for graphs.] (e.g. 5G fog servers) and manage data collection and computation within the Local Area Network (LAN). Consequently, the avoidance of unreliable WAN connections allows observably lower communication overhead, reducing at most 67% data collection latency in our experimental measurements. In brief, fog computing exhibits prospective potential for real-time GNN serving at the network edge. Nevertheless, despite the advantages, efficient fog deployment yet suffers from a set of challenges. First, different from the cloud that takes computing resources as a whole, the fog environments usually consist of loosely coupled nodes <cit.>. To adapt complex GNN processing over them, a distributed counterpart is required, where input data need to be judiciously placed and routed to respective fog nodes for distributed execution. Second, fog environments are inherently heterogeneous <cit.>, e.g. with computing facilities ranging from small-size gateways <cit.> to powerful cloudlets <cit.>; their available bandwidth allocated for serving also vary. To exploit the maximum parallelization from the diversity, a heterogeneity-aware data placement strategy with effective load balancing is highly desired. Further complicating the problem is dynamic factors like fog nodes' load levels, network conditions, etc., which may dramatically decline the performance of the whole pipeline. Unfortunately, existing GNN serving mechanisms cannot sufficiently meet these requirements. 
To this end, we present Fograph, a novel distributed system that enables real-time GNN inference over multiple heterogeneous fog nodes. Fograph's contribution goes beyond merely applying fog computing to boost GNN serving; instead, it addresses the above challenges at four levels. First, from an execution perspective, a holistic distributed workflow is introduced for enabling fog nodes to collaboratively serve GNN inference. Second, to attain an efficient runtime, an inference execution planner is designed to optimize the data placement of the input graph, along with a GNN-oriented profiling methodology that allows accurately characterizing heterogeneous computing capabilities. Third, to alleviate the communication bottleneck and improve the overall performance, a novel graph data packing technique is applied that leverages topological properties and compresses transferred data with minimal impact on accuracy. Finally, to adapt to dynamic changes such as load fluctuation, a dual-mode workload scheduler is developed which progressively adjusts the graph data placement in order to acquire the best-performing configuration. Extensive evaluation against multiple benchmarks demonstrates Fograph's superior performance gain over traditional cloud serving and the straw-man fog counterpart. In summary, this work makes the following key contributions. * An empirical study on GNN serving latency with existing cloud and basic fog mechanisms. By breaking down the overheads of communication and execution, we observe a major cost reduction on the communication side, highlighting fog computing as a promising optimization opportunity (<ref>). * A regularized workflow for fog-enabled GNN serving that covers the full lifecycle from offline configuration to online data collection and distributed runtime. Data parallelism is applied and retrofitted by leveraging the execution characteristics of GNN inference in order to coordinate multiple fog nodes (<ref>). * A heterogeneity-aware distributed GNN inference system, Fograph, that enables real-time performance. Reflecting the diverse and fluctuating fog resources, we design an inference execution planner with load balancing for maximum parallelization (<ref>, <ref>), and a dual-mode workload scheduler to accommodate dynamics (<ref>). * A GNN-specific packing mechanism that exploits the reduced-precision resilience and sparsity of GNN workloads to minimize data uploading overhead. Our communication optimizer combines a lossless compressor and a degree-aware lossy quantizer, which exposes previously unattainable designs for distributed GNN inference, while not sacrificing the prediction accuracy of the system (<ref>). * A comprehensive evaluation of Fograph using multiple benchmarks, demonstrating its superiority over state-of-the-art cloud serving and straw-man fog deployment by up to 5.39× execution speedup and 6.84× throughput improvement (<ref>). § BACKGROUND AND MOTIVATION §.§ Graph Neural Networks Real-world graphs typically contain two kinds of data. One is the adjacency matrix, which encodes the global structural information, and the other is the feature vectors that describe the physical properties of vertices and edges. GNNs take both as input and learn a representation vector, called an embedding or activation, for each vertex. The learned representation can be used for downstream tasks such as vertex clustering, link prediction, and graph classification <cit.>. In Fig.
<ref>, we illustrate a two-layer GNN instance from the perspective of data flow during the inference process. Fig. <ref> depicts the input graph, with vertex A and its one-hop and two-hop neighbors in different colors. Fig. <ref> unfolds a layer's detailed operations. Essentially, each GNN layer collectively aggregates the neighbor vertices' activations from the previous layer's output, and then updates the target vertex's activation using a neural network operator such as a convolution or a multi-layer perceptron. Within the same layer, all vertices share the same weights in the aggregate and update functions, while different layers may use different weights. To compute embeddings through a K-layer GNN, vertices should retrieve information from their K-hop neighbors. Formally, the computation of the k-th GNN layer on vertex v can be described as: a^(k)_v = aggregate({h^(k-1)_u|u ∈𝒩_v}), h^(k)_v = update(a^(k)_v, h^(k-1)_v), where h^(k)_v is the representation vector of vertex v at the k-th layer, h^(0)_v is initialized by the input features of v, and 𝒩_v denotes the set of v's direct neighbors. Examples. Table <ref> lists three popular GNN models to exemplify the above two functions. GCN <cit.> is one of the first graph learning models that bridge the gap between spectral transformation and spatial convolutions. Its aggregate function simply uses a summation, and its update function passes the weighted aggregation through an element-wise nonlinearity σ(·). GAT <cit.> is representative of another GNN category that incorporates the attention mechanism into feature propagation. Its inference directly uses the learned attention parameters α^(k)_vu to weight neighbors and passes the aggregation through the nonlinearity for output. GraphSAGE <cit.> is recognized as the classic inductive GNN variant. While its training adopts sampling-based techniques to trade accuracy for training speed, its inference fully collects the neighbor sets for aggregation and update. Here in Table <ref> we formalize its mean-aggregate version. §.§ Emerging Real-Time GNN Applications GNNs have been deployed in many scenarios with real-time responsiveness requirements, particularly in emerging IoT-enabled smart applications. In the following, we provide several motivating examples. Traffic flow forecasting. Accurately forecasting the traffic speed, volume, or density of roads is a fundamental problem in Intelligent Transportation Systems (ITS). To support such intelligence, GNNs construct spatial-temporal models to perform graph-level predictions. For instance, some models <cit.> consider traffic sensory networks as a spatial-temporal graph where the vertices are roadside detectors and the edges are roads. Each vertex is associated with a time-varying vector that records immediate properties such as traffic speed and occupancy. In such circumstances, timely prediction is of paramount importance given the rapidly changing traffic and its broad public impact, which requires real-time GNN processing. Location-based recommendation. Recommending yet-unvisited Points of Interest (POIs) to potentially interested users has been a core function for many commercial mobile applications (e.g. Airbnb, TripAdvisor). To utilize rich semantic information such as geographical constraints and social influences, a number of works <cit.> have built upon GNN models. Several graphs are created in these systems, including a spatial graph of geo-distributed POIs, a social graph of users, and a bipartite graph connecting POIs and users based on historical consumption records.
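As a concrete illustration of these inputs, the sketch below encodes the three graphs of such a recommender as COO edge lists, the format a GNN would consume; all sizes, indices, and feature dimensions are made-up placeholders rather than data from any system discussed here.

```python
import torch

num_pois, num_users = 5, 3
poi_feat = torch.randn(num_pois, 16)    # e.g. category/location embeddings
user_feat = torch.randn(num_users, 16)  # e.g. profile embeddings

# Spatial graph over POIs: connect geographically close POIs.
poi_edges = torch.tensor([[0, 1, 2, 3],
                          [1, 0, 3, 2]])
# Social graph over users: friendship links.
user_edges = torch.tensor([[0, 1],
                           [1, 0]])
# Bipartite graph from historical visits: row 0 = user index, row 1 = POI index.
visit_edges = torch.tensor([[0, 0, 1, 2],
                            [1, 3, 0, 4]])
# A GNN then propagates features along these edges to score unvisited POIs.
```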
Such kind of services typically exhibit a soft-realtime necessity - if the recommendation comes late, the results can be out of date as the user may have moved to other locations. In other words, low latency GNN inference is demanded for rendering effective user experience. Vehicles trajectory navigation. Autonomous robotics has become a hot spot in recent years, and efficient and collision-free trajectory navigation is a key technology to ensure its mobility <cit.>. As an example, in precision agriculture <cit.>, a fleet of autonomous drones move along fields to measure and maintain the crops' health, spraying pesticides if there is an infestation. GNN-based methods <cit.> enhance this procedure by mapping the vehicles as graphs and performing inference to help plan paths instantaneously. Each drone, as a vertex, captures sensory data (for flight height, ambient light intensity, etc.) every few seconds as features. Any delay of the control may result in catastrophic crashes of the vehicles, for which fast inference is needed. §.§ Examining GNN Serving Pipeline This subsection examines the serving latency of de facto standard cloud serving and a vanilla fog deployment to investigate how much performance fog computing can promote. Methodology. The measurement targets a location-based recommendation application <cit.> that runs GCN inference on the SIoT dataset <cit.> (dataset details in Table <ref> and <ref>). The used graph includes 16216 devices from Spain as vertices with 146117 social connections, and each vertex attaches a 52-dimension feature that identifies its properties such as the device's type and brand. Initially, the data are randomly divided into equal parts and assigned to 8 Raspberry Pis. During the measurement, we launch the Pis to simultaneously send the respective graph data via 4G/5G/WiFi network, and then perform inference on the cloud/fog based on PyTorch Geometric (PyG) <cit.> once the complete graph is received. The cloud server is an Aliyun instance (8vCPUs | 32GB | Tesla V100 GPU | Ubuntu 16.04) located in the nearest region, and its geographical distance to the Pis is about 200km. The fog cluster consists of six heterogeneous servers (specifications in <ref>) as computing nodes, all set on the same campus as the Pis. In particular, for single-fog serving, we select the most powerful one to execute; for multi-fog serving, we apply the state-of-the-art technique in <cit.> to place the input data among fog nodes and perform collaborative execution. During the runtime, each node maintains a local graph, computes GNN layers, and exchanges vertices data with each other when needed. The 4G/5G network employs commercial operator services, where the 5G network is provided by the 5G base stations surrounded nearby under the non-standalone (NSA) mode. Measurements reported. Fig. <ref> (left) shows the serving latency of the cloud, single-fog and multi-fog mechanisms under different networking settings, and Fig. <ref> (right) breaks down stage-wise costs in terms of data collection and inference execution. Regarding the load distribution in the multi-fog serving, Fig. <ref> visualizes the number of assigned vertices and the execution latency of each fog node. Key observations. First, the fog approaches enjoy better performance than the cloud alternative, demonstrating its efficiency in vicinal serving. Quantitatively, for 4G, 5G, and WiFi, the single-fog approach achieves 1.65×, 1.73×, and 1.40× speedups over the cloud serving, respectively. 
The multi-fog counterpart attains even lower latency than the single-fog. The weaker the network condition, the greater the advantage fog serving reaps. Second, we observe that the advantage of fog serving comes mainly from communication savings. As evidence, when switching from cloud to single-fog, the data collection latency can be reduced by 64%, 67%, and 61% under 4G, 5G, and WiFi, respectively. The similar degree of reduction across networks implies a consistent advantage gained from avoiding remote Internet data transfer. Surprisingly, multi-fog serving achieves lower data collection costs than single-fog. This is because employing more fog nodes provides more access points, which widens the aggregate bandwidth and relieves network contention. Nonetheless, data collection still accounts for >50% of the cost in both fog approaches, as shown in Fig. <ref> (right), suggesting that communication remains the major cost factor in the serving pipeline. Third, while fog data collection significantly reduces the overhead, execution can dramatically erode the benefit. Nearly half of the cost is taken by execution in single-fog serving, while that in the cloud is <2%. Multi-fog serving alleviates this, but reduces the execution cost by only 33% over single-fog despite using five more fog nodes. Such inferior performance indicates poor resource utilization, which stems from the gap between equally assigned data placement and heterogeneous computing resources. This is clearer from the measurements in Fig. <ref>, where the existing data placement strategy merely yields an equilibrium in the number of assigned vertices but a severe imbalance in actual load distribution. §.§ Opportunities and Challenges with Fog Computing The advanced capability of GNNs has spread them to a wide range of end applications with real-time requirements. Fog computing, as evidenced by the above measurements, is a potentially effective solution, with both opportunities and challenges. Opportunities. By relocating execution to proximate computing nodes close to the data sources, fog computing manages all data communication within a local network and thus avoids unreliable, high-delay Internet connections. Such architectural wisdom, as demonstrated by the above empirical measurements, translates effectively into reduced communication time. In addition, the multi-node fog cluster provides more room for parallel GNN processing, which can further accelerate the serving pipeline. Challenges. In spite of these opportunities, simply adopting fog computing is not sufficient. First, to effectively exploit fog resources, it is imperative to accurately characterize the fog nodes' heterogeneity, decide the graph data placement, and orchestrate the data flow during the runtime. Second, given that transmission is the major cost contributor, a communication-friendly compression technique is desired for expedited graph data collection. Third, to enable resilient serving in real time, the system should be able to react dynamically to resource fluctuation. § FOGRAPH SYSTEM DESIGN As aforementioned, the objective is to fully unleash the architectural benefits of fog computing in rendering real-time GNN serving while adapting to fog nodes' heterogeneity and dynamic changes. To this end, we propose Fograph, a distributed GNN inference system. In what follows, we present Fograph's modules following its workflow. §.§ Workflow and Design Overview Fig. <ref> and Fig.
<ref> show the high-level view of Fograph's workflow and system design, respectively, where the five modules in Fig. <ref> work for the five steps in Fig. <ref> correspondingly. In the setup phase, Fograph obtains a GNN model and uses the calibration dataset to sketch the computing capabilities of the heterogeneous fog nodes. This is accomplished by the offline profiler (Fig. <ref> 202), which builds latency estimation models for predicting the GNN performance. Moreover, it records the static properties of the historical input like the adjacency matrix, acting as the initial graph skeleton. A dedicated fog node is selected as the metadata server that is responsible for registering metadata from all available fog nodes (<ref>). Next, Fograph's execution planner (<ref> 203) applies the latency estimates and judiciously schedules a graph data placement to match the heterogeneity among fog nodes, aiming at maximum parallelization with effective load balance guarantee. In the runtime phase, the participated fog nodes individually collect their assigned data partitions in light of the execution plan. To speed up device-to-fog data transfer, a novel GNN-specific compression technique is employed to exploit the features sparsity and GNN's resilience to progressively reduce data uploading costs (<ref> 204). Once input graph data completely arrive, Fograph's runtime orchestrates distributed inference execution, handling all data exchange between fog nodes (<ref> 205). Simultaneously, each fog node's online profiler monitors its resident execution across inferences, as well as the runtime conditions, and updates the offline performance profile, periodically transferring to the metadata server for execution plan refinement (<ref> 206). In this way, the system can adapt to dynamic-changing environments, reconfigure its execution and maintain real-time serving. §.§ Metadata Acquisition and Registration The aim of metadata registration (Fig. <ref> 202) is to readily provision fundamental serving configurations and sensibly characterize the heterogeneity of fog nodes, providing necessary materials for the subsequent execution planning. To achieve that, we design a dynamic profiler, operating across the setup phase and the runtime phase. Setup phase. Before deployment, the offline profiler (Fig. <ref> 202) performs two kinds of measurements, device-independent and device-specific. The former focuses on the static configurations stated by service providers. Concretely, it comprises 1) available bandwidth of fogs, 2) the employed GNN model (trained in advance), and 3) the invariant metrics of the input graph. Here we identify the invariance as the adjacency matrix/list that depicts the graph topology, and the size of a feature vector, which is determined once a given GNN model is trained. For instance, in smart transportation applications <cit.>, we can interpret them as the traffic monitoring sensors' logical topology (e.g. sensors as vertices and roads as edges) and the form of sensory records, both of which are known before runtime. These parameters are independent of the running platforms and thus can be profiled only once for a given model. For the latter, we intend to establish latency estimation models that are specific to each fog node, targeting quantifying their heterogeneous computing capability. 
The performance of GNN inference, however, relies heavily on fundamental settings such as the underlying hardware and the DL framework in use, so trivial estimates based on static configurations are rough and unfaithful. Therefore, to build performance models at a precise granularity, we employ a proxy-guided profiling process: First, we construct a calibration set by uniformly sampling subgraphs of varying cardinality from the initial graph. The cardinality, defined as ⟨ c ⟩=⟨ |𝒱|, |𝒩_𝒱| ⟩, characterizes a subgraph's size from a GNN perspective by the number of vertices it includes and the number of their one-hop neighbors. To preserve the actual degree distribution, for each cardinality axis we collect a group of 20 samples. Next, we measure the average execution latency for each fog node by passing the GNN through the calibration set, and build regression-based latency estimation models ω, e.g. the linear regression model in Eq. (<ref>), where β and ε are regression parameters. latency = ω(⟨ c ⟩) = β·⟨ |𝒱|, |𝒩_𝒱| ⟩ + ε. Runtime phase. At run time, the profiler keeps tracking the execution time of each fog node to update the offline estimates, and derives the balance indicators in order to gauge the global performance. To keep the profiler lightweight, instead of adopting a more accurate but prohibitively costly estimator, we employ a two-step linear estimation to predict the inference latency on the fly. In the first step, the profiler measures the actual execution time of the local c-cardinality graph, denoted by T^real_⟨ c ⟩, during each inference. Next, it calculates a load factor η as the ratio between the actual time and the offline latency estimate of cardinality c, i.e. η = T^real_⟨ c ⟩/ω(⟨ c ⟩). In the second step, the profiler treats the load factor as a reflection of the present load level, and uses it to predict the latency of all other cardinalities. Thus, the latency of a different cardinality c' is estimated as η·ω(⟨ c' ⟩). §.§ Inference Execution Planning Given a GNN model, Fograph exploits data parallelism to distribute the inference workload over multiple fog nodes, where the input data needs to be divided and distributed. To attain high-performing serving, an inference execution planner (IEP, Fig. <ref> 203) is developed to schedule data placement ahead of runtime. Problem formulation. Let 𝒢 = (𝒱, ℰ) define a GNN input graph, where 𝒱 and ℰ are the sets of vertices and edges, respectively. Suppose a set ℱ of n fog nodes is available for serving, denoted by ⟨ f_1, f_2, ⋯, f_n ⟩. Each vertex v_i ∈𝒱 is a data source point (e.g. a sensor), and its placement on a certain fog f_j is specified by a binary variable x_ij∈{0, 1}. While a fog admits multiple vertices, a vertex can only be placed on exactly one fog, i.e., ∑_j x_ij = 1, ∀ v_i ∈𝒱. To reckon the cost of a fog's data collection process, we tally the transmission latency of all vertices placed on it, as in Eq. (<ref>), where φ is the data size of a single vertex's feature vector and b_j indicates f_j's available bandwidth for serving. t^colle_j = ∑_i x_ijφ/b_j, ∀ f_j ∈ℱ. To calculate the inference execution latency on f_j, we summarize the vertices placed on it as a subgraph ⋃_i x_ij v_i, and estimate its computing time through the performance model ω(·) from the metadata. In addition, the complete execution involves K synchronizations for cross-fog data exchange through a K-layer GNN, due to GNN's neighbor aggregation mechanism as discussed in <ref>.
Assuming the cost of a synchronization is δ, we append Kδ to complete the total execution cost, as in Eq. (<ref>). t^exec_j = ω_j(⋃_i x_ij v_i) + Kδ, ∀ f_j ∈ℱ. Putting data collection and inference execution together, the objective of IEP is to find an efficient data placement strategy π = {x_ij|∀ v_i ∈𝒱, ∀ f_j ∈ℱ} such that the latency of the complete serving pipeline is minimized, formally formulated in problem 𝒫: 𝒫: min max_j(t^colle_j + t^exec_j), s.t. (<ref>), (<ref>), (<ref>). The IEP problem 𝒫 is NP-hard when the number of fog nodes n ≥ 2. The quality of this data placement matters in that 1) uneven assignment usually catalyzes the straggler effect <cit.> and 2) skewed load distribution commonly accompanies communication bottlenecks, either of which can largely slow down parallel performance, as measured in <ref>. However, yielding an optimal solution of 𝒫 is intractable when the number of fogs n ≥ 2, due to its NP-hardness stated in Theorem <ref> (proof in Appendix A). Unfortunately, the combination of the unique computation pattern enforced by the GNN workload and the inherent heterogeneity of fog nodes makes existing data placement techniques hard to apply. IEP data placement. To enable real-time GNN serving among fogs, we instead leverage heuristics for efficient optimization. Specifically, we capitalize on two insights in IEP: 1) Co-locating connected vertices in the same data partition not only saves the computing time ω_j(⋃_i x_ij v_i) on a single fog but also alleviates the synchronization burden Kδ in Eq. (<ref>). This benefit is attributable to GNN's neighbor aggregation mechanism: computing inference over a data partition essentially operates on a neighbor-augmented graph built upon the vertices it contains. Maximizing the vertices' locality within a data partition can thus significantly decrease the number of neighbors |𝒩_𝒱|, so that the computing time (refer to Eq. (<ref>)) and synchronization costs (for pulling neighbors from other fogs) are lessened simultaneously. 2) In light of the parallel nature of 𝒫, a serving-oriented load balance with regard to heterogeneous computing capabilities and diverse bandwidths can substantially improve the holistic performance. In particular, the costs of data collection t^colle_j and inference execution t^exec_j should be jointly considered to maximize the utilization of available resources. Motivated by these two insights, we tackle 𝒫 via a two-step optimization that first preprocesses the input graph to generate locality-maximized partitions, and next maps them to fog nodes accounting for both computation and communication resources. Fig. <ref> depicts the overall flow of IEP and Algorithm <ref> shows the pseudocode. First, we generate data partitions over the input graph, aiming at both internal vertex locality and load balancing. Instead of searching by brute force, we note that this task is partially related to Balanced Graph Partitioning (BGP), a family of problems that have been extensively studied <cit.>. In particular, maximizing the partitions' internal locality can be conversely interpreted as a minimization of inter-partition connections, i.e. edge-cuts. Therefore, as an initialization, we employ BGP solvers and obtain a group of n min-cut data partitions, where n is the number of fogs (Line 2). These partitions, however, are merely balanced in the number of vertices rather than in actual GNN workload, which may still induce uneven load distribution (as discussed in <ref>).
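To make the cost model and this partitioning initialization concrete, the sketch below shows how the objective of problem 𝒫 could be evaluated for a candidate placement produced by a BGP solver. It is a minimal illustration rather than Fograph's implementation: the pymetis binding, the scikit-learn regression, the helper names, and the way external neighbors are counted are all assumptions made for brevity.

```python
import numpy as np
import pymetis  # assumed METIS binding; any BGP solver would do
from sklearn.linear_model import LinearRegression

# Offline profiling (omega): fit a per-fog latency model on calibration
# samples described by their cardinality <|V|, |N_V|>.
def fit_latency_model(cardinalities, latencies):
    return LinearRegression().fit(np.asarray(cardinalities), np.asarray(latencies))

# Step 1 of IEP: n min-cut, vertex-balanced partitions via a BGP solver.
def initial_partitions(adjacency, n_fogs):
    _, membership = pymetis.part_graph(n_fogs, adjacency=adjacency)
    parts = [[] for _ in range(n_fogs)]
    for v, p in enumerate(membership):
        parts[p].append(v)
    return parts

# Objective of problem P: the maximum over fogs of collection + execution cost.
def serving_latency(parts, adjacency, models, bandwidth, phi, K, delta):
    worst = 0.0
    for j, part in enumerate(parts):
        members = set(part)
        # neighbors outside the partition must be pulled from other fogs
        externals = {u for v in part for u in adjacency[v]} - members
        card = np.array([[len(members), len(externals)]])
        t_colle = len(part) * phi / bandwidth[j]                 # data collection
        t_exec = float(models[j].predict(card)[0]) + K * delta   # execution
        worst = max(worst, t_colle + t_exec)
    return worst
```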
To bridge this gap, in the second step, we build a resource-aware mapping between partitions and fogs. Specifically, a bipartite graph ℬ is defined as in Fig. <ref> (Res.-aware mapping box), with partitions ⟨ P_1, P_2, ⋯, P_n ⟩ and fogs ⟨ f_1, f_2, ⋯, f_n ⟩ in separate columns. We associate every partition-fog pair with an edge, weighted by a compositive cost ⟨ P_k, f_j ⟩ of uploading and executing partition P_k on fog f_j: ⟨ P_k, f_j ⟩ = |P_k| φ/b_j + ω_j(P_k) + Kδ, k,j ∈{1,2,⋯,n}. Yet to find a mapping in ℬ that satisfies 𝒫's min-max objective has a huge decision space of n!, and differs from the traditional bipartite matching problem of maximum weighted sum <cit.>. However, we observe that it is a variant of Linear Bottleneck Assignment Problem (LBAP) <cit.>, and we can apply a threshold-based algorithm to solve an optimal mapping. Specifically, it first instantiates a priority queue 𝒬 to accommodate all edge weights in ℬ, and next successively inspects every element in iterations. For each iteration, it dequeues the front element in 𝒬, the maximum weight in the queue, as the weight threshold τ, and filters edges in ℬ that have a weight smaller than τ to construct an auxiliary bipartite graph ℬ' (Line 8-10). Applying the Hungarian algorithm <cit.>, we obtain a mapping M in ℬ' and check whether it is a perfect matching towards the original bipartite graph ℬ. If succeed, we record the obtained mapping in M^*, and move forward to another iteration for new attempts with lower thresholds. Otherwise, there is no perfect matching anymore in filtered bipartite graphs since the remaining to-be-examined thresholds in 𝒬 will be smaller; the obtained mapping M^* is thus the expected result that minimizes the maximum weight in ℬ and the iteration consequently terminates. Finally, the algorithm ends by returning the M^*'s corresponding data placement π. Discussion. The expense of IEP's first step mainly relies on the selected BGP solver, and Fograph allows for altering appropriate solvers to adapt to specific graphs and reduce the overhead. The second step takes O(n^2) iterations for threshold descending, according to the O(n^2) length of the priority queue from the bipartite graph ℬ with n partitions and n fogs. However, we can use binary search to further expedite the threshold searching, which can significantly decrease overall iterations to O(log n). As each iteration's Hungarian algorithm invocation requires O(n^3), the second step of IEP takes a total time complexity of O(n^3 log n). We note that such a complexity is affordable since the number of available fog nodes in real deployment is usually small (e.g. <100). Besides, the scheduling overhead of IEP data placement is an upfront cost before runtime and it can be amortized across multiple inferences. In our implementation, we apply the widely-adopted METIS <cit.> as the BGP solver and binary search for the mapping step, which spends only seconds for SIoT in total. To verify the effectiveness of the proposed IEP algorithm, we examine its performance against two comparative straw-man approaches: 1) METIS+Random, a trivial version that first invokes METIS for balanced partitions and next assigns them to arbitrary fog nodes, and 2) METIS+Greedy, which takes a greedy heuristic in IEP's partition-fog mapping procedure, i.e. iteratively finds fogs for partitions such that their edge weight ⟨ P_k, f_j ⟩ is minimized. Fig. 
<ref> depicts the results in three heterogeneous environments, where we observe that IEP always surpasses the baselines with lower serving latency, demonstrating the superiority of our resource-aware algorithm design. Specifically, the updated IEP algorithm outperforms METIS+Greedy with an average latency reduction of 10.9%, 19.1%, and 19.5% for three different model configurations, respectively. §.§ Degree-Aware Compressed Data Collection According to IEP's data placement, each fog collects the input data individually in the runtime phase (Fig. <ref> 204). As discussed in <ref>, the considerable cost of uploading graph data underscores its significance for real-time serving. To alleviate this bottleneck, Fograph integrates a communication optimizer (CO, Fig. <ref> 204), operating in two steps. Degree-aware quantization (DAQ). In the first step, we exploit the resilience of GNNs to low-precision representation and lower the data precision in a differentiated way guided by topological information. Concretely, we use each vertex's degree as a knob to modulate the quantization intensity on its feature vector. The rationale is that a vertex with a higher degree assimilates more abundant information from its neighbors, and is more robust to low bitwidths since its quantization errors can be smoothed through successive aggregations <cit.>. In detail, DAQ maintains a triplet ⟨ D_1, D_2, D_3 ⟩ to divide the vertices' degrees into four intervals [0, D_1), [D_1, D_2), [D_2, D_3) and [D_3, +∞), and assigns respective quantization bits of ⟨ q_0, q_1, q_2, q_3 ⟩ to each. Next, for each vertex, we check its degree to obtain a target bitwidth according to the interval it lies in, and apply a linear quantization. In Fograph, we set up four equal-length intervals based on the input graph's degree distribution and instantiate the quantized bits as ⟨ 64, 32, 16, 8 ⟩ by default, as illustrated in Fig. <ref>. However, it should be noted that ⟨ D_1, D_2, D_3 ⟩ and ⟨ q_0, q_1, q_2, q_3 ⟩ are tunable to accommodate specific graph topologies and customized accuracy-latency preferences. The exploration of these configurations is left for future work. Given the input feature bitwidth Q, DAQ with configurations ⟨ D_1, D_2, D_3 ⟩ and ⟨ q_0, q_1, q_2, q_3 ⟩ renders a compression ratio of 1/Q[q_3 - ∑_i F_D(D_i)(q_i - q_i-1)], i∈{1,2,3}, where F_D(·) is the cumulative distribution function of the graph degree distribution. Theorem <ref> theoretically gives the compression ratio of DAQ, with proof in Appendix B. By discriminatively reducing the bitwidths of feature vectors, DAQ allows the transferred size to be substantially lower without significant impact on inference accuracy (<1% drop in our experiments). Our scheme differs from both 1) uniform quantization <cit.>, which ignores the vertices' differing quantization sensitivities and degrades accuracy, and 2) all-layer quantization, which demands complicated techniques <cit.>, such as quantization-aware training, to fine-tune the model's prediction performance. Sparsity elimination. The second step exploits the observation that feature vectors are amenable to compression. A major fraction of feature vectors are sparse and highly compressible due to their encoding mechanism, and the sparsity is further magnified by the precision reduction in the above quantization step. Hence, we compress the sparsity using the LZ4 compressor <cit.> with bit shuffling.
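A minimal sketch of the DAQ step is given below, assuming the default four intervals and ⟨64, 32, 16, 8⟩ bitwidths described above; the concrete degree thresholds are placeholder values, and the bit packing plus the LZ4/bit-shuffling stage are omitted for brevity, so this illustrates the idea rather than Fograph's packing code.

```python
import numpy as np

def daq_bitwidth(degree, thresholds=(4, 8, 12), bits=(64, 32, 16, 8)):
    # Higher-degree vertices tolerate lower precision, so they get fewer bits.
    for t, b in zip(thresholds, bits[:-1]):
        if degree < t:
            return b
    return bits[-1]

def linear_quantize(feature, q):
    # Uniform (linear) quantization of one feature vector to q bits.
    if q >= 64:                      # full precision: keep the raw values
        return feature, None
    lo, hi = float(feature.min()), float(feature.max())
    levels = 2 ** q - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((feature - lo) / scale)
    return codes, (lo, scale)        # codes plus metadata are what get shipped

def dequantize(codes, meta):
    if meta is None:
        return codes
    lo, scale = meta
    return lo + codes * scale

# Per-vertex packing on the device side (sketch):
# q = daq_bitwidth(len(adjacency[v])); codes, meta = linear_quantize(features[v], q)
```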
Deployment of CO. The lifecycle of CO comprises the procedures of packing and unpacking, where the former is deployed at the end devices that contribute data sources and the latter is installed on fog nodes. While its cost can be amortized by the communication savings, we develop several optimizations to further reduce the overhead. For packing, we mitigate the burden on end devices by 1) pre-calculating the targeted quantization bitwidth before deployment and using it throughout the runtime (as long as the graph topology is unchanged), and 2) quantizing the feature vector's elements in parallel for additional acceleration. Since each device uploads its local data individually, the data packing process is naturally parallelized from a global view and the cost is apportioned. On the fog side, the received data are first decompressed and then dequantized back to the original bitwidth before inference. Further, we launch a separate thread for the unpacking procedure to pipeline data recovery and inference execution. §.§ Distributed Execution Runtime With the generated execution plan, Fograph's runtime execution engine (Fig. <ref> 205) orchestrates distributed GNN inference upon multiple nodes. Specifically, when an inference query is launched, fog nodes collect the necessary data from nearby sensory devices as per the data placement policy, and then collaborate to conduct the GNN execution. For each GNN layer execution, cross-fog data exchanges are carried out when necessary, i.e. when neighboring vertices' data belong to different data partitions. Next, the inference functions (aggregate and update) are invoked by the fog nodes to compute the layer over the data partitions in parallel. Repeating the above process for all layers completes the whole execution and produces the expected embeddings. Note that the model has been loaded and stays resident throughout the runtime so that it can be called immediately on demand. To adapt to dynamic fluctuation of resources, e.g. background load and bandwidth changes, the metadata server periodically aggregates the metadata from fog nodes, replays IEP to yield an updated data placement, and deploys it to fogs at system idle time. The iterative layer processing is implemented with the Bulk Synchronous Parallel (BSP) model <cit.>, where a synchronization step is triggered when data exchange is needed. Although the total number of synchronizations depends on the number of GNN layers, which is usually very small (e.g. GCN typically stacks two or three layers), we apply the following optimizations for further acceleration. First, the adjacency matrix of each data partition can be constructed prior to the execution as soon as the data placement is determined, in order to lower runtime overhead. Second, the synchronization is run as a separate thread to enable the pipelining of data preparation and inference execution. Third, we wrap the execution on top of the mature framework PyG <cit.>, which allows Fograph to directly benefit from all existing kernel-level optimizations. §.§ Adaptive Workload Scheduling Even with the best offline graph data placement, the distributed inference performance can become suboptimal due to fluctuations in computing resources, e.g. caused by machine load variation. This is particularly relevant considering that fog nodes usually run diverse services simultaneously. To this end, the adaptive workload scheduler (Fig. <ref> 206) is developed to refine the data placement tailored to dynamic load levels.
Unlike the offline IEP that performs meticulous yet expensive optimization, the adaptive scheduler works online and should remain agile and agnostic to the inference. Therefore, we employ a lightweight adjustment method and a dual-mode regulation to adapt the workload distribution. Load balance indicator. To reflect how skewed the load distribution is, we first define an indicator μ_j for each node f_j as the ratio between its actual execution time T^real_j and the mean value over all fogs' measurements (1/n)∑_k T^real_k: μ_j = T^real_j / ((1/n)∑_k T^real_k), ∀ f_j ∈ℱ, where T^real_j is the real measured execution time of f_j in the last time interval, obtained from the online profiler. We further introduce a slackness factor λ to tune the imbalance tolerance. If there is a node such that μ_j > λ, it implies that this node violates the imbalance tolerance λ and is presumed to suffer from a high background load. Note that λ is required to be no smaller than 1: λ = 1 means that exact balance is required, whereas λ > 1 relaxes the balance constraint. Additionally, we count the number of overloaded nodes, denoted as n^+, to gauge the global load skewness and use it for the next configuration. Diffusion-based adjustment. The diffusion method aims at amending the graph data placement to align with the load level at a low cost. With the latest profiling data, it first selects the two partitioned sets of vertices with the highest and lowest execution time, and then progressively migrates vertices from the overloaded set to the underloaded set until an estimated local balance is achieved. For each migration, the boundary vertex that shares the most neighbors with the other side is picked. As an example, Fig. <ref> illustrates this diffusion process across fog nodes f_1 and f_2, which are assumed to be moderate and weak in computing capability, respectively. Without external load burdens, a graph data placement is decided initially, separating four/two vertices as in Fig. <ref>. Assuming a load increase abruptly occurs on f_1 such that the two nodes' effective capabilities are on a par, the diffusion is then applied to migrate vertices to balance their workload. The number of to-be-migrated vertices is 1, and the adjustment chooses vertex D as the moving candidate since it has the most edge-cuts across the subgraphs in f_1 and f_2. This consequently results in an updated balanced layout in Fig. <ref>. In a data placement with multiple partitions, the above pairwise diffusion process continues for all unevenly-loaded nodes until the overall estimated performance satisfies the imbalance tolerance λ. We refer to this method as diffusion in that the flow of vertices continuously moves from the overloaded regions to the underloaded regions. This method is lightweight since it only operates on a small part of the graph and migrates a few vertices. Yet it is effective as it consistently makes incremental improvements to the graph layout. However, when the load distribution is dramatically skewed, this incremental property also comes at a high cost. Therefore, we introduce a dual-mode scheduler to integrate the lightweight adjustment with the global partitioning. Dual-mode scheduler. The workload scheduler considers dual regulation modes: the lightweight diffusion-based adjustment discussed above and the heavyweight global partitioning that invokes the offline IEP. Algorithm <ref> presents the scheduler's processing flow.
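Before walking through the algorithm, the balance indicator and one round of pairwise diffusion can be sketched as follows; the function names, the default λ, and the single-vertex migration step are illustrative assumptions rather than the exact logic of Algorithm <ref>.

```python
# Compute mu_j for every fog from its measured execution time.
def balance_indicators(t_real):
    mean = sum(t_real.values()) / len(t_real)
    return {j: t / mean for j, t in t_real.items()}

# One diffusion round: move a boundary vertex from the most overloaded
# partition to the most underloaded one when the tolerance is violated.
def diffuse_once(parts, adjacency, t_real, lam=1.2):
    mu = balance_indicators(t_real)
    if max(mu.values()) <= lam:              # within the imbalance tolerance
        return parts
    src = max(mu, key=mu.get)                # overloaded fog
    dst = min(mu, key=mu.get)                # underloaded fog
    dst_vertices = set(parts[dst])

    def shared_neighbors(v):                 # ties to the receiving partition
        return sum(1 for u in adjacency[v] if u in dst_vertices)

    candidate = max(parts[src], key=shared_neighbors)
    parts[src].remove(candidate)
    parts[dst].append(candidate)
    return parts
```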
As a first step, the scheduler uses recorded execution time to update the performance estimation models and calculate the skewness indicators (Line 1-2). If there is a node such that μ_j > λ, it implies the imbalance tolerance λ is violated. This subsequently triggers adjustments on the existing graph layout. To decide which mode to be applied, we count the number of overloaded nodes, denoted as n^+, and compare it with a user-specific skewness threshold θ (Line 4-5). θ is a positive decimal and is set 0.5 by default. If |n^+| / n ≤θ, it means that the skewness is still tolerable and the lightweight diffusion is applied. Otherwise, the percentage of the overloaded nodes exceeds θ and we pass the entire graph 𝒢 and the updated performance estimates ω' to IEP to yield a new data placement. Note that all the layout modifications are first operated virtually in Algorithm <ref> and will be executed physically when the final result is determined and the system is in idle period. § EVALUATION This section presents the effectiveness of Fograph in significantly improving the performance of GNN inference serving by examining its core components and comparing with the currently standard cloud implementations and straw-man fog inference systems. §.§ Experimental Setup Prototype and methodology. We implement Fograph prototype with three types of computing nodes and their specifications are listed in Table <ref>. We label them as type A, B, and C, respectively representing fog nodes with weak, moderate, and powerful capabilities. Such a category depends on their processors' power and available memory: Type-C fogs equip the highest computing power with the largest memory space, whereas Type A is on the opposite side and Type B is in between them. Although fogs of Type A and B own the same processor, the former performs much poorer than the latter with the used SIoT and Yelp datasets, reporting an average of 37.8% longer inference latency with GCN. Fograph is built on top of PyG <cit.>, though its design is agnostic to the backend and can be conveniently switched to other engines like DGL <cit.>. We compare Fograph with the de-facto standard cloud serving and a straw-man multi-fog deployment (multi-fog serving without any proposed optimization), all running the same workload. The configurations of cloud and the emulation of distributed graph data uploading follow the methodology in <ref>. To ensure a fair comparison, the straw-man fog approach adopts the data placement strategy in state-of-the-art distributed GNN processing <cit.> that first calls METIS <cit.> to partition the data graph and next map them to fog nodes stochastically. According to the placement, the fog approach's runtime directly collects graph data without communication optimization, and launches collaborative inference upon the same distributed framework as Fograph. Without loss of generality, Fograph selects an arbitrary fog node as the metadata server among the used ones. We measure the end-to-end latency from data collection to GNN inference completion. All the background loads are disabled during the runtime and each benchmark is evaluated 50 times to calculate the average results. GNN models and datasets. Four GNN models are employed: GCN <cit.>, GAT <cit.>, GraphSAGE <cit.> and ASTGCN <cit.>. The first three are representative GNN models that have been widely used across GNN-based services <cit.>, and their formulas have been listed in Table <ref>. ASTGCN is a spatial-temporal model specific for traffic flow forecasting. 
All the models are implemented using the instances from PyG model zoo <cit.> or the original code from the model authors <cit.>, and are trained prior to deployment. We evaluate Fograph on three real-world datasets and their statistics are listed in Table <ref>. We select them since they have been used in existing literature <cit.>, and are yet the largest publicly available ones at the time of evaluation, regrading the real-world IoT-driven smart applications discussed in <ref>. We also synthesize much larger graphs of different scales, namely the RMAT series in Table <ref>, upon the above real-world datasets to examine system scalability. The detailed description of the datasets is presented in Appendix C and Appendix D. §.§ Performance Comparison This subsection compares Fograph with the state-of-the-art cloud serving and the straw-man fog approach in metrics of latency, throughput, and inference accuracy, using a cluster of six fog nodes including types of 1×A, 4×B and 1×C. All types of nodes here run with CPU only since the energy-intensive accelerators are not prevailing on existing IoT computing platforms <cit.>, though we will still evaluate how GPU improves Fograph’s performance latter in <ref>. Latency and throughput. Fig. <ref> and Fig. <ref> respectively show the achieved inference latency and throughput across varying models, datasets and network conditions. Given a dedicated dataset and network speed, e.g. the upper left subfigure SIoT-4G, the cloud serving yields the highest latency and the lowest throughput despite the models because it is constrained by the inevitable communication overhead of remote transmission. The fog approach significantly lessens the latency compared with cloud, showing that trivially applying fog computing still enjoys its architectural advantages. More specifically, by shortening the transmission distances between data sources and processing nodes, the fog-based solutions can significantly alleviate the communication bottleneck of centralized cloud serving caused by the long-tail data collection, since GNN inferences can be executed only if all graph data arrive. Fograph further shrinks the costs, achieving <1s inference latency, realizing efficient serving in practical sense. Across all setups, Fograph consistently delivers the highest performance with a latency reduction up to 82.18% and 63.70%, and a throughput improvement of 6.84× and 2.31×, all over cloud and fog, respectively. This can be attributed to our heterogeneity-aware IEP and communication optimizer on maximizing the computing resource utilization while minimizing data uploading overhead. Traversing horizontally the subfigures with varying network conditions, we observe that poorer channels induce higher speedups of Fograph, e.g. averagely from 4.67× to 5.39 × on SIoT over cloud when switching WiFi to 4G. Vertically varying the datasets, a larger source graph enhances Fograph's superiority in saving costs, e.g. an average latency reduction of 80.63% and 70.21% over cloud for SIoT (larger) and Yelp (smaller), respectively. An interesting observation of Fig. <ref> is that the serving latency seems to be relatively stable when the dataset and network condition are determined notwithstanding the models. It is interpreted that the communication overhead majorly dominates the total cost in the cases of GCN, GAT and GraphSAGE inferences. 
Although execution's impact will be promoted if larger GNN models are applied, this fact highlights the ponderance of communication optimization in end deployment, which is exactly what Fograph emphasizes. In Fig. <ref>, the superiority of Fograph is more evident in contrast with the baselines, validating the effectiveness of optimizing bandwidth utilization and pipeline execution. Accuracy. While Fograph profitably exploits the compression technique in reducing data collection overhead, its practicability yet relies on inference accuracy. To investigate that, we configure the communication optimizer with default settings and assess the approaches under WiFi connections[The inference accuracy is irrespective to the network conditions. The data corruption caused by unstable transmission, which may decrease the accuracy, is not considered in this paper.]. Table <ref> shows the inference accuracy for SIoT and Yelp upon three models. The cloud and fog approaches maintain the same accuracy as they both reserve full precision features. Fograph drops <0.1% for both SIoT and Yelp, which will not cause substantial usability issues on the numerical results. The accuracy loss is minimal in SIoT because its features are simply one-hot encoded, where the outcome of quantization and compression is maximized without side effects. §.§ Case Study: Traffic Flow Forecasting This subsection uses a realistic case to complement the performance comparison in demonstrating Fograph's superiority. We emulate a traffic flow forecasting application <cit.> by running the inferences to predict future flow over the PeMS sensor network in an expected time window. In this setting, four nodes are employed: 1×A, 2×B, and 1×C. The reason behind configuring such a smaller, weaker cluster (minus two Type-B fogs from the testbed in <ref>) is to match the computing resources with the much smaller graph dataset targeted in this experiment (compared to SIoT and Yelp). To fit the application, we select ASTGCN, a representative spatial-temporal model with GCN as building blocks. IEP result. Fig. <ref> visualizes the sensors' spatial distribution of PeMS and its data placement results. Each vertex is colored to indicate its placed fog node. From a global view, the sensors, as the graph vertices, exhibit a clustering pattern of their placement, demonstrating the locality preservation of IEP. Moreover, the different vertices number of each partition implies its heterogeneity awareness. This can be clearer in Fig. <ref>, which counts the number of assigned vertices and the execution latency of each fog node. The execution latency results among fog nodes are well close though they hold entirely different numbers of vertices. In particular, the 4-th fog node (Type C) accommodates the most vertices (the yellow points in Fig. <ref>), but conversely takes the lowest execution time. This is because that it possesses the highest computing capability among others (Type A and B fog nodes), and thus Fograph enforces it to afford more workload (vertices). Such load balance attributes to the heterogeneity-aware IEP, which effectively aligns execution workload to the diverse computing resources and ensures maximum resource utilization towards parallel and distributed inference. Latency and throughput. Fig. <ref> shows the inference latency and throughput of the approaches under 4G, 5G, and WiFi. Analogous to the comparison in Fig. <ref>, Fograph surpasses the baselines with the lowest costs. 
Specifically, with varying channels, Fograph consistently attains the lowest serving latency, achieving speedups of up to 2.79× and 1.43× over cloud and fog, respectively. The latency of traffic flow forecasting appears higher cost than the SIoT and Yelp tasks because it predicts a window of 12 flow possibilities for every 5 minutes in the next hour, which puts a higher workload on execution. Fig. <ref> presents the corresponding throughput results, where Fograph outperforms all other approaches, showing its superiority. Forecasting performance. Table <ref> records three common evaluated metrics for traffic flow forecasting: Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE). We compare Fograph in two time horizons against cloud, straw-man fog, and an additional compression method that uniformly compresses all feature vectors into 8-bit precision. Similar to the results in Table <ref>, Fograph induces minimal error expansion of around 0.1 for all metrics compared with the full precision version (cloud and fog). In contrast, uniformly quantizing all features to 8-bit results in an evident error gap to Fograph, which may significantly sacrifice the serviceability of the forecasting results. Such prediction advantages of Fograph origins from our degree-aware quantization choice on exploiting the data sensitivity in GNN inference, enabling both latency reduction and accuracy reservation. §.§ Optimization Implication This subsection investigates the performance boost of each individual optimization technique introduced in <ref>, using SIoT dataset and the cluster described in <ref>. Metadata profiler. We first show the profiler's effectiveness by comparing the estimated execution time predicted by the profiler with the real execution time measured in actual inferences. Fig. <ref> depicts the profiling and prediction results for different GNN models and different datasets, collected on a fog node of type B. The solid line indicates an exact equivalence between the measurement and the prediction, while the dashed lines mean a relative difference of ±10% between them. All data points are encompassed by the dashed lines, which implies a small and bounded variance between actual execution latency and estimates, demonstrating the effectiveness of the proxy-guided profiling methodology. Moreover, the figure shows that if a graph has a higher execution latency than the other one, their latency estimates still preserve the ordering. This demonstrates that the profiler's latency prediction is an appropriate tool to guide IEP's data placement optimization. Inference execution planner and communication optimizer. Fig. <ref> normalizes the latency of four approaches: fog, Fograph, and its ablated variants without inference execution planner (IEP) or communication optimizer (CO). Specifically, Fograph without IEP replaces its data placement strategies with the one in the straw-man fog approach, and Fograph without CO simply applies direct data transmission between data sources and fog nodes. All other configurations keep unaltered for these two ablated counterparts except for their respectively targeted modules. From a global view, we observe that both modules make sense for performance improvement, while a combination promotes higher speedups. The IEP and CO take similar effects but their focuses are different as indicated by their execution statistics in Fig. <ref> (right). 
The former essentially solves a graph layout to maximize the parallelization and thus saves the overall latency on the execution side given a smaller execution ratio, whereas the latter centers around the data uploading aspect and validly reduce the transmission costs, which lowers the proportion of the communication side. The almost orthogonal optimization directions make their incorporation fully benefit the performance gains. Workload scheduler. To appraise the responsiveness of Fograph in adapting to dynamic load fluctuation, we target production workload traces from Alibaba <cit.>. The trace contains background CPU load variation running on clusters and we select a snapshot of 1000 timestamps to exert pressures on fog nodes, as shown in Fig. <ref> (top). The associated behaviors of Fograph with/without workload scheduler are recorded in Fig. <ref> (bottom). Note that the scheduler-less version is a non-dynamic counterpart implemented by deactivating the workload scheduler module in Fograph and maintaining the original IEP data placement all the time. Given the superiority of IEP, it represents a fair performance produced by non-dynamic load-balancing methods like fixed division of computing pressure according to computing capability and communication distance. When the fog nodes share similar load levels (timestamp [0, 150]), the two versions report close inference latency. As node 4's background load increases, the latency of the w/o scheduler version rapidly climbs while the integrated Fograph keeps relatively steady. Particularly, the serving latency can exceed 1s in the extreme situation without scheduler, while Fograph keeps immersing below 0.9s. At the other end, when node 4's load diminishes, its computing capability is released so that Fograph is able to further lessen the costs and achieve up to 18.79% latency reduction over the ablated copy. Overall, we observe that the lack of scheduler leads to a latency trajectory going after the overloaded nodes' trace changes and magnifies load fluctuation in costs. Instead, by employing the workload scheduler, Fograph can adjust to resource dynamics and provides continuously stable serving. Moreover, we observe that Fograph acts agilely to load dynamics for low-latency service provisioning. With a very mild communication delay of ∼0.2s between the metadata server and fog nodes, our measurements report an average of 4.32s from imbalance detection to load migration. §.§ Scalability This subsection investigates the scalability of Fograph using much larger synthetic RMAT datasets, with varying fog nodes of type B. Fig. <ref> shows Fograph's serving costs over the models and datasets as the number of fogs increases, where we observe the latency effectively shrinks with the augment of computing resources. Employing multiple fogs (>2) performs much better than merely using one fog for every size of graphs, demonstrating Fograph's efficient resource utilization with parallel and distributed execution. Upon larger graphs (e.g. RMAT-100K), Fograph can gain clearly much performance improvement when appending additional fog nodes, suggesting Fograph's capability in handling heavier serving workload. We remark that the serving costs converge for all graphs when there are ample fog nodes, which implies that Fograph can readily afford million-edge graphs with just six moderate fog nodes. §.§ GPU Enhancement This subsection explores how GPU enhances Fograph towards serving performance. 
We equip each fog node of type B with an Nvidia GTX 1050 GPU, and run GCN inference over the synthetic RMAT-100K dataset. Fig. <ref> visualizes the achieved latency of the straw-man fog method and Fograph, with and without GPU. For a single fog node, both fog and Fograph fail with GPU and encounter an out-of-memory (OOM) error, due to the limited GPU memory. By extending to multi-fog, however, the GPU clearly shows its advantage over the CPU, and makes progressive improvements as the number of fog nodes grows, demonstrating Fograph's enhanced performance with hardware accelerators. We also find that serving with a small number of fog nodes (e.g. with 2 fog nodes) exhibits a broader performance gap between solutions with and without GPU. This implies that GPUs are particularly expedient when the available fog resources are inadequate for the targeted GNN services. Moreover, Fograph without GPU can reach even lower latency compared with the straw-man fog with GPU (with >3 fog nodes), proving the remarkable performance advance from the proposed set of optimizations. § RELATED WORK As an intersection of GNN processing and fog infrastructure, Fograph provides a group of GNN-generic designs for maximizing the architectural advantages of fog computing. Next we discuss these two lines of work. Accelerating GNN processing. A growing body of recent works from both the research <cit.> and industry communities <cit.> has focused on reaching high-performance GNN processing by optimizing different execution levels. From a hardware perspective, a number of domain-specific processors <cit.> are designed by either customizing the microarchitectures to the GNN execution semantics or altering hardware-software interfaces for graph data reading/writing. The aggregate and update functions are optimized separately according to their access patterns, data reusability, and computation intensities. Fograph is orthogonal to these fundamental efforts and can directly benefit from their foundational improvements. From a library perspective, PyG <cit.> and DGL <cit.> are representative efforts that provide GNN-tailored API support atop neural network execution engines like PyTorch and TensorFlow. The message-passing model is exploited for unified programmability and the matrix operations are particularly optimized with specialized kernels. Fograph utilizes these libraries as the backend so as to fully benefit from their underlying optimizations. From a system perspective, GNN execution is treated and optimized as an integrated production process considering its global lifecycle from graph storage <cit.>, data loading <cit.>, memory management <cit.> to multi-GPU execution <cit.>. Miscellaneous systems have been proposed to address some of these aspects, e.g., P^3 <cit.> for scalable processing and Dorylus <cit.> for affordable training. Nonetheless, a majority of the systems focus only on training rather than inference, and all target single powerful machines or cloud environments, ignoring the inherent data uploading overhead in the serving pipeline. Instead, Fograph capitalizes on the more realistic and practical scenario of GNN serving, and emphasizes and implements the unique potential of fog computing to yield high performance. Fog-enabled DNN inference. To achieve efficient intelligent services close to the end, a line of works <cit.> has explored collaborative execution with the assistance of fog computing.
By splitting and mapping the input workloads or model parameters, these works parallelize CNN or RNN inferences over a cluster of fog nodes to meet dedicated service-level objectives. A few constraints are usually added in accordance with applications requirements such as execution deadline <cit.> and energy efficiency <cit.>. Fograph also lies in this category of work by first bringing GNN workload into the fog-enabled intelligence. Further, Fograph tailors its design to bridge the gap between the unique characteristics of GNN execution and the distributed and heterogeneous environments of fog, which notably improves the overall performance beyond cloud and vanilla fog deployment. § DISCUSSION AND FUTURE WORK Fograph is a pilot effort in bringing fog computing to GNN processing systems and yet has certain limitations. Exploiting inference context. Fograph has explored leveraging the unique characteristics of GNN and graph, e.g. degree-aware quantization, to boost the serving performance. As a result, Fograph's performance largely relies on the size and complexity of input data volume, where input graph data with larger vector sizes and sparser features can further highlight its superiority over cloud solutions. However, the rich semantics of other inference contexts still call for exploitation including input graph properties, application workflow patterns, and inference functions specialties. Recent works on GNN performance characterizations <cit.> can provide useful insights to enhance Fograph and guide the parameters tuning and module altering. Complex heterogeneity and dynamics. The heterogeneity and dynamics considered in Fograph are mainly on the execution side, i.e. the diverse and fluctuated computing capabilities. The allocated bandwidth is implicitly assumed to be relatively stable for all fog nodes. Although it is realistic in many fog computing cases (e.g. in a closed factory or campus <cit.>), real-world serving could confront more complicated situations where the connections between sensors and fog nodes are varied and even fail <cit.>. In these cases, we may ignore those few vertices with ultra-high transmission delay during data collection in order to stabilize the overall serving latency. Consequently, the long-tail distribution of data collection time is squeezed and cloud serving could act as an efficient complement to Fograph for more robust serving performance. Scalability and fog-cloud collaboration. Fograph's scheduling employs a centralized metadata server to attain fair resource utilization, which in principle tunes the communication-computation tradeoff for the distributed runtime. While it works well for typical moderate-scale deployment in fog scenarios, it may fall short in scaling up with very huge graphs and massive fog nodes due to the single-point scheduler. To address that, one of the potential solutions is to apply a lightweight, decentralized data placement strategy for inference execution planning such as hashing <cit.>. Alternatively, we may utilize the abundant cloud resources as a supplement to accomplish scalable GNN processing, e.g. by designing a fog-cloud collaboration mechanism that uses fog nodes to collect and compress data and cloud servers to accommodate full-batch GNN processing at scale. Other optimizing objectives. The proposed system concentrates on rendering GNN model serving in a real-time manner, whereas end deployment may tackle additional Service-Level Agreements (SLAs) like server costs and memory footprint <cit.>. 
The deadline can be integrated as a constraint in the workload scheduler, while the memory issue can be accounted for by redesigning the problem formulation in IEP. Composite SLAs require supplementary scheduling in jointly optimizing the system behaviors with multiple objectives. Our future work intends to enhance Fograph to meet additional types of objectives, e.g. energy consumption and privacy preservation. § CONCLUSION In this paper, we present Fograph, a distributed GNN inference system that addresses real-time GNN serving with fog computing. Fograph introduces a brand new serving pipeline that allows exploiting the diverse resources from distributed fog servers for resilient service rendering. Through a heterogeneity-aware inference execution planner and adaptive workload scheduler that effectively maps the input graph over multiple fog nodes, Fograph can maximize the parallelization while simultaneously adapting to resource fluctuation. By employing a GNN-specific communication optimizer, Fograph is able to deliver higher performance over the state-of-the-art cloud serving and basic fog deployment, without sacrificing the overall system’s accuracy and validity. Since GNNs have been widely adopted in a broad range of IoT and fog scenarios, the proposed system and its workflow can serve as a basis for future analysis and optimization on specific GNN-based services. IEEEtran § MATHEMATICAL PROOFS §.§ Proof of Theorem 1 We prove Theorem 1 by reducing 𝒫 to the Minimum Makespan Scheduling Problem (MMSP). First, we consider a special case of 𝒫 by forcing the fogs' allocated bandwidth and computing capability to be identical. Suppose an input graph that comprises isolated vertices, to collect and compute inference of these vertices can thus be regarded as individual jobs. Regarding 𝒫, the objective is to assign the vertices to the fogs such that the maximum job processing time on fogs is minimized, which is exactly a MMSP problem. Given that MMSP has been proved to be NP-hard when there are two or more machines <cit.>, 𝒫 is NP-hard when n ≥ 2, where n is the number of fogs. §.§ Proof of Theorem 2 We give the compression ratio of degree-aware quantization (DAQ) by treating a vertex's degree as a discrete random variable D. First, we build the degree distribution of a given graph 𝒢 = (𝒱, ℰ) by counting the frequency of its vertices' degrees, and subsequently obtain a Cumulative Distribution Function (CDF) F_D(d) with respect to vertex degree. Formally, the CDF defines the probability that the vertex degree D takes a value less than or equal to a given threshold d: F_D(d) = 𝐏(D ≤ d). Using CDF, we can derive the percentage r_i of vertices that locate in the four intervals divided by the DAQ triplet ⟨ D_1, D_2, D_3 ⟩: r_i = F_D(D_i+1) - F_D(D_i), i ∈{0, 1, 2, 3}. where we supplement that D_0 = 0 and D_4 = D_max. Particularly, remark that F_D(0) = 0 and F_D(D_max) = 1, thus we have r_0 = F_D(D_1) - F_D(D_0) = F_D(D_1), r_3 = F_D(D_4) - F_D(D_3) = 1 - F_D(D_3). The total bits B^quant of the quantized feature vectors is ∑_i r_i |𝒱|q_i, and substitute Equation (<ref>) we have B^quant = ∑_i [F_D(D_i+1) - F_D(D_i)]|𝒱|q_i, i ∈{0, 1, 2, 3}. With Equations (<ref>) and (<ref>), we extrapolate the above B^quant to B^quant = |𝒱|[q_3 - ∑_i F_D(D_i)(q_i - q_i-1)], i∈{1,2,3}. Given the original feature bitwidth Q, the overall bitwidth B^origin of 𝒢's original features is |𝒱|Q, and hence the compression ratio directly follows B^quant/B^origin = q_3/Q - 1/Q∑_i F_D(D_i)(q_i - q_i-1), i∈{1,2,3}. 
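As a concrete illustration of this derivation, the following short Python sketch (our own, not part of the Fograph codebase) evaluates the compression ratio B^quant/B^origin from an empirical degree distribution; the degree thresholds, the bitwidth triplet, and the synthetic heavy-tailed degrees in the example are hypothetical placeholders.

```python
import numpy as np

def daq_compression_ratio(degrees, thresholds, bitwidths, full_precision_bits=32):
    """Compression ratio B_quant / B_origin of degree-aware quantization.

    degrees   : iterable of vertex degrees of the input graph
    thresholds: (D1, D2, D3) degree thresholds splitting vertices into 4 bins
    bitwidths : (q0, q1, q2, q3) bits assigned to each bin (low- to high-degree)
    """
    degrees = np.asarray(degrees)
    # Empirical CDF F_D(d) = P(D <= d), evaluated at the three thresholds.
    cdf = [np.mean(degrees <= d) for d in thresholds]
    q0, q1, q2, q3 = bitwidths
    # B_quant / B_origin = q3/Q - (1/Q) * sum_i F_D(D_i) * (q_i - q_{i-1})
    ratio = q3 / full_precision_bits
    for F, q_i, q_prev in zip(cdf, (q1, q2, q3), (q0, q1, q2)):
        ratio -= F * (q_i - q_prev) / full_precision_bits
    return ratio

# Example with an illustrative (hypothetical) configuration:
rng = np.random.default_rng(0)
degrees = rng.zipf(2.0, size=16216)   # heavy-tailed degrees as a stand-in graph
print(daq_compression_ratio(degrees, thresholds=(2, 8, 32), bitwidths=(4, 8, 16, 32)))
```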
§ DETAILS OF THE USED DATASETS §.§ Details of Real-World Datasets Three real-world graph datasets are employed in our evaluation. The first is SIoT <cit.>, a graph of socially connected Internet-of-Things collected in Santander, Spain. It includes 16216 devices as vertices with 146117 social links, and each vertex attaches a 52-dimension feature that identifies its properties such as the device's type, brand, and mobility. Each device is managed by an organization or a person, and thus yields a label of whether public or private. The GNN serving task over SIoT is to identify the devices' labels, by inferring from their features and relationships. The second is Yelp, a subgraph extracted from its complete back-up <cit.>, which collects reviews for a set of hotels and restaurants in Chicago. A vertex in the Yelp graph is a review comment, represented by a Word2Vec <cit.> vector, and each connection indicates the two reviews share a common history like they are posted by the same user. The comments are separated into two classes: spam reviews that produce fake and filtered content, and benign reviews that are not filtered. We run inference on Yelp to identify the spammers. The third is PeMS, which is collected by Caltrans Performance Measurement System <cit.> in San Francisco Bay Area, containing the traffic sensors' every-30s records on total flow, average speed, and average occupancy. Its topology is exactly the road network, comprising 307 loop sensors and 340 edges. Unlike the above two datasets for prediction on a single moment, PeMS associates the task of forecasting every-5min flows in an hour (12 timestamps) and is used in our case study (IV-C). §.§ Details of Synthetic Datasets The synthetic datasets are created by RMAT <cit.>, a widely-adopted graph generator that is able to quickly generate realistic graphs. Specifically, we set the number of vertices in {20K, 40K, 60K, 80K, 100K}, respectively. To capture the sparsity in realistic graphs, we use the graph density of SIoT, 0.11%, to steer the generation of edges, and accordingly produce different edge numbers of {199K, 799K, 1.79M, 3.19M, 4.99M}. In addition, we use Node2Vec <cit.> to generate the vertices features in 32 dimensions, and community clustering to induce vertices' labels in 8 classes.
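The snippet below sketches this recipe in Python (ours, for illustration only): the edge count is derived from a target density, a uniform random graph stands in for the RMAT generator, and random Gaussian features and labels stand in for the Node2Vec embeddings and community-derived classes.

```python
import networkx as nx
import numpy as np

def make_synthetic_graph(num_vertices=20_000, density=0.0011,
                         feat_dim=32, num_classes=8, seed=0):
    """Sketch of the synthetic-dataset recipe with placeholder generators."""
    rng = np.random.default_rng(seed)
    # Number of undirected edges implied by density |E| / (|V|(|V|-1)/2).
    num_edges = int(density * num_vertices * (num_vertices - 1) / 2)
    # Uniform random graph as a simple stand-in for the RMAT generator.
    g = nx.gnm_random_graph(num_vertices, num_edges, seed=seed)
    features = rng.standard_normal((num_vertices, feat_dim)).astype(np.float32)
    labels = rng.integers(0, num_classes, size=num_vertices)
    return g, features, labels

g, x, y = make_synthetic_graph()
print(g.number_of_nodes(), g.number_of_edges())
```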
http://arxiv.org/abs/2307.01528v1
20230704073331
Extended Dynamical Causal Modelling for Phase Coupling (eDCM PC)
[ "Azamat Yeldesbay", "Silvia Daun" ]
physics.data-an
[ "physics.data-an", "physics.bio-ph" ]
§ (1) OVERVIEW § TITLE Extended Dynamical Causal Modelling for Phase Coupling (eDCM PC) § PAPER AUTHORS 1. Yeldesbay, Azamat; 2. Daun, Silvia; § PAPER AUTHOR ROLES AND AFFILIATIONS 1. Role: designing, implementing and testing the program code, writing the article and the documentation, creation of the figures. Affiliation: (a) Institute of Zoology, University of Cologne, Cologne, Germany (b) Research Centre Jülich, Institute of Neuroscience and Medicine, Cognitive Neuroscience (INM-3), 52425 Jülich, Germany 2. Role: Supervision, writing the article and the documentation, revising the text and the figures. Affiliation: (a) Research Centre Jülich, Institute of Neuroscience and Medicine, Cognitive Neuroscience (INM-3), 52425 Jülich, Germany (b) Institute of Zoology, University of Cologne, Cologne, Germany § ABSTRACT We present a software tool - extended Dynamic Causal Modelling for Phase Coupling (eDCM PC) - that is able to estimate effective connectivity between any kind of oscillating systems, e.g. distant brain regions, using the phase information obtained from experimental signals. With the help of a transformation function eDCM PC can measure observable-independent coupling functions within and between different frequency bands. eDCM PC is written in the numerical computing language MATLAB as an extension to Dynamic Causal Modelling (DCM) for phase coupling (Penny et al. 2009) <cit.>. eDCM PC is available on GitLab under the GNU General Public License (Version 3 or later). § KEYWORDS phase oscillators; coupling functions; phase reduction; electroencephalography (EEG); magnetoencephalography (MEG); oscillatory signals § INTRODUCTION An oscillatory process is a widespread phenomenon in nature that occurs in physical and biological systems <cit.>. Oscillatory and rhythmic activities play a prominent role in the interaction between biological systems, especially in the communication between brain areas <cit.>. Therefore it is of particular interest to understand how the interaction between the elements of these oscillating systems, e.g. between distant brain regions, occurs. In this context phase reduction is a useful method to analyze a large oscillating network by representing every oscillating system with one variable - the phase. In the theory of synchronization the interaction between oscillating systems is analyzed by the model of weakly coupled phase oscillators <cit.>: φ̇_i = ω_i + Q_i(φ_1,…,φ_N), i=1,…,N, where φ_i is the phase of an oscillating system (an oscillator) i, ω_i is the natural frequency of the oscillator i, and Q_i is the interaction function (coupling function) with the other oscillators. For weak coupling the coupling function Q_i in Eq. (<ref>) can be simplified as the sum of the pairwise coupling functions q_i,j: φ̇_i = ω_i + ∑_j=1, j≠ i^Nq_i,j(φ_i,φ_j). The pairwise coupling functions q_i,j, on the other hand, can be represented as 2D surfaces (Fig. <ref>). One can obtain the coupling between the phase oscillators in the model Eq. (<ref>) by finding the coupling functions q_i,j directly from the experimental data <cit.>.
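For intuition, the following Python sketch (ours; eDCM PC itself is written in MATLAB and infers general Fourier-series couplings from data rather than assuming a functional form) integrates the pairwise model above for two oscillators with an illustrative sinusoidal choice of the coupling function.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Natural frequencies and an illustrative pairwise coupling choice
# q_ij(phi_i, phi_j) = eps[i, j] * sin(phi_j - phi_i)  (a simple Kuramoto-type term).
omega = np.array([2 * np.pi * 1.0, 2 * np.pi * 1.1])   # rad/s
eps = np.array([[0.0, 0.3],
                [0.5, 0.0]])                            # eps[i, j]: influence of j on i

def rhs(t, phi):
    # Evolution equation: dphi_i/dt = omega_i + sum_j q_ij(phi_i, phi_j)
    dphi = omega.copy()
    for i in range(len(phi)):
        for j in range(len(phi)):
            if i != j:
                dphi[i] += eps[i, j] * np.sin(phi[j] - phi[i])
    return dphi

sol = solve_ivp(rhs, (0.0, 50.0), y0=[0.0, np.pi / 2], max_step=0.01)
phase_diff = np.mod(sol.y[0] - sol.y[1], 2 * np.pi)
print("final phase difference:", phase_diff[-1])
```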
This method was actively developed in the past decades and used to find the directionality of the couplings, the causal relations and to build dynamical models <cit.>. Kralemann and colleagues <cit.> have shown that the phase extracted from the experimental signals (e.g. by using Hilbert or Wavelet transformation) behaves differently than the phase described in the theoretical model Eq. (<ref>). The difference appears if the limit cycle of the oscillating system, the signal which we are measuring, is not circular, and the oscillatory signal has a non-sinusoidal form. This leads to a non-linear growth of the measured phase, even if the oscillating system has no input, as demonstrated in Fig. <ref>. Moreover, it was shown in <cit.> that this non-linear growth of the measured phase can cause spurious couplings in the system under investigation. The problem was resolved by introducing a transformation function between the observable and the theoretical phases (referred to as the proto- and the true phases in <cit.>). Kralemann and colleagues implemented this approach into their DAMOCO toolbox [The Data Analysis with Models Of Coupled Oscillators (DAMOCO) toolbox can be found here http://www.stat.physik.uni-potsdam.de/~mros/damoco.html] by approximating the transformation function from the observable phase to the theoretical phase using the distribution of the observable phase over a long period of time. The software tool presented in this metapaper - extended Dynamical Causal Modelling for Phase Coupling (eDCM PC) - addresses the problem of a potential non-linear growth in the observable phase. It is implemented as an extension to Dynamical Causal Modelling for phase coupling (DCM PC) <cit.>. eDCM PC aims at finding the effective coupling in a network of oscillating systems by inferring the coupling function from the measured signals. In particular, the software tool is designed to reconstruct the coupling functions for the cases when the phase is not uniformly distributed, i.e. in the case when the transformation between the observable and the theoretical phases should be taken into account. Moreover, eDCM PC extends DCM PC by allowing to find the coupling between different frequency bands, thereby making it possible to analyze n:m synchronization cases. In contrast to the DAMOCO toolbox, eDCM PC uncovers the transformation functions from the theoretical to the observable phases together with the coupling functions. It uses the capability of DCM, based on Bayesian inference, to analyze and compare several possible network structures at the same time. The theoretical results related to eDCM PC and the numerical testing on synthetic data sets were presented in our previous work <cit.>. Since its first presentation in <cit.> eDCM PC was supplemented with a documentation, additional tests and plotting functions for data sets and results, and was rearranged in a user-friendly structure. In this work we give a detailed description on the architecture and usage of eDCM PC, as well as provide testing examples. § IMPLEMENTATION AND ARCHITECTURE The architecture of eDCM PC is presented in Fig. <ref> as a data flow with different interface levels: data preprocessing level, user-interface level, eDCM PC level, and SPM/DCM level. We describe the levels in bottom up direction, namely starting from the SPM/DCM level up to the eDCM PC level by indicating specific implementation details. Thereafter, in the usage part, we describe the user-interface level. 
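The effect discussed above is easy to reproduce: extracting the phase of a non-sinusoidal but otherwise autonomous oscillation with the Hilbert transform yields an observable phase whose instantaneous frequency fluctuates within each cycle, even though the underlying theoretical phase grows uniformly. A minimal Python sketch (ours, not part of the toolbox):

```python
import numpy as np
from scipy.signal import hilbert

# A non-sinusoidal oscillation: a uniformly rotating theoretical phase phi(t),
# observed through a waveform containing a higher harmonic.
fs, f0 = 1000.0, 1.0
t = np.arange(0.0, 20.0, 1.0 / fs)
phi = 2 * np.pi * f0 * t                         # theoretical phase, grows linearly
x = np.cos(phi) + 0.6 * np.cos(2 * phi + 0.8)    # non-sinusoidal observed signal

# Observable (proto-)phase from the analytic signal.
theta = np.unwrap(np.angle(hilbert(x)))

# The instantaneous frequency of theta fluctuates within each cycle,
# although the underlying phase velocity is constant (edges trimmed to
# avoid Hilbert-transform boundary artifacts).
inst_freq = (np.diff(theta) * fs / (2 * np.pi))[1000:-1000]
print("observable frequency range: %.2f .. %.2f Hz" % (inst_freq.min(), inst_freq.max()))
```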
§.§.§ Dynamic causal modelling The structure of the eDCM PC is imposed by the architecture of the Dynamic Causal Modelling (DCM) <cit.>. DCM is an opensource toolbox within the Statistical Parametric Mapping (SPM) software <cit.>, developed to analyze connectivity in the brain. DCM can work with different modalities (functional magnetic resonance imaging (fMRI), EEG, MEG, local field potentials (LFP), functional near-infrared spectroscopy (fNIRS)) <cit.>. The usage of a wide variety of modalities is provided by a particular feature of the DCM architecture that allows the modification of its different parts without changing the common, basic structure of the software <cit.>. DCM uses Bayesian inference to find the parameters of the system and the coupling between the brain regions. The elements of the common structure of DCM are the data preprocessing component (Fig. <ref>,a), the modelling component (Fig. <ref>,f), and the statistical component (Fig. <ref>,i). These elements can be modified with respect to the modality and the modelling, however the interaction between them remains the same for all versions of DCM. Initial raw data are transformed into time courses of observables, e.g. into observable phases θ̂ (Fig. <ref>,a). In the modelling component (Fig. <ref>,f) synthetic signals are generated by numerically integrating the model with an initial guess for the system parameters (priors). The modelling component consists of the evolution equations (Fig. <ref>,g), a system of ODEs that represents the hidden state of the system and the connections between parts of the system, and the observation equations (Fig. <ref>,h), functions that express the observables using the hidden states. In the statistical component of DCM (Fig. <ref>,i) the observables θ̂ obtained from the data and the observables θ from the synthetic signals are "compared" by means of the Bayesian model inversion, namely Variational Laplace (VL) (Fig. <ref>,j). In a simplified manner, the result of this comparison gives the correction for the system parameters. Using these corrected parameter values the modelling component generates new synthetic signals. This procedure repeats until the required convergence between the measured and the synthetically generated observables is reached (Fig. <ref>,k). §.§.§ Modelling component eDCM PC introduces a new modelling component. As mentioned before the modelling component contains the evolution equation and the observation equation. The extended evolution equation is defined as φ̇_i = ω_i + ∑_j=1, j≠ i^N_q q_ij(φ_i,φ_j), where the pairwise coupling functions are constructed in the way to detect `n:m` synchronization q_ij(φ_i,φ_j) = ∑_n,m = -N_q, n,m≠ 0^N_q Q_ije^i(nφ_i + m φ_j), with complex Fourier coefficients Q_ij truncated at a given number of terms N_q. In the code, however, the coupling functions q_ij are represented with real-valued Fourier coefficients as q_ij(φ_i,φ_j) = ∑_n=1^N_q∑_m=1^N_q[ a^(n,m)_ijcos(nφ_i)cos(mφ_j) + b^(n,m)_ijcos(nφ_i)sin(mφ_j) +c^(n,m)_ijsin(nφ_i)cos(mφ_j) + d^(n,m)_ijsin(nφ_i)sin(mφ_j)]. To introduce the transformation function between the theoretical and observable phases (the forward transformation) we use the observation equation. Following <cit.> we define the transformation as follows θ_i=Θ(φ_i) = φ_i + ∑_k=-N_ρ, k≠ 0^N_ρ(e^ikφ_i-1), also truncated at a given number of terms N_ρ. 
The forward transformation function is represented in the code with real-valued coefficients: θ_i = Θ(φ_i) = φ_i + ∑_k=1^N_ρ1/k[α_i^(k)sin(nφ_i) - β_i^(k)cos(nφ_i) + β_i^(k)]. Thus, in eDCM PC the hidden states are the theoretical phases φ_i and the observables are the observable phases θ_i defined for every region and every frequency band. The resulting reconstructed coupling and transformation functions are (1) the real-valued coefficients (matrices) a_ij^(n,m), b_ij^(n,m), c_ij^(n,m), d_ij^(n,m) defined for every connected pair of regions and (2) the real-valued coefficients (vectors) α_i^(k) and β_i^(k) defined for every region, respectively. §.§.§ Initial approximation of the observation equation Good convergence is provided by a good initial guess of the parameters of the system. Therefore, the initial approximation of the transformation function, i.e. the observation equation, is essential for a successful reconstruction of the system parameters and the coupling functions. eDCM PC uses the approach presented in <cit.> to approximate the inverse transformation function φ=Φ(θ), as shown in Fig. <ref>, panels (a),(b), and (c). The average distribution of the observable phases θ̂_i over a long period of time for a region i can be approximated by a 2π periodic function σ_i(θ) with the mean equal to 1. (Fig. <ref>, (a) and (b)), and be written as the following Fourier series <cit.> σ_i(θ) = 1 + ∑_k=1^N_σα̂_i^(k)cos(k φ_i) + β̂_i^(k)sin(k φ_i), where the sum is truncated at N_σ. The integral of σ_i(θ) over one period gives us the inverse transformation function Φ_i(θ) = θ_i + ∑_k=1^N_σ1/k[α̂_i^(k)sin(kθ_i) - β̂_i^(k)cos(kφ_i) + β̂_i^(k)], which is the inverse function of the forward transformation function Θ(φ) defined in Eq. (<ref>) (Fig. <ref>, (c)). Note, that the truncation orders N_σ and N_ρ are, in general, different. After obtaining the coefficients α̂ and β̂ by approximating the average distribution of the observable phases σ_i(θ), eDCM PC finds the initial approximation of the coefficients α and β of the forward transformation Θ(φ). §.§.§ Updating the initial conditions In every step of the Bayesian estimation the updated parameters α and β change the transformation function Θ(φ). Therefore the initial conditions θ̂_0 of the observations should also be recalculated in every step to obtain new initial conditions for the evolution equation φ_0 (Fig. <ref>, (d)). For this the updated inverse transformation function Φ(θ) is needed. Therefore, eDCM PC recalculates the coefficients α̂ and β̂ of Eq. (<ref>) using the updated coefficients α and β after every step of the Bayesian estimation. §.§.§ Usage The data flow of the eDCM PC is presented in Fig. <ref>. The eDCM PC package works with time courses of the observable phases obtained from oscillatory signals. The observable phase can be extracted using the Hilbert or Wavelet transformation applied to the raw signals filtered within a specific frequency band (Fig. <ref>, (a)). In the user-interface level the usage of eDCM PC comes down to the initialization of the elements of the structure (Fig. <ref>, (b)) and calling the function (Fig. <ref>, (d)). The code of eDCM PC contains an example script that runs the reconstruction procedure using a pre-simulated data set. For details of the structure and calling the function we refer to the documentation included in the repository of eDCM PC. Here, we discuss only essential parts of the structure. 
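Before turning to the structure fields, the initial approximation described above can be summarized in a short numerical sketch (ours, in Python rather than the toolbox's MATLAB): the Fourier coefficients of the phase density σ(θ) are estimated as sample averages and integrated to give the inverse transformation Φ(θ).

```python
import numpy as np

def inverse_transformation(theta_samples, n_sigma=5):
    """Estimate Phi(theta) from observed phases.

    Fits sigma(theta) = 1 + sum_k a_k cos(k theta) + b_k sin(k theta) to the
    empirical phase density (normalized to mean 1) and integrates it over one
    period, returning a callable approximation of Phi.
    """
    theta = np.mod(theta_samples, 2 * np.pi)
    # Fourier coefficients of the density (mean-1 normalization) estimated as
    # sample averages: a_k = 2 <cos(k theta)>, b_k = 2 <sin(k theta)>.
    a = np.array([2 * np.mean(np.cos(k * theta)) for k in range(1, n_sigma + 1)])
    b = np.array([2 * np.mean(np.sin(k * theta)) for k in range(1, n_sigma + 1)])

    def Phi(th):
        th = np.asarray(th, dtype=float)
        out = th.copy()
        for k in range(1, n_sigma + 1):
            out += (a[k - 1] * np.sin(k * th)
                    - b[k - 1] * np.cos(k * th)
                    + b[k - 1]) / k
        return out

    return Phi

# Quick check with a known distortion theta = phi + 0.3*sin(phi), phi uniform:
phi = np.random.default_rng(1).uniform(0, 2 * np.pi, 200_000)
Phi = inverse_transformation(phi + 0.3 * np.sin(phi))
# Phi(phi + 0.3*sin(phi)) recovers phi approximately, up to truncation error.
```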
The observable phases θ̂ should be stored in the structure in the fields , where is the trial index (Fig. <ref>, (b)). All trials should have the same length with an equally distributed time step. The supposed network structure of the system is defined in the binary adjacency matrix (Fig. <ref>, (c)). DCM allows the definition of several possible network structures and finds the structure best fitting the data using Bayesian model comparison. In that case is an array of matrix cells, and different structures should be defined in different cells. For details of the definition of the structure, please refer to the documentation of DCM in Chapters 41,42, and 43 of <cit.>. The elements of the modelling component are defined in the substructure , such as the mean frequency ω_i () and the frequency band () for every region, the truncation orders of the Fourier series for the coupling functions N_q (), for the forward transformation functions N_ρ (), and for the inverse transformation functions N_σ (). The initialized structure is used as an argument to call the function (Fig. <ref>, (d)). This function performs the initial approximation of the inverse transformation function Φ(θ), defines the priors for all parameters of the system (), and assigns the functions of the model in the substructure such as: * state activities generation function ; * evolution equation function ; * observable equation function ; * linear observation function , which is a dummy function for compatibility with other versions of DCM. The function returns the values without change. Thereafter, calls DCM routines with the structure as the input argument to start the Bayesian estimation of the parameters of the system (Fig. <ref>, (e)). The result of the function is an extended structure that contains the estimated values of the system parameters in the substructure (Fig. <ref>, (k)), namely the matrices of the Fourier coefficients of the coupling functions a_ij^(n,m), b_ij^(n,m), c_ij^(n,m), d_ij^(n,m) in , , , respectively, of the forward transformations in and , which correspond to α_i^(k) and β_i^(k) respectively. §.§.§ Installing and testing eDCM PC is written in MATLAB and uses Dynamic Causal Modelling (DCM), which is included in the Statistical Parametric Mapping (SPM) package <cit.>. The SPM package can be downloaded here <https://www.fil.ion.ucl.ac.uk/spm/software/download/>. The minimum version of SPM used by eDCM PC is SPM12. eDCM PC can either be downloaded or cloned from our GitLab repository <https://gitlab.com/azayeld/edcmpc>. For installation follow the instruction given in the documentation of the eDCM PC package. After installation, eDCM PC can be tested by calling the script. § QUALITY CONTROL The users can easily test whether the installation was correct by calling a script. The package includes two other example scripts, which are also intended to test the quality of the reconstruction. In the first example, eDCM PC successfully reconstructs the coupling and the transformation functions from the signals of the synthetically simulated system of two weakly coupled phase oscillators. In this example the parameters of the coupling are chosen such that one coupling is zero, i.e. making the coupling unidirectional. Moreover, an artificial distortion (transformation) is added to the theoretical phases in order to simulate the effect of non-linear growing of the observable phase. Thus, the parameter values reconstructed by eDCM PC can be compared with the original parameter values used by the simulation. 
The second example analyzes two uni-directionally coupled neural mass models, namely, Jansen and Rit models, which simulate EEG-like signals. In this case neither the coupling function nor the non-linear relation between the observable and the theoretical phases are known, which simulates the real case scenario. In this example, eDCM PC demonstrates successful reconstruction of the coupling functions and shows the presence of a non-linear relation between the observable and theoretical phases in the neural mass models. The eDCM PC package includes several routines which allow testing of the data set before running the reconstruction procedure and examining the quality of the reconstruction afterwards: * Routines that allow the analysis of the spectral relation between two regions by plotting an approximated 2D Fourier spectrum of the observable phases. These routines can be used to analyze the data before the reconstruction procedure to set the truncation orders of the Fourier series ( and ). * Routines to plot the distribution of the observable phases together with the initial approximation of the inverse transformation function and the final reconstructed inverse transformation function. These routines can be used to set the value of the parameter , and test the final reconstructed inverse function. * Routines to plot the reconstructed pairwise coupling functions together with the measured data points. These functions are useful to examine the convergence after the reconstruction procedure. Since the coupling functions q_ij(φ_i,φ_j) and the measured data points (the measured observable phases θ̂_i and θ̂_j) are in different domains, there are two possibilities to compare them: (1) by representing the data points on the domain of the theoretical phases φ using the inverse transformation functions φ̂_i=Φ_i(θ̂_i) and φ̂_j=Φ_j(θ̂_j), (2) or by projecting the coupling functions onto the domain of the observable phases using the forward transformation functions q_ij(θ_i,θ_j) = q_ij(θ_i=Θ_i(φ_i),θ_i=Θ_i(φ_i)). These functions are also included in the code of the examples. For more details we refer to the documentation of the eDCM PC package. § (2) AVAILABILITY § OPERATING SYSTEM * Windows (=XP 32bit, SP2), * Linux (=Kernel 2.4.x or 2.6.x and = glibc (glibc6) 2.3.4), * MacOS (=X 10.4.7) Please refer to the system requirements of MATLAB 2007a, which can be found here: <https://de.mathworks.com/support/requirements/previous-releases.html> § PROGRAMMING LANGUAGE The numerical computing language MATLAB. § ADDITIONAL SYSTEM REQUIREMENTS Processor Intel Pentium IV and above, Disk space =1024MB (500MB MATLAB and 350 MB SPM), RAM =1024MB. § DEPENDENCIES eDCM PC needs MATLAB and SPM: * MATLAB (=7.4, R2007a) * Statistical Parametric Mapping (SPM) (SPM12, = Update 6225) § LIST OF CONTRIBUTORS Please refer to the list of authors. 
§ SOFTWARE LOCATION: Archive Name: extended Dynamic Causal Modelling for Phase Coupling (eDCM PC) Persistent identifier: <https://doi.org/10.5281/zenodo.5782819> Licence: Creative Commons Attribution 4.0 International Publisher: Azamat Yeldesbay Version published: Version 1 Date published: 15/12/2021 Code repository Name: GitLab Persistent identifier: <https://gitlab.com/azayeld/edcmpc> Licence: GPLv3 <https://gitlab.com/azayeld/edcmpc/-/blob/master/LICENSE> Date published: 18/02/2019 § LANGUAGE English § (3) REUSE POTENTIAL Despite being originally developed to analyze the connectivity between oscillating brain regions, eDCM PC makes no assumption about the origin of the oscillating system. Therefore, the reuse potential of eDCM PC is not restricted to brain signals, but it can also be used in other biological as well as physical and mechanical systems. In general, this software can be used to reconstruct the effective connectivity in any network of oscillating systems. The only requirement is the presence of the phase information extracted from the rhythmic signals for every element of the network. The collection of codes of eDCM PC and the testing and example scripts, together with the documentation are uploaded in the GitLab repository <https://gitlab.com/azayeld/edcmpc>, where users can clone and modify the code. The modifications can also be included into the package by sending a merge request. Moreover, the GitLab repository has a build-in support mechanism in the form of an issue tracking system. In the issue tracking system the users of eDCM PC can post their suggestions and problems with the code by opening an issue, which is monitored and will be solved by the authors. § ACKNOWLEDGEMENTS We would like to thank Gereon Fink for useful discussions on this project. § FUNDING STATEMENT This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 431549029 – SFB 1451. § COMPETING INTERESTS The authors have no competing interests to declare. 10 Winfree1980 Winfree, A, 1980 The Geometry of Biological Time. Biomathematics (Berlin). Springer Verlag. ISBN 9783540093732. Pikovsky2001 Pikovsky, A, Rosenblum, M, and Kurths, J, 2001 Synchronization: A Universal Concept in Nonlinear Sciences. Cambridge nonlinear science series. Cambridge University Press. ISBN 9780511075957. Strogatz2004 Strogatz, S, 2004 Sync: The Emerging Science of Spontaneous Order. Penguin Books Limited. ISBN 9780141933184. Buzsaki2006 Buzsaki, G, 2006 Rhythms of the Brain. Oxford University Press. ISBN 9780198041252. Kuramoto1984 Kuramoto, Y, 1984 Chemical Oscillations, Waves, and Turbulence, volume 19. Springer Berlin Heidelberg, Berlin, Heidelberg. ISBN 978-3-642-69691-6. DOI: <https://doi.org/10.1007/978-3-642-69689-3>. Hoppensteadt1997 Hoppensteadt, FC and Izhikevich, EM, 1997 Weakly Connected Neural Networks, volume 126 of Applied Mathematical Sciences. Springer New York, New York, NY. ISBN 978-1-4612-7302-8. DOI: <https://doi.org/10.1007/978-1-4612-1828-9>. Rosenblum2001 Rosenblum, MG and Pikovsky, aS, 2001 Detecting direction of coupling in interacting oscillators. Physical review. E, Statistical, nonlinear, and soft matter physics 64(4 Pt 2): p 045202. DOI: <https://doi.org/10.1103/PhysRevE.64.045202>. Kralemann2007 Kralemann, B, Cimponeriu, L, Rosenblum, M, Pikovsky, A, and Mrowka, R, 2007 Uncovering interaction of coupled oscillators from data. Physical Review E 76(5): pp. 1–4. DOI: <https://doi.org/10.1103/PhysRevE.76.055201>. 
Kralemann2008 Kralemann, B, Cimponeriu, L, Rosenblum, M, Pikovsky, A, and Mrowka, R, 2008 Phase dynamics of coupled oscillators reconstructed from data. Physical Review E 77(6): pp. 1–16. DOI: <https://doi.org/10.1103/PhysRevE.77.066205>. Kralemann2011 Kralemann, B, Pikovsky, A, and Rosenblum, M, 2011 Reconstructing phase dynamics of oscillator networks. Chaos 21(2): pp. 1–10. DOI: <https://doi.org/10.1063/1.3597647>. Stankovski2012 Stankovski, T, Duggento, A, McClintock, PV, and Stefanovska, A, 2012 Inference of time-evolving coupled dynamical systems in the presence of noise. Physical Review Letters 109(2): pp. 1–5. DOI: <https://doi.org/10.1103/PhysRevLett.109.024101>. Kralemann2013 Kralemann, B, Pikovsky, A, and Rosenblum, M, 2013 Detecting triplet locking by triplet synchronization indices. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics 87(5): pp. 1–6. DOI: <https://doi.org/10.1103/PhysRevE.87.052904>. Kralemann2014 Kralemann, B, Pikovsky, A, and Rosenblum, M, 2014 Reconstructing effective phase connectivity of oscillator networks from observations. New Journal of Physics 16(8): p 085013. DOI: <https://doi.org/10.1088/1367-2630/16/8/085013>. Stankovski2015 Stankovski, T, Ticcinelli, V, McClintock, PVE, and Stefanovska, A, 2015 Coupling functions in networks of oscillators. New Journal of Physics 17(3): p 035002. DOI: <https://doi.org/10.1088/1367-2630/17/3/035002>. Stankovski2017 Stankovski, T, Pereira, T, McClintock, PVE, and Stefanovska, A, 2017 Coupling functions: Universal insights into dynamical interaction mechanisms. Reviews of Modern Physics 89(4): p 045001. DOI: <https://doi.org/10.1103/RevModPhys.89.045001>. Pikovsky2018 Pikovsky, A, 2018 Reconstruction of a random phase dynamics network from observations. Physics Letters, Section A: General, Atomic and Solid State Physics 382(4): pp. 147–152. DOI: <10.1016/j.physleta.2017.11.012>. Stankovski2019 Stankovski, T, Pereira, T, McClintock, PV, and Stefanovska, A, 2019 Coupling functions: Dynamical interaction mechanisms in the physical, biological and social sciences. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 377(2160). DOI: <https://doi.org/10.1098/rsta.2019.0039>. Penny2009 Penny, WD, Litvak, V, Fuentemilla, L, Duzel, E, and Friston, K, 2009 Dynamic Causal Models for phase coupling. Journal of Neuroscience Methods 183(1): pp. 19–30. DOI: <https://doi.org/10.1016/j.jneumeth.2009.06.029>. Yeldesbay2019 Yeldesbay, A, Fink, GR, and Daun, S, 2019 Reconstruction of effective connectivity in the case of asymmetric phase distributions. Journal of Neuroscience Methods 317(February): pp. 94–107. DOI: <https://doi.org/10.1016/j.jneumeth.2019.02.009>. Friston2003 Friston, KJ, Harrison, L, and Penny, W, 2003 Dynamic causal modelling. NeuroImage 19(4): pp. 1273–1302. DOI: <https://doi.org/10.1016/S1053-8119(03)00202-7>. Friston2007 Friston, K, Mattout, J, Trujillo-Barreto, N, Ashburner, J, and Penny, W, 2007 Variational free energy and the Laplace approximation. NeuroImage 34(1): pp. 220–234. DOI: <https://doi.org/10.1016/j.neuroimage.2006.08.035>. Friston2007b Friston, K, Ashburner, J, Kiebel, S, Nichols, T, and Penny, W, editors, 2007 Statistical Parametric Mapping. Elsevier. ISBN 9780123725608. DOI: <https://doi.org/10.1016/B978-0-12-372560-8.X5000-1>. David2006 David, O, Kiebel, SJ, Harrison, LM, Mattout, J, Kilner, JM, and Friston, KJ, 2006 Dynamic causal modeling of evoked responses in EEG and MEG. NeuroImage 30(4): pp. 1255–1272. 
DOI: <https://doi.org/10.1016/j.neuroimage.2005.10.045>. Chen2008 Chen, CC, Kiebel, SJ, and Friston, KJ, 2008 Dynamic causal modelling of induced responses. NeuroImage 41(4): pp. 1293–1312. DOI: <https://doi.org/10.1016/j.neuroimage.2008.03.026>. Chen2012 Chen, CC, Kiebel, SJ, Kilner, JM, Ward, NS, Stephan, KE, Wang, WJ, and Friston, KJ, 2012 A dynamic causal model for evoked and induced responses. NeuroImage 59(1): pp. 340–348. DOI: <https://doi.org/10.1016/j.neuroimage.2011.07.066>. Stephan2008 Stephan, KE, Kasper, L, Harrison, LM, Daunizeau, J, den Ouden, HEM, Breakspear, M, and Friston, KJ, 2008 Nonlinear dynamic causal models for fMRI. NeuroImage 42(2): pp. 649–662. DOI: <https://doi.org/10.1016/j.neuroimage.2008.04.262>. Moran2009 Moran, RJ, Stephan, KE, Seidenbecher, T, Pape, HC, Dolan, RJ, and Friston, KJ, 2009 Dynamic causal models of steady-state responses. NeuroImage 44(3): pp. 796–811. DOI: <https://doi.org/10.1016/j.neuroimage.2008.09.048>. Tak2015 Tak, S, Kempny, AM, Friston, KJ, Leff, AP, and Penny, WD, 2015 Dynamic causal modelling for functional near-infrared spectroscopy. NeuroImage 111: pp. 338–349. DOI: <https://doi.org/10.1016/j.neuroimage.2015.02.035>. Stephan2010 Stephan, KE, Penny, WD, Moran, RJ, den Ouden, HEM, Daunizeau, J, and Friston, KJ, 2010 Ten simple rules for dynamic causal modeling. NeuroImage 49(4): pp. 3099–3109. DOI: <https://doi.org/10.1016/j.neuroimage.2009.11.015>. Daunizeau2011 Daunizeau, J, David, O, and Stephan, KE, 2011 Dynamic causal modelling: A critical review of the biophysical and statistical foundations. NeuroImage 58(2): pp. 312–322. DOI: <https://doi.org/10.1016/j.neuroimage.2009.11.062>. David2007 David, O, Harrison, L, and Friston, K, 2007 Neuronal models of EEG and MEG. In Statistical Parametric Mapping, chapter 33, pp. 414–440. Elsevier. ISBN 9780123725608. DOI: <https://doi.org/10.1016/B978-012372560-8/50033-4>.
http://arxiv.org/abs/2307.02328v1
20230705143918
Measurement of $e^+e^-\to pK^-\barΛ+c.c.$ cross sections between 4.009 GeV and 4.951 GeV
[ "BESIII Collaboration", "M. Ablikim", "M. N. Achasov", "P. Adlarson", "X. C. Ai", "R. Aliberti", "A. Amoroso", "M. R. An", "Q. An", "Y. Bai", "O. Bakina", "I. Balossino", "Y. Ban", "V. Batozskaya", "K. Begzsuren", "N. Berger", "M. Berlowski", "M. Bertani", "D. Bettoni", "F. Bianchi", "E. Bianco", "J. Bloms", "A. Bortone", "I. Boyko", "R. A. Briere", "A. Brueggemann", "H. Cai", "X. Cai", "A. Calcaterra", "G. F. Cao", "N. Cao", "S. A. Cetin", "J. F. Chang", "T. T. Chang", "W. L. Chang", "G. R. Che", "G. Chelkov", "C. Chen", "Chao Chen", "G. Chen", "H. S. Chen", "M. L. Chen", "S. J. Chen", "S. M. Chen", "T. Chen", "X. R. Chen", "X. T. Chen", "Y. B. Chen", "Y. Q. Chen", "Z. J. Chen", "W. S. Cheng", "S. K. Choi", "X. Chu", "G. Cibinetto", "S. C. Coen", "F. Cossio", "J. J. Cui", "H. L. Dai", "J. P. Dai", "A. Dbeyssi", "R. E. de Boer", "D. Dedovich", "Z. Y. Deng", "A. Denig", "I. Denysenko", "M. Destefanis", "F. De Mori", "B. Ding", "X. X. Ding", "Y. Ding", "Y. Ding", "J. Dong", "L. Y. Dong", "M. Y. Dong", "X. Dong", "S. X. Du", "Z. H. Duan", "P. Egorov", "Y. L. Fan", "J. Fang", "S. S. Fang", "W. X. Fang", "Y. Fang", "R. Farinelli", "L. Fava", "F. Feldbauer", "G. Felici", "C. Q. Feng", "J. H. Feng", "K Fischer", "M. Fritsch", "C. Fritzsch", "C. D. Fu", "J. L. Fu", "Y. W. Fu", "H. Gao", "Y. N. Gao", "Yang Gao", "S. Garbolino", "I. Garzia", "P. T. Ge", "Z. W. Ge", "C. Geng", "E. M. Gersabeck", "A Gilman", "K. Goetzen", "L. Gong", "W. X. Gong", "W. Gradl", "S. Gramigna", "M. Greco", "M. H. Gu", "Y. T. Gu", "C. Y Guan", "Z. L. Guan", "A. Q. Guo", "L. B. Guo", "M. J. Guo", "R. P. Guo", "Y. P. Guo", "A. Guskov", "T. T. Han", "W. Y. Han", "X. Q. Hao", "F. A. Harris", "K. K. He", "K. L. He", "F. H H. Heinsius", "C. H. Heinz", "Y. K. Heng", "C. Herold", "T. Holtmann", "P. C. Hong", "G. Y. Hou", "X. T. Hou", "Y. R. Hou", "Z. L. Hou", "H. M. Hu", "J. F. Hu", "T. Hu", "Y. Hu", "G. S. Huang", "K. X. Huang", "L. Q. Huang", "X. T. Huang", "Y. P. Huang", "T. Hussain", "N Hüsken", "W. Imoehl", "M. Irshad", "J. Jackson", "S. Jaeger", "S. Janchiv", "J. H. Jeong", "Q. Ji", "Q. P. Ji", "X. B. Ji", "X. L. Ji", "Y. Y. Ji", "X. Q. Jia", "Z. K. Jia", "P. C. Jiang", "S. S. Jiang", "T. J. Jiang", "X. S. Jiang", "Y. Jiang", "J. B. Jiao", "Z. Jiao", "S. Jin", "Y. Jin", "M. Q. Jing", "T. Johansson", "X. K.", "S. Kabana", "N. Kalantar-Nayestanaki", "X. L. Kang", "X. S. Kang", "R. Kappert", "M. Kavatsyuk", "B. C. Ke", "A. Khoukaz", "R. Kiuchi", "R. Kliemt", "O. B. Kolcu", "B. Kopf", "M. K. Kuessner", "A. Kupsc", "W. Kühn", "J. J. Lane", "P. Larin", "A. Lavania", "L. Lavezzi", "T. T. Lei", "Z. H. Lei", "H. Leithoff", "M. Lellmann", "T. Lenz", "C. Li", "C. Li", "C. H. Li", "Cheng Li", "D. M. Li", "F. Li", "G. Li", "H. Li", "H. B. Li", "H. J. Li", "H. N. Li", "Hui Li", "J. R. Li", "J. S. Li", "J. W. Li", "K. L. Li", "Ke Li", "L. J Li", "L. K. Li", "Lei Li", "M. H. Li", "P. R. Li", "Q. X. Li", "S. X. Li", "T. Li", "W. D. Li", "W. G. Li", "X. H. Li", "X. L. Li", "Xiaoyu Li", "Y. G. Li", "Z. J. Li", "Z. X. Li", "C. Liang", "H. Liang", "H. Liang", "H. Liang", "Y. F. Liang", "Y. T. Liang", "G. R. Liao", "L. Z. Liao", "J. Libby", "A. Limphirat", "D. X. Lin", "T. Lin", "B. J. Liu", "B. X. Liu", "C. Liu", "C. X. Liu", "D. Liu", "F. H. Liu", "Fang Liu", "Feng Liu", "G. M. Liu", "H. Liu", "H. B. Liu", "H. M. Liu", "Huanhuan Liu", "Huihui Liu", "J. B. Liu", "J. L. Liu", "J. Y. Liu", "K. Liu", "K. Y. Liu", "Ke Liu", "L. Liu", "L. C. Liu", "Lu Liu", "M. H. Liu", "P. L. Liu", "Q. Liu", "S. B. Liu", "T. Liu", "W. K. Liu", "W. M. Liu", "X. 
Liu", "Y. Liu", "Y. Liu", "Y. B. Liu", "Z. A. Liu", "Z. Q. Liu", "X. C. Lou", "F. X. Lu", "H. J. Lu", "J. G. Lu", "X. L. Lu", "Y. Lu", "Y. P. Lu", "Z. H. Lu", "C. L. Luo", "M. X. Luo", "T. Luo", "X. L. Luo", "X. R. Lyu", "Y. F. Lyu", "F. C. Ma", "H. L. Ma", "J. L. Ma", "L. L. Ma", "M. M. Ma", "Q. M. Ma", "R. Q. Ma", "R. T. Ma", "X. Y. Ma", "Y. Ma", "Y. M. Ma", "F. E. Maas", "M. Maggiora", "S. Malde", "A. Mangoni", "Y. J. Mao", "Z. P. Mao", "S. Marcello", "Z. X. Meng", "J. G. Messchendorp", "G. Mezzadri", "H. Miao", "T. J. Min", "R. E. Mitchell", "X. H. Mo", "N. Yu. Muchnoi", "Y. Nefedov", "F. Nerling", "I. B. Nikolaev", "Z. Ning", "S. Nisar", "Y. Niu", "S. L. Olsen", "Q. Ouyang", "S. Pacetti", "X. Pan", "Y. Pan", "A. Pathak", "P. Patteri", "Y. P. Pei", "M. Pelizaeus", "H. P. Peng", "K. Peters", "J. L. Ping", "R. G. Ping", "S. Plura", "S. Pogodin", "V. Prasad", "F. Z. Qi", "H. Qi", "H. R. Qi", "M. Qi", "T. Y. Qi", "S. Qian", "W. B. Qian", "C. F. Qiao", "J. J. Qin", "L. Q. Qin", "X. P. Qin", "X. S. Qin", "Z. H. Qin", "J. F. Qiu", "S. Q. Qu", "C. F. Redmer", "K. J. Ren", "A. Rivetti", "V. Rodin", "M. Rolo", "G. Rong", "Ch. Rosner", "S. N. Ruan", "N. Salone", "A. Sarantsev", "Y. Schelhaas", "K. Schoenning", "M. Scodeggio", "K. Y. Shan", "W. Shan", "X. Y. Shan", "J. F. Shangguan", "L. G. Shao", "M. Shao", "C. P. Shen", "H. F. Shen", "W. H. Shen", "X. Y. Shen", "B. A. Shi", "H. C. Shi", "J. L. Shi", "J. Y. Shi", "Q. Q. Shi", "R. S. Shi", "X. Shi", "J. J. Song", "T. Z. Song", "W. M. Song", "Y. J. Song", "Y. X. Song", "S. Sosio", "S. Spataro", "F. Stieler", "Y. J. Su", "G. B. Sun", "G. X. Sun", "H. Sun", "H. K. Sun", "J. F. Sun", "K. Sun", "L. Sun", "S. S. Sun", "T. Sun", "W. Y. Sun", "Y. Sun", "Y. J. Sun", "Y. Z. Sun", "Z. T. Sun", "Y. X. Tan", "C. J. Tang", "G. Y. Tang", "J. Tang", "Y. A. Tang", "L. Y Tao", "Q. T. Tao", "M. Tat", "J. X. Teng", "V. Thoren", "W. H. Tian", "W. H. Tian", "Y. Tian", "Z. F. Tian", "I. Uman", "S. J. Wang", "B. Wang", "B. L. Wang", "Bo Wang", "C. W. Wang", "D. Y. Wang", "F. Wang", "H. J. Wang", "H. P. Wang", "J. P. Wang", "K. Wang", "L. L. Wang", "M. Wang", "Meng Wang", "S. Wang", "S. Wang", "T. Wang", "T. J. Wang", "W. Wang", "W. Wang", "W. P. Wang", "X. Wang", "X. F. Wang", "X. J. Wang", "X. L. Wang", "Y. Wang", "Y. D. Wang", "Y. F. Wang", "Y. H. Wang", "Y. N. Wang", "Y. Q. Wang", "Yaqian Wang", "Yi Wang", "Z. Wang", "Z. L. Wang", "Z. Y. Wang", "Ziyi Wang", "D. Wei", "D. H. Wei", "F. Weidner", "S. P. Wen", "C. W. Wenzel", "U. W. Wiedner", "G. Wilkinson", "M. Wolke", "L. Wollenberg", "C. Wu", "J. F. Wu", "L. H. Wu", "L. J. Wu", "X. Wu", "X. H. Wu", "Y. Wu", "Y. J. Wu", "Z. Wu", "L. Xia", "X. M. Xian", "T. Xiang", "D. Xiao", "G. Y. Xiao", "H. Xiao", "S. Y. Xiao", "Y. L. Xiao", "Z. J. Xiao", "C. Xie", "X. H. Xie", "Y. Xie", "Y. G. Xie", "Y. H. Xie", "Z. P. Xie", "T. Y. Xing", "C. F. Xu", "C. J. Xu", "G. F. Xu", "H. Y. Xu", "Q. J. Xu", "Q. N. Xu", "W. Xu", "W. L. Xu", "X. P. Xu", "Y. C. Xu", "Z. P. Xu", "Z. S. Xu", "F. Yan", "L. Yan", "W. B. Yan", "W. C. Yan", "X. Q. Yan", "H. J. Yang", "H. L. Yang", "H. X. Yang", "Tao Yang", "Y. Yang", "Y. F. Yang", "Y. X. Yang", "Yifan Yang", "Z. W. Yang", "Z. P. Yao", "M. Ye", "M. H. Ye", "J. H. Yin", "Z. Y. You", "B. X. Yu", "C. X. Yu", "G. Yu", "J. S. Yu", "T. Yu", "X. D. Yu", "C. Z. Yuan", "L. Yuan", "S. C. Yuan", "X. Q. Yuan", "Y. Yuan", "Z. Y. Yuan", "C. X. Yue", "A. A. Zafar", "F. R. Zeng", "X. Zeng", "Y. Zeng", "Y. J. Zeng", "X. Y. Zhai", "Y. C. Zhai", "Y. H. Zhan", "A. Q. Zhang", "B. L. Zhang", "B. X. Zhang", "D. H. Zhang", "G. 
Y. Zhang", "H. Zhang", "H. H. Zhang", "H. H. Zhang", "H. Q. Zhang", "H. Y. Zhang", "J. J. Zhang", "J. L. Zhang", "J. Q. Zhang", "J. W. Zhang", "J. X. Zhang", "J. Y. Zhang", "J. Z. Zhang", "Jianyu Zhang", "Jiawei Zhang", "L. M. Zhang", "L. Q. Zhang", "Lei Zhang", "P. Zhang", "Q. Y. Zhang", "Shuihan Zhang", "Shulei Zhang", "X. D. Zhang", "X. M. Zhang", "X. Y. Zhang", "X. Y. Zhang", "Y. Zhang", "Y. Zhang", "Y. T. Zhang", "Y. H. Zhang", "Yan Zhang", "Yao Zhang", "Z. H. Zhang", "Z. L. Zhang", "Z. Y. Zhang", "Z. Y. Zhang", "G. Zhao", "J. Zhao", "J. Y. Zhao", "J. Z. Zhao", "Lei Zhao", "Ling Zhao", "M. G. Zhao", "S. J. Zhao", "Y. B. Zhao", "Y. X. Zhao", "Z. G. Zhao", "A. Zhemchugov", "B. Zheng", "J. P. Zheng", "W. J. Zheng", "Y. H. Zheng", "B. Zhong", "X. Zhong", "H. Zhou", "L. P. Zhou", "X. Zhou", "X. K. Zhou", "X. R. Zhou", "X. Y. Zhou", "Y. Z. Zhou", "J. Zhu", "K. Zhu", "K. J. Zhu", "L. Zhu", "L. X. Zhu", "S. H. Zhu", "S. Q. Zhu", "T. J. Zhu", "W. J. Zhu", "Y. C. Zhu", "Z. A. Zhu", "J. H. Zou", "J. Zu" ]
hep-ex
[ "hep-ex" ]
< g r a p h i c s > BESIII Collaboration Using e^+e^- collision datasets corresponding to total integrated luminosity of 21.7 fb^-1 collected with the BESIII detector at the BEPCII collider at center-of-mass energies ranging from 4.009 GeV to 4.951 GeV, the energy-dependent cross sections of e^+e^-→ pK^-Λ̅+c.c. are measured for the first time. By fitting these energy-dependent cross sections, we search for the excited ψ states ψ(4160) and ψ(4415), and the vector charmonium-like states ψ(4230), ψ(4360), and ψ(4660). No evidence for these is observed and the upper limits on the branching fractions of these states decaying into pK^-Λ̅+c.c. are set at the 90% confidence level. Measurement of e^+e^-→ pK^-Λ̅+c.c. cross sections between 4.009 GeV and 4.951 GeV M. Ablikim^1, M. N. Achasov^13,b, P. Adlarson^75, X. C. Ai^81, R. Aliberti^36, A. Amoroso^74A,74C, M. R. An^40, Q. An^71,58, Y. Bai^57, O. Bakina^37, I. Balossino^30A, Y. Ban^47,g, V. Batozskaya^1,45, K. Begzsuren^33, N. Berger^36, M. Berlowski^45, M. Bertani^29A, D. Bettoni^30A, F. Bianchi^74A,74C, E. Bianco^74A,74C, J. Bloms^68, A. Bortone^74A,74C, I. Boyko^37, R. A. Briere^5, A. Brueggemann^68, H. Cai^76, X. Cai^1,58, A. Calcaterra^29A, G. F. Cao^1,63, N. Cao^1,63, S. A. Cetin^62A, J. F. Chang^1,58, T. T. Chang^77, W. L. Chang^1,63, G. R. Che^44, G. Chelkov^37,a, C. Chen^44, Chao Chen^55, G. Chen^1, H. S. Chen^1,63, M. L. Chen^1,58,63, S. J. Chen^43, S. M. Chen^61, T. Chen^1,63, X. R. Chen^32,63, X. T. Chen^1,63, Y. B. Chen^1,58, Y. Q. Chen^35, Z. J. Chen^26,h, W. S. Cheng^74C, S. K. Choi^10A, X. Chu^44, G. Cibinetto^30A, S. C. Coen^4, F. Cossio^74C, J. J. Cui^50, H. L. Dai^1,58, J. P. Dai^79, A. Dbeyssi^19, R.  E. de Boer^4, D. Dedovich^37, Z. Y. Deng^1, A. Denig^36, I. Denysenko^37, M. Destefanis^74A,74C, F. De Mori^74A,74C, B. Ding^66,1, X. X. Ding^47,g, Y. Ding^41, Y. Ding^35, J. Dong^1,58, L. Y. Dong^1,63, M. Y. Dong^1,58,63, X. Dong^76, S. X. Du^81, Z. H. Duan^43, P. Egorov^37,a, Y. L. Fan^76, J. Fang^1,58, S. S. Fang^1,63, W. X. Fang^1, Y. Fang^1, R. Farinelli^30A, L. Fava^74B,74C, F. Feldbauer^4, G. Felici^29A, C. Q. Feng^71,58, J. H. Feng^59, K Fischer^69, M. Fritsch^4, C. Fritzsch^68, C. D. Fu^1, J. L. Fu^63, Y. W. Fu^1, H. Gao^63, Y. N. Gao^47,g, Yang Gao^71,58, S. Garbolino^74C, I. Garzia^30A,30B, P. T. Ge^76, Z. W. Ge^43, C. Geng^59, E. M. Gersabeck^67, A Gilman^69, K. Goetzen^14, L. Gong^41, W. X. Gong^1,58, W. Gradl^36, S. Gramigna^30A,30B, M. Greco^74A,74C, M. H. Gu^1,58, Y. T. Gu^16, C. Y Guan^1,63, Z. L. Guan^23, A. Q. Guo^32,63, L. B. Guo^42, M. J. Guo^50, R. P. Guo^49, Y. P. Guo^12,f, A. Guskov^37,a, T. T. Han^50, W. Y. Han^40, X. Q. Hao^20, F. A. Harris^65, K. K. He^55, K. L. He^1,63, F. H H.. Heinsius^4, C. H. Heinz^36, Y. K. Heng^1,58,63, C. Herold^60, T. Holtmann^4, P. C. Hong^12,f, G. Y. Hou^1,63, X. T. Hou^1,63, Y. R. Hou^63, Z. L. Hou^1, H. M. Hu^1,63, J. F. Hu^56,i, T. Hu^1,58,63, Y. Hu^1, G. S. Huang^71,58, K. X. Huang^59, L. Q. Huang^32,63, X. T. Huang^50, Y. P. Huang^1, T. Hussain^73, N Hüsken^28,36, W. Imoehl^28, M. Irshad^71,58, J. Jackson^28, S. Jaeger^4, S. Janchiv^33, J. H. Jeong^10A, Q. Ji^1, Q. P. Ji^20, X. B. Ji^1,63, X. L. Ji^1,58, Y. Y. Ji^50, X. Q. Jia^50, Z. K. Jia^71,58, P. C. Jiang^47,g, S. S. Jiang^40, T. J. Jiang^17, X. S. Jiang^1,58,63, Y. Jiang^63, J. B. Jiao^50, Z. Jiao^24, S. Jin^43, Y. Jin^66, M. Q. Jing^1,63, T. Johansson^75, X. K.^1, S. Kabana^34, N. Kalantar-Nayestanaki^64, X. L. Kang^9, X. S. Kang^41, R. Kappert^64, M. Kavatsyuk^64, B. C. Ke^81, A. Khoukaz^68, R. Kiuchi^1, R. 
Kliemt^14, O. B. Kolcu^62A, B. Kopf^4, M. K. Kuessner^4, A. Kupsc^45,75, W. Kühn^38, J. J. Lane^67, P.  Larin^19, A. Lavania^27, L. Lavezzi^74A,74C, T. T. Lei^71,k, Z. H. Lei^71,58, H. Leithoff^36, M. Lellmann^36, T. Lenz^36, C. Li^48, C. Li^44, C. H. Li^40, Cheng Li^71,58, D. M. Li^81, F. Li^1,58, G. Li^1, H. Li^71,58, H. B. Li^1,63, H. J. Li^20, H. N. Li^56,i, Hui Li^44, J. R. Li^61, J. S. Li^59, J. W. Li^50, K. L. Li^20, Ke Li^1, L. J Li^1,63, L. K. Li^1, Lei Li^3, M. H. Li^44, P. R. Li^39,j,k, Q. X. Li^50, S. X. Li^12, T.  Li^50, W. D. Li^1,63, W. G. Li^1, X. H. Li^71,58, X. L. Li^50, Xiaoyu Li^1,63, Y. G. Li^47,g, Z. J. Li^59, Z. X. Li^16, C. Liang^43, H. Liang^71,58, H. Liang^35, H. Liang^1,63, Y. F. Liang^54, Y. T. Liang^32,63, G. R. Liao^15, L. Z. Liao^50, J. Libby^27, A.  Limphirat^60, D. X. Lin^32,63, T. Lin^1, B. J. Liu^1, B. X. Liu^76, C. Liu^35, C. X. Liu^1, D.  Liu^19,71, F. H. Liu^53, Fang Liu^1, Feng Liu^6, G. M. Liu^56,i, H. Liu^39,j,k, H. B. Liu^16, H. M. Liu^1,63, Huanhuan Liu^1, Huihui Liu^22, J. B. Liu^71,58, J. L. Liu^72, J. Y. Liu^1,63, K. Liu^1, K. Y. Liu^41, Ke Liu^23, L. Liu^71,58, L. C. Liu^44, Lu Liu^44, M. H. Liu^12,f, P. L. Liu^1, Q. Liu^63, S. B. Liu^71,58, T. Liu^12,f, W. K. Liu^44, W. M. Liu^71,58, X. Liu^39,j,k, Y. Liu^39,j,k, Y. Liu^81, Y. B. Liu^44, Z. A. Liu^1,58,63, Z. Q. Liu^50, X. C. Lou^1,58,63, F. X. Lu^59, H. J. Lu^24, J. G. Lu^1,58, X. L. Lu^1, Y. Lu^7, Y. P. Lu^1,58, Z. H. Lu^1,63, C. L. Luo^42, M. X. Luo^80, T. Luo^12,f, X. L. Luo^1,58, X. R. Lyu^63, Y. F. Lyu^44, F. C. Ma^41, H. L. Ma^1, J. L. Ma^1,63, L. L. Ma^50, M. M. Ma^1,63, Q. M. Ma^1, R. Q. Ma^1,63, R. T. Ma^63, X. Y. Ma^1,58, Y. Ma^47,g, Y. M. Ma^32, F. E. Maas^19, M. Maggiora^74A,74C, S. Malde^69, A. Mangoni^29B, Y. J. Mao^47,g, Z. P. Mao^1, S. Marcello^74A,74C, Z. X. Meng^66, J. G. Messchendorp^14,64, G. Mezzadri^30A, H. Miao^1,63, T. J. Min^43, R. E. Mitchell^28, X. H. Mo^1,58,63, N. Yu. Muchnoi^13,b, Y. Nefedov^37, F. Nerling^19,d, I. B. Nikolaev^13,b, Z. Ning^1,58, S. Nisar^11,l, Y. Niu ^50, S. L. Olsen^63, Q. Ouyang^1,58,63, S. Pacetti^29B,29C, X. Pan^55, Y. Pan^57, A.  Pathak^35, P. Patteri^29A, Y. P. Pei^71,58, M. Pelizaeus^4, H. P. Peng^71,58, K. Peters^14,d, J. L. Ping^42, R. G. Ping^1,63, S. Plura^36, S. Pogodin^37, V. Prasad^34, F. Z. Qi^1, H. Qi^71,58, H. R. Qi^61, M. Qi^43, T. Y. Qi^12,f, S. Qian^1,58, W. B. Qian^63, C. F. Qiao^63, J. J. Qin^72, L. Q. Qin^15, X. P. Qin^12,f, X. S. Qin^50, Z. H. Qin^1,58, J. F. Qiu^1, S. Q. Qu^61, C. F. Redmer^36, K. J. Ren^40, A. Rivetti^74C, V. Rodin^64, M. Rolo^74C, G. Rong^1,63, Ch. Rosner^19, S. N. Ruan^44, N. Salone^45, A. Sarantsev^37,c, Y. Schelhaas^36, K. Schoenning^75, M. Scodeggio^30A,30B, K. Y. Shan^12,f, W. Shan^25, X. Y. Shan^71,58, J. F. Shangguan^55, L. G. Shao^1,63, M. Shao^71,58, C. P. Shen^12,f, H. F. Shen^1,63, W. H. Shen^63, X. Y. Shen^1,63, B. A. Shi^63, H. C. Shi^71,58, J. L. Shi^12, J. Y. Shi^1, Q. Q. Shi^55, R. S. Shi^1,63, X. Shi^1,58, J. J. Song^20, T. Z. Song^59, W. M. Song^35,1, Y.  J. Song^12, Y. X. Song^47,g, S. Sosio^74A,74C, S. Spataro^74A,74C, F. Stieler^36, Y. J. Su^63, G. B. Sun^76, G. X. Sun^1, H. Sun^63, H. K. Sun^1, J. F. Sun^20, K. Sun^61, L. Sun^76, S. S. Sun^1,63, T. Sun^1,63, W. Y. Sun^35, Y. Sun^9, Y. J. Sun^71,58, Y. Z. Sun^1, Z. T. Sun^50, Y. X. Tan^71,58, C. J. Tang^54, G. Y. Tang^1, J. Tang^59, Y. A. Tang^76, L. Y Tao^72, Q. T. Tao^26,h, M. Tat^69, J. X. Teng^71,58, V. Thoren^75, W. H. Tian^52, W. H. Tian^59, Y. Tian^32,63, Z. F. Tian^76, I. Uman^62B, S. J. Wang ^50, B. Wang^1, B. L. 
Wang^63, Bo Wang^71,58, C. W. Wang^43, D. Y. Wang^47,g, F. Wang^72, H. J. Wang^39,j,k, H. P. Wang^1,63, J. P. Wang ^50, K. Wang^1,58, L. L. Wang^1, M. Wang^50, Meng Wang^1,63, S. Wang^12,f, S. Wang^39,j,k, T.  Wang^12,f, T. J. Wang^44, W. Wang^59, W.  Wang^72, W. P. Wang^71,58, X. Wang^47,g, X. F. Wang^39,j,k, X. J. Wang^40, X. L. Wang^12,f, Y. Wang^61, Y. D. Wang^46, Y. F. Wang^1,58,63, Y. H. Wang^48, Y. N. Wang^46, Y. Q. Wang^1, Yaqian Wang^18,1, Yi Wang^61, Z. Wang^1,58, Z. L.  Wang^72, Z. Y. Wang^1,63, Ziyi Wang^63, D. Wei^70, D. H. Wei^15, F. Weidner^68, S. P. Wen^1, C. W. Wenzel^4, U. W. Wiedner^4, G. Wilkinson^69, M. Wolke^75, L. Wollenberg^4, C. Wu^40, J. F. Wu^1,63, L. H. Wu^1, L. J. Wu^1,63, X. Wu^12,f, X. H. Wu^35, Y. Wu^71, Y. J. Wu^32, Z. Wu^1,58, L. Xia^71,58, X. M. Xian^40, T. Xiang^47,g, D. Xiao^39,j,k, G. Y. Xiao^43, H. Xiao^12,f, S. Y. Xiao^1, Y.  L. Xiao^12,f, Z. J. Xiao^42, C. Xie^43, X. H. Xie^47,g, Y. Xie^50, Y. G. Xie^1,58, Y. H. Xie^6, Z. P. Xie^71,58, T. Y. Xing^1,63, C. F. Xu^1,63, C. J. Xu^59, G. F. Xu^1, H. Y. Xu^66, Q. J. Xu^17, Q. N. Xu^31, W. Xu^1,63, W. L. Xu^66, X. P. Xu^55, Y. C. Xu^78, Z. P. Xu^43, Z. S. Xu^63, F. Yan^12,f, L. Yan^12,f, W. B. Yan^71,58, W. C. Yan^81, X. Q. Yan^1, H. J. Yang^51,e, H. L. Yang^35, H. X. Yang^1, Tao Yang^1, Y. Yang^12,f, Y. F. Yang^44, Y. X. Yang^1,63, Yifan Yang^1,63, Z. W. Yang^39,j,k, Z. P. Yao^50, M. Ye^1,58, M. H. Ye^8, J. H. Yin^1, Z. Y. You^59, B. X. Yu^1,58,63, C. X. Yu^44, G. Yu^1,63, J. S. Yu^26,h, T. Yu^72, X. D. Yu^47,g, C. Z. Yuan^1,63, L. Yuan^2, S. C. Yuan^1, X. Q. Yuan^1, Y. Yuan^1,63, Z. Y. Yuan^59, C. X. Yue^40, A. A. Zafar^73, F. R. Zeng^50, X. Zeng^12,f, Y. Zeng^26,h, Y. J. Zeng^1,63, X. Y. Zhai^35, Y. C. Zhai^50, Y. H. Zhan^59, A. Q. Zhang^1,63, B. L. Zhang^1,63, B. X. Zhang^1, D. H. Zhang^44, G. Y. Zhang^20, H. Zhang^71, H. H. Zhang^59, H. H. Zhang^35, H. Q. Zhang^1,58,63, H. Y. Zhang^1,58, J. J. Zhang^52, J. L. Zhang^21, J. Q. Zhang^42, J. W. Zhang^1,58,63, J. X. Zhang^39,j,k, J. Y. Zhang^1, J. Z. Zhang^1,63, Jianyu Zhang^63, Jiawei Zhang^1,63, L. M. Zhang^61, L. Q. Zhang^59, Lei Zhang^43, P. Zhang^1, Q. Y.  Zhang^40,81, Shuihan Zhang^1,63, Shulei Zhang^26,h, X. D. Zhang^46, X. M. Zhang^1, X. Y. Zhang^50, X. Y. Zhang^55, Y. Zhang^69, Y.  Zhang^72, Y.  T. Zhang^81, Y. H. Zhang^1,58, Yan Zhang^71,58, Yao Zhang^1, Z. H. Zhang^1, Z. L. Zhang^35, Z. Y. Zhang^44, Z. Y. Zhang^76, G. Zhao^1, J. Zhao^40, J. Y. Zhao^1,63, J. Z. Zhao^1,58, Lei Zhao^71,58, Ling Zhao^1, M. G. Zhao^44, S. J. Zhao^81, Y. B. Zhao^1,58, Y. X. Zhao^32,63, Z. G. Zhao^71,58, A. Zhemchugov^37,a, B. Zheng^72, J. P. Zheng^1,58, W. J. Zheng^1,63, Y. H. Zheng^63, B. Zhong^42, X. Zhong^59, H.  Zhou^50, L. P. Zhou^1,63, X. Zhou^76, X. K. Zhou^6, X. R. Zhou^71,58, X. Y. Zhou^40, Y. Z. Zhou^12,f, J. Zhu^44, K. Zhu^1, K. J. Zhu^1,58,63, L. Zhu^35, L. X. Zhu^63, S. H. Zhu^70, S. Q. Zhu^43, T. J. Zhu^12,f, W. J. Zhu^12,f, Y. C. Zhu^71,58, Z. A. Zhu^1,63, J. H. Zou^1, J. 
Zu^71,58 (BESIII Collaboration) ^1 Institute of High Energy Physics, Beijing 100049, People's Republic of China ^2 Beihang University, Beijing 100191, People's Republic of China ^3 Beijing Institute of Petrochemical Technology, Beijing 102617, People's Republic of China ^4 Bochum Ruhr-University, D-44780 Bochum, Germany ^5 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA ^6 Central China Normal University, Wuhan 430079, People's Republic of China ^7 Central South University, Changsha 410083, People's Republic of China ^8 China Center of Advanced Science and Technology, Beijing 100190, People's Republic of China ^9 China University of Geosciences, Wuhan 430074, People's Republic of China ^10 Chung-Ang University, Seoul, 06974, Republic of Korea ^11 COMSATS University Islamabad, Lahore Campus, Defence Road, Off Raiwind Road, 54000 Lahore, Pakistan ^12 Fudan University, Shanghai 200433, People's Republic of China ^13 G.I. Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia ^14 GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany ^15 Guangxi Normal University, Guilin 541004, People's Republic of China ^16 Guangxi University, Nanning 530004, People's Republic of China ^17 Hangzhou Normal University, Hangzhou 310036, People's Republic of China ^18 Hebei University, Baoding 071002, People's Republic of China ^19 Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany ^20 Henan Normal University, Xinxiang 453007, People's Republic of China ^21 Henan University, Kaifeng 475004, People's Republic of China ^22 Henan University of Science and Technology, Luoyang 471003, People's Republic of China ^23 Henan University of Technology, Zhengzhou 450001, People's Republic of China ^24 Huangshan College, Huangshan 245000, People's Republic of China ^25 Hunan Normal University, Changsha 410081, People's Republic of China ^26 Hunan University, Changsha 410082, People's Republic of China ^27 Indian Institute of Technology Madras, Chennai 600036, India ^28 Indiana University, Bloomington, Indiana 47405, USA ^29 INFN Laboratori Nazionali di Frascati , (A)INFN Laboratori Nazionali di Frascati, I-00044, Frascati, Italy; (B)INFN Sezione di Perugia, I-06100, Perugia, Italy; (C)University of Perugia, I-06100, Perugia, Italy ^30 INFN Sezione di Ferrara, (A)INFN Sezione di Ferrara, I-44122, Ferrara, Italy; (B)University of Ferrara, I-44122, Ferrara, Italy ^31 Inner Mongolia University, Hohhot 010021, People's Republic of China ^32 Institute of Modern Physics, Lanzhou 730000, People's Republic of China ^33 Institute of Physics and Technology, Peace Avenue 54B, Ulaanbaatar 13330, Mongolia ^34 Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica, Chile ^35 Jilin University, Changchun 130012, People's Republic of China ^36 Johannes Gutenberg University of Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany ^37 Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia ^38 Justus-Liebig-Universitaet Giessen, II. 
Physikalisches Institut, Heinrich-Buff-Ring 16, D-35392 Giessen, Germany ^39 Lanzhou University, Lanzhou 730000, People's Republic of China ^40 Liaoning Normal University, Dalian 116029, People's Republic of China ^41 Liaoning University, Shenyang 110036, People's Republic of China ^42 Nanjing Normal University, Nanjing 210023, People's Republic of China ^43 Nanjing University, Nanjing 210093, People's Republic of China ^44 Nankai University, Tianjin 300071, People's Republic of China ^45 National Centre for Nuclear Research, Warsaw 02-093, Poland ^46 North China Electric Power University, Beijing 102206, People's Republic of China ^47 Peking University, Beijing 100871, People's Republic of China ^48 Qufu Normal University, Qufu 273165, People's Republic of China ^49 Shandong Normal University, Jinan 250014, People's Republic of China ^50 Shandong University, Jinan 250100, People's Republic of China ^51 Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China ^52 Shanxi Normal University, Linfen 041004, People's Republic of China ^53 Shanxi University, Taiyuan 030006, People's Republic of China ^54 Sichuan University, Chengdu 610064, People's Republic of China ^55 Soochow University, Suzhou 215006, People's Republic of China ^56 South China Normal University, Guangzhou 510006, People's Republic of China ^57 Southeast University, Nanjing 211100, People's Republic of China ^58 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, Hefei 230026, People's Republic of China ^59 Sun Yat-Sen University, Guangzhou 510275, People's Republic of China ^60 Suranaree University of Technology, University Avenue 111, Nakhon Ratchasima 30000, Thailand ^61 Tsinghua University, Beijing 100084, People's Republic of China ^62 Turkish Accelerator Center Particle Factory Group, (A)Istinye University, 34010, Istanbul, Turkey; (B)Near East University, Nicosia, North Cyprus, 99138, Mersin 10, Turkey ^63 University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China ^64 University of Groningen, NL-9747 AA Groningen, The Netherlands ^65 University of Hawaii, Honolulu, Hawaii 96822, USA ^66 University of Jinan, Jinan 250022, People's Republic of China ^67 University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom ^68 University of Muenster, Wilhelm-Klemm-Strasse 9, 48149 Muenster, Germany ^69 University of Oxford, Keble Road, Oxford OX13RH, United Kingdom ^70 University of Science and Technology Liaoning, Anshan 114051, People's Republic of China ^71 University of Science and Technology of China, Hefei 230026, People's Republic of China ^72 University of South China, Hengyang 421001, People's Republic of China ^73 University of the Punjab, Lahore-54590, Pakistan ^74 University of Turin and INFN, (A)University of Turin, I-10125, Turin, Italy; (B)University of Eastern Piedmont, I-15121, Alessandria, Italy; (C)INFN, I-10125, Turin, Italy ^75 Uppsala University, Box 516, SE-75120 Uppsala, Sweden ^76 Wuhan University, Wuhan 430072, People's Republic of China ^77 Xinyang Normal University, Xinyang 464000, People's Republic of China ^78 Yantai University, Yantai 264005, People's Republic of China ^79 Yunnan University, Kunming 650500, People's Republic of China ^80 Zhejiang University, Hangzhou 310027, People's Republic of China ^81 Zhengzhou University, Zhengzhou 450001, People's Republic of China ^a Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia ^b Also at the Novosibirsk State University, Novosibirsk, 630090, 
Russia ^c Also at the NRC "Kurchatov Institute", PNPI, 188300, Gatchina, Russia ^d Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany ^e Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China ^f Also at Key Laboratory of Nuclear Physics and Ion-beam Application (MOE) and Institute of Modern Physics, Fudan University, Shanghai 200443, People's Republic of China ^g Also at State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, People's Republic of China ^h Also at School of Physics and Electronics, Hunan University, Changsha 410082, China ^i Also at Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China ^j Also at Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China ^k Also at Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, People's Republic of China ^l Also at the Department of Mathematical Sciences, IBA, Karachi 75270, Pakistan
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § INTRODUCTION In the last two decades, a large number of charmonium-like vector states ψ have been discovered <cit.> in the hidden or open charm final states. The ψ(4260) was first observed by the BaBar Collaboration in the process of e^+e^-→γ_ ISRπ^+π^-J/ψ  <cit.>, where ISR denotes initial state radiation. The ψ(4360) and ψ(4660) were found by the Belle and BaBar Collaborations in the π^+π^-ψ(2S) final states <cit.>. Later, a precise study on the process e^+e^-→π^+π^-J/ψ by the BESIII Collaboration revealed two structures with masses of 4222.0±3.1±1.4 MeV/c^2 and 4320.0±10.4±7.0 MeV/c^2 in the ψ(4260) region <cit.>. The former one, renamed as Y(4230) <cit.>, was further confirmed by the BESIII Collaboration in the decay channels e^+e^-→ωχ_c0 <cit.>, e^+e^-→π^+π^-h_c <cit.>, e^+e^-→π^+D^0D^*- <cit.>, e^+e^-→η J/ψ <cit.>, e^+e^-→π^+π^+ψ(3686) <cit.>, and e^+e^-→π^+D^*0D^*- <cit.>. Since some properties of these states cannot be explained by the conventional charmonium model, they are usually regarded as candidates for exotic states, such as hybrids, tetraquarks, and molecules <cit.>. Searching for the Y states decaying into light hadron final states will help to understand the nature of Y states and investigate the mechanism of quantum chromodynamics at low energies. Although several processes with light hadron final states, such as e^+e^-→ pp̅π^0 <cit.>, e^+e^-→ pp̅η(ω) <cit.>, e^+e^-→ pn̅K^0_SK^- <cit.>, e^+e^-→ K^0_SK^±π^∓π^0(η) <cit.>, e^+e^-→ωπ^+π^- <cit.>, have been studied, no significant charmonium-like structures were found in these. Consequently, further exploration of e^+e^- decaying into other light hadrons is highly desirable to probe the nature of these charmonium-like states. In this analysis, the cross sections of the process e^+e^-→ pK^-Λ̅+c.c. are measured by analyzing 21.7 fb^-1 of e^+e^- collision data taken at center-of-mass energies (√(s)) ranging from 4.009 GeV to 4.951 GeV. The vector charmonium(-like) states, ψ(4160), ψ(4230), ψ(4360), ψ(4415), and ψ(4660) are investigated by fitting the obtained energy-dependent cross sections. Throughout the paper, the charged-conjugation mode is always implied, unless explicitly stated. § DETECTOR AND DATA SETS The BESIII detector is a magnetic spectrometer <cit.> located at the Beijing Electron Positron Collider (BEPCII) <cit.>. The cylindrical core of the BESIII detector consists of a main drift chamber filled with helium-based gas (MDC), a plastic scintillator time-of-flight system (TOF), and a CsI(Tl) electromagnetic calorimeter (EMC), which are all enclosed in a superconducting solenoidal magnet providing a 1.0 T magnetic field. The flux-return yoke is instrumented with resistive plate chambers arranged in 9 layers in the barrel and 8 layers in the endcaps for muon identification. 
The acceptance of charged particles and photons is 93% of 4π solid angle. The charged-particle momentum resolution at 1.0 GeV/c is 0.5%, and the specific energy loss resolution is 6% for the electrons from Bhabha scattering. The EMC measures photon energies with a resolution of 2.5% (5%) at 1 GeV in the barrel (end cap) region. The time resolution of the TOF barrel part is 68 ps, while that of the end cap part is 110 ps. The end cap TOF system was upgraded in 2015 with multi-gap resistive plate chamber technology, providing a time resolution of 60 ps <cit.>. All of those are enclosed in a superconducting solenoidal magnet providing a 1.0 T magnetic field <cit.>. The data samples used in this analysis were collected by the BESIII detector at 37 energy points between 4.009 GeV and 4.951 GeV. The center-of-mass energies and the corresponding integrated luminosities <cit.> at various energy points are shown in Table <ref>. Simulated samples produced with the geant4-based <cit.> Monte-Carlo (MC) software, which includes the geometric description of the BESIII detector and the detector response, are used to determine the detection efficiencies and to estimate the background levels. The simulation includes the beam energy spread and ISR in the e^+e^- annihilations modeled with the generator kkmc <cit.>. Inclusive MC simulation samples generated at √(s)=4.178 GeV with 40 times the luminosity of the data sample are used to analyze the possible background contributions. They consist of open charm production processes, ISR production of vector charmonium or charmonium-like states, and continuum processes (e^+e^-→ qq̅,q=u,d,s). The open charm production processes are generated using conexc <cit.>, and the ISR production is incorporated in kkmc <cit.>. The known decay states are modeled with beseventgen <cit.> using branching fractions taken from the Particle Data Group (PDG) <cit.> and the remaining unknown decays from the charmonium states are modeled with lundcharm <cit.>. Final state radiation from charged final state particles is incorporated with the photos <cit.> package. The signal MC samples of e^+e^-→ pK^-Λ̅ are generated by using the amplitude model with parameters fixed to the amplitude analysis results <cit.>. § EVENT SELECTION AND BACKGROUND ANALYSIS To select candidates for e^+e^-→ pK^-Λ̅, the Λ candidates are reconstructed with the charged decay mode, and hence there are only four charged tracks pp̅π^± K^∓ in the final states. All charged tracks are required to satisfy |d_z|< 20 cm and |cosθ|< 0.93. Here, |d_z| is the coordinate of the charged particle production point along the beam axis and θ is the polar angle of the charged track. For each event, four charged tracks with zero net charge are required. Due to the high momenta of final charged tracks, particle identification is not required. To reconstruct candidates for Λ, all possible opposite charged track pairs will be assigned as pπ^-. The pπ^- trajectories are constrained to originate from a common vertex by applying a vertex fit, and the χ^2 of the vertex fit is required to be less than 100. The Λ candidate is constrained in a secondary vertex fit to originate from the interaction point. The decay length of the Λ candidate must be greater than twice the vertex resolution. The invariant mass of the pπ^- combination is required to be within 1.10 <M_pπ^-< 1.13 GeV/c^2 to suppress wrong assignments. Exactly one Λ candidate per event is required to satisfy the selection criteria. 
The other two charged tracks are assigned according to their charges as proton and kaon not from Λ decays. To ensure that the two charged tracks originate from the interaction point, they are imposed with additional requirements of |d_z|< 10 cm and |d_r|< 1 cm. Here, |d_r| is the distance between the charged track production point and the beam axis in the plane perpendicular to the beam axis. To further suppress background and improve track momentum resolution, a four-momentum constraint (4C) kinematic fit is imposed on the initial e^+e^- beam energy under the hypothesis of e^+e^-→ pK^-Λ̅ The χ^2 of the kinematic fit is required to be less than 100. To reduce the number of background events caused by spurious kaons originating from Bhabha scattering events interacting with the materials in the detector, the events with |cosθ_K|>0.83 are vetoed, where θ_K is the polar angle of the charged kaon. After applying all of the above selection criteria, studies of the inclusive MC sample indicate that the total background fraction is only 1.8%. The background contributions are categorized into the peaking background, such as e^+e^-→π^+ΛΣ̅^-, and non-peaking background, such as e^+e^-→ρ^0 pp̅. Given that the peaking background fraction is lower than 1.0%, it is neglected in the further analysis and will be considered as one source of systematic uncertainties. § CROSS SECTION MEASUREMENT To obtain the e^+e^-→ pK^-Λ̅ signal yield, an un-binned maximum likelihood fit is performed to the invariant mass spectrum of M_pπ^-. The signal is described with the MC-determined shape convolved with a Gaussian function to consider the difference between data and MC simulation. The background shape is parametrized as a linear function. As an example, Fig. <ref> shows the fit result of the accepted candidates in data at √(s)=4.178 GeV, and all the signal yields (N_ sig) at 37 energy points are listed in Table <ref>. In order to determine the detection efficiency, the amplitude analysis is performed for energy points with signal yield greater than 700, which are exactly the same as in Ref. <cit.>. The amplitudes for the sequential processes e^+e^-→γ^*→ X^+ K^- (X^+→ pΛ̅), e^+e^-→γ^*→ N^*+p̅(N^*+→ K^+Λ̅), e^+e^-→γ^*→Λ^*Λ̅(Λ^*→ pK^-), and their charge conjugations, are constructed using the relativistic covariant tensor amplitude formalism <cit.>. These effective vertices Γ are deduced from an effective Lagrangian by considering C- and P-parity invariance, Lorentz invariance, and CPT invariance. The amplitude of a process containing a specific resonance is written as 𝒜_j=ϵ_*α(p_0,m)u̅(p_1,λ_1)Γ_1^αμ_1μ_2...𝒫_μ_1μ_2...ν_1ν_2...Γ_2^ν_1ν_2...× v(p_2,λ_2)BW(s), where ϵ^* is the γ^* polarization vector; u(p_1,λ_1) and v(p_2,λ_2) are the free Dirac spinors for proton and Λ̅, respectively; Γ_1 and Γ_2 are the two strong interaction vertices describing the resonance couplings with γ^*, p, K^-, and Λ̅, and BW(s) is a Breit-Wigner function for an intermediate states with a spin projection operator 𝒫. The complex coupling constants of the amplitudes are determined by an un-binned maximum likelihood fit using minuit <cit.>. The background contribution is estimated with the inclusive MC sample and subtracted from the likelihood. The baseline solution is determined at √(s)=4.178 GeV, which includes the pΛ̅ threshold enhancement X(2085) <cit.>, K^*_2(1980),K^*_4(2045),K_2(2250),Λ(1520),Λ(1890),Λ(2350),N(1720), and N(2570). Except for X(2085), the resonance parameters are fixed to the respective world average values <cit.>. 
The signal MC samples of the other energy points are generated based on the amplitude analysis result of the nearby energy point. Figure <ref> shows the distributions of polar angles and momenta of final state particles, as well as the invariant mass spectra of all two-particle combinations of signal candidates in data and MC simulation at √(s)=4.178 GeV. The Born cross section at a given center-of-mass energy is calculated as σ^ B=N_ sig/ℒ_ int×ℬ×ϵ(1+δ)1/|1-Π|^2, where N_ sig is the fitted signal yield, ℒ_ int is the integrated luminosity, ℬ is the branching fraction of the Λ charged decay <cit.>, ϵ is the detection efficiency, (1+δ) is the ISR correction factor, and 1/|1-Π|^2 is the vacuum polarization factor <cit.>. To obtain the ISR correction factor, an iterative procedure is performed. First, a series of signal MC samples are generated for all energy points with a constant cross section. The cross sections are calculated based on the reconstruction efficiencies and correction factors obtained from the signal MC simulation. The line shape 𝒫^4(1-e^-Δ M/p_0) is used to describe the measured cross sections, where 𝒫^4 is a forth-order polynomial with free parameters, p_0 is a free parameter, and Δ M=√(s)-2.645 GeV since no signal event is observed with the data sample at √(s)=2.645 GeV, which is close to the mass threshold of pK^-Λ̅. Then, the method introduced in Ref. <cit.> is used to get the ISR correction factors and efficiencies, and a new series of cross sections is obtained. This procedure is repeated until the difference of (1+δ)ϵ between two subsequent iterations is less than 0.1%. The vacuum polarization factor 1/|1-Π|^2 is obtained from conexc <cit.>. Table <ref> summarizes the Born cross sections together with the relevant values used to determine them. The energy-dependent dressed cross sections of e^+e^-→ pK^-Λ̅, defined as σ^ D=σ^ B×1/|1-Π|^2, are shown in Fig. <ref>. § SYSTEMATIC UNCERTAINTY The systematic uncertainties on the cross section measurement include several sources, as summarized in Table <ref>. They are estimated as described below. The integrated luminosity is measured using Bhabha scattering events, with uncertainty less than 1.0% <cit.>. The uncertainty related to the tracking efficiency of kaons is estimated to be 1.0% using a control sample of e^+e^-→ K^+K^-π^+π^- and that of protons not from Λ is estimated to be 2.0% using a control sample of e^+e^-→ pπ^-p̅π^+. The total systematic uncertainty from tracking is assigned as a linear sum of kaon and non-Λ proton contributions. The systematic uncertainty due to the Λ reconstruction efficiency including tracking efficiencies of the pπ^- pair, decay length requirement, mass window, vertex fit, and second vertex fit, is assigned as 2.0% using the control sample of J/ψ(ψ(3686))→ΛΛ̅ <cit.>. The systematic uncertainty related to the N_ trk=4 requirement, i.e. the number of charged tracks must be four, is estimated to be 0.5% with a control sample of J/ψ→ pK^-Λ̅ following Ref. <cit.>. The systematic uncertainty of the MC modeling is estimated with a new signal MC sample, in which all fitted complex coupling constants, quoted resonance parameters are smeared with their uncertainties. The difference between the detection efficiencies obtained with the new signal MC sample and the nominal one is taken as this uncertainty. The systematic uncertainty from the fit to the M_pπ^- spectrum is taken into account in two aspects. The uncertainty associated with the fit range is estimated by varying the fit range by 1 MeV. 
The uncertainty from the background shape is estimated by using a second-order polynomial function. For each of these two aspects, the maximum difference between the signal yields obtained with nominal and alternative background shapes is taken as the corresponding systematic uncertainty. Adding these two items in quadrature, we obtain the relevant systematic uncertainty. The systematic uncertainty related to the 4C kinematic fit is estimated by comparing the detection efficiencies with and without the helix parameter correction <cit.>, taking the difference as the corresponding systematic uncertainty. The systematic uncertainty of the |cosθ_K|< 0.83 requirement is estimated by varying the cut range by 0.01. The maximum difference between the detection efficiencies obtained with nominal and alternative cut ranges is taken to be the corresponding systematic uncertainty. The systematic uncertainty related to the correction factor is considered in two aspects. The uncertainty due to the theoretical uncertainty of the vacuum polarization factor is assigned to be 0.5%  <cit.>. The uncertainty due to the line shape used in the iteration is estimated by varying all free parameters of the line shape within their statistical uncertainties. The distribution of σ^ B obtained with the alternative parameter sets is fitted with a Gaussian function (μ_1,σ_1). The uncertainty is assigned to be (|μ_1-μ_0|+σ_1)/μ_0×100%, where μ_0 is the nominal value. Adding these two items in quadrature gives the systematic uncertainty in the correction factor. The systematic uncertainty due to the Λ peaking background is assigned as 1.0% since the fraction of this background from the inclusive MC sample is found to be less than 1.0%. The systematic uncertainty of the quoted branching fraction of Λ→ pπ^- is 0.8%. § FIT TO THE CROSS SECTIONS OF E^+E^-→ PK^-Λ̅+C.C. In order to search for possible charmless decays of charmonium(-like) states ψ→ pK^-Λ̅+c.c., we try two kinds of least chi-square fits to the dressed cross sections. The χ^2 is constructed as χ^2= (Δσ⃗)^T V^-1Δσ⃗, where Δσ⃗_i=σ_i-σ^ fit_i(θ⃗) and V is the covariance matrix. The σ_i and σ^ fit_i are the measured and fitted values for the cross section at the i-th energy point, respectively. The covariance matrix is constructed as V_ii=V_ sta,i+V_ sys,i for diagonal elements and V_ij=√(V_ corr-sys,i× V_ corr-sys,j) for off-diagonal elements (i≠ j). Here, V_ corr-sys includes the systematic uncertainties of integrated luminosity, tracking, Λ reconstruction, and ℬ(Λ→ pπ^-). In the first fit, the cross sections are assumed to result only from continuum production and to follow a relation of a/s^n. The fit result is shown in Fig. <ref>, with the goodness-of-fit of χ^2/ ndf=39.05/35, where both statistical and systematic uncertainties are included and ndf denotes the number of degrees of freedom. In the second fit, the cross section is modeled as a coherent sum of continuum production and resonant amplitudes, e.g., σ(√(s))=|a/s^n+BW(√(s))e^iϕ|^2, where BW(√(s))=M/√(s)√(12πΓ_ eeΓ_ totℬ(ℛ→ pK^-Λ̅+c.c.))/s-M^2+iMΓ_ tot√(PS(√(s))/PS(M)) is used to describe charmonium-(like) states. Here, M, Γ_ tot, and Γ_ ee are the mass, full width, and e^+e^- partial width of the resonance ℛ, respectively. ℬ(ℛ→ pK^-Λ̅+c.c.) denotes the branching fraction of the decay ℛ→ pK^-Λ̅+c.c., ϕ is the relative phase between the continuum and resonance, and PS(√(s))/PS(M) is the three-body phase space factor. In the second case, the values of M and Γ_ tot are fixed to the PDG values <cit.>. 
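As a rough numerical sketch of this fit model (illustrative only, not the analysis code used for this measurement), the Python snippet below evaluates the coherent continuum-plus-resonance cross section and the corresponding χ^2 for a given parameter set. The resonance mass and width, the phase-space ratio (set to one), and the unit conventions are placeholder assumptions rather than values taken from this analysis.

```python
import numpy as np

# Placeholder resonance parameters (roughly psi(4230)-like); in the real fit they
# are fixed to the PDG values, which are not reproduced here.
M_R, GAMMA_R = 4.23, 0.05  # mass and total width in GeV (assumed values)

def breit_wigner(sqrt_s, gamma_ee_br, mass=M_R, width=GAMMA_R, ps_ratio=1.0):
    """Resonant amplitude BW(sqrt(s)) = (M/sqrt(s)) * sqrt(12*pi*Gamma_ee*BF*Gamma_tot)
    / (s - M^2 + i*M*Gamma_tot) * sqrt(PS(sqrt(s))/PS(M)).
    gamma_ee_br is the product Gamma_ee * BF(R -> p K- Lambda-bar + c.c.);
    ps_ratio stands in for the three-body phase-space factor (set to 1 here)."""
    s = sqrt_s ** 2
    num = np.sqrt(12.0 * np.pi * gamma_ee_br * width)
    den = (s - mass ** 2) + 1j * mass * width
    return (mass / sqrt_s) * num / den * np.sqrt(ps_ratio)

def dressed_xsec(sqrt_s, a, n, phi, gamma_ee_br):
    """Coherent sum sigma(sqrt(s)) = |a/s^n + BW(sqrt(s)) * exp(i*phi)|^2,
    assuming all quantities are expressed in mutually consistent units."""
    s = sqrt_s ** 2
    amp = a / s ** n + breit_wigner(sqrt_s, gamma_ee_br) * np.exp(1j * phi)
    return np.abs(amp) ** 2

def chi2(params, sqrt_s, sigma_meas, cov):
    """chi^2 = (Delta sigma)^T V^{-1} (Delta sigma) with the full covariance matrix V."""
    a, n, phi, gamma_ee_br = params
    delta = sigma_meas - dressed_xsec(sqrt_s, a, n, phi, gamma_ee_br)
    return float(delta @ np.linalg.solve(cov, delta))

# chi2 can then be minimized with, e.g., scipy.optimize.minimize, and a likelihood
# profile L = exp(-0.5*chi2) over gamma_ee_br integrated to set a 90% C.L. upper limit.
```

This mirrors the structure of the second fit described above; the continuum-only fit corresponds to dropping the resonant amplitude.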
Several well established charmonium-(like) states, ψ(4160),ψ(4230), ψ(4360), ψ(4415), and ψ(4660), are checked and no evidence for any ψ(Y) → pK^-Λ̅+c.c. decay is found. To set the upper limits of Γ_ eeℬ(ℛ→ pK^-Λ̅+c.c.), the likelihood distributions are constructed as L(Γ_ eeℬ)=e^-0.5χ^2. The upper limits at the 90% confidence level (C.L.) is obtained by integrating L(Γ_ eeℬ) from zero to 90% of the total curve. The uncertainty associated with the quoted resonance parameters of ℛ is studied by sampling its parameters according to its uncertainty, repeating the estimation of upper limits, and taking the width of resulting distribution as this uncertainty. The upper limit is assigned as the nominal value plus the uncertainty due to quoted resonance parameters, as summarized in Table <ref> § SUMMARY In summary, with 21.7 fb^-1 of e^+e^- collision data taken at √(s) ranging from 4.009 GeV to 4.951 GeV, the energy-dependent cross sections of e^+e^-→ pK^-Λ̅ are measured for the first time. We fit the obtained cross sections under different hypotheses of charmonium(-like) states plus continuum production. No evidence for any decay of charmonium(-like) states is found and the upper limits of Γ_ eeℬ(ℛ→ pK^-Λ̅+c.c.) at the 90% C.L. are given. The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key R&D Program of China under Contracts Nos. 2020YFA0406300, 2020YFA0406400; National Natural Science Foundation of China (NSFC) under Contracts Nos. 12175244, 11875170, 11565006, 11505034, 11635010, 11735014, 11835012, 11935015, 11935016, 11935018, 11961141012, 12022510, 12025502, 12035009, 12035013, 12061131003, 12192260, 12192261, 12192262, 12192263, 12192264, 12192265, 12221005, 12225509, 12235017; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); CAS Key Research Program of Frontier Sciences under Contracts Nos. QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; The Institute of Nuclear and Particle Physics (INPAC) and Shanghai Key Laboratory for Particle Physics and Cosmology; ERC under Contract No. 758462; European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement under Contract No. 894790; German Research Foundation DFG under Contracts Nos. 443159800, 455635585, Collaborative Research Center CRC 1044, FOR5327, GRK 2149; Istituto Nazionale di Fisica Nucleare, Italy; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Research Foundation of Korea under Contract No. NRF-2022R1A2C1092335; National Science and Technology fund of Mongolia; National Science Research and Innovation Fund (NSRF) via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation of Thailand under Contract No. B16F640076; Polish National Science Centre under Contract No. 2019/35/O/ST2/02907; The Swedish Research Council; U. S. Department of Energy under Contract No. DE-FG02-05ER41374. JHEP
http://arxiv.org/abs/2307.03095v1
20230706161117
Joint implications of BBN, CMB, and PTA Datasets for Scalar-Induced Gravitational Waves of Second and Third orders
[ "Qing-Hua Zhu", "Zhi-Chao Zhao", "Sai Wang" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.HE", "gr-qc" ]
http://arxiv.org/abs/2307.00795v1
20230703072341
Inference for Projection Parameters in Linear Regression: beyond $d = o(n^{1/2})$
[ "Woonyoung Chang", "Arun Kumar Kuchibhotla", "Alessandro Rinaldo" ]
math.ST
[ "math.ST", "stat.TH" ]
We consider the problem of inference for projection parameters in linear regression with increasing dimensions. This problem has been studied under a variety of assumptions in the literature. The classical asymptotic normality result for the least squares estimator of the projection parameter only holds when the dimension d of the covariates is of smaller order than n^1/2, where n is the sample size. Traditional sandwich estimator-based Wald intervals are asymptotically valid in this regime. In this work, we propose a bias correction for the least squares estimator and prove the asymptotic normality of the resulting debiased estimator as long as d = o(n^2/3), with an explicit bound on the rate of convergence to normality. We leverage recent methods of statistical inference that do not require an estimator of the variance to perform asymptotically valid inference. We provide a discussion of how our techniques can be generalized to increase the allowable range of d even further. § INTRODUCTION Linear regression is a fundamental statistical tool that has been widely used in various fields of research. The classical literature on linear regression and ordinary least squares (OLS) estimation has focused primarily on the well-specified case, where the underlying truth postulates a linear relation between the response variable and the covariates. As elucidated in works in the assumption-lean framework, although model assumptions sometimes encode prior knowledge, invoking them purely for mathematical convenience is problematic. Because real data often possess a highly nonlinear structure, relying on model assumptions and treating them as the ground truth in inference can be misleading. Compared to the popularity of linear regression in practical studies in both statistics and econometrics, the theoretical properties of linear regression in an assumption-lean framework have only recently started to gain traction. To this end, some approaches start with a traditional estimator of a parameter indexing a parametric regression model and then characterize what estimand the estimator converges to, without assuming that the model is true. In particular, we focus on misspecified linear regression models comprised of a d-dimensional random vector of covariates X and a scalar random variable Y. If the joint distribution P_X,Y admits second moments, then the conditional expectation E[Y|X], which is not necessarily linear, is the best L^2 approximation to Y among functions of X. It is well known that the best linear L^2 approximation to Y is the linear function β^⊤ X, where the coefficient vector β=β(P_X,Y) is given by β = argmin_θ∈ℝ^d E[(Y-θ^⊤ X)^2]. Provided that the population Gram matrix Σ = E_X[XX^⊤] is invertible, the solution is unique and given by the vector of projection parameters, β = Σ^-1Γ, where Γ=E[XY]. The projection parameter is traditionally estimated using the ordinary least squares estimator (OLSE). Suppose that we observe a sample of n i.i.d. observations (X_1,Y_1),…,(X_n,Y_n) from P_X,Y. Then, the OLSE is defined as β̂= argmin_θ∈ℝ^d (1/n)∑_i=1^n(Y_i-θ^⊤ X_i)^2. Provided that the sample Gram matrix Σ̂:=n^-1∑_i=1^nX_iX_i^⊤ is invertible with probability 1, the OLSE is well-defined and can be expressed as β̂= Σ̂^-1Γ̂, where Γ̂= n^-1∑_i=1^n X_iY_i.
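For a concrete illustration of these definitions, the following Python sketch (not code from the paper) simulates data from a misspecified model, computes the OLSE β̂ = Σ̂^-1Γ̂, and approximates the target projection parameter β = Σ^-1Γ by ordinary least squares on a much larger independent sample. The data-generating process, dimension, and sample sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, d):
    """Misspecified model: E[Y|X] is nonlinear in X and the errors are heteroscedastic."""
    X = rng.standard_normal((n, d))
    f = X[:, 0] + 0.5 * X[:, 1] ** 2                       # nonlinear regression function
    eps = (1.0 + 0.5 * np.abs(X[:, 0])) * rng.standard_normal(n)
    return X, f + eps

def olse(X, Y):
    """Ordinary least squares estimate: beta_hat = Sigma_hat^{-1} Gamma_hat."""
    n = X.shape[0]
    Sigma_hat = X.T @ X / n
    Gamma_hat = X.T @ Y / n
    return np.linalg.solve(Sigma_hat, Gamma_hat)

n, d = 500, 50
X, Y = simulate(n, d)
beta_hat = olse(X, Y)

# Monte Carlo proxy for the projection parameter beta = Sigma^{-1} Gamma: the OLSE on a
# much larger independent sample, standing in for the population expectations.
X_big, Y_big = simulate(200_000, d)
beta_proxy = olse(X_big, Y_big)

print("||beta_hat - beta||_2 (approx.):", np.linalg.norm(beta_hat - beta_proxy))
```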
In a fixed dimension setting, the large sample theory and Berry-Esseen type bounds have been utilized for the ordinary least squares estimator (OLSE), as demonstrated by <cit.> and <cit.>, respectively. With increasing dimensions, there also exists a substantial body of literature on the asymptotic normality of the OLSE and the Berry-Esseen type bounds for normal approximation <cit.>. Translating these findings to our specific context necessitates a favorable scaling requirement of d=o(√(n)) to guarantee the √(n)-consistency of the OLSE, even within a well-specified model under a set of relatively strong assumptions. Recently, <cit.> derived a novel finite sample bound for the normalized OLSE under minimal assumptions. This result allows for the construction of simultaneous confidence intervals for the projection parameter coefficients, where the interval width can be of the order 1/√(n), regardless of the dimension. While considerable attention has been given to the regime where d=o(√(n)), with the OLSE well-defined as long as d=o(n), little focus has been placed on cases beyond this favorable scaling of d=o(√(n)). Specifically, it is known that the OLSE is biased when d≳√(n), and the non-vanishing bias of the OLSE prevents valid inference on the projection parameter without the presence of implicit or explicit de-biasing procedure. In this regard, <cit.> proposed the bias-corrected M-estimator using the jackknife method, which allows for consistency of the estimate under d=O(√(n)). <cit.> introduced an abstract de-biasing procedure that imposes no dimension requirement other than d=o(n) and provides 1/√(n)-confidence intervals for individual coefficients of the model parameter. However, their result is presented under a well-specified and sparse model, which is more restrictive compared to our setting. Finally, <cit.> demonstrated the consistency of the pairs bootstrap and wild bootstrap under increasing dimensions, but does not provide any rate of convergence. In this paper, we adopt the assumption-lean framework to provide a finite-sample guarantee for the projection parameter, requiring only very mild assumptions. The main contributions of this study include the proposal of a bias-corrected least square estimator for the projection parameter and the derivation of a Berry-Esseen bound for the approximation to the adjusted normal distribution. Our main result, Theorem <ref>, establishes a high-dimensional Berry-Esseen bound for a linear contrast of the bias-corrected estimator under the assumptions of finite moments for both the covariates and the errors. Our bound, disregarding the poly-logarithmic factor, scales as follows: 1/√(n) + (d/n^3/4-1/q_x)^3/2 + d^3/2/n, provided that q_x≥ 8 and q≥ 4, where q_x and q represent the finite moments of the covariate X and the error Y-X^⊤β, respectively. Consequently, our bound tends to zero if the condition d = o(n^min{2/3, 3/4-1/q_x}) is satisfied. Notably, when q_x≥ 12, this requirement can be further reduced to d=o(n^2/3), strictly embracing the traditional d=o(n^1/2) scaling. The weaker dimensionality requirement we imposed stems from our decision not to estimate the asymptotic variance of our proposed estimate, instead leaving an unknown parameter in our Berry Esseen inequality. This departure allowed us to avoid a stronger dimension requirement and preserve the fast convergence rate. 
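Although the precise correction is developed later in terms of the population quantities Σ and β, a plug-in analogue using Σ̂ and β̂ gives a sense of the computation involved. The sketch below is a hypothetical empirical variant for illustration only, not the exact estimator whose theory is established in this paper.

```python
import numpy as np

def debiased_olse(X, Y):
    """Plug-in sketch of a de-biased least squares estimate.

    The correction mirrors the second-order bias term
    -(1/n^2) * sum_i Sigma^{-1} X_i (Y_i - X_i^T beta) * ||X_i||^2_{Sigma^{-1}},
    but replaces the population quantities (Sigma, beta) with their sample
    counterparts (an illustrative assumption, not the paper's exact construction).
    """
    n = X.shape[0]
    Sigma_inv = np.linalg.inv(X.T @ X / n)
    beta_hat = Sigma_inv @ (X.T @ Y / n)
    resid = Y - X @ beta_hat                                 # plug-in residuals
    lev = np.einsum("ij,jk,ik->i", X, Sigma_inv, X)          # ||X_i||^2 in the Sigma_inv norm
    bias_hat = -Sigma_inv @ (X.T @ (resid * lev)) / n ** 2
    return beta_hat - bias_hat, beta_hat
```

On simulated data such as in the previous sketch, both the corrected and the uncorrected estimates can be compared against a Monte Carlo proxy for β.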
In the context of linear regression, the inflation of the variance of coefficient estimates with increasing dimension has prompted efforts to propose robust covariance estimators capable of accommodating dimensionality and potential heteroscedasticity. The degree-of-freedom-corrected covariance estimator, introduced by <cit.>, has served as a foundation for subsequent modifications within well-specified linear models, known as HC-class variance estimators. For comprehensive reviews of these estimators, we refer our readers to <cit.> and <cit.>. Recent additions to the HC class include the HCK estimator proposed by <cit.> and the HCA estimator proposed by <cit.>. However, in both cases, consistency of the variance estimate requires a dimensionality constraint of d=O(n^1/2-ϵ) in misspecified linear models, albeit under different settings. Additionally, <cit.>, who studied a setting similar to ours, noted that a dimensionality requirement of d=o(n^1/2) appears to be unavoidable when deploying the sandwich estimator for the OLSE variance (see Lemma 8 of <cit.>). Common inferential methods that form confidence intervals for the projection parameter are therefore not directly applicable to our estimate without knowledge of the asymptotic variance; inferential methods that do not rely on a variance estimate are needed. Fortunately, resampling and sample-splitting methods have enjoyed practical success in uncertainty quantification, enabling us to address this challenging inferential problem. In particular, we construct confidence intervals for the projection parameter via three methods: HulC (convex-hull based confidence intervals) <cit.>, t-statistic based inference <cit.>, and the wild bootstrap <cit.>. The rest of this paper is organized as follows. In Section <ref>, we describe the problem setup and notation. In Section <ref>, we describe the distributional assumptions on the data generating process. In Section <ref>, we deliver our main result, a Berry-Esseen bound for the distributional approximation of a linear contrast of the projection parameter estimate. In Section <ref>, we describe three inferential methods based on the bootstrap and sample splitting. Numerical results are contained in Section <ref>. § METHODS AND MAIN RESULT §.§ Problem setup Let (X_1, Y_1),…, (X_n, Y_n) be an i.i.d. sample of n observations from P_X,Y, with (X,Y)∈ℝ^d×ℝ. The projection parameter β is defined as β=β_n:= argmin_θ∈ℝ^d (1/n)∑_i=1^n E[(Y_i-X_i^⊤θ)^2]. If the population Gram matrix Σ = Σ_n := n^-1∑_i=1^n E[X_iX_i^⊤] is positive definite, the projection parameter is well-defined and equal to Σ_n^-1Γ_n, where Γ_n:=n^-1∑_i=1^n E[X_iY_i]. In the case where the observations follow the linear model Y= X^⊤β^*+ϵ with E[ϵ|X]=0, the projection parameter corresponds to the model parameter, i.e., β=β^*. On the contrary, if the underlying truth exhibits a possibly nonlinear structure, then the projection parameter gives the best linear approximation X^⊤β to Y with respect to the joint distribution of (X,Y). For a detailed discussion of the projection parameter and its interpretation, see <cit.>. The projection parameter is traditionally estimated using the ordinary least squares estimator (OLSE), defined as β̂= β̂_n:= argmin_θ∈ℝ^d (1/n)∑_i=1^n(Y_i-X_i^⊤θ)^2. If the sample Gram matrix Σ̂=Σ̂_n:=n^-1∑_i=1^n X_iX_i^⊤ is positive definite with probability 1, then the OLS estimator is well-defined and can be written as β̂_n = Σ̂_n^-1Γ̂_n, where Γ̂_n:=n^-1∑_i=1^n X_iY_i. Notation. For any x∈ℝ^d, we write ‖x‖_2=√(x^⊤ x).
In addition with a positive definite matrix A∈^d× d, we denote the scaled Euclidean norm as x_A = √(x^⊤ Ax). We let ^d-1=θ∈^d:θ_2=1 be the unit sphere in ^d. §.§ Assumptions We will deliver our main results using the following assumption on the data distribution. In particular, our assumptions only require moment conditions on the response and covariates, which embraces a large class of distributions including heavy-tailed distributions and discrete distributions. There exists some q≥2 and a constant K_y>0 satisfying that ([|Y_i-X_i^⊤β|^q])^1/q≤ K_y, for i=1,…,n. Assumption <ref> imposes only the moment condition for the error, thus allowing for heteroscedastic errors that can depend on the covariates arbitrarily and also for heavy-tailed errors. Assumption <ref> often appears in the recent linear regression literature. <cit.> further suppose that the conditional variance of the error is uniformly bounded almost everywhere. That is, 0<inf_x v^2(x)≤sup_x v^2(x) <∞, where v^2(x)=[(Y-X^⊤β)^2|X=x]. Here, inf_x and sup_x are essential infimum and essential supremum, respectively, with respect to the marginal distribution of X. Given this condition, our results in the following section remain valid even under the relaxed moment assumptions on the covariates. Nevertheless, we have chosen not to rely on this condition, as to present our findings in a general form. <cit.> and <cit.> allow for heavy-tailed errors, but require them to be independent of covariates and to be sub-Gaussian, respectively. assumption There exists some q_x≥2 and a constant K_x≥1 satisfying that ([|u^⊤Σ^-1/2X_i|^q_x])^1/q_x≤ K_x, for i=1,…,n and u∈^d-1. There exists a constant K_x>0 such that [exp(|u^⊤Σ^-1X_i|^2/K_x^2)]≤2, for all i=1,…,n and u∈^d-1. There exists some q_x≥2 and a constant K_x≥1 satisfying that ([|u^⊤Σ^-1/2X_i|^q_x])^1/q_x≤ K_x, for i=1,…,n and u∈^d-1. Furthermore, Σ^-1/2X_i∈^d, i=1,…,n, have d independent entries. Assumption <ref> is a finite moment assumption on covariates, and this is a significant weakening of sub-Gaussianity in Assumption <ref>, which is commonly used in the literature. Assumption <ref> is more restrictive than <ref> as it supposes that Σ^-1/2-normalized covariates are independent, and is often used in random matrix theory literature. In the next section, we describe the results under three different assumptions on covariates. In Assumption <ref> and <ref>, the least number of moments of covariates is determined by q_x. It is common to impose q_x≥ 4 when studying the behavior of the OLS estimator with increasing dimensions. However, we may require a higher moment condition to ensure a sharper convergence rate of the bias-corrected least square estimator and a less restrictive dimension requirement for consistency. Let V:=Var[n^-1/2∑_i=1^n X_i(Y_i-X_i^⊤β)]. There exist constants 0<λ≤λ <∞ satisfying that λ≤ u^⊤(Σ^1/2V^-1Σ^1/2)u≤λ, for all u∈^d-1. Assumption <ref> ensures that V and Σ scale in the same order. This assumption appears to be unavoidable to prohibit the asymptotic variance of the bias-corrected estimator from inflating or vanishing. Noting that V=[XX^⊤(Y-X^⊤β)^2] = [XX^⊤[(Y-X^⊤β)^2|X]], the condition (<ref>) is sufficient for Assumption <ref> to be held. §.§ Main Result In this section, we present an approximation of the distribution of √(n)c^⊤(β̂-β). We begin by approximating the sample Gram matrix Σ̂ using Taylor series expansion as Σ̂^-1 = Σ^-1 +Σ̂^-1(Σ-Σ̂)Σ^-1 ≈ Σ^-1 +Σ^-1(Σ-Σ̂)Σ^-1. (See Lemma 33 of <cit.>.) 
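As a quick sanity check of this expansion before continuing with the derivation, the following sketch (ours; the standard normal design, sample size, and seed are arbitrary illustrative choices) confirms numerically that replacing Σ̂^-1 by Σ^-1 in the correction term only introduces an error of second order in the operator norm of Σ-Σ̂.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 2000, 50
    X = rng.standard_normal((n, d))      # illustrative covariates with Sigma = I_d
    Sigma = np.eye(d)                    # population Gram matrix, known by construction
    Sigma_hat = X.T @ X / n              # sample Gram matrix

    Sigma_inv = np.linalg.inv(Sigma)
    Sigma_hat_inv = np.linalg.inv(Sigma_hat)

    # first-order expansion of Sigma_hat^{-1} around Sigma^{-1}
    first_order = Sigma_inv + Sigma_inv @ (Sigma - Sigma_hat) @ Sigma_inv

    err_first = np.linalg.norm(Sigma_hat_inv - first_order, ord=2)   # spectral norm
    delta = np.linalg.norm(Sigma - Sigma_hat, ord=2)
    print(err_first, delta**2)           # err_first is of the same order as delta**2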
Consequently, we get the following approximation of β̂-β: β̂- β = Σ̂^-11/n∑_i=1^n X_i(Y_i-X_i^⊤β) ≈ Σ^-11/n∑_i=1^n X_i(Y_i-X_i^⊤β) + Σ^-1(Σ-Σ̂)Σ^-11/n∑_i=1^n X_i(Y_i-X_i^⊤β). The term (<ref>) is the first-order approximation and represents the influence function. Using the conventional large sample approximation for the case when d = o(√(n)), the variability of β̂ can be well-represented only through the term (<ref>), and the second order term (<ref>) or higher order terms are negligible. Here, however, the term (<ref>) may create a non-vanishing bias when d≫√(n). To see this, we note that Σ^-1(Σ-Σ̂)Σ^-11/n∑_i=1^n X_i(Y_i-X_i^⊤β) = (1/n∑_i=1^n Σ^-1(Σ-X_iX_i^⊤)Σ^-1)(1/n∑_i=1^n X_i(Y_i-X_i^⊤β)) = 1/n^2∑_i=1^n Σ^-1X_i(Y_i-X_i^⊤β) -1/n^2∑_i=1^n Σ^-1X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2 +1/n^2∑_1≤ i≠ j≤ nΣ^-1(Σ-X_iX_i^⊤)Σ^-1X_j(Y_j-X_j^⊤β). The quantities in (<ref>) and (<ref>) are resulted from multiplying the individual terms in (<ref>) with the same index, while the quantity (<ref>) corresponds to the cross-index product. It is noteworthy that (<ref>) is the average of n independent mean-zero random variables and the quantity (<ref>) is a degenerate (that is, mean-zero) U-statistic of order 2 up to a scaling factor that converges to 1 as n→∞. Therefore, the only quantity that may have a non-zero mean is (<ref>) and we denote this by :=-1/n^2∑_i=1^nΣ^-1X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2. Fix any c∈^d with c_Σ^-1≤1. Under the control of c^⊤Σ^-1X_i and Y_i-X_i^⊤β using Assumption <ref> and <ref>, respectively, the quantity c^⊤ scales like O(d/n) due the X_i_Σ^-1^2 factor. Consequently, √(n)c^⊤ yields a non-degenerate bias when d≫√(n). By manually removing the bias, a linear contrast c^⊤(β̂--β) can be approximated by the weighted sum of first-order and the second-order U-statistics as c^⊤(β̂--β) ≈ 1/n∑_i=1^nc^⊤ψ(X_i,Y_i)+1/n(n-1)∑_1≤ i≠ j≤ nc^⊤ϕ(X_i,Y_i,X_j,Y_j), where ψ(x,y) = (1+1/n)Σ^-1x(y-x^⊤β) ϕ(x,y,x',y') = (1-1/n) Σ^-1(Σ-xx^⊤)Σ^-1x'(y'-x'^⊤β). It is well known that U-statistics of order k≥ 2 are asymptotically normally distributed, and there has been a vast literature related to normal approximations and the rates of convergence for k-order U-statistics. We refer the readers to <cit.>. In order to state our first theorem, we define σ_c^2 = Var[c^⊤ψ(X_1,Y_1)], κ_c = n^-5/2n2[{c^⊤ψ(X_1,Y_1)}{c^⊤ψ(X_2,Y_2)}{c^⊤ϕ(X_1,Y_1,X_2,Y_2)}]/σ_c^3, for any c∈^d with c_Σ^-1=1. In addition, let Φ(x) be the cumulative distribution function of the standard normal distribution and let Φ^(3)(x) be the third order derivative of Φ(x). Theorem <ref> states a Berry-Essen bound for the normal approximation of a linear contrast c^⊤(β̂--β) with known bias . Suppose that Assumption <ref> holds for q≥ 3 and that Assumption <ref> holds. (i) Suppose further that Assumption <ref> holds for q_x≥ 4 and s:=(1/q+1/q_x)^-1≥3. Then, there exists a constant C=C(q,q_x,λ,K_x,K_y) such that sup_c∈^d: c_Σ^-1=1sup_t∈[√(n){c^⊤(β̂--β)}≤ t]-{Φ(σ_ct)+κ_cΦ^(3)(σ_ct)} ≤ C{(d/n^4/5-8/(5q_x))^5q_x/(2q_x+8)+d^5/2-4/q_x/n^2-4/q_xlog^8(n/d)+d^3/2/n}(1∨√(log (en)/d)) +C{(1/n^1/2∨d/n)+(d/n^1-2/q_x)^q_x/2}. (ii) Suppose further that Assumption <ref> holds. Then, there exists a constant C=C(q,λ,K_x,K_y) such that sup_c∈^d: c_Σ^-1=1sup_t∈[√(n){c^⊤(β̂--β)}≤ t]-{Φ(σ_ct)+κ_cΦ^(3)(σ_ct)}≤ C(d^3/2/n∨dlog^1/2 n/n). (iii) Suppose further that Assumption <ref> holds for q_x≥ 4 and s≥3. 
Then, there exists a constant C=C(q,q_x,λ,K_x,K_y) such that sup_c∈^d: c_Σ^-1=1sup_t∈[√(n){c^⊤(β̂--β)}≤ t]-{Φ(σ_ct)+κ_cΦ^(3)(σ_ct)} ≤ C((d^3/2/n)^1-4/(q_x+4)+d^5/2-4/q_x/n^2-4/q_xlog^8(n/d)+d^3/2/n) (1∨√(log (en)/d)) +C(1/n^1/2∨d/n). Furthermore, if (3/q_x+1/q)^-1≥ 2, then the same results hold true even when replacing β̂- with β̂. The conventional Berry Esseen type inequality implies the proximity between the distribution of the law of the estimator and the normal distribution. In contrast, we show a distribution approximation to the “adjusted” normal distribution, allowing a more precise representation of the approximation. The degree of adjustment is determined by the parameter κ_c, which turns out to scale as O(√(d/n)) (see Lemma <ref>). Consequently, we can enhance the Berry-Esseen bound for the normal approximation by incorporating an additional term of O(√(d/n)) to the existing bounds in Theorem <ref>, but yielding a slower convergence rate. More importantly, if the model is correctly specified, i.e., [Y|X]=X^⊤β, then it follows that κ_c vanishes from its definition in (<ref>). Consequently, in this case, the presented Berry-Esseen bound in Theorem <ref> can be employed for the normal approximation as well. In the last part of Theorem <ref>, it is noteworthy that the OLSE does not necessarily require de-biasing under Assumption <ref> with a sufficiently large finite moment of covariates and errors. To provide an intuition for this, let us consider the bias term c^⊤ in (<ref>), which scales with its expected value [c^⊤]. By examining the definition of the projection parameter, we can express [c^⊤] as the covariance between two random variables: [c^⊤] = -1/n [c^⊤Σ^-1X(Y-X^⊤β)X_Σ^-1^2] = -1/n Cov(c^⊤Σ^-1X(Y-X^⊤β), X_Σ^-1^2). An application of Cauchy Schwarz's inequality yields that [c^⊤]≤1/n( Var[c^⊤Σ^-1X(Y-X^⊤β)])^1/2( Var[ X_Σ^-1^2])^1/2. The leading variance term on the right-hand side of (<ref>) is O(1) under the finite moment assumptions of covariates and errors. On contrary, the quantity Var[ X_Σ^-1^2] scales as O(d^2) in general, resulting in |[c^⊤]|=O(d/n). However, when an additional assumption of independent entries of covariates, i.e., Assumption <ref>, is satisfied, X_Σ^-1^2 has an approximate χ^2_d distribution, which significantly reduces the magnitude of the right-hand side of (<ref>) to O(√(d)/n). Consequently, the expected bias (when scaled by √(n)) is negligible if d=o(n). Theorem <ref> gives the Berry Essen bound for the linear contrast under three different distributional assumptions on covariates. The first part is the most general and relies on finite-moment assumptions. Ignoring the poly-logarithmic factor, the Berry-Esseen bound tends to zero as long as d=o(n^min{2/3,4/5-8/(5q_x)}). If q_x≥12, this reduces to d=o(n^2/3) and meets the dimension requirement for the consistency under Assumption <ref> of sub-Gaussianity. Moreover, the dimension requirement for consistency under Assumption <ref> is d=o(n^2/3) for all q_x≥4. The result given in Theorem <ref> is mostly of theoretical importance as the inequality yet involves unknown parameters through bias and variance σ_c^2. Therefore, to make an inferential statement on β, for e.g. building confidence region, it seems necessary to incorporate the estimates of bias and variance in the Berry Esseen bound given in Theorem <ref>. To this effect, we consider a method of moment estimator for defined as = -1/n^2∑_i=1^n Σ̂^-1X_i(Y_i-X_i^⊤β̂)X_i_Σ̂^-1^2. 
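For concreteness, a minimal implementation sketch of this plug-in bias estimate and of the resulting bias-corrected estimator is given below. The sketch is ours and purely illustrative: the function name and interface are not part of the formal development, and explicit matrix inverses are formed for readability rather than numerical stability (a Cholesky solve would be preferable in practice).

    import numpy as np

    def bias_corrected_ols(X, Y):
        """Return the OLSE and the bias-corrected OLSE for the projection parameter.

        X : (n, d) array of covariates, Y : (n,) array of responses.
        """
        n, d = X.shape
        Sigma_hat = X.T @ X / n                     # sample Gram matrix
        Sigma_hat_inv = np.linalg.inv(Sigma_hat)
        beta_hat = Sigma_hat_inv @ (X.T @ Y / n)    # OLSE

        resid = Y - X @ beta_hat                    # residuals Y_i - X_i^T beta_hat
        lev = np.einsum('ij,jk,ik->i', X, Sigma_hat_inv, X)   # X_i^T Sigma_hat^{-1} X_i
        # method-of-moments estimate of the bias (note the minus sign in its definition)
        bias_hat = -(Sigma_hat_inv @ (X.T @ (resid * lev))) / n**2
        return beta_hat, beta_hat - bias_hat        # (OLSE, bias-corrected OLSE)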
The next result provides the consistency rate for the bias estimate in high dimensions and under mild moment conditions. As elucidated in Remark <ref>, the OLSE does not necessitate a debiasing procedure under Assumption <ref>, the following result considers the cases under Assumption <ref> and <ref> only. Suppose that Assumption <ref> holds for q≥3 and that Assumption <ref> holds. Fix c∈^d with c_Σ^-1=1. (i) Suppose further that Assumption <ref> holds for q_x≥8, s≥3, and 3/q_x+1/q≤ 1/2. If d+(s-2)log (4n)≤ n/(14K_x)^2, then there exists a constant C=C(q,q_x,λ,K_x,K_y) such that for all δ∈(0,1), √(n)c^⊤(-)≤ C[d^2δ^-2/q_x/n^3/2-2/q_x+d^3/2/n+d/nlog^1/2(en)+d^2-2/q_x/n^3/2-2/q_xlog^4(n/d)] + Clog(ed)[d^7/2δ^-(1/2+2/q_x)/n^5/2-2/q_x+d^7/2-2/q_xδ^-1/2/n^5/2-2/q_xlog^4(n/d)+d^3δ^-1/2/n^2], occurs with a probability at least 1-(1/n)^s/2-1-d/n-(d/n^1-2/q_x)^q_x/8-δ. (ii) Suppose further that Assumption <ref> holds. Then, there exists a constant C=C(q,λ,K_x,K_y) such that √(n)c^⊤(-)≤ C(d^3/2/n+d^3log(ed)δ^-1/2/n^2), occurs with a probability at least 1-(1/n)^s/2-1-d/n-e^-d-δ. Theorem <ref> provides the finite sample concentration inequality of the estimated bias. Inspecting the result under Assumption <ref>, with the choice of δ=(d/n^3/4-1/q_x)^q_x/4∨d^3/n^2, the bias estimate is √(n)-consistent if d=o(n^min{2/3,3/4-1/q_x}), ignoring the logarithmic factor. If q_x≥12, then δ=d^3/n^2 only requires d=o(n^2/3). If Assumption <ref> is replaced with Assumption <ref>, then the dimension requirement for the consistency becomes d=o(n^2/3) ignoring the poly-logarithmic factor. One may consider a variance estimate for σ_c^2. However, the variance σ_c^2 may not be consistently estimable if d≫√(n). In particular, <cit.> presented that for the consistency of the method of moment estimator for σ_c^2, a.k.a the sandwich variance estimate, the dimension scaling of d=o(√(n)) seems unavoidable. See Lemma 8 of <cit.>. Nevertheless, Assumption <ref> prevents the variance σ_c^2 from degenerating and inflating, we can still utilize inferential methods that does not employ the variance estimate, which will be delivered in Section <ref>. Combining the error bound for the bias estimate from Theorem <ref> with the Berry-Esseen bound in Theorem <ref>, we immediately get to the main result, a uniform Berry-Esseen bound for the bias-corrected OLS estimator. We notate the bias-corrected estimator for the projection paratmeter as β̂_ bc:=β̂-. (i) Suppose that assumptions made in Theorem <ref>(i) hold. Then, there exists a constant C=C(q,q_x,λ,K_x,K_y) such that sup_t∈[√(n)c^⊤(β̂_ bc-β)≤ t]-{Φ(σ_ct)+κ_cΦ^(3)(σ_ct)}) ≤ C((d/n^4/5-8/(5q_x))^5q_x/(2q_x+8)+d^5/2-4/q_x/n^2-4/q_xlog^8(n/d)+dlog^1/2(en)/n) + Clog(en)[(d/n^3/4-1/q_x)^3/2+d^3/2/n+d^2-2/q_x/n^3/2-2/q_xlog^4(n/d)] + C((1/√(n)∨d/n)+ (d/n^1-2/q_x)^q_x/2), for any c∈^d with c_Σ^-1=1. (ii) Suppose that the assumptions made in Theorem <ref>(ii) hold. Then, there exists a constant C=C(q,λ,K_x,K_y) such that sup_c∈^d: c_Σ^-1=1sup_t∈[√(n)c^⊤(β̂_ bc-β)≤ t]-{Φ(σ_ct)+κ_cΦ^(3)(σ_ct)}≤ C d^3/2log (ed)/n, for any c∈^d with c_Σ^-1=1. Under the moment condition of Assumption <ref>, our bound converges to zero if d=o(n^min{2/3,4/5-8/q_x}), disregarding the poly-logarithmic factor. This reduces to d=o(n^2/3) when q_x≥ 12. Furthermore, if the covariates are sub-Gaussian, the Berry Esseen bound tends to zero if d=o(n^2/3), again ignoring the poly-logarithmic factor. 
Comparing the Berry Essen bound of the bias-corrected estimator to the bound given in Theorem 1 when the bias is known, we observe that using the bias estimate does not impose a stricter dimension requirement. This is due to the fact that the dimension requirement for the √(n)-consistency of the bias estimate is less strict than that given in Theorem 1, as shown by the inequality: min{2/3,4/5-8/(5q_x)}≤ min{2/3,3/4-1/q_x} q_x≥4. If q_x≥ 12, then the quantities on both sides coincide with 2/3, and thus, the bound converges to zero if d=o(n^2/3). The following corollary presents a simpler Berry Essen bound for the bias-corrected OLS estimator. We use A≲ B as a shorthand for the inequality A≤ C_n,d B, where C_n,d involves only the constants and log-polynomial factors of n and d. Suppose that the assumptions made in Theorem <ref>(i) hold. Furthermore, if q_x≥ 12 and d = o(n^2/3), then sup_c∈^d: c_Σ^-1=1sup_t∈[√(n)(c^⊤β̂--c^⊤β)≤ t]-{Φ(σ_ct)+κ_cΦ^(3)(σ_ct)}≲1/√(n)∨(d/n^3/4-1/q_x)^3/2∨d^3/2/n. § CONFIDENCE REGION FOR THE PROJECTION PARAMETER In the previous section, we have shown that the bias-corrected least square estimator is asymptotically normal distributed. However, as discussed previously, the asymptotic variance may not be consistently estimated if d≫√(n). In this section, we present three methods that do not require a consistent estimate of the asymptotic variance but still enable us to get the confidence intervals for the projection parameter. §.§ HulC The HulC <cit.> is a statistical method that computes confidence regions for parameters by constructing convex hulls around a set of estimates. Suppose that we split a sample of n i.i.d observations (X_1,Y_1),…,(X_n,Y_n) into B batches where each batch contains at least ⌊ n/B⌋ observations. Denote the bias-corrected least square estimators obtained from b:th batch as β̂_ bc^(b) for b=1,…,B. For any c∈^d, define the maximum median bias <cit.> for β as Δ_c := max_1≤ b≤ B(1/2-min{(c^⊤(β̂_ bc^(b)-β)>0),(c^⊤(β̂_ bc^(b)-β)<0)}). A key idea of the HulC is that the event min_1≤ b≤ Bc^⊤β̂_ bc^(b)≤ c^⊤β≤max_1≤ b≤ Bc^⊤β̂_ bc^(b) occurs unless either of the followings happens: (1) c^⊤(β̂_ bc^(b)-β)>0 for all 1≤ b≤ B or (2) c^⊤(β̂_ bc^(b)-β)<0 for all 1≤ b≤ B. Consequently, we get (min_1≤ b≤ Bc^⊤β̂_ bc^(b)≤ c^⊤β≤max_1≤ b≤ Bc^⊤β̂_ bc^(b))≥ 1-(1/2-Δ_c)^B-(1/2+Δ_c)^B. Since the bias-corrected OLS estimator is asymptotically normal as shown in Corollory <ref>, Δ_c converges to the asymptotic median, which is 0. The detailed procedure for constructing confidence interval at level α is described in Algorithm <ref>. Suppose that the assumptions made in Theorem <ref>(i) hold. For any α∈(0,1) and c∈^d with c_Σ^-1=1, let B=⌈log (2/α)⌉ and assume d≤ n/B. Let CI^ HulC_α be the confidence interval returned by Algorithm <ref>. Then, there exists a constant C=C(q,q_x,λ,K_x,K_y) such that (c^⊤β∈ CI^ HulC_α)≤α(1+B^2η_n/B), where η_n = C((d/n^4/5-8/(5q_x))^5q_x/(q_x+4)+d^5-8/q_x/n^4-8/q_xlog^16(n/d)+d^2log(en)/n^2) + Clog^2(en)[(d/n^3/4-1/q_x)^3+d^3/n^2+d^4-4/q_x/n^3-4/q_xlog^8(n/d)] + C(d/n+ (d/n^1-2/q_x)^q_x). §.§ -statistic based Inference We have proved that if a certain dimension requirement is fulfilled, then the bias-corrected OLS estimator converges to the normal distribution. As in the previous section, consider B partitions of n observations. Here, the number of batches B does not necessarily depend on the target coverage 1-α, and suppose that each batch has at least ⌊ n/B⌋ observations. 
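Both the HulC interval of Algorithm <ref> and the t-statistic interval derived in the remainder of this subsection rest on the same batch-splitting idea, which the following sketch (ours) renders schematically. It assumes the illustrative helper bias_corrected_ols from the sketch accompanying the definition of the bias estimate above, takes the logarithm in B=⌈log(2/α)⌉ to base 2 (which is what the convex-hull coverage bound requires), and uses an arbitrary number of batches B_t for the t-interval; each batch must still contain more observations than dimensions.

    import numpy as np
    from scipy import stats

    def batch_split_cis(X, Y, c, alpha=0.05, B_t=10):
        """Schematic HulC and t-statistic confidence intervals for c^T beta."""
        n = len(Y)
        rng = np.random.default_rng(0)

        # HulC: B = ceil(log2(2/alpha)) batches, CI = convex hull of batch estimates
        B = int(np.ceil(np.log2(2 / alpha)))
        idx = np.array_split(rng.permutation(n), B)
        est = [c @ bias_corrected_ols(X[i], Y[i])[1] for i in idx]
        hulc_ci = (min(est), max(est))

        # t-statistic interval from B_t independent batch estimates
        idx = np.array_split(rng.permutation(n), B_t)
        est = np.array([c @ bias_corrected_ols(X[i], Y[i])[1] for i in idx])
        centre, s = est.mean(), est.std(ddof=1)
        t_quant = stats.t.ppf(1 - alpha / 2, df=B_t - 1)
        t_ci = (centre - t_quant * s / np.sqrt(B_t), centre + t_quant * s / np.sqrt(B_t))

        return hulc_ci, t_ci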
Denote by β̂_ bc^(b) the bias-corrected estimator of β using observations in batch b, 1≤ b≤ B. Since estimators are obtained independently, this amounts to the convergence of the joint distribution; √(n/B)(c^⊤(β̂_ bc^(1)-β),…,c^⊤(β̂_ bc^(B)-β))(0_B,σ^2I_B), as n→∞ for any c∈^d and an asymptotic variance σ^2 that may depend on a choice of c. Hence, we can form an asymptotically valid confidence region based on the t-statistic, T = √(B)c^⊤(β̂_ bc-β)/s_β̂, where β̂_ bc=B^-1∑_b=1^B β̂_ bc^(b) and s_β̂^2 = (B-1)^-1∑_b=1^B{c^⊤(β̂_ bc^(b)-β̂_ bc)}^2. By Continuous Mapping Theorem, the above statistic is asymptotically t-distributed with (B-1) degrees of freedom. By implication, the confidence interval for c^⊤β is given by CI_α^ T:=[c^⊤β̂_ bc - t_α/2,B-1s_β̂/√(B),c^⊤β̂_ bc + t_α/2,B-1s_β̂/√(B)], where t_q,ν denotes the upper qth quantile of t-distribution with ν degrees of freedom. §.§ Bootstrap Inference This section presents a method for carrying out post-bias-correction inference in our scenario, based on a bootstrap approach. Specifically, we utilize the wild bootstrap technique <cit.> to derive a computationally-efficient approximation of the distribution of the bias-corrected OLS estimator. The idea is, as the residual bootstrap, to leave the covariates at their sample value, but to resample the response variable based on the residual values. First, we compute from the original dataset that the OLS estimator β̂ and the residuals ϵ̂_i:=Y_i-X_i^⊤β̂ for i=1,…,n. For i=1,…,n, let ξ_i^* be i.i.d. bootstrap weights such that ξ_i^* =0, ξ_i^*^2=1, and ξ_i^*^3=1. Then, we generate a bootstrap sample of n observations (X_1,Y_1^*),…,(X_n,Y_n^*) by letting Y_i^* = X_i^⊤β̂+ϵ̂_iξ_i^*, for i=1,…,n. Although Bootstrap weights are often drawn from the standard normal distribution, we made use of the asymmetric weights in Section <ref>, which was proposed in <cit.>; (ξ^*=-(√(5)-1)/2)=(√(5)+1)/(2√(5)) and (ξ^*=(√(5)+1)/2)=(√(5)-1)/(2√(5)). Let estimator β̂^*_ bc be the bias-corrected least square estimator based on bootstrap observations, β̂^*_ bc = β̂^* - ^*, where β̂^* and ^* are bootstrap counterparts of the OLS estimator and estimated bias, respectively; β̂^*=Σ̂^-11/n∑_i=1^nX_iY_i^* ^* = -1/n^2∑_i=1^n Σ̂^-1X_i(Y_i^*-X_i^⊤β̂^*)X_i_Σ̂^-1^2. To construct the confidence interval, let T_b^* = β̂_ bc^*,(b)-β̂ where β̂_ bc^*,(b) is the bootstrap bias-corrected estimator in b:th simulation. The wild bootstrap level α bias-corrected confidence interval for c^⊤β is given by CI_α^ WBS = [β̂_ bc -q̂^*_1-α/2, β̂_ bc -q̂^*_α/2], where q̂^*_α=inf{t∈:F̂^*(t)≥α} is the empirical upper α:th quantile of {T_b^*:1≤ b ≤ B} with F̂^*(t) = 1/B∑_b=1^B 1(T_b^*≤ t). The validity of the above wild bootstrap inference may require additional dimension requirements. Theorem 1 of <cit.> shows that wild bootstrap provides a consistent estimation of the unconditional distribution of √(n)(β̂-β) in a well-specified linear regression model, without any bias correction, under the condition that d = o(√(n)). More recently, Theorem 11 of <cit.> establishes a finite sample approximation of the conditional bootstrap distribution of the law of OLSE in a similar setting to ours. In particular, the proof utilizes the Gaussian comparison bound <cit.> to compare the conditional distribution of the bootstrap estimate with the normal distribution. 
However, the estimation of asymptotic variance appears to be inevitable in the process, and this adds an additional dimension requirement of d = o(√(n)), particularly when q_x≥ 8 and s≥ 4 (see Lemma 10 of <cit.>). Furthermore, although not discussed in a linear regression context, Lemma 4.1 of <cit.> gives an example of a sequence of random variables that justifies the necessity of the condition d= o(√(n)) for the consistency of the wild bootstrap when estimating sample mean. § NUMERICAL STUDY In this section, we compare the empirical coverages and widths of the 95% confidence intervals obtained from three inferential methods discussed in Section <ref>. Furthermore, we also compare the bootstrap confidence interval based on the jackknife-debiased OLS proposed in <cit.>. In our specific simulation settings, which will be described shortly, we discovered that when leveraged with the bootstrap inferential methods, the jackknife-based debiased OLSE slightly outperforms the OLSE and the proposed bias-corrected OLSE. A large set of implementations, including both resampling bootstrap- and wild bootstrap-based confidence intervals using the OLS and the proposed debiased OLS, are presented in Appendix <ref>. §.§ Well-specified Linear Model This section concerns the well-specified linear model. The simulation setting is as follows: for n∈1000, 2000 and d∈20k:1≤ k≤ 24, independent observations (X_i,Y_i), 1≤ i ≤ n, are generated from X_i∼(0_d, I_d), ϵ_i∼(0, 1),Y_i = 2X_i(1)+ϵ_i, where X_i(1) is the first coordinate of X_i. Thus, the response Y_i relies only on the first dimension of X_i and independent error ϵ_i. As the given model is well-specified, the projection parameter is given by β=(2,0,0,…,0)^⊤∈^d. Figure <ref> compares the empirical coverage and the width of the 95% confidence intervals of the first coordinate of β, which is 2, obtained from various methods. It is noteworthy that our bias-corrected estimator when incorporated with HULC achieves the target coverage of 0.95. On the contrary, the t-statistic-based confidence interval seems to be fairly conservative while the wild bootstrap inference yields a seemingly less conservative confidence interval. §.§ Misspecified Model In this section, we concern about a misspecified non-linear model. The data-generating process for individual observation is as follows. We first independently generate two d-dimensional Gaussian random vector Z∼(0_d,I_d) W∼(0_d,Σ_ρ). Here, the covariance matrix of W, Σ_ρ, is a compound symmetry where the diagonal entries are all 1 and the off-diagonal elements are all ρ for ρ∈[0,1). That is, Σ_ρ = (1-ρ)I_d + ρ 1_d1_d^⊤. We define the covariate vector as X = Z⊙ W where ⊙ denotes the entry-wise product. Then, the entries of X are uncorrelated, i.e., [XX^⊤]=I_d, but not necessarily independent except the case when ρ=0. For a d-dimensional parameter θ∈^d, we let Y=(X^⊤θ)^3+ϵ, where ϵ∼(0,1) and ϵ is independent with Z and W. Under the aforementioned data-generating process, we conducted comprehensive experiments with the following sample sizes, dimensions, θ, and ρ; n ∈ 1000, 2000, 5000, 10000, 20000, d ∈ 50k: 1≤ k ≤ 19, θ ∈ (1,0,…,0)^⊤, 1_d/√(d)⊂^d, ρ ∈ 0, 0.2, 0.5. Under the model (<ref>), the projection parameter β has the closed-form representation and turns out to be determined by θ and ρ as, β = 3(1+2ρ^2)θ^2_2θ + 6(1-ρ^2)θ^⊙ 3, where θ^⊙ 3 = θ⊙θ⊙θ (see Lemma <ref>). 
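The data-generating process above, together with the closed-form projection parameter, can be verified numerically; the following sketch (ours; sample size, seed, and variable names are arbitrary illustrative choices) estimates β by a large-sample OLS fit and compares it with the displayed formula.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, rho = 200_000, 10, 0.5
    theta = np.zeros(d); theta[0] = 1.0              # theta = (1, 0, ..., 0)

    Z = rng.standard_normal((n, d))
    Sigma_rho = (1 - rho) * np.eye(d) + rho * np.ones((d, d))   # compound symmetry
    W = rng.multivariate_normal(np.zeros(d), Sigma_rho, size=n)
    X = Z * W                                        # entrywise product, uncorrelated entries
    Y = (X @ theta) ** 3 + rng.standard_normal(n)

    beta_mc = np.linalg.solve(X.T @ X, X.T @ Y)      # large-sample OLS, approximates beta
    beta_formula = 3 * (1 + 2 * rho**2) * (theta @ theta) * theta + 6 * (1 - rho**2) * theta**3
    print(np.round(beta_mc[:3], 2), beta_formula[:3])   # first coordinate approximately 9 for theta = e_1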
The linear contrast of interest was set to θ^⊤β which becomes the first coordinate of β when θ=(1,0,…,0)^⊤ and becomes the scaled average of coefficients of β when θ=1_d/√(d). Furthermore, we can show that θ^⊤β= 9, θ=(1,0,…,0)^⊤, 3(1+2ρ^2)+6(1-ρ^2)/d, θ=1_d^⊤/√(d); . Figure <ref> compares the coverage and length of the 95% confidence intervals for θ^⊤β under the setting where n=20000, θ=(1,0,…,0)^⊤, and ρ=0 or 0.5. Results under different combinations of sample size, dimension, θ, and ρ are contained in Appendix <ref>. § DISCUSSION We provide an estimator for the bias of the ordinary least squares estimator that is consistent as long as the dimension d grows slower than n^2/3, where n is the sample size. The resulting debiased least squares estimator, after proper normalization, is asymptotically normal at an n^1/2-rate as long as d = o(n^2/3). We also provide valid inference along any arbitrary direction for the projection parameter without having to estimate the variance. We achieve this inferential goal by leveraging the methods such as HulC. We believe the results of this paper can be extended to further expand the allowed growth rate of dimension. This extension would involve further expansion of the least squares estimator into higher order U-statistics and removing the bias. Acknowledgements. A. K. Kuchibhotla and A. Rinaldo were partially supported by NSF DMS-2113611. plainnat § PROOFS §.§ Proof of Theorem <ref> Recall that the OLS estimator can be expressed as β̂= β + Σ̂^-11/n∑_i=1^nX_i(Y_i-X_i^⊤β). We note that Σ̂^-1-Σ^-1 = Σ̂^-1(Σ-Σ̂)Σ^-1 = Σ^-1(Σ-Σ̂)Σ^-1+(Σ̂^-1-Σ^-1)(Σ-Σ̂)Σ^-1 = Σ^-1(Σ-Σ̂)Σ^-1+Σ̂^-1(Σ-Σ̂)Σ^-1(Σ-Σ̂)Σ^-1. Combining this with the OLS expression yields β̂-β = Σ^-1∑_i=1^nX_i(Y_i-X_iβ)+Σ^-1(Σ-Σ̂)Σ^-1∑_i=1^nX_i(Y_i-X_i^⊤β) +Σ̂^-1(Σ-Σ̂)Σ^-1(Σ-Σ̂)Σ^-1∑_i=1^nX_i(Y_i-X_i^⊤β). =: + +. As in (<ref>), the low order approximation (<ref>) of β̂-β can be expressed as + where is the bias defined in (<ref>) and is a sum of the first and the second order U-statistics, = (1+1/n)∑_i=1^nψ(X_i,Y_i)+1/n(n-1)∑_1≤ i≠ j≤ nϕ(X_i,Y_i,X_j,Y_j). The approximation error (<ref>)is denoted as . The proof comprises two parts; (1) an approximation of the distribution of √(n)c^⊤(β̂-β-) to that of √(n) c^⊤ and (2) an approximation of the distribution of √(n) c^⊤ to a normal distribution. For any x∈ and ϵ>0, (√(n)c^⊤(β̂-β-)≤ x) = (√(n)c^⊤(+)≤ x) ≤ (√(n) c^⊤≤ x+ϵ)+(√(n)c^⊤>ϵ). The results of Lemma <ref> yields that sup_x∈(√(n) c^⊤≤ x+ϵ)-N_c(σ_c(x+ϵ))≤Δ_n, where N_c(x) = Φ(x)-κ_cΦ(x) (see (<ref>) for the definition of κ_c) and Δ_n=O(n^-1/2∨ d/n). Hence, we get (√(n) c^⊤≤ x+ϵ) ≤ N_c(σ_c(x+ϵ)) +Δ_n ≤ N(σ_cx)+σ_c^2ϵsup_x∈N_c'(x) +Δ_n. Combining this with (<ref>) yields (√(n)c^⊤(β̂-β-)≤ x)-N_c(σ_cx) ≤σ_c^2ϵsup_x∈N_c'(x) +Δ_n + (√(n)c^⊤>ϵ). Similarly, we can obtain N_c(σ_cx)-(√(n)c^⊤(β̂-β-)≤ x)≤σ_c^2ϵsup_x∈N_c'(x) +Δ_n + (√(n)c^⊤>ϵ), and thus, (√(n)c^⊤(β̂-β-)≤ x)-N_c(σ_cx)≤σ_c^2ϵsup_x∈N_c'(x) +Δ_n + (√(n)c^⊤>ϵ). Since N_c(x)=Φ(x)-κ_cΦ^(3)(x), the absolute value of its derivative N'(x) attains the maximum value (1+3κ_c)/√(2π) at 0. Furthermore, Lemma <ref> proves that κ_c≤K_x^3K_y/2σ_c√(d/n). We now claim that κ_c=O(√(d/n)). First, Cauchy Schwarz inequality yields [{c^⊤ψ(X_1,Y_1)}{c^⊤ψ(X_2,Y_2)}{c^⊤ϕ(X_1,Y_1,X_2,Y_2)}] ≤ [{c^⊤ψ(X_1,Y_1)}{c^⊤ψ(X_2,Y_2)}{c^⊤ϕ(X_1,Y_1,X_2,Y_2)}] ≤ (c^⊤ψ(X_1,Y_1)^2c^⊤ψ(X_2,Y_2)^2)^1/2( c^⊤ϕ(X_1,Y_1,X_2,Y_2)^2)^1/2 = σ_c^2( c^⊤ϕ(X_1,Y_1,X_2,Y_2)^2)^1/2. Lemma <ref> shows that c^⊤ϕ(X_1,Y_1,X_2,Y_2)^2≤ K_x^6K_y^2d leading to that κ_c ≤K_x^3K_y/σ_c√(d/n). 
Consequently, we get (√(n)c^⊤(β̂-β-)≤ x)-N_c(σ_cx) ≤Δ_n + ϵ/√(2π)(σ_c^2+3σ_cK_x^3K_y√(d/n)) + (√(n)c^⊤>ϵ) ≤Δ_n + ϵ/√(2π)(4λ^-1+6λ^-1/2K_x^3K_y√(d/n)) + (√(n)c^⊤>ϵ) ≤Δ_n + Cϵ +(|√(n)c^⊤|>ϵ), where C = (2π)^-1/2(4λ^-1+6λ^-1/2K_x^3K_y). The next-to-last inequality holds because σ_c^2≤ (1+1/n)^2λ^-1 from Assumption <ref>, and we used d≤ n for the last inequality. It is noteworthy that Lemma <ref>–<ref> establishes the concentration inequality for the quantity √(n)c^⊤ under three distinct moment assumptions on covariates. Three scenarios require different choices of ϵ. Under Assumption <ref> We take the right-hand side of (<ref>) as ϵ = ϵ(δ). Then, we get (√(n)c^⊤(β̂-β-)≤ x)-N_c(σ_cx)≤Δ_n + Cϵ(δ) +δ, for any x∈ and c∈ℝ^d with c_Σ^-1=1. Furthermore, the choice of δ = (d/n^4/5-8/(5q_x))^5q_x/(2q_x+8), yields the result. Under Assumption <ref> With the result under the sub-Gaussianity, we also take the right-hand side of (<ref>) as ϵ = ϵ(δ). The choice of δ = e^-d yields the result. Under Assumption <ref> Taking the right-hand side of (<ref>) as ϵ = ϵ(δ). The choice of δ = (d/n^4/5-8/(5q_x))^5q_x/(2q_x+8), yields the result. Moreover, in this scenario, we claimed that the bias degenerates with the additional condition that (3/q_x+1/q)^-1≥2. To see this, we revisit to inequality in (<ref>) and instead write, (√(n)c^⊤(β̂-β)≤ x) = (√(n)c^⊤(++)≤ x) ≤ (√(n) c^⊤≤ x+ϵ+ϵ') +(√(n)c^⊤>ϵ)++(√(n)c^⊤>ϵ'). This leads to (√(n)c^⊤(β̂-β-)≤ x)-N_c(σ_cx) ≤ Δ_n + Cϵ +(|√(n)c^⊤|>ϵ)+(|√(n)c^⊤|>ϵ) + Cϵ'+(|√(n)c^⊤|>ϵ'). It is noteworthy that (<ref>) can still be controlled in the aforementioned way. The quantity in (<ref>) can be controlled using Lemma 8. Let ϵ'=ϵ'(δ) be the right-hand side in the probability notation in (<ref>), then Cϵ' + (|√(n) c^⊤|>ϵ')≤ C K_x^3K_y√(d/n)(1+1/√(δ))+δ. Finally, choosing δ=(d/n)^1/3 yields the desired result. §.§ Proof of Theorem 2 Throughout the proof, let us fix c to be a d-diemsional vector such that c_Σ^-1=1. To control |c^⊤(-)|, we begin by writing, c^⊤Σ̂^-1X_i = c^⊤Σ^-1X_i + R_1,i, R_1,i=c^⊤(Σ̂^-1-Σ^-1)X_i, (Y_i-X_i^⊤β̂)=(Y_i-X_i^⊤β) + R_2,i, R_2,i=X_i^⊤(β-β̂), X_i_Σ̂^-1^2=X_i_Σ^-1^2(1+R_3,i), R_3,i=X_i_Σ̂^-1^2/X_i_Σ^-1^2-1. Then, |c^⊤(-)| can be bounded by seven distinct quantities, denoted as Rem_k for k=1,…,7. n/dc^⊤(-)≤ ∑_i=1^nR_1,i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d+∑_i=1^nc^⊤Σ^-1X_iR_2,iX_i_Σ^-1^2/d + ∑_i=1^nc^⊤Σ^-1X_i(Y_i-X_i^⊤β)R_3,iX_i_Σ^-1^2/d+ ∑_i=1^nR_1,iR_2,iX_i_Σ^-1^2/d + ∑_i=1^nc^⊤Σ^-1X_iR_2,iR_3,iX_i_Σ^-1^2/d+∑_i=1^nR_1,i(Y_i-X_i^⊤β)R_3,iX_i_Σ^-1^2/d + ∑_i=1^nR_1,iR_2,iR_3,iX_i_Σ^-1^2/d =: ∑_k=1^7 Rem_k. Before analyzing individual remainder terms, we let _Σ = Σ^1/2Σ̂^-1Σ^1/2-I_d_ op. It is noteworthy that the quantity R_3,i can be deterministically bounded by _Σ. Let θ∈^d be an arbitrary d-dimensional vector, and let ω = Σ^-1/2θ. Then, θ^⊤Σ̂^-1θ/θ^⊤Σ^-1θ-1 = ω^⊤(Σ^1/2Σ̂^-1Σ^1/2-I_d)ω/ω^⊤ω-1≤_Σ. By inspecting the relationship between the remainders, we can reduce the number of remainders that we need to handle using, Rem_5 ≤ _Σ Rem_2, Rem_6 ≤ _Σ Rem_1, Rem_7 ≤ _Σ Rem_4. Define the event = _Σ≤ 1. On , we have n/dc^⊤(-)≤ 2∑_k=1^k Rem_k. Now it suffices to shift our attention to the remainder Rem_k for k=1,2,3,4. These terms have been carefully analyzed and controlled in Lemmas <ref>—<ref>. We provide a brief summary of the key results for each remainder term. Rem_1 ≤ C_1_Σ ≥ 1-d/n, Rem_2 ≤ C_2β̂-β_Σ ≥ 1-d/n, Rem_3 ≤ C_3_Σ ≥ 1-1/n, Rem_4 ≤ C_4_Σβ̂-β_Σ(1+dlog (ed)/√(nδ)) ≥ 1-δ. Here, the constants C_i:i=1,2,3,4 only depend on K_x and K_y. 
Consequently, we get n/d|c^⊤(-)|≤ C{_Σ+β̂-β_Σ+_Σβ̂-β_Σ(1+dlog (ed)/√(nδ))}, with probability at least 1-δ-1/n-2d/n-(_Σ > 1), and the constant C only depends on K_x and K_y. Therefore, we only need to control two quantities, _Σ and β̂-β_Σ, and these are done in Proposition <ref> and <ref>, respectively. §.§ Proof of Theorem 3 For a given c, write Ŝ_n=c^⊤(β̂--β) and S̃_n=c^⊤(β̂--β). For any x∈ and ϵ_n>0, we have (Ŝ_n≤ x) ≤(S̃_n≤ x+ϵ)+(|S̃_n-Ŝ_n|≥ϵ_n) ≤ N_2(σ_c(x+ϵ_n))+Δ̃_n+(√(n)|-|≥ϵ_n) ≤ N_2(σ_cx)+σ_cϵ_nN_2'_∞+Δ̃_n+(√(n)|-|≥ϵ_n). Here, Δ̃_n=sup_x∈|(S̃_n≤ x)-N_2(σ_c x)| is controlled by Theorem 2 while (√(n)|-|≥ϵ_n) is bounded using Theorem 3. Specifically, when considering the conditions outlined in Assumption <ref>, taking δ=(d/n^3/4-1/q_x)^q_x/4∧(d^3/n^2) in Theorem 2 yields that √(n)-≤ Cdlog^1/2(en)/n+Clog(ed)[(d/n^3/4-1/q_x)^3/2+d^3/2/n+d^2-2/q_x/n^3/2-2/q_xlog^4(n/d)], holds with a probability at least 1-d/n-(1/n)^s/2-1-(d/n^1-2/q_x)^q_x/8-(d/n^3/4-1/q_x)^q_x/4∧(d^3/n^2). The upper bound is simplified since the most of terms are absorbed in the second term. Taking the right-hand side as the choice of ϵ_n in (<ref>) gives the result. In the case when Assumption <ref> of sub-Gaussianity holds, we can apply similar reasoning by selecting δ=d^3/n^2 in the second part of Theorem 2. §.§ Proof of Theorem 4 Throughout the proof, we fix c∈^d such that c_Σ^-1=1. Theorem 3 proves that sup_t∈(c^⊤(β̂_ bc-β)≤ t))-Φ(t)≤ϵ_n, for some rate ϵ_n. Note that β̂_ bc^(1),…,β̂_ bc^(B) are independently obtained bias-corrected estimators. For any 1≤ b≤ B, we note from Theorem 3 that (c^⊤(β̂_ bc^(b)-β)<0)-1/2≤ϵ_n/B. This implies that the maximum median bias Δ_c can be also bounded as Δ_c≤ϵ_n/B. An application of Theorem 2 of <cit.> results in (β∉ CI_α^ HULC)-α≤α B(B-1)ϵ_n/B^2/2. This completes the proof. § TECHNICAL LEMMAS In this section, we begin by presenting deterministic inequality for the approximation of the OLS estimator. Recall that the OLS estimator can be expressed as β̂= β + Σ̂^-11/n∑_i=1^nX_i(Y_i-X_i^⊤β). We note that Σ̂^-1-Σ^-1 = Σ̂^-1(Σ-Σ̂)Σ^-1 = Σ^-1(Σ-Σ̂)Σ^-1+(Σ̂^-1-Σ^-1)(Σ-Σ̂)Σ^-1 = Σ^-1(Σ-Σ̂)Σ^-1+Σ̂^-1(Σ-Σ̂)Σ^-1(Σ-Σ̂)Σ^-1. This leads to β̂-β = Σ^-1∑_i=1^nX_i(Y_i-X_iβ)+Σ^-1(Σ-Σ̂)Σ^-1∑_i=1^nX_i(Y_i-X_i^⊤β) +Σ̂^-1(Σ-Σ̂)Σ^-1(Σ-Σ̂)Σ^-1∑_i=1^nX_i(Y_i-X_i^⊤β). =: + +. As in (<ref>), the low order approximation (<ref>) of β̂-β can be expressed as + where is the bias defined in (<ref>) and is a sum of the first and the second order U-statistics, = (1+1/n)∑_i=1^nψ(X_i,Y_i)+1/n(n-1)∑_1≤ i≠ j≤ nϕ(X_i,Y_i,X_j,Y_j) The lemma below bounds the approximation error of + for β̂-β. Let _Σ=Σ^-1/2(Σ-Σ̂)Σ^-1/2_ op. On the event _Σ<1, the following deterministic inequality holds for any β∈^d. _Σ≤_Σ^2/1-_Σ∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)_2. It follows that β̂-β-{Σ^-1∑_i=1^nX_i(Y_i-X_iβ)+Σ^-1(Σ-Σ̂)Σ^-1∑_i=1^nX_i(Y_i-X_i^⊤β)}_Σ = Σ^1/2Σ̂^-1(Σ-Σ̂)Σ^-1(Σ-Σ̂)Σ^-1∑_i=1^nX_i(Y_i-X_i^⊤β)_2 ≤ Σ^1/2Σ̂^-1Σ^1/2_ opΣ^-1/2(Σ-Σ̂)Σ^-1/2_ op^2∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)_2 ≤ _Σ^2/(1-_Σ)_+∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)_2. For the last inequality, we used the fact that I-A_ op^-1≤ (1-A_ op)^-1 whenever A_ op<1. From Cauchy Schwarz inequality, we further get c^⊤(β̂-β-{Σ^-1∑_i=1^nX_i(Y_i-X_iβ)+Σ^-1(Σ-Σ̂)Σ^-1∑_i=1^nX_i(Y_i-X_i^⊤β)})_2 ≤ c_Σ^-1β̂-β-{Σ^-1∑_i=1^nX_i(Y_i-X_iβ)+Σ^-1(Σ-Σ̂)Σ^-1∑_i=1^nX_i(Y_i-X_i^⊤β)}_Σ ≤ _Σ^2/(1-_Σ)_+∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)_2. for any c∈^d such that c_Σ^-1≤1. [Concentration inequality for _Σ.] The following concentration inequalities for _Σ hold under different moment assumptions on covariates. 
* Under Assumption <ref> with q_x≥4, there exists a constant C>0 that only depends on q_x that (_Σ≤ C K_x^2[dδ^-2/q_x/n^1-2/q_x+(d/n)^1-2/q_xlog^4(n/d) +√(d/n)])≥ 1-1/n-δ, for all δ∈(0,1). With δ=(d/n^1-2/q_x)^q_x/8, the right-hand side inside a probability tends to 0 as n→∞ if d=o(n^1-2/q_x). * Under Assumption <ref>, there exists a universal constant C>0 such that (_Σ≤ C K_x^2[√(d+log(1/δ)/n)+d+log(1/δ)/n])≥ 1-δ, for all δ∈(0,1). * Under Assumption <ref> with q_x≥4, there exists a constant C>0 that only depends on q_x that (_Σ≤ C K_x^2[√(2dlog(n/δ))/n+d^2/q_xδ^-2/q_x/n^1-2/q_x+(d/n)^1-2/q_xlog^4(n/d) +√(d/n)])≥ 1-1/n-δ, for all δ∈(0,1). With δ=√(d/n), the right-hand side inside a probability tends to 0 as n→∞ if d=o(n^1-2/q_x). The result under Assumption <ref> of sub-Gaussianity is standard. See, for instance, Theorem 4.7.1 of <cit.> or Theorem 1 of <cit.>. The proofs for the result under Assumption <ref> or Assumption <ref> of the moment condition can be found in, for instance, <cit.>. Their results further allow q_x≥2. Suppose that Assumption <ref>, <ref>, and <ref> holds with q_x≥4, q≥3, and s=(1/q_x+1/q)^-1≥1/3. Then, there exists a constant C_s which only depends on s such that (1/n∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)_2≥√(d+log(1/δ)/nλ)+C_sK_xK_yd^1/2/δ^1/sn^1-1/s)≤δ, for all δ∈(0,1). For the sake of simplicity, write W_i :=Σ^-1/2X_i(Y_i-X_i^⊤β), and note that W_i=0_d, for 1≤ i≤ n. Jensen's inequality yields 1/n∑_i=1^nW_i_2 ≤ (1/n∑_i=1^nW_i_2^2)^1/2 = [ tr{ Var(1/n∑_i=1^nW_i)}]^1/2=1/√(n){ tr(Σ^-1/2VΣ^-1/2)}^1/2 ≤ √(d/λ^1/2n). The last inequality is due to Assumption <ref>. An application of Theorem 4 of <cit.> with η=δ=1 yields (1/n∑_i=1^n W_i_2≥2/λ^1/2√(d/n) +t)≤exp(-nλt^2/3)+C_ν/(nt)^ν∑_i=1^nW_i_2^ν, for any ν>2 such that W_i_2^ν exists for i=1,…,n, and the constant C_ν only depends on ν. We take ν=s=(1/q_x+1/q)^-1. Then, W_i_2^s ≤ [(∑_j=1^d(e_j^⊤Σ^-1/2X_i)^2(Y_i-X_i^⊤β)^2)^s/2] ≤ d^s/2[1/d∑_j=1^d(e_j^⊤Σ^-1/2X_i)(Y_i-X_i^⊤β)^s] ≤ d^s/2max_j∈[d](e_j^⊤Σ^-1/2X_i)(Y_i-X_i^⊤β)^s ≤ d^s/2max_j∈ [d][(e_j^⊤Σ^-1/2X_i)^q_x]^s/q_x[(Y_i-X_i^⊤β)^q]^s/q ≤ d^s/2(K_xK_y)^s. Further, taking t=√(3log(1/δ)/nλ)+C_sK_xK_yd^1/2/δ^1/sn^1-1/s, in (<ref>) yields the desired result. Suppose that Assumption <ref>,<ref>, and <ref> holds. Then, there exists constants C_1=C(λ, q_x,q,K_x,K_y) and C_2=C'(λ, q_x,q,K_x,K_y) such that √(n)c^⊤≤ C_1 [d^5/2/n^2-4/q_xδ^4/q_x+d^5/2-4/q_x/n^2-4/q_xlog^8(n/d)+d^3/2/n](1∨√(log n/d)) with probability at least 1-1/n^s/2-1-2/n-C_2(d/n^1-2/q_x)^q_x/2-δ, for δ∈(0,1) and any c∈^d with c_Σ^-1=1. Suppose that Assumption <ref>,<ref>, and <ref> holds. Then, there exists constants C_1=C(λ,q,K_y) and an absolute C_2=C'(λ, q,,K_y) such that √(n)c^⊤≤ C_1 [d+log(1/δ)/n+{d+log(1/δ)/n}^2](√(d)∨√(log n)) with probability at least 1-1/n^s/2-1-C_2e^-n-δ, for δ∈(0,1) and any c∈^d with c_Σ^-1=1. Suppose that Assumption <ref>,<ref>, and <ref> holds. Then, there exists constants C_1=C(λ, q_x,q,K_x,K_y) and C_2=C'(λ, q_x,q,K_x,K_y) such that √(n)c^⊤≤ C_1d^3/2/n[δ^-4/q_x+(d/n)^1-4/q_xlog^8(n/d)+1](1∨√(log n/d)) with probability at least 1-1/n-1/n^s/2-1-C_2ne^-n-δ, for δ∈(0,1) and any c∈^d with c_Σ^-1=1. Lemma <ref>—<ref> share a common structure, differing only in the moment assumption for covariates upon which they rely. As evidenced by Lemma <ref>, the deterministic upper bound for hinges on the computation of two pivotal quantities, specifically, _Σ and the average of influence functions. 
It is noteworthy that these quantities can be controlled through the application of Proposition <ref> and Lemma <ref>, respectively. Notably, Proposition <ref> offers three distinct concentration inequalities based on the three different moment assumptions. While we shall solely present the proof for Lemma <ref>, it is pertinent to note that Lemma <ref> and <ref> can be proved in a similar vein. First, Proposition <ref> guarantees that with a probability at least 1-1/n-δ, ^2_Σ≤ 3C_q_x^2K_x^4[d^2/n^2-4/q_xδ^4/q_x+(d/n)^2-4/q_xlog^8(n/d)+d/n]. Meanwhile, taking δ=n^1-s/2 in Lemma <ref>, we have 1/n∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)_2 ≤ √(d+(s/2-1)log n/nλ) +C_sK_xK_y√(d/n) ≤ (1/λ^1/2∨ C_sK_xK_y)√(d/n)+√((s/2-1)log n/nλ), with a probability at least 1-n^1-s/2. Combining (<ref>) and (<ref>) implies that with a probability at least 1-1/n-1/n^s/2-1-(_Σ>1/2)-δ, √(n)c^⊤≤√(n)_Σ ≤ √(n)^2_Σ/1-_Σ1/n∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)_2 ≤ 12√(n)C_q_x^2K_x^4(s^1/2/λ^1/2∨ C_sK_xK_y)[√(d/n)+√(log n/n)][d^2/n^2-4/q_xδ^4/q_x+(d/n)^2-4/q_xlog^8(n/d)+d/n] = C_1[d^5/2/n^2-4/q_xδ^4/q_x+d^5/2-4/q_x/n^2-4/q_xlog^8(n/d)+d^3/2/n](1∨√(log n/d)). Finally, (_Σ>1/2) can be bounded again using Proposition <ref> as (D_Σ>1/2)≤1/n+C(d/n^1-2/q_x)^q_x/2, for some constant C. This completes the proof. Suppose that Assumption <ref>,<ref>, and <ref> holds for q_x≥4, q≥3, and s=(1/q_x+1/q)^-1≥1/3. Then, there exists a constant C>0, which only depends on q_x and q, such that for any δ∈(0,1), with probability at least 1-δ, β̂-β_Σ ≤ (1-7K_x√(d+2log(4/δ)/n))^-1_+[2√(d+log(1/δ)/nλ)+C_sK_xK_yd^1/2/δ^1/sn^1-1/s]. For any c∈^d such that c_Σ^-1=1 and κ_c defined in (<ref>), let N_c(x)=Φ(x)-κ_cΦ^(3)(x). Then, the distribution of scaled linear contrast can be approximated with N_c as sup_csup_x∈(√(n)c^⊤/σ_c≤ x)-N_c(x)≤ C(λ̅^3/2K_x^3K_y^3/√(n)+λ̅K_x^6K_y^2d/2n), for some absolute constant C>0. Moreover, κ_c=O(√(d/n)), and there exists a (possibly different) absolute constant C>0 such that sup_csup_x∈(√(n)c^⊤/σ_c≤ x)-Φ(x)≤ C(λ̅^3/2K_x^3K_y^3/√(n)+λ̅^1/2 K_x^3K_y√(d)/√(2n)). For any c∈^d with c_Σ^-1=1, let η_c = n^-1/2c^⊤ψ(X_1,Y_1)^3/σ_c^3, γ_c = n2n^-3c^⊤ϕ(X_1,Y_1,X_2,Y_2)^2/σ_c^2, where ψ and ϕ are defined in (<ref>). Theorem 1 of <cit.> implies that there exists a universal constant C>0 such that sup_x∈(√(n)c^⊤/σ_c≤ x)-N_c(x)≤ C(η_c+γ_c). We first control the asymptotic variance σ_c^2. From the definition of σ_c^2, Assumption <ref> leads to that (1+1/n)^2λ̅^-1≤σ_c^2=(1+1/n)^2 Var[c^⊤Σ^-1X(Y-X^⊤β)]≤(1+1/n)^2λ^-1. Now, we bound two quantities, η_c and γ_c, respectively. First, Jensen's inequality yields η_c = (1+1/n)^3/σ_c^3√(n)c^⊤Σ^-1X_1(Y_1-X_1^⊤β)^3 ≤ λ̅^3/2/√(n)(c^⊤Σ^-1X_1(Y_1-X_1^⊤β)^s)^3/s, where s=(1/q_x+1/q)^-1≥ 3. An application of Hölder's inequality with Assumption <ref> and <ref> implies that η_c ≤ λ̅^3/2/√(n)(c^⊤Σ^-1X_1(Y_1-X_1^⊤β)^q_xq/(q_x+q))^3/s ≤ λ̅^3/2/√(n){(c^⊤Σ^-1X_1^q_x)^q/(q_x+q)(Y_1-X_1^⊤β^q)^q_x/(q_x+q)}^3/s ≤ λ̅^3/2K_x^3K_y^3/√(n). On the other hand, the definition of ϕ implies that c^⊤ϕ(X_1,Y_1,X_2,Y_2)^2 = (1-1/n)^2|c^⊤Σ^-1(Σ-X_1X_1^⊤)Σ^-1X_2(Y_2-X_2^⊤β)|^2 ≤ |c^⊤Σ^-1(Σ-X_1X_1^⊤)Σ^-1X_2(Y_2-X_2^⊤β)|^2. For any v∈^d, we note that Jensen's inequality yields v^⊤Σ^-1/2X_2(Y_2-X_2^⊤β)^2 ≤ v_2^2((v/v_2)^⊤Σ^-1/2X_2(Y_2-X_2^⊤β)^s)^2/s ≤ v_2^2{((v/v_2)^⊤Σ^-1X_2^q_x)^q/(q_x+q)(Y_2-X_2^⊤β^q)^q_x/(q_x+q)}^2/s ≤ v_2^2K_x^2K_y^2, where the first inequality is Jensen's inequality, and the second inequality follows from Hölder's inequality. 
Combining this with the independence of (X_1,Y_1) and (X_2,Y_2) yields c^⊤Σ^-1(Σ-X_1X_1^⊤)Σ^-1X_2(Y_2-X_2^⊤β)^2≤ K_x^2K_y^2c^⊤Σ^-1(Σ-X_1X_1^⊤)Σ^-1/2_2^2. Inspecting the right-hand side, we have c^⊤Σ^-1(Σ-X_1X_1^⊤)Σ^-1/2_2^2 = {c^⊤Σ^-1(Σ-X_1X_1^⊤)Σ^-1(Σ-X_1X_1^⊤)Σ^-1c} = c^⊤Σ^-1/2(Σ^-1/2X_1X_1^⊤Σ^-1/2X_1_Σ^-1^2-I)Σ^-1/2c ≤ c^⊤Σ^-1/2(Σ^-1/2X_1X_1^⊤Σ^-1/2X_1_Σ^-1^2)Σ^-1/2c ≤ sup_u∈^d-1{(u^⊤Σ^-1/2X_1)^2X_1_Σ^-1^2}. To control the quantity X_1_Σ^-1^2, let _j be the j:th canonical basis of ^d for 1≤ j ≤ d. Then, sup_u∈^d-1{(u^⊤Σ^-1/2X_1)^2X_1_Σ^-1^2} = sup_u∈^d-1{(u^⊤Σ^-1/2X_1)^2∑_j=1^d (_j^⊤Σ^-1/2X_1)^2} = sup_u∈^d-1∑_j=1^d {(u^⊤Σ^-1/2X_1)^2(_j^⊤Σ^-1/2X_1)^2} ≤ dsup_u∈^d-1max_1≤ j≤ d{(u^⊤Σ^-1/2X_1)^2(_j^⊤Σ^-1/2X_1)^2} ≤ dsup_u∈^d-1{(u^⊤Σ^-1/2X_1)^4}≤ dK_x^4. Here, the next-to-last inequality is Cauchy Schwarz inequality, and the last inequality is due to Assumption <ref>. Combining all, we have γ_c ≤ λ̅/2nc^⊤ϕ(X_1,Y_1,X_2,Y_2)^2 ≤ λ̅K_x^6K_y^2d/2n. This concludes the proof of the first part. The last part can be done by applying Theorem 2 of <cit.>, which proves the following: there exists a universal constant C>0 such that sup_x∈(√(n)c^⊤/σ_c≤ x)-Φ(x)≤ C(η_c+γ_c^1/2). Finally, we control κ_c. From its definition in (<ref>) and Cauchy Schwarz's inequality, we have κ_c ≤ 1/2σ_c^3√(n)([{c^⊤ψ(X_1,Y_1)}^2{c^⊤ψ(X_2,Y_2)}^2])^1/2([{c^⊤ϕ(X_1,Y_1,X_2,Y_2)}^2])^1/2 = 1/2σ_c√(n)([{c^⊤ϕ(X_1,Y_1,X_2,Y_2)}^2])^1/2 ≤ K_x^3K_y√(d)/2σ_c√(n) ≤ λ̅^1/2K_x^3K_y√(d)/2√(n). Suppose that Assumption <ref> and <ref> hold with (3/q_x+1/q)^-1≥ 2. Then, (√(n)c^⊤>K_x^3K_y√(d/n)(1+1/√(δ)))≤δ, for δ∈(0,1). Recall that = -1/n^2∑_i=1^n V_i, where V_i = Σ^-1X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2 for i=1,…,n. Chebyshev's inequality implies that for any ϵ >0 (√(n)c^⊤>[√(n)c^⊤]+ϵ)≤ϵ^-2 Var(√(n)c^⊤). Since β is the projection parameter, note that c^⊤ V_i = c^⊤Σ^-1X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2 = Cov(c^⊤Σ^-1X_i(Y_i-X_i^⊤β), X_i_Σ^-1^2). This leads to c^⊤ V_i≤[ Var{c^⊤Σ^-1X_i(Y_i-X_i^⊤β)}]^1/2{ Var(X_i_Σ^-1^2)}^1/2. The leading term on the right-hand side can be bounded as in (<ref>) Var{c^⊤Σ^-1X_i(Y_i-X_i^⊤β)} ≤ c^⊤Σ^-1X_i^2Y_i-X_i^⊤β^2≤ K_x^2K_y^2. To control X_i_Σ^-1, write Z_i = Σ^-1/2X_i and Z_i = (Z_i(1),…,Z_i(d))^⊤ where Z_i(j)^2=1 and Z_i(j) for 1≤ j≤ d are independent. Then, we get Var(X_i_Σ^-1^2) = X_i_Σ^-1^4 - (X_i_Σ^-1^2)^2 =∑_j=1^d [Z_i(j)^4] +∑_j≠ j' [Z_i(j)^2Z_i(j')^2] - d^2 =∑_j=1^d [Z_i(j)^4] - d ≤ d(K_x^4-1)≤ dK_x^4. The next-to-last inequality is due to Assumption <ref>. All in all, we have | c^⊤ V_i|≤ K_x^3K_y √(d) for i=1,…,n, and this implies that √(n) c^⊤≤ K_x^3K_y √(d/n). Now we focus on the variance of the bias . We note that Var(√(n) c^⊤) = 1/n^2 Var(c^⊤ V_1)≤1/n^2 [(c^⊤ V_1)^2]=1/n^2(c^⊤Σ^-1X_i)^2(Y_i-X_i^⊤β)^2X_i_Σ^-1^4 = 1/n^2(c^⊤Σ^-1X_i)^2(Y_i-X_i^⊤β)^2(∑_j=1^d Z_i(j)^2)^2 ≤ d/n^2(c^⊤Σ^-1X_i)^2(Y_i-X_i^⊤β)^2(∑_j=1^d Z_i(j)^4) = d/n^2∑_j=1^d(c^⊤Σ^-1X_i)^2(Y_i-X_i^⊤β)^2 Z_i(j)^4, where two inequalities are both Cauchy Schwarz inequality. Let l := (6/q_x+2/q)^-1≥ 1. Combining Jensen's inequality and Hölder's inequality implies that (c^⊤Σ^-1X_i)^2(Y_i-X_i^⊤β)^2 Z_i(j)^4 ≤ [|c^⊤Σ^-1X_i|^2l|Y_i-X_i^⊤β|^2l |Z_i(j)|^4l]^1/l ≤ [{|c^⊤Σ^-1X_i|^q_x}^2l/q_x{|Y_i-X_i^⊤β|^q}^2l/q{|Z_i(j)|^q_x}^4l/q_x]^1/l ≤ K_x^6K_y^2. Hence, we get Var(√(n) c^⊤)≤ K_x^6K_y^2d/n. Combining this with (<ref>) in (<ref>) completes the proof. Suppose that Assumption <ref>,<ref>, and <ref> holds with q_x≥4 and (3/q_x+1/q)^-1≥ 1/2. Then, it holds that Rem_1≤ 2K_x^3K_y_Σ with a probability at least 1-d/n. 
Note that Rem_1 ≤ c^⊤(Σ̂^-1-Σ^-1)Σ^1/2_2∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d_2 ≤ _Σ∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d_2, since c_Σ^-1=1. Hence, it suffices to control the rightmost quantity in (<ref>). We note that ∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d_2≤Σ^-1/2X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d_2 +∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d-Σ^-1/2X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d_2. The leading term on the right-hand side can be bounded via Assumption <ref> and <ref> as Σ^-1/2X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d_2 ≤ sup_θ∈^d-1[θ^⊤Σ^-1/2X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d] ≤ sup_θ∈^d-1max_1≤ j≤ d[θ^⊤Σ^-1/2X_1(Y_1-X_1^⊤β)(_j^⊤Σ^-1/2X_1)^2] ≤ sup_θ∈^d-1θ^⊤Σ^-1/2X_1^3Y_1-X_1^⊤β ≤ K_x^3K_y. To control the second term, we bound the second moment as ∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d-Σ^-1/2X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d_2^2 = 1/n Var(Σ^-1/2X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d) ≤ [X_1_Σ^-1^2(Y_1-X_1^⊤β)^2X_1_Σ^-1^4/d^2] ≤ d/nsup_θ∈^d-1[(θ^⊤Σ^-1/2X_1)^6(Y_1-X_1^⊤β)^2] ≤ K_x^6K_y^2d/n. Combining (<ref>) and (<ref>) with Chebyshev inequality yields that for any δ∈(0,1), (∑_i=1^nΣ^-1/2X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d_2≥ K_x^3K_y+K_x^3K_y√(d /nδ))≤δ. Taking δ = d/n leads to the intended conclusion. Suppose that Assumption <ref>,<ref>, and <ref> holds with q_x≥ 8. Then, it holds that Rem_2≤ 2K_x^4β̂-β_Σ, with a probability at least 1-d/n. The definition of Rem_2 leads to Rem_2≤β̂-β_Σ∑_i=1^nc^⊤Σ^-1X_iX_i^⊤Σ^-1/2X_i_Σ^-1^2/d_2. Hence, it suffices to control the following term on the right-hand side. We note that ∑_i=1^nc^⊤Σ^-1X_iX_i^⊤Σ^-1/2X_i_Σ^-1^2/d_2≤ c^⊤Σ^-1X_1X_1^⊤Σ^-1/2X_1_Σ^-1^2/d_2 +∑_i=1^nc^⊤Σ^-1X_iX_i^⊤Σ^-1/2X_i_Σ^-1^2/d- c^⊤Σ^-1X_1X_1^⊤Σ^-1/2X_1_Σ^-1^2/d_2. The first term on the right-hand side can be bounded as c^⊤Σ^-1X_1X_1^⊤Σ^-1/2X_1_Σ^-1^2/d_2 ≤ Σ^-1/2X_1X_1^⊤Σ^-1/2X_1_Σ^-1^2/d_ op ≤ sup_θ∈^d-1θ^⊤Σ^-1/2X_1^4 ≤ K_x^4. The second term is bounded using the second moment; ∑_i=1^nc^⊤Σ^-1X_iX_i^⊤Σ^-1/2X_i_Σ^-1^2/d- c^⊤Σ^-1X_1X_1^⊤Σ^-1/2X_1_Σ^-1^2/d_2^2 ≤ 1/nc^⊤Σ^-1X_1X_1^⊤Σ^-1/2X_1_Σ^-1^2/d_2^2 = 1/n[(c^⊤Σ^-1/2X_1)^2X_1_Σ^-1^6/d^2] ≤ d/nsup_θ∈^d-1θ^⊤Σ^-1/2X_1^8 ≤ K_x^8d/n. Consequently, the tail bound follows from Chebyshev's inequality as (∑_i=1^nc^⊤Σ^-1X_iX_i^⊤Σ^-1/2X_i_Σ^-1^2/d_2≥ K_x^4+K_x^4√(d/nδ))≤δ, for δ∈(0,1). Taking δ=d/n yields the result. Suppose that Assumption <ref>,<ref>, and <ref> holds with (3/q_x+1/q)^-1≥ 1/2. Then, Rem_3 ≤ 2K_x^3K_y_Σ, holds with probability at least 1-1/n. Since R_3,i≤_Σ, we get Rem_3≤_Σ1/n∑_i=1^nc^⊤Σ^-1X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d. We can control the last quantity routinely. Note that 1/n∑_i=1^nc^⊤Σ^-1X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d≤ c^⊤Σ^-1X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d +1/n∑_i=1^nc^⊤Σ^-1X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d- c^⊤Σ^-1X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d. The first term which involves the expectation can be controlled as c^⊤Σ^-1X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d ≤ c_Σ^-1Σ^-1/2X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d_ op = sup_θ∈^d-1θ^⊤Σ^-1/2X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d ≤ sup_θ∈^d-1(θ^⊤Σ^-1/2X_1)^3(Y_1-X_1^⊤β) ≤ K_x^3K_y. The second term can be bounded as 1/n∑_i=1^nc^⊤Σ^-1X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d- c^⊤Σ^-1X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d^2 ≤ 1/nc^⊤Σ^-1X_1(Y_1-X_1^⊤β)X_1_Σ^-1^2/d^2 = 1/n[(c^⊤Σ^-1X_1)^2(Y_1-X_1^⊤β)^2X_1_Σ^-1^4/d^2] ≤ 1/nsup_θ∈^d-1[(θ^⊤Σ^-1X_1)^6(Y_1-X_1^⊤β)^2]≤K_x^6K_y^2/n. Now, Chebyshev's inequality leads to that (1/n∑_i=1^nc^⊤Σ^-1X_i(Y_i-X_i^⊤β)X_i_Σ^-1^2/d≥ K_x^3K_y+K_x^3K_y√(1/nδ))≤δ. The choice of δ=1/n gives the result. Suppose that Assumption <ref>,<ref>, and <ref> holds with q_x≥8. Then, for any δ∈(0,1), ( Rem_4 ≤_Σβ̂-β_Σ(K_x^4+16K_x^4dlog (ed)/√(nδ)))≥ 1-δ. 
It follows from the definition of Rem_4 that Rem_4 ≤ c^⊤(Σ̂^-1-Σ^-1)Σ^1/2_2β̂-β_Σ1/n∑_i=1^nΣ^-1/2X_iX_i^⊤Σ^-1/2X_i^2_Σ^-1/d_ op ≤ _Σβ̂-β_Σ1/n∑_i=1^nΣ^-1/2X_iX_i^⊤Σ^-1/2X_i^2_Σ^-1/d_ op. Hence, we focus on the last term in (<ref>). We note that 1/n∑_i=1^nΣ^-1/2X_iX_i^⊤Σ^-1/2X_i^2_Σ^-1/d_ op ≤ Σ^-1/2X_1X_1^⊤Σ^-1/2X_1^2_Σ^-1/d_ op+1/n∑_i=1^nΣ^-1/2X_iX_i^⊤Σ^-1/2X_i^2_Σ^-1/d-Σ^-1/2X_1X_1^⊤Σ^-1/2X_1^2_Σ^-1/d_ op. The first term can be bounded as Σ^-1/2X_1X_1^⊤Σ^-1/2X_1^2_Σ^-1/d_ op = sup_θ∈^d-1[(θ^⊤Σ^-1/2X_1)^2X_1^2_Σ^-1/d] ≤ sup_θ∈^d-1[(θ^⊤Σ^-1/2X_1)^4] ≤ K_x^4. To control the second part, we denote Z_i=Σ^-1/2X_iX_i^⊤Σ^-1/2X_i^2_Σ^-1/d, for i=1,…,n. An application of Theorem 5.1(2) of <cit.> gives that (1/n∑_i=1^nZ_i- Z_i_ op^2)^1/2≤ C_d^1/2[(1/n∑_i=1^nZ_i- Z_i)^2]_ op^1/2+C_d/n([max_1≤ i≤ nZ_i- Z_i_ op^2])^1/2, where C_d=2(1+4⌈log d⌉). First part in (<ref>) can be bounded as [(1/n∑_i=1^nZ_i- Z_i)^2]_ op =1/√(n)[Z_1^2]-( Z_1)^2_ op ≤1/√(n)[Z_1^2]_ op ∵ Z_1 ≤1/n(sup_θ∈^d-1[(θ^⊤Σ^-1/2X_i)^2X_1^6_Σ^-1/d^2]) ≤d/nsup_θ∈^d-1[(θ^⊤Σ^-1/2X_i)^8]≤K_x^8d/n. Meanwhile, the second part is controlled as [max_i∈[n]Z_i- Z_i_ op^2] ≤[max_i∈[n]Z_i_ op^2] ∵ Z_1 ≤[∑_i=1^nZ_i_ op^2]=nZ_1_ op^2 =n[X_1^8_Σ^-1/d^2] ≤ nd^2sup_θ∈^d-1[(θ^⊤Σ^-1/2X_i)^8]≤ K_x^8nd^2. Combining these with (<ref>), we get (1/n∑_i=1^nZ_i- Z_i_ op^2)^1/2 ≤ K_x^4(C_d^1/2∨ C_d)(√(d/n)+d/√(n)) ≤ 16K_x^4dlog (ed)/√(n). Combining all, it follows from Chebyshev's inequality that (1/n∑_i=1^nΣ^-1/2X_iX_i^⊤Σ^-1/2X_i^2_Σ^-1/d_ op≥ K_x^4+16K_x^4dlog (ed)/√(nδ))≤δ, for any δ∈(0,1). § AUXILARY RESULTS Under the data generating process described in Section <ref>, the projection parameter β is given by β = 3(1+2ρ^2)θ_2^2θ + 6(1-ρ^2)θ^⊙ 3. Let X=(X(1),…,X(d))^⊤, Z=(Z(1),…,Z(d))^⊤, and W=(W(1),…,W(d))^⊤, so that X(j)=Z(j)W(j) for 1≤ j≤ d. We note that XX^⊤ =I_d since X(1)^2 = Z(1)^2 W(1)^2=1 X(1)X(2) = Z(1) Z(2) W(1)W(2)=0. Consequently, the projection parameter is β = XY. Write θ = (θ_1,…,θ_d)^⊤, and the ith coordinate β(i) of β can be written as β(i) = X(i)(X^⊤θ)^3 + X(i)ϵ = X(i)(∑_j=1^dθ_jX(j))^3, where the second equality follows from the independence of ϵ and X. Unfolding the last expression, we get β(i) = [X(i){∑_j=1^dθ_j^3X(j)^3+3∑_j≠ kθ_j^2θ_k X(j)^2X(k)^2+6∑_j<k<lθ_jθ_kθ_l X(j)X(k)X(l)}]. Each individual term on the right-hand side can be simply computed using the following moments; X(1)^4 = Z(1)^4 W(1)^4 =9, X(1)^3 X(2) = Z(1)^3 Z(2) W(1)^3W(2)=0, X(1)^2 X(2)^2 = Z(1)^2 Z(2)^2 W(1)^2 W(2)^2 = 1+2ρ^2, X(1)^2X(2)X(3) = Z(1)^2 Z(2) Z(3) W(1)^2W(2)W(3)=0, X(1)X(2)X(3)X(4) = Z(1) Z(2) Z(3) Z(4) W(1)W(2)W(3)W(4)=0. This leads to that β(i) = 9θ_i^3 + 3(1+2ρ^2)θ_i ∑_j≠ iθ_j^2 = (6-6ρ^2)θ_i^3 + 3(1+2ρ^2)θ_2^2θ_i. This completes the proof. § ADDITIONAL NUMERICAL RESULTS In this section, we provide the simulation results in addition to those presented in Section <ref>. These encompass the implementation under various combinations of sample sizes and model parameters. Furthermore, we also compare the confidence interval obtained by the OLS (without bias correction) based on the resampling bootstrap and the wild bootstrap. Due to the computational limitation, the resampling bootstrap is only implemented for small sample size cases, specifically n∈1000, 2000. §.§ Additional Figures for Well-specified Model Here we adhere to the well-defined linear setting outlined in Section <ref>. 
Figures <ref> and <ref> present the empirical coverages and the lengths of confidence intervals for the first coefficient of the projection parameter β attained from various inferential methods. These results are based on a sample size of n=1000 and 2000, respectively. §.§ Additional Figures for Misspecified Model This section presents the additional numerical results on comparing confidence intervals for the projection parameter under the misspecified model. The simulation settings are summarized in Table <ref>.
Smartphones in a Microwave: Formal and Experimental Feasibility Study on Fingerprinting the Corona-Warn-App

Henrik Graßhoff, University of Münster, Münster, Germany, grasshoff@uni-muenster.de
Florian Adamsky, Hof University of Applied Sciences, Institute of Information Systems, Hof, Germany, florian.adamsky@hof-university.de
Stefan Schiffner, BHH University of Applied Sciences, Hamburg, Germany, stefan.schiffner@bhh.hamburg.de

CTA have been developed to contain the COVID-19 spread. By design, such apps invade their users' privacy by recording data about their health, contacts, and—partially—location. Many CTA frequently broadcast pseudorandom numbers via Bluetooth to detect encounters. These numbers are changed regularly to prevent individual smartphones from being trivially trackable. However, the effectiveness of this procedure has been little studied. We measured real smartphones and observed that the German CWA exhibits a device-specific latency between two subsequent broadcasts. These timing differences provide a potential attack vector for fingerprinting smartphones by passively recording Bluetooth messages. This could conceivably lead to the tracking of users' trajectories and, ultimately, the re-identification of users.

CCS Concepts: Security and privacy → Pseudonymity, anonymity and untraceability

§ INTRODUCTION The coronavirus pandemic was the first pandemic in which we, i.e. humanity, had the means to observe its spread in real time. This wealth of information posed and continues to pose a challenge to societies around the world. More information allows us to take the right decisions to slow the spread of a pandemic, one might conclude. Or does it though? The greater goal, limiting the spread of the virus or at least slowing it down, is in conflict with individual freedom rights. In a first response, many governments opted to bring public life to a halt. This understandable and early response was not sustainable. Traditionally, one would aim to isolate only those who are infected, but this approach was undermined by the fact that individuals with asymptomatic infection can also transmit the virus <cit.>. This has led to the first ever large-scale introduction of automatic contact tracing by means of CTA. Such apps record their users' contacts and alert them in case of a close encounter with an infected person. In 2020, Google and Apple integrated extensive contact tracing functionality into their respective mobile operating systems, and many national authorities worldwide have since deployed CTA that have been used by millions of people; cf. <cit.> for more information on downloads and active usage in Europe. CTA inherently concern their users' privacy as they process personal contact and health data. The German CWA <cit.> and numerous other CTA operate by broadcasting a pseudorandom number (pseudonym) several times per second via BLE to all nearby devices. Linking the pseudonym to a real person might allow an adversary to gain insights into their infection status or movement patterns. Developers have implemented basic privacy protection mechanisms, but their effectiveness has not been proven.
Due to this conceivably profound privacy threat, many legal frameworks, particularly the GDPR <cit.>, require a PIA, which must be based on a thorough threat analysis. This PIA should be conducted in the light of Article 9 GDPR, which establishes special protection for health data. As the spread of the virus slowed down and vaccines became widely available, many CTA have been discontinued, i.e. infrastructure has been scaled down or switched off and maintenance of the apps has been brought to a halt. Therefore, the immediate privacy risks of CTA have been reduced. However, two concerns remain: First, what happens to the contact tracing functionality implemented in the Android and iOS operating systems? Has this come to stay and does it pose a continuous threat to security? Second, while maintenance for e.g. the CWA has stopped, it is not actively removed from users' phones but considered to hibernate.[The German Federal Minister of Health, Karl Lauterbach, says that as of June the German CWA will hibernate (https://www.tagesschau.de/inland/innenpolitik/corona-warn-app-ende-100.html).] However, the semantics are unclear in many ways: Under which circumstances can such a hibernating app be woken up, i.e. which political and scientific process decides whether a new pandemic is severe enough? What form of maintenance will be provided for such an app? While the above questions are not the subject of this paper, we observe that WHO epidemiologists expect that “COVID-19 will not be the last” pathogen with pandemic potential and that the next one “could appear at any time” <cit.>. With this paper, we aim to contribute to the PIA should electronically aided contact tracing be re-considered in the future. Our contribution consists of two experiments. We used low-cost and off-the-shelf hardware to monitor the BLE sending behavior of smartphones with the German CWA installed. In our first experiment, we observed 15 smartphones in a shielded laboratory environment. It showed that the average latency between two successive broadcasts varies across devices and is stable over time. This characteristic acted as a fingerprint for some devices and uniquely identified them among all tested phones. We were able to replicate our observations in a second experiment in busy public places in the city of Münster. To the best of our knowledge, our paper provides the first study investigating device fingerprinting of smartphones running a CTA. This in turn brings us to the conclusion that further investigations are needed. § RELATED WORK A large body of research exists on fingerprinting computing devices. Publications on fingerprinting typically fall into two categories: logical fingerprints and physical fingerprints. In the first case, devices are distinguishable due to differences in their software behavior; in the latter case, devices differ due to some physical process, e.g. manufacturing tolerances of a crystal, which in turn influence the exact clock rate of a device. Fingerprinting on Logical Behavior. Browser fingerprinting aims to create fingerprints of web browsers to recognize returning visitors to a website. In 2009, Mayer <cit.> conducted a small-scale experiment and collected information exposed through JavaScript objects from 1328 web browsers to generate a fingerprint.
Panopticlick <cit.> replicated and extended the former results in 2010 in a large-scale experiment with 470162 browser fingerprints and additional features with Flash and Java. These studies marked the beginning of a discipline; since then, the scientific community has improved fingerprinting continuously, further aided by the introduction of new API by the W3C to provide rich multimedia content on web pages. Studies <cit.> discovered that the Canvas API could be exploited to offer high-entropy attributes for a fingerprint. Further, a study <cit.> designed fingerprinting techniques based on the WebGL API. We refer interested readers to <cit.> for a detailed survey of browser fingerprinting. Researchers <cit.> found that even complex network protocols such as TLS and OpenVPN are fingerprintable by the protocol handshake. Similarly for Bluetooth, Celosia and Cunche <cit.> showed that the GATT profile of the Bluetooth stack contains identifying characteristics. By connecting to nearby discoverable devices, they could collect complete GATT profiles to obtain fingerprints which are unique in many cases. Fingerprinting Using Physical Attributes. Crystal oscillators are being used to generate the required frequency for any radio device. Due to small imperfections in production, their actual frequencies are slightly off target <cit.>; hence, devices have a unique frequency. It has been shown that this frequency offset can be used to distinguish devices <cit.>. Similar results have been established using the deviation of the device clock's speed from real-time <cit.>. For Bluetooth, Huang et al. <cit.> exploited the frequency hopping behavior to extract a device's clock skew and use this as a fingerprint.   Our work falls between these two broad categories: We measure timing behavior which is partially influenced by the logic of the CWA, the logic of the underlying API by Google and Apple, and the logic of the operating system and particularly the BLE stack, but at the same time, our measurements are influenced by the accuracy of the underlying clocks. § TECHNICAL BACKGROUND This section explains the technical foundation of the CWA which uses BLE for broadcasting the pseudonyms provided by the GAEN API. §.§ Bluetooth Low Energy BLE <cit.> is a wireless communication standard introduced in 2010. Initially designed for battery-powered gadgets such as smartwatches and Internet of Things applications, it is nowadays supported by almost all modern devices. BLE uses 40 channels in the 2.4 ISM band; 37 of these are used for data transfer while the other three advertising channels are reserved for devices to signal their presence. To do so, a device broadcasts advertisement frames to all nearby devices indicating e.g. its connectibility and characteristics. Moreover, this broadcast mechanism can be used to transmit small amout of data without establishing a connection between sending and receiving device. The sender of a BLE message is identified by a 48 bit MAC address. For basic privacy protection, BLE introduced randomized MAC addresses: instead of broadcasting its globally unique MAC address, the device can generate a random number to be sent in place of the persistent identifier. Due to the length of this number, collissions occur extremely rarely so that the randomized MAC address is a unique identifier for its period of validity. The longevity and change is carried out by the device, the Bluetooth specification <cit.> merely recommends to change it after at most 15 minutes. 
§.§ Exposure Notification In April 2020, Apple and Google jointly announced the integration of contact tracing directly into their respective mobile OS, naming it Exposure Notification <cit.>, or GAEN for short. When activated by the user, the OS generates pseudorandom 128 bit numbers (pseudonyms) which are changed every 1020 according to the documentation <cit.>. GAEN frequently emits these pseudonyms with a recommended waiting time of 200270 between two sendings, a delay which we refer to as IBL. Additionally, the device listens to other smartphones' broadcasts and logs the pseudonyms it receives. An infected individual can decide to upload specific keys to a server which allow other phones to reconstruct their emitted pseudonyms. This key material is regularly fetched by every participating smartphones and employed to calculate a contagion risk for its user. This functionality is implemented as an API on the OS level. Authorized apps like the CWA can access this API to provide a frontend to the user, but the underlying contact tracing function—especially the BLE broadcasting—is nevertheless not influenced by the app. The GAEN API emits its information in the data unit of an type advertisement packet whose general structure is shown in <ref>. It is comprised of a 2 byte header and a payload of variable size. The latter contains a field for the device's (possibly randomized) MAC address as well as the whose content is variable. As can be seen in <ref>, Google and Apple have defined it to contain two main ingredients: * A UUID 0xFD6F by which other smartphones can detect a GAEN broadcast among packets for other purposes. * The pseudorandom 16 bit contact tracing pseudonym. The MAC address is randomized in sync with the pseudonym as performing their change asynchronously would clearly annihilate the intended privacy protection. The present, very frequent broadcasting is a design decision in favor of the app's utility. While a lower broadcasting frequency would restrict the possibility of continuous device monitoring, it would also increase the risk of not detecting infectious encounters, eventually making the app less valuable from a medical perspective. § FORMAL BACKGROUND AND PRIVACY METRIC In this section, we provide the terminological and mathematical background of analysing fingerprintability. §.§ Pseudonym Types and Anonymity In general, pseudonyms are identifying an entity in a given context. If the relation between the identity and pseudonym can be hidden from an adversary, a pseudonym can provide a certain level of privacy protection. The level of privacy protection pseudonyms can provide is depending on their usage; in particular, it depends on for how long and in which contexts they are used. For a systematic discussion on pseudonym types we refer to <cit.>. Following this terminology, GAEN pseudonyms are short term role pseudonyms, i.e. in its role as a participant of GAEN, an app user provides this pseudonym during interactions with other app users for a given time period. These pseudonyms are only used in the context of the app, hence by individuals in their role as app users. In the case of GAEN, users are broadcasting their pseudonym for 1020. During said period, all broadcast messages of the same user can be linked to each other. After said time period, users will change their pseudonym, rendering it theoretically impossible to trivially link pseudonyms of different time periods. 
Here by link we mean that an adversary can distinguish if two or more pseudonyms belong to the same entity or not. The way pseudonyms are used in GAEN allows a certain level of privacy protection: under the assumption that pseudonyms of different time periods cannot be linked, users' trajectories cannot be reconstructed even if an adversary can observe broadcast messages at many locations and over longer periods of time. In other words, these pseudonyms provide a certain level of conditional anonymity. §.§ Mathematical Treatment To measure fingerprintability, we adapt the degree of anonymity model proposed by Diaz et al. <cit.>. The authors consider an adversary whose goal is to deanonymize the users of a system, e.g. a sender-recipient system. By observing the system, the adversary obtains probabilities about whether a user is the sender of a particular message. The normalized Shannon entropy of this probability distribution is then taken as a measure for the anonymity that the system provides. Transferring this to fingerprinting, suppose an adversary observes n data points X = {x_1,…,x_n}⊆ℝ over time originating from different entities and tries to group the data points according to their sources. For each entity, the data may vary and therefore can only be measured with some uncertainty ε > 0 even if the adversary has unrestricted measuring accuracy. A fingerprinting attack then is the attempt to partition X into subsets of data originated from the same entity. Such an attack is obviously more successful if the data are precise (i.e. ε is small) and admit high variation. The amount of information that the adversary gains from the observed characteristic X can be quantified as follows: By grouping the data set X into k bins 1,…,k of width ε, we obtain the histogram of a discrete probability distribution. The probability p_i of bin i=1,…,k is given by the number of elements in that particular bin divided by the number of total data points n. Practically, elements in the same bin can be considered indistinguishable by the adversary as their distance is at most the uncertainty ε. Hence, the maximum information the adversary can obtain from their observations is quantified by the Shannon entropy of that histogram: where 0log_2(0)=0 H(X) = - ∑_i=1^k p_i log_2(p_i) Technically, note that the probabilities p_i depend not only on ε but on the location of the bins on the x-axis as well which we did not define. The above term H(X) is understood to be the maximum of the right hand side over all (finitely many) probability distributions for different bin locations. Even an adversary with unlimited background knowledge could not gain more than H(X) information from their observations. If data precision and variation are high, then k > n and p is the uniform distribution p_i = 1/n which results in a maximum entropy of log_2(n). On the other hand, a low precision or variation leads to data points from different entities in the same bin and in the most extreme case of p_i = 1 for one bin i to H(X) = 0. Similar to <cit.> we say that the data set X with precision ε provides a fingerprinting anonymity of A(X,ε) = 1-H(X)/log_2(n)∈ [0,1]. Note that our definition is almost literally the same as the degree of anonymity given in <cit.>, but in our case, the attacker knowledge is represented by the histogram entropy H(X) instead of log_2(n)-H(X). 
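To make the metric concrete, the following minimal Python sketch (written for illustration here, not taken from the study's code) computes the histogram entropy H(X) and the fingerprinting anonymity A(X, ε) for a set of observed values, approximating the maximisation over bin locations by a coarse grid of bin offsets; the sample IBL values at the end are made up.

```python
import math
from collections import Counter

def histogram_entropy(data, eps, offset=0.0):
    """Shannon entropy (in bits) of the histogram of `data` with bin width eps."""
    bins = Counter(math.floor((x - offset) / eps) for x in data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

def fingerprinting_anonymity(data, eps, n_offsets=50):
    """A(X, eps) = 1 - H(X)/log2(n), maximising H over a grid of bin offsets."""
    n = len(data)
    h_max = max(histogram_entropy(data, eps, offset=k * eps / n_offsets)
                for k in range(n_offsets))
    return h_max, 1.0 - h_max / math.log2(n)

# Illustrative IBL means in milliseconds (made-up values, not measurements)
ibl_means = [262.1, 262.3, 270.8, 271.0, 283.0, 283.1, 285.9]
H, A = fingerprinting_anonymity(ibl_means, eps=0.25)
print(f"H(X) = {H:.2f} bits, A(X, eps) = {A:.2f}")
```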
The fingerprinting anonymity reaches its minimum and maximum if =1.6pt [ A(X, ε) = 0 ⇔ H(X) = log_2(n) ⇔ high precision and variation,; A(X, ε) = 1 ⇔ H(X) = 0 ⇔ low precision or variation. ] § EXPERIMENTAL METHODOLOGY This section presents the results of our two performed experiments. The first experiment was conducted on a small scale in an isolated environment: we collected temporal broadcasting data of 15 smartphones which had the CWA installed and running. After finding device-specific differences in the IBL, we proceeded in a second experiment and measured smartphones in multiple public places. This yielded insights into the IBL distribution across an estimated 121 smartphones. We used this distribution data to evaluate the privacy breach of the IBL differences in terms of the fingerprinting anonymity. §.§ Software and Hardware Setup Processing BLE broadcasts is feasible with little programming expertise and cheap hardware. All our measurements were performed using a Python script which operates as follows: collect BLE advertisements every 50 and filter GAEN broadcasts by their UUID 0xFD6F; group incoming BLE broadcasts by their MAC address as such advertisements originate from the same device; for each device, calculate the latencies between its successive broadcasts and store those between 220 and 350. The decision to group BLE broadcasts by their MAC address instead of their GAEN pseudonym was made in order to process as little personal data as possible: in contrast to the MAC addresses, the pseudonyms could potentially leak the Covid infection status of a participant at a later time. Since MAC address and pseudonym are changed in sync, both identify a broadcast's source equally well. As for Bluetooth receiving hardware, we used a Lenovo Ideapad 510S laptop running Fedora Linux. However, we subsequently verified that the measurements could be carried out identically on a Raspberry Pi 4B with 4 of RAM (cf. <ref>). The attack is thus feasible without significant hardware requirements. The above methodology was applied in two experiments: §.§ Laboratory Experiment In the first experiment, we measured the IBL of the 15 smartphones in <ref> in an isolated environment. At the time of testing, all phones were personal devices in everyday use, meaning that a variety of apps other than the CWA were installed, custom settings were made, and some phones could be measured for a longer time and contribute a greater number of pseudonym cycles than others. While being measured, the phones did not perform any resource-intensive tasks. Moreover, we isolated the phone and receiver in a common microwave to reduce the influence of other environmental Bluetooth devices. Considering the full ISM band, a microwave oven is not a Faraday cage. It still blocks 2.4 RF communication sufficiently which is the relevant frequency range for our experiment. We were able to verify the effectiveness of our isolation by observing that the Python script recorded only a single BLE source once the microwave door was closed. §.§ Field Experiment In the second experiment, we collected the IBL of unknown smartphones carried around by present people in public places. We conducted this experiment in multiple spots in Münster, Germany, in April, July, and August 2022. Since a pseudonym change could result in a smartphone contributing twice to our data, we limited each measurement to ten minutes. Subsequently, we rejected entries with less than 50 data points. § RESULTS This section is divided into two parts. 
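Before turning to the results, the following is a minimal sketch of the post-processing logic just described: grouping broadcasts by MAC address, computing inter-broadcast latencies, and keeping values between 220 ms and 350 ms. It assumes timestamped (time, MAC) observations have already been collected and pre-filtered to the GAEN service UUID 0xFD6F by a BLE scanner of the reader's choice; it is not the study's actual script.

```python
from collections import defaultdict
from statistics import mean, stdev

def ibl_per_device(observations, lo_ms=220.0, hi_ms=350.0, min_points=50):
    """Group GAEN broadcasts by MAC address and compute inter-broadcast latencies.

    observations: iterable of (timestamp_in_seconds, mac_address) tuples,
    assumed to be pre-filtered to the GAEN service UUID 0xFD6F.
    Returns {mac: [latency_ms, ...]} keeping only latencies in [lo_ms, hi_ms]
    and discarding pseudonyms with fewer than min_points kept latencies,
    as done for the field-experiment data.
    """
    per_mac = defaultdict(list)
    for t, mac in sorted(observations):
        per_mac[mac].append(t)

    latencies = {}
    for mac, times in per_mac.items():
        deltas = [(b - a) * 1000.0 for a, b in zip(times, times[1:])]
        kept = [d for d in deltas if lo_ms <= d <= hi_ms]
        if len(kept) >= min_points:
            latencies[mac] = kept
    return latencies

def summarise(latencies):
    """Per-pseudonym IBL mean and twice the standard deviation, in milliseconds."""
    return {mac: (mean(v), 2 * stdev(v)) for mac, v in latencies.items()}
```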
We begin by presenting the main qualitative observations we made in the two experiments. Afterwards, we evaluate the fingerprinting information leakage by the IBL in terms of the privacy metric introduced in <ref>. §.§ Key Findings The IBL data collected in the laboratory experiment are presented in <ref>. For each device, the IBL of a pseudonym cycle were averaged to give the IBL mean for this particular pseudonym. The columns mean and double standard deviation were then derived from these values. Hence, when talking about a phone's overall IBL (mean), we refer to the average of its pseudonyms means. * The IBL distribution can vary strongly between different devices. For example, <ref> plots the IBL distributions of the OnePlus Nord and the OnePlus Nord 2. Both distributions have evidently little intersection and are separable by a visual inspection with the naked eye. More comprehensively, the means in <ref> range from roughly 262286 among all observed devices. While some smartphones in our test set (such as the Huawei P10 Lite) are uniquely indentifiable by this characteristic, others share a similar IBL (e.g. all iPhone 13 Mini or Huawei Mate 10 & Samsung Galaxy J7). We will discuss possible influences on this attribute later in <ref>. * For each device, the IBL mean varies little between different pseudonym cycles. The rather small standard deviations in <ref> indicate little variation of the IBL mean between pseudonym cycles. For instance, the IBL means per pseudonym in <ref> narrowly fluctuate around the Huawei Mate 10's overall IBL mean of 283.04. * The results from the isolated experiment are reflected in observations of public spaces. All means from <ref> also arise in the histogram of the roughly 121 observed pseudonyms in public (cf. <ref>). Regarding the fact that we did not prevent phones from possibly contributing twice to our measurements—i.e. the 121 pseudonyms could potentially originate from only 110 devices—the distribution of the IBL must be taken with caution. However, it shows that the sample of phones in <ref> is not considerably different from what an adversary would observe in public spaces. * This behavior is not consistent with the GAEN documentation which specifies an IBL of 200270 <cit.>. Although this may not be crucial, it raises the questions as whether and how the specification can be adapted to improve privacy and how smartphones can be made to follow this specification. §.§ Quantification of Fingerprintability We apply fingerprinting anonymity as a privacy metric to the above data in order to quantify the information provided by the IBL. Therefore, we first need to determine the precision ε with which an adversary can observe the IBL mean. As each device d apparently targets the same IBL during different pseudonym cycles, one can reasonably argue that the pseudonyms' IBL means are normally distributed around the device's IBL μ_d. Consequently, 95 of pseudonyms' IBL means would lie within the range μ_d ± 2σ_d of two standard deviations. By averaging the values from <ref>, we obtain a precision of ε = 1/15∑_d device in <ref> 2σ_d = 0.2513̅≈ 0.25. This value determines the bin width of the histogram in the fingerprinting anonymity quantification. The histogram shown in <ref> is already the result of dividing the field experiment data into a histogram of bin width ε with a maximal entropy of H(X) = 4.88. Theoretically, these 4.88 bits of information suffice to distinguish 2^4.88≈ 29 devices. This number must be noted cautiously for different reasons. 
On the one hand, we cannot exclude the possibility that our measurements overestimate the real entropy of the IBL which would make more devices indistinguishable than assumed. On the other hand, a real adversary could exploit additional heuristics such as asynchronous pseudonym changes or signal strength to link pseudonyms efficiently. The IBL mean thus provides a fingerprinting anonymity of 1 - H(X)/log_2(121) = 0.29. With lower values implying less privacy protection, one might consider this result as a warning and call for a closer investigation whether users of the CWA are exposed to a disproportional privacy risk. However, this warning needs to come with a caveat: Fingerprinting anonymity—like the degree of anonymity <cit.> from which it is derived—should be interpreted as a relative measure which is meant to compare different scenarios. Hence, our calculations here are merely setting a baseline for further investigations that might help fine-tuning parameters towards an optimal balance between privacy protection and utility of the CWA. § DISCUSSION AND FUTURE WORK GAEN-based apps such as the German CWA turn smartphones into continuous radio wave emitters and raise questions about their users' privacy. The privacy protection of GAEN relies on the assumption that a smartphone's broadcasted pseudonyms cannot be linked. If this fails to be the case, various attacks such as trajectory reconstruction could arise. Against this background, unlinkability of randomized pseudonyms should not be taken for granted but must be ensured and verified. Our results indicate that the temporal differences in the broadcast behavior can potentially be exploited to link pseudonyms of the CWA. To illustrate how an adversary could proceed, observe that the Huawei Mate 10 from <ref> is present in the screenshot in <ref>. The first and last entry are clearly similar in terms of their mean and much different from all other observed pseudonyms. Moreover, the last pseudonym in the list was observed for the first time just a few seconds after the first one stopped broadcasting. In various scenarios, these information may be enough to link these two pseudonyms. We quantified the information provided by the IBL to be 4.88 bits which is theoretically enough to distinguish 29 devices. As pointed out, this quantitative result is subject to some uncertainty due to the small sizes of our experiments. It is of particular interest for future studies to investigate which factors have an influence on the IBL. As we did not conduct any reverse engineering, we cannot answer this question definitely but may discuss various approaches. Overall, our observations lead us to the conjecture that a smartphone's IBL is mostly affected by two factors: * Its hardware stack. By design, the GAEN API frequently accesses the phone's Bluetooth hardware and is thus influenced by the physical characteristics of the device. For example, our experiment included three iPhone 13 Mini as well as two phones from different manufactures sharing the same chipset (the Google Pixel 4a and the OnePlus Nord have a Qualcomm Snapdragon 765G built in), and the devices exhibited similar IBL in both cases. * Its usage and multitasking. Whenever two processes demand hardware resources at the same time, they are granted access by the operating system's scheduler in a specific order. The mentioned frequent access to processor and Bluetooth consequently causes a waiting time for the GAEN process if the demanded resources are already allocated. 
If this waiting time has a notable influence on the IBL, then the latter might change with varying usage. During our experiments we found subtle hints that the IBL may be slightly prolonged when another app heavily uses Bluetooth, but we could not examine this any further. If this turns out to be the case, an active adversary could disturb phones (e.g. stress the processor via network queries) and observe changes in their IBL to gain further information about which phone broadcasts which pseudonym. Moreover, we expect that this behavior is not limited to the German CWA but also appears in the context of other GAEN apps. § CONCLUSION This exploratory study demonstrated that the German CWA is vulnerable to device fingerprinting. Smartphones with the CWA installed target a device-specific latency between two subsequent Bluetooth broadcasts. This latency can potentially identify a smartphone among others and can be measured with no more than a few minutes of passive Bluetooth observation. Contrary to public assurances, regular pseudonym changes, as implemented today, are not enough to disguise a user reliably. Our work contributes to the assessment of the costs and effectiveness of CTA by indicating that the CWA's privacy impact could be higher than expected. This becomes more significant since passive Bluetooth sniffing attacks are virtually unpreventable, and the affected OS-level code cannot be easily removed from users' smartphones. Hence, any non-negligible risk of device fingerprinting needs to be considered in the evaluation and further development of CTA. Given that medical experts expect the next similar pandemic soon, the time to act is now: we should work to reduce the fingerprintability of continuously transmitting BLE devices. As a side effect, a more privacy-friendly version of pseudonym-changing protocols for BLE or other wireless technologies might open up opportunities for other, more mundane uses of such technologies.
http://arxiv.org/abs/2307.02630v2
20230705200155
Continuum Reverberation Mapping of Mrk 876 Over Three Years With Remote Robotic Observatories
[ "Jake A. Miller", "Edward M. Cackett", "Michael R. Goad", "Keith Horne", "Aaron J. Barth", "Encarni Romero-Colmenero", "Michael Fausnaugh", "Jonathan Gelbord", "Kirk T. Korista", "Hermine Landt", "Tommaso Treu", "Hartmut Winkler" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.HE" ]
0000-0001-8475-8027]Jake A. Miller Wayne State University, Department of Physics & Astronomy, 666 W Hancock St, Detroit, MI 48201, USA 0000-0002-8294-9281]Edward M. Cackett Wayne State University, Department of Physics & Astronomy, 666 W Hancock St, Detroit, MI 48201, USA 0000-0002-2908-7360]Michael R. Goad School of Physics and Astronomy, University of Leicester, University Road, Leicester, LE1 7RH, UK 0000-0003-1728-0304]Keith Horne SUPA Physics and Astronomy, University of St. Andrews, Fife, KY16 9SS Scotland, UK 0000-0002-3026-0562]Aaron J. Barth Department of Physics and Astronomy, 4129 Frederick Reines Hall, University of California, Irvine, CA, 92697-4575, USA 0000-0003-0607-1136]Encarni Romero-Colmenero South African Astronomical Observatory, P.O. Box 9, Observatory 7935, Cape Town, South Africa Southern African Large Telescope Foundation, P.O. Box 9, Observatory 7935, Cape Town, South Africa 0000-0002-9113-7162]Michael Fausnaugh Department of Astronomy, The Ohio State University, 140 W 18th Ave, Columbus, OH 43210, USA 0000-0001-9092-8619]Jonathan Gelbord Spectral Sciences Inc., 4 Fourth Avenue, Burlington, MA 01803, USA 0000-0003-0944-1008]Kirk T. Korista Department of Physics, Western Michigan University, 1120 Everett Tower, Kalamazoo, MI 49008-5252, USA 0000-0001-8391-6900]Hermine Landt Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE, UK 0000-0002-8460-0390]Tommaso Treu Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA 0000-0003-2662-0526]Hartmut Winkler Department of Physics, University of Johannesburg, P.O. Box 524, 2006 Auckland Park, South Africa Continuum reverberation mapping probes the sizescale of the optical continuum-emitting region in active galactic nuclei (AGN). Through 3 years of multiwavelength photometric monitoring in the optical with robotic observatories, we perform continuum reverberation mapping on Mrk 876. All wavebands show large amplitude variability and are well correlated. Slow variations in the light curves broaden the cross-correlation function (CCF) significantly, requiring detrending in order to robustly recover interband lags. We measure consistent interband lags using three techniques (CCF, JAVELIN, PyROA), with a lag of around 13 days from u to z. These lags are longer than the expected radius of 12 days for the self-gravitating radius of the disk. The lags increase with wavelength roughly following λ^4/3, as would be expected from thin disk theory, but the lag normalization is approximately a factor of 3 longer than expected, as has also been observed in other AGN. The lag in the i band shows an excess which we attribute to variable Hα broad-line emission. A flux-flux analysis shows a variable spectrum that follows f_ν∝λ^-1/3 as expected for a disk, and an excess in the i band that also points to strong variable Hα emission in that band. § INTRODUCTION It is now known that most, if not all, galaxies host a supermassive black hole (SMBH) in their nuclei <cit.>. Active galactic nuclei (AGN) occur when material falls onto this SMBH. The conservation of angular momentum is expected to lead to the formation of an accretion disk. Most of these disks are too far away to be spatially resolved directly with current instruments, so indirect techniques are needed to observe and understand the AGN system. 
By measuring how the irradiated areas surrounding the SMBH respond to changes from the ionizing source, one can use time lags between different wavelength bands to measure the size scale. This technique is called reverberation mapping <cit.> and has been successfully carried out on a number of different AGN. Many regions of the AGN can be probed in this way, from the innermost areas of the accretion disk out to the dusty torus. For more information on reverberation mapping, a review can be found in <cit.>. Time lags between the continuum at different wavelengths are expected for reverberation of the accretion disk. In the lamppost model <cit.>, X-rays from an ionizing source located above the SMBH irradiate the accretion disk, where they are reprocessed and re-emitted at longer wavelengths depending on the temperature of the disk where they land. The disk is hotter closer to the SMBH, producing strong ultraviolet and continuum emission, while further and cooler regions of the disk predominantly produce the optical continuum. This model predicts that the X-rays should drive and lead the variability seen at longer wavelengths. Inner regions of the accretion disk should then see variations first, followed by regions further away from the SMBH. The measured time lag is then assumed to be dominated by the light travel time between different regions of the disk. A geometrically thin, optically thick accretion disk <cit.> is expected to have a disk radial temperature profile of T(R) ∝ R^-3/4. Since wavelength maps to temperature via the Wien displacement law and time lag to radius this gives wavelength dependent lags, τ(λ), which should scale with wavelength as τ(λ)∝λ^4/3 <cit.>. This relationship has been observed in previous continuum reverberation mapping studies, both in focused multi-instrument campaigns <cit.> and from AGN survey studies <cit.>. However, using a sample of over 9000 quasars from the SDSS Southern Survey <cit.> find a bluer spectrum, τ(λ)∝λ^5/7, when possible Small Magellanic Cloud-like dust is considered. Recent studies have shown some predictions of the lamppost model are not always valid, at least not for all AGN. Many AGN show a weaker correlation, or even a de-coupling, between the X-ray and the ultraviolet/optical light curves <cit.>. Some AGN have trends indicative of systems more complicated than the lamppost model. Several studies have shown that there appears to be an incoming slow-moving lag, where the optical leads the ultraviolet and X-rays over the span of 100s to 1000s of days <cit.>. This implies that on longer timescales it is changes in the accretion disk's accretion flow that drive the variability, not an ionizing source. There is also evidence for obscuring elements, such as disk winds, that could exist between the ionizing source and the accretion disk, severing their direct connection <cit.>. Previous reverberation mapping campaigns have found lags that indicate accretion disks ∼3 times larger than expected <cit.>. However, certain physical models have been created <cit.> that give lags consistent with what is observed. More studies involving long-term multi-wavelength campaigns spanning several years are needed to understand variability on different timescales in AGN. Mrk 876 is a prime target for such a study. Its location in the sky allows it to be observed by northern ground-based robotic observatories nearly year-round. It has a history of reliable variability, and has been studied in several other independent campaigns <cit.>. 
From 2016 to 2019 it was observed by the Las Cumbres Observatory, the Liverpool Telescope, and the Dan Zowada Memorial Observatory. By combining these campaigns, a long-term look into the variability patterns and reverberation lags in Mrk 876 can be performed. In Section <ref> we describe the data reduction and analysis, in Section <ref> we present the results from time lag and spectral variability analysis, and in Section <ref> we discuss the implications of our findings. We summarize our results in Section <ref>. § DATA REDUCTION Observations of Mrk 876 were taken from 2016 March through 2019 May. The observatories involved with this project are the Dan Zowada Memorial Observatory (Zowada), the Liverpool Telescope (LT), and the Las Cumbres Observatory (LCO). All images were processed using the standard pipelines for each observatory, which includes bias/dark subtraction and flat fielding. Each observatory uses the SDSS ugri filters. Zowada and LCO both have the Pan-STARRS z_s filter, while LT uses an SDSS z filter. Between 2 to 5 images per filter were taken each night, depending on the instrument, filter, and weather conditions. Zowada is a robotic 0.5-meter telescope located outside of Rodeo, New Mexico <cit.>. It is owned and operated by Wayne State University. Located in the Canary Islands, LT is a robotic 2-meter telescope operated by the Astrophysics Research Institute of Liverpool John Moores University <cit.>. LCO is a global network of robotically operated telescopes <cit.>. For this project, observations were taken from the 2-meter telescope at Haleakala Observatory (ogg02), as well as the two 1-meter telescopes (elp08 and elp06) at McDonald Observatory. Monitoring of Mrk 876 continued past 2019 May as part of a larger coordinated reverberation mapping campaign, and therefore those data are not used here for time lag analysis, but are only used to improve intercalibration between the light curves from the different telescopes. The LCO data from 2016-2018 was also used in <cit.> to analyze the infrared reverberation signal. A summary of the observatories and observation epochs can be found in Table <ref>. lccccc 4 Summary of Observations Telescope Epochs Start Date End Date Period Length Cadence Zowada 57 2019-01-24 2019-05-31 127 2.49 LT 175 2016-07-09 2018-08-12 764 3.53 ogg02 (LCO) 63 2016-02-15 2016-08-26 193 2.60 elp08 (LCO) 145 2016-03-31 2018-10-07 920 4.03 Total 383 2016-02-15 2019-05-31 1201 2.53 An epoch is defined as a night on which observations were obtained. Each epoch may not have data from every filter, but indicates that at least one observation occurred. Period Length is the total number of days that each telescope's observation campaign lasted. Cadence is the average cadence of observations from the g band in days excluding observational gaps of 30+ days. The light curves are obtained using differential photometry. A selection of comparison stars are chosen to be contrasted with the brightness of Mrk 876. We assume that the combined flux from the comparison stars is constant over time. We create an AGN light curve by calculating the flux of the AGN relative to the total flux of the comparison stars, allowing us to create the light curve for the AGN. Comparison stars are typically chosen such that they are 2-5 times brighter than the AGN in order to maximize the signal-to-noise ratio. All of the stars must be present in each telescope's field of view, meaning the choice of comparison stars was limited. 
Three stars were used for differential photometry based on the above factors. Different comparison stars are chosen for the u band, since most stars are typically fainter in this band. For each image the stars are found using the photutils <cit.> module DAOStarFinder. Once found, we identify the comparison stars and AGN from the list of detected objects. A circular annulus and aperture are created for each object. The sizes of the apertures and annuli vary depending on the telescope. The annuli are chosen to have an inner radius of 20 pixels and an outer radius of 30 pixels for Zowada images. LCO and Liverpool have these annuli adjusted to match the same angular size based on their respective pixel scales. To choose the aperture size, we measure the signal-to-noise ratio for different aperture sizes and the fractional standard deviation of the comparison stars. We combine this in quadrature with the statistical uncertainty in flux. The aperture size with the lowest total flux uncertainty is chosen for each band and averaged to be used for each observatory. For Zowada, this is 5 pixels, for LT this is 8 pixels, for the LCO 1-meter observatories (elp06 and elp08) this is 11 pixels, and for the LCO 2-meter observatory (ogg02) this is 9 pixels. The median background is measured within the annulus and is scaled to the area within the aperture and subtracted. The average count rate is calculated from all of the observations of a specific filter taken on a given night, and these average observations are collected into a light curve for each band and each telescope. We combine and intercalibrate the light curves from all the telescopes using CALI <cit.>. CALI assumes the AGN variability is described by a dampened random walk process in order to interpolate between gaps in data and align multiple telescopes' data to a common scale. It applies both additive and multiplicative factors to the data to achieve this. A Bayesian framework with a diffusive nested sampling algorithm is used to determine the intercalibration factors. Additional systematic errors may exist, so CALI increases the uncertainty on all measurements with a systematic error term that is added in quadrature to the original uncertainties. The complete set of light curves can be found in the left panel of Fig. <ref>. When combined, we get an average cadence of 3.31 days in the g band. When we exclude large observational gaps of >30 days, we have an average cadence of 2.53 days. 
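As an illustration of the differential aperture photometry described above, the sketch below uses photutils for source detection, aperture sums, and annulus-based background subtraction; the detection parameters and the way detected sources are matched to the AGN and comparison stars are placeholders rather than the campaign's actual reduction code.

```python
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                ApertureStats, aperture_photometry)

def find_sources(image, fwhm=4.0, nsigma=5.0):
    """Detect point sources with DAOStarFinder (FWHM and threshold are placeholders)."""
    _, median, std = sigma_clipped_stats(image, sigma=3.0)
    sources = DAOStarFinder(fwhm=fwhm, threshold=nsigma * std)(image - median)
    return np.transpose((sources['xcentroid'], sources['ycentroid']))

def relative_agn_flux(image, agn_xy, star_xys, r_ap=5, r_in=20, r_out=30):
    """Background-subtracted AGN counts divided by the summed comparison-star counts."""
    positions = [agn_xy] + list(star_xys)
    apertures = CircularAperture(positions, r=r_ap)
    annuli = CircularAnnulus(positions, r_in=r_in, r_out=r_out)

    # Median sky level per pixel in each annulus, scaled to the aperture area
    sky_median = ApertureStats(image, annuli).median
    counts = aperture_photometry(image, apertures)['aperture_sum'] \
             - sky_median * apertures.area

    return counts[0] / np.sum(counts[1:])
```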
lccccc 6 Time Lag Comparison Boxcar Width 5cLag (days) (Days) u g r i z 0 (Base) 14.60^+3.72_-3.63 0.00^+2.50_-2.52 -5.17^+2.85_-2.68 0.43^+2.87_-2.52 2.31^+3.10_-2.97 25 -0.29^+0.81_-0.80 0.00^+0.35_-0.36 0.55^+0.49_-0.48 2.18^+0.92_-1.12 1.27^+2.69_-1.31 50 -1.79^+0.73_-0.85 0.00^+0.43_-0.44 2.07^+0.48_-0.51 6.48^+0.98_-0.90 6.51^+1.49_-2.11 75 -4.05^+0.79_-0.83 0.00^+0.39_-0.38 3.14^+0.47_-0.45 10.03^+0.89_-0.84 7.42^+1.31_-1.27 100 -5.52^+0.87_-0.89 0.00^+0.34_-0.34 3.32^+0.42_-0.43 10.95^+1.14_-0.88 7.68^+1.20_-1.06 125 -5.34^+0.90_-1.01 0.00^+0.39_-0.38 3.24^+0.48_-0.44 11.6^+1.33_-1.06 8.12^+1.19_-1.12 150 -4.11^+1.05_-1.08 0.00^+0.41_-0.41 3.26^+0.49_-0.48 12.21^+1.41_-1.05 9.24^+1.51_-1.18 175 -3.56^+1.05_-1.16 0.00^+0.40_-0.41 2.86^+0.48_-0.47 11.03^+1.34_-1.09 10.12^+1.62_-1.55 200 -2.26^+1.28_-1.20 0.00^+0.45_-0.46 2.04^+0.62_-0.59 8.21^+1.08_-1.07 8.87^+1.43_-1.63 225 -1.50^+1.40_-1.48 0.00^+0.63_-0.62 1.07^+0.80_-0.83 6.74^+1.15_-1.34 5.8^+1.53_-1.63 250 -0.43^+2.20_-2.27 0.00^+1.01_-1.01 0.02^+1.25_-1.35 4.90^+1.6_-1.94 2.80^+1.81_-2.14 A comparison of the PyCCF timelags for different detrending lengths. The first row with a detrend length of 0 days represents non-detrended data, which we will refer to as the base data in further analyses. § ANALYSIS AND RESULTS §.§ Measured Time Lags The light curves for the ugriz filters can be found in Fig. <ref>. To quantify the variability, the excess variance F_ var <cit.> is calculated for each of the light curves. Strong variability is observed, with a variability amplitude of 13%, 19%, 16%, 13%, and 12% in the u, g, r, i, and z bands respectively. To determine the time lags, the cross correlation function (CCF) is found for these light curves using the g band. This band is chosen as the comparison band because it has the best-sampled light curve and has the highest variability amplitude. The CCF is found using the Python module PyCCF <cit.>, which follows the <cit.> implementation of the ICCF technique to determine lag uncertainties. PyCCF calculates the CCF of two unevenly sampled light curves using interpolation to fill in the gaps between data points. The mean of the CCF at 80% of the CCF's peak (the centroid) is taken as the time lag between the two respective bands. We find in Fig. <ref> that the CCF of Mrk 876 is extremely broad, with a near-flat top spanning several hundred days. This indicates the lag is dominated by long-term variations observed in the light curve. The broad CCF prevents a robust reverberation lag measurement. The data are detrended by subtracting a moving boxcar average to remove this long-term trend. The process of detrending AGN for improved short-term lag measurements is a common practice <cit.>. The data were boxcar subtracted with a variety of boxcar widths ranging from 25-250 days, in intervals of 25 days. The lags of each set of detrended light curves were calculated using PyCCF as well. All boxcar widths between 75 and 175 days produced CCF lags consistent within 1σ uncertainties. The boxcar width with the lowest average lag uncertainty was 100 days, and therefore was chosen for continued analysis. This process does not guarantee that the resulting time lags are true representations of the intrinsic lag between the different wavelengths. While the majority of boxcar lengths tested produce time lags that agree within 1σ, other detrending approaches may produce different results. The light curves and CCF of the 100 days detrended data can be found in Fig. <ref>. 
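For concreteness, the following sketch implements one common convention for the fractional excess variance F_var and the moving-boxcar detrending applied to the unevenly sampled light curves; the exact error treatment and boxcar edge handling adopted in the paper may differ.

```python
import numpy as np

def excess_variance(flux, err):
    """Fractional variability amplitude F_var (one standard convention)."""
    s2 = np.var(flux, ddof=1)                      # sample variance of the light curve
    return np.sqrt(max(s2 - np.mean(err ** 2), 0.0)) / np.mean(flux)

def boxcar_detrend(t, flux, width_days=100.0):
    """Subtract a moving boxcar average of the given width from an unevenly sampled light curve."""
    detrended = np.empty_like(flux, dtype=float)
    for i, ti in enumerate(t):
        in_box = np.abs(t - ti) <= width_days / 2.0
        detrended[i] = flux[i] - np.mean(flux[in_box])
    return detrended
```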
A plot comparing all of the recovered time lags versus detrend lengths is shown in Fig. <ref>, with the lags given in Table <ref>. The g lags are used for time-lag comparison against the other bands. We also measure the lag of the g band against itself, allowing the g band to have a lag beyond exactly 0 days. We use the detrended light curves for most analyses going forward. When we use the non-detrended data, we will refer to it as the base data. The light curves span several years, and due to Mrk 876's position in the sky there are annual seasonal gaps when Mrk 876 was not visible. The CCF method simply performs a linear interpolation between the gaps. However, more sophisticated methods, e.g. JAVELIN and PyROA, have been developed to use the variability properties of the light curves to inform a more realistic interpolation. JAVELIN <cit.> uses a dampened random walk model for the power spectrum of the light curves and assumes a top-hat transfer function. A recent comparison between CCF and JAVELIN methods for determining lags is presented by <cit.>. We use the Python 3 implementation of JAVELIN for our analysis. PyROA <cit.> uses a running optimal average to determine a model for the driving light curve, and assumes that each light curve is a time-shifted and flux-scaled version of this model. The Bayesian Information Criterion is then used to determine how much smoothing is required for the data, and model parameters are estimated using Markov Chain Monte Carlo. The time lags calculated using each method are given in Table <ref>, while in Fig. <ref> we show the lags as a function of wavelength. We also give the maximum correlation coefficient (R_ max) in Table <ref>. All the light curves are well correlated. The lags generally agree between each method within their errors. cccccc 0.99 6 Detrended Time Lags and Light Curve Properties Method u g r i z R_ max 0.75 1.0 0.95 0.80 0.68 PyCCF -5.52^+0.87_-0.89 -0.0^+0.34_-0.34 3.32^+0.42_-0.43 10.95^+1.14_-0.88 7.68^+1.20_-1.06 JAVELIN -1.66^+3.96_-0.06 -0.0^+0.00_-0.00 3.51^+0.99_-0.03 10.49^+2.01_-2.03 7.5^+3.97_-1.03 PyROA -4.86^+0.78_-0.78 -0.01^+0.16_-0.16 3.18^+0.25_-0.40 9.46^+0.62_-0.69 8.63^+1.15_-0.56 R_ max is the maximum value of the cross correlation coefficient between the two light curves (calculated with respect to g). A value of 1 is maximal correlation, while a value of 0 means no correlation between the signals. The lags for PyCCF, JAVELIN, and PyROA are all given in units of days and are calculated with respect to the g band. §.§ Theoretical Time Lags The expected time lag τ can be calculated from a simple model of the accretion disk. Assuming a standard lamppost-like X-ray corona ionizing the accretion disk, the time lag should relate to the radius of the disk R as τ∼ R/c. For a geometrically thin, optically thick accretion disk <cit.>, the temperature profile follows T(R) ∝ (MṀ)^1/4R^-3/4. Assuming blackbody radiation so that λ∝ T^-1, one finds a relationship between the time lag and the emitted wavelength of light: τ∝ (MṀ)^1/3T^-4/3∝ (MṀ)^1/3λ^4/3. We test this relation by fitting the following function: τ = τ_0[(λ/λ_0)^β - y_0], where τ_0 is the normalization parameter and is measured in days, β is the relationship of wavelength to the measured timelags, λ is the wavelength of the band being observed, and λ_0 is the reference wavelength band used, which for this study is λ_0 = 4770Å (the effective wavelength of the g band). 
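As a concrete illustration, the sketch below fits this relation to the detrended PyCCF lags from the table above with β fixed at 4/3 using scipy; the non-g effective wavelengths and the symmetrised uncertainties are approximations introduced here, and the roles of y_0 and β are discussed in the paragraph that follows.

```python
import numpy as np
from scipy.optimize import curve_fit

# Nominal effective wavelengths (Angstrom) for u, g, r, i, z -- approximate values
lam = np.array([3560.0, 4770.0, 6215.0, 7545.0, 8900.0])
lag = np.array([-5.52, 0.00, 3.32, 10.95, 7.68])    # detrended PyCCF lags (days)
sig = np.array([0.88, 0.34, 0.43, 1.01, 1.13])      # symmetrised 1-sigma errors (days)

lam0 = 4770.0  # g-band reference wavelength

def lag_model(l, tau0, y0, beta=4.0 / 3.0):
    """Lag-wavelength relation with beta held fixed at the thin-disk value."""
    return tau0 * ((l / lam0) ** beta - y0)

popt, pcov = curve_fit(lag_model, lam, lag, sigma=sig, absolute_sigma=True, p0=[8.0, 1.0])
tau0, y0 = popt
print(f"tau0 = {tau0:.2f} d, y0 = {y0:.2f}")
```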
The adjustment factor y_0 is present to prevent the lags from being exactly zero at λ_0, with its value normally being around 1. A standard thin disk predicts β = 4/3. We fit the lag-wavelength relation both fixing β = 4/3 and allowing it to be a free parameter in the fit. Best-fits are shown in Fig. <ref>, and given in Table <ref>. When β is left as a free parameter it drops below 1, leading to an unreasonably large value for τ_0. As such, it is not included in the figure. This equation holds the assumption that the x-ray emitting component's height is small relative to the radius of the accretion disk. If this is not true, then this acts as a lower limit to the measured lags. We also fit to the lags the analytic prescription described by <cit.>. This prescription predicts time lags using five different parameters. These are the black hole mass, mass accretion rate (ṁ), X-ray luminosity in the 2-10 keV range, X-ray corona height (H), and black hole spin (a^*). We fit our observed time lags for when the spin parameter a^* = 0.998 and a^* = 0. The black hole mass is estimated to be 2.2 ×10^8 M_⊙ <cit.>, and the X-ray luminosity in the 2-10 keV range is 10^44.11 erg/s from archival XMM-Newton data <cit.>. We estimate the mass accretion rate using our calculation of the Eddington accretion rate (ṁ_E), which is described in Sec. <ref> as a part of the analysis on the normalization parameter τ_0. The initial fits are done fixing ṁ to be this value while H remains the only free parameter. This fit fails for a^* = 0, with the corona height H becoming non-physical. This agrees with previous investigations indicating that Mrk 876's SMBH has a high spin parameter <cit.>. We also perform the fitting allowing ṁ_E to be a free parameter. For this regime we used a grid search to find the lowest possible χ^2 value. We search from 0 – 250 R_G for H and between 0 – 0.75 Eddington accretion for ṁ_E. The upper and lower bounds for the uncertainty of the measurement are determined by finding the value of the parameters when χ^2 is 2.3 above the lowest value found. When left as free parameters, uncertainties on ṁ_E and H are largely unconstrained, but agree within 1-sigma (for both zero spin and maximally spinning cases). We therefore only give the parameters when ṁ_E is fixed and a^* = 0.998 in Table <ref>. cccc 0.99 Fitted Accretion Disk Properties Method τ_0 (days) H (R_G) ṁ_E PyCCF 9.22 ± 2.64 54.0 ± 56.0 0.416 JAVELIN 6.50 ± 2.00 28.0 ± 55.0 0.416 PyROA 9.16 ± 1.38 48.0 ± 25.0 0.416 Comparison of the fitted parameters between PyCCF, JAVELIN, and PyROA. The normalization parameter τ_0 from Eqn. <ref> is found when β is fixed to be 4/3, as expected from thin disk theory. These are plotted as the solid black lines in Fig. <ref>. The X-ray corona height (H) is found when the mass accretion fraction (ṁ) is fixed to be 0.416 for a spin parameter of a^* = 0.998. For more details, see Sec. <ref>. §.§ Calculation of Normalization Parameter τ_0 One noted problem among AGN reverberation mapping campaigns is the measured value of normalization parameter τ_0. We can estimate the expected value of τ_0 for the thin disk model using estimates of the black hole mass and mass accretion, and compare to the measured value <cit.>. The majority of campaigns have found that the fitted value of τ_0 is 2-3 times larger than the expected/calculated value <cit.>. This implies that the accretion disk itself is 2-3 times larger than the standard thin disk model predicts. 
Alternatively, some other aspect of the AGN system is interfering with measurements and creating a falsely large measurement of τ_0. <cit.> parameterize the following equation for calculating τ_0: τ_0 = 1/c(Xk λ_0/hc)^4/3[(GM/8 πσ) (L_Edd/η c^2) ( 3+κ) ṁ_ E]^1/3. To calculate this value for Mrk 876, several assumptions are made. It is assumed that the X-rays and viscous heating contribute roughly the same amount of energy to the disk, so that the radiative efficiency for converting rest mass into radiation η = 0.1 and the local ratio of external heating to internal heating κ= 1. The factor X is a multiplicative factor of order unity, and is determined from temperature T and the wavelength measured via T = Xhc/kλ. This factor helps account for how temperature relates to the wavelength emitted for a given radius, and is influenced by the choice of geometry in the disk. For a flux-weighted mean radius, X = 2.49 <cit.>, but taking variation of the disk emission into account leads to X = 5.04 <cit.>. To calculate the Eddington accretion rate (ṁ_E), one must estimate the bolometric luminosity. We use the standard bolometric correction via L_ bol∼ 9λ L_λ(5100) <cit.>. However, since 5100 is not a central wavelength of the SDSS filters, the g band is used as the nearest available approximation. The g-band data are converted from relative flux to magnitudes using the comparison stars, which have magnitudes from the APASS catalogue. The AGN magnitudes are then converted into fluxes and extinction corrected with an E(B-V) of 0.027 <cit.> using Cardelli's extinction law <cit.>, and corrected to rest-frame fluxes using z = 0.1385 <cit.>. We assume a luminosity distance of D_ L = 588.4 Mpc to calculate L(5100) from the dereddened, rest-frame flux. The Eddington luminosity is calculated assuming a black hole mass of 2.18×10^8 M_⊙ <cit.>. For our bolometric luminosity, L_ bol = 1.144 × 10^46 erg s^-1, we determine an Eddington fraction of ṁ_ E = 0.416. Substituting into Eqn. <ref> we calculate τ_0 = 2.57 days and 6.58 days for X = 2.49 and X = 5.04 respectively. §.§ Spectral Analysis We note that in the time lag analysis the i-band lags (Fig. <ref>) are consistently offset from the rest of the trend. At the redshift of Mrk 876, Hα is close to the effective wavelength of the i band. Other emission lines may be affecting the other lags as well. Fig. <ref> shows spectra taken on 2019-07-02 (just after the end of the campaign) from the LCO Haleakala Observatory (FTN) overlaid with the filters used to take the data. The z band shows no significant emission line contribution. The u, g, and r bands show some emission lines. To determine total flux contribution, we modeled the emission lines using astropy Spectrum1D models. These are shown in Fig. <ref> as the orange lines. The total contribution is summed up from each model for each point, using the throughput of each filter as a modifier of the total strength of the emission. The percentage of flux that comes from the continuum versus the emission lines are found in Table <ref>. Note that we only factor in the broad emission lines to the emission line percentage, and not any potential contribution from the diffuse continuum which is also thought to originate from the BLR. We find that the majority of filters see a small amount of emission line contribution, but not enough to warrant additional consideration. The exception to this is the i band, where we find that Hα contributes 33% of the total flux. 
To ensure that the presence of Hα was consistent throughout the campaign, we also analyze spectra taken from the start of the campaign in 2016. We find the Hα line contributes 29% of the total flux in 2016, confirming that Hα is a strong, consistent presence throughout the entire monitoring campaign. ccc 1.2 2 Emission Line Contributions by Filter Filter Continuum Emission Line u 91% 9% g 94% 6% r 85% 15% i 67% 33% The percentage of continuum and emission line contribution to the overall image from the modeled LCO spectra found in Fig. <ref>. However, a 30% contribution by Hα does not imply a 30% increase in lag. It is a common expectation that to zeroth order, the continuum lag and the Hα lag will combine weighted by their flux, however, simulations have shown that F_var is the dominating factor <cit.>. The Hα lag for Mrk 876 has been measured to be 43^+40_-22 days, with a measured variability amplitude smaller than the continuum variability <cit.>. However, it is more complicated than this, as the lag also depends on the variable flux, properties of the driving light curve, and the shapes of the transfer functions. Detailed simulations would be needed to properly assess this and are beyond the scope of this paper. Given that the Hα flux is a significant fraction of the flux in the i band, it is plausible to attribute the excess i-band lag to the Hα line. §.§ Flux-Flux Analysis To determine the spectral energy distribution (SED) of the variable flux, we perform a flux-flux analysis on the base data similar to <cit.> <cit.>. The photometric light curves are first flux-calibrated using the magnitudes of the comparison stars as found in the APASS catalogue <cit.> for the g, r, and i bands, and the Pan-STARRs catalogue <cit.> for the z band. Neither catalogue contained our u-band comparison stars, so as a proxy we use an observation from the Neil Gehrels Swift Observatory <cit.> of Mrk 876. The flux-calibrated light curves are corrected for Galactic absorption with an E(B-V) of 0.027 <cit.>. We use the extinction law of <cit.>, and adjust the data to rest-frame flux. We then perform the flux-flux analysis by breaking the flux into constant and variable components, representing the galaxy and the AGN respectively, using the following formula: f_ν(λ, t) = A_ν(λ) + R_ν(λ) X(t). A_ν is the average spectrum, R_ν is the rms spectrum, and X(t) is a dimensionless light curve normalized to a mean of 0 and a standard deviation of 1. The light curves and fits for the non-detrended data are shown in Fig. <ref>, Panel (a). The flux-flux relations are shown in Panel (b). To estimate the galaxy contribution to the different bands, we extrapolate the fits to where the uncertainty envelope of the shortest wavelength band crosses f_ν = 0 which we define as X(t) = X_G. This serves as a reference point for the other bands, and determining f_ν at X(t) = X_G provides a lower limit on the constant component in each band. The dashed lines X_F and X_B represent the lowest and highest points found from all filters, and X_0 is given as reference. Panel (c) of Fig. <ref> shows the maximum, minimum and average SED of Mrk 876 during the monitoring, along with the variable (rms) and constant spectral components determined from the flux-flux analysis. Table <ref> gives the values determined with the flux-flux analysis. The rms spectrum is consistent with f_ν∝λ^-1/3 expected for an accretion disk spectrum. 
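To illustrate the decomposition just described, here is a minimal sketch that estimates A_ν and R_ν for each band by regressing the calibrated fluxes on a normalised driving light curve X(t); how X(t) is actually constructed in the paper (for example from a joint model of all bands) is not reproduced here.

```python
import numpy as np

def flux_flux(x_t, band_fluxes):
    """Fit f_nu(t) = A + R * X(t) per band, with X(t) normalised to mean 0 and std 1.

    x_t:          driving light curve sampled at the same epochs as the fluxes
    band_fluxes:  dict {band_name: flux array in mJy}
    Returns {band: (A, R)}, i.e. the constant (mean) and variable (rms) components.
    """
    X = (x_t - np.mean(x_t)) / np.std(x_t, ddof=1)
    out = {}
    for band, f in band_fluxes.items():
        R, A = np.polyfit(X, f, 1)    # slope = rms spectrum, intercept = mean spectrum
        out[band] = (A, R)
    return out
```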
An excess in the variable spectrum in the i band is seen, which would be consistent with a significant variable broad Hα line contributing in that band. There is also an additional constant component in the i band shown in Panel (c), indicating there is another source of continuum emission beyond the accretion disk. cccccc 0.99 6 Flux-Flux Results u g r i z Max 5.58 ± 0.02 4.53 ± 0.01 4.79 ± 0.01 7.69 ± 0.02 5.74 ± 0.04 Mean 4.61 ± 0.00 3.52 ± 0.00 3.87 ± 0.00 6.56 ± 0.0 4.85 ± 0.00 Min 3.54 ± 0.02 2.22 ± 0.03 2.68 ± 0.03 4.9 ± 0.07 3.54 ± 0.09 Constant 0.09 ± 0.07 0.21 ± 0.01 0.78 ± 0.01 2.64 ± 0.02 2.08 ± 0.04 RMS 0.92 ± 0.02 0.67 ± 0.00 0.63 ± 0.00 0.80 ± 0.01 0.56 ± 0.01 Base light curve flux-flux analysis values, which are shown in Fig. <ref>, panel c. All units are in milliJanskies (mJy). § DISCUSSION We performed photometric monitoring of the AGN Mrk 876 over a 3 year period, during which is exhibited large amplitude (12 – 19%) variability in each of the ugriz bands (see light curves in Fig. <ref>). We looked for lags between the different bands, as would be expected from an accretion disk reverberation scenario where ionizing radiation drives variability at longer wavelengths, with the hottest, inner (ultraviolet) part of the disk responding before the cooler, outer (optical) region of the disk. Initial CCF lags recovered from PyCCF reveal a long-term variability (>100∼days) that dominates over the short term variability expected from accretion disk reverberation. AGN are known to vary on longer time scales, and to accurately recover the reverberation lags expected on shorter timescales (days), this long-term variability needs to be removed. We remove the long-term variability by subtracting a moving boxcar average from the light curves. To do this, a moving boxcar average is subtracted from the data. We test a range of widths to see which provided the lowest uncertainties in the resulting time lags. We find that detrend widths of 75-175 days produce time lags that agree within 1σ uncertainties, and select 100 days as this width provided the lowest uncertainties. The base data CCF distributions are shown to the right of the light curves in Fig. <ref>, and the detrended light curves and their CCFs can be found in Fig. <ref>. Once the long-term trends are removed, the resulting CCF is significantly narrower, allowing a precise recovery of the reverberation lags. The measured lags from all tested detrending lengths can be found in Table <ref>. The source of the long-term variability may be changes in the accretion rate. This acts on the viscous timescale, predicted to be on the scale of hundreds of days for the optical emitting region <cit.>. As more reverberation mapping studies are undertaken, it is becoming apparent that the lamppost model does not adequately explain all variability seen in some AGN light curves. It is possible that many AGN exhibit these long-term trends, as many observation campaigns detect some kind of long-term variation often associated with accretion disk flow or broad-line region interference <cit.>. It is difficult to perform the lengthy monitoring needed to capture these long-term variations with traditional observing on a single telescope. With the rise of robotic observatories like LT, LCO, and Zowada, as well as all-sky surveys, such as ASAS-SN, ATLAS, ZTF, PAN-STAARS, and CRTS, more studies like this one will be possible. The wavelength dependence of the detrended lags are shown in Fig. <ref>. 
As a comparison, the lags are also found with the more sophisticated JAVELIN and PyROA techniques. We fit the relation τ∝λ^β to all sets of lags. The lags recovered by PyCCF, JAVELIN, and PyROA are generally in agreement within uncertainty, and all recovered lags are well represented by τ∝λ^4/3. They are consistent with the expected wavelength dependence for a geometrically thin, optically thick Shakura-Sunyaev accretion disk with an illuminating central source. All methods find an excess in their i-band lags, deviating from an extrapolation of the trend in the other wavebands. The lags from different methods are given in Table <ref>. Notably, we do not detect a u-band excess lag in our detrended time lag measurements. These excesses have been detected in a number of reverberation mapping studies <cit.>, but they are absent in others <cit.>. Without UV/X-ray monitoring, we lack the wavelength coverage to determine if Mrk 876 truly lacks a u-band lag excess. However, if the source of the u-band excess is from the BLR <cit.>, then it is possible that our detrending process has already removed this contribution. Looking at the base light curve time lags in Table <ref>, we see that the u band has the largest lag of all the bands, at 14.68 days. This could imply that the BLR u-band emission operates on timescales of around 100 days, and that its contribution is removed by our detrending process. However, the large flat-topped CCFs that are produced prevent a robust lag measurement, so this result should be taken with caution. The boundary of the accretion disk and the dusty torus can be examined using our data. We can extrapolate what the values for longer wavelength lags would be, assuming they follow the τ∝λ^4/3 trend predicted for a geometrically thin, optically thick disk. We compare the values we predict against the values determined from fitting the near-infrared rms spectrum by <cit.>. The implied radii are measured to be ∼ 25 light-days in the J band and ∼ 56 light-days in the K band. We extend our fits of τ∝λ^4/3 to the optical data to estimate the expected near-IR disk lags for these bands. The uncertainties on τ_0 and y_0 are used to create the upper and lower bounds for the lags. We extrapolate lags from each of the lag methods. For PyCCF, J = 17±7 days and K = 48±16 days. For JAVELIN, J = 14±7 days and K = 36±14 days. For PyROA, J = 17±4 days and K = 47±8 days. Our extrapolation of the disk lags is in general agreement with what is measured by <cit.>. We also fit the time lags to the analytical prescription described in <cit.> in Fig. <ref>. The black hole mass and X-ray luminosity are taken from the literature, leaving the mass accretion fraction (ṁ_E) and X-ray corona height (H) to be fit. We calculate the Eddington fraction from the bolometric luminosity to be ṁ_E = 0.416, so we perform fits both with ṁ_E fixed to this value and with it left as a free parameter. However, when leaving ṁ_E as a free parameter the fit is poorly constrained. We therefore only consider fits with ṁ_E fixed at 0.416. Fig. <ref> only shows the fit for when ṁ_E is a fixed parameter, with the red dashed line representing the a^* = 0.988 regime. All of the values determined from fitting can be found in Table <ref>. The values of H are found to be around 30-50 R_G, but agree with each other to within 1σ. Spectra of Mrk 876 (Fig. <ref>) show that there is a significant contribution from the Hα broad line that is present in i-band measurements.
This emission remains significant throughout the duration of the campaign. While determining the exact impact is beyond the scope of this paper, emission from the broad-line region has been suspected to influence lags in prior continuum reverberation mapping campaigns. Contributions from broad emission lines can also influence the lag in a photometric band <cit.>. While this is not prominent in all objects, the redshift of Mrk 876 puts the strong Hα line in the middle of the i band, indicating that it is a strong possibility in this case. We perform a flux-flux analysis on the dereddened and redshift-corrected flux in Fig. <ref>. The values determined for the SED are recorded in Table <ref>. The flux-flux analysis allows a determination of the variable and constant components of the SED. The variable (rms) component agrees with the f_ν∝λ^-1/3 spectrum expected for an accretion disk, though it shows an excess in the i band. This indicates the presence of additional variability beyond what is expected from the accretion disk. The constant spectrum also shows an excess in the i band. Both the variable and constant component excesses can be attributed to a prominent Hα line. This excess variability lends credence to the broad Hα line being the source of the longer than expected i-band lag. The Hα line varies on timescales shorter than those removed by our detrending but longer than those expected from the accretion disk. Our detrending process removes slow variations on timescales around 100 days, while the Hα in Mrk 876 has been measured to vary on timescales of roughly 43^+40_-22 days <cit.>. This allows its variations to influence the i-band lags still, despite the detrending process. As explained in Sec. <ref>, we do not expect the lags of Hα and the accretion disk to add together simply, but the effect on the lags is clear to see. We estimate the Eddington fraction using the bolometric luminosity to be ṁ_E = 0.416 during the campaign. This makes it one of the highest Eddington rate AGN studied via continuum reverberation mapping to date. Based on its black hole mass and mass accretion rate, and like many other studies have found <cit.>, we find that the normalization (τ_0) recovered from fitting the measured time lags (9.22 ± 2.64 days, 6.50 ± 2.00 days, and 9.16 ± 1.38 days for PyCCF, JAVELIN, and PyROA respectively) is several times larger than τ_0 = 2.57 days calculated from theory when X = 2.54. These values are 2.6-3.6 times greater than the theoretical value, depending on the method. This implies that a flux-weighted mean radius alone cannot adequately describe the measured accretion disk sizes under the other assumptions about accretion disks we applied. In order for a flux-weighted mean radius model of the accretion disk to recover the measured disk size, we would need to assume a much higher accretion rate than expected, a higher Eddington ratio, or a lower accretion efficiency. This problem exists beyond just reverberation lag measurements, as similar results are found through gravitational microlensing campaigns <cit.>. The additional phenomena required are broadly divided into two categories. First are theories that the source of the lags is still X-ray reprocessing, but that some aspect of AGN geometry is either incorrectly assumed or different than expected. One example of this would be that the irradiating source is higher above the accretion disk than usually assumed <cit.>.
The other category of explanations involves different sources for the lags, such as lags instead being due to disk turbulence <cit.> or due to far-UV illumination from the inner disk shining onto and providing the reprocessing radiation for the rest of the disk <cit.>. Alternatively, <cit.> suggest these longer than expected lags can be attributed to an underestimate of the intrinsic flux of the AGN, and hence an underestimate of the Eddington ratio, due to the presence of large amounts of intrinsic reddening. However, the fact that the variable spectrum closely follows f_ν∝λ^-1/3 suggests that there is not a large amount of intrinsic reddening in this particular object. For our analysis specifically, the choice of detrending via a moving boxcar average could influence the lags and therefore the measured size of the accretion disk. While the majority of detrending lengths we test agree with each other within uncertainty (Fig. <ref>), the outlier cases show that other lengths can produce smaller time lags. We argue that the best detrending length is 100 days because it yields the smallest uncertainties, but again this does not guarantee that it is the correct length. The equation we use to calculate the lags (Eq. <ref>) is also simplistic in its assumptions about the geometry of the AGN and accretion disk system. Another possible explanation for the accretion disk size problem is an underestimation of X. Many studies use the value of 2.54 calculated by <cit.>. However, other studies have calculated it considering other factors of the AGN. When including variation of the disk emission <cit.>, the value of X becomes 5.04. Using this value, we calculate τ_0 = 6.58 days. This is closer to what we observe for all lag measurement methods, falling within 1σ for all methods except PyROA. Given its mass and Eddington fraction, the continuum lags in Mrk 876 are some of the longest yet observed. The outer edge of the accretion disk is expected to become self-gravitating at 12 light days regardless of the mass of the system <cit.>. Our u to z lag is around 13 days, and the measured τ_0 of around 9 days suggests that the z band corresponds to a disk size of 17-20 light days (depending on lag method), significantly larger than the 12 light day self-gravitating radius. Our estimates are consistent with what <cit.> estimate through spectral fitting. § CONCLUSIONS In summary, Mrk 876 displays large-amplitude variability over 3 years and shows significant time lags across the optical band. The lags in Mrk 876 are some of the longest continuum lags yet measured, and longer than expected for the self-gravitating radius. Our conclusions on Mrk 876 are as follows: * We measure broad CCFs with the base light curves, indicating long-term variations are present. The data are detrended by subtracting a moving boxcar average of 100 days, recovering the short-term lags. We measure the lags with PyCCF, JAVELIN, and PyROA. The results can be found in Table <ref> and plotted in Fig. <ref>. * The i-band lag is longer than expected from an extrapolation of the other bands. We analyze spectra taken from before and after the campaign, finding that the Hα broad line emission has a strong contribution in the i band, making up about 1/3 of the total i-band flux. We find that due to its intermediate lag timescale (∼40 days) this signal would not be removed by detrending and would still exist in our lag measurements of the detrended data. * We perform a flux-flux analysis on both the base and detrended data.
The base data contain an i-band excess in both the constant and variable (rms) emission, while the rest of the bands agree with the f_ν∝λ^-1/3 profile expected for an accreting thin disk. The detrended flux-flux analysis reveals that the excess variable emission is removed via detrending, implying that what remains as an excess in the detrended light curves varies on timescales longer than the reverberation lags but shorter than 100 days. This adds further support to Hα being responsible for this lag excess. * The normalization parameter τ_0 is found for all lag measurement methods. We calculate this value following the parameterization described by <cit.>. Two different values of the factor X, 2.49 <cit.> and 5.04 <cit.>, are used to calculate τ_0 = 2.57 days and τ_0 = 6.58 days, respectively. Our τ_0 values are closer to the latter, agreeing for most methods within uncertainty. * The lags are fit to the analytical prescription described in <cit.>. When ṁ_E is fixed at the observed value of 0.416, we find an X-ray source height of 30-50 R_G for a maximally-spinning black hole. Continuum reverberation mapping continues to challenge the standard picture of AGN accretion. More studies with high cadence observations are required to truly understand the AGN system. We thank the anonymous referee for their comments and suggestions. JAM and EMC gratefully acknowledge support from the National Science Foundation through AST-1909199. We thank David Moutard for feedback that has improved this manuscript. This research made use of Photutils, an Astropy <cit.> package for detection and photometry of astronomical sources <cit.>. This work makes use of observations from the Las Cumbres Observatory global telescope network. Research at UC Irvine is supported by NSF grant AST-1907290. HL acknowledges a Daphne Jackson Fellowship sponsored by the Science and Technology Facilities Council (STFC), UK. ERC acknowledges support by the NRF of South Africa. TT acknowledges support from NSF through grant NSF-AST-1907208. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. LCOGT (elp06, elp08, FTN, optical) <cit.>, Liverpool:2m <cit.>, Swift <cit.>, Zowada <cit.> Astropy <cit.>, CALI <cit.>, JAVELIN <cit.>, Photutils <cit.>, PyCCF <cit.>, PyROA <cit.>
http://arxiv.org/abs/2307.01061v1
20230703144014
Quantizing the Quantum Uncertainty
[ "Etera R. Livine" ]
quant-ph
[ "quant-ph" ]
http://arxiv.org/abs/2307.01407v1
20230703235540
Enhancing ab initio diffusion calculations in materials through Gaussian process regression
[ "Seyyedfaridoddin Fattahpour", "Sara Kadkhodaei" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Saddle point search schemes are widely used to identify the transition state of different processes, like chemical reactions, surface and bulk diffusion, surface adsorption, and many more. In solid-state materials with relatively large numbers of atoms, minimum mode following schemes such as the dimer method are commonly used because they alleviate the calculation of the Hessian on the high-dimensional potential energy surface. Here, we show that the dimer search can be further accelerated by leveraging Gaussian process regression (GPR). The GPR serves as a surrogate model to feed the dimer with the required energy and force input. We test the GPR-accelerated dimer method for predicting the diffusion coefficient of vacancy-mediated self-diffusion in bcc molybdenum and sulfur diffusion in hexagonal molybdenum disulfide. We use a multi-task learning approach that utilizes a shared covariance function between energy and force input, and we show that the multi-task learning significantly improves the performance of the GPR surrogate model compared to previously used learning approaches. Additionally, we demonstrate that a translation-hop sampling approach is necessary to avoid over-fitting the GPR surrogate model to the minimum-mode-following pathway and thus succeed in locating the saddle point. We show that our method reduces the number of evaluations to a fraction of what a conventional dimer requires. § INTRODUCTION Transition state theory (TST) <cit.> is widely used to quantify the free energy barrier (or activation free energy) of chemical reactions, such as molecular dissociation, as well as material processes including bulk diffusion, surface diffusion, or surface adsorption <cit.>. Within TST, the activated state is identified as the saddle point on the free energy surface. Consequently, saddle point search methods are crucial for quantifying the activated state, energy barrier, and rate of various kinetic processes in materials <cit.>. Among saddle point search methods, minimum mode following methods <cit.>, such as the dimer algorithm <cit.>, have gained popularity due to their computational advantages, particularly for solid-state processes. Unlike alternatives such as the partitioned rational function optimization (P-RFO), these methods do not require energy Hessian calculations in the high-dimensional space of solid-state atomic systems <cit.>. However, utilizing the dimer algorithm can still be computationally prohibitive when combined with density functional theory (DFT) energy calculations. In this study, we show that we can further enhance computational efficiency by utilizing Gaussian process regression (GPR) as a surrogate model for inputting forces to the dimer algorithm. We implement a GPR-guided dimer algorithm, which we call the GPR-dimer, and apply it to investigate bulk diffusion in bcc Mo as well as diffusion of sulfur in hexagonal MoS2. Building upon previous studies combining GPR with saddle point or minimum energy path search methods, this work provides two new insights for advancing the utility of GPR-dimer.
Firstly, we employ a multi-task GPR learning approach, demonstrating a significant reduction in both training error and time compared to previously used GPR learning methods. Secondly, we introduce a translation-hop sampling approach that reduces the computational effort of DFT and enhances the robustness of the search algorithm. Furthermore, this work extends the application of GPR-dimer to solid-state materials. Previous studies have successfully employed GPR to accelerate the search for saddle points or minimum energy paths <cit.>. For instance, Jónsson's group developed an adaptive GPR surrogate model of the potential energy surface (PES) <cit.>. They utilized this model to derive an initial interpolation of the minimum energy path, which was subsequently optimized using the nudged elastic band (NEB) method. Their investigations focused on 25 chemical reactions, primarily involving organic molecules, known as the Baker test systems <cit.>. The results demonstrated the superiority of the GPR-accelerated NEB search over the classical NEB optimizer. The GPR model was trained using the Matérn covariance function and a predetermined weighted combination of energy-based and force-based loss functions. Another study by Kästner's group combined a GPR-interpolated PES with the P-RFO method to identify transition states in the Baker test systems <cit.>. By providing the necessary Hessian information to the P-RFO optimizer, the surrogate GPR model rendered the method computationally efficient, comparable to force-based methods like the dimer algorithm. Subsequently, they introduced a GPR-based Hessian update scheme <cit.>, where the GPR was employed to update Hessian matrices using gradient-based information during the optimization procedure. This approach involved at least one initial Hessian, along with additional energies and gradients. Moreover, Denzel and Kästner <cit.> utilized the Matérn covariance function and the “derivative observation” GPR learning technique <cit.>, which explicitly relates the learned forces to the negative partial derivatives of learned energies. In a subsequent study, the same group combined the GPR surrogate model, employing derivative observation learning, with the NEB optimizer <cit.>. In two additional studies, Jónsson's group introduced the inverse-distance covariance function as an alternative to the previously employed covariance functions <cit.>, resulting in a significantly enhanced GPR surrogate model. By utilizing this improved GPR model to guide the dimer and NEB saddle point searches, they investigated the dissociative adsorption of an H2 molecule on the Cu(110) surface, three gas-phase chemical reactions, and the diffusion hop of an H2O molecule on an ice Ih(0001) surface <cit.>. Building upon previous studies, we extend the application of GPR-dimer to investigate solid-state processes, moving beyond molecular processes. This study presents two examples of solid-state processes: vacancy-mediated self-diffusion in bcc Mo and sulfur diffusion in hexagonal MoS2. We address the challenge of handling high dimensionality when applying GPR-dimer to solid-state processes through the use of the inverse-distance covariance function introduced by Jónsson's group <cit.>. By only considering atoms in the vicinity of the diffusing atom, the inverse-distance covariance formulation significantly reduces the degrees of freedom in the high-dimensional space of the atomic systems considered in this study. More details are provided in section <ref>.
Additionally, this work advances the GPR-dimer method in two key aspects. First, we introduce the use of multi-task learning for the GPR surrogate model, resulting in a substantial improvement in the model's performance and robustness compared to the previously employed derivative observation learning. The multi-task GPR learning approach resembles the learning scheme used in Ref. <cit.>, where the loss function represents a weighted average of energy and force losses. However, in the multi-task approach, the contribution from force and energy losses is learned through a shared covariance function, unlike the approach in Ref. <cit.>, which requires prior knowledge of the contribution of each loss. Second, we demonstrate that by employing a translation-hop sampling approach (defined below), the GPR-dimer search becomes both successful and robust. As detailed in section <ref>, the GPR-dimer method iteratively updates the GPR model as the dimer walker progresses, incorporating new DFT-calculated values from the energy surface into the training data. We show that a minimum number of dimer translation steps must be hopped over before updating the GPR to ensure successful guidance of the dimer to reach the saddle point. We refer to this approach as the translation-hop sampling approach. This sampling strategy strikes a balance between an overfitted and underfitted surrogate model. Sampling at every translation step leads to a GPR surrogate model that is overfitted to the dimer walk path on the PES, while skipping too many translation steps results in an underfitted model. A detailed discussion is provided in section <ref>. Denzel and Kästner discuss a similar balance between interpolation and extrapolation with the use of an overshooting approach for sampling the GPR for geometry optimization <cit.> (not for saddle point search). The remainder of this article is organized as follows: In section <ref>, we explain the GPR-dimer method developed in this study. In section <ref>, we validate the predictions of our GPR-dimer method for diffusivity coefficient of monovacancy diffusion in bulk bcc Mo and the activation energy for sulfur diffusion in MoS2. Subsequently, in section <ref>, we elucidate the role of different factors in enhancing the performance of the GPR-dimer saddle point search method. Finally, we compare the computational cost of the standard dimer method against our implementation of the GPR-dimer method. In section <ref>, we provide a general interpretation of the numerical experiments using the GPR-dimer method within the context of GPR learning and the dimer search algorithm. § METHOD The approach to accelerate the dimer walk using GPR is based on a simple premise: GPR serves as a surrogate model for the computationally intensive sampling of the potential energy surface typically through methods like DFT. Once trained, the surrogate model can readily provide the energy values and their gradients (forces) at unsampled locations of the energy surface. The interaction between GPR and the dimer takes place through an iterative feedback loop: GPR provides estimates of the energy and its gradient along the dimer walk, while the dimer walker contributes new points on the energy surface. These new points are sampled through DFT and then used to update (retrain) the GPR. Through this iterative process, the dimer gradually converges towards the saddle point. 
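The iterative feedback loop just described can be summarized in a short schematic sketch. The callables dft_energy_forces, train_gpr, and dimer_step are placeholders standing in for the DFT calls, the GPR (re)training, and the dimer rotation/translation detailed in the following subsections; the sketch illustrates the structure of the loop rather than the exact implementation (for brevity it seeds the training set with a single point instead of the first n_i configurations of a short DFT-guided dimer walk).

import numpy as np

def gpr_dimer(x0, dft_energy_forces, train_gpr, dimer_step,
              n_hop=10, f_tol=0.01, max_steps=500):
    """Schematic GPR-accelerated dimer loop.

    dft_energy_forces(x) -> (energy, forces)   expensive ab initio call
    train_gpr(data)      -> surrogate returning energies/forces
    dimer_step(x, gpr)   -> new midpoint after rotation + translation
    """
    data = [(x0, *dft_energy_forces(x0))]
    gpr = train_gpr(data)
    x = x0
    for step in range(1, max_steps + 1):
        x = dimer_step(x, gpr)                  # rotate and translate on the surrogate
        if step % n_hop == 0:                   # translation-hop sampling
            energy, forces = dft_energy_forces(x)
            data.append((x, energy, forces))
            gpr = train_gpr(data)               # retrain on the expanded data set
            if np.linalg.norm(forces) < f_tol:  # total force below 0.01 eV/A
                return x
    raise RuntimeError("GPR-dimer did not converge within max_steps")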
The next two subsections delve into the design and training of the GPR, as well as the communication between the GPR and the dimer, respectively. §.§ Gaussian process regression (GPR) surrogate model Training Data Set. The GPR is initially trained on n_i atomic configurations. These atomic configurations are collected from the first n_i translation steps of a DFT-guided dimer walk (i.e., a standard dimer). The standard dimer is launched from an atomic configuration which is estimated to be in the vicinity of the saddle point using a geometric interpolation (as detailed in section <ref>). The atomic configurations constitute the input space and the DFT-calculated atomic forces and energies constitute the target values in the training data set (see more details below). The training data set is expanded as the dimer progresses by adding a new DFT-calculated data point every n_h translation steps of the dimer walk. We call this approach the translation-hop sampling method. The effect of different n_h values is examined and explained in section <ref>. For both studies of bcc Mo self-diffusion and sulfur diffusion in hex MoS2, we use 3 atomic configurations for the initial training of the GPR (n_i=3) and we hop over 10 translation steps before adding a new DFT calculation to the training data (n_h=10). The DFT calculations of energies and forces are performed using the Vienna Ab-initio Simulation Package (VASP) <cit.>, which employs the projector-augmented-wave (PAW) method <cit.> and the generalized gradient approximation (GGA) for the exchange-correlation energy in the Perdew-Burke-Ernzerhof (PBE) form <cit.>. For bcc Mo, we use a 3×3×3 supercell of the conventional bcc unit cell with 54 atoms, a Monkhorst-Pack k-point mesh of 5 × 5 × 5, and a plane-wave energy cutoff of 520 eV within the PBE exchange-correlation functional. For hex MoS2, we use a 2×2×2 supercell of the conventional hexagonal unit cell with 48 atoms, a Monkhorst-Pack k-point mesh of 5 × 5 × 1, and a plane-wave energy cutoff of 520 eV within the PBE exchange-correlation functional. GPR Covariance Function. Choosing an appropriate covariance function is crucial for GPR performance <cit.>. Here, we use the inverse distance covariance function of Ref. <cit.>, which demonstrates superior performance compared to the radial basis function (RBF) or its variants (e.g., Matérn), as shown in Ref. <cit.>. Compared to a stationary covariance function such as the RBF, the inverse distance covariance function can better capture the asymmetry of inter-atomic forces, specifically the large repulsive forces caused when atoms get close to each other. This is because the inverse distance difference measure (i.e., the 𝒟_1/r(𝐱, 𝐱^') term in equation <ref>) stretches when atoms approach each other. This makes the covariance function non-stationary with respect to the atom coordinates and allows faster variation of energy in those directions (see more details in Ref. <cit.>). The inverse distance covariance function measures the similarity of two input atomic coordinates 𝐱 and 𝐱^' as <cit.>: k_1/r(𝐱, 𝐱^') = σ_c^2 + σ_m^2 exp( -1/2 ∑_i ∈ A_m ∑_j ∈ A_m, j>i, or j ∈ A_f (1/r_i,j(𝐱) - 1/r_i,j(𝐱^'))^2 / l_ϕ(i,j)^2 ), where the double sum is the inverse distance difference measure 𝒟_1/r(𝐱, 𝐱^'). Here, 𝐱 (or 𝐱^') denotes a 3N-dimensional configuration vector containing the Cartesian coordinates of the atomic system with N atoms, 𝐱=[x_11,x_12,x_13,...,x_N1,x_N2,x_N3]^T. r_i,j is the distance between atoms i and j, defined as r_i,j=√(∑_d=1^3(x_id-x_jd)^2). l_ϕ(i,j) denotes the length scale for the atom pair ϕ_(i, j). σ_m controls the magnitude of the covariance function, and σ_c is the variance of a constant Gaussian prior distribution. The 𝐥_ϕ vector (with a size equal to the number of atomic pairs), σ_m, and σ_c are the training parameters of the inverse distance covariance function.
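A minimal sketch of this covariance between two configurations is shown below. The list of atom pairs entering the sum (the moving-moving and moving-frozen pairs, discussed next) is passed in explicitly, and the hyperparameters correspond to σ_c, σ_m, and the per-pair length scales l_ϕ defined above; this is an illustration of the equation above rather than the production implementation, and the numerical values in the example call are arbitrary.

import numpy as np

def inverse_distance_kernel(x1, x2, pairs, length_scales, sigma_c, sigma_m):
    """Inverse-distance covariance between two atomic configurations.

    x1, x2           : (N, 3) Cartesian coordinates of the two configurations
    pairs            : list of (i, j) atom-index pairs entering the sum
    length_scales    : per-pair length scales l_phi, same order as pairs
    sigma_c, sigma_m : constant-prior and magnitude hyperparameters
    """
    d = 0.0
    for (i, j), l in zip(pairs, length_scales):
        inv_r1 = 1.0 / np.linalg.norm(x1[i] - x1[j])
        inv_r2 = 1.0 / np.linalg.norm(x2[i] - x2[j])
        d += (inv_r1 - inv_r2) ** 2 / l ** 2   # inverse distance difference measure
    return sigma_c ** 2 + sigma_m ** 2 * np.exp(-0.5 * d)

# Illustrative call: one moving atom (index 0) paired with two frozen atoms.
x_a = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
x_b = x_a + np.array([[0.1, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
k = inverse_distance_kernel(x_a, x_b, pairs=[(0, 1), (0, 2)],
                            length_scales=np.array([1.0, 1.0]),
                            sigma_c=0.1, sigma_m=1.0)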
As shown in equation <ref>, index i runs over the moving atoms A_m and index j runs over the other moving atoms and the frozen atoms A_f. Therefore, atom pairs are only defined between moving atoms and the rest of the moving and frozen atoms. This construct reduces the total number of pairs (i.e., the size of the vector 𝐥_ϕ) from 1/2 N×(N-1) to 1/2 N_m×(N_m-1)+N_m× N_f, where N, N_m, and N_f denote the number of all atoms, moving atoms, and frozen atoms, respectively. In the examples of this study, we define the diffusing atom to be the moving atom (N_m=1), and the frozen atoms are those confined in a sphere of radius r_f centered around the moving atom. We call this spherical region the active region. We examine the effect of different r_f values on GPR performance in section <ref>. The partitioning of the atomic system into moving and frozen atoms is especially advantageous in reducing the number of degrees of freedom (i.e., the number of atomic pairs, or the size of the 𝐥_ϕ vector) for the solid-state phases in our study. Additionally, by only including the atomic pair distances between a moving atom and frozen atoms in the covariance function, we inform the GPR model with the most physically important atomic pairs. In other words, the atomic pair distances formed between non-moving atoms carry less physically significant information in describing the potential energy surface. This physical knowledge embedded into the construct of the covariance function helps the model learn the energy surface more effectively. GPR Training & Prediction. For training the GPR, we adopt a multi-task learning approach <cit.> as implemented in GPyTorch <cit.>, which enables simultaneous learning of energy and forces by sharing information across the prediction tasks. As we show in section <ref>, multi-task learning outperforms derivative observation learning by reducing the GPR training time and error and enhancing its performance and robustness. Through multi-task GPR <cit.>, the inter-task dependencies are learned based solely on the task identities and the observed data for each task, unlike derivative observation GPR, which explicitly enforces the dependence between forces and energy values by equating forces to the negative derivative of energy. For details of the derivative observation GPR learning approach, see equations 28 and 29 of Ref. <cit.>. For multi-task GPR learning, a shared covariance function between tasks t_1 and t_2 is defined for two inputs 𝐱 and 𝐱^' as <cit.>: k([𝐱, t_1],[𝐱^', t_2]) = k_input(𝐱, 𝐱^') × k_task(t_1, t_2), where k_input is the inverse distance covariance function defined in equation <ref> and k_task is the inter-task similarity measure describing the correlation between tasks. In this study, the related tasks are the prediction of the energy and the atomic force components. The prediction of each force component is a separate task; thus, the total number of tasks is 1+3N for N atoms in the system. Following the multi-task learning approach of Ref. <cit.>, k_task is defined as a “free-form” task-similarity matrix instead of a parametric covariance function. Specifically, k_task is defined as a positive semi-definite matrix which is approximated by an incomplete-Cholesky decomposition of rank P.
Here, we use rank 1 for approximating k_task, resulting in only one additional trainable parameter of the GPR. More details about the parameterization of the task-similarity matrix are given in Ref. <cit.>. Given a set of M tasks and D training data points (or observations), the shared covariance matrix 𝐊∈ℝ^DM × DM can be expressed as the Kronecker product of the input covariance matrix 𝐊_input∈ℝ^D × D and the task covariance matrix 𝐊_task∈ℝ^M × M: 𝐊 = 𝐊_task⊗𝐊_input. Here, the D distinct observations constitute the training data set {𝐗,𝐘}. 𝐗 consists of the input atomic configurations, 𝐗={𝐱_1,...,𝐱_D}, and 𝐘 consists of the DFT-evaluated energy and force components at 𝐗, 𝐘 = (E_1,..., E_D, 𝐟_1,...,𝐟_D)^T, where E and 𝐟=[f_1,1,f_1,2,f_1,3,...,f_N,1,f_N,2,f_N,3] are the energy and force vector for each input atomic configuration, respectively. The set of trainable parameters θ={σ_c,σ_m,𝐥_ϕ} and the single parameter of the matrix 𝐊_task are optimized by maximizing the log marginal likelihood over the shared covariance function <cit.>: argmax_{θ, 𝐊_task} ℒ, with ℒ = -1/2 𝐘^T (𝐊 + σ^2_n 𝐈)^-1 𝐘 - 1/2 log|𝐊 + σ^2_n 𝐈| - D/2 log 2π. Here, 𝐊 is the shared covariance matrix of equation <ref>, 𝐈 is the identity matrix, and σ^2_n is the random noise variance, which we set to 10^-4. The GP approximation for the energy or each force component is then obtained as the mean prediction at a new data point x^* for task l using the posterior distribution conditional on the optimized parameters: f_l(x^*) = (k_task^l ⊗ k_input^*)(𝐊 + σ^2_n 𝐈)^-1 𝐘, where k_task^l denotes the l^th column of 𝐊_task and k_input^* is the vector of covariances between the query point x^* and the training points. §.§ GPR-accelerated dimer In this study, we use the dimer saddle point search method as detailed in Ref. <cit.>. A dimer consists of a pair of auxiliary points (or images) in the 3N-dimensional atomic configuration space, separated by a fixed distance of 0.1 Å. Each dimer iteration is divided into a set of rotation steps and a translation step. During the rotation steps, the dimer is rotated around its midpoint to find the orientation that gives the lowest total energy of the two images. This gives the direction of the lowest curvature mode, or the minimum mode. The dimer is then translated by reversing the force components in the minimum mode direction, multiplied by a step size of 0.1 Å. Details of the dimer algorithm used in this work are presented in Ref. <cit.>. We use Algorithm B1 and Algorithm B3 of Ref. <cit.>, respectively, for rotation and translation of the dimer. For a GPR-guided dimer, the GPR approximations of the energy and force components (according to equation <ref>) are used to provide the forces acting on the images of the dimer during rotation and translation. The rotational force F_R is then defined from the atomic forces projected onto the two images of the dimer. Rotational steps are carried out until F_R falls below a threshold (0.1 eV/Å) or a maximum number of rotations is performed. The maximum number of rotations is set to 5 for a standard DFT-dimer and to 25 for a GPR-guided dimer. We use the conjugate gradient algorithm for determining the rotational plane of the dimer. As explained in the previous section, the GPR model is updated (retrained) after every n_h translation steps by using an expanded set of DFT-calculated training data points. The final convergence of the dimer to the saddle point is achieved when the maximum atomic force approximated by the GPR is below 0.01 eV/atom. An accurate DFT-calculated force at the final point of the dimer is used to confirm the convergence.
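For readers unfamiliar with the multi-task setup, the sketch below shows a minimal GPyTorch multitask exact GP in the spirit of the formulation above, with an RBF kernel standing in for the custom inverse-distance kernel of equation <ref> and a rank-1 task covariance. The descriptor construction and data loading are omitted, and the tensors shown are random placeholders; this is an assumed, simplified setup rather than the code used in this work.

import torch
import gpytorch

class MultitaskGPModel(gpytorch.models.ExactGP):
    """Energy and force components learned jointly through a shared covariance
    K = K_task (Kronecker) K_input, with a rank-1 free-form task covariance."""
    def __init__(self, train_x, train_y, likelihood, num_tasks):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.MultitaskMean(
            gpytorch.means.ConstantMean(), num_tasks=num_tasks)
        # RBF is a stand-in for the inverse-distance kernel described above.
        self.covar_module = gpytorch.kernels.MultitaskKernel(
            gpytorch.kernels.RBFKernel(), num_tasks=num_tasks, rank=1)

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)

# Placeholder data: D observations of a d-dimensional descriptor, and
# num_tasks targets per observation (1 energy + 3N force components).
D, d, num_tasks = 10, 6, 4
train_x = torch.randn(D, d)
train_y = torch.randn(D, num_tasks)

likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=num_tasks)
model = MultitaskGPModel(train_x, train_y, likelihood, num_tasks)

model.train(); likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(100):                  # maximize the log marginal likelihood
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

model.eval(); likelihood.eval()
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    pred = likelihood(model(torch.randn(2, d)))   # posterior mean: pred.mean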
§ RESULTS §.§ Validation: 2-Dimensional Sinusoidal Potential Model We first validate the GPR-dimer method of this work, as detailed in section <ref>, on a toy potential model of sinusoidal form. The model has the functional form z = -sin(πx) sin(πy), where z denotes the potential energy value and x and y constitute the two coordinates of the input, mimicking the atomic coordinates in a 2-dimensional space. We initiate the GPR-dimer from the minimum on the potential surface at x=-0.5 and y=-0.5. The GPR-approximated energy surface z is updated (or retrained) after each dimer translation step (i.e., n_h=0). The training set is expanded by a new data point, z_i(x_i,y_i), at each dimer translation step, where (x_i,y_i) specify the 2D atomic coordinates of the new dimer location, and then the GPR is trained on the new training data. The threshold for training of the GPR is for the mean absolute error (MAE) of the force and energy to drop below 0.01 eV/Å and 0.01 eV, respectively. Figure <ref> illustrates the evolution of the GPR energy surface, z, and the dimer walker location at different dimer translation steps. Fig. <ref> also shows the decrease of the force magnitude, |F| (i.e., |F|=√(F_x^2+F_y^2), where F_x=-∂z/∂x and F_y=-∂z/∂y), as the GPR-dimer progresses toward the saddle point. The GPR-dimer reaches the saddle point at x=0 and y=0 after 13 translation steps, with a total number of 60 dimer rotations. For the 2D potential model, we use the radial basis covariance function (RBF), as implemented in GPyTorch <cit.>, and the derivative-observation learning approach <cit.>. §.§ Validation: Self-Diffusion in bcc Mo To validate that the GPR-dimer method can successfully identify the transition state of a solid-state process, we apply it to calculate the energy barrier for a vacancy diffusive hop in the bcc phase of Mo. We initiate the GPR-dimer walker at an atomic configuration that is a linear interpolation between the initial state (a local minimum state), where a bcc lattice site is vacant, and the final state (a symmetrically-equivalent local minimum state), where the vacant bcc site has hopped to the nearest neighbor. The input configuration is a weighted average of the atomic coordinates with a 3/4 contribution from the initial state and a 1/4 contribution from the final state (see Figure <ref>(b)). Supplementary Note 1 examines the GPR-dimer application using the local minimum configuration as the initial input. Given the input configuration, we perform a standard DFT-dimer for two translation steps to provide the data points for training the GPR. A total of three atomic configurations are used to train the GPR (n_i=3). The GPR-dimer is then launched to locate the saddle point. We use the inverse distance covariance function with an active region of size r_f=3 Å. For training the GPR, we employ multi-task learning as detailed in section <ref>. The training data set is expanded by an additional atomic configuration at every 10^th translation step of the dimer (i.e., n_h=10), followed by an update (or retraining) of the GPR. The criterion for reaching the saddle point is that the total force magnitude falls below 0.01 eV/Å, at which point the GPR-dimer stops. The total force magnitude is calculated as F=√(∑_i=1^N (F_i1^2+F_i2^2+F_i3^2)).
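The interpolated starting configuration and the convergence check just described amount to a few lines of array arithmetic; a minimal sketch with placeholder coordinate and force arrays is shown below.

import numpy as np

def interpolated_start(x_initial, x_final, w=0.75):
    """Weighted average of two (N, 3) coordinate arrays: 3/4 initial state and
    1/4 final state, used as the launch point near the saddle."""
    return w * x_initial + (1.0 - w) * x_final

def total_force_magnitude(forces):
    """F = sqrt(sum_i (F_i1^2 + F_i2^2 + F_i3^2)) for an (N, 3) force array."""
    return np.sqrt(np.sum(forces ** 2))

# Placeholder arrays standing in for DFT-relaxed endpoint coordinates (in Angstrom).
x_init = np.random.rand(54, 3) * 9.5
x_fin = x_init.copy()
x_fin[0] += np.array([1.57, 1.57, 1.57])   # illustrative nearest-neighbor hop along [111]
x_start = interpolated_start(x_init, x_fin)

forces = np.random.rand(54, 3) * 0.005     # placeholder forces (eV/Angstrom)
converged = total_force_magnitude(forces) < 0.01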
The energy difference between the final step of the GPR-dimer (or the transition state) and the initial state of the vacancy hop (the local minimum) is calculated to provide the energy barrier for the vacancy diffusive hop (or the enthalpy of vacancy migration), Δ H_m. Energies of the transition and local minimum states are both calculated using DFT. The calculated enthalpy of vacancy migration is equal to 1.34 eV (see Figure <ref>(b)), which is in good agreement with our previous calculation using NEB <cit.>. Using the calculated enthalpy of vacancy migration, Δ H_m, we validate the diffusion coefficient of bcc Mo as a function of temperature against experimental measurements <cit.>. Figure <ref>(a) shows the diffusion coefficient calculated from the saddle point located on the energy surface by the GPR-dimer method, in comparison with experimental results <cit.>. We calculate the self-diffusion coefficient for monovacancy diffusive jumps according to D=C_v d^2 Γ, where d is the vacancy (or atom) jump distance, C_v is the equilibrium vacancy concentration, and Γ is the successful vacancy jump rate. The vacancy jump distance in bcc is equal to the nearest-neighbor distance, (√3/2) a_0, where a_0 is the lattice constant. The vacancy concentration at temperature T is given by C_v = exp(Δ S_f/k_B) exp(-Δ H_f/k_B T), where Δ H_f and Δ S_f are the formation enthalpy and entropy of the vacancy, respectively, and k_B is the Boltzmann constant. We obtain the DFT-calculated values of a_0 and C_v from our previous results in Ref. <cit.>. The vacancy jump rate Γ is obtained from the migration enthalpy, Δ H_m, and the effective vibration frequency along the migration path, ν^*, by Γ = ν^* exp(-Δ H_m/k_B T). Here, Δ H_m is the vacancy migration energy barrier calculated according to the saddle point located by the GPR-dimer method. We obtain the DFT-calculated ν^* from Ref. <cit.>, which calculates ν^* as the ratio of the product of the normal vibration frequencies of the initial state of atomic migration, ν_i, to that of the non-imaginary normal frequencies of the transition state, ν^'_j, i.e., ν^* = ∏^3N-3_i=1ν_i/∏^3N-4_j=1ν^'_j. Alternatively, we estimate ν^* to be equal to the Debye frequency, ν^*≈ν_D, which is calculated from the Debye temperature Θ_D as ν_D=Θ_D k_B/h, where h is Planck's constant.
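Putting these pieces together, the short sketch below evaluates D(T) = C_v d^2 Γ over a temperature range using the migration barrier Δ H_m = 1.34 eV obtained here; the lattice constant, vacancy formation enthalpy and entropy, and attempt frequency are illustrative placeholder values standing in for the DFT-calculated inputs taken from the cited references, so the printed numbers are only indicative of the workflow.

import numpy as np

K_B = 8.617333e-5   # Boltzmann constant (eV/K)

def self_diffusion_coefficient(T, dH_m=1.34, a0=3.15e-10,
                               dH_f=3.0, dS_f=2.0, nu_star=1.0e13):
    """Monovacancy-mediated self-diffusion coefficient D = C_v * d^2 * Gamma.

    T       : temperature(s) in K
    dH_m    : migration enthalpy (eV), 1.34 eV from the GPR-dimer saddle point
    a0      : bcc lattice constant (m) -- placeholder value
    dH_f    : vacancy formation enthalpy (eV) -- placeholder value
    dS_f    : vacancy formation entropy in units of k_B -- placeholder value
    nu_star : effective attempt frequency (1/s) -- placeholder value
    """
    d = np.sqrt(3.0) / 2.0 * a0                     # nearest-neighbor jump distance
    c_v = np.exp(dS_f) * np.exp(-dH_f / (K_B * T))  # equilibrium vacancy concentration
    gamma = nu_star * np.exp(-dH_m / (K_B * T))     # successful jump rate
    return c_v * d ** 2 * gamma                     # m^2/s

temperatures = np.array([1800.0, 2200.0, 2600.0])
print(self_diffusion_coefficient(temperatures))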
§.§ Validation: Sulfur Diffusion in hex MoS2 To further validate the accuracy of the GPR-dimer method in identifying transition states in solid-state processes, we apply it to calculate the energy barrier for monovacancy-sulfur diffusive jumps in hexagonal MoS2 (P6_3/mmc space group, with 2b and 4f Wyckoff positions for Mo and S, respectively). The input atomic coordinates to the GPR-dimer are linearly interpolated with a 3/4 contribution from the local minimum configuration, where one sulfur site is vacant, and a 1/4 contribution from the final state, where the vacancy and the nearest sulfur have exchanged positions (see Figure <ref> for the input configuration). Subsequently, we launch the GPR-dimer method to locate the saddle point along the diffusion pathway. As in the bcc Mo example, we use the first three atomic configurations from the translation steps of the DFT-dimer to train the GPR (i.e., n_i=3), and we subsequently update (or retrain) the GPR after every 10 translation steps (i.e., n_h=10). We use an active region of radius r_f=5 Å centered at the diffusing sulfur atom for the inverse-distance covariance function of equation <ref>. Figure <ref>(a) shows the evolution of the total force magnitude of the atomic configuration at different translation steps of the GPR-dimer. Once the total force is below 0.01 eV/Å, we stop the GPR-dimer. Figure <ref>(b) shows the displacement of the diffusing atom projected on the [100] and [010] directions during the GPR-dimer evolution. The energy of the output of the GPR-dimer (or the transition state) and the local minimum configuration are calculated using DFT. The calculated energy difference between these two configurations equals 2.3 eV, which is in good agreement with the DFT-NEB calculation of Ref. <cit.>. §.§ GPR-Dimer Performance Analysis In this section, we examine the impact of various parameters on the performance of the GPR-dimer method introduced in this study. These parameters include: 1) learning approach or GPR training method, 2) energy surface sampling frequency or number of dimer translation hops between DFT calculations, 3) size of the active region containing frozen atoms in the inverse-distance covariance function (Equation <ref>), and 4) number of preceding DFT-sampled data points included in the GPR training set at each retraining step. We showcase our investigations for the case of a monovacancy diffusive jump in bcc Mo. To evaluate the impact of different GPR learning approaches, we compare multi-task learning (explained in Section <ref>) with the commonly used derivative-observation learning. Figure <ref> illustrates the learning curves for multi-task learning and derivative-observation learning. The GPR serves as the initial surrogate model for the energy surface, and the training data consist of n_i consecutive DFT-calculated dimer translation steps starting from the initial atomic configuration (as detailed in Section <ref>). The mean absolute errors (MAE) for energy and force predictions are presented over the training epoch for various numbers of input training data, n_i. The multi-task learning method demonstrates superior learning performance in terms of prediction accuracy and stability compared to derivative-observation learning. The multi-task learned GPR consistently exhibits lower MAE across all epochs and input data sizes. Furthermore, the MAE remains low and relatively stable for different input data sizes, indicating the robustness of multi-task learning and its lower sensitivity to observations (or training data). In contrast, the GPR trained with derivative-observation learning shows an increase in MAE as the number of input data points increases. Specifically, the MAE for energy prediction starts to rise significantly above 1 eV after around 200 epochs, while the MAE for force prediction remains low. This behavior is likely attributed to over-fitting of the derivative-observation GPR to a single task. The explicit relationship between forces and energies in derivative-observation learning constrains the optimization process, making it prone to issues such as over-fitting to forces in this particular example. On the other hand, multi-task learning employs an implicit regularization effect, mitigating the risk of over-fitting to a single task. By simultaneously learning multiple correlated tasks, the model captures common latent features that generalize better to new data. We assess the impact of the number of translation hops, denoted as n_h, in the translation-hop sampling approach described in Section <ref>, on the performance of the GPR-dimer. In Figure <ref>, we present the behavior of the GPR-dimer for different values of n_h (n_h=0,1,3,5,10).
The figure illustrates the DFT-calculated total atomic force magnitude and the force magnitude of the diffusing atom as a function of the dimer translation step. The force values are only displayed for the dimer translation steps where the GPR is updated, corresponding to the atomic configurations associated with the DFT-calculated steps. The GPR-predicted forces at the intermediate steps between DFT sampling points are not shown. Based on our examination, we observe that the GPR-dimer with zero, one, and three translation hops failed to locate the saddle point. In contrast, the GPR-dimer with five and ten translation hops successfully converged to the saddle point. This observation provides valuable insights into how the sampling frequency along the dimer path influences the GPR surrogate model, striking a balance between a localized and global representation of the energy surface. For zero, one, and three translation hops, the GPR becomes excessively influenced by the energy surface in the vicinity of the dimer path. Consequently, it overfits to the minimum-mode following path while neglecting the broader energy landscape. On the other hand, delaying the sampling by five or ten translation hops enables the GPR to capture a more balanced representation, encompassing both the vicinity of the path and the wider energy surface shape. In other words, the translation-hop sampling approach provides the opportunity to balance exploration and exploitation through tuning the n_h parameter. It is worth noting that in the case of a 2D potential model (depicted in Figure <ref>), no translation hops are required. The GPR demonstrates robustness against overfitting due to the low dimensionality of the input space of the covariance function. To further investigate the influence of translation-hop sampling frequency on the GPR model, we present the energy profile of the GPR-dimer for different translation hops in Figure <ref>. The minimum energy pathway, obtained from NEB calculations implemented in VTST <cit.>, is shown as a reference. The movement of the GPR-dimer is projected along the minimum energy path direction (or along the [111] lattice direction), which serves as the x-axis in Figure <ref>. In the case of zero hops (Figure <ref>(a)), the GPR-dimer bypasses the saddle point and explores high-ridge regions of the energy surface. This is because the GPR is over-fitted to the walker pathway and most likely to the noise in the initial dimer walker oscillations, which results in misguiding the walker. For one or three hops (Figure <ref>(b) and (c)), the walker goes back and forth between lower and higher energy regions but fails to locate the saddle point. In contrast, for five and ten hops (Figure <ref>(d) and (e)), the GPR-dimer walker deviates from the NEB path at the beginning, as the GPR's accuracy in predicting energies near the pathway is reduced. The walker takes larger steps and explores a diverse range of points on the energy surface, eventually converging to the saddle point as it progresses and incorporates more sampled data points. Supplementary Figure S1 illustrates a 2D projection of the GPR-dimer trajectory on the high-dimensional energy surface for different translation hops. We investigate the impact of the active region radius, denoted as r_f (Section <ref>), on the overall performance of the GPR-dimer. Increasing the active region radius results in more frozen atoms in the inverse distance covariance function (equation  <ref>). 
This leads to more pairs between the moving atom (or the diffusing atom) and frozen atoms, providing the GPR with more information about the surrounding atomic configuration. Figure <ref> illustrates the GPR-dimer's behavior for r_f values of 3, 5, and 7 Å (associated with the first, second, and third nearest neighbors of the moving atom, respectively), with a fixed translation hop of n_h=10. The DFT-calculated total atomic force and diffusing atom force are shown as a function of the GRP-dimer step. In all three cases, the GPR-dimer successfully locates the saddle point. However, as shown in Figure <ref>, the force evolution is smoother for r_f=3 Å compared to larger active regions. The force exhibits an early peak and a monotonic decrease as the GPR-dimer progresses towards the saddle point. In contrast, for r_f=5Å and r_f=7Å, the force demonstrates significant oscillations throughout the GPR-dimer progression, with r_f=7Å displaying two force peaks before reaching the saddle point. The smooth evolution of the GPR-dimer observed at r_f=3 Å provides valuable insights into achieving an optimal balance between the number of atomic pairs incorporated in the GPR's covariance function and the captured physical information. By setting r_f=3 Å, the covariance function effectively captures variations in the pair distance between the diffusing atom and its first nearest neighbors throughout the progression of the GPR-dimer. Constraining the pair distance information to the first nearest neighbors proves to be the most effective approach for constructing a surrogate model. This is because the pair distances among the nearest neighbors contain the most relevant physical information while maintaining a relatively low number of pairs. Consequently, this results in a smaller size of the 𝐥_ϕ vector, reducing the risk of overfitting and improving the model's performance. Lastly, we investigate the influence of the GPR-dimer's training data history on its performance. The GPR-dimer's tail size is defined as the last n_t translation steps preceding the current step, which are used as training data for updating the GPR surrogate model. We consider three tail sizes: n_t=5, n_t=10, and n_t including all preceding DFT-sampled steps. The translation hop is set to n_h=10, indicating 10 dimer translations between consecutive DFT-sampled steps. The active region radius is fixed at 3 Å (r_f=3 Å). Figure <ref> presents the GPR-dimer evolution for different tail sizes n_t. The DFT-calculated total atomic force magnitude and diffusing atom force magnitude are shown as a function of the GPR-dimer progression. The GPR-dimer successfully locates the saddle point only when all preceding DFT-sampled steps are included in the training set. This observation aligns with our previous analysis of the translation-hop frequency. Limiting the GPR training data to the last 5 or 10 preceding steps results in a surrogate model representing a local view of the energy surface, causing the walker to bypass the saddle point and move towards high-energy ridges. This is evident from the large force magnitudes observed in Figure <ref> (a) and (b). Conversely, updating the GPR using the entire history of the dimer walker allows the model to capture a broader view of the energy surface. Our examination reveals that excluding atomic configurations with large repulsive forces (force peaks in the early steps of the GPR-dimer) prevents the GPR from gaining a comprehensive understanding of the energy surface. 
Therefore, updating the GPR with a diverse set of low- and high-energy points sampled along the walk is necessary for the successful identification of the saddle point by the GPR-dimer. §.§ Assessing Computational Efficiency In order to evaluate the potential computational efficiency gains achievable by employing a GPR surrogate model with the dimer method, we compare the computational efforts required to locate the saddle point for self-diffusion in bcc Mo using three different approaches: the standard DFT-dimer as implemented in our work, the GPR-dimer as implemented in our work, and the DFT-dimer implementation of the Transition State Tools for VASP (VTST) <cit.>. Table <ref> presents a comparison of the total number of DFT calculations of energy and forces necessary for each method to converge to the saddle point. In both standard dimer methods (our implementation and the VTST code), DFT calculations are performed during both the translation and rotation steps. However, for the GPR-dimer, forces are obtained from the GPR surrogate model during rotations, and DFT force calculations are only carried out every five dimer translation steps, when the GPR is updated (n_h=5). The GPR surrogate model is initially trained on three DFT-calculated configurations (i.e., n_i=3). Table <ref> also includes the number of core-hours required for the DFT calculations by each method, conducted on 2 AMD EPYC 7742 CPUs with 64 cores. As illustrated in Table <ref>, the GPR-dimer approach entails 44 DFT calculations (equivalent to 17.89 core hours), compared to 54 (20.81 core hours) in our implementation of the standard DFT-dimer. Consequently, utilizing the GPR as a surrogate model reduces the computational effort by approximately 15%. The VTST implementation, on the other hand, necessitates 60 DFT calculations (equivalent to 23.1 core hours). The slightly higher computational effort associated with the VTST implementation, compared to our implementation of the standard dimer, is likely due to the more efficient conjugate gradient algorithm employed in our work. Specifically, while the VTST code utilizes the original conjugate gradient (CG) algorithm <cit.>, our code adopts another version of CG <cit.>. Supplementary Tables S1 and S2 present the computational effort for the GPR-dimer method for different translation hops, n_h, and active region radii, r_f, respectively. § CONCLUSION We present a methodology that leverages Gaussian process regression (GPR) to develop a surrogate model for the ab initio energy surface. By integrating the dimer method with GPR in an iterative feedback loop, we simultaneously sample the energy surface and converge to the saddle point. The versatility of our proposed GPR-dimer method is demonstrated through its successful application in identifying transition states of vacancy-mediated diffusion in both bcc molybdenum and hexagonal molybdenum disulfide. Our results indicate the promising potential of the GPR-dimer method in enhancing the efficiency of saddle point search in solid-state materials characterized by a large number of atoms. To establish a robust and computationally efficient GPR-dimer scheme, we introduced two key components: 1) multi-task GPR learning and 2) translation-hop sampling of training data. The translation-hop sampling approach proves to be essential in striking a delicate balance between exploration and exploitation of the ab initio energy surface during the search for the saddle point.
This approach enables effective utilization of the available training data while efficiently exploring the energy landscape. Furthermore, by applying the GPR-dimer method to solid-state materials with a high degree of atomic freedom, our findings offer valuable strategies to tackle the challenge of high-dimensionality when employing GPR. In summary, our methodology showcases the potential of GPR-dimer as a powerful tool for enhancing saddle point search in solid-state materials. By integrating GPR with the dimer method and incorporating novel strategies, we pave the way for more efficient exploration of complex energy landscapes in the search for transition states. § ACKNOWLEDGEMENT This work was supported by the US National Science Foundation Award No. DMR-1954621. This work used Bridges2 at Pittsburgh Supercomputing Center (PSC) through allocation MAT200013 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. 34 urlstyle [Eyring(1935)]Eyring1935Activated Henry Eyring. The Activated Complex in Chemical Reactions. The Journal of Chemical Physics, 30 (2):0 107–115, 1935. ISSN 0021-9606. 10.1063/1.1749604. URL <https://doi.org/10.1063/1.1749604>. [Vineyard(1957)]VINEYARD1957121 George H. Vineyard. Frequency factors and isotope effects in solid state rate processes. Journal of Physics and Chemistry of Solids, 30 (1):0 121–127, 1957. ISSN 0022-3697. https://doi.org/10.1016/0022-3697(57)90059-8. URL <https://www.sciencedirect.com/science/article/pii/0022369757900598>. [Wynne‐Jones and Eyring(1935)]Jones2001Absolute W. F. K. Wynne‐Jones and Henry Eyring. The Absolute Rate of Reactions in Condensed Phases. The Journal of Chemical Physics, 30 (8):0 492–502, 1935. ISSN 0021-9606. 10.1063/1.1749713. URL <https://doi.org/10.1063/1.1749713>. [Kadkhodaei and van de Walle(2019)]Kadkhodaei2018Simple S. Kadkhodaei and A. van de Walle. A simple local expression for the prefactor in transition state theory. The Journal of Chemical Physics, 1500 (14), 04 2019. ISSN 0021-9606. 10.1063/1.5086746. URL <https://doi.org/10.1063/1.5086746>. 144105. [JÓNSSON et al.(1998)JÓNSSON, MILLS, and JACOBSEN]HANNESNudged HANNES JÓNSSON, GREG MILLS, and KARSTEN W. JACOBSEN. Nudged elastic band method for finding minimum energy paths of transitions, pages 385–404. 1998. 10.1142/9789812839664_0016. URL <https://www.worldscientific.com/doi/abs/10.1142/9789812839664_0016>. [Ren and Vanden-Eijnden(2013)]ren2013climbing Weiqing Ren and Eric Vanden-Eijnden. A climbing string method for saddle point search. The Journal of Chemical Physics, 1380 (13):0 134105, 04 2013. ISSN 0021-9606. 10.1063/1.4798344. URL <https://doi.org/10.1063/1.4798344>. [Henkelman et al.(2000)Henkelman, Uberuaga, and Jónsson]Henkelman2000climbing Graeme Henkelman, Blas P. Uberuaga, and Hannes Jónsson. A climbing image nudged elastic band method for finding saddle points and minimum energy paths. The Journal of Chemical Physics, 1130 (22):0 9901–9904, 12 2000. ISSN 0021-9606. 10.1063/1.1329672. URL <https://doi.org/10.1063/1.1329672>. [Henkelman and Jónsson(1999)]Henkelman1999dimer Graeme Henkelman and Hannes Jónsson. A dimer method for finding saddle points on high dimensional potential surfaces using only first derivatives. The Journal of Chemical Physics, 1110 (15):0 7010–7022, 10 1999. ISSN 0021-9606. 10.1063/1.480097. URL <https://doi.org/10.1063/1.480097>. 
[Caspersen and Carter(2005)]Caspersen2005Finding Kyle J. Caspersen and Emily A. Carter. Finding transition states for crystalline solid–solid phase transformations. Proceedings of the National Academy of Sciences, 1020 (19):0 6738–6743, 2005. 10.1073/pnas.0408127102. URL <https://www.pnas.org/doi/abs/10.1073/pnas.0408127102>. [Sheppard et al.(2008)Sheppard, Terrell, and Henkelman]Sheppard2008Optimization Daniel Sheppard, Rye Terrell, and Graeme Henkelman. Optimization methods for finding minimum energy paths. The Journal of Chemical Physics, 1280 (13):0 134106, 04 2008. ISSN 0021-9606. 10.1063/1.2841941. URL <https://doi.org/10.1063/1.2841941>. [Henkelman and Jónsson(2000)]Henkelman2000Improved Graeme Henkelman and Hannes Jónsson. Improved tangent estimate in the nudged elastic band method for finding minimum energy paths and saddle points. The Journal of Chemical Physics, 1130 (22):0 9978–9985, 12 2000. ISSN 0021-9606. 10.1063/1.1323224. URL <https://doi.org/10.1063/1.1323224>. [Zeng et al.(2014)Zeng, Xiao, and Henkelman]Zeng2014Unification Yi Zeng, Penghao Xiao, and Graeme Henkelman. Unification of algorithms for minimum mode optimization. The Journal of Chemical Physics, 1400 (4), 01 2014. ISSN 0021-9606. 10.1063/1.4862410. URL <https://doi.org/10.1063/1.4862410>. 044115. [Olsen et al.(2004)Olsen, Kroes, Henkelman, Arnaldsson, and Jónsson]Olsen2004Comparison R. A. Olsen, G. J. Kroes, G. Henkelman, A. Arnaldsson, and H. Jónsson. Comparison of methods for finding saddle points without knowledge of the final states. The Journal of Chemical Physics, 1210 (20):0 9776–9792, 11 2004. ISSN 0021-9606. 10.1063/1.1809574. URL <https://doi.org/10.1063/1.1809574>. [Xiao et al.(2014)Xiao, Sheppard, Rogal, and Henkelman]Xiao2014Solid-state-dimer Penghao Xiao, Daniel Sheppard, Jutta Rogal, and Graeme Henkelman. Solid-state dimer method for calculating solid-solid phase transitions. The Journal of Chemical Physics, 1400 (17), 05 2014. ISSN 0021-9606. 10.1063/1.4873437. URL <https://doi.org/10.1063/1.4873437>. 174104. [Plasencia Gutiérrez et al.(2017)Plasencia Gutiérrez, Argáez, and Jónsson]Plasencia2017Improved Manuel Plasencia Gutiérrez, Carlos Argáez, and Hannes Jónsson. Improved minimum mode following method for finding first order saddle points. Journal of Chemical Theory and Computation, 130 (1):0 125–134, 2017. 10.1021/acs.jctc.5b01216. URL <https://doi.org/10.1021/acs.jctc.5b01216>. PMID: 27959552. [Koistinen et al.(2017)Koistinen, Dagbjartsdóttir, Ásgeirsson, Vehtari, and Jónsson]Koistinen2017NEB Olli-Pekka Koistinen, Freyja B. Dagbjartsdóttir, Vilhjálmur Ásgeirsson, Aki Vehtari, and Hannes Jónsson. Nudged elastic band calculations accelerated with gaussian process regression. The Journal of Chemical Physics, 1470 (15):0 152720, 2017. 10.1063/1.4986787. URL <https://doi.org/10.1063/1.4986787>. [Denzel and Kästner(2018a)]Denzel2018Gaussian Alexander Denzel and Johannes Kästner. Gaussian process regression for transition state search. Journal of Chemical Theory and Computation, 140 (11):0 5777–5786, 2018a. 10.1021/acs.jctc.8b00708. URL <https://doi.org/10.1021/acs.jctc.8b00708>. PMID: 30351931. [Denzel et al.(2019)Denzel, Haasdonk, and Kästner]Denzel2019Gaussian Alexander Denzel, Bernard Haasdonk, and Johannes Kästner. Gaussian process regression for minimum energy path optimization and transition state search. The Journal of Physical Chemistry A, 1230 (44):0 9600–9611, 2019. 10.1021/acs.jpca.9b08239. URL <https://doi.org/10.1021/acs.jpca.9b08239>. PMID: 31617719. 
[Koistinen et al.(2020)Koistinen, Ásgeirsson, Vehtari, and Jónsson]Koistinen2019minimum_mode Olli-Pekka Koistinen, Vilhjálmur Ásgeirsson, Aki Vehtari, and Hannes Jónsson. Minimum mode saddle point searches using gaussian process regression with inverse-distance covariance function. Journal of Chemical Theory and Computation, 160 (1):0 499–509, 2020. 10.1021/acs.jctc.9b01038. URL <https://doi.org/10.1021/acs.jctc.9b01038>. PMID: 31801018. [Koistinen et al.(2019)Koistinen, Ásgeirsson, Vehtari, and Jónsson]Koistinen2019Nudged Olli-Pekka Koistinen, Vilhjálmur Ásgeirsson, Aki Vehtari, and Hannes Jónsson. Nudged elastic band calculations accelerated with gaussian process regression based on inverse interatomic distances. Journal of Chemical Theory and Computation, 150 (12):0 6738–6751, 2019. 10.1021/acs.jctc.9b00692. URL <https://doi.org/10.1021/acs.jctc.9b00692>. PMID: 31638795. [Denzel and Kästner(2020)]Denzel2020Hessian Alexander Denzel and Johannes Kästner. Hessian matrix update scheme for transition state search based on gaussian process regression. Journal of Chemical Theory and Computation, 160 (8):0 5083–5089, 2020. 10.1021/acs.jctc.0c00348. URL <https://doi.org/10.1021/acs.jctc.0c00348>. PMID: 32609514. [Baker and Chan(1996)]Baker1996location Jon Baker and Fora Chan. The location of transition states: A comparison of cartesian, z-matrix, and natural internal coordinates. Journal of Computational Chemistry, 170 (7):0 888–904, 1996. https://doi.org/10.1002/(SICI)1096-987X(199605)17:7<888::AID-JCC12>3.0.CO;2-7. [Solak et al.(2002)Solak, Murray-smith, Leithead, Leith, and Rasmussen]SolakAdvances2002 E. Solak, R. Murray-smith, W. Leithead, D. Leith, and Carl Rasmussen. Derivative observations in gaussian process models of dynamic systems. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15. MIT Press, 2002. URL <https://proceedings.neurips.cc/paper_files/paper/2002/file/5b8e4fd39d9786228649a8a8bec4e008-Paper.pdf>. [Denzel and Kästner(2018b)]Denzel2018geometry Alexander Denzel and Johannes Kästner. Gaussian process regression for geometry optimization. The Journal of Chemical Physics, 1480 (9), 03 2018b. ISSN 0021-9606. 10.1063/1.5017103. URL <https://doi.org/10.1063/1.5017103>. 094114. [Kresse and Furthmüller(1996)]kresse1996 G. Kresse and J. Furthmüller. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B, 54:0 11169–11186, Oct 1996. 10.1103/PhysRevB.54.11169. URL <https://link.aps.org/doi/10.1103/PhysRevB.54.11169>. [Blöchl(1994)]blochl1994 P. E. Blöchl. Projector augmented-wave method. Phys. Rev. B, 50:0 17953–17979, Dec 1994. 10.1103/PhysRevB.50.17953. URL <https://link.aps.org/doi/10.1103/PhysRevB.50.17953>. [Perdew et al.(1996)Perdew, Burke, and Ernzerhof]perdew1996 John P. Perdew, Kieron Burke, and Matthias Ernzerhof. Generalized gradient approximation made simple. Phys. Rev. Lett., 77:0 3865–3868, Oct 1996. 10.1103/PhysRevLett.77.3865. URL <https://link.aps.org/doi/10.1103/PhysRevLett.77.3865>. [Rasmussen and Williams(2005)]Rasmussen2006 Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 11 2005. ISBN 9780262256834. 10.7551/mitpress/3206.001.0001. URL <https://doi.org/10.7551/mitpress/3206.001.0001>. [Bonilla et al.(2007)Bonilla, Chai, and Williams]Bonilla2008 Edwin V Bonilla, Kian Chai, and Christopher Williams. Multi-task gaussian process prediction. In J. Platt, D. Koller, Y. Singer, and S. 
Roweis, editors, Advances in Neural Information Processing Systems, volume 20. Curran Associates, Inc., 2007. URL <https://proceedings.neurips.cc/paper_files/paper/2007/file/66368270ffd51418ec58bd793f2d9b1b-Paper.pdf>. [Swersky et al.(2013)Swersky, Snoek, and Adams]Swersky2013 Kevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task bayesian optimization. In C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013. URL <https://proceedings.neurips.cc/paper_files/paper/2013/file/f33ba15effa5c10e873bf3842afb46a6-Paper.pdf>. [Gardner et al.(2018)Gardner, Pleiss, Bindel, Weinberger, and Wilson]gardner2018gpytorch Jacob R Gardner, Geoff Pleiss, David Bindel, Kilian Q Weinberger, and Andrew Gordon Wilson. Gpytorch: Blackbox matrix-matrix gaussian process inference with gpu acceleration. In Advances in Neural Information Processing Systems, 2018. [Fattahpour et al.(2022)Fattahpour, Davariashtiyani, and Kadkhodaei]Kadkhodaei2022 Seyyedfaridoddin Fattahpour, Ali Davariashtiyani, and Sara Kadkhodaei. Understanding the role of anharmonic phonons in diffusion of bcc metals. Phys. Rev. Mater., 6:0 023803, Feb 2022. 10.1103/PhysRevMaterials.6.023803. URL <https://link.aps.org/doi/10.1103/PhysRevMaterials.6.023803>. [Herzig and Köhler(1987)]herzig1987 Christian Herzig and U. Köhler. Anomalous self-diffusion in bcc ivb metals and alloys. In Vacancies and Interstitials in Metals and Alloys, volume 15 of Materials Science Forum, pages 301–322. Trans Tech Publications Ltd, 1 1987. 10.4028/www.scientific.net/MSF.15-18.301. [Komsa et al.(2013)Komsa, Kurasch, Lehtinen, Kaiser, and Krasheninnikov]DFT_MoS2 Hannu-Pekka Komsa, Simon Kurasch, Ossi Lehtinen, Ute Kaiser, and Arkady V. Krasheninnikov. From point to extended defects in two-dimensional MoS_2: Evolution of atomic structure under electron irradiation. Phys. Rev. B, 88:0 035301, Jul 2013. 10.1103/PhysRevB.88.035301. URL <https://link.aps.org/doi/10.1103/PhysRevB.88.035301>.
http://arxiv.org/abs/2307.02456v1
20230705173144
Derived Categories of Derived Grassmannians
[ "Qingyuan Jiang" ]
math.AG
[ "math.AG", "math.AC", "math.RT" ]
This paper establishes semiorthogonal decompositions for derived Grassmannians of perfect complexes with Tor-amplitude in [0,1]. This result verifies the author's Quot formula conjecture <cit.> and generalizes and strengthens Toda's result in <cit.>. We give applications of this result to various classical situations such as blowups of determinantal ideals, reducible schemes, and varieties of linear series on curves. Our approach utilizes the framework of derived algebraic geometry, allowing us to work over arbitrary base spaces over . It also provides concrete descriptions of Fourier-Mukai kernels in terms of derived Schur functors. The East-West Asymmetry of Particle Intensity in Energetic Storm Particle Events Dimitris G. Angelakis August 1, 2023 ================================================================================ § INTRODUCTION This paper establishes semiorthogonal decompositions for a broad class of maps _X(; d) → X, where _X(; d) is the relative Grassmannian of a complex over X (<cit.>): [Theorem <ref>] Let d ∈_>0. For any scheme (or more generally, prestack) X over , any perfect complex of Tor-amplitude in [0,1] and rank r ≥ 0 on X, and any type of derived category ∈{, , , }, there is a semiorthogonal decomposition (_X(;d)) = ⟨ri copies of (_X(^∨[1];d-i)) ⟩_0 ≤ i ≤min{r, d}. This semiorthogonal decomposition is induced by faithfully functors Φ^(i,λ) (Notation <ref>) that are explicitly expressed in terms of derived Schur functors applied to universal perfect complexes on the incidence loci, parametrized by Young diagrams λ of height ≤ (r-i) and width ≤ i. This result verifies and generalizes the author's Quot formula conjecture <cit.>. Yukinobu Toda <cit.> has established a version of this theorem[The semiorthogonal decompositions in these two papers have different semiorthogonal orders, but we expect that they differ by a sequence of mutations.] using a different method, the categorified Hall product. His theorem applies to any smooth quasi-projective complex variety X. This paper extends and strengthens Toda's result by removing the assumptions of smoothness and quasi-projectivity on the base X, providing explicit descriptions of the Fourier–Mukai kernels, and including the cases for = ,, and . Our theorem both unifies and generalizes the following important results: * Orlov's projective bundle formula <cit.>. * Kapranov's exceptional collections for Grassmannians and the generalization to Grassmannian bundles <cit.>. * Orlov's blowup formula <cit.>. * The semiorthogonal decompositions for standard flips <cit.>. * Orlov's universal hyperplane section formula <cit.>. * The embedding of derived categories for Grassmannian flips (see <cit.>). * Pirozhkov's formula for total spaces of universal bundles on Grassmannians <cit.>. * The author and Leung's projectivization formula (<cit.>; see also <cit.>). It not only extends these results to arbitrary base X over but also to the stratified situations. We first consider the case where r > d. Assuming for simplicity that is presented by a generically injective map _X^m _X^n, where r=n-m ≥ 1, then: * The map _X(;d) → X is a stratified Grassmannian bundle. The general fibers are Grassmannian varieties _d(r), but the fiber dimension jumps over the degeneracy loci X_j = D_m-j(σ) where the map σ has rank ≤ m-j, for j ≥ 1. * The maps _X(^∨[1]; j) → X on the right-hand side of (<ref>) are (derived) partial resolutions of the degeneracy loci X_j=D_m-j(σ), j ≥ 1. 
Consequently, their derived categories provide noncommutative partial resolutions of X_j's. Therefore, in this case, our theorem extends Kapranov's result to stratified Grassmannian bundles. The formula (<ref>) implies that (_X(;d)) contains rd copies of (X), which corresponds to a family version of Kapranov's exceptional collections for a genuine _d(r)-bundle. The “corrections" are given by the noncommutative partial resolution (_X(^∨[1]; j) of the degeneracy loci X_j, j=1,2,…, d, capturing the contributions arising from the fiber-dimension-jumping behavior of _X(;d) over X_j. Similarly, the case d=r of the theorem extends Orlov's blowup formula <cit.> for blowups along locally complete intersection (l.c.i.) subschemes to non-l.c.i. cases (see Corollary <ref>). In the case where d>r, (_X(;d)) and (_X(^∨[1]; d-r)) are both noncommutative partial resolution of the degeneracy locus X_d-r. We should view _X(;d)) _X(^∨[1]; d-r) as a derived generalization of Grassmannian flip. The formula (<ref>) recovers the embedding of derived categories for this flip, with the orthogonal complement given by noncommutative partial resolutions of higher degeneracy loci X_d-r+j, j=1,2,…,r. Notice that in our proof, the characteristic-zero assumption is only required in Lemma <ref>.(<ref>). Consequently, if we consider cases where the involved Lascoux-type complexes of Lemma <ref>.(<ref>) are characteristic-free (e.g., when d = 1), our theorem's result is characteristic-free. §.§ Classical Applications Due to the complex behavior of map _X(;d) → X, our theorem finds applications to various interesting classical situations: * (Blowup formula for blowups of determinantal ideals <ref>). Let _Z(X) → X be the blowup of a scheme X along a determinantal subscheme of codimension (r+1) considered in <ref>. Then we obtain a semiorthogonal decomposition (_Z(X)) = ⟨⟨rj copies of (X_j) ⟩_1 ≤ j ≤ r ,  (X) ⟩, where X_j are (possibly derived) partial resolutions of the determinantal loci X_j, j=1,…,r; see Corollary <ref>. * (Derived categories for reducible schemes <ref>). In <cit.>, the projectivization formula was used to obtain the following formula for attaching a rational tail ^1 to a smooth point p of a complex curve C: (C _p^1) = ⟨([ε_1]), (C) ⟩, where [ε_1] is the derived ring of dual numbers with (ε_1) = 1 and ε_1^2=0; see also <cit.>. This paper greatly generalizes this result to a large class of reducible schemes (see Corollary <ref>), which includes the central fibers of the deformation-to-normal-cone construction as special cases (Remark <ref>). X = _Z(X) __Z(_Z/X^∨)_Z(_Z/X^∨⊕_Z^∨), where Z ⊆ X is a codimension r l.c.i. subscheme and _Z is any line bundle. Our theorem implies a semiorthogonal decomposition (Corollary <ref>): (X) =⟨( Tot_Z(_Z[-1]) )_-r, ⋯, ( Tot_Z(_Z[-1]))_-1,  (X)_0 ⟩. * (Varieties of Linear Series on Curves). Consider the varieties G_d^r(C) parametrizing linear series of degree d and dimension r on a smooth complex projective curve of genus g ≥ 1 (see <cit.>). Our theorem implies that G_d^r(C) have natural derived enhancements _d^r(C), for which there is a semiorthogonal decomposition (_d^r(C)) = ⟨1-g+di copies of (_2g-2-d^r-i(C)) ⟩_0 ≤ i ≤min{1-g+d, r+1} provided that d ≥ g-1 and r ≥ -1; see Corollary <ref> and <cit.>. This result extends Toda's result for symmetric products of curves in <cit.> (see also <cit.>) and the author's result <cit.> for the case when r=1. 
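To make the connection with the symmetric-product case explicit, consider the specialization r = 0 of the displayed decomposition (a sketch using the standard identifications _d^0(C) ≃ C^(d), the d-th symmetric power of C, and _d^-1(C) ≃ Pic^d(C), which are not spelled out above; in this range the spaces involved are the classical smooth varieties). For g ≤ d ≤ 2g-2 the index i runs over {0,1}, giving
\[
\mathrm{D}\big(C^{(d)}\big)\;=\;\Big\langle\,\mathrm{D}\big(C^{(2g-2-d)}\big),\ \underbrace{\mathrm{D}\big(\operatorname{Pic}^{2g-2-d}(C)\big),\dots,\mathrm{D}\big(\operatorname{Pic}^{2g-2-d}(C)\big)}_{d-g+1\ \text{copies}}\,\Big\rangle,
\]
with the semiorthogonal order prescribed by the theorem; this is precisely the symmetric-product statement alluded to in the previous sentence.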
For special curves, the above Corollary <ref> gives rise to examples of flips of classical threefolds, where the semiorthogonal decomposition contains components given by nonclassical derived schemes (see Example <ref>). It also produces examples of derived equivalences for threefold flops induced by nonclassical derived incidence schemes (see Example <ref>). The framework presented in this paper allows us to extend the above Corollary <ref> to families of singular integral curves /S, with the role of ^d(C) replaced by the compactified Jacobians Jac^d_/S; see Remark <ref>. §.§ A Categorified Decomposition Theorem For a proper map Y → X between complex algebraic varieties, the Beilinson– Bernstein–Deligne (BBD) decomposition theorem (see <cit.>) provides a decomposition for intersection cohomologies IH^k(Y) ≃⊕_i IH^k-d_i(X_i, L_i), where X_i ⊆ X are strata for the map Y → X, L_i are locally systems on X_i, and d_i ∈ℤ. A fundamental question is in which situation can we “categorify" this result, in the sense of finding a semiorthogonal decomposition of the derived category (Y) of Y, with pieces given by derived categories of spaces supported over the closure X_i of strata X_i. For instance, such a categorified decomposition is not possible for K-trivial contractions Y → X. As the spaces on the right-hand side of the formula (<ref>) are derived partial resolutions of the closed strata for the map Y=_X(;d) → X, we could regard our main Theorem <ref> as such a categorification for a broad class of maps. §.§ Derived Algebraic Geometry This paper uses the framework of derived algebraic geometry (DAG), developed by Lurie, Toën and Vezzosi and many others (<cit.>). DAG plays a crucial role in this paper in the following aspects: * (Generality and compatibility with base change). The theorem applies to any prestack X over , including all (derived) schemes and stacks as special cases. Moreover, the formulation of the formula (<ref>) commutes with arbitrary base change. * (Fourier–Mukai kernels via derived Schur functors). The theorem provides explicit descriptions of the Fourier–Mukai kernels involved in terms of derived Schur functors. The derived Schur functors are non-abelian derived functors (in the sense of Quillen <cit.> and Lurie <cit.>, or equivalently, animations in the sense of <cit.>) of classical Schur module functors. The theory of derived Schur functors has been studied in <cit.>. Importantly, they are highly computable: using the generalized Illusie's isomorphisms <cit.>, the derived Schur functors appearing in the theorem can be computed using Akin, Buchsbaum, and Weyman's theory of Schur complexes <cit.>. * (Derived incidence correspondence schemes). The Fourier–Mukai kernels of the theorem are supported on certain universal incidence loci (Definition <ref>). These incidence loci generally possess non-trivial derived structures, even in the cases where all involved spaces in the formula (<ref>) are classical (see Example <ref>). They are the derived zero loci of cosections of the form _+^∨⊠_X _-^∨ [1] ^∨⊠, where _± are universal quotient bundles. This incidence relation can be seen as a higher-rank and shifted version of the universal quadric incidence relation studied in homological projective duality (<cit.>). §.§ Other Related Works The Chow-theoretical version of this paper's main theorem has been established by the author in <cit.>. 
In Koseki's paper <cit.>, Theorem <ref> (in smooth case, for =) is used to prove a categorical blow-up formula for Hilbert schemes of points: (_n(S)) = ⟨p(j) copies of (_n-j(S))⟩_j=0,1,…,n, where S is the blowup of a smooth complex surface S at a point and p(j) is the number of partitions of j. Koseki <cit.> also considered the cases of higher rank sheaves on del Pezzo, K3, or abelian surfaces. For general surfaces, the moduli spaces of higher rank sheaves are highly singular. We expect our Theorem <ref> to be helpful to generalize the above results in these situations and address open questions (1) &(2) of <cit.>. The flag correspondences of relative Grassmannians (see Notation <ref>) have also been studied by Hsu in <cit.>. We expect the results and methods presented in this paper to be beneficial for investigating the categorical actions explored in loc. cit.. In the case where d=r in Theorem <ref>, the map (;r) → X should be regarded as a derived version of blowup, and we expect it to be closely related to the concept of derived blowups studied by Hekking, Khan and Rydh (see <cit.>). §.§ Notation and Convention We will use the framework of ∞-categories developed by Lurie in <cit.>. Our notations and terminologies will mostly follow those of <cit.>. Here, we list the notations and conventions that are frequently used in this paper: * (∞-categories of spaces). We let denote the ∞-category of spaces (or equivalently, the ∞-category of ∞-groupoids). For a pair of objects C,D in an ∞-category , we let _(C,D) ∈ denote their mapping space. We let ^≃ denote core of , that is, the ∞-category obtained from by discarding all non-invertible morphisms. For a pair of ∞-categories and , we let (,) denote the ∞-category of functors from to . * (Simplicial commutative ring). We let denote the ∞-category of “derived rings", that is, simplicial commutative rings (see <cit.>; or equivalently, animated commutative rings in the sense of <cit.>). * (Prestacks). A prestack is a functor X →. A map between prestacks X, Y → is a natural transformation f X → Y of the functors . The notion of a prestack is probably the most general concept of spaces in algebraic geometry (<cit.>), and includes all derived schemes and derived higher stacks as special cases. * (Partitions). We let B_ℓ,d denote the set of partitions of height ≤ℓ and width ≤ d, i.e., partitions λ = (λ_1, λ_2, …, λ_ℓ) such that d ≥λ_1 ≥λ_2 ≥…≥λ_ℓ≥ 0. For a partition λ∈ B_ℓ, d, we let |λ| = ∑_i=1^ℓλ_i. We denote its transpose by λ^t = (λ_1^t, λ_2^t, …, λ_s^t), i.e., for any i ∈_>0, λ^t_i is the number of j's such that λ_j ≥ i. By convention, if one of ℓ and d is zero, we set B_ℓ,d to be the singleton of zero partition (0); we let B_ℓ,d = ∅ if ℓ<0 or d<0. * (Notations for Derived categories). In this paper, we will use the symbol to represent one of the following derived ∞-categories: , , , or . Specifically, for a prestack X, this paper will consider the following derived ∞-categories (X): * (=). We let (X) denote the ∞-category of quasi-coherent complexes on X (<cit.>) and let (X)^≤ 0 denote the full subcategory spanned by complexes such that ^i():=π_-i()=0 for i>0. If X is a quasi-compact, quasi-separated scheme, the homotopy category of (X) is equivalent to the triangulated derived category of unbounded complexes of _X-modules with quasi-coherent cohomologies. * (=,). We let (X) (resp. (X)) denote the full subcategory of (X) spanned by almost perfect complexes (resp. locally truncated almost perfect complexes); see <cit.> (resp., cf. <cit.>). 
For a Noetherian scheme X, the homotopy category of (X) (resp. (X)) corresponds to the triangulated derived category ^-( (X)) (resp. ^ b((X))) of right-bounded (resp. bounded) complexes of coherent sheaves on X, justifying the notations. * (=). We let (X) denote the ∞-category of perfect complexes on X. Then (X) is equivalent to subcategory of (X) spanned by dualizable objects. * (Tor-amplitude). A quasi-coherent R-complex M, where R ∈, is said to have Tor-amplitude in [0,1] if, for any discrete R-module N, π_i(M ⊗_R N) = 0 for i ∉[0,1]. A quasi-coherent complex over X is said to have Tor-amplitude in [0,1] if, for any η R → X, where R ∈, η^*() has Tor-amplitude in [0,1] as an R-complex. * (Derived convention). All the functors are assumed to be derived. For example, if f X → Y is a map between schemes, is a sheaf on X, then f_*() corresponds to the derived pushforward f_*() in the classical convention. * (Grothendieck's convention). We will use Grothendieck's convention for projectivizations (), Grassmannians (;d) and flags (;), so that they parametrize quotients rather than sub-objects. For example, the projectivization _X() parametrizes line bundle quotients of over X. §.§ Acknowledgment The author would like to thank Arend Bayer for numerous helpful discussions and suggestions throughout this project, Richard Thomas for many valuable suggestions on the paper and helpful discussions on relative Grassmannians and degeneracy loci, and Yukinobu Toda for fruitful discussions related to the Quot formula conjecture and helpful comments on an earlier draft of this paper. This project originated when the author was a member at IAS, and he would like to thank János Kollár and Mikhail Kapranov for inspiring discussions during that period. The author is supported by the Engineering and Physical Sciences Research Council [EP/R034826/1] and by the ERC Consolidator grant WallCrossAG, no. 819864. § DERIVED GRASSMANNIANS AND INCIDENCE CORRESPONDENCES §.§ Derived Grassmannians and Derived Schur Functors This subsection briefly reviews the theory of derived Grassmannians and of derived Schur functors developed in <cit.>. We let X be a prestack and let ∈(X)^≤ 0. §.§.§ Derived Grassmannians and Derived Flag Schemes Let = (0 ≤ d_1< … <d_k) be an increasing sequence of integers, where k ≥ 1. The derived flag scheme of type (<cit.>) is the prestack over X, denoted by _(;)_X(;) = (;) → X, which carries each η T = A → X, where A ∈, to the full sub-Kan complexes of (Δ^k, (T)^≤ 0)^≃ spanned by those elements ζ_T = (η^*_k_k-1→⋯_1), where each ϕ_i,i+1 is surjective on π_0, and _i is a vector bundle over T of rank d_i, i ≤ 1 ≤ k. Derived flag schemes are derived extensions of Grothendieck's classical flag schemes (<cit.>). The natural projection (;) → X is a relative derived scheme (<cit.>). The formation of _X(; ) commutes with arbitrary derived base change X' → X (<cit.>). If is a perfect complex of Tor-amplitude in [0,1], then the projection (;) → X is a proper, quasi-smooth relative derived scheme, with an invertible relative dualizing complex (see <cit.>). We refer to <cit.> for more details of their properties. There are two important special cases of derived flag schemes: [Derived Grassmannians; <cit.>] If k=1, = (d), then we denote the projection _X(,) → X by _(;d)_X(;d) = (;d) → X, and refer to it as the rank d derived Grassmannian of . 
It is by definition an element of (, )_/X which carries each η T → X, where T ∈, to the space of morphisms { u η^* →|u is surjective on π_0 and is a vector bundle of rank d on T}^≃. We will denote the universal fiber sequence on (;d) by _(;d)→_(;d)^*() _(;d), where _(;d) is the universal quotient bundle of rank d. [Derived Complete Flag Schemes; <cit.>] Let n ≥ 1 be a positive integer, let = n :=(1,2, 3, ⋯, n). We will refer to _(;n)_X(; n) = (; n) → X. as the derived complete flag scheme of type n. We denote the universal quotient sequence by _(;n)^* () _n_n-1→⋯_1, where _i is the universal quotient bundle of rank i, and let _i : = (ϕ_i-1,i_i↠_i-1) denote the universal line bundle on (; n), where we set _0 = 0 by convention. Consequently, for each 1 ≤ i ≤ n, we have _1 ⊗_2 ⊗⋯⊗_i ≃_i. For any sequence of integers λ = (λ_1, λ_2, …, λ_n) ∈^n, we define a line bundle (λ) on (;n) by the formula: (λ) : = _1^⊗λ_1⊗_2^⊗λ_2⊗⋯⊗_n^⊗λ_n. If = is a vector bundle of rank n, then the morphism ϕ_n,n+1_(;n)^* () →_n is an equivalence, and the forgetful map induces an equivalence (; n) (; n-1). §.§.§ Forgetful Maps Between Derived Flag Schemes If ' = (d_i_j)_1 ≤ j ≤ℓ be a subsequence of = (d_i)_1 ≤ i ≤ k, where (i_1 < i_2 < … < i_ℓ) is a subsequence of n=(1,2, 3, ⋯, n), then there is a natural forgetful map (<cit.>) π_',(; ) →(; '). We will need the following proposition, which is a special case of <cit.>: Let k ≥ 2, let i be an integer such that 1 ≤ i ≤ k-1, and let ' = (d_1, ⋯, d_i) and ”=(d_i+1, ⋯, d_k) so that = (', ”). * The forgetful map π_”,_X(; ) →_X(; ”) identifies _X(; ) as the derived flag scheme (_d_i+1^(”); ') over _X(; ”), where _d_i+1^(”) is the universal quotient bundle on _X(; ”) of rank d_i+1. * The forgetful map π_',_X(; ) →_X(; ') identifies _X(; ) as the derived flag scheme ((φ_i^(')); d_i+1-d_i, …, d_k - d_i) over _X(; '), where _d_i^(') is the universal rank d_i quotient bundle on _X(; ') and φ_i^(')_(; ')^*() →_d_i^(') is the universal quotient map. As a direct consequence of Proposition <ref>, if we assume k ≥ 3, let i,j be integers such that 1 ≤ i < j < k and let be written as: = (d_1, ⋯, d_i_^(1); d_i+1, ⋯, d_j_^(2); d_j+1, ⋯, d_k_^(3)), then the natural commutative square of forgetful maps [column sep = 5 em] _X(; ) dπ_(^(1), ^(2)), rπ_(^(2), ^(3)), _X(; ^(2), ^(3)) dπ_^(2), (^(2), ^(3)) _X(; ^(1), ^(2)) rπ_^(2), (^(1), ^(2)) _X(; ^(2)) is a pullback square (i.e., it is a derived fiber product square). §.§.§ Derived Schur Functors Derived Schur functors, studied in <cit.>, are non-abelian derived functors (in the sense of Dold–Puppe <cit.>, Quillen <cit.> and Lurie <cit.>, or equivalently, animations in the sense of Cesnavicius and Scholze <cit.>) of the classical Schur module functors. Specifically, for a prestack Y and a partition λ, the derived Schur functor associated with λ, as defined in <cit.>, is an endorfunctor of the derived ∞-category of quasi-coherent complexes denoted by ^λ(Y)^≤ 0→(Y)^≤ 0. This functor extends the classical Schur module functors of vector bundles and preserves sifted colimits. In the particular case where λ = (n) (resp. λ =(1, …, 1)_n terms) for some integer n ≥ 0, the derived Schur functor ^(n) = ^n (resp. ^(1, …, 1) = ⋀^n) corresponds to the nth derived symmetric power (resp. nth derived exterior power) functor studied by Dold–Puppe <cit.>, Illusie <cit.> and Lurie <cit.> <cit.>. 
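As a simple illustration of how these functors interact with shifts (a sketch based on the décalage isomorphisms of Illusie and the generalized Illusie isomorphisms invoked in the introduction; the identification $\Gamma^{n}\simeq\operatorname{Sym}^{n}$ and the transpose rule below require the characteristic-zero assumption), let E be a vector bundle placed in degree 0. Then for every n ≥ 0,
\[
\mathrm{L}\Lambda^{n}\big(E[1]\big)\;\simeq\;\big(\Gamma^{n}E\big)[n]\;\simeq\;\big(\operatorname{Sym}^{n}E\big)[n]\qquad(\text{characteristic }0),
\]
and, writing $\mathrm{L}\Sigma^{\lambda}$ for the derived Schur functor of a partition λ (a notation chosen here only for readability), more generally $\mathrm{L}\Sigma^{\lambda}(E[1])\simeq(\Sigma^{\lambda^{t}}E)[\,|\lambda|\,]$ in characteristic 0, where λ^t is the transpose partition. In other words, derived Schur functors of shifted vector bundles are again shifted classical Schur modules.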
The derived Schur functors possess many desirable functorial properties, such as their compatibility with arbitrary base change, and they satisfy derived generalizations of classical formulae like Cauchy's decomposition formula and Littlewood–Richardson rule. We refer the readers to <cit.> for a more comprehensive discussion and detailed explanations. §.§.§ Derived Borel–Weil–Bott Theorem The theories of derived flag schemes and derived Schur functors are connected via a derived generalization of the Borel–Weil–Bott theorem. Consider the situation described in Example <ref>, assume that is a perfect complex rank n and Tor-amplitude in [0,1] over X, and let λ = (λ_1, …, λ_n) ∈^n. Then: * If λ is a partition, we have a canonical equivalence (_(;n))_*((λ)) ≃^λ(), where ^λ() is the derived Schur functor applied to the perfect complex . * If X is defined over , one of the following two mutually exclusive cases occurs: * There exists a pair of integers 1 ≤ i < j ≤ n-1 such that λ_i - λ_j = i - j. In this case, (_(;n))_* ((λ)) ≃ 0. * There exists a unique permutation w ∈_n such that w λ is non-increasing. In this case, there is a canonical equivalence (_(;n))_* ((λ)) ≃^w λ() [- ℓ(w)]. Here, w λ = w(λ+ρ) -ρ denotes the dot action, and ℓ(w) is the length of w. In the special case where is a vector bundle, the above results reduce to the familiar Borel–Weil–Bott theorem for vector bundles; see <cit.> and <cit.>. The above theorem implies that corresponding Borel–Weil–Bott theorem for derived Grassmannians (;d) studied in Example <ref>. Specifically, let d be an integer such that 1 ≤ d ≤ n. Let α = (α_1, … , α_d) and β = (β_1, …, β_n-d) be two partitions and let λ = (α,β) be their concatenation. Then (α, β) := (π_(d), n)_*((λ))≃^α(_(;d)) ⊗^β(_(;d)). Consequently, the theorem implies the following: * If λ is a partition, (_(;d))_* ((α,β)) ≃^λ(). * If X is defined over , then one of the following two mutually exclusive cases occurs: * There exists a pair of integers 1 ≤ i < j ≤ n-1 such that λ_i - λ_j = i - j. In this case, (_(;d))_* ((α,β))≃ 0. * There exists a unique permutation w ∈_n such that w λ is non-increasing. In this case, there is a canonical equivalence (_(;d))_* ((α,β)) ≃^w λ() [- ℓ(w)]. §.§ Incidence Correspondences This subsection studies the incidence correspondences between derived Grassmannians, generalizing the incidence correspondences of the projectivization case <cit.>. Throughout this subsection, we let X be a prestack and assume that is a perfect complex over X of Tor-amplitude in [0,1] of rank r ≥ 0. Notice that the shifted dual ^∨[1] is also a perfect complex of Tor-amplitude in [0,1], but has rank (-r). Let (d_+, d_-) ∈^2 be a pair of integers and consider the derived Grassmannians (Example <ref>): _+ (;d_+) → X and _- (^∨[1]; d_-) → X, with tautological fiber sequences _±→_±^*() _±, where _± are universal quotient bundles of rank d_±, respectively. We define the universal incidence locus _(d_+,d_-)() to be the derived zero locus of the cosection of the perfect complex _+^∨⊠_X _-^∨ [1] over _d_+() ×_X _d_-(^∨[1]) defined as the composition _+^∨⊠_X _-^∨ [1] _+^*(^∨) ⊠_X _-^*() _(;d_+) ×_X (^∨[1]; d_-). (Here, for complexes _+ on (;d_+) and _- on (^∨[1];d_-), we use _+ ⊠_X _- to denote the external tensor product _1^* _+ ⊗__Z_2^* _-, where _i are the projections from (;d_+) ×_X (^∨[1]; d_-) to its ith factors.) We will refer to the commutative diagram [row sep= 2.5 em, column sep = 4 em] _(d_+,d_-)() rdrr_+d[swap]r_- (;d_+) d_+ (^∨[1];d_-) r_- X as the incidence diagram. 
Unwinding the definitions, for any given map η T → X of prestacks, the functor of points _(d_+,d_-)()(η) at η is the space of triples (u_+η^*→_+,  u_-η^*(^∨[1]) →_-, σ) where _± are vector bundles on T of ranks d_±, respectively, u_± are surjective on π_0, and σΔ^1 ×Δ^1 →(T) is a commutative diagram of the form _-dru_-^∨[1] η^* du_+ 0 r _+. Consequently, By construction, there is a canonical commutative diagram r_-^*(_-^∨) [1] drr_-^*(ρ_-^∨ [1]) ^*() dr_+^*(ρ_+) 0 r r_+^*(_+) in (_(d_+,d_-)()). We consider the following perfect complex ^ univ_(d_+,d_-) :=(r_-^*(_-^∨) [1] →(^*() r_+^*(_+) ) ), and refer to it as the universal perfect complex on _(d_+,d_-)(). * If d_-=0, _(d_+,0)() = (;d_+) and ^ univ_(d_+,0) = _(;d_+). * In the universal local situation of Notation <ref> (where X = _(^m,^n) and = [_X^m _X^n] is the tautological map), the perfect complex ^ univ_(d_+,d_-) is canonically represented by a universal two-term complex of vector bundles [r_-^*(R__d_-^-^∨) → r_+^*(R__d_+^+)]. In the situation of Definition <ref>, we have: * There is a canonical equivalence ^ univ_(d_+,d_-)≃( (r_-^*(_-^∨) [1] ^*() ) → r_+^*(_+) ). * If has rank r (and Tor-amplitude in [0,1]), then ^ univ_(d_+,d_-) is a perfect complex over _(d_+,d_-)() of Tor-amplitude in [0,1] and rank (r-d_+ + d_-). To prove assertion (<ref>), consider the following induced commutative diagram r_-^*(_-^∨) [1] dr (r_+^*(ρ_+)) dr ^*() d 0 r ^ univ_(d_+,d_-)rd (r_-^*(ρ_-^∨ [1])) d 0 r r_+^*(_+) where all three squares are pushouts, hence bi-Cartesian as (_(d_+,d_-)()) is a stable ∞-category. This proves (<ref>). Since (r_-^*(ρ_-^∨ [1])) has Tor-amplitude in [0,1] and rank (r+d_-), r_+^*(_+) is a vector bundle of rank d_+, and the natural map (r_-^*(ρ_-^∨ [1])) → r_+^*(_+) is surjective on π_0, assertion (<ref>) follows from (<ref>) (see <cit.>). In the situation of Definition <ref>, we have: * The projection r_+ of (<ref>) identifies _(d_+,d_-)()) as the rank d_- derived Grassmannian of the perfect complex (r_+^*(ρ_+^∨[1]) _+^∨[1] →^∨[1]) over (;d_+). * The projection r_- of (<ref>) identifies _(d_+,d_-)()) as the rank d_+ derived Grassmannian of the perfect complex (r_-^*(ρ_-^∨[1]) _-^∨[1] →) over (^∨[1]; d_-). Consequently, the projections r_± are both proper, quasi-smooth relative derived schemes. Similar to the projectivization case <cit.>, assertions (<ref>) and (<ref>) follow from the characterizations of closed immersions of the form (”; d) →(;d) between derived Grassmannians induced by cofiber sequences ' →→” of connective complexes (see <cit.>). As a result, the assertion about the properness and quasi-smoothness of the maps r_± follow from <cit.>. In the situation of Lemma <ref>, assume X has constant dimension X, then the quasi-smooth relative derived schemes (;d_+), (^∨[1];d_-) and _(d_+,d_-)() over X have virtual dimensions X + d_+ (r - d_+), X -d_- (r +d_-) X + (d_+ - d_-) + d_+ d_- - d_+^2 - d_-^2, respectively. vir.(;d_+) = X + d_+ (r - d_+) vir.(^∨[1];d_-) = X + d_- (-r - d_-) resp. vir._(d_+,d_-)()= X + r(d_+ - d_-) + d_+ d_- - d_+^2 - d_-^2. If X is a Cohen–Macaulay scheme, then (;d_+), (resp. (^∨[1];d_-), _(d_+,d_-)())) is classical if and only if its underlying classical scheme has dimension equal to its virtual dimension (see the proof of <cit.>). Moreover, if all these schemes are classical, then _(d_+,d_-)() is canonically isomorphic to the classical fiber product of (;d_+) and (^∨[1],d_-) over X. 
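As a sanity check of these dimension counts (a sketch under the added assumption that X is the smooth affine space of homomorphisms O_X^m → O_X^n, so dim X = mn, and that E is the corresponding tautological two-term complex with r = n - m; this is the universal local situation considered later in the paper), the incidence scheme is then the total space of a vector bundle of rank (n-d_+)(m-d_-) over the product of the two ordinary Grassmannians, and its classical dimension agrees with the virtual dimension computed above:
\[
\underbrace{d_{+}(n-d_{+})+d_{-}(m-d_{-})}_{\dim\,\mathrm{Gr}_{+}\times\mathrm{Gr}_{-}}\;+\;(n-d_{+})(m-d_{-})
\;=\;
mn+r\,(d_{+}-d_{-})+d_{+}d_{-}-d_{+}^{2}-d_{-}^{2},
\]
consistent with the classicality criterion just stated.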
§.§ Compatibility of Incidence and Flag Correspondences In the situation of Definition <ref>, we let d_+' > d_+ be another integer, then there is a canonical forgetful map forg(; d_+, d_++1, …, d_+') ×_(;d_+')_(d_+', d_-)() →_(d_+,d_-)() which identifies the domain of forg with the derived flag scheme __(d_+,d_-)()(^ univ_(d_+,d_-); 1, 2, …, d_+'-d_+) of the universal perfect complex ^ univ_(d_+,d_-) over _(d_+,d_-)(). Let : = (_-^∨[1] _-^*()) over Y := (^∨[1]; d_-), and consider the following commutative diagram: Z:=_Y(; d_+, d_++1, …, d_+') dπrπ' _Y(;d_+') d' _Y(; d_+) r Y=(^∨[1]; d_-), where the maps π, π' are the natural forgetful maps between derived flag schemes (<ref>), and , ' are the natural projections. By virtue of Lemma <ref>, there are canonical equivalences _Y(;d_+') ≃_(d_+', d_-)() and _Y(; d_+) ≃_(d_+,d_-)(). Let r_+ _(d_+, d_-)() →(; d_+) (resp. r_+' _(d_+', d_-)() →(; d_+')) denote the natural projection, and _+ (resp. _+') the tautological quotient bundle of rank d_+ (resp. d_+') over (; d_+) (resp. (; d_+')). By virtue of Proposition <ref>.(<ref>), the forgetful map π' identifies Z as the derived flag scheme Z ≃__(d_+',d_-)()(r_+'^*(_+'); d_+, d_++1, …, d_+' ). Since Proposition <ref>.(<ref>)) also implies that the forgetful map (; d_+, d_++1, …, d_+') →(; d_+') is equivalent to the derived flag bundle of _+' of type (d_+, d_++1, …, d_+') over (; d_+), we obtain that π' identifies Z with the domain of the map forg. On the other hand, by virtue of Proposition <ref>.(<ref>) and the equivalence ^ univ_(d_+,d_-)≃(^* () → r_+^*(_+)) of Lemma <ref>.(<ref>), the forgetful map π identifies Z as the derived flag scheme of ^ univ_(d_+,d_-) of type (1, 2, …, d_+'-d_+) over the incidence space _(d_+,d_-)(). Hence the proposition is proved. In concrete terms, for each morphism η T → X (where T = A for some A ∈), the map forg of Proposition <ref> carries an element of the form ζ =(η^* _d_+'→⋯→_d_++1→_d_+,  η^*(^∨[1]) _d_-',  σ), where _i, _i' are vector bundles over T of rank i, and σ is a commutative diagram _d_-dru_-^∨[1] η^* du_+ 0 r _d_+', to the element ζ' = (u_+' η^* →_d_+,  u_- η^*(^∨[1]) →_d_-',  σ' ) where u_+' is the composite map η^* _d_+'→⋯→_d_++1→_d_+, and σ' is the induced commutative diagram _d_-dru_-^∨[1] η^* du_+' 0 r _d_+. Let be a perfect complex of rank r ≥ 0 and Tor-amplitude in [0,1], let d ≥ 0 and 0 ≤ i ≤min{d, r} be integers, and let λ = (λ_1 ≥…≥λ_r-i) be a partition. Then there is a natural forgetful map forg(; d, d+1, …, d+r-i) ×_(;d+r-i)_(d+r-i,d-i)() →_(d,d-i)(). which is a proper, quasi-smooth relative derived scheme and induces a canonical equivalence forg_*(_d+1^λ_1⊗_d+2^λ_2⊗…⊗_d+r-i^λ_r-i) ≃^λ(^ univ_(d,d-i)), where _i's are universal quotient bundles of rank i (where d ≤ i ≤ d+r-i) and _i = (_i→_i-1) are the associated universal line bundles (where d+1 ≤ i ≤ d+r-i). Apply Proposition <ref> and Theorem <ref>.(<ref>) to the case where (d_+ ,d_-) = (d,d-i) and d_+'=d+r-i. In this case, _(d,d-i)^ univ is a perfect complex of Tor-amplitude in [0,1] and rank (r-i) (Lemma <ref>.(<ref>)), and the properness and quasi-smoothness of the map forg follow from <cit.>. The above relationship between moduli prestacks yields compatibility result for the induced Fourier–Mukai functors, which we will now investigate. Assume that we are in the situation of Definition <ref> and let maps r_± be defined as in diagram (<ref>). Assume that r- d_+ + d_- ≥ 0, and let λ = (λ_1 ≥⋯≥λ_r-d_+ + d_-) be a partition and i ∈. 
We consider Fourier–Mukai functors: Φ_(d_+,d_-)^λ = r_+ * ( r_-^*() ⊗^λ(^ univ_(d_+,d_-)) ) ((^∨[1]; d_-)) →((; d_+)). Φ_(d_+,d_-)^(i, λ) = Φ_(d_+,d_-)^λ() ⊗(_+)^i ((^∨[1]; d_-)) →((; d_+)). We will omit the subindex (d_+,d_-) and write Φ^λ and Φ^(i,λ) instead when there is no confusion. Here, we use the symbol to denote any of the following derived ∞-categories: , , or . This definition will be justified by the following lemma. In the situation of Notation <ref>, let =, then the functor Φ^λ (resp. Φ^(i,λ)) admits both a left adjoint (Φ^λ)^L (resp. (Φ^(i,λ))^L) and a right adjoint (Φ^λ)^R (resp. (Φ^(i,λ))^R). Furthermore, all these functors preserve (almost) perfect complexes and locally truncated almost perfect complexes, and commute with arbitrary base change X' → X. The left and right adjoints of Φ^λ can be given explicitly by the formula (Φ^λ)^L = r_- !(r_+^*() ⊗^λ(^ univ_(d_+,d_-))^∨) ((; d_+)) →((^∨[1]; d_-)). (Φ^λ)^R = r_- *(r_+^!() ⊗^λ(^ univ_(d_+,d_-))^∨) ((; d_+)) →((^∨[1]; d_-)). Here, r_- ! denotes the left adjoint of r_-^*, and r_+^! denotes the right adjoint of r_+ *. Since r_± are proper and quasi-smooth (Lemma <ref>), the desired assertions follow from Lipman–Neeman–Lurie's version of Grothendieck duality (see <cit.>). Let be a perfect complex of Tor-amplitude in [0,1] and rank r ≥ 1, and let d be an integer 0 ≤ d ≤ r-1. We consider the following commutative diagram [column sep = 4 em, row sep = 2.5 em] (;d,d+1) dp_+rp_- (;d+1) d_(;d+1) (;d) r_(;d) X, where p_± are the natural forgetful maps (<ref>). Let denote , , or . We consider Fourier–Mukai functors: Ψ__d+1^k = p_+ * (p_-^*() ⊗_d+1^k) ((;d+1)) →((;d)). Ψ_k = p_+ * p_-^*() ⊗(_d)^⊗ k ((;d+1)) →((;d)). Here _i's are universal quotient bundles of rank i for i=d,d+1, and _d+1 = (_d+1→_d). Proposition <ref> implies that the projection p_+ of diagram (<ref>) identifies (;d,d+1) as the derived projectivization of the perfect complex _(;d) over (;d) with _p_+(1) ≃_d+1, where _(;d) = (_(;d)^*() →_(;d)) has Tor-amplitude in [0,1] and rank (r-d). The projection p_- identifies (; d,d+1) as the derived projectivization (_(; d+1)^∨) of the rank (d+1) vector bundle _(;d+1)^∨ over (; d+1), with _p_-(1) ≃_d+1^∨; or equivalently, as the derived Grassmannian (_(;d+1); d) of _(;d+1) over (; d+1), with universal quotient bundle _d. As a consequence of Corollary <ref>, we have the following compatibility result for the Fourier–Mukai functors considered in Notations <ref> and <ref>: In the situation of Corollary <ref>, let denote , , or , assume that λ = (i ≥λ_1 ≥…≥λ_r-i≥ 0) ∈ B_r-i,i, and let Φ^(i, λ)_(d,d-i)((^∨[1]; d-i)) →((; d)) denote the functor defined in Notation <ref> in the case (d_+,d_-) = (d,d-i). Let Ψ_k's be defined as in Notation <ref>. Then there is a canonical equivalence of functors: Φ^(i, λ)_(d,d-i)≃Ψ_i-λ_1∘⋯∘Ψ_λ_r-1 - λ_r-i-1∘(_d+r-i)^λ_r-i∘Φ_(d+r-i,d-i)^(0). For each d + 1 ≤ k ≤ d+r-i, we have (_k) ≃_k ⊗(_k-1). By induction, we obtain a canonical equivalence of functors from ((; d+r-i)) to ((;d)): (⊗(_d)^i )∘Ψ__d+1^λ_1∘⋯∘Ψ__d+r-i^λ_r-i≃Ψ_i-λ_1∘⋯∘Ψ_λ_r-1 - λ_r-i-1∘ (⊗(_d+r-i)^λ_r-i), where Ψ__d+1^k's are defined in Notation <ref>. Consequently, it suffices to prove that there is a canonical equivalence of functors Φ^λ_(d,d-i)≃Ψ__d+1^λ_1∘⋯∘Ψ__d+r-i^λ_r-i∘Φ_(d+r-i,d-i)^(0)((^∨[1]; d-i)) →((; d)). 
Consider the following commutative diagram: [column sep = 15em,between origins] (; d, d+1, …, d+r-i) ×_(;d+r-i)_(d+r-i,d-i)() d forgldd[swap]π_-rddπ_+ _(d,d-i)() ldr_-rd[swap]r_+ (^∨[1]; d-i) (; d), where the vertical map forg is the forgetful map in Corollary <ref> and r_± are the projection maps of the incidence diagram (<ref>). By repeated use of Remark <ref>, we see that the composite functor Ψ__d+1^λ_1∘⋯∘Ψ__d+r-i^λ_r-i∘Φ_(d+r-i,d-i)^(0) is equivalent to the functor π_+ *(π_-^*() ⊗ (_d+1^λ_1⊗_d+2^λ_2⊗…⊗_d+r-i^λ_r-i)) ≃ r_+ *∘ forg_*( forg^* ∘ r_-^*() ⊗ (_d+1^λ_1⊗_d+2^λ_2⊗…⊗_d+r-i^λ_r-i)) ≃ r_+ *(r_-^*() ⊗ forg_* (_d+1^λ_1⊗_d+2^λ_2⊗…⊗_d+r-i^λ_r-i)) ≃ r_+ *(r_-^*() ⊗^λ(^ univ_(d,d-i)) ) = Φ_(d,d-i)^λ (), where the second equivalence follows from projection formula, and the third equivalence follows from Corollary <ref>. Hence the corollary is proved. § SEMIORTHOGONAL DECOMPOSITIONS OF DERIVED GRASSMANNIANS This section establishes the main result of this paper, namely Theorem <ref>. In <ref>, we present the result and reduce it to the universal local situation, which will be addressed in <ref>. §.§ Semiorthogonal Decompositions Let r ≥ 0 be an integer and let 0 ≤ i ≤ r, so that B_r-i, i denotes the set of partitions λ = (i ≥λ_1 ≥λ_2 ≥⋯λ_r-i≥ 0). We define a total order < on the set _r = { (i, λ) | 0 ≤ i ≤ r, λ∈ B_r-i,i} as follows: for any (i,λ), (j,μ) ∈_r, we write (i, λ) < (j, μ), if (i-λ_1, λ_1 - λ_2, …, λ_i-1 - λ_i, λ_i) <_ lex (j-μ_1, μ_1 - μ_2, …, μ_j-1 - μ_j, μ_j) in the lexicographical order <_ lex. Notice that if i=j, then for λ,μ∈ B_r-i,i, we have (i,λ) < (i, μ) if and only if λ >_ lexμ (that is, λ is smaller than μ in the opposite lexicographical order of partitions in B_r-i,i). The goal of this section is to establish the following theorem: Let X be a prestack defined over , let be a perfect complex of Tor-amplitude [0,1] and rank r ≥ 0 over X, let d ≥ 1 be an integer, and let be either , , or . For any integer 0 ≤ i ≤min{r, d} and any partition λ∈ B_r-i,i, we let Φ^(i, λ) = Φ^(i,λ)_(d,d-i) denote the functor defined in Notation <ref> in the case (d_+,d_-) = (d,d-i), that is: Φ^(i, λ)((^∨[1];d-i)) →((;d)) ↦ r_+ *(r_-^*() ⊗^λ(^ univ_(d,d-i)) )⊗ (_(;d))^i. Then Φ^(i, λ) is fully faithful. Moreover, these functors Φ^(i, λ), where 0 ≤ i ≤min{r, d} and λ∈ B_r-i,i, induce a semiorthogonal decomposition ((;d)) = ⟨(Φ^(i,λ)) | 0 ≤ i ≤min{r, d}, λ∈ B_r-i,i⟩, with semiorthogonal order given by the total order < defined in Notation <ref>. Specifically, ( (Φ^(i,λ)), (Φ^(j,μ))) ≃ 0 if (j,μ) <(i,λ). Notice that when fixing d, the subindices “(d,d-i)" of Φ^(i,λ)_(d,d-i) are uniquely determined by the superscripts “(i,λ)". Therefore, there is no ambiguity in writing Φ^(i, λ) for Φ^(i,λ)_(d,d-i). If r=3 and d ≥ 3, we have a semiorthogonal decomposition ((;d)) = ⟨(Φ^(0,(0))), (Φ^(1,(1,1))), (Φ^(1,(1))), (Φ^(2,(2))), (Φ^(1,(0))), (Φ^(2,(1))), (Φ^(2,(0))), (Φ^(3,(0)))⟩. If r=4 and d ≥ 4, we have a semiorthogonal decomposition ((;d)) = ⟨(Φ^(0,(0))), (Φ^(1,(1,1,1))), (Φ^(1,(1,1))), (Φ^(2,(2,2))), (Φ^(1,(1))), (Φ^(2,(2,1))), (Φ^(2,(2))), (Φ^(3,(3))), (Φ^(1,(0))), (Φ^(2,(1,1))), (Φ^(2,(1))), (Φ^(3,(2))), (Φ^(2,(0))), (Φ^(3,(1))), (Φ^(3,(0))), (Φ^(4,(0))) ⟩. Let Φ_1, Φ_2, …, Φ_N be all the functors in {Φ^(i,λ)| 0 ≤ i ≤min{r, d}, λ∈ B_r-i,i} listed in ascending order with respect to the total order < on the superscrips (i,λ), where N = ∑_i=0^min{r,d}ri. For each 1 ≤ j ≤ N, we let _j denote the endofunctor (𝕀→Φ_j ∘Φ_j^L) of ((;d)), where Φ_j^L denotes the left adjoint of Φ_j. 
Consequently, there is a canonical filtered sequence in (((;d)), ((;d))) _N ∘_N-1∘⋯∘_1 →_N-1∘⋯∘_1 →⋯→_2 ∘_1 →_1 →𝕀, where (_1 →) ≃Φ_1 ∘Φ_1^L ((;d))) →(Φ_1), and for each 2 ≤ j ≤ N, _j : = (_j ∘_j-1∘⋯∘_1 →_j-1∘⋯∘_1) ≃Φ_j ∘Φ_j^L ∘ (_j-1∘⋯∘_1) defines a functor from ((;d)) to (Φ_j). Therefore, to establish the desired semiorthogonal decomposition, it is equivalent to prove the following assertions about the functors Φ_j, Φ_j^L and _j (for the corresponding category ): * Fully-faithfulness: For each 1 ≤ j ≤ N, the counit map Φ_j^L ∘Φ_j →𝕀 is an equivalence. * Semiorthogonality: For all 1 ≤ j < k ≤ N, Φ_j^L ∘Φ_k ≃ 0. * Generation: _N ∘_N-1∘⋯∘_1 ≃ 0. We have the following observations: * Since the assertions (<ref>), (<ref>) and (<ref>), regarded as properties for the pair (X, ), are local with respect to Zariski topology, we may assume that X is a derived affine scheme A, where A ∈, and is the cofiber of a map σ A^m → A^n between finite local free sheaves, where m,n ≥ 0 are integers such that n-m=r. * Given that the functors Φ_j, Φ_j^L, and hence _j, preserve all small colimits and perfect objects (Lemma <ref>), and that for any quasi-compact, quasi-separated derived scheme Y, we have (Y) ≃((Y)), to prove the assertions (<ref>), (<ref>) and (<ref>) in the case where = and X = A, it suffices to verify them in the case where = and X = A, and vice versa. * Since the functors Φ_j, Φ_j^L, and hence _j, preserve almost perfect objects and locally truncated almost perfect objects (Lemma <ref>), the assertions (<ref>), (<ref>) and (<ref>) in the cases = and = can be deduced from the case =. Consequently, it suffices to consider the case where =, X = A, where A ∈, and = (σ A^m → A^n), where n-m = r. Moreover, as the formation of the assertions (<ref>), (<ref>) and (<ref>) commutes with arbitrary base change (as discussed in Lemma <ref>), we may assume that X = _(^m,^n) is the total space of -homomorphisms from ^m to ^n and ≃ [_X^m _X^n], where is an ordinary commutative ring and τ is the tautological map; in this case, the desired assertions are established in the next subsection (see <ref>). Let d<r and consider the functors Φ^(i,λ) in the case i=d. Then Theorem <ref> implies that the collection of objects {^λ(_(;d)) |λ∈ B_r-d,d} forms a relative exceptional sequence (<cit.>) over X with respect to the opposite lexicographic order. On the other hand, in <cit.> we show that {^α(_(;d))|α∈ B_d,r-d} forms a relative exceptional sequence over X with respect to the colexicographic order. Using derived Borel–Weil–Bott Theorem <ref> and the filtered sequences associated with Schur complexes, we can show that these two relative exceptional sequences are dual (and mutation-equivalent) to each other. Details will appear in a separate note. In the situation of the Theorem <ref>, if we assume that d<r and consider the case i=d, then the theorem implies that the functors Φ_λ :=Φ^(d,λ) = _(;d)^*() ⊗^λ(_(;d)) (X) →((;d)) are fully faithful for all λ∈ B_r-d,d, and their essential images form a semiorthogonal sequence with respect to the opposite of lexicographic order of B_r-d,d. On the other hand, in <cit.>, we showed that the functors Φ'_α = _(;d)^*() ⊗^α(_(;d)) (X) →((;d)) are fully faithful for all α∈ B_d,r-d, and their essential images form a semiorthogonal sequence with respect to the colexicographic order of B_d,r-d. In fact, one can show that these two semiorthogonal sequences are dual (and mutation-equivalent) to each other. 
First, for all α∈ B_d,r-d and λ∈ B_r-d,d, we have the duality relation (Φ'_α, Φ_λ) ≃ 0 if λ≠α^t, and (Φ'_λ^t, Φ_λ) ≃𝕀 [-|λ|]. Thanks to the derived Borel–Weil–Bott Theorem <ref>, (in characteristic zero) we can reduce the computations of (Φ'_α, Φ_λ) to that of derived pushforward of line bundles for the flag bundle (;r) →(;r), and the desired result follows from the same calculation as in the Grassmannian variety case of Kapranov <cit.>. Secondly, using the filtered sequences <cit.> obtained from Schur complexes for the fiber sequence _(;d)→_(;d)^*() →_(;d), and the equivalences ^α^t≃^α^t in characteristic zero, we obtain that these two semiorthogonal sequences are mutation-equivalent to each other. Let X → S be any quasi-smooth map between prestacks that is a relative derived algebraic space of constant dimension r ≥ 0. Then the relative cotangent complex _X/S has perfect-amplitude in [0,1] and rank r. We let _X/S[1] = _X/S^∨[1] denote the shifted tangent complex. Theorem <ref> implies semiorthogonal decompositions for all d ≥ 0: (_X(_X/S;d)) = ⟨ri copies of (_X(_X/S[1];d-i)) ⟩_0 ≤ i ≤min{r, d}. In the special case where d=r, the derived relative scheme _X(_X/S;r) → X is closely related to the construction of Nash blowups. §.§ The Universal Local Situation In this subsection, we let =. §.§.§ The Setup for the Universal Local Situation Now we introduce the basic setup for the universal local situation: * Let be a commutative ring, let n, m, d ≥ 0 be integers such that n-m=: r ≥ 0, and W = ^m and V = ^n. * For a pair of non-negative integers (d_+, d_-), we let _d_+^+ := _(V; d_+) and _d_-^- := _(W^∨ ; d_-) denote the rank d_± Grassmannian -schemes of V and W^∨, respectively, and let R__d_+^+↪ V ⊗__d_+^+↠ Q__d_+^+ and R__d_-^-↪ W^∨⊗__d_-^-↠ Q__d_-^-, denote the tautological short exact sequences, where Q__d_±^± are tautological quotient bundles of ranks d_±, respectively. * Let X = _(W,V) = (_^*(W ⊗_ V^∨) ) denote affine -space parametrizing -homomorphisms from W to V, and let τ W ⊗_X → V ⊗_X denote the tautological morphism. Let = [W ⊗_X V ⊗_X] (with V ⊗_X placed in degree 0); then ^∨[1] ≃ [V^∨⊗_X W^∨⊗_X] (with W^∨⊗_X placed in degree 0). In the situation of Notation <ref>, we have canonical identifications: q__d_+^+(;d_+) ≃(^*___d_+^+(W ⊗ R__d_+^+^∨) ) →_d_+^+. q__d_-^-(^∨[1];d_-) ≃(^*___d_-^-(R__d_-^-^∨⊗ V^∨) )→_d_-^-. q__(d_+,d_-)_(d_+,d_-)() ≃(___d_+^+×_d_-^-^*(R__d_-^-^∨⊠ R__d_+^+^∨) ) →_d_+^+×_d_-^-. In this case, these derived schemes are classical (Remark <ref>). Therefore, the desired result follows easily from definitions (see <cit.> or <cit.>). In the situation of Lemma <ref>, to reduce the burden of notations, for objects _±∈(_d_±^±), we will simply use the same notations _+ = q__d_+^+^*(_+) ∈( (;d_+)) and _- =q__d_-^-^*(_-) ∈( (^∨[1];d_-)) to denote their respective pullbacks. If is an idempotent-complete stable ∞-category (such as ((;d_+)) and (^∨[1];d_-)), and {C_i}_i ∈ I is a collection of objects of , we let ⟨{C_i}_i ∈ I⟩⊆ denote the stable subcategory thickly generated by {C_i}_i ∈ I (i.e., ⟨{C_i}_i ∈ I⟩ is the smallest idempotent-complete stable ∞-subcategory of which contains all C_i). In the situation of Lemma <ref> and using Notations <ref>, <ref>, we have ((;d_+)) = ⟨{^λ(R__d_+^+^∨) }_λ∈ B_n-d_+,d_+⟩ ((^∨[1];d_-) ) = ⟨{^μ(R__d_-^-) }_μ∈ B_m-d_-,d_-⟩. This follows from Kapranov's exceptional collections for (_d_±^±) (<cit.>; see also <cit.> for the characteristic-free version) and the fact that the natural projections q__d_±^± are relative affine spaces. 
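Both the index sets B_ℓ,d appearing in these generating collections and the total order on the pairs (i, λ) used in the statement of the main theorem are purely combinatorial, so they can be checked mechanically. The short Python sketch below is an illustration of this bookkeeping; the reading of the comparison key over all r - i parts of λ (zero-padded) is an interpretation of the definition, consistent with the r = 3 example displayed in the previous subsection. It enumerates B_ℓ,d, verifies |B_ℓ,d| = C(ℓ+d, d) (the number of generators in each collection above), and reproduces that ordering.

from itertools import combinations_with_replacement
from math import comb

def B(height, width):
    """All partitions with at most `height` parts, each part at most `width`,
    as non-increasing tuples of length `height` (zero-padded)."""
    if height == 0:
        return [()]
    return [tuple(sorted(c, reverse=True))
            for c in combinations_with_replacement(range(width + 1), height)]

# |B_{l,d}| = C(l+d, d): the number of Kapranov-type generators S^lambda(R^dual)
# in each of the collections above.
for height, width in [(2, 3), (3, 2), (4, 4)]:
    assert len(B(height, width)) == comb(height + width, width)

def order_key(r, i, lam):
    """Comparison key for the pair (i, lam): the sequence
    (i - lam_1, lam_1 - lam_2, ..., lam_{h-1} - lam_h, lam_h), read over all
    h = r - i parts of lam (already zero-padded by B) -- an interpretation of
    the definition, consistent with the r = 3 example in the text."""
    lam = tuple(lam)
    if not lam:
        return (i,)
    key = [i - lam[0]]
    key += [lam[k] - lam[k + 1] for k in range(len(lam) - 1)]
    key.append(lam[-1])
    return tuple(key)

r = 3
J = [(i, lam) for i in range(r + 1) for lam in B(r - i, i)]
J.sort(key=lambda p: order_key(r, p[0], p[1]))
print(J)
# [(0, (0, 0, 0)), (1, (1, 1)), (1, (1, 0)), (2, (2,)), (1, (0, 0)),
#  (2, (1,)), (2, (0,)), (3, ())]
# After dropping trailing zero parts this is exactly the ordering
# (0,(0)) < (1,(1,1)) < (1,(1)) < (2,(2)) < (1,(0)) < (2,(1)) < (2,(0)) < (3,(0))
# of the r = 3 example in the previous subsection.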
§.§.§ Incidence Correspondences in the Universal Local Situation This subsection considers the incidence diagram (<ref>) of Definition <ref> in the universal local situation <ref>. We assume that d ≥ r=m-n and consider the incidence diagram (<ref>) in the case where (d_+,d_-) = (d, d-r). Let Φ = Φ^(0)_(d,d-r) be the functor of Notation <ref> in the case where λ = (0), and let Φ^L be its left adjoint functor, that is: Φ = r_+* r_-^* ((^∨[1]); d-r) →((; d)) Φ^L = r_-! r_+^* ((;d)) →((^∨[1];d-r)). In the above situation, we have canonical equivalences Φ(^λ(R__d-r^-)) ≃^λ(R__d^+^∨) for all λ∈ B_n-d,d-r. Φ^L(^λ(R__d^+^∨)) ≃^λ(R__d-r^-) for all λ∈ B_n-d,d. This a special case of the key lemma <cit.>; we present here a characteristic-free proof for readers' convenience. We only prove the first equivalence; the other case is similar. The projection r_+ factorizes through a composite map (<cit.>) _(d,d-r)() _d-r^- ×_(;d) (;d), where ι is a closed immersion induced by a regular section of the vector bundle Q__d-r^-⊠ R__d^+, and is the canonical projection. Therefore, we have a canonical equivalence Φ(^λ(R__d-r^-)) ≃_*(^λ(R__d-r^-) ⊗ι_* (__(d,d-r)()) ), where ι_* (__(d,d-r)()) is resolved by a Koszul complex whose ℓth terms are given by ⋀^ℓ(Q__d-r^-^∨⊠ R__d^+^∨) where 0 ≤ℓ≤ (n-d)(d-r). By Cauchy's decomposition formula (<cit.>, <cit.>), there is a canonical filtration of ⋀^ℓ(Q__d-r^-^∨⊠ R__d^+^∨) whose associated graded is given by ^μ^t(Q__d-r^-)^∨⊗^μ(R__d^+^∨), where μ run through all elements of B_n-d,d-r such that |μ| = ℓ. Consequently, it suffices to prove the following _(_d-r^-)(^μ^t(Q__d-r^-), ^λ(R__d-r^-) [λ]) ≃δ_μ, λ·𝕀, where δ_μ,λ = 1 if μ=λ and δ_μ,λ = 0 if μ≠λ. This follows from that {(^μ^t(Q__d-r^-)}_μ∈ B_n-d,d-r and {^λ(R__d-r^-) [λ]}_λ∈ B_n-d,d-r are dual full exceptional collections of (_d-r^-); see <cit.> and <cit.>. In the situation of Lemma <ref>, the functor Φ is fully faithful, with essentially image Φ = ⟨{^μ(R__d^+^∨) }_μ∈ B_n-d,d-r⟩⊆((;d)). Lemma <ref> implies that the counit map Φ^L ∘Φ→𝕀 is an equivalence when evaluated at the generators ^λ(R__d-r^-) of ((^∨[1]; d-r)) described in Lemma <ref>, where λ∈ B_n-r,r-d. Since the collection of objects , for which the counit map Φ^L ∘Φ() → is an equivalence, forms an idempotent-complete stable ∞-subcategory of ((^∨[1]; d-r)), it follows that the counit map Φ^L ∘Φ→𝕀 is an equivalence. Hence the corollary follows. §.§.§ Flag Correspondences in the Universal Local Situation Now we consider flag correspondences (<ref>) in the universal local situation of <ref>. Let d be an integer such that 0 ≤ d ≤ n-1, and let Ψ=Ψ_0 be the functor defined in Notation <ref> and let Ψ^L be its left adjoint; that is: Ψ = p_+ * p_-^* ((; d+1)) →((;d)), Ψ^L = p_- ! p_+^* ((;d)) →((;d+1)), where p_± are defined as in (<ref>), and p_-! denotes the left adjoint of p_-^*. The following is analogous to <cit.> in the case where ℓ_+ - ℓ_- = 1; the combinatorics of the Lascoux-type complexes F_* in this case are also similar to that of the staircase complexes studied in <cit.>. In the above situation, we have: * For any λ∈ B_n-d,d, there is a canonical equivalence Ψ^L(^λ(R__d^+^∨)) ≃^λ(R__d+1^+^∨) if λ∈ B_n-d-1,d⊆ B_n-d,d; 0 if λ∈ B_n-d,d \ B_n-d-1,d. * If is a -algebra, then for any λ∈ B_n-d-1, d with λ_1 = k, where max{0,d-r+1}≤ k ≤ d, the image Ψ(^λ(R__d+1^+^∨)) admits a resolution by vector bundles Ψ(^λ(R__d+1^+^∨)) ≃ F_* = [0 → F_k →⋯→ F_1 → F_0], where F_0 = ^λ(R__d^+^∨) and F_i = ^λ^(i)(R__d^+^∨) ⊗⋀^|λ^(i)| - |λ|(W) for 1 ≤ i ≤ k. 
Here, for any given 1 ≤ i ≤ k, let 1 ≤ j ≤ n-d-1 be such that λ_j ≥ i ≥λ_j+1+1, then λ^(i) = (λ_1, λ_2, …, λ_j, i, λ_j+1+1, …, λ_n-d-1+1) ∈ B_n-d,k \ B_n-d-1,k. First, we prove assertion (<ref>). Using the Notation(s) <ref> (and <ref>), there is a short exact sequence of vector bundles on (;d,d+1): _d+1^∨↪ p_+^*(R__d^+^∨) ↠ p_-^*(R__d+1^+^∨), where _d+1 = (_d+1→_d), and p_± are defined as in (<ref>). Let λ∈ B_n-d,d, then from the from direct-sum decomposition formula <cit.> (see also <cit.>), there is filtration on ^λ(p_+^*(R__d^+^∨) ) ≃ p_+^* (^λ(R__d^+^∨) ) whose associated graded is ⊕_ν=(ν_1, …, ν_n-d-1) ⊆λ=(λ_1, …, λ_n-d) ^ν(p_-^*(R__d+1^+^∨)) ⊗^λ/ν (_d+1^∨). [notice that we switch the roles of N and L in <cit.>, and our version follows from <cit.> by applying the duality ^λ/μ(^∨)^∨≃^λ/μ() for vector bundles .] The skew Schur module ^λ/ν (_d+1^∨) is a quotient of tensor products ⊗_i=1^λ_1⋀^λ^t_i - ν^t_i(_d+1^∨) (see <cit.> or <cit.>), where λ^t = (λ_1^t, λ_2^t, …) and ν^t = (ν_1^t, ν_2^t, …) are transposes of λ and ν, respectively. As a result, ^λ/ν(_d+1^∨) is zero unless 0 ≤λ_n-d≤ν_n-d-1≤λ_n-d-1≤…≤ν_2 ≤λ_2 ≤ν_1 ≤λ_1 ≤ d, in which case the corresponding summand of the associated graded is equvivalent to ^ν(p_-^*(R__d+1^+^∨)) ⊗ (_d+1^∨)^|λ| - |ν|≃ p_-^* (^ν(R__d+1^+^∨) ) ⊗ (_d+1^∨)^|λ| - |ν|. If λ∈ B_n-d-1,d, the partitions ν appearing in (<ref>) can be classified into two cases: * Case ν=λ: In this case, the corresponding summand is equivalent to p_-^* (^ν(R__d+1^+^∨) ). * Case ν≠λ: In this case, we have 1 ≤ |λ| - |ν| ≤λ_1 ≤ d. For such cases, Serre's vanishing theorem (Remark <ref>, <cit.>) implies that p_- ! ((_d+1^∨)^|λ| - |ν|) ≃ 0. Consequently, we obtain that p_- ! p_+^* (^λ(R__d^+^∨) ) ≃^λ(R__d+1^+^∨) as claimed. If λ∈ B_n-d,d \ B_n-d-1,d which means that λ_n-d≥ 1 and λ_1 ≤ d, then we have 1 ≤ |λ| - |ν| ≤λ_1 ≤ d for all partitions ν satisfying (<ref>). By applying Serre's vanishing theorem once again, we conclude that p_- ! p_+^* (^λ(R__d^+^∨) ) ≃ 0 as desired. Next, we prove assertion (<ref>). Similarly as with Lemma <ref>, <cit.> implies that the projection p_+ factorizes through a composite map (;d,d+1) _(;d) (R__d^+) (;d) where ι is a closed immersion induced by a regular section of the vector bundle W^∨⊗_d+1, and is the canonical projection. Therefore, we have a canonical equivalence Ψ(^λ(R__d+1^+^∨)) ≃_*( ^λ(R__d+1^+^∨) ⊗ι_* (_(;d,d+1)) ), where ι_* (_(;d,d+1)) is resolved by a Koszul complex whose ℓth terms are given by (⋀^ℓ W ) ⊗_d+1^-ℓ where 0 ≤ℓ≤ m. Since ⋀^ℓ W ≃ 0 if ℓ >m, we may assume 0 ≤ℓ≤ n+k-d-1. By considering the spectral sequence which computes the above higher direct image _*( ^λ(R__d+1^+^∨) ⊗ι_* (_(;d,d+1)) ) (see <cit.>), it suffices to compute derived pushforwards of the form _* (^λ(R__d+1^+^∨) ⊗_d+1^-ℓ) ⊗(⋀^ℓ W) [ℓ] 0 ≤ℓ≤ n+k-d-1. Using the equivalence (R__d^+) ≃(R__d^+^∨;n-d-1) and Theorem <ref>.(<ref>), we have _* (^λ(R__d+1^+^∨) ⊗_d+1^-ℓ) ≃π_* ( (λ,ℓ)), where π(R__d^+^∨; n-d) →(;d) is the complete flag bundle of R__d^+^∨ over (;d), and (λ,ℓ) is the line bundle associated with the sequence (λ,ℓ)= (λ_1, …, λ_n-d-1, ℓ). According to Borel–Weil–Bott theorem, let ρ = (n-d-1, n-d-2, …, 2, 1, 0), to compute the derived pushforward π_* ( (λ,ℓ)) it suffices to analyze the sequence (λ,ℓ) + ρ= (λ_1 + n-d-1, λ_2 + n-d-2, …, λ_n-d-1+1, ℓ). First, we consider the case ℓ=0. In this case, (<ref>) is a partition, and Borel–Weil theorem (see <cit.> or Theorem <ref>.(<ref>)) implies that (<ref>) is isomorphic to _* (^λ(R__d+1^+^∨) )≃^λ(R__d^+^∨). 
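The ρ-shift bookkeeping in this Borel–Weil–Bott computation is mechanical, and the short sketch below (our own illustration; the helper name bott and the small numerical values are made up) automates it: given a weight (λ_1, …, λ_{n-d-1}, ℓ), it adds ρ, reports vanishing when an entry repeats, and otherwise returns the sorted-minus-ρ partition together with the minimal number of transpositions, reproducing the case ℓ = 0 above and the cases ℓ ≥ 1 treated next.

```python
from itertools import combinations

def bott(weight):
    """Borel-Weil-Bott bookkeeping (characteristic zero) for a weight
    (a_1,...,a_m) on the full flag bundle of a rank-m bundle: add rho,
    report vanishing on a repetition, otherwise return the sorted-minus-rho
    partition and the minimal number of transpositions needed to sort."""
    m = len(weight)
    rho = list(range(m - 1, -1, -1))
    shifted = [a + r for a, r in zip(weight, rho)]
    if len(set(shifted)) < m:
        return None                      # repeated entry: all cohomology vanishes
    swaps = sum(1 for i, j in combinations(range(m), 2) if shifted[i] < shifted[j])
    partition = tuple(s - r for s, r in zip(sorted(shifted, reverse=True), rho))
    return partition, swaps

# made-up small case: lambda = (2, 1) with n - d = 3, appended weight l = 0,...,5
for l in range(6):
    print(l, bott((2, 1, l)))
# l = 0 returns ((2, 1, 0), 0); l = 2 and l = 4 vanish; l = 1 and l = 3 return
# the partitions lambda^(1) = (2, 1, 1) and lambda^(2) = (2, 2, 2) with 0 and 1
# transpositions, respectively
```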
Next, we consider the the case where 1 ≤ℓ≤ n+k-d-1. From Borel–Weil–Bott theorem (see <cit.>, <cit.> or Theorem <ref>.(<ref>)), we obtain that (<ref>) is nonzero only if the entries of (<ref>) are pairwise distinct. There are precisely (n+k-d-1) - (n-d-1) = k such choices for ℓ, all of the form λ_j + n -d - j > ℓ > λ_j+1 + n - d - (j+1), where 1 ≤ j ≤ n-d-1. For each such ℓ and j, it requires a minimal number of (n-d-1-j) permutations of entries of (<ref>) such that the resulting sequence (λ_1 + n-d-1, …, λ_j + n -d - j,  ℓ ,  λ_j+1 + n - d - (j+1) , …, λ_n-d-1+1) is strictly decreasing. Subtracting ρ from the above sequence, we obtain a partition (λ_1, …, λ_j,  ℓ+j-(n-d-1),  λ_j+1 +1, …, λ_n-d-1+1), which precisely corresponds to the partition λ^(i) of (<ref>), where i = ℓ+j-(n-d-1). Conversely, for any 1 ≤ i ≤ k, we let 1 ≤ j ≤ n-d-1 be such that λ_j ≥ i ≥λ_j+1+1. In this case, ℓ := |λ^(i)| - |λ| is the unique integer in [1, n+k-d-1] such that λ_j + n -d - j > ℓ > λ_j+1 + n - d - (j+1). For each such i and ℓ, the Borel–Weil–Bott theorem implies that (<ref>) is canonically equivalent to ^λ^(i)(R__d^+^∨)[ℓ-(n-d-1-j)] ⊗(⋀^ℓ W) = ^λ^(i)(R__d^+^∨) ⊗(⋀^|λ^(i)| - |λ| W) [i]. Hence the lemma is proved. Notice that Lemma <ref>.(<ref>) is the only part of the proof of the main theorem in this paper where the characteristic-zero assumption is required. Assume we are in the same situation as Lemma <ref>.(<ref>) and let k be an integer such that max{0,d-r+1}≤ k ≤ d. We let Ψ_i = Ψ be defined as in Notation <ref>; that is Ψ_i = Ψ() ⊗(_(;d))^⊗ i≃Ψ() ⊗(R__d^+^∨)^⊗ i. For any ℓ, d' ≥ 0, we define _ℓ, d' = ⟨{^λ(R__n-ℓ^+^∨) }_λ∈ B_ℓ,d'⟩⊆((; n-ℓ)). Then for each integer 0 ≤ i ≤ k - max{0,d-r+1}, the restriction of the functor Ψ_i, Ψ_i|__n-d-1,k-i_n-d-1,k-i→((;d)), is fully faithful, with essential image contained in _n-d,k. Moreover, these functors Ψ_i|__n-d-1,k-i, for 0 ≤ i ≤ k - max{0,d-r+1}, induce a semiorthogonal decomposition _n-d,k = ⟨⟨Ψ_k-i (_n-d-1,i) ⟩_i ∈ [0, max{0,d-r+1}] ,  Ψ_k-d+r^0(_n-d,d-r ) ⟩, where Ψ_k-d+r^0 denotes the functor ⊗(_(;d))^⊗ (k-d+r), the last component is understood as empty if d < r, and the semiorthogonal order of the first part is given by the usual order < of integers in [0, max{0,d-r+1}], that is: for all 0 ≤ j < i ≤ k - max{0,d-r+1}, ( Ψ_k-i (_n-d-1,i), Ψ_k-j (_n-d-1,j)) ≃ 0. _n-d,k = ⟨Ψ_0(_n-d-1,k), Ψ_1(_n-d-1,k-1), ⋯, Ψ_k-max{0,d-r+1} (_n-d-1,max{0,d-r+1}), _n-d,d-r⊗(_(;d))^⊗ (k-d+r)⟩, We will only prove the case where d ≥ r; the other case where d < r is similar and simpler. Notice that Lemma <ref>.(<ref>) implies that Ψ_i(^λ(R__d+1^+^∨)) ∈⟨{^λ(R__d^+^∨)}_λ∈ B_n-d,k⟩ for all 0 ≤ i ≤ k - d+r-1 and λ∈ B_n-d-1,k-i. This proves the assertion that the essential image Ψ_i|__n-d-1,k-i is contained in _n-d,k. To establish the desired semiorthogonal decomposition (<ref>), it suffices to prove: (a) Fully-faithfulness: The counit map Ψ_i^L Ψ_i →𝕀 is an equivalence when restricted to the subcategory _n-d-1,k-i, where 0 ≤ i ≤ k - d+r-1. As with Corollary <ref> and from the definition of _n-d-1,k-i, it suffices to prove that Ψ_i^L Ψ_i (^λ(R__d+1^+^∨)) →^λ(R__d+1^+^∨) is an equivalence for all λ∈ B_n-d-1,k-i. This follows from Lemma <ref>.(<ref>)-(<ref>). (b) Semiorthogonality: * For all 0 ≤ j < i ≤ k - d+r-1, Ψ_j^L Ψ_i ≃ 0 when restricted to the subcategory _n-d-1,k-i. As before, it suffices to prove that Ψ_j^L Ψ_i (^λ(R__d+1^+^∨))≃ 0 for all λ∈ B_n-d-1,k-i. This is again a direct consequence of Lemma <ref>.(<ref>)-(<ref>). 
* For all 0 ≤ j < i ≤ k - d+r-1, the restriction of the functor Ψ_i^L to Ψ_k-d+r^0(_n-d,d-r) is equivalent to zero. Once again, it suffices to prove that for any α∈ B_n-d,d-r, Ψ_i^L (^α(R__d^+^∨) ⊗(R__d^+^∨)^(k-d+r))≃ 0. Since d ≥ r and 0 ≤ i ≤ k-d+r-1, we have 1 ≤ k-d+r -i ≤ d. Hence ^α(R__d^+^∨) ⊗(R__d^+^∨)^(k-d+r)∈ B_n-d,d\ B_n-d-1,d, and the desired result follows from Lemma <ref>.(<ref>). (c) Generation: To complete the proof, we will show that any element ^α(R__d^+^∨), where α∈ B_n-d,d-r, belongs to the right-hand side of (<ref>). We will establish this result using induction. Let us introduce the following notations: for any ν∈ B_n-d,w and i ∈, where w ≥ 0 is an integer, we let ν(i) = (ν_1+i, …, ν_n-d+i). Let B_n-d,w(i) : = {ν(i) |ν∈ B_n-d,w}. Using these notations, we can express a disjoint union decomposition as follows: B_n-d,k = B_n-d,d-r(k-d+r) ⊔_i=0^n-d+r-1 B_n-d-1,k-i(i). It is clear that if α∈ B_n-d,d-r(k-d+r), meaning that α = ν(k-d+r) for some ν∈ B_n-d,d-r, then ^α(R__d^+^∨) = ^ν(R__d^+^∨) ⊗(R__d^+^∨)^(k-d+r) belongs to the right-hand side of (<ref>). Now we assume that α∈ B_n-d-1,k-i(i), that is, α = ν(i) for some ν∈ B_n-d-1,k-i. According to Lemma <ref>.(<ref>), there is a canonical map ^α(R__d^+^∨) →Ψ_i(^ν(R__d+1^+^∨)), and the cone of this map is given by iterated extensions of elements of the form ^β(R__d^+^∨) ⊗ K, where β∈ B_n-d,d-r(k-d+r) ⊔_j=i+1^n-d+r-1 B_n-d-1,k-j(j) and K is a finite free -module. Consequently, the desired result regarding generation follows from induction. §.§.§ Proof of Theorem <ref>, Part 2 We now complete the proof of Theorem <ref> by establishing the theorem in the universal local situation <ref> and when =, using the preparations made in the preceding subsections. If d = 0 or r=0, the desired result follows directly from Corollary <ref>. Therefore, we may assume d and r are both greater than zero. We now generate a semiorthogonal decomposition of ((;d)) by iteratively applying Corollary <ref>. Let us describe the process: (*) Starting with the case where k=d, we apply Corollary <ref> and obtain a semiorthogonal decomposition of ((;d)) = _n-d,d. This decomposition takes the form of the form (<ref>) and its components are given by the images Ψ_i (_n-d-1, d-i) for 0 ≤ i ≤ d - max{0, d-r+1}, and Ψ_d-r^0(_n-d,d-r) (if d ≥ r). Notably, the appearing subcategories _a,b as the domains of Ψ_i or Ψ_d-r^0 satisfy the condition n - r ≤ a+b ≤ n - 1. For any subcategory _a,b appearing in the above decomposition with a+b>n -r (implying _a,b = _n-d-1, d-j for some j ≥ 0), we further decompose _a,b by applying Corollary <ref> again. The involved subcategories _a',b' appearing of this decomposition again satisfy n - r ≤ a'+b' ≤ a+b-1 ≤ n - 2. We continue this process, applying Corollary <ref> to each involved subcategory _a',b' such that a'+b'> n-r, until all the subcategories are of the form _a”,b”, where a”+b”=n-r. The above process (*) clearly terminates in a finite number of steps. At the end, we obtain a semiorthogonal decomposition of ((;d)) whose components are given by fully faithful images of subcategories of the form _n-d-r+i,d-i, where 0 ≤ i ≤min{r,d}. Each such category _n-d-r+i,d-i is embedded via a functor of the form: Ψ_a_1∘Ψ_a_2∘⋯∘Ψ_a_r-i∘Ψ_i - ∑ a_j^0 _n-d-r+i,d-i→((;d)), where Ψ_i - ∑ a_j^0= ⊗(_(;d+r-i))^i - ∑ a_j; the notation Ψ_i - ∑ a_j^0 indicates that it is a “zero-times composition of Ψ's, further twisted by a line bundle of degree (i - ∑ a_j)". Here, a_1, …, a_r-i≥ 0 is a (possibly empty) sequence of integers with with ∑ a_j ≤ i. 
If i=r, we understand a_1,…,a_0 as the empty sequence and (<ref>) as the functor Ψ_r^0 = ⊗(_(;d))^r. Conversely, for any (possibly empty) sequence of integers a_1, …, a_r-i≥ 0 with ∑ a_j ≤ i, there is precisely one copy of _n-d-r+i,d-i embedded as the image of the functor (<ref>) in the semiorthogonal decomposition obtained through the above process (*). Moreover, for any given 0 ≤ i ≤min{r,d}, such a (possibly empty) sequence a_1, …, a_r-i is in one-to-one correspondence with a (possibly zero) partition λ∈ B_r-i,i via the formula a_1 = i -λ_1, a_2 = λ_1 - λ_2, …, a_r-i = λ_r-i-1 - λ_r-i. Here, if r=i, the empty sequence a_1, …, a_0 corresponds to the zero partition (0)∈ B_0,r. For each such (possibly empty) sequence a_1, …, a_r-i (or equivalently, for each partition λ∈ B_r-i,i, in view of (<ref>)), composing (<ref>) with the equivalence of Corollary <ref>, Φ^(0)_(d+r-i,d-i)((^∨[1];d-i)) _n-d-r+i,d-i, we obtain precisely one copy of ((^∨[1];d-i)) embedded into ((;d)) via the fully faithful functor Ψ_a_1∘⋯∘Ψ_a_r-i∘ (⊗(_(;d+r-i))^i - ∑ a_j) ∘Φ_(d+r-i,d-i)^(0)≃Φ^(i, λ)_(d,d-i), where the last equivalence follows from Corollary <ref> and (<ref>). To summarize, for each 0 ≤ i ≤min{r,d} and each partition λ∈ B_r-i,i, we obtain an embedding of ((^∨[1], d-i)) into ((,d)) via the fully faithful functor Φ^(i,λ) = Φ^(i, λ)_(d,d-i). All the components produced in the process (*) can be expressed in this form in a unique way. Therefore, we have obtained the desired semiorthogonal decomposition. Furthermore, it is clear from the Corollary <ref> and the process (*) that the resulting semiorthogonal decomposition has the semiorthogonal order given by the lexicographic order <_ lex of the sequences (a_1, a_2, …, a_r-i, 0, 0, …) indexing the components embedded via the functors (<ref>), where the empty sequence represents the largest element. In view of (<ref>), this is equivalently to the order <_ diff on the pairs (i,λ) defined in Notation <ref>. This concludes the proof of Theorem <ref> in the universal local situation. By combining it with the argument presented in <ref>, we have completed the proof of Theorem <ref>. □ § APPLICATIONS In this section, we explore some of the applications of Theorem <ref> in classical scenarios. We will fix a -algebra , and consider schemes, morphisms, and classical fiber products within the category of -schemes. We let the symbol denote , , or . §.§ Blowups of Determinantal Ideals We consider a -scheme with Z ⊆ X a determinantal subscheme of codmension (r +1), where r ≥ 1. For simplicity, we define Z as the zero subscheme of a Fitting ideal Fitt_r(^0()) (see <cit.>), where is a perfect complex with Tor-amplitude in [0,1] and rank r and ^0() is the zeroth sheaf homology of . We consider the projection π =_(;r)_X(;r) → X. Assuming that _X(;r) is a classical scheme and that E:=π^-1(Z) ⊆_X(;r) is an effective Cartier divisor, then _X(;d) is isomorphic to the (classical) blowup π_Z(X) = _X⊕_n ≥ 0_Z^n → X of X along Z, with (_(;d)) = __Z X(-E) ⊗ () (see <cit.>). For each j ≥ 0, we let X_j (=X^≥ r+j(^0()) of <cit.>) be the closed subscheme defined by the Fitting ideal Fitt_r-1+j(^0()); notice then X_0 = X and X_1 = Z. We write X_j : = _X(^∨[1]; j) → X. The underlying classical map of X_j→ X factorizes through X_j, and is an isomorphism over X_j \ X_j+1 (see <cit.> or <cit.>). Therefore, we can view X_j as a (possibly derived) partial desingularitzation of the higher determinantal subscheme X_j. 
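To make the stratification X = X_0 ⊇ X_1 = Z ⊇ X_2 ⊇ ⋯ concrete, the following small sympy sketch (ours; the generic 3 × 2 matrix of indeterminates is a made-up presentation with n = 3, m = 2, hence r = 1) lists the minors generating the Fitting ideals Fitt_{r-1+j}, that is, the (m+1-j)-minors cutting out the locus where the presentation matrix has rank at most m - j.

```python
import sympy as sp
from itertools import combinations

def k_minors(M, k):
    """All k x k minors of the sympy matrix M (an empty list means the zero ideal)."""
    rows, cols = M.shape
    return [M.extract(list(R), list(C)).det()
            for R in combinations(range(rows), k)
            for C in combinations(range(cols), k)]

# made-up presentation: a generic 3 x 2 matrix of indeterminates, so the
# two-term complex [O^2 -> O^3] has Tor-amplitude in [0,1] and rank r = 1
x = sp.symbols('x0:6')
phi = sp.Matrix(3, 2, list(x))

# X_j = V(Fitt_{r-1+j}) is cut out by the (m+1-j)-minors of phi, i.e. the
# locus where phi has rank at most m - j (here m = 2)
for j in range(3):
    gens = k_minors(phi, 3 - j)
    print(f"X_{j}: {len(gens)} generating minors of size {3 - j}")
# X_0: no minors, the zero ideal, so X_0 = X
# X_1 = Z: the three 2-minors, codimension 2 = r + 1
# X_2: the six entries, codimension 6 = j(r + j) for j = 2
```

For this generic presentation the loci X_j have the expected codimensions j(r+j) recalled below.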
For example, in the case where X is an irreducible Cohen–Macaulay subscheme and X_j ⊆ X have expected codimensions j(r+j) for all j ≥ 1, then X_j are classical irreducible Cohen–Macaulay schemes and X_j→ X_j are IH-small partial desingularitzations (see <cit.>) for all j ≥ 1. For each j ≥ 0, the incidence locus _(r,j)() is a possibly derived scheme, whose underlying classical scheme is the classical fiber product _Z(X)×_X^ clX_j, and _(r,j)^ univ is a universal perfect complex of rank j and Tor-amplitude in [0,1] on _(r,j)(). For each j ≥ 0 and λ∈ B_j, r-j, we consider the Fourier–Mukai functors Ω_(j,λ) := r_+ *(r_-^*() ⊗^λ(^ univ_(r,j)) ) ⊗__Z(X)(j E) (X_j) →(_Z(X)), where r_± are the natural projection maps in the incidence diagram (<ref>), X_j_(r,j)() _Z(X). We denote the essential image of Ω_(j,λ) by (X_j)_(j,λ). When j=0, the functor Ω_(0,(0)) is the pullback functor π^* (X) →(_Z(X)), and we denote its essential image by (X)_0. As a result of our main theorem <ref>, we obtain the following corollary: In the situation of a determinantal subscheme Z ⊆ X of codmension (r +1) as described above, where r ≥ 1, the functors Ω_(j,λ) are fully faithful for all j ≥ 0 and λ∈ B_j, r-j. Moreover, these functors Ω_(j,λ), where 0 ≤ j ≤ r and λ∈ B_j, r-j, induce a semiorthogonal decomposition (_Z(X)) = ⟨⟨(X_j)_(j,λ)| 1 ≤ j ≤ r, λ∈ B_j,r-j⟩,  (X)_0 ⟩, with semiorthogonal order given as follows: ( (X_j)_(j,λ), (X_k)_(k,μ)) ≃ 0 if (r-k,μ) < (r-j,λ), where < is the total order defined in Notation <ref>. This result generalizes both Orlov's blowup formula <cit.> and the formula for blowups of Cohen–Macaulay subschemes of codimension 2 (<cit.>). If we base-change the above semiorthogonal decomposition to the Zariski open subset X \ Z_2, we recover Orlov's blowup formula for the local complete intersection (l.c.i.) closed immersion (Z\ Z_2) ⊆ (X \ Z_2). Therefore, the above formula extends Orlov's to the non-l.c.i. loci of Z ⊆ X. The “corrections" to Orlov's formula in this situation are precisely given by copies of derived categories of the partial resolutions Z_j of the higher determinantal loci Z_j ⊆ Z for 2 ≤ j ≤ r. Even if we don't assume that _X(;r) is classical and π^-1(Z) ⊆_X(;r) is an effective Cartier divisor, the semiorthogonal decomposition described in Corollary <ref> still applies to (_X(;r)). However, in this situation, _X(;r) is no longer isomorphic to the classical blowup _Z(X). Instead, we should regard _X(;r) as a derived version of blowup of X along Z. We expect this perspective to be closely related to the concept of a derived blowup of Hekking, Khan and Rydh (see <cit.>). §.§ Reducible Schemes We consider two classes of reducible schemes. §.§.§ Let X be a -scheme, and let Z ⊆ X a regularly immersed closed subscheme of codmension r ≥ 1 with normal bundle _Z/X. For simplicity, we assume that Z is the zero locus of a regular section s of a rank r vector bundle over X. We also consider a line bundle on X, and denote by _Z the restriction of to Z. We define a perfect complex of Tor-amplitude in [0,1] and rank r as follows: = [ _X ⊕ ] so that ^∨[1] ≃ [ ^∨⊕^∨_X ]. We have the following observations: * The derived Grassmannian _X(; r) is isomorphic to the classical reducible scheme _Z(X) __Z(_Z/X^∨)_Z(_Z/X^∨⊕_Z^∨), where _Z(_Z/X^∨) ⊆_Z(X) is the inclusion of the exceptional divisor, and _Z(_Z/X^∨) ⊆_Z(_Z/X^∨⊕_Z^∨) is the closed immersion induced by _Z/X⊆_Z/X⊕_Z. The scheme structure is described as follows. 
By working Zariski locally on X, we may assume that X = R for some commutative ring R, = _X^r and = _X, and s is given by a regular sequence (x_1, …, x_r) of R. Using <cit.> and the fact that s is regular, we obtain that the regular closed immersion _X(;r) ↪_X(_X ⊕_X^r; r) ≃ R ×^r is identified with the inclusion of the classical subscheme defined by the equations x_i X_j - x_j X_i = 0 for 1 ≤ i < j ≤ r, and x_k X_0 =0 for 1 ≤ k ≤ r, where [X_0: X_1: …: X_r] denotes the homogeneous coordinates of ^r. (In fact, one can work over affine charts of ^r as follows. For any 1 ≤ i ≤ r, let U_i = {X_i 0}≃Å^r ⊆^r, with affine coordinates (u_0, …, u_i, …, u_r), u_j = X_j/X_i for j≠ i. In the local chart R × U_i, _X(;r) is defined by the equations {x_j = x_i u_j | j≠ 0,r} together with u_0 · x_i = 0. The first (r-1) equations {x_j = x_i u_j} are precisely the defining equations for the blowup _Z(X) in X ×Å^r-1 = R[u_1,…, u_i, …, u_r], which we shall denote as _Z(X)_U_i. The last equation u_0 · x_i =0 defines a normal crossing divisor in _Z(X)_U_i×[u_0], where the two divisors {u_0 = 0}≃_Z(X)_U_i and {x_i =0}≃ Z × U_i intersect along {u_0 = x_i = 0}≃ Z ×{u_0 = 0}≃ Z ×Å^r-1.) * By virtue of <cit.>, we have a canonical equivalence q_Z _X(^∨[1]) ≃ Tot_Z(_Z[-1]) → Z, where Tot_Z(_Z[-1]) = _Z^*(_Z^∨[1]) denotes total space of _Z[-1]. * The map r_- exhibits the incidence locus _(r,1)() as the projective bundle r_- =q _(r,1)() ≃_ Tot_Z(_Z[-1])(p_Z^*(_Z/X^∨⊕_Z^∨)) → Tot_Z(_Z[-1]), and the universal perfect complex _(r,1)^ univ is isomorphic to _q(-1) (see Lemma <ref>.(<ref>)). The map r_+ = ι is a closed immersion (see Lemma <ref>.(<ref>)). For 1 ≤ j ≤ r, we let Ω_j = ι_*(q^*() ⊗_q(-j) ) ( Tot_Z(_Z[-1]) ) →(_X(;d)). Therefore, in the above situation, Theorem <ref> implies that: The pullback functors ^*_(;d) and Ω_j (where 1 ≤ j ≤ r) are fully faithful. Denoting the essential image of _(;d)^* as (X)_0 and Ω_j as ( Tot_Z([-1]) )_-j (where 1 ≤ j ≤ r), we have a semiorthogonal decomposition: (_Z(X) __Z(_Z/X^∨)_Z(_Z/X^∨⊕_Z^∨)) = ⟨( Tot_Z(_Z[-1]) )_-r, ⋯, ( Tot_Z(_Z[-1]))_-1,  (X)_0 ⟩. In the special case where = _X is trivial, Tot_Z(_Z[-1]) = Z[ε_1], where Z[ε_1] denotes Z ×(^*([1])), and the above semiorthogonal decomposition reduces to (_Z(X) __Z(_Z/X^∨)_Z(_Z/X^∨⊕_Z)) = ⟨(Z[ε_1])_-r, ⋯, (Z[ε_1])_-1,  (X)_0 ⟩. Here, the scheme appearing on the left-hand side is precisely the central fiber ρ^-1(0) in the deformation-to-normal-cone construction, where ρ denotes the natural projection _Z ×{0} (X ×Å^1) →Å^1. In this case, the above semiorthogonal decomposition agrees with the derived base-change of Orlov's blowup formula <cit.> for _Z ×{0} (X ×Å^1) to the central fiber. In the special case where Z=D is an effective divisor and = _D/X, we recover the the semiorthogonal decomposition (X _D_D^1) = ⟨( Tot(_D/X[-1]) ),  (X) ⟩ of <cit.>. If we furthermore assume that X=C is a complex curve and Z = {p} is a non-singular closed point, we recover the semiorthogonal decomposition (C _p^1) = ⟨([ε_1]), (C) ⟩ of <cit.> (see also <cit.>), where [ε_1] = ^*_([1]) is the ring of derived dual numbers. Hence the above result is a higher-codimensional generalization of the formula <cit.> for attaching ^1-bundles to divisors. §.§ Varieties of Linear Series on Curves In this subsection, we consider the case where =, and study a family of smooth complex projective curves /S of genus g ≥ 1. For simplicity, we assume the existence of a section σ S → of /S. 
We denote the classical (rigidified) relative Picard functor of degree d by _/S^d, which assigns to each S-scheme T the isomorphism class of pairs (_T, i), where _T is a line bundle on X_T with fiberwise degree d, and i is an isomorphism σ^*(_T) _T. Under this assumption, the functor _/S^d is representable by a locally projective, smooth S-scheme _/S^d → S of relative dimension g (see <cit.>). Let _ univ be the Poincaré line bundle on _/S^d ×_S and _/S^d ×_S →_/S^d the natural projection. By applying the argument of <cit.>, we obtain that := (_* (_ univ) )^∨ is a perfect complex on _/S^d of Tor-amplitude in [0,1] and rank (1-g+d). For an integer r ≥ -1, we define a (possibly derived) scheme _d^r(/S): = __/S^d(; r+1) →^d_/S. This derived scheme is proper and quasi-smooth over ^d_/S, and its underlying closed points over a point s ∈ S() correspond to the -points of the variety G_d^r(_s) of linear series g_d^r of degree d and dimension r on _s as studied in <cit.>. More specifically, the closed points of _d^r(/S) over s ∈ S() are given by the isomorphism classes of the pair (_s, g_d^r), where _s is a line bundle on _s of degree d, and g_d^r is a r-dimensional linear projective subspace of ^ sub(^̋0(_s; _s)). For any 0 ≤ i ≤ r+1, the relative Serre duality implies the isomorphism _2g-2-d^r-i(/S) ≃__/S^d(^∨[1]; r+1-i) →^d_/S whose underlying map carries a pair (_s, g_2g-2-d^r-i) to the line bundle _s^∨⊗ω__s∈^2g-2-d(_s). In this case, Theorem <ref> yields the following corollary: In the above situation, assuming that d ≥ g-1 and r ≥ -1, there exists a semiorthogonal decomposition: (_d^r(/S)) = ⟨1-g+di copies of (_2g-2-d^r-i(/S)) ⟩_0 ≤ i ≤min{1-g+d, r+1}, where the Fourier-Mukai functors and semiorthogonal orders are given as in Theorem <ref>. Now we focus on the case where S =. We denote _d^r(C) = _d^r(C/). If C is a general curve, _d^r(C) = G_d^r(C) is the classical variety of linear series on C of degree d and dimension r studied in <cit.>. Similarly, _2g-2-d^r-i(C) = G_2g-2-d^r-i(C) for 0 ≤ i ≤ r+1. Moreover, these varieties are reduced, smooth, and have expected dimensions (<cit.>). They are non-empty precisely when their expected dimensions are non-negative (<cit.>). In this case, Corollary <ref> implies (G_d^r(C)) = ⟨1-g+di copies of (G_2g-2-d^r-i(C)) ⟩_0 ≤ i ≤min{1-g+d, r+1}. Additionally, when C is general, it can be shown, following a similar argument as in <cit.>, that the incidence schemes _r+1,r+1-i() are isomorphic to the classical fiber products G_d^r(C) ×__C^d G_2g-2-d^r-i(C) and have expected dimensions. Furthermore, for a general curve C, Lin and Yu <cit.> showed that the derived categories (G_2g-2-d^r-i(C)) are indecomposable for all 0 ≤ i ≤ r+1. If C is special, then Corollary <ref> still holds and reveals many intriguing phenomena. Here, we only focus on two 3-fold examples: If C is a (non-hyperelliptic) trigonal curve of genus 5, then W^2_5(C) ≃ W^1_3(C) consists of a single point. In this case, ^1_5(C) =G^1_5 is a classical irreducible singular threefold, and ^0_3 ≃ C^(3) is classical and smooth. Corollary <ref> implies (G^1_5(C)) = ⟨(^1_3(C)),  (C^(3)) ⟩, where ^1_3(C) is a nonclassical derived scheme with virtue dimension -1, and and has underlying scheme W^2_5(C) ≃ W^1_3(C) whose support consists of a single point. The birational map C^(3) G^1_5(C) is a flip of threefold, and the embedding (C^(3)) ↪(G_5^1(C)) is induced by the structure sheaf of the classical reducible scheme C^(3)×^ cl_^3(C) G_1^5(C). 
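The bookkeeping in the corollary above is easy to automate. The following sketch (ours; the helper name is hypothetical) lists, for given (g, r, d) with d ≥ g-1 and r ≥ -1, the multiplicities C(1-g+d, i) and the residual varieties G^{r-i}_{2g-2-d}(C) appearing in the decomposition of the derived category of G^r_d(C), and recovers the two components of the genus-5 trigonal example above.

```python
from math import comb

def linear_series_components(g, r, d):
    """Components predicted by the corollary above: C(1-g+d, i) copies of
    D(G^{r-i}_{2g-2-d}(C)) for 0 <= i <= min(1-g+d, r+1).
    (Assumes d >= g-1 and r >= -1, as in the corollary.)"""
    chi = 1 - g + d
    return [(comb(chi, i), (r - i, 2 * g - 2 - d))
            for i in range(min(chi, r + 1) + 1)]

# trigonal genus-5 example above: g = 5, d = 5, r = 1, so 1 - g + d = 1 and the
# decomposition has one copy of D(G^1_3) and one copy of D(G^0_3) = D(C^(3))
print(linear_series_components(5, 1, 5))   # [(1, (1, 3)), (1, (0, 3))]
```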
If C is a general trigonal curve of genus 7, then W^1_6(C) = 3, W^2_6(C) = 0, and they are both nonempty (see <cit.>). In this case, ^1_6(C) = G^1_6(C) is a classical, equidimensional scheme of dimension three. Hence Corollary <ref> implies a derived equivalence (G^1_6(C)) (G^1_6(C)) for the threefold flop G^1_6(C) → W^1_6(C) ← G^1_6(C), where the second projection G^1_6(C) → W^1_6(C)⊆^6(C) is given by (L,g^1_6) ↦ L^∨⊗ω_C. Moreover, the derived equivalence is induced by a nonclassical incidence scheme whose underling scheme is the classical fiber products G^1_6(C) ×_^6(C)^ cl G^1_6(C). The above examples highlight the importance of considering both the derived structures on _d^r's as well as derived structures on incidence correspondence schemes when studying semiorthogonal decompositions for varieties of linear series on special curves. The framework presented in this paper allows us to extend Corollary <ref> to families of singular curves /S. For instance, consider a family /S of integral Gorenstein curves with arithmetic genus g≥ 1, and let d≥ g-1 be an integer. In the case of r=0, we obtain a semiorthogonal decomposition: (^d_/S) = ⟨(^2g-2-d_/S) ,  ( Jac^d_/S)_1, ⋯, ( Jac^d_/S)_1-g+d⟩, where ^d_/S and ^2g-2-d_/S are derived Hilbert schemes of d and (2g-2-d) points, respecitvely, for the family /S, and Jac^d_/S is the compactified Jacobian scheme parametrizing rank one torsion-free sheaves of degree d. Similar generalizations of Corollary <ref> exist in the case of all r. The details will appear in a forthcoming paper. alpha
http://arxiv.org/abs/2307.00528v1
20230702094123
Mixed Riemann-Hilbert boundary value problem with simply connected fibers
[ "Miran Černe" ]
math.CV
[ "math.CV", "35Q15, 30E25" ]
[thanks] The author was supported in part by grants Analiza in geometrija P1-0291, Kompleksna in geometrijska analiza J1-3005, Holomorfne parcialne diferencialne relacije N1-0237 and Nelinearni valovi in spektralna teorija N1-0137 from ARRS, Republic of Slovenia. miran.cerne@fmf.uni-lj.si organization=Faculty of Mathematics and Physics, University of Ljubljana, and Institute of Mathematics, Physics and Mechanics, addressline=Jadranska 19, city=Ljubljana, postcode=1 111, country=Slovenia We study the existence of solutions of mixed Riemann-Hilbert or Cherepanov boundary value problem with simply connected fibers on the unit disk . Let L be a closed arc on with the end points ω_-1, ω_1 and let a be a smooth function on L with no zeros. Let γ_ξ_ξ∈∖L be a smooth family of smooth Jordan curves in which all contain point 0 in their interiors and such that γ_ω_-1, γ_ω_1 are strongly starshaped with respect to 0. Then under condition that for each w∈γ_ω_± 1 the angle between w and the normal to γ_ω_± 1 at w is less than π/10, there exists a Hölder continuous function f on , holomorphic on , such that Re(a(ξ) f(ξ)) = 0 on L and f(ξ)∈γ_ξ on ∖L. Boundary value problem, mixed Riemann-Hilbert problem, Cherepanov problem [2020] 35Q15, 30E25 § INTRODUCTION Let = z∈; |z|<1 be the open unit disc in the complex plane and let = ξ∈; |ξ|=1 be the unit circle. Let L be a closed arc on , let L denote its interior with respect to , and let a : L→ be a smooth function. Recall that the interior Int(γ) of a Jordan curve γ⊆ is the bounded component of ∖γ. We orient γ positively with respect to Int(γ). Jordan curve γ⊂ is starshaped with respect to 0, if for any point w in the interior of γ the line segment which connects points 0 and w lies in the interior of γ, and it is strongly starshaped with respect to 0, <cit.>, if there exists a positive continuous function R on the unit circle such that γ={ w∈; |w|= R(w/|w|)} and Int(γ)={ w∈∖{0}; |w|< R(w/|w|)}∪{0}. Let γ_ξ_ξ∈∖L be a smooth family of smooth Jordan curves in which all contain point 0 in their interiors. In this paper we study the existence and properties of holomorphic solutions of the nonlinear mixed Riemann-Hilbert problem, that is, the Che­re­pa­nov boundary value problem with simply connected fibers. The problem asks for a continuous function f on , holomorphic on , such that Re(a(ξ) f(ξ)) = 0 for ξ∈ L and f(ξ)∈γ_ξ for ξ∈∖L. That is, f solves a linear Riemann-Hilbert problem on L and a nonlinear Riemann-Hilbert problem with simply connected fibers on ∖L. See also <cit.>. The problem with circular fibers γ_ξ and L a finite union of disjoint arcs was considered by Obnosov and Zulkarnyaev in <cit.>, and by the author in <cit.>. The structure of the family of solutions of problem (<ref>-<ref>) is well known in the cases where either L= or L=∅. If L=, we consider a homogeneous linear Riemann-Hilbert problem. In this case the essential information on the problem is given by the winding number W(a) of function a. It is well known <cit.> that if the winding number W(a) is nonnegative, the space of solutions of (<ref>) is a vector subspace of A^α(), 0<α<1, of real dimension 2W(a)+1. The linear Riemann-Hilbert problem can also be considered in the case of a nonorientable line bundle over , that is, in the case where at some point ξ_0∈ we have a(ξ_0^-) = - a(ξ_0^+). Then the winding number of function a^2 or the Maslov index of the problem is an odd integer. 
In this case it holds that if W(a^2)≥ -1, or, with a little bit of abuse of notation, if W(a)≥ -1/2, then the space of solutions of (<ref>) is a vector subspace of A^α() of real dimension 2W(a)+1, see <cit.>. If L is empty, we have a nonlinear Riemann-Hilbert problem with smooth simply connected fibers which all contain 0 in their interiors. This problem was considered and solved in <cit.>. In particular, it was proved that the family of solutions with exactly m zeros on , m∈∪{0}, forms a manifold in space A^α() of dimension 2m+1, and this manifold is compact if and only if m=0. We assume from now on that neither L=∅ nor L=. Let k≥ 3. Let a : L→∖{0} be a C^k+1 function and let γ_ξ_ξ∈∖L be a C^k family of Jordan curves in which all contain point 0 in their interiors. Let ω_1 and ω_-1 be the first and the last point of arc L with respect to the positive orientation of . Let Jordan curves γ_ω_j, j=± 1, be strongly starshaped with respect to 0 and such that for each w∈γ_ω_j the angle between w and the outer normal to γ_ω_j at w is less than π/10. Let w_j, j=± 1, be the intersection of γ_ω_j and the line Re(a(ω_j) w)=0 of the form λ (-i a(ω_j)), λ>0, and let πβ_j be the oriented angle of intersection of the line Re(a(ω_j)w)=0 with the fiber γ_ω_j at point w_j, where β_1∈(0,1) and β_-1∈(-1,0). Let 0< β < min{β_1, 1-β_1, |β_-1|, 1-|β_-1|}. Then there exists a unique f∈ A^β() with no zeros on which solves (<ref>-<ref>) for which f(ω_1)=w_1 and f(ω_-1)=w_-1. Here β_1> 0, if the tangent vector -i a(ω_1) to Re(a(ω_1)w)=0 is rotated counterclockwise by angle πβ_1 to get a positive tangent vector to γ_ω_1 at point w_1, and β_-1<0, if a positive tangent vector to γ_ω_-1 at w_-1 is rotated clockwise by angle π|β_1| to get tangent vector -i a(ω_-1) to Re(a(ω_-1)w)=0. Observe that conditions in Theorem <ref> imply ||β_j| - 1/2|<1/10, j=± 1, and hence one could choose β=2/5. In the cases considered in <cit.> all boundary curves were circles with center at point 0. Hence |β_j| = 1/2, j=± 1, and the maximal regularity we got was β<1/2. Let a_1,…, a_n∈ be a finite set of points with given multiplicities. Then under the assumptions of Theorem <ref> there exists β∈ (0,1) and f∈ A^β() which has zeros exactly at points a_1,…, a_n∈ with the given multiplicites and which solves (<ref>-<ref>). § FUNCTION SPACES, HILBERT TRANSFORM AND DEFINING FUNCTIONS Let 0<α<1 and let G⊂ be a compact subset. We denote by C^α(G) the algebra over of Hölder continuous complex functions on G and by C_^α(G) the algebra over of real Hölder continuous functions on G. Using the norm f_α = max_z∈ G |f(z)| + sup_z,w∈ G, z w|f(z)-f(w)|/|z-w|^α the algebras C^α(G) and C_^α(G) become Banach algebras. For G= or G= and k∈∪{0} we also define spaces C^k,α(G) and C_^k,α(G) of k times continuously differentiable functions on G, whose all k-th derivatives belong to space C^α(G) or space C_^α(G). We also need some algebras of holomorphic functions on . By A() we denote the disc algebra, that is, the algebra of continuous functions on which are holomorphic on , and by A^α() = A()∩ C^α() the algebra of Hölder continuous functions on the closed disc which are holomorphic on . Using appropriate norms, that is, the maximum norm ·_∞ for A() and the Hölder norm ·_α for A^α(), these algebras become Banach algebras. Similarly we define A^k,α() = A()∩ C^k,α() (k∈∪{0}, 0<α<1). Recall that Hilbert transform H assigns to a real function u on a real function Hu on such that the harmonic extension of f=u+i Hu to is holomorphic on and real at 0. 
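Concretely, on Fourier series the Hilbert transform acts as the multiplier -i sgn(n), so that Hu has zero mean and u + iHu is the boundary value of a holomorphic function whose value at 0 is the (real) mean of u. The following numerical sketch (ours, not from the paper) implements this normalization with the FFT and checks it on the identity H(cos θ) = sin θ.

```python
import numpy as np

def hilbert_circle(u):
    """Harmonic conjugate Hu of real samples u(theta_k), theta_k = 2*pi*k/N,
    normalized so that u + i*Hu extends holomorphically to the disc and is
    real (equal to the mean of u) at 0."""
    U = np.fft.fft(np.asarray(u, dtype=float))
    n = np.fft.fftfreq(len(U)) * len(U)          # integer frequencies
    return np.fft.ifft(-1j * np.sign(n) * U).real

theta = 2 * np.pi * np.arange(256) / 256
assert np.allclose(hilbert_circle(np.cos(theta)), np.sin(theta))
```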
It is known that H is a bounded linear operator on C_^k,α() (k∈∪{0}, 0<α<1), <cit.>, and hence the harmonic extension of f=u+i Hu to belongs to A^k,α(). Also, <cit.>, the Hilbert transform is a bounded linear operator on the Sobolev space W^k_p() of k times generalized differentiable functions with derivatives in L^p() (k∈∪{0}, 1<p<∞) equipped with the norm f_W_p^k = (∑_j=0^k D^jf_p )^1/p. Recall, <cit.>, that if = T_1∪ T_2 is a partition of in two subarcs T_1 and T_2 and if T_0⊆ T_1 is a compactly contained subarc of T_1, then for k∈∪{0}, 1<p<∞, 0<α<1 there exists a constant C=C(k,p,α) such that Hu_W^k_p(T_0)≤ C (u_W^k_p(T_1) + u_L^1(T_2)) and Hu_C^k,α(T_0)≤ C (u_C^k,α(T_1) + u_L^1(T_2)). We will also need compact embedding result, <cit.>, W^1_p()↪ C^β()↪ C^α() for 0<α<β<1-1/p, 1<p<∞, which holds on arcs in as well. Since L we can extend a to as a nowhere zero function of class C^k+1 so that the winding number W(a)=0. Therefore, <cit.>, we can write a in the form a = r e^h, where r>0 is a positive C^k,α function on and h∈ A^k,α(). Thus the original problem (<ref>-<ref>) is equivalent to the problem Im(f_∗(ξ)) = 0 for ξ∈ L and f_∗(ξ)∈γ^∗_ξ for ξ∈∖L, where f_∗ = i e^h f and γ^∗_ξ = i e^h(ξ)γ_ξ. Observe that the number of zeros of f_∗ and f are the same and that 0 belongs to the interiors of all curves γ^∗_ξ, ξ∈∖L. Also, since for each ξ∈ the transfomation w⟼ i e^h(ξ) w is a composition of a dilation and a rotation, the angle conditions from Theorem <ref> stay the same. Using a holomorphic automorphism of the unit disc we may even assume that L={ξ∈; Im(ξ)≤ 0} is the lower semicircle. From now on we will consider problem (<ref>-<ref>) with the addition that L is the lower semicircle and instead of f_∗ and γ^∗_ξ_ξ∈∖L we will still write f and γ_ξ_ξ∈∖L. One can also create the 'double' of the boundary value problem. Using a biholomorphism one can replace the unit disc with the upper half-disk _+={ξ∈; Im(ξ)> 0} and L by the interval -1,1. By the reflection principle we see that problem (<ref>-<ref>) is equivalent to the nonlinear Riemann-Hilbert problem on , where the boundary curves γ_ξ_ξ∈_+∖L are symmetrically extended and defined on the lower semicircle so that we have γ_ξ = γ_ξ for every ξ∈∖{1,-1}. In general this symmetrical extension of Jordan curves γ_ξ_ξ∈_+∖L to the lower semicircle produces boundary data which are not continuous at points 1 and -1. Because the biholomorphism from to the upper semidisc is in A^1/2(), we get that the regularity of solutions of (<ref>-<ref>) is in general a half of the regularity of solutions of the symmetrical Riemann-Hilbert problem. We will consider smooth families of smooth Jordan curves γ_ξ_ξ∈∖L in . Let k∈. The family of Jordan curves γ_ξ_ξ∈∖L is a C^k family parametrized by ξ∈∖L if there exists a function ρ∈ C^k((∖L)×) such that γ_ξ= w∈; ρ(ξ,w) = 0 and Int(γ_ξ)= w∈; ρ(ξ,w) < 0, and the gradient ∂ρ/∂w(ξ,w)= ρ_w(ξ,w) 0 for every ξ∈∖L and w∈γ_ξ. We call ρ a defining function for C^k family of Jordan curves γ_ξ_ξ∈∖L. We will consider only bounded families of Jordan curves which all lie in some fixed disc (0,R), R>0, and the space C^k((∖L)×(0,R)) is equipped with the standard C^k norm. Since we assume that γ_± 1 are strongly starshaped Jordan curves, we also assume that for ρ, the defining function for Jordan curves γ_ξ_ξ∈∖L, and j=± 1 we have ρ(j,w) = |w|^2 - R_j^2(w/|w|) for some positive C^k functions R_j(z) on . 
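For a curve given in polar form {R(φ)e^{iφ}}, the tangent at w = R(φ)e^{iφ} is (R'(φ)+iR(φ))e^{iφ} and the outer normal is (R(φ)-iR'(φ))e^{iφ}, so the angle between w and the outer normal equals arctan(|R'(φ)|/R(φ)). The hypothesis that this angle stays below π/10 can therefore be checked numerically; the sketch below (ours, with a made-up radial function) does exactly this.

```python
import numpy as np

def max_normal_angle(R, Rprime, num=4096):
    """Largest angle between w = R(phi) e^{i phi} and the outer normal to the
    curve {R(phi) e^{i phi}}; the tangent is (R' + iR) e^{i phi}, the outer
    normal (R - iR') e^{i phi}, so the angle equals arctan(|R'|/R)."""
    phi = np.linspace(0.0, 2 * np.pi, num, endpoint=False)
    return float(np.max(np.arctan(np.abs(Rprime(phi)) / R(phi))))

# made-up radial function R(phi) = 1 + 0.2 cos(phi): strongly starshaped,
# and the angle stays below pi/10 (about 0.201 < 0.314)
angle = max_normal_angle(lambda p: 1 + 0.2 * np.cos(p), lambda p: -0.2 * np.sin(p))
print(angle, angle < np.pi / 10)
```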
Using parametrization θ↦ e^iθ of the unit circle we will also use the notation γ_θ, ρ(θ,w) and ρ_θ(θ, w) instead of γ_ξ, ρ(ξ,w) and ρ_ξ(ξ,w). Also, for a function h on , we will write either h(ξ) or h(θ), where ξ=e^iθ. Observe that if h is holomorphic on with well defined derivative on , then ∂ h/∂θ (θ) = iξ h'(ξ) for ξ = e^iθ. The reflection principle and the symmetric extension to the lower semicircle mentioned in Remark <ref> is in terms of defining function ρ given as ρ(ξ, w) = ρ(ξ, w) for every ξ∈∖{1,-1} and every w∈. § REGULARITY OF SOLUTIONS In this section we prove regularity of continuous solutions of a specific form of problem (<ref>-<ref>), where the defining function ρ∈ C^k((∖L)×) (k≥ 3). Let f∈ A() be a solution of (<ref>-<ref>). It is well known <cit.> that f restricted to ∖{-1,1} is in C^k-1,α for any 0<α<1. Hence we need information on the regularity of f near points ξ=± 1. For j=± 1 we denote f(j)=w_j∈∩γ_j. Using Möbius tranformation from the unit disc to the upper half-plane H = {z∈; Im(z)>0} we consider the case where f is bounded and continuous on H and holomorphic on H. Also, point ξ=1 is mapped into t=0 and point ξ=-1 into ∞. Now f solves the problem Im(f(t)) = 0 for t≤ 0 and f(t)∈γ_t for t≥ 0. Also, using translation, we will assume that f(0)=0∈∩γ_0. Let πβ_1 (β_1∈ (-1,1)∖{0}) be the oriented angle of intersection of the real axis Im(w)=0 and γ_0 at w=f(0)=0. The orientation of the real axis is positive with respect to the upper half-plane and the orientation of γ_0 is positive with respect to the interior of γ_0. Hence β_1> 0, if the tangent vector to the real axis is rotated counterclockwise by angle πβ_1 to get a tangent vector to γ_0 at point 0, and β_1<0, if the tangent vector to the real axis is rotated clockwise by angle π|β_1| to get a tangent vector to γ_0 at 0. The defining function ρ can near (0,0) for t≥ 0 be written as ρ(t,w) = ρ(0,0)+ρ_t(0,0) t + 2 Re(ρ_w(0,0) w) + 1/2ρ_tt(0,0) t^2 + + ρ_ww(0,0)|w|^2 + Re(ρ_ww(0,0)w^2 + ρ_tw(0,0) t w) + √(t^2+|w|^2) g(t,w), where g∈ C^1(×) such that g(0,0) = g_t(0,0)=g_w(0,0)=g_w(0,0)=0. Recall that ρ(0,0)=0 and that ρ_w(0,0) represents an outer normal to γ_0 at point w=0. So we have ρ_w(0,0) = -i λ e^iπβ_1 for some real λ > 0. We may assume λ = 1/2. Because Re(i e^-iπβ_1 w) = - Im (e^-iπβ_1 w) = Im (e^iπ(1-β_1) w) we have ρ(t,w) = A t + Im(e^iπ(1-β_1) w) + B t^2 + C |w|^2+ + Re(D w^2) + t Re(E w) +√(t^2+|w|^2) g(t,w) for some A,B,C∈ and D,E∈. Let us assume that we have a solution f of the problem (<ref>-<ref>) of the form f(t) = t^sκ(t), where κ is bounded and continuous on H, holomorphic on H, and 0<s<1 to be determined. For t≤ 0 we have t=(-1)|t| and from (<ref>) we get Im( e^iπ sκ(t))=- Im( e^iπ (1+s)κ(t))=0. On the other hand for t>0 we have 1/t^sρ(t,t^s κ(s)) = A t^1-s + Im(e^iπ(1-β_1)κ(t)) + B t^2-s + C t^s|κ(t)|^2+ +t^s Re(D κ(t)^2) + t Re(E κ(t)) + √(t^2-2s+|κ(t)|^2) g(t,t^sκ(t))=0. We choose 0<s<1 so that κ solves boundary value problem with continuous boundary data. That is, we choose s= 1-β_1, if β_1> 0, and s=-β_1=|β_1|, if β_1<0. Thus κ solves the following Riemann-Hilbert problem Im( e^iπ (1-β_1)κ(t))= 0 for t≤ 0 and ρ(t,κ(t)) = 0 for t≥ 0, where, if β_1> 0, ρ(t,w) = A t^β_1 + Im(e^iπ (1-β_1) w) + B t^1+β_1 + C t^1-β_1|w|^2+ + t^1-β_1 Re(D w^2) + t Re(E w) + √(t^2β_1+|w|^2) g(t,t^1-β_1 w), and, if β_1< 0, ρ(t,w) = A t^1-|β_1| + Im(e^iπ (1-β_1) w) + B t^2-|β_1| + C t^|β_1||w|^2+ + t^|β_1| Re(D w^2) + t Re(E w) + √(t^2-2|β_1|+|w|^2) g(t,t^|β_1| w). 
For such choice of s are the defining function for problem (<ref>-<ref>) (t,w)⟼{[ ρ(t,w) = 1/t^sρ(t,t^s w) ; t≥0, w∈; Im( e^iπ(1-β_1) w); t≤ 0, w∈ ]. and its partial w-derivative (t,w)⟼{[ ρ_w(t,w) = ρ_w(t,t^s w) ; t≥0, w∈; 1/2i e^iπ(1-β_1); t≤ 0, w∈ ]. continuous on ×. On the other hand, the partial derivative of defining function (<ref>) with respect to the t variable is not continuous at t=0, but, as we will see, it still has certain L^p regularity properties, which will imply regularity conditions on κ and f. We know that κ is C^k-1,α on ∖{0} and we can differentiate (<ref>-<ref>) on ∖{0} to get Im( e^iπ (1-β_1)κ'(t))= 0 for t< 0 and ρ_t (t,κ(t))+ 2 Re(ρ_w(t,κ(t))κ'(t)) = 0 for t> 0. For t>0 and β_1>0 we have ρ_t(t,w)= A β_1 t^β_1-1 + B(1+β_1)t^β_1 + (1-β_1) C t^-β_1|w|^2 + +(1-β_1)t^-β_1 Re(D w^2)+ Re(E w)+ β_1 t^2β_1-1/√(t^2β_1+|w|^2) g(t,t^1-β_1 w) + + √(t^2β_1+|w|^2) (g_t(t,t^1-β_1w) + 2 Re(g_w(t,t^1-β_1w)(1-β_1)t^-β_1w)) and for t>0 and β_1<0 we have ρ_t(t,w)= A (1-|β_1|) t^-|β_1| + B(2-|β_1|)t^1-|β_1| + |β_1| C t^|β_1|-1|w|^2 +|β_1|t^|β_1|-1 Re(D w^2)+ Re(E w)+ (1-|β_1|) t^1-2|β_1|/√(t^2-2|β_1|+|w|^2) g(t,t^|β_1| w) + + √(t^2-2|β_1|+|w|^2) (g_t(t,t^|β_1|w) + 2 Re(g_w(t,t^|β_1|w)|β_1|t^|β_1|-1w)). The t-derivative of defining function (<ref>) is 0 for t<0. Since β_1∈ (-1,1)∖{0} and κ is bounded, we have that ρ_t(t, κ(t)) is in L^p_ loc() for 1≤ p<min{1/|β_1|, 1/1-|β_1|} . A similar argument can be used for point ξ=-1∈. Let πβ_-1 (β_-1∈ (-1,1)∖{0}) be the orientied angle of intersection of γ_-1 and the real axis Im(w)=0 at point f(-1). Now β_-1 is positive, if a positive tangent vector to γ_-1 at f(-1) is rotated counterclockwise to get a positive tangent vector to the real axis and negative otherwise. For j=±1 we define δ_j = 1-β_j, if β_j∈ (0,1), and δ_j=|β_j|, if β_j∈ (-1,0). To transfer our observations to the boundary value problem (<ref>-<ref>) on the unit disc, let Ψ∈ A^1/2() be a biholomorphic map from to the upper half-disc _+, which maps the lower semicircle L on -1, 1 so that Ψ(± 1) = ± 1. Let F(x)=1/2x(3-x^2). Then F(x)-1= -1/2(x-1)^2(x+2) and F(x)+1=-1/2(x+1)^2(x-2). Hence function ψ(ξ) = F(Ψ(ξ)) is real on L, ψ(± 1) = ± 1, and C^1 on . Recall that w_j is the positive intersection of γ_j and the real axis, j=± 1. Now we consider only those solutions f of the Cherepanov problem (<ref>-<ref>), which are of the form f(ξ) = (ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ) + w_1 1+ψ(ξ)/2+ w_-11-ψ(ξ)/2, where κ is in A(). We will define two (local) defining functions ρ_1(ξ,w) for ξ -1 and ρ_-1(ξ,w) for ξ 1. Let T_1(ξ) = (ξ-1)/i(ξ+1) and T_-1(ξ) = 1/T_1(ξ)= i(ξ +1)/(ξ-1). Then T_1(-i)=T_-1(-i)=-1, and T_1, T_-1 map the upper semicircle to the positive real axis and the lower semicircle to the negative real axis. For j=± 1 and Im(ξ)> 0 we define ρ_j(ξ,w)= 1/T_j(ξ)^δ_jρ(ξ,(ξ-1)^δ_1 (ξ+1)^δ_-1w+ w_1 1+ψ(ξ)/2+ w_-11-ψ(ξ)/2) and for Im(ξ)< 0 we set ρ_j(ξ,w)= Im(e^iπ(1-β_j)(ξ-1)^δ_1 (ξ+1)^δ_-1/T_j(ξ)^δ_jw). As before one can check that ρ_j and ρ_jw are continuous on ∖{-j}, j=±1. Since f solves the original boundary value problem, we have that ρ_j(ξ,κ(ξ))=0, j=± 1. Let χ:∖{-i}→ 0,1 be a smooth function such that χ(ξ)=1 for ξ=e^iθ, -π/2<θ≤π/3 and χ(ξ)=0 for ξ=e^iθ, 2π/3≤θ<3π/2. We define a new (global) defining function as ρ(ξ,w) = χ(ξ)ρ_1(ξ,w) + (1-χ(ξ))ρ_-1(ξ,w). Then ρ and ρ_w are well defined continuous function on (∖{-i})×. 
If β_1, β_-1 have the same sign, then both function are also continuous at ξ=-i, but if β_1, β_-1 have the opposite signs, then ρ(-i^-,w) = - ρ(-i^+,w) and ρ_w(-i^-,w) = - ρ_w(-i^+,w), which means that we have a nonorientable bundle as the boundary value data for κ. Now locally considered problem (<ref>-<ref>) for κ(t) and κ'(t) becomes global boundary value problem for κ(θ) and ∂κ/∂θ (ξ=e^iθ). Hence ∂κ/∂θ solves the linear Riemann-Hilbert problem 2 Re(ρ_w(θ,κ(θ))∂κ/∂θ)=-ρ_θ(θ, κ(θ)), where ρ_w(θ,κ(θ)) is either a nonzero continuous function on or ρ_w(-i^-,κ(-i)) = - ρ_w(-i^+,κ(-i)) and ρ_θ(θ, κ(θ)) belongs to the appropriate L^p() space 1≤ p < min{1/|β_1|, 1/1-|β_1|, 1/|β_-1|, 1/1-|β_-1|}. In fact ρ_θ(θ, κ(θ)) belongs to L^p_ loc for 1≤ p < min{1/|β_1|, 1/1-|β_1|} near ξ=1 and to L^p_ loc near ξ=-1 for 1≤ p < min{1/|β_-1|, 1/1-|β_-1|}. Let N be the winding number of function ρ_w(θ,κ(θ)), that is, 2N is the Maslov index of the associated linear Riemann-Hilbert problem. If ρ_w(θ,κ(θ)) is a continuous function on , Maslov index is an even integer and hence N is an integer. On the other hand, if ρ_w(-i^-,κ(-i)) = - ρ_w(-i^+,κ(-i)), Maslov index is an odd integer and N is a half of an odd integer. Let r(ξ) be the square root function, where we take the branch where is cut along the negative imaginary axis. Then function ρ_w(θ,κ(θ)) can be written in the form ρ_w(θ,κ(θ)) = ξ^-N e^u+iv(θ), where u and v are real continuous functions on , <cit.>. In the case N=2M+1/2, M∈, is a half of an odd integer, we define ξ^N = ξ^M r(ξ), which corresponds to the sign changing of ρ_w at ξ=-i. See also <cit.>. Hence e^± Hv belongs to L^p'() for any p'≥ 1, <cit.> and thus e^± i (v+i Hv) belongs to L^p'() for any p'≥ 1. Therefore Re(ξ^-N e^ i (v+i Hv)∂κ/∂θ) = -e^-ue^-(Hv)ρ_θ(θ, κ). We conclude that the right-hand side belongs to the same L^p() space as function ρ_θ(θ, κ). Since Hilbert transform is bounded in L^p() spaces, 1<p<∞, <cit.>, we get that ∂κ/∂θ is in L^p() for the same set (<ref>) of values of p as function ρ_θ(θ, κ). Therefore κ belongs to L^1,p() for all such values of p and this implies that κ∈ C^β(), <cit.>, where 0< β < min{|β_1|, 1-|β_1|, |β_-1|, 1-|β_-1|}. Observe that regularity of κ and f could also be expressed locally, that is, near j=± 1 functions κ and f belong to Hölder space C^β, where 0<β<min{|β_j|, 1-|β_j|}. Let k≥ 3. Let γ_ξ_ξ∈∖L be a C^k family of Jordan curves in . Let w_j, j=± 1, be an intersection of γ_j and the real axis and let πβ_j, β_j∈(-1,1)∖{0}, be the oriented angle of intersection of γ_j with the real axis at point w_j. Let 0< β < min{|β_1|, 1-|β_1|, |β_-1|, 1-|β_-1|}. Then for every solution f of (<ref>-<ref>) of the form f(ξ) = (ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ) + w_1 1+ψ(ξ)/2+ w_-11-ψ(ξ)/2, where κ∈ A(), we have f,κ∈ A^β(). Observe that in cases where β_1,β_-1∈ (0,1), the regularity conditions we get for solutions of the Cherepanov/mixed Riemann-Hilbert problem (<ref>-<ref>) are consistent with results on the regularity of Riemann maps from the unit disc into simply connected domains bounded by Jordan curves which satisfy so called wedge condition, <cit.>. If the defining function ρ is independent of ξ and β_j∈(0,1), we get (1-β_j)-regularity. The β_j-regularity comes from ξ-dependence. Similarly, the expected regularity and the 'order' of zeros of Riemann maps in the cases where β_j∈ (-1,0) and which are ξ independent, would be 1+|β_j|, but ξ-dependence of the defining function ρ changes regularity conditions. 
On the other hand, results in <cit.> show that in the case of nontransversal intersection of the real axis with either γ_1 or γ_-1 solutions might not be of the form (ξ-1)^δ_1κ(ξ) or (ξ+1)^δ_-1κ(ξ) for some function κ∈ A(). § LINEAR CHEREPANOV BOUNDARY VALUE PROBLEM In this section we consider the linear version of problem (<ref>-<ref>), that is, a linear Riemann-Hilbert problem with piecewise continuous boundary data, <cit.>, and L the lower semicircle. First we consider homogeneous linear problem with piecewise continuous boundary data Im(f(ξ)) = 0 for ξ∈ L and Re(B(ξ) f(ξ)) = 0 for ξ∈∖L, where B is a complex nonzero function of class C^β on the upper semicircle. The regularity exponent β∈ (0,1) is bounded by conditions given in Proposition <ref>. We may assume without loss of generality that |B(ξ)|=1 for all ξ∈∖L. Let πβ_1, β_1∈ (-1, 1)∖{0}, be the oriented angle of intersection of the real axis Im(w)=0 and Re(B(1)w) =0 at point 0, that is, B(1)=-i e^iπβ_1. Similarly, let πβ_-1, β_-1∈ (-1, 1)∖{0}, be the oriented angle of intersection of Re(B(-1)w) =0 and the real axis Im(w)=0 at point 0, that is, B(-1) =-i e^- iπβ_-1. We search for solutions f∈ A() of (<ref>-<ref>) of the form f(ξ)= (ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ) for some κ∈ A^β(). Recall that for j=±1 we defined δ_j = 1-β_j, if β_j∈ (0,1), and δ_j=|β_j|, if β_j∈ (-1,0). Hence we also have f∈ A^β(). To define noninteger powers of (ξ-1) and (ξ+1) we take appropriate branches of the complex logarithm. For (ξ-1)^δ_1 the complex plane is cut along positive real numbers so that the argument of (ξ-1) for ξ∈ lies on interval (π/2,3π/2), and for (ξ+1)^δ_-1 the complex plane is cut along negative real numbers and the argument of (ξ+1) for ξ∈ lies on interval (-π/2,π/2). An argument similar to the argument in Section <ref> shows that κ solves homogeneous linear Riemann-Hilbert problem Re(B(ξ)κ(ξ))=0 for all ξ∈, where B∈ C^β( D∖{1}) is defined as B(ξ)= {[ B(ξ)(ξ-1/|ξ-1|)^δ_1(ξ+1/|ξ+1|)^δ_-1, if Im(ξ) > 0; ± i(ξ-1/|ξ-1|)^δ_1(ξ+1/|ξ+1|)^δ_-1, if Im(ξ)<0 ]. with the left and the right limits at ξ=±1. The sign for Im(ξ)<0 is chosen so that B is continuous at -1, that is, we have plus sign, if β_-1<0, and minus sign, if β_-1>0. At point ξ=1 function B might not be continuous. In general we have B(1^+)=±B(1^-). See <cit.> for more. Each factor ξ-1/|ξ-1|, ξ+1/|ξ+1| changes the argument by π when ξ passes once in the positive direction. Hence possible widing number of B is either an integer (Maslov index of problem (<ref>) is even) or a half of an odd integer (Maslov index of problem (<ref>) is odd). Consider the case B(e^iθ)= e^i θ for θ∈ [0,π]. In particular we have β_1=β_-1=1/2 . Then we get B(ξ)= {[ e^-iπ/4B(ξ) ξ^1/2= e^i2θ-π/4, if 0≤θ≤π; e^i3π/4 ξ^1/2 =e^i3π-2θ/4 , if π< θ<2π. ]. Hence the winding number W(B)=0. Using identification of the boundary problem (<ref>-<ref>) with the problem on the unit disc with reflected boundary conditions (<ref>), this example corresponds to the linearization of the boundary value problem, where all boundary curves are unit circles and we linearize at f(z)=z. The family of (nearby) solutions which are real on the real axis is one-dimensional f_a(z) = z-a/1-az, where a∈ (-1,1) is a real number. Consider the case B(e^iθ)= 1 for θ∈ [0,π]. In particular we have β_1=-β_-1=1/2 . Then we get B(ξ)= e^-iπ+2θ/4 and the winding number W(B)=-1/2. 
Using identification of the boundary problem (<ref>-<ref>) with the problem on the unit disc with reflected boundary conditions (<ref>), this example corresponds to the linearization of the problem where all boundary curves are unit circles and we linearize at function f(z)=1. The family of (nearby) solutions which are real on the real axis is zero-dimensional. The dimension of the space of solutions in A^β() depends on the winding number W(B) of function B. It equals 2 W(B)+1 if W(B)≥ -1/2, see <cit.> and <cit.>. We define the winding number of B∈ C^α(∖L) as the winding number of B. Now we can solve appropriate nonhomogeneous linear Riemann-Hilbert problem with piecewise continuous boundary data Im(f(ξ)) = 0 for ξ∈ L and Re(B(ξ) f(ξ)) = b(ξ) for ξ∈∖L, where B is as above and b a real function on of the form b(ξ) = |ξ-1|^δ_1|ξ+1|^δ_-1b(ξ) for some function b∈ C_^β() which equals 0 on L. To solve (<ref>-<ref>) in the space of functions f∈ A^β(D) of the form f(ξ)= (ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ) for some κ∈ A^β() is equivalent to solve the problem Re(B(ξ)κ(ξ))=b(ξ) for all ξ∈. It is well known that if W(B)=W(B)≥ -1/2, then the equation is solvable for any b∈ C_^β(), see <cit.> and <cit.>. If the winding number W(B)=W(B) is an odd integer, the function on the right-hand side of (<ref>) needs to belong to a special space of Hölder continuous real functions on ∖{1} of the form b_0(r(ξ)), where r(ξ) is the principal branch of the square root and b_0∈ C^β_() is an odd function. Hence we need condition b(1^-) + b(1^+) =0, which is satisfied because in our case we have b(1^-) = b(1^+) =0. See <cit.> for more information. Let 0<β<1. Let B : ∖L→∖{0} be a non-vanishing complex function in C^β(∖L) and let W(B)≥ -1/2. Then for every real function b on of the form b(ξ) = |ξ-1|^δ_1|ξ+1|^δ_-1b(ξ) for some b∈ C^β_() which equals 0 on L, there exists a solution f of the linear Cherepanov problem Im(f(ξ)) = 0 for ξ∈ L and Re(B(ξ) f(ξ)) = b(ξ) for ξ∈∖L of the form f(ξ) = (ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ), where κ∈ A^β(). Moreover, the space of solutions of this form is 2W(B)+1 dimensional real subspace of A^β(). Let γ_ξ_ξ∈∖L be a C^k (k≥ 3) family of Jordan curves in and let ρ_0∈ C^k((∖L)×) be its defining function. Let β_1,β_-1 and β be as in Proposition <ref>. Let f_0 be a solution of the Cherepanov problem (<ref>), (<ref>) of the form f_0(ξ) = (ξ-1)^δ_1(ξ+1)^δ_-1κ_0(ξ) + w_1 1+ψ(ξ)/2+ w_-11-ψ(ξ)/2, where κ_0∈ A^β(). Then the mapping Φ(κ) : A^β()→ C^β_(), for each κ evaluated at point ξ∈ as {[ ρ_0(ξ, (ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ) + w_1 1+ψ(ξ)/2+ w_-11-ψ(ξ)/2), if Im(ξ) ≥ 0 ,; ± Im((ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ) + w_1 1+ψ(ξ)/2+ w_-11-ψ(ξ)/2), if Im(ξ) < 0 ]. is differentiable at κ_0 with the derivative (DΦ)(κ_0) acting on κ∈ A^β() as {[ 2 Re(∂ρ_0w(ξ, f_0(ξ)) (ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ)), if Im(ξ) ≥ 0,; ± Im((ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ)), if Im(ξ) < 0. ]. The sign for Im(ξ)<0 is chosen as in (<ref>-<ref>). Let Ω⊂ C^k+1 ((∖L)×) be an open subset of defining functions ρ of the families of Jordan curves over ∖L such that the intersection of the corresponding γ_1 and γ_-1 with the real axis at some points w_1∈γ_1 and w_-1∈γ_-1 are transversal with the oriented angles of intersection given by β_1, β_-1∈(-1,1)∖{0}. Then, at least locally, w_1, w_-1 and β_1,β_-1 smoothly depend on ρ. Let X ={ (κ,ρ)∈ A^β()×Ω; Im((ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ)) = 0, if Im(ξ) < 0} which is a Banach submanifold of A^β()×Ω. Also, let Y ={b(ξ) = |ξ-1|^δ_1|ξ+1|^δ_-1b(ξ); b∈ C^β_(), b(ξ)=0, if Im(ξ) < 0}. 
The mapping Φ : X→ Y defined as in (<ref>) has partial derivative with respect to κ as a map from X_ρ ={κ∈ A^β(); Im((ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ)) = 0, if Im(ξ) < 0} to Y of the form (<ref>). If the winding number W(B) of the Cherepanov problem defined by (<ref>) is greater or equal to -1/2, then the partial derivative is surjective with 2W(B)+1 dimensional kernel. Hence implicit function theorem applies and there is a neighbourhood of ρ_0 in Ω and a neighbourhood of κ_0 in A^β() such that for every ρ∈Ω close to ρ_0 there is a 2W(B)+1 dimensional family of solutions of (<ref>-<ref>) near κ_0. § A PRIORI ESTIMATES §.§ A priori estimates on function f To get existence results using continuity method we need a priori estimates on solutions of (<ref>-<ref>). It is well known that such a priori estimates can only be achieved for the family of solutions with no zeros on , <cit.>. We follow the approach in <cit.>. By assumption all Jordan curves {γ_ξ}_ξ∈∖L contain point 0 in their interiors. Hence the function (θ, w)⟼ wρ_w(θ,w), defined for (θ, w) such that w∈γ_θ, is homotopic to 0 in ∖{0} and it can be written in the form wρ_w(θ,w) = e^c(θ,w) + i d(θ,w) for some C^k-1 functions c and d, defined for (θ, w) such that θ∈ 0, π and w∈γ_θ. Observe that for each w∈γ_θ function d(θ,w) represents the angle between w and the outer normal to γ_θ at point w. There exists a C^k isotopy ρ^t, t∈ 0,1, where ρ^0=ρ and ρ^1 (ξ,w) = |w|^2 - R^2 for R>0 large enough, such that the gradient ρ^t_w is nonzero on ρ^t=0 for each t, <cit.>. Then one can find C^k-1 functions c(t,θ,w) and d(t,θ,w) such that (<ref>) holds for each t∈ 0,1 and w∈γ^t_θ. In addition, the isotopy can be made such that for every t∈ 0, 1, j=± 1, Jordan curves γ^t_j are strongly starshaped with respect to 0 and that for each w∈γ^t_ω_j the angle between w and the normal to γ^t_ω_j at w is less than π/10. Instead of solving (<ref>-<ref>) on the unit disc we consider equivalent problem on the upper semidisc ^+ ={ z∈; Im(z)>0}, where the role of the lower semicircle L is replaced by the interval -1,1. Using the reflection principle f(ξ) = f(ξ)) we can holomorphically extend every solution f of (<ref>-<ref>) to the unit disc such that it solves nonlinear Riemann-Hilbert problem defined by the function ρ which we get as an extension of the original function ρ using the reflection to the lower semicircle as ρ(ξ, w) = ρ(ξ,w) for ξ± 1. For ξ=± 1 function ρ(ξ, w) has well defined limits as ξ approaches ± 1 from above and below. Then we have ρ_w(ξ, w) = ρ_w(ξ,w) = ρ_w(ξ,w) for ξ± 1 and hence ρ_w(ξ, w) w = ρ_w(ξ,w) w= ρ_w(ξ,w) w . Therefore c(ξ,w) = c(ξ,w) and d(ξ,w) = - d(ξ,w). Also, observe that for w, an intersection of γ_1 with the real axis, we have ρ_w(1+, w) = ρ_w(1-,w) = ρ_w(1-,w) and similarly for an intersection of γ_-1 with the real axis. Thus for every solution f of (<ref>-<ref>) the absolute value of f(θ)ρ_w(θ, f(θ)) is well defined and continuous on , whereas d(0+,f(0+))= - d(2π-, f(2π-)) and similarly at θ=π. Let f be a solution of the symmetrized boundary value problem with no zeros. Hence f can be written in the exponential form f = e^g. Since the biholomorphic map ψ from to the upper half-disc _+ is of class C^1/2, a C^β estimate on solutions of the symmetrized boundary value problem gives C^β/2 estimate on solutions of (<ref>-<ref>). Let us differentiate function ρ(θ, f(θ)) to get ρ_θ(θ, f(θ)) + 2 Re(ρ_w(θ, f(θ)) ∂ f/∂θ(θ)) = 0. 
Since f=e^g, we get ρ_θ(θ, f(θ)) + 2 Re(ρ_w(θ, f(θ)) f(θ) ∂ g/∂θ(θ)) = 0 and so ρ_θ(θ, f(θ)) + 2 Re(e^c(θ,f(θ))+ i d(θ,f(θ))∂ g/∂θ(θ)) = 0. From here we get 2 Re(e^i (d(θ,f(θ))+ i Hd(θ,f(θ)))∂ g/∂θ(θ)) = -ρ_θ(θ, f(θ)) e^-c(θ,f(θ))-Hd(θ,f(θ)). Observe that function θ⟼ e^i (d(θ,f(θ))+ i Hd(θ,f(θ)))∂ g/∂θ(θ) extends holomorphically to the unit disc with value 0 at 0. We will get C^β a priori estimates on g and hence on f by getting C^β a priori estimates on function (<ref>). Using Hilbert transform it is enough to get C^β a priori estimates on its real part. Hence we need a priori estimates on the right hand side of (<ref>). Function θ⟼ -ρ_θ(θ, f(θ)) e^-c(θ,f(θ)) is bounded with the bound which does not depend on function f but only on the data γ_ξ∈∖L and defining function ρ. The bound can also be found to be independent of the C^k isotopy ρ^t, t∈ 0, 1. Hence one needs a priori bound on function θ⟼ e^± Hd(θ,f(θ)). Recall that, <cit.>, for u∈ L^∞(), such that u_∞<π/2p (1≤ p<∞) we have the estimate e^Hu_p≤ (2π/cos(pu_∞))^1/p. Let a∈(0,π/5) and let χ_0, χ_π be smooth functions on 0,π with values in 0,1 such that χ_0(t)=1 on 0,a, χ_π(t)=1 on π-a, π, χ_0(t)=0 on 2a, π, and χ_π(t)=0 on 0, π-2a. Let us consider the function d(θ,w) = d(θ,w) - χ_0(θ)d_0(w) - χ_π(θ)d_π(w) for θ∈ 0,π and d(θ,w) =-d(2π-θ,w) for θ∈π,2π. Here we used notation d_0(w)=d(0+,w) and d_π(w)=d(π-,w). We see that d(0,w) = d(π,w)=0 and so d(θ,w) is a continuous function on ×. Let 1< p<∞ be given. By results from <cit.> we can write d = Re(q) + e, where pe_∞<π/2 and q is a finite sum of terms of the form e^ijθ w^m, j∈, m∈∪{0}, on which Hilbert transform acts as a bounded nonlinear operator from A() into C(). Therefore for a given solution f of (<ref>-<ref>) with no zeros we can write continuous function d(θ,f(θ)) on in the form d(θ,f(θ)) = Re(q(θ,f(θ))) + e(θ,f(θ)) and so H(d) = H( Re(q)) + H(e) where the first term is uniformly bounded and for the second we have e_∞<π/2p. Hence e^± Hd = e^± H Re(q) e^± He where the first factor is uniformly bounded and the second factor is bounded in L^p() for a given 1< p<∞. Since for a given p∈ (1,∞) we can get L^p() bounds on (<ref>), the boundedness of e^± Hd in some L^p() is determined by Hilbert transform of the extension of function χ_0(θ)d_0(f(θ))+χ_π(θ)d_π(f(θ)) to 0,2π. Recall that γ_± 1 are strongly starshaped Jordan curves with respect to 0 and we may assume that for j=± 1 we have ρ(j,w) = |w|^2 - R_j^2(w/|w|) for some positive C^k function R_j(z) on . A short calculation gives ρ_w(j,w)=w-2 R_j (-1/2w^2/|w|^3 (R_j)_z + 1/21/|w| (R_j)_z) and so wρ_w(j,w)=|w|^2- 2 iR_j/|w| Im( w(R_j)_z) which has strictly positive real part on γ_ξ_ξ∈∖L. Functions d_0 and d_π represent the argument of (<ref>). By compactness it follows that there exists 0<β_0<1, such that |d_0(w)|≤π/2β_0 and |d_π(w)|≤π/2β_0 on γ_ξ_ξ∈∖L and therefore |χ_0(θ)d_0(f(θ)) + χ_π(θ)d_π(f(θ))|≤π/2β_0 for every θ. Also, if there is an open condition on the size of d_jπ on γ_j, such as |d_iπ(w)|<π/2β_0, j=± 1, we can, by choosing the supports of functions χ_0 and χ_π small enough, that is, by choosing a>0 small enough, assume that the same condition on the size holds for function χ_0(θ)d_0(f(θ)) + χ_π(θ)d_π(f(θ)) for all θ. Observe also that |d_0(f(0))|= π|β_1-1/2| and |d_π(f(π))|=π||β_-1|-1/2|. and so ||β_j|-1/2|<β_0/2, j=± 1. By (<ref>) and (<ref>) we get that for every fixed 1<p<∞ such that pβ_0<1 the estimate e^± Hd_p≤ C holds. Hence we also have a priori L^p estimate on function (<ref>). 
Since ∂ f/∂θ =f ∂ g/∂θ, an estimate on ∂ g/∂θ will give an estimate on ∂ f/∂θ. We can write ∂ g/∂θ = (e^-i (d + i Hd))(e^i (d+ i Hd)∂ g/∂θ). By assumptions of Theorem <ref> we have β_0≤1/5<1/2 and we can choose p>2. By Cauchy-Schwarz inequality we then have ∂ g/∂θ_p/2≤e^-i (d + i Hd)_pe^i (d+ i Hd)∂ g/∂θ_p. From here we get L^p/2 a priori estimates on ∂ g/∂θ which imply a priori estimates on g and f in Hölder space C^β() for 0<β<1-2/p<2(1/2-β_0). Recall (Remark <ref>) that this gives Hölder space a priori estimates on solutions with no zeros of the nonsymmetrical problem (<ref>-<ref>) for β∈(0,1/2-β_0). §.§ A priori estimates on function κ We also need a priori estimates on function κ for which it holds f(ξ) = (ξ-1)^δ_1(ξ+1)^δ_-1κ(ξ) + f(1) 1+ψ(ξ)/2+ f(-1) 1-ψ(ξ)/2. In this subsection we again consider the nonsymmetrical case (<ref>-<ref>). We denote by C a universal constant, which depends on the data but does not depend on the particular function we consider. We know that f and hence κ are C^k-1,α smooth on ∖{-1,1} and on compact subsets of ∖{1,-1} we get a priori estimates on κ by expressing it in terms of f. Hence we need a priori estimates on κ near points ± 1. Also, we know from Section <ref> that if κ is continuous on , then both functions belong to A^β() for 0< β < min{|β_1|, 1-|β_1|, |β_-1|, 1-|β_-1|}. Let us fix 0<β<1/2-β_0 that we have a priori estimates on function f. Recall (<ref>) and that t^sκ(t) = f(t). Hence ρ_w(θ,κ) is a C^β function with a priori bounds. As in (<ref>) we can globally write Re(r e^ i (v+i Hv)∂κ/∂θ) = -e^-ue^-(Hv)ρ_θ(θ, κ), where u and v are real C^β functions with a priori bounds. To get L^p' a priori bounds on ∂κ/∂θ for some p'>1 we will get L^p' bounds on the right-hand side function ρ_θ (θ,κ(θ)), that it, on ρ_t (t,κ(t)) near t=0. Considering (<ref>-<ref>-<ref>) termwise we get that t^β_1-1, t^-β_1, κ(t) = t^β_1-1 f(t), and all terms with function g are L^p' bounded for any p'>1 such that p'(1-β_1)<1 and p'β_1<1. Let us consider terms which are bounded by t^-β_1|κ(t)^2| = t^β_1-2 |f(t)^2|. Since we have β∈(0,1/2-β_0) a priori bounds on f, we have |f(t)|≤ C |t|^β for some universal constant C. Hence t^-β_1|κ(t)^2|≤ C |t|^2β+β_1-2 and this function is in some L^p', p'>1, if 1<2β + β_1. The bound 0<β<1/2-β_0 implies that this will be the case for some such β if 2β_0<β_1. Similar argument near ξ=-1 gives 2β_0<1-|β_-1|. If these two conditions are satisfied, we get L^p' a priori estimates on ρ_θ (θ,κ(θ)) for some p'>1. This implies C^β' a priori estimate on κ for β'<1-1/p'. There are natural bounds on β_j, j=± 1, in terms of β_0, that is, 1/2 - β_0/2<|β_j|<1/2+β_0/2. Hence, if 2β_0≤1/2 - β_0/2 and 1/2 + β_0/2≤ 1-2β_0 both inequalities needed for L^p' a priori estimates will be satisfied. These two inequalities are equivalent to the condition β_0≤1/5, that is, the angle between w and the normal to γ_ω_j at w is less than π/10. § FINAL REMARKS If arc L is the lower semicircle, we can state Theorem <ref> in an equivalent simplified form. Let γ_ξ_ξ∈∖L be a C^k (k≥ 3) family of Jordan curves in which all contain point 0 in their interiors. Let Jordan curves γ_j, j=± 1, be strongly starshaped with respect to 0 and such that for each w∈γ_ω_j the angle between w and the outer normal to γ_ω_j at w is less than π/10. Let w_j, j=± 1, be the positive intersection of γ_j and the real axis with the oriented angle of intersection πβ_j, where β_1∈(0,1) and β_-1∈(-1,0). Let 0< β < min{β_1, 1-β_1, |β_-1|, 1-|β_-1|}. 
Then there exists a unique f∈ A^β() with no zeros on which solves (<ref>-<ref>) for which f(1)=w_1 and f(-1)=w_-1. To prove Theorem <ref> one uses the continuity method (see also <cit.>). The starting boundary value problem (<ref>-<ref>) can be, using an isotopy from Jordan curves γ_ξ_ξ∈∖L to circles with center at 0 and fixed radius R>0, embedded in a one-parameter family of boundary value problems which all satisfy the assumptions of Theorem <ref>. Here, for t=0 we have the starting boundary value problem and for t=1 circles as the boundary data. Results in Section <ref> (Proposition <ref>, Proposition <ref>) imply that a solution of the boundary value problem (<ref>-<ref>) for curves γ^t_ξ_ξ∈∖L can be locally perturbed into a solution for the nearby perturbed boundary data. Hence the set of parameters t for which there is a solution of (<ref>-<ref>) is open. On the other hand, a priori estimates from Section <ref> together with compact embeddings (<ref>) imply that the set of parameters t∈[0,1] for which there is a solution of (<ref>-<ref>) is closed. Since there is an obvious solution for the case t=1, where all Jordan curves are circles with center at 0 and fixed radius R>0, we get that there is a solution of (<ref>-<ref>) for t=0. Let a_1,…, a_n∈ be a finite set of points with given multiplicities. Then under the assumptions of Theorem <ref> there exist β∈ (0,1) and f∈ A^β() which has zeros exactly at the points a_1,…, a_n∈ with the given multiplicities and which solves (<ref>-<ref>). To prove the corollary we search for solutions f of the symmetric problem on the unit disc of the form f(z) = z-a/1-a̅z z-a̅/1-az f̃(z), where a is a point in the upper half-disc. Then f̃ has to solve a modified problem, where the boundary curves are given by γ̃_ξ = 1-a̅ξ/ξ-a 1-aξ/ξ-a̅ γ_ξ. Observe that γ̃_ξ = γ_ξ for ξ=± 1. In a similar way one can create a zero at a point a∈(-1,1), that is, on L in the original problem. Jordan curves for the modified problem are γ̃_ξ = 1-aξ/ξ-a γ_ξ. Then γ̃_1=γ_1 and γ̃_-1 = -γ_-1, but the conditions of Theorem <ref> are still satisfied. § ACKNOWLEDGEMENTS The author is grateful to the referee for his/her valuable suggestions and comments. § DISCLOSURE No potential conflict of interest was reported by the author. § FUNDING The author acknowledges the financial support from the Slovenian Research Agency (grants P1-0291, J1-3005, N1-0237 and N1-0137).
http://arxiv.org/abs/2307.02224v1
20230705120234
Quantifying memory in spin glasses
[ "Janus Collaboration", "I. Paga", "J. He", "M. Baity-Jesi", "E. Calore", "A. Cruz", "L. A. Fernandez", "J. M. Gil-Narvion", "I. Gonzalez-Adalid Pemartin", "A. Gordillo-Guerrero", "D. Iñiguez", "A. Maiorano", "E. Marinari", "V. Martin-Mayor", "J. Moreno-Gordo", "A. Muñoz Sudupe", "D. Navarro", "R. L. Orbach", "G. Parisi", "S. Perez-Gaviro", "F. Ricci-Tersenghi", "J. J. Ruiz-Lorenzo", "S. F. Schifano", "D. L. Schlagel", "B. Seoane", "A. Tarancon", "D. Yllanes" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn" ]
ilaria.paga@gmail.comInstitute of Nanotechnology, Soft and Living Matter Laboratory, Consiglio Nazionale delle Ricerche (CNR-NANOTEC), Piazzale Aldo Moro 5, I-00185 Rome, Italy This author performed the experiments reported in this work.Department of Mechanical Engineering, The University of Texas at Austin, Austin, Texas 78712, USA Eawag, Überlandstrasse 133, CH-8600 Dübendorf, Switzerland Dipartimento di Fisica e Scienze della Terra, Università di Ferrara e INFN, Sezione di Ferrara, I-44122 Ferrara, Italy Departamento de Física Teórica, Universidad de Zaragoza, 50009 Zaragoza, SpainInstituto de Biocomputación y Física de Sistemas Complejos (BIFI), 50018 Zaragoza, Spain Departamento de Física Teórica, Universidad Complutense, 28040 Madrid, Spain Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), 50018 Zaragoza, Spain Departamento de Física Teórica, Universidad Complutense, 28040 Madrid, Spain Departamento de Ingeniería Eléctrica, Electrónica y Automática, U. de Extremadura, 10003, Cáceres, SpainInstituto de Computación Científica Avanzada (ICCAEx), Universidad de Extremadura, 06006 Badajoz, Spain Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), 50018 Zaragoza, SpainFundación ARAID, Diputación General de Aragón, Zaragoza, SpainDepartamento de Física Teórica, Universidad de Zaragoza, 50009 Zaragoza, Spain Dipartimento di Biotecnologie, Chimica e Farmacia, Università degli studi di Siena, 53100, Siena ItalyInstituto de Biocomputación y Física de Sistemas Complejos (BIFI), 50018 Zaragoza, Spain Dipartimento di Fisica, Sapienza Università di Roma, and CNR-Nanotec, Rome Unit, and INFN, Sezione di Roma1, 00185 Rome, Italy Departamento de Física Teórica, Universidad Complutense, 28040 Madrid, Spain Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), 50018 Zaragoza, SpainDepartamento de Física Teórica, Universidad de Zaragoza, 50009 Zaragoza, SpainDepartamento de Física, Universidad de Extremadura, 06006 Badajoz, SpainInstituto de Computación Científica Avanzada (ICCAEx), Universidad de Extremadura, 06006 Badajoz, Spain Departamento de Física Teórica, Universidad Complutense, 28040 Madrid, Spain Departamento de Ingeniería, Electrónica y Comunicaciones and I3A, U. 
de Zaragoza, 50018 Zaragoza, Spain Texas Materials Institute, The University of Texas at Austin, Austin, Texas 78712, USA Dipartimento di Fisica, Sapienza Università di Roma, and CNR-Nanotec, Rome Unit, and INFN, Sezione di Roma1, 00185 Rome, Italy Departamento de Física Teórica, Universidad de Zaragoza, 50009 Zaragoza, SpainInstituto de Biocomputación y Física de Sistemas Complejos (BIFI), 50018 Zaragoza, Spain Dipartimento di Fisica, Sapienza Università di Roma, and CNR-Nanotec, Rome Unit, and INFN, Sezione di Roma1, 00185 Rome, Italy Departamento de Física, Universidad de Extremadura, 06006 Badajoz, SpainInstituto de Computación Científica Avanzada (ICCAEx), Universidad de Extremadura, 06006 Badajoz, Spain Dipartimento di Scienze dell'Ambiente e della Prevenzione Università di Ferrara and INFN Sezione di Ferrara, I-44122 Ferrara, Italy Division of Materials Science and Engineering, Ames Laboratory, Ames, Iowa 50011, USA Université Paris-Saclay, CNRS, INRIA Tau team, LISN, 91190 Gif-sur-Yvette, FranceInstituto de Biocomputación y Física de Sistemas Complejos (BIFI), 50018 Zaragoza, Spain Departamento de Física Teórica, Universidad de Zaragoza, 50009 Zaragoza, SpainInstituto de Biocomputación y Física de Sistemas Complejos (BIFI), 50018 Zaragoza, Spain Chan Zuckerberg Biohub, San Francisco, CA, 94158 Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), 50018 Zaragoza, Spain Janus Collaboration Rejuvenation and memory, long considered the distinguishing features of spin glasses, have recently been proven to result from the growth of multiple length scales. This insight, enabled by simulations on the Janus II supercomputer, has opened the door to a quantitative analysis. We combine numerical simulations with comparable experiments to introduce two coefficients that quantify memory. A third coefficient has been recently presented by Freedberg et al. We show that these coefficients are physically equivalent by studying their temperature and waiting-time dependence. Quantifying memory in spin glasses D. Yllanes August 1, 2023 ================================== Memory is among the most striking features of far-from-equilibrium systems <cit.>, including granular materials <cit.>, phase separation in the early universe <cit.> and, particularly, glass formers <cit.>. Whether a universal mechanism is responsible for memory in all these materials is unknown, but spin glasses stand out <cit.>. On the one hand, memory effects are particularly strong in these systems —perhaps because of the large attainable coherence lengths <cit.>. More importantly, their dynamics is now understood in great detail. Indeed, to model protocols where temperature is varied, one must first understand the nonequilibrium evolution at constant temperature. In other words, before tackling memory, rejuvenation and aging should be mastered. These intermediate steps, including the crucial role of temperature chaos, have now been taken for spin glasses <cit.>. In the context of spin glasses, rejuvenation is the observation that when the system is aged at a temperature T_1 for a time , and then cooled to a sufficiently lower T_2, the spin glass reverts apparently to the same state it would have achieved had it been cooled directly to T_2. That is, its apparent state is independent of its having approached equilibrium at temperature T_1. However, when the spin glass is then warmed back to temperature T_1, it appears to return to its aged state, hence memory. 
Conventional wisdom has long ascribed rejuvenation to temperature chaos —the notion that equilibrium states at close temperatures are unrelated— and memory was experimentally exhibited long ago <cit.> but these effects remained unassailable to numerical simulations in three-dimensional systems <cit.>. This state of affairs has recently changed <cit.>, thanks to the combination of multiple advances: the availability of the Janus II supercomputer <cit.> and of single-crystal experiments <cit.>, accessing much larger coherence lengths; the establishment of a relation between experimental and numerical time scales <cit.>; and the quantitative modelling of nonequilibrium temperature chaos <cit.>. Ref. <cit.> has not only demonstrated memory numerically but also shown that this effect is ruled by multiple length scales, setting the stage for a more quantitative study. Here we introduce two coefficients to quantify memory, one experimentally accessible and the other adapted to numerical work. The example of temperature chaos has shown that such indices are key to a comprehensive theory <cit.>. In principle, the only constraints for such a coefficient 𝒞 are that 𝒞=1 means perfect memory and 𝒞=0 means that the memory has been totally erased. Many choices could satisfy these conditions: besides the coefficients introduced herein, <cit.> presents an alternative based on a different observable. Fortunately, the length scales discussed in <cit.> allow us to express these coefficients as smooth functions of similar scaling parameters and to demonstrate that they have the same physics as temperature and waiting time are varied. *Protocols. In both experiment and simulations <cit.>, we have a three-step procedure: (i) the system is quenched to an aging temperature T_1 < (is the glass temperature) and relaxes for a time . In simulations, this quench is instantaneous, while in experiment, it is done at ≈ 10 K/min. A protocol where T is kept constant after the initial quench is termed native. (ii) The system is then quenched to T_2<T_1, where it evolves for time . (iii) The system is raised back to T_1, instantaneously in simulations, and at the same rate as cooling in experiment. After a short time at T_1 (2^10 time steps in simulations), the dynamics are compared to the native system, which has spent at T_1. The temperature drop T_1- T_2 is chosen to ensure that temperature chaos (and, hence, rejuvenation) is sizeable <cit.>. Our experiments are performed on a sample with =41.6 K. For experimental parameters see Table <ref> (Table <ref> for simulations). *Length scales. Memory and rejuvenation are ruled by several related length scales (see Appendix  <ref> for details), of which only one is experimentally accessible —, related to the Zeeman effect <cit.>. In simulations, the basic length is the size of glassy domains in a native protocol, . We regard rejuvenation as a consequence of temperature chaos. If the temperature drop meets the chaos requirement, namely the chaos length scale set by T_1-T_2 is small compared to (T_1,) <cit.>, the system that has aged at the starting temperature T_1 for “rejuvenates” at the cold temperature T_2. That is, preexisting correlated spins are “frozen” dynamically at T_2 and a new correlated state of size (T_2,) forms, where is the waiting time at T_2. The newly created correlated state at T_2 is independent from that formed at T_1. As increases at T_2, the newly correlated state at T_2 grows larger, to a maximum size set by the final time at T_2, . 
Upon heating back to T_1, the two correlated states interfere, causing a memory loss that will be seen both in experiment and simulations. Simulations can easily access and  <cit.>. Experiments measure instead the number of correlated spins N_c <cit.>. Equivalently, in simulations, the native protocol results in (T,)= [(T,)]^D-θ2 , where θ is the replicon exponent <cit.>. If the chaos condition is met, and only in this case, the jump protocol results in, concomitantly, (T_2,) ∝[(T_2,)]^D-θ2. Therefore, when chaos is strong enough, we have a dictionary relating the experimentally accessible to these two basic length scales <cit.> (see Appendix <ref> for details). *Qualitative behavior of the memory coefficient. According to the above, one would expect memory to depend on the ratio (T_2,)/(T_1,): the smaller the ratio, the larger the memory. Because both lengths are increasing functions of time, then (i) memory should increase with increasing , everything else held constant. Concomitantly, (ii) memory should decrease with increasing , everything else held constant. Finally, (iii) if Δ T = T_1 - T_2 increases, everything else held constant, (T_2,) will progressively decrease, and memory should increase. The memory coefficients that we define below, plotted in Fig. <ref>, display precisely these predicted variations. The experimental data in Fig. <ref> agree with this expectation, which we shall test in our simulations through a scaling analysis. *Experimental definition of the memory coefficient. We define a memory coefficient from the number of correlated spins N_c, see Eq. (<ref>). We shall first show that rejuvenation can be observed from N_c, thus confirming that temperature chaos is strong enough given our choice of temperatures and waiting times. A small sample was cut from a single crystal of CuMn, 7.92 at.%, with T_g = 41.6 K. T_1 and in Table <ref> were chosen to facilitate direct comparison with the results of Freedberg et al. <cit.>, who defined another memory coefficient from the linear magnetic susceptibility. The lower temperature, T_2, was variable, as was the waiting time . Our measurements of N_c follows Ref. <cit.> and are illustrated in Fig. <ref>–top. For the reader's convenience, we briefly recall the main steps leading to the measurement of N_c (see Appendix <ref> for further details). After the sample has undergone the appropriate preparatory protocol, we probe its dynamic state by switching on a magnetic field. We set time t=0 when the field is switched on and record the magnetization M(t), which grows steadily from M(t=0)=0, to obtain the dynamic response function S(t,H)=1/Hd M/dlog t . For native protocols and H→ 0, the peak of S(t) against t occurs approximately at the effective time ≈. As H increases, the peak moves to shorter (H). The slope of the plot of log vs H^2 equals the product of times the field-cooled susceptibility per spin: (χ_FC/spin) —χ_FC is roughly constant over the measured temperature range. We thus gain access to . We compare in Fig. <ref>–top the outcome of the above measuring procedure as obtained in protocols N_3 and R_1 (see Table <ref>). The two systems evolve for the same time at 26 K. The values obtained for log turned out to be nearly identical: relaxation at 30 K does not leave measurable traces at 26 K, hence rejuvenation. To quantify memory, we compare for systems that have undergone the temperature-cycling protocols in Table <ref> with their native counterparts. These measurements are illustrated in Fig. <ref>–bottom. 
We define the Zeeman-effect memory coefficient as C_Zeeman=N_c^cycle/N_c^native . Note that in both cases measurements are carried out at T_1 (the difference lies in the previous thermal history of the sample), hence the ratio in Eq. (<ref>) is just the ratio of the corresponding slopes in Fig. <ref>–bottom. Our results for C_Zeeman are given in Fig. <ref>. *A memory coefficient from simulations. We extend <cit.> to quantify memory by pushing current computational capabilities to their limit [Our simulations will closely follow <cit.> (for the sake of completeness we describe them in Table <ref> and Appendix  <ref>).]. We look for memory through the quantity that can be extracted most accurately from simulations: the spin-glass correlation function at H=0, ( r,;p)= ⟨ q^(a,b)(x,t) q^(a,b)(x+r,t) ⟩_p . Here, q^(a,b)(x,) ≡σ^(a)(x,) σ^(b)(x,t) where superscripts (a,b) label different real replicas and ⟨⋯⟩_p stands for the thermal average after a temperature cycle built from T-drop p (see Table <ref> and Protocols) [Due to rotational invariance <cit.>, essentially depends on r=|r|.]. Our starting observation is that the experimental determination of C, Fig. <ref>, relies on the nonlinear response to the magnetic field. Interestingly enough, the equilibrium nonlinear susceptibility is proportional to the integral of r^2 with r∈(0,∞). Thus, following <cit.>, we generalize this equilibrium relation by computing these integrals using the nonequilibrium correlation function in Eq. (<ref>). Fig. <ref>-top exhibits a small but detectable difference in the behavior of r^2 as varies in cycles built from T-drop J_1 in Table <ref>. This success has encouraged to consider the curve with t_w2=0 as the reference curve [A native protocol can be regarded as a cycle with =t_w3=0 (+2^10 is indistinguishable from within our numerical accuracy for native runs).]. We evaluate effects due to t_w2 as Δ(r, t_w2; p) = (r,t_w2;p) - (r, 0; p) . The difference in the correlation lengths in Eq. (<ref>) represents the amount of growth in the correlation length that has occurred at T_2 for a waiting time . If the temperature difference T_1-T_2 is sufficient for full chaos to develop at T_2, then Δ G_R(r,;T_1) represents the competing correlation length that interferes with the native correlation length established at T_1. From Fig. <ref>–bottom, it is natural to define the numerical memory coefficient 𝒞_num as 𝒞_num = 1- ∫_0^∞dr r^2 Δ G_R(r,;p)/∫_0^∞dr r^2 G_R(r,0;p) . Further scaling arguments supporting our choice can be found in Appendix <ref>. Our results for 𝒞_num are presented in Fig. <ref>, for several temperature cycles built from the temperature drops in Table  <ref>. The values of T_1, T_2 and were chosen to meet the chaos requirement proposed in <cit.> (the only exception is J_9, used as a testing case for the scaling analysis, below). It is comforting that, even when the ratio / is as large as in Fig. <ref>, we still obtain 𝒞_num > 0.75. *Discussion. Memory can be quantified in several ways: we have proposed two such memory coefficients, 𝒞_Zeeman and 𝒞_num, respectively, adapted to experimental and numerical computation. Each coefficient is used in a different time scale, see Fig. <ref>. Furthermore, <cit.> proposes yet another experimental coefficient, 𝒞_χ”, based on the linear response to a magnetic field (rather than the nonlinear responses considered herein). It is obvious that more options exist. Hence, it is natural to ask what (if any) is the relationship between these coefficients. 
We look for this relationship in the two length scales that rule our nonequilibrium dynamics, namely (T_2,) and (T_1,). If we succeed in expressing our coefficients as simple functions of these two lengths, we shall have a natural bridge between different memory definitions. Specifically, we consider two variables x and y: x=[(T_2,)/(T_1,)]^D-θ2,y=T_1/ (T_2,) . Both scaling variables, x and y, are approximately accessible to experiment through Eqs. (<ref>) and (<ref>); we shall name their experimental proxies x' and y' [We computed x' and y' from the approximation (T_2,)≈(T_2,) <cit.> and the approximate law (T,t_w)/a≈ 0.58 (t/t_w)^c_2 T/T_g with c_2=0.104 and τ=0.186 ps <cit.>. Indeed, our measurements of ξ in Table <ref> rather follow ξ/a≈ 0.58 (t/t_w)^c_2 T/T_g +3.9. We omit the constant background 3.9 to diminish corrections to scaling.]. Therefore, we seek numerical constants a_1 and a_2, [see Appendix  <ref> for details], such that the 𝒞_num from all our temperature cycles fall onto a single function of ℱ(x,y)= y [1+a_1 x+a_2 x^2] . Setting aside T-drop J_9, which does not meet the chaos condition, the overall linear behavior in Fig. <ref>–bottom is reassuring. With appropriate a'_1 and a'_2 in Eq. (<ref>), see Appendix <ref>, the data for 𝒞_χ” <cit.> also fall onto a smooth function of ℱ'(x',y'), Fig. <ref>–top. There is a problem, however: 𝒞_χ” goes to one for significantly larger than zero (ℱ'=0 only at =0). The same problem afflicts 𝒞_num, albeit to a lesser degree. Interestingly, when plotted as a function of ℱ', 𝒞_Zeeman is compatible with a straight line that goes through 𝒞=1 at ℱ'=0, as it should. So, at least in this respect, 𝒞_Zeeman is the most sensible coefficient. Perhaps more importantly, the scaling representation (dashed line in Fig. <ref>–top) evinces that, away from the 𝒞≈ 1 region, the relation 𝒞_χ”≈ [𝒞_Zeeman]^K holds with K≈ 3.9. This relation makes it obvious that the different memory coefficients carry the same physical information. In summary, we have identified the length scales that govern memory in spin glasses in the form of a simple scaling law. We expect this picture to carry over to other glassy systems that exhibit rejuvenation and memory. We are grateful for helpful discussions with S. Swinnea about sample characterization. We thank J. Freedberg and coauthors for sharing their data and letting us analyze them. This work was partially supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Science and Engineering, under Award DE-SC0013599. We were partly funded as well by Grants PID2022-136374NB-C21, PID2020-112936GB-I00, PID2019-103939RB-I00, PGC2018-094684-B-C21, PGC2018-094684-B-C22 and PID2021-125506NA-I00, funded by MCIN/AEI/10.13039/501100011033 by “ERDF A way of making Europe” and by the European Union by the Junta de Extremadura (Spain) and Fondo Europeo de Desarrollo Regional (FEDER, EU) through Grant GR21014 and IB20079 and by the DGA-FSE (Diputación General de Aragón – Fondo Social Europeo). This research has been supported by the European Research Council under the European Unions Horizon 2020 research and innovation program (Grant No. 694925—Lotglassy, G. Parisi) and by ICSC—Centro Nazionale di Ricerca in High Performance Computing, Big Data, and Quantum Computing funded by European Union—NextGenerationEU. IGAP was supported by the Ministerio de Ciencia, Innovación y Universidades (MCIU, Spain) through FPU grant FPU18/02665. 
JMG was supported by the Ministerio de Universidades and the European Union NextGeneration EU/PRTR through 2021-2023 Margarita Salas grant. IP was supported by LazioInnova-Regione Lazio under the program Gruppi di ricerca2020 - POR FESR Lazio 2014-2020, Project NanoProbe (Application code A0375-2020-36761). § OUR SIMULATIONS ON JANUS II We simulate the Edwards-Anderson model in a cubic lattice with linear size L=160 and periodic boundary conditions. The Ising spins S_x=± 1 occupy the lattice nodes and interact with their nearest lattice-neighbors through the Hamiltonian H=-∑_⟨x,y⟩ J_x,y S_x S_y , where the coupling constants are independent random variables that are fixed at the beginning of the simulation (we choose J_x,y=± 1 with 50% probability). A choice of the {J_x,y} is named a sample. The model in Eq. (<ref>) undergoes a phase transition at temperature T_c=1.1019(29) <cit.>, separating the paramagnetic phase (at high temperatures) from the spin-glass phase (at low temperatures). We study the nonequilibrium dynamics of the model with a Metropolis algorithm. The time unit is a full lattice sweep (roughly corresponding to one picosecond of physical time <cit.>). The simulation is performed on the Janus II custom-built supercomputer <cit.> We consider 16 statistically independent samples. For each sample, we simulate N_R=512 independent replicas (i.e., N_R system copies that share the couplings {J_x,y} but are otherwise statistically independent). Replicas are employed to compute correlation functions as explained in Refs. <cit.> § A SIMPLE RELATION BETWEEN COHERENCE LENGTHS In this section we define the relevant length scales, ξ_micro, ζ and ξ_Zeeman and their relationships: * ξ_micro() is the size of the glassy domain. It is estimated through the replicon propagator, G_R(r,;p). * ξ_Zeeman is obtained by counting the number of spins that react coherently to an external magnetic field, and thereby the volume of correlated spins subtended by the correlation length ξ_Zeeman. This is the only experimentally accessible method for obtaining a correlation length directly. * ζ(t_1,t_2) is the scale at which defects are correlated [See Refs. <cit.> for more details.]. In a fixed-temperature protocol, these quantities are almost equivalent <cit.>. The scenario is more intricate in varying-temperature protocols because of temperature chaos. In Fig. <ref>, we compare these length scales in varying-temperature protocols. As the reader can notice, if chaos is large [J_1, J_2 and J_8], an equivalence exists between ξ_Zeeman, ζ, and ξ_micro. Otherwise [J_9], the number of correlated spins is not a good proxy for ζ. § SCALING OF Χ^', Χ^'' AND Χ_2/V In this section, we show that the dynamical behavior of the three susceptibilities, χ^', χ^'' and χ̂_2 ≡χ_2/V, are similar. The former two are measured in conventional memory experiments (e.g., in Ref. <cit.>), and the latter in numerical simulations. We analyze first the scaling properties near the critical point. We then extend the analysis to the glass phase. In the following we use ϵ≡ (T-)/ to denote the reduced temperature. On the one hand, the experimentally computed linear magnetic susceptibilities behave as <cit.> χ_0 - χ^'/χ_0 ∼ϵ^β G(t/ τ) , χ^'' ∼ϵ^β H(t/τ) , where we have taken both linear magnetic susceptibilities in the time domain (the relationship to frequency is a simple Fourier transform). Above, G(·) and H(·) are two scaling functions, and χ_0 is the equilibrium value of the linear magnetic susceptibility at the critical point. 
On the other hand, the nonlinear spin-glass susceptibility per spin, computed in numerical simulations, scales as <cit.> χ̂_2 ∼ϵ^2β K(t/ τ) , with a suitable scaling function K(·). At the critical point, there is only a critical mode and we can avoid the use of the replicon term. The rationale behind these scaling relations is that the overlap is essentially the magnetization squared. The linear magnetic susceptibilities scale with the exponent associated with the average of the overlap q (the fluctuation of the magnetization is given by ⟨ m^2⟩, the average of the overlap), and χ_2 is a nonlinear susceptibility per spin associated with the overlap. Therefore, it scales with the usual exponent γ=2 β- ν D. Remember that the order parameter, the overlap, scales with the exponent β and that χ̂_2=χ_2/V. For the analogous scaling relations in the spin-glass phase, the out-of-equilibrium situation is dominated by the replicon mode. The only diverging nonlinear susceptibility is the replicon nonlinear one, defined as χ_2(t) =∫_0^∞ dr r^2 G_R(r,t) . The overlap field scales with the replicon exponent θ as q ∼ξ^-θ/2 (see Appendix H of <cit.>), so we can write the dependence of these susceptibilities on the correlation length as χ_0 - χ^'/χ_0 ∼χ^''∼ξ(t)^-θ/2 , χ̂_2 ∼ξ(t)^-θ . We conclude from this analysis that the dynamical behaviors of these different susceptibilities are similar. Hence, we can safely compare the results from numerical simulations with experiments that measure χ^'' <cit.> for this reason. § EXPERIMENTAL DETAILS AND DATA PROCESSING This section explores the experimental protocol for measuring the change in magnetization as a function of log t. In particular, it delves into the nuances of data processing for accurate identification of the time that the dynamic function S(t) peaks, ^eff(H). As discussed in the main body of the text, the accurate determination of ^eff(H) is vital for extracting ξ_Zeeman. Magnetization measurements were conducted using a Quantum Design MPMS system. The magnetization was gauged as the sample traversed through an array of superconducting quantum interference devices (SQUIDs). The system was set to take continuous magnetization measurements over approximately 10 hours for =1 h, and 30 hours for =3 h. The magnetization as a function of time t displayed intermittent spikes owing to the SQUIDs' measurements. A representative example is exhibited in Figure <ref>. These aberrations, typically because of external interference with the SQUID coil, were subsequently removed in our analysis. The first derivative of the magnetization as a function of log t was then calculated. The derivative of the raw data tends to be noisy, complicating the task of identifying the position of the peak in S(t). It was necessary to suitably smooth the S(t) curve. We utilized a Chebyshev polynomial fit for the M vs t curve prior to computing its derivative. With an appropriate number of terms, the Chebyshev polynomial fit accurately represents the raw M vs t data, effectively eliminating the spikes in the d M/dlog t data produced by measurement artifacts, as illustrated in Figure <ref>. A 60-term Chebyshev polynomial fit typically depicts the S(t) curve satisfactorily, with less than a 1% residual. Using higher terms further reduces this residual. The Chebyshev polynomial fit excels over a simple box smoothing of raw data because it preserves the spline of the M vs t curve while simultaneously decreasing the noise in the derivative. 
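As an illustration of this smoothing step, a minimal Python sketch is given below (our own code, not the authors' analysis pipeline; the array names t_data and M_data and the function name relaxation_function are hypothetical). It fits M versus t with a Chebyshev series and differentiates the fit with respect to ln t, using dM/dln t = t dM/dt.

import numpy as np
from numpy.polynomial import Chebyshev

def relaxation_function(t, M, n_terms=60):
    """Smoothed S(t) = dM/dln t from magnetization data with spikes already removed."""
    cheb = Chebyshev.fit(t, M, deg=n_terms)   # Chebyshev fit of M vs t, n_terms ~ number of terms
    return t * cheb.deriv()(t)                # dM/dln t = t * dM/dt, evaluated on the data grid

# usage on hypothetical, already-cleaned data arrays:
# S = relaxation_function(t_data, M_data, n_terms=60)
# t_peak_guess = t_data[np.argmax(S)]   # rough peak location, refined below by a peak-shape fit

An equivalent variant fits M against ln t and differentiates that fit directly.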
Upon obtaining the derivative of the Chebyshev polynomial, we constructed a single Lorentzian peak function with a constant background: f(x) = Aγ^2/((x-x_0)^2+γ^2) + C. In this equation, A denotes the amplitude of the peak, x_0 represents the center, and γ indicates the peak width. As depicted in Figure <ref>, the derivative of the Chebyshev curve provides a relatively uncluttered peak that fits well to a Lorentzian shape. With a sufficient number of Chebyshev polynomial terms, the fitted curve can trace the raw data precisely. To ensure that the Chebyshev polynomial accurately represents the M vs t data, we iteratively conducted the fitting and peak-searching procedure over a range of numbers of terms, typically from 30 to 1200. Each iteration produced a value for the time at which the Lorentzian peaks. We then calculated the average of these values to determine the peak time for a given S(t) measurement. The error bar for the time of the peak was determined from the standard deviation of the peak times. In cases where overfitting or underfitting was apparent within certain ranges of the number of Chebyshev polynomial terms, as evinced by deviant peak values, the peak values obtained in these ranges were disregarded. § DETERMINATION OF THE COEFFICIENTS FOR THE SCALING FUNCTION ℱ(X,Y) As we have explained in the main text, we introduce a simple scaling function ℱ(x,y) for describing both the experimental and numerical data. Here we report the details for the determination of the constants a_1 and a_2 [and analogously a_1' and a_2']. Our procedure consists of two steps: * we fit the data for the memory coefficient under consideration —𝒞_num or 𝒞_χ”— to a smooth, generic function of the scaling function ℱ(x,y) introduced in the main text[We employ ℱ(x,y) for 𝒞_num and ℱ'(x',y') for 𝒞_χ”.]: 𝒞(x,y)= B_0-B_1 ℱ(x,y)-B_2 [ℱ(x,y)]^2. Note that there are five fit parameters, namely B_0, B_1 and B_2 and the two parameters a_1 and a_2 that determine ℱ(x,y). * The free parameters B_0,B_1,B_2 are disregarded in the analysis. Instead, we keep a_1,a_2 (or a_1' and a_2' for 𝒞_χ”) to describe our data as a function of the scaling function ℱ(x,y) [ℱ'(x',y') in the case of 𝒞_χ”]. For the numerical data, B_0 ≃ 1 and B_2=0. The data for 𝒞_χ”, taken from Ref. <cit.>, require B_2 ≠ 0.
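The two-step determination of a_1 and a_2 can be sketched in a few lines of Python (our own illustration, not the analysis code used for the paper; the arrays x, y and C_data are hypothetical placeholders for the scaling variables and the measured memory coefficients, and the initial guesses in p0 are arbitrary):

import numpy as np
from scipy.optimize import curve_fit

def memory_model(xy, B0, B1, B2, a1, a2):
    """C = B0 - B1*F - B2*F**2 with F(x, y) = y*(1 + a1*x + a2*x**2)."""
    x, y = xy
    F = y * (1.0 + a1 * x + a2 * x**2)
    return B0 - B1 * F - B2 * F**2

# Step 1: five-parameter fit (B0, B1, B2, a1, a2) to the data.
# popt, pcov = curve_fit(memory_model, (x, y), C_data, p0=[1.0, 0.1, 0.0, 1.0, 0.0])
# B0, B1, B2, a1, a2 = popt
# Step 2: discard B0, B1, B2 and keep only a1, a2 to build the scaling variable.
# F = y * (1.0 + a1 * x + a2 * x**2)

For the numerical coefficient one would additionally fix B_2 = 0, as stated above.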
http://arxiv.org/abs/2307.03253v1
20230706185955
Charting the landscape of stochastic gene expression models using queueing theory
[ "Juraj Szavits-Nossan", "Ramon Grima" ]
q-bio.SC
[ "q-bio.SC" ]
Juraj.Szavits.Nossan@ed.ac.uk School of Biological Sciences, University of Edinburgh, Edinburgh EH9 3JH, United Kingdom Ramon.Grima@ed.ac.uk School of Biological Sciences, University of Edinburgh, Edinburgh EH9 3JH, United Kingdom Stochastic models of gene expression are typically formulated using the chemical master equation, which can be solved exactly or approximately using a repertoire of analytical methods. Here, we provide a tutorial review of an alternative approach based on queuing theory that has rarely been used in the literature of gene expression. We discuss the interpretation of six types of infinite server queues from the angle of stochastic single-cell biology and provide analytical expressions for the stationary and non-stationary distributions and/or moments of mRNA/protein numbers, and bounds on the Fano factor. This approach may enable the solution of complex models which have hitherto evaded analytical solution. Charting the landscape of stochastic gene expression models using queueing theory Ramon Grima August 1, 2023 ================================================================================= § INTRODUCTION Biochemical reaction systems are inherently stochastic, in the sense that even if we could account for all possible external factors, it is not possible to precisely predict which reaction event will occur in a small-time interval <cit.>. The effect of this intrinsic noise on the dynamics of simple biochemical circuits has been extensively studied by means of the chemical master equation (CME), a probabilistic description of reaction dynamics under the assumption that the time between successive reaction events is exponential (the Markovian memoryless assumption) <cit.>. A repertoire of methods have been developed to approximately or exactly solve the CME in stationary and non-stationary conditions. These include the generating function method <cit.>, the Poisson representation <cit.>, the linear-noise approximation and its various extensions <cit.> and methods inspired by quantum mechanics including ladder operators and Feynman-like diagrams <cit.>. These and other methods have been discussed in detail in various reviews <cit.>. In particular, due to the observed large cell-to-cell variation in mRNA and protein numbers <cit.>, there has been an intense interest in the application of these techniques to solve a large variety of models of transcription and/or translation <cit.>. The solution of each new stochastic model of gene expression is often laborious and specific to that model because the general form of the solution of the CME is only known for a class of chemical systems with rather restrictive constraints <cit.>. The situation is made even more difficult by the fact that some gene expression systems require a non-Markovian description <cit.> for which very few analytical methods exist <cit.>. Hence, there is considerable interest in the development of methods that circumvent the limitations of the present techniques. A promising approach that has been put forward to solve complex reaction systems (biochemical or of another kind) is queueing theory. Queueing theory is a branch of mathematics which describes customers arriving to some facility where they receive service of some kind and then depart <cit.>. The word “queueing" describes a scenario in which there is a finite number of servers, so if all servers are busy, then new customers must wait or queue for the service. 
Queueing systems are usually described using Kendall's notation A/S/c <cit.>, where A describes the arrival process, S describes the service process, and c is the number of servers. In biology, queueing theory has been used to solve models of enzymatic reactions <cit.>, gene regulatory networks <cit.>, mRNA translation under limited resources <cit.> and stochastic expression of a single gene <cit.>. A variety of stochastic models of gene expression, in particular those describing expression occurring in bursts, have been mapped <cit.> to a particular queueing system known as the G^X/G/∞ queue <cit.>, which was further reviewed in Ref. <cit.>. Similarly, stochastic models of nascent RNA kinetics have been recently mapped to the G/D/∞ queue <cit.> that is intimately connected to renewal theory <cit.>. However, beyond these two queueing systems and few recent studies <cit.>, no further connection between gene expression modelling and queueing theory has been explored. In this review, we establish a deep connection between stochastic models of gene expression and queueing theory. Most results from queueing theory that we review here are old, but they seem to be unknown in the gene expression modelling community. We show that the solution of many stochastic models of gene expression obtained using the master equation approach can be readily obtained from queueing theory. However, because of the generality of this theory, it is clear that it can address the solution of much more complex models of gene expression than presently considered, and we hope this review gives readers the tools to achieve this goal. § STOCHASTIC GENE EXPRESSION AS A QUEUING SYSTEM The output of models of gene expression is typically either the statistics of RNA or protein counts; rarely the description of both is considered because the simultaneous measurement of RNA and protein in the same cell is challenging. In what follows, we will mostly focus on the RNA description of gene expression; where appropriate, we will discuss the protein description. We consider a gene producing RNA which eventually gets degraded. As a measure of the gene's activity, we are interested in the number of RNA molecules from that gene that are present in the cell at a given time. By degradation, we loosely mean any biological process that reduces the number of the RNA of interest in the cell. For example, if we are interested in nascent RNA, then by “degradation" we mean transcription elongation and termination, followed by the processing of the nascent RNA into mature RNA. Imagine now that we are able to record the exact time at which new RNA molecules are produced, as well as the time it takes each RNA molecule to degrade. We refer to the former as the arrival epochs, and to the latter as the service times. We assume that the molecular machinery responsible for degradation is abundant, so that RNA becomes available for degradation as soon as it is produced. The process of RNA production and degradation that we have just described is an example of a queueing system. The six main characteristics that define a queueing system are: (1) the arrival process A, (2) the service process S, (3) the number of servers c, (4) the capacity of the queue K, (5) the calling population N, and (6) the queue's discipline D. These characteristics are usually summarized using the extended Kendall's notation A/S/c/K/N/D. The arrival process A describes how often customers arrive at the system, and whether they arrive one at a time or in batches. 
Batch arrivals are typically denoted by A^X. The service process S describes how long it takes to serve each customer. A given server may serve one customer at a time, or a batch of customers. The number of servers c may be any number from 1 to ∞. It is usually assumed that servers operate in parallel and are independent of each other. If all servers are busy, then new customers arriving at the system must queue for the service. The total number of customers in the system, which includes customers that are waiting and customers that are being served, is called the queue length. For infinite-server queues, the queue length is equal to the number of busy servers. The capacity K of the queue is the maximum number of customers that are allowed to queue for the service at any given time. If the queue length reaches this number, then no further customers are allowed to join the queue until the queue length drops below this number due to service completion. The calling population N is the total number of customers, which, if finite, may affect the arrival process. Finally, the queue's discipline D describes how the next customer to be served is selected among the queueing population. A common example is the first come, first served discipline (FCFS), in which the customers are served in the order in which they arrive. The default values are K=∞, N=∞ and D=FCFS, in which case a simpler notation A/S/c is used instead of A/S/c/K/N/D. In our queueing system of gene expression, the “customers" are RNAs, the arrival process is RNA production (transcription), and the service process is RNA degradation (including other processes that may reduce their number). We assume that RNAs become available to the degradation machinery as soon as they are produced, which is equivalent to having an infinite number of servers. In this case, the queue length is equal to the number of busy servers, i.e. the number of RNA molecules (from the gene of interest) that are present in the cell. The capacity of the queue and the calling population are assumed to be infinite. The queueing discipline does not apply to an infinite-server queue, as there is no queue, only the number of busy servers. However, since RNAs begin their service in the order in which they are produced, we set D=FCFS. This, of course, does not mean the RNA that was produced first will degrade first, unless the degradation process is deterministic taking a fixed amount of time. Based on these characteristics, we conclude that our description of stochastic gene expression is equivalent to a queueing system A/S/∞, where A and S are the yet unspecified arrival and service processes, respectively, and there are infinitely many servers. We now focus on the arrival process of RNA production and the service process of RNA degradation. We set up these two processes within the framework of Markov (jump) processes, and discuss later their non-Markovian generalizations. We consider a model of gene expression in which the gene switches between multiple gene states, labelled by G_1,…,G_S, and eventually produces RNA, which is then degraded through a process with multiple states labelled by M_1,…,M_R (Fig. <ref>). Transitions between gene states may reflect individual biochemical events, such as binding of transcription factors at the promoter, or more phenomenologically, a combination of biochemical events resulting in the gene being active or inactive. RNA production involves two types of transitions: type 0 and type 1. 
Transitions of type 0 occur between two distinct gene states without the production of RNA. Transitions of type 1 occur between two gene states, not necessarily distinct, and involve the production of RNA. All transitions are assumed to be Markovian (memoryless) with constant (time-independent) rates. As we show later, this model includes many of the existing stochastic models of gene expression as special cases. §.§ Stochastic models of RNA production We now look more closely into the part of the model describing the production of RNA. Since transitions between gene states are Markovian, the sojourn time the gene spends in each state i=1,…,S is exponentially distributed, and we denote the rate of this distribution by λ_i. After the sojourn time, one of the two following transitions occurs. With probability λ_i p_0(i→ j), the gene jumps from state i to state j≠ i without producing an RNA molecule, or with probability λ_i p_1(i→ j), the gene jumps from state i to j and produces an RNA molecule. The probabilities p_0(i→ j) and p_1(i→ j) satisfy ∑_0ptj=1j≠ i^Sp_0(i→ j)+∑_j=1^Sp_1(i→ j)=1, i=1,…,S. If W_0(i→ j) and W_1(i→ j) denote the transition rates from i to j of type 0 (no RNA produced) and type 1 (RNA produced) respectively, then λ_i, p_0(i→ j) and p_1(i→ j) can be expressed in terms of these rates as λ_i=∑_0ptj=1j≠ i^SW_0(i→ j)+∑_j=1^SW_1(i→ j), p_0(i→ j)=W_0(i→ j)/λ_i, and p_1(i→ j)=W_1(i→ j)/λ_i. This process is known in queueing theory as the Markovian arrival process (MAP) <cit.>. The dynamics of the MAP can be expressed using S× S matrices D_0 and D_1 defined as [D_0]_ij=-λ_i, i=j W_0(i→ j), i≠ j, [D_1]_ij=W_1(i→ j). The matrix D_0 accounts for transitions of type 0 during which no RNA is produced, whereas the matrix D_1 accounts for transitions of type 1 that result in the production of RNA. Let X(t) denote the gene state at time t, Y(t) the number of transcription events until t, and P_n,i(t) the joint probability that Y(t)=n and X(t)=i. The probability row vector P_n(t)=(P_n,1,…,P_n,S) satisfies the following master equation, dP_0/dt=P_0 D_0, d P_n/dt=P_n D_0+P_n-1D_1, n≥ 1. We note that the transition matrix of the full process is equal to D_0+D_1, from where it follows that (D_0+D_1)1^T=0, where 1=(1,…,1) and 1^T is the transpose of 1. A generalization of the MAP to batch arrivals is called the batch Markovian arrival process (BMAP), in which the probability p_1(i→ j) is replaced with p_k(i→ j), where k≥ 1 is the batch size. If the batch size distribution a_k is assumed to be independent of i and j, then p_k(i→ j)=a_k W_1(i→ j)/λ_i. Hence, instead of only one matrix D_1, there is a matrix D_k for every batch size k≥ 1, and the previous equation generalizes to dP_0/dt=P_0 D_0, d P_n/dt=P_n D_0+∑_k=1^nP_n-kD_k, n≥ 1. The MAP is a versatile process that includes several other processes as special cases [Fig. <ref>(a)]. The simplest MAP is the Poisson process (denoted by M for Markovian or memoryless), which has only one state. This process describes a gene that is always active and produces RNA at exponentially distributed intervals [Fig. <ref>(b)]. One way to generalize the Poisson process is to have the arrival rate controlled by a finite-state Markov process. This process, which is called the Markov modulated Poisson process (MMPP), is a special case of the MAP in which D_1 is a diagonal matrix <cit.>. The simplest example of this arrival process is a gene that switches between two states and produces RNA from each state with a different rate [Fig. <ref>(c)]. 
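To make the matrices D_0 and D_1 concrete, the following minimal Python sketch (our own illustration, not code from the paper; the rate values, the function name simulate_map_arrivals and all variable names are hypothetical) builds them for the two-state example just described, in which RNA is produced from each state at a different rate, and simulates the arrival epochs of the corresponding MAP with a Gillespie-type algorithm.

import numpy as np

rng = np.random.default_rng(1)

# Assumed (illustrative) rates: gene-state switching G1->G2 and G2->G1,
# and state-dependent RNA production rates r1, r2.
k12, k21 = 0.5, 1.0
r1, r2 = 0.2, 5.0

D0 = np.array([[-(k12 + r1), k12],
               [k21, -(k21 + r2)]])   # type-0 transitions (no RNA produced)
D1 = np.diag([r1, r2])                # type-1 transitions (RNA produced); diagonal D1 = MMPP

assert np.allclose((D0 + D1).sum(axis=1), 0.0)   # D0 + D1 is the full transition-rate matrix

def simulate_map_arrivals(D0, D1, t_end, state=0):
    """Gillespie-type simulation of the arrival epochs of a MAP."""
    S = D0.shape[0]
    t, arrivals = 0.0, []
    while True:
        lam = -D0[state, state]                  # total exit rate lambda_i of the current state
        t += rng.exponential(1.0 / lam)
        if t > t_end:
            return np.array(arrivals)
        w0 = np.where(np.arange(S) == state, 0.0, D0[state])   # rates W_0(i -> j), no RNA
        w1 = D1[state]                                          # rates W_1(i -> j), RNA produced
        w = np.concatenate([w0, w1])
        j = rng.choice(2 * S, p=w / w.sum())
        if j >= S:                   # type-1 transition: record an arrival (transcription event)
            arrivals.append(t)
            state = j - S
        else:                        # type-0 transition: only the gene state changes
            state = j

arrival_times = simulate_map_arrivals(D0, D1, t_end=1000.0)
print(len(arrival_times), np.mean(np.diff(arrival_times)))

Because D_1 is diagonal here, the gene state is unchanged at each production event; replacing D_1 by a matrix with off-diagonal entries turns the same code into a simulation of a general MAP.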
An example of stochastic gene expression model with this arrival process is the leaky telegraph model <cit.>. We note that in the MMPP, the gene remains in the same state immediately after producing RNA. A gene that produces RNA from multiple states, but is allowed to change state upon the production of RNA (in which case D_1 is no longer a diagonal matrix), is described by a general MAP. Another way to generalize the Poisson process is to allow for non-exponential inter-arrival times, while keeping the inter-arrival times uncorrelated. This defines a renewal process (denoted by G for general or arbitrary inter-arrival distribution, or GI to emphasize that the inter-arrival times are mutually independent). The simplest example of the model in Fig. <ref> is a gene that switches between two states of activity and inactivity, and produces RNA from the active state [Fig. <ref>(d)]. An example of stochastic gene expression model with this arrival process is the (random) telegraph model <cit.>. Another example of the renewal process is a three-state model that accounts for the binding of RNA polymerase (RNAP) and its release into productive elongation <cit.>. Here gene state changes upon arrival since the released RNAP is lost and a new RNAP needs to be recruited at the promoter. As we show below, the MAP is not a renewal process, unless D_1 takes a special form, in which case the inter-arrival times are phase-type distributed. The MMPP is a renewal process only if all but one diagonal elements of D_1 are zero, i.e. if the gene always produces RNA from the same state and remains in that state immediately after producing RNA. Finally, we mention a generalization of the renewal process in which the inter-arrival time distribution itself is controlled by a Markov process, which is called the semi-Markov process (SMP). A semi-Markov process is defined as a sequence of random variables (X_n, T_n), where T_n is time of the n-th arrival, and X_n is the state of the system in the time interval [T_n, T_n+1⟩. Given X_n=i, the inter-arrival time t_n+1=T_n+1-T_n and the new state X_n+1=j are selected according to the probability P(t_n+1≤ t, X_n+1=j| X_n=i)=Q_ij(t). The MAP is a special case of the SMP with the following conditional probability matrix Q_ij(t) <cit.>, Q_ij(t)=[∫_0^tdt'e^D_0 t'D_1]_ij=[(I-e^D_0 t)(-D_0^-1D_1).]_ij. When mapping the MAP to the SMP, only the states at the arrival epochs are recorded. These states form what is known as the embedded Markov chain, whose probability transition matrix is D_0^-1D_1. Depending on the matrices D_0 and D_1, some states of the MAP may appear as transient states of the embedded Markov chain. These states do not appear at the arrival epochs, but they do leave an imprint in the inter-arrival time distributions of the SMP. Therefore, the MAP is preferred over the SMP when we want to give the process between arrivals a Markovian interpretation, whereas the SMP is preferred over the MAP when we have limited information about the process between arrivals. In the following, we show two important characteristics of the MAP: (1) that its inter-arrival times are phase-type distributed, and (2) that its successive inter-arrival times are mutually correlated. The appeal of phase-type distributions is that their set is dense in the field of all positive-valued distributions, meaning that the distribution of any positive-valued random variable can be approximated by a phase-type distribution <cit.>. 
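As a complement, the short sketch below (again our own illustration with the same assumed rates as above, not code from the paper) evaluates the embedded-chain transition matrix (-D_0)^{-1}D_1 and the semi-Markov kernel Q(t)=(I-e^{D_0 t})(-D_0^{-1}D_1) for that two-state example; it also checks numerically that the rows of the embedded-chain matrix sum to one.

import numpy as np
from scipy.linalg import expm

k12, k21, r1, r2 = 0.5, 1.0, 0.2, 5.0        # assumed illustrative rates
D0 = np.array([[-(k12 + r1), k12],
               [k21, -(k21 + r2)]])
D1 = np.diag([r1, r2])

# Transition matrix of the embedded Markov chain seen at the arrival epochs.
P_embedded = np.linalg.solve(-D0, D1)        # (-D0)^{-1} D1
print(P_embedded, P_embedded.sum(axis=1))    # rows sum to one since (D0 + D1) 1^T = 0

def smp_kernel(D0, D1, t):
    """Semi-Markov kernel Q(t) = (I - exp(D0 t)) (-D0)^{-1} D1 of the MAP."""
    I = np.eye(D0.shape[0])
    return (I - expm(D0 * t)) @ np.linalg.solve(-D0, D1)

print(smp_kernel(D0, D1, t=2.0))   # P(next inter-arrival <= t, next state = j | current state = i)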
Correlated inter-arrival times are important for modelling complex arrival patterns. Uncorrelated inter-arrival times, on the other hand, are much easier to work with analytically. This prompts us to derive a renewal condition under which the inter-arrival times become uncorrelated (hence identically distributed) random variables, meaning that the MAP becomes a renewal process. Let π_n,i denote the probability that the gene is in state i at the time of the n-th arrival of RNA. Since D_0 governs transitions during which no RNA is produced, the probability density function of the inter-arrival time t_n+1 until the (n+1)-th arrival can be written as f_n+1(t_n+1)=∑_i=1^S∑_j=1^S∑_k=1^Sπ_n,i[e^D_0 t]_ijW_1(j→ k)=π_n e^D_0 tD_1 1^T=π_n e^D_0 t(-D_0 1^T). The probability density function in Eq. (<ref>) corresponds to the phase-type distribution PH(π_n,D_0). The joint probability density function of t_n and t_n+1 reads f_n,n+1(t_n,t_n+1)=π_n-1e^D_0 t_nD_1e^D_0 t_n+1D_11^T. In general, f_n,n+1(t_n,t_n+1)≠ f_n(t_n) f_n+1(t_n+1), unless D_1 has a special structure. According to Eqs. (<ref>) and (<ref>), the i-th row of D_1 is equal to λ_i multiplied by a row vector whose j-th element is equal to p_1(i→ j), the probability of changing the gene state from i to j upon the production of RNA. If p_1(i→ j) is independent of i for every i=1,…,S, then the Markovian arrival process resets to the same initial state vector after every RNA production event. In that case, D_1 takes the form of D_1=(-D_01^T)κ, where κ is a row vector whose j-th element κ_j=p_1(i→ j), since p_1(i→ j) is independent of i. After inserting D_1 into Eq. (<ref>), we get f_n,n+1(t_n,t_n+1)=κe^D_0 t_n(-D_01^T)κe^D_0 t_n+1(-D_01^T)=f(t_n+1)f(t_n), where f_n(t_n)=(-κe^D_0 t_nD_01^T)≡ f(t_n). Hence, if D_1 is made up of rows that are all equal up to a scaling factor, then the two successive inter-arrival times in the MAP are uncorrelated, and the MAP becomes a renewal process. We will refer to Eq. (<ref>) as the renewal condition of the MAP. §.§ Stochastic models of RNA degradation We assume that the RNA degradation times of individual RNAs are independent and identically distributed random variables whose probability distribution is independent of the RNA production process and the RNA number. Hence, in our model the service process is fully specified by the service time distribution, which does not change over time and is the same for every RNA. In the parlance of the CME, this means that the propensity of RNA degradation is assumed to be linear in the RNA molecule number. This is different from some models of gene expression in which degradation is mediated by enzymes, such that the effective propensity, obtained under timescale separation when the enzyme species are eliminated, is of the Hill form <cit.>. Examples of service time distributions that we consider include a deterministic (degenerate) distribution (denoted by D), an exponential distribution (denoted by M) and a phase-type distribution (denoted by PH). An unspecified or general distribution is denoted by G. For the general multistep RNA degradation process illustrated in Fig. <ref>, the service time distribution is a phase-type distribution. This type of distribution describes the absorption time of a Markov chain consisting of a finite number of transient states and one absorbing state. 
Let D_deg denote the (R× R) transition rate matrix of the RNA degradation process, such that [D_deg]_ij=-∑_0ptj=1j≠ i^RW_deg(i→ j), i=j W_deg(i→ j), i≠ j, where W_deg(i→ j) is the rate of transition from state M_i to M_j. The probability density function of the service time is then given by h(t)=e_1 e^D_degt(-D_deg1^T), where e_1=(1,0,…,0), 1=(1,…,1) and 1^T is the transpose of 1. Hence, the RNA degradation times are distributed according to the phase-type distribution PH(κ_1,D_deg). Examples of phase-type distributions include the exponential distribution, h(t)=β e^-β t, and the Erlang distribution of shape R, h(t)=λ^R t^R-1e^-λ t/(R-1)!. The Erlang distribution models a service process where there are many fast steps but in which only R steps are rate-limiting. We note that the mean and the variance of the Erlang distribution are R/λ and R/λ^2, respectively, which gives the coefficient of variation CV=1/√(R). If we fix the mean T=R/λ, and set R→∞ and λ→∞, we get the deterministic distribution h(t)=δ(t-T). Hence, we can use the deterministic distribution to approximate a service process that consists of many similar fast steps. One such example is transcription elongation, during which RNA polymerase traverses thousands of nucleotides and produces nascent RNA, one nucleotide at a time. In this case, the customers are nascent RNAs, the arrival process is transcription initiation, and the service process are the processes of transcriptional elongation and termination. § THE LANDSCAPE OF INFINITE-SERVER QUEUES In the previous section, we showed that stochastic gene expression can be modelled by an infinite-server queue A/S/∞, where RNA production (transcription) is the arrival process A, RNA degradation is the service process S, and the number of observed RNA is the queue length (the number of busy servers). We introduced the Markovian arrival process (MAP) as a model for transcription, which includes the Poisson process (denoted by M) and the Markov-modulated Poisson process (MMPP) as special cases. We derived the renewal condition under which the MAP becomes a renewal process (denoted by G). We also introduced the semi-Markov process (SMP) as a generalization of the renewal process, of which the MAP is a special case. Finally, we showed that RNA degradation is specified by the RNA degradation time distribution, which is assumed to be the same for all RNA. Examples of service time distributions include a deterministic (degenerate) distribution (D), a phase-type distribution (PH) of which an exponential distribution (M) is a special case, and a general (unspecified) distribution (G). In this section, we review known results for six infinite-server queues that are of particular importance for stochastic gene expression modelling. We focus on queues whose arrivals are described by renewal and Markov-modulated processes, as these type of arrivals are present in most of the stochastic gene expression models in the literature. The main results are summarized in Table <ref>. For each queueing system, we report whether mathematical expressions for the non-stationary and stationary queue length distributions and their corresponding moments are known. Some results are in a closed form, whereas others require inverting the Laplace transform (denoted by LT), computing the moments recursively (denoted by RR) or approximating the probability distribution by truncated series (TS). Below we show these results in detail for the G/M/∞, MMPP/M/∞, G/D/∞ and M^X/G/∞ queues. 
We do not show results for the G^X/G/∞ queue, which have been reviewed in detail elsewhere <cit.>. We also do not show results for the MMPP/G/∞ queue, as they are quite complicated, and only the mean and the variance have been reported. Other infinite-server queues not mentioned in Table <ref> are discussed later. §.§ Results for the G/M/inf queue The G/M/∞ queue is an infinite-server queue in which the inter-arrival times are independent and identically distributed random variables, and the service times are exponentially distributed. In this subsection, we review known results for the non-stationary and stationary distributions of the queue length (the observed number of RNA). These results were derived in Ref. <cit.>. §.§.§ Non-stationary queue length distribution Let ϕ(s) denote the Laplace transform of the inter-arrival time distribution f(t), ϕ(s)=ℒ[f](s)=∫_0^∞dt e^-stf(t), α the mean inter-arrival time, α=∫_0^∞dt t f(t)=-.d/dsϕ(s)|_s=0, and β the service rate. Both α and β are assumed to be finite. Let N(t) denote the queue length (the number of RNA) at time t, P(m,t) the probability that N(t)=m, and G(z,t) the corresponding probability generating function, G(z,t)=∑_m=0^∞P(m,t)z^m. The initial time t=0 is assumed to be an arrival epoch and the initial queue length is N(0)=0. Under these assumptions, the probability generating function G(z,t) satisfies an integral equation that is derived using the renewal property of the arrival process, G(z,t)=1-F(t)+∫_0^tdt'f(t')G(z,t-t'){1+(z-1)[1-H(t-t')]}, where F(t)=∫_0^tdt'f(t') and H(t)=1-exp(-β t) are the cumulative distribution functions of the inter-arrival and service times, respectively. Eq. (<ref>) can be solved by Laplace transform, yielding ℒ[G](z,s)=∫_0^∞dte^-stG(z,t)=1/s+∑_j=1^∞(z-1)^j/s+jβ∏_i=0^j-1ϕ(s+i β)/1-ϕ(s+i β), from where the following expression for the Laplace transform of P(m,t) is obtained, ℒ[P](m,s)=∫_0^∞dt e^-stP(m,t)=1/s-∑_j=1^∞(-1)^j-1/s+jβ∏_i=0^j-1ϕ(s+i β)/1-ϕ(s+i β), m=0 ∑_j=m^∞(-1)^j-mjm/s+jβ∏_i=0^j-1ϕ(s+i β)/1-ϕ(s+i β), m≥ 1. . §.§.§ Stationary queue length distribution The stationary distribution P(m)=lim_t→∞P(m,t) can be obtained by taking the limit lim_s→ 0sℒ[P(m,s)], which gives P(m)= 1-∑_j=1^∞(-1)^j-1C_j-1/αβ j, m=0 ∑_j=m^∞(-1)^j-mjmC_j-1/αβ j, m≥ 1 , where C_i for i=0,1,… are defined as C_0=1, C_j=∏_i=1^jϕ(iβ)/1-ϕ(i β), j≥ 1. The moments of P(m) can be computed from the probability generating function G(z), which is given by G(z)=∑_m=0^∞P(m) z^m=1+∑_m=1^∞C_m-1/αβ m(z-1)^m. The mean and the variance of the stationary queue length distribution read μ=1/αβ, σ^2=1/αβ[1/1-ϕ(β)-1/αβ], where the expression in the square brackets is equal to the Fano factor FF, FF=σ^2/μ=1/1-ϕ(β)-1/αβ. The above result can be used to obtain the lower and upper bounds on the Fano factor for any distribution of the inter-arrival times with finite mean and variance. Let CV_a denote the coefficient of variation of the inter-arrival time distribution and δ=CV_a^2. The following sharp bounds on the Laplace transform ϕ(s) were derived in Ref. <cit.>, e^-α s≤ϕ(s)≤δ/1+δ+1/1+δe^-α s(1+δ), s≥ 0. Using this result together with Eq. (<ref>) yields u(αβ)≤ FF≤(1+δ)u(αβ(1+δ)), where u(x)=1/(1-e^-x)-1/x. Since the function u(x) is monotonically increasing from u(0)=1/2 to u(∞)=1, we conclude that 1/2≤ FF≤ 1+CV_a^2. This result shows that the Fano factor of the queue length (the number of RNA) in the stationary limit cannot be smaller than 1/2, regardless of the inter-arrival time distribution or the service (RNA degradation) rate. 
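The Fano factor formula and the bounds just derived are easy to evaluate for concrete inter-arrival time distributions. The minimal Python sketch below (the three distributions and all parameter values are our own illustrative choices) computes FF=1/(1-ϕ(β))-1/(αβ) directly from the Laplace transform and confirms 1/2 ≤ FF ≤ 1+CV_a^2 in each case; for exponential inter-arrival times it recovers FF=1, as expected for the Poissonian M/M/∞ limit.

beta = 1.0          # RNA degradation (service) rate; all parameter values are illustrative
lam = 2.0           # arrival rate 1/alpha for the first two examples
p, l1, l2 = 0.3, 0.8, 6.0
alpha_h = p / l1 + (1 - p) / l2
cv2_h = (2 * p / l1**2 + 2 * (1 - p) / l2**2) / alpha_h**2 - 1

# (name, Laplace transform phi(s), mean alpha, squared CV of the inter-arrival time)
cases = [
    ("exponential", lambda s: lam / (lam + s), 1 / lam, 1.0),
    ("Erlang-3",    lambda s: (3 * lam / (3 * lam + s))**3, 1 / lam, 1.0 / 3),
    ("hyperexponential",
        lambda s: p * l1 / (l1 + s) + (1 - p) * l2 / (l2 + s), alpha_h, cv2_h),
]

for name, phi, alpha, cv2 in cases:
    ff = 1.0 / (1.0 - phi(beta)) - 1.0 / (alpha * beta)    # stationary Fano factor
    assert 0.5 <= ff <= 1.0 + cv2                          # bounds derived above
    print(f"{name:16s} FF = {ff:.4f}   1 + CV_a^2 = {1 + cv2:.4f}")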
§.§.§ Examples of G/M/inf queues We use Eq. (<ref>) to derive the stationary probability generating function G(z) for the special case of the PH/M/∞ queue, i.e. for stochastic models of gene expression with arbitrary connections between gene states (as in Fig. <ref>) under the renewal condition (<ref>) of the MAP. In this case, the inter-arrival time distribution is PH(κ,D_0), whose Laplace transform ϕ(s) is given by ϕ(s)=κ1/sI-D_0(-D_01^T), where I is the S× S identity matrix. As ϕ(s) is a rational function of s and ϕ(0)=1, we can always write ϕ(s)/[1-ϕ(s)] in the following form ϕ(s)/1-ϕ(s)=c(s+a_1)…(s+a_p)/s(s+b_1)…(s+b_q), which also serves as an implicit definition for the parameters p, q, c, a_1,…,a_p and b_1,…,b_q. We note that since ϕ(s)=1-α s+O(s^2) as s→ 0, μ=1/c(b_1… b_q/a_1… a_p). Inserting (<ref>) into (<ref>) gives, after some algebra, G(z)=1+1/β^p-q[_p F_q(a_1,…,a_p;b_1,…,b_q;c β^p-q-1(z-1))-1], where _p F_q(a_1,…,a_p;b_1,…,b_q;z) is the generalized hypergeometric function defined as _p F_q(a_1,…,a_p;b_1,…,b_q;z)=∑_m=0^∞(a_1)_m…(a_p)_m/(b_1)_m…(b_q)_mz^m/m!, and (x)_n=x(x+1)…(x+n-1) is the rising factorial. In the non-stationary case, Eqs. (<ref>)-(<ref>) can be inverted by partial fraction decomposition, for which many efficient methods have been developed <cit.>. Examples of gene expression models that are equivalent to the PH/M/∞ queue are presented in Fig. <ref>. Fig. <ref>(a) shows the one-state model in which the gene is always active and produces RNA at exponential intervals. Fig. <ref>(b) shows the (random) telegraph model in which the gene switches between two states of activity and inactivity and produces RNA from the active state <cit.>. Fig. <ref>(c) shows a generalization of the telegraph model to multiple transcriptionally inactive states that are accessed sequentially <cit.>. These three models are examples of phenomenological models in which the gene states are not linked to particular molecular events. In contrast, Fig. <ref>(d) shows a three-state model which accounts for the binding of RNA polymerase (RNAP) and its release into productive elongation <cit.>. Finally, Fig. <ref>(e) shows a canonical model of eukaryotic transcription <cit.> which includes the on and off switching of the promoter, the binding of six general transcription factors (IID, IIA, IIB, IIF, IIE and IIH) and RNAP, the unwinding of the double-stranded DNA, and the promoter proximal pausing of RNAP in metazoans <cit.>. The stationary RNA number distributions for the models in Fig. <ref>(a), (b), (c) and (d) have been previously derived using the master equation approach <cit.>. The same distributions can be directly obtained using Eq. (<ref>). Here we show this derivation for the telegraph model in which the gene switches between two states G_1 and G_2 with rates W_0(1→ 2)=k_on and W_0(2→ 1)=k_off, and produces RNA from the state G_2 with rate W_1(2→ 2)=k_syn [Fig. <ref>(b)]. The reaction scheme for this model is G_1[k_off]k_onG_2G_2+M, M∅. The Laplace transform of the inter-arrival time distribution for the telegraph model is given by <cit.> ϕ(s)=k_syn(k_on+s)/s^2+s(k_on+k_off+k_syn)+k_onk_syn. After inserting Eq. (<ref>) into ϕ(s)/[1-ϕ(s)], we get ϕ(s)/1-ϕ(s)=k_syn(s+k_on)/s(s+k_on+k_off). By comparing this expression to Eq. (<ref>), we identify p=1, q=1, c=k_syn, a_1=k_on and b_1=k_on+k_off. After inserting these results into Eq. (<ref>), we get the following expression for the probability generating function G(z), G(z)=_1F_1(k_on/β;k_on/β+k_off/β;k_syn/β(z-1)). 
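As a numerical check of this expression, the sketch below compares the RNA number distribution read off from G(z)=_1F_1(a;b;c(z-1)), with a=k_on/β, b=(k_on+k_off)/β and c=k_syn/β, against the steady state of the telegraph-model master equation solved directly on a truncated state space. The parameter values are arbitrary illustrative choices; the Taylor-coefficient formula used for P(m) follows from the standard differentiation rule for the confluent hypergeometric function rather than from an expression given in the text.

import numpy as np
from math import factorial
from scipy.special import hyp1f1, poch

# Illustrative telegraph-model parameters (not taken from the text).
k_on, k_off, k_syn, beta = 0.8, 1.5, 6.0, 1.0
a, b, c = k_on / beta, (k_on + k_off) / beta, k_syn / beta

def p_from_generating_function(m):
    """m-th Taylor coefficient of G(z) = 1F1(a; b; c(z-1)) at z = 0."""
    return c**m / factorial(m) * poch(a, m) / poch(b, m) * hyp1f1(a + m, b + m, -c)

# Reference: steady state of the telegraph-model master equation,
# truncated at M RNA molecules; states are (gene state g, RNA number m).
M = 80
idx = lambda g, m: g * (M + 1) + m
Q = np.zeros((2 * (M + 1), 2 * (M + 1)))        # generator, Q[i, j] = rate i -> j
for m in range(M + 1):
    Q[idx(0, m), idx(1, m)] = k_on              # G1 -> G2
    Q[idx(1, m), idx(0, m)] = k_off             # G2 -> G1
    if m < M:
        Q[idx(1, m), idx(1, m + 1)] = k_syn     # RNA production from G2
    if m > 0:
        Q[idx(0, m), idx(0, m - 1)] = beta * m  # RNA degradation
        Q[idx(1, m), idx(1, m - 1)] = beta * m
np.fill_diagonal(Q, -Q.sum(axis=1))
A = np.vstack([Q.T, np.ones(Q.shape[0])])       # solve pi Q = 0 with sum(pi) = 1
rhs = np.zeros(Q.shape[0] + 1); rhs[-1] = 1.0
pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
p_cme = pi[:M + 1] + pi[M + 1:]                 # marginal over the gene state

for m in range(20):
    assert abs(p_from_generating_function(m) - p_cme[m]) < 1e-8
print("G(z) = 1F1 reproduces the truncated master-equation steady state")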
We note that the process of switching between two states with arrivals from one of the states is known in queueing theory as the interrupted Poisson process (IPP). The telegraph model is therefore equivalent to the IPP/M/∞ queue. The steady-state distribution of the queue length for this queueing system was first derived in 1973 <cit.>, more than 20 years before the seminal paper by Peccoud and Ycart on the telegraph model <cit.>. §.§ Results for the MMPP/M/inf queue The MMPP/M/∞ queue is an infinite-server queue in which the arrival rate changes according to a finite-state Markov process, and the service times are exponentially distributed. In this subsection, we review known results for factorial moments of the non-stationary and stationary queue length distribution, which were derived in <cit.>. We note that if the matrix D_1 describing arrivals has all but one diagonal element equal to zero, then the arrival process is renewal and the results for the G/M/∞ queue are applicable. Here we consider the general case in which the renewal condition (<ref>) may not be satisfied. §.§.§ Factorial moments of the non-stationary queue length distribution Let X(t) denote the state of the arrival process (the gene state), N(t) the queue length (the number of RNA) at time t, and P_i(m,t) the joint probability that X(t)=i and N(t)=m. Using matrices D_0 and D_1, the master equation for P(m,t)=(P_1(m,t),…,P_S(m,t)) can be written as d/dtP(m,t)=P(m-1)D_1-P(m)(-D_0+m β I)+(m+1)βP(m+1), m=0,1,2…, where P(-1)≡ (0,…,0) and I is the S× S identity matrix. Let f_s(t) denote the s-th (vector) factorial moment of P(m,t), f_s(t)=s!∑_m=s^∞msP(m,t), s=0,1,2,…. If f_s(0) exists for s≥ 0, then f_s(t) exists for all s≥ 0 and t≥ 0 <cit.>. Furthermore, f_s(t) satisfy d/dtf_s(t)=sf_s-1(t)D_1-f_s(t)(sβ I-D_0-D_1), where f_-1≡ 0. This equation can be solved recursively. §.§.§ Factorial moments of the stationary queue length distribution Let P(m) denote the stationary limit of P(m,t). In this limit, Eq. (<ref>) becomes P(m-1)D_1-P(m)(-D_0+m β I)+(m+1)βP(m+1)=0, m=0,1,2…, This equation can be seen as a recurrence relation for P(m), provided P(0) is known. The latter can be computed from P(0)=∑_s=0^∞f_s (-1)^s/s!, where f_s is the s-th (vector) factorial moment of P(m), f_s=s!∑_m=s^∞msP(m), s=0,1,2,…. The series in Eqs. (<ref>) and (<ref>) both converge for any integer s, which was proved in Ref. <cit.>. From Eq. (<ref>), f_0=∑_m=0^∞P(m)=P, where P is the steady-state probability vector of the Markov process whose transition matrix is D_0+D_1. This means that P and therefore f_0 can be computed by solving the steady-state master equation P(D_0+D_1)=0. The other factorial moments can be computed by multiplying Eq. (<ref>) by m(m-1)…(m-s+1) and summing over m, which yields f_s=sf_s-1D_1(sβ I-D_0-D_1)^-1, s=1,2,3…. Based on these results, the following procedure can be set up to compute the stationary queue length distribution. In the first step, f_0=P is computed by solving P(D_0+D_1)=0. In the second step, the first K factorial moments f_s are computed using Eq. (<ref>), and P(0) is approximated by the sum of the first K terms in Eq. (<ref>). The integer K is selected to achieve the desired numerical precision of P(0). In the third step, P(m) is computed recursively using Eq. (<ref>) up to some value of m for which P(m) becomes negligibly small. In the fourth and final step, P(m) is multiplied by 1^T to get P(m). 
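The four-step procedure translates directly into a few lines of linear algebra. The sketch below implements it for a hypothetical two-state MMPP (a leaky-telegraph-like gene; all rate values are illustrative) and checks that the resulting distribution is normalized and reproduces the mean f_1 1^T. The forward recurrence is used exactly as described; for much larger RNA numbers or stiffer rate constants its numerical accuracy should be monitored, but for this small example it is unproblematic.

import numpy as np
from math import factorial

# Hypothetical two-state MMPP (leaky-telegraph-like gene); all rates illustrative.
k12, k21 = 0.5, 0.7          # switching rates between the two gene states
r1, r2 = 1.0, 8.0            # RNA production rates in states 1 and 2
beta = 1.0                   # RNA degradation rate
S = 2
I = np.eye(S)
D1 = np.diag([r1, r2])
D0 = np.array([[-(k12 + r1), k12],
               [k21, -(k21 + r2)]])
D = D0 + D1

# Step 1: f_0 = P, the stationary row vector of the gene process (P D = 0, P 1 = 1).
A = np.vstack([D.T, np.ones(S)])
rhs = np.zeros(S + 1); rhs[-1] = 1.0
f = [np.linalg.lstsq(A, rhs, rcond=None)[0]]

# Step 2: factorial moments f_s = s f_{s-1} D1 (s beta I - D)^{-1},
# then P(0) approximated by the first K terms of sum_s (-1)^s f_s / s!.
K = 60
for s in range(1, K + 1):
    f.append(s * f[-1] @ D1 @ np.linalg.inv(s * beta * I - D))
P0 = sum((-1)**s * f[s] / factorial(s) for s in range(K + 1))

# Step 3: forward recurrence
# (m+1) beta P(m+1) = P(m)(-D0 + m beta I) - P(m-1) D1, with P(-1) = 0.
M = 40
P = [P0, P0 @ (-D0) / beta]
for m in range(1, M):
    P.append((P[m] @ (-D0 + m * beta * I) - P[m - 1] @ D1) / ((m + 1) * beta))

# Step 4: multiply by 1^T to obtain the marginal RNA number distribution.
p = np.array([Pm.sum() for Pm in P])

# Consistency checks: normalization, and mean equal to mu = f_1 1^T.
assert abs(p.sum() - 1.0) < 1e-6
assert abs(np.arange(M + 1) @ p - f[1].sum()) < 1e-5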
We note that a similar recurrence method for computing the stationary RNA number distribution for this system was recently developed in Ref. <cit.>. The moments of the stationary queue length distribution can be computed from Eq. (<ref>). The mean and the variance of the queue length are given by μ=f_11^T=PD_1(β I-D)^-11^T, σ^2=f_21^T+μ-μ^2=2PD_1(β I-D)^-1D_1(2β I-D)^-11^T+μ-μ^2, where D=D_0+D_1. These expressions can be further simplified by means of the Neumann series, ∑_n=0^∞A^n=(I-A)^-1, which is valid for any square matrix A such that detA<1. Applying this result to (xI-D)^-1 for x≠ 0 and noting that D1^T=0 and detD=0 (hence detD/x<1), we get D_1(xI-D)^-11^T=1/x∑_n=0^∞(-1)^nD_1 D^n1^T/x^n=D_11^T/x. Using this identity, the mean and the variance of the queue length simplify to μ=P(D_11^T)/β, σ^2=PD_1(β I-D)^-1(D_11^T)/β+μ-μ^2. The results for μ and σ^2 in this form were derived using the master equation approach in Ref. <cit.> for a general stochastic gene expression model that is equivalent to the MMPP/M/∞ queue discussed above. §.§.§ Examples of MMPP/M/inf queues Fig. <ref> illustrates examples of stochastic gene expression models that are equivalent to the MMPP/M/∞ queue. Fig. <ref>(a) shows the leaky telegraph model in which the gene switches between two distinct microscopic transcription factor binding configurations, and produces RNA from both configurations <cit.>. The exact stationary probability distribution of the RNA number for this model was computed in <cit.>, whereas in queueing theory, the same result was obtained previously in Ref. <cit.>. Fig. <ref>(b) shows a generalization of the leaky telegraph model to include multiple transcription factor binding sites <cit.>. Fig. <ref>(c) shows a stochastic model describing mRNA production from lysogeny maintenance promoter of bacteriophage lambda, in which gene states correspond to different binding combinations of the lambda repressor Cl <cit.>. §.§ Results for the G/D/inf queue The G/D/∞ queue is an infinite-server queue with independent and identically distributed inter-arrival times, and a fixed service time. In this subsection, we review results for the non-stationary and stationary probability distributions of the queue length using renewal theory <cit.>. §.§.§ Non-stationary queue length distribution Let N(t) denote the queue length (the number of RNA) at time t. The initial time is assumed to be an arrival epoch, and the initial queue length is N(0)=0. Let Y(t) denote the number of arrivals until time t, T_n the time of the n-th arrival, and t_n the inter-arrival time between the (n-1)-th and n-th arrival, i.e. t_n=T_n-T_n-1. The probability that Y(t)≥ n is given by P(Y(t)≥ n)=P(T_n≤ t)≡ K_n(t)=∫_0^tdt'f^*n(t'), where f^*n(t) is the n-fold convolution of f(t). Since P(Y(t)≥ n)=P(Y(t)=n)+P(Y(t)≥ n+1), we get P(Y(t)=n)=K_n(t)-K_n+1(t). The mean number of arrivals in (0,t), which is called the renewal function, is given by R(t)=∑_n=0^∞n P(Y(t)=n)=∑_n=1^∞K_n(t). The renewal function R(t) can be computed by inverting its Laplace transform, ℒ[R](s)=ϕ(s)/s[1-ϕ(s)], where ϕ(s)=ℒ[f](s). Let T denote the service time. Since the service time is fixed, the queue length at time t is equal to N(t)= Y(t), t≤ T, Y(t)-Y(t-T), t>T. For t≤ T, the probability distribution P(N(t)=m)≡ P(m,t)=K_m(t)-K_m+1(t). For t>T, we introduce the forward recurrence time τ, which is the time until the next arrival measured from some reference time t_0. 
The probability density function of τ can be computed from f_t_0(τ)=f(t_0+τ)+∫_0^t_0dt'r(t_0-t')f(t'+τ), where r(t)=dR/dt is called the renewal density, since r(t)dt is equal to the probability that an arrival occurs in (t,t+dt). The first term comes from having no arrivals before t_0, whereas the second term comes from having the previous arrival at some earlier time t_0-t'. If we know f_t_0(τ), then the non-stationary probability P(m,t) can be computed as follows. For m=0, P(0,t) is equal to the probability that the forward recurrence time τ is greater than T. For m≥ 1, P(m,t) is equal to the convolution of f_t_0 and K_m-1-K_m for t_0=t-T. Altogether, P(m,t)= K_m(t)-K_m+1(t), t≤ T, m≥ 0, ∫_T^∞dτ f_t-T(τ), t>T, m=0, ∫_0^Tdτ f_t-T(τ)[K_m-1(T-τ)-K_m(T-τ)], t>T, m≥ 1. The moments of the non-stationary queue length distribution can be computed recursively without computing the forward recurrence time distribution, see Ref. <cit.> for more details. The mean and the variance of the queue length read μ(t) = R(t), t≤ T, R(t)-R(t-T), t>T, σ^2(t) = 2∫_0^tdt'r(t')R(t-t')+R(t)-[R(t)]^2, t≤ T, 2∫_t-T^tdt'r(t')R(t-t')+μ(t)-[μ(t)]^2, t>T. §.§.§ Stationary queue length distribution The above calculation simplifies in the stationary limit. In this limit, lim_t→∞q(t)=1/α, where α is the mean inter-arrival time. Assuming that lim_t_0→∞f_t_0(t_0+τ)=0, from Eq. (<ref>) it follows that lim_t_0→∞f_t_0(τ)≡ f_∞(τ)=1-F(τ)/α, where F(t)=∫_0^tdt'f(t') is the cumulative distribution function of the inter-arrival time. Inserting Eq. (<ref>) into Eq. (<ref>) yields in the limit t_0→∞ P(m)= 1-∫_0^Tdτ f_∞(τ), m=0, ∫_0^Tdτ f_∞(τ)[K_m-1(T-τ)-K_m(T-τ)], m≥ 1. By taking the Laplace transform of P(m) with respect to T, we get ℒ[P(m)](s)=∫_0^∞dT P(m)e^-sT=α s-1+ϕ(s)/α s^2, m=0 [1-ϕ(s)]^2[ϕ(s)]^m-1/α s^2, m≥ 1. If the arrival process is a MAP under the renewal condition (<ref>), then the inter-arrival time distribution is a phase-type distribution whose Laplace transform is a rational function of s. In this case, P(m) can be obtained from Eq. (<ref>) using partial fraction decomposition <cit.>. The moments of P(m) can be computed from the probability generating function G(z) defined as G(z)=∑_m=0^∞z^m P(m). The Laplace transform of G(z) with respect to T is given by ℒ[G(z)](s)=∫_0^∞dT G(z)e^-sT=1/s+(z-1)[1-ϕ(s)]/α s^2[1-zϕ(s)]. The mean and the variance of the queue length are given by μ=T/α, σ^2=ℒ^-1{1+ϕ(s)/α s^2[1-ϕ(s)]}(T)-(T/α)^2, where ℒ^-1{…}(T) is the inverse Laplace transform evaluated at T. From the above result, we obtain two general results on the Fano factor FF without specifying the inter-arrival time distribution. The first result concerns the limit T→∞. By expanding ϕ(s) in Eq. (<ref>) around s=0 and collecting the lowest-order term, we get lim_T→∞FF=CV_a^2, where CV_a^2 is the coefficient of variation of the inter-arrival time distribution. This result has been previously derived for fluctuations in the number of cycles of a processive enzyme <cit.>. The second result gives an upper bound on the Fano factor in terms CV_a^2. By rearranging 1+ϕ(s) into 1-ϕ(s)+2ϕ(s) and using Eq. (<ref>), we get σ^2=2/α∫_0^Tdt R(t)+T/α-(T/α)^2. The Fano factor FF of the queue length is given by FF=σ^2/μ=1+2/T∫_0^Tdt R(t)-T/α. This expression was derived in Ref. <cit.> where the G/D/∞ queue was considered as a special case of the G^X/G/∞ queue. Using a general upper bound on the renewal function derived in Ref. <cit.>, R(t)≤ t/α+CV_a^2, we get FF≤ 1+2CV_a^2. §.§.§ Examples of G/D/inf queues Fig. 
<ref> illustrates a stochastic gene expression model that consists of multistep transcription initiation which produces nascent RNA, elongation and termination after which nascent RNA turns into mature RNA, and mature RNA degradation. Transcription initiation is modelled by a MAP under the renewal condition (<ref>), whereas transcription elongation and termination are deterministic. The transcription part of the model is equivalent to the PH/D/∞ queue, where PH refers to transcription initiation, and D refers to deterministic elongation and termination. Fig. <ref>(a) shows the one-state model describing a gene that is always active <cit.>. Fig. <ref>(b) shows the two-state (telegraph) model, describing a gene that switches between two states of activity and inactivity <cit.>. Fig. <ref>(c) shows a three-state model that accounts for the binding of RNA polymerase at the promoter and its release into productive elongation <cit.>. Other, more complicated models equivalent to the PH/M/∞ queue, including the canonical model of transcription initiation in Fig. <ref>(e), have been studied in Ref. <cit.>. Transcription initiation ends with the release of RNA polymerase into productive elongation [Fig. <ref>(d)]. Since elongation and termination are deterministic, the inter-arrival times of nascent and mature RNA are equal, which in turn means that the mature RNA turnover is described by the PH/M/∞ queue with the same arrival process as the one describing the production of nascent RNA [Fig. <ref>(e)]. As an example, we show how the stationary probability generating function G_nc(z) of the nascent RNA number for the two-state model in Fig. <ref>(b) can be easily computed from Eq. (<ref>). The reaction scheme for this model is G_1[k_off]k_onG_2G_2+M_nc, M_nc[]TM, M∅, where M_nc denotes nascent RNA and ⇒ denotes a deterministic reaction that takes a fixed amount of time to finish. The Laplace transform of the inter-arrival time distribution for the two-state process is given by Eq. (<ref>). Inserting this result into Eq. (<ref>), we get ℒ[G_nc(z)](s)=s+k_off+k_on-k_syn u+u/α/s^2+s(k_off+k_on-k_syn u)-k_on k_syn u, where u=z-1 and α=(k_on+k_off)/(k_onk_syn). This expression can be inverted using partial fraction decomposition as follows. In the first step, we find λ_1,2 such that (s+λ_1)(s+λ_2)=s^2+s(k_off+k_on-k_syn u)-k_on k_syn u, which gives λ_1,2=k_off+k_on-k_syn u±√(Δ(u))/2, where Δ(u)=(k_off+k_on-k_syn u)^2+4k_on k_syn. In the second step, we compute A and B such that ℒ[G(z)](s)=A/s+λ_1+B/s+λ_2, which gives A=-(k_on+k_off)^2+√(Δ)(k_on+k_off)-(k_on-k_off)k_synu/2√(Δ)(k_on+k_off), B=(k_on+k_off)^2+√(Δ)(k_on+k_off)+(k_on-k_off)k_synu/2√(Δ)(k_on+k_off). Inserting λ_1, λ_2, A and B into (<ref>), and using ℒ^-1[1/(s+λ)](T)=e^-λ T for T≥ 0, we get G_nc(u)= e^-λ_1(u)T/2(k_on+k_off)√(Δ(u)){(k_on+k_off)[√(Δ(u))-(k_on+k_off)]-(k_on-k_off)k_syn u. +.(k_on+k_off)[√(Δ(u))+(k_on+k_off)]e^√(Δ(u))T+(k_on-k_off)k_syn u e^√(Δ(u))T}, The mean and the variance of the nascent RNA number are equal to μ_nc=k_synk_onT/k_on+k_off, σ_nc^2=μ_nc{1+2k_syn k_off/T(k_on+k_off)^3[e^-(k_on+k_off)T-1+(k_on+k_off)T]}, where the expression in the curly brackets is the Fano factor of the nascent RNA number. The results for G_nc(z), μ_nc and σ_nc^2 have been previously derived using the master equation approach <cit.>. Since we have switching between two states, it can easily be shown that CV_a^2=1+2k_synck_off/(k_on+k_off)^2. We see that the limit lim_T→∞FF_nc=CV_a^2 in Eq. (<ref>) is satisfied, and so is the inequality in Eq. 
(<ref>), since e^-x≤ 1 for any x≥ 0. §.§ Results for the MX/G/inf queue The M^X/G/∞ queue is an infinite-server queue in which the inter-arrival times are exponentially distributed, customers arrive in batches, and the service time distribution is arbitrary. In this subsection, we present an explicit formula for the non-stationary and stationary probability generating functions of the queue length N(t) from which the non-stationary and stationary queue length distributions can be obtained. This result was derived in Ref. <cit.>. §.§.§ Non-stationary and stationary queue length distributions Let X≥ 1 denote the batch size, P(X=k)=a_k the batch size distribution and A(z) the probability generating function of X, A(z)=∑_k=0^∞a_k z^k. For any positive integer n, the n-th factorial moment of X is defined as A_n=.d^n/dz^nA(z)|_z=1=∑_k=n^∞k(k-1)…(k-n+1)a_k. The mean and the variance of X are assumed to be finite. The initial time is chosen to be an arrival epoch, and the system is initially empty, N(0)=0. Under these assumptions, the probability generating function G(z,t) of the queue length N(t) reads G(z,t)=exp{-λ∫_0^tdt' [1-A(z+(1-z)H(t'))]}, where λ is the arrival rate, and H(t) is the cumulative distribution function of the RNA degradation time, H(t)=∫_0^tdt' h(t'). From here, we get the following expressions for the mean and the variance of the queue length at time t, μ(t)=λ A_1∫_0^tdt' [1-H(t')], σ^2(t)=μ(t)+λ A_2∫_0^tdt'[1-H(t')]^2. The stationary limit is obtained by letting t→∞ in Eq. (<ref>), (<ref>) and (<ref>). §.§.§ Examples of MX/G/inf queues We consider arrivals whose batch size X is geometrically distributed, P(X=k)=(1-p)p^k, k=0,1,…. This distribution can be derived from the reaction scheme of the telegraph model in Eq. (<ref>) when the gene spends most of its time in the inactive state (k_off≫ k_on). In that case, RNA synthesis can be described by the effective reaction GG+kM, where k follows the geometric distribution in Eq. (<ref>) with p=k_syn/(k_off+k_syn). The mean burst size for this distribution is k_syn/k_off. Inserting Eq. (<ref>) into Eq. (<ref>), we get A(z)=1-p/1-pz. We first consider one-step degradation, which is equivalent to the M^X/M/∞ queue. In this case, H(t)=1-e^-β t, where β is the RNA degradation rate. Inserting A(z) and H(t) into Eq. (<ref>) and taking the stationary limit t→∞ yields G(z)=[1-p/1-pz]^r, where r=k_on/β. From here, expanding G(z) around z=0 gives P(m)=m+r-1mp^m(1-p)^r, p=k_syn/k_off+k_syn, r=k_on/β. which is the negative binomial distribution NB(r,1-p) that is often used in the analysis of single-cell data <cit.>. Next we consider the reaction scheme GG+kM_n, M_nM_c∅. This model describes production of nuclear RNA M_n, which is transported to the cytoplasm where it becomes cytoplasmic RNA M_c. The model was studied in Ref. <cit.>, where the stationary joint probability distribution P(m_n,m_c) of the nuclear RNA number m_n and cytoplasmic RNA number m_c was considered. Using the master equation approach, the following expression for the probability generating function of nuclear and cytoplasmic RNA numbers was obtained, G(x,y)=∑_m_n=0^∞∑_m_c=0^∞P(m_n,m_c)x^m_ny^m_c=exp{-k_on∫_0^∞dt[1-A(u(x,y,t))],} where A is the probability generating function of the batch size, and u is given by u(x,y,t)=1+[(x-1)+(y-1)β_e/β_c-β_e]e^-β_e t-(y-1)β_e/β_c-β_ee^-β_c t. The probability generating function of nuclear RNA is obtained by setting y=1, which is equivalent to the M^X/M/∞ queue for which the probability generating function G(z) is given by Eq. 
(<ref>) with β=β_e. On the other hand, the probability generating function of total (mature) RNA consisting of both nuclear and cytoplasmic RNA is obtained by setting x=y. This is equivalent to the M^X/G/∞ queue, where the service time is the total time of nuclear export and cytoplasmic RNA degradation. This time is distributed according to the hypoexponential distribution H(t)=1-β_c/β_c-β_ee^-β_e t+β_e/β_c-β_ee^-β_c t. Indeed, G(x,x) obtained by setting x=y in u(x,y,t) is the same as G(z) obtained by inserting H(t) given by Eq. (<ref>) into Eq. (<ref>) in the stationary limit t→∞. The appeal of the M^X/G/∞ queue is that it accounts for bursty expression and arbitrary RNA degradation time distribution, while being mathematically tractable. Fig. <ref> illustrates a stochastic gene expression model with a detailed, multi-state process of eukaryotic RNA degradation that can be analysed using the M^X/G/∞ queue. In the limit that the gene spends most of its time in the off state, RNA production, as described by the telegraph process [Fig. <ref>(a)], is replaced with the Poisson process with batch arrivals [Fig. <ref>(b)], making the model equivalent to the M^X/G/∞ queue. The RNA degradation process, adapted from Ref. <cit.>, includes poly(A) shortening, decapping, terminal deadenylation and 5'-3' exonuclease digestion. Another example of multi-state RNA processing is RNA splicing. In Ref. <cit.>, a model for RNA splicing has been proposed in which a parent RNA is produced in bursts, each of which then goes through a number of irreversible steps representing intron splicing and RNA degradation. A similar multi-state model has been considered for nascent RNA, in which multiple states represent positions of the RNA polymerase on the gene, i.e. the length of the nascent RNA <cit.>. Finally, it is worth noting that queues with batch arrivals can be also used to model stochasticity in protein numbers. This is since when mRNA (M) degrades much faster than protein (P), it can be shown <cit.> that the standard model for the protein production process, G_1[]G_2G_2+M, M∅, MM + P, can be replaced by the effective set of reactions GG+k P, where k is a random variable distributed according to the geometric distribution. The perturbative approach of <cit.> can be extended to the case where there are more than two gene states, implying that if one is not interested in RNA fluctuations then an effective bursty protein production process can always be derived as a reduced model valid under timescale separation conditions. Thus, the M^X/G/∞ queue can serve as a model for protein fluctuations where the degradation time distribution is arbitrary. If this distribution is exponential, then it is a crude model for protein dilution due to cell division <cit.>; more complex distributions such as an Erlang distribution could describe the fact that multiple ubiquitination events are required before protein degradation <cit.>. § DISCUSSION As a theory that is more than a hundred years old, queueing theory is rich and vast. In this review, we focused on old, classical results in queueing that are directly applicable to RNA production and degradation models that are traditionally used in modelling gene expression, particularly in the setting of finite-state Markov processes describing discrete promoter states and multi-state RNA degradation pathways. Following this tradition, we set up a general model of RNA production as a Markovian arrival process, which assumes that all transitions are Markovian. 
However, we emphasize that queues with renewal arrivals, such as the G/M/∞ and G/D/∞ queues or more generally the G^X/G/∞ queue, can be used to model gene expression beyond the Markovian framework, since in these queues the inter-arrival time distribution is arbitrary. An example of such model is a generalized telegraph model in which the time spent in the off state has an arbitrary probability distribution <cit.>. This model has renewal arrivals, because the time spent in the on state is exponentially distributed, meaning that immediately after arrival the gene has no memory of how much time it has already spent in the on state. For this model, the Laplace transform of the inter-arrival distribution can be found and applied to the G/M/∞ queue to get the stationary probability distribution of the RNA number, avoiding laborious derivation using the master equation approach <cit.>. This example shows just how important are renewal arrivals in modelling gene expression: they account for uncorrelated inter-arrival times of any complexity, but retain the analytical tractability. Since the inter-arrival times between individual transcriptional events can now be measured experimentally <cit.>, queueing theory can be used to include experimentally measured inter-arrival time distributions without resorting to their Markovian interpretation. Once we move away from renewal arrivals, there are many results potentially useful for gene expression modelling that we did not cover in detail. We first mention the BMAP/G/∞ queue, where customers arrive in batches according to a batch Markovian arrival process (BMAP), and the service times are generally distributed. This queueing system describes our stochastic model of gene expression in Fig. <ref> in the most general setting. Results for this queueing system are limited and quite complicated, however numerically feasible formulas have been derived for service times that are phase-type distributed <cit.>. Another type of non-renewal processes which we mentioned only briefly are semi-Markov processes (SMP). Semi-Markov processes change their inter-arrival time distribution according to a finite-state Markov process. In that sense, they can be considered as Markov-modulated renewal processes. A generalization of the G/M/∞ queue to semi-Markov arrivals is the SMP/M/∞ queue, for which the stationary queue length distribution and the Laplace transform of the non-stationary queue length distribution have been computed in Ref. <cit.>. More general is the SMP/G/∞ queue, in which the service time distribution is arbitrary. This queueing system was studied in Ref. <cit.>, where recurrence relations for (binomial) moments of both non-stationary and stationary queue length distributions have been derived. We showed in Eq. (<ref>) that the MAP is a special case of the SMP. An advantage of the latter approach is that inter-arrival time distributions can be described by any suitable, user-defined function rather than a phase-type distribution as for a MAP. Practically, this means that the SMP description has fewer parameters than MAP. For example, a phase-type distribution of the hypoexponential type which is the distribution of a random variable composed of N exponential distributions each with their own rate could be approximated by a two-parameter continuous distribution such as the gamma distribution. Hence, the SMP maybe useful as a reduced version of complex models of gene expression. One limitation of the general stochastic model for RNA production in Fig. 
<ref> is that all its rate constants are assumed to be time-independent. This limitation can be addressed using non-stationary queueing systems. A classic queueing system in this regard is the M_t/G/∞ queue, where the subscript t denotes that the arrival rate (of the Poisson process) is time-dependent. It is well-known that the queue length distribution of the M_t/G/∞ queue is Poisson distributed <cit.>. Perhaps more interesting to gene expression modelling is the M_t^X/G/∞ queue, where customers arrive in batches. The probability generating function of the queue length for this queueing system is also known exactly <cit.>. This result opens many possibilities for studying bursty gene expression under time-dependent conditions. For example, it is known that the identities and intensities of different extracellular time-dependent signals are transmitted by modulation of certain transcription factors in the cytoplasm, which exert an influence on gene expression upon their translocation to the nucleus <cit.>. Finally, we mention two open problems in queueing theory that are relevant for gene expression modelling. The first problem is extending gene expression models to include both RNA and proteins, and finding their joint probability distribution. In this case, there are two queues, one that describes RNAs and the other that describes proteins. The difficulty is that the arrival rate of the second queue (the protein production rate) is dependent on the number of customers in the first queue (the RNA number). This problem is not standard in queueing theory. Systems with multiple queues are typically studied in a way that the output of one queue becomes the input of another. Here, however, customers arriving at the first queue leave the system after service, instead of being routed to the second queue. This problem has been addressed recently by several authors <cit.>. The second problem concerns finding joint queue length probability distributions for tandem queues, in which customers leaving one queue are routed to the next queue. In general, tandem queues are difficult to solve, with notable exceptions being tandems of M/G/∞ and M_t/G/∞ queues <cit.>. An example of tandem queue in gene expression is nascent RNA turning into nuclear RNA, which is then transported to the cytoplasm where it becomes cytoplasmic RNA. An open problem here is to find the joint distribution of nascent, nuclear and cytoplasmic RNA which can be measured experimentally <cit.>. Concluding, we have shown how a wide variety of models of gene expression can be formulated in terms of queueing theory. We hope this review stimulates quantitative biologists to use the tools of queueing theory to analytically study the stochastic properties of complex and biologically realistic models of gene expression. § ACKNOWLEDGMENTS This work was supported by a Leverhulme Trust research award (RPG-2020-327).
http://arxiv.org/abs/2307.01962v1
20230705001036
Oriented spanning trees and stationary distribution of digraphs
[ "Jiang Zhou", "Changjiang Bu" ]
math.CO
[ "math.CO", "math.PR", "05C30, 05C50, 05C05, 05C81, 05C20" ]
1.15 GBKsong zhoujiang@hrbeu.edu.cn College of Mathematical Sciences, Harbin Engineering University, Harbin 150001, PR China By using biclique partitions of digraphs, this paper gives reduction formulas for the number of oriented spanning trees, stationary distribution vector and Kemeny's constant of digraphs. As applications, we give a method for enumerating spanning trees of undirected graphs by vertex degrees and biclique partitions. The biclique partition formula also extends the results of Knuth and Levine from line digraphs to general digraphs. Oriented spanning tree, Matrix-tree theorem, Random walk, Stationary distribution AMS classification (2020): 05C30, 05C50, 05C05, 05C81, 05C20 § INTRODUCTION Let G be a weighted digraph with vertex set V(G) and edge set E(G), and each edge e=ij∈ E(G) is weighted by an indeterminate w_e(G)=w_ij(G). The notation e=ij∈ E(G) means there exists a directed edge from tail vertex i to head vertex j, and the tail and head of e are denoted by i=t(e) and j=h(e), respectively. The outdegree and the indegree of a vertex v in G are denoted by d_v^+(G) and d_v^-(G), respectively. The word “weighted" is omitted when w_e(G)=1 for each e∈ E(G). The line digraph ℒ(G) of a digraph G has vertex set V(ℒ(G))=E(G), and there exits directed edge from e to f in ℒ(G) if and only if h(e)=t(f). An oriented spanning tree <cit.> of weighted digraph G is a subtree containing all vertices of G, in which one vertex, the root, has outdegree 0, and every other vertex has outdegree 1. Let 𝕋_u(G) denote the set of oriented spanning trees of G with root u, and let κ_u(G)=|𝕋_u(G)|. A spanning tree enumerator of G is defined as t_u(G)=∑_T∈𝕋_u(G)∏_e∈ E(T)w_e(G). If w_e(G)=1 for each e∈ E(G), then t_u(G)=κ_u(G) is the number of oriented spanning trees of G with root u. Some algebraic formulas for κ_u(G) and t_u(G) were given in <cit.>. In the study of random walks on digraphs, one basic problem is to determine the stationary distribution vector, which represents the long-term behaviour of the Markov chain associated with the digraph <cit.>. The stationary distribution has applications in PageRank algorithms <cit.>. There exists a closed relation between the stationary distribution vector and spanning tree enumerators of digraphs. For a strongly connected digraph G with n vertices, its stationary distribution vector π=(π_1,…,π_n)^⊤ satisfies <cit.> π_i=t_i(G)/∑_j=1^nt_j(G), where G is the weighted digraph obtained from G by taking w_ij(G)=d_i^+(G)^-1 for each ij∈ E(G). Let m_ij be the mean first passage time from i to j, then the value 𝒦(G)=∑_j∈ V(G),j≠ im_ijπ_j (does not depend on i) is called the Kemeny's constant <cit.> of G. Some formulas for counting spanning trees in undirected line graphs can be found in <cit.>. For line digraphs, Knuth <cit.> proved the following formula for counting oriented spanning trees. Combinatorial and bijective proofs of the Knuth formula are given in <cit.> and <cit.>, respectively. Let G be a digraph such that d_u^+(G)d_u^-(G)>0 for each u∈ V(G). For any edge e=ij of G, we have κ_e(ℒ(G))=(κ_j(G)-d_j^+(G)^-1∑_kj∈ E(G) k≠ iκ_k(G))∏_v∈ V(G)d_v^+(G)^d_v^-(G)-1. If d_u^+(G)=d_u^-(G) for each u∈ V(G), then κ_e(ℒ(G))=d_j^+(G)^-1κ_u(G)∏_v∈ V(G)d_v^+(G)^d_v^+(G)-1 (u∈ V(G)). Let ℰ(G) denote the number of Eulerian circuits of a digraph G. By Lemma <ref>, we can derive the following formula for counting Eulerian circuits of a line digraph from Theorem <ref>. Let G be a digraph with n vertices such that d_u^+(G)=d_u^-(G)=d>0 for each u∈ V(G). 
For any e∈ E(G) and v∈ V(G), we have κ_e(ℒ(G)) = d^n(d-1)-1κ_v(G), ℰ(ℒ(G)) = d^-1(d!)^n(d-1)((d-1)!)^nκ_v(G)=d^-1(d!)^n(d-1)ℰ(G). Let {w_i(G)}_i∈ V(G) be indeterminates on V(G). If each edge {u,v} in G has weight w_uv(G)=w_v(G), then we say that the weights of G are induced by {w_i(G)}_i∈ V(G). Levine <cit.> proved the following formula for spanning tree enumerators of a line digraph, which is a generalization of Theorem <ref>. Let G be a weighted digraph such that d_u^-(G)>0 for each u∈ V(G). For any edge e=ij of G satisfying d_j^-(G)≥2, we have t_e(ℒ(G))=w_e(G)t_i(G)d_j(G)^d_j^-(G)-2∏_v∈ V(G) v≠ jd_v(G)^d_v^-(G)-1, where the weights of ℒ(G) are induced by indeterminates {w_e(G)}_e∈ V(G), d_v(G)=∑_vu∈ E(G)w_vu(G). A biclique <cit.> is a bipartite digraph Q whose vertices can be partitioned into two parts Q^(1) and Q^(2), and E(Q)={ij:i∈ Q^(1),j∈ Q^(2)}. A biclique partition of a digraph G is a set ε={Q_1,…,Q_r} of bicliques in G such that each edge of G belongs to exactly one biclique of ε. For a biclique partition ε={Q_1,…,Q_r} of G, its biclique digraph has vertex set ε={Q_1,…,Q_r} and edge set {Q_iQ_j:Q_i^(2)∩ Q_j^(1)≠∅}. For a vertex i of a digraph G, the edge sets Q_i^(1)={e:e∈ E(G),h(e)=i} and Q_i^(2)={f:f∈ E(G),t(f)=i} form a biclique in the line digraph ℒ(G), and all such bicliques form a natural biclique partition ε={Q_i:i∈ V(G),d_i^+(G)d_i^-(G)>0} of ℒ(G). Notice that Q_i^(2)∩ Q_j^(1)≠∅ if and only if ij∈ E(G). So the biclique digraph of ε is isomorphic to G when d_u^+(G)d_u^-(G)>0 for each u∈ V(G). It is known that every digraph G has a biclique partition ε satisfying |ε|≤|V(G)| (|ε| is much smaller than |V(G)| for many digraphs). So we can get reduction formulas for spanning tree enumerators, stationary distribution vector and Kemeny's constant of G by counting oriented spanning trees in the biclique digraph of ε. Moreover, based on Observation <ref>, it is natural to use biclique partitions to extend the results of Knuth <cit.> and Levine <cit.> from line digraphs to general digraphs. In Section 2, we give some basic definitions, notations and auxiliary lemmas. In Section 3, we give biclique partition formulas for counting oriented spanning trees and Eulerian circuits of digraphs, which generalize Theorems 1.1-1.3 to general digraphs. In Section 4, we give biclique partition formulas for stationary distribution vector and Kemeny's constant of digraphs. In Section 5, we give some concluding remarks, including more general spanning tree identity in digraphs, and the method for enumerating spanning trees of undirected graphs by vertex degrees and biclique partitions. § PRELIMINARIES Let G be a weighted digraph on n vertices, and each edge e=ij∈ E(G) is weighted by an indeterminate w_ij(G). The weighted degree of a vertex i is d_i(G)=∑_ij∈ E(G)w_ij(G). The Laplacian matrix L_G is the n× n matrix with entries (L_G)_ij=d_i(G)             i=j, -w_ij(G)         ij∈ E(G), 0                 . Let A(i,j) denote the submatrix of a matrix A obtained by deleting the i-th row and the j-th column, and let (A) denote the determinant of a square matrix A. The following lemma follows from the all minors matrix tree theorem <cit.>. Let G be a weighted digraph with n vertices. For any i,j∈{1,…,n}, we have (L_G(i,j))=(-1)^i+jt_i(G). The following is a fomula for counting Eulerian circuits of digraphs. Let G be a Eulerian digraph. For any u∈ V(G), we have ℰ(G)=κ_u(G)∏_v∈ V(G)(d_v^+(G)-1)!. Let G be a strongly connected weighted digraph with n vertices, and all weights of G are positive. 
The transition probability matrix P_G of G is the n× n matrix with entries (P_G)_ij=w_ij(G)d_i(G)^-1        ij∈ E(G), 0                         ij∉ E(G). A random walk on G is defined by P_G, that is, (P_G)_ij denotes the probability of moving from vertex i to vertex j. Notice that P_G is an irreducible nonnegative matrix with spectral radius 1, and the all-ones vector is a right eigenvector for the eigenvalue 1. By the Perron-Frobenius theorem, there exists a unique positive vector π(G)=(π_1(G),…,π_n(G))^⊤ such that π(G)^⊤ P_G=π(G)^⊤ and ∑_i=1^nπ_i(G)=1. Such vector π(G) is called the stationary distribution vector <cit.> of G. Let G be a strongly connected weighted digraph with n vertices, and all weights of G are positive. Then π_i(G)=(I-P_G(i,i))/∑_j=1^n(I-P_G(j,j)), i=1,…,n. Let m_ij be the mean first passage time from vertex i to vertex j, then the value 𝒦(G)=∑_j∈ V(G),j≠ im_ijπ_j(G) (does not depend on i) is called the Kemeny's constant of G. Let G be a strongly connected weighted digraph with positve weights, and λ_1=1,λ_2,…,λ_n are eigenvalues of P_G. Then 𝒦(G)=∑_i=2^n1/1-λ_i. For a matrix E, let E[i_1,…,i_s|j_1,…,j_t] denote an s× t submatrix of E whose row indices and column indices are i_1,…,i_s and j_1,…,j_t, respectively. The following is a determinant identity involving the Schur complement. Let M=[ A B; C D ] be a block matrix of order n, where A=M[1,…,k|1,…,k] is nonsingular. If k+1≤ i_1<⋯<i_s≤ n and k+1≤ j_1<⋯<j_s≤ n, then (M[1,…,k,i_1,…,i_s|1,…,k,j_1,…,j_s])/(A)=(S[i_1,…,i_s|j_1,…,j_s]), where S=D-CA^-1B is the Schur complement of D in M. § ORIENTED SPANNING TREES OF DIGRAPHS A biclique is a bipartite digraph Q whose vertices can be partitioned into two parts Q^(1) and Q^(2), and E(Q)={ij:i∈ Q^(1),j∈ Q^(2)}. Let ε={Q_1,…,Q_r} be a biclique partition of a weighted digraph G whose weights are induced by indeterminates {w_i(G)}_i∈ V(G). Let Ω(ε) denote the weighted biclique digraph with vertex set ε={Q_1,…,Q_r} and edge set {Q_iQ_j:Q_i^(2)∩ Q_j^(1)≠∅}, and the weight of Q_iQ_j in Ω(ε) is w_Q_iQ_j(Ω(ε))=w(Q_j)∑_u∈ Q_i^(2)∩ Q_j^(1)w_u(G)/d_u(G),3.1 where w(Q_j)=∑_u∈ Q_j^(2)w_u(G). By Observation <ref>, we know that the part (1) of the following theorem extends Theorem <ref> to general digraphs. Let ε={Q_1,…,Q_r} be a biclique partition of a weighted digraph G whose weights are induced by indeterminates {w_i(G)}_i∈ V(G), and d_u^+(G)>0 for each u∈ V(G). Set w(Q_i)=∑_u∈ Q_i^(2)w_u(G). (1) For any u∈ V(G), we have t_u(G)=w_u(G)∏_u≠ v∈ V(G)d_v(G)/∏_i=1^rw(Q_i)∑_Q_i:u∈ Q_i^(2)t_Q_i(Ω(ε)). (2) For any Q_i∈ε, we have t_Q_i(Ω(ε)) = ∏_i=1^rw(Q_i)/∏_u∈ V(G)d_u(G)∑_u∈ Q_i^(1)t_u(G). For a biclique partition ε={Q_1,…,Q_r} in G, let R_ε∈ℝ^|V(G)|× r and S_ε∈ℝ^r×|V(G)| be two corresponding incidence matrices with entries (R_ε)_uj = w(Q_j)            u∈ V(G),u∈ Q_j^(1), 0                   u∈ V(G),u∉ Q_j^(1), (S_ε)_iu = w_u(G)            u∈ V(G),u∈ Q_i^(2), 0                   u∈ V(G),u∉ Q_i^(2), where w(Q_j)=∑_u∈ Q_j^(2)w_u(G). Let H be the bipartite weighted digraph with Laplacian matrix L_H=[ D_1 -R_ε; -S_ε D_2 ], where D_1 is a diagonal matrix satisfying (D_1)_uu=d_u(G), D_2 is a diagonal matrix satisfying (D_2)_ii=w(Q_i). By computation, we have D_1-R_ε D_2^-1S_ε=L_G, D_2-S_ε D_1^-1R_ε=L_Ω(ε). For any u∈ V(G) and Q_i∈ε, by Lemmas <ref> and <ref>, we have t_u(H)=(L_H(u,u))=t_u(G)∏_i=1^rw(Q_i),3.2 t_Q_i(H)=(L_H(Q_i,Q_i))=t_Q_i(Ω(ε))∏_u∈ V(G)d_u(G).3.3 Let X be the adjoint matrix of L_H. 
Then XL_H=(L_H)I=0.3.4 By Lemma <ref>, we get (X)_uv=t_v(H)    (u,v∈ V(H)).3.5 Hence ∑_v∈ V(H)t_v(H)(L_H)_vu=d_u(G)t_u(H)-∑_i=1^rt_Q_i(H)(S_ε)_iu=0. By (3.2) and (3.3) we get d_u(G)t_u(G)∏_i=1^rw(Q_i)-w_u(G)∏_v∈ V(G)d_v(G)∑_Q_i:u∈ Q_i^(2)t_Q_i(Ω(ε))=0. Hence t_u(G)=w_u(G)∏_u≠ v∈ V(G)d_v(G)/∏_i=1^rw(Q_i)∑_Q_i:u∈ Q_i^(2)t_Q_i(Ω(ε)). So part (1) holds. By (3.4) and (3.5) we have ∑_v∈ V(H)t_v(H)(L_H)_vQ_i=w(Q_i)t_Q_i(H)-∑_u∈ Q_i^(1)t_u(H)(R_ε)_ui=0. By (3.2) and (3.3) we get w(Q_i)t_Q_i(Ω(ε))∏_u∈ V(G)d_u(G)-w(Q_i)∏_j=1^rw(Q_j)∑_u∈ Q_i^(1)t_u(G)=0. Hence t_Q_i(Ω(ε))=∏_i=1^rw(Q_i)/∏_u∈ V(G)d_u(G)∑_u∈ Q_i^(1)t_u(G), i=1,…,r. So part (2) holds. We can deduce the following result in <cit.> from part (2) of Theorem <ref>. Let G be a weighted digraph such that d_u^-(G)>0 for each u∈ V(G). Then ∑_e∈ E(G)t_e(ℒ(G))=∏_v∈ V(G)d_v(G)^d_v^-(G)-1∑_v∈ V(G)t_v(G),3.6 where ℒ(G) is a weighted digraph whose weights are induced by indeterminates {w_e(G)}_e∈ V(G). If d_u^+(G)=0 for some vertex u of G, then by d_u^-(G)>0, there exist two edges e,f∈ E(G) whose outdegrees are zeros in ℒ(G). In this case, t_e(ℒ(G))=0 for each e∈ E(G), the left and right sides of (3.6) are both zeros. If d_u^+(G)>0 for each vertex u of G, then by Observation <ref> and part (2) of Theorem <ref>, we have t_i(G)=∏_u∈ V(G)d_u(G)/∏_u∈ V(G)d_u(G)^d_u^-(G)∑_e∈ E(G) h(e)=it_e(ℒ(G)), t_i(G)∏_v∈ V(G)d_v(G)^d_v^-(G)-1=∑_e∈ E(G) h(e)=it_e(ℒ(G)). Hence ∑_e∈ E(G)t_e(ℒ(G))=∑_i∈ V(G)∑_e∈ E(G) h(e)=it_e(ℒ(G))=∏_v∈ V(G)d_v(G)^d_v^-(G)-1∑_v∈ V(G)t_v(G). For a biclique partition ε={Q_1,…,Q_r} of digraph G, let Θ(ε) denote the weighted biclique digraph with vertex set ε={Q_1,…,Q_r} and edge set {Q_iQ_j:Q_i^(2)∩ Q_j^(1)≠∅}, and the weight of Q_iQ_j in Θ(ε) is w_Q_iQ_j(Θ(ε))=|Q_j^(2)|∑_u∈ Q_i^(2)∩ Q_j^(1)1/d_u^+(G). Clearly, Θ(ε) is obtained from Ω(ε) by taking w_v(G)=1 for each v∈ V(G) in equation (3.1). We can deduce the following result from Theorem <ref>. Let ε={Q_1,…,Q_r} be a biclique partition of a digraph G, and d_u^+(G)>0 for each u∈ V(G). (1) For any u∈ V(G), we have κ_u(G)=∏_u≠ v∈ V(G)d_v^+(G)/∏_i=1^r|Q_i^(2)|∑_Q_i:u∈ Q_i^(2)t_Q_i(Θ(ε)). (2) For any Q_i∈ε, we have t_Q_i(Θ(ε)) = ∏_i=1^r|Q_i^(2)|/∏_u∈ V(G)d_u^+(G)∑_u∈ Q_i^(1)κ_u(G). By Observation <ref>, we know that the following formulas extend Theorem <ref> to general Eulerian digraphs. Let G be a Eulerian digraph. For any biclique partition ε={Q_1,…,Q_r} of G, we have κ_u(G) = ∏_v∈ V(G)d_v^+(G)/∏_j=1^r|Q_j^(2)|t_Q_i(Θ(ε))/|Q_i^(1)|, i=1,…,r, ℰ(G) = ∏_v∈ V(G)d_v^+(G)!/∏_j=1^r|Q_j^(2)|t_Q_i(Θ(ε))/|Q_i^(1)|, i=1,…,r. Since G is Eulerian, we have κ_u(G)=κ_v(G) for any u,v∈ V(G). By Theorem <ref>, we have t_Q_i(Θ(ε)) = ∏_j=1^r|Q_j^(2)|/∏_v∈ V(G)d_v^+(G)∑_u∈ Q_i^(1)κ_u(G)=|Q_i^(1)|∏_j=1^r|Q_j^(2)|/∏_v∈ V(G)d_v^+(G)κ_u(G), κ_u(G) = ∏_v∈ V(G)d_v^+(G)/∏_j=1^r|Q_j^(2)|t_Q_i(Θ(ε))/|Q_i^(1)|. By Lemma <ref>, we have ℰ(G)=κ_u(G)∏_v∈ V(G)(d_v^+(G)-1)!=∏_v∈ V(G)d_v^+(G)!/∏_j=1^r|Q_j^(2)|t_Q_i(Θ(ε))/|Q_i^(1)|. § STATIONARY DISTRIBUTION AND KEMENY'S CONSTANT OF DIGRAPHS Let π(G) denote the stationary distribution vector of a digraph G. By using biclique partitions of digraphs, we give the following reduction formulas for stationary distribution vector and Kemeny's constant of digraphs. Let G be a strongly connected digraph with n vertices. For any u∈ V(G) and biclique partition ε={Q_1,…,Q_r} of G, we have π_u(G) = ∑_Q_i:u∈ Q_i^(2)|Q_i^(2)|^-1π_Q_i(Θ(ε)), 𝒦(G) = 𝒦(Θ(ε))+n-r. 
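Before giving the proof, the theorem can be checked numerically in its simplest special case: by the observation above, the natural biclique partition of a line digraph ℒ(G) has biclique digraph G itself, so the theorem specializes to π_e(ℒ(G))=d_i^+(G)^-1π_i(G) for e=ij and 𝒦(ℒ(G))=𝒦(G)+|E(G)|-|V(G)|. The Python sketch below verifies both identities on a small hand-picked strongly connected digraph; the graph, the function names and the use of the eigenvalue formula for Kemeny's constant from the lemma above are our own illustrative choices.

import numpy as np
from itertools import product

# Small hand-made strongly connected digraph G on vertices 0, 1, 2 (illustrative).
edges_G = [(0, 1), (1, 0), (1, 2), (2, 0)]
n = 3

def walk_matrix(num_nodes, edges):
    """Row-stochastic transition matrix of the simple random walk on a digraph."""
    P = np.zeros((num_nodes, num_nodes))
    for i, j in edges:
        P[i, j] = 1.0
    return P / P.sum(axis=1, keepdims=True)

def stationary(P):
    """Stationary (left Perron) vector of an irreducible row-stochastic matrix."""
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmax(np.real(w))])
    return v / v.sum()

def kemeny(P):
    """Kemeny's constant via the eigenvalue formula sum_{i>=2} 1/(1 - lambda_i)."""
    w = sorted(np.linalg.eigvals(P), key=lambda z: -z.real)
    return float(np.real(sum(1.0 / (1.0 - z) for z in w[1:])))

# Line digraph L(G): vertices are the edges of G, with e -> f iff head(e) = tail(f).
edges_L = [(a, b) for a, b in product(range(len(edges_G)), repeat=2)
           if edges_G[a][1] == edges_G[b][0]]

P_G, P_L = walk_matrix(n, edges_G), walk_matrix(len(edges_G), edges_L)
pi_G, pi_L = stationary(P_G), stationary(P_L)
d_out = [sum(1 for e in edges_G if e[0] == i) for i in range(n)]

# For the natural biclique partition of L(G) the biclique digraph is G itself, so
# pi_e(L(G)) = pi_i(G)/d_i^+(G) for e = ij, and K(L(G)) = K(G) + |E(G)| - |V(G)|.
for e, (i, j) in enumerate(edges_G):
    assert np.isclose(pi_L[e], pi_G[i] / d_out[i])
assert np.isclose(kemeny(P_L), kemeny(P_G) + len(edges_G) - n)
print("stationary-distribution and Kemeny identities verified on L(G)")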
For a biclique partition ε={Q_1,…,Q_r} in G, let R_ε∈ℝ^n× r and S_ε∈ℝ^r× n be two corresponding incidence matrices with entries (R_ε)_uj = d_u^+(G)^-1|Q_j^(2)|        u∈ V(G),u∈ Q_j^(1), 0                         u∈ V(G),u∉ Q_j^(1). (S_ε)_iu = |Q_i^(2)|^-1                u∈ V(G),u∈ Q_i^(2), 0                        u∈ V(G),u∉ Q_i^(2). By computation, we have R_ε S_ε=P_G, S_ε R_ε=P_Θ(ε). Suppose that P_Θ(ε)=S_ε R_ε has eigenvalues λ_1=1,λ_2,…,λ_r, then P_G=R_ε S_ε has eigenvalues λ_1,λ_2,…,λ_r,0,…,0. By Lemma <ref>, we have 𝒦(G)=n-r/1-0+∑_i=2^r1/1-λ_i=𝒦(Θ(ε))+n-r. The products of all nonzero eigenvalues of I-P_G and I-P_Θ(ε) are both ∏_i=2^r(1-λ_i)=∑_v∈ V(G)(I-P_G(v,v))=∑_i=1^r(I-P_Θ(ε)(Q_i,Q_i)).4.1 Let H be the bipartite weighted digraph with Laplacian matrix L_H=[ I -R_ε; -S_ε I ]. For any u∈ V(G) and Q_i∈ε, by Lemmas <ref> and <ref>, we have t_u(H)=(I-P_G(u,u)), t_Q_i(H)=(I-P_Θ(ε)(Q_i,Q_i)).4.2 Let X be the adjoint matrix of L_H. Then XL_H=(L_H)I=0. By Lemma <ref>, we get (X)_ij=t_j(H). Hence ∑_v∈ V(H)t_v(H)(L_H)_vu=t_u(H)-∑_i=1^rt_Q_i(H)(S_ε)_iu=0. By (4.2) we get (I-P_G(u,u))=∑_Q_i:u∈ Q_i^(2)|Q_i^(2)|^-1(I-P_Θ(ε)(Q_i,Q_i)). By Lemma <ref> and (4.1) we get π_u(G)=∑_Q_i:u∈ Q_i^(2)|Q_i^(2)|^-1π_Q_i(Θ(ε)). For a digraph G with vertex set V(G)={1,…,n}, its k-blow up G(k) has vertex set V(G(k))=V_1∪⋯∪ V_n and edge set E(G(k))=⋃_ij∈ E(G){uv:u∈ V_i,v∈ V_j}, where |V_1|=⋯=|V_n|=k. For a biclique partition ε={Q_1,…,Q_r} of G, η={P_1,…,P_r} is a biclique partition of G(k), where P_i is the k-blow up of Q_i. Notice that w_P_iP_j(Θ(η))=kw_Q_iQ_j(Θ(ε)) if Q_i^(2)∩ Q_j^(1)≠∅. By Theorem <ref>, we have κ_u(G(k))=k^nk-2∏_v∈ V(G)d_v^+(G)^k-1κ_u(G) (u∈ V_i). By Theorem <ref>, we have π_u(G(k)) = k^-1π_i(G) (u∈ V_i), 𝒦(G(k)) = 𝒦(G)+n(k-1). By Observation <ref> and Theorem <ref>, we get the following formulas for stationary distribution vector and Kemeny's constant of line digraphs. Let G be a strongly connected digraph with n vertices and m edges. For any e=ij∈ E(G), we have π_e(ℒ(G)) = d_i^+(G)^-1π_i(G), 𝒦(ℒ(G)) = 𝒦(G)+m-n. Take ℒ^0(G)=G, and the iterated line digraph ℒ^s(G)=ℒ(ℒ^s-1(G)) (s=1,2,3,…). We can get the following formula involving iterated line digraph from Corollary <ref>. Let G be a strongly connected digraph with n vertices. Then 𝒦(ℒ^s(G))=𝒦(G)+|V(ℒ^s(G))|-n. § CONCLUDING REMARKS Let G be a weighted digraph with a partition V(G)=V_1∪ V_2, and let L_G=[ L_1 -B; -C L_2 ], where L_1 and L_2 are principal submatrices of L_G corresponding to V_1 and V_2, respectively. If L_1 and L_2 are nonsingular, then S_1=L_1-BL_2^-1C and S_2=L_2-CL_1^-1B are Laplacian matrices of some weighted digraphs G_1 and G_2, respectively (because S_1 and S_2 are square matrices whose all row sums are zeros). Similar with the proof of Theorem <ref>, we can derive the following more general spanning tree identity d_u(G)t_u(G_1)-∑_v∈ V_1,vu∈ E(G)w_vu(G)t_v(G_1)=(L_1)/(L_2)∑_v∈ V_2,vu∈ E(G)w_vu(G)t_v(G_2). We can also obtain new reduction formula for counting spanning trees in undirected graphs from our results. For a connected undirected graph H, let H_0 denote the digraph obtained from H by replacing every edge {i,j}∈ E(H) with two directed edges ij and ji. Then the number of spanning trees in H is equal to κ_u(H_0) for each u∈ V(H). For any biclique partition ε={Q_1,…,Q_r} of H_0, by Corollary <ref>, we have κ_u(H_0)=∏_v∈ V(H)d_v/∏_j=1^r|Q_j^(2)|t_Q_i(Ω(ε))/|Q_i^(1)|, i=1,…,r, where d_v is the degree of vertex v in H. Acknowledgements This work is supported by the National Natural Science Foundation of China (No. 
12071097), and the Natural Science Foundation of the Heilongjiang Province (No. YQ2022A002).

T. van Aardenne-Ehrenfest, N.G. de Bruijn, Circuits and trees in oriented linear graphs, Simon Stevin 28 (1951) 203-217.
S. Aksoy, F. Chung, X. Peng, Extreme values of the stationary distribution of random walks on directed graphs, Adv. Appl. Math. 81 (2016) 128-155.
D. Aldous, J. Fill, Reversible Markov Chains and Random Walks on Graphs, 2002, unfinished monograph, recompiled 2014, available at http://www.stat.berkeley.edu/∼aldous/RWG/book.html.
R. Andersen, F. Chung, K. Lang, Local partitioning for directed graphs using PageRank, Internet Math. 5 (2008) 3-22.
A. Beveridge, A hitting time formula for the discrete Green's function, Combin. Probab. Comput. 25 (2016) 362-379.
H. Bidkhori, S. Kishore, A bijective proof of a theorem of Knuth, Combin. Probab. Comput. 20 (2011) 11-25.
S. Chaiken, A combinatorial proof of the all minors matrix tree theorem, SIAM J. Algebraic Discrete Methods 3 (1982) 319-329.
F.R.K. Chung, R.P. Langlands, A combinatorial Laplacian with vertex weights, J. Combin. Theory Ser. A 75 (1996) 316-327.
F. Dong, W. Yan, Expression for the number of spanning trees of line graphs of arbitrary connected graphs, J. Graph Theory 85 (2017) 74-93.
H. Gong, X. Jin, A simple formula for the number of spanning trees of line graphs, J. Graph Theory 88 (2018) 294-301.
D.A. Gregory, N.J. Pullman, K.F. Jones, J.R. Lundgren, Biclique coverings of regular bigraphs and minimum semiring ranks of regular matrices, J. Combin. Theory Ser. B 51 (1991) 73-89.
S. Kirkland, Directed forests and the constancy of Kemeny's constant, J. Algebraic Combin. 53 (2021) 81-84.
D.E. Knuth, Oriented subtrees of an arc digraph, J. Combinatorial Theory 3 (1967) 309-314.
L. Levine, Sandpile groups and spanning trees of directed line graphs, J. Combin. Theory Ser. A 118 (2011) 350-364.
J.B. Orlin, Line-digraphs, arborescences, and theorems of Tutte and Knuth, J. Combin. Theory Ser. B 25 (1978) 187-198.
W.T. Tutte, The dissection of equilateral triangles into equilateral triangles, Proc. Cambridge Philos. Soc. 44 (1948) 463-482.
W.T. Tutte, C.A.B. Smith, On unicursal paths in a network of degree 4, Am. Math. Mon. 48 (1941) 233-237.
W.G. Yan, On the number of spanning trees of some irregular line graphs, J. Combin. Theory Ser. A 120 (2013) 1642-1648.
J. Zhou, C. Bu, The enumeration of spanning trees of weighted graphs, J. Algebraic Combin. 54 (2021) 75-108.
http://arxiv.org/abs/2307.02825v1
20230706074531
Bundle-specific Tractogram Distribution Estimation Using Higher-order Streamline Differential Equation
[ "Yuanjing Feng", "Lei Xie", "Jingqiang Wang", "Jianzhong He", "Fei Gao" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Bundle-specific Tractogram Distribution Estimation Using Higher-order Streamline Differential Equation Yuanjing Feng*, Lei Xie, Jingqiang Wang, Jianzhong He, and Fei Gao This work was supported in part by the National Natural Science Foundation of China [grant No. (61976190, 61903336, 6197020521]. (*Corresponding author: Yuanjing Feng (fyjing@zjut.edu.cn).) The authors are with the College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China, (e-mail: leix@zjut.edu.cn). August 1, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================================= Tractography traces the peak directions extracted from fiber orientation distribution (FOD) suffering from ambiguous spatial correspondences between diffusion directions and fiber geometry, which is prone to producing erroneous tracks while missing true positive connections. The peaks-based tractography methods “locally” reconstructed streamlines in ‘single to single’ manner, thus lacking of global information about the trend of the whole fiber bundle. In this work, we propose a novel tractography method based on a bundle-specific tractogram distribution function by using a higher-order streamline differential equation, which reconstructs the streamline bundles in ‘cluster to cluster’ manner. A unified framework for any higher-order streamline differential equation is presented to describe the fiber bundles with disjoint streamlines defined based on the diffusion tensor vector field. At the global level, the tractography process is simplified as the estimation of bundle-specific tractogram distribution (BTD) coefficients by minimizing the energy optimization model, and is used to characterize the relations between BTD and diffusion tensor vector under the prior guidance by introducing the tractogram bundle information to provide anatomic priors. Experiments are performed on simulated Hough, Sine, Circle data, ISMRM 2015 Tractography Challenge data, FiberCup data, and in vivo data from the Human Connectome Project (HCP) data for qualitative and quantitative evaluation. The results demonstrate that our approach can reconstruct the complex global fiber bundles directly. BTD reduces the error deviation and accumulation at the local level and shows better results in reconstructing long-range, twisting, and large fanning tracts. Diffusion MRI, Tractography, Bundle-specific tractography distribution, High-order streamline differential equation. § INTRODUCTION TRACTOGRAPHY based on diffusion weighted magnetic resonance imaging (dMRI) is a potentially useful way of revealing the trajectories of white matter and structural connectome of the human brain in vivo <cit.>. Typically, conventional tractography aims to integrate voxel-scale local fiber orientation extracted from fiber orientation distribution (FOD) to infer global connectivity, called FOD-based tractography <cit.>, which faces challenges of producing large amounts of false-positives fibers and omitting true-positives fibers. These challenges have been widely discussed in schematic representations or theoretical arguments <cit.>. 
In general, it is usually attributed to the spatially ambiguous correspondences between diffusion directions and fiber geometry on voxel, local and global levels <cit.>. In recent decades, numerous efforts have been devoted to accurately estimating fiber trajectories, which are mainly reflected in diffusion modeling techniques and tractography strategies. For the diffusion modeling, high angular resolution diffusion imaging (HARDIs), such as constrained spherical deconvolution <cit.> and high-order tensor <cit.>, are proposed to characterize multiple fibers within one voxel to overcome the limitation of DTI model. However, diffusion orientations are ambiguous when the asymmetric curvatures of the underlying fiber bundles are in the range of the voxel size <cit.>. In recent years, asymmetric fiber orientation distributions (AFODs) have been proposed to address the problem of asymmetric fiber geometry. S. N. Sotiropoulos et al. <cit.> estimated fiber dispersion using Bingham distributions to represent continuous distributions of fiber orientations, centered on the main orientation, and captured anisotropic dispersion. Based on the geometric interpretation, Reisert et al. <cit.> derived a continuity condition that should be preserved for valid AFODs. Cetin et al. <cit.> proposed an asymmetric orientation distribution function (ODF) using a cone model in a voxel-wise manner. Considering the local surroundings of a voxel and using intervoxel information to derive potential fiber patterns is another approach used to estimating fiber geometry <cit.>. For instance, the estimation of AFODs based on a spherical deconvolution approach uses a set of symmetric and asymmetric basis functions by adding neighborhood continuity components <cit.>. However, these asymmetric models are good at representing such specific fiber geometry and simply plugging these AFODs into a typical tractography paradigm continues to face ambiguous spatial correspondences. Voxel-level or local-level diffusion models face challenges in resolving global fiber trajectory reconstruction with complex tractograms. For tractography strategies, current tractography methods are based on the peaks extracted from FODs or AFODs, which we named peak-based tractography. Typical deterministic tractography tracks the peaks of the maximum diffusion direction, which inevitably accumulates error in the process. The probabilistic tractography algorithm obtains the fibers by randomly extracting directions from the FOD, leading to the production of substantial amounts of false-positive fibers <cit.>. Filter-based tractography <cit.> can optimize the signal prediction error to iteratively reduce tractography biases when tracing the maximum diffusion direction. The signal is examined at each new position, and the filter recursively updates the underlying local model parameters to indicate the direction in which to propagate tractography. Filter-based methods cannot capture global fiber trajectory information to guide filtering and only optimize the fiber trajectory at the local level. Global optimization based tractography approaches <cit.> reconstruct the fiber pathways, which aim to maximize the global energy of the vector field and the fiber structure solutions to prevent the fiber trajectory from being affected by local fiber direction errors. However, they can always find the maximal energy path between two seeds, and whether these optimal fibers exist is difficult to confirm. 
All these peak-based tractography algorithms reconstruct the bundles in a single streamline to single streamline manner from a seed point to an endpoint, which we referred to as ‘single to single’ tractography manner (Fig. <ref>a). From the view of the fiber bundle, we treat these ‘single to single’ tractography method as ‘local’ tractography methods. Substantial methodological innovation, such as directly reconstructing the fiber bundle from the starting region to the ending region, which we referred to as ‘cluster to cluster’ tractography manner, is a trend used to resolve these challenges. For example, Cottaar et al. <cit.> modeled fiber density and orientation using a divergence-free vector field to encourage an anatomically-justified streamline density distribution along the cortical white/gray-matter boundary while maintaining alignment with the diffusion MRI estimated fiber orientations. Aydogan et al. <cit.> proposed a novel propagation-based tracker that was capable of generating geometrically smooth curves using higher-order curves by using the more flexible parallel transport tractography (PTT) for curve parametrization. Based on the commonly accepted anatomical prior in which the fibers impossibly originate or terminate in the white matter and the hypothesis that fibers in the white matter show the form of non-intertwined streamlines, our previous works proposed a fiber trajectory distribution (FTD) function defined on the neighborhood voxels by using a ternary quadratic polynomial-based streamline differential equation <cit.>. The FTD can reveal continuous asymmetric fiber trajectory and showed an advantage over current methods. However, the FTD is still a local reconstruction of the fiber trajectory at the neighborhood voxel level. In this paper, we directly reconstruct a bundle-specific tractogram, i.e., a fiber bundle between two regions in a ‘cluster to cluster’ (Fig. <ref>b) instead of a voxelwise or local model combining the ‘single to single’ manner. To describe this global tractogram, we define a bundle-specific tractogram distribution (BTD) function based on any higher-order streamline differential equation in the measured diffusion tensor vector field. The optimization model is reconstructed with the measured diffusion orientations and BTD function. At the global level, the tractography process is to parameterize as BTD coefficients combining voxel location by minimizing the energy of optimization model. The relations between BTD and diffusion tensor vector are described under the prior guidance by introducing the tractography atlas as anatomic priors <cit.>. This paper is organized as follows: The methods section defines the BTD based on higher-order streamline differential equation, which is resolved by global constraints in the diffusion tensor vector field. The experiments section presents the comparison results of the algorithms on three simulated datasets and the ISMRM 2015 Tractography Challenge dataset as well as the in vivo dataset of HCP <cit.>. The discussion section provides the conclusions of this work. § METHODS In its most basic form, a tractography algorithm takes two arbitrary points (voxels) of interest as input, labeled as a seed and a target, and yields the most likely trajectory on the given diffusion tensor vectorial field. Let Ω⊂R^3 be the diffusion tensor vectorial field, and S_U,V(t) ⊂Ω denotes a path that is parameterized by t ∈ [0,T] connecting U with V. 
Let P(S) denote the probability of paths S representing an anatomically genuine fiber trajectory in diffusion native space Ω⊂R^3, which can be defined as, P(s) = ∫_0^T p(s(t),ṡ(t))dt, where p( s(t),ṡ(t)) is the metric representing the potential for the point s(t) to be located inside a fiber bundle in the direction ṡ(t) = ds(t)/.-dt. In an early deterministic tracking algorithm, (streamline tractography) <cit.>, s(t) is computed to satisfy the Frenet equation <cit.>, ds(t)/dt = V_max(s(t)),t ∈[ 0,T], where V_max(s(t)) is the principal diffusion direction of s(t). In a sense, this is a “greedy” algorithm, as it tries to find the optimal fiber trajectory. The inherent unreliability and inaccuracy of deterministic fiber tracking have driven the introduction of new tracking algorithms. Stochastic tractography algorithms <cit.> probabilistically generate the tracing directions based on the fiber conditional probability density function and the probability of direction within each voxel is commonly defined as ODF at the center of a voxel. Filter tractography algorithms use their current model state to obtain the predicted signal and combine the measured signal to propagate forward in most consistent direction. While shortest path methods aim to find an optimal path by computing the maximum energy between two seeds. In this paper, we are interested in the particular case of computing the diffusion tensor field on a Riemannian manifold <cit.>, which is a potential under the form p( s(.),ṡ(.)) = √(ṡ(.)^T M(s(.))ṡ(.)) describing an infinitesimal distance along the fiber path S relative to the metric tensor M (symmetric definite positive). In this situation, finding the curve connecting two points that globally minimizes the energy <cit.> is a shortened path called a geodetic <cit.>. In <cit.>, a variational model is proposed based on Hamilton-Jacobi-Bellman, in which an infinite number of particles start from a given seed region evolving along the streamline orientation given by the gradient of the defined cost function to reach the seed target. In our recent work <cit.>, penalized geodesic tractography based on a Finsler metric with a global optimization framework is introduced to improve cortical connectomics. These methods are related to an optimal control problem and turn out to be very useful for establishing the connectivity of a single point on the seed region but provide little information about the connectivity between two regions. <cit.> attempted to introduce optimal mass transportation to describe the connectivity between two cerebral regions. Unfortunately, it is only a preliminary mathematical description without algorithmic implementation and validation. Herein, we will focus here on the bundle structure tractography connecting two cerebral regions in a ‘cluster to cluster’ manner (Fig. <ref>b). Considering two given regions ρ _1 and ρ _2, the optimal fiber bundle can be viewed as a superposition of non-intertwined fiber streamline cluster S(t) starting from ρ _1 to ρ _2 along the diffusion tensor vector field, and the energy P(S) assigned to the streamline cluster is defined as, P(S) = ∫_0^T p( S(t),Ṡ(t))dt, where S(0) = ρ _1, S(T) = ρ _2 and Ṡ(t) is the diffusion vector field between ρ _1 and ρ _2. We first define the S(t) and Ṡ(t), and p( S(t),Ṡ(t)). In Section <ref>, we define the BTD function S(t) on the diffusion vector field Ṡ(t) from ρ _1 to ρ _2 based on a higher-order streamline differential equation. 
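As a point of reference for the 'single to single' schemes discussed above, the classical streamline equation ds(t)/dt = V_max(s(t)) amounts to following the local principal direction with a small fixed step. The sketch below is a simplified illustration only, not part of the proposed method: it assumes a precomputed array of unit peak directions on a voxel grid, uses nearest-neighbour lookup, and integrates with an Euler step; the synthetic field and variable names are ours.

```python
import numpy as np

def track_streamline(peaks, seed, step=0.2, max_len=1000, min_norm=1e-6):
    """Deterministic tracking of ds/dt = V_max(s): follow the principal
    direction stored voxel-wise in `peaks` (shape X x Y x Z x 3)."""
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    prev_dir = None
    for _ in range(max_len):
        idx = tuple(np.clip(np.round(pos).astype(int), 0, np.array(peaks.shape[:3]) - 1))
        d = peaks[idx]
        if np.linalg.norm(d) < min_norm:         # no reliable peak -> stop
            break
        d = d / np.linalg.norm(d)
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d                               # keep a consistent orientation
        pos = pos + step * d                     # Euler step along V_max
        path.append(pos.copy())
        prev_dir = d
        if not np.all((pos >= 0) & (pos < np.array(peaks.shape[:3]))):
            break                                # left the field of view
    return np.array(path)

# Toy field: principal directions follow concentric circles in the x-y plane.
shape = (40, 40, 3)
peaks = np.zeros(shape + (3,))
cx, cy = 19.5, 19.5
for x in range(shape[0]):
    for y in range(shape[1]):
        v = np.array([-(y - cy), x - cx, 0.0])
        if np.linalg.norm(v) > 0:
            peaks[x, y, :, :] = v / np.linalg.norm(v)

sl = track_streamline(peaks, seed=(30.0, 19.5, 1.0))
print(sl.shape)   # a roughly circular trajectory of tracked points
```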
For bundle continuity, we add spatial continuity constraint equation in Section <ref>. Then, the S(t) solution can be simplified as estimation of BTD coefficients, detailed process is shown in Section <ref>. §.§ Bundle-specific tractogram distribution function Consider the diffusion field vector at position (x,y,z) to be v(x,y,z) = [ v_x(x,y,z),v_y(x,y,z),v_z(x,y,z)]^T. The BTD can be parameterized by a 3D fiber bundle with a set of streamlines S(t) = {s_i,i = 1, ⋯ ,λ} that satisfies the following properties <cit.>: The tangent vector at point (x,y,z) of fiber path s_i equals the field vector v(x,y,z) = [ v_x(x,y,z),v_y(x,y,z),v_z(x,y,z)]^T, that is, ∂s_i/∂ x = v_x(x,y,z),∂s_i/∂ y = v_y(x,y,z),∂s_i/∂ z = v_z(x,y,z). The streamlines satisfy s_i∩s_j = 0, i j, which is defined using the streamline differential equation, dx/v_x(x,y,z) = dy/v_y(x,y,z) = dz/v_z(x,y,z). We introduce the higher-order streamline differential Eq. <ref> with the tangent vector approximated by the n^th-order polynomial, f(x,y,z) = ∑_i = 0^n ∑_j = 0^n - i∑_k = 0^n - i - ja_ijk^x^iy^jz^k, where a_ijk is the coefficients of the polynomial. Combined with Eq. <ref>, the diffusion vector v(x,y,z) is denoted, [ v(x,y,z) = [ [ ∑_i = 0^n ∑_j = 0^n - i∑_k = 0^n - i - ja_ijk^xx^iy^jz^k; ∑_i = 0^n ∑_j = 0^n - i∑_k = 0^n - i - ja_ijk^yx^iy^jz^k; ∑_i = 0^n ∑_j = 0^n - i∑_k = 0^n - i - ja_ijk^zx^iy^jz^k ]]^T; = A · C(x,y,z) ] where A is the coefficient matrix defined as, A = [ [ a_n00^xa_(n - 1)10^xa_(n - 1)01^xa_(n - 1)00^x...a_001^xa_000^x; a_n00^ya_(n - 1)10^ya_(n - 1)01^ya_(n - 1)00^y...a_001^ya_000^y; a_n00^za_(n - 1)10^za_(n - 1)01^za_(n - 1)00^z...a_001^za_000^z ]]_( 3∑_Δ = 0^n (Δ + 1)*(Δ + 2)/2) and C(x,y,z) denotes as, [ C_n(x,y,z) = ([x^n,x^n - 1y,x^n - 1z,x^n - 1, ··· ,z,1]^T)_( ∑_Δ = 0^n (Δ + 1)*(Δ + 2)/2,1); = [F_n(x,y,z), C_n - 1(x,y,z)]; {[ C_1(x,y,z) = [x,y,z,1],; F_n(x,y,z) = [c_m,···,c_l, ···c_2, c_1],m = (n + 1)(n + 2)/2; c_l = x^iy^jz^k with; l = i(n - i - 3/2) + j + 1,i = [0,1, ··· ,n],; j = [0,1, ··· ,n - i],i + j + k = n ]. ] To further illustrate the Eq. <ref>, we give an example of C_n with n=2, The n represented the order of higher-order streamline differential equation [ C_2(x,y,z) = [F_2(x,y,z), C_1(x,y,z)]; = [x^2,y^2,z^2,xy^,xz^,yz,x,y,z,1^]; {[ C_1(x,y,z) = [x,y,z,1]; F_2(x,y,z) = [c_6,c_5,c_4,c_3,c_2, c_1] with; c_6 = x^2,c_5 = y^2,c_4 = z^2,c_3 = xy^,c_12 = xz^,c_1 = yz^ ]. ] §.§ Spatial continuity constraint equation for a tractogram We assume that the diffusion displacement of water molecules in the same fiber maintains continuity. We use continuous incompressible fluid theory to describe the spatial continuity of the bundle by introducing the concept of divergence of the fiber flow on diffusion tensor vectorial field, divS = ∂v_x/∂ x + ∂v_y/∂ y + ∂v_z/∂ z. We assume that the fibers do not originate or terminate in the white matter, that is, divS satisfies divS = 0. The substitution of Eq. <ref>, and Eq. <ref> into Eq. <ref> yields: [ divS = ∑_i = 0^n ∑_j = 0^n - i∑_k = 0^n - i - j( [ a_ijk^xix^i - 1y^jz^k +; a_ijk^yjx^iy^j - 1z^k +; a_ijk^zkx^iy^jz^k - 1 ]); = H_n - 1·C_n - 1(x,y,z) = 0 ] where C_n - 1(x,y,z) can be derived from Eq. 
<ref>, and C_n - 1(x,y,z) = [x^n - 1,x^n - 2y,x^n - 2z, ··· ,x^iy^jz^k, ··· ,z,1]_( M,1 ) H_n - 1 = [ [ na_n00^x + a_(n - 1)10^y + a_(n - 1)01^z; (n - 1)a_(n - 1)10^x + 2a_(n - 2)20^y + a_(n - 2)11^z; (n - 1)a_(n - 1)01^x + a_(n - 2)11^y + 2a_(n - 2)02^z; (n - 1)a_(n - 1)00^x + a_(n - 2)10^y + a_(n - 2)01^z; ···; (i + 1)a_(i + 1)jk^x + (j + 1)a_i(j + 1)k^y +; (k + 1)a_ij(k + 1)^z; ···; a_101^x + a_011^y + 2a_002^z; a_100^x + a_010^y + a_001^z ]]^T_( 1,M ) where M=∑_Δ = 0^n (Δ )*(Δ + 1)/2. §.§ Estimation of the BTD With the definition of BTD, finding the optimal tractogram from ρ _1 to ρ _2 can be simplified as the estimation of coefficient matrix A. The coefficient matrix A is estimated by minimizing the energy P(S) in Eq. <ref> with p( S(t),Ṡ(t)) defined as: p( S(t),Ṡ(t)) = Φ (v(x,y,z)) - v(x,y,z)_2^2 where Φ (v(x,y,z)) is the probability that fiber trajectory is actually passes through the FOD at point (x,y,z), which is derived from the orientation distribution functions (ODF) in each voxel in the volume. To achieve fiber spatial continuity, we add constrain Eq. <ref> to Eq. <ref>, which yields the optimization model: [ min_S P(S) = ∭_ΩΦ (v(x,y,z)) - v(x,y,z)_2^2dxdydz; s.t.H_n - 1·C_n - 1(x,y,z) = 0 ] where Ω = [ω _1,ω _2, ··· ,ω _γ] is the bundle pathway containing γ voxels from ρ _1 to ρ _2. For simplicity, we approximate Φ (v(x,y,z)) as the peak direction values in each voxel and set [G = [g_1, g_2···g_γ]_( 3,γ) as a set of peak direction values in ω. The (X,Y,Z) is the center of a voxel in ω and C_n^*(X,Y,Z) is a set of C_n(X,Y,Z). The estimation of coefficient matrix A of BTD is simplified as min_A E = G - A · C_n^*(X,Y,Z)_2^2 s.t.H_n - 1· C_n - 1^*(X,Y,Z) = 0 [ C_n^*(X,Y,Z) = [C_n^(X_1,Y_1,Z_1),C_n^(X_2,Y_2,Z_2), ··· ,C_n^(X_γ,Y_γ,Z_γ)]_( ∑_Δ = 0^n (Δ + 1)*(Δ + 2)/2 ,γ) ] The estimation of the BTD is decomposed into two stages. First, the least squares method is used to solve Eq. <ref> to obtain the coefficient matrix A. Second, we use the coefficient matrix A to obtain the flow field vectors and use the fourth-order Runge-Kutta method <cit.> to solve the higher-order streamline differential equation Eq. <ref>. See Algorithm. <ref> for a detailed implementation. § EXPERIMENTS Tests and validations of our algorithm are based on four simulated datasets, namely, Hough data, Sine data <cit.>, Circle data, and ISMRM 2015 Tractography Challenge data <cit.>, and in vivo data of HCP <cit.>. Our BTD estimation is based on FODs estimated using constrained spherical deconvolution, which are included in the software package MRtrix <cit.>. The experimental results are evaluated by comparing BTD with deterministic FOD-based tracking (SD_Stream) <cit.>, integration over fiber orientation distributions (iFOD2) <cit.>, and unscented Kalman filter (UKF) algorithm <cit.>. For quantitative evaluation, we use Tractometer metrics, that is, valid connections (VC), bundle overlap (OL), and bundle overreach (OR) for Sine and Hough data <cit.>. For Circle data, we design a metric referred to as Deviation, which can be expressed as: Deviation = ∑_j = 1^N ∑_i = 1^λ _j(√((x_ij - U_c)^2 + (y_ij - V_c)^2) - R_j)/∑_j = 1^N (λ _j) where N is the number of streamlines belonging to a bundle, and the j-th streamline has λ _j points. The (x_ij,y_ij) is the i-th point on j-th streamline, and R _j is the distance from a seed to circle center (U_c,V_c) on j-th streamline. In addition, we compute the bundle volumetric overlap to quantify the spatial coverage <cit.>. 
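Before turning to the individual datasets, a compact sketch of the two-stage estimation referenced above may be useful: fit the coefficient matrix A of the higher-order field v(x,y,z) = A·C_n(x,y,z) to a set of voxel-centre peak directions by least squares, then trace streamlines of the fitted field with the fourth-order Runge-Kutta scheme. This is an illustration under simplifying assumptions, not the paper's implementation: the divergence-free constraint is omitted, the monomial ordering in C_n is our own, and the synthetic peak data are ours.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_basis(points, n):
    """Evaluate all monomials x^i y^j z^k with i+j+k <= n at each point.
    Returns an array of shape (n_monomials, n_points), i.e. C_n evaluated columnwise."""
    pts = np.atleast_2d(points)              # (N, 3)
    exps = []
    for deg in range(n, -1, -1):
        for c in combinations_with_replacement(range(3), deg):
            exps.append([c.count(0), c.count(1), c.count(2)])
    exps = np.array(exps)                    # (M, 3) exponent triples
    return np.prod(pts[:, None, :] ** exps[None, :, :], axis=2).T, exps

def fit_btd(centers, peaks, n=3, reg=1e-8):
    """Least-squares estimate of A in  peaks ~ A @ C_n(centers)."""
    C, _ = monomial_basis(centers, n)        # (M, N)
    G = np.asarray(peaks).T                  # (3, N)
    # Solve min_A ||G - A C||_F^2 with a small ridge term for stability.
    return G @ C.T @ np.linalg.inv(C @ C.T + reg * np.eye(C.shape[0]))

def rk4_streamline(A, n, x0, step=0.2, n_steps=200):
    """Integrate dx/dt = A C_n(x) with the classical fourth-order Runge-Kutta scheme."""
    def v(x):
        C, _ = monomial_basis(x[None, :], n)
        return A @ C[:, 0]
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        k1 = v(x)
        k2 = v(x + 0.5 * step * k1)
        k3 = v(x + 0.5 * step * k2)
        k4 = v(x + step * k3)
        x = x + step / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)

# Synthetic bundle: peak directions of a field curving in the x-y plane.
grid = np.array([[x, y, z] for x in range(10) for y in range(10) for z in range(3)],
                dtype=float)
true_dirs = np.stack([np.ones(len(grid)), 0.3 * np.sin(0.5 * grid[:, 0]),
                      np.zeros(len(grid))], axis=1)
true_dirs /= np.linalg.norm(true_dirs, axis=1, keepdims=True)

A = fit_btd(grid, true_dirs, n=3)
streamline = rk4_streamline(A, n=3, x0=np.array([0.0, 5.0, 1.0]))
print(streamline[:3])
```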
§.§ Hough, Sine, Circle, and FiberCup data We simulate three datasets: the Hough data feature brain-like large fanning connections, while the Sine data feature long-range and large twisting connections, and the Circle data feature large bending connections. The datasets are generated with the following parameter settings. The spatial resolution is 1mm × 1mm × 1mm. There are 78 gradient directions with a b-value of 1000 s/mm^2, and the signal-to-noise ratio (SNR) is set to 10, 20, and + ∞. The Hough and Circle data size is 60 × 60 × 6, and the Sine data size is 100 × 100 × 6. Sine data satisfy 2π·y = α·sin (x), and α is set to 0.1, 0.2, 0.3, and 0.4 to adjust different amplitudes. The inner radius (r_1) of the circle data is 10 mm, and the outer radius (r_2) is 20 mm. FiberCup data were released in the MICCAI Challenge in 2009. The spatial resolution is 3mm × 3mm × 3mm; the data have 30 gradient directions, with a b-value of 1000 s/mm^2 and a size of 64 × 64 × 3. Tractography parameters are as follows. The step size λ is 0.2mm. The seed region for the Hough data is the first six rows of voxels (bottom of the data), the Sine data is the first two columns of voxels (left side of the data), and the Circle data is the four rows of voxels in the red box (Fig. <ref>). The seed masks are the first four columns or rows in regions A1, B1, C1, D1 and E1 for FiberCup data. The number of seeds is set to 2000 for the Sine and Hough data and 720 for the Circle data, and there are 4 seeds in each voxel for the FiberCup data. The minimum length of any track is set to 5mm (5 voxels) for the Hough and Sine data, 15mm (5 voxels) for the FiberCup data, and 2π·r_1 mm for the Circle data. To evaluate the tractograms of BTD on different orders, we test BTD from the 3rd- to 6th-order on Hough data and Sine (α = 0.3) data with SNRs of 10, 20, and + ∞, which are shown in Fig. <ref> and Fig. <ref>. We further test the algorithms on Sine data with α from 0.1, 0.2, 0.3, and 0.4 (Fig. <ref>) to adjust the amplitude. The quantitative results of Hough data with SNR=10, Sine data with α = 0.3 and SNR=10, and Sine data with α = 0.4 and SNR=10 are shown in Table. <ref>. From Fig. <ref>, Fig. <ref> and Table. <ref> show the fitting ability increase, and the BTD with 5th-order and 6th-order yield better results than the 3th-order and 4th-order BTD. Compared to the 5th-order BTD, the 6th-order BTD shows approximate fitting ability but the complexity will increase significantly because the coefficients of the BTD from the 5th-order to the 6th-order will increase by 84 terms. Therefore, the 5th-order BTD is used to compare the tractography results in the following experiments. Notion, we can select the lower order of BTD when we track the simple bundles, which can reduce running time. However, for the complex bundle, like the corpus callosum, we suggested the higher order of BTD as the bundle is complex. Moreover, for most of the bundle in vivo, the fitting ability of 5th-order is sufficient and the run time is suitable. Therefore, we recommended the 5th-order of BTD for some complex bundles. To further verify the proposed algorithm, we assess the tractograms using Circle data with SNRs of 10, 20, and + ∞, in which error accumulation is more obvious. The results are shown in Fig. <ref> and Table. <ref>. The blue curves represent the fibers in the same starting area but with different trajectories. The ending points of the SD_Stream and iFOD2 deviate largely compared to their starting points. 
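The Deviation score used for the Circle data reduces to a few lines of code. The sketch below follows the definition above as printed (signed differences between each point's radial distance and the per-streamline radius R_j set by its seed); streamlines are passed as arrays of 2-D points, and the toy phantom is ours.

```python
import numpy as np

def circle_deviation(streamlines, seeds, center):
    """Deviation metric: average difference between the radial distance of every
    tracked point and the radius R_j defined by the seed of its streamline."""
    Uc, Vc = center
    num, count = 0.0, 0
    for pts, seed in zip(streamlines, seeds):
        R_j = np.hypot(seed[0] - Uc, seed[1] - Vc)
        r = np.hypot(pts[:, 0] - Uc, pts[:, 1] - Vc)
        num += np.sum(r - R_j)
        count += len(pts)
    return num / count

# Toy check: noisy circular streamlines around (30, 30) should score near zero.
rng = np.random.default_rng(0)
center = (30.0, 30.0)
streamlines, seeds = [], []
for R in np.linspace(12.0, 18.0, 20):
    t = np.linspace(0.0, 2 * np.pi, 300)
    noise = 0.1 * rng.standard_normal(t.shape)
    pts = np.stack([center[0] + (R + noise) * np.cos(t),
                    center[1] + (R + noise) * np.sin(t)], axis=1)
    streamlines.append(pts)
    seeds.append(pts[0])
print(circle_deviation(streamlines, seeds, center))   # ~0
```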
To illustrate the model performance in one phantom image covering various streamline scenarios, we test BTD on FiberCup data. This data included crossing, fanning etc., which was widely used for comparison of tracking algorithms. The BTD has better performance for the bundles with crossing and twisting regions in Fig. <ref>. Furthermore, the BTD shows better VC, OL and OR in Table. <ref>. We compare the tractograms among 5th-order BTD, SD_Stream, and iFOD2. From Fig. <ref>, the tractograms of BTD are evenly distributed in the mask and have larger VC and OL than SD_Stream and iFOD2 with different SNRs. In addition, the tractograms of SD_Stream and iFOD2 show the small angle of divergence, and fewer fibers reach the large fanning regions on Hough data. As an important factor affecting tractography, error accumulation leads to premature termination of the fibers, which is more obvious in long-range and large twisting connections, such as the bundle on Sine data (Fig. <ref>). The BTD obtains larger spatial coverage as well as better VC and OL at different SNRs. To further compare the tractograms on more complex data, we adjust the from 0.1, 0.2, 0.3, and 0.4 for Sine data (Fig. <ref>). The BTD shows more stable tractograms and higher VC and OL, while SD_Stream and iFOD2 exhibit an increase in the number of prematurely terminated fibers with decreasing amplitude. In Fig. <ref>, the BTD shows less deviation with increasing noise and most of fibers can return their starting points. In Fig. <ref> and Table. <ref>, the BTD shows better performance compared with SD_Stream and iFOD2, specifically for the bundles with crossing and fanning (E1-E2, E1-E4). The BTD shows large VC and OL and lower OR compared with SD_Stream and iFOD2. Additionally, the computational time from 3th- to 6th-order BTD may be need approximately 2.0s, 2.4s, 5.8s and 9.2s runtime using Hough data (repeat 100 times; in fourth column in Table. <ref>) with Inter i-9900k processor and Matlab2019 platform. From the above results, the BTD has more valid fibers, larger spatial coverage, and lower error accumulation as fibers spread forward than SD_Stream and iFOD2. The BTD seems to capture the better fanning, long-range and twisting, and large bending bundle tracking results. §.§ ISMRM 2015 Tractography Challenge data In this section, we evaluate the performance of the BTD on the ISMRM 2015 Challenge data, which simulates the shape and complexity of 25 well-known in vivo fiber bundles. The dataset has 32 gradient directions, a b-value of 1000 s/mm^2, and 2mm isotropic voxels. The dataset is denoised and corrected for distortions using MRtrix3 (dwidenoise and dwipreproc). The tractography parameters are as follows: The min-separation angle is 30_0, the step size is 0.2mm, and the minimum length of any track is more than 50 mm. FODs are performed by standard constrained spherical deconvolution (CSD) in MRtrix for iFOD2 and SD_Stream, and the UKF uses a two-tensor model. We selected the corticospinal tract (CST) as an example to test the algorithms. The CST has the features of large fanning and long range. The anatomy of CST is well known from the brainstem to the precentral gyrus <cit.>. The bundle masks are the voxels that ground truth fibers pass through after dilatation. In Fig. <ref>, we exhibit the details of the CST near area 4t (yellow boxes) and another regions (green boxes)(Brainetome regions <cit.>). Tractometer metrics with VC, OR, and OL for left and right CST are presented in Table. <ref>. From the right column in Fig. 
<ref>, the BTD preserves better spatial fluency and is closer to the ground truth. The BTD tractography more fibers ending in precentral gyrus than iFOD2, SD_Stream, and UKF methods. While the UKF shows some twisted fibers and is unevenly distributed. The iFOD2 and SD_Stream show sparse and interrupted fibers. The BTD can track the large fanning fibers that ending nearby 4tl and 4hf in precentral gyrus. The iFOD2 and SD_Stream show fewer or no fibers in these regions. We can see that the VC and OL of iFOD2 and SD_Stream in Table. <ref> are lower than BTD and UKF. In addition, the BTD has a lower OR compared with other three algorithms. The BTD seems to capture the complexity in regions where we expect fiber geometry (details are shown in Fig. <ref>). This is because the BTD reconstructs a bundle in a ‘cluster to cluster’ manner to reduce the ambiguous spatial correspondences between diffusion directions and fiber geometry. Therefore, the BTD preserves better spatial fluency and can better track the complex fibers than current peak-based tractography. §.§ HCP data For visual and quantitative comparisons on data from real subjects, we used the HCP dataset subjects <cit.>. These are acquired using 288 gradient directions, consisting of 18 scans at b = 0 s/mm^2 and three b-values (1000 s/mm^2, 2000 s/mm^2, 3000 s/mm^2) using 90 gradients, and the voxel size is 1.25mm × 1.25mm × 1.25mm. We used the preprocessed dMRI images shared by HCP. FODs were estimated using constrained spherical deconvolution, which are included in the software package MRtrix <cit.>. We used HCP #100307 subject to visually compare fiber tractograms from the proposed BTD algorithm and results from the tract orientation mapping (TOM) <cit.>, parallel transport tractography (PTT) <cit.>, fiber trajectory distribution (FTD) <cit.>, unscented Kalman filter (UKF) algorithm <cit.>, integration over fiber orientation distributions (iFOD2) <cit.>, and deterministic FOD-based tracking (SD_Stream) <cit.>. In this paper, we use VC and OL to validate the proposed method, so all tractography methods were selected 3000 streamlines which were seeded from all voxels within the each tract start regions, and tract masks were used to filter the tractograms. The tractography specific parameters are as follows: i) FTD, iFOD2, and SD_Stream: maximum angle = 60^0, step size = 0.3, cutoff = 0.1, minimum length = 75; ii) UKF: seedingFA = 0.06, stoppingFA = 0.05, stoppingThreshold = 0.06, Qm = 0.001, and Ql = 50. iii) PTT: the default parameters in <cit.>, the difference is that the seeding regions are the tract start regions. iv) TOM: the default parameters in <cit.>, the difference is that the seeding regions are the tract start regions, max_nr_fibers=3000, no streamline filtering by end mask. In the absence of ground truth fibers for in vivo data, the relatively familiar corpus callosum (CC_4), CST, and arcuate fasciculus (AF) were selected for qualitative and quantitative evaluations. The AF is a neuronal pathway that connects Wernicke’s area and Broca’s area <cit.>. The CC_4 is minor forceps of the corpus callosum that connects bilateral frontal lobe <cit.>. These three tracts have the characteristics of long-range, twisting, and fanning characteristics, making them suitable to assess the algorithms. 
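Since the comparisons above rely on mask-based scores, a minimal sketch of how bundle overlap (OL) and overreach (OR) can be computed from binary volumes is given below. This is a simplified stand-in for the Tractometer definitions, which additionally involve streamline validity checks; the voxelisation, the toy bundle, and the function names are ours.

```python
import numpy as np

def streamlines_to_mask(streamlines, shape):
    """Binary volume of all voxels visited by at least one streamline
    (point coordinates are assumed to already be in voxel units)."""
    mask = np.zeros(shape, dtype=bool)
    for pts in streamlines:
        idx = np.clip(np.round(pts).astype(int), 0, np.array(shape) - 1)
        mask[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return mask

def overlap_overreach(track_mask, gt_mask):
    """OL: fraction of ground-truth voxels reached by the tractogram.
    OR: voxels reached outside the ground truth, relative to its size."""
    gt = gt_mask.astype(bool)
    tm = track_mask.astype(bool)
    ol = np.logical_and(tm, gt).sum() / gt.sum()
    orr = np.logical_and(tm, ~gt).sum() / gt.sum()
    return ol, orr

# Toy example: a tube-shaped ground-truth bundle and a slightly offset tractogram.
shape = (20, 20, 20)
gt = np.zeros(shape, dtype=bool)
gt[5:15, 9:12, 9:12] = True
streamlines = [np.stack([np.linspace(5, 14, 50),
                         np.full(50, 10.0 + dy),
                         np.full(50, 10.0)], axis=1) for dy in (-1.0, 0.0, 2.0)]
mask = streamlines_to_mask(streamlines, shape)
print(overlap_overreach(mask, gt))
```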
The main lost fibers in the TOM, PTT, FTD, UKF, iFOD2, and SD_Stream are mainly distributed in fanning and other complex geometric structures, such as the fibers ending in area 4tl and 4hf of precentral gyrus (CST), frontal lobe on CC_4 and temporal lobes on AF. The results are consistent with VC and OL in middle of Fig. <ref>, in which the BTD obtains highest VC and OL. We also exhibited the fibers of AF and CC_4 on anatomical slices in top of Fig. <ref> (the slices and fibers of CC_4 can be seen in bottom of Fig. <ref>). In addition, the BTD of 7th-order or higher may overfit in the termination region. In addition, we also give tractometer metrics results of the proposed BTD on CST using five HCP data (subject ID: #100307, #112112, #112920, #113821, #118831) in Fig. <ref>. The results show that the proposed method is higher than the other compared methods in VC and OL tractometer metrics. We can also see that BTD has a significant improvement compared to FTD, which illustrates the advantage of BTD for complex bundle reconstruction by establishing higher-order streamline differential equations at the global level. § DISCUSSION AND CONCLUSION In this work, a novel bundle-specific tractography approach BTD is proposed, which integrates higher-order streamline differential equation to derive brain connectome between two regions in ‘cluster to cluster’ manner instead of ‘single to single’ manner. We parameterize fiber bundles using the BTD coefficients that are estimated by minimizing the energy on the diffusion tensor vector field by combining the priors. Experiments are performed on Hough, Sine, Circle data, and the ISMRM 2015 Tractography Challenge data and in vivo data of HCP for qualitative and quantitative evaluation. The horizontal comparisons show that with increasing order of the BTD, the numbers of valid fibers and overlapped regions will gradually increase. The best results are obtained with the 5th- or 6th-order BTD, and an order higher than six may cause overfitting. The comparisons with state-of-the-art methods show that the BTD can reconstruct complex fiber bundles, such as long-range, large twisting, and fanning tracts, and show better spatial consistency with fiber geometry, which is potentially useful for robust tractography. Our method may be affected when FODs or peaks are inaccurate due to noise, artifacts, pathological cases, or poor-quality datasets. In addition, the BTD has difficulty tracking bundles that have received tumor compression. Fiber tracking using deep learning in tumor scenarios will be studied in future work. IEEEtran
http://arxiv.org/abs/2307.02062v1
20230705065939
Convergence Analysis for Restarted Anderson Mixing and Beyond
[ "Fuchao Wei", "Chenglong Bao", "Yang Liu", "Guangwen Yang" ]
math.NA
[ "math.NA", "cs.NA" ]
Anderson mixing (AM) <cit.>, also known as Anderson acceleration <cit.>, or Pulay mixing, DIIS method in quantum chemistry <cit.>, is a classical extrapolation method for accelerating fixed-point iterations <cit.> and has wide applications in scientific computing <cit.>. Consider a fixed-point problem x = g(x), where x∈ℝ^d and g: ℝ^d→ℝ^d. The conventional fixed-point iteration x_k+1 = g(x_k),   k=0,1,…, converges if g is contractive. To accelerate the convergence of (<ref>), AM generates each iterate by the extrapolation of historical steps. Specifically, let m_k≥ 0 be the size of the used historical sequences at the k-th iteration. AM obtains x_k+1 via x_k+1 = (1-β_k)∑_j=0^m_kα_k^(j) x_k-m_k+j +β_k∑_j=0^m_kα_k^(j)g(x_k-m_k+j), where β_k>0 is the mixing parameter, and the extrapolation coefficients {α_k^(j)}_j=0^m_k are determined by solving a constrained least squares problem: min_{α_k^(j)}_j=0^m_k∑_j=0^m_kα_k^(j)(g(x_k-m_k+j)-x_k-m_k+j) _2     ∑_j=0^m_kα_k^(j) = 1. There are several approaches to choosing m_k. For example, the full-memory AM chooses m_k = k, i.e., using the whole historical sequences for one extrapolation; the limited-memory AM sets m_k = min{ m,k}, where m≥ 1 is a constant integer. For solving systems of equations where the fixed-point iterations are slow in convergence, AM is a practical alternative to Newton's method when handling the Jacobian matrices is difficult <cit.>. It has been recognized that AM is a multisecant quasi-Newton method that implicitly updates the approximation of the inverse Jacobian matrix to satisfy multisecant equations <cit.>. Also, another type of AM called Type-I AM was introduced in <cit.>. Different from the original AM (also called Type-II AM), the Type-I AM directly approximates the Jacobian matrix. Both types of AM have been adapted to solve various fixed-point problems <cit.>. Motivated by the promising numerical performance in many applications, the theoretical analysis of AM methods has become an important topic.
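Before turning to the existing theory, it may help to see how compactly the extrapolation scheme above can be implemented. The following sketch is a limited-memory Type-II variant with a constant mixing parameter; it eliminates the normalisation constraint on the coefficients by working with differences of residuals, which is a standard reformulation, and the test problem and names are ours rather than anything analysed later in the paper.

```python
import numpy as np

def anderson_mixing(g, x0, m=5, beta=1.0, tol=1e-10, max_iter=100):
    """Limited-memory (Type-II) Anderson mixing for x = g(x).

    The extrapolation coefficients solve
        min || sum_j alpha_j (g(x_j) - x_j) ||_2   s.t.  sum_j alpha_j = 1,
    eliminated here via gamma-coefficients on residual differences."""
    x = np.asarray(x0, dtype=float)
    X, R = [x.copy()], [g(x) - x]
    for k in range(max_iter):
        mk = min(m, k)
        r = R[-1]
        if np.linalg.norm(r) < tol:
            break
        if mk == 0:
            x = x + beta * r
        else:
            dR = np.stack([R[-mk - 1 + j + 1] - R[-mk - 1 + j] for j in range(mk)], axis=1)
            dX = np.stack([X[-mk - 1 + j + 1] - X[-mk - 1 + j] for j in range(mk)], axis=1)
            gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
            x = x + beta * r - (dX + beta * dR) @ gamma
        X.append(x.copy())
        R.append(g(x) - x)
    return x, len(X) - 1

# Example: a mildly nonlinear contractive map in R^50.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
M = 0.4 * M / np.linalg.norm(M, 2)              # spectral norm 0.4 < 1
b = rng.standard_normal(50)
g = lambda x: M @ x + 0.05 * np.tanh(x) + b

x_am, it_am = anderson_mixing(g, np.zeros(50), m=5)
print(it_am, np.linalg.norm(g(x_am) - x_am))     # converges in a few iterations
```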
For solving linear systems, it turns out that both types of full-memory AM methods are closely related to Krylov subspace methods <cit.>. However, for solving nonlinear problems, the theoretical properties of AM are still vague. For the Type-II AM, the known results in <cit.> show that the limited-memory version has a local linear convergence rate that is no worse than that of the fixed-point iteration. Recent works <cit.> further point out that the potential improvement of AM over fixed-point iterations depends on the quality of extrapolation, which is determined during iterations. For the Type-I AM, whether similar results hold remains unclear. It is worth noting that these theoretical results of the limited-memory AM follow the conventional one-step analysis, which may only have a partial assessment of the efficacy of AM, as also commented by Anderson in his review <cit.>. A fixed-point analysis in <cit.> reveals the continuity and differentiability properties of the Type-II AM iterations, but the convergence still lacks theoretical quantification. Besides, some new variants of AM have been developed and analyzed in different settings, e.g., see <cit.>. In this paper, we apply a multi-step analysis to investigate the long-term convergence behaviour of AM for solving nonlinear fixed-point problems. We focus on the restarted version of AM, i.e., restarted AM, where the method clears the historical information and restarts when some restarting condition holds. Restart is a common approach to improving the stability and robustness of AM <cit.>. Compared with the limited-memory AM, the restarted AM has the benefit that it is more amenable to extending the relationship between AM methods and Krylov subspace methods to nonlinear problems. Based on such a relationship, we establish the convergence properties of both types of restarted AM methods which explain the efficacy of AM in practice. Furthermore, by investigating the properties of restarted AM, we obtain an efficient procedure to estimate the eigenvalues of the Jacobian matrix that is beneficial for choosing the mixing parameters; for problems with symmetric Jacobian matrices, we derive the short-term recurrence forms of AM. We highlight our main contributions as follows. * We formulate the restarted Type-I and Type-II AM methods with certain restarting conditions and give a unified convergence analysis for both methods. Our multi-step analysis justifies that the restarted Type-II AM method can locally improve the convergence rate of the fixed-point iteration. * We propose an adaptive mixing strategy that adaptively chooses the mixing parameters by estimating the eigenvalues of the Jacobian matrix. The eigenvalue estimation procedure originates from the projection method for eigenvalue problems and can be efficiently implemented using historical information. We also discuss the related theoretical properties. * We show that the restarted AM methods can be simplified to have short-term recurrences if the Jacobian matrix is symmetric, which can reduce the memory cost. We give the convergence analysis of the short-term recurrence methods and develop the corresponding adaptive mixing strategy. Notations. The operator Δ denotes the forward difference, e.g., Δ x_k = x_k+1-x_k. h' is the Jacobian of a function h:ℝ^d→ℝ^d. 
For every matrix A, range(A) is the subspace spanned by the columns of A; 𝒦_k(A,v):= span{ v,Av,…,A^k-1v } is the k-th Krylov subspace generated by A and a vector v; 𝒮(A):=(A+A^T)/2 is the symmetric part of A; σ(A) is the spectrum of A; A_2 is the spectral norm of A; x_A:=(x^TAx)^1/2 is the A-norm if A is symmetric positive definite (SPD). 𝒫_k denotes the space of polynomials of degree not exceeding k. § TWO TYPES OF ANDERSON MIXING METHODS We re-interpret each iteration of the Type-I/Type-II AM method as a two-step procedure following <cit.>. Define r_k=g(x_k)-x_k to be the residual at x_k. The historical sequences are stored as two matrices X_k, R_k ∈ℝ^d× m_k (m_k≥ 1): X_k = ( Δ x_k-m_k , Δ x_k-m_k+1 , … , Δ x_k-1 ), R_k = ( Δ r_k-m_k , Δ r_k-m_k+1 , … , Δ r_k-1 ). Both Type-I and Type-II AM obtain x_k+1 via a projection step and a mixing step: x̅_k = x_k - X_kΓ_k, r̅_k = r_k - R_kΓ_k, x_k+1 = x̅_k + β_kr̅_k, where β_k>0 is the mixing parameter. For convenience, let Z_k:=X_k for the Type-I AM and Z_k:=R_k for the Type-II AM, then Γ_k is determined by the condition r̅_k ⊥ range(Z_k). Assume Z_k^TR_k is nonsingular. From (<ref>), x_k+1 = x_k + β_k r_k - ( X_k + β_k R_k )Γ_k. With the solution Γ_k from (<ref>), we obtain x_k+1 = x_k+G_kr_k, G_k = β_k I -(X_k+β_kR_k)(Z_k^TR_k)^-1Z_k^T. For the Type-I AM, G_k satisfies G_k = J_k^-1, where J_k solves min_JJ-β_k^-1I_F s.t. J X_k = -R_k; For the Type-II AM, G_k solves min_GG-β_kI_F s.t. GR_k=-X_k. Hence, both methods can be viewed as multisecant quasi-Newton methods <cit.>. For the Type-II method, the condition (<ref>) is equivalent to Γ_k = min_Γ∈ℝ^m_k r_k-R_kΓ_2. Let Γ_k = (Γ_k^(1),…,Γ_k^(m_k))^T∈ℝ^m_k. The extrapolation coefficients {α_k^(j)} can be obtained from Γ_k: α_k^(0)=Γ_k^(1), α_k^(j)=Γ_k^(j+1)-Γ_k^(j) (j=1,…,m_k-1), α_k^(m_k)=1-Γ_k^(m_k). Then r_k-R_kΓ_k = ∑_j=0^m_kα_k^(j)r_k-m_k+j. The above formulation of Type-II AM is equivalent to that given by (<ref>) and (<ref>). § RESTARTED ANDERSON MIXING Initialized with m_0 = 0, the restarted AM sets m_k = m_k-1+1 if no restart occurs and sets m_k=0 if a restarting condition is satisfied, similar to the restarted GMRES <cit.>. Thus, the restarting conditions are critical for the method. To define the restarting conditions, we first construct modified historical sequences. Such modification does not alter the iterates but is essential for the following analysis. §.§ The AM update with modified historical sequences Consider the nontrivial case that m_k>0. Note that the G_k in (<ref>) does not change if we replace X_k, R_k by P_k := X_kS_k^-1, Q_k := R_kS_k^-1, where S_k∈ℝ^m_k× m_k is nonsingular. So we can choose some suitable transformation S_k to reformulate the AM update. We construct the modified historical sequences P_k = (p_k-m_k+1,…,p_k), Q_k = (q_k-m_k+1,…,q_k) in a recursive way. Let V_k:=P_k for the Type-I AM and V_k:=Q_k for the Type-II AM. Assume that (Z_j^TR_j)≠ 0 for j=k-m_k+1,…,k. The AM update with modified historical sequences consists of the following two steps. Step 1: Modified vector pair. If m_k = 1, then p_k = Δ x_k-1, q_k = Δ r_k-1. If m_k ≥ 2, we set the vector pair p_k, q_k as p_k = Δ x_k-1-P_k-1ζ_k,    q_k = Δ r_k-1-Q_k-1ζ_k, where ζ_k = (ζ_k^(1),…,ζ_k^(m_k-1))^T is determined by q_k ⊥ range(V_k-1). Step 2: AM update. We obtain x_k+1 via x̅_k = x_k-P_kΓ_k,   r̅_k = r_k-Q_kΓ_k,    x_k+1 = x̅_k+β_kr̅_k, where Γ_k =(Γ_k^(1),…,Γ_k^(m_k))^T is determined by r̅_k ⊥ range(V_k). It can be verified by induction that the above process produces the same iterates as (<ref>). 
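For reference, the basic projection-and-mixing update with G_k from the previous section, which the modified historical sequences above reproduce, translates directly into code: the only difference between the two variants is whether Z_k = X_k or Z_k = R_k in the oblique projection. The single-step sketch below uses our own notation for the history matrices, assumes Z_k^T R_k is nonsingular, and applies both variants to a linear residual as a quick sanity check.

```python
import numpy as np

def am_step(x, r, X, R, beta, variant="II"):
    """One Type-I/Type-II AM step:
        x_{k+1} = x_k + G_k r_k,
        G_k = beta*I - (X_k + beta*R_k) (Z_k^T R_k)^{-1} Z_k^T,
    with Z_k = X_k (Type-I) or Z_k = R_k (Type-II)."""
    if X is None or X.shape[1] == 0:
        return x + beta * r
    Z = X if variant == "I" else R
    gamma = np.linalg.solve(Z.T @ R, Z.T @ r)     # projection step
    return x + beta * r - (X + beta * R) @ gamma  # mixing step

# Both variants applied to a linear residual r(x) = b - A x.
rng = np.random.default_rng(0)
A = np.eye(30) + 0.3 * rng.standard_normal((30, 30)) / np.sqrt(30)
b = rng.standard_normal(30)
res = lambda x: b - A @ x

for variant in ("I", "II"):
    x = np.zeros(30)
    X_hist, R_hist = np.zeros((30, 0)), np.zeros((30, 0))
    r = res(x)
    for k in range(15):
        x_new = am_step(x, r, X_hist, R_hist, beta=1.0, variant=variant)
        r_new = res(x_new)
        X_hist = np.hstack([X_hist, (x_new - x)[:, None]])
        R_hist = np.hstack([R_hist, (r_new - r)[:, None]])
        x, r = x_new, r_new
    print(variant, np.linalg.norm(res(x)))        # both residuals shrink rapidly
```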
To facilitate the analysis, we give explicit procedures to obtain ζ_k and Γ_k. Let Z_k = (z_k-m_k+1,…,z_k), V_k = (v_k-m_k+1,…,v_k). We first describe the procedure to compute ζ_k and q_k. Define q_k^0 = Δ r_k-1. For j=1,2,…,m_k-1, the procedure computes ζ_k^(j) and the intermediate vector q_k^j sequentially: ζ_k^(j) = v_k-m_k+j^Tq_k^j-1/v_k-m_k+j^Tq_k-m_k+j, q_k^j = q_k^j-1-q_k-m_k+jζ_k^(j). Then q_k = q_k^m_k-1. Next, Γ_k and r̅_k can be computed similarly. Define r_k^0 = r_k. For j=1,2,…,m_k, the Γ_k^(j) and the intermediate vector r_k^j are computed sequentially: Γ_k^(j) = v_k-m_k+j^Tr_k^j-1/v_k-m_k+j^Tq_k-m_k+j, r_k^j = r_k^j-1-q_k-m_k+jΓ_k^(j). Then r̅_k = r_k^m_k. Procedures (<ref>) and (<ref>) are reminiscent of the modified Gram-Schmidt orthogonalization process that is recommended for the implementation of Type-II AM <cit.>. The next proposition shows the correctness of the above procedures. Suppose that (Z_j^TR_j) ≠ 0 for j=k-m_k+1,…,k. Then the procedures (<ref>) and (<ref>) are well defined, and the following properties hold: * X_k = P_kS_k, R_k = Q_kS_k, where S_k is unit upper triangular; * V_k^TQ_k is lower triangular; * r̅_k ⊥ range(V_k). The scheme (<ref>) produces the same { x_j }_j=k-m_k+1^k+1 as the original AM update (<ref>). The proof is given in <Ref>. It is worth noting that our formulation of the restarted AM focuses on theoretical analysis. Better implementations are needed in some specific scenarios, e.g., parallel computing. §.§ Restarting conditions Let τ∈ (0,1), η > 0, and m∈(0,d] is an integer. Following <cit.>, the restart criterion is related to the following conditions: m_k ≤ m, | v_k^Tq_k |≥τ| v_k-m_k+1^Tq_k-m_k+1|, r_k_2 ≤ηr_k-m_k_2. If any condition in (<ref>)-(<ref>) is violated during the iteration, set m_k = 0 and restart the method. Details of the restarted AM are given in <Ref>. Next, we explain the rationale behind the above three conditions. The first condition (<ref>) limits the size of the historical sequences, which plays an important role in bounding the accumulated high-order errors in the convergence analysis. The second condition (<ref>) ensures the nonsingularity of V_k^TQ_k as long as v_k-m_k+1^Tq_k-m_k+1≠ 0. This is because V_k^TQ_k is lower triangular and the diagonal elements { v_j^Tq_j}_j=k-m_k+1^k are nonzero due to (<ref>). Also, (<ref>) controls the condition number of V_k^TQ_k by the following lower bound: | v_k-m_k+1^Tq_k-m_k+1|/| v_k^Tq_k | = | e_1^TV_k^TQ_k e_1 |/| e_m_k^TV_k^TQ_k e_m_k|≤V_k^TQ_k_2 ( V_k^TQ_k )^-1_2, where e_j denotes the j-th column of the identity matrix I_m_k. Thus, a too-small | v_k^Tq_k| can cause numerical instability and we have to restart the AM method. The third condition (<ref>) is to control the growth degree of the residuals, which avoids the problematic behaviour of AM and can be seen as a safeguard condition. Moreover, as shown in our proof, the conditions (<ref>)-(<ref>) can lead to the boundedness of the extrapolation coefficients, which is a critical assumption in <cit.>. § CONVERGENCE ANALYSIS In this section, we give a unified convergence analysis for the restarted AM methods described in <Ref>. We first recall the relationship between AM methods and the Krylov subspace methods for solving linear systems. Let x_k^ A and x_k^ G denote the k-th iterate of Arnoldi's method <cit.> and the k-th iterate of GMRES <cit.>, respectively. We summarize the results if (<ref>) is linear. Consider the fixed-point problem (<ref>) with g(x) =(I-A)x+b, where A∈ℝ^d× d is nonsingular and b∈ℝ^d. 
Let { x_k} be the sequence generated by the full-memory Type-I/Type-II AM method with nonzero mixing parameters. If (Z_j^TR_j) ≠ 0 for j=1,…,k, then the following relations hold: * R_k = -AX_k,   range(X_k) = 𝒦_k(A,r_0); * for the Type-I AM method, x̅_k = x_k^ A provided that x_0 = x_0^ A; * for the Type-II AM method, x̅_k = x_k^ G provided that x_0 = x_0^ G. Furthermore, if A is positive definite and r_j ≠ 0, j=0,…,k, then (Z_j^TR_j) ≠ 0, j=1,…,k; the constructions of the modified historical sequences P_k and Q_k are well-defined, and Q_k = -AP_k,    range(P_k) = range(X_k) = 𝒦_k(A,r_0). We give the proof in <Ref>. Properties <ref>-<ref> are known results <cit.>. <Ref> and <Ref> establish the relationship between the restarted AM and Krylov subspace methods in the linear case. Now, we study the convergence properties of the restarted AM for solving nonlinear problems. Rewriting the fixed-point problem (<ref>) as h(x):=x-g(x) = 0, we make the following assumptions on h: (i) There exists x^* such that h(x^*)=0; (ii) h is Lipschitz continuously differentiable in a neighbourhood of x^*; (iii) The Jacobian h'(x^*) is positive definite, i.e., all the eigenvalues of 𝒮(h'(x^*)) are positive. From <ref>, there exist positive constants ρ̂, κ̂, μ, and L such that for all x∈ℬ_ρ̂(x^*):= { z∈ℝ^d|z-x^*_2≤ρ̂}, the following relations hold: μy_2 ≤h'(x)y_2 ≤ Ly_2, ∀ y∈ℝ^d; μy_2^2 ≤ y^Th'(x)y ≤ Ly_2^2, ∀ y∈ℝ^d; h(x)-h'(x^*)(x-x^*)_2 ≤1/2κ̂x-x^*_2^2. Inspired by the proofs of the restarted conjugate gradient methods <cit.> and the cyclic Barzilai-Borwein method <cit.>, we establish the convergence properties of the restarted AM methods from their properties in the linear problems. To achieve this goal, we first introduce the local linear model of h around x^*: ĥ(x) = h'(x^*)(x-x^*), which deviates from h(x) by at most a second-order term 1/2κ̂x-x^*_2^2 in ℬ_ρ̂(x^*) from (<ref>). Then we construct two sequences of iterates { x_k} and {x̂_k }, which are associated with solving h(x)=0 and ĥ(x)=0, respectively. Let the mixing parameters {β_k} satisfy β≤|β_k|≤β' for positive constants β and β'. The sequences { x_k} and {x̂_k } are generated by two processes: (i) Process I: Solve the fixed-point problem (<ref>) with the restarted Type-I/Type-II AM method (see <ref>), and the resulting sequence is { x_k}. (ii) Process II: In each interval between two successive restarts in Process I, apply the full-memory Type-I/Type-II AM with modified historical sequences to solve the linear system ĥ(x) = 0. Specifically, let m_k and β_k be the same ones in Process I and define r̂_k = -ĥ(x̂_k). The iterates are given as follows: x̂_k = x_k,     x̂_k+1 = x̂_k+β_kr̂_k,      m_k = 0; x̂_k+1 = x̂_k + β_kr̂_k-(P̂_k+β_kQ̂_k)Γ̂_k,      m_k>0, where Γ̂_k is chosen such that r̂_k-Q̂_kΓ̂_k ⊥ range(V̂_k). Here P̂_k = (p̂_k-m_k+1,…,p̂_k) and Q̂_k = (q̂_k-m_k+1,…,q̂_k) are the modified historical sequences. Let V̂_k = P̂_k if the Type-I method is used in Process I, and V̂_k = Q̂_k if the Type-II method is used in Process I. Then, p̂_k = Δx̂_k-1, q̂_k = Δr̂_k-1, if m_k = 1; p̂_k = Δx̂_k-1-P̂_k-1ζ̂_k, q̂_k = Δr̂_k-1-Q̂_k-1ζ̂_k, if m_k≥ 2, where ζ̂_k is chosen such that q̂_k ⊥ range(V̂_k-1). The next lemma compares the outputs of the above two processes. Suppose that <ref> holds for the fixed-point problem (<ref>). For the sequences { x_k } and {x̂_k } in <ref>, if x_0 is sufficiently close to x^* and h(x_j)_2≤η_0h(x_0)_2, j=0,…,k, where η_0>0 is a constant, then r_k - r̂_k_2 = κ̂·𝒪(x_k-m_k-x^*_2^2), x_k+1 - x̂_k+1_2 = κ̂·𝒪(x_k-m_k-x^*_2^2). 
The proof is given in <Ref> due to space limitations. Since Process II is closely related to Krylov subspace methods from <Ref>, <ref> extends this relationship to the nonlinear case. When certain assumptions hold, x_k-x̂_k_2 is bounded by a second-order term. Intuitively, we can obtain the convergence of { x_k } for nonlinear problems from the convergence of {x̂_k } for the corresponding linear problems. If {x̂_k } converges linearly (not quadratically), it is expected that { x_k } has a similar convergence rate to {x̂_k } provided that x_0 is sufficiently close to x^*. Suppose that <ref> holds for the fixed-point problem (<ref>). Let { x_k } and { r_k } denote the iterates and residuals of the restarted AM, A:=I-g'(x^*), θ_k := I-β_kA_2, and η_0>0 is a constant. We assume β_j∈[β,β'] (j≥ 0) for some positive constants β and β'. The following results hold. 1. For the Type-I AM, let π_k be the orthogonal projector onto 𝒦_m_k(A,r_k-m_k) and A_k := π_kAπ_k. A_k|_𝒦_m_k(A,r_k-m_k) denotes the restriction of A_k to 𝒦_m_k(A,r_k-m_k). If r_j_2≤η_0r_0_2 (0≤ j≤ k) and x_0 is sufficiently close to x^*, then x_k+1-x^*_2 ≤θ_k√(1+γ_k^2κ_k^2)min__p(0)=1^p∈𝒫_m_kp(A)(x_k-m_k-x^*)_2 + κ̂𝒪(x_k-m_k-x^*_2^2), where γ_k = π_kA(I-π_k)_2 ≤ L, and κ_k = ( A_k|_𝒦_m_k(A,r_k-m_k))^-1_2 ≤ 1/μ. 2. For the Type-II AM, if r_j_2≤η_0r_0_2 (0≤ j≤ k+1) and x_0 is sufficiently close to x^*, then r_k+1_2 ≤θ_k min__p(0)=1^p∈𝒫_m_kp(A)r_k-m_k_2 + κ̂𝒪(x_k-m_k-x^*_2^2). Alternatively, letting θ∈[(1-μ^2/L^2)^1/2,1) be a constant, if θ_j=I-β_jA_2 ≤θ (j≥ 0) and x_0 is sufficiently close to x^*, then (<ref>) holds. 3. For either method, if the aforementioned assumptions hold and m_k = d, then x_k+1-x^*_2 = κ̂𝒪(x_k-m_k-x^*_2^2), namely, (d+1)-step quadratic convergence. The proof is given in <Ref>, which is based on <Ref>, <Ref>, and the convergence properties of Krylov subspace methods. Results (<ref>) and (<ref>) characterize the long-term convergence behaviours of both restarted AM methods for solving nonlinear equations h(x) = 0, where h satisfies <Ref>. The assumption that x_0 is sufficiently close to x^* is common for the local analysis of an iterative method <cit.>. Similar to <cit.>, since an explicit bound for x_0-x^*_2 is rather cumbersome and not very useful in practice, we omit it here for conciseness. Besides, we do not assume g to be contractive here. The critical point is the positive definiteness of the Jacobian h'(x^*), without which there is no convergence guarantee even for solving linear systems <cit.>. If m_k is large and x_0 is sufficiently close to x^*, the convergence rates of both restarted AM methods are dominated by the minimization problems in (<ref>) and (<ref>), which have been extensively studied in the context of Krylov subspace methods <cit.>. For j≥ 0, define u_j=x_j-x^* for the Type-I method, and u_j=r_j for the Type-II method. Note that I-μ/L^2A_2 ≤θ:=(1-μ^2/L^2)^1/2 (see <Ref> in <Ref>). Choosing p(A) = ( I-μ/L^2A)^m_k, it follows that min__p(0)=1^p∈𝒫_m_kp(A)u_k-m_k_2 ≤min__p(0)=1^p∈𝒫_m_kp(A)_2u_k-m_k_2 ≤θ^m_ku_k-m_k_2. With more properties about A, we may choose other polynomials to sharpen the upper bound in (<ref>). We give a refined result in <Ref> when A is symmetric. Now, we consider the case that the fixed-point map g is a contraction. Specifically, we make the following assumptions on g, which are similar to those in <cit.>. The fixed-point map g:ℝ^d→ℝ^d has a fixed point x^*. 
In the local region ℬ_ρ̂(x^*):= { z∈ℝ^d|z-x^*_2≤ρ̂} for some constant ρ̂>0, g is Lipschitz continuously differentiable, and there are constants κ∈ (0,1) and κ̂>0 such that * g(y)-g(x)_2 ≤κy-x_2 for every x,y ∈ℬ_ρ̂(x^*); * g'(y)-g'(x)_2 ≤κ̂y-x_2 for every x,y ∈ℬ_ρ̂(x^*). In fact, we show <ref> is a sufficient condition for <ref>. Suppose <ref> holds for the fixed-point problem (<ref>). Let h(x):=x-g(x). Then h satisfies <ref>. In ℬ_ρ̂(x^*), the Lipschitz constant of h' is κ̂; for (<ref>) and (<ref>), the constants are μ=1-κ, L=1+κ. The proof is given in <Ref>. Based on <Ref> and <Ref>, we obtain the following corollary for the Type-II AM. Suppose that <ref> holds for the fixed-point problem (<ref>). Let { x_k } and { r_k } denote the iterates and residuals of the restarted Type-II AM with β_k=1 (k≥ 0). If x_0 is sufficiently close to x^*, then r_k+1_2 ≤κmin__p(0)=1^p∈𝒫_m_kp(A)r_k-m_k_2 + κ̂𝒪(x_k-m_k-x^*_2^2), where A:=I-g'(x^*). If m_k=d, then x_k+1-x^*_2 = κ̂𝒪(x_k-m_k-x^*_2^2). The R-linear convergence of the limited-memory Type-II AM has been established in <cit.>: Under <Ref> and assuming that ∑_j=0^m_k|α_k^(j)| is bounded, it is proved that for κ̃∈(κ,1), if x_0 is sufficiently close to x^*, then r_k_2 ≤κ̃^kr_0_2. However, as noted by Anderson <cit.>, (<ref>) does not show the advantage of AM over the fixed-point iteration (<ref>) since the latter converges Q-linearly with Q-factor κ. In <cit.>, an improved bound is obtained: r_k+1_2 ≤ s_k(1-β_k+κβ_k)r_k_2 + ∑_j=0^m𝒪(r_k-j_2^2), where k≥ m and s_k:=r̅_k_2/r_k_2. If β_k = 1, (<ref>) improves (<ref>) since s_k≤ 1. However, the quality of extrapolation, namely s_k, is difficult to estimate in advance. The recent analysis in <cit.> refines the higher-order terms in (<ref>), but leaves the issue about s_k unaddressed. For the restarted Type-II AM, <Ref> shows that its convergence rate is dominated by the first term on the right-hand side of (<ref>). Using p(A) = (I-A)^m_k, (<ref>) leads to r_k+1_2 ≤κ^m_k+1r_k-m_k_2+κ̂𝒪(x_k-m_k-x^*_2^2) that is comparable to the fixed-point iteration (<ref>). Nonetheless, due to the optimality, the polynomial that minimizes p(A)r_k-m_k_2 corresponds to the m_k-step GMRES iterations and can often provide a much better bound than (I-A)^m_k <cit.>, which justifies the acceleration by Type-II AM in practice. Therefore, our multi-step analysis provides a better assessment of the efficacy of Type-II AM than previous works. Though the numerical experiments in <cit.> suggest that the limited-memory AM can converge faster than the restarted AM with the same m, the theoretical properties of the limited-memory AM are much more vague, even in the linear case <cit.>. We leave the analysis for the limited-memory AM as our future work. § ADAPTIVE MIXING STRATEGY As shown in <Ref>, the choice of β_k directly affects the factor θ_k in (<ref>) and (<ref>). If g is not contractive, a proper β_k is required to ensure the numerical performance of AM <cit.>. However, tuning β_k with a grid search can be costly in practice. In this section, we explore the properties of restarted AM to develop an efficient procedure to estimate the eigenvalues of h'(x^*), based on which we can choose β_k adaptively. We start from the linear case to better explain how to estimate the eigenvalues. Let g(x)=(I-A)x+b in the fixed-point problem (<ref>), where A∈ℝ^d× d is positive definite and b∈ℝ^d. Then h(x)=Ax-b. 
Using the historical information in the restarted AM, we apply a projection method <cit.> to estimate the spectrum of A: u ∈ range(Q_k), (A-λ I)u ⊥ range(V_k), where u∈ℝ^d is an approximate eigenvector of A sought in range(Q_k), and λ∈ℝ is an eigenvalue estimate. The orthogonality condition in (<ref>) is known as the Petrov-Galerkin condition. Let u = Q_ky,   y∈ℝ^m_k. Then (<ref>) leads to V_k^TAQ_k y = λ V_k^TQ_k y. Next, we describe how to solve the generalized eigenvalue problem (<ref>) using the properties of restarted AM. At the (k+1)-th iteration, suppose that m_k+1≥ 2. As will be shown in <ref>, there is an upper Hessenberg matrix H_k ∈ℝ^m_k× m_k such that AQ_k = Q_k H_k + q_k+1· h_k^(m_k+1)e_m_k^T, where h_k^(m_k+1)∈ℝ and e_m_k is the m_k-th column of I_m_k. Since q_k+1^TV_k = 0 from the construction of q_k+1 (cf. <Ref>), it follows from (<ref>) that V_k^TAQ_k = V_k^TQ_kH_k, which together with (<ref>) yields that V_k^TQ_kH_ky = λ V_k^TQ_ky. Noting that (V_k^TQ_k) ≠ 0 if the restarted AM does not reach the exact solution, we find (<ref>) is reduced to H_k y = λ y, which can be solved by efficient numerical algorithms <cit.> using 𝒪(m_k^3) flops. From <ref>, range(Q_k) = A𝒦_m_k(A,r_k-m_k). For the Type-I method, (<ref>) is an oblique projection method; for the Type-II method, (<ref>) can be viewed as the Arnoldi's method <cit.> based on A^TA-norm. It is expected that with larger m_k, the eigenvalue estimates are closer to the exact eigenvalues of A. Now, we describe the construction of H_k in <Ref> and show the role of H_k in <Ref>. Consider applying the restarted AM to solve the fixed-point problem (<ref>). At the (k+1)-th iteration, suppose that m_k+1≥ 2. Define the unreduced upper Hessenberg matrix H̅_k = (H_k^T,h_k^(m_k+1)e_m_k)^T∈ℝ^(m_k+1)× m_k, where h_k^(m_k+1)∈ℝ, and e_m_k is the m_k-th column of I_m_k. The H_k∈ℝ^m_k× m_k is defined as H_k = h_k if m_k=1 and H_k = (H̅_k-1,h_k) if m_k≥ 2. Define ϕ_k = Γ_k+ζ_k+1, Γ_k^[m_k-1] = (Γ_k^(1),…,Γ_k^(m_k-1))^T. The h_k∈ℝ^m_k in H_k is constructed as follows: h_k = 1/1-Γ_k( 1/β_k-1 - 1/β_kϕ_k),    m_k = 1; h_k = 1/1-Γ_k^(m_k)( 1/β_k-1[ ϕ_k-1; 1 ] -1/β_kϕ_k - H̅_k-1( ϕ_k-1-Γ_k^[m_k-1]) ),    m_k≥ 2. The construction of h_k^(m_k+1) is h_k^(m_k+1) = -1/β_k(1-Γ_k^(m_k)),    m_k≥ 1. Let g(x)=(I-A)x+b in the fixed-point problem (<ref>), where A∈ℝ^d× d is positive definite and b∈ℝ^d. For the restarted Type-I/Type-II AM method, if m_k+1≥ 2 at the (k+1)-th iteration, then with the notations defined in <Ref>, we have AP_k = P_k+1H̅_k = P_kH_k+p_k+1· h_k^(m_k+1)e_m_k^T. The proof is given in <Ref>. Since Q_k+1 = -AP_k+1 from <Ref>, the relation (<ref>) holds as a result of (<ref>). <Ref> suggests that H_k can be economically constructed by manipulating the coefficients in the restarted AM. Thus we can efficiently solve the problem (<ref>) without any additional matrix-vector product. Next, consider the nonlinear case. We can still construct H_k by <Ref>. Let A:=h'(x^*). Since g is nonlinear, the relation (<ref>) does not exactly hold in general, which can make the eigenvalues of H_k different from those computed by solving (<ref>). Nonetheless, similar to the proof of <Ref>, we consider an auxiliary process using restarted AM to solve the linearized problem ĥ(x) = 0, where a Hessenberg matrix Ĥ_k can be constructed as well. By comparing H_k and Ĥ_k, we show that the eigenvalues of H_k can still approximate the eigenvalues of A. Suppose that <ref> holds for the fixed-point problem (<ref>). 
For the Process I in <ref>, assume that there are positive constants η_0,τ_0 such that h(x_j)_2≤η_0h(x_0)_2 (0≤ j≤ k+1) and | 1-Γ_j^(m_j)|≥τ_0 (1≤ j≤ k); H_k is defined by <Ref>. For the Process II in <ref>, the upper Hessenberg matrix Ĥ_k∈ℝ^m_k× m_k is defined correspondingly (by replacing Γ_k, ζ_k+1 with Γ̂_k, ζ̂_k+1 in <Ref>). Then for x_0 sufficiently close to x^*, we have H_k_2 = 𝒪(1),   H_k-Ĥ_k_2 = κ̂𝒪(x_k-m_k-x^*_2). The proof can be found in <Ref>. It suggests that H_k is a perturbation of Ĥ_k. Since ĥ(x) is linear, the eigenvalues of Ĥ_k exactly solve V̂_k^TAQ̂_k y = λ̂V̂_k^TQ̂_k y, thus approximating the eigenvalues of A. Next, we compare σ(H_k) and σ(Ĥ_k) using the perturbation theory. Under the same assumptions of <ref>, let λ be an eigenvalue of H_k. Then for x_0 sufficiently close to x^*, we have min_λ̂∈σ(Ĥ_k)|λ̂-λ| = κ̂^1/m_k𝒪(x_k-m_k-x^*_2^1/m_k). If further assuming Ĥ_k is diagonalizable, i.e., there is a nonsingular matrix M_k∈ℝ^m_k× m_k such that Ĥ_k = M_kD̂_kM_k^-1, where D̂_k is diagonal, then min_λ̂∈σ(Ĥ_k)|λ̂-λ| = M_k_2M_k^-1_2κ̂𝒪(x_k-m_k-x^*_2). (<ref>) follows from <Ref> and <cit.>, and (<ref>) is a consequence of <Ref> and Bauer-Fike theorem <cit.>. Since Ĥ_k is unavailable in practice, <Ref> suggests that we can use the eigenvalues of H_k to roughly estimate the eigenvalues of A. Then, suppose m_k≥ 2 and λ̃ is the eigenvalue of H_k-1 of the largest absolute value. We set the mixing parameter at the k-th iteration as β_k = 2/|λ̃|. We call such a way to choose β_k as the adaptive mixing strategy since β_k is chosen adaptively. Usually, the extreme eigenvalues can be quickly estimated, so we only need to run the eigenvalue estimation procedure for a few steps. § SHORT-TERM RECURRENCE METHODS For solving high-dimensional problems, the memory cost of AM can be prohibitive when m_k is large. In this section, we show that if the Jacobian matrix is symmetric, the restarted AM methods can have short-term recurrence forms which address the memory issue while maintaining fast convergence. Since <Ref> assumes that h'(x^*) is positive definite, the symmetry of h'(x^*) motivates us to consider solving SPD linear systems first. Let g(x)=(I-A)x+b in the fixed-point problem (<ref>), where A∈ℝ^d× d is SPD and b∈ℝ^d. For the full-memory Type-I/Type-II AM with modified historical sequences, we have ζ_k^(j) = 0,    j≤ k-3   (k≥ 4),    Γ_k^(j) = 0,    j ≤ k-2   (k≥ 3), if the algorithm has not found the exact solution. Since A is SPD, by <Ref>, the procedures (<ref>) and (<ref>) are well defined during the iterations. <ref> also suggests that q_k = Δ r_k-1-Q_k-1ζ_k ⊥ range(V_k-1),   r̅_k = r_k - Q_kΓ_k ⊥ range(V_k). Hence, ζ_k = (V_k-1^TQ_k-1)^-1V_k-1^TΔ r_k-1,   Γ_k = (V_k^TQ_k)^-1V_k^Tr_k. Since A is symmetric, it follows that V_k^TQ_k is diagonal for either type of the AM methods. Note that r_k = r̅_k-1-β_k-1Ar̅_k-1 and range(AV_k-2) ⊆ range(V_k-1) due to (<ref>) in <ref>. Hence V_k-2^Tr_k = V_k-2^Tr̅_k-1 -β_k-1(AV_k-2)^Tr̅_k-1 = 0, as a consequence of (<ref>). So the first (k-2) elements of V_k^Tr_k are zeros. Thus, Γ_k^(j) = 0, j≤ k-2, for k≥ 3. Also, (<ref>) yields that V_k-3^TΔ r_k-1 = V_k-3^Tr_k - V_k-3^Tr_k-1 = 0, which infers that the first (k-3) elements of V_k-1^TΔ r_k-1 are zeros. Thus, ζ_k^(j) = 0, j≤ k-3, for k≥ 4. <ref> suggests that for solving SPD linear systems, we only need to maintain the most recent two vector pairs, and there is no loss of historical information. Specifically, suppose k≥ 3 and define {v_j} as that in <Ref>. 
The procedure has short-term recurrences and is described as follows. Step 1: Modified vector pair. At the beginning of the k-th iteration, p_k,q_k are obtained from Δ x_k-1, Δ r_k-1 and (p_k-2,p_k-1), (q_k-2,q_k-1): p_k = Δ x_k-1-(p_k-2,p_k-1)ζ_k,    q_k = Δ r_k-1-(q_k-2,q_k-1)ζ_k, where ζ_k∈ℝ^2 is chosen such that q_k ⊥ span{v_k-2,v_k-1}. Step 2: AM update. The next step is the ordinary AM update: x̅_k = x_k-(p_k-1,p_k)Γ_k,   r̅_k = r_k-(q_k-1,q_k)Γ_k,    x_k+1 = x̅_k+β_kr̅_k, where Γ_k∈ℝ^2 is chosen such that r̅_k ⊥ span{v_k-1,v_k}. We call it short-term recurrence AM (ST-AM). The Type-II ST-AM has been proposed in <cit.>. Combining the ST-AM update with the restarting conditions (<ref>)-(<ref>), we obtain the restarted ST-AM methods, as shown in <Ref>. We establish the convergence properties in the nonlinear case. For the fixed-point problem (<ref>), suppose that g'(x) is symmetric and <ref> holds. Let { x_k } and { r_k } denote the iterates and residuals of the restarted ST-AM, A:=I-g'(x^*), θ_k := I-β_kA_2, and θ∈[L-μ/L+μ,1) is a constant. For k = 0,1,…, β_k is chosen such that θ_k ≤θ. The following results hold. 1. For the Type-I method, if x_0 is sufficiently close to x^*, then x_k+1-x^*_A ≤ 2θ_k( √(L/μ)-1/√(L/μ)+1)^m_kx_k-m_k-x^*_A+κ̂𝒪(x_k-m_k-x^*_2^2). 2. For the Type-II method, if x_0 is sufficiently close to x^*, then r_k+1_2 ≤ 2θ_k ( √(L/μ)-1/√(L/μ)+1)^m_kr_k-m_k_2+κ̂𝒪(x_k-m_k-x^*_2^2). 3. For either method, if the aforementioned assumptions hold and m_k=d, then x_k+1-x^*_2=κ̂𝒪(x_k-m_k-x^*_2^2), namely (d+1)-step quadratic convergence. We give the proof in <Ref>. <ref> shows that the asymptotic convergence rates of both types of restarted ST-AM methods are optimal with respect to the condition number (see <cit.>), thus significantly improving the convergence rate of the fixed-point iteration. The theorem also suggests that the restarted ST-AM methods are applicable for solving large-scale unconstrained optimization problems since the Hessian matrices are naturally symmetric. When A is symmetric, the convergence bound (<ref>) for the restarted Type-I AM can be refined to x_k+1-x^*_A ≤θ_kmin__p(0)=1^p∈𝒫_m_kp(A)(x_k-m_k-x^*)_A + κ̂𝒪(x_k-m_k-x^*_2^2). Using Chebyshev polynomials for the minimization problems in (<ref>) and (<ref>) <cit.>, we can establish the same convergence rates as (<ref>) and (<ref>) for the restarted Type-I AM and the restarted Type-II AM, respectively. On the other side, the convergence results in <cit.> shown by (<ref>) and (<ref>) in <Ref> cannot provide such refined results and underestimate the efficacy of AM. In practice, we can also choose the mixing parameters {β_k} with simplified computation by exploring the symmetry of the Jacobian. Let g(x)=(I-A)x+b in the fixed-point problem (<ref>), where A∈ℝ^d× d is SPD and b∈ℝ^d. For the restarted Type-I/Type-II ST-AM method, if m_k+1≥ 2 at the (k+1)-th iteration, then Ap_k = t_k^(m_k-1)p_k-1+t_k^(m_k)p_k+t_k^(m_k+1)p_k+1, where p_k-m_k:=0∈ℝ^d, and the coefficients are given by t_k^(m_k-1) = ϕ_k-1/β_k-1(1-Γ_k^(m_k)), t_k^(m_k) = 1/1-Γ_k^(m_k)( 1/β_k-1-ϕ_k/β_k), t_k^(m_k+1) = -1/β_k(1-Γ_k^(m_k)), where ϕ_k:=0 if m_k = 0, and ϕ_k := Γ_k^(m_k)+ζ_k+1^(m_k) = Γ_k+1^(m_k)= v_k^Tr_k+1/v_k^Tq_k if m_k≥ 1. Thus there exists a tridiagonal matrix T̅_k∈ℝ^(m_k+1)× m_k such that AP_k = P_k+1T̅_k = P_kT_k+p_k+1· t_k^(m_k+1)e_m_k^T, where T_k is obtained from T̅_k by deleting its last row, and e_m_k is the m_k-th column of I_m_k. Note that Γ_k+ζ_k+1 = (V_k^TQ_k)^-1V_k^Tr_k + (V_k^TQ_k)^-1V_k^TΔ r_k = (V_k^TQ_k)^-1V_k^Tr_k+1. 
Since V_k^TQ_k is diagonal due to symmetry, and V_k-1^Tr_k+1 = 0, it follows that Γ_k+ζ_k+1 = (0,…,0,v_k^Tr_k+1/v_k^Tq_k)^T = Γ_k+1^[m_k], where Γ_k+1^[m_k] is the subvector of the first m_k elements of Γ_k+1. Then the formula (<ref>) follows from <Ref> and <ref>. Then, following the derivation in <ref>, the projection method (<ref>) to estimate the eigenvalues of h'(x^*) is reduced to solving the eigenvalues of T_k. For the fixed-point problem (<ref>), suppose that g'(x) is symmetric and <ref> holds. For the Process I in <Ref>, replace the restarted AM by the restarted ST-AM, and assume that there are positive constants η_0, τ_0 such that h(x_j)_2 ≤η_0h(x_0)_2 (0≤ j≤ k+1), |1-Γ_j^(m_j)|≥τ_0 (1≤ j≤ k); T_k is the tridiagonal matrix as defined in <Ref>, and λ∈σ(T_k). For the Process II in <Ref>, the tridiagonal matrix T̂_k is defined correspondingly. For x_0 sufficiently close to x^*, we have min_λ̂∈σ(T̂_k)|λ̂-λ| = κ̂𝒪(x_k-m_k-x^*_2). The proof is given in <Ref>. Since ĥ(x) is linear and h'(x^*) is symmetric, using the eigenvalues of T̂_k to approximate the eigenvalues of h'(x^*) is equivalent to a Lanczos method <cit.>. For the Type-I method, the Lanczos method is A-norm based; for the Type-II method, the Lanczos method is A^2-norm based. Thus the eigenvalue λ̂∈σ(T̂_k) is known as the generalized Ritz value <cit.>. <Ref> indicates that σ(T_k) is close to σ(T̂_k) when x_k-m_k-x^*_2 is small. At the k-th iteration, where m_k≥ 2, let μ̃ be the eigenvalue of T_k-1 of the smallest absolute value, and let L̃ be the eigenvalue of T_k-1 of the largest absolute value. We use |μ̃| and |L̃| as the estimates of μ and L. Then we set β_k = 2/|μ̃|+|L̃| as an estimate of the optimal value 2/(μ+L). § NUMERICAL EXPERIMENTS In this section, we validate our theoretical findings by solving three nonlinear problems: (I) the modified Bratu problem; (II) the Chandrasekhar H-equation; (III) the regularized logistic regression. Additional experimental results can be found in <Ref>. Let AM-I and AM-II denote the restarted Type-I and restarted Type-II AM. AM-I(m) and AM-II(m) denote the Type-I AM and Type-II AM with m_k = min{m,k}. ST-AM-I and ST-AM-II are abbreviations of the restarted Type-I and restarted Type-II ST-AM. Since this work focuses on the theoretical properties of AM, we used the iteration number as the evaluation metric of convergence in the experiments. §.§ Modified Bratu problem To verify <Ref> and <Ref>, we considered solving the modified Bratu problem introduced in <cit.>: u_xx+u_yy+α u_x+λ e^u = 0, where u is a function of (x,y)∈𝒟=[0,1]^2, and α, λ∈ℝ are constants. The boundary condition is u(x,y)≡ 0 for (x,y)∈∂𝒟. The equation was discretized using centered differences on a 200× 200 grid. The resulting problem is a system of nonlinear equations: F(U) = 0, where U∈ℝ^200× 200 and F:ℝ^200× 200→ℝ^200× 200. Following <cit.>, we set λ = 1 and initialized U with 0. The Picard iteration is U_k+1 = U_k+β r_k, where r_k = F(U_k) is the residual. For the restarted AM and ST-AM, we set τ = 10^-32, m=1000, and η = ∞ since a large m_k is beneficial for solving this problem. §.§.§ Nonsymmetric Jacobian We set α = 20 so that the Jacobian F'(U) is not symmetric. For the Picard iteration, we tuned β_k and set it as 6× 10^-6. We applied the adaptive mixing strategy (<ref>) for AM-I and AM-II with β_0 = 1. As shown in <Ref>, both AM-I and AM-II converge much faster than the Picard iteration. 
In fact, to achieve F_2 ≤ 10^-6, AM-I uses 500 iterations, and AM-II uses 497 iterations, where no restart occurs in either method. Hence, the results verify <Ref> and suggest that AM methods significantly accelerate the Picard iteration when m_k is large in solving this problem. Also, observe that AM-I and AM-II diverge in the initial stage due to the inappropriate choice β_0=1. Nonetheless, from <Ref>, we see the β_k is quickly adjusted to the optimal value β = 6× 10^-6 based on the eigenvalue estimates. Thus we only need to compute the eigenvalue estimates within a few steps and keep β_k unchanged in the later iterations. §.§.§ Symmetric Jacobian We set α = 0 so that the Jacobian F'(U) is symmetric. We compared the restarted ST-AM methods with the Picard iteration, the limited-memory AM, and the full-memory AM. By a grid search in {1× 10^-6, 2× 10^-6,…,1× 10^-5}, we chose β = 6× 10^-6 for the Picard iteration. Then, we set β_k=6× 10^-6 for AM-I(2), AM-II(2), AM-I(∞), and AM-II(∞). We applied the adaptive mixing strategy (<ref>) for ST-AM-I and ST-AM-II with β_0 = 1. The results in <Ref> show the convergence of each method and the choices of β_k in ST-AM-I/ST-AM-II. Observe that AM-I(2) and AM-II(2) perform similarly to the Picard iteration, which is reasonable since m = 2 is too small. On the other hand, ST-AM-I and ST-AM-II exhibit significantly faster convergence rates as predicted by <Ref> (no restart occurs in either method, i.e., m_k = k). We also find that the β_k of ST-AM-I/ST-AM-II quickly converges to 6.19× 10^-6. So the curve of ST-AM-I/ST-AM-II roughly coincides with that of the full-memory AM-I/AM-II in the early stage. However, due to the loss of orthogonality, ST-AM-I and ST-AM-II require more iterations to achieve F_2≤ 10^-6 than the full-memory methods. §.§ Chandrasekhar H-equation To check the effect of the restarting conditions (<ref>)-(<ref>), we applied the restarted AM to solve the Chandrasekhar H-equation considered in <cit.>: ℱ(H)(μ) = H(μ) - ( 1-ω/2∫_0^1 μ H(ν)dν/μ+ν)^-1 = 0, where ω∈ [0,1] is a constant and the unknown is a continuously differentiable function H defined in [0,1]. Following <cit.>, we discretized the equation with the composite midpoint rule. The resulting equation is h^i = G(h)^i := ( 1-ω/2N∑_j=1^N μ_ih^j/μ_i+μ_j)^-1. Here h^i is the i-th component of h∈ℝ^N, G(h)^i is the i-th component of G(h) ∈ℝ^N, and μ_i = (i-1/2)/N for 1≤ i ≤ N. Define F(h)=h-G(h) =0, and r_k = G(h_k)-h_k is the residual at h_k. We set N=500 and considered ω=0.5, 0.99, 1. The initial point was h_0 = (1,1,…,1)^T. Since the fixed-point operator G is nonexpansive and the Picard iteration h_k+1 = G(h_k) = h_k+r_k converges in this case, we set β_k = 1 for both restarted AM methods. (The h_k here has no relation with that in <Ref>.) We studied the convergence of restarted AM with different settings of m, τ, and η. <Ref> tabulates the results. This problem is hard to solve when ω approaches 1. For the easy case ω = 0.5, the restarting conditions have neglectable effects on the convergence. However, for ω = 0.99 and especially for ω=1, the restarting conditions are critical, which help avoid the divergence of the iterations. It is preferable to use a small m in this problem. By comparing the case m=100, τ=10^-15 with the case m=100, τ=10^-32, we find that using (<ref>) to control the condition number of V_k^TQ_k is necessary. Also, setting η=1 in (<ref>) is helpful for AM-I. 
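For concreteness, the discretized H-equation and the Picard baseline described above can be summarized in a short Python sketch. This is only an illustration and not the authors' code; the function names and the use of NumPy are our own choices, and the restarted AM variants tabulated above are obtained by replacing the plain update h = h + r with the AM update.

import numpy as np

def h_equation_G(h, omega):
    # Fixed-point map of the discretized H-equation (composite midpoint rule):
    # G(h)^i = (1 - omega/(2N) * sum_j mu_i * h^j / (mu_i + mu_j))^(-1),
    # with mu_i = (i - 1/2)/N, as written in the text above.
    N = h.shape[0]
    mu = (np.arange(1, N + 1) - 0.5) / N
    K = mu[:, None] / (mu[:, None] + mu[None, :])  # K[i, j] = mu_i / (mu_i + mu_j)
    return 1.0 / (1.0 - (omega / (2.0 * N)) * (K @ h))

def picard_h_equation(omega=0.5, N=500, tol=1e-6, max_iter=10000):
    # Picard iteration h_{k+1} = G(h_k) = h_k + r_k (i.e. beta_k = 1), started
    # from h_0 = (1, ..., 1)^T, with residual r_k = G(h_k) - h_k.
    h = np.ones(N)
    for k in range(max_iter):
        r = h_equation_G(h, omega) - h
        if np.linalg.norm(r) <= tol:
            break
        h = h + r
    return h, k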
§.§ Regularized logistic regression To validate the effectiveness of ST-AM-I and ST-AM-II for solving unconstrained optimization problems, we considered solving the regularized logistic regression: min_x∈ℝ^df(x):= 1/T∑_i=1^T log (1+exp(-y_ix^Tξ_i))+w/2x_2^2, where ξ_i∈ℝ^d is the i-th input data sample and y_i=± 1 is the corresponding label. We used the “madelon” dataset from LIBSVM <cit.>, which contains 2000 data samples (T=2000) and 500 features (d=500). We considered w=0.01. The compared methods were gradient descent (GD), Nesterov's method <cit.>, and the limited-memory AM methods with m=2 and m=20. For GD, we tuned the step size and set it as 1.6. For AM methods, we also set β_k = 1.6. Let x^* denote the minimizer. We used the smallest and the largest Ritz values of ∇^2 f(x^*) to approximate μ and L, which are required for Nesterov's method. For the restarted ST-AM methods, we set m=40, τ=1× 10^-15, η = ∞ and applied the adaptive mixing strategy (<ref>) with β_0 = 1 to choose β_k. <Ref> shows the convergence of each method and the choices of β_k in ST-AM-I and ST-AM-II. Like AM-I(2) and AM-II(2), the ST-AM methods only use two vector pairs for the AM update. However, they have improved convergence, and the convergence rates are close to those of AM-I(20) and AM-II(20). Also, the mixing parameters for restarted ST-AM methods do not need to be tuned manually. With the convergence of x_k to the minimizer x^*, the β_k is adjusted to 2/(μ+L). In <Ref>, we show the effects of the restarting conditions on ST-AM-I and ST-AM-II. We only consider m and τ in (<ref>) and (<ref>) since both methods converge in solving this problem. The results suggest that well-chosen m and τ can lead to improved convergence. § CONCLUSIONS In this paper, we study the restarted AM methods formulated with modified historical sequences and certain restarting conditions. Using a multi-step analysis, we extend the relationship between AM and Krylov subspace methods to nonlinear fixed-point problems. We prove that under reasonable assumptions, the long-term convergence behaviour of the restarted Type-I/Type-II AM is dominated by a minimization problem that also appears in the theoretical analysis of Krylov subspace methods. The convergence analysis provides a new assessment of the efficacy of AM in practice and justifies the potential local improvement of restarted Type-II AM over the fixed-point iteration. As a by-product of the restarted AM, the eigenvalues of the Jacobian can be efficiently estimated, based on which we can choose the mixing parameter adaptively. When the Jacobian is symmetric, we derive the short-term recurrence variants of restarted AM methods and the simplified eigenvalue estimation procedure. The short-term recurrence AM methods are memory-efficient and can significantly accelerate the fixed-point iterations. The experiments validate our theoretical results and the restarting conditions. § PROOFS OF SECTION <REF> §.§ Proof of Proposition <ref> We prove the results by induction. If m_k=1, then p_k = Δ x_k-1, q_k = Δ r_k-1. The Property <ref> and Property <ref> hold, and v_k^Tq_k=z_k^TΔ r_k-1≠ 0. Then by (<ref>), r̅_k ⊥ v_k. Hence, (<ref>) produces the same iterate as (<ref>). For m_k>1, suppose that from the (k-m_k+1)-th iteration to the (k-1)-th iteration, Properties <ref>-<ref> hold, and (<ref>) produces the same iterates as (<ref>). Then, at the k-th iteration, we first prove that q_k^j ⊥ span{v_k-m_k+1,…,v_k-m_k+j}, j=1,…,m_k-1, by induction. 
For j=1, since v_k-m_k+1^Tq_k-m_k+1≠ 0, it follows that q_k^1 ⊥ v_k-m_k+1 due to (<ref>). Consider 1<j≤ m_k-1. Due to the inductive hypothesis, 0≠(Z_k-1^TR_k-1)= (S_k-1^TV_k-1^TQ_k-1S_k-1). It follows that (V_k-1^TQ_k-1) ≠ 0. So the diagonal element v_k-m_k+j^Tq_k-m_k+j≠ 0, which together with (<ref>) implies q_k^j ⊥ v_k-m_k+j. Also, both q_k^j-1 and q_k-m_k+j are orthogonal to span{v_k-m_k+1,…,v_k-m_k+j-1} by the inductive hypotheses. Thus, q_k^j ⊥ span{v_k-m_k+1,…,v_k-m_k+j-1}. We complete the induction. Consequently, we have q_k=q_k^m_k-1⊥ span{v_k-m_k+1,…,v_k-1} = range(V_k-1). With the inductive hypothesis that V_k-1^TQ_k-1 is lower triangular, it follows that V_k^TQ_k is lower triangular, namely the Property <ref>. We prove that q_k ≠ 0. Note that q_k = Δ r_k-1-Q_k-1ζ_k. If q_k = 0, then Δ r_k-1∈ range(Q_k-1) = range(R_k-1), which is impossible since R_k has full column rank due to (Z_k^TR_k) ≠ 0. Hence q_k ≠ 0 and R_k=Q_kS_k, where S_k is unit upper triangular. Since p_k = Δ x_k-1-P_k-1ζ_k, we also have X_k = P_kS_k. So the Property <ref> holds. Next, we prove that r_k^j ⊥ span{v_k-m_k+1,…,v_k-m_k+j}, j=1,…,m_k, by induction. As Properties <ref>-<ref> hold at the k-th iteration, we have 0 ≠(Z_k^TR_k) = (S_k^TV_k^TQ_kS_k), which implies that (V_k^TQ_k) ≠ 0. Hence v_k-m_k+j^Tq_k-m_k+j≠ 0 for j=1,…,m_k. Then we have r_k^1 ⊥ v_k-m_k+1 due to (<ref>). Consider 1< j≤ m_k. Due to v_k-m_k+j^Tq_k-m_k+j≠ 0 and (<ref>), r_k^j ⊥ v_k-m_k+j. Also, by the inductive hypotheses, both r_k^j-1 and q_k-m_k+j are orthogonal to span{v_k-m_k+1,…,v_k-m_k+j-1}. It follows that r_k^j⊥ span{v_k-m_k+1,…,v_k-m_k+j-1}. Thus, we complete the induction. It yields that r̅_k = r_k^m_k⊥ range(V_k), namely the Property <ref>. Finally, the complete update of (<ref>) is x_k+1=x_k+G_kr_k, where G_k = β_kI - (P_k+β_kQ_k)(V_k^TQ_k)^-1V_k^T. Here, we use Property <ref> which implies Γ_k = (V_k^TQ_k)^-1V_k^Tr_k. Then with P_k = X_kS_k^-1 and Q_k = R_kS_k^-1, the equivalent form of (<ref>) is G_k = β_kI-(X_k+β_kR_k)(Z_k^TR_k)^-1Z_k^T, which is the original AM update (<ref>). So (<ref>) produces the same iterate as (<ref>). As a result, we complete the induction. § PROOFS OF SECTION <REF> §.§ Proof of Proposition <ref> 1. The Properties <ref>-<ref> are known results <cit.>. We give the proof here for completeness. The definition of g suggests that the residual r_k = g(x_k)-x_k=b-Ax_k for k≥ 0 and R_k = -AX_k for k≥ 1. Recall that each Γ_j is determined by solving r̅_j = r_j - R_jΓ_j ⊥ range(Z_j), where 1≤ j≤ k and k≥ 1. The condition (Z_j^TR_j)≠ 0 ensures that Γ_j is uniquely determined. Thus the AM updates are well defined. Since A is nonsingular and R_k = -AX_k, it follows that rank(X_k) = rank(R_k). Then, due to (Z_k^TR_k)≠ 0, we have rank(Z_k)= rank(R_k) = k. So rank(X_k)=k. We first prove range(X_k)=𝒦_k(A,r_0) by induction. First, Δ x_0 = β_0r_0 since x_1 = x_0+β_0r_0. If k=1, then the proof is complete. Suppose that k>1 and range(X_k-1)=𝒦_k-1(A,r_0). Define e^k∈ℝ^k to be the vector with all elements being ones. From the AM update (<ref>), we have Δ x_k-1 = β_k-1r_k-1-(X_k-1+β_k-1R_k-1)Γ_k-1 = β_k-1(b-Ax_k-1)-(X_k-1-β_k-1AX_k-1)Γ_k-1 =β_k-1b-β_k-1A(x_0+Δ x_0+⋯+Δ x_k-2) -(X_k-1-β_k-1AX_k-1)Γ_k-1 =β_k-1r_0-β_k-1AX_k-1e^k-1-(X_k-1-β_k-1AX_k-1)Γ_k-1. Since range(X_k-1) = 𝒦_k-1(A,r_0), we have range(AX_k-1)⊆𝒦_k(A,r_0). Also, noting that r_0∈𝒦_k-1(A,r_0), we have Δ x_k-1∈𝒦_k(A,r_0). Thus, range(X_k) ⊆𝒦_k(A,r_0). Since rank(X_k) = k, it follows that range(X_k) = 𝒦_k(A,r_0), thus completing the induction. 
Since r_k = b-Ax_k=b-A(x_0+X_ke^k)=r_0-AX_ke^k, it follows that r_k-R_kΓ = r_k + AX_kΓ = r_0-AX_ke^k + AX_kΓ = r_0 -AX_kΓ̃, where Γ̃=e^k-Γ, for ∀ Γ∈ℝ^k. So Γ_k solves (<ref>) for j=k if and only if Γ̃_k=e^k-Γ_k solves r_0 - AX_kΓ̃_k ⊥ range(Z_k). Since range(X_k) = 𝒦_k(A,r_0), the condition (<ref>) is equivalent to r_0 - Az ⊥ range(Z_k)      z∈𝒦_k(A,r_0). Here range(Z_k) = 𝒦_k(A,r_0) for the Type-I method, and range(Z_k) = A𝒦_k(A,r_0) for the Type-II method. Since the initializations are identical, the conditions (<ref>) for Type-I and Type-II methods are the Petrov-Galerkin conditions for the Arnoldi's method and GMRES, respectively. Due to the nonsingularity of Z_k^TR_k, the solution of (<ref>) is also unique. Therefore, we have x̅_k =x_k-X_kΓ_k=x_k-X_k(e^k-Γ̃_k) =x_0+X_kΓ̃_k = x_k^ A, for the Type-I method, and x̅_k =x_k^ G for the Type-II method. 2. Consider the case that A is positive definite, and the algorithm has not found the exact solution, i.e. r_j ≠ 0 for j = 0,…,k. We prove the result by induction. If k=1, then Δ x_0 = β_0 r_0, and Δ r_0 = -β_0 Ar_0. Hence Z_1^TR_1 = Δ x_0^TΔ r_0 = -β_0^2 r_0^TAr_0 for the Type-I method; Z_1^TR_1 = Δ r_0^TΔ r_0 = β_0^2 r_0^TA^TAr_0 for the Type-II method. Since r_0 ≠ 0 and A is positive definite, it follows that (Z_1^TR_1) ≠ 0. For k>1, suppose that (Z_k-1^TR_k-1)≠ 0. It indicates rank(R_k-1) = k-1, thus rank(X_k-1) = k-1. We prove (Z_k^TR_k) ≠ 0 by contradiction. If (Z_k^TR_k) = 0, then there exists a nonzero y∈ℝ^k such that Z_k^TR_ky = 0. Then y^TZ_k^TR_ky = 0. Note that Z_k^TR_k = X_k^TR_k = -X_k^TAX_k for the Type-I method, and Z_k^TR_k = R_k^TR_k = X_k^TA^TAX_k for the Type-II method. Since A is positive definite, we have X_ky = 0, which implies that X_k is rank deficient. As X_k-1 has full column rank, it yields Δ x_k-1 = -X_k-1Γ_k-1+β_k-1r̅_k-1∈ range(X_k-1). Hence r̅_k-1∈ range(X_k-1). So r̅_k-1=X_k-1ξ for some ξ∈ℝ^k-1. Since (Z_k-1^TR_k-1) ≠ 0, the condition r̅_k-1=r_k-1-R_k-1Γ_k-1⊥ Z_k-1 has a unique solution. Thus 0 = r̅_k-1^TZ_k-1ξ = ξ^TX_k-1^TZ_k-1ξ. For the Type-I method, X_k-1^TZ_k-1 = X_k-1^TX_k-1; for the Type-II method, X_k-1^TZ_k-1 = X_k-1^TR_k-1 = -X_k-1^TAX_k-1. Since X_k-1 has full column rank and A is positive definite, it follows from (<ref>) that ξ = 0 for both cases, which yields r̅_k-1 = 0. However, it is impossible because when r̅_k-1 = 0, we have x_k = x̅_k-1 and r_k = r̅_k-1 = 0, which contradicts the assumption that r_k ≠ 0. Therefore, (Z_k^TR_k) ≠ 0. We complete the induction. 3. Since (Z_j^TR_j) ≠ 0, j=1,…,k, it follows from <Ref> that the constructions of the modified historical sequences P_k and Q_k are well defined. The Property <ref> in <Ref> further yields the relation (<ref>). §.§ Proof of Lemma <ref> The proof follows the technique in <cit.>. Besides (<ref>) and (<ref>), we shall also prove the following relations. x_k∈ℬ_ρ̂(x^*), |ζ_k^(j)| = 𝒪(1), |ζ̂_k^(j)-ζ_k^(j)|=κ̂𝒪(x_k-m_k-x^*_2), p_k_2 = 𝒪(x_k-m_k-x^*_2), q_k_2 = 𝒪(x_k-m_k-x^*_2), p_k-p̂_k_2=κ̂𝒪(x_k-m_k-x^*_2^2), q_k-q̂_k_2=κ̂𝒪(x_k-m_k-x^*_2^2), |Γ_k^(j)| = 𝒪(1), |Γ̂_k^(j)-Γ_k^(j)|=κ̂𝒪(x_k-m_k-x^*_2), x̅_k-x̅̂̅_k_2 = κ̂𝒪(x_k-m_k-x^*_2^2), r̅_k-r̅̂̅_k_2 = κ̂𝒪(x_k-m_k-x^*_2^2), where j=0,…,m_k. Here, for convenience, we define ζ̂_k^(0)= ζ_k^(0)=ζ̂_k^(m_k)= ζ_k^(m_k)=0, Γ̂_k^(0)=Γ_k^(0) = 0; when m_k = 0, define p̂_k=q̂_k= p_k = q_k = 0, x̅_k=x_k, r̅_k=r_k, and x̅̂̅_k=x̂_k, r̅̂̅_k=r̂_k; when m_k>0, x̅̂̅_k = x̂_k-P̂_kΓ̂_k, r̅̂̅_k = r̂_k-Q̂_kΓ̂_k. 
Then, the two processes to generate {x_k} and {x̂_k} are x_k+1 = x̅_k+β_kr̅_k,     x̂_k+1 = x̅̂̅_k+β_kr̅̂̅_k. We first prove (<ref>). Due to (<ref>), we have the following relation: μx_k-x^*_2 ≤r_k_2=h(x_k)-h(x^*)_2 ≤ Lx_k-x^*_2. Choose x_0-x^*_2 ≤μρ̂/η_0L. With the condition r_k_2 ≤η_0r_0_2, we obtain x_k-x^*_2 ≤1/μr_k_2 ≤η_0/μr_0_2 ≤η_0 L/μx_0-x^*_2≤η_0 L/μ·μρ̂/η_0L = ρ̂, namely (<ref>). The (<ref>) also implies we can choose sufficiently small x_0-x^*_2 to ensure x_k-m_k-x^*_2 ≤η_0 L/μx_0-x^*_2 is sufficiently small. Then, we prove (<ref>), (<ref>), and (<ref>)-(<ref>) by induction. For k=0, the relations (<ref>)-(<ref>) clearly hold. Besides, due to (<ref>), we have r_0-r̂_0_2 ≤1/2κ̂x_0-x^*_2^2, namely (<ref>). Since x_0 = x̂_0, the (<ref>) also holds. Then (<ref>) follows from x_1-x̂_1_2 = x_0+β_0 r_0 - (x̂_0+β_0 r̂_0)_2 = β_0 r_0-r̂_0_2 ≤β_0κ̂/2x_0-x^*_2^2. Suppose that k≥ 1, and as an inductive hypothesis, the relations (<ref>), (<ref>), and (<ref>)-(<ref>) hold for i=0,…,k-1. Consider the k-th iteration. If m_k=0, i.e., a restarting condition is met at the beginning of the k-th iteration, then x̂_k = x_k. The same as the case that k=0, (<ref>), (<ref>), and (<ref>)-(<ref>) hold. Consider the nontrivial case that m_k>0. Due to (<ref>), we have x_j-x^*_2≤1/μr_j_2 ≤η/μr_k-m_k_2 ≤η L/μx_k-m_k-x^*_2,    j=k-m_k+1,…,k. Therefore, x_j-x^*_2 = 𝒪(x_k-m_k-x^*_2),    j=k-m_k,…,k. Since x_k∈ℬ_ρ̂(x^*), it follows that r_k - r̂_k_2 = h(x_k)-ĥ(x̂_k)_2 ≤h(x_k)-ĥ(x_k)_2 + ĥ(x_k)-ĥ(x̂_k)_2 = h(x_k)-h'(x^*)(x_k-x^*)_2+ h'(x^*)(x_k-x̂_k)_2 ≤1/2κ̂x_k-x^*_2^2+Lx_k-x̂_k_2 = κ̂𝒪(x_k-m_k-x^*_2^2), where the second inequality is due to (<ref>) and (<ref>), and the last equality is due to (<ref>) and the inductive hypothesis (<ref>). Thus, the relation (<ref>) holds. Since the condition (<ref>) holds, we have |v_k^Tq_k| ≥τ |v_k-m_k+1^Tq_k-m_k+1|. We discuss the Type-I method and the Type-II method separately. Using the fact that the (k-m_k)-th iteration is x_k-m_k+1 = x_k-m_k+β_k-m_kr_k-m_k, we have that |v_k^Tq_k| ≥τ |p_k-m_k+1^Tq_k-m_k+1| = τ |Δ x_k-m_k^TΔ r_k-m_k| = τ|Δ x_k-m_k ^T∫_0^1 h'(x_k-m_k+tΔ x_k-m_k)Δ x_k-m_kdt| ≥τμΔ x_k-m_k_2^2 = τμβ_k-m_k^2r_k-m_k_2^2 ≥τμ^3 β_k-m_k^2 x_k-m_k-x^*_2^2, for the Type-I method, where the second inequality is due to (<ref>) and the third inequality is due to (<ref>). For the Type-II method, |v_k^Tq_k| ≥τ |q_k-m_k+1^T q_k-m_k+1| = τΔ r_k-m_k_2^2 ≥τμ^2 Δ x_k-m_k_2^2 = τμ^2β_k-m_k^2r_k-m_k_2^2 ≥τμ^4β_k-m_k^2x_k-m_k-x^*_2^2. Then, define κ = τμ^3β^2 for the Type-I method, and κ = τμ^4β^2 for the Type-II method. Since no restart has occurred in the last m_k iterations, we have |v_i^Tq_i|_2 ≥κx_k-m_k-x^*_2^2,    i=k-m_k+1,…,k. Now, we prove (<ref>). We shall prove an auxiliary relation: q_k^j_2 = 𝒪(x_k-m_k-x^*_2), q_k^j-q̂_k^j_2 = κ̂𝒪(x_k-m_k-x^*_2^2), for j=0,…,m_k-1. We conduct the proof by induction. For j=0, (<ref>) holds due to ζ_k^(0) = ζ̂_k^(0) = 0. Since q_k^0=Δ r_k-1, q̂_k^0=Δr̂_k-1, it follows that q_k^0_2 ≤r_k_2+r_k-1_2 ≤ 2ηr_k-m_k_2 =𝒪(x_k-m_k-x^*_2), which is due to (<ref>) and (<ref>). Also, from (<ref>) and (<ref>), we have q_k^0-q̂_k^0_2 ≤r_k-r̂_k_2+r_k-1-r̂_k-1_2 =κ̂𝒪(x_k-m_k-x^*_2^2). Hence, the (<ref>) and (<ref>) hold when j=0. Suppose that j≥ 1, and (<ref>) and (<ref>) hold for ℓ =0,…,j-1. Consider the j-th step in (<ref>). Due to (<ref>) and the inductive hypotheses (<ref>) and (<ref>), we obtain |ζ_k^(j)| ≤v_k-m_k+j_2q_k^j-1_2/κx_k-m_k-x^*_2^2 = 𝒪(x_k-m_k-x^*_2^2)/κx_k-m_k-x^*_2^2 = 𝒪(1). 
Next, if v_k-m_k+j^Tq_k^j-1≠ 0, then |ζ_k^(j)-ζ̂_k^(j)| = |ζ_k^(j)|·| 1- ζ̂_k^(j)/ζ_k^(j)| = |ζ_k^(j)|·| 1- v̂_k-m_k+j^Tq̂_k^j-1/v_k-m_k+j^Tq_k^j-1·v_k-m_k+j^Tq_k-m_k+j/v̂_k-m_k+j^Tq̂_k-m_k+j| = |ζ_k^(j)|· |a(1-b)+b| ≤ |ζ_k^(j)|· (|a|+|b|+|ab|), where a:=1-v̂_k-m_k+j^Tq̂_k^j-1/v_k-m_k+j^Tq_k^j-1 and b:=1-v_k-m_k+j^Tq_k-m_k+j/v̂_k-m_k+j^Tq̂_k-m_k+j. We have |ζ_k^(j)|· |a| = | v_k-m_k+j^Tq_k^j-1-v̂_k-m_k+j^Tq̂_k^j-1/v_k-m_k+j^Tq_k-m_k+j|. From (<ref>), (<ref>), and (<ref>), we obtain |v_k-m_k+j^T(q_k^j-1-q̂_k^j-1)| ≤v_k-m_k+j_2 q_k^j-1-q̂_k^j-1_2 = κ̂𝒪(x_k-m_k-x^*_2^3), and |(v_k-m_k+j-v̂_k-m_k+j)^Tq̂_k^j-1| ≤ |(v_k-m_k+j-v̂_k-m_k+j)^Tq_k^j-1| + |(v_k-m_k+j-v̂_k-m_k+j)^T(q_k^j-1-q̂_k^j-1)| ≤κ̂𝒪(x_k-m_k-x^*_2^3) + κ̂^2𝒪(x_k-m_k-x^*_2^4) = κ̂𝒪(x_k-m_k-x^*_2^3). Then, it follows that | v_k-m_k+j^Tq_k^j-1-v̂_k-m_k+j^Tq̂_k^j-1 | ≤ |v_k-m_k+j^T(q_k^j-1-q̂_k^j-1)| +|(v_k-m_k+j-v̂_k-m_k+j)^Tq̂_k^j-1| = κ̂𝒪(x_k-m_k-x^*_2^3). Combining (<ref>), (<ref>), and (<ref>) yields |ζ_k^(j)|· |a| ≤κ̂𝒪(x_k-m_k-x^*_2^3)/κx_k-m_k-x^*_2^2 = κ̂𝒪(x_k-m_k-x^*_2). Similar to (<ref>), the following bound holds: |v_k-m_k+j^Tq_k-m_k+j-v̂_k-m_k+j^Tq̂_k-m_k+j| = κ̂𝒪(x_k-m_k-x^*_2^3). Besides, |v̂_k-m_k+j^Tq̂_k-m_k+j| ≥ |v_k-m_k+j^Tq_k-m_k+j|- |v_k-m_k+j^Tq_k-m_k+j-v̂_k-m_k+j^Tq̂_k-m_k+j| ≥κx_k-m_k-x^*_2^2-κ̂c_1x_k-m_k-x^*_2^3 ≥1/2κx_k-m_k-x^*_2^2, where the existence of c_1 is guaranteed by (<ref>), and the last inequality holds if x_k-m_k-x^*_2≤κ/2κ̂c_1, which can be obtained by choosing x_0-x^*_2≤μκ/2κ̂η_0Lc_1 since x_k-m_k-x^*_2 ≤η_0L/μx_0-x^*_2 by (<ref>). From (<ref>) and (<ref>), it follows that |b| = | v̂_k-m_k+j^Tq̂_k-m_k+j-v_k-m_k+j^Tq_k-m_k+j/v̂_k-m_k+j^Tq̂_k-m_k+j| = κ̂𝒪(x_k-m_k-x^*_2). As a result, by (<ref>), (<ref>), (<ref>), and (<ref>), we obtain |ζ_k^(j)-ζ̂_k^(j)| = κ̂𝒪(x_k-m_k-x^*_2). Now consider the case that v_k-m_k+j^Tq_k^j-1 = 0. It is clear that ζ_k^(j) = 0. Then |ζ_k^(j)-ζ̂_k^(j)|= |v̂_k-m_k+j^Tq̂_k^j-1/v̂_k-m_k+j^Tq̂_k-m_k+j| ≤κ̂𝒪(x_k-m_k-x^*_2^3)/1/2κx_k-m_k-x^*_2^2 = κ̂𝒪(x_k-m_k-x^*_2). Therefore, (<ref>) holds for ℓ = j. Next, we obtain q_k^j_2 ≤q_k^j-1_2+q_k-m_k+j_2 |ζ_k^(j)| = 𝒪(x_k-m_k-x^*_2), which is due to (<ref>), (<ref>), (<ref>), and j<m_k≤ m. Also, from (<ref>), (<ref>), (<ref>), and j≤ m_k-1, it follows that q_k-m_k+jζ_k^(j)-q̂_k-m_k+jζ̂_k^(j)_2 ≤(q_k-m_k+j-q̂_k-m_k+j)ζ_k^(j)_2 + (q̂_k-m_k+j-q_k-m_k+j)(ζ_k^(j)-ζ̂_k^(j))_2 + q_k-m_k+j(ζ_k^(j)-ζ̂_k^(j))_2 = κ̂𝒪(x_k-m_k-x^*_2^2), which together with (<ref>) further yields that q_k^j-q̂_k^j_2 ≤q_k^j-1-q̂_k^j-1_2 +q_k-m_k+jζ_k^(j)-q̂_k-m_k+jζ̂_k^(j)_2 = κ̂𝒪(x_k-m_k-x^*_2^2). Then, (<ref>) holds for ℓ = j, thus completing the induction. Since (<ref>) holds for j=m_k-1, and q_k=q_k^m_k-1, we know q_k_2=𝒪(x_k-m_k-x^*_2) and q_k-q̂_k_2 = κ̂𝒪(x_k-m_k-x^*_2^2). If m_k = 1, then p_k = Δ x_k-1. So p_k_2 ≤x_k-x^*_2 + x_k-1-x^*_2 = 𝒪(x_k-m_k-x^*_2) and p_k-p̂_k_2 ≤x_k-x̂_k_2 + x_k-1-x̂_k-1_2 = κ̂𝒪(x_k-m_k-x^*_2^2). Consider m_k≥ 2. Since p_k_2 = Δ x_k-1-P_k-1ζ_k_2 ≤x_k-x^*_2+x_k-1-x^*_2+ ∑_j=1^m_k-1p_k-m_k+jζ_k^(j)_2, it follows that p_k_2 = 𝒪(x_k-m_k-x^*_2). Also, similar to (<ref>), we have p_k-m_k+jζ_k^(j)-p̂_k-m_k+jζ̂_k^(j)_2 = κ̂𝒪(x_k-m_k-x^*_2^2), which further yields P_k-1ζ_k - P̂_k-1ζ̂_k_2 ≤∑_j=1^m_k-1p_k-m_k+jζ_k^(j)-p̂_k-m_k+jζ̂_k^(j)_2 = κ̂𝒪(x_k-m_k-x^*_2^2). Then, with p_k-p̂_k_2 ≤x_k-x̂_k_2+x_k-1-x̂_k-1_2+ P_k-1ζ_k - P̂_k-1ζ̂_k_2, we obtain p_k-p̂_k_2 = κ̂𝒪(x_k-m_k-x^*_2^2). Hence, (<ref>) and (<ref>) hold. Now, we prove (<ref>), following a similar way of proving (<ref>). 
The concerned auxiliary relation is r_k^j_2 = 𝒪(x_k-m_k-x^*_2), r_k^j-r̂_k^j_2 = κ̂𝒪(x_k-m_k-x^*_2^2), for j=0,…,m_k. We still conduct the proof by induction. For j=0, (<ref>) holds due to Γ_k^(0) = Γ̂_k^(0) = 0. Since r_k^0=r_k, r̂_k^0=r̂_k, we have r_k^0_2≤ηr_k-m_k_2 ≤η Lx_k-m_k-x^*_2, and r_k^0-r̂_k^0_2 = κ̂𝒪(x_k-m_k-x^*_2^2). Suppose that j≥ 1, and (<ref>) and (<ref>) hold for ℓ = 0,…,j-1. Consider the j-th step in (<ref>). With (<ref>), we have |Γ_k^(j)|≤v_k-m_k+j_2r_k^j-1_2/κx_k-m_k-x^*_2^2 = 𝒪(x_k-m_k-x^*_2^2)/κx_k-m_k-x^*_2^2 = 𝒪(1). Next, if v_k-m_k+j^Tr_k^j-1≠ 0, then |Γ_k^(j)-Γ̂_k^(j)| = |Γ_k^(j)|· |a_1(1-b_1)+b_1| ≤ |Γ_k^(j)|· (|a_1|+|b_1|+|a_1b_1|), where a_1:=1-v̂_k-m_k+j^Tr̂_k^j-1/v_k-m_k+j^Tr_k^j-1 and b_1:=1-v_k-m_k+j^Tq_k-m_k+j/v̂_k-m_k+j^Tq̂_k-m_k+j. With (<ref>), (<ref>), and (<ref>), it follows that |v_k-m_k+j^Tr_k^j-1-v̂_k-m_k+j^Tr̂_k^j-1| ≤ |v_k-m_k+j^T(r_k^j-1-r̂_k^j-1)|    + |(v_k-m_k+j-v̂_k-m_k+j)^Tr_k^j-1| + |(v_k-m_k+j-v̂_k-m_k+j)^T(r̂_k^j-1-r_k^j-1)| = κ̂𝒪(x_k-m_k-x^*_2^3). Then with (<ref>) and (<ref>), we obtain |Γ_k^(j)|· |a_1| = |v_k-m_k+j^Tr_k^j-1-v̂_k-m_k+j^Tr̂_k^j-1/v_k-m_k+j^Tq_k-m_k+j| ≤κ̂𝒪(x_k-m_k-x^*_2). For the bound of |b_1|, note that we have obtained (<ref>) and also have already proved (<ref>) and (<ref>) for the k-th iteration. Thus, |b_1| = κ̂𝒪(x_k-m_k-x^*_2), which together with (<ref>), (<ref>), and (<ref>) yields |Γ_k^(j)-Γ̂_k^(j)| = κ̂𝒪(x_k-m_k-x^*_2). On the other side, if v_k-m_k+j^Tr_k^j-1 = 0, then Γ_k^(j) = 0. Hence |Γ_k^(j)-Γ̂_k^(j)|= | v̂_k-m_k+j^Tr̂_k^j-1/v̂_k-m_k+j^Tq̂_k-m_k+j| ≤κ̂𝒪(x_k-m_k-x^*_2^3)/1/2κx_k-m_k-x^*_2^2 = κ̂𝒪(x_k-m_k-x^*_2). Therefore (<ref>) holds for ℓ = j. Next, we obtain r_k^j_2 ≤r_k^j-1_2+ q_k-m_k+j_2 |Γ_k^(j)| = 𝒪(x_k-m_k-x^*_2) due to (<ref>), (<ref>), (<ref>), and j≤ m_k≤ m. By (<ref>), (<ref>), and (<ref>), we have q_k-m_k+jΓ_k^(j)-q̂_k-m_k+jΓ̂_k^(j)_2 ≤(q_k-m_k+j-q̂_k-m_k+j)Γ_k^(j)_2 +(q̂_k-m_k+j-q_k-m_k+j)(Γ_k^(j)-Γ̂_k^(j))_2 + q_k-m_k+j(Γ_k^(j)-Γ̂_k^(j))_2 = κ̂𝒪(x_k-m_k-x^*_2^2), which yields that r_k^j-r̂_k^j_2 ≤r_k^j-1-r̂_k^j-1_2 + q_k-m_k+jΓ_k^(j)-q̂_k-m_k+jΓ̂_k^(j)_2 = κ̂𝒪(x_k-m_k-x^*_2^2). Then, (<ref>) holds for ℓ = j, thus completing the induction. Since (<ref>) holds for j=m_k, and r̅_k=r_k^m_k, we obtain r̅_k_2 = 𝒪(x_k-m_k-x^*_2) and r̅_k-r̅̂̅_k_2 = κ̂𝒪(x_k-m_k-x^*_2^2). Moreover, similar to (<ref>), we have p_k-m_k+jΓ_k^(j)-p̂_k-m_k+jΓ̂_k^(j)_2 =κ̂𝒪(x_k-m_k-x^*_2^2), which further yields P_kΓ_k-P̂_kΓ̂_k_2 ≤∑_j=1^m_kp_k-m_k+jΓ_k^(j)-p̂_k-m_k+jΓ̂_k^(j)_2 = κ̂𝒪(x_k-m_k-x^*_2^2). Then, from x̅_k-x̅̂̅_k_2≤x_k-x̂_k_2 + P_kΓ_k-P̂_kΓ̂_k_2, we obtain x̅_k-x̅̂̅_k_2 = κ̂𝒪(x_k-m_k-x^*_2^2). Hence, (<ref>) holds. Finally, since x_k+1 = x̅_k+β_kr̅_k, it follows that x_k+1-x̂_k+1_2 = (x̅_k-x̅̂̅_k)+β_k(r̅_k-r̅̂̅_k)_2 = κ̂𝒪(x_k-m_k-x^*_2^2), where the second equality is due to (<ref>) and the fact that β_k is bounded. As a result, we complete the induction. Thus, (<ref>) and (<ref>) are proved. §.§ Proof of Theorem <ref> Let λ_min(·) and λ_max(·) denote the smallest eigenvalue and the largest eigenvalue of a real symmetric matrix. We first give a lemma. Suppose that A∈ℝ^d× d is positive definite with λ_min(𝒮(A))≥μ and A_2≤ L, where μ, L > 0. Then for a constant θ∈[(1-μ^2/L^2)^1/2,1), there exist positive constants β,β' such that when β_k∈[β,β'], the inequality I-β_kA_2 ≤θ holds. If θ = (1-μ^2/L^2)^1/2, then I-β_kA_2 ≤θ when β_k = μ/L^2. 
Since (I-β_kA)^T(I-β_kA) = I-β_k(A+A^T)+β_k^2 A^TA, it follows from Weyl's inequalities <cit.> that I-β_kA_2^2 ≤λ_max(I-β_k(A+A^T))+ λ_max(β_k^2A^TA) ≤ 1-β_kλ_min(A+A^T)+β_k^2A_2^2 ≤ 1-2β_kμ+β_k^2L^2. Thus, to ensure I-β_kA_2≤θ, it suffices to require that 1-2β_kμ+β_k^2L^2≤θ^2. Since θ∈[(1-μ^2/L^2)^1/2,1), solving (<ref>) yields that β_k∈ [β,β'], where β = μ-( μ^2-L^2(1-θ^2) )^1/2/L^2, β' = μ+( μ^2-L^2(1-θ^2) )^1/2/L^2. If θ = (1-μ^2/L^2)^1/2, then β_k = μ/L^2. Now, we give the proof of Theorem <ref>. Let the notations be the same as those in the proof of <Ref>. 1. For the Type-I method, let x_k^A and r_k^A denote the m_k-th iterate and residual of Arnoldi's method applied to solve ĥ(x) = 0, with the starting point x_k-m_k. Due to <ref>, we have x̅̂̅_k = x_k^A. Then, according to the known convergence of Arnoldi's method <cit.>, x̅̂̅_k-x^*_2 = x_k^A-x^*_2 ≤√(1+γ_k^2κ_k^2)min__p(0)=1^p∈𝒫_m_kp(A)(x_k-m_k-x^*)_2. Since x̂_k+1 = x̅̂̅_k+β_kr̅̂̅_k, it follows that x̂_k+1-x^* = (I-β_kA)(x̅̂̅_k-x^*). Hence x̂_k+1-x^*_2 ≤θ_k x̅̂̅_k-x^*_2, which along with (<ref>) and <ref> yields (<ref>). Let σ_min(·) denote the smallest singular value. Choose U_k∈ℝ^d× m_k that satisfies range(U_k) = 𝒦_m_k(A,r_k-m_k) and U_k^TU_k = I. Then π_k = U_kU_k^T. Since π_k and I-π_k are orthogonal projectors, it follows that γ_k = π_kA(I-π_k)_2 ≤A_2 ≤ L. For the restriction A_k|_𝒦_m_k(A,r_k-m_k), we have σ_min(A_k|_𝒦_m_k(A,r_k-m_k)) = min__y_2=1^y∈ℝ^m_kA_kU_ky_2 = min__y_2=1^y∈ℝ^m_kU_kU_k^TAU_ky_2 = σ_min(U_k^TAU_k). Since σ_min(U_k^TAU_k) ≥λ_min(𝒮(U_k^TAU_k)) = λ_min(U_k^T𝒮(A)U_k)≥λ_min(𝒮(A))≥μ, where the first inequality is due to Fan-Hoffman theorem <cit.>, and the second inequality is due to <cit.>, it follows that κ_k = (A_k|_𝒦_m_k(A,r_k-m_k))^-1_2 = 1/σ_min(A_k|_𝒦_m_k(A,r_k-m_k))≤ 1/μ. 2. For the Type-II method, let x_k^G and r_k^G denote the m_k-th iterate and residual of GMRES applied to solve ĥ(x) = 0, with the starting point x_k-m_k. We have x̅̂̅_k = x_k^G due to <ref>. It follows from the property of GMRES <cit.> that r̅̂̅_k_2 = r_k^G_2 = min__p(0)=1^p∈𝒫_m_kp(A)r̂_k-m_k_2 ≤min__p(0)=1^p∈𝒫_m_kp(A)r_k-m_k_2 + κ̂𝒪(x_k-m_k-x^*_2^2), where the inequality is due to (<ref>) and p(A)_2 ≤ 1 when p(A) = (I-μ/L^2A)^m_k (see <Ref>). Since x̂_k+1 = x̅̂̅_k+β_kr̅̂̅_k, it follows that r̂_k+1 = (I-β_kA)r̅̂̅_k. Hence r̂_k+1_2 ≤θ_k r̅̂̅_k_2, which along with (<ref>), <ref>, and θ_k ≤ 1+β' L yields (<ref>). If θ_j ≤θ<1 (ensured by <Ref>) for j= 0,…,max{k-1,0}, then r_j_2 ≤r_0_2,   j=0,…,k, when x_0-x^*_2 is sufficiently small. We prove it by induction. For k=0, (<ref>) is clear. Suppose that k≥ 0, and as an inductive hypothesis, (<ref>) holds for k. We establish the result for k+1. Since A(x̂_k+1-x^*)_2 = r̂_k+1_2 ≤θ_k min__p(0)=1^p∈𝒫_m_kp(A)r̂_k-m_k_2 ≤θ_k A(x_k-m_k-x^*)_2, it follows that A(x_k+1-x^*)_2 ≤A(x̂_k+1-x^*)_2 + A(x_k+1-x̂_k+1)_2 ≤θA(x_k-m_k-x^*)_2+Lx_k+1-x̂_k+1_2. From <Ref>, x_k+1-x̂_k+1_2 = κ̂𝒪(x_k-m_k-x^*_2^2). So there exists a constant c>0 such that x_k+1-x̂_k+1_2 ≤κ̂c A(x_k-m_k-x^*)_2^2. Hence, if x_k-m_k is chosen such that A(x_k-m_k-x^*)_2 ≤1-θ/2Lκ̂c, it yields A(x_k+1-x^*)_2 ≤1+θ/2A(x_k-m_k-x^*)_2. Thus, x_k+1-x^*_2≤1+θ/2L/μx_k-m_k-x^*_2, which indicates x_k+1∈ℬ_ρ̂(x^*) for x_k-m_k-x^*_2≤2μρ̂/L(1+θ). Then due to (<ref>), we have r_k+1_2 ≤A(x_k+1-x^*)_2+1/2κ̂x_k+1-x^*_2^2. 
Therefore, r_k+1_2 ≤1+θ/2A(x_k-m_k-x^*)_2 + 1/2κ̂x_k+1-x^*_2^2 ≤1+θ/2(r_k-m_k_2+κ̂/2x_k-m_k-x^*_2^2)+1/2κ̂(1+θ/2L/μx_k-m_k-x^*_2)^2 ≤θ'r_k-m_k_2 + κ̂ c' r_k-m_k_2^2, where θ':=1+θ/2, c'>0 is a constant, and the last inequality is due to (<ref>). So by choosing x_k-m_k-x^*_2≤1-θ'/2κ̂c'L, it follows that r_k-m_k_2≤ Lx_k-m_k-x^*_2 ≤1-θ'/2κ̂c'. Then r_k+1_2 ≤1+θ'/2r_k-m_k_2 < r_k-m_k_2 ≤r_0_2. Since x_k-m_k-x^*_2 ≤1/μr_k-m_k_2 ≤1/μr_0_2 ≤L/μx_0-x^*_2, the requirement that x_k-m_k-x^*_2 ≤ρ for some constant ρ>0 can be induced from x_0-x^*_2≤μρ/L. Hence, we complete the induction. Then (<ref>) holds if θ_j≤θ (j≥ 0) and x_0 is sufficiently close to x^*. 3. If m_k = d, then the Process II obtains the exact solution of ĥ(x) = 0, i.e. x̂_k+1 = x^*. Therefore x_k+1-x^*_2 = κ̂𝒪(x_k-m_k-x^*_2^2). §.§ Proof of Lemma <ref> Since h'(x) = I-g'(x), it follows that for every x,y∈ℬ_ρ̂(x^*), h'(x)-h'(y)_2 = g'(x)-g'(y)_2 ≤κ̂x-y_2, which implies that h(x) is Lipschitz continuously differentiable in ℬ_ρ̂(x^*) and the Lipschitz constant of h'(x) is κ̂. Due to I-h'(x)_2 = g'(x)_2 ≤κ < 1, we have h'(x)_2 ≤I_2+I-h'(x)_2 ≤ 1+κ, and 1/σ_min(h'(x))=h'(x)^-1_2 = (I-g'(x))^-1_2 ≤1/1-g'(x)_2≤1/1-κ, where σ_min(·) denotes the smallest singular value. Thus, σ_min(h'(x)) ≥ 1-κ. The (<ref>) holds for μ=1-κ and L=1+κ. Note that ‖ I-𝒮(h'(x)) ‖_2 ≤1/2(‖ I-h'(x)‖_2+‖ I-h'(x)^T‖_2) = κ. Let λ be an arbitrary eigenvalue of 𝒮(h'(x)). Since 𝒮(h'(x)) is symmetric, it follows from (<ref>) that | 1-λ|≤κ, which yields 0<1-κ≤λ≤ 1+κ. Thus (<ref>) also holds for μ=1-κ and L=1+κ. Therefore, Assumption <ref> is satisfied. § PROOFS OF SECTION <REF> §.§ Proof of Proposition <ref> Since v_j^Tq_j ≠ 0 for j=k-m_k+1,…,k, the procedures (<ref>) and (<ref>) are well defined. First, by construction, we have p_k+1 = Δ x_k - P_kζ_k+1 = -P_kΓ_k +β_kr̅_k-P_kζ_k+1 = β_kr̅_k-P_kϕ_k = β_k(r_k-Q_kΓ_k) - P_kϕ_k = β_k(I-β_k-1A)r̅_k-1-β_kQ_kΓ_k-P_kϕ_k = β_k(I-β_k-1A)p_k+P_k-1ϕ_k-1/β_k-1+β_k AP_kΓ_k-P_kϕ_k. Here, for brevity, we define P_k = 0∈ℝ^d, ϕ_k = 0, Γ_k+1^[0] = 0, ζ_k+1 = 0, if m_k = 0. Correspondingly, x̅_k = x_k, r̅_k = r_k, if m_k = 0. We prove (<ref>) by induction. If m_k=1, it follows from (<ref>) that p_k+1 = β_k(I-β_k-1A)p_k/β_k-1+β_kAp_kΓ_k-p_kϕ_k. It follows that Ap_k = 1/1-Γ_k(1/β_k-1-1/β_kϕ_k)p_k-1/(1-Γ_k)β_kp_k+1, namely (<ref>). For m_k≥ 2, the inductive hypothesis is AP_k-1=P_kH̅_k-1. With (<ref>), we have p_k+1 = β_k/β_k-1(p_k+P_k-1ϕ_k-1)-β_kA(p_k+P_k-1ϕ_k-1) + β_kAP_kΓ_k-P_kϕ_k = P_k( β_k/β_k-1[ ϕ_k-1; 1 ] -β_kH̅_k-1(ϕ_k-1-Γ_k^[m_k-1]) -ϕ_k ) - β_kAp_k(1-Γ_k^(m_k)). Hence, by rearrangement, we obtain (<ref>), thus completing the induction. Suppose that m_k≥ 1. We prove 1-Γ_k^(m_k)≠ 0 by contradiction. If Γ_k^(m_k) = 1, then r̅_k = r_k-Q_kΓ_k = r_k-Q_k-1Γ_k^[m_k-1]-q_k = r_k-Q_k-1Γ_k^[m_k-1]-(Δ r_k-1-Q_k-1ζ_k) = r_k-1-Q_k-1(Γ_k^[m_k-1]-ζ_k) ⊥ range(V_k). Hence r̅_k = r̅_k-1 due to r̅_k ⊥ range(V_k-1). For the Type-I method, v_k = p_k = β_k-1r̅_k-1-P_k-1ϕ_k-1, so 0 = r̅_k-1^Tp_k = β_k-1r̅_k-1^Tr̅_k-1, which indicates r̅_k-1 = 0. For the Type-II method, v_k = q_k = -Ap_k = -β_k-1Ar̅_k-1-Q_k-1ϕ_k-1, so 0 = r̅_k-1^Tq_k = -β_k-1r̅_k-1^TAr̅_k-1, which indicates r̅_k-1 = 0 since A is positive definite. However, r̅_k-1 = 0 yields that r_k = (I-β_k-1A)r̅_k-1 = 0, which is impossible because the algorithm has not found the exact solution. As a result, 1-Γ_k^(m_k)≠ 0. Thus (<ref>) and (<ref>) are well defined. 
§.§ Proof of Lemma <ref> From (<ref>) in the proof of <ref> and the assumption |1-Γ_k^(m_k)| ≥τ_0, with sufficiently small x_0-x^*_2, we can ensure |Γ_k^(m_k)-Γ̂_k^(m_k)| = κ̂𝒪(x_k-m_k-x^*_2) and |1-Γ̂_k^(m_k)|≥1/2τ_0. Thus |1/1-Γ_k^(m_k) - 1/1-Γ̂_k^(m_k)| =|Γ_k^(m_k)-Γ̂_k^(m_k)|/|(1-Γ_k^(m_k))(1-Γ̂_k^(m_k))| = κ̂𝒪(x_k-m_k-x^*_2). We prove (<ref>) by induction. The same as h_k, h_k^(m_k+1), H_k, H̅_k, ϕ_k in Process I, the notations ĥ_k, ĥ_k^(m_k+1), Ĥ_k, Ĥ̅_k, ϕ̂_k are defined for Process II, correspondingly. If m_k = 1, then | h_k - ĥ_k| = |1/1-Γ_k(1/β_k-1-1/β_kϕ_k)-1/1-Γ̂_k( 1/β_k-1-1/β_kϕ̂_k )| ≤|1/1-Γ_k·ϕ̂_k-ϕ_k/β_k| + |( 1/1-Γ_k-1/1-Γ̂_k)·( 1/β_k-1-1/β_kϕ̂_k) | = κ̂𝒪(x_k-m_k-x^*_2), because of (<ref>), and (<ref>), (<ref>) in the proof of <ref>. Also, H_k_2 = | h_k| = 𝒪(1). Suppose that m_k≥ 2, and as an inductive hypothesis, H_k-1-Ĥ_k-1_2 = κ̂𝒪(x_k-m_k-x^*_2), H_k-1_2 = 𝒪(1). First, due to (<ref>), we have | h_k-1^(m_k)- ĥ_k-1^(m_k)| = 1/β_k-1|1/1-Γ_k-1^(m_k-1) - 1/1-Γ̂_k-1^(m_k-1)| = κ̂𝒪(x_k-m_k-x^*_2). Also, | h_k-1^(m_k)|≤1/βτ_0, and m_k≤ m. Thus for H̅_k-1 and Ĥ̅_k-1, we have that H̅_k-1-Ĥ̅_k-1_2 = κ̂𝒪(x_k-m_k-x^*_2), H̅_k-1_2 = 𝒪(1). As a result, ‖H̅_k-1(ϕ_k-1-Γ_k^[m_k-1])-Ĥ̅_k-1(ϕ̂_k-1-Γ̂_k^[m_k-1])‖_2 ≤‖H̅_k-1(ϕ_k-1-Γ_k^[m_k-1]- (ϕ̂_k-1-Γ̂_k^[m_k-1])) ‖_2 + ‖( H̅_k-1-Ĥ̅_k-1)( ϕ̂_k-1-Γ̂_k^[m_k-1]) ‖_2 ≤κ̂𝒪(x_k-m_k-x^*_2), and H̅_k-1(ϕ_k-1-Γ_k^[m_k-1])_2 = 𝒪(1). Besides, ‖1/β_k-1[ ϕ_k-1; 1 ]-1/β_kϕ_k - ( 1/β_k-1[ ϕ̂_k-1; 1 ]-1/β_kϕ̂_k ) ‖_2 = κ̂𝒪(x_k-m_k-x^*_2), ‖1/β_k-1[ ϕ_k-1; 1 ]-1/β_kϕ_k ‖_2 = 𝒪(1). Therefore, ‖(1-Γ_k^(m_k))h_k -(1-Γ̂_k^(m_k))ĥ_k‖_2 = κ̂𝒪(x_k-m_k-x^*_2), (1-Γ_k^(m_k))h_k_2 = 𝒪(1). Hence, h_k-ĥ_k_2 ≤‖1/1-Γ_k^(m_k)( (1-Γ_k^(m_k))h_k -(1-Γ̂_k^(m_k))ĥ_k ) ‖_2 + ‖( 1/1-Γ_k^(m_k)-1/1-Γ̂_k^(m_k))·( 1-Γ̂_k^(m_k))ĥ_k ‖_2 = κ̂𝒪(x_k-m_k-x^*_2), and h_k_2 = 𝒪(1), which together with (<ref>) and m_k≤ m implies that (<ref>) holds. Thus we complete the induction. § PROOFS OF SECTION <REF> §.§ Proof of Theorem <ref> Consider the two processes defined in <ref>. Here, we replace the restarted AM method by the restarted ST-AM method. Note that the restarted ST-AM is obtained from the restarted AM by setting ζ_k^(j) = 0 for j≤ k-3, and Γ_k^(j) = 0 for j ≤ k-2. Similar to <ref>, it can be proved that r_k - r̂_k_2 = κ̂·𝒪(x_k-m_k-x^*_2^2),    x_k+1 - x̂_k+1_2 = κ̂·𝒪(x_k-m_k-x^*_2^2), provided that there exists a constant η_0 > 0 such that r_j_2 ≤η_0 r_0_2,   j=0,…,k, and x_0∈ℬ_ρ̂(x^*) is sufficiently close to x^*. Since θ_k = I-β_kA_2 ≤θ, there are positive constants β, β' such that β≤β_k≤β'. In fact, by choosing β_k ∈[1-θ/μ,1+θ/L], we can ensure θ_k ≤max{|1-β_kL|,|1-β_kμ| }≤θ<1. We give the proof of the Type-I method here. For the restarted Type-I ST-AM method, if x_0 is sufficiently close to x^*, then x_j-x^*_A ≤x_0-x^*_A,   j=0,…,k. We prove (<ref>) and (<ref>) hold for the Type-I method by induction. For k=0, (<ref>) and (<ref>) hold. Suppose that for k≥ 0, the results hold for k. We establish the results for k+1. Let x_k^A and r_k^A denote the m_k-th iterate and residual of Arnoldi's method applied to solve ĥ(x)=0, with the starting point x_k-m_k. Due to <ref> and <ref>, we have x̅̂̅_k = x_k^A. Hence x̂_k+1 - x^*_A ≤θ_kx̅̂̅_k-x^*_A = θ_kx_k^A-x^*_A = θ_kmin__p(0)=1^p∈𝒫_m_kp(A)(x_k-m_k-x^*)_A ≤θ_kx_k-m_k-x^*_A. Here, we use the fact that I-β_kA_A = I-β_kA_2. From (<ref>), it follows that x_k+1-x̂_k+1_A = κ̂𝒪(x_k-m_k-x^*_A^2). Then, there is a constant c_1>0 such that x_k+1-x̂_k+1_A ≤κ̂c_1x_k-m_k-x^*_A^2. With (<ref>), we have x_k+1-x^*_A ≤θx_k-m_k-x^*_A+ κ̂c_1x_k-m_k-x^*_A^2. 
Then x_k+1-x^*_A ≤1+θ/2x_k-m_k-x^*_A provided x_k-m_k-x^*_A≤1-θ/2κ̂c_1, which can be satisfied by choosing x_0-x^*_2 ≤1-θ/2√(L)κ̂c_1, since by the inductive hypothesis, x_k-m_k-x^*_A ≤x_0-x^*_A ≤√(L)x_0-x^*_2. Thus, x_k+1-x^*_A < x_k-m_k-x^*_A≤x_0-x^*_A, namely (<ref>) for k+1. Also, x_k+1-x^*_2 ≤1/√(μ)x_k+1-x^*_A ≤1/√(μ)x_0-x^*_A ≤√(L)/√(μ)x_0-x^*_2. So we can impose x_0-x^*_2≤√(μ)ρ̂/√(L) to ensure x_k+1∈ℬ_ρ̂(x^*), which further yields that r_k+1_2 ≤ Lx_k+1-x^*_2≤L√(L)/√(μ)x_0-x^*_2≤L√(L)/μ√(μ)r_0_2, namely (<ref>) for k+1, and η_0 = L√(L)/μ√(μ). Hence, we complete the induction. Since A is SPD, we can use the Chebyshev polynomial to obtain min__p(0)=1^p∈𝒫_m_kp(A)_2 ≤min__p(0)=1^p∈𝒫_m_kmax_λ∈ [μ,L]|p(λ)| ≤ 2(√(L/μ)-1/√(L/μ)+1)^m_k, which is a classical result <cit.>. Note that p(A)(x_k-m_k-x^*)_A ≤p(A)_Ax_k-m_k-x^*_A = p(A)_2x_k-m_k-x^*_A. Thus, by choosing x_0 sufficiently close to x^*, (<ref>) holds as a result of (<ref>), (<ref>), and (<ref>). For the Type-II method, since θ_k≤θ < 1, the bound (<ref>) can be established following the similar approach to proving <Ref>. With (<ref>), the bound (<ref>) holds. §.§ Proof of Theorem <ref> The same as t_k^(m_k+1), T_k in Process I, the notations t̂_k^(m_k+1), T̂_k are defined for Process II, correspondingly. In this case, the tridiagonal matrix T̂_k can be diagonalized. Let A:=h'(x^*). Then AQ̂_k = Q̂_kT̂_k+t̂_k^(m_k+1)q̂_k+1e_m_k^T. Hence V̂_k^TAQ̂_k = V̂_k^TQ̂_kT̂_k, due to V̂_k^Tq̂_k+1 = 0. Thus T̂_k = (V̂_k^TQ̂_k)^-1V̂_k^TAQ̂_k. Here, V̂_k^TQ̂_k and V̂_k^TAQ̂_k are symmetric for both types of ST-AM methods. Define Ŵ_k=-V̂_k^TQ̂_k for the Type-I method, and Ŵ_k = V̂_k^TQ̂_k for the Type-II method. Then Ŵ_k^1/2T̂_kŴ_k^-1/2 = ∓Ŵ_k^-1/2(V̂_k^TAQ̂_k)Ŵ_k^-1/2, where the sign is “-” for the Type-I method, and “+” for the Type-II method. The right side in (<ref>) is symmetric, so there exists an orthonormal matrix Û_k∈ℝ^m_k× m_k such that T̂_k = Ŵ_k^-1/2Û_k^TD̂_kÛ_kŴ_k^1/2, where D̂_k is a diagonal matrix formed by the eigenvalues of T̂_k. Also, similar to the proof of <Ref>, the relations (<ref>), (<ref>), and (<ref>) also hold for the ST-AM methods. Note that V̂_k^TQ̂_k is diagonal. We have V̂_k^TQ̂_k_2(V̂_k^TQ̂_k)^-1_2 = max_k-m_k+1≤ i≤ k{|v̂_i^Tq̂_i |}/min_k-m_k+1≤ j≤ k{|v̂_j^Tq̂_j |} = 𝒪(1). Thus Ŵ_k^1/2_2 Ŵ_k^-1/2_2 = 𝒪(1). Also, similar to <ref>, we have T_k-T̂_k_2 = κ̂𝒪(x_k-m_k-x^*_2). Hence, the result (<ref>) follows from Bauer-Fike theorem. § ADDITIONAL EXPERIMENTAL RESULTS §.§ Solving linear systems To verify the theoretical properties of the AM and ST-AM methods for solving linear systems, we considered solving Ax = b, where A∈ℝ^d× d, b∈ℝ^d, the residual is defined as r_k = b-Ax_k at x_k. The fixed-point iteration is the Richardson's iteration x_k+1 = x_k+β r_k, where β was chosen to ensure linear convergence. For the restarted AM and restarted ST-AM, the restarting conditions were disabled since (<ref>) is linear. AM-I and AM-II used (<ref>) with β_0 = 1 to choose β_k; ST-AM-I and ST-AM-II used (<ref>) with β_0 = 1 to choose β_k. §.§.§ Nonsymmetric linear system The matrix A∈ℝ^100× 100 was randomly generated from Gaussian distribution and was further modified by making all the eigenvalues have positive real parts. The results are shown in <Ref> and <Ref>. The convergence behaviours of r̅_k_2/r_0_2 and r_k_2/r_0_2 verify <ref>, <ref> and <ref>. The eigenvalue estimates well approximate the exact eigenvalues of A, which justifies the adaptive mixing strategy. 
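As a reference point for this linear test, the following Python sketch shows one possible Type-II AM loop for A x = b with residual r_k = b - A x_k. It is an illustration under our own conventions rather than the code used for the experiments: it keeps the plain difference histories Delta x_j, Delta r_j instead of the modified sequences P_k, Q_k, replaces the restarting conditions by a simple memory cap m, and uses a fixed mixing parameter beta; all names are ours.

import numpy as np

def am_type2_linear(A, b, x0, beta=1.0, m=5, tol=1e-10, max_iter=500):
    # One step: Gamma_k = argmin || r_k - R_k Gamma ||_2, so that the
    # intermediate residual r_k - R_k Gamma_k is orthogonal to range(R_k),
    # followed by x_{k+1} = x_k + beta * r_k - (X_k + beta * R_k) Gamma_k,
    # where X_k, R_k collect the stored differences Delta x_j, Delta r_j.
    x = np.array(x0, dtype=float)
    r = b - A @ x
    dX, dR = [], []
    for k in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        if dR:
            Xk = np.column_stack(dX)
            Rk = np.column_stack(dR)
            gamma, *_ = np.linalg.lstsq(Rk, r, rcond=None)
            x_new = x + beta * r - (Xk + beta * Rk) @ gamma
        else:
            x_new = x + beta * r          # first step: plain mixing
        r_new = b - A @ x_new
        dX.append(x_new - x)
        dR.append(r_new - r)
        if len(dX) > m:                   # crude memory cap instead of restarts
            dX.pop(0)
            dR.pop(0)
        x, r = x_new, r_new
    return x, k

In exact arithmetic and with full memory, the least-squares step reproduces the Gamma_k defined by the Type-II orthogonality condition, since minimizing || r_k - R_k Gamma ||_2 enforces r_k - R_k Gamma_k ⊥ range(R_k).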
§.§.§ SPD linear system We first generated a matrix B∈ℝ^100× 100 from Gaussian distribution, then chose A=B^TB. In this case, the conjugate gradient (CG) method <cit.> and the conjugate residual (CR) method <cit.>, which have short-term recurrences, are equivalent to Arnoldi's method and GMRES, respectively. The results in <Ref> verify the properties of the ST-AM methods. We see the intermediate residuals {r̅_k} of ST-AM-I/ST-AM-II match the residuals { r_k} of the CG/CR method during the first 30 iterations. However, the equivalence cannot exactly hold in the later iterations due to the loss of global orthogonality in finite arithmetic. Nonetheless, the convergence of ST-AM-I/ST-AM-II is comparable to that of CG/CR. <Ref> shows that the eigenvalue estimates from ST-AM well approximate the exact eigenvalues of A. §.§ Additional results of solving the modified Bratu problems We provide details about the eigenvalue estimates and show the effect of β_k on the convergence. §.§.§ Nonsymmetric Jacobian To verify <Ref>, we compared the eigenvalue estimates with the Ritz values of F'(U^*) where F(U^*) = 0. The Ritz values were obtained from the k-step Arnoldi's method <cit.> (denoted by Arnoldi(k)). <Ref> indicates that the extreme Ritz values are well approximated, which accounts for the proper choices of β_k. We also tested AM-I and AM-II with fixed β_k. <Ref> shows that the choice of β_k can largely affect the convergence behaviours of both methods, and the adaptive mixing strategy performs well. It is worth noting that for the Picard iteration, choosing β from {10^-5,…,10^-2} causes divergence, which suggests θ_k > 1 in (<ref>) and (<ref>) when β_k ∈{10^-5,…,10^-2}. Nevertheless, the residual norms of the restarted AM methods can still converge since the minimization problems in (<ref>) and (<ref>) dominate the convergence when m_k is large. §.§.§ Symmetric Jacobian <Ref> shows the eigenvalue estimates computed by ST-AM-I/ST-AM-II and the Ritz values of F'(U^*) computed by the k-step symmetric Lanczos method <cit.> (denoted by Lanczos(k)), where F(U^*) = 0. It is observed that the eigenvalue estimates well approximate the Ritz values, which verifies <Ref>. §.§ Additional results of solving the Chandrasekhar H-equation <Ref> shows the Ritz values of F'(h^*) and the eigenvalue estimates, where h^* is the solution. We computed the Ritz values of F'(h^*) by applying 500 steps of Arnoldi's method to G'(h^*). It is observed in <Ref> that most Ritz values are nearly 1, which accounts for the efficiency of the simple Picard iteration for solving this problem. Since the eigenvalues form 3 clusters, we also computed three eigenvalue estimates by AM-I/AM-II (η=∞, m=100, τ=10^-15, and β_k ≡ 1). We find the eigenvalue estimates still roughly match the Ritz values in the cases ω=0.5 and ω=0.99. For ω=1, the Jacobian F'(h^*) is singular, so the error in estimating the eigenvalue zero is large. §.§ Additional results of solving the regularized logistic regression <Ref> shows the eigenvalue estimates computed by the eigenvalue estimation procedure of ST-AM-I/ST-AM-II at the last iteration. The comparison with the Ritz values of ∇^2 f(x^*) indicates that the extreme eigenvalues are well approximated. plainnat
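To make explicit how the regularized logistic regression problem enters the fixed-point framework used throughout, the Python sketch below spells out the gradient of the loss and the residual fed to the AM/ST-AM updates. The array names Xi, y and the helper names are ours, and the convention g(x) = x - grad f(x) (so that x_{k+1} = x_k + beta_k r_k is gradient descent with step size beta_k) is our reading of the setup, consistent with choosing beta_k equal to the tuned GD step size 1.6 in that experiment.

import numpy as np

def grad_f(x, Xi, y, w):
    # Gradient of f(x) = (1/T) * sum_i log(1 + exp(-y_i * x^T xi_i)) + (w/2) * ||x||^2,
    # where Xi is the T x d matrix whose rows are the samples xi_i and y holds labels +-1.
    T = Xi.shape[0]
    z = -y * (Xi @ x)                      # z_i = -y_i * x^T xi_i
    sig = 1.0 / (1.0 + np.exp(-z))         # sigma(z_i)
    return -(Xi * (y * sig)[:, None]).sum(axis=0) / T + w * x

def residual(x, Xi, y, w):
    # Residual of the fixed-point map g(x) = x - grad f(x):
    # r_k = g(x_k) - x_k = -grad f(x_k), so x_{k+1} = x_k + beta_k * r_k
    # is gradient descent with step size beta_k.
    return -grad_f(x, Xi, y, w)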
http://arxiv.org/abs/2307.01244v1
20230703173234
Dynamical Projective Operatorial Approach (DPOA) for out-of-equilibrium systems and its application to TR-ARPES
[ "Amir Eskandari-asl", "Adolfo Avella" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "physics.comp-ph", "physics.optics", "quant-ph" ]
^1Dipartimento di Fisica “E.R. Caianiello”, Università degli Studi di Salerno, I-84084 Fisciano (SA), Italy ^2CNR-SPIN, Unità di Salerno, I-84084 Fisciano (SA), Italy ^3CNISM, Unità di Salerno, Università degli Studi di Salerno, I-84084 Fisciano (SA), Italy Efficiently simulating real materials under the application of a time-dependent field and computing reliably the evolution over time of relevant response functions, such as the TR-ARPES signal or differential transient optical properties, has become one of the main concerns of modern condensed matter theory in response to the recent developments in all areas of experimental out-of-equilibrium physics. In this manuscript, we propose a novel model-Hamiltonian method, the dynamical projective operatorial approach (DPOA), designed and developed to overcome some of the limitations and drawbacks of currently available methods. Relying on (i) many-body second-quantization formalism and composite operators, DPOA is in principle capable of handling both weakly and strongly correlated systems, (ii) tight-binding approach and wannierization of DFT band structures, DPOA naturally deals with the complexity and the very many degrees of freedom of real materials, (iii) dipole gauge and Peierls substitution, DPOA is built to address pumped systems and, in particular, pump-probe spectroscopies, (iv) a Peierls expansion we have devised ad hoc, DPOA is numerically extremely efficient and fast. The latter expansion clarifies how single- and multi-photon resonances, rigid shifts, band dressings, and different types of sidebands emerge and allows understanding the related phenomenologies. Comparing DPOA to the single-particle density-matrix approach and the Houston method (the latter generalized here to the second-quantization formalism), we show how it can compute multi-particle multi-time correlation functions and go well beyond these approaches for real materials. We also propose protocols for evaluating the strength of single- and multi-photon resonances and for assigning the residual excited electronic population at each crystal momentum and band to a specific excitation process. The expressions for the relevant out-of-equilibrium Green's functions and the TR-ARPES signal are given within the DPOA framework and, defining a retarded TR-ARPES signal, it is shown that it is possible to obtain an out-of-equilibrium version of the fluctuation-dissipation theorem. Hamiltonians, where intra- and inter-band transitions are selectively inhibited, are defined and used to analyze the related effects on the TR-ARPES signal and the residual electronic excited population of a prototypical pumped two-band system. Three relevant cases of light-matter coupling are analyzed within the dipole gauge, which is derived in the second-quantization formalism: only a local dipole (as in quantum dots and molecules, and transverse-pumped low-dimensional systems), only the Peierls substitution in the hopping term (as in many real materials), and both terms at once. The transient and residual pump effects are studied in detail, including the consequences of the lattice symmetries at different crystal momenta. A detailed study of the TR-ARPES signal dependence on the probe-pulse characteristics is also performed and reported.
Dynamical Projective Operatorial Approach (DPOA) for out-of-equilibrium systems and its application to TR-ARPES Amir Eskandari-asl^1 and Adolfo Avella^1,2,3 August 1, 2023 ================================================================================================================= § INTRODUCTION The modern developments in technology made it possible to study condensed matter systems in the attosecond regime and investigate their real-time dynamics upon perturbation by very short and intense electromagnetic pulses, the so called pump-probe setups <cit.>. Investigating the real-time behavior of electronic excitations induced by the laser pulse reveals the fundamental processes that govern the physics of the system under study <cit.>. One avenue is to investigate the response of the solid by reading out the high-harmonic generation upon irradiation which gives high-energy pulses <cit.>. In other pump-probe setups, the system is pumped with an intense laser pulse, usually in the IR regime and with a duration ranging from few to hundreds of fs, and analyzed using a positively or negatively delayed probe pulse by measuring either the transient relative change in the optical properties <cit.> or the time-resolved angle-resolved photoemission spectroscopy (TR-ARPES) signal <cit.>. Even though the theoretical method we introduce in this manuscript is in principle capable of dealing with any time-dependent scenario, we will mainly focus on its description of the TR-ARPES signal. ARPES investigates the electronic band structure of materials by analyzing the energy and momentum distribution of the electrons ejected from a solid via photoelectric effect <cit.>. Instead, in pump-probe setups, TR-ARPES is exploited to determine the out-of-equilibrium electronic properties of materials by measuring the signal as a function of the time delay between the pump and probe pulses <cit.>. TR-ARPES measurements in pump-probe setups can reveal the different dynamical processes taking place in the system<cit.>, which are of fundamental importance for understanding the underlying physics and eventually engineering materials for practical purposes. Thanks to the capability of monitoring the dynamics of the electronic excitations, TR-ARPES can give valuable information about the bands above the Fermi energy, well beyond what one can achieve by measuring thermal excitations at equilibrium <cit.>. Moreover, TR-ARPES can measure and study the dressing of the main bands and the emergence of side-bands due to the pump pulse <cit.> opening a pathway to novel applications in ultra-fast engineering of materials. TR-ARPES measurements can be used to investigate many other complex effects induced by the pump pulse such as the perturbation (melting, switching, emergence, etc.) of ordered states in materials <cit.> and the dynamical excitation of collective modes <cit.>, just to give a few examples. To understand the underlying physical phenomena and microscopic processes induced by the pumping of the material, such advanced experimental studies require their theoretical description and numerical simulation independently of the probing scheme (optics or TR-ARPES). The standard approach to the numerical study of the out-of-equilibrium behavior of a material pumped with an intense laser pulse is the time-dependent density functional theory (TD-DFT) <cit.>, which is unfortunately rather time-consuming and computationally expensive <cit.>. 
Moreover, it is not easy to get deep insights into the underlying physics through TD-DFT simulations just using (without tampering) currently available software packages, while, in a model-Hamiltonian approach, it is possible to switch on and off terms and investigate their relative relevance and interplay <cit.>. Model-Hamiltonian approaches, for both matter and light-matter interaction terms, rely on parameters supplied by DFT calculations at equilibrium for real materials <cit.>. If the material is strongly correlated, one can use the dynamical mean-field theory to compute its out-of-equilibrium properties if the number of degrees of freedom involved (spin, bands, atoms in the basis, etc.) is limited <cit.>. On the other hand, for weakly correlated materials, such as most of the semiconductors, the Hamiltonian can be mapped to an effective quadratic form, for which one can in principle compute the time-dependent single-particle density-matrix and/or higher-order correlation functions according to the probing scheme <cit.>. Another approach that is suitable for effective few-band models is the so called Houston method in which one expands electronic single-particle wave functions in terms of the instantaneous eigenstates of the time-dependent Hamiltonian and solves the equations of motion for the expansion coefficients within some approximations <cit.>. The relevance of this approach is to provide a framework to disentangle the effects of different processes, in particular those related to the inter-band and to the intra-band transitions, and their interplay. At any rate, model-Hamiltonian approaches can not be applied so easily to real materials as one either run the risk to use oversimplified models that could lose some important features or has to find an efficient way to deal with the actual very complicated Hamiltonians describing many degrees of freedom at once <cit.>. Even for quadratic Hamiltonians, one needs to numerically solve the equations of motion of the multi-particle density matrices or multi-time correlation functions, which are needed to describe response functions, such as the optical conductivity, or for computing the TR-ARPES signal. Unfortunately, without a proper framework, such calculations can be computationally quite heavy and eventually unaffordable. Very recently, we designed and developed a novel method, the dynamical projective operatorial approach (DPOA), and used it to analyze the transient and residual electronic photo-excitations in ultrafast (attosecond) pumped germanium. We benchmarked our results with those obtained through TD-DFT calculations, which were in turn validated by direct comparison to the experimental results for the differential transient reflectivity <cit.>. DPOA is a quite versatile model-Hamiltonian approach that deals with the time evolution of composite operators <cit.> and is capable of simulating real materials, and the time-dependent transitions among their actual numerous bands, with a lower numerical cost as compared to TD-DFT. In this paper, we introduce DPOA by reporting its detailed derivation and its quadratic-Hamiltonian version, which is particularly fast and efficient. Such a version is very useful for semiconductors where one can usually safely discard the dynamical Coloumb interaction. We also report how to compute all single/multi-particle single/multi-time observables and correlation functions within this approach going well beyond the single-particle density-matrix and Houston approaches. 
Moreover, we provide a very efficient way to implement the Peierls substitution through a numerically exact expansion and to compute the m-th partial derivative in momentum space of the hopping and of the dipole terms appearing in such expansion. This allows to analyze and characterize the terms in such an expansion defining the related characteristic frequencies, timescales, bandwidths, and relative phases that explain the emergence and the features of different kinds of sidebands (multi-photon resonant, non resonant, envelope, ...) within a generalized Floquet scenario modified by the finite width of the envelope of the pump pulse. The presence of the envelope modifies/generalizes also the Rabi-like phenomenology that takes place when, within the generalized Floquet scenario, some of the band gaps are in resonance with integer multiples of the central frequency of the pump pulse. Such multi-photon resonances determine the accumulation of (residual) electronic excited populations after the pump pulse turns off. Then, we propose a procedure to determine the strength of a multi-photon non-exact resonance and through this to assign residual electronic excited populations per momentum and band to specific multi-photon resonant processes. Furthermore, we use our approach to reproduce the Houston method, generalize it to second quantization, and to obtain numerically exact expectation values of the Houston coefficients overcoming its limitations and drawbacks. We also show that the separation of inter-band and intra-band transition effects can be obtained in DPOA without any ambiguity, while such separation is questionable within the Houston approach. Moreover, we show how to compute Green's functions (GFs) using DPOA, and hence the TR-ARPES signal. As the standard spectral functions can become negative out of equilibrium, there is no out-of-equilibrium counter-part of the fluctuation-dissipation theorem<cit.>. Indeed, by defining the retarded TR-ARPES signal, we generalize the fluctuation-dissipation theorem and find its equivalent out of equilibrium for TR-ARPES signal, which can be useful to better understand and compute the out-of-equilibrium energy bands of pumped systems. As already mentioned above, very recently, we exploited DPOA to unveil the various charge-injection mechanisms active in germanium <cit.>. In the near future, we plan to report further studies on germanium as well as on other real materials. In this work, to analyze and discuss a larger variety of fundamental physical processes without the limitations imposed by the peculiarities of a specific real material, we apply DPOA to a non-trivial toy model. We analyze a two-band (valence-conduction) model and consider three relevant cases by switching on and off the Peierls substitution in the hopping term (relevant to bulk systems) and a local dipole term (relevant to systems such as quantum dots and molecules and low-dimensional systems with transverse pumps). We discuss the main effects of the two terms separately as well as the relevance of their interplay. In particular, we analyze how the first-order (in the pumping field) terms of the two types of light-matter couplings assist the higher-order ones and how their decomposition in terms of intra- and inter-band components can help understanding the actual phenomenology. 
We compute and analyze, in connection to the symmetries of the system, the lesser and the retarded TR-ARPES signals as well as the residual (after pump pulse) excited population and through them we discuss (a) the broadening of the out-of-equilibrium (TR-ARPES) bands, (b) their relationship to the equilibrium bands (the rigid shift due to the even terms starting from the inverse-mass one) and the instantaneous eigenstates, (c) the emergence of the different kinds of sidebands and, in particular, (i) of the resonant ones in connection to the vanishing of velocity (one-photon) and inverse-mass (two-photon) terms due to band symmetries and how such symmetry protection is lost in the presence of the dipole term and (ii) of the envelope/even-term induced ones, (d) the accumulation of residual electronic excited population (clearly visible also in the lesser TR-ARPES signal) induced by Rabi-like oscillations at the multi-photon resonant non-symmetry-protected k points and the characteristics of such oscillations in terms of the pump-pulse features, (e) the effects of inhibiting selectively intra- and inter-band transitions on the TR-ARPES bands, on the different types of sidebands, on the one-(odd-) and two-(even-)photon resonances, on the residual electronic excited population, and on the characteristics and the effectiveness of the different photo-injection multi-photon processes. Moreover, we study in detail the changes in the TR-ARPES signal and in its characteristics (broadness, different types of sidebands, population inversion, residual lesser signal, hole photo-injection – photo doping, relation to equilibrium and instantaneous eigenenergies, self-averaging over time and energy, etc.) at relevant k points on varying the delay (evolution in time) and the width (spread in energy) of the probe pulse. In addition, we report a detailed derivation of the dipole-gauge second-quantization Hamiltonian for light-matter interaction from the velocity-gauge first-quantization one within the minimal coupling. The actual expressions of Hamiltonian, electronic current, and charge density operators are derived requesting charge conservation and cast in real and momentum space and in Bloch and Wannier basis. Such expressions are fundamental for the current study (residual excited electronic population and TR-ARPES signal) and for the determination of optical response functions. The manuscript is organized as follow. In Sec. <ref>, we introduce DPOA (Sec. <ref>), its quadratic-Hamiltonian version (Sec. <ref>), its relation to and overcoming of the single-particle density-matrix approach (Sec. <ref>). We also discuss how to describe pumped lattice systems in the dipole gauge, how to very efficiently numerically compute Peierls substitution and how multi-photon resonances, rigid shifts, band dressings and different types of sidebands naturally emerge (Sec. <ref>), how to evaluate the strength of single- and multi-photon resonances and how to assign the residual excited electronic population at each k point and band to a specific multi-photon process (Sec. <ref>), how to generalize the Houston approach to second quantization and overcome its limitations and drawbacks through DPOA (Sec. <ref>), how to analyze the relevance, the peculiar/distinct effects and the interplay/cooperation/antagonism of inter- and intra-band transitions on the system response (Sec. 
<ref>), how to obtain all relevant out-of-equilibrium Green's function of a system within DPOA as well as the experimentally measurable TR-ARPES signal and, proposing a definition for the retarded TR-ARPES signal, and how to obtain an out-of-equilibrium version of the fluctuation-dissipation theorem (Sec. <ref>). In Sec. <ref>, in order to show how DPOA works in a fundamental and prototypical case, we present and discuss in detail the DPOA results for the TR-ARPES signal and the residual electronic excited population of a pumped two-band (valence-conduction) system in the case of a light-matter interaction described only by a local-dipole term (Sec. <ref>), only by the Peierls substitution in the hopping term (Sec. <ref>), and by both light-matter interaction terms at once (Sec. <ref>), and conclude with a study of the TR-ARPES signal dependence on the probe-pulse characteristics (Sec. <ref>). In Sec. <ref>, we summarize the main physical messages and the major technical advancements reported in this manuscript and provide possible perspectives for the application of DPOA to other relevant response properties and real materials. Finally, we included three appendices regarding the derivation and the discussion of the velocity and the dipole gauges in second quantization (App. <ref>), the Houston approach in first quantization (App. <ref>) and the out-of-equilibrium spectral functions (App. <ref>). § THEORY §.§ Dynamical Projective Operatorial Approach (DPOA) For any system at equilibrium, described by a time-independent Hamiltonian ℋ in second quantization and Heisenberg picture, one can find as many sets of composite operators ψ_α^†=(ψ_α,1^†,…,ψ_α,a^†,…), as many degrees of freedom characterizing the system (spin, orbital, momentum, etc.), which close their hierarchy of the equations of motion: iħ∂_tψ_α(t)=[ψ_α(t),ℋ]=Ξ_α·ψ_α(t). In Eq. (<ref>), · is the scalar product in the space of the operators in a specific set α, while Ξ_α and ψ_α,a are called energy matrix and eigenoperators, respectively <cit.>. A very effective measure of the degree of correlation in the system is the ratio between the number of independent (disjoint) sets and the number of degrees of freedom: for a non-correlated system this ratio is 1, and it tends to 0 (1) according to how much the system is strongly (weakly) correlated. To study the properties of a solid-state system and its linear response, two types of sets are essential. One is the set stemming from the canonical electronic (fermionic) operators of the system under study, c_ν(𝐫,t), where, for instance, 𝐫 can be the site in a Bravais lattice and ν collects all possible degrees of freedom (spin, orbital, atom in a basis, etc.). The other is the set stemming from the canonical charge, spin, orbital, ... number and ladder (bosonic-like) operators of the system under study that allow to obtain the related susceptibilities. Now, let us consider a general time-dependent external perturbation applied to the system: ℋ→ℋ(t). For instance, it can be an electromagnetic pump pulse whose interaction with the system is usually described via the minimal coupling. Such a perturbation preserves the closure of the hierarchy of the equations of motion of ψ_α as it usually changes only the single-particle term of the Hamiltonian <cit.>, therefore iħ∂_tψ_α(t)=[ψ_α(t),ℋ(t)]=Ξ_α(t)·ψ_α(t). 
These considerations guided us to design and devise the Dynamical Projective Operatorial Approach (DPOA) according to which we have ψ_α(t)=P_α(t,t_0)·ψ_α(t_0) ∀ t≥ t_0, where P_α(t,t_0) are called dynamical projection matrices. Eq. <ref> can be verified using mathematical induction as follows. Basis: At time t=t_0, Eq. <ref> obviously holds with P_α(t_0,t_0)=1. Induction step: Let us discretize the time axis in terms of an infinitesimal time step Δ t→0 (t_n=n Δ t+t_0) and let us assume that Eq. <ref> holds for time t_n, i.e., ψ_α(t_n)=P_α(t_n,t_0)·ψ_α(t_0), Then, for time t_n+1=t_n+Δ t, we have ψ_α(t_n+1)=ψ_α(t_n)+Δ t∂_tψ_α(t_n) =[P_α(t_n,t_0)-Δ ti/ħΞ_α(t_n)· P_α(t_n,t_0)]·ψ_α(t_0), that closes the proof and suggests the following relation P_α(t_n+1,t_0)=P_α(t_n,t_0)-Δ ti/ħΞ_α(t_n)· P_α(t_n,t_0). In the following, we choose as initial time t_0 any time before the application of the pump pulse (e.g., t_0→-∞) and, for the sake of simplicity, we indicate the dynamical projection matrices using just one time argument P_α(t,t_0)→ P_α(t) . Then, ψ_α(t_0) simply stands for the operatorial basis describing the system at equilibrium. Applying the limit Δ t→0 to Eq. <ref>, one obtains the equation of motion for the dynamical projection matrix as iħ∂_tP_α(t)=Ξ_α(t)· P_α(t). For stationary Hamiltonians, i.e., when Ξ_α(t)→Ξ_α^(0), the solution of Eq. <ref> is simply P_α^(0)(t)=e^-i/ħ(t-t_0)Ξ_α^(0). However, for a general perturbed system, where Ξ_α(t)=Ξ_α^(0)+Ξ_α^(1)(t), one needs to compute, numerically in almost all cases, the dynamical projection matrix P_α(t) from which it is possible to obtain all out-of-equilibrium properties and response functions of the system. Finally, it is worth noting that rewriting P_α(t)=P_α^(0)(t)· P_α^int(t)=e^-i/ħ(t-t_0)Ξ_α^(0)· P_α^int(t), we can deduce the following reduced equation of motion iħ∂_tP_α^int(t)=Ξ_α^(1)int(t)· P_α^int(t), where Ξ_α^(1)int(t)=e^i/ħ(t-t_0)Ξ_α^(0)·Ξ_α^(1)(t)·e^-i/ħ(t-t_0)Ξ_α^(0). Eq. <ref> can be helpful (i) to stabilize the numerical solution when high frequencies are involved and (ii) to apply any approximation only to the time-dependent component of the Hamiltonian and preserve intact the equilibrium dynamics. The equivalent (iterative) integro-differential equation reads as P_α^int(t)=1-i/ħ∫_t_0^tdt_1 Ξ_α^(1)int(t_1)· P_α^int(t_1). §.§ Quadratic Hamiltonians Quadratic Hamiltonians play a fundamental role in many fields of physics as they retain the full complexity of a system in terms of its degrees of freedom as well as the possibility to describe to full extent the effects of applying a (time-dependent) external field or gradient to the system. Obviously, one cannot describe strong correlations, that is a deep and intense interplay between degrees of freedom, but this is not essential in many cases. As it specifically regards solid-state systems, the most relevant quadratic Hamiltonians are the tight-binding ones that can be built for real materials through wannierization (for example, exploiting Wannier90 <cit.>) of the basic standard results of almost any DFT code available. This procedure preserves the static Coloumb interaction among the electrons (appearing in the exchange integral within DFT), which usually results in the opening of gaps and in band repulsion. In presence of a time-dependent perturbation, e.g., a pump pulse, TD-DFT is usually applied although it results in very lengthy and very resource-consuming calculations. 
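To make the time-stepping concrete, a minimal Python/NumPy sketch that propagates the dynamical projection matrix through the equation of motion iħ∂_tP(t)=Ξ(t)·P(t) with a fourth-order Runge-Kutta scheme could read as follows; ħ is set to 1 and the 2×2 energy matrix (static diagonal part plus a pulsed off-diagonal perturbation) is purely illustrative.

    import numpy as np

    hbar = 1.0

    # Illustrative 2x2 energy matrix: static part plus a pulsed perturbation.
    Xi0 = np.diag([-0.75, 0.75])
    def Xi1(t):
        envelope = np.exp(-4.0 * np.log(2.0) * t**2 / 7.0**2)
        v = 0.1 * envelope * np.cos(2.33 * t)
        return np.array([[0.0, v], [v, 0.0]])

    def rhs(t, P):
        # i hbar dP/dt = Xi(t) . P   ->   dP/dt = -(i/hbar) Xi(t) . P
        return -1j / hbar * (Xi0 + Xi1(t)) @ P

    t, dt = -20.0, 0.01
    P = np.eye(2, dtype=complex)            # initial condition P(t0) = 1
    while t < 20.0:                         # fourth-order Runge-Kutta stepping
        k1 = rhs(t, P)
        k2 = rhs(t + dt / 2, P + dt / 2 * k1)
        k3 = rhs(t + dt / 2, P + dt / 2 * k2)
        k4 = rhs(t + dt, P + dt * k3)
        P = P + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt

    print(np.allclose(P @ P.conj().T, np.eye(2), atol=1e-6))   # unitarity of P(t)

The final check anticipates the relation P(t)·P^†(t)=1 discussed below and is a convenient monitor of the accuracy of the integrator.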
DPOA for time-dependent quadratic Hamiltonians is instead very fast and efficient although it neglects the dynamical Coloumb interaction, which can be safely discarded in many cases. Even excitonic effects can be easily described in DPOA by choosing the proper effective terms in the Hamiltonian under analysis. It is worth noticing that DPOA allows to retain and to catch the physics of all time-dependent complications and all transitions among the actual, although very numerous, bands of real materials (in contrast to Houston approach, see Sec. <ref>). Let us consider a system described by the following completely general time-dependent quadratic Hamiltonian in second quantization and Heisenberg picture ℋ(t)=a^†(t)·Ξ(t)· a(t), where a^†(t)=(a_1^†(t),…,a_n^†(t),…) is the creation operator in Heisenberg picture and vectorial notation with respect to the set of quantum numbers n=(n_1,n_2,…) that label all degrees of freedom of the system under analysis. a_n(t) obeys the canonical (anti-)commutation relations [a_n^†(t),a_m(t)]_η=δ_nm where, as usual, η=1 , anticommutation {}, should be used for fermions and η=-1 , commutation [], should be used for bosons. Ξ(t)=Ξ^(0)+Ξ^(1)(t) is the energy matrix, so that iħ∂_ta(t)=Ξ(t)· a(t), and, as main simplification coming from the Hamiltonian being quadratic, the eigenoperators of the system are just the a_n(t). Matrix Ξ^(0) gives the equilibrium Hamiltonian, ℋ^(0), and matrix Ξ^(1)(t) describes the coupling of the system to the time-dependent external perturbations and gives ℋ^(1)(t). Then, the total Hamiltonian is ℋ=ℋ^(0)+ℋ^(1)(t). Ξ^(1)(t) is non zero only after time t_0 so that the system is in equilibrium prior to it. According to this and to the general case discussed above, we have a(t)=P(t)· a(t_0), iħ∂_tP(t)=Ξ(t)· P(t), with the initial condition P(t_0)=1. At each instant of time, the canonical commutation relations obeyed by a_n(t) lead to P(t)· P^†(t)=1, which is an extremely useful relation to check the stability and the precision over time of any numerical approach used to compute P(t). §.§ Single-Particle Density Matrix (SPDM) To show how to obtain the dynamical properties of the system using the dynamical projection matrices P(t), we consider first the single-particle density matrix (SPDM) ρ(t)=⟨â(t)⊗â^†(t)⟩, whose equation of motion reads as iħ∂_tρ(t)=[Ξ(t),ρ(t)]. Once the time evolution of ρ(t) is known, it is possible to compute the average of any single-particle single-time operator X(t)=a^†(t)·𝒳(t)· a(t) and, therefore, of the corresponding physical quantity as follows, ⟨ X(t)⟩ =(ρ(t)·𝒳(t)). Obviously, if one wants to compute quantities involving only some specific degrees of freedom or transitions, 𝒳(t) should be just constructed out of the corresponding elements. Given that ρ(t)=P(t)·⟨ a(t_0)⊗ a^†(t_0)⟩· P^†(t)=P(t)·ρ(t_0)· P^†(t), we have ⟨ X(t)⟩ =[P^†(t)·𝒳(t)· P(t)·ρ(t_0)]. If we choose the quantum numbers n such that the corresponding operators diagonalize the equilibrium Hamiltonian H^(0), that is Ξ_nm^(0)=δ_nmε_n, we simply have ρ_nm(t_0)=δ_nm(1-η f_η(ε_n)) where f_η(ε)=1/e^βε+η is the related equilibrium distribution function. Once the dynamical projection matrices P(t) are known at all times, it is possible to recover all the results of the SPDM approach, and more importantly, go beyond them. In fact, we are not limited to single-particle properties and even within these latter not to single-time ones. 
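As a small illustration of this bookkeeping (a sketch only: the placeholder P_t below stands for a dynamical projection matrix obtained from a propagation such as the one sketched above, and the zero-temperature ρ(t_0) assumes one filled and one empty band):

    import numpy as np

    # Equilibrium density matrix in the band basis (zero temperature:
    # lower band filled, upper band empty).
    rho0 = np.diag([1.0, 0.0])

    def average(P_t, X):
        # <X(t)> = Tr[ P^dagger(t) . X . P(t) . rho(t0) ]
        return np.trace(P_t.conj().T @ X @ P_t @ rho0).real

    # Example: occupation of the upper band, X = |2><2|.
    N_up = np.diag([0.0, 1.0])

    theta = 0.3                              # placeholder "rotation" playing
    P_t = np.array([[np.cos(theta), -np.sin(theta)],      # the role of P(t)
                    [np.sin(theta),  np.cos(theta)]], dtype=complex)

    print(average(P_t, N_up))                # sin(theta)^2 = 0.0873...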
For instance, given a general single-particle two-time operator Y(t,t^')=a^†(t)·𝒴(t,t^')· a(t^'), the time evolution of its average ⟨ Y(t,t^')⟩ is simply given by ⟨ Y(t,t^')⟩ =[P^†(t)·𝒴(t,t^')· P(t^')·ρ(t_0)]. The extension to multi-particle multi-time operators is straightforward and requires only the knowledge of equilibrium averages, which for quadratic Hamiltonians can be easily calculated thanks to the Wick's theorem. §.§ Pumped lattice systems, Peierls expansion and multi-photon resonances Let us consider an electromagnetic pump pulse described by the vector potential A(t) and the electric field E(t)=-∂_tA(t), applied to a lattice system after time t_0. Accordingly, the dynamics in the dipole gauge is governed by the Hamiltonian ℋ(t)=∑_𝐤,ν,ν^'c̃_𝐤,ν^†(t)Ξ̃_𝐤,ν,ν^'(t)c̃_𝐤,ν^'(t) (see App. <ref> for derivation), where c̃_𝐤,ν(t) is the annihilation operator of an electron with momentum 𝐤 in the maximally localized Wannier state (MLWS) ν and <cit.> Ξ̃_𝐤,ν,ν^'(t)=T̃_𝐤+e/ħA(t),ν,ν^'+eE(t)∙D̃_𝐤+e/ħA(t),ν,ν^'. T̃_𝐤,ν,ν^' and D̃_𝐤,ν,ν^' are the hopping and dipole matrix elements in the reciprocal space, respectively, and the over-script ∼ indicates that they are expressed in the basis of the MLWSs (the one in which we get these parameters out of wannerization), e>0 is the value of electronic charge, and ∙ is the scalar product between vectors in real space. The momentum shift by the vector potential, 𝐤+e/ħA(t), resembles the Peierls substitution <cit.> and Eq. <ref> can be considered as its generalization to multi-band systems <cit.>. Eq. <ref> shows that the coupling to the pump pulse is two fold: the Peierls substitution (in both T̃_𝐤,ν,ν^' and D̃_𝐤,ν,ν^') and the dipole term E(t)∙D. It is worth noting that, for (d<3)-dimensional systems with transverse pump-pulse polarization and 0-dimensional systems, like quantum dots and molecules, there is no coupling through the Peierls substitution and the dipole term is the only coupling to the external field. The equilibrium Hamiltonian reduces to Ξ̃_𝐤,ν,ν^'(t<t_0)=T̃_𝐤,ν,ν^', which can be diagonalized through the matrix Ω_𝐤,ν,n as follows, δ_n,n^'ε_𝐤,n=∑_ν,ν^'Ω_𝐤,n,ν^†T̃_𝐤,ν,ν^'Ω_𝐤,ν^',n^', where n indicates the energy band. Being diagonal at equilibrium, the band basis provides a great advantage in computations. The transformation to the band basis is performed as D_𝐤,n,n^'=∑_ν,ν^'Ω_𝐤,n,ν^†D̃_𝐤,ν,ν^'Ω_𝐤,ν^',n^', Ξ_𝐤,n,n^'(t)=∑_ν,ν^'Ω_𝐤,n,ν^†Ξ̃_𝐤,ν,ν^'(t)Ω_𝐤,ν^',n^', and c_𝐤,n(t)=∑_νΩ_𝐤,n,ν^†c̃_𝐤,ν(t). It is worth recalling that c_𝐤(t)=P_𝐤(t)· c_𝐤(t_0), where P_𝐤(t_0)=1 and iħ∂_tP_𝐤(t)=Ξ_𝐤(t)· P_𝐤(t). Moreover, N_𝐤,n(t)=⟨ c_𝐤,n^†(t)c_𝐤,n(t)⟩, the time-dependent number of electrons in band n with momentum 𝐤, is given by N_𝐤,n(t)=∑_n^'P_𝐤,n,n^'(t)f_+(ε_𝐤,n^')P_𝐤,n^',n^†(t). For real materials (our recent work on germanium being an example <cit.>), with many bands involved in the dynamics and hopping and dipole parameters obtained in real space through wannerization, the presence of the Peierls substitution, 𝐤+e/ħA(t), in Eq. <ref> makes any time-dependent measure extremely time-consuming, as it is necessary, at each time step in the numerical time grid, to Fourier transform again and again, because of the shift, the hopping and dipole matrices to momentum space on the numerical momentum grid and, finally, perform the rotation to the band space. 
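The band-basis bookkeeping can be illustrated with the two-band toy model introduced later in the manuscript; in the sketch below a = 1, energies are in units of Δ, and placing the chemical potential μ = 0 in the middle of the gap is an assumption.

    import numpy as np

    # Bloch hopping matrix of the two-band toy model introduced later
    # (on-site energies -1.65 and 1.35, first-neighbor hoppings 0.2, -0.15
    # and -0.1, all in units of Delta; lattice constant a = 1).
    def T_wannier(k):
        c = np.cos(k[0]) + np.cos(k[1]) + np.cos(k[2])
        return np.array([[-1.65 + 0.4 * c, -0.2 * c],
                         [-0.2 * c,         1.35 - 0.3 * c]])

    k = np.zeros(3)                                 # Gamma point
    eps, Omega = np.linalg.eigh(T_wannier(k))       # Omega_k diagonalizes T_k
    print(eps[1] - eps[0])                          # equilibrium gap: 1.5 Delta

    def occupations(P_k, eps, mu=0.0):
        f = (eps < mu).astype(float)                # zero-temperature f_+(eps)
        # N_{k,n}(t) = sum_{n'} |P_{k,n,n'}(t)|^2 f_+(eps_{k,n'})
        return np.abs(P_k)**2 @ f

    print(occupations(np.eye(2, dtype=complex), eps))   # before the pump: [1., 0.]

With a propagated P_k(t), the same function returns the transient populations N_{k,n}(t).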
A very efficient way to deal with this problem, which makes it possible to study systems with really many bands without overheads in terms of time consumption and numerical precision, exploits the expansion of the hopping matrix and of the dipole matrix with respect to the vector potential, to sufficiently high order (determined by the maximum strength of the vector potential and the bandwidth of the system) and uses the expansion coefficients, computed once for all, at all times: T_𝐤+e/ħA(t)(t)=∑_m=0^∞1/m!Ω_𝐤^†·[∂_k_A^(m)T̃_𝐤]·Ω_𝐤(e/ħA(t))^m, D_𝐤+e/ħA(t)=∑_m=0^∞1/m!Ω_𝐤^†·[∂_k_A^(m)D̃_𝐤]·Ω_𝐤(e/ħA(t))^m, where ∂_k_A^(m) is the m-th partial derivative in momentum space in the direction of the pump-pulse polarization, Â, and A(t) is the magnitude of the vector potential: A(t)=A(t)Â. We call this procedure Peierls expansion hereafter. The expansion coefficients, that is, the m-th partial derivatives, can be efficiently computed by means of the Fourier transformation as, ∂_k_A^(m)T̃_𝐤=∑_R(iÂ∙R)^me^i𝐤∙RT̃_R, ∂_k_A^(m)D̃_𝐤=∑_R(iÂ∙R)^me^i𝐤∙RD̃_R, where T̃_R and D̃_R are the hopping and dipole matrices, respectively, in the direct space, as outputted by the wannierization procedure. Such an expansion is of fundamental relevance as it gives insight into the actual excitation processes active in the system and connects them to the symmetries of the band structure and of the dipole couplings. According to a well-established practice, we call the coefficient of the first-(second-)order term of the Peierls expansion, Eq. <ref>, of the hopping term T as the velocity (inverse-mass) term. The pump pulse A(t) can be usually represented as A(t)=A_0S(t)cos(ω_put+ϕ) where ω_pu is the central frequency of the pulse, ϕ is its phase, and S(t) is an envelope function that vanishes at t→±∞. A usual expression for the envelope function is a Gaussian, S(t)=e^-4ln2t^2/τ_pu^2, where τ_pu is its full-width at half maximum (FWHM) and, for the sake of simplicity, its center is just at t=0. Such an envelope gives a finite bandwidth to the pulse of the order 2πħτ_pu^-1, where τ_pu^-1 is FWHM of the corresponding Gaussian in frequency domain. Given the above expression for the pump pulse, A(t), we can expand its m-th power, A^m(t), and get Λ_𝐤+e/ħA(t)(t)=Ω_𝐤^†·Λ̃_𝐤·Ω_𝐤+∑_m=1^∞Θ_0,m(t) +2∑_l=1^∞[∑_m=0^∞Θ_l,m(t)]cos(lω_put+lϕ), Θ_l,m(t)=(eA_0S(t)/2ħ)^2m+l/m!(m+l)!Ω_𝐤^†·[∂_k_A^(2m+l)Λ̃_𝐤]·Ω_𝐤 where Λ can be either the hopping matrix T or the dipole matrix D. Such an expression allows us to understand the excitation processes. The first term on the right-hand side is just the pristine (time-independent) hopping/dipole matrix. The second term would result in a 𝐤-dependent energy shift coming from the even derivatives (mainly from the inverse-mass coefficient of the hopping term) if there would be no envelope function S(t). Actually, it is time-dependent because of the envelope function S(t), but not periodic, and will lead to the emergence of non-resonant side bands, as we will show in Sec. <ref>, on a timescale of the order τ_pu/√(2) around the envelope center provided that the energy-band symmetries do not require the inverse-mass term (and higher-order even terms) to be zero. The third term leads to Rabi-like l-photon resonances whenever the energy gap between any two bands in the system, not both empty or full at a certain instant of time, is close to lħω_pu within a bandwidth of order 2πħ√(l)τ_pu^-1 . 
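The Peierls expansion is straightforward to implement. The following sketch (Python/NumPy, with e = ħ = a = 1 and the two-band toy parameters used above as illustrative real-space matrices) builds the directional derivatives from the real-space hoppings and checks the truncated series against the direct substitution k → k + (e/ħ)A(t)Â:

    import numpy as np
    from math import factorial

    # Real-space hoppings of the illustrative two-band model (units of Delta):
    # on-site block T0 and a common block Td on the six first-neighbor vectors.
    T0 = np.array([[-1.65, 0.0], [0.0, 1.35]])
    Td = np.array([[0.2, -0.1], [-0.1, -0.15]])
    R_list = [np.array(v) for v in
              [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
    A_hat = np.array([0.0, 1.0, 0.0])        # pump polarization along y

    def dT(k, m):
        # m-th directional derivative: sum_R (i A_hat.R)^m e^{i k.R} T_R
        out = T0.astype(complex) if m == 0 else np.zeros((2, 2), complex)
        for R in R_list:
            out += (1j * (A_hat @ R))**m * np.exp(1j * (k @ R)) * Td
        return out

    def T_peierls(k, A, order=10):
        # truncated Peierls expansion of T_{k + (e/hbar) A(t) A_hat}
        return sum(dT(k, m) * A**m / factorial(m) for m in range(order + 1))

    k = np.array([0.3, 0.7, -0.2])
    A = 0.2 * 2 * np.pi                       # illustrative peak vector potential
    exact = dT(k + A * A_hat, 0)              # direct substitution k -> k + A y_hat
    print(np.abs(T_peierls(k, A) - exact).max())   # truncation error of the series

The derivative matrices dT(k, m) are computed once per k point; the time dependence then enters only through the scalar powers of A(t), which is what makes the expansion cheap.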
Each l-component of this term is active on a timescale of the order τ_pu/√(l) around the envelope center and has a phase shift of (l-1)ϕ with respect to the l=1 component. For very short values of τ_pu with respect to 2πω_pu^-1, that is, when we have so few cycles of the pump pulse within the envelope to hardly recognize any oscillation, we end up in an impulsive regime. Actually, given that the oscillation period decreases with l^-1 while the FWHM decreases with l^-1/2, even in the case where lower-l terms are impulsive, sufficiently-higher-l terms are anyway oscillatory, although these latter can have a negligible effect on the dynamics. §.§ Resonances and residual electronic excited population At resonance, the dynamics of the electronic population has a Rabi-like behavior which is completely different from the off-resonance behavior. In particular, the residual electronic population N_𝐤,n^res=N_𝐤,n(t→∞), that is the electronic population in band n at momentum 𝐤 after the application of the pump pulse, becomes a very relevant quantity to measure and analyze. For a perfectly periodic pump pulse, that is, with infinite extension in time and no envelope, checking the l-photon resonance condition requires just the comparison of the energy gaps to lħω_pu. Instead, the presence of an envelope broadens the range of frequencies appearing in the Fourier transform of the pump pulse and hence increases the range of resonant energy gaps. To quantify this occurrence and on the basis of what reported in the previous section, we define the normalized strength of a l-photon resonance with respect to an energy gap ε_gap, w_l(ε_gap) as w_l(ε_gap)=e^-τ_pu^2/16ln2ħ^2l(ε_gap-lħω_pu)^2, where ω_pu is the pump-pulse frequency and τ_pu is the FWHM of its Gaussian envelope. Then, to measure the total number of effective l-photon resonant energy gaps, W_l, is sufficient to sum up all normalized strengths over all points 𝐤 of the numerical momentum grid for all possible pairs of valence-conduction bands W_l=∑_𝐤,n_C,n_Vw_l(ε_𝐤,n_C-ε_𝐤,n_V), where n_C(n_V) runs over all conduction(valence) bands. The residual electronic population in one specific conduction band n_C at momentum 𝐤, N_𝐤,n_C^res, is the result of resonant processes originating in different valence bands { n_V} at the same momentum 𝐤. Each of this valence bands will contribute to N_𝐤,n_C^res with an undetermined portion of its residual hole population N_𝐤,n_V^(h)res=1-N_𝐤,n_V^res: ∑_n_VN_𝐤,n_V^(h)res=∑_n_CN_𝐤,n_C^res. Here, we suggest a procedure that allows to determine the contribution N_𝐤,n_C,n_V^res(l) of the residual hole population of the valence band n_V due to a l-photon resonant process to N_𝐤,n_C^res: N_𝐤,n_C^res=∑_l,n_VN_𝐤,n_C,n_V^res(l). The rationale is to assign to each of the valence band n_V such a contribution, N_𝐤,n_C,n_V^res(l), according to the strength of the l-photon resonant process involved, w_l(ε_𝐤,n_C-ε_𝐤,n_V), and to the actual value of N_𝐤,n_V^(h)res with respect to those of all other valence bands N_𝐤,n_C,n_V^res(l)=N_𝐤,n_V^(h)resw_l(ε_𝐤,n_C-ε_𝐤,n_V)/∑_n'_VN_𝐤,n'_V^(h)res∑_l^'w_l^'(ε_𝐤,n_C-ε_𝐤,n'_V)N_𝐤,n_C^res. 
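As a minimal illustration of this bookkeeping (the band energies, residual populations, and pulse parameters below are hypothetical numbers, and ħ is set to 1):

    import numpy as np

    hbar = 1.0

    def w_l(eps_gap, l, omega_pu, tau_pu):
        # normalized strength of an l-photon resonance for a Gaussian envelope
        return np.exp(-tau_pu**2 / (16.0 * np.log(2.0) * hbar**2 * l)
                      * (eps_gap - l * hbar * omega_pu)**2)

    # Hypothetical single-k example: two valence bands and one conduction band.
    eps_v = np.array([-1.2, -0.8])            # valence energies (illustrative)
    eps_c = np.array([1.4])                   # conduction energy (illustrative)
    Nh_res = np.array([0.05, 0.02])           # residual hole populations
    Nc_res = np.array([0.07])                 # residual electron population
    omega_pu, tau_pu, l_max = 2.33, 7.0, 3

    gaps = eps_c[:, None] - eps_v[None, :]                   # shape (nC, nV)
    w = np.array([w_l(gaps, l, omega_pu, tau_pu) for l in range(1, l_max + 1)])
    denom = (Nh_res[None, :] * w.sum(axis=0)).sum(axis=1)    # one value per nC
    # Share of Nc_res assigned to valence band nV and to the l-photon process,
    # weighted by w_l and by the available residual hole populations.
    N_assign = Nh_res[None, None, :] * w / denom[None, :, None] * Nc_res[None, :, None]

    print(N_assign.sum())                     # recovers sum_nC N^res_{k,nC} = 0.07
    print(N_assign.sum(axis=(1, 2)))          # contribution of each l-photon process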
Given these ingredients, it is now possible to compute (i) the contribution to N_𝐤,n_C^res coming from all l-photon resonant processes, N_𝐤,n_C^res(l), N_𝐤,n_C^res(l)=∑_n_VN_𝐤,n_C,n_V^res(l), (ii) the contribution to N_𝐤,n_C^res coming from each valence band n_V, N_𝐤,n_C,n_V^res, N_𝐤,n_C,n_V^res=∑_lN_𝐤,n_C,n_V^res(l), (iii) the total residual electronic population at momentum 𝐤 coming from all l-photon resonant processes, N_𝐤^res(l), N_𝐤^res(l)=∑_n_V,n_CN_𝐤,n_C,n_V^res(l), (iv) the average residual electronic population per momentum point coming from all l-photon resonant processes, N^res(l), N^res(l)=1/M_grid∑_𝐤,n_V,n_CN_𝐤,n_C,n_V^res(l), where M_grid is the total number of momentum points in the numerical grid, and finally we can be interested in (v) the average residual excited electronic population per momentum point, N^res, which is actually the residual excitation population per unit cell and does not require our procedure of assignment, N^res=1/M_grid∑_𝐤,n_CN_𝐤,n_C^res. §.§ The generalized Houston approach One of the most commonly adopted methods to simulate the behavior of pumped semiconductors is the Houston approach <cit.>, which has been formulated and is generally used in first quantization and in the velocity gauge (see App. <ref>). Here, we reformulate this approach in second quantization within the DPOA framework, highlighting its limitations and drawbacks. We have seen that the Hamiltonian of a pumped quadratic lattice system has the general form H(t)=∑_𝐤H_𝐤(t) where H_𝐤(t)=c_𝐤^†(t)·Ξ_𝐤(t)· c_𝐤(t) and c_𝐤(t_0)=(c_𝐤,1(t_0),…,c_𝐤,ν(t_0),…) is the canonical operatorial basis at equilibrium in vectorial notation for an electron with momentum 𝐤 and with ν denoting all possible degrees of freedom of the system. Let us consider the time-dependent transformation matrix U_𝐤(t) that diagonalizes Ξ_𝐤(t) at each instant of time, i.e., Ξ_𝐤^S(t)=U_𝐤^†(t)·Ξ_𝐤(t)· U_𝐤(t) has only diagonal elements that are usually called instantaneous bands. Then, we can define a new operatorial basis for the system, the Houston basis c_𝐤^S(t), given by c_𝐤^S(t)=U_𝐤^†(t)· c_𝐤(t). Within the DPOA framework, we can write c_𝐤^S(t)=P_𝐤^S(t)· c_𝐤(t_0) where P_𝐤^S(t) is the Houston projection matrix that satisfies the following equation of motion iħ∂_tP_𝐤^S(t)=[Ξ_𝐤^S(t)+Π_𝐤(t)]· P_𝐤^S(t), where Π_𝐤(t)=iħ∂_tU_𝐤^†(t)· U_𝐤(t). Another quite diffused variant of the Houston method can be obtained, within second quantization, by the following transformation P_𝐤^' S(t)=e^i/ħ∫_t_0^tdt^'Ξ_𝐤^S(t^')· P_𝐤^S(t), which results in the following equation of motion iħ∂_tP_𝐤^' S(t)=Π_𝐤^'(t)· P_𝐤^' S(t), where Π_𝐤^'(t)=e^i/ħ∫_t_0^tdt^'Ξ_𝐤^S(t^')·Π_𝐤(t)·e^-i/ħ∫_t_0^tdt^'Ξ_𝐤^S(t^') Computing Ξ_𝐤^S(t) and Π_𝐤(t), or equivalently Π_𝐤^'(t), is not only extremely more time-consuming when many bands are involved as in real materials than just using Ξ_𝐤(t), as in DPOA, because of the numerical diagonalizations necessary to obtain U_𝐤(t) and ∂_tU_𝐤^†(t) at each instant of time, but it can be extremely difficult to calculate it numerically, because of the well-known difficulty of tracking the phase of eigenvectors between different instants of time in particular in the presence of instantaneous-band crossing (dynamical degeneracy) <cit.>. This usually leads to implementing the Houston method only for very few (two or three) effective bands and to use approximate 𝐤-independent matrix elements. 
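The phase-tracking issue just mentioned can be made concrete with a small sketch: the instantaneous eigenbasis is obtained by diagonalizing Ξ_k(t) on a time grid, and the arbitrary phase of each eigenvector is fixed by its overlap with the previous step (a simple, and not fail-safe, prescription; near instantaneous-band crossings it can still break down).

    import numpy as np

    def instantaneous_basis(Xi_t_list):
        # Diagonalize Xi_k(t) on a time grid and align the eigenvector phases
        # between consecutive steps via their mutual overlaps.
        U_prev = None
        eps_list, U_list = [], []
        for Xi in Xi_t_list:
            eps, U = np.linalg.eigh(Xi)
            if U_prev is not None:
                for n in range(U.shape[1]):
                    overlap = U_prev[:, n].conj() @ U[:, n]
                    if abs(overlap) > 1e-12:
                        U[:, n] *= overlap.conjugate() / abs(overlap)
            U_prev = U
            eps_list.append(eps)
            U_list.append(U)
        return np.array(eps_list), np.array(U_list)

From U_k(t) on the grid, Π_k(t)=iħ∂_tU_k^†(t)·U_k(t) can then be formed by finite differences.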
Actually, DPOA can yield, if ever needed, the exact Houston-method results just computing P_𝐤^S(t) as P_𝐤^S(t)=U_𝐤^†(t)· P_𝐤(t) where P_𝐤(t) is the usual DPOA dynamical projection matrix: c_𝐤(t)=P_𝐤(t)· c_𝐤(t_0) and iħ∂_tP_𝐤(t)=Ξ_𝐤(t)· P_𝐤(t). §.§ Inter- and intra-band transitions Within DPOA, it is straightforward to separate the effects of inter-/intra-band transitions. In order to have only intra-band transitions in the dynamics, in the basis of equilibrium bands, those indexed by n, one needs to keep only the diagonal elements of Ξ_𝐤(t) and remove all off-diagonal ones which cause transitions among the bands: iħ∂_tP_𝐤,n,n^'^intra(t)=ε_𝐤,n^intra(t)P_𝐤,n,n^'^intra(t), where ε_𝐤,n^intra(t)=Ξ_𝐤,n,n(t)=∑_ν,ν^'Ω_𝐤,n,ν^†Ξ̃_𝐤,ν,ν^'(t)Ω_𝐤,ν^',n. Eq. <ref> has the formal solution P_𝐤,n,n^'^intra(t)=δ_n,n^'e^-i/ħ∫_t_0^tε_𝐤,n^intra(t^')dt^'. On the other hand, in order to keep only inter-band transitions, it is needed to keep the off-diagonal elements of Ξ_𝐤(t) and discard the Peierls substitution in its diagonal elements: ε_𝐤,n^inter(t)=ε_𝐤,n+eE(t)∙D_𝐤,n,n. Accordingly, we have iħ∂_tP_𝐤,n,n^'^inter(t)=ε_𝐤,n^inter(t)P_𝐤,n,n^'^inter(t) +∑_n̅≠ nΞ_𝐤,n,n̅(t)P_𝐤,n̅,n^'^inter(t). Usually, the diagonal elements of the dipole matrix are negligible, D_𝐤,n,n≃0, and therefore ε_𝐤,n^inter(t) is almost equal to the equilibrium band energy ε_𝐤,n. The Houston method is often used to perform the same kind of analysis. Within the velocity gauge, to remove the intra-band dynamics and define an only inter-band one, one sets 𝐤+e/ħA(t)→𝐤 in the instantaneous eigenenergies and eigenvectors reducing them to the equilibrium ones, but one still computes the projection coefficients (see Eq. <ref>) through the full equation of motion whose inter-band term just comes from the differentiation of the very same Peierls-like term. This is somehow questionable and ambiguous. At any rate, defining inter- and intra-band dynamics in the Houston basis is again ambiguous as the instantaneous bands are superpositions of equilibrium bands and therefore any interpretation becomes very cumbersome. §.§ Green's functions and TR-ARPES signal Green's functions (GFs) are extremely important tools as they allow to compute many interesting properties of a system. The most relevant single-particle two-time electronic GFs are the retarded, G^R, and the lesser, G^<, GFs, defined in the vectorial notation as follows G_𝐤^R(t,t^')=-iθ(t-t^')⟨{ a_𝐤(t),a_𝐤^†(t^')}⟩ , G_𝐤^<(t,t^')=i⟨ a_𝐤^†(t^')⊗ a_𝐤(t)⟩ . Even for a quadratic Hamiltonian, the GFs cannot be computed within the SPDM approach (unless one defines a two-time SPDM <cit.>, which is computationally very heavy), but they can be straightforwardly obtained within DPOA in terms of the dynamical projection matrices P as G_𝐤^R(t,t^')=-iθ(t-t^')P_𝐤(t)· P_𝐤^†(t^'), G_𝐤^<(t,t^')=i[P_𝐤(t)·(1-ρ_𝐤(t_0))· P_𝐤^†(t^')]^T, where, in the band basis in which the equilibrium Hamiltonian is diagonal, δ_n,n^'-ρ_𝐤,n,n^'(t_0)=δ_n,n^'f_+(ε_𝐤,n). At equilibrium, the usual way to study the energy bands of the system, ε_𝐤,n, and their corresponding occupations, is to compute the spectral functions through the imaginary components of the retarded and of the lesser GFs, respectively. However, out-of-equilibrium, the spectral functions are not necessarily non-negative quantities <cit.> (see App. <ref>). This occurrence invalidates their physical interpretation of availability and occupation of the corresponding energies per momentum. 
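For completeness, a sketch of how the lesser GF follows from a stored sequence of dynamical projection matrices; P_list is assumed to contain P_k(t_i) on a time grid and f_occ the equilibrium occupations f_+(ε_{k,n}) in the band basis.

    import numpy as np

    def lesser_gf(P_list, f_occ):
        # G^<_k(t_i, t_j) = i [ P_k(t_i) . (1 - rho_k(t0)) . P_k^dagger(t_j) ]^T,
        # with (1 - rho(t0))_{nn'} = delta_{nn'} f_+(eps_{k,n}) in the band basis.
        F = np.diag(f_occ)
        nt = len(P_list)
        nb = P_list[0].shape[0]
        G = np.empty((nt, nt, nb, nb), dtype=complex)
        for i, Pi in enumerate(P_list):
            for j, Pj in enumerate(P_list):
                G[i, j] = 1j * (Pi @ F @ Pj.conj().T).T
        return G

The retarded GF is obtained analogously from -iθ(t_i-t_j)P_k(t_i)·P_k^†(t_j).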
Nevertheless, such an information is of crucial importance to describe and understand the response of the system to external probes. Indeed, out of equilibrium, one investigates the TR-ARPES signal <cit.>, which individuates the occupation of the energy ω at momentum 𝐤 for a probe pulse centered at time t_pr. The TR-ARPES signal is proportional to I_𝐤^<(ω,t_pr)=τ_pr/√(8πln2)∫_-∞^+∞dt_1∫_-∞^+∞dt_2S(t_1-t_pr) S(t_2-t_pr)[e^iω(t_1-t_2)[G_𝐤^<(t_1,t_2)]], where S(t-t_pr)=2√(ln2)/√(π)τ_pre^-4ln2(t-t_pr)^2/τ_pr^2 is the probe-pulse envelope which is assumed to be Gaussian with a FWHM τ_pr. Here we assumed that the TR-ARPES matrix elements are just constant numerical factors and removed them from the expression. Moreover, we assumed that the ejected photo-electrons outside of the sample, originating from orthogonal electronic states inside of the solid, are described by orthogonal wave functions. This assumption leads to the presence of the trace () in Eq. <ref>. At any rate, [G_𝐤^<(t_1,t_2)] is invariant with respect to the chosen basis as it is desirable. Without such assumptions, one would need to carry on a detailed modeling to get the actual matrix elements <cit.>. We have chosen the normalization factor in such a way that I^<(ω,t_pr) is normalized to the total number of particles at momentum 𝐤, ∫_-∞^+∞dω I_𝐤^<(ω,t_pr)=∑_nN_𝐤,n. I_𝐤^<(ω,t_pr) gives information about the occupied states. Instead, to identify the available states (ω,𝐤), that is the bands out-of-equilibrium or TR-ARPES bands, we use the retarded GF in place of the lesser one and define I_𝐤^R(ω,t_pr)=-τ_pr/√(2πln2)∫_-∞^+∞dt_1∫_-∞^+∞dt_2S(t_1-t_pr) S(t_2-t_pr)[e^iω(t_1-t_2)[G_𝐤^R(t_1,t_2)]]. It is straightforward to show that, in the band basis, I_𝐤^<(ω,t_pr)= ∑_n∑_n^'L_𝐤,n;n^'(ω,t_pr)f_+(ε_𝐤,n^'), I_𝐤^R(ω,t_pr)= ∑_n∑_ν^'L_𝐤,n;n^'(ω,t_pr), where L_𝐤,n;n^'(ω,t_pr)= =τ_pr/2√(2πln2)|∫_-∞^+∞dt_1S(t_1-t_pr)e^iω t_1P_𝐤,nn^'(t_1)|^2, which guarantees that the TR-ARPES signal is always non-negative. Eqs. <ref> and <ref> provide a generalized fluctuation-dissipation theorem for TR-ARPES signal. § A TWO-BAND LATTICE SYSTEM: A NOTEWORTHY APPLICATION Very recently, we have proved the capabilities of DPOA in investigating real materials by exploiting it to analyze the actual photo-injection mechanisms in germanium within an ultrafast (attosecond) pump-probe setup <cit.>. To discuss the variety of possible physical phenomena without being limited by the characteristics of a single particular real material, here we choose to study a cubic lattice system, of lattice constant a, with two bands corresponding to the main valence and conduction bands in a semiconductor. We consider two states (MLWFs) with the onsite energies T̃_𝐑=0,1,1=-1.65Δ and T̃_𝐑=0,2,2=1.35Δ, respectively, diagonal first-neighbor hoppings T̃_𝐑=δ,1,1=0.2Δ and T̃_𝐑=δ,2,2=-0.15Δ, and off-diagonal first-neighbor hoppings T̃_𝐑=δ,1,2=T̃_𝐑=δ,2,1=-0.1Δ, where T̃_R,ν,ν^' is the hopping matrix between two sites at distance R and states ν and ν^', respectively, δ∈{ a(±1,0,0),a(0,±1,0),a(0,0,±1)} and Δ is the unit of energy that can be adjusted to obtain the desired band gap energy at Γ=(0,0,0). With our parameters, the band gap at Γ is 1.5Δ, so that in order to have a gap of [0.75]eV for instance, one should set Δ=[0.5]eV. For the cases that we analyze with a finite dipole, we consider an on-site (local) and off-diagonal dipole moment: 𝐃̃_𝐑=0,1,2=𝐃̃_𝐑=0,2,1^*=i0.05a𝐣̂, which will lead only to a 0-th term in its Peierls expansion. In Fig. 
<ref>, top panel, we show the high-symmetry points of the first Brillouin zone, while in the bottom panel we show the equilibrium energy bands, ε̅_𝐤,val=ε_𝐤,1/Δ and ε̅_𝐤,cond=ε_𝐤,2/Δ, for a path which connects these high-symmetry points (the main path hereafter). All energies denoted with a bar on top are divided by Δ and hence dimensionless. Having Δ as the unit of energy, the unit of time is simply chosen to be ħ/Δ, which results in the dimensionless time t̅=tΔ/ħ for each time t. We apply a pump pulse in the form A(t̅)=A(t̅)𝐣̂ where A(t̅) is a wave packet given by A(t̅)=2πħ/aeA̅_0e^-(4ln2)t̅^2/τ̅_pu^2cos(ω̅_put̅), in which the center of the pump pulse is taken as the origin of time axis. The dimensionless frequency of the pump pulse is chosen to be ω̅_pu=ω_puħ/Δ=2.33 and, unless otherwise explicitly stated, the FWHM is chosen to be τ̅_pu=7 and the dimensionless pump-pulse amplitude A̅_0=0.2. The square of the pumping vector potential as a function of time is plotted in Fig. <ref>, top panel. Fig. <ref> bottom panel shows the energy gaps, ε̅_𝐤,gap=ε̅_𝐤,cond-ε̅_𝐤,val, at the k points along the main path, while the colored map shows w_l=1,2(ε̅_𝐤,gap), which indicates the strength of l-photon resonance for each energy gap. For TR-ARPES signal, we apply a probe pulse with FWHM of τ̅_pr=τ̅_pu=7, unless otherwise explicitly stated. We study the dimensionless signals that are obtained as I̅_𝐤^R,<=Δ/ħI_𝐤^R,<. In Fig. <ref>, we show I̅_𝐤^eq,R(ω̅) and I̅_𝐤^eq,<(ω̅) at equilibrium, that is when no pump pulse is applied to the system. The finite width of the probe pulse results in a broadening of the levels, which is intrinsic to quantum mechanics and unavoidable. Increasing the FWHM of the probe pulse, one can decrease this broadening, but we are not interested in probe pulses much wider than the pump-pulse envelope. The retarded signal, I̅_𝐤^eq,R(ω̅), is peaked around both valence and conduction band energies and shows the spectrum of the system, while the lesser signal, I̅_𝐤^eq,<(ω̅), shows the occupied valence-band levels only, which is the signal measured in experiments. §.§ Local dipole coupling (no Peierls substitution) As first case, we consider a Hamiltonian in which the coupling to the pumping field comes only through a local dipole moment, i.e., we neglect the Peierls substitution in the hopping term and in the dipole one (in Eq. <ref> we set 𝐤+e/ħA(t)→𝐤), to focus only on the effects of such a coupling on the system and analyze them in detail. This case is relevant to systems such as quantum dots and molecules, and low-dimensional systems with transverse pumps. In Fig. <ref>, we show the maps of TR-ARPES signals along the main path. The left panel shows the retarded signal, I̅_𝐤^R(ω̅,t̅_pr=0), for the case where the center of the probe pulse coincides with the center of the pump pulse. The valence and conduction bands are more broad than at equilibrium (Fig. <ref>) because the electrons get excited to the conduction band and cannot be assigned to a specific band anymore, inducing a quantum-mechanical uncertainty in the energy of the bands themselves. The photon-side-bands (PSBs) emerge at energies that differ from the main-band energies of integer multiples of the (dressed) pump-pulse photon energy. Some PSBs overlap in energy with the conduction and the valence bands and, therefore, are not distinguishable in the map of the retarded signal. 
On top of the maps, we reported both the equilibrium band energies (black solid curves) and the local maxima in energy of the signals at each 𝐤 (green dots), that indicate the (out-of-equilibrium) bands of TR-ARPES. As the retarded signal shows, the equilibrium valence and conduction bands coincide with TR-ARPES ones: a local dipole, for realistic intensities, has negligible effects on the TR-ARPES bands of the system. Since in equilibrium only the valence band is occupied, the lesser signal, I̅_𝐤^<(ω̅,t̅_pr=0), which is reported in the middle panel of Fig. <ref>, shows only the valence band and its corresponding PSBs. Wherever (in 𝐤 space) we have a one-photon resonance, the related resonant one-photon PSB is definitely stronger than other PSBs as it coincides with the conduction band in this case. The two-photon PSBs are some orders of magnitude weaker than the one-photon ones and, in the scale we have chosen for the maps, it is not possible to see them. If we probe the system after the pump pulse is turned off, i.e., by setting a large t̅_pr→+∞, but still much shorter than the time scale of other decoherence and recombination processes like spontaneous emission or electron-phonon interaction, the spectrum of the system goes back to equilibrium, so that we have I̅_𝐤^R(ω̅,t̅_pr→+∞)=I̅_𝐤^R(ω̅,t̅_pr→-∞), which is already shown in Fig. <ref> and we do not repeat here. In Fig. <ref>, right panel, we report I̅_𝐤^<(ω̅,t̅_pr→+∞): contrarily to what happens for the retarded signal, the lesser signal shows residual effects at the k points for which the pump-pulse frequency is in one-photon resonance with the equilibrium gap energy. The more-than-one-photon PSBs do not show any residual signal even though at t̅_pr=0 they are non-vanishing. This is because we have only a local dipole in the interaction Hamiltonian and such a term have no cos(lω̅_put̅) term for l>1, hence no more-than-one-photon Rabi-like resonances. According to our experience, this can be overcome having more than two bands in the system (not shown). In Fig. <ref>, top panel, we plot the residual excited electronic population in the conduction band, N_𝐤,cond^res, for the k points along the main path as a function of the pump-pulse amplitude. Rabi-like oscillations induce residual excited populations at the k points for which a one-photon resonance condition is realized. The finite width of the pump pulse broadens the resonant energies so that, in addition to the exact resonances, also the k points in the proximity of resonant ones have some residual excited population (compare with Fig. <ref>). In Fig. <ref>, bottom panel, we plot the residual excited population in the conduction band at S as a function of the amplitude and of the FWHM of the pump pulse. Being (i) the Rabi frequency, Ω_R, proportional to the pump-pulse amplitude and (ii) the overall oscillation time roughly proportional to the FWHM of the pump pulse, the residual excited population is almost constant wherever A̅_0τ̅_pu is constant, that yields the hyperbolic shape of the color contours in the figure. For the very same reason, on both cuts at fixed A̅_0 and at fixed τ̅_pu, one clearly sees the signature of the Rabi-like oscillations. For instance, at fixed A̅_0, that is at fixed Ω_R, the end tail (in time) of the pump-pulse envelope determines the residual excited population and on changing τ̅_pu one can scan the Rabi-like oscillating behavior of the population (roughly N_𝐤;cond^res∝sin^2(Ω_Rτ_pu)). 
It is worth reminding that, for smaller pump-pulse amplitudes, which are those experimentally more relevant, one can approximate sin(Ω_Rτ_pu)≃Ω_Rτ_pu, which results in N_𝐤;cond^res∝A̅_0^2τ̅_pu^2. §.§ Peierls substitution in hopping (no dipole) In this case, we consider an interaction with the pump pulse via the Peierls substitution in the hopping term and set the dipole to zero. This is very relevant as the dipole term is often negligible in many realistic cases. Moreover, neglecting the dipole we can focus on the effects of band symmetries on TR-ARPES signal and electronic excitations and analyze them in detail. In Fig. <ref> top-left (top-right), we show the map of I̅_𝐤^R(ω̅,t̅_pr=0) (I̅_𝐤^<(ω̅,t̅_pr=0)). The higher local maxima of the TR-ARPES signal, that one can consider the main TR-ARPES bands, are slightly shifted with respect to the equilibrium valence and conduction bands and show almost no correspondence to the instantaneous eigenenergies at time zero. This is expected since TR-ARPES measures the system over a time period and not at a specific instant of time. We will shed more light on this issue later on (Fig. <ref> and related discussion). For the retarded signal, which shows the full TR-ARPES spectrum of the system, we can see both valence and conduction bands and all of their sidebands. Because of the finite broadening of the bands, they overlap and distinguishing them in the case of retarded signal can be very difficult. Obviously, in the lesser signal, we see only the valence band and its sidebands. The one-photon PSBs originate from the velocity term in the Peierls expansion, which is proportional to sin(ak_y) and, therefore, identically vanishes on the planes Γ-X-A-Z and Y-M-B-D, yielding no one-photon PSB there. Instead, on S, C, middle points of the lines X-M, A-B, Z-D and Γ-Y, the second order (inverse-mass) term – as well as all other even terms – of the Peierls expansion vanishes as it is proportional to cos(ak_y). Recall that the polarization of the pump-pulse has been chosen along the y direction. However, even though at some of these points the two-photon PSBs are very weak, at some others (such as S), where we have a strong one-photon PSB, the two-photon PSB is also strong, which shows that the second order signal is assisted by multiple actions of the first order terms of the Hamiltonian. At the k points where the band gap is at either one- or two-photon resonance with the pump pulse, the corresponding PSB is much stronger than non resonant ones, provided that it is not zero by symmetry. At the k points where the inverse-mass vanishes, we have practically no shift of the TR-ARPES bands with respect to the equilibrium ones. The shift in the bands is mainly due to the non-oscillating components that appear in the even order terms of Peierls expansion (∑_m=1^∞Θ_0,m(t) in Eq. <ref>), which identically vanish when the inverse-mass vanishes by symmetry. Moreover, the higher order effects of the same term results in some weak side-bands near the main bands as it is more clear in the map of I̅_𝐤^<(ω̅,t̅_pr=0) (top-right panel). It is worth noting that if we had an infinitely oscillating pump pulse without an envelope (that is, a pump-pulse FWHM extremely longer than the probe-pulse FWHM), there would have been no higher order effects and the non-oscillating components would have just resulted in rigid shifts of the bands. 
Therefore, we dub these new side-bands as envelope-Peierls side-bands (EPSBs): they are due to both the envelope and the even terms of Peierls expansion. In Fig. <ref>, bottom-left (bottom-right) panel, we show the map of I̅_𝐤^<(ω̅,t̅_pr=0) for the dynamics with inter-band-only (intra-band-only) transitions. Interestingly, the main TR-ARPES bands are practically on top of the equilibrium ones in the case of inter-band-only dynamics. On the other hand, for the case of intra-band-only dynamics, we see the same shift as for the full dynamics. This is consistent with the inter-band transitions governing the electronic transitions between the bands and not altering the bands noticeably, while the intra-band transitions change the band energies dynamically. As we already mentioned above, the shift in the main bands have the same origin as the EPSBs and since we do not have band shifts for inter-band-only transitions, the EPSBs disappear as well. PSBs have different behaviors depending on being one-photon or two-photon, and in resonance or off resonance. The resonant one-photon PSBs are much stronger in the inter-band-only case (bottom-left panel) than in the intra-band-only case (bottom-right panel), because in order to differentiate between in resonance and off resonance, one needs the inter-band transitions. On the contrary, the off-resonant one-photon PSBs are stronger in the intra-band-only case than in the inter-band-only one, which shows that for the system parameters that we have chosen, out of resonance, the inter-band transitions have very negligible effects on the system, while intra-band transitions obviously still induce one-photon PSBs. In fact, in the intra-band-only case, our system is equivalent to a single-band (the valence band) Floquet one as the conduction band is obviously empty and not coupled to the valence band. However, in our system, in the inter-band-only case (bottom-left panel), two-photon off-resonance PSBs can be noticeable in comparison to the case of full dynamics (top-right panel). The resonant one-photon (two-photon) PSBs are stronger (weaker) in the case of full dynamics than in the case of the inter-band-only dynamics. This can be understood by noticing that removing intra-band transitions pins down the electrons at one-photon resonant k points and helps them to get more and more excited, while lack of the inter-play between first-order inter- and intra-band transitions reduces the two-photon resonant PSBs. In fact, considering a Hamiltonian with even terms only in the Peierls expansion (no first – velocity – and higher-order odd terms) and removing the intra-band transitions, one obtains stronger PSBs at the two-photon resonances (not shown). Another important property to be studied is the residual signal of TR-ARPES. As we already mentioned, after the action of pump pulse, the spectrum which is given by the retarded signal is exactly the one of equilibrium (Fig. <ref>), while the lesser signal is different. As shown in Fig. <ref> top panel, where we plot I̅_𝐤^<(ω̅,t̅_pr→∞), at the one- or two-photon resonant k points we have the corresponding residual signals at PSBs, unless the PSB is prohibited by symmetry. For instance, this condition realizes for one-photon PSBs at X, Y, and Z, where we have exact one-photon resonances, and for two-photon PSBs at the middle of A-B and at C, where we have non-exact two-photon resonances. Fig. <ref> bottom panel, shows the residual lesser signal for the dynamics given by inter-band-only transitions. 
The one-photon (two-photon) residual PSBs are stronger (weaker) for the inter-band-only dynamics than for the full one, according to the very same reasoning reported above. It is noteworthy that even though the intra-band-only transitions induce PSBs within the pump-pulse envelope (see Fig. <ref>, bottom-right panel), they yield no residual in the TR-ARPES signal, which returns to equilibrium after the pump pulse is turned off (Fig. <ref>). In Fig. <ref>, top panel, we plot the residual excited populations along the main path as a function of the pump-pulse amplitude. One- and two-photon resonances have residual excited populations that show Rabi-like oscillations with respect to changing the pump-pulse amplitude. Moreover, unlike in the former case (only local dipole), k points with the same gap energies (for example those along the path Y-S-X) have different behaviors as their velocities and inverse-masses are different, which yields different couplings to the pumping field. In particular, at the 𝐤 points where we have one-/two-photon resonances, but the velocity/inverse mass vanishes (see the related discussion above on the TR-ARPES signal regarding the relevant regions of the first Brillouin zone), there are no residual excitations. However, the points in their immediate proximity, with non-exact resonant gaps but non-zero velocities/inverse masses, host some residual excited populations. In Fig. <ref>, bottom panel, we see that for a dynamics with just the inter-band transitions, the residual excited population coming from one-photon resonances gets larger (on the contrary, if one keeps only the intra-band transitions, there would be no excitations at all). In this case, one-photon resonances get stronger because we have removed the intra-band transitions, which drive them transiently out of resonance and lead to a smaller residual excited population. However, there are some weak one-photon resonances that benefit from intra-band transitions since they are far from the exact resonant points and the intra-band transitions can bring them transiently closer to resonance. The net effect for them is to gain some residual excited population, so that the related resonant region in 𝐤-space appears wider. An example is the proximity of the S point on the path M-S-Γ. On the other hand, for the two-photon resonances this results in smaller residual excited populations. The two-photon resonances are assisted by the interplay between the inter-band and intra-band contributions of the first term of the Peierls expansion, which is lost when the intra-band transitions are removed altogether (similarly to the TR-ARPES signal). In fact, also in this case, considering a Hamiltonian with even terms only in the Peierls expansion and removing the intra-band transitions, one obtains larger and sharper (in k-space) residual excitations at the two-photon resonances (not shown). In Fig. <ref>, top panel, we plot the total residual excited population per unit cell, Eq. <ref>, as a function of the pump-pulse amplitude. We have considered an 8×8×8 k-grid to sample the first Brillouin zone, even though we checked the robustness of the results with respect to the size of the grid by also using 16×16×16 and 32×32×32 k-grids for a larger step in the pump-pulse amplitude (not shown). We compare the two cases: the full Hamiltonian and the one with inter-band transitions only. For all values of the pump-pulse amplitude, the total residual excited population with only inter-band transitions is larger.
The middle (bottom) panel of Fig. <ref>, shows the contribution from one-photon (two-photon) resonances, i.e., Eq. <ref> with l=1 (l=2). In our system, the largest contribution comes from the one-photon resonances (middle panel). Computing the relative multi-photon resonance strengths (see Eq. <ref>) in our grid, we find out that the relative strength of one-photon (l=1) resonances is 64%, while for two-photon (l=2) resonances is 36% (W_l/(W_1+W_2), see Eq. <ref>). Clearly, these numbers do not take into account the actual strength of the system-pump couplings at these resonant 𝐤 points and that the second order transitions are generally weaker than the first order ones. As we already explained above in detail, for the one-photon resonances (middle panel), the removal of intra-band transitions increases the residual excited populations, while for the two-photon resonances (bottom panel), the residual excited populations get reduced by removal of the intra-band transitions. In the latter case, increasing the pump-pulse amplitude to high values, the behavior changes and the results of inter-band-only Hamiltonian overcome the full Hamiltonian ones. This can be understood by noting that, upon removing intra-band transitions, the Rabi-like oscillations become in average slower over all of the two-photon resonant k points and of the sin-like shape we see only the monotonously increasing behavior that eventually manages to overcome the usual bending-over sin-like behavior in the case of the full Hamiltonian. This results in a higher residual excitation for very high amplitudes of the pump pulse in the case of inter-band-only and two-photon resonances. It is noteworthy that such very high pump-pulse intensities are not affordable in realistic setups as they would damage the sample. §.§ Both Peierls substitution and local dipole In this section, we consider both local dipole and Peierls substitution with the related Hamiltonian parameter values given in the former two cases. In this case, even though some of the effects can be explained by simply considering the mere addition of the effects yielded by the individual coupling terms, we clearly see that the interplay between the two interaction terms is very important. The retarded TR-ARPES signal along the main path at t̅_pr=0 is shown in Fig. <ref>, left panel, while the lesser TR-ARPES signal is shown in the middle panel. The local dipole strengthens both one-photon and two-photon PSBs. In this case, the k points with zero velocity do have one-photon PSBs, because of the local dipole which does not follow the symmetry of the bands. The TR-ARPES bands are definitely closer to the equilibrium bands rather than to the instantaneous eigenenergies. The presence of both coupling terms augments the broadening of the signals as it increases the excited population overall and, in particular, at the main resonant k point, S. Looking at the inter-band-only lesser TR-ARPES signal, which is shown in the right panel, we see similar behaviors to the case of zero dipole, except for one main difference: the reduction in the two-photon resonant signal is much stronger. As a matter of fact, the inter-play between inter-band and intra-band first-order terms assists the second-order two-photon resonance and is strengthened by the cooperation of Peierls substitution and dipole. 
It is worth noticing that we considered the local dipole to be just of inter-band form, therefore, the intra-band-only results are exactly the same as the case of zero dipole, which were presented in Fig. <ref>, bottom-right panel. In Fig. <ref> top panel, we plot the residual excited population along the main path vs the pump-pulse amplitude. The first important change with respect to the zero dipole case is that the one-photon resonant k points with zero velocity (X, Y, and Z) do have residual excited population now: the symmetry protection is lost in the presence of the dipole term (as discussed for the TR-ARPES signal). Moreover, having both local dipole and Peierls substitution increases the Rabi frequency on the line X-S-Y which yields the residual excited population at S to have a maximum at around the pump-pulse peak amplitude of A̅_0≃0.19, showing more clearly the Rabi-like behavior. In Fig. <ref> bottom panel, we plot the residual excited population keeping only inter-band transitions in the dynamics. Removing the intra-band transitions, noticeably increases the residual excited population at the resonance points near X and Y, so that they can also reach the maximum of full population inversion. Moreover, for the two-photon resonant k points, the difference between full and inter-band-only dynamics is much larger than in the case of zero dipole (as discussed for TR-ARPES signal). After investigating the residual excitation on the main path, we discuss the excitation per unit cell, which is obtained using a 8×8×8 k grid to sample the first Brillouin zone and plotted in the top panel of Fig. <ref>. Comparison with the two former cases of considering no Peierls substitution and having zero dipole (both shown in the same panel), we see the maximum occurs at a smaller pump-pulse amplitude, as the local dipole adds up to the Peierls substitution which increases the Rabi frequencies at one-photon resonant k points. Another relevant feature is related to only-inter-band transitions in the dynamics that seem to reduce the residual excited population, which is apparently in contradiction with the result of Fig. <ref>. However, the behavior of one-photon and two-photon resonance contributions, as plotted in the middle and bottom panels of Fig. <ref>, reveals that similar to the case of zero dipole, the inter-band-only dynamics gives more (less) residual excited population for the one-photon (two-photon) resonance contributions, but the difference between the inter-band-only and full dynamics of the two-photon resonances are much larger in this case, as we explained in the discussion of Fig. <ref>. §.§ More on the characteristics of the TR-ARPES signal In this section, we get more insights about the behavior of the system out of equilibrium by changing the probe-pulse parameters. For the coupling Hamiltonian, we consider both local dipole and Peierls substitution, but the general conclusions we will draw are independent of this choice. First, we study how the TR-ARPES signal changes on varying the center of probe pulse, t̅_pr, from before (equilibrium) to after (possible residual excitations) the pump-pulse envelope. The retarded and lesser TR-ARPES signals for two high-symmetry k points, Γ and S, are reported in Fig. <ref>, top panels. For both k points, the PSBs are detected as soon as the probe-pulse center enters the pump-pulse envelope, that is when the instantaneous eigenenergies become different from the equilibrium band energies. 
At equilibrium, as expected, the lesser signal (bottom panels) shows that the electrons reside in the valence band, while they get excited into the conduction band during the pump-pulse application. At S, which is exactly one-photon resonant, we register an almost complete population inversion, as the electron excitation process is very efficient. After the pump pulse is turned off, the residual signal at Γ is very weak (Γ is not in resonance), while at S we have a very strong residual signal because of the resonance condition. Having both the local dipole and the Peierls substitution yields slightly more residual signal than in the cases where one of the two coupling terms is removed. Moreover, at S we have the splitting of the valence band. Such a splitting is not visible at time t̅_pr=0, because the two wide split bands overlap with each other. Increasing either the local dipole or the pump-pulse amplitude, one can see the splitting even at time t̅_pr=0 (not shown). The splitting can be seen only in the lesser signal, which is the actual signal measured in the experiments, and not in the retarded one, and, accordingly, is due to the holes photo-injected in the system by resonant pumping of the valence electrons into the conduction band (photo doping). So far, the FWHM of the probe pulse was kept constant and equal to the one of the pump pulse, τ̅_pu=τ̅_pr=7. Now, we study the effect of varying τ̅_pr while keeping t̅_pr=0. In Fig. <ref>, the lesser signal is reported at an off-resonant k point (0.375,0.125,0.5) (top panel) and at the resonant k point S (bottom panel). The former point is chosen so that the energy difference between the instantaneous eigenenergies at time zero and the equilibrium-band energies is noticeable enough to better illustrate the phenomenology we are going to discuss. First, we analyze the behavior at the off-resonant k point (top panel). For very narrow probes, that is, for small τ̅_pr with respect to τ̅_pu, on decreasing τ̅_pr the signal gets wider and its peaks tend to the instantaneous eigenenergies. This indicates that the system is practically in the lower eigenstate, which is predominantly valence-band-like, as there is no excitation to the higher eigenstate. At any rate, the peaks will never exactly coincide with the instantaneous eigenenergies (even though they are very close to them) as the process is not adiabatic. We expect this also in real semiconductors and insulators, as there the off-diagonal terms of the coupling Hamiltonian are usually much smaller than the energy gaps determined by the total Hamiltonian. Instead, increasing the width of the probe-pulse envelope, τ̅_pr, corresponds to measuring the system over a finite time interval and, practically, to performing a time average over such an interval. This averaging process results in the emergence of side-bands while the main peaks tend to the equilibrium bands. The shifts of the bands are due to the high non-linearity of the processes and to the non-zero average of the oscillating pump pulse. The PSBs remain at almost fixed energies after they emerge because they are related to the oscillating component of the pumping field and, if the probe pulse is wide enough to see the oscillations, it does not matter how much wider it becomes. On the other hand, EPSBs change their energies on changing the width of the probe-pulse envelope, because they are driven by the non-oscillating component of the pumping field.
By increasing the width of the probe pulse to very high values, the resolution in energy increases and the peaks become very sharp. However, having such a large probe-pulse FWHM corresponds to (i) reducing more and more the time resolution of the measurement and (ii) including more and more equilibrium behavior (before the pump-pulse envelope) and residual effects (after the pump-pulse envelope) in the measurement. Therefore, we cannot obtain enough information about the real-time out-of-equilibrium dynamics of the system. On the other hand, on decreasing the width of the probe pulse, the signals become very wide in energy. This requires more and more experimental resolution in energy to determine the position of the peaks and understand the physics. Consequently, one needs to choose some intermediate value in order to cope with the unavoidable intrinsic time-energy uncertainty relationship of the underlying quantum mechanical system. The situation at resonance is quite different. As shown in the bottom panel of Fig. <ref>, even for the smallest values of τ̅_pr, the peak of the lesser TR-ARPES signal does not coincide with the lower eigenenergy, as the resonant dynamics forces the electrons to evolve in a superposition of valence- and conduction-band states. The superposition of two eigenstates results in the overlap of the TR-ARPES signals and, consequently, gives a peak somewhere in the middle of the two eigenenergies. Increasing the width of the probe pulse, the PSBs emerge again and the one-photon PSB is highly populated. It is noteworthy that the inverse mass at S is zero and this is why we do not have any shifting of the bands and no EPSB emerges. § SUMMARY AND PERSPECTIVES In this manuscript, we have reported on a novel model-Hamiltonian approach that we have recently devised and developed to study out-of-equilibrium real materials, the dynamical projective operatorial approach (DPOA). Its internals have been illustrated in detail and a noteworthy prototypical application, a pumped two-band (valence-conduction) system, is discussed extensively. DPOA naturally addresses the current need to overcome the limitations and drawbacks of the currently available ab-initio software packages and also of too simplistic approaches such as the Houston method. DPOA relies on the many-body second-quantization formalism and composite operators in order to be capable of handling both weakly and strongly correlated systems. DPOA exploits the tight-binding approach and the wannierization of DFT band structures in order to cope with the complexity and the very many degrees of freedom of real materials. DPOA uses the dipole gauge and the Peierls substitution in order to seamlessly address pumped systems and, in particular, pump-probe setups. We have devised an ad hoc Peierls expansion in order to make DPOA numerically extremely efficient and fast. This expansion makes clear how multi-photon resonances, rigid shifts, band dressings and different types of sidebands naturally emerge and allows a deep understanding of the related phenomenologies. We have defined a protocol for evaluating the strength of multi-photon resonances and for assigning the residual excited electronic population at each k point and band to a specific multi-photon process.
Comparing DPOA to the single-particle density-matrix approach and the Houston method, which we have generalized to the second-quantization formalism and rephrased in the DPOA framework to compute its dynamics exactly, we have shown that DPOA goes much beyond both of them in terms of computing capabilities (multi-particle, multi-time correlators) and complexity handling (all relevant bands of real materials). To study the injection processes and the out-of-equilibrium electronic dynamics, we have expressed the relevant out-of-equilibrium Green's functions and the (lesser) TR-ARPES signal within the DPOA framework. Then, defining a retarded TR-ARPES signal, which allows one to analyze the behavior of the dynamical bands independently of their occupation, we have shown that it is possible to obtain an out-of-equilibrium version of the fluctuation-dissipation theorem. Another very relevant aspect that we have thoroughly considered resides in the possibility of analyzing intra- and inter-band transitions in the TR-ARPES signal and in the residual electronic excited population by selectively inhibiting them in the model Hamiltonian. We have studied the three most relevant cases of light-matter coupling within the dipole gauge, which has been derived in the second-quantization formalism: only a local dipole (relevant to systems such as quantum dots and molecules, and low-dimensional systems with transverse pumps), only the Peierls substitution in the hopping term (relevant to many real materials), and both terms at once. Within the framework of a pumped two-band system, we have analyzed in detail the TR-ARPES signal and the residual electronic excited population with respect to the band energies and their symmetries, as well as their dependence on the pump/probe-pulse characteristics. We have studied: (i) how the first-order (in the pump-pulse amplitude) terms of the two types of light-matter couplings assist the higher-order ones; (ii) how their decomposition in terms of intra- and inter-band components can allow one to understand the actual photo-injection process; (iii) how the symmetries of the system rule the actual behavior of the lesser and the retarded TR-ARPES signals as well as of the residual excited populations; (iv) how the (dynamical) bands broaden out of equilibrium and shift with respect to the equilibrium ones; (v) how different kinds of photon (resonant, non-resonant) and envelope-Peierls sidebands emerge and vanish in relation to band symmetries and how the dipole term breaks this symmetry protection; (vi) how the residual electronic excited population accumulates in the conduction band, induced by Rabi-like oscillations at the multi-photon resonant, non-symmetry-protected k points, and the characteristics of such oscillations in terms of the pump-pulse features; (vii) how the width and the delay of the probe pulse affect the TR-ARPES signal. Very recently, we applied DPOA to unveil the different charge-injection mechanisms in ultrafast (attosecond) pumped germanium <cit.>, proving its efficiency and relevance to real experimental setups. In the near future, we will obtain, within DPOA, the expressions for the time-dependent optical response (transient reflectivity and absorption) in pump-probe setups and we will use them, as well as those for the TR-ARPES signals, for germanium and other real materials.
This kind of analyses is fundamental to advance the physical understanding of complex materials and the capability to eventually turn this knowledge into actual industrial and commercial applications, such as the recently proposed novel types of electronics. The authors thank Claudio Giannetti, Matteo Lucchini, Stefano Pittalis, and Carlo Andrea Rozzi for the insightful discussions. The authors acknowledge support by MIUR under Project No. PRIN 2017RKWTMY. § VELOCITY AND DIPOLE GAUGES: HAMILTONIAN, DENSITY AND CURRENT OPERATORS §.§ System Let us start from the single-particle Hamiltonian operator in first quantization, Ĥ_0, for an electron of charge -e and mass m in the periodic potential V(𝐫+𝐑_𝐢)=V(𝐫) generated by the Bravais lattice {𝐑_𝐢} of ions of a solid state system [𝐑_𝐢=∑_λ=1^3i_λ𝐚_λ where 𝐢=(i_1,i_2,i_3), i_λ∈ℤ and 𝐚_λ are the lattice vectors]: Ĥ_0=p̂^2/2m+V(𝐫̂), where 𝐩̂ and 𝐫̂ are the momentum and the position operators of the electron, respectively, that satisfy the canonical commutation relation [r̂_η,p̂_η^']=iħδ_ηη^', where η,η^'∈{ x,y,z}. In this appendix, we denote the operators in first-quantization formulation by the hat () over-script. The Bloch theorem states that we can find a solution ϕ̨_̨𝐤̨,̨n̨=e^i𝐤·𝐫̂ų_̨𝐤̨,̨n̨, parametrized by the band index n and the momentum 𝐤, of the Schrdinger equation, Ĥ_0ϕ̨_̨𝐤̨,̨n̨=ε_𝐤,nϕ̨_̨𝐤̨,̨n̨, where u_𝐤,n(𝐫)=𝐫u_𝐤,n has the periodicity of the Bravais lattice and ε_𝐤,n is the n-th band-energy dispersion. We also have Ĥ_0,𝐤ų_̨𝐤̨,̨n̨=ε_𝐤,nų_̨𝐤̨,̨n̨, where Ĥ_0,𝐤=e^-i𝐤·𝐫̂Ĥ_0e^i𝐤·𝐫̂, and ϕ_𝐤,n(𝐫)=𝐫ϕ_𝐤,n. §.§ Velocity gauge In the dipole approximation (i.e., for wavelengths much larger than the unit cell extent in the direction of propagation), an electromagnetic wave interacting with the system (the electrons) can be described by a homogeneous vector potential 𝐀(t). Then, according to the minimal coupling protocol 𝐩̂→π̂=𝐩̂+e𝐀(t), the Hamiltonian operator reads as Ĥ=π̂^2/2m+V(𝐫̂)=Ĥ_0+e/m𝐀(t)·𝐩̂+e^2/2mA^2(t), where · is the scalar product in direct space. This scenario is known as velocity gauge after the electron-field interaction term in the Hamiltonian: e𝐀(𝐫̂,t)·𝐩̂/m. Let us suppose that ψ̨ is the solution of the time-dependent Schrdinger equation, iħ∂/∂ tψ̨=Ĥψ̨. Then, the dynamics of the charge density operator ρ̂=-e𝐫 and, in particular, of its average ρ̂=ρ̂ψ=-eψ(𝐫,t) (recall that ψ(𝐫,t)=𝐫ψ) is given by ∂/∂ tρ̂=-i/ħ[ρ̂,Ĥ]=-1/2m∑_η=x,y,z∇_η[ρ̂π̂_η+π̂_ηρ̂]. Next, the continuity equation, ∂/∂ tρ̂+∇·𝐉̂=0, calls for the following definition for the current operator 𝐉̂=1/2m(ρ̂π̂+π̂ρ̂)=1/2m(ρ̂𝐩̂+𝐩̂ρ̂)+e/m𝐀(t)ρ̂, where we can distinguish the paramagnetic (first) and the diamagnetic (second) terms. It is worth noticing that the continuity equation can be equivalently written as follows: ∂/∂ tρ̂=i/ħ∑_η=x,y,z[π̂_η,Ĵ_η]. In order to move to second quantization in the Bloch basis, we need 𝐯_n,n^'(𝐤)=1/mϕ_𝐤,n𝐩̂ϕ_𝐤,n^' =1/ħu_𝐤,n∇_𝐤Ĥ_0,𝐤u_𝐤,n^' =δ_nn^'1/ħ∇_𝐤ε_𝐤,n-i/ħ(ε_𝐤,n^'-ε_𝐤,n)𝐁_n,n^'(𝐤), where we have used the relation 𝐩̂=-im/ħ[𝐫̂,Ĥ_0] and 𝐁_n,n^'(𝐤)=u_𝐤,n∇_𝐤u_𝐤,n^' is the Berry connection. It is worth noticing that the last expression requires that the Bloch basis used in the actual numerical calculations is complete. Then, we have ℋ=∑_𝐤,n,n^'ϕ_𝐤,nĤϕ_𝐤,n^'c_𝐤,n^†c_𝐤,n^'=∑_𝐤,nε_𝐤,nc_𝐤,n^†c_𝐤,n +∑_𝐤,n,n^'(e𝐀(t)·𝐯_n,n^'(𝐤)+δ_nn^'e^2/2mA^2(t))c_𝐤,n^†c_𝐤,n^', where c_𝐤,n is the annihilation operator related to the single-particle state ϕ̨_̨𝐤̨,̨n̨. 
We also have ρ(𝐫)=-e∑_𝐤,nϕ_𝐤,n(𝐫)c_𝐤,n^†c_𝐤,n, 𝐉(𝐫,t)=1/2∑_𝐤,n,n^'ϕ_𝐤,n(𝐫)𝐯_n,n^'(𝐤)c_𝐤,n^†c_𝐤,n^' +1/2∑_𝐤,n,n^'ϕ_𝐤,n^'(𝐫)𝐯_n,n^'(𝐤)c_𝐤,n^†c_𝐤,n^' +e/m𝐀(t)ρ(𝐫). It is worth noting that, in principle, for any real material, 𝐯_n,n^'(𝐤) can be obtained through the outputs of the majority of the available DFT codes. §.§ Dipole gauge Now, we can move from computing the average of the velocity operator 1/m𝐩̂ and, consequently, the Berry connection, to computing the average of the operator 𝐫̂ and, therefore, the dipole operator 𝐃̂. In order to do that, one can apply the following unitary transformation: Û=e^-iŜ where Ŝ=-e/ħ𝐀(t)·𝐫̂. Let us recall the following general relations: Ô=ÛÔÛ^† =∑_n=0^∞(-i)^n/n![Ŝ,[Ŝ,…,[Ŝ,[Ô]_0]_1…]_n-1]_n, ψ̨=Ûψ̨, iħ∂/∂ tψ̨=Ĥψ̨, Ĥ=Ĥ+(iħ∂/∂ tÛ)Û^†. Accordingly, we have 𝐫̂=𝐫̂, 𝐩̂=𝐩̂-e𝐀(t), π̂=π̂-e𝐀(t)=𝐩̂, Ĥ=Ĥ_0, ∂/∂ tÛ=-ie/ħ(𝐄(t)·𝐫̂)Û, Ĥ=Ĥ_0+e𝐄(t)·𝐫̂, ρ̂=ρ̂, 𝐉̂=𝐉̂-e/m𝐀(t)ρ̂=1/2m(ρ̂𝐩̂+𝐩̂ρ̂), ∂/∂ tρ̂=-i/ħ[ρ̂,Ĥ]=-∇·𝐉̂ =i/ħ∑_η=x,y,z[π̂_η,Ĵ_η], where 𝐄(t)=-∂/∂ t𝐀(t) is the electric field applied to the system. It is just Eq. <ref>, as sought outcome, that inspired the transformation. This scenario is known as dipole gauge after the electron-field interaction term in the Hamiltonian: e𝐄(t)·𝐫̂. Moving from the Bloch states to the Wannier ones, φ̨_̨𝐢̨,̨ν̨=1/√(N)∑_𝐤,na_𝐤,ν,ne^-i𝐤·𝐑_𝐢ϕ̨_̨𝐤̨,̨n̨, where N is generically the number of lattice sites and a_𝐤,ν,n can be chosen, for instance, to get the maximally localized Wannier functions (MLWFs), φ_𝐢,ν(𝐫-𝐑_𝐢)=𝐫φ_𝐢,ν, around the specific Bravais lattice site 𝐑_𝐢. We will assume hereafter that this is the choice that has been made in order to properly compute the dipole term of the Hamiltonian as we will see in the following. It is easy to demonstrate that the 𝐫̂ operator has an ill-defined average on a Bravais lattice, φ_𝐢,ν𝐫̂φ_𝐢,ν=∫ d𝐫𝐫|φ_𝐢,ν(𝐫-𝐑_𝐢)|^2|𝐑_𝐢|→∞→∞, implying that the Hamiltonian is also ill defined. φ_𝐢,ν𝐫̂-𝐑̂φ_𝐢,ν can be instead always finite (𝐑̂φ̨_̨𝐢̨,̨ν̨=𝐑_𝐢φ̨_̨𝐢̨,̨ν̨) if the MLWFs of the system are localized enough; actually, if it is not so, the following procedure cannot be adopted. This problem calls for the application of one more unitary transformation, Û=e^-iŜ where Ŝ=+e/ħ𝐀(t)·𝐑̂. We can exploit the following relation to apply this transformation to the relevant operators: Ô_𝐢,ν;𝐣,ν^'=φ_𝐢,νÔφ_𝐣,ν^'=e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣Ô_𝐢,ν;𝐣,ν^', and get 𝐫_𝐢,ν;𝐣,ν^'=e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣𝐫_𝐢,ν;𝐣,ν^', 𝐩_𝐢,ν;𝐣,ν^'=e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣[𝐩_𝐢,ν;𝐣,ν^'-e𝐀(t)δ_𝐢,ν;𝐣,ν^'], π_𝐢,ν;𝐣,ν^'=e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣𝐩_𝐢,ν;𝐣,ν^', Ĥ_𝐢,ν;𝐣,ν^'=e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣[Ĥ_0+e𝐄(t)·𝐫̂]_𝐢,ν;𝐣,ν^', ∂/∂ tÛ=+ie/ħ(𝐄(t)·𝐑̂)Û, Ĥ=Ĥ+ie/ħ𝐄(t)·𝐑̂, Ĥ_𝐢,ν;𝐣,ν^'=e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣[Ĥ_0+e𝐄(t)·𝐃̂]_𝐢,ν;𝐣,ν^', ρ_𝐢,ν;𝐣,ν^'=e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣ρ_𝐢,ν;𝐣,ν^', 𝐉_𝐢,ν;𝐣,ν^'=e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣𝐉_𝐢,ν;𝐣,ν^', ∂/∂ tρ̂=-i/ħ[ρ̂,Ĥ]=-∇·𝐉 =i/ħ∑_η=x,y,z[π_η,J_η], where 𝐑_𝐢𝐣=𝐑_𝐢-𝐑_𝐣 and 𝐃̂=𝐫̂-𝐑̂. Accordingly, in the dipole gauge we have ℋ=1/M∑_𝐢,ν;𝐣,ν^'e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣T̃_𝐑_𝐢𝐣,ν,ν^'c̃_𝐢,ν^†c̃_𝐣,ν^' +1/M∑_𝐢,ν;𝐣,ν^'e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣e𝐄(t)·𝐃̃_𝐑_𝐢𝐣,ν,ν^'c̃_𝐢,ν^†c̃_𝐣,ν^', ρ̂(𝐪=0)=-e1/M∑_𝐢,νc_𝐢,ν^†c_𝐢,ν=-eN/M, 𝐉̂(𝐪=0,t)= =ie/ħ1/M∑_𝐢,ν;𝐣,ν^'e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣𝐑_𝐢𝐣T̃_𝐑_𝐢𝐣,ν,ν^'c̃_𝐢,ν^†c̃_𝐣,ν^' +ie/ħ1/M∑_𝐢,ν;𝐣,ν^'e^-ie/ħ𝐀(t)·𝐑_𝐢𝐣[𝐃̂,Ĥ_0]_𝐢,ν;𝐣,ν^'c̃_𝐢,ν^†c̃_𝐣,ν^', where [𝐃̂,Ĥ_0]_𝐢,ν;𝐣,ν^'=∑_𝐢^',ν^''𝐃̃_𝐑_𝐢𝐢^',ν,ν^''T̃_𝐑_𝐢^'𝐣,ν^'',ν^' -∑_𝐢^',ν^''T̃_𝐑_𝐢𝐢^',ν,ν^''𝐃̃_𝐑_𝐢^'𝐣,ν^'',ν^', T̃_𝐑_𝐢𝐣,ν,ν^'=(Ĥ_0)_𝐢,ν;𝐣,ν^' is known as the hopping matrix, and 𝐃̃_𝐑_𝐢𝐣,ν,ν^'=(𝐃̂)_𝐢,ν;𝐣,ν^' is the dipole matrix. We have considered a homogeneous lattice so that both the hoping and the dipole matrices depend on the difference 𝐑_𝐢𝐣. 
c̃_𝐢,ν is the annihilation operator related to the single-particle state φ̨_̨𝐢̨,̨ν̨, M is the number of lattice sites, N is the total number of electrons in the system, and we have used the relations 𝐩̂=-im/ħ[𝐫̂,Ĥ_0] and φ_𝐢,ν[𝐑̂,Ô]φ_𝐣,ν^'=𝐑_𝐢𝐣Ô_𝐢,ν;𝐣,ν^'. Then, we move to the momentum space using Fourier transformation, c̃_𝐤,ν=1/√(M)∑_𝐑e^i𝐤·𝐑c̃_𝐢,ν, and obtain ℋ=∑_𝐤,ν,ν^'T̃_𝐤+e/ħ𝐀(t),ν,ν^'c̃_𝐤,ν^†c̃_𝐤,ν^' +e𝐄(t)·∑_𝐤,ν,ν^'𝐃̃_𝐤+e/ħ𝐀(t),ν,ν^'c̃_𝐤,ν^†c̃_𝐤,ν^', ρ(𝐪=0)=-e1/M∑_𝐤,νc̃_𝐤,ν^†c̃_𝐤,ν^'=-eN/M, 𝐉(𝐪=0,t)=-e/ħ∑_𝐤,ν^',ν[∇_𝐤T̃_𝐤+e/ħ𝐀(t),ν,ν^']c̃_𝐤,ν^†c̃_𝐤,ν^' +ie/ħ∑_𝐤,ν,ν^',ν^''[𝐃̃_𝐤+e/ħ𝐀(t),ν,ν^''T̃_𝐤+e/ħ𝐀(t),ν^'',ν^']c̃_𝐤,ν^†c̃_𝐤,ν^' -ie/ħ∑_𝐤,ν,ν^',ν^''[T̃_𝐤+e/ħ𝐀(t),ν,ν^''𝐃̃_𝐤+e/ħ𝐀(t),ν^'',ν^']c̃_𝐤,ν^†c̃_𝐤,ν^', where T̃_𝐤,ν,ν^'=1/M∑_𝐢,𝐣e^-i𝐤·𝐑_𝐢𝐣T̃_𝐑_𝐢𝐣,ν,ν^', 𝐃̃_𝐤,ν,ν^'=1/M∑_𝐢,𝐣e^-i𝐤·𝐑_𝐢𝐣𝐃̃_𝐑_𝐢𝐣,ν,ν^'. Again, it is worth noting that, in principle, for any real material, T̃_𝐑_𝐢𝐣,ν,ν^' and 𝐃̃_𝐑_𝐢𝐣,ν,ν^' can be obtained as standard outputs of Wannier90 code <cit.>, given its interfaces to a certain number of available DFT codes. § THE HOUSTON APPROACH One of the methods used to simulate the behavior of pumped semiconductors is the Houston approach <cit.>. Such an approach is usually formulated in the velocity gauge and first quantization, just for the reason that will become clear in the following. Let us start from the time-independent single-particle Hamiltonian of the Bloch system under analysis, Ĥ_0=p̂^2/2m+V(𝐫̂), where 𝐩̂ and 𝐫̂ are the momentum and position operators, respectively, m is the electron mass, and V(𝐫̂) is the periodic potential of the system under analysis. The related time-independent Schrdinger equation Ĥ_0|ψ_,n⟩ =ε_,n|ψ_,n⟩ is solved in terms of the Bloch bands ε_,n and of the Bloch functions |ψ_,n⟩ =e^-ik·𝐫̂|u_,n⟩ where |u_,n⟩ displays the same periodicity of the potential. Accordingly, we have the following reduced equation Ĥ_0,k|u_,n⟩ =ε_,n|u_,n⟩ where Ĥ_0,k=e^ik·𝐫̂Ĥ_0e^-ik·𝐫̂=(𝐩̂-ħk)^2/2m+V(𝐫̂). Now, if we have a pump pulse described by the vector potential 𝐀(t) impinging on the system, the related time-dependent minimal-coupling Hamiltonian in the velocity gauge reads as Ĥ(t)=(𝐩̂+e𝐀(t))^2/2m+V(𝐫̂), where e>0 is the electronic charge. It is straightforward to demonstrate that the eigenfunctions and the eigenvalues of this Hamiltonian are simply |φ_,n(t)⟩ =e^-ik·𝐫̂|u_+,n⟩ and ε_+,n, respectively. The set of such eigenfunctions is usually named the instantaneous or the adiabatic basis because these states would exactly describe the behavior of the system only if the pump pulse would be so slowly varying on the characteristic timescales (energies) of the system to allow it to adjust to the pump pulse at each instant of time (i.e., adiabatically). Accordingly, they do not solve the general time-dependent Schrdinger equation Ĥ(t)|ϕ_k(t)⟩ =iħ∂/∂ t|ϕ_k(t)⟩, but they can be used as a basis for expanding |ϕ_k(t)⟩ =∑_nλ_,n(t)|φ_,n(t)⟩. The projection coefficients λ_,n(t) are determined via the following equation of motion, iħ∂/∂ tλ_,n(t)=(ε_+,n-θ_,n(t))λ_,n(t) +iħe/m∑_n^'(≠ n)𝐄(t)·𝐩_k,n,n^'(t)/Δε_k,n,n^'(t)λ_k,n^'(t), where θ_,n(t)=⟨φ_,n(t)|iħ∂/∂ t|φ_,n(t)⟩ is connected to the Berry phase of the system and can be neglected if there is no degeneracy, 𝐩_k,n,n^'(t)=⟨φ_,n(t)|𝐩̂|φ_k,n^'(t)⟩ is the matrix element of the momentum in the instantaneous basis, Δε_k,n,n^'(t)=ε_+,n-ε_k+e/ħ𝐀(t),n^' and 𝐄(t)=-∂/∂ t𝐀(t) is the applied electric field. 
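Before introducing the simplified coefficients below, it may help to see how this equation of motion can be integrated directly. The following minimal sketch does so for a two-band case, neglecting the Berry-phase term θ and assuming a constant momentum matrix element p_vc and parabolic bands (ħ=e=m=1; all numerical values are illustrative, not the parameters used in the text).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-band integration of the lambda_{k,n}(t) equation of motion given above.
gap, m_v, m_c, k, p_vc = 1.0, -1.0, 1.0, 0.3, 0.2

def A(t):                        # Gaussian-enveloped pump, nearly resonant with the gap at k
    return 0.05 * np.exp(-t**2 / (2 * 7.0**2)) * np.cos((gap + k**2) * t)

def E(t, dt=1e-4):               # E(t) = -dA/dt, by finite differences for simplicity
    return -(A(t + dt) - A(t - dt)) / (2 * dt)

def eps(n, t):                   # instantaneous band energies eps_{k + A(t), n}
    kt = k + A(t)
    return kt**2 / (2 * m_v) if n == 0 else gap + kt**2 / (2 * m_c)

def rhs(t, y):
    lam = y[:2] + 1j * y[2:]
    dlam = np.empty(2, dtype=complex)
    for n in (0, 1):
        other = 1 - n
        delta = eps(n, t) - eps(other, t)
        dlam[n] = -1j * eps(n, t) * lam[n] + E(t) * p_vc / delta * lam[other]
    return np.concatenate([dlam.real, dlam.imag])

sol = solve_ivp(rhs, [-30.0, 30.0], [1.0, 0.0, 0.0, 0.0], max_step=0.05)
lam_c = sol.y[1, -1] + 1j * sol.y[3, -1]
print("residual conduction-band population:", abs(lam_c)**2)
```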
Defining new coefficients β_𝐤,n(t) such that λ_𝐤,n(t)=β_𝐤,n(t)e^-i/ħ∫_-∞^tε_𝐤+e/ħ𝐀(t^'),ndt^', the projection-coefficient equation further simplifies to iħ∂/∂ tβ_𝐤,n(t)=-θ_𝐤,n(t)β_𝐤,n(t) +iħe/m∑_n^'(≠ n)𝐄(t)·𝐩_𝐤,n,n^'(t)/Δε_𝐤,n,n^'(t)e^i/ħ∫_-∞^tΔε_𝐤,n,n^'(t^')dt^'β_𝐤,n^'(t) and the corresponding basis {e^-i/ħ∫_-∞^tε_𝐤+e/ħ𝐀(t^'),ndt^'|φ_𝐤,n(t)⟩} is the Houston basis, which differs just by a time-dependent phase factor from the instantaneous basis (actually, both bases are often dubbed in the literature as the Houston basis). The main appeal of such a procedure resides in the possibility of obtaining sensible results even if one: (i) focuses only on a few bands (e.g., one valence, one conduction and, if needed, one core band), (ii) supposes that 𝐩_𝐤,n,n^'(t) is approximately k independent so that 𝐩_𝐤,n,n^'(t)≈𝐩_n,n^' (the time dependence gets cancelled as well), and (iii) uses the parabolic approximation ε_𝐤,n≈ε_n+ħ^2k^2/2m_n, that is, retains only the relevant gaps |ε_n-ε_n^'| and effective masses m_n in the proximity of a few selected k points. § OUT-OF-EQUILIBRIUM SPECTRAL FUNCTIONS To obtain the spectral functions, we need the Fourier transformation of the GFs with respect to time, which we perform as follows, G_𝐤^R,<(ω,t) =∫_-∞^+∞dτ e^iωτ-0^+|τ|G_𝐤^R,<(t+τ/2,t-τ/2), where 0^+ is an infinitesimal convergence factor. Then, the retarded spectral function is given by A_𝐤^R(ω,t)=-1/π Im[ G_𝐤^R(ω,t)], while the lesser spectral function is defined as A_𝐤^<(ω,t)=1/2π Im[ G_𝐤^<(ω,t)]. In Fig. <ref>, top and bottom panels, we report the dimensionless retarded and lesser functions, A̅_𝐤^R,<(ω̅,t)=A_𝐤^R,<(ω̅,t)Δ/ħ, respectively, as functions of ω̅ along the main path, for the same pump-pulse and system parameters as those of Fig. <ref>. In the left, middle and right panels, the time t is chosen to be well before the application of the pump pulse, at the center of the pump pulse (t=0), and well after the application of the pump pulse, respectively. Clearly, during the application of the pump pulse, the spectral functions, A̅_𝐤^R,<(ω̅,t), become negative and lose their original physical interpretations. Obviously, in the absence of the pump pulse (both before and after its application), the spectral function gives correct information about the energy bands of the system. The left panel of Fig. <ref> can be directly compared to Fig. <ref> and the only difference to be acknowledged is that, in the former, the band broadening originates from the finite numerical value of 0^+, while, in the latter, it originates from the finite FWHM of the probe pulse. After the application of the pump pulse, the bands recover their equilibrium shape, as can be seen by comparing the top-right panel of Fig. <ref> to its top-left panel. On the other hand, the occupied spectral function (see bottom-right panel of Fig. <ref>) shows that at some k points we have residual excitations, similarly to what is reported in the top panel of Fig. <ref>.
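As an illustration of how these definitions can be evaluated numerically, the following minimal Python sketch (the function and variable names are ours, not part of any released code) computes G_𝐤(ω,t) from a sampled two-time Green's function on a uniform relative-time grid and extracts the corresponding spectral function.

```python
import numpy as np

def spectral_function(G_two_time, t, taus, eta=0.02, kind="R"):
    """Evaluate G(omega, t) = int dtau exp(i*omega*tau - 0^+|tau|) G(t+tau/2, t-tau/2)
    and return the spectral function. `G_two_time(t1, t2)` must return the (complex)
    retarded or lesser GF; `kind` selects the prefactor used in the text."""
    dtau = taus[1] - taus[0]
    omegas = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(taus), d=dtau))
    g = np.array([G_two_time(t + 0.5 * tau, t - 0.5 * tau) for tau in taus])
    damp = np.exp(-eta * np.abs(taus))          # the 0^+ convergence factor
    G_w = np.array([np.sum(g * damp * np.exp(1j * w * taus)) * dtau for w in omegas])
    if kind == "R":
        return omegas, -G_w.imag / np.pi        # A^R(omega, t)
    return omegas, G_w.imag / (2.0 * np.pi)     # A^<(omega, t)

# Example: a single equilibrium level at energy e0 gives a Lorentzian of width eta.
e0 = 0.7
G_ret = lambda t1, t2: -1j * (t1 >= t2) * np.exp(-1j * e0 * (t1 - t2))
w, A = spectral_function(G_ret, t=0.0, taus=np.linspace(-200, 200, 4001))
print(w[np.argmax(A)])   # close to e0
```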
http://arxiv.org/abs/2307.01582v1
20230704092250
IAdet: Simplest human-in-the-loop object detection
[ "Franco Marchesoni-Acland", "Gabriele Facciolo" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.HC", "cs.LG" ]
This work proposes a strategy for training models while annotating data, named Intelligent Annotation (IA). IA involves three modules: (1) assisted data annotation, (2) background model training, and (3) active selection of the next datapoints. Under this framework, we open-source the IAdet tool[https://github.com/franchesoni/IAdet], which is specific to single-class object detection. Additionally, we devise a method for automatically evaluating such a human-in-the-loop system. For the PASCAL VOC dataset, the tool reduces the database annotation time by 25% while providing a trained model for free. These results are obtained for a deliberately very simple design. As a consequence, IAdet is susceptible to multiple easy improvements, paving the way for powerful human-in-the-loop object detection systems. § INTRODUCTION In the last ten years, deep supervised learning has revolutionized many areas of science and has found widespread applications <cit.>. Although the deep learning community has used overparameterized functions for much more than vanilla supervised learning <cit.><cit.><cit.>, supervision is still of great importance for most applications <cit.><cit.>. To upgrade supervised learning, the community has developed new architectures <cit.><cit.><cit.>, learning methodologies (e.g. few-shot <cit.>, semi-supervised <cit.>, active <cit.>, self-supervised <cit.>, online <cit.>), and interactive tools <cit.>. Although some of these methods deal with the lack of annotated data, free tools that put these different arts together, from annotation to a trained model, are hard to find. Such a tool would just require an annotator and (ideally little) time to obtain a solid deep learning model. This paper presents the simplest example of such a tool for the specific application of single-class object detection. The tool is called IAdet, where IA stands for [Intelligent, Interactive, Incremental] Annotation. IAdet is an example of the IA framework and it is kept as simple as possible: the main goal of this work is to present the IA scheme (this Section) and an evaluation methodology (Section <ref>). This article also shows that the simplest implementation is already valuable for the single-class object detection problem (Section <ref>), and that performances could be much better (Section <ref>). The IA scheme comprises the three components shown in Figure <ref>. These components are not specific to object detection but are general research fields themselves. The first component is an assisted annotation tool, which allows the user to quickly make annotations. The second component is the deep learning model, which ideally achieves great performance while being very data-efficient and fast to train. The third component is an active learning method, which chooses the best next datapoint to be annotated. This loop is run until the whole dataset is annotated or the user is satisfied with the model. Even though IA is applied in spirit, formulating such annotating-while-training systems opens new research challenges. These are the same old challenges of active learning, model training, and interactive tooling, but augmented by the interactions between them and the time and data constraints involved. Evaluation criteria are mandatory, and ideally consider the potential applications of the system: (i) reducing the annotation time and (ii) fast model training by annotating.
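A minimal sketch of the loop formed by these three components is given below. All function names (annotate_with_assistance, train_one_round, select_next, satisfied) are placeholders for the modules described above, and training is shown here as a sequential step for clarity, whereas in IAdet it runs concurrently in the background.

```python
def ia_loop(unlabeled, model, annotate_with_assistance, train_one_round, select_next, satisfied):
    """Sketch of the IA (Intelligent Annotation) loop: assisted annotation,
    model training, and active selection, repeated until the dataset is
    annotated or the user is satisfied with the model."""
    labeled = {}
    while unlabeled and not satisfied(model):
        x = select_next(model, unlabeled)                           # module 3: active learning (IAdet: random sampling)
        proposal = model.predict(x) if model is not None else None  # module 1: pre-annotations shown to the user
        labeled[x] = annotate_with_assistance(x, proposal)
        unlabeled.remove(x)
        model = train_one_round(model, labeled)                     # module 2: (background) training on all labels so far
    return model, labeled
```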
We propose an automated method to experimentally measure these two aspects. This evaluation method is based on the simulation of the annotator. In summary, our main contributions are (i) open-sourcing the tool, which allows for fast annotation and model training while annotating, (ii) conceptualizing the IA human-in-the-loop learning system and a possible evaluation methodology, and (iii) showing that the tool can be easily and greatly improved in multiple ways. The rest of the paper is organized as follows: in Section <ref> we describe the related work, discussing each of the many relevant research fields in the context of the related IA modules. In the same section, we present the current industrial tooling. Section <ref> presents the tool. Its software design and the challenges there involved are presented in further detail in the Supplementary Material. An evaluation method is needed to sort out the many questions about optimal implementation that arise. This method is exposed in Section <ref>. In Section <ref> we apply the evaluation method to our tool and present the results over the first 10 classes of the PASCAL VOC dataset <cit.>. Section <ref> presents ablation studies that show that the tool can be easily and significantly improved. Finally, we conclude our work in Section <ref> after a discussion of limitations and possible improvements in Section <ref>. § RELATED WORK §.§ Active learning Active Learning (AL) is both an IA module and a mature research field <cit.>. It tackles the problem of the online creation of the optimal sequence of datapoints to be annotated, as measured by a time-performance curve, where the time is measured in number of annotated datapoints. There are active learning methods that deal with deep models <cit.>, some modifying them, e.g. adding auxiliary heads or using ensembles <cit.>, and some only using their products, e.g. using the entropy of their predictions. Even though the optimal solution for a given dataset and model exists, it is unfeasible to exhaustively search for it. AL methods can be then seen as heuristics or probabilistic formulations of this problem. The usual AL baseline is random sampling. Active learning strategies that handle neural networks usually operate in the batch setting and can be divided into diversity and uncertainty sampling. Diversity sampling looks for batches whose datapoints are representative of the data distribution <cit.>. Uncertainty sampling uses the output distribution to find the most informative datapoints, while not necessarily enforcing diversity. The previous state-of-the-art, BADGE <cit.>, combined both approaches. It has recently been surpassed by BAIT <cit.>, which not only has stronger theoretical foundations and performance but can also be extended to the regression setting. BAIT builds on classical maximum likelihood estimators and involves the computation of the Fisher information of the samples with respect to the parameters of the last layer. Unfortunately, the reported performance of some active learning methods is not consistent. For instance, CORESET <cit.> performs worse than the random sampling baseline (RSB) in the experiments reported in BAIT. In computer vision, active learning has been mostly applied to classification tasks, with some recent works applying it to object detection <cit.><cit.>. 
Recently, <cit.> thoroughly evaluated active learning algorithms for computer vision and showed that “the difference in performance between the AL methods and the RSB is much smaller than reported in the literature”, which is in line with <cit.>. This comparison does not take into account the computational cost, the simplicity, and the adaptability to models and tasks of the methods, where the RSB is superior. §.§ Assisted annotation A way of annotating more efficiently, complementary to AL, is to label more quickly. Methods that facilitate this are usually called “interactive”, e.g. interactive image segmentation methods. The interactive annotation area studies the optimization of a loop between the human and the machine that takes place at the datapoint level, e.g. image level, in contrast to the IA loop that operates at the dataset level. For instance, in the interactive image segmentation (IIS) literature, a few user clicks guide a neural network that makes mask proposals <cit.>. Such IIS tools can be used to efficiently annotate huge datasets <cit.>. Interactive segmentation tools are especially valuable because it is hard to create a detailed mask by using free-painting tools. However, for object detection, it is much easier to create and remove targets, which are bounding boxes. Assistance in object detection is more important when numerous instances are present in the same image, as exemplified by <cit.>. The ideal assisted annotation tool would propose reasonable annotations from the start and quickly converge to the desired annotation if corrections were needed. In this regard, we note that most IIS tools do not propose annotations from the start, whereas does. However, some IIS tools do provide assistance when correcting annotations <cit.>, while does not. More recently, general refinement blocks that can make fixed networks interactive <cit.> and language-guided methods <cit.> were proposed, opening up even more possibilities. §.§ Model training For the IA framework, one would like a well-performing, data-efficient, and fast-to-train model. In other words, we want the model that achieves the best time-performance curve and thus enables the fastest annotation. Note that performance here refers to test performance, which we assume to be statistically identical to the performance over unlabeled samples. In what follows we will restrict our attention to object detection models, but their literature is representative of other areas as well. The best performing models are huge, thus they are not fast to train and are not necessarily data efficient. For instance, the current state-of-the-art in COCO detection <cit.> is a self-supervised transformer with 3 billion parameters. On the other hand, the most data-efficient models are presumably few-shot object detection models. Few-shot object detection models are usually classified into meta-learned models and finetuned models. Note that both are usually finetuned, and both achieve comparable performance <cit.>. The meta-learned models are trained with episodes that simulate the few-shot setting by using a limited number of arbitrary query vectors that have to be recognized in the training images. In the IA framework, the amount of annotated data increases, yet meta-learned few-shot models do not scale to more annotations as naturally as the simpler finetuned models <cit.>. 
More recently, state-of-the-art few-shot detection performance has been established by leveraging self-supervised representations <cit.><cit.>, which suggests that traditionally fine-tuning self-supervised models can be better than using few-shot specific ones. One important lesson from the few-shot learning literature is that full finetuning is usually better than retraining the head only <cit.>, although simply retraining the head can bring solid performance too <cit.>. Even if few-shot learning methods have been greatly developed, simple baselines are still competitive <cit.><cit.>. The semi-supervised learning literature explores learning from little annotated data and much more unlabeled data. Self-supervised learning has enabled the extraction of supervised-level representations from unlabeled data only. These representations make further learning (even self-supervised <cit.>) easier. These methods usually involve pretraining, but we prefer to save time by starting from already pretrained backbones. How to best do this is studied by transfer learning. A comparison between transfer and self-supervised learning is given in <cit.>. <cit.> have shown that the most important factor for transfer learning is the source image domain, which should ideally include the target image domain. Moreover, there are metrics one could compute to predict which models would better transfer to a target dataset <cit.>. If the source and target domain are different, one should look into the domain adaptation literature <cit.>. In this work, we make transfer learning suitable by using MS-COCO <cit.> weights to tackle PASCAL VOC: the classes of the former dataset include those of the latter. §.§ Industrial tools There are between 15 and 20 companies offering data labeling tools for computer vision. There are also many open-source tools for annotation, e.g. <cit.><cit.> or https://pixano.cea.fr/about/Pixano. To the best of our knowledge, there are no generic free tools that involve assistance and allow training while annotating. From the tools behind a paywall, our experience suggests that the assistance, that only some of them have, is not as developed as marketed. The closest to is <cit.>, but it was presented as specific to ecology and does not tackle the questions of what a correct evaluation procedure is, or what happens when such a loop is constructed. The assisted annotation area is attracting more attention every day, however there are no good open tools yet. § TOOL The tool is defined by its modules and their interaction. We aimed for the simplest possible tool and we selected it without any tuning. This tool can (and will) be enhanced by using better components. For the assisted annotation module, we developed a Graphical User Interface that uses the model to suggest bounding boxes before starting the annotation (see Figure <ref>). For the deep learning model, we use SSD <cit.>, one of the simplest and worst performing object detection architectures available in <cit.>, a well-maintained computer vision library. The chosen active learning algorithm is the random sampling baseline, i.e. we avoid using active learning. The interaction between the components is done via files: the Graphical User Interface (GUI) implements assisted annotation by using the latest model weights to predict the bounding boxes for the image to be labeled next. The annotations are saved in a file with standardized format that contains the bounding boxes as vectors of length 4 and the path to each corresponding image. 
Simultaneously, the background model trains continually by repeating the following operations: (i) load the annotation file and (ii) train one epoch over it. All hyperparameters are the default of the chosen library (SSD300 + PASCALVOC). Our user experience (UX)-aware decisions involved avoiding the overlay of annotations and predictions, preferring click-click over click-drag bounding boxes, removing a bounding box with one click, removing all bounding boxes and navigating the dataset directory with the keyboard, automatically saving when leaving an image, between others. We provide a more detailed description and discussion of the design and the limitations in the supplementary material. § EVALUATION A comprehensive evaluation should include the interaction between the modules, and since they are complex and evolve through time, the full annotation process needs to be simulated. For our tool the automatic evaluation will be simple: a robot annotator will make the annotations for new images drawing them from the ground truth at a rate of R /s. Computing the number of interactions per image is straightforward. 1 key-press is needed to move to the next image and also to erase all predicted bounding boxes. 2 clicks are needed to create one new bounding box, and 1 click is needed to remove one bounding box. Assuming perfect annotation with the least number of interactions, and given the number of false-positives (FP), false-negatives (FN), and true-positives (TP), the time to annotate the ith image is I_i / R, where I_i = (1 + min(1+(TP+FN)×2, FP+FN×2 )) and the min considers the best strategy between ignoring all predictions or correcting them. Note that choosing to ignore all predictions involves an extra interaction, thus it is strictly worse than not using the tool at all. If we assume this model of annotation time as correct, an annotating robot with speed R can be used to simulate a full dataset annotation in real-time. For a given dataset and IA system, one can think of the performance as dependent on a few values: the speed of annotation R, the training speed v and batch size b, and the elapsed time t. In other words, for each (R,v,b), we can obtain a time-performance curve. This curve is defined for t ∈ [0, t_A], where t_A is the time taken to annotate the whole dataset. Another important annotation time is t_N = I / R, where I is the total number of needed unassisted interactions, or 2× the number of bounding boxes in all ground truths. Because these times are highly dependent on the speed of annotation R, it is better to look at their ratio, t_A / t_N, which roughly measures the incremental error incurred by the model. In the ratio, R seems to disappear, but it is not the case: t_A continues to depend on R because R roughly determines how many new images the model will see at each epoch, which in turn changes its performance. We also note that t_N is not necessarily an upper bound of t_A, as bad predictions cause one more interaction per image. The final model performance or negative loss -L(t_A) could be compared with the one achieved by the same model but trained with the full annotated dataset instead of incrementally, which would be the ideal case. This supervised performance is -L_sup and it is not necessarily an upper bound of -L(t_A), as some active learning methods could outperform their fully supervised reference. Naturally, this performance is evaluated on the test split of the given dataset. Analogously, we can take the performance ratio L_A / L_sup as a key metric. 
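The interaction model above can be written down directly; the following sketch (our naming, not the released evaluation code) counts the interactions per image and converts them into simulated annotation time at R interactions per second.

```python
def interactions_per_image(tp, fp, fn):
    """I_i = 1 + min(1 + (TP + FN) * 2, FP + FN * 2): one key-press to move to the
    next image, plus the cheaper of (erase everything and redraw all ground-truth
    boxes) or (delete each false positive and draw each missed box)."""
    ignore_all_proposals = 1 + (tp + fn) * 2   # 1 key-press to erase, 2 clicks per box
    correct_proposals = fp + fn * 2            # 1 click per FP, 2 clicks per FN
    return 1 + min(ignore_all_proposals, correct_proposals)

def simulated_annotation_time(counts, rate):
    """Total time (seconds) for a list of (TP, FP, FN) tuples at `rate` interactions/s."""
    return sum(interactions_per_image(tp, fp, fn) for tp, fp, fn in counts) / rate

# An image with 2 correct proposals, 1 false positive and 1 missed box:
print(interactions_per_image(2, 1, 1))   # 1 + min(7, 3) = 4
```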
This metric measures how well the background model performs at the end of the annotation relative to the performance of a model trained after the dataset was fully annotated. Of course, one could always use the tool to annotate before training a new model, but it is interesting to know if the background model has any use after annotation. In summary, simulating the annotation at R interactions per second, we can measure the change in required annotation time t_A/t_N and the relative final model performance L_A / L_sup. This evaluation protocol is also useful to evaluate the merits of individual modules, as some of their most important potential applications live inside the IA framework. § RESULTS Our experiments deal with the simplistic single-class object detection problem. We further assume that all images contain the class of interest. We choose the first half of the classes in PASCAL VOC 2007 and 2012 <cit.> for the experiment regarding the annotation time and the first three classes to analyze background-model performances. For the ablation studies, we use the least common class, sheep, for which we have 420 training or validation examples and 97 test examples (from the PASCAL 2007 test split). Our objective is twofold: first, compare the annotation time while using assistance against the default annotation time, and second, compare the performance of the assistant model against the performance of a supervised model that had access to the fully annotated dataset. All experiments were run sequentially on a single GPU NVIDIA TITAN V. Table <ref> presents the results for the first half of the classes of the dataset. The table shows that: (i) the unassisted annotation time is about one hour per class and (ii) when using the tool, the average improvement is 25%. Figure <ref> shows the increase in the advantage provided by the tool as time goes by. We noticed a dip caused by a first stage in which the predictions of the model are not yet useful and have to be discarded. This is likely the greatest drawback of the present tool, as it takes more than 100 annotated images for the assistance to start being useful. Closely related to the annotation speedup is the model performance through time. Figure <ref> compares the Average Precision (AP) for four cases. The first is the model trained with annotations that are incrementally added to the dataset by the (simulated) annotator (A). The second and third are models trained with all annotations, which could happen after unassisted annotation (N) or after assisted annotation (M). As more information is available in these cases, we expect the (N) or (M) models (which are identical up to a time shift) to reach higher performance, i.e. to set an upper bound for the first curve (A). The background model improves performance through time but can not reach the one of the supervised. The other option we consider, (B), is to train the model after assisted annotations with initial weights coming from (A). From Figure <ref> one can observe that there is not much difference between bootstrapping (B) or training from scratch (M) after the first few epochs. Table <ref> shows the final IoUs for the models (A, N, B) for the first three classes of PASCAL VOC. The model we get for free after the assisted annotation of a full dataset achieves a performance only 5% below that of the model trained after annotation. § ABLATIONS Table <ref> compares different annotation speeds. 
We find that a slower annotator gets more value from the tool: the model learns faster relative to how much has been annotated. In the other direction, if the annotator is very fast, the model trains too slowly to provide useful assistance. Figure <ref> and Table <ref> compare different changes to the training procedure. The first change, random, which deteriorates the performance the most, is to initialize the weights of the model not from COCO, but randomly. The second change evaluated was to make the initial learning rate 10× smaller. Both changes make the model learn more slowly, to the point of making assistance counterproductive. The third variation evaluated, frozen, involves freezing the backbone weights coming from the COCO-trained model, but not the neck and head weights. This allows for a modest improvement, although it does not provide the fast and robust performance one could expect. The fourth change is a hack: instead of using each image once per epoch, we use it 10 times with various augmentations (e.g. random crop, flip, and photometric distortion), thereby reducing the time needed to load the dataset at each epoch and increasing data diversity. This change makes the model useful much sooner. Lastly, we change the model from SSD <cit.> to a modern Faster R-CNN <cit.> with a Feature Pyramid Network <cit.> ResNet-50 <cit.> backbone. The architectural change is the most impactful, reducing annotation time by more than 50%. Note that we do not combine improvements, as this section solely aims to show that the base tool can be easily improved. § LIMITATIONS The limitations of IAdet relate to all the obvious possible improvements: using a better background model, e.g. Faster R-CNN <cit.> (as shown) or DETR <cit.>, including an active learning algorithm, and adding intra-image assistance. Beyond these, the main limitations are the domain shift, which reduces the usefulness of transfer learning, and hence of the current tool, for real applications, and the long time it takes the model to produce relevant proposals. The greatest weakness of the evaluation procedure is that we did not consider the labeling noise introduced by the model, whose predictions were defined as correct with the standard 0.5 IoU threshold. Future work could include few-shot learning methods such as <cit.>. § CONCLUSIONS This paper introduced the generic IA framework for both fast annotation and training-while-annotating, which ideally combines assistance, background model training, and active learning. The framework was exemplified by IAdet, an extremely simple tool for making object detection annotations. Despite its simplicity, IAdet is effective: it reduces the annotation time by 25% and provides a competitive trained model for free. Human-in-the-loop systems of this kind could be a very interesting application where many related fields of machine learning converge. We have shown that there is value in such systems (even when they are very simple) and that there is ample room for improvement. While revising this paper, we discovered a previous work <cit.> that addressed a similar problem and reached several of the conclusions presented here. In particular, they propose a similar human-in-the-loop system based on a Faster R-CNN architecture and evaluate the annotation efficiency on datasets including PASCAL VOC, which closely parallels our work. 
Their conclusions are also comparable to ours, in particular they reach a workload reduction between 30% and 60% which can be compared to our 52% result in Table <ref>. The main differences are that IAdet i) presents no waiting time to the user, ii) was evaluated in real time, iii) assumes different number of clicks for FPs and FNs, iv) does not operate in a per-batch setting, and v) is open sourced. § POTENTIAL NEGATIVE SOCIETAL IMPACTS The authors do not recognize any potentially negative societal impact of this paper other than potentially lowering the barrier of entry to object detection model training. Work partly financed by Office of Naval research grant N00014-20-S-B001, MENRT, and by a grant from ANRT. Centre Borelli is also a member of Université Paris Cité, SSA and INSERM. § SUPPLEMENTARY MATERIAL §.§ User experience An IA tool respecting the framework exposed in Figure <ref> should be user-friendly. User experience design (UX) studies how to make a user perceive and respond positively to a product. It involves the perception of the utility, the ease of use, and the efficiency of the product. For instance, a door that has a door handle on the “Push” side will provide a sub-optimal user experience. In our tool, the utility of the tool is made explicit: each time an unlabeled image is loaded, the predictions of the current model are visualized. Thus, as the annotation process goes on, the user can see how the model improves. For the ease-of-use, we use assisted annotation. The ideal assisted annotation provides a reasonable proposal of bounding boxes that the user can keep and quickly converges to the desired annotation when corrected. In this first version, we only implement the proposal, and corrections of the user are not assisted at all. This works very well when there are few object instances per image. Inspired by https://github.com/developer0hye/Yolo_LabelYOLO-Label, we depart from the usual click-and-drag bounding box creation, and implement bounding box creation with two clicks: the first for one corner and the second for the opposite corner. These clicks are made with the usual left button. To remove a bounding box only one click is needed. This right-button click will remove the bounding box whose border is closest to it. All functionalities that are not location specific go through the keyboard: if desired, one can delete all bounding boxes by pressing . The user can change images using keyboard keys that call the and functions. The annotations are automatically saved when changing an image without the need for user interaction. Regarding efficiency, model selection is vital. However, we do not expect the user to know about a more efficient model or training method. Nevertheless, there are a few things that make a tool feel efficient. First, it has to be fast: we solve image loading with , while the annotations loading is done with the library. Second, the tool has to be responsive: we display the mouse coordinates, which give a feeling of sensitivity when quickly changing as the mouse moves. Third, it has to be informative: the user is shown the tensorboard logs and sees how the loss decreases as the training progresses, and it is also shown the current image path in the context of the other paths in the dataset. One thing that annoys users while annotating is a model that does not respect their annotations. How to fully fit the data is an open question, but we simply make this limitation invisible: the user is always shown the annotations she has made if available. 
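As an illustration of the two-click creation and closest-border removal described above, the following is a minimal, GUI-agnostic sketch; the data structures and function names are ours and do not correspond to the actual tool code.

import math

boxes = []              # annotated boxes as (x1, y1, x2, y2)
pending_corner = None   # first corner of the box currently being created

def on_left_click(x, y):
    # Two left clicks define opposite corners of a new bounding box.
    global pending_corner
    if pending_corner is None:
        pending_corner = (x, y)
    else:
        (x0, y0), pending_corner = pending_corner, None
        boxes.append((min(x0, x), min(y0, y), max(x0, x), max(y0, y)))

def on_right_click(x, y):
    # Remove the box whose border is closest to the click.
    if not boxes:
        return
    def border_distance(box):
        x1, y1, x2, y2 = box
        dx = max(x1 - x, 0, x - x2)  # horizontal distance to the box, 0 if inside
        dy = max(y1 - y, 0, y - y2)  # vertical distance to the box, 0 if inside
        if dx == 0 and dy == 0:      # click inside the box: distance to nearest edge
            return min(x - x1, x2 - x, y - y1, y2 - y)
        return math.hypot(dx, dy)
    boxes.remove(min(boxes, key=border_distance))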
A few extra elements include a button that displays a short manual and an email address for providing feedback. The size of the display is also a parameter the user can change. The limitations of the tool involve: (i) the use of as User Interface library when code is presumably faster, (ii) the last image is not saved when closing the tool, (iii) the lack of intra-image assistance, (iv) the absence of zoom-in/out or contrast-changing capabilities, (v) the tool is not packaged, and (vi) it is launched from the command line, i.e. it is not yet a completely no-code tool. §.§ Implementation details * The chosen network is SSD300 <cit.>, with the https://github.com/open-mmlab/mmdetection/blob/master/configs/pascal_voc/ssd300_voc0712.py configuration file of PASCAL VOC. * We modified the training loop to trigger the creation of a new dataloader at each epoch. * The tool is responsible for using the background model: it makes bounding box predictions for the requested image. In the future, this result could be cached, batched, or pre-run. * The IoU threshold used for the simulation is 0.5. * For the simulation, the proposal score threshold was the minimum between 0.7 and the highest predicted score. In other words, we always predict at least one bounding box.
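The proposal-filtering rule from the last bullet can be written in a few lines; the array names are ours, and the boxes and scores would presumably come from the detector's inference output.

import numpy as np

def filter_proposals(pred_boxes, scores):
    # Keep proposals with score >= min(0.7, highest score), so that at least
    # the single most confident box is always proposed to the user.
    # pred_boxes: (N, 4) array; scores: (N,) array.
    if len(scores) == 0:
        return pred_boxes, scores
    threshold = min(0.7, float(np.max(scores)))
    keep = scores >= threshold
    return pred_boxes[keep], scores[keep]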
http://arxiv.org/abs/2307.03299v1
20230706212211
Design of a Majorana trijunction
[ "Juan Daniel Torres Luna", "Sathish R. Kuppuswamy", "Anton R. Akhmerov" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
Design of a Majorana trijunction Juan Daniel Torres Luna^1*, Sathish R. Kuppuswamy^2, Anton R. Akhmerov^2†. ^1 QuTech, Delft University of Technology, Delft 2600 GA, The Netherlands ^2 Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands ^* jd.torres1595@gmail.com ^† trijunction@antonakhmerov.org August 1, 2023 § ABSTRACT Braiding of Majorana states demonstrates their non-Abelian exchange statistics. One implementation of braiding requires control of the pairwise couplings between all Majorana states in a trijunction device. To ensure adiabaticity, a trijunction device requires the desired pair coupling to be sufficiently large and the undesired couplings to vanish. In this work, we design and simulate a trijunction device in a two-dimensional electron gas, with a focus on the normal region that connects three Majorana states. We use an optimisation approach to find the operational regime of the device in a multi-dimensional voltage space. Using the optimization results, we simulate a braiding experiment by adiabatically coupling different pairs of Majorana states without closing the topological gap. We then evaluate the feasibility of braiding in a trijunction device for different shapes and disorder strengths. § INTRODUCTION A pair of well-separated Majorana states encodes the occupation of a single fermionic state non-locally as two zero-energy states <cit.>. Under the exchange of two Majorana states, i.e. braiding, the protected ground state evolves via unitary operations. The discrete nature of braiding allows all Clifford operations to be implemented with very low error rates, a requirement for universal fault-tolerant quantum computation <cit.>. This has attracted considerable attention to the field over the past two decades, with several proposals for the experimental realization <cit.> and detection <cit.> of Majorana bound states. There are therefore several proposals for braiding, including moving Majoranas around each other in semiconductor nanowire networks <cit.>, long-range coupling of Majorana islands connected by quantum dots <cit.>, and networks of Josephson junctions connected by trijunctions <cit.>. Braiding in hybrid semiconductor-superconductor devices requires coupling all Majorana states via control of the electrostatic potential. Two-dimensional electron gases (2DEGs) are suitable for realising trijunction devices because they combine the necessary ingredients, such as electrostatic control and superconductivity <cit.>, in a non-linear layout. 2DEGs are an active field of research for topological physics, with experiments focused on detecting signatures of Majorana states in single nanowires <cit.>, planar Josephson junctions <cit.>, or minimal realisations of the Kitaev chain <cit.>. Unambiguous detection of Majoranas requires distinguishing them from non-Majorana physics that produces similar signatures <cit.>. The recently proposed topological gap protocol <cit.> establishes a first step towards fully automated detection of Majorana states. A braiding experiment poses additional requirements beyond the creation of spatially isolated Majoranas. It requires measurement of the fermion parity of Majoranas belonging to the same nanowire <cit.>. It also requires a trijunction, a switch that selectively couples Majoranas from three different nanowires, which is the focus of our work. 
The requirements for a braiding experiment are such that (i) the energy of the coupled pairs needs to be larger than the thermal broadening, (ii) the ratio of the energies of coupled pairs with the remaining Majoranas should be as large as possible to ensure adiabaticity, and (iii) the gap between the zero-energy ground state and the coupled Majoranas does not close while coupling different pairs. A trijunction device that satisfies these requirements is suitable to perform braiding. In order to evaluate feasibility of a braiding experiment, we design and simulate a trijunction device as shown in Fig. <ref>. In order to find the operational regime of the device, we use an optimisation approach using an effective Hamiltonian in the basis of decoupled Majorana states. Then, we illustrate the device operation by simulating the braiding protocol from Ref. <cit.> where we switch the coupling between different pairs of Majorana states while preserving the energy gap. We define quality metrics relevant for braiding and systematically compare the performance of different trijunction device geometries. We highlight the geometries that are suitable for braiding and investigate their resilience to increasing concentration of electrostatic disorder that is unavoidable in this system <cit.>. § DEVICE LAYOUT A braiding protocol <cit.> requires time-dependent manipulation of the pair couplings between three Majorana states shown in Fig. <ref> (a). The computational subspace—one Majorana in the trijunction and three Majoranas in the far nanowires' ends—is protected as long as the number of zero-energy modes remains constant. In other words, the computation is protected as long as two out of six Majorana states are always coupled. The full braiding protocol requires coupling Majoranas from the same wire via a transmon <cit.> or flux qubit <cit.>, which is outside the scope of this work. It also requires to move one Majorana state between three different wires by coupling different pairs of Majoranas via a trijunction. By combining these two procedures, it is possible to perform a braiding experiment where two Majorana states exchange positions. Detailed modelling of Majorana nanowires is outside the scope of our study. Therefore, we consider an idealised model of topological nanowires. We simulate clean nanowires of size W_NW=70 and L_NW=1.5 such that the Majoranas are well-separated. The nanowires are parallel to a homogeneous magnetic field, which drives them into the topological phase simultaneously. We connect the nanowires to the trijunction formed in the central normal region as shown in Fig. <ref> (b). We use one layer of depletion gates shown in Fig. <ref> (b) to form the trijunction and a second layer for a global accumulation gate to control the electron density. We parameterize the shape of the device using channel length L, channel width W and the angle θ between the x-axis and the arms. We use the materials from Ref. <cit.> for the substrate, dielectric and gate electrodes. We simulate the three dimensional device configuration shown in Fig. <ref> (b-c). We use the electrostatic solver of Ref. <cit.> to numerically solve the Poisson's equation ∇·[ ϵ_r(𝐫) ∇ U(𝐫) ] = -ρ(𝐫)/ϵ_0, where ρ_r is the charge density, ϵ_0 is the vacuum permittivity and ϵ_r is the relative permittivity. Because the 2DEG has a low electron density, we neglect the potential induced by charges in the 2DEG. 
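For intuition, a stripped-down finite-difference version of this Poisson problem can be written as follows. It assumes a uniform relative permittivity and Dirichlet boundary conditions encoding the gate voltages; the actual device uses the full three-dimensional solver cited above, so this is only a schematic illustration with names of our own choosing.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as sla

def solve_poisson_2d(rho, eps_r, h, fixed):
    # Solve eps_0 * eps_r * laplacian(U) = -rho on a uniform grid with spacing h.
    # `fixed` holds the prescribed potential (e.g. a gate voltage) where it is
    # not NaN; the outer frame of `fixed` must be prescribed.
    eps0 = 8.854e-12
    ny, nx = rho.shape
    idx = np.arange(ny * nx).reshape(ny, nx)
    A = sp.lil_matrix((ny * nx, ny * nx))
    b = np.zeros(ny * nx)
    for j in range(ny):
        for i in range(nx):
            k = idx[j, i]
            if not np.isnan(fixed[j, i]):      # Dirichlet node: pin the potential
                A[k, k] = 1.0
                b[k] = fixed[j, i]
            else:                              # interior node: five-point stencil
                A[k, k] = -4.0
                for jj, ii in ((j - 1, i), (j + 1, i), (j, i - 1), (j, i + 1)):
                    A[k, idx[jj, ii]] = 1.0
                b[k] = -rho[j, i] * h**2 / (eps0 * eps_r)
    return sla.spsolve(A.tocsr(), b).reshape(ny, nx)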
We express U as a linear combination of the potentials induced by each gate electrode, U(𝐫) = ∑_i V_i U_i(𝐫) + U_0(𝐫), where U_0(𝐫) is the potential induced by dielectric impurities when 𝐕 = 0, and V_i are the elements of 𝐕=(V_L, V_R, V_T, V_global). In order to reduce the number of control parameters, we apply the same voltages to the depletion gates closest to a channel, shown in Fig. <ref> (b). We use the 2D Hamiltonian H = (-ħ^2/(2m^*)(∂_x^2+∂_y^2) - U(x, y)) σ_0 τ_z + α(∂_x σ_y - ∂_y σ_x) τ_z + E_z σ_y τ_0 + Δ(x, y) σ_0 τ_x, where σ_i and τ_i are the Pauli matrices in spin and particle-hole space, α is the spin-orbit coupling strength, E_z is the Zeeman field induced by the homogeneous magnetic field, and m^* is the effective mass in the semiconductor. Using the Kwant software package <cit.>, we discretize Eq. (<ref>) on a 2D tight-binding square lattice with lattice constant a=10, as for typical devices <cit.>. The electrostatic potential in the 2DEG, U(x, y, z=0) = U(x,y), is defined relative to the Fermi level in the nanowires, which is set to the bottom of the lowest transverse band μ_0. The superconducting pairing is absent in the normal region, and in the nanowires it is Δ(x,y)=Δ_0e^iϕ_j, where Δ_0 is the magnitude of the gap and ϕ_j is the phase in the j-th nanowire. We tune the Hamiltonian to be in the topological phase for the lowest subband, i.e. E_z > √(μ_0^2 + Δ^2). The induced gap in the nanowires is Δ_t. § DEVICE TUNING We numerically compute the six lowest-energy modes |ϕ_i ⟩ of the depleted trijunction, which are linear combinations of decoupled Majorana states |γ_i⟩. In order to obtain a basis of individual Majorana states, we use |γ_i⟩ = Ŵ | ϕ_i⟩, where Ŵ is the matrix that simultaneously approximately diagonalizes the projected position operators 𝐏̂_x = ⟨ϕ_i|𝐗̂|ϕ_j⟩ and 𝐏̂_y = ⟨ϕ_i|𝐘̂|ϕ_j⟩. The Majoranas in the maximally localized basis are shown in Fig. <ref> (a). For an arbitrary voltage configuration, the three Majorana states close to the junction interact, while the three far Majoranas remain decoupled. Our goal is to design a device that separately couples multiple pairs of Majorana states by tuning the gate voltages. We use the overlap between the coupled and decoupled Majoranas, S_ij =⟨ψ_i | γ_j ⟩, to heuristically determine the coupling between Majoranas originating from different arms. We apply a singular value decomposition, S = U D V^†, where U and V^† are unitary and D is positive diagonal. The approximate transformation is the unitary part of the SVD, i.e. S' = U V^†. This transformation approximates the coupled Majorana wavefunction in Fig. <ref> (b) as a linear combination of the decoupled Majorana wavefunctions shown in Fig. <ref> (c). The low-energy effective Hamiltonian is H_eff = S' diag(E_0, E_1, E_2)S'^† = i ∑_i≠ jΓ_ij |i ⟩⟨ j |, where Γ_ij is the coupling between Majoranas γ_i and γ_j, and E_k are the three lowest eigenvalues of the exact Hamiltonian. When coupling a single pair of Majorana states, the effective coupling always corresponds to the first non-zero eigenvalue, i.e. Γ_ij = E_2; however, when there are multiple pairs of coupled Majoranas, the interpretation of the effective couplings is ambiguous. In order to find the operational regime of the device, we use an optimisation approach to find the optimal couplings as a function of gate voltages and phase differences. For the coupling of the i-th and j-th Majorana states, we define the desired and undesired couplings as δ_+ = Γ_ij, δ_- = Γ_ik + Γ_jk, where k is the remaining Majorana state. 
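The construction of this effective Hamiltonian condenses into a few lines of numpy. In the sketch below, psi and gamma hold the three lowest coupled eigenstates and the maximally localized decoupled Majorana basis as columns, and energies holds (E_0, E_1, E_2); the function names and the extraction of Γ from the imaginary part are our own shorthand rather than code from the paper's repository.

import numpy as np

def effective_couplings(psi, gamma, energies):
    # Overlap between coupled and decoupled Majoranas, S_ij = <psi_i | gamma_j>.
    S = psi.conj().T @ gamma
    U, _, Vh = np.linalg.svd(S)
    S_unitary = U @ Vh                       # unitary part of the SVD, S' = U V^dagger
    H_eff = S_unitary @ np.diag(energies) @ S_unitary.conj().T
    # In a (nearly) Majorana basis H_eff = i * Gamma with Gamma real and
    # antisymmetric, so the couplings sit in the imaginary part.
    return H_eff.imag

def pair_couplings(Gamma, i, j):
    # Desired and undesired couplings for the (i, j) pair; k is the remaining
    # Majorana.  Magnitudes are used because the sign of a Majorana coupling
    # depends on an arbitrary gauge choice.
    k = 3 - i - j
    delta_plus = abs(Gamma[i, j])
    delta_minus = abs(Gamma[i, k]) + abs(Gamma[j, k])
    return delta_plus, delta_minus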
The goal of our device is to maximize the energy of the coupled Majorana pair while keeping the couplings to the remaining Majorana state exponentially small. Therefore, we use the loss function C_pair = - δ_+ + log(δ_-^2 + ϵ). Here, δ_± is in units of Δ_t. We use ϵ=10^-3 to regularize the divergence of the logarithm. To remove local minima of the loss function and improve convergence, we penalize the regions of gate-voltage space where either the regions under the depletion gates are not depleted or the channels are fully depleted. We achieve this by adding the following soft-threshold term to the loss function: S(U(𝐫)) = A ( ∑_{𝐫_acc} U(𝐫_acc) Θ[U(𝐫_acc)] + ∑_{𝐫_dep} (U(𝐫_dep) - u_0) Θ[-U(𝐫_dep)] ). Here Θ(x) is the Heaviside step function. We choose {𝐫_acc} and {𝐫_dep} to lie in the accumulated channel and in the depleted regions, respectively. We choose the scale factor A=10^2 and use a threshold u_0 ∼1-2. The total loss function is L = C_pair + S. Minimizing this loss function for each Majorana pair separately yields the voltage configuration where the two corresponding Majorana states are optimally coupled. The results for the right-top pair are shown in Fig. <ref>. At the optimal point, the depletion gates form a channel between the right and top Majorana states while disconnecting the left Majorana, as shown in Fig. <ref> (c-d). Once the channel is formed by the depletion gates, the coupling is controlled by tuning the accumulation gate voltage V_global, as shown in Fig. <ref> (a). The phase difference between the top and right superconducting arms modulates the coupling Γ_LR, as shown in Fig. <ref> (d). While the optimal point reaches the maximum coupling for a given pair, device operation depends on the stability of the coupling with respect to variations in gate voltages. In order to find the operational range of the device, we perform a two-dimensional scan of the gate voltages of the depletion arms corresponding to the coupled Majoranas, while keeping the extra arm depleted and the global gate at the optimal point. Figure <ref> (e-f) shows the operational regime of the device around the optimal point based on the desired coupling magnitude and the ratio between the desired and undesired couplings, respectively. In the operational regime, the desired coupling is comparable to the topological gap and exponentially larger than the undesired coupling. The area that satisfies both criteria corresponds to the operational range. § BRAIDING OF MAJORANA STATES We consider the braiding protocol from Ref. <cit.> that exchanges Majoranas γ_L and γ_R, as shown in Fig. <ref> (a). The ingredients that we require for the braiding protocol are * coupling Majoranas within the same nanowire via charging energy <cit.>, * coupling pairs of Majoranas via the trijunction as described in Sec. <ref>, * coupling all three Majoranas in the trijunction, * a path that connects the configurations with two and three coupled Majorana states without closing the topological gap. In order to couple all three Majorana states, at least two pairs of Majoranas must be coupled. Because the device without disorder is symmetric around the x axis, we couple the left-top and right-top pairs simultaneously and constrain the voltages to be symmetric, i.e. V_L = V_R. Furthermore, since finding the optimal path in voltage space is hard, we choose the path that linearly interpolates between the points where two and all three Majorana states are coupled. Depending on the choice of the triple coupled point, the gap along this path may close. 
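The gap check along the linearly interpolated path can be sketched as follows. The helper lowest_energies stands in for the full electrostatics-plus-Kwant diagonalization at a given voltage configuration and is not spelled out here; interpreting the gap as the splitting between the coupled pair and the remaining near-zero modes is our reading of the requirement above.

import numpy as np

def gap_along_path(V_pair, V_triple, lowest_energies, n_steps=21):
    # Linearly interpolate the gate voltages between the pair-coupled optimum
    # V_pair and the triple-coupled point V_triple, and record the smallest
    # separation between the coupled Majoranas and the near-zero modes.
    # lowest_energies(V) should return the three lowest non-negative
    # eigenvalues (E0, E1, E2) of the device at gate voltages V.
    V_pair, V_triple = np.asarray(V_pair, float), np.asarray(V_triple, float)
    gaps = []
    for s in np.linspace(0.0, 1.0, n_steps):
        e0, e1, e2 = np.sort(lowest_energies((1 - s) * V_pair + s * V_triple))
        gaps.append(e2 - e1)
    return min(gaps)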
In the trijunction that we have studied, we find that the following loss function yields a triple coupled point connected by a gapped path to the pair-coupled points: C_triple = - ( |Γ_LT| + |Γ_RT|) + |Γ_LR|. The gap reaches a minimum ≲ 0.1 ×Δ_t along the braiding path. As before, we add a soft-threshold term to ensure that all channels are formed. We obtain the optimal coupling by minimizing the loss function as in Eq. (<ref>). The resulting spectrum of the trijunction is shown in Fig. <ref> (c). The wavefunctions at the optimal points are shown in Fig. <ref> (b). § GEOMETRY DEPENDENCE In order to evaluate the adiabaticity of the braiding protocol, we compute the desired coupling, δ_+, and the ratio between desired and undesired couplings, δ_+/δ_-, at the optimal point. Because the topological gap is small, we require the Majorana couplings to be comparable to it: δ_+ ≳ 0.3 ×Δ_t. As a minimum requirement for adiabaticity, the desired coupling should be larger than the undesired coupling: δ_+ ≳ 50 ×δ_-. In order to quantify the tunability of a device, we define its operational range as the area 𝒜 in voltage space where δ_+ ≳ 0.85 ×δ_max, with δ_max the maximum coupling in the scan, as shown in Fig. <ref> (e-f). In order to determine which geometries are suitable for braiding, we compute the quality metrics δ_+/Δ_t, δ_+/δ_-, and 𝒜 for different L, W, and θ. We evaluate the quality metrics for the worst-performing pair. We summarize the results in Fig. <ref> and indicate the geometries that meet the thresholds in Eqs. (<ref>) and (<ref>). We find that small trijunctions have a systematically larger operational voltage range as well as larger couplings. For larger trijunctions, it is possible to find a geometry suitable for braiding, but it requires fine-tuning W. The angle θ does not affect the qualitative behaviour of the trijunction. § ELECTROSTATIC DISORDER We compare the susceptibility of larger and smaller geometries to electrostatic disorder. To that end, we select two geometries and analyze their performance in the presence of disorder. We simulate disorder in the dielectric between the depletion gate layer and the 2DEG by randomly positioned positive charges. Figure <ref> shows that devices with an impurity concentration of ∼10^10 cm^-2 are not degraded by disorder. On the other hand, a small concentration of electrostatic disorder, ∼10^11 cm^-2, which is achieved in state-of-the-art Majorana devices <cit.>, significantly reduces the performance of a trijunction. While smaller geometries perform better, we expect that they are more susceptible to fabrication imperfections, posing a tradeoff between the two challenges. § SUMMARY In this work, we developed a numerical procedure to design a braiding protocol using a trijunction device, one of the ingredients of a topologically protected quantum computer, by using three-dimensional electrostatic and quantum simulations. We used an optimization approach to find the voltage configurations where all different pairs of Majorana states are strongly coupled. Consequently, we discovered that a range of trijunction device geometries can be used as switches that selectively couple and decouple different Majorana states. We confirmed that trijunctions are suitable for braiding by simulating the braiding protocol from Ref. <cit.> without closing the gap between the ground state and the coupled Majorana states. The operation of the device is limited by the gap size, which decreases to ≲ 0.1 ×Δ_t along the braiding protocol. 
We observe that state-of-the-art levels of disorder render this trijunction design inoperable because the narrow channels cannot be formed. Therefore, we expect that cleaner materials <cit.> or a different design would be required to resolve this problem. The methods developed in our study are applicable to other realisations of Majorana states, such as the minimal Kitaev chain <cit.>. Similarly, the optimization method that we developed is transferable to other semiconducting devices, such as spin qubits <cit.> or planar Josephson junctions <cit.>. The operational regime of these devices usually lies in a region of a multidimensional space that maximises certain quantities, such as the wavefunction overlap <cit.> or the energy gap <cit.>. Our work demonstrates that combining electrostatic simulations, effective Hamiltonians, and optimization routines is a powerful tool for designing and operating semiconductor devices. § ACKNOWLEDGEMENTS We thank C. Liu, V. Fatemi, H. Spring, J. Zijderveld, K. Vilkelis, C. Prosko, C. Moehle and S. Goswami for useful discussions. We thank I. Araya Day for help with the algorithms for identifying the effective Hamiltonian. Author contributions A.R.A. defined the project goal and supervised the project. J.D.T.L. designed the trijunction device. J.D.T.L. and S.R.K. set up the simulations and obtained the results. J.D.T.L. wrote the manuscript with input from S.R.K. and A.R.A. Data availability All code and data used in this work are available at Ref. <cit.>. Funding information This work was supported by the Netherlands Organization for Scientific Research (NWO/OCW) as part of the Frontiers of Nanoscience program, a Starting Grant 638760, a subsidy for top consortia for knowledge and innovation (TKI toeslag), and an NWO VIDI Grant (016.Vidi.189.180).